http://arxiv.org/abs/2307.07285v1 | published 2023-07-14 | Infinite derivative gravity resolves nonscalar curvature singularities
Authors: Ivan Kolář, Tomáš Málek
Primary category: gr-qc | Categories: gr-qc, hep-th
[email protected]
Institute of Theoretical Physics, Faculty of Mathematics and Physics, Charles University,
V Holešovičkách 2, Prague 180 00, Czech Republic
Van Swinderen Institute, University of Groningen, 9747 AG, Groningen, Netherlands
[email protected]
Institute of Mathematics of the Czech Academy of Sciences, Žitná 25, 115 67 Prague 1, Czech Republic
We explicitly demonstrate that the nonlocal ghost-free ultraviolet modification of general relativity (GR) known as the infinite derivative gravity (IDG) resolves nonscalar curvature singularities in exact solutions of the full theory. We analyze exact pp-wave solutions of GR and IDG describing gravitational waves generated by null radiation. Curvature of GR and IDG solutions with the same energy-momentum tensor is compared in parallel-propagated frames along timelike and null geodesics at finite values of the affine parameter. While the GR pp-wave solution contains a physically problematic nonscalar curvature singularity at the location of the source, the curvature of its IDG counterpart is finite.
Infinite derivative gravity resolves nonscalar curvature singularities
Ivan Kolář and Tomáš Málek
August 12, 2023
§ INTRODUCTION
Terms with higher derivatives appear in many effective descriptions of quantum gravity. They mitigate issues related to ultraviolet (UV) incompleteness of general relativity (GR) such as the existence of curvature singularities and quantum non-renormalizability <cit.>. Although higher-derivative theories of finite order already regularize the linearized solutions (and their linearized curvature) to a certain order <cit.>, they suffer from the Ostrogradsky instability and ghosts. Infinite derivative gravity (IDG) <cit.> is a promising UV modification of GR that resolves these issues by introducing nonlocal analytic operators ℱ(□) in the action. At the linearized level, this theory is ghost-free, singularity-free, renormalizable, and asymptotes to linearized GR in the infrared (IR). Naturally, the major open question is whether the nonlocality of IDG can resolve curvature singularities at the level of exact solutions of the full theory.
Due to the immense complexity of the field equations <cit.>, the known exact solutions of IDG are very scarce. Putting aside the universal spacetimes <cit.> that solve practically every gravitational theory in vacuum (up to an appropriate cosmological constant), the known exact solutions are either i) FLRW spacetimes satisfying a recursive ansatz such as □R=α R+β <cit.> or ii) pp-wave and Kundt geometries with trace-free (TF) Ricci and Weyl of types III/N <cit.>. Since the above recursive ansatz effectively `localizes' the field equations, the former are actually quite insensitive to nonlocality.[They are affected by ℱ(□) only through a finite number of values, ℱ(0), ℱ(α), and ℱ'(α), meaning that one can always find an equivalent local theory (with polynomial ℱ) that admits the same spacetimes with the same matter content.] On the other hand, the latter still retain strong dependence on the nonlocal operators ℱ(□), so they provide better testbeds of the singularity resolution due to nonlocality.
In this Letter, we study exact pp-wave solutions of GR and IDG generated by the same energy-momentum tensor — null radiation with a point-like distribution in the transverse 2-space and an arbitrary wave profile. In the context of IDG, these Aichelburg–Sexl-type solutions were first studied at the linearized level in <cit.> and later promoted to exact solutions of the full theory in <cit.>. Note that we actually consider an improved version of IDG.[In particular, our theory (<ref>) contains the Weyl term instead of the Ricci term. We also demand suppression of the propagator at both ends of the spectrum of □ in (<ref>), which should improve the behavior of the theory not only for short spacelike but also timelike distances.] Nevertheless, our conclusions remain valid also for the solutions from <cit.>.
The goal of this Letter is to analyze the curvature of these pp-wave solutions and explicitly show the presence of curvature singularities in the GR solution and their absence in the IDG solution. Recall that curvature singularities can be either scalar or nonscalar. Both of them are equally unwanted and physically problematic since they are associated with unboundedly large tidal forces experienced by an observer in a finite time. The pp-waves of type N are free from scalar curvature singularities (since all scalar invariants vanish) but they still may contain a nonscalar curvature singularity. This is defined as a point at the boundary of the manifold with diverging components of the Riemann tensor when measured in a parallel-propagated (PP) frame along a timelike or null geodesic that reaches the point at a finite value of the affine parameter. In what follows, we explicitly demonstrate that the curvature in such PP frames blows up for the GR solution but remains finite for the IDG solution.
§ INFINITE DERIVATIVE GRAVITY
Consider a general class of gravitational theories described by the Lagrangian
L = (1/2)[ϰ^-1R - (1/3) Rℱ_0(□)R + C^abcdℱ_2(□) C_abcd] + L_m ,
where ℱ_i, i=0,2, are analytic functions of the wave operator □, R is the Ricci scalar and C is the Weyl tensor.[We use the bold font for tensors and their abstract indices. The regular font is used for scalars, e.g., coordinates and tensor components.] GR is recovered by setting ℱ_i(□)=0, while IDG corresponds to non-polynomial functions ℱ_i, which are typically chosen by demanding specific properties of the linearized theory.
To fix a particular physically interesting IDG, let us consider the metric perturbations γ, g=g̅+γ, around Minkowski spacetime g̅. The linearized field equations in the Landau gauge, ∇̅_aγ^a_b=1/4∇̅_bγ, read
-□ℰ_2(□)γ_ab^⊥ = 2ϰT_ab^⊥ ,
-□ℰ_0(□)γ = -(4/3)ϰ T ,
where the symbol ⊥ stands for the covariant transverse-traceless projection of the symmetric rank-2 tensor and the analytic operators ℰ_i(□) are defined by
ℰ_i(□) = 1+2ϰ□ℱ_i(□) .
Interesting choices of ℱ_i(□) can be characterized by the following conditions on ℰ_i(□):
ℰ_i(□) = e^{𝒜_i(□)} ,
lim_{□→±∞} 𝒜_i(□) = +∞ ,
𝒜_i(0) = 0 ,
where 𝒜_i are entire functions. The first condition guarantees the absence of ghosts and other extra degrees of freedom (because the exponential function is non-zero in ℂ); the second condition represents the desired suppression of the propagator at short spacetime distances (i.e., in the UV); and the last condition is necessary to recover GR solutions at long spacetime distances (i.e., in the IR).
The simplest choice satisfying (<ref>) is
ℰ_i(□) = e^{ℓ^4 □^2} .
It has been demonstrated on various examples that (<ref>) (and similar choices) regularizes singular GR solutions at the linearized level. The choice (<ref>) also fixes the non-local operators ℱ_i(□) of IDG in the full theory (<ref>),
ℱ_i(□) = (e^{ℓ^4 □^2}-1)/(2ϰ□) .
In what follows, we would like to demonstrate that the exact pp-wave solutions of such a theory are also regular, in contrast to their singular GR counterparts.
§ EXACT PP-WAVE SOLUTIONS
The field equations of (<ref>) reduce immensely for various subclasses of Kundt spacetimes <cit.>. One of the simplest cases is the pp-wave metric of type N,
g = -du ∨ (dr + H du) + dρ^2 + ρ^2 dφ^2
where H=H(u,ρ,φ) is the only unknown function. Choosing this metric ansatz and the pure-radiation energy-momentum tensor, T = E du^2, E=E(u,ρ,φ), the field equations of the full theory reduce to
△ℰ_2(△)H = ϰE ,
where △ is the Laplace operator on the transverse 2-space dρ^2 + ρ^2 dφ^2. The similarity of (<ref>) with the first equation of (<ref>) reflects the fact that the exact solutions of the full theory within this class of metrics are also solutions of the linearized theory.[Note that this is not true for the gyraton pp-wave metrics of more general type III <cit.> corresponding to the spinning null matter.] This feature is purely due to the simplicity of the metric ansatz (<ref>) — no approximation was assumed in the derivation of (<ref>).
Consider a source located at ρ=0 in the transverse 2-space with an arbitrary wave profile w(u), ϰE = w(u)δ(ρ). Due to the symmetry of the source, the field equation contains only derivatives with respect to ρ, so the metric is characterized by one single-variable function, h=h(ρ),
H=w(u) h(ρ) .
The equation (<ref>) can be solved for an arbitrary analytic choice of ℱ_i(□) using the Fourier transform in polar coordinates that takes the form of the Hankel transform,[This integral usually contains a diverging part that is constant in ρ and of no physical significance. An easy way to extract the convergent part is to replace J_0→∂_ρJ_0=-sJ_1, evaluate the s-integral, and calculate the primitive function in ρ.]
h = ∫_0^∞ ds J_0(sρ)/(-s ℰ_2(-s^2)) ,
where J_0 is the Bessel function of the first kind.
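The integral above can also be evaluated numerically for any admissible choice of ℰ_2. The following minimal sketch (our own illustration, not code from the paper) assumes the choice ℰ_2(□)=e^{ℓ^4□^2} from the previous section, uses the replacement J_0 → ∂_ρJ_0 = -sJ_1 from the footnote to discard the divergent constant, and sets ℓ=1 for illustration.

```python
# Minimal numerical sketch (not from the paper): evaluate h'(rho) from the Hankel
# representation above, using the footnote's replacement J_0 -> d/drho J_0 = -s J_1
# to discard the rho-independent divergent constant.  This gives
#     h'(rho) = int_0^inf ds  J_1(s*rho) / E_2(-s^2),
# which in the GR limit (E_2 = 1) equals 1/rho, i.e. h_GR = log(rho).
# Here we assume E_2(Box) = exp(l^4 Box^2), so 1/E_2(-s^2) = exp(-l^4 s^4), and set l = 1.
import numpy as np
from scipy import integrate, special

ell = 1.0  # nonlocality scale (illustrative value)

def dh_drho(rho):
    integrand = lambda s: special.j1(s * rho) * np.exp(-ell**4 * s**4)
    val, _ = integrate.quad(integrand, 0.0, np.inf, limit=200)
    return val

for rho in [1e-3, 0.1, 0.5, 1.0, 3.0]:
    print(f"rho = {rho:6.3f}   h_IDG'(rho) = {dh_drho(rho): .6f}   h_GR'(rho) = 1/rho = {1.0/rho: .3f}")
# h_IDG'(rho) stays finite (and -> 0) as rho -> 0, so h_IDG is regular at the source,
# while h_GR' = 1/rho diverges there; h itself follows by integrating in rho (the
# additive constant C has no physical significance).
```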
The solutions for GR and IDG from the previous section read
h_GR = log ρ + C ,
h_IDG = (√π ρ^2/(2^4 ℓ^2)) _1F_3(1/2; 1, 3/2, 3/2; ρ^4/(2^8 ℓ^4)) - (ρ^4/(2^8 ℓ^4)) _2F_4(1, 1; 3/2, 3/2, 2, 2; ρ^4/(2^8 ℓ^4)) + C ,
where C is a constant of no physical significance <cit.>. The metric components of the GR pp-wave solution diverge at the location of source (ρ=0) because h_GR∝logρ. On the other hand, the metric components of the IDG solution are smooth at ρ=0 because h_IDG is analytic as a function of ρ^2, which follows from analyticity of generalized hypergeometric functions _pF_q and h_IDG^(2n+1)(0)=0. As we will see, these properties of h are intimately related to the presence/absence of curvature singularities.
§ GEODESICS AND PP FRAMES
Since all scalar curvature invariants vanish, we have to analyze the presence/absence of nonscalar curvature singularities. Hence, we have to investigate the geodesic motion of massive and massless test particles near ρ=0. For an arbitrary geodesic x(λ)=(u(λ),r(λ),ρ(λ),φ(λ)), the corresponding tangent vector v is given by
v = Dx/dλ = u'∂_u + r'∂_r + ρ'∂_ρ + φ'∂_φ ,
where λ is the affine parameter and '=d/dλ.
The pp-wave spacetimes possess symmetries which generate conserved quantities and simplify the integration of the geodesic equation. If we also assume that the wave-profile function is constant, w(u) = const., the metric (<ref>) is independent of u, r and φ, which gives rise to three Killing vectors ∂_u, ∂_r, ∂_φ and consequently three constants of motion c_(u), c_(r), c_(φ). In addition to that, v fulfills the normalization condition with the normalization constant ϵ, hence
v^♭·∂_u = -r' - 2 w h u' = c_(u) ,
v^♭·∂_r = -u' = c_(r) ,
v^♭·∂_φ = ρ^2 φ' = c_(φ) ,
v^♭·v = -2 r' u' - 2 w h u'^2 + ρ'^2 + ρ^2 φ'^2 = ϵ ,
where ϵ=-1 for timelike geodesics and ϵ=0 for null geodesics.
Let us now determine the PP frames along all causal geodesics (ϵ = -1,0) reaching ρ=0. We start with the timelike geodesics (ϵ = -1) and identify the first frame vector with the timelike tangent vector v. The remaining frame vectors are chosen to satisfy the orthonormality condition, i.e.,
e_1 = v ,   e_3 = (ρ'/u')∂_r + ∂_ρ ,
e_2 = v - (1/u')∂_r ,   e_4 = (ρφ'/u')∂_r + (1/ρ)∂_φ .
The only non-vanishing components of the TF Ricci tensor S and Weyl tensor C of the metric (<ref>) in the orthonormal frame (<ref>) are given by
S_11 = S_22 = S_12 = w Ξ_S u'^2 ,
C_1313 = C_1323 = C_2323 = - C_1414
= - C_1424 = - C_2424 = w Ξ_C u'^2 ,
where
Ξ_S = h'' + (1/ρ) h' ,   Ξ_C = h'' - (1/ρ) h' .
Note that the vectors e_3 and e_4 are PP along the geodesic with the tangent vector e_1 only if φ' = 0; otherwise they have to be properly rotated. However, as we show below, φ is constant for all geodesics passing through the source in the IDG solution.
In the case of null geodesics (ϵ = 0) with the null tangent vector v, we construct the corresponding complex null frame
l = v ,   n = (1/u')∂_r ,   m = (1/√2)(e_3 - i e_4) ,
where the tangent vector v is identified with the null frame vector l and e_3, e_4 are defined as in (<ref>).
Similarly as for the orthonormal frame (<ref>), the null frame (<ref>) is PP along l only if φ'=0.
In the standard notation of the Newman–Penrose formalism, the only non-vanishing components of the TF Ricci tensor S and Weyl tensor C of the metric (<ref>) then read
Φ_22 = (w/2)Ξ_S u'^2 ,   Ψ_4 = (w/2)Ξ_C u'^2 .
§ GR PP-WAVE SOLUTION EXHIBITS CURVATURE SINGULARITY
It is obvious that Ξ_S=0 for ρ>0, while Ξ_C diverges at the location of the source, ρ=0, for the GR solution h_GR. Therefore the non-vanishing components of the Weyl tensor C in PP frames along any causal geodesic reaching ρ=0 also necessarily diverge. To prove the presence of a nonscalar curvature singularity in the GR pp-wave solution, it only remains to show that the boundary points at ρ=0 can be reached by at least one causal geodesic at a finite value of the affine parameter.
An explicit timelike geodesic with vanishing angular momentum per unit mass in the transverse 2-space, c_(φ) = 0, can be obtained analytically by integration of (<ref>). Choosing the remaining constants of motion c_(u) = -1, c_(r) = -1, and the initial conditions u(0)=0, r(0)=0, ρ(0) = e^1/4, one gets
u = λ ,   r = 2(λ - ρ erf^-1(λ/λ_0)) ,
ρ = exp[1/4 - (erf^-1(λ/λ_0))^2 ] ,   φ = const. ,
where we denoted λ_0 = e^{1/4}√π/2. This geodesic ends up at the location of the source, ρ=0 [in particular at (u,r,ρ,φ)=(λ_0,2λ_0,0,const.)], at the finite value of proper time λ = λ_0. Graphs of the functions (<ref>) are depicted in Fig. <ref>.
§ IDG PP-WAVE SOLUTION IS REGULAR
In the previous section, we studied only one particular geodesic as it is sufficient to show the presence of nonscalar curvature singularities. In order to verify the absence of such singularities in the IDG pp-wave solution, we have to discuss all possible causal geodesics passing through the source, ρ=0.
First, we notice that φ(λ) is constant along such geodesics in the IDG solution. If one assumes φ' ≠ 0, it follows from (<ref>) that c_(φ)φ' = (c_(φ)/ρ)^2 → +∞ on the source, ρ = 0. Employing the constants of motion, u', r', and ρ^2φ'^2 can be eliminated in the normalization condition
2w h_IDG c_(r)^2 - 2 c_(u) c_(r) + ρ'^2 + c_(φ)φ' = ϵ .
Since h_IDG is bounded and ρ'^2 is positive, there is no way to compensate c_(φ)φ' → +∞ on the source, and therefore necessarily φ' = 0, implying c_(φ)=0. In turn, the vanishing of the constant of motion c_(φ) then ensures that φ' = 0 also for ρ ≠ 0, and we can thus conclude that φ is constant along any geodesic passing through the source.
As an immediate consequence of this statement, the orthonormal frame (<ref>) and the null frame (<ref>) are PP along the corresponding geodesics. It turns out that the components of the curvature tensors (S and C) in PP frames along any timelike and null geodesics passing through the source are given exactly by (<ref>) and (<ref>), respectively. These components are finite everywhere since Ξ_S, Ξ_C are bounded and u'=-c_(r) for the IDG pp-wave solution h_IDG.
§ CONCLUSIONS
We have shown that IDG can resolve nonscalar curvature singularities in exact pp-wave solutions. Thus, the nonlocality of IDG can prevent the arbitrarily large tidal forces that would be experienced by an observer in finite time in these spacetimes. This is the first explicit exact demonstration of singularity resolution due to nonlocality in which the curvature of exact GR and IDG solutions with the same matter content is compared side by side. These results were achieved by analyzing components of the curvature tensors in PP frames along timelike and null geodesics reaching the source at a finite value of the affine parameter.
We believe that our study will stimulate further search for exact solutions of IDG and examination of the presence/absence of (scalar or nonscalar) curvature singularities due to nonlocality. For example, a similar calculation can be repeated for gyraton solutions generated by spinning null matter <cit.> or Kundt geometries with non-zero cosmological constant <cit.>. The former would be particularly interesting because the exact IDG gyratons <cit.> differ from the linearized IDG gyratons <cit.>. A careful analysis of spacetime singularities in nonlocal gravitational theories (and the associated matter distribution) should also reduce the lack of clarity on this topic in the current literature <cit.>.
§ ACKNOWLEDGEMENTS
I.K. was supported by Netherlands Organization for Scientific Research (NWO) grant no. 680-91-119 and Primus grant PRIMUS/23/SCI/005 from Charles University. T.M. acknowledges the support of the Czech Academy of Sciences (RVO 67985840) and the Czech Science Foundation GAČR grant no. GA19-09659S.
[Stelle(1977), Stelle:1976gc] K. S. Stelle, Phys. Rev. D 16, 953 (1977). https://doi.org/10.1103/PhysRevD.16.953
[Stelle(1978), Stelle:1977ry] K. S. Stelle, Gen. Rel. Grav. 9, 353 (1978). https://doi.org/10.1007/BF00760427
[Burzillà et al.(2021), Burzilla:2020utr] N. Burzillà, B. L. Giacchini, T. d. P. Netto, and L. Modesto, Eur. Phys. J. C 81, 462 (2021), arXiv:2012.11829 [gr-qc]. https://doi.org/10.1140/epjc/s10052-021-09238-x
[Krasnikov(1987), Krasnikov:1987yj] N. V. Krasnikov, Theor. Math. Phys. 73, 1184 (1987). https://doi.org/10.1007/BF01017588
[Kuzmin(1989), kuzmin1989finite] Y. V. Kuzmin, Soviet Journal of Nuclear Physics-USSR 50, 1011 (1989).
[Tomboulis(1997), Tomboulis:1997gg] E. T. Tomboulis, Superrenormalizable gauge and gravitational theories (1997), arXiv:hep-th/9702146.
[Biswas et al.(2006), Biswas:2005qr] T. Biswas, A. Mazumdar, and W. Siegel, JCAP 2006 (03), 009, arXiv:hep-th/0508194. https://doi.org/10.1088/1475-7516/2006/03/009
[Modesto(2012), Modesto:2011kw] L. Modesto, Phys. Rev. D 86, 044005 (2012), arXiv:1107.2403 [hep-th]. https://doi.org/10.1103/PhysRevD.86.044005
[Biswas et al.(2012), Biswas:2011ar] T. Biswas, E. Gerwick, T. Koivisto, and A. Mazumdar, Phys. Rev. Lett. 108, 031101 (2012), arXiv:1110.5249 [gr-qc]. https://doi.org/10.1103/PhysRevLett.108.031101
[Biswas et al.(2014), Biswas:2013cha] T. Biswas, A. Conroy, A. S. Koshelev, and A. Mazumdar, Class. Quant. Grav. 31, 015022 (2014), arXiv:1308.2319 [hep-th]. https://doi.org/10.1088/0264-9381/31/1/015022
[Hervik et al.(2014), Hervik:2013cla] S. Hervik, V. Pravda, and A. Pravdova, Class. Quant. Grav. 31, 215005 (2014), arXiv:1311.0234 [gr-qc]. https://doi.org/10.1088/0264-9381/31/21/215005
[Biswas et al.(2010), Biswas:2010zk] T. Biswas, T. Koivisto, and A. Mazumdar, JCAP 2010 (11), 008, arXiv:1005.0590 [hep-th]. https://doi.org/10.1088/1475-7516/2010/11/008
[Koshelev et al.(2018), Koshelev:2017tvv] A. S. Koshelev, K. Sravan Kumar, and A. A. Starobinsky, JHEP 2018 (3), 071, arXiv:1711.08864 [hep-th]. https://doi.org/10.1007/JHEP03(2018)071
[Kilicarslan(2019), Kilicarslan:2019njc] E. Kilicarslan, Phys. Rev. D 99, 124048 (2019), arXiv:1903.04283 [gr-qc]. https://doi.org/10.1103/PhysRevD.99.124048
[Dengiz et al.(2020), Dengiz:2020xbu] S. Dengiz, E. Kilicarslan, I. Kolář, and A. Mazumdar, Phys. Rev. D 102, 044016 (2020), arXiv:2006.07650 [gr-qc]. https://doi.org/10.1103/PhysRevD.102.044016
[Kolář et al.(2021), Kolar:2021rfl] I. Kolář, T. Málek, and A. Mazumdar, Phys. Rev. D 103, 124067 (2021), arXiv:2103.08555 [gr-qc]. https://doi.org/10.1103/PhysRevD.103.124067
[Kolář et al.(2022), Kolar:2021uiu] I. Kolář, T. Málek, S. Dengiz, and E. Kilicarslan, Phys. Rev. D 105, 044018 (2022), arXiv:2107.11884 [gr-qc]. https://doi.org/10.1103/PhysRevD.105.044018
[Frolov and Zelnikov(2016), Frolov:2015usa] V. P. Frolov and A. Zelnikov, Phys. Rev. D 93, 064048 (2016), arXiv:1509.03336 [hep-th]. https://doi.org/10.1103/PhysRevD.93.064048
[Griffiths and Podolský(2009), griffithspodolsky2009] J. B. Griffiths and J. Podolský, Exact Space-Times in Einstein's General Relativity, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 2009). https://doi.org/10.1017/CBO9780511635397
[Boos et al.(2020), Boos:2020ccj] J. Boos, J. Pinedo Soto, and V. P. Frolov, Phys. Rev. D 101, 124065 (2020), arXiv:2004.07420 [gr-qc]. https://doi.org/10.1103/PhysRevD.101.124065
[Buoninfante et al.(2018), Buoninfante:2018rlq] L. Buoninfante, A. S. Koshelev, G. Lambiase, J. Marto, and A. Mazumdar, JCAP 2018 (06), 014, arXiv:1804.08195 [gr-qc]. https://doi.org/10.1088/1475-7516/2018/06/014
http://arxiv.org/abs/2307.04028v1 | published 2023-07-08 | Measuring the Success of Diffusion Models at Imitating Human Artists
Authors: Stephen Casper, Zifan Guo, Shreya Mogulothu, Zachary Marinov, Chinmay Deshpande, Rui-Jie Yew, Zheng Dai, Dylan Hadfield-Menell
Primary category: cs.CV | Categories: cs.CV, cs.AI, cs.LG
Measuring the Success of Diffusion Models at Imitating Human Artists

Stephen Casper* (MIT), Zifan Guo* (MIT), Shreya Mogulothu (MIT), Zachary Marinov (MIT), Chinmay Deshpande (Harvard University), Rui-Jie Yew (MIT, Brown University), Zheng Dai (MIT), Dylan Hadfield-Menell (MIT)
*Equal contribution. Correspondence: Stephen Casper, [email protected]
§ OVERVIEW
Modern diffusion models have set the state-of-the-art in AI image generation.
Their success is due, in part, to training on Internet-scale data which often includes copyrighted work. This prompts questions about the extent to which these models learn from, imitate, or copy the work of human artists.
This work suggests that questions involving copyright liability should factor in a model's capacity to imitate an artist.
Tying copyright liability to the capabilities of the model may be useful given the evolving ecosystem of generative models.
Specifically, much of the legal analysis of copyright and generative systems focuses on the use of protected data for training <cit.>.
However, generative systems are often the result of multiple training processes. As a result, the connections between data, training, and the system are often obscured.
In our approach, we consider simple image classification techniques to measure a model's ability to imitate specific artists. Specifically, we use Contrastive Language-Image Pretrained (CLIP) <cit.> encoders to classify images in a zero-shot fashion.
Our process first prompts a model to imitate a specific artist. Then, we test whether CLIP can be used to reclassify the artist (or the artist's work) from the imitation. If these tests match the imitation back to the original artist, this suggests the model can imitate that artist's expression.
Our approach is simple and quantitative. Furthermore, it uses standard techniques and does not require additional training. We demonstrate our approach with an audit of Stable Diffusion's <cit.> capacity to imitate 70 professional digital artists with copyrighted work online. When Stable Diffusion is prompted to imitate an artist from this set, we find that the artist can be identified from the imitation with an average accuracy of 81.0%. Finally, we also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability. Overall, these results suggest that Stable Diffusion is broadly successful at imitating individual human artists. Code is available at https://colab.research.google.com/drive/1ScHo9uMdUgId0DlSr4W4RgnMD44dLiku?usp=sharing.
§ BACKGROUND
Contrastive Language-Image Pretraining (CLIP): CLIP <cit.> is a technique for training AI systems that encode images and text into fixed-length vector representations.
CLIP image and text encoders are trained to produce similar encodings of image/caption pairs and dissimilar encodings of image/caption non-pairs.
The more geometrically distant two encodings of images or captions are, the less related they are according to the encoder, and vice versa.
Using this principle, <cit.> introduced a method to classify an image among a set of labels based on the distances between encodings. We use this method in our proposed test.
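As a concrete illustration of this distance-based zero-shot classification, the following minimal sketch uses the publicly available openai/clip-vit-base-patch32 checkpoint via the Hugging Face transformers library; the image path and label strings are placeholders, not data from our experiments.

```python
# Hedged sketch of zero-shot classification by encoding distances (illustrative, not
# the exact code used for the experiments).  Image path and labels are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.png")                                        # placeholder image
labels = ["Artwork from Artist A", "Artwork from Artist B", "Artwork"]   # placeholder labels

with torch.no_grad():
    img_emb = model.get_image_features(**processor(images=image, return_tensors="pt"))
    txt_emb = model.get_text_features(**processor(text=labels, return_tensors="pt", padding=True))

# Normalize and compare: the label whose encoding is closest (most similar) to the
# image encoding is the zero-shot prediction.
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
similarities = (img_emb @ txt_emb.T).squeeze(0)
print("Predicted label:", labels[int(similarities.argmax())])
```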
Diffusion Models: Diffusion models <cit.> such as Stable Diffusion <cit.> and Midjourney <cit.>, are capable of generating images from arbitrary, user-specified prompts.
Their success has largely been due to training on large amounts of text/image data, often including copyrighted works <cit.>.
Modern image-generation diffusion models are trained using CLIP-style encoders.
When given an encoding of a caption, a diffusion model is trained to generate an image corresponding to the caption <cit.>.
Accordingly, a diffusion model that generates images from these embeddings is trained to be the inverse of a CLIP image encoder.
Legal Motivation: In the United States, <cit.> established that copyright infringement “is measured by considering the qualitative and quantitative significance of the copied portion in relation to the plaintiff’s work as a whole”. However, the subjective nature of these determinations makes practical enforcement complicated <cit.>.
In evaluating copyright questions involving AI systems, legal analyses have focused on how copyrighted work is used in the system's training data <cit.>, but such a focus on training data does not connect liability to an AI system's ability to copy an artist.
In contrast, we show how standard image classification techniques can be used to help determine how successful AI image generators are at imitating individual human artists.
This approach is consistent, quantitative, and connected to the capabilities of the resulting AI system.
Our goal, however, is not to automate determinations of infringement but to demonstrate how tried and tested image classification techniques from machine learning can be used to analyze legal claims.
§ EXPERIMENTS
We conduct two complementary experiments to evaluate Stable Diffusion's ability to imitate human artists. First, we classify human artists from imitations of their work, and second, we match real work from human artists to imitations. Both experiments suggest that Stable Diffusion is broadly successful at imitating human artists.
§.§ Identifying Artists from Imitations
Method: We used CLIP encoders to classify artists from Stable Diffusion's imitations of them. We selected 70 artists from the LAION-aesthetics dataset <cit.>, the dataset used to train Stable Diffusion. We selected these 70 as artists who may potentially be harmed by digital imitations using several criteria: each artist is alive, has a presence on digital art platforms (Instagram, DeviantArt, and ArtStation), publishes artwork or sells their artwork (e.g., prints or digital works), and has more than 100 images in the LAION dataset.
Figure <ref> outlines our method.
We prompted Stable Diffusion (v1.5) (https://huggingface.co/runwayml/stable-diffusion-v1-5) to generate images in the style of each artist, using prompts of the form “Artwork from <artist’s name>”.
Example images are in Figure <ref>.
We then used CLIP encoders (https://huggingface.co/openai/clip-vit-base-patch32) to classify each image among a set of 73 labels.
The 73 labels consisted of each of the 70 artists' prompts (“Artwork from <artist’s name>”) plus three default labels: “Artwork”, “Digital Artwork”, and “Artwork from the public domain.”
These additional labels lend insight into how confident CLIP is that an image imitates a particular artist's style instead of some more generic style.
We then classified each imitation image among these labels using the technique from <cit.>.
CLIP-based classification produces a probability of an image matching each label, and we evaluate the model on the correctness of its most-likely prediction and confidence in the correct artists.
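A minimal sketch of this generate-then-reclassify pipeline, using the Stable Diffusion and CLIP checkpoints linked above, is shown below; the artist list is a placeholder and the generation settings are illustrative rather than a specification of the exact configuration used for the reported numbers.

```python
# Hedged sketch of the generate-then-reclassify loop (illustrative; placeholders marked).
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

artists = ["Artist A", "Artist B"]                 # placeholder for the 70 artists
labels = [f"Artwork from {a}" for a in artists] + [
    "Artwork", "Digital Artwork", "Artwork from the public domain"]

correct = 0
for artist in artists:
    prompt = f"Artwork from {artist}"
    image = pipe(prompt).images[0]                 # one generated imitation
    inputs = proc(text=labels, images=image, return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        probs = clip(**inputs).logits_per_image.softmax(dim=-1).squeeze(0)
    correct += int(labels[int(probs.argmax())] == prompt)
print(f"Top-1 reclassification accuracy: {correct}/{len(artists)}")
```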
Results: We repeated the experiment with the 70 artists ten times to reduce the effect of random variation. On average, CLIP correctly classified 81.0% of the generated images as works made by artists whose names were used to generate them.
Over the ten trials, 69 of the 70 artists were correctly classified in a plurality of the ten trials.
Overall, these results suggest that Stable Diffusion has a broad-ranging ability to imitate the styles of individual artists.
We compared these results to two baselines.
First, we implemented a random-name baseline by running the same experiment with 70 random names from a random name generator (https://randomwordgenerator.com/name.php).
Since Stable Diffusion was not trained on artists with these names (unless a random name is coincidentally the same as some artist's), this experiment serves as a proxy for how Stable Diffusion would handle artists not in its training data.
In this case, only 6 names (8.6%) were guessed correctly.
Second, a random guess would only result in a successful classification every 1 in 73 attempts (1.4%) on average.
We visualize results from our main experiment alongside the controls in Figure <ref>.
Results are Robust to Different Sets of Artists: To test whether our 70 artists were especially classifiable, we ran the original experiment but with a larger set of indiscriminately-selected artists and found similar results. We selected the 250 artists with the highest number of images in the LAION dataset and found that CLIP correctly classified 81.2% of the images.
This demonstrates that successful classification transcends a particular specific set of artists.
§.§ Matching Artwork to Imitations
Method: Our first experiment tested how easily artists could be identified from diffusion model imitations of them.
To provide a complementary perspective, we also directly study the similarity of artists' digital works to Stable Diffusion's imitations of them. For each of the 70 artists, we retrieve the top result obtained by Google Image searching “<artist's name> art.”
As before, we then use Stable Diffusion to generate 10 images for each artist with the prompt “Artwork from [artist's name].” We then compare the real images and generated images. Distances are measured by first encoding images
using the CLIP image encoder and calculating the cosine distance between encodings.
Results: For each artist, we calculate whether real images from artists are more similar to imitations of that artist or other artists. The significance was calculated using a rank sum test with a Bonferroni correction factor of 70. Results are in Figure <ref>.
90% (63/70) of the experiments produce p values less than 0.05. This compares to an average of 22.8% (16/70) for a control experiment using random artist assignments of real images. These results further support that Stable Diffusion is broadly successful at imitating artists.
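A sketch of this comparison is given below; it assumes that the rank sum test is the two-sided Wilcoxon rank-sum (Mann-Whitney U) test, and the image collections are placeholders.

```python
# Hedged sketch of the matching experiment: cosine distances between CLIP image
# encodings, then a per-artist rank-sum test with a Bonferroni correction.
# Assumption: "rank sum test" = two-sided Mann-Whitney U (Wilcoxon rank-sum).
import torch
from scipy.stats import mannwhitneyu
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def encode(images):
    with torch.no_grad():
        feats = model.get_image_features(**processor(images=images, return_tensors="pt"))
    return feats / feats.norm(dim=-1, keepdim=True)

def cosine_distances(real_image, generated_images):
    real, gen = encode([real_image]), encode(generated_images)
    return (1.0 - real @ gen.T).squeeze(0).tolist()

def corrected_p_value(artist, real_work, imitations, n_artists=70):
    # real_work[a]: one real image per artist; imitations[a]: generated images for artist a.
    same = cosine_distances(real_work[artist], imitations[artist])
    other = [d for a, imgs in imitations.items() if a != artist
             for d in cosine_distances(real_work[artist], imgs)]
    _, p = mannwhitneyu(same, other, alternative="two-sided")
    return min(p * n_artists, 1.0)   # Bonferroni correction factor of 70
```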
§ CONCLUSION
We have demonstrated how AI image classification can help to measure the success of diffusion models at imitating human artists.
We argue that these methods can provide a practical way to tie questions about copyright liability to the capabilities of a model instead of its training data alone.
By matching imitation images to both artists' names and works, we find that Stable Diffusion is broadly successful at imitating human digital artists.
We hope that future work can use image classification to analyze legal claims and to test defenses against AI imitation of copyrighted work.
§ ACKNOWLEDGEMENTS
We thank Taylor Lynn Curtis and Lennart Schulze for feedback.
http://arxiv.org/abs/2307.04786v1 | published 2023-07-10 | Combining contextuality and causality: a game semantics approach
Authors: Samson Abramsky, Rui Soares Barbosa, Amy Searle
Primary category: quant-ph | Categories: quant-ph, cs.LO
Combining contextuality and causality: a game semantics approach

Samson Abramsky
Department of Computer Science, University College London, 66–72 Gower Street, London WC1E 6EA, United Kingdom
[email protected]
http://www.cs.ucl.ac.uk/people/S.Abramsky/

Rui Soares Barbosa
INL – International Iberian Nanotechnology Laboratory, Av. Mestre José Veiga, 4715-330 Braga, Portugal
[email protected]
https://www.ruisoaresbarbosa.com/

Amy Searle
Department of Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, United Kingdom
[email protected]
https://www.physics.ox.ac.uk/our-people/searle
We develop an approach to combining contextuality with causality, which is general enough to cover causal background structure, adaptive measurement-based quantum computation, and causal networks.
The key idea is to view contextuality as arising from a game played between Experimenter and Nature, allowing for causal dependencies in the actions of both the Experimenter (choice of measurements) and Nature (choice of outcomes).
Received 24 May 2023 / Accepted 30 June 2023
§ INTRODUCTION
Contextuality is a key non-classical feature of quantum theory.
Besides its importance in quantum foundations, it has been linked to quantum advantage in information-processing tasks.
It also arises beyond quantum mechanics, cf. <cit.>.
We wish to generalise contextuality to accommodate causality and adaptivity.
These features may arise from:
* fundamental aspects of the physical setting, in particular the causal structure of spacetime;
* the causal structure of an experiment, where measurements are performed in some causal order, and moreover, which measurements are performed may depend on the outcomes of previous measurements;
* feed forward in measurement-based quantum computation (MBQC) <cit.>, and more generally, adaptive computation.
Our objectives include:
* A more fine-grained analysis of contextuality.
Signalling should be allowed from the causal past, the backward light cone, and thus no-signalling/no-disturbance should be imposed only from outside it.
This in turn modifies the scope of classicality (non-contextuality), which now becomes relative to this weaker form of no-signalling constraints.
* A better connection with computational models such as circuits and MBQC. Explicitly representing causal flows of information, outputs of gates feeding into inputs of other gates, enables a deeper analysis of the relationships between contextuality and quantum advantage.
It turns out that capturing these different manifestations of causality and their interactions with contextuality is rather subtle.
The perspective we adopt here is to view contextuality as a two-person game played between Experimenter and Nature.
The Experimenter's moves are the measurements; the actions of the Experimenter are to choose the next measurement to be performed. Nature's moves are the outcomes.
We can capture the various forms of causal dependency which may arise in terms of strategies for Experimenter or for Nature.
The game format is already familiar in the form of non-local games.
There, the Verifier plays the role of the Experimenter, and Nature responds with outcomes according to the probability distributions
corresponding to Alice–Bob strategies.
Non-local games are one-shot games, with a single round of interaction. By considering more general games, causal structure can be incorporated.
Our treatment builds upon the sheaf-theoretic approach to contextuality. A pleasing feature is that once one modifies the basic sheaf of events to take causal structure into account, the further definitions and treatment of contextuality follow automatically.
This illustrates the advantages of a compositional and functorial approach.
§ PREVIOUS WORK
Pearl had already noted the connection with Bell inequalities in his seminal paper on testability of causal models with latent and instrumental variables <cit.>.
The extension of causal networks to allow for quantum resources, or more generally the operations offered by Generalised Probabilistic Theories, has been studied in <cit.>.
Our starting point is the sheaf-theoretic treatment of contextuality introduced in <cit.>, and extensively developed subsequently.
This is a general, mathematically robust approach, which provides a basis for:
* the contextual fraction as a measure of contextuality <cit.>;
* a general characterisation of noncontextuality inequalities in terms of consistency conditions (“logical Bell inequalities”, Boole's “conditions of possible experience”) <cit.>;
* resource theory of contextuality, and simulations between contextual systems <cit.>;
* cohomological criteria for contextuality, the topology of contextuality <cit.>;
* connections with logic and computation, database theory, constraint satisfaction <cit.>;
* generalisations <cit.> and applications <cit.> of Vorob'ev's theorem <cit.>.
The aim is to develop a refined version incorporating causality for which all these features will carry over.
There have been some prior works in this direction:
* Shane Mansfield in <cit.> introduced a refinement of the sheaf-theoretic approach with an order on the measurements,
and used it to study the two-slit experiment and the Leggett–Garg scenario.
* Stefano Gogioso and Nicola Pinzani in <cit.> developed a causal refinement of the sheaf-theoretic approach to non-locality, for the case of Bell-type scenarios.
They introduce an order on the sites or agents in the Bell scenario.
In both cases, the order is used to refine the no-signalling or no-disturbance condition which guarantees that joint distributions have consistent marginals.
In the presence of causality, signalling is allowed from within the backwards light cone or causal past of an event, and thus no-signalling is only required outside it.
One may contrast this with the Contextuality-by-Default (CbD) approach introduced by Ehtibar Dzhafarov and Janne Kujala <cit.>.
In CbD, every variable is regarded as contextual, differently labelled in each context.
Classicality is characterised by the existence of a joint distribution under which different occurrences of variables with the same “content” have the same value with the maximum probability consistent with their individual marginals.
This allows for the analysis of arbitrary signalling systems, which has applications e.g. in the behavioural sciences, where signalling is the norm. Moreover, this signalling may in general be impossible to characterise or control.
By contrast, both in the above work by Mansfield and Gogioso–Pinzani and in the present paper, the aim is to explicitly describe a given causal background – which might arise from the structure of an experiment, circuit, or physical system – and to characterise contextuality relative to such a background.
In this paper, we extend the scope of previous work in several directions.
First, we allow more general dependencies of events on their prior causal histories.
In particular, the choice of which measurement to perform can depend on previous outcomes as well as on which measurements have been performed. This is an important feature of MBQC (“feedforward”), and more generally of adaptive computation.
Secondly, we extend general contextuality scenarios with causality, not just the non-locality Bell scenarios as in the Gogioso–Pinzani (GP) approach.
Finally, and most subtly, we recognise the different roles played by Nature and Experimenter in their causal interactions, highlighting an important difference between causal background and adaptivity.
An interesting feature of our approach, in common with that of Gogioso–Pinzani, is that it proceeds essentially by modifying the sheaf of events from <cit.> to reflect the refined signalling constraints in the presence of causality.
Once this has been done, the remainder of the analysis of contextuality follows exactly the same script as in <cit.>.
In particular, the appropriate definition of empirical model, the relaxed no-signalling constraints, and the notion of classicality/non-contextuality follow automatically.
§ EXAMPLES
As we have already suggested, causality in relation to contextuality has dual aspects. It may be imposed by Nature, in the form of a causal background against which the contextual behaviour plays out; or it may be imposed by the Experimenter, to achieve computational effects (adaptive computation).
We illustrate these two sources of causality in two basic examples.
§.§ Example I: causal background à la GP
Consider a standard bipartite nonlocality scenario, the Bell–CHSH scenario:
two experimenters, Alice and Bob, with sets of local measurements I_A and I_B, and outcome sets O_A and O_B.
We may think of these as “inputs” and “outputs”.
We now introduce a variation, in which
we assume that Alice's events causally precede those of Bob.
Thus Bob's backward light cone includes the events where Alice chooses a measurement and observes an outcome.
Whereas in a standard, causally “flat” scenario, we would have deterministic outcomes given by functions
s_A : I_A → O_A, s_B : I_B → O_B,
with these causal constraints, we have functions
s_A : I_A → O_A, s_B : I_A × I_B → O_B .
That is, the responses by Nature to Bob's measurement may depend on the previous measurement made by Alice.[Note that, in a deterministic model, Nature “knows” what response it would have given for Alice's measurement, so there is no real dependency on this outcome.]
If we have measurements x_1, x_2 ∈ I_A, y ∈ I_B, then { (x_1,0), (y,0) } and { (x_2,0), (y,1) } are valid histories in a single deterministic model.
If we now go to distributions over such histories, say d_{x,y} as a distribution over outcomes for the Alice measurement x and the Bob measurement y, then
of the usual no-signalling/compatibility equations
d_{x,y} |_{x} = d_{x}
d_{x,y} |_{y} = d_{y}
only (<ref>) remains. In fact, d_{y} is not even defined, since { y} is not a “causally secured” context: the measurement y can never occur on its own without a preceding Alice measurement.
Thus no-signalling is relaxed in a controlled fashion.
§.§ Example II: Anders–Browne
The Anders–Browne construction <cit.> shows how we can use a form of Experimenter-imposed causality to promote two sub-universal computational models (Pauli measurements and mod-2 linear classical processing) to universal MBQC.
It uses the GHZ state as a resource state:
|GHZ⟩ = (|↑↑↑⟩ + |↓↓↓⟩)/√2 .
Performing local Pauli X and Y measurements, we obtain the following table of possible joint outcomes[The table shows only the possibilistic information, the supports of the probability distributions on joint outcomes, which are uniform on each row.]
          +++   ++-   +-+   +--   -++   -+-   --+   ---
 X Y Y     0     1     1     0     1     0     0     1
 Y X Y     0     1     1     0     1     0     0     1
 Y Y X     0     1     1     0     1     0     0     1
 X X X     1     0     0     1     0     1     1     0
X_1 Y_2 Y_3 = -1 ,
Y_1 X_2 Y_3 = -1 ,
Y_1 Y_2 X_3 = -1 ,
X_1 X_2 X_3 = +1 .
The idea is to use an Experimenter causal flow to implement AND.
Taking X as 0, Y as 1, we consider the measurements for Alice and Bob as inputs to an AND gate.
We then use the following simple
mod-2 linear mapping (XOR on the bit representations) from the Alice–Bob measurements to determine Charlie's measurement:
0, 0 ↦ 0        X, X ↦ X
0, 1 ↦ 1        X, Y ↦ Y
1, 0 ↦ 1        Y, X ↦ Y
1, 1 ↦ 0        Y, Y ↦ X .
The output of the AND function is read off from the XOR of the three outcome bits.
We draw attention to the following two remarks.
* This example illustrates causality that is purely employed by the Experimenter.
From Nature's point of view, it is just the standard (“causally flat”) GHZ construction.
* The above describes a simplified “one-shot” implementation of a single AND gate.
To represent general logical circuits with embedded AND gates, using this construction as a building block,
really requires (classically computed) feedforward of measurement settings.
This means that there is full adaptivity at work, dependence of measurement choices on prior measurement outcomes.
§ GAME SEMANTICS OF CAUSALITY
We conceptualise the dual nature of causality as a two-person game, played between Experimenter and Nature:
* Experimenter’s moves are measurements to be performed;
* Nature’s moves are the outcomes.
By formalising this, we develop a theory of causal contextuality that recovers:
* the usual theory of contextuality in the “flat” case,
* the Gogioso–Pinzani theory of non-locality in a causal background,
* MBQC with adaptive computation,
* classical causal networks,
as special cases, and more.
§.§ Measurement scenarios
We begin by briefly reviewing some basic ingredients of the sheaf-theoretic formulation of contextuality. For further details, see e.g. <cit.>.
A (flat) measurement scenario is a pair (X, O), where:
* X is a set of measurements.
* O = { O_x }_x ∈ X is the set of possible outcomes for each measurement.
An event has the form (x,o), where x ∈ X and o ∈ O_x. It corresponds to the measurement x being performed, with outcome o being observed.
Given a set of events s, its domain is the set of measurements performed:
dom(s) ≔ π_1 s = { x | ∃ o. (x,o) ∈ s } .
We say that s is consistent if (x,y), (x, y') ∈ s implies y = y'.
In this case, s defines a function from the measurements in its domain to outcomes.
A consistent set of events is a section.
We define the event sheaf ℰ over sets of measurements: for each set U ⊆ X of measurements, ℰ(U) is the set of sections whose domain is U; when U ⊆ V, there is a restriction map ℰ(V) → ℰ(U).
The functoriality of these restriction maps formalises the no-disturbance condition, or “generalised no-signalling”, at the level of deterministic models. Generalised no-signalling of probabilistic (or possibilistic) models will then follow automatically when we compose with the appropriate distribution monad, cf. <cit.>.
The sheaf property of the event sheaf – that compatible families of local sections glue together to yield unique global sections – corresponds to the fact that deterministic models are non-contextual.[Note that if we drop no-signalling, as in the CbD approach, this no longer holds.]
When we pass to distributions over the event sheaf,
the sheaf property no longer holds, and this is exactly how contextuality arises. More precisely, we extend the measurement scenario to a contextuality scenario by specifying a cover of X; a failure of the sheaf property with respect to this cover constitutes a witness to contextuality.
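To make this concrete, the following small sketch (our own illustration, with placeholder data) represents a flat empirical model as a table of distributions over sections, one per context, and checks the compatibility (no-signalling) of marginals; contextuality is then the failure of such a family to arise as the marginals of a single global distribution.

```python
# Illustrative sketch (our own, with placeholder data): a flat empirical model as
# distributions over sections, and the no-signalling / compatibility check.
from itertools import product

def sections(context, outcomes):
    """All sections over a context: assignments of an outcome to each measurement."""
    ms = sorted(context)
    return [dict(zip(ms, os)) for os in product(*(outcomes[m] for m in ms))]

def restrict(section, U):
    return {m: o for m, o in section.items() if m in U}

def marginal(dist, U):
    """Push a distribution over sections of a context down to the sub-context U."""
    out = {}
    for s, p in dist.items():
        key = tuple(sorted(restrict(dict(s), U).items()))
        out[key] = out.get(key, 0.0) + p
    return out

def compatible(model, eps=1e-9):
    """Check that all pairs of contexts agree on the marginals over their overlap."""
    ctxs = list(model)
    for C in ctxs:
        for D in ctxs:
            U = set(C) & set(D)
            mC, mD = marginal(model[C], U), marginal(model[D], U)
            if any(abs(mC.get(k, 0.0) - mD.get(k, 0.0)) > eps for k in set(mC) | set(mD)):
                return False
    return True

# Placeholder one-context example: measurements a, b with binary outcomes, uniform distribution.
outcomes = {"a": [0, 1], "b": [0, 1]}
dist = {tuple(sorted(s.items())): 0.25 for s in sections({"a", "b"}, outcomes)}
model = {("a", "b"): dist}
print("compatible:", compatible(model))
```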
Our general strategy to accommodate causality is to modify the definition of the event sheaf. After this, we essentially follow the same script as above to give an account of contextuality in the causal setting. A similar procedure is followed in <cit.>.
§.§ Causal measurement scenarios
A causal measurement scenario is a tuple M=(X, O, ⊢), where the additional ingredient is an enabling relation
that expresses causal constraints.
The intended interpretation of s ⊢ x, where s ∈⋃_U ⊆ X(U) is a consistent set of events and x ∈ X a measurement,
is that it is possible to perform x after the events in s have occurred.
Note that this constraint refers to the measurement outcomes as well as the measurements that have been performed.
This allows adaptive behaviours to be described.
Given such a causal measurement scenario M, we use it to generate a set of histories. A history is a set of events that can happen in a causally consistent fashion. We associate each measurement x with a unique event occurrence, so histories are required to be consistent.
To formalise this, we first define the accessibility relation between consistent sets of events s and measurements x: s ⇝ x if and only if x ∉ dom(s) and for some t ⊆ s, t ⊢ x. The intuition is that x may be performed if the events in s have occurred.
Now, ℋ(M), the set of histories over M, is defined inductively as the least family H of consistent sets of events
which contains the empty set and is closed under accessibility, meaning that if s ∈ H and s ⇝ x,
then for all o ∈ O_x, s ∪ { (x,o) } ∈ H. Note that if a measurement can be performed, then any of its outcomes may occur, forming a valid history.
We can give a more explicit description of ℋ(M) as a least fixed point. We define an increasing family of sets of histories { H_k } inductively:
H_0 ≔ { ∅ }
H_{k+1} ≔ H_k ∪ { s ∪ { (x,o) } | s ∈ H_k, s ⇝ x, o ∈ O_x } .
If X is finite, then for some k we have H_k = H_{k+1}, and ℋ(M) = H_k for the least such k.
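This fixed-point construction translates directly into code. The sketch below (our own illustration) represents events as (measurement, outcome) pairs, histories as frozensets of events, and the enabling relation ⊢ as a finite list of pairs (t, x); the two-measurement scenario at the end is a placeholder.

```python
# Illustrative sketch (our own): computing the set of histories H(M) of a finite
# causal measurement scenario by the least-fixed-point iteration H_0, H_1, ...
def domain(s):
    return {x for (x, o) in s}

def accessible(s, x, enabling):
    """s ~> x : x not yet measured in s and some t <= s enables x."""
    return x not in domain(s) and any(t <= s for (t, y) in enabling if y == x)

def histories(X, O, enabling):
    H = {frozenset()}                      # H_0 = { empty history }
    while True:
        new = {s | {(x, o)}
               for s in H for x in X if accessible(s, x, enabling)
               for o in O[x]}
        if new <= H:
            return H                       # fixed point reached
        H |= new

# Placeholder example: measurement a is enabled initially; b is enabled only after a.
X = {"a", "b"}
O = {"a": [0, 1], "b": [0, 1]}
enabling = [(frozenset(), "a"), (frozenset({("a", 0)}), "b"), (frozenset({("a", 1)}), "b")]
for h in sorted(histories(X, O, enabling), key=len):
    print(sorted(h))
```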
§.§ Strategies
We regard a causal measurement scenario as specifying a game between Experimenter and Nature. Events (x,o) correspond to the Experimenter choosing a measurement x, and Nature responding with outcome o. The histories correspond to the plays or runs of the game.
Given this interpretation, we define a strategy for Nature over the game M as a set of histories σ ⊆ ℋ(M) satisfying the following conditions:
* σ is downwards closed: if s, t ∈ ℋ(M) and s ⊆ t ∈ σ, then s ∈ σ.
* σ is deterministic and total: if s ∈ σ and s ⇝ x, then there is a unique o ∈ O_x such that s ∪ { (x,o) } ∈ σ.
Thus at any position s reachable under the strategy σ, the strategy determines a unique response to any measurement that can be chosen by the Experimenter.
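For a finite scenario these conditions can be checked mechanically; the sketch below (our own illustration, repeating the toy scenario from the previous snippet) tests whether a given set of histories is a strategy for Nature.

```python
# Illustrative sketch (our own): check the two strategy conditions for a finite scenario.
# Same toy scenario as in the previous snippet: b is enabled only after a.
X = {"a", "b"}
O = {"a": [0, 1], "b": [0, 1]}
enabling = [(frozenset(), "a"), (frozenset({("a", 0)}), "b"), (frozenset({("a", 1)}), "b")]

def accessible(s, x):
    return x not in {m for (m, _) in s} and any(t <= s for (t, y) in enabling if y == x)

def histories():
    H = {frozenset()}
    while True:
        new = {s | {(x, o)} for s in H for x in X if accessible(s, x) for o in O[x]}
        if new <= H:
            return H
        H |= new

def is_strategy(sigma):
    all_h = histories()
    # Down-closure: every history contained in a member of sigma is again in sigma.
    if any(s <= t and s not in sigma for t in sigma for s in all_h):
        return False
    # Determinism and totality: exactly one response to every accessible measurement.
    for s in sigma:
        for x in X:
            if accessible(s, x) and sum((s | {(x, o)}) in sigma for o in O[x]) != 1:
                return False
    return True

# Example strategy: Nature always answers 0.
sigma = {h for h in histories() if all(o == 0 for (_, o) in h)}
print(is_strategy(sigma))   # True
```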
We note an important property of strategies.
If s, t ∈ σ, s ⊆ t, and s ⇝ x, then
s ∪ { (x,o) } ∈ σ  ⟺  t ∪ { (x,o) } ∈ σ .
Under the given assumptions, since t ⇝ x, we must have t ∪ { (x,o') } ∈ σ for some o' ∈ O_x. Since s ⇝ x, we have
that s ∪ { (x,o') } is a history (in ℋ(M)),
and by down-closure, s ∪ { (x,o') } ∈ σ. Since σ is deterministic, we must have o = o'.
Monotonicity says that
the outcomes for a measurement x under strategy σ are determined at the minimal histories at which x can occur. This still leaves open the possibility of assigning different outcomes to x relative to incomparable causal pasts.
We note another useful property, which follows immediately from totality and determinism.
If σ, τ are strategies with σ ⊆ τ, then σ = τ.
§.§ The presheaf of strategies
Given a causal measurement scenario M = (X,O,⊢) and a set of measurements U ⊆ X, we define M_U, the restriction of M to U, as the causal measurement scenario (U, { O_x }_x ∈ U, ⊢_U), where s ⊢_U x iff s ⊢ x and dom(s) ∪ { x } ⊆ U.
Note that M_X = M.
If U ⊆ V, then ℋ(M_U) is a down-closed subset of ℋ(M_V) under set inclusion.
Given a strategy σ over M_V, and U ⊆ V, we define σ|_U, the restriction of σ to U, as the intersection σ|_U ≔ σ ∩ ℋ(M_U).
If σ is a strategy over M_V and U ⊆ V, then σ|_U is a strategy over M_U.
The restriction σ|_U inherits down-closure from σ.
For the second condition, if s ∈ σ|_U and s ⇝_U x, then s ∈ σ and s ⇝_V x. So, there is a unique o ∈ O_x such that s ∪ { (x,o) } ∈ σ.
But since x ∈ U, we have s ∪ { (x,o) } ∈ ℋ(M_U), and so s ∪ { (x,o) } ∈ σ|_U.
Given a causal measurement scenario M = (X,O,⊢), we can now define a presheaf
Γ : 𝒫(X)^op → Set
of strategies over M.
For each U ⊆ X, Γ(U) is the set of strategies for M_U.
Given U ⊆ V, the restriction map Γ(U ⊆ V) : Γ(V) → Γ(U) is given by σ ↦ σ|_U.
The following is immediate:
Γ is a presheaf.
§.§ Historical note
Causal measurement scenarios are a renaming and repurposing of Kahn–Plotkin information matrices <cit.>, which were introduced circa 1975 to represent concrete domains.[For a historical perspective, see <cit.>.]
We have changed the terminology to reflect the intuitions and applications motivating the present paper:
Kahn–Plotkin          | Here
information matrix    | causal measurement scenario
cell                  | measurement
value                 | outcome
decision              | event
configuration         | history
The interpretation of causal measurement scenarios as Experimenter–Nature games, the notion of strategy, and the presheaf of strategies, are all new to the present paper.
§ CAUSAL CONTEXTUALITY
Our plan now is to follow the script from <cit.>, replacing the event sheaf by the presheaf of strategies Γ.
Thus local sections are replaced by strategies, whose assignments of outcomes to measurements are sensitive to the previous history of the game.
A causal contextuality scenario is a structure (M, 𝒞), where M = (X, O, ⊢) is a causal measurement scenario and 𝒞 is a cover of X, a family 𝒞 = { C_i }_i ∈ I of subsets of measurements C_i ⊆ X satisfying ⋃𝒞 = ⋃_i ∈ I C_i = X.
We work with the presheaf Γ of strategies over M, as described in the previous section.
Recall the distribution monad 𝒟_R from <cit.>, where R is a semiring.
When R is the non-negative reals, it yields the usual discrete probability distributions.
We construct the presheaf 𝒟_R Γ, obtained by composing the endofunctor part of the monad with the presheaf of strategies Γ.
An empirical model on the scenario (M, 𝒞) is a compatible family for the presheaf 𝒟_R Γ over the cover 𝒞 = { C_i }_i ∈ I.
That is, it is a family { e_i }_i ∈ I, where e_i ∈ 𝒟_R Γ(C_i),
subject to the compatibility conditions: for all i, j ∈ I, e_i |_C_i ∩ C_j = e_j |_C_i ∩ C_j.
Each distribution e_i assigns probabilities to the strategies over M_C_i, to those strategies over M that only perform measurements drawn from the context C_i. As usual, the compatibility conditions require that the marginal distributions agree.
This follows the definition of empirical model in <cit.>, replacing the event sheaf by the presheaf of strategies.
The empirical model is causally non-contextual if this compatible family extends to a global section of the presheaf 𝒟_R Γ, i.e. if there is a distribution d ∈ 𝒟_R Γ(X) such that, for all i ∈ I, d|_C_i = e_i.
If a causal contextuality scenario is finite, then so is the set of histories and therefore that of strategies.
The causally non-contextual models thus form a convex polytope, the convex hull of the empirical models on (M, 𝒞) corresponding to deterministic strategies σ ∈ Γ(X).
This is in keeping with the usual setup of “flat” non-locality and contextuality (without causality), where such classical polytopes are studied.
The classicality of a given model, membership in this polytope, can be checked by linear programming;
and this also suggests a generalisation of the contextual fraction <cit.> to the causal setting.
Similarly, causal contextuality is witnessed by violations of the linear inequalities defining the facets of the polytope.
An open question is to find a logical characterisation of such inequalities in the spirit of “logical Bell inequalities” <cit.>.
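Concretely, membership in the polytope is a linear-programming feasibility problem: one asks whether the empirical probabilities can be written as a convex combination of the behaviours of the finitely many deterministic global strategies. A generic sketch of such a check (our own illustration; building the incidence matrix from Γ(X) for a concrete scenario is left as an input) is given below.

```python
# Illustrative sketch: membership in the causally non-contextual polytope as an LP
# feasibility problem.  M is the 0/1 incidence matrix whose j-th column is the
# behaviour of the j-th deterministic global strategy, expressed in the same
# (context, restricted-strategy) coordinates as the flattened empirical model e.
# Feasibility of  M q = e, q >= 0, sum(q) = 1  means the model is causally non-contextual.
import numpy as np
from scipy.optimize import linprog

def is_causally_noncontextual(M, e):
    n_strategies = M.shape[1]
    A_eq = np.vstack([M, np.ones((1, n_strategies))])
    b_eq = np.concatenate([np.asarray(e, dtype=float), [1.0]])
    res = linprog(c=np.zeros(n_strategies), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * n_strategies, method="highs")
    return res.status == 0            # 0 = optimal/feasible; 2 = infeasible

# Usage (hypothetical): build M from the deterministic strategies in Gamma(X) and
# e from the empirical distributions e_i, then call is_causally_noncontextual(M, e).
```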
§ SPECIAL CASES
To check that these notions make sense, we look at two special cases: flat scenarios and Gogioso–Pinzani scenarios.
§.§ Flat scenarios
A contextuality scenario from <cit.> is (X, O, ). We define the trivial enabling relation where all measurements are initially enabled: ⊢ x for all x ∈ X. This yields a causal measurement scenario (M, ), where M = (X,O,⊢).
For any set of measurements U ⊆ X, the histories over M_U have support contained in U.
Using the monotonicity property and the fact that all measurements are enabled by ,
any strategy in Γ(U) assigns the same outcome to each measurement across all its histories.
Hence, it will correspond to a section in ℰ(U) = ∏_x ∈ U O_x, where ℰ is the event sheaf. In fact, these will be in bijective correspondence.
Because of this bijective correspondence between Γ and ℰ, we see that the notions of empirical model, global section, and contextuality defined for the game-based scenario coincide with the usual notions in this case.
As this example illustrates, the restrictions on which measurements can be performed together are imposed by the cover, not by the causal structure.
§.§ GP scenarios
In recent work, Stefano Gogioso and Nicola Pinzani studied a causal refinement of the sheaf-theoretic approach to non-locality over Bell scenarios <cit.>.
A GP scenario is given by ((Ω, ≤), { I_ω }_ω ∈ Ω, { O_ω }_ω ∈ Ω), where:
* Ω is a set of sites or agents (Alice, Bob, etc.), with a causal ordering ≤.
* I_ω is the set of inputs (or measurement settings) at ω.
* O_ω is the set of outputs (or measurement outcomes) at ω.
Given such a scenario, we define a causal measurement scenario M = (X,O,⊢).
This mirrors the usual encoding of Bell non-locality scenarios as contextuality scenarios.
First, we set:
* X ≔ ∑_ω ∈ Ω I_ω = { (ω, i) | ω ∈ Ω, i ∈ I_ω };
* O_(ω, i) ≔ O_ω.
Given a set of events
s= { ( (ω_1,i_1) ,o_1), … , ( (_n, i_n) , o_n) }
and a measurement (ω, i) ∈ X, we define
s ⊢ (ω, i) if and only if
the support of s has a measurement for each site strictly preceding ω, i.e. {ω_1, …, ω_n } = { ω' ∈ Ω | ω' < ω }.
So, a measurement (ω, i) can only be played after a measurement from each site in the causal past of ω has been played.
Consequently, the support of any history consists of one measurement per site for some lower subset λ ⊆ Ω of sites.
This corresponds to the usual notion of context for Bell scenarios, refined to ensure that such contexts are “causally secured”.
We consider a simple example to illustrate the comparison between Γ defined over (X, O, ⊢), and the “sheaf of sections” from <cit.>.
We take Ω to be the 2-chain ω_1 < ω_2.
This is a variation on a standard bipartite Bell–CHSH type scenario, with Alice causally preceding Bob, and hence allowed to signal to Bob.
We take the standard Bell scenario cover, where the maximal contexts correspond to choosing one measurement per site, and focus our analysis on the contexts below the cover.
The equivalence between sections of Γ and those of the presheaf from <cit.> actually extends more generally to all subsets of measurements, but this is sufficient to illustrate our main point.
Now consider a strategy σ ∈ Γ(X).
The non-empty histories in M which are compatible in the standard Bell cover have the form
{ ((ω_1, z_1), o_1) } or { ((ω_1, z_1), o_1), ((ω_2, z_2), o_2) },
where z_i ∈{ x,y }, o_i ∈{ 0,1}, i=1,2.
Using monotonicity, the strategy σ assigns a unique o_1 to each (ω_1, z_1) and a unique o_2 to each pair (ω_1, z_1) and (ω_2, z_2).
Thus σ determines a pair of functions of type
(I_ω_1 → O_ω_1) × (I_ω_1 × I_ω_2 → O_ω_2).
This accords with the description given in <cit.>; see in particular the discussion in Section 5.
It extends to an equivalence between Γ and the sheaf of sections of <cit.>.
Thus, if we take the standard Bell cover we obtain the same empirical models and notion of contextuality as in <cit.>.
In an extended version of the present paper, we show that
this analysis carries over to general GP scenarios. Hence, we recover the Gogioso–Pinzani theory as a special case of our framework.
§ THE SHEAF PROPERTY FOR THE STRATEGY PRESHEAF
The strategy presheaf Γ plays the role in our causal theory of the event sheaf in <cit.>.
The sheaf property of the event sheaf ℰ has some conceptual significance since it shows that for deterministic models local consistency implies global consistency. It is only when we introduce distributions, whether probabilistic or possibilistic, that the sheaf property fails and contextuality arises.
This raises the question of whether Γ is also a sheaf.
We now show one half of the sheaf property, namely that gluing is always possible.
So, the fact that local consistency implies global consistency for deterministic models carries over to the causal theory.
Let { U_i }_i ∈ I be a family of subsets of X covering U = ⋃_i ∈ I U_i.
Suppose we are given a compatible family { σ_i }_i ∈ I, with σ_i ∈ Γ(U_i)
and σ_i |_U_i ∩ U_j = σ_j |_U_i ∩ U_j for all i, j ∈ I.
The sheaf property requires that there exist a unique strategy σ ∈ Γ(U) such that σ |_U_i = σ_i for all i ∈ I.
From the definition of restriction, if such a gluing exists, it must contain the union σ' ≔ ⋃_i ∈ I σ_i.
So, if this σ' happens to be a strategy, by maximality it must be the required unique gluing of the family { σ_i }_i ∈ I.
The union of down-closed sets is down-closed.
Thus σ' can only fail to be a strategy if determinacy or totality fails.
We show that the first of these can never arise.
If { σ_i }_i ∈ I is a compatible family for the presheaf Γ, then σ' ≔ ⋃_i ∈ I σ_i is deterministic.
Suppose that s ∪{ (x,o_k) }∈σ' for k = 1,2.
For some i,j ∈ I we have s ∪{ (x,o_1) }∈σ_i and s ∪{ (x,o_2) }∈σ_j.
This implies that supp(s) ∪ { x } ⊆ U_i ∩ U_j, and hence s ∪ { (x,o_1) } ∈ σ_i |_U_i ∩ U_j and s ∪ { (x,o_2) } ∈ σ_j |_U_i ∩ U_j. By compatibility and determinacy of σ_i and σ_j, this implies o_1 = o_2.
Finally, if totality fails, we can always complete the union σ' to a strategy over M_U by making arbitrary choices of outcomes for any remaining accessible measurements.
In general, this can be done in multiple ways, so the uniqueness part of the sheaf condition fails, i.e. Γ is not separated.
We give a simple example to show how this can happen.
Fix X = { x,y,z }, O_w = { 0,1} for all w ∈{ x,y,z}, and the following enabling relation:
∅ ⊢ x, ∅ ⊢ y, { (x,0), (y,0) } ⊢ z.
Consider the cover consisting of U_1 ≔ { x, z } and U_2 ≔ { y, z }, and take strategies
σ_1 ≔ { ∅, { (x,0) } } and σ_2 ≔ { ∅, { (y,0) } }.
Note that σ_1 and σ_2 are compatible since they both restrict to the empty strategy over U_1 ∩ U_2 = { z }, as the measurement z is not enabled.
Similarly, σ_1 and σ_2 are both total, since z is not accessible from any history over U_1 or U_2. However, σ_1 ∪ σ_2 is not total, since z is accessible but has no assigned outcome.
This example is rather pathological, as it hinges on the inaccessibility of z in the cover, leading to the following question.
Is there a notion of “good cover” which implies that gluings are unique?
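To make the non-uniqueness concrete, the following small sketch (ours; the encoding of histories as frozensets, the monotone reading of the enabling relation, and the reading of restriction as keeping histories supported in U are our assumptions) checks that the two completions of σ_1 ∪ σ_2, differing only in the outcome assigned to z, are both valid strategies over X restricting to the same compatible family.

from itertools import combinations

ENABLING = {"x": [frozenset()], "y": [frozenset()],
            "z": [frozenset({("x", 0), ("y", 0)})]}   # the relation above
OUTCOMES = (0, 1)

def support(s):
    return {m for (m, _) in s}

def is_history(s):
    # a history can be built one enabled measurement at a time
    return (not s) or any(is_history(s - {ev})
                          and any(e <= s - {ev} for e in ENABLING[ev[0]])
                          for ev in s)

def accessible(s, U):
    return {m for m in U if m not in support(s)
            and any(e <= s for e in ENABLING[m])}

def is_strategy(sigma, U):
    hist = all(is_history(s) and support(s) <= U for s in sigma)
    down = all(frozenset(t) in sigma for s in sigma
               for r in range(len(s)) for t in combinations(s, r)
               if is_history(frozenset(t)))
    det = all(not {s | {(m, 0)}, s | {(m, 1)}} <= sigma
              for s in sigma for m in U)
    total = all(any(s | {(m, o)} in sigma for o in OUTCOMES)
                for s in sigma for m in accessible(s, U))
    return hist and down and det and total

def restrict(sigma, U):
    return {s for s in sigma if support(s) <= U}

U1, U2, U = {"x", "z"}, {"y", "z"}, {"x", "y", "z"}
sigma1 = {frozenset(), frozenset({("x", 0)})}
sigma2 = {frozenset(), frozenset({("y", 0)})}
base = sigma1 | sigma2 | {frozenset({("x", 0), ("y", 0)})}
gluings = [base | {frozenset({("x", 0), ("y", 0), ("z", k)})} for k in OUTCOMES]
for g in gluings:
    assert is_strategy(g, U)
    assert restrict(g, U1) == sigma1 and restrict(g, U2) == sigma2
print("two distinct strategies over X restrict to the same compatible family")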
§ EXPERIMENTER STRATEGIES AND ADAPTIVE COMPUTATION
The strategies considered so far have been strategies for Nature. These prescribe a response – an outcome – for each measurement that can be chosen by the Experimenter.
Using the duality inherent in game theory, there is also a notion of strategy for Experimenter.
To formulate this, we use the following observation.
For a history s ∈ ℋ(M), the following are equivalent:
* s is maximal in (ℋ(M), ⊆);
* no measurement is accessible from s, i.e. for all x ∈ X, x is not accessible from s.
We now define a strategy for Experimenter over the game M to be a set of histories τ ⊆ ℋ(M) satisfying the following conditions:
* τ is downwards closed: if s, t ∈ ℋ(M) and s ⊆ t ∈ τ, then s ∈ τ.
* τ is co-total: if s ∈ τ and s is not maximal, then there is a measurement x accessible from s such that s ∪ { (x,o) } ∈ τ for some o ∈ O_x. Moreover, for all such x, s ∪ { (x,o') } ∈ τ for all o' ∈ O_x.
Thus at each stage, the strategy determines which measurements may be performed.
Note that it may allow more than one measurement, so some nondeterminism remains.
For each such measurement, it must then accept any possible response from Nature. The future choices of the Experimenter can then depend on Nature's responses, allowing for adaptive protocols.
If we are given a strategy for Nature σ and a strategy for the Experimenter τ, we can play them off against each other, resulting in ⟨σ|τ⟩ ≔ σ ∩ τ.
This is the down-set of a set of maximal histories.
This operation can be extended to distributions on strategies, to mixed strategies, in a bilinear fashion.[The extension to mixed strategies hinges on the fact that the distribution monad is commutative.]
We refer to strategies for Nature as N-strategies, and to strategies for Experimenter as E-strategies.
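In this representation the interaction is a one-liner; the following sketch (ours) simply intersects the two sets of histories and reads off the maximal histories, i.e. the runs of the game that can actually occur.

def play(sigma, tau):
    # N-strategy sigma and E-strategy tau are both sets of histories
    runs = sigma & tau
    maximal = {s for s in runs if not any(s < t for t in runs)}
    return runs, maximal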
§.§ Anders–Browne revisited
We now show how the Anders–Browne construction of an AND gate discussed in section <ref> can be formalised using an Experimenter strategy.
First, we have the description of the standard GHZ construction. This is given by a flat measurement scenario with X = { A_i, B_j, C_k | i,j,k ∈{ 0,1}}, and O_x = { 0,1 } for all x ∈ X.
The maximal compatible sets of measurements are all sets of the form { A_i, B_j, C_k } with i,j,k ∈{ 0,1}, a choice of one measurement per each site or agent.
We regard each measurement as initially enabled. The N-strategies for this scenario form the usual sections assigning an outcome to each choice of measurement for each site, and the GHZ model assigns distributions on these strategies as in the table shown in section <ref>.
To get the Anders–Browne construction, we consider the E-strategy which initially allows any A or B measurement to be performed, and after a history { (A_i, o_1), (B_j, o_2) } chooses the C-measurement C_{i ⊕ j}.
Playing this against the GHZ model results in a strategy that computes the AND function with probability 1.
The full power of adaptivity is required when using this as a building block to implement a more involved logical circuit. Suppose that the output of the AND gate above is to be fed as the first input of a second AND gate, built over a GHZ scenario with measurements labelled { A'_i, B'_j, C'_k | i,j,k ∈{ 0,1}}.
The E-strategy implements the first AND gate as above, with any B' measurement also enabled, being a free input.
After that, the A'-measurement can be determined: after a history containing { (A_i, o_1), (B_j, o_2), (C_i ⊕ j, o_3) }, the E-strategy chooses the A'-measurement A'_o_1 ⊕ o_2 ⊕ o_3. The second AND gate is then implemented like the first. Note that the choice of A'-measurement depends not only on previous measurement choices, but on outcomes provided by Nature.
§ OUTLOOK
In a forthcoming extended version of this paper, we show how a number of additional examples, including Leggett–Garg, can be handled in our approach.
We also show that our formalism faithfully represents a number of others, including Gogioso–Pinzani scenarios, adaptive MBQC, and causal networks.
In future work, we aim to employ our formalism to describe unconditional quantum advantage in shallow circuits, building on <cit.>.
We will also investigate other potential applications to quantum advantage.
We also aim to clarify how our approach can be related to the currently very active study of indefinite causal orders <cit.>.
§.§ Acknowledgements
This work was developed in part while AS was hosted on secondment at INL.
This work is supported by the Digital Horizon Europe project FoQaCiA, Foundations of quantum computational advantage, GA no. 101070558, funded by the European Union, NSERC (Canada), and UKRI (U.K.).
SA also acknowledges support from EPSRC – Engineering and Physical Sciences Research Council (U.K.) through
EPSRC fellowship EP/V040944/1, Resources in Computation.
RSB also acknowledges support from FCT – Fundação para a Ciência e a Tecnologia (Portugal) through CEECINST/00062/2018.
AS acknowledges support from EPSRC Standard Research Studentship (Doctoral Training Partnership), EP/T517811/1, and the Smith-Westlake Graduate Scholarship at St. Hugh's College.
http://arxiv.org/abs/2307.04953v2 (2023-07-11)
Measuring Cause-Effect with the Variability of the Largest Eigenvalue
Alejandro Rodriguez Dominguez, Irving Ramirez Carrillo, David Parraga Riquelme
q-fin.PM; stat.AP; MSC 58C40, 37M10, 60B12; ACM G.3
We present a method to test and monitor structural relationships between time variables. The distribution of the largest eigenvalue of lagged correlation matrices (the Tracy-Widom distribution) is used to test for structural time relationships between variables against the hypothesis of independence. This distribution describes the asymptotic behaviour of the largest eigenvalue of such matrices as a function of the lag. By monitoring the time series of the standard deviation of the largest eigenvalue of 2× 2 correlation matrices at different lags, we detect deviations from the Tracy-Widom distribution and hence test for structural relationships between the two time variables. These relationships can be related to causality. We use the standard deviation of the explanatory power of the first eigenvalue at different lags as a proxy for testing and monitoring structural causal relationships. The method is applied to analyse causal dependencies between daily monetary flows in a retail brokerage business, allowing liquidity risks to be controlled for.
§ OVERVIEW
The Marcenko-Pastur paper <cit.> on the spectrum of empirical correlation matrices turned out to be useful in many, very different contexts (neural networks, image processing, wireless communications, etc.). It became relevant in the last two decades, as a new statistical tool to analyse large dimensional data sets and can be used to try to identify common causes (or factors) that explain the dynamics of N quantities.
The realization of the i^th quantity (i = 1,…, N) at “time” t (t = 1,…, T) will be denoted r_i^t; the returns are demeaned and standardized. The normalized T × N matrix of returns will be denoted as X: X_ti = r_i^t/√(T). The Pearson estimator of the correlation matrix is given by:
E_ij = 1/T∑_t=1^T r_i^t r_j^t ≡ (X^TX)_ij
where E will denote the empirical correlation matrix on a given realization, in contrast to the true correlation matrix C of the underlying statistical process. The difference can be analysed by the Marcenko-Pastur result <cit.>. The empirical density of eigenvalues (the spectrum) is strongly distorted when compared to the ‘true’ density in the special asymptotic limit. When T→∞, N→∞, the spectrum has some degree of universality with respect to the distribution of the r_i^t’s.
The lagged correlation matrix between past and future returns 𝒞_ij(τ) can be defined as:
𝒞_ij(τ)=⟨ r_i^t,r_j^t+τ⟩
such that 𝒞_ij(τ=0)=𝒞_ij is the standard correlation coefficient. Whereas 𝒞_ij is clearly a symmetric matrix, 𝒞_ij(τ>0) is in general non-symmetric, and only obeys 𝒞_ij(τ) = 𝒞_ji(-τ).
§ THE TRACY-WIDOM REGION
The Tracy-Widom result is that for a large class of N× N matrices (e.g. symmetric random matrices with i.i.d elements with a finite fourth moment, or empirical correlation matrices of i.i.d random variables with a finite fourth moment), the re-scaled distribution of λ_max - λ_+ converges towards the Tracy-Widom distribution, usually noted F_1:
Prob(λ_max≤λ_+ + γ N^-2/3u)=F_1(u)
where γ is a constant that depends on the problem. Everything is known about the Tracy-Widom density f_1(u) = F^'_1(u), in particular its left and right far tails:
lnf_1(u)∝-u^3/2, (u→∞); lnf_1(u)∝-|u|^3, (u→-∞)
The left tail is much thinner: pushing the largest eigenvalue inside the allowed band implies compressing the whole Coulomb-Dyson gas of charges, which is difficult. Using this analogy, the large deviation regime of the Tracy-Widom problem (i.e. for λ_max - λ_+ = O(1)) can be obtained <cit.>. For square symmetric random matrices, the celebrated semicircle law of Wigner <cit.> describes the limiting density of eigenvalues. There is an analogue for covariance matrices <cit.>, found independently by Stein <cit.>. The Marcenko-Pastur result is stated here for Wishart matrices with identity covariance E = I, but is true more generally, including non-null cases. Suppose that both n and p tend to ∞, in some ratio n/p →γ≥ 1. Then the empirical distribution of the eigenvalues converges almost surely,
G_p(t)=1/p#{l_i:l_i≤ nt}→ G(t)
and the limiting distribution has a density g(t) = G^'(t):
g(t)=γ/(2π t)√((b-t)(t-a)), a≤ t ≤ b
where a = (1 - γ^-1/2)^2 and b = (1 + γ^-1/2)^2. Consider now the right-hand edge, and particularly the largest eigenvalue. Why the interest in extremes? In the estimation of a sparse mean vector, the maximum of n i.i.d. Gaussian noise variables plays a key role. Similarly, in distinguishing a "signal subspace" of higher variance from many noise variables, one expects the largest eigenvalue of a null (or white) sample covariance matrix to play a basic role.
The bulk limit (<ref>) points to a strong law for the largest eigenvalue. Indeed, <cit.> shows that:
n^-1 l_1 → (1 + γ^-1/2)^2, a.s
that is l_1 ∼ (√(n)+√(p))^2. Later Bai, Krishnaiah, Silverstein and Yin established that strong convergence occurred iff the parent distribution had zero mean and finite fourth moment. For more details, full citations and results on the smallest eigenvalue <cit.>. However, these results say nothing about the variability of the largest eigenvalue, let alone about its distribution. For a survey of existing results <cit.>. For example, [<cit.>, page 1284] gives an exact expression in terms of a zonal polynomial series for a confluent hypergeometric function of matrix argument:
P(l_1 ≤ nt) = d_p,n t^pn/2 _1F_1(1/2 n; 1/2(n+p+1); -1/2 nt ℐ_p)
where d_p,n is a constant depending only on p and n [cf. also <cit.>, page 421]. There are explicit evaluations for p = 2, 3, but in general the alternating series converges very slowly, even for small n and p, and so is difficult to use in practice. For fixed p and large n, the classic paper by <cit.> gives the limiting joint distribution of the roots, but the marginal distribution of l_1 is hard to extract even in the null case X = I. <cit.> gives a series approximation again for p = 2, 3. In general, there are upper bounds on the d.f. using p independent χ^2(n). Overall, there is little that helps numerically with approximations for large p.
We now turn to what can be derived from random matrix theory (RMT) methods. Suppose that X = (X_jk)_n× p has entries which are i.i.d. X_jk∼ N(0, 1). Denote the sample eigenvalues of the Wishart matrix X^' X by
l_1 >…> l_p. Define center and scaling constants:
μ_np=(√(n-1)+√(p))^2
σ_np=(√(n-1)+√(p))(1/√(n-1)+1/√(p))^1/3
The Tracy-Widom law of order 1 has distribution function defined by:
F_1(s)=exp{-1/2∫_s^∞ [q(x)+(x-s)q^2(x)] dx}, s∈ℝ
where q solves the (nonlinear) Painleve II differential equation:
q^''(x)=xq(x)+2q^3(x)
q(x) ∼ A_i(x) as x→ +∞
and A_i(x) denotes the Airy function. This distribution was found by Tracy
and Widom (1996) as the limiting law of the largest eigenvalue of an n by n
Gaussian symmetric matrix.
Let 𝒲 be a white Wishart matrix and l_1 be its largest eigenvalue. Then:
l_1-μ_np/σ_np𝒟⟶W_1∼ F_1
where the center and scaling constants are:
μ_np=(√(n-1)+√(p))^2 , σ_np=(√(n-1)+√(p))((n-1)^-1/2+p^-1/2)^1/3
and F_1 stands for the distribution function of the Tracy-Widom law of order 1.
The theorem is stated for situations in which n > p. However, it applies equally well if n < p are both large, simply by reversing the roles of n and p in (<ref>) and (<ref>). The limiting distribution function F_1 is a particular distribution from a family of distributions F_β. For β = 1, 2, 4 functions F_β appear as the limiting distributions
for the largest eigenvalues in the ensembles GOE, GUE and GSE, correspondingly. For the largest eigenvalue l_max(𝒜) of the random matrix 𝒜 (GOE (β = 1), GUE (β = 2) or GSE (β = 4)) its distribution function F_N,β(s) = P(l_max(𝒜) < s), β = 1, 2, 4 satisfies the limit law in (<ref>) with l_1=l_max(𝒜), and F_1=F_β given by (<ref>).
From (<ref>), the Airy special function A_i(s) is one of a pair of linearly independent solutions to the differential equation: w^''-zw=0 such that:
lim_s→∞q(s)/A_i(s)=1
The Painleve II is a second-order ordinary differential equation of the form d^2w/dz^2=F(z,w,dw/dz). In Figure <ref> <cit.>, we can see some simulations for the square cases n = p = 5, 10 and 100, using R = 10,000 replications (s is the lag value).
The main conclusion is that the Tracy-Widom distribution F_1 provides a usable numerical approximation to the null distribution of the largest principal component from Gaussian data even for quite moderate values of n and p. In particular, we have the following simple approximate rules of thumb:
* About 83% of the distribution is less than μ_np = (√(n-1) +√(p))^2
* About 95% and 99% lie below μ_np + σ_np and μ_np + 2σ_np respectively
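As a quick numerical check of these rules of thumb, the following sketch (ours) computes the centering and scaling constants and the approximate thresholds for a simulated white Wishart matrix.

import numpy as np

def tw_thresholds(n, p):
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    return mu, mu + sigma, mu + 2 * sigma      # ~83%, ~95%, ~99% levels

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))             # n = 100, p = 10
l1 = np.linalg.eigvalsh(X.T @ X).max()         # largest eigenvalue of X'X
print(l1, tw_thresholds(100, 10))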
§ METHOD
We can monitor the empirical value of σ_np in time from data. However, our method focuses on σ_λ, the standard deviation of the explanatory power of the first eigenvalue in 2× 2 matrices, λ_1/(λ_1+λ_2), instead of just λ_1-λ_2 as with σ_np. This allows us to deal with a measure with empirical sense for the method. Extreme values of σ_λ indicate deviations from the Tracy-Widom distribution F_1, which asymptotically characterises independence, and therefore point towards structural time (causal) relationships. In Algorithm <ref> we present the method for any dataset.
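Since Algorithm <ref> itself is not reproduced here, the sketch below (ours) shows one plausible reading of the indicator for a single pair of series and a single lag: correlate x_t with y_{t+lag} on a rolling window, take the explanatory power λ_1/(λ_1+λ_2) of the resulting 2×2 correlation matrix, and track its rolling standard deviation. The window lengths and the pairing convention are assumptions.

import pandas as pd

def sigma_lambda(x, y, lag=2, corr_window=60, std_window=60):
    df = pd.DataFrame({"x": x, "y": y})
    df["y_lead"] = df["y"].shift(-lag)            # pair x_t with y_{t+lag}
    rho = df["x"].rolling(corr_window).corr(df["y_lead"])
    lam1_share = (1 + rho.abs()) / 2              # eigenvalues of a 2x2
                                                  # correlation matrix: 1 +/- |rho|
    return lam1_share.rolling(std_window).std()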
In Figure <ref>, we show the average of the σ_λ for 10 i.i.d. time series generated from a Normal distribution. For this, Algorithm <ref> is applied to sets of 2 different time series. In Figure <ref>, we compare the average values of σ_λ for the i.i.d. series from the previous case with the average values for Algorithm <ref> applied to time-series variables from a financial dataset. The hypothesis can be verified directly from the data: for independent time series the values of the indicator σ_λ are much smaller than for time series with structural relationships.
§ EXPERIMENTS
For the experiments, we analyse daily brokerage activities with time series data of 15000 clients approximately. Our goal is to analyse the power of the structural temporal (causal) relationship between cash balance and other variables in time. Experiments are performed in cause-effect pairs with effect being cash balance, but for illustration we plot the results in thematic buckets. The buckets are:
* The account status includes the daily amount in account currency of open positions in different products including Shares, Bonds, Mutual Funds, CFDs, Derivatives, FX Spot and ETO.
* Bookkeeping cash includes all the daily cash movements in the account currency, both internal and external cash transactions, including the financial product that is the subject of the transaction.
* Trades Executed focuses on the daily buying and selling transaction activity in the account currency for all clients across all broker products.
* From the experiments we see that the account status bucket shows more mixed results, with Mutual Funds holding the majority from November 2022 until March 2023. After that, the majority of cash movements is explained by Stocks (Shares) (Figure <ref>).
* In the Bookkeeping cash bucket we see a clear pattern in which the majority of cash transactions comes from shares activity. Until March 2023 this majority was shared with Mutual Funds; from March onwards this stopped, with the derivatives and FX products gaining some importance. The relationship is studied with respect to daily variations of the gross cash balance (net outflows plus inflows) and not the total amount of flows (Figure <ref>).
* Finally, the trades executed bucket shows very similar behaviour to the Bookkeeping cash bucket, serving as a validation point for our results (Figure <ref>). The reason why Mutual Funds do not appear in this picture is that the execution process for Mutual Funds is different, with many operations not having to be converted to cash. This could mean that, although Mutual Funds have been a product that, at least historically until March 2023, has been responsible for the movement of the cash balance of our clients (as seen in Bookkeeping), this has not been due to daily executions in funds from cash but rather in the form of internal and external fund transfers. With regards to trades executed, we can see that from June 2022 until December 2022 the relationships were mixed, with shares not holding the majority. From December 2022 until now shares have held the majority, with some derivatives gaining importance since March.
In Figure <ref>, we show the average σ_λ for lags 2 and 5 of the top candidates with the greatest values in the sample dataset (causal candidates). We perform the Granger causality test <cit.> on the sample dataset to compare results with our method. In Figures <ref> and <ref>, we show the values of the logarithm of the inverse of the p-value for the Granger causality test for lags 2 and 5 respectively. We can see discrepancies between the Granger causality test and our method; however, our method is closer to the structural causal relationship suggested by empirical evidence from the dataset. In the case of the Granger test, the CFD product appears to be the one most causally related to the cash balance, in contrast to the Shares (stocks) product for our method. The issue with the Granger causality test is that it is window based, which means that it depends on the sample time interval chosen, without a dynamic sense of the causal relationships. Moreover, a large structural relationship at a single data point in the sample can bias the results. In contrast, our method is rolling-window based, which means that it is adaptive and able to capture changes in causal or structural relationships. It is less biased towards point-like causal relationships, focusing more on persistent dynamics. The CFD business is highly correlated with the equity business, which is why the Granger test selects CFDs while our method selects Shares (stocks). However, our method is able to infer causality better here: the Shares business is by far the biggest in size and flows, while the CFD business is much smaller. To measure true causal relationships between these flows, the time series and their correlations are not sufficient on their own, as the flow size matters, and our method captures causality better because it avoids window biases coming from extreme point values, which relate more to correlation than to causal dynamics.
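For reference, the window-based Granger test we compare against can be run as in the following sketch (ours); the column names are placeholders rather than the actual dataset fields.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_log_inverse_p(cause, effect, lag=5):
    data = pd.DataFrame({"effect": effect, "cause": cause}).dropna()
    res = grangercausalitytests(data[["effect", "cause"]], maxlag=lag, verbose=False)
    p_value = res[lag][0]["ssr_ftest"][1]         # F-test p-value at this lag
    return np.log(1.0 / p_value)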
We conclude that with greater lags, 5 or more, the values only get smoother while the patterns remain the same (Figure <ref> shows results for lag 10). This is consistent with the theoretical setup presented in this document: for greater lags the standard deviation of the explanatory power of the largest eigenvalue (σ_λ) is higher, which implies a higher probability that the two variables deviate from the independence described by the Tracy-Widom distribution towards a structured (causal) relationship.
§ CONCLUSION
A systematic approach to measure and monitor structural relationships in time, which are related to causal relationships, is presented. The method is based on monitoring the time series of the standard deviation of the explanatory power of the first eigenvalue for multiple lags in lagged correlation matrices, which is related to the Tracy-Widom distribution from RMT. These matrices consist of 2× 2 correlation matrices between a hypothetical causal variable and the corresponding effect variable. The different time series for different causal variables given the same effect variable are compared. The method is simple and fast, and avoids biases produced by other statistical tests such as the Granger causality test. The method is applied to analyse the structural or causal dependencies between daily monetary flows in a retail brokerage business. This allows practitioners to understand the causal dynamics between these flows and to control for liquidity risk in banks or other financial institutions. The method can be applied to monitor causal or structural dependencies in time in any particular dataset. Extreme values of the indicator can serve for risk management or alpha-signal purposes.
http://arxiv.org/abs/2307.04600v2 (2023-07-10)
Mass-stream trajectories with non-synchronously rotating donors
David Hendriks, Robert Izzard
astro-ph.SR; astro-ph.HE
Mass-transfer interactions in binary stars can lead to accretion
disk formation, mass loss from the system and spin-up of the
accretor. To determine the trajectory of the mass-transfer stream,
and whether it directly impacts the accretor, or forms an accretion
disk, requires numerical simulations. The mass-transfer stream is
approximately ballistic, and analytic approximations based on such
trajectories are used in many binary population synthesis codes as
well as in detailed stellar evolution codes.
We use binary population synthesis to explore the conditions under
which mass transfer takes place. We then solve the reduced
three-body equations to compute the trajectory of a particle in the
stream for systems with varying system mass ratio, donor
synchronicity and initial stream velocity.
Our results show that on average both more mass and more time is
spent during mass transfer from a sub-synchronous donor than from a
synchronous donor.
Moreover, we find that at low initial stream velocity the
asynchronous rotation of the donor leads to self-accretion over a
large range of mass ratios, especially for super-synchronous
donors. The stream (self-)intersects in a narrow region of parameter
space where it transitions between accreting onto the donor or the
accretor.
Increasing the initial stream velocity leads to larger areas of the
parameter space where the stream accretes onto the accretor, but
also more (self-)intersection. The radii of closest approach
generally increase, but the range of specific angular momenta that
these trajectories carry at the radius of closest approach gets
broader.
Our results are made publicly available.
binaries: close – stars: mass-loss – accretion
§ INTRODUCTION
Binary stellar systems are ubiquitous and the proximity of a star to a
companion introduces a variety of interactions. These interactions
lead to a range of phenomena like the stripping of the outer envelope
of a star and the transfer of mass and angular momentum
<cit.>, tidal interactions
<cit.>, the formation and evolution
of accretion disks <cit.>,
accretion induced supernovae <cit.>,
the (high velocity) ejection of companions
<cit.>, quasi chemically-homogeneous
evolution <cit.>, Be stars
<cit.>,
and circumbinary disk formation <cit.>.
A comprehensive review of these binary interactions is given in
<cit.>, but the most relevant
interactions to the current study are the transfer of mass and tidal
interactions between the stars. Both these interchange orbital and
rotational angular momentum of the system and the stars. Tidal
interactions circularise the orbit, i.e. reduce the eccentricity, and
synchronise the stars, i.e. force the stellar rotation rate to equal
the orbital rotation rate. Mass transfer, among other effects,
de-synchronises the stars by angular momentum transfer from the donor
to the accretor.
In a semi-detached system, where the accretor significantly underfills
its Roche-Lobe, the mass transfer process can be split into three main
parts: The ejection from the donor, the flight of the particles in the
potential between the stars, and the accretion onto the accretor
<cit.>. During the flight stage,
the gravitational interaction between the binary system and the
particle leads to a torque and subsequent exchange of angular momentum
between the binary system and the particle. It is during this stage
that the final outcome of the trajectory is determined, i.e. accretion
onto the companion star, accretion back onto the donor star or loss
from the system entirely. The flight stage is approximately ballistic,
and it is the stage that we focus on in this study.
The potential that is used to calculate when and how mass is
transferred from one star to the other is often calculated under the
assumption that the orbit of the binary system is circular and that
the donor rotates synchronously with the orbit
<cit.>. Together with the
approximation that the stars are point particles this setup is often
called the Roche potential (fig:schematic_overview_frame).
The points in this potential where accelerations vanish are called
Lagrange points. The first Lagrange point lies on the critical
equipotential surface and is located between the two stars. While
generalisations of the equipotential surface and the inclusion of
additional physical effects have been studied, binary
stellar-evolution codes often still use simplified analytical formulae
for the mass stream properties based on circular and synchronous
systems.
Some examples of extensions to this simple Roche model that relax some
assumptions, or add additional physics, are those that allow the
asynchronous rotation of the donor with respect to the orbital
rotation <cit.>,
eccentric orbits <cit.>, spin-orbit
misalignment <cit.>, effects of external radiation
<cit.> or combinations of these
<cit.>. These extensions
change the shape of the critical surface and the location of the
Lagrange points, most notably the first.
Asynchronous rotation of the donor induces time-dependent tides,
exerted by the companion, which then affect the potential. The
above-mentioned extensions that take the asynchronous rotation of the
donor into account <cit.> rest
on several assumptions. First, the shape of the donor is assumed to
conform instantaneously to the shape dictated by the
potential. Secondly, mass in the donor is assumed to
move primarily around the axis of rotation (i.e. the motion is primarily zonal,
instead of meridional). These two assumptions are called the first
approximation <cit.>. Asynchronous rotation
of the donor can occur due to, e.g., rapid expansion of the donor star
leading to sub-synchronous rotation when it fills its Roche Lobe.
Given L1 it is possible to calculate the initial conditions and
subsequent trajectory of the mass flow away from the donor star.
<cit.> analyse the behaviour of
donor material at L1 and the trajectory of the stream of matter
flowing from L1 to the accretor. Their perturbative analysis provides
mass-transfer stream properties over a range of orbital configurations
of the binary based on ballistic trajectories of particles in the
Roche potential. Critical to the study of
<cit.> are the assumptions that the
donor rotates synchronously with the orbit, that the stream at L1 has
a low (cold) thermal velocity compared to the orbital velocity,
that the gas remains isothermal throughout the flow, and that the mass
contained in the stream is negligible compared to the total mass of
the system. <cit.> provide
analytical fits to this data and study the response of the accretor
when the mass-transfer stream either directly impacts the accretor or
misses the accretor and forms an accretion
disk. <cit.> calculates properties of the mass
transfer in non-synchronous rotating donors, including the effects of
kinematic acceleration due to the bulging motion of the donor star as
a result of its non-synchronicity. <cit.>
and <cit.> study the effect of initial
thermal-velocity of the stream particles on the location of hotspots
in cataclysmic variable
systems. <cit.> and
<cit.>
calculate the ballistic trajectories to include in their osculating
orbit calculations and consider asynchronous rotating donors. They do
not make the results of these calculations public, however.
The aim of our paper is to publicly release interpolation tables that
contain the results of our ballistic-stream trajectories calculations
over a wide range of mass ratios and degrees of asynchronicity of the
donor, as well as mass-stream surface areas and initial thermal
velocities at L1. These can be used in combination with osculating
orbit calculations <cit.>, and as tables in stellar
evolution codes like
<cit.> and population synthesis
codes like <cit.> or
<cit.>.
Our paper is structured as follows. In sec:theory we explain
the theoretical basis of our project, and in sec:Method we
lay out the methods used to calculate our ballistic trajectories and
our approach to dataset interpolation. In sec:results we show
the results of our ballistic trajectory calculations for several
initial properties of the mass transfer stream. We discuss and
conclude in Sections <ref>
and <ref>. sec:fiduc-source-distr provides a
description of our interpolation datasets, and
sec:lagrange-point-plot contains a visual overview of the
first three Lagrange point locations in two different frames of
reference.
§ THEORY
In this section we lay out the theoretical basis of the calculations
of the trajectory of a particle flowing through L1. We first determine
the potential that the particle experiences when attached to the donor
star and when moving freely through the system, and we then determine
the cross-sectional surface area of the stream and the initial
velocity of the particles L1.
§.§ Generalised Roche potential and Lagrange points
To calculate the particle trajectory through the potential of the
binary system, we consider the reduced three-body problem in a
Cartesian coordinate system Oxyz in the co-rotating frame of the
binary, which rotates with angular frequency ω, with the origin
O of the frame of reference located on the centre of mass of the
system <cit.>. The x-coordinate
is defined parallel to the line connecting the centres of the stars,
the y-coordinate defined perpendicular to the x-coordinate and in
the plane of the orbit and the z-coordinate perpendicular to the
orbital plane. Throughout our calculations we consider particle motion
only in the plane of the orbit, i.e. z = 0.
The donor and accretor are regarded as point masses,
M_don and M_acc, with their positions fixed
at x_don = [-μ_acc, 0] and
x_acc = [1-μ_acc, 0]
respectively, where
μ_acc = M_acc/(M_don +
M_acc), and =
M_acc/M_don. Our units of length, time,
velocity, and potential are the semi-major axis a, the inverse
orbital frequency ω^-1, the orbital velocity aω, and
a^2ω^2 respectively, unless otherwise indicated.
A particle freely moving in a binary star system in a co-rotating
frame experiences the gravitational potential of both stars, and a
centrifugal potential due to the co-rotation, and a Coriolis force due
to movement relative to the co-rotating frame. When we assume that
both stars are centrally condensed, i.e. the Roche model, the
potential is,
Φ(x,y) =
-μ_acc/[(x-1+μ_acc)^2 +
y^2]^1/2 - 1-μ_acc/[(x+μ_acc)^2
+ y^2]^1/2
- 1/2(x^2+y^2).
This is valid for a freely moving particle, i.e. not inside either
star, because there is no other force acting on the particle. This
potential is also valid to calculate the critical surface beyond which
mass starts flowing away from the donor, in the case the donor rotates
synchronously with the orbit and its rotation is along an axis
parallel to the orbital rotation. We show an example of the Roche
potential in fig:schematic_overview_frame.
To calculate the location at which mass starts flowing from the donor
we need to find the critical surface of the donor, i.e. the last
surface at which the net inward force of the potential is balanced by
the pressure of the star. We assume the rotation of the donor is in
the same direction as the orbit of the binary system, the dynamic
timescale is shorter than the tidal timescale, and that the orbit is
circular, in the rest of our study. The potential felt by a
non-synchronously rotating donor is
Φ_don(x, y, f) = - μ_acc/[(x-1+μ_acc)^2 + y^2]^1/2
- (1-μ_acc)/[(x+μ_acc)^2
+ y^2]^1/2 - 1/2 f^2(x^2+y^2) -
(f^2-1) μ_acc x.
Here the potential acting on the donor depends on the synchronicity
factor,
f = Ω_don / ω,
where Ω_don is the rotation rate of the donor.
We calculate the location of the first three Lagrange points of the
donor, determining the critical equipotential surface, by taking the
derivative of the potential in eq:roche_potential_COM_don with
respect to x and setting y = 0,
dΦ_don(y=0)/dx = (1 - μ_acc)/(μ_acc + x)^2 + μ_acc/(μ_acc + x - 1)^2
- f^2 x - μ_acc(f^2 - 1) =
0.
We solve this equation for x which gives the first three Lagrange
points. In sec:lagrange-point-plot we show these points for a
selection of f.
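A minimal numerical sketch (ours, not the code released with this paper) of this step is given below: it locates L1 by root-finding on the x-axis derivative, with the signs of the point-mass terms written out explicitly so that a single expression is valid on either side of each star.

import numpy as np
from scipy.optimize import brentq

def dphi_don_dx(x, q, f):
    mu_acc = q / (1.0 + q)                     # q = M_acc / M_don
    dx_don, dx_acc = x + mu_acc, x - 1.0 + mu_acc
    return ((1.0 - mu_acc) * dx_don / abs(dx_don) ** 3
            + mu_acc * dx_acc / abs(dx_acc) ** 3
            - f ** 2 * x - mu_acc * (f ** 2 - 1.0))

def x_L1(q, f, eps=1e-6):
    mu_acc = q / (1.0 + q)
    # L1 lies between the two stars, where the derivative changes sign
    return brentq(dphi_don_dx, -mu_acc + eps, 1.0 - mu_acc - eps, args=(q, f))

For an equal-mass, synchronous system (q = 1, f = 1) this returns x ≈ 0, i.e. L1 at the midpoint between the stars, as expected.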
In the potential acting on particles in the donor
(eq:roche_potential_COM_don) we assume that the dynamical
timescale of the donor is much shorter than the timescale of the tides
induced by the secondary star and the non-synchronous rotation of the
donor <cit.>, and thus the potential
is approximately static. We express the validity of this approximation
as
η_static = P_orb/τ_dyn, donα(e, f, ν)≫ 1,
where P_orb is the orbital period of the system,
τ_dyn, don = √(R^3/2GM_don) is the
dynamical timescale of the donor where R is its radius and
M_don is its mass, G is the gravitational constant, and
α(f, e=0, ν=0) = |1-f|
is generally a function of the synchronicity f, eccentricity e
and mean anomaly ν, but here we focus on circular systems
(i.e. e=0, ν is irrelevant)
<cit.>.
α = τ_tideω/ 2π captures the timescale,
τ_tide, on which tides induced by asynchronous
rotation operate. If η_static≫ 1, the response of
the donor to a change in the potential is much faster than the
timescale of the tides induced by the asynchronous rotation of the
donor. The potential can then be regarded as static.
§.§ Mass-stream particle properties
In this section we describe the relevant properties of the particles
in the mass stream at and around the first Lagrange point, L1.
§.§.§ Thermal velocity of stream particles at L1
The initial velocity with which material flows through L1 is set by
the thermal velocity of the material at L1
<cit.>. The thermal velocity, v_th,
depends on the properties of the photosphere of the donor,
v_th = ṽ_thermal/aω = √(3kT_eff, don/m) 1/aω = √(3kT_eff, don/μ_phot, don m_a) 1/aω,
where ṽ_thermal is the dimensionful thermal
velocity, k is the Boltzmann constant, T_eff, don is
the effective temperature of the donor, m and
μ_phot, don are the average mass and the mean
molecular weight of the particles in the photosphere respectively,
m_a is the atomic mass unit, a is the semi-major axis
of the system and ω is the orbital frequency of the
system. Here we have assumed the equation of state behaves like an
ideal gas.
§.§.§ Stream surface area at L1
The mass-transfer stream at L1 has a non-zero surface area such that
particles are distributed around L1. We calculate the surface area of
the stream, A_stream
<cit.>, assuming a circular cross-section,
as,
A_stream = Ã_stream/a^2 =
2π k T_eff, don/μ_phot, don m_a/μ_accω^2
×{g(f)[g(f)-(1+q)f^2]}^-1/2 1/a^2,
where Ã_stream is the dimensionful stream area,
f is the synchronicity factor
(eq:synchronicity_factor). The geometric factor, g(f),
is,
g(f) = 1/d_L1, don^3 + q/(1-d_L1, don)^3,
where d_L1, don is the distance from the centre of the
donor to L1 in terms of the separation of the binary system.
We reformulate the mass stream area in terms of the thermal-velocity
of the particle at L1, as,
A_stream =
2π/3 v_th^2 (1+q)
×{g(f)[g(f)-(1+q)f^2]}^-1/2.
fig:combined_area_plot (a) shows the stream diameter,
d_stream, as a function of the thermal-velocity, v_th,
mass ratio and synchronicity factor. The solid line indicates
q = 1 and f = 1, and the grey transparent area indicates
the extent of diameters spanned by the ranges of q and
f. At fixed thermal-velocity, the extent of stream diameters
spans about a factor of 3-4, and from v_th ⪆ 0.06
the diameter of the stream reaches a significant fraction
(⪆ 0.1) of the separation of the
system. fig:combined_area_plot (b) shows the ratio of the
stream diameter and the thermal-velocity as a function of mass ratio
and f. Overall, in most of the parameter space this
ratio does not exceed ≈ 0.7, except for f > 1.1 and
q < 10^-1. Only in the extreme case of f ⪆ 1.5
and q ≈ 10^-2 does the ratio exceed unity, indicating
that for most of the parameter space, the stream diameter is close to
that of the case of a synchronous and equal mass-ratio system.
The density distribution in the stream at L1 is approximately Gaussian
<cit.>,
ξ(l̃) = η e^-l̃^ 2/2σ^2,
where the reduced position offset satisfies |l̃| < 1, the position offset is
l = l̃ √(A/π), and σ = 0.4 such that at
l̃ = ± 1 the density equals that of the photosphere of the donor
<cit.>, and,
η = 1/∫_-1^1 e^-l̃^2/2σ^2 dl̃.
In a given system with q, f and v_th, we calculate
trajectories with N_A, stream equally-spaced initial
positions relative to L1, sampled in the range [-d_stream/2,
d_stream/2], and weigh each according to
eq:stream_density_distribution. We use these trajectories to
calculate averaged quantities.
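A small sketch (ours) of this sampling and weighting step, assuming the discrete weights are simply normalised to sum to one:

import numpy as np

def stream_offsets_and_weights(d_stream, n_samples=12, sigma=0.4):
    l_tilde = np.linspace(-1.0, 1.0, n_samples)    # reduced offset across the stream
    offsets = 0.5 * d_stream * l_tilde             # physical offset from L1
    w = np.exp(-l_tilde ** 2 / (2.0 * sigma ** 2)) # truncated Gaussian profile
    return offsets, w / w.sum()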
§ METHOD
In this section we explain how we calculate the trajectory of a
particle and how we classify its trajectory in the potential of
sec:roche-potent-reduc, as well how we calculate the relevant
properties of mass transfer in a binary population.
§.§ Particle trajectories in the Roche potential
In the following sections we explain our method of calculating the
trajectory of particles in the Roche potential.
§.§.§ Reduced three-body equations and ballistic integration
The trajectory of a particle is found by integrating the equations of
motion of the particle in the rotating frame,
ẍ = -∂Φ/∂ x + 2 ẏ
and
ÿ = -∂Φ/∂ y - 2 ẋ,
where x and y are the position components of the particle with
respect to the centre of mass of the binary system, ẋ and
ẏ the velocity components of the particle, ẍ and
ÿ the acceleration components of the particle, and Φ is
the potential experienced by the particle
(eq:roche_potential_COM_particle). The first terms in
equations <ref>
and <ref> are the gradient of the potential
and the second terms are the Coriolis force in each direction.
We calculate the specific energy and angular momentum of the particle
in the inertial frame, with respect to the centre of mass, using
quantities defined in the co-rotating frame,
ε = Φ + 1/2(ẋ^2 + ẏ^2) + x^2 + y^2 + xẏ-ẋy
and
h = x^2 + y^2 + xẏ-ẋy,
respectively, in units a^2ω^2 and a^2ω.
In the circular reduced three-body problem, the only first integral of
motion is the Jacobi constant <cit.>,
C = Φ + 1/2(ẋ^2 + ẏ^2) = ε-h,
which is the difference between the energy and the angular momentum of
the particle with respect to the observer frame. We use the Jacobi
constant to determine the accuracy of our calculations.
§.§.§ Initial position and velocity
We integrate trajectories from a given initial position,
x_i, relative to L1 and initial velocity,
v_i, relative to the co-rotating frame.
The initial position is,
x_i = x_minor offset + x_stream area offset.
Here x_minor offset = [δ x, 0] is a
minor offset to prevent the particle starting exactly on L1, where
δ x = |x_L_1-x_acc|/100,
x_acc is the position of the accretor, and
x_L_1 is the x-coordinate of L1, and
x_stream area offset = [x_stream area offset, 0] is an offset to sample the surface area of
the stream at L1 (sec:stream-surface-area).
The initial velocity is,
v_i = v_non-synchronous offset + v_thermal,
where v_thermal = [v_th, 0] is the
thermal velocity of the particle in the stream
(sec:therm-veloc-stre-1).
v_non-synchronous offset = [0, (f-1) d_don, L1] is the velocity relative to the co-rotating
frame due to the non-synchronous rotation of the donor, and
d_don, L1 is the normalised distance from the centre of
the donor to L1 <cit.>. The
synchronicity changes the tangential velocity offset in two ways. It
determines the angular velocity offset, f-1, and it affects the
distance, d_don, L1 (eq:critical_surface and
fig:lagrange_point_plot). We show the y-component of
v_non-synchronous offset as a function of
q and f in
fig:non_synchronous_rotation_schematic. Generally, the
higher the mass ratio, the lower the velocity offset due to asynchronous
rotation. This is due to the increasingly smaller size of the donor
relative to the system. At low mass ratio this effect is reversed, and
there is a clear asymmetry: at low synchronicity (f ∼ 0.2)
the velocity offset is larger in absolute terms than at
high synchronicity (f ∼ 1.8). This is because the L1
point moves outward for lower synchronicity, which increases the
velocity offset.
We show the initial position and velocity components for an equal mass
binary (q = 1) with a sub-synchronously rotating donor
(f = 0.6) and a hot stream (v_th = 0.1) in
fig:initial_position_velocity_schematic, where the thick
black and red arrows indicate the position and momentum vectors
respectively, and the thin dashed lines indicate their component
vectors.
§.§ Integration method
We calculate ballistic trajectories by solving the equations of motion
(equations <ref>
and <ref>) with an explicit Runge-Kutta method
of order 5(4), using the dopri5 ODE solver
<cit.> from the Python
SciPy package
<cit.>. We use an adaptive
method that rejects the step and halves the time step if the relative
error on the Jacobi constant exceeds 10^-6. We either terminate
the integration based on a classification of the trajectory
(sec:classifying-and-averaging) or when the integrator fails
to conserve the Jacobi constant and the time step is shorter than
10^-20ω^-1.
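A condensed sketch (ours, not the production code) of this integration scheme is given below: the right-hand side implements the equations of motion with the Coriolis terms, and the Jacobi constant is monitored as the accuracy check; the step-halving and rejection logic is simplified here to a plain stop.

import numpy as np
from scipy.integrate import ode

def rhs(t, state, mu_acc):
    x, y, vx, vy = state
    r_acc2 = (x - 1.0 + mu_acc) ** 2 + y ** 2
    r_don2 = (x + mu_acc) ** 2 + y ** 2
    ax = (-mu_acc * (x - 1.0 + mu_acc) / r_acc2 ** 1.5
          - (1.0 - mu_acc) * (x + mu_acc) / r_don2 ** 1.5 + x + 2.0 * vy)
    ay = (-mu_acc * y / r_acc2 ** 1.5
          - (1.0 - mu_acc) * y / r_don2 ** 1.5 + y - 2.0 * vx)
    return [vx, vy, ax, ay]

def jacobi(state, mu_acc):
    x, y, vx, vy = state
    phi = (-mu_acc / np.sqrt((x - 1.0 + mu_acc) ** 2 + y ** 2)
           - (1.0 - mu_acc) / np.sqrt((x + mu_acc) ** 2 + y ** 2)
           - 0.5 * (x ** 2 + y ** 2))
    return phi + 0.5 * (vx ** 2 + vy ** 2)

def integrate_trajectory(state0, mu_acc, t_end=20.0, dt=1e-3, tol=1e-6):
    solver = ode(rhs).set_integrator("dopri5", rtol=1e-10, atol=1e-12)
    solver.set_initial_value(state0, 0.0).set_f_params(mu_acc)
    c0, path = jacobi(state0, mu_acc), [np.array(state0)]
    while solver.successful() and solver.t < t_end:
        solver.integrate(solver.t + dt)
        path.append(solver.y.copy())
        if abs(jacobi(solver.y, mu_acc) - c0) > tol * abs(c0):
            break                              # Jacobi constant no longer conserved
    return np.array(path)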
§.§ Classifying and averaging trajectories
For each set of parameters [q, f, v_th] we
integrate N_A, stream trajectories, each with a position
offset x_stream area offset, i and a
weighting w_A_stream
(sec:mass-stream-particle,
eq:stream_density_distribution).
The trajectories are classified by their behaviour and
outcome. Particles accrete onto either the accretor or the donor, or
are lost from the system. Classification happens during integration,
and changes how the calculation is terminated.
* Accretion onto accretor: Classified by motion towards
the accretor, away from the donor, away from L1, into a deeper
potential than L1, and within the Roche lobe of the
accretor. Terminated at the moment the particle starts moving away
from the accretor.
* Accretion onto donor: Classified by motion towards the
donor, away from the accretor, away from L1, into a deeper potential
than L1 and within the Roche lobe of the donor. Terminated at the
moment of classification.
* Lost from system: Classified by distance from centre of
mass >3. Terminated on classification.
We show an example of different classifications in
fig:trajectory_classification_overview.
Of the trajectories that are not terminated for numerical reasons, we
calculate weighted averages of their properties.
We determine the fraction, β_acc, of our trajectories
that accrete onto the accretor,
β_acc = ∑_i ∈𝒞δ_i w_A stream, i/∑_i ∈𝒞 w_A stream, i,
where w_A stream, i is the weight of the sampled
position offset along the mass stream cross-section, 𝒞 is
the set of classified trajectories, and
δ_i = 1 if trajectory i is classified as accretion onto the accretor, and δ_i = 0 otherwise.
We calculate the fraction that accretes back onto the donor,
β_don, in the same way as β_acc
(equations <ref>
and <ref>).
We calculate the fraction of trajectories that are lost from the system
or otherwise not classified as
β_lost = 1 - β_acc -
β_don.
We denote the total weight of all trajectories that are successfully
categorised with,
w_successful = ∑_i ∈𝒞 w_A stream, i,
and the total weight of all those that fail or are rejected with,
w_fail = 1-w_successful,
which can occur when our integrator is not able to conserve the Jacobi
constant within the minimum time step threshold
(sec:integration-method).
With these weights and fractions we can quickly identify how
successful our calculations are for a given set of parameters
[q, f, v_th], and how the trajectories are
classified.
§.§ Intersecting orbits
At each coordinate in our parameter space we evolve a set of
trajectories sampled along the stream diameter
(sec:stream-surface-area). We treat each of these
trajectories independently, even though these trajectories can cross
either themselves or each-other.
To find intersecting trajectories we use the
SweepIntersectorLib[<https://github.com/prochitecture/sweep_intersector>],
which is a Python implementation of the Sweep line algorithm
of <cit.>.
Orbits that self-intersect are always flagged as such, but only
trajectories with an angle of intersection with
another trajectory larger than
a threshold value get flagged as intersecting
with others. While the exact threshold angle is not strongly
motivated, we argue that low-angle intersecting trajectories would
merge and be well approximated by their weighted average, while high angles
of intersection could significantly change the outcome of both
trajectories.
fig:trajectory_classification_overview shows the different
types of intersection for a system with q = 10^-1.2,
f = 0.22, and v_th = 10^-0.5, and
N_A, stream = 12 equally spaced sampled trajectories.
At each coordinate we record the weighted fraction of trajectories
that self-intersect, as well as those that intersect
with other trajectories, if their intersection
angle exceeds the threshold.
§.§ Radii, specific angular momenta and torques
When the mass stream misses the accretor it loops back around and forms
an accretion disk. This disk forms at the circularisation radius,
defined as the radius where the specific angular momentum,
h_stream, min, acc, with respect to the accretor at the
moment of closest approach, equals that of a circular
Keplerian orbit around the accretor with radius
r_circ = h_stream, min, acc^2/μ_acc.
The specific angular momentum of a particle with respect to the
accretor is,
h_acc = (x-x_acc)^2 + (y-y_acc)^2 + (x-x_acc)ẏ-ẋ(y-y_acc),
= (x-1+μ_acc)^2 + y^2 +
(x-1+μ_acc)ẏ-ẋy.
We calculate h_stream, min, acc by evaluating
eq:angmom_wrt_acc at the radius of closest approach.
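The corresponding bookkeeping is a pair of one-line functions; the sketch below (ours) evaluates eq:angmom_wrt_acc for a given state and converts the value at closest approach into the circularisation radius, in the dimensionless units used throughout (a = ω = 1).

def h_about_accretor(x, y, vx, vy, mu_acc):
    dx, dy = x - 1.0 + mu_acc, y               # position relative to the accretor
    return dx ** 2 + dy ** 2 + dx * vy - vx * dy

def r_circularisation(h_stream_min_acc, mu_acc):
    return h_stream_min_acc ** 2 / mu_acc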
While in our ballistic trajectory calculations we implicitly assume
that the stream will miss the accretor and will form an accretion disk
around the star, many interacting binaries actually transfer mass
through direct-impact accretion. When the stream collides with the
accretor, i.e. direct-impact accretion
r_stream < r_accretor, the specific angular
momentum of the stream (eq:angmom_wrt_acc) at that point is
different than at the point of closest approach during disk formation.
We calculate the specific angular momentum of the stream with respect
to the accretor as a function of the distance to the centre of the
accretor. This allows a more accurate determination of the specific
angular momentum accretion rate when the stream directly impacts the
accretor.
We record the (averaged) specific angular momentum of the stream at
fixed distances from the accretor, with a minimum distance of
d_stream min, a maximum distance of
d_stream max at N_radii equally spaced
radii, located at,
d_stream i = d_stream min + i ×(d_stream max - d_stream min)/N_radii.
Here d_stream i indicates the i-th radius from the
centre of the accretor in units of the Roche-lobe radius of the
accretor, at which we record the i-th specific angular momentum along
the stream h_stream i. We show a schematic example of
the locations at which we record the specific angular momentum of the
stream in fig:stream_interpolation_schematic.
§.§.§ Self-accretion torque
Accretion of (part of) the mass transfer stream back onto the donor
exerts a torque on the donor star. We calculate the specific angular
momentum of a particle at the moment of impact on the donor, with
respect to the donor,
h_don = (x-x_don)^2 + (y-y_don)^2 + (x-x_don)ẏ-ẋ(y-y_don),
= (x+μ_acc)^2 + y^2 +
(x+μ_acc)ẏ-ẋy.
We calculate the initial h_i, don and final
h_f, don specific angular momentum of a particle
accreting back onto the donor by evaluating eq:angmom_wrt_don
with the initial and final positions and velocities respectively, and
we use these specific angular momenta to calculate the total torque on
the donor due to self-accretion.
§.§ Properties of mass transfer in binary populations
To inform us of the ranges of , and we
should cover, we evolve a binary population with the rapid binary
population synthesis framework <cit.>, which is based on the
algorithm from <cit.>, and makes use of the single star
models of <cit.> and provides
analytical fits to their evolution as in
<cit.>.
Specifically relevant to this study are the tidal interactions between
binary stars. These are implemented as in
<cit.>, in which dynamical tides are
based on <cit.> and equilibrium tides are based
on <cit.>.
Our population contains binary systems with an initial primary mass
M_1, secondary mass M_2 and orbital period P, and we assign
weights to each system according to the distribution functions of
their birth properties of <cit.>.
M_1 is sampled logarithmically in the range 0.8 to 120 .
M_2 is sampled from a flat mass-ratio distribution between
0.1 M_⊙/M_1 and 1.
P is sampled from a logarithmically-spaced distribution of periods
between 1 day and 10^8 days.
We evolve
N_M_1× N_M_2× N_P = 80 × 80 × 80
binary systems sampled with the distributions described above at
near-solar metallicity (Z = 0.02).
During Roche-lobe overflow we record the mass transfer quantities
q, f and v_th, and we weigh them by the
time spent transferring mass and the mass transferred,
W_time, i = p_i * dt [yr]
W_mass, i = p_i * dt Ṁ_don [M_⊙],
where W_time, i is the time-weighted probability,
W_mass, i is the mass-weighted probability, p_i is
the probability of the i-th system according to the distribution
functions of <cit.>, dt is the time-step taken in the code,
and Ṁ_don is the mass-transfer rate of
the donor.
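In code, the two weights amount to the following (a sketch with our own variable names; p_i comes from the adopted birth distributions):

```python
def step_weights(p_i, dt_yr, mdot_donor):
    """Time- and mass-weighted contributions of a single time-step.

    p_i        : birth-distribution probability of the system
    dt_yr      : length of the time-step in years
    mdot_donor : mass-transfer rate of the donor in Msun / yr
    """
    w_time = p_i * dt_yr               # [yr]
    w_mass = p_i * dt_yr * mdot_donor  # [Msun]
    return w_time, w_mass
```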
Based on the results of our binary population, we determine the
parameter ranges for our ballistic interpolation calculations
(tab:interpolation_table_properties). We use these ranges to
span a hypercube of initial parameters for our ballistic calculations.
§ RESULTS
We present our results in the following sections. First, we show our
binary population, which contains data on the properties of mass
transfer in many systems, and we use these results to determine the
ranges of the parameters in our trajectory calculations. We then show
our ballistic trajectory results for
“cold” (narrow and slow) and “hot” (wide and fast) streams.
§.§ Mass transfer in binary populations
With the results of our stellar population generated in
sec:expl-param-rang, we calculate the ranges of the
parameters of interest in a population of interacting binary
systems. Our results include the average time spent, and the average
mass transferred, of each system configuration.
fig:exploration_results_parameters shows the distributions of
the parameters of interest, weighted either by time spent transferring
mass or mass transferred. We normalise the area under each of the
curves to unity, and we define values <10^-5 as rare and indicate
them by a green horizontal line.
fig:exploration_results_parameters (a) shows the logarithmic
thermal-velocity, log_10(),
distributions. All systems have thermal-velocities between 10^-3.5
and 10^-0.5.
fig:exploration_results_parameters (b) shows the
synchronicity fraction, , distributions. These are
mostly between 0 and 2, with a peak around both 0 and 1 for
both the time spent transferring mass and mass-transferred
weights. While the time-spent distribution peaks at synchronous
rotation rates (= 1), the mass-transferred
distribution peaks at very sub-synchronous rotation rates
(∼ 0). There is a large tail of synchronicity
fractions from = 2 to
≃ 10, but their probability is low.
fig:exploration_results_parameters (c) shows the mass ratio,
log_10(), distribution. We see a single main range
between log_10() = -2 and 2 for both the time
spent and mass transferred weights. The data show that at small mass
ratios (< 1) hardly any time is spent transferring mass
(probabilities up to 10^-4), while at larger mass ratios
(> 1) the opposite is true. This is understood by the mass
ratio reversal during mass transfer and the transition from thermal
timescale mass transfer (high mass-transfer rate, short time) to
nuclear timescale mass transfer (low mass-transfer rate, long time).
We show the distributions of the logarithm of the ratio of the
dynamical timescale of the donor to the tidal timescale,
log_10(η_static) in
fig:exploration_results_alpha. We indicate equal-valued
timescales, log_10(η_static) = 0
with a red-dashed vertical line. The area on the right of this line
indicates that the static-tide approximation is justified, and vice
versa. The numbers in the legend indicate the total fraction for
either weights with
log_10(η_static) < 0. The data
show a broad range of
log_10(η_static), and clearly
show that in terms of time-spent transferring mass, the static
approximation is overall valid (less than 0.1 per cent below
log_10(η_static) = 0). This is
not always the case for the mass-transferred, because a significant
fraction (13 per cent) of all mass transferred occurs when the
static-tide approximation is invalid.
We show the normalised distribution of
log_10(η_static) as a function
of in
fig:exploration_results_alpha_vs_synchronicity, where in
fig:exploration_results_alpha_vs_synchronicity (a) we show
the distribution weighted by mass-transferred, and in
fig:exploration_results_alpha_vs_synchronicity (b) we show
the data in terms of time spent transferring mass. We indicate six
sections, separated by red-dotted lines. Section 1 indicates
super-synchronous (> 1.025) systems where the potential is
approximately static
(log_10(η_static) ≥ 0), section
2 indicates near-synchronous systems
(0.975 ≤ ≤ 1.025) with a static potential
(log_10(η_static) ≥ 0), and
section 3 indicates sub-synchronous systems (< 0.975)
with a static potential
(log_10(η_static) ≥ 0). Section
4 indicates sub-synchronous systems (< 0.975) where the
static approximation is not valid
(log_10(η_static) < 0, i.e. with
a dynamic potential), section 5 indicates near-synchronous
systems (0.975 ≤ ≤ 1.025) with a dynamic potential
(log_10(η_static) < 0), and
section 6 indicates super-synchronous systems (> 1.025)
with a dynamic potential
(log_10(η_static) < 0). The
range of in the near-synchronous regions is determined by the
bin-width in our simulations.
fig:exploration_results_alpha_vs_synchronicity (a) shows that
the transferred mass is mostly transferred in three sections. Only
9.8 per cent of the systems normalised by mass-transferred are
synchronous and are well approximated by the static potential (section
2). The large majority (77.5 per cent) of transferred mass
takes place in systems with a sub-synchronous donor that still
responds rapidly enough to regard the potential as static (section
3). Most of the remaining systems (12.6 per cent) have donors
that rotate sub-synchronously for which the static potential
approximation does not hold (section 4). The rest of the
sections cover less than 0.08 per cent of all transferred mass,
which indicates that super-synchronous rotation does not occur much in
field binaries (<0.07 per cent), and especially not in cases where
the static potential approximation breaks down.
fig:exploration_results_alpha_vs_synchronicity (b) shows that
the time spent transferring mass is mostly spent in just two sections,
sections 2 and 3. Of these two, the
synchronous case where the static potential approximation holds
(section 2) covers 37.6 per cent of all time spent
transferring mass. The majority of the time is thus spent in systems with
donors that rotate sub-synchronously but effectively experience a
static potential. The contribution of the other regions is negligible
(<0.04 per cent), indicating that, as in the mass-transferred case,
super-synchronous rotation is not common in field binaries, and also
that little time is spent in the regime where the donor effectively
experiences dynamical tides.
With our results shown in fig:exploration_results_parameters
and fig:exploration_results_alpha_vs_synchronicity we
determine the parameter ranges for the trajectory simulations.
* For the thermal velocity, log_10(), we
consider the range between -3.5 and -0.5 for our trajectory
calculations.
* For the synchronicity factor, , we consider the range
between 0 and 2 for our trajectory calculations. A small
fraction of systems has > 2, but these are clearly less
frequent.
* For the mass ratio, we use the range between
log_10() = -2 and 2 for our trajectory calculations.
These values are listed in
tab:interpolation_table_properties.
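A regular grid over these ranges could be set up as below; the number of grid points per axis is not specified in the text above, so the resolutions used in this sketch are placeholders.

```python
import numpy as np
from itertools import product

log_vtherm = np.linspace(-3.5, -0.5, 7)    # log10 thermal velocity
f_sync     = np.linspace(0.0, 2.0, 21)     # donor synchronicity factor
log_q      = np.linspace(-2.0, 2.0, 17)    # log10 mass ratio

# every combination of the three axes defines one ballistic calculation
hypercube = list(product(log_vtherm, f_sync, log_q))
```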
The above results indicate that sub-synchronous mass-transfer is
common, both for the time-spent (>60 per cent) and for the
mass-transferred (90 per cent). This further motivates the remainder
of this study.
§.§ Ballistic trajectory properties
In this section we show our results of the ballistic trajectory
calculations. While our results span a large parameter space, we
choose to highlight the two extreme cases, with the results for cold
and narrow streams (= 10^-3 and
≈ 10^-4-10^-3) in sec:cold-narrow
and hot and wide streams (= 10^-0.5 and
≈ 0.1-0.4) in sec:traj-prop-hot.
Before looking at the results let us highlight several effects that
are relevant to the evolution of the trajectories.
In sub-synchronous systems (< 1), L1 moves outward relative to
the synchronous case; the velocity offset due to asynchronous rotation
at L1 is downward (v_non-synchronous offset is
negative); the Coriolis force for downward motion leads to a rightward
acceleration (a_Coriolis, y is positive); and at the
moment of release the particle is located within the Roche lobe of the
accretor.
In super-synchronous systems (> 1), L1 moves inward relative
to the synchronous case; the velocity offset due to asynchronous
rotation at L1 is upward (v_non-synchronous offset is
positive); the Coriolis force for upward motion leads to a leftward
acceleration (a_Coriolis, y is negative); and at the
moment of release the particle is located within the Roche lobe of the
donor.
In low mass ratio systems (< 1) the velocity offset due to
asynchronous rotation is larger relative to equal mass-ratio systems
due to the large size of the Roche-lobe of the donor and the velocity
is even higher for sub-synchronous rotation as L1 moves outward.
In high mass ratio systems (> 1) the velocity offset due to
asynchronous rotation is smaller relative to the equal mass-ratio
systems due to the small size of the Roche-lobe of the donor.
These effects are visualised and quantified in Figures
<ref>,
<ref> and
<ref>, and eq:equations_of_motion_x.
§.§.§ Cold and narrow streams
We show our cold and narrow ballistic integrations,
= 10^-3, in the ranges of mass ratio, , and
synchronicity factor, , described in
tab:interpolation_table_properties. From
fig:combined_area_plot we know that the stream diameter is
small, ≈ 10^-4-10^-3, so all the
trajectories sampled along the stream effectively have the same
initial position. From fig:non_synchronous_rotation_schematic
we know that for asynchronous systems (≠ 1) at low mass
ratios < 1 the initial radial velocity is low compared to the
tangential asynchronous velocity offset,
v_asynchronous offset, which indicates that results in
that part of the parameter space will deviate most from the
synchronous case explored by <cit.>.
In fig:rmin_low_v we show the radii of closest approach,
, of particles that accrete onto the accretor as a function of
mass ratio, (abscissa), and donor synchronicity,
(colour scale). The triangles indicate the orientation of the
particle, where the upward triangle indicates prograde (same direction
as the binary orbit) orientation and the downward triangle indicates
retrograde (opposite direction). The red diamonds are from
<cit.>, and the blue dashed line
indicates the prescription of
<cit.>.
The radii of closest approach in synchronously-rotating
= 1 donor systems match closely to the results of
<cit.>.
Overall, in the range that covers the parameters of
<cit.> we find a good match,
confirming that our method works as it should, given the assumptions
and approach.
Our data show that super-synchronous donors only accrete onto the
accretor at high mass ratios (> 10 and > 1.5); the
minimum required for accretion onto the donor decreases as
decreases. At high mass ratios the donor, owing to its low
mass, is not able to exert enough force to turn the stream back onto
itself, even though the particle is released within the donor's Roche lobe.
With sub-synchronous donors we find an increase in the minimum
mass-ratio that accretes onto the accretor, with decreasing
. Moreover, given a synchronicity factor, , the radius
of closest approach decreases with decreasing mass ratio, .
Systems with a low mass ratio and a low synchronicity factor
(< 1 and < 0.4) experience a high negative velocity
offset due to asynchronous rotation and, even though they initially
start in the Roche lobe of the accretor, they experience an
acceleration towards the donor because of the Coriolis force, which is
strong enough to steer the trajectory onto the donor.
Generally, in high mass-ratio systems, the effect of asynchronous
rotation on the radius of closest approach is small, with a spread of
only a factor of 2 at ∼ 30. This is because the velocity
offset due to the asynchronous rotation for systems with mass-ratio
≥ 30 is generally low
(|v_asynchronous offset| < 0.2,
fig:non_synchronous_rotation_schematic), so the trajectories
do not differ much from the synchronous case.
In fig:rcirc_over_rmin we show the ratio of circularisation
radius to radius of closest approach, /, as a function of
mass ratio, , and synchronicity factor, . These data are
a measure of the specific angular momentum at the radius of closest
approach and how much it differs from that of a circular orbit at the
same radius. Moreover, these data are used to calculate
the radius at which an accretion disk forms. The red diamonds are from
<cit.> and the blue-dashed
horizontal line is from
<cit.>.
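Under the usual definition of the circularisation radius, namely the radius of the Keplerian orbit about the accretor alone that carries the stream's specific angular momentum at closest approach, the plotted ratio follows directly from h and r_min, as in the sketch below (our notation; dimensionless units with G(M_don + M_acc) = a = 1 are assumed):

```python
def circularisation_ratio(h_min, r_min, mu_acc):
    """Ratio r_circ / r_min for a stream with specific angular momentum
    h_min at its radius of closest approach r_min.  In these units the
    accretor's gravitational parameter is simply mu_acc, so a circular
    orbit of radius r has h = sqrt(mu_acc * r)."""
    r_circ = h_min**2 / mu_acc
    return r_circ / r_min
```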
At high mass-ratios (> 10), we see a general decrease of the
r_circ/r_min with increasing mass ratio
, regardless of the synchronicity factor, with a spread of at
most 0.2. This indicates that the specific angular momentum at the
radius of closest approach tends to that of a circular orbit at the
radius of closest approach, and that the asynchronous rotation of the
donor does not affect this quantity strongly either.
At mass-ratios < 1 the trajectories of most asynchronous
donors accrete onto the accretor. All the trajectories have a ratio of
radii between 1.7 and 2.0, indicating that the stream carries much
more specific angular momentum at the radius of closest approach than
a circular orbit would. Because of its low mass, the torque exerted by
the accretor is insufficient to circularise the stream.
Overall, the ratio is between 1.3 and 2, indicating that the
stream always carries more specific angular momentum than a circular
orbit at the radius of closest approach would. Moreover, the constant
ratio of 1.7 commonly used by
<cit.> is off by up to 30 per
cent.
In fig:self_accretion_specific_angular_momentum_factor we
show the fractional difference between the final
(h_f, don) and initial (h_i, don) specific
angular momenta (ordinate) of particles that accrete back onto the
donor as a function of mass ratio, (abscissa), and
synchronicity fraction, . The data show two distinct regions.
The trajectories from sub-synchronous donors (≤ 0.7) show
an increasingly larger final specific angular momentum
h_f, don compared to the initial specific angular
momentum, h_i, don, of the stream for decreasing
synchronicity factor, . Moreover, the lower , the
larger the range in mass-ratios, , for which the stream
accretes onto the donor. This is because a larger deviation from
synchronism introduces a larger velocity offset, which requires an
increasingly massive accretor to completely turn the stream towards
itself. These trajectories all exert a positive torque on the donor
that leads to the donor becoming more synchronous.
Trajectories from super-synchronous donors show a final specific
angular momentum that is lower than their initial specific angular
momentum, and the difference grows as increases.
in angle of incidence with the donor with increasing asynchronicity
for super-synchronous donors, caused by a combination of a lower
velocity offset and an acceleration towards the donor, and vice versa
for sub-synchronous donors. Trajectories that accrete onto
super-synchronous donors all exert a negative torque that again leads
to the donor becoming more synchronous.
For both the super-synchronous (negative torque) and the
sub-synchronous (positive torque) torque self-accretion, the magnitude
of the difference between the initial and final specific angular
momenta increases, for a given synchronicity factor, with increasing
mass ratio . At higher mass-ratios the trajectory is affected
more, due to the stronger gravitational effect of the accretor. This
increasingly affects the final angular momentum of the stream, which
leads to the increasing difference. In the low mass-ratio systems, the
stream angular momentum is hardly affected, and thus the difference
remains small (e.g. at = 0.01,
h_f, don/h_i, don-1 > -10^-1 for
sub-synchronous donors, and
h_f, don/h_i, don-1 < 5×10^-1 for
super-synchronous donors).
We show the fractions of each classification as a function of mass
ratio, (abscissa), and synchronicity factor,
(ordinate, sec:classifying-and-averaging,
eq:fraction_accretion_accretor), in
fig:classification_fractions. fig:classification_fractions
(a) shows the fraction of all trajectories accreting onto the
accretor, fig:classification_fractions (b) shows those
accreting onto the donor and fig:classification_fractions (c)
shows those that are lost from the system. The colour-scale indicates
a non-zero fraction, where white indicates a fraction of zero. The
red lines indicate the fraction of all trajectories that failed to
evolve correctly (sec:integration-method and
eq:all_fail).
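The per-system fractions shown in the figure can be obtained by a weighted count over the trajectories that sample the stream, along the lines of the sketch below (the label strings and the optional weighting are our own choices; the paper's exact definitions are those of eq:fraction_accretion_accretor and eq:all_fail):

```python
import numpy as np

def classification_fractions(outcomes, weights=None):
    """Weighted fraction of a stream's trajectories ending in each outcome.

    outcomes : sequence of labels, one per trajectory,
               e.g. 'accretor', 'donor', 'lost' or 'fail'
    weights  : optional per-trajectory weights (e.g. local stream density)
    """
    outcomes = np.asarray(outcomes)
    w = np.ones(outcomes.size) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return {label: w[outcomes == label].sum()
            for label in ("accretor", "donor", "lost", "fail")}
```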
The data in fig:classification_fractions (a) show that, at
low mass-ratio, (< 0.1), only the near-synchronous donors
accrete onto the accretor. The region of synchronicity factor,
, that corresponds to accretion onto the accretor increases
both to sub- and super-synchronous donors with increasing mass ratio
. This is due to the decrease in velocity offset due to
asynchronous rotation with increasing
(fig:non_synchronous_rotation_schematic). This reduces the
effect of asynchronicity and makes the trajectories
behave like those from synchronous systems. The asymmetry in the shape
of the fraction accreted onto the accretor is caused by the Coriolis
force, which accelerates the particle towards the accretor for
sub-synchronous donors and away for super-synchronous donors.
The data in fig:classification_fractions (b) show an exact
inversion of the data in fig:classification_fractions (a),
and fig:classification_fractions (c) shows that for the low
thermal-velocity (cold) there are no trajectories that escape the
system.
The transition between the regions is sharp because the stream
associated with the low thermal-velocity (cold) case is narrow: at a
given coordinate, either none (white) or all (yellow) of the
trajectories fall into a given classification. Moreover, we find no
failing systems for our cold and narrow trajectories.
fig:intersection_low_v shows the fractions of trajectory
intersections (sec:intersecting-orbits) as a function of
and , for low thermal-velocity (cold,
= 10^-3) streams. The red contours show the fraction of
self-intersecting trajectories, the blue contours show the fraction of
intersection with other trajectories. The dashed line indicates a
weighted fraction of at least 0.1 of all the trajectories, a dotted
line indicates a weighted fraction of at least 0.5 and the solid line
indicates a weighted fraction of at least 0.9 of all trajectories.
We find that self-intersecting orbits occur at the edges of the
transition regions between accretion onto the accretor and accretion
onto the donor (fig:classification_fractions). The fraction
is always high, since the stream is itself so narrow that the
trajectories stay bundled and follow approximately the same path.
Intersection with other trajectories, with angles of incidence above
the threshold , occurs in the same narrow region of
(, ) parameter space as the self-intersecting
orbits. This is because the stream is so narrow that the trajectories
effectively follow the same path as each other.
In fig:rmin_low_v and fig:rcirc_over_rmin we focus
on properties of the stream at its radius of closest approach to the
accretor. In many situations, though, the radius of the accretor
exceeds the radius of closest approach and the stream directly impacts
the accretor. In that case, the stream has less travel time through
the potential and experiences less torque by the binary system, which
affects the specific angular momentum of the stream upon impact with
the accretor.
We show the evolution of the specific angular momentum of the
mass-transfer stream as a function of its distance to the accretor and
the mass ratio for systems with synchronously rotating donors
(= 1) in fig:stream_interpolation_low_v. The color
indicates the specific angular momentum of the stream in units of that
of the specific angular momentum at the radius of closest
approach. The orange lines show 5 equally spaced lines where this
specific angular momentum is constant.
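With the distances and specific angular momenta recorded along the stream, the value relevant for direct-impact accretion can be read off by interpolation, e.g. as sketched below (function and variable names are ours):

```python
import numpy as np

def h_at_direct_impact(d_grid, h_grid, h_min, r_accretor):
    """Specific angular momentum of the stream when it first reaches the
    stellar surface at r_accretor, in units of its value h_min at the
    radius of closest approach.  d_grid must be sorted in increasing
    distance; d_grid and h_grid are the values recorded along the stream."""
    return np.interp(r_accretor, d_grid, h_grid) / h_min
```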
For all mass ratios the specific angular momentum of the stream starts
out higher than its value at the radius of
closest approach. For systems with high mass ratios the difference
between the initial specific angular momentum and that at is
minor (a few per cent), but this difference increases with decreasing
mass ratio (up to ten per cent).
We note that the qualitative behaviour of the stream in systems with a
different synchronicity factor, , and thermal velocity,
, is not necessarily the same as described above.
§.§.§ Hot and wide streams
In this section we show the trajectory properties of systems with a
hot and wide stream (= 10^-0.5 and
≈ 0.1-0.4). Whereas in the low
thermal-velocity (cold) regime the stream area is negligible, here the
stream area is sufficiently large as to cause a relevant offset
between the initial positions of the particles. Moreover, the high
thermal-velocity (hot) provides a large initial radial velocity
towards the accretor, and the Coriolis force subsequently provides a
large downward (negative y-direction) acceleration on the particles.
We show the radii of closest approach in our high thermal-velocity
(hot) calculations in fig:rmin_high_v.
Overall, we again find a small spread of radii (∼ 0.3-0.5)
at large mass-ratios = 100, but we now see a much larger
spread (∼ 0.01-0.8) at low mass ratios (∼
0.1). Notably, a wider range (= 0.1-2.0) of initial
asynchronicities lead to the accretion onto the accretor. This is due
to the larger (≈ 0.32) initial radial velocity that
makes it harder to deflect the stream.
Accretion onto the donor now only occurs for systems with a low
mass-ratio (< 0.1, significantly lower than in the cold-stream
case), either with < 0.4 or with > 1.5
(fig:classification_fractions_high_v).
Sub-synchronous donors show a general increase of with
decreasing synchronicity factor. This is due to the initially negative
transversal velocity from the sub-synchronous rotation directing the
stream further away from the accretor. This eventually leads to a
fraction of the stream escaping from the system, but for very
sub-synchronously rotating donors many trajectories self-intersect.
At low mass-ratios, super-synchronous donors show a general decrease
of with increasing synchronicity factor, but this behaviour
turns around for highly super-synchronous donors (> 1.7) at
low mass ratios (< 0.1). This is because part of the stream
for these systems starts accreting onto the donor, and the
trajectories that do still accrete onto the accretor on average have a
large radius of closest approach. This region of parameter space
contains many (self-)intersecting trajectories
(fig:intersection_high_v), and since we do not treat
intersecting orbits differently, this indicates that this region
requires a more sophisticated approach than our current one.
We show the ratio of / in our high thermal-velocity (hot)
calculations in fig:rcirc_over_rmin_high_v.
At high mass-ratios (> 1), the results as a function of mass ratio
are similar to the low thermal-velocity (cold) case,
i.e. a higher mass ratio gives a lower /, but the behaviour
as a function of synchronicity is now reversed. The lower the
synchronicity, the lower the ratio /, indicating that the
trajectories at the radius of closest approach are, on average, similar
to circular orbits at that radius, and vice versa.
At small mass-ratios (< 1) the behaviour is similar to the above,
but from ≤ 0.4 the sub-synchronous systems show an
increasingly high ratio / with decreasing mass ratio
. This coincides with regions of the parameter space where part
of the stream either escapes from the system or starts accreting onto
the donor. The remaining trajectories that only barely fail to escape often
fall back into the Roche lobe of the accretor nearly radially, or they
find their radius of closest approach very early in their
trajectory. For these trajectories, the first radius of closest
approach is potentially not suitable to determine the angular momentum
of the ring that would form when the stream circles around the
accretor and hits itself.
We show the ratio of final and initial specific angular momenta of
self-accreting material in our high thermal-velocity (hot)
calculations in
fig:self_accretion_specific_angular_momentum_factor_high_v.
While again there are two distinct regions of positive and negative
torque of self-accreting material, both regions are smaller and
require a higher degree of asynchronicity (i.e. > 1.5) and/or
a lower mass ratio (< 0.1) to self-accrete. Systems with
super-synchronous donors that self-accrete tend to experience a higher torque
for a given mass ratio, e.g. for = 0.01,
h_don, f/h_don, i-1 > -10^-1 for
= 10^-0.5 compared to
h_don, f/h_don, i-1 = [-10^-2,
-10^-1]. The angles of incidence of these trajectories with the
donor are much larger, nearing perpendicular to its surface.
In fig:classification_fractions_high_v we show the fractions
of trajectories in each classification for the hot stream
calculations.
fig:classification_fractions_high_v (a) shows that compared
to our low thermal-velocity (cold) results
(fig:classification_fractions), a larger fraction of mass
ratios and synchronicity factors accrete onto the accretor,
e.g. systems with 0.1 < < 10 and > 1.5 or < 0.5 now
accrete onto the accretor instead of onto the donor as in the low
thermal-velocity case (= 10^-3.0). This is mainly
attributed to the larger initial radial velocity, which gives the
particles more momentum to start with and makes it more difficult to
change the course of their trajectories. This, in turn, leads to a
smaller region of the parameter space in and that
accretes back onto the donor.
fig:classification_fractions_high_v (c) shows the fraction of
trajectories that escapes as a function of and . While
at low (= 10^-3.0) thermal-velocity (cold) there is no
trajectory that escapes, the high (= 10^-0.5)
thermal-velocity allows trajectories to pass the accretor and escape
through the Lagrange point behind the accretor (at
x > x_acc). This primarily occurs in sub-synchronous
systems, again due to the Coriolis force accelerating the particle
towards positive x.
Overall, the data in fig:classification_fractions_high_v (a)
and (b) show that instead of the sharp transition between accretion
onto accretor and self-accretion, there is a much more gradual
transition between the regions where the fractions transition from 0
to 1 over a larger range of parameters. This is because the high
thermal-velocity leads to a wide stream, i.e. a wider range of initial
positions around L1 for our trajectories for a given system. In
systems with e.g. = 0.1 and = 1.9, about half of the
trajectories that make up the stream accrete onto the donor, and half
accrete onto the accretor.
Moreover, some trajectories fail to stay accurate within the given
minimum time step, but the total fraction of the failing systems is
negligible, and they only occur in small regions.
fig:intersection_high_v shows the intersection fractions for
high thermal-velocity (hot, ≈ 0.32) mass
transfer. The structure of the figure is the same as in
fig:intersection_low_v.
We find that the self-intersecting orbits again occur on the edges of
the transition regions between accretion onto the donor and accretion
onto the accretor. The fraction itself is not always high because the
stream is wider and the transition region is more gradual
(i.e. for a wider range in , , parts of the stream can
accrete onto different regions). For sub-synchronous rotation
(< 0.75) the region of parameter space where self-intersection
occurs is narrow and is more confined to the transition region than
for super-synchronously (> 1.75) rotating
systems. Super-synchronous systems with high-thermal velocity streams
have very wide streams, but the asynchronous velocity offset is lower
than in the equivalent sub-synchronous configurations
(< 1.75,
fig:non_synchronous_rotation_schematic). This leads to
self-intersecting trajectories over a larger region of the parameter
space.
Intersection with other trajectories again coincides with regions of
self-intersection, where for sub-synchronous systems the regions
overlap strongly but for super-synchronous the region where
trajectories intersect with others extends to a larger part of the
parameter space (< 0.5 and > 1.25). The increase in
stream diameter in this leads to the initial conditions of each
trajectory to be sufficiently different to cross at high angles of
incidence (>).
Overall, the regions of self-intersection and other-intersection are
confined to regions of low mass-ratios (< 0.5), due to the
higher radial velocity that occurs at high-thermal velocity, which
gives the stream more momentum and makes it harder to deflect or
rotate. Only for low mass-ratios is the donor massive enough to turn
the trajectories and lead to (self-)intersections.
§ DISCUSSION
We use binary population synthesis to evolve populations of binary
systems and record their properties during mass transfer. We do this
to find the ranges of the mass ratios of the accretor, ,
synchronicity factors of the donor, , and thermal velocities
of the stream, that we should cover in our ballistic
stream trajectory calculations. At the same time we use these results
as a motivation for this study. Most notably, we find that mass
transfer takes place with non-synchronous donors for a significant
fraction of either mass transferred (≈ 90 per cent), as well
as time spent transferring mass (≈ 60 per cent,
fig:exploration_results_alpha_vs_synchronicity).
We find that the approximation of static tides does not always hold,
especially the fraction of mass transferred while the static
approximation fails is significant (≈ 10 per cent). This
indicates that mass transfer in those systems occurs in a
time-dependent potential, the effects of which are not captured by our
modelling approach and these systems likely require detailed stellar
evolution models and time-averaging to model the mass transfer
correctly.
We note that, while the results shown in
sec:mass-transfer-binary indicate the extent of the
parameters relevant to this study, they should be used just for
that. We currently calculate the population statistics for a starburst
population at a specific metallicity, and we do not convolve with any
star formation rate. This means that our population results are not a
directly observable quantity, even if the assumption of a single
metallicity is not entirely wrong for populations of dwarfs in the
solar neighbourhood
<cit.>. Moreover, our results
depend on the details of the population synthesis
calculations. Changes in, e.g., tidal interaction physics
<cit.> or birth distributions
<cit.> of the binary
components will change our results, although to what extent is
not clear.
Recently <cit.> performed calculations with a similar
approach to ours. While they do not supply a data release, the
behaviour of their stream models is described in some cases. They find
that, in all cases of self-accretion, the donor experiences a positive
torque, effectively spinning up the donor and removing angular
momentum from the orbit. This agrees with the results of
<cit.>, as well as with
those of <cit.>. The focus of all
these studies is on sub-synchronous donors. We find that
self-accretion onto super-synchronous donors leads to a spin-down of
the donor.
Our results imply that if a donor rotates asynchronously and
self-accretes, this self-accretion always works to synchronise the
donor even if it rotates super-synchronously.
We capture the effects of a large mass transfer stream cross-section
by simulating a set of trajectories with initial position offsets
along the stream. We treat these trajectories as individual, and we do
not include any interaction between these trajectories. In some cases,
however, the trajectories along the mass stream intersect at large
angles with other trajectories
<cit.>. Realistically, these would be swept
up by parts of the stream with a higher density and momentum
<cit.>. We track whether trajectories
intersect with either themselves or with others
(sec:intersecting-orbits) and we find self-intersection and
intersection with other trajectories (at angles larger than the
threshold) occurs primarily in the transition regions between
accretion onto the accretor and accretion onto the donor
(Figures <ref>
and <ref>). Especially in the high
thermal-velocity (hot) stream super-synchronous cases we find that
the region where a high fraction of intersection with other trajectories takes
place (> 1.25 and < 0.5) extends to a larger part of
the parameter space than the region where self-intersection occurs
(> 1.75 and < 0.25,
fig:intersection_high_v). The very wide stream causes the
particles along it to have a large spread in initial conditions and to
follow significantly varying trajectories.
We currently do not post-process any of these trajectories to alter
their outcome or to reject them based on intersection. The regions
where a high degree of (self-)intersection occurs likely require an
approach that is more sophisticated than approximating the stream by a
series of non-interacting ballistic trajectories.
Our ballistic approach imposes some assumptions on the starting
conditions of the particle, especially for asynchronously rotating
donors.
We take the transversal velocity offset due to asynchronous rotation
v_non-synchronous offset to scale linearly
with the synchronicity factor. <cit.>
critiques this approach, and argues that this axisymmetric velocity
assumption is not valid <cit.>,
and that the problem requires a hydrodynamical analysis.
This is based on two studies that look at the gas dynamics of material
at L1 in non-synchronous donors using polytropic models for the
radiative <cit.> and
convective <cit.>
stars, specifically the shape of the flow field at L1.
They both find that in the linearised and low-asynchronicity case the
velocity field tends to zero as it approaches L1, and hence the flow
towards L1 slows down and tends to zero before flowing through L1 and
accelerating again. This is in contrast with our assumption of a transverse
velocity component linearly dependent on the non-synchronicity factor
. A lower velocity offset with the same asynchronous rotation
of the donor leads to stream properties that are more like the
synchronous case.
Because of the initial supersonic velocity relative to the L1 point
(fig:non_synchronous_rotation_schematic) the slow-down can be
accompanied by shocks <cit.>. The heating
from shock dissipation could change the initial properties of the
stream (e.g. increase the local temperature at L1), and could be
observable as an excess luminosity around L1. With observations of
mass-transferring systems it might be possible to discern whether this
slow-down to L1 actually occurs, and whether mass-stream trajectories
behave like those in synchronous systems even for asynchronous donors.
The aim of this paper was to include the effects of non-synchronous
rotation of the donor on the particles in the mass transfer stream in
the ballistic approach, where we treat the accretor as a point
particle with no physical size. Our method, however, is suitable for
extensions like treating direct impact accretion onto the accretor and
adding properties of the particle during its flight to the
interpolation dataset, and the inclusion of additional physical
effects like post-Newtonian potentials for the accretor
<cit.>, the effects of kinematic
acceleration <cit.> or those of
irradiation by the secondary
<cit.> on the critical surface of
the donor.
§ CONCLUSIONS
Motivated by the lack of publicly available data on stream properties
in systems with non-synchronously rotating donor stars, we hereby
present our results of ballistic trajectory calculations. We calculate
ballistic trajectories with varying mass ratio , synchronicity
factor and initial thermal-velocity , and we
assume the accretor radius is infinitely small. We make use of binary
population synthesis to inform us of the ranges of the initial
parameters of the ballistic calculations and to provide further
motivation for the importance of this study and the need for a
publicly accessible data set on ballistic trajectories for
non-synchronous donors.
The main results of our study are summarised below.
* Our binary population calculations with metallicity Z=0.02
indicate that a large fraction of binary systems transfer mass
sub-synchronously: more mass (90.14 per cent) is transferred
sub-synchronously than time is spent doing so (62.44 per cent). Only a very
low fraction of systems transfers mass super-synchronously (<0.07
per cent mass transferred and <0.02 per cent time spent
transferring mass). Moreover, while only a small fraction of time is
spent during which the static tide approximation breaks down, a
non-negligible fraction of mass (12.64 per cent) is transferred
when the donor experiences a dynamic potential. This does, however,
mean that the static potential approximation is valid for the
majority (87.36 per cent) of mass transferred with
sub-synchronously rotating donors.
* Our ballistic trajectory calculations indicate that at low
initial thermal-velocity (cold, = 10^-3.0) there are
clear distinctions between accretion onto the accretor and accretion
onto the donor within the parameter space of and ,
and no trajectories escape from the system. The minimum radius of
approach can be as low as 10^-3, indicating a near head-on
stream. A larger region in the (, ) parameter space
leads to accretion onto the donor for super-synchronous donors
(> 1 and < 100) than for sub-synchronous donors
(< 0.75 and < 5), but the change in specific
angular momentum of the self-accreting stream is overall lower for
super-synchronous donors. For both sub-synchronous and
super-synchronous donors, self-accretion always acts to
synchronise the donor. We find that intersecting trajectories only occur at
the edge of the transition region between accretion onto the
accretor and accretion onto the donor, and that the self-intersection
regions overlap with those of intersection with other trajectories.
* High initial thermal-velocities (hot, = 10^-0.5)
correspond to a wider mass stream, and lead to a less sharp
transition between the regions of accretion onto the donor and
accretion onto the accretor. Fewer configurations of and
, i.e. > 1.5 and < 0.2 for
super-synchronous donors and < 0.75 and < 0.1 for
sub-synchronous donors, lead to accretion onto the donor because the
larger initial radial velocity of the stream makes it more
difficult to deflect.
escape the system through the Lagrange point behind the accretor
(x > x_acc), especially in systems with a
sub-synchronous donor. Intersecting trajectories again occur at the
edge of the transition region, but for super-synchronous donors the
intersection with other trajectories occurs for a larger part of the
parameter space than self-intersecting orbits, i.e. > 1.25
and < 0.5 for intersection with other trajectories and
> 1.7 and < 0.1 for self-intersection.
Our results are useful for orbital evolution and mass transfer
calculations, including determining the formation and properties of
accretion disks. They can be used in stellar evolution and population
synthesis codes, and they are available online upon publication of the
paper.
§ ACKNOWLEDGEMENTS
DDH thanks the UKRI and the University of Surrey for the funding grant
H120341A, and thanks Arman Aryaeipour, Dominika Hubovà, Giovanni
Mirouh, Ondřej Pejcha, Natalie Rees and Mathieu Renzo for useful
discussions. RGI thanks STFC for funding grants
https://gtr.ukri.org/projects?ref=ST
and
ST/L003910/2 (https://gtr.ukri.org/projects?ref=ST/L003910/2).
§ DATA AVAILABILITY
We make our ballistic trajectory integration code, as well as the
interpolation tables for the stream properties and the exploration
data generated through population synthesis available on
https://doi.org/10.5281/zenodo.7007591
upon publication.
mnras
§ DESCRIPTION OF OUTPUT DATASETS
The ballistic stream trajectory summary datasets contain the
parameters described in tab:description_table, along with
meta-data regarding indices and global configurations. These datasets
can be interpolated on and implemented in other binary stellar
evolution codes to include the effects explored in our paper and the
resulting changes in the mass-transfer properties, such as the torque on
the orbit and the fraction of self-accretion.
§ LAGRANGE POINTS AS A FUNCTION OF SYNCHRONICITY
With eq:critical_surface we calculate the first three Lagrange
points of the donor for the synchronicity factors and mass
ratios . We calculate these in the non-inertial reference frame
centred on the donor and transform this to the non-inertial reference
frame centred on the centre of mass of the system
(sec:roche-potent-reduc). In fig:lagrange_point_plot
we show the x-coordinate of the first three Lagrange points for both
of these frames.
|
http://arxiv.org/abs/2307.04790v2 | 20230710180003 | The hunt for formamide in interstellar ices: A toolkit of laboratory infrared spectra in astronomically relevant ice mixtures and comparisons to ISO, Spitzer, and JWST observations | [
"Katerina Slavicinska",
"Marina Gomes Rachid",
"Will Robson Monteiro Rocha",
"Ko-Ju Chuang",
"Ewine Fleur van Dishoeck",
"Harold Linnartz"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.IM",
"astro-ph.SR"
] |
A toolkit of laboratory infrared spectra in astronomically relevant ice mixtures and comparisons to ISO, Spitzer, and JWST observations
Laboratory for Astrophysics, Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands.
[email protected]
Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands.
Max Planck Institut für Extraterrestrische Physik (MPE), Giessenbachstrasse 1, 85748 Garching, Germany
Although solid-state pathways are expected to dominate the formation mechanisms of many complex organic molecules (COMs), very few COMs have been securely identified in interstellar ices, in stark contrast with the many COM detections in the gas phase. The launch of the James Webb Space Telescope (JWST) and its increase in sensitivity and spectral resolution opens the possibility of identifying more COMs in ices, but additional laboratory data are necessary. Formamide (NH_2CHO) is one such COM that is of great interstellar and prebiotic relevance where more laboratory data are needed in the hunt for its presence in interstellar ices.
This work aims to characterize the mid-IR spectra of formamide in its pure form as well as in mixtures of the most abundant interstellar ices via laboratory simulation of such ices, as well as to demonstrate how these laboratory spectra can be used to search for formamide in ice observations.
Mid-IR spectra (4000 - 500 cm^-1/2.5 - 20 μm) of formamide, both in its pure form as well as in binary and tertiary mixtures with H_2O, CO_2, CO, NH_3, CH_3OH, H_2O:CO_2, H_2O:NH_3, CO:NH_3, and CO:CH_3OH, were collected at temperatures ranging from 15 - 212 K.
Apparent band strengths and positions of eight IR bands of pure amorphous and crystalline formamide at various temperatures are provided. Three of these bands are identified as potential formamide tracers in observational ice spectra: the overlapping C=O stretch and NH_2 scissor bands at 1700.3 and 1630.4 cm^-1 (5.881 and 6.133 μm), the CH bend at 1388.1 cm^-1 (7.204 μm), and the CN stretch at 1328.1 cm^-1 (7.529 μm). The relative apparent band strengths, positions, and full width half maxima (FWHM) of these features in mixtures at various temperatures were also determined. All of the laboratory spectra are available to the community on the Leiden Ice Database for Astrochemistry (LIDA) for use in the interpretation of both observations (e.g., from JWST) and laboratory spectroscopic data. Finally, the laboratory spectra are compared to observational spectra of a variety of low- and high-mass young stellar objects as well as prestellar cores observed with the Infrared Space Observatory, the Spitzer Space Telescope, and JWST. A comparison between the formamide CH bend in laboratory data and the 7.24 μm band in the observations tentatively indicates that, if formamide ice is contributing significantly to the observed absorption, it is more likely in a polar matrix. Upper limits ranging from 0.35-5.1% with respect to H_2O were calculated via scaling the formamide:H_2O laboratory spectrum to the observations. These upper limits are in agreement with gas-phase formamide abundances and take into account the effect of a H_2O matrix on formamide's band strengths.
The hunt for formamide in interstellar ices
K. Slavicinska1,2
M. G. Rachid1
W. R. M. Rocha1,2
K. -J. Chuang1
E. F. van Dishoeck2,3
H. Linnartz1
Received 24 May 2023 / Accepted 30 June 2023
==========================================================================================================================
§ INTRODUCTION
Of the >280 molecules that have been detected in interstellar environments <cit.>, formamide (NH_2CHO) has become one of the most widely and deeply investigated in observational, modeling, computational, and laboratory studies in the last decade. Containing all four of the most abundant biological elements (C, H, N, and O), formamide is the simplest molecule that contains the biologically essential amide bond and has been suggested as a plausible prebiotic precursor to various nucleobases (e.g., ), the chemical building blocks of RNA and DNA. It has also been proposed as an alternative prebiotic solvent to promote condensation reactions, which form many vital biological molecules but are highly endergonic in purely aqueous solutions (e.g., phosphorylation), by lowering water activity <cit.>.
Given this potential prebiotic relevance, the fact that formamide has been observed in numerous sources in the interstellar medium as well as on extraterrestrial bodies in our own Solar System has exciting implications for astrobiology. First detected in the interstellar medium in the gas phase by <cit.> in the Sagittarius B2 high-mass star-forming region, formamide has since been observed in over 30 massive young stellar objects (MYSOs) as well as low-mass YSOs (LYSOs) with hot corinos and protostellar shocks ( and references therein). Within our Solar System, gas-phase formamide has been found in the comae of the comets Lemmon, Lovejoy, and Hale-Bopp, with abundances ranging around 0.01-0.02% with respect to H_2O <cit.>. It was also detected in situ by the Rosetta mission on comet 67P Churyumov-Gerasimenko, both on the surface by the Cometary Sampling and Composition experiment (COSAC) instrument on the Philae lander <cit.> and in the coma by the Double Focusing Mass Spectrometer (DFMS) on the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument <cit.>, where the formamide abundance was found to be ∼0.004% with respect to H_2O.
Notably, all of the interstellar sources in which gas-phase formamide has been securely detected have hot cores and corinos or shocked regions, where temperatures are high enough for formamide to thermally desorb from icy grains into the gas phase <cit.>. Additionally, in many of these sources, the formamide abundance correlates almost linearly with the abundance of isocyanic acid (HNCO) <cit.>, and, in the case of the low-mass source IRAS 16293-2422, the two species are spatially correlated and have very similar deuteration ratios <cit.>.
These aspects of formamide observations could be considered evidence that formamide is formed in the solid state (i.e., via ice chemistry), possibly in a pathway chemically related to HNCO, and it is detected in the gas phase following desorption from icy grains. The ice formation and grain sublimation scenario is further supported by recent observational work investigating excitation temperatures of N-bearing complex organic molecules (COMs) in 37 MYSOs from the ALMA Evolutionary study of High Mass Protocluster Formation in the Galaxy (ALMAGAL) survey, where formamide had the highest excitation temperatures of all the studied N-bearing COMs (≳250 K) <cit.>. These temperatures are consistent with thermal desorption experiments, in which formamide ice sublimes at high temperatures (typically >210 K) even when it is mixed with or deposited on top of more volatile species such as H_2O and CO, and at even higher temperatures (>250 K) when the experiments are performed on certain dust grain analog substrates <cit.>.
Experimentally, solid-state formamide has been identified as a product of processing via a variety of energetic sources (e.g., electron, UV, X-ray, and ion irradiation) of a myriad of simple ice mixtures, including (but not limited to) CO:NH_3 <cit.> and H_2O:CO:NH_3 <cit.>, NO:H_2CO:H and NO:CH_3OH:H <cit.>, H_2O:HCN <cit.>, H_2O:CH_4:NH_3 and H_2O:CH_4:N_2 <cit.>, and HNCO <cit.> and CH_4:HNCO <cit.>. Evidently, energetic processing of almost any ice mixture that contains H, N, C, and O is very likely to produce formamide. Such processing experiments mimic the radiation environments experienced by ices in protostellar envelopes and protoplanetary disks. Furthermore, recent experiments by <cit.> demonstrate that hydrogenation of NO:H_2CO can also produce formamide, providing a plausible nonenergetic formation pathway that is relevant to cold, dark clouds.
While a plethora of observational, experimental, and theoretical works (see Section <ref>) have significantly progressed our understanding of formamide's interstellar presence and its plausible chemical history, whether its formation occurs in the solid state, gas phase, or both remains unclear. A secure detection of formamide in ices would be immensely valuable to resolve this debate regarding its formation mechanism. Such a detection, if well resolved, could provide parameters such as formamide's solid-state abundance and its physico-chemical environment, which are essential to elucidating its formation pathway.
Previously, formamide has been tentatively detected in the solid state in the Infrared Space Observatory Short Wavelength Spectrometer (ISO-SWS) spectra of the MYSOs W33A and NGC 7538 IRS 9. In the case of W33A, an upper limit of 2.1% with respect to H_2O via the CH bend at 7.22 μm/1385 cm^-1 was derived, but the authors noted that the peak position in the observation (7.24 μm) was red-shifted relative to the formamide peak in their laboratory spectra <cit.>. For NGC 7538 IRS 9, no upper limit of formamide was provided – a laboratory spectrum of irradiated HNCO that showed IR evidence of formamide formation was qualitatively evaluated as a spectral fit to the observed 6 μm/1700 cm^-1 band <cit.>. In both of these cases, the bands attributed to formamide were overlaid on top of or blended with other strong ice features.
Typically, reference laboratory IR spectra are used to assign and fit astronomically observed IR features to specific species, and band strengths acquired via systematic laboratory experiments are used to quantify the column densities of these species. For COMs such as formamide that are expected to be present in the ice in very low concentrations (≲5%), it is important to obtain these spectra and band strengths not only for pure ices, but also in chemical conditions that are more realistic for interstellar ices. Namely, the molecule of interest should be diluted in the more abundant simple ice species (e.g., H_2O, CO, and CO_2), as interactions with other species present in the ice matrix can significantly alter the positions, profiles, and apparent band strengths of a molecule's vibrational features. Morphological changes in the ice caused by thermal processing, such as transitions from amorphous to crystalline ice or matrix segregation, can also dramatically change an ice's spectral features, so spectra should be collected at a variety of temperatures as well. Considering such factors is not only important to accurately assign and quantify the molecule of interest, but it can also provide valuable information about the molecule's physico-chemical environment and history.
In previous IR characterization work, <cit.> derived the refractive index, density, and several band strengths of pure formamide, but integration ranges and errors were not provided for these band strengths, and no spectra of heated formamide or formamide in mixtures were collected. In order to tentatively assign the 7.24 μm band in W33A's spectrum to formamide, <cit.> collected spectra of formamide at 10 K in H_2O and H_2O:CH_3OH matrices, but only one band was characterized from these spectra, and it is unclear for what phase of formamide the band strength used in the upper limit calculation was derived. <cit.> collected IR spectra of formamide in pure, H_2O-dominated, and CO-dominated ice matrices, but the band strengths, peak positions, and full width half maxima (FWHMs) of the formamide features in these mixtures are not given. <cit.> presented the peak positions of the bands of pure formamide in the 30 - 210 K temperature range, but no spectra of formamide in mixtures were collected.
Thus, in an effort to enable more secure assignments and accurate abundance and/or upper limit determinations of formamide in observed ice spectra, this work provides a comprehensive set of laboratory transmission IR spectra of pure formamide as well as formamide diluted in nine different astrophysically relevant ice mixtures of varying polarities. These spectra are provided at temperatures ranging from 15 - 212 K. Apparent band strengths were derived for eight integrated regions from the pure formamide spectra, and from these, three bands are evaluated as the most promising for future identification of formamide in observations. These bands are also fully characterized (i.e., peak positions, FWHMs, and relative band strengths are provided). Examples of how these spectra and values can be used in future analyses of ice observations are described, and new upper limits of formamide in a variety of objects (prestellar cores, low-mass protostars, and high-mass protostars) were calculated. Finally, all spectra are made publicly available on the Leiden Ice Database[https://www.icedb.strw.leidenuniv.nl] <cit.> for the community to use in fitting to their ice observations. This work is particularly timely given the recent launch of the James Webb Space Telescope (JWST), which may enable the detection of new COMs in interstellar ices due to its unprecedented sensitivity and spectral resolution.
§ FORMAMIDE FORMATION MECHANISM DEBATE
A variety of pathways have been suggested to explain the observed solid-state formamide formation in laboratory ice experiments. One initially proposed mechanism was the hydrogenation of HNCO, an attractive premise given that it provided a direct chemical link between HNCO and formamide to explain their correlation in gas-phase observations:
HNCO + 2^.H →NH2CHO.
This pathway was first suggested by <cit.> and was stated as a possible formation mechanism of formamide when it was observed in VUV irradiation experiments of pure HNCO <cit.>. However, hydrogenation experiments by <cit.> via H bombardment of HNCO <20 K did not produce detectable amounts of formamide, although the authors suggested that the reaction may be prevented in their experiments by the formation of very stable HNCO dimers or polymers, and that it could possibly proceed if HNCO is diluted in the matrix of an ice like H_2O. Indeed, subsequent experiments by <cit.> showed that, in a 3.3 K para-H_2 matrix, formamide can form from HNCO via a hydrogen addition-abstraction cycling mechanism, but in this reaction scheme, HNCO is still the favored product.
Another proposed formation pathway is the following radical-radical recombination:
^.NH2 + ^.CHO →NH2CHO.
This mechanism is technically barrierless and can proceed at low temperatures (∼10 K) but produces higher yields at higher temperatures (∼20-40 K) due to increased mobility allowing the radicals to orient in the proper reaction geometry <cit.>. In the laboratory, this mechanism requires some form of energetic processing to generate the NH_2 and CHO radicals, and its viability is supported by the presence of the CHO radical in the experimental spectra <cit.>.
Various mechanisms have also been suggested where formamide is produced from the NH_2CO radical, which could form by the radical-molecule association of NH_2 and CO or CN and H_2O <cit.>:
^.NH2CO + ^.H →NH2CHO
^.NH2CO + H2O →NH2CHO + ^.OH
2^.NH2CO →NH2CHO + HNCO.
However, the formation of the NH_2CO radical via a pathway that does not involve hydrogen abstraction from already existing formamide, as seen in <cit.>, has yet to be experimentally confirmed.
While these latter mechanisms do not provide an immediately obvious direct solid-state link between HNCO and NH_2CHO, some experimental studies have suggested alternative links consistent with these mechanisms. For example, once formed, formamide can decompose into HNCO via dehydrogenation and photolysis by H_2 loss <cit.>, so HNCO may be a product of NH_2CHO rather than the other way around. <cit.> proposed that the NH_2 radical can produce either HNCO or NH_2CHO depending on the degree of hydrogenation of the C- and O-containing molecule with which it reacts: the reaction of NH_2 with CO leads to HNCO, while NH_2 with HCO or H_2CO leads to formamide.
Thus, while formamide may not be a direct product of HNCO, the two species may be linked in a solid-state chemical network by common precursors. Astrochemical models using the rate constants from <cit.> further corroborate that, indeed, a direct chemical link between HNCO and NH_2CHO is not necessary to reproduce the observed linear correlation between them in models of various interstellar environments and suggest instead that their correlation could be explained by their similar responses to physical (i.e., thermal) environments <cit.>.
In addition to these solid-state mechanisms, the plausibility of the following gas-phase formation route has been extensively debated in computational and modeling works since its proposal in <cit.>:
·NH_2 + H_2CO → NH_2CHO + ·H.
According to its first published electronic structure and kinetic calculations, this reaction is essentially barrierless at low temperatures and thus should proceed readily in interstellar environments <cit.>. Furthermore, chemical models of the protostar IRAS 16293-2422 and the molecular shocks L1157-B1 and B2 utilizing the calculated rate coefficients of this reaction produce formamide abundances that are consistent with observed values <cit.>, and follow-up studies calculating rate coefficients of deuterated formamide formation via the same reaction show that formamide's observed deuteration ratio does not necessarily exclude the possibility of gas-phase formation <cit.>.
However, the accuracy of these calculated rate coefficients has been called into question given that they neglect the zero point energy (ZPE) of one of the transition states. When the ZPE of the transition state is included, the reaction barrier becomes large enough that the reaction rate is negligible at low temperatures <cit.>, although some argue that inclusion of the ZPE is not warranted for this transition state and results in overestimation of the reaction barrier <cit.>. Recent gas-phase experiments attempting to perform this route did not confirm any formamide formation, and their detection upper limits are consistent with the reaction barrier that includes the transition state ZPE <cit.>.
§ METHODOLOGY
All of the measurements were collected in the Laboratory for Astrophysics at Leiden Observatory on the IRASIS (InfraRed Absorption Setup for Ice Spectroscopy) chamber. The setup was described in detail in <cit.> and <cit.>, and it has since undergone several upgrades, including a decrease of its base pressure to <1.0×10^-9 mbar by the addition of new pumps, an exchange of the laser used for interference measurements to one with a wavelength of 543 nm (the wavelength at which the formamide ice refractive index was measured), and the implementation of an independent tri-dosing leak valve system that can be calibrated with a quadrupole mass spectrometer (QMS) following the procedure described in Appendix <ref>.
The optical layout of the chamber remains the same as that shown in Figure 1 in <cit.>: a Ge substrate sits at the center of the chamber and is cooled by a closed-cycle He cryostat to 15 K. Ices are grown on the substrate via background deposition of gases and vapors dosed into the chamber through leak valves. Infrared transmission spectra are collected through two ZnSe viewports that are parallel to the Ge substrate and normal to the IR light beam. During deposition, laser interference patterns used to determine ice thickness are measured on both sides of the Ge substrate (which is opaque and reflective in the visible light range) via photodiode detectors placed outside of viewports positioned 45^∘ from the substrate normal. The patterns obtained from each side of the substrate during deposition show equal deposition rates on both sides. After deposition, the substrate can be heated to obtain IR spectra at different temperatures. In this work, 256 spectral scans with a 0.5 cm^-1 resolution were collected and averaged while the substrate was heated at a rate of 25 K hr^-1, resulting in a temperature uncertainty of ±1.5 K in each heated spectrum. Spectra were collected during heating until reaching the temperature at which the major matrix component desorbed. Before their analysis, all spectra were baseline-corrected using a cubic spline function.
The liquids and gases used in this work were formamide (Sigma Aldrich, ≥99.5%), water (Milli-Q, Type I), carbon dioxide (Linde, ≥99.995%), carbon monoxide (Linde, ≥99.997%), ammonia (PraxAir, ≥99.96%), and methanol (Sigma Aldrich, ≥99.9%). The mixing ratios calculated for all of the spectra via the method outlined in Appendix <ref> are presented in Table <ref>. Uncertainties in the column densities used to calculate these ratios are estimated to be ∼21% for the formamide column densities and ∼27% for the matrix species column densities (see Appendix <ref>). Prior to deposition, the liquid formamide sample was heated to 60^∘C and pumped on directly with a turbomolecular pump in order to remove contaminants (primarily water).
The apparent band strengths of pure formamide are determined via depositing formamide onto the substrate held at 15 K while simultaneously collecting the transmission IR spectra and the laser interference pattern. The thickness d of the ice can be derived from the laser interference pattern via the following equation:
d = mλ / (2√(n^2 − sin^2 θ)),
where m is an integer number of constructive fringes, λ is the laser wavelength, n is the ice refractive index (1.361 for formamide at 543 nm, from ), and θ is the angle of incidence.
Enough formamide is deposited so that four constructive fringes are acquired, the thickness of the ice at each fringe peak is calculated, and the integrated absorbances of eight spectral regions (see Table <ref>) are calculated from the spectra collected at the time that a fringe peak was reached. Then, the integrated absorbance for each spectral region is plotted as a function of ice thickness, and the slope of this line, Δ∫ abs(ν) dν/Δ d, is obtained via a least-squares fit. From this value, the apparent band strengths A' can be approximated with an equation based on the Beer-Lambert Law (e.g., ):
A' = (2.303 M / (ρ N_A)) × (Δ∫ abs(ν) dν / Δd),
where M is the molar mass of formamide (45.041 g mol^-1), ρ is the density of formamide ice (0.937 g cm^-3, from ), and N_A is Avogadro's number. Using change in integrated absorbance over change in thickness in this equation rather than the absolute values of both variables ensures that there is no contribution of any residue from previous experiments on the substrate to the calculated ice thickness. It also does not require a constant ice growth rate.
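As an illustration of this procedure, the short Python sketch below computes the ice thickness at each constructive fringe and the apparent band strength from the least-squares slope of integrated absorbance versus thickness. The constants (543 nm, n = 1.361, 45° incidence, 45.041 g mol^-1, 0.937 g cm^-3) are taken from the text, while the fringe absorbances are hypothetical placeholder values.

```python
import numpy as np

# Constants from the text; the integrated absorbances are hypothetical placeholders
wavelength = 543e-7          # laser wavelength in cm (543 nm)
n_ice = 1.361                # formamide refractive index at 543 nm
theta = np.deg2rad(45.0)     # angle of incidence

# Thickness at each of the four constructive fringes (equation for d)
m = np.arange(1, 5)
d = m * wavelength / (2.0 * np.sqrt(n_ice**2 - np.sin(theta)**2))   # cm

# Hypothetical integrated absorbances of one band at each fringe peak (cm^-1)
int_abs = np.array([1.9, 3.8, 5.7, 7.6])

# Least-squares slope: change in integrated absorbance per change in thickness
slope, _ = np.polyfit(d, int_abs, 1)

# Apparent band strength from the Beer-Lambert-based relation
M, rho, N_A = 45.041, 0.937, 6.022e23     # g mol^-1, g cm^-3, mol^-1
A_prime = 2.303 * M / (rho * N_A) * slope
print(f"A' = {A_prime:.2e} cm molecule^-1")
```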
The apparent band strengths reported in Table <ref> are the averages of three repeated measurements following this method. The experimental uncertainties derived from the standard deviation of these three measurements range from 3-8% for the eight band strengths. However, simply using the standard deviations from the repeated measurements as the band strength uncertainties neglects potential systematic sources of error such as uncertainties in the laser alignment geometry and the data analysis procedure. Thus, the uncertainties provided in Table <ref> are calculated via error propagation of all of the experimental terms in Equation <ref>, using the same estimated uncertainties as <cit.> for the ice thickness (4%) and integrated absorbance (10%) as well as the ice density (10%). This calculation yields an uncertainty of 15% for the reported band strength values.
From the pure formamide apparent band strengths, the apparent band strengths of formamide in the investigated mixtures, A'_i, are calculated using the formamide column densities N_mix (obtained from the methods described in Appendix <ref>) via the following equation:
A'_i = 2.303 ×∫ abs(ν) dν/N_mix,
and the relative apparent band strengths, η, are subsequently found by:
η = A'_i / A'.
Following propagation of error from the pure apparent band strengths, integrated absorbances, and the formamide column densities in the mixtures (see Appendix <ref>), the uncertainties of the relative apparent band strengths presented here are estimated to be ∼28%.
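For reference, a minimal sketch of this calculation is given below; the integrated absorbance, mixture column density, and pure-ice band strength are hypothetical placeholder values, while the quoted fractional uncertainties come from the text.

```python
import numpy as np

int_abs_mix = 0.95     # hypothetical integrated absorbance of one band in a mixture (cm^-1)
N_mix = 2.0e17         # hypothetical formamide column density in the mixture (molec cm^-2)
A_pure = 1.5e-17       # hypothetical pure-ice apparent band strength (cm molec^-1)

A_i = 2.303 * int_abs_mix / N_mix     # apparent band strength in the mixture
eta = A_i / A_pure                    # relative apparent band strength

# Propagating the quoted fractional uncertainties (10% integrated absorbance,
# 21% mixture column density, 15% pure band strength) in quadrature gives ~28%
frac_err = np.sqrt(0.10**2 + 0.21**2 + 0.15**2)
print(f"eta = {eta:.2f} +/- {eta * frac_err:.2f}")
```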
§ RESULTS
The spectra of pure amorphous and crystalline formamide are presented in Figure <ref>, and the eight apparent band strengths calculated at 15 K are presented in Table <ref>. Peak positions and vibrational mode assignments are also provided. Some integrated regions contain multiple overlapping peaks; in these cases, the peak positions and assignments were provided for all peaks within the integrated region, but the peaks were not deconvolved to give an individual band strength for each peak. These band strengths have percent differences ranging from 1-35% compared to those given for the same peak values in <cit.>. As integration bounds were not provided by <cit.>, any discrepancies in band strengths may be caused by differences in chosen integration regions.
The transition from amorphous to crystalline formamide is observed at 170 K, indicated by its bands becoming sharper and narrower and some peaks splitting. The amorphous nature of almost all of the pure and mixed ices collected at 15 K can be ascertained from their spectra, which have typical amorphous features that show evidence of matrix crystallization during the warm-up phase of the experiments. This excludes the mixtures containing CO, whose phase at 15 K in these experiments may be crystalline given recent investigations of CO ice structure at ≥10 K <cit.>.
Figure <ref> presents the spectrum of pure formamide ice along with the spectra of the pure matrix components, all at 15 K. The formamide peaks indicated in the shaded areas were selected for full characterization (i.e., their peak positions, FWHMs, and relative band strengths are determined for mixtures): the overlapping C=O stretch and NH_2 scissor at 1700.3 cm^-1/5.881 μm and 1630.4 cm^-1/6.133 μm, respectively, and the slightly overlapping CH bend and CN stretch at 1388.2 cm^-1/7.204 μm and 1328.1 cm^-1/7.529 μm, respectively. These peaks were selected because they are strong, have sharp profiles, and overlap the least with the major peaks of the most common interstellar ices, making them the best candidates for identifying formamide in interstellar ice spectra. There is still some overlap between these formamide peaks and some minor peaks of the matrix components, namely the water OH bend at ∼1600 cm^-1/6.25 μm, the methanol CH_3 and OH bends at ∼1460 cm^-1/6.85 μm, and the ammonia NH scissoring at 1624 cm^-1/6.16 μm. However, with sufficiently high formamide concentrations, it may still be possible to identify formamide in these spectral regions, as these matrix bands are relatively weak and broad.
The matrix- and temperature-dependent changes in these selected formamide ice bands are discussed in the following subsections, and their peak positions, FWHMs, and relative band strengths in different mixtures at various temperatures are reported in Appendices <ref> and <ref>. The NH_2 stretching features at 3371.2 cm^-1/2.966 μm and 3176.4 cm^-1/3.148 μm and the NH_2 wagging and twisting features at 689.2 cm^-1/14.510 μm and 634.0 cm^-1/15.773 μm were excluded from further characterization despite their relatively large band strengths due to their direct overlap with the two most intense water features, the OH stretch at ∼3279 cm^-1/3.05 μm and the H_2O libration at ∼780 cm^-1/12.8 μm, respectively <cit.>. The remaining formamide bands, the CH stretch at 2881.9 cm^-1/3.470 μm, the CH bend overtone at 2797.7 cm^-1/3.574 μm, and the convolved NH_2 rock at 1108.1 cm^-1/9.024 μm and CH out-of-plane deformation at 1056.1 cm^-1/9.469 μm, have low band strengths and directly overlap with various methanol features: the CH_3 stretches at 2950 cm^-1/3.389 μm and 2830 cm^-1/3.533 μm, the CH_3 rock at 1126 cm^-1/8.881 μm, and the C-O stretch at 1027 cm^-1/9.737 μm <cit.>.
§.§ C=O stretching and NH_2 scissoring features (∼1700 and 1630 cm^-1)
Figure <ref> shows how the profile of the C=O band (1700.3 cm^-1/5.881 μm) changes in different mixtures and temperatures and presents the peak positions and FWHMs of these spectra in a scatter plot. This type of scatter plot can help to narrow down the possible thermochemical environments of molecules identified in observations (see Section <ref>). The right bottom plot in the figure shows the strengths of the band in the different mixtures at 15 K relative to the value of the band strength of pure formamide. The integrated regions used to calculate these band strengths also include the NH_2 scissoring mode, which presents as a weak, broad feature overlapping with the red shoulder of the C=O stretch (see Figure <ref>). The FWHM and relative band strengths of the formamide:NH_3 mixture are excluded from the bottom scatter plots in Figure <ref> and the tables in Appendix <ref> due to the significant overlap of this band with ammonia's NH scissoring mode at 1624 cm^-1/6.16 μm. The NH_3 peak is small enough in the NH_3-containing tertiary mixtures relative to the formamide C=O stretch to extract reliable peak positions and FWHMs, but relative band strengths were not calculated.
In pure amorphous formamide (<170 K), the C=O stretch appears as a single broad peak centered at 1704.2 cm^-1/5.868 μm. Generally, being in a mixture causes the feature to sharpen, most dramatically so in apolar mixtures in which CO or CO_2 are the dominant species. For example, the FWHM of the feature in formamide:CO_2 at 15 K is 51.1 cm^-1, over three times narrower than that in pure formamide. Also, in the CO, CO:CH_3OH, and crystalline CO_2 matrices, some peak splitting occurs before the formamide crystallization temperature is reached. Such sharpening and splitting is typical when a polar molecule is highly diluted in an apolar matrix and is caused by the polar molecule being isolated in the matrix as a monomer or dimer, unable to form the hydrogen bonds with other polar molecules that tend to broaden and blend vibrational features (e.g., ). <cit.> also previously observed the formamide peaks splitting due to monomer and dimer formation in their very dilute 1:40 formamide:CO mixture. In the polar mixtures, however, as hydrogen bonding with the matrix is still possible, the feature remains broad. The feature is the most blue-shifted in the binary CO and CO_2 mixtures, where its peak values are 1717.2 and 1703.7 cm^-1, respectively, in the 15 K ices, while in polar mixtures it tends to red-shift, with the most red-shifted peak position being that of the tertiary H_2O:CO_2 mixture, 1694.0 cm^-1. Despite containing a high fraction of apolar CO, the tertiary mixtures with CO:CH_3OH and CO:NH_3 have peak positions similar to the polar mixtures. The relative band strength of this formamide feature is >1 in all of the investigated matrices, with no observable trend related to polarity present in these values.
At formamide's crystalline phase transition temperature (170 K), the C=O peak blue-shifts and splits into multiple blended features. This is only observed in the pure formamide spectrum because all of the matrix molecules investigated here desorb below 170 K. An interesting trend to note is that, as the mixtures increase in temperature, the formamide C=O feature tends to broaden to have a FWHM value more similar to that of pure formamide. This trend can be easily identified in the scatter plot in Figure <ref>, where the scatter points of several of the mixtures move closer to the points of the pure amorphous spectrum as temperature increases. It is also particularly noticeable in Figure <ref> in the spectra of mixtures containing H_2O, which have peak position and FWHM values at high temperatures (>150 K) that are close to those of the pure spectrum. Sudden broadening of the FWHM to a value closer to that of pure formamide also tends to occur at the matrix crystallization temperatures (for example, in the binary CO_2 mixture between ∼30 and 40 K and in the H_2O-containing mixtures between ∼130 and 150 K). These spectral changes indicate that formamide segregation is occurring in the matrix as the ice is heated and is particularly promoted when the ice undergoes a dramatic restructuring during matrix crystallization. The conclusion that solid-phase formamide diluted in a matrix is mobilized via heating is consistent with formamide thermal processing studies, in which formamide deposited on top of water ice diffused through the water during heating <cit.>.
§.§ CH bending and CN stretching features (∼1388 and 1328 cm^-1)
The shape and position of the CH bend (1388.1 cm^-1/7.204 μm) do not vary much depending on chemical environment or temperature, with peak positions only ranging from 1398.0 - 1387.2 cm^-1 and FWHM values ranging from 11.1 - 27.5 cm^-1 in the mixtures investigated here (see Figures <ref> and <ref>). As in the C=O stretch band, the binary apolar mixtures with CO and CO_2 have the most blue-shifted and narrow peaks; however, a trend of the mixture band shifting during heating to peak position and FWHM values closer to those of the pure band is not as clear. The band strength of the CH bend increases in all of the mixtures (e.g., η=1.63 at 15 K in the formamide:H_2O mixture) except for the CO_2 mixture, in which the band strength decreases slightly (η=0.85 at 15 K).
The CN stretching band (1328.1 cm^-1/7.529 μm) varies much more dramatically across different mixtures and temperatures (see Figures <ref> and <ref>), particularly in the binary apolar mixtures, in which it red-shifts by up to ∼50 cm^-1 and splits into multiple convolved features. In the formamide:CO_2 spectrum, two peaks are present at 15 K at 1316.8 and 1277.0 cm^-1, with the peak at 1277.0 cm^-1 having a greater intensity until 40 K, at which point the intensity of the 1316.8 cm^-1 peak increases and that of the 1277.0 cm^-1 peak decreases. The 1277.0 cm^-1 peak intensity then continues to decrease during heating until CO_2 sublimates at 90 K (see Figure <ref>). This trend is indicative of the 1277.0 cm^-1 peak belonging to the formamide monomer and the 1316.8 cm^-1 peak belonging to the formamide dimer, as it would be expected for the monomer peak to decrease and the dimer peak to increase if segregation occurs during heating, especially during a major ice structure rearrangement like matrix crystallization, which occurs for CO_2 at 40 K. Such assignments are consistent with the assignments in <cit.>, who observed the formamide monomer and dimer in a xenon matrix at 1267.2 and 1305.4 cm^-1, respectively, and supported their assignments with computations. The peak in the formamide:CO spectrum also has a red component that appears to decrease in intensity during heating, but the monomer and dimer peaks are not as clearly distinguishable as more than two peaks appear to be overlapping in that spectrum. In the mixtures containing other polar molecules, the band is generally blue-shifted, broadened, and decreases in intensity relative to the CH bend. The relative strength of the band is close to 1 in most of the characterized polar mixtures, except for the H_2O:CO_2 mixture, which has a relative band strength of 0.75 at 15 K. In contrast, the relative band strength is closer to 2 in all of the primarily apolar mixtures.
While the CN stretch clearly has more potential than the CH bend as a diagnostic of the chemical environment of formamide, it is also much broader and less intense in most of the mixture spectra than in the pure spectra. This diminishes the ability to identify this band in a spectral region where several other astronomically relevant COMs also have features (see Section <ref>).
§ ASTRONOMICAL IMPLICATIONS
The ability of formamide to form via both atom addition and energetic processing in a variety of ices containing C, H, N, and O means that its solid-state presence is plausible in many interstellar environments, ranging from dark interstellar clouds to protoplanetary disks. However, in order to securely detect it, an absorption with a clear peak position and profile that is distinguishable from other ice features in the same spectral region must be identified.
The C=O stretch is amorphous formamide's strongest and sharpest feature, but it overlaps with the blue wing of the strong and broad 6.0 μm feature present in most interstellar ice spectra. Water and ammonia, which have been securely identified in ices, as well as formic acid and formaldehyde, which have been tentatively identified, have features in this spectral region <cit.>. Additionally, many other carbonyl group-containing COMs that have been detected in the gas-phase and may be present in the solid state, like acetaldehyde, acetone, and methyl formate, also have strong absorptions in this wavelength region <cit.>. While this limits the potential of using formamide's C=O band as its primary means of identification, the band can still be used for performing fits spanning a wider wavelength region in combination with other bands.
The CH bend and the CN stretch are medium-strength features that lie in the "COM-rich region" of interstellar ice spectra between 7-8 μm <cit.>. This region, where many organic functional groups have absorptions, sits on the tail of the strong 6.85 μm band (whose assignment remains uncertain but likely contains absorptions by methanol and the ammonium cation, ). The methane CH bending band at 7.68 μm is the most clearly and frequently observed ice band in this region <cit.>, but additional weaker features at 7.03, 7.24, 7.41, and 8.01 μm are also consistently observed toward some sources (Figure <ref>). Candidate carriers suggested for some of these absorptions include species like formic acid, ethanol, acetaldehyde, the formate anion, and, potentially, formamide <cit.>.
As mentioned previously, <cit.> tentatively assigned formamide as a plausible contributor to the 7.24 μm band in W33A using a formamide:H_2O spectrum and calculated a formamide ice upper limit of 2.1% with respect to H_2O, although they pointed out that in their lab data, the formamide peak position was blue-shifted by 0.02 μm relative to the observed band, and that an assignment to the CH bend of formic acid (HCOOH) may be more appropriate. Ethanol (CH_3CH_2OH) and the formate anion (HCOO^-) have also been considered candidates for this band <cit.>. No distinct and consistently observed bands are located at the peak position of the formamide CN stretch at ∼7.5 μm. However, in mixtures (particularly those with polar components), the intensity and sharpness of this band weaken (relative to the intensity and sharpness of the CH bend). Such a profile change makes a distinction of the CN stretch from the continuum in this region less feasible if formamide is present at the low ice abundances expected for COMs, especially given that around this wavelength, many sources also show a broad and significant absorption commonly attributed to SO_2 ice <cit.>. On the other hand, the CH bend remains strong and sharp in all of the mixtures investigated here. All of the other absorption features of formamide either have profiles that are too broad or weak, or overlap directly with the strongest absorptions of the major ice components (see Figure <ref>), and will therefore not be utilized in our hunt for formamide ice.
Thus, if formamide is indeed present in interstellar ices, the CH bend is likely its best tracer. We focus our subsequent analysis on the comparison of the formamide CH bend in mixtures to the observed 7.24 μm band in nine spectra collected toward a variety of sources by ISO, Spitzer, and the recently launched JWST (Figure <ref>). The ISO (SWS) spectra include three massive young stellar objects (MYSOs), W33A, NGC 7538 IRS 9, and AFGL 7009s, and the Spitzer (IRS) spectra include three low-mass young stellar objects (LYSOs), B1c, 2MASS J17112317, and RNO 91. These archival spectra were selected due to their 7-8 μm regions having several deep and distinct features, indicating that they may be COM-rich, and because their profiles in this region slightly differ, demonstrating the variety of spectral features that have been observed here. In addition, three spectra recently collected by the JWST have been included: two pristine, high-extinction dark clouds toward background stars, NIR38 and J110621, observed with the Mid-InfraRed Instrument (MIRI) Low-Resolution Spectrometer (LRS) <cit.> in the ERS program Ice Age (program 1309, PI: M. McClure), and a Class 0/I low-mass protostar, L1527, observed with the MIRI Medium-Resolution Spectrometer (MRS) in the GTO program JOYS (program 1290, PI: E. F. van Dishoeck, ). These are some of the first spectra ever collected of such low-flux sources. While the resolution of the ISO data is comparable to that of the JWST data, the resolution of the Spitzer data is significantly lower (R∼60-100), limiting its use in the analysis of weak and narrow bands.
The 7.24 μm band is present to some extent in all of the sources, usually at an optical depth similar to the 7.41 μm band in the local continuum-subtracted spectra. The position and FWHM of the band were extracted from the spectra that have spectral resolutions high enough to clearly define the shape and position of the peak – that is, the ISO-SWS and JWST MIRI-MRS MYSO spectra – by fitting a Gaussian profile to the peak. Figure <ref> shows these observed peak positions and FWHMs (indicated with star shapes) in a scatter plot with the peak positions and FWHMs of the CH bend extracted from the laboratory spectra. The peak positions and FWHMs extracted from laboratory spectra of ethanol in a H_2O mixture <cit.>, formic acid in a H_2O:CH_3OH mixture <cit.>, and ammonium formate in a H_2O mixture at 150 K <cit.> are also included in this figure (indicated with the letters E, F, and H respectively) to enable a comparison between formamide and the other commonly proposed carriers. From this plot, it is evident that, while the polar mixtures have the band position and profile closest to the observations, they are all still too blue-shifted (by ∼7 cm^-1/0.04 μm) from the astronomical values for formamide to be the major carrier of this band. In contrast, ethanol, formic acid, and the formate anion in polar mixtures are much better candidates.
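A minimal sketch of such a Gaussian peak extraction, assuming a single-Gaussian profile and a mock continuum-subtracted optical-depth spectrum, could look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(nu, tau0, nu0, sigma):
    # Gaussian optical-depth profile in wavenumber space
    return tau0 * np.exp(-0.5 * ((nu - nu0) / sigma) ** 2)

# Mock continuum-subtracted optical depth around the 7.24 um (1381 cm^-1) band
nu = np.linspace(1360.0, 1400.0, 200)
tau = gaussian(nu, 0.02, 1381.0, 4.0) + np.random.normal(0.0, 0.002, nu.size)

popt, _ = curve_fit(gaussian, nu, tau, p0=[0.02, 1381.0, 5.0])
peak_position = popt[1]
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
print(f"peak = {peak_position:.1f} cm^-1, FWHM = {fwhm:.1f} cm^-1")
```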
It is still possible that formamide could be contributing to the blue wing of this band. However, to result in non-negligible upper limits, the formamide must be present in a matrix containing other polar molecules, as the band is far too blue-shifted in the purely apolar mixtures to contribute significantly to the observed absorption. Therefore, we derived upper limits of formamide by fitting the CH bend in the laboratory spectrum of the formamide:H_2O mixture at 15 K to the 7.24 μm band in the local continuum-subtracted observed spectra (see example fits in Figure <ref>). The water mixture was chosen for the fit for simplicity's sake and due to the fact that water is by far the most abundant interstellar ice component. The water contribution was subtracted out of the laboratory ice spectrum using a spectrum of pure water ice to ensure that absorption by the broad water bending band did not contribute to the calculated formamide upper limit. The band strength used to perform the upper limit calculation was 1.5×10^-17 cm molec^-1, the band strength of the CH bend in pure formamide at 15 K (from Table <ref>) multiplied by the relative band strength of formamide in H_2O at 15 K (1.63, from Appendix <ref>).
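A simplified sketch of such an upper-limit estimate is given below; the observed and laboratory spectra are mocked up as Gaussians purely for illustration, while the band strength of 1.5×10^-17 cm molec^-1 is the value quoted above.

```python
import numpy as np

# Hypothetical wavenumber grid (cm^-1) covering the 7.24 um region
nu = np.linspace(1370.0, 1405.0, 200)

# Mock continuum-subtracted observed optical depth and water-subtracted
# laboratory NH2CHO:H2O absorbance (in practice, real resampled spectra)
tau_obs = 0.015 * np.exp(-0.5 * ((nu - 1381.0) / 4.0) ** 2)
abs_lab = 0.005 * np.exp(-0.5 * ((nu - 1388.0) / 5.0) ** 2)

# Scale the laboratory band so its peak does not exceed the observed feature
scale = tau_obs.max() / (2.303 * abs_lab.max())

# Column density upper limit: integrated scaled optical depth over band strength
A_band = 1.5e-17    # cm molec^-1 (pure CH bend at 15 K x eta in H2O)
N_upper = scale * 2.303 * np.trapz(abs_lab, nu) / A_band
print(f"N(NH2CHO) <= {N_upper:.1e} molec cm^-2")
```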
When deriving upper limits, it is prudent to ensure that the laboratory spectrum fits to the observed spectrum across a wider wavelength range, as upper limits can be easily overestimated if only one band is considered. Subtracting out the contributions of other ices that absorb in the analyzed spectral region, if their abundances can be unambiguously determined from other spectral regions, also prevents further upper limit overestimations. Therefore, we ensured that the calculated upper limits in Table <ref> do not result in a C=O stretch absorption that exceeds the observed optical depth of the ∼6 μm band in our selected objects. Prior to checking the C=O absorption in this region, the spectral contribution of water's OH bend ∼1655 cm^-1/6.04 μm was removed from the observed spectra by scaling a laboratory water spectrum at 15 K from <cit.>, so that the water column density of the scaled spectrum was the same as what was previously determined for these objects, and then performing a subtraction. (For the ISO and Spitzer data, the water column densities from <cit.> were used for scaling; for the JWST MIRI-LRS data, the water column densities from <cit.> were used. For the JWST MIRI-MRS spectrum (L1527), the water column density was determined by first subtracting the silicate contribution by fitting the GCS3 spectrum to the 10 μm silicate band and then fitting the laboratory water spectrum from to the water libration band.)
The resulting upper limits of solid-state formamide, presented in column densities as well as with respect to the abundance of water in each source, are presented in Table <ref>. These upper limits (ranging from 0.35-5.1% with respect to H_2O) are all at least an order of magnitude greater than (but consistent with) the observed gas-phase formamide abundances in three comets (0.016-0.021% with respect to H_2O) as well as the average beam dilution-corrected abundance of 22 MYSOs from the ALMAGAL survey (∼0.05% with respect to H_2O, assuming a CH_3OH/H_2O ratio of ∼5%). As a beam dilution-corrected gas-phase formamide abundance has also been obtained for the LYSO B1c (∼0.05%), one of the sources investigated here, it can be directly compared to our solid-state formamide upper limit derived from the object's low-resolution Spitzer data. While our upper limit (≤0.93%) is consistent with this gas-phase abundance, it is an order of magnitude greater. We expect the precision of this upper limit to be further refined by future high-resolution observations of B1c, planned to be observed by MIRI-MRS in the JOYS program.
A formamide upper limit of 2.1% with respect to H_2O was previously derived for W33A in <cit.> by assuming that the entire 7.24 μm band consisted of formamide and using a band strength of 3.2 × 10^-18 cm molec^-1 attributed to <cit.>, where it is unclear for what phase of formamide this band strength was derived. Despite our very different approaches, we have fortuitously arrived at nearly the same upper limit value for W33A (2.2%).
In the higher resolution observational data of MYSOs explored here, the lack of a formamide CH bending feature distinct from other COM absorptions prevents a secure formamide ice detection. However, it is clear from the example upper limit fits shown in Figure <ref> that the profile of the 7.24 μm feature is not uniform across different sources, and several sources, such as NGC 7538 IRS 9, NIR38, and RNO 91, may have a blue wing on this band that spectrally overlaps with the CH bend of formamide. Therefore, it is possible that a more distinct absorption at the expected 7.20 μm will emerge more clearly in sources targeted by future JWST MIRI-MRS observations. The first ice spectra arriving now from MIRI-MRS illuminate a promising future. In the spectrum of the LYSO IRAS 15398-3359 acquired by the JWST CORINOS program (program 2151, PI: Y. -L. Yang, ), the COM features between 7-8 μm previously detected barely above 3σ levels in the spectra in Figure <ref> are beautifully resolved (although a distinct absorption centered at 7.20 μm is not present). More sources known to have strong COM absorptions in this spectral region have been specifically targeted by the JOYS program as well as the JWST proposals "It's COMplicated" (program 1854, PI: M. McClure, ) and "Ice chemical complexity toward the Ophiuchus molecular cloud" (program 1959, PI: W. R. M. Rocha, ), and these sources are scheduled to be observed throughout the remainder of this year. As demonstrated by the examples of spectral analysis of ices in this section, the laboratory spectra from this work can serve as a toolkit for formamide identification in such ice observations.
§ CONCLUSIONS
In an effort to facilitate the hunt for formamide in interstellar ices, laboratory spectra of pure formamide and formamide in various astronomically relevant ice mixtures ranging from temperatures of 15 - 212 K have been collected and made freely available to the astronomical community on the Leiden Ice Database for Astrochemistry (LIDA). The band strengths at 15 K for all pure formamide features between 4000 - 500 cm^-1/2.5 - 20 μm are presented, and the peak positions, FWHMs, and relative apparent band strengths of the three bands identified as the most promising for future formamide detection were extracted from the pure and mixed formamide spectra. These spectra and extracted data were used to assess present and future detectability of ices in various interstellar objects. The primary conclusions drawn from this work are as follows:
* Out of the eight formamide features in the investigated IR spectral region, the C=O stretch (1700.9 cm^-1/5.881 μm), the CH bend (1388.3 cm^-1/7.203 μm), and the CN stretch (1328.0 cm^-1/7.530 μm) are likely to be the most useful for future formamide identification due to their strength, sharp profile, and low overlap with the strongest features of the major ice components, with the CH bending feature being the most promising. The NH_2 stretching features (3371.2 cm^-1/2.966 μm and 3176.4 cm^-1/3.148 μm) and the NH_2 wagging and twisting features (689.2 cm^-1/14.510 μm and 634.0 cm^-1/15.773 μm) directly overlap with strong water absorptions, while the CH stretch (2881.9 cm^-1/3.470 μm), the CH bend overtone (2797.7 cm^-1/3.574 μm), and the convolved NH_2 rock and CH out-of-plane deformation (1108.1 cm^-1/9.024 μm and 1056.1 cm^-1/9.469 μm) have both low band strengths and direct overlap with methanol absorptions, making them less suitable for formamide identification.
* In the mixtures investigated here, the CN stretch is the most affected by ice composition – its peak position varies by up to ∼68 cm^-1 and its FWHM by up to ∼50 cm^-1, with peak splitting observed in the apolar mixtures. The C=O stretch can also change significantly, depending on the matrix, by up to ∼27 cm^-1 in peak position and up to ∼40 cm^-1 in FWHM, although peak splitting in the apolar mixtures is not as prominent as in the CN stretch. The CH bend is relatively unaffected by ice composition, with its peak position and FWHM only varying by ∼11 cm^-1 and ∼15 cm^-1, respectively, across the different mixtures. Relative to the pure spectrum, the band strength of the C=O stretch increases in all of the investigated mixtures. The CH bend band strength also increases in all of the mixtures except the binary CO_2 mixture, while a significant increase in the band strength of the CN stretch is only observed in the mixtures dominated by an apolar component.
* Although the polar formamide mixtures provide the closest match to the 7.24 μm band observed toward nine lines of sight (including dense clouds, LYSOs, and MYSOs) with three different space telescopes (ISO, Spitzer, and JWST), none provide a convincing fit, with all having their CH bend peak position approximately 7 cm^-1/0.04 μm too far to the blue from the clearly observed band at 1381 cm^-1/7.24 μm. Instead, formic acid and ethanol mixtures containing H_2O provide a better fit. However, this does not exclude the possibility of formamide being present in these ices. The calculated formamide upper limits in these objects range from 0.35-5.1% with respect to H_2O, which are consistent with gas-phase abundances of formamide in several LYSOs, MYSOs, and comets. The upper limit value derived for W33A, 2.2% with respect to H_2O, is fortuitously in agreement with that derived by <cit.>.
While a more secure formamide detection is not possible with the telescopic data explored in this work, the first ice observations arriving from JWST demonstrate an unprecedented sensitivity and spectral resolution that will enable us in the near future to broaden the search for formamide ice in both objects previously observed by Spitzer, whose analysis is limited by low spectral resolution, as well as newly observed objects that were too dim to be observed by Spitzer or ISO.
This work is supported by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101019751 MOLDISK), the Netherlands Research School for Astronomy (NOVA), and the Danish National Research Foundation through the Center of Excellence "InterCat" (Grant agreement no.: DNRF150). The authors acknowledge the Ice Age (program 1309, PI: M. McClure) and JOYS (program 1290, PI: E. F. van Dishoeck) observing programs for the JWST astronomical data used in this work. KS acknowledges Thanja Lamberts and Pooneh Nazari for helpful discussions about the formamide formation mechanism and Sergio Ioppolo for helpful discussions about the QMS calibration methodology.
§ PEAK POSITIONS AND FWHMS OF FORMAMIDE IN PURE AND MIXED ICES
This appendix contains the peak positions and FWHMs of the formamide features selected for complete IR characterization in this work. The values are listed for the formamide features in pure ice as well as in mixtures containing H_2O, CO_2, CO, CH_3OH, and NH_3. The peak position is the wavelength at which the absorption reaches its maximum, and the FWHM is the width of the peak between the half-maximum values on each side. A Savitzky-Golay filter with a second-order polynomial was applied to many of the mixture spectra here before extraction of the peak position and FWHM to eliminate shifts in these values caused by noise. The smoothing windows used ranged from 10-100 depending on the level of noise present in each spectrum, and care was taken that these smoothing windows did not warp the shape of any features. Values were extracted up to the temperature at which the major matrix component desorbed.
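A possible implementation of this extraction step, assuming SciPy's Savitzky-Golay filter and an odd smoothing window, is sketched below; the band shown is a mock example.

```python
import numpy as np
from scipy.signal import savgol_filter

def peak_and_fwhm(nu, absorbance, window=51, polyorder=2):
    # Smooth the band, then read off the peak position and the width between
    # the half-maximum crossings on either side of the maximum
    smooth = savgol_filter(absorbance, window_length=window, polyorder=polyorder)
    i_max = int(np.argmax(smooth))
    above = np.where(smooth >= smooth[i_max] / 2.0)[0]
    return nu[i_max], abs(nu[above[-1]] - nu[above[0]])

# Example on a mock noisy band centered near 1388 cm^-1
nu = np.linspace(1350.0, 1420.0, 500)
band = 0.02 * np.exp(-0.5 * ((nu - 1388.0) / 6.0) ** 2) + np.random.normal(0, 0.001, nu.size)
print(peak_and_fwhm(nu, band))
```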
For formamide features in mixtures where there is direct overlap with weaker matrix component bands (e.g., the C=O stretch in the NH_2CHO:H_2O mixture), the spectrum of the matrix component without formamide collected using identical experimental parameters at the corresponding temperature was scaled to the formamide mixture spectrum via a feature without overlap with formamide features and subtracted prior to peak position and FWHM extraction. These cases are denoted with a ^M. For formamide features in mixtures where the formamide features lie on the tails of bands or on very wide bands without sharp features (e.g., the CH bend and CN stretch in the NH_2CHO:NH_3 mixture), a second-order polynomial was used to perform a local continuum subtraction. These cases are denoted with a ^P. For formamide features in mixtures where overlap with a strong matrix component band was very substantial and difficult to reliably subtract (e.g., the C=O stretch in the NH_2CHO:NH_3 mixture), only peak positions are given. These cases are denoted with a ^N. For formamide features that contain multiple peaks, all peak positions are given, and the FWHM of the strongest peak is given. However, if a weaker peak maximum occurs within the two half maximum values of the stronger peak (e.g., the CN stretch in the NH_2CHO:CO 15 K mixture), it is included in the FWHM. These cases are denoted with a ^B.
§ RELATIVE APPARENT BAND STRENGTHS OF FORMAMIDE IN PURE AND MIXED ICES
This appendix provides the relative apparent band strengths (η) of formamide, calculated via Equation <ref>, where the value of A' used in the calculations is the apparent band strength of the respective band in the pure amorphous formamide ice at 15 K (given in Table <ref>). Thus, the η value of each band in the pure ice at 15 K is unity. The integration ranges used to calculate the integrated absorbances are stated for each mixture individually, as the same integration ranges were not used for all mixtures due to shifting peak positions and FWHMs. Different integration ranges were used to calculate the integrated absorbances of the amorphous and crystalline pure formamide peaks for the same reason.
These η values can be used to calculate column densities or upper limits of formamide in a specific mixture and at a specific temperature by simply multiplying the corresponding relative apparent band strength by the appropriate apparent band strength in Table <ref>.
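For example, a column density in a H_2O-dominated ice could be estimated as in the sketch below; the integrated absorbance and pure-ice band strength are hypothetical placeholders, while the relative band strength of 1.63 for the CH bend in H_2O at 15 K is quoted in the main text.

```python
int_abs = 0.12        # hypothetical integrated absorbance of the CH bend (cm^-1)
A_pure = 9.2e-18      # hypothetical pure-ice band strength of the CH bend (cm molec^-1)
eta = 1.63            # relative apparent band strength in H2O at 15 K

# Column density of formamide in the mixed ice
N = 2.303 * int_abs / (eta * A_pure)
print(f"N = {N:.1e} molec cm^-2")
```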
§ QMS CALIBRATION OF AN INDEPENDENT TRI-DOSING LEAK VALVE SYSTEM AND MIXING RATIO DETERMINATION
§.§ Calibration procedure and mixing ratio determination
The new tri-dosing system mentioned in Section <ref> allows for simultaneous but independent deposition of gases and vapors via three leak valves, each connected to a separate gas line. Compared to our previous method in which gases and vapors were premixed in the desired ice ratio in a gas bulb and then dosed into the chamber through a single valve, the new method allows for codepositing multiple gases and vapors without experimental errors in the ratio caused by mixing gases with different volatilities in a single bulb or dosing gases that may have different flow, pumping, and substrate deposition rates through the same valve. Subsequently, it greatly improves the ability to create mixtures with precisely determined ratios of molecules with low volatilities like formamide, which is challenging in traditional premixing procedures. The benefits of independent multidosing systems were also described for similar systems with two leak valves in <cit.> and <cit.>.
There are several ways to calibrate such a system to ensure a certain ratio of ice components. One such method is calibrating the deposition rate on the substrate to a specific leak valve position with a specific pressure of the gas or vapor of choice in its manifold line. However, because formamide has a very low vapor pressure compared to liquids like H_2O and CH_3OH and tends to stick to and condense in various parts of the line, reproducing a specific line pressure throughout multiple experiments using this method is difficult. Therefore, to conduct a systematic and thorough IR characterization of formamide in a wide variety of ices with precisely constrained mixing ratios, a different method is necessary.
For this purpose, we calibrate molecules' ice deposition rates with the intensity of their mass signals during the deposition with a QMS. In this calibration procedure, a pure molecule is dosed at a constant rate into the chamber, with the substrate cooled to the desired deposition temperature and the IR spectrometer continuously collecting IR spectra, while the QMS continuously collects mass peak intensity values of selected mass-to-charge ratios (m/z) in the selected ion monitoring (SIM) mode. The IR spectrometer is used to measure the ice column density rather than the laser interference because the formamide deposition pressure does not remain stable over the long period of time necessary to generate multiple interference fringes (>18 hours), which is necessary to reliably extract a deposition rate. Conversely, a deposition rate can be extracted from integrated absorbance growth rates (obtained via a least-squares fit to the integrated absorbance over time) in ∼30 mins, during which time the formamide deposition rate remains stable (as indicated by the linearity of the integrated absorbance increase over time). The integrated absorbance growth rate for that molecule can then be correlated to a specific mass peak's signal intensity (typically the molecule's base peak) in the QMS (obtained via averaging the mass peak's signal intensity values collected during the deposition and simultaneous IR data collection). The integrated absorbance growth rate can then be converted to the ice column density growth rate, dN/dt, via the following equation if the band strength of the pure molecule, A, is known:
dN/dt = (2.303 / A) × d(∫ abs(ν) dν)/dt.
Table <ref> provides the peak used for the calibration of each pure molecule and its corresponding band strength and reference.
Via this method, a calibration curve relating a mass peak's signal intensity in the QMS to its column density growth rate can be determined, with the slope of this curve referred to here as a molecule's sensitivity (see Figure <ref> for an example of such a calibration). When starting a deposition, the leak valve can then be opened accordingly so that the mass signal of the molecule in the QMS corresponds to the desired column density growth rate. In this work, such calibration curves were completed for all molecules used in these spectra with a Spectra Microvision Plus QMS. The relationship between column density growth rate and QMS signal intensity is linear for all molecules within the deposition pressure ranges used (R^2 values of the linear fits ranged from 0.9699-0.9999 with an average of 0.9936).
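A minimal sketch of building and using such a calibration curve, with hypothetical QMS intensities and growth rates, might look like:

```python
import numpy as np

# Hypothetical calibration points: averaged QMS base-peak intensity (A) versus
# the column density growth rate derived from the IR data (molec cm^-2 s^-1)
qms_signal = np.array([1.0e-10, 2.1e-10, 4.0e-10, 7.9e-10])
dNdt = np.array([2.0e13, 4.3e13, 8.1e13, 1.6e14])

# The slope of the linear calibration curve is the molecule's sensitivity
slope, intercept = np.polyfit(qms_signal, dNdt, 1)

# During a deposition, convert the recorded signal trace into a column density
# by integrating the inferred growth rate over time
t = np.linspace(0.0, 1800.0, 600)                        # s
signal_trace = 5.0e-10 * np.ones_like(t)                 # hypothetical constant dosing
N_total = np.trapz(slope * signal_trace + intercept, t)  # molec cm^-2
print(f"N = {N_total:.1e} molec cm^-2")
```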
After the experiment, the mass signal data during the deposition can be converted via the equation from the calibration curve to a column density growth rate, which is then integrated over time to give the absolute column density of each species at the end of the deposition. However, in the case that some of the species in a given mixture share their strongest mass peaks and have no alternative strong peaks without overlap with the other mixture components (which is the case for several mixtures in this work), the individual column density growth rates must be extracted from the mass spectra by utilizing ratios of a given molecule's base peak to another mass peak that is not shared with any other molecules in a given mixture. For example, the mass spectrum of formamide contains a peak at 28, the base peak of CO. Thus, during the deposition of the formamide:CO mixture, the 28 m/z signal contains contribution from both formamide and CO. The contribution of formamide to the signal at 28 m/z was calculated by dividing the signal at 45 m/z (which, in this mixture, only formamide contributed to) by the ratio of the 45 and 28 m/z peaks during pure formamide deposition. This calculated contribution was then subtracted from the 28 m/z signal to yield the CO 28 m/z signal.
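The fragmentation correction described above can be written compactly; in this sketch, the traces and the pure-formamide 45/28 peak ratio are hypothetical values.

```python
import numpy as np

signal_45 = np.array([3.0e-12, 3.1e-12, 3.0e-12])   # m/z = 45 trace: formamide only
signal_28 = np.array([8.0e-10, 8.2e-10, 8.1e-10])   # m/z = 28 trace: CO + formamide fragment
ratio_45_to_28 = 2.4                                 # 45/28 ratio during pure formamide dosing

# Formamide's contribution at m/z = 28, inferred from its uncontaminated 45 signal
formamide_28 = signal_45 / ratio_45_to_28

# The remainder at m/z = 28 is attributed to CO
co_28 = signal_28 - formamide_28
```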
In order to estimate the error of the calculated column density of each component and, subsequently, the mixing ratios in each ice, multiple sources of error have been considered. These are discussed in the following subsections.
§.§ Ion interference effect
During the dosing of multiple species into the chamber, ion interactions within the instrument, such as ion-molecule interactions or ions interacting with the QMS filament or rods, can affect a molecule's sensitivity. Such interactions between two different species can cause their sensitivities to deviate from the values determined in the calibration of each species in pure form. This phenomenon is often referred to as the ion interference effect, and it complicates using a mass spectrometer to quantify gases or vapors in a mixture <cit.>.
The magnitude of this effect is highly dependent on the species as well as the instrument. It increases with total pressure and decreases for a given species as its proportion in a mixture increases <cit.>. Thus, the sensitivities that are most affected by this effect are those of species that are present in the lowest proportions in a mixture. Given that our formamide dosing pressure was in the range of a couple 10^-9 mbar and that the intended ratio of formamide to matrix components was ∼5:100 in the case of binary mixtures and 5:100:25 in the case of tertiary mixtures, we treated the interference effect of formamide on the matrix components as negligible and accounted for ion interference only in the formamide signal. While the formamide absolute column densities are necessary to calculate its relative band strengths (see Section <ref>), the absolute column densities of the matrix components are not needed to find any values other than the mixing ratios.
In order to quantify the ion interference effect on formamide in each mixture, at the start of each deposition, formamide was first dosed alone, and its mass signal was given ∼5 min to stabilize before the other matrix components were introduced into the chamber. Although this meant that each experiment started with a very brief deposition of pure formamide, the deposition rate of formamide was so slow in all of the experiments (on the order of tens of monolayers per hour) that this brief pure deposition was usually not even noticeable above the noise level in the IR spectra. Then, the ratio between formamide's signal before and after the matrix molecules were added to the chamber was used as a correction factor to remove the ion interference effect from formamide's signal. An example of this correction is shown in Figure <ref> for the formamide:CH_3OH mixture, which had the highest correction factor of all the mixtures (1.11).
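In code, this correction could be applied roughly as follows; the signal values are hypothetical, and the before/after ratio plays the role of the correction factor (e.g., 1.11 for the CH_3OH mixture).

```python
import numpy as np

# Hypothetical m/z = 45 traces: formamide dosed alone, then with the matrix gases
signal_before = np.array([3.00e-12, 3.02e-12, 2.98e-12])
signal_after = np.array([2.70e-12, 2.72e-12, 2.69e-12])

# Ion interference correction factor and corrected formamide trace
correction = np.mean(signal_before) / np.mean(signal_after)
corrected_signal = correction * signal_after
print(f"correction factor = {correction:.2f}")
```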
The ion interference effect on formamide was noticeable in all of the mixtures where the major matrix component was polar, while it was not detected above the noise level in the mixtures in which the major matrix component was apolar. In order to provide a conservative estimate of the error caused by the ion interference effect on the calculated column density of formamide, the percent difference of the formamide column density before and after the ion interference effect correction was obtained for all mixtures in which the effect was detected above the noise level. The average percent difference was ∼5%, with the highest percent difference being that of the formamide:CH_3OH mixture (∼10%). To avoid underestimating the error in any of the mixtures, we use this maximum error, ∼10%, as the uncertainty in the column density caused by the ion interference effect.
The sensitivity of a QMS can also drift over time. However, such drift is typically only significant over timescales spanning several months to a couple years, and it is more significant for absolute sensitivities than for relative sensitivities <cit.>. The contribution of this drift to the method error was assumed to be negligible here given that all the formamide mixture spectra were collected within a span of two months, and that the calibration curves were usually either determined within a few days of their use to create an ice mixture or were frequently updated with new values that were consistent with the fits to the previous values.
§.§ Error calculation
In this method of determining ice column densities, multiple sources of error must be considered. First, there is the error in the method used to determine the ice column density growth rate (dN/dt). This error can be estimated by propagating uncertainties of the variables in Equation <ref>. For all ices, the integrated absorbance growth rate uncertainty is estimated to be 10%, as mentioned in Section <ref>. For formamide, the uncertainty in the band strengths reported in this work is estimated to be 15% (also see Section <ref>). For the matrix components, literature band strength values were used (see Table <ref>). However, in the literature, variations between the band strengths reported in different publications can be large (e.g., ). For this reason, we estimate a 25% uncertainty for the literature band strengths used.
We then determine the uncertainty of converting the integrated QMS measurement to a column density experimentally, by finding the difference between the column density calculated from the QMS signal and that calculated from the integrated IR absorbance at the end of a pure molecule's IR measurement. Comparing these differences in formamide, H_2O, and CO deposition experiments resulted in an average error of ∼2.5%. To be conservative, we estimate the error from converting the QMS measurement to a column density to be 5%.
Propagating all of these uncertainties, along with the 10% uncertainty from the ion interference effect for the formamide measurements, results in an uncertainty of ∼21% for the formamide column densities and ∼27% for the matrix column densities in the ice mixtures.
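Adding the quoted fractional uncertainties in quadrature reproduces these numbers, as the short check below shows.

```python
import numpy as np

# Fractional uncertainties quoted in the text
err_int_abs = 0.10        # integrated absorbance growth rate
err_A_formamide = 0.15    # formamide band strengths (this work)
err_A_matrix = 0.25       # literature band strengths of the matrix species
err_qms = 0.05            # QMS-to-column-density conversion
err_interference = 0.10   # ion interference effect (formamide only)

err_formamide = np.sqrt(err_int_abs**2 + err_A_formamide**2 + err_qms**2 + err_interference**2)
err_matrix = np.sqrt(err_int_abs**2 + err_A_matrix**2 + err_qms**2)
print(f"formamide: {err_formamide:.0%}, matrix: {err_matrix:.0%}")   # ~21% and ~27%
```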
|
http://arxiv.org/abs/2307.04431v1 | 20230710091152 | PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection of 3C Components with a Robotic Line Scanner | ["Hongpeng Chen", "Shengzeng Huo", "Muhammad Muddassir", "Hoi-Yin Lee", "Anqing Duan", "Pai Zheng", "Hongsheng Pan", "David Navarro-Alarcon"] | cs.RO | ["cs.RO"] |
PSO-Based Optimal Coverage Path Planning for Surface Defect Inspection of 3C Components with a Robotic Line Scanner

Hongpeng Chen^1, Shengzeng Huo^1, Muhammad Muddassir^2, Hoi-Yin Lee^1, Anqing Duan^1, Pai Zheng^1, Hongsheng Pan^3, and David Navarro-Alarcon^1 (corresponding author)

^1 Faculty of Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong
^2 Faculty of Construction and Environment, The Hong Kong Polytechnic University, Kowloon, Hong Kong
^3 Shanghai Microintelligence Technology Co. Ltd, Shanghai, China
The automatic inspection of surface defects is an important task for quality control in the computers, communications, and consumer electronics (3C) industry.
Conventional devices for defect inspection (viz. line-scan sensors) have a limited field of view; thus, a robot-aided defect inspection system needs to scan the object from multiple viewpoints.
Optimally selecting the robot's viewpoints and planning a path is known as coverage path planning (CPP); solving it enables inspection of the object's complete surface while reducing the scanning time and avoiding misdetection of defects.
However, CPP strategies for robotic line scanners have not been sufficiently studied.
To fill this gap in the literature, in this paper, we present a new approach for robotic line scanners to detect surface defects of 3C free-form objects automatically.
Our proposed solution consists of generating a local path by a new hybrid region segmentation method and an adaptive planning algorithm to ensure the coverage of the complete object surface.
An optimization method for the global path sequence is developed to maximize the scanning efficiency.
To verify our proposed methodology, we conduct detailed simulation-based and experimental studies on various free-form workpieces, and compare its performance with a state-of-the-art solution.
The reported results demonstrate the feasibility and effectiveness of our approach.
August 12, 2023
===================
§ INTRODUCTION
Defect inspection is essential to quality control, process monitoring, and non-destructive testing (NDT) in the manufacturing industry (Chen et al., chen2022novel; Chen & Yang, chen2020arrival; Luo & He, luo2016cost).
Specifically, manufacturing processes in the 3C industry are highly sophisticated and demand detailed and accurate defect inspection.
Traditional defect inspection approaches typically rely on visual inspection of an intermediate/finished product by a quality control or quality check inspector.
This sole dependence on human workers is a problem for regions and countries with a shortage of manpower (Liu et al., liu2021task; Ming et al., ming2020comprehensive). Furthermore, human-based inspection is inherently subjective, hence, prone to errors.
To address these problems, various researchers have reported automatic surface inspection systems for free-form components (Li et al., li2022five; Yang et al., yang2023template).
Recently, automatic detection systems equipped with an industrial-grade line scanner, depth camera, and robotic manipulator have been developed to offer effective and rapid non-contact measurement (Huo et al., huo2021sensor; Liu et al., liu2022coverage).
During the defect inspection task, the robotic inspection system scans the surface of the target workpiece exhaustively from different viewpoints. Planning such an inspection path can be considered a CPP problem (Molina et al., molina2017detection).
Estimating a CPP strategy for automatic inspection consists of three tasks: (1) determining the viewpoints to measure the workpiece’s surfaces, (2) generating a sequence to visit all viewpoints in a time and kinematically optimal way, and (3) planning a feasible path to travel to each viewpoint.
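To make task (2) concrete, the sketch below orders a set of candidate viewpoints with a simple nearest-neighbor heuristic; this is only an illustration of the sequencing step, not the optimization method proposed in this paper, and the viewpoint coordinates are hypothetical.

```python
import numpy as np

def nearest_neighbor_sequence(viewpoints, start=0):
    # Greedy ordering of viewpoint positions (N x 3 array) to shorten total travel
    remaining = list(range(len(viewpoints)))
    order = [remaining.pop(start)]
    while remaining:
        last = viewpoints[order[-1]]
        dists = [np.linalg.norm(viewpoints[i] - last) for i in remaining]
        order.append(remaining.pop(int(np.argmin(dists))))
    return order

# Hypothetical viewpoints on a workpiece surface (x, y, z in mm)
vps = np.random.rand(12, 3) * 100.0
print(nearest_neighbor_sequence(vps))
```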
Additional criteria can be defined while planning the coverage path, including full coverage of the target surfaces and the resulting cycle-time for the inspection task (Glorieux et al., glorieux2020coverage).
The existing CPP methods can be divided into two coarse categories: two-dimensional and three-dimensional methods.
Various researchers reported two-dimensional (2D) CPP for mobile robots in floor cleaning, bridge crack monitoring, and weed mowing tasks (Almadhoun et al., almadhoun2016survey; Galceran & Carrreras, galceran2013survey).
Veerajagadheswar et al. (veerajagadheswar2020motion) developed a motion planner for floor cleaning.
Polyomino tiling theory was adapted to define reference coordinates and generate a navigation path that maximizes area coverage; real-time experiments in different scenarios tested the planner on a Tetris-inspired shape-shifting robot. La et al. (la2013mechatronic) proposed an autonomous robotic system for precise and efficient bridge deck inspection and identification, where boustrophedon decomposition was applied to solve the CPP problem.
Lim et al. (lim2014robotic) developed an automatic detection and mapping system for bridge crack inspection and maintenance; they used an improved genetic algorithm to search for a CPP solution that minimizes the number of turns and the detection time while achieving an efficient bridge inspection.
Pour Arab et al. (pour2022complete) presented a CPP algorithm providing optimal movements over an agricultural field; first, tree exploration was applied to find all potential solutions meeting predefined requirements, and then a similarity comparison was used to select the best solution, minimizing overlaps, path length, and overall travel time.
It must be remarked that 2D CPP methods cannot be adopted directly for a three-dimensional (3D) CPP problem, as the level of complexity in 3D space is much higher than in 2D space.
In most 2D applications, a complete planner map is available during planning.
Most 3D CPP methods have to plan the paths from partial or occluded 3D maps.
A CPP method for 3D reconstruction based on building information modeling used a robot arm and a lifting mechanism for wall painting at construction sites (Zhou et al., zhou2022building).
It consists of a two-stage coverage planning framework: a global planner that optimally generates the waypoint sequence and a local planner that provides the mobile base pose.
The authors reported that this method could ensure coverage of all waypoints and improve painting efficiency.
Hassan and Liu (hassan2019ppcpp) proposed an adaptive path planning approach capable of updating the paths when unexpected changes occur while still attaining the coverage goal.
Zbiss et al. (zbiss2022automatic) reported a path-planning method for collaborative robotic car painting.
The proposed algorithm relies on computational geometry and convex optimization; Morse cellular decomposition and boustrophedon algorithms are applied to generate a feasible, collision-free trajectory.
A CPP method based on a LiDAR-equipped unmanned aerial vehicle (UAV) was developed for bridge inspection (Bolourian & Hammad, bolourian2020lidar).
It combined a genetic algorithm and an A* algorithm to find the shortest obstacle-free path, yielding a near-optimal and feasible inspection path.
Recent studies on 3D CPP for industrial product quality inspection have focused on achieving full surface coverage of the workpiece with minimum inspection time:
Li et al. (li2018path) demonstrated a robust CPP method for aerospace structures based on their geometric features. Path planning relied on constructing a feature graph through a Voronoi diagram; a search over this graph then decided the inspection sequence, and a convex-hull-based approach was applied to avoid collisions.
Glorieux et al. (glorieux2020coverage) presented a targeted waypoint sampling strategy with the shortest inspection time for dimensional quality inspection of sheet metal parts.
Liu et al. (liu2022coverage) developed an enhanced rapidly exploring random tree (RRT*) method and integrated the inspection errors and the optimal number of viewpoints into measurement cost evaluation for higher precision in quality inspection.
Huo et al. (huo2021sensor) applied the nearest neighbor search algorithm to find a near-shortest scanning path aiming at convex free-form specular surface inspection.
Despite numerous recent developments, CPP for free-form surface inspection remains an open research problem.
There are very few CPP solutions for line scanning robotic systems (Kapetanovic et al., kapetanovic2018side).
Compared with area-scan sensors, a line-scanning sensor is more suitable for defect inspection in industrial/manufacturing applications due to higher spatial resolution and lower production costs (Steger & Ulrich, steger2021camera; Wang et al., wang2022new).
Unlike a common area-scan camera or other optical sensors that only work at discrete positions, a line scanner captures only a single line of pixels at a time and must be moved continuously by a robotic manipulator along the coverage path. These characteristics render many traditional CPP methods ineffective. Therefore, developing a novel CPP method for an automatic line scanning system is both imperative and advantageous.
This paper aims to overcome the limitations of existing CPP methods for surface defect inspection. We focus on defect detection for free-form surfaces of 3C workpieces based on a robotic line scanning system.
This robotic system utilizes a 6-DOF robot manipulator carrying a line scanner to execute a full-coverage inspection path and a depth sensor to localize the workpiece.
The proposed CPP method for robotic line scanning inspection consists of two parts: local path definition for accurate defect inspection and global path optimization for minimum scanning time.
It incorporates the detailed requirements of 3C components surface inspection and the specific characteristics of a robotic line scanning system.
The main contributions of this paper include:
* A new region segmentation method and an adaptive region-of-interest (ROI) algorithm to define the local scanning paths for free-form surfaces.
* A Particle Swarm Optimization (PSO)-based global inspection path generation method to minimize the inspection time.
* Detailed simulations, experiments, and comparisons to validate the proposed method.
The rest of this article is organized as follows.
Section “sec:ccp_for_inspection" describes the path planning problem for 3C component surface detection.
Section “sec:methods" presents the proposed CPP approach in detail. Section “sec:results" shows the specific simulations, experiments, and comparisons on 3C components to validate the method's feasibility.
Finally, Section “sec:conclusion" concludes this article and discusses the limitations and future direction.
§ COVERAGE PATH PLANNING FOR INSPECTION
The CPP problem can be divided into two subproblems: 1) local path definition, which generates view regions and partial scanning paths that ensure precise scanning and full coverage of 3C free-form workpieces, and 2) global path planning, which aims to find an optimal or near-optimal sequence of all local paths (Gerbino et al., gerbino2016influence).
The key to the first sub-problem is determining the position and orientation of the pair of viewpoints at the two ends of each local path (the path between two consecutive viewpoints). The line-scan camera only captures an image line of pixels at a time, so relative motion between the camera and the object, perpendicular to the line of pixels, is necessary for 2D image acquisition during the defect inspection task (see Fig. <ref>). In this automatic scanning system, the camera is moved by a robotic manipulator along the stationary object, and the direction of the depth of view (DOV) of the camera should be perpendicular to the scanned region to ensure image quality. Therefore, the scanned area needs to be kept as flat as possible even though workpiece models include many different geometric features (see Fig. <ref>). In addition, each local path consists of two viewpoints at its ends, and the camera at the robotic end-effector scans from one viewpoint to the other to inspect the surface defects of the region corresponding to this local path. The motion between these two waypoints is required to follow one regular direction, and their orientations need to remain as unchanged as possible to ensure the quality of the acquired images. Besides, this sub-problem is also affected by some critical factors, such as the field of view (FOV) and the DOV (Liu et al., liu2022coverage).
The global path planning problem is concerned with finding the sequence and path connecting the selected viewpoints so as to minimize the total travel cost. The generated coverage path needs to reach all local paths through the shortest connection path. In other words, the objective is to find the shortest kinematically feasible path for the robot manipulator to target the scanning sensor at each viewpoint precisely through all local paths, without colliding with any obstacles in the workspace.
The proposed method should provide a feasible coverage path that traverses all the local paths with minimum inspection time, efficiently and automatically. Moreover, it needs to consider the diverse measurement directions of the local paths to ensure high detection precision. Generally, many local paths are needed to evaluate the surface quality of a 3C component. To obtain precise raw defect images, every scanning parameter is significant and should be set by an automatic method rather than according to the workers' experience and opinion.
§ METHODOLOGY
A CPP generation and optimization approach is presented based on the robotic line scanning system (see Fig. <ref>). This includes i) a new hybrid region segmentation method based on the random sample consensus (RANSAC) and K-means clustering methods; ii) an adaptive ROI method to define the local measurement paths; and iii) a PSO-based global optimization approach for the minimum inspection time. This optimal path is then implemented for offline programming and surface detection, thereby improving the efficiency of the inspection of 3C components.
To extract the workpiece's geometric features, the 3D model is converted to a point cloud. The sampling procedure is based on selecting a series of points randomly and uniformly from the model to form a point cloud that can be used to segment and process all surfaces of the workpiece. The acquired point cloud O consists of points p_i=[x_i,y_i,z_i], i=1,2,..., m (m is the total sampling number of O), which preserves the geometric information of all faces.
§.§ Hybrid region segmentation based on RANSAC and K-means clustering
The image acquisition characteristics of line-scan cameras necessitate the preservation of flat scanning areas to ensure optimal image quality. Therefore, it becomes crucial to employ an effective segmentation method to divide the entire surface into flat regions. In this study, we propose a hybrid region segmentation method specifically designed for the surface features of 3C components. This method leverages the RANSAC method and enhanced K-means clustering to achieve accurate segmentation. The RANSAC method is used to detect a region with planar geometry. It can also remove some points with minimum curvature from the entire point cloud, enhancing the computation speed of the whole procedure (Su et al., su2022building). Furthermore, it can effectively remove outliers, thereby improving the accuracy of the subsequent K-means clustering process.
Here, we use RANSAC to partition O first. It includes two steps: producing a hypothesis from random samples and verifying this hypothesis against the remaining data. Given different hypothesis geometrical models, RANSAC can identify planes, spheres, cylinders, and cones (Xu et al., xu2015investigation). Since flat regions are required for precise line scanning, RANSAC uses the equation of a plane as the feature model in the proposed system. It selects N sample points of O and estimates the plane model parameters from those sample points. A point is selected as an inlier if the distance between the point and the plane is less than a fixed threshold, and the shape that contains the greatest number of inlier points is split off and extracted after multiple iterations. The plane model can be represented as
aX+bY+cZ+d=0
where [a,b,c,d]^T is the plane model parameter, and [X,Y,Z]^T denotes any point in the 3D coordinates.
This method can extract a nearly planar point cloud region C_0 when the best plane model has been identified. RANSAC does not require complex optimization or high memory resource so that we can obtain C_0 rapidly. However, the remaining point cloud O^r with the size η^r cannot be segmented clearly by this approach since O^r consists of bevels, curved surfaces, and other complex geometrical information.
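As an illustration of this step (not the exact implementation used in our system), the planar region C_0 and the remainder O^r can be obtained with an off-the-shelf RANSAC plane fit, for example via the Open3D library; the file name, distance threshold, and iteration count below are placeholder values.
```python
import open3d as o3d

# Load the uniformly sampled point cloud O of the workpiece (path is a placeholder).
pcd = o3d.io.read_point_cloud("workpiece.pcd")

# RANSAC plane fit: returns [a, b, c, d] of aX + bY + cZ + d = 0 and the inlier indices.
plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.5,   # assumed, in cloud units
                                            ransac_n=3,
                                            num_iterations=1000)

C0 = pcd.select_by_index(inlier_idx)                 # near-planar region C_0
O_r = pcd.select_by_index(inlier_idx, invert=True)   # remaining points O^r for clustering
print("Plane model [a, b, c, d]:", plane_model)
```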
The traditional K-means clustering methods regard region segmentation as a clustering analysis problem over surface geometric features. They apply the positions and surface normals of the point cloud for segmentation, which is not appropriate for workpieces with large variations in curvature or with many bevels and corners (Li et al., li2018leaf; Liu et al., liu2020method). Therefore, additional factors should be considered to describe the features of the object. An enhanced K-means clustering is proposed in this paper to process O^r. In the standard K-means method, the number of clusters N dramatically affects the performance, and many trials are required to find a near-optimal N in classical methods (Juang & Wu, WOS:000290138700014). In the developed method, we apply not only the corresponding surface normals n_i^r=[n_ix^r,n_iy^r,n_iz^r] of the points in O^r but also the Gaussian curvature K_i^r and mean curvature H_i^r of each point p_i^r in O^r as inputs of the enhanced K-means clustering. In addition, a feasible weighting factor ω among n_i^r, K_i^r, and H_i^r is determined through manual experiments. K_i^r is the product of the principal curvatures of p_i^r, combining the maximum and minimum curvatures. A positive Gaussian curvature value means the surface is locally either a summit or a valley, while a negative value indicates that the surface locally consists of saddle points; zero Gaussian curvature indicates that the surface is flat in at least one direction, like a plane or a cylinder (Li et al., li2019automated). The mean curvature of a surface describes the curvature of an embedded surface in Euclidean or other ambient spaces. The curvature of a point is represented by c_i^r=[K_i^r,H_i^r]. By adding these two parameters, the clustering quality of the enhanced K-means method is improved, and the geometric feature of a point of O^r is represented as I_i^r = [n_i^r,c_i^r]. Besides, we present a method to automatically adjust N, since N affects the classification result and traditional techniques fix N in advance, whose drawback is poor flexibility. The algorithm relies on a two-looped 1D search, with the inner loop performing similarity comparison and the outer loop iterating over N. The iteration ends when the largest intra-class difference is smaller than a threshold T. The entire procedure of this enhanced K-means method is illustrated in Algorithm <ref>.
For the outer loop, we represent the feature vectors of the N-cluster set as
Q_j=[q_n,q_c]
q_n=[q_1,q_2,q_3]
q_c=[q_4,q_5]
Q_j is a 5-dimensional vector (j=1,2,...,N). All of them are initialized with random values. Afterward, the procedure enters the inner loop, composed of two steps: 1) similarity comparison and 2) updating. In the first step, cosine similarity is used to assess the similarity between I_i^r and Q_j; it is a standard measure of similarity between two sequences of numbers in data analysis (Kiricsci et al., kiricsci2022new). The similarity α_ij is defined as follows:
α_ij = ω_1 (n_i^r · q_n) / ( | n_i^r | | q_n | ) + ω_2 (c_i^r · q_c) / ( | c_i^r | | q_c | )
where ω_1 and ω_2 are the weighting factors for α_ij, set to 0.6 and 0.4, respectively, according to many experiments.
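For concreteness, a minimal Python sketch of this weighted similarity is given below; the feature layout (three normal components followed by the two curvatures) and the weights ω_1 = 0.6 and ω_2 = 0.4 follow the description above, while the function name is ours.
```python
import numpy as np

def similarity(I_i, Q_j, w1=0.6, w2=0.4):
    """Weighted cosine similarity alpha_ij between a point feature
    I_i = [n_x, n_y, n_z, K, H] and a cluster feature Q_j = [q_n, q_c]."""
    n, c = np.asarray(I_i[:3]), np.asarray(I_i[3:])
    q_n, q_c = np.asarray(Q_j[:3]), np.asarray(Q_j[3:])
    cos_n = np.dot(n, q_n) / (np.linalg.norm(n) * np.linalg.norm(q_n))
    cos_c = np.dot(c, q_c) / (np.linalg.norm(c) * np.linalg.norm(q_c))
    return w1 * cos_n + w2 * cos_c
```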
Then, the method finds the cluster C_j with the smallest α_ij and assigns the corresponding p_i^r and I_i^r to it. The next step is to determine whether the classification has met the termination condition. For each cluster C_j, the termination parameter λ_j is calculated from the maximum intra-class difference D_j as:
λ_j = 0, if D_j > T;  λ_j = 1, otherwise,
where D_j = max_i α_ij .
β_t represents the sum of λ_j over all regions C_j at iteration t. If β_t = N, the current segmentation is satisfactory and the algorithm can stop iterating. Otherwise, the procedure continues. At this stage, the search direction should be considered, since the method includes two loops: the inner one that compares similarity and clusters for a given N, and the outer one that gradually increases the value of
N. The change relies on the behaviour of β_t. If the performance deteriorates at iteration step t (i.e., β_t is smaller than β_t-1),
the inner loop stops immediately and a new outer loop starts with N←N+1, because the current N is not ideal. If the performance improves (i.e., β_t is larger than β_t-1), the search within the inner loop continues.
Before switching to the next inner iteration, all feature vectors Q_j=[q_n,q_c] are updated to improve their representativeness:
q_n = (1/η_j ∑_i=1^η_j n_ij) / | 1/η_j ∑_i=1^η_j n_ij |
q_c = (1/η_j ∑_i=1^η_j c_ij) / | 1/η_j ∑_i=1^η_j c_ij |
where n_ij and c_ij are the i-th normal and curvature feature vectors in C_j, respectively, and η_j is the size of C_j.
The proposed algorithm only takes the limited features of the region C_j into consideration, which can lead to a high sparsity of the clustered points within the same region. Therefore, Euclidean cluster extraction is implemented as a post-processing step to verify if it is necessary to subdivide the region C_j into two new regions according to the location of the points in it.
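A possible form of this post-processing step is sketched below; DBSCAN is used here as a stand-in for Euclidean cluster extraction, and the eps and min_points values are assumptions for illustration.
```python
import numpy as np
import open3d as o3d

def split_if_disconnected(points, eps=2.0, min_points=10):
    """Split a feature-based cluster C_j into spatially connected parts.
    DBSCAN acts here as a stand-in for Euclidean cluster extraction;
    eps (in cloud units) and min_points are assumed values."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    labels = np.asarray(pcd.cluster_dbscan(eps=eps, min_points=min_points))
    # Noise points (label -1) are dropped in this simple sketch.
    return [points[labels == k] for k in range(labels.max() + 1)]
```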
§.§ Adaptive ROI Based Path Planning
The local paths are generated according to the proposed planning method, which takes the segmented region C_j as input. Due to the synchronization of the line camera's scanning inspection and the robot's motion, every viewpoint in these local paths should be produced through a feasible method for accurate detection, and all local paths are required to cover the whole region C_j of the workpiece. Hence, this part presents an adaptive ROI method for generating local paths that aims to adapt scan paths and viewpoints to the various shapes of objects.
Since the scanning sensor captures a horizontal line image, the scanning coverage can be thought of as a cuboid when the system is moving linearly, which contains the DOV V_D, the FOV V_F, and the moving direction V_L (see Fig. <ref>). Besides, the key to this approach is to determine the position μ =[x,y,z] and pose i=[d⃗,l⃗] of the viewpoints (v^p,v^p*) at both ends of a local path G_t, t=1,2,...,U. The pose i is described by the direction d⃗ of V_D and the direction l⃗ of V_L.
To make the geometric scanning model effective and preserve the accuracy of the system, our algorithm further segments every C_j into three sub-regions W_jf, f=1,2,3. Due to the irregular shape of each C_j, we stipulate that C_j is divided into three sub-regions W_jf evenly along the direction k⃗ of the longest extent of C_j, and the scanning motion is also along k⃗ for every area (l⃗=k⃗). In addition, we define d⃗ as the reverse direction of the surface normal w⃗_jf of W_jf (d⃗=-w⃗_jf).
Thus, the corresponding μ_1, μ_2 are located at:
μ = τ - w⃗_jf |V_D|
The center of the sub-region W_jf is denoted c_jf=[c_x,c_y,c_z], and the intersections τ_1, τ_2 of the edge of W_jf with the line through c_jf along k⃗ are taken as the inspection points of the viewpoints v^p, v^p* at both ends of a local path G_t on this sub-region surface. |V_D| is the magnitude of V_D.
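The viewpoint construction for one sub-region can be summarised by the following sketch, which implements μ = τ - w⃗_jf|V_D| with d⃗ = -w⃗_jf and l⃗ = k⃗; the variable names are ours.
```python
import numpy as np

def local_path_viewpoints(tau1, tau2, w_jf, k_dir, dov):
    """Viewpoints of one local path G_t on sub-region W_jf.
    tau1, tau2 : inspection points on the edge of W_jf along k through c_jf
    w_jf       : unit surface normal of W_jf;  k_dir : scanning direction
    dov        : |V_D|, stand-off distance given by the depth of view."""
    w = np.asarray(w_jf) / np.linalg.norm(w_jf)
    mu1 = np.asarray(tau1) - w * dov       # mu = tau - w_jf * |V_D|
    mu2 = np.asarray(tau2) - w * dov
    d = -w                                  # viewing direction opposes the normal
    l = np.asarray(k_dir) / np.linalg.norm(k_dir)
    return (mu1, mu2), (d, l)
```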
§.§ PSO-based global path optimization
Based on the local path definition in the previous step, we need to find an optimal sequence of all local paths to generate a complete scanning path for the whole free-form workpiece surface. We should consider how to minimize the robot's total motion time under a constant sensor velocity during the inspection task. According to practical requirements, the robotic manipulator should complete the scanning inspection task through all pre-defined viewpoints. This sequence optimization problem can be regarded as a Traveling Salesman Problem (TSP) whose goal is a path with the shortest time (Claro et al., claro2023energy). The TSP is a combinatorial optimization problem and is NP-hard. The global path planning problem can be formulated as
min{ ∑_t=1^U T_t^scanning + ∑_s=1^U-1 T_s^across }
where T_t^scanning is the time taken to traverse local path G_t, T_s^across is the travel time from G_t to G_t+1, and U represents the total number of local paths. The travel time of the robot manipulator's end-effector is determined by the straight-line distance between two viewpoints, given the constant speed of movement. In contrast to the general TSP, our scenario requires sequential traversal of the two adjacent viewpoints within the same local path to ensure optimal inspection performance. This constraint is imposed due to the limitations of region segmentation and the necessity of the adaptive ROI local path definition. The constraint can be summarized as
T_t^scanning(G_t) ∈ { T(v_t^p→ v_t^p*), T(v_t^p*→ v_t^p) } ,
T_s^across(G_t,G_t+1) ∈ { T(v_t^p→ v_t+1^p), T(v_t^p→ v_t+1^p*), T(v_t^p*→ v_t+1^p*), T(v_t^p*→ v_t+1^p) } .
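For clarity, a schematic evaluation of this objective for a candidate visiting order is sketched below; the constant end-effector speed is the value quoted later in the case study, and the encoding of the sequence is an assumption for illustration.
```python
import numpy as np

SPEED = 0.05  # m/s, constant end-effector speed (value used in the case study)

def travel_time(p, q):
    """Straight-line travel time between two viewpoint positions."""
    return np.linalg.norm(np.asarray(p) - np.asarray(q)) / SPEED

def total_inspection_time(order, local_paths):
    """Objective value for a candidate solution.
    local_paths[t] = (v_p, v_p_star); order is a list of (t, reversed) pairs,
    where `reversed` selects the traversal direction of G_t."""
    total, prev_end = 0.0, None
    for t, reverse in order:
        a, b = local_paths[t]
        start, end = (b, a) if reverse else (a, b)
        if prev_end is not None:
            total += travel_time(prev_end, start)   # T^across
        total += travel_time(start, end)            # T^scanning
        prev_end = end
    return total
```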
Prior studies on this problem include branch-and-bound, linear programming, and dynamic programming methods (Shang et al., shang2020co; Xu et al., xu2022path). However, with an increasing number of targets, computing a feasible path becomes exponentially more difficult, and obtaining the global optimal solution becomes more challenging. Different heuristic algorithms have been developed for the TSP, including simulated annealing, genetic algorithms, ant colony optimization, the A* algorithm, etc. (Abualigah & Diabat, abualigah2022improved; Ghali et al., ghali2023genetic). In the proposed method, a PSO-based approach is used to solve the TSP owing to its flexibility. After selecting the shortest path, the optimal global path sequence is obtained in this step.
In PSO (Karim et al., karim2021hovering), a swarm of particles is used to describe the possible solutions. Every particle ξ is associated with two vectors in a D-dimensional space, i.e.,
the velocity vector V_ξ=[V_ξ^1,V_ξ^2,...,V_ξ^D] and the position vector X_ξ=[X_ξ^1,X_ξ^2,...,X_ξ^D]. Both are initialized with random vectors. During the PSO process, the velocity and position of particle ξ on dimension d are updated as (Zhan et al., zhan2009adaptive):
V_ξ^d= ω V_ξ^d+c_1rand_1^d(pBest_ξ-X_ξ^d)
+ c_2rand_2^d(gBest-X_ξ^d)
X_ξ^d= X_ξ^d+V_ξ^d
where ω represents the inertia weight, c_1 and c_2 are the acceleration coefficients, and rand_1^d and rand_2^d are random numbers within [0,1]. pBest_ξ is the position with the best fitness value found so far by the ξth particle and gBest is the best position found globally.
The main steps of PSO are:
* Initialize all particles, including their velocity and position.
* Establish the fitness function and calculate the fitness value of each particle,
* Update the pBest_ξ and gBest.
* Update the velocity and position of each particle according to (10) and (11).
* Increase the number of iterations, go to step 3, and repeat until the termination condition is met (a schematic update step is sketched below).
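A minimal sketch of one update step is given below; the inertia weight and acceleration coefficients are assumed values, and for the TSP the continuous particle positions must additionally be mapped to a visiting order (e.g. by random-key sorting), which is omitted here.
```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration for all particles.
    X, V, pbest : arrays of shape (n_particles, D); gbest : array of shape (D,).
    w, c1, c2 are assumed values of the inertia weight and acceleration coefficients."""
    r1 = np.random.rand(*X.shape)
    r2 = np.random.rand(*X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    return X, V
```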
§ CASE STUDY
To illustrate the performance of the proposed method, we provide two case studies for simulation tests (Case 1: a camera lens, Case 2: a computer fan) and two case studies for experimental evaluation (Case 3: a tablet back cover, Case 4: the upper part of a computer mouse) on 3C component surface inspection. A state-of-the-art CPP method is also used for comparison with the developed method in “ssec:case_study".
§.§ Case study setup
Fig. <ref> shows the experimental setup for evaluating the proposed methods.
A custom-made end-effector housed the defect inspection system consisting of a line scanning sensor (Hikvision MV-CL041-70GM camera) and a uniform line illumination source (TSD-LSH230200-B from TSD company).
The Intel RealSense L515 LiDAR camera was mounted on the top of the workspace to capture the real-time stream of point clouds.
The pose of the workpiece was estimated using the point clouds from LiDAR.
An analog control box with a high-power strobe ensures an adjustable and stable voltage for the light source.
The system consisted of a UR5 manipulator from Universal Robots to manipulate the end-effector in order to scan the workpiece automatically.
The entire automated line scanning framework is based on ROS on a Linux PC, which can simultaneously monitor the sensors (line scanner, depth sensor) and control the actuator (manipulator).
The line velocity and acceleration of the manipulator's end-effector were empirically set to 0.05 m/s and 0.5 m/s^2, respectively.
During trajectory execution, the robot manipulator followed a constant line speed to maintain consistency of image acquisition (the acquisition line rate of the scanner is 3000 line/s).
Table <ref> summarizes the other parameters for the line scanning system used for the experiment.
§.§ Path generation and defect inspection
Fig. <ref> presents four 3C component models.
Each 3D mesh model (or CAD model) was converted into a point cloud to identify the geometrical features through uniform and random sampling (Arias-Castro et al., WOS:000237574800012), as shown in Fig. <ref>.
Some geometrical features, such as surface normals, Gaussian curvature, and mean curvature, are computed by a point cloud processing software named CloudCompare (Tang et al., 10081460).
Then, the point cloud was inputted into the proposed method for estimating the scanning path.
The similarity threshold T should be selected before region segmentation.
If T is large, the segmentation process needs more computation time to cluster the point cloud, which could reduce the overall clustering efficiency.
On the contrary, a smaller value of T groups the different features into the same cluster C_j, which degrades the segmentation accuracy.
Consequently, selecting this parameter must balance segmentation accuracy and computational efficiency.
A value of T = 0.64, found by trial and error, was used as the optimal setting.
The results from the hybrid segmentation method are shown in Fig. <ref>, where the different colors indicate various segmented regions (or clusters).
Here, the method first used RANSAC to cluster the planar region.
In Case 3 and Case 4, a significant portion of the planar/near-planar region has been grouped in one cluster, as shown in Fig. <ref>(c).
Initial clustering using RANSAC significantly reduces the processing time.
After the hybrid unsupervised region segmentation, the surfaces with similar geometric features were clustered together.
Fig. <ref> shows the four geometrically diverse workpieces, and each is divided into different regions based on the features.
Some segmentation errors remain due to the uncertain nature of the computed features, but they do not affect the scanning path generation.
With adaptive ROI-based path planning and PSO-based global path generation, a complete and near-optimal inspection path can be produced, as visualized in Fig. <ref>. The number of viewpoints is 48, 48, 42, and 30 in Cases 1-4, respectively, displayed by the frames. They show the pose of the robot's end-effector during the inspection task. The global path is denoted by a black line, and every segmented region has a corresponding local path. The different viewpoints are connected by straight lines in the optimal sequence. The robot motion follows this detection path to achieve full object coverage.
We input the inspection paths to the automatic line scanning system to scan the tablet back cover and the upper part of the computer mouse in order to mimic a real defect inspection, as illustrated in Fig. <ref>.
Fig. <ref> illustrates the surface defects of these two objects.
Since the segmented regions have similar geometric features, and feasible viewpoints can be selected by the ROI-based method based on the parameters of the line-scan camera, surface defects can be acquired clearly, even where defects are easy for the human eye to miss, such as at corners and on curved surfaces.
The proposed method can effectively conduct region segmentation, local path planning, and global path optimization, enabling precise surface defect inspection and further process optimization for the 3C industry.
§.§ Comparative analysis and verification
To further validate the proposed CPP method, a cutting-edge line scanning CPP method (Huo et al., huo2021sensor), designed for convex specular surface inspection, is applied as a benchmark for comparative analysis. In that method, the traditional K-means clustering method is used for region segmentation, and the final path is produced through a local optimization method, nearest neighbor search (Arya et al., arya1998optimal).
There are five comparison criteria: region segmentation time, total number of viewpoints, length of the global inspection path, total inspection time, and surface defect detection rate. Segmentation time was used as a measure of efficiency for region segmentation methods. The inspection path length and total detection time served as indicators of overall path efficiency in CPP methods. The surface defect detection rate provided insights into the actual effectiveness of defect acquisition, reflecting the accuracy of region segmentation and the quality of path planning. Additionally, when defect results or coverage rates were similar, preference was given to the CPP method that generated fewer viewpoints as it was considered a more viable path planning approach (Liu et al., liu2020optimal).
The comparison results are shown in Fig. <ref>. For region segmentation time, the proposed method used less time to finish this procedure. Due to the usage of RANSAC and of more geometric features, the proposed method can obtain the subregions with planar/near-planar geometry efficiently. As for the viewpoints, our developed approach produces fewer viewpoints thanks to more accurate region segmentation results and concise ROI generation. Conversely, the convex specular surface inspection method employed a more complex iteration process for viewpoint determination, as it struggled to precisely segment objects with intricate geometries. When comparing inspection path length and time, our method outperformed the benchmark approach. While the benchmark utilized a local optimization solution, namely nearest neighbor search, it fell short in generating a feasible global inspection path for CPP. In contrast, our PSO-based method effectively addressed the TSP with reasonable optimization goals and feasible viewpoints. Although our approach's surface defect detection rate is only slightly better, the presented method can finish the inspection task in less time and with shorter paths. Based on this comprehensive comparison, our proposed CPP method stands as a superior choice over the state-of-the-art line scanning inspection method. Consequently, the proposed method presents a valuable and feasible solution for CPP in surface defect inspection.
§ CONCLUSION
This paper proposes a systematic framework for an inspection CPP method for 3C component surfaces. According to this framework, a high-resolution line scanning sensor, mounted on a multi-DOF robotic manipulator, can execute surface scanning and detection precisely and flexibly. The developed methodology includes (1) a new hybrid region segmentation method based on the RANSAC and K-means clustering methods; (2) an adaptive ROI method to define the local measurement paths; and (3) a PSO-based global optimization approach for the minimum inspection time. Four case studies verify the effectiveness and efficiency of this method. The results show that it outperforms the state-of-the-art line scanning CPP method in our comparison. Overall, the proposed method can achieve precise and efficient surface inspection for 3C free-form components. It can be applied in the 3C industry and be extended to inspect other structures such as auto spare parts and industry-standard components.
However, it should be noted that the proposed method may encounter challenges when applied to workpieces with complex structures, making it less suitable for parts with intricate shapes. Future research should focus on optimizing the design of the system end-effector to enhance the flexibility of the inspection framework. Additionally, exploring mathematical methods for optimal path planning and investigating the potential of information theory and deep learning techniques, such as convolutional neural networks, could further improve the effectiveness of the segmentation method.
*Supplementary information
The following video demonstrates the performance of the proposed method with simulations and experiments: https://vimeo.com/842785212 https://vimeo.com/842785212.
*Funding This work was supported by the grant from Shanghai Microintelligence Technology Co. Ltd (No. P21-0078).
§ DECLARATIONS
*Competing interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
*Data availability statement
The data underlying this article will be shared on reasonable request to the corresponding author.
*Authors' contribution
Hongpeng Chen: Conceptualization, Methodology, Software, Validation, Writing – original draft. Shengzeng Huo: Software, Validation, Writing – review & editing. Muhammad Muddassir: Conceptualization, Validation, Writing – review & editing. Hoi-Yin Lee: Video Making, Validation, Writing – review & editing. Anqing Duan: Methodology, Data curation, Writing – review & editing. Pai Zheng: Supervision, Resources, Conceptualization. Hongsheng Pan: Resources, Funding acquisition, Writing – review & editing. David Navarro-Alarcon: Supervision, Resources, Conceptualization, Methodology, Funding acquisition, Writing – review & editing.
|
http://arxiv.org/abs/2307.05176v2 | 20230711111439 | Measuring the Sterile Neutrino Mass in Spallation Source and Direct Detection Experiments | [
"David Alonso-González",
"Dorian W. P. Amaral",
"Adriana Bariego-Quintana",
"David Cerdeno",
"Martín de los Rios"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
IFT-UAM/CSIC-23-89
[email protected]
Instituto de Física Teórica, IFT-UAM/CSIC, 28049 Madrid, Spain
Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain
[email protected]
Department of Physics and Astronomy, Rice University, Houston, TX 77005, USA
[email protected]
Instituto de Física Corpuscular (CSIC - Universitat de València), 46980 Paterna, Valencia, Spain
[email protected]
Instituto de Física Teórica, IFT-UAM/CSIC, 28049 Madrid, Spain
Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain
[email protected]
Instituto de Física Teórica, IFT-UAM/CSIC, 28049 Madrid, Spain
Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain
We explore the complementarity of direct detection (DD) and spallation source (SS) experiments for the study of sterile neutrino physics. We focus on the sterile baryonic neutrino model: an extension of the Standard Model that introduces a massive sterile neutrino with couplings to the quark sector via a new gauge boson. In this scenario, the inelastic scattering of an active neutrino with the target material in both DD and SS experiments gives rise to a characteristic nuclear recoil energy spectrum that can allow for the reconstruction of the neutrino mass in the event of a positive detection. We first derive new bounds on this model based on the data from the COHERENT collaboration on CsI and LAr targets, which we find do not yet probe new areas of the parameter space. We then assess how well future SS experiments will be able to measure the sterile neutrino mass and mixings, showing that masses in the range ∼15-50 MeV can be reconstructed. We show that there is a degeneracy in the measurement of the sterile neutrino mixing that substantially affects the reconstruction of parameters for masses of the order of 40 MeV. Thanks to their lower energy threshold and sensitivity to the solar tau neutrino flux, DD experiments allow us to partially lift the degeneracy in the sterile neutrino mixings and considerably improve its mass reconstruction down to 9 MeV. Our results demonstrate the excellent complementarity between DD and SS experiments in measuring the sterile neutrino mass and highlight the power of DD experiments in searching for new physics in the neutrino sector.
Measuring the Sterile Neutrino Mass in Spallation Source and Direct Detection Experiments
M. de los Rios
August 12, 2023
=========================================================================================
§ INTRODUCTION
The neutrino sector remains one of the most promising places to look for new physics beyond the Standard Model (SM). Amongst the most obvious open problems, the SM offers no explanation for the origin of neutrino masses. A generic prediction of new physics models for neutrino masses is the presence of new sterile neutrino states, which have very small interactions with the SM ones. The masses of these new exotic states depend on the actual mechanism by which neutrinos acquire a mass, but an interesting range of values is the MeV scale.
The search for sterile neutrinos involves different types of experimental probes and the constraints depend strongly on the mass range of the new states. For example, sterile neutrinos have been widely searched for in meson decays, where masses of up to hundreds of MeV in peak searches of pion and kaon decays have been probed <cit.>, and heavier steriles have been searched for in neutrino beam dump experiments <cit.>. In our regime of interest (tens of MeV), bounds can be derived through their possible direct production processes. This could be observed in solar neutrino data <cit.>, atmospheric neutrino data <cit.>, or neutrino beam experiment data <cit.> like MINOS/MINOS+ <cit.>. In addition, the presence of an extra sterile neutrino may have a non-negligible impact on different cosmological observations depending on its mass and couplings <cit.>. For example, long-lived sterile neutrinos with masses of the order of MeV may alter Big Bang nucleosynthesis and the expansion rate of the universe <cit.>. Moreover, sterile neutrinos decaying before recombination may affect the cosmic microwave background anisotropies <cit.>.
Experiments situated at spallation source (SS) facilities have recently become excellent probes of new neutrino physics. Most notably, the COHERENT collaboration <cit.> has been able to observe, for the first time, a very rare SM phenomenon: the coherent elastic scattering of neutrinos with nuclei (CEνNS). The results from both the first run on a CsI target <cit.> and a second run that employed LAr in the CENNS-10 detector <cit.> are compatible with the SM prediction <cit.>. This has been used to derive limits on new physics in the neutrino sector (see, for example, Refs. <cit.>), with particular attention to what future detectors can achieve. Planned experiments include CENNS610 <cit.> (an extension of CENNS-10 LAr <cit.>), CCM <cit.>, and efforts in the European Spallation Source facility <cit.>. The bounds from COHERENT and the sensitivity of the planned detectors are generally interpreted in models with low-mass mediators (or using an effective description in terms of non-standard neutrino interactions), which alters the SM prediction for CEνNS <cit.>. Likewise, they are applicable to inelastic processes that involve the up-scattering to a heavy neutrino state, for example through the presence of a nonzero neutrino transition magnetic moment <cit.>, or even to a dark fermion <cit.>.
In parallel, underground experiments searching directly for dark matter particles have become increasingly sensitive. Planned detectors, especially those based on liquid noble gases, feature extremely clean, ton-scale targets with excellent background discrimination that will soon enable them to measure CEνNS from solar neutrinos. Although this would constitute a serious background for dark matter searches, it also offers the unique possibility to test new neutrino physics <cit.> in a way that is complementary to that of dedicated neutrino detectors. The main advantages of these direct detection (DD) experiments are that they can probe both electron and nuclear recoils, which makes them a perfect complement to SS and oscillation experiments <cit.>, and that they are also sensitive to the tau neutrinos in the solar flux.
The sensitivity of DD experiments to observe heavy neutrino states was studied in Ref. <cit.> for the particular case of the neutrino dipole portal, showing that current xenon-based detectors could significantly improve existing astrophysical bounds. The neutrino dipole portal was considered to account for the apparent excess in the low-energy data from electronic recoils in the XENON1T experiment <cit.>. However, this solution was seriously limited by other experimental constraints <cit.>, and the excess was not reproduced by XENONnT <cit.>. Since the coupling of a sterile neutrino to the leptonic sector is in general severely limited by experimental searches, in this article we will focus on the potential interactions with the quark sector. These are more difficult to probe, but they could lead to changes in the predicted nuclear recoil rates in DD and SS experiments that could be accessible in near future experiments. For concreteness, in this work we set up to study the sterile baryonic neutrino (SBN) <cit.> as an example of models in which the active neutrinos can up-scatter to heavy states.
More specifically, in this article we study the potential of DD and SS experiments to not only detect the sterile neutrino but also reconstruct its parameters—namely, its mass and mixings with the active neutrinos. Our main goal is to determine the conditions under which the sterile neutrino mass can be unambiguously measured (distinguished from zero).
In <ref>, we introduce an effective construction based on the sterile baryonic neutrino model and determine the new inelastic contribution to neutrino-nucleus scattering. In <ref>, we address the prospects for upcoming SS experiments. In <ref>, we extend the analysis to include future xenon-based DD experiments. Finally, in <ref>, we study the complementary role of DD and SS experiments. We present our conclusions in <ref>.
§ THE STERILE BARYONIC NEUTRINO
We introduce a dark sector consisting of a new vector mediator, Z', stemming from a broken U(1)_B gauge symmetry and a new baryonic sterile neutrino, ν_b, that is also charged under this new symmetry <cit.>. For the purpose of this work, we regard this model as an effective theory, and we do not address its possible anomaly-free UV completion. The relevant part of our Lagrangian is given by
ℒ ⊃ (m_Z'^2/2) Z'^μ Z'_μ + g_b Z'^μν̅_b γ_μν_b + (1/3) g_q Z'^μ∑_qq̅γ_μ q .
Here, m_Z' is the mass of the new boson, g_b is its gauge coupling to the baryonic neutrino and g_q to the quarks, and the sum runs over all quark flavours q. In this model, a generic flavour eigenstate, |ν_α⟩, can then be written as a linear combination of mass eigenstates, |ν_i⟩, as
|ν_α⟩ = ∑_i=1^4 U_α i^* |ν_i⟩ ,
where |ν_4⟩ is the new mass eigenstate with mass m_4, and α∈{e, μ, τ, b}.
From <ref>, and defining the coupling g_Z'≡√(g_bg_q), the neutrino-nucleus up-scattering process ν_α A→ν_4A has amplitude
ℳ_α 4 = g_Z'^2/(q^2-m_Z'^2) l^μ h_μ ,
where q^2 is the square-momentum exchange with the nucleus, h^μ is the nucleus transition amplitude for the nuclear ground state A, and l^μ is the leptonic transition amplitude. Using <ref> to re-write the dark baryonic current in terms of the mass eigenstates, we have that
l^μ≡⟨ν_4|ν̅_b γ_μν_b |ν_α⟩ =∑_ijk⟨ν_4| U_α k^* U_b i^* U_b jν̅_j γ_μν_i|ν_k⟩ = ∑_i U_α i^* U_b 4^* U_b i⟨ν_4|ν̅_4 γ_μν_i|ν_i⟩
≃ U_α 4^* ⟨ν_4|ν̅_4 γ_μν_i|ν_i⟩ ,
where, in the last step, we have assumed that |U_b i|≪|U_b 4| for i≠4 and that |U_b4|^2 ≃ 1 <cit.>. The differential neutrino-nucleus up-scattering cross section then follows:
dσ_α 4/dE_R = g_Z'^4 A^2 |U_α4|^2 m_A / [2π E_ν^2 (2 m_A E_R + m_Z'^2)^2] × [4E_ν^2 - 2E_R(m_A - E_R + 2E_ν) - (m_4^2/m_A)(m_A - E_R - E_ν)] F^2(E_R),
where m_A is the mass of the target nucleus, E_ν is the energy of the incoming neutrino, and E_R is the nuclear recoil energy. For the nuclear form factor F^2(E_R), which arises from the hadronic part of the amplitude, we use the Helm form factor <cit.> with the parametrisation introduced in Ref. <cit.>. This new inelastic scattering process provides an extra contribution to the usual SM elastic neutrino-nucleus scattering, which takes place through CEνNS and has the following differential cross section,
dσ_CEνNS/dE_R = (G_F^2/4π) Q_ν^2 m_A (1 - m_A E_R/(2E_ν^2)) F^2(E_R) ,
where G_F is the Fermi constant, and Q_ν≡ N-(1-4sin^2θ_W)Z is the SM coherence factor in terms of the Weinberg angle, θ_W, and the number of neutrons, N, and protons, Z.
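For illustration, the two differential cross sections can be coded directly from the expressions above; the sketch below works in natural units (MeV), sets the Helm form factor to unity for brevity, and adopts the Z' benchmark quoted in the text.
```python
import numpy as np

G_F = 1.1663787e-11    # Fermi constant [MeV^-2]
SIN2_THETA_W = 0.2386  # weak mixing angle at low energy (assumed value)
HBARC2 = 3.894e-22     # (hbar*c)^2 [cm^2 MeV^2], converts MeV^-3 -> cm^2/MeV

def dsigma_cevns_dER(ER, Enu, mA, N, Z, F2=1.0):
    """SM CEvNS cross section dsigma/dER in natural units (MeV^-3)."""
    Q_nu = N - (1.0 - 4.0 * SIN2_THETA_W) * Z
    return G_F**2 / (4.0 * np.pi) * Q_nu**2 * mA * (1.0 - mA * ER / (2.0 * Enu**2)) * F2

def dsigma_up_dER(ER, Enu, mA, A, m4, U2, g_Zp=4e-3, m_Zp=1000.0, F2=1.0):
    """Inelastic nu_alpha A -> nu_4 A cross section dsigma/dER (MeV^-3).
    U2 = |U_alpha4|^2; m_Zp is the 1 GeV benchmark expressed in MeV."""
    pref = g_Zp**4 * A**2 * U2 * mA / (2.0 * np.pi * Enu**2 * (2.0 * mA * ER + m_Zp**2)**2)
    bracket = (4.0 * Enu**2 - 2.0 * ER * (mA - ER + 2.0 * Enu)
               - m4**2 / mA * (mA - ER - Enu))
    return pref * F2 * np.maximum(bracket, 0.0)  # clipped outside the allowed kinematics
```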
Note that, for the characteristic recoil energies at SS experiments (E_R ≲100 keV) and DD experiments (E_R ≲10 keV), the cross section in <ref> can be interpreted as being proportional to the effective coupling g_Z'^4 |U_α 4|^2 / m_Z'^4. As both of these types of experiments are sensitive to this product of model parameters, they are only able to make inferences on this effective coupling. Since the focus of our analysis is the physics underlying the baryonic neutrino, we choose to fix the parameters related to the new vector mediator to m_Z' = 1 GeV and g_Z' = 4 × 10^-3, taking into account the constraints found in Ref. <cit.>. Thus, without loss of generality, for as long as m_Z'^2 remains greater than the squared momentum transfer at these experiments, our results can simply be rescaled by the factor g_Z'^4 / m_Z'^4. We therefore consider a four-dimensional parameter space (m_4, |U_e 4|^2, |U_μ 4|^2, |U_τ 4|^2) and <ref> shows some representative benchmark points used in this work.
§ SPALLATION SOURCE EXPERIMENTS
Neutrino experiments at spallation sources have become an extremely useful tool to explore new neutrino physics associated with neutrino-nucleus scattering. The neutrino flux arriving on-target has three components, shown in <ref>. The prompt decay of the initially produced pions, π^+→μ^+ν_μ, induces a monochromatic beam of muon neutrinos with energy E_ν_μ = (m_π^2 - m_μ^2)/(2 m_π) ≃ 30 MeV. The delayed decay μ^+ → e^+ ν_e ν̅_μ gives rise to a flux of muon antineutrinos and electron neutrinos with continuous energy distributions. The corresponding fluxes are given by (see, e.g., Ref. <cit.>)
dϕ_ν_μ/dE_ν = ξ δ(E_ν - (m_π^2 - m_μ^2)/(2 m_π)) ,
dϕ_ν̅_μ/dE_ν = ξ (64/m_μ) [(E_ν/m_μ)^2 (3/4 - E_ν/m_μ)] ,
dϕ_ν_e/dE_ν = ξ (192/m_μ) [(E_ν/m_μ)^2 (1/2 - E_ν/m_μ)] ,
where, from kinematics, E_ν∈[0, m_μ / 2] for the continuous spectra of ν̅_μ and ν_e. The constant ξ≡ r R_PoT / (4 π L^2) accounts for the luminosity of the experiment. Here, r is the number of neutrinos of any given flavour produced per proton collision, R_PoT is the number of protons on target per unit time, and L is the total length of the experimental baseline. Given the promising sensitivity of the configurations planned to run at the European Spallation Source, in this article we will consider it as a paradigmatic example of a realistic future experiment. Two different setups can be considered <cit.>: a small (10 kg) but extremely sensitive detector with an energy threshold of E_th=0.1 keV (which we refer to as ESS10), and a large detector (1 ton) but with a higher energy threshold of E_th=20 keV (which we refer to as ESS). For both configurations, the baseline is L=20 m, R_PoT=2.8×10^23 yr^-1, and r=0.3. Despite the great advantage of its extremely low threshold, the small target size of ESS10 makes it insufficient to explore new regions of the parameter space of sterile neutrino models, and, for this reason, we will concentrate on ESS assuming 1 yr of operation. In our analysis, we consider a bin energy resolution of 5 keV. For the quenching factor, we have extrapolated that of COHERENT-LAr <cit.>, Q_F = 0.246+7.8×10^-4E_R, whereby the detected (electron-equivalent) energy is E_ee=Q_F E_R. Following the treatment in Ref. <cit.>, we approximate the efficiency as ϵ(E_R)=0.5(1+tanh((E_R-E_th)/E_width)), where we take E_width=1 keV for ESS.
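A minimal numerical sketch of the delayed fluxes and of the smoothed threshold efficiency is given below; the particle masses are standard values and the threshold parameters correspond to the ESS configuration described above.
```python
import numpy as np

M_MU, M_PI = 105.658, 139.570   # MeV

E_NUMU_PROMPT = (M_PI**2 - M_MU**2) / (2.0 * M_PI)   # ~29.8 MeV monochromatic line

def dphi_dE_numubar(Enu, xi):
    """Delayed anti-nu_mu flux, d(phi)/dE, for 0 <= Enu <= m_mu/2."""
    x = Enu / M_MU
    return np.where(Enu <= M_MU / 2.0, xi * 64.0 / M_MU * x**2 * (0.75 - x), 0.0)

def dphi_dE_nue(Enu, xi):
    """Delayed nu_e flux, d(phi)/dE, for 0 <= Enu <= m_mu/2."""
    x = Enu / M_MU
    return np.where(Enu <= M_MU / 2.0, xi * 192.0 / M_MU * x**2 * (0.5 - x), 0.0)

def efficiency(ER_keV, E_th=20.0, E_width=1.0):
    """Smoothed threshold efficiency assumed for the ESS configuration (keV)."""
    return 0.5 * (1.0 + np.tanh((ER_keV - E_th) / E_width))
```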
To compute the differential rate of nuclear recoil events, we integrate each neutrino flux, α' ∈{e,μ,μ̅}, taking into account both SM and new physics up-scattering processes, from <ref> and <ref>, respectively. The differential scattering rate is given by
dR_α'/dE_R = (1/m_A) ( ∫_E_ν^min,CEνNS^E_ν^max (dϕ_ν_α'/dE_ν) (dσ_CEνNS/dE_R) dE_ν + ∫_E_ν^min,α'4^E_ν^max (dϕ_ν_α'/dE_ν) (dσ_α'4/dE_R) dE_ν ) ,
where 1 / m_A is the total number of targets per unit mass in a given experiment, dσ_μ̅4/dE_R=dσ_μ 4/dE_R, and E_ν^ max = m_μ /2 is the maximum allowed neutrino energy. The minimum neutrino energy required to produce a recoil of energy E_R differs for the elastic and inelastic processes. For usual SM , it is given by
E_ν^ min, CEν NS = 1/2(E_R+√(E_R^2+2 m_A E_R)) ≃√(m_A E_R/2) .
However, for the inelastic up-scattering process, the minimum energy must be high enough to produce the massive sterile neutrino, leading to
E_ν^ min, α' 4 = (1 + m_4^2/2 m_A E_R)E_ν^min, CEν NS .
Finally, the total number of nuclear recoils in each energy bin is computed by integrating the differential rate over the experimental range of recoil energies (given by the specific experimental setup) weighted by the corresponding energy-dependent efficiency function, ϵ(E_R),
N_SS = ε ∑_α' ∫_E_R^min^E_R^max (dR_α'/dE_R) ϵ(E_R) dE_R .
where ε is the experiment exposure: the product of its total mass and its live time. For the ESS configuration that we are considering, ε=1 ton yr.
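Schematically, the binned prediction can be assembled as follows, assuming a callable dR/dE_R that already performs the flavour sums and flux integrals of the previous equations.
```python
from scipy import integrate

def expected_events_in_bin(ER_lo, ER_hi, dR_dER, eff, exposure):
    """Expected counts in one recoil-energy bin:
    N = exposure * integral of (dR/dER) * eff(ER) over the bin.
    `dR_dER` must already sum the CEvNS and up-scattering pieces over the
    three flux components, with the flux- and mass-dependent lower limits."""
    value, _ = integrate.quad(lambda ER: dR_dER(ER) * eff(ER), ER_lo, ER_hi)
    return exposure * value
```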
<ref> shows the differential spectrum for each contribution in <ref> and for four representative benchmark points (BP1a, BP2a, BP3a, and BP5a with parameters specified in <ref>), where the sterile neutrino mass is varied for the same choice of couplings. The inelastic contribution only switches on above a certain recoil energy, leading to a characteristic bump with energies in the range
E_R ∈ [ (1/(2 m_A)) (2 (E_ν^max)^2 - m_4^2 - 2 E_ν^max √((E_ν^max)^2 - m_4^2)) , (1/(2 m_A)) (2 (E_ν^max)^2 - m_4^2 + 2 E_ν^max √((E_ν^max)^2 - m_4^2)) ] ,
where we have made the approximation E_ν/m_A≪ 1. In the event of a future observation, this `bump' could be used to determine the mass of the sterile neutrino, thus helping to discriminate this model from other potential new physics contributions in the neutrino sector. In practice, this could confirm the existence of a sterile neutrino (with mass different from zero). Notice that the lower end of the energy bump takes place at very small values of the recoil energy, well below the reach of current and future detectors. For this reason, the sterile neutrino mass reconstruction mostly relies on determining the upper end of the bump, which is displaced from the end of the SM spectrum. The contribution from muon neutrinos is particularly interesting for this purpose. As their flux is monochromatic, the energy bump in their spectrum is more easily distinguishable from the SM prediction. The difference of the endpoint in the SM spectrum and the inelastic contribution from ν_μ is denoted Δ_μ in <ref> for each benchmark point.
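For reference, the endpoints of this window, and the displacement Δ_μ introduced above, can be evaluated with the short sketch given here (valid for m_4 ≤ E_ν^max and in the E_ν/m_A ≪ 1 approximation).
```python
import numpy as np

def bump_endpoints(m4, Enu_max, mA):
    """Recoil-energy window of the inelastic contribution (all inputs in MeV),
    valid for m4 <= Enu_max and in the Enu/mA << 1 approximation."""
    root = Enu_max * np.sqrt(Enu_max**2 - m4**2)
    lo = (2.0 * Enu_max**2 - m4**2 - 2.0 * root) / (2.0 * mA)
    hi = (2.0 * Enu_max**2 - m4**2 + 2.0 * root) / (2.0 * mA)
    return lo, hi

def delta_mu(m4, Enu_mu, mA):
    """Displacement Delta_mu of the nu_mu endpoint with respect to the SM one."""
    ER_SM_max = 2.0 * Enu_mu**2 / mA          # same approximation as above
    _, ER_new_max = bump_endpoints(m4, Enu_mu, mA)
    return ER_SM_max - ER_new_max
```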
To observe this feature, the experimental threshold must be low enough and the energy resolution of the detector must at least be comparable to Δ_μ. Since Δ_μ increases with m_4 (which we can see in <ref> or infer from <ref>), heavier sterile neutrino masses are easier to reconstruct. Since the energy thresholds of current and planned experiments at spallation sources are of the order of ∼ 10 keV, a measurement of the sterile neutrino mass is only possible above a certain value of m_4. In particular, given the planned characteristics of the ESS experiment, the signal of both BP1 and BP2 would be indistinguishable from that for m_4=0.
For reference, the vertical grey dotted (dashed-dotted) lines in <ref> represent the expected energy threshold of both ESS and ESS10 respectively.
It should be emphasized that measuring the sterile neutrino mass—that is, confirming that m_4=0 is not within the 2σ best-fit region—is crucial to discriminate the signal due to the SBN model from that of a generic neutrino non-standard interaction (NSI), where no extra neutrinos are introduced <cit.>. Indeed, the spectrum from a particular choice of NSI can mimic the observed signal in the SBN model when the lower end of the energy bump is below the experimental threshold. We illustrate this in <ref> for BP1a, where we have generated an NSI spectrum with a pure up-quark effective NSI parameter of ε_μμ^u = 0.4. For the range of observable energies, we see that the SBN and NSI spectra almost completely overlap, making them indistinguishable from one another.
To test the reconstruction of the sterile neutrino parameters, we have created Asimov data sets for each of these benchmark points and attempted to reconstruct their associated model parameters in the four-dimensional space (m_4, |U_e4|^2, |U_μ 4|^2, |U_τ 4|^2). In these Asimov sets, our `observed' data are equal to the theoretically expected number of events for each given benchmark point. The ensuing limit from such an analysis should asymptotically approach the median limit arising from many Monte Carlo runs <cit.>. The statistical details of our analysis can be found in <ref>. We compute the expected number of nuclear recoil events from <ref> using an extension of the package <cit.>. For each benchmark point, we carry out a profile-likelihood analysis using the nested sampling algorithm <cit.> via its Python implementation <cit.>.
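As a simple illustration of the statistical comparison (the full treatment, including backgrounds and nuisance parameters, follows the statistical details referenced above), the binned Poisson Δχ² between a test spectrum and the Asimov data can be sketched as:
```python
import numpy as np

def delta_chi2(mu_test, n_asimov):
    """Binned Poisson chi-square between a test spectrum mu_test and the Asimov
    data n_asimov (both arrays of expected counts per bin)."""
    mu = np.asarray(mu_test, dtype=float)
    n = np.asarray(n_asimov, dtype=float)
    log_term = np.where(n > 0.0, n * np.log(n / mu), 0.0)
    return 2.0 * np.sum(mu - n + log_term)
```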
We show in <ref> the parameter reconstruction corresponding to BP1a, BP2a, BP3a, and BP5a, assuming the projected configuration of the ESS detector. The hatched areas correspond to the allowed regions (Δχ^2 < 6.18). As we can see, ESS would be able to observe the first three benchmark points and measure the coupling |U_μ 4|^2. It would also be able to fully reconstruct the mass of the sterile neutrino in BP3a. Nevertheless, for BP1a and BP2a, only an upper bound on the sterile neutrino mass can be extracted (the end-point of the bump cannot be distinguished from the SM spectrum). Since the sterile neutrino mass for BP5a is above the energy of the neutrino flux in spallation source experiments, the up-scattering is kinematically forbidden and hence there will be no observation. For this benchmark point, we can only obtain an exclusion region.
As a new result, we have derived constraints on the SBN model using current COHERENT data from the two targets, LAr <cit.> and CsI <cit.>. To do this, we have used the statistical treatment of <ref>. The bounds are represented in <ref> as light and dark grey areas in the corresponding plots for the LAr and CsI targets, respectively. As we can see, the excluded areas lie above the upper bound on the sterile neutrino mixing with the muon sector from Ref. <cit.> and therefore do not probe new areas of the parameter space.
It is interesting to note that for sterile neutrino masses above m_4≳30 MeV, the monochromatic ν_μ flux is not energetic enough to produce the sterile neutrino and only the ν̅_μ and ν_e fluxes contribute in <ref>. When this occurs, the characteristic feature Δ_μ is no longer present. This makes the mass reconstruction more difficult and leads to a degeneracy between the mixings with muon neutrinos, U_μ4, and electron neutrinos, U_e4. This effect is more pronounced for m_4≃ 40 MeV, where the ν_e and ν̅_μ fluxes are comparable. To exemplify this, in <ref> we analyse a benchmark point with m_4=40 MeV and |U_μ 4|^2=9 × 10^-3 (BP4a in <ref>), which we attempt to reconstruct through a profile-likelihood analysis. The degeneracy in the reconstruction of the mixings (evidenced in the right panel) induces a similar degeneracy in the sterile neutrino mass (see left and middle panels of <ref>), making a measurement of m_4 impossible. This degeneracy is lifted for sterile neutrino masses m_4 ≳45 MeV (depending on the value of the mixings), when the contributions from the ν_e and ν_μ fluxes differ (see <ref>).
Our analysis so far shows that
* Current limits on the SBN model using COHERENT data do not exclude new areas of the parameter space, but future experiments like ESS would allow us to explore regions below current experimental constraints.
* In the event of a positive observation, future SS experiments might be able to determine the sterile neutrino mass (distinguishing it from the massless case) for a range m_4∼ 15-50 MeV. For lighter masses, the observed signal is indistinguishable from that of a new massless neutrino.
* The sterile neutrino mixing with the electron and muon sectors can, in general, be disentangled based on the different shapes of the contribution from the ν_e and ν_μ fluxes.
* There is, however, a region of sterile neutrino masses around m_4 ∼ 40 MeV for which the reconstruction is highly degenerate and the sterile neutrino mass (and mixing with ν_e and ν_μ) cannot be measured.
* SS experiments are completely insensitive to the sterile neutrino mixing with the tau sector, as there is no ν_τ flux.
In the following sections, we will study how (dark matter) direct detection experiments can provide complementary information that improves the reconstruction of the SBN model parameters, partially lifting some of these degeneracies and considerably improving the mass measurement.
§ DIRECT DETECTION EXPERIMENTS
While primarily employed in the search for dark matter, direct detection experiments are becoming so sensitive that they will start observing nuclear recoil events from solar neutrinos. Indeed, the sensitivities of xenon-based experiments of this and future generations—such as LZ <cit.>, XENONnT <cit.>, and DARWIN <cit.>—are projected to hit the neutrino fog: a region of the parameter space where a dark matter signal and a neutrino event will be difficult to disentangle <cit.>. This motivates us to think of these experiments as neutrino observatories instead of as dark matter detectors, treating this `background' as a signal to help us learn more about the nature of both SM and BSM neutrino physics. In this section, we show how these experiments can use measurements of the solar neutrino scattering rate as a probe of the SBN model.
In the case of nuclear recoils, the calculation of the differential rate is similar to that of SS. The key differences are that we instead use the solar neutrino flux and that we must now account for the oscillation probabilities as neutrinos propagate to the Earth from the solar core. As we did in <ref>, the SM and new inelastic contributions must be considered separately since the minimal neutrino energy to produce a nuclear recoil of a given energy differs. The differential scattering rate, after summing over the flavours α∈{e, μ, τ}, is ultimately given by[It has recently been noted that one must be careful when calculating the solar neutrino scattering rate in the presence of new physics <cit.>. If the new physics introduces flavour-changing neutral current processes, then a more general density matrix formalism must be employed. This was recently done in the context of DD experiments and general NSI in Ref. <cit.>. In our case, flavour charge is conserved, so we can compute the rate in the usual manner as we have written.]
dR/dE_R = 1/m_A [ ∫_E_ν^min,CEνNS^E_ν^max (dϕ_ν_e/dE_ν)(dσ_CEνNS/dE_R) dE_ν + ∑_α∫_E_ν^min,α4^E_ν^max (dϕ_ν_e/dE_ν) P_eα (dσ_α4/dE_R) dE_ν ] ,
where dϕ_ν_e/dE_ν is the total differential solar electron-neutrino flux and P_eα is the transition probability for an electron neutrino to oscillate to flavour α. Notice that since the SM cross section is flavour blind, the transition probabilities factor out and sum to one. For the new physics contribution, the cross section is instead flavour dependent, so the probabilities must be retained.
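For concreteness, the structure of <ref> translates into a short numerical routine. The Python sketch below is only an illustration, not part of the package used in our analysis; the flux, cross-section, oscillation-probability, and kinematic-threshold functions are assumed to be supplied by the user, and all names are illustrative.

```python
from scipy.integrate import quad

def diff_rate(E_R, m_A, flux_nue, xsec_sm, xsec_a4, P_e,
              E_min_sm, E_min_a4, E_nu_max):
    """dR/dE_R: SM CEvNS piece plus the flavour-summed inelastic piece
    weighted by the e -> alpha oscillation probabilities."""
    # SM contribution: flavour blind, so the probabilities sum to one.
    sm, _ = quad(lambda E: flux_nue(E) * xsec_sm(E_R, E),
                 E_min_sm(E_R), E_nu_max)
    # New-physics contribution: the cross section is flavour dependent,
    # so P_e->alpha must be kept inside the sum over flavours.
    bsm = 0.0
    for alpha in ("e", "mu", "tau"):
        term, _ = quad(lambda E: flux_nue(E) * P_e(alpha, E)
                                 * xsec_a4(alpha, E_R, E),
                       E_min_a4(alpha, E_R), E_nu_max)
        bsm += term
    return (sm + bsm) / m_A
```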
In this work, we consider a multi-ton xenon experiment with an exposure of ε=200 ton yr, a recoil energy threshold of E_th=1 keV, and an energy bin width of 1 keV. This type of experiment has been shown to be a powerful probe of new physics in the neutrino sector <cit.>. When calculating the total number of expected events, we incorporate experimental effects, folding into <ref> the energy-dependent efficiency and resolution functions. We do this using
N_DD = ε∫_0^E_max ( ∫ (dR/dE') ϵ(E') [1/(σ(E')√(2π))] exp[-(E_R - E')^2/(2σ^2(E'))] dE' ) dE_R ,
where the convolution with the Gaussian resolution function is taken with respect to the theoretically expected recoil energy, E', which is converted to the observed recoil energy, E_R. The integral is taken from E_R = 0, with the threshold of the experiment implicitly incorporated through the efficiency function, ϵ. Note that it is crucial to incorporate this convolution with the resolution function, as it smears lower-energy ^8B events beyond the recoil energies at which they would otherwise be kinematically forbidden. Since experimental thresholds are typically placed near this kinematic cut-off, which is useful for dark matter searches, the smearing allows some events to be observed where otherwise there would be almost none.
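A schematic, stand-alone version of this folding is sketched below; it is a simplified illustration rather than the routine implemented in the package we use, and the spectrum, efficiency, and resolution inputs are assumed to be vectorised callables of the recoil energy.

```python
import numpy as np

def observed_events(dRdE, efficiency, sigma_res, exposure,
                    E_max=100.0, n_grid=2000):
    """Fold dR/dE' with the efficiency and a Gaussian resolution
    (as in the expression for N_DD), then integrate over E_R."""
    E_true = np.linspace(1e-4, E_max, n_grid)   # theoretical recoil energy E'
    E_obs = np.linspace(0.0, E_max, n_grid)     # observed recoil energy E_R
    dE_true = E_true[1] - E_true[0]
    dE_obs = E_obs[1] - E_obs[0]
    sig = sigma_res(E_true)
    # Gaussian kernel: rows are observed energies, columns are true energies.
    kernel = np.exp(-(E_obs[:, None] - E_true[None, :])**2 / (2.0 * sig**2)) \
             / (sig * np.sqrt(2.0 * np.pi))
    smeared = kernel @ (dRdE(E_true) * efficiency(E_true) * dE_true)
    return exposure * smeared.sum() * dE_obs
```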
To implement <ref>, we once again make use of the package. This package uses the B16-GS98 standard solar model neutrino flux predictions <cit.> and the oscillation parameter results to compute the electron neutrino survival and transition probabilities <cit.>. For more information on the package, please see Ref. <cit.> for the theory and Ref. <cit.> for the code base.
With the existence of the new flavour state |ν_b⟩, it is possible that the electron neutrinos produced in the Sun can oscillate into baryonic neutrinos. These neutrinos could then elastically scatter off target nuclei via the new vector mediator, leading to an observable signal in DD experiments that could, in principle, dominate over that of our considered inelastic process <cit.>. However, for sterile neutrinos in the mass range we have considered (m_4 ∼ 1–100 MeV), deviations from the unitarity of the PMNS matrix are highly constrained by flavour and electroweak precision data, as well as direct searches for such heavy neutrino states <cit.>. Consequently, we take the liberty of ignoring transitions to the baryonic neutrino state, neglecting the elastic scattering process and using the SM prediction for the survival and transition probabilities.
<ref> shows the resulting differential spectrum for some representative benchmark points from <ref>. As in the case of SS experiments, the new physics contribution from the inelastic process shows a characteristic bump. There is, however, an important difference. Since the solar neutrino fluxes are not monochromatic, this feature is not as abrupt as the ν_μ contribution in SS experiments. Consequently, the reconstruction of the sterile neutrino mass from a hypothetical future signal in DD experiments is significantly more challenging. Notice that the lower end of the energy bump is generally well below the experimental threshold (and is therefore not observable). Thus, it is difficult to determine a lower bound on the mass of the sterile neutrino using DD alone. Given the shape of the solar neutrino flux <cit.>, for sterile neutrino masses above ∼2 MeV, only the ^8B and hep neutrino fluxes contribute to the inelastic process.
Despite this, DD experiments have the great advantage that they are sensitive to all three flavours of active neutrinos, thereby conveniently complementing the information from spallation sources, which lack a tau neutrino flux.
As we did for SS experiments, we can compare the expected number of events for a given set of model parameters with the simulated data of each benchmark point detailed in <ref>. Since the expected number of events is significantly lower than in SS experiments, we model the likelihood as a product of Poissonian likelihoods for each energy bin. In addition, we introduce a nuisance parameter to account for the systematic uncertainty on the ^8 B flux. The full statistical description can be found in <ref>. To test how this uncertainty impacts our results, we consider two cases[These values are motivated by the current uncertainty obtained through global fits analysis <cit.> (σ_^8 B = 2%) and the uncertainty to which DUNE will measure ^8B using a combination of elastic scattering and charged-current interactions (σ_^8 B = 2.5%) <cit.>.]: one with the current experimental uncertainty of σ_^8 B = 4% <cit.> and another one with an optimistic uncertainty of σ_^8 B = 1%.
In <ref>, we show as blue hatched regions the parameters that would be allowed (Δχ ^2 < 6.18) by a future observation in a multi-ton liquid xenon experiment with σ_^8 B = 1%. For comparison, we include as a blue dashed line the results obtained with σ_^8 B = 4%. Given the maximum energy of the ^8B solar neutrino flux, DD experiments will be insensitive to BP3a and BP5a. Hence, DD experiments can only probe sterile neutrinos with a low mass (m_4≲ 20 MeV) and a large mixing. Regarding the benchmark points of <ref>, only BP1a is observable—while we do observe events for BP2a, the statistics are not high enough for a reconstruction. For BP2a, BP3a, and BP5a we only obtain an upper bound on the neutrino mixing. For BP5a, adding DD data leads to a more constraining upper bound for small sterile neutrino masses. It should be emphasised that one cannot disentangle the individual contributions from each of the three neutrino flavours using only DD data, and therefore the reconstruction of the mixing parameters is completely degenerate (in the figure, this leads to |U_μ 4|^2 being unbounded).
§ THE COMPLEMENTARITY OF DIRECT DETECTION AND SPALLATION SOURCE EXPERIMENTS
In this section, we forecast the sensitivity that will be achieved by combining the results of future DD and SS experiments. In particular, we analyse how their complementarity can be used to break the degeneracies found in their individual analyses and better determine the parameters of the SBN model. Since the measurements performed by DD and SS experiments are independent of one another, we model the total likelihood as the product of the individual likelihoods described in <ref>. Using this combined likelihood, we repeat our previous analysis.
In <ref>, we present the results for the same benchmark points as in <ref>, but now considering the information that DD experiments can contribute. The blue-shaded areas correspond to the best-fit regions when only DD data are considered, while green-shaded regions are those that employ the combination of DD and SS data. Only BP1a is observable by a future multi-ton xenon experiment. While the corresponding mass of BP1a cannot be determined using DD alone, the inclusion of DD data leads to a more stringent upper bound on m_4. For BP2a, BP3a, and BP5a, DD can only set upper bounds on the mixing parameters; however, this can still prove to be extremely useful. For example, when combined with SS results, this can help to exclude regions with small m_4. In the case of BP2a, for instance, DD complements the results of SS and is crucial to better measure the sterile neutrino mass. For BP5a, DD data improves the exclusion for small values of m_4.
A particularly interesting case is that of BP4a. As explained in <ref>, for m_4≃40, the parameter reconstruction using only data from SS experiments displays a degeneracy in the sterile neutrino mixings and mass (see <ref>). In <ref>, we show how this degeneracy is partially lifted when DD data is included. Although BP4a is not observable in a future xenon detector because of its large mass, the bounds from DD exclude the region of the parameter space with small m_4 and large |U_e4|^2, which in turn leads to a good measurement of the sterile neutrino mass.
Another great advantage of combining both types of experiments is that the solar neutrino flux includes a ν_τ component due to neutrino oscillations. This provides an extra handle with which to measure the sterile neutrino mixing with tau neutrinos. In order to test this, <ref> shows an analysis of BP1d: a benchmark point with a non-negligible U_τ4 mixing. Not only is this component measured with DD data, but also the combination with SS results leads to a better upper bound on the sterile neutrino mass and an improved reconstruction of U_τ4.
For completeness, <ref> shows a series of examples where both U_μ 4 and U_τ 4 are non-vanishing, corresponding to BP2b, BP2c, and BP2d in <ref>. These benchmark points are observable in DD thanks to the U_τ 4 component. When the best-fit regions are determined, the upper bound on |U_μ 4|^2 from DD data is sensitive to the magnitude of the mixing with tau neutrinos: for small |U_μ 4|^2 (e.g., BP2b), the bound on |U_μ 4|^2 is less stringent than when |U_μ 4|^2 increases (e.g., BP2d). This also makes the combination with SS results less trivial—in some cases, the excluded regions allow for a better reconstruction of the sterile neutrino mass (BP2b), whereas in other cases this is not possible (BP2c and BP2d).
§.§ How well can we measure the sterile neutrino mass?
As we have demonstrated, the combination of DD data with that from SS experiments can lead to a better measurement of the sterile neutrino mass. This can happen even in the cases where DD would not observe a new physics signal, simply from the effect that the DD exclusions have on the regions of the parameter space that are consistent with detection in SS experiments. Reconstructing m_4 (i.e., confirming that it is non-vanishing) is crucial to discriminate a sterile neutrino model from other kinds of BSM neutrino physics (such as NSI on the active neutrinos).
In order to better quantify the relevance of the complementary role of DD and SS in measuring m_4 and to provide a more general picture, we show in <ref> various projections of the (m_4, |U_e4|^2, |U_μ 4|^2, |U_τ 4|^2) parameter space, indicating the areas where m_4 can be reconstructed (i.e., m_4=0 is not within the 95% CL region). Using the same colour convention as in previous plots, the orange (blue) areas are those where m_4 can be reconstructed solely from SS (DD) data, and green regions correspond to their combination. From top to bottom, the first row corresponds to the (m_4, |U_τ 4|^2) plane with |U_e4|^2 = 0 and |U_μ4|^2 = 4 × 10^-3 (9 × 10^-3) in the left (right) column. The second row shows the (m_4, |U_μ 4|^2) plane with |U_e4|^2 = 0 and |U_τ4|^2 = 4 × 10^-3 (9 × 10^-3) in the left (right) column. In the third row, we represent the (m_4, |U_e 4|^2) plane for |U_τ4|^2 = 0 and |U_μ4|^2 = 4 × 10^-3 (9 × 10^-3) in the left (right) column. The different benchmark points of <ref> are indicated with yellow stars.
In all of these figures, we observe a clear synergy between DD and SS experiments. This is evinced by the green areas extending beyond the union of the blue and orange ones. In particular, the addition of DD data allows us to measure smaller values of m_4. The gap in the orange area of the top right and lower right panels appears for m_4≃ 40 MeV and corresponds to the regions where the degeneracy between |U_e4|^2 and |U_μ4|^2 makes the mass reconstruction impossible for SS experiments alone (see <ref> for BP4a). The addition of DD information is crucial to break this degeneracy and, hence, allow for a mass reconstruction in this region (as in <ref>).
As already mentioned, the performance of DD experiments is extremely sensitive to the uncertainty in the solar neutrino fluxes. For completeness, in <ref> we show in dashed, dashed-dotted and dotted green lines the results obtained when combining both types of experiments and considering a ^8B flux uncertainty of 4%, 6% and 12%, respectively. As expected, we see how our results worsen when increasing this uncertainty.
§ CONCLUSIONS
In this work, we have analysed the complementarity of direct detection and spallation source experiments for the study of sterile neutrino physics. Specifically, we have focused on the sterile baryonic neutrino (SBN) model: an extension of the SM that incorporates a new gauge boson that couples to baryons and a sterile neutrino that mixes with the active ones and also couples to this mediator. Due to this mixing, the sterile neutrino can be produced through the up-scattering of an active neutrino with the nucleus of a target material. This inelastic process alters the expected nuclear recoil spectra for both DD and SS experiments, providing a characteristic signature that can allow for the measurement of the sterile neutrino mass and mixing parameters in the event of a future detection.
Using current data from the COHERENT collaboration on CsI and LAr, we have first derived new constraints on the SBN model, showing that they do not exclude new areas of the parameter space. Assuming a future SS experiment with the projected properties of a detector to be installed at the ESS, we have then assessed how well the sterile neutrino properties would be determined upon a positive observation. We have shown that the new inelastic contribution to neutrino-nucleus scattering induces a bump in the nuclear recoil spectrum. This proves extremely useful to reconstruct the sterile neutrino mass, conclusively disentangling this model from a generic NSI contribution to the active neutrinos. We have demonstrated that using only SS data, values in the range 15-50 MeV can be measured. However, in a narrow range of masses of the order of 40 MeV, there is a degeneracy in the measurement of the sterile neutrino mixing that substantially affects mass reconstruction.
Incorporating future DD data helps in two ways. These detectors have an excellent energy resolution and generally a lower energy threshold than SS experiments. Furthermore, DD experiments are sensitive to all three neutrino flavours, including tau neutrinos, present in the solar neutrino flux. Thus, they are extremely helpful in removing degenerate solutions in the neutrino mixing parameter space. Considering the case of a future multi-ton liquid xenon experiment, we have demonstrated that the combination of future DD and SS results is crucial to substantially increase the area of the parameter space where the sterile neutrino mass can be reconstructed (see <ref>), allowing us to measure values as low as ∼ 8 MeV.
These results strengthen the role of DD experiments as probes of the neutrino sector and their complementarity with dedicated neutrino detectors.
§ ACKNOWLEDGEMENTS
We would like to thank Pilar Coloma, Manuel González-López, Elías López Asamar, Patrick Foldenauer, Marina Cermeño, Andrés Pérez and Karen Macías for useful discussions and comments. DAG, DGC and MdlR acknowledge support from the Comunidad Autonoma de Madrid and Universidad Autonoma de Madrid under grant SI2/PBG/2020-00005, and by the Spanish Agencia Estatal de Investigación through the grants PID2021-125331NB-I00 and CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033. DGC also acknowledges support from the Spanish Ministerio de Ciencia e Innovación under grant CNS2022-135702.
DA is supported by the National Science Foundation under award 2209444.
§ STATISTICAL TREATMENT
In all of our analyses, we consider the profiled log-likelihood-ratio test statistic, defined as
q(θ⃗; ζ⃗_0) ≡ -2 ln[ℒ(θ⃗, ω̂⃗̂, â; ζ⃗_0) / ℒ(θ⃗̂̂, ω⃗̂̂, â̂; ζ⃗_0)] ,
where ℒ is the likelihood function describing our data given the model parameters. For later convenience, we have split our model parameters into three subsets, represented by θ⃗, ω⃗, and ζ⃗_0. The parameters θ⃗≡ (m_4, |U_α 4|^2)^T, for some given flavour index α∈{e, μ, τ}, are the two parameters we are constraining at any given time. The parameters ω⃗≡ (|U_β 4|^2, |U_γ 4|^2)^T, with α≠β≠γ, are the two remaining mixings we profile over at a given BP. Finally, as explained in <ref>, we fix the parameters related to the new vector mediator, denoted by ζ⃗_0 ≡ (g_Z', m_Z')^T. We also introduce a dimensionless pull parameter, a, as a nuisance parameter that is designed to capture systematic uncertainties in the theoretically expected count. We model this parameter as being Gaussian distributed with a mean of zero and an experiment-dependent standard deviation.
Hatted variables indicate quantities that maximise the likelihood at a given parameter space point (the null hypothesis likelihood), while double-hatted variables represent the quantities that maximise the unconstrained likelihood (that of the alternative hypothesis).
§.§ Spallation Source Experiments
Following Refs. <cit.> for SS experiments, we perform a binned statistical analysis, modelling the likelihood of each bin i as a Gaussian. In this case, <ref> reduces to the simpler Δχ^2 statistic, with
χ^2(θ⃗, ω⃗, a; ζ⃗_0) = ∑_i = 1^N_bins( [N^i_obs - (1 + a) N_th^i(θ⃗, ω⃗; ζ⃗_0)] / σ_stat^i )^2 + (a/σ_sys)^2 .
Here, N^i_obs and N_th^i(θ⃗, ω⃗; ζ⃗_0) are the numbers of observed and theoretically expected events in the i^th bin, respectively. The quantity σ_stat^i is the statistical uncertainty of the observed number of events, which we take to be
σ_stat^i ≡√(N_obs^i + N_bkg^i) ,
where N_bkg^i is the expected number of background events in the i^th bin. When performing our analysis of COHERENT data, we use the backgrounds reported by the collaboration <cit.>. However, when considering the future ESS experiment, we instead use the fact that the beam-related neutron (BRN) background represents an important background in this type of search, with CENNS-10 reporting that 10% of its measured signal events arose due to this background source <cit.>. Since we make no assumptions on how well future SS experiments will handle this background, we take N_bkg^i ≡ N_SM^i / 10, with N_SM^i the number of expected events in the i^th bin under the SM. For the pull parameter, a, we take its uncertainty to be σ_sys = 0.05 <cit.>.
To construct the Δχ^2 for our parameters of interest, we compute the profiled test statistic
Δχ^2(θ⃗; ζ⃗_0) = χ^2(θ⃗, ω̂⃗̂, â; ζ⃗_0) - χ^2(θ̂̂̂⃗̂̂̂, ω̂̂̂⃗̂̂̂, â̂; ζ⃗_0) .
As explained in <ref>, we make use of Asimov data sets throughout our analyses. This means that our `observed' data are set to the theoretically expected number of events for each given benchmark point. This leads to two simplifications. Firstly, as the data are perfectly consistent with a given BP, we know that the value of the overall minimised χ^2 will be zero. Secondly, the minimisation over a can be done without resorting to numerical methods for any given θ⃗ and ω̂⃗̂. By simply finding that value of a for which ∂_a(Δχ^2) = 0, we get the analytical result
â=[∑_i(N^i_obs - N^i_th) N^i_th/(σ^i_stat)^2] / [(σ_sys)^-2+∑_i(N^i_th/σ^i_stat)^2] .
Note that, since N_th^i is not a function of a, the minimisation over a and ω⃗ can be done separately.
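Putting <ref> and <ref> together, the Gaussian Δχ^2 for an Asimov data set can be evaluated in a few lines; the Python snippet below is a minimal sketch of this step with illustrative variable names, not the code of our full analysis pipeline.

```python
import numpy as np

def delta_chi2_ss(N_obs, N_th, N_bkg, sigma_sys=0.05):
    """Gaussian chi^2 with the pull parameter minimised analytically.
    With Asimov data the global minimum is zero, so this value is
    directly the Delta chi^2 of the tested parameter point."""
    N_obs, N_th, N_bkg = (np.asarray(x, dtype=float) for x in (N_obs, N_th, N_bkg))
    sigma_stat = np.sqrt(N_obs + N_bkg)
    a_hat = np.sum((N_obs - N_th) * N_th / sigma_stat**2) \
            / (sigma_sys**-2 + np.sum((N_th / sigma_stat)**2))
    return np.sum(((N_obs - (1.0 + a_hat) * N_th) / sigma_stat)**2) \
           + (a_hat / sigma_sys)**2
```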
Finally, when drawing our contours for the 95% CL regions, we use the fact that our Δχ^2 should be distributed according to a χ^2 distribution with 2 degrees of freedom. This is because, of the 7 parameters that <ref> depends on, we profile over 3 of them in <ref>, keeping the remaining 2, represented by ζ⃗_0, fixed throughout. We therefore draw the boundaries of our regions at Δχ^2 = 6.18.
§.§ Direct Detection Experiments
For DD experiments, we also perform a binned statistical treatment. However, unlike for SS experiments, we assume that the number of counts in each bin follows a Poisson distribution due to the lower number of events expected within the high-energy bins.
Inserting a Poisson likelihood for ℒ in <ref> and once again exploiting our use of Asimov data sets, we get that
q(θ⃗; ζ⃗_0) = 2[∑_i = 1^N_bins (1 + â) N_th^i(θ⃗, ω̂⃗̂; ζ⃗_0) - N_obs^i + N_obs^i ln( N_obs^i / [(1 + â) N_th^i(θ⃗, ω̂⃗̂; ζ⃗_0)] ) ] + (â/σ_^8B)^2 .
Note that, as for SS experiments, we have also introduced the pull parameter a to capture the effect of systematic uncertainties. In the case of DD experiments searching for , we assume that this is dominated by the uncertainty in the ^8B solar neutrino flux, σ_^8B, for which we take different values in the main text.
As before, we can derive the analytical form for â; we do this by solving the equation ∂_a q = 0. We find that
â = [ -(1 + N_th^totσ_^8B^2) + √((1 + N_th^totσ_^8B^2)^2 - 4σ_^8B^2(N_th^tot - N_obs^tot)) ] / 2 ,
where N_obs^tot and N_th^tot are the total observed and theoretically expected number of events across all bins, respectively. We note that in <ref> we have neglected any background contribution, as the background (𝒪 (1)) in DARWIN is expected to be much smaller than the expected signal (𝒪 (10^2-3)) for the majority of bins. Since the pull parameter a only impacts the signal, the analytical minimisation presented in <ref> is only possible with zero (or, more generally, constant) background. With a bin-variable background contribution, the minimisation must instead be done numerically.
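For reference, a compact implementation of <ref> with the analytic pull of <ref> (again a sketch, valid only under the zero- or constant-background assumption stated above) reads:

```python
import numpy as np

def q_poisson_dd(N_obs, N_th, sigma_b8=0.01):
    """Poisson test statistic with the ^8B-flux pull minimised analytically."""
    N_obs = np.asarray(N_obs, dtype=float)
    N_th = np.asarray(N_th, dtype=float)
    s2 = sigma_b8**2
    N_obs_tot, N_th_tot = N_obs.sum(), N_th.sum()
    a_hat = (-(1.0 + N_th_tot * s2)
             + np.sqrt((1.0 + N_th_tot * s2)**2
                       - 4.0 * s2 * (N_th_tot - N_obs_tot))) / 2.0
    mu = (1.0 + a_hat) * N_th
    log_term = np.zeros_like(mu)
    mask = N_obs > 0                  # empty observed bins contribute 0*ln(0) -> 0
    log_term[mask] = N_obs[mask] * np.log(N_obs[mask] / mu[mask])
    return 2.0 * np.sum(mu - N_obs + log_term) + (a_hat / sigma_b8)**2
```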
To draw our 95% CL limits, we make use of Wilks' theorem <cit.>. This tells us that the log-likelihood-ratio test statistic asymptotically follows a χ^2 distribution with number of degrees of freedom equal to the difference in the number of free parameters between the null and alternative hypotheses. As previously, this gives us two degrees of freedom. We therefore draw the boundaries of our regions at q = 6.18.
|
http://arxiv.org/abs/2307.05273v2 | 20230710161710 | Viscosity and diffusion in life processes and tuning of fundamental constants | [
"K Trachenko"
] | physics.bio-ph | [
"physics.bio-ph"
] |
roman
|
http://arxiv.org/abs/2307.04105v1 | 20230709055525 | Towards Assumption-free Bias Mitigation | [
"Chia-Yuan Chang",
"Yu-Neng Chuang",
"Kwei-Herng Lai",
"Xiaotian Han",
"Xia Hu",
"Na Zou"
] | cs.LG | [
"cs.LG",
"cs.CY"
] |
Texas A&M University
[email protected]
Rice University
[email protected]
Rice University
[email protected]
Texas A&M University
[email protected]
Rice University
[email protected]
Texas A&M University
[email protected]
Despite the impressive prediction ability, machine learning models show discrimination towards certain demographics and suffer from unfair prediction behaviors. To alleviate the discrimination, extensive studies focus on eliminating the unequal distribution of sensitive attributes via multiple approaches. However, due to privacy concerns, sensitive attributes are often either unavailable or missing in real-world scenarios. Therefore, several existing works alleviate the bias without sensitive attributes. Those studies face challenges, either in inaccurate predictions of sensitive attributes or the need to mitigate unequal distribution of manually defined non-sensitive attributes related to bias. The latter requires strong assumptions about the correlation between sensitive and non-sensitive attributes. As data distribution and task goals vary, the strong assumption on non-sensitive attributes may not be valid and require domain expertise.
In this work, we propose an assumption-free framework to detect the related attributes automatically by modeling feature interaction for bias mitigation. The proposed framework aims to mitigate the unfair impact of identified biased feature interactions.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors by considering biased feature interactions.
Our source code is available at: https://anonymous.4open.science/r/fairint-5567
Towards Assumption-free Bias Mitigation
Na Zou
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
============================================================================
§ INTRODUCTION
Machine learning models have shown strong predictive performance in various high-stakes decision-making settings <cit.>, and have been deployed in many real-world applications, such as credit scoring <cit.>, loan approval <cit.>, criminal justice <cit.>, and education opportunity <cit.>.
However, machine learning models show discrimination towards certain demographics and suffer from biased prediction behavior, which may negatively impact the minority groups in those application fields.
For example, COMPAS, a recidivism prediction system, shows discrimination towards African-American offenders with a higher possibility of becoming a recidivist two years after leaving prison <cit.>. Recent works focus on bias mitigation techniques to alleviate discrimination in machine learning models.
Existing works to tackle the fairness issues are generally based on two groups of assumptions, i.e., bias assumptions and correlation assumptions.
For the works based on bias assumptions, they mitigate bias with known distributions of sensitive attributes by fairness regularization <cit.>, contrastive learning <cit.>, adversarial learning <cit.>, disentanglement representations <cit.>, and representation neutralization <cit.>.
However, due to privacy concerns, sensitive attributes are often missing <cit.>. Therefore, existing works adopt clustering methods <cit.> and an auxiliary module <cit.> to simulate the sensitive attributes. However, they often suffer from the inaccuracy of the predicted sensitive attributes when adopting clustering algorithms <cit.>.
Thus, the work based on correlation assumptions, FairRF <cit.>, addresses the unfair issues with a strong assumption that the unfair model prediction actually comes from the relationship between sensitive attributes and a set of predefined related non-sensitive attributes.
In this paper, we argue that correlation assumptions between sensitive and non-sensitive attributes may not be valid as data distribution and task goals vary. For example, FairRF <cit.> predefines (inherently assumes) that the related features of gender in the Adult dataset are age, relationship, and marital status. To test this assumption, we conducted an experiment exploring the relationship between gender and all other features. As shown in Figure <ref>, the predefined features are not the ones most linearly correlated with gender, so the assumption does not hold in general. Additionally, domain expertise and knowledge are required to predefine the related features.
Therefore, we raise the following question: Can we achieve fairness without assuming a predefined relation between sensitive and non-sensitive attributes?
To tackle the limitations of 1) the correlation assumption that unfair model prediction comes from the handcrafting predefined related attributes and 2) the further fairness problems caused by feature interactions, we aim to develop an assumption-free framework to automatically detect and integrate feature interactions for bias mitigation.
It is nontrivial to achieve our goal due to the following challenges.
First, in the real-world scenario, implicit bias of feature interactions are difficult to be detected, especially when sensitive attributes are missing.
Specifically, it is hard to find the high-order statistical interactions that may lead to biased predictions of deep neural networks due to the complex model structures.
Thus, when sensitive attributes are unavailable and no strong correlation assumptions are made about related features, it becomes very challenging to identify the biased feature interactions.
For example, searching for biased feature interactions among all feature combinations without sensitive attributes yields a huge number of candidates, which makes it extremely hard for models to learn the distribution of the truly biased interactions.
Second, it is challenging to mitigate bias in feature interactions because the bias is unevenly distributed across them.
For example, a prediction model trained without accounting for this uneven distribution may fail to detect, let alone mitigate, the biased feature interactions.
To address the aforementioned challenges, we propose FairInt, an assumption-free framework to automatically identify and further mitigate the bias in feature interactions.
Specifically, we develop a sensitive attribute reconstructor for tackling a situation where sensitive attributes are unavailable during the inference stage.
By designing a sensitive-oriented attention score, we develop a biased interaction detection layer to automatically identify the biased feature interactions and then embed the biased interaction information into the latent representation.
It is different from traditional deep neural networks that model feature interactions among all possible feature combinations and cannot identify specific biased feature interactions.
To equalize the probability distribution of sensitive attributes, we design two bias regularizations for debiasing the latent representation that contains biased interaction information.
These two regularizations debias the feature interactions by minimizing the divergence of latent space and the model predictions between different sensitive attribute groups.
We evaluate our framework on four real-world datasets across three different application domains, which include finance, education, and healthcare.
Compared with baseline models, the experimental results demonstrate that FairInt further mitigates biased prediction behaviors by accounting for biased feature interactions, while maintaining comparable downstream-task performance.
Moreover, by observing the modeled feature interaction, the FairInt shows the ability to provide better explainability via the designed sensitive-oriented attention score. We highlight our contributions as follows:
* We argue that identifying related attributes (non-sensitive attributes highly correlated with sensitive ones) through prior knowledge is problematic, because the correlations between sensitive and non-sensitive attributes change across datasets and models.
* We propose an assumption-free framework to automatically identify and further mitigate the biased feature interactions. Our framework does not need to handcraft related attributes for mitigating the unfair model prediction that comes from the interactions between sensitive and non-sensitive attributes. Instead, the proposed framework automatically identifies related attributes without prior knowledge during the inference stage.
* Experimental results on several real-world datasets demonstrate the effectiveness of the proposed FairInt framework.
Additionally, our framework provides better explainability via observing the attention weights between sensitive and non-sensitive attributes.
§ PRELIMINARIES
In this section, we introduce the existing bias mitigation strategies for deep neural networks and feature interaction modeling methods that inspire our proposed framework.
§.§ Bias Mitigation
To tackle the prejudicial decisions problem in deep learning models, there is increased attention to bias mitigation methods in recent studies <cit.>.
Many approaches apply regularization-based methods to the objective function of the proposed models, which requires assumptions to be made in advance.
Existing alleviating techniques are generally based on two groups of assumptions.
Bias Assumptions.
Because machine learning models show discrimination towards certain demographics, people assume that machine learning models have biased behaviors against certain groups.
With a known distribution of a sensitive attribute set, there are several advancements proposed to mitigate bias, such as:
1) Fairness regularization: the objective function of the bias mitigated models generally adds the fairness-related constraint terms <cit.>, which may penalize the prejudiced behaviors of the prediction models. Another existing work <cit.> compares the distributions of model predictions of different sensitive attributes and then minimizes KL-divergence between each sensitive attribute.
2) Adversarial learning: adversarial learning alleviates the biased effects from the known sensitive attributions by simultaneously building an Adversary with the Predictor of machine learning models.
One previous work <cit.> aims to leverage bias alleviation by proposing an adversarial learning strategy with the given distribution of sensitive attributes.
The model includes a Predictor, which accomplishes the downstream task predictions, and an Adversary, which predicts the target sensitive attributes.
The framework adopts adversarial training by minimizing Predictor and maximizing adversary, which aims to debias the unfair situations brought from Predictor.
3) Latent representation neutralization: one latent representation neutralization work <cit.> is to implicitly mitigate bias by adjusting the distribution of latent representations during the model training.
Correlation Assumptions.
In the real-world scenario, it is hard to get the true distribution of sensitive attributes due to privacy concerns, we thus assume that the unfair model predictions are caused by certain related attributes that have high correlations to sensitive attributes.
Specifically, when we face the fairness issue for model prediction, it is challenging to leverage the model bias if we lack sensitive feature information.
Thus, there are some works that focus on eliminating prediction bias under the constraint of unknown sensitive attributes' distribution.
ARL <cit.> utilizes adversarial learning based on Rawlsian Max-Min fairness objectives. However, this approach could be too strict in enhancing fairness across groups, and it is hard to maintain the performance of downstream tasks.
FairRF <cit.> addresses the biased issues by leveraging the relatedness between a set of related non-sensitive attributes and sensitive attributes.
This work assumes that the bias of model prediction actually comes from the high correlation between non-sensitive attributes and sensitive features.
In this manner, a fair model can be achieved by the proposed objective function of alleviating the relatedness between non-sensitive attributes and sensitive attributes. Formally, the objective function of FairRF can be illustrated as follows:
Let f_i ∈ F_n be a set of predefined related non-sensitive attributes, where F_n is a set of non-sensitive features, FairRF applies correlation regularization R_related on each f_i to make trained model fair toward sensitive attribute s by calculating the following function:
min_θℛ_related = ∑_i=1^Kλ_i ·ℛ(f_i, ŷ),
where λ_i is the weight for regularizing the correlation coefficient between f_i and ŷ.
However, this correlation assumption between sensitive and non-sensitive attributes may be sub-optimal, because it requires strong assumptions on feature dependencies.
In other words, data-specific and distribution similarity are necessary.
For example, when we define the related features of the sensitive attribute Gender as the three non-sensitive features Age, Relationship, and Marital-Status, we implicitly assume that these three features have the highest correlations with Gender.
Nevertheless, the features that are actually most correlated with the sensitive attribute in a given dataset may not be obvious to humans, so such a predefined list can easily be wrong.
For instance, in a particular dataset the features most correlated with gender might be eye color and sleeping quality, which are hard to anticipate as gender-related features.
In our work, instead of adopting assumptions on bias feature distribution with its related features, we propose an assumption-free framework for automatically detecting the related features for bias mitigation.
§.§ Learning Feature Interactions
One major advantage of neural networks is their ability to model complex interactions between features by automatic feature learning.
In the territory of click-through rate prediction, CTR prediction, feature interaction modeling has been playing a key role in improving downstream task performances by modeling different orders of feature combinations.
Instead of multiple layers of non-linear neural network approaches which suffer from inefficient and lack of good explanation of feature interactions <cit.>, there are popular approaches that are able to explicitly model different orders of feature combinations and meanwhile offer good model interpretability.
One of the previous works models feature interactions by calculating the inner products between a feature embedding and a trainable matrix, afterward calculating the Hadamard product of another feature embedding <cit.>.
AutoInt <cit.> models feature interactions by adopting the key-value attention mechanism and using the resulting attention weights between all feature pairs to compute a weighted sum of the input feature embeddings.
AutoInt utilizes the inner product operator ψ(·, ·) to define the similarity between two feature embeddings e_j and e_c, and leverages it to compute the attention weights under a specific attention head h by the following equation:
a^(h)_j, c = exp(ψ^(h)(e_j, e_c))/∑_n=1^Nexp(ψ^(h)(e_j, e_n)),
where N represents the number of input features.
The classic self-attention-based approach considers all feature pairs for feature interaction learning, so it is difficult to specifically identify bias in the feature pairs that involve the target sensitive attributes.
In our work, we only consider the feature pairs in which the target sensitive attribute acts as the attention Query, so that the interactions between sensitive and non-sensitive attributes can be identified and subsequently debiased.
Our framework can automatically detect the related features for bias mitigation.
§.§ Problem Definition
We first define the notations used in this work.
Let X be the input data set and Y be the ground truth label set of the model output, where X = { x_1, …, x_p} is the set of p attributes and Y∈{0, 1} is the binary label set.
The input attribute set decomposes as X = S∪C, where S is the set of sensitive attributes (e.g., gender, race, marital status) and C is the set of non-sensitive attributes.
We observe that the biased feature interactions are the influential factor in yielding fairness of predictive results.
Formally, we define the sensitive feature interaction set as ℐ_s = {ℐ(s, c_1), … , ℐ(s, c_p-1) | ∀ c_j ∈C}, where ℐ(·, ·) denotes an feature interaction between any two features, and s ∈S is a sensitive attribute.
For example, an interaction between a sensitive attribute gender and non-sensitive attribute job can be denoted as ℐ(gender, job).
Based on modeling the feature interactions throughout the prediction models, the biased interactions from ℐ_s eventually lead to bias on prediction tasks.
Based on the definitions and the intuitions above, we consider the interaction bias from prediction model f(X, θ) ≡ p(g(X)), where θ is the model parameters and p(·) is a single-layer prediction head of d-dimensional feature embedding encoder g(·): X→ℝ^d.
In our work, let ℐ_s be the sensitive feature interaction set learned from prediction model f(·), we aim to identify the biased interaction that appears in ℐ_s such that the detected biased interactions are alleviated during the prediction model training.
§ METHODOLOGY
In this section, we introduce an assumption-free fair mitigation framework, FairInt, to alleviate the biased feature interactions.
Figure <ref> illustrates our FairInt framework with two components: Assumption-free Bias Detection, which includes Sensitive Attributes Reconstructor (SAR) and Bias Interaction Detection (BID) layer, and Interaction-wise Bias Mitigation, which includes the regularizations Fairness Constraint (FC) and Interaction Fairness Constraint (IFC).
Our goal is to encourage the classifier to disentangle the biased interaction between sensitive and non-sensitive attributes and instead focus more on learning task-relevant information.
Assumption-free Bias Detection aims at identifying bias within feature interactions without predefined related features, and Interaction-wise Bias Mitigation focuses on alleviating the identified feature interaction bias.
In the following sections, we give a comprehensive description of our FairInt framework.
We first illustrate the details of the proposed bias detection component (Sec. <ref>).
Then, we introduce our two bias mitigation components (Sec. <ref>).
Finally, we demonstrate how to learn the fair predictor through our FairInt framework (Sec. <ref>).
§.§ Assumption-free Bias Detection
Sensitive attributes s ∈S are generally unavailable in real-world scenarios during the inference stage. Many existing works mitigate interaction bias under the assumption that the distribution of sensitive attributes is known; however, sensitive attributes are often withheld for various reasons, such as legal constraints, which makes most existing approaches inapplicable. To tackle these problems, we develop two corresponding components: the Sensitive Attributes Reconstructor (SAR), which removes the need for observed sensitive attributes, and the Bias Interaction Detection (BID) layer, which removes the need for predefined related features. Our assumption-free framework aims to disentangle the hand-crafted assumptions of the feature dependency between sensitive and specific non-sensitive attributes during the debiasing process.
Sensitive Attributes Reconstructor (SAR).
Since sensitive attributes s ∈S are generally unavailable in real-world scenarios during the inference stage, we design the Sensitive Attributes Reconstructor (SAR) to simulate the sensitive attributes for alleviating the implicit interaction bias contained in the non-sensitive attributes.
Specifically, we aim to generate a pseudo-sensitive attribute ŝ by imitating the distribution of sensitive attributes s ∈S throughout our proposed reconstructor, which brings out the biased interaction between the sensitive attributes and all other non-sensitive features.
Let the input attribute set be x ∈X without the sensitive attributes s ∈S. The objective of Sensitive Attributes Reconstructor (SAR) is to construct a reconstructor f to generate a pseudo-sensitive attribute ŝ for identifying the implicitly biased interactions toward non-sensitive features. The generating process of a pseudo-sensitive attribute can be formally illustrated as follows:
ŝ = SAR(e_x/s; Θ_r),
where Θ_r is the trainable parameters of reconstructor r, and e_x/s denotes the latent representation set of input features x without sensitive attribute s.
Specifically, we leverage the embeddings of all non-sensitive attributions to generate a pseudo-sensitive attribute vector. This makes the reconstructor extract the correlated information between sensitive and non-sensitive features. During training stage, the reconstructor loss ℒ_SAR can be shown as follows:
ℒ_SAR≡min_Θ_r∑_i=1^N (ŝ_i - s_i)^2,
where N is the number of training instance.
The effectiveness of SAR was evaluated by predicting unavailable sensitive attributes using non-sensitive features from Adult and Law School datasets. SAR achieved 87% accuracy for predicting Sex in Adult and 94% for predicting Race in Law School.
The results show that SAR can achieve impressive performance by capturing the correlations between non-sensitive attributes and unobserved sensitive attributes.
Besides predicting the pseudo-sensitive attributes ŝ, SAR enables our FairInt to better capture the interactions between unobserved sensitive and non-sensitive attributes.
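A minimal PyTorch sketch of SAR is shown below. The two-layer architecture and hidden width are illustrative simplifications (our experiments use a four-layer MLP, see Sec. <ref>); only the interface matters here: non-sensitive feature embeddings in, a pseudo-sensitive attribute ŝ out, trained with the MSE loss of Eq. <ref>.

```python
import torch
import torch.nn as nn

class SensitiveAttributeReconstructor(nn.Module):
    """Predict the pseudo-sensitive attribute s_hat from the embeddings
    of the non-sensitive features, i.e. s_hat = SAR(e_{x/s})."""
    def __init__(self, num_nonsensitive, emb_dim=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_nonsensitive * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, e_nonsensitive):          # (B, num_nonsensitive, emb_dim)
        return self.net(e_nonsensitive.flatten(start_dim=1)).squeeze(-1)

def sar_loss(s_hat, s):
    # Reconstruction loss L_SAR; the true sensitive attribute s is only
    # needed during training, not at inference time.
    return ((s_hat - s) ** 2).mean()
```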
Bias Interaction Detection (BID) Layer.
Optimizing Eq. <ref> in SAR generates a pseudo-sensitive attribute ŝ as a sensitive sensitive attribute, which allows our proposed FairInt to quantitatively analyze the interaction between pseudo-sensitive attributes and non-sensitive attributes. Thus, we propose Bias Interaction Detection (BID) to identify the highly potential biased interactions with the generated pseudo-sensitive attribute.
We first let all the input features be the p-kind attribute set X = {x_1, …, x_p} which contains categorical and numerical features.
Because categorical features are too sparse to learn, we map all the input features into low-dimensional spaces with the unique feature embeddings e_i. The formula can be illustrated as e_i = M_i x_i,
where M_i is an embedding lookup matrix corresponding to feature x_i with dimension d.
Feature interactions are typically modeled by either the inner product similarity or attention scoring mechanism between two feature embeddings <cit.>. For instance, AutoInt <cit.> utilizes the multi-head self-attention mechanism to model high-order feature interactions for improving the downstream task's performance.
AutoInt learns feature interaction within a hierarchical representation structure, which has proven to be effective in several machine learning territories <cit.>.
Especially, self-attention-based mechanism has been utilized in several machine learning areas for capturing the importance within features of input instances <cit.>.
In our work, we exploit self-attention machanism <cit.> to model feature interactions. The main goal of our framework is to mitigate the biased feature interaction for the model predictions but without predefined assumptions.
Therefore, based on the ability of self-attention mechanism to identify important feature interactions, we design Bias Interaction Detection (BID) to point out the key biased interactions of pseudo-sensitive attributes.
Unlike the original self-attention mechanism, which calculates attention weights between every pair of features, we focus on modeling the feature interactions only between the pseudo-sensitive attribute ŝ and the other non-sensitive features by computing their attention weights.
Specifically, we model the interactions between a pseudo-sensitive attribute ŝ and one non-sensitive features c ∈C with attention head h as a_ŝ, c, which can be calculated as follows:
a_ŝ, c = exp(ψ^h(ê_̂ŝ, e_c))/∑_c ∈Cexp(ψ^h(ê_̂ŝ, e_c)),
where ê_̂ŝ and e_c are the low-dimensional embedding of ŝ and c, and ψ^h(ê_̂ŝ, e_c) denotes as the scoring operator to evaluate the similarity between ê_̂ŝ and e_c.
In this paper, we adopt dot product as an example for ψ^h(ê_̂ŝ, e_c), which can be illustrated as follows:
ψ^h(ê_̂ŝ, e_c) = ⟨ W^h_Queryê_̂ŝ, W^h_Key e_c ⟩,
where ⟨· , ·⟩ is inner product operator, and W^h_Query and W^h_Key are embedding matrices for ê_̂ŝ and e_c. The biased interaction scores can now be defined as a_ŝ, c in this manner.
After obtaining the biased interaction scores between the sensitive and non-sensitive features, we generate the biased interaction embeddings ê^H_s to represent the biased interactions for bias mitigation. We formally define the biased interaction embeddings as following formula:
ê^H_s = _h=1^|H| ∑_c=1^C a_ŝ, c (W^h_value· e_c),
where W^h_value is a trainable embedding matrix, and ‖ denotes the concatenation operator for all biased interaction embeddings of each attention layer h ∈H.
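The sketch below illustrates the BID layer in PyTorch under its simplest configuration (a single attention head, as used in our experiments); it scores only (pseudo-sensitive, non-sensitive) pairs, in contrast to full self-attention over all pairs. Module and variable names are illustrative.

```python
import torch
import torch.nn as nn

class BiasInteractionDetection(nn.Module):
    """Sensitive-oriented attention: the pseudo-sensitive embedding acts as
    the Query, the non-sensitive embeddings as Keys and Values."""
    def __init__(self, emb_dim=4, num_heads=1):
        super().__init__()
        self.W_q = nn.ModuleList([nn.Linear(emb_dim, emb_dim, bias=False)
                                  for _ in range(num_heads)])
        self.W_k = nn.ModuleList([nn.Linear(emb_dim, emb_dim, bias=False)
                                  for _ in range(num_heads)])
        self.W_v = nn.ModuleList([nn.Linear(emb_dim, emb_dim, bias=False)
                                  for _ in range(num_heads)])

    def forward(self, e_s_hat, e_nonsensitive):
        # e_s_hat: (B, d); e_nonsensitive: (B, C, d)
        heads = []
        for W_q, W_k, W_v in zip(self.W_q, self.W_k, self.W_v):
            q = W_q(e_s_hat).unsqueeze(1)               # (B, 1, d)
            scores = (q * W_k(e_nonsensitive)).sum(-1)  # psi(e_s, e_c), shape (B, C)
            attn = torch.softmax(scores, dim=-1)        # a_{s,c}: bias interaction scores
            heads.append((attn.unsqueeze(-1) * W_v(e_nonsensitive)).sum(dim=1))
        return torch.cat(heads, dim=-1)                 # biased interaction embedding e_s^H
```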
§.§ Interaction-wise Bias Mitigation
After receiving the detected bias interaction embeddings ê^H_s, we focus on alleviating the bias from feature interactions.
Our goal is to equalize the conditional probability distribution of bias interaction embeddings given different sensitive attributes s ∈S. However, the sensitive attribute information in ê^H_s can easily be diluted due to the imbalance between the numbers of sensitive and non-sensitive attributes. This may affect the bias mitigation performance, since the alleviation process requires an explicit sensitive attribute as a pivot to mitigate. Hence, we adopt a residual neural network (ResNet) <cit.> to enrich the information of pseudo-sensitive attributes, which we can formally express as follows:
e_ŝ = ReLU(ê^H_s + W_Res·ê_̂ŝ),
where W_Res is the residual model weight and ê_̂ŝ is the embedding of pseudo-sensitive attributes.
In this work, we design two fairness constraints: Interaction Fairness Constraint and Fairness Constraint for biased interaction mitigation.
Interaction Fairness Constraint (IFC) Loss.
In order to mitigate the detected bias interactions from different sensitive attribute groups, we design the Interaction Fairness Constraint (IFC) loss to minimize the KL-divergence between the sensitive attribute groups. IFC can then ensure the equivalent information gained from each feature interaction.
Formally, IFC can be formulated as follows:
ℒ_IFC = ∑_i ∈S∑_j ∈S/iKL(e_[ŝ≈ i], e_[ŝ≈ j]),
where KL(·) denotes the KL-divergence, and e_[ŝ≈ i] is the subset of e_ŝ that is more similar to sensitive attribute i ∈S. For convenience, we set the group boundary at the expected value of a uniformly distributed S to decide which group in S a given ŝ belongs to.
IFC loss mitigates the bias information of the latent representation by calculating the KL-divergence scores as biased scores between each group in pseudo-sensitive attributes S.
Therefore, by adding ℒ_IFC as a regularization term to our framework, the bias feature interaction of latent representation e_ŝ can be alleviated.
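One plausible instantiation of ℒ_IFC for a binary sensitive attribute is sketched below. How the group-wise representations are turned into distributions for the KL term is an implementation choice; the softmax over latent dimensions used here is our illustrative assumption rather than a prescription.

```python
import torch
import torch.nn.functional as F

def ifc_loss(e_s, s_hat, threshold=0.5, eps=1e-8):
    """Symmetric KL divergence between the two pseudo-sensitive groups,
    split at `threshold` (the expected value of a uniform binary s)."""
    g0, g1 = e_s[s_hat < threshold], e_s[s_hat >= threshold]
    if len(g0) == 0 or len(g1) == 0:
        return e_s.new_zeros(())
    # Treat each group's mean representation, softmax-normalised over the
    # latent dimensions, as a probability distribution.
    p = F.softmax(g0.mean(dim=0), dim=-1) + eps
    q = F.softmax(g1.mean(dim=0), dim=-1) + eps
    return (p * (p / q).log()).sum() + (q * (q / p).log()).sum()
```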
Fairness Constraint (FC) Loss.
Although our proposed IFC mitigates most of the biased interaction information from the embedding aspect, the remaining biased interaction may be amplified by prediction models and generate unfair task predictions.
To alleviate the unfairness of model predictions on downstream tasks, we adopt the Fairness Constraint (FC) loss toward pseudo-sensitive attributes ŝ. In this work, we focus on classification tasks.
Our proposed FC aims to mitigate biased prediction behaviors ŷ by computing the absolute differences of the cross entropy between every two of each pseudo-sensitive attribute (ŝ_i, ŝ_j) ∈S.
Formally, FC can be formulated as follows:
ℒ_FC = ∑_i ∈S∑_j ∈S/i |CE_[ŝ≈ i] - CE_[ŝ≈ j]|,
where CE_[ŝ≈ i] is cross entropy which belongs to a certain sensitive attribute i ∈S.
Cross entropy reflects the correctness of the classification prediction ŷ. The idea of the FC loss is to minimize the discrepancy in this correctness between every pair of pseudo-sensitive attribute groups.
Thus, ℒ_FC can effectively alleviate the prejudiced model predictions among the pseudo-sensitive attribute set.
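A corresponding sketch of ℒ_FC for binary classification with a binary sensitive attribute could look as follows; group assignment follows the same thresholding convention as above, and the function names are illustrative.

```python
import torch.nn.functional as F

def fc_loss(logits, y, s_hat, threshold=0.5):
    """Absolute difference of the per-group cross entropies of the task head."""
    m0, m1 = s_hat < threshold, s_hat >= threshold
    if m0.sum() == 0 or m1.sum() == 0:
        return logits.new_zeros(())
    ce0 = F.binary_cross_entropy_with_logits(logits[m0], y[m0].float())
    ce1 = F.binary_cross_entropy_with_logits(logits[m1], y[m1].float())
    return (ce0 - ce1).abs()
```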
§.§ Fair Classifier with FairInt
Here we discuss how to incorporate the IFC loss ℒ_IFC and the FC loss ℒ_FC with a classifier to alleviate the biased interaction.
We adopt the reconstructor loss ℒ_SAR to our framework for training the SAR to generate the pseudo-sensitive features.
As the IFC loss can serve as a stand-alone optimization objective, it is capable of mitigating biased feature interactions in latent representations for any kind of classification model.
In our work, we evaluate the effectiveness of our framework on a one-layer multi-layer perceptron as the classification model, which can be replaced by any deeper or more powerful models.
To train a classification model with our proposed FairInt framework, we optimize the cross entropy loss ℒ_0.
We then incorporate ℒ_0 with the Interaction Fairness Constraint (IFC) loss ℒ_IFC, Fairness Constraint (FC) loss ℒ_FC, and reconstructor loss ℒ_SAR as the final objective function to fair classifier training.
Our proposed IFC loss and FC loss help the classification models mitigate the bias feature interactions from the views of latent representations and alleviate the prejudiced model predictions with given different kinds of sensitive attributes during training.
Specifically, we optimize the proposed FairInt by illustrating the following joint loss function:
ℒ_FairInt = ℒ_0 + λ_IFCℒ_IFC + λ_FCℒ_FC + ℒ_SAR,
where ℒ_FairInt denotes as the loss function to the proposed FairInt and λ_IFC and λ_FC are the weighting hyper-parameters to balance the biased interaction mitigating and feature interactions modeling.
By optimizing ℒ_FairInt, we can alleviate the bias model predictions by mitigating the detected bias feature interactions without defining any related and potentially biased features interactions in advance.
In inference stage, the trained FairInt framework can provide fair predictions without sensitive attributes.
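Assembling the pieces sketched above, one evaluation of the joint objective could be written as follows. The assumption that the model returns the pseudo-sensitive attribute, the debiased representation, and the prediction logits in a single forward pass, as well as the default weights, are illustrative; the snippet relies on the sar_loss, ifc_loss, and fc_loss functions from the earlier sketches.

```python
import torch.nn.functional as F

def fairint_loss(model, x_nonsensitive, y, s, lambda_ifc=0.1, lambda_fc=0.1):
    """Joint objective L_FairInt = L_0 + lambda_IFC*L_IFC + lambda_FC*L_FC + L_SAR."""
    s_hat, e_s, logits = model(x_nonsensitive)   # pseudo-sensitive attr., debiased rep., prediction
    l0 = F.binary_cross_entropy_with_logits(logits, y.float())
    return (l0
            + lambda_ifc * ifc_loss(e_s, s_hat)
            + lambda_fc * fc_loss(logits, y, s_hat)
            + sar_loss(s_hat, s))
```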
§ EXPERIMENT
In this section, we empirically evaluate the proposed FairInt framework. We mainly focus on the following research questions:
* Compared with the existing baseline methods, can our assumption-free FairInt framework mitigate the unfair model prediction on the downstream tasks (Sec. <ref>)?
* Can our proposed Bias Interaction Detection layer identify the bias feature interaction and encode it in the latent representation (Sec. <ref>)?
* How do the proposed Interaction Fairness Constraint loss and Fairness Constraint loss in Eq. <ref> and Eq. <ref> impact the fairness of the classification model (Sec. <ref>)?
* How do the hyper-parameters impact the fairness performance of the proposed FairInt (Sec. <ref>)?
* How does our assumption-free FairInt framework automatically detect the related features and further mitigate the bias feature interaction (Sec. <ref>)?
§.§ Datasets
We consider four real-world tabular datasets <cit.> that are commonly used for studying fairness-aware classification, which include three application domains as shown in Table <ref>.
§.§ Baselines and Fairness Metrics
Besides the ARL <cit.> and FairRF <cit.> mentioned in Sec. <ref>, we also leverage two fairness-constraint regularization methods applied to vanilla MLP models as baselines for comparison with our framework.
The first baseline adds the Fairness Constraint (FC) loss of Eq. <ref> as a regularizer to a vanilla MLP model, where the FC loss mitigates biased prediction behaviors.
The second baseline applies another regularization-based mitigation method to a vanilla MLP model, Prejudice Remover <cit.>, which uses mutual information to equalize the distributions between two variables and thereby alleviate biases.
For each dataset, both baselines are applied to the same vanilla MLP model described in Sec. <ref>.
We also compare the proposed FairInt to two further baselines: vanilla MLP classification models and the CTR prediction model AutoInt, which models feature interactions with a key-value attention mechanism to improve performance on CTR prediction tasks.
We use two group fairness metrics to evaluate the fairness of prediction models: Demographic Parity (Δ DP) <cit.> and Equalized Odds (Δ EO) <cit.>.
Δ DP measures the difference in the probability of a positive outcome between different sensitive groups; values closer to 0 are better. It is calculated as follows:
Δ DP = p(ŷ = 1|s = s_i) - p(ŷ = 1|s = s_j),
where s_i and s_j represent different sensitive groups.
Equalized Odds require the probability of positive outcomes to be independent of the sensitive group s, conditioned on the ground truth label y.
Specifically, Δ EO calculates the summation of the True Positive Rate difference and False Positive Rate difference as follows:
Δ EO = |P(ŷ = 1|s = s_i, y = 1) - P(ŷ = 1|s = s_j, y = 1)|
+ |P(ŷ = 1|s = s_i, y = 0) - P(ŷ = 1|s = s_j, y = 0)|,
where values of Δ EO closer to 0 are better.
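For concreteness, the following NumPy sketch computes ΔDP and ΔEO for a binary sensitive attribute and hard binary predictions; the array names and the assumption that both groups contain both ground-truth labels are ours.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """ΔDP: difference in positive-prediction rate between the two sensitive groups."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equalized_odds_gap(y_pred, y_true, s):
    """ΔEO: TPR difference plus FPR difference between the two sensitive groups."""
    gap = 0.0
    for y in (1, 0):  # y = 1 gives the TPR term, y = 0 the FPR term
        p_group0 = y_pred[(s == 0) & (y_true == y)].mean()
        p_group1 = y_pred[(s == 1) & (y_true == y)].mean()
        gap += abs(p_group0 - p_group1)
    return gap
```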
§.§ Implementation Details
In FairInt, we set the embedding dimension d=4 and the number of attention heads in BID layer as 1 among all four datasets.
For the Adult and Law School datasets, we use a two-layer MLP with 64 and 32 units in its hidden layers as the vanilla MLP model.
For the Bank Marketing dataset, we use a two-layer MLP with 40 and 20 units of each hidden layer as the vanilla MLP model.
For the Diabetes dataset, we use a four-layer MLP with 512, 256, 64, and 64 units of each hidden layer as the vanilla MLP model.
As for the AutoInt, we set the embedding dimension of each feature to 4, the number of interaction layers to 2, and the number of attention heads in each interaction layer to 2 for all four datasets.
To prevent overfitting, we search the dropout rate over the set {0.1, 0.3, 0.5, 0.7, 1.0} and the l_2 regularization coefficient over the set {5e-1, 1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4, 5e-5, 1e-5} for all the models.
To address the unavailability of sensitive attributes in the inference stage, we utilize a four-layer MLP as the SAR on top of both the vanilla MLP predictor and the AutoInt models.
§.§ (Q1) Performance Comparison on Real-world Datasets
In this section, we report classification performance using AUC <cit.>, a binary-classifier measure well suited to imbalanced data.
Fairness is evaluated with the two aforementioned metrics: Δ DP and Δ EO.
Table <ref> summarizes the performance of the FairInt and the baselines on the four real-world datasets, where FC and PR refer to two vanilla MLP models which are debiased by two regularization-based bias alleviation methods Fair Constraint proposed in Sec. <ref> and Prejudice Remover <cit.>.
We observe that our FairInt significantly and consistently mitigates the bias predictions with the lower Δ DP and Δ EO across all four datasets.
The best fairness results are highlighted in bold.
Given the limitations demonstrated by FairRF <cit.> and ARL <cit.> in balancing the trade-off between AUC and fairness performance as assessed in the Law School and Bank Marketing, a comparison of their DP and EO performance with other methods is not performed.
Compared with the best bias alleviated baselines between FC and PR, our FairInt improves Δ DP by 5.37%, 36.36%, 8.08%, and 19.61% on Adult, Law School, Bank Marketing, and Diabetes, respectively.
As for Δ EO, our FairInt improves Δ EO by 4.89%, 37.70%, 17.82% and 40.35% on the four datasets, respectively.
We also make the following observations of the experimental results.
First, AutoInt can slightly improve the AUC performance with the attention-based feature interaction modeling mechanism, but it also augments the biased prediction behaviors.
As we can see from Table <ref>, AutoInt can improve the AUC of vanilla MLP models by 0.44%, 0.15%, 0.36% and 0.74% on the four datasets, respectively.
However, it tends at the same time to increase Δ DP and Δ EO.
Compared with the vanilla MLP models, AutoInt increases Δ DP on three out of four datasets and increases Δ EO on all four datasets.
The reason is that the modeled feature interactions not only improve the downstream task performances but also contain the biased feature interactions that will augment the biased behaviors of predictions.
Second, our FairInt can maintain the competitive classification performance compared with the other debiased baselines.
As we can see from Table <ref>, the fairness performances of our proposed FairInt are improved significantly while the classification performances are slightly decreased.
We compare our FairInt with the Vanilla MLP model, AutoInt, and the debiased baselines PR in Figure <ref>, which illustrates their fairness-AUC curves for the four datasets.
The hyper-parameters λ_IFC and λ_FC in Eq. <ref> control the trade-off between AUC and fairness for FairInt.
For the debiased vanilla MLP with PR, the hyper-parameter in front of the regularization term also controls the trade-off.
From Figure <ref> we also observe that our proposed FairInt achieves the best Δ DP and Δ EO on all four datasets while remaining competitive in AUC compared to PR.
§.§ (Q2) Analysis of Bias Interaction Detection Layer
We analyze the ability of the Bias Interaction Detection (BID) layer that can identify the biased feature interactions.
In Table <ref>, the Vanilla FairInt refers to the FairInt framework without the two interaction-wise bias mitigation regularization IFC and FC, and it keeps the Bias Interaction Detection (BID) layer which is designed to identify biased feature interactions.
Compared with the vanilla MLP models, the Vanilla FairInt significantly augments biased behaviors of model predictions.
For all four datasets, the Vanilla FairInt increases Δ DP by 2.99%, 16.06%, 22.22% and 38.67%, and it increases Δ EO by 33.10%, 48.13%, 37.24% and 55.29%, respectively.
The reason Vanilla FairInt can remarkably augment the biased predictions is that BID focuses on detecting the biased feature interactions and embedding them into the latent representation.
A similar scenario can be observed in AutoInt because it models all the interactions between all the feature pairs that include biased feature interactions.
Unlike the AutoInt, our proposed FairInt focuses on learning to model the biased feature interactions only among the feature pairs which contain a sensitive attribute.
By doing so, the latent representations in FairInt embed the biased feature interaction information without other, noisy information.
§.§ (Q3) Analysis of Fairness Constraint Components
After the latent representations in FairInt embed the bias feature interaction information, we leverage the two fairness constraints to mitigate the embedded bias feature interactions.
To better understand the effects of the two fairness constraints, Interaction Fairness Constraint and Fairness Constraint, in the proposed FairInt, we conduct the ablation studies to analyze and verify their contributions to the FairInt framework.
In Table <ref>, the Vanilla FairInt refers to the FairInt framework without the two interaction-wise bias mitigation regularization IFC and FC, + FC refers to the Vanilla FairInt with Fairness Constraint, and + IFC refers to the Vanilla FairInt with Interaction Fairness Constraint.
Although the debiasing effects of the + FC are not as significant as the FairInt, it can achieve the same level of Δ DP and Δ EO as the vanilla MLP models debiased by FC in all the four datasets.
Compared with the + FC, the + IFC focuses more on improving Δ EO than Δ DP.
The reason is that the implicit mitigation regularization IFC focuses on optimizing the latent representation rather than directly mitigating biased behaviors of the model predictions.
Therefore, when FairInt adopts the IFC together with the FC, it markedly improves fairness, with lower Δ DP and Δ EO than when only one of the two bias regularization components is used.
§.§ (Q4) Analysis of Sensitive Hyper-parameter
In this section, we study the impact of the hyper-parameter λ_IFC and λ_FC in the Eq. <ref> to answer the research question Q4.
We conduct the sensitivity analysis for both the two hyper-parameters on the Adult and Bank Marketing datasets.
To analyze the influence of λ_FC, we fix the best λ_IFC to see the trend of AUC, Δ DP, and Δ EO when changing λ_FC on the two datasets, respectively.
As shown in Figure <ref>, the downstream task performance (AUC) of the proposed FairInt is not sensitive to λ_FC.
As for the two fairness metrics Δ DP and Δ EO, they will be improved when the λ_FC increases, and the improvement will gradually converge to a certain level.
To analyze λ_IFC, we fix the best λ_FC and observe the trend of AUC, Δ DP and Δ EO when changing λ_IFC on the Adult and Bank Marketing datasets, respectively.
According to the observations from Figure <ref>, the downstream task performance (AUC) of FairInt is likewise not sensitive to λ_IFC.
At the same time, the best λ_IFC can typically achieve the best Δ DP when reaching the best Δ EO on both Adult and Bank Marketing datasets.
§.§ (Q5) Key Observations on Interaction
One of the benefits of modeling feature interaction is that it provides better interpretability by observing the pattern of modeled feature interactions.
Therefore, in this section, we provide the key observations on the feature interactions, which refer to the attention weights a_s,k calculated by Eq. <ref> in our proposed FairInt.
Here, we show the feature interactions between the sensitive and non-sensitive attributes on the Adult dataset, and we treat the FairInt w/o Both as a biased model, the FairInt w/o FC as a slightly fair model, the FairInt w/o IFC as a fair model, and FairInt as a fairer model.
The feature interactions of FairInt w/o Both, FairInt w/o FC, FairInt w/o IFC and FairInt are shown in the Figure <ref>.
In the four figures, the yellow points represent the mean values of each attention weight between the sensitive attribute gender and a non-sensitive attribute.
By comparing the feature interactions between biased and fair models, we identify two informative factors of the feature interactions: their variance and their mean value.
Fair models have a lower variance for each feature interaction between sensitive and non-sensitive attributes, and the mean value of a feature interaction represents the correlation between the sensitive and the non-sensitive attribute.
For example, comparing the attention weights of FairInt, the fairest of the four models, with those of FairInt (w/o Both), the most unfair of the four models, the feature interactions between gender and all other non-sensitive attributes have lower variances in the fairer model.
Also, the mean value of the feature interaction between gender and relationship is lower in the fairest model, which implies that the fairer model treats relationship as an attribute less relevant to gender.
§ CONCLUSION AND FUTURE WORKS
In this paper, we proposed FairInt, an assumption-free framework that automatically identifies and mitigates biased feature interactions. Our framework does not need prior knowledge to identify the related attributes in advance for mitigating unfair model predictions. FairInt is composed of a Sensitive Attribute Reconstructor, Bias Interaction Detection, and Interaction-wise Bias Mitigation, which respectively predict pseudo-sensitive attributes, model the identified biased feature interactions, and mitigate the biased interactions with the FC and IFC losses. Experiments on four real-world datasets demonstrate that FairInt can alleviate unfair model predictions while maintaining competitive classification performance.
As for future directions, we will explore a new fairness constraint that limits the variance of the feature interactions, which our analysis suggests reflects the fairness of the proposed FairInt.
|
http://arxiv.org/abs/2307.04088v1 | 20230709034448 | Cracking the Puzzle of CO2 Formation on Interstellar Ices. Quantum Chemical and Kinetic Study of the CO + OH -> CO2 + H Reaction | [
"Germán Molpeceres",
"Joan Enrique-Romero",
"Yuri Aikawa"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Quantum Chemical and Kinetic Study of the CO + OH -> CO2 + H Reaction.
Department of Astronomy, Graduate School of Science, The University of Tokyo, Tokyo 113 0033, Japan
[email protected]
Leiden Institute of Chemistry, Gorlaeus Laboratories, Leiden University, PO Box 9502, 2300 RA Leiden, The Netherlands
[email protected]
CO2 is one of the dominant components of the interstellar ice. Recent observations show CO2 exists more abundantly in polar (H2O-dominated) ice than in apolar (H2O-poor) ice. CO2 ice formation is primarily attributed to the reaction between CO and OH, which has a barrier.
We investigate the title reaction in H2O ice and CO ice to quantify the efficiency of the reaction in polar ice and apolar ice.
Highly accurate quantum chemical calculations were employed to analyze the stationary points of the potential energy surfaces of the title reaction in the gas phase and on H2O and CO clusters. Microcanonical transition state theory was used as a diagnostic tool for the efficiency of the reaction under ISM conditions. We simulate the kinetics of ice chemistry, considering different scenarios involving non-thermal processes and energy dissipation.
The CO + OH reaction proceeds through the remarkably stable intermediate HOCO radical. On the H2O cluster, the formation of this intermediate is efficient, but the subsequent reaction leading to CO2 formation is not. Conversely, HOCO formation on the CO cluster is inefficient without external energy input. Thus, CO2 ice cannot be formed by the title reaction alone on either the H2O or the CO cluster.
In the polar ice, CO2 ice formation is possible via CO + OH -> HOCO, followed by HOCO + H -> CO2 + H2, as demonstrated by abundant experimental literature. In apolar ice, CO2 formation is less efficient because HOCO formation requires external energy. Our finding is consistent with the JWST observations. Further experimental work is encouraged using low-temperature OH radicals.
Cracking the Puzzle of CO2 Formation on Interstellar Ices
G. Molpeceres
1
J. Enrique-Romero
2
Y. Aikawa
1
Received August 12, 2023; accepted August 12, 2023
================================================================================================================
§ INTRODUCTION
In the cold molecular clouds of the interstellar medium (ISM), a significant fraction of the molecules are contained in the solid phase in the form of ice. While most of the molecules present in the ISM have been detected in the gas phase using radio telescopes through their rotational transitions, the direct observation of ices requires studying their vibrational transitions, which are commonly affected by telluric contamination. In this context, space telescopes, such as Spitzer or, more recently, JWST, are essential. Ice observations <cit.> reveal the presence of several components such as H2O, CO, CH3OH, and the object of this study, CO2. The abundance of these species, as well as their speciation in the ice or their presence in specific regions of the ISM, can only be explained by considering their formation routes and the chemical conditions necessary for their appearance.
The different components of interstellar ice may be formed in situ on the surface of refractory material. Such is the case of H2O, which is formed from the hydrogenation of atomic oxygen <cit.>, or the case of CH3OH, which is formed from the hydrogenation of CO <cit.>. Other significant components are primarily synthesized in the gas and accrete under extremely cold and dense conditions on the grain, like CO. Interstellar carbon dioxide, CO2, is thought to form via reactions on the surface (see, e.g., <cit.>). The postulated reactions contributing to the CO2 formation are:
CO + OH -> CO2 + H
HCO + O -> CO2 + H
CO + O -> CO2
Of these three reactions, Reaction <ref> has an activation barrier when atomic oxygen is in its ground state, (^3P)O <cit.>. Reaction <ref> is barrierless, and Reaction <ref>, the reaction whose study we tackle in this paper, is assumed to have a minimal activation energy (∼ 100 K, <cit.>).
The assumption of a tiny activation energy for the CO + OH -> CO2 + H reaction is supported by a plethora of surface chemistry experiments <cit.>. These experiments vary in several factors, including the formation route of the OH radical, either by hydrogenation of O2 <cit.>, dissociation of H2O molecules before deposition on the ice <cit.>, or direct photodissociation of H2O ice molecules <cit.>. Other variations between experiments include the substrate under consideration, either amorphous silicates <cit.>, CO <cit.>, matrix isolation <cit.> or H2O <cit.>. On the modelling side, <cit.> build on the experimental knowledge and coarse-grained it into a combination of a direct formation route CO + OH -> CO2 + H operating at T≥12 K, coinciding with the onset of CO diffusion on H2O, and an indirect three-body route on CO ices that relies on the formation of a kinetically excited OH radical, O + H -> OH^*, that subsequently partakes in the CO + OH^* reaction. The latter route on CO ices allows one to explain the CO2 bands in non-polar media observed in infrared observations of ices <cit.>. In summary, there is ample evidence for Reaction <ref> being efficient on dust grains. However, the same reaction in the gas phase is relatively slow, with rate constants as low as ∼ 2×10^-13 cm^3 molecule^-1 s^-1 at 300 K <cit.>. The title reaction in the gas phase has also been a source of extensive theoretical attention. It has been simulated using both semi-classical and quantum dynamics on highly accurate potential energy surfaces (PES) <cit.>. It was also studied in the presence of other CO2 molecules <cit.>. The theoretical works find rate constants even lower than the values reported in <cit.>.
The different reactivity on surfaces and the gas phase is puzzling and counterintuitive. In both phases, the reaction is acknowledged to proceed through the highly stable HOCO radical. The evolution from this radical is the primary source of uncertainty because of the high activation energies to form the bimolecular CO2 + H products. In the gas, where a third body to stabilize HOCO is unavailable, the reaction is more likely to occur owing to the energy redistribution into the few vibrational degrees of freedom, ultimately leading to an irreversible reaction. On the surface, the ice molecules dissipate a significant fraction of this energy, ideally leading to the thermalization of HOCO, hence slowing or impeding the formation of CO2. This was proved by <cit.>, initiating the conundrum we tackle in this work and that has also been debated from different prisms <cit.>. If the reaction is slow in the gas, it should not proceed on the ice, where little energy is left for the reaction after dissipation into the ice. Hence, how is the mismatch between gas and solid phase experiments possible? In this article, we aim to shed light on this particular issue. The two main possibilities to explain the disagreement include, in the first place, the operation of external energy input, either chemical from the O2 + H or O + H reactions required to form the OH radical, or the excess energy used to photodissociate H2O. Secondly, free H atoms from the experiment may promote H abstraction reactions, HOCO + H -> CO2 + H2. While these two possibilities are often assumed when interpreting the experimental results, it is fundamental to distinguish which is dominant, if any, to establish under which conditions the laboratory measurements apply to the ISM. Determining the factors contributing to the reaction yield in the experiments is complicated because the detection techniques are suited for identifying only the final products. Quantum chemical calculations are instrumental and provide an atomistic perspective of the different elementary processes relevant to the reaction.
In this work, we simulate the title reaction on two different model ices, H2O and CO, and perform kinetic simulations using a microcanonical formalism to determine the importance of non-thermal effects in the reaction, including dissipation over different numbers of molecules, and complete the picture left by the different experimental studies.
The paper is structured as follows. In <Ref>, we describe the employed computational methodology. In <Ref> we present the structural models for the ices (<Ref>), the PES for the reactions in each of the surfaces (<Ref> and <Ref>) and the associated kinetic analysis (<Ref>). <Ref> is dedicated to interpreting our results from an astrophysical point of view, contextualising the preceding experiments. We finally summarize our main findings in <Ref>.
§ METHODOLOGY
§.§ Quantum chemical calculations
The stationary points in the PES were characterized using density functional theory (DFT) calculations on model clusters mimicking H2O and CO ices. Because this work aims to determine the impact of energy redistribution in the formation of CO2 on ice, we need to use sufficiently large structural models to allow for (ergodic) energy equipartition. In a preceding calculation, <cit.> used a cluster containing 33 H2O molecules and discussed the suitability of a model of this size, indicating that energy dissipation should be well described with a model of this size. This was later confirmed with dedicated studies using ab-initio molecular dynamics simulations <cit.>. Therefore, in this study, we use the same 33 H2O cluster to simulate the H2O ice <cit.>, and we constructed a 33 CO cluster to simulate the CO ice. To construct such a cluster, we used Packmol <cit.> within an 8 Å radius sphere, ensuring that every molecule is at a minimum initial distance of 3 Å from every other. This initial cluster is later refined at the level of theory described below.
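To make the cluster-construction step concrete, the snippet below writes a Packmol input reproducing these constraints (33 CO molecules, 3 Å tolerance, 8 Å sphere); file names such as co_monomer.xyz are placeholders, and the exact input used in this work may differ.

```python
# Sketch of a Packmol input for the 33 CO cluster described above.
packmol_input = """\
tolerance 3.0
filetype xyz
output co33_cluster.xyz
structure co_monomer.xyz
  number 33
  inside sphere 0. 0. 0. 8.
end structure
"""
with open("co33.inp", "w") as f:
    f.write(packmol_input)
# Run with:  packmol < co33.inp
```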
The geometries of the initial clusters were optimized at the MN15-D3BJ/6-31+G(d,p) level of theory <cit.>, with parameters for the D3BJ dispersion correction taken from <cit.>. The DFT calculations and optimizations utilize the Gaussian16 (rev.C.01) suite of programs <cit.>. We later place the CO and OH admolecules on the clusters sequentially, first occupying a binding site for the CO molecule and later for OH. Once the two admolecules are located on the clusters, we followed the gas-phase reaction mechanism presented in <cit.> for both clusters, except for an alternative exit path on CO ice (<Ref>). Additional differences between the gas-phase and surface-like profiles are highlighted in <Ref>. After locating every stationary point, we confirmed them as either true minima or first-order saddle points, i.e., transition states (TS), in the PES by computing the molecular Hessian of the system. The electronic energies of the stationary points on the PES were further refined using the domain-based local pair-natural orbital coupled cluster singles and doubles with a perturbative treatment of triple excitations, DLPNO-CCSD(T) <cit.>, with a two-point complete basis set (CBS) extrapolation to the basis-set limit based on the cc-pVDZ and cc-pVTZ basis sets <cit.>. The internal options for the PNO localization scheme were set to normal, and resolution of the identity (RI) techniques were used to evaluate exchange and Coulomb integrals (RIJK) using a cc-pVTZ/JK auxiliary basis set. We apply the frozen-core approximation in the correlated calculations. The ORCA (v.5.0.4) code was used for the DLPNO-CCSD(T)/CBS calculations <cit.>.
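As an illustration of the two-point CBS step, the function below implements one common inverse-power extrapolation of the correlation energy for a cc-pVDZ/cc-pVTZ pair; the exponent and the restriction to the correlation part are generic choices and not necessarily the exact scheme applied by ORCA in this work, and the numerical inputs are placeholders.

```python
def cbs_two_point(e_corr_dz, e_corr_tz, x=2.0, y=3.0, beta=3.0):
    """Two-point extrapolation of the correlation energy assuming
    E_corr(X) = E_corr(CBS) + A * X**(-beta) for cardinal numbers X = 2, 3."""
    return (e_corr_tz * y**beta - e_corr_dz * x**beta) / (y**beta - x**beta)

# Example with hypothetical correlation energies (hartree) at cc-pVDZ and cc-pVTZ
e_cbs = cbs_two_point(-0.512, -0.587)
print(f"Extrapolated correlation energy: {e_cbs:.4f} Eh")
```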
In addition to cluster calculations, we also carried out gas-phase calculations at the same level of theory for comparative purposes, which are indicated throughout the paper in square brackets. Finally, we assessed the quality of our theoretical method of choice, comparing our gas phase results with the ones of <cit.>, finding excellent agreement for all the relevant parts of the PES. These results are presented in the <Ref>. It is worth noting here that our theoretical method does not predict the correct energetics for the high energy intermediate HCO2. This intermediate is not relevant to the kinetics of the system because its formation requires surmounting an emerged barrier of ∼8-9 kcal mol^-1 from the bimolecular OH + CO asymptote (38-39 kcal mol^-1 from the HOCO potential well) <cit.>. Moreover, we could not find this intermediate in the simulations on the H2O cluster. We, therefore, skip the search for this intermediate in all cluster calculations. Nonetheless, we discuss the origin of this disagreement in <Ref>.
§.§ Kinetic Analysis
We employed the microcanonical flavour of transition state theory, Rice–Ramsperger–Kassel–Marcus (RRKM) theory, to compute the energy-dependent rate constants k(E) for the transitions between reaction wells, given by:
k(E) = N^‡(E - E_0) / (h ρ(E)),
where h is Planck's constant, N^‡(E - E_0) is the sum of states of the transition state evaluated from the energy of the transition state, E_0, up to the energy E, and ρ(E) is the density of states of the reactant at energy E. In addition, the sum of states contains tunnelling corrections, for which the non-symmetric Eckart potential model was employed <cit.>.
We did not include rotational symmetry factors in our calculations due to the symmetry breaking induced by the amorphous surface. The rigid-rotor harmonic oscillator model is used throughout the kinetic calculations. The application of RRKM to interstellar reactions is discussed in <cit.> and used or implied in several other works <cit.>
As will be explained later (<Ref>), the title reaction occurs strictly non-thermally at 10 K. Hence we base our analysis on k(E) for the entrance CO + OH -> t-HOCO/c-HOCO and exit channels: c-HOCO -> CO2 + H (and alternatively c-HOCO/t-HOCO + CO -> CO2 + HCO, <Ref>). We provide k(E) considering several energy dissipation scenarios. Each of them has a different number of molecules, n, over which instantaneous energy dissipation is allowed. We studied n=16, 10, 5, and 0 (CO/H2O) molecules. In the latter (n=0), energy redistribution occurs only within the CO + OH system. We carried out this study by projecting out the molecular Hessian matrix elements for the m molecules (where m = 33 - n) farthest from the t-HOCO minimum, the global minimum of our study. The microcanonical rate constants obtained in this study are calculated with the MESS code <cit.>. We note that the sizes of the clusters (see Figure <ref>) and the highest number of dissipating water molecules are sufficient according to previous studies, e.g., <cit.>. Although no specific studies have addressed this issue for CO ice, we make the reasonable assumption that the same holds true. It is worth highlighting again that we considered different numbers of dissipating CO molecules.
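To illustrate the microcanonical formalism, the following self-contained sketch evaluates k(E) from harmonic vibrational frequencies using a Beyer–Swinehart direct count of states; it omits rotations and tunnelling, is not the MESS implementation used in this work, and all numerical inputs are placeholders.

```python
import numpy as np

H_KCAL_S = 9.54e-14          # Planck constant in kcal mol^-1 s (approx.)
KCAL_PER_CM = 1.0 / 349.75   # 1 cm^-1 in kcal mol^-1

def beyer_swinehart(freqs_cm, e_max=200.0, de=0.01):
    """Direct count of harmonic vibrational states on a grid of bin width de (kcal/mol).
    Returns the sum of states N(E) and the density of states rho(E)."""
    n_bins = int(e_max / de) + 1
    counts = np.zeros(n_bins)
    counts[0] = 1.0                            # ground state
    for nu in freqs_cm:
        step = max(1, int(round(nu * KCAL_PER_CM / de)))
        for i in range(step, n_bins):
            counts[i] += counts[i - step]
    return np.cumsum(counts), counts / de      # N(E), rho(E)

def rrkm_k(E, E0, ts_freqs_cm, reactant_freqs_cm, e_max=200.0, de=0.01):
    """k(E) = N^(E - E0) / (h * rho(E)); E, E0 in kcal/mol, frequencies in cm^-1."""
    if E < E0:
        return 0.0                             # below the barrier (no tunnelling here)
    N_ts, _ = beyer_swinehart(ts_freqs_cm, e_max, de)
    _, rho_r = beyer_swinehart(reactant_freqs_cm, e_max, de)
    return N_ts[int((E - E0) / de)] / (H_KCAL_S * rho_r[int(E / de)])

# Placeholder frequencies (cm^-1) for a reactant well and a transition state
k_example = rrkm_k(E=30.0, E0=25.0,
                   ts_freqs_cm=[300, 500, 900, 1400, 1900],
                   reactant_freqs_cm=[250, 450, 800, 1200, 1800, 3500])
print(f"k(E) ~ {k_example:.2e} s^-1")
```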
§ RESULTS
§.§ Cluster model
The fully optimized H2O and CO clusters mimicking ice surfaces are presented in <Ref>. While the CO ice model has a more spherical and compact shape with dimensions 10×12×13 Å, the water one is slightly more elongated, 15×9×10.5 Å. The latter hosts a cavity, where the CO + OH -> CO2 + H reaction is simulated. On the contrary, the more compact CO cluster does not have any clear deeper binding site. Hence the reaction site was randomly chosen.
The binding energies of the reactants and reaction intermediates on the surfaces are presented in <Ref>. These were calculated as the energy difference between the complexes containing the surface and the admolecule and the sum of the isolated fragments, including ZPVE. In the H2O cluster cavity, we find a binding energy for CO of 4.64 kcal mol^-1, higher than the values reported by <cit.> (≤3.71 kcal mol^-1). This indicates that our cavity is a particularly deep binding site with a maximized number of neighbouring water molecules. For the OH radical, on the contrary, the cavity binding site yields a lower-than-average binding energy (6.45 kcal mol^-1) compared with other reported values, e.g., 10.33 kcal mol^-1 <cit.> and 10.6 kcal mol^-1 <cit.>. The observed differences arise from the specific structure of our cavity, where the number of dangling H-bonds is saturated, and from the binding mode of OH, whose acceptor/donor H-bonds are about 0.1 Å shorter than in the cavity case reported by <cit.>. On the CO cluster, the CO/CO binding energy corresponds to the lower bound of the values presented in <cit.>, while the values of OH/CO are unreported. We note that the dual-level error introduced by our calculations is relevant for determining binding energies for CO/CO due to the mismatch of geometries arising from the weak CO-CO interaction in the ice <cit.>. In the subsequent reactivity studies, the relative magnitude of this error is diminished because energy differences between reaction steps are much higher than the CO-CO interaction energy.
For the reactivity studies, we keep the CO binding site determined above, while the OH radical is placed on a different binding site. We justify this choice based on two arguments. First, when both adsorbates are thermalized, the higher interstellar abundance of CO makes it more likely to be located in deep binding sites, such as the cavity formed in the H2O cluster. Second, in <Ref>, we investigate the effect of a translationally excited OH radical colliding with a pre-adsorbed CO.
§.§ Potential energy surface construction
All the energy diagrams have been referenced from the asymptotes, i.e., from the sum of energies of the surface, reacting CO and the reacting OH radical. We will refer to this as the bimolecular system, and for the sake of simplicity it will be denoted as CO + OH, regardless of the ice surface. This was done for the sake of clarity, as it is much clearer what the influence of the substrate in stabilizing the reactants is, as well as its catalytic effect on the barriers.
§.§.§ H2O ice
We include two pre-reactant complexes following the literature <cit.>. First, a pre-reactant complex with large dihedral ∠HOCO angles, PRC, which leads to the formation of the t-HOCO intermediate. Second, a near 0° dihedral angle pre-reactant complex (PRC'), that forms the c-HOCO intermediate (which was not found on CO ice, as discussed in <Ref>). The transition states that connect the PRCs with the reaction wells are named TS1 and TS1', respectively, and the transition state connecting these two wells is TS2. Finally, the transition state leading to CO2 + H from c-HOCO is named TS4. The reason for not naming it TS3 is that the TS3 label (specifically TS3') is reserved for the exit transition state from t-HOCO, a stationary point we do not find on water ice.
The stationary points on the reaction profile are gathered in <Ref>. The reaction profile has, for the most part, the same profile as in the gas phase, with two notable exceptions. The first concerns the absence of the HCO2 intermediate, as we already discussed in <Ref>. The second is the inversion in energy between PRC and PRC'. This inversion appears following the formation of a HO–H2O hydrogen bond that locks the PRC' geometry in the binding site contiguous to the CO binding site. The snapshots of the stationary points are collated in <Ref>, where this effect can be visualized. The higher stabilization of PRC' also results in higher activation energy to c-HOCO through TS1'.
The binding energies of t-HOCO and c-HOCO on the cavity are 15.51 kcal mol^-1 (7805 K) and 12.30 kcal mol^-1 (6190 K), respectively. These binding energies are significantly higher than the ones for CO and OH presented in <Ref>, and are closer to the average values reported for the related molecule, HC(O)OH, formic acid (e.g., ∼ 12.30 kcal mol^-1 <cit.>, 10.7–21.0 kcal mol^-1 <cit.>). The t-HOCO and c-HOCO wells are significantly stabilized on the surface, evinced by the 13–16 kcal mol^-1 difference in energy with the same intermediates in the gas phase. As a consequence, the activation energy of TS4 is higher on water.
When breaking the O–H bond in c-HOCO, the energy corresponding to the OH moiety must be overcome, i.e. a significant fraction of the binding energy. The binding energy of the CO2 + H system on H2O was found to be 7.30 kcal mol^-1 (3673 K).
Finally, from <Ref>, it is evident that the reaction, if viable, must proceed through quantum tunnelling. The c-HOCO -> CO2 + H barrier is 32.1 kcal mol^-1, which is extremely high for ISM conditions. However, contrary to what happens in the gas phase, TS4 is submerged with respect to the reactant asymptote, thanks to the stabilization promoted by the H2O surface. The product of the reaction, CO2 + H, is higher in energy than both radicals, and the reaction is significantly less exothermic because of the break of hydrogen bonds. Nonetheless, once CO2 + H is formed, H is susceptible of diffusing or evaporating, thus concluding the reaction.
§.§.§ CO ice
The reaction profile on CO ice is shown in Figure <ref> and the stationary points in Figure <ref>. With respect to the gas-phase process, as previously discussed, the profile lacks the HCO2 intermediate. When comparing with the results for the water cluster presented above, the main difference is the lack of PRC', so that the reaction must go through the t-HOCO intermediate to reach CO2. While PRC' exists on the CO ice, we found it to be a first-order saddle point. Unlike in water, where PRC' is stabilized thanks to the interaction of the OH radical with a dangling bond of H2O, on CO, this interaction is unavailable, and the weak OH-CO interaction promotes the rotation to PRC. There is still the possibility that the lack of PRC' is an effect of the random selection of the binding site, however a full binding site sampling is beyond our computational resources.
To reach the t-HOCO intermediate, however, the TS1 must be crossed at the same energy level as the asymptote. Hence, significant energy dissipation would suppress the whole reaction unless enough energy input is provided via non-thermal mechanisms.
Additionally, the much reduced inter-molecular interaction of the admolecules with the surface due to the lack of electrostatic and H-bonding interactions of CO ices affects the energetics of the stationary points. The most prominent examples are the lower stabilisation of intermediates and the barrier in TS4, which sits above the energy of the asymptote. In general, the energetics on CO ice is closer to the gas phase case, with small differences, e.g., the isomerisation barrier for the t-HOCO -> cis-HOCO reaction on CO is about 1 kcal mol^-1 lower (and about 2 kcal mol^-1 lower for the reverse reaction).
The fact that there are more CO molecules surrounding the reaction site opens a new possibility not available on water ice or the gas phase. It involves the reactivity of the t-HOCO and cis-HOCO intermediates with a neighbouring CO, leading to CO2 + HCO, see Figure <ref>. Interestingly, these reactions possess lower activation energy barriers than TS4, see Figure <ref>, and in the case of the cis-HOCO + CO -> CO2 + HCO reaction, the barrier sits below the asymptote.
§.§ Microcanonical rate constants
We estimated the microcanonical rate constants for the PES entrance and exit channels described in the previous sections. The entrance channels start with the pre-reactant complexes and finish with t/c-HOCO, and the exit channels start with t/c-HOCO and finish with CO2 + H, and additionally CO2 + HCO for CO. These channels present the relevant rate constants for the kinetics of the reaction because the t-HOCO -> c-HOCO is much faster, even when energy redistribution is at play. Notice that due to the barriers (TS1 and TS1'), if the stationary points of the PES were populated according to a thermal distribution, the formation of the HOCO intermediates would be slow, and the formation of products would likely not happen at all. To simulate non-thermal reactions, an initial amount of energy is given to the system; see below.
Experiments of <cit.> show the formation of HOCO with an apparently small or null barrier. We note that for the exit channel (c/t)-HOCO -> CO2 + H/HCO, the starting potential well is very deep, and thermalization is more likely <cit.>. Nevertheless, as we will show, under a microcanonical formalism, the formation of CO2 + H is found to be slow. Finally, different degrees of energy dissipation are allowed by changing the number of ice molecules considered in the microcanonical calculations, n.
Our PESs indicate that the adsorption energy (formation of PRC/PRC') is not completely dissipated but is employed in forming HOCO. The energy reference is again the energy of the asymptotes. One could argue that this is not the best choice, since the initial energy lies above that of PRC/PRC', meaning that the initial state is higher in energy than a fully thermalized reactant set. However, it must be noted that (i) if the reference state is an upper bound of the real one, and the reaction is not plausible even in this case, then starting from a more stable reference will not change the qualitative picture, and (ii) incomplete energy dissipation following certain exothermic processes, e.g. diffusion into deeper binding sites and possible Eley-Rideal mechanisms [That may be of relevance for CO molecules given their abundance in ISM ices.], would actually involve initial energies higher than those of PRC/PRC'. The latter effect is irrelevant when the activation energy of a reaction is much higher than the exothermicity of the mentioned processes, but for CO + OH -> HOCO the activation energy falls below the adsorption energy and is of small magnitude. The correct energy reference would lie somewhere in between that of the asymptote and that of PRC/PRC'.
The microcanonical rate constants for the entrance step are shown in <Ref> and <Ref> for H2O and CO ice. In this plot, we show the reaction rate constants as a function of the energy, where k(E=0) corresponds to the separated, no adsorption asymptote (CO + OH in <Ref> and <Ref>). Energies above zero indicate extra energy from non-thermal excitation mechanisms.
In this work, to compare with experimental observations, we will consider the presence of extra energy from either (i) a prior O + H -> OH reaction (ΔU = 102.1 kcal mol^-1) or (ii) half the energy deposited by a single Ly-α photon, assuming equal energy partition into the products of the H2O -> OH + H, (ΔU = 118.7 kcal mol^-1). Notice that the amount of extra energy used to promote the title reaction through the non-thermal mechanisms is unknown. Hence, we represent fractions of that energy, 0.10, 0.25, 0.50, as vertical dashed lines in <Ref> and <Ref> to serve as a guide to evaluate how the rate constants would increase under these assumed scenarios. As we introduced in <Ref>, we evaluated the behaviour of the reaction assuming dissipation into a set of n molecules. The four different cases for n=0, 5, 10, 16 are illustrated in <Ref> and <Ref>.
The rate constants for the entrance step on H2O ice are, for all n dissipating molecules, fast for the PRC -> t-HOCO step, indicating that external energy input is unnecessary for this reaction, as determined experimentally by <cit.> and computationally by <cit.>. However, for the alternative PRC' → c-HOCO reaction, we observe k(E=0)≤10^8 s^-1 for the models with 10, 16 H2O dissipating molecules. This means that if the timescale for thermalization is shorter than tens of nanoseconds, the adsorption energy alone is insufficient to overcome the entrance barrier. This constraint is lifted by considering extra energy. The reason for the difference between rate constants for the reactions starting from PRC and PRC' stems from the significantly higher activation energy in the latter case.
For the CO model, we observe systematically lower values of k(0) than in water, owing to the lower stabilization of the PRC complex on CO than on H2O leading to higher energy barriers than in the best case for H2O. This, in turn, yields k(E=0)≤10^8 s^-1 for all of our models. Because k(E) is a very steep function around E=0, the reaction is viable with a small input of energy that can come from reactions, e.g. O2 + H <cit.>. This finding reinforces the scenario presented in <cit.> for the three body formations of CO2 on CO ice, as we will discuss in <Ref>. An important comment for each of these rate constants is that we implicitly assumed an infinitely fast energy partition into n molecules, which may not be a good representation of this reaction on CO. At this research stage, we warn that extracting strong conclusions for a limit case like the one found for PRC -> t-HOCO on CO ice is difficult and more sophisticated approaches are necessary. We are currently working on a molecular dynamics study of this reaction to illuminate this issue.
Similarly to the entrance rate constants, the exit c-HOCO -> CO2 + H rate constants on H2O ice and c/t-HOCO -> CO2 + H/HCO rate constants on CO ice are plotted in <Ref> and <Ref> for the different dissipation scenarios. It is important to recall that while the entrance channels are unaffected by quantum tunnelling, all the exit channels involve the migration of an H atom, turning quantum tunnelling into an important driver for the reaction, as already evinced by nuclear quantum dynamics calculations <cit.>. Still, even with the influence of quantum tunnelling, the reactions are, in all cases, significantly slower than in the entrance step. The energy dissipation scheme is of major importance for these reactions. There is a clear gap in exit rate constant values between the (ideal) n=0 dissipation model and the 5, 10 and 16 molecule dissipation models, which in all cases yield rate constants k(E=0) ≤ 1 s^-1. We remind the reader that these values must be confronted with the thermalization timescale, i.e. if thermalization is faster, the reaction will not proceed. A rate constant of k(E=0) ≤ 1 s^-1 means reaction times of seconds, and we find it hard to believe that thermalization would not happen on those timescales, precluding all the c/t-HOCO -> CO2 + H/HCO reactions in all the conditions and substrates considered in this work. We conclude then that, without the input of any external energy other than the adsorption energy of the reactants, the reaction can proceed neither microcanonically nor from thermalized HOCO.
When including a degree of external energy from the mechanisms explained above (chemical and H2O photodissociation), the exit reaction is faster, as expected. However, only the n=0 dissipation model yields rate constants that are sufficiently high ≥ 10^8 s^-1 to compete with thermalization. The upper bound of the timescale for (almost) complete thermalization of HOCO is estimated to be similar to that of CO2 formed from the CO + (^1D)O -> CO2 reaction, that is, a few nanoseconds <cit.>. While the energy dissipation in RRKM is instantaneous, and an incomplete energy dissipation may increase the values of the rate constants, our assumption for the external energy input is also rather ideal. Thus, we conclude that even in the presence of external energy input, we find it hard to justify the formation of CO2 and H/HCO from the title reaction. This suggests that the formation of CO2 relies on the subsequent reaction described as follows:
t/c-HOCO + H -> CO2 + H2.
Reaction <ref> involves two radicals, and even though an activation barrier may be present on ice <cit.> quantum tunnelling should play a major role, as it is the case found for H abstraction reactions <cit.>. Thus, reaction <ref> must be viable. The inclusion of reaction <ref> in the CO2 reaction network was already in place for the non-energetic formation of CO2, for example, in <cit.>. Still, this article shows that it also applies to the energetic formation of CO2. We put our results in a laboratory/simulation and astrophysical context in <Ref>.
Finally, although it does not affect the outcome of the reactions studied in this work (e.g. the t/c-HOCO ( + CO) -> CO2 + H/HCO reactions remain non-viable under ISM conditions), it is interesting from a purely chemical perspective to comment on the effect observed for the two competing reactions c-HOCO -> CO2 + H and t/c-HOCO + CO -> CO2 + HCO. The competition between these two processes is energy dependent. At low values of E, e.g. k(E=0), t/c-HOCO + CO -> CO2 + HCO is favoured, whereas c-HOCO -> CO2 + H is the preferred exit channel at higher energies, between 10–120 kcal mol^-1, depending on the number of dissipating molecules. The dependence on the energy and number of dissipating molecules clearly reveals that the dominance of the c-HOCO -> CO2 + H route at high energies is an entropic effect. For both routes, the count of states at the TS energy (the numerator of <Ref>) depends on the height of the barrier and the number of low-frequency vibrational modes. Because HCO, in contrast with H, has two molecular vibrations, H-C and C=O, at 2800 and 1900 cm^-1, the count of states will be smaller at high energies. Low-frequency vibrations overwhelm the purely kinetic effect arising from the lower barrier.
§ DISCUSSION
§.§ The CO + OH -> CO2 + H reaction in the laboratory
The experiments carried out on the CO + OH -> CO2 + H reaction were reviewed in <Ref>. For most of them, the biggest experimental conundrum is the generation of the OH radical, which is very unstable under laboratory conditions and needs to be generated in situ. The experimental methods for forming the OH radical differ between these experiments in most cases. However, all the possible formation pathways involve the co-deposition or co-generation of H atoms, e.g. formation via O2 + H, fragmentation of H2O in a microwave discharge, or H2O photodissociation. In general, it is impossible to experimentally discern whether the CO + OH reaction proceeds directly to CO2 + H or, instead, stops at t-HOCO, which is then converted to CO2 via reaction <ref>.
A rigorous study of the reaction using molecular dynamics <cit.> showed the probability of direct formation of CO2 on H2O ice is lower than 1%. It is important to remark that in <cit.>, the OH was generated with excess energy coming from photodissociation of H2O. Our results support the latter scenario and discard the direct reaction. Compared with our results, the small fraction observed for the direct formation of CO2 + H in <cit.> may come from the slower and more realistic non-ergodic energy dissipation present in the molecular dynamics study.
On CO ice, the reaction proceeds similarly to in H2O, both in our calculations and in the experiments of <cit.>, where HOCO is explicitly included as the intermediate for the reaction. <cit.> discuss the competition with formic acid (HC(O)OH) through the reaction:
HOCO + H -> HC(O)OH
with Reaction <ref>.
Our results complement these experiments as well, showing that in addition to what was already known, the formation of the HOCO complex has to surmount an activation energy of 2.2 kcal mol^-1 with a mere adsorption energy of 2.5 kcal mol ^-1, in contrast with H2O ice, where the higher stabilization of the PRC complex increases the energetic budget for the formation of HOCO. The consequence of this effect in the overall reaction scheme is that the formation of HOCO cannot be taken for granted on CO ice under a non-energetic regime. In <cit.>, such energy input is given by a preceding chemical reaction. The more impeded formation of the HOCO radical on CO is the main difference with H2O ice and is illustrated by the rate constants in <Ref> (Top panel) and <Ref>. This different reactivity on different substrates may explain the recent JWST observations of a higher degree of mixing of CO2 with H2O than with CO <cit.>. However, and as we indicated in section <Ref>, further studies are being undertaken to understand the precise behaviour of the CO + OH -> t-HOCO association step on CO ices.
On the other hand, <cit.> used matrix isolation, electron paramagnetic resonance and FT-IR techniques, which made it possible to observe several radicals, among which HOCO, and CO2. HC(O)OH is also detected, although its formation seems to be due to HCO + OH rather than reaction <ref>. In this experiment, methanol molecules embedded in an Argon matrix are photolysed at 14 K. The resulting photo-products can relax as the matrix acts as a third body. Later the sample is warmed up to 35 K, and the Ar matrix is removed, allowing light species to diffuse. The peak of CO_2 production occurs in this last stage. According to our results and interpretation, if CO2 is formed via reaction <ref>, either there is some extra energy input, not all the energy from the photolysis step was completely dissipated, or H-abstraction reactions are in place. In the latter case, this can be triggered by other radicals rather than reaction <ref>, something we did not consider in this work, and that would require either the diffusion at warmer temperatures or the presence of a nearby radical species. In addition, an efficient H-abstraction radical-radical channel should be present, which will certainly depend on their relative orientation <cit.>. Notice that in this experiment, no ice surface is present, but rather the bare copper plate on top of which the matrix and reactant mixture is prepared. Finally, we would like to encourage more experiments on CO_2 formation starting from thermalized reactants, especially on CO surfaces.
§.§ The CO + OH -> CO2 + H reaction in the ISM
The comparison between the experiments and our calculations presented in the last section motivates us to contextualize our results in the expected conditions of the ISM. We concluded that the sole CO + OH reaction is insufficient for the formation of CO2 on ices and that Reaction <ref> is the most promising candidate for the follow-up reaction. Considering this, is it justified to consider a small activation energy for the OH + CO -> CO2 + H reaction in astrochemical models of molecular clouds and prestellar cores? In light of our simulations, we consider that there are at least four different cases.
* High coverage of H2O ice and high abundance of H atoms.
* High coverage of H2O ice and low abundance of H atoms.
* High coverage of CO ice and high abundance of H atoms.
* High coverage of CO ice and low abundance of H atoms.
On H2O ice (Cases 1 and 2 above), the formation of the HOCO complex is facile and does not require any energy input, with a fast reaction occurring thanks to the adsorption energy (or a fraction of it) on water ice. Moreover, the dominance of H2O in the early stages of a molecular cloud's life, during the translucent cloud phase <cit.>, ensures mild temperature conditions (15–50 K) that allow for diffusion of CO molecules, and relatively low extinction (A_v∼ 1-2 mag). Under these conditions, Case 1 is the most likely one, with H atoms produced from photodissociation of H2O and other hydrogenated molecules both in the gas and on the grain. Other mechanisms, such as cosmic ray ionization, also contribute to these fragmentation processes. Under these conditions, we determine that considering a null or low activation barrier for Reaction <ref> in astrochemical models is justified, because H atoms will ensure prompt conversion of HOCO to CO2 through reaction <ref>. However, we warn that the HC(O)OH abundance could be underestimated following this approach. At higher extinctions, but without enough CO surface coverage (Case 2, molecular cloud stage), the abundance of H atoms on grain surfaces will be reduced, and the HOCO complex will survive longer on the grain. Under these conditions, we recommend differentiating between Reactions <ref> and <ref>.
The next two cases (Cases 3 and 4) can be treated conjointly. Our simulations show that forming the HOCO radical from CO + OH is not straightforward on CO ice and requires initial energy input. While the energy required to initiate the reaction is not very high, the very low temperatures where Cases 3 and 4 would dominate (dense prestellar cores with T=10 K) discard the thermal energy as the initiator of the reaction. This energy input can come from a neighbouring chemical reaction because H2O photodissociation should be a small factor in CO ices. Therefore we consider that the approach presented in <cit.> of modelling the CO2 formation as the three-body reaction, e.g. H + O + CO is a good compromise to model the reaction on CO ice. Whether the three-body reaction can be coarse-grained to yield CO2 + H directly or HOCO (and later proceed through reaction <ref>) is likely to depend on the H atom abundance. For example, an important factor should be the local cosmic ray ionization rate (ζ) determining the dissociation of H2 into 2H, thus the ratio of HOCO radicals to H atoms. We must emphasize that coarse-graining the formation of CO2 through the title reaction to study CO2 formation and evolution may be acceptable only when H atom abundance overwhelms HOCO abundance. However, in doing so, the abundance of other HOCO-derived molecules like HC(O)OH will be underestimated. Precaution is advised when the target of the models involves these molecules.
Finally, we would like to discuss other possible scenarios. One possibility is that the excited formation of OH leads to non-thermal diffusion out of the reaction site or to its desorption (notice that the latter would be more plausible on CO ices due to the lower binding energy); in these cases the reaction would not take place. Another possible scenario concerns the energy dissipation after HOCO is formed. Because of the high exothermicity of the CO + OH -> HOCO reaction and the low binding energies of these radicals on CO ice, there is the possibility that HOCO chemically desorbs, or triggers the desorption of a nearby ice CO molecule. In addition, if these reactions were to take place in the inner layers of the ice, one must take into account that energy dissipation would be even more efficient due to the larger number of intermolecular interactions and the higher number of surrounding molecules, rendering each reaction step less and less efficient.
§ CONCLUSIONS
Using accurate quantum chemical calculations and microcanonical kinetic modelling, we found that the CO + OH -> CO2 + H reaction, which has been considered as the most important producer of interstellar CO2, is rather inefficient, and its occurrence cannot be taken for granted. The reaction proceeds through a rather stable intermediate, HOCO, and more specifically through its two structural isomers t-HOCO and c-HOCO. On H2O ice, the formation of HOCO is feasible, but its evolution to CO2 requires a further reaction step that most likely involves H abstraction through reaction <ref>. On CO ice, we found, for the first time, that the formation of HOCO is not as efficient as currently assumed, owing to the lower adsorption energy of OH and CO molecules on CO ice. We indicate that non-thermal effects are necessary to form HOCO, and thus CO2, on CO ice. This limitation may be behind the recent ice observations showing higher fraction of CO2 found in water-dominated environments <cit.> when comparing with apolar (CO-dominated) ices.
Because our calculations assume an ideal energy redistribution in an infinitely short time after the reactions, our results represent a lower bound for the production of HOCO and CO2 from the CO + OH reaction. We aim to improve the description of energy dissipation in forthcoming works to resolve ambiguous cases. We encourage further experimental work on the topic, especially on CO ices following <cit.>. Nonetheless, with our results, we were able to provide atomistic insight into the formation of CO2, one of the most important interstellar ice constituents, and indicate the cases where coarse-graining of the CO + OH reaction in astrochemical models is, to a first approximation, acceptable and not.
G.M. thanks the Japan Society for the Promotion of Science (JSPS International Fellow P22013, and Grant-in-aid 22F22013) for its support. The authors acknowledge support by the Research Center for Computational Science in Okazaki, Japan (Projects: 22-IMS-C301, 23-IMS-C128), the state of Baden-Württemberg through the bwHPC consortium and the German Research Foundation (DFG) through grant no INST 40/575-1 FUGG (JUSTUS 2 cluster) (Project: 22-IMS-C301). Y.A. acknowledges support by Grant-in-Aid for Transformative Research Areas (A) grant Nos. 20H05847.
§ GAS-PHASE COMPARISON WITH <CIT.>
We compare our energetics of the CO + OH -> CO2 + H gas-phase reaction profile at the DLPNO-CCSD(T)/CBS//MN15-D3BJ/6-31+G(d,p) level with the high-quality CCSD(T)/AVTZ results presented in <cit.> in <Ref>. Note that the energies presented here are not ZPVE corrected, unlike in the main manuscript. We observe excellent agreement between methods, with deviations of 0.0–1.3 kcal mol^-1 (i.e. within chemical accuracy) for all structures except HCO2. As introduced in the methods section, this intermediate and the associated entrance and exit transition states, TS5 and TS6, are irrelevant to the reaction kinetics or dynamics <cit.>. Hence, an incorrect prediction of the energetics of this intermediate does not affect our results, and we do not include it in our kinetic simulations. Yet, it is interesting to mention the reason for the discrepancy.
In <cit.>, the authors show that the HCO2 intermediate belongs to the C_2v symmetry point group at the CCSD(T)/AVTZ level of theory. However, the geometries at the MN15-D3BJ/6-31+G(d,p) level converge to a C_s intermediate. The T_1 diagnostic at the DLPNO-CCSD(T)/cc-pVTZ level of theory for the HCO2 intermediate hints at a strong multireference character (T_1=0.068), so it is not clear whether the CCSD(T) or the MN15-D3BJ calculations are better at predicting the correct HCO2 geometry. However, it is clear that a dual-level approach like DLPNO-CCSD(T)/CBS//MN15-D3BJ/6-31+G(d,p) will fail due to the mismatch of geometries. Despite the discrepancy found for HCO2, the excellent agreement for all the relevant parts of the PES indicates that the studies on the H2O and CO clusters will yield the correct energetics for the system.
|
http://arxiv.org/abs/2307.06162v1 | 20230712134209 | Deep Generative Models for Physiological Signals: A Systematic Literature Review | [
"Nour Neifar",
"Afef Mdhaffar",
"Achraf Ben-Hamadou",
"Mohamed Jmaiel"
] | cs.LG | [
"cs.LG",
"cs.AI",
"eess.SP"
] |
Nour Neifar (corresponding author), ReDCAD Lab, ENIS, University of Sfax, Tunisia
[email protected]
Afef Mdhaffar, ReDCAD Lab, ENIS, University of Sfax, Tunisia
[email protected]
Achraf Ben-Hamadou, Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Technopôle de Sfax, Sfax, Tunisia
[email protected]
Mohamed Jmaiel, ReDCAD Lab, ENIS, University of Sfax, Tunisia
[email protected]
In this paper, we present a systematic literature review on deep generative models for physiological signals, particularly the electrocardiogram, electroencephalogram, photoplethysmogram and electromyogram. In contrast to existing review papers, this is the first review to summarize the recent state-of-the-art deep generative models applied to these signals. By analysing the state-of-the-art research on deep generative models, along with their main applications and challenges, this review contributes to the overall understanding of these models applied to physiological signals. Additionally, by highlighting the employed evaluation protocols and the most commonly used physiological databases, this review facilitates the assessment and benchmarking of deep generative models.
Deep generative models, ECG, EEG, PPG, EMG
§ INTRODUCTION
Physiological signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) represent an essential tool in health monitoring, as they provide significant information about the body's internal state <cit.>. Recently, deep learning methods have attracted significant interest for analyzing physiological signals for diagnosis and treatment purposes <cit.>. In particular, deep generative models have gained attention and have been effectively used in the medical field for various tasks <cit.>, such as analysing and generating physiological signals.
In recent years, the use of deep learning models, and in particular deep generative models, in various fields, including the analysis of time series data, has been the subject of multiple literature studies. For instance, Brophy <cit.> provided a comprehensive overview of the application of Generative Adversarial Networks (GANs), the most common deep generative model in the field, specifically for the analysis of time series data. The main objective of their paper is to summarize the current discrete and continuous variants of GANs, as well as their challenges in the context of time series.
Zhang <cit.> conducted a comprehensive review of GANs applied to time series such as speech, music and biological signals. They summarized the latest advancements in the generation of these signals using GANs, as well as the existing assessment protocols employed to evaluate GAN performance. Musa <cit.> presented a systematic review on the use of deep learning algorithms in ECG analysis. They also performed a meta-data analysis that summarizes the areas, advantages and challenges of applying deep learning models to ECGs.
Given the importance of physiological signals in human health monitoring, it is valuable to explore the challenges and opportunities they present especially in relation to the application of deep generative models. To the best of our knowledge, this paper presents the first systematic literature review on the application of deep generative models with a focus on the essential and commonly used physiological signals, in particular the electrocardiogram (ECG), electroencephalogram (EEG), photoplethysmogram (PPG), and electromyogram (EMG).
ECG signals represent the electrical activity of the heart, captured by electrodes placed on the chest and limbs. ECGs are often employed for the diagnosis of heart disorders and the monitoring of heart activity. EEG signals, on the other hand, represent the electrical activity of the brain, recorded by placing electrodes on the scalp. EEGs provide important insight into brain activity, including neurological disorders. PPG signals measure the hemodynamic activity of the heart. These signals, which are often measured at the body periphery such as the fingertip, offer important information about the cardiovascular system. EMG signals, in turn, are recorded using electrodes positioned on the muscles to measure their electrical activity. These signals represent a valuable tool for diagnosing neuromuscular disorders as well as assessing muscle function and activity during different tasks or movements. <ref> shows examples of ECG, EEG, PPG and EMG signals taken from real databases.
Our objective is to present a comprehensive overview of the state-of-the-art deep generative models currently used in the analysis of the discussed signals. By conducting a thorough analysis and synthesis of the existing literature guided by well-defined research questions, we aim to identify the various deep generative architectures employed in analysing physiological signals. We explore how these models have been applied to address the problems encountered with physiological signals. Furthermore, we identify and discuss the challenges faced by deep generative models in analysing physiological signals. Additionally, we review the existing evaluation protocols and metrics used in the literature to assess the performance of deep generative models on the most widely used physiological databases in this field. This synthesis can help researchers to select appropriate models, address challenges, and explore future directions for advancing the field.
The rest of this paper is structured as follows. In Section <ref>, we outline the adopted methodology for conducting our systematic literature review (SLR). We describe the search strategy as well as the inclusion and exclusion criteria, and data extraction process. Section <ref> presents the results of our SLR and provides an analysis of the identified studies. In section <ref>, we discuss the findings and some direction for future research. In section <ref>, we present a summary of our paper.
§ METHODOLOGY
In our systematic literature review, we followed a well-defined methodology that included the following elements:
* Formulation of specific research questions to address the aims of our study,
* Development of a comprehensive search strategy to identify relevant research,
* Definition of inclusion and exclusion criteria to select the studies that could be considered in our review,
* Collection of data.
§.§ Research questions
In our systematic literature review, we consider the following research questions (RQs).
* RQ1: What are the most commonly used classes of deep generative models for ECG, PPG, EEG, and EMG signals?
* RQ2: How are these classes of deep generative models applied in practice?
* RQ3: What are the main challenges associated with using deep generative models for ECG, PPG, EEG, and EMG signals?
* RQ4: What is the commonly used evaluation protocol for assessing the performance of deep generative models?
* RQ5: Which physiological datasets have been utilized to evaluate the effectiveness of deep generative models?
§.§ Query
To capture relevant literature for our systematic review, a search was conducted between 2018 and 2023 in various search engines, including Google scholar, IEEE Xplore, ACM Digital Library, Scopus, ScienceDirect, HAL and Springer. We first defined a set of keywords based on the research questions. Next, these keywords were combined using boolean operators such as AND and OR to formulate the following search query:
("electrocardiogram" AND "deep generative models") OR ("electroencephalogram" AND "deep generative models") OR ("photoplethysmogram" AND "deep generative models") OR ("electromyogram" AND "deep generative models")
§.§ Inclusion and exclusion criteria
We established different criteria to ensure that the selected papers align with the research questions and objectives of our systematic review.
* Papers that correspond to a search term are considered,
* Only the signals modalities of ECG, EEG, PPG and EMG are considered,
* Papers published between 2018 and 2023 are considered,
* Papers should be written in English,
* Only journal and conference papers are considered,
* Review papers are not included.
§.§ Data collection
The search methodology for our systematic review is depicted in <ref>. It consists of three major steps, as described below.
* Research findings: In this step, various search engines were used to retrieve relevant articles. The research findings resulted in 458 papers selected for further evaluation.
* Elimination: The second step involves applying elimination criteria to refine the selection of papers. We start by removing duplicates. The next two steps are the exclusion based on title and abstract screening, followed by full-text screening.
* Final selection: This step presents the outcome of the selection process. 49 articles that matched the inclusion criteria were included in the systematic review. In addition, 17 papers were included based on expert suggestions, bringing the final total to 66 articles.
By following this search methodology, we successfully identified and selected a subset of articles that were most relevant to our research questions and matched the required inclusion criteria.
§ RESULTS AND FINDINGS
§.§ RQ1: classes of deep generative models for ECG, PPG, EEG, and EMG signals
Based on the selected studies, we identified three classes of deep generative models:
* Generative adversarial networks (GANs)
* Variational autoencoders (VAEs)
* Diffusion models (DMs)
<ref> presents the detailed number of these deep generative models applied for the different considered physiological signals.
Some studies have focused on applying GANs to multiple types of signals within the same research paper <cit.>, leading to a total number of papers exceeding 66. <ref> summarizes the list of selected papers per signal and the employed deep generative models. We can observe that GANs have been widely explored and applied in the domain of physiological signals compared to VAEs and diffusion models, demonstrating their effectiveness. On the other hand, diffusion models, as a relatively recent class of deep generative models, are currently attracting interest and investigation specifically in the context of physiological signals.
§.§.§ Generative adversarial networks
Generative Adversarial Networks (GANs), proposed by <cit.>, are the most widely used class of deep generative models and have gained significant attention in recent years. 49 of the selected studies focused on applying GANs to the different physiological signals. GANs consist of two neural networks that compete against each other to generate new samples that closely match a particular distribution. <ref> depicts the working principle of a GAN. The first network is the generator. Its goal is to synthesize samples by learning the underlying distribution of the training data: it takes random noise as input and produces synthetic samples similar to real data. The other network is the discriminator, whose role is to distinguish between real data and the synthetic data produced by the generator, and thereby to provide feedback to the generator to improve the generated samples. The training of these two networks is formulated as:
min_G max_D 𝔼_x ∼ p_data(x) [ log D(x) ] + 𝔼_z ∼ p_z(z) [ log ( 1 - D(G(z)) ) ]
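To make this objective concrete, the following minimal sketch illustrates how such an adversarial training step could look for fixed-length physiological signal segments (PyTorch; the network sizes, segment length and optimiser settings are illustrative assumptions and are not taken from any of the reviewed papers):

import torch
import torch.nn as nn

SEG_LEN, LATENT = 256, 64                      # illustrative segment length and noise size

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, SEG_LEN), nn.Tanh())       # generator: noise -> fake segment
D = nn.Sequential(nn.Linear(SEG_LEN, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                         # discriminator: segment -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):
    """One adversarial update; real: (batch, SEG_LEN) normalised signal windows."""
    batch = real.size(0)
    z = torch.randn(batch, LATENT)
    fake = G(z)

    # discriminator update: real samples labelled 1, generated samples labelled 0
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator update: try to make the discriminator output 1 on generated samples
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

In practice, the reviewed works replace these toy multilayer perceptrons with convolutional or recurrent architectures adapted to the temporal structure of the signals.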
GANs have been enhanced over time in order to address particular challenges or improve their performance on specific tasks.
* Conditional GANs (cGANs): cGANs are extensions of the original GANs that incorporate additional information, such as class labels, into the generation process to allow more control over the generated samples. Several of the selected papers <cit.> have proposed cGAN frameworks for generating ECG and EEG signals.
* Wasserstein GANs (WGANs): WGANs were proposed as a solution to the training instability and mode collapse challenges of GANs by introducing a different objective function based on the Wasserstein distance. For instance, the approaches proposed in <cit.> are based on WGANs with gradient penalty to improve the training process.
* CycleGANs: they are primarily used for unsupervised translation tasks. They are based on learning mappings between two different domains without paired training data. In addition to the adversarial loss, the cycle consistency loss is introduced to create realistic translations and ensure that the translated data could be accurately converted back to the original domain. For example, cycleGAN was used for ECG data translations, imputation and denoising in <cit.>.
* Other variants were employed such as Auxiliary Classifier GAN (ACGAN) in <cit.>, Deep Convolutional GAN (DCGAN) in <cit.>, Least Square GAN (LSGAN) in <cit.>.
§.§.§ Variational autoencoders:
VAEs, proposed by Kingma <cit.>, are a class of deep generative models widely used in various domains. The main concept behind VAEs is to transform input data into a low-dimensional latent space representation. <ref> presents the principle of the VAE. The VAE is composed of two neural networks. The first network is called the encoder. This network maps the input data to a latent space, often assumed to follow a Gaussian distribution with a learnt mean and variance. The other network is the decoder. This network takes a sample from the latent space distribution and reconstructs the original input data. The decoder's goal is to produce a reconstructed sample closely resembling the input data. During the training step, the parameters of the encoder and decoder are optimized in order to minimize the reconstruction error. Additionally, a regularization term called the Kullback-Leibler (KL) divergence is introduced to ensure that the learned latent space distribution is similar to a standard Gaussian distribution. The training of the basic VAE is formulated as:
Loss = ‖ x - x̂ ‖^2 + KL[ N(μ_x, σ_x), N(0, 1) ]
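A minimal sketch of this encoder-decoder structure and its loss, for fixed-length signal segments, could look as follows (PyTorch; the architecture and latent dimension are illustrative assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

SEG_LEN, LATENT = 256, 16

class SignalVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(SEG_LEN, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, LATENT), nn.Linear(128, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, SEG_LEN))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    rec = F.mse_loss(x_hat, x, reduction='sum')                    # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL(N(mu, sigma) || N(0, 1))
    return rec + kl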
Several variants of the VAE have been proposed to enhance its performance and address specific challenges:
* Conditional VAEs (cVAEs): cVAEs are an extension of VAEs in which conditional information is incorporated during both the encoding and decoding processes, allowing the generation of samples conditioned on specific input conditions. For example, in <cit.> conditional VAEs were proposed for 12-lead ECG generation and for learning EEG representations.
* Variational Graph AutoEncoders (VGAEs): VGAEs are designed for unsupervised learning on graph-structured data. In <cit.>, a VGAE is proposed to extract nodal features of EEG functional connectivity.
* Other variants were used in the selected studies such as Convolutional VAEs (CNN-VAEs) in <cit.> and Variational Recurrent Autoencoders (VRAEs) in <cit.>
§.§.§ Diffusion models:
Diffusion models are a rising class of deep generative models with a different method for modeling data distributions. In contrast to GANs and VAEs, diffusion models are based on applying a sequence of transformations to the input distribution. <ref> presents the principle of a diffusion model. The basic concept behind diffusion models is to perturb the input data by sequentially adding noise; a reverse process is then applied to transform the noise distribution back into the desired data distribution.
Current selected studies on diffusion models are mostly based on one type of diffusion models:
* Denoising diffusion probabilistic models (DDPMs) <cit.>: DDPMs are a specific class of diffusion models based on two Markov chains: a forward and a reverse diffusion process. During the forward process, Gaussian noise ϵ is incrementally added to the input data x_0, drawn from the real data distribution, over a number of steps T, until converging to a standard Gaussian distribution. In the reverse process, a learned model is trained to remove the noise and recover the original data by learning the inverse mapping. The training process (<ref>) involves optimizing the model parameters to minimize the reconstruction error between the denoised output data and the original data.
min_θ 𝔼_x_0 ∼ P, ϵ ∼ N(0, 1), t ∼ U(0,T) ‖ ϵ - ϵ_θ( √(ᾱ_t) x_0 + √(1 - ᾱ_t) ϵ, t ) ‖_2^2
where ϵ_θ is the denoising function that estimates the noise ϵ introduced to x_t.
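A minimal sketch of one training step of this objective could look as follows (PyTorch; the noise schedule, number of steps and the interface of the hypothetical noise-prediction network eps_model are illustrative assumptions):

import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # illustrative linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative product, i.e. the bar-alpha_t above

def ddpm_step(eps_model, x0, opt):
    """One training step of the denoising objective; x0: (batch, seg_len) clean signals."""
    batch = x0.size(0)
    t = torch.randint(0, T, (batch,))               # one uniformly sampled timestep per sample
    a = alpha_bar[t].unsqueeze(1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps      # forward-noised sample
    loss = F.mse_loss(eps_model(x_t, t), eps)       # predict the injected noise
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()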
This variant of diffusion models was used in different studies, such as <cit.>
* Other variants of diffusion models were proposed such as Score-Based Generative Models (SGMs) <cit.>. SGMs focus on learning the score function of the data distribution, which represents the gradient of the log-density function. This variant has not been employed in any selected studies.
§.§ RQ2: application of deep generative models
Deep generative models have been employed in various applications that have considerably contributed to advancements in the medical field.
The main considered applications of GANs, VAEs, and DMs in the selected papers are:
* Data augmentation
* Denoising
* Forecasting
* Imputation
* Modality transfer
* Anomaly detection
<ref> represents the distribution of literature per application. <ref> summarizes the list of papers focusing on the above various tasks classified by signal type and deep generative model approach.
§.§.§ Data augmentation
Deep generative models are commonly used to augment medical datasets for various purposes, in particular when only small and imbalanced datasets are available. Medical datasets frequently suffer from limited training data, which can significantly impact the effectiveness of deep learning models; such datasets can be augmented using deep generative models. Generating synthetic samples results in a larger and more varied training set, enabling deep learning models to accurately learn the representation of the principal patterns seen in the medical data. Furthermore, collecting positive data related to some medical emergencies (e.g., epileptic seizures) can be challenging, mainly due to the unexpected nature of these events: they can happen suddenly and without prior warning, which makes it difficult to collect a sufficient number of positive instances and leads to imbalanced datasets. By generating synthetic examples of the underrepresented conditions, these datasets can be balanced to enhance the performance of deep models. <ref> (data augmentation rows) summarizes the considered studies that are mainly concerned with physiological signal generation.
Many studies (30.30%) focused on applying GANs to ECG generation. For instance, several approaches were proposed to balance the different arrhythmia classes by generating samples from the minority classes <cit.>. In addition, VAEs were also employed for ECG generation. For example, Sang <cit.> used a conditional VAE to generate 12-lead ECGs.
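As an illustration of how such augmentation is typically wired into a pipeline, the following sketch oversamples minority classes with a hypothetical trained class-conditional generator G(z, label); the generator interface and latent dimension are assumptions and are not taken from the cited works:

import torch

def balance_with_synthetic(G, signals, labels, latent_dim=64):
    """Append synthetic minority-class segments until every class matches the majority count."""
    labels = torch.as_tensor(labels)
    counts = torch.bincount(labels)
    target = counts.max().item()
    extra_x, extra_y = [], []
    for cls, n in enumerate(counts.tolist()):
        if n == 0 or n == target:
            continue
        z = torch.randn(target - n, latent_dim)
        y = torch.full((target - n,), cls, dtype=torch.long)
        with torch.no_grad():
            extra_x.append(G(z, y))                 # synthetic segments for the minority class
        extra_y.append(y)
    x_aug = torch.cat([signals] + extra_x)
    y_aug = torch.cat([labels] + extra_y)
    return x_aug, y_aug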
[Table: List of papers focusing on the various tasks (data augmentation, denoising, imputation, forecasting, modality transfer, anomaly detection), classified by signal type (ECG, EEG, PPG, EMG) and by deep generative model approach (GANs, VAEs, DMs); each cell lists the corresponding cited works <cit.>.]
§.§.§ Denoising
Physiological signals can be distorted by numerous types of noise and artifacts. Several noise sources may affect signal quality, such as baseline wander, muscle artifacts and environmental noise. Deep generative models are widely employed for signal denoising. They have shown promising results in removing undesired noise and improving the quality of physiological signals, resulting in more accurate analysis and diagnosis. The considered studies that deal with signal denoising are grouped in <ref> (denoising rows).
For example, Afandizadeh <cit.> proposed a CycleGAN framework for PPG denoising, in particular from motion artifacts. Furthermore, Li <cit.> proposed a conditional score-based diffusion framework for removing baseline wander and noise in ECG signals.
§.§.§ Imputation
Missing data represents a significant challenge in the analysis of physiological signals. It can be caused by various factors such as sensor malfunction or data transmission errors, and can limit the effectiveness of the analysis and interpretation of the signals. However, deep generative models have emerged as an effective solution for handling the missing-values problem in physiological signals. <ref> (imputation rows) summarizes the corresponding research on physiological signal imputation. Alcaraz <cit.> proposed a novel solution for ECG imputation using conditional diffusion models and structured state space models. Furthermore, Mahalanabis employed a CycleGAN framework in her thesis for ECG imputation <cit.>. In this approach, the author used Long Short-Term Memory (LSTM) networks for the generator and discriminator, and the Wasserstein loss was used to train the CycleGAN model.
§.§.§ Forecasting
Signal forecasting remains a significant tool in health monitoring, as it allows future changes in a patient's state to be predicted, enabling appropriate decisions and timely interventions. Deep generative models are commonly used to make accurate predictions and detect variations in future signal values. They have demonstrated their ability to capture the different patterns inherent in physiological signals and to learn their temporal dependencies. <ref> (forecasting rows) provides an overview of the primary studies on physiological signal forecasting.
For example, Neifar <cit.> presented a novel framework based on denoising diffusion probabilistic models for synthesizing ECGs. In this approach, three scenarios are covered, including full heartbeat forecasting. Two additional conditions related to the prior shape of the ECG are employed to guide the reverse process in cases of imputation or forecasting, ensuring realistic and accurate synthetic ECG signals.
§.§.§ Modality transfer
Modality transfer is an effective technique with several applications in the medical field. It can be used to improve signal analysis, to combine information obtained from different modalities for a more accurate diagnosis of physiological states, or to overcome data limitations for a particular modality. Employing deep generative models for this task contributes significantly to a better understanding of physiological systems and enhances disease diagnosis. The primary studies that have focused on modality transfer are presented in <ref> (modality transfer rows). For example, Sarkar <cit.> proposed a GAN framework called CardioGAN, based on the CycleGAN architecture, to generate ECG from PPG signals.
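The core of such CycleGAN-style translators is the combination of adversarial and cycle-consistency terms; a minimal sketch of these generator losses for a hypothetical PPG-to-ECG translator could look as follows (the generators G_pe, G_ep and discriminators D_ecg, D_ppg are assumed to be defined elsewhere, and the weighting lam is an illustrative choice):

import torch
import torch.nn.functional as F

def cycle_losses(G_pe, G_ep, D_ecg, D_ppg, ppg, ecg, lam=10.0):
    """Adversarial plus cycle-consistency generator losses for a PPG <-> ECG translator."""
    fake_ecg, fake_ppg = G_pe(ppg), G_ep(ecg)
    # least-squares adversarial terms: generators try to make the critics output 1
    adv = F.mse_loss(D_ecg(fake_ecg), torch.ones_like(D_ecg(fake_ecg))) \
        + F.mse_loss(D_ppg(fake_ppg), torch.ones_like(D_ppg(fake_ppg)))
    # cycle consistency: translating back should recover the original signal
    cyc = F.l1_loss(G_ep(fake_ecg), ppg) + F.l1_loss(G_pe(fake_ppg), ecg)
    return adv + lam * cyc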
§.§.§ Anomaly Detection
Detecting anomalies in physiological signals is crucial as it can help identify potential health issues and monitor patient conditions. Deep generative models can be widely employed to identify abnormal patterns in physiological signals, helping in the detection of various health conditions. These models can effectively identify deviations from the expected patterns by learning the underlying normal distribution of the data, enabling early disease identification and diagnosis. The primary studies that have focused on anomaly detection are presented in <ref> (anomaly detection rows). For instance, Rasmussen <cit.> proposed an approach that combines an unsupervised VAE with a supervised classifier to differentiate between atrial fibrillation and non-atrial fibrillation.
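A common way to turn such a generative model into an anomaly detector is to score each segment by its reconstruction error, as in the following sketch (it assumes a VAE with the same interface as the sketch given earlier; the thresholding strategy is an illustrative assumption):

import torch

def anomaly_score(vae, x):
    """Per-segment reconstruction-error score from a VAE trained on normal signals only.
    Segments whose score exceeds a threshold chosen on validation data are flagged as anomalous."""
    with torch.no_grad():
        x_hat, mu, logvar = vae(x)
        return ((x - x_hat) ** 2).mean(dim=1)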
§.§.§ Other applications
GANs have also been successfully applied to translate between different classes of the same physiological signal. This can be useful for various problems, such as the limited volume of signals and the lack of diversity in profiles or conditions. For example, a GAN model called RhythmGAN for translating between different classes of ECG profiles for the same individual was introduced in <cit.>.
VAEs have also been applied to various other tasks. Gyawali <cit.> proposed a VAE that is utilized to disentangle and identify unobserved confounding factors in ECG signals. A VAE model presented by Gyawali <cit.> is employed to disentangle the variations present in individual ECG data. Another application of VAEs, discussed by Zhu <cit.>, is the learning of a meaningful representation of ECG signals, which can then be used for various tasks, including clustering similar ECG patterns. Further uses of VAEs in the context of EEG data have also been proposed: they have been employed to extract nodal features that capture the functional connectivity of the brain based on EEG data <cit.>, for dimensionality reduction <cit.>, and for learning latent factors or representations that capture meaningful features in EEG data <cit.>.
Also, a conditional-VAE-based framework called EEG2Vec was proposed by Bethge <cit.> to learn generative-discriminative representations from EEG that can be employed for affective state estimation, emotion generation, as well as synthesis of subject-specific multi-channel EEG signals.
§.§ RQ3: main challenges associated with using deep generative models for ECG, EEG, PPG, and EMG signals
Several major challenges are faced when applying deep generative models to physiological signals. The most commonly faced problem with GANs and VAEs is training instability, whereas diffusion models provide a more stable training process. The reason behind training instability with GANs is the adversarial nature of their training, where the generator and discriminator networks compete in a min-max game. The generator attempts to synthesize realistic samples to fool the discriminator, whereas the discriminator tries to accurately identify real and generated samples. This sensitive balance can result in instability problems such as mode collapse or vanishing gradients. Mode collapse occurs when the generator is unable to capture the full diversity of the data distribution, leading to limited variations in the generated samples. Vanishing gradients, which occur when the discriminator becomes too strong during training, can limit learning and make it difficult to train the generator network successfully. For example, the approaches proposed in <cit.> were based on using WGANs for training stability. For ECG denoising, an LSGAN framework was proposed by Singh <cit.>; to stabilize the GAN training process, the original cross-entropy loss function was replaced by the least-squares loss. Another technique was proposed by Ye <cit.> to address instability during training through the use of policy gradients from reinforcement learning with SeqGAN.
Furthermore, VAEs can also suffer from training instability. VAEs try to optimize a compromise between the two losses in their objective function, the reconstruction and the regularization terms, with the aim of learning a meaningful latent representation of the data. However, finding the optimal balance between these terms can be difficult. Overfitting can occur as a result of inadequate regularization, in which case the model memorizes the training data but fails to generalize well to new samples. On the other hand, excessive regularization may result in blurry reconstructions or inadequate diversity in the generated samples.
On the other hand, diffusion models provide more stability during training. Diffusion models progressively transform a simple initial distribution into the target distribution by iteratively denoising the data through a step-by-step process. Contrary to GANs and VAEs, no adversarial training or complex regularization is needed. This simplicity results in more stable training and higher data quality. However, it is essential to highlight that diffusion models have their own specific challenges. Achieving a balance between data quality and training stability requires making appropriate design choices, such as selecting suitable diffusion steps and noise schedules. Moreover, due to the iterative nature of their training, diffusion models can be more computationally demanding to train than GANs and VAEs, requiring additional time and resources.
Another challenge when applying deep generative models to ECG, EEG, PPG, and EMG signals is their complex nature and dynamics. These challenges result from the complex variations and dynamics present within physiological signals, as well as their high dimensionality and inter-/intra-individual variability. Furthermore, the presence of multiple leads further increases the modeling complexity of these signals, since each lead records a distinct aspect of the physiological activity. To address these challenges, the development of advanced deep generative models specifically designed to handle the complex dynamics of physiological signals is required. Recent selected GAN-based studies <cit.> have focused on integrating customized prior knowledge of ECG dynamics and patterns into the generation process. Leveraging customized prior knowledge involves incorporating domain-specific information, such as the specific patterns of ECG signals (P, QRS, T waves), into the generative process. By using this knowledge, the generation is more strongly guided while maintaining the dynamics and patterns observed in real ECG data. For instance, Golany <cit.> proposed to incorporate physical considerations related to ECG signals as supplementary input into the generation process. In addition, Neifar <cit.> introduced a novel prior knowledge modeling of ECG shape and dynamics by integrating statistical shape modeling. Indeed, by leveraging a statistical shape model, the GAN is able to encode prior knowledge about the shape variations observed in ECG signals. This prior knowledge provides useful guidance to the generation process, enabling the GAN to generate ECG signals with realistic shape characteristics that correspond to the expected variations.
§.§ RQ4: commonly used evaluation protocols for assessing the performance of deep generative models
We have identified two evaluation protocols in the selected studies: a qualitative and a quantitative evaluation. The qualitative evaluation consists of visual inspection and assessment of the coherence, fidelity, and consistency of the deep generative models' outputs. More than 60% of the selected studies evaluated the quality of the deep generative models' outputs visually. During this evaluation, real and synthetic signals are visually compared, looking for similarities, differences, and overall coherence. For example, in signal augmentation tasks, the studies <cit.> compared synthetic signals with real signals. Similarly, in denoising tasks, <cit.> compared denoised signals with real signals. Furthermore, experts in the medical field such as cardiologists may contribute to the qualitative evaluation by providing their domain-specific knowledge and expertise for assessing the coherence and fidelity of the generated signals <cit.>. In addition to visual comparisons, other techniques such as t-SNE (t-Distributed Stochastic Neighbor Embedding), PCA (Principal Component Analysis), and UMAP (Uniform Manifold Approximation and Projection) have been employed to compare the distributions of real and synthetic signals in lower-dimensional spaces. For example, the research proposed in <cit.> employed t-SNE and UMAP to visualize the distribution of real and synthetic samples in a lower-dimensional space. Additionally, Kalashami <cit.> used PCA to analyze the extracted features of real and fake EEG signals.
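For illustration, such a low-dimensional comparison of real and synthetic segments could be computed as follows (scikit-learn; the PCA preprocessing step and the perplexity value are illustrative assumptions):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def embed_2d(real, synthetic, perplexity=30):
    """Project real and synthetic segments (n_samples, seg_len) to 2-D for visual comparison."""
    X = np.concatenate([real, synthetic])
    X = PCA(n_components=min(50, X.shape[1])).fit_transform(X)   # common preprocessing step
    emb = TSNE(n_components=2, perplexity=perplexity).fit_transform(X)
    return emb[:len(real)], emb[len(real):]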
On the other hand, the quantitative evaluation involves the use of distance or statistical evaluation metrics. These metrics provide quantitative indications of the similarity or dissimilarity between the deep generative models' outputs and real data. <ref> summarizes the metrics used in the primary studies for the different applications. <ref> depicts the most used evaluation metrics for the different tasks for ECG, EEG, PPG and EMG. The RMSE is used to quantify the deviation between signals, while the MSE measures their average squared difference. The PCC is employed to assess the correlation between signals. The MAE provides the average of the absolute differences, whereas the MMD quantifies the dissimilarity between the distributions of signals. The PRD is used to measure the distortion between signals. The FD measures the similarity between signals by considering the location and order of the data points, and the DTW metric likewise measures the similarity between two signals. Finally, the FID measures the similarity of data distributions.
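For reference, minimal NumPy implementations of some of these metrics could look as follows (following their standard definitions; normalisation conventions, e.g. for the PRD, may differ between the reviewed papers):

import numpy as np

def rmse(x, y):
    """Root-mean-square error between two signals of equal length."""
    return np.sqrt(np.mean((x - y) ** 2))

def prd(x, y):
    """Percentage root-mean-square difference, with the reference signal x in the denominator."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def mmd_rbf(X, Y, sigma=1.0):
    """Maximum mean discrepancy with a Gaussian kernel between two batches (n, seg_len)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()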
<ref> summarizes the formulas of the most commonly used metrics.
In the context of data augmentation, these metrics are computed between the synthetic and the real samples, while in modality transfer they can be calculated between the original and translated signals, or between the same feature extracted from the signals involved in the transfer task. For example, Sarkar <cit.> computed the RMSE, PRD and FD between reference ECG signals and the ECG signals reconstructed from the PPG input. The MAE was also used to compare the heart rate extracted from the reconstructed ECG with that extracted from the input PPG. Other metrics in the frequency domain, such as the Hellinger distance, were also explored in <cit.> for the reconstructed signals. Furthermore, the effectiveness of deep generative models used in the different discussed applications can also be assessed on downstream tasks. For example, classification tasks were conducted using both real and generated data in <cit.>, and metrics such as precision, accuracy, F1 score, recall or Cohen's kappa coefficient can be employed to assess the model's performance. In the context of signal denoising, classification models can be used to test the performance of deep generative models employed for signal denoising <cit.>. Similarly, for anomaly detection tasks, the performance of deep generative models can be assessed based on their ability to detect unusual patterns in the generated samples, and metrics such as precision, accuracy, recall, or F1 score can be computed to assess their performance.
§.§ RQ5: most utilized physiological datasets for deep generative models' evaluation
Several databases were used to evaluate the effectiveness of deep generative models in the various discussed applications. <ref> shows the most used open-access datasets that we identified in the primary studies for ECG. The MIT-BIH Arrhythmia Database <cit.> is one of the most often used databases for ECG signals <cit.>. The ECG recordings in this database include signals from both normal heart rhythms and several classes of arrhythmia, thus serving as a useful resource for training arrhythmia detection and classification methods. The MIT-BIH Noise Stress Test Database <cit.>, used in 4 papers <cit.>, is a subset of the MIT-BIH Arrhythmia Database. It was mainly created to assess the robustness of arrhythmia detection methods under the various types of noise and artifacts encountered in clinical settings. The ECG5000 database appeared in only 3 studies <cit.>. This dataset, provided by Eamonn Keogh and Yanping Chen, consists of a collection of univariate time series representing ECG heartbeats from normal and pathological conditions, providing a diverse range of physiological patterns for analysis.
The BIDMC database <cit.> has also been used for ECG and PPG research <cit.>. It is composed of a variety of physiological signals in addition to ECGs and PPGs, such as arterial blood pressure (ABP) waveforms, obtained from a diverse range of participants with different ages, genders, and clinical conditions. The PTB-XL database has been used in 2 of the primary studies <cit.>. It is a large database of 12-lead ECG recordings with a variety of cardiac abnormalities <cit.>. Other ECG databases were also used, such as the MIT-BIH Atrial Fibrillation database <cit.>, the American Heart Association database <cit.>, the European Society of Cardiology ST-T database <cit.>, the Creighton University Sustained Ventricular Arrhythmia database <cit.>, and the EPHNOGRAM database <cit.>.
For the EEG signals, the SEED and DEAP databases were the most used in the selected studies. They are commonly used in the task of emotion recognition. Finally, the EMG recordings from the Sleep-EDF database were used once for EMG synthesis in <cit.>. <ref> provides more details about these databases.
§ DISCUSSION AND FUTURE DIRECTIONS
Generative adversarial networks, variational autoencoders and diffusion models are currently promising methods for the analysis and processing of physiological signals. They have been successfully applied to various tasks including data augmentation, denoising and imputation. While these deep generative models have significant advantages, there remain challenges, mainly training instability and the complex dynamics of physiological signals, that require further consideration. Future research should concentrate on several important areas, including model enhancement, where further research and development is needed to improve the performance of deep generative models for physiological signals. In addition, the incorporation of prior knowledge about physiological signals is crucial: although there have been a few attempts to include prior knowledge in deep generative models, this direction deserves further exploration. Furthermore, future studies should focus on the integration of information from multiple leads or modalities. As physiological signals are frequently recorded using multiple leads or modalities simultaneously, employing this multi-modal information can offer a broader understanding of physiological processes and enable more effective analysis. Finally, the absence of standardized evaluation protocols, in particular metrics, for deep generative models makes it extremely difficult to assess their performance objectively. Developing a common evaluation protocol is therefore a crucial next step.
§ CONCLUSION
In our systematic literature review, we have examined a total of 66 primary studies to explore the use of various deep generative models with ECG, EEG, PPG, and EMG signals. The aim of our review was to address specific research questions and to provide an overview of current deep generative models and their main applications in this domain. We have also examined the fundamentals of GANs, VAEs, and diffusion models, and discussed the challenges associated with employing these models with different physiological signals. Furthermore, we have discussed the evaluation protocols employed in these studies and the most commonly used databases. Finally, we concluded by outlining potential directions for future research. As future work, we aim to extend the scope of our study to cover additional physiological signals. In addition, we intend to provide a more comprehensive synthesis that includes a thorough analysis of the types and architectures of several variants of deep generative models.
|
http://arxiv.org/abs/2307.07472v1 | 20230714165409 | Spectral gap for projective processes of linear SPDEs | [
"Martin Hairer",
"Tommaso Rosati"
] | math.PR | [
"math.PR",
"math.AP",
"60H15"
] |
EPFL, Switzerland, [email protected]
Imperial College London, UK, [email protected]
University of Warwick, UK, [email protected]
Spectral gap for projective processes of linear SPDEs
Martin Hairer^1,2 (ORCID 0000-0002-2141-6561) and Tommaso Rosati^3 (ORCID 0000-0001-5255-6519)
August 12, 2023
==================================================================================
This work studies the angular component π_t = u_t / ‖u_t‖ associated to the solution u
of a vector-valued linear hyperviscous SPDE on a d–dimensional torus
du^α = - ν^α (- Δ)^a u^α dt + (u · dW)^α , α ∈ { 1, …, m } ,
for u : 𝕋^d → ℝ^m, a ⩾ 1 and a sufficiently
smooth and non-degenerate noise W. We provide conditions for existence, as well as uniqueness
and spectral gaps (if a > d/2) of
invariant measures for π in the projective space.
Our proof relies on the introduction of a novel Lyapunov functional for π_t, based on the study of dynamics of the “energy median”: the energy
level M at which projections of u onto frequencies with energies less or more than M have about
equal L^2 norm. This technique is applied
to obtain in an infinite-dimensional setting without order preservation
lower bounds on top Lyapunov exponents of the equation, and their
uniqueness via Furstenberg–Khasminskii formulas.
Keywords: Linear SPDEs, Lyapunov exponents,
projective processes, Furstenberg–Khasminskii.
MSC classification: 60H15
§ INTRODUCTION
We consider, for fixed d, m ∈, a parameter ⩾ 1 and ν^α >0 for all α∈{ 1, …, m }, the solution u = (u^α)_α =1^m [0,
∞) ×^d→^m (^d being the d-dimensional
torus) to the vector-valued linear stochastic PDE
u^α = - ν^α (- Δ)^ u^α t + (u
·W)^α , u^α(0, x)= u_0^α (x) , ∀α∈{ 1, …, m } ,
for (t, x) ∈ [0, ∞) ×^d and u_0∈ L^2_⋆ =
L^2∖{ 0 }. The noise W is chosen white in time, translation
invariant and
sufficiently smooth in space for classical solution
theories to apply.
In this setting, the multiplicative ergodic theorem guarantees that for
every u_0∈ L^2 the Lyapunov exponent
λ(u_0) = lim_t →∞ 1/t log u_t ∈∪{ ±∞}
exists, is deterministic, and under mild assumptions on the noise satisfies λ (u_0) < ∞. The converse bound, namely λ (u_0) > - ∞,
is however unknown in general. The aim of the present work is to prove that under
relatively weak non-degeneracy conditions there exists a uniform lower
bound on λ (u_0) over all initial conditions u_0:
[eqn:lower-bd-lyap]
inf_u_0 ∈L^2_⋆ λ(u_0) > - ∞ .
In addition, under stronger assumptions we show that the Lyapunov
exponent does not depend at all on the initial condition: λ (u_0) =
λ (v_0), for all u_0, v_0∈ L^2_⋆. In this case we
prove Furstenberg–Khasminskii type formulas for the exponent.
Both results build on the study of the angular component π_t = u_t / u_t of the
solution to (<ref>), which is at the heart of the present work. While
in finite dimensions the link between properties of the Lyapunov exponent and
ergodic properties of the process π_t has been extensively used
(see for example the monograph <cit.>), in
infinite dimensions such an approach has remained mostly inaccessible. The
main difficulty is that the process
π_t no longer takes values in a compact state space, so the proof
of existence of invariant measures requires special care. To the best of
our knowledge, the only infinite-dimensional case that has been treated to some
extent is the order-preserving one <cit.>, namely when the dynamic of the linear equation
preserves both cones of positive and negative functions. The most prominent
example in this setting is (<ref>) with = 1 and m=1, which is linked via the Cole–Hopf transform to
both the KPZ and Burgers' equation. Here, under the assumption π_0⩾ 0, geometric ergodicity of π_t in fact, even a pathwise contraction
and a one force one solution principle can be established, building on
suitable generalizations of the Krein–Rutman theorem,
see for example the seminal work by Sinai <cit.> and many
later works and the references therein
<cit.>. Yet,
even in the case =1 and m =1, no proof of (<ref>), which
allows for arbitrary initial data, seems to
be available, although a simpler proof than ours appears very plausible. Indeed for
sufficiently non-degenerate noise one would expect the dynamic of u to
eventually be trapped in either the cone of positive or that of negative
functions, even if the initial condition has no definite sign, which then reduces to the known case.
Therefore, although in the regime > 1 the local solution theory of (<ref>)
is simplified by the additional smoothing, this is an interesting regime
from our perspective since no order preservation property is satisfied and
our study of π_t cannot rely on previous methods. Even more
interesting is the case = 1 but m ⩾ 2 which is relevant in the study of fluid dynamics. The difficulty we
are faced with is not just technical: it is easy to see
that our results simply do not hold in the deterministic case W = 0, since
the Laplacian does not have a bounded spectrum, contradicting
(<ref>). Indeed, a
sufficiently non-degenerate noise is required, in opposition to the mentioned
order preserving case, with positive or negative initial data, where the
noise is not necessary and if W = 0 the result reduces to the
Krein–Rutman theorem.
The linchpin on which our argument hinges is a novel Lyapunov functional
for the process π_t, which can be obtained under rather mild regularity and
non-degeneracy assumptions on the noise W.
Its definition is based on the analysis of the dynamics of energy levels of the
angular process π_t.
The main issue that has to be overcome is that in the deterministic dynamic
(W = 0) every eigenfunction of the Laplacian is a fixed point for the deterministic
angular dynamic, so that π_t can get stuck in high-frequency
states and, for large times, converges to the eigenfunction associated to
its smallest non-zero Fourier mode.
On the other hand, every eigenfunction but the one associated to the top
eigenvalue is a saddle point, the unstable directions being given by all
eigenfunctions with strictly smaller wave number. One therefore expects
that, provided the noise is sufficiently non-degenerate, the process is unlikely to get trapped by these
critical points and bounds such as (<ref>) become more plausible.
Our approach to making this heuristic rigorous is to measure the high-frequency
state in which the process π_t finds itself through the “energy median”
M(π_t), a level at which roughly half of its L^2 energy lies in
frequencies both above and below M(π_t). We use the energy median to
distinguish between large scales, where we have to deal with dynamical phenomena such
as the one just explained, and small scales, where we expect to be able to exploit
the strong dissipativity of the equation.
Eventually, the construction of the Lyapunov functional builds on small scale
(or high-frequency) hyperviscous regularity estimates, together with a drift
towards low frequencies for the energy median. The proof of
the latter result requires to distinguish two cases. On the one hand a
diluted case, in which the energy is spread out, also in frequencies
distant from the level M(π_t): here the negative drift is simply a
consequence of dissipation.
On the other hand a concentrated
case, in which most of the energy is to be found around frequency M(π_t): this is where the deterministic analysis alone
cannot be sufficient and we solve a control problem to prove that a non-degenerate
noise rapidly pushes the system out of high-frequency concentrated states.
Having constructed the Lyapunov functional, the proof of the
uniform lower bound on the Lyapunov exponent follows by a bootstrap argument
which delivers sufficiently good regularity estimates for π.
Furthermore, uniqueness of the
invariant measures follows from Harris' theorem, when viewing π as a projective
process, that is identifying π and - π. Here
the proof requires some regularity for the law of π_t near the constant
eigenfunction π≡ 1/ √(m) (the constant is chosen to have unit
L^2 norm), which imposes among others the much
stronger requirement > d/2 to make use of the Bismut–Elworthy–Li formula:
see Remark <ref> for a further discussion of this point.
To conclude this introduction, let us discuss the relevance of our results. The
study of ergodicity for projective processes is fundamental to obtain a
precise control on top Lyapunov exponents of SPDEs, from properties such as
finiteness, uniqueness, Furstenberg–Khasminskii formulas and continuous dependence on the parameters of the
equation <cit.>, to central limit theorems for the sample Lyapunov exponent
<cit.>,
to precise estimates on the stability or instability of nonlinear equations.
Especially the study of nonlinear SPDEs close
to an invariant manifold still presents many open challenges. In the order
preserving case, a recent work
<cit.>
considers the stability or instability of a nonlinear equation similar to
(<ref>) with = 1 close to the fixed point u ≡ 0. The
authors show that if the noise is either very weak or very strong, then the equation is locally unstable (respectively stable). Beyond the
order preserving case, very little is known, see for example
<cit.> for quartic (=2) equations in a
small noise regime, where the argument of the authors relies on a
transformation that reduces the problem to the study of an order preserving
system. Similarly relevant is the recent work <cit.> (related
to a classical result <cit.> for finite-dimensional systems): here
the particular structure of the Allen–Cahn equation can be used to prove the
negativity of the Lyapunov exponent, although so far there is not proof of a
lower bound to the exponent.
In the finite dimensional case, much more precise tools are available. Building on spectral
gaps for projective processes one can implicitly construct Lyapunov functionals
for nonlinear problems close to unstable invariant manifolds
<cit.>: this allows to establish and quantify the stability or
instability of an invariant manifolds in terms of the sign of the top Lyapunov
exponent, as should be expected. If extended to infinite dimensions, these tools can in
principle be used to address open problems, such as the non-uniqueness of invariant measures for the
Navier–Stokes equations under degenerate forcing, as opposed to uniqueness
when the noise satisfies minimal non-degeneracy conditions
<cit.>, see for example <cit.> for
a result with the same underlying motivation, but in the case of a 3D Lorenz system.
A series of nice results which are related in both motivation and flavour to this article was recently
obtained by Bedrossian, Blumenthal, and Punshon-Smith.
They obtained two types of results for the stochastically forced Navier–Stokes equations.
On one hand, they study in <cit.> the behaviour of passive tracers advected by the corresponding
random velocity field. They show that as long as the forcing is sufficiently non-degenerate for the
strong Feller property to hold, these exhibit “Lagrangian chaos”, namely tracers started nearby
separate exponentially fast.
On the other hand, they show in <cit.> that its finite-dimensional Galerkin truncations exhibit “Eulerian chaos”,
namely that the top Lyapunov exponent for the linearised (with respect to initial data) equation
is strictly positive at high enough Reynolds number.
Extending the latter result to
infinite dimensions presents many fundamental challenges.
Even establishing the continuity of the Lyapunov exponent
with respect to the size of the finite-dimensional approximation, or a
lower bound in the spirit of (<ref>) remain open problems.
Once more, this is not out of purely technical reasons
since in the deterministic Navier–Stokes system, results concerning enhanced dissipation
show that arbitrarily large exponential, or even super-exponential decay is
possible <cit.>.
In conclusion, this article introduces a new approach to
study Lyapunov exponents and projective processes for
SPDEs beyond the order preserving case. The long-term goal of these methods is to
tackle some of the problems described above.
§.§ Acknowledgments
This article was written in vast majority while TR was employed at
Imperial College, London.
Support from the Royal Society through MH's research
professorship, grant RP\R1\191065, is
gratefully acknowledged.
TR is very grateful to Alex Blumenthal and Sam Punshon-Smith for many inspiring
discussions.
§.§ Notations
We write = { 0, 1, 2,3 …}, ^+ = ∖{ 0
}, and ^d =
^d/^d for the d-dimensional Torus. We denote by ·
the
L^2(^d; ^n) norm φ^2 = ∫_^d |φ|^2(x) x, when n is clear from context and we write L^2_⋆
(^d; ^n) = L^2(^d; ^n)∖{ 0 }. For x ∈^n we write | x | for its
Euclidean norm. We also write for φ, ψ∈ L^2(^d;
^n):
⟨φ, ψ⟩ = ∑_α=1^n∫_^dφ^α (x)
ψ^α(x) x .
We will control high frequency regularity, for
L ∈ and γ⩾ 0, via the
(semi-)norms
φ_H^γ(^d; ^n)^2 = ∑_α=1^n ∑_k ∈^d (1 + | k |)^2γ | φ̂^α (k) |^2 ,
φ_H^γ_L^2 = ∑_α=1^n ∑_| k | > L_α (1+| k |- L_α)^2γ | φ̂^α(k) |^2 , L_α = (ν^α)^-1/(2𝐚) L ,
where (ν^α)_α are viscosity coefficients.
Further, to simplify the notation we write
φ_H^γ_L = φ_γ, L ,
φ_H^1/2_L = φ_L .
For any set and functions f, g → we write f ≲ g if there exists a constant c >0
such that f(x) ⩽ c g(x) for all x ∈.
The (lack of) dependence of c on additional parameters will hopefully either be clear
from context or explicitly specified. For a complex number z ∈, we
write
Re (z), Im(z),
for its real and imaginary parts.
We write ζ_k for the eigenvalue of (- Δ)^𝐚 associated to mode
k ∈^d, and Δ_L for the one-dimensional gaps:
ζ_k = | k |^2𝐚 , Δ_L = ζ_L+1 - ζ_L ,
∀ k ∈^d , L ∈ .
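As a purely illustrative aside (not part of the original argument), the following sketch evaluates the viscosity-adjusted thresholds L_α and the shifted seminorm φ_γ, L from Fourier coefficients on the torus; the FFT-based discretisation, the helper names, and the reading of the threshold as L_α = (ν^α)^-1/(2𝐚) L are assumptions on our part.

```python
import numpy as np

# Minimal sketch (illustration only): the shifted Sobolev seminorm
# ||phi||_{gamma, L}^2 = sum_alpha sum_{|k| > L_alpha} (1 + |k| - L_alpha)^{2 gamma} |phi_hat^alpha(k)|^2
# with the viscosity-adjusted threshold L_alpha = nu_alpha**(-1/(2*a)) * L,
# where `a` denotes the hyperviscosity exponent (an assumption on our part).

def fourier_frequencies(n, d):
    """Array of |k| over an n^d grid of integer frequencies."""
    ks = np.meshgrid(*([np.fft.fftfreq(n, 1.0 / n)] * d), indexing="ij")
    return np.linalg.norm(np.stack(ks, axis=-1), axis=-1)

def shifted_seminorm_sq(phi_hat, nu, L, gamma, a):
    """Squared seminorm of phi, given its Fourier coefficients phi_hat of shape (m, n, ..., n)."""
    m, n, d = phi_hat.shape[0], phi_hat.shape[1], phi_hat.ndim - 1
    absk = fourier_frequencies(n, d)
    total = 0.0
    for alpha in range(m):
        L_alpha = nu[alpha] ** (-1.0 / (2 * a)) * L
        mask = absk > L_alpha
        total += np.sum((1.0 + absk[mask] - L_alpha) ** (2 * gamma)
                        * np.abs(phi_hat[alpha][mask]) ** 2)
    return total
```

For instance, with a single component, d = 1 and γ = 1/2, shifted_seminorm_sq(phi_hat, (1.0,), L, 0.5, 1.0) returns the square of the quantity denoted φ_L above.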
Throughout the article we will consider the solution (u_t)_t ⩾ 0 to a stochastic PDE on a probability space (Ω, , ) and we write (_t)_t ⩾ 0 for the right-continuous filtration generated by the Wiener process
driving u. We use the following notation for conditional expectations, given any stopping time t_0
_t_0[ φ] = [φ|_t_0] , _t_0() = ( |_t_0) .
§ PRELIMINARIES AND MAIN RESULTS
We start with some general considerations, which hold for a wide class of
matrix-valued noises W. First, for any (t,x) ∈
[0, ∞) ×^d we write in coordinates W_t (x) =
(W^α_β, t(x))_α, β =1^m∈ (^m)^⊗ 2.
Hence in (<ref>) we find the matrix multiplication
[e:dotproduct]
(u ·W_t)^α = ∑_β u^β W^α_β, t
.
Now, for the sake of this introductory section, let us consider a generic
spatially homogeneous noise W. By this, we mean that
⟨ W^α_β(x), W^α^'_β^'
(y) ⟩_t = Λ^α, α^'_β, β^'
(x-y) t , ∀ x, y ∈^d ,
for a tensor-valued function Λ→ (^m)^⊗ 4.
In Fourier coordinates, such a noise can be written as
W^α_β, t = ∑_k ∈^d e_k B^α, β_k, t ,
with e_k(x) = exp ( ι k · x ) for x ∈^d and with
{
B^α, β_k, t k ∈^d , α, β∈{ 1, …, m } , t ⩾ 0 }
a collection of rescaled complex
Brownian motions with quadratic
covariation of the form
[e:noisefc]
⟨B^α, β_k ,
B^α^', β^'_k^'
⟩_t = Γ^α, α^'_β,
β^', k δ_k,- k^'t , B^α,
β_-k, t = B^α, β_k, t ,
such that
[e:def-Lambda]
Λ^α, α^'_β, β^'
(x) = ∑_k ∈ Γ^α, α^'_β, β^',
k e_k(x) .
For the time being, we will refrain from stating more precise assumptions on
the noise coefficients Γ appearing above: the assumptions will be
provided in the upcoming section. Instead we proceed with some heuristic
arguments, assuming that the noise is sufficiently smooth for (<ref>)
to be well posed. Throughout this work we will decompose the solution as u_t =
r_tπ_t, with
[e:defrpi]
r_t = u_t , π_t = u_t/
u_t .
We will refer to the first as the radial and to the second as the
angular component of the solution u_t.
The natural state
space for the angular component is the infinite-dimensional sphere
S = {
φ∈L^2(^d; ^m) φ_L^2(^d; ^m) = 1 } ,
but on
this space the process may in principle have multiple invariant measures. For example, in the case
=1, one has at least two invariant measures since the equation preserves the cones of positive and
negative functions. If ⩾ 1 and the noise is sufficiently non-degenerate,
then it follows from our results that there are at most two invariant measures,
since any invariant measure must contain either e_0 or - e_0 in its
support and the angular component is strong Feller at these points, see the
proof of Theorem <ref>. In general, we cannot expect less than two
invariant measures, since in the case =1, m=1, the angular component π_t preserves the cones of positive and
negative functions. On the other hand, if > 1, since the
cones of positive and negative functions are no longer preserved, one might expect
exactly one invariant measure for the process π_t, but this falls
beyond the scope of this work.
Motivated by the case =1, m=1, the natural space to check for
uniqueness of the invariant measure is therefore
the infinite dimensional projective space 𝐏, which can be viewed
as the Hilbert manifold obtained by
quotienting the sphere with the antipodal map. We define
Ap S →S ,
Ap(φ) = - φ , 𝐏 =
S / Ap ,
and we denote by [φ] ∈𝐏 the equivalence class of φ∈ S
in the projective space:
[ · ] S →𝐏 , [φ] = [-
φ] .
We observe that due to the linearity of (<ref>), the process ( [π_t] )_t ⩾ 0 is a Markov process on 𝐏 which we call the
projective process associated to u. Before we move on to state our
main results, let us perform some formal calculations for r_t and π_t, which establish the link between them and the Lyapunov exponent λ(u_0).
§.§ Formula for the top Lyapunov exponent
Control over the projective process opens the road to a detailed analysis of
Lyapunov exponents. In this section we cover but one aspect in which control
over the projective dynamic is helpful, by formally establishing
Furstenberg–Khasminskii type formulas for the Lyapunov exponent.
We start by observing that we can formally write an SPDE for π_t. For this
purpose it is
useful to rewrite the equation for u_t in Stratonovich form:
[eqn:main-strat]
u = u t - 1/2 u ·(Λ) + u ·∘W_t ,
where we have defined
( u)^α = - ν^α (-
Δ)^ u^α , (Λ) ∈(^m)^⊗2 , (Λ)^α_β =
∑_γ= 1^m Λ^γ, α_β, γ(0) ,
and u ·(Λ) = ∑_β u^β
(Λ)^α_β.
Regarding the radial process, setting
Q_(u,u) = ⟨
u , u - 1/2 u ·(Λ) ⟩ ,
a simple application of the chain rule yields
[e:r]
r = r Q_(π, π) t +
r ⟨π, π·∘W ⟩ .
This in turn leads to an expression for the projective dynamic:
[e:pi]
π= ( π- 1/2 π·(Λ)-
Q_(π, π) π) t + π·∘W - ⟨π, π·∘W ⟩π .
We observe that if m =1, then the equation for π does not depend on
Λ (other than through the noise), as should be expected since (Λ) is scalar in this case.
These calculations allow us to derive a heuristic
formula for the top Lyapunov exponent. If we write the equation for log( r_t ) in Itô form
log( r ) = Q_(π,
π) t +
⟨π, π·∘W ⟩
= ⟨π, π⟩+ 1/2 C(π, Λ) t +
⟨π, π·W ⟩ ,
where the Itô–Stratonovich corrector can be obtained via (<ref>) and
is given by
C(π, Λ) = ⟨π, π·^u (Λ) ⟩-2 ⟨π^⊗2, Λπ^⊗2 ⟩ .
Here we have defined
^u (Λ)^α_β =
∑_γ=1^m Λ^γ, γ_α, β(0) ,
⟨π^⊗2, Λπ^⊗2 ⟩ = ∑_α, β,
γ, η∫_(^d)^2 π^α(x)π^γ(x)
Λ^α, β_γ, η(x -y)
π^β(y)π^η (y) x y .
For the sake of clarity, let us observe that if m=1 and if the noise is
space-independent, that is Λ (x) ≡Λ(0), then the
corrector reduces to C(π, Λ) =
- Λ^2∈ (- ∞, 0). In other words, the corrector we obtain is
an infinite-dimensional and vector-valued generalisation of the corrector
between Itô and Stratonovich versions of geometric Brownian motion, the first one with
Lyapunov exponent - 1/2, the second one with Lyapunov exponent zero.
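This scalar comparison is easy to verify numerically. The toy computation below (ours, with an arbitrary step size and horizon) simulates the Itô equation dr = r dW with unit noise strength by Euler–Maruyama and recovers a time-averaged exponent close to -1/2.

```python
import numpy as np

# Toy check of the geometric Brownian motion comparison: the Ito equation
# dr = r dW has Lyapunov exponent -1/2, its Stratonovich version has exponent 0.
rng = np.random.default_rng(0)
dt, T = 1e-3, 1000.0
dW = np.sqrt(dt) * rng.normal(size=int(T / dt))
# Euler--Maruyama: r_{n+1} = r_n (1 + dW_n), hence log r_T = sum_n log(1 + dW_n)
print(np.log1p(dW).sum() / T)   # close to -0.5, up to Monte Carlo error
```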
Indeed, we see that in the limit t→∞, if we consider 1/tlogr_t, the
martingale term disappears, being roughly of order √(t). Assuming
that [π_t] is uniquely ergodic, and since the drift above is quadratic
in π and therefore depends only
on the projective class of π, this calculation suggests the identity
[eqn:fk]
λ= _∞ [ ⟨π, π⟩+ 1/2
C(π, Λ)] ∈∪{ - ∞} ,
where _∞ stands for expectation under the stationary law of [π_t]. One of the aims of this article is to rigorously
derive (<ref>), by obtaining suitable conditions for the existence of a
unique invariant measure for the projective process. Another
aim is to prove that λ > - ∞ by obtaining suitable regularity
estimates.
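As an illustration of how (<ref>) can be used in practice, the following sketch (ours; scalar case m = 1 on the one-dimensional torus, with grid size, time step and noise spectrum chosen arbitrarily) evolves a Galerkin truncation of the Itô equation, renormalises the solution at every step, and averages the logarithmic growth of the radial part.

```python
import numpy as np

# Illustration (not from the text): Monte Carlo estimate of the top Lyapunov
# exponent for a scalar Galerkin truncation of du = -nu (-Lap)^a u dt + u dW
# on the 1D torus, via Euler--Maruyama plus renormalisation of the radial part.
rng = np.random.default_rng(1)
N, dt, T = 64, 1e-3, 50.0                       # grid size, time step, horizon
nu, a, gamma0 = 1.0, 1.0, 1.5                   # viscosity, exponent, noise decay
k = np.fft.fftfreq(N, 1.0 / N)                  # integer frequencies
symbol = -nu * np.abs(k) ** (2 * a)             # Fourier symbol of -nu (-Lap)^a
spec = (1.0 + np.abs(k)) ** (-2 * gamma0)       # noise weights, decaying in |k|

u = rng.normal(size=N)
u /= np.linalg.norm(u) / np.sqrt(N)             # normalise in the discrete L2 norm
log_norm = 0.0
for _ in range(int(T / dt)):
    white = rng.normal(size=N)
    xi = np.fft.ifft(np.sqrt(spec) * np.fft.fft(white)).real  # homogeneous noise field
    drift = np.fft.ifft(symbol * np.fft.fft(u)).real
    u = u + dt * drift + np.sqrt(dt) * u * xi
    r = np.linalg.norm(u) / np.sqrt(N)
    log_norm += np.log(r)
    u /= r                                      # keep only the angular part pi_t
print("estimated Lyapunov exponent:", log_norm / T)
```

The renormalisation step is precisely what makes the angular component π_t the natural object to control.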
§.§ The Lyapunov functional and the energy median
The fundamental tool to achieve our objective is to introduce a Lyapunov
functional for the projective process. Its construction rests on a large- and
small- scale separation, which is achieved by separating frequencies according to
whether they lie above or below the energy median, which we will shortly
define. In the vector-valued setting, if the viscosity coefficients ν^α are not identical, we must adjust low or high
frequency projections so as to match the dissipation rate. Namely, for given L >0 we project on modes that dissipate at, above, or below level ζ_L = - L^2𝐚. Since at frequency k and in the entry α the dissipation rate is given by ν^α | k |^2𝐚, the
threshold for the projection for the α-th entry will be
| k | = L / (ν^α)^1/(2𝐚) =: L_α .
In other words, rather
than projecting on frequencies less than a given frequency level, we project on eigenspaces with
eigenvalues less than a fixed threshold.
For every L ∈ and φ∈ L^2(^d; ^m) we define respectively the
low- and high-frequency projections, written in components for α∈{ 1, … , m }
lowφ^α = ∑_| k | ⩽L_α ⟨φ^α,
e_k ⟩e_k , highφ^α = ∑_| k | >
(L+1)_α ⟨φ^α, e_k ⟩e_k ,
as well as the central frequency projection
central φ^α = ∑_ L_α < | k | ⩽(L+1)_α ⟨φ^α, e_k ⟩e_k .
In addition we consider
Π_L^ = Π_L^ + Π_L^ = Π_L+1^ , Π_L^ = Π_L^ + Π_L^ = Π_L-1^ .
Finally, we define the energy median M L^2(^d; ^m) → by
M(u) = inf{ M ∈ , M ⩾ 1 : Π_M^ u ⩽ Π_M^ u } .
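Read concretely, the energy median is obtained by scanning M = 1, 2, … until the high-frequency mass drops below the low-frequency one. A minimal sketch (ours, one-dimensional torus, with the thresholds read as L_α = (ν^α)^-1/(2𝐚) L and an FFT indexing of the coefficients) is the following.

```python
import numpy as np

# Sketch of the energy median M(u) computed from Fourier coefficients (1D torus).
# The viscosity-adjusted thresholds and the coefficient indexing are assumptions.
def energy_median(u_hat, nu, a, M_max=10000):
    m, n = u_hat.shape
    absk = np.abs(np.fft.fftfreq(n, 1.0 / n))
    for M in range(1, M_max):
        low = high = 0.0
        for alpha in range(m):
            scale = nu[alpha] ** (-1.0 / (2 * a))
            low += np.sum(np.abs(u_hat[alpha][absk <= scale * M]) ** 2)
            high += np.sum(np.abs(u_hat[alpha][absk > scale * (M + 1)]) ** 2)
        if high <= low:        # comparing squared norms is equivalent
            return M
    return M_max
```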
In one dimension, there is no eigenvalue in the interval (L, L+1). In this case, at least if all ν^α are identical, some of the upcoming technical steps do
simplify significantly, while in higher dimensions we must treat separately modes in between two
separated shells (in the picture below, for d =2 and L = 5, the modes that fall in
the gray area). When we consider separated shells we can make
use of gaps between eigenvalues of , which is of great advantage. This
motivates the distinction (superfluous in one dimension) in
Section <ref> between w^ and w^.
[Figure: the lattice of Fourier modes in dimension d = 2 with the circles of radius L = 5 and L+1 = 6; the gray annulus between the two circles contains the modes lying between the two separated shells.]
The second ingredient in the construction of the Lyapunov functional is a
control on high frequency regularity. For this reason we introduce for L ∈ the shifted Sobolev
spaces H^γ_L (these are Banach spaces only if considered as subsets of the functions
φ∈ L^2 such that Π_L^φ = φ) defined by the seminorm
[eqn:shifted-sobolev]
φ_H^γ_L^2 = ∑_α=1^m ∑_| k | > L_α +1 (1+| k |- L_α)^2γ | φ̂^α (k) |^2 ,
and to simplify the notation we write
φ_γ, L = φ_H^γ_L , φ_L = φ_1/2, L .
The choice of the regularity parameter equal to 1/2 simplifies
certain computations since it is “linear” in L:
φ^2_1/2 , L = Π_L^ φ^2_H^1/2 - ∑_α=1^m L_α
Π_L^ φ^α ^2 .
In particular, in this setting we observe that we can bound for any k_0∈
π_t _H^1/2^2 ⩽2
ν_min^-1/2 ( M(π_t) + k_0) +1 +
π_t _M(π_t)+ k_0^2 ,
with ν_min= min_αν^α.
Now, a natural first candidate for a Lyapunov functional for t ↦π_t could be the map
[e:naivelyap]
S ∋π↦exp( π_H^γ^2 ) ,
for some γ > 0. This is a reasonable choice for a Lyapunov functional,
but our analysis is not sufficient to prove that it does indeed satisfy
the Lyapunov property.
As we will see, a
crucial point of our argument is to control
the evolution of level sets of the energy of π_t, such as the energy
median, which we will use as
thresholds to distinguish between large and small scales.
Therefore we will replace our first guess for the Lyapunov functional by
[eqn:def-G-lyap]
G S →[0, ∞] , G (π) = exp( κ_0 M (π) + π_M(π) +k_0^2 ) ,
for two parameters κ_0>0, k_0∈ to be fixed later on.
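Combining the sketches given earlier for the energy median and for the shifted seminorm, G can be evaluated as follows; this is an illustration only, it assumes the helpers energy_median and shifted_seminorm_sq defined above and uses arbitrary placeholder values of κ_0 and k_0.

```python
import numpy as np

# Sketch: G(pi) = exp(kappa0 * M(pi) + ||pi||_{M(pi)+k0}^2), assembled from the
# helpers energy_median and shifted_seminorm_sq sketched earlier (1D torus);
# kappa0 and k0 are placeholder values.
def G_functional(pi_hat, nu, a, kappa0=1.0, k0=1):
    M = energy_median(pi_hat, nu, a)
    reg_sq = shifted_seminorm_sq(pi_hat, nu, M + k0, gamma=0.5, a=a)
    return np.exp(kappa0 * M + reg_sq)
```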
For fixed κ_0, k_0, there exist constants c_1 (κ_0, k_0) < c_2 (κ_0, k_0) such that for every π∈ S:
c_1 π_H^1/2^2 ⩽κ_0 M (π) + π_M(π) +k_0^2 ⩽c_2 π^ 2_H^1/2 .
In particular, if we could prove a super-Lyapunov property for the
functional G, namely that
[ G( π_t) ] ⩽c exp( c e^- λt κ_0 M
(π_0) + c e^- λt π_0
_M(π_0) +k_0^2 ) ,
for some c , λ > 0,
then we would be able to deduce that also the map in (<ref>)
satisfies the Lyapunov property. Unfortunately, proving such a super-Lyapunov property lies beyond our capacities and we restrict ourselves to proving the Lyapunov
property only. The main issue lies in the analysis of the median
process M( π_t): for the high-frequency regularity π_t_M (π_t) +
k_0, which behaves very much in analogy to the Sobolev norm of a typical
parabolic SPDE, we can prove a bound very crudely of the type [ exp(
π_t_M (π_t) +
k_0^2) ] ⩽ c exp( c e^- λ k_0 tπ_0_M (π_0) + k_0^2). For the median however we can
only prove a bound along the lines of [ exp( κ_0 M (π_t))]
⩽ c( e^- λκ_0 t exp( κ_0 M (π_0)) +1).
Namely, we show that M is upper bounded, at least for large values,
by a diffusion with a constant drift towards low frequencies. It is this upper
bound which we believe to be sub-optimal but whose improvement would require a much
deeper analysis of energy level dynamics which leads to the restriction
mentioned above. This discussion also
explains the presence of the parameters
κ_0, k_0, which are used to tune the contraction constants e^- λκ_0 t and e^- λ k_0 t to our convenience.
Before we move on, let us remark that sometimes we will
call G the full Lyapunov functional, in opposition to a
skeleton functional F which we construct in
Section <ref> and which is at the
heart of our technical analysis. Now we are ready to state the main
achievements of this paper.
§.§ Main results
Our main results neatly divide in two distinct theorems, with respectively
weaker and stronger non-degeneracy assumptions on the noise: one concerning the
existence of invariant measures and lower bounds for the Lyapunov
exponent(s), and one regarding the uniqueness of invariant measures and
Furstenberg–Khasminskii formulas for the Lyapunov exponent. We start with the
first result.
§.§.§ Existence of invariant measures
Our proof of existence of invariant measures amounts to proving that G in (<ref>) is a Lyapunov functional for π_t. This is not true for any noise, but requires a certain
non-degeneracy. The sufficient property we identify for the desired Lyapunov property to hold
guarantees that the noise
counteracts concentration in high-frequency states: if a substantial amount of
the energy of π_t finds itself in a frequency shell at level M, so that say (Π^_M-2 + Π^_M-1) π_t⩾β_0 for some
β_0 > 0 with no control on how much energy lies in lower frequencies,
then the noise must nevertheless shift some energy to lower frequencies arbitrarily quickly,
provided that M is sufficiently large. The fact that we consider two shells
(M -2 and M-1) is out of purely technical reasons, for later
convenience. In this case, we say that W induces high-frequency stochastic instability.
The noise W appearing in (<ref>) is said to induce
high-frequency stochastic instability if
for every t >0 and ∈ (0, 1) there exists a (t, ) ∈ such that for all
M ⩾ and all u_0∈ L^2_⋆, if
[e:idata]
Π^_M u_0 ⩽2 Π^_M u_0 ,
(Π^_M-1 + Π^_M-2) u_0 ⩾1/4
Π^_M-2 u_0 ,
then
( (Π^_M-1 + Π^_M-2) u_t < 1/4 Π^_M-2 u_t
for some t ∈[0, t]) ⩾1- .
The exact value of the constants 2 and 1/4 is irrelevant but chosen in
harmony with later thresholds. The assumption (<ref>) on the initial data is stating
that M is related to the energy median of u_0 (it bounds another energy level
set) and that the energy of π_0 = u_0 / u_0 is concentrating
in a shell of level M. The claim is then that the noise shifts energy to lower
frequencies no matter how small Π^_M-2 u_0 might be.
It is easy to see that if there is no noise (W = 0) the condition is not satisfied,
which is why this is a non-degeneracy assumption on the noise.
Our main result can then be formulated in terms of this stochastic instability.
Here we make use of the Fourier coefficients appearing in (<ref>).
Assume that 𝐚 ⩾ 1, ν^α > 0 for all α∈{ 1, …, m }, that the noise W appearing in (<ref>) induces a
high-frequency stochastic instability, and that the noise coefficients Γ in (<ref>) satisfy for every α, α^' , β, β^'∈{ 1, …, m }
[e:reg-assu]
∃γ_0 > d/2 +1 , C > 0 such that |
Γ^α, α^'_β, β^', k | ⩽C( 1+ | k
|)^- 2 γ_0 , ∀k ∈^d .
Then the following hold
* For every u_0∈ L^2_⋆ (^d; ^m), almost surely
the solution u_t to (<ref>) satisfies u_t≠ 0 for all
t ⩾ 0. The angular component π_t = u_t/
u_t is hence defined for any π_0∈ S, almost surely, for all t
⩾ 0, and it is a Markov process.
* For any c∈ (0, 1) there exist
κ_0, k_0, J, t_⋆ >0
such that for G as in (<ref>) and all π_0∈ S
_t [ G(π_t + t_⋆) ] ⩽c ·G(π_t) + J .
Before we move on, let us comment on the assumptions of this theorem.
The assumption 𝐚 ⩾ 1 is slightly unnatural: the natural threshold
for our analysis would be 𝐚 = 1/2, since it is at this point that the gaps Δ_L of the one-dimensional (- Δ)^𝐚 are no longer increasing in L. In our
analysis, the restriction 𝐚 ⩾ 1 appears in the proof of
Proposition <ref> when treating systems with different
viscosity coefficients (ν^α)_α=1^m (if all ν^α were identical, we would be able to treat all 𝐚 > 1/2).
The requirement γ_0 > d/2 +1 appearing in (<ref>) is
instead a mild and mostly
technical regularity assumption. It is needed for example to obtain the energy estimates in
Proposition <ref>.
The proof of the Lyapunov property is the content of Section <ref>; the existence and the Markov property of the angular component are the content of Section <ref>, see Lemma <ref>.
We observe that (see Remark <ref>) for some
constants c_1, c_2 > 0 the functional G satisfies
G(π) ⩾c_1 exp( c_2 π^2_H^1/2 ) ,
which suffices to see that the next result implies tightness of the process π_t in S.
Of course, the notion of high-frequency stochastic instability is not very
practical, so it is desirable to have some easy-to-check conditions implying it. As it
turns out, there is a bonanza of fairly mild non-degeneracy conditions that can be imposed on the noise
coefficients Γ to enforce this stochastic instability.
One possible condition which is very weak yet easy to state is the following.
Assume that the noise coefficients Γ^α, α^'_β, β^', k satisfy
(<ref>) and are diagonal:
Γ^α, α^'_β, β^', k =
Γ^α_β, k δ_α, α^' δ_β,
β^' ,
for some {Γ^α_β, k}_α, β, k⊆ [0,
∞) (with a slight abuse of the letter Γ).
Further, assume that
for every β∈{ 1, …, m } there exists an α (β) ∈{ 1, …,
m } such that the viscosity coefficients satisfy ν^β⩾ν^α and the noise coefficients satisfy that there exists a finite set
and a _0∈ such that
⊆ (Γ^α_β) = { k Γ^α_β, k > 0 } ,
and for which
_ * _B(M)⩾_B( M+b ) , ∀ M ⩾_0 ,
where B(M) = { k ∈^d | k
| ⩽ M } and b=3 ν_min^- 1/ 2.
Here, in the large scale non-degeneracy assumption we have written _A for the characteristic function of a subset A ⊆^d and we used the discrete convolution (f * g) (k) =
∑_l ∈^d f (k-l) g(l). Before we proceed, let us conclude
with some remarks on the setting above.
We observe that there are many possible choices for the coefficients Γ^α_β,k for which (<ref>) is satisfied.
* For example, by Lemma <ref>, if ν^α and Γ^α_β, k
are such that for any β∈{ 1, …, m } there exists an α(β) with ν^β⩾ν^α and
Γ^α_β, k > 0
, ∀ k ∈^d | k | ⩽η(d , ν) ,
where η(d, ν) is an arbitrary constant such that η(d , ν) > 3
ν_min^- 1/2 √(d) is satisfied, then
(<ref>) is satisfied.
* The condition _ * _B(M)⩾_B (M+b) could be
relaxed to _ * _B(M)⩾_B(M+ ), for arbitrary > 0 by considering shells of smaller width throughout the work: we work
with shells of width one only to lighten the burden of notation. For the very same
reason, we also expect that the factor 3
ν_min^- 1/2 √(d) appearing in the previous point can be
replaced by 1 in any dimension.
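The covering condition itself is straightforward to test numerically for a candidate set of the form { k : | k | ⩽η}. The brute-force check below (ours, for a single value of M, whereas the assumption asks for it at all sufficiently large M) verifies that every point of B(M+b) can be written as a point of the candidate set plus a point of B(M).

```python
import numpy as np
from itertools import product

# Brute-force check, for one M, of 1_A * 1_{B(M)} >= 1_{B(M+b)} with
# A = {k in Z^d : |k| <= eta}: every k with |k| <= M+b must satisfy
# |k - a| <= M for some a in A.  Parameters are illustrative.
def covers(eta, M, b, d=2):
    A = [np.array(a) for a in product(range(-int(eta), int(eta) + 1), repeat=d)
         if np.linalg.norm(a) <= eta]
    R = int(np.ceil(M + b))
    for k in product(range(-R, R + 1), repeat=d):
        k = np.array(k)
        if np.linalg.norm(k) <= M + b:
            if not any(np.linalg.norm(k - a) <= M for a in A):
                return False
    return True

# e.g. covers(eta=5, M=20, b=3.0) is True, while covers(eta=2, M=20, b=3.0) is False
```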
As already anticipated, this is sufficient to deduce high-frequency stochastic
instability for the noise W.
Under Assumption <ref> the noise W induces high-frequency
stochastic instability, in the sense of Definition <ref>.
The proof of this proposition can be found in Section <ref>.
One can observe that at least heuristically, in view of (<ref>) the
functional G alone is not sufficient to obtain the bound λ (u_0) > - ∞, as this result requires an a priori estimate on [ π_t_H^^2] (as opposed to the H^1/2 bound provided by G), which we obtain separately by a bootstrap argument.
Under the same assumptions as in Theorem <ref>, there exist s_⋆ and C > 0 such that uniformly over all π_0∈ S
sup_t ⩾s_⋆[ π_t
^2_H^ ] <
C G(π_0) .
Hence, in particular, inf_u_0∈ L^2_⋆λ (u_0) > - ∞.
The uniform estimate on the H^ norm follows from
Proposition <ref>. As for the last claim, let us prove
a uniform lower bound over u_0∈ H^1/2∖{ 0 }, which is sufficient since for every
u_0∈ L^2_⋆ we have that u_t∈ H^1/2 for any t > 0. By (<ref>) we find that -almost surely
λ(u_0) = lim_t →∞ 1/t (
∫_s_⋆^t ⟨π_s, π_s ⟩+
1/2 C(π_s, Λ) s
+ ∫_s_⋆^t ⟨π_s, π_s ·W_s ⟩) .
We observe that the quadratic variation of the last term is given by
∫_s_⋆^t⟨π^⊗ 2_s, Λπ^⊗ 2_s⟩ s ,
as defined in (<ref>). Further we can estimate, using the assumption
of Theorem <ref>
|Λ^α, α^'_β, β^' (x) | =
|∑_kΓ^α, α^'_β, β^' ,
k e_k(x) |⩽∑_k | Γ^α, α^'_β, β^' ,
k | ≲∑_k (1 + | k |)^- d -2 < ∞ ,
meaning that Λ_∞≲ 1. Therefore, the
quadratic variation is bounded by
∫_s_⋆^t⟨π^⊗ 2_s, Λπ^⊗ 2_s⟩ s ⩽Λ_∞ t ,
since π_s∈ S, so that
from the law of the iterated logarithm we find that -almost surely
lim_t →∞ 1/t ∫_s_⋆^t ⟨π_s, π_s ·W_s ⟩=0 .
Hence the Lyapunov exponent is bounded by
λ(u_0) = lim_t →∞ 1/t
∫_s_⋆^t ⟨π_s, π_s ⟩+ 1/2 C(π_s, Λ) s
⩾( lim sup_t →∞ 1/t
∫_s_⋆^t ⟨π_s, π_s ⟩s ) -
^u(Λ) - Λ_∞ ,
where, since the left-hand side is deterministic, we took the expected value of the first line and replaced the limit by a lim sup.
Since ⟨π_s, π_s⟩ is a negative random variable, Fatou's lemma yields
[ lim sup_t→∞ 1/t ∫_s_⋆^t ⟨π_s, π_s ⟩s ] = - [ lim inf_t→∞ 1/t ∫_s_⋆^t -⟨π_s, π_s ⟩s ]
⩾- lim inf_t→∞ 1/t ∫_s_⋆^t - [Q_(π_s,
π_s) ] s
⩾- ( max_α ν^α ) lim inf_t→∞ 1/t
∫_s_⋆^t [ π_s ^2_H^ ] s .
The first statement of our result furthermore guarantees that for every n ∈^+ we have the upper bound
lim inf_t →∞ 1/t ∫_n s_⋆^t
π_s ^2_H^ s ⩽C G(π_(n-1) s_⋆)
⩽C ( c^n-1 G(π_0) + J (∑_i =
0^n-2 c^i) ) ,
where we used the uniform bound on π_s_H^^2 and the contraction property for
G from Theorem <ref>. Passing to the limit n →∞ we deduce that
lim inf_t →∞ 1/t ∫_n s_⋆^t
π_s ^2_H^ s ⩽C J 1/1 - c ,
yielding the desired uniform bound.
§.§.§ Uniqueness of invariant measures
We can next exploit our control on π_t to derive
unique ergodicity for the projective process [π_t]: here we require some strong
non-degeneracy assumptions on the noise, and a condition on the hyperviscosity
parameter and the dimension.
The parameter 𝐚 in (<ref>) satisfies
𝐚 ∈ [1,∞)∩ (d/2,∞) and the noise coefficients Γ^α, α^'_β, β^', k satisfy
(<ref>) and are diagonal:
Γ^α, α^'_β, β^', k =
Γ^α_β, k _{ α= α^' } _{ β=
β^'} ,
for some {Γ^α_β, k}_α, β, k⊆ [0,
∞). Furthermore, there exist
constants 0 < c < C < ∞ and γ_0 > d/2+1 such that for all α, β∈{ 1, …, m }:
[e:boundalphak]
c (1 + | k |)^- 2 γ_0 ⩽Γ^α_β, k ⩽C (1 + | k
|)^ - 2 γ_0 , ∀k ∈^d .
The condition above, and in particular the requirement >d/2,
is very restrictive and we believe it is far
from optimal.
It is
used to guarantee that the Jacobian of the solution u_t takes
values in the image of the noise operator, which
allows one to establish the strong Feller property via
the Bismut–Elworthy–Li formula and a localisation argument. This could
be avoided by using different approaches, such as asymptotic strong
Feller <cit.> or asymptotic couplings
<cit.>, but extending our approach to make use of such techniques
lies beyond the scope of this paper.
Similarly, the matching lower and upper
bounds in (<ref>) are imposed on us by our strategy of proof.
Under Assumption <ref>, which implies
Assumption <ref>, we are able to derive a spectral gap
for [π_t] as stated in Theorem <ref> below. This implies uniqueness of the Lyapunov
exponent over all initial conditions, along with a Furstenberg–Khasminskii
type formula <cit.> for it and continuous
dependence on the parameters of the model: in our setting, we chose for simplicity
only the parameter appearing as a power in the Laplacian,
although one could as well choose the noise strength coefficients Γ.
In the statement of the following theorem we denote by _t ([π_0], ·)
the law of [π_t] started in [π_0] (the functional G(·) does not depend on the choice of representative for [π_0]).
Moreover, for a measurable space (, ), we
denote with ·_TV, the total variation norm of a
signed measure μ over (scaled by a factor 1/2 for later
convenience) by
[e:tv]
μ_TV, = 1/2 | μ|() .
Under Assumption <ref>
there exists a unique invariant measure μ_∞ for ([π_t])_t ⩾ 0 on 𝐏. In addition, there
exist C , γ > 0 such that for G as in
Theorem <ref> and any π_0∈ S ∩ H^1/2:
_t ( [π_0] , ·) - μ_∞ _TV,
𝐏 ⩽C
e^- γt G([π_0]) .
In addition
* For any initial condition u_0∈ L^2_⋆(^d;
^m ),
-almost surely
lim_t →∞ 1/t logu_t = λ ,
with λ∈ given by (<ref>).
* If we consider 𝐚 ↦λ(𝐚) as a function of the parameter 𝐚 ⩾ 1 appearing in (<ref>), then the map 𝐚 ↦λ(𝐚) is continuous on the interval (d/2, ∞) ∩ [1, ∞).
The proof of the spectral gap is the content of Section <ref>,
and together with Theorem <ref> and
Corollary <ref> it immediately implies (<ref>). As for the last statement, it
follows from the observation that all the estimates of this work hold locally
uniformly over ∈ [1, ∞). In particular, for any compact interval
[η, ] =I ⊆ (d/2, ∞) ∩ [1, ∞), provided that | η -
| < 1/2, and for any n ∈, we find by
Proposition <ref> the uniform bound
sup_ 𝐚∈ I_μ_[ π^n_H^] < ∞ ,
where μ_ denotes the invariant measure for the process t ↦π_t for a given parameter > d/2.
A first consequence of the uniform bound (<ref>) is that the
measures μ_ are continuous in with respect to weak convergence
as probability measures on S. Indeed (<ref>) implies that any
sequence of measures {μ_ _k}_k ∈ for { _k}_k⊆
I with _k→∈ I is tight and, writing _t^ for the Markov semigroup
with given parameter , it is not hard to verify via Lemma <ref> that
the map (, μ) ↦_t^μ is jointly continuous for any fixed
t>0.
This implies that any weak limit point must be
invariant under the evolution of π_t associated to the limiting value of , and therefore equal μ_. Finally, weak continuity of μ_ together with (<ref>) and
the Furstenberg–Khasminskii formula (<ref>) imply
that ↦λ ( ) is continuous.
§ THE SKELETON LYAPUNOV FUNCTIONAL AND DISSIPATION ESTIMATES
One of the issues that appear when dealing directly with the energy median (M
(π_t))_t ⩾ 0 is that we have no control on the frequency of its
jump times. In fact, we expect M( π_t) to jump very rapidly to low
frequencies, provided it starts from
a sufficiently high frequency level. Such jumps are in principle
“good” for us (since they imply that energy is not shifting to high frequencies), but every time the
energy median jumps to lower frequencies, the high-frequency regularity also jumps,
but upwards, so that it is unclear whether the Lyapunov functional is
decreasing. In addition, the energy median can be very irregular in
time since, due to the noise, it can easily accumulate
infinitely many jumps in a finite time interval while rapidly oscillating
between two neighbouring frequency levels.
To avoid all these issues, our strategy is to introduce a piecewise constant process (M_t)_t ⩾ 0, which we call the skeleton median process,
which behaves similarly to M( π_t), but is such that the time intervals between successive
jumps are of order one.
For this, we will construct a suitable sequence of stopping times
0 = T_0 < T_1 < …
with intervals of length of order one, meaning that T_i +1 - T_i has
uniform bounds on both positive and negative moments. The skeleton energy median
is then a process that remains constant on every interval [T_i, T_i +1) and
updates its value only at the stopping times T_i. For the sake of the
present discussion, one can for instance think of the stopping times as being deterministic
times
T_i∼η· i ,
for some η > 0 and of the skeleton energy median to be defined as the
true median at times T_i:
M_t∼ M (π_T_i) , ∀ t ∈ [T_i, T_i+1) .
Unfortunately, there are several reasons why this simple construction does not
quite work for our analysis. First, our study of the median relies on bounds on
the dynamic of the relative energy processes which are addressed in
Proposition <ref>. These bounds do not hold over deterministic time
intervals, but only up to certain stopping times. Second, the energy median may exhibit
large negative jumps over fixed time intervals, which is not desirable for the reason mentioned above,
so we enforce the deterministic bound M_T_i +1 - M_T_i⩾ -1, as well as the bound M_T_i⩾ M (π_T_i) for all i ∈.
Overall, although the stopping times T_i will not be deterministic,
we will have a deterministic upper bound on the increment T_i+1 -
T_i as well as a bound on all inverse moments of such increments.
We refer to Lemma <ref> below for a
list of desirable properties satisfied by the skeleton median process.
Similarly, since the exact definitions of both the stopping times T_i and of the skeleton
process are quite intricate, they are deferred to Section <ref>,
after we have introduced all the required tools.
Now we introduce another feature of our analysis, namely that we distinguish
between “concentrated” and “diluted” states of the process u_t. Given a
frequency level L ∈
and a function u ∈ L^2, we introduce the marker m(L, u) ∈{, } given by
[eqn:def-dil-n]
m(L, u) =
(con) if
(Π_L^ + Π_L-1^ ) u ⩾1/4 Π_L-1^ u ,
(dil) if
(Π_L^ + Π_L-1^ )u < 1/4 Π_L-1^ u .
The value 1/4 is as usual arbitrary and can be replaced with any value in
(0, 1).
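For concreteness, a possible implementation of the marker reads as follows. This is a sketch based on our reading of the two projections as the central shells at levels L and L-1, compared against the low-frequency part below level L-1; both this identification and the helper names are assumptions on our part.

```python
import numpy as np

# Sketch of the marker m(L, u): "con" if the two central shells at levels L and
# L-1 carry at least a quarter of the low-frequency mass below level L-1,
# "dil" otherwise (1D torus; the reading of the projections is our assumption).
def marker(u_hat, nu, a, L):
    m, n = u_hat.shape
    absk = np.abs(np.fft.fftfreq(n, 1.0 / n))
    shells = low = 0.0
    for alpha in range(m):
        scale = nu[alpha] ** (-1.0 / (2 * a))
        in_shells = (absk > scale * (L - 1)) & (absk <= scale * (L + 1))
        shells += np.sum(np.abs(u_hat[alpha][in_shells]) ** 2)
        low += np.sum(np.abs(u_hat[alpha][absk <= scale * (L - 1)]) ** 2)
    return "con" if np.sqrt(shells) >= 0.25 * np.sqrt(low) else "dil"
```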
In our setting, assuming that the skeleton median process is given,
we are interested in whether the system is concentrated or diluted about level M_t-1. Therefore, we set
m_t = m (M_t-1, u_t) ∈{, } .
We introduce this marker because we exploit different mechanisms depending on whether
the system is in a diluted (m_t =) or a concentrated (m_t =) state. In both cases, for any i ∈, provided that M_T_i is sufficiently large, the quantity Π_M_T_i^ u_t/Π_M_T_i^ u_t is
very likely to decrease (if it is finite) for t ∈ [ T_i, T_i+1) as a consequence of the
dissipative nature of our equation. This
is the content of Proposition <ref>, which shows that the
evolution of this ratio is bounded from above by an Ornstein–Uhlenbeck process
on certain time intervals.
At time T_i+1, once the small scale energy captured by Π_M_T_i^ u_t is likely to have dissipated, we update the
value of M_T_i to the new median, which we now expect to be strictly smaller
than the original one. But if the energy is
concentrated in a shell of width two about level M_T_i -1, namely if (Π_M_T_i-1^ + Π_M_T_i-2^) π_t contains a significant
fraction of energy (in the most extreme case without any energy in lower modes), then the deterministic dynamic predicts that the energy will just further
concentrate at level M_T_i and dissipation alone is not sufficient
to guarantee that M_T_i+1 < M_T_i.
Instead it is the effect of the non-degenerate
noise which pushes energy to modes surrounding M_T_i in a time of
order one combined with dissipation which guarantees that the
median actually drifts to lower levels.
Eventually, we will compare the dynamics of M_t for large values of
M_t to a random walk with drift towards lower frequencies. Hence we have
to control how far the process jumps to high frequencies in the unlikely event
that this occurs. Here, once more, we make use of the dissipation, by obtaining suitable
regularity estimates for
[eqn:def-h]
w_t = Π^_M_tu_t/ Π^_M_t u_t
=Π^_M_t π_t/ Π^_M_t π_t .
We use this approach to study, for κ_0,
κ > 0, k_0∈, the functional
[eqn:def-lyap-fun]
F (κ, _t) = exp( κ_0 M_t +
κ w_t _M_t+ k_0^2 ) ,
which is now a functional of the enhanced process
_t = (M_t, π_t) ∈×S ,
and should be interpreted as a skeleton version of the functional G defined in (<ref>). While the parameters κ_0 and
k_0 will be fixed later on, κ will be allowed to vary.
Note that F cannot be expressed as a
functional of π_t alone since the process M_t depends on the past:
one of the key technical results of this work is then
Theorem <ref>, which shows that F satisfies
a Lyapunov-type property.
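A numerical sketch of F, in the same spirit as the earlier snippet for G, is given below; it reuses shifted_seminorm_sq from above, and both the convention that the high-frequency projection keeps the modes with | k | > (M+1)_α and the parameter values are assumptions on our part.

```python
import numpy as np

# Sketch: F(kappa, (M, pi)) = exp(kappa0*M + kappa*||w||_{M+k0}^2) with
# w = Pi^up_M pi / ||Pi^up_M pi|| (1D torus; assumes the high-frequency part
# of pi is nonzero and reuses shifted_seminorm_sq from the earlier sketch).
def F_functional(M, pi_hat, nu, a, kappa, kappa0=1.0, k0=1):
    m, n = pi_hat.shape
    absk = np.abs(np.fft.fftfreq(n, 1.0 / n))
    w_hat = np.zeros_like(pi_hat)
    for alpha in range(m):
        mask = absk > nu[alpha] ** (-1.0 / (2 * a)) * (M + 1)
        w_hat[alpha][mask] = pi_hat[alpha][mask]
    w_hat /= np.sqrt(np.sum(np.abs(w_hat) ** 2))   # normalise Pi^up_M pi
    return np.exp(kappa0 * M + kappa * shifted_seminorm_sq(w_hat, nu, M + k0, 0.5, a))
```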
In the next section we introduce one of the main
building blocks of our analysis: a dissipation
estimate relating the relative energy Π^_M_T_i u_t / Π^_M_T_i u_t to
an Ornstein–Uhlenbeck process on suitable time intervals.
§.§ Dissipation estimates
Let us start by considering any stopping time t_0⩾
0 and an _t_0–adapted random variable L ∈.
To state our dissipation estimates we need two additional
ingredients. First the dissipation is captured by the gap between different
eigenvalues, which satisfies
Δ_L = ζ_L+1 - ζ_L ∈ (0, ∞) , lim_L →∞Δ_L = ∞ , Δ_L ∼ L^2𝐚 - 1 .
Then, for any u_0∈ L^2_⋆(^d; ^m) and u_t the solution to
(<ref>) we define the following functions for all t ⩾ t_0, assuming Π^_L u_t_0 > 0 almost surely:
[eqn:def-w]
w(L; t, x) = Π_L^ u_t(x)/ Π_L^ u_t , w^(L; t, x) = Π_L^ u_t(x)/ Π_L^ u_t .
Note that nothing prevents the denominator
from vanishing in finite time. For this reason, we will only ever track these functions up to
suitable stopping times which avoids this. In particular, we define for any β > 0:
[eqn:tau-generic-n]
τ^ (L, t_0) = inf{ t ⩾t_0 w (L; t, ·) ⩽1/2 } ∧(t_0 + 1) ,
σ^_β (L,t_0) = inf{
t ⩾t_0 w^ (L; t , ·) ⩾β} ∧(t_0 +1) .
Stopping after a time interval of length at most one will later allow us to enforce
a deterministic upper bound on T_i+1 - T_i. Further, the value 1/2 is of course
again somewhat arbitrary, and we use the letter σ rather than τ for the second stopping time since τ^ will be reserved for the
particular threshold β=2 later on. As for the variable L, we will
use these definitions mostly with L ∈{ M_t_0, M_t_0 -2 },
where M_t is the skeleton median process constructed in
Section <ref>.
Finally, it will be convenient to rewrite (<ref>) in Fourier
coordinates.
Let u_t be the solution to (<ref>) with u_0∈
L^2_⋆ (^d; ^m). Then for any k ∈^d the process û_t^k= ⟨ u_t, e_k⟩ satisfies
û_t^k = - νζ_k û_t^k t + ∑_l ∈^d û_t^k- l ·B^l_t .
Recall that u is vector-valued, so here we have defined ν =
(ν^α)_α =1^m and for φ∈^m we define the
component-wise product
νφ= (ν^α φ^α)_α=1^m ∈^m .
The quadratic covariation matrix between û_t^k and û_t^l is given by
⟨û^k , û^l ⟩_t = ∑_m ∈^d û_t^k - m ⊗û_t^l+m ·Γ_m t =: C_k, l(u_t) t .
Here the resulting covariance is a matrix (C_k , l^α,
β)_α, β∈ (^m)^⊗ 2, defined by
C_k , l^α, β (u) = ⟨û^α, k,
û^β, l⟩ = ∑_m∑_γ, ηû^γ, k - mû^η , k + mΓ^α, β_γ, η, m ,
where we can interpret the inner sum as a dot product between the tensor û^k-m⊗û^l+m and the 4-tensor (Γ^α,
β_γ, η, m )_α, β, γ, η, with m fixed.
Now, if we define
Γ_m = max_α, β, γ, η | Γ^α,
β_γ, η, m | ,
then the covariation C_k,l(u) can be estimated by the
Cauchy–Schwarz inequality, for every α, β:
[eqn:bd-cov]
| C_k, l^α, β(u) | ⩽∑_m
Γ_m | û^k + m | | û^k - m | ⩽ u
_ł^2_k u _ł^2_l ,
with
u _ł^2_k^2 = ∑_m ∈^d Γ_k+ m |
û^m |^2 .
Now we are ready to state our dissipation estimates.
Under the assumptions of Theorem <ref>, consider any stopping time t_0 and any _t_0-adapted random variable L with values in . Then fix u_0∈ L^2_⋆ (^d; ^m), let u_t be the solution
to (<ref>) and assume that almost surely Π_L^
u_t_0 > 0.
Further, let w and w^ be as in (<ref>) and 0 < E < β two non-negative constants.
Then there exists a (deterministic) increasing function β↦ R(β) ∈ (0, ∞)
such that the following holds if w^ (L; t_0, ·) ⩽
E, uniformly over all u_0:
* For all t ∈ [t_0, σ_β^ (L,t_0) ] we can bound
w(L; t, ·) ^2 ⩽-2 ν_min Δ_L
w(L; t, ·) ^2 t + R(β) t + _t ,
where ν_min = min_αν^α > 0 and _t is a continuous, square integrable martingale on [t_0,
σ_β^(L,t_0)] with _t_0 = 0 and
quadratic variation bounded by ⟨⟩_t⩽
R(β ) t.
* Similarly, for all t ∈ [t_0, σ_β^ (L,
t_0)] we can bound:
w^ (L; t, ·) ^2 ⩽R(β) t + 𝒩_t ,
where 𝒩_t is a continuous, square integrable martingale on [t_0,
σ^_β(L,t_0) ] with 𝒩_t_0 = 0 and
quadratic variation bounded by ⟨𝒩⟩_t⩽
R(β) t.
By the strong Markov property of the solution u we may assume that t_0 = 0.
To show the first point, we use Itô's formula to find
Π_ L^ u_t ^2/ Π_L^
u_t ^2 = 2 [ 1/ Π_L^ u_t ^2⟨Π_L^ u_t, u_t⟩- Π_L^ u_t ^2/ Π_
L^ u_t ^4 ⟨Π_L^ u_t, u_t⟩] t
+ _t + 1/2 ∑_k, l ∈^d ∂_ û^k û^l ( A_t/B_t ) : ⟨û^k, û^l ⟩_t .
Here the last line contains the quadratic covariation term and we have defined
A_t = Π_L^ u_t^2 , B_t = Π_L^ u_t^2 ,
so that ∂_û^kû^l( A_t/B_t) is the matrix, over the indices α,
β∈{ 1, … , m }
( ∂_û^kû^l( A_t/B_t)
)^α, β = ∂_û^α, kû^β,l( A_t/B_t)
and for two matrices G, F ∈
(^m )^⊗ 2 we have F : G = ∑_α, β
F^α, β G^α, β.
Then, via Remark <ref>, we can estimate the quadratic variation
term by
∑_k, l ∈^d ∂_ û^α, k û^β, l (
A_t/B_t ) ⟨û^k , û^l
⟩_t^α, β
≲ ∑_|k| > (L+1)_α | C^α,
β_k , -k(u_t) |/B_t_{ α= β} +
∑_|l| ⩽L_α A_t/B_t^2 |C_l, -l^α, β(u_t)| _{α= β}
+ ∑_| k | ⩽L_α, | l | ⩽L_β
A_t/B_t^3 | û_t^l | | û^k_t | |C^α, β_k,
l(u_t)| + ∑_|k| > (L+1)_α, |l| ⩽L_β | û_t^k
| | û_t^l |/B^2_t | C_k, l^α, β(u_t) | .
Next we use the bound in (<ref>). We observe that by
(<ref>), since ∑_mΓ_m < ∞,
we find that ∑_k ∈^d u
_ł^2_k^2≲ u ^2 and therefore
∑_l | C^α, β_l , -l (u) | ≲ u ^2 ,
∑_l, k | û^k | | û^l | | C^α, β_k, l
(u)| ≲( ∑_l | û^l | u
_ł^2_l )^2 ≲ u ^4 .
Then we can estimate the terms above as
follows for t ∈ [t_0, σ_β^ (t_0)]:
∑_k, l ∈^d ∂_ û^k û^l (
A_t/B_t ) : ⟨û^k, û^l ⟩_t
≲ u_t ^2/B_t + u_t ^4/B_t^2 + u_t ^6/B_t^3 ≲_β 1 .
Regarding the first term in (<ref>), we observe that
1/ Π_L^ u_t ^2⟨Π_L^ u_t, u_t⟩- Π_L^ u_t ^2/
Π_L^ u_t ^4 ⟨Π_L^ u_t, u_t⟩⩽-(ζ_L+1 - ζ_L) A_t/B_t .
Here we have used the particular definition of the projection in
Definition <ref> to deal with distinct viscosity coefficients. Indeed,
note that for every α∈{ 1, …, m } we have that
⟨Π^_L u^α, ^α u^α⟩ = -
ν^α⟨Π^_L u^α, (- Δ)^
u^α⟩
⩽ - ν^α | (L+1)_α |^2 Π^ u^α^2
= - (L+1)^2 Π^ u_t^2 ,
as desired, and similarly for the low frequency projection.
This concludes the proof of (<ref>). As for the quadratic variation, the
martingale term is given by
_t = 2 [ 1/ Π_L^ u_t ^2⟨Π_L^ u_t, u_t ·W_t⟩- Π_L^ u_t
^2/ Π_L^
u_t ^4 ⟨Π_L^ u_t, u_t ·W_t⟩] .
The first bracket (and verbatim for the second one) can be rewritten as
follows
⟨Π_L^ u_t, u_t W_t⟩= ∑_α, β ∑_| k | >
(L+1)_α û_t^α, k ∑_l û_t^β, k-l B^α, β_ l, t ,
which has quadratic variation
∑_α, β, γ, η∑_l( ∑_| k | >
(L+1)_αû_t^α, kû_t^β, k-l) ·( ∑_| k | >
(L+1)_γû_t^γ, kû_t^η, k-l) Γ^α, γ_β, η, l ,
which can be bounded by
∑_α, β ∑_l Γ_l | ∑_| k | > (L+1)_α û_t^ α, k
û_t^β, k-l |^2 ≲ u_t ^4 .
Hence with the same notation as above we obtain
/t ⟨⟩_t ≲ u_t
^2/B_t^2 + u_t ^8/B_t^4 ≲_β 1 ,
which proves the required bound.
To obtain the estimate for w^, we can follow verbatim the
previous calculations. The only difference lies in the treatment of the term
1/ Π_L^ u_t ^2⟨Π_L^ u_t, u_t⟩- Π_L^ u_t ^2/
Π_L^ u_t ^4 ⟨Π_L^ u_t, u_t⟩⩽-(ζ_L - ζ_L) A_t/B_t = 0 ,
which delivers the desired result.
§.§ Construction of the skeleton median
We are now ready to define the stopping times { T_i}_i ∈ and
the skeleton median process (M_t)_t ⩾ 0. The first step in the
definition of the stopping times is to make sure that they do not kick in too
quickly. We therefore start by introducing a padding time of length about δ, where δ∈ (0, 1) is a fixed parameter (the padding is the
red region in the figure below).
[Figure: timeline of one step of the construction; the interval [T_i, T_i+1) splits into a padding phase [T_i, V_i+1) (red), a dilution phase [V_i+1, S_i+1) (blue) and a dissipation phase [S_i+1, T_i+1) (green).]
After the padding time, we wait for the noise to shift the system into a diluted state, which happens at time S_i+1 (note that the system may well be diluted to start with). Once the system is diluted, we wait for
dissipation to kick in and reduce the value of the median.
Of course we may be in bad luck
and see an unexpected event somewhere along the line: if this is the case we stop prematurely. For this reason V_i+1 is, for example, a stopping time, and not the deterministic time T_i + δ. This discussion
motivates the definition of the following stopping times as well as of the
skeleton median process. Recall here the stopping
times defined in (<ref>), the marker defined in
(<ref>), and let us also introduce the first dilution time
[e:defsigmaDil]
σ^ (t_0) = inf{ t ⩾t_0
m_t =} .
Let δ∈ (0, 1) be fixed and
set, for any π_0∈ S
T_0 = 0 , M_0 = M(π_0) .
Then for any i ∈,
assuming that T_i and (M_t)_t ⩽ T_i are given, and writing M_i =
M_T_i for short, define the following.
* The next padding time:
V_i+1 = (T_i + δ) ∧σ^_5/4 (M_i, T_i) .
* The next dilution time:
S_i+1 = σ^ (V_i+1) ∧σ^_3/2 (M_i, V_i+1) .
* The next dissipation time:
T_i+1 = τ^ (M_i -2, S_i+1) ∧σ_2^
(M_i-2, S_i+1) .
* The skeleton median process up to time T_i+1:
M_t = M_i , ∀t ∈[ T_i , T_i+1) ,
M_T_i+1 = M_i+1 = M_i - 1 if M(π_T_i+1) < M_i ,
M (π_T_i+1)
else.
Finally, for any stopping time t_0 we write T(t_0) for the first time after t_0 in which the skeleton median
jumps:
T (t_0) = min{ T_i T_i > t_0} .
This definition may appear slightly circular since σ^ used in point 2 requires
knowledge of M_t for t > T_i which is only defined in point 4.
To alleviate this, one could simply replace m_t with m(M_t_0-1,u_t) in
(<ref>) which leads to the same construction.
As we have described before, the stopping times σ^_β
should kick in rarely (at least in the regime of interest to us, namely when the
Lyapunov function is large) and are present to guarantee that the system stays under
control. We can therefore identify an event _i on which the stopping times that kick in are those more likely to do so (at least if M is sufficiently large). Its complement _i = ^c_i covers instead the
case in which one of the σ_β^ kicks in, and happens
with small probability:
_i = { V_i +1 = T_i + δ}∩{m_S_i+1 = }∩{ T_i +1 = τ^ (M_i -2, S_i +1) } ,
_i = _i^c .
Now we would like to establish some basic properties of the skeleton median.
For example, the values 5/4, 3/2 and 2 are chosen increasingly so that
the stopping times kick in one strictly after the other. This is not always the
case, since σ_2 (M_i-2, S_i+1) ⩽σ_3/2 (M_i, V_i
+1) is guaranteed only if m_S_i+1 = (recall that m_t = m
(M_t-1, u_t) is defined in (<ref>)), and even this bound
requires a short computation, see below. Another fundamental
property is that on the event _i we have M_T_i +1 =
M_T_i -1.
Further, the skeleton median process is defined to be constant on each interval
[T_i, T_i +1), as already anticipated. Yet, in opposition to the
heuristic definition we have previously given, at time T_i +1 we do
not update its value to match M(π_T_i+1), unless M(π_T_i+1) ⩾ M_T_i.
This choice is taken because of a rather technical issue. If M (u_T_i) ≫ 1, then we expect the
true median to jump very quickly to lower
frequencies, meaning that M (π_T_i+1) ≪ M (π_T_i). But if the
median drops quickly, it is cumbersome to control high-frequency regularity: as M decreases,
the value of π_t_M increases in such a way that the first
effect could be canceled by the latter and may not see any decrease in our
Lyapunov functional. With our definition we artificially rule out such large drops
in M_t. Our skeleton process still satisfies M_T_i⩾ M
(π_T_i) for all i ∈, but the inequality may be strict and
there is no upper bound on the gap M_i - M
(π_T_i). We collect these and other considerations
in the following lemma.
The skeleton median process (M_t)_t ⩾ 0 and the stopping times defined in
Definition <ref> satisfy the following properties:
* For all t ⩾ 0, it holds that
[e:bd-pi]
Π^_M_t π_t ⩽2 Π^_M_t
π_t , 1 ⩽√(5) Π^_M_t π_t
.
* There exists a (deterministic) constant E ∈ (0, 2) such that
w^ (M_i-2; S_i+1, ·) ⩽E , if
m_S_i+1 = .
* For all i ∈ we can lower bound
M_i⩾ M (π_T_i).
* The jumps of the skeleton median are such that for all i ∈ one has M_i +1⩾ M_i - 1, and furthermore
M_i+1 = M_i -1 on _i.
* For all i ∈ we have T_i+1 - T_i⩽ 3.
The first bound in (<ref>) follows from the definitions of σ^_β
and of τ^ and the second bound follows from the
first since Π^_M_tπ_t^2 + Π^_M_tπ_t^2=1. The second point holds since
Π^_M_i-2 u_S_i+1 /
Π^_M_i-2 u_S_i+1 ⩽√(
( Π^_M_i-1+ Π^_M_i-2) u_S_i+1 ^2/ Π^_M_i-2
u_S_i+1 ^2 + Π^_M_i u_S_i +1 ^2/
Π^_M_i-2 u_S_i+1 ^2 )
⩽√( ( 1/4 )^2 + Π^_M_i u_S_i +1 ^2/
Π^_M_i-2 u_S_i+1 ^2 · Π^_M_i u_S_i +1 ^2/
Π^_M_i u_S_i+1 ^2)
⩽√( ( 1/4 )^2 + ( 1 + (
1/4 )^2 ) ( 3/2 )^2) < 2 ,
where we used that since m_S_i+1 = we have the bound (
Π^_M_i -2 + Π^_M_i-1) u_S_i+1⩽1/4Π^_M_i-2 u_S_i+1, together with the fact
that S_i+1⩽σ_3/2^ (M_i, T_i).
The third point follows immediately from Definition <ref>. As for the fourth
point, we see that on _i necessarily
T_i+1 = τ^ (M_i -2, S_i+1). Therefore we find from
the definition of τ^ that
Π^_M_i-1 u_T_i+1 ⩽1/2
Π^_M_i-2 u_T_i+1 ⩽1/2
Π^_M_i-1 u_T_i+1 ,
so that indeed M (u_T_i +1) ⩽ M_i-1.
The last point follows by observing that the stopping times τ^ and σ^_β kick in after a time interval of length at most one.
Finally, to improve the readability of later calculations, we introduce the
following shorthand notations for the stopping times that have appeared so far
and an additional arbitrary stopping time t_0:
[e:short]
τ^ (t_0) = σ_2^(M_t_0-2, t_0) ,
τ^ (t_0) = τ^ (M_t_0-2, t_0) ,
σ_β^ (t_0) = σ^_β (M_t_0,
t_0) , σ^(t_0) = σ^_3/2 (M_t_0,
t_0) .
§ PROOF OF THE MAIN RESULT
The proof of Theorem <ref> follows from an analogue of the
same theorem for the stopped process.
To state this result, recall that the definition of M_t depends on the padding parameter
δ∈ (0, 1). Furthermore the Lyapunov functional F defined
in (<ref>) depends on
the parameters κ_0> 0, k_0∈ and on the free variable κ > 0. For convenience we recall here its definition, and that of the
Lyapunov functional G:
F (κ, ) = exp( κ_0 M +
κ w()_M+ k_0^2 ) , = (M, π) ∈×S ,
G (π) = exp( κ_0 M (π) + π_M(π) +k_0^2 ) .
Here we use the definition w() = Π^_Mπ / Π^_Mπ as in
(<ref>).
Finally, the Lyapunov property of F will hold outside of a compact set
of the state space of _t. We use the following convention for
any 𝐊∈_+
< 𝐊 ⇔ M < 𝐊 and w _M + k_0<
𝐊 .
Under the assumptions of Theorem <ref>, for any u_0∈
L^2_⋆ (or alternatively π_0∈ S) let M_t be the
skeleton median (with parameter δ∈ (0, 1)) from Definition <ref> with associated stopping times
{ T_i}_i ∈ and _t = (M_t,
w_t), with w_t as in (<ref>).
Then, for any b ⩾ 1 and c∈ (0,1) there exist δ∈ (0, 1), k_0∈^+, 𝐊∈_+, J > 0, κ_0 > 0 and increasing maps
J, h
[1/2, ∞) → [1, ∞) , with h(1/2)=1 ,
such that:
* For all κ⩾ 1/2
[eqn:contraction]
_T_i [ F^b( κ, _T_i+1) ] ⩽c ·h(κ) ·F^b( κ/2 , _T_i) _ {
_T_i ⩾}
+ J(κ) _{
_T_i < } .
* There exists a C > 1 and κ_1 > 0 such that uniformly over and i ∈, if κ⩾κ_1
[e:unif]
_T_i [ sup_T_i ⩽s< T_i+1 G^b(π_s) ]
⩽C ·F^b(κ, _T_i) .
* If κ⩽ 1/2, then
G(π_0) ⩾F(κ, _0) for all π_0∈ and _0 = (M (π_0), π_0).
In this result, the last two claims establish the relationship between the
skeleton Lyapunov functional and the full Lyapunov functional. We highlight in
particular that the lower bound in the last point can hold only at time t = 0, since at later times the skeleton median process t ↦ M_t may be
much larger than the actual energy median M (π_t).
The crucial statement in the theorem is the first one, which establishes a
version of the Lyapunov property.
The statement captures two effects: on one hand
taking expectations will reduce the value of the parameter κ in F: this is almost a super-Lyapunov property, although there is no gain
in the prefactor of the skeleton median. On the other hand,
when κ is equal to 1/2, then we recover the classical Lyapunov
property since h (1/2) =1.
We will need these properties, because we will first of all apply the uniform
bound (<ref>), which holds for some κ_1 potentially larger
than 1/2. Then we apply (<ref>) several times, to decrease
the value of κ from κ_1 to 1/2. At this point,
(<ref>) will guarantee the Lyapunov property.
Before we turn to the proof of this discretised result, we show that it is sufficient to
obtain Theorem <ref>.
Since the Markov process (π_t)_t ⩾ 0 is time homogeneous it
suffices to show the desired property for t = 0. For any c∈ (0,
1) consider δ, k_0, , J and κ_0 fixed as in
Theorem <ref>. Then if we define A_j =
[T_j, T_j+1) we have by Cauchy–Schwarz
[G(π_t_⋆)] = [ ∑_j ∈
G(π_t_⋆) _A_j (t_⋆) ]
⩽∑_j ∈ [ G^2
(π_t_⋆)_A_j (t_⋆)
]^1/2 √(𝐩_ t_⋆(j) ) ,
where we have defined
[eqn:def-p-star]
𝐩_t_⋆(j) = ( t_⋆ ∈A_j) ,
∀j ∈ .
Let us now fix κ_1 = 8 κ_0 + 1 and choose j_0∈
such that κ_1 / 2^j_0⩽ 1/2. Then we estimate uniformly over j ⩾
j_0, using the uniform bound of Theorem <ref>
with κ = κ_1
[ G^2 (π_ t_⋆) _A_j
(t_⋆) ] ⩽C [
F^2(κ_1, _T_j) ]
⩽C [ c^j_0 h^j_0 (κ_1) F^2(
1/2, _T_j-j_0) +J (κ_1)
∑_ł= 0^j_0-1 h^ł(κ_1)c^ł] ,
by applying (<ref>) for a total of j_0 times. Now we can
use the fact that h(1/2) = 1 to further bound via
(<ref>):
[ G^2 (π_ t_⋆) _A_j
(t_⋆) ] ⩽C h^j_0 (κ_1) c^j
F^2(1/2, _0) + J(κ_1) Ch^j_0 (κ_1) /1 -
c .
Note that so far we did not use any property of t_⋆. Now we will
choose t_⋆ sufficiently large, so that the factor c^j can compensate the constant Ch^j_0 ( κ_1).
In particular, by Lemma <ref> for any j_1 >
j_0 we can choose t_⋆ > 0 such that 𝐩_t_⋆ (j) = 0 for all j ⩽
j_1.
We therefore obtain the bound
[ G(π_t_⋆)] ⩽∑_j > j_1
√(p_t_⋆ (j)) ( (C c^j h^j_0(κ_1))^1/2
F( 1/2, _0) + √(J(κ_1) C h^j_0(κ_1)/1 - c) )
⩽c^j_1/2 √(C h^j_0(κ_1))/1 -
√(c) F(1/2, _0)
+C(t_⋆) √(J(κ_1) C h^j_0(κ_1) (1 - c)^-1) ,
where an application of Lemma <ref> guarantees
that for some C(t_⋆) > 0
∑_j > j_1 √(𝐩_t_⋆(j)) ⩽C(t_⋆) .
Now for any c∈ (0, 1) we can choose j_1 > j_0 such
that
c^j_1/2√(Ch^j_0(κ_1))/1 - √(c)⩽c .
Therefore, for any c∈
(0, 1) we have found a t_⋆ > 0 and a J^'(t_⋆) > 0 such that
[ G(π_ t_⋆)] ⩽c F(1/2,
_0) + J^' ⩽c G( π_0) + J^' ,
where we used that F(1/2, _0) ⩽G(π_0) by the
last property of Theorem <ref>. This completes the proof.
Under the assumptions of Theorem <ref>, for t_⋆ > 0 and j ∈, let 𝐩_
t_⋆(j) be as in (<ref>).
Then one has
* For every compact set K ⊆ [0, ∞ ) there exist constants
c(K), C(K) > 0 such that
𝐩_t_⋆(j) ⩽C (K) e^- c(K) j , ∀j ∈ , t_⋆ ∈K .
* We have 𝐩_t_⋆(j) = 0 for all t_⋆⩾ 3(j+1).
The second claim follows immediately from the fact that T_i+1 -
T_i⩽ 3, as observed in Lemma <ref>,
so we focus on the first claim. By definition we have that
T_i +1 - T_i⩾ V_i +1 - T_i = δ∧
(σ^_5/4(T_i +1) - T_i ) ,
where we recall the notation from (<ref>). Moreover, by
construction, since M_T_i⩾ M( π_T_i) by
Lemma <ref>, we have at time T_i that w^
(M_T_i; T_i, ·) ⩽ 1, so that the event _E (T_i) takes place with E =1.
In particular, it follows that by the second bound of Lemma <ref>,
there exists an > 0 such that
_T_i (T_i+1 - T_i > ) ⩾1/2 , ∀i ∈ .
Neither the value of nor the value 1/2 will play any particular
role in the following, but we do need that the bound is uniform over i. We write
𝐩_t_⋆(j) ⩽ ( T_j⩽ t_⋆)
and obtain an upper bound on the latter. To do so, we divide
the first j stopping times into batches of size N_⋆ =
⌊ t_⋆/ ⌋ +1. Suppose that j ⩾
N_⋆ (ł +1) for some ł∈. Then it holds that
_ł{ T_i+1 - T_i > , ∀ i ∈ [N_⋆ł , N_⋆ (ł +1)] }⊆{ T_j > t_⋆} ,
since we would have
T_j⩾ N_⋆> t_⋆ .
Now, the event _ł happens with positive probability that is uniform
over j but depends on the value of t_⋆:
_T_N_⋆ ł (_ł) ⩾2^- N_⋆ .
In particular, if we define
ł_j = max{ł∈ N_⋆ (ł +1) ⩽ j } ,
then we obtain the following bound:
(T_j⩽ t_⋆) ⩽ ( ∩_ł⩽ł_j^c_ł) ⩽ (1 - 2^- N_⋆)^ł_j .
Since there exists a constant c (t_⋆) such that ł_j⩾ c
(t_⋆) j for all j ∈ the claim is proven: it is clear that all
estimates hold locally uniformly over t_⋆.
§.§ Proof of the discretised theorem
The proof of Theorem <ref> will build on distinguishing
between the “good” event _i (because by Lemma <ref> we
have the certainty that M_i+1 = M_i -1) and the “bad” event _i (because it might be that M_i+1⩾ M_i). Since i will be
fixed throughout the proof, we will from now on refrain from writing it as an
index for these events.
To further clarify the structure of the proof, let us highlight three results,
appearing in later sections, which play a
fundamental role in the present proof. These are Lemma <ref>,
Lemma <ref> and Proposition <ref>.
Lemma <ref> is a technical result which estimates the jump
of the norm w_t_M_t + k_0^2 at time t = T_i+1: note
that at this time both the skeleton median M_t and the process w_t jump.
Lemma <ref> is used when proving that the
“bad” event happens with small probability. It is fundamental
in our analysis, since it allows to treat the case in which the projective
dynamic reaches a concentrated state, so that the event m_S_i+1 = has small probability. The lemma guarantees
that even on such occasion the skeleton median has
an arbitrarily high probability of diluting a consequence of the
high-frequency instability assumed in Theorem <ref>.
Finally, Proposition <ref> yields high-frequency
regularity estimates: these estimates split into three groups, depending on how
quickly the next stopping time kicks in. The one in
(<ref>) relates to stopping times with a deterministic lower
bound (conditional on some event): this will be used to treat the event V_i
+1 = T_i+ δ⩽σ_5/4^ (M_i, T_i). The uniform estimate in
(<ref>) relates instead to stopping times that can kick in very
quickly, so that dissipation may not have time to improve the regularity of the initial datum:
therefore the estimate involves in its upper bound the regularity of the process w_t at the starting time. The estimate
(<ref>) relates instead to stopping times that kick in when
we observe upwards jumps of the median. These stopping times take a macroscopic
time to kick in, so that dissipation has in the meantime smoothed out the
initial datum: therefore, the eventual estimates are uniform over the initial
datum.
Finally, let us introduce (and in part recall) the following shorthand
notation:
[e:shorthand]
M_i = M_T_i , ΔM_i = M_i+1 - M_i , m_i = m_S_i+1 .
Note that even though t ↦ M_t remains
constant over all of [T_i, T_i+1), this is not the case for m_t
since u_t still evolves.
We prove the result for b =1, since other values of b can be treated
identically.
Our analysis is divided into 5 steps: one for , two for the event ∩{_T_i⩾}, one for the event {_T_i
< }, and a final step where we address the uniform in time bounds on
G appearing in points 2 and 3 of
Theorem <ref>.
Recall that we have four parameters at our disposal: κ_0, k_0
from the definition of F, the threshold 𝐊 which defines the size of the “center set” outside of which we hope to see contraction of the Lyapunov
functional, and δ from the definition of the padding stopping time V_i+1.
Let now c>1 be the constant appearing in Lemma <ref> and
H and H̅ the functions appearing in (<ref>) and (<ref>) below.
We then first choose κ_0 large enough and then small enough so that
[eqn:for-kappa-0]
H(1/2) e^- κ_0+c/2 ⩽c ,
H̅(2cκ_0+c) ⩽c^2 .
We then use the fact that
it is possible to choose δ∈ (0, 1) small enough so that, almost surely,
[e:d]
_T_i (^V ) ⩽ , ^V = {V_i+1 < T_i + δ} .
This follows because ^V coincides
with the event {σ_5/4^ (T_i) < T_i + δ}, which has small probability via Lemma <ref> since at time T_i we have w^ (M_i; T_i, ·)⩽ 1 < 5/4 by the
third point of Lemma <ref>. Note also that
= ^V∪^S with
^S = { σ^ (V_i+1) >σ^ (V_i+1)}
∪{τ^ (S_i+1)> τ^ (S_i+1)} .
Similarly to above, a combination of Lemma <ref> and
Lemma <ref> guarantees that for large enough
_V_i+1 ( ^S) _{ M_i ⩾} ⩽_{ M_i ⩾} .
Indeed, the probability of the event {τ^ (S_i+1)> τ^
(S_i+1)} is arbitrarily small via
Lemma <ref>
and the bound in the second point of
Lemma <ref>, conditional on the event {σ^ (V_i+1) ⩽σ^ (V_i+1)}.
On the other hand, the probability of the event {σ^ (V_i+1) >σ^ (V_i+1)} is small
by Lemma <ref>, both provided that is
sufficiently large.
Combining both (<ref>) and (<ref>) we conclude that, for
> 0 satisfying (<ref>),
there exist an upper bound on δ and a lower bound on (depending on c, κ_0 and which at this stage are already fixed) which guarantee that
_T_i ( ) _{ M_i ⩾} ⩽_{ M_i ⩾} .
Step 1: . In this step we obtain an estimate on
_T_i [ F (κ, _T_i+1) _]. By
construction, the skeleton median decreases on the event : Δ M_i
= -1 by Lemma <ref>. Therefore by the second bound in
Lemma <ref>
_T_i [ F (κ, _T_i+1) _
] ⩽e^- κ_0 + c κ_T_i [ e^ c κ
w_T_i+1- ^2_M_i+ k_0_
] e^κ_0 M_i .
Observe that the quantity under the expectation on the right-hand side depends only on the left limit
of w at time T_i+1. Now, the aim of this step will be to show that there exists an increasing function
κ↦ H(κ) > 1 such that
[e:aim-first]
_T_i [ e^ c κ
w_T_i+1- ^2_M_i+ k_0_
]⩽H (κ) e^ 4 c e^- 2 δk_0 κ w_T_i _M_i+ k_0^2 .
To obtain this bound we first condition on time V_i+1 and use the
estimate (<ref>) from Proposition <ref>:
_V_i+1 [ e^ c κ w_T_i+1- ^2_M_i+
k_0_ ] ⩽_V_i+1 [ e^ c κ w_T_i+1- ^2_M_i+
k_0 ]_^V
⩽Ĉ(cκ)e^ 2 c κ w_V_i+1- ^2_M_i+
k_0 _^V ,
where we have set ^V = { V_i+1 = T_i + δ}.
We then take the expectation at time T_i and use (<ref>) to deduce the desired bound (<ref>) with
[e:defH]
H(κ) = Ĉ(cκ) C(2cκ) .
We then choose k_0 sufficiently large as a function of δ such that
[eqn:for-k0]
4 c e^- 2 δk_0 ⩽1/2 .
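Let us note in passing that, reading the last display as stated, this is an explicit constraint on k_0:
4 c e^- 2 δk_0 ⩽1/2 ⟺ e^2 δk_0 ⩾8 c ⟺ k_0 ⩾log (8c)/(2 δ) ,
so that any integer k_0 above this value will do.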
Let us observe that (<ref>) is the only point where we require the parameter k_0.
With this choice of parameters and recalling that κ_0 was chosen in such a way that
(<ref>) holds, we conclude from (<ref>) that provided that h is
chosen in such a way that
[e:h-1]
h(κ) ⩾e^cκ-c/2H(κ)/ H(1/2) ,
one has the bound
[eqn:final-2]
_T_i [ F (κ, _T_i+1) _
] ⩽c ·h (κ) ·e^κ_0 M_i + κ/2 w_T_i
_M_i+ k_0^2
⩽c ·h(κ) ·F
(κ/2 , _T_i) ,
which is an estimate of the required order. In addition, the map h (κ) defined above satisfies the requirements of the theorem.
Step 2: ∩{ M_i⩾}_. Once
again, let us start by estimating the
conditional expectation at time V_i+1. We observe that although is a “bad” event, we have no guarantee that
Δ M_i⩾ 0.
Hence we combine both estimates of Lemma <ref> (the first one with L
=1, in order to keep the estimate uniform over k_0), to find that for
some constant c > 1
_V_i+1 [ F (κ, _T_i+1)
__
] e^- κ_0 M_i
⩽_V_i+1 [ exp( c ( κ_0 + κ) (
w_T_i+1-^2_M_i+ 1 + 1 ) ) _
_
] ,
where we have in addition used that w _M_i + k_0⩽
w _M_i + 1.
Now, we know from (<ref>) and
(<ref>) that _T_i ( _)
⩽.
Therefore, via Cauchy–Schwarz and the mentioned bounds we obtain that
[eqn:niceBoundB]
_T_i [ F (κ, _T_i+1)
__ ]
⩽√() _T_i[ e^ 2 c (κ_0 + κ)
w_T_i+1- ^2_M_i+ 1 _
]^1/2 e^c(κ_0 + κ) e^κ_0 M_i .
We now note that ⊂∪ where
= {σ^_5/4(T_i) < S_i+1}
, = {m_S_i+1 =} ∩{
τ^ (S_i+1) > τ^ (S_i+1) } .
Given two random variables a and b, write (a)_(b) for the random variable equal to a on and b otherwise.
It then follows immediately from Definition <ref> that, if we set β = (5/4)_(2), L = (M_i)_(M_i - 2),
as well as t_0 = (T_i)_(S_i+1), then the stopping time[Although t_0 isn't a stopping time, one verifies that t_1 is.]
t_1 = σ^_β(L,t_0) satisfies t_1 ⩽ T_i+1 on so that, for any κ̅> 0,
(<ref>) yields
[e:boundt1]
_t_1 [ e^κ̅
w_T_i+1- ^2_M_i+ 1 _ ]
⩽Ĉ (κ̅) e^2κ̅
w_t_1 ^2_M_i+ 1
_{t_1 ⩽T_i+1} .
We can then apply _t_0 to both sides: observe that t_0 is not a stopping time, but for any two stopping times τ, σ
we define the expectation _τ_σ[ X ] = _τ [ X _] +
_σ [X _^c], which still satisfies the
“tower property” _T_i = _T_i_t_0. In particular, one has the identity
_T_i X = _T_i _t_1 X = _T_i _t_0 _t_1 X ,
so that, by (<ref>), one has
_T_i [ e^κ̅
w_T_i+1- ^2_M_i+ 1 _ ]
⩽Ĉ (κ̅) _T_i _t_0[ e^2κ̅
w_t_1 ^2_M_i+ 1
_{t_1 ⩽T_i+1}] .
Hence, after applying _t_0 we can use
the bound (<ref>) since,
by the second step of Lemma <ref> (and E as appearing there) and the fact that
M_i ≥ M(u_T_i), we have w^ (L; t_0, ·) ⩽ (1)_(E) < (5/4)_(2). We conclude that there exists an increasing function H̅ such
that
[e:defbarH]
_T_i [ e^κ̅
w_T_i+1- ^2_M_i+ 1 _ ] ⩽e^-κ̅H̅(κ̅) .
Inserting this back into (<ref>), we see that
_T_i [ F (κ, _T_i+1)
__ ]
⩽√(H̅(2c(κ_0+κ))) F(0, _T_i) .
Since we assumed that δ is chosen sufficiently small so that (<ref>) holds,
we conclude that, provided that we choose h large enough so that
[e:h-2]
h(κ)^2 ⩾H̅(2c(κ_0+κ))/H̅(2cκ_0+c) ,
we have the bound
[e:f3]
_T_i [ F (κ, _T_i+1)
__ ] ⩽c ·h (κ)
·F (0, _T_i) ⩽c ·h (κ) ·F ( κ/2 , _T_i) .
Step 3: ∩{
w_T_i_M_i+k_0⩾}_. Here we follow
a slightly different estimate than in the previous step, since we are not
allowed to use (<ref>), as M_i is not necessarily large.
Therefore we are not able to obtain a small factor because of an event that
happens with small probability. Instead, we will gain a small factor, provided
that is sufficiently large.
We start once more by applying
Lemma <ref> (using both bounds, the first one with L =1) and the fact that w _M_i + k_0⩽ w _M_i +1,
so that on the event we have:
F (κ, _T_i+1) e^- κ_0 M_i⩽exp( c (κ_0 + κ)
( w_T_i+1-^2_M_i+1 + 1) ) .
Therefore, by (<ref>), we obtain the bound
_T_i [ F (κ, _T_i+1) _
_ ] e^- κ_0 M_i
⩽e^c ( κ_0 + κ) _T_i [
e^ c(κ_0 + κ) w_T_i+1- _M_i+1^2
_ _ ]
⩽H̅(c(κ_0 + κ)) _{
w_T_i _M_i+k_0 ⩾} .
Since κ⩾ 1/2, we can conclude that
_T_i [ F (κ, _T_i+1) _
_ ]
⩽H̅( c(κ_0+ κ)) e^- 1/4 ^2 F
(κ/2, _T_i) _{ w_T_i _M_i+k_0 ⩾}
⩽c ·h (κ) ·F (κ,
_T_i) ,
provided that
[e:h-3]
h(κ) ⩾H̅(c(κ_0 + κ))/ H̅(c (κ_0
+ 1/2)) ,
and by choosing a (, κ_0, c)
> 0 such that in addition to (<ref>) also the following
holds:
H̅(c (κ_0 +1/2)) e^- 1/4 ^2 ⩽c < 1 .
As in the previous steps, we have therefore obtained a bound of the required order.
Step 4: The case _T_i <. Also this
estimate follows along the lines of the previous steps: indeed the proof is even simpler since we
are not interested in obtaining the contraction constant c. We start
as usual by applying Lemma <ref>:
_T_i [ F(κ, _T_i+1)] e^-
κ_0 M_i _{ _T_i <}
⩽e^c (κ_0 + κ)_T_i [ exp( c
(κ_0 + κ)
w_T_i+1-^2_M_i+ 1 ) ]_{ _T_i
<} .
Now, via the uniform estimate (<ref>) in
Proposition <ref> we obtain
_T_i [ exp( c (κ_0 + κ)
w_T_i+1-^2_M_i+ 1 ) ] ⩽Ĉ ( c
(κ_0 + κ)) e^2 c (κ_0 + κ) w_T_i-
^2_M_i+1 ,
which yields as desired that for some J (κ_0, κ, ) ∈
(0, ∞)
_T_i[ F(κ, _T_i+1)] _{_T_i <}⩽
J (κ) _{_T_i <} ,
as required. This concludes the proof of the contraction estimate
(<ref>), since we can combine (<ref>),
(<ref>), (<ref>) and (<ref>) to obtain
_T_i[ (κ, _T_i +1) ] ⩽
4 c·h (κ) (κ /2, _T_i) _{_T_i⩾} + J(κ) _{_T_i < } ,
for any h satisfying (<ref>), (<ref>) and (<ref>),
and such that h (1/2) =1.
This is the desired bound, since c∈ (0, 1) can be chosen
arbitrarily small.
Step 5: Uniform estimates. It now remains to obtain
(<ref>) and the bound at time t=0. The uniform estimate
(<ref>) follows along similar
lines as the estimates above, using the uniform bound (<ref>)
from Proposition <ref>.
Our first objective is therefore to obtain some deterministic estimates which reduce
the problem to an exponential bound of the kind treated in
Proposition <ref>. For s ∈ [T_i, T_i+1) we have that
G (π_s) = F ( κ, _T_i) B(κ, _T_i,
π_s) ,
where
B(κ, _T_i, π_s) = exp( κ_0 (M (π_s) -
M_i) + π_s ^2_M (π_s) + k_0 - κ
w_T_i ^2_M_i + k_0 ) .
Now since π_s∈ S we can use Lemma <ref> to bound
π_s _M (π_s) + k_0^2 ⩽
Π^_M_i+ k_0 π_s _M_i + k_0^2 + 4
(ν_min^- 1/
2 +1)^2 ( M(π_s)
- M_i)_-
⩽ Π^_M_i π_s ^2
w_s _M_i+ k_0^2 + 4(ν_min^- 1/
2 +1)^2( M(π_s)
- M_i)_-
⩽
w_s _M_i + k_0^2 + 4(ν_min^- 1/
2 +1)^2( M(π_s) - M_i)_- .
In particular, we can use this bound to estimate, with c_1 =4 (ν_min^- 1/
2 +1)^2:
B(κ, _T_i, π_s ) ⩽exp(κ_0 (M (π_s) -
M_i)_+ - (κ_0 - c_1) (M (π_s) - M_i)_-
+ w_s ^2_M_i + k_0 - κ w_T_i
^2_M_i + k_0 ) .
Further, via Lemma <ref> and once more
via Lemma <ref>:
(M(π_s) - M_i)_+ ⩽(M (Π_M_i^ π_s) -
M_i)_+
⩽4 ν_max^1/2 π_s _M_i^2
⩽c_2 ( π_s _M_i + k_0 ^2 + k_0 )
⩽c_2 ( w_s _M_i + k_0 ^2 + k_0 ) .
Hence overall, since we can assume that κ_0⩾ c_1, we obtain that
B(κ, _T_i, π_s)⩽e^ κ_0
k_0 c_2 exp( c_2
w_s ^2_M_i + k_0 - κ w_T_i
^2_M_i + k_0 ) .
This estimate allows us to conclude, since we can apply (<ref>)
from Proposition <ref> three times to obtain that for
κ⩾κ_1 = 2^3c_2 and A_i = [T_i, T_i+1) one has
_T_i [ sup_s ∈A_i B(κ, _T_i,
π_s) ] ⩽C(κ_0, k_0, δ) ,
which proves the desired bound.
To conclude, we turn to the estimate at time t =0. Here we use that M_0 = M( π_0) to find that
F(κ, _0) = G(π_0) D( κ,
π_0) ,
with
D(κ, π_0) = exp( κ w_0 ^2_M
(π_0) + k_0 - π_0 ^2_M(π_0)+ k_0 ) .
By the definition of M (π_0) we know that 2^-1⩽Π^_M(π_0)π_0^2, from which we deduce
π_0 ^2_M(π_0)+ k_0 = Π^_M (π_0) π_0 ^2
w_0 ^2_M ( π_0) + k_0 ⩾2^-1 w_0 ^2_M
(π_0) + k_0 ,
implying that D( κ, π_0) ⩽ 1 if κ⩽ 1/2 as
required.
§.§ Regularity to median estimates
In this subsection we state elementary lemmas that relate jumps of the
energy median to high frequency regularity and vice versa.
Consider two values L^+, L^-∈ and write Δ L =
L^+ - L^-. Then for any φ∈ S and γ⩾ 1/2
φ_γ, L^+^2 ⩽
φ_γ, L^-^2 if ΔL ⩾0 ,
2^2 γ(ν_min^- γ/
+1 )(ΔL)_- + 2^2 γ-1 φ_γ,
L^-^2 if ΔL < 0 ,
with (Δ L)_- the negative part of Δ L.
The restriction γ⩾ 1/2 is not necessary: for γ < 1/2
the parameters of the estimate change only slightly, but such an estimate is not
required.
Let us start with the case Δ L ⩾ 0. From the definition of the norms
we have for any φ∈ S
φ _γ, L^+^2 - φ_γ, L^-^2
= ∑_α=1^m ( ∑_| k | > L^+_α (| k | - L^+_α + 1
)^2 γ|
φ̂^α_k|^2 - ∑_| k | > L^-_α ( | k |
-L^-_α + 1)^2 γ |
φ̂_k^α |^2 ) ⩽0 .
On the other hand if Δ L < 0, since φ =1
φ _L^+^2 - 2^2 γ-1 φ_L^-^2
= ∑_α=1^m ( ∑_|
k | > L^+_α (| k | - L^+_α +1 )^2 γ|
φ̂_k^α |^2 -
2^2 γ-1 ∑_| k | >
L^-_α ( | k | -L^-_α +1 )^2 γ |
φ̂_k^α |^2)
⩽2^2 γ-1ν_min^- γ/
( ΔL )_-^2 γ + 2^2 γ-1 ∑_α=1^m ∑_L^+_α < | k | ⩽L^-_α (| k
| - L^+_α+1)^2 γ | φ̂_k^α |^2
⩽2^2 γ [ν_min^- γ/
( ΔL )_- +1]
⩽2^2 γ(ν_min^- γ/
+1 ) ( ΔL )_- ,
where we used that (Δ L)_-⩾ 1 together with
(a - L^+_α)^2 γ = (a - L^-_α + (ΔL_α)_-)^2 γ ⩽2^2 γ-1 [ (a - L^-_α)^2 γ + (ΔL_α)_-^2 γ ] ,
for all a ⩾ L^-.
This concludes the proof.
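For completeness, the convexity bound invoked in the last display is elementary: since x ↦x^2 γ is convex for 2 γ⩾1, we have ((a+b)/2)^2 γ⩽(a^2 γ + b^2 γ)/2, that is
(a+b)^2 γ ⩽2^2 γ-1 ( a^2 γ + b^2 γ ) , ∀ a, b ⩾0 .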
An analogous estimate provides a bound on the size of the energy median through high
frequency regularity.
Consider any L ∈. Then for every φ∈ S satisfying Π^_Lφ > 0 the following
estimate holds:
(M ( Π^_L φ) - L)^1/2 ⩽2
ν_max^1/4 φ_L .
Let us write M = M ( Π^_Lφ) for short and note that M > L, so that M-1 ⩾ L, since we are working with integers.
Therefore, from the definition of the
energy median
φ_L ⩾( ∑_α=1^m∑_| k
| > M_α -1 ( | k | - L_α+1) |
φ̂_k^α |^2 )^1/2
⩾( ∑_α=1^m (M_α - L_α)
Π^_M-1 φ^α ^2 )^1/2 ⩾1/2 ν_max^1/4 ( M - L)^1/2 .
Hence, the result is proven.
The next result considers the jump of the process w_t =
Π_M_t^ u_t / Π^_M_t u_t at the jump times T_i: it is a slight adaptation of Lemma <ref>.
Under the assumptions of Theorem <ref> and with the stopping
times and skeleton median from Definition <ref> the following holds.
For every i ∈ and Δ M_i = M_i+1 - M_i =
M_T_i+1 - M_T_i we have the following estimates for any k_0∈ and γ⩾ 1/2
w_T_i+1 _γ, M_i+1 + k_0^2 ⩽
w_T_i+1- _γ, M_i + k_0^2 if ΔM_i ⩾0
,
5 ·2^2 γ [ ν_min^- γ/ +1
+ 2^ - 1 w_T_i+1- _γ, M_i+ k_0^2 ]
if ΔM_i < 0 .
Let us start with the first estimate, in the case Δ M_i⩾ 0. In this case we find that π_T_i+1_γ, M_i+1+ k_0⩽π_T_i+1_γ, M_i+ k_0, as well as Π^_M_iπ_T_i+1⩽Π^_M_i+1π_T_i+1. Hence, in
particular, we conclude that
w_T_i+1 _γ, M_i+1+ k_0 = π_T_i+1
_γ, M_i+1 + k_0/ Π^_M_i+1 π_T_i+1 ⩽ π_T_i+1
_γ, M_i+ k_0/ Π^_M_i π_T_i+1 =
w_T_i+1 - _γ, M_i+ k_0 .
On the other hand, in the case Δ M_i < 0, which by
Definition <ref> of the skeleton median process (M_t)_t
⩾ 0 implies that the jump is exactly of size one, i.e. Δ
M_i = -1, we find by
Lemma <ref> and (<ref>)
w_T_i+1 _γ, M_i+1+ k_0^2 = π_T_i+1
_γ, M_i+1+ k_0^2 / Π^_M_i+1 π_T_i+1
^2
⩽2^2 γ (ν_min^- γ/
+1 )| ΔM_i |/ Π^_M_i+1 π_T_i+1
^2 + 2^2 γ-1 π_T_i+1 _γ, M_i+
k_0^2 / Π_M_i+1^ π_T_i+1 ^2
⩽2^2 γ (ν_min^- γ/
+1 ) ·5 + 2^2 γ-1 π_T_i+1 _γ, M_i+
k_0^2 / Π_M_i+1^ π_T_i+1 ^2 .
We can also estimate via (<ref>)
Π_M_i^ π_T_i+1 ^2/ Π_M_i+1^
π_T_i+1 ^2 ⩽5 ,
so that overall
w_T_i+1 _γ, M_i+1+ k_0^2 ⩽2^2 γ (ν_min^- γ/
+1 )
·5 + 2^2 γ-1 ·5 π_T_i+1 _γ,
M_i+ k_0^2 / Π^_M_i
π_T_i+1 ^2 ,
which completes the proof of the lemma.
Now, a combination of the previous estimates delivers the following lemma.
Under the assumptions of Theorem <ref> and with the stopping
times and skeleton median from Definition <ref>, there exists a constant c >1 such that, uniformly over all L, i ∈ and κ, κ_0 > 0:
* If Δ M_i⩾ 0, then:
F(κ, _T_i+1) e^- κ_0 M_i
⩽exp( c κ_0
( w_T_i+1 - _M_i+ L^2 +
L ) + κ w_T_i+1- _M_i+
k_0^2 ) .
* If Δ M_i < 0, then:
F(κ, _T_i+1) e^- κ_0 M_i ⩽exp( -
κ_0 + c κ( w_T_i+1 -
_M_i+ k_0^2+1) ) .
We start by showing that the following estimate holds uniformly over L, i ∈:
[e:extra1]
ΔM_i ⩽c w_T_i+1- _M_i+ L^2 + c
L .
This bound is only non-trivial when Δ M_i > 0 so we assume that this is the case.
Recall that w_T_i+1- =
Π^_M_iπ_T_i+1 / Π^_M_iπ_T_i+1 and that,
because of the presence of the projection Π^_M_i, we have
by definition that M_i+1⩽ M( w_T_i+1- ). Hence
applying first Lemma <ref> (with L = M_i and φ = π_T_i+1) and then
Lemma <ref> we obtain
ΔM_i = ( √(ΔM_i ) )^2
⩽c(ν_max) π_T_i+1_M_i^2 ⩽c(ν_max) w_T_i+1- _M_i^2
⩽c(ν_max) ( w_T_i+1- _M_i+ L^2 + (
w_T_i+1- _M_i^2 - w_T_i+1- _M_i+
L^2) )
⩽c(ν_max, ν_min) ( w_T_i+1- _M_i+ L^2 + L
w_T_i+1- ^2 )
⩽c(ν_max, ν_min) ( w_T_i+1- _M_i+ L^2 +
L) ,
where in the last line we have made use of the bound
(<ref>) in Lemma <ref> since
w_T_i+1- = lim_t ↑T_i +1 Π_M_t^
π_t / Π_M_t^ π_t ⩽2 ,
thus proving (<ref>). Note that the
values of the constants c in (<ref>) change from line to line.
At this point, the two desired estimates follow from Lemma <ref>.
If Δ M_i⩾ 0, then we have w_T_i+1_M_i+1+ k_0^2≤ w_T_i+1-_M_i+ k_0^2, so that (<ref>)
implies as desired
F(κ, _T_i+1) e^- κ_0 M_i
⩽exp( κ_0 c
w_T_i+1 -_M_i+ L^2 + κ_0 c L + κ w_T_i+1- _M_i + k_0^2
) .
If instead Δ M_i < 0 then Δ M_i = -1 by definition and, by Lemma <ref>,
we have
w_T_i+1 _M_i+1+k_0^2 ⩽c(ν_min)(1 + w_T_i+1-
_M_i+ k_0^2 ) ,
which yields the desired bound.
§.§ Exit time bounds
In this subsection we collect estimates on certain exit times.
Given any stopping time t_0 and any _t_0-adapted random
variables E ∈ (0, ∞) and L ∈, let us define the following event:
[e:sets-1n]
_E (L;t_0) = { w^ (L; t_0, ·)
⩽E } .
Our first result is a negative exponential moment bound on
σ_β^ (L,t_0) - t_0: recall the definition in
(<ref>).
Under the assumptions of Theorem <ref>,
fix any κ, E, β > 0 such that E ∈
(0, β) and let t_0 be any stopping
time. Then there exist a deterministic
function (ζ, κ, β, δ) ↦ C(ζ, κ, β,
δ) ∈ (0, ∞) such that uniformly over all parameters
_t_0 [ exp(κ(σ_β^(L, t_0) - t_0)^- ζ
) __E(L;t_0)
] < C(ζ, κ, β, E ) __E(L;t_0) .
Let us highlight that
the estimate is not uniform over E ↑β and as a matter of fact fails at E ⩾β.
This result follows from Proposition <ref>, as a consequence of
Doob's submartingale inequality.
Indeed, consider R and as in Proposition <ref>, so that we can
define for all t ∈ [t_0, σ_β^ (L, t_0)]
x_ t = E+ R(β) ·(t - t_0) + _t .
For simplicity, we will consider defined for all t ⩾
t_0 by setting _t = _t ∧σ_β^ (L,
t_0). By comparison, we then have that
σ_β^(L, t_0) ⩾σ^_β(L, t_0), where the latter is defined by
σ_β^(L, t_0) = inf{ t ⩾t_0 x_t ⩾β} .
Now fix any t_⋆(β, β - E) > 0 satisfying R t_⋆⩽1/2 ( β - E).
Then for s ∈ [0, t_⋆] we find
σ_β^(L, t_0) ⩽t_0 + s ⇒ sup_r ∈[0, s] _t_0 + r ⩾1/2 ( β- E) =: μ ,
so that overall for s ∈ [0, t_⋆]
_t_0 ( σ_β^(L, t_0) ⩽s )
__E(L; t_0 )
⩽_t_0 ( sup_r ∈[0, s] _t_0 + r ⩾μ)__E(L;t_0) .
We can now apply
Doob's submartingale inequality, since by
Proposition <ref> the quadratic variation of satisfies ⟨⟩_t⩽ R t for all t ∈ [t_0 ,
σ_β^ (L; t_0)], to obtain for any λ > 0
_t_0 ( sup_r ∈[0, s] _t_0 + r ⩾μ) __E(L;t_0) = _t_0 ( sup_r ∈[0, s]
exp( λ_t_0 + r ) ⩾e^λμ
) __E(L;t_0)
⩽exp( -
λμ+ λ^2 s R/2 )
__E(L;t_0) ,
so that choosing λ = μ/R s delivers eventually,
for s ∈ [0, t_⋆], the bound
_t_0 ( sup_r ∈[0, s] _t_0 + r ⩾μ) __E(L;t_0) ⩽exp( - μ^2/2R
s^-1) __E(L;t_0) ,
from which the required moment bound follows.
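Let us point out that the choice λ = μ/R s made above is simply the minimiser of the exponent - λμ + λ^2 s R/2 appearing in the previous bound:
inf_λ> 0 ( - λμ+ λ^2 s R/2 ) = - μ^2/(2Rs) , attained at λ= μ/(Rs) .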
Also the next result follows along classical lines from
Proposition <ref>. It guarantees a drift of the median to low
frequencies, in that the probability of the relative energy process about the
median becoming small is much higher than
that of increasing, provided the energy median is sufficiently large.
For this purpose, for any > 0, E ∈ (0, 2), define _E^(t_0) =
_E( M_t_0 -2;t_0 ) ∩{ M_t_0 > }. Recall also the
stopping times defined in (<ref>).
Under the assumptions of Theorem <ref>, fix any E ∈ (0, 2) and let t_0 be any stopping
time. Then for every ∈ (0, 1) there exists a (, E)
such that
_t_0 ( τ^(t_0) < τ^(t_0) )⩾1 - ,
on the event ^_E(t_0).
Let and R be as in Proposition <ref>, and define for
t ⩾ t_0 (by setting _t = _t ∧τ^
(t_0) to define the martingale also for times larger than τ^):
x_t = E + R(t - t_0) + _t .
Then the stopping time τ^ (t_0) given by
τ^(t_0) = inf{ t ⩾t_0 x_t ⩾2 } ,
satisfies by Doob's submartingale inequality that for some constant C(ζ,
E):
_t_0 [ exp( (τ^(t_0) -
t_0)^- ζ ) ] 1_^_E(t_0) ⩽C(ζ, E) .
Therefore, via the previous Lemma <ref>, for any ∈
(0, 1) we can find a δ
(, E) ∈ (0, 1) such that on
the event ^_E (t_0)
_t_0 ( τ^(t_0) ⩽t_0 + δ) ⩽ .
Now, on the event {τ^ (t_0) > t_0 + δ , τ^
(t_0) > t_0 + δ} we find by
Proposition <ref> that if τ^(t_0) > t_0 + δ, then
w (M_t_0-2; t_0 + δ, ·) ^2 ⩽-
ν_min δΔ_M_t_0-2 + x_δ ⩽- ν_min δΔ_M_t_0-2 + 2
⩽1/4 ,
where the last inequality holds provided is
chosen sufficiently large depending on δ, and thus on and E. This proves that
_t_0 (τ^ (t_0) > t_0 + δ , τ^
(t_0) > t_0 + δ, τ^ (t_0) > t_0 + δ )= 0
on ^_E (t_0), so that
_t_0 ( τ^(t_0) < τ^(t_0))_
^_E (t_0) ⩾(1 -
)_ ^_E (t_0) ,
as required.
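Let us also record, reading the last display as stated, the threshold hidden in the final step of the proof:
- ν_min δΔ_M_t_0-2 + 2 ⩽1/4 ⟺ Δ_M_t_0-2 ⩾7/(4 ν_min δ) ,
which is the quantitative content of the requirement that the median be sufficiently large depending on δ.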
§ HIGH-FREQUENCY STOCHASTIC INSTABILITY
This section collects estimates in the study of energy median dynamics in the
concentrated setting. It is a fundamental
tool of our analysis, as we make use of the non-degeneracy of the
noise to obtain the instability of high-frequency states. The aim of this section
is a complete proof of Proposition <ref> in the different
settings that appear under Assumption <ref>.
Let us start by considering the solution u
to (<ref>) with initial condition u_0 such
that u_0 =1, up to normalising it.
By (<ref>), showing high-frequency stochastic instability requires us to
consider initial data u_0 satisfying
(Π_M-1^ + Π_M-2^) u_0 ⩾1/4 Π^_M-2 u_0 ,
Π^_M u_0 ⩽2 Π^_M u_0 .
It follows in particular that
(Π_M-1^ + Π_M-2^) u_0 ^2 ⩾1/16 Π^_M-2 u_0
^2 ,
(Π_M-1^ + Π_M-2^) u_0 ^2 ⩾1/2^4 ·(1 +
2^4) Π_M^ u_0 ^2 ,
which eventually implies, since u_0=1, that there exists β_0∈{ 1, …, m } such that
[e:beta0-1]
(Π_M-1^ + Π_M-2^) u_0^β_0 ⩾η_0 ,
η_0 = 1/√(m) √(1 + 2^5 + 2^8) .
It follows from Assumption <ref> and in particular (<ref>)
that there exists α_0∈{ 1, …, m } with ν^β_0⩾ν^α_0 and a constant c > 0 such that
∑_| k | ⩽ (M-3)_α_0∑_l ∈^dΓ^α_0_β_0, l
|φ̂^β_0 , k+ l|^2⩾
c (Π_M-1^ + Π_M-2^) φ^β_0^2 .
Indeed, since ν^β_0⩾ν^α_0 we have that L_β_0⩽
L_α_0 for all L ∈. Therefore, to deduce (<ref>)
it suffices to show that there exists a constant c > 0 such that
∑_| k | ⩽ (M-3)_β_0∑_l ∈^dΓ^α_0_β_0, l
|φ̂^β_0 , k+ l|^2⩾
c ∑_| k | ⩽ M_β_0 | φ̂^β_0, k
|^2 ,
which is the case with c = min_l ∈Γ^α_0_β_0, l > 0, where is the set in
Assumption <ref>, since for every | k | ⩽
M_β_0 there exists an l ∈ such that | k - l | ⩽
(M-3)_β_0: note that B ( M_β_0 ) ⊆ B (
(M-3)_β_0 + 3 ν_min^-1/2 ), so that indeed the claim follows from the assumption.
Given these preliminaries, we now write the solution u to (<ref>)
in its mild formulation in
Fourier coordinates. Following the conventions of
Remark <ref>, this yields
û_t^k = e^- νζ_k tû_0^k + ∫_0^t
e^- νζ_k (t -s)∑_lû_s^k- l·
B^l_s .
Now consider the time horizon t_⋆^M, (1) and the threshold η:
t_⋆^M, (1) = 1/2Δ_M-3 , η <
η_0 ,
where η is arbitrary (as long as it satisfies the constraint with a
strict inequality). Then we introduce the following stopping times:
τ_low = inf{ t ⩾ 0 e^ζ_Mt (Π_M-1^ + Π_M-2^)
u_t^β_0⩽η} ,
τ_low = τ_low if τ_low < t_⋆^M, (1) ,
∞ else.
This definition may appear a bit odd, so let us explain it. Our aim will be to
prove that for short times, or at least until the system has diluted,
we have a lower bound on the energy in a shell of width two about level M-1 and
in coordinate β_0,
which is enough to shift the energy to level M-3.
As we will see, we eventually show that the system has a (very small) chance
of diluting only after a time of order
t_⋆^M, (2) = log(λΔ_M-3)/2Δ_M-3 ≫t_⋆^M, (1) .
The parameter λ > 0 can be arbitrarily large and is required to
close the estimates below.
The eventual time scale t_⋆^M, (2) hides two different effects.
One is a shift of energy, and another is an increase due to the different rates
of dissipation. Indeed, we will provide a lower bound on the amount of energy
that is shifted of the following order (omitting noise terms, which are the
technically the most challenging):
e^2 ζ_M-2 t Π^_M-3 u_t ^2 ≳e^ 2Δ_M-2
t ∫_0^t
e^- 2Δ_M-2 s (Π_M-1^ + Π_M-2^) u_s^β_0 ^2 s .
Now we see that the contribution of the integral comes only over the time
interval [0, t_⋆^M, (1)] and is of order 1/ Δ_M-2. This
then becomes of order λ by time t_⋆^M, (2), when
multiplied with the factor e^Δ_M t, which accounts for the
different rates of dissipation. Hence t^M,
(1)_⋆ is the time-scale up to which we require a lower bound on the
energy in (Π_M-1^ + Π_M-2^)u^β_0_s. At the same
time, we cannot expect a lower bound of this kind for
larger times scales, because modes in a shell of width two about level M-1 start dissipating at substantially different rates after that time. Hence the
definition of τ_low.
Next we consider the following stopping time to control central and high
frequencies
τ_up = inf{ t ⩾0 e^ ζ_M-2t
Π^_M-2 u_t ⩾2 } .
Similarly, we introduce a stopping time for the low frequency component, which
kicks in roughly when the system has diluted:
σ= inf{ t ⩾0 e^ζ_M-2 t
Π^_M-3 u_t ⩾8 } .
Finally, define
τ= σ∧τ_up ∧τ_low , v̂^k_t =
û^k_t if t ⩽τ ,
e^- ζ_M-2 (t - τ)
û^k_τ
if t > τ .
With this definition we have that for all k ∈ and t ⩾ 0
v_t⩽√(2^6+ 2^2) e^- ζ_M-2t ,
as well as
(Π_M-1^ + Π_M-2^) v_t^β_0⩾η
e^- ζ_M-2 t , ∀ t ∈ [0, t_⋆^M, (1)] .
Consider then the system of equations
ẑ_t^k = e^- νζ_kt û_0 ^k + ∫_0^t
e^- νζ_k (t-s) ∑_l v̂^k - l_s ·B^l_s
,
so that by construction
ẑ_t^k = û_t^k , ∀k ∈^d , t
⩽τ .
Now our aim is to prove by contradiction that τ < t_⋆^M, (2)
with a probability bounded from below uniformly over the initial
data and with an explicit dependence on M (which we will then work to
improve by iterating the argument). To
do so, we start by obtaining a lower bound on the probability that Π_M-2^
z_t_⋆^M, (2) is suitably large (in particular, it suffices to
show that Π_M-3^
z_t_⋆^M, (2) is large). For a single mode we have by Itô's
formula, applied to the α-th component of z_t:
| ẑ^α, k_t |^2 = -2 ν_αζ_k |
ẑ^α, k_t |^2 t +
∑_β =1^m∑_l ∈^dΓ^α_β, l
|v̂^β, k + l_t|^2 t
+ 2 Re( ∑_lv̂^α, k_t( v̂^- k - l_t·
B^l_t)^α) ,
which, by summing up over all modes k such that | k | ⩽
(M-3)_α and over α∈{ 1, …,m }, and by using the bound
(<ref>), leads to
Π^_M-3 z_t^2⩾ - 2 ζ_M-3Π^_M-3 z_t^2 t + c (Π_M-1^ +
Π_M-2^)v_t^β_0^2 t
+ ∑_α =1^m∑_| k | ⩽ (M - 3)_α 2 Re( ∑_lv̂^α, k_t( v̂^- k - l_t·
B^l_t)^α) .
Therefore, we obtain the following lower bound:
Π^_M-3 z_t^2 ⩾ c ∫_0^t e^-2
ζ_M-3 (t-s) (Π_M-1^ +
Π_M-2^) v_s^β_0^2 s + e^- 2 ζ_M-3 t_t
= e^-2 ζ_M-2tX_t + e^- 2 ζ_M-2t Y_t ,
where
X_t = e^2 Δ_M-3 t c∫_0^t
e^2 ζ_M-3 sΠ^_M-1 v_s^β_0^2 s ,
Y_t = e^2 Δ_M-3 t_t ,
and where _t is the martingale
_t = 2 Re( ∫_0^t e^2 ζ_M-3s∑_α =1^m∑_| k | ⩽ (M - 3)_α∑_lv̂^α, k_s( v̂^- k - l_s·
B^l_s)^α) .
At this point we would like to prove that by time t_⋆^M, (2), the
drift X_t has become very large, while the martingale term is not relevant.
For this purpose, we require an upper bound on Y_t and a lower
bound on X_t. For the martingale _t we have the following
estimate on the quadratic variation:
⟨⟩_t = ∑_α, β=1^m ∑_l
Γ^α_β, l ∫_0^t |e^ 2
ζ_M-3 s∑_| k | ⩽(M-3)_α v̂^α, k_s
v̂^β, -k - l_s |^2 s
≲_Γ ∫_0^t e^-4Δ_M-3 s s
≲_Γ 1 - e^- 4Δ_M-3 t/Δ_M-3 ,
where we used both the decay assumption on Γ in (<ref>) and
the upper bound on v from (<ref>).
In particular, from the Burkholder–Davis–Gundy inequality, for any p
⩾ 1 there exists a constant C(p, Γ) such that
[| Y_t |^2p] ⩽ C(p, Γ) (e^4 Δ_M-3 t/Δ_M-3)^p .
On the other hand for the drift term we have that for t ⩾
t_⋆^M, (1), and by (<ref>), the following lower bound holds:
X_t ≳_η, Γ e^2Δ_M-3t∫_0^t_⋆^M, (1) e^- Δ_M-3 s s ≳_η,
Γe^2 Δ_M-3 t/Δ_M-3 .
Now define
X = X_t_⋆^M, (2) , Y = Y_t_⋆^M, (2) ,
so that by our lower bound on the drift (<ref>) and our upper bound
on the martingale term (<ref>), we have that there exist constants c > 0 and C_p, λ > 0 (for any p ⩾ 1, and where λ is the parameter in the definition of t_⋆^M, (2)) such that
c λ⩽ X , [|Y|^2 p] ⩽
C_p, λ (Δ_M-3)^p .
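The first bound is nothing but the evaluation of the lower bound on X_t above at time t_⋆^M, (2): by the definition of t_⋆^M, (2),
X ≳_η, Γ e^2 Δ_M-3 t_⋆^M, (2)/Δ_M-3 = λΔ_M-3/Δ_M-3 = λ .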
At this point, our aim is to obtain a quantitative lower bound on the probability that X + Y is strictly positive. Note that at first it is not clear at all that
such a lower bound should hold since, although Y has mean zero, its fluctuations are much larger than the
lower bound on X. Despite this fact, we obtain that at least with some small
probability the sum X+ Y stays positive.
Indeed, by (<ref>) we have that
(X + Y ⩽ 8) ⩽ ( Y ⩽ 8 - c λ ) .
The number 8 is chosen in connection to the definition of σ, and
our aim is to prove that the probability above is bounded away from one.
Now, let us fix λ > 0 such that 8 - c λ = - 1, so that
our aim becomes to find an upper bound on the probability
φ = (Y ⩽ - 1) .
Since Y has mean zero we have
0 = [Y] ⩽ - φ + [Y _{Y > -1}] ,
from which we deduce that
[Y _{Y > - 1}] ⩾φ .
At this point we use Hölder's inequality to bound
[Y _{Y > - 1}] ⩽ [Y^2p]^1/2p (1-
φ)^1/q , 1/q + 1/2 p = 1 , q
∈ (1, 2] .
Here the requirement q ∈ (1, 2] is needed to guarantee p ⩾ 1, so that (<ref>) applies.
Indeed, from here, by using the second bound in (<ref>), we deduce
that for some constant c_q > 0 and any q ∈ (1, 2]:
(√(Δ_M-3))^q(1 - φ) ⩾ c_qφ^q .
Writing φ = 1 - for ∈ (0,1), we then have
(√(Δ_M-3) )^q⩾ c_q (1 - )^q⩾c_q(1 - q ) ,
which in turn implies the lower bound
⩾c_q/ c_q q + (√(Δ_M-3))^q∼_q1/Δ_M-3^q/2 , ∀ q ∈ (1, 2] .
Now we can choose q arbitrarily close to 1 and hence q/2 arbitrarily
close to 1/2. For instance, by fixing appropriate q we find that
(X + Y > 8) ⩾Δ_M-3^-3/4 ,
for any M ⩾, provided we choose sufficiently large. Observe
that on the event X + Y ⩾ 8 we must have τ⩽
t_⋆^M,(2), since σ would kick in at the latest by time t_⋆^M, (2).
At this point we are left with two tasks. First, we have so far proven that, at
least with a small probability, it holds that τ⩽ t_⋆^M, (2). We must
still prove that conditional on this event we have, with high probability, that indeed σ
< τ_up∧τ_low, meaning that we have overall
a small probability of dilution in the time interval [0,
t_⋆^M, (2)]. Second, we must iterate our argument, to pass from a
small probability to a probability of order one by considering longer
time-scales.
We start with the first task, namely by proving that there exists a constant c > 0 for which
( σ < τ_low∧τ_up∧
t_⋆^M, (2)) ⩾
c Δ_M-3^-3/4 .
This requires some estimates on the high and central frequencies of the process
t ↦ z_t.
Estimate for Π^_M-1 z_t.
By summing (<ref>) over | k | > (M-2)_α and over α∈{ 1, …, m }, we obtain for all t ⩾ 0
e^2 ζ_M-2 tΠ^_M-2 z_t^2⩽Π^_M-2
z_0^2 + C(Γ)∫_0^t e^2 ζ_M-2s v_s^2 s +
_t ,
again by the decay assumption (<ref>) on the coefficients Γ^α_β, l and where is the martingale given by
_t = 2 Re( ∑_α =1^m∑_l∑_(M-2)_α < | k | ∫_0^t
e^2 ζ_M-2 sv̂_s^α, k( v̂_s^- k - l· B^l_s)^α) .
Now, using once more the upper bound (<ref>) and the decay of the
coefficients Γ, the continuous martingale
t ↦_t has quadratic variation bounded by
∑_α, β = 1^m∑_lΓ^α_β, l∫_0^t|∑_(M-2)_α < | k |e^2
ζ_M-2 sv̂_s^α, kv̂_s^β, - k - l|^2 s ≲_Γ t
.
Therefore, by Doob's submartingale inequality, we obtain that for any p ∈
[1, ∞]
( sup_0 ⩽ s ⩽ t_⋆^M, (2)_s⩾ C
) ≲ | _t_⋆^M,(2) |^p/C^p≲
(t_⋆^M,(2))^p/2⩽1/Δ_M-3 ,
where the last inequality follows for example by choosing p =4, and
provided that is sufficiently
large. We therefore conclude that
( τ_up < σ∧τ_low∧ t_⋆^M, (2)) ⩽1/Δ_M-3 .
This is a sufficient upper bound for our purposes, since the probability Δ_M-3^-1 is much smaller than Δ_M-3^- 3/4, which is the
lower bound on the probability of our tentative "success" event. The next step
is to establish a similar upper bound for the stopping time τ_low.
Estimate for Π_M-1^ z_t^β_0.
Similarly to above, we would now like to prove
a lower bound on Π_M-1^ z_t^β_0 up to time t_⋆^M,(1),
with a very high probability. By summing
(<ref>) for α= β_0 over (M-2)_β_0 < | k | ⩽ M_β_0 we find that
∑_(M-2)_β_0 < | k | ⩽ M_β_0 e^2 ζ_M t |
ẑ^β_0, k_t
|^2⩾ (Π_M-1^ + Π_M-2^) z_0^β_0^2 + _t ,
with
_t = 2 Re( ∑_l∑_(M-2)_β_0 < | k | ⩽
M_β_0∫_0^t
e^2 ζ_M sv̂_s^β_0, k( v̂_s^- k - l· B^l_s)^β_0) .
Now, as before, we compute the quadratic variation of , up to time
t_⋆^M,(1):
⟨ ⟩_t_⋆^M,(1)
≲∑_β=1^m∑_l ∈^dΓ^β_0_β, l∫_0^t_⋆^M,(1)|∑_(M-2)_β_0 < | k | ⩽ M_β_0e^2 ζ_M sv̂_s^β_0, kv̂_s^β, - k - l|^2 s
≲∑_β, l Γ^β_0_β, l∫_0^t_⋆^M,(1) e^
4(ζ_M - ζ_M-2) s( ∑_(M-2)_β_0 < | k | ⩽ M_β_0e^2
ζ_M-2 s
|v̂_s^β_0, kv̂_s^β, - k - l| )^2 s
≲_Γ t_⋆^M,(1) ,
where we used that for s ⩽ t_⋆^M,(1) we have
e^ 4( ζ_M - ζ_M-2)s⩽ c,
for some constant c >0 independent of M, the bound (<ref>), and
the decay assumptions on Γ.
Following the same steps as above, we
therefore find that, provided is sufficiently large:
(τ_low < τ_up∧σ∧
t_⋆^M, (2)) ⩽1/Δ_M-3 .
Hence, combining (<ref>) and (<ref>), we obtain
(<ref>), since Δ_M-3^-1≪Δ_M-3^-
3/4.
Iteration.
The lower bound (<ref>) guarantees that we can dilute, but only with
a very small probability, by time t_⋆^M, (2). The last step in the
proof is to perform a large number of attempts (more than Δ_M-3^3/4, in order to compensate the small probability) so that with
a high probability, we observe at least one success. We therefore define the final time-horizon
t_⋆^M,(3) = log( λΔ_M-3)/2
Δ_M-3^1 - r = Δ_M-3^r·
t_⋆^M, (2) ,
for an arbitrary parameter r ∈ (3/4, 1).
Before we can conclude let us observe that the previous calculations prove
also, up to changing the value of the proportionality constant, that there
exists a c > 0 such that for any
stopping time t_0
[e:t1bd]
_t_0 ( σ^(M, t_0) ⩽t_0 + t_⋆^M, (2))
_ _, t_0 > c Δ_M-3^- 3/4 _ _,
t_0 ,
where _, t_0 is the event
_, t_0 = { (Π_M-1^ + Π_M-2^) u_t_0 ⩾1/4Π^_M -2 u_t_0 ,
M ⩾ ,
w^ (M; t_0, ·) ⩽ 3 } .
The event _ , t_0 is almost equivalent to the assumption on the
initial condition appearing in
Definition <ref> for high-frequency stochastic instability
(up to a time shift).
The only difference is that at time t_1 we only assume Π^
u_t_1⩽ 3 Π^_M u_t_1, rather than the same
inequality with the constant 2. Therefore (<ref>) follows
identically to (<ref>), up to choosing a slightly smaller value for
η_0 in (<ref>).
In addition, a slight adaptation of the second estimate of
Lemma <ref> (the only difference being that we do not assume that
M is the skeleton median) and the same calculations that led to
(<ref>), we find that the event w (M;
t_0, ·) > 3 has small probability, with respect to our benchmark
probability Δ_M-3^- 3/4, at least if t_0⩽ t_⋆^M, (3):
( sup_0 ⩽t ⩽t_⋆^M, (3) w^
(M ; t, ·) > 3 ) ⩽1/Δ_M-3 ,
provided is sufficiently large: as above, the upper bound Δ_M-3^-1
could be replaced by an arbitrarily large inverse power, up to choosing a
sufficiently large . For later reference, let us denote
σ = inf{ t ⩾ 0 w^
(M ; t, · ) > 3} .
Now we are ready to iterate our bound to obtain the desired result.
Let us fix
L_M = ⌊Δ_M-3^r⌋ -1 ,
as well as the following sequence of events for ł∈{ 1, …,
L_M}:
_ł = { (Π_M-1^ + Π_M-2^) u_t ⩾1/4 Π^_M -2 u_t , ∀t ∈[łt_⋆^M,(2),
(ł+1) t_⋆^M,(2)] }
∩{ M ⩾ , σ ⩾ł·t_⋆^M, (2) } ,
_ł = ⋂_j = 1^ł _j .
Then we have
(σ^⩾ L_M t_⋆^M,(2)) ⩽ (
_L_M) + ( σ⩽ t_⋆^M,
(3) )
⩽ ( _L_M) + Δ_M-3^-1 ,
by (<ref>).
In particular, for our purposes it suffices to find an upper bound to (_L_M ). Here we find that for any ł∈^+:
(_ł) = [ __ł-1_(ł-1)
t_⋆^M,(2)(_ł)
]
= [
__ł-1_(ł-1) t_⋆^M,(2)(_, łt_⋆^M, (2)) ]+ ( σ ⩽t_⋆^M,
(3) )
⩽[
__ł-1 ](1 - c Δ_M-3^- 3/4) +
Δ_M-3^-1
⩽(_ł-1) (1 - c Δ_M-3^- 3/4) +
Δ_M-3^-1 ,
where we used both (<ref>) and (<ref>). Now we can iterate
this bound to obtain overall for some c, C > 0:
(_L_M) ⩽(1 - c Δ_M-3^- 3/4)^L_M +
Δ_M-3^-1 ∑_ł=
0^L_M-1(1 - c Δ_M-3^- 3/4)^ł
⩽(1 - c Δ_M-3^- 3/ 4)^L_M + C
Δ_M-3^-1 Δ_M-3^r .
This last bound is now sufficient to conclude the proof, since for every ∈ (0, 1) there exists a () such that
(1 - c Δ_M-3^- 3/ 4)^L_M +
CΔ_M-3^-1Δ_M-3^r⩽ ,
∀ M ⩾() ,
where we have used that in the definition of L_M the parameter r
satisfies r ∈ (3/4, 1).
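To spell out why any r ∈ (3/4, 1) suffices: using 1 - x ⩽e^-x and L_M ⩾Δ_M-3^r/2 for M large, we have
(1 - c Δ_M-3^- 3/4)^L_M ⩽exp( - c L_M Δ_M-3^- 3/4 ) ⩽exp( - (c/2) Δ_M-3^r - 3/4 ) ,
and both this term and C Δ_M-3^r-1 vanish as M →∞, since r - 3/4 > 0 > r - 1.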
Then choose (t, ) sufficiently large such that t⩾ t_⋆^, (3), and such
that all previous calculations hold true. We have proven that
( σ^⩽t) ⩾
1- , ∀ M ⩾(t, ) .
This concludes the proof.
The next result is a simple corollary of the definition of high frequency
instability, together with the moment bounds in Lemma <ref>.
Recall the notation in (<ref>).
Under the assumptions of Theorem <ref>, and in particular if W induces a high-frequency stochastic instability, then the following holds. For any i
∈, let V_i+1 be the stopping time defined in
Definition <ref>. For any ∈ (0, 1) there exists a () > 0 such that
_V_i+1 ( σ^(V_i+1) <
σ^(V_i+1) ) ⩾1 - ,
on the event { M_T_i⩾}.
To further lighten the notation we assume that V_i+1 = 0 and write σ^ and σ^ instead of σ^ (V_i+1) and σ^
(V_i +1) respectively.
Next, let
δ∈ (0, 1) be a parameter. We can lower bound
( σ^ < σ^ ) ⩾(
σ^ < δ< σ^ )
⩾1 - ( σ^ ⩾δ)-
( σ^ ⩽δ) .
Now, by Lemma <ref>, choose δ() ∈ (0, 1)
sufficiently small, so that
(σ^ ⩽δ) ⩽ .
This is possible because by definition of V_i+1 we have that w^ (M_i; V_i+1, ·) ⩽ 5/4 < 3/2 (see
Definition <ref>).
As for the first probability, we use Definition <ref>
regarding high frequency stochastic instability. By choosing (δ, )
> 0 sufficiently large and M ⩾ (δ, ), we find that ( σ^⩾δ) ⩽, so that the
result follows.
We conclude this section with a lemma which provides a simple criterion to establish the
non-degeneracy property in Assumption <ref>. See also
Remark <ref>.
For every d, β∈ and > 0 there exists an L_0 (β, ) such that for every k ∈^d with | k | ⩾ L_0 there exists an ł (k) ∈^d such that
| ł | ⩽β , | k + ł | ⩽
| k | - β/√(d) + .
We can assume without loss of generality that k = (k_i)_i=1^d satisfies k_i⩾ 0
for all i ∈{ 1, …, d }. Next fix an i_0∈{ 1, … , d
} such that k_i_0⩾| k |/√(d) and fix
ł = - β e_i_0 = (0, …, 0, - β,0, …, 0 ) ∈^d ,
so that the value -β appears in the i_0-th position. Then we can
compute that
| k + ł |^2 = | k |^2 - 2 k_i_0β + β^2 .
Now assume that β > 0. Then by concavity of the square root we obtain that
| k + ł | ⩽ | k | - 2 k_i_0β - β^2/2| k |⩽ | k | - β/√(d) + β^2/L_0 .
Therefore the result follows by choosing L_0 sufficiently large.
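As a simple illustration of this choice of ł (purely for intuition, since the lemma concerns | k | ⩾L_0): take d = 2, β= 2 and k = (6,8). Then i_0 = 2, ł= (0,-2) and
| k + ł| = √(72)≈8.49 ⩽10 - 2/√(2) = | k | - β/√(d) ,
so in this example the conclusion holds even without any error term.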
§ HIGH-FREQUENCY REGULARITY
The final step towards the construction of the Lyapunov functional is to
establish high frequency regularity estimates. We start with a bound on
exponential moments of w_t_γ, M_t for γ⩽ 1/2.
Later we proceed to polynomial moments of the same quantity, but allowing higher
values of γ.
§.§ Exponential moments
The purpose of the next result
is to obtain suitable regularity estimates, up to certain stopping times.
For the statement of the proposition recall here the definition of the stopping
times in (<ref>), as well as
of the event _E(Q;t_0) from
(<ref>), and of the first jump time T (t_ 0) of
the skeleton median in (<ref>).
Under the assumptions of Theorem <ref>, let t_0
be any stopping time. Further,
fix parameters ⩾ 1, κ > 0, and γ∈ (0, 1/2] (with appearing in (<ref>)).
Finally, set L = M_t_0 + k_0 for any k_0∈^+. Then the
following estimates hold uniformly over
k_0∈^+, γ∈ (0, 1/2] and locally uniformly over ⩾ 1.
* Uniformly over δ∈ (0, 1), there exists a deterministic increasing function κ↦ C(κ) such that:
[eqn:reg-dwn-n]
_t_0 [ e^ κ w(
t_0+ δ- , ·) _γ, L^2 _
{ t_0 + δ⩽T (t_0) }] ⩽C(κ) e^ 2κe^- 2 δk_0 w(t_0, ·)
^2_γ, L .
* For any constants 0 < E < β and any _t_0-adapted random variable Q ∈, assuming that σ_β^ (Q, t_0) ⩽ T
(t_0) on _E(Q;t_0), there exists a deterministic function (κ,β, δ ) ↦ C(κ, β, δ)
increasing in κ such that:
[e:rupnew]
_t_0 [ e^
κ w( σ_β^ (Q,t_0)- , ·)
_γ, L^2 __E(Q;t_0) ] ⩽C (κ, β, E) __E(Q;t_0) .
* And finally, for any other
stopping time t_0⩽
t_1⩽ T(t_0), the following
uniform in time estimate holds:
[eqn:uniform-bd]
_t_0 [ sup_t ∈[t_0, t_1) e^κ w(t, ·) _γ, L^2 ] ⩽Ĉ(κ) e^ 2 κ w (t_0, ·) _γ, L^2 ,
for some increasing deterministic function κ↦Ĉ(κ).
We observe that in the bounds we consider left limits w (t_1-, ·)
because by assumption t_1⩽ T (t_0). It may therefore be that
t_1 is the first time after t_0 at which the skeleton median jumps
and there is a discontinuity in w: indeed, in many instances in which we use
the estimates above this will be the case and we will have t_1 = T
(t_0). The discontinuity will then be treated separately. Moreover, let us
observe as in Lemma <ref> that the estimate in
(<ref>) breaks down as E ↑β.
The proofs of all these estimates follow along similar lines, the main
difference being the treatment of the initial condition.
Let us start with some general considerations. First, in all cases we are
considering the evolution of w on some time interval [t_0,
t_1), where t_1 is another stopping time.
To lighten the notation, we will consider throughout the proof t_0 = 0
and write M = M_t_0 and L = M_t_0 + k_0.
Then, by Lemma <ref> we have that w
(t, x) ⩽ 2 for all t ⩾ 0, and in addition since t_1⩽ T (t_0) we have for all t ∈ [0, t_1) that M_t = M and, via
Itô's formula, w satisfies the following equation:
[eqn:high-freq]
w_t = [ w_t + ψ(M, u_t)
w_t + Q(u_t) ] t + σ(u_t, W_t) , ∀t ∈[0, t_1) ,
where ( φ)^α = - ν^α (-
Δ)^φ^α and ψ (M,
u_t) ∈ is given by
[eqn:def-alpha]
ψ(M, u_t) = - 1/ Π^_M u_t
^2 ⟨Π^_M u_t, u_t ⟩ .
In particular, by our definition of projection in Definition <ref>, it holds that 0 ⩽ψ (M, u_t) ⩽ζ_M. Moreover the martingale term σ is given by
[eqn:def-sigma]
σ(u_t, W_t) = Π_M^ ( u_t ·W_t /
Π^_M
u_t ) - Π_M^ u_t/ Π^_
M u_t
^3 ⟨Π^_M u_t, u_t ·W_t ⟩
and the vector-valued quadratic variation term Q is given by
[eqn:for-Q]
Q^α(u_t)(x) = 3/2 Π^_M u_t^α
∑_γ, η=1^m ∑_| k | ⩽M_γ, | l | ⩽M_η û_t^γ, k
û_t^η, l/ Π^_M u_t ^5 C_k, l^γ, η
(u_t)
-1/2 Π^_M u_t^α
∑_γ, η=1^m ∑_| k | ⩽M_γ, | l | ⩽M_η 1/
Π^_M u_t ^3 _{ k+l =0 } _{ γ= η}C_k,
l^γ, η (u_t)
- 1/2 ∑_β=1^m∑_| k | > (M+1)_α, | l |
⩽M_β
û_t^β, l e_k(x)/ Π^_M u_t ^3 C_k,
l^α, β(u_t) ,
where C_k, l (u) is the covariation defined in Remark <ref>.
We can rewrite the term in the last line in spatial
coordinates, so that it becomes simpler to estimate:
[e:q-spatial]
-∑_β=1^m∑_| k | > (M+1)_α, | l | ⩽M_β
û_t^β, l e_k(x)/ Π^_M u_t ^3
C^α, β_k, l(u_t) = - Π^_M [
F^α (u_t) ] (x) ,
with
F^α (u)(x) = ∑_β, θ, η=1^m
u^β(x)/ Π^_M u^3 ∫_^d Π^_M u^η (y)
u^θ(y) Λ^α, η_β, θ (x-y) y .
Here Λ is the spatial correlation function introduced in
(<ref>), and in particular by our regularity assumption
(<ref>) on the noise, we have
that Λ_∞ < ∞.
Next we will write w_t_1 as a mild solution to
(<ref>).
Namely, for 0 ⩽ s < t < ∞ and L = M + k_0 we
introduce the time-inhomogeneous semigroup as follows (written in coordinates
for α∈{ 1, …, m }):
[e:def-semi]
S_s,t^L φ^α = e^(t-s) ^α + ∫_s^t
ψ_r^α r Π_L^ φ^α .
Observe that although we do not state
this explicitly in the definition, the semigroup depends on M and
u through ψ. Hence, for t ∈ [0, t_1)
[eqn:decomposition-w]
Π_L^ w_t = S_0,
t^L w_0
+ ∫_0^t S_s, t^L Q(u_s) s +
∫_0^t S_s,
t^L σ(u_s, W_s) .
Note also that we project on frequencies higher than L because
eventually we are interested in the norm w_t_γ, L.
For later reference let us write, for t ∈ [0, t_1)
[eqn:def-y-z]
y_t = ∫_0^t S_s, t^L Q(u_s) s, z_t = ∫_0^t S_s, t^L σ(u_s, W_s) .
Regarding the last term z_t, we observe that for any L ∈, the
semigroup S_s, t^L is not adapted to the filtration _s at time
s, because of the
presence of ψ_t = ψ(M, u_t). But the stochastic integral
can still be defined by rewriting
exp( ∫_s^tψ_r r
) = exp( ∫_0^tψ_r r )exp(-∫_0^sψ_r r ),
so that the part that is not adapted can be taken out of the stochastic
integral. Yet, for this very reason, proving a useful bound on the
stochastic convolution is a bit cumbersome and we will use energy estimates for
z_t instead. In any case, the crucial point for our proof, as we already observed, is that
[eqn:reg-prf-alpha-bd]
0 ⩽ψ_t ⩽ζ_M ,
so that we are in the setting of Lemma <ref>.
This concludes our preliminary observations.
Next we will concentrate on finding
separate bounds for the three terms S_0, t^L w_0, y_t and z_t for t ∈ [0, t_1). To lighten the notation we will omit
writing explicitly the dependence of constants on the parameters appearing in
the statement of the proposition.
Bound on the initial condition. We start with the first term,
concerning the initial condition, where via the two estimates in
Lemma <ref>, since > 1/2 and in view of the bound (<ref>), for some
deterministic C(γ) > 0, both of the following two bounds hold:
[eqn:reg-prf-1]
S_0, t^L w_0 _γ,
L ⩽ C(γ) t^-
γ/2 ,
e^- t k_0
w_0 _γ, L , ∀t ∈[0, t_1) ,
where the first estimate follows from
S_0, t^L w_0_γ,
L≲_γ t^-
γ/2 w_0≲_γt^-
γ/2 ,
since w_0⩽ 2.
Bound for y. For the convolution term y_t we start with a bound on
the quadratic variation Q(u_t) from (<ref>). We find that for
all t ∈ [0, t_1) and α∈{ 1, …, m }, by the estimate in
Remark <ref> and (<ref>):
Q^α(u_t) ≲ Π_M^ u_t^α ( ( ∑_ k | û_t^k | u_t _ł^2_k )^2 /
Π_M^ u_t ^5 + ∑_ k u_t _ł^2_k^2/
Π_M^ u_t ^3 ) + Π^_M [ F^α(u_t) ]
≲ Π_M^ u_t u_t ^2/
Π_M^ u_t ^3 Π_M^ u_t ^2/
Π_M^ u_t ^2 + Π_M^ u_t
u_t ^2/ Π_M^ u_t ^3 + u_t ^3/
Π^_M u_t ^3 Λ_∞
≲1 .
Here we used that by assumption
t_1⩽ T(t_0) so that M = M_t for all t ∈
[0,t_1) and therefore
by Lemma <ref> we have u_t⩽√(5)Π^_M u_t for all t ∈ [0, t_1).
We therefore conclude that for some deterministic constant C ∈ (0, ∞)
[eqn:reg-prf-2]
Q (u_t) ⩽C , ∀t ∈[0, t_1) .
In view of this bound, we now use Lemma <ref> with δ = γ to estimate the term t ↦ y_t as follows. Here we recall that by assumption we have t_1⩽ 3, since T (t_0) - t_0⩽ 3 by
Lemma <ref>, so that we can estimate
∫_0^t S^L_s, t Q(u_s) s
_γ, L ⩽ ∫_0^t S^L_s , t Q(u_s) s
_γ, L ≲∫_0^t (t-s)^- γ/2 s ≲1 ,
uniformly over t ∈ [0, t_1), where we applied (<ref>) and used
that γ < 2. Hence we find
a deterministic constant C(γ)>0 such that
[eqn:reg-prf-3]
sup_0 ⩽t < t_1 ∫_0^t S^M_s, t Q(u_s) s
_γ, L⩽C(γ) .
This concludes our bound on the convolution term y. The stochastic
convolution term t ↦ z_t requires special care, so we defer its
study to the separate Lemma <ref>. In particular, the named lemma
shows that for every κ and γ as in our assumption there
exists a C(κ, γ) ∈ (0, ∞) such that
[ sup_0 ⩽ t< t_1 e^κ z_t_γ,
L^2] ⩽ C(κ,
γ) .
We have now all tools at our disposal to deduce the desired bounds.
Proof of (<ref>). The only difference that appears
throughout the proofs of the bounds lies in the treatment of the initial
condition. In all cases we have that
e^κ w_t_1 -^2_γ, L⩽exp( 2κ S^L_0, t_1- w_0^2_γ,L + 4κ
y_t_1-^2_γ, L + 4 κ z_t_1-^2_γ, L) ,
as (a + b)^2⩽ 2 a^2 + 2 b^2.
Therefore, by (<ref>), we can further bound for t_1 =
t_0 + δ on the event { t_0 + δ < T (t_0) }:
[ e^κ w_t_1 -^2_γ, L 1_{ t_0 + δ⩽ T(t_0) }] ⩽
C(κ, γ) [
exp( 2κ S^L_0, t_1- w_0^2_γ,L + 4 κ z_t_1-^2_γ, L) ] .
Now, to obtain (<ref>) we apply the second estimate in
(<ref>) as well as (<ref>) to bound
[ e^κ w_t_1 -^2_γ, L]
≲ e^2 κ e^- 2 δ k_0 w_0_γ, L^2 ,
as required.
Proof of (<ref>).
In this estimate, the point is that the stopping time t_1 = σ_β^(Q; t_0) takes a time of order one to kick in, which suffices to
regularise the initial condition. As above, we start from the estimate
[ e^κ w_t_1 -^2_γ, L] ⩽
C(κ, γ) [
exp( 2κ S^L_0, t_1- w_0^2_γ,L +4 κ z_t_1-^2_γ, L) ] .
This time, we apply the first estimate in (<ref>) and
Cauchy–Schwarz to obtain
[ e^κ w_t_1 -^2_γ, L] ⩽
C(κ, γ) [ e^4 κ C(γ) t_1^-
γ/]^1/2[ e^8 κ
z_t_1-_γ, L^2]^1/2 .
Now, since γ /
< 1, and since E < β we can apply
Lemma <ref> together with (<ref>). We then obtain that as desired
[ e^κ w (σ_β^ (Q, t_0) - , ·) ^2_γ, L]
1__E(Q; t_0)⩽ C __E(Q; t_0) ,
with the proportionality constant depending on all the parameters of the
problem.
Proof of the uniform bound (<ref>).
We find, as above, that
[ sup_0 ⩽ t < t_1 e^κ w_t^2_γ, L] ⩽[ sup_0 ⩽ t < t_1
e^ 2κ S^L_0, t w_0^2_γ,L + 4κ
y_t^2_γ, L + 4 κ z_t^2_γ, L]
⩽ C(κ, γ) e^2 κ w_0^2_γ, L[ sup_0 ⩽ t < t_1
e^ 4 κ z_t^2_γ, L]
≲ e^2 κ w_0_γ, L^2 ,
by (<ref>) and the second bound on (<ref>), as desired.
This concludes the proof of the proposition.
In the following result we obtain exponential moments of the stochastic
convolution term that was relevant in the preceding proof.
In the same setting as that of Proposition <ref>, for any
stopping time t_0 and any other stopping time t_1 such that t_0⩽ t_1⩽ T(t_0), let z_t be the solution to:
z_t = [ z_t + ψ_t z_t ] t + σ(u_t, W_t), z_t_0 = 0 , t ∈[t_0, t_1) ,
where ψ_t is defined in (<ref>), and σ in
(<ref>). Then for any γ∈ (0, 1/2] and ⩾
1 we can bound
_t_0 [ sup_s ∈ [t_0, t_1)exp( ^-1 z_s_γ, M_t_0 + k_0^2) ] < C(, γ, ) ,
for any > 0, and where the constant C () > 0 additionally
depends on all the parameters in the statement of
Proposition <ref>.
As in the previous proof we fix t_0 = 0, M =
M_t_0 and M_t_0 + k_0 = L.
Our aim is to obtain an energy estimate for z_t_γ, L^2. Therefore we compute
[eqn:for-z]
z_t _γ, L^2 = 2 ⟨Λ^2
γ_L z_t, z_t + ψ_t z_t ⟩t +
Q(u_t) t + _t ,
where Λ^2 γ_L is the Fourier multiplier, which in
coordinates for α∈{ 1, …, m } is defined by
Λ^2
γ_L z^α (k) = (1+|k| - L_α)^2 γẑ^α, k_{ | k | > (L+1)_α} ,
Q is a quadratic variation term which we compute below,
and is a local
martingale defined by
_t = ⟨Λ^2 γ_L
z_t, σ(u_t, W_t) ⟩ ,
with σ as in (<ref>). In particular, in Fourier
coordinates
( σ^α (u_t, W_t)) (k) = 1_{ | k | > (M+1)_α}∑_l ∈^d( ∑_β=1^mû^β, k-l/Π^_M u_t B^α, β_l , t
- û^α, k/Π^_M u_t^3 ∑_h ∈^d∑_γ, η= 1^mû^γ, h_tû^η,
h-l B^γ, η_l, s) .
As for the quadratic variation term Q, we can compute it as
follows:
[e:qbar]
Q (u_t) = ∑_α=1^m ∑_| k | > (L+1)_α
(1 + | k | - L_α)^2 γ ∑_l ∈^d( ∑_β,
β^' Γ^α, α_β,
β^', l û^β, k - l_t û^β^',
-k + l_t/
Π^_M u_t ^2
- 2 Re ( ∑_β, γ, η=1^m Γ^α, γ_β, η, l
û^β, k-l_t û^α, -k_t / Π^_M
u_t ^4 ∑_h ∈^d û^γ, h_t û^η,
h-l_t )
+ | û^α, k_t |^2 / Π^_M u_t
^6
∑_γ, γ^', η, η^' =1^m
Γ^γ, γ^'_η, η^', l ( ∑_h ∈^d û^γ, h_t û^η,
h-l_t) ( ∑_h ∈^d û^γ^', h_t û^η^',
h-l)) .
In particular, via Remark <ref>, we can bound Q by
Q (u_t) ≲_Γ ∑_α =1^m∑_| k | > (L+1)_α
(1 + | k | - L_α)^2 γ(| û^α,
k_t |^2/Π^_M u_t^2 u_t^4/Π^_M u_t^4
+ ∑_l ∈^d(
Γ_l| û^k - l_t |^2/Π^_M u_t^2 + Γ_l| û^α, k_t | | û^k -
l_t |/Π^_M u_t^2 u_t^2/Π^_M u_t^2) ) ,
where the first line contains a bound on the last term appearing in
(<ref>). Then by using the estimate u_t≲Π^_M u_t, via Lemma <ref>, since M =
M_t, we can further estimate
Q (u_t) ≲∑_α =1^m∑_| k | > (L+1)_α
(1 + | k | - L_α)^2 γ(| û^α,
k_t |^2/Π^_M u_t^2 + ∑_l ∈^dΓ_l| û^k - l_t |^2/Π^_M u_t^2)
≲ w_t_γ, L^2 + ∑_α =1^m∑_| k | > (L+1)_α
(1 + | k | - L_α)^2 γ∑_l ∈^dΓ_l| û^k - l_t |^2/Π^_M u_t^2 .
Here the last term is the most tedious one to control: on the one hand we
have to pass the quantity (1+| k | - L_α)^2 γ inside the norm
·_ł^2_k, and on the other hand the sum does not involve the
α-th component of the solution, which means that we must
replace L_α with L_β for arbitrary β∈{ 1,
…, m }. To be precise, we bound the sum as follows:
∑_α=1^m ∑_| k | > (L+1)_α
(1 + | k | - L_α)^2 γ ∑_l ∈^d
Γ_l | û^k - l_t |^2 / Π^_M u_t
^2
⩽ ν_max^γ/ ν_min^- γ/
∑_α, β=1^m ∑_| k | > (L+1)_β
(1 + | k | - L_β)^2 γ ∑_l ∈^d
Γ_l | û^β, k - l_t |^2 / Π^_M u_t
^2
+ ∑_α, β=1^m ∑_(L +1)_β ⩾| k | > (L+1)_α
(1 + ΔL^β, α_+ )^2 γ ∑_l ∈^d
Γ_l | û^β, k - l_t |^2 / Π^_M u_t
^2 ,
where Δ L^β, α_+ = { L_β - L_α}∨ 0.
To treat the first term above, let us introduce for k ∈^d the weight ϱ^L_k defined as follows:
[eqn:def-weights]
ϱ^L_k = 1, if | k | ⩽L ,
(1+| k | - L)^2 γ, if | k | > L .
We omit writing the dependence of the weights on γ, since this parameter is fixed
throughout the proof. Similarly, define ϱ_k = (1 + | k
|)^2 γ. Then by Lemma <ref> there exists a c(γ) > 0 such that
ϱ^L_k ⩽c(γ) ϱ_k+l^L ϱ_l, ∀k, l
∈^d .
In particular, by the decay assumptions on Γ
together with the fact that γ < 1, we find
∑_| k | > (L+1)_β
(1 + | k | - L_β)^2 γ ∑_l ∈^d
Γ_l | û^β, k - l_t |^2 / Π^_M u_t
^2 ≲∑_l ∈^d Γ_l ϱ_l ∑_k ∈^d
ϱ^L_β_l+k | û_t^β, k+l |^2
≲_Γ Π_L^ u_t ^2 + u_t _γ,
L^2 .
As for the second term in (<ref>), we have
∑_(L +1)_β⩾ | k | > (L+1)_α
(1 + Δ L^β, α_+ )^2 γ∑_l ∈^dΓ_l| û^β, k - l_t |^2/Π^_M u_t^2≲_Γ, ν L^2 γ u_t^2/Π^_M u_t^2 .
Therefore, overall and once more via Lemma <ref>, we deduce that for all t ∈ [0, t_1)
[eqn:reg-prf-4]
Q (u_t) ≲L^2 γ u_t ^2/
Π^ u_t^2 + u_t ^2_γ, L/
Π^ u_t ^2
≲L^2 γ + w_t _γ, L^2 .
Now we in turn estimate for t ∈ [0, t_1)
w_t _γ, L ⩽ S_0, t^L
w_0 _γ, L + y_t _γ, L + z_t
_γ, L .
Hence, substituting (<ref>) into (<ref>) and using that ψ_t⩽ζ_M we obtain for some C(γ )> 0 and c_1(ν), c_2 (ν) > 0
and uniformly over t ∈ [0, t_1)
z_t _γ, L^2 ⩽ - ( c_1(ν) z_t _γ+
, L^2 + c_2(ν) M^2 -1 k_0 z_t _γ, L ) t
+ C ( L^2 γ + z_t _γ,L^2 + S_0,
t^L w_0 _γ, L^2 + y_t
_γ, L^2 ) t + _t ,
where we have employed the two different estimates below for the quadratic form ⟨Λ^2 γ_L z_t, ( + ψ_t) z_t⟩ appearing in
(<ref>). On the one
hand, we have
⟨Λ^2 γ_L z_t, ( + ψ_t) z_t ⟩ = 2 ∑_α=1^m ∑_| k | > (L +1)_α (1+ | k | - L_α)^2
γ ( - ν^α | k |^2 + ψ_t )
|ẑ_t^α, k|^2
⩽- ∑_α=1^m ∑_| k | > (L +1)_α (1+ | k | -
L_α)^2 γ( ν^α | k |^2 - ζ_M )
|ẑ_t^α, k|^2
⩽- ν_min ∑_α=1^m ∑_| k | > (L +1)_α (1+| k | -
L_α)^2 γ(1 + | k |- L_α)^2 |ẑ_t^α, k|^2
⩽- ν_min z_t _H^γ+ _L^2 .
Here we have used that ẑ^α, k = 0 for | k | ⩽
L_α+1, and that since k_0∈^+ we have L ⩾ M +1
together with the inequality
x^2 - y^2⩾ (x - y)^2 for all x ⩾ y
⩾ 0.
On the other hand, we can estimate the quadratic form by using convexity of the
map x ↦ x^2, since ⩾ 1/2:
⟨Λ^2 γ_L z_t, ( + ψ_t) z_t ⟩ = ∑_α=1^m ∑_| k | > (L +1)_α (1+ | k | - L_α)^2
γ ( - ν^α | k |^2 + ψ_t )
|ẑ_t^α, k|^2
⩽- ∑_α=1^m ∑_| k | > (L +1)_α (1+ | k | -
L_α)^2 γ( ν^α | k |^2 - ζ_M )
|ẑ_t^α, k|^2
⩽- c_2(ν) ∑_α=1^m ∑_| k | > (L +1)_α (1+| k | -
L_α)^2 γ M^2-1 k_0|ẑ_t^α, k|^2
⩽- c_2 (ν) M^2-1 k_0 z_t _γ, L^2 .
At this point we use interpolation with μ = γ/γ+ and
then Young's inequality for products for any ∈ (0, 1) to estimate for
some c > 0
z_t _γ, L ⩽c z_t ^1- μ
z_t _γ+, L^μ
⩽c (1-μ) ^-1/1-μ z_t + c μ^1/μ z_t
_γ+, L
⩽c (1-μ) ^-1/1-μ ( w_t +
S^L_0, t w_0 + y_t ) + c μ^1/μ z_t
_γ+, L
⩽c (1-μ) ^-1/1-μC+ c μ^1/μ z_t
_γ+, L ,
where the bound on S_0 , t w_0 and y_t is provided in
the relative steps of the proof of Proposition <ref> above.
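For the reader's convenience, the version of Young's inequality used in the last display is the weighted AM–GM bound (we write θ for the small parameter appearing there): for a, b ⩾0, μ∈(0,1) and θ> 0,
a^1-μ b^μ = ( θ^-1/(1-μ) a )^1-μ ( θ^1/μ b )^μ ⩽(1-μ) θ^-1/(1-μ) a + μθ^1/μ b .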
Thus, finally, via (<ref>) and the above estimate, by choosing ∈ (0, 1) such that c μ^1/μ⩽c_1(ν) /2, we obtain for some C > 0, whose value
changes from line to line:
z_t _ γ, L^2 ⩽ - (
c_1(ν)/2 z_t _γ+
, L^2 + c_2(ν) M^2 -1k_0 z_t _γ,
L^2 ) t
+ C ( L^2 γ + S^L_0, t w_0 _γ,
L^2 ) t + _t
⩽ -( c_1(ν)/2 z_t _γ+
, L^2 + c_2(ν) M^2 -1k_0 z_t _γ,
L^2 ) t
+ C ( L^2 γ + t^- γ/ ) t + _t .
The final ingredient in our study is a bound on the quadratic variation of the
martingale . Note that in Fourier coordinates
_t = ∑_l ∈^d (
∑_α=1^m∑_| k | >
( L+1)_α ϱ^L_α_k ẑ_t^α, k
( û_t^ k-l ·B^l_t )^α /
Π^_M u_t )
- ∑_l ∈^d ( ⟨Λ^2 γ_L z_t,
u_t ⟩/ Π^_ M u_t
^3 ∑_α=1^m ∑_| k | ⩽M_α
û_t^α, k ( û_t^k-l ·B^l_t
)^α ) .
We then bound the quadratic variation of as follows, using
as usual that t_1⩽ T(t_0) together with
Lemma <ref>
⟨ ⟩_t ≲ ∑_l ∈^d
Γ_l
|∑_α= 1^m ∑_| k | > (L+1)_α ϱ^L_α_k
|ẑ^α, k | / Π^_M
u_t | û_t^k - l | |^2
+ ∑_l ∈^d Γ_l ||⟨Λ^2 γ_L z_t, u_t ⟩|/ Π^_ M u_t
^3 ∑_α=1^m ∑_| k | ⩽M_α
|û_t^α, k| |û_t^k-l | |^2
≲ ∑_l ∈^d Γ_l ( z_t _2
γ, L^2 u_t ^2/ Π_M^
u_t ^2 + z_t _2 γ,L^2
u_t^6/ Π^_M u_t ^6 ) t
≲ z_t _2 γ, L^2 t .
Now we make use of the fact that γ < 1 ⩽, so that we can use
interpolation to estimate the norm z_t_2γ, L^2 by
z_t_γ + , L^2, with the gain of a
small factor. In particular, by Young's inequality and interpolation as in
(<ref>), we obtain
that for any ∈ (0, 1) there exists a C() ∈ (0, ∞) such
that for all t ∈ [0, t_1)
[eqn:bd-qv-total]
⟨ ⟩_t
⩽{ z_t ^2_γ+, L + C() } t .
Now from (<ref>) and since by assumption L^2 γ⩽
M + k_0 as γ⩽ 1/2, we find that
z_t_γ, L^2⩽∫_0^t e^- c_2 (ν)
M^2 -1 k_0 (t -s) C(M + k_0 + s^- γ/) s +
_t ,
where the process _t satisfies _0 = 0 and
_t = - ( c_1(ν)/2 z_t_γ +
, L^2 + c_2(ν) M^2 -1k_0_t) t + _t
⩽ - c_1(ν)/2 z_t_γ +
, L^2 t + _t .
In particular, since 2 -1 ⩾ 1 for ⩾ 1, and since
by assumption t_1⩽ 3 (which follows from
Lemma <ref>, because T (t_0) - t_0⩽ 3 for any
stopping time t_0), there exists a (deterministic) constant C > 0 such that
∫_0^t e^- c_2 (ν)
M^2 -1 k_0 (t -s) (M + k_0 + s^- γ/) s ⩽
C , ∀ M , k_0∈^+ .
Therefore we obtain that uniformly over M and k_0 we have, for any μ > 0
[ sup_t ∈ [0, t_1) e^μ z_t_γ, L^2]
≲_μ[ sup_t ∈ [0, t_1) e^μ_t] .
Now for _t we find that if we set F_μ (x) = e^μ x,
then
F_μ ( _t) ⩽ F_μ (_t) ( - μc_1(ν)/2 z_t_γ +
, L^2 + 1/2μ^2 ( z_t_γ +
, L^2+ C()) ) t
+ ∂_x F_μ (_t) _t
⩽ - F_μ (_t) μc_1(ν)/4 z_t_γ +
, L^2 + ∂_x F_μ (_t) _t ,
where we have used (<ref>) to obtain the first inequality, and
to obtain the second inequality we have assumed that μ satisfies
μ⩽c_1(ν) ^-1/2 .
In particular, we conclude that for μ satisfying (<ref>)
sup_0 ⩽ t ⩽ 1[ e^μ_t ∧ t_1+
∫_0^3 F_μ (_s ∧ t_1) z_s ∧ t_1_γ + , L^2 s] < ∞ .
To pass the supremum inside the expectation we can further bound, by using the
already mentioned bound t_1⩽ 3, which follows from our
assumptions:
[ sup_0 ⩽ t ⩽ t_1 F_μ (_t) ]
⩽[ sup_0 ⩽ t ⩽ 3 F_μ (_t ∧
t_1) ]
≲[ ∫_0^3 | ∂_x F_μ
(_s ∧ t_1) |^2⟨_·∧
t_1⟩_s]
≲[ ∫_0^3 F_2 μ (_s ∧ t_1) z_s
∧ t_1_γ + , L^2 s ] < ∞ ,
once more by applying (<ref>) and by the previous step,
provided that μ⩽c_1 (ν) ^-1 /4. Since ∈
(0, 1) can be chosen small at will, this proves the result.
For any γ >0 and any
L ∈ consider ( ϱ^L_k )_k ∈^d as in (<ref>), and ϱ_k
= (1 + |k|)^2γ for k ∈^d. Then there exists a c(γ) >0 such
that uniformly over L
ϱ^L_k ⩽c(γ) ϱ^L_k +l ϱ_l, ∀k, l
∈^d .
We distinguish three elementary cases. If | k+l | ⩾ | k |, then the result
is trivial. Otherwise we can either have L < | k+l | < | k | or | k+l
| ⩽ L and | k+l | < | k |. In the first case we find by the triangle inequality and for
some c(γ) > 0
ϱ^L_k ⩽(| k+l |+ | l | - L )^2 γ ⩽c(γ) (| k+l | - L)^2 γ (| l |+ 1)^2 γ = c(γ)
ϱ_k+l^L ϱ_l ,
where in the last equality we used the assumption | k+l | > L. Instead,
in the second case, | k +l | ⩽ L we have
| l | ⩾| k | - L ⇒ ϱ_k^L ⩽ϱ_l ,
so that the claim follows.
Fix any L^(1), L^(2)∈ such that L^(1)⩾ L^(2) and ψ [0, ∞) →^m such that | ψ_s^α | ⩽ζ_L^(2) for all s ⩾ 0
and α∈{ 1, …, m }, and
define the time-inhomogeneous semigroup for φ∈ L^2(^d;
^m) (written in components for α∈{ 1, …, m }):
S_s, t^L^(1) φ^α = e^(t-s) ^α +
∫_s^tψ_r^α r Π_L^(1)^φ^α , 0 ⩽ s < t < ∞ ,
where ^α = -ν^α (- Δ)^, for ⩾ 1/2 and (ν^α)_α =1^m > 0.
Then for any γ∈ and δ∈ [0, ∞) there exists a
constant C(γ, δ) such that for all 0 ⩽ s < t < ∞
S_s, t^L^(1) φ_γ+ δ, L^(1) ⩽C
(ν_min, γ,
δ)(t -s)^-δ/2 Π^_L^(1) φ_γ,L^(1) ,
S_s, t^L^(1) φ_γ, L^(1) ⩽e^-
|t -s|(L^(1) - L^(2)) Π^_L^(1) φ_γ,L^(1) .
The first bound of the lemma is not optimal, since it should actually improve
as L^(1) - L^(2) increases (in analogy to the second one). We state this
weaker version only because we do not require the improved bound.
Let us start with the first bound. We find
|S_s, t^L^(1) φ^α |(k) = e^-(t -s) ν^α | k |^2
- ∫_s^t ψ_r^α r | φ̂(k) | _{ | k | >
(L^(1) + 1)_α } .
Since ψ_r^α⩽ζ_L^(2) we have
(t -s) ν^α | k |^2 + ∫_s^t ψ_r^α r ⩾(t-s)
(ν^α| k |^2 - | L^(2) |^2 ) ,
so that from the inequality x^2 - y^2 ⩾ (x
- y)^2, for x ⩾ y ⩾ 0, we conclude that
| S_s, t^L^(1) φ^α |(k)
⩽e^- (t-s)
ν^α (| k | -
L^(2)_α)^2
| φ̂(k)|
⩽( (t - s)^1/2 ( |
k | - L^(2)_α ) )^- δ (
(t-s)^1/2 (| k | - L^(2)_α) )^ δ e^-
(t-s) ν^α (| k | - L^(2)_α)^2 | φ̂(k)| .
Then, using the bound ( t^1/2 x )^δ e^-
ν^α t x^2 ⩽ C(ν_min) for some C(ν_min) > 0 we obtain that uniformly over t,
L^(1), L^(2) and | k | >(L^(1) + 1)_α
S_s, t^L^(1) φ_γ+ δ,L^(1)^2 =
∑_α=1^m ∑_| k | >
(L^(1)+1)_α (| k | - L^(1)_α)^2(γ+ δ) | S_s, t^L^(1) φ^α |^2 (k)
≲_ν_min (t -s)^- δ/ φ_γ, L^(1)^2 ,
where we used that L^(1)⩾ L^(2). This proves the first bound. As
for the second one, we simply have for all α∈{ 1, …, m }
S_s, t^L^(1) φ^α _γ,L^(1)^2 ⩽e^- 2 (t -
s) ν^α (L^(1)_α - L^(2)_α )^2
∑_| k | > (L^(1) + 1)_α (| k | - L^(1)_α)^2 γ| φ̂ (k) |^2
⩽e^- 2 (t - s)(L^(1) - L^(2))^2 φ_γ,
L^(1) ,
which immediately proves the estimate since ⩾ 1/2.
§.§ Higher order polynomial regularity estimates
The aim of this section is to provide a proof of the higher order regularity estimate
appearing in Corollary <ref>.
In particular, we will need a polynomial bound on the H^ norm, as opposed to the estimate on the H^1/2 norm provided by the Lyapunov
functional G and by the preceding Proposition <ref>.
But the latter two provide estimates on exponential moments of π_γ, and as we will see this result can be
improved if we restrict to proving polynomial moments of π_t in higher
regularity spaces.
Under the assumptions of Theorem <ref>, for any ⩾ 1, γ∈ (0, + 1/2) and n ∈ there exist
s_⋆, C(γ,n) ∈ (0, ∞) such
that for all π_0∈ S
sup_t ⩾s_⋆ π_t _H^γ^n ⩽C(γ, n) G(π_0) ,
where G is the Lyapunov functional in (<ref>).
Let us fix t_⋆ as in Theorem <ref>. Then for any s_⋆⩾ 0 we can rewrite
sup_t ⩾s_⋆ π_t _H^γ^n =
sup_m ∈ sup_t ∈[m t_⋆, (m+1) t_⋆ ] π_
s_⋆ + t _H^γ^n .
Now, if we prove that uniformly over all m ∈ the following bound
holds
[eqn:aim-hr]
sup_t ∈[m t_⋆, (m+1) t_⋆ ] _m t_⋆ π_
s_⋆ + t _H^γ^n ⩽C(γ, n)
G( π_m t_⋆ ) ,
then by Theorem <ref> we conclude that as desired
sup_t ⩾s_⋆ π_t _H^γ^n ⩽C (γ, n) sup_m ∈ G( π_m t_⋆)
⩽C(γ, n) < ∞ .
Hence we are left with verifying (<ref>), and since π_t is a time-homogeneous Markov process we can reduce the problem to
verifying the claim for m= 0:
[eqn:aim-hr-reduced]
sup_t ∈[0, t_⋆ ] π_
s_⋆ + t _H^γ^n ⩽C(γ, n)
G( π_0 ) .
To obtain this bound, let us recall the definition of the stopping times { T_i}_i ∈ and the skeleton energy median process (M_t)_t ⩾ 0 as in Section <ref>. Then we
estimate for A_i = [T_i, T_i+1) by the Cauchy–Schwarz inequality
π_s_⋆ + t ^n_H^γ = ∑_i = 0^∞ [
_A_i (s_⋆+t) π_s_⋆+t _H^γ^n
]
⩽∑_i =1^∞ √(𝐩_s_⋆+t(i)) ·[
sup_T_i ⩽s < T_i+1 π_s _H^γ^2 n
]^1/2 ,
with 𝐩_s_⋆+t(i) = ( s_⋆+t ∈ [T_i, T_i+1))
for i ∈, and where we assumed that the sum can start from i =1 since by Lemma <ref>, T_1⩽ 3 and we can
choose s_⋆ > 3.
Now for t ∈ [ T_i, T_i+1) we find that
π_t _H^γ^2 ⩽M_T_i^2 γ +
∑_| k | > M_T_i (1 + | k |)^2 γ | π̂_t (k)
|^2
≲M_T_i^2 γ + π_t _γ, M_T_i^2
≲M_T_i^2 γ + w_t _γ,
M_T_i^2 ,
where in the last line w_t is as in (<ref>) and we made use
of the estimate (<ref>).
Hence, if we would prove that
[eqn:aim-fk-discrete]
[ M_T_i^n γ + sup_t ∈A_i w_t
_γ, M_T_i^n ] ⩽C (γ, n)
G(π_0) ,
then we could conclude that
sup_t ∈[0, t_⋆] π_
s_⋆ + t _H^γ^n ⩽C(γ, n)
G( π_0 ) ( sup_t ∈[0, t_⋆] ∑_i =
1^∞ √(𝐩_s_⋆+t(i)) )
⩽C(γ, n) C(t_⋆, s_⋆)G( π_0 ) .
by Lemma <ref>, which implies
(<ref>), up to choosing a larger C.
To conclude, we
observe that (<ref>) follows from
Lemma <ref> below and Theorem <ref>.
Indeed the latter theorem implies immediately that uniformly over i ∈
[M_T_i^γn] ≲_γ, n [F (1/2, _T_i)] ≲F(1/2, _0) =
G(π_0) .
Furthermore, applying first the second and then the first estimate of
Lemma <ref> guarantees that for any i ∈^+
[sup_t ∈A_i w_t
_γ, M_T_i^n ] = [ _T_i [ sup_t
∈A_i w_t _γ, M_T_i^n
] ]
⩽C [F(1/2, _T_i) + w_T_i _γ, M_T_i^n
]
⩽C^2 [ F(1/2, _T_i-1) ]
⩽C(γ, n) F (1/2, _0) =
C(γ, n) G (π_0) ,
where in the last lines we used again
Theorem <ref> (up to choosing a larger C).
In the next lemma we provide the key technical estimate of this section.
In the setting of Proposition <ref>,
for any 𝐚⩾ 1, γ∈ (0, 𝐚 + 1/2), n ∈ℕ there exists a C (γ, n) > 1
such that uniformly over i ∈ℕ and with A_i = [T_i, T_{i+1})
𝔼_T_i [ ‖ w_T_{i+1}‖_γ, M_T_i^n ] ⩽ C F(1/2, _T_i) ,
𝔼_T_i [ sup_t ∈ A_i ‖ w_t ‖_γ, M_T_i^n ] ⩽ C ( F(1/2, _T_i) + ‖ w_T_i‖_γ, M_T_i^n ) .
The result is true for γ⩽ 1/2 by
Theorem <ref>, so we consider only the case γ >
1/2.
The proof of both estimates follows along similar lines. As for the first
estimate, we observe first that by Lemma <ref>, since γ > 1/2,
for some C(ν, γ ) > 0
w_T_i+1 _γ, M_T_i^2 ⩽C(ν, γ) (1 + w_T_i+1 - _γ,
M_T_i-1^2) .
Now, for t ∈ [T_i, T_i+1) we follow the same decomposition that we
have used in (<ref>), in the
proof of Proposition <ref>, namely for all t ∈
[T_i, T_i+1):
w_t = S_T_i, t^M_T_i w_T_i + y_t +
z_t ,
where S^M_T_i_T_i, · is the semigroup defined in
(<ref>) and the terms y_t and z_t are defined in (<ref>), so that
w_T_i+1- _γ,M_T_i ⩽ S_T_i,
T_i+1^M_T_i w_T_i_γ, M_T_i + y_T_i+1
_γ, M_T_i+ z_T_i+1 _γ, M_T_i .
In particular, both the claimed estimates now follow if we prove the following
bounds for some deterministic constant C > 0:
e:hf-aim
𝔼_T_i [ ‖ S_T_i, T_{i+1}^M_T_i w_T_i‖_γ, M_T_i^n ] ⩽ C ,
𝔼_T_i [ sup_t ∈ A_i ‖ S_T_i, t^M_T_i w_T_i‖_γ, M_T_i^n ] ⩽ C ‖ w_T_i‖_γ, M_T_i^n ,
𝔼_T_i [ sup_t ∈ A_i ‖ y_t ‖^n_γ, M_T_i ] ⩽ C ,
𝔼_T_i [ sup_t ∈ A_i ‖ z_t ‖^n_γ, M_T_i ] ⩽ C F (1/2, _T_i) .
The proofs of these bounds follow roughly the same arguments as in the proof of
Proposition <ref>, up to the stochastic convolution term z_t, which requires a separate treatment. The rest of the proof is devoted
to obtaining (<ref>), treating term by term.
Step 1. Bound on the initial condition and on y_t. As for
(<ref>), we estimate by Lemma <ref>
[eqn:bd-ic]
S_T_i-1, T_i^M_T_i-1 w_T_i-1_γ, M_T_i-1
≲(T_i - T_i-1)^-γ/2 w_T_i-1 ≲(T_i - T_i-1)^-γ/2 ,
so that the estimate follows from Lemma <ref>. Similarly also
(<ref>).
Instead, for the bound (<ref>) on the term y_t, we follow verbatim the estimate
(<ref>), which holds for any γ∈ (0, 2), to obtain
for some deterministic C(γ) ∈ (0, ∞)
sup_T_i-1 ⩽t ⩽T_i y_t _γ,
M_T_i-1 ⩽C (γ) .
Step 3. Bound on z_t. Here we must follow a different
estimate than in the proof of Proposition <ref> and in
particular Lemma <ref>, although we will use, through a bootstrap
argument, the results in the quoted proposition and lemma.
Recall that we have, with ψ as in
(<ref>) and σ as in (<ref>):
dz_t = [ 𝒜 z_t + ψ_t z_t ] dt + σ(u_t, dW_t) , z_T_{i-1} = 0 , ∀ t ∈[T_{i-1}, T_i) .
To further simplify the notation let us assume as usual that T_{i-1} = 0 and write T_1 = T and M_T_{i-1} = M. We represent z_t in its mild form
z_t = ∫_0^t e^(t - s)𝒜 ψ_s z_s ds +
∫_0^t e^(t-s)𝒜 σ(u_s, dW_s) .
Then, since ψ_t⩽ζ_M we find that for some c(𝐚) > 0
sup_0 ⩽ t ⩽ T ‖ z_t ‖_H^γ ⩽ sup_0 ⩽ t ⩽ T ‖ z_t ‖_H^γ · sup_0 ⩽ t ⩽ T ∫_0^t e^-(t-s) (M+1)^{2𝐚} ζ_M ds
+ sup_0 ⩽ t ⩽ T ‖∫_0^t e^(t -s)𝒜 σ( u_s , dW_s) ‖_H^γ
⩽ M^{2𝐚} /(M+1)^{2𝐚} sup_0 ⩽ t ⩽ T ‖ z_t ‖_H^γ + sup_0 ⩽ t ⩽ T ‖∫_0^t e^(t -s)𝒜 σ( u_s , dW_s) ‖_H^γ
⩽ ( 1 - c(𝐚)/M ) sup_0 ⩽ t ⩽ T ‖ z_t ‖_H^γ + sup_0 ⩽ t ⩽ T ‖∫_0^t e^(t -s)𝒜 σ( u_s , dW_s) ‖_H^γ ,
where we used that
M^{2𝐚} = (M +1)^{2𝐚} - ( (M +1)^{2𝐚} - M^{2𝐚} )
⩽ (M +1)^{2𝐚} - inf_ξ∈[M , M+1] 2𝐚 ξ^{2𝐚-1}
⩽ (M +1)^{2𝐚} - 2𝐚 M^{2𝐚-1}
⩽ (M +1)^{2𝐚} - 𝐚 2^{-2𝐚+2} (M+1)^{2𝐚-1} ,
since M+1 ⩽ 2M.
Hence from (<ref>) we overall estimate
sup_0 ⩽ t ⩽ T ‖ z_t ‖_H^γ^n
≲_𝐚 M^n sup_0 ⩽ t ⩽ T ‖∫_0^t e^(t -s)𝒜 σ( u_s , dW_s) ‖_H^γ^n
≲_𝐚 M^{2n} + sup_0 ⩽ t ⩽ T ‖∫_0^t e^(t -s)𝒜 σ( u_s , dW_s) ‖_H^γ^{2n} .
Now our efforts will concentrate on estimating the latter stochastic integral.
We observe that, contrary to our original term z_t, the semigroup e^t𝒜 is deterministic, so that the convolution becomes simpler to
bound with classical tools.
To this purpose, let us introduce parameters γ̄, α, β > 0 such that
[eqn:assu-parameters-hr]
γ̄ + β = γ , β < 2 α𝐚 , γ̄∈(0, 1/2) , α∈(0, 1/2) .
This choice is possible since γ∈(0, 𝐚 + 1/2).
Our aim will be to obtain an estimate on the stochastic convolution in
(<ref>) that depends on ‖ z_t ‖_H^γ̄
(so that we have improved the regularity from γ̄ to γ̄ + β) and then use Proposition <ref>
to control the H^γ̄ norm, since γ̄∈
(0, 1/2). To estimate uniformly in time the stochastic integral we use the
so-called “factorisation method”, in which one rewrites the convolution as
follows for any α∈ (0, 1) and an appropriate normalisation constant
c_α > 0:
∫_0^t e^(t -s)𝒜 σ( u_s , dW_s) = c_α F_t ,
where F_t is given by
F_t = ∫_0^t (t -s)^α-1 e^(t -s)𝒜 ( ∫_0^s (s-r)^-α e^(s -r)𝒜 σ( u_r , dW_r) ) ds .
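As a side remark (this is the standard computation behind the factorisation method, recorded here only for the reader's convenience), the constant c_α can be identified explicitly: exchanging the order of integration in the definition of F_t leaves the inner integral
∫_r^t (t -s)^α-1 (s-r)^-α ds = ∫_0^1 (1-v)^α-1 v^-α dv = B(α, 1-α) = π/sin(πα) ,
for every 0 ⩽ r < t (substitute s = r + v(t-r)), so that c_α = sin(πα)/π.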
Then via Lemma <ref> we bound for t ⩾ 0 and G_s = ∫_0^s (s-r)^-α e^(s -r)𝒜 σ( u_{r ∧ T} , dW_r) (note that since we have stopped u at time T, the process G_s is defined for all s ⩾ 0):
‖ F_t ‖_H^γ̄+β ≲ ∫_0^t (t - s)^α-1-β/(2𝐚) ‖ G_s ‖_H^γ̄ ds .
Since β/(2𝐚) < α by (<ref>), there exists a p (α, β) ∈ (1, ∞) such that for p^' the conjugate exponent satisfying 1/p + 1/p^' = 1, we have
∫_0^t (t - s)^α-1-β/(2𝐚) ‖ G_s ‖_H^γ̄ ds ⩽ t^q ‖ G ‖_L^p^'([0,t]; H^γ̄) ,
where p is chosen sufficiently close to 1 such that
q = 1/p [ p (α - 1 - β/(2𝐚) ) + 1 ] = α - 1 - β/(2𝐚) + 1/p > 0 .
Therefore, we can bound
sup_0 ⩽ t ⩽ T ‖ F_t ‖_H^γ̄+β ≲ T^q ‖ G ‖_L^p^' ([0, T]; H^γ̄) .
Then, since T ⩽ 3 (see Lemma <ref>) and assuming
without loss of generality that n ⩾ p^', we
use Jensen's inequality to obtain
𝔼[ sup_0 ⩽ t ⩽ T ‖ F_t ‖^n_H^γ̄+β ] ≲ sup_0 ⩽ t ⩽ 3 𝔼[ ‖ G_t ‖_H^γ̄^n ] .
Now for the last term we use the vector-valued BDG inequality (see for
example <cit.>) to obtain
[eqn:bdg]
𝔼[ ‖ G_t^t ‖_H^γ̄^n ] ≲ 𝔼[ ⟨ G^t ⟩_t^n/2 ] .
Here G^t_s is the martingale
G^t_s = ∫_0^s (t-r)^-α e^(s -r)𝒜 σ( u_{r ∧ T} , dW_r) , ∀ s ∈ [0, t] ,
and ⟨ G^t ⟩ indicates the scalar quadratic variation computed with
respect to the Hilbert space H^γ̄, namely the unique
continuous increasing process s ↦ A_s^t, for s ∈ [0, t], such that ‖ G^t_s ‖_H^γ̄^2 - A_s^t /2 is a martingale on [0, t]. In our setting, the quadratic variation is given by
d⟨ G^t ⟩_s = 2 (t -s)^- 2 α Q (u_{s ∧ T}) ds ,
where Q is defined in (<ref>), only in the present case with L = 0 and γ replaced by γ̄.
In particular, following the calculation that leads to (<ref>)
(the only difference being that since L = 0 the terms Δ^α, β_L are not present), we obtain that
d⟨ G^t ⟩_s ≲ (t-s)^- 2 α (1 + ‖ w_s ‖^2_H^γ̄) ds .
Therefore, by (<ref>) and since α∈ (0, 1/2) by assumption we can conclude that
𝔼[ ‖ G_t^t ‖^n_H^γ̄ ]
≲ ( M^{n γ̄} + 𝔼[ sup_0 ⩽ s ⩽ T ‖ w_s ‖^n_γ̄, M ] ) · ( ∫_0^t (t -s)^- 2 α ds )^n/2
≲_α M^{n γ̄} + 𝔼[ sup_0 ⩽ s ⩽ T ‖ w_s ‖^n_γ̄, M ] .
Now we use the uniform bound (<ref>) from Proposition <ref> to
conclude that
sup_0 ⩽ t ⩽ 3 𝔼[ ‖ G_t^t ‖^n_H^γ̄ ] ≲_n F (1/2, _T_i) ,
from which all the desired estimates follow.
§ UNIQUENESS OF THE INVARIANT MEASURE
Our aim is to apply Harris' theorem to the
Markov process ( [π_t])_t ⩾ 0. Throughout this section we work
under Assumption <ref>.
We denote by (𝒫_t)_t ⩾ 0 the transition probabilities of [π_t] on 𝐏:
𝒫_t([π], A) = ℙ( [π_t] ∈ A | [π_0] = [π] ) , ∀ A
⊆𝐏, [π] ∈𝐏 .
Harris' theorem applies if the Markov process possesses a Lyapunov functional,
which is the case by Theorem <ref>, and if level sets of the
Lyapunov functional satisfy a “smallness” property, which we now recall.
For any t > 0 we say that a set A ⊆𝐏 is small for 𝒫_t if there exists δ∈ (0, 1) such that
‖𝒫_t([π], ·) - 𝒫_t([ν], ·) ‖_TV, 𝐏 ⩽ 1- δ , ∀ [π], [ν] ∈ A .
Recall that in (<ref>) we have defined the total variation distance between two positive
measures by ‖μ - ν‖_TV = 1/2 sup_A | μ(A) - ν (A) | (the supremum running over measurable sets A), so with an additional 1/2 normalisation
factor in contrast to the “usual” definition; for instance, the distance between two distinct Dirac masses is then 1 rather than 2.
For convenience, we also state Harris' theorem below, adjusted to our setting. See
for example <cit.>.
Consider the Lyapunov functional G
as in Theorem <ref>. If there exists t_⋆ > 0 such that for every R > 0 the set
V_R = { [π] ∈𝐏 G([π]) ⩽R }
is small for _ t_⋆, then there exists a unique invariant measure μ_∞ for the Markov process [π_t], and it satisfies
for some C, γ > 0
_t ([π], ·) - μ_∞ _TV,
𝐏 ⩽CG([π]) e^- γt , ∀
[ π] ∈𝐏 , t ⩾0 .
Hence, the rest of this section is devoted to establishing the smallness property of level
sets of the Lyapunov functional, from which the spectral gap in Theorem <ref> promptly
follows. This is a standard consequence of the strong Feller property (Lemma <ref>)
and controllability (Lemma <ref>).
In the setting of Theorem <ref>, and in particular under
Assumption <ref>, there exists a t_⋆ > 0 such that for every R > 0,
V_R is small for _t_⋆.
The first step is to establish the smallness property
locally around [π] ≡ [1]. Here we write 1 for the unit function
^d∋ x ↦ (1, …, 1) ∈^m .
For this reason we define
B_ = {u ∈H^γ_0 u - 1
_H^γ_0 < } ,
the ball of radius ∈ (0, 1) about 1 in the H^γ_0 topology, for γ_0 > d/2 as in
Assumption <ref>, so that in particular by Sobolev
embedding H^γ_0⊆ C (^d): this will be essential for the
invertibility of the multiplication operator further on. Since we are interested in the
projective dynamic we also define
[eqn:b-ve-proj]
B_^proj = { [π] π∈B_ } ⊆𝐏 .
By Lemma <ref> below, for any δ∈
(0, 1) there exists ∈ (0, 1) such that
_1 ([π], ·) - _1([ν], ·) _TV,
𝐏 < 1 - δ ,
∀ [π], [ν] ∈B_^proj .
Next, by Lemma <ref> we find an s_⋆ >0 such
that for any R > 0 there exists
δ^' for which
_s_⋆([π] , B_^proj) ⩾δ^'
, ∀ [π] ∈V_R .
It follows immediately that for any [π], [ν] ∈ V_R,
‖𝒫_s_⋆+1( [π] , ·) - 𝒫_s_⋆+1( [ν] , ·) ‖_TV, 𝐏 ≤ δ^' (1-δ) + 1-δ^' = 1-δδ^' ;
indeed, both 𝒫_s_⋆([π], ·) and 𝒫_s_⋆([ν], ·) assign mass at least δ^' to B_ε^proj, so each can be decomposed into a component of mass δ^' supported on B_ε^proj plus a remainder, and applying (<ref>) to the first components (and the trivial bound 1 to the remainders) yields the display, thus concluding the proof.
The next lemma establishes the small set property locally around the point
π≡ 1: the result follows by proving the strong Feller property for
the solution to a new SPDE, which coincides with the linear SPDE
(<ref>) for u_0 close to u≡ 1.
In the setting of Theorem <ref>, and in particular under
Assumption <ref>, for any δ∈ (0, 1) and t > 0 there exists ∈ (0, 1) such that
[e:Doeblin]
_t ([π], ·) - _t([ν], ·) _TV,
𝐏 < 1 - δ ,
∀ [π], [ν] ∈B_^proj ,
with B_^proj (depending on γ_0 as in
Assumption <ref>) defined in (<ref>).
In the upcoming proof, for a map f: X → Y between two Banach spaces X, Y, we define its
Fréchet derivative D f(x) ∈𝐋(X,Y) as the bounded linear map X→ Y such that
lim_‖ h ‖_X →0 ‖ f (x + h) - f (x) - D f (x)h ‖_Y / ‖ h ‖_X = 0 , ∀ x ∈ X ,
provided that it exists.
We say that a functional f as above is of class C^m, for m ∈, if it is m times Fréchet differentiable (the derivatives being not
necessarily uniformly bounded), and that it is smooth if it lies in C^m for all m ∈.
Let us start by reducing the problem to the study of the strong Feller
property of (<ref>). Since [π_t] is a functional of π_t, which in turn is a functional of u_t, if we denote with _t(u, ·) the law of the solution u_t to (<ref>) with initial condition u_0 = u ∈
L^2, then for π, ν∈ S
_t([π], ·) - _t([ν], ·) _TV,
𝐏 ⩽
_t(π, ·) - _t(ν, ·) _TV, L^2 ,
so that it suffices to check (<ref>) for _t, uniformly over B_ = { u ∈ H^γ_0 u - 1
_H^γ_0 < }, again with γ_0 as in Assumption <ref>.
We localise our argument around B_ by
constructing a dynamic that coincides with that of u_t only on B_. Since we are working under Assumption <ref>, we can
represent u_t as the solution to
u_t^α = - ν^α (- Δ)^ u_t^α t
+( G(u_t) ·ξ)^α , ∀α∈{ 1, …,
m } ,
where (G(u) ·ξ)^α = ∑_β =1^m G^α, β
(u) ξ^α, β, and where ξ is a space-time white noise,
that is a homogeneous Gaussian field with, formally, the covariance structure
[ ξ^α, β (x) ξ^α^' , β^'
(y)] = _α = α^'_β = β^'δ
(x-y) ,
and G^α, β(u) L^2(^d; ) → L^2(^d; ) is given in Fourier coordinates by
[G^α, β (u ) φ] (k) = ∑_l û^β( k - l)
Θ^α, β_l φ̂(l) ,
where we have defined Θ^α, β_l = (Γ^α,
β_l)^1/2. In other words, G^α, β (u) =
M_u^β∘ K_Θ^α, β, where M_u L^2→ L^2 is
the multiplication operator in physical coordinates M_u^βφ =
u^βφ and K_Θ^α, β L^2→ L^2 is the multiplication operator in Fourier coordinates [K_Θ^α, βφ ](k) = Θ^α, β_kφ̂(k). We note that for u ∈ H^γ_0(^d; ^m) fixed
we have G^α, β( u) ∈𝐋 (L^2(^d; )), where 𝐋(X) is the space of bounded linear operators from a Banach space X into itself, since we have
G^α, β (u) φ⩽ u _∞
K_Θ^α, βφ≲ u _H^γ_0
K_Θ^α, βφ≲_Γ u
_H^γ_0φ ,
by Sobolev embedding, since H^γ_0(^d; ^m) ⊆ C(^d;
^m), and by Assumption <ref> on the noise
coefficients (here we have merely used that the coefficients are bounded).
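Purely as an illustration of this structure (it plays no role in the argument), in a spatially discretised setting the operator G^α, β(u) amounts to one multiplication in Fourier space followed by one in physical space; a minimal Python/numpy sketch, where Theta is assumed to hold the multipliers Θ^α, β_k arranged in FFT ordering, reads:

import numpy as np

def apply_G(u_beta, Theta, phi):
    """G(u)phi = u^beta * (K_Theta phi): multiply the Fourier coefficients of
    phi by Theta, transform back, then multiply pointwise by u^beta."""
    phi_hat = np.fft.fftn(phi)                   # Fourier coefficients of phi
    K_phi = np.fft.ifftn(Theta * phi_hat).real   # K_Theta phi in physical space
    return u_beta * K_phi                        # multiplication operator M_u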
Step 1: Localisation. We now construct an operator-valued map
G^α, β
H^γ_0 (^d; ^m) →𝐋(L^2(^d; )) ,
such that for any u ∈ H^γ_0 (^d; ^m),
the linear operator G^α, β(u) is globally invertible
and satisfies the following properties for some C ∈ (0, ∞):
[eqn:prop-B-bar]
G^α, β(u) = G^α, β(u) , if u ∈B__1 ,
[ G^α, α(u)]^-1
_𝐋(H^γ_0(^d; ); L^2(^d; ))
⩽C , ∀u ∈H^γ_0 .
In addition, the parameter ε_1∈ (0, 1) must be chosen small enough; as it turns out, a
sufficient choice is given by
ε_1 = C_s^-1 /4 ,
where C_s is the constant in the continuous embedding H^γ_0 (𝕋^d; ℝ^m) ⊆ C(𝕋^d; ℝ^m), so that ‖φ‖_∞⩽ C_s ‖φ‖_H^γ_0.
We can construct a G^α , β with the desired properties as
follows. We choose a smooth functional ϱ H^γ_0→ [0, 1], such that
ϱ(u) = 1 if u ∈B__1 ,
0 if u ∈B_2 _1^c .
Such a choice of ϱ is always possible, for example because H^γ_0 is a Hilbert
space and one can define ϱ (u) = ϱ̃ ( ‖ u -1 ‖^2_H^γ_0) for a suitable smooth and compactly supported
function ϱ̃: ℝ→ [0,1].
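(As a sanity check — this is a standard Hilbert-space computation, and ϱ̃ is simply the auxiliary profile introduced above — the chain rule gives
D ϱ(u) h = 2 ϱ̃^' ( ‖ u - 1 ‖^2_H^γ_0 ) ⟨ u - 1, h ⟩_H^γ_0 ,
which is bounded by C ‖ h ‖_H^γ_0 uniformly over u, since ϱ̃^' is bounded and vanishes whenever ‖ u - 1 ‖_H^γ_0 is large; higher order derivatives are controlled in the same way, so ϱ is indeed smooth in the sense recalled above.)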
Then we define G^α, β by
G^α, β (u) = ϱ(u) ·(M_u^β ∘K_Θ^α, β) + (1 -
ϱ(u)) ·K_Θ^α, β ,
where the dot indicates multiplication with a scalar.
If we introduce the map
H^γ_0(^d; ^m) →H^γ_0(^d; ^m) , (u) =
ϱ(u) u + (1 - ϱ(u)) 1 ,
then we can rewrite G^α, β as G^α, β(u) = M_(u)^β∘
K_Θ^α, β.
In particular, the Fréchet derivative with respect to u of G^α, β can be computed as follows, for h ∈ H^γ_0:
[e:dG]
D G^α, β (u) = M_(D (u) h)^β ∘K_Θ^α, β ∈𝐋(L^2 (^d; )) .
Here the derivative of is given by
D (u) h = (D ϱ(u) h ) u +
ϱ(u) h - (D ϱ(u) h) 1 ∈H^γ_0 ,
so that for some C > 0
[eqn:bdd-derivative]
(u) _H^γ_0 ⩽1 , D
(u) h _H^γ_0 ⩽C h
_H^γ_0 .
The fact that D G^α, β (u) ∈𝐋 (L^2 (^d; )) follows from (<ref>) and
the same calculation as in (<ref>).
Hence, with this definition, G^α, β satisfies the first
property in (<ref>). Let us check that it satisfies also the
second one. For the inverse we have
G^α, α (u)^-1 = M_ [(u)^α]^-1 ∘K_Θ^-1^α, α ,
where we have defined for any α, β∈{ 1, …, m }
[(u)^β]^-1 (x) = 1/[(u)^β(x)] ,
[K_Θ^-1^α, βφ ](k) = ( Θ^α,
β_k)^-1φ̂(k) .
The latter operator is defined in view of the lower bound on the correlation
coefficients in Assumption <ref>. In particular, we can
follow the same calculations as in (<ref>) to obtain
that G^α, α (u)^-1∈𝐋
(H^γ_0 (^d; ), L^2(^d; )). More precisely, we can bound:
G^α, α (u)^-1φ_L^2≲^-1 (u) _∞ K^α, α_Θ^-1φ≲_Γφ_H^γ_0 ,
since (u) - 1 _∞⩽ C_s(u) -1
_H^γ_0⩽ 1/2, by Sobolev
embedding and by our choice of _1 in (<ref>), and where in the
last step we used the lower bound on the noise coefficients in
Assumption <ref>. Hence overall the nonlinearity G that we
have constructed does indeed satisfy (<ref>) (the
first requirement of (<ref>)is satisfied by construction).
Now the proof of the strong Feller property follows closely the proof of
<cit.>. In particular, we observe
that Hypothesis 7.1.(iv) regarding the Hilbert–Schmidt norm of the semigroup
s ↦ e^-s (- Δ)^
in the quoted book corresponds to our assumption 𝐚 > d/2 (with the Hilbert space H being L^2
(^d; ^m)).
Step 2: Properties of the localised dynamic. Let us consider the
solution u to the nonlinear and nonlocal equation
[eqn:localized]
u_t^α = - ν^α (- Δ)^
u_t^α t + ( G (
u_t) ·ξ)^α , u_0 ∈H^γ_0 , α∈{ 1, …, m } .
Here, as usual, we have defined ( G ( u) ·ξ)^α =
∑_β =1^mG^α, β ( u) ξ^α, β.
Our objective is to obtain some a priori bounds on the solution: in particular
we would like to guarantee that with high probability, at least for small
times, the solution remains close to 1, if it is started in its
neighbourhood. Such estimates are classical, therefore we do not enter into
details. Instead, since (<ref>) is not dissimilar from
(<ref>), we refer to the proof of Lemma <ref> to obtain that,
since we are assuming crucially 𝐚 > d/2 (so that we can choose γ =
γ_0 in Lemma <ref>) there exist C, ζ^' > 0 such
that
( 𝔼[ sup_0 ⩽ s ⩽ t ‖ u_s -1 ‖_H^γ_0^2 ] )^1/2 ⩽
‖ u_0 -1 ‖_H^γ_0 + C t^ζ^' .
One consequence of this bound is that if we define τ to be the
stopping time
τ= inf{ t ⩾0 u_t ∉B__1 } ,
then for any δ∈ (0, 1), we find a (deterministic) t ∈ (0,
1) such that
sup{ _u_0 ( τ< s ) u_0 ∈B__1/2 , s
⩽t } ⩽δ/ 2 .
Now, for any u_0 = u_0∈ B__1/2 we have u_t =
u_t up to the stopping time τ.
Therefore, uniformly over s ⩽ t and u, v
∈ B_, with ∈ (0, _1/2), we find that
_s (u , ·) - _s(v, ·) _TV, L^2 ⩽
_s(u, ·) - _s(v, ·)
_TV, L^2 + δ/2 .
In particular, the lemma is now proven if we show that for any s ∈
(0, t) there exists ∈ (0, 1/4) such that
[e:fnl]
sup_u, v ∈B_ _s(u, ·) -
_s(v, ·) _TV, L^2 ⩽1 - 3δ2 .
Step 3: The Bismut–Elworthy–Li formula. Finally, we set out to prove
(<ref>), which follows if we can establish the strong Feller property of u_t. Here we follow a classical approach via the
Bismut–Elworthy–Li formula, see for example <cit.>, which reads
D_u _t (u, ψ) h
= _u [
ψ( u_t) 1/t ∑_α=1^m ∫_0^t ∫_^d [
[G^α, α (
u_s)]^-1 (D_u u_s^α h ) ] (x)
·ξ^α, α ( x, s)] .
Here we have written _u for the expectation under the law of u_t started in u_0 = u. Furthermore, D_uu^α denotes the Fréchet derivative of the α-th component of the solution u_t with respect to its initial condition.
In particular, this formula allows us to bound
| D_u _t (u, ψ) h |^2
≲ ψ_∞^2 t^-2 ∑_α=1^m _u [∫_0^t
∫_^d | [G^α, α (
u_s)]^-1 (D_u u_s^α h) |^2(x) s x ]
≲ ψ_∞^2 t^-2 ∫_0^t _u
( D_u u_s h )
_H^γ_0(^d; ^m)^2 s ,
where in the last line we have used the second property of G in
(<ref>).
Hence to conclude we have to find a bound on the H^γ_0
(^d; ^m) norm of
the differential of the flow to (<ref>). If we set r_t^α = D_uu_t^α h we find that
r_t^α = - ν^α (- Δ)^ r_t^α t +
( D G ( u_t)
r_t ·ξ)^α , r_0^α =
h^α ∈H^γ_0(^d; ) ,
where we have written as usual ( D G ( u_t)
r_t·ξ)^α = ∑_β=1^m(D G^α, β ( u_t) r_t) ξ^α, β.
The equation for r can in turn can be rewritten via (<ref>) as
r_t^α = - ν^α (- Δ)^ r_t^α t
+ ∑_β=1^m M_( D ( u_t) h)^β ∘K_Θ^α, β ξ^α, β
.
Now, for simplicity let us define for u_0, h ∈ H^γ_0 fixed
v_t = D ( u_t) h.
Then, if we rewrite the equation for r_t in its mild formulation, we
can follow the same calculations as in Lemma <ref> to obtain for some ζ^' > 0 and γ_1 < γ_0
r_t _H^γ_0^2 ≲ r_0
_H^γ_0^2 + t^ζ sup_0 ⩽s ⩽t
v_s _H^γ_1^2 ≲
h _H^γ_0^2(1 +t^ζ^') ,
where in the last step we have used (<ref>) to bound v_s.
We are now ready to conclude our argument. From (<ref>) we deduce that
sup_ h _H^γ_0 ⩽1 | D_u
_t (u , ψ) h | ≲ ψ_∞^2 t^- 1 (1 + t^ζ^' ) ,
so that for some C>0 and uniformly over u, v ∈ B_
| _t (u, A) - _t (v, A) | ⩽C t^- 1 (1
+ t^ζ^' ) u - v _H^γ_0 ⩽C t^- 1 (1 +
t^ζ^') ,
from which we deduce the claim.
The second ingredient used to establish the small set property is a
form of controllability.
In the setting of Theorem <ref>, and in particular under
Assumption <ref>, for any ∈ (0, 1
), R > 0 there exist s_⋆ > 0
and δ^'∈ (0, 1) such that
_s_⋆([π] , B_^proj) ⩾δ^'
, ∀ [π] ∈V_R .
The result follows from the support theorem for (<ref>) by solving a
control problem.
To this aim, for any s_⋆ >0 (we will fix this parameter later on), we denote by 𝒞ℳ⊆ L^2(^d; (^m)^⊗ 2) the
Cameron–Martin space associated to the noise Ẇ on the time
interval [0, s_⋆] (for convenience, we omit the dependence on s_⋆ in the notation), characterised by the norm
h _𝒞ℳ^2 = ∫_0^s_⋆ ∑_α, β=1^m ∑_k ∈^d (Γ^α, β_k)^-1 |
ĥ^α, β_s(k) |^2 s .
Then for h ∈𝒞 ℳ, we denote by Φ_t^hπ the flow of the following deterministic, controlled
equation, for any initial datum u ∈ L^2:
∂_tΦ_t^h u = Φ_t^h
u + h Φ_t^h u, Φ_0^h
u = u , ∀ t ∈ [0, s_⋆] .
Now the support theorem, see for example <cit.> (the setting of this work is slightly
more complex than ours, since the noise is space-time white, but the
key technical result <cit.> adapts
to our setting), guarantees that
( _s_⋆(u, ·) ) = {Φ_s_⋆^h u h ∈𝒞ℳ}^H^γ_0 , ∀
u ∈ H^γ_0(^d; ^m) .
Here we indicate with _s (u , ·) the law in H^γ_0 of the solution u_s to (<ref>) started in u. Recall that the support of a positive measure μ on a metric space
(, d) is given by
(μ) = { x ∈ μ (B)> 0 for all open sets
B such that x ∈ B } .
Now we observe that if we prove that for B^sym_ = B_∪ (- B_) we have
B_/2^sym∩ (_s_⋆(π, ·)) ≠ 0 ,
∀ π∈ V_R ,
then also B_/2^proj∈(_s_⋆([π], ·)) for all π∈ S ∩ V_R. In
particular, if we show that (<ref>) is true, then our
result holds in view of Lemma <ref>, which guarantees the lower
bound of the probability to be uniform over π∈ V_R (since V_R is compact in S and B_^sym is open in H^γ_0, so that _t([π], B_^proj) is
lower-semicontinuous in [π] by the Portmanteau theorem). Therefore, our aim is now to prove
(<ref>), and in particular it suffices to show that
there exists an s_⋆ (R) > 0 such that for each π∈ V_R we can find a control h satisfying
[eqn:aim-ctrl-har]
Φ^h_s_⋆ (π) /
Φ^h_s_⋆ (π) ∈B_/2^sym , ∀π∈V_R .
One possible construction of such h is as follows.
We start by observing that from the definition of V_R we have
M = M (π) ⩽⌊κ_0^-1 log R ⌋ = n_R ∈ℕ ,
with κ_0 as in Theorem <ref>. In particular, there
exists an α_0∈{ 1, …, m } such that ‖Π_M(π)π^α_0‖⩾η, for some deterministic η > 0, where here and below Π_L and Π^⊥_L denote the projections onto the frequencies below and above level L respectively. Our argument then proceeds in two steps. The main point will be
to prove that there exists a time s_⋆^(1) > 0 such that, for an
appropriate control h, at time s_⋆^(1) we have ‖Π_1 π_s_⋆^(1)^α_0‖⩾‖Π^⊥_1 π_s_⋆^(1)^α_0‖. That is, at
least the α_0-th component is concentrated in the 0-th frequency
level. We then use this result to prove that by a time s_⋆^(2) >
s_⋆^(1) and for an appropriate choice of h we have ‖Π_1 π_s_⋆^(2)‖⩾‖Π^⊥_1 π_s_⋆^(2)‖, so that all components are now close to the unit,
at least in L^2 norm.
We start by constructing the control h on the first time step [0,
s_⋆^(1)). Let us fix a time
horizon t_⋆∈ (0, 1) to be fixed later on. Then consider
h^α_0, α_0(t, x) = P_tΠ_M^π^α_0 (x)
, h^α, β (t, x) = 0 , ∀ (α, β) ≠
(α_0, α_0) , t ∈ [0, t_⋆) ,
where P_t is the semigroup P_t= e^- ν^α_0 (- Δ)^t.
Note that such h lies in the Cameron–Martin space by
Assumption <ref>. Further, there exists a constant C(R) >0 such that
sup_0 ⩽ s ⩽ 1{Φ^h_sπ
+ h_s_∞}⩽ C(R) .
Then we can compute
∫_^dΦ^h_tπ (x) x = ∫_^dπ (x) x +
∫_0^t∫_^d h_s(x) [ P_sπ + ∫_0^s P_s -r
(h_r u_r) r ] (x) x s .
We can estimate the last term as follows:
∫_0^t_⋆ ∫_^d h_s(x) [ P_sπ + ∫_0^s P_s -r
(h_r u_r) r ] (x) x s
⩾∫_0^t_⋆ P_sΠ_M^π^2 s -
∫_0^t_⋆ h_s_∞∫_0^s h_r_∞
u_r r s
⩾ e^- 2 ζ_M t_⋆ t_⋆η -
t^2_⋆/2 C^3⩾ c_0(η, δ, R) > 0
for some constant c_0(η, δ, R), provided that δ is chosen
sufficiently small. Now assume that |∫_^dπ (x) x
| < c_0/2 (otherwise set, instead of the present choice of h, the trivial h = 0 on [0, t_⋆)). In this way we have
|∫_^dΦ^h_t_⋆π (x) x |⩾ c_0/2 .
Now we set h = 0 on [t_⋆, s_⋆^(1)); by
choosing s_⋆^(1) sufficiently large we obtain the desired ‖Π_1 (Φ^h_s_⋆^(1)π )^α_0‖⩾‖Π^⊥_1 (Φ^h_s_⋆^(1)π )^α_0‖.
Therefore we find that at time s_⋆^(1) (up to choosing a
larger s_⋆^(1))
π_s_⋆^(1) = Φ^h_s_⋆^(1)π/Φ^h_s_⋆^(1)π
satisfies π_s_⋆^(1) - κ⩽/2, where κ∈ L^2(^d; ^m) is the constant vector κ =
(κ_α)_α =1^m and by our previous discussion there exists a deterministic κ such that |κ^α_0| ⩾κ>0.
Finally, by choosing any s_⋆^(2) > s_⋆^(1), and since κ is nonzero, we can construct a time-independent control h on [s_⋆^(1), s_⋆^(2)) for the ODE
∂_t y^α = ∑_β=1^m y^β h^α, β ,
y^α_s_⋆^(1) = κ^α ,
such that y_s_⋆^(2)^α =
y_s_⋆^(1)^α_0 for all α∈{ 1,
…,m}. Up to choosing sufficiently small, this guarantees that
under such choice of control
min{Φ^h_s_⋆^(1)π / Φ^h_s_⋆^(1)π - 1 , Φ^h_s_⋆^(1)π / Φ^h_s_⋆^(1)π + 1 }⩽ .
Finally, by Schauder estimates for a sufficiently
large time s_⋆> s_⋆^(2) and h =0 on [s_⋆^(2), s_⋆], we find π_s_⋆∈ B^sym_ as desired.
§ BASIC PROPERTIES OF THE ANGULAR COMPONENT
The first result of this section establishes well-posedness and continuity with
respect to the initial data for (<ref>).
Under the assumptions of Theorem <ref>, for any u_0∈ L^2
(𝕋^d; ℝ^m) there exists a unique mild solution to (<ref>) for all t ⩾
0. Furthermore, for every
γ∈ [0, 𝐚 + γ_0 - d/2)
and p ⩾ 2,
T > 0 there exists a C(T, γ, p ) such that
𝔼[ sup_0 ⩽ s ⩽ T ‖ u_s ‖_H^γ^p ]^1/p ⩽ C(T, γ, p)
‖ u_0 ‖_H^γ .
This result is entirely classical, so we refrain from proving it completely. We
only provide a short description of how the condition γ < 𝐚 +
γ_0 - d/2 appears. We can represent the solution u_t in mild form
as
u_t = e^t𝒜 u_0 + ∫_0^t e^(t -s)𝒜 u_s· dW_s ,
where e^t𝒜 φ^α = exp ( - ν^α(- Δ)^𝐚 t)
φ^α for every α∈{ 1, …, m }. Then the crucial point of the
proof is to estimate the stochastic convolution appearing above. Here we use
the so-called “factorisation method” (see also the proof of
Lemma <ref>).
Namely, if for some ζ∈ (0, 1) we
define (assuming for the moment that the integral is defined, which will be
justified below)
_s =∫_0^s(s-r)^- ζe^(s -r) u_r·
W_r ,
then we find that for some constant c_ζ∈ (0, 1)
∫_0^s e^(s-r) u_r· W_r = c_ζ∫_0^s (s -r)^ζ -1 e^(s
- r) _r r .
Therefore, by Hölder's inequality and Lemma <ref> we find that
sup_0
⩽ s ⩽ t ∫_0^s e^(s-r) u_r·
W_r_H^γ
≲sup_0 ⩽ s ⩽ t∫_0^s (s -r)^ζ -1 - (γ - γ_1) /2_r_H^γ_1 r
≲sup_0
⩽ s ⩽ t( ∫_0^s (s - r)^(ζ - 1 -
(γ - γ_1) /2) p r
)^1/p·( ∫_0^s_r_H^γ_1^q r )^1/q ,
where γ_1 < γ and p,q ∈ [1, ∞] are conjugate, so
that 1/p + 1/q =1.
Now, to close our argument we must choose appropriate parameters γ_1, ζ and p. First, we fix any arbitrary γ_1∈
(0, γ) such that
γ - < γ_1 < γ_0 - d/2 ,
which is possible in view of our assumption γ < + γ_0 - d/2. Next we fix ζ∈ (0, 1/2) such
that ζ - (γ - γ_1)/2 > 0. This choice is now possible
in view of the upper bound in (<ref>). Finally, we choose p ∈
(1, ∞) sufficiently close to 1 such that
ζ^'-1 (ζ - 1 - (γ_0 - γ_1) /2) p > -1 ,
which is now possible in view of all our choices above. With these choices
taken, we can now conclude that
[ sup_0
⩽ s ⩽ t∫_0^s e^(s-r) u_r·
W_r_H^γ_0^2] ≲ t^2 ζ^'[
( ∫_0^t_s_H^γ_1^q s )^2/q] .
The result therefore follows if we can prove that for any t > 0 and any q ∈ (2, ∞)
sup_0 ⩽s ⩽t _s _H^
γ_1^q < ∞ .
Since _s is a stochastic integral, by BDG this reduces to bounding
its quadratic variation, as _s_H^γ_1^q≲⟨^s⟩^q/2_s, where ⟨·⟩ denotes the scalar quadratic variation of an H^γ_1-valued martingale (see also the analogous argument leading to
(<ref>)), and for 0 ⩽ r ⩽ s we write ^s_r=∫_0^r(s-h)^- ζe^(s -h) u_h· W_h.
In our setting, we can bound the quadratic variation as follows for any t > 0 and following the notation of Remark <ref>:
⟨^t ⟩_t ⩽∑_k, l ∈^d (1 + | k |)^2 γ_1 ∫_0^t
(t -s)^- 2 ζ
e^- 2 ν_min(t -s)| k |^2
| û_s^k- l|^2
Γ_l s
≲∫_0^t (t -s)^- 2 ζ∑_k ∈^d (1 + | k |)^2 γ_1 ∑_l ∈^d Γ_l^2
| û^k- l_s |^2 s .
Now we use that by assumption Γ_l≲ (1 + | k |)^- 2 γ_0 together with
Lemma <ref>, so that
⟨^t⟩_t ≲_Γ t^1 - 2 ζsup_0 ⩽ s ⩽ t∑_k , l ∈^d (1 + | l
|)^2(γ_1- γ_0) (1 + | k-l
|)^2γ_1 | û_s^k-l|^2
≲_Γ t^1 - 2 ζsup_0 ⩽ s ⩽ t
u_s_H^γ_1^2 ,
where by (<ref>) we have that 2(γ_0- γ_1) > d.
From here on the study of well-posedness of the equation follows classical lines.
The next result guarantees that the angular component is
defined for all positive times. This result is not completely obvious: as a matter
of fact we make use of the full strength of the results in the previous
sections (including the higher order regularity estimates in
Proposition <ref>). We do however believe that this is overkill and
that one could get such a result as a consequence of a pathwise form of backwards uniqueness.
Unfortunately we were unable to find such a result in the existing literature, although
<cit.> covers the case 𝐚 = 1 and m = 1.
Under the assumptions of Theorem <ref>, for all u_0∈ L^2_⋆(𝕋^d; ℝ^m), the solution t ↦ u_t to (<ref>) almost surely satisfies u_t≠ 0 for all t ⩾ 0. In addition, for every ε∈
(0, 1), t ⩾ 0 and R ⩾ 1 there exists a δ (ε, R, t) ∈ (0, 1) such that
[e:ubd]
sup_u_0 ∈ S^𝐚_R ℙ( sup_0 ⩽ s ⩽ t ‖ u_s ‖⩽δ) ⩽ε ,
with
S^𝐚_R = S ∩{ φ∈ H^𝐚 : ‖φ‖_H^𝐚 ⩽ R } .
Finally, the process t ↦π_t = u_t / ‖ u_t ‖ is a Markov process.
We restrict to proving the claim in (<ref>). The fact that for u_0∈ L^2_⋆, almost surely u_t≠ 0
for all t ⩾ 0, follows along similar lines: the only difference is
that when considering the skeleton median (M_t)_t ⩾ 0 and the
related stopping times we must stop at the potential hitting time τ_0 = inf{ t ⩾ 0 u_t = 0 }, or else the
process defined in the previous sections might not be defined.
As for (<ref>), consider an initial condition u_0∈ S^_R. Our aim will be to use (<ref>) via an upper bound on
the following quantity (see the proof of Corollary <ref>
for a somewhat similar argument):
[ sup_0 ⩽s ⩽t π_s _H^^2 ]
,
where t ↦π_t = u_t / u_t is the angular component
associated to u.
To obtain an estimate on the quantity above we follow roughly the proof of Proposition <ref>.
Indeed, for any i ∈ consider the probability 𝐩_t (i) = ( t ∈ [
T_i, T_i +1)). Then we can bound
[ sup_0 ⩽s ⩽t π_s _H^^2 ]
⩽∑_i ∈ √(𝐩_t(i)) ( [ sup_0
⩽s ⩽T_i+1 π_s _H^^4]
)^1/2 .
Now, for every i ∈ we have, similarly to (<ref>) in the proof
of Proposition <ref>
_T_i[ sup_T_i⩽ s < T_i+1π_s_H^^4] ≲ M_T_i^4 + _T_i[
sup_T_i⩽ s < T_i+1 w_s_, M_T_i^4]
≲F(1/2, _T_i ) + w_T_i_,
M_T_i^4 ,
where the last bound follows from Lemma <ref>. Therefore, following verbatim the proof of
Proposition <ref> we obtain that for every i ∈∖{ 0 } and some constant C () (uniform over u_0 and i)
[ sup_T_i⩽ s < T_i+1π_s_H^^4] ⩽C () G ( π_0) , [ sup_0 ⩽ s < T_1π_s_H^^4] ⩽C () π_0_H^^4 .
Now, if we bound
[ sup_0
⩽ s ⩽ T_i+1π_s_H^^4]
⩽∑_j = 0^i[ sup_T_j⩽ s ⩽ T_j+1π_s_H^^4] ,
then collecting all our estimates we can then conclude via
Lemma <ref> that
[e:bdfin]
[ sup_0 ⩽s ⩽t π_s _H^^2 ]
≲e^c u_0 _H^^2 ∑_i √(
𝐩_t ( i) ·i) ≲_t e^c u_0 _H^^2 .
Now to conclude we will make use of (<ref>), which allows us to represent
‖ u_t ‖ = ‖ u_0 ‖ exp( ∫_0^t ⟨π_s, 𝒜π_s ⟩ ds + ∫_0^t ⟨π_s, π_s ·∘ dW_s ⟩)
.
The results follows then immediately from (<ref>), since the stochastic
integral has bounded Itô–Stratonovich corrector and the Itô integral has
bounded quadratic variation: see for example the proof of
Corollary <ref>.
The fact that π_t is a Markov process follows for example from the
representation in (<ref>), since by our argument above Q_(π_t, π_t) is finite
for all t > 0.
The next result guarantees the Feller property of the semigroup associated to
the angular component.
In the setting of Theorem <ref>, fix γ∈ [𝐚, 𝐚 +
γ_0 - d/2). For any π∈ S ∩ H^γ, let _t^γ (π, ·) be the
law in H^γ of the angular component π_t of
(<ref>) started
in π_0 = π. Then the map π↦_t^γ
(π, ·) is continuous from S ∩ H^γ to the space of probability measures on S ∩ H^γ equipped with the topology of weak convergence.
It is sufficient to show that if π^n→π in H^γ, then for every t > 0 one has π_t^n →π_t in probability in H^γ.
We know that u^n_t→ u_t in probability in H^γ by
Lemma <ref> (in fact, we have convergence in L^p(Ω) for any p
⩾ 2), and the uniform
bound in Lemma <ref> implies that in addition
u^n_t / u^n_t→ u_t / u_t in probability, as required.
[AC98]Order
L. Arnold and I. Chueshov.
Order-preserving random dynamical systems: equilibria, attractors,
applications.
Dynam. Stability Systems 13, no. 3, (1998), 265–280.
doi:10.1080/02681119808806264https://dx.doi.org/10.1080/02681119808806264.
[BB21]BianchBlomker21SwiftHohenberg
L. A. Bianchi and D. Blömker.
The impact of white noise on a supercritical bifurcation in the
Swift-Hohenberg equation.
Phys. D 415, (2021), Paper No. 132742, 8.
doi:10.1016/j.physd.2020.132742https://dx.doi.org/10.1016/j.physd.2020.132742.
[BBPS22a]BedrossianBlumenthalSmith22BatchlorSpectrum
J. Bedrossian, A. Blumenthal, and S. Punshon-Smith.
The Batchelor spectrum of passive scalar turbulence in stochastic
fluid mechanics at fixed Reynolds number.
Comm. Pure Appl. Math. 75, no. 6, (2022), 1237–1291.
[BBPS22b]BedrossianBlumenthalSmith22Lagrangian
J. Bedrossian, A. Blumenthal, and S. Punshon-Smith.
Lagrangian chaos and scalar advection in stochastic fluid mechanics.
J. Eur. Math. Soc. (JEMS) 24, no. 6, (2022),
1893–1990.
doi:10.4171/jems/1140https://dx.doi.org/10.4171/jems/1140.
[BBPS22c]BedrossianBlumenthalSmith22LowerBoundLyap
J. Bedrossian, A. Blumenthal, and S. Punshon-Smith.
A regularity method for lower bounds on the Lyapunov exponent for
stochastic differential equations.
Invent. Math. 227, no. 2, (2022), 429–516.
doi:10.1007/s00222-021-01069-7https://dx.doi.org/10.1007/s00222-021-01069-7.
[BCZ17]BedrossianCoti17Enhanced
J. Bedrossian and M. Coti Zelati.
Enhanced dissipation, hypoellipticity, and anomalous small noise
inviscid limits in shear flows.
Arch. Ration. Mech. Anal. 224, no. 3, (2017),
1161–1204.
doi:10.1007/s00205-017-1099-yhttps://dx.doi.org/10.1007/s00205-017-1099-y.
[BMSS95]BallyMilletSole1995Support
V. Bally, A. Millet, and M. Sanz-Solé.
Approximation and support theorem in Hölder norm for parabolic
stochastic partial differential equations.
Ann. Probab. 23, no. 1, (1995), 178–222.
[BMV16]BedrossianMasmoudiVicol16InvDampEnhancedDiss
J. Bedrossian, N. Masmoudi, and V. Vicol.
Enhanced dissipation and inviscid damping in the inviscid limit of
the Navier-Stokes equations near the two dimensional Couette flow.
Arch. Ration. Mech. Anal. 219, no. 3, (2016),
1087–1159.
doi:10.1007/s00205-015-0917-3https://dx.doi.org/10.1007/s00205-015-0917-3.
[Bor16]Boritchev16Burgulence
A. Boritchev.
Multidimensional potential Burgers turbulence.
Comm. Math. Phys. 342, no. 2, (2016), 441–489.
doi:10.1007/s00220-015-2521-7https://dx.doi.org/10.1007/s00220-015-2521-7.
[BPS21]BedrossianSmith21chaos
J. Bedrossian and S. Punshon-Smith.
Chaos in stochastic 2d Galerkin-Navier-Stokes.
arXiv preprint (2021).
arXiv:2106.13748https://arxiv.org/abs/2106.13748.
[BR16]BackwardsUnique
V. Barbu and M. Röckner.
Backward uniqueness of stochastic parabolic like equations driven by
Gaussian multiplicative noise.
Stochastic Process. Appl. 126, no. 7, (2016),
2163–2179.
doi:10.1016/j.spa.2016.01.007https://dx.doi.org/10.1016/j.spa.2016.01.007.
[CF98]crauel1998additive
H. Crauel and F. Flandoli.
Additive noise destroys a pitchfork bifurcation.
Journal of Dynamics and Differential Equations 10,
(1998), 259–274.
[Chu02]Monotone
I. Chueshov.
Monotone random systems theory and applications, vol. 1779 of
Lecture Notes in Mathematics.
Springer-Verlag, Berlin, 2002, viii+234.
doi:10.1007/b83277https://dx.doi.org/10.1007/b83277.
[CZH21]HairerCotiZelati21Lorenz
M. Coti Zelati and M. Hairer.
A noise-induced transition in the Lorenz system.
Comm. Math. Phys. 383, no. 3, (2021), 2243–2274.
doi:10.1007/s00220-021-04000-6https://dx.doi.org/10.1007/s00220-021-04000-6.
[DGK21]DuGuKo21Fluct
A. Dunlap, Y. Gu, and T. Komorowski.
Fluctuations of the KPZ equation on a large torus.
arXiv preprint (2021).
To appear in Commun. Pure Appl. Math.
arXiv:2111.03650https://arxiv.org/abs/2111.03650.
[DGR21]DunlapGrahamRyzhik21Burgers
A. Dunlap, C. Graham, and L. Ryzhik.
Stationary solutions to the stochastic Burgers equation on the
line.
Comm. Math. Phys. 382, no. 2, (2021), 875–949.
doi:10.1007/s00220-021-04025-xhttps://dx.doi.org/10.1007/s00220-021-04025-x.
[DPZ96]DaPratoZabczyk96Ergodicity
G. Da Prato and J. Zabczyk.
Ergodicity for infinite-dimensional systems, vol. 229 of
London Mathematical Society Lecture Note Series.
Cambridge University Press, Cambridge, 1996, xii+339.
doi:10.1017/CBO9780511662829https://dx.doi.org/10.1017/CBO9780511662829.
[Fur63]Furstenberg
H. Furstenberg.
Noncommuting random products.
Trans. Amer. Math. Soc. 108, (1963), 377–428.
doi:10.2307/1993589https://dx.doi.org/10.2307/1993589.
[GT22]gess2022lyapunov
B. Gess and P. Tsatsoulis.
Lyapunov exponents and synchronisation by noise for systems of
SPDEs, 2022.
arXiv:2207.09820https://arxiv.org/abs/2207.09820.
[Hai02]AsCoupling
M. Hairer.
Exponential mixing properties of stochastic PDEs through asymptotic
coupling.
Probab. Theory Related Fields 124, no. 3, (2002),
345–380.
doi:10.1007/s004400200216https://dx.doi.org/10.1007/s004400200216.
[Hai09]Hairer09Heat
M. Hairer.
How hot can a heat bath get?
Comm. Math. Phys. 292, no. 1, (2009), 131–177.
doi:10.1007/s00220-009-0857-6https://dx.doi.org/10.1007/s00220-009-0857-6.
[Has80]MR600653
R. Z. Has'minskiĭ.
Stochastic stability of differential equations, vol. 7 of
Monographs and Textbooks on Mechanics of Solids and Fluids, Mechanics
and Analysis.
Sijthoff & Noordhoff, Alphen aan den Rijn-Germantown, Md., 1980,
xvi+344.
Translated from the Russian by D. Louvish.
[HM06]HairerMattingly06
M. Hairer and J. C. Mattingly.
Ergodicity of the 2D Navier-Stokes equations with degenerate
stochastic forcing.
Ann. of Math. (2) 164, no. 3, (2006), 993–1032.
doi:10.4007/annals.2006.164.993https://dx.doi.org/10.4007/annals.2006.164.993.
[HM11]HairerMattingly11Harris
M. Hairer and J. C. Mattingly.
Yet another look at Harris' ergodic theorem for Markov chains.
In Seminar on Stochastic Analysis, Random Fields and
Applications VI, vol. 63 of Progr. Probab., 109–117.
Birkhäuser/Springer Basel AG, Basel, 2011.
doi:10.1007/978-3-0348-0021-1
_7https://dx.doi.org/10.1007/978-3-0348-0021-1%5C_7.
[KKMS20]MuellerKhoshnevisan20Phase
D. Khoshnevisan, K. Kim, C. Mueller, and
S.-Y. Shiu.
Phase analysis for a family of stochastic reaction-diffusion
equations.
arXiv preprint (2020).
arXiv:2012.12512https://arxiv.org/abs/2012.12512.
[MR16]MarinelliRockner16BDG
C. Marinelli and M. Röckner.
On the maximal inequalities of Burkholder, Davis and Gundy.
Expo. Math. 34, no. 1, (2016), 1–26.
doi:10.1016/j.exmath.2015.01.002https://dx.doi.org/10.1016/j.exmath.2015.01.002.
[Ros21]rosati2021lyapunov
T. Rosati.
Lyapunov exponents in a slow environment.
arXiv preprint (2021).
arXiv:2109.14698https://arxiv.org/abs/2109.14698.
[Ros22]RosatiSynchro
T. Rosati.
Synchronization for KPZ.
Stoch. Dyn. 22, no. 4, (2022), Paper No. 2250010, 46.
doi:10.1142/S0219493722500101https://dx.doi.org/10.1142/S0219493722500101.
[Sin91]Sinai1991Buergers
Y. G. Sinaĭ.
Two results concerning asymptotic behavior of solutions of the
Burgers equation with force.
J. Statist. Phys. 64, no. 1-2, (1991), 1–12.
doi:10.1007/BF01057866https://dx.doi.org/10.1007/BF01057866.
|
http://arxiv.org/abs/2307.04206v1 | 20230709153017 | Multi-mission view of low-luminosity 'obscured' phase of GRS 1915+105 | [
"Athulya M. P.",
"Anuj Nandi"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Multi-mission view of low-luminosity 'obscured' phase of GRS 1915+105
Athulya M. P. and Anuj Nandi
August 12, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
GRS 1915+105 is observed in an `obscured' phase since May 2019, exhibiting steady and low X-ray luminosities, while being intervened by sporadic re-brightenings. In this work, we perform a comprehensive and wide-band analysis of the spectral and timing properties of the source during the period 2019–2021 using AstroSat (SXT: 0.5–8 keV; LAXPC: 3–60 keV), NICER (0.5–12 keV), and NuSTAR (3–60 keV) observations. Spectral analysis reveals the presence of a highly variable obscurer (N_ H_ 1∼ 10^22–10^24 atoms cm^-2) throughout the observation period. Source is detected in the Low/Hard state for most of the time, with the spectra being described by a Comptonised component (Γ ∼1.16 – 1.79, kT_ e∼2 – 31 keV). The source spectra steepen (Γ∼2.5) indicating softening of the spectrum during the rise of the re-brightenings. Various emission and absorption lines corresponding to the neutral Fe-Kα, Fe-XXV Kα, Fe-XXVI Kα, and the Ni-XXVIII Kα were detected with the equivalent widths varying between 70 eV – 3.5 keV. The column density of the absorbing plasma varied between 10^16–10^18 atoms cm^-2 at a distance ≤2×10^10 cm. Interestingly, the source is also seen exhibiting various variability classes (ρ, λ, δ, χ)
at relatively low luminosities (∼0.01L_Edd) during the re-brightening phases. Different variability classes show signature of QPOs (ν_ QPO: 20–180 mHz,
rms_ QPO: 7.5%–16%). The source showed a maximum bolometric luminosity (L_bol) of ∼0.01L_Edd (Re-brightening phases) and a minimum L_bol of 0.004L_Edd (Quiet phase) during the period. We discuss the possible disc dynamics around the black hole during this low-luminosity `obscured' phase.
X-ray binaries - accretion, accretion discs - black hole physics - stars: black holes - radiation mechanisms: general - stars: individual: GRS 1915+105
§ INTRODUCTION
GRS 1915+105 is a unique Low Mass X-ray Binary (LMXB), hosting a massive (12.4_-1.8^+2.0M_⊙; ) and a maximally rotating (â > 0.98^+0.01_-0.01; ) black hole at the center, accreting matter from a K-giant companion <cit.>. GRS 1915+105 is the only LMXB that has exhibited 15 unique variability classes (α, β, γ, δ, θ, κ, λ, μ, ν, ρ, ϕ, χ, ω, η, ξ) so far (<cit.>, see also ). Each of these classes exhibit variabilities at timescales ranging from a few seconds to many hours, thereby providing a captivating illustration of various instabilities and the timescales at which these instabilities develop in the accretion disc around a stellar mass black hole ( and the references therein). GRS 1915+105 exhibits steady radio jets in the hard state <cit.>, while transient and discrete radio jets <cit.> are seen during transition of the source from the hard state to the soft state. Besides jet events, disc winds have also been a prominent ejection event in GRS 1915+105. <cit.> detected various absorption features that indicated ionized outflowing winds, which also acted as jet suppressing mechanisms by averting the disc matter inflow into the radio jets <cit.>. Owing to its eccentric inflow and outflow phenomena, GRS 1915+105 sets an exemplary case study to analyse the astrophysical phenomena around the compact object.
After more than 25 years of high X-ray activity, GRS 1915+105 began to show a decrease in the X-ray flux <cit.> since May 2018, leading to the presumption that the source was finally approaching quiescence. Yet, exceptionally again, the source started exhibiting a series of non-generic activities since April 2019 <cit.>. The detection of multiple absorption lines in the energy spectrum of GRS 1915+105 <cit.> and the requirement of an additional absorption model to address the soft excess <cit.> indicated the presence of a local obscuration in the system. <cit.> also detected a decrease in the bolometric luminosity of the source, caused by the local obscuration in the system. GRS 1915+105 is, therefore, perceived to have entered into a new state of accretion called the `obscured state'. Obscuration, although a commonly observed phenomenon in Active Galactic Nuclei (<cit.> and the references therein), is rarely observed in LMXBs. The various models developed to explain the cause of obscuration in X-ray binaries comprise the disc flaring theory <cit.>, the obscuring winds caused by the stellar activity in the secondary star <cit.>, a slim disc in the close vicinity of the compact object <cit.>, etc. However, in the recent work on GRS 1915+105, <cit.> detected three ionization zones layered up at a distance ∼10^11 cm around the outer disc. Their study featured the possibility of vertical expansion of the outer disc that further acted as the obscuration medium. Meanwhile, <cit.> estimated the wind launch radius (r < ∼10^9 cm), the velocity of the winds (350 km s^-1), and the magnetic field strength required to drive the winds away from the compact object. Their results revealed a wind that failed to escape the system, eventually enshrouding the compact object and thus causing the obscuration.
Over the period of obscuration, the source also displayed several re-brightenings <cit.>, either in the form of a quick flare or a prolonged re-brightening. A few quick flares were reported to be a sequel to the radio flares <cit.>, whereas during the prolonged re-brightenings the ALMA observations (15.5 GHz) of the source showed a decrease in the radio activity (<cit.>). Quasi-simultaneous radio/X-ray flares happening on short timescales (∼1400 sec) were also observed in Cyg X–1 <cit.>, where the X-ray emission is hypothesised to originate at the base of the jet. The prolonged re-brightenings (also called re-flares, mini-outbursts, failed outbursts, etc.) have been observed in many LMXBs. The nature of the mini-outbursts varies from source to source, with a few sources exhibiting only one spectral state throughout the mini-outburst (e.g., MAXI J1659–152 <cit.>, IGR J17379–3747 <cit.>, XTE J1650–500 <cit.>), while a few other sources exhibited different spectral states throughout the mini-outburst (MAXI J1535–571 <cit.> and GRS 1739–278 <cit.>). Irrespective of the nature of the outbursts, the cause and onset of the mini-outbursts are not clearly understood. Augmented mass transfer due to the irradiation of the companion <cit.> is one of the commonly used models to explain the cause of a mini-outburst.
GRS 1915+105 has been extensively studied throughout 26-years of long outburst. However, only a few attempts, using scattered observations of the source, have been made to understand the characteristics after the source descended into the low-luminosity `obscured' phase. In this manuscript, we perform for the first time, an in-depth and a cohesive analysis of the spectral and timing properties of the source, during the period of March 2019 to November 2021 using observations from AstroSat, NICER and NuSTAR. Through our results, we describe the attributes of obscuration in the system. The observations also reveal the source to be exhibiting multiple re-brightening phases with the display of the characteristic variability classes and the transition between classes during prolonged re-brightenings. We, therefore, characterize the source properties and spectral state transitions observed during the re-brightening phases as well as the quiet phase, in this work.
This paper is structured as follows: <ref> briefly describes the data reduction procedures for all the observations obtained from SXT & LAXPC onboard AstroSat, NICER and NuSTAR. In <ref>, we explain the modeling techniques and the procedure of spectral and timing analysis. In <ref>, we present the results obtained through our analysis and in <ref>, we discuss the overall behavioural pattern of the source. Finally, in <ref>, we conclude with a summary of our results.
§ OBSERVATIONS AND DATA REDUCTION
We use the data obtained from AstroSat, NICER and NuSTAR from March 2019 to November 2021 to perform a coordinated and wide-band study of the spectral and timing properties of the source. Table <ref> enlists the overall observations of the source made by AstroSat together with the simultaneous NICER – NuSTAR observations available during these Epochs. All of them are also indicated in Figure <ref> with vertical dashed lines. In addition, Table <ref> gives the log of further NICER observations for most of the re-brightening phases observed between March 2019 –- November 2021. Below, we brief the reduction procedures for the data obtained from all three instruments.
§.§ AstroSat
AstroSat <cit.>, India's first dedicated astronomy mission, observes celestial bodies in broad energy band simultaneously, ranging from near UV to hard X-rays. In our work, we use the observations made by two instruments on board AstroSat, the Soft X-ray Telescope (SXT) <cit.> covering the energy band 0.3 - 8 keV and the Large Area X-ray Proportional Counter (LAXPC) <cit.> operating in 3 - 80 keV energy band. AstroSat had made 5 observations of the source during our period of study. Level-2 SXT data and Level-1 LAXPC20 data are obtained from the ISSDC data dissemination archive[<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>]. In order to perform simultaneous analysis, we choose one segment from both SXT and LAXPC, where the observations made by both these instruments happen at almost same time, with exposure time ≥ 1 ks for both instruments. The SXT pipeline is used for Level 2 data analysis. The light curve and the spectral files are further extracted using the . The background spectrum and the response files for SXT data is distributed by TIFR-POC[<https://www.tifr.res.in/ astrosat_sxt/sxtpipeline.html>]. The ARF files for each observation is generated using the tool. The LAXPC data processing, from Level 1 to Level 2, is carried out using the software . The background spectrum for each LAXPC spectrum is generated using the code , while we use the pre-computed response files (version v1.0) provided by the LAXPC team[<https://www.tifr.res.in/ astrosat_laxpc/LaxpcSoft.html>]. A detailed description for the standard reduction and extraction procedures are provided in (see also ). Additionally, a circular region of 12^' is used while extracting SXT data (see Figure <ref>). We use a combination of the top-layer and all events during the extraction of LAXPC data. The energy spectra thus obtained are grouped to 25 counts per bin using the and a systematic error of 3% <cit.> is additionally included to account for the uncertainty in the spectral response.
§.§ NuSTAR
The Nuclear Spectroscopic Telescope Array (NuSTAR) is sensitive to X-rays within the energy range 3 - 78 keV <cit.>. In this paper, we consider all the 11 observations of the source made by the modules, FPMA and FPMB onboard NuSTAR, during the period 2019 – 2021. These data were obtained from the HEASARC database[<https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl>]. NuSTAR Data Analysis Software (NuSTARDAS) and CALDB v20191219 is used to generate the cleaned event file <cit.>. A circular region of 60^'' is used to extract the source events. Similarly, a region of 60^'' radius that is free of source photons, is chosen for the background extraction (see Figure <ref>). The cleaned events and the region files thus obtained are used to extract the source products using the module . The extracted spectra were uniformly grouped to 25 counts per energy bin and is modeled in the energy range 3 – 60 keV.
§.§ NICER
Neutron star Interior Composition Explorer (NICER) <cit.> has persistently observed GRS 1915+105 during the obscured phase using its primary scientific instrument, the X-ray Timing Instrument (XTI) that covers the energy band 0.2 - 12 keV. In our work, we study 23 NICER-XTI observations, some of which without a simultaneous high energy observation. We also did a thorough spectral analysis of 40 additional observations, made by NICER between MJD 58610-58650, 59050-59150 and 59375-59500. However, we tabulate only 23 observations, as they sufficiently describe the spectral and timing evolution of the source parameters. These observations were obtained from the HEASARC database[<https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl>]. The data is processed using the latest version of the NICER software (NICERDAS ver 10). The Level 2 analysis is performed using the task. The Level 3 data analysis is performed using the new extraction task (recently made available with released in November 2022). concurrently generates spectrum, background, ancillary and response files from the un-barycentered merged event file. Additionally, we also set the background model type to 3C50 while running the script. All the spectra thus obtained were uniformly grouped to 25 counts per bin.
§ ANALYSIS & MODELING
§.§ Timing Analysis
Light curves for AstroSat-LAXPC and NICER observations were initially generated with a time bin of 10 sec in the energy ranges 3 – 60 keV and 0.5 – 12 keV respectively. In addition, the NICER light curves corresponding to each re-brightening phase observation were separately extracted again for three individual energy ranges; 0.3 – 3 keV, 3 – 6 keV and 6 – 12 keV with a time bin of 10 sec in order to plot the Color-Color Diagram (CCD) (see Figures <ref>, <ref> and <ref>). The CCD plots HR1 (Hardness Ratio 1) on the X - axis and HR2 (Hardness Ratio 2) on the Y - axis, where HR1 is the ratio of photons from 3 – 6 keV to the photons in 0.3 – 3 keV and HR2 is the ratio of photons in 6 – 12 keV to the photons in 0.3 – 3 keV.
The Power Density Spectrum (PDS) was generated using light curves with a time resolution of 10 msec, obtained from AstroSat-LAXPC, NICER and NuSTAR observations. Although a Nyquist frequency of 50 Hz was obtained, the PDS was dominated with noise above 1 Hz. The data points were binned to 32768 bins resulting in a lowest frequency of 0.003 Hz (1/(32768×10 ms)). All the PDS are rms-normalized <cit.> with a geometrical re-bin factor of 1.05. The noise associated with the PDS is fitted using powerlaw. The narrow features in the PDS, known as Quasi-periodic Oscillations (QPOs), are described using the Lorentzian profile,
L(f) = (K Q f_0/π) / (f_0^2 + Q^2(f-f_0)^2) ,
where f_0 is the frequency of the QPO, K is the normalization that defines the total strength of the oscillation, and Q (= f_0/Δ, where Δ is the half width at half maximum) is the quality factor that denotes the coherence of the variation.
We, therefore, use a Lorentzian model to fit the QPO feature (<cit.> and the references therein). Details of the method to obtain the model fitted parameters are mentioned in <cit.>. The narrow features with a Q-factor ≥ 2 <cit.> and significance (σ) > 3 <cit.> are considered as QPOs. The total rms variability for the frequency range 0.003 – 1 Hz is estimated using the rectangle rule integration method, where rms = √(∑ P(ν)×δν)×100 (in %). Here, P(ν) is in units of rms^2 Hz^-1 and δν is the interval width in Hz (see <cit.> and the references therein).
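As an illustration of the quantities defined above, the Lorentzian profile and the rectangle-rule estimate of the fractional rms can be sketched in Python as follows (array inputs are assumed; the actual fits were performed with the standard fitting tools).

import numpy as np

def lorentzian(f, K, f0, Q):
    """QPO model: L(f) = (K*Q*f0/pi) / (f0**2 + Q**2*(f - f0)**2)."""
    return (K * Q * f0 / np.pi) / (f0**2 + Q**2 * (f - f0)**2)

def fractional_rms(power, dnu):
    """Total rms (%) over the selected frequency range, rectangle rule:
    power is the rms-normalised PDS (rms^2/Hz), dnu the bin widths (Hz)."""
    return np.sqrt(np.sum(power * dnu)) * 100.0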
§.§ Spectral Analysis and Modeling
The spectral analyses for all the observations were done using XSPEC. We perform the broadband spectral analysis and modeling across 0.7 – 60 keV using simultaneous data from SXT (0.7 – 7 keV) + LAXPC (3 – 60 keV) and NICER (0.7 – 12 keV) + NuSTAR (3 – 60 keV). The difference in flux normalization between two instruments, while performing the simultaneous fit, is taken care of by the multiplicative model, constant. Absorption due to the interstellar medium (ISM) of the Galaxy has been modeled using the TBabs model with all the abundance parameters set to default as mentioned in <cit.>.
The combined spectra were initially fitted using the multi-temperature disc blackbody (diskbb) <cit.> model and the Comptonisation (nthComp) <cit.> model individually. The spectra corresponding to Epochs 6 – 9, 11, 12, 17 and 18 showed acceptable fits using the nthComp model only, whereas Epochs 1 – 5, 10 and 13 – 16 required the combination of both models, diskbb+nthComp for the continuum. We also use a partial covering absorption model, TBpcf along the continuum for all the observations, considering the recent evolution of the source into the `obscured' phase (see ). TBpcf addresses the edge at ∼7 keV which is interpreted as a neutral iron K-alpha photoelectric edge. This spectral feature, conventionally seen in AGNs, is described by the partial covering absorption by winds/gas clouds <cit.>. TBpcf quantifies the equivalent hydrogen column density that is local to the source (N_ H_ 1) and the covering fraction (PCF) of the obscuration (see also ). Henceforth, the model combinations - TBabs(TBpcf(diskbb+nthComp)) and TBabs(TBpcf(nthComp)) will be referred as Model-1 and Model-2 respectively. Along with the above mentioned model combinations, few additional models were required to address the absorption and emission features between 6 – 9 keV. For example, the gaussian model was used to address the emission line between 6 - 8 keV. Epochs 13 and 17 showed a narrow absorption feature in the same energy range, which was addressed by the gabs <cit.> model. A broad absorption feature was also present in few Epochs (Epochs 6, 7, 8, 11, 12 and 13) between 7 – 9 keV, for which the smedge <cit.> model was used. The additional Si and Au edges in the SXT spectra were addressed using the command (see and the references therein), with the slope set to 1. We use the edge model to address the instrumental Xenon edge (∼32 keV) observed in the LAXPC spectrum <cit.>.
The NICER spectra (0.7 – 12 keV) corresponding to the re-brightening phases (Table <ref>) were initially fitted using Model-1, considering the requirement of disc and Comptonization model components to produce a best fit for the broadband observations. Nonetheless, all the NICER observations required only a single component for the continuum, except Obs. 1 corresponding to RBI, where this particular observation showed improved fit with a combination of
diskbb and nthComp model components (Model-1). However, for this observation, we had to freeze both disc parameters to values close to those obtained from the broadband best-fits. The remaining NICER observations did not show the requirement of two components for the continuum. We then tried to fit the remaining NICER spectra using the diskbb model component along with TBabs and TBpcf. While a few of the observations produced good fits, the rest of the observations produced non-physical disc temperatures. Following <cit.>, we also attempted to fit the NICER spectra using either the powerlaw model or the bbody model. Although these models produced satisfactory fits for a few observations, they proved non-viable for most of the observations. Additionally, we also tried simpl*diskbb to fit the observations. Unfortunately, that model combination also did not work for most of the observations, and for the ones it did, it yielded an extremely steep photon index (Γ > 4) with large errors. At face value, it can be broadly inferred that the NICER spectra of the source in the 0.7 – 12 keV energy band, corresponding to the obscured phase, are best described using a single Comptonized component and do not show the requirement for an additional disc component. We infer that this could be because the disc temperature in the nthComp model component adequately describes the moderately faint disc flux, without actually having to include the disc component. We, therefore, proceeded to fit all the NICER observations using Model-2, which produced acceptable fits for all the observations. All the NICER spectra also showed additional absorption and emission features, which are addressed using the gaussian, gabs and smedge models. Errors for the parameters are estimated using the error command in . All the error values are quoted at 90% confidence level. However, the error values of certain parameters were too small (< 5%), and thus are tagged with a dagger symbol in the tables (Tables <ref>, <ref>, <ref> and <ref>) to indicate that the error values are negligible.
We computed the unabsorbed bolometric luminosity (L_ bol) in the 0.3 – 100 keV energy range using the relation, L_ bol = 4πD^2 F, where D is the distance of the source (D = 8.2 kpc <cit.>; also see and the references therein) and F is the unabsorbed intrinsic flux (0.3 – 100 keV), which is estimated by incorporating the cflux model along the continuum. The partial covering absorption model (TBpcf) is excluded while integrating the cflux model along with the continuum components. The flux value thus obtained does not account for the effects of obscuration. For example, we used the resultant model combination TBabs(TBpcf(cflux(diskbb+gauss+nthComp))) to estimate the unabsorbed intrinsic flux during Epoch 10.
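The luminosity estimate itself is a one-line conversion; a small sketch follows. The Eddington value quoted in the comment assumes a black-hole mass of roughly 12.4 M_⊙, which is an assumption for illustration and is not taken from the tables in this paper.

import numpy as np

KPC_CM = 3.086e21                        # cm per kpc

def l_bol(flux_cgs, d_kpc=8.2):
    # Unabsorbed luminosity (erg/s) from an unabsorbed flux (erg/cm^2/s): L = 4 pi D^2 F
    d_cm = d_kpc * KPC_CM
    return 4.0 * np.pi * d_cm**2 * flux_cgs

# e.g. a 0.3-100 keV unabsorbed flux of 1e-9 erg/cm^2/s at 8.2 kpc gives ~8e36 erg/s;
# dividing by L_Edd ~ 1.26e38 * (M/M_sun) erg/s (assumed M ~ 12.4 M_sun) gives L/L_Edd.
# print(l_bol(1e-9))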
In the subsequent section, we present the results obtained from our analysis.
§ RESULTS
§.§ Prolonged Low Luminosity Phase
In Figure <ref>, we show the flux variation of GRS 1915+105, starting from March 2019 to November 2021, as observed by multiple instruments both in the X-rays and the radio bands. The top four panels display the source flux as observed by MAXI (2 – 20 keV), BAT (15 – 50 keV), NICER (0.3 – 10 keV) and RATAN-600 radio telescope (11.2 GHz) respectively, while the bottom-most panel shows the Hardness Ratio (HR = (6 – 20 keV) / (2 – 6 keV)) of the source obtained from MAXI light curve.
GRS 1915+105 is observed in the decay phase at the beginning of the observation period, where the MAXI flux showed a decrease from ∼1 ph cm^-2 s^-1 during MJD 58250 to ∼0.4 ph cm^-2 s^-1 during MJD 58400. This pattern is also reflected in the BAT light curve, where the flux does not decrease drastically; rather, a marginal decrease is seen from 0.3 cts cm^-2 s^-1 to 0.15 cts cm^-2 s^-1. There is also a gradual increase in the HR from 0.4 – 0.7 during this period. The source exhibited a consistently low flux from MJD 58600 onwards. However, this low luminosity phase is noticed to be intermittently perturbed by strong and sudden re-brightenings in X-rays. We refer to these sudden re-brightenings as the re-brightening (RB) phases. The six re-brightening phases, referred to as RB_ I, RB_ II, RB_ III, RB_ IV, RB_ V, and RB_ VI are shown in gold, grey, cyan, olive-green, blue, and pink colour-shaded regions respectively in Figure <ref>.
The MAXI light curve of GRS 1915+105 showed a sequence of oscillations with flux varying between 0.02 to 0.4 ph cm^-2 s^-1 during re-brightening phase I (RB_ I) between MJD 58600 to 58633.
RB_ II (MJD 58799) and RB_ IV (MJD 58994) represent quick flares lasting for ∼900 and 2500 sec, respectively. RB_ III (MJD 58891) is also recognized as a quick flare, but due to the lack of observations of the source during RB_ III, its exact duration could not be determined. The source exhibited a sudden increase in the MAXI flux from ∼0.02 to 0.5 ph cm^-2 s^-1 during these quick flares. The available radio data also reveals that RB_ I, RB_ II and RB_ III were precursory to the radio flares (see panel d of Figure <ref>). In addition to the quick flares, GRS 1915+105 also exhibited two relatively prolonged re-brightenings (RB_ V and RB_ VI), which lasted for ∼100 days and ∼150 days respectively. The source showed a gradual increase and decrease in flux, analogous to a mini-outburst <cit.>, and the average NICER flux varied from ∼20 cts s^-1 at the beginning to ∼120 cts s^-1 at the peak of RB_ V and ∼30 cts s^-1 to ∼250 cts s^-1 from the beginning to the peak of RB_ VI (see Table <ref>). The drop in HR during both RB_ V and RB_ VI indicates a slow propagation of the source towards the softer state during these two re-brightenings. All these re-brightening phases are extensively studied using the NICER observations. RB_ I, RB_ V and RB_ VI have also been observed in the broad energy band by AstroSat and NuSTAR. However, RB_ II and RB_ IV, being quick and sudden flares, could not be monitored at broadband energies. RB_ III is not studied in this paper due to the lack of observations.
§.§ The Re-brightening Phases
§.§.§ Re-brightening phase I (RB_ I)
The MAXI light curve in the top-panel of Figure <ref>a shows the flux variation of the source during RB_ I, starting from MJD 58600 to MJD 58633. The 5 color-shaded regions in the top-panel represent the 5 NICER observations that we considered to study RB_ I, where each observation corresponds to a different phase of the re-brightening. Obs. 1 corresponds to the low luminosity phase (blue color-shaded region in the top-panel of Figure <ref>a). Obs. 2, 3, 4 and 5 correspond to the rise (shaded in brown), flare (shaded in green), decay (shaded in orange) and low phase after the decay (shaded in cyan) respectively. The NICER light curves corresponding to all the 5 observations are shown in panels b, d, f, h & j respectively, while the corresponding CCDs are shown in the adjacent panels (c, e, g, i & k respectively).
The above-mentioned colour convention is followed throughout this paper, while plotting the NICER data points corresponding to each phase. The NICER flux varied from 7 cts s^-1 at the low phase to 1200 cts s^-1 at the peak of the re-brightening. The variation in the average count rate during each observation is shown in Table <ref>. In the CCD, during Obs. 1, 2, 4 and 5, HR1 and HR2 varied between the limits 0.1 ≤ HR1 ≤ 5 and 0.2 ≤ HR2 ≤ 2. However, during Obs. 3 (flare), the upper value of the range of both HR1 and HR2 increased significantly to 9 and 6 respectively. These model-independent analyses do not explicitly reveal any variability classes in the source during RB_ I.
In the top-panel of Figure <ref>b, we show an overplot of the PDS obtained from Obs. 1 and 3, plotted in blue and green points respectively. The PDS during Obs. 2, 4 and 5 are dominated by broadband noise beyond 0.05 Hz, similar to the PDS corresponding to Obs. 1 shown in the figure. The PDS obtained from Obs. 3 showed a power-law distribution. The total rms varied between 9_-2^+1% and 44_-4^+3% in the 0.003 – 1 Hz frequency range. The bottom-panel of the Figure <ref>b shows an overplot of spectra obtained from all the 5 observations, along with the residuals from the best-fit model. The spectral analysis for all the 5 observations is carried out with Model-2. The photon index (Γ) varies between 1.29_-0.04^+0.02 – 1.47_-0.02^+0.04 throughout RB_ I. The N_ H value varied between 5.0_-0.2^+0.1 - 5.5_-0.3^+0.2× 10^22 atoms cm^-2. Along with interstellar absorption (N_ H), all the 5 observations also showed effects of local obscuration. We observed an additional column density (N_ H_ 1) of 82_-4^+4×10^22 atoms cm^-2 initially, which decreased to 18_-1^+2×10^22 during the flare. N_ H_ 1 further increased to 108_-5^+8×10^22 at the end of RB_ I. The PCF, however, varied randomly between 0.56 – 0.77. The best-fitted timing and spectral parameters are mentioned in Tables <ref> and <ref> respectively.
§.§.§ Re-brightening phase II and IV (RB_ II & RB_ IV)
RB_ II and RB_ IV (grey and olive-green shaded regions in Figure <ref>) are two quick flares that spanned ∼1000 sec and 2700 sec, respectively. The MAXI flux corresponding to RB_ II is partly missing, while the NICER light curve showed a steady flux with an average value of 38 cts sec^-1. However, the radio observations (panel d of Figure <ref>) show a simultaneous radio flare with the flux varying from ∼40 to 400 mJy. During RB_ IV, the MAXI flux is suddenly seen increasing from 0.04 to 0.5 ph cm^-2 s^-1. This flare pattern is also observed in the BAT and the NICER light curves. The hardness during both RB_ II and RB_ IV is relatively high, with HR values > 1.5 (panel e of Figure <ref>).
The power spectra obtained from RB_ II and RB_ IV are characterized by a power law distribution. No QPO features were identified and the PDS was dominated by broadband noise above 0.1 Hz for both re-brightening phases. Model-2 provides the best-fit for spectra from both RB_ II and RB_ IV. The source showed a Γ value of 1.20_-0.03^+0.02 and 1.16_-0.06^+0.06 for RB_ II and RB_ IV respectively. A constant galactic absorption with N_ H ∼ 5.5×10^22 atoms cm^-2 was observed during both observations. The additional column density drastically varied between RB_ II and RB_ IV, with N_ H_ 1∼16_-2^+3× 10^22 atoms cm^-2 and a PCF of 0.63_-0.02^+0.03 for RB_ II and N_ H_ 1∼78_-7^+4× 10^22 atoms cm^-2 and a PCF of 0.71_-0.02^+0.02 for RB_ IV.
§.§.§ Re-brightening phase V (RB_ V)
The prolonged re-brightening phase, RB_ V, spanned from MJD 59050 to 59150 with the MAXI flux varying from ∼0.02 ph cm^-2 s^-1 at the beginning to ∼0.6 ph cm^-2 s^-1 at the peak of RB_ V (top-panel of Figure <ref>a). The NICER light curves and the corresponding CCDs pertaining to the low (Obs. 1), rise (Obs. 2), peak (Obs. 3) and the decay (Obs. 4) phases of RB_ V are shown in the bottom panels (b & c, d & e, f & g and h & i, respectively). At the beginning of RB_ V, the source displayed low flux (∼ 15 cts sec^-1) with no structured variability in the light curve. Recurring burst profiles with a periodicity of ∼50 sec were observed in the light curve during the rise. Each flare profile showed a varying peak amplitude. These flares also did not resemble the typical heart-beat profile (ρ class) as classified by . We therefore classify the source as belonging to the ρ^' variability class (defined as a variant of the ρ class; see ) during Obs. 2. At the peak of RB_ V, the source exhibited large amplitude variability with flux varying between 40 cts sec^-1 at the dip and 220 cts sec^-1 at the peak of the variability. The source variability during Obs. 3 resembled the λ variability class (). During the decay, the source is seen exhibiting the typical ρ profile with a periodicity of ∼160 sec. Obs. 1 exhibits high HR values in the CCD with HR2 going as high as 0.75 (panel c in Figure <ref>a), while Obs. 2, 3 and 4 show relatively lower HR values, HR2 < 0.4 (see panels e, g and i in Figure <ref>a). The relatively higher HR values and the absence of any variability structure in the light curve during Obs. 1 lead to the assumption that the source exhibits the χ variability class during Obs. 1.
The top-panel of Figure <ref>b shows an overplot of the PDS obtained from Obs. 2, 3 and 4 plotted in brown, green and orange colors respectively. The PDSs show QPOs at 26 mHz and 6 mHz during Obs. 2 and 4, corresponding to the periodicity of the ρ^' and ρ profiles in the light curves. Obs. 3 showed a powerlaw noise distribution in the PDS, while the PDS during Obs. 1 was dominated by noise beyond 0.1 Hz. The total rms of the source varied between 8.2_-0.8^+0.9% and 40.2_-2.1^+4.9%.
The NICER spectra corresponding to all 4 observations are well described with Model-2. Γ increases as the source proceeds from the low phase (Obs. 1) to the peak phase (Obs. 3) from 1.34_-0.04^+0.03 to 2.47_-0.06^+0.03 and decreases down to 1.91_-0.01^+0.03 during Obs. 4. The source showed an almost constant galactic hydrogen column density with N_ H ∼5.0×10^22 atoms cm^-2. A high N_ H_ 1 of 150_-10^+12×10^22 atoms cm^-2 is observed during the low phase (Obs. 1), while N_ H_ 1 drastically drops to ∼4_-0.4^+0.3 – 8_-0.1^+0.1×10^22 atoms cm^-2 during Obs. 2, 3 and 4. The PCF varied between 0.60 and 0.79_-0.03^+0.03 without a pattern. In the bottom-panel of Figure <ref>b, we present an overplot of the fitted NICER spectra for all 4 observations along with the residuals obtained after fitting with Model-2. All the model fitted parameters are summarized in Tables <ref> and <ref>.
§.§.§ Re-brightening phase VI (RB_ VI)
RB_ VI, observed from MJD 59350 to 59500, is also a prolonged re-brightening phase like RB_ V. However, RB_ VI portrayed a fast-rise and slow decay light curve profile, in contrast to RB_ V that showed a slow-rise and fast-decay profile in the light curve. The flux evolution of the source during RB_ VI is shown in the MAXI light curve (top-panel of Figure <ref>a). The light curves obtained from all the 5 observations did not show any periodic variability structure (see panels b, d, f, h and j in Figure <ref>a). The CCDs corresponding to Obs. 1 – 4 showed moderate HR values with HR1 < 3 and HR2 < 0.8 (panels c, e, g and i in Figure <ref>a). But Obs. 5 shows an increased hardness with the upper value of the range of HR1 and HR2 extending up to 9 and 6 respectively (panel k in Figure <ref>a). The avg. count rate corresponding to each observation is given in Table <ref>.
The PDS obtained from Obs. 2, 3, and 4 (top-panel of Figure <ref>b) showed QPOs at 170, 180 and 200 mHz respectively, with Q-factor evolving from 2.70_-0.01^+0.02 (Obs. 2) to 4.22_-0.03^+0.03 (Obs. 4). The total rms varied between 15_-1^+1% – 20_-2^+3% during Obs. 2, 3 and 4. PDS corresponding to Obs. 1 and 5 exhibited high fractional variability (> 26%) and showed no indication of QPOs. The parameters are provided in Table <ref>.
We present an overplot of the modeled spectra corresponding to the 5 NICER observations along with the residuals in the bottom-panel of Figure <ref>b. All the 5 spectra were well-fitted using Model-2. Initially, the source showed a Γ of 1.36_-0.02^+0.02 during Obs. 1. The spectra showed a steeper Γ value of 2.04_-0.04^+0.02 during Obs. 3, which again decreased to 1.37_-0.4^+0.04 during Obs. 5. A constant kT_ e of ∼1.9 keV was observed throughout RB_ VI. The N_ H_ 1 value was minimum during the rise, peak and the decay phases, with values varying between 4.4_-0.4^+0.6 – 8.9_-0.6^+0.6×10^22 atoms cm^-2. An increased N_ H_ 1 was observed during Obs. 1 (N_ H_ 1∼98_-10^+5×10^22 atoms cm^-2) and Obs. 5 (N_ H_ 1∼44_-4^+6×10^22 atoms cm^-2). With reference to , and based on the CCD, PDS and the spectral characteristics, the source could possibly belong to the hard state (or the χ class) during the beginning and the end of RB_ VI, while it exhibited the δ variability class during the rise, peak and the decay phases. The best-fitted model parameters are given in Tables <ref> and <ref>.
§.§ Wide-band Observational Analysis
The intermittent re-brightenings, exhibited by GRS 1915+105 during the low-luminosity period, were vividly observed in the soft energies (0.7 - 12 keV) by NICER. The spectral and timing properties of the source during each re-brightening have already been discussed in <ref>.
In this section, we constrain the broadband spectral and timing properties of the source by analyzing the simultaneous observations by AstroSat, NICER and NuSTAR (18 Epochs in Table <ref>). The 18 wide-band observations are divided into two categories based on the X-ray activity of the source: The Quiet Phase - when the source exhibits steady and low X-ray flux (Epochs 6 – 9, 11, 12, 17 and 18 of Figure <ref>) and the Active Phase - when the source exhibits X-ray activity, thereby producing an enhanced flux (Epochs 1 – 5, 10 and 13 – 16 of Figure <ref>). The source exhibited high X-ray flux during Epoch 5 (MJD ∼58649). However, due to the lack of NICER observations during the period MJD 58636 – 58656, we classify Epoch 5 under the active phase. The PDS and energy spectra corresponding to all the wide-band observations are modeled and analyzed as mentioned in <ref> and <ref>. The spectral and timing properties are presented in the subsequent sections.
§.§.§ Quiet Phase (QP)
The light curves corresponding to Epochs 6 – 9, 11, 12, 17 and 18 showed no structured variability.
The average MAXI flux was ∼0.15 ph cm^-2 s^-1 (panel a of Figure <ref>). The HR values are relatively high (HR > 1, panel e of Figure <ref>). The avg. count rate for every Epoch is mentioned in Table <ref>. The power spectra obtained from all the Epochs are consistent with a power law model. None of these Epochs showed any indication of QPO features. The total rms varied between 7.1_-0.9^+0.6% and 22.2_-1.2^+1.4% in the 0.003 - 1 Hz frequency range. The timing properties corresponding to all Epochs in the QP are summarized in Table <ref>.
In Figure <ref>, we show the spectra corresponding to the QP (spectra plotted in red) obtained from Epoch 18.
A good fit for all the energy spectra is obtained using Model-2. The Γ varied between 1.13_-0.06^+0.06 – 1.73_-0.02^+0.02, while the kT_ e ranged from 6.3_-0.2^+0.1 to 18.2_-3.7^+4.8 keV. All the Epochs showed obscuration with the N_ H_ 1 highly varying between 21_-1^+1 - 545_-72^+85× 10^22 atoms cm^-2. The bolometric luminosities (L_ bol) during these Epochs varied between 0.001 L_ Edd – 0.004 L_ Edd. The fit values obtained for each of the Epochs are presented in Table <ref>.
§.§.§ Active Phase (AP)
The X-ray light curves obtained from Epochs 1 – 5, 10 and 13 - 16 showed high X-ray activity, when compared to the Epochs corresponding to the Quiet Phase. The MAXI flux during these Epochs varied from 0.2 – 0.6 ph cm^-2 s^-1 (panel a of Figure <ref>). Each of these Epochs marks an event exhibited by the source during the three-year observation period.
Epoch 1 corresponds to the decay phase of the major outburst with a flux value of 0.4 ph cm^-2 s^-1. During Epochs 2, 3 and 4, the source exhibited RB_ I and the flux oscillated between 0.1 – 0.4 ph cm^-2 s^-1. Epoch 10 corresponds to the rising phase of RB_ V, where the source exhibited the ρ^' variability class (see <ref>). The LAXPC light curve and CCD corresponding to Epoch 10 are shown in Figure <ref> (panels a and b respectively), with the flux during each flare varying from ∼150 - 250 cts s^-1. The HR1 in the CCD is the ratio of count rates in 6 - 15 keV and 3 - 6 keV, while HR2 is the ratio of counts in 15 - 60 keV and 3 - 6 keV. Epochs 13 – 16 cover the source activity during the rise and the decay phase of RB_ VI, with the MAXI flux at ∼0.5 ph cm^-2 s^-1. The re-brightening phases (Epochs 10, 13 – 16) showed lower HR values (HR < 0.4, panel e of Figure <ref>).
The PDS corresponding to Epoch 1 shows a QPO at 2.08 Hz with a rms_ QPO of ∼12%. The power spectrum has a flat-top noise with a total rms of ∼21.5%. Epochs 2, 3 and 4, pertaining to RB_ I, show high variability with the rms_ Tot > 22% and do not show any QPO signatures. Epochs 10 and 13 – 16, pertaining to RB_ V and RB_ VI respectively, show QPO signatures with ν_ QPO varying from 20 to 200 mHz and rms_ QPO varying between 8.7_-0.6^+0.6 – 20.9_-1.0^+1.0%. The total rms variability was relatively lower with values varying from 12.1_-0.3^+0.3 – 22.0_-1.2^+1.0%. Panel c of Figure <ref> shows the PDS corresponding to Epoch 10 with a ν_ QPO detected at 23 mHz having a Q-factor of 6.3_-0.8^+0.8 and a rms_ QPO of 8.7_-0.6^+0.5%. The details of the timing properties obtained from the fits are mentioned in Table <ref>.
The broadband source spectra obtained from all the Epochs in the AP are modeled using Model-1. The spectrum (plotted in black) in Figure <ref>, obtained from Epoch 13, demonstrates the typical broadband spectrum corresponding to the AP. During the Epochs 1 – 5, the source exhibited a Low/Hard spectral nature with the Γ and kT_ in varying between 1.13_-0.01^+0.01 – 1.73_-0.02^+0.02 and 0.25_-0.02^+0.01 – 1.32_-0.04^+0.05 keV respectively. kT_ e varied from 8.2_-0.2^+0.2 – 16.6_-1.4^+1.7 keV. During the Epochs 10, 13 – 16, we observe the source exhibiting softer spectral states, with Γ varying from 1.8_-0.1^+0.1 to 2.8_-0.1^+0.1. kT_ in and kT_ e varied from 0.97_-0.01^+0.01 keV – 1.59_-0.02^+0.01 keV and 3.1_-0.1^+0.1 to 12.9_-0.5^+0.4 keV, respectively. The best fitted model parameters are tabulated in Table <ref>.
§.§ Absorption and Emission Features
The characteristic features in the X-ray spectrum include the prominent emission and absorption lines superposed on the spectral continuum. The emission lines are essentially the fluorescent line photons from the disc, originating from the illumination of the disc by the hard X-ray photons <cit.>. The absorption by the outflowing plasma from the accretion disc generates absorption lines <cit.>. All the observations considered in our work (wide-band observations (see Table <ref>) and individual NICER observations (see Table <ref>)) also showed the presence of prominent Fe absorption and emission line features (see Figure <ref>). We use the gaussian and the gabs models to estimate the features of the emission and the absorption lines respectively (as already described in <ref>). Broad and narrow emission lines were detected in the energy range 6.4 – 8.3 keV (see Tables <ref> and <ref>). The centroid energies of these lines correspond to the neutral Fe Kα, Fe XXV Kα, Fe XXVI Kα, and the Ni XXVIII Kα energies at 6.4 keV, 6.7 keV, 6.97 keV, and 8.10 keV respectively. The strength of these lines is measured in terms of the Equivalent Width (EW), which is estimated using the standard definition,
EW = ∫_E_1^E_2 (F_c(E)-F(E))/F_c(E) dE,
where, F_ c(E) is the flux in the continuum and F(E) is the flux in the line at energy E. E_1 and E_2 represent the lower and upper energy limits of the observed line (see ). Emission lines were predominantly observed in the QP and re-brightening phases - RB_ I, RB_ II and RB_ V. The EW of the emission lines is observed to vary from 70 – 990 eV. However, the source also exhibited broad emission lines where the EW varied between 1020 – 3260 eV. Emission lines with EW ≥ 1 keV are not a commonly observed feature in X-ray binaries, but are instead considered a classic indicator of a Compton-thick (N_ H_ 1≥ 10^24 atoms cm^-2) obscuration generally seen in AGNs <cit.>. A Compton thick obscuration suppresses the continuum beneath the neutral line, thus leading to an increase in the EW of the Fe Kα line. However, the absorption lines observed throughout the observation period were narrow with the EWs varying from 120 to 590 eV. These narrow absorption lines were observed during RB_ IV and RB_ VI in the energy range 6.4 – 7 keV. The line properties obtained from the best-fit models are quoted in Tables <ref> and <ref>. The broad emission lines during the hard state and the narrow absorption lines during the relatively softer spectral states have previously been observed in GRS 1915+105 <cit.>. It is speculated that the broad emission lines originate when the inner accretion disc is illuminated by the hard X-ray photons from the jet/corona, while the narrow absorption lines are due to the winds in the accretion disc.
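Numerically, the EW definition above reduces to a rectangle-rule sum over the energy grid. A minimal sketch follows, keeping the sign convention of the equation as written (which gives a positive EW for an absorption line); all array names are illustrative.

import numpy as np

def equivalent_width(energy, flux, continuum, e1, e2):
    # EW (in the units of `energy`) following the definition above.
    # energy    : energy grid (keV)
    # flux      : observed flux density F(E) on that grid
    # continuum : continuum model F_c(E) on the same grid
    sel = (energy >= e1) & (energy <= e2)
    dE = np.gradient(energy)
    integrand = (continuum[sel] - flux[sel]) / continuum[sel]
    return np.sum(integrand * dE[sel])   # multiply by 1e3 to quote in eV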
The EWs of the absorption lines enable us to estimate the column densities of the Fe XXV and Fe XXVI elements, using the relation,
W_λ = (π e^2/m_e c^2) N_j λ^2 f_ij = 8.85×10^-13 N_j λ^2 f_ij,
where, W_λ is the EW of the line and λ is the wavelength in centimeters, f_ij is the oscillator strength and is equal to 0.798 and 0.416 for Fe XXV and Fe XXVI elements, respectively (with reference to ). The energy of the lines (in keV) is converted to wavelength (in cm) using the relation, E = hc/λ, where h is Planck's constant (6.626× 10^-34 Js) and c is the velocity of light (3×10^10 cm/s). The ion column density (N_j) thus obtained helps in constraining the physical parameters of the absorbing plasma. Our estimates show the Fe column density values to vary between 10^16 - 10^18 atoms cm^-2. These moderate values of ion column densities suggest the kinetic temperature of the absorbing plasma (kT_ Fe) to be ≥ 25 keV <cit.>. With reference to , if we assume the absorbing plasma to be in hydrodynamical equilibrium in the direction vertical to the plane, we can calculate the radius of the absorbing plasma from the center (r) using the relation suggested in ,
(h/r)^2 (GM m_ H/r) ≃ kT_ th,
where, h/r = tan(90^∘-i), i being the inclination angle (60^∘; ), G is the gravitational constant, M is the mass of the black hole and m_ H is the mass of the hydrogen atom. kT_ th is the thermal temperature and can be estimated using the relation, kT_ th = (m_ H/m_ Fe) kT_Fe, where m_ Fe is the mass of the Fe atom <cit.>. For kT_ Fe≥25 keV, the absorbing plasma is found at a distance r ≤ 2×10^10 cm. estimated the radius of the inner hot absorption zone, from where the winds are launched, to be at r < 10^9 cm.
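The ion column-density estimate described above can be reproduced with a few lines of code. In this sketch we assume that the EW measured in energy units is mapped to W_λ through W_λ = λ(EW/E); this conversion, the function name and the numbers in the example call are illustrative assumptions rather than values from the tables.

def ion_column_density(ew_ev, e_line_kev, f_ij):
    # N_j (cm^-2) from the linear relation W_lambda = 8.85e-13 * N_j * lambda^2 * f_ij
    # f_ij = 0.798 (Fe XXV) or 0.416 (Fe XXVI)
    hc_kev_cm = 1.2398e-7                             # h*c in keV cm
    lam = hc_kev_cm / e_line_kev                      # line wavelength in cm
    w_lambda = lam * (ew_ev / (e_line_kev * 1.0e3))   # assumed EW conversion to cm
    return w_lambda / (8.85e-13 * lam**2 * f_ij)

# purely illustrative: a 300 eV Fe XXVI (6.97 keV) absorption line
# print(f"{ion_column_density(300.0, 6.97, 0.416):.2e}")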
§ DISCUSSION
We performed a comprehensive study of the spectral and timing characteristics of the source during the low-luminosity `obscured' phase between March 2019 and November 2021. During this period, the source remained in a prolonged low-luminosity phase, interrupted by occasional re-brightening phases. Below, we present a cohesive explanation of the overall evolution of the source properties during each of these phases.
§.§ State Transitions during the Re-brightening Phases
GRS 1915+105 exhibited 6 major re-brightenings - RB_ I, RB_ II, RB_ III, RB_ IV, RB_ V and RB_ VI (see panel a of Figure <ref>) throughout the three-year observation period. RB_ I was a series of flares, RB_ II and RB_ IV were quick flares (spanning a few ksec) and RB_ V and RB_ VI were prolonged re-brightenings (spanning ∼100 days and ∼150 days respectively). Figure <ref> shows the overall evolution of a few important spectral parameters, namely L_ bol (in 10^38 erg s^-1), Γ, kT_ e (in keV) and N_ H_1 (atoms cm^-2), during the re-brightening phases - RB_ I, RB_ V and RB_ VI. Figure <ref> also includes the results from the analysis of the additional 40 NICER observations which are not tabulated in Table <ref>, as stated in <ref>.
The source exhibited state transitions during the prolonged re-brightenings RB_ V (Figure <ref>) and RB_ VI (Figure <ref>). At the beginning of both RB_ V and RB_ VI, the source is detected in the hard spectral state during the low phase (Obs. 1), with Γ of ∼1.3 (see panel c corresponding to RB_ V and RB_ VI in Figure <ref>) and an almost constant electron temperature, kT_ e∼ 2 keV (see <ref> and <ref>). As the source progresses into the rise and the peak phase of these re-brightenings, Γ is seen to increase from 1.34_-0.04^+0.03 to 2.47_-0.06^+0.03 during RB_ V and 1.36_-0.02^+0.02 to 2.04_-0.04^+0.02 during RB_ VI. The source exhibited a maximum luminosity L_ bol of 12.8×10^38 erg s^-1 and 13.4×10^38 erg s^-1 during the peak of RB_ V and RB_ VI, respectively (see panel b in the blue and red shaded regions in Figure <ref>). The decay phase is characterized by a decrease in Γ, to 1.91_-0.01^+0.03 and 1.69_-0.05^+0.02 during the decay phases (Obs. 4) of RB_ V and RB_ VI, respectively. A further decrease in the photon index (Γ∼1.3) is observed as the source descends to the low phase after the decay (Obs. 5 in RB_ VI), indicating the hard spectral nature of the source. However, kT_ e (see panel d in the blue and red shaded regions in Figure <ref>) is found to remain constant throughout the re-brightening phases. The rise, peak and the decay phases (Obs. 2, 3 and 4 respectively) are recognized as the intermediate/soft state. The total rms variability also decreases as the source progresses from the hard to the soft state (13.1_-0.8^+1.1% to 8_-0.8^+0.9% during RB_ V, 32.0_-3.2^+2.9% to 17_-1.4^+1.3% during RB_ VI).
Similar evolution pattern in the spectral and timing properties has already been observed during the 2018 mini-outburst of MAXI J1535-571 <cit.> and 2017 mini-outburst of GRS 1739-278 <cit.>. Several other LMXBs also have previously exhibited re-brightenings <cit.>. However, spectral state transitions are not witnessed during every re-brightening.
Only 2 BH LMXBs - MAXI J1535-571 <cit.> and GRS 1739-278 <cit.>, so far, have exhibited spectral state transitions from the hard state to the soft state during the re-brightenings. However, a disc component is seen in both sources during the peak of the outburst with 2.1 < Γ < 2.7 and 0.33 < kT_ in (keV) < 0.5, whereas these sources showed no indication of the disc component during the hard state, with 1.5 < Γ < 2. These two sources also showed hysteresis in the HID, thereby generating a q-track in the HID. Nevertheless, we do not observe a q-track in the HID during RB_ V and RB_ VI. Based on an analogy of the spectral and timing characteristics of GRS 1915+105 with MAXI J1535-571 and GRS 1739-278, it can be concluded that the source undergoes the following sequence of transitions: hard → intermediate → soft state → intermediate → hard state (during RB_ V) and hard → soft state → hard state (during RB_ VI).
All the mini-outbursts/re-brightenings in MAXI J1535–571, GRS 1739–278 and GRS 1915+105 are seen to have completely different timescales. In the case of MAXI J1535-571, re-flares occurred soon after the major outburst, whereas in GRS 1739–278, the time gap between the major outburst and the mini-outburst is not clearly understood due to the observational gap. However, <cit.> predicts a time gap of < 200 days between the major and the mini-outburst. But, in the case of GRS 1915+105, RB_ V happened 500 days after the major outburst and RB_ VI is seen occurring 200 days after the decay of RB_ V. The periodicity of the mini-outbursts in MAXI J1535–571 and GRS 1739–278 was estimated to vary between ∼20 – 35 days. In GRS 1915+105, RB_ V and RB_ VI lasted for ∼100 days and ∼150 days, respectively. A comparison of the timescales seen in all the three sources does not lead us to a common cause that triggers these re-brightenings. There exist several models in the literature that explain the origin of the re-brightenings <cit.>. But based on the results, we postulate that the re-brightenings/mini-outbursts are small-scale outbursts <cit.>, where mini-outbursts develop and progress in a way similar to the main outburst. The instability is assumed to be triggered at some location in the outer disc, which gradually increases the disc density and temperature. The mass accretion rate increases as this instability advances and propagates inwards as a heating wave, thus causing a re-brightening or a mini-outburst. The detection of the disc component in GRS 1915+105 as the spectral state of the source softens towards the peak is consistent with the above scenario.
§.§ Variability during the Re-brightenings
GRS 1915+105 has exhibited 15 variability classes since its discovery. During the major outburst, the average count rate exhibited by the source, during each of the variability classes, varied from 1 - 50 kcts s^-1 (<cit.> and references therein). It was also reported that the limit-cycle oscillations seen during certain variability classes, disappeared at an average count rate < 5 kcts s^-1 <cit.>. However, during the recent `obscured' phase, the source has displayed ρ, λ and δ variability classes during RB_ V and RB_ VI (Figures <ref>a and <ref>a) exhibiting average count rates of ∼30, 120 and 250 cts s^-1, respectively. GRS 1915+105 also exhibited the ρ^' variability class (<ref>), a variant of the typical ρ class (see ), during the rise phase of RB_ V. Using the Modified Hindmarsh-Rose (MHR) model, <cit.> shows that ρ^' could be a result of the slight modulation of the time-dependent input function (J(t)). We also categorize the source as belonging to the χ variability class during the low-luminosity phase, where the source exhibited a Low/Hard spectral state (Γ∼ 1.13_-0.06^+0.06 – 1.73_-0.02^+0.02 and kT_ e∼ 6.3_-0.2^+0.1 – 18.2_-3.7^+4.8 keV, see <ref>). However, the PDS did not explicitly show any QPO features during the hard state, which could be a result of low statistics.
The source also exhibits a sequential class transition from χ→λ→ρ, during the rising phase of RB_ V. The time evolution of the variability classes in GRS 1915+105 depicted by the MHR model <cit.> indicate that the source makes transition from stable states - states showing stable equilibrium patterns (classes ϕ, χ, α^'', θ, ξ and ω) to an unstable state - state showing unstable equilibrium pattern (ρ class) via transition segment (δ, γ, λ, κ and α^' classes), as the time dependent input function (J(t)) varies. This MHR model is in complete accordance with the sequential transition from χ→λ→ρ, exhibited by the source during RB_ V.
The variability classes are a unique feature particular to this source. These unique variability classes depict the accretion and ejection limit cycles in an unstable disc <cit.>. Based on the statistical analysis of RXTE observations of the source between 1996 April and 2000 May, it was inferred that the unique variability classes occurred when the source radiated at exorbitant luminosities and satisfied the criterion L/L_ Edd≥ 1 <cit.>. However, recently, GRS 1915+105 began exhibiting variability classes at L/L_ Edd≤ 1 <cit.>. In hindsight, this decrease in the source luminosities was anticipated, because of the depletion of the matter in the accretion disc with the persistent X-ray activity. Similarly, IGR J17091–3624 <cit.> also exhibited these variability patterns at ∼20 – 30 % L_ Edd, thereby demonstrating that a source need not necessarily radiate at luminosities ≥ L_ Edd in order to exhibit unique variability classes.
In addition to the observations, the results from the time-dependent disc models and the simulations predict the onset of limit-cycle instabilities at L/L_ Edd≥ 0.3 <cit.>. Nonetheless, these predictions are called into question by the recent activity of GRS 1915+105, where the source exhibited variability classes at unabsorbed luminosities varying between 0.004 – 0.01 L_ Edd, which is ≪ 0.3 L_ Edd. At these luminosities, LMXBs generally show the least activity/variability. Given the difficulty of the disc-instability models in explaining the limit-cycle oscillation behaviour, magnetic fields can be considered as an alternative explanation for the limit-cycle instabilities. The lack of a threshold large-scale magnetic field of uniform polarity can fail to sustain thermal stability in the accretion disc <cit.>. This could be a cause for the peculiar variabilities seen in GRS 1915+105. However, the key question of how the phenomenon that triggers the unique variability patterns in the system is invoked even at extremely low luminosities (L ∼ 0.01 L_ Edd) is yet to be understood.
§.§ Dynamical Obscuration in the System
The spectral analysis reveals the incessant presence of `obscuration' in GRS 1915+105 since May 2019. This obscuration is found to be highly variable and inhomogeneous, i.e. PCF < 1 (see ). The column density due to the local obscuration (N_ H_ 1) and the partial covering fraction (PCF) varied drastically within a few minutes.
During the prolonged low-luminosity phase (<ref>), the source exhibited non-deviant Low/Hard spectral state properties. However, the obscuration in the system was highly dynamic and random with N_ H_ 1 varying between 20 - 550× 10^22 atoms cm^-2 and the PCF varied between 0.38 – 0.96 (see Table <ref>). In the case of the prolonged re-brightening phases (RB_ V and RB_ VI), the source showed intrinsically evolving spectral characteristics as well as a systematically evolving local obscuring medium (see panel a in the blue and red shaded columns in Figure <ref>). The obscuration varied in a pattern, where the least column density (N_ H_ 1) was detected during the active phase of the re-brightening (rise/peak/decay phases of the re-brightenings) and a maximum density was detected in the low phase (hard state) either before the rise or after the decay of the re-brightenings. A minimum N_ H_ 1 value of 4× 10^22 atoms cm^-2 was observed near the decay phase of RB_ V and the peak phases of RB_ VI. However, an increased N_ H_ 1 value of ∼150×10^22 atoms cm^-2 and ∼100×10^22 atoms cm^-2 was observed during the low phase/hard state of both RB_ V and RB_ VI, respectively. The PCF varied randomly between 0.6 – 0.79 and 0.37 – 0.93 throughout RB_ V and RB_ VI, respectively.
This varying trend in N_ H_ 1 is also observed during RB_ I (see panel a corresponding to RB_ I in Figure <ref>), where N_ H_ 1 decreased significantly to ∼18×10^22 atoms cm^-2 during the flare in comparison with the other phases of RB_ I (see Table <ref>).
An identical scenario was observed in V404 Cyg during its outburst in 2015. The Swift observations of the source revealed fast and highly variable obscuration (N_ H_ 1 ∼10^21 - 10^24 atoms cm^-2) within the system <cit.>. The authors suggest a clumpy Compton thick outflow to explain the fast variable obscuration in the system (see Figure 10 in ). This clumpy Compton thick outflow aptly describes the nature of the obscuring material in GRS 1915+105. The high EW of the neutral Fe Kα line (EW ≥ 1 keV) obtained from the analysis of GRS 1915+105 (see <ref>) can also be considered a strong indicator of the presence of a Compton thick absorbing material <cit.>. The
decrease in the density of the obscuring medium during the flares is seen in both GRS 1915+105 and V404 Cyg. <cit.> presumes that the radio flare preceding the X-ray flare in V404 Cyg drives the obscuring medium away, at least for a short span, which leads to a decreased density and PCF during the flare. This explanation can be adapted to justify the decrease in N_ H_ 1 during the flare in RB_ I, as there exists a precursory radio flare to RB_ I. However, in the case of RB_ V and RB_ VI, we observe a decreased radio activity as the source moves towards the peak during RB_ V and RB_ VI (ALMA observations as reported in ). This indicates that there is some other mechanism in the system that drives away the obscuring medium. Although their results and ours show some similarities, we understand that this explanation cannot be considered appropriate for GRS 1915+105 because V404 Cyg and GRS 1915+105 differ fundamentally with regard to the accretion rate. V404 Cyg accreted at Eddington/super-Eddington rates thereby inflating the inner accretion disc (slim disc; ), thus causing the clumpy and Compton thick outflows, whereas GRS 1915+105 accretes at sub-Eddington rates, which are inadequate to generate a clumpy outflow.
In recent works, <cit.> and <cit.> identified the obscuring medium as various layers of absorbing zones extending radially out. <cit.> detected the winds emanating from the inner hot absorption zone, at a radius < 10^9 cm and N_ H_ 1 ∼10^23 atoms cm^-2. These winds are interpreted as failed winds, which could not be launched to infinity due to the lack of magnetic field strength. The winds eventually surround the central engine, thus obscuring the system. Alternatively, <cit.> deduced the obscuring medium to be at a distance ∼10^11 cm with a density ∼10^12-10^13 cm^-3. The radial and the vertical profile of the disc, from the deduced density and radius values, suggested an inflation of the outer disc. The inflated disc acted like a torus, thereby partially or completely obscuring the inner accretion disc (see also ). However, it is not viable to capture the dynamic nature of the obscuration with models as suggested in <cit.> based on a limited sample of observations. In our study, we consider a large sample of observations to study the fast and complex evolution of the obscuring medium. Yet, there is no consensus on the processes causing the obscuration and the dynamics of the obscuration. Insight into the dynamics of the obscuring medium may be obtained by a quantitative analysis of the source using a time-dependent model for obscuration, which is beyond the scope of the present work.
§ SUMMARY
In this paper, we performed a thorough and comprehensive spectral and timing analysis of AstroSat, NICER and NuSTAR observations pertaining to the low-luminosity `obscured' phase, that GRS 1915+105 has been exhibiting since May 2019. Based on the results, we present a cohesive summary of the evolution of the spectral and timing characteristics of the source.
* GRS 1915+105 exhibited multiple re-brightenings in both the X-ray and radio bands. The bolometric luminosity (L_ bol) of the source varied between 0.004 L_ Edd during the low-luminosity phase and 0.01 L_ Edd during the peak of the re-brightening phases.
* The source exhibited state transitions during the prolonged re-brightening phases. The source was tracked making a transition from hard → intermediate → soft state → intermediate → hard state, during RB_ V and hard → soft state → hard state, during RB_ VI.
* GRS 1915+105 displayed the characteristic variability classes, ρ, λ and δ at unabsorbed luminosities of 0.01 L_ Edd, 0.02 L_ Edd and 0.02 L_ Edd respectively. Although QPOs could not be detected, based on the spectral characteristics the source is classified to belong to the χ variability class during the low-luminosity phases.
* The source revealed multiple Fe absorption and emission line features between 6 – 8 keV with EW varying between 70 eV – 3.26 keV. Fe XXV and Fe XXVI ion column density varied between 10^16 - 10^18 atoms cm^-2. The distance of the absorbing plasma was constrained to be ≤2×10^10 cm.
* The source exhibited a highly dynamic obscuration with column density varying between ∼10^22 - 10^24atoms cm^-2 throughout the 3-year observation period.
§ ACKNOWLEDGEMENTS
We thank the anonymous reviewer for his/her suggestions and comments that helped to improve the quality of this manuscript. AMP, AN acknowledge the financial support of Indian Space Research Organisation (ISRO) under RESPOND program Sanction order No. DS-2B-13012(2)/19/2019-Sec.II. AMP acknowledges the PI of this project, Dr. Baishali Garai, for the relentless support and guidance. AMP also thanks Dr. Dominic Walton (University of Hertfordshire, Hatfield, UK) for his support in pursuing this work. This publication uses data from the AstroSat mission of the ISRO archived at the Indian Space Science Data Centre (ISSDC). This work has been performed utilising the calibration databases and auxiliary analysis tools developed, maintained and distributed by AstroSat-SXT team with members from various institutions in India and abroad. This research has made use of MAXI data provided by RIKEN, JAXA and the MAXI team. Also this research has made use of software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC) and NASA’s Astrophysics Data System Bibliographic Services. AN also thank GH, SAG; DD, PDMSA and Director-URSC for encouragement and continuous support to carry out this research.
Facilities: AstroSat, MAXI, NICER, NuSTAR .
§ DATA AVAILABILITY
The data used for analysis in this article are available in AstroSat-ISSDC website (<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>), MAXI website (<http://maxi.riken.jp/top/index.html>) and NICER and NuSTAR observations from HEASARC database (<https://heasarc.gsfc.nasa.gov/docs/cgro/db-perl/W3Browse/w3browse.pl>).
|
http://arxiv.org/abs/2307.05837v1 | 20230711230318 | The Geometrical Structure of Bifurcations During Spatial Decision-Making | [
"Dan Gorbonos",
"Nir S. Gov",
"Iain D. Couzin"
] | q-bio.NC | [
"q-bio.NC",
"cond-mat.stat-mech",
"physics.bio-ph"
] |
^1Department of Collective Behaviour, Max Planck Institute of Animal Behavior, 78464 Konstanz, Germany; ^2Centre for the Advanced Study of Collective Behaviour, University of Konstanz, 78464 Konstanz, Germany; ^3Department of Biology, University of Konstanz, 78464 Konstanz, Germany; ^4Department of Chemical and Biological Physics, Weizmann Institute of Science, Israel
Animals must constantly make decisions on the move, such as when choosing among multiple options, or “targets”, in space. Recent evidence suggests that this results from a recursive feedback between the (vectorial) neural representation of the targets and the resulting motion defined by this consensus, which then changes the egocentric neural representation of the options, and so on. Here we employ a simple model of this process to both explore how its dynamics account for the experimentally-observed abruptly-branching trajectories exhibited by animals during spatial decision-making, and to provide new insights into spatiotemporal computation. Essential neural dynamics, notably local excitation and long-range inhibition, are captured in our model via spin-system dynamics, with groups of Ising-spins representing neural “activity bumps” corresponding to target directions (as in a neural ring-attractor network, for example). Analysis, employing a novel “mean-field trajectory” approach, reveals the nature of the spontaneous symmetry breaking (bifurcations in the model that result in literal bifurcations in trajectory space) and how it gives rise to new geometric principles for spatiotemporal decision-making. We find that all bifurcation points, beyond the very first, fall on a small number of “bifurcation curves”. It is the spatial organization of these curves that is shown to be key to determining the shape of the trajectories, such as self-similar or space filling, exhibited during decision-making, irrespective of the trajectory’s starting point. Furthermore, we find that a non-Euclidean (neural) representation of space (effectively an elliptic geometry) considerably reduces the number of bifurcation points in many geometrical configurations (including from an infinite number to only 3), preventing endless indecision and promoting effective spatial decision-making. This suggests that a non-Euclidean neural representation of space may be expected to have evolved across species in order to facilitate spatial decision-making.
The Geometrical Structure of Bifurcations During Spatial Decision-Making
Dan Gorbonos^1, Nir S. Gov^4 and Iain D. Couzin^1,2,3
July 11, 2023
========================================================================
§ INTRODUCTION
Selecting among spatially-discrete options (targets), such as food sources, shelters, or mates, is a ubiquitous challenge for animals. Despite this, very little research has been undertaken with respect to the mechanistic basis of spatial decision-making. For example, only recently have researchers explicitly considered the trajectories exhibited by animals when making such decisions <cit.>. These experiments with both invertebrates (fruit flies and locusts) and a vertebrate (zebrafish) provided evidence that the brain repeatedly breaks multi-choice decisions into a series of binary decisions in space-time. This is reflected in abrupt changes in the trajectory of animals as they approach targets. We demonstrated, in a simple spin-based model of this cognitive process, that the bifurcations within the brain, which correspond to literal bifurcations in the trajectories (when summed over many repeated decisions), are consistent with a recursive feedback between the (vectorial) neural representation of the different options and the animal’s movement. Spatial decision-making therefore appears to be an ‘embodied’ process that depends on the recurrent interplay between the time-varying egocentric neural representation of the options as animals move through space and the neural consensus dynamics that establish which direction to select at each moment in time. At such bifurcation points the spin model predicts that the brain spontaneously becomes extremely sensitive to very small differences between the options <cit.>, which is a highly valuable property for decision-making.
Consistent with this model, the brains of a wide range of species have been shown to represent egocentric spatial relationships via explicit vectorial representation <cit.>. For example, neural ring-attractor models were motivated by such neuronal representations of the instantaneous heading direction of animals in the horizontal plane regardless of their location and ongoing behavior <cit.>. When animals are exposed to a prominent target, an activity bump that corresponds to this target appears on a specified sector of the ring <cit.>, and as a result it ends up pointing towards the target as the animal turns towards it. After exposure to a new attractive landmark, the choice to turn to the new one is expressed as a shift in the activity bump towards the new target. The exact nature of the angular shift of the activity bump was found to be dependent on the relative angle of the new landmark from the older one <cit.>. The interactions between the neurons within this network have been shown to exhibit local excitation (positive feedback among neural ensembles that represent directions with a small relative angle) and long range, or global, inhibition (negative feedback between neural ensembles that represent directions with a large relative angle). This transition between types of feedback is responsible for the abrupt transition between two types of behavior - movement in a compromise direction between the targets, and a decision phase where the organism makes its way towards one of the targets. This behavior is captured in our spin-model - at a critical angle there exists a transition between a positive interaction between the spins (“ferromagnetic”) and a negative one (“antiferromagnetic”). By analogy, the spins in the model describe neuronal groups that either exhibit a relatively high, or low, firing rate, respectively. It was shown in <cit.> that the number of spins that are active in a specific direction can be formally mapped to the firing rate of the corresponding neuronal cluster in the ring attractor class of models.
The bifurcated trajectories observed for the fly, locust and fish <cit.>, when moving towards two or three targets, indicated that the spin-spin interactions in the model have an angular dependence that deviates from a simple vectorial dot product <cit.>.
We therefore considered in <cit.> a distortion of the angle in the
decision making process that represents neuronal encoding of space in a manner which could be
non-Euclidean. This is supported by evidence for such distortion in the neuronal interactions <cit.>. Despite the good agreement between the model and the experiments, many open questions remain. We introduced in <cit.> a new tool to study the mean field solution, which we called “MF trajectory”, where we solved at each point in space the direction (or directions) of motion assuming that the system reached thermal equilibrium. The solution at each point gave us a vector (or vectors) that pointed towards the next point. This way we got a solution which is effectively at thermal equilibrium at each point in space. Using the MF trajectory we discovered the emergence of trajectories with an infinite series of bifurcations (for 3 targets), but it was not clear what determines this phenomenon. In addition, we found that the non-Euclidean distortion was roughly the same across species. These results naturally bring up the following questions: What (if anything) is special about this regime of distortion?
What controls the complexity, overall spatial organization and structure of the trajectories and their bifurcations?
Why do we find in the simulations (and consistent with experimental data) always two outgoing branches from each bifurcation?
These and other questions motivated us to study in greater detail the origin of the different classes of trajectories predicted by the model and the bifurcations along them. The analysis presented here exposes novel geometrical principles that relate the mutual arrangements of the bifurcations to the arrangement of the targets.
The model is also interesting from a basic statistical physics point of view, as a new non-equilibrium system. Note that this is a complex physics problem, where the spin state defines the direction of motion, while the motion along the trajectory constantly changes the relative angles between the targets, as perceived by the moving animal, thereby changing the spin interactions (the Hamiltonian) along the trajectory. The present study of this problem also makes an advance in our understanding of this novel non-equilibrium statistical physics model, whereby the order parameter (“magnetization”) of the spins translates into direction of motion through space. Therefore it expands our understanding of a novel class of active complex systems, where self-propelled particles undergo internal “decisions” regarding their directions of motion.
§ THE SPIN MODEL
Here we generalize the spin model that was introduced in <cit.> to k targets (see also the SI of <cit.>). The degrees of freedom are modeled as N spins that are divided into k equal subgroups. The direction of the target associated with a subgroup is denoted by p̂_i (i=1,..,k). Each individual spin in a subgroup has two states: “on”, representing neuronal firing (σ_i=1), or “off” (σ_i=0). The system can be described by the following Hamiltonian:
H=-(k v̅^2/N)∑_i≠ j p̂_i·p̂_j σ_iσ_j,
where v̅ is a constant whose dimension is velocity. The connectivity between the spins is all-to-all, for simplicity, which makes the mean-field (MF) solutions exact in the large N limit.
The instantaneous velocity of the organism (or the group <cit.>) is given by the sum of all the “on” spins in the direction of the respective targets
V⃗=v̅∑_i=1^kn_ip̂_i,
where n_i=N_i/N, and N_i is the number of active spins (or individuals) in group i that exert a force towards the target in the direction p̂_i.
The spin flip rates are constructed from the Hamiltonian (Eq.<ref>), in terms of Glauber dynamics <cit.>:
r^(i)_1→ 0=1/(1+exp(2 k v̅ V⃗·p̂_i/T))
r^(i)_0→ 1=1/(1+exp(-2 k v̅ V⃗·p̂_i/T)),
where r^(i)_1→ 0 is the rate at which a spin in group i is switched off and
r^(i)_0→ 1 is the rate at which the spin is switched on. The temperature in our model describes the noise that drives random spin-flipping dynamics <cit.>. Within the context of neuronal dynamics, the temperature T represents the stochastic noise of neuronal firing. The equations of motion for the number of active spins in each subgroup (master equation), in the limit of N≫1, are:
d n_i/dt=(1/k-n_i)/(1+exp(-2 k v̅ V⃗·p̂_i/T))-n_i/(1+exp(2 k v̅ V⃗·p̂_i/T)).
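As an aside, the flip rates above can also be simulated directly with discrete-time Glauber (heat-bath) updates; the master equation is the N≫1 limit of this stochastic process. A minimal sketch (all parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def simulate_spins(p_hat, N=600, T=0.2, vbar=1.0, steps=20000):
    # p_hat : (k, 2) array of unit vectors towards the targets
    # Returns the time series of the velocity V(t) = vbar * sum_i n_i p_hat_i.
    k = len(p_hat)
    group = np.repeat(np.arange(k), N // k)       # target index of each spin
    sigma = rng.integers(0, 2, size=len(group))   # initial on/off states
    V_hist = []
    for _ in range(steps):
        n = np.array([sigma[group == i].sum() for i in range(k)]) / len(group)
        V = vbar * (n @ p_hat)
        j = rng.integers(len(group))              # pick one spin at random
        x = 2.0 * k * vbar * (V @ p_hat[group[j]]) / T
        sigma[j] = rng.random() < 1.0 / (1.0 + np.exp(-x))   # heat-bath update
        V_hist.append(V)
    return np.array(V_hist)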
It is convenient to rearrange Eq. (<ref>) to get
d n_i/dt=1/(k(1+exp(-2 k v̅ V⃗·p̂_i/T)))-n_i.
The steady-state solution of Eq. (<ref>) can be written as the solution of the following system of algebraic equations:
n_i=1/(k(1+exp(-2 k v̅ V⃗·p̂_i/T))), i=1,...,k.
The system of MF equations includes the k+2 equations (<ref>) and (<ref>), whose solutions are the steady-state velocity and the fraction of active spins in each group: (V⃗_ss,n_i,ss) (i=1,..,k). We will refer to these as the MF steady-state (MFSS) solutions.
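The MFSS system can be solved numerically by damped fixed-point iteration of the equations for n_i and V⃗; different initial guesses select among coexisting solutions. A minimal sketch (parameter values and target geometry are illustrative):

import numpy as np

def solve_mfss(p_hat, T=0.2, vbar=1.0, V0=None, tol=1e-10, max_iter=10000):
    # p_hat : (k, 2) array of unit vectors towards the targets
    # V0    : initial guess for the velocity (selects among coexisting MFSS branches)
    k = len(p_hat)
    V = vbar * p_hat.mean(axis=0) if V0 is None else np.asarray(V0, float)
    for _ in range(max_iter):
        n = 1.0 / (k * (1.0 + np.exp(-2.0 * k * vbar * (p_hat @ V) / T)))
        V_new = vbar * (n @ p_hat)
        if np.linalg.norm(V_new - V) < tol:
            break
        V = 0.5 * V + 0.5 * V_new                 # damped update for robustness
    return V_new, n

# example: two targets at +/-30 degrees from the initial heading (illustrative)
theta = np.radians(30.0)
p_hat = np.array([[np.cos(theta),  np.sin(theta)],
                  [np.cos(theta), -np.sin(theta)]])
V_ss, n_ss = solve_mfss(p_hat)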
§ MEAN FIELD TRAJECTORIES AND BIFURCATION POINTS
Here we introduce “MF trajectories” as a new tool to study the dynamics, while in <cit.> the trajectories were found by simulations. The nature of these trajectories, and the types of bifurcations along them, are explored below for systems with two or three targets.
§.§ Two targets
At each point in space we can calculate the stable steady-state solutions (V⃗,n_i), which give the MF direction vectors that are possible at that point. This defines a global flow field (Fig. <ref>D) along which the animal moves, in the limit of slow speed, such that the spins have time at each position to equilibrate to the MFSS solutions, and the trajectory follows the flow field defined by these stable solutions. For targets that are at infinity, the angles between them are constant, and the Hamiltonian (Eq.<ref>) is time-independent. However, for targets at finite distances, the relative angles change along the trajectory. Furthermore, the animal can move from a region with a single solution to a region where there are several stable solutions, and there is a spontaneous symmetry breaking. The location along the trajectory where such a transition occurs is defined as a bifurcation point, as the trajectory splits into several possible trajectories that emerge from that point.
We define a bifurcation point as the point where the MFSS solution along which the animal was moving becomes unstable, and we now write the criterion for instability of a general MFSS solution. The dynamical equation for the velocity can be obtained using equations (<ref>) and (<ref>):
dV⃗/dt=v̅∑^k_i=1(d n_i/dt)p̂_i=∑^k_i=1v̅p̂_i/[k(1+exp(-2 k v̅ V⃗·p̂_i/T))]-V⃗
Let V⃗_ss be a solution of the model equations and consider a small perturbation of the velocity in the direction perpendicular to V⃗_ss:
V⃗=V⃗_ss+ϵn̂
where n̂ is the unit normal to V⃗_ss. Substituting into Eq. (<ref>), expanding to first order in ϵ, and taking the normal component, we get the following equation for the perturbation ϵ: dϵ/dt=-Aϵ+𝒪(ϵ^2),
where
A≡ 1-(v̅^2/2 T)∑^k_i=1 sech^2(k v̅ V⃗·p̂_i/T)(n̂·p̂_i)^2.
Then the solution V⃗ is stable if A>0 and unstable if A<0. Therefore the bifurcation occurs where A=0.
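As a sanity check of this criterion, the short sketch below evaluates A for an MFSS velocity obtained, e.g., with the mfss_velocity sketch above; the helper name stability_A is ours and the sech^2 term is written as 1/cosh^2.

import numpy as np

def stability_A(V, p_hats, v_bar=1.0, T=0.2):
    """Stability indicator of an MFSS solution: A > 0 stable, A = 0 marks a bifurcation."""
    k = len(p_hats)
    V = np.asarray(V, dtype=float)
    n_hat = np.array([-V[1], V[0]]) / max(np.linalg.norm(V), 1e-12)  # unit normal to the steady-state velocity
    x = k * v_bar * (p_hats @ V) / T                                 # argument of sech^2 in the criterion
    sech2 = 1.0 / np.cosh(x) ** 2
    return 1.0 - v_bar**2 / (2.0 * T) * np.sum(sech2 * (p_hats @ n_hat) ** 2)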
This definition means that we choose to construct the MF trajectories such that the bifurcation occurs at the spinodal curve, according to the MF phase diagram, as shown for two targets in Fig. <ref>A. The transition could be chosen to occur at any point in the hysteresis region of the phase diagram (grey region), between the binodal (where two new stable solutions first appear) and spinodal (where the original “compromise” direction becomes unstable) lines. We show in the SI that at the point where a MF solution becomes unstable, there are at least two alternative stable solutions. Then we repeat the same procedure along the new stable solutions and this way form a directed graph whose nodes are the bifurcation points. For more details on how the MF trajectories were calculated in practice, see the SI.
The MF trajectory for two targets is given in Fig. <ref>B for T=0.2 (the geometrical arrangement of the targets as in the experimental setup, see Fig. 1 in <cit.>). The critical bifurcation angle between the targets, where the bifurcation point occurs, is the one that corresponds to the intersection of the spinodal curve and the horizontal line T=0.2 in the phase diagram (Fig. <ref>A). At higher temperatures, above the tri-critical point where the binodal and spinodal curves coincide, the transition is second order and there is no hysteresis region.
Another way to examine the bifurcation points is using the effective free energy (whose derivation appears in the SI, and in <cit.>), which is given by
F(V⃗,T)=k V⃗^2-(T/k)∑_i=1^k ln[1+exp(2 k v̅ V⃗·p_i/T)].
The effective free energy at a specific point in space is a function of the velocity and the angles towards the targets, and it shows the energetic cost of different spin configurations that correspond to different velocities (the “free energy landscape”). The stable MFSS solutions correspond to minima of this free energy. In Fig. <ref>C we see the free energy function at the bifurcation point of Fig. <ref>B,
where the original direction of movement becomes a saddle point, and the two new minima correspond to the two new stable solutions and directions of motion.
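The free-energy landscape in Fig. <ref>C can be reproduced numerically by evaluating F on a grid of candidate velocities at a fixed position; the sketch below does this, with function and variable names of our choosing and log(1+exp(x)) written in a numerically stable form.

import numpy as np

def free_energy(V, p_hats, v_bar=1.0, T=0.2):
    """Effective MF free energy per spin F(V, T) at the current position."""
    V = np.asarray(V, dtype=float)
    k = len(p_hats)
    # log(1 + exp(x)) computed as logaddexp(0, x) to avoid overflow for large x
    return k * (V @ V) - (T / k) * np.sum(np.logaddexp(0.0, 2.0 * k * v_bar * (p_hats @ V) / T))

# example landscape: minima correspond to the stable MFSS directions of motion
# vx = vy = np.linspace(-1.0, 1.0, 201)
# F_grid = [[free_energy((x, y), p_hats) for x in vx] for y in vy]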
In Fig. <ref>D we plot the flow field, of the MFSS solutions. The bifurcation points for two different trajectories are denoted, lying along a curve that connects the two targets. We find this curve by requiring that the compromise between the two targets n_1=n_2≡ n is a stable solution (Eqs. <ref>,<ref>), while requiring that A=0 at the bifurcation points (Eq. <ref>). We then obtain the following two equations:
2 T/v̅^2 = sech^2(2 v̅ V_p/T)(sin^2θ_1+sin^2θ_2)
n = 1/[2(1+exp(-4 v̅ V_p/T))]
where
V_p=v̅ n[1+cos(θ_1+θ_2)]
and we denote by θ_1,2 the angles to the respective targets as measured relative to the x axis. We have three variables (θ_1,θ_2,n) for two equations and therefore a one parameter family of solutions which defines the bifurcation curve that connects the two targets (the dashed brown line in Fig. <ref>D).
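One way to trace this one-parameter family numerically is to fix θ_1 and solve the two equations above for (θ_2,n) with a root finder; the sketch below does this, with the function name, the initial guess, and the use of scipy.optimize.fsolve being our own choices rather than the paper's implementation.

import numpy as np
from scipy.optimize import fsolve

def bc_two_targets(theta1, v_bar=1.0, T=0.2, guess=(2.0, 0.4)):
    """Solve the two bifurcation-curve equations for (theta_2, n), with theta_1 as the parameter."""
    def eqs(x):
        theta2, n = x
        V_p = v_bar * n * (1.0 + np.cos(theta1 + theta2))
        sech2 = 1.0 / np.cosh(2.0 * v_bar * V_p / T) ** 2
        r1 = 2.0 * T / v_bar**2 - sech2 * (np.sin(theta1)**2 + np.sin(theta2)**2)
        r2 = n - 1.0 / (2.0 * (1.0 + np.exp(-4.0 * v_bar * V_p / T)))
        return [r1, r2]
    return fsolve(eqs, guess)   # the initial guess may need tuning along the curve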
Using the effective free energy we can also prove that the description of a bifurcation point as a point of instability of one solution is equivalent to the existence of at least two stable solutions at this point. The proof appears in the appendix.
§.§ Three targets
In the case of three targets there is more than one bifurcation point along the trajectory. Let's consider a symmetric starting point, as shown in Fig. <ref>F. The first bifurcation is between a compromise of all the three targets (for θ<θ_c) and a compromise of only two targets (while the remaining target is suppressed). The phase diagram for this first bifurcation point, along a symmetric trajectory, is given in Fig. <ref>E, with the corresponding MF trajectory in Fig. <ref>F. The effective energy landscape for this first bifurcation is shown in Fig. <ref>G, and has the same binary structure as for the bifurcation of two targets (Fig. <ref>C).
At the second bifurcation we find that the trajectories become much more complex (Fig. <ref>F), with one solution pointing towards an edge target, while the second solution is a compromise of the remaining two targets. Subsequently there are more bifurcations of the same type, alternating between the outermost targets and compromise solutions (between the left and central targets, or between the right and central targets). This pattern introduces an infinite number of bifurcations that become infinitely dense towards the central target. In the SI we show numerically that this sequence converges into an infinite geometric series towards the central target, where the pattern becomes self-similar. For this particular arrangement of targets, the MF trajectories are found to reach the central target after an infinite series of bifurcations. The presence of noise during the motion of a real animal, or during dynamic simulations, enables trajectories to reach the central target <cit.>.
The number and positions of the bifurcation points strongly depends on the geometrical arrangement of the targets. In Fig. <ref>A-D we demonstrate this by calculating the trajectories for a geometry where the central target is shifted to be further behind the two edge targets, compared to the arrangement of Fig. <ref>F. In this configuration, the trajectories are not self-similar, and there are trajectories that reach the central target, as shown in Fig. <ref>B,C by following the trajectories up to the fourth bifurcations. In the case of three targets, the curve on which the first bifurcation points lie does not correspond to a compromise between targets, and unlike the case of two targets we do not have an analytic condition for this curve. It is found numerically, denoted by the green dashed line in Fig. <ref> for the trajectories that start to the left of the edge targets.
The second bifurcation along the trajectories is also binary, leading to one of the edge targets or to a compromise of the remaining targets. However, the third bifurcation is sometimes split into more than two outgoing trajectories (Fig. <ref>B,C). In general, the trajectory arriving at the third bifurcation was a compromise between two targets. This compromise becomes unstable at the “bifurcation curves” (BC) denoted by the dashed brown lines in Fig. <ref>. Therefore, from each bifurcation point on these BC, there can be up to four outgoing trajectories (Figs. <ref>B,C): two “decision” trajectories towards the two targets that were in compromise when arriving at the BC, and two “compromise” solutions that involve the third target.
The BC, and their organization with respect to the targets, are key to determining the shape of the trajectories, as all the bifurcation points of second and higher order reside on them. Let us start by specifying the analytic conditions that these BC obey: We require an instability A=0 (Eq. <ref>), in which there is a compromise of two groups of spins (in the case of three targets, in analogy to Eq. (<ref>) for two targets). By requiring a compromise of two of the three targets, we obtain three pairs that give us the three BC in Fig. <ref>B,C. Let us consider for example the trajectories where we have a compromise between two targets (n_1=n_2). It is convenient to parameterize the points according to three angles that are given in the inset of Fig. <ref>A. For a given point, θ_1,θ_2,θ_3 are the angles to the corresponding target directions 1,2,3, and in addition we define the following angles:
θ_12 = θ_1+θ_2,
θ_23 = π-θ_2-θ_3,
θ_13 = π-θ_1+θ_3,
which we substitute into the velocity components, given for the case of three targets in Eqs. (<ref>, <ref>, <ref>), to obtain
V_p1 = v̅[n_1+n_2cos(θ_1+θ_2)-n_3cos(θ_1-θ_3)],
V_p2 = v̅[n_1cos(θ_1+θ_2)+n_2-n_3cos(θ_2+θ_3)],
V_p3 = v̅[n_3-n_1cos(θ_1-θ_3)-n_2cos(θ_2+θ_3)].
Substituting into the steady state equations <ref> and <ref>, where we take A=0 and compromise between two targets at the bifurcation points, we get the following system of five equations:
n_1 = n_2,
2 T/v̅^2 = sech^2(3 v̅ V_p_1/T)sin^2θ_1+sech^2(3 v̅ V_p_2/T)sin^2θ_2+sech^2(3 v̅ V_p_3/T)sin^2θ_3,
n_1 = 1/[3(1+exp(-6 v̅ V_p_1/T))],
n_2 = 1/[3(1+exp(-6 v̅ V_p_2/T))],
n_3 = 1/[3(1+exp(-6 v̅ V_p_3/T))],
for six variables: n_1,n_2,n_3,θ_1,θ_2,θ_3. Then the solution can be parameterized by one parameter that gives us a curve that connects pairs of targets (Fig. <ref>B,C dashed brown curves). In Fig. <ref>B,C we show the first four bifurcation points, which lie on the BC. In Fig <ref>D we follow the trajectory up to high order (12) of bifurcations, and we find that the trajectory has a space-filling property, with the bifurcation points accumulating along the central part of the BC.
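A convenient way to trace these BC numerically is to treat θ_3 as the free parameter and solve the remaining five equations for (n_1,n_2,n_3,θ_1,θ_2) with a root finder. The sketch below does this; the function name, initial guess, and the use of scipy.optimize.fsolve are our own choices, and the sign conventions follow Eqs. (<ref>)-(<ref>) above.

import numpy as np
from scipy.optimize import fsolve

def bc_three_targets(theta3, v_bar=1.0, T=0.2, guess=(0.15, 0.15, 0.15, 1.0, 1.0)):
    """Solve the five BC equations (compromise n_1 = n_2) with theta_3 as the parameter."""
    sech2 = lambda z: 1.0 / np.cosh(z) ** 2
    def eqs(x):
        n1, n2, n3, th1, th2 = x
        Vp1 = v_bar * (n1 + n2 * np.cos(th1 + th2) - n3 * np.cos(th1 - theta3))
        Vp2 = v_bar * (n1 * np.cos(th1 + th2) + n2 - n3 * np.cos(th2 + theta3))
        Vp3 = v_bar * (n3 - n1 * np.cos(th1 - theta3) - n2 * np.cos(th2 + theta3))
        return [
            n1 - n2,
            2.0 * T / v_bar**2 - (sech2(3 * v_bar * Vp1 / T) * np.sin(th1) ** 2
                                  + sech2(3 * v_bar * Vp2 / T) * np.sin(th2) ** 2
                                  + sech2(3 * v_bar * Vp3 / T) * np.sin(theta3) ** 2),
            n1 - 1.0 / (3.0 * (1.0 + np.exp(-6.0 * v_bar * Vp1 / T))),
            n2 - 1.0 / (3.0 * (1.0 + np.exp(-6.0 * v_bar * Vp2 / T))),
            n3 - 1.0 / (3.0 * (1.0 + np.exp(-6.0 * v_bar * Vp3 / T))),
        ]
    return fsolve(eqs, guess)   # in practice, sweep theta_3 and warm-start from the previous solution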
Furthermore, the BC are determined by the relative positions of the targets and not by the starting point of the trajectory (Fig. <ref>B,C). In this sense the BC act as attractors of the MF trajectories. We demonstrate this by plotting the trajectories and the BC for configurations where the central target is shifted closer to the edge targets, as shown in Fig. <ref>E-G by following the trajectories up to 12th bifurcation order. Since the edge targets remain in the same positions as in Figs. <ref>B-D, the BC that connects these two targets remains the same. By shifting the central target towards the edge targets we reach a configuration (Fig. <ref>G) where this BC is on the other side of the central target, and thus cannot be reached by trajectories that bifurcate along the other two BC. As a result, we obtain only two reachable BC (dashed brown lines in Fig. <ref>G), and an infinite interplay between them which converges to a self similar pattern (Fig.<ref>F, and see SI for more details).
The dependence of the bifurcation structure on the temperature is given in appendix (Figs. <ref>, <ref>, <ref>). In particular, we find that the BC simplify significantly and become straight lines between the targets in the limit of T→ 0.
§ COMPARISON BETWEEN SIMULATED AND MF TRAJECTORIES
Since real animals and flocks move at a finite speed, and are exposed to sources of noise (e.g. sensory noise), we next explore the relation between the deterministic MF trajectories, and noisy simulations of the spatial decision-making process. In <cit.> we showed that these sources of noise can smear out the fractal-like trajectories predicted by the MF (Fig. <ref>G), in the numerical simulations.
In the numerical simulations (Fig. <ref>) each target is represented by a small number of spins (15 per target), and therefore has intrinsic finite-size noise. In addition, the position of the moving animal (or flock) is updated with a time step that allows for about 10 spin updates, using Glauber dynamics (Eq.<ref>). This procedure of finite time per movement step is different from the MF behavior, which is the limit of infinitely slow movements. Unlike MF where the spins are at their equilibrium at each point along the trajectory, the spins are not at equilibrium along the simulated trajectory.
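For concreteness, a minimal version of such a finite-N simulation is sketched below in Python: each movement step recomputes the target directions from the current position and then performs a few heat-bath (Glauber) spin updates before the position is advanced. The spin counts, step sizes, and function names are illustrative choices, not the exact values used in <cit.>.

import numpy as np

def simulate_trajectory(targets, x0, v_bar=1.0, T=0.2, spins_per_target=15,
                        spin_updates_per_step=10, dt=0.05, n_steps=2000, seed=0):
    """Finite-N Glauber simulation of an agent moving towards k targets (a sketch)."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets, dtype=float)
    k = len(targets)
    N = k * spins_per_target
    sigma = rng.integers(0, 2, size=N)                  # spin states (0/1)
    group = np.repeat(np.arange(k), spins_per_target)   # target associated with each spin
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        d = targets - x
        p_hat = d / np.linalg.norm(d, axis=1, keepdims=True)   # directions change as the agent moves
        for _ in range(spin_updates_per_step):
            n = np.bincount(group, weights=sigma, minlength=k) / N
            V = v_bar * n @ p_hat                               # instantaneous velocity
            i = rng.integers(N)                                 # pick a random spin
            h = 2.0 * k * v_bar * (V @ p_hat[group[i]]) / T
            sigma[i] = rng.random() < 1.0 / (1.0 + np.exp(-h))  # heat-bath update, consistent with the Glauber rates
        n = np.bincount(group, weights=sigma, minlength=k) / N
        x = x + dt * v_bar * n @ p_hat
        path.append(x.copy())
    return np.array(path)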
Examples of simulation trajectories are given in Fig. <ref>A, which are compared to the corresponding MF trajectory (Fig. <ref>B). Let us focus on the third bifurcation, denoted by the blue frame in Fig. <ref>A,B. At this bifurcation in the simulation there are only two outgoing trajectories, while in the MF trajectory we find three outgoing trajectories, in different directions. We can explain these differences by looking at the energy landscape for this bifurcation, in Fig. <ref>C. The red arrow indicates the original unstable direction. The green arrows show the time evolution of the spins, as the system flows towards the newly available minima. The MF directions correspond to the three new minima at this bifurcation point, while the directions in which the system leaves the bifurcation in the simulations correspond to the minimum (“min-1”) and saddle point (“meta”) that are closest to the original direction. Due to the short time available for the spins to evolve at each point along the simulated trajectory, the system moves along the two directions where the spin evolution slows down, corresponding to the minima or saddle point closest to the original direction before the bifurcation point. Trajectories pointing towards the compromise of the edge targets (“min-2” in Fig. <ref>C) are therefore not observed in the simulations. When the system continues along the saddle-point direction (“meta”), it sometimes becomes a true minimum after a short time, allowing the system to continue towards the central target. In this way, in this example, the system can reach the central target from the third bifurcation point, even though we do not see it in the MF trajectory. In other cases, we see that the spins evolve away from the saddle to the local minimum (“min-3”), and there is a sharp turn in the trajectories (denoted by red circles in Fig. <ref>C).
This example explains why we never see bifurcations that are non-binary in the simulations and in the experimental data <cit.>, despite the existence of such complex bifurcations in the MF trajectories.
§ THE NON-EUCLIDEAN ENCODING OF SPACE
Another aspect of the trajectories that we found when comparing our model to the experiments <cit.>, is that the interactions between the spins are modified from the simple cosine function of the relative angles (dot product in Eq.<ref>). We now explore how this modified interaction affects the MF trajectories, which can shed light on the features that this modification provides.
Let θ_ij denote the angle between the targets at p̂_i and p̂_j, so that
cosθ_ij=p̂_i·p̂_j.
The modified interaction between the spins that represent targets i,j is simply the cosine of the distorted angle θ̃_ij, parameterized by ν: θ_ij→θ̃_ij=π(θ_ij/π)^ν.
For ν<1 in Eq. (<ref>) we get a distortion of the Euclidean angle between the directions of the targets, corresponding to the interaction becoming negative at a smaller relative angle, and as a result a non-Euclidean encoding of space (effectively an elliptic geometry). From a comparison to a series of experiments in fruit flies, locusts and zebrafish, a common value of ν∼ 0.5 was obtained, and for this value we plot the phase diagrams for a symmetric trajectory towards two and three targets (Fig. <ref>A,C), which shows that the transition lines are shifted to lower angles compared to Fig. <ref>A,E. The corresponding MF trajectories for the two and three target systems are shown in Fig. <ref>B,D, compared to the trajectories for simple cosine interactions (ν=1, Fig. <ref>B,F).
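In practice, the only change to the interaction is that the dot product p̂_i·p̂_j is replaced by the cosine of the distorted angle; a small helper of our own naming illustrates this:

import numpy as np

def distorted_cos(p_i, p_j, nu=0.5):
    """Cosine of the distorted relative angle, theta_ij -> pi * (theta_ij / pi)**nu."""
    cos_theta = np.clip(np.dot(p_i, p_j), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return np.cos(np.pi * (theta / np.pi) ** nu)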
The stability test in Eq. (<ref>), which we used before to identify the bifurcations along the MF trajectories, cannot be generalized to ν<1 directly. For this purpose we had to rewrite it into an equivalent form which is given in the appendix (Eq. (<ref>)). Using this equivalent form, we can also obtain equations for the BC, for the modified interactions for ν<1, by replacing Eq. (<ref>) with Eq. (<ref>).
Next we want to examine how the trajectories change when the angular distortion is introduced, namely as ν is decreased below 1.
For the case of three targets, we find that for decreasing ν there is a reduction in the complexity of the trajectories, as shown for example in Fig. <ref>A-D. Due to the reduction in ν the bifurcations happen earlier along the trajectory at a smaller relative angle (as in Fig. <ref>), and this way many complexities of the bifurcation pattern shrink and sometimes vanish. The number of bifurcations is reduced from infinity for ν=1 up to three bifurcations for ν≲ 0.55.
In the case of four targets we find that the effects of the angular distortion can be even more dramatic. For the regular cosine (ν=1) interactions, we find loops in the trajectories (Fig. <ref>E), which form due to an endless series of bifurcations, and do not allow any trajectory to reach the middle targets (the two targets that are on the right side). We show in detail how loops are formed in this case in the appendix. Applying angular distortion (Fig. <ref>F-H) we find that the whole bifurcation tree becomes more spatially spread, and trajectories that lead to the middle targets appear.
Overall, the MF trajectories in Figs. <ref>- <ref> demonstrate that by employing interactions based on the distorted relative angle, with ν significantly lower than 1, the animals and flocks achieve trajectories that have bifurcations that are spread over a much larger spatial domain. This spreading of the bifurcation network simplifies the trajectories, and resolves pathological paths such as loops that never reach certain targets. This leads in reality to a more uniform probability of reaching the different targets, thereby making foraging more efficient and less prone to biases based on the geometry of the targets' relative locations.
§ DISCUSSION
We presented here a detailed theoretical exploration of the mean field (MF) trajectories that arise from our spin-based decision-making model, which describes how single animals (and perhaps also animal groups <cit.>) navigate towards an array of identical targets. This model was recently shown to predict tree-like trajectories with bifurcations, which were found in experiments and simulations of single animal and flock movement <cit.>. Here we showed that the bifurcations along these trajectories lie along “bifurcation curves” (BC), and the organization of these BC determines the global nature of the trajectories. These BC are dependent on the spatial organization of the targets, and thus do not depend on specific initial conditions of the trajectory. The BC serve as attractors of the bifurcation points, such that any trajectory tends to converge to the same pattern of transitional movement between the BC. We show that the mutual arrangement of the BC determines the qualitative nature of the trajectories. For example, for three targets, if all three BC intersect each other we get a space-filling trajectory, and if they do not intersect each other we get a self-similar trajectory.
The MF analysis of the trajectories also allows us to map the energy landscape in the spin/velocity space at the bifurcation points, thereby exposing when the outgoing paths will reach directly one of the targets, or continue along other compromise directions between targets. We can explain why we see in simulations only two branches (two outgoing trajectories) at every bifurcation point - at the bifurcation point the direction of motion shifts towards the new stable directions of motion, ending up at the first minima or saddle point (“exit point”) that are closest to the original direction of movement (as explained in Fig. <ref>).
In <cit.> it was shown that the sensitivity of the animal to external cues is maximal when it is moving through a bifurcation, thereby enhancing its ability to pick up small biases and more accurately navigate towards the most beneficial target. This property that our model predicts makes the organization of the bifurcations along the trajectories crucially important - beyond determining the overall shape of the trajectories, these also define points in space-time near which we expect the brain can amplify discrimination among options based on even very small differences in their perceived quality.
When comparing the theoretical trajectories to the experiments <cit.>, it was found that the spins have modified interactions that can be mapped to a distortion of the relative angles between targets. Using our MF analysis of the trajectories, we show how increasing the angular distortion leads to bifurcations at smaller relative angles between the targets. By spatially spreading the bifurcations more uniformly the distortion simplifies greatly the resulting trajectories. In many spatial configurations of three targets, distortion reduces the number of bifurcations from an infinite number to three, and in the case of four targets it prevents endless indecision that appears in the formation of loops. We therefore conclude that the value of the angular distortion that was found to apply across taxa <cit.> can be motivated by allowing animals to move along less complex trajectories, resolving convoluted paths that appear when there are more numerous targets. Our results suggest that nature seems to have chosen this form of interactions, as they allow animals to perform more efficient foraging, uniformly exploring arrays of identical targets.
These results highlight the richness of this spin model, where movement through space is determined by spin-spin interactions, which are in turn dependent on the position (of the animal or flock) with respect to the targets. It forms the first step towards further theoretical analysis of this model under more realistic conditions, such as in the presence of bias between the targets.
§ ACKNOWLEDGMENTS
N.S.G. is the incumbent of the Lee and William Abramowitz Professorial Chair of Biophysics. This research is made possible in part by the historic generosity of the Harold Perlman Family. D.G. and I.D.C. acknowledge support from the Office of Naval Research Grant N0001419-1-2556, Germany’s Excellence Strategy-EXC 2117-422037984 (to I.D.C.) and
the Max Planck Society, as well as the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant agreement (to I.D.C.; #860949).
§ APPENDIX
§ THE ASSOCIATION OF AN INSTABILITY (BIFURCATION) WITH THE EXISTENCE OF AT LEAST TWO STABLE SOLUTIONS
Proposition 1: There is an unstable solution to the model equations at a point r⃗_b if and only if there are at least two stable solutions at this point.
This proposition gives us an alternative definition for the bifurcation point as a point where there exist at least two stable solutions. It is easy to see why it is true if we look at the expression for the effective free energy of the spins (Eq. <ref>), as a function of ±|V⃗| (one dimensional projection). For large values of |V⃗| the free energy diverges
lim_±|V⃗|→∞F(V⃗,T)=∞.
An unstable point at r⃗_b corresponds to a local maximum of F(V⃗,T) at V⃗(r⃗_b). Together with Eq. (<ref>) (and Rolle's lemma) we conclude that there should exist at least two minima for the function F(V⃗,T), that correspond to stable solutions of the model equations. If we start from two stable solutions (minima of F(V⃗,T)) according to Rolle's lemma there should be at least one unstable point between them.
§ ALTERNATIVE WAY TO CHECK STABILITY
We can write the system of equations (<ref>) through its projections on the directions of the targets p̂_i (i=1,..,k):
dV_p_i/dt=∑_j=1^k v̅(p̂_j·p̂_i)/[k(1+exp(-2 k v̅ V_p_j/T))]-V_p_i,
where V_p_i≡V⃗·p_i.
Let V^(0)_p_i be a solution of equations (<ref>) and consider a small perturbation
V_p_i=V^(0)_p_i+ϵ_i.
Then the equations for the linear perturbations (first order in ϵ_i) are
dϵ_i/dt=∑_j=1^kT_ijϵ_j
where
T_ij≡(v̅^2(p̂_j·p̂_i)/2 T) sech^2(k v̅ V_p_j/T)-δ_ij.
Then the solution V^(0)_p_i is stable if and only if T_ij is negative-definite. The advantage of this formulation is that it can be extended to the distorted angle θ̃_ij with ν<1 (Eqs. <ref>-<ref>).
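A small numerical check of this condition, under the assumption of the standard cosine interaction (ν=1), can be written as follows; the function name is ours, and stability is assessed through the eigenvalues of T_ij (all real parts negative), which is the linear-stability reading of the statement above.

import numpy as np

def is_stable(V_p, p_hats, v_bar=1.0, T=0.2):
    """Check linear stability of an MFSS solution via the perturbation matrix T_ij.

    V_p : projections V·p_hat_i at the solution; p_hats : (k, 2) target directions."""
    k = len(p_hats)
    G = p_hats @ p_hats.T   # p_hat_i · p_hat_j (replace by distorted cosines for nu < 1)
    sech2 = 1.0 / np.cosh(k * v_bar * np.asarray(V_p, dtype=float) / T) ** 2
    Tmat = (v_bar**2 / (2.0 * T)) * G * sech2[None, :] - np.eye(k)
    return bool(np.all(np.linalg.eigvals(Tmat).real < 0.0))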
§.§ Derivation of the effective free energy of the spins
From the two-dimensional Hamiltonian (Eq. <ref>) we can obtain the effective free energy in the MF approximation, following the procedure that was used for the one dimensional Curie-Weiss model (see for example <cit.>, chapter 13), and was applied to the current Hamiltonian in <cit.> for the case of two targets. Here we extend the derivation to k targets.
The starting point is the partition function for the Hamiltonian from Eq. (<ref>), which is
Z=tr_{σ_i}exp[(k v̅^2/N T)(∑_i=1^Nσ_i p_i)^2],
where here p_i∈{p_1,...,p_k} and we have in the sum an equal number of spins for every target N/k (assuming N ≫ 1), while v̅ is a constant whose dimension is velocity.
Since the system is two-dimensional, let us introduce two auxiliary fields
V⃗≡(V_x,V_y)
and using the Gaussian identity
exp[(a^2+b^2)/(4 c)]=(c/π)∫_-∞^∞ dV_x dV_y exp[-c (V_x^2+V_y^2)+a V_x+b V_y]
where we take
a = 2 k v̅/T∑_i=1^Nσ_i p_i_x
b = 2 k v̅/T∑_i=1^Nσ_i p_i_y
c = N k/T,
we can write the partition function in a form which is linear in σ_i:
Z=(N k/π T) tr_{σ_i}∫_-∞^∞dV_x dV_y exp[-(N k/T) V⃗^2+(2 k v̅/T) V⃗·∑_i=1^Nσ_i p_i].
Then summing over the possible values of the spins we get
tr_{σ_i}exp[(2 k v̅/T) V⃗·∑_i=1^Nσ_i p_i]=∏_i=1^N∑_σ=0,1exp[(2 k v̅/T) V⃗·p_i σ]=∏_i=1^N e^ln[1+exp((2 k v̅/T) V⃗·p_i)]=e^(N/k)∑_i=1^kln[1+exp((2 k v̅/T) V⃗·p_i)],
when we sum over N/k spins for each target and where the last summation is not over individual spins but over the unit vectors towards the targets p_i (i=1,..,k).
Then we can read the free energy per spin F(V⃗,T) by comparison to the general form
Z ∼∫_-∞^∞dV_x dV_y exp[-(N/T) F(V⃗,T)]
and obtain
F(V⃗,T)=k V⃗^2-(T/k)∑_i=1^kln[1+exp((2 k v̅/T) V⃗·p_i)].
§ TRAJECTORY EQUATIONS FOR THREE TARGETS
For a system at a point (x_0,y_0) and three targets at (x_i,y_i)i=1,..,3, we define the following vectors
p_i=(x_i-x_0/√((x_i-x_0)^2+(y_i-y_0)^2), y_i-y_0/√((x_i-x_0)^2+(y_i-y_0)^2)),
and the proportion of “on” spins in each group
n_i=1/[3(1+exp(-6 v̅ V_p_i/T))],
where V_p_i≡V⃗·p_i.
Then we obtain the following set of equations for V_pi (by taking projections of Eq. (<ref>) and using the definitions in Eq. (<ref>)):
V_p1 = v̅[n_1+n_2cosθ̃_12+n_3cosθ̃_13]
V_p2 = v̅[n_1cosθ̃_12+n_2+n_3cosθ̃_23]
V_p3 = v̅[n_1cosθ̃_13+n_2cosθ̃_23+n_3].
Let t̂^0 be the (tangent) velocity unit vector and n̂^0 the perpendicular direction. Then since
(n̂^0·p̂_i)^2=1-(t̂^0·p̂_i)^2
we get the relation
(n̂^0·p̂_i)^2=1-V_pi^2/V⃗^2.
Solving Eqs. (<ref>)-(<ref>) for V_pi and substituting
into Eq. (<ref>) we get the following criterion for stability:
A=1-(v̅^2/2 T)∑^3_i=1 sech^2(3 v̅ V_pi/T)(1-V_pi^2/V⃗^2)>0.
When this condition is violated along the trajectory, we identify a bifurcation point.
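Putting the pieces together, an MF trajectory can be integrated numerically by re-solving the MFSS equations at every position and stopping when the criterion above is violated; the sketch below reuses the mfss_velocity and stability_A helpers introduced earlier, and the step size, warm-starting, and unit-speed normalization are our own illustrative choices.

import numpy as np

def mf_trajectory(targets, x0, v_bar=1.0, T=0.2, dt=0.02, n_steps=5000):
    """Follow one MF branch from x0 and return the path up to its first bifurcation point."""
    targets = np.asarray(targets, dtype=float)
    x = np.array(x0, dtype=float)
    V = None
    path, bifurcation = [x.copy()], None
    for _ in range(n_steps):
        d = targets - x
        p_hats = d / np.linalg.norm(d, axis=1, keepdims=True)
        V, _ = mfss_velocity(p_hats, v_bar, T, V0=V)      # warm-start from the previous step's branch
        if stability_A(V, p_hats, v_bar, T) < 0.0:
            bifurcation = x.copy()                         # the followed solution became unstable here
            break
        x = x + dt * V / max(np.linalg.norm(V), 1e-12)     # move along the MF direction at unit speed
        path.append(x.copy())
    return np.array(path), bifurcation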
§ MF TRAJECTORIES AT HIGH AND LOW TEMPERATURES
We find that the simplest form of the BC appears in the limit T→0 (Fig. <ref>A). In this limit the bifurcation curves are (almost) straight lines that connect two of the three targets. The reason for the BC approaching straight lines becomes clear when looking at the phase diagram for movement along the axis of symmetry for three targets (Fig. <ref>B). When the system moves along the axis of symmetry, the bifurcation occurs at the critical angle θ_crit→π, which corresponds to points along the straight lines that connect the corresponding targets. When the temperature is increased the BC acquire curvature and eventually the backwards outgoing trajectories disappear (Fig. <ref>). At higher temperatures, the “space filling” pattern shrinks even more (Fig. <ref>).
§ LOOPS IN TRAJECTORIES FOR FOUR TARGETS
In order to understand how loops form in trajectories with four targets, let us follow the bifurcation pattern starting from a particular bifurcation point in Fig. <ref>E. In Fig. <ref>A the bifurcation point is denoted by a red dot and the targets by a,b,c,d as seen in Fig. <ref>. The formation of loops in the case of four targets is similar to the formation of the self-similar structure for three targets (Fig. <ref>G). The loop is created as a sequence of bifurcation points that alternate between a compromise of the two upper targets (a and b) and a compromise of the two lower targets (c and d). The bifurcation point in the example of Fig. <ref>A corresponds to a compromise of a and b. There are two outgoing trajectories from this point. One of them corresponds to a decision solution that leads towards target a and terminates there, while the other ends at a new bifurcation point, going downward, which corresponds to a compromise between c and d (the short segment). We can see the new bifurcation point in Fig. <ref>B. Then, from this point, there are two outgoing trajectories, one which corresponds again to a compromise of a and b (the short segment upwards) and a second one that corresponds to a compromise of c and d. In this way a loop is formed (up to a small shift), where in the next step (not shown) we would have a decision towards d or a compromise of a and b.
|
http://arxiv.org/abs/2307.04514v1 | 20230710122050 | Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold | [
"Tuc Nguyen-Van",
"Dung D. Le",
"The-Anh Ta"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold
Tuc Nguyen-Van
Dung D. Le
The-Anh Ta
August 12, 2023
============================================================================================================================================
In graph representation learning, it is important that the complex geometric structure of the input graph, e.g. hidden relations among nodes, is well captured in embedding space.
However, standard Euclidean embedding spaces have a limited capacity in representing graphs of varying structures.
A promising candidate for the faithful embedding of data with varying structure is product manifolds of component spaces of different geometries (spherical, hyperbolic, or Euclidean).
In this paper, we take a closer look at the structure of product manifold embedding spaces and argue that each component space in a product contributes differently to expressing structures in the input graph, hence should be weighted accordingly.
This is different from previous works which consider the roles of different components equally.
We then propose , a data-driven method for learning embedding of heterogeneous graphs in weighted product manifolds.
Our method utilizes the topological information of the input graph to automatically determine the weight of each component in product spaces. Extensive experiments on synthetic and real-world graph datasets demonstrate that is capable of learning better graph representations with lower geometric distortion from input data, and performs better on multiple downstream tasks, such as word similarity learning, top-k recommendation, and knowledge graph embedding.
We provide the source of our implementation at https://github.com/sharecodesubmission/weighted_product_manifold.
§ INTRODUCTION
Representation learning aims to acquire the ability to effectively embed meaningful data into feature spaces <cit.>. In traditional representation learning models, Euclidean embedded spaces have been predominantly utilized. However, the uniform geometric structure of Euclidean spaces has certain limitations when it comes to providing accurate representations for various types of structured data, particularly graphs such as tree structures <cit.> or circular graphs <cit.>. Consequently, there is a growing interest in developing methods that enable the embedding of graph features in non-Euclidean spaces <cit.>.
Real-world data frequently exhibit diverse patterns and complex geometries that cannot be adequately captured by the uniform structures of Euclidean embedding spaces. It has been observed that Euclidean spaces are often insufficient for embedding various types of real-world graph data, such as hierarchical structures that induce negative curvature geometry <cit.>, or circle structures <cit.> that require positive curvature geometry.
Previous research has demonstrated that using spherical embedding spaces instead of Euclidean ones can result in minimal distortion errors when embedding data with circle and ring structures <cit.>. Moreover, models that solely utilize embedding spaces of a single geometric type often struggle to capture mixed structures effectively. These models tend to produce embedding representations with significant geometric distortion compared to the underlying geometry of the input data <cit.>. In contrast, approaches employing product spaces composed of components with different geometries have shown promising results in graph representation learning.
Problem
Current geometric embedding models, as seen in <cit.>, typically employ product spaces with equally weighted components. In this setup, the learnable parameters are fitted to the training data samples across all component spaces in a uniform manner. However, we contend that this approach hinders the robustness of models when learning data with diverse geometric structures.
Specifically, when the input data predominantly exhibit a particular geometric type compared to others, updating all components equally may not be optimal. Instead, it would be advantageous to assign more emphasis to the dominant geometric type during the parameter update process. This would allow the model to better capture and represent the most prevalent geometric structure in the data.
Our approach
To address this issue, we introduce a novel data-driven approach that incorporates a scoring mechanism for each component in the product spaces. This scoring mechanism enables the automatic learning of weights for each component based on the geometric structures present in the input data.
By considering the specific geometric characteristics of the data, our method allows for the construction of flexible and adaptive product spaces. This includes not only updating the weights of the components but also adjusting the geometric curvatures of the spaces.
As a result, our models are capable of effectively capturing and representing the complex geometric structures inherent in the data, leading to improved embedding performance.
Contributions
We summarize our contribution as follows.
Firstly, to the best of our knowledge, this is the first work that considers the structure at each component of product manifold and proposes that each component space contributes differently to expressing various geometric structures in the input graph, hence should be weighted accordingly.
Secondly, we propose , a data-driven method for learning to embed of
heterogeneous graphs in weighted product manifolds.
Thirdly, we conduct extensive experiments on both synthetic and real-world datasets to validate our approach to the various downstream tasks.
§ RELATED WORKS & BACKGROUND
The field of machine learning has witnessed a proliferation of works focusing on learning data representations in non-Euclidean spaces, as evidenced by studies such as <cit.>. However, recent research by <cit.> has highlighted the computational challenges and numerical instability faced by hyperbolic graph convolution networks, particularly in high-dimensional settings. To address this issue, <cit.> proposed a random feature mapping technique that utilizes the eigenfunctions of the Laplace operator to approximate an isometry-invariant kernel on hyperbolic space.
Another notable approach in this area is CurvGAN <cit.>, which introduces a GAN-based graph representation method that preserves the topological properties of discrete structures by approximating them as continuous Riemannian geometric manifolds. However, these methods primarily focus on a single embedding space and may struggle to effectively capture the underlying structure of the input data.
In contrast, the product of spaces has been shown to possess the capability to achieve higher generalization and effectively capture the intrinsic structures of graphs with mixed geometries <cit.>. By combining multiple spaces with different geometric characteristics, the product of spaces approach offers improved representation learning and a more comprehensive understanding of complex data structures.
While several approaches have explored the use of product spaces, few have addressed the challenges associated with defining and updating the component spaces. One such work, Switch Spaces <cit.>, introduces a method that selects a combination of K components from a set of N spaces based on input specifications. It employs a gating mechanism to score and choose subspace components using pairwise relationships in the training data. However, since entities in a graph are not independent and identically distributed (iid), the component spaces selected based on individual input instances may not effectively capture the overall relationships between nodes in the graph. Consequently, Switch Spaces requires embedding spaces with high dimensions (e.g., 100, 500) to achieve competitive performance in various downstream tasks like knowledge graph embedding and recommendation.
Unfortunately, this approach unintentionally sacrifices the advantages offered by non-Euclidean models, which can achieve compactness by requiring smaller dimensions to achieve the same capacity as Euclidean space. In our study, we propose a novel approach that leverages a richer and more robust representation space to capture the diverse geometric structures present in graph data. By enhancing the quality of embeddings, our research complements existing graph-based learning methods and enables more effective representation learning.
Non-Euclidean embedding spaces
Non-Euclidean representation learning has emerged as a powerful approach, delivering state-of-the-art performance across diverse tasks. Specifically, hyperbolic space has proven effective in tasks such as network embedding <cit.>, recommendation systems <cit.>, and knowledge graphs <cit.>. On the other hand, spherical space excels in modeling directional similarity and data with cyclical structures <cit.>. Each of these spaces possesses unique geometric features, and the selection of an appropriate embedding space should be guided by the inherent structure of the data. By leveraging the most suitable embedding space, we can effectively capture the intrinsic properties and relationships within the data, leading to superior performance across a wide range of applications.
Product manifold
Product manifolds are constructed by combining embedding spaces with different geometric types, such as Euclidean, hyperbolic, and spherical spaces. In the context of representation learning, the concept of product spaces was introduced in <cit.>, where each component of the product space has a constant curvature. The curvature of the product space is determined by the sum of curvatures of its individual components <cit.>, resulting in a constant curvature overall. This property enables product spaces to capture a wide range of curvatures with lower distortion compared to a single space <cit.>. As a result, product spaces are particularly well-suited for real-world data that exhibit mixtures of geometric structures.
For example, <cit.> developed a Mixed-curvature Variational Autoencoder, which efficiently trains a VAE with a latent space consisting of a product of constant curvature Riemannian manifolds. Additionally, the heterogeneous structure present in user-item interaction graphs can be effectively learned by utilizing product spaces with different curvature components <cit.>.
Distortion error of embedding
Given metric spaces U and V equipped with distances d_U and d_V respectively, an embedding is a continuous and injective mapping f: U → V. To evaluate the quality of an embedding, we use the average distortion metric D_avg(f), which calculates the average distortion over all pairs of points. Distortion between a pair of points a and b is defined as |(d_V(f(a), f(b))/d_U(a, b))^2 - 1|.
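As a reference implementation of this metric, the following sketch computes D_avg from precomputed pairwise distance matrices (graph distances and embedding distances); the function name and the assumption that both matrices are dense and the graph is connected are ours.

import numpy as np

def average_distortion(d_graph, d_embed):
    """Average distortion D_avg over all pairs of distinct points."""
    iu = np.triu_indices(d_graph.shape[0], k=1)
    ratios = (d_embed[iu] / d_graph[iu]) ** 2
    return float(np.mean(np.abs(ratios - 1.0)))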
§ PROPOSED METHOD:
In this section, we present our approach to learning the weights between sub-geometries with different curvatures in the product of embedding spaces. Our objective is to ensure that the curvatures of the graph embedding spaces closely match the curvatures of the graph itself. To accomplish this, we introduce a novel gating mechanism that assigns a score to each component space.
Motivated from the coarsening approaches <cit.>, we designed gating mechanism to leverage the message-passing of information across various regions of the input graph, enabling the extraction of topology information. Our gating mechanism divides the graph into multiple parts, where each sub-graph is predominantly characterized by a specific type of geometry, such as a tree or cycle structure.
For example, in a graph consisting of a ring of trees where the tree structure dominates, we assign higher scores to hyperbolic components in the product space compared to spherical components. This choice is made to improve the quality of the embeddings produced.
By applying this gating mechanism and adjusting the weights between the different sub-geometries, we aim to achieve a more accurate representation of the graph's underlying structures, resulting in improved embedding results.
Problem formulation
Given three types of geometry: Euclidean (𝔼), Hyperbolic (ℍ), and Spherical (𝕊).
Let ℳ_1, ℳ_2, …, ℳ_N be N component spaces, where each ℳ_i is of one geometric type among {𝔼, ℍ, 𝕊} and has dimension b_i.
The goal of our approach is to learn, from the input graph data, the score 𝐰 = (w_1, …, w_N) ∈ℝ^N on each component of the product manifold embedding space, in such a way that the embedding of the input graph into P = w_1 ℳ_1 × w_2 ℳ_2 ×…× w_N ℳ_N has the lowest possible geometric distortion.
§.§ Coarsening input graph data
Hierarchical pooling layers
Consider an input graph 𝒢 with n > 0 nodes, adjacency matrix 𝐀∈{ 0, 1}^n × n, and node features 𝐗∈𝐑^n × d.
The matrix 𝐀 represents the graph structure: 𝐀(i, j) = 1 if there is an edge connecting nodes i and j, otherwise 𝐀(i, j) = 0.
D is the diagonal degree matrix of the graph 𝒢, where D_ii = ∑_j 𝐀_ij.
We use hierarchical pooling-based GCNs to learn cluster assignments.
There are two GCNs with two different sets of parameters in this module.
At each layer l, the soft cluster assignment matrix 𝐒^(l)∈𝐑^n_l-1× n_l is computed as follows:
𝐒^(l) = softmax (GNN_1^l(𝐀^(l-1), 𝐗^(l-1))) with (𝐀^(0), 𝐗^(0)) = (𝐀, 𝐗).
Then, we apply the second GNN on 𝐒^(l) to compute the graph representation at layer l:
𝐗^(l) = 𝐒^(l)^T (GNN_2^(l)(𝐀^(l-1), 𝐗^(l-1))) and 𝐀^(l) = 𝐒^(l)^T 𝐀^(l-1)𝐒^(l).
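A minimal PyTorch sketch of one such pooling step is given below; the class name is ours, and the two GNNs are stood in for by single linear layers applied to A X, so it illustrates the assignment/coarsening computation rather than the exact architecture used in our experiments.

import torch
import torch.nn as nn

class PoolingLayer(nn.Module):
    """One hierarchical pooling step: GNN_1 yields the soft assignment S, GNN_2 the pooled features."""
    def __init__(self, d_in, d_out, n_clusters):
        super().__init__()
        self.gnn1 = nn.Linear(d_in, n_clusters)   # stand-in for GNN_1
        self.gnn2 = nn.Linear(d_in, d_out)        # stand-in for GNN_2

    def forward(self, A, X):
        AX = A @ X                                     # simple neighbourhood aggregation
        S = torch.softmax(self.gnn1(AX), dim=-1)       # soft cluster assignment, n_{l-1} x n_l
        X_next = S.transpose(-2, -1) @ self.gnn2(AX)   # coarsened node features
        A_next = S.transpose(-2, -1) @ A @ S           # coarsened weighted adjacency
        return A_next, X_next, S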
Coarsening input graph
The hierarchical pooling layer produces a coarsened graph with m < n nodes, a weighted adjacency matrix A' ∈ℝ^m × m, and node embeddings Z' ∈ℝ^m × d.
This process is then repeated L times, resulting in a GNN model with L layers that operate on the input graph and a series of coarser versions of it.
The soft assignment matrix S^(l) assigns each node at layer l to a cluster at the next layer l+1.
In other words, each row of S^(l) corresponds to one of the n_l nodes or clusters at layer l, while each column of S^(l) corresponds to one of the n_l+1 clusters at layer l+1.
In our approach, we treat the number of clusters as a hyperparameter and set n_l+1 = N, where N is the number of components in the product space P.
Each row of S^(l) shows the degree of membership of a node to each component space in P.
Attention pooling
We use an attention mechanism, with the matrix 𝐒^(l) as input, to obtain the influence vector for each subspace.
Consider the matrix 𝐒 in the form 𝐒 = [𝐡_1, 𝐡_2, ... , 𝐡_N ], with 𝐡_t ∈ℝ^d, and a trainable vector 𝐔∈ℝ^d.
Self attention:
We define a relevant scalar weight for each element of the sequence through a softmax layer as follows: w_t = softmax(𝐡_t^T 𝐔).
Given the set of weights over all the elements of the sequence, we can then obtain the pooled representation as the weighted average of the hidden states
s = ∑_t = 1^N 𝐡_t^T 𝐰_t.
Multi-head self attention:
Considering a number of k heads for the multi-head attention, 𝐡_t = [𝐡_t1, 𝐡_t2, …, 𝐡_tk] where 𝐡_tj∈ℝ^d/k and size of each head is d/k.
In the same sense, we have a trainable parameter 𝐔 =[𝐮_1 𝐮_2 …𝐮_k] where 𝐮_j ∈ℝ^d/k.
A separate attention is then applied over each head of the encoded sequence through a softmax function:
w_tj = softmax(𝐡_tj^T 𝐮_j), where w_tj corresponds to the attention weight of head j on element t.
A soft weight representation for each subspace is computed as follows:
s_j = ∑_t=1^N 𝐡_tj^T 𝐰_tj.
This method allows a multi-head self-attention network to extract different kinds of information over different regions of the self-attention network.
In the end, 𝐬∈ℝ^N represents the average weight of N component spaces in the product manifold P over the n_l clusters.
§.§ Objective function
Let 𝐬∈ℝ^N be the weight vector of N components based on the data's local geometry information.
The distance between x_i, x_j ∈ P is computed following d_P^2(x_i, x_j) = ∑_k = 1^N 𝐬_k dist^2 (x_i^k, x_j^k).
Then the base objective ℒ_base is defined as:
ℒ_base = ∑_1 ≤ i < j ≤ n|(d_P(x_i, x_j)/d_G(X_i, X_j))^2-1|
Finally, the total average distortion objective function is defined as ℒ = ℒ_base + ℒ_aux,
where ℒ_aux = ℒ_LP + ℒ_e is a combination of the link prediction loss (ℒ_LP) and the entropy regularization loss (ℒ_e).
More precisely, ℒ_LP = 𝐀^(l) - 𝐒^(l)𝐒^(l)^T_F at each layer l, where ·_F denotes the Frobenius norm; and
ℒ_e=1/n∑_i=1^n H(𝐒_i) where H(𝐒_i) is the entropy of the row i^th in matrix 𝐒.
Minimizing ℒ_LP means enforcing close nodes to be pooled together, while
minimizing ℒ_e makes the output cluster assignment for each node close to a one-hot vector so that the membership for each cluster is clearly defined.
Our total average distortion loss ℒ is optimized with Algorithm <ref>.
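To make the weighted-distance objective concrete, the following PyTorch sketch evaluates ℒ_base for a batch of node pairs given per-component embeddings, the learned weights 𝐬, and per-component distance functions; all names are ours, and the per-component distance functions (Euclidean, hyperbolic, spherical) are assumed to be supplied.

import torch

def weighted_product_sq_distance(x_i, x_j, s, component_dists):
    """Squared distance in the weighted product manifold: sum_k s_k * dist_k(x_i^k, x_j^k)^2."""
    return sum(s[k] * component_dists[k](x_i[k], x_j[k]) ** 2 for k in range(len(component_dists)))

def base_loss(emb, s, component_dists, d_graph, pairs):
    """Average-distortion objective L_base over a set of node pairs (i, j).

    emb : list of per-component embedding tensors (one tensor of node coordinates per component);
    d_graph : matrix of pairwise graph distances."""
    loss = 0.0
    for i, j in pairs:
        d2 = weighted_product_sq_distance([e[i] for e in emb], [e[j] for e in emb], s, component_dists)
        loss = loss + torch.abs(d2 / d_graph[i, j] ** 2 - 1.0)
    return loss / len(pairs)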
§.§ Physical meaning of subspace weights
In manifold representation learning, the goal is to embed data into appropriate embedding spaces where the curvature of the embedding matches the curvature of the original data. In the case of a product manifold, each data point is partially embedded in different subspaces with varying curvatures.
Our work explores the relationship among the curvatures of all the subspaces and introduces a partial update mechanism for the embedding space based on their respective influence scores. In the importance score box of Model Architecture (Figure <ref>), if the input data is predominantly characterized by hierarchical structures, the importance score of the hyperbolic embedding component (s_2) will receive a larger value compared to the others (s_1 and s_3).
In Algorithm <ref>, we update the subspaces' curvatures and the embedding itself. The higher the curvature embedding scores, the more effort is required to minimize them. As a result, the negative curvature loss should contribute more to the overall loss, leading to more active updates of the embedding spaces associated with negative curvature compared to the other spaces. This ensures that the embedding adapts to the data's curvature characteristics and effectively captures the underlying structures.
§ EXPERIMENTS
This section presents our experimental evaluation of the proposed model's performance across various learning tasks. We begin by evaluating the model's effectiveness in improving graph reconstruction, as described in section <ref>.
Following this, we apply our framework to four downstream tasks: recommendation systems, knowledge graph embedding, node classification, as well as graph classification, and word similarity tasks.
§.§ Graph reconstruction
We perform experiments on both synthetic and real-world datasets to evaluate the performance of our proposed model.
More information on baselines and metrics is shown in Appendix <ref>.
Model performance on synthetic datasets
Table <ref> shows the average distortion (D_avg) of our model on the three synthetic graphs. When d = 3, achieves D_avg = 0.104 with the product manifold s_1 ℍ^2×s_2 𝕊^1.
Meanwhile, without any constraints in subspace curvatures (PM <cit.>), the distortion measure of ℍ^2×𝕊^1 on the Cycle graph is 0.11.
Overall, for all three synthetic graphs, our proposed model improves upon the main contender method PM from <cit.> by 5.4 %, 16.3 %, and 18.6 %, respectively (Table <ref>).
A similar trend continues in the higher dimension d=5, where our proposed method improves upon the baseline by 17.3%, 3.3% and 11.9%, respectively (Table <ref>).
Model performance on benchmark datasets
We first employ a single space to generate embedding representations for each dataset in order to explore its intrinsic geometry.
Based on these observations, we develop heuristics for the dataset characteristics and utilize them to select the component in the model space product.
Then, the learning process optimizes the curvature of each subspace according to the dominant graph structure.
Figure <ref> presents the average distortion D_avg of embeddings into single model spaces for three complex benchmark datasets, as the number of embedding dimensions increases within the range of [5, 100].
We can see that, with the Cs PhDs and Power dataset, D_avg is smaller in hyperbolic space than in spherical space when d<50, indicating that the hyperbolic space should be considered in the general product space.
Similarly, the Cities dataset exhibits a more spherical structure than other geometric properties, and thus components of positive curvature should be used.
Table <ref> reports the performance of our model on the benchmark datasets.
Unlike the results obtained from the synthetic dataset, the best results are predominantly obtained when learning with the product manifolds.
This phenomenon is attributed to the more complex structure of real-world data compared to synthetic ones.
Specifically, the Power graph dataset has a hierarchical and cyclical structure that can be embedded effectively into any space with a hyperbolic and spherical component.
Our proposed model outperforms the main baseline PM <cit.> in all cases.
With embedding dimension d = 10, our model achieves the best distortion on the three datasets.
Specifically, in the Cs PhDs dataset, the percentage of improvements in terms of D_avg is 15.6 %.
In the Power dataset, with the soft gating mechanism, our model improves the distortion over the product-of-spaces model by 28.4 %.
For d = 50, the corresponding improvements in average distortion (D_avg) over the uniform product of spaces (PM) of <cit.> are 19.3 % and 13.9 %, respectively.
Furthermore, Table <ref> shows that for a distortion of 0.0231 in the product space ℍ^5 ×𝕊^5 with the Power dataset, our method determines that the optimally weighted product manifold for embedding the dataset is 0.83 ℍ^5 × 0.16 𝕊^5. The ratio between the hyperbolic and spherical components is approximately 5:1, indicating the greater importance of the hyperbolic components compared to the spherical ones.
In contrast, the uniform product embedding space PM of <cit.> assumes that each component space contributes equally to learning representations in the product of spaces.
Our method , on the other hand, captures the relations among all sub-geometries of different curvatures in the product manifold, depending on the geometry of the input graph data, leading to better performance than using the uniform product of spaces (PM) without a scoring mechanism. Our proposed method has an advantage in discovering general models with suitable geometry in the product manifold. Notably, we also observe that the mAP measures are not consistently better than those of the uniform product model spaces <cit.> when D_avg decreases.
§.§ on Knowledge Graph Embedding
Knowledge graphs (KGs) are a fundamental tool for representing information and have a wide range of applications, including question answering and web search <cit.>.
However, KGs are often highly incomplete, which poses a significant challenge for downstream use.
The goal of our approach is to address this issue by inferring missing facts in the KGs using entity and relation embedding techniques to map them to appropriate spaces.
In this section, we propose using the product of manifolds with a gating mechanism to represent the relations between entities in the KGs.
Detailed experimental scenario is shown in Appendix <ref>.
Model performance
Table <ref> reports the performance of various methods on two knowledge graphs.
To enable a fair comparison, we set the total embedding dimension to 64, which is a common practice in non-Euclidean embedding due to its ability to provide more compact spaces than Euclidean embeddings.
Our proposed model achieves superior performance over the baselines on the knowledge embedding graph, highlighting its effectiveness in learning informative representations of the data.
§.§ on node classification and link prediction
In this section, we evaluate the performance of our proposed model on node and graph classification tasks.
Hyperbolic GCN <cit.> uses message-passing on the hyperbolic tangent space for graph convolutional networks (GCNs).
However, our proposed model replaces the hyperbolic space with and applies message passing in the tangent of the product spaces.
We further introduce δ <cit.> which is used to evaluate the degree of tree-likeness of a graph by evaluating its graph distance metric.
The value of δ ranges from 0 to half of the graph diameter, with trees having δ = 0, while "circle graphs" and "grid graphs" have a larger δ, approximately half of their diameters.
Further details on the metrics, datasets, and baselines used in our experiments can be found in Appendix <ref>.
Model performance
Table <ref> presents the F1 and AUC scores for the link prediction and node classification tasks.
Notably, the DISEASE and AIRPORT datasets exhibit high hyperbolicity (δ = 0 and 1, respectively), where the performance of using the product of hyperbolic space surpasses that of using the product of mixture curvatures.
This is because the unified product of curvature fails to differentiate the primary intrinsic graph structure and instead adapts equally to spaces that do not align with the graph's topology.
Our proposed extension addresses this issue by incorporating a weighting mechanism that identifies the dominant embedding manifold most influenced by the underlying structure of the graph data, leading to improved results in both link prediction and node classification for these two datasets.
§.§ on Recommendation Systems
In this section, we evaluate the performance of our proposed model on the recommendation task. Specifically, we apply to replace the hyperbolic space in metric learning recommendation (HyperML <cit.>). Detailed information on baselines, datasets and metrics can be seen in Appendix <ref>.
Objective function
In HyperML <cit.>, the push-pull loss is proposed to learn the metric between the positive and negative items.
The overall objective is defined as ℒ = ℒ_P + γℒ_D,
where pull-push loss ℒ_P and distortion loss ℒ_D are defined as:
ℒ_P = ∑_(i, j) ∈𝕊∑_(i, k) ∉𝕊 [m + d^2_𝔻(i,j) - d^2_𝔻(i,k)]_+,
ℒ_D = ∑_(i, j) ∈𝕊[|d_𝔻(f(i), f(j)) - d_𝔼(i, j)|/d_𝔼(i, j)]_+ + ∑_(i, k) ∉𝕊[|d_𝔻(f(i), f(k)) - d_𝔼(i, k)|/d_𝔼(i, k)]_+,
where |z|_+ = max(0, z), m > 0 is the margin size (m = 0.4 in this paper),
and f(.) is a mapping function f: 𝔼→𝔻 (f is the identity in <cit.>), γ is the multi-task learning weight and 𝕊 is the set of positive user-item pairs.
We use the same loss function as in <cit.>, with a difference in the distance on 𝔻. Specifically, we compute the distance d between two embeddings in the product of model spaces.
Model performance
Table <ref> reports the H@10 and N@10 scores for two different datasets, considering the number of factors d ∈{32, 64}.
Our experiments demonstrate that, overall, CML and HyperML achieve better results with the weighted product manifolds than with the hyperbolic space alone, highlighting the advantage of using weighted sub-manifolds to model the distance between users and items.
§.§ Performance on word similarity task
We evaluated our model's performance on applications that require an understanding of the underlying manifold structure. To conduct our experiment, we trained word embeddings on the Word Similarity (WS-353) benchmark dataset, following the methodology established in previous works such as <cit.>. Our implementation is based on hyperbolic skip-gram embeddings from <cit.>.
Setup
For our setup, we utilized the standard skip-gram model <cit.> and extended the loss function to a generic objective suitable for arbitrary manifolds, using a variant of the objective used in <cit.>.
Specifically, given a word u and a target w with label y=1 if w is a context word for u and y=0 if it is a negative sample, our model is represented by P(y | w, u)=σ((-1)^1-y(-cosh(d(α_u, γ_w))+θ)).
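A minimal sketch of this objective (ours, not the reference implementation; it assumes a precomputed manifold distance d(α_u, γ_w) and a learned threshold θ) is:

# Sketch (illustrative): log-probability of a (target, context) pair under the
# objective above, where `dist` is the manifold distance d(alpha_u, gamma_w).
import torch
import torch.nn.functional as F

def log_prob(dist, theta, y):
    score = -torch.cosh(dist) + theta
    sign = 2.0 * y - 1.0        # maps y in {0, 1} to {-1, +1}, i.e. (-1)^(1 - y)
    return F.logsigmoid(sign * score)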
Word similarity
To measure the effectiveness of our model, we evaluated its performance on the WS-353 dataset using the Spearman rank correlation ρ between our scores and annotated ratings.
We obtained the dataset from <cit.>, and the results of our experiment are presented in Table <ref>.
Our model outperformed the hyperbolic word embeddings of <cit.> and the product space (PM) in all dimension settings.
§ CONCLUSIONS
Real-world data often possess intricate geometric structures that are challenging to capture by embedding into spaces with uniform curvature.
To address this issue, we propose a method that partially extracts the topology information from the input data to update the embedding vectors and curvature of each subspace.
Our motivation is that graphs are constructed by combining simple structure topologies, such as trees, cycles, and stars.
Our approach introduces a data-driven method of weighted product spaces for learning better representations.
Our empirical experiments on synthetic and real-world datasets demonstrate that our framework enhances the embedding quality of input graphs with varying structures and improves the performance of the downstream tasks.
§ ADDITIONAL BACKGROUND
Riemannian Geometry
Let ℳ^n be an n-dimensional smooth manifold, which is locally approximated at each point p ∈ℳ by its n-dimensional Euclidean tangent space T_pℳ.
The pair (ℳ, g) is called a Riemannian manifold if ℳ is equipped with a smoothly varying positive-definite metric tensor g.
Geodesics are the locally shortest paths on the manifold, and distances on a Riemannian manifold are computed by integrating the metric tensor g along geodesics.
The exponential map exp_p: T_p ℳ→ℳ and the logarithmic map log_p: ℳ→ T_p ℳ are two commonly used (in general only locally defined) mappings between the manifold and its tangent spaces.
A formal introduction to Riemannian manifolds can be found in <cit.>.
Product manifolds
Consider a sequence of smooth Riemannian manifolds ℳ_1, ℳ_2, …, ℳ_k.
Each ℳ_i can be a space of positive (spherical), zero (Euclidean), or negative (hyperbolic) curvature.
The product manifold is defined as the Cartesian product ℳ = ℳ_1 ×ℳ_2 ×…×ℳ_k.
We write a point p ∈ℳ through their coordinates p=(p_1, …, p_k), p_i ∈ℳ_i. Similarly, a tangent vector v ∈ T_p ℳ can be written as (v_1, … , v_k) : v_i ∈ T_p_iℳ_i.
Gradient descent on manifolds requires the notion of taking steps.
This step can be performed in the tangent space and transferred to the manifold via the logarithmic map, and exponential map <cit.>.
The product space is also equipped with a distance function. The squared distance between points x, y ∈ℳ is defined as: d_P^2(x, y)=∑_i=1^k d_i^2(x_i, y_i).
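For concreteness, a small numerical sketch of these distances (ours, assuming unit-magnitude curvature for the spherical and hyperbolic factors) is given below; spherical points are unit vectors and hyperbolic points live on the unit hyperboloid.

# Sketch (illustrative): geodesic distances for the three model spaces and the
# resulting product-space distance.
import numpy as np

def dist_euclidean(x, y):
    return np.linalg.norm(x - y)

def dist_sphere(x, y):
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def dist_hyperboloid(x, y):
    minkowski = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return np.arccosh(np.clip(-minkowski, 1.0, None))

def product_distance(x_parts, y_parts, dist_fns):
    return np.sqrt(sum(f(xi, yi) ** 2 for f, xi, yi in zip(dist_fns, x_parts, y_parts)))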
§ CURVATURE ESTIMATION ON GRAPH DATA
Curvature estimation on simple graphs
There are three commonly used definitions for local graph curvature: Ollivier-Ricci <cit.>, Forman-Ricci <cit.>, and sectional curvature <cit.>.
In this paper, we use sectional curvature for estimating the geometric structures of graphs.
Sectional curvature is determined by geometric triangle properties as follows.
Theorem 1: Recall from <cit.> that on a constant-curvature geometric space, if abc is a geodesic triangle and m is the midpoint of bc, then d(a,m)^2 + d(b,c)^2/4 - (d(a,b)^2 + d(a,c)^2)/2 is equal to zero when the underlying space is Euclidean, positive when it is spherical, and negative when it is hyperbolic.
Proof:
We prove Theorem 1 in the Euclidean case. Write x = d(a,m), z = d(b,m) = d(m,c) = d(b,c)/2, y = d(a,b), and t = d(a,c), and let α_1, α_2 denote the angles ∠ amb and ∠ amc at the midpoint m.
A = d(a,m)^2 + d(b,c)^2/4 - (d(a,b)^2 + d(a,c)^2)/2
= x^2 + z^2 - y^2/2 - t^2/2
= 1/2 (2x^2 + 2z^2 - y^2 - t^2)
= 1/2 [(x^2 + z^2 - y^2) + (x^2 + z^2 - t^2)]
= 1/2 [2xz cosα_1 + 2xz cosα_2]
= xz (cosα_1 + cosα_2),
where the second-to-last equality applies the law of cosines [https://en.wikipedia.org/wiki/Law_of_cosines] to the triangles abm and acm.
We have three cases:
* cosα_1 + cosα_2 = 0: α_1 and α_2 are supplementary angles, α_1 + α_2 = 180°, which is exactly the Euclidean case, so A = 0.
* Similarly, A is negative in hyperbolic space and positive in spherical space.
Curvature estimation on graph data
Given Theorem 1, let v be a node in G, let b, c be neighbors of v, and let a be any other node.
Then the sectional curvature of the node v with respect to its neighbors b, c is defined as 1/(|V|-3) ∑_a ∈ G ∖{v, b, c}ξ_G(v ; b, c ; a), where
ξ_G(v ; b, c ; a) = 1/(2 d_G(a, v)) (d_G(a, v)^2 + d_G(b, c)^2/4 - (d_G(a, b)^2 + d_G(a, c)^2)/2),
and the prefactor 1/(2 d_G(a, v)) is included to yield the right scalings for trees and cycles.
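A direct implementation of this estimator (our sketch, not the authors' code) only needs the graph's shortest-path distances:

# Sketch (illustrative): the curvature statistic xi_G(v; b, c; a) and its average
# over reference nodes a, computed from shortest-path distances.
import networkx as nx

def xi(d, v, b, c, a):
    return (1.0 / (2.0 * d[a][v])) * (
        d[a][v] ** 2 + d[b][c] ** 2 / 4.0 - (d[a][b] ** 2 + d[a][c] ** 2) / 2.0
    )

def sectional_curvature(G, v, b, c):
    d = dict(nx.all_pairs_shortest_path_length(G))
    others = [a for a in G.nodes() if a not in (v, b, c)]   # |V| - 3 reference nodes
    return sum(xi(d, v, b, c, a) for a in others) / len(others)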
Next, we estimate the curvature of some typical topology graph structures.
Star 𝐒_n is created from one central node and n leaves. For n ≥ 3, the local curvature at the center node v with respect to two neighbors b, c is -1.
Tree 𝐓_b with branching factor b ≥ 2 is a finite-depth tree. The sectional curvature on the tree lies in the range ξ(T) ∈ [-1, 0].
Cycle graph 𝐂_n with n ≥ 4. If n is even, then ξ_C_n(v; b,c;a) = 0 for all nodes a except the one diametrically opposite to v, for which ξ_C_n(v; b,c;a) = 1.
If n is odd, then for two nodes a we have ξ_C_n(v; b,c;a) = n/(2(n-1)), and it is zero otherwise.
As a result, ξ(C_n) = 1/(n-3) for even n and ξ(C_n) = n/((n-1)(n-3)) for odd n.
Distortion error on simple graphs
We have demonstrated the limitations of using a single curvature approach to embed graphs with varying topologies.
To investigate the impact of curvature spaces on the quality of embedding spaces, we conducted experiments on three synthetic datasets with specific structures, including trees, circles, and rings of trees (Table <ref>).
Figure <ref> shows the distortion error results for Cycle and Tree graphs.
Our findings suggest that different graph structures require corresponding curvature spaces for optimal embedding quality.
For instance, spherical space (positive curvature) provides the least distortion error for cycle-like datasets (from 𝐒_3 to 𝐒_50), while hyperbolic spaces (negative curvature) give a minimal error for tree-like datasets (from 𝐇_3 to 𝐇_50).
All three models show some advancements compared to others in certain cases.
However, the overall distortions achieved are significantly higher than when using hyperbolic space with tree-like or spherical space with circle-like data.
For example, the distortion error on the Cycle tree is 0.09 compared to 0.02 on H_10 with Cycle data and 0.042 on S_5 with simple Tree data.
Therefore, using a product of individual spaces can improve the accuracy of embedding data with a mixture of structures.
§ ADDITIONAL EXPERIMENTAL RESULTS
§.§ Graph reconstruction task
Datasets
The synthetic datasets we use are small graphs with 40 nodes that are designed to have specific geometric structures, including a circle, a tree, and a ring of trees.
To assess the effectiveness of our approach on larger and more complex graphs, we also use three benchmark datasets: CsPhD <cit.>, Power <cit.>, and Cities <cit.>.
The Cities dataset consists of 1025 nodes and 1043 edges, while the Power dataset contains 4941 nodes and 6594 edges. Additionally, the CsPhD dataset has 312 nodes and 48516 edges.
Baselines
We compare the distortion error of node embeddings on both synthetic and benchmark datasets between our proposed model and the product spaces (PM) <cit.> method.
Metrics
We use two standard metrics to measure the quality of embeddings: average distortion D_avg and mean average precision mAP.
D_avg is a global metric that considers all the exact distance values.
Let G = (V, E) be a graph and node a ∈ V have a neighborhood 𝒩_a = b_1, ⋯, b_deg(a), where deg(a) is the degree of a.
In the embedding f, define R_a, b_i to be the smallest ball around f(a) that contains b_i, which means R_a, b_i is the smallest set of nearest points required to retrieve the i^th neighbor of a in f.
Thus, mAP = 1/|V| ∑_a ∈ V 1/deg(a) ∑_i = 1^|𝒩_a| |𝒩_a ∩ R_a, b_i| / |R_a, b_i|. mAP is a ranking-based measure for local neighborhoods, and it does not track exact distances like D_avg does.
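The two metrics can be computed directly from the graph and an embedding-distance function. The sketch below is ours; the distortion formula is one common variant of D_avg and is stated here as an assumption, since the exact form is not reproduced above.

# Sketch (illustrative): mAP and an average-distortion variant, given a graph G
# and a function emb_dist(u, v) returning the distance between embedded nodes.
import networkx as nx

def mean_average_precision(G, emb_dist):
    total = 0.0
    for a in G.nodes():
        nbrs = set(G.neighbors(a))
        if not nbrs:
            continue
        others = [v for v in G.nodes() if v != a]
        prec = 0.0
        for b in nbrs:
            ball = {v for v in others if emb_dist(a, v) <= emb_dist(a, b)}  # R_{a,b}
            prec += len(nbrs & ball) / len(ball)
        total += prec / len(nbrs)
    return total / G.number_of_nodes()

def average_distortion(G, emb_dist):
    d = dict(nx.all_pairs_shortest_path_length(G))
    pairs = [(u, v) for u in G.nodes() for v in G.nodes() if u != v]
    return sum(abs(emb_dist(u, v) - d[u][v]) / d[u][v] for u, v in pairs) / len(pairs)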
§.§ Additional information for Recommendation task
Metrics
We use two measures Hit Ratio (H) <cit.> and Normalized Discounted Cumulative Gain (N) <cit.> to examine the predictive ability of these models.
The final H@k and N@k are averaged on all users' H@k and N@k scores.
We choose k = 10 to evaluate the model.
Datasets We perform experiments on two popular datasets, MovieLens-1M and LastFM-20K. The LastFm dataset <cit.> is obtained from a music website[http://millionsongdataset.com/lastfm/]. It is preprocessed to have 1892 users and 17632 music records. The MovieLens-1M is created from 6040 users and 3706 movies.
Baselines
We consider the following works as baselines for our model: CML <cit.> and HyperML <cit.>.
Specifically, CML <cit.> investigates the relationship between metric learning and collaborative filtering.
It proposes a method that learns a joint metric space capable of encoding not only users' preferences but also the similarity between users and items.
HyperML <cit.> presents the connection between metric learning in hyperbolic space and collaborative filtering by exploring hyperbolic geometry.
HyperML-PM is our extension of HyperML in the product of model space.
HyperML-WPM (Our) is our extension of HyperML in the product of model spaces with the gating mechanism.
§.§ Additional information for Knowledge graph embedding
Metrics
The performance of various models is evaluated using two standard metrics: mean reciprocal rank (MRR) and hit rate (HR@3).
Datasets
We used two standard datasets, WN18RR <cit.> and FB15K-237 <cit.>, for our analysis. WN18RR is derived from WordNet, a lexical database of semantic relations between words. FB15K-237 is a subset of the Freebase knowledge graph, which is a comprehensive resource containing general information.
Table <ref> shows the statistics of the two datasets.
Objective function
Consider a knowledge graph 𝒢 with a set of entities ℰ and a set of relations ℛ. Each triplet (h,r,t) ∈𝒢 consists of a head entity h, a tail entity t, and the relation r ∈ℛ between them.
Prior works propose RotE <cit.> in Euclidean space and RotH <cit.> in hyperbolic space. In this work, we extend this construction to the product of different curvature spaces. Formally, the entities h, t are represented by vectors 𝐞_h, 𝐞_t ∈ℝ^b, and the relation r is represented by two translation vectors α_r, β_r ∈ℝ^b and a rotation vector γ_r ∈ℝ^b. The head entity is translated twice via the Möbius addition operation and rotated once.
Q(h,r)= Rot(exp_0^c(𝐞_h) ⊕_c exp_0^c(α_r), γ_r) ⊕_c exp_0^c (β_r)
where c > 0, exp_0^c is the exponential map at the origin, and Rot is a rotation function parameterized by the rotation vector γ_r.
According to the above definition, for each triple (h,r,t), we define the distance function as:
d_r(h, t) = √(d_ℳ_c^2 (Q(h,r), exp_0^c(e_t)))
where ℳ_c is the product of curvature manifolds. For comparison, in <cit.> the distance function of RotatE for the triple (h,r,t) is defined as d_r(h, t) = || h⊙r - t||.
The final negative sampling loss is defined by the cross-entropy loss:
ℒ =∑_(h,r,t) ∈Ωlog(1+ exp(-Y_(h,r,t) d_r(h,t)))
where Y_(h,r,t)∈{1, -1} is a binary label indicating whether a triplet is real or not.
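For a single hyperbolic factor, the scoring function and loss can be sketched as follows. This is our simplified sketch, not the authors' implementation; the Givens-style rotation and the use of the geoopt library are assumptions made for illustration.

# Simplified sketch (not the authors' implementation): scoring a triple in one
# hyperbolic factor following Q(h, r) above, plus the cross-entropy loss.
import torch
import torch.nn.functional as F
import geoopt

ball = geoopt.PoincareBall(c=1.0)

def rotate(x, gamma):
    # rotate consecutive coordinate pairs (x_{2i}, x_{2i+1}) by the angles gamma_i
    shape = x.shape
    x = x.view(*shape[:-1], -1, 2)
    cos, sin = torch.cos(gamma), torch.sin(gamma)
    out = torch.stack([cos * x[..., 0] - sin * x[..., 1],
                       sin * x[..., 0] + cos * x[..., 1]], dim=-1)
    return out.view(shape)

def score(e_h, alpha_r, beta_r, gamma_r, e_t):
    q = ball.mobius_add(ball.expmap0(e_h), ball.expmap0(alpha_r))
    q = ball.mobius_add(rotate(q, gamma_r), ball.expmap0(beta_r))
    return ball.dist(q, ball.expmap0(e_t))          # d_r(h, t)

def loss(d_r, y):
    # y in {+1, -1}: log(1 + exp(-y * d_r))
    return F.softplus(-y * d_r).sum()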
Baselines
RotatE <cit.> is a knowledge graph embedding method used to learn representations of entities and relations in knowledge graphs.
RotatH is the extension of RotatE <cit.> to hyperbolic space.
Product-RotatH is the extension of RotatE to the product of hyperbolic spaces <cit.>.
SwisE <cit.> uses a gating mechanism that is learned to choose the component space for knowledge graph embedding.
-Rotat is our extension, which uses the weighted product of manifolds to represent the relations among entities in the knowledge graph.
§.§ Additional information for Node Classification and Link Prediction
Metrics We utilize ROC AUC as a metric to evaluate the performance of Link Prediction (LP), whereas we rely on the F1 score to assess the Node Classification (NC) performance. In both cases, a higher score indicates better performance.
Datasets In this experiment, we evaluate model performance on the two different benchmark datasets.
DISEASE is the dataset of Infectious diseases from Oxford University <cit.>.
AIRPORT is the dataset of airline routes from OpenFlight.org. Each node represents an airport, and each edge represents an airline route between airports.
Detailed information regarding these datasets is provided in Table <ref>.
Baselines We evaluate the contributions of our proposed model by measuring the F1 and AUC scores on two datasets, compared with five different baseline models:
MLP and Hyperbolic-MLP are two variants of multilayer perceptron (MLP) classifiers operating on the Euclidean (𝐄) and hyperbolic space (𝐇), respectively.
HGCN <cit.> is an extension of graph convolutional networks (GCNs) to hyperbolic geometry.
Product-HGCN <cit.> extends GCNs in the product of hyperbolic geometries.
Mix-GCN <cit.> extends GCNs in the product of hyperbolic, spherical, and Euclidean spaces.
Our proposed model (-GCN) extends GCNs with a gating mechanism in the product of different curvature spaces (H, E, S).
|
http://arxiv.org/abs/2307.04882v1 | 20230710200454 | Word length versus lower central series depth for surface groups and RAAGs | [
"Justin Malestein",
"Andrew Putman"
] | math.GR | [
"math.GR",
"math.GT"
] |
Word length versus lower central series depth for surface groups and RAAGs
Dept of Mathematics; University of Oklahoma; 601 Elm Ave; Norman, OK 73019
[email protected]
Dept of Mathematics; University of Notre Dame; 255 Hurley Hall; Notre Dame, IN 46556
[email protected]
JM was supported in part by a Simons Foundation Collaboration Grant 713006.
For surface groups and right-angled Artin groups, we prove lower bounds on the shortest word in the generators
representing a nontrivial element of the k^th term of the lower central series.
§ INTRODUCTION
Let G be a group and let γ_k(G) be its lower central series:
γ_1(G) = G and γ_k+1(G) = [γ_k(G),G] for k ≥ 1.
If γ_k+1(G) = 1, then G is at most k-step nilpotent. Let S be a finite generating set
for G.
What is the shortest word in S^± 1 representing
a nontrivial element in γ_k(G)? What are the asymptotics of the length of this word
as k →∞?
The asymptotic question is only interesting for non-nilpotent groups. It is also natural
to only consider groups that are residually nilpotent, i.e., such that
⋂_k=1^∞γ_k(G) = 1
Let G be a non-nilpotent residually nilpotent group with a finite generating set S. Define for g ∈ G its associated word norm:
g_S = min{ℓ : g can be written as a word of length ℓ in S^± 1}.
The lower central series depth function
is the following function
d_G,S: ℕ→ℕ, defined by
d_G,S(k) = min{g_S : g ∈γ_k(G), g ≠ 1}.
Though d_G,S(k) depends on the generating set S, its asymptotic behavior as k →∞ is independent
of S. Our goal in this paper is to find bounds on d_G,S(k) for
several natural classes of groups G.
§.§ Free groups
For n ≥ 2, let F_n be the free group on
S = {x_1,…,x_n}. These are the most fundamental examples
of groups that are residually nilpotent but not nilpotent <cit.>, and both lower and upper bounds on d_F_n,S(k) have been studied:
* Using the free differential calculus, Fox <cit.> proved that
d_F_n,S(k) ≥1/2 k for k ≥ 1.
In <cit.>, the authors improved this to d_F_n,S(k) ≥ k.
* In <cit.>, the authors proved that
d_F_n,S(k) ≤1/4(k+1)^2. Elkasapy–Thom <cit.> later
improved this to a bound that grows like k^c with
c = (log_2(3+√(17))-1)/log_2(1+√(2)) ≈ 1.4411.
The growth rate of d_F_n,S(k) thus lies between k and k^1.4411. It is not clear what the correct
asymptotics are.
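As a simple sanity check on these bounds (our illustration, not one of the cited results), consider k = 2. The commutator
\[
  [x_1,x_2] \;=\; x_1 x_2 x_1^{-1} x_2^{-1} \;\in\; \gamma_2(F_n)
\]
is a nontrivial element of γ_2(F_n) of word length 4, so d_F_n,S(2) ≤ 4, while the linear lower bound above gives d_F_n,S(2) ≥ 2. In fact, every generator must have exponent sum zero in an element of the commutator subgroup, which forces even word length and rules out reduced words of length 2; hence d_F_n,S(2) = 4.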
§.§ Upper bounds
Now let G be a non-nilpotent residually nilpotent group with a finite generating set S. If
G contains a non-abelian free subgroup, then using the work of Elkasapy–Thom discussed above
we can find an upper bound on d_G,S(k) that grows[Precise upper bounds are
more complicated and depend on how the free subgroup is embedded in G.] like k^1.4411. However, lower bounds
on d_G,S(k) do not follow from the analogous results for free groups, so for
the rest of this paper we focus on lower bounds.
§.§ Surface groups
Let Σ_g be a closed oriented genus g ≥ 2 surface and let
π = π_1(Σ_g) = ⟨ a_1,b_1,…,a_g,b_g | [a_1,b_1] ⋯ [a_g,b_g] = 1 ⟩.
Here our convention is that [x, y] = xyx^-1y^-1.
The surface group π is residually nilpotent but not nilpotent <cit.>, and shares many features
with free groups. Since g ≥ 2, the subgroup of π generated by a_1 and b_1 is a rank 2
free group. As in <ref> above, this implies a
k^1.4411 upper bound on the growth rate of d_π,S(k).
However, lower bounds are more problematic. The known lower bounds for free groups use
the free differential calculus, and there is no analogue of the free differential
calculus for surface groups.[The free derivatives are derivations
d: F_n →ℤ[F_n]. For a group G, if there exist nontrivial derivations
d: G →ℤ[G], then H^1(G;ℤ[G]) ≠ 0. If G has a compact K(G,1) this implies that G has more
than one end <cit.>, so G cannot be a one-ended group like a surface group.]
The lower bounds for free groups can also be derived using
the “Magnus representations” from free groups to units in rings of power series with
noncommuting variables,
but again it seems hard to construct suitable analogues for surface groups.
Nevertheless, we are able to prove the following:
Let π be a nonabelian surface group with standard generating set
S = {a_1,b_1,…,a_g,b_g}. Then for all k ≥ 1 we have
d_π,S(k) ≥1/4 k.
The 1/4 in this theorem is probably not optimal. We make the following conjecture:
Let π be a nonabelian surface group with standard generating set
S = {a_1,b_1,…,a_g,b_g}. Then d_π,S(k) ≥ k for all k ≥ 1.
See <ref> below for why our proof likely cannot be extended to prove
this conjecture.
§.§ Right-angled Artin groups
We will derive Theorem <ref> from an analogous result
for right-angled Artin groups, which are defined as follows.
Let X be a finite graph. The associated right-angled Artin group (RAAG) is the group A_X given
by the following presentation:
* The generators are the vertex set V(X).
* The relations are [x,y]=1 whenever x,y ∈ V(X) are joined by an edge.
The free abelian group ^n is the RAAG with X the complete graph on n vertices, and the free group F_n is the RAAG
with X a graph with n vertices and no edges.
These groups play an important role in many areas of geometric group theory (see, e.g., <cit.>).
Just like free groups and surface groups, they are residually nilpotent <cit.>, and
they are only nilpotent if they are free abelian, i.e., if X is a complete graph.
The latter fact can be deduced from the basic observation that if Y is a vertex-induced
subgraph of X, then the natural map A_Y → A_X is split injective; indeed,
the map A_X → A_Y that kills the generators which are not vertices of Y is a right inverse
for it.
Right-angled Artin groups
often contain many surface subgroups <cit.>, and
we will prove Theorem <ref> by embedding surface groups into RAAGs
and studying the lower central series depth function there. The main result
we need along these lines is as follows.
Let X be a finite graph that is not a complete graph,
and let S = V(X) be the generating set of A_X. Then
for k ≥ 1 we have d_A_X,S(k) ≥ k.
Though Theorem <ref> does not seem to previously appear
in the literature, it is implicit in the work of Wade (see <cit.>), and
our proof follows his ideas.
The key tool is
a version of the “Magnus representation” for RAAGs
that was introduced by Droms in his thesis <cit.>, generalizing
work of Magnus on free groups. The classical Magnus representations
are maps from F_n to units in rings of power series with noncommuting variables (see <cit.>).
They contain much of the same information as the free derivatives.
§.§ From RAAGs to surface groups
Let G be a non-nilpotent residually nilpotent group with finite generating set T and let H be the subgroup of G generated
by a finite subset S ⊂ G. Each s ∈ S can be written as a word in T^± 1, so we can define
r = max{s_T : s ∈ S}.
For h ∈ H, we thus have
h_S ≥1/rh_T.
From this, we see that
d_H,S(k) ≥1/r d_G,T(k) for all k ≥ 1.
Since all nonabelian surface groups π are subgroups of RAAGs, Theorem <ref> therefore immediately
implies a linear lower bound on the lower central series depth function of π.
However, the precise constants depend on the embedding into a RAAG, and without further
work might depend on the genus g.
To get the genus-independent constant 1/4 from Theorem <ref>,
we will have to carefully control the geometry of our embeddings of surface groups into RAAGs
and ensure that we can take r=4 in the above.
Many other groups can also be embedded in right-angled Artin groups, and the argument
above shows that all of them have linear lower bounds on their lower central series
depth functions.
§.§ Optimal embeddings
It is natural to wonder if we can improve the 1/4 in Theorem <ref>
by using a more clever embedding into a RAAG. We conjecture that this is not possible:
Let π be a nonabelian surface group with standard generating set
S = {a_1,b_1,…,a_g,b_g}, let X be a finite graph, and let
ϕπ↪ A_X be an embedding. Then there exists
some s ∈ S such that ϕ(s)_V(X)≥ 4.
As we will discuss in <ref> below, Crisp–Wiest <cit.> gave
an explicit description of all homomorphisms from surface groups to RAAGs in terms of collections of
loops on the surface. To prove Conjecture <ref>, what one would have
to show is that if ϕπ→ A_X is a map from a surface group to a RAAG
arising from the Crisp–Wiest construction that does not satisfy the conclusion of
Conjecture <ref>, then ϕ is not injective.
§.§ Outline
We prove Theorem <ref> in <ref> and Theorem <ref>
in <ref>. This last
section depends on the preliminary
<ref>, which discusses work of Crisp–Wiest parameterizing maps from surface groups to RAAGs.
§ RIGHT-ANGLED ARTIN GROUPS
Let X be a finite graph with associated right-angled Artin group A_X. In this section,
we first discuss some structural results about A_X and then prove Theorem <ref>.
§.§ Monoid
In addition to the right-angled Artin group A_X, we will also need the right-angled Artin monoid M_X. This
is the associative monoid with the following presentation:
* The generators are the vertices V(X) of X. To distinguish these generators from the
corresponding generators of A_X, we will sometimes write them with bold-face letters. In other words, s denotes
an element of A_X and 𝐬 denotes an element of M_X.
* The relations are 𝐱𝐲 = 𝐲𝐱 whenever x,y ∈ V(X) are joined by an edge.
There is a monoid homomorphism M_X → A_X whose image is the set of all elements
of A_X that can be represented by “positive words”. As we will discuss below, this
monoid homomorphism is injective.
§.§ Normal form
Let S = V(X) be the generating set for A_X and M_X. Consider a word
w = s_1^e_1⋯ s_n^e_n with s_1,…,s_n ∈ S and e_1,…,e_n ∈.
This word represents an element of A_X, and if e_i ≥ 0 for all 1 ≤ i ≤ n it represents
an element of M_X (here for conciseness we are not using our bold-face conventions). We say that w is fully reduced if it satisfies the following conditions:
* Each e_i is nonzero.
* For all 1 ≤ i < j ≤ n with s_i = s_j, there exists some k with i < k < j such that
s_k does not commute[As observed earlier, A_Y embeds in A_X for any
vertex-induced subgraph Y, so this is equivalent to s_k being distinct from
and not adjacent to s_i = s_j.] with s_i = s_j.
Note that this implies in particular that s_i ≠ s_i+1 for all 1 ≤ i < n, so w is reduced as
a word in the free group on S. It is clear that every element of A_X and M_X can be represented
by a fully reduced word.
This representation is unique in the following sense:
* Consider fully reduced words
w = s_1^e_1⋯ s_n^e_n and w' = t_1^f_1⋯ t_m^f_m
representing the same element of A_X or M_X. Then we can obtain w' from w by a sequence of swaps, i.e.,
flipping adjacent terms s_i^e_i and s_i+1^e_i+1 such that s_i commutes with s_i+1.
For A_X, this uniqueness was stated without proof by Servatius <cit.>. The earliest proof
we are aware of is in Green's thesis <cit.>. Alternate proofs can be found in
<cit.> and <cit.>.
Using the monoid homomorphism M_X → A_X, the uniqueness for M_X follows[Whether this is a
circular argument depends on the proof of uniqueness used for A_X. The geometric proof
from <cit.> works directly with groups, and does not even implicitly
prove anything about monoids.] from that of A_X. Note that this uniqueness also implies
that the monoid homomorphism M_X → A_X is injective.
The following lemma shows that fully reduced words realize the word norm in A_X:
Let X be a finite graph. Let S = V(X) be the generating set for A_X. Consider some w ∈ A_X, and represent
w by a fully reduced word
w = s_1^e_1⋯ s_n^e_n with s_1,…,s_n ∈ S and e_1,…,e_n ∈ℤ.
Then w_S = |e_1|+⋯+|e_n|.
Immediate from the uniqueness up to swaps of fully reduced words as well as the fact that taking
an arbitrary word and putting it in fully reduced form does not lengthen the word.
§.§ Monoid ring
Let ℤ[M_X] be the monoid ring whose elements are formal ℤ-linear combinations of elements of M_X.
Since the relations in M_X are all of the form 𝐱𝐲 = 𝐲𝐱 for generators 𝐱 and 𝐲,
all words representing an element 𝐦 ∈ M_X have the same length, which we will denote ℓ(𝐦). This length function
satisfies ℓ(𝐦_1 𝐦_2) = ℓ(𝐦_1)+ℓ(𝐦_2) for 𝐦_1,𝐦_2 ∈ M_X. For k ≥ 0, define
M_X^(k) = {𝐦 ∈ M_X : ℓ(𝐦) = k}.
The monoid ring ℤ[M_X] is a graded ring with ℤ[M_X]_(k) = ℤ[M_X^(k)].
§.§ Partially commuting power series
Let I ⊂ℤ[M_X] be the ideal generated by the elements of the generating set V(X). For k ≥ 1,
the ideal I^k consists of ℤ-linear combinations of 𝐦 ∈ M_X with ℓ(𝐦) ≥ k. Define
𝒫_X = lim_⟵ℤ[M_X]/I^k.
Elements of the inverse limit 𝒫_X can be regarded as power series
∑_k=0^∞𝐟_k with 𝐟_k ∈ℤ[M_X]_(k) for all k ≥ 0.
Each 𝐟_k is a linear combination of products of k generators from V(X), some of which commute and
some of which do not. Multiplication works in the usual way:
(∑_k=0^∞𝐟_k) (∑_k'=0^∞𝐟'_k') = ∑_ℓ=0^∞(∑_k+k' = ℓ𝐟_k 𝐟'_k').
§.§ Magnus representation
We now discuss the Magnus representation of A_X, which was introduced by Droms in his thesis <cit.>, generalizing
classical work of Magnus for free groups (see <cit.>). See <cit.> for
a survey. The starting point is the observation that for s ∈ V(X), we have the following identity in
𝒫_X:
(1+𝐬)(1-𝐬+𝐬^2-𝐬^3+⋯) = 1.
In other words, 1+𝐬 is a unit in 𝒫_X. If generators s,s' ∈ V(X) commute, then 1+𝐬 and 1+𝐬' also commute. It follows that we can define
a homomorphism
μ: A_X ⟶(𝒫_X)^×
via the formula
μ(s) = 1+𝐬 for s ∈ V(X).
§.§ Dimension subgroups and the lower central series
Recall that I ⊂ℤ[M_X] is the ideal generated by elements of the generating set V(X). There
is a corresponding ideal Î ⊂𝒫_X consisting of
all elements with constant term 0.
For k ≥ 1, the k^th dimension subgroup of A_X,
denoted D_k(A_X), is the kernel of the composition
A_X μ⟶𝒫_X ⟶𝒫_X/Î^k.
In other words, D_k(A_X) consists of elements w ∈ A_X such that
μ(w) = 1 + (terms of degree at least k).
The most important theorem about D_k(A_X) identifies it with the k^th term
of the lower central series of A_X:
Let X be a finite graph. Then D_k(A_X) = γ_k(A_X) for all k ≥ 1.
In fact, for what follows all we need is the much easier fact that γ_k(A_X) ⊂ D_k(A_X),
which appears in Droms's thesis <cit.>. For this, since D_1(A_X) = A_X = γ_1(A_X)
it is enough to verify that
[D_k(A_X),D_ℓ(A_X)] ⊂ D_k+ℓ(A_X),
which is immediate from the definitions.
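To illustrate the inclusion γ_2(A_X) ⊂ D_2(A_X) concretely (this computation is a standard illustration on our part, not a step of the argument below), let x, y ∈ V(X) be two generators that are not joined by an edge. Expanding the Magnus representation of their commutator gives
\[
\mu([x,y]) = (1+\mathbf{x})(1+\mathbf{y})(1-\mathbf{x}+\mathbf{x}^2-\cdots)(1-\mathbf{y}+\mathbf{y}^2-\cdots)
           = 1 + \mathbf{x}\mathbf{y} - \mathbf{y}\mathbf{x} + (\text{terms of degree} \ge 3).
\]
Since x and y do not commute, 𝐱𝐲 ≠𝐲𝐱 in M_X, so the degree-2 term is nonzero; thus [x,y] lies in D_2(A_X) but not in D_3(A_X).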
§.§ Lower bounds for the lower central series of a RAAG
We close this section by proving Theorem <ref>. As we said in the introduction,
the proof closely follows ideas of Wade <cit.>.
We start by recalling the statement. Let X be a finite graph that is not a complete graph and
let S = V(X) be the generating set for A_X. Consider a nontrivial element w ∈ A_X, and
let k = w_S be its word norm in the generating set S. We must prove that
w ∉γ_k+1(A_X). By Theorem <ref>, it is enough to prove
that w ∉ D_k+1(A_X).
Represent w by a fully reduced word:
w = s_1^e_1⋯ s_n^e_n with s_1,…,s_n ∈ S and e_1,…,e_n ∈ℤ.
By Lemma <ref>, we have
w_S = |e_1|+⋯+|e_n| ≥ n.
It is thus enough to prove that w ∉ D_n+1(A_X). To do this, it is enough to
prove that a term of degree n appears in μ(w) ∈𝒫_X.
An easy induction shows that for all 1 ≤ i ≤ n, we have
μ(s_i^e_i) = (1+𝐬_i)^e_i = 1 + e_i 𝐬_i + 𝐬_i^2 𝐮_i for some 𝐮_i ∈𝒫_X.
It follows that
μ(w) = (1+e_1 𝐬_1 + 𝐬_1^2 𝐮_1) (1+e_2 𝐬_2 + 𝐬_2^2 𝐮_2) ⋯(1+e_n 𝐬_n + 𝐬_n^2 𝐮_n).
Say that some 𝐦 ∈ M_X is square-free if it cannot be expressed as a word
in the generators S = V(X) for the monoid M_X with two consecutive letters the same generator.[Be warned
that it is possible for an element to have one such expression while not being square-free.
For instance, if 𝐬,𝐬' ∈ S are distinct commuting generators then 𝐬𝐬'𝐬 is not square-free
since 𝐬𝐬'𝐬 = 𝐬^2 𝐬'.] It is immediate from the uniqueness up to swaps of fully reduced
words that the fully reduced word 𝐬_1 𝐬_2 ⋯𝐬_n represents a square-free element of
M_X. When we expand out (<ref>), the only square-free term of degree
n is
e_1 e_2 ⋯ e_n 𝐬_1 𝐬_2 ⋯𝐬_n.
It follows that this degree n term survives when we expand out μ(w), as desired.
§ MAPPING SURFACE GROUPS TO RAAGS
Before we can prove Theorem <ref>, we must
discuss some work of Crisp–Wiest <cit.> that parameterizes maps from surface groups to RAAGs.
We will not need the most general form of their construction (which they prove can give any homomorphism
from a surface group to a RAAG), so we will only describe a special case of it. Fix a closed oriented
surface Σ and a basepoint ∗∈Σ.
§.§ Crisp–Wiest construction
A simple dissection[Crisp and Wiest use the term dissection for a collection of curves which satisfy some conditions and have a certain decoration. We add “simple” to indicate that
we do not have any decoration.] on Σ is a finite collection of oriented simple closed curves
on Σ satisfying the following conditions:
* None of the curves contain the basepoint ∗.
* Any two curves in intersect transversely.
* There are no triple intersection points between three curves in .
For a simple dissection , let X() be the graph whose vertices are the curves in and where
two vertices are joined by an edge if the corresponding curves intersect. Crisp–Wiest <cit.> proved that
the following gives a well-defined homomorphism ϕπ_1(Σ,∗) → A_X():
* Consider some x ∈π_1(Σ,∗). Realize x by an immersed based loop η: [0,1] →Σ
that is transverse to all the curves in the dissection and avoids intersection points between its curves. If η is disjoint from all the curves in the dissection, then ϕ(x) = 1. Otherwise,
let
0 < t_1 < ⋯ < t_n < 1
be the collection of all values such that η(t_i) is contained in some γ_i in the dissection. For 1 ≤ i ≤ n, let
e_i = ± 1 be the sign of the intersection of η with the oriented loop γ_i at η(t_i). Then
ϕ(x) = γ_1^e_1⋯γ_n^e_n∈ A_X().
We will say that ϕ is the map obtained by applying the Crisp–Wiest construction to .
§.§ Injectivity criterion
Crisp–Wiest <cit.> describe an approach for proving that ϕ is injective in certain cases.
To describe it, we must introduce some more terminology. For a simple dissection
on Σ, let
G() = ⋃_γ∈γ,
which we view as a graph embedded in Σ_g with a vertex for each intersection point
between curves in . We say that is a filling simple dissection if each
component of Σ∖ G() is a disk.
For a component U of Σ∖ G(), the boundary of U can be identified
with a circuit in the graph G(). Say that U satisfies the injectivity criterion
if the following holds for any two distinct edges e and e'
in the boundary of U. Let γ and γ' be the oriented curves in
that contain e and e', respectively. We then require that γ≠γ' and
that if γ intersects γ', then e and e'
are adjacent edges in the boundary of U.
We can now state our injectivity criterion:
Let Σ be a closed oriented surface equipped with a basepoint
∗ and let be a filling simple dissection on Σ.
For all components U of Σ∖ G(), assume that U
satisfies the injectivity criterion.
Then the map ϕπ_1(Σ,∗) → A_X() obtained
by applying the Crisp–Wiest construction to is injective.
While Proposition <ref> is not explicitly stated or proved in <cit.>,
it is implicit in their work. We present a proof for the convenience of the reader. This requires some preliminary definitions.
§.§ Salvetti complex
Let X be a finite graph and let A_X be the corresponding right-angled Artin group. The
Salvetti complex of A_X, denoted (X), is a certain non-positively curved cube complex[Here
a cube complex is non-positively curved if its universal cover is CAT(0).] with π_1((X)) = A_X.
It can be constructed as follows. Enumerate
the vertices of X as
V(X) = {v_1,…,v_n}.
Identify S^1 with the unit circle in ℂ, so 1 ∈ S^1 is a basepoint.
For a subset I ⊂{v_1,…,v_n} of cardinality k, let S_I ≅ (S^1)^k be
S_I = {(z_1,…,z_n) ∈ (S^1)^n : z_i = 1 for all i with v_i ∉ I}.
A subset I ⊂{v_1,…,v_n} is a k-clique of X if the subgraph of X
induced by I is a complete subgraph on k vertices. A clique is a set of vertices
that forms a k-clique for some k. With these definitions,
(X) is the union of the S_I as I ranges over cliques in X. The space (X)
can be given a cube complex structure containing a k-cube for each k-clique
in X. In particular, it has a single vertex (i.e., 0-cube) corresponding to the (empty)
0-clique.
§.§ Dual cubulation
Now let be a filling simple dissection on Σ_g.
We can form a dual cube complex structure on Σ_g as follows:
* Put a vertex in the interior of each component of Σ_g ∖ G(). For the component
containing the basepoint ∗, the vertex should be ∗.
* For each edge e of G(), connect the vertices in the components on either side of
e by an edge.
* For each vertex v of G(), put a 2-cube centered at v as follows:
[Figure: Cube — the 2-cube of the dual cubulation centered at a vertex of G(), as described below.]
Here the graph G() is blue, the cube centered at the vertex of G() is green, and the surrounding
cubes are yellow, pink, and orange. The colors are just there to distinguish the different cubes visually, so
e.g., the different yellow cubes might or might not coincide (depending on the structure of G() on the rest
of the surface).
We will call this the cube complex structure dual to .
§.§ Proof of Proposition <ref>
We first recall what we must prove. Let Σ be a closed oriented surface equipped with a basepoint
∗ and let be a filling simple dissection on Σ.
For all components U of Σ∖ G(), assume that U
satisfies the injectivity criterion.
We must prove that the map ϕπ_1(Σ,∗) → A_X() obtained
by applying the Crisp–Wiest construction to is injective.
Endow Σ with the cube complex structure dual to , and let (X()) be the Salvetti complex
of A_(X). We start by constructing a map of cube complexes fΣ→(X())
such that
f_∗π_1(Σ,∗) →π_1((X())) = A_X()
equals ϕ. Define f as follows:
* The map f sends each vertex of Σ to the unique vertex of (X()).
* For an edge e of Σ that crosses an oriented loop γ of , the map
f takes e isometrically to the loop of (X()) corresponding to the 1-clique {γ}
of X(). Orienting e such that the intersection of e with γ is positive, we do
this such that f(e) goes around the loop in the direction corresponding to the generator γ
of π_1((X())) = A_X().
* For a 2-cube c of Σ centered at an intersection of loops γ_1 and γ_2
of , the map f sends c isometrically to the 2-cube corresponding to the 2-clique
{γ_1,γ_2} of X().
With these definitions, it is clear that f_∗ = ϕ.
By <cit.>, the map f_∗ = ϕ will be an injection if for every vertex
v of Σ, the map f take the link of v injectively into a full subcomplex
of the link of f(v) in (X()). These links have the following description:
* The vertex v lies in some component U of Σ∖ G(). The link of v is
a cycle whose vertices are precisely the edges of G() surrounding U.
* The vertex f(v) is the unique vertex of (X()). Its link is the following complex:
* There are two vertices for each generator γ of A_X() (or alternatively, each
γ∈), one corresponding to the positive direction and the other to the negative direction.
* A collection of vertices forms a simplex if they correspond to distinct generators of
A_X_ all of which commute.
From this description, we see that the fact that U satisfies the injectivity criterion
ensures that f takes the link of v injectively into a full
subcomplex of the link of f(v) in (X()), as desired.
§ BOUNDS ON SURFACE GROUPS
We now study the lower central series of surface groups and prove
Theorem <ref>.
We start by recalling the statement. For some g ≥ 2, let Σ_g be a closed oriented genus g surface
equipped with a basepoint ∗ and let S = {a_1,b_1,…,a_g,b_g} be the standard basis for π = π_1(Σ_g,∗).
Our goal is to prove that
d_π,S(k) ≥1/4 k for all k ≥ 1. Equivalently, consider
some nontrivial w ∈γ_k(π). We must prove that w_S ≥1/4 k.
What we will do is find a finite graph X and an injective homomorphism
ϕπ→ A_X such that letting T = V(X) be the generating set for A_X, we have
ϕ(s)_T≤ 4 for all s ∈ S.
We then have
ϕ(w) ∈γ_k(A_X), and since ϕ is injective we have ϕ(w) ≠ 1.
Since π is nonabelian the graph X is not a complete graph, so
we can apply Theorem <ref> to deduce that
ϕ(w)_T≥ k. Since ϕ(s)_T≤ 4 for all s ∈ S,
we conclude that
w_S ≥1/4ϕ(w)_T≥1/4 k,
as desired.
It remains to construct X and ϕ. We can draw the elements of S as follows, where a_k “encircles” the kth hole from the left:
[Figure: PiGenerators — the standard generators a_1,b_1,…,a_g,b_g drawn on Σ_g.]
Let
= {x_0,…,x_g,y_1,…,y_g,z}
be the following filling simple dissection on Σ_g:
[Figure: ArtinLoops — the filling simple dissection {x_0,…,x_g,y_1,…,y_g,z} on Σ_g.]
Let ϕπ→ A_X() be the homomorphism obtained by applying the Crisp–Wiest
construction to and let T = V(X()) be the generating set for A_X().
There are four components of Σ_g ∖ G(), and by inspection each of them
satisfies the injectivity criterion from <ref>.
Proposition <ref> thus implies that ϕ is injective.
By construction, the following hold:
ϕ(a_k) = x_k-1 x_k^-1,
ϕ(b_k) = x_k z y_k x_k^-1.
These formulas imply that
ϕ(s)_T≤ 4 for all s ∈ S, as desired.
Baumslag
G. Baumslag, On generalised free products, Math. Z. 78 (1962), 423–438.
CharneySurvey
R. M. Charney, An introduction to right-angled Artin groups, Geom. Dedicata 125 (2007), 141–158. math/0610668
CrispSageevSapir
J. S. Crisp, M. Sageev and M. V. Sapir, Surface subgroups of right-angled Artin groups, Internat. J. Algebra Comput. 18 (2008), no. 3, 443–491. 0707.1144
CrispWiest
J. S. Crisp and B. Wiest, Embeddings of graph braid and surface groups in right-angled Artin groups and braid groups, Algebr. Geom. Topol. 4 (2004), 439–472. math/0303217
DromsThesis
C. Droms, Graph Groups, PhD thesis, Syracuse University, 1983.
ElkasapyThom
A. I. Elkasapy and A. Thom, On the length of the shortest non-trivial element in the derived and the lower central series, J. Group Theory 18 (2015), no. 5, 793–804. 1311.0138
FoxFree1
R. H. Fox, Free differential calculus. I. Derivation in the free group ring, Ann. of Math. (2) 57 (1953), 547–560.
Frederick
K. N. Frederick, The Hopfian property for a class of fundamental groups, Comm. Pure Appl. Math. 16 (1963), 1–8.
GreenThesis
E. R. Green, Graph products of groups, PhD thesis, University of Leeds, 1990.
KimSurface
S. Kim, On right-angled Artin groups without surface subgroups, Groups Geom. Dyn. 4 (2010), no. 2, 275–307. 0811.1946
Magnus
W. Magnus, Beziehungen zwischen Gruppen und Idealen in einem speziellen Ring, Math. Ann. 111 (1935), no. 1, 259–280.
MagnusKarrassSolitar
W. Magnus, A. Karrass and D. M. Solitar, Combinatorial group theory, second revised edition, Dover Publications, Inc., New York, 1976.
MalesteinPutmanFree
J. Malestein and A. Putman, On the self-intersections of curves deep in the lower central series of a surface group, Geom. Dedicata 149 (2010), 73–84. 0901.2561
ScottWallTopological
G. P. Scott and C. T. C. Wall, Topological methods in group theory, in Homological group theory (Proc. Sympos., Durham, 1977), 137–203, London Math. Soc. Lecture Note Ser., 36, Cambridge Univ. Press, Cambridge.
ServatiusAutos
H. Servatius, Automorphisms of graph groups, J. Algebra 126 (1989), no. 1, 34–60.
ServatiusDromsServatius
H. Servatius, C. Droms and B. Servatius, Surface subgroups of graph groups, Proc. Amer. Math. Soc. 106 (1989), no. 3, 573–578.
WadeSurvey
R. D. Wade, The lower central series of a right-angled Artin group, Enseign. Math. 61 (2015), no. 3-4, 343–371. 1109.1722
WiseBook
D. T. Wise, From riches to raags: 3-manifolds, right-angled Artin groups, and cubical geometry, CBMS Regional Conference Series in Mathematics, 117, Published for the Conference Board of the Mathematical Sciences, Washington, DC, 2012.
|
http://arxiv.org/abs/2307.09469v2 | 20230710180805 | Graph Representation of the Magnetic Field Topology in High-Fidelity Plasma Simulations for Machine Learning Applications | [
"Ioanna Bouri",
"Fanni Franssila",
"Markku Alho",
"Giulia Cozzani",
"Ivan Zaitsev",
"Minna Palmroth",
"Teemu Roos"
] | physics.plasm-ph | [
"physics.plasm-ph",
"cs.LG"
] |
Graph Representation of the Magnetic Field Topology in High-Fidelity Plasma Simulations for Machine Learning Applications
Ioanna Bouri¹, Fanni Franssila¹, Markku Alho², Giulia Cozzani², Ivan Zaitsev², Minna Palmroth², Teemu Roos¹
¹Department of Computer Science, University of Helsinki, Helsinki, Finland
²Department of Physics, University of Helsinki, Helsinki, Finland
Correspondence to: Ioanna Bouri ([email protected]@helsinki.fi)
Topological analysis of the magnetic field in simulated plasmas allows the study of various physical phenomena in a wide range of settings. One such application is magnetic reconnection, a phenomenon related to the dynamics of the magnetic field topology, which is difficult to detect and characterize in three dimensions. We propose a scalable pipeline for topological data analysis and spatiotemporal graph representation of three-dimensional magnetic vector fields. We demonstrate our methods on simulations of the Earth's magnetosphere produced by Vlasiator, a supercomputer-scale Vlasov theory-based simulation for near-Earth space. The purpose of this work is to challenge the machine learning community to explore graph-based machine learning approaches to address a largely open scientific problem with wide-ranging potential impact.
§ INTRODUCTION
Magnetic reconnection is a fundamental plasma physical process characterized by a topological reconfiguration of the magnetic field and energy conversion from magnetic to kinetic and thermal energy, leading to plasma heating, particle acceleration, and mixing of plasmas <cit.>. The phenomenon is encountered in different settings and plays a key role in the eruption of solar flares and coronal mass ejections (CMEs) in the solar corona <cit.>, in the Earth's magnetosphere and its interaction with the solar wind <cit.>, in astrophysical plasmas <cit.>, as well as in fusion plasma during major and minor tokamak disruptions <cit.>.
Magnetic reconnection is linked to space weather conditions that can potentially damage terrestrial technological infrastructure, satellites, and manned space missions <cit.>.
CMEs cause magnetospheric magnetic storms <cit.>, during which the terrestrial power grids may suffer from Geomagnetically Induced Currents (GICs) and even fail <cit.>. Solar flares accelerate particles into relativistic energies, which propagate to the Earth's upper atmosphere and
affect satellite and radar signals that can be significantly altered or lost during active space weather conditions <cit.>.
The nature of the phenomenon is well-understood in two-dimensional (2D) settings, and quasi-2D models have been successful at reproducing many features of reconnection in the solar corona and the Earth's magnetosphere <cit.>.
However, magnetic reconnection is intrinsically a three-dimensional (3D) process. This becomes especially evident when considering reconnection in the solar corona, where the magnetic field forms twisted coronal loops with complex topologies <cit.>. Despite considerable progress, the additional complexity introduced in 3D settings continues to pose many open questions regarding the nature of 3D magnetic reconnections in the solar and the Earth's magnetospheric environment <cit.>.
We present a scalable pipeline for topological data analysis and graph representation of 3D magnetic vector fields. First, we introduce spatial null graphs, a graph representation that can be used to characterize the topology of a magnetic field. In addition, to encode the temporal evolution of the magnetic field, we extend this concept with spatiotemporal null graphs. Finally, we present the spatial and spatiotemporal null graphs produced by the topological analysis of the magnetic vector field in the Earth's magnetotail. For this purpose, we use 3D global simulations produced by Vlasiator, a supercomputer-scale Vlasov theory-based model that incorporates the solar wind – magnetosphere – ionosphere system
<cit.>. The constructed graphs enable the use of topological information as input for machine learning methods such as (spatiotemporal) graph neural networks (GNNs) <cit.>.
§ MAGNETIC FIELD TOPOLOGY
This section introduces some concepts of vector field topology. For a general introduction, see <cit.>; for reviews focused on magnetic fields and magnetic reconnection in particular, see <cit.>.
The magnetic field on a location with spatial coordinates x⃗ = (x,y,z) ∈ ℝ^3 can be represented as a vector field B⃗(x⃗) = (B_x(x⃗), B_y(x⃗), B_z(x⃗)). According to Gauss's law for magnetism, the field has zero divergence everywhere
∇·B⃗ = ∂ B_x/∂ x + ∂ B_y/∂ y + ∂ B_z/∂ z ≡ 0.
From a topological perspective, magnetic nulls, points x⃗_0 where the magnetic field vanishes, ‖B⃗(x⃗_0)‖_2 = 0, are of special interest. At such points, the structure of the local field can be characterized by forming a first-order Taylor approximation around x⃗_0:
B⃗(x⃗) = J(x⃗_0) (x⃗ - x⃗_0) + o(‖x⃗ - x⃗_0‖_2),
where J = ∇B⃗ is the Jacobian of the magnetic field.
The topology of the field is characterized by the magnetic skeleton, which comprises of the magnetic nulls, separatrix surfaces delineating distinct magnetic domains, and separator curves formed on the intersections of separatrix surfaces <cit.>.
To extract the magnetic skeleton of a field, we use the Visualization Toolkit (VTK) <cit.> – an open source software package for scientific data analysis and visualization.
The VTK vector field topology filter <cit.> is a later extension to the package that adds the functionality for computing the main elements of the topological skeletons of 2D and 3D vector fields.
§.§ Magnetic nulls
Magnetic nulls can be classified into different types that characterize their topology, according to the eigenvalues of the Jacobian matrix J of the vector field <cit.>.
Given the three eigenvalues of the Jacobian, (λ_1, λ_2,λ_3) ∈ℂ^3, it follows from Eq. (<ref>) that their sum is equal to zero:
λ_1 + λ_2 + λ_3 = 0,
and, therefore,
two of the eigenvalues must have the same sign while the third one is of the opposite sign[We do not consider degenerate nulls where one or more of the eigenvalues is exactly zero. Such points are physically unstable <cit.> and can be handled as special cases of one or more of the four types we introduce here.]. Moreover, while the eigenvalues can be complex-valued, the eigenvalues with non-zero imaginary parts always come in pairs of complex conjugates, so that their real parts are the same, and the third eigenvalue is a real number of the opposite sign.
Due to the above constraints, each null can be classified in terms of its polarity; if the two same-sign eigenvalues are negative, the null is classified as a negative null, otherwise it is a positive null <cit.>. Furthermore, magnetic nulls with complex eigenvalues exhibit a spiraling topology <cit.>. Figure <ref> illustrates the resulting classification into four types, where types A and B represent (non-spiraling) topologies of negative and positive magnetic nulls, while types As and Bs encode spiraling topologies of negative and positive magnetic nulls, respectively.
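A minimal sketch of this classification (ours, not the modified VTK filter itself) takes the Jacobian of the field at a detected null and reads off the type from its eigenvalues:

# Illustrative sketch (not the VTK filter): classify a non-degenerate magnetic
# null from the eigenvalues of the Jacobian J = grad(B) evaluated at the null.
import numpy as np

def classify_null(jacobian, tol=1e-12):
    eig = np.linalg.eigvals(jacobian)               # three eigenvalues summing to ~0
    spiral = bool(np.any(np.abs(eig.imag) > tol))   # complex pair -> spiraling topology
    positive = int(np.sum(eig.real > 0.0)) == 2     # two same-sign (positive) eigenvalues
    if positive:
        return "Bs" if spiral else "B"
    return "As" if spiral else "A"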
§.§ Separatrices and separators
The eigenvectors of the Jacobian can be used to define the so-called separatrices that are associated with the magnetic nulls <cit.>. Each non-degenerate null has one 2D separatrix or fan surface and two one-dimensional virtual separatrices or spines <cit.>. The fan surface is defined by the infinitely many magnetic field lines within the plane spanned by the two eigenvectors corresponding to the same-sign eigenvalues. The two spine field lines end in the magnetic null point, entering
along the directions parallel and antiparallel to the third eigenvector, normal to the fan plane <cit.>.
In physical simulations, magnetic nulls connect via separator curves (or reconnection lines) formed by the intersection of the fan surfaces of two connected nulls <cit.>. However, the process of integrating the separatrices to find their intersection can be computationally very expensive, which is why separators are usually approximated <cit.>.
In order to approximate the separators, we choose to follow the 2D null lines: curves along which two of the magnetic field components are zero, while the third can vary. <cit.> found that with a small guide-field, 3D reconnection is well-approximated by a 2D system. Specifically in the magnetotail environment, we ignore the East-West component of the magnetic field, B_y, since it tends to exhibit the least variation as the guide-field component, and we require that the other two components are zero, B_x=B_z=0. We call such points 2D nulls and use the term proper null to refer to points where B⃗_2=0, which are clearly a subset of the 2D nulls. We provide a modification of the existing VTK vector field topology filter to detect such 2D nulls.
§ GRAPH REPRESENTATIONS
Originally introduced by <cit.>, null graphs are graph representations that characterize the topology of a magnetic field by encoding the connectivity between proper nulls (vertices) via separators (edges).
We extend their definition to construct a graph representation that can be useful for different machine learning tasks and downstream applications, such as spatiotemporal GNNs <cit.>. We propose two computationally efficient heuristics to trace the connectivity between proper nulls both spatially and temporally.
§.§ Spatial Null Graphs
After modifying the VTK vector field topology filter to detect the 2D magnetic nulls where B_x = B_z = 0 (sec. <ref>), we construct the 2D null lines by connecting the 2D nulls to each other based on spatial proximity.
In practice, the 2D null lines can be traced by initializing at most two paths from each proper null based on a cut-off value on the maximum Euclidean distance from the proper null.
Each of the paths is then iteratively expanded by finding – within the same cut-off distance – the nearest 2D null that is not already included in any of the already traced paths. Paths terminating without reaching a proper null are considered a dead end and are discarded, while paths ending at a proper null become edges in the null graph. The type of each proper null (A / B / As / Bs) is encoded as a node feature in the graph.
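The path-growing step of this heuristic can be sketched as follows (illustrative only; array layouts and names are ours, and path initialization is simplified relative to the description above).

# Illustrative sketch: greedily extend one 2D-null path from a proper null.
# `nulls_2d` is an (N, 3) array of 2D-null coordinates, `proper_idx` the set of
# indices that are proper nulls, `used` the indices already claimed by paths.
import numpy as np

def trace_path(start_point, nulls_2d, proper_idx, used, cutoff):
    path, point = [], start_point
    while True:
        dist = np.linalg.norm(nulls_2d - point, axis=1)
        if used:
            dist[list(used)] = np.inf            # never revisit traced 2D nulls
        nxt = int(np.argmin(dist))
        if dist[nxt] > cutoff:
            return path, None                    # dead end: path is discarded
        used.add(nxt)
        path.append(nxt)
        point = nulls_2d[nxt]
        if nxt in proper_idx:
            return path, nxt                     # edge between two proper nulls

At most two such paths are started from each proper null; calls that return None correspond to dead ends and are dropped, while the remaining paths become edges of the spatial null graph.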
§.§ Spatiotemporal Null Graphs
Consider a bipartite graph 𝒢 = (𝒱_i, 𝒱_i+1, ℰ), where the vertex sets 𝒱_i and 𝒱_i+1 are defined by proper nulls detected at times t_i and t_i+1, respectively. The set of edges ℰ represents a (partial) matching between the magnetic nulls with the interpretation that vertices v∈𝒱_i and v'∈𝒱_i+1 are connected by an edge e=(v,v')∈ℰ if they correspond to the same proper null. The problem can be cast as an unbalanced assignment problem defined by the following maximization problem
max_ℰ∈𝔐∑_(v,v') ∈ℰ 1/w(v,v'),
where the set of allowed matchings 𝔐 is defined by requiring that each vertex appears in at most one edge, and the weight w(v,v') = x⃗(v) - x⃗(v')_2 is defined by the Euclidean distance between the respective coordinates of the proper nulls v and v'. Additional constraints including (i) a maximum distance constraint w_max, or (ii) matching only the same type nulls,
can be incorporated by letting w(v,v')=-1, for any edge that does not satisfy them. We apply both constraints to get an initial matching, and, for some of the unmatched magnetic nulls, we need to run a subsequent matching without constraint (ii) to account for type switches.
All vertices in 𝒱_i that remain unmatched are considered to have disappeared after step t_i. Likewise, all vertices in 𝒱_i+1 that remain unmatched are considered to have appeared before step t_i+1. According to <cit.>, proper nulls can either appear by entering the simulation domain across a boundary, or as a result of a bifurcation, in which case they appear in pairs of opposite polarity. These two cases can be distinguished based on the coordinates and types of the unmatched vertices in 𝒱_i+1.
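In practice, the matching can be computed with a rectangular assignment solver. The sketch below is ours: it minimizes the total distance rather than maximizing ∑ 1/w, which is a simplification of the objective above, and it encodes the constraints by assigning a prohibitively large cost to disallowed pairs.

# Illustrative sketch: match proper nulls between consecutive time steps.
# pos_a, pos_b are (n, 3) and (m, 3) coordinate arrays; types_a, types_b are
# numpy string arrays holding the null types ("A", "B", "As", "Bs").
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_nulls(pos_a, pos_b, types_a, types_b, w_max, enforce_type=True):
    cost = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    forbid = cost > w_max
    if enforce_type:
        forbid |= types_a[:, None] != types_b[None, :]
    cost = np.where(forbid, 1e9, cost)           # disallowed pairs get a huge cost
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if not forbid[r, c]]

Vertices left unmatched are the nulls that appeared or disappeared between the two steps; a second pass without the type constraint can then account for type switches, as described above.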
§ RESULTS AND DISCUSSION
The supercomputer-generated simulations from Vlasiator provide large-scale, high-fidelity data – some of which is openly available <cit.>. To give an idea of the scale of available data, the example of the published Vlasiator dataset provides 170 time-steps × 1,723,328 grid points at each time-step, i.e., the time series consists of a total of ∼ 293 million grid points. Working with such scales requires efficient and scalable methods, and the spatiotemporal graph representation of the data can allow for less resource-intensive machine learning approaches.
The magnetotail simulations used to produce the results presented here, consist of a grid with resolution 50 × 127 × 108 in the x (tailward–Earthward), y (East–West), and z (North–South) directions of the magnetic field, respectively. A step of one unit in any direction of the grid corresponds to 1000 km, and the time series used to generate the spatiotemporal null graphs has 1 s cadence <cit.>.
First, the modified VTK topology filter is used to detect 2D and proper nulls, and to classify the latter in types (sec. <ref>). The 2D nulls are detected using B_y as the guide-field component (sec. <ref>).
The results obtained from the first stage of the process are illustrated on the left side of Figure <ref>.
The result of the spatial tracing method (sec. <ref>), is presented on the right of Figure <ref>. The 3D null points are colored according to their type. Different types of spatial connectivity are also color-encoded, using different colors of spatial edges depending on the type of the connection.
Figure <ref> shows an example of a spatiotemporal graph, where for each time-step t_i for i ∈{0,1,2}, a 2D projection (X-Y plane) of the spatial graph is presented. The temporal edges trace the temporal evolution of each proper null across all time-steps. The colors of the temporal edges represent the type of the proper null traced over time, with the exception of the pink temporal edges which denote a type switch scenario (e.g., t_0 → t_1: B → Bs). Finally, the green circle at t_2 is used to mark a pair of proper nulls of opposite polarity that appear together before t_2 due to a bifurcation.
We have presented a scalable data analysis pipeline for the detection and spatiotemporal tracing of proper magnetic nulls. These methods allow us to characterize the topology a 3D magnetic field using graph representations. The resulting spatiotemporal null graphs can be useful in various downstream learning tasks, especially in GNN applications <cit.>.
In the process of formulating 3D magnetic reconnection detection as a machine learning task, two potential limitations arise. If we formulate the problem as a supervised learning task, there is a severe difficulty in reliably labeling a sufficient amount of training data, as 3D magnetic reconnection remains difficult to detect and characterize. Similarly, if we were to formulate the problem as an unsupervised learning task, questions arise regarding the interpretability of results and the model performance evaluation.
Currently, we are working on a GNN approach that aims to circumvent these issues by formulating the learning task as a plasmoid[Outflows of plasma driven by the magnetic tension force of newly reconnected field lines <cit.>.] formation forecast, as their generation is linked to reconnecting plasmas <cit.>. The location of a plasmoid can be characterized using the magnetic skeleton <cit.>, which allows us to use spatiotemporal null graphs to learn when and where a plasmoid is formed. Next, in order to detect the magnetic reconnection, we can examine the reconnection rate and energy conversion rate at the possible reconnection sites located in close proximity to the newly-formed plasmoid. This work can then be extended to facilitate the study of magnetized plasmas in different settings, which is linked to a variety of open questions that can be interesting to both the astrophysics and machine learning research communities.
§ ACKNOWLEDGEMENTS
Funding in direct support of this work: Research Council of Finland grants #345635 (DAISY) and #339327 (Carrington). The authors thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this work with computational and data storage resources.
|
http://arxiv.org/abs/2307.04340v2 | 20230710044840 | Crystal Structure Generation with Autoregressive Large Language Modeling | [
"Luis M. Antunes",
"Keith T. Butler",
"Ricardo Grau-Crespo"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
Crystal Structure Generation with Autoregressive Large Language Modeling
Luis M. Antunes, Keith T. Butler, Ricardo Grau-Crespo
================================================================================
The generation of plausible crystal structures is often an important step in the computational prediction of crystal structures from composition. Here, we introduce a methodology for crystal structure generation involving autoregressive large language modeling of the Crystallographic Information File (CIF) format. Our model, CrystaLLM, is trained on a comprehensive dataset of millions of CIF files, and is capable of reliably generating correct CIF syntax and plausible crystal structures for many classes of inorganic compounds. Moreover, we provide general and open access to the model by deploying it as a web application, available to anyone over the internet. Our results indicate that the model promises to be a reliable and efficient tool for both crystallography and materials informatics.
§ INTRODUCTION
The in silico search for new materials often involves the exploration of a space of compositions in a chemical system, and the investigation of various predicted structural phases in that space (see <cit.> and <cit.> for examples). To predict the structures of unknown materials, a Crystal Structure Prediction (CSP) approach is often employed, which attempts to derive the ground state crystal structure for a given chemical composition under specific physical conditions. CSP approaches are relatively computationally expensive, typically involving ab initio techniques. They often begin with the generation of candidate structures. Examples are the AIRSS <cit.> and USPEX <cit.> approaches. Initializing the search space with sensible structures increases the likelihood of success, and decreases the amount of computation required. It is therefore expected that effective Crystal Structure Generation (CSG) tools would help accelerate the prediction of structures using CSP methods.
Increasingly, techniques from Machine Learning (ML) and data science are being used to solve problems in materials science. <cit.> In particular, generative modelling approaches based on autoencoder architectures and generative adversarial networks (GANs) <cit.> have been used to generate crystal structures. <cit.> Indeed, generative modelling has become commonplace, an outcome catalyzed by astounding advancements in the computational generation of images, audio and natural language over the last several years. <cit.> The Large Language Model (LLM), backed by the Transformer architecture <cit.>, is the approach behind state-of-the-art performance on natural language processing tasks. This approach begins with a generative pre-training step, which is autoregressive in nature, involving the unsupervised task of predicting the next token given a sequence of preceding tokens. <cit.> When such models are scaled to billions of parameters, their effectiveness becomes quite remarkable, as tools such as ChatGPT <cit.> demonstrate.
The LLM approach has recently been used in the context of materials science. <cit.> However, these attempts have focused either on training and tuning models for natural language tasks and using them in natural language generation scenarios involving chemical subject matter, or on training models on corpora of expanded chemical compositions for the purpose of generating unseen compositions. An alternate perspective, which we present here, is to train the model on textual representations of inorganic crystal structures, such as the Crystallographic Information File (CIF) format, rather than on corpora of natural language, or chemical compositions alone.
The motivation for this perspective originates from two conjectures: The first states that a sequence of symbols (i.e. tokens) is an appropriate representation modality for many predictive tasks (including those involving chemical structure). The idea of representing any domain with a sequence of tokens may at first seem counter-intuitive. However, consider that even images can be represented this way, and be subject to the autoregressive language modelling of pixels <cit.>. This challenges the notion that domain-specific representations, such as graphs for chemical structure, are necessary for superior performance. The second conjecture states that LLMs learn more than simply “surface statistics” and the conditional probability distribution of tokens. Indeed, autoregressive pre-training involving next-token prediction may result in learning an effective world model: an internalized causal model of the processes generating the target phenomena. A model which simply learns spurious correlations in the data is less desirable, as it may have greater difficulty in generalizing beyond the training distribution. Recent studies have demonstrated that LLMs trained on sequences of board game play (e.g. Chess and Othello) do indeed track the state of the board, and probes of the internal activations of the model reveal the existence of representations of various abstract concepts specific to the domain. <cit.> We therefore asked whether a model trained to predict the 3-dimensional coordinates of atoms, digit-by-digit, could learn the chemistry implicit in crystal structures, and generate unseen structures, borrowing from its model of the world of atoms.
As such, we herein describe the CrystaLLM model, a tool for CSG trained on an extensive corpus of CIF files representing the structures of millions of inorganic solid-state materials. Unlike small molecule organic compounds, the generative modelling of inorganic crystals presents unique challenges: the structures are complex and periodic, are not readily described by simple graphs, and are imbued with different forms of symmetry. Moreover, they can be constructed from more than 100 different elements. Even so, the model is capable of reliably generating correct CIF syntax and physically
plausible crystal structures for many classes of inorganic compounds.
§ METHODS
The following terminology is used in the remainder of the document:
A formula, or reduced composition, refers to the empirical formula, or formula unit, which is the simplest, whole-number ratio of atoms in the compound. An example of a formula is Ba2MnCr.
A cell composition is a chemical formula referring to the total number of atoms of each type in the unit cell of a crystal. It represents the chemical formula of the compound as it would appear in the crystal structure, which might contain multiple formula units. An example of a cell composition is Ba6Mn3Cr3.
§.§ Dataset
The dataset was assembled by obtaining structures from the Materials Project <cit.>, the OQMD <cit.>, and NOMAD <cit.>, which were originally optimized using density functional theory (DFT) simulations. In total, approximately 3.6 million structures were obtained. This dataset consists of compounds containing anywhere from 1 to 10 elements, with most consisting of 3 or 4 elements. The elements up to and including atomic number 94 are present, with the exception of polonium, astatine, radon, francium, and radium. The dataset contains roughly 800,000 unique formulas, and 1.2 million unique cell compositions. When paired with space groups, there are 2.3 million unique cell composition-space group pairs. To choose between duplicate structures containing the same cell composition and space group, the structure with the lowest volume per formula unit was selected. The 2.3 million structures in this dataset were converted to CIF files using the pymatgen library <cit.>, and were used for training. The CIF files were created with the pymatgen option for symmetry finding tolerance set to 0.1 Å. All floating point numbers in the files were rounded to 4 decimal places. The dataset was split randomly into train, validation, and test sets, such that the training set consisted of about 2.2 million CIF files, the validation set 35,000 CIF files, and the test set 10,000 CIF files.
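The deduplication step can be sketched with pymatgen as follows; file naming, error handling, and the rounding of floating point values to 4 decimal places are simplified or omitted here, so this is an illustration of the selection rule rather than the exact pipeline.

from pathlib import Path
from pymatgen.io.cif import CifWriter

def deduplicate_and_write(structures, out_dir="cifs", symprec=0.1):
    # For each (cell composition, space group) pair, keep the structure with the
    # smallest volume per formula unit and write it out as a CIF file.
    best = {}
    for s in structures:
        _, sg_number = s.get_space_group_info(symprec=symprec)
        _, z = s.composition.get_reduced_composition_and_factor()
        key = (s.composition.formula, sg_number)
        vol_per_fu = s.volume / z
        if key not in best or vol_per_fu < best[key][0]:
            best[key] = (vol_per_fu, s)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for i, (_, s) in enumerate(best.values()):
        CifWriter(s, symprec=symprec).write_file(f"{out_dir}/{i}.cif")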
§.§ Tokenization
The dataset of CIF files was tokenized prior to training. The vocabulary consisted of CIF tags, space group symbols, element symbols, numeric digits, and various punctuation symbols, for a total of 371 symbols. After tokenization, the training set consisted of 768 million tokens.
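A minimal greedy longest-match tokenizer illustrates the idea; the symbol list below is a tiny stand-in for the actual 371-symbol vocabulary and is an assumption, not the vocabulary used for training.

# Illustrative only: the real vocabulary contains 371 hand-chosen symbols (CIF
# tags, space group symbols, element symbols, digits, punctuation).  Here a tiny
# assumed symbol list is used with greedy longest-match tokenization.
SYMBOLS = sorted(
    ["data_", "_cell_length_a", "_cell_volume", "_symmetry_space_group_name_H-M",
     "Fm-3m", "P6/mmm", "Ba", "Mn", "Cr", "O"] + list("0123456789.()'_- \n"),
    key=len, reverse=True)

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        for s in SYMBOLS:
            if text.startswith(s, i):
                tokens.append(s)
                i += len(s)
                break
        else:                      # character outside the toy vocabulary
            tokens.append(text[i])
            i += 1
    return tokens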
§.§ Generative Pre-training
The generative pre-training step requires a vocabulary, 𝒱, and an ordered list of tokens 𝒰 = (u_1, ..., u_n), with u_i ∈𝒱. We want to maximize the following likelihood:
ℒ(θ; 𝒰) = ∑_i log P(u_i | u_i-c, ..., u_i-1;θ)
where c is the size of a context window, P is the conditional probability distribution to be modelled, and θ the parameters of a neural network. We therefore minimize 𝒥(θ; 𝒰)=-ℒ, using stochastic gradient descent to adjust the parameters. We use a multi-layer Transformer decoder <cit.> for the neural network, as described in <cit.>. Our model consists of 25 million parameters, with 8 layers, 8 attention heads, and an embedding size of 512. We decay the learning rate from 10^-3 to 10^-4 over the course of training, and use a batch size of 32.
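In code, this objective is ordinary next-token cross-entropy; the sketch below (PyTorch, assumed) shows only the loss computation, not the Transformer decoder itself.

import torch
import torch.nn.functional as F

def pretraining_loss(model, tokens):
    # tokens: LongTensor of shape (batch, context + 1); `model` is assumed to map a
    # (batch, context) window to logits over the 371-symbol vocabulary.
    inputs, targets = tokens[:, :-1], tokens[:, 1:]
    logits = model(inputs)                       # (batch, context, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))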
§.§ Evaluation
To evaluate the generative capabilities of the model, we define two scenarios where the model is tasked with generating the compounds of the held-out test set. The first scenario, which we name the Cell Composition-only scenario, involves prompting the model with each cell composition in the test set, and having it generate up to a maximum of 3000 tokens. The model is prompted with only the first line of a CIF file, which consists of the data block header, containing the cell composition of the structure specified in the rest of the file. The second scenario, which we name the Cell Composition+Space Group scenario, is similar to the first, except that the model is prompted with both the cell composition and space group, for each entry in the test set. Moreover, we perform the generation 3 separate times for each entry.
To assess how well the model performed in the first scenario, we check if a generated CIF file is consistent in terms of space group, if it is consistent in terms of the atom site multiplicity, and if the generated bond lengths are reasonable. To check if the generated structure is consistent with the printed space group, we use the space-group analysis class of the pymatgen library, which uses the spglib library <cit.>. To check if bond lengths are reasonable, we first use a Voronoi-based nearest-neighbour algorithm in pymatgen to define which atoms are bonded together; then, we establish expected bond lengths based on the electronegativity difference between the bonded atoms and their ionic or covalent radii. We classify a structure as having reasonable bond lengths if all the detected bond lengths are within 30% of the corresponding expected bond lengths.
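A hedged sketch of the bond-length check, using pymatgen's Voronoi-based neighbour finder, is given below; the expected_length callable stands in for the radius- and electronegativity-based lookup, which is application-specific and not reproduced here.

import numpy as np
from pymatgen.analysis.local_env import VoronoiNN

def has_reasonable_bond_lengths(structure, expected_length, tol=0.30):
    # expected_length(site_a, site_b) must return the reference bond length derived
    # from the electronegativity difference and the ionic/covalent radii.
    vnn = VoronoiNN()
    for i, site in enumerate(structure):
        for nb in vnn.get_nn_info(structure, i):
            d = float(np.linalg.norm(site.coords - nb["site"].coords))
            ref = expected_length(site, nb["site"])
            if abs(d - ref) > tol * ref:
                return False
    return True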
The goal of the second evaluation scenario is to establish how often the model can recover the unseen structures of the test set, when prompted with a cell composition and space group. To determine whether a generated structure matches the structure in the test set, we use the structure-matching class of the pymatgen library, which performs a structural similarity assessment of two crystals. We use a fractional length tolerance of 0.2, a site tolerance of 0.3 Å, and an angle tolerance of 5 degrees, which are the default values in pymatgen. Both structures are reduced to primitive cells before matching, and are scaled to equivalent volume.
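For reference, this comparison corresponds to pymatgen's StructureMatcher with the stated tolerances; the two structure variables below are placeholders for a generated structure and its test-set counterpart.

from pymatgen.analysis.structure_matcher import StructureMatcher

matcher = StructureMatcher(ltol=0.2, stol=0.3, angle_tol=5,
                           primitive_cell=True, scale=True)
is_match = matcher.fit(generated_structure, reference_structure)   # placeholders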
§.§ DFT Calculations
For the pyrochlore case study, a small number of DFT calculations were performed using VASP, following as closely as possible the settings used in the OQMD project (where most of the pyrochlore structures seen in training were taken from). For example, the recommended PAW potential was used for each element: Zr_sv for zirconium, Hf_pv for hafnium, Lu_3 for lutetium, Pr_3 for praseodymium, Ce_3 for cerium (for the remaining elements, the name of the PAW potential simply matched the element's symbol). The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional <cit.>, in the generalized-gradient approximation, was used in all calculations. Hubbard (PBE+U) corrections were applied for transition metal elements with unfilled d levels (U_eff=3.8 eV for Mn and 3.1 eV for V). Although the cell parameters reported here correspond to the conventional cubic cell with 8 formula units, the DFT calculations were performed using the primitive cell with two formula units, and sampling of the reciprocal space corresponding to that primitive cell was performed using a 7×7×7 grid, as done for all pyrochlore calculations in the OQMD project.
§ RESULTS
§.§ Assessment of Generation Quality
To assess the quality of the model's generated structures, we considered two scenarios, as discussed in section <ref>. The Cell Composition-only scenario involves prompting the model with the first line of the test set CIF file only (which specifies the cell composition), whereas the Cell Composition+Space Group scenario involves prompting the model from the first line of the test set CIF file to the line specifying the space group (inclusive). The fraction of generated structures that are consistent in terms of space group, atom site multiplicity, and have reasonable bond lengths are presented in Table <ref>.
The generated CIF files of the Cell Composition+Space Group scenario were compared to the corresponding CIF files of the test set using a structure matching algorithm (as discussed in section <ref>). The fraction of matching structures is presented in Table <ref>. The Reduced Unseen column represents the results for formulas that were not seen in training with any Z.
We further examined how closely the generated cell parameters resembled the actual cell parameters, for the cases where there was a structural match. We took the first matching structure for samples that had at least one generated structure matching the test set structure, and measured the R^2 and mean absolute error (MAE) for the true versus generated cell lengths, the true versus generated (i.e. printed) volume, and the implied (from cell parameters) versus generated volume. The results are presented in Table <ref> and Figure <ref>.
§.§ Generalizing to Unseen Scenarios
To further examine the model's ability to generalize to unseen scenarios, we prompted the model with various formulas, and examined its output. The results are presented in Figure <ref>.
An example of the model generalizing to a formula that had been seen in training, but with different space groups, is presented in Figure <ref>a. The formula, Ba2MnCr, was in the held-out test set, with the R3̅m space group. That combination of formula and space group had not been seen in training. The model generated a structure matching the one in the test set on the first attempt, when the space group was provided.
The model also demonstrated the ability to generate plausible structures for formulas not seen in training with any Z. An example is the quaternary compound CsCuTePt. This compound was not in the training set, but was in the held-out test set (with Z=4). The model generated a structure matching the one in the test set, in the F4̅3m space group, on the third attempt when the space group was provided. The generated structure is presented in Figure <ref>b.
Finally, in Figure <ref>c is the generated structure of YbMn6Sn6 <cit.>, an example of the model generalizing to structural motifs with atoms not seen in training. This formula was not seen in training for any Z, and was not in the held-out test set. However, ZrMn6Sn6 was seen in training, in the P6/mmm space group. The model generated a structure in the same space group on the first attempt, without the space group being provided. The generated structure matched the ZrMn6Sn6 structure, with Yb substituted for Zr, and with cell parameters and atomic coordinates adjusted accordingly. This demonstrates the model performing a structure prediction by analogy procedure, as commonly used by materials scientists for discovery <cit.>, despite never having been provided with the procedure to do this.
§.§ Generating Known Structural Classes
The CrystaLLM model was trained on an extensive collection of the various structural classes known to inorganic chemistry. We thus investigated its ability to generate unseen members of these classes. We focused on classes of binary, ternary and quaternary compounds.
§.§.§ Rutiles
Rutiles are a class of binary compounds that adopt a tetragonal unit cell, in the P4_2/mnm space group (Z=2), as is seen in TiO2, from which this class of materials adopts its name. The general formula for rutile oxides is MO2, where M is a metallic species in the +4 oxidation state. Rutile fluorides are also known, where the metal is in the +2 oxidation state.
The model's training dataset consisted of essentially all of the rutiles one might expect to be able to find in nature. Therefore, to test the model's ability to generate unseen rutiles, we requested the generation of theoretically possible, but unlikely compounds, such as AuO2. With gold in a highly unlikely +4 oxidation state, AuO2 is not expected to be formed under most conditions. However, the model was able to imagine what the structure of such a compound might be (when the space group was provided). While TiO2 has cell parameters a=4.594Å, c=2.959Å, the generated rutile gold variant has a=4.838Å, c=3.429Å, reflecting the increased volume occupied by the larger gold atoms (Figure <ref>a).
§.§.§ Spinels
The spinels are a group of ternary compounds with the general formula AB2X4, where A is a cation in the +2 oxidation state, B is a cation in the +3 oxidation state, and X, normally a chalcogen, is an anion. Spinels form cubic close-packed structures, with eight tetrahedral, and four octahedral sites, normally in the Fd3̅m space group.
To explore the model's ability to generate unseen spinels, we selected two samarium spinels: Sm2BO4, which was present in the held out test set, and the thiospinel Sm2BS4, which was absent from both the training and test sets. The model was able to generate the expected spinel structures for both compounds when the cell composition and space group were provided (Figures <ref>b and <ref>c). During training, the model encountered a number of different oxy-, thio-, and selenospinels, and this likely contributed to its ability to generate these two compounds.
§.§.§ Elpasolites
The elpasolites are quaternary compounds with the general formula ABC2X6. The A and C species are typically alkali metal cations in the +1 oxidation state, B is usually a transition metal cation in the +3 oxidation state, and X is a halogen anion. The elpasolites are often referred to as “double perovskites”, since their structures are related to perovskites by the doubling of their unit cell dimensions, and the replacement of the M^2+ cation with alternating M^+ and M^3+ cations. Elpasolites crystallize in the Fm3̅m space group, and are the most common quaternary crystal system reported in the Inorganic Crystal Structure Database (ICSD) <cit.>. We wondered if the CrystaLLM model could generate elpasolites not seen during training.
We selected two elpasolites from the held-out test set that were not seen in training: the fluoride KRb2TiF6 and the iodide K2AgMoI6. The model was able to generate the correct elpasolite structure for both when the cell composition and space group were provided (Figures <ref>d and <ref>e).
§.§.§ Pyrochlores
The general formula for the pyrochlores is A2B2O7, where A, a trivalent cation, and B, a tetravalent cation, are either rare-earths or transition metals (other oxidation states, e.g. combining monovalent and pentavalent cations, are also possible, but we focus here on the trivalent/tetravalent pyrochlores). Pyrochlores crystallize in the Fd3̅m space group (Z=8). There are many combinations of A and B that are possible for this structure, by using lanthanide ions, actinide ions, and Y(III) for the A species, and various transition metal ions, as well as Ti(IV), Zr(IV), and Hf(IV) for the B species. We investigated whether CrystaLLM could generate valid pyrochlore structures for any unseen combinations, and whether it could estimate reasonable cell parameters in line with the trends observed for the pyrochlore series, as the cell parameters are expected to be correlated with the ionic radii of the A and B cations.
We created a space of pyrochlores consisting of 144 compounds by producing different combinations of A and B species. Of these, 54 were seen in training. We selected 10 compounds from among the 90 not seen in training, and attempted 3 generations with the model, for each. The cell composition and space group were included in the prompt. All generations resulted in valid pyrochlore structures (Table <ref>).
We subsequently performed DFT relaxation calculations on the first generated structure for each of the 10 compounds. One case, Ce2V2O7, was problematic and was excluded from further analysis. This result isn't very surprising, since both Ce and V are pathological elements in DFT settings. The DFT-derived value of the cell parameter for each of the remaining compounds is plotted against the mean generated value in Figure <ref>. A good agreement exists between the DFT-derived and generated cell lengths, with an R^2 of 0.62 and an MAE of 0.08 Å.
§.§ Problematic Cases
While the model seems capable of generating structures for many different classes of inorganic crystals, it does nonetheless have difficulty in certain cases. All of these cases appear to involve systems that are rare and under-represented in the training dataset. For example, the model was generally unable to generate a structure for Mg7Pt4Ge4, the structure of which was recently reported to exist in the P6_3mc space group (Z=2). <cit.> In this case, there were only 38 examples of 7:4:4 systems in the training dataset, none contained Mg or Pt, and none were in the P6_3mc space group.
The current version of the model also seems to struggle with generating phosphates, sulfates, carbonates, and organic-inorganic hybrid structures. Examples include carbonate hydroxide minerals, such as Co2CO3(OH)2 <cit.> and Cu2CO3(OH)2 (malachite). While present in the dataset, they belong to a group of analogous structures for which there are only a handful of examples. While the model can generate Ca5(PO4)3(OH) (hydroxyapatite), it generally fails to generate a valid structure for Mn4(PO4)3. A common theme is the appearance of multiple oxyanions, which can give rise to more complex arrangements of atoms, for which the model may not have seen enough examples. In contrast, the model can generate compounds of the perovskite class reliably. However, over 5,000 examples of the ABX3 (X=O,F) system in the Pm3̅m space group were seen in training.
Future versions of the model will consider strategies for addressing these occurrences of class imbalance.
§.§ The CrystaLLM.com Web Application
To allow for general and open access to the CrystaLLM model, we make it available through a web application, available at https://crystallm.com/https://crystallm.com. The user of the application is presented with a text field requiring a formula to be entered. Optionally, they may provide the number of formula units (Z) and the desired space group (Figure <ref>). Once they press the button, a request is sent to a GPU server which has the model in memory. The request is converted into a prompt, and the generated contents are returned to the user. If no Z is provided, we scan through Z values of 1, 2, 3, 4, 6, and 8, and return the first valid structure generated by the model. We validate the generated structure using the same procedure described in the Methods section, checking that the generated structure is consistent in terms of the printed space group, and other elements of the CIF file. If no valid structure can be found, the user is presented with an informative error message, including the option to view the generated content. Requests typically take several seconds to process, but can take longer if no Z is provided and the model has trouble finding an appropriate Z value. Generated structures are displayed in a web browser-based 3D structure viewer provided by the Crystal Toolkit framework, upon which the front-end of the web application is built. <cit.>
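A sketch of the server-side loop is given below; model.generate, make_prompt, and parse_and_validate are assumed interfaces standing in for the trained sampler, the prompt builder, and the CIF consistency checks, and the max_new_tokens keyword is illustrative rather than the deployed API.

def generate_for_request(model, formula, make_prompt, parse_and_validate,
                         space_group=None, z=None):
    # If no Z is supplied, scan Z = 1, 2, 3, 4, 6, 8 and return the first valid
    # structure; returning None triggers the informative error message.
    z_values = [z] if z is not None else [1, 2, 3, 4, 6, 8]
    for z_try in z_values:
        cif_text = model.generate(make_prompt(formula, z_try, space_group),
                                  max_new_tokens=3000)
        structure = parse_and_validate(cif_text)
        if structure is not None:
            return structure
    return None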
By making the model easily accessible, we hope to contribute a potentially useful tool to the materials structure research community. We also hope to receive feedback from users that may help improve future versions of the model.
§ DISCUSSION & CONCLUSION
Here, we have shown that LLMs of the CIF format are able to generate inorganic crystal structures for a variety of known classes. Indeed, the model is able to produce valid and sensible arrangements of atoms in 3-dimensional space by generating xyz coordinates digit-by-digit. The model also seems to have captured the relationship between space group symbols and the symmetries inherent in the structures it generates.
We chose to build a language model of the CIF format (instead of a simplified format, for example, which might include a minimal vocabulary) for several reasons. First, the CIF format is not particularly verbose. The model learns the grammatical structure of the format fairly quickly. We can thus avoid having to devise an intermediate format that requires inter-conversion between more common formats, which could also be error prone. Second, we believe that having the model learn to generate the more redundant parts of the CIF format, such as the cell volume, and Z, which are inferable from prior inputs, helps the model to perform better overall.
While the model can generate sensible structures, this does not by itself make it suitable, as is, for CSP. Just as natural language LLMs, such as GPT-3 and -4, are not suitable chatbots without further fine-tuning, the CrystaLLM model will also need to be fine-tuned for more advanced tasks. Fine-tuning involves an additional and separate training step, where the model's parameters are adjusted in the context of a different task. This may also involve altering the model's output layer, such as to make it suitable for a regression task, for example. Models can be fine-tuned using a variety of techniques, but supervised learning and reinforcement learning <cit.> are most common. One might use reinforcement learning, for example, when a task is not clearly defined as a supervised learning problem. When fine-tuning natural language LLMs for chatbot applications, it is common to use Reinforcement Learning from Human Feedback (RLHF). <cit.> With RLHF, the idea is to gather data from human annotators to be used to train a reward model, which scores generated text according to its desirableness. The reward model is then used as part of a reinforcement learning-based tuning of the LLM. In CSP, one would like to produce ground-state structures (for some given physical conditions). One could thus imagine an analogous procedure where CrystaLLM is fine-tuned for the goal of generating low-energy structures, via feedback from an external evaluator of the generated structure's energy. We call this Reinforcement Learning from Thermodynamic Feedback (RLTF). This procedure would also require a reward model, and such a model should ideally provide a timely estimate of a structure's energy. This excludes time-consuming approaches such as DFT. A viable approach could make use of a separate machine learning-based model of formation energy, such as one based on ALIGNN. <cit.> Indeed, neural network potentials have been used to accelerate the prediction of crystal structures. <cit.>
There are several limitations with the current approach. First, none of the structures of the dataset have site-occupancy disorder (fractional site occupancies). Therefore, CrystaLLM cannot generate disordered structures, and may not successfully generate structures for combinations of cell composition and space group that imply a disordered structure. An example is K2NaTiOF5, which is reported to be an elpasolite, in the Fm3̅m space group (Z=4), with F and O species sharing the same crystal site <cit.>. Another limitation is that the CIF files of the dataset were not all created using the same level of theory. The training set is derived from a combination of DFT sources using different settings, functionals, etc., which may make it difficult for the model, in some instances, to learn a consistent relationship between cell composition and detailed structure. <cit.>
Nevertheless, we believe that CrystaLLM will be a useful tool for CSG and materials informatics. We plan to explore fine-tuning the model for physical property prediction tasks, such as the prediction of lattice thermal conductivity, where experimental data is relatively scarce. <cit.> The architecture of the model allows it to be fine-tuned for either composition-based or structure-based prediction tasks. This implies that CrystaLLM may be the basis for a general-purpose materials informatics model, which can be used for generative tasks, and fine-tuned for property prediction tasks that require either composition or structure. If the model is able to transfer what it has learned about the world of atoms to these various predictive problems, it may prove to be a quite flexible tool relevant to many aspects of materials chemistry.
§ NOTE
During development of the CrystaLLM model, we became aware of a pre-print by Flam-Shepherd and Aspuru-Guzik that describes the use of autoregressive large language modelling for molecular and crystal structure generation. <cit.> While the fundamental idea of generating the coordinates of atomic systems token-by-token is the same, our work differs in the following ways: 1, we focus exclusively on the generation of the crystal structures of inorganic materials; 2, we train the model directly on CIF files and CIF syntax, with a vocabulary consisting of CIF tags and space group symbols, in addition to atomic symbols and numeric digits; 3, we use a much larger and custom dataset consisting of millions of CIF files for training the model; 4, our model is symmetry-aware, and supports the generation of structures in specified space groups and for specific numbers of formula units. In summary, we develop a model specifically for the purposes of material structure generation, which produces syntactically valid and physically sensible CIF files as an output.
§ DATA AVAILABILITY
The structures used in the experiments described in this work were obtained from the Materials Project (https://materialsproject.org/https://materialsproject.org/), the OQMD (https://oqmd.org/https://oqmd.org/), and NOMAD (https://nomad-lab.eu/https://nomad-lab.eu/). All structures were made available by those sources under the Creative Commons Attribution 4.0 License. <cit.>
§ ACKNOWLEDGEMENTS
This work was partially supported by computational resource donations from Amazon Web Services through the AWS Activate program, obtained with assistance from the Communitech Hub. For the DFT calculations, we used the Young supercomputer facility via the UK Materials and Molecular Modelling Hub, which is partially funded by EPSRC (EP/T022213/1, EP/W032260/1).
§ AUTHOR CONTRIBUTIONS
L.M.A. conceived the project, performed the experiments, and drafted the manuscript. L.M.A. and R.G.-C. designed the experiments. R.G-C. carried out the DFT calculations for the pyrochlore case study. R.G.-C. and K.T.B. supervised and guided the project. All authors reviewed, edited and approved the manuscript.
|
http://arxiv.org/abs/2307.03882v1 | 20230708024835 | The Busboy Problem: Efficient Tableware Decluttering Using Consolidation and Multi-Object Grasps | [
"Kishore Srinivas",
"Shreya Ganti",
"Rishi Parikh",
"Ayah Ahmad",
"Wisdom Agboh",
"Mehmet Dogar",
"Ken Goldberg"
] | cs.RO | [
"cs.RO"
] |
The Busboy Problem: Efficient Tableware Decluttering
Using Consolidation and Multi-Object Grasps
Kishore Srinivas^1, Shreya Ganti^1, Rishi Parikh^1, Ayah Ahmad^1,
Wisdom Agboh^1,2,
Mehmet Dogar^2, Ken Goldberg^1
^1The AUTOLab at UC Berkeley (automation.berkeley.edu).
^2University of Leeds, UK.
=============================================================================================================================================================================================================
We present the “Busboy Problem": automating an efficient decluttering of cups, bowls, and silverware from a planar surface. As grasping and transporting individual items is highly inefficient, we propose policies to generate grasps for multiple items. We introduce the metric of Objects per Trip (OpT) carried by the robot to the collection bin to analyze the improvement seen as a result of our policies. In physical experiments with singulated items, we find that consolidation and multi-object grasps resulted in a 1.8x improvement in OpT, compared to methods without multi-object grasps. See https://sites.google.com/berkeley.edu/busboyproblem for code and supplemental materials.
§ INTRODUCTION
The post-meal task of clearing a dining table, commonly referred to as “bussing,” requires moving cups, bowls, and utensils that are dispersed across the surface into a bin or tray to be cleaned in the kitchen. This is a common task that occurs after any event involving food service and dish collection, from daily household meals to casual picnics to formal cocktail parties and dinners. Automating this tedious and repetitive task could reduce fatigue and busy work for the skilled waiters who typically perform it.
We define the “Busboy Problem" as the efficient transfer of cups, bowls, and utensils (collectively called tableware) from the table into a designated collection bin while minimizing the time required for completion. This is an interesting problem for automation because the tableware are of varying shape, requiring low-level planning to execute grasps and high-level planning to consolidate tableware for efficient transport. Even small inaccuracies can lead to toppling or dropping delicate and expensive tableware, so the system must be extremely reliable.
Previous work in multi-object grasping, object manipulation, and grasp candidate generation highlights the efficiency of grasping pre-stacked objects as well as objects manually oriented for multi-object grasps <cit.>. Whereas these works explore situations where objects are already positioned for such grasps, our work investigates methods of stacking and clustering objects into these favorable positions for multi-object grasps.
In this paper, we present a framework and algorithms for the Busboy Problem. We consider a scenario where multiple items are placed on a work surface (see Fig. <ref>), under an RGBD camera. We use the concept of multi-object grasping, which enables the robot to move multiple items simultaneously, thus reducing the number of pick-and-place actions needed.
This paper makes the following contributions:
* Formulation of the Busboy Problem.
* Action primitives for rearranging and grasping cups, bowls, and utensils.
* Two algorithms that leverage consolidation and multi-object grasps.
* Experimental results indicating a 1.8x improvement in OpT.
§ RELATED WORK
§.§ Multi Object Grasping
Prior work on multi-object grasping includes different grasping techniques to facilitate multi-object grasps <cit.>, detecting the number of objects in a grasp <cit.>, decluttering surfaces <cit.>, and multi-object grasping to place objects in virtual reality <cit.>. Yamada et al. considered the simplified multi-object grasping problem, where the objects are already in a configuration where they can be grasped at once <cit.>. Agboh et al. <cit.>
showed that friction can increase picks per hour for convex polygonal objects.
Some prior work has focused on the design of grippers for multi-object grasping. Jiang et al. <cit.> proposed a vacuum gripper with multiple suction cups, while Nguyen et al. <cit.> proposed a soft gripper based on elastic wires for multi-object grasping.
Object stacking <cit.> has the potential to improve the number of objects per trip. We take inspiration from these works to include a stacking primitive.
§.§ Pulling
Prior work by Berretty et al. has examined the use of inside-out pulling to orient convex polygonal parts <cit.>. We utilize a similar technique for circular cups and bowls. Furthermore, a planner for ensuring convergence to the final pose of pulling trajectories is proposed by Huang et al. <cit.>, where they examine the motion of planar objects undergoing quasi-static movement.
§.§ Grasp Candidates
Satish et al. discuss using a synthetic data sampling distribution that combines grasps sampled from the policy action set with guiding samples from a robust grasping supervisor to construct grasp candidates <cit.>.
Additionally, Mahler et al. <cit.> discuss the use of energy-bounded caging to evaluate grasp candidates. They efficiently compute candidate rigid configurations of obstacles that form energy-bounded cages of an object, where the generated push-grasps are robust to perturbations.
Mousavian et al. describe the process of using a variational autoencoder to generate grasps by mapping the partial point cloud of an observed object to a diverse set of grasps for the object <cit.>.
Because of the relative simplicity of our setup, we found that an analytical approach to constructing grasp candidates is sufficient. In the case of bowls and cups, we sample a random point uniformly on the rim and then orient the gripper perpendicular to the tangent of the circle at that point. In the case of utensils, we identify the axis of the utensil, and pick the highest depth point along that line, with the gripper perpendicular to the axis.
§.§ Object Manipulation in Cluttered Environments
Efficiently finding object manipulation plans in high-dimensional environments with a large number of objects is a challenging problem. Hasan et al. <cit.> addressed this problem by identifying high-level manipulation plans in humans, and transferring these skills to robot planners. Other work by Tirumala et al. <cit.> used tactile sensing to singulate layers of cloth from a stack.
Different from these works, our goal in the cluttered environment is to bring objects together, or stack them, to enable multi-object grasps.
§ THE BUSBOY PROBLEM
The Busboy Problem involves the task of decluttering a workspace containing cups, bowls, and utensils, with the objective of minimizing both the time and number of trips required for completion.
§.§ Assumptions
In the initial configuration, a planar workspace is defined on a Cartesian grid (x, y) and has n_c cups, n_b bowls, and n_u utensils scattered across its surface.
All items are assumed to be face up, visible by camera, and within a workspace defined by the constraints of the robot arm. These items may be initially stacked on top of one another or resting individually on the surface, and we assume that the initial state meets the following criteria:
* All items are of known dimensions, and cups and bowls are circular when viewed from top-down. Cups have radius 4.5cm, bowls have radius 8.5cm, and utensils are at most 17cm × 1.8cm.
* Cups and bowls are upright, and utensils are laid flat on the surface.
* Any stacks that exist are stable, such that r_0 ≥ r_1 ≥ ... ≥ r_s, where r_0 represents the radius of the vertically lowest item, and r_s the highest one.
* Initially, no two items are touching (items are singulated).
§.§ State
We use cups, bowls, and utensils (forks and spoons) as the tableware set - collectively called “tableware" - in this work.
Each cup and bowl has a position [x, y], and each utensil has a position [x, y] and orientation θ.
§ DECLUTTERING TABLEWARE
§.§ Action primitives
We propose to use a combination of manipulation primitives to solve the Busboy Problem. We specifically propose to use single object grasps, multi-object grasps, pull-grasps, and stack-grasps to efficiently clear a work surface of items (Figure <ref>).
§.§.§ Grasp
We use both single and multi-object grasps in this work. Let u⃗_G be the grasp used to pick up objects, whether single or multiple. We represent this action as:
u⃗_G = [p⃗_G, θ_G]
where p⃗_G = [x_G, y_G, z_G] is the center point of the grasp, and θ_G is the grasp orientation.
§.§.§ Pull-Grasp
A pull-grasp action involves two steps: a pull of one object to another, then a multi-object grasp of both objects. We represent a pull action as:
u⃗_P = [p⃗_S, θ_S, p⃗_E, θ_E]
where p⃗_S = [x_S, y_S, z_S] is the pull start point, θ_S is the gripper orientation at the start point, p⃗_E = [x_E, y_E, z_E] is the pull end point, and θ_E is the gripper orientation at p⃗_E. For circular objects such as bowls and cups, the gripper pulls outwards from the center of the dish using an internal pull, and for utensils, the gripper cages the utensil around its center point while moving it (Figure <ref>). Then, we denote a pull-grasp action as:
u⃗_⃗P⃗G⃗ = [u⃗_P, u⃗_G]
§.§.§ Stack-Grasp
A stack-grasp action involves two steps: a stack of one object onto another, then a multi-object grasp of both objects. We represent a stack action as:
u⃗_⃗S⃗ = [u⃗_G_i, p⃗_L, θ_L]
where u⃗_G_i is a grasp on the lifted object, and p⃗_L = [x_L, y_L, z_L] is the placement point on the stationary object, and θ_L is the gripper orientation at p⃗_L. Then, we denote a stack-grasp action as:
u⃗_⃗S⃗G⃗ = [u⃗_S, u⃗_G]
§.§ Determining allowable actions
§.§.§ Grasp
A single-object grasp is always allowable. We can safely assume this since any dish or stack of items is already top-down graspable. When no other actions are allowed, the single-object grasp action is used as a default to clear the workspace.
A multi-object grasp is allowable when the grasp heights of both items are similar (within an adjustable threshold value) and if the lateral distance between the grasp points of both items is less than the width of the gripper. If the grasp heights of the items are significantly different, the gripper will have to either collide with the taller dish while attempting to grasp the shorter dish or grasp only the taller dish to avoid the collision, and either case results in a failure of grasping multiple items at once. Similarly, if the items are separated by more than the maximum inside width of the grippers, an attempt to grasp both at the same time will fail.
§.§.§ Pull
A pull of two items is allowable if a multi-object grasp can be executed on those items and if no other objects lie between the two items on the workspace. We disallow pull actions of items for which a multi-object grasp cannot be executed, since the pull becomes a wasted action. We also disallow pull actions of items with other objects between them to ensure that the intermediate objects are not displaced in a non-deterministic manner.
§.§.§ Stack
A stack of dish d_a with radius r_a onto dish d_b with radius r_b is allowable if r_a ≤ r_b. This means that a cup can be stacked onto a bowl, but not vice versa, and that a utensil can be stacked onto any other dish, including another utensil. This is to ensure that the stack stability assumption present at the initial state remains valid after each action.
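The checks above can be summarized in a few lines; the sketch below works in centimetres, and the height threshold and clearance are assumed values rather than the tuned thresholds used on the robot.

import numpy as np

GRIPPER_WIDTH = 8.5   # max gripper opening w, in cm
HEIGHT_TOL = 2.0      # assumed threshold for "similar grasp heights", in cm

def multi_grasp_allowed(grasp_a, grasp_b):
    # grasp_a, grasp_b: (x, y, z) candidate grasp points on the two items, in cm.
    similar_height = abs(grasp_a[2] - grasp_b[2]) <= HEIGHT_TOL
    close_enough = np.linalg.norm(
        np.asarray(grasp_a[:2]) - np.asarray(grasp_b[:2])) <= GRIPPER_WIDTH
    return similar_height and close_enough

def pull_allowed(item_a, item_b, others, clearance=5.0):
    # Allowed if a multi-object grasp would follow and no other item lies on the
    # segment between the two items (within `clearance`, an assumed value).
    a, b = np.asarray(item_a[:2], float), np.asarray(item_b[:2], float)
    for o in others:
        p = np.asarray(o[:2], float)
        t = np.clip(np.dot(p - a, b - a) / max(np.dot(b - a, b - a), 1e-9), 0.0, 1.0)
        if np.linalg.norm(p - (a + t * (b - a))) < clearance:
            return False
    return True

def stack_allowed(radius_a, radius_b):
    # Dish a may be stacked onto dish b only if it is not wider than b.
    return radius_a <= radius_b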
§.§ Robustness of action primitives
We design the three action primitives below so that they execute the above actions robustly, tolerating small errors in perception and placement.
§.§.§ Grasp
When executing a grasp at location x, y, z, the robot will open its grippers centered around x, y, and then move down to the appropriate height, as measured by the depth sensor, before closing the gripper to grasp the object. The affordances granted by max gripper opening, gripper height, and gripper width mean that an off-center grasp point x, y, z will still successfully complete the single-object or multi-object grasp of the object (Figure <ref>).
§.§.§ Pull
For cups and bowls, the gripper pulls outwards from the center of the dish, contacting the inner surface of the dish (Figure <ref>). This action is successful as both r_b and r_c are larger than the width of the gripper when closed. If the gripper is anywhere within the opening of the object, it will be able to move the target object to a specified location. For utensils, the gripper cages the utensil around its center point while moving it, preventing unwanted rotation and moving the utensil to its specified location.
§.§.§ Stack
For bowls and cups, the top lip radius is larger than the radius of the base, giving the sides a taper. Because a dish d_a is only stacked onto another dish d_b of equal or larger size, the base radius of d_a is guaranteed to be smaller than the top radius of d_b, allowing the tapered sides of the items to funnel d_a into place even if there is slight error in the placement of the dish. Placing a utensil onto a bowl is extremely robust to error because of the relative radii of the items, and placing a utensil onto another utensil is robust due to the curvature of the utensils themselves which slide a misplaced utensil into place, making them naturally conducive to stacking.
§.§ Policies
§.§.§ Pull Policy
The pull policy combines Pull-Grasp and Grasp actions. From the initial scene, it checks if any multi-object grasps can be executed right away, and executes those first. Then, it runs the Pull-Grasp action for all remaining items, pulling together items that don't cause collisions and executing multi-object grasps to clear them from the workspace. If any items remain after all possible multi-object grasps are executed, those items are cleared with single-object Grasp actions. After each action, a new image of the workspace is taken and the state representation is updated to reflect the new state of the workspace, including any tableware that has been moved or left behind by the previous action. This policy is formalized in Algorithm <ref>.
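A compact sketch of this policy is given below; the `robot` object and its helpers are assumed wrappers around the primitives above and the camera update, not the actual implementation.

def pull_policy(robot):
    # `robot` is an assumed interface: perceive(), pull(a, b), grasp(*items)
    # (grasp-and-transport to the bin), plus scene queries multi_graspable_pairs(),
    # pullable_pairs() and items().
    scene = robot.perceive()
    while scene.multi_graspable_pairs():        # clear already multi-graspable pairs
        robot.grasp(*scene.multi_graspable_pairs()[0])
        scene = robot.perceive()
    while scene.pullable_pairs():               # consolidate, then grasp both at once
        a, b = scene.pullable_pairs()[0]
        robot.pull(a, b)
        robot.grasp(a, b)
        scene = robot.perceive()
    for item in scene.items():                  # leftovers go one item per trip
        robot.grasp(item)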
§.§.§ Stack Policy
The stack policy combines Stack-Grasp and Grasp actions. It repeatedly executes the Stack-Grasp action to clear the workspace, and if there are any remaining items they are cleared with single-object Grasp actions. It prioritizes stacking utensils onto bowls and transporting them to the bin, and then tries to stack the remaining dishes. Stacking utensils first is an efficient way to improve the number of OpT for this policy. The policy is formalized in Algorithm <ref>.
After utensils are cleared, the stacks created by this policy are limited to be a combination of at most 2 existing stacks (i.e. once a Stack action is executed, the next action is necessarily a Grasp on the resulting stack, not another Stack action onto that stack). This is because when 4 or more bowls or cups are stacked, the height difference between the lip of the top dish and the lip of the bottom dish exceeds the height of the gripper jaws, causing many attempted grasps to fail. By limiting stacks to at most 2 existing stacks, we significantly reduce the chances of creating a stack with more than 3 dishes.
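The corresponding sketch for the stack policy follows, under the same assumed `robot` interface; stackable_pairs() is taken to respect both the r_a ≤ r_b rule and the limit of combining at most two existing stacks.

def stack_policy(robot):
    # Same assumed interface as above, extended with stack(a, b) and the queries
    # utensil_bowl_pairs() and stackable_pairs().
    scene = robot.perceive()
    while scene.utensil_bowl_pairs():           # utensils onto bowls first
        utensil, bowl = scene.utensil_bowl_pairs()[0]
        robot.stack(utensil, bowl)
        robot.grasp(bowl)                       # the bowl carries the utensil with it
        scene = robot.perceive()
    while scene.stackable_pairs():
        a, b = scene.stackable_pairs()[0]
        robot.stack(a, b)
        robot.grasp(b)
        scene = robot.perceive()
    for item in scene.items():                  # leftovers go one item per trip
        robot.grasp(item)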
§ EXPERIMENTS AND RESULTS
We evaluate through physical experiments the robustness of the pulling action primitive and then evaluate the pull and stack policies on a real-world table clearing task.
§.§ Experimental Setup
We use a UR5 robot arm with a Robotiq 2F-85 gripper and Intel RealSense 455D RGBD camera mounted 83cm above the workspace. The workspace is a flat 78cm x 61cm surface with 4 cups, 4 bowls, and 4 utensils, n_b = n_c = n_u = 4. In our experimental setup, we calculated a max gripper opening of w = 8.5cm, gripper height of h = 4.5cm, bowl radius r_b = 8.5cm, cup radius r_c = 4.5cm and utensil width r_u = 1.8cm.
We identify and locate tableware on the workspace with a vision pipeline. Since the surface of the workspace is white, we use darker colored tableware to be easily visible. To locate cups and bowls, we first use edge detection, contour forming, and HoughCircles to identify circular shapes on the workspace, then filter these circles based on the known image radius of cups and bowls. We cluster these circles by their centers and remove circles that overlap beyond a specified threshold, allowing an unambiguous detection of cups and bowls. To locate utensils, we use edge detection and contour forming, and then filter out the contours that are too “square", as determined by the aspect ratio of the identified contour. We draw an imaginary line through the lengthwise center of bounding rectangle of the contour, and sample depth values along that line; we use the highest depth point as the grasp point of the utensil to allow the gripper maximum clearance with the surface.
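A minimal OpenCV sketch of the circle-detection stage is shown below; the pixel radii and Hough parameters are assumptions tied to the camera calibration, and the utensil branch (contours filtered by aspect ratio, with the grasp point read from the depth image) is omitted.

import cv2

def detect_dishes(rgb, r_cup_px, r_bowl_px, tol=0.15):
    # r_cup_px and r_bowl_px are the expected pixel radii of cups and bowls from
    # the camera calibration; their values and the Hough parameters are assumptions.
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30,
                               minRadius=int(0.8 * r_cup_px),
                               maxRadius=int(1.2 * r_bowl_px))
    cups, bowls = [], []
    if circles is not None:
        for x, y, r in circles[0]:
            if abs(r - r_cup_px) < tol * r_cup_px:
                cups.append((float(x), float(y)))
            elif abs(r - r_bowl_px) < tol * r_bowl_px:
                bowls.append((float(x), float(y)))
    return cups, bowls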
We define three tiers to evaluate the performance of our algorithm on scenes of increasing complexity.
* Tier 0: scenes contain 6 items, either all cups, all bowls, or all utensils, with no stacks in the initial state.
* Tier 1: scenes contain 4 items each of cups, bowls, and utensils, and have no stacks in the initial state.
* Tier 2: scenes contain 4 items each of cups, bowls, and utensils, but we allow stacks of at most 3 objects in the initial state.
For Tier 2, we limit initial stacks to at most 3 objects because of the dimensions of the gripper, as mentioned in Section <ref>. The number of objects in a stack, and not the actual dimensions of individual dishes, is the main limiting factor for the grasp, because we grasp dishes from the rim. The dishes could actually be much larger and still be graspable as long as the walls are thin enough to allow the gripper to slide over them, and the weight of the dish does not exceed the payload limitations of the gripper itself. We limit ourselves to a small set of known kitchenware objects for consistency in our experiments.
We evaluate the performance of the pull and stack policies against a baseline single-item policy, referred to as “Random" in Table <ref>. This policy picks a dish at random, and if the dish is a cup or bowl, it uniformly samples a point on the rim and grasps the dish at that point. If the dish is a utensil, it identifies the grasp point of the utensil as described above and grasps the utensil at that point. This policy is stack-agnostic, so even in Tier 2 when there are stacks present in the initial state, it treats each item in the stack as its own object, and clears the stack by transporting one item at a time.
§.§ Scene Generation
In order to evaluate our policies, we generate multiple scenes at each tier, and every policy is run once on each scene. To generate each scene, we use the dimensions of the workspace (78cm × 61cm), and r_b, r_c, r_u for the dimensions of the objects. We randomly sample x, y locations within the scene for each object. If an object intersects with another object, we create a stack of the two objects if the maximum number of intersections has not been exceeded, and resample a position for the object if it has. Tiers 0 and 1 allow no such intersections, whereas Tier 2 allows 4 intersections. For each trial we manually reset the scene to maintain consistency.
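A simplified version of the scene sampler, treating every item as a disc and ignoring stack-stability ordering, is sketched below; the intersection budget corresponds to the tier definitions (0 for Tiers 0 and 1, 4 for Tier 2).

import random

RADII = {"bowl": 8.5, "cup": 4.5, "utensil": 1.8}   # cm; items treated as discs
WORKSPACE = (78.0, 61.0)                            # cm

def generate_scene(counts, max_intersections=0, seed=None):
    # counts: e.g. {"bowl": 4, "cup": 4, "utensil": 4}.  An overlap either becomes
    # a stack (while the intersection budget lasts) or the position is re-sampled.
    rng = random.Random(seed)
    placed, intersections = [], 0
    for kind, n in counts.items():
        for _ in range(n):
            while True:
                x = rng.uniform(RADII[kind], WORKSPACE[0] - RADII[kind])
                y = rng.uniform(RADII[kind], WORKSPACE[1] - RADII[kind])
                overlaps = [p for p in placed
                            if (p[1] - x) ** 2 + (p[2] - y) ** 2
                            < (RADII[p[0]] + RADII[kind]) ** 2]
                if not overlaps:
                    placed.append((kind, x, y))
                    break
                if intersections < max_intersections:
                    intersections += 1
                    placed.append((kind, x, y))     # forms a stack with `overlaps`
                    break
    return placed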
§.§ Evaluation
We evaluated on 9 scenes at Tier 0 (3 scenes per type of dish), 3 scenes at Tier 1, and 3 scenes at Tier 2. A trial is one execution of one policy on one scene, so we have a total of (9+3+3)*3 = 45 trials. For each trial, we record the time in seconds to clear the table, the OpT, and the number of failures. A failure occurs when the robot is unable to move all items to the collection bin, either because of a perception failure that leaves items behind on the workspace or a policy failure that drops a dish off the workspace. We report our results in Table <ref>.
To evaluate the performance of our policies in a more realistic scenario, we present the theoretical improvement in execution time when the bin is placed further away from the workspace, as might be seen in a home or professional kitchen. Given the physical limitation of the UR5 arm length, we simulated the increased distance by adding time delays of 3 and 5 seconds in both directions of motion (to and from the collection bin). We find that moving the bin further away causes the stack and pull policies to perform significantly better than the baseline policy because motions to and from the bin are penalized, making policies with fewer total actions perform better. We report these results in Table III in the appendix of the project website.
§ DISCUSSION
Results show that using consolidation and multi-object grasps allows clearing the workspace efficiently, with the pull policy transporting at least 1.6x as many objects per trip, and the stack policy at least 1.8x. A discussion of resulting execution time improvement is in the appendix of the project website.
§ LIMITATIONS AND FUTURE WORK
An overhead RGBD camera provides only a clear top-down view, which limits state estimation and can lead to failures. We assume circular cups and bowls, which makes grasp computation straightforward; for more general dishes, advanced grasp generation methods will be needed. In future work, we will loosen the assumption of starting with singulated objects. We also hope to combine the pull and stack policies into a higher-level policy that can efficiently clear the workspace.
§ ACKNOWLEDGMENTS
This research was performed at the AUTOLAB at UC Berkeley in
affiliation with the Berkeley AI Research (BAIR) Lab,
and the CITRIS “People and Robots" (CPAR) Initiative. The authors were supported in part by donations from Toyota Research
Institute, Bosch, Google, Siemens, and Autodesk and by equipment
grants from PhotoNeo, NVidia, and Intuitive Surgical. Mehmet Dogar was partially supported by an EPSRC Fellowship (EP/V052659).
|
http://arxiv.org/abs/2307.04735v1 | 20230710175007 | On tricyclic graphs with maximum edge Mostar index | [
"Fazal Hayat",
"Shou-jun Xu",
"Bo Zhou"
] | math.CO | [
"math.CO"
] |
On tricyclic graphs with maximum edge Mostar index
Fazal Hayat^a, Shou-Jun Xu^a[Corresponding author
E-mail addresses: [email protected] (F. Hayat), [email protected] (S. J. Xu), [email protected] (B. Zhou)], Bo Zhou^b
^aSchool of Mathematics and Statistics, Gansu Center for Applied Mathematics,
Lanzhou University, Lanzhou 730000, P.R. China
^bSchool of Mathematical Sciences, South China Normal University,
Guangzhou 510631, P.R. China
==========================================================================================================================================================================================================================================================================================================================================================================================================================
For a given connected graph G, the edge Mostar index Mo_e(G) is defined as Mo_e(G)=∑_e=uv ∈ E(G)|m_u(e|G) - m_v(e|G)|, where m_u(e|G) and m_v(e|G) are, respectively, the number of edges of G lying closer to vertex u than to vertex v and the number of edges of G lying closer to vertex v than to vertex u. In this paper, we determine the sharp upper bound for the edge Mostar index on tricyclic graphs with a fixed number of edges, and the graphs that attain the bound are completely characterized.
Keywords: Mostar index, edge Mostar index, tricyclic graph, distance-balanced graph.
2010 Mathematics Subject Classification: 05C12; 05C35
§ INTRODUCTION
All graphs considered in this paper are simple, connected and undirected. Let G be a graph on n vertices with vertex set V(G) and edge set E(G). For a set X, we denote its cardinality by |X|. Thus, the order and size of G are the cardinalities of V(G) and E(G), respectively. For v ∈ V(G), we denote by N_G(v) the set of all neighbors of v in G. The degree of v ∈ V(G), denoted by d_G(v), is the cardinality of N_G(v). A vertex of degree one is called a pendent vertex and an edge incident to a pendent vertex is called a pendent edge. The distance between u and v in G is the least length of a path connecting u and v, denoted by d(u,v). A graph G with n vertices is a tricyclic graph if |E(G)|=n+2. As usual, by S_n, P_n and C_n we denote the star, path and cycle on n vertices, respectively.
Let e=uv ∈ E(G), and define two subsets of V(G) as follows:
N_u(e|G)= {x∈ V(G): d_G(u,x)< d_G(v,x)},
N_v(e|G)= {x∈ V(G): d_G(v,x)< d_G(u,x)}.
Let n_i(e|G)= |N_i(e|G)|, for i = u, v.
A graph G is distance-balanced if n_u(e|G) = n_v(e|G) for each e=uv ∈ E(G). One may refer to <cit.>, and the references cited therein, for studies on distance-balanced graphs. Since there exist many graphs which are not distance-balanced, measuring how far a graph is from being distance-balanced is a natural problem.
Such a measuring invariant was proposed by Doslić et al. <cit.> and named the Mostar index. For a graph G, the Mostar index of G is defined as
Mo(G)=∑_e=uv ∈ E(G)|n_u(e|G) - n_v(e|G)|.
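As a quick illustration (a worked example of ours, not from the paper): for the path P_3 with vertices v_1, v_2, v_3, the edge v_1v_2 has n_v_1(e|P_3) = 1 (only v_1 lies closer to v_1) and n_v_2(e|P_3) = 2 (both v_2 and v_3 lie closer to v_2), and symmetrically for the edge v_2v_3, so Mo(P_3) = |1-2| + |2-1| = 2; cycles, being distance-balanced, have Mostar index 0.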
Doslić et al. <cit.> studied the Mostar index of trees and unicyclic graphs, and gave a cut method for computing the Mostar index of
benzenoid systems. Hayat and Zhou <cit.> determined all the n-vertex cacti with the largest Mostar index, and obtained a sharp upper bound for the Mostar index among cacti of order n with k cycles, and characterized the extremal cacti. Hayat and Zhou <cit.> identified those trees with minimum and/or maximum Mostar index in the families of trees of order n with fixed parameters like maximum degree, diameter and the number of pendent vertices.
Deng and Li <cit.> determined which trees with a given degree sequence have the maximum Mostar index. In <cit.>, Deng and Li studied the extremal problem for the Mostar index among trees with a given segment sequence.
Ali and Doslić <cit.> presented further modifications and generalizations of the Mostar index.
For more studies about the Mostar index see <cit.>.
For a vertex x and an edge e = uv of a graph G, the distance between x and e, denoted by d_G (x, e), is defined as d_G (x, e)= min{ d_G(x,u), d_G(x,v)}. For e=uv ∈ E(G), let M_u(e|G) and M_v(e|G) denote, respectively, the set of edges of G lying closer to u than to v and the set of edges of G lying closer to v than to u.
Let m_u(e|G) and m_v(e|G) denote the sizes of M_u(e|G) and M_v(e|G), respectively. Arockiaraj et al. <cit.> introduced the edge Mostar index as a quantitative refinement of distance non-balancedness; it can also measure the peripherality of every edge and combine the contributions of all edges into a global measure of peripherality for a given chemical graph. The edge Mostar index of G is defined as
Mo_e(G)=∑_e=uv ∈ E(G)ψ_G(uv),
where ψ_G(uv)=|m_u(e|G) - m_v(e|G)|, we use ψ(uv)=|m_u(e) - m_v(e)| for short, if there is no ambiguity.
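To make the definitions concrete, the following Python sketch (ours; the helper names and the example graphs are not from the paper) computes the edge Mostar index of a small graph given as an adjacency list, following the definition above.

from collections import deque

def bfs_dist(adj, src):
    # Breadth-first search distances from src in an unweighted graph.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def edge_mostar(adj):
    # Mo_e(G) = sum over edges uv of |m_u(e|G) - m_v(e|G)|, where m_u counts
    # the edges of G lying strictly closer to u than to v.
    edges = {frozenset((x, y)) for x in adj for y in adj[x]}
    total = 0
    for e in edges:
        u, v = tuple(e)
        du, dv = bfs_dist(adj, u), bfs_dist(adj, v)
        m_u = m_v = 0
        for f in edges:
            a, b = tuple(f)
            to_u = min(du[a], du[b])   # d_G(u, f)
            to_v = min(dv[a], dv[b])   # d_G(v, f)
            if to_u < to_v:
                m_u += 1
            elif to_v < to_u:
                m_v += 1
        total += abs(m_u - m_v)
    return total

# Star S_4 with center 0: every edge contributes |2 - 0|, so Mo_e = 6;
# the cycle C_4 is distance-balanced, so Mo_e = 0.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(edge_mostar(star), edge_mostar(cycle))  # 6 0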
Imran et al. <cit.> studied the edge Mostar index of chemical structures and nanostructures using graph operations.
Liu et al. <cit.> determined the extremal values of the edge Mostar index among trees and unicyclic graphs and determined the maximum and the second maximum value of the edge Mostar index among cactus graphs with a given number of vertices. Ghalavand et al. <cit.> determined the minimum values of the edge Mostar index among bicyclic graphs with fixed size, and characterized the corresponding extremal graphs. The edge Mostar index for several classes of cycle-containing graphs was computed in <cit.>. Recently, Hayat et al. <cit.> determined the sharp upper bound for the edge Mostar index on bicyclic graphs with a fixed number of edges, and the graphs that achieve the bound are completely characterized.
In this paper, we determine the sharp upper bound for the edge Mostar index on tricyclic graphs with a fixed number of edges, and the graphs that achieve the bound are completely characterized.
Let G be a tricyclic graph of size m. Then
Mo_e(G) ≤{[ 12, if m=7, and equality holds iff G ≅ F_1, H_1,; 23, if m=8, and equality holds iff G ≅ A_3, F_1, H_1,; 36, if m=9, and equality holds iff G ≅ F_1, H_1, A_i (i= 2,...,6),; 53, if m=10, and equality holds iff G ≅ A_2,; 72, if m=11, and equality holds iff G ≅ A_1, A_2,; m^2-m-36, if m ≥ 12, and equality holds iff G ≅ A_0, ].
(where A_i (i= 0,1,...,6) are depicted in Fig. <ref>, and F_1, H_1 are depicted in Fig. <ref> and Fig. <ref>, respectively).
In section 2, we give some definitions and preliminary results. Theorem <ref> is proved in section 3.
§ PRELIMINARIES
Let G_1 · G_2 be the graph obtained from G_1 and G_2 by identifying one vertex of the two graphs. Set u as the identified vertex of G_1 and G_2. If G_1 contains a cycle and u belongs to some cycle, and G_2 is a tree, then we call G_2 a pendent tree in G_1 · G_2 associated with u. For each e ∈ E(G_1), every path from e to some edges of G_2 passes through u. Therefore, the contribution of G_2 to ∑_e∈ E(G_1)ψ(e) totally depends on the size of G_2, that is, changing the structure of G_2 cannot alter the value ∑_e∈ E(G_1)ψ(e).
If a graph H is obtained from G by repeatedly removing all pendent vertices (if any), then we say H is the brace of G. That is to say, H does not contain any pendent vertex. Obviously, for all connected tricyclic graphs, their braces are shown in Fig. <ref>. Let 𝒢_m^i be the collection of graphs that include α_i as their brace for i=1, … , 15. For convenience, let 𝒜 = ∪_i=5^15𝒢_m^i.
<cit.> Let G be a bicyclic graph of size m. Then
Mo_e(G) ≤{[ 4, if m=5, and equality holds iff G ≅ B_3, B_4,; m^2-3m-6, if 6 ≤ m ≤ 8, and equality holds iff G ≅ B_1, B_3,; 48, if m=9, and equality holds iff G ≅ B_0, B_1, B_2, B_3, B_4,; m^2-m-24, if m ≥ 10, and equality holds iff G ≅ B_0, ].
(where B_0, B_1, B_2, B_3, B_4 are depicted in Fig. <ref>).
Let S_m,r≅ S_m-r· C_r, where the common vertex of S_m-r and C_r is the center of S_m-r.
<cit.> Let G_1 be a connected graph of size m_1 and G_2 be a unicyclic graph of size m_2. Then
Mo_e(G_1 · G_2 ) ≤ Mo_e(G_1 · S_m_2, 3 ) for m_1 + m_2 ≤ 8,
Mo_e(G_1 · S_m_2, 3 )= Mo_e(G_1 · S_m_2, 4 ) for m_1 + m_2 = 9,
Mo_e(G_1 · S_m_2, 4 ) for m_1 + m_2 ≥ 10.
By means of Theorem <ref> and the above result, the following conclusions are obtained.
Let G=G_1 · G_2 be a tricyclic graph, where G_1 is a bicyclic graph of size m_1 and G_2 is a unicyclic graph of size m_2. Then
Mo_e(G) ≤ Mo_e(B_3 · S_m_2, 3 ) for m_1 + m_2= 8,
Mo_e(G) ≤ Mo_e(B_2 · S_m_2, 3 )= Mo_e(B_3 · S_m_2, 3 ) = Mo_e(B_3 · S_m_2, 4)= Mo_e(B_4 · S_m_2, 3 ) = Mo_e(B_4 · S_m_2, 4 ) for m_1 + m_2= 9,
Mo_e(G) ≤ Mo_e(B_0 · S_m_2, 4 ) for m_1 + m_2 ≥ 12.
§ PROOF OF THEOREM <REF>
Let G ∈𝒜 of size m. Then
Mo_e(G) ≤{[ 23, if m=8, and equality holds iff G ≅ A_3,; 36, if m=9, and equality holds iff G ≅ A_i (i= 2,...,6),; 53, if m=10, and equality holds iff G ≅ A_2,; 72, if m=11, and equality holds iff G ≅ A_1, A_2,; m^2-m-36, if m ≥ 12, and equality holds iff G ≅ A_0, ].
Suppose G ∈𝒜, then G contains α_i (i=5,6,...,15) as its brace. Let G_1 be a bicyclic graph of size m_1 and G_2 be a unicyclic graph of size m_2 such that G = G_1 · G_2. Then, in view of Lemmas <ref> and <ref>, if m= 8, we get
Mo_e(G ) = Mo_e(G_1 · G_2 ) ≤ Mo_e(G_1 · S_m_2, 3 )
≤ Mo_e(B_3 · S_m_2, 3 )= Mo_e(A_3);
if m= 9, we get
Mo_e(G ) = Mo_e(G_1 · G_2 ) ≤ Mo_e(B_2 · S_m_2, 3 )= Mo_e(B_3 · S_m_2, 3 )
= Mo_e(B_3 · S_m_2, 4)= Mo_e(B_4 · S_m_2, 3 ) = Mo_e(B_4 · S_m_2, 4 )
= Mo_e(A_i) (i= 2,...,6);
if m ≥ 12, we have
Mo_e(G ) = Mo_e(G_1 · G_2 ) ≤ Mo_e(G_1 · S_m_2, 4 )
≤ Mo_e(B_0 · S_m_2, 4 )= Mo_e(A_0).
By simple calculation, it is easy to check that,
Mo_e(A_0)= m^2-m-36,
Mo_e(A_1)= Mo_e(A_2)=m^2-2m-27,
Mo_e(A_3)= Mo_e(A_4)=m^2-4m-9,
Mo_e(A_5)= Mo_e(A_6)=Mo_e(A_7)=m^2-3m-18.
Clearly, Mo_e(A_0)= m^2-m-36 > Mo_e(A_i) (i=3,...,7), for m ≥ 10, but A_0 contains at least 12 edges. Therefore, if m=11, then Mo_e(A_1)= Mo_e(A_2) > Mo_e(A_i) (i=3,...,7); if m=10, then Mo_e(A_2) > Mo_e(A_i) (i=3,...,7).
Let G ∈𝒢_m^1 with brace α_1 (1,1,1,2,1,1). Then
Mo_e(G) ≤{[ m^2-3m-24, if 7 ≤ m ≤ 10, and equality holds iff G ≅ D_1,; 64, if m=11, and equality holds iff G ≅ D_1, D_2,; m^2-2m-35, if m ≥ 12, and equality holds iff G ≅ D_2. ].
Let v_i (i=1,...,5) be the vertices in α_1 of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,5). Suppose that a_1+a_3 ≥ a_2 + a_4 ≥ 1. Let G_1 be the graph obtained from G by shifting a_2 (resp. a_4) pendent edges from v_2 (resp. v_4) to v_1 (resp. v_3). We deduce that
Mo_e(G_1 )- Mo_e(G ) = (a_1+a_2-a_3-a_4-a_5)-(a_1+a_4-a_3-a_5)
+ (a_3+a_4+a_5+2-3)-(a_2+a_4+3-a_3-a_5-2)
+ (a_1+a_2+a_3+a_4-a_5)-(a_1+a_3-a_4-a_5)
+ (a_3+a_4+3-a_5-2)-(a_2+a_3+3-a_4-a_5-2)
+ (a_1+a_2)-(a_1-a_2)+(a_1+a_2+a_3+a_4+3-a_5-1)
- (a_1+a_2+a_3+3-a_4-a_5-1)
+ (a_1+a_2+3-a_3-a_4-a_5-1)
- (a_1+a_2+a_4+3-a_3-a_5-1)
= 2( a_2+a_3+a_4 + a_5 )-2 > 0.
For a_5 >0, let G_2 be the graph obtained from G_1 by shifting a_5 pendent edges from v_5 to v_3. We obtain
Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_3+a_5)-(a_1+a_3-a_5)+(a_3+a_5+3-2)
- (a_3+3-a_5-2)+(a_1+a_3+a_5+3-1)
- (a_1+a_3+3-a_5-1)
= 6 a_5 > 0.
Let G_3 be the graph obtained from G_2 by shifting a_1 pendent edges from v_1 to v_3. We obtain
Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_3)-(a_3-a_1)+(a_1+a_3+2-3)
- (a_3+2-3)+(a_1+a_3+3-2)-(a_3+3-2)
+ 0-a_1+(a_1+a_3+1-3)-(a_3+1-a_1-3)
= 5 a_1 >0.
Clearly, G_3 ≅ D_2, and G_2 ≅ D_1 for a_3=0. Observe that Mo_e(D_1 )=m^2-3m-24, and Mo_e(D_2 )=m^2-2m-35 .
Let G ∈𝒢_m^1 of size m. Then
Mo_e(G) < m^2-m-36.
Suppose that G ∈𝒢_m^1, then G has a brace α_1 (a_1, a_2, a_3, a_4, a_5, a_6) as shown in Fig. <ref>. We consider the following three possible cases.
Case 1. α_1 have at least three paths with length at least two.
Subcase 1.1. The three paths inclose a cycle.
Assume that the three paths are P(a_1), P(a_2) and P(a_6) by the symmetry of α_1. We choose nine edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, two edges in the path P(a_6) such that each one is incident to y or z, one edge in the path P(a_3) incident to z, one edge in the path P(a_4) incident to z and one edge in the path P(a_5) incident to z. Let e be one of the nine edges. Then ψ(e) ≤ m-7. This fact is also true for the remaining eight edges. Thus,
Mo_e(G) ≤ 9(m-7)+(m-9)(m-1) < m^2-m-36.
Subcase 1.2. The three paths compose a new path.
Assume that the three paths are P(a_1), P(a_2) and P(a_4) by the symmetry of α_1. We choose nine edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, two edges in the path P(a_4) such that each one is incident to y or z, one edge in the path P(a_3) incident to z, one edge in the path P(a_5) incident to z and one edge in the path P(a_6) incident to x. Thus,
Mo_e(G) ≤ 2(m-6)+4(m-7)+2(m-8)+(m-9)+(m-9)(m-1) < m^2-m-36.
Subcase 1.3. The three paths share a common vertex.
Assume that the three paths are P(a_1), P(a_2) and P(a_3) by the symmetry of α_1. We choose nine edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, two edges in the path P(a_3) such that each one is incident to u or z, one edge in the path P(a_4) incident to y, one edge in the path P(a_5) incident to z and one edge in the path P(a_6) incident to x. We have,
Mo_e(G) ≤ 3(m-7)+3(m-8)+3(m-9)+(m-9)(m-1) < m^2-m-36.
Case 2. α_1 have just two paths with length at least two.
Subcase 2.1. The two paths belong to the same cycle at α_1.
Assume that the two paths are P(a_1) and P(a_2) by the symmetry of α_1. We choose eight edges, two edges in the path P(a_1) such that each one is incident to x or u, two edges in the path P(a_2) such that each one is incident to y or u, one edge in the path P(a_3) incident to u, one edge in the path P(a_4) incident to y, one edge in the path P(a_5) incident to x and one edge in the path P(a_6) incident to x. We deduce that,
Mo_e(G) ≤ 4(m-6)+3(m-7)+(m-8)+(m-8)(m-1) < m^2-m-36.
Subcase 2.2. The two paths belong to the two different cycles at α_1.
We choose eight edges in a similar way, as in Subcase 2.1. We obtain
Mo_e(G) ≤ 4(m-5)+4(m-8)+(m-8)(m-1) < m^2-m-36.
Case 3. α_1 has exactly one path with length at least two.
Assume that the path is P(a_4) with a_4 ≥ 2. If a_4=2, then by Lemma <ref>, Mo_e(G) < m^2-m-36. If a_4 ≥ 3, then similarly choose eight edges as in Subcase 2.1. We obtain
Mo_e(G) ≤ 2(m-5)+6(m-8)+(m-8)(m-1) < m^2-m-36.
Let G ∈𝒢_m^2 with brace α_2 (2,1,1,2,1). Then
Mo_e(G) ≤{[ m^2-4m-9, if 7 ≤ m ≤ 16, and equality holds iff G ≅ F_1,; 212, if m=17, and equality holds iff G ≅ F_1, F_2,; m^2-3m-26, if m ≥ 18, and equality holds iff G ≅ F_2. ].
Let v_i (i=1,...,5) be the vertices in α_2 (2,1,1,2,1) of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,5). Suppose a_2+a_4 ≥ a_3 + a_5 ≥ 1. Let G_1 be the graph obtained from G by shifting a_3 (resp. a_5) pendent edges from v_3 (resp. v_5) to v_2 (resp. v_4). We deduce that
Mo_e(G_1 )- Mo_e(G ) = (a_2+a_3+1-a_1-4)-(a_1+a_3+a_5+4-a_2-1)
+ (1+a_2+a_3-3-a_4-a_5)-(a_2+1-a_4-a_5-3)
+ (a_1+3-a_4-a_5-2)-(a_1+a_3+3-a_4-2)
+ (a_2+a_3+a_4+a_5+3-3)-(a_2+a_4+3-a_5-a_3-3)
+ (a_1+a_2+a_3+a_4+a_5+4-1)-(a_1+a_2+a_4+4-a_3-1)
+ (a_4+a_5+3-1)-(a_4+a_5+3-1-a_3)
+ (a_1+a_2+a_3+3-2)-(a_1+a_2+a_3-a_5-2)
= 2 a_2+6a_3+2a_5 -2a_1-6 > 0.
Let G_2 be the graph obtained from G_1 by shifting a_4 pendent edges from v_4 to v_1. We obtain
Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_4+4-a_2-1)-(a_1+4-a_2-1)
+ (a_1+a_4+3-2)-(a_1+3-a_4-2)
+ (a_1+a_2+a_4+3-2)-(a_1+a_2+3-2)
+ (a_2+1-3)-(a_2+1-3-a_4)+(a_2+3-3)
- (a_2+a_4+3-3)+(3-1)-(a_4+3-1)
= 3 a_4 > 0.
Let G_3 be the graph obtained from G_2 by shifting a_2 pendent edges from v_2 to v_1. We obtain
Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+4-1)-(a_1+4-a_2-1)+(a_1+a_2+3-2)
- (a_1+3-2)+(1-3)-(a_2+1-3)+0-(a_2+3-3)
= 2 a_2 > 0.
For a_1 >6-2a_2, let G_4 be the graph obtained from G_3 by shifting a_1 pendent edges from v_1 to v_2. We have
Mo_e(G_4 )- Mo_e(G_3 ) = (a_1+a_2+1-4)-(a_1+4-a_2-1)+(3-2)
- (a_1+3-2)+(a_1+a_2+1-3)-(a_2+1-3)
+ (a_1+a_2+3-3)-(a_2+3-3)
= a_1+ 2 a_2-6 > 0.
Clearly, G_3 ≅ F_1 and G_4 ≅ F_2. By simple calculation, we have Mo_e(F_1 )=m^2-4m-9, and Mo_e(F_2 )=m^2-3m-26.
Let G ∈𝒢_m^2 with brace α_2 (2,1,1,2,2). Then
Mo_e(G) ≤ m^2-3m-20 with equality if and only if G ≅ F_3.
Let v_i (i=1,...,6) be the vertices in α_2 (2,1,1,2,2) of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). For a_6 >0, let G_1 be the graph obtained from G by shifting a_6 pendent edges from v_6 to v_1. We obtain
Mo_e(G_1 )- Mo_e(G ) = (a_1+a_3+ a_5+ a_6+5-a_2-1)-(a_1+a_3+a_5+5-a_2-1)
+ (a_1+a_2+a_4+a_6+5-a_3-1)-(a_1+a_2+a_4+5-a_3-3)
+ (a_1+a_3+a_5+a_6+4-a_4-2)
- (a_1+a_3+a_5+4-a_4-a_6-2)
+ (a_1+a_2+a_4+a_6+4-a_5-2)
- (a_1+a_2+a_4+4-a_5-a_6-2)
+ (a_1+a_2+a_4+a_6+4-a_5-2)
- (a_1+a_2+a_4+4-a_5-a_6-2)
+ (a_1+a_3+a_5+4-a_4-2)-(a_1+a_3+a_5+4-a_4-a_6-2)
+ (a_2+1-a_4-3)-(a_2+1-a_4-a_6-3)
+ (a_3+1-a_5-3)-(a_3+1-a_5-a_6-3)
= 11a_6 > 0.
For a_2+a_3>a_1, let G_2 be the graph obtained from G_1 by shifting a_3 (resp. a_5) pendent edges from v_3 (resp. v_5) to v_2 (resp. v_4). We deduce that
Mo_e(G_2 )- Mo_e(G_1 ) = (a_2+a_3+1-a_1-5)-(a_1+a_3+a_5+5-a_2-1)
+ (a_1+a_2+a_3+a_4+a_5+5-1)-(a_1+a_2+a_4+5-a_3-1)
+ (a_1+4-a_4-a_5-2)-(a_1+a_3+a_5+4-a_4-2)
+ (a_1+a_2+a_3+a_4+a_5+4-2)-(a_1+a_2+a_4+4-a_5-2)
+ (a_2+a_3+1-a_4-a_5-3)-(a_2+1-a_4-3)
+ (3-1)-(a_3+1-a_5-3)+(a_1+a_2+a_3+a_4+a_5-2
- (a_1+a_2+a_4+4-a_5-2)+(a_1+4-a_4-a_5-2)
- (a_1+a_3+a_5+4-a_4-2)
= 2a_2+2a_3-2a_1 > 0.
For a_2+a_4≥ 1, let G_3 be the graph obtained from G_2 by shifting a_2 (resp. a_4) pendent edges from v_2 (resp. v_4) to v_1 (resp. v_4). We have
Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+a_4+5-1)-(a_1+5-a_2-1)
+ (a_1+a_2+a_4+5-1)-(a_1+5-a_2-1)
+ (a_1+a_2+a_4+4-2)-(a_1+a_3+a_5+4-a_4-2)
+ (1-3)-(a_2+1-a_4-3)
+ (a_1+a_2+a_4+4-2)-(a_1+4-a_4-2)
= 5a_2+7a_4 > 0.
Clearly, G_3 ≅ F_3, and Mo_e(F_3 )=m^2-3m-20.
Let G ∈𝒢_m^2 with brace α_2 (3,1,1,2,1). Then
Mo_e(G) ≤ m^2-2m-33 with equality if and only if G ≅ F_4.
Let v_i (i=1,...,6) be the vertices in α_2 (3,1,1,2,1) of G, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). For a_6 >0, let G_1 be the graph obtained from G by shifting a_5 (resp. a_6) pendent edges from v_5 (resp. v_6) to v_1. We obtain
Mo_e(G_1 )- Mo_e(G ) = (a_1+a_3+ a_5+ a_6+3-a_2-2)
- (a_1+a_3+a_5+3-a_2-a_6-2)
+ (a_1+a_2+a_3+a_4+a_5+a_6+5-1)
- (a_1+a_2+a_3+a_4+5-a_5-a_6-1)
+ (a_1+a_2+a_3+a_4+a_5+a_6+5-1)
- (a_1+a_2+a_3+a_4+5-a_5-a_6-1)
+ (a_1+a_3+a_5+a_6+3-a_2-2)
- (a_1+a_3+a_5+3-a_2-a_6-2)
+ (a_2+a_4+3-a_3-1)-(a_2+a_4+a_6+3-a_3-1)
+ (a_3+a_4+2-a_2-3)-(a_2+a_6+3-a_3-a_4-2)
+ (a_1+a_5+a_6+4-a_2-2)-(a_1+a_5+4-a_4-2)
= 2a_3+4a_5+7a_6-2a_2-2 > 0.
Let G_2 be the graph obtained from G_1 by shifting a_3 (resp. a_4) pendent edges from v_3 (resp. v_4) to v_1 (resp. v_2). We deduce that
Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+a_3+a_4+2-3)-(a_1+a_3+3-a_2-2)
+ (a_1+a_2+a_3+a_4+2-3)-(a_1+a_3+3-a_2-2)
+ (a_1+a_2+a_3+a_4+5-1)-(a_1+a_2+5-a_3-1)
+ (a_1+a_2+a_3+a_4+3-1)-(a_2+a_4+3-a_3-1)
+ (a_1+a_2+a_3+a_4+3-2)-(a_2+3-a_3-a_4-2)
= a_1+ 3a_2+6a_3+5a_4 > 0.
Let G_3 be the graph obtained from G_2 by shifting a_1 pendent edges from v_1 to v_2. We obtain
Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_2+2-3)-(a_1+3-a_2-2)
+ (a_1+a_2+2-3)-(a_2+2-a_1-3)
+ (a_1+a_2+3-1)-(a_2+3-1)
+ (a_1+a_2+3-2)-(a_2+3-2)
+ (4-2)-(a_1+4-2)
= 3a_1+ 2a_2-2 > 0.
Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 )< Mo_e(G_3 ). Clearly, G_3 ≅ F_4, and Mo_e(F_4) = m^2-2m-33.
Let G ∈𝒢_m^2 of size m. Then
Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(F_1) for m ≤ 9.
Suppose that G ∈𝒢_m^2, then G has a brace α_2 (a_1, a_2, a_3, a_4, a_5) as shown in Fig. <ref>. Assume that a_4, a_5 ≥ 2. We consider the following three possible cases.
Case 1. a_4, a_5 ≥ 3.
Subcase 1.1. a_1= a_2= a_3 =1.
We choose nine edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), three edges in the path P(a_5) such that two are incident to x or z and one is in the middle of P(a_5), one edge in the path P(a_2) incident to x, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have
Mo_e(G) ≤ 4(m-4)+4(m-7)+(m-9)+(m-9)(m-1) < m^2-m-36.
Subcase 1.2. At least one of a_1, a_2, a_3 is greater than 1.
If a_2, a_3 ≥ 2, then we choose 10 edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), three edges in the path P(a_5) such that two are incident to x or z and one is in the middle of P(a_5), two edges in the path P(a_2) incident to x or y, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have
Mo_e(G) ≤ 2(m-4)+ (m-5)+2(m-6)+2(m-8)+3(m-9)+(m-10)(m-1) < m^2-m-36.
If a_1 ≥ 2, then we choose 10 edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), three edges in the path P(a_5) such that two are incident to x or z and one is in the middle of P(a_5), two edges in the path P(a_2) incident to x or y, one edge in the path P(a_3) incident to x and two edges in the path P(a_1) incident to y or z. We obtain
Mo_e(G) ≤ 4(m-4)+ 6(m-7)+(m-10)(m-1) < m^2-m-36.
Case 2. a_4 ≥ 3, a_5 = 2.
Subcase 2.1.a_4 ≥ 4, a_5 = 2, and a_1= a_2= a_3 =1.
We choose nine edges, four edges in the path P(a_4) such that two are incident to x or y and two are in the middle of P(a_4), two edges in the path P(a_5) such that one is incident to x and one is in the middle of P(a_5), one edge in the path P(a_2) incident to x, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have
Mo_e(G) ≤ (m-4)+2(m-5)+(m-6)+2(m-7)+3(m-8)+(m-9)(m-1) < m^2-m-36.
Subcase 2.2.a_4 =3, a_5 = 2, and a_1= a_2= a_3 =1.
The Subcase follows from Lemma <ref>.
Subcase 2.3.a_4 ≥ 3, a_5 = 2, and at least one of a_1, a_2, a_3 is greater than 1.
The proof is similar to the Subcase 2.1.
Case 3. a_4 = a_5 = 2.
Subcase 3.1. At least one of a_1, a_2, a_3 is greater than 1.
If a_2, a_3 ≥ 2, then we choose eight edges, three edges in the path P(a_4) such that two are incident to x or y and one is in the middle of P(a_4), two edges in the path P(a_5) such that one is incident to x and other is in the middle of P(a_5), two edges in the path P(a_2) incident to x or y, one edge in the path P(a_3) incident to x and one edge in the path P(a_1) incident to y. We have
Mo_e(G) ≤ 4(m-5)+4(m-7)+(m-8)(m-1) < m^2-m-36.
If a_1 ≥ 3, then we choose nine edges, two edges in the path P(a_4) such that one is incident to x and other is in the middle of P(a_4), two edges in the path P(a_5) such that one is incident to x and the other is in the middle of P(a_5), one edge in the path P(a_2) incident to x, one edge in the path P(a_3) incident to x and three edges in the path P(a_1) such that two are incident to y or z and one is in the middle of P(a_1). We obtain
Mo_e(G) ≤ 2(m-5)+ 2(m-6)+4(m-7)+(m-9)+(m-9)(m-1) < m^2-m-36.
If a_1 = 2, then by Lemma <ref>,
Mo_e(G) ≤ m^2-3m-20 < m^2-m-36.
Subcase 3.2. a_1= a_2= a_3 =1.
By Lemma <ref>, we have Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(F_1) for m ≤ 9.
Let G ∈𝒢_m^3 with brace α_3 (1,2,2,2). Then
Mo_e(G) ≤{[ m^2-4m-9, if 7 ≤ m ≤ 10, and equality holds iff G ≅ H_1,; 68, if m=11, and equality holds iff G ≅ H_1, H_2,; m^2-2m-31, if m ≥ 12, and equality holds iff G ≅ H_2. ].
Let v_i (i=1,...,5) be the vertices in α_3 (1,2,2,2) of G with d_G(v_1)=d_G(v_2)=4 and d_G(v_3)=d_G(v_4)=d_G(v_5)=2, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,5). Suppose that a_3 ≥ a_4 ≥ a_5. For a_4 +a_5 > a_1+a_2+8, let G_1 be the graph obtained from G by shifting a_4 (resp. a_5) pendent edges from v_4 (resp. v_5) to v_3. We deduce that
Mo_e(G_1 )- Mo_e(G ) = (a_3+a_4+a_5+1-a_1-3)-(a_1+a_4+a_5+3-a_3-1)
+ (a_3+a_4+a_5+1-a_2-3)-(a_1+a_4+a_5+3-a_3-1)
+ (a_1+a_3+a_4+a_5+3-1)-(a_1+a_3+a_5+3-a_4-1)
+ (a_2+a_3+a_4+a_5+3-1)-(a_2+a_3+a_5+3-a_4-1)
+ (a_1+a_3+a_4+a_5+3-1)-(a_1+a_3+a_4+3-a_5-1)
+ (a_1+a_3+a_4+a_5+3-1)-(a_2+a_3+a_4+3-a_5-1)
= 4( a_3+a_4 + a_5 )-2(a_1+a_2) -8> 0.
For a_2 +a_3 > 1, let G_2 be the graph obtained from G_1 by shifting a_2 ) pendent edges from v_2 to v_1. We have
Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+3-a_3-1)-(a_1+3-a_3-1)
+ (a_3+1-3)-(a_2+3-a_3-1)
+ (a_1+a_2+3-3)-(a_1+3-a_2-3)
+ (a_1+a_2+a_3+3-1)-(a_1+a_3+3-1)
+ (a_3+3-1)-(a_2+a_3+3-1)
+ (a_1+a_2+a_3+3-1)-(a_1+a_3+3-1)
+ (a_3+3-1)-(a_2+a_3+3-1)
= 2( a_2+a_3 )-4> 0.
Clearly, G_2 ≅ H_2 for a_1=0, a_3 >0, and G_2 ≅ H_1 for a_3=0, a_1 >0. For a_1 +a_3 > 2, let G_3 be the graph obtained from G_2 by shifting a_1 pendent edges from v_1 to v_3. We have
Mo_e(G_3 )- Mo_e(G_2 ) = (a_1+a_3+1-3)-(a_1+3-a_3-1)
+ (a_1+a_3+1-3)-(a_3+1-3)+(3-3)
- (a_1+3-3)+(a_1+a_3+3-1)-(a_3+3-1)
+ (a_1+a_3+3-1)-(a_3+3-1)
= 2( a_1+a_3 )-4> 0.
Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 )< Mo_e(G_3 ). Clearly, G_3 ≅ H_2, and by simple calculation, we deduce that Mo_e(H_2) = m^2-2m-31, Mo_e(H_1) = m^2-4m-9.
Let G ∈𝒢_m^3 with brace α_3 (2,2,2,2). Then
Mo_e(G) ≤ m^2-m-48 with equality if and only if G ≅ H_3.
Let v_i (i=1,...,6) be the six vertices in α_3 (2,2,2,2) of G with d_G(v_1)=d_G(v_2)=4 and d_G(v_3)=d_G(v_4)=d_G(v_5)=d_G(v_6)=2, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). Suppose that a_3 ≥ a_4 ≥ a_5 ≥ a_6>0. Let G_1 be the graph obtained from G by shifting a_i ( i ≥ 4) pendent edges from v_i ( i ≥ 4) to v_3. We obtain
Mo_e(G_1 )- Mo_e(G ) = (a_2+a_3+ a_4+a_5+ a_6+1-a_1-3)
- (a_1+ a_4+a_5+a_6+3-a_2-a_3-1)
+ (a_1+a_3+a_4+a_5+a_6+1-a_2-3)
- (a_2+a_4+a_5+a_6+3-a_1-a_3-1)
+ (a_1+a_3+a_4+a_5+a_6+3-a_2-1)
- (a_1+a_3+a_5+a_6+3-a_4-a_2-1)
+ (a_2+a_3+a_4+a_5+a_6+3-a_1-1)
- (a_2+a_3+a_5+a_6+3-a_1-a_4-1)
+ (a_1+a_3+a_4+a_5+a_6+3-a_2-1)
- (a_1+a_3+a_4+a_6+3-a_2-a_5-1)
+ (a_2+a_3+a_4+a_5+a_6+3-a_1-1)
- (a_2+a_3+a_4+a_6+3-a_1-a_5-1)
+ (a_1+a_3+a_4+a_5+a_6+3-a_2-1)
- (a_1+a_3+a_4+a_5+3-a_2-a_6-1)
+ (a_2+a_3+a_4+a_5+a_6+3-a_1-1)
- (a_2+a_3+a_4+a_5+3-a_1-a_6-1)
= 4(a_3+a_4+a_5+a_6)-8 > 0.
For a_1 +a_2 > 0, let G_2 be the graph obtained from G_1 by shifting a_1 (resp. a_2) pendent edges from v_1 (resp. v_2) to v_3. We have
Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+a_3+1-3)-(a_2+a_3+1-a_1-1)
+ (a_1+a_2+a_3+1-3)-(a_2+3-a_1-a_3-1)
+ (a_1+a_2+a_3+3-1)-(a_1+a_3+3-a_2-1)
+ (a_1+a_2+a_3+3-1)-(a_2+a_3+3-a_1-1)
+ (a_1+a_2+a_3+3-1)-(a_1+a_3+3-a_2-1)
+ (a_1+a_2+a_3+3-1)-(a_2+a_3+3-a_1-1)
+ (a_1+a_2+a_3+3-1)-(a_1+a_3+3-a_2-1)
+ (a_1+a_2+a_3+3-1)-(a_2+a_3+3-a_1-1)
= 10a_1+6a_2+2a_3-8> 0.
Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 ). Clearly, G_2 ≅ H_3, and by simple calculation, we obtain Mo_e(H_3) = m^2-m-48.
Let G ∈𝒢_m^3 with brace α_3 (1,2,2,3). Then
Mo_e(G) ≤ m^2-3m-24 with equality if and only if G ≅ H_4.
Let v_i (i=1,...,6) be the six vertices in α_3 (1,2,2,3) of G with d_G(v_1)=d_G(v_2)=4 and d_G(v_3)=d_G(v_4)=d_G(v_5)=d_G(v_6)=2, as shown in Fig. <ref>. Let a_i be the number of pendent edges of v_i (i=1,...,6). Assume that a_3 ≥ a_2, and a_4+a_5+a_6 >1. Let G_1 be the graph obtained from G by shifting a_i ( i ≥ 4) pendent edges from v_i ( i ≥ 4) to v_3. We get
Mo_e(G_1 )- Mo_e(G ) = (a_1+4-a_3- a_4-a_5-a_6-1)
- (a_1+ a_4+a_5+4-a_3-1)
+ (a_3+a_4+a_5+a_6+1-a_2-4)
- (a_2+a_4+a_6+4-a_3-1)
+ (a_1+a_3+a_4+a_5+a_6+4-1)
- (a_1+a_3+a_5+4-a_4-1)
+ (a_2+a_3+a_4+a_5+a_6+4-1)
- (a_2+a_3+a_6+4-a_4-1)
+ (a_1+a_2+a_3+a_4+a_5+a_6+5-1)
- (a_1+a_2+a_2+a_4+5-a_5-a_6-1)
+ (a_1+a_2+a_3+a_4+a_5+a_6+5-1)
- (a_1+a_2+a_2+a_4+5-a_5-a_6-1)
+ (a_1+3-a_2-3)-(a_1+a_5+3-a_2-a_6-3)
+ (a_1+3-a_2-3)-(a_1+a_5+3-a_2-a_6-3)
= 2(a_3+a_4+a_5)+6a_6-2a_2-12 > 0.
For a_1 +a_2 > 0, let G_2 be the graph obtained from G_1 by shifting a_1 (resp. a_2) pendent edges from v_1 (resp. v_2) to v_3. We have
Mo_e(G_2 )- Mo_e(G_1 ) = (a_1+a_2+a_3+1-4)-(a_3+1-a_1-4)
+ (a_1+a_2+a_3+1-4)-(a_3+1-a_2-4)
+ (a_1+a_2+a_3+4-1)-(a_1+a_3+4-1)
+ (a_1+a_2+a_3+4-1)-(a_2+a_3+4-1)
+ (3-3)-(a_1+3-a_2-3)
+ (3-3)-(a_1+3-a_2-3)
= 2a_1+6a_2> 0.
Thus, Mo_e(G )< Mo_e(G_1 )< Mo_e(G_2 ). Clearly, G_2 ≅ H_4, and by simple calculation, we get Mo_e(H_4) = m^2-3m-24.
Let G ∈𝒢_m^3 of size m. Then
Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(H_1) for m ≤ 9.
Suppose that G ∈𝒢_m^3; then G has a brace α_3 (a_1, a_2, a_3, a_4) as shown in Fig. <ref>. Assume that 1 ≤ a_1 ≤ a_2 ≤ a_3 ≤ a_4. We proceed with the following three possible cases.
Case 1. 3 ≤ a_1 ≤ a_2 ≤ a_3 ≤ a_4.
We choose twelve edges, eight edges in the paths P(a_i) (i=1,2,3,4) such that each one is incident to x or y, four edges in the middle of P(a_i) (i=1,2,3,4). We deduce that
Mo_e(G) ≤ 8(m-8)+ 4(m-12)+(m-12)(m-1) < m^2-m-36.
Case 2. a_1 = 2.
Subcase 2.1. 3 ≤ a_2 ≤ a_3 ≤ a_4.
We choose eleven edges, eight edges in the paths P(a_i) (i=1,2,3,4) such that each one is incident to x or y, three edges in the middle of P(a_i) (i=2,3,4). We deduce that
Mo_e(G) ≤ 6(m-7)+ 2(m-9)+3(m-11)+(m-11)(m-1) < m^2-m-36.
Subcase 2.2. a_2 = a_3 = a_4= 2.
The Subcase follows from Lemma <ref>.
Case 3. a_1 = 1.
Subcase 3.1. 3 ≤ a_2 ≤ a_3 ≤ a_4.
We choose ten edges, six edges in the paths P(a_i) (i=2,3,4) such that each one is incident to x or y, three edges in the middle of P(a_i) (i=2,3,4), and one edge in P(a_1) incident to x. It follows that
Mo_e(G) ≤ 6(m-4)+ 4(m-10)+(m-10)(m-1) < m^2-m-36.
Subcase 3.2. a_2=2, 3 ≤ a_3 ≤ a_4.
The proof is similar to the Subcase 3.1.
Subcase 3.3. a_2= a_3=2, 3 ≤ a_4.
If a_4=3, then it follows from Lemma <ref>. If a_4 ≥ 4, then we choose nine edges, four edges in the path P(a_4) such that two are incident to x or y and the other two are in the middle of P(a_4), two edges in the path P(a_3) (resp. P(a_2)) such that one is incident to x and the other is in the middle of P(a_3) (resp. P(a_2)) and one edge in P(a_1) incident to x. We have
Mo_e(G) ≤ 2(m-5)+2(m-7)+ 4(m-6)+(m-9)+(m-9)(m-1) < m^2-m-36.
Subcase 3.4. a_2= a_3=a_4=2.
By Lemma <ref>, Mo_e(G) < m^2-m-36 for m ≥ 9, and Mo_e(G) ≤ Mo_e(H_1) for m ≤ 9.
Let G ∈𝒢_m^4 of size m. Then Mo_e(G) < m^2-m-36.
Suppose that G ∈𝒢_m^4, then G has a brace α_4 (a_1, a_2, a_3, a_4, a_5, a_6) as shown in Fig. <ref>. We choose eight edges, two edges in the path P(a_5) such that each is incident to w or y, two edges in the path P(a_6) such that each is incident to z or x, the four edges yz, yw, wx, zx. We obtain
Mo_e(G) ≤ 4(m-5)+4(m-8)+ (m-8)(m-1) < m^2-m-36.
The proof of the Theorem <ref> follows from Lemmas <ref>, <ref>, <ref>, <ref> and <ref>.
Acknowledgement: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 12071194, 11571155 and 12071158).
20
ACT M. Arockiaraj, J. Clement and N. Tratnik, Mostar indices of carbon nanostructures and circumscribed donut benzenoid systems. Int. J. Quantum Chem. 119 (2019) e26043.
AD A. Ali, T. Došlić, Mostar index: results and perspectives. Appl. Math. Comput. 404 (2021) 19. 126245.
DL K. Deng, S. Li, On the extremal values for the Mostar index of trees with given degree sequence. Appl. Math. Comput. 390 (2021) 11. 125598.
DL1 K. Deng, S. Li, On the extremal Mostar indices of trees with a given segment sequence. Bull. Malays. Math. Sci. Soc. 390 (2021) 45. 593–612.
DL2 K. Deng, S. Li, Chemical trees with extremal Mostar index. MATCH Commun. Math. Comput. Chem. 85 (2021) 161–180.
DL3 K. Deng, S. Li, Extremal catacondensed benzenoid with respect to the Mostar index. J. Math. Chem. 58 (2020) 1437–1465.
DoM T. Došlić, I. Martinjak, R. Škrekovski, S. Tipurić Spužević, I. Zubac, Mostar index. J. Math. Chem. 56 (2018) 2995–3013.
GAN A. Ghalvandi, A. R. Ashrafi, M. H. Nezhaad, On Mostar and edge Mostar indices of graphs. Journal of Mathematics (2021) 6651220.
GRI M. Ghorbani, S. Rahmani, M. J. Islampoor, Some new results on Mostar index of graphs. Iranian J. Math. Chem. 11 (2020) 33–42.
GXD F. Gao, K. Xu, T. Došlić, On the difference between Mostar index and irregularity of graphs. Bull. Malays. Math. Sci. Soc. 44 (2021) 45. 905–926.
H O. C. Havare, Mostar index and edge Mostar index for some cycle related graphs. Rom. J. Math. Comput. Sci. 10 (2020) 53–66.
HXZ F. Hayat, S. J. Xu, B. Zhou, On bicyclic graphs with maximum edge Mostar index. (Preprint).
HZ F. Hayat, B. Zhou, On cacti with large Mostar index. Filomat 33 (2019) 4865–4873.
HZ1 F. Hayat, B. Zhou, On Mostar index of trees with parameters. Filomat 33 (2019) 6453–6458.
HLM S. Huang, S. Li, M. Zhang, On the extremal Mostar indices of hexagonal Chains. MATCH Commun. Math. Comput. Chem. 84 (2020) 249–271.
IAI M. Imran, S. Akhter, Z. Iqbal, Edge Mostar index of chemical structures and nanostructures using graph operations. Int. J. Quan. Chem. 120 (2020) e26259.
JKR J. Jerebic, S. Klavžar, D.F. Rall, Distance-balanced graphs. Ann. Combin. 12 (2008) 71–79.
LD G. Liu, K. Deng, The maximum Mostar indices of unicyclic graphs with given diameter. Appl. Math. Comput. 439 (2023) 127636.
LSX H. Liu, L. Song, Q. Xiao, Z. Tang, On edge Mostar index of graphs. Iranian J. Math. Chem. 11(2) (2020) 95–106.
MS Š. Miklavič, P. Šparl, ℓ-distance-balanced graphs. Discrete Appl. Math. 244 (2018) 143–154.
Te A. Tepeh, Extremal bicyclic graphs with respect to Mostar index. Appl. Math. Comput. 355 (2019) 319–324.
XZT Q. Xiao, M. Zeng, Z. Tang, The hexagonal chains with the first three maximal Mostar indices. Discrete Appl. Math. 288 (2020) 180–191.
XZT2 Q. Xiao, M. Zeng, Z. Tang, H. Deng, H. Hua, Hexagonal chains with first three minimal Mostar indices. MATCH Commun. Math. Comput. Chem. 85 (2021) 47–61.
|
http://arxiv.org/abs/2307.05661v1 | 20230711165126 | Subtyping Context-Free Session Types | [
"Gil Silva",
"Andreia Mordido",
"Vasco T. Vasconcelos"
] | cs.PL | [
"cs.PL"
] |
Subtyping Context-Free Session Types
Gil Silva, Andreia Mordido, Vasco T. Vasconcelos
August 12, 2023
============================================================
Context-free session types describe structured patterns of communication on heterogeneously-typed channels, allowing the specification of protocols unconstrained by tail recursion. The enhanced expressive power provided by non-regular recursion comes, however, at the cost of the decidability of subtyping, even if equivalence is still decidable. We present an approach to subtyping context-free session types based on a novel kind of observational preorder we call 𝒳𝒴𝒵𝒲-simulation, which generalizes 𝒳𝒴-simulation (also known as covariant-contravariant simulation) and therefore also bisimulation and plain simulation. We further propose a subtyping algorithm that we prove to be sound, and present an empirical evaluation in the context of a compiler for a programming language. Due to the general nature of the simulation relation upon which it is built, this algorithm may also find applications in other domains.
§ INTRODUCTION
What does it mean for a type to be a subtype of another? The principle of safe substitution, attributed to Liskov <cit.>, states that T is a subtype of U if values of type T can take the place of values of type U in whatever context without violating the guarantees offered by the type system.
Session types, introduced by Honda et al. <cit.>, enhance traditional type systems with the ability to specify and enforce structured communication protocols on bidirectional, heterogeneously typed channels. Typically, these specifications include the type, direction (input or output) and order of the messages, as well as branching points where one participant can choose how to proceed and the other must follow.
Traditional session types are bound by tail recursion and therefore restricted
to the specification of protocols described by regular languages. This excludes
many protocols of practical interest, with the quintessential example being the
serialization of tree-structured data on a single channel. Context-free session
types, proposed by Thiemann and Vasconcelos <cit.>,
liberate types from tail recursion by introducing a sequential composition operator (_;_) with a monoidal structure and a left and right identity in type Skip, representing no action. As their name hints, context-free session types can specify protocols corresponding to (simple deterministic) context-free languages and are thus considerably more expressive than their regular counterparts.
When applied to session types, subtyping allows increased flexibility in the interactions between participants, namely on the type of the messages (a feature inherited from the subtyped π-calculus <cit.>) and on the choices available at branching points <cit.>, allowing channels to be governed by simpler session types if their context so requires. A practical benefit of this flexibility is that it promotes modular development: the behaviour of one participant may be refined while the behaviour of the other is kept intact.
Consider the following context-free session types for serializing binary trees.
[t]
𝖲𝖳𝗋𝖾𝖾 = s⊕{Nil, Nodess}
𝖣𝖳𝗋𝖾𝖾 = s&{Nil, Nodess}
[t]
𝖲𝖤𝗆𝗉𝗍𝗒 = ⊕{Nil}
𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾0 = ⊕{Node𝖲𝖤𝗆𝗉𝗍𝗒𝖲𝖤𝗆𝗉𝗍𝗒}
𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1 = ⊕{Node𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾0𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾0}
The recursive 𝖲𝖳𝗋𝖾𝖾 and 𝖣𝖳𝗋𝖾𝖾 types specify, respectively,
the serialization and deserialization of a possibly infinite arbitrary tree,
while the remaining non-recursive types specify the serialization of finite
trees of particular configurations. The benefit of subtyping is that it makes
the particular types 𝖲𝖤𝗆𝗉𝗍𝗒, 𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾0 and
𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1 compatible with the general 𝖣𝖳𝗋𝖾𝖾 type. Observe
that its dual, 𝖲𝖳𝗋𝖾𝖾, may safely take the place of any type in right column. Consider now a function 𝖿 that generates full trees of height 1 and serializes them on a given channel end. Assigning it type 𝖲𝖳𝗋𝖾𝖾 would not statically ensure that the fullness and height of the tree are as specified. Type 𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1 would do so, and subtyping would still allow the function to use an 𝖲𝖳𝗋𝖾𝖾 channel (i.e., communicate with someone expecting an arbitrary tree).
Expressive power usually comes at the cost of decidability. While subtyping for regular session types has been formalized, shown decidable and given an algorithm by Gay and Hole <cit.>, subtyping in the context-free setting has been proven undecidable by Padovani <cit.>. The proof is given by a reduction from the inclusion problem for simple languages, shown undecidable by Friedman <cit.>. Remarkably, the equivalence problem for simple languages is known to be decidable, as is the type equivalence of context-free session types <cit.>.
Subtyping in the context-free setting has until now only been considered for
first-order session types, limiting the type messages to basic types and
therefore not accounting for the possibility of conveying channels in messages
<cit.>. Consequently, the interesting
co/contravariant properties of input and output types have been left unexplored.
In this paper, we promote the theory of subtyping for context-free session types
to a higher-order setting, where messages may carry values of arbitrary types. To handle the resulting contravariance of output types, we introduce a novel notion of observational preorder, which we call 𝒳𝒴𝒵𝒲-simulation (by analogy with the 𝒳𝒴-simulation of Aarts and Vaandrager <cit.>).
While initially formulated in the context of the π-calculus, considerable
work has been done to integrate session types in more standard settings, such as
functional languages based on the polymorphic λ-calculus with linear
types <cit.>. In this scenario, functional
types and session types are not orthogonal: sessions may carry functions that
may act on sessions that may carry functions... and so on. With this in mind,
we promote our theory to a linear functional setting, thereby showing how
subtyping for records, variants and (linear and unrestricted
<cit.>) functions — usually introduced by inference rules
— can be seamlessly integrated with simulation-based subtyping for
context-free session types.
Finally, we present a sound algorithm for the novel notion of subtyping, based on the type equivalence algorithm of Almeida et al. <cit.>. This algorithm works by first encoding the types as words from a simple grammar and then deciding their 𝒳𝒴𝒵𝒲-similarity. Being grammar-based and, at its core, agnostic to types, our algorithm may also find applications for other objects with similar non-regular and contravariant properties.
Contributions
We address the subtyping problem for context-free session types, contributing:
* A syntactic definition of subtyping for context-free session types;
* A novel kind of behavioural preorder called 𝒳𝒴𝒵𝒲-simulation, and, based on it, a semantic definition of subtyping that coincides with the syntactic one;
* A sound subtyping algorithm based on the 𝒳𝒴𝒵𝒲-similarity of simple grammars;
* An empirical evaluation of the performance of the algorithm, and a comparison with an existing type equivalence algorithm.
Overview
The rest of this paper is organized as follows: in
<ref> we introduce types, type formation and
syntactic subtyping; in <ref> we present a notion
of semantic subtyping, to be used as a stepping stone to develop our subtyping
algorithm; in <ref> we present the algorithm and show it
to be sound; in <ref> we evaluate the performance of our
implementation of the algorithm; in <ref> we present
related work; in <ref> we conclude the paper and trace a
path for the work to follow. The reader can find the rules for type formation
and proofs for all results in the paper in the appendices.
§ TYPES AND SYNTACTIC SUBTYPING
We base our contributions on a type language that includes both functional types
and higher-order context-free session types (i.e., session types that allow
messages of arbitrary types). The language is shown in <ref>. As customary in session types for functional
languages <cit.>, the language of types is given by two
mutually recursive syntactic categories: one for functional types (T,U,V,W)
and another for session types (R,S). We assume two disjoint and denumerable
sets of type references, with the first ranged over by t,u,v,w, the second by
r,s and their union by x,y,z. We further assume a set of record, variant and
choice labels, ranged over by j,k,l.
The first three productions of the grammar for functional types introduce the
type, functions TmU, records ℓT_ℓL
and variants ℓT_ℓL (which correspond to
datatypes in ML-like languages). Our system exhibits linear
characteristics: function types contain a multiplicity annotation m (also
in <ref>), meaning that they must be used exactly once if
m= or without restrictions if m=∗ (such types can also be
found, for instance, in System F^∘<cit.> and in
the FreeST language <cit.>). Their inclusion in
our system is justified by the interesting subtyping properties they exhibit
<cit.>.
Session types !T and ?T represent the sending and receiving,
respectively, of a value of type T (an arbitrary type, making the system
higher-order). Internal choice types ℓS_ℓL allow the
selection of a label k∈ L and its continuation S_k, while external
choice types ℓS_ℓL represent the branching on any
label k∈ L and its continuation S_k. We stipulate that the set of labels
for these types must be non-empty. Type Skip represents no action, while
type End indicates the closing of a channel, after which no more
communication can take place. Type R;S denotes the sequential
composition of R and S, which is associative, right-distributes over choice types, has identity Skip and left-absorber End.
The final two productions in both functional and session grammars introduce self-references and the recursion operator. Their inclusion in the two grammars ensures we can have both recursive functional types and recursive session types while avoiding nonsensical types such as t*t at the syntactical level (avoiding the need for a kind system).
Still, we do not consider all types generated by these grammars to be well-formed. Consider session type rr. No matter how many times we unfold it, we cannot resolve its first communication action. The same could be said of rr. We must therefore ensure that any self-reference in a sequential composition is preceded by a type constructor representing some meaningful action, i.e., not equivalent to . This is achieved by adapting the conventional notion of contractivity (no subterms of the form xx_1 … x_nx) to account for as the identity of sequential composition. Readers familiar with process algebra may recognize this restriction, as it corresponds to the notion of guardedness found, for example, in CCS <cit.>.
In addition to contractivity, we must ensure that well-formed types contain no
free references. The type formation judgement ΔT, where
Δ is a set of references, combines these requirements. The rules for the
judgement can be found in <ref>.
We are now set to define our syntactic subtyping relation. We begin by surveying the features it should support.
Input and output subtyping
Input covariance and output contravariance are the central features of subtyping for types that govern entities that can be written to or read from, such as channels and references <cit.>. They are therefore natural features of the subtyping relation for conventional session types as well <cit.>. Observe that {A,B}{A} should be true, for the type of the received value, {A,B}, safely substitutes the expected type, {A}. Observe also that {A}{A,B} should be true, because the type of the value to be sent, {A,B}, is a subtype of {A}, the type of the messages the substitute channel is allowed to send.
Choice subtyping
If we understand external and internal choice types as, respectively, the input and output of a label, then their subtyping properties are easy to derive: external choices are covariant on their label set, internal choices are contravariant on their label set, and both are covariant on the continuation of the labels (this is known as width subtyping). Observe that &{A}&{A,B} should be true, for every branch in the first type can be safely handled by matching on the second type. Likewise, ⊕{A,B}⊕{A} should be true, for every choice in the second type can be safely selected in the first. These properties are not unlike those of variant and record types, and the resemblance is clearer when comparing branch matching with case analysis and selection with projection.
Sequential composition
In the classical subtyping relation for regular session types, input and output types (T.S) can be characterized as covariant in their continuation. Although the same general intuition applies in the context-free setting, we cannot as easily characterize the variance of the sequential composition constructor (SR) due to its monoidal, distributive and absorbing properties. For instance, consider types S_1S_2 and R_1R_2, with S_1=, S_2=, R_1= and R_2=. Although it should be true that S_1S_2R_1R_2, we can have neither S_1 R_1 nor S_2 R_2.
Functional subtyping
The subtyping properties of function, record and variant types are well known, and we refer the readers to Pierce's book for the reasoning behind them <cit.>. Succinctly, the function type constructor is contravariant on the domain and covariant on the range, and the variant and record constructors are both covariant on the type of the fields, but respectively covariant and contravariant on their label sets.
Multiplicity subtyping
Using an unrestricted (*) resource where a linear () one is expected does not compromise safety, provided that, multiplicities aside, the type of the former may safely substitute the type of the latter. We can express this relationship between multiplicities through a preorder captured by inequality *⊑. In our system, function types may be either linear or unrestricted. Thus, by the conventional subtyping properties of function types, and by the principle of safe substitution, type T_1mT_2 can be considered a subtype of another type U_1nU_2 if U_1 and T_2 are subtypes, respectively, of T_1 and U_2 and if m⊑ n (thus we can characterize the function type constructor as covariant on its multiplicity).
The rules for our syntactic subtyping relation, interpreted coinductively, are shown in <ref>. Rules S-Unit, S-Arrow, S-Rcd, S-Vrt, S-RecL and S-RecR establish the classical subtyping properties associated with both functional and equi-recursive types, with S-Arrow additionally encoding subtyping between linear and unrestricted functions, relying on a preorder on multiplicities also defined in <ref>. Rules S-End, S-In, S-Out, S-ExtChoice and S-IntChoice bring to the context-free setting the classical subtyping properties expected from session types, as put forth by Gay and Hole <cit.>.
The remaining rules account for sequential composition, which distributes over
choice and exhibits a monoidal structure with its (left and right) neutral
element in . We include, for each session type constructor S, a left
rule (denoted by suffix L) of the form SR S' and a
right rule (denoted by suffix R) of the form S'SR. An additional rule is necessary for each constructor over which sequential composition does not distribute, associate or neutralize (S-InSeq2, S-OutSeq2 and S-EndSeq2). Since we are using a coinductive proof scheme, we include rules to `move' sequential composition down the syntax. Thus, given a type SR, we inspect S to decide which rule to apply next.
theoremsyntacticPreorder
The syntactic subtyping relation is a preorder on types.
Let us briefly return to <ref>. It is now easy to see that 𝖲𝖳𝗋𝖾𝖾𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1: we unfold the left-hand side and apply the distributivity rules to both sides as necessary until reaching an internal choice with no continuation, at which point we can apply S-IntChoice, or until reaching a type with at the head, at which point we apply S-InSeq2. We repeat this process until reaching 𝖲𝖳𝗋𝖾𝖾𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾0, and proceed similarly until reaching 𝖲𝖳𝗋𝖾𝖾𝖲𝖤𝗆𝗉𝗍𝗒, which follows from S-IntChoice and S-Skip.
Despite clearly conveying the intended meaning of the subtyping relation, the rules suggest no obvious algorithmic intepretation: on the one
hand, the presence of bare metavariables makes the system not syntax-directed;
on the other hand, rules S-RecL, S-RecSeqL and their
right counterparts lead to infinite derivations which are not solvable by
a conventional fixed-point construction <cit.>. In
the next section we develop an alternative, semantic approach to subtyping,
which we use as a stepping stone to develop a subtyping algorithm.
§ SEMANTIC SUBTYPING
Semantic equivalence for context-free session types is usually based on observational equivalence or bisimilarity, meaning that two session types are considered equivalent if they exhibit exactly the same communication behaviour <cit.>. An analogous notion of semantic subtyping should therefore rely on an observational preorder. In this section we develop such a preorder.
We define the behaviour of types through a labelled transition system (LTS) by establishing relation TaU (“type T transitions by action a to type U”). We follow Costa et al. <cit.> in attributing behaviour to functional types, allowing them to be encompassed in our observational preorder. The rules defining the transition relation, as well as the grammar that generates all possible transition actions, are shown in <ref>.
In general, each functional type constructor generates a transition for each of its fields (, which has none, transitions to ). Linear functions, exhibit an additional transition to represent their restricted use (L-LinArrow), and records/variants include a default transition that is independent of their fields. The behaviour of session types is more complex, since it must account for their algebraic properties. Message types exhibit a transition for their payload (L-Msg1, L-MsgSeq1) and another for their continuation, which is by omission (L-Msg2, L-MsgSeq2). Choices behave much like records/variants when alone, but are subject to distributivity when composed (L-ChoiceFieldSeq). Type , which absorbs its continuation, transitions to (L-End, L-EndSeq). Rules L-SeqSeq, L-SkipSeq account for associativity and identity, and rules L-Rec and L-RecSeq dictate that recursive types behave just like their unfoldings. Notice that has no transitions.
With the behaviour of types established, we now look for an appropriate notion of observational preorder. Several such notions have been studied in the literature. Similarity, defined as follows, is arguably the simplest of them <cit.>.
A type relation ℛ is said to be a simulation if, whenever TℛU, for all a, T' with TaT' there is U' such that UaU' and T'ℛU'
Similarity, written ≼, is the union of all simulation relations. We say that a type U simulates type T if T≼ U.
Unfortunately, plain similarity is of no use to us. A small example demonstrates why: type ⊕{𝖠,𝖡} both simulates and is a subtype of ⊕{𝖠: }, while type &{𝖠: } does not simulate yet is a subtype of &{𝖠: , 𝖡: }. An anti-simulation relation, based on the converse clause, would be of no avail either, as it would simply leave us with the converse problem.
It is apparent that a more refined notion of simulation is necessary, where the direction of the implication depends on the transition labels. Aarts and Vaandrager provide just such a notion in the form of 𝒳𝒴-simulation <cit.>, a simulation relation parameterized by two subsets of actions, 𝒳 and 𝒴, such that actions in 𝒳 are simulated from left to right and those in 𝒴 are simulated from right to left, selectively combining the requirements of simulation and anti-simulation. Its definition, adapted to types, is as follows.
Let 𝒳,𝒴⊆𝒜. A type relation ℛ is said to be an 𝒳𝒴-simulation if, whenever TℛU, we have:
* for each a ∈𝒳 and each T' with TaT', there is U' such that UaU' with T'ℛU';
* for each a ∈𝒴 and each U' with UaU', there is T' such that TaT' with T'ℛU'.
𝒳𝒴-similarity, written ≼^𝒳𝒴, is the union of all 𝒳𝒴-simulation relations. We say that a type T is 𝒳𝒴-similar to type U if T≼^𝒳𝒴U.
Similar or equivalent notions have appeared throughout the literature: modal refinement <cit.>, alternating simulation <cit.> and, perhaps more appropriately named (for our purposes), covariant-contravariant simulation <cit.>. Padovani's original subtyping relation for first-order context-free session types <cit.> can also be understood as a refined form of 𝒳𝒴-simulation.
We can tentatively define a semantic subtyping relation ≲' as an 𝒳𝒴-simulation where 𝒳 and 𝒴 are the label sets generated by the following grammars for a_𝒳 and a_𝒴, respectively.
[t]
a_𝒳 a_𝒳𝒴ℓℓ
a_𝒴 a_𝒳𝒴ℓℓ [t]
a_𝒳𝒴
This would indeed give us the desired result for our previous example, but we still cannot account for the contravariance of output and function types: we want T={𝖠: } to be a subtype of U={𝖠: , 𝖡: }, yet T≲'U does not hold (in fact, we have U≲'T, a clear violation of run-time safety). The same could be said for types {𝖠: }* and {𝖠: , 𝖡: }*. In short, our simulation needs the and -derivatives to be related in the direction opposite to that of the initial types.
To allow this inversion, we generalize the definition of 𝒳𝒴-simulation by parameterizing it on two further subsets of actions and including two more clauses where the direction of the relation between the derivatives is inverted. By analogy with 𝒳𝒴-simulation, we call the resulting notion 𝒳𝒴𝒵𝒲-simulation.
Let 𝒳,𝒴,𝒵,𝒲⊆𝒜. A type relation ℛ is a 𝒳𝒴𝒵𝒲-simulation if, whenever TℛU, we have:
* for each a ∈𝒳 and each T' with TaT', there is U' such that UaU' with T'ℛU';
* for each a ∈𝒴 and each U' with UaU', there is T' such that TaT' with T'ℛU';
* for each a ∈𝒵 and each T' with TaT', there is U' such that UaU' with U'ℛT';
* for each a ∈𝒲 and each U' with UaU', there is T' such that TaT' with U'ℛT'.
𝒳𝒴𝒵𝒲-similarity, written , is the union of all 𝒳𝒴𝒵𝒲-simulation relations. We say that a type T is 𝒳𝒴𝒵𝒲-similar to type U if T U.
𝒳𝒴𝒵𝒲-simulation generalizes several existing observational relations: 𝒳𝒴-simulation can be defined as an
𝒳𝒴∅∅-simulation, bisimulation as 𝒜𝒜∅∅-simulation (alternatively, ∅∅𝒜𝒜-simulation or 𝒜𝒜𝒜𝒜-simulation), and plain simulation as 𝒜∅∅∅-simulation.
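For intuition, here is a brute-force Python sketch (ours, not the paper's procedure; all function and state names are our own) that computes the largest 𝒳𝒴𝒵𝒲-simulation on a finite labelled transition system by refining the full relation to a greatest fixed point. The types of this paper induce potentially infinite-state LTSs, which is why the decision procedure below works on grammars instead.

def xyzw_similar(lts, X, Y, Z, W, t0, u0):
    # lts maps each state to a dict {label: set of successor states}.
    # Returns True iff t0 is XYZW-similar to u0: start from all pairs of
    # states and repeatedly remove pairs violating one of the four clauses.
    def succ(s, a):
        return lts[s].get(a, set())

    def ok(t, u, rel):
        for a in set(lts[t]) | set(lts[u]):
            if a in X and not all(any((t1, u1) in rel for u1 in succ(u, a))
                                  for t1 in succ(t, a)):
                return False               # clause 1: left-to-right
            if a in Y and not all(any((t1, u1) in rel for t1 in succ(t, a))
                                  for u1 in succ(u, a)):
                return False               # clause 2: right-to-left
            if a in Z and not all(any((u1, t1) in rel for u1 in succ(u, a))
                                  for t1 in succ(t, a)):
                return False               # clause 3: derivatives swapped
            if a in W and not all(any((u1, t1) in rel for t1 in succ(t, a))
                                  for u1 in succ(u, a)):
                return False               # clause 4: derivatives swapped
        return True

    rel = {(t, u) for t in lts for u in lts}
    changed = True
    while changed:
        changed = False
        for p in list(rel):
            if p in rel and not ok(p[0], p[1], rel):
                rel.discard(p)
                changed = True
    return (t0, u0) in rel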
theoremxyzwsimPreorder
For any 𝒳,𝒴,𝒵,𝒲, is a preorder relation on types.
Equipped with the notion of 𝒳𝒴𝒵𝒲-similarity, we are ready to define the semantic subtyping relation for functional and higher-order context-free session types as follows.
The semantic subtyping relation for functional and higher-order context-free session types is defined by T U when T U such that 𝒳, 𝒴, 𝒵 and 𝒲 are defined as the label sets generated by the following grammars for a_𝒳, a_𝒴, a_𝒵 and a_𝒲, respectively.
[t]
a_ a_𝒳𝒴 ℓℓ
a_ a_𝒳𝒴ℓℓ [t]
a_, a_ !_d
a_ ?_d
Notice the correspondence between the placement of the labels and the variance of their respective type constructors. Labels arising from covariant positions of the arrow and input type constructors are placed in both the 𝒳 and 𝒴 sets, while those arising from the contravariant positions of the arrow and output type constructors are placed in both the 𝒵 and 𝒲 sets. Labels arising from the fields of constructors exhibiting width subtyping are placed in a single set, depending on the variance of the constructor on the label set: 𝒳 for covariance (external choice and variant constructors), 𝒴 for contravariance (internal choice and record constructors). The function type constructor is covariant on its multiplicity, thus the linear arrow label is placed in 𝒳. Finally, checkmark labels and those arising from nullary constructors are placed in 𝒳 and 𝒴, but they could alternatively be placed in 𝒵 and 𝒲 or in all four sets (notice the parallel with bisimulation, that can be defined as 𝒜𝒜∅∅-simulation, ∅∅𝒜𝒜-simulation, or 𝒜𝒜𝒜𝒜-simulation).
Let us go back once again to our tree serialization example from <ref>. Here it is also easy to see that 𝖲𝖳𝗋𝖾𝖾𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1. Observe that, on the side of 𝖲𝖳𝗋𝖾𝖾, transitions by 𝖭𝗂𝗅 and 𝖭𝗈𝖽𝖾 always appear together, while on the side of 𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1 types only transition by either one label or the other, if they do. Since 𝖭𝗂𝗅 and 𝖭𝗈𝖽𝖾 belong exclusively to 𝒴, 𝖲𝖳𝗋𝖾𝖾 is always able to match 𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾1 on these labels (as in all the others in 𝒴∪𝒲, and vice-versa for 𝒳∪𝒵).
[Soundness and completeness for subtyping relations]theoremsyntacticSemanticSoundnessCompleteness
Let ⊢ T and ⊢ U. Then T U iff T U.
§ DECIDING SUBTYPING
In this section we introduce our subtyping algorithm, adapted from the equivalence algorithm of Almeida et al. <cit.>. At its core, it determines the 𝒳𝒴𝒵𝒲-similarity of simple grammars. Its application to context-free session types is facilitated by a translation function to properly encode types as grammars. It may likewise be adapted to other domains.
Much like the original, our algorithm can be succinctly described in three distinct phases:
* translate the given types to a simple grammar and two starting words;
* prune unreachable symbols from productions;
* explore an expansion tree rooted at a node containing the initial words, alternating between expansion and simplification operations until either an empty node is found (decide True) or all nodes fail to expand (decide False).
The first phase consists of translating the two types to a pair of words (X⃗, Y⃗) and a grammar in Greibach normal form (GNF), i.e., a pair (X⃗, 𝒫) where X⃗ is the start word and 𝒫 a set of productions of the form Y → a Z⃗. A word, denoted by X⃗, is defined as a sequence of non-terminal symbols. We can check the 𝒳𝒴𝒵𝒲-similarity of words in GNF grammars because they naturally induce a labelled transition system, where states are words X⃗, actions are terminal symbols a, and the transition relation is defined by X Y⃗ --a--> Z⃗ Y⃗ when (X → a Z⃗) ∈ 𝒫. We denote the bisimilarity and 𝒳𝒴𝒵𝒲-similarity of grammar words by ∼_𝒫 and ≲_𝒫, respectively, where 𝒫 is the set of productions. We also write ≤_𝒫 for grammar 𝒳𝒴𝒵𝒲-similarity with the label sets fixed as in <ref>. The deterministic nature of context-free session types allows their corresponding grammars to be simple: for each non-terminal Y and terminal symbol a, we have at most one production of the form Y → a Z⃗.
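For concreteness, a possible Haskell encoding of simple grammars in GNF and of the induced transition relation on words is sketched below. The names (GWord, step, enabled) are ours, are reused in later sketches, and should not be read as the actual FreeST implementation.

  import qualified Data.Map as Map
  import Data.Map (Map)

  type Terminal    = String        -- transition labels (actions)
  type NonTerminal = String
  type GWord       = [NonTerminal] -- a word is a sequence of non-terminals

  -- productions Y -> a Z⃗ of a grammar in GNF, indexed by (Y, a);
  -- simple grammars have at most one production per such pair
  type Productions = Map (NonTerminal, Terminal) GWord

  -- induced LTS on words: X Y⃗ --a--> Z⃗ Y⃗ whenever X -> a Z⃗ is a production
  step :: Productions -> GWord -> Terminal -> Maybe GWord
  step _     []         _ = Nothing   -- the empty word has no transitions
  step prods (x : rest) a = fmap (++ rest) (Map.lookup (x, a) prods)

  -- actions enabled at a word
  enabled :: Productions -> GWord -> [Terminal]
  enabled _     []      = []
  enabled prods (x : _) = [ a | ((y, a), _) <- Map.toList prods, y == x ]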
The grammar translation procedure remains unchanged from the original algorithm <cit.>, and for this reason we omit its details (which include generating productions for all μ-subterms in types). However, this procedure relies on two auxiliary definitions which must be adapted: the unraveling function (<ref>), which normalizes the head of session types and unravels recursive types until reaching a type constructor, and the word-construction procedure (<ref>), which builds a word from a session type while updating a set of productions. Our presentation of the latter is somewhat naive since it does not avoid redundant productions. Almeida et al. suggest an optimization <cit.>.
The unraveling of a type T is defined by induction on the structure of T:
(xT) = (xxTT)
(S) =
(ℓS_ℓLR) = ℓS_ℓ;RL
(S) = (S)
((sS)R) = ((ssSS)R)
((S_1S_2)S_3) = (S_1(S_2S_3))
and in all other cases by (T)=T.
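As a purely illustrative sketch, the essential rewriting behaviour of unraveling (unfolding recursion, discarding a leading Skip in a sequential composition, and re-associating sequential composition) can be rendered in Haskell on a deliberately simplified type syntax. The data type and the naive substitution below are our own simplifications, omit most constructors of the actual language, and terminate only on contractive types.

  -- a deliberately minimal fragment of the session-type syntax, for illustration only
  data SType
    = SSkip
    | SVar String
    | SRec String SType            -- recursive type
    | SSeq SType SType             -- sequential composition
    | SMsg SType                   -- a stand-in for any constructor-headed type
    deriving (Eq, Show)

  -- naive (not capture-avoiding) substitution of u for x; enough for closed examples
  subst :: String -> SType -> SType -> SType
  subst x u t = case t of
    SVar y | y == x    -> u
           | otherwise -> SVar y
    SSkip              -> SSkip
    SRec y s | y == x    -> SRec y s
             | otherwise -> SRec y (subst x u s)
    SSeq s1 s2         -> SSeq (subst x u s1) (subst x u s2)
    SMsg s             -> SMsg (subst x u s)

  -- unravel until a constructor is exposed at the head
  unr :: SType -> SType
  unr (SRec x s)             = unr (subst x (SRec x s) s)
  unr (SSeq SSkip s)         = unr s
  unr (SSeq (SRec x s) r)    = unr (SSeq (subst x (SRec x s) s) r)
  unr (SSeq (SSeq s1 s2) s3) = unr (SSeq s1 (SSeq s2 s3))
  unr t                      = t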
The word corresponding to a type T, (T), is built by descending on the structure of T while updating a set of productions:
() = Y, setting ∪{Y →}
(UV) = Y , setting ∪{Y→(U),Y→(V), Y→}
(U*V) = Y , setting ∪{Y→(U),Y→(V) }
(ℓT_ℓL) = Y , setting ∪{Y →}∪{Y →_k (T_k) k ∈ L}
() = ε
() = Y , setting ∪{Y →}
( U) = Y , setting ∪{Y →(U) ,Y →}
(ℓS_ℓL) = Y , setting ∪{Y →}∪{Y →l(S_k) k ∈ L}
(S_1;S_2) = (S_1)(S_2)
(xU) = X
where, in each equation, Y is understood as a fresh non-terminal symbol, X as the non-terminal symbol corresponding to type reference x, and the remaining distinguished symbol as a non-terminal symbol without productions.
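The "setting 𝒫 ∪ {…}" bookkeeping above is conveniently expressed with a state monad carrying a fresh-name supply and the productions built so far. The following Haskell fragment, which reuses the grammar encoding sketched earlier and whose names are again ours, shows one possible skeleton.

  import qualified Data.Map as Map
  import Control.Monad.State

  -- translation state: a counter for fresh non-terminals and the productions so far
  type GrammarM = State (Int, Productions)

  freshNT :: GrammarM NonTerminal
  freshNT = do
    (k, ps) <- get
    put (k + 1, ps)
    pure ("Y" ++ show k)

  addProd :: NonTerminal -> Terminal -> GWord -> GrammarM ()
  addProd y a zs = modify (\(k, ps) -> (k, Map.insert (y, a) zs ps))

  -- run a translation action, returning the produced word and the final productions
  runTranslation :: GrammarM GWord -> (GWord, Productions)
  runTranslation m = let (w, (_, ps)) = runState m (0, Map.empty) in (w, ps)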
Consider again the types for tree serialization in <ref>. Suppose we want to know whether 𝖲𝖥𝗎𝗅𝗅𝖳𝗋𝖾𝖾0 is a subtype of 𝖲𝖳𝗋𝖾𝖾. The following productions form the grammar generated for these types, with X_0 and Y_0 as their starting words.
X_0 → X_1
X_0 → X_5
X_1 →𝖭𝗈𝖽𝖾 X_2 X_3 X_2
X_1 →
X_2 →𝖤𝗆𝗉𝗍𝗒
X_2 →
X_3 → X_4
X_3 →
X_4 →
X_5 →
Y_0 → Y_1
Y_0 → X_4
Y_0 →
Y_1 →
Y_1 →𝖤𝗆𝗉𝗍𝗒
Y_1 →𝖭𝗈𝖽𝖾Y_1 X_3 Y_1
For the rest of this section let ⊢ T and ⊢ U, let (X⃗_T, 𝒫') be the word and productions generated from T starting from the empty set of productions, and let (X⃗_U, 𝒫) be those generated from U starting from 𝒫'.
[Soundness for grammars]
If X⃗_T ≤_𝒫 X⃗_U, then T is a subtype of U.
Context-free session types may include communication actions that are never reached, namely actions occurring after a type that never terminates. The second phase of the algorithm consists of pruning the corresponding symbols from the generated grammar; we write prune(𝒫) for the grammar obtained by pruning 𝒫. More details on this procedure can be found in <ref>.
[Pruning preserves 𝒳𝒴𝒵𝒲-similarity]
X⃗ ≲_𝒫 Y⃗ iff X⃗ ≲_prune(𝒫) Y⃗.
In its third and final phase the algorithm explores an expansion tree, alternating between expansion and simplification steps. An expansion tree is defined as a tree in which nodes are sets of pairs of words, the root is the singleton set containing the pair of starting words being tested, and every child is an expansion of its parent. A branch is deemed successful if it is infinite or has an empty leaf, and deemed unsuccessful otherwise. The original definition of expansion ensures that the union of all nodes along a successful branch constitutes a bisimulation <cit.>. We adapt this definition to ensure that such a union yields an 𝒳𝒴𝒵𝒲-simulation instead.
The 𝒳𝒴𝒵𝒲-expansion of a node N is defined as the minimal set N' such that, for every pair (X⃗,Y⃗) in N, it holds that:
* if X⃗ --a--> X⃗' and a ∈ 𝒳, then Y⃗ --a--> Y⃗' with (X⃗', Y⃗') ∈ N';
* if Y⃗ --a--> Y⃗' and a ∈ 𝒴, then X⃗ --a--> X⃗' with (X⃗', Y⃗') ∈ N';
* if X⃗ --a--> X⃗' and a ∈ 𝒵, then Y⃗ --a--> Y⃗' with (Y⃗', X⃗') ∈ N';
* if Y⃗ --a--> Y⃗' and a ∈ 𝒲, then X⃗ --a--> X⃗' with (Y⃗', X⃗') ∈ N'.
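Building on the grammar encoding sketched earlier (GWord, step, enabled are our illustrative names), the expansion of a node can be computed as follows; in a simple grammar each word has at most one a-successor, so the existential requirements of the definition reduce to a lookup, and a result of Nothing signals that the node fails to expand.

  import qualified Data.Set as Set
  import Data.Set (Set)

  -- a node of the expansion tree is a set of pairs of words
  type Node = Set (GWord, GWord)

  -- XYZW-expansion of a single pair; Nothing means the pair cannot be expanded
  expandPair :: Productions
             -> Set Terminal -> Set Terminal -> Set Terminal -> Set Terminal
             -> (GWord, GWord) -> Maybe [(GWord, GWord)]
  expandPair prods xs ys zs ws (wT, wU) = concat <$> sequence (clX ++ clY ++ clZ ++ clW)
    where
      clX = [ fmap (\u' -> [(t', u')]) (step prods wU a)
            | a <- enabled prods wT, Set.member a xs, Just t' <- [step prods wT a] ]
      clY = [ fmap (\t' -> [(t', u')]) (step prods wT a)
            | a <- enabled prods wU, Set.member a ys, Just u' <- [step prods wU a] ]
      clZ = [ fmap (\u' -> [(u', t')]) (step prods wU a)
            | a <- enabled prods wT, Set.member a zs, Just t' <- [step prods wT a] ]
      clW = [ fmap (\t' -> [(u', t')]) (step prods wT a)
            | a <- enabled prods wU, Set.member a ws, Just u' <- [step prods wU a] ]

  -- expansion of a node: the union of the pairs required for each of its pairs
  expandNode :: Productions
             -> Set Terminal -> Set Terminal -> Set Terminal -> Set Terminal
             -> Node -> Maybe Node
  expandNode prods xs ys zs ws n =
    Set.fromList . concat <$> mapM (expandPair prods xs ys zs ws) (Set.toList n)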
[Safeness property for 𝒳𝒴𝒵𝒲-similarity]
Given a set of productions 𝒫, X⃗ ≲_𝒫 Y⃗ iff the expansion tree rooted at {(X⃗, Y⃗)} has a successful branch.
The simplification step consists of applying rules that safely modify the expansion tree during its construction, in an attempt to keep it finite. The rules are iteratively applied to each node until a fixed point is reached, at which point we can proceed with expansion. The equivalence algorithm applies four simplification rules: Reflexivity, Congruence, BPA1 and BPA2 <cit.>. Since 𝒳𝒴𝒵𝒲-similarity is not a congruence (or even a precongruence), we replace the Congruence rule with a more general Preorder rule: omit from a node N any pair that belongs to the least preorder containing the ancestors of N. The remaining rules are left unchanged.
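The Preorder rule only requires the least preorder containing a finite set of pairs, which is the reflexive-transitive closure of that set over the words occurring in it. A small Haskell sketch (helper names are ours) follows.

  import qualified Data.Set as Set
  import Data.Set (Set)

  -- least preorder (reflexive-transitive closure) containing a finite relation
  preorderClosure :: Ord a => Set a -> Set (a, a) -> Set (a, a)
  preorderClosure dom rel = go (Set.union rel (Set.map (\x -> (x, x)) dom))
    where
      go r =
        let new = Set.fromList [ (x, z) | (x, y)  <- Set.toList r
                                        , (y', z) <- Set.toList r, y == y' ]
            r'  = Set.union r new
        in if r' == r then r else go r'

  -- the Preorder rule: drop from a node every pair already implied by its ancestors
  applyPreorderRule :: Ord a => Set a -> Set (a, a) -> Set (a, a) -> Set (a, a)
  applyPreorderRule dom ancestors node = Set.difference node (preorderClosure dom ancestors)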
The algorithm explores the tree by breadth-first search using a queue, thus avoiding getting stuck in infinite branches, alternating between expansion and simplification steps until it terminates with False if all nodes fail to expand or with True if an empty node is reached. The following pseudo-code illustrates the procedure; we write subG for the word-level check, and singleton, isEmpty, front, enqueue and dequeue for the usual queue operations.
subG(X⃗, Y⃗, 𝒫) = explore(singleton(({(X⃗, Y⃗)}, ∅)), 𝒫)
where explore(q, 𝒫) =
  if isEmpty(q) then False
  else let (n, a) = front(q) in
    if n = ∅ then True
    else if not isLeafNode(n, 𝒫)
      then explore(enqueue({(expand(n, 𝒫), a ∪ n)}, dequeue(q)), 𝒫)
      else explore(dequeue(q), 𝒫)
The 𝒳𝒴𝒵𝒲-expansion tree for <ref> is illustrated in <ref>.
Finally, a top-level function puts all the pieces of the algorithm together (writing word for the translation procedure above):
subtype(T, U) = let (X⃗, 𝒫') = word(T, ∅); (Y⃗, 𝒫) = word(U, 𝒫') in subG(X⃗, Y⃗, prune(𝒫))
It receives two well-formed types T and U, computes their grammar and the respective starting words X⃗ and Y⃗, prunes the productions of the grammar and, lastly, uses subG to determine whether X⃗ ≤_prune(𝒫) Y⃗.
The following result shows that the algorithm is sound with respect to the meta-theory of functional and higher-order context-free session types.
[Soundness]
If subG(X⃗_T, X⃗_U, prune(𝒫)) returns True, then T is a subtype of U.
§ EVALUATION
We have implemented our subtyping algorithm in Haskell and integrated it in the freely available compiler for FreeST, a statically typed functional programming language featuring message-passing channels governed by context-free session types <cit.>. The FreeST compiler features a running implementation of the type equivalence algorithm of Almeida et al. <cit.>. With our contributions, FreeST effectively gains support for subtyping at little to no cost in performance. In this section we present an empirical study to support this claim.
We employed three test suites to evaluate the performance of our algorithm: a suite of handwritten pairs of types, a suite of randomly generated pairs of types, and a suite of handwritten FreeST programs. We focus on the last two, since they allow a more robust and realistic analysis. All data was collected on a machine featuring an Intel Core i5-6300U at 2.4GHz with 16GB of RAM.
To build our randomly generated suite we employed a type generation module, implemented using the Quickcheck library <cit.> and following an algorithm induced from the properties of subtyping, much like the one induced by Almeida et al. from the properties of bisimilarity <cit.> (its description can be found in <ref>). It includes generators for valid and invalid subtyping pairs. We conducted our evaluation by taking the running time of the algorithm on 2000 valid pairs and 2000 invalid pairs, ranging from 2 to 730 total AST nodes, with a timeout of 30s. The results are plotted in <ref>. We encountered no false negatives, but obtained 200 timeouts. We found, as expected, that the running time increases considerably with the number of nodes. When a result was produced, valid pairs took generally longer than invalid pairs. On the other hand, we found that invalid pairs produced the largest number of timeouts, with 186 of 200 attributed to them.
Randomly generated types allow for a robust analysis, but they typically do not reflect the types encountered by a subtyping algorithm in its most obvious practical application, a compiler. For this reason, we turn our attention to our suite of FreeST programs, comprised of 288 valid and invalid programs collected throughout the development of the FreeST language. Programs range from small examples demonstrating particular features of the language to concurrent applications simulating, for example, an FTP server.
We began by integrating the algorithm in the FreeST compiler, placing next to every call to the original algorithm <cit.> (henceforth 𝖾𝗊𝗎𝗂𝗏𝖳) a call to 𝗌𝗎𝖻𝖳 on the same pairs of types. We then ran each program in our suite 10 times, collecting and averaging the accumulated running time of both algorithms on the same pairs of types. We then took the difference between the average accumulated running times of 𝗌𝗎𝖻𝖳 and 𝖾𝗊𝗎𝗂𝗏𝖳, obtaining an average difference of 0.149ms, with a standard deviation of 1.563ms, a minimum difference of -3.913ms and a maximum difference of 6.114ms. <ref> illustrates this comparison by plotting against each other the accumulated running times (for clarity, only those between 20ms and 40ms) of both algorithms during the typechecking phase of each program.
The data collected in this evaluation suggests that replacing the original equivalence algorithm <cit.> with the subtyping algorithm in a typechecker incurs no significant overhead, while providing additional expressive power for programmers. Given that we encountered no false negatives in our tests, we conjecture that our algorithm is partially correct: it may not halt, but when it does the answer is correct. However, we cannot back this claim without a careful analysis of completeness and termination, which we leave for future work. We believe such an analysis will be a substantial contribution in itself, and should advance the understanding of the subtyping problem by clarifying the practical reasons for its undecidability. Still, we must keep in mind that we are dealing with an undecidable problem: we cannot aim for a complete, totally correct algorithm.
Still, however promising these results, we cannot ignore the timeouts observed in the random tests. For this reason, it is worth considering a timeout policy in the implementation, or a mechanism to disable subtyping where it is inappropriate, with equivalence as a fallback.
§ RELATED WORK
Session types emerged as a formalism to express communication protocols and statically verify their implementations <cit.>. Their initial formulation allowed only pairwise, tail-recursive protocols, earning it the `binary' and `regular' epithets. Since then, considerable efforts have been made to extend the theory of session types beyond the binary and regular realms: multiparty session types allow sessions with multiple participants <cit.>, while context-free session types <cit.> and nested session types <cit.> allow non-regular communication patterns. Our work is centered on context-free session types, which have seen considerable development since their introduction, most notably their integration in System F <cit.>, a higher-order formulation <cit.>, as well as proposals for kind and type inference <cit.>.
Subtyping is a standard feature of many type systems, and the literature on the topic is vast <cit.>. Its conventional interpretation, based on the notion of substitutability, originates from the work of Liskov and Wing <cit.>. Multiple approaches to subtyping for regular session types have been proposed, and they can be classified according to the objects they consider substitutable: channels versus processes (the difference being most notable in the variance of type constructors).
The earliest approach, subscribing to the substitutability of channels, is that of Gay and Hole <cit.>. It is also the one we follow. A later formulation, proposed by Carbone et al. <cit.>, subscribes to the substitutability of processes. A survey of both interpretations is given by Gay <cit.>. Horne and Padovani study subtyping under the linear logic interpretation of session types <cit.>. Working on regular session types, they show that subtyping preserves termination of terms.
Subtyping for session types has spread beyond the regular realm. Das et al. introduce subtyping for nested session types, show the problem to be undecidable and present a sound but incomplete algorithm <cit.>. In the context-free setting, the first and, to the best of our knowledge, only formulation before our work is that of Padovani <cit.>. It proposes a simulation-based subtyping relation, proves the undecidability of the subtyping problem and provides a sound but incomplete implementation. Since it predates the higher-order formulation, the relation it proposes does not contemplate input/output subtyping nor functional types. Furthermore, its implementation relies heavily on the subtyping features of OCaml, the implementation language. In contrast, we propose a more expressive relation that allows arbitrary types in messages, input and output subtyping, as well as functional types. Furthermore, we provide an also sound but incomplete algorithm that is independent of the implementation language.
Our subtyping relation is based on a novel form of observational preorder, 𝒳𝒴𝒵𝒲-simulation. There is, as far as we know, no analogue in the literature. It is a generalization of 𝒳𝒴-simulation, introduced by Aarts and Vaandrager in the context of learning automata <cit.> but already known, under slightly different forms, as modal refinement <cit.>, alternating simulation <cit.> and covariant-contravariant simulation <cit.>. The contravariance on the derivatives introduced by 𝒳𝒴𝒵𝒲-simulation is also prefigured in contrasimulation <cit.>, but the former uses strong transitions whereas the latter uses weak ones. There is a vast literature on other observational relations, to which Sangiorgi's book provides an introduction <cit.>.
Our algorithm decides the 𝒳𝒴𝒵𝒲-similarity of simple grammars. It is an adaptation of the bisimilarity algorithm for simple grammars of Almeida et al. <cit.>. To our knowledge, these are the only running algorithms of their sort. On the related topic of basic process algebra (BPA), BPA processes have been shown to be equivalent to grammars in Greibach normal form <cit.>, of which simple grammars are a particular case. This makes results and algorithms for BPA processes applicable to grammars in Greibach normal form, and vice-versa. A bisimilarity algorithm for general BPA processes, of doubly-exponential complexity, has been proposed by Burkart et al. <cit.>, while an analogous algorithm for the special case of normed BPA processes, of polynomial complexity, has been proposed by Hirshfeld et al. <cit.>.
§ CONCLUSION AND FUTURE WORK
We have contributed an intuitive notion of subtyping for context-free session types, based on a novel form of observational preorder, 𝒳𝒴𝒵𝒲-simulation. This preorder inverts the direction of the simulation in the derivatives covered by its 𝒲 and 𝒵 parameters, allowing it to handle co/contravariant features of input/output types. We plan to explore the properties of this preorder, as well as other domains and applications where it may be useful.
We take advantage of the fact that 𝒳𝒴𝒵𝒲-simulation generalizes bisimulation to derive a sound subtyping algorithm from an existing type equivalence algorithm. Despite the unavoidable incompleteness of our algorithm, an empirical analysis shows it to be reliable enough to be incorporated in a compiler.
As shown by Thiemann and Vasconcelos <cit.>, support for polymorphic recursion is paramount in practical applications of context-free session types. The lack of polymorphism in our system makes our contributions no less applicable, since subtyping and polymorphism can be taken as orthogonal. However, the combination of these two features in the form of bounded quantification results in more expressive (and theoretically challenging) type systems. The interaction between bounded polymorphism and session types has already been explored by Gay in a regular setting <cit.>. Its promotion to a context-free setting is therefore another avenue for future work.
§ PRELIMINARIES
§.§ Type formation
In this section we introduce the rules for type formation, which ensure types are closed (no free references) and contractive. For convenience, type formation also ensures that types under a μ binder are not equivalent to 𝖲𝗄𝗂𝗉 and that all type references introduced by such binders are pairwise distinct.
The rules for the contractivity judgement Tx (“T is contractive on reference x”) can be found in <ref> and depend, in turn, on judgement T (“T is terminated”; rules also in <ref>) to characterize types that exhibit no communication action.
The type formation judgement T is defined by the smallest congruence relation on types that includes the rules in <ref> (congruence rules omitted). We understand the notation Δ,x as requiring x ∉Δ.
§.§ Substitution
The following lemma shows that we can discard type references from type formation contexts, under the assumption that they do not occur free in the type in question.
Let x ∉ 𝖿𝗋𝖾𝖾(T). If Δ, x ⊢ T, then Δ ⊢ T.
By rule induction on the hypothesis.
The following lemma shows that substitution preserves the good properties of types: termination, contractivity and type formation. From it follows that these properties are also preserved by the function.
Suppose that Δ⊢ U.
* If Δ,xT and T, then xUT.
* If Δ,xT and T, then xUT.
* If Δ,x,y ⊢ T, T is contractive on y, and y ∉ 𝖿𝗋𝖾𝖾(U), then [U/x]T is contractive on y.
* If Δ,x ⊢ T, then Δ ⊢ [U/x]T.
* By rule induction on T.
* By structural induction on T. All cases are either straightforward or follow from the induction hypothesis.
* By rule induction on TY, using <ref>. All cases follow from the induction hypothesis except the case for rule C-Var, where we have T=z with z≠ x,y, and where the result follows from hypothesis zy.
* By rule induction on Δ,x⊢ T. For the case TF-Rec we have T=yV. The premises to the rule are V, Vy and Δ,x,y⊢ V. Induction on the third premise gives Δ,y⊢xUV. <ref> gives xUV, while <ref> gives xUVy. Rule TF-Rec gives Δ⊢y(xUV). Conclude with the definition of substitution. For the case TF-Var with T=y≠ x, we have x ∉𝖿𝗋𝖾𝖾(y). The result follows from hypothesis Δ,xy and strengthening. For TF-Var with T=x the result follows from the hypothesis Δ⊢ U.
§.§ Unraveling
By immediate inspection of the definition of the function (<ref>), we get the following characterization.
If T=(T), then T is one of , UmV, ℓT_ℓL, x, U, ℓS_ℓL, US, or . If T ≠(T), then T is one of xU, ℓS_ℓLS, (S_1S_2)S_3, S, S or (sS)R.
We can also define the notion of one-step unraveling for our types.
We say that a type T' is a one-step unraveling of another type T, denoted (T), if: T is a direct application of a type constructor, and T'=T; or T is not a direct application of a type constructor, and T' is obtained by one recursive call of the function, which attempts to bring a type constructor into the front of a type.
One example is (S)=S; another example is
((S_1S_2)S_3) = S_1(S_2S_3). Notice that T_0 is contractive iff any sequence T_0, T_1, ..., where T_i+1=(T_i), eventually stabilises in (T) (after finitely many steps).
Finally, we make some preliminary observations about the structure of type equivalence derivations. We can split the type equivalence rules in three groups.
* We will say that rules S-Unit, S-Arrow, S-Rcd, S-Vrt, S-In, S-Out, S-IntChoice, S-OutChoice, S-End, S-Skip, S-EndSeq1L, S-EndSeq1R, S-EndSeq2, S-InSeq1L, S-OutSeq1L, S-InSeq1R, S-OutSeq1R, S-InSeq2, S-OutSeq2, are progressing. These rules consume the types on both sides of the relation. In other words, if we apply one of these rules from judgement T U, we end up with judgements T' U' where T',U' are both proper subterms of T,U. Moreover, these rules can be applied iff T = (T) and U = (U).
* We will say that rules S-RecL, S-SkipSeqL, S-ChoiceSeqL, S-SeqSeqL, S-RecSeqL are right-preserving. These rules change the type on the left-hand side of the relation, but the type on the right-hand side remains the same. These rules can be applied iff T ≠(T).
* We will say that rules S-RecR, S-SkipSeqR, S-ChoiceSeqR, S-SeqSeqR, S-RecSeqR are left-preserving. These rules change the type on the right-hand side of the relation, but the type on the left-hand side remains the same. These rules can be applied iff U ≠(U).
Also, by inspection of the rules, we can observe the following.
* If we can apply a progressing rule for judgement T U, then this is the only rule that can be applied.
* If we can apply a left-preserving rule for judgement T U, then this is the only left-preserving rule that can be applied (but we can possibly also apply a right-preserving rule).
* If we can apply a right-preserving rule for judgement T U, then this is the only right-preserving rule that can be applied (but we can possibly also apply a left-preserving rule).
* If we can apply a left-preserving rule as well as a right-preserving rule for judgement T U, then we can apply them one after the other (in any order); and moreover, any successful derivation for T U must eventually apply both rules.
From the above discussion we can derive the following immediate results.
* Let T'=(T) for some type T.
* If Δ⊢ T, then Δ⊢ T'.
* T U iff T' U.
* TaU iff T'aU.
* Let T'=(T) for some type T.
* If Δ⊢ T, then Δ⊢ T'.
* T U iff T' U.
* TaU iff T'aU.
Sub-item 1.a is immediate by inspection of the type formation rules. Sub-item 1.b follows from the preceding discussion. Sub-item 1.c is immediate by inspection of the LTS rules. Item 2 follows from Item 1 since (T) is reached in a finite number of steps when defined.
§ PROOF OF <REF>
In this section we focus on the proof of <ref>:
*
We need to prove reflexivity and transitivity.
*Reflexivity We present a coinductive proof that T T for every type T s.t. ⊢ T. Consider the following relation.
ℛ = {(T,T) ⊢ T}
∪ {(T,T') ⊢ T and T'=(T)}
We shall prove that ℛ is backward-closed for the rules of syntactic subtyping. This will show that ℛ⊆ and, consequently, that T T for every type T.
Let (T,T)∈ℛ. We consider first the cases in which T fits a type constructor, i.e., T = (T). Given that T is well-formed, we have the following case analysis for it:
(Case T =): we apply axiom S-Unit to (T,T).
(Case T = UmV): We apply rule S-Arrow to (T,T), arriving at goals (U,U) and (V,V). Since the derivation of ⊢ T must use rule TF-Arrow, we also have that ⊢ U and ⊢ V, and therefore that (U,U),(V,V)∈ℛ.
(Case T = ℓT_ℓL): we apply rule S-Rcd and arrive at goals (T_k,T_k) for each k∈ L. The derivation of ⊢ T must use rule TF-RcdVrt, which implies that ⊢ T_k for each k ∈ L, which means that (T_k, T_k)∈ℛ for each k∈ L.
(Case T = ℓT_ℓL): analogous to case T = ℓT_ℓL.
(Case T = x): Cannot occur, for x is not well formed in the empty context.
(Case T =): we apply axiom S-End to (T,T).
(Case T = U): we apply rule S-In to (T,T), arriving at goal (U,U). Since the derivation of ⊢ T must use rule TF-Msg, we have that ⊢ U, and therefore that (U,U)∈ℛ.
(Case T = ℓT_ℓL): analogous to case T = ℓT_ℓL.
(Case T = ℓT_ℓL): analogous to case T = ℓT_ℓL.
(Case T =): we apply axiom S-Skip to (T,T).
(Case T = US): we apply rule S-InSeq2, arriving at goals (U,U),(S,S). The derivation of ⊢ T must use rule TF-Seq, implying ⊢U and Δ⊢ S. Moreover, the derivation of ⊢U must use rule TF-Msg, implying ⊢ U. Since ⊢ U and ⊢ S, we have (U,U),(S,S)∈ℛ. The case where T=US is similar.
(Case T = sS): Cannot occur, since T is assumed to equal its unraveling, and unraveling unfolds recursive types.
Next, we consider cases in which T ≠(T).
(Case T = xU): We apply rule S-RecR to (T,T), arriving at goal (T,T') where T'=xxUU. Since T'=(T), we have that (T,T')∈ℛ.
(Case T=S): We apply axiom S-EndSeq2.
(Case T = ℓS_ℓLR): we apply rule S-ChoiceSeqR to (T,T), arriving at goal (T,T') where T' = ℓS_ℓRL. Since T'=(T), we have that (T,T')∈ℛ.
(Case T = S): We apply rule S-SkipSeqR, arriving at goal (T,S). Since S=(T), we obtain that (T,S)∈ℛ.
(Case T = (S_1S_2)S_3): we apply rule S-SeqSeqR, arriving at goal (T,T') where T' = S_1(S_2S_3). Since T'=(T), we obtain that (T,T')∈ℛ.
(Case T = (sS)R): we apply rule S-RecSeqR to (T,T), arriving at goal (T,T'), where T'=(ssSS)R. Since T'=(T), we have that (T,T')∈ℛ.
Next, we must consider cases (T,T')∈ℛ where T≠ T', which means, by definition, that T'=(T) and therefore that T ≠(T). Given that ⊢ T, we have the following case analysis for T.
(Case T = xU): From <ref> follows that ⊢xxUU. Since T'=(T), we know T'=xUxU. We apply rule S-RecL to (T,T'), arriving at goal (T',T')∈ℛ.
(Case T=S): then T'=. We apply axiom S-EndSeq1L to (T,T').
(Case T = ℓS_ℓLR): the derivation of ⊢ T must use rules TF-Seq and TF-Choice, implying that ⊢ S_k for each k∈ L and ⊢ U. Again by rule TF-Seq, we get that ⊢S_kR for each k∈ L and thus, by rule TF-Choice, ⊢ℓS_ℓRL. Since T'=(T), we know that T'=ℓS_ℓRL. We apply rule S-ChoiceSeqL to (T,T'), arriving at goal (T',T')∈ℛ.
(Case T = S): the derivation of ⊢ T must use rule TF-Seq, implying that ⊢ S. Since T'=(T), we know that T'= S. We apply rule E-SkipSeqL to (T,T'), arriving at goal (T',T')∈ℛ.
(Case T = (S_1S_2)S_3): the derivation of ⊢ T must use rule TF-Seq, hence ⊢ S_1, ⊢ S_2, ⊢ S_3. Therefore, by rule TF-Seq also, we have ⊢S_1(S_2S_3). Since T'=(T), we know that T'= S_1(S_2S_3). We apply rule S-SeqSeqL to (T,T'), arriving at goal (T',T') ∈ℛ.
(Case T = (sS)R): the derivation of ⊢ T must use rule TF-Seq, implying that ⊢sS and ⊢ R. From <ref> follows that ⊢ssSS. By rule TF-Seq we get that ⊢(ssSS)R. Since T'=(T), we know that T'=(ssSS)R. We apply rule S-RecSeqL to (T,T'), arriving at goal (T',T')∈ℛ.
*Transitivity We now prove by coinduction that, for all types T,U,V with ⊢ T, ⊢ U, ⊢ V, if T U and U V, then T V. Consider the following relation.
ℛ = {(T,V)⊢ T, ⊢ V and there exists U s.t. ⊢ U, T U and U V}
We shall prove that ℛ is backward closed for the rules of the syntactic subtyping relation. This will show that R ⊆, giving the desired property.
Firstly, we can assume that there are no left-preserving rules applicable to judgements T U. If there was such a rule, then its symmetric counterpart could be applied to judgement U V, and we would get a different U' for which T U' and U' V. Without loss of generality, our derivation for T U starts with a (finite) sequence of left-preserving rules reaching a type U' that can only be consumed. The symmetric sequence of right-preserving rules could then be applied to the derivation for U V to reach the same type U'. Thus we can just assume that our type U is already in a form that must be consumed.
Secondly, suppose a derivation for T U starts with a right-preserving rule. After this rule we get the judgement T' U for some type T', with U remaining the same. But then we can apply the corresponding rule to (T,V), arriving at goal (T',V), which is in ℛ since T' U and U V. The case in which U V starts with a left-preserving rule can be handled in a similar way.
The remaining possibility is that both derivations for T U and U V start with a progressing rule. Here we need to split our analysis into several cases, depending on which rule is at the start of the derivation for T U.
(Case S-Unit): then T= and U=. The only progressing rule that can be applied at U V is also S-Unit, implying that V= as well. Therefore, we can apply axiom S-Unit to (T,V). The case for S-Skip is similar.
(Case S-Arrow): then T = T_1mT_2 and U = U_1nU_2 for some T_1,T_2,U_1,U_2,m,n. Furthermore, we have U_1 T_1 and T_2 U_2 and m ⊑ n. The only progressing rule that can be applied to U V is also S-Arrow, implying that V = V_1oV_2 for some V_1,V_2,o. Furthermore, we have V_1 U_1, U_2 V_2 and n ⊑ o. By transitivity of ⊑ we obtain m ⊑ o. We apply rule S-Arrow to (T,V), arriving at goals (V_1,T_1),(T_2,V_2)∈ℛ.
(Case S-Rcd): then T=ℓT_ℓL and U=kU_kK for some L,K,T_i,U_j,i∈ L, j∈ K. Furthermore, we have K⊆ L, T_j U_j for j ∈ K. The only progressing rule that can be applied to U V is also S-Rcd, implying that V = hV_hH for some H,V_h,h∈ H. Furthermore, we have H ⊆ K U_h V_h for each h ∈ H. By transitivity of ⊆ we get H ⊆ L. We apply rule S-Rcd to (T,V), arriving at goals (T_h, V_h)∈ℛ for each h∈ H. Case S-IntChoice is similar.
(Case S-Vrt): then T=ℓT_ℓL and U=kU_kK for some L,K,T_i,U_j,i∈ L, j∈ K. Furthermore, we have L⊆ K, T_j U_j for j ∈ K. The only progressing rule that can be applied to U V is also S-Vrt, implying that V = hV_hH for some H,V_h,h∈ H. Furthermore, we have K ⊆ H, U_h V_h for each h ∈ H. By transitivity of ⊆ we get L ⊆ H. We apply rule S-Vrt to (T,V), arriving at goals (T_h, V_h)∈ℛ for each h∈ H. Case S-ExtChoice is similar.
(Case S-End): then T= and U =. The two possible progressing rules for U V are S-End and S-EndSeq1R. In the first case we have V=, so we apply S-End to (T,V). In the second case we have V=S for some S, so we apply rule S-EndSeq1R to (T,V).
(Case S-In): then T=T' and U=U' for some T',U'. Furthermore, we have T' U'. The two possible progressing rules for U V are S-In and S-InSeq1R. In the first case, we have V=V' for some V'. It follows that U' V'. We apply rule S-In to (T,V), arriving at goal (T',V')∈ℛ. In the second case, we have V = V'V” for some V' and V”. It follows that U' V' and V”. We apply rule S-InSeq1R to (T,V), arriving at goal (T',V')∈ℛ. The cases S-InSeq1L, S-InSeq1R, S-InSeq2 are handled similarly.
(Case S-Out): then T=T' and U=U' for some T',U'. Furthermore, we have U' T'. The two possible progressing rules for U V are S-Out and S-OutSeq1R. In the first case, we have V=V' for some V'. It follows that V' U'. We apply rule S-Out to (T,V), arriving at goal (V',T')∈ℛ. In the second case, we have V = V'V” for some V' and V”. It follows that V' U' and V”. We apply rule S-OutSeq1R to (T,V), arriving at goal (V',T')∈ℛ. The cases S-OutSeq1L, S-OutSeq1R, S-OutSeq2 are handled similarly.
(Case S-EndSeq1L): then T = S and U= for some S. The two possible progressing rules for U V are S-End and S-EndSeq1R. In the first case we have V=, so we can apply rule S-EndSeq1L to (T,V). In the second case we have V=R for some R, so we can apply rule S-EndSeq2 to (T,V).
(Case S-EndSeq1R): then T= and U=S for some S. The two possible progressing rules for U V are S-EndSeq1L and S-EndSeq2. In the first case we have V =, so we can apply S-End to (T,V). In the second case we have V = R for some R, so we can apply S-EndSeq1R to (T,V).
(Case S-EndSeq2): then T=S and U=R for some S,R. The two possible progressing rules for U V are S-EndSeq1L and S-EndSeq2. In the first case we have V=, so we can apply S-EndSeq1L to (T,V). In the second case we have V=S' for some S', so we apply S-EndSeq2 to (T,V).
§ PROOF OF <REF>
Recall the statement of <ref>, presented in <ref>.
*
We analyse both directions of the biconditional separately.
*Direct implication
Consider the relation ℛ = {(T,U)⊢ T, ⊢ U and T U}.
We must show that ℛ is an 𝒳𝒴𝒵𝒲-simulation with 𝒳,𝒴,𝒵,𝒲 as defined in <ref>. This will show that ℛ⊆, and hence that T U implies T U.
The proof has two parts. First consider cases (T,U)∈ℛ s.t. (T)=T and (U)=U. We proceed by case analysis for the last rule in the derivation of T U, which must be a progressing rule.
(Case S-Unit): then T= and U=. The unique transition that can be applied to is (L-Unit). Since B∈,, we should have that if TBT' for some T', then UBU' for some U' with (T',U')∈ℛ, and also that if UBU' for some U', then TBT' for some T' with (T',U')∈ℛ. It is readily verifiable that the single transitions of both T and U match each other. That (,)∈ℛ follows from S-Skip and the definition of ℛ.
(Case S-Arrow): then T=T_1mT_2, U=U_1nU_2, U_1 T_1, T_2 U_2 and m⊑ n. The only transitions that can be applied to T are TT_1, TT_2 and, if m=, T (L-ArrowDom, L-ArrowRng and L-LinArrow). Similarly, the only transitions applicable to U are UU_1, UU_2 and, if n=, U.
Since ∈,, we should have that if TT' for some T', then UU' for some U' with (U',T')∈ℛ, and also that if UU' for some U', then TT' for some T' with (U',T')∈ℛ. That the transitions of T and U by match is readily verifiable, and that (U_1,T_1)∈ℛ is given by U_1 T_1 and the definition of ℛ.
Similarly, since ∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and also that if UU' for some U', then TT' for some T' with (T',U')∈ℛ. That the transitions of T and U by match is readily verifiable, and that (T_2,U_2)∈ℛ is given by T_2 U_2 and the definition of ℛ.
Finally, as ∈, we need to show that if TT' for some T', then UU' for some U' with (T',U')∈ℛ. As we have seen, T can have at most one such transition, T, and only in the case where m=. From m⊑ n follows that n=. From this follows that U. We arrive at pair (, ), which is obviously in ℛ.
(Case S-Rcd): then T=ℓT_ℓL, U=kU_kK and K⊆ L. The only transitions that can be applied to T are T and TiT_i for each i∈ L (L-RcdVrt and L-RcdVrtField). Similarly, the only transitions that can be applied to U are U and UjU_j for each j∈ K. It is clear from the rules of type formation that ⊢ T_i and ⊢ U_j for each i∈ L, j∈ K and, because S-Rcd was used, we know T_j U_j for each j ∈ K.
Since ∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and also that if UU' for some U', then TT' for some T' with (T',U')∈ℛ. That the transitions of T and U by match is readily verifiable, and that (,)∈ℛ is also evident.
Finally, since j∈ for each j∈ K, we need to show that if UjU' for some U', then TjT' for some T' with (T',U')∈ℛ. As K⊆ L, it is readily verifiable that T matches every transition of U by j for each j∈ K. That (T_j,U_j)∈ℛ follows by T_j U_j for each j ∈ K and the definition of ℛ.
(Case S-Vrt): then T=ℓT_ℓL, U=kU_kK and L⊆ K. The only transitions that can be applied to T are T and TiT_i for each i∈ L (L-RcdVrt and L-RcdVrtField). Similarly, the only transitions that can be applied to U are U and UjU_j for each j∈ K. It is clear from the rules of type formation that ⊢ T_i and ⊢ U_j for each i∈ L, j∈ K and, because S-Vrt was used, we know T_i U_i for each i ∈ L.
Since ∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and also that if UU' for some U', then TT' for some T' with (T',U')∈ℛ. That the transitions of T and U by match is readily verifiable, and that (,)∈ℛ is also evident.
Finally, since i∈ for each i∈ L, we need to show that if TiT' for some T', then UiU' for some U' with (T',U')∈ℛ. As L⊆ K, it is readily verifiable that U matches every transition of T by i for each i∈ L. That (T_i,U_i)∈ℛ follows by T_i U_i for each i ∈ L and the definition of ℛ.
(Case S-End): analogous to case S-Unit.
(Case S-In): then T=T' and U=U'. The only transitions that can be applied to T are TT' and T (L-Msg1, L-Msg2). Similarly, the only transitions that can be applied to U are UU' and U. It is clear from the rules of type formation that ⊢ T' and ⊢ U'. Furthermore, because S-In was used, T' U'.
Since ,∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and similarly for . We must also have that if UU' for some U', then TT' for some T' with (T',U')∈ℛ, and similarly for . That the transitions of T and U by and match is readily verifiable, and (T',U'),(,)∈ℛ follows from T' U', S-Skip and from the definition of ℛ.
(Case S-Out): then T=T' and U=U'. The only transitions that can be applied to T are TT' and T (L-Msg1, L-Msg2). Similarly, the only transitions that can be applied to U are UU' and U. It is clear from the rules of type formation that ⊢ T' and ⊢ U'. Furthermore, because S-Out was used, U' T'.
Since ∈,, we should have that if TT” for some T”, then UU” for some U” with (U”,T”)∈ℛ, and also that if UU” for some U”, then TT” for some T” with (U”,T”)∈ℛ. That the transitions of T and U by match is readily verifiable, and that (U',T')∈ℛ is given by U' T' and the definition of ℛ.
Finally, since ∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and also that if UU' for some U', then TT' for some T' with (T',U')∈ℛ. That the transitions of T and U by match is readily verifiable, and that (,)∈ℛ follows from S-Skip and the definition of ℛ.
(Case S-ExtChoice): analogous to case S-Vrt.
(Case S-IntChoice): analogous to case S-Rcd.
(Case S-Skip): then T= and U=. Since no transitions apply to , the conditions for 𝒳𝒴𝒵𝒲-simulation trivially hold.
(Case S-EndSeq1L): then T=S and U=. The only transition that can be applied to T is T (L-EndSeq). Similarly, the only transition that can be applied to U is U (L-End).
Since ∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and also that if UU' for some U', then TT' for some T' with (T',U')∈ℛ. That the transitions of T and U by match is readily verifiable, and (,)∈ℛ follows from S-Skip and the definition of ℛ.
(Case S-EndSeq1R, S-EndSeq2): analogous to case S-EndSeq1L.
(Case S-InSeq1L): then T=T'S and U=U'. The only transitions that can be applied to T are TT' and TS (L-MsgSeq1, MsgSeq2). Similarly, the only transitions that can be applied to U are UU' and U. It is clear from the rules of type formation that ⊢ T',⊢ U' and ⊢ S. Furthermore, because S-InSeq1L was used, T' U' and S.
Since ,∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and similarly for . For the same reason, we must also have that if UU' for some U', then TT' for some T' with (T',U')∈ℛ, and similarly for . That the transitions of T and U by and match is readily verifiable, and (T',U'),(S,)∈ℛ follows from T' U', from S and from the definition of ℛ.
(Case S-InSeq1R, S-InSeq2): analogous to case S-InSeq1L.
(Case S-OutSeq1L): then T=T'S and U=U'. The only transitions that can be applied to T are TT' and TS (L-MsgSeq1, MsgSeq2). Similarly, the only transitions that can be applied to U are UU' and U. It is clear from the rules of type formation that ⊢ T', ⊢ U' and ⊢ S. Furthermore, because S-OutSeq1L was used, U' T' and S.
Since ∈,, we should have that if TT” for some T”, then UU” for some U” with (U”,T”)∈ℛ, and also that if UU” for some U”, then TT” for some T” with (U”,T”)∈ℛ. That the transitions of T and U by match is readily verifiable, and that (U',T')∈ℛ is given by U' T' and the definition of ℛ.
Finally, since ∈,, we should have that if TT' for some T', then UU' for some U' with (T',U')∈ℛ, and also that if UU' for some U', then TT' for some T' with (T',U')∈ℛ. That the transitions of T and U by match is readily verifiable, and (S,)∈ℛ follows from S and the definition of ℛ.
(Case S-OutSeq1R, S-OutSeq2): analogous to case S-OutSeq1L.
Now consider that T ≠(T). From T U follows that (T) U, resulting from the application of right-preserving rules. If U=(U), then the above case analysis shows that the conditions for 𝒳𝒴𝒵𝒲-simulation between (T) and U hold. Since T and (T) have the same transitions, i.e., TaT' iff (T)aT' for some T', the conditions for 𝒳𝒴𝒵𝒲-simulation between T and U also hold. Otherwise, if U≠(U), it similarly follows that (T)(U), resulting from the application of left-preserving rules. The previous case analysis shows that the conditions for 𝒳𝒴𝒵𝒲-simulation between (T) and (U) hold. Since U and (U) have the same transitions, the same conditions also hold between T and U. The case with T=(U), U≠(U) is analogous.
*Reverse implication
Consider the relation 𝒮 = {(T,U)⊢ T, ⊢ U and T U}.
We prove that relation 𝒮 is backward closed for the rules of the syntactic subtyping relation. This will show that 𝒮⊆, and hence that T U implies T U.
The proof has two parts. First, consider the cases where both T and U fit a type constructor, i.e., T=(T) and U=(U). We proceed by case analysis on the structure of T.
(Case T=): the only transition that applies to T is T. Since T U and U=(U), then U=. Therefore we can apply S-Unit.
(Case T=T_1mT_2): Two transitions apply to T regardless of m: TT_1 and TT_2. Since T U and U = (U), we know that U = U_1nU_2 and that, regardless of n, UU_1 and UU_2. Furthermore, we know that U_1 T_1 (since ∈,) and that T_2 U_2 (since ∈,).
Before we apply S-Arrow, we need to have m⊑ n. We know that T iff m= and that U iff n=. Recall that ∈, which means that the only case where n⋢m (m=1 and n=*) cannot occur, for it would contradict T U (since U could not match a transition of T by a label in ).
We can therefore apply S-Arrow, arriving at (U_1,T_1),(T_2,U_2)∈𝒮.
(Case T=ℓT_ℓL): the only transitions that can be applied to T are T and TiT_i for each i∈ L. Since T U and U=(U), we have U = kU_kK with transitions U and UjU_j for j ∈ K. Since labels of the form ℓ belong to , we know that K⊆ L, for T must be able to match all transitions of U by j for each j∈ K. From this we obtain T_j U_j for each j∈ K, arriving at (T_j,U_j)∈𝒮 for each j∈ K.
(Case T=ℓT_ℓL): the only transitions that can be applied to T are T and TiT_i for each i∈ L. Since T U and U=(U), we have U = kU_kK with transitions U and UjU_j for j ∈ K. Since labels of the form ℓ belong to , we know that L⊆ K, for U must be able to match all transitions of T by i for each i∈ L. From this we obtain T_i U_i for each i∈ L, arriving at (T_i,U_i)∈𝒮 for each i∈ L.
(Case T=x): cannot occur, since T.
(Case T=): analogous to the case where T=B.
(Case T=T'): the only transitions applicable to T are TT' and T. Since T U and U = (U), then either U=U' or U=U'V with V. In either case, T' U'. In the first case, we can apply S-In, arriving at (T',U')∈𝒮. In the second case we can apply S-InSeq1R, arriving at (T',U'), (V,)∈𝒮.
(Case T=T'): the only transitions applicable to T are TT' and T. Since T U and U = (U), then either U=U' or U=U'V with V. In either case, U' T'. In the first case, we can apply S-Out, arriving at (U',T')∈𝒮. In the second case we can apply S-OutSeq1R, arriving at (T',U'),(V,)∈𝒮.
(Case T=ℓS_ℓL): analogous to case T=ℓT_ℓL.
(Case T=ℓS_ℓL): analogous to case T=ℓT_ℓL.
(Case T=): no transitions apply to T. Since T U and U=(U), then U=. Therefore we can apply S-Skip.
(Case T=T_1T_2): the only transitions applicable to T are TT_1 and TT_2. Since T U and U = (U), then either U=U_1 or U=U_1U_2. In the first case, T_1 U_1 and T_2, which implies that T_2; we can apply S-InSeq1L, arriving at (T_1,U_1)∈𝒮. In the second case, T_1 U_1 and T_2 U_2; we can apply S-InSeq2, arriving at (T_1,U_1),(T_2,U_2)∈𝒮.
(Case T=T_1T_2): the only transitions applicable to T are TT_1 and TT_2. Since T U and U = (U), then either U=U_1 or U=U_1U_2. In the first case, U_1 T_1 and T_2, which implies that T_2; we can apply S-OutSeq1L, arriving at (U_1,T_1)∈𝒮. In the second case, U_1 T_1 and T_2 U_2; we can apply S-OutSeq2, arriving at (U_1,T_1),(T_2,U_2)∈𝒮.
Now consider that T ≠(T). From T U it follows that T' U where T'=(T), due to the fact that T and T' have the same transitions (<ref>). Then we can apply an appropriate right-preserving rule to (T,U), arriving at (T',U)∈𝒮. The case with T=(T),U≠(U) is analogous.
§ PRUNING
The grammars generated by procedure may contain unreachable words, which can be ignored by the algorithm. Intuitively, these words correspond to communication actions that cannot be fulfilled, such as subterm in type (ss). Formally, these words appear in productions following what are known as unnormed words.
Let a⃗ be a non-empty sequence of non-terminal symbols a_1,…,a_n. Write Y⃗a⃗Z⃗ when Y⃗a_1…a_nZ⃗. We say that a word Y⃗ is normed if Y⃗a⃗ε for some a⃗, and unnormed otherwise. If Y⃗ is normed and a⃗ is the shortest path such that Y⃗a⃗ε, then a⃗ is called the minimal path of Y⃗, and its length is the norm of Y⃗, denoted norm(Y⃗).
It is known that any unnormed word Y⃗ is bisimilar to its concatenation with any other word, i.e., if Y⃗ is unnormed, then Y⃗∼_Y⃗X⃗. In this case, X⃗ is said to be unreachable and can be safely removed from the grammar. We call the procedure of removing all unreachable symbols from a grammar pruning, and denote the pruned version of a grammar by ().
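Norms can be computed by the usual least fixed point, after which pruning truncates every right-hand side at its first unnormed symbol. The following Haskell sketch reuses the grammar encoding introduced with the algorithm (Productions, GWord and related names are ours) and is only meant to illustrate the procedure.

  import qualified Data.Map as Map
  import Data.Map (Map)

  -- norms of the normed non-terminals: length of a minimal path to the empty word
  norms :: Productions -> Map NonTerminal Int
  norms prods = go Map.empty
    where
      go m =
        let candidates = Map.fromListWith min
              [ (y, 1 + sum ns)
              | ((y, _), zs) <- Map.toList prods
              , Just ns <- [mapM (`Map.lookup` m) zs] ]
            m' = Map.unionWith min m candidates
        in if m' == m then m else go m'

  -- pruning: drop every symbol occurring after an unnormed symbol in a right-hand side
  prune :: Productions -> Productions
  prune prods = Map.map cut prods
    where
      normed = norms prods
      cut zs = case break (\z -> not (Map.member z normed)) zs of
                 (prefix, [])           -> prefix
                 (prefix, unnormed : _) -> prefix ++ [unnormed]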
*
For the direct implication, the 𝒳𝒴𝒵𝒲-simulation for X⃗ and Y⃗ over is also an 𝒳𝒴𝒵𝒲-simulation for X⃗ and Y⃗ over (). For the reverse implication, if ℛ' is an 𝒳𝒴𝒵𝒲-simulation for X⃗ and Y⃗ over (P), then relation
ℛ = ℛ' ∪{(V⃗ W, V⃗ W Z⃗) | (W →V⃗ W Z⃗) ∈, W unnormed}
is an 𝒳𝒴𝒵𝒲-simulation for X⃗ and Y⃗ over 𝒫.
§ THE SIMPLIFICATION RULES ARE SAFE WITH RESPECT TO 𝒳𝒴𝒵𝒲-SIMILARITY (<REF>)
We begin this section by recalling the safeness property for bisimilarity, and then move on to define the analogous safeness property for 𝒳𝒴𝒵𝒲-similarity, which our simplification rules maintain.
§.§ Safeness with respect to bisimilarity
According to Jančar and Moller <cit.>, a simplification rule is said to be safe with respect to bisimilarity if its application preserves the bisimulation from a parent node to at least one child node and, reciprocally, if the bisimulation on a child node implies the bisimulation of its parent node. More formally, a rule that is safe with respect to bisimulation maintains the following lemma.
For any node N ≠ ∅ and any n ∈ ℕ, it holds that N ⊆ ∼_𝒫^n+1 iff N has a child C ⊆ ∼_𝒫^n. As a consequence, N ⊆ ∼_𝒫 iff N has a child C ⊆ ∼_𝒫.
Here, ∼_𝒫^n denotes stratified bisimilarity for grammars (Jančar and Moller provide a general definition). Rules that maintain the previous lemma are considered safe because this lemma implies the safeness property for bisimilarity:
Given a set of productions 𝒫, X⃗ ∼_𝒫 Y⃗ iff the expansion tree rooted at {(X⃗, Y⃗)} has a successful branch.
Jančar and Moller also provide the safe simplification rules that are used in the bisimilarity algorithm of Almeida et al.:
* Congruence rule: omit from a node N any pair that belongs to the least congruence containing the ancestors of N;
* BPA1 rule: If (X_0X⃗, Y_0 Y⃗) is in N and (X_0 X⃗', Y_0 Y⃗') belongs to the ancestors of N , then create a sibling node for N replacing (X_0 X⃗, Y_0 Y⃗) by ( X⃗, X⃗') and (Y⃗ , Y⃗');
* BPA2 rule: If (X_0 X⃗, Y_0 Y⃗ ) is in N and X_0 and Y_0 are normed, then:
* Case norm(X_0) ≤norm(Y_0): Let a⃗ be a minimal path for X_0 and Z⃗ the word such that Y⃗_0a⃗Z⃗. Add a sibling node for N including the pairs (X_0 Z⃗, Y_0) and (X⃗, Z⃗Y⃗ ) in place of (X_0 X⃗, Y_0 Y⃗ );
* Otherwise: Let a⃗ be a minimal path for Y_0 and Z⃗ the word such that X⃗_0a⃗Z⃗. Add a sibling node for N including the pairs (X_0, Y_0 Z⃗) and ( Z⃗X⃗, Y⃗ ) in place of (X_0 X⃗, Y_0 Y⃗ ).
The Congruence rule is a concrete instance of a more abstract Omitting rule, also given by Jančar and Moller: omit from a node N any pair (X⃗, Y⃗) whenever there is M ⊆ N^⇑ such that for all n ∈ ℕ, if M ⊆ ∼_𝒫^n then X⃗ ∼_𝒫^n Y⃗.
§.§ Safeness with respect to 𝒳𝒴𝒵𝒲-similarity
We maintain the BPA rules in our algorithm, but since 𝒳𝒴𝒵𝒲-simulation is not a congruence, we cannot use the Congruence rule. We devise instead the Preorder rule, which is also a concrete instance of the Omitting rule. Hence all of our rules maintain the safeness property for bisimilarity. It is readily verifiable that they also maintain the analogous safeness property for 𝒳𝒴𝒵𝒲-similarity, on which we now elaborate.
Just as with bisimilarity, a rule that is safe with respect to 𝒳𝒴𝒵𝒲-similarity must preserve the 𝒳𝒴𝒵𝒲-simulation from a parent node to at least one child node and, reciprocally, maintain that the 𝒳𝒴𝒵𝒲-simulation on a child node implies the 𝒳𝒴𝒵𝒲-simulation of its parent node. Formally stated, the following lemma must hold.
For any node N ≠ ∅ and any n ∈ ℕ, it holds that N ⊆ ≲_𝒫,n+1 iff N has a child C ⊆ ≲_𝒫,n. As a consequence, N ⊆ ≲_𝒫 iff N has a child C ⊆ ≲_𝒫.
Here, ≲_𝒫,n denotes stratified 𝒳𝒴𝒵𝒲-similarity for GNF grammars, defined as follows.
For all types T, U:
* T ≲_0 U always;
* T ≲_n+1 U if:
* whenever T --a--> T' with a ∈ 𝒳, then U --a--> U' and T' ≲_n U';
* whenever U --a--> U' with a ∈ 𝒴, then T --a--> T' and T' ≲_n U';
* whenever T --a--> T' with a ∈ 𝒵, then U --a--> U' and U' ≲_n T';
* whenever U --a--> U' with a ∈ 𝒲, then T --a--> T' and U' ≲_n T'.
(Notice that 𝒳𝒴𝒵𝒲-similarity is the intersection over n ∈ ℕ of the stratified relations ≲_n.) The definition for GNF grammar words is similar, and is denoted by ≲_𝒫,n.
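Reusing the finite-LTS helpers from the sketch accompanying the definition of 𝒳𝒴𝒵𝒲-simulation (LTS, succs, actions, all illustrative names of ours), stratified similarity can be phrased directly as a bounded-depth check; as before, the relations of interest in this paper live on infinite-state systems, so this is only an illustration.

  import qualified Data.Set as Set
  import Data.Set (Set)

  -- stratified XYZW-similarity on a finite LTS, by induction on the depth n
  stratSim :: LTS -> Set Action -> Set Action -> Set Action -> Set Action
           -> Int -> State -> State -> Bool
  stratSim _   _  _  _  _  0 _ _ = True
  stratSim lts xs ys zs ws n t u =
       and [ any (\u' -> stratSim lts xs ys zs ws (n - 1) t' u') (succs lts u a)
           | a <- actions lts t, Set.member a xs, t' <- succs lts t a ]
    && and [ any (\t' -> stratSim lts xs ys zs ws (n - 1) t' u') (succs lts t a)
           | a <- actions lts u, Set.member a ys, u' <- succs lts u a ]
    && and [ any (\u' -> stratSim lts xs ys zs ws (n - 1) u' t') (succs lts u a)
           | a <- actions lts t, Set.member a zs, t' <- succs lts t a ]
    && and [ any (\t' -> stratSim lts xs ys zs ws (n - 1) u' t') (succs lts t a)
           | a <- actions lts u, Set.member a ws, u' <- succs lts u a ]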
As with bisimilarity, rules that maintain the previous lemma are considered safe with respect to 𝒳𝒴𝒵𝒲-similarity because it implies the analogous safeness property:
*
§ PROOF OF <REF> (SOUNDNESS)
In this section we prove the results of <ref> that culminate in the proof of soundness.
If subG(X⃗_T, X⃗_U, prune(𝒫)) returns True, then X⃗_T ≤_prune(𝒫) X⃗_U.
Function subG returns True whenever it reaches a finite successful branch (i.e., a branch terminating in an empty node) in the expansion tree rooted at {(X⃗_T, X⃗_U)}. Conclude with the safeness property (<ref>).
*
From <ref>, <ref> and <ref>.
§ GENERATING SUBTYPING PAIRS
We rely on a number of properties of subtyping to generate valid and invalid subtyping pairs. Before enumerating these properties, we introduce the following definition.
The sets of free references in covariant and contravariant positions in a type T, respectively 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T) and 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T), are defined by induction on the structure of T:
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(TmU) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T) ∪𝖿𝗋𝖾𝖾𝖢𝗈𝗏(U)
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(ℓT_ℓL) = ⋃_k∈ L𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T_k)
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T)
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T)
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(SR) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(S)∪𝖿𝗋𝖾𝖾𝖢𝗈𝗏(R)
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(xT) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T) ∖{x}
𝖿𝗋𝖾𝖾𝖢𝗈𝗏(x) = {x}
𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(TmU) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T) ∪𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(U)
𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(ℓT_ℓL) = ⋃_k∈ L𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T_k)
𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T)
𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T)
𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(SR) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(S)∪𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(R)
𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(xT) = 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T) ∖{x}
and in all other cases by 𝖿𝗋𝖾𝖾𝖢𝗈𝗏(T) = ∅ and 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺𝗏(T) = ∅, respectively.
The following theorem enumerates the properties. From it we can derive a generation algorithm for valid subtyping pairs, parameterized on the size i of the pair: if i=0, then select one of the pairs in <ref>; if i≥ 1, then select one of the pairs in the remaining items.
* , and ;
* TmUVnW if V T, U W and m⊑ n;
* ℓT_ℓLkU_kL if K ⊆ L and T_j U_j (∀ j∈ K);
* ℓT_ℓLkU_kK if L ⊆ K and T_j U_j (∀ j∈ L);
* S_1S_2R_1R_2 if S_1 R_1 and S_2 R_2;
* xTxU if:
* T U and x∉freeContrav(T)∪freeContrav(U);
* T U and U T;
* ℓS_ℓLkR_kL if K ⊆ L and S_j R_j (∀ j∈ L);
* ℓS_ℓLkR_kK if L ⊆ K and S_j R_j (∀ j∈ L);
* S, S and SR for any S,R;
* S R, S R, SR and SR if S R;
* ℓS_ℓLS'kR_kR'L and ℓS_ℓS'LkR_kLR' if K ⊆ L, S_j R_j (∀ j∈ L) and S' R';
* ℓS_ℓLS'kR_kR'L and ℓS_ℓS'LkR_kLR' if L ⊆ K, S_j R_j (∀ j∈ L) and S' R';
* S_1(S_2S_3)(R_1R_2)R_3 and (S_1S_2)S_3R_1(R_2R_3) if S_1 R_1, S_2 R_2 and S_3 R_3;
* xT U if T U and x ∉(T);
* TxU if T U and x ∉(U);
* xTyyUU if xTyU.
By observation of the syntactic subtyping rules and the definition of 𝖿𝗋𝖾𝖾𝖢𝗈𝗇𝗍𝗋𝖺 for <ref>.
Identifying the set of references in contravariant positions makes it easier to generate valid subtyping pairs featuring recursion. Observe that, despite looking so, tt*t is not a subtype of ttt (their unfolding makes it clear). This is because the same self-reference appears in both covariant and contravariant positions (hence the first must be simultaneously subtype and supertype of the latter, which cannot happen because of their multiplicities). We avoid generating such pairs in <ref> by ensuring that references introduced by a pair of recursive types only appear in covariant positions of their bodies, unless they are equivalent, in which case there is no such restriction.
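As an illustration of this recipe, a QuickCheck generator for valid pairs restricted to two of the rules above (sequential composition and re-association), over the simplified syntax used in the earlier sketches, could be structured as follows; the actual FreeST generators cover all the rules and the full syntax.

  import Test.QuickCheck

  -- valid pairs (T, U) with T a subtype of U, built from a few of the rules above
  validPair :: Int -> Gen (SType, SType)
  validPair 0 = pure (SSkip, SSkip)             -- a base pair: Skip is a subtype of itself
  validPair i = oneof
    [ do (s1, r1) <- validPair (i - 1)          -- sequential composition
         (s2, r2) <- validPair (i - 1)
         pure (SSeq s1 s2, SSeq r1 r2)
    , do (s1, r1) <- validPair (i - 1)          -- re-association of sequential composition
         (s2, r2) <- validPair (i - 1)
         (s3, r3) <- validPair (i - 1)
         pure (SSeq s1 (SSeq s2 s3), SSeq (SSeq r1 r2) r3)
    ]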
To generate invalid subtyping pairs, we follow the same algorithm but inject also the invalid pairs that occur in each item of the following theorem.
* , , and
* ;
* ℓT_ℓLkU_kL if L ⊊ K and T_j U_j (∀ j∈ L);
* ℓT_ℓLkU_kK if K ⊊ L and T_j U_j (∀ j∈ K);
* TUVW, with V T, U W;
* TmUVnW, with T V, T ≁V, m⊑ n and U W;
* TmUVnW, with V T, m⊑ n, W U and W≁U;
* TT;
* TT;
* TU, with T ≁U, U T;
* TU, with T≁U, T U;
* ℓT_ℓLkT_kK if L ⊊ K and T_j U_j (∀ j∈ L);
* ℓT_ℓLkT_kK if K ⊊ L and T_j U_j (∀ j∈ K).
By inspection of the syntactic subtyping rules, relying on the observation that if T U and T≁U, then U T.
If i = 1, we generate one of the pairs in <ref> of <ref>, otherwise we use one of the items in <ref> and randomly inject a subtyping pair where a valid one is supposed to be. If the generated pair turns out to be in the subtyping relation, we simply discard the result.
For example, suppose i=4. We randomly choose <ref> of <ref> to generate a pair of sequential compositions (S_1R_1, S_2R_2). We proceed in the valid path for the types before the semicolon, obtaining S_1= and S_2=, but inject an invalid pair in R_1 and R_2. To generate it, we randomly choose <ref> of <ref>. Here we generate a valid pair (T,U) using <ref> and ensuring T and V are not equivalent, then multiplicities m and n such that m ⊑ n, and finally another valid pair (V,W). Thus we obtain R_1=TmU and R_2=VnW, making (S_1R_1, S_2R_2) an invalid pair. If we had, however, generated S_1= and S_2=, then (S_1R_1, S_2R_2) would be a valid pair, and its test result would be discarded.
|
http://arxiv.org/abs/2307.04679v2 | 20230710162905 | Generalization Error of First-Order Methods for Statistical Learning with Generic Oracles | [
"Kevin Scaman",
"Mathieu Even",
"Laurent Massoulié"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
|
http://arxiv.org/abs/2307.05223v1 | 20230711124548 | Plasmonic polarons induced by alkali-atom deposition in hafnium disulfide (1$T$-HfS$_2$) | [
"Christoph Emeis",
"Sanjoy Kr Mahatha",
"Sebastian Rohlf",
"Kai Rossnagel",
"Fabio Caruso"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
Institut für Theoretische Physik und Astrophysik, Kiel University, 24098 Kiel, Germany
UGC-DAE Consortium for Scientific Research, University Campus, Khandwa Road, Indore - 452001, India
Ruprecht‑Haensel‑Labor, Deutsches Elektronen-Synchrotron DESY, 22607 Hamburg, Germany
Institut für Experimentelle und Angewandte Physik, Kiel University, 24098 Kiel, Germany
Ruprecht‑Haensel‑Labor, Deutsches Elektronen-Synchrotron DESY, 22607 Hamburg, Germany
Institut für Experimentelle und Angewandte Physik, Kiel University, 24098 Kiel, Germany
Kiel Nano, Surface and Interface Science KiNSIS, 24118 Kiel, Germany
Institut für Theoretische Physik und Astrophysik, Kiel University, 24098 Kiel, Germany
Kiel Nano, Surface and Interface Science KiNSIS, 24118 Kiel, Germany
We combine ab-initio calculations based on many-body perturbation theory and the cumulant expansion with angle-resolved photoemission spectroscopy (ARPES) to quantify the electron-plasmon interaction in the highly-doped semiconducting transition metal dichalcogenide 1T-HfS_2. ARPES reveals the emergence of satellite spectral features in the vicinity of quasiparticle excitations at the bottom of the conduction band, suggesting coupling to bosonic excitations with a characteristic energy of 200 meV. Our first-principles calculations of the photoemission spectral function reveal that these features can be ascribed to electronic coupling to carrier plasmons (doping-induced collective charge-density fluctuations). We further show that reduced screening at the surface enhances the electron-plasmon interaction and is primarily responsible for the emergence of plasmonic polarons.
Plasmonic polarons induced by alkali-atom deposition in hafnium disulfide (1T-HfS_2)
Fabio Caruso
August 12, 2023
====================================================================================
§ INTRODUCTION
The existence of satellite structures in the spectral function of solids has
been known since the infancy of photoemission spectroscopy <cit.>. Satellites were first identified by X-ray photoemission spectroscopy (XPS) in elemental metals – such as Al <cit.>, alkali (Li, Na) <cit.>, and alkaline earth metals (Be, Mg) <cit.> – as broadened replicas of the valence and core density of states red-shifted by multiples of the plasmon energy.
Besides ordinary metals, photoemission satellites have been observed in pristine semiconductors (as, e.g., undoped silicon <cit.>) – where the excitation of photoholes couples to valence plasmons.
Ab-initio calculations and angle-resolved photoemission experiments
later revealed that full band-structure replicas can arise from the
simultaneous excitations of a photohole and a valence plasmon <cit.>.
The interest in photoemission satellites has been revived by the discovery
of photoemission satellites due to the Fröhlich electron-phonon
interactions in highly-doped anatase TiO_2 <cit.>,
and in the 2D electron gas formed at the surface of SrTiO_3 <cit.>.
These features have been recognized as the smoking-gun evidence for the
formation of Fröhlich polarons – strongly-coupled
quasiparticles resulting from the dressing of photoexcited holes by polar longitudinal optical phonons <cit.>.
Overall, the emergence of satellite structures in photoemission spectroscopy is a hallmark of strong electron-boson interaction in solids, and it has provided a strong stimulus for the development of new ab-initio theories for electron-boson coupling, including Fröhlich coupling <cit.>, density-functional <cit.> and many-body polaron theories <cit.>, electron-plasmon interaction <cit.>, and the cumulant expansion approach <cit.>.
At variance with metals and pristine semiconductors,
satellites in highly-doped semiconductors and insulators occur
in the immediate vicinity of the band edges, and thus
influence fundamental properties of direct relevance for the
transport and dynamics of charge carriers, including quasiparticle
lifetimes and effective masses <cit.>.
Changes of the doping concentration can further be exploited to exert control on the electron-phonon and electron-plasmon coupling strength, with visible effects on the structure of photoemission satellites
<cit.>.
In particular, doping-induced free carriers can screen the electron-phonon interaction, suppressing the formation of Fröhlich polarons and washing out the corresponding spectral fingerprints in ARPES <cit.>.
At the same time, at large doping concentrations carrier plasmons can be excited in materials, with plasmon energies and electron-plasmon coupling strengths that increase with the carrier density. At strong coupling, electron-plasmon interactions can result in the formation of plasmonic polarons with spectral signatures analogous to those of phonon-induced polaronic satellites <cit.>.
Plasmonic polarons have thus far only been observed in a handful of materials, including EuO <cit.>, anatase TiO_2 <cit.>, and monolayer MoS_2 <cit.>.
A challenge that must be overcome for the observation of these phenomena consists in reaching the very high doping concentrations (of the order of n=10^20 cm^-3) – which are required for the emergence of an electron liquid while preserving the sample crystallinity.
In EuO, these conditions have been realized via Eu-substitution by Gd <cit.>; in anatase, TiO_2 free carriers are introduced by oxygen vacancies <cit.>; highly-doped MoS_2 monolayers have been realized by stimulating the formation of chalcogen vacancies via thermal annealing <cit.>.
In this work, we realize strong electron-plasmon interactions via the deposition of alkali atoms on the surface of hafnium disulfide (1T-HfS_2).
To corroborate this new way of controlling the electron-plasmon interaction, we conduct a combined theoretical and experimental investigation of the electronic and quasiparticle excitations for pristine and highly-doped 1T-HfS_2. ARPES measurements for n-doped samples reveal the emergence of satellite spectral structures in the vicinity of the quasiparticle peak at the bottom of the conduction band.
To unravel the origin of these features we performed ab-initio calculations of the electron spectral function by explicitly including the influence of electron-plasmon interaction in the Fan-Migdal approximation. Spectral function calculations based on the cumulant expansion approach – the state of the art for the description of satellites in photoemission – are in excellent agreement with ARPES experiments, corroborating the plasmonic origin of the ARPES satellites. These findings demonstrate that alkali doping in bulk transition metal dichalcogenides can alter the spectrum of quasiparticle excitations, providing a viable route to realize strong electron-plasmon coupling.
The manuscript is structured as follows. In Sec. <ref> experimental and computational methods are discussed. In Sec. <ref>, we present ARPES measurements and ab-initio calculations of pristine 1T-HfS_2. In Sec. <ref> we discuss the theory and measurements of plasmonic polarons in the ARPES spectral function of highly-doped 1T-HfS_2. Concluding remarks are presented in Sec. <ref>.
§ METHOD
1T-HfS_2 single crystals were grown by chemical vapor transport at the in-house facilities. The sample was cleaved inside the ultra-high vacuum chamber at room temperature and subsequently transferred to the liquid helium-cooled manipulator for photoemission measurements. During the ARPES measurements, the sample temperature was maintained at 10 K. In situ doping of the 1T-HfS_2 samples was achieved by depositing potassium atoms from an alkali metal dispenser (SAES Getters) on the surface. The dopant atoms adsorbed on the surface and sub-surface, but did not intercalate into deeper layers of the van der Waals material.
The experiments were performed at beamline P04 of PETRA III at DESY using the ASPHERE photoelectron spectroscopy endstation. The area probed by the synchrotron beam had a size of approximately 15×15 μ m^2. The photon energies used and the corresponding total energy resolution of the ARPES measurements were in the ranges 260-450 eV and 50-80 meV, respectively. The Fermi surface map of the doped 1T-HfS_2 sample in Fig. <ref> was recorded at a photon energy of 432 eV, probing the 11th Γ point in the k_z direction.
Density functional theory (DFT) calculations were performed with the plane-wave pseudopotential code Quantum ESPRESSO <cit.>. We used the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation for the exchange-correlation functional <cit.> and Optimized Norm-Conserving Vanderbilt fully relativistic pseudopotentials <cit.>. The plane-wave kinetic energy cutoff was set to 120 Ry and the integrals over the Brillouin zone were discretized on a 12× 12 × 6 Monkhorst-Pack k-point mesh. Spin-orbit coupling (SOC) was included at all steps of our calculations.
The band structure was interpolated onto a 60 × 60 × 30 homogeneous grid via maximally-localized Wannier functions <cit.> as implemented in the WANNIER90 package <cit.>.
The effect of n-type doping was included by rigidly shifting the Fermi level above the conduction-band bottom to account for additional free carriers. Charge neutrality of the system is retained by introducing a compensating positively charged homogeneous background.
Ab-initio calculations of the electron-plasmon interaction were conducted with the EPW code <cit.> and employed the Fan-Migdal approximation for the electron self-energy and the cumulant expansion for the spectral function <cit.>.
§ ELECTRONIC PROPERTIES OF PRISTINE 1T-HFS_2
1T-HfS_2 crystallizes in a layered crystal structure with a hexagonal unit cell belonging to the 164 space group (P3m1). A side view of the 1T-HfS_2 crystal structure and its Brillouin zone are shown in Fig. <ref>(a) and <ref>(b), respectively. The bulk band structure of pristine 1T-HfS_2 as obtained from DFT-PBE is shown in Fig. <ref>(c). The path across the Brillouin zone passes through the M-K-Γ-M and the L-H-A-L high-symmetry points and it was chosen to facilitate comparison with the ARPES measurements. 1T-HfS_2 is an indirect band gap semiconductor with the valence band maximum (conduction band minimum) located at the Γ (L) high-symmetry point. The calculated indirect band gap of 1.2 eV is in good agreement with earlier DFT studies <cit.>.
Analysis of the projected DOS (not shown) reveals that the valence bands arise primarily from the hybridization of p-orbitals with S character, while the conduction bands are predominantly characterized by the d-orbitals with Hf character <cit.>. In heavy elements with unfilled 5d orbitals, such as Hf, SOC has important effects on the electronic structure <cit.>. In 1T-HfS_2 it leads to a shift of the valence band maximum to Γ and induces a bandsplitting of the two highest valence bands, while the conduction band minimum remains unaffected. The influence of SOC on the band structure is further discussed in Appendix A.
A parabolic fit to the conduction-band minimum along the three reciprocal lattice vectors yields the electron effective masses m^*_1 = 0.25 m_e, m^*_2 = 1.65 m_e, and m^*_3 = 0.20 m_e, in good agreement with earlier DFT calculations <cit.>. The density of states (DOS) effective mass was determined as m^*_DOS = (g^2 m^*_1 m^*_2 m^*_3)^{1/3}, where g=6 is the degeneracy factor of the conduction band minimum, yielding m^*_DOS = 1.44 m_e <cit.>.
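For readers who want to verify this value, the short Python snippet below recomputes the DOS effective mass from the fitted band masses quoted above; the numbers are simply those stated in the text, and the snippet is ours rather than part of the original analysis.

```python
# Quick numerical cross-check of the DOS effective mass quoted above.
m1, m2, m3 = 0.25, 1.65, 0.20  # fitted band effective masses, in units of m_e
g = 6                          # valley degeneracy of the conduction-band minimum

m_dos = (g**2 * m1 * m2 * m3) ** (1.0 / 3.0)
print(f"m*_DOS = {m_dos:.2f} m_e")  # prints ~1.44 m_e, matching the value in the text
```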
The measured ARPES spectral function for the valence band along the M-K-Γ-M high-symmetry path is shown in Fig. <ref>(d). The DFT-PBE band structure, superimposed on the measurements for comparison, is in very good agreement with the experiments. We observe a small deviation between measurements and calculations in the vicinity of the K high-symmetry point for energies around -3.0 eV, which we tentatively attribute to the finite k_z broadening, resulting in the superposition of ARPES intensities corresponding to different k_z-planes of the Brillouin zone.
In Figs. <ref>(a) and <ref>(b) we show the measured ARPES intensity maps for crystal momenta spanning the k_x-k_y-plane for energies corresponding to -2.2 eV and -1.3 eV below the Fermi energy marked by horizontal dashed lines in Fig. <ref>(d), respectively. The ab-initio band structure evaluated from DFT and interpolated using maximally-localized Wannier functions closely matches the experimental data within the first Brillouin zone. The slight deviations in the second Brillouin zones are attributed to the k_z variation of constant-energy ARPES angle maps.
§ POLARONS IN HIGHLY-DOPED 1T-HFS_2
In the following we investigate the influence of n-type doping on the
band structure and on the spectrum of quasiparticle excitations of 1T-HfS_2.
Figure <ref>(c) shows the ARPES measurement of the Fermi surface of highly-doped 1T-HfS_2. While no signal is seen at these energies for the pristine sample, finite intensity arises from the population of the conduction band due to doping. The elliptical intensity pattern reflects the anisotropic band dispersion of the lowest conduction band. Due to photoemission matrix element effects, the intensity in the first Brillouin zone is suppressed. The extrinsic carrier concentration of n =1.5· 10^20 cm^-3 is extracted from the size of the Fermi pockets of Fig. <ref>(c).
The extrinsic charge carriers introduced by n-type dopants can significantly modify the electronic properties <cit.>. In addition to the population of the conduction-band bottom, the doping-induced extrinsic carriers can lead to the emergence of carrier plasmons with a characteristic frequency given by ω_pl = √(4π n e^2 / (m^*_DOS ϵ_∞)), with high-frequency dielectric constant ϵ_∞ and DOS effective mass m^*_DOS <cit.>. In doped semiconductors, the plasmon energy can span values between 10 and 200 meV <cit.>. These low-energy plasmons can further couple to carriers in the conduction band via electron-plasmon interactions, leading to the emergence of polaronic quasiparticle excitations <cit.>. In the following, we proceed to investigate these phenomena on a quantitative ground, by combining ab-initio theory and ARPES measurements of highly-doped 1T-HfS_2.
The band structure of the n-doped 1T-HfS_2 obtained by ARPES along Γ-K-M direction is shown in Fig. <ref>(a). The energies are relative to the Fermi level, which is located 50 meV above the conduction-band bottom. Compared to the pristine sample, structural disorder and additional doping-induced scattering processes contribute to an enhancement of the band structure broadening. For photoelectron energies above the fundamental gap (1.9 eV), our measurements reveal the emergence of additional photoemission intensities that reflect the population of the conduction-band bottom by the alkali deposition-induced carriers. The Fermi pockets of the conduction band are centered around |k_x| = 1.8Å^-1.
Figure <ref>(b) illustrates the ARPES spectral function Aexp(ω) for energies and crystal momentum marked by the red line in Fig. <ref>(a). In Fig. <ref>(b), we eliminated the background signal from the experimental data by subtracting a Shirley background function B(ω), defined as B(ω) = β∫^μ_ω dω' I(ω') where β is an adjustable parameter and μ is the chemical potential <cit.>. The resulting spectral function Aexp(ω) is characterized by a sharp quasiparticle peak and an additional shoulder structure at 200 meV below the Fermi level, with a decreasing photoemission intensity extending down to 1 eV below the Fermi level. A Gaussian decomposition of the ARPES intensity, marked in green in Fig. <ref>(b), suggests that these spectral features are compatible with a superposition of a quasiparticle peak and a photoemission satellite peak red-shifted by 200 meV from the maximum of the quasiparticle peak.
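A minimal sketch of this type of background removal is given below; it implements the single-pass Shirley-type integral defined above on a synthetic spectrum, with placeholder values for the energy grid, the chemical potential, and the scaling parameter β (none of these are the experimental values).

```python
import numpy as np

def shirley_background(omega, intensity, mu, beta):
    """Single-pass Shirley-type background B(w) = beta * integral_w^mu I(w') dw'."""
    bg = np.zeros_like(intensity, dtype=float)
    for i, w in enumerate(omega):
        mask = (omega >= w) & (omega <= mu)
        if mask.sum() > 1:
            bg[i] = beta * np.trapz(intensity[mask], omega[mask])
    return bg

# Usage sketch with synthetic data (all numbers are placeholders, not measured values)
omega = np.linspace(-1.2, 0.1, 600)                    # energy relative to E_F (eV)
spectrum = np.exp(-(omega + 0.05) ** 2 / 0.01) + 0.1   # mock quasiparticle peak plus offset
corrected = spectrum - shirley_background(omega, spectrum, mu=0.0, beta=0.05)
```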
The emergence of photoemission satellites in doped semiconductors is a
hallmark of strong electron-boson interaction which has been widely
investigated in the past owing to its close relation to the formation
of Fröhlich polarons – a prototypical emergent phenomenon due to strong electron-phonon coupling. The energy separation between quasiparticle and satellite peaks
is expected to match the energy of the boson that underpins the coupling.
For example, Fröhlich polarons in polar semiconductors arise
from the coupling of n-type carriers with
polar longitudinal-optical (LO) phonons, and they manifest themselves in ARPES spectra
via satellite structures at energies matching the LO phonon energies.
In 1T-HfS_2, the energy separation between the quasiparticle and satellite peak (200 meV) exceeds
the LO phonon energies (<40 meV, see e.g., the phonon dispersion in Appendix B).
The discrepancy of these energy scales enables us to promptly exclude the Fröhlich
electron-phonon interaction as a source of polaronic coupling.
The absence of spectral fingerprints of Fröhlich polarons
can be easily rationalized by noting that (i) 1T-HfS_2 is a weakly
polar crystal, i.e., it is characterized by small Born effective
charges, and (ii) at the high doping concentration considered in
our work electron-phonon coupling is screened by free carriers, thus,
further mitigating the effects of Fröhlich coupling.
In the following, we thus proceed to inspect the electron-plasmon interaction as
a possible source of polaronic coupling, and we analyze its influence on the emergence of
photoemission satellites.
To quantify the electron-plasmon interaction and its effect on the ARPES measurements, we
evaluate the electron self-energy due to the electron-plasmon interaction, which in the Fan-Migdal approximation can be expressed as <cit.>:
Σ^{epl}_{n𝐤} = ∫ (d𝐪/Ω_{BZ}) ∑_m |g^{epl}_{mn}(𝐤,𝐪)|^2 [ (n_𝐪 + f_{m𝐤+𝐪}) / (ε_{n𝐤} − ε_{m𝐤+𝐪} + ħω^{pl}_𝐪 + iη) + (n_𝐪 + 1 − f_{m𝐤+𝐪}) / (ε_{n𝐤} − ε_{m𝐤+𝐪} − ħω^{pl}_𝐪 + iη) ],
where Ω_{BZ} is the Brillouin zone volume, m and n are band indices, 𝐤 and 𝐪 are Bloch wave vectors, n_𝐪 denotes the Bose-Einstein and f_{m𝐤+𝐪} the Fermi-Dirac distribution, ε_{n𝐤} are the Kohn-Sham (KS) eigenvalues, ω^{pl}_𝐪 is the plasmon frequency, and η is a positive infinitesimal. The integral runs over the Brillouin zone volume. The first term accounts for electron scattering processes involving the absorption of a plasmon (the +ħω^{pl}_𝐪 term), while the second term accounts for hole scattering processes mediated by plasmon emission. g^{epl}_{mn} denotes the electron-plasmon coupling matrix element, which can be expressed as <cit.>:
g^{epl}_{mn}(𝐤,𝐪) = [ ∂ϵ(𝐪,ω)/∂ω |_{ω^{pl}_𝐪} ]^{−1/2} (4π/Ω_{BZ})^{1/2} (1/|𝐪|) ⟨ψ_{m𝐤+𝐪}| e^{i𝐪·𝐫} |ψ_{n𝐤}⟩.
Here, ϵ is the dielectric function, ⟨ψ_m 𝐤 + 𝐪| e i 𝐪·𝐫|ψ_n 𝐤⟩ the dipole matrix element and ψ_𝐤 the Kohn-Sham orbital. The |𝐪|^-1 singularity in the electron-plasmon coupling matrix element is reminiscent of the Fröhlich interaction in polar semiconductors, and it indicates that the long-wavelength plasmons dominate electron-plasmon scattering processes.
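To make the structure of the two expressions above concrete, the following schematic Python snippet evaluates a model Fan-Migdal self-energy for a single isotropic parabolic band coupled to a dispersionless plasmon with |g(q)| ∝ 1/|q|. All parameters (coupling strength g0, momentum cutoff, broadening) are illustrative assumptions, the overall prefactor is absorbed into g0, and the result is in arbitrary units; the actual calculations reported in this work are performed ab initio with the EPW code.

```python
import numpy as np

# Illustrative model parameters (not the ab-initio values used in the text)
hbar_wpl = 0.20             # plasmon energy (eV), dispersionless plasmon-pole model
E_F      = 0.05             # Fermi level above the conduction-band bottom (eV)
eta      = 0.01             # broadening (eV)
g0       = 0.05             # overall coupling strength entering |g(q)| = g0/|q| (assumed)
m_star   = 1.44             # effective mass in units of m_e
c_band   = 3.81 / m_star    # hbar^2/(2m) in eV*Angstrom^2 for m = m_star * m_e

def sigma_epl(k, omega, nq=2000, nmu=201, qmax=1.0):
    """Model Fan-Migdal self-energy (arbitrary units), one isotropic parabolic band, T = 0."""
    q  = np.linspace(1e-3, qmax, nq)[:, None]            # |q| grid (1/Angstrom)
    mu = np.linspace(-1.0, 1.0, nmu)[None, :]             # cos(angle between k and q)
    eps_kq = c_band * (k**2 + q**2 + 2.0 * k * q * mu)    # band energy at k+q (eV)
    f_kq = (eps_kq < E_F).astype(float)                    # T = 0 occupations (n_q = 0)
    g2 = (g0 / q) ** 2                                     # |g(q)|^2 with the 1/|q| singularity
    term_abs = f_kq / (omega - eps_kq + hbar_wpl + 1j * eta)           # plasmon-absorption term
    term_emi = (1.0 - f_kq) / (omega - eps_kq - hbar_wpl + 1j * eta)   # plasmon-emission term
    integrand = g2 * (term_abs + term_emi) * q**2          # d^3q = q^2 dq dmu dphi
    return 2.0 * np.pi * np.trapz(np.trapz(integrand, mu[0], axis=1), q[:, 0])

print(sigma_epl(k=0.05, omega=-0.2))   # complex self-energy near the expected satellite energy
```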
Owing to the dependence of the matrix elements g_mnepl on the
dielectric function, the electron-plasmon interaction is profoundly
influenced by the screening environment of the system. In 1T-HfS_2, the alkali
dopant atoms are concentrated in the vicinity of the surface and, possibly, underneath the first
1T-HfS_2 layers of the sample.
Correspondingly, the dielectric screening experienced by n-type carriers
is mitigated as compared to bulk carriers.
To account for the charge localization at the surface we introduce an effective dielectric constant ε^{S}_∞ = (ε^{HfS_2}_∞ + 1)/2 = 3.6, with ε^{HfS_2}_∞ = 6.2 being the high-frequency dielectric constant of bulk 1T-HfS_2 <cit.>. Further details on the evaluation of the electron-plasmon matrix elements can be found elsewhere <cit.>.
Based on this value, we estimate the plasmon frequency to be 200 meV for a doping concentration n=1.5·10^20 cm^-3, which matches closely the satellite energy, thus, suggesting electron-plasmon coupling as a likely origin of this polaronic feature.
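This estimate can be reproduced with a few lines of Python using only the quantities quoted above (carrier density, DOS effective mass, and effective surface dielectric constant); the snippet below, written in SI units, is an illustrative check rather than part of the ab-initio workflow.

```python
import numpy as np

# Physical constants (SI)
e    = 1.602176634e-19    # C
eps0 = 8.8541878128e-12   # F/m
m_e  = 9.1093837015e-31   # kg
hbar = 1.054571817e-34    # J*s

n     = 1.5e20 * 1e6      # carrier density: 1.5e20 cm^-3 converted to m^-3
m_dos = 1.44 * m_e        # DOS effective mass from the band-structure fit
eps_s = 3.6               # effective surface dielectric constant

# SI form of omega_pl = sqrt(4*pi*n*e^2 / (m*_DOS * eps_inf)) written in Gaussian units above
omega_pl = np.sqrt(n * e**2 / (eps0 * eps_s * m_dos))
print(f"hbar*omega_pl ~ {hbar * omega_pl / e * 1e3:.0f} meV")   # ~200 meV
```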
In Figs. <ref>(c)-(d) the real and imaginary part of the electron self-energy due to electron-plasmon coupling are presented, respectively, for doping carrier concentrations n=7.5· 10^19, 1.5·10^20, and 2.25· 10^20 cm^-3. The middle value coincides with the doping concentration determined from experiment.
The corresponding imaginary parts of the self-energy in Fig. <ref>(d) exhibit a sharply peaked structure with a Lorentzian line profile in the vicinity of the energy ε_𝐤 - ħωpl. For larger doping concentrations, we observe a progressive red-shift of the peak in ImΣ and an increase of its intensity, which arise from the increase of plasmon energy and of the electron-plasmon coupling matrix elements, respectively.
The real part of the self-energy is related to ImΣ by a
Kramers-Kronig's transformation and it thus also has a similar dependence on the doping concentration.
Based on the electron self-energy, we proceed to investigate the signatures of electron-plasmon coupling in ARPES.
Earlier studies revealed that ab-initio calculations of photoemission satellites based on the Fan-Migdal approximation overestimate the satellite energy and intensity by up to 50% as compared to experiment <cit.>. To circumvent this limitation, we evaluate the spectral function based on the cumulant expansion approach <cit.>. The cumulant expansion representation of the spectral function can be expressed as <cit.>:
A(𝐤,ω) = ∑_n e^{A^{S1}_{n𝐤}}(ω) ∗ A^{QP}_{n𝐤}(ω).
Here, ∗ denotes a convolution product and A^{QP}_{n𝐤}(ω) = 2π^{−1} Im[ħω − ε_{n𝐤} − Σ^{epl}_{n𝐤}(ε_{n𝐤})]^{−1} is the quasiparticle contribution to the spectral function evaluated in the on-the-energy-shell approximation, in which the frequency argument of the self-energy Σ_{n𝐤}(ω) is replaced by the KS energy, ω = ε_{n𝐤} <cit.>.
The satellite part of spectral function is further given by <cit.>:
A^{S1}_{n𝐤}(ω) = − [ β_{n𝐤}(ω) − β_{n𝐤}(ε_{n𝐤}) − (ω − ε_{n𝐤}) β'_{n𝐤}(ε_{n𝐤}) ] / (ω − ε_{n𝐤})^2,
with β = π^-1ImΣ_n 𝐤(ε_n𝐤 - ω) Θ(ω) and β' denoting its first derivative. The first term in the Taylor series expansion of the exponential in Eq. (<ref>) corresponds to the quasiparticle peak of the photoemission spectrum, while higher-order terms account for plasmon-assisted scattering up to infinite order. In the following, we truncated Eq. (<ref>) to the second order. Terms above the second order contribute negligibly to the spectral function and their inclusion is inconsequential.
The spectral function in the vicinity of the conduction-band bottom computed from Eqs. (<ref>)-(<ref>) is shown in Fig. <ref>(b). To account for the finite experimental resolution, the spectral function has been convoluted with a Gaussian with a variance of 80 meV. The intensity of the convoluted spectral function is rescaled to match the experimental spectrum at the quasiparticle peak. Experimental broadening and intensity rescaling are the only adjustable parameters in our simulations.
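The following schematic snippet summarizes how the quasiparticle part, the satellite function, the second-order truncation, and the experimental broadening enter the calculated spectrum. The quasiparticle energy, linewidth, and satellite weight are placeholders; the true satellite function is obtained from the computed self-energy, not from the Lorentzian model used here.

```python
import numpy as np

w  = np.linspace(-1.0, 1.0, 4001)   # energy grid relative to E_F (eV), symmetric and uniform
dw = w[1] - w[0]
eps_qp, gamma = -0.05, 0.03         # placeholder quasiparticle energy and width (eV)
hbar_wpl      = 0.20                # plasmon energy (eV)
a1            = 0.4                 # integrated weight of the first-order satellite (assumed)

# Quasiparticle part: Lorentzian centred at the quasiparticle energy
A_qp = (1.0 / np.pi) * gamma / ((w - eps_qp) ** 2 + gamma ** 2)

# Placeholder satellite function A^S1: a broadened peak shifted by -hbar_wpl,
# standing in for the expression evaluated from the actual self-energy
A_s1 = a1 * (1.0 / np.pi) * (2 * gamma) / ((w + hbar_wpl) ** 2 + (2 * gamma) ** 2)

conv = lambda f, g: np.convolve(f, g, mode="same") * dw
# Cumulant expansion truncated at second order (higher-order terms are negligible)
A = A_qp + conv(A_qp, A_s1) + 0.5 * conv(A_qp, conv(A_s1, A_s1))

# Gaussian broadening mimicking the ~80 meV experimental resolution
sigma_exp = 0.08
kernel = np.exp(-w ** 2 / (2 * sigma_exp ** 2))
kernel /= kernel.sum()
A_broadened = np.convolve(A, kernel, mode="same")
```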
In short, the cumulant spectral function exhibits a pronounced shoulder arising from the convolution of the satellite and quasi-particle spectral function AS * AQP. This spectral feature corresponds to the coupled excitation of a photohole and a plasmon and matches closely the photoemission satellite measured in ARPES. Higher-order satellite structures due to multiple plasmon excitations have small intensity and they are washed out by the finite experimental resolution.
Overall, the close agreement between simulations and
measurements suggest that carrier plasmons in highly-doped 1T-HfS_2
are strongly coupled to free carriers in the conduction bands,
leading to the formation of plasmonic polarons and to corresponding
spectral fingerprints in the ARPES spectrum.
The residual discrepancy between theory and experiments at energies
smaller than -0.5 eV is tentatively attributed to impurity scattering and statistical noise,
which are not captured by the Shirley background model.
§ CONCLUSION
In summary, we conducted ab-initio calculations and ARPES measurements of the electronic properties and quasiparticle excitation of pristine and highly-doped 1T-HfS_2.
We report the observation of polaronic satellites in the ARPES spectral function, which we attribute to the formation of plasmonic polarons. Our first-principles calculations of the Fan-Migdal self-energy for electron-plasmon interaction explicitly account for extrinsic carriers introduced by alkali doping and closely reproduce the spectral fingerprints of polaronic satellites in the measured ARPES spectral function.
In particular, the alkali doping enables the injection of free carriers in the vicinity of the surface, where screening is weak and it thus provides ideal conditions for realizing strong coupling between free carriers and plasmons.
Overall, our combined theoretical and experimental investigation reveals the possibility to tailor quasiparticle excitations and the electron-plasmon coupling strength via extrinsic doping mediated by the adsorption and intercalation of alkali atoms affecting the first atomic layers of 1T-HfS_2.
Materials interfaces and hybrid heterostructures may provide further opportunities to directly control the dielectric environment and, thus, tailor the spectrum of quasiparticle interactions.
§ ACKNOWLEDGMENTS
This work was funded by the Deutsche Forschungsgemeinschaft (DFG), Projects No. 499426961 and 434434223 – SFB 1461.
We thank DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were performed at PETRA III. Funding for the photoemission spectroscopy instrument at beamline P04 (contracts 05KS7FK2, 05K10FK1, 05K12FK1, and 05K13FK1 with Kiel University; 05KS7WW1 and 05K10WW2 with the University of Würzburg) by the German Federal Ministry of Education and Research (BMBF) is gratefully acknowledged.
§ APPENDIX A: INFLUENCE OF SPIN-ORBIT COUPLING ON THE BAND STRUCTURE
In Fig. <ref> the band structure of 1T-HfS_2 with (black, continuous) and without (red, dashed) SOC is depicted. SOC leads to an avoided crossing in both the valence and conduction bands, lifting the degeneracy of upper valence bands at the Γ point. These findings are in good agreement with earlier calculations <cit.>.
§ APPENDIX B: PHONON DISPERSION
In Fig. <ref> the phonon dispersion of 1T-HfS_2 obtained from density functional perturbation theory (DFPT) is presented. The discontinuity of the second and third highest-energy modes at the Γ point arises from the LO-TO splitting. The highest phonon mode has a frequency of 43 meV.
[1] Y. Baer and G. Busch, X-ray photoemission from aluminum, Phys. Rev. Lett. 30, 280 (1973).
[2] A. Barrie, X-ray photoelectron spectra of aluminium and oxidised aluminium, Chem. Phys. Lett. 19, 109 (1973).
[3] S. P. Kowalczyk, L. Ley, F. R. McFeely, R. A. Pollak, and D. A. Shirley, X-ray photoemission from sodium and lithium, Phys. Rev. B 8, 3583 (1973).
[4] W. J. Pardee, G. D. Mahan, D. E. Eastman, R. A. Pollak, L. Ley, F. R. McFeely, S. P. Kowalczyk, and D. A. Shirley, Analysis of surface- and bulk-plasmon contributions to x-ray photoemission spectra, Phys. Rev. B 11, 3614 (1975).
[5] M. Guzzo, G. Lani, F. Sottile, P. Romaniello, M. Gatti, J. J. Kas, J. J. Rehr, M. G. Silly, F. Sirotti, and L. Reining, Valence electron photoemission spectrum of semiconductors: Ab initio description of multiple satellites, Phys. Rev. Lett. 107, 166401 (2011).
[6] F. Caruso, H. Lambert, and F. Giustino, Band structures of plasmonic polarons, Phys. Rev. Lett. 114, 146404 (2015).
[7] J. Lischner, G. K. Pálsson, D. Vigil-Fowler, S. Nemsak, J. Avila, M. C. Asensio, C. S. Fadley, and S. G. Louie, Satellite band structure in silicon caused by electron-plasmon coupling, Phys. Rev. B 91, 205113 (2015).
[8] F. Caruso and F. Giustino, Spectral fingerprints of electron-plasmon coupling, Phys. Rev. B 92, 045123 (2015).
[9] S. Moser, L. Moreschini, J. Jaćimović, O. Barišić, H. Berger, A. Magrez, Y. Chang, K. Kim, A. Bostwick, E. Rotenberg, et al., Tunable polaronic conduction in anatase TiO_2, Phys. Rev. Lett. 110, 196403 (2013).
[10] Z. Wang, S. McKeown Walker, A. Tamai, Y. Wang, Z. Ristic, F. Y. Bruno, A. De La Torre, S. Riccò, et al., Tailoring the nature and strength of electron-phonon interactions in the SrTiO_3(001) 2D electron liquid, Nat. Mater. 15, 835 (2016).
[11] C. Franchini, M. Reticcioli, M. Setvin, and U. Diebold, Polarons in materials, Nat. Rev. Mater. 6, 560 (2021).
[12] M. Kang, S. W. Jung, W. J. Shin, Y. Sohn, S. H. Ryu, T. K. Kim, M. Hoesch, and K. S. Kim, Holstein polaron in a valley-degenerate two-dimensional semiconductor, Nat. Mater. 17, 676 (2018).
[13] C. Verdi and F. Giustino, Fröhlich electron-phonon vertex from first principles, Phys. Rev. Lett. 115, 176401 (2015).
[14] C. Verdi, F. Caruso, and F. Giustino, Origin of the crossover from polarons to Fermi liquids in transition metal oxides, Nat. Commun. 8, 15769 (2017).
[15] G. Antonius, S. Poncé, E. Lantagne-Hurtubise, G. Auclair, X. Gonze, and M. Côté, Dynamical and anharmonic effects on the electron-phonon coupling and the zero-point renormalization of the electronic structure, Phys. Rev. B 92, 085137 (2015).
[16] G. Miceli, W. Chen, I. Reshetnyak, and A. Pasquarello, Nonempirical hybrid functionals for band gaps and polaronic distortions in solids, Phys. Rev. B 97, 121112 (2018).
[17] W. H. Sio, C. Verdi, S. Poncé, and F. Giustino, Polarons from first principles, without supercells, Phys. Rev. Lett. 122, 246403 (2019).
[18] V. Vasilchenko, A. Zhugayevych, and X. Gonze, Variational polaron equations applied to the anisotropic Fröhlich model, Phys. Rev. B 105, 214301 (2022).
[19] J. Lafuente-Bartolome, C. Lian, W. H. Sio, I. G. Gurtubay, A. Eiguren, and F. Giustino, Unified approach to polarons and phonon-induced band structure renormalization, Phys. Rev. Lett. 129, 076402 (2022).
[20] F. Caruso and F. Giustino, Theory of electron-plasmon coupling in semiconductors, Phys. Rev. B 94, 115208 (2016).
[21] D. C. Langreth, Singularities in the x-ray spectra of metals, Phys. Rev. B 1, 471 (1970).
[22] F. Aryasetiawan, L. Hedin, and K. Karlsson, Multiple plasmon satellites in Na and Al spectral functions from ab initio cumulant expansion, Phys. Rev. Lett. 77, 2268 (1996).
[23] J. Lischner, D. Vigil-Fowler, and S. G. Louie, Physical origin of satellites in photoemission of doped graphene: An ab initio GW plus cumulant study, Phys. Rev. Lett. 110, 146801 (2013).
[24] J. J. Kas, J. J. Rehr, and L. Reining, Cumulant expansion of the retarded one-electron Green function, Phys. Rev. B 90, 085112 (2014).
[25] F. Caruso, C. Verdi, and F. Giustino, Many-Body Calculations of Plasmon and Phonon Satellites in Angle-Resolved Photoelectron Spectra Using the Cumulant Expansion Approach (Springer, 2020), pp. 341-365.
[26] F. Giustino, Electron-phonon interactions from first principles, Rev. Mod. Phys. 89, 015003 (2017).
[27] B. Guster, P. Melo, B. A. A. Martin, V. Brousseau-Couture, J. C. de Abreu, A. Miglio, M. Giantomassi, M. Côté, J. M. Frost, M. J. Verstraete, and X. Gonze, Fröhlich polaron effective mass and localization length in cubic materials: Degenerate and anisotropic electronic bands, Phys. Rev. B 104, 235123 (2021).
[28] J. M. Riley, F. Caruso, C. Verdi, L. Duffy, M. D. Watson, L. Bawden, K. Volckaert, G. van der Laan, T. Hesjedal, M. Hoesch, et al., Crossover from lattice to plasmonic polarons of a spin-polarised electron gas in ferromagnetic EuO, Nat. Commun. 9, 1 (2018).
[29] X. Ma, Z. Cheng, M. Tian, X. Liu, X. Cui, Y. Huang, S. Tan, J. Yang, and B. Wang, Formation of plasmonic polarons in highly electron-doped anatase TiO_2, Nano Lett. 21, 430 (2021).
[30] F. Caruso, P. Amsalem, J. Ma, A. Aljarb, T. Schultz, M. Zacharias, V. Tung, N. Koch, and C. Draxl, Two-dimensional plasmonic polarons in n-doped monolayer MoS_2, Phys. Rev. B 103, 205152 (2021).
[31] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, et al., Quantum ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys. Condens. Matter 21, 395502 (2009).
[32] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
[33] D. R. Hamann, Optimized norm-conserving Vanderbilt pseudopotentials, Phys. Rev. B 88, 085117 (2013).
[34] N. Marzari, A. A. Mostofi, J. R. Yates, I. Souza, and D. Vanderbilt, Maximally localized Wannier functions: Theory and applications, Rev. Mod. Phys. 84, 1419 (2012).
[35] G. Pizzi, V. Vitale, R. Arita, S. Blügel, F. Freimuth, G. Géranton, et al., Wannier90 as a community code: new features and applications, J. Phys. Condens. Matter 32, 165902 (2020).
[36] S. Poncé, E. R. Margine, C. Verdi, and F. Giustino, EPW: Electron-phonon coupling, transport and superconducting properties using maximally localized Wannier functions, Comput. Phys. Commun. 209, 116 (2016).
[37] F. Caruso and F. Giustino, The GW plus cumulant method and plasmonic polarons: application to the homogeneous electron gas, Eur. Phys. J. B 89, 1 (2016).
[38] K. Iordanidou, M. Houssa, G. Pourtois, V. Afanas'ev, and A. Stesmans, Impact of point defects and oxidation on the electronic properties of HfS_2 monolayers, ECS J. Solid State Sci. Technol. 5, Q3054 (2016).
[39] J. Shang, S. Zhang, X. Cheng, Z. Wei, and J. Li, Electric field induced electronic properties modification of ZrS_2/HfS_2 van der Waals heterostructure, RSC Adv. 7, 14625 (2017).
[40] S. N. Neal, S. Li, T. Birol, and J. L. Musfeldt, Chemical bonding and Born charge in 1T-HfS_2, NPJ 2D Mater. Appl. 5, 45 (2021).
[41] K. W. Lau, C. Cocchi, and C. Draxl, Electronic and optical excitations of two-dimensional ZrS_2 and HfS_2 and their heterostructure, Phys. Rev. Mater. 3, 074001 (2019).
[42] H. Lu, Y. Guo, and J. Robertson, Band edge states, intrinsic defects, and dopants in monolayer HfS_2 and SnS_2, Appl. Phys. Lett. 112, 062105 (2018).
[43] M. A. Green, Intrinsic concentration, effective densities of states, and effective mass in silicon, J. Appl. Phys. 67, 2944 (1990).
[44] E. Ziambaras, J. Kleis, E. Schröder, and P. Hyldgaard, Potassium intercalation in graphite: A van der Waals density-functional study, Phys. Rev. B 76, 155425 (2007).
[45] K. Rossnagel, Suppression and emergence of charge-density waves at the surfaces of layered 1T-TiSe_2 and 1T-TaS_2 by in situ Rb deposition, New J. Phys. 12, 125018 (2010).
[46] G. Giuliani and G. Vignale, Quantum Theory of the Electron Liquid (Cambridge University Press, 2005).
[47] F. Caruso, C. Verdi, S. Poncé, and F. Giustino, Electron-plasmon and electron-phonon satellites in the angle-resolved photoelectron spectra of n-doped anatase TiO_2, Phys. Rev. B 97, 165113 (2018).
[48] D. A. Shirley, High-resolution x-ray photoemission spectrum of the valence bands of gold, Phys. Rev. B 5, 4709 (1972).
[49] G. Lucovsky, R. White, J. Benda, and J. Revelli, Infrared-reflectance spectra of layered group-IV and group-VI transition-metal dichalcogenides, Phys. Rev. B 7, 3859 (1973).
[50] T. Iwasaki, N. Kuroda, and Y. Nishina, Anisotropy of lattice dynamical properties in ZrS_2 and HfS_2, J. Phys. Soc. Japan 51, 2233 (1982).
[51] F. Aryasetiawan and O. Gunnarsson, The GW method, Rep. Prog. Phys. 61, 237 (1998).
[52] B. Gumhalter, V. Kovač, F. Caruso, H. Lambert, and F. Giustino, On the combined use of GW approximation and cumulant expansion in the calculations of quasiparticle spectra: The paradigm of Si valence bands, Phys. Rev. B 94, 035103 (2016).
[53] J. J. Quinn and R. A. Ferrell, Electron self-energy approach to correlation in a degenerate electron gas, Phys. Rev. 112, 812 (1958).
|
http://arxiv.org/abs/2307.04101v1 | 20230709052851 | Enhancing Building Semantic Segmentation Accuracy with Super Resolution and Deep Learning: Investigating the Impact of Spatial Resolution on Various Datasets | [
"Zhiling Guo",
"Xiaodan Shi",
"Haoran Zhang",
"Dou Huang",
"Xiaoya Song",
"Jinyue Yan",
"Ryosuke Shibasaki"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Enhancing Building Semantic Segmentation Accuracy with Super Resolution and Deep Learning: Investigating the Impact of Spatial Resolution on Various Datasets
Zhiling Guo^1,2,
Xiaodan Shi^2,
Haoran Zhang^2,
Dou Huang^2,
Xiaoya Song^3,
Jinyue Yan^1,
Ryosuke Shibasaki^2
^1Department of Building Environment and Energy Engineering,
The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
^2Center for Spatial Information Science, The University of Tokyo, Kashiwa, Japan
^3School of Architecture, Harbin Institute of Technology, Harbin, China
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================
The development of remote sensing and deep learning techniques has enabled building semantic segmentation with high accuracy and efficiency. Despite their success in different tasks, the discussions on the impact of spatial resolution on deep learning based building semantic segmentation are quite inadequate, which makes choosing a cost-effective data source a big challenge. To address the issue mentioned above, in this study, we resample remote sensing images over three study areas into multiple spatial resolutions by super-resolution and down-sampling. After that, two representative deep learning architectures, UNet and FPN, are selected for model training and testing. The experimental results obtained from three cities with two deep learning models indicate that the spatial resolution greatly influences building segmentation results, and that a resolution of around 0.3 m offers better cost-effectiveness, which we believe will be an important insight for data selection and preparation.
§ INTRODUCTION
Building semantic segmentation via remote sensing imagery has become an important research topic in recent years <cit.>. With the rapid development of data acquisition systems and machine learning, the ever-expanding choices of datasets with very high resolution (VHR) <cit.> and deep learning methods <cit.> expand the opportunities for researchers to conduct more accurate analysis.
Although VHR imagery captures finer details of the landscape, it comes with higher cost, longer processing time, and larger storage requirements. Thus, evaluating the technical and economic trade-offs associated with using imagery of different resolutions is essential. Previous studies have examined the impact of resolution on plant species <cit.>, land use <cit.>, and water <cit.> pattern recognition based on coarser-resolution imagery or conventional machine learning methods. In this study, we investigate the impact of spatial resolution on building semantic segmentation via VHR imagery and deep learning methods, as shown in figure <ref>.
To compare the segmentation accuracy under different resolutions, we created remote sensing imagery of a specific area with resolutions from 0.075m to 2.4m by super-resolution (SR) <cit.> and down-sampling processing. The experimental results obtained from three different study areas via two deep learning models reveal that the finest spatial resolution is not necessarily the best for building semantic segmentation, and relatively low-cost imagery would be sufficient in many study cases. Thus, choosing a cost-effective spatial resolution for different scenarios is worth discussing.
The main contributions of this study can be highlighted as two folds. First, to the best of our knowledge, it is the first investigation of the impact of spatial resolution on deep learning-based building semantic segmentation. Second, higher resolution is not always better for segmentation accuracy. According to our dataset, a resolution of around 0.3m offers better cost-effectiveness, which enables researchers and developers to conduct their research efficiently.
§ DATA
We analyzed the impact of spatial resolution for building semantic segmentation over three representative study areas: Austin, Christchurch, and Tokyo. The original resolutions of the datasets mentioned above are about 0.075m, 0.150m, and 0.300m, respectively.
§ METHODS
The variation of spatial resolution leads to differences in semantic segmentation results. First, in data preprocessing, we resampled the imagery to a total of six pixel scales according to the spatial resolution range of most VHR images, as shown in figure <ref>. After that, two representative semantic segmentation models are applied for building semantic segmentation. Finally, the comparison is conducted based on four assessment criteria.
§.§ Preprocessing
Compared with upscaling low-resolution imagery to HR space using a single filter such as bicubic interpolation, SR can increase the image resolution while providing finer spatial details than those captured by the original acquisition sensors. In this study, a representative deep learning SR model, ESPCN <cit.>, is utilized to perform SR. For resampling to lower resolutions, the pixel-aggregate method is adopted. After that, six pixel scales of 0.075m, 0.150m, 0.300m, 0.600m, 1.200m, and 2.400m can be generated.
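As an illustration of the two resampling directions described above, the sketch below pairs a minimal ESPCN-style network (three convolutions followed by a sub-pixel PixelShuffle layer, as in the original ESPCN design) with pixel-aggregate downsampling implemented as block averaging. The layer widths, scale factor, and input sizes are illustrative and not necessarily the exact configuration used in this study.

```python
import torch
import torch.nn as nn

class ESPCN(nn.Module):
    """Minimal ESPCN-style super-resolution network with a sub-pixel convolution."""
    def __init__(self, scale: int = 2, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            nn.Conv2d(32, channels * scale**2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a (scale x) larger image
        )

    def forward(self, x):
        return self.body(x)

def pixel_aggregate(x: torch.Tensor, factor: int) -> torch.Tensor:
    """Downsample by averaging non-overlapping factor x factor blocks (pixel aggregation)."""
    return nn.functional.avg_pool2d(x, kernel_size=factor)

# Usage sketch: finer pixel scale via SR (untrained weights, illustration only)
# and coarser pixel scale via aggregation
img = torch.rand(1, 3, 256, 256)
sr_img = ESPCN(scale=2)(img)
coarse = pixel_aggregate(img, 2)
```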
§.§ Semantic Segmentation
As representative deep learning models, in this study we adopt UNet <cit.> and FPN <cit.> to conduct the building semantic segmentation and investigate the impact of spatial resolution on the results. In general, UNet applies multiple skip connections between upper and lower layers, while FPN obtains features through bottom-up and top-down pathways. Both models have shown high feasibility and robustness in many segmentation tasks. It should be noted that data augmentation methods are adopted without random scaling in training, and a model trained on a specific area and resolution is applied to test the corresponding area and resolution for a fair comparison. A typical instantiation of the two architectures is sketched below.
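The following sketch uses the segmentation_models_pytorch package for binary building segmentation; the encoder, loss, and optimizer settings are placeholders rather than the exact training configuration of this study.

```python
import torch
import segmentation_models_pytorch as smp

# Two representative architectures for binary (building / background) segmentation
unet = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                in_channels=3, classes=1)
fpn = smp.FPN(encoder_name="resnet34", encoder_weights="imagenet",
              in_channels=3, classes=1)

criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)

# Placeholder batch of image tiles and building masks
images = torch.rand(2, 3, 512, 512)
masks = torch.randint(0, 2, (2, 1, 512, 512)).float()

logits = unet(images)        # the same loop applies to the FPN model
loss = criterion(logits, masks)
loss.backward()
optimizer.step()
```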
§ RESULTS AND DISCUSSIONS
After testing, we generated segmentation results in three cities with different resolutions by two deep learning architectures. Figure <ref> illustrates the impact of spatial resolution on deep learning-based building semantic segmentation, and the detailed quantitative results in IoU can be found in Table <ref>. It can be seen that resolution significantly influences the segmentation results, although images at some resolutions are generated by resampling methods. With the decrease of spatial resolution, the IoU at first increases slightly in Austin and remains stable in both Christchurch and Tokyo. Beyond a certain threshold of 0.300m, the IoU drops rapidly in all study areas. Importantly, both UNet and FPN show a similar tendency. This makes sense, as building features have a specific physical size; once the spatial resolution is significantly finer than this threshold, the additional detail does not further improve segmentation performance and mainly provides redundant information. Therefore, the spatial resolution should reach a certain threshold to achieve decent accuracy, while pursuing resolutions finer than this threshold is unnecessary in many cases. Such a trade-off should be considered when selecting an appropriate data source. The experimental results obtained from three cities with two deep learning models demonstrate that higher resolution is not always better, and 0.3m resolution would be a cost-effective choice for data selection and preparation in building semantic segmentation tasks.
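For reference, the IoU values reported here follow the standard intersection-over-union definition for binary building masks, which can be computed as in the short snippet below (the threshold value is illustrative).

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, threshold: float = 0.5) -> float:
    """Intersection over Union for binary building masks."""
    p = pred >= threshold
    t = target.astype(bool)
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return intersection / union if union > 0 else 1.0
```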
§ CONCLUSION
In this study, we have investigated the impact of spatial resolution on deep learning-based building semantic segmentation and demonstrated the effectiveness of super resolution techniques in enhancing segmentation accuracy. Our results suggest that spatial resolution plays a critical role in the accuracy and generalization capability of deep learning models for building semantic segmentation, and that super resolution techniques can help to overcome the limitations of low-resolution data.
To further advance this line of research, future work could extend our empirical evaluation to other deep learning models, study areas, and data sources.
§ ACKNOWLEDGEMENT
We are grateful for the support and funding provided by the JSPS 21K14261 grant.
|
http://arxiv.org/abs/2307.04984v1 | 20230711024736 | Turán number of the odd-ballooning of complete bipartite graphs | [
"Xing Peng",
"Mengjie Xia"
] | math.CO | [
"math.CO",
"05C35, 05C38"
] |
Turán number of the odd-ballooning of complete bipartite graphs
Xing Peng and Mengjie Xia[Center for Pure Mathematics, School of Mathematical Sciences, Anhui University,
Hefei 230601, P. R. China. Email: [email protected]. Supported in part by the National Natural Science Foundation of China
(No. 12071002) and the Anhui Provincial Natural Science Foundation (No. 2208085J22). ]
================================================================================================================================================================================================================================================================================================================================
Given a graph L, the Turán number ex(n,L) is the maximum possible number of edges in an n-vertex L-free graph.
The study of Turán number of graphs is a central topic in extremal graph theory.
Although the celebrated Erdős-Stone-Simonovits theorem gives the asymptotic value of ex(n,L) for nonbipartite L, it is challenging in general to determine the exact value of ex(n,L) for χ(L) ≥ 3.
The odd-ballooning of H is a graph such that each edge of H is replaced by an odd cycle and all new vertices of odd cycles are distinct. Here the length of odd cycles is not necessarily equal. The exact value of Turán number of the odd-ballooning of H is previously known for H being a cycle, a path, a tree with assumptions, and K_2,3. In this paper, we manage to obtain the exact value of Turán number of the odd-ballooning of K_s,t with 2≤ s ≤ t, where (s,t) ∉{(2,2),(2,3)} and each odd cycle has length at least five.
Keywords: Turán number, odd-ballooning of graphs, decomposition family.
§ INTRODUCTION
Let ℒ be a family of graphs. A graph G is ℒ-free if G does not contain any L ∈ ℒ as a subgraph. The Turán number ex(n,ℒ) is the maximum number of edges in an n-vertex ℒ-free graph. If an n-vertex ℒ-free graph G contains exactly ex(n,ℒ) edges, then G is called an extremal graph for ℒ. The set of extremal graphs with n vertices is denoted by EX(n,ℒ).
For the case where ℒ contains only one graph L, we write ex(n,L) and EX(n,L) for the Turán number of L and the set of extremal graphs for L, respectively. The well-known result by Mantel asserts that ex(n,K_3)=⌊ n^2/4⌋ and the unique extremal graph is the balanced complete bipartite graph. Turán's theorem <cit.> gives the value of ex(n,K_{r+1}) for all r ≥ 2. Moreover,
EX(n,K_{r+1}) contains only the balanced complete r-partite graph, which is known as the Turán graph T_r(n). The number of edges in the Turán graph is denoted by t_r(n). Turán's theorem is considered as the origin of extremal graph theory. For the Turán number of general graphs, the Erdős-Stone-Simonovits <cit.> theorem gives that
ex(n, L) = (1 - 1/(χ(L)-1)) \binom{n}{2} + o(n^2).
Here χ(L) is the chromatic number of L. This theorem is one of the cornerstones of extremal graph theory.
For χ(L) ≥ 3, the asymptotic value of ex(n, L) is given by the Erdős-Stone-Simonovits theorem. Thus it is meaningful to determine the exact value of ex(n, L) for such kind of graphs, which is quite challenging in general. The notion of the decomposition family introduced by Simonovits <cit.> turns out to be very useful, for example <cit.>. Given two graphs G and H, the join G ⊗ H is the graph obtained from the vertex disjoint union of G and H by connecting each vertex of G and each vertex of H.
Given a graph L with χ(L)=p+1,
the decomposition family ℳ(L) is the set of minimal graphs M such that L ⊂ (M ∪ K_t) ⊗ T_{p-1}((p-1)t), where t=t(L) is a constant.
Roughly speaking, M ∈ℳ(L) if the graph obtained from adding a copy of M (but not any of its proper subgraphs) into a class of T_p(n) with n large enough contains L as a subgraph. We remark that the decomposition family ℳ(L) always contains bipartite graphs.
The following inequality can be checked easily:
ex(n,L) ≥ t_p(n) + ex(⌈ n/p ⌉, ℳ(L)).
To see this, let G be a graph obtained from the Turán graph T_p(n) by adding a graph from EX(⌈ n/p ⌉, ℳ(L)) to a largest part of T_p(n). Thus e(G)=t_p(n)+ex(⌈ n/p ⌉, ℳ(L)) and G is L-free by the definition of ℳ(L). Surprisingly, the above construction indeed gives the true value of ex(n,L) for many graphs, i.e.,
ex(n,L)=t_p(n)+ex(⌈ n/p ⌉, ℳ(L)). Before we state examples, we introduce the following definitions and a related result. A matching M_k is a set of k disjoint edges. For a graph G, the matching number ν(G) is the number of edges in a maximum matching of G. For positive integers ν and Δ, let
f(ν,Δ) = max{ e(G) : ν(G) ≤ ν and Δ(G) ≤ Δ }.
The following result gives the upper bound for f(ν,Δ).
f(ν,Δ)=νΔ+⌊Δ/2⌋⌊ν/⌈Δ/2 ⌉⌋≤νΔ+ν.
A special case where ν=Δ=k-1 was first proved by Abbott, Hanson, and Sauer <cit.> as follows:
f(k-1, k-1) = k^2 - k if k is odd;
f(k-1, k-1) = k^2 - (3/2)k if k is even.
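To illustrate the two parities, the following worked example (ours, not part of the original text) evaluates both cases from the general formula for f(ν,Δ) stated above:

```latex
% Worked example: the two parities of f(k-1,k-1).
% k = 3 (odd):  f(2,2) = 2\cdot 2 + \lfloor 2/2 \rfloor \lfloor 2/\lceil 2/2 \rceil \rfloor = 4 + 2 = 6 = k^2 - k,
%               attained, e.g., by two vertex-disjoint triangles (6 edges, \Delta = 2, \nu = 2).
% k = 4 (even): f(3,3) = 3\cdot 3 + \lfloor 3/2 \rfloor \lfloor 3/\lceil 3/2 \rceil \rfloor = 9 + 1 = 10 = k^2 - \tfrac{3}{2}k.
```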
Let F_k be the graph which consists of k triangles sharing a common vertex.
For k ≥ 1 and n ≥ 50k^2,
ex(n,F_k)=t_2(n)+f(k-1, k-1).
One can see that ℳ(F_k)={M_k, K_{1,k}}, where K_{1,k} is a star with k+1 vertices. Moreover, ex(⌈ n/2 ⌉, {M_k, K_{1,k}})=f(k-1, k-1). Thus the equality holds in (<ref>)
for F_k.
Given a graph G, let kG denote the graph consisting of k vertex disjoint copies of G.
As K_3 is a clique, Theorem <ref> was generalized as follows.
For any p ≥ 2 and k ≥ 1, if n ≥ 16 k^3(p+1)^8, then
ex(n, K_1 ⊗ k K_p)=t_p(n)+f(k-1, k-1).
It is clear that F_k is a special case of K_1 ⊗ k K_p where p=2. One can observe that ℳ(K_1 ⊗ k K_p)={M_k, K_{1,k}} and then the equality also holds in (<ref>) for K_1 ⊗ k K_p.
Motivated by this result, Liu <cit.> introduced the concept of edge blow-up of graphs. For a graph G, the edge blow-up G^{p+1} is a graph obtained from G such that each edge is replaced by a clique with p+1 vertices and all new vertices for different cliques are distinct. One can see F_k=K_{1,k}^3 and K_1 ⊗ k K_p=K_{1,k}^{p+1}. So far, the Turán numbers of the edge blow-up of many families of graphs are known, for example, trees, cycles, keyrings, cliques K_r with p ≥ r+1, and complete bipartite graphs K_{s,t} with p ≥ 3, see <cit.>. In general, a remarkable result by Yuan <cit.> gives the range of ex(n,G^{p+1}) for p ≥ χ(G)+1.
If one views K_3 as an odd cycle, then Theorem <ref> can be generalized in another way. Let G be a graph. The odd-ballooning of G is the graph obtained from G by replacing each edge of G with an odd cycle, where the new vertices of different odd cycles are distinct. Apparently, F_k is the odd-ballooning of K_1,k in which all odd cycles are triangles. Notice that it is only meaningful to consider the odd-ballooning of bipartite graphs. A star is one of the simplest bipartite graphs.
Hou, Qiu, and Liu <cit.> first studied the Turán number of the odd-ballooning of K_1,k in which the length of all odd cycles is the same and is at least five. Later, they <cit.> considered the general case in which triangles are allowed. Zhu, Kang, and Shan <cit.> determined the Turán number of the odd-ballooning of paths and cycles.
Recently, Zhu and Chen <cit.> obtained the Turán number of the odd-ballooning of trees under some assumptions. It is nice that previous results for paths and stars are special cases of the result by Zhu and Chen <cit.>.
As the Turán number of the odd-ballooning of a cycle is known, the next step is to study such a problem for bipartite graphs with many cycles. A possible candidate is complete bipartite graphs. Note that K_2,2 is C_4 and thus the simplest case is K_2,3. This case was solved by Yan <cit.> previously.
The goal of this paper is to extend this result to all bipartite graphs.
To state our result, we need to define a number of graphs. Let H be the graph obtained from K_t-1,t-1 by removing a matching M_t-2, see Figure <ref>, where M_t-2={u_2v_2,…,u_t-1v_t-1}. Note that H is P_4 for t=3.
For 2 ≤ s ≤ t, let G_s,t be the graph obtained from T_2(n-s+1) ⊗ K_s-1 by embedding H into one class of T_2(n-s+1). Similarly, let G_3,3' be the graph obtained from T_2(n-2) ⊗ K_2 by embedding a triangle into one class of T_2(n-2). The following theorem is our main result.
Let F_s,t be the odd-ballooning of K_s,t with t ≥ s ≥ 2, where (s,t) ∉{(2,2),(2,3)} and each odd cycle has length at least 5. Then for n large enough,
ex(n,F_s,t)=⌈(n-s+1)/2⌉⌊(n-s+1)/2⌋+(s-1)(n-s+1)+\binom{s-1}{2}+t^2-3t+3.
Moreover, G_s,t is the only extremal graph for t ≥ 4. For t=3, there are at least two extremal graphs G_3,3 and G'_3,3.
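The constant t^2-3t+3 in the formula is exactly the number of edges of H: e(H)=(t-1)^2-(t-2)=t^2-3t+3, while the remaining terms count the edges of T_2(n-s+1) and of its join with K_s-1. The snippet below is only an illustrative check of this edge count (the explicit coding of H is ours).

```python
def edges_of_H(t):
    # H = K_{t-1,t-1} minus the matching M_{t-2} = {u_2 v_2, ..., u_{t-1} v_{t-1}}:
    # sides {u_1,...,u_{t-1}} and {v_1,...,v_{t-1}}; keep all cross pairs except (i, i) for i >= 2.
    pairs = {(i, j) for i in range(1, t) for j in range(1, t)}
    removed = {(i, i) for i in range(2, t)}
    return len(pairs - removed)

for t in range(3, 50):
    assert edges_of_H(t) == t * t - 3 * t + 3
print("e(H) = t^2 - 3t + 3 for all 3 <= t < 50")
```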
The notation in this paper is standard. For a graph G and a vertex v ∈ V(G), let N_G(v)={u: u is adjacent to v} be the neighborhood of v and d_G(v)=|N_G(v)| be the degree of v. If X ⊂ V(G), then let d_G(v,X)=|N_G(v) ∩ X| denote the number of neighbors of v in X. Additionally, G[X] is the subgraph induced by X. Let e(X) and \overline{e}(X) be the number of edges and non-edges in X, respectively. If X and Y are disjoint subsets of V(G), then e(X,Y) and \overline{e}(X,Y) are the number of edges and non-edges between X and Y, respectively. If e(X,Y)=|X||Y|, then we say X is completely adjacent to Y.
We use uv to denote an edge. If u and v are not adjacent, then we use u ≁v to denote it.
The rest of this paper is organized as follows. In Section 2, we will recall a few results and prove some lemmas. In Section 3, we will present the proof of Theorem <ref>.
§ PRELIMINARIES
The following definition of symmetric graphs was introduced by Simonovits.
Let T_1 and T_2 be connected subgraphs of G. They are called symmetric in G if either T_1=T_2 or:
(1) T_1 ∩ T_2=∅; and
(2) (x, y) ∉ G if x ∈ T_1, y ∈ T_2; and
(3) there exists an isomorphism ψ_2: T_1 → T_2 such that for every x ∈ T_1 and u ∈ G-T_1-T_2, x is joined to u if and only if ψ_2(x) is joined to u.
Note that T_1, …, T_k are symmetric if for every 1 ≤ i<j ≤ k, graphs T_i and T_j are symmetric.
We also need to define a special family of graphs.
Let 𝒟(n, p, r) be the family of n-vertex graphs G satisfying the following symmetric conditions:
(1) It is possible to omit at most r vertices of G such that the remaining graph G^' is a join of graphs of almost equal order, i.e. G^'=⊗_i=1^p G^i where |V(G^i)|=n_i and |n_i-n / p| ≤ r for any i ∈[p]. The vertices in V(G) \ V(G^') are called the exceptional vertices.
(2) For every i ∈[p], there exist connected graphs H_i such that G^i=k_i H_i where k_i=n_i /|H_i| and |V(H_i)| ≤ r. Moreover, any two copies H_i^j and H_i^ℓ in G^i(1 ≤ j<ℓ≤ k_i) are symmetric subgraphs of G.
Our proof relies on the following theorem by Simonovits.
For a given graph F with χ(F)=p+1 ≥ 3, if (F) contains a linear forest, then there exist r=r(F) and n_0=n_0(r) such that 𝒟(n, p, r) contains an extremal graph for F and n ≥ n_0. Furthermore, if this is the only extremal graph in 𝒟(n, p, r), then it is the unique extremal graph for every sufficiently large n.
Let F be a graph with chromatic number t≥ 3. If G is an extremal graph for F with n large enough, then δ(G)= ((t-2)/(t-1) )n+o(n).
We conclude this section with two more lemmas.
Let ℳ(F_s,t) be the decomposition family of F_s,t. Then
{ K_s,t,K_1,p_1∪ K_1,p_2∪⋯∪ K_1,p_s∪ M_q }⊂ℳ(F_s,t),
where ∑_i=1^s p_i+q=st and 0 ≤ p_i ≤ t for 1 ≤ i ≤ s.
Proof: Let A and B be the two classes of K_s,t. For each a ∈ A and b ∈ B, let ℓ_a,b≥ 5 be the length of the odd cycle in F_s,t associated with a and b. As a complete bipartite graph does not contain any odd cycles, if M ∈ℳ(F_s,t), then M contains at least one edge of each odd cycle and |E(M)| ≥ st.
Given T_2(n) with n large, put K_1,p_1∪ K_1,p_2∪⋯∪ K_1,p_s∪ M_q in one class.
View the centers of the s stars as the vertices of A and put B in the other class arbitrarily. Then for each a ∈ A and b ∈ B, we can get an odd cycle C_a,b either by including an edge incident with a or by taking an edge from M_q. In addition, picking extra vertices appropriately, we can ensure that the length of C_a,b equals ℓ_a,b. It is minimal as it contains st edges. Similarly, we are able to show K_s,t∈ℳ(F_s,t). □
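To illustrate the lemma, take s=2 and t=4, so st=8: the lemma gives K_2,4∈ℳ(F_2,4), the choice (p_1,p_2,q)=(0,0,8) gives M_8∈ℳ(F_2,4), and (p_1,p_2,q)=(4,3,1) gives K_1,4∪ K_1,3∪ M_1∈ℳ(F_2,4).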
Note that if p_i=0 for each 1 ≤ i ≤ s, then K_1,p_1∪ K_1,p_2∪⋯∪ K_1,p_s∪ M_q is the matching M_st. Similarly, we can determine the decomposition family of F_1,t.
Let ℳ(F_1,t) be the decomposition family of F_1,t. Then
ℳ(F_1,t)={K_1,a∪ M_t-a: 0 ≤ a ≤ t}.
Proof: We have shown that each M ∈ℳ(F_1,t) contains at least one edge of each odd cycle. Since M is minimal, each odd cycle contributes exactly one edge to E(M). Note that edges that are incident with the center of K_1,a span a star and other edges form a matching. Meanwhile, one can find F_1,t if one class of T_2(n) contains K_1,a∪ M_t-a. The lemma is proved. □
Remark 1: From the proof of Lemma <ref>, one can see the assumption ℓ_a,b≥ 5 is crucial. Actually, if one can prove the theorem by assuming ℓ_a,b=5 for each a ∈ A and b ∈ B, then the proof also works for the case where ℓ_a,b≥ 5 and the length of odd cycles may be different.
§ PROOF OF THEOREM <REF>
§.§ Proof of the lower bound
We start to prove an auxiliary result. Recall F_s,t is the odd-ballooning of K_s,t and G_s,t is the graph obtained from T_2(n-s+1) ⊗ K_s-1 by embedding H into one class of T_2(n-s+1). In particular, G_1,t is the graph obtained from T_2(n) by embedding H into one class.
For any 1 ≤ a ≤ (t+1)/2, the graph G_1,t does not contain F_a,t+1-a as a subgraph, where all odd cycles have length at least five.
Proof: Assume V(K_a,t+1-a)=A ∪ B, where |A|=a and |B|=t+1-a.
Let L ∪ R be the partition of G_1,t such that H ⊂ L. In addition, set L'={v_1,…,v_t-1} and R'=V(H) ∖ L'. Suppose that F_a,t+1-a is a subgraph of G_1,t for some a.
Note that there are no odd cycles in a bipartite graph. Thus each odd cycle in F_a,t+1-a must contain an edge in H. In addition, all new vertices through the operation of odd-ballooning are distinct. There are two cases.
Case 1: A ∩ L ≠∅. If A ∩ V(H) ≠∅, then we may assume v_1 ∈ A for a moment. As A and B are completely adjacent in F_a,t+1-a, one may assume B=B' ∪ B”, where B' ⊂ R' and B”⊂ R.
We begin with the case where both B' and B” are not empty. This implies that A ⊂ L', say v_1,…,v_a.
Let b'=|B'|.
As there are t+1-a odd cycles associated with v_1 and each of them contains an edge in H, there is a (t+1-a-b')-set E_v_1 of edges in H that are associated with v_1.
As H is bipartite, let B_1 ⊂ R' ∖ B' be the set which contains exactly one endpoint of edges in E_v_1. Note that vertices in B_1 are new in the operation of odd-ballooning.
It is clear that |B_1|=t+1-a-b'. For each 2 ≤ i ≤ a, if we repeat the analysis above for v_i, then there exist a subset B_i ∈ R'∖ B' associated with v_i such that B_i and B_j are pairwise disjoint for 1 ≤ i <j ≤ a. Here vertices from B_i and B_j are new and of course are distinct. Therefore,
t-1=|R'| ≥ |B' ∪ B_1 ∪⋯∪ B_a|
=|B'|+|B_1|+⋯+|B_a|
=t+1-a+|B_2|+⋯+|B_a|
≥ t+1-a+a-1=t.
This is a contradiction.
If B” is empty, then we consider locations of vertices of A. If A ∩ R=∅, then A ⊂ L', say A={v_1,…,v_a}. Then {v_2,…,v_a} and B form a K_a-1,t+1-a which is a subgraph of H-v_1. The definition of H gives that each v_i (here 2 ≤ i ≤ t-1) has a unique non-neighbor u_i in R'. Furthermore, u_i and u_j are distinct. Therefore, vertices from {v_2,…,v_a} have at most t-1-(a-1)=t-a common neighbors in R', a contradiction. If there is a vertex r ∈ A ∩ R, then there is a (t+1-a)-set E_r of edges from H associated with r as there are t+1-a odd cycles containing r in F_a,t+1-a. Thus there is a (t+1-a)-subset L_r' ⊂ L' which consists of exactly one endpoint of each edge from E_r. Note that vertices from L_r' are new with respect to the operation of odd-ballooning. As the assumptions of a and |L'|=t-1, there is only one such vertex. That is A ∖ r ⊂ L'. Note that L_r' and A ∖ r are disjoint. It holds that
t-1=|L'| ≥ |(A∖ r) ∪ L_r'|=|A∖ r|+|L_r'|=a-1+t+1-a=t,
a contradiction.
If B' is empty, then A ⊂ L. Reusing the argument above, for each vertex a ∈ A, there is a (t+1-a)-set V_a ⊂ V(H) such that vertices from V_a are new with respect to odd-ballooning. Thus V_a and V_a' are disjoint for a ≠ a' ∈ A. Recall that we assume v_1 ∈ A. Thus v_1 ∪_a ∈ A V_a ⊂ V(H).
It follows that 2(t-1) ≥ a(t+1-a)+1. Let f(a)=-a^2+(t+1)a-2t+3. Then f(a) is concave down. As a ≤ (t+1)/2, one can check f(a)>0 for 2 ≤ a ≤ (t+1)/2, which is a contradiction to the inequality above. For the case of a=1, the graph H must contain M_p ∪ K_1,t-p for some 0 ≤ p ≤ t as a subgraph by Lemma <ref>; this is impossible by the definition of H.
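For completeness, the endpoint values behind this claim are f(2) = -4+2(t+1)-2t+3 = 1 > 0 and f((t+1)/2) = -(t+1)^2/4+(t+1)^2/2-2t+3 = ((t-3)^2+4)/4 > 0; since f is concave, f(a) > 0 for every a with 2 ≤ a ≤ (t+1)/2.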
To prove the case where A ∩ V(H)≠∅, it suffices to consider the case of v_2 ∈ A by the symmetry of vertices in H. The proof runs the same lines as above and it is skipped here.
To complete the proof for Case 1, it remains to consider the case where A ∩ V(H)=∅. Note that A ⊂ L∖ V(H) and B ⊂ R. For each a ∈ A, if we let E_a ⊂ E(H) be the set of edges which consists of one edge from each odd cycle associated with a in F_a,t+1-a, then E_a is a matching with t+1-a edges. Moreover, E_a and E_a' are disjoint for a ≠ a' ∈ A. This is impossible by the definition of H. We complete the proof for Case 1.
Case 2: A ∩ L=∅. Equivalently, A ⊂ R and B ⊂ L. By the symmetry of A and B, this case can be proved by repeating the one for Case 1. □
We are ready to prove the lower bound for ex(n,F_s,t).
Proof of the lower bound: For positive integers 1 ≤ k ≤ t, we apply the induction on k to show G_k,t is F_k,t-free. The base case where k=1 follows from Lemma <ref> in which a=1. Assume it is true for small k. To show the induction step, let A and B be the two classes of K_k,t, where |A|=k and |B|=t. The vertex set of K_k-1 in G_k,t is denoted by W. We view A and B as subsets of V(F_k,t). Suppose F_k,t is a subgraph of G_k,t.
If W ∩ V(F_k,t)=∅, then it means that F_k,t is a subgraph of G_k,t-W. Note that G_k,t-W is a subgraph of G_1,t and F_1,t⊂ F_k,t, then it must be the case where F_1,t⊂ G_1,t. This is a contradiction by Lemma <ref>. Therefore, W ∩ V(F_k,t) ≠∅.
If there is a vertex v ∈ V(F_k,t)∖ B such that v ∈ W, then F_k,t-v ⊂ G_k,t-v.
Note that F_k,t-v contains F_k-1,t as a subgraph no matter v ∈ A or v is a new vertex with respect to odd-ballooning. Thus G_k,t-v contains F_k-1,t as a subgraph. However, this is impossible by the induction hypothesis and the observation G_k,t-v ⊂ G_k-1,t.
It is left to consider the case where W ∩ F_k,t⊂ B.
The assumption F_k,t⊂ G_k,t implies that F_k,t-W ⊂ G_k,t-W.
Note that |W|=k-1 and W ∩ F_k,t⊂ B. It follows that F_k,t-W contains F_k,t-k+1 as a subgraph. Note that G_k,t-W is a subgraph of G_1,t. We get that F_k,t-k+1 is a subgraph of G_1,t. This is a contradiction to Lemma <ref>. There is a contradiction in each case and the proof for the induction step is complete. Therefore, G_s,t does not contain F_s,t as a subgraph.
As G_n is an extremal graph, we get e(G_n) ≥ e(G_s,t) and the lower bound for e(G_n) follows.
§.§ Proof of the upper bound
Observe that χ(F_s,t)=3. Lemma <ref> gives that ℳ(F_s,t) contains a matching. By Theorem <ref>, 𝒟(n,2,r) contains an extremal graph for F_s,t, say G_n, provided n is large enough. Here r is a constant.
Assume V(G_n)=A_1 ∪ A_2 ∪ R, where A_1 and A_2 form a complete bipartite graph and R is the set of exceptional vertices. By the definition of 𝒟(n,2,r), for i ∈{1,2}, the subgraph G_n[A_i]=k_iH_i. If H_i is nontrivial for some i, then k_i ≥ |A_i|/r ≥ st as |V(H_i)| ≤ r. This indicates that M_st⊂ G_n[A_i], a contradiction to Lemma <ref>. Therefore, both A_1 and A_2 are independent sets. Recall the definition of symmetric graphs. For each 1 ≤ i ≤ 2 and each vertex v ∈ R, we get that either v is adjacent to all vertices in A_i or v has no neighbors in A_i. For 1 ≤ i ≤ 2, we define
B_i={v ∈ R: v is adjacent to all vertices in A_3-i}.
Similarly, let
W={v ∈ R: v is adjacent to all vertices in A_1 ∪ A_2}
and
W'={v ∈ R: v has no neighbors in A_1 ∪ A_2}.
Apparently, R=B_1 ∪ B_2 ∪ W ∪ W' is a partition of R. If W'≠∅, then for each vertex v ∈ W', we have d_G_n(v) ≤ r
as N_G_n(v) ⊂ R by the definition of W'. As r is a constant, this is a contradiction to Theorem <ref>. Thus W'=∅.
Recall F_1,t is the odd-ballooning of K_1,t. Let T be the set of t leaves of K_1,t.
The graph G_n-W does not contain F_1,t such that either T ⊂ A_1 or T ⊂ A_2.
Proof: Note that each vertex w ∈ W is adjacent to all vertices of A_1 ∪ A_2. Suppose that G_n-W contains F_1,t such that T ⊂ A_1. Note that W and T form a K_s-1,t. By the definition of W, we get that F_s-1,t is a subgraph of G_n. Together with the F_1,t, there is an F_s,t in G_n, a contradiction. □
Let ℒ={K_1,p∪ M_t-p: 0 ≤ p ≤ t}. To make the notation simple, we will use B_1 and B_2 to denote subgraphs of G_n induced by B_1 and B_2 respectively.
Both B_1 and B_2 are ℒ-free.
Proof: Suppose that B_1 contains K_1,p∪ M_t-p for some 0 ≤ p ≤ t. Consider the subgraph induced by A_1∪ A_2 ∪ B_1. Observe that A_1 ∪ B_1 and A_2 form a complete bipartite graph. As K_1,p∪ M_t-p∈ℳ(F_1,t), the subgraph induced by A_1∪ A_2 ∪ B_1 contains a copy of F_1,t, here we can choose t vertices from A_2 as leaves of K_1,t, i.e., T ⊂ A_2. This is a contradiction by Claim <ref>. We can show B_2 is ℒ-free similarly. □
Let ν_1=ν(B_1) and ν_2=ν(B_2).
ν_1+ν_2 ≤ t-1.
The proof is the same as the one for Claim 3 and it is skipped here.
|W|=s-1.
Proof: Notice that W ∪ A_1 and A_2 form a complete bipartite graph. As K_s,t∈(F_s,t), we get |W| ≤ s-1. Suppose that |W|=w ≤ s-2.
Note that
e(G_n) ≤⌊(n-w)/2⌋⌈(n-w)/2⌉+e(B_1)+e(B_2)+∑_r ∈ W d_G_n(r)
≤⌊(n-w)/2⌋⌈(n-w)/2⌉+\binom{|B_1|}{2}+\binom{|B_2|}{2}+(n-1)w
≤⌊(n-w)/2⌋⌈(n-w)/2⌉+\binom{|B_1|+|B_2|}{2}+(n-1)w
≤⌊(n-w)/2⌋⌈(n-w)/2⌉+\binom{r}{2}+(n-1)w
≤⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉+n(s-1-w)/2+\binom{r}{2}+(n-1)w
=⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉+n(s-1-w)/2+\binom{r}{2}+(n-1)(s-1)+(n-1)(w-(s-1))
< ⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉ +(s-1)(n-s+1)+\binom{s-1}{2}+t^2-3t+3,
for the last step, we note that r and t are constants, w ≤ s-2, and n is large enough. This is a contradiction to the lower bound for e(G_n) and the claim is proved. □
Assume that M_p⊂ G_n[B_i] and there is a vertex v ∈ B_3-i with at least q neighbors in B_3-i, where i ∈{1,2}, 1 ≤ p,q ≤ t-1 and p+q ≥ t. Then v is not incident with at least p+q-t+1 edges from M_p. Equivalently, v has at least 2(p+q-t+1) non-neighbors in B_i.
Proof: Suppose that M_p ⊂ B_2 and v ∈ B_1. Let M_p={x_1y_1,…,x_py_p} and M'={x_iy_i: 1 ≤ i ≤ p, either vx_i ∈ E(G_n) or vy_i ∈ E(G_n)}. Note that K_1,q⊂ B_1 with the center v. We claim |M'| ≤ t-1-q. Otherwise, suppose that {x_1y_1,…,x_t-qy_t-q}⊆ M'. Consider the subgraph induced by A_1 ∪ A_2 ∪ B_1 ∪ B_2. Fix a t-subset T of A_2. We can find the F_1,t as follows. Actually, to get an odd cycle, we include an edge from the star K_1,q and M' one by one. For an edge x_iy_i ∈ M', if we assume that vx_i is an edge and the length of the odd cycle associated with x_iy_i is 2k+1, then we can find an odd cycle vx_iy_ia_1a_2⋯ a_2k-2v, here a_j∈ A_1 for odd j, a_j ∈ A_2 for even j, and a_2k-2∈ T. This is a contradiction to Claim <ref> and the claim is proved. □
Similarly, we can show the following variant and the proof is omitted here.
Let v ∈ B_i and ν_i' be the matching number of the subgraph induced by B_i- (v ∪ N_B_i(v)). If d_B_i(v)+ν'_i=t-1, then v is not incident with any edge in B_3-i.
e(B_1 ∪ B_2) ≥ |B_1||B_2|+t^2-3t+3.
Proof: Recall the lower bound for e(G_n). On the one hand,
e(G_n-W) ≥⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉+t^2-3t+3.
On the other hand,
e(G_n-W) =|A_1 ∪ B_1||A_2|+|A_1||B_2|+e(B_1 ∪ B_2)
=|A_1 ∪ B_1||A_2 ∪ B_2|-|B_1||B_2|+e(B_1 ∪ B_2)
≤⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉-|B_1||B_2|+e(B_1 ∪ B_2).
The desired lower bound for e(B_1 ∪ B_2) follows.
Assume that both B_1 and B_2 contain edges. Let ν_1=ν(B_1) and ν_2=ν(B_2) with ν_1 ≥ν_2 and ν_1+ν_2 ≤ t-1. If either (ν_1,ν_2)=(t-2,1) with t ≥ 5 or t ∈{3,4}, then e(B_1∪ B_2) <|B_1||B_2|+t^2-3t+3.
Proof: Let E' be the set of non-edges between B_1 and B_2.
The proof is split into the following cases.
Case 1: (ν_1,ν_2)=(t-2,1) and t ≥ 5.
If Δ(B_1)=t-1, then let v be a vertex with the maximum degree and N_B_1(v)={v_1,…,v_t-1}. By Claim <ref>, B_1-{v,v_1,…,v_t-1} is an independent set. Thus e(B_1) ≤∑_i=1^t-1 d_B_1(v_i). If d_B_1(v_i) ≤ t-2 for each 1 ≤ i ≤ t-1, then e(B_1) ≤ t^2-3t+2. As ν_2=1, edges in B_2 span a star or a triangle. Let B_2' ⊂ B_2 be the set of vertices with degree at least one.
Then |B_2'| ≥ e(B_2) and the vertex v is not adjacent to any vertex from B_2' by Claim <ref>, i.e., |E'| ≥ e(B_2). It is clear that e(B_1∪ B_2) ≤ e(B_1)+e(B_2)+|B_1||B_2|-|E'| < |B_1||B_2|+t^2-3t+3. If there is a vertex v_i such that d_B_1(v_i)=t-1, then let I={1 ≤ i ≤ t-1: d_B_1(v_i)=t-1}. As above, for each i ∈ I, the vertex v_i is not adjacent to any vertex from B_2'. Thus
|E'| ≥ (|I|+1) e(B_2). Note that e(B_1) ≤ t^2-3t+2+|I| in this case. Therefore,
e(B_1∪ B_2) ≤ e(B_1)+e(B_2)+|B_1||B_2|-|E'|
≤ |B_1||B_2|+t^2-3t+2+|I|+e(B_2)-(|I|+1)e(B_2)
<|B_1||B_2|+t^2-3t+3.
If Δ(B_1)=t-2, then let v be a vertex with the maximum degree and N_B_1(v)={v_1,…,v_t-2}, here d_B_1(v_i) ≤ t-2 for each 1 ≤ i ≤ t-2. Set B_1'=B_1-(v ∪ N_B_1(v)). If B_1' is an independent set, then
e(B_1) ≤∑_i=1^t-2 d_B_1(v_i) ≤ t^2-4t+4. If further e(B_2) ≤ t-2, then the claim follows. Recall that t ≥ 5 in this case. If e(B_2)=t-1 ≥ 4, then edges in B_2 span a star since ν_2=1. Let u be the center of the star and then u is not adjacent to vertices from v ∪ N_B_1(v), i.e., |E'| ≥ t-1. Now, e(B_1∪ B_2) ≤
|B_1||B_2|+t^2-4t+4+t-1-|E'|<|B_1||B_2|+t^2-3t+3. If B_1' contains edges, then ν(B_1')=1 by Claim <ref>.
For e(B_1') ≤ t-2, we get e(B_1) ≤∑_i=1^t-2 d_B_1(v_i)+e(B_1') ≤ t^2-3t+2. Note that there are at least e(B_2) vertices which have degree at least one in B_2. Claim <ref> gives that |E'| ≥ e(B_2) and then e(B_1 ∪ B_2) ≤ |B_1||B_2|+t^2-3t+2. If e(B_1')=t-1, then B_1' is a star as t ≥ 5. This is impossible as it contradicts the assumption Δ(B_1)=t-2.
If Δ(B_1) ≤ t-3, then e(B_1) ≤ t^2-4t+4 by Theorem <ref>. Repeating the argument above, we can show the upper bound for e(B_1 ∪ B_2).
Case 2: t=4. It is clear that either (ν_1,ν_2)=(2,1) or (ν_1,ν_2)=(1,1). For the case of (ν_1,ν_2)=(1,1), note that e(B_1) ≤ 3 and e(B_2) ≤ 3. This implies that e(B_1∪ B_2) ≤ |B_1||B_2|+6<|B_1||B_2|+7. For the case of (ν_1,ν_2)=(2,1), we can prove the upper bound for e(B_1∪ B_2) by repeating the argument in Case 1.
Case 3: t=3. Apparently, (ν_1,ν_2)=(1,1) in this case. Reusing the argument in Case 2, one can show the desired upper bound for e(B_1∪ B_2). □
One of B_1 and B_2 is an independent set.
Proof: Suppose that both B_1 and B_2 contain edges. Recall ν_1=ν(B_1) and ν_2=ν(B_2). We assume that ν_1 ≥ν_2 >0. Claim <ref> implies that Δ(B_1) ≤ t-1 and Δ(B_2) ≤ t-1. If ν_1+ν_2 ≤ t-3, then by Theorem <ref>, we get
e(B_1 ∪ B_2) ≤ |B_1||B_2|+ t ν_1+t ν_2 ≤ |B_1||B_2|+ t^2-3t.
A contradiction to Claim <ref>. Thus it remains to consider the case where t-2 ≤ν_1+ν_2 ≤ t-1. To simply the notation, let K=G_n[B_1 ∪ B_2] and show an upper bound for e(K). Observe that
2e(K)=∑_v ∈ V(K) d_K(v)=∑_v ∈ B_1d_K(v)+ ∑_v ∈ B_2d_K(v).
We aim to establish ∑_v ∈ B_1d_K(v) ≤ |B_1||B_2|+2(t-ν_2)ν_1. If d_K(v,B_1) ≤ t-1-ν_2 for each vertex v ∈ B_1, then Theorem <ref> gives that e(B_1) ≤ (t-ν_2)ν_1, which yields that
∑_v ∈ B_1d_K(v)=∑_v ∈ B_1 d_K(v,B_1)+ ∑_v ∈ B_1 d_K(v,B_2) = 2e(B_1)+e(B_1,B_2) ≤ 2(t-ν_2)ν_1+|B_1||B_2|.
Thus we assume that there is a vertex v ∈ B_1 such that d_K(v,B_1) ≥ t-ν_2. Let v_1 be such a vertex with d_K(v_1,B_1)=t-1+j_1-ν_2, here j_1 ≥ 1. Applying Claim <ref> with i=1, p=ν_2, and q=t-1+j_1-ν_2, we get that v_1 has at least 2j_1 non-neighbors in B_2.
We remove j_1 neighbors of v_1 from B_1 arbitrarily and turn 2j_1 non-neighbors of v_1 in B_2 as neighbors.
Let K_1 be the resulting graph. Observe that
∑_v ∈ B_1 d_K_1(v)=∑_v ∈ B_1 d_K(v) and d_K_1(v,B_2) ≤ |B_2| for each v ∈ B_1.
Actually, the removal of j_1 neighbors of v_1 makes the degree sum decreased by 2j_1 while the adding of 2j_1 neighbors of v_1 contributes 2j_1 to the degree sum. Because of the choice of v_1, it is clear that d_K_1(v,B_2) ≤ |B_2| for each v ∈ B_1.
For i ≥ 2,
we shall define a vertex v_i and a graph K_i recursively such that ∑_v ∈ B_1 d_K_i(v)=∑_v ∈ B_1 d_K_i-1(v) and d_K_i(v,B_2) ≤ |B_2| for each v ∈ B_1. If d_K_i-1(v,B_1) ≤ t-1-ν_2 for each v ∈ B_1, then we stop. Otherwise, let v_i be a vertex such that d_K_i-1(v_i,B_1)= t-1+j_i-ν_2 with j_i ≥ 1. Observe that the added crossing edges so far are not incident with v_i. That is N_K_i-1(v_i,B_2)=N_K(v_i,B_2).
Thus we can apply Claim <ref> again to show that v_i has at least 2j_i non-neighbors in B_2 (in K_i-1 and K). We repeat the operation above to get K_i which satisfies desired properties.
Assume that the process terminates after ℓ steps. We remark that all new crossing edges are distinct as they are associated with distinct vertices v_1,…,v_ℓ in B_1.
Note that d_K_ℓ(v,B_1)≤ t-1-ν_2 for each v ∈ B_1. Therefore, by Theorem <ref> and the definition of K_i for 1 ≤ i ≤ℓ, we get
∑_v ∈ B_1 d_K(v) =∑_v ∈ B_1 d_K(v,B_1)+∑_v ∈ B_1 d_K(v,B_2)
=∑_v ∈ B_1 d_K_ℓ(v,B_1)+∑_v ∈ B_1 d_K_ℓ(v,B_2)
≤∑_v ∈ B_1 d_K_ℓ(v,B_1)+|B_1||B_2|
=2e_K_ℓ(B_1)+|B_1||B_2|
≤ |B_1||B_2|+2(t-ν_2)ν_1.
Repeating the argument above, we can show
∑_v ∈ B_2 d_K(v) ≤ |B_1||B_2|+2(t-ν_1)ν_2.
Therefore,
e(B_1∪ B_2)=e(K)=(1/2)∑_v ∈ B_1 d_K(v)+ (1/2)∑_v ∈ B_2 d_K(v) ≤ |B_1||B_2|+(t-ν_2)ν_1+(t-ν_1)ν_2.
The case where either t ∈{3,4} or t≥ 5 with (ν_1,ν_2) = (t-2,1) is proved by Claim <ref>. We next assume t ≥ 5 and (ν_1,ν_2) ≠ (t-2,1).
For the case of ν_1+ν_2=t-1, let ν_2=t-1-ν_1 and g(ν_1)=(t-ν_2)ν_1+(t-ν_1)ν_2=2ν_1^2+(2-2t)ν_1+t^2-t. Clearly, g(ν_1) is concave up. As assumptions
ν_1 ≥ν_2 and (ν_1,ν_2) ≠ (t-2,1), the maximum value of g(ν_1) is g(t-3)=t^2-5t+12. Note that g(t-3)<t^2-3t+3 when t ≥ 5.
For the case of ν_1+ν_2=t-2, let ν_2=t-2-ν_1 and h(ν_1)=2ν_1^2+(4-2t)ν_1+t^2-2t. Similarly, the maximum value of h(ν_1) is h(t-3)=t^2-4t+6<t^2-3t+3 for t ≥ 4.
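Explicitly, g(t-3)=t^2-5t+12 < t^2-3t+3 is equivalent to 2t>9, which holds for every integer t ≥ 5, and h(t-3)=t^2-4t+6 < t^2-3t+3 is equivalent to t>3, which holds for t ≥ 4.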
We established e(B_1 ∪ B_2) < |B_1||B_2|+t^2-3t+3 in each case. There is a contradiction to Claim <ref> in each case and the proof is complete. □
From now on, assume B_2 is an independent set. Let Δ_1 be the maximum degree of B_1. Recall ν_1 is the matching number of B_1.
We have e(B_1) ≥ t^2-3t+3.
Furthermore, if e(B_1) ≤ t^2-3t+3+k for an integer 0 ≤ k ≤ s-1, then \overline{e}(W)+\overline{e}(W,B_1) ≤ k. As a special case where k=0, i.e., e(B_1)=t^2-3t+3, then e(B_1,B_2)=|B_1||B_2| and each vertex w ∈ W has degree n-1.
Proof: As G_n is an extremal graph, it satisfies that
e(G_n) ≥⌊n-s+1/2⌋⌈n-s+1/2⌉+(s-1)(n-s+1)+s-12+t^2-3t+3.
Let C_1=A_1 ∪ B_1 and C_2=A_2 ∪ B_2. Observe that
e(G_n)=e(C_1, C_2)+e(B_1)+e(W,C_1 ∪ C_2)+e(W).
Notice that e(C_1, C_2) ≤⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉ and
e(W,C_1 ∪ C_2)+e(W) ≤ (s-1)(n-s+1)+\binom{s-1}{2}.
Therefore, e(B_1) ≥ t^2-3t+3.
If e(B_1) ≤ t^2-3t+3+k for an integer 0 ≤ k ≤ s-1, then
e(W,C_1 ∪ C_2)+e(W) ≤ (s-1)(n-s+1)+\binom{s-1}{2}-\overline{e}(W)-\overline{e}(W,B_1). Combined with the lower bound for e(G_n), we get \overline{e}(W)+\overline{e}(W,B_1)≤ k and the second part follows. The special case of k=0 can be shown similarly. □
The subgraph B_1 has exactly one connected component and Δ_1=t-1.
Proof: Claim <ref> gives that Δ_1 ≤ t-1 and ν_1 ≤ t-1. We assert that Δ_1 ≥ t-2. Otherwise, e(B_1) ≤ t^2-3t+2 by Theorem <ref>. This is a contradiction to Claim <ref> and the assertion follows. Let v be a vertex with maximum degree in B_1 and C_1 be the connected component containing v.
If Δ_1=t-1, then C_1 is the only connected component in B_1. Otherwise, K_1,t-1∪ M_1 ⊂ B_1. This is a contradiction to Claim <ref>.
If Δ_1=t-2, then let N(v) be the neighborhood of v in B_1.
Suppose B_1 has another connected component C_2. Then ν(C_2)=1. Otherwise, K_1,t-2∪ M_2 ⊂ B_1, which is a contradiction to Claim <ref>. Similarly, we can show B_1-(v ∪ N(v)) is an independent set, i.e., each edge in C_1 is incident with a vertex from N(v). As Δ_1=t-2, we get e(C_2) ≤ t-2 for t ≥ 5. Therefore, e(B_1) ≤∑_i=1^t-2 d_B_1(v_i)+e(C_2) ≤ (t-2)^2+t-2=t^2-3t+2 provided t ≥ 5.
For t=4, if e(C_2) ≤ 2, then e(B_1) ≤ 6 which leads to the same contradiction. If t=4 and e(C_2)=3, then C_2 is a triangle. If v_1v_2 is an edge, then e(C_1)=3 and e(B_1)=6, a contradiction to Claim <ref>. If v_1 is not adjacent to v_2, then ν(C_1)=2 and Δ(C_2)=2 which imply that K_1,2∪ M_2 ⊂ B_1. This is a contradiction to Claim <ref>.
For t=3, as Δ_1=1, we get e(B_1)=2<3 and obtain the same contradiction. Therefore, B_1 has exactly one connected component.
Notice that we already showed Δ_1 ≥ t-2 and B_1 contains exactly one connected component. One can repeat the argument above to show Δ_1=t-1. □
If t=3, then the unique connected component in B_1 either is a triangle or is a P_4.
Proof: Note that s=t=3 now. By Claim <ref>, there is a vertex v with two neighbors in B_1, say v_1 and v_2. Claim <ref> yields that B_1 ∖ (N_B_1(v_1) ∪ N_B_1(v_2)) is an independent set.
If v_1v_2 is an edge, then vv_1v_2 is a triangle and there is no other edge as Claim <ref>. If v_1 ≁v_2, then both v_1 and v_2 have at most one neighbor other than v as Δ_1=2. If one of v_1 and v_2 has degree one, then we get a P_4 and we are done. Assume v_1u_1 and v_2u_2 are two edges. If u_1≠ u_2, then K_1,2∪ M_1 ⊂ B_1, this is a contradiction to Claim <ref>. If u_1=u_2, then e(B_1)=4. Note that s-1=2 and assume W={w_1,w_2}.
If both w_1 and w_2 are completely adjacent to B_1, then {v,u_1,w_1} and {v_1,v_2,w_2} form a K_3,3. Otherwise, as the lower bound and the upper bound for e(G_n), only one of w_1 and w_2 has a unique non-neighbor in B_1, say w_1. Thus w_1 is completely adjacent to one of {v_1,v_2} and {v,u_1}, say {v,u_1}. Note that w_2 is completely adjacent to B_1.
Now {v,u_1,w_2} and {v_1,v_2,w_1} form a K_3,3. Thus F_3,3⊂ G_n by Lemma <ref>. This is a contradiction and the claim is proved. □
Let u and v be two vertices from B_1 such that u and v are not adjacent. If d_B_1(u)=t-1, then N_B_1(v) ⊆ N_B_1(u).
Proof: Suppose that v has a neighbor x such that x ∉N_B_1(u). Then observe that K_1,t-1∪ M_1 ⊂ B_1, here K_1,t-1 is formed by u and its neighbors in B_1 while M_1 is the edge vx. This is a contradiction to Claim <ref>. □
Let v be a vertex from B_1 with d_B_1(v)=t-1. Then N_B_1(v) is an independent set for t ≥ 4.
Proof: Let N_B_1(v)={v_1,…,v_t-1} and B_1'=B_1-(v ∪ N_B_1(v)). Without causing any confusion, for each u ∈ B_1, we will use N(u) to denote N_B_1(u) in the proof. Claim <ref> gives us that B_1' is an independent set. Thus
e(B_1)=∑_i=1^t-1 d_B_1(v_i)-e(N(v)).
This equation will be used frequently in the proof.
If N(v) contains an edge, then there is a vertex v_i ∈ N(v) such that d_B_1(v_i)=t-1. Otherwise, the equation (<ref>) gives that e(B_1) ≤ (t-1)(t-2)-1=t^2-3t+2 and this is a contradiction to Claim <ref>. Without loss of generality, let v_1 be such a vertex.
The vertex v_1 is adjacent to some v_j ∈ N(v). If it is not the case, then N(v_1)=v ∪ T, where T ⊂ B_1' and |T|=t-2. By Claim <ref>, N(v_i) ⊆ N(v_1)=v ∪ T for each 2 ≤ i ≤ t-1 and then N(v) is an independent set, a contradiction to the assumption. Let {v_2,…,v_p} be the set of vertices that are adjacent to v_1. We next show p<t-1. Otherwise, e(N(v)) ≥ t-2 as p=t-1. By equation (<ref>) and Claim <ref>,
t^2-3t+3 ≤ e(B_1) ≤ (t-1)^2-e(N(v)) ≤ t^2-3t+3,
which implies that d_B_1(v_i)=t-1 for each 1 ≤ i ≤ t-1, e(B_1)=t^2-3t+3, and e(N(v))=t-2, i.e., {v_2,…,v_t-1} is an independent set. By Claim <ref>, there is a subset D ⊂ B_1' with t-3 vertices such that N(v_i)={v,v_1}∪ D for each 2 ≤ i ≤ t-1. Note that N(v_1)={v,v_2,…,v_t-1}.
If we let L={v_2,…,v_t-1} and R={v,v_1}∪ D, then L ∪ R form a K_t-2,t-1.
As e(B_1)=t^2-3t+3, each vertex from W has degree n-1 by Claim <ref>. Thus including vertices from W properly, there is a K_s,t in B_1∪ W. As A_1 ∪ B_1 ∪ W and A_2 form a complete bipartite graph, then F_s,t⊂ G_n by Lemma <ref>, a contradiction. Thus p<t-1.
There is a vertex v_i such that p+1 ≤ i ≤ t-1 and d_B_1(v_i)=t-1. If there is no such a vertex, then by equation (<ref>)
e(B_1) =∑_i=1^t-1 d_B_1(v_i)-e(N(v))
=∑_i=1^p d_B_1(v_i)+∑_i=p+1^t-1 d_B_1(v_i)-e(N(v))
≤ p(t-1)+(t-1-p)(t-2)-e(N(v))
≤ p(t-1)+(t-1-p)(t-2)-(p-1)
=t^2-3t+3,
here e(N(v)) ≥ p-1 as v_1 is adjacent to v_i for each 1 ≤ i ≤ p . Together with Claim <ref>, we get
(1) d_B_1(v_i)=t-1 for 1 ≤ i ≤ p,
(2) d_B_1(v_i)=t-2 for each p+1 ≤ i ≤ t-1,
(3) e(N(v))=p-1, i.e., E(N(v))={v_1v_2,…,v_1v_p}.
Assume N(v_1)={v,v_2,…,v_p}∪ T, where T ⊂ B_1' with |T|=t-p-1.
Claim <ref> implies that N(v_i) ⊂ v ∪ T for each p+1 ≤ i ≤ t-1.
Since d_B_1(v_i)=t-2 and |v ∪ T| = t-p ≤ t-2, we get p=2 and N(v_i) =v ∪ T for each 3 ≤ i ≤ t-1.
Similarly, observe that N(v_2)={v,v_1}∪ T. Now v ∪ T and {v_1,v_2,…,v_t-1} form a K_t-2,t-1.
Notice that each vertex from W has degree n-1 as e(B_1)=t^2-3t+3. We can show F_s,t⊂ G_n similarly and this is a contradiction.
In the following, we assume that d_B_1(v_i)=t-1 for p+1 ≤ i ≤ p+q.
We next show p=2. Recall that N(v_1)={v,v_2,…,v_p}∪ T, where T ⊂ B_1' with |T|=t-p-1.
As v_1 ≁v_i for p+1 ≤ i ≤ p+q, Claim <ref> gives that N(v_i) ⊆ N(v_1)={v,v_2,…,v_p}∪ T. The assumption d_B_1(v_1)=d_B_1(v_i)=t-1 indicates that v_i is adjacent to all vertices from {v_2,…,v_p} for each p+1 ≤ i ≤ p+q. Thus e(N(v)) ≥ p-1+q(p-1). If p>2, then by equation (<ref>), we get e(B_1) ≤ (t-1)(p+q)+(t-1-p-q)(t-2)-(p-1)(q+1)=t^2-3t+3+(2-p)q<t^2-3t+3, which is a contradiction to Claim <ref>. Therefore, p=2 and e(B_1)=t^2-3t+3. This implies that d_G(v_i)=t-2 for q+3 ≤ i ≤ t-1 and each w ∈ W has degree n-1 by Claim <ref>. Notice that |T|=t-3 now.
If t ≥ 5, then reusing Claim <ref>, we get N_G_n(v_i)=v ∪ T for q+3 ≤ i ≤ t-1 and T ⊂ N_G_n(v_2). Now N_G_n(v_2)={v,v_1,v_3}∪ T and d_G_n(v_2)=t, a contradiction to Claim <ref>.
If t=4, then N_G_n(v_2)={v,v_1,v_3}. Now {v_1,v_3} and {v,v_2,u_1} form a K_2,3, here T={u_1}.
Recall each vertex w ∈ W has degree n-1 and |W|=s-1. Thus taking vertices from W properly, we can see v ∪ T ∪ N(v) ∪ W contains a K_s,4 for any 2 ≤ s ≤ 4. Since A_1 ∪ B_1 ∪ W and A_2 is a complete bipartite graph, F_s,t⊂ G_n by Lemma <ref>. This is a contradiction and the claim is proved.
□
The subgraph B_1 contains exactly one copy of H as a subgraph for t ≥ 4.
Proof: We reuse the assumptions in the proof of Claim <ref>. Then v is a vertex from B_1 with maximum degree, N_B_1(v)={v_1,…,v_t-1}, and B_1'=B_1-(v ∪ N_B_1(v)). Claim <ref> tells us that N_B_1(v) is an independent set.
Notice that e(B_1)=∑_i=1^t-1 d_B_1(v_i). Claim <ref> implies that there is at least one v_i ∈ N(v) such d_B_1(v_i)=t-1. Assume that d_B_1(v_i)=t-1 for each 1 ≤ i ≤ k. We claim k=1. Otherwise,
Claim <ref> yields that N(v_i)=v ∪ T for each 1 ≤ i ≤ k, where T ⊂ B_1' and |T|=t-2. Then L={v_1,…,v_k} and R=v ∪ T form a K_k,t-1 with k ≥ 2. In the following, we will show K_s,t⊂ B_1 ∪ W. As A_1 ∪ B_1 ∪ W and A_2 is a complete bipartite graph, then F_s,t is a subgraph of G_n by Lemma <ref> which is a contradiction to the assumption.
There are two cases depending on k.
Case 1: 2 ≤ k ≤ s-1. Let
W'={w ∈ W: w is adjacent to all vertices in R}.
Note that each w ∈ W ∖ W' has a non-neighbor in B_1. As e(B_1) ≤ t^2-3t+2+k, it follows that |W∖ W'| ≤\overline{e}(W,B_1)+\overline{e}(W) ≤ k-1 by Claim <ref>, i.e., |W'| ≥ s-k. Assume |W'|=s-k+j.
For j=0, each vertex from W∖ W' has exactly one non-neighbor in R and has degree n-2.
Note that |W'| ≤ s-2. Pick w ∈ W∖ W' arbitrarily and notice that w is adjacent to all vertices in L ∪ W'. Now, L ∪ W' and R ∪ w form a K_s,t.
For j>0, we have |W∖ W'|=k-j-1.
As \overline{e}(W ∖ W',B_1) ≥ k-j-1, we get \overline{e}(W')+\overline{e}(W',L) ≤ j by Claim <ref>. If \overline{e}(W',L)=0, then there is a vertex w ∈ W' which has at least s-k neighbors in W' provided s-k+j>2. Otherwise, e(W') ≤(s-k-1)(s-k+j)/2<\binom{s-k+j}{2}-j, a contradiction. Let w ∈ W' be such a vertex and W” be the set of s-k neighbors of w in W'. Then L ∪ W” and R ∪ w is a K_s,t.
If s-k=j=1, then k=s-1, |W'|=2, and |W∖ W'|=s-3. Assume W'={w_1,w_2}. If w_1 is adjacent to w_2, then L ∪ w_1 and R ∪ w_2 is a K_s,t. Thus we assume w_1 is not adjacent to w_2. For W∖ W' ≠∅, then each w ∈ W∖ W' is completely adjacent to L ∪ W'.
We can find a K_s,t in B_1 ∪ W as we did in the case where j=0. For W∖ W' = ∅, i.e., s=3 and W=W'={w_1,w_2}. As w_1 is not adjacent to w_2, we get both w_1 and w_2 are completely adjacent to L ∪ R by Claim <ref>.
Note that t>3 by the assumption. Then d_G_n(v_i)=t-2 for 3 ≤ i ≤ t-1 and v_i has a unique non-neighbor in T by Claim <ref>. Let Y={v_3,…,v_t-2}.
Thus there are two vertices t_1,t_2 ∈ T such that both t_1 and t_2 are completely adjacent to Y as |T|=t-2. Now {v,t_1,t_2} and W ∪{v_1,v_2}∪ Y form a K_3,t.
If \overline{e}(W',L)>0, then let Z ⊂ W' be the set of vertices which have non-neighbors in L. Thus \overline{e}(W') ≤ j-|Z|. Recall |W'|=s-k+j.
Removing vertices from W' which has non-neighbors in W' one by one, we will get a subset W” such that |W”| ≥ s-k+|Z| and W” is a clique. Let w ∈ W”∖ Z and W”' ⊂ W” be the set of s-k neighbors of w in W”. Then L ∪ W”' and R ∪ w form a K_s,t.
Case 2: k ≥ s. We first consider the case where k ≥ s+1. We claim that there is a vertex w ∈ W such that w has at least s neighbors in L. If there is a such vertex w, then let L' be the set of s neighbors of w in L. Observe that L' and R ∪ w is K_s,t. It is left to show the existence of w. Let
W'={w ∈ W: w has at most s-1 neighbors in L}.
As e(B_1) ≤ t^2-3t+2+k and each w ∈ W' has at least k-s+1 non-neighbors in L, it follows that (k-s+1)|W'| ≤ k-1. Thus |W'| ≤ 1+(s-2)/(k-s+1)≤ 1+(s-2)/2 since k ≥ s+1.
If s ≥ 4, then |W'| ≤ s-2 and the desired vertex w ∈ W exists. If s=3, then the inequality above gives |W'| ≤ 1 and the existence of w also follows. For s=2, note that W contains only one vertex, say w, and e(B_1) ≤ t^2-3t+2+k. As w has at most one neighbor in L, it has at least k-1 non-neighbors in L; equivalently, d_G_n(w) ≤ n-k. Combining the upper bound and the lower bound for e(G_n), we get that d_G_n(w) = n-k and d_G_n(v_i)=t-2 for each k+1 ≤ i ≤ t-1, here we assume k<t-1 for a while. Claim <ref> gives that N_G_n(v_i)=v ∪ T_i such that T_i ⊂ T and |T_i|=t-3, i.e., v_i has exactly one non-neighbor in T. As |T|=t-2 and k ≥ s+1, there is a vertex x ∈ T such that x is adjacent to each vertex v_i for k+1 ≤ i ≤ t-1. As d_G_n(w) = n-k and w has at least k-1 non-neighbors in L, w is completely adjacent to R. Now {v,x} and w ∪ N_B_1(v) form a K_2,t.
For k=t-1, then note that R and L ∪ w is a K_t-1,t and we can find a K_s,t easily.
It remains to prove the case of k=s. As above, if |W'| ≤ s-2, then we are done. Thus we assume |W'|=s-1, i.e., W'=W. Similarly, one can show d_G_n(w)=n-2 for each w ∈ W and the unique non-neighbor of w is in L. In addition, d_G_n(v_i)=t-2 for each s+1 ≤ i ≤ t-1, here we also assume s<t-1 for a moment. As above, we can find an (s-1)-subset T' ⊂ T such that T' and {v_s+1,…,v_t-1} form a complete bipartite graph. Now v∪ T' and w ∪ N_B_1(v) is a K_s,t for any w ∈ W.
For k=s=t-1, as above R and L ∪ w is a K_t-1,t for any w ∈ W and we are able to find K_s,t easily.
There is a contradiction in each case and then k=1 follows. Assume v_1 is the unique vertex such that d_B_1(v_1)=t-1 and N(v_1)=v ∪ T, where T ⊂ B_1' with |T|=t-2. Meanwhile, e(B_1)=t^2-3t+3 and d_B_1(v_i)=t-2 for each 2 ≤ i ≤ t-1. By Claim <ref>, N_B_1(v_i) ⊂ v ∪ T for each 2 ≤ i ≤ t-1. Actually, for each v_i, there is a unique vertex u_i ∈ T such that v_i ≁u_i. Moreover, we assert that u_i ≠ u_j for 2 ≤ i ≠ j ≤ t-1. Otherwise, suppose u_2=u_3 without loss of generality. Observe that {v_1,v_2,v_3} and v ∪ (T ∖ u_2) is K_3,t-2. Claim <ref> together with e(B_1)=t^2-3t+3 imply that each vertex from W has degree n-1. Let W' be an (s-3)-subset of W.
Then {v_1,v_2,v_3}∪ W' and v ∪ (T ∖ u_2) ∪ (W∖ W') form a K_s,t for s ≥ 3. For s=2, there is a vertex u_1 ∈ T which is completely adjacent to {v_2,…,v_t-1}. Now {v,u_1} and N(v) ∪ W is a K_2,t.
By Lemma <ref>, F_2,t⊂ G_n. This is a contradiction and the proof is complete.
Proof of Theorem <ref>: For (s,t) ≠ (3,3), Claim <ref> gives that B_1 contains exactly one copy of H as a subgraph. Let C_1=A_1 ∪ B_1 and C_2=A_2 ∪ B_2. Reusing the proof for Claim <ref>, we get that
e(G_n) ≤⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉+(n-s+1)(s-1)+\binom{s-1}{2}+t^2-3t+3.
Together with the lower bound for e(G_n), it follows that d_G_n(w)=n-1 for each w ∈ W. In addition, e(C_1,C_2)=⌊(n-s+1)/2⌋⌈(n-s+1)/2⌉, i.e., C_1 and C_2 form a balanced complete bipartite graph on n-s+1 vertices. Therefore, G_n=G_s,t and it is the unique extremal graph by Theorem <ref>. For (s,t)=(3,3), one can show that either G_n=G_3,3 or G_n=G_3,3'. In this case, we are not able to characterize all extremal graphs. □
99
AHS
H. Abbott, D. Hanson, and H. Sauer, Intersection theorems for systems of sets, J. Combin. Theory Ser. A, 12 (1972), 381–389.
CGPW
G. Chen, R. Gould, F. Pfender, and B. Wei, Extremal graphs for intersecting cliques, J. Combin. Theory Ser. B, 89 (2003), 159–171.
chi
C. Chi and L. Yuan, The Turán number for the edge blow-up of trees: The missing case, Discrete Math., 346(6) (2023), No.113370.
CH
V. Chvátal and D. Hanson, Degrees and matchings, J. Combin. Theory Ser. B, 20 (1976), 128-138.
erdos67
P. Erdős, Some recent results on extremal problem in graph theory, Theory of Graphs (ed P. Rosenstiehl), (Internat. Sympos., Rome, 1966), Gordon and Breach, New York, and Dunod, Paris, 1967, 117–123.
erdos68
P. Erdős, On some new inequalities concering extremal properties of graphs, Theory of Graphs (P. Erdős and G. Katona, Eds.), Academic Press, New. York, 1968, 77–81.
ES
P. Erdős and M. Simonovits, A limit theorem in graph theory, Studia Sci. Math Hungar., 1 (1966), 51–57.
ES1 P. Erdős and A. Stone, On the structure of linear graphs, Bull. Amer. Math., 52 (1946), 1089–1091.
EFGG
P. Erdős, Z. Füredi, R. Gould, and D. Gunderson, Extremal graphs for intersecting triangles, J. Combin. Theory Ser. B, 64(1) (1995), 89–100.
HQL
X. Hou, Y. Qiu, and B. Liu, Extremal graph for intersecting odd cycles, Electron. J. Combin., 23(2) (2016), P29.
HQL1
X. Hou, Y. Qiu, and B. Liu, Turán number and decomposition number of intersecting odd cycles, Discrete Math., 341(1) (2018), 126–137.
Liu
H. Liu, Extremal graphs for blow-ups of cycles and trees, Electron. J. Combin., 20(1) (2013), P65.
NKSZ
Z. Ni, L Kang, E. Shan, and H. Zhu, Extremal graphs for blow-ups of keyrings, Graphs Combin., 36(6) (2020), 1827–1853.
S1
M. Simonovits, A method for solving extremal problems in graph theory, stability problems, in:
Theory of Graphs, Proc. Colloq., Tihany, 1966, Academic Press, New York, 1968, 279–319.
S2
M. Simonovits, Extremal graph problems with symmetrical extremal graphs, additional chromatic conditions, Discrete Math., 7 (1974), 349–376.
Turan
P. Turán, On an extremal problem in graph theory (in Hungrarian), Mat. es Fiz. Lapok., 48 (1941), 436–452.
WHLM
A. Wang, X Hou, B. Liu, and Y. Ma,
The Turán number for the edge blow-up of trees,
Discrete Math., 344(12) (2021), No.112627.
Yan
N. Yan, The Turán number of graphs with given decomposition family, Acta Scientiarum Naturalium Universitatis Nankaiensis, 54(4) 2021, 34–43.
Yuan3 L. Yuan, Extremal graphs for the k-flower, J. Graph Theory, 89(1) (2018), 26–39.
Yuan2 L. Yuan, Extremal graphs for odd wheels, J. Graph Theory, 98(4) (2021), 691–707.
Yuan L. Yuan, Extremal graphs for edge blow-up of graphs, J. Combin. Theory Ser. B, 152 (2022), 379–398.
Yuan1
L. Yuan, Extremal graphs of the pth power of paths, European J. Combin., 104 (2022), No.103548.
ZKS
H. Zhu, L. Kang, and E. Shan, Extremal Graphs for odd-ballooning of paths and cycles, Graphs Combin., 36(3) (2020), 755–766.
ZC
X. Zhu and Y. Chen, Turán number for odd-ballooning of trees, J. Graph Theory, online DOI: 10.1002/jgt.22959.
|
http://arxiv.org/abs/2307.04767v1 | 20230710175940 | Semantic-SAM: Segment and Recognize Anything at Any Granularity | [
"Feng Li",
"Hao Zhang",
"Peize Sun",
"Xueyan Zou",
"Shilong Liu",
"Jianwei Yang",
"Chunyuan Li",
"Lei Zhang",
"Jianfeng Gao"
] | cs.CV | [
"cs.CV"
] |
Empirically Constraining the Spectra of a Star's Heterogeneities From Its Rotation Lightcurve
[
Received 24 May 2023 / Accepted 30 June 2023
=============================================================================================
In this paper, we introduce Semantic-SAM, a universal image segmentation model to enable segmenting and recognizing anything at any desired granularity. Our model offers two key advantages: semantic-awareness and granularity-abundance. To achieve semantic-awareness, we consolidate multiple datasets across granularities and train on decoupled object and part classification. This allows our model to facilitate knowledge transfer among rich semantic information. For the multi-granularity capability, we propose a multi-choice learning scheme, enabling each click point to generate masks at multiple levels that correspond to multiple ground-truth masks. Notably, this work represents the first attempt to jointly train a model on SA-1B, generic, and part segmentation datasets. Experimental results and visualizations demonstrate that our model successfully achieves semantic-awareness and granularity-abundance. Furthermore, combining SA-1B training with other segmentation tasks, such as panoptic and part segmentation, leads to performance improvements. We will provide code and a demo for further exploration and evaluation at <https://github.com/UX-Decoder/Semantic-SAM>.
§ INTRODUCTION
The universal and interactive AI systems that follow human intents have shown their potential in natural language processing <cit.> and controllable image generation <cit.>. However, such a universal system for pixel-level image understanding remains less explored.
We argue that a universal segmentation model should possess the following important properties: universal representation,
semantic-awareness, and granularity-abundance. Regardless of the specific image domain or prompt context, the model is capable of acquiring a versatile representation, predicting segmentation masks at multiple granularities, and understanding the semantic meaning behind each segmented region.
Previous works <cit.> attempted to investigate these properties, but only achieved part of the goals. The main obstacles impeding the progress of such a universal image segmentation model can be attributed to limitations in both model architecture flexibility and training data availability.
* Model Architecture. The existing image segmentation model architectures are dominated by the single-input-single-output pipeline that discards any ambiguity. While this pipeline is prevalent in both anchor-based CNN architectures <cit.> and query-based Transformer architectures <cit.>, and has demonstrated remarkable performance in semantic, instance, and panoptic segmentation tasks <cit.>, it inherently restricts the model to predict multi-granularity segmentation masks in an end-to-end manner. Although clustering postprocessing techniques <cit.> can produce multiple masks for a single object query, they are neither efficient nor effective solutions for a granularity-aware segmentation model.
* Training Data. Scaling up segmentation datasets that possess both semantic-awareness and granularity-awareness is a costly endeavor. Existing generic object and segmentation datasets such as MSCOCO <cit.> and Objects365 <cit.> offer large amounts of data and rich semantic information, but only at the object level. On the other hand, part segmentation datasets such as Pascal Part <cit.>, PartImageNet <cit.>, and PACO <cit.> provide more fine-grained semantic annotations, but their data volumes are limited. Recently, SAM <cit.> has successfully scale up the multi-granularity mask data to millions of images, but it does not include semantic annotations. In order to achieve the dual objectives of semantic-awareness and granularity-abundance, there is a pressing need to unify segmentation training on various data formats to facilitate knowledge transfer. However, the inherent differences in semantics and granularity across different datasets pose a significant challenge to joint training efforts.
In this paper, we introduce Semantic-SAM, a universal image segmentation model designed to enable segmenting and recognizing objects at any desired granularity. Given one click point from a user, our model addresses the spatial ambiguity by predicting masks in multiple granularities, accompanied by semantic labels at both the object and part levels. As shown in Figure <ref>, our model generates multi-level segmentation masks ranging from the person head to the whole truck.
The multi-granularity capability is achieved through a multi-choice learning design <cit.> incorporated into the decoder architecture. Each click is represented with multiple queries, each containing a different level of embedding. These queries are trained to learn from all available ground-truth masks representing different granularities. To establish a correspondence between multiple masks and ground-truths, we employ a many-to-many matching scheme to ensure that a single click point could generate high-quality masks in multiple granularities.
To accomplish semantic-awareness with a generalized capability, we introduce a decoupled classification approach for objects and parts, leveraging a shared text encoder to encode both objects and parts independently. This allows us to perform object and part segmentation separately, while adapting the loss function based on the data type. For instance, generic segmentation data lacks part classification loss, whereas SAM data does not include classification loss.
To enrich semantics and granularity within our model, we consolidate seven datasets on three types of granularities, including generic segmentation of MSCOCO <cit.>, Objects365 <cit.>, ADE20k <cit.>, part segmentation of PASCAL Part <cit.>, PACO <cit.>, PartImagenet <cit.>, and SA-1B <cit.>. Their data formats are reorganized to match our training objectives accordingly. After joint training, our model obtains a strong performance across a variety of datasets. Notably, we find that learning from interactive segmentation could improve generic and part segmentation. For example, by jointly training SA-1B promptable segmentation and COCO panoptic segmentation, we achieve a gain of 2.3 box AP and a gain of 1.2 mask AP. In addition, through comprehensive experiments, we demonstrate that our granularity completeness is better than SAM with more than 3.4 1-IoU.
§ DATA UNIFICATION: SEMANTICS AND GRANULARITY
To cover multi-level semantics, we include seven datasets that contain masks at different granularity levels. The datasets are SA-1B, COCO panoptic, ADE20k panoptic, PASCAL Part, PACO, PartImageNet, and Objects365. Within them, the COCO and ADE20k panoptic datasets contain object-level masks and class labels. PASCAL Part, PACO, and PartImageNet contain part-level masks and class labels. SA-1B contains up to 6-level masks without labels, while Objects365 contains abundant class labels for object-level instances. The details of these datasets are shown in Table <ref>. We further visualize the distribution of the different data types in Fig <ref>.
Type | Data | #Images | Semantic concept: Part | Semantic concept: Object
Class-agnostic | SA-1B | 11B | – | –
Object-level | Objects365 | 1.7M | – | 365
Object-level | COCO | 110K | – | 133
Object-level | ADE20K | 20K | – | 150
Part-level | PACO-LVIS | 45K | 201 | 75
Part-level | PartImageNet | 16K | 13 | 11
Part-level | Pascal Part | 5K | 30 | 20

Table: The data statistics in Semantic-SAM.

Figure: Semantics-Granularity 2D chart.
§ SEMANTIC-SAM
§.§ Model
Our model follows <cit.> to exploit a query-based mask decoder to produce semantic-aware and multi-granularity masks. In addition to the generic queries, it supports two types of prompts, including points and boxes, similar to SAM <cit.>. The overall pipeline is shown in Fig. <ref>.
We represent both click and box prompts into anchor boxes as a unified format. In particular, we convert user click point (x, y) into an anchor box (x, y, w, h) with small width w and height h, so that the anchor box can closely approximate the point. To capture different granularities of masks, each click is first encoded to position prompt and combined with K different content prompts, where each content prompt is represented as a trainable embedding vector for a given granularity level. Here we empirically choose K=6, considering there are at most 6 levels of masks per user click for the majority of images in SA-1B <cit.>.
More specifically, a click/box 𝐛=(x, y, w, h) is encoded into K content embeddings and one position embedding, respectively. We represent its content embeddings as a set of query vectors 𝐐 = ( _1, ⋯, _K). For the i-th query,
𝐪_i=𝐪^level_i+𝐪^type_i,
where
* 𝐪^level_i is the embedding for granularity level i,
* 𝐪^type_i distinguishes the query type, chosen from either the click or the box embeddings.
The position embedding of 𝐛 is implemented via sine encoding. Assuming that the output image feature from the vision encoder is 𝐅, the mask decoder of the proposed model represents the click on the input image as:
𝐎= Dec(𝐐, 𝐛, 𝐅) with 𝐎=(𝐨_1, ⋯, 𝐨_K),
where Dec(·,·,·) is a deformable decoder that takes query features, a reference box, and image features as input and outputs queried features. 𝐨_i is the model output for the i-th input query 𝐪_i. Each 𝐨_i=(𝐜_i, 𝐦_i) consists of the predicted semantic category 𝐜_i and mask 𝐦_i, which are used to construct the concept recognition loss and mask prediction loss, respectively.
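To make the prompt construction concrete, the following is a minimal sketch in PyTorch of how a single click could be turned into K content queries and a position embedding as described above. The module and function names (PromptEncoder, sine_encoding), the embedding dimension, and the anchor-box size are our own illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

K = 6  # number of granularity levels per click, as chosen above

def sine_encoding(box, dim, temperature=10000.0):
    # Sine/cosine encoding of the 4 anchor-box coordinates (illustrative).
    freqs = temperature ** (torch.arange(dim // 8).float() / (dim // 8))
    angles = box[:, None] / freqs                      # (4, dim // 8)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten()  # (dim,)

class PromptEncoder(nn.Module):
    """Sketch of q_i = q_i^level + q_i^type plus a sine position embedding."""
    def __init__(self, dim=256):
        super().__init__()
        self.level_embed = nn.Embedding(K, dim)        # q^level: one vector per granularity
        self.type_embed = nn.Embedding(2, dim)         # q^type: 0 = click, 1 = box
        self.dim = dim

    def forward(self, x, y, w=1e-4, h=1e-4, is_box=False):
        # A click (x, y) is represented as a small anchor box (x, y, w, h).
        box = torch.tensor([x, y, w, h])
        content = self.level_embed.weight + self.type_embed.weight[int(is_box)]  # (K, dim)
        position = sine_encoding(box, self.dim)                                   # (dim,)
        return content, position

content, position = PromptEncoder()(0.4, 0.7)   # K content queries for one click
```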
§.§ Training
Figure: Decoupled object and part classification.
Recognize Anything.
We train with various types of data carrying different semantic annotations: some contain only object-level annotations (COCO), some contain both object-level and part-level annotations (Pascal Part), and SA-1B has no semantic annotations but contains masks at all semantic levels. Note that a large number of part concepts are shared across different objects, for example, head for all animals. In our joint training, we aim to transfer part concept knowledge to objects that are trained with only object-level annotations.
To address this discrepancy between semantic annotations and better transfer semantics of different granularity, we propose to decouple object and part recognition. As shown in Fig <ref>, we utilize a shared text encoder to encode objects and parts, which are used to perform object and part segmentation separately.
Importantly, while all types of segmentation data share a unified format, the loss varies for different data types. We summarize the loss items used to construct the training objective in Table <ref>. It is the part-level data that bridges the gap in recognizing semantic concepts between the part and object levels, and it is the use of SAM data in Hungarian matching that bridges the gap in segmenting masks at any granularity.
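The data-type-dependent objective can be summarized with a small dispatch of the kind sketched below. The function and argument names and the toy loss stand-ins are placeholders of ours; the actual loss items and weights are those listed in Table <ref>.

```python
from typing import Callable, Dict

def compose_losses(data_type: str,
                   mask_loss: Callable[[], float],
                   object_cls_loss: Callable[[], float],
                   part_cls_loss: Callable[[], float]) -> Dict[str, float]:
    # SA-1B: mask supervision only (no semantic labels);
    # generic data (e.g., COCO panoptic): additionally object-level classification;
    # part data (e.g., Pascal Part): additionally object- and part-level classification.
    losses = {"mask": mask_loss()}
    if data_type in ("generic", "part"):
        losses["object_cls"] = object_cls_loss()
    if data_type == "part":
        losses["part_cls"] = part_cls_loss()
    return losses

# toy usage with constant stand-ins for the actual loss terms
print(compose_losses("sam", lambda: 1.0, lambda: 0.5, lambda: 0.3))
print(compose_losses("part", lambda: 1.0, lambda: 0.5, lambda: 0.3))
```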
Segment at any granularity.
To endow the model with a multi-granularity segmentation ability, we propose a many-to-many matching method during training. We found that SAM fails to provide good multi-level segmentation results with a single click because SAM uses many-to-one matching during training. In other words, the three SAM-predicted masks for each click only match with one GT mask. According to our observation, this causes clicks located in small-level masks to fail to predict large masks with high quality. In contrast, to enable multi-level mask prediction with a single click, we fully leverage the structures in both the data and the algorithm. First, we re-organize the data by clustering multiple GT masks of different levels sharing the same click. To allow multiple predictions of the same click to match with the GT masks, we employ the Hungarian algorithm to enable the many-to-many matching. The similarity matrix and scores vary based on the availability of different segmentation data components.
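A minimal sketch of this click-level matching step is given below: the K masks predicted for one click are assigned to all ground-truth masks of different granularities that share the click, using the Hungarian algorithm on an IoU-based cost. In the actual model the matching cost also involves mask and classification terms; the function below is only an illustration with our own names.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_click_predictions(pred_masks, gt_masks):
    # pred_masks: (K, H, W) boolean masks predicted for one click.
    # gt_masks:   (G, H, W) boolean ground-truth masks of different granularities
    #             sharing that click, with G <= K.
    K, G = len(pred_masks), len(gt_masks)
    iou = np.zeros((K, G))
    for i in range(K):
        for j in range(G):
            inter = np.logical_and(pred_masks[i], gt_masks[j]).sum()
            union = np.logical_or(pred_masks[i], gt_masks[j]).sum()
            iou[i, j] = inter / max(union, 1)
    rows, cols = linear_sum_assignment(1.0 - iou)   # each ground-truth level gets one prediction
    return list(zip(rows.tolist(), cols.tolist())), float(iou[rows, cols].mean())
```

The same per-click matching, with the matched IoUs averaged over all ground-truth granularities, is what underlies the 1-IoU@All Granularity metric reported in the experiments.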
For box input and generic segmentation, we follow existing methods. Specifically, to generate a mask from an input box, we follow a similar idea as in denoising training (DN) <cit.>. We add noises to ground-truth boxes to simulate inaccurate box inputs from users, and these noised boxes serve as spatial prompts for the decoder. The model is trained to reconstruct the original boxes and masks given noised boxes. For the content part of box prompts, we adopt a learnable token as a general prompt. Note that this is the only difference from DN, as DN uses ground-truth label embedding as the content prompts.
For generic segmentation, we follow the same pipeline as in Mask DINO <cit.>.
Discussion.
As shown in Fig. <ref>, Semantic-SAM differs from previous interactive segmentation models in two aspects. Firstly, we train the model to output all the possible segmentation masks with one click. Secondly, our output granularities are richer, yielding more diverse output masks.
§ EXPERIMENTS
§.§ Experimental Setup
Implementation Details.
In our experiments, we jointly train on three types of data, as shown in Table <ref>.
We implement our model based on Mask DINO <cit.> . Mask DINO is a unified detection and segmentation framework which simultaneously predicts box and mask. We follow <cit.> to use 300 latent queries and nine decoder layers for all segmentation tasks. For the visual backbone, we adopt pre-trained Swin-T/L <cit.> by default. For the language backbone, we adopt the pre-trained base model in UniCL <cit.>.
As SA-1B <cit.> dominates the data, during training, we first train on only SA-1B data. Then, we add object and part-level data to jointly train the three types of data. During training, the image resolution is 1024× 1024 for all data. We use AdamW <cit.> as the optimizer. We use large-scale jittering for object and part-level data and did not use data augmentations for SA-1B data, as SA-1B images are abundant. We set the learning rate to 0.0001, which is decayed at 0.9 and 0.95 fractions of the total number of steps by 10.
Evaluation. We mainly evaluate two datasets, including COCO Val2017 and a subset of SA-1B <cit.> with 1000 images. For evaluation metrics, we evaluate PQ and AP for generic and part segmentation datasets. For single-granularity interactive segmentation, we report Point (Max) and Point (Oracle). Max denotes we select the output mask with the maximum confidence score. Oracle
denotes we select the output mask with the max IoU by calculating the IoU between the prediction and the target mask. For multi-granularity interactive segmentation, we report 1-IoU@All Granularity, which matches all the possible ground-truth masks for a single click to the multi-granularity predictions and then calculates the average IoU over all granularities.
§.§ Semantic Segmentation of Anything
Generic Segmentation
As shown in Table <ref>, to validate the compatibility of multi-granularity interactive segmentation and generic segmentation, we jointly train with SA-1B <cit.> (1/10 data) and COCO panoptic segmentation. The result indicates that interactive segmentation with SAM can significantly help the instance-level detection and segmentation with a performance improvement of +2.2 AP on the box and +1.3 AP on the mask. Notably, OpenSeed <cit.> and our model are both based on Mask DINO <cit.>. Our joint training with SA-1B even outperforms OpenSeed, which is trained with Objects365 <cit.>. In addition, adding SA-1B mainly improves small object detection (APs and APm), as there are a large number of small objects in SA-1B.
Part Segmentation
We also validate the compatibility of joint training SA-1B (1/10 data) and part segmentation. As shown in Table <ref>, adding SA-1B brings a decent performance improvement on Pascal Part <cit.>.
Single-granularity Interactive Segmentation
In Table <ref>, we evaluate the 1-click mIoU (denoted as 1-IoU) for SAM and our model on COCO Val2017. Our model outperforms SAM under the same settings.
Multi-granularity Interactive Segmentation
In Table <ref>, we compare SAM <cit.> and our model on the output granularities for a single click. We adopt Hungarian matching to match all the possible target masks with the predicted masks for the click and calculate the average IoU score. As SAM outputs at most three masks per prompt, we also sample two clicks from a single mask to produce six output masks for a fair comparison. Notably, SAM has been trained on this validation set while our model has not.
§.§ Ablations
Match Strategy
As shown in Table <ref>, we compare different matching strategies in our model. When using many-to-many matching to match all the possible ground-truth masks for each click, the 1-IoU@All Granularity performance is significantly improved. This validates that our matching strategy is effective for learning complete granularities.
Box Interactive Evaluation
We also evaluate the 1-IoU given boxes in Table <ref>. We achieve better performance compared with object-level interactive segmentation model SEEM <cit.> and multi-granularity model SAM <cit.>.
Increasing SA-1B Training data
In Table <ref>, we show the performance improvement on COCO Val2017 when training with more SA-1B data. The performance saturates after using more than 15% of the total data. This indicates that we do not need to train with the whole SA-1B dataset to obtain good zero-shot performance.
§.§ Visualization
We compare our model with SAM to show that our model can output more levels of high-quality masks, as shown in Fig. <ref>.
Multi-Level Masks. Our model outputs more meaningful mask granularities. SAM outputs at most three masks, and its different output levels are sometimes duplicates, while the output masks of our model are more diverse.
Mask Qualities
Our model also produces masks of higher quality. SAM sometimes outputs masks with artifacts such as holes or islands, especially for large masks when the click lies within a small-scale mask, while our model outputs high-quality masks at all levels.
Compare with SA-1B Ground-truth Granularity
Our model outputs more meaningful granularities on SA-1B data compared with the original annotations.
Query semantics
We also find that each point content prompt embedding learns to correspond to a fixed granularity. As shown in Fig. <ref>, when we visualize masks in a specific order of the corresponding content embeddings, the masks consistently follow an order from small to large in each row. This shows that each content embedding represents a semantic granularity level in our model.
§ RELATED WORKS
§.§ Generic Segmentation
Segmenting visual concepts is well-documented within the expansive field of computer vision <cit.>. Broad segmentation methodologies comprise several subdivisions, such as instance segmentation, semantic segmentation, and panoptic segmentation <cit.>, each catering to a unique semantic degree. For example, semantic segmentation's goal is to detect and assign a label to each pixel in an image according to its corresponding semantic class <cit.>. Conversely, instance segmentation seeks to cluster pixels associated with the same semantic class into distinct object instances <cit.>. Panoptic segmentation is the hybrid of these two tasks.
Recently, Transformer-based methods <cit.> have contributed to significant progress in segmentation tasks <cit.>.
Generic object detection and segmentation have led to the development of abundant datasets, such as MSCOCO <cit.>, LVIS <cit.>, Objects365 <cit.>, PASCAL <cit.>, CityScapes <cit.>, ADE20k <cit.>, etc.
§.§ Part Segmentation
Beyond generic segmentation, part segmentation aims at more fine-grained visual understanding.
Most early works were bottom-up methods that group super-pixels into parts and then objects <cit.>. Later, based on high-performance object detection networks <cit.>, top-down methods were developed that first detect an object and then parse it into part segments <cit.>. To segment the scene at multiple granularities, part-aware panoptic segmentation <cit.> is introduced. PPS <cit.> establishes the baseline by assembling panoptic and part segmentation models. JPPF <cit.> simplifies the model with a shared image encoder for both panoptic segmentation and part segmentation. By representing things, stuff, and parts as object queries, Panoptic-PartFormer <cit.> proposes a unified architecture based on Transformers. While part segmentation data is much more expensive to annotate than object detection and segmentation data, a number of public datasets are available. Datasets for specific domains include cars <cit.>, birds <cit.>, and fashion <cit.>. Datasets for general objects include Pascal-Part <cit.>, PartImageNet <cit.>, ADE20K <cit.>, Cityscapes-Panoptic-Parts <cit.>, and PACO <cit.>. More recently, SAM <cit.> provides a large-scale multi-granularity class-agnostic segmentation dataset. Our work is jointly trained on these datasets and contributes a multi-granularity segmentation model.
§.§ Open-Vocabulary Segmentation
While generic segmentation and part segmentation have made remarkable progress, they can only segment the image with a closed-set vocabulary. To expand the vocabulary size, recent works transfer the visual-semantic knowledge of large-scale foundation models like CLIP <cit.>, ALIGN <cit.> and diffusion models <cit.> to various segmentation tasks. LSeg <cit.>, OpenSeg <cit.>, and GroupViT <cit.> achieve open-vocabulary semantic segmentation on ADE20K and PASCAL. DenseCLIP <cit.> and MaskCLIP <cit.> achieve open-vocabulary instance and panoptic segmentation on the COCO dataset. More recently, X-Decoder <cit.> proposes a unified approach to tackle various segmentation and vision-language tasks for open-vocabulary segmentation, and OpenSeeD <cit.> proposes to use a large amount of detection data and a joint training method to improve segmentation. To segment open-vocabulary masks at the part level, VLPart <cit.> leverages three part segmentation datasets and learns from the dense correspondence <cit.> between base objects and novel objects. Our work unifies these tasks into one architecture and builds up open-vocabulary segmentation at multiple granularities.
§.§ Interactive Segmentation
Interactive segmentation refers to the process of separating objects by actively integrating user inputs. This enduring challenge has seen notable advancements <cit.>. Previous works only focus on a small set of data or on semantic-agnostic instance masks. Recently, SAM <cit.> enlarges the training data from 0.12M COCO images to 10M fine-grained SAM images, and SEEM <cit.> extends the input modalities to language and supports both generic and grounded segmentation with impressive compositionality.
§ CONCLUSION
In this paper, we have presented our model, which can segment and recognize anything at any desired granularity. Apart from performing generic open-vocabulary segmentation, it demonstrates the advantages of semantic awareness and granularity abundance. To achieve such advantages, we have proposed improvements on data, model, and training, where we utilize datasets from multiple granularity and semantic levels, multi-choice learning for training, and a universal framework for modeling. Comprehensive experiments and visualizations have verified the semantic awareness and granularity abundance of our model. Further, ours is the first successful attempt to jointly train on SA-1B and other classic segmentation datasets. Experimental results also show that training with SA-1B improves other tasks such as panoptic and part segmentation.
|
http://arxiv.org/abs/2307.10219v1 | 20230714212916 | Exploring Link Prediction over Hyper-Relational Temporal Knowledge Graphs Enhanced with Time-Invariant Relational Knowledge | [
"Zifeng Ding",
"Jingcheng Wu",
"Jingpei Wu",
"Yan Xia",
"Volker Tresp"
] | cs.AI | [
"cs.AI",
"cs.LG"
] |
Exploring Link Prediction over Hyper-Relational Temporal Knowledge Graphs Enhanced with Time-Invariant Relational Knowledge
Zifeng Ding (LMU Munich; Siemens AG, Munich, Germany), [email protected] (equal contribution)
Jingcheng Wu (LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany), [email protected] (equal contribution)
Jingpei Wu (LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany), [email protected]
Yan Xia (Technical University of Munich; Munich Center for Machine Learning, Munich, Germany), [email protected]
Volker Tresp (LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany), [email protected] (corresponding author)
Stemming from traditional knowledge graphs (KGs), hyper-relational KGs (HKGs) provide additional key-value pairs (i.e., qualifiers) for each KG fact that help to better restrict the fact validity. In recent years, there has been an increasing interest in studying graph reasoning over HKGs. In the meantime, due to the ever-evolving nature of world knowledge, extensive parallel works have been focusing on reasoning over temporal KGs (TKGs), where each TKG fact can be viewed as a KG fact coupled with a timestamp (or time period) specifying its time validity.
The existing HKG reasoning approaches do not consider temporal information because it is not explicitly specified in previous benchmark datasets. Besides, all the previous TKG reasoning methods only lay emphasis on temporal reasoning and have no way to learn from qualifiers.
To this end, we aim to fill the gap between TKG reasoning and HKG reasoning. We develop two new benchmark hyper-relational TKG (HTKG) datasets, i.e., Wiki-hy and YAGO-hy, and propose a HTKG reasoning model that efficiently models both temporal facts and qualifiers. We further exploit additional time-invariant relational knowledge from the Wikidata knowledge base and study its effectiveness in HTKG reasoning.
Time-invariant relational knowledge is knowledge that remains unchanged over time (e.g., Sasha Obama is the child of Barack Obama), and it has never been fully explored in previous TKG reasoning benchmarks and approaches. Experimental results show that our model substantially outperforms previous related methods on HTKG link prediction and can be further enhanced by jointly leveraging both temporal and time-invariant relational knowledge.
[500]Computing methodologies Knowledge representation and reasoning
[300]Information systems Graph-based database models
Volker Tresp
August 12, 2023
===================
§ INTRODUCTION
Traditional knowledge graphs (KGs) represent world knowledge by storing a collection of facts in the form of triples. Each KG fact can be described as (s, r, o), where s, o are the subject and object entities of the fact and r denotes the relation between them.
On top of traditional triple-based KGs, hyper-relational KGs (HKGs) are designed to introduce additional information into each triple-based fact (also known as primary triple in HKGs) by incorporating a number of key-value restrictions named as qualifiers <cit.>. Compared with triple-based KGs, HKGs provide more complicated semantics. For example, in Figure <ref> (A), the degree and major information of Albert Einstein is provided, which helps to differentiate between the facts regarding two universities attended by him.
Since KGs are known to be perpetually incomplete <cit.>, a wide range of approaches (e.g., <cit.>) have been developed that aim to complete both triple-based KGs and HKGs by predicting unobserved facts (i.e., link prediction (LP)) based on the observed facts.
In the meantime, due to the ever-evolving nature of world knowledge, reasoning over temporal KGs (TKGs) has become a heated topic. Traditional TKGs describe a fact in the form of a quadruple (s,r,o,t), where t denotes the valid time (either a timestamp or a time period) of the fact. By specifying the time validity of facts, TKGs are more expressive than triple-based KGs
and require temporal reasoning skills for modeling. TKGs are also known to be incomplete <cit.>, and therefore, extensive work has been done on automatic TKG LP (e.g., <cit.>).
Although previous works have achieved great success in reasoning over quadruple-based TKGs and HKGs, there has been no exploration in studying hyper-relational TKGs (HTKGs, see formal definition in Section <ref>), where a quadruple-based TKG fact can be augmented with additional qualifiers for better expressiveness.
For example, in Figure <ref> (B), by specifying the movie name, richer semantic information is provided to differentiate between the facts regarding two awards Valerio Mastandrea received in 2013. Same as traditional TKGs, HTKGs are also incomplete.
The existing HKG reasoning methods only focus on learning from qualifiers and they fail to model temporal information, while all the previous TKG reasoning methods lay emphasis on temporal reasoning and have no way to learn from qualifiers. Therefore, it is necessary to develop novel methods exclusively for HTKG LP.
In this paper, we propose a new data structure called HTKG. We construct two benchmark datasets Wiki-hy and YAGO-hy. They are based on two traditional TKG benchmark datasets Wikidata11k <cit.> and YAGO1830 <cit.>.
We extract the qualifiers of every quadruple-based fact in traditional TKG benchmarks from the Wikidata <cit.> knowledge base (KB) and augment the TKG quadruples with qualifiers to achieve hyper-relational fact creation.
We further develop a model, HypeTKG, to achieve LP over hyper-relational TKGs.
To maximally learn from qualifiers, we first devise a qualifier-attentional time-aware graph encoder (QATGE).
Our graph encoder adaptively distinguishes the contribution of different qualifiers with an element-wise attention module and models temporal information in the graph aggregation process. We also design a qualifier matching decoder (QMD). Given any HTKG LP query, QMD not only considers its own qualifiers, but also computes a global feature originating from all the qualifiers appearing in all the facts related to the query subject (see Section <ref>, <ref> for motivation and examples).
Another point worth noting is that since the existing TKG reasoning benchmarks are purely made of time-evolving facts, previous TKG reasoning approaches have not studied how to explicitly model time-invariant relational knowledge[TNTComplEx <cit.> decomposes each relation's representation into temporal and non-temporal components to enable learning from time-invariant knowledge. However, this decomposition still assumes that part of the relational information is temporal and thus it is not optimal to model the relational information purely static over time.] jointly with temporal facts. In real-world scenarios, however, there exists a substantial amount of time-invariant facts. These facts can serve as a strong information source for improving reasoning over TKGs. Assume we want to predict the missing entity of a TKG LP query (Modern Family, award received, ?, 2014) and we can observe the temporal fact (Modern Family, award received, Emmy Award, 2011). If we further provide two time-invariant facts, i.e., (Emmy Award, subclass of, television award) and (International TV Audience Award, subclass of, television award), models can be greatly helped in predicting the ground-truth answer, i.e., International TV Audience Award, because
it is likely that Modern Family will receive another television award given that it has already received one. In our work, we mine the time-invariant relational knowledge from the Wikidata KB by collecting a number of time-invariant facts[Each augmented graph can be viewed as a partially temporal KG which is a mixture of temporal and static (time-invariant) facts. The term "static" is used in previous research to distinguish from KGs and TKGs, e.g., <cit.>. We use it as an abbreviated paraphrase of "time-invariant" in our work.] from it. We pick out the facts that contain 10 frequently mentioned time-invariant relations, e.g., subclass of and child. We ensure that these facts remain valid within the whole time scopes of the HTKGs. We also adjust the model structure of HypeTKG to accommodate time-invariant factual information.
To summarize, our contributions are as follows:
* This is the first work drawing attention to HTKGs and we propose two new corresponding benchmarks.
* We propose HypeTKG, a model specifically designed to reason over HTKGs.
* We mine the time-invariant relational knowledge from Wikidata KB and
adapt HypeTKG to jointly learning from temporal and time-invariant facts.
* Experimental results show that HypeTKG performs well on HTKG LP and the performance can be enhanced by leveraging time-invariant relational knowledge.
§ RELATED WORK AND PRELIMINARIES
§.§ Related Work
§.§.§ Traditional KG & TKG Reasoning.
Extensive research has been conducted on KG reasoning. A series of works <cit.> designs KG score functions that compute plausibility scores of triple-based KG facts, while another line of works <cit.> incorporates neural-based modules, e.g., graph neural networks (GNNs) <cit.>, into score functions for learning better representations.
On top of the existing KG score functions, some recent works develop time-aware score functions <cit.> that further model time information for reasoning over traditional TKGs. Another group of TKG reasoning methods employs neural structures. Some of them <cit.> achieve temporal reasoning by first learning the entity and relation representations of each timestamp with GNNs and then using recurrent neural structures, e.g., LSTM <cit.>, to compute time-aware representations. Other methods <cit.> develop time-aware relational graph encoders that directly perform graph aggregation based on the temporal facts sampled from different times. There exist two settings in TKG LP, i.e., interpolation and extrapolation. In extrapolation, to predict a fact (i.e., link) happening at time t, models can only observe previous TKG facts before t, while such a restriction is not imposed in interpolation. Among the above-mentioned related works, <cit.> are designed for interpolation and <cit.> are for extrapolation. In our work, we only focus on interpolated LP on HTKGs and leave extrapolation for future work. The existing TKG reasoning methods cannot model qualifiers in HTKG facts, and our HypeTKG is the first method that achieves learning from qualifiers in TKGs.
§.§.§ Hyper-Relational KG Reasoning.
Mainstream HKG reasoning methods can be categorized into three types.
The first type of works <cit.> treats each hyper-relational fact as an n-ary fact represented with an n-tuple: r_abs(e_1,e_2, ..., e_n), where n is the non-negative arity of an abstract relation r_abs[r_abs is called abstract relation because in this formulation, each r_abs is derived from a combination of several KG relations by concatenating the relation in the primary triple and qualifiers (as described in <cit.>).] representing the number of entities involved within r_abs and e_1, ..., e_n are the entities appearing in this n-ary fact. RAE <cit.> generalizes traditional KG reasoning method TransH <cit.> to reasoning n-ary facts and improves performance by considering the relatedness among entities. Similarly, HypE <cit.> and GETD <cit.> derive the n-ary fact reasoning models by modifying traditional KG score functions SimplE <cit.> and TuckER <cit.>, respectively. S2S <cit.> improves GETD by enabling reasoning over mixed-arity facts. HyConvE <cit.> employs convolutional neural networks to perform 3D convolution capturing the deep interactions of entities and relations. Although these methods show strong effectiveness, the way of treating HKG facts as n-ary facts naturally loses the semantics of the original KG relations and would lead to a combinatorial explosion of relation types <cit.>.
The second type of works <cit.> transforms each hyper-relational fact into a set of key-value pairs: {(r_i:e_i)}. NaLP <cit.> captures the relatedness among all
the r_i:e_i pairs by using neural networks. RAM <cit.> introduces a role learning paradigm that models both the relatedness among different entity roles as well as the role-entity compatibility. Formulating hyper-relational facts into solely key-value pairs would also cause a problem. The relations from the primary fact triples and qualifiers cannot be fully distinguished, and the semantic difference among them is ignored <cit.>.
To overcome the problems incurred in first two types of methods, recently, some works <cit.> formulate hyper-relational facts into a primary triple with a set of key-value qualifier pairs: {((s,r,o),{(r_q_i, e_q_i)})}. NeuInfer <cit.> uses fully-connected neural networks to separately model each primary triple and its qualifiers. HINGE <cit.> adopts a convolutional framework that is iteratively applied on the qualifiers for information fusion. StarE <cit.> develops a qualifier-aware GNN which allows jointly modeling an arbitrary number of qualifiers with the primary triple relation. GRAN <cit.> models HKGs with edge-biased fully-connected attention. It uses separate edge biases for the relations in the primary triples and qualifiers to distinguish their semantic difference.
While previous methods perform well on LP over HKGs, none of them devises temporal reasoning components to model temporal information. Although in HKG benchmarks, e.g., WD50K <cit.>, time information might exist as an entity in a qualifier, previous methods treat it the same as other entities (e.g., 2003 is treated the same as Barack Obama). Moreover, only a small part of the facts in these benchmarks contain such "time entities", making temporal reasoning a minor concern. Another problem is that previous research only considers a fact's own qualifiers during reasoning, which potentially loses abundant information from the qualifiers of other related facts. In our work, we design a module called the qualifier matcher in QMD as a solution to this problem.
§.§.§ Improving TKG Reasoning with Additional Knowledge.
Several recent works study the effect of different types of additional knowledge in improving TKG reasoning. ECOLA <cit.> enhances TKG reasoning by exploiting the natural language text contexts of entities. FILT <cit.> and FITCARL <cit.> utilize entity concept information (entity type constraints) for learning better representations of newly-emerged few-shot entities. These methods show that properly employing additional knowledge can help improve TKG LP; however, none of them considers introducing time-invariant relational knowledge. We augment HTKGs with time-invariant relational knowledge and use it to enhance LP on temporal facts. This can be taken as modeling a partially temporal KG, but it is worth noting that our focus is still on predicting temporal facts, each of which is explicitly coupled with time information.
§.§ Definition and Problem Statement
We give the definition of HTKG and HTKG LP as follows.
Definition 1 (Hyper-Relational TKG). Let ℰ, ℛ, 𝒯 denote a set of entities, relations and timestamps[Time periods can be decomposed into a series of timestamps with a suitable time granularity. We follow <cit.> and define time in this fashion.], respectively. An HTKG 𝒢 is a set of hyper-relational temporal facts. Each fact is described as ((s,r,o,t), {(r_q_i, e_q_i)}), where (s,r,o,t) is named as its primary quadruple. e_q_i∈ℰ and r_q_i∈ℛ denote the entity and relation in its i^th qualifier q_i, respectively. Since each fact is coupled with a different number of qualifiers, the size of {(r_q_i, e_q_i)} varies.
Definition 2 (Hyper-Relational TKG LP). Let 𝒢_tr be a ground-truth HTKG. 𝒢_tr = {𝒢_obs, 𝒢_un} (𝒢_obs∩𝒢_un = ∅), where 𝒢_obs denotes a set of observed HTKG facts and 𝒢_un is a set of unobserved facts. Given an LP query ((s,r,?,t), {(r_q_i, e_q_i)}) (or ((?,r,o,t), {(r_q_i, e_q_i)})) derived from an unobserved fact, a model is asked to predict the missing entity in this query by leveraging 𝒢_obs. Following previous works on TKGs, e.g., <cit.>, for each fact ((s,r,o,t), {(r_q_i, e_q_i)}), we create another fact ((o,r^-1,s,t), {(r_q_i, e_q_i)}) and add it to the graph, where r^-1 denotes the inverse relation of r. We derive an object entity prediction query from each fact and perform object prediction. Since we have included extra facts with inverse relations, only considering object prediction does not incur a loss of generality. Note that we follow <cit.> and only predict missing entities in the primary quadruples (as primary triples in <cit.>).
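An illustrative sketch of how such hyper-relational temporal facts and the inverse-relation augmentation above can be represented (names and the id-offset encoding of r^-1 are our assumptions):

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HTFact:
    s: int                      # subject entity id
    r: int                      # relation id
    o: int                      # object entity id
    t: int                      # timestamp id
    quals: List[Tuple[int, int]] = field(default_factory=list)  # (r_q, e_q) qualifier pairs

def add_inverse_facts(facts: List[HTFact], num_relations: int) -> List[HTFact]:
    # For every fact add (o, r^-1, s, t) with the same qualifiers;
    # r^-1 is encoded here as r + num_relations, a common convention.
    inverse = [HTFact(f.o, f.r + num_relations, f.s, f.t, list(f.quals)) for f in facts]
    return facts + inverse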
§ PROPOSING NEW BENCHMARKS
§.§ New Hyper-Relational TKGs
We propose two HTKG benchmark datasets, i.e., Wiki-hy and YAGO-hy. Wiki-hy contains hyper-relational facts extracted from Wikidata <cit.>, which happen from the year 1513 to 2020. YAGO-hy is constructed from the facts in YAGO3 <cit.>, and its time scope is from the year 1830 to 2018. We use the previous traditional TKG benchmarks Wikidata11k[https://github.com/jaehunjung1/T-GAP/blob/master/data/preprocess.sh] <cit.> and YAGO1830[https://github.com/TemporalKGTeam/xERTE/tree/main/tKGR/data/YAGO1830] <cit.> as bases and search for the qualifiers of their facts in Wikidata. We use the MediaWiki API[https://www.wikidata.org/w/api.php] to identify the quadruple-based TKG facts in Wikidata and extract all the qualifiers stated under the corresponding Wikidata statements. Since Wikidata11k is originally extracted from Wikidata, we can directly find its relations and entities in this KB. However, YAGO1830 originates from YAGO3, where entities share the same pool as Wikidata but relation types are taken from schema.org. We manually map the relation types of YAGO1830 to the Wikidata relations to enable fact matching. The detailed mapping of relation types is shown in Table <ref> in Appendix B. Besides, since YAGO1830 is originally a TKG extrapolation dataset (recall Section <ref>), we redistribute its facts and convert it into an interpolation dataset before qualifier searching. We ensure that the proportions of the numbers of facts in the train/valid/test sets of YAGO-hy conform to the corresponding sets in YAGO1830. We provide the dataset statistics of Wiki-hy and YAGO-hy in Table <ref>. Note that qualifier searching introduces additional entities and relations; we include them in our datasets and consider them in model training and evaluation.
§.§ Exploring Time-Invariant Knowledge
We explore time-invariant knowledge as follows. We first find the 400 most frequent relations in the Wikidata KB. We then manually check each of them and pick out the 10 most frequent relations that describe time-invariant static relationships among entities. The selected static relations are family name, native language, subclass of, capital, child, sibling, father, mother, ethnic group, country of origin. We ensure that they are disjoint from the existing relations in the original HTKGs. Starting from the entities in our HTKGs, we search for their associated time-invariant facts in Wikidata, where each of them corresponds to a selected static relation. For example, for the YAGO-hy entity Emmy Award, we take facts such as (Emmy Award, subclass of, television award). Each time-invariant fact is an (s,r,o) triple, the same form as the facts in triple-based KGs. As a result, we collect a set of facts denoted as 𝒢_static (𝒢_static∩𝒢_tr=∅) for Wiki-hy and YAGO-hy. We allow models to use all of them for enhancing LP over temporal facts during training, validation and test. We provide the statistics of 𝒢_static in Table <ref>.
§ HYPETKG
HypeTKG consists of two parts, i.e., a qualifier-attentional time-aware graph encoder (QATGE) and a qualifier matching decoder (QMD). To further learn from time-invariant knowledge, we equip HypeTKG with additional modules and develop a model variant HypeTKG^ψ. Figure <ref> illustrates the model structure of HypeTKG^ψ.
§.§ Qualifier-Attentional Time-Aware Graph Encoder
QATGE learns a contextualized representation for every entity. Given entity e, QATGE finds its temporal neighbors from 𝒢_obs, i.e., 𝒩_e = {((e',r',t'), {(r'_q_i, e'_q_i)})|((e',r',e,t'), {(r'_q_i, e'_q_i)}) ∈𝒢_obs}. For each temporal neighbor ζ = ((e',r',t'), {(r'_q_i, e'_q_i)}), QATGE employs an attention-based module to model its qualifiers. It computes the representation 𝐡^ζ_q_i for the i^th qualifier q_i of ζ with a function ϕ(·,·).
𝐡^ζ_q_i = ϕ (𝐡_e'_q_i, 𝐡_r'_q_i) = 𝐖_1 (𝐡_e'_q_i ‖ 𝐡_r'_q_i) * f(𝐡^ℂ_e'_q_i ∘ 𝐡^ℂ_r'_q_i) * (𝐡_e'_q_i ⊕ 𝐡_r'_q_i).
𝐡_e'_q_i∈ℝ^d and 𝐡_r'_q_i∈ℝ^d denote the representations of the entity and relation in q_i, respectively. ‖ denotes concatenation and 𝐖_1 ∈ℝ^d×2d is a weight matrix. 𝐡^ℂ_e'_q_i∈ℂ^d/2 and 𝐡^ℂ_r'_q_i∈ℂ^d/2 are the complex vectors mapped from 𝐡_e'_q_i and 𝐡_r'_q_i. The real part of 𝐡^ℂ_e'_q_i is the first half of 𝐡_e'_q_i and the imaginary part is the second half, e.g., if 𝐡_e'_q_i = [6,3]^⊤∈ℝ^2, then 𝐡^ℂ_e'_q_i = [6+3√(-1)]^⊤∈ℂ^1. 𝐡_r'_q_i^ℂ[j] = cos(𝐡_r'_q_i[j]) + √(-1)sin(𝐡_r'_q_i[d/2+j]), where 𝐡_r'_q_i^ℂ[j] and 𝐡_r'_q_i[d/2 + j] denote the j^th and (d/2 + j)^th element of 𝐡_r'_q_i^ℂ and 𝐡_r'_q_i, respectively. ∘ represents the Hadamard product on the complex space. f(·): ℂ^d/2→ℝ^d is a mapping function that maps the complex vectors back to the real vectors, in the reverse way as 𝐡_e'_q_i was mapped onto the complex space. f(𝐡^ℂ_e'_q_i∘𝐡^ℂ_r'_q_i) is a composition function inspired by RotatE <cit.> that performs a rotation in the complex plane. * and ⊕ are element-wise product and addition operations, respectively. After getting {𝐡^ζ_q_i}, QATGE integrates the information from all of them by computing an attentional feature 𝐡^ζ_Qual related to the primary relation r' of ζ.
𝐡^ζ_q_i = (𝐡^ζ_q_i^⊤𝐡_r') * 𝐰, α_i[j] = exp(𝐡^ζ_q_i[j])/∑_k=1^K exp(𝐡^ζ_q_k[j]); 𝐚_i = [α_i[1], ..., α_i[d]]^⊤,
𝐡^ζ_Qual = 1/K∑_q_i𝐖_Qual (𝐚_i * 𝐡^ζ_q_i).
𝐰∈ℝ^d is a trainable parameter. K is the number of qualifiers of ζ.
𝐚_i is an attention vector, where each of its element α_i[j] denotes the attention score determining how important the j^th element of the i^th qualifier is in the j^th element of 𝐡^ζ_Qual. The importance increases as the score rises. 𝐖_Qual∈ℝ^d× d is a weight matrix. 𝐡^ζ_Qual can be viewed as a parameter that adaptively selects the information highly-related to r' from all the qualifiers of the temporal neighbor. To compute e's representation 𝐡_e, we aggregate over all its temporal neighbors in 𝒩_e with a gated structure.
𝐡_e = 1/|𝒩_e|∑_ζ∈𝒩_e𝐖_2 ϕ((γ𝐡^ζ_Qual + (1-γ)𝐡_r'), 𝐡_(e', t')),
where 𝐖_2 ∈ℝ^d× d is a weight matrix. γ is a trainable gate parameter controlling the amount of information taken from either the primary relation r' or the qualifiers. QATGE incorporates temporal information by learning a time-aware representation for each temporal neighbor's subject entity: 𝐡_(e', t') = f_t(𝐡_e' ‖ 𝐡_t'). f_t(·):ℝ^2d→ℝ^d is a single neural network layer. 𝐡_t' = √(1/d)[cos(ω_1t'+ϕ_1), …, cos(ω_dt'+ ϕ_d)], where ω_1 …ω_d and ϕ_1 …ϕ_d are trainable parameters. Note that QATGE differs substantially from the previous methods StarE <cit.> and GRAN <cit.>. StarE treats every qualifier equally, making it impossible to distinguish the contributions of different qualifiers. Although GRAN also uses an attention-based module in encoding, it treats all entities and relations as vertices and performs message passing among them. This naturally neglects the implicit structural information in hyper-relational facts, i.e., that qualifiers serve as supplementary information for the primary fact. Moreover, QATGE achieves temporal reasoning by learning time-aware entity representations, while previous methods cannot.
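A minimal sketch of the RotatE-style composition f(𝐡^ℂ_e ∘ 𝐡^ℂ_r) used inside ϕ above (splitting a real vector of even dimension d into a complex vector of dimension d/2, rotating it by relation phases, and mapping it back); batching and the surrounding concatenation and gating are omitted, and tensor shapes are our assumptions:

import torch

def complex_rotate(h_e: torch.Tensor, h_r: torch.Tensor) -> torch.Tensor:
    d = h_e.shape[-1]
    e_re, e_im = h_e[..., : d // 2], h_e[..., d // 2:]
    # Relation phases: cosine from the first half, sine from the second half,
    # following the mapping of h_r to the complex space described above.
    r_re = torch.cos(h_r[..., : d // 2])
    r_im = torch.sin(h_r[..., d // 2:])
    # Complex (Hadamard) product, then map back to a real vector.
    out_re = e_re * r_re - e_im * r_im
    out_im = e_re * r_im + e_im * r_re
    return torch.cat([out_re, out_im], dim=-1)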
§.§ Qualifier Matching Decoder
QMD leverages the entity and relation representations encoded by QATGE for LP. Assuming we want to predict the missing entity of the LP query ((s,r,?,t), {(r_q_i, e_q_i)}) (derived from the query fact ((s,r,o,t), {(r_q_i, e_q_i)})), QMD learns a query feature 𝐡^que. QMD first models the query qualifiers {(r_q_i, e_q_i)} with a qualifier-wise Transformer. Each query qualifier's entity and relation are treated as two tokens and concatenated as a sub-sequence for this qualifier.
The classification ([CLS]) token is then concatenated with the query qualifier tokens as a sequence and fed into the qualifier-wise Transformer, where the sequence length is 2K_que+1 (K_que denotes the number of query qualifiers). We take the output representation of the [CLS] token as the query qualifier feature 𝐡^que_Qual∈ℝ^d, which contains comprehensive information from all query qualifiers. Apart from 𝐡^que_Qual, we also devise a qualifier matcher that further exploits additional supporting information from the qualifiers of other observed facts related to the query subject s in 𝒢_obs. The motivation of this step is that the evidence for LP is not only stored in the query qualifiers but can also be found in other subject-related facts (see Section <ref> for detailed examples). The qualifier matcher finds all the HTKG facts in 𝒢_obs where each of them takes s as the subject of its primary quadruple. It then collects all their qualifiers {(r_q_l, e_q_l)} and computes a global qualifier feature
𝐡_Qual^glo = 1/M ∑_q_l [ exp((𝐖_3(𝐡_r_q_l ‖ 𝐡_e_q_l))^⊤ (𝐖_4(𝐡_(s,t) ‖ 𝐡_r))) / ∑_m=1^M exp((𝐖_3(𝐡_r_q_m ‖ 𝐡_e_q_m))^⊤ (𝐖_4(𝐡_(s,t) ‖ 𝐡_r))) ] 𝐖_3(𝐡_r_q_l ‖ 𝐡_e_q_l),
where M denotes the number of s-related qualifiers and 𝐖_3, 𝐖_4 ∈ℝ^d× 2d are weight matrices. 𝐡_(s,t) = f_t(𝐡_s ‖ 𝐡_t). Given 𝐡_Qual^que and 𝐡_Qual^glo (𝐡_Qual^glo∈ℝ^d), QMD uses another query-wise Transformer to compute a query feature. We concatenate the representation of another separate [CLS] token with 𝐡_(s,t) ‖ 𝐡_r ‖ 𝐡^que_Qual ‖ 𝐡^glo_Qual and input it into the query-wise Transformer. The output representation of this separate [CLS] token corresponds to 𝐡^que∈ℝ^d. 𝐡^que is used by QMD to compute a score for each candidate entity e_c ∈ℰ
λ(((s,r,e_c,t), {(r_q_i, e_q_i)})) = (𝐡^que * 𝐡_t)^⊤𝐖_5 𝐡_e_c,
where 𝐖_5 ∈ℝ^d× d is a score matrix.
HypeTKG takes the candidate entity with the highest score computed in Equation <ref> as the predicted answer.
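A compact sketch of this scoring step applied to all candidate entities at once (tensor shapes and the batching are our assumptions):

import torch

def candidate_scores(h_que: torch.Tensor, h_t: torch.Tensor,
                     W5: torch.Tensor, ent_emb: torch.Tensor) -> torch.Tensor:
    # h_que, h_t: (d,); W5: (d, d); ent_emb: (|E|, d).
    query = (h_que * h_t) @ W5      # (d,)
    return ent_emb @ query          # (|E|,): one score per candidate entity

The candidate with the highest score is returned as the predicted answer.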
§.§ Time-Invariant Knowledge Modeling
Section <ref> and <ref> explain how HypeTKG performs HTKG LP without using time-invariant knowledge. In this section, we discuss how we adapt HypeTKG to leveraging time-invariant knowledge by developing a model variant HypeTKG^ψ. We first introduce another gated structure in QATGE to incorporate time-invariant knowledge in the encoding process. We change Equation <ref> to
𝐡^ψ_e = 1/|𝒩_e^ψ|∑_ζ^ψ∈𝒩_e^ψ𝐖^ψϕ(𝐡_e”, 𝐡_r”),
𝐡_e = (1-β)(1/|𝒩_e|∑_ζ∈𝒩_e𝐖_2 ϕ((γ𝐡^ζ_Qual + (1-γ)𝐡_r'), 𝐡_(e', t'))) + β𝐡^ψ_e.
β is a trainable parameter controlling the magnitude of time-invariant information. 𝒩_e^ψ = {ζ^ψ} = {(e”, r”)|(e”, r”, e) ∈𝒢_static} denotes e's static neighbors derived from the additional time-invariant facts. In the decoding process, we incorporate time-invariant knowledge when we compute the query feature 𝐡^que. In the same way as we model the query qualifiers, we use a static-wise Transformer to model s's static neighbors and output a time-invariant feature 𝐡^s_static. We expand the input length of the query-wise Transformer and input 𝐡_(s,t) ‖ 𝐡_r ‖ 𝐡^que_Qual ‖ 𝐡^glo_Qual ‖ 𝐡^s_static for computing 𝐡^que. Note that we do not model the static neighbors of candidate entities in QMD because (1) this would incur excessive computational cost and (2) this information has already been learned in QATGE.
§.§ Parameter Learning
We minimize a binary cross-entropy loss for learning model parameters. We take every fact ((s,r,o,t),{(r_q_i,e_q_i)}) ∈𝒢_obs as a query fact quef and switch its object entity o to every other entity e ∈ (ℰ∖{o}) to create |ℰ|-1 negative facts {quef^-}. The complete form of our loss function is
ℒ = 1/|𝒢_obs| × |ℰ|∑_quef ∈𝒢_obs(l_quef + ∑_quef^- l_quef^-).
l_quef = -y_quef log(λ(quef)) - (1-y_quef) log(1 - λ(quef)) and l_quef^- = -y_quef^- log(λ(quef^-)) - (1-y_quef^-) log(1-λ(quef^-)) denote the binary cross-entropy of quef and quef^-, respectively. y_quef = 1 and y_quef^- = 0 because we want to simultaneously maximize λ(quef) and minimize λ(quef^-). |𝒢_obs| is the number of HTKG facts in 𝒢_obs.
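A schematic sketch of this objective over a batch of queries, scoring all entities and treating every non-gold entity as a negative; interpreting λ as a sigmoid of the raw score is our assumption, since the text only states that λ enters the binary cross-entropy:

import torch
import torch.nn.functional as F

def lp_bce_loss(scores: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    # scores: (batch, |E|) raw scores over all candidate objects.
    # gold: (batch,) indices of the ground-truth objects.
    targets = torch.zeros_like(scores)
    targets[torch.arange(scores.size(0)), gold] = 1.0   # positive fact; the rest are negatives
    return F.binary_cross_entropy_with_logits(scores, targets, reduction="mean")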
§ EXPERIMENTS
We do LP over Wiki-hy and YAGO-hy. We report HTKG LP results in Section <ref>. We provide further analysis in the following sections, including ablation studies (Section <ref>), impact of qualifier-augmented fact proportion (Section <ref>), effectiveness of time-invariant knowledge (Section <ref>) and case studies of qualifier matcher (Section <ref>). See Appendix A for implementation details.
§.§ Experimental Setting
§.§.§ Evaluation Metrics
We use two evaluation metrics, i.e., mean reciprocal rank (MRR) and Hits@1/3/10. MRR computes the mean of the reciprocal ranks over all test queries: (1/2N_test) ∑_que ∈ Que_test 1/θ_que, where θ_que denotes the rank of the ground-truth missing entity in the test query que. Note that for each hyper-relational fact in the test set, we derive two LP queries for both subject and object entity prediction, and therefore, the total number of test queries is 2N_test. Hits@1/3/10 denotes the proportion of test queries where the ground-truth entities are ranked within the top 1/3/10.
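A small sketch of how both metrics can be computed from the per-query ranks (the 1-based rank convention is as defined above):

import numpy as np

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    # ranks: 1-based ranks of the ground-truth entity, one per test query
    # (2 * N_test queries in total, i.e., both prediction directions).
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for k in ks:
        metrics[f"Hits@{k}"] = float(np.mean(ranks <= k))
    return metrics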
§.§.§ Baselines
We consider two types of baseline methods and compare them with HypeTKG: (1) Traditional KG & TKG reasoning methods, i.e., CompGCN <cit.>, BoxTE <cit.>, T-GAP <cit.> and TARGCN <cit.>. Among them, only CompGCN is designed for static KG reasoning, while other approaches focus on TKG reasoning and have temporal reasoning modules. Since these methods are not developed for HKG/HTKGs, they have no way to model qualifiers. We neglect the qualifiers during implementation. (2) HKG reasoning methods, i.e., StarE <cit.>, HyconvE <cit.>, GRAN <cit.>. These methods cannot explicitly model temporal information provided in primary quadruples of HTKGs. We make them neglect the timestamps in primary quadruples during implementation.
§.§ Comparative Study
We report the HTKG LP results of all methods in Table <ref> and have several findings: (1) We observe that HypeTKG outperforms both types of baselines. Traditional KG/TKG reasoning methods fail to learn from qualifiers, making them lose a large amount of semantic information. Meanwhile, the HKG reasoning baselines do not have the ability to distinguish from different timestamps, leading to inferior performance on temporal KGs. HypeTKG utilizes both the qualifiers and temporal information and thus can achieve state-of-the-art. (2) We also observe that HypeTKG^ψ achieves even better results than the original model. This proves that our model can effectively leverage time-invariant relational knowledge.
(3) Besides, to study the importance of our temporal reasoning components, we devise another model variant HypeTKG^τ, where we exclude all time modeling modules and neglect timestamps in primary quadruples (as we implement methods like GRAN).
We find that HypeTKG^τ's performance drops substantially on both datasets, indicating that it is essential to learn from temporal information in HTKGs and our time modeling modules are effective.
§.§ Further Analysis
§.§.§ Ablation Study.
We conduct several ablation studies to demonstrate the importance of different model components. We devise three model variants. In study A (variant A), we neglect the qualifiers in all HTKG facts and do not include any qualifier learning component. In study B (variant B), we remove qualifier attention in QATGE and keep other components unchanged. In study C (variant C), we remove the qualifier matcher in QMD and keep others unchanged. From Table <ref> and <ref>, we observe that learning qualifiers is essential in reasoning HTKGs. We also find that both qualifier attention in QATGE and qualifier matcher contribute to the improvement in qualifier modeling.
We further discuss the effectiveness of different model components in a varying proportion of qualifier-augmented facts in Section <ref>.
§.§.§ Impact of Qualifier-Augmented Fact Proportion.
To better quantify the effectiveness of HypeTKG in learning from qualifiers, we sample several datasets from Wiki-hy and YAGO-hy with different proportions of facts equipped with qualifiers. We take Wiki-hy as example. We first pick out all the facts, where each of them has at least one qualifier, from Wiki-hy and construct Wiki-hy (100). We call it Wiki-hy (100) because 100% of its facts are equipped with qualifiers. Next, we keep Wiki-hy (100) and randomly sample an extra number of facts without any qualifier from the original Wiki-hy. We add these facts into Wiki-hy (100) until the proportion of the facts equipped with qualifiers reaches 66%. We call this new dataset Wiki-hy (66). Similarly, we further expand Wiki-hy (66) to Wiki-hy (33). For YAGO-hy, we construct YAGO-hy (100)/(66)/(33) in the same way. Note that the proportions of facts with at least one qualifier in the original Wiki-hy and YAGO-hy are 9.59% and 6.98% (Table <ref>), respectively, which are much smaller than 33%.
See Appendix B for more details of dataset construction.
We report the performance of HypeTKG and its variants on all created datasets in Table <ref> and <ref>. We have several findings: (1) Whatever the proportion of qualifier-augmented facts is, HypeTKG and its variants B & C benefit from the qualifiers for improving LP on HTKGs, indicating the importance of learning qualifiers in HTKG reasoning. (2) As the proportion goes up, the margin between HypeTKG and variant A enlarges (see MRR difference). This means the performance gain of HypeTKG largely comes from its ability of effectively utilizing qualifier information. (3) Variant B & C constantly underperform HypeTKG on all datasets, confirming the effectiveness of both qualifier modeling components.
§.§.§ Effectiveness of Time-Invariant Relational Knowledge.
To show the effectiveness of our gate-structured graph encoder and qualifier matcher in learning time-invariant relational knowledge, we also enable all baselines to make use of the additional time-invariant facts and report their performance. For the static KG approaches, i.e., CompGCN, StarE, HyconvE and GRAN, we directly include these facts in our datasets. For the TKG reasoning approaches, i.e., BoxTE, T-GAP and TARGCN, we create a number of temporal facts for each time-invariant fact along the whole timeline and include these temporal facts in the datasets. For example, let t_min/t_max denote the minimum/maximum timestamp of an HTKG. We transform a time-invariant fact (s,r,o) to {(s,r,o,t_min), ..., (s,r,o,t_max)}. We do so because every time-invariant fact remains valid during the whole time period, which can be interpreted as a series of temporal facts valid at every timestamp.
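A one-function sketch of this unrolling of a static triple into per-timestamp quadruples (timestamp ids are assumed to be consecutive integers):

def expand_static_fact(s, r, o, t_min, t_max):
    # Every time-invariant triple (s, r, o) becomes a quadruple at each timestamp.
    return [(s, r, o, t) for t in range(t_min, t_max + 1)]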
We report for all methods (including baselines and HypeTKG) the performance comparison before and after considering time-invariant facts in Figure <ref>. Surprisingly, we observe that most baselines cannot consistently benefit from the additional time-invariant relational knowledge. Several methods, i.e., BoxTE, T-GAP and HyconvE, even show apparent performance drops on both datasets. Only CompGCN and HypeTKG can consistently benefit from the additional time-invariant facts. We attribute this to the following reasons: (1) Time-invariant facts introduce a distributional shift into the HTKGs, making models less focused on the temporal facts. (2) For traditional TKG reasoning methods, treating time-invariant facts as a series of temporal facts further enlarges the introduced distributional shift, making the additional facts more dominant in temporal reasoning and increasing the difficulty of reasoning over temporal facts. (3) Compared with the other considered methods, CompGCN employs the simplest structure with the fewest parameters. This, to some extent, prevents it from overfitting to the additional facts and being completely distracted from modeling the temporal facts. (4) HypeTKG employs its gate-structured graph encoder that adaptively controls the amount of information from the time-invariant facts. In HypeTKG's decoder, we also use a Transformer to distinguish the importance of different subject-related time-invariant facts. These two steps help HypeTKG to exploit the time-invariant knowledge that is beneficial for LP and discard the redundant information.
§.§.§ Case Studies of Qualifier Matcher.
We give an insight into how our qualifier matcher improves HTKG reasoning with three cases (Table <ref>). HypeTKG ranks the ground-truth entities in these cases as top 1 and achieves optimal prediction. As discussed in Section <ref>, we learn a global qualifier feature in the qualifier matcher by considering the contribution of all the existing qualifiers related to the subject entity of the LP query. Each qualifier is assigned an attention score indicating its contribution. Note that numerous queries are derived from facts without any qualifier. For example, in Case 1, no qualifier is provided when predicting which award Andrey Kolmogorov received in 1941 (Cases 1 and 2 are taken from YAGO-hy). HypeTKG extracts all the qualifiers related to Andrey Kolmogorov from other facts in YAGO-hy and computes the global qualifier feature based on them. We find that it assigns a large attention score to the qualifier (country of citizenship, Soviet Union), and this qualifier can directly be taken as a hint to predict the ground-truth missing entity USSR State Prize since USSR is also interpreted as Soviet Union. We also find that (field of work, mathematics) is dominant in the global qualifier feature. This is also reasonable because Andrey Kolmogorov is a mathematician and he was awarded the USSR State Prize for mathematics in 1941. Compared with these two qualifiers, the last qualifier, i.e., {(country, Soviet Union)}, is not so important for prediction, and thus is assigned a low attention score by HypeTKG. Case 1 implies that, to reason over facts without qualifiers, i.e., quadruple-based facts, our qualifier matcher can find clues from the subject-related qualifiers existing in other hyper-relational facts and support prediction. In Case 2, we find that the qualifier matcher focuses more on the qualifiers from other facts rather than the one from the query. Note that the query qualifiers have been explicitly modeled with a query-specific qualifier feature before computing the global qualifier feature. This indicates that our qualifier matcher can maximally extract important information from the extra qualifiers rather than only focusing on the query qualifiers, enabling efficient information fusion. Case 3 is taken from Wiki-hy. Since the qualifier relations ℛ_Qual and primary relations ℛ_pri have an intersection, some subject-related extra qualifiers can directly indicate the answers to the queries. In Case 3, we observe that HypeTKG manages to recognize such qualifiers to improve prediction. To summarize, our qualifier matcher achieves reasoning enhancement by efficiently utilizing additional information from the extra qualifiers related to the query subject.
§ CONCLUSION
In this work, we propose a new data structure named hyper-relational TKG. Each HTKG fact consists of a primary quadruple in the form of a traditional TKG fact together with a number of qualifiers, where qualifiers provide additional semantics to better restrict fact validity. We construct two HTKG benchmark datasets, i.e., Wiki-hy and YAGO-hy, based on existing TKG benchmarks and the Wikidata KB. To reason HTKGs, we design HypeTKG. HypeTKG is able to simultaneously deal with temporal facts and qualifiers. It employs QATGE that adaptively distinguishes the contributions of different qualifiers during graph aggregation, and QMD that exploits additional supporting information from all query subject-related qualifiers. We show that HypeTKG achieves superior performance on HTKG LP. Besides, we mine the time-invariant relational knowledge from the Wikidata KB and augment it to our proposed benchmarks. We devise a model variant HypeTKG^ψ that efficiently leverages time-invariant relational knowledge and enhances HTKG LP performance. We hope our work can sparkle efforts for studying HTKGs in the future.
§ IMPLEMENTATION DETAILS
We implement all the experiments with PyTorch <cit.> on an NVIDIA A40 with 48GB memory and a 2.6 GHz AMD EPYC 7513 32-Core Processor. We search hyperparameters following Table <ref>. For each dataset, we run 1458 trials to try different hyperparameter settings. We run 100 epochs for each trial and compare their validation results. We choose the setting leading to the best validation result and take it as the best hyperparameter setting. The best hyperparameter setting is also stated in Table <ref>. Every result of our model is the average of five runs.
Besides, we specify the GPU memory usage (Table <ref>) and number of parameters (Table <ref>).
We use official implementations of all baseline methods, i.e., CompGCN[https://github.com/malllabiisc/CompGCN], BoxTE[https://github.com/JohannesMessner/BoxTE], T-GAP[https://github.com/jaehunjung1/T-GAP], TARGCN[https://github.com/ZifengDing/TARGCN], StarE[https://github.com/migalkin/StarE], HyConvE[https://github.com/CarllllWang/HyConvE/tree/master], and GRAN[https://github.com/lrjconan/GRAN]. We use the default hyperparameters of all baselines for HTKG LP.
§ FURTHER DETAILS OF BENCHMARK CONSTRUCTION
§.§ Relation Mapping from YAGO1830 to Wikidata
We provide the relation mapping from YAGO1830 to Wikidata in Table <ref>.
§.§ Datasets with Different Proportions of Qualifier-Augmented Facts
We take Wiki-hy as example. We first pick out all the facts, where each of them has at least one qualifier, from Wiki-hy and construct Wiki-hy (100). We call it Wiki-hy (100) because 100% of its facts are equipped with qualifiers. Next, we keep Wiki-hy (100) and randomly sample an extra number of facts without any qualifier from the original Wiki-hy. We add these facts into Wiki-hy (100) until the proportion of the facts equipped with qualifiers reaches 66%. We call this new dataset Wiki-hy (66). Similarly, we further expand Wiki-hy (66) to Wiki-hy (33). For YAGO-hy, we construct YAGO-hy (100)/(66)/(33) in the same way. Note that during the process of sampling extra quadruple-based facts, we put each sampled fact to the same set where it comes from. For example, when we construct Wiki-hy (66), we keep Wiki-hy (100) unchanged and further sample quadruple-based facts from Wiki-hy. If a fact is sampled from the training set of Wiki-hy, then it will be put into the training set of Wiki-hy (66). We keep the proportions of training/validation/test sets in Wiki-hy (100)/(66)/(33) same as the ones in Wiki-hy. YAGO-hy (100)/(66)/(33) follows the same policy.
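A short sketch of this subset construction (function names are ours; the per-split bookkeeping described above is omitted):

import random

def build_subset(qual_facts, plain_facts, target_ratio):
    # Keep all qualifier-augmented facts and add randomly sampled plain facts
    # until qualifier-augmented facts make up `target_ratio` of the subset.
    n_plain = int(len(qual_facts) * (1.0 - target_ratio) / target_ratio)
    return qual_facts + random.sample(plain_facts, min(n_plain, len(plain_facts)))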
|
http://arxiv.org/abs/2307.05075v1 | 20230711071815 | Uni-Removal: A Semi-Supervised Framework for Simultaneously Addressing Multiple Degradations in Real-World Images | [
"Yongheng Zhang",
"Danfeng Yan",
"Yuanqiang Cai"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
Uni-Removal: A Semi-Supervised Framework for Simultaneously Addressing Multiple Degradations in Real-World Images
Yongheng Zhang, Danfeng Yan†, Yuanqiang Cai
Y. Zhang, D. Yan, and Y. Cai are with State Key Laboratory Of Networking And Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China; with the School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, 100876, China. (e-mail: [email protected]; [email protected]; [email protected]).
† Corresponding author (D. Yan).
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Removing multiple degradations, such as haze, rain, and blur, from real-world images poses a challenging and ill-posed problem.
Recently, unified models that can handle different degradations have been proposed and yield promising results.
However, these approaches focus on synthetic images and experience a significant performance drop when applied to real-world images.
In this paper, we introduce Uni-Removal, a two-stage semi-supervised framework for addressing the removal of multiple degradations in real-world images using a unified model and parameters.
In the knowledge transfer stage, Uni-Removal leverages a supervised multi-teacher and student architecture to facilitate learning from pre-trained teacher networks specialized in different degradation types.
A multi-grained contrastive loss is introduced to enhance learning from feature and image spaces.
In the domain adaptation stage, unsupervised fine-tuning is performed by incorporating an adversarial discriminator on real-world images.
The integration of an extended multi-grained contrastive loss and generative adversarial loss enables the adaptation of the student network from synthetic to real-world domains.
Extensive experiments on real-world degraded datasets demonstrate the effectiveness of our proposed method.
We compare our Uni-Removal framework with state-of-the-art supervised and unsupervised methods, showcasing its promising results in real-world image dehazing, deraining, and deblurring simultaneously.
Multiple degradations removal, real-world image enhancement, knowledge distillation, contrastive learning.
§ INTRODUCTION
Images captured in the real world are susceptible to various degradations resulting from natural phenomena, limitations of shooting equipment, and transmission constraints, as depicted in Fig. <ref> (a). These degradations not only impact the visual quality of the images but also hinder the accuracy of high-level visual tasks, including object detection, image segmentation, and text recognition. Consequently, the accurate and efficient removal of multiple degradations holds great significance for vision systems such as autonomous driving, security monitoring, and visual satellites.
Existing methods for degradation removal primarily focus on addressing individual degradations by leveraging handcrafted image priors <cit.>, such as the dark channel prior <cit.> and color-line prior <cit.>. However, designing such priors is challenging, and their stability is often inadequate, leading to unsatisfactory outcomes. In the era of deep learning, some networks have been developed to estimate image prior parameters <cit.> as an initial step. Moreover, with the advancements in convolutional neural networks (CNNs) and transformers, several end-to-end models have emerged that directly predict clear images without degradations <cit.>.
While the aforementioned methods focus on removing specific degradations, there have been subsequent efforts to tackle the removal of multiple degradations using unified models <cit.>.
Although these models exhibit good performance on various degradation removal tasks due to well-designed structures and selected functional units, they require switching between different sets of pre-trained parameters when tackling different types of degradations.
Recently, several All-in-One methods <cit.> have been proposed to address the removal of multiple degradations using a unified model and shared parameters.
While some of these methods employ multiple encoders or degradation type embeddings for different degradations, they significantly reduce the number of parameters and demonstrate encouraging results.
However, since these methods are trained on synthetically degraded images and corresponding clear images from real-world backgrounds, they often struggle to perform effectively in real-world scenes due to the substantial domain gap between degraded images in synthetic and real-world scenes.
In this paper, we propose Uni-Removal, a two-stage semi-supervised framework for removing multiple degradations from real-world images, which can be applied to existing degradation removal networks.
Uni-Removal aims to eliminate multiple degradations without incurring additional costs, relying solely on a unified model and a set of pre-trained parameters.
The framework consists of a knowledge transfer stage and a domain adaptation stage, ensuring the model's ability to handle multiple degradations and improve its performance in real-world scenarios.
In the knowledge transfer stage, we initially train several teacher networks in a supervised manner using corresponding synthetic datasets.
These teacher networks then guide the student network to acquire the knowledge of removing different types of degradations through a Multi-Grained Contrastive Learning (MGCL) loss, encompassing both feature-grained and image-grained aspects.
The MGCL loss is calculated based on features and restored images obtained from both the teacher networks and the student network, using synthetic degraded images.
Each feature and restored image from the teacher networks is treated as a positive example with respect to the corresponding feature and restored image from the student network, while other features and restored images from the student network within the same batch serve as negatives.
This MGCL loss facilitates alignment between the student network and the teacher networks in both feature and image spaces.
In the domain adaptation stage, we further refine the student network by training it on unpaired real-world degraded and clear images, employing a discriminator in an adversarial setting.
An EXtended Multi-Grained Contrastive Learning (EX-MGCL) loss is utilized in conjunction with the generative adversarial loss to fine-tune the student network's adaptation from the synthetic domain to the real-world domain.
The EX-MGCL loss operates on features and restored images obtained from the student network using real-world degraded and clear images.
A batch of features and restored images from real-world clear images are treated as positive examples with respect to feature and restored image from a real-world degraded image, while other features and restored images within the same batch of real-world degraded images serve as negatives.
The EX-MGCL loss further facilitates the alignment of the student network with the real-world domain in both feature and image spaces.
These two types of contrastive losses contribute to faster learning of the student network at different levels and enhance the results on both synthetic and real-world datasets.
In summary, the contributions of this work are as follows:
* We propose Uni-Removal, a two-stage semi-supervised framework for removing multiple degradations from real-world images, utilizing a unified model and unified parameters.
This framework addresses the challenge of handling different types of real-world degradations while maintaining a consistent model architecture.
* To enhance the performance of both the knowledge transfer stage and the domain adaptation stage, we introduce a Multi-Grained Contrastive Learning (MGCL) loss for the knowledge transfer stage and an Extended MGCL (EX-MGCL) loss for the domain adaptation stage.
We conduct ablation studies to demonstrate the effectiveness of these loss functions.
* Experimental results on real-world datasets demonstrate that Uni-Removal outperforms state-of-the-art all-in-one models, yielding promising results across various tasks, including real-world image dehazing, deraining, and deblurring.
Notably, on the SPA <cit.> dataset, Uni-Removal shows significant improvements of 14.2 and 5.8 on BRISQUE <cit.> and PIQE <cit.> quality assessment metrics, respectively, compared to the current state-of-the-art models.
§ RELATED WORK
In this section, we provide a brief overview of the research related to single degradation removal methods, multiple degradations removal methods, knowledge distillation, and contrastive learning, which are relevant to our work.
§.§ Single Degradation Removal
§.§.§ Supervised methods
Over time, the performance of prior-based methods <cit.> has become increasingly unsatisfactory.
The advent of deep convolutional neural networks (CNNs) and the availability of large-scale synthetic datasets have led to an increased interest in learning-based methods for single degradation removal.
Early approaches <cit.> focused on estimating parameters in physical scattering models using neural networks.
Subsequently, various end-to-end methods <cit.> have been proposed to directly restore clear images without relying on explicit physical scattering models.
For instance, Zhang et al. <cit.> introduced a fish retina-inspired dehazing method that incorporates special retinal mechanisms to extract wavelength-dependent degradation information.
Hao et al. <cit.> addressed the challenge of size mismatch between rain streaks during the training and
testing phases by employing a monogenic wavelet transform-like hierarchy and a self-calibrated dual attention mechanism.
Li et al. <cit.> incorporated depth information into CNN-based models for dynamic scene deblurring.
Additionally, Li et al. <cit.> proposed DehazeFlow, the first work to utilize normalizing flows for single image dehazing.
§.§.§ Semi-supervised and unsupervised methods
Domain adaptation aims to bridge the gap between a source domain and a target domain.
Existing approaches <cit.> aim to align the source and target domains either at the feature level or pixel level by minimizing designed losses.
For instance, Kupyn et al. <cit.> proposed DeblurGAN-V2, a framework based on a relativistic conditional generative adversarial network (GAN) with a double-scale discriminator.
They also introduced the feature pyramid network into deblurring.
Shao et al. <cit.> presented a domain adaptation framework that incorporates real hazy images into the training process using a cycle-GAN. Similarly, Wei et al. <cit.> employed a similar structure in image deraining and introduced a rain attention mechanism.
Chen et al. <cit.> explored a range of physical priors and developed a loss committee to guide the training on real hazy images.
While these methods demonstrate remarkable generalization performances for specific degradations, they often experience significant performance degradation when applied to other types of degradations.
In contrast, our proposed Uni-Removal framework effectively addresses the removal of multiple real-world degradations simultaneously.
§.§ Multiple Degradations Removal
§.§.§ Multi-tasks methods
Furthermore, several studies have explored the use of unified models to address multiple degradation removal problems <cit.>.
For instance, Pan et al. <cit.> proposed DualCNN, which incorporates two parallel branches to recover structures and details in an end-to-end manner.
Zhang et al. <cit.> designed a residual dense network specifically for image restoration.
Zamir et al. <cit.> adopted a multi-stage approach to restore degraded images and introduced an innovative per-pixel adaptive design that leverages in-situ supervised attention to reweight local features at each stage.
In addition, Mao et al. <cit.> introduced an idempotent constraint into the deblurring framework, allowing the framework to be utilized for dehazing and deraining tasks as well.
These methods have demonstrated remarkable results across various degradation types using a unified framework. However, they typically require different sets of pre-trained weights for each type of degradation.
§.§.§ All-in-One Degradations Removal
Li et al. <cit.> proposed an end-to-end network with multiple encoders and a shared decoder, referred to as the All-in-One network.
This network incorporates a discriminator to simultaneously assess the correctness and classify the degradation type of the enhanced images.
Furthermore, an adversarial learning scheme is employed, wherein the loss of a specific degradation type is only backpropagated to the corresponding task-specific encoder.
Valanarasu et al. <cit.> developed a transformer-based end-to-end model consisting of a single encoder and a decoder.
Specifically, the TransWeather model utilizes a novel transformer encoder with intra-patch transformer blocks to enhance attention within patches, along with a transformer decoder that incorporates learnable weather type embeddings to adapt to the specific weather degradation.
Chen et al. <cit.> adopted a two-stage knowledge learning process, which includes knowledge collation and knowledge examination, for adverse weather removal.
In the collation stage, a collaborative knowledge transfer technique is proposed to guide the student model in integrating and learning the knowledge of various weather types from well-trained teacher models.
In the examination stage, a multi-contrastive regularization approach is adopted to enhance the robustness of the student network for comprehensive weather removal.
Although these methods can handle different types of degradations using a single network, they often suffer a significant performance drop when applied to real-world degraded images due to the domain gap between synthetic and real-world degradations.
To address this limitation, we propose Uni-Removal, a two-stage semi-supervised framework for multiple degradation removal in real-world images, aiming to adapt learning-based models to real degradation removal tasks.
§.§ Knowledge Distillation
Knowledge distillation <cit.> originally aimed to transfer knowledge from a large teacher model to a smaller student network.
This involved training the student network on a transfer set while leveraging the soft target distribution provided by the larger model.
However, it has been shown by Romero et al. <cit.> that the teacher network does not necessarily have to be larger than the student network.
In fact, both the outputs and intermediate representations learned by the teacher can enhance the training process and improve the final performance of the student network.
The concept of knowledge distillation has found wide application in various high-level computer vision tasks, including object detection <cit.>, face recognition <cit.>, and semantic segmentation <cit.>.
More recently, researchers have also integrated knowledge distillation into image enhancement tasks <cit.>.
In contrast to traditional knowledge distillation methods that solely learn from positive examples provided by the teacher network, we propose a MGCL loss in the knowledge transfer stage to enable learning from both positive and negative examples.
§.§ Contrastive Learning
Contrastive learning is a technique that aims to sample positive and negative pairs from a given anchor point and then applies different contrastive losses to attract positive samples and repel negative samples <cit.>.
In recent years, contrastive learning has been introduced into low-level vision tasks such as image-to-image translation <cit.>, deraining <cit.>, and dehazing <cit.>. For instance, Chen et al. <cit.> proposed an unsupervised contrastive CDD-GAN framework based on CycleGAN <cit.> for image dehazing, where positive and negative samples are sampled from the hazy domain and clear domain, respectively. Similarly, Ye et al. <cit.> devised a novel non-local contrastive learning mechanism that leverages the inherent self-similarity property for image deraining.
Building upon the concept of image-level contrastive learning, we extend the framework by introducing a comprehensive image-level and feature-level MGCL loss. Additionally, we incorporate an EX-MGCL loss that replaces one strongly correlated positive example with multiple weakly correlated positive examples.
§ PROPOSED METHOD
In this section, we present a normative definition of the task and provide an overview of our proposed method, including its underlying idea and overall structure. Additionally, we delve into the two training stages in detail and conclude with an introduction to the loss functions employed.
§.§ Overview
Our objective is to accurately restore corresponding clear images from different types of real-world degraded images without requiring changes to the methods or pre-trained models.
Thus, it is essential to employ a method that can handle various degradation removal tasks simultaneously.
However, training supervised methods solely on synthetic degraded datasets proves inadequate for effectively removing diverse degradations in real-world images due to the presence of domain shift.
Consequently, fine-tuning the model using unsupervised training methods on real-world degraded datasets becomes necessary.
Addressing the aforementioned task requirements, we propose Uni-Removal, a semi-supervised framework for multiple degradations removal.
Uni-Removal consists of two training stages: the knowledge transfer stage and the domain adaptation stage.
As illustrated in Figure <ref>, during the knowledge transfer stage, multiple pre-trained teacher networks guide the student degradation removal network, enabling the student network to acquire the capability to remove different types of degradations.
Subsequently, in the domain adaptation stage, an unsupervised adversarial generative learning method is employed to simultaneously train the student network and a discriminator on real-world degradation removal datasets.
This stage aims to enhance the student network's ability to remove various degradations in real-world images.
Uni-Removal is trained on synthetic datasets defined as
X_S_i={x_s_i}_s_i=1^N_S_i,
Y_S_i={y_s_i}_s_i=1^N_S_i, i=1,2...k and real-world degraded datasets defined as X_R_i={x_r_i}_r_i=1^N_R_i, i=1,2...k, X_C={x_c}_c=1^N_C, where k denotes the number of degradation types, x_s_i denotes synthetic degraded image, y_s_i denotes corresponding ground truth, x_r_i denotes real degraded image, x_c denotes real clear image, N_S_i, N_R_i and N_C denote the number of the synthetic image pairs of degradation type i, real degraded images of degradation type i, and real clear images, respectively.
§.§ Knowledge Transfer Stage
Simultaneously removing multiple degradations directly from different kinds of synthetic degradation datasets poses a significant challenge for a network.
In contrast, learning to remove a specific degradation from a single type of synthetic degradation dataset is comparatively easier.
To address this, instead of directly training a unified multi-degradation removal network, we adopt a two-step approach.
Firstly, we train effective teacher networks for each degradation under supervision.
The structure of each teacher network remains the same, but the parameters differ for different kinds of degradations.
Specifically, we utilize the MPR-Net <cit.> as our backbone and denote the teacher networks as G_T_i, i=1,2...k.
Once the training of the teacher networks is completed, we fix their parameters and proceed to train a student network, denoted as G_S, leveraging the intermediate features and restored images produced by the teacher networks.
The student network shares the same structure as each teacher network.
The training of the student network relies on two components.
First, we employ a pixel-level L1 loss between the restored image generated by the student network and the corresponding restored image produced by the teacher network.
Second, we utilize a Multi-Grained Contrastive Learning (MGCL) loss that operates at both the feature and image levels.
As illustrated in Figure <ref>, the MGCL loss comprises a feature-grained contrastive loss and an image-grained contrastive loss.
In the feature-grained contrastive loss, the intermediate features of the corresponding teacher network are treated as positives in relation to the intermediate features of the student network, while a batch of intermediate features from the student network with different degradation types are considered as negatives.
In the image-grained contrastive loss, the restored image from the corresponding teacher network is regarded as positive with respect to the restored image from the student network, while a batch of synthetic images with various degradations serve as negatives.
The MGCL loss guides the student network to align with the teacher network both at the image level, by minimizing the distance to the corresponding teacher-restored image, and at the feature level, by minimizing the distance to the intermediate features of the corresponding teacher network.
Simultaneously, it encourages the student network to be far away from synthetic degraded negative examples.
To prevent the feature-grained term from misleading the student network once it is well trained, we assign it a small trade-off weight and gradually reduce this weight during training.
After the knowledge transfer stage, the student network gains the ability to remove different types of synthetic degradations.
However, its performance in real-world image degradation removal remains unsatisfactory.
Therefore, we proceed to further fine-tune the student network using real-world datasets in the domain adaptation stage.
§.§ Domain Adaptation Stage
In the domain adaptation stage, our objective is to enable the student network to effectively remove various degradations in real-world images while preserving the background unrelated to the degradations during the fine-tuning process.
To achieve this, the student network takes real-world degraded images and real-world clear images as input separately.
The expected output is a set of restored images that are indistinguishable from real-world clear images.
To accomplish this, we train a discriminator in an adversarial setting, where it aims to differentiate between the restored images and real-world clear images, while the student network strives to generate restored images that deceive the discriminator.
This training is guided by a generative adversarial loss.
Furthermore, to ensure that real-world clear images remain unchanged before and after being processed by the student network, we incorporate an identity loss into the training process.
This loss encourages the student network to preserve the essential details of the input clear images during restoration.
Additionally, we introduce an EXtended Multi-Grained Contrastive Learning (EX-MGCL) loss to facilitate the adaptation of the student network from the synthetic domain to the real-world domain.
The EX-MGCL loss operates at both the feature and image levels, serving as a guiding principle for aligning the student network with the real-world clear domain.
As depicted in Figure <ref>, the EX-MGCL loss consists of a feature-grained contrastive loss and an image-grained contrastive loss.
In the feature-grained contrastive loss, the intermediate features of the real-world degraded image take a batch of intermediate features of real-world clear images as positives, and take a batch of intermediate features of real-world degraded images of other degradation types as negatives.
In the image-grained contrastive loss, the restored image of the real-world degraded image takes a batch of restored images of real-world clear images as positives, and takes a batch of real-world images with different degradations as negatives.
The EX-MGCL loss guides the student network to converge towards the real-world clear domain, both at the image level by minimizing its distance from the positive restored images and at the feature level by minimizing its distance from the positive intermediate features of real-world clear images.
Simultaneously, it encourages the student network to move away from the real-world degraded domain.
Similar to the knowledge transfer stage, we assign a small trade-off weight to the feature-grained contrastive loss and gradually decrease it during training to avoid misleading the student network once it is well trained.
Upon completion of the fine-tuning process in the domain adaptation stage, the student network possesses the capability to effectively remove various degradations in real-world images, utilizing a unified model with unified parameters.
§.§ Training Losses
The knowledge transfer stage and the domain adaptation stage are trained sequentially.
In the knowledge transfer stage, the student network G_S is trained using a pixel-level loss L_pixel and a MGCL loss L_m.
Given an image x_s_i degraded by synthetic degradation i, the intermediate features and restored image of the corresponding teacher network are denoted as ft_s_i and xt_s_i= G_T_i(x_s_i), respectively.
The intermediate features and restored image of the student network are denoted as fs_s_i and xs_s_i = G_S(x_s_i), respectively.
The pixel-level loss L_pixel is defined as:
L_pixel =𝔼_x_s_i∼ X_S_i[‖ G_S(x_s_i)-G_T_i(x_s_i)‖_1], i=1,2...k.
The MGCL loss includes a feature-grained contrastive loss and a image-grained contrastive loss.
The common contrastive loss can be formulated as:
L_C(f, f^+, f^-) = -log [ sim(ϕ(f), ϕ(f^+)) / ( sim(ϕ(f), ϕ(f^+)) + ∑_q=1^b sim(ϕ(f), ϕ(f_q^-)) ) ],
where f, f^+, and f_q^- denote the anchor to be optimized, the positive sample, and the q-th negative sample, respectively. b denotes the number of negative samples, which is usually equal to the batch size. sim(u, v)=exp(u^T v/(‖u‖‖v‖τ)) denotes the similarity between two normalized feature vectors, τ denotes a scalar temperature parameter, and ϕ(·) denotes a feature extraction operation, typically implemented with VGG-19 <cit.>.
The feature-grained contrastive loss L_fg and the image-grained contrastive loss L_ig are defined as:
L_fg = L_C(fs_s_i, ft_s_i, {{fs^q_s_i}^k_i=1}^b_q=1), i=1,2...k,
L_ig = L_C(xs_s_i, xt_s_i, {{x^q_s_i}^k_i=1}^b_q=1),i=1,2...k,
where fs_s_i and xs_s_i are intermediate features and restored image from the student, the positive samples ft_s_i and xt_s_i are intermediate features and restored image from the teacher, the negative samples {{fs^q_s_i}^k_i=1}^b_q=1 and {{x^q_s_i}^k_i=1}^b_q=1 are a batch of intermediate features from the student and a batch of synthetic degraded images, respectively.
The MGCL loss and the overall loss of the knowledge transfer stage can be formulated as:
L_m = L_ig + α_1L_fg,
L_kt = L_pixel + α_2L_m,
where α_1 and α_2 are trade-off weights.
In the domain adaptation stage, the student network G_S and the discriminator D_S are trained using an adversarial loss L_gan, an identity mapping loss <cit.> L_idt and an EX-MGCL loss L_em.
Given an image x_r_i degraded by real-world degradation i, the intermediate features and restored image of the student network are denoted as fs_r_i and xs_r_i = G_S(x_r_i), respectively.
For a clear real-world image x_c, the intermediate features and restored image of the student network are denoted as fs_c and xs_c = G_S(x_c), respectively.
Then the generate adversarial loss L_gan can be formulated as:
L_gan =𝔼_x_r_i∼ X_R_i[D_S(G_S(x_r_i))]
+ 𝔼_x_c∼ X_C[D_S(G_S(x_c))-1].
The identity mapping loss is adopted to encourage the student network G_S to keep real-world clear images unchanged, thereby preserving content information during restoration. L_idt can be defined as:
L_idt =𝔼_x_c∼ X_C[‖ G_S(x_c)-x_c‖_1].
The EX-MGCL loss also includes a feature-grained contrastive loss and an image-grained contrastive loss.
However, since a strongly correlated positive sample is not available in real-world degradation removal, the extended contrastive loss is used, which replaces the strongly correlated positive with a batch of weakly correlated positives.
The extended contrastive loss can be formulated as:
L_EC(f, f^+, f^-) = -log [ ∑_p=1^b sim(ϕ(f), ϕ(f_p^+)) / ( ∑_p=1^b sim(ϕ(f), ϕ(f_p^+)) + ∑_q=1^b sim(ϕ(f), ϕ(f_q^-)) ) ].
The extended feature-grained contrastive loss L_efg and the extended image-grained contrastive loss L_eig are defined as:
L_efg = L_EC(fs_r_i, {fs^p_c}^b_p=1, {{fs^q_r_i}^k_i=1}^b_q=1),i=1,2...k,
L_eig = L_EC(xs_r_i, {xs^p_c}^b_p=1, {{x^q_r_i}^k_i=1}^b_q=1),i=1,2...k,
where fs_r_i and xs_r_i are intermediate features and restored image of the real-world degraded image, the positive samples {fs^p_c}^b_p=1 and {xs^p_c}^b_p=1 are intermediate features and restored images of a batch of real-world clear images, the negative samples {{fs^q_r_i}^k_i=1}^b_q=1 and {{x^q_r_i}^k_i=1}^b_q=1 are a batch of intermediate features of the real-world degraded images and a batch of real-world degraded images, respectively.
The EX-MGCL loss and the overall loss of the domain adaptation stage can be formulated as:
L_em = L_eig + λ_1L_efg,
L_da = L_gan + λ_2L_idt + λ_3L_em,
where λ_1, λ_2 and λ_3 are trade-off weights.
§ EXPERIMENTS
In this section, we present the experimental setup and results to evaluate the effectiveness of our proposed framework.
We implement our framework based on MPR-Net <cit.> and conduct experiments on both synthetic and real-world degradation removal datasets.
We compare our method against several state-of-the-art approaches and evaluate the visual quality and performance using commonly adopted metrics.
Additionally, we perform two ablation studies to demonstrate the effectiveness of our proposed loss function and framework on both synthetic and real-world degradation removal datasets.
§.§ Implementation Details
§.§.§ Datasets
For single image dehazing, we train and evaluate our method on the RESIDE <cit.> dataset. The RESIDE dataset consists of five subsets: the Indoor Training Set (ITS), the Outdoor Training Set (OTS), the Synthetic Objective Testing Set (SOTS), the Unannotated Real Hazy Images (URHI), and the Real-world Task-driven Testing Set (RTTS).
We select the OTS and the URHI for training.
For evaluation, we use the SOTS and the RTTS, which provide synthetic and real-world test images, respectively.
For single image deraining, we utilize the Rain1400 <cit.> dataset as our synthetic training and test set.
This synthetic dataset contains 1400 image pairs with rain streaks of various sizes, shapes, and directions.
To evaluate the generalization of our method, we fine-tune and test our model on the SPA dataset <cit.>, which consists of real rainy images with diverse rain streak patterns.
For single image deblurring, we train and test our model on the GoPro <cit.> dataset.
The synthetic GoPro dataset contains 2103 image pairs, with each pair consisting of a sharp image and a blurry image.
We further fine-tune our model on the RealBlur <cit.> dataset, which includes two subsets: RealBlur-J and RealBlur-R.
The RealBlur-J subset consists of camera JPEG outputs, while the RealBlur-R subset is generated offline by applying white balance, demosaicking, and denoising operations to RAW images.
Since the images in the RealBlur-J subset are more suitable for human observation, we only fine-tune and evaluate our method on this subset.
All subsequent experimental results are obtained based on the above datasets.
When training MAWR <cit.> and our Uni-Removal, we used a mixture of OTS <cit.>, Rain1400 <cit.>, and GoPro <cit.>, and when fine-tuning Uni-Removal, we used a mixture of URHI <cit.>, SPA <cit.>, and BLUR-J <cit.>.
In the training phase, all images are randomly cropped into patches of size 128 × 128.
The pixel values of the patches are normalized to the range of -1 to 1.
§.§.§ Training Details
We implement our framework in PyTorch and utilize ADAM optimizer with a batch size of 16 to train the teacher and student networks on an Nvidia RTX3090.
The temperature parameter τ = 1e-6 for both stages.
In the knowledge transfer stage, we train the student network for 400 epochs with the momentum β_1 = 0.9, β_2 = 0.999, and the learning rate is set as 2 × 10^-5. The trade-off weights are set as: α_1 = 0.5 and α_2 = 0.1.
There is a decay of 0.99 on α_1 every epoch.
In the domain adaptation stage, we fine-tune the student network for 40 epochs.
The momentum and the learning rate are set as: β_1 = 0.5, β_2 = 0.999, lr = 5 × 10^-6.
The trade-off weights are set as: λ_1 = 0.5, λ_2 = 0.1, λ_3 = 0.1 and λ_4 = 0.01.
There is also a decay of 0.99 on λ_1 every epoch.
§.§ Comparisons with State-of-the-art Methods
In this section, we compare our proposed Uni-Removal framework with several state-of-the-art methods on different types of real-world degradation removal datasets.
We evaluate the performance both qualitatively and quantitatively, using commonly adopted metrics.
For fairness, we use the code and trained models provided by the original authors whenever available.
§.§.§ Visual Quality Comparison
To evaluate the visual quality of Uni-Removal, we conducted experiments on the real-world haze removal dataset RTTS <cit.>, which is a subset of the RESIDE dataset <cit.>.
We compared the results of Uni-Removal with four state-of-the-art dehazing methods: DA-NET <cit.>, MAWR <cit.>, MPR-Net <cit.> (the backbone), and DehazeFlow <cit.>.
Both DehazeFlow <cit.> and DA-NET <cit.> are networks designed for dehazing. DehazeFlow <cit.> is a supervised dehazing network, while DA-NET <cit.> is semi-supervised.
MPR-Net <cit.> can remove different types of synthetic degradations using a unified model but different parameters.
MAWR <cit.> is an "All-in-One" method for multiple synthetic degradations.
The results are shown in Fig. <ref>.
From the visual comparisons in Fig. <ref>, it can be observed that DA-NET <cit.> removes a significant amount of haze, but the color of the images is shifted, and some black shadows appear.
Methods that only perform supervised training on synthetic datasets, such as DehazeFlow <cit.>, MPR-Net <cit.>, or MAWR <cit.>, exhibit varying degrees of haze residue in their resulting images and lack the ability to effectively remove haze in real-world images.
In contrast, Uni-Removal generates images with the least haze residue and successfully preserves the background color, texture, and other details, achieving the best results in terms of visual quality.
Next, we evaluated the visual quality of Uni-Removal on the real-world rain streaks removal dataset SPA <cit.>.
We compared the results of Uni-Removal with four state-of-the-art deraining methods: MPR-Net <cit.>, MAWR <cit.>, NLCL <cit.>, and DerainCycleGan <cit.>.
Both NLCL <cit.> and DerainCycleGan <cit.> are unsupervised methods trained on real-world rain streaks removal datasets.
The results are shown in Fig. <ref>.
As illustrated in Fig. <ref>, compared to MPR-Net <cit.> trained on synthetic rainy datasets, unsupervised domain adaptation-based methods NLCL <cit.> and DerainCycleGan <cit.> demonstrate better performance in effectively removing rain streaks from real-world images.
Although MAWR <cit.> removes a majority of the rain streaks, the resulting images are generally too dark, leading to poor visual quality.
In contrast, Uni-Removal significantly reduces the presence of rain streaks and successfully restores the background images with superior visual quality compared to the aforementioned methods.
Lastly, we assessed the visual quality of Uni-Removal on the real-world blur removal dataset RealBlur-J <cit.>. We compared the results of Uni-Removal with four state-of-the-art deblurring methods: MAWR <cit.>, XYDeblur <cit.>, MPR-Net <cit.>, and DeblurGan-v2 <cit.>.
XYDeblur <cit.> and DeblurGan-v2 <cit.> are state-of-the-art supervised and unsupervised deblurring methods, respectively.
The results are shown in Fig. <ref>.
Similar to the results obtained in real-world image dehazing and deraining, the unsupervised deblurring method DeblurGan-v2 <cit.> surpasses the supervised methods MAWR <cit.>, XYDeblur <cit.>, and MPR-Net <cit.>.
Furthermore, Uni-Removal outperforms all the supervised blur removal methods and achieves comparable results to DeblurGan-v2 <cit.> in terms of visual quality.
Overall, Uni-Removal demonstrates superior performance compared to the unified model MPR-Net <cit.> and the all-in-one model MAWR <cit.> across various real-world degradation removal tasks.
Furthermore, Uni-Removal exhibits promising results in terms of visual quality when compared to task-specific state-of-the-art semi-supervised and unsupervised methods.
§.§.§ No-Reference Image Quality Assessment
To further validate the effectiveness of the proposed framework, we conducted quantitative comparisons on real degradation datasets, namely RTTS <cit.>, SPA <cit.>, and RealBlur-J <cit.>.
Since paired real-world degraded images with clear backgrounds were not available as test sets, we could not adopt the commonly used reference evaluation indicators, such as PSNR and SSIM.
Instead, we selected two general-purpose no-reference evaluation indicators: Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) <cit.> and Perception-based Image Quality Evaluator (PIQE) <cit.> for quantitative comparisons.
BRISQUE utilizes a natural scene statistics model framework based on locally normalized luminance coefficients to quantify naturalness and quality in the presence of distortion.
PIQE estimates quality from perceptually significant spatial regions, considering blockiness, blur, and noise.
We calculated the results using the official MATLAB functions provided for these two evaluators.
Lower values of these indicators indicate higher image quality. The results are summarized in Table <ref>, Table <ref>, and Table <ref>.
Table <ref> demonstrates the superior performance of Uni-Removal in both indicators, surpassing other state-of-the-art methods by 1.03 and 8.01 on BRISQUE <cit.> and PIQE <cit.>, respectively.
These quantitative comparisons demonstrate that Uni-Removal effectively removes haze, reduces blur, and restores details in real-world dehazing scenarios.
Moreover, Table <ref> showcases the superior performance of Uni-Removal in real-world image deraining.
The substantial improvements of 14.2 on BRISQUE <cit.> and 5.8 on PIQE <cit.> underscore the remarkable ability of Uni-Removal to address real-world deraining challenges, which aligns with the visual quality comparison results.
Additionally, Table <ref> presents the promising results achieved by Uni-Removal in real-world deblurring tasks.
The substantial improvement of 9.8 on BRISQUE <cit.> indicates that Uni-Removal effectively removes blur without introducing artifacts, further validating its performance in real-world scenarios.
The quantitative comparison results align with the visual quality assessment, further affirming the efficacy of Uni-Removal as a powerful framework for addressing multiple real-world degradations.
§.§ Ablation Study
To validate the effectiveness of the proposed MGCL loss, EX-MGCL loss, and domain adaptation training strategy, we conducted ablation studies on synthetic degradation datasets and real-world degraded datasets.
Initially, we performed an ablation study on the knowledge transfer stage to demonstrate the effectiveness of the MGCL loss.
We compared three combinations in the knowledge transfer stage using three synthetic degradation datasets: SOTS <cit.> (dehazing), Rain1400 <cit.> (deraining), and GoPro <cit.> (deblurring).
We employed peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as quantitative evaluation metrics.
The results are presented in Table <ref>.
Table <ref> indicates that both the image-grained contrastive learning loss and the feature-grained contrastive learning loss improve the effectiveness of the knowledge transfer stage across all three datasets.
The complete knowledge transfer model (BKT+ICL+FCL) outperforms the base knowledge transfer model by 0.291, 0.210, 1.499, and 0.007, 0.004, 0.041 in terms of PSNR and SSIM on the three synthetic datasets, respectively.
Moreover, the complete knowledge transfer model shows minimal performance degradation compared to the teacher network trained on task-specific datasets.
Fig. <ref> provides visual results for the aforementioned combinations.
The base knowledge transfer leaves residual rain streaks, which are further reduced by both the image-grained contrastive learning loss and the feature-grained contrastive learning loss.
The background image restored by the complete knowledge transfer (BKT+ICL+FCL) not only exhibits the fewest residual rain streaks but also closely resembles the ground truth in terms of clarity and texture.
Next, we conducted an ablation study on the domain adaptation stage to verify the effectiveness of the domain adaptation training strategy and the EX-MGCL loss.
We compared four combinations in the domain adaptation stage using three real-world degradation datasets: RTTS <cit.> (dehazing), SPA <cit.> (deraining), and BLUR-J <cit.> (deblurring).
For quantitative comparison, we utilized BRISQUE <cit.> and PIQE <cit.> as evaluation indicators.
The results are presented in Table <ref>.
As presented in Table <ref>, the knowledge transfer model without domain adaptation exhibits poor performance on real-world degradation removal tasks.
However, with domain adaptation, significant improvements are observed, with enhancements of 5.94, 4.95, 2.56 and 14.73, 8.69, 10.92 in terms of BRISQUE and PIQE on the three real-world datasets, respectively.
Additionally, both the extended image-grained contrastive learning loss and the extended feature-grained contrastive learning loss contribute to the quality improvement of restored background images, including naturalness, color, distortion, and other factors.
Compared to the base domain adaptation model, the full Uni-Removal (KT+BDA+EICL+EFCL) demonstrates further significant improvements of 5.94, 4.95, 2.56 and 14.73, 8.69, 10.92 in BRISQUE and PIQE on the three real-world datasets, respectively.
Fig. <ref> illustrates the visual outcomes obtained from the four aforementioned combinations.
The knowledge transfer model alone struggles to eliminate rain streaks in real-world images.
However, incorporating the domain adaptation training stage significantly enhances the model's ability to derain such images effectively.
Moreover, the inclusion of the extended image-grained contrastive learning loss (EICL) and the extended feature-grained contrastive learning loss (EFCL) further contributes to improving the overall quality of the restored images.
Notably, the background image restored using the complete Uni-Removal approach (KT+BDA+EICL+EFCL) exhibits an impressive outcome with minimal residue of rain streaks and no noticeable artifacts.
These findings are supported by both quantitative and qualitative comparisons across the six synthetic and real-world datasets, providing compelling evidence for the effectiveness of the proposed MGCL loss, EX-MGCL loss, and domain adaptation training strategy.
§ CONCLUSION
In this paper, we introduced Uni-Removal, a two-stage semi-supervised framework designed for the removal of multiple degradations in real-world images.
Our framework incorporates a multi-grained contrastive learning loss function, an extended multi-grained contrastive learning loss function, and a two-stage training strategy.
Through the utilization of knowledge transfer and domain adaptation, Uni-Removal effectively addresses various degradations in real-world images, employing a unified model and unified parameters.
Extensive experiments conducted on synthetic and real-world datasets, covering dehazing, deraining, and deblurring tasks, demonstrate the effectiveness of our proposed loss functions, training strategy, and overall framework.
The results highlight the capability of Uni-Removal to tackle multiple degradations, achieving significant improvements in terms of visual quality and quantitative evaluation metrics.
In future research, we plan to expand the evaluation to cover a wider range of real-world degradations and explore the applicability of Uni-Removal to other related tasks.
Overall, the proposed Uni-Removal framework presents a promising approach for addressing multiple degradations in real-world images, opening up possibilities for improved image restoration techniques in various practical applications.
§ ACKNOWLEDGMENTS
This work is supported by the National Key Research and Development Program of China with No. 2021YF3300700, Beijing University of Posts and Telecommunications Basic Research Fund with No. 2022RC12, and State Key Laboratory of Networking and Switching Technology with No. NST20220303.
Yongheng Zhang received the B.S. and M.S. degrees from the Beijing University of Posts and Telecommunications, Beijing, China, in 2017 and 2020, and he is currently pursuing a Ph.D. degree at the Beijing University of Posts and Telecommunications, Beijing. His research interests include computer vision and image enhancement.
Yuanqiang Cai received the Ph.D. degree from the University of Chinese Academy of Sciences, Beijing, China, in 2021. He is currently a lecturer with the Beijing University of Posts and Telecommunications. His research interests include object detection, multimedia content analysis, and text localization and recognition in images and videos. He has published more than 10 papers in refereed conferences and journals, including NeurIPS, AAAI, ACM MM, TCSVT, and PR.
Danfeng Yan is a professor and Ph.D. supervisor at the Beijing University of Posts and Telecommunications, where she works in the State Key Laboratory of Networking and Switching Technology. She was a visiting scholar at the School of Computer Science and Bell Labs, University of Ottawa, Canada, in 2010. She has long been engaged in scientific research and teaching of computer application technology. As a principal researcher, she has led or participated in the National High-tech R&D Program (863 Program), the National Basic Research Program (973 Program), the National Key R&D Program of China, provincial and ministerial programs, and innovative research groups of the National Natural Science Foundation of China.
| http://arxiv.org/abs/2307.04239v1 | 20230709175720 | First-order Phase Transition interpretation of PTA signal produces solar-mass Black Holes | ["Yann Gouttenoire"] | hep-ph | ["hep-ph", "astro-ph.CO", "gr-qc"] |
[email protected]
School of Physics and Astronomy, Tel-Aviv University, Tel-Aviv 69978, Israel
We perform a Bayesian analysis of NANOGrav 15yr and IPTA DR2 pulsar timing residuals and show that the recently detected stochastic gravitational-wave background (SGWB) is compatible with a SGWB produced by bubble dynamics during a cosmological first-order phase transition. The timing data suggest that the phase transition would occur around the QCD confinement temperature and would have a slow rate of completion. This scenario can naturally lead to the abundant production of primordial black holes (PBHs) with solar masses. These PBHs can potentially be detected by the current and next-generation gravitational-wave detectors LIGO-Virgo-KAGRA, Einstein Telescope, and Cosmic Explorer, by astrometry with GAIA, and by 21-cm surveys.
First-order Phase Transition interpretation of PTA signal produces solar-mass Black Holes
Yann Gouttenoire
August 12, 2023
=========================================================================================
§ INTRODUCTION
By measuring cross-correlations in the arrival times of pulses emitted by rotating neutron stars, Pulsar Timing Arrays (PTAs) have been established as a means to detect nano-Hertz (nHz) frequency Gravitational Waves (GW).
In 2020, a common low-frequency noise was identified in the datasets of the NANOGrav <cit.>, EPTA <cit.>, PPTA <cit.> and IPTA <cit.> collaborations, the latter of which combines data from the former three and therefore provides the largest data release to date.
Distinguishing a GW origin from systematic effects requires timing-delay correlations to have a quadrupolar dependence on the angular separation between pulsars <cit.>. In June 2023, upon analysing their most recent data, the NANOGrav and EPTA collaborations (NG15 and EPTA DR2) found statistical evidence for such interpulsar correlations <cit.>, with Bayes factors of 600 and 60, respectively.
The primary expected source of GWs at low frequencies is believed to be supermassive black hole binaries (SMBH) <cit.>. The stochastic GW background (SGWB) inferred from PTA data lies at the upper end of the astrophysically predicted interval, see Fig. <ref>. Recent studies suggest the possibility of SMBH binaries being slightly more massive and more numerous than initially anticipated <cit.>. Alternatively, the PTA SGWB might originate from new physics taking place in the early universe <cit.>. The latter hypothesis, however, comes with its own set of challenges. For instance, ascribing the SGWB to inflation necessitates unnaturally large values for the spectral tilt n_t ≃ 1.8 and a low reheating temperature T_ reh≲ 10 GeV <cit.>. GWs induced by a Gaussian spectrum of curvature perturbations would result in excessive PBH production <cit.>, as would a SGWB produced by domain-wall annihilation <cit.>. A SGWB resulting from PBH mergers would not align with structure formation <cit.>. A cosmic-string network arising from a global symmetry is excluded by Big-Bang Nucleosynthesis (BBN) <cit.>, while one arising from a local symmetry is not favoured by the Bayesian analysis <cit.>.
To evade the BBN bound, a first-order phase transition (1stOPT) sourcing the PTA signal would require the latent heat to be released dominantly into the Standard Model, e.g. <cit.>.
Interestingly however, the 1stOPT interpretation of the PTA SGWB requires a reheating temperature around the QCD confinement scale of 100 MeV, with a rather low completion rate β/H ∼ 10 and a large latent heat fraction α≳ 1 <cit.>.
This overlaps with the region where 1stOPTs have recently been found to produce PBHs in observable amounts <cit.>. The PBH prior has been omitted in all previous analyses of the 1stOPT interpretation of PTA data <cit.>.
In this letter, we perform a Bayesian search for SGWB from 1stOPT in NANOGrav 15-year (NG15) and IPTA DR2 (IPTA2) timing residuals, including both BBN-N_ eff-bound and PBH-overproduction constraints as priors in the analysis.
To simplify the numerical strategy, we focus on the region α≫ 1 of strong supercooling where PBH production is the most efficient.[The Bayesian analysis of 1stOPT with finite α will be presented elsewhere.]
We argue that the SGWB from 1stOPT is given by the bulk flow model independently of whether the latent heat is still stored in bubble walls at percolation or has been released to the plasma before.
We find that PBH formation does not exclude the 1stOPT interpretation of the PTA signal. Instead, a SGWB from a supercooled PT is favoured with respect to the SMBH binary hypothesis by a Bayes factor of 15 in the NG15 data set.
We point out, for the first time, the existence of a multi-messenger window: the NG15 posterior contains a region producing [10-100] solar-mass PBHs, see Fig. <ref>. The merging of such PBHs would source GWs with kHz frequencies in the range of LIGO-Virgo
<cit.> and ET/CE <cit.>. Additionally, their presence could be detected from lensing with GAIA <cit.> or from heating in 21-cm surveys <cit.>.
We also consider the negative hypothesis in which the SGWB observed by PTAs does not result from a supercooled PT and derive lower limits on the rate of completion, β/H ≳ [10-20], implying that the universe could not have boiled for longer than [5%-10%] of a Hubble time during the QCD phase transition.
§ GRAVITATIONAL WAVES FROM FIRST-ORDER PT
PT parameters —
The strength of a 1stOPT is characterized by the ratio of its latent heat Δ V, defined as the vacuum energy difference between the two minima of the potential driving the transition, to the radiation energy density ρ_ rad(T_n) at the nucleation temperature T_n
α≡Δ V/ρ_ rad(T_n)≡( T_ eq/T_ n)^4.
In this work, we assume α≫ 1, in which case the universe enters a stage of vacuum-domination at temperature T_ eq which ends at T_n when bubble growth converts the latent heat into radiation energy density.
The rate at which nucleation takes place is controlled by the time derivative of the tunneling rate per unit of volume Γ_V
β≡ (1/Γ_V) dΓ_V/dt.
After the phase transition completes, the universe is reheated back to the temperature T_ eq up to changes in number of degrees of freedom which we neglect.
Energy budget —
The dynamics of weak phase transitions (α < 1) is rather well understood <cit.>. The non-relativistic motion of bubble walls, γ_w≃ 1, converts the latent heat into thermal and kinetic energy of the plasma, which propagates in the form of long-lasting sound waves <cit.> and ultimately turns into turbulence <cit.>. GWs sourced by sound waves have been intensively simulated on the lattice in recent years <cit.>, and analytical models have been proposed <cit.>.
The dynamics of supercooled phase transitions (α > 1) is more complex due to the large Lorentz factor γ_w≫ 1 of bubble walls <cit.>.
In the relativistic limit, the acceleration of bubble walls with tension σ is set by the pressure balance <cit.>
dγ_w/dt = Δ V-𝒫_ fric/σ.
The friction pressure 𝒫_ fric is dominantly induced by transition radiation <cit.>, which resummed at leading-logs, reads <cit.>
𝒫_ fric = c_0 g_ D^3 γ_w v_ϕ T_n^3 log( v_ϕ/T_n), c_0=𝒪(1),
where g_ D is a gauge coupling and v_ϕ is the vev of the scalar field driving the phase transition.
As bubble walls accelerate, the retarding pressure 𝒫_ fric grows linearly with γ_w.
Scalar field gradient —
It is necessary to distinguish two scenarios according to whether the retarding pressure stops the walls from accelerating before collision (𝒫_ fric = Δ V) or not (𝒫_ fric≪Δ V) <cit.>.
In the latter case, bubble walls run away (γ_w ↗), and the latent heat is dominantly stored as bubble-wall kinetic energy, which is the main source of GWs. This occurs for very large supercooling
T_ n/T_ eq ≲ 5.3 × 10^-5 ( (v_ϕ/1 GeV) ((β/H)/10) (0.45/g_ D) )^1/4.
GWs from scalar field gradients were first computed in the so-called “envelope” approximation, where walls are infinitely thin and collided parts are neglected <cit.>.
Later, collided parts were added to the computation in the so-called “bulk flow" model at the analytical <cit.> and numerical level <cit.>. It was found that the long-lasting propagation of the infinitely thin shells produces an IR enhancement of the GW spectrum as Ω_ PT∝ f^1 instead of Ω_ PT∝ f^3.
For relativistic wall velocities, the bulk flow model predicts <cit.>
Ω_ PTh^2 ≃10^-6/(g_*/100)^1/3 (H_*/β)^2 (α/(1+α))^2 S_ PT(f) S_H(f),
with the spectral shape S_ PT(f) peaked on f_ϕ
S_ PT(f) = 3(f/f_ PT)^0.9/[2.1+0.9(f/f_ PT)^3], f_ PT = (a_*/a_0) 0.8 (β/2π),
and the redshift factor between percolation “*” and today “0”
a_*/a_0 = 1.65 × 10^-2 mHz (T_ eq/100 GeV) ( g_ eff, reh/100)^1/6 H_*^-1.
We added the correction factor
S_H(f) = (f/f_∗)^2.1/[1+(f/f_∗)^2.1], f_∗ = c_*(a_*/a_0)(H_*/2π),
with c_* = 𝒪(1) to impose an f^3 scaling for emitted frequencies smaller than the Hubble factor H_∗/(2π) as required by causality <cit.>. We fix c_*=1 and leave the determination of c_* for future studies.
Plasma dynamics —
If Eq. (<ref>) is not satisfied, bubble walls reach a constant Lorentz factor γ̇_w=0, and the latent heat of the phase transition is dominantly transferred to the plasma, which is the main source of GWs. Friction-dominated bubble wall motion is expected to generate extremely thin and relativistic fluid configurations, which become long-lasting shock waves after bubble collisions <cit.>.
The large hierarchy between the bubble radius and the thickness of the shock front is a major challenge to numerical treatment. However, from a gravitational viewpoint, an extremely peaked momentum distribution carried by the plasma should be indistinguishable from an extremely peaked momentum distribution carried by the scalar field. Hence we expect the GW signal in both situations to be similar.
A second difficulty in modelling plasma dynamics is the possibility for bubble walls to be followed by relativistic shells of free-streaming particles <cit.>, breaking down the fluid description. A recent study in the moderately relativistic regime γ_w ≲ 10 <cit.> suggests that the GW spectrum again resembles the one predicted in bulk flow model.
For the two aforementioned reasons, in the present work we assume the GW signal to be given by the bulk flow model in Eq. (<ref>) in the whole strongly supercooled regime T_n ≪ T_ eq, independently of whether Eq. (<ref>) is satisfied or not.[We thank Ryusuke Jinno for fruitful discussions regarding this point.]
§ PTA DATA ANALYSIS
Numerical strategy —
We searched for GW from 1stOPT in two open-access datasets, NG15 <cit.> and IPTA2 <cit.>. The released data are presented in terms of the timing-residual cross-power spectral density S_ab(f)≡Γ_ab h^2_c(f)/(12π^2)f^-3, where h_c(f)≃ 1.26· 10^-18(Hz/f)√(h^2Ω_GW(f)) signifies the characteristic strain spectrum <cit.> and Γ_ab denotes the Overlap Reduction Function (ORF) between pulsars 'a' and 'b' within a given PTA <cit.>.
We used the software packages enterprise <cit.> and enterprise_extensions <cit.> to compute the likelihood of observing the given timing residuals assuming the presence of the SGWB from a 1stOPT given in Eq. (<ref>). We used PTMCMC <cit.> to generate the posterior distribution. For IPTA2, we marginalized over white, red and dispersion measure noises as prescribed in <cit.>. For NG15, we instead used the handy wrapper PTArcade <cit.> in “enterprise” mode, in which marginalization over noise parameters is automated. We used the GetDist <cit.> tool to plot the results. To circumvent pulsar-intrinsic excess noise at high frequencies, the SGWB search was confined to the lowest 14 and 13 frequency bins of the NG15 and IPTA2 datasets, respectively.
We included the BBN constraint, assuming that the 1stOPT sector reheats dominantly into Standard Model degrees of freedom, and, when specified, the constraint from PBH overproduction discussed in Sec. <ref>, to infer the prior distribution of the 1stOPT parameters.
Supercooled PT —
We conducted searches for GW from strong 1stOPT (α≫ 1) in isolation, GW from SMBH binaries individually, as well as a combined analysis of 1stOPT and SMBH binaries. In Fig. <ref>, we show the GW spectra with parameters set to their mean posterior values given in Tab. <ref>.
The 68% and 95% confidence contours are depicted in Fig. <ref>-left. The posterior for the combined analysis of 1stOPT and SMBH is reported to Figs. <ref> and <ref> in the appendix. We assumed a flat prior on the strain amplitude of the SGWB from SMBH binaries.
To quantify the evidence provided by the observed PTA data, denoted as 𝒟, in favor of one model, say Y, versus another, say X, we employ the Bayes factor
BF_Y,X≡𝒫(𝒟|Y) / 𝒫(𝒟|X),
which we compute using the product-space sampling method <cit.>
implemented in enterprise_extensions <cit.>. Here, 𝒫(𝒟|X) is the likelihood of observing the data 𝒟 given the model X. The outcome of the Bayesian model comparison presented in Tab. <ref>, according to Jeffrey's scale <cit.>, suggests that the NG15 data `substantially' favours the presence of a GW signal from a 1stOPT alongside the one from SMBHB. Instead, the IPTA2 data remains inconclusive.
Exclusion bounds —
Under the assumption that the PTA signal is due to SMBHB or any source other than a 1stOPT, we have derived upper limits on the GW signal emanating from a 1stOPT. As depicted in Fig. <ref>-right, these limits translate into lower bounds on the rate of completion, reaching up to β/H ≳ 20.
§ PRIMORDIAL BLACK HOLES
Supercooled late-blooming mechanism —
In <cit.>, it was demonstrated that PBHs could be produced in observable amounts during supercooled PTs through a process termed “late-blooming”.
During 1stOPT, the nucleation sites of bubbles are randomly dispersed across the entire volume of the false vacuum. As the universe gets close to the point of percolation, there remains a non-zero probability of identifying Hubble-sized regions where nucleation has not yet initiated.
Throughout the supercooled PT, these delayed regions maintain a constant vacuum energy, while the energy density in their vicinity redshifts like radiation. Upon completion of percolation, these “late-bloomers” evolve into over-dense regions. If these regions are Hubble-sized and exceed a certain density threshold δρ/ρ≳ 0.45, they collapse into PBHs.
We direct the reader to <cit.> for the precise analytical formula to estimate the abundance and mass of those PBHs.[Some other works <cit.> find a different PBH abundance. Refs. <cit.> find a lower PBH abundance because their formalism requires the collapsing patch to remain 100% vacuum dominated until collapse. Ref. <cit.> finds a larger abundance because nucleation is not accounted for in the entire past light-cone of a collapsing patch. Instead, Ref. <cit.> accounts for nucleation taking place not only in the whole past light-cone but also in the collapsing patch itself, as long as the critical overdensity is reached.]
We included the PBH overproduction constraints as a prior in the Bayesian analysis. The Bayes factor shown in Tab. <ref> is unaffected for IPTA2 and only decreases from 24 to 15 for NG15.
We have plotted the contour lines representing the PBH fraction of dark matter f_ PBH in Fig. <ref> and the PBH mass in Fig. <ref>. In addition, we overlay cosmological and astrophysical constraints on this population of PBHs.
Excluded regions and detection prospects —
With solid lines, we show current constraints.
In yellow, we have the exclusion regions arising from distortion of the Cosmic Microwave Background (CMB) caused by X-rays from accretion, which modify the ionization history between recombination and reionization <cit.>.
In purple, we show the constraints from the search for photometric magnification (strong lensing) of stars in the Magellanic clouds conducted on EROS data <cit.>.
The solid cyan-colored region represents constraints derived from the data collected by LIGO/Virgo interferometers <cit.>.
With dashed lines, we show future prospects. In green, we have the reach of 21 cm surveys due to heating and ionization of the intergalactic medium via X-rays produced during accretion <cit.>. In red, we have the forecast from the search for transient astrometric deviation (weak lensing) of single or multiple stars in GAIA time-series data <cit.>. Finally, in dashed cyan we show the prospect for detecting GW from PBH binaries with Einstein telescope and Cosmic Explorer <cit.>.
§ CONCLUSION
We conducted a Bayesian analysis of the NANOGrav 15-yr (NG15) and IPTA DR2 (IPTA2) timing residuals. Our findings indicate that NG15 shows a substantial preference for the presence of a strong first-order phase transition (1stOPT), either in isolation or combined with the SGWB from SMBH binaries, while IPTA2 remains inconclusive about which scenario is preferred.
The phase transition is characterized by a remarkably low completion rate, e.g. β/H≃ 12.6 and 10.7 for NG15 with and without astrophysical signal from SMBH binaries. From a theoretical perspective, such a value is typical of supercooled phase transitions, characterized by a strong first-order phase transition with a parameter α significantly larger than 1, e.g. <cit.>, which motivates the choice of prior α≫ 1 done in this work. These cosmological scenarios have been demonstrated to produce primordial black holes (PBHs) in considerable quantities when β/H ≲ 7 <cit.>.
We checked that, in contrast to the scalar-induced <cit.> and domain-wall <cit.> interpretations of the PTA signal, the 1stOPT interpretation does not fall into the PBH graveyard of PTA interpretations. The Bayes factor of the strong 1stOPT interpretation with respect to the SMBH-binary one is only reduced from 24 to 15 in NG15 after including the PBH prior, while it is not affected in IPTA2.
We further assessed the potential for detecting these PBHs using different observational techniques, including 21 cm cosmological hydrogen line observations, astrometry with the GAIA mission and next-generation kilohertz frequency GW interferometers such as the Einstein Telescope (ET) and Cosmic Explorer (CE).
In the event that an astrophysical explanation becomes definitive, we established 68% and 95% exclusion constraints on the parameter space of the 1stOPT, reaching up to β/H ≳ 20. Under these conditions, any possibility of detecting PBHs from supercooled PTs within the mass range [1 M_⊙, 10^3 M_⊙] would effectively be precluded.
We must emphasize that our current comprehension of the GW spectrum resulting from supercooled phase transitions is still in its early stages. The assumptions are founded on the bulk flow model, in which GWs are sourced by the expansion of an infinitely thin distribution of the stress-energy momentum tensor. Future investigations are needed to probe potential modifications of the GW spectrum induced by non-linear effects, such as those arising from relativistic shock waves, or by deviations from a fluid description.
Notwithstanding these constraints, the concept of employing multi-messenger observations of GWs at nHz and kHz frequencies to investigate supercooled phase transitions occurring around the QCD epoch remains an approach consistent with the overarching goal of exploring the cosmos using all available messengers and signals.
Acknowledgements.—
The author is grateful to Iason Baldes, Ryusuke Jinno, Marius Kongsore, Fabrizio Rompineve, Miguel Vanvlasselaer and Tomer Volansky for fruitful discussions and to the Azrieli Foundation for the award of an Azrieli Fellowship.
§ DATA ANALYSIS
The purpose of this Appendix is to delineate the Bayesian search methodology employed in our study. We rely on the NG15 dataset <cit.> and on Version B of the IPTA2 dataset <cit.>. To ascertain the noise parameters of IPTA2, we closely follow the approach adopted by the IPTA collaboration <cit.>, see also <cit.>. We then checked that we obtained consistent results with the software PTArcade <cit.>, in which noise marginalization has been automatised, see Fig. <ref>-left. The Bayesian analysis of NG15, instead, was done solely with the “enterprise” mode of PTArcade <cit.>. We perform the search for a SGWB in the first 13 and 14 frequency bins of IPTA2 and NG15, respectively.
We now describe the Bayesian analysis of IPTA2, which we performed ourselves without the use of PTArcade <cit.>.
We adapted the software packages enterprise <cit.> and enterprise_extensions <cit.> to incorporate GW spectra from 1stOPT in terms of the power spectrum in timing residual, and used them to compute the likelihood function, symbolized as 𝒫(𝒟|θ). This function encapsulates the probability of observing the data 𝒟 given a specific set of model parameters θ. The posterior distribution, 𝒫(θ|𝒟), which illustrates the probability distribution of model parameters θ given the observed data 𝒟, is linked to the likelihood function via Bayes's theorem
𝒫(θ|𝒟) = 𝒫(𝒟|θ)𝒫(θ)/𝒫(𝒟).
Within this equation, 𝒫(θ) is the prior distribution, representing preliminary knowledge of the parameters prior to data observation, while 𝒫(𝒟) is the marginal likelihood or evidence, functioning as a normalization constant to ensure that the posterior distribution integrates to 1.
The parallel-tempering Markov Chain Monte-Carlo sampler PTMCMC <cit.> was employed to reconstruct the posterior distribution 𝒫(θ|𝒟) using an enhanced version of the Metropolis-Hastings algorithm <cit.>. The GetDist tool <cit.> was subsequently used to plot the posterior distributions and upper limits.
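For readers unfamiliar with the sampling step, a bare-bones random-walk Metropolis-Hastings kernel is sketched below; the actual analysis uses the parallel-tempering sampler PTMCMC, so this is only a conceptual stand-in with illustrative parameters.

    import numpy as np

    def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
        # Minimal random-walk Metropolis-Hastings; not the PTMCMC sampler.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((n_steps, x.size))
        for i in range(n_steps):
            prop = x + step * rng.standard_normal(x.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept / reject
                x, lp = prop, lp_prop
            chain[i] = x
        return chain

    # toy posterior (standard 2D Gaussian); discard 25% of the chain as burn-in
    chain = metropolis_hastings(lambda x: -0.5 * np.sum(x**2), [3.0, -3.0], 20_000)
    chain = chain[len(chain) // 4:]
    print(chain.mean(axis=0), chain.std(axis=0))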
The pulsar noise parameters employed in the likelihood function can be classified into three distinct categories: white noise, red noise, and dispersion measures (DM). The white noise parameters are grouped into three sets for each backend/receiver associated with a given pulsar: EFAC (E_k), EQUAD (Q_k[s]), and ECORR (J_k[s]). The values of the white noise parameters are fixed to the mean posterior values obtained by performing single-pulsar analyses devoid of GW signals. We only kept pulsars with more than 3 years of observation time, which corresponds to 53 pulsars. In contrast, the Bayesian analysis of NG15 data performed via PTArcade contains 68 pulsars with more than 3 years of observation. We employ the Jet Propulsion Laboratory Development Ephemeris DE438 and the Terrestrial Time reference timescale of the International Bureau of Weights and Measures BIPM18.
Next, for the multi-pulsar analysis incorporating the GW signals, we account for two power-law red noise parameters per pulsar, specifically the amplitude at the reference frequency of yr^-1, denoted as A_red, and the spectral index, denoted as γ_red. Additionally, we incorporate power-law errors associated with dispersion measures (DM). We note that the treatment of DM noise as a Gaussian process is specific to the IPTA2 dataset. In the analysis of NG15 data performed via PTArcade, but also in the analysis of NANOGrav 12.5-year (NG12) data done in <cit.>, pulse dispersion is instead modelled by a set of “per-epoch” parameters describing the DM offset from a nominal fixed value <cit.>. These can add dozens of additional parameters per pulsar <cit.>.
In the individual pulsar analysis of PSR J1713+0747 (in IPTA2 but also in NG15), we extend our consideration to encompass a DM exponential dip parameter, following the methodology described in <cit.>. The priors for the noise parameters are reported in Tab. <ref>, along with the priors for the parameters for the GW spectra from 1stOPT and SMBH binaries.
To economize on computational time, we adopt the methodology of previous studies <cit.> and in our search for a GW background we utilize only auto-correlation terms I=J in the Overlap Reduction Function (ORF) Γ_IJ, rather than the complete Hellings-Downs ORF with I≠ J.
We acquire 10^6 samples per analysis presented in this study and discard 25% of each chain as burn-in. We could replicate the posteriors of <cit.> and <cit.> for a power-law model with excellent agreement.
The violin features shown in Figs. <ref> and <ref> are obtained with the free-spectrum approach described in <cit.>. We do not repeat this analysis and instead take the data directly from https://zenodo.org/record/8060824NG15 and
https://zenodo.org/record/5787557IPTA2.
Our study encompasses two types of analyses. The first, a detection analysis, identifies the region of parameter space in which GWs from 1stOPT can account for the common-spectrum process in the datasets. Here, we use a uniform prior on the logarithm of each parameter and adopt a prior on β/H due to the BBN bound and - when mentioned - PBH overproduction.
The second, a lower-limit analysis, seeks to constrain the rate of completion of the phase transition β/H.
There, we use a uniform prior on H/β instead of log_10(β/H) as described in <cit.>. All prior choices are given in Tab. <ref>.
BBN prior. —
As a sub-component of the total energy density of the universe, the latent heat Δ V can impact the expansion rate of the universe which is strongly constrained by BBN and CMB. Its effect can be encoded in the effective number of extra neutrino relics
N_ eff = 8/7( (ρ_ tot-ρ_γ)/ρ_γ)( 11/4)^4/3,
where ρ_γ is the photon energy density. The effective number of relativistic degrees of freedom is constrained by CMB measurements <cit.> to N_ eff = 2.99_-0.33^+0.34 and by BBN predictions <cit.> to N_ eff = 2.90_-0.22^+0.22, whereas the SM prediction <cit.> is N_ eff≃ 3.045. The latent heat parameter of a generic 1stOPT reads
α = ρ_ DW(T)/[(π^2/30)g_*(T)T^4],
where T is the photon temperature and g_*(T) contains eventual dark degrees of freedom. The maximal contribution to N_ eff occurs at reheating after percolation
Δ N_ eff(T) = 2.20g_*(T)α(T).
The BBN bound Δ N_ eff≲ 0.3 <cit.> applies after neutrinos decouple below the temperature T_ dec, where g_*(T< T_ dec)≡ 2+(7/8)· 6· (4/11)^4/3≃ 3.36. We obtain
Δ N_ eff = 7.4 α ≲ 0.3.
Two scenarios must be distinguished. The first one is when reheating after percolation occurs in a dark sector, in which case Eq. <ref> is the BBN constraint. The second one is when reheating after the 1stOPT occurs into the Standard Model, in which case Eq. <ref> applies only if the reheating temperature is below the neutrino decoupling temperature, T_ reh≲ 1 MeV. The latter case is the scenario we consider in this work.
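The numerical coefficients quoted here can be cross-checked with a few lines of arithmetic; the only extra input is the photon contribution g_γ = 2 to the energy density.

    import numpy as np

    g_star = 2 + (7 / 8) * 6 * (4 / 11) ** (4 / 3)    # ~3.36 after neutrino decoupling
    coeff = (8 / 7) * (11 / 4) ** (4 / 3) / 2          # the 2.20 in Delta N_eff = 2.20 g_* alpha
    print(f"g_* = {g_star:.2f},  2.20-check = {coeff:.2f},  7.4-check = {coeff * g_star:.1f}")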
PBH prior. —
The condition of not producing PBH with an energy density larger than the one of observed dark matter, f_ PBH < 1, implies a lower bound on the rate of completion of a 1stOPT <cit.>
β/H ≳ (5.54+0.232 log_10(T_ reh/ GeV)-0.00512 log_10^2(T_ reh/ GeV))( 1 - 0.0695ln( 1+908.1/α^3.204) ),
where we have introduced an analytical function fitted on numerical results of <cit.>.
When specified, we include the constraint in Eq. (<ref>) as prior information on β/H and T_ eq. Due to the exponential dependence of the PBH abundance on β/H, the precise astrophysical and cosmological PBH constraints, as shown in e.g. Fig. <ref>, make little difference with respect to the simple criterion f_PBH <1.
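For quick evaluations, the fitted bound can be wrapped into a small helper; this is a direct transcription of the analytical fit above (valid only in its fitted regime), evaluated with illustrative inputs.

    import numpy as np

    def beta_over_H_min(T_reh_GeV, alpha):
        # PBH overproduction bound (f_PBH < 1), transcribed from the fit above.
        x = np.log10(T_reh_GeV)
        poly = 5.54 + 0.232 * x - 0.00512 * x**2
        return poly * (1 - 0.0695 * np.log(1 + 908.1 / alpha**3.204))

    for T in (0.1, 1.0, 10.0):          # reheating temperatures in GeV, alpha >> 1
        print(f"T_reh = {T:>4} GeV  ->  beta/H > {beta_over_H_min(T, 1e3):.2f}")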
§ COMBINED GW FROM 1STOPT AND SMBH BINARIES
The squared characteristic strain spectrum of a population of circular SMBHBs n(z,ℳ) can be written as <cit.>
h_c^2(f) = 4G^5/3/3π^1/3f^4/3∫ dℳ∫ dz ℳ^5/3/(1+z)^1/3d^2n/dzdℳ,
where ℳ=(m_1m_2)^3/5/(m_1+m_2)^1/5 is the chirp mass and 1/(1+z) accounts for the cosmological redshifting of GW energy.
The strain spectrum can be expressed as a red-tilted power-law
h_c(f) = A_ SMBH(f/1 yr^-1)^-2/3,
where A_ SMBH is the strain amplitude at the frequency 1 yr^-1≃ 3.2× 10^-8 Hz. In terms of the fractional energy density, it corresponds to the blue-tilted power-law
Ω_ SMBH(f) = 2π^2/3H_0^2f^2 h_c^2(f) ∝ f^2/3.
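For reference, the conversion from the strain amplitude to the fractional energy density at a given frequency uses only the two relations above; the Hubble rate and the example amplitude below are assumed, illustrative values.

    import numpy as np

    f_yr = 1.0 / (365.25 * 24 * 3600)     # 1 yr^-1 ~ 3.2e-8 Hz
    H0 = 67.4e3 / 3.0857e22               # Hubble rate in s^-1 (assuming h ~ 0.674)

    def omega_smbh(f, A_smbh):
        h_c = A_smbh * (f / f_yr) ** (-2.0 / 3.0)
        return 2 * np.pi**2 * f**2 * h_c**2 / (3 * H0**2)

    print(f"Omega(1e-8 Hz) ~ {omega_smbh(1e-8, 1e-15):.1e} for A_SMBH = 1e-15")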
We conduct a search for a combined GW signal from both a supercooled 1stOPT and SMBH binaries. We present the posterior distribution of the model parameters (A_ SMBH, T_ reh, β/H) in Fig. <ref>. We included the BBN and PBH constraints in the prior distribution of the 1stOPT parameters. The mean posterior values of the parameters are reported in Tab. <ref> and the associated GW spectra are plotted in Fig. <ref>.
|
http://arxiv.org/abs/2307.06110v1 | 20230712123527 | Quantum field theory for multipolar composite bosons with mass defect and relativistic corrections | [
"Tobias Aßmann",
"Enno Giese",
"Fabio Di Pumpo"
] | quant-ph | [
"quant-ph",
"hep-th",
"physics.atom-ph"
] |
Atomic high-precision measurements have become a competitive and essential technique for tests of fundamental physics, the Standard Model, and our theory of gravity.
It is therefore self-evident that such measurements call for a consistent relativistic description of atoms that eventually originates from quantum field theories like quantum electrodynamics.
Most quantum-metrological approaches even postulate effective field-theoretical treatments to describe a precision enhancement through techniques like squeezing.
However, a consistent derivation of interacting atomic quantum gases from an elementary quantum field theory that includes both the internal structure as well as the center of mass of atoms, has not yet been addressed.
We present such an effective quantum field theory for interacting, spin-carrying, and possibly charged ensembles of atoms composed of nucleus and electron that form composite bosons called cobosons, where the interaction with light is included in a multipolar description.
Relativistic corrections to the energy of a single coboson, light-matter interaction, and the scattering potential between cobosons arise in a consistent and natural manner.
In particular, we obtain a relativistic coupling between the coboson's center-of-mass motion and internal structure encoded by the mass defect, together with an ion spin-orbit coupling.
We use these results to derive modified bound-state energies including the motion of ions, modified scattering potentials, a relativistic extension of the Gross-Pitaevskii equation, and the mass defect applicable to atomic clocks or quantum-clock interferometry.
Our theory does not only combine and generalize aspects of effective field theories, quantum optics, scattering theory, and ultracold quantum gases, but it also bridges the gap between quantum electrodynamics and effective field theories for ultracold quantum gases.
Quantum field theory for multipolar composite bosons with mass defect and relativistic corrections
Fabio Di Pumpo
==================================================================================================
§ INTRODUCTION
Quantum field theories (QFT) <cit.> are powerful and successful tools with applications ranging from the field of particle physics described by the Standard Model <cit.>, over quantum electrodynamics (QED) <cit.> to nonrelativistic (NR) ultracold quantum gases <cit.>.
Because these gases consist of atoms, composite particles, and not of elementary particles, they have to be described by an effective field theory (EFT) <cit.>.
Such EFTs are the method of choice, for instance in describing Bose-Einstein condensates (BEC) <cit.>, but are usually not derived from an elementary theory.
Hence, they give no direct access to relativistic and further corrections, including radiative corrections <cit.>, effects from the composite nature of the nucleus <cit.>, or the mass defect <cit.> relevant for quantum clocks <cit.>.
In this work, we derive an EFT from QED to describe NR composite particles including relativistic corrections.
As a result, we obtain a field-theoretical description of charged, interacting atomic ensembles including both the coupling of the center-of-mass (c.m.) motion to the internal atomic structure, as well as atom-atom and light-matter interactions with relativistic corrections.
Since in many applications atoms move at NR velocities and pair creation plays no role, the respective EFT leads to nonrelativistic QED (NRQED) <cit.>.
It is routinely used to describe an individual neutral, composite particle, where usually the c.m. degrees of freedom are not taken into account and the light-matter interaction is only considered to lowest order.
This approach is suited for studying atomic structures, such as in spectroscopy <cit.>, giving rise to, e.g., radiative QED corrections.
On the other hand, atomic scattering experiments imply the presence of more than one particle and rely on the c.m. of atoms, so that the theory mentioned above has to be extended.
Common approaches <cit.> usually include the c.m. and atom-atom scattering by generalizing single-atom theories to a corresponding effective QFT, so that both coincide in the single-atom limit.
However, fundamental effects from QED remain inaccessible in such a treatment.
By extending NRQED to an EFT for interacting atomic ensembles and taking their c.m. degrees of freedom into account, we find from first principles a description that reduces in limiting cases to common approaches, but with access to radiative corrections and scattering potentials.
The coupling of the inner-atomic structure to the atom's c.m. motion is a consequence of the relativistic mass defect <cit.>, used in quantum-clock interferometry <cit.>, and which gives access to quantum tests of fundamental physics <cit.>.
This coupling has been derived in single-particle quantum mechanics for spin- and chargeless particles with <cit.> and without <cit.> gravitational backgrounds.
In this work, we unify, generalize, and extend these concepts into one framework.
To this end, we use NRQED to describe two different fermions (constituents of the coboson) on flat spacetime, and derive an EFT describing charged (ionized) spin-carrying atomic clouds including the mass defect, which are exposed to atom-atom and light-matter interactions.
The article follows the hierarchical steps presented in Fig. <ref>, summarizing transitions between different EFTs at each step, originating from QED as a fundamental starting point for the description of interactions between fermions and photons.
We aim for a model of interacting composite particles consisting of bound fermions.
Since antiparticles are irrelevant in bound systems like atoms at NR energies, we use NRQED by restricting ourselves to the NR limit of QED, where the antiparticle is removed from the formalism.
The resulting model is described in Sec. <ref> and covers the interaction of nuclei and electrons on a field-theoretical level, including relativistic effects.
This interaction is mediated by (virtual) photons that account for the binding potential between the constituents of the composite particle.
In a next step, such virtual photons are matched to instantaneous potentials V̂^(ij), resulting in potential NRQED (pNRQED) <cit.>.
These potentials mediate the electromagnetic (EM) interaction in the spirit of the classical Coulomb problem, but still include relativistic corrections.
However, such potentials between elementary fermions do not describe solely attractive potentials between constituents of an atom, but also repulsive interactions in a gas of fermions.
To this end, we reduce pNRQED to cobosonic QFT (CbQFT) via a projection technique on a subspace of paired nuclei and electrons that is based on different length scales (Sec. <ref>).
In addition, we introduce field operators for so-called cobosons (composite bosons) <cit.> whose commutation relation differs from the fundamental bosonic one <cit.>.
The resulting theory of composite particles has the desired form, but is still given in terms of the degrees of freedom of their constituents.
In Sec. <ref> we therefore describe the interaction of such composite particles with light via electric and magnetic fields <cit.> instead of the vector potential.
At the same time, we introduce c.m. and relative coordinates commonly used in the description of bound or composite particles.
This approach clearly distinguishes between the internal structure of composite particles and their c.m. motion, so that it connects to the field-theoretical treatment of quantum gases.
We present the main results in Sec. <ref>.
Our multipolar cobosonic QFT gives rise to a field-theoretical description of atoms, together with their internal degrees of freedom and their c.m. motion, as well as their interactions with EM fields and scattering between different cobosons.
It includes relativistic QED corrections in the spectrum of the bound system, but also in the motion of the composite particles, their scattering, as well as their interaction with EM fields.
In particular, we observe a coupling between relative and c.m. coordinates as a consequence of the mass defect, but also a spin-orbit coupling for charged cobosons.
Finally, we put our results in Sec. <ref> into context with existing approaches in various subfields, and we use this discussion as a motivation for sample applications given in Sec. <ref>.
We present the coupling of the atom's energy spectrum to the c.m. motion, reduce the scattering potential to a generalized dipole-dipole potential, derive a QFT for interacting cobosonic quantum gases, and find in a mean-field description a modified Gross-Pitaevskii equation <cit.> including the mass defect.
We conclude in Sec. <ref>.
For completeness, Appendix <ref> presents a derivation of the fermion-fermion potentials from pNRQED.
We discuss aspects of the coboson-subspace projection in Appendix <ref> and derive unitary transformations of the coboson field operator in Appendix <ref>.
The full transformation from CbQFT to multipolar CbQFT is performed in Appendix <ref>, while Appendix <ref> presents the eigenfunctions for the relative motion of hydrogen-like composite particles.
§ POTENTIAL NONRELATIVISTIC QED
We assume that composite particles are built from two fermionic particle species, namely electrons (e) and nuclei (n) [
We omit the composite-particle nature of the nucleus in its dynamical description by treating it as a single fermionic field.
The composite-particle nature of the nucleus may be taken into account by deriving an effective QFT in the same spirit.
However, nondynamical composite-particles effects are included in prefactors, as detailed later.
].
Since NR effects are primarily responsible for the bound-state dynamics between the constituents, we rely on
NRQED <cit.> for a field-theoretical description.
NRQED is an EFT of QED valid in NR regimes of both nucleus and electron momenta where antiparticles are of no relevance.
This assumption requires sufficiently low photon energies such that the particle-antiparticle dynamics remains negligible.
Because of the absence of antiparticles, the constituents are simply described by two-component field operators ψ̂_i(x) and ψ̂_i^† (x), associated with the annihilation and creation of particle i=e,n at position x, rather than four-component spinors containing both particle and antiparticle field operators.
Thus, the components u,v=1,2 of field operators of the same species obey anti-commutation relations {ψ̂_i, u (x),ψ̂_i, v^† (x^') } = δ_uvδ(x-x^') and {ψ̂_i, u (x),ψ̂_i, v (x^') } = 0.
Simultaneously, electron and nucleus field operators act on different Hilbert spaces implying vanishing commutators [ ψ̂_i,α, ψ̂_j,β^†]=0 = [ ψ̂_i,α, ψ̂_j,β] for i ≠ j between different particle species.
The Lagrangian density governing the dynamics of the fermionic field operators may be constructed <cit.> by considering all possible operator combinations that preserve the symmetries (namely hermiticity, as well as gauge, rotational, parity, and time-reversal invariance).
Each combination is then equipped with a coefficient determined by a matching of cross sections in the low-energy limit of QED <cit.>.
Alternatively, the NRQED Lagrangian follows directly from the QED Lagrangian by applying the Foldy-Wouthuysen transformation <cit.>, where the matching coefficients have to be added manually.
These so-called Wilson coefficients <cit.> partly account for QED effects that are no longer accessible in NRQED, such as the anomalous magnetic moment <cit.>.
In the spirit of EFTs, they also account for composite-particle aspects of the nucleus, loop corrections, or radiative effects.
After a Legendre transformation of the NRQED Lagrangian density <cit.> up to order c^-2 of the speed of light c, the NRQED Hamiltonian density
ℋ̂ = ℋ̂_EM +∑_i=e,nψ̂^†_i ĥ_i ψ̂_i + ℋ̂_Cont
contains three contributions [The following form of the Hamiltonian density follows from the flat spacetime Minkowski metric η_μ,ν=diag(+1,-1,-1,-1).].
The first one corresponds to the free energy density of the EM field ℋ̂_EM = ε_0 ( Ê^2 + B̂^2/c^2)/2.
The second term accounts for the energy density of electrons and nuclei, where the single-fermion Hamiltonian
ĥ_i = m_i c^2 + q_i ϕ̂ + π̂_i^2/2m_i - c_F^(i) q_i ŝ_i·B̂/m_i - π̂_i^4/8m_i^3c^2
- c_D^(i) q_i ħ^2 ∇·Ê/8 m_i^2c^2 + c_S^(i) q_i ŝ_i·(π̂_i×Ê - Ê×π̂_i)/4 m_i^2c^2
+ c_W1^(i) q_i {π̂_i^2, ŝ_i ·B̂}/4m^3_ic^2 - c_A1^(i) q^2_i ħ^2 B̂^2/8 m_i^3 c^2
corresponds to the energy of a single fermion of species i.
The Hamiltonian constitutes the basis for the Schrödinger equation of first-quantized systems.
It is sandwiched between the field operators ψ̂_i^† ... ψ̂_i and creates a weighted particle-number density in a field-theoretical treatment.
In leading order, the energy of particle species i is the sum of rest energy due to its rest mass m_i, energy caused by the scalar potential ϕ̂ because of its charge q_i, kinetic energy, as well as energy due to the coupling of spin ŝ_i = ħσ̂_i/2 with Pauli-matrix vector σ̂_i of particle i to a magnetic field B̂.
The kinetic energy associated with the particle's minimally-coupled momentum π̂_i = p̂ - q_i Â, with momentum operator p̂ = -iħ∇, is modified by the vector potential Â; position and momentum obey the commutation relation [ x_u, p̂_v]= iħδ_uv for u,v=x,y,z.
The first-order relativistic corrections are the kinetic (π̂_i^4) and electric-field corrections, covering the Darwin term (∇·Ê) and spin-orbit term (π̂_i×Ê), which give rise to a corresponding hydrogen fine-structure contribution.
While p̂_i acts on the field operator, ∇·Ê is only a spatial derivative of Ê.
The last line of Eq. (<ref>) contains relativistic corrections to light-matter interaction in form of general magnetic moment and diamagnetic corrections.
All light fields are functions of position x and are connected to ϕ̂ and  via Ê = - ∇ϕ̂ - ∂_t  and B̂ = ∇×Â.
The Wilson coefficients c_k^(i) in Eq. (<ref>) are determined from tree-level QED matching <cit.>, and particular subscripts stand for Fermi, Darwin, and Seagull.
In particular, c_F^(i) = Z_i + a_i is related to the anomalous magnetic moment a_i of particle i and its charge number Z_e= 1 and Z_n = Z.
For instance, we can relate c_F^(e)=g_e /2 to the g-factor of the electron.
Some Wilson coefficients are defined completely through other coefficients <cit.>; and specific values for electrons <cit.> or protons <cit.> have been determined.
The third term
ℋ̂_Cont =ħ^3 ∑_i,j[ d_1^(ij)ψ̂^†_i ψ̂_i ψ̂^†_j ψ̂_j - d_2^(ij)ψ̂^†_i σ̂_i ψ̂_i ·ψ̂^†_j σ̂_j ψ̂_j]/m_i m_j c
of the Hamiltonian density from Eq. (<ref>) describes contact interactions through which fermions couple directly (Darwin-like contact interaction) and through their spin (spin-spin contact interaction).
The Wilson coefficients d_1^(ij) and d_2^(ij) are in lowest order proportional to the fine-structure constant α = e^2/(4 πε_0 ħ c) with vacuum permittivity ε_0.
These terms solely arise from loop corrections <cit.>, such that they cannot be obtained from a pure tree-level treatment.
As a result, ℋ̂_Cont is of order α/c and thus of order c^-2 [We use for convenience c to specify the order, but since c is connected to other fundamental constants, it is more precise to fix it to α/c or equivalently 1/(ε_0 c^2).].
The Hamiltonian neglects loop corrections of the order c^-2, which are suppressed by another d-type Wilson coefficient and are in fact of order α/c^2.
§.§ Matching of scattering processes to potentials
The Hamiltonian from Eq. (<ref>) allows to describe composite particles based on their fermionic constituents.
However, the defining property of composite particles, a bound-state potential due to EM interactions between fermions, does not appear explicitly yet.
Instead, it is contained in the EM fields which give rise to all allowed NRQED Feynman diagrams involving photons.
These photons may be categorized into real (external lines in Feynman diagrams) and virtual (internal lines in Feynman diagrams) photons.
The former describe all photons from external fields that scatter with the composite particle; the latter are virtual photons mediating the EM interaction between the fermionic constituents of the composite particle.
Such a separation is sketched in Fig. 2a), where all possible Feynman diagrams between two constituents (solid lines) may be written as a sum of all virtual photon diagrams (dashed and wiggly lines) that scatter an increasing number of real photons (zigzag lines).
We integrate over all positions and perform such a separation to find the new Hamiltonian
Ĥ = Ĥ_EM + ∑_i∫[3]x_iψ̂^†_i ĥ_i ψ̂_i
+ Ĥ_f-f,
where the EM interaction between two fermions now explicitly appears in a modified fermion-fermion interaction Hamiltonian Ĥ_f-f absorbing also Ĥ_Cont, while the EM fields in the original single-fermion Hamiltonian [first term in Eq. (<ref>)] are now only associated with real photons scattering with fermions (light-matter interaction).
To determine Ĥ_f-f, we consider the first type of Feynman diagram in Fig. 2a), involving solely virtual photons.
According to Fig. 2b), the interaction or scattering between two real fermions i and j follows to lowest order from a second-order scattering process, i.e., two vertices in a Feynman diagram connected by a virtual scalar (dashed line) or vector photon (wiggly line).
These second-order interactions are reduced to effective first-order scattering processes with one vertex containing an instantaneous potential V̂^(ij).
Consequently, the resulting Hamiltonian takes the form
Ĥ_f-f = ∑_i,j∫[3]x_i∫[3]x_j^'ψ̂^†_i ψ̂^' †_j V̂^(ij)ψ̂_j^'ψ̂_i + 𝒪(c^-3)
and gives rise to the effective field theory of potential nonrelativistic quantum electrodynamics (pNRQED) <cit.>.
In Eq. (<ref>) we use the abbreviations ψ̂_i = ψ̂_i ( x_i ), ψ̂_j^' = ψ̂_j ( x_j^'), and V̂^(ij) = V̂^(ij) ( x_i, x_j^', p̂_i, p̂_j^', ŝ_i, ŝ_j^' ).
The potential itself is determined by considering all possible virtual photons as indicated in Fig. 2b), leading to potentials of order c^-2.
They are summarized by Fig. <ref> which shows all relevant Feynman diagrams and their corresponding terms contributing to V̂^(ij).
A more detailed procedure is discussed in Appendix <ref>.
The full potential presented in Fig. <ref> is given with respect to single-fermion coordinates up to order c^-2.
As expected, we find in lowest order the Coulomb interaction V^(ij)_C.
The potential is completed by the orbit-orbit V̂^(ij)_LL, spin-orbit V̂^(ij)_LS, spin-spin V̂^(ij)_SS, Darwin V^(ij)_D, and contact interaction.
The last term already had the form of a fermion-fermion interaction.
These potentials are also known as part of the Breit-Pauli Hamiltonian <cit.>, however, augmented by QED corrections in our description.
Since the matching procedure requires a specified gauge to evaluate virtual photons via propagators in Feynman diagrams, Fig. <ref> shows the potentials for the Coulomb gauge, ∇·Â = 0, and Lorenz gauge [In Lorenz gauge the EM four potential may be quantized with respect to a weaker Lorenz-gauge condition in the spirit of Gupta and Bleuler <cit.> or with the help of BRST quantization <cit.>.].
While potentials connected to a specific Feynman diagram may differ, the overall potential V̂^(ij) and by that the total pNRQED Hamiltonian remains gauge invariant.
In addition, matching in order c^-2 can be extended to order α/c^2 by including relevant loop corrections <cit.>.
In pNRQED, virtual photons mediating EM interaction between two fermions are frozen out into potentials and any Feynman diagram depicted in Fig. <ref> is included in Ĥ_f-f.
All other Feynman diagrams up to the order c^-2, e.g., the self-energy of the fermions, responsible for one contribution to the Lamb shift <cit.> and containing a virtual fermion and a virtual vector photon, may still be determined with the new Hamiltonian, since both Hamiltonians yield the same physical results.
The renormalization of the theory is similar to the transition from QED to NRQED <cit.>.
§.§ Matching potentials with external photons
So far we derived the potential between two fermions due to EM interactions mediated by virtual photons that will eventually give rise to the binding potential of composite particles.
In addition, we also aim to describe light-matter interaction between composite particles and external light fields.
The scattering process of a real photon with a composite particle contains fermion-photon interactions that correspond to the EM fields appearing in the single-fermion Hamiltonian ĥ_i from Eq. (<ref>).
Since the constituents form a bound system, we have to include also the case of a real photon that scatters from two fermions exchanging a virtual photon.
This process is not yet accounted for in Ĥ_f-f, since no minimally-coupled momentum operators appear in the potential that originates in the first term of Fig. [fig:Feynman]2a).
We incorporate this case in Ĥ_f-f by including Feynman diagrams according to the remaining two terms in Fig. 2a).
The relevant additional Feynman diagrams with external photons (depicted in red) are summarized in Fig. <ref>.
Together with these additional diagrams, we derive potentials in which the momentum operators are replaced by minimally-coupled momenta.
Here, we use Coulomb gauge (∇·Â = 0) to determine the matching, and consequently our remaining external photons are also fixed to this gauge from now on.
In this case, the vector potential can be decomposed as
 = ∑_r=1^21/(2π)^3/2∫[3]k𝒜_k e_r( k) [ â_r (k) ^k·x + H.c.].
We introduced the vacuum amplitude 𝒜_k =√(ħ/2 ε_0 c k) and unit polarization vectors e_r( k) corresponding to the wave vector k.
The components of the field operator commute with each other as usual.
Hence, we have [ Â_u ( x ), Â_v ( x^') ] =0 for u,v=x,y,z.
The scalar potential ϕ̂ contains both real and all virtual photons arising from contractions, such as the Coulomb potential V̂^(ij)_C.
In Coulomb gauge, there are no real scalar photons in the absence of a free charge density sourcing the external field, and up to order c^-2 we determined all possible virtual scalar-photon contributions by collecting them in Ĥ_f-f.
Consequently, we set ϕ̂ = 0 [Dropping the scalar potential implies that higher order scalar photons become inaccessible.
But when moving to the next order, Ĥ_f-f has to be redetermined anyway for a consistent treatment.].
The self-energy of the fermions, being the main contribution to the Lamb shift <cit.>, can still be determined since only the virtual vector photon contributes in Coulomb gauge to it.
We conclude this section with the full minimally-coupled Hamiltonian
Ĥ = Ĥ_EM + ∑_i∫[3]x_iψ̂^†_i ĥ_i ψ̂_i
+ ∑_i,j∫[3]x_i∫[3]x_j^'ψ̂^†_i ψ̂^' †_j V̂^(ij)ψ̂_j^'ψ̂_i
where the potential V̂^(ij) = V̂^(ij) ( x_i, x_j^', , j', ŝ_i, ŝ_j^') is now a function of minimally-coupled momenta and all EM fields are given in Coulomb gauge.
§ COBOSONIC QFT
We now move from the single-fermion to a composite-particle description.
Moreover, we restrict ourselves to ensembles of one atomic species and we reduce the problem to the simplest case of electron-nucleus pairs, forming a composite boson, a coboson <cit.>.
The distance between electron and nucleus that form a coboson is given by an atomic length scale, whereas the distance to other cobosons and their constituents is much larger.
Thus, we consider a situation which is sufficiently dilute, such that individual cobosons do not overlap.
Motivated by these different length scales, we describe atoms as spatially restricted cobosons resembling hard-sphere based models <cit.>, such that two constituents within a sphere of a certain cutoff radius form a composite particle and are, by definition, free fermions outside of it.
Formally, we achieve such a transition from the single-fermion theory to cobosonic quantum field theory (CbQFT) by means of a projection π̂_Cb of the Schrödinger equation iħ d|Ψ⟩/dt = Ĥ|Ψ⟩.
Here, cobosonic states are part of the general second-quantized state |Ψ⟩, represented in the following by upper-case symbols, in contrast to lower-case symbols that represent first-quantized states.
Thus, the projection operator is chosen such that only spatially-restricted cobosonic states are selected, giving rise to intra-cobosonic and inter-cobosonic length scales.
Conversely, observations like atomic decay, free fermions, multi-electron atoms, molecules, etc. do not lie within the subspace spanned by this projection.
Figure 5a) shows one exemplary configuration that is ruled out by projection and another one that is selected by the projector.
Guided by the intuitive picture in the figure, we define the intra-cobosonic scale a and the length scale b ≫ a associated with the distance between different cobosons.
As a result, the dominant EM interaction between fermions are the attractive binding potentials between atomic constituents.
Contrarily, inter-cobosonic interactions are based both on attractive and repulsive interactions between the fermionic constituents of different cobosons.
In the spirit of EFTs for atoms, we expect cobosonic creation φ̂^† = ψ̂^†_n ψ̂^†_e and annihilation operators φ̂ = ψ̂_n ψ̂_e instead of single-fermion field operators such that only cobosons can be created and annihilated.
To reduce the Hilbert space to the states depicted in Fig. 5a), we define a projector π̂_Cb = ∑_k=0^Nπ̂_k that projects on up to N cobosons, where π̂_k projects onto a subspace of k cobosons.
As such, the subspace projection
π̂_k = 1/k!∫_C_1[6]x_1 ... ∫_C_k[6]x_k( ∏_ℓ=1^k φ̂^†_ℓ) |0⟩⟨0|( ∏_ℓ=1^k φ̂_ℓ)
is defined by the cobosonic operator φ̂^†_ℓ = ψ̂^†_n ( x_ℓ,n) ψ̂^†_e (x_ℓ,e), creating a coboson at position (x_ℓ,n, x_ℓ,e), with an analogous definition for the annihilation operator.
Moreover, Eq. (<ref>) contains the abbreviation of a six-dimensional integration measure [6]x_k = [3]x_k,n [3]x_k,e and by definition the subspace projectors are orthogonal, π̂_k π̂_ℓ = 0 for k≠ℓ.
As explained in Fig. [fig:projection]5a), such a projection implies that not all fermion coordinates are independent.
In fact, we have to equip the integrals with proper integration regions C_k = C_k,n C_k,e for the nucleus and electron of the k-th coboson.
This way, we introduce the internal cobosonic (atomic) length scale a by restricting the electron coordinates x_k,e of coboson k to C_k,e=B_a(x_k,n) denoting a spherical volume with radius a around the nucleus of coboson k positioned at x_k,n.
The inter-cobosonic scale b enters through regions for nuclei coordinates in an iterative manner, see also Fig. 5b).
The first nucleus may be positioned anywhere, in the volume C_1,n = ℝ^3.
However, the second nucleus must not be within a sphere of radius b around the first nucleus and is therefore located in a region C_2,n = ℝ^3 ∖ B_b (x_1,n).
The distance b between the cobosons has to be larger than the atomic length scale a such that the pairing of nucleus k and electron k remains unique.
A generalization to the k-th coboson [For simplicity we defined the distance between cobosons with respect to the nucleus coordinate, even though center-of-mass distances are physically more precise.
However, both are equivalent by replacing b by b^' = b+a.] yields
C_k = C_k,n C_k,e = ℝ^3 ∖⋃_ℓ = 1^k-1 B_b (x_ℓ,n) B_a ( x_k,n).
Because of these limits of integration, the projector is normalized with 1/k!.
In the absence of such limits, i.e., for spatially independent fermion pairs, the normalization factor would be 1/(k!)^2.
The normalization ensures idempotence, π̂_k^2 = π̂_k and thereby π̂_Cb^2 = π̂_Cb, see also Appendix <ref>.
This projector resembles boson-like properties: if the integration regions in Eq. (<ref>) are dropped and φ̂ is replaced by a bosonic field operator obeying canonical commutation relations, idempotence is obtained with the normalization 1/k!.
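The geometric content of the regions C_k can be illustrated by a small numerical check of a given configuration of paired positions; this is a schematic sketch only, since the projector itself acts on field operators rather than on classical coordinates.

    import numpy as np

    def in_coboson_subspace(nuclei, electrons, a, b):
        # nuclei[k], electrons[k]: positions of the k-th pair (arbitrary units)
        nuclei = np.asarray(nuclei, dtype=float)
        electrons = np.asarray(electrons, dtype=float)
        # each electron within the atomic scale a of its own nucleus
        if np.any(np.linalg.norm(electrons - nuclei, axis=1) > a):
            return False
        # all nuclei mutually separated by more than the inter-cobosonic scale b
        for k in range(len(nuclei)):
            for m in range(k):
                if np.linalg.norm(nuclei[k] - nuclei[m]) < b:
                    return False
        return True

    # two well-separated hydrogen-like pairs with b >> a
    print(in_coboson_subspace([[0, 0, 0], [50, 0, 0]],
                              [[0.5, 0, 0], [50, 0.4, 0]], a=1.0, b=10.0))   # True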
Using this time-independent projection operator, we define the projected states via |Ψ⟩_Cb = π̂_Cb|Ψ⟩ and their equation of motion
iħ d/dt |Ψ⟩_Cb = π̂_CbĤπ̂_Cb|Ψ⟩_Cb + π̂_CbĤ ( 1 - π̂_Cb) |Ψ⟩.
In the remainder of this article we focus on the contribution to the motion induced by the cobosonic Hamiltonian Ĥ_Cb=π̂_CbĤπ̂_Cb, as well as its eigenvalues and properties.
However, the coupling to states ( 1 - π̂_Cb) |Ψ⟩ that lie outside of our projected Hilbert space leads to additional energy shifts and other effects of the environment in the spirit of open quantum systems <cit.>.
While we present only the results for Ĥ_Cb, a more detailed derivation is carried out in Appendix <ref>.
The projected single-fermion Hamiltonian
∫[3]x_iψ̂^†_i ĥ_i ψ̂_i π̂_Cb = ∫_C_1[6]x_1φ̂^†_1 ĥ_i φ̂_1 π̂_Cb
has the form of a composite-particle theory as the fermion operators are replaced by coboson operators, while the region accessible to the electron is restricted to the atomic scale around the nucleus.
Similarly, the projection of the fermion-fermion Hamiltonian for the potentials V̂^(ij) resolves to
Ĥ_f-fπ̂_Cb = ∫_C_1[6]x_1φ̂^†_1 (V̂^(ne) + V̂^(en)) φ̂_1 π̂_Cb
+∫_C_1[6]x_1∫_C_2[6]x_2φ̂_1^†φ̂_2^†∑_i,jV̂^(ij)φ̂_2 φ̂_1 π̂_Cb.
As indicated in Fig. 5a), the interaction between the fermions divides into the dominant binding potential given by V̂_b =V̂^(ne)+V̂^(en) in the single-coboson part (first line), while the inter-cobosonic scattering potential ∑_i,jV̂^(ij) in the two-coboson part includes attractive and repulsive interactions.
Because of the separation of scales, these interactions are weaker than the binding potential.
Moreover, Ĥ_EMπ̂_Cb is unaffected as Ĥ_EM contains no fermion field operators.
Hence, the projected Hamiltonian reads
Ĥ_Cb = Ĥ_EM + ∫_C_1[6]x_1φ̂^†_1 ĥ_Cbφ̂_1
+ ∫_C_1[6]x_1∫_C_2[6]x_2φ̂_1^†φ̂_2^†V̂_Scattφ̂_2 φ̂_1
with the internal cobosonic energy ĥ_Cb = ĥ_n + ĥ_e + V̂^(ne) + V̂^(en).
This contribution gives rise to the Breit-Pauli Hamiltonian for an electron and a nucleus <cit.> consisting of the sum of individual fermionic energies together with their total binding potential V̂^(ne) + V̂^(en) arising from EM interaction between the fermionic constituents.
The scattering potential V̂_Scatt = V̂^(nn) + V̂^(ne) + V̂^(en) + V̂^(ee) is based on both attractive (V̂^(ne) + V̂^(en)) and repulsive (V̂^(nn) + V̂^(ee)) EM interactions among all fermions of different cobosons.
In particular, these attractive terms are weaker than the single-particle binding potentials, since the coordinates x_1,i and x_2,j of different cobosons are separated by b ≫ a.
Compared to bosonic field theories for atomic ensembles <cit.>, our effective field theory is based on creation φ̂^† and annihilation φ̂ operators whose components α, β=1,2 obey a cobosonic commutation <cit.> relation
[ φ̂^'_α^'β^', φ̂^†_αβ] = δ_α^'αδ_β^'βδ ( x^'_n - x_n) δ ( x^'_e - x_e )
- δ_α^'αδ ( x^'_n - x_n) ψ̂_e^†ψ̂^'_e
- δ_β^'βδ ( x^'_e - x_e) ψ̂_n^†ψ̂^'_n ,
whereas the first line of Eq. (<ref>) describes the fundamental bosonic commutator.
Moreover, the projection operator includes integration regions that naturally ensure a boson-like normalization of 1/k! even for this type of commutation relation.
In fact, the cobosonic part of the commutator in the second and third line is responsible for the scattering potential in Eq. (<ref>).
When projecting to a single-coboson subspace <cit.>, these two aspects, the cobosonic part of the commutator and integration regions ensuring unique electron-nucleus pairs, become irrelevant.
In this case, we recover conventional single-particle quantum mechanics.
So far, we constructed a cobosonic theory from second-order scattering of fermions, where the internal structure is governed by the combined single-fermion energy and where their binding potential results from attractive interactions.
Furthermore, the inter-cobosonic dynamics arises from the inter-fermionic interactions between the constituents of different cobosons.
We emphasize that this projection does not exclusively work for the single-fermion Hamiltonian from Eq. (<ref>) and the potential from Fig. <ref> but rather for arbitrary single-fermion Hamiltonians and potentials.
However, effects of the environment given by states that do not lie within the coboson subspace have been neglected in our treatment.
§ SECOND-QUANTIZED TRANSFORMATIONS
Although the Hamiltonian from Eq. (<ref>) has already the desired form of an EFT for cobosons, it still involves the constituents' coordinates.
In the spirit of composite particles, we now move to center-of-mass (c.m.) and relative coordinates, where the latter take the internal cobosonic, atomic, structure into account.
Also, light-matter interaction enters in lowest order via the vector potential  contained in the canonical momenta through minimal coupling.
For a description of experiments, it is more convenient to express the coupling by EM fields Ê and B̂.
In this section we derive a method to incorporate the multipolar form of cobosonic QFT and move to relativistically-corrected c.m. and relative coordinates of the second-quantized Hamiltonian ĥ_Cb from Eq. (<ref>).
In first-quantized regimes (characterized by lower-case symbols), realizing these operations involves unitary transformations <cit.> û that transform a state |ψ⟩ = û|ψ̃⟩, where û = exp( iλ̂ /ħ) may be expressed through a (time-independent) single-particle generator λ̂.
Consequently, the effective Schrödinger equation iħ d|ψ⟩/dt = ĥ_Cb|ψ⟩ for the single-coboson Hamiltonian ĥ_Cb yields a transformed operator
ĥ̃̂_Cb = û^†ĥ_Cbû
as long as û is time independent.
Guided by this concept, we define for the second-quantized Hamiltonian Ĥ_Cb from Eq. (<ref>) an analogous transformation |Ψ⟩_Cb = Û|Ψ̃⟩_Cb of a second-quantized state |Ψ⟩_Cb (characterized by upper-case symbols) with a unitary Û = exp( iΛ̂ /ħ) generated by Λ̂.
We choose the second-quantized generator Λ̂ in such a way that the transformation reduces to the single-particle transformation acting on the first-quantized Hamiltonian ĥ_Cb.
With the choice
Λ̂ = ∫_C_1[6]x_1φ̂^†_1 λ̂φ̂_1,
where λ̂ is the generator of the corresponding first-quantized transformation, we achieve the desired behavior of the transformation together with the relation Û^†φ̂_k Û = û_k φ̂_k shown in Appendix <ref>.
Here, the first-quantized unitary transformation û_k = û (x_k,n, x_k,e) of coboson k acts only on coordinates and operators associated with coboson coordinates (x_k,n, x_k,e).
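For purely bosonic field operators, the relation Û^†φ̂_k Û = û_k φ̂_k can be sketched directly with the adjoint expansion and the convention û = exp( iλ̂/ħ) used above; the additional cobosonic terms of the commutation relation require the more careful treatment of Appendix <ref>. Explicitly,
Û^†φ̂_k Û = φ̂_k + (i/ħ) [ φ̂_k, Λ̂] + (1/2!) (i/ħ)^2 [ [ φ̂_k, Λ̂], Λ̂] + ... = ( 1 + iλ̂_k/ħ + (1/2!) ( iλ̂_k/ħ)^2 + ... ) φ̂_k = û_k φ̂_k,
where we used [ φ̂_k, Λ̂] = λ̂_k φ̂_k, which holds for the canonical (fundamental bosonic) part of the commutator.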
As a result, we obtain the transformed second-quantized Hamiltonian
Û^†Ĥ_CbÛ = Û^†Ĥ_EMÛ + ∫_C_1[6]x_1φ̂^†_1 û_1^†ĥ_Cbû_1 φ̂_1
+ ∫_C_1[6]x_1∫_C_2[6]x_2φ̂^†_1 φ̂^†_2 û_1^†û_2^†V̂_Scattû_2 û_1 φ̂_2 φ̂_1
given that [ Λ̂, ĥ_Cb] = [ Λ̂, V̂_Scatt] =0.
If λ̂ contains EM fields, we also need to transform Ĥ_EM, otherwise it remains invariant.
With this procedure, we may apply any first-quantized unitary specified by λ̂ to the first-quantized Hamiltonian ĥ_Cb and the potential V̂_Scatt within the second-quantized framework as long as the requirements above are met.
In the following, we specify the transformations to obtain a multipolar CbQFT in c.m. and relative coordinates including relativistic corrections.
§.§ Nonrelativistic c.m. and relative coordinates
First, we move from the set of electron {x_k,e, p̂_k,e,ŝ_k,e} and nucleus {x_k,n, p̂_k,n,ŝ_k,n} coordinates to NR c.m. {R_k, P̂_k, Ŝ_k } and relative {r_k, p̂_k, ŝ_k } coordinates describing coboson k.
The connection between the different coordinates is listed in Table <ref> and chosen such that c.m. (position R_k, momentum P̂_k) and relative (position r_k, momentum p̂_k) coordinates share the nonvanishing canonical commutators [R_ℓ^(u), P̂_k^(v)]= [r_ℓ^(u) , p̂_k^(v)]= iħδ_u vδ_ℓ k, where u,v=x,y,z.
These coordinates are defined through the total mass M=m_e + m_n as well as the total spin Ŝ_k and the relative spin ŝ_k.
Changing the coordinates leaves the integration measure invariant and we replace [6]x_k→[6]ℛ_k = [3]R_k [3]r_k in Ĥ_Cb from Eq. (<ref>) together with single-particle coordinates in ĥ_Cb and V̂_Scatt according to the transformation specified in Table <ref>.
Note that the field operators φ̂_k = φ̂ ( R_k- m_rr_k /m_n, R_k + m_rr_k/m_e ) have thus become a function of c.m. and relative coordinates as well.
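For orientation, the standard NR definitions consistent with the argument of φ̂_k just given read R_k = (m_n x_k,n + m_e x_k,e)/M and r_k = x_k,e - x_k,n for the positions, P̂_k = p̂_k,n + p̂_k,e and p̂_k = (m_n p̂_k,e - m_e p̂_k,n)/M for the momenta, and Ŝ_k = ŝ_k,n + ŝ_k,e for the total spin; the relative spin ŝ_k follows the convention of Table <ref>.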
§.§ Relativistic corrections to c.m. and rel. coordinates
Our description contains relativistic corrections up to the order c^-2.
However, the transformation to NR c.m. and relative coordinates from Table <ref> is inconsistent at this order and has to be modified <cit.>.
Consequently, we take relativistic corrections to NR c.m. and relative coordinates into account, in order to remain consistent in c^-2.
These corrections can be implemented via a first-quantized unitary transformation <cit.>, which circumvents issues regarding the integration measure and the transformation of certain terms in the scattering potential that arise with an actual coordinate transformation.
Such a first-quantized unitary <cit.> is generated by
λ̂_k^(rel) = r_k ·P_k /4M^2c^2[ p_k ·P_k + Δ m ( p_k^2/m_r + q_eq_n/4 πε_0 r_k) ] + H.c.
- 1/4m_r Mc^2( p_k ×P_k + H.c.) ·ŝ_k.
The Coulomb-potential term proportional to the mass difference Δ m = m_n - m_e arises due to the internal EM interactions.
In addition, single-particle masses are contained in the total mass M=m_e+m_n and the reduced mass m_r= m_e m_n /M, and the Hermitian conjugate H.c. ensures hermiticity of the generator.
We also account for c^-2 corrections to light-matter interactions by using the gauge-invariant, minimally-coupled momenta <cit.> P_k = π̂_k,e + π̂_k,n and p_k = (m_n π̂_k,e - m_e π̂_k,n ) /M instead of the purely canonical momentum operators.
To apply this first-quantized transformation to a second-quantized theory, we need to confirm that [ Λ̂_rel, ĥ_Cb ] = [ Λ̂_rel, V̂_Scatt ]=0, where Λ̂_rel and λ̂_k^(rel) are connected through Eq. (<ref>).
Since Λ̂_rel contains an integration over coordinates that are independent of ĥ_Cb and V̂_Scatt, the cobosonic operators trivially commute.
While the vector potential commutes in Coulomb gauge with itself, its commutator with the electric fields in ĥ_Cb (Darwin term and spin-orbit term) is nonvanishing.
The resulting additional terms from this commutator, however, are yet ruled out by the limits of integration in Ĥ_Cb.
This fact is a consequence of the projection, ensuring that a coboson can only contain one nucleus and electron, and may be made explicit by introducing φ̂ = π̂_Cbφ̂, similar to Appendix <ref>.
Thus, the transformation reduces to Eq. (<ref>).
§.§ Power-Zienau-Woolley transformation
We now introduce the interaction of light with matter through electric and magnetic fields Ê and B̂ rather than through the vector potential, we move to multipolar cobosonic quantum field theory.
This transition follows from applying the unitary Power-Zienau-Woolley (PZW) transformation <cit.> defined by its first-quantized generator λ̂_k^(PZW) = ∫[3]y𝒫_k ( y) ·Â(y) through the polarization field <cit.>
𝒫_k ( y) = ∑_i q_i (x_k,i - R_k ) ∫_0^1 dρ δ[ y - R_k - ρ ( x_k,i - R_k )]
of the k-th coboson.
Here, we choose the coordinate R_k, which could be arbitrary in general, to coincide with the c.m. position.
Based on the discussion above, it can be shown that this generator meets the requirements to reduce the second-quantized transformation to the first-quantized one as well.
§.§ Transformation sequence
Finally, the total transformation sequence addresses first the generation of relativistic corrections to NR c.m. and relative coordinates, followed by the PZW transformation.
This particular order of transformations is crucial to remain gauge invariant.
The transformations of Eq. (<ref>) leads to the replacements
ĥ_Cb →û_1^(PZW) †û_1^(rel) †ĥ_Cbû_1^(rel)û_1^(PZW)
V̂_Scatt →û_12^(PZW) †û_12^(rel) †V̂_Scattû_12^(rel)û_12^(PZW)
Ĥ_EM →Û_PZW^†Û_rel^†Ĥ_EMÛ_relÛ_PZW
where û_12 = û_1 û_2 combines the transformation of both coordinate sets.
For a more detailed discussion on the above transformations, we refer to Appendix <ref> and present the major results in the next section.
§ MULTIPOLAR COBOSONIC QFT
Together with the transformations from the previous section, our multipolar CbQFT is formulated with respect to c.m. and relative coordinates, while every scale is corrected in c^-2.
In particular, the theory includes corrections to internal dynamics encoded in ĥ_Cb, inter-cobosonic dynamics in V̂_Scatt, and light-matter interactions contained in both terms.
Consequently, the multipolar cobosonic Hamiltonian
Ĥ_MpCb = Ĥ_EM + ∫_C[6]ℛφ̂^†ĥ_MpCbφ̂
+∫_C_1[6]ℛ_1∫_C_2[6]ℛ_2φ̂_1^†φ̂_2^†𝒱̂_Scattφ̂_2 φ̂_1
accounts via ĥ_MpCb for the single-coboson energy, while the scattering potential is accounted for by 𝒱̂_Scatt.
In the former, we omit the subscript "1" for simplicity; we do so in the following whenever no interactions between more than one coboson are involved.
§.§ Single-coboson Hamiltonian
The explicit form of the single-coboson Hamiltonian
ĥ_MpCb = Mc^2 + ĥ^(0)_rel + ĥ^(1)_rel + P̂_Q^2/2M( 1 - ĥ^(0)_rel/Mc^2)- P̂_Q^4/8M^3c^2
+ĥ_r×P̂_Q + ĥ_I^(0) + ĥ_I^(1)
consists of the internal structure ĥ^(0)_rel + ĥ^(1)_rel, c.m. kinetic terms proportional to the minimally-coupled momentum P̂_Q^2, as well as the light-matter interaction ĥ_I^(0) + ĥ_I^(1).
The kinetic term couples to the internal structure as a consequence of the mass defect <cit.>.
The rest energy Mc^2 is modified by the relative motion
ĥ^(0)_rel= p̂^2/2m_r + q_e q_n /4 πε_0 r
solely given by a hydrogen-type Hamiltonian.
The next-order correction is given by
ĥ^(1)_rel = -p̂^4/8m_r^3c^2 (m_e^3 + m_n^3)/M^3 - κ/r^3( ℓ̂^2/2 + ( r·p̂)^2 )
+ κα_Dπħ^2 δ(r ) + κα_ℓ S/r^3ℓ̂·Ŝ + κα_ℓ s/r^3ℓ̂·ŝ
+κα_ssπδ(r) ŝ_n ·ŝ_e + κ c_F^(n) c_F^(e)/r^3Ŝ_ne.
Similar to Sec. <ref>, we defined κ=2 κ_ne = -q_e q_n/(4 πε_0 m_r M c^2) and the abbreviation α_v summarizes all Wilson coefficients in Table <ref>.
These corrections <cit.> give rise to the fine- and hyperfine structure of hydrogen-like atoms and correspond in the same order to: the kinetic relative correction, orbit-orbit coupling, Darwin term, spin-orbit coupling of angular momentum ℓ̂ = r×p̂ to total and relative spin, spin-spin contact coupling, as well as the magnetic dipole-dipole interaction Ŝ_ne = - ŝ_n ·ŝ_e + 3 ( ŝ_n ·r ) ( ŝ_e ·r ) / r^2.
Note that ŝ_n and ŝ_e refer to the spin of nucleus and electron and may be expressed through their corresponding total and relative spin.
Here, we reproduce results known from the literature <cit.> that are augmented by particle-species dependent Wilson coefficients c_F^(i) and c_S^(i).
In addition to the internal structure, our results include c.m. degrees of freedom.
The c.m. kinetic energy appears as dominating, lowest-order contribution but is modified by a correction proportional to the relative Hamiltonian ĥ_rel^(0) as a consequence of the mass defect <cit.>, where relative and c.m. degrees of freedom couple to each other.
Relativistic corrections to the relative Hamiltonian ĥ_rel^(1) are not included in the mass-defect term for consistency, since these couplings are of order c^-4.
The mass defect implies an internal-state-dependent c.m. motion that can be identified with a state-dependent mass <cit.>, which we show in detail later.
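To indicate the size of this effect, the kinetic term above can be read as P̂_Q^2/(2M_eff) with a state-dependent mass M_eff ≈ M + ⟨ĥ^(0)_rel⟩/c^2. A rough numerical estimate for a generic optical excitation in a heavy atom (all numbers below are merely illustrative) shows how small the resulting fractional mass shift is:

    h = 6.62607015e-34        # Planck constant [J s]
    c = 299792458.0           # speed of light [m/s]
    u = 1.66053907e-27        # atomic mass unit [kg]

    dE = h * 4.3e14           # internal excitation energy of an optical line (~430 THz)
    M = 100 * u               # total mass of a ~100 u atom
    print(f"relative mass shift dE/(M c^2) ~ {dE / (M * c**2):.1e}")   # ~2e-11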
The fact that our description allows also for charged cobosons (ions) manifests in a monopole coupling where the total charge Q=q_n + q_e and the vector potential evaluated at the c.m. position Â(R) appear in minimally-coupled momenta P̂_Q = P̂ - Q Â ( R ).
The kinetic c.m. degrees of freedom are completed by the c.m. relativistic kinetic correction proportional to P̂_Q^4, analogously <cit.> to the case of neutral atoms with Q=0.
For ions (Z>1), the Hamiltonian
ĥ_r×P̂_Q = - q_e q_n/(8 πε_0 m_n M c^2) (Z-1)/r^3( r×P̂_Q ) ·ŝ_n
describes a coupling of the coboson's total angular momentum r×P̂_Q to the nucleus spin ŝ_n, encoding the fact that the nucleus gives rise to charged cobosons.
This additional coupling between c.m. and relative degrees of freedom arises analogously to the mass defect due to our extension to spin-carrying and possibly charged cobosons, and has not yet been derived to the best of our knowledge.
Further, external EM fields interact with the coboson in leading order via
ĥ_I^(0) = ĥ_ME + ĥ_MM + ĥ_R + ĥ_Dia + ĥ_Self.
The explicit form of the components are summarized in Table <ref>, where all EM fields depend on the integration variable y if not stated otherwise.
The polarization 𝒫 and the magnetization M̂ [see Eq. (<ref>)] depend on y, as well as on coboson coordinates x_e and x_n that have to be expressed by c.m. and relative coordinates.
The term ĥ_ME couples the transverse electric field Ê^⊥ to the transverse part of the polarization field from Eq. (<ref>), giving rise to generalized electric moments (ME).
For instance, performing a multipole expansion of the polarization field around R, which implies small relative coordinates, we find to lowest order the dipole moment d = m_r ( q_e /m_e - q_n / m_n ) r.
Similarly, the magnetic field couples to magnetic moments (MM) in ĥ_MM and has two contributions:
The particles' spins enter via the magnetic moments μ̂_i = c_F^(i) q_i ŝ_i / m_i, which couple to the magnetic field B̂ ( x_i ), where the single-fermion coordinates have to be expressed by c.m. and relative coordinates, x_i = R - (q_i/|q_i|) (m_r/m_i) r.
Moreover, constituents of composite particles carry orbital angular momentum ℓ̂ that induces an orbital magnetic moment contained in the quantum magnetization
M̂ ( y ) = ∑_i m_r/m_i q_i ℓ̂/m_i ∫_0^1 dρ ρ δ [ y - R - ρ( x_i - R) ]
similar to the relation between polarization fields and electric moments.
A multipole expansion of the magnetization and the magnetic field leads in lowest order to the magnetic moment of a coboson μ̂_ℓ + μ̂_n + μ̂_e, with μ̂_ℓ = m_r ( q_e / m_e^2 + q_n /m_n^2) ℓ̂/2, i.e., the sum of orbital, electron, and nucleus spin magnetic moments, giving rise to the Zeeman shift <cit.>.
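For orientation, the magnitude of the corresponding shifts can be estimated from the Bohr and nuclear magnetons (an illustrative example of ours with textbook constants, not a result of the derivation):

# Illustrative Zeeman scales in SI units (assumption: hydrogen-like constituents, B = 1 T).
e, hbar, h = 1.602176634e-19, 1.054571817e-34, 6.62607015e-34
m_e, m_n = 9.1093837015e-31, 1.67262192369e-27   # electron and proton masses
mu_B = e * hbar / (2 * m_e)    # Bohr magneton ~ 9.3e-24 J/T, electron spin/orbital scale
mu_N = e * hbar / (2 * m_n)    # nuclear magneton ~ 5.1e-27 J/T, nucleus spin scale
B = 1.0                        # tesla, example field strength
print(mu_B * B / h)            # ~ 1.4e10 Hz: electronic Zeeman scale
print(mu_N * B / h)            # ~ 7.6e6 Hz: nuclear scale, suppressed by m_e/m_n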
In addition, the c.m. motion of the coboson also yields the c.m. Röntgen Hamiltonian ĥ_R <cit.>.
Further, we find that the diamagnetic interaction ĥ_Dia with c.m. and relative contribution corresponds to an induced magnetic moment due to the external fields, being part of the quadratic Zeeman effect <cit.>.
Moreover, the cobosonic self-energy ĥ_Self is generally divergent, but can be renormalized <cit.> and contributes to the Lamb shift <cit.>.
The last contributions to the single-coboson Hamiltonian are relativistic corrections to the light-matter interaction and in general depend on the electric or the magnetic field.
In many applications light-matter interactions are dominated by electric fields.
Here, we present these dominant electric terms and suppress the influence of magnetic fields at order c^-2.
The full Hamiltonian including magnetic-field contributions is given in Appendix <ref>, while the electric-field contribution resolves to
ĥ_I^(1) = ∑_i c_S^(i) q_i ŝ_i ·(m_i/MP̂_Q-q_i/q_ip̂) ×Ê + H.c./8 m_i^2 c^2
+1/2∑_i [ Ê^⊥ ( x_i ) ·d̂_i^(1)+ d̂_i^(1)·Ê^⊥ ( x_i) ].
The second line originates from relativistic corrections to c.m. and relative coordinates and describes the coupling of the transverse electric field Ê^⊥ to dipole-moment corrections
d̂_i^(1)/q_i = r/4M^2c^2( p̂·P̂_Q + Δ m/m_rp̂^2 + Δ m q_e q_n/4 πε_0 r) + H.c.
+ r·P̂_Q/4M^2c^2[ ( 1- 2 Δ m /m_iq_i/q_i) p̂ - m_r/m_iq_i/q_iP̂_Q ] + H.c.
+ 1/2m_r M c^2( m_r/m_iq_i/q_iP̂_Q + p̂) ×ŝ
in accordance <cit.> with the limiting case of Q=0 and arbitrary loosely bound cobosons.
§.§ Scattering potential
The scattering potential has the general form
𝒱̂_Scatt = ∑_i,j[𝒱_C^(ij)+ 𝒱̂_LL^(ij) + 𝒱̂_LS^(ij) + 𝒱̂_SS^(ij)] + 𝒱̂_Self,
where the components are summarized in Table <ref>.
For simplicity, we include only the most dominant c.m. contribution to the scattering for all c^-2-terms by omitting terms directly proportional to r_i, while keeping the general distance between two different constituents χ_ij = x_1,i - x_2,j and neglecting all terms proportional to the relative momentum p̂_i.
Besides, we also exclude the influence of light-matter interaction in the scattering processes presented in the Table.
The full scattering potential including light-matter interactions and relative contributions is given in Appendix <ref>.
As expected, the leading-order contribution of scattering is the Coulomb (C) interaction between fermion i of coboson 1 and fermion j of coboson 2.
However, we find a second Coulomb-like correction including the unit vector e_w in w-direction, where w is either the distance χ_ij between constituents of different cobosons or relative distances between each coboson's constituents r_1 and r_2.
Corrections to the Coulomb potential are interactions between all possible magnetic moments among different cobosons.
Consequently, in 𝒱̂_LL^(ij) we find orbital magnetic moments coupling to each other through orbital angular momenta L̂^(ij)_k= χ_ij×P̂_k.
Moreover, we find also in the scattering potential an additional term containing momentum operators that corresponds to the so-called retardation correction <cit.>.
The spin-orbit (LS) interaction includes an interaction of an effective orbital magnetic moment proportional to L̂^(ij)_1 - L̂_2^(ij) with an effective spin magnetic moment q_j μ̂_1,i/q_i + μ̂_2,j, complemented by pure spin-orbit coupling.
These potentials are completed by the known <cit.> magnetic dipole-dipole potential 𝒱̂_SS^(ij).
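To illustrate the hierarchy of these contributions (a rough order-of-magnitude comparison of ours, not taken from the derivation), one may compare the Coulomb term with the magnetic dipole-dipole term for two elementary charges and two Bohr magnetons at an assumed separation of a few nanometers:

# Order-of-magnitude comparison in SI units (assumed example separation chi = 5 nm).
import math
eps0, mu0 = 8.8541878128e-12, 4e-7 * math.pi
e, mu_B = 1.602176634e-19, 9.2740100783e-24
chi = 5e-9                                      # m, assumed distance between constituents
V_C = e**2 / (4 * math.pi * eps0 * chi)         # Coulomb scale
V_SS = mu0 * mu_B**2 / (4 * math.pi * chi**3)   # magnetic dipole-dipole scale
print(V_C, V_SS, V_SS / V_C)                    # dipole-dipole is ~9 orders of magnitude below Coulomb here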
Moreover, one effect of the PZW transformation becomes apparent only in second quantization, which is the scattering self-energy [𝒱̂_Self, Eq. (<ref>)], arising analogously (and additionally) to the cobosonic self-energy [ĥ_Self in Eq. (<ref>)] in the single-coboson sector.
Because of the multi-coboson nature, this contribution may be identified as one part of the collective (or cooperative) Lamb shift <cit.>.
With these results, we have introduced a multipolar CbQFT.
The single-coboson Hamiltonian includes consistently c^-2 corrections for all scales, involving the corrected internal structure, the mass defect in the c.m. motion, but also light-matter interactions beyond a multipole expansion, and relativistic corrections.
We derived scattering potentials based on lowest-order two-particle scattering, yielding the Coulomb potential with several corrections in form of interactions between magnetic moments.
§ DISCUSSION
In the following section, we identify different physical systems and issues that can be described or addressed by our multipolar CbQFT, such as models for atomic systems, bound-state energies, the scattering between atoms, as well as ultracold quantum gases.
The examples demonstrate that our theory contributes to the fundamental understanding of various subfields, complementing and connecting existing approaches.
While illustrative, these examples are not exhaustive but only represent a small portion of what might be achieved with such effective models.
§.§ Models for atomic systems
First-quantized Lagrangian or Hamiltonian treatments of atomic dynamics restricted to single-particle systems have been studied extensively <cit.>.
There are relativistic treatments, extending the NR Schrödinger equation for a hydrogen atom to relativistic equations <cit.> or formulating equations of motion for c.m. coordinates of a system of relativistic Dirac particles, which allows for a description of relativistic bound-state systems <cit.>.
However, the dynamics of atomic ensembles are often studied in NR regimes since atomic quantum gases are mostly restricted to low-energy scales.
One accurate model of bound-state particles that includes relativistic corrections follows from two coupled Dirac equations in the respective NR limit <cit.>, and is known as the Breit-Pauli Hamiltonian.
This Hamiltonian does not account for field-theoretical QED corrections and is derived with respect to single-fermion coordinates.
As a result, additional relativistic corrections enter the Hamiltonian <cit.> once NR c.m. and relative coordinates are introduced to separate the inner-atomic structure from the c.m. motion.
In the simplified case where one ignores the spin of fermions, such a description of atoms gives rise to a coupling of c.m. to relative degrees of freedom as a consequence of the mass defect <cit.>.
In contrast to calculating relativistic corrections to the NR bound-state atoms and their dynamics, there are other models that focus on corrections arising from a finite extension of atoms via hard-sphere models, i.e., atoms confined in spherical impenetrable boxes, which have been studied for hydrogen <cit.>, hydrogen-like <cit.>, and many-electron atoms <cit.>.
In this case, deviations from the standard NR treatment of hydrogen-like atoms follow that differ from the relativistic ones.
Based on such first-quantized models, one usually postulates a corresponding effective field theory <cit.>, and rigorous field-theoretical top-down derivations for atoms are not addressed.
Conversely, our work embeds established first-quantized concepts mentioned above into a field-theoretical formulation.
As a result, the Breit-Pauli Hamiltonian is further modified by QED corrections.
Via our projection formalism, we naturally introduce length scales defining the extension of an atom, similar to hard-sphere impenetrable boxes.
Introducing c.m. and relative coordinates leads also to the mass defect, where we extend known derivations <cit.> to arbitrary numbers of spin-carrying and charged cobosons in a field-theoretical framework, yet restricted to special relativity.
Models for atomic systems include not only isolated atoms but also the description of their interaction with external fields, light and gravity.
In quantum and atom optics for instance, atoms are manipulated via the interaction with light, leading to magneto-optical traps <cit.> for neutral atoms as well as Paul <cit.> and Penning <cit.> traps for ions.
Instead of trapping atoms, light pulses <cit.> or Bloch oscillations <cit.> are used to manipulate the atoms' momenta and might also induce transitions between internal states <cit.>.
In the context of cold atoms, magnetic fields give control over scattering dynamics between atoms via Feshbach resonances <cit.> but are also crucial to implement, e.g., E1M1 <cit.> or magnetically-induced single-photon <cit.> pulses.
All of these light-matter interactions and processes are included in our multipolar CbQFT.
In many applications <cit.> it is sufficient to take only the lowest-order multipole expansion of the EM field into account.
Further contributions, such as higher-order multipole moments driving transitions <cit.> or respective energy shifts <cit.>, are then considered individually to the desired order <cit.>.
In our present work, we use the generalized polarization-field approach <cit.> for the PZW transformation <cit.>.
Thus, all possible contributions from light-matter interactions are covered similar to single-atomic treatments <cit.> but incorporated into a field-theoretical framework.
Moreover, relativistic corrections to light-matter interaction on the level of elementary fermions are known most accurately in the field of NRQED <cit.>.
Once we move to the multipolar form defined with respect to NR c.m. and relative coordinates, additional relativistic corrections arise also for EM fields.
These have been studied for electric-field contributions <cit.> but magnetic-field contributions have not been discussed explicitly yet.
Our treatment includes, in a field-theoretical framework, all relativistic corrections to EM fields appearing in light-matter interactions, in particular also magnetic-field contributions.
Gravity constitutes another external field: the simple first-quantized model of atoms falling in gravitational potentials <cit.> has been extended to a post-Newtonian description of an atomic Hamiltonian that considers relativistic corrections associated with the coupling of gravity to single, spinless, and neutral atoms <cit.>.
Some of these works derived the mass defect under gravity <cit.>, confirming original ideas for quantum interferometry <cit.>.
The mass defect shows a connection to proper time associated with the c.m. of the atom and can be encoded by atomic <cit.> and quantum <cit.> clocks in gravitational backgrounds.
Some theories even predict effective gravitational decoherence mechanisms <cit.>.
In addition, quantum-clock interferometry allows for tests of general relativity <cit.>.
However, also in the gravity-free case, a coupling of internal degrees of freedom to the atomic c.m. via the mass defect leads to possible measurements of a quantum twin paradox <cit.> or allows for dark-matter detection <cit.>.
Our work does not include gravity so far, but the mass defect is nevertheless derived consistently in a field-theoretical framework and augmented by other relativistic corrections to the internal Hamiltonian.
With QFT on curved spacetime and a generalization of the respective coordinate transformations, an extension to gravitational fields seems in principle possible.
§.§ Bound-state energies
Most models of atoms focus on their internal structure, allowing for calculations of bound-state energies.
Consequently, in most accurate treatments radiative QED corrections <cit.> and effects from the composite-particle nature of the nucleus <cit.> enter bound-state energies <cit.> of composite particles.
These can be calculated with relativistic approaches <cit.>, for hydrogen-like atoms with bound-state QED <cit.> or with the Bethe-Salpeter equation <cit.>.
Bound states and their properties for atoms have also been derived from field theory via flow equations and the functional renormalization group <cit.>.
In contrast to these fully relativistic treatments, relativistically-corrected bound-state energies for hydrogen-like atoms can be obtained from EFTs in the NR regime, e.g., NRQED <cit.> or pNRQED <cit.>.
These approaches exploit simplifications arising in inherent NR regimes while radiative corrections and effects from the nucleus are still taken into account via Wilson coefficients <cit.>.
However, bound-state calculations within these EFTs are usually restricted to a single atom that is assumed to be trapped.
Consequently, the c.m. motion as well as its relativistic corrections and corrections to the relative coordinates become irrelevant <cit.>.
Naturally, in the single-atom limit no atom-atom interactions occur and usually only basic light-matter interaction is considered, such as electric-dipole coupling for neutral atoms.
These approaches for calculating bound-state energies of trapped atoms may include contributions to the NR Lamb shift <cit.> but can also be used for fundamental tests <cit.>, the determination of the proton charge radius <cit.>, and dark-matter searches <cit.>.
Since they typically focus only on the atomic spectrum, there are calculations, e.g., for hydrogen <cit.>, going beyond the precision of the internal energies derived in our work.
In particular, a pNRQED treatment <cit.> may include next-order loop corrections that are omitted in the present article for simplicity but can be incorporated straightforwardly.
Moreover, these pNRQED derivations focus on positronium (equal-mass case of constituents) or neutral atoms, without taking c.m. degrees of freedom into account, while light-matter interactions enter solely through electric-dipole couplings.
As a result, light-induced internal energy shifts or shifts arising from the interaction with other atoms are not covered in these treatments.
Finally, in pNRQED Wilson coefficients have to be determined for each particular system, which has been carried out explicitly for positronium <cit.>.
Compared to that, we derived a Hamiltonian for hydrogen-like, possibly charged atoms, where constituents differ in their masses, considering the c.m. coordinates including the relevant relativistic corrections, and leading to the mass defect as well as to relativistically-corrected light-matter interactions.
In the following applications, we will thus determine bound-state energies, the QED-corrected hyperfine structure of hydrogen-like atoms, including parts of the NR Lamb shift <cit.>, where we keep arbitrary Wilson coefficients such that our results remain valid for generic hydrogen-like atoms.
Because we also extend single-atomic considerations to an arbitrary number of atoms, scattering dynamics arise in addition to the usual pNRQED approaches.
§.§ Scattering between atoms
Since we aim to describe ultracold quantum gases, these atom-atom interactions become highly relevant.
There are several theoretical models <cit.> describing NR atomic scattering.
One possible description is based on interaction potentials between two scattering partners, where higher-order scattering events <cit.> are neglected.
The NR scattering of neutral atoms is then dominated by van der Waals interactions <cit.>.
In this context, theoretical models have been developed to determine van der Waals scattering potentials <cit.> and cover also density-functional-theory approaches <cit.>.
Approximations to the van der Waals interaction are often performed according to an expansion of the form -C_6/ Δ R^6- C_7/ Δ R^7 + ... with real constants C_n <cit.>, where Δ R is the distance between the c.m. of two atoms.
Hence, the long-range behavior may be observed in lowest order.
For example, the C_6 coefficient for hydrogen <cit.> can be obtained by second-order perturbation theory <cit.> of the dipole-dipole potential <cit.> in first-quantized regimes <cit.>.
Retardation effects may also be taken into account and correspond to the C_7/R^7 term <cit.>.
Another approximation of the van der Waals potential is the Lennard-Jones potential <cit.>.
In contrast to such first-quantized approaches, there are also EFTs <cit.> dealing with van der Waals interactions directly.
For the case of charged cobosons, ion-ion scattering <cit.> is characterized by the Coulomb repulsion to lowest order.
We augment these existing approaches for neutral and charged cobosons by deriving relativistic corrections to the lowest-order Coulomb scattering potentials and cover the interactions between magnetic moments associated with orbit-orbit, spin-orbit, and spin-spin (magnetic dipole-dipole potential) interactions.
Spin-orbit and spin-spin magnetic-moment interactions are known, e.g., from magnetic scattering in the context of neutrons <cit.>.
Since neutrons are free of charge, no Coulomb interaction is present and such interactions dominate the process.
Magnetic moments coupling in atomic scattering processes are partly discussed in the context of spinor BECs <cit.>.
However, to the best of our knowledge, the influence of relativistic corrections to the Coulomb potential in atomic scattering derived from first principles has not been discussed explicitly yet.
In addition to the Coulomb potential and its corrections, we find a scattering self-energy that is part of the collective Lamb shift and was postulated before by embedding light-matter interaction into a field-theoretical framework <cit.>.
§.§ Ultracold quantum gases
The combination of bound-state energies with scattering dynamics together allows for a consistent treatment of ultracold quantum gases including their internal structure.
So far, the description of ultracold quantum gases often relies on bottom-up approaches for EFTs based on extensions of NR first-quantized theories.
Consequently, there are successful field-theoretical descriptions of scalar BECs <cit.> as well as spinor BECs including internal states <cit.>.
Although their realization is challenging <cit.>, due to the Coulomb repulsion among charged bosons, also ionized BECs <cit.> have been studied.
All these descriptions usually do not address relativistic corrections, the inner-atomic structure is often of minor importance, and light-matter interaction is only partly accounted for.
In our work, we derived an EFT for ultracold quantum gases from first principles, including possibly charged atoms and relativistic corrections to the c.m. and relative degrees of freedom of both the single-coboson energy and the scattering dynamics, as well as to light-matter interactions.
Our basic assumption for cobosonic QFT, to introduce different length scales, enters our description of an ultracold quantum gas in terms of hard-sphere atoms, and naturally the scattering dynamics in our model remain perturbations to the single-coboson contribution.
Moreover, this scattering dynamics is usually treated with approximations, leading to effective scattering lengths from s-wave scattering <cit.> as well as introducing effective pseudo potentials for scattering from hard-sphere interactions <cit.> instead of the full scattering potential.
Within these approximations, we may derive the Gross-Pitaevskii equation (GPE) <cit.> that describes a Schrödinger-type equation complemented by a nonlinear collision term corresponding to the lowest-order effects of the condensate mean-field contribution <cit.>.
Here, the field operator can be approximated by a wave function of the condensate by symmetry-breaking <cit.> or number-conserving approaches <cit.>.
Higher-order corrections such as fluctuations <cit.> arising from the coupling of the condensate to a noncondensed thermal cloud may also be taken into account.
There are extensions to coupled GPEs, both for different modes <cit.> and quantized light fields <cit.>, which are usually postulated extensions of first-quantized considerations.
Some studies <cit.> generalize the GPE to a relativistic equation by postulating an invariant Klein-Gordon-type equation <cit.> to account for relativistic effects.
The modified GPE then follows in these approaches in the NR limit by separating a rest-energy phase from the condensate function <cit.>, resulting in a relativistic correction proportional to a second derivative in time of the condensate function.
However, such a treatment does not include relativistic effects and the mass defect as derived in our work.
Consequently, as another application, we will derive a GPE including the mass defect, relativistic corrections, also for light-matter interactions, and a coupling of different internal states of the coboson.
This modified GPE differs significantly from previous Klein-Gordon-type derivations and might lead to fundamentally different predictions.
The deviation originates from the fact that atoms, as composite particles, are not fundamental bosons but rather cobosonic in their nature and, thus, they do not obey a Klein-Gordon equation describing spin-0 particles.
§ APPLICATIONS
Following the discussion above, we aim to derive the dynamics of interacting quantum gases and their internal structure encoded in the coboson field operator φ̂.
This includes modified bound-state energies associated with the fine and hyperfine structure of the coboson and a coupling via the mass defect to its c.m. motion.
We determine the scattering potentials between two internal states of the coboson with respect to internal degrees of freedom giving access to generalized van der Waals potentials.
The mean-field contribution of the field operator gives rise to a GPE modified by relativistic corrections and the mass defect.
§.§ Modes of relative motion
In the spirit of composite particles, we introduced c.m. and relative coordinates for the multipolar CbQFT.
As a next step, we explicitly describe the equation of motion of cobosons and separate between the c.m. and modes for the relative motion between constituents.
§.§.§ Cobosonic equation of motion
First, we derive the equation of motion for the cobosonic field operator φ̂ based on the Heisenberg equation iħ dφ̂ / d t = [ φ̂, Ĥ_MpCb ], neglecting the influence of the environment that lies outside of our cobosonic subspace.
We recall that the equation of motion follows from the cobosonic commutation relation, generating additional terms compared to a purely bosonic field operator.
However, these additional terms correspond to processes that lie outside of the projected Hilbert space, such as the annihilation of an electron and a nucleus of different cobosons.
To derive the effective equation of motion, we rely on the projected equation of motion iħ π̂_Cb dφ̂ / d t π̂_Cb = π̂_Cb [ φ̂, Ĥ_MpCb ] π̂_Cb that resolves to
iħ ∂φ̂/∂ t = Θ ( a - r) ( ĥ_MpCb + ∫_C_2[6]ℛ_2 φ̂_2^† 2 𝒱̂_Scattφ̂_2 ) φ̂.
The Heaviside step function Θ ( x ) accounts for creation and annihilation of only such cobosons whose constituents posses relative distances r≤ a.
The equation of motion yields a Schrödinger-like equation for the single-coboson energy governed by ĥ_MpCb, while the second term accounts for the influence of all other cobosons in the system via scattering.
§.§.§ Expansion into unperturbed hydrogen-like modes
While the dynamics implied by Eq. (<ref>) is involved, the limits of integration restrict the relative distances between constituents of different cobosons to x_1,i- x_2,j > b ≫ a, and allow for a perturbative treatment of the scattering potentials.
The remaining dominant term denotes the single-coboson contribution associated with ĥ_MpCb, where the leading-order contribution ĥ_rel^(0) = p̂^2/(2 m_r) +q_e q_n / (4πε_0 r) is followed by other perturbative terms contained in ĥ_rel^(1).
Consequently, we use an expansion into eigenmodes of ĥ_rel^(0), into hydrogen-like modes of the relative motion, and find
φ̂ = ∑_βψ_β (r) Ψ̂_β ( R,t ),
where ψ_β is the (first-quantized) wave function of the relative motion associated with internal state β.
The field operator Ψ̂_β ( R,t) annihilates a coboson in state β at c.m. position R.
The commutation relation of the remaining field operator Ψ̂_β ( R,t) is completely defined through the original cobosonic commutator from Eq. (<ref>).
Furthermore, the cobosonic equation of motion requires the wave functions to vanish at r=a, similar to the case of atoms in an impenetrable spherical box <cit.>.
This condition is numerically solvable, with an energy depending on the particular choice of a and converging to the known energies of hydrogen-type atoms for a →∞.
In the following, we choose the standard hydrogen-like wave functions for the relative motion, because for suitable values of a the probability density is exponentially suppressed in regions r > a.
However, a numerical treatment is possible as well <cit.>.
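A minimal numerical sketch of this boundary-value problem (our own illustration, assuming a simple finite-difference discretization of the ℓ=0 radial equation in atomic units) reads:

import numpy as np
from scipy.linalg import eigh_tridiagonal

# Ground-state energy of hydrogen with the box boundary condition u(a) = 0 from the text,
# obtained by diagonalizing -u''(r)/(2 m_r) - Z u(r)/r = E u(r) with u(0) = u(a) = 0.
# Atomic units (hbar = m_e = e = 1/(4 pi eps0) = 1), so energies are in Hartree.
def ground_state_energy(a, n_grid=4000, Z=1.0, m_r=1.0):
    r = np.linspace(0.0, a, n_grid + 2)[1:-1]        # interior grid points
    h = r[1] - r[0]
    diag = 1.0 / (m_r * h**2) - Z / r                # kinetic + Coulomb, diagonal
    off = -0.5 / (m_r * h**2) * np.ones(n_grid - 1)  # kinetic, off-diagonal
    w = eigh_tridiagonal(diag, off, eigvals_only=True, select='i', select_range=(0, 0))
    return w[0]

for a in (2.0, 5.0, 10.0, 20.0, 40.0):               # box radius in Bohr radii
    print(a, ground_state_energy(a))
# The values approach the unconfined result of -0.5 Hartree for growing a,
# illustrating the convergence to the known hydrogen energies mentioned above.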
Hence, the hydrogen-like wave function ψ_β ( r ) is associated with a generalized quantum number β encompassing all quantum numbers, the principal quantum number n with energy eigenvalues E_n^(0) = - m_r (Zα c)^2/(2n^2), the quantum number j associated with total angular momentum ĵ = ℓ̂ + Ŝ, its projection to the z-axis (m_j), the orbital angular momentum (ℓ) and the quantum number of the total spin S associated with the spin Ŝ = ŝ_e + ŝ_n.
We present the angular momentum basis in Appendix <ref>, where ψ_β also takes the spin degrees of freedom of the coboson into account; it contains in total four components arising from the two spin-1/2 fermions.
By inserting the field-operator expansion from (<ref>) into the equation of motion, multiplying with the conjugate wave function ψ_α^*, and using the orthonormality of the relative modes when integrating over the relative coordinate, the equation of motion for the c.m. field operator of mode α resolves to
iħ ∂Ψ̂_α/∂ t = ĥ_MpCb, αΨ̂_α + ∑_β≠αT̂_αβΨ̂_β
+ ∑_βνμ ∫_Δ R > b^'[3]R^'Ψ̂^' †_ν𝒱̂_αν ;βμΨ̂^'_μΨ̂_β.
It only contains an integration with respect to c.m. coordinates.
The inter-cobosonic scale was introduced through the nucleus coordinate.
Thus, for consistency with the previous definition, we replace b with b' = b+a for the distance ΔR = R - R^' between the c.m. positions of two cobosons.
Next, we present the internal Hamiltonian ĥ_MpCb,α, the transition elements T̂_αβ between internal states, and scattering matrix elements 𝒱̂_αν ;βμ in the following two subsections.
§.§ Modified bound-state energies
The equation of motion for the field operator Ψ̂_α associated with the annihilation of a coboson in mode α at c.m. position R includes the bound-state energy
ĥ_MpCb, α = M_α c^2 + E_α^(1) + P̂_Q^2/2M_α- P̂_Q^4/8M^3c^2 + ⟨ĥ_I⟩_α,
of a coboson in internal state α.
Figure <ref> shows the energy-momentum dispersion for the Hamiltonian ĥ_MpCb, α for different modes α.
By introducing an internal-state dependent rest mass M_α = M [ 1 + E^(0)_α/(Mc^2)], the spectrum of the atom enters the rest energy M_α c^2 and, through the relativistic mass defect, the minimally-coupled kinetic energy P̂_Q^2/(2M_α), where the latter implies a lowest-order Taylor expansion.
As a result, the energy-momentum dispersion depends on the internal energy of the coboson.
Fine and hyperfine splittings <cit.> enter through relativistic internal corrections E^(1)_α= ⟨ĥ_int^(1)⟩_α, where ⟨ô⟩_α = ∫[3]rψ_α^* ôψ_α denotes the expectation value with respect to internal state α of an arbitrary operator ô, see Appendix <ref> for details.
Due to these corrections, there is a splitting of the unperturbed hydrogen-like energy levels also presented in Fig. <ref>.
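For orientation (well-known hydrogen values quoted as an illustration, not computed from the full theory), the hierarchy of these splittings is set by powers of the fine-structure constant:

# Characteristic scales for hydrogen (textbook values used as an assumption).
alpha = 1 / 137.035999       # fine-structure constant
E_1 = 13.605693              # eV, gross-structure (Rydberg) scale
print(E_1)                   # eV, unperturbed binding energy
print(alpha**2 * E_1 * 1e3)  # ~0.7 meV, fine-structure scale alpha^2 * E_1
print("hyperfine (1s): ~1.42 GHz, the 21 cm line")  # known measured value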
In the presence of EM fields, energy shifts occur in the form of ⟨ĥ_I⟩_α = ⟨ĥ_I^(0) + ĥ_I^(1)⟩_α, accounting for first-order perturbative shifts such as, e.g., the linear Zeeman shift <cit.>.
Contrarily, second-order effects like the quadratic Stark effect <cit.> are not explicitly accounted for in the diagonal matrix elements.
Furthermore, nonperturbative EM fields, giving rise to shifts such as the AC-Stark <cit.> and other light shifts <cit.>, are also not solely represented by these diagonal elements.
To cover such additional effects, the second term in Eq. (<ref>), including all off-diagonal transition elements
T̂_α, β = ∫[3]rψ^*_α( ĥ_rel^(1) + ĥ_r×P̂_Q + ĥ_I) ψ_β
from internal state β to α, cannot necessarily be treated perturbatively.
In summary, using the expansion into relative hydrogen-like modes, we find both the bound-state energy of a coboson, including energy shifts due to internal relativistic corrections, as well as transitions between different internal-coboson states driven by both internal interactions and light fields.
§.§ Modified scattering potentials
The multi-coboson aspect of our theory enters via the scattering matrix elements
𝒱̂_αν; βμ = ∫[3]r∫[3]r^'ψ_α^* ψ_ν^' * 2 𝒱̂_Scattψ_μ^'ψ_β,
describing the scattering from internal modes βμ into αν, where 𝒱̂_αν; βμ is a function of both R and R^'.
Similar to the splitting of the single-coboson energy into the bound-state energies and internal transitions, we divide the scattering matrix elements into one part without transitions, α = β, and a part including actual transitions, α≠β, that corresponds to internal state changing collisions.
As a result, the equation of motion
iħ ∂Ψ̂_α/∂ t = ( ĥ_MpCb, α + ∑_νμ∫_Δ R > b^'[3]R^'Ψ̂^' †_ν𝒱̂_αν; αμΨ̂^'_μ)Ψ̂_α
+ ∑_β≠α( T̂_αβ + ∑_νμ∫_Δ R > b^'[3]R^'Ψ̂^' †_ν𝒱̂_αν; βμΨ̂^'_μ) Ψ̂_β
for internal state α includes the single-coboson energy ĥ_MpCb,α and is augmented by the scattering accounting for the mean field created by all other cobosons interacting with the coboson of mode α.
Transitions from mode β to α are either induced via internal or light-matter interactions but also by scattering with other cobosons that change its internal state from μ to ν.
By integrating over relative degrees of freedom to obtain the scattering matrix elements from Eq. (<ref>), we gain via Eq. (<ref>) access to exact scattering potentials predicted by our model.
We obtain analytic expressions for the potentials approximated order by order, at least for the regime where b' ≫ a, via the Taylor expansion of 𝒱̂_Scatt around x_i - x^'_j≅Δ R in Eq. (<ref>).
The dominant contribution in this regime follows from the Coulomb potential.
We find the generalized electric dipole-dipole potential
𝒱̂_Scatt≈1/8πε_0{ Q^2/Δ R +Q e_Δ R·( d - d^')/Δ R^2 + Q∑_u (𝒬_uu + 𝒬^'_uu) - 3 ∑_u,ve_Δ R^(u)( 𝒬_uv + 𝒬^'_uv) e_Δ R^(v)/Δ R^3+ d·d^' - 3 ( e_Δ R·d) ( e_Δ R·d^')/Δ R^3}
that accounts in general for cobosonic ions.
For Q≠ 0, the leading order corresponds to a repulsive Coulomb potential proportional to Q^2 as indicated in Fig. 7(b).
It is followed by corrections in which the difference of generalized dipole moments d= m_r ( q_e /m_e - q_n / m_n ) r enters as well as the quadrupole-moment tensor 𝒬_uv = -r_u r_v m_r^2 (q_e/m_e^2 + q_n/m_n^2)/2 with components u,v=x,y,z.
The last term, the only one remaining in the limit of neutral cobosons with Q=0, corresponds to the standard electric dipole-dipole potential whose dipole moment simplifies to d = q_e r for q_e=-q_n.
Such a potential is the starting point to describe inter-atomic interactions in dipolar quantum gases <cit.>.
We plot it in Fig. 7(a) for parallel r and r^', as well as for different values of the angle between Δ R and r.
For instance, using second-order perturbation theory in first quantization, the dipole-dipole potential gives rise to the energy shift associated with the van der Waals potential <cit.> of the form -C_6/Δ R^6 with a real constant C_6.
As a consequence, using the full Coulomb potential together with all relativistic corrections and explicitly integrating over relative degrees of freedom gives access to generalized van der Waals scattering potentials between cobosonic modes.
This approach has not been carried out before and can serve as the cobosonic-model prediction for van der Waals potentials that may be compared with experimental results.
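As an independent cross-check of the expected magnitude (not a result of our model), the textbook London approximation C_6 ≈ (3/2) α_A α_B I_A I_B/(I_A + I_B), with static polarizabilities α and ionization energies I, already gives the right order for hydrogen:

# London-formula estimate of C_6 for H-H in atomic units
# (assumptions: alpha_H = 4.5 a0^3 and I_H = 0.5 Hartree).
alpha_H, I_H = 4.5, 0.5
C6_london = 1.5 * alpha_H * alpha_H * I_H * I_H / (I_H + I_H)
print(C6_london)   # ~7.6 a.u., to be compared with the accurate value of about 6.5 a.u.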
In addition, as we derived scattering dynamics with respect to internal states of the coboson, we are able to model cobosonic entanglement through scattering.
Since scattering can also be used for squeezing of internal states of atoms <cit.> and its description requires a field-theoretical formulation, our multipolar QFT can be embedded into the field of quantum metrology.
§.§ Modified Gross-Pitaevskii equation
The derivation of approximate solutions to the equation of motion from Eq. (<ref>) for the c.m. field operator often follows a mean-field approach <cit.>.
Such a treatment leads to the celebrated Gross-Pitaevskii equation (GPE) <cit.>, which we derive below in favor of approaches following, e.g., density-functional theory <cit.>.
The scattering potentials in Eq. (<ref>) have the form of a hard-sphere interaction <cit.> characterized by a nonvanishing potential only at distances Δ R > b^', which is ensured by integration regions in our model.
In this case and for low temperatures as well as weakly interacting, dilute gases, such hard-sphere potentials can be replaced by a pseudopotential <cit.> of the form η_αν; αμδ( R - R^' ), where no integration region appears [The replacement may imply a restriction to quantum numbers β with vanishing angular momentum, ℓ=0.].
Instead, we find an effective, renormalized <cit.> scattering length [As a consequence the effective scattering length is not only the width of a narrow dipole-dipole potential but contains numerical contributions as well.] η_αν, αμ, mediating scattering between cobosons of mode α with that of mode μ transitioning into mode ν.
An analogous replacement, with an effective scattering length η_αν;βμ, can be made in the collision-induced coupling between modes α and β in the second line of Eq. (<ref>).
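For orientation, in the standard NR limit such a contact coupling is related to the s-wave scattering length a_s by g = 4πħ^2 a_s/M; a quick numerical example (our own, with assumed ^87Rb-like parameters) reads:

# Contact coupling constant g = 4 pi hbar^2 a_s / M in SI units
# (assumptions: a_s ~ 100 Bohr radii, M ~ 87 atomic mass units).
import math
hbar = 1.054571817e-34
a_s = 100 * 5.29177210903e-11            # m
M = 87 * 1.66053906660e-27               # kg
g = 4 * math.pi * hbar**2 * a_s / M
print(g)                                 # ~5e-51 J m^3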
Within this approximation the equation of motion for the field operator takes the form
iħ ∂Ψ̂_α/∂ t = ( ĥ_MpCb,α + ∑_νμη_αν; αμΨ̂^†_νΨ̂_μ) Ψ̂_α
+ ∑_β≠α( T̂_α, β + ∑_νμη_αν;βμΨ̂_ν^†Ψ̂_μ) Ψ̂_β.
Such an approximation is often applied in the context of ultracold quantum gases <cit.> and corrections may also be taken into account <cit.>.
However, already in a mean-field theory we observe a difference from the conventional treatment: we have access to relative and c.m. relativistic corrections, as well as to the full coupling to external EM fields.
To this end, we approximate Eq. (<ref>) by moving to a first-quantized equation of motion Ψ̂_α→Ψ_α where Ψ_α represents the mean field of the condensate <cit.>.
There are several ways to introduce the mean field as lowest-order contribution of the equation of motion <cit.>.
Extending the lowest-order contribution to beyond mean-field theory <cit.> may be achieved by including also an operator-valued noncondensate part of the field operator in terms of a thermal cloud that couples to the mean field <cit.>.
Within the mean-field approach, we find new effective scattering lengths η̃_αβ and η̃_αβ; α^'β^' that may differ from the previous values.
These approximations result in the modified GPE
iħ ∂Ψ_α/∂ t = ( M_α c^2 + E_α^(1) + ⟨ĥ_I⟩_α + P̂_Q^2/2M_α- P̂_Q^4/8M^3c^2
+ ∑_νμη̃_αν; αμΨ_ν^* Ψ_μ) Ψ_α
+ ∑_β≠α(T̂_α, β + ∑_νμη̃_αν;βμΨ^*_νΨ_μ) Ψ_β
that contains, compared to the NR bosonic GPE <cit.>, first-order relativistic corrections.
It is valid for spinor Bose-Einstein condensates <cit.> and has a state-dependent mass M_α differing from previous derivations <cit.> significantly.
Even more, our derivation applies also to cobosonic ions (coupling via P̂_Q), as long as the gas can still be treated as weakly-interacting.
In addition, we find the energy shift E_α^(1) from the internal cobosonic structure and we account for light-matter interaction in ⟨ĥ_I⟩_α.
Moreover, we observe that the GPE for mode α may couple to other modes through Ψ_ν^* Ψ_μ terms <cit.>, where usually only the contributions proportional to |Ψ_μ|^2 are taken into account.
The coupling to other modes enters via nonvanishing internal transition elements T̂_α, α^', as well as via a scattering element including transitions from mode μ to ν.
To our knowledge, a modified GPE for c.m. degrees of freedom, taking into account internal degrees of freedom and incorporating the mass defect, has not yet been derived in a top-down approach.
Moreover, previous derivations of relativistically-corrected GPEs <cit.> differ significantly from our results.
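To make the numerical structure of such an equation concrete, a minimal sketch (our own, restricted to a single internal state in one dimension and omitting the relativistic, mass-defect, and light-matter terms of the full equation) is the standard split-step Fourier integration of a dimensionless GPE:

import numpy as np

# Split-step Fourier integration of i dpsi/dt = [ -1/2 d^2/dx^2 + V(x) + g |psi|^2 ] psi
# in dimensionless units (hbar = M = 1); a harmonic trap is assumed as an example.
L, N, dt, steps, g = 40.0, 1024, 1e-3, 2000, 50.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                                    # example trapping potential

psi = np.exp(-x**2 / 2).astype(complex)           # initial Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))  # normalize to unit norm

half_kin = np.exp(-0.5j * dt * 0.5 * k**2)        # half step of the kinetic phase
for _ in range(steps):
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))       # kinetic half step
    psi *= np.exp(-1j * dt * (V + g * np.abs(psi)**2))  # potential + mean-field step
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))       # kinetic half step

print(np.sum(np.abs(psi)**2) * (L / N))           # norm stays ~1 (unitary scheme)

Extending this sketch to several coupled modes Ψ_α with state-dependent masses M_α and the transition elements of the modified GPE amounts to propagating one such array per internal state and adding the coupling terms in the potential step.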
§.§ Reduction to mass defect
With the modified GPE we reproduce two special cases:
(i) We find the typical atomic physics NR GPE by neglecting all relativistic contributions.
(ii) By restricting the treatment for Q=0 to two modes, ground (g) and excited (e) state, and by neglecting the P̂^4 term, internal relativistic corrections in E_α^(1), as well as the influence of any scattering, we reproduce a Hamiltonian <cit.> that is relevant in an atomic- <cit.> and quantum-clock context <cit.>.
For the sake of presentation, we neglect light-matter interactions for the moment.
In this limit, the equation of motion for both ground and excited states reduces to iħ d|j⟩ / d t = ĥ_j |j⟩ with a first-quantized Hamiltonian ĥ_j = M_jc^2 + P̂^2/(2M_j), where the wave function in position representation reads Ψ_j = ⟨R|j⟩, with j=g,e.
Since the differential equations for the internal states are now decoupled, we find a Schrödinger equation for the general state |ψ⟩ = ψ_g |g⟩ + ψ_e |e⟩ with |ψ_g|^2 + |ψ_e|^2 = 1.
After Taylor expanding the state-dependent mass M_j, the system Hamiltonian, i.e., the sum of the two Hamiltonians ĥ_j, takes the form
ĥ = Mc^2 1 + ĥ_rel^(0) + P̂^2/2M( 1 - ĥ_rel^(0)/Mc^2),
which is the limit of addressing only two internal states of ĥ_rel^(0) = E_g |g⟩⟨g| + E_e |e⟩⟨e| in Eq. (<ref>), as expected.
This Hamiltonian can be recast into the form
ĥ = M̅ c^2 1 + ĥ_cl + P̂^2/2M̅(1-ĥ_cl/M̅c^2)
by introducing a new mean mass M̅ = M + (E_e^(0)+E_g^(0)) /(2c^2) together with replacing the unperturbed Hamiltonian of the relative degrees of freedom ĥ_rel^(0) by the clock Hamiltonian
ĥ_cl = ( E_e^(0)-E_g^(0) )/2 [ |e⟩⟨e| - |g⟩⟨g| ].
This clock Hamiltonian describes the internal (relative) dynamics and constitutes the basis of atomic and quantum clocks <cit.>, in our case without gravity.
In particular, the preceding two equations are related by the fact that in the order c^-2 the equivalence
ĥ (M, ĥ_rel^(0))=ĥ ( M̅, ĥ_cl )
holds.
Moreover, the energy difference E_e^(0)-E_g^(0) can be associated with the transition frequency of a clock as well as the mass difference between both internal states.
This equivalence can be extended in the order c^-2 to the case where the total momentum P̂ is replaced by its minimally-coupled version P̂_Q.
Similarly, the equivalence holds in the order c^-2 also for the corrected relative degrees of freedom ĥ^(1)_rel, the angular momentum to spin coupling term ĥ_r×P̂_Q, and the corrected EM interaction ĥ_I^(1) in Eq. (<ref>).
However, replacing M by M̅ would lead to additional relativistic modifications for some parts of the NR EM interaction ĥ_I^(0), especially when considering magnetic fields, while its leading-order NR contributions remain of the same form.
The Hamiltonian accounts for a modified c.m. motion and dispersion relation for atoms in different internal states through the mass defect.
To underline the implications of the mass defect, we observe that wave packets associated with the ground and excited state of a free coboson disperse and propagate differently over time, as indicated in Fig. <ref>.
Due to the state-dependent mass, the amplitude and the uncertainty differ.
Since both wave packets share the same initial momentum, they evolve with different velocities.
In the context of atomic clocks, energy shifts and phase contributions arising from the modified kinetic terms cause special-relativistic time dilation or second-order Doppler effects <cit.> and have to be taken into account for the analysis of high-accuracy frequency standards.
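As a simple quantitative illustration (our own example with assumed numbers, not taken from the text), the mass difference associated with an optical clock transition and its imprint on free wave-packet motion can be estimated as follows:

# Illustrative estimate (assumptions: a 429 THz clock transition and a total mass of ~87 u).
import math
h, hbar, c, u = 6.62607015e-34, 1.054571817e-34, 2.99792458e8, 1.66053906660e-27
M_g = 87 * u                       # ground-state mass (assumed value)
dM = h * 429e12 / c**2             # mass defect of the excited state
M_e = M_g + dM
print(dM / M_g)                    # ~2e-11 fractional mass difference

P, sigma0, t = 1e-27, 1e-6, 1.0    # same initial momentum, width, and time (SI units)
for M in (M_g, M_e):
    v = P / M                                                          # state-dependent velocity
    sigma_t = sigma0 * math.sqrt(1 + (hbar * t / (M * sigma0**2))**2)  # free Gaussian spreading
    print(v, sigma_t)              # slightly different propagation and dispersion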
§ CONCLUSIONS
In this article, we derived an EFT for (possibly) charged, spin-carrying, and interacting composite bosons based on their constituents.
Our top-down approach includes relativistic contributions such as radiative corrections, mass defects, atom-atom scattering, and light-matter interactions.
We therefore unified low-energy aspects of particle physics, quantum optics, and atomic physics into one effective multipolar cobosonic QFT with a broad range of applications, e.g., to scattering experiments <cit.>, ultracold quantum gases <cit.>, and high-precision measurements based on quantum clocks <cit.>.
In particular, our effective QFT is valid for an arbitrary number of (possibly) charged cobosons and therefore goes beyond previous single-atom descriptions.
By considering their c.m. motion, we found inter-cobosonic interactions via relativistically-corrected scattering potentials and a coupling between c.m. and relative degrees of freedom that arises from the relativistic mass defect.
Moreover, our projection technique closes the gap between EFTs for ultracold quantum gases and elementary quantum field theories such as QED.
This procedure is universal in the sense that it can be applied to arbitrary single-fermion Hamiltonians ĥ_i and potentials V̂^(ij).
We also introduced field-theoretical unitary transformations and reduced them to well-known single-particle unitaries.
In our case these transformations led to relativistic corrections to c.m. and relative coordinates together with the multipolar version of our effective QFT.
The extension to spin-carrying charged cobosons unveiled another coupling between the c.m. and relative motion, a spin-orbit coupling ĥ_r×P̂_Q of ions, in addition to the coupling induced by the mass defect.
To the best of our knowledge, relativistically-corrected scattering potentials to this extent are given for the first time, including a scattering self-energy term.
In addition, we presented a new modified, coupled GPE including light-matter interactions, other relativistic corrections, and the mass defect.
Our projection formalism introduces length scales associated with atoms composed of electron-nucleus pairs.
Introducing further length scales may result in EFTs for other types of composite particles, e.g., multi-electron atoms and molecules.
An EFT for molecules would directly connect to and extend existing approaches <cit.> and lead to a field-theoretical description of interacting ultracold molecules.
Such an effective theory revolves around established concepts such as the Born-Oppenheimer approximation <cit.> and other bound-state calculations for many-body bound systems such as density functional theory <cit.>.
Furthermore, our model describing single-species ensembles may be extended to mixtures, e.g., of cobosons and free fermions, different species, isotopes, as well as ions within neutral quantum gases, the latter giving rise to respective spinor quantum gases.
Moreover, effects of the environment that lie outside of our cobosonic subspace could in principle be incorporated by techniques known from open quantum systems <cit.>, and will lead to additional energy shifts as well as to decoherence mechanisms.
Including external nonelectromagnetic fields such as gravity or violation fields in a similar fashion would set a quantum-field-theoretical foundation for established single-particle descriptions, being of essence for quantum-clock interferometry but also for atomic clocks exposed to micromotion <cit.>, tests of special and general relativity <cit.>, as well as dark-matter detection <cit.>.
By numerically integrating over relative degrees of freedom to determine c.m. scattering potentials between two cobosons, we expect to find corrected van der Waals scattering potentials <cit.> predicted by our model.
Moreover, our results facilitate a field-theoretical description of both the c.m. motion as well as the internal states of atomic quantum gases.
Since quantum-metrological methods enhancing the precision through techniques like squeezing rely on such a treatment and might be even generated through scattering, our results lay the basis for the description and modeling of super-sensitive measurements below the shot-noise limit and can be applied to spin-squeezed experiments <cit.> or momentum-squeezed atom interferometry <cit.>.
In summary, our multipolar cobosonic QFT can be applied to a large class of atomic ensembles, e.g., Bose-Einstein condensates <cit.>, ionized quantum gases <cit.>, and thermal clouds <cit.>, that may be exposed to arbitrary light-matter interactions including trapping potentials <cit.> and light pulses <cit.>.
It also includes relativistic corrections to the relative Hamiltonian, the mass defect, light-matter interaction in its most general form, and scattering potentials.
Therefore, our results are a basis for studies of composite particles, both for fundamental physics but also for applied quantum systems in a vast area of different subfields.
We are grateful to W. P. Schleich for his stimulating input and continuing support.
Moreover, we are thankful to A. Friedrich for helpful support and instructive feedback throughout the whole project, as well as proofreading our manuscript.
We also thank O. Buchmüller, C. Kiefer, R. Lopp, C. Niehof, G. Paz, R. Walser, as well as the QUANTUS and INTENTAS teams for fruitful and interesting discussions.
The projects “Building composite particles from quantum field theory on dilaton gravity” (BOnD) and “Metrology with interfering Unruh-DeWitt detectors” (MIUnD) are funded by the Carl Zeiss Foundation (Carl-Zeiss-Stiftung).
The QUANTUS and INTENTAS projects are supported by the German Space Agency at the German Aerospace Center (Deutsche Raumfahrtagentur im Deutschen Zentrum für Luft- und Raumfahrt, DLR) with funds provided by the Federal Ministry for Economic Affairs and Climate Action (Bundesministerium für Wirtschaft und Klimaschutz, BMWK) due to an enactment of the German Bundestag under Grant Nos. 50WM2250D-2250E (QUANTUS+), as well as 50WM2177-2178 (INTENTAS).
The Qu-Gov project in cooperation with the “Bundesdruckerei GmbH” is supported by the Federal Ministry of Finance (Bundesministerium der Finanzen, BMF).
E.G. thanks the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) for a Mercator Fellowship within CRC 1227 (DQ-mat).
F.D.P. is grateful that this work has been supported by funding programs for junior researchers of the Graduate & Professional Training Center at Ulm University within the project “Long-Baseline-Atominterferometer Gravity and Standard-Model Extensions tests” (LArGE).
§ POTENTIAL MATCHING
Starting from the Hamiltonian given in Eq. (<ref>), we move to an interaction picture with respect to the Hamiltonian
Ĥ_0 = ∑_i ∫[3]x_iψ̂_i^†( m_i c^2 + p̂_i^2/2m_i) ψ̂_i + Ĥ_EM
accounting for the free EM and fermion fields.
The resulting interaction Hamiltonian density
ℋ̂_I = ∑_i ψ̂_i^†( q_i ϕ̂ + ( -{p̂, q_i Â} + q_i^2 Â^2 )/2m_i - c_F^(i) q_i ŝ_i ·B̂/m_i
- c_D^(i) q_i ħ^2 ∇·Ê/8m_i^2c^2 - c_S^(i) q_i ŝ_i ·( p̂×Ê - Ê×p̂ )/4m_i^2c^2) ψ̂_i
shows only those terms relevant for the matching in the order c^-2.
In particular, the kinetic correction and the last two terms of ĥ_i in Eq. (<ref>) give only rise to Feynman diagrams of orders higher than c^-2.
Moreover, the contact interaction has already the form of first-order scattering so that it does not give rise to additional second-order Feynman diagrams.
In this interaction picture, all field operators ψ̂_i, ϕ̂, and  depend on x^ϱ = (ct, x) with ϱ=0,1,2,3; i.e., they become explicitly time dependent.
The EM fields are connected to the four potential Â^ϱ = ( ϕ̂/c, Â) via Ê = - ∂Â / ∂ t - ∇ϕ̂ and B̂ = ∇×Â.
With the interaction Hamiltonian density from Eq. (<ref>) and the time ordering operator 𝒯̂, we define the scattering matrix
𝒮̂ = 𝒯̂exp( -i/ħ c∫[4]xℋ̂_I (x) ).
The actual matching corresponds to determining scattering-matrix elements up to a given order for the desired interactions <cit.>.
In our case, we match up to the order c^-2 for virtual photons between two fermions.
Hence, second-order scattering
𝒮̂^(2) = 1/2!( -i/ħ c)^2 ∫[4]x∫[4]x^'𝒯̂{ℋ̂_I (x) ℋ̂_I (x^') }
is the first and only relevant scattering-matrix element whose time ordering is resolved by Wick's theorem <cit.>, giving rise to only normally-ordered contractions.
As a result, we select all contractions with two (real) fermions entering and exiting the scattering (constituents of the cobosons); only contractions of EM fields, i.e., scalar photons Â_0 Â_0^' and vector photons Â_r Â_s^' with r,s=1,2,3, are involved.
After power counting <cit.>, we find that consistent matching up to the order c^-2 requires the contraction of all scalar and vector photons between terms in the first line of Eq. (<ref>).
The terms in the second line can only be contracted via a scalar photon with the Coulomb potential (compare with the Feynman diagrams depicted in Fig. <ref>).
§.§ Coulomb gauge
To determine the matrix elements, a choice of gauge is required to resolve the contractions.
First, we move to Coulomb gauge where contractions of the scalar and vector photon take the form <cit.>
Â_0 Â_0^' = iħδ ( x_0 -x_0^' ) / ( 4 πε_0 c | x - x^'| )
and
Â_r Â_s^' = ħμ_0 c/2 1/( 2π)^3∫[3]k e^{i k·( x - x^')} ( δ_rs/k - k_r k_s/k^3)
×( e^{-i k ( x_0 - x_0^' )}Θ ( x_0 - x_0^' ) + ( x_0 ↔ x_0^') )
with the Heaviside step function Θ(x_0 - x_0^').
Further, contractions between the vector and scalar potentials vanish, Â_r Â_0^' = 0.
Multiplying all elements of ℋ̂_I ( x ) ℋ̂_I ( x^' ) yields a term proportional to ϕ̂ϕ̂^', whose contraction leads to the Coulomb potential and corresponds to the first Feynman diagram in Fig. <ref>.
This particular second-order matrix element
Ŝ^(2)_C = ( -i/ħ)^2 ∑_i,j q_i q_j/2!∫[4]x∫[4]x^'ψ̂_i^†ψ̂_j^' † Â_0 Â_0^' ψ̂_j^'ψ̂_i
contains abbreviations ψ̂_i= ψ̂_i ( x ) and ψ̂_j^' = ψ̂_j ( x^'), where the order of fermion operators takes both commuting and anti-commuting field operators into account.
Inserting the contraction of the scalar potential from Eq. (<ref>) yields
Ŝ^(2)_C = -i/ħ∫dt∫[3]x∫[3]x^'∑_i,j=e,nψ̂_i^†ψ̂_j^' †V̂^(ij)_Cψ̂_j^'ψ̂_i
and the corresponding Coulomb potential V̂^(ij)_C = q_i q_j/(8 πε_0 | x - x^'|).
Moving to the contribution proportional to {p̂_r , Â^(r)}{p̂^'_s , Â^' (s)}, we find the matrix element
Ŝ^(2)_LL = ( -i/ħ c)^2 1/2!∑_i,j=e,n q_i q_j/m_i m_j∫[4]x∫[4]x^'
×ψ̂_i^†ψ̂_j^' † Â_r Â_s^' p̂_r p̂_s^'ψ̂_j^'ψ̂_i
with the help of p̂_ℓ·Â_ℓ = Â_ℓ·p̂_ℓ in Coulomb gauge.
The contraction of the vector photon is not an exact delta function in time, i.e., it is not instantaneous.
However, by partial integration with respect to one temporal coordinate we extract from the instantaneous part of the matrix element
Ŝ^(2)_LL = -i/ħ∫dt∫[3]x∫[3]x^'∑_i,jψ̂_i^†ψ̂_j^' †V̂^(ij)_LLψ̂_j^'ψ̂_i
the potential
V̂^(ij)_LL = 4πκ_ij/( 2π)^3∫[3]k e^{i k·( x - x^')} ( δ_rs/k^2 - k_r k_s/k^4) p̂_r p̂_s^'
associated with the orbit-orbit coupling, while the remainder of the integral is of higher order and thus neglected.
Here, we introduce κ_ij=-q_i q_j /(8 πε_0 m_i m_j c^2) as before.
After performing the Fourier transform we find
V̂^(ij)_LL = κ_ij/2( 1/rp̂·p̂^' + 1/r^3r·( r·p̂) p̂^').
It is straightforward to show the equivalence to the form given in Fig. <ref>.
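For convenience, the two standard three-dimensional Fourier transforms used in this step are (textbook results, quoted here rather than rederived)
1/(2π)^3 ∫[3]k e^{i k·( x - x^')} δ_rs/k^2 = δ_rs/(4π r) and 1/(2π)^3 ∫[3]k e^{i k·( x - x^')} k_r k_s/k^4 = ( δ_rs/r - r_r r_s/r^3 )/(8π),
with r = x - x^'; their difference, multiplied by 4πκ_ij and contracted with p̂_r p̂_s^', reproduces the quoted form of V̂^(ij)_LL.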
The remaining potentials are derived in a completely analogous procedure.
§.§ Lorenz gauge
When we use Lorenz gauge instead of Coulomb gauge to determine the potentials, the general procedure remains identical but we need to take into account that p̂_ℓ·Â_ℓ≠Â_ℓ·p̂_ℓ.
The contraction then reads
Â_μ Â_ν^' = - ħμ_0 c/2 η_μν/( 2π)^3∫[3]k e^{i k·( x - x^')} /k
×( e^{-i k ( x_0 - x_0^')}Θ( x_0 - x_0^') + ( x_0 ↔ x_0^') ).
All potentials are identical to the ones obtained in Coulomb gauge except for the orbit-orbit and Coulomb potentials.
For the first, we find
V̂^(ij)_LL = κ_ij/r^3( ℓ̂·ℓ̂^' + ( r·p̂) ( r·p̂^') ) - κ_ijπħ^2 δ ( r ).
The Coulomb potential is of order c^0, but in Lorenz gauge the scalar photon propagator is not instantaneous in time.
As a consequence, there is a nonnegligible remaining integral after a partial integration.
The instantaneous part of this matrix element corresponds to the Coulomb potential and the remainder yields a term of order c^-2.
We resolve the second part as well with the help of partial integration in time and use consecutively the continuity equation
∂/∂ t( ψ̂_i^†ψ̂_i ) = iħ/2m_i∇·( ψ̂_i^†[∇ψ̂_i ] - [∇ψ̂_i^†] ψ̂_i )
to remove partial derivatives in time.
This procedure leads to the potential
V̂^(ij)_C = q_i q_j/8 πε_0 r - κ_ij1/2r^3ℓ̂·ℓ̂^' + κ_ijπħ^2 δ ( r )
showing that the sum of all potentials is identical in both gauges.
§ COBOSON SUBSPACE
§.§ Projection operator
First, we discuss the projector nature of π̂_Cb from Eq. (<ref>) defined via subspace projectors π̂_ℓ.
Projectors of different subspaces k≠ℓ are by construction orthogonal because of the different number of field operators, since annihilation operators acting on the vacuum state leads to a vanishing contribution.
We confirm the normalization of 1/j! by determining
π̂_ℓ^2 ∝1/ℓ!^2φ̂^†_1 ... φ̂^†_ℓ|0⟩⟨0|φ̂_ℓ ... φ̂_1 φ̂^†_1^' ... φ̂^†_ℓ^'|0⟩⟨0|φ̂_ℓ^' ... φ̂_1^'
where for simplicity the corresponding integrals are suppressed.
Thus, Eq. (<ref>) requires to calculate matrix elements
⟨0|φ̂_ℓ ... φ̂_1 φ̂^†_1^' ... φ̂^†_ℓ^'|0⟩ = ∑_u_1 ... u_ℓ, v_1 ... v_ℓ=1^ℓ ε_u_1 ... u_ℓ ε_v_1 ... v_ℓ ∏_t=1^ℓ δ( x_t,e - x^'_u_t,e) δ( x_t,n - x^'_v_t,n)
as a consequence of a consecutive application of the fundamental fermion anti-commutation relations given in Sec. <ref>.
The Levi-Civita symbol is defined as ε_1 2...ℓ=+1, changes its sign if two indices are interchanged, and vanishes if at least two indices coincide.
When evaluating the integrals that were not displayed in Eq. (<ref>), not all terms in Eq. (<ref>) contribute because of the integration limits of the coboson projection operator.
In fact, only terms where all sub-indices u_t=v_t coincide result in a nonvanishing contribution and all cross terms vanish, as motivated by Fig. <ref>.
One exemplary term is shown where the first and second line account for integration regions of projectors π̂_j and π̂_j^' respectively.
Each delta function δ(x_t,i-x_u_t,i^') is visualized by lines between coordinate x_t,i and x^'_u_t,i of particle species i.
Integrating over one coordinate gives rise to contributions where these coordinates coincide (turquoise lines) while the remaining fermions are still restricted to their respective regions.
The purple lines, which correspond to one of the cross terms from Eq. (<ref>), imply that there are two electrons with coordinate x_2,e^' and x_j,e^' around nucleus x_2,n^'.
As our projector does not allow such a situation, this term vanishes.
Consequently, only such terms where u_t = v_t contribute, only coordinates selected according to the blue lines in Fig. <ref>.
After a relabeling of integration variables we find in total ℓ! terms and, thus, the subspace projector is idempotent.
Comparing the nonvanishing terms from Eq. (<ref>) with the coboson commutator from Eq. (<ref>), we observe that these correspond to the bosonic contribution of the commutator, emphasizing again that the integration regions in our projector enforce boson-like behavior.
§.§ Projection of Hamiltonian
In the following, we summarize essential identities that are required to perform the projection of the Hamiltonian.
First, we may interchange x_i = ( x_i,n, x_i,e) in integrands; e.g., in the case of two cobosons,
∫_C_1[6]x_1∫_C_2[6]x_2 f(x_1,x_2) = ∫_C_1[6]x_1∫_C_2[6]x_2 f(x_2,x_1)
for any integrand f.
This identity follows from relabeling integration variables and from the symmetry of the integration regions.
The extension to N cobosons is straightforward.
Second, annihilating a coboson at position x_1 = ( x_1,n, x_1,e), before projecting onto the space of ℓ-1 cobosons located anywhere but at x_1 is equivalent to projecting first onto the whole subspace of ℓ cobosons, before annihilating a coboson at position x_1,
∫_C_1[6]x_1φ̂^†_1 Ô_1 π̂_ℓ-1 ∖x_1φ̂_1 = ∫_C_1[6]x_1φ̂^†_1 Ô_1 φ̂π̂_ℓ.
Here, the operator
π̂_ℓ-1 ∖x_1 = ∫_C_2[6]x_2 ... ∫_C_ℓ[6]x_ℓφ̂^†_2 ... φ̂_ℓ^†|0⟩⟨0|φ̂_ℓ ... φ̂_2
is the projector whose largest region of integration is C_2 instead of C_1.
A generalization to N cobosons is found analogously.
With the identities from Eq. (<ref>) we project the single-fermion part of the Hamiltonian ∫[3] x ψ̂^†_i ĥ_i ψ̂_i π̂_ℓ with the help of the fermionic anti-commutation relation for the same species, while different fermionic species commute trivially.
We arrive at ℓ terms which reduce by relabeling integration variables with help of the first identity to
∫[3] x ψ̂^†_i ĥ_i ψ̂_i π̂_ℓ = ∫_C_1[6]x_1φ̂^†_1 ĥ_i π̂_ℓ-1 ∖x_1φ̂_1.
Together with Eq. (<ref>), we find the desired form from Eq. (<ref>).
The projection of repulsive fermion-fermion interactions is performed similarly to the single-fermion operator case, but we exchange two annihilation operators with ℓ creation operators.
This case yields an effective factor of ℓ( ℓ-1), corresponding to all possible combinations of electron-electron (or nucleus-nucleus) interaction in the presence of ℓ electrons (nuclei).
Hence, the repulsive case with i=j resolves to
∫[3]x_1∫[3]x_2ψ̂_i^†ψ̂_j^†V̂^(ij)ψ̂_j ψ̂_i π̂_ℓ
= ∫_C_1[6]x_1∫_C_2[6]x_2φ̂^†_1 φ̂^†_2 V̂^(ij)π̂_ℓ-2 ∖x_1,x_2φ̂_2 φ̂_1.
With the generalization of Eq. (<ref>) we arrive at the projected form.
The attractive part contains field operators of two different species that define φ̂ so that we apply the coboson commutator.
There are ℓ terms originating from the bosonic part and the corresponding coordinates are confined to the same region implying that these terms correspond to the binding potential.
This projection can be performed analogously to the single-fermion part.
On the other hand, the cobosonic part of the commutator is responsible for ℓ(ℓ-1 ) terms that result in attractive interactions between different cobosons, similar to the repulsive interactions discussed above.
In this derivation the importance of maintaining the elementary particle level becomes most evident as the cobosonic part of the commutator generates the attractive part of the scattering potential.
§ COBOSON OPERATOR TRANSFORMATION
We determine the transformation of Û^†φ̂Û with the help of a Baker-Campbell-Hausdorff formula
Û^†φ̂Û = ∑_k=0^∞1/k!(- i/ħ)^k [ Λ̂, φ̂]_k
where [ Λ̂, φ̂]_k = [ Λ̂, [ Λ̂, φ̂]_k-1] and [ Λ̂, φ̂]_0 = φ̂.
Focusing first on the bosonic part of the coboson commutator, it can be shown that
[ Λ̂, φ̂]_k = ( - λ̂)^k φ̂ .
By definition, Λ̂ is the second-quantized generator of λ̂ and they are related to each other by Eq. (<ref>).
Regarding the additional parts of the coboson commutator, they do not vanish trivially but can be resolved with an argument similar to the case sketched in Fig. <ref>.
Through the cobosonic part of the commutator, we generate terms containing the annihilation of two electrons (nuclei) within the same sphere around one nucleus (electron) which lies outside of our projected subspace and thus vanishes.
This fact may be made explicit by introducing the projected transformation π̂_CbÛ^†φ̂Ûπ̂_Cb as φ̂ = π̂_Cbφ̂.
Consequently, the transformation reduces to
Û^†φ̂Û = ûφ̂,
the second-quantized unitary transformation given by Û reduces to the first-quantized unitary û defined via their respective generator.
§ MULTIPOLAR COBOSONIC HAMILTONIAN FROM UNITARY TRANSFORMATIONS
In this appendix, we present the transformation of the coboson Hamiltonian into its multipolar form including relativistic corrections of c.m. and relative coordinates, and provide the full expressions omitted in the main body of the article.
In Sec. <ref> we demonstrated that the transformation of the field-theoretical Hamiltonian can be reduced to its single-particle counterpart, summarized in Eq. (<ref>).
This appendix gives details on the respective single-particle transformations.
As discussed in Sec. <ref>, we first perform a unitary transformation to introduce relativistic corrections to the c.m. and relative coordinates, before we transform the resulting operators in a second step with the help of the PZW transformation.
Because only the lowest-order relativistic correction is of relevance, we find for any single-particle operator Ô the transformation û^(rel) †Ôû^(rel) = Ô - i[ λ̂^(rel), Ô] / ħ = Ô + Ô^(1).
The generator λ̂^(rel) already given in Eq. (<ref>) has the form
λ̂_k^(rel) = r_k ·P_k /4M^2c^2[ p_k ·P_k + Δ m ( p_k^2/m_r + q_eq_n/4 πε_0 r_k) ] + H.c.
- 1/4m_r Mc^2( p_k ×P_k + H.c.) ·ŝ_k.
The subsequent PZW transformation is performed with the help of the generator λ̂_k^(PZW) = ∫[3]y𝒫_k ( y) ·Â(y), where 𝒫_k is a polarization field of the k-th coboson and is defined in Eq. (<ref>).
According to Eq. (<ref>), we discuss in the following the individual transformations of the single-coboson Hamiltonian, the coboson scattering potential, and the EM Hamiltonian.
§.§ Single-coboson Hamiltonian
The single-coboson Hamiltonian from Eq. (<ref>) includes the operators P, p, 1/r, B̂, ŝ_i, Ê, and ∇·Ê [We drop the subscript “1” for operators in the single-coboson sector] that have to be transformed.
§.§.§ Relativistic corrections
To compute the relativistic corrections Ô^(1), we first calculate the commutators between minimally-coupled momenta to
[ P̂_ℓ, P̂_m ] = ħε_ℓ m n Q B̂_Q,n,
[ p̂_ℓ, P̂_m ] = ħε_ℓ m n q_1 B̂_q_1,n, and
[ p̂_ℓ, p̂_m ] = ħε_ℓ m n q_2 B̂_q_2,n.
Here, we introduce the abbreviation B̂_q_r = ∑_i sgn (-q_i)^r q_i ( m_r / m_i )^r B̂(x_i ) / q_r with r=0,1,2,... that contains the weighted charge q_r = ∑_i sgn (-q_i)^r q_i ( m_r / m_i )^r and is chosen such that the lowest-order multipole expansion of B̂_q_r coincides with B̂(R).
In the particular case r=0, the weighted charge reduces to the total charge, q_0 = Q; sgn denotes the sign function.
With the commutators of minimally-coupled momenta we determine all corrections which are presented in Table <ref>.
We see that c.m. and relative momentum are modified by light-induced corrections with the general r×B̂ structure.
In Table <ref> we introduced the operators
δ̂_r = r·P/4M^2c^2( P + 2 Δ m/m_rp) - P×ŝ/4 m_r M c^2
δ̂_R = r·P/4M^2c^2p + p·P + Δ m ( p^2/m_r + q_e q_n/4 πε_0 r)/4M^2c^2r + p×ŝ/4m_rMc^2
where δ̂_r and δ̂_R follow from a commutator involving the relative and c.m. momentum, respectively.
Moreover, terms arise that are of the form of a second-order multipole expansion of the magnetic field B̂_rs = q_r B̂_q_r + Δ m/m_r q_s B̂_q_s.
While the c.m. momentum contains only light-induced corrections, the relative momentum has an additional correction that is not induced by magnetic fields.
The correction to the Coulomb potential coincides <cit.> with the one when light-field corrections are neglected, but the canonical c.m. and relative momenta are exchanged by minimally-coupled ones.
Finally, the correction to the magnetic field may be rewritten into a second-order multipole expansion form but with the operator
δ̂_B̂ = - p·P + Δ m/m_rp^ 2 + Δ m q_e q_n/4 πε_0 r/4M^2c^2r + q_i/q_im_r/m_iP·r/4M^2c^2P
- ( 1 - 2 q_i/q_iΔ m/m_i) P·r/4M^2c^2p - p + q_i/q_im_r/m_iP/4m_rMc^2×ŝ
together with a contribution proportional to the Laplacian of the magnetic field.
Since the electric fields are already of the order c^-2 in the Hamiltonian, there are no further relevant corrections.
§.§.§ PZW transformation
We perform the PZW transformation in Coulomb gauge where the vector potential commutes with itself and the magnetic field at any position.
Thus, only the transformation of the minimally-coupled momenta and of the electric field remains.
As a result, we find the transformations <cit.>
P→ P̂_PZW = P̂_Q + F̂^(cm)
p→ p̂_PZW = p̂ + F̂^(rel)
Ê^⊥(y) → Ê^⊥(y) - 1/ε_0𝒫^⊥ ( y).
In the case of ions with Q≠ 0, the c.m. momentum couples minimally to a monopole evaluated at the c.m. position of the vector potential, P̂_Q = P̂ - Q Â(R).
Moreover, both c.m. and relative momenta are modified by generalized r×B̂ summarized by
F̂^(cm) = ∫[3]y𝒫( y) ×B̂( y)
F̂^(rel) = ∑_j=e,n q_j m_r^2/m_j^2r×∫_0^1ρρB̂(R +ρ (x_j-R ) ),
where the c.m. momentum now includes the polarization field defined in Eq. (<ref>).
The electric fields in the single-coboson Hamiltonian are evaluated at positions x_i.
Since the polarization field 𝒫 (x_i) = 0 vanishes, there is no electric-field contribution from the single-coboson electric fields.
§.§.§ Hamiltonian
After we insert the PZW-transformed momenta, also into the corrections from Table <ref>, the transformed single-coboson Hamiltonian ĥ_Cb from Eq. (<ref>) resolves to
ĥ_MpCb^' = Mc^2 + P̂^2_Q/2 M( 1 + ĥ_rel^(0)/Mc^2) - P̂^4_Q/8M^3c^2 + ĥ_rel^(0) + ĥ_rel^(1)
- q_eq_n/8πε_0m_n Mc^2Z-1/r^3( r×P̂_Q ) ·ŝ_n + ĥ_IB^(0) + ĥ_IB^(1).
The prime indicates that electric-field contributions that arise from the transformation of the EM Hamiltonian are not yet included.
The relative Hamiltonian is identical to Eq. (<ref>) and ĥ_IB^(0) contains the magnetic-field contribution of lowest-order light-matter interaction.
It is listed in Table <ref>.
The additional part, magnetic-field contributions to light-matter interaction at the order of c^-2, are collected in
ĥ_IB^(1) = - ( P̂_PZW^2 ĥ_rel,PZW^(0) - P̂_Q^2 ĥ_rel^(0)/4M^2c^2 + H.c.) - P̂_PZW^4- P̂_Q^4/8M^3c^2 - m_n^3 + m_e^3/M^3p̂_PZW^4 - p̂^4/8m_r^3c^2 - q_eq_n/8πε_0m_n Mc^2Z-1/r^3( r×F̂^(cm)) ·ŝ_n
-κ/r( p̂_PZW^2 - p̂^2 ) + κ/r^3r×F̂^(rel)·( α_ℓ SŜ + α_ℓ sŝ) + ∑_i [ c_S^(i)q_i ( m_i/MP̂_PZW - q_i/q_ip̂_PZW) ×Ê/4m_i^2c^2·ŝ_i + H.c.]
+ ∑_i c_W1^(i) q_i { ( P̂_PZW - q_i/q_im_r/m_ip̂_PZW)^2, ŝ_i ·B̂ (x_i )}/4m^3_ic^2 - ∑_i c_A1^(i) q^2_i ħ^2 B̂^2(x_i)/8 m_i^3 c^2 + {P̂_PZW^(1), P̂_PZW}/2M + {p̂_B,PZW^(1), p̂_PZW}/2m_r
- ∑_i ( μ̂_i, PZW^(1)·B̂ + μ̂_i ·B̂_PZW^(1)) + ħ^2/16m_rM^2c^2[ p̂_PZW·∇× q_1 B̂_q_1 - ( P̂_PZW + 2 Δ m/m_rp̂_PZW) ·∇× q_2 B̂_q_2 + H.c.]
+ ħ^2/4m_rM^2c^2[ q_2 B̂_q_2·( Q B̂_Q + Δ m/m_r q_1 B̂_q_1) + q_1 B̂_q_1·( q_1 B̂_q_1 + Δ m/m_r q_2 B̂_q_2) ].
Due to the PZW transformation and the fact that we keep light-matter interactions also at the order c^-2, we find for all c^-2 terms from the single-coboson Hamiltonian a light-field contribution; these contributions are listed in the first three lines of ĥ_IB^(1).
Every term that appears with a subscript “PZW” contains PZW-transformed momenta.
The relativistic corrections μ̂_i, PZW^(1), B̂_PZW^(1), P̂_PZW^(1), and p̂_PZW^(1) are the ones from Table <ref>, only with PZW-transformed momenta, where μ̂_i^(1) = c_F^(i)ŝ_i^(1)/m_i.
Moreover, we collect all PZW-transformed terms of p̂^(1)_PZW directly proportional to B̂ in p̂_B,PZW^(1) that includes light-field induced corrections.
The last line in Eq. (<ref>) represents the remainder of combining c.m. and relative parts of the kinetic correction and the relativistic correction p̂^(1) due to the noncommutativity of c.m. and relative momenta.
We see that in c^-2 the influence of the magnetic field becomes cumbersome, but the general structure is expressed through r×B̂-terms, B̂^2-terms and ∇×B̂-type terms in various combinations with other operators.
§.§ Coboson scattering potential
In the following, we determine the transformation of the scattering potential
V̂_Scatt = ∑_i,j( V̂_C^(ij) + V̂_LL^(ij) +V̂_LS^(ij) + V̂_SS^(ij)).
To our order, only relativistic corrections to the Coulomb potential
V̂^(ij)_C = q_i q_j/8 πε_0 χ_ij
between fermion i of coboson 1 and fermion j of coboson 2 at a relative distance χ_ij = x_1,i - x_2,j have to be considered.
Here, x_k,i = R_k - sgn(q_i) m_rr_k / m_i in NR c.m. and relative coordinates.
The transformation resolves to
û^(rel) †_121/χ_ijû^(rel) _12 = 1/χ_ij + δ̂_1,i^(ij) + δ̂_2,j^(ij) + 𝒪( c^-4)
with corrections δ̂_k,t^(ij) = - i[ λ̂^(rel)_k, 1/χ_ij]/ħ that take the explicit form
δ̂_k,t^(ij) = - (-1)^k/4M^2c^2 χ_ij^3 { r_k ·χ_ijΔ m q_e q_n/4πε_0 r_k + ℓ_β^(ij)·( L_k + Δ m/Mℓ_k) + (χ_ij·P_k ) r_k ·( p_k - q_t/q_tm_r/m_tP_k )
+ ( χ_ij·p_k ) r_k ·( Δ m/Mp_k + (1-2 Δ m/Mq_t/q_tm_r/m_t) P_k ) + M/m_r( ℓ^(ij) + q_t/q_tm_r/m_tL^(ij)) ·ŝ_k + H.c.}.
These relativistic corrections can be identified with a scalar correction to the Coulomb potential, orbit-orbit-like and spin-orbit-like potentials.
For instance, the orbit-orbit scattering potential V̂^(ij)_LL describes orbit-orbit coupling between fermions of two different cobosons due to their respective angular momentum χ_ij×p̂_i.
Similar terms appear also in δ̂_k,t^(ij) that depend on the angular momentum ℓ_k^(ij) = χ_ij×p_k, which is the outer product of the distance between two fermions of different cobosons and the relative momentum of coboson k, where the bar indicates again minimally-coupled momenta.
These angular momenta between cobosons couple in this case to the internal total L_k = r_k ×P_k and relative ℓ_k = r_k ×p_k angular momentum of coboson k.
Next, we find a spin-orbit coupling between angular momenta L^(ij) and ℓ^(ij) to the relative spin ŝ_k.
Applying the PZW transformation as well changes all momenta in the scattering potentials to the PZW-transformed ones, also in the correction term given in Eq. (<ref>).
Consequently, the transformed scattering potential reads
𝒱̂_Scatt = ∑_i,j q_i q_j/8 πε_0( 1/χ_ij + δ̂_1,i^(ij,PZW) + δ̂_2,j^(ij,PZW))
+ 𝒱̂^(ij)_LL + 𝒱̂^(ij)_LS + 𝒱̂^(ij)_SS.
The potentials 𝒱̂^(ij)_v are the ones from Fig. <ref> where we replace r by χ_ij as well as p̂_i → m_i P̂_1, PZW/M - sgn (q_i) p̂_1, PZW and p̂^'_j → m_j P̂_2, PZW/M - sgn (q_j) p̂_2, PZW for the momenta.
Scattering between two cobosons reduces to interactions between fermion i and j of two different cobosons via the Coulomb potential in lowest order together with magnetic moments associated with both spin and orbital motion.
By that, all magnetic moments couple, spin to spin, spin to orbit, and orbit to orbit where the latter has an additional retardation correction.
In addition, corrections to NR c.m. and relative coordinates arise from the Coulomb term and modify the Coulomb potential, the LL-coupling, as well as the LS-coupling.
§.§ EM Hamiltonian
Finally, the transformation of the EM Hamiltonian requires the direct computation of the second-quantized unitary Û as it contains no cobosonic field operators.
Hence, we determine
Û^†_PZWÛ^†_relĤ_EMÛ_relÛ_PZW.
Note that momentum operators contained in field-theoretical unitaries do not act on the variables of integration in Ĥ_EM such that the transformation is solely determined through EM fields.
Moreover, Û_rel and Û_PZW contain only the vector potential  that commutes with itself and with the magnetic field B̂ in Coulomb gauge, so only the electric field gives rise to additional terms.
The relevant commutator between the vector potential and the electric field
[ Â^(ℓ) (x) , Ê^(m) (y) ] = - ħ/ε_0δ^ℓ m, ⊥ ( x - y )
is defined through the transverse delta function <cit.>.
The transformation generating the relativistic corrections gives rise to the form
Û^†_relĤ_EMÛ_rel = Ĥ_EM + ∫_C_1[6]ℛ_1φ̂^†_1 ĥ_IE^ (1)φ̂_1.
The relativistic corrections can be written as ĥ_IE^ (1) = 1/2∑_i ( Ê^⊥_i ·d_i + d_i ·Ê^⊥_i ) and appears with d_i defined in Eq. (<ref>), but the momenta are still the minimally-coupled ones.
Applying the PZW transformation to the first term in Eq. (<ref>) results in
Û_PZW^†Ĥ_EMÛ_PZW = Ĥ_EM + ∫_C_1[6]ℛ_1φ̂^†_1 ĥ_IE^(0)φ̂_1
+ ∫_C_1[6]ℛ_1∫_C_2[6]ℛ_2φ̂^†_1 φ̂^†_2 𝒱̂_Selfφ̂_2 φ̂_1.
Here, ĥ_IE^(0) contains the electric multipole moments and the self-energy from Table <ref> and 𝒱̂_Self is the scattering self-energy known from Table <ref>.
The PZW transformation of the second term in Eq. (<ref>) reduces now, due to the coboson field operators, to the first-quantized PZW transformation of ĥ_IE^ (1) only, such that the electric field Ê_i is again not affected as 𝒫 ( x_i) = 0 vanishes.
The momenta are exchanged by their PZW-transformed ones, ĥ_IE^ (1)→ĥ_IE^(1).
This Hamiltonian is the electric part of c^-2 corrections to light-matter interaction from Eq. (<ref>), where we replace canonical momenta with the PZW-transformed ones.
By taking into account the contributions from the EM Hamiltonian, the single-coboson Hamiltonian is modified in the light-matter interaction part to ĥ^(k)_I = ĥ^(k)_IB + ĥ^(k)_IE with k=0,1.
The scattering potential gets an additional self-energy 𝒱̂_Self.
§ FIRST-ORDER ENERGY SHIFT
To determine first-order energy shifts, the actual form of the wave function of relative modes of hydrogen-like cobosons ψ_β is required and is given in terms of quantum numbers n,j,m_j,ℓ, S.
In particular, the wave function
ψ_β = α_j,S,1ψ_n,ℓ,m_j-1χ_S,1 + α_j,S,0ψ_n,ℓ,m_jχ_S,0
+ α_j,S,-1ψ_n,ℓ,m_j+1χ_S,-1
consists of ψ_n,ℓ,m, the standard spatial part of the solution to hydrogen-like Schrödinger equation.
The spin wave function χ_S,m_S is the eigenbasis of operators Ŝ^2 and Ŝ_z of the total spin Ŝ = ŝ_e + ŝ_n.
By that, the total spin of a coboson formed by two spin-1/2 fermions can either be S=0 or S=1, while its projection onto the z-axis can take magnetic spin numbers m_S=0 or m_S = -1,0,1, respectively.
In the superposition of Eq. (<ref>) together with Clebsch-Gordan coefficients <cit.> (detailed in Table <ref>), we find the eigenbasis of the operators Ĵ, Ĵ_z, ℓ̂, and Ŝ, where Ĵ = ℓ̂ + Ŝ is the total angular momentum.
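As a hedged illustration (the quantum numbers below are chosen for demonstration only and are not taken from Table <ref>), the coefficients α_j,S,m_S can be generated as Clebsch-Gordan coefficients ⟨ℓ, m_j-m_S; S, m_S | j, m_j⟩ with SymPy:

```python
from sympy.physics.quantum.cg import CG

l, S, j, mj = 1, 1, 2, 0   # illustrative coupling of l=1 and S=1 to j=2, m_j=0
for mS in (1, 0, -1):
    alpha = CG(l, mj - mS, S, mS, j, mj).doit()   # <l, m_j-m_S; S, m_S | j, m_j>
    print(f"alpha_(j={j}, S={S}, m_S={mS}) =", alpha)
```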
With this explicit wave function in angular momentum eigenbasis we determine the first-order energy shift E^(1)_β = ∫[3]r ψ_β^* ĥ_rel^(1)ψ_β and arrive at
E_β^(1)= m_r^2c^2(Zα)^4/M{ m_e^3+m_n^3/8 m_rM^2( 3- 8n/2ℓ+1) 1/n^4 +( 1 - 3n/2 ℓ+1) 1/n^4 + ( α_D - 3/4α_ss + α_ssδ_S,1) δ_ℓ,0/n^3 + (δ_ℓ,0 -1) δ_S,1/ℓ (ℓ+1) (2 ℓ +1) C_j,ℓ}.
The individual terms correspond to the kinetic correction (first), orbit-orbit coupling (second), Darwin and contact interaction (third) where ŝ_e ·ŝ_n = ( Ŝ^2 - ŝ_e^2 - ŝ_n^2)/2 was exploited such that ŝ_i^2 takes also a Darwin-like spin-independent form.
The last term combines both spin-orbit terms and the magnetic dipole-dipole potential in Ŝ_ne=- ŝ_n ·ŝ_e^' + 3 ( r·ŝ_n ) (r·ŝ_e^')/r^2 with
C_j,ℓ = {[ ℓ/2ℓ +3[ 2(2 ℓ +3) ( α_ℓ S + Δ m/2Mα_ℓ s) - c_F^(e) c_F^(n)], for j=ℓ+1; -2 ( α_ℓ S + Δ m/2Mα_ℓ s) + c_F^(e) c_F^(n), for j=ℓ; - ℓ +1/2 ℓ -1[ 2 (2 ℓ -1) ( α_ℓ S + Δ m/2Mα_ℓ s) + c_F^(e) c_F^(n)], for j=ℓ-1 ].
and the low-energy Wilson coefficients α_v are given in Table <ref>.
Hence, the first-order energy shift depends on quantum numbers n,j,ℓ, and S, but not on m_j.
|
http://arxiv.org/abs/2307.05874v1 | 20230712020218 | Multi-Object Tracking as Attention Mechanism | [
"Hiroshi Fukui",
"Taiki Miyagawa",
"Yusuke Morishita"
] | cs.CV | [
"cs.CV"
] |
We propose a conceptually simple and thus fast multi-object tracking (MOT) model that does not require any attached modules, such as the Kalman filter, Hungarian algorithm, transformer blocks, or graph networks.
Conventional MOT models are built upon the multi-step modules listed above, and thus the computational cost is high.
Our proposed end-to-end MOT model, TicrossNet, is composed of a base detector and a cross-attention module only.
As a result, the overhead of tracking does not increase significantly even when the number of instances (N_t) increases.
We show that TicrossNet runs in real-time; specifically, it achieves 32.6 FPS on MOT17 and 31.0 FPS on MOT20 (Tesla V100), which includes as many as >100 instances per frame. We also demonstrate that TicrossNet is robust to N_t; thus, it does not have to change the size of the base detector, depending on N_t, as is often done by other models for real-time processing.
[© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.]
Multi-object tracking, Attention mechanism, Cross-attention mechanism, Real-time processing, End-to-end MOT model
§ INTRODUCTION
Simple and fast models for multi-object tracking (MOT) have been studied to realize real-time tracking.
In particular, single-shot models have remarkably improved the tracking speed by linking detection and tracking in an end-to-end trainable manner <cit.>.
Many such models are composed of three key processes: detection or feature extraction, re-identification (reID), and trajectory refinement <cit.>.
To construct such a model, several attached modules are generally used, such as the Kalman filter, Hungarian algorithm, cosine similarity, boxIoU, transformer blocks, or graph networks <cit.>.
These attached modules increase computational cost especially when the number of tracking instances is large <cit.>.
Therefore, such a model design is inefficient in this case.
To make an efficient model for MOT, we attempt to reduce the number of modules and module complexity.
Our core idea is to use the cross-attention mechanism <cit.> for MOT modeling.
This idea is based on the similarity between the cross-attention mechanism and the key processes of MOT (Tab. <ref>).
We argue that this similarity enables us to perform MOT using only one cross-attention module (and a base detector).
As a result, we can complete all the key processes of MOT in GPU only unlike conventional models (except for MOTR <cit.>).
Following the idea above, we propose a conceptually simple and fast tracker, called tracking crossword network (TicrossNet), which completes all the MOT key processes using the cross-attention mechanism.
It requires only minor modifications to the vanilla cross-attention mechanism, i.e., a softmax normalization, feature clipping, and micro convolutional neural network (CNN) (Fig. <ref>), which do not increase computational cost significantly.
As a result, the overhead of the tracking process does not increase significantly even when the number of instances increases.
Note that TicrossNet uses the cross-attention mechanism for efficient MOT modeling, not only for feature extraction like conventional transformers <cit.>.
Note also that TicrossNet uses only one single-head cross-attention module unlike MOTR <cit.>.
Our experimental results show that TicrossNet achieves 32.6 FPS on MOT17 <cit.> and 31.0 FPS on MOT20 <cit.> (using Tesla V100) even though the latter includes as many as >100 instances per frame.
Note that the video frame rates of MOT17 and MOT20 are 30 and 25 FPS, respectively.
Therefore, we can safely say that TicrossNet runs in real-time.
Furthermore, we show that TicrossNet maintains the processing speed even when the number of instances increases, while competitive baseline models <cit.> slow down significantly.
§ TICROSSNET
§.§ Architecture
The proposed tracking process is formally summarized as:
x_t, +=pe( x_t), x_t-τ, +=pe( x_t-τ),
q=Q( x_t,+), k=K( x_t-τ,+), v=V( x_t-τ),
q̃=clip( q), k̃=clip( k), ṽ=clip( v),
q̃'=extend(q̃), k̃'=extend(k̃),
A = attention(q̃', k̃', ṽ) = ϕ(γ(q̃' ⊗k̃'^⊤)) ⊙ṽ ,
x'_t = β( A⊙ W_p) + x_t ,
where ⊙ and ⊗ are the dot product and the Hadamard product operations, respectively. W_p∈ℝ^D × D is a trainable matrix for the linear projection layer, where
D is the sum of the numbers of channels in the output layer of the detector.
In the following, we explain the details of Eqs. <ref>–<ref> and Fig. <ref> in a step-by-step manner. The point is Eqs. <ref> & <ref> (A and x'_t).
1: Detector.
TicrossNet starts from a base detector of the instances in each frame (Figs. <ref> & <ref>).
We use CenterNet <cit.>, which is a commonly used detector for MOT because of its good tradeoff between speed and accuracy <cit.>.
We also use the reID feature (Fig. <ref>), as is also done in FairMOT <cit.>, to boost the accuracy of MOT.
2: Input token x_t.
The detector's output is then used to make the input token at times t-τ and t, which are denoted by x_t-τ and x_t. -3 ≤τ≤ 3 is random in training, and τ=1 in inference.
Also, the reID feature is concatenated if available; if it is not, the neck feature is used instead (Fig. <ref>).
3: Clipped feature q̃, k̃, ṽ.
The input tokens x_t and x_t-τ are then input to the cross-attention module.
It first encodes the positional information to x_t and x_t-τ (pe(·) in Eq. <ref>), as is also done in transformers <cit.>. The output of pe(·) is denoted by x_t, + and x_t-τ, +.
Next, x_t, +, x_t-τ, +, and x_t - τ are transformed into query q, key k, and value v (Eq. <ref>), where Q(·), K(·) and V(·) are matrix multiplications.
In addition, we apply the feature clipping (clip(·) in Eq. <ref>) to q, k, and v, which reduces the number of elements (from 120 × 240 to 300).
The clipping position is the center of the bounding box. The clipped features are denoted by q̃, k̃, and ṽ.
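A minimal sketch of this clipping step is given below; the tensor names and shapes are our assumptions for illustration, not the authors' implementation. It gathers the dense token map only at the detected box centers, so at most a few hundred instance tokens remain.

```python
import torch

def clip_features(feat, centers):
    """feat: (D, H, W) dense token map; centers: (N, 2) integer (y, x) box centers."""
    D, H, W = feat.shape
    idx = centers[:, 0] * W + centers[:, 1]       # flatten (y, x) -> y * W + x
    return feat.view(D, H * W)[:, idx].t()        # (N, D) clipped tokens

feat = torch.randn(64, 120, 240)                  # e.g. D = 64 channels
centers = torch.tensor([[10, 20], [55, 180], [90, 7]])
print(clip_features(feat, centers).shape)         # torch.Size([3, 64])
```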
4: micro CNN γ(·).
We next extend q̃ and k̃ and obtain q̃' and k̃' by copying and concatenating their elements (Eq. <ref>) to form the token-wise pairs like the vanilla attention weight in transformers <cit.>.
The merged feature q̃' ⊗k̃'^⊤ is input to a micro CNN, denoted by γ(·) in Eq. <ref>, that is composed of two 1× 1 convolutions with the batch normalization and ReLU.
The micro CNN enriches the feature for the query-key process with the minimal cost.
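The following is a hedged sketch of γ(·); the channel sizes and the single-logit output are our assumptions, since the text only specifies two 1×1 convolutions with batch normalization and ReLU applied to the map of token-wise query-key pairs.

```python
import torch
import torch.nn as nn

class MicroCNN(nn.Module):
    def __init__(self, in_ch, hidden_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden_ch, kernel_size=1),
            nn.BatchNorm2d(hidden_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_ch, 1, kernel_size=1),   # one affinity logit per pair
        )

    def forward(self, qk_pairs):                      # (B, in_ch, N_t, N_{t-tau})
        return self.net(qk_pairs).squeeze(1)          # (B, N_t, N_{t-tau}) logits

gamma = MicroCNN(in_ch=128)
print(gamma(torch.randn(1, 128, 300, 300)).shape)     # torch.Size([1, 300, 300])
```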
5: Cross-softmax function ϕ(·).
The output of γ(·) is then input to the cross-softmax function ϕ(·) (Eq. <ref>).
ϕ(·) estimates the affinity matrix A∈ℝ^N_t× N_t-τ, where N_t and N_t-τ are the number of instances and tracklets, respectively.
ϕ(·) and A are defined as A = ϕ( r) = r_cols⊗ r_rows, where r_cols = ϕ_cols ( r) and r_rows = ϕ_rows ( r). ϕ_cols(·) and ϕ_rows(·) are the column- and row-softmax functions <cit.> that normalize all the columns and rows of the output of γ(·) (r = γ(q̃' ⊗k̃'^⊤)), respectively.
The meaning of A is as follows: if A_ij is large, the i_th instance at t and the j_th tracklet at t-τ are re-identified.
Importantly, A is a unimodal matrix <cit.> and can be used for the one-to-one matching of instances and tracklets. As a result, TicrossNet no longer requires the Hungarian algorithm, which is widely used for the reID process but is time-consuming and non-differentiable.
Note that Time3D <cit.> also computes the affinity matrix A for reID; however, they transform it with a matrix multiplication and also use the Hungarian algorithm. In contrast, we use A directly for reID and do not require the Hungarian algorithm.
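A minimal sketch of the cross-softmax (our own illustration of the definition above, not the released code): the affinity matrix is the element-wise product of a column-wise and a row-wise softmax, so a pair is strongly activated only if it wins in both directions, which is what makes the Hungarian algorithm unnecessary.

```python
import torch

def cross_softmax(r):
    """r: (N_t, N_prev) logits from the micro CNN -> affinity matrix A."""
    r_cols = torch.softmax(r, dim=0)   # normalize every column (over instances)
    r_rows = torch.softmax(r, dim=1)   # normalize every row (over tracklets)
    return r_cols * r_rows

A = cross_softmax(torch.randn(5, 4))
matched_instance = A.argmax(dim=0)     # argmax down each column re-identifies a tracklet
print(A.shape, matched_instance)
```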
To show the power of the proposed cross-softmax function, we compare the performance on the linear assignment problem.
Tab. <ref> shows that (1) a conventional method <cit.> cannot work well without the Hungarian algorithm, (2) despite this challenging setting (the large cost map), the cross-softmax function achieves 100% without the Hungarian algorithm, and (3) our cross-softmax function is faster than the conventional method with the Hungarian algorithm.
6: Output.
Finally, Eq. <ref> is applied.
β(·) inserts the clipped outputs (mentioned in 3: Clipped feature) from attention (Eq. <ref>) back into their original positions in the zero tensor 0∈ℝ^D × (120 × 240).
The final outputs are A and x'_t. A is used for re-identifying the instances at t with the tracklets at t-τ (“ReID result” in Fig. <ref>), using the argmax operation to each column and thresholding. x'_t has exactly the same format as x_t (output from the detector) and now is refined. x'_t is used for finding bounding boxes (“Refined detection result” in Figs. <ref> & <ref>).
7: Loss function.
The total loss is composed of five loss functions: L_total = λ_t-τ L_det( x_t-τ, y_det, t-τ) + λ_t L_det( x_t, y_det, t) + λ_t L_det( x'_t, y_det, t) + e^-ϵ_trk L_trk( A, y_a) + ϵ_trk.
L_det(·) is the loss for the base detector, where y_det, · is a ground truth label. L_det(·) is the same as that of CenterNet <cit.> in our case.
L_trk(·) is the loss for tracking and is the fast focal loss <cit.>, where y_a is a ground truth label and is an affinity-matrix-like binary matrix.
λ_· is a weight coefficient.
We use λ_t-τ=0.5 and λ_t=0.25.
If the reID feature is used (see 1: Detector and 2: Input token x_t), we add a trainable coefficient ϵ_trk <cit.>.
§.§ Inference
8: Track rebirth.
To address the occlusion problem and re-identify the tracklets lost for a short time, we modify the track rebirth <cit.> for TicrossNet.
The idea is simple: the affinity matrix A keeps a tracklet that is not re-identified with any instances at t, denoted by j. If j does not reappear in 30 frames, j is discarded. Note that our cross-attention mechanism successfully removes the attached modules for track rebirth, such as the boxIoU, Kalman filter, cosine similarity, and greedy matching <cit.>, and therefore, we can finish all the key processes of tracking in an end-to-end MOT manner on GPU only, unlike the conventional models.
9: Distance masking.
To improve reID accuracy, we newly introduce distance masking (different from <cit.>), which removes several irrelevant reID pairs; i.e., we ignore the reID pairs (in the sense of A) that include an instance and a tracklet located at a distance.
Specifically, the distance masking inserts -∞ to some elements in the output of micro CNN r = γ(q̃' ⊗k̃'^⊤).
Such elements are selected if the distance between query i and key j (in the sense of the center of the bounding box) is larger than a threshold, where i and j are the index of an instance and a tracklet, respectively. The threshold is defined as th_i = min( d_i) + α min(w_i, h_i) (α=0.4), where d_i is {d(i, 1), d(i, 2), …, d(i, j) }, d(i, j) is the Euclidean distance between i and j, and w_i and h_i are the width and height of the bounding box of i.
-∞ after the softmax ensures that the reID pair does not contribute to A.
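A hedged sketch of the distance masking is given below (function and variable names are ours): pairs whose centers are farther apart than th_i receive -∞ before the cross-softmax, so they cannot contribute to A.

```python
import torch

def distance_mask(r, centers_t, centers_prev, wh_t, alpha=0.4):
    """r: (N_t, N_prev) logits; centers_*: (N, 2) box centers; wh_t: (N_t, 2) widths/heights."""
    d = torch.cdist(centers_t, centers_prev)             # pairwise distances d(i, j)
    th = d.min(dim=1, keepdim=True).values \
         + alpha * wh_t.min(dim=1, keepdim=True).values  # th_i = min(d_i) + alpha * min(w_i, h_i)
    return r.masked_fill(d > th, float("-inf"))

r = torch.randn(3, 4)
print(distance_mask(r, torch.rand(3, 2) * 100, torch.rand(4, 2) * 100,
                    torch.tensor([[20., 40.], [15., 30.], [25., 50.]])))
```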
10: Memory sharing mechanism (MSM).
To reduce the cost of the detector and boost the inference speed, we modify MSM <cit.>.
MSM for TicrossNet keeps the refined token x'_t in the memory and reuses it as x_t'-τ, where t'=t+τ.
§ EXPERIMENTS
Datasets.
The test and training sets are as follows unless otherwise noted. The test set is two datasets for person tracking: test sets of MOT17 <cit.> and MOT20 <cit.>.
The training set is composed of eight datasets <cit.>: Caltech <cit.>, CityPerson <cit.>, ETHZ <cit.>, CUHK-SYSU <cit.>, PRW <cit.>, MOT17 (training set), MOT20 (training set), and CrowdHuman <cit.>.
Training takes 4 days on 14 GPUs (Tesla V100).
Evaluation metrics.
The standard CLEAR MOT metrics <cit.> are used: identity switch (IDs), ID F1 score (IDF1), and MOT accuracy (MOTA).
The GPU used for TicrossNet is a single Tesla V100 except for Fig. <ref> & Tab. <ref>.
Remark.
We acknowledge the importance of opening codes; however, our code includes a part of a commercial product, and thus we cannot publish it.
§.§ Benchmark Comparison & Discussion
Tab. <ref> is a benchmark leaderboard on MOT17 and MOT20.
TicrossNet achieves 32.6 FPS on MOT17 and 31.0 FPS on MOT20 even though the latter includes as many as >100 instances per frame, while the other models including the state-of-the-art (SOTA) MOT model, ByteTrack <cit.>, significantly slow down (except for the slow networks, i.e., TransCenter <cit.> and TransTrack <cit.>).
Note that the video frame rates of MOT17 and MOT20 are 30 and 25 FPS, respectively; thus, we can safely say that TicrossNet runs in real-time.
In terms of MOTA, IDF1, IDs, TicrossNet performs similarly to MOTR, which is the only end-to-end MOT model other than TicrossNet, but TicrossNet is significantly faster.
However, this high speed is at the expense of MOTA, IDF1, and IDs, compared with the other models.
Let us discuss the reasons.
First, our empirical observation tells us that TicrossNet may not fully utilize motion information that may be captured by the Kalman filter in FairMOT.
For example, two people walking side-by-side or crossing each other are difficult to track, and TicrossNet sometimes fails to track them, while FairMOT does not. Thus, appropriately involving motion information may improve the performance of TicrossNet.
Second, the performance gaps in MOTA, IDF1, and IDs between ByteTrack and TicrossNet partly come from the difference in the detectors: YOLOX <cit.> and CenterNet, respectively.
In fact, our additional experiment shows that MOTA of TicrossNet with CenterNet and YOLOX on MOT17 half set <cit.> is 62.6% and 68.4% (improved), respectively; however, that of ByteTrack is 75.8% <cit.> and still outperforms TicrossNet.
The gap may be closed if we use another base detector and/or adapt it to our pipeline.
Nonetheless, in addition to the real-time speed, TicrossNet has an advantage that more than makes up for the lower MOTA, IDF1, and IDs: the robustness to the number of instances (N_t).
Fig. <ref> shows N_t vs. module latency.
For fair comparison, the same GPU (RTX 2080 Ti) is used, while Tab. <ref> is not. We pick out three fast models from Tab. <ref>.
Fig. <ref> shows that the computational cost of TicrossNet does not increase significantly even when N_t increases, unlike the other fast models including the SOTA MOT model, ByteTrack.
This is because (1) TicrossNet can process all the key processes of MOT on GPU unlike the others, and (2) TicrossNet does not require the attached modules for tracking that tend to significantly increase computational cost when N_t is large, as shown in Fig.<ref>.
Therefore, this result proves the robustness of TicrossNet to N_t.
As a result, it does not have to change the size of the base detector, depending on N_t, as is often done by the other models.
Ablation performance.
Tab. <ref> shows an ablation study.
The cross-softmax dramatically improves MOTA, IDF1, IDs, and even FPS.
The micro CNN, reID feature, and distance masking also improve MOTA, IDF1, and IDs with minimal additional computational cost, as expected.
§ CONCLUSION
TicrossNet is an end-to-end MOT model composed of a base detector and a single cross-attention module only and does not need any attached modules.
TicrossNet runs in real-time even when the number of instances is > 100.
Also, TicrossNet is robust to the change in the number of instances; thus, it does not have to change the size of the base detector, depending on the number of instances, as is often done by other models for real-time processing.
|
http://arxiv.org/abs/2307.04433v1 | 20230710091541 | Holographic Gubser-Rocha model does not capture all the transport anomalies of strange metals | [
"Yongjun Ahn",
"Matteo Baggioli",
"Hyun-Sik Jeong",
"Keun-Young Kim"
] | cond-mat.str-el | [
"cond-mat.str-el",
"hep-th"
] |
|
http://arxiv.org/abs/2307.06028v1 | 20230712092019 | Disassociation of a one-dimensional cold molecule via quantum scattering | [
"Wen-Liang Li",
"Hai-Jing Song",
"Tie-Ling Song",
"D. L. Zhou"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"physics.atom-ph"
] |
Institute of Physics, Beijing National Laboratory for
Condensed Matter Physics,
Chinese Academy of Sciences, Beijing
100190, China
School of Physical Sciences, University of Chinese
Academy of Sciences, Beijing 100049, China
National Innovation Institute of Defense Technology, AMS, Beijing 100071, China
Department of fundamental Science, Space Engineering University, Beijing 101416, China
[][email protected]
Institute of Physics, Beijing National Laboratory for
Condensed Matter Physics,
Chinese Academy of Sciences, Beijing
100190, China
School of Physical Sciences, University of Chinese
Academy of Sciences, Beijing 100049, China
Motivated by the recent experimental developments on ultracold molecules and atoms, we propose a simplest theoretical model to address the disassociation, reflection and transmission probability of a 1-dimensional cold molecule via quantum scattering. First, we give the Born approximation results in the weak interaction regime. Then, employing the Lippmann-Schwinger equation, we give the numerical solution and investigate the disassociation's dependence on the injection momentum and the interaction strengths. We find that the maximum disassociation rate has a limit as increasing the interaction strengths and injection momentum. We expect that our model can be realized in experiments in the near future.
Disassociation of a one-dimensional cold molecule via quantum scattering
D. L. Zhou
August 12, 2023
========================================================================
§ INTRODUCTION
Laser cooling makes atoms or molecules ultracold, e.g., the temperature can reach the nano-Kelvin regime <cit.>, which allows quantum features of the atoms or molecules to emerge that are usually hidden by thermal noise from the environment. Thus ultracold atoms or molecules become an ideal platform for the investigation of fundamental quantum mechanics problems, quantum chemistry, precise quantum metrology, quantum simulations, and even quantum computing <cit.>.
Among these applications, ultracold chemistry is closely related with laser cooled atoms or molecules <cit.>. Along this direction, one dimensional ultracold atoms/molecules, which are formed by a tight confinement with a wave guide <cit.>, play a crucial rule due to its relatively simple theoretical model with rich physics <cit.>.
Currently, different kinds of molecules formed from several atoms have been investigated intensively in the literature <cit.>. However, the converse process, i.e., the disassociation of molecules into atoms, deserves further study to deepen our understanding. Here we propose a minimal theoretical model to address the disassociation probability of a one-dimensional cold molecule, and investigate its dependence on the injection momentum and the interaction strengths, which can be arbitrarily tuned via the Feshbach resonance technique <cit.>. Our results show that the maximum disassociation rate approaches a limit as both the injection momentum and the interaction strengths increase.
This article is structured as follows: In <Ref> we introduce our theoretical model of the scattering problem and give the Hamiltonian. In <Ref> we give the eigenstates and the in state of our scattering. Then we solve the model by applying Born approximation in <Ref> and integral equation method in <Ref>, and show our numerical results. Finally, we present our discussions and conclusions in <Ref>.
§ THE MODEL
We consider a one-dimensional molecule, which is the unique weakly bound state formed by an attractive one-dimensional contact interaction. Then the one-dimensional molecule scatters with a heavy atom. The Hamiltonian of our system is modeled by
H = p_1^2/2 m_1 + p_2^2/2 m_2 - αδ(x_2-x_1) + γ_1δ(x_1) + γ_2δ(x_2),
where α, γ_1, γ_2 > 0. Here we assume that the position of the heavy atom is at zero, and the motion of the heavy atom is neglected.
To solve the scattering problem, we split the Hamiltonian into two parts:
H = H_0 + V
where
H_0 = P^2/2 M + p^2/2 μ - αδ(x),
V = γ_1δ(X - r_2 x) + γ_2δ(X + r_1 x),
with
r_1 = m_1/M,
r_2 = m_2/M,
M = m_1 + m_2 = (r_1 + r_2)M,
μ = m_1 m_2/M = r_1 r_2 M,
κ = √(r_1 r_2) = √(μ/M),
X = m_1 x_1 + m_2 x_2/M = r_1 x_1 + r_2 x_2,
x = x_2 - x_1,
P = M Ẋ = p_1 + p_2,
p = μẋ = r_1 p_2 - r_2 p_1.
§ THE IN STATE OF OUR SCATTERING
In this section, we will examine the in state of our scattering. Let us start with the eigen problem of H_0, which can be divided into two parts:
H_0 = H_0^c + H_0^r,
where
H_0^c = P^2/2M,
H_0^r = p^2/2 μ - q/μδ(x)
with q=μα. Note that H_0^c is the kinetic energy of the center of mass for the two atoms, and H_0^r is the energy of their relative motion. Thus [H_0^c, H_0^r]=0, and the eigen problem of H_0 can be solved by finding the common eigenstates of H_0^c and H_0^r.
The eigen equation of H_0^c is given by
H_0^c |P⟩ =P^2/2M |P⟩,
where the eigen wave function is
XP = 1/√(2π) e^iPX.
The eigen equation of H_0^r is
H_0^r |ϕ_b⟩ = E_b |ϕ_b⟩,
H_0^r |ϕ_p+⟩ = E_p |ϕ_p+⟩,
where |ϕ_b⟩ is the unique bound state with energy E_b= - q^2/2μ, and the wave function for the bound state
⟨ x|ϕ_b⟩ = √(q) e^-q |x|.
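As a quick symbolic check (a sketch that is not part of the derivation), one can verify that ϕ_b is normalized and satisfies the jump condition ϕ_b'(0^+) - ϕ_b'(0^-) = -2q ϕ_b(0) imposed by the attractive contact potential, which encodes E_b = -q^2/2μ:

```python
import sympy as sp

x = sp.symbols("x", real=True)
q = sp.symbols("q", positive=True)

right = sp.sqrt(q) * sp.exp(-q * x)   # phi_b for x > 0
left = sp.sqrt(q) * sp.exp(q * x)     # phi_b for x < 0

norm = 2 * sp.integrate(right**2, (x, 0, sp.oo))             # even function: expect 1
jump = sp.diff(right, x).subs(x, 0) - sp.diff(left, x).subs(x, 0)
print(norm, sp.simplify(jump + 2 * q * right.subs(x, 0)))    # 1  0
```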
The eigenstate |ϕ_p+⟩ is the scattering state with respect to H_0^r with energy E_p=p^2/2μ, and the wave function is
⟨ x|ϕ_p+⟩ =
1/√(2π)[e^i p x + i q /p - iq e^- i p xθ(-x)
+ p/p - i q e^i p xθ(x) ], p>0,
1/√(2π)[e^i p x + i q /-p - iq e^- i p xθ(x)
+ -p/-p - i q e^i p xθ(-x) ], p<0.
Here we observe that ⟨ x|ϕ_(-p)+⟩ = ⟨ -x|ϕ_p+⟩, i.e., ⟨ -x|ϕ_p+⟩ is also an eigenstate of H_0^r, which results from the symmetry of space inversion of H_0^r, i.e. the Hamiltonian is invariant under x→ -x. In the Hilbert space of the relative motion, we can show the following complete relation
∫_-∞^+∞ dp ϕ_p+ + ϕ_b = 1.
Now we are ready to give the in state of our scattering
|Ψ_in⟩ = |P⟩⊗|ϕ_b⟩≡ |P, ϕ_b⟩,
which describes a one-dimensional molecule in the bound state |ϕ_b⟩ scattering on the potential V with the momentum of the mass center P.
§ BORN APPROXIMATION IN THE MOLECULE CHANNEL
In this section, we will apply the Born approximation to our scattering problem. We start with the Lippmann-Schwinger equation:
|Φ_P,b^+⟩ = |P,ϕ_b⟩ + G^+(E_P) V
|P,ϕ_b⟩
= |P,ϕ_b⟩ + G_0^+(E_P) V
|Φ_P,b^+⟩,
where the Green function and the free Green function are given by
G^+(E) =1/E - H + iϵ,
G_0^+(E) =1/E - H_0 + iϵ.
Therefore the S matrix in the molecule channel is
⟨ Q,ϕ_b|S|P,ϕ_b⟩ = ⟨Φ_Q,b^-|Φ_P,b^+⟩
= δ(P-Q) -2i πδ(E_Q-E_P)
⟨ Q,ϕ_b|V|Φ_P,b^+⟩.
The out scattering state in the molecule channel is
|Ψ_out⟩_b = ∫_-∞^∞ dQ |Q,ϕ_b⟩⟨ Q,ϕ_b|S|P,ϕ_b⟩
= [ 1 - i 2π M/P⟨ P,ϕ_b|V|Φ_P,b^+⟩] |P, ϕ_b⟩
- i 2π M/P⟨ -P,ϕ_b|V|Φ_P,b^+⟩ |-P, ϕ_b⟩.
Then the reflection rate and the transmission rate for the molecule are
R_b = 4 π^2 M^2/P^2|⟨ -P,ϕ_b|V|Φ^+_P,b⟩|^2,
T_b = 1 + 4 π^2 M^2/P^2|⟨ P,ϕ_b|V|Φ^+_P,b⟩|^2
+ 4π M/PIm⟨ P,ϕ_b|V|Φ^+_P,b⟩.
Therefore in the Born approximation up to second order of V:
R_b = 4 π^2 M^2/P^2|⟨ -P,ϕ_b|V|P,ϕ_b⟩|^2,
T_b = 1 + 4 π^2 M^2/P^2|⟨ P,ϕ_b|V|P,ϕ_b⟩|^2
+ 4π M/PIm⟨ P,ϕ_b|V G_0^+(E_P) V|P, ϕ_b⟩.
Note that
G_0^+(E_P) = 1/E_P + E_b - H_0 + i ϵ
= P.V.1/E_P + E_b - H_0 - i πδ(E_P + E_b - H_0),
where P.V. denotes the Cauchy principal value.
Thus
Im⟨ P,ϕ_b|V
G_0^+(E_P) V|P, ϕ_b⟩
= - π⟨ P,ϕ_b|V
δ(E_P + E_b - H_0) V|P, ϕ_b⟩
= - π∫ dQ δ(E_P - E_Q)
|⟨ Q,ϕ_b|V|P,ϕ_b⟩|^2 - π∫ dQ ∫ dp δ(E_P + E_b - E_Q - E_p)
|⟨ Q,ϕ_p+|V|P,ϕ_b⟩|^2
= -π M/P[ |⟨ P,ϕ_b|V|P,ϕ_b⟩|^2
+ |⟨ -P,ϕ_b|V|P,ϕ_b⟩|^2 ]
- ∫_-p_max^p_max dp π M/Q(p)[ |⟨ Q(p),ϕ_p+|V|P,ϕ_b⟩|^2 +
|⟨ -Q(p),ϕ_p+|V|P,ϕ_b⟩|^2 ],
where Q(p) = √(p_max^2- p^2)/κ with p_max=√(κ^2 P^2 - q^2).
Hence, we find
T_b = 1 - R_b - C_nb,
where
C_nb = 4 π^2 M^2/P∫_-p_max^p_max dp [ |⟨ Q(p),ϕ_p+|V|P,ϕ_b⟩|^2 +
|⟨ -Q(p),ϕ_p+|V|P,ϕ_b⟩|^2 ]/Q(p).
Eq. (<ref>) implies that C_nb is the disassociation rate, i.e., the rate that the molecule becomes two atoms after the scattering. In addition, only when P>q/κ is C_nb positive.
By detailed calculations, we obtain
R_b = M^2 q^4/P^2[ γ_1/(q^2 +
r_2^2 P^2) + γ_2/(q^2 +
r_1^2 P^2) ]^2,
and
Q,ϕ_p+VP,ϕ_b^2
= q/2π^316(P-Q)^2 p^2/p^2 +
q^2
( 1/p + (P-Q)r_2^2 + q^2 r_2γ_1/(P-Q)r_2 - p^2 + q^2^2.
+ 1/p + (P-Q)r_1^2 + q^2 r_1γ_2/(P-Q)r_1 - p^2 + q^2^2
+ r_2γ_1/(P-Q)r_2 - p^2 + q^2 r_1γ_2/(P-Q)r_1 - p^2 + q^2
. 2p + (P-Q)r_2p +
(P-Q)r_1 + 2 q^2/p + (P-Q)r_2^2 + q^2p + (P-Q)r_1^2 + q^2),
which can be inserted into Eq. (<ref>) to numerically calculate the disassociation rate C_nb.
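A sketch of this numerical step is given below; the parameters are illustrative and the function m2 is a hypothetical placeholder for the squared matrix element |⟨±Q(p),ϕ_p+|V|P,ϕ_b⟩|^2, not the actual expression. The 1/Q(p) endpoint singularity at p=±p_max is integrable and handled by adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad

M, mu, q, P = 2.0, 0.5, 1.0, 3.0         # illustrative parameters
kappa = np.sqrt(mu / M)
p_max = np.sqrt(kappa**2 * P**2 - q**2)  # disassociation channel open only if real

def m2(Q, p):
    """Placeholder for |<Q, phi_p+ | V | P, phi_b>|^2 (not the actual expression)."""
    return np.exp(-(Q - P)**2) * p**2 / (p**2 + q**2)

def integrand(p):
    Q = np.sqrt(p_max**2 - p**2) / kappa
    return (m2(Q, p) + m2(-Q, p)) / Q

C_nb, err = quad(integrand, -p_max, p_max)
C_nb *= 4 * np.pi**2 * M**2 / P
print(C_nb, err)
```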
Now we are ready to present our numerical results for the transmission rate T_b, the reflection rate R_b, and the disassociation rate C_nb in the first-order Born approximation in Fig. <ref>. In this case, the parameters are m_1=m_2=1.0, γ_1=γ_2=0.2, α=2.0. Due to energy conservation, the disassociation process occurs only when the center-of-mass momentum P>2. As the momentum P increases, the transmission rate T_b increases while the reflection rate R_b decreases. In particular, the disassociation rate C_nb takes its maximum ≃0.05 at P≃2.9.
§ INTEGRAL EQUATION METHOD
Note that the Born approximation is valid only when the momentum P is large and the interaction strengths γ_1 and γ_2 are small. To obtain more general information on the disassociation process, we resort to the direct numerical solution of the Lippmann-Schwinger equation.
From Eqs. (<ref>)(<ref>), we need to calculate V|Φ^+_P,b⟩, which can be obtained from the Lippmann-Schwinger equation (<ref>) and satisfies
( 1 - V G_0^+(E_P) ) V |Φ^+_P,b⟩ = V |P, ϕ_b⟩.
Therefore we arrives at the integral equation
[ | Φ^1⟩; |Φ^2⟩ ]
-
[ G^11 G^12; G^21 G^22 ][ γ_1 0; 0 γ_2 ][ | Φ^1⟩; |Φ^2⟩ ]
=
[ |ϕ^1⟩; |ϕ_2⟩ ],
and the amplitudes of reflection rate and the transmission rate are given by
r_b = i
2π M/P[ ⟨ψ^1 | ⟨ψ^2 | ][ γ_1 0; 0 γ_2 ][ |Φ^1⟩; |Φ^2⟩ ],
t_b = 1- i
2π M/P[ ⟨ϕ^1| ⟨ϕ^2| ][ γ_1 0; 0 γ_2 ][ |Φ^1⟩; |Φ^2⟩ ],
where
⟨ y|Φ^1⟩ = ⟨ r_2 y,y|Φ^+_P,b⟩,
⟨ y| Φ^2⟩ = ⟨- r_1y,y|Φ^+_P,b⟩,
⟨ y| ϕ^1⟩ = ⟨ r_2y,y|P,ϕ_b⟩,
⟨ y| ϕ^2⟩ = ⟨ - r_1 y,y|P,ϕ_b⟩,
⟨ y| ψ^1⟩ = ⟨ r_2y,y|-P,ϕ_b⟩,
⟨ y| ψ^2⟩ = ⟨ -r_1y,y|-P,ϕ_b⟩,
⟨ x|G^11|y⟩ = ⟨ r_2x,x|G_0^+(E_p)|r_2y,y⟩,
⟨ x|G^12|y⟩ = ⟨ r_2x,x|G_0^+(E_p)|-r_1y,y⟩,
⟨ x|G^21|y⟩ = ⟨- r_1x,x|G_0^+(E_p)|r_2y,y⟩,
⟨ x|G^22|y⟩ = ⟨ -r_1x,x|G_0^+(E_p)|-r_1y,y⟩.
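A hedged Nyström-type sketch of how this block linear system can be solved on a quadrature grid is given below; green_block and phi_in are placeholders for the explicit kernels and inhomogeneities defined above (the toy versions here only exercise the solver).

```python
import numpy as np

def solve_blocks(green_block, phi_in, gamma1, gamma2, y, w):
    """y: quadrature nodes (n,), w: weights (n,); solves (1 - G Gamma W) Phi = phi."""
    n = len(y)
    X, Y = np.meshgrid(y, y, indexing="ij")
    G = np.block([[green_block(1, 1, X, Y), green_block(1, 2, X, Y)],
                  [green_block(2, 1, X, Y), green_block(2, 2, X, Y)]])
    Gamma_W = np.diag(np.concatenate([gamma1 * w, gamma2 * w]))
    rhs = np.concatenate([phi_in(1, y), phi_in(2, y)])
    Phi = np.linalg.solve(np.eye(2 * n, dtype=complex) - G @ Gamma_W, rhs)
    return Phi[:n], Phi[n:]                       # (Phi^1, Phi^2) on the grid

# toy usage with dummy kernels, just to exercise the solver
nodes = np.linspace(-10.0, 10.0, 201)
weights = np.full_like(nodes, nodes[1] - nodes[0])
toy_G = lambda a, b, X, Y: 0.1j * np.exp(-np.abs(X - Y))
toy_phi = lambda a, y: np.exp(1j * y)
Phi1, Phi2 = solve_blocks(toy_G, toy_phi, 0.5, 0.5, nodes, weights)
print(Phi1.shape, Phi2.shape)
```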
§.§ Free Green function
To numerically evaluate the integral equation (<ref>), we need to calculate the free Green function
⟨ X,x| G_0^+(E_P)|Y,y⟩ = ⟨ X,x| 1/E_P+E_b - H_0 + i ϵ |Y,y⟩
= G_0^(I) + G_0^(II),
where
G_0^(I) = ∫_-∞^∞ dQ ⟨ X,x|Q,ϕ_b⟩⟨ Q, ϕ_b|Y,y⟩/E_P - E_Q + i ϵ,
G_0^(II) = ∫ dQ ∫ dp ⟨ X,x|Q,ϕ_p^+⟩⟨ Q, ϕ_p^+|Y,y⟩/E_P - E_Q + E_b - E_p + i ϵ.
By detailed calculations, the free green function is given by
G_0^(I) = e^-q(|x|+|y|)-i M q e^i P|X-Y|/P,
G_0^(II) = κ M/2π i∫_-∞^∞ dp [ (e^ip|x-y| + iq/p-iq e^ip(|x|+|y|))
× e^i |X-Y|/κ√(κ^2 P^2 - q^2 - p^2)/√(κ^2 P^2 - q^2 - p^2 + iϵ)].
To further simplify the calculation of G_0^II, let
p_0 = √(|κ^2 P^2 - q^2|),
σ = κ^2 P^2 - q^2,
q_0 = q/p_0,
α = p_0 |x-y|,
β = p_0 |X-Y|/κ,
η = p_0 (|x| + |y|),
z = p/p_0.
Then the second term in the free Green function can be rewritten as
G_0^(II) = κ M/π i∫_0^∞ dz [cos(α z) - (q_0^2 cos(η z) + q_0 z sin(η z))/(z^2 + q_0^2)] e^i β√(σ - z^2)/√(σ - z^2).
It can be simplified as follows:
Case i: When κ^2P^2-q^2<0, σ=-1, and then
G_0^(II) = - κ M/π∫_0^∞ du [cos(αsinh u) - (q_0^2 cos(ηsinh u) + q_0 sinh u sin(ηsinh u))/(sinh^2 u + q_0^2)] e^-βcosh u.
Case ii: When κ^2P^2-q^2>0, σ=1, and then
G_0^(II) = - iκ M/π∫_0^π/2 du [cos(αsin u) - (q_0^2 cos(ηsin u) + q_0 sin u sin(ηsin u))/(sin^2 u + q_0^2)] cos( βcos u)
+ κ M/π∫_0^π/2 du [cos(αsin u) - (q_0^2 cos(ηsin u) + q_0 sin u sin(ηsin u))/(sin^2 u + q_0^2)] sin( βcos u)
- κ M/π∫_0^∞ du [cos(αcosh u) - (q_0^2 cos(ηcosh u) + q_0 cosh u sin(ηcosh u))/(cosh^2 u + q_0^2)] e^- βsinh u.
Case iii: When κ^2 P^2 -q^2=0, σ=0, and then
G_0^(II) = - κ M/π∫_0^∞ dp [cos(p|x-y|) - (q^2 cos(p(|x|+|y|)) + q p sin(p(|x|+|y|)))/(p^2 + q^2)] e^- p|X-Y|/κ/p.
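For case i the integral is exponentially damped and straightforward to evaluate numerically; the sketch below (with illustrative parameters, not those of the simulations) uses adaptive quadrature on a truncated u-interval.

```python
import numpy as np
from scipy.integrate import quad

def g0_II_case1(X, x, Y, y, P, q, kappa, M):
    """Case i: kappa^2 P^2 - q^2 < 0; evaluates the damped integral representation."""
    p0 = np.sqrt(abs(kappa**2 * P**2 - q**2))
    q0, alpha = q / p0, p0 * abs(x - y)
    beta, eta = p0 * abs(X - Y) / kappa, p0 * (abs(x) + abs(y))

    def integrand(u):
        s, c = np.sinh(u), np.cosh(u)
        bracket = np.cos(alpha * s) - (q0**2 * np.cos(eta * s)
                                       + q0 * s * np.sin(eta * s)) / (s**2 + q0**2)
        return bracket * np.exp(-beta * c)

    val, _ = quad(integrand, 0.0, 30.0)   # e^{-beta cosh u} suppresses the tail
    return -kappa * M / np.pi * val

print(g0_II_case1(X=0.3, x=0.5, Y=-0.2, y=0.1, P=1.0, q=2.0, kappa=0.5, M=2.0))
```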
§.§ Numerical results
Now we are ready to perform the numerical solution of the integral equation (<ref>) to obtain |Φ^1⟩ and |Φ^2⟩, and calculate the reflection rate R_b and the reflection rate T_b via Eqs. (<ref>)(<ref>). Then the disassociation rate can be obtained by C_nb=1-R_b-T_b in Fig <ref>, where the parameters are given by m_1=m_2=1.0, γ_1=γ_2=0.5, α=2.0. Compared with the case calculated in the Born approximation, we take larger scattering strengths γ_1 and γ_2 while keeping the other parameters invariant. As expected, the disassociation channel opens only when the mass-center momentum P>2. With the increasing of the momentum P, the transmission rate T_b increases while the reflection rate R_b decreases. The disassociation rate C_nb takes its maximum ≃0.1 at P≃3.2. We also show the Born approximation results in the same parameter setting, which become increasingly accurate with the integral results as P increasing, just as one can expect.
We also care about that how the parameters influence the maximum of the disassociation rate. The disassociation rate depends on the mass of each particle, the interaction strengths γ_1,γ_2 and center-of-mass momentum P for a fixed bound strength α. In <Ref>, we show when the disassociation rate takes its maximum C_nb^max under different parameter settings. The solid black lines in <Ref> show C_nb^max with equal interaction strengths γ_1=γ_2=γ, and equal mass m_1 = m_2 = 1.0, while the dashing lines show C_nb^max with m_1 = 0.5, m_2 = 1.5 and different interaction strengths. The bound strength is α = 2.0. <Ref> shows the conditions of P and γ when C_nb = C_nb^max, which means that in order to reach the maximum disassociation rate, one should increase both P and γ following the relations revealed in <Ref>. <Ref> gives the values of C_nb^max under different parameter settings changing with the interaction strength γ, from which we can see that they increase as γ increasing and asymptotically reach some limits. For equal mass and equal interaction strengths, the limit of C_nb^max is 0.5. For γ_1= 5 γ_2, the limit is about 0.72, and for γ_2 = 5 γ_1, the limit is about 0.75. For γ_1=0 or γ_2 =0, the limit approximates to 1. In conclusion, if one want to reach higher disassociation rate, one would tune stronger interaction strengths and center-of-mass momentum following some similar relations given in <Ref> and a larger difference between interaction strengths γ_1 and γ_2. In fact this maximum value C_nb^max is irrelevant to the coupling strength α in this situation because this can be reduced to a scaling problem.
For different interaction strengths (γ_1 ≠γ_2), one would expect that a larger difference between γ_1 and γ_2 induces a larger disassociation rate. <Ref> shows this effect in more detail, where we keep γ_1 + γ_2 = 1.0 in <Ref> to isolate the influence of the difference between γ_1 and γ_2. <Ref> also shows that the disassociation rate is higher when the lighter particle in the molecule has the weaker interaction strength than when it has the stronger one.
When the binding of the molecule is strong enough, the molecule does not disassociate in the regime of low injection center-of-mass momentum P and behaves as a single particle. The reflection rate R_single and transmission rate T_single of a single particle scattered by a δ potential are well known from quantum tunneling <cit.>; in our problem they read:
R_single = M^2(γ_1 + γ_2)^2/P^2 + M^2(γ_1 + γ_2)^2,
T_single = P^2/P^2 + M^2(γ_1 + γ_2)^2.
<Ref> shows the reflection and transmission rates of the molecule compared with a single particle for P<10, where the parameters are give by m_1=m_2=1.0, γ_1=γ_2=0.5, α=12.0.
§ DISCUSSION AND CONCLUSION
In this paper, a simple model with contact interactions, which contains the basic process of disassociation of a one-dimensional molecule, is proposed to describe the corresponding system of ultracold atoms. The first-order Born approximation is applied to obtain the basic physical picture of the process: the disassociation process can occur only when the kinetic energy associated with the injection center-of-mass momentum P is larger than the ionization energy. To further validate this picture, we develop a numerical method to solve the integral equation of quantum scattering. The maximum disassociation rate increases with the interaction strengths and the injection center-of-mass momentum, and it also increases with the difference between the interaction strengths. Under different parameter settings, the maximum disassociation rate approaches different limits as the interaction strengths and injection momentum increase. We expect that our model can be realized in experiments with ultracold atoms and molecules in the near future.
This work is supported by National Key Research and Development Program of China (Grants No. 2021YFA0718302 and No. 2021YFA1402104), National Natural Science Foundation of China (Grant No. 12075310), and the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB28000000).
apsrev4-2
|
http://arxiv.org/abs/2307.04044v1 | 20230708204524 | When greediness and self-confidence meet in a social dilemma | [
"Chaoqian Wang",
"Wenqiang Zhu",
"Attila Szolnoki"
] | physics.soc-ph | [
"physics.soc-ph",
"cond-mat.stat-mech",
"cs.GT",
"nlin.CG"
] |
When greediness and self-confidence meet in a social dilemma
1]Chaoqian Wang
[email protected]
Conceptualization; Methodology; Writing
2]Wenqiang Zhu
Methodology; Validation
3]Attila Szolnoki
[1]
[cor1]Corresponding author
[email protected]
Conceptualization; Validation; Writing
[1]Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA
[2]Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
[3]Institute of Technical Physics and Materials Science, Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
A greedy personality is usually accompanied by arrogance and confidence. This work investigates the cooperation success condition in the context of biased payoff allocation and self-confidence. The first component allows the organizer in a spatial public goods game to receive a different proportion of goods than other participants. The second aspect influences the micro-level dynamics of strategy updates, wherein players can maintain their strategy with a certain weight. Analytical results are obtained on square lattices under the weak selection limit. If the organizer attempts to monopolize the public goods, cooperation becomes more attainable. If the confidence increases, cooperation is inhibited. Consequently, these elements have conflicting effects on cooperation, and their simultaneous presence can result in a heterogeneous change of the critical synergy factor. Our theoretical findings underscore the subtle implications of a mutual trait that may manifest as greediness or self-confidence under different circumstances, which are validated through Monte Carlo simulations.
* Examining biased allocation and self-confidence in spatial public goods game
* Calculating cooperation success conditions in weak selection limit
* Conflicting effects yield a non-monotonic critical synergy factor
* Analytical results validated via Monte Carlo simulations
Public goods game Weak selection Biased allocation Self-confidence Evolutionary game theory
§ INTRODUCTION
The dynamism of various facets of reciprocity—be they direct, indirect, or network reciprocity—have been unequivocally demonstrated to wield significant influence over system behaviors, particularly when there is a need to sustain costly cooperation among self-interested, or more crudely put, selfish agents <cit.>. These mechanisms, chiefly concerned with pairwise interactions among players, have been observed to incorporate higher-order interactions <cit.>. The public goods game (PGG) is an illustrative example of such complex interactions, involving simultaneous decision-making processes through multi-body or group interactions <cit.>. Players may opt to contribute or abstain from contributing to a common pool, reaping the benefits of the overall contributions regardless of their individual decisions. In a spatial population, where players engage in limited yet enduring interactions with others, reciprocity manifests on an additional level <cit.>. Here, the intricate web of relations among agents means a player is not limited to a single game, but finds themselves immersed in several others. A pragmatic approach for a player would be to partake in the group where they serve as the central agent, encircled by proximate neighbors. Concurrently, said player also engages in games instigated by their neighbors. Consequently, a player positioned on a node with a k degree finds themselves partaking in G=k+1 PGGs. This setup could potentially underpin a reciprocal mutual aid system which promotes a degree of cooperation.
Assuming the most rudimentary scenario where players consistently maintain their strategies across all the games they participate in and disregard strategy diversity <cit.>, there still exists considerable flexibility in the implementation of a realistic model. To elaborate, groups do not necessarily correspond to a player, who may be more incentivized to invest effort in a venture they have personally initiated. Such dedication could be recognized and appreciated by the others. This could be simply expressed by allocating enhanced contributions in a biased manner. Specifically, a 0≤ w_L ≤ 1 fraction of the total income is allotted to the central player while the remaining 1-w_L is distributed among the participating neighbors. The w_L=1/G scenario represents the traditional PGG model, where the income is equally distributed among all participants. The w_L=0 limit corresponds to the situation where the central player allocates all income to the neighbors. While this may initially seem irrational, there have been empirical studies indicating the existence of similar practices in certain tribes where partners generally offer a larger share to an associate in an ultimatum game, signaling their honest intentions <cit.>. The other extreme case, w_L=1, denotes that the central player retains all the benefits. Interestingly, even this seemingly greedy scenario can reflect a cooperative intent and represent a form of mutual aid <cit.>. One can contemplate a barn constructed by an entire Amish community, yet later solely utilized by a single farmer. This study aims to explore the potential ramifications when players exhibit a specific w_L value.
The unequal distribution of collective benefits has previously been the subject of extensive investigation <cit.>. For instance, how income is allocated remains a central issue in the ultimatum game <cit.>. For the current study, however, the diverse allocation within a group comprising several participants is of greater relevance. In certain scenarios, the individual portion accrued by a participant can be strongly contingent on their investment capability <cit.>. Additionally, the heterogeneous interaction topology is a critical aspect where income allocation is proportional to an agent's weight (degree) in the graph <cit.>. In more sophisticated model configurations, players possess an extra skill and keep track of their previous round earnings <cit.>. Yet, our current model is straightforward, emphasizing the fundamental element of biased allocation. For example, it can be applied to regular graphs where players have equal-sized neighborhoods, thus participating in an equal number of joint groups. Moreover, we presuppose homogeneous players who behave similarly and apply a pre-established allocation policy in each case. This characteristic could prove to be crucial, as it has been widely observed that a heterogeneous population, wherein players are unequal, could serve as a mechanism that encourages cooperation <cit.>.
Just as players may view their groups differently, their attitude toward their own strategies can also be distinct. For example, they may be reluctant to alter their existing strategies, a phenomenon that has been explained from various perspectives: it could be the result of a specific cost attached to change <cit.>, or it could be interpreted as a form of self-confidence <cit.>. This strategy-change inertia, or updating passivity, has been identified as a separate mechanism that significantly influences the evolutionary process <cit.>. To quantify this effect, we introduce a weight parameter 0≤ w_R ≤ 1, which determines the likelihood of retaining the original strategy during the elementary dynamical process. At w_R=0, the effect is completely absent and we recover the traditional death–birth rule <cit.>. In the opposite extreme, w_R=1, no real strategy evaluation takes place because all agents rigidly keep their original strategy, although the theoretical condition for cooperation success converges to that of the birth–death rule as w_R→ 1 <cit.>. In between these extremes, at w_R=1/G where G denotes the group size, the strategy of the central player and the strategies of the neighbors carry equal weight, and we recover the imitation rule <cit.>.
This work considers the aforementioned effects simultaneously within the framework of the PGG, with players situated on a square lattice. It is important to note that biased allocation, which can also be interpreted as autocratic behavior, and indifference towards other players representing different strategies may stem from a shared trait. If an individual behaves more autocratically and retains more of the public goods when organizing a group, they may also display arrogance, meaning they have high self-regard and are not prone to learning from others' strategies. Therefore, the weight factors representing these traits can be similar in size. Moreover, all the mentioned ingredients of the proposed model are strategy-neutral, so it is not obvious in advance whether they support cooperation or not. Specifically, we assume the analytically feasible weak selection limit, where payoff values only slightly alter the reproductive fitness of competing strategies.
Our main goal is to determine the critical synergy factor for the success of cooperation based on the control parameters and to uncover the consequences of their simultaneous presence. In the next section, we will define our model, and our primary findings will be presented in Section <ref>. Monte Carlo simulations were also conducted to validate and confirm our theoretical results. The comparisons will be presented in Section <ref>. Our primary conclusions are summarized in Section <ref>, where potential implications will also be discussed.
§ MODEL
The model utilizes an L× L square lattice with periodic boundary conditions to describe a spatial population; hence, the total population size is N=L^2. Each individual, referred to as an agent, inhabits a vertex of the lattice and forms a group of G=k+1 members, comprising itself and its k neighbors. Consequently, each agent partakes in 1+k groups, either organized by itself or by its neighbors. The group formed by agent i is represented by Ω_i, so the set of agent i's neighbors can be expressed as Ω_i∖{i}. Common choices of group size are G=5 (k=4, von Neumann neighborhood) and G=9 (k=8, Moore neighborhood).
During each elementary Monte Carlo step, a random agent i is selected to update its strategy s_i based on the payoff acquired from participating in the public goods games. Specifically, agent i organizes a public goods game within its group Ω_i. Each participant j∈Ω_i contributes a cost c>0 to the group if cooperating (s_j=1) or contributes nothing if defecting (s_j=0). The combined investment of all participants, ∑_j∈Ω_is_j c, is amplified by a synergy factor r>1 to generate the public goods, which are then distributed among the group members.
Distinct from the conventional public goods game where the goods are evenly distributed, this study extends this notion by allowing the potential for uneven distribution between the organizer and other players. Specifically, the organizer is allotted a portion w_L (0≤ w_L≤ 1), while the remaining players are evenly allocated the remaining proportion 1-w_L; that is, each of the other players receives (1-w_L)/k. Hence, as the organizer, agent i receives a payoff of w_L r∑_j∈Ω_is_j c-s_i c from group Ω_i. Correspondingly, agent i also participates in groups organized by its neighbors g∈Ω_i∖{i}, receiving a payoff in those groups as a standard player. The payoff of agent i is the average over the k+1 groups, calculated by:
π_i=1/(k+1){(w_L r∑_j∈Ω_is_j c-s_i c)+∑_g∈Ω_i∖{i}((1-w_L)/k· r∑_j∈Ω_gs_j c-s_i c)}.
As underscored, Eq. (<ref>) broadens the traditional public goods game by incorporating the self-allocation parameter w_L. At w_L=0, all public goods are allocated to the other players, while at w_L=1, all public goods are allocated to the organizer. At w_L=1/G, the public goods are distributed equally, reducing Eq. (<ref>) to the traditional public goods game scenario.
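To make the payoff calculation concrete, the following minimal Python sketch (not the authors' code; the helper names, the von Neumann neighborhood, and the unit cost are illustrative assumptions) evaluates the averaged payoff above for one agent on a periodic L× L lattice.

```python
import numpy as np

def neighbours(i, j, L):
    """von Neumann neighbours of site (i, j) with periodic boundaries."""
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def group_pool(center, s, L, r, c):
    """Synergy-amplified total contribution of the group organized by `center`."""
    i, j = center
    members = [center] + neighbours(i, j, L)
    return r * c * sum(s[m] for m in members)

def payoff(agent, s, L, r, c, w_L):
    """Average payoff of `agent` over the k+1 groups it participates in."""
    k = 4                                       # von Neumann degree
    # group organized by the agent itself: the organizer keeps a share w_L
    pi = w_L * group_pool(agent, s, L, r, c) - s[agent] * c
    # groups organized by the neighbours: the agent gets (1-w_L)/k of each pool
    for g in neighbours(*agent, L):
        pi += (1.0 - w_L) / k * group_pool(g, s, L, r, c) - s[agent] * c
    return pi / (k + 1)

# usage: a random strategy configuration on a 10 x 10 lattice
rng = np.random.default_rng(1)
s = rng.integers(0, 2, size=(10, 10))
print(payoff((3, 4), s, L=10, r=4.5, c=1.0, w_L=0.2))
```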
In alignment with previous studies <cit.>, the payoff π_i is transformed into fitness F_i=exp(δπ_i), where δ→ 0^+ denotes the weak selection limit. Therefore, a strategy with a higher fitness has a marginal advantage in reproducing more frequently. To calculate the strategy-updating probability, we also compute the payoffs of agent i's neighbors and convert them to fitness in the same manner. Consequently, the strategy of agent i is replaced by the strategy of an agent j∈Ω_i with probability W(s_i← s_j), which is defined by the generalized death–birth rule <cit.>,
W(s_i← s_j)
=
(1-w_R)/k· F_j/(w_RF_i+(1-w_R)/k·∑_ℓ∈Ω_i∖{i}F_ℓ),   if j∈Ω_i∖{i},
w_R F_i/(w_RF_i+(1-w_R)/k·∑_ℓ∈Ω_i∖{i}F_ℓ),   if j=i.
In Eq. (<ref>), the probabilities are normalized, ∑_j∈Ω_iW(s_i← s_j)=1. Eq. (<ref>) extends the traditional death–birth rule <cit.> by introducing a self-learning weight w_R, following a logic similar to that of self-allocation. Agent i learns the strategy of agent j with a probability proportional to fitness within the group Ω_i, with self-learning taken into account. The case j=i means that agent i does not learn the strategy from others. At w_R=0, Eq. (<ref>) reduces to the traditional death–birth rule, where the fitness of agent i is disregarded. At w_R=1/G, Eq. (<ref>) simplifies to the imitation rule, where the fitness of agent i is compared on equal footing with that of all neighbors. An elementary Monte Carlo step concludes once the randomly selected agent i updates its strategy. A full Monte Carlo step comprises N elementary steps, ensuring that the strategy of each agent is updated on average once.
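The update rule itself can be sketched as follows; this is an illustrative Python fragment (the payoff values are assumed to be available, e.g., from the payoff sketch above), not a definitive implementation of the rule.

```python
import numpy as np

def update_probabilities(focal, nbrs, pay, w_R, delta):
    """Generalized death-birth probabilities of `focal` adopting each candidate
    strategy; candidates are the k neighbours plus `focal` itself (j = i)."""
    k = len(nbrs)
    F = {a: np.exp(delta * pay[a]) for a in [focal] + nbrs}   # fitness
    denom = w_R * F[focal] + (1.0 - w_R) / k * sum(F[a] for a in nbrs)
    probs = {a: (1.0 - w_R) / k * F[a] / denom for a in nbrs}
    probs[focal] = w_R * F[focal] / denom      # j = i: keep the current strategy
    return probs                               # values sum to one by construction

# usage with dummy payoffs for one focal agent and its four neighbours
pay = {'i': 0.8, 'a': 1.1, 'b': 0.4, 'c': 0.9, 'd': 0.7}
p = update_probabilities('i', ['a', 'b', 'c', 'd'], pay, w_R=0.2, delta=0.01)
print(p, sum(p.values()))
```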
Our model's key parameters are the weight factors, w_L and w_R, which dictate the bias in allocation and the rate of self-learning, respectively. In Fig. <ref>, we unveil the comprehensive parameter plane, highlighting the important weight values. These values have particular implications. When w_L=1, the total earnings from the communal pool are allocated solely to the focal player. Conversely, when w_L=0, every participant benefits from the pool while the focal player gains nothing. The midway scenario of w_L=1/G recaptures the traditional public goods game (PGG) where all group members equally share the proceeds from the common pool. Shifting our attention to the other weight factor, w_R=0 signifies the classic death–birth dynamics, where the new strategy of the focal player is exclusively drawn from the strategies of the neighbors. When w_R=1/G, all strategies present in the group are potential candidates in equal measure, which aligns with the well-established imitation rule. Finally, in the limit where w_R → 1, players tenaciously cling to their current strategies, thereby causing the evolution to stagnate. On the parameter plane, we also demarcate with a dotted line the trajectory where both weight factors are simultaneously altered. This trajectory represents the typical system behavior when both the effects of biased allocation and self-confidence are operative in the extended model with equal weights.
In the ensuing section, we explore and analyze how the critical synergy factor for cooperation success evolves in the presence of these skewed allocations and self-confidence biases.
§ THEORETICAL ANALYSIS
We assume that the evolutionary process begins from a state with the presence of N_C cooperative players. In essence, the initial proportion of cooperation is N_C/N. When the selection strength, denoted as δ, equals zero, the system defaults to the dynamics of the voter model <cit.>. In this state, cooperation will ultimately dominate the entire population with a probability of ρ_C=N_C/N <cit.>. Consequently, under a minimal selection strength of δ→ 0^+, if ρ_C>N_C/N, selection leans towards cooperation, which implies that evolution promotes the success of cooperative behavior. Here, ρ_C can be gauged by the average final proportion of cooperation obtained from independent runs.
Our objective in Section <ref> is to pinpoint the condition that enables the success of cooperation, while Section <ref> focuses on exploring the inherent features of this condition.
§.§ The condition for cooperation success
To discern the condition for cooperation success, we utilize the identity-by-descent (IBD) method <cit.>. First, we introduce n-step random walks, in which each single step moves to a uniformly chosen random neighbor. The expected value of a quantity x at an agent reached after an n-step random walk is denoted x^(n), where x can be π, F, or s. The quantity x^(n) does not depend on the starting agent, because the square lattice is a vertex-transitive graph, in which an agent cannot identify its location by examining the network structure.
Based on the random walks' definition, we can rewrite the payoff calculation in Eq. (<ref>) to obtain an agent's expected payoff from n steps away, as described in Eq. (<ref>),
π^(n) =1/k+1{(w_L r(k s^(n+1)+s^(n))c-s^(n)c)+k(1-w_L/k r(k s^(n+2)+s^(n+1))c-s^(n)c)}
=(w_L/k+1r-1)s^(n)c+1+(k-1)w_L/k+1rs^(n+1)c+k(1-w_L)/k+1 rs^(n+2)c,
which will later be useful for calculation.
To simplify, we assume a single initial cooperative player 1 in our analysis, implying that N_C=1 and evolution favors cooperation if ρ_C>1/N. In this scenario, the condition for cooperation success under weak selection can be rewritten as per the equivalent form <cit.> as shown in Eq. (<ref>),
⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0,
where ⟨·⟩_[ δ=0; s_1=1 ] represents the expected value under neutral drift (δ=0) and single cooperator (s_1=1). ℬ_1 is the probability of agent 1 passing on its strategy to a neighbor. This occurs when a neighbor i∈Ω_1∖{1} of agent 1 is randomly selected with a 1/N probability to update the strategy and learns agent 1's strategy with a W(s_i← s_1) probability. In the same vein, 𝒟_1 is the probability of agent 1's strategy being supplanted by a neighbor. This transpires when agent 1 is randomly selected with a 1/N probability to update its strategy and learns the strategy of a neighbor j∈Ω_1∖{1} with a W(s_1← s_j) probability. By applying Eq. (<ref>) and F_i=exp(δπ_i), we arrive at the equations summarized as follows:
ℬ_1 =∑_i∈Ω_1∖{1}1/NW(s_i← s_1)
=∑_i∈Ω_1∖{1}1/N·(1-w_R)/k·exp(δπ_1)/(w_Rexp(δπ_i)+(1-w_R)/k·∑_ℓ∈Ω_i∖{i}exp(δπ_ℓ)),
𝒟_1 =1/N∑_j∈Ω_1∖{1}W(s_1← s_j)
=1/N∑_j∈Ω_1∖{1}(1-w_R)/k·exp(δπ_j)/(w_Rexp(δπ_1)+(1-w_R)/k·∑_ℓ∈Ω_1∖{1}exp(δπ_ℓ)).
In the further steps, we substitute Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) and compute it, as shown in Eq. (<ref>).
⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0
⇔ 1-w_R/Nk(
k⟨π_1⟩_[ δ=0; s_1=1 ]
-w_R⟨∑_i∈Ω_1∖{1}π_i⟩_[ δ=0; s_1=1 ]
-1-w_R/k⟨∑_i∈Ω_1∖{1}∑_ℓ∈Ω_i∖{i}π_ℓ⟩_[ δ=0; s_1=1 ])
-1-w_R/Nk(
-kw_R ⟨π_1⟩_[ δ=0; s_1=1 ]
+⟨∑_j∈Ω_1∖{1}π_j⟩_[ δ=0; s_1=1 ]
-(1-w_R)⟨∑_ℓ∈Ω_1∖{1}π_ℓ⟩_[ δ=0; s_1=1 ])>0
⇔ ⟨π_1⟩_[ δ=0; s_1=1 ]
-2w_R/k(1+w_R)⟨∑_j∈Ω_1∖{1}π_j⟩_[ δ=0; s_1=1 ]
-1-w_R/k^2(1+w_R)⟨∑_i∈Ω_1∖{1}∑_ℓ∈Ω_i∖{i}π_ℓ⟩_[ δ=0; s_1=1 ]>0
⇔ π^(0)
-2w_R/1+w_Rπ^(1)
-1-w_R/1+w_Rπ^(2)>0.
Following the definition of random walks starting from agent 1, we used Eq. (<ref>) in the last step of Eq. (<ref>).
π^(0)=⟨π_1⟩_[ δ=0; s_1=1 ], π^(1)=1/k⟨∑_j∈Ω_1∖{1}π_j⟩_[ δ=0; s_1=1 ], π^(2)=1/k^2⟨∑_i∈Ω_1∖{1}∑_ℓ∈Ω_i∖{i}π_ℓ⟩_[ δ=0; s_1=1 ].
To transform the strategy quantity s^(n) into walk quantity p^(n), the probability that one returns to the starting vertex after n-step random walks, we use the substitution in Eq. (<ref>), as suggested by Allen and Nowak <cit.>:
s^(n)-s^(n+1)=μ/2(Np^(n)-1)+𝒪(μ^2),
where μ→ 0^+ is an auxiliary parameter, which will be eliminated later, and the 𝒪(μ^2) terms are neglected. Based on Eq. (<ref>), we can then further develop Eq. (<ref>):
s^(n)
-2w_R/1+w_Rs^(n+1)
-1-w_R/1+w_Rs^(n+2) =(s^(n)-s^(n+1))
+1-w_R/1+w_R(s^(n+1)-s^(n+2))
=μ/2(Np^(n)+1-w_R/1+w_RNp^(n+1)-2/1+w_R)
+𝒪(μ^2).
Utilizing this, we can further calculate the condition for cooperation success as given by Eq. (<ref>). First, we use Eq. (<ref>) to replace the payoff quantity π^(n) with strategy quantity s^(n). Second, we use Eq. (<ref>) to replace the strategy quantity s^(n) with walk quantity p^(n). This logic leads us to Eq. (<ref>):
π^(0)
-2w_R/1+w_Rπ^(1)
-1-w_R/1+w_Rπ^(2)>0
⇔ (w_L/k+1r-1)s^(0)c+1+(k-1)w_L/k+1rs^(1)c+k(1-w_L)/k+1 rs^(2)c
-2w_R/1+w_R{(w_L/k+1r-1)s^(1)c+1+(k-1)w_L/k+1rs^(2)c+k(1-w_L)/k+1 rs^(3)c}
-1-w_R/1+w_R{(w_L/k+1r-1)s^(2)c+1+(k-1)w_L/k+1rs^(3)c+k(1-w_L)/k+1 rs^(4)c}>0
⇔ (w_L/k+1r-1)
(s^(0)-2w_R/1+w_Rs^(1)-1-w_R/1+w_Rs^(2))
+1+(k-1)w_L/k+1r
(s^(1)-2w_R/1+w_Rs^(2)-1-w_R/1+w_Rs^(3))
+k(1-w_L)/k+1 r
(s^(2)-2w_R/1+w_Rs^(3)-1-w_R/1+w_Rs^(4))>0
⇔ (w_L/k+1r-1)
(Np^(0)+1-w_R/1+w_RNp^(1)-2/1+w_R)
+1+(k-1)w_L/k+1r
(Np^(1)+1-w_R/1+w_RNp^(2)-2/1+w_R)
+k(1-w_L)/k+1 r
(Np^(2)+1-w_R/1+w_RNp^(3)-2/1+w_R)>0.
The walk quantity p^(n) can be read off directly from the topology of the network. A zero-step walk remains at the starting vertex, so p^(0)=1. A single step cannot leave and return to the starting vertex, hence p^(1)=0. On a square lattice, the probability of returning to the starting vertex after two steps is p^(2)=1/k. Finally, the value of p^(3) depends on the neighborhood: in short, p^(3)=0 for the von Neumann neighborhood and p^(3)=3/64 for the Moore neighborhood (for more details, refer to Ref. <cit.>).
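These return probabilities are easy to verify by enumeration. The short Python sketch below (an illustrative check, not part of the paper) counts the closed n-step walks for both neighborhood types.

```python
import itertools

def return_probability(steps, moves):
    """Exact p^(n): fraction of all move sequences that end at the origin."""
    hits, total = 0, 0
    for seq in itertools.product(moves, repeat=steps):
        x = sum(m[0] for m in seq)
        y = sum(m[1] for m in seq)
        hits += (x == 0 and y == 0)
        total += 1
    return hits / total

von_neumann = [(1, 0), (-1, 0), (0, 1), (0, -1)]
moore = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

print(return_probability(2, von_neumann))   # 1/4 = 1/k
print(return_probability(3, von_neumann))   # 0   (no triangle motifs)
print(return_probability(2, moore))         # 1/8 = 1/k
print(return_probability(3, moore))         # 3/64
```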
By applying the previously mentioned values of p^(0)=1, p^(1)=0, and p^(2)=1/k, but retaining p^(3), we can further calculate Eq. (<ref>) to reach the final result as shown in Eq. (<ref>):
π^(0)
-2w_R/1+w_Rπ^(1)
-1-w_R/1+w_Rπ^(2)>0
⇔ (w_L/k+1r-1)
(N-2/1+w_R)
+1+(k-1)w_L/k+1r(1-w_R/1+w_RN/k-2/1+w_R)
+k(1-w_L)/k+1 r(N/k+1-w_R/1+w_RNp^(3)-2/1+w_R)>0
⇔
r>(N-2+N w_R)(G-1)G/[N(G-1)^2 (1-w_L)(1-w_R) p^(3)+N(G-2)(w_L-w_L w_R+w_R)+(N+2-2G)G]≡ r^⋆.
This provides the condition r>r^⋆ for cooperation success. Notably, the critical synergy factor r^⋆ is only a function of the population N, group size G, higher-order network structure p^(3), self-allocation w_L, and updating inertia w_R.
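For reference, the closed-form threshold can be evaluated directly; the following Python sketch (illustrative, with the bracketing of the expression above) reproduces, for example, the value r^⋆≈5.4118 quoted later for N=25, G=5, p^(3)=0 and w_L=w_R=0.

```python
def r_star(N, G, p3, w_L, w_R):
    """Critical synergy factor r* from the closed-form condition derived above."""
    num = (N - 2 + N * w_R) * (G - 1) * G
    den = (N * (G - 1) ** 2 * (1 - w_L) * (1 - w_R) * p3
           + N * (G - 2) * (w_L - w_L * w_R + w_R)
           + (N + 2 - 2 * G) * G)
    return num / den

# von Neumann neighbourhood (G=5, p3=0) under death-birth updating (w_R=0)
print(r_star(N=25, G=5, p3=0.0, w_L=0.0, w_R=0.0))      # ~5.4118
print(r_star(N=400, G=5, p3=0.0, w_L=1 / 5, w_R=0.0))   # traditional equal sharing
```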
Table <ref> summarizes the primary outcomes related to the critical synergy factor, r^⋆, along with their corresponding large population limits (N→ +∞), derived from taking specific parameters in Eq. (<ref>). Following the convention in much of the prior literature, we consider the death–birth rule (w_R=0) as the benchmark scenario. In this context, we present the reduced r^⋆ values corresponding to three distinct scenarios: equal allocation (w_L=1/G), allocation to other players (w_L=0), and allocation to the organizer (w_L=1). In addition, we explore a situation where the self-allocation and updating inertia are congruent (w_L=w_R≡ w), leading to consistency in the self-loops of allocation and updating. The trajectories of this case in the w_R-w_L parameter plane are visually represented in Fig. <ref> for an intuitive understanding.
Table <ref> offers additional insights into the main outcomes associated with the critical synergy factor, r^⋆, in relation to specific neighborhood types. We concentrate on two commonly used cases: von Neumann neighborhood and Moore neighborhood. The former, von Neumann neighborhood, lacks triangle motifs, resulting in p^(3)=0. Conversely, the latter, Moore neighborhood, is a rudimentary structure on a two-dimensional lattice that incorporates overlapping neighbors, yielding p^(3)=3/64 <cit.>.
§.§ The conflict between self-allocation and self-confidence
Utilizing the analytical expression of the critical synergy factor r^⋆, we can examine the combined impact of self-allocation w_L and self-confidence w_R on cooperation. From an intuitive perspective, a decrease in the r^⋆ value needed for cooperation success (i.e., r>r^⋆) fosters cooperation.
By referring to Eq. (<ref>), we can confirm that ∂ r^⋆/∂ w_L<0 holds for the specified neighborhood types. This indicates that an increase in self-allocation diminishes r^⋆ and thereby enhances cooperation. Fig. <ref>(a) portrays the critical synergy factor r^⋆ as a function of self-allocation w_L for von Neumann neighborhood under the condition of death–birth updating (w_R=0). Regardless of the population size, directing the public goods towards the organizer invariably stimulates cooperation.
Similarly, we find ∂ r^⋆/∂ w_R>0 for the designated neighborhood types. This suggests that an increase in self-confidence, or alternatively, an increase in updating inertia, acts to obstruct cooperation. This effect aligns with observations made in simpler models by prior studies <cit.>. With the von Neumann neighborhood and w_L=1/G, the critical synergy factor r^⋆ as a function of updating inertia is depicted in Fig. <ref>(b). Across varying population sizes, an increase in updating inertia consistently hampers cooperation.
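These monotonicity properties can be spot-checked numerically; the fragment below (an illustrative check, not from the paper; r_star repeats the closed-form expression for self-containment) verifies the derivative signs by finite differences on a small grid of admissible parameters.

```python
def r_star(N, G, p3, w_L, w_R):
    num = (N - 2 + N * w_R) * (G - 1) * G
    den = (N * (G - 1) ** 2 * (1 - w_L) * (1 - w_R) * p3
           + N * (G - 2) * (w_L - w_L * w_R + w_R) + (N + 2 - 2 * G) * G)
    return num / den

eps = 1e-6
for N, G, p3 in [(25, 5, 0.0), (400, 5, 0.0), (25, 9, 3 / 64), (400, 9, 3 / 64)]:
    for w_L in (0.0, 0.25, 0.5, 0.75):
        for w_R in (0.0, 0.25, 0.5, 0.75):
            d_wL = (r_star(N, G, p3, w_L + eps, w_R) - r_star(N, G, p3, w_L, w_R)) / eps
            d_wR = (r_star(N, G, p3, w_L, w_R + eps) - r_star(N, G, p3, w_L, w_R)) / eps
            assert d_wL < 0 and d_wR > 0   # dr*/dw_L < 0 and dr*/dw_R > 0
```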
The aforementioned observations create a fascinating dynamic when both effects coexist. Specifically, the divergent outcomes of biased allocation and self-confidence pose a question: how does the system respond when we enhance the weights of these factors simultaneously? Does it stimulate or inhibit cooperation? To explore this, we set w_L=w_R≡ w and illustrate the critical synergy factor r^⋆ as a function of w in Fig. <ref>(c). The figure reveals that an initial increase in the self-loop of allocation and strategy updating fosters cooperation, but once the weight surpasses a certain level, this effect reverses, ultimately discouraging cooperation. There exists an optimal self-loop weight w_0, which minimizes the r^⋆ value and is thus most beneficial for cooperation. We can derive the analytical expression for this optimal self-loop value by solving ∂ r^⋆/∂ w=0. The solution is given as:
w_0=1/N(-(N-2)+√(2)√(2(N-1)^2+N(N-G)(G-1)/[(G-1)^2 p^(3)-G+2])),
which is a function of population size N, group size G, and the higher-order network structure p^(3). This weight level provides the most favorable condition for the evolution of cooperation.
By setting N→ +∞ in Eq. (<ref>), we obtain the large population limit of w_0 as:
w_0=-1+√(2)√(2+(G-1)/[(G-1)^2 p^(3)-G+2]).
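The following Python sketch (illustrative; r_star is repeated for self-containment) evaluates w_0 and confirms by a brute-force scan that it minimizes r^⋆ along the diagonal w_L=w_R=w for a von Neumann example.

```python
import math

def r_star(N, G, p3, w_L, w_R):
    num = (N - 2 + N * w_R) * (G - 1) * G
    den = (N * (G - 1) ** 2 * (1 - w_L) * (1 - w_R) * p3
           + N * (G - 2) * (w_L - w_L * w_R + w_R) + (N + 2 - 2 * G) * G)
    return num / den

def w_opt(N, G, p3):
    """Optimal self-loop weight w_0 from the closed-form expression above."""
    D = (G - 1) ** 2 * p3 - G + 2
    return (-(N - 2) + math.sqrt(2) * math.sqrt(2 * (N - 1) ** 2
            + N * (N - G) * (G - 1) / D)) / N

N, G, p3 = 400, 5, 0.0
ws = [i / 1000 for i in range(1001)]
w_brute = min(ws, key=lambda w: r_star(N, G, p3, w, w))
print(w_opt(N, G, p3), w_brute)   # the two values should agree closely
```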
To provide a broader perspective on the simultaneous influence of these factors, we present a heat map of the critical synergy factor r^⋆ across the complete w_R-w_L parameter plane in Fig. <ref>. The diagonal dotted line within the figure represents the trajectory discussed in Fig. <ref>(c). This plot reveals certain general characteristics regarding the collective impact of the self-loop effects. Specifically, the effect of biased payoff allocation on the critical synergy factor is more pronounced when w_R is small, whereas the w_L dependence of r^⋆ is moderate for large w_R values. The opposite holds for the w_R dependence of r^⋆, which changes more dramatically for large w_L values, while remaining moderate when w_L is small.
When maintaining the aforementioned diagonal trajectory, we can identify some general trends regarding the w-dependence. Specifically, we can confirm that the r^⋆ value at w=0 is consistently lower than the one at w=1, that is, r^⋆|_w=0<r^⋆|_w=1. Applying w=1 and w=0 in Eq. (<ref>), we find r^⋆|_w=1=(N-1)G/(N-G) and r^⋆|_w=0=(N-2)(G-1)G/[N(G-1)^2 p^(3)+(N+2-2G)G], respectively. Given that N(G-1)^2 p^(3)>0 always stands, we deduce r^⋆|_w=0<(N-2)(G-1)G/[(N+2-2G)G]=[(N-1)G-(G-2)-N]/[N-G-(G-2)]. And since (N-1)G>N-G and -(G-2)<0, it follows that r^⋆|_w=0<[(N-1)G-N]/(N-G)<(N-1)G/(N-G). Therefore, r^⋆|_w=0<r^⋆|_w=1 always holds true. This indicates that, on a larger scale, when both self-loop effects are significant, the outcome is dominated by the impact of self-confidence, which hinders cooperation. This effect is more pronounced in a topology containing triangle motifs, such as the Moore neighborhood where each player forms a G=9-member group with overlapping neighbors. This case is discussed in more detail in Appendix <ref>.
§ NUMERICAL SIMULATION
To validate our theoretical analysis, we performed Monte Carlo simulations. Initially, each agent is randomly assigned either cooperation or defection, such that N_C≈ N/2. Consequently, as outlined at the beginning of Section <ref>, evolution favors cooperation if ρ_C>1/2. To compute the expected cooperation level ρ_C, we permit up to 40,000 full Monte Carlo steps per run (if all agents become either cooperators or defectors, that specific run may be terminated earlier), and record the cooperation proportion at the last step as the result of each run. The expected cooperation level ρ_C is then the average across multiple independent runs. Based on our empirical exploration, for N=25, ρ_C is the average over 1,000,000 runs; for N=400, ρ_C is the average over 10,000 runs; for N=10000, ρ_C is obtained from a single run.
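A compact, self-contained Python sketch of this protocol is given below; it is illustrative only (small lattice, few runs, a truncated sweep limit, and a small but nonzero δ stand in for the much larger simulations and the δ→ 0^+ limit used in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbours(i, j, L):
    """von Neumann neighbours with periodic boundaries."""
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(agent, s, L, r, c, w_L):
    """Average payoff of `agent` over its k+1 groups (as in the Model section)."""
    k = 4

    def pool(center):
        return r * c * sum(s[m] for m in [center] + neighbours(*center, L))

    pi = w_L * pool(agent) - s[agent] * c
    for g in neighbours(*agent, L):
        pi += (1 - w_L) / k * pool(g) - s[agent] * c
    return pi / (k + 1)

def run(L, r, c, w_L, w_R, delta, max_sweeps):
    """One run: final cooperation fraction after at most `max_sweeps` full steps."""
    N = L * L
    s = rng.integers(0, 2, size=(L, L))            # roughly N/2 initial cooperators
    for _ in range(max_sweeps):
        for _ in range(N):                          # one full Monte Carlo step
            i, j = rng.integers(0, L, size=2)
            focal, nbrs = (i, j), neighbours(i, j, L)
            F = {a: np.exp(delta * payoff(a, s, L, r, c, w_L))
                 for a in [focal] + nbrs}
            k = len(nbrs)
            denom = w_R * F[focal] + (1 - w_R) / k * sum(F[a] for a in nbrs)
            cands = [focal] + nbrs
            probs = [w_R * F[focal] / denom] + \
                    [(1 - w_R) / k * F[a] / denom for a in nbrs]
            s[focal] = s[cands[rng.choice(len(cands), p=probs)]]
        if s.sum() in (0, N):                       # absorbing state reached
            break
    return s.sum() / N

# illustration only: the paper uses up to 40,000 sweeps and up to 10^6 runs
rho_C = np.mean([run(L=5, r=6.0, c=1.0, w_L=0.2, w_R=0.0, delta=0.01,
                     max_sweeps=2000) for _ in range(20)])
print(rho_C)
```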
Using the von Neumann neighborhood, Fig. <ref> illustrates the expected cooperation level ρ_C as a function of the synergy factor r at w=0, w=0.3, and w=0.6. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) gives r^⋆=5.4118, 4.9493, 5.1351 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we get r^⋆=4.0612, 4.0280, 4.2992. In Fig. <ref>(c), where N=10000, we obtain r^⋆=4.0024, 3.9835, 4.2571. As can be observed, the cooperation level ρ_C rises with an increase in the synergy factor r, and ρ_C>0.5 when r>r^⋆, thus affirming the theoretical analysis.
§ CONCLUSION
Collaborating on a project does not necessarily equate to equal benefits from the resulting income. For instance, an individual acting as the organizer of a group may allocate a different proportion of public goods to themselves than to other participants. If everyone follows the same protocol, allocating more public goods to the organizer boosts the gains in the game managed by oneself, but simultaneously leads to fewer gains in games organized by neighbors. Consequently, the impact of biased allocation on the level of cooperation is far from a simple question. Prior studies have demonstrated that this seemingly strategy-neutral mechanism actually promotes cooperation by preventing the diffusion of public goods <cit.>.
On the other hand, if an individual allocates more public goods to themselves as an organizer, this attitude might also imply that the individual is more authoritative and confident, and less inclined to change their current strategy. Past observations have revealed that this inertia in strategy updating inhibits cooperation by slowing the aggregation of cooperators <cit.>. Thus, it can be concluded that biased allocation and strategy updating inertia play opposing roles in the evolution of cooperation.
Assuming that the measure of biased allocation and updating inertia are interconnected, this study focuses on their simultaneous presence and explores how they jointly influence cooperation. We derive a theoretical solution on a two-dimensional square lattice and identify the critical synergy factor r^⋆ required for cooperation success. Consequently, cooperators are more likely to dominate when r>r^⋆. Our primary interest lies in how r^⋆ fluctuates on the plane of weight factors, which determine biased allocation and the extent of strategy updating inertia. Upon introducing the self-loop w of allocation and updating, it initially promotes and later, for larger w values, inhibits cooperation. In this scenario, we can identify an optimal self-loop value w_0 that is most conducive to cooperation. In other cases, where the network topology contains triangle motifs, the impact of strategy inertia is more potent, thus increasing the self-loop w tends to hamper cooperation.
Moreover, we theoretically demonstrate that the cooperation threshold at w=0 is always smaller than at w=1. This suggests that the inhibitory effect of self-confidence on cooperation generally outweighs the facilitative effect of self-allocation when the allocation and updating self-loop w takes extreme values. These observations indicate that although biased allocation may appear to be an unfair protocol, its impact on cooperation is decidedly not detrimental. However, the self-confidence-driven strategy updating inertia is always harmful and cannot be fully offset by the effect of biased allocation.
§ ACKNOWLEDGEMENT
A.S. was supported by the National Research, Development and Innovation Office (NKFIH) under Grant No. K142948.
§ MOORE NEIGHBORHOOD
Our primary results are summarized in Eq. (<ref>). It proposes that topology slightly influences the critical synergy factor r^⋆ through the parameter G. However, a more complex consequence is embodied in the value of p^(3). This factor creates a stark distinction between the von Neumann and Moore neighborhoods, regardless of using the same vertex-transitive square lattice. For the von Neumann neighborhood, the three-step quantity p^(3)=0, as there is no triangle motif. To explore the consequences of a non-zero p^(3), we examine the Moore neighborhood, the simplest two-dimensional lattice that contains higher-order structure where p^(3)=3/64 <cit.>.
The first two panels of Fig. <ref> confirm that the separate impacts of biased allocation and strategy updating inertia are similar to those observed for the von Neumann neighborhood. However, their combined influence on r^⋆ diverges from the previous observation, as the self-confidence-based inertia is significantly stronger in this context, making the increase of the mutual weight factor w detrimental to the success of cooperation.
This effect is generally valid and becomes evident when we compare the color-coded heat map of the critical synergy factor r^⋆ on the w_R-w_L parameter plane. The main difference between the last panels of Fig. <ref> and Fig. <ref> is the minimal change in the value of r^⋆ as we move horizontally on the parameter plane of Fig. <ref>(c). This suggests that changes in w_L have only a minimal impact on cooperation, because the value of w_R is the determining factor here.
Our final Fig. <ref> presents a comparison of the results from our analytical and numerical calculations. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) yields r^⋆=10.6154, 10.6087, 11.4000 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we obtain r^⋆=6.1546, 6.8158 for w=0, 0.3. In Fig. <ref>(c), where N=10000, we calculate r^⋆=6.0060, 6.6725, 7.5061 for w=0, 0.3, and 0.6. As before, the simulations confirm our theoretical predictions well.
[Nowak(2006)]nowak_s06
authorM. A. Nowak,
titleFive rules for the evolution of cooperation,
journalScience volume314
(year2006) pages1560–1563.
[Perc et al.(2013)Perc, Gómez-Gardeñes, Szolnoki, and
Floría and Y. Moreno]perc_jrsi13
authorM. Perc, authorJ. Gómez-Gardeñes,
authorA. Szolnoki, authorL. M. Floría and Y.
Moreno,
titleEvolutionary dynamics of group interactions on
structured populations: a review,
journalJ. R. Soc. Interface volume10
(year2013) pages20120997.
[Sigmund(2010)]sigmund_10
authorK. Sigmund, titleThe Calculus of Selfishness,
publisherPrinceton University Press, addressPrinceton,
NJ, year2010.
[Wang et al.(2022)Wang, Dai, He, Yu, and Shen]wang_jw_pla22
authorJ. Wang, authorW. Dai, authorJ. He,
authorF. Yu, authorX. Shen,
titlePersistent imitation paves the way for cooperation in
public goods game,
journalPhys. Lett. A volume447
(year2022) pages128302.
[Xiao et al.(2022)Xiao, Zhang, Li, Dai, and Yang]xiao_sl_epjb22
authorS. Xiao, authorL. Zhang, authorH. Li,
authorQ. Dai, authorJ. Yang,
titleEnvironment-driven migration enhances cooperation in
evolutionary public goods games,
journalEur. Phys. J. B volume95
(year2022) pages67.
[Wang and Szolnoki(2022)]wang2022reversed
authorC. Wang, authorA. Szolnoki,
titleA reversed form of public goods game: equivalence and
difference,
journalNew J. Phys. volume24
(year2022) pages123030.
[Hua and Liu(2023)]hua_sj_csf3
authorS. Hua, authorL. Liu,
titleFacilitating the evolution of cooperation through
altruistic punishment with adaptive feedback,
journalChaos, Solit. and Fract. volume173
(year2023) pages113669.
[Szolnoki et al.(2009)Szolnoki, Perc, and Szabó]szolnoki_pre09c
authorA. Szolnoki, authorM. Perc,
authorG. Szabó,
titleTopology-independent impact of noise on cooperation
in spatial public goods games,
journalPhys. Rev. E volume80
(year2009) pages056109.
[Yu et al.(2022)Yu, Wang, and He]yu_fy_csf22
authorF. Yu, authorJ. Wang, authorJ. He,
titleInequal dependence on members stabilizes cooperation
in spatial public goods game,
journalChaos, Solit. and Fract. volume165
(year2022) pages112755.
[Wang et al.(2021)Wang, Pan, Ju, and He]wang2021public
authorC. Wang, authorQ. Pan, authorX. Ju,
authorM. He,
titlePublic goods game with the interdependence of
different cooperative strategies,
journalChaos. Solit. and Fract. volume146
(year2021) pages110871.
[Wang and Huang(2022)]wang2022between
authorC. Wang, authorC. Huang,
titleBetween local and global strategy updating in public
goods game,
journalPhysica A volume606
(year2022) pages128097.
[Wang and Sun(2023a)]wang2023public
authorC. Wang, authorC. Sun,
titlePublic goods game across multilayer populations with
different densities,
journalChaos. Solit. and Fract. volume168
(year2023a) pages113154.
[Wang and Sun(2023b)]wang_cq_c23
authorC. Wang, authorC. Sun,
titleZealous cooperation does not always promote
cooperation in public goods games,
journalChaos volume33
(year2023b) pages063111.
[Xie et al.(2023)Xie, Liu, Wang, and Jiang]xie_k_csf23
authorK. Xie, authorX. Liu, authorH. Wang,
authorY. Jiang,
titleMulti-heterogeneity public goods evolutionary game on
lattice,
journalChaos. Solit. and Fract. volume172
(year2023) pages113562.
[Ding et al.(2023)Ding, Wang, Zhao, Gu, and Wang]ding_r_csf23
authorR. Ding, authorX. Wang,
authorJ. Zhao, authorC. Gu,
authorT. Wang,
titleThe evolution of cooperation in spatial public goods
games under a risk-transfer mechanism,
journalChaos, Solitons and Fractals volume169
(year2023) pages113236.
[Zhang et al.(2010)Zhang, Zhang, Xie, and Wang]zhang_cy_epl10
authorC. Zhang, authorJ. Zhang,
authorG. Xie, authorL. Wang,
titleDiversity of game strategies promotes the evolution
of cooperation in public goods games,
journalEPL volume90 (year2010)
pages68005.
[Henrich et al.(2001)Henrich, Boyd, Bowles, Camerer, Fehr, Gintis, and
McElreath]henrich_aer01
authorJ. Henrich, authorR. Boyd,
authorS. Bowles, authorC. Camerer,
authorE. Fehr, authorH. Gintis,
authorR. McElreath,
titleIn search of homo economicus: behavioral experiments
in 15 small-scale societies,
journalAm. Econ. Rev. volume91
(year2001) pages73–78.
[Nowak et al.(1995)Nowak, May, and Sigmund]nowak_sa95
authorM. A. Nowak, authorR. M. May,
authorK. Sigmund,
titleArithmetics of mutual help,
journalScientific American volume272
(year1995) pages76–81.
[Allen et al.(2013)Allen, Gore, and Nowak]allen2013spatial
authorB. Allen, authorJ. Gore, authorM. A.
Nowak,
titleSpatial dilemmas of diffusible public goods,
journalElife volume2 (year2013)
pagese01169.
[Su et al.(2018)Su, Wang, and Stanley]su2018understanding
authorQ. Su, authorL. Wang, authorH. E.
Stanley,
titleUnderstanding spatial public goods games on
three-layer networks,
journalNew J. Phys. volume20
(year2018) pages103030.
[Zhang et al.(2012)Zhang, Shi, Liu, and Wang]zhang_hf_pa12
authorH. Zhang, authorD. Shi, authorR. Liu,
authorB. Wang,
titleDynamic allocation of investments promotes
cooperation in spatial public goods game,
journalPhysica A volume391
(year2012) pages2617–2622.
[Cong et al.(2016)Cong, Li, Wang, and Zhao]cong_r_epl16
authorR. Cong, authorK. Li, authorL. Wang,
authorQ. Zhao,
titleCooperation induced by wise incentive allocation in
spontaneous institution,
journalEPL volume115 (year2016)
pages38002.
[Szolnoki and Chen(2020)]szolnoki_amc20
authorA. Szolnoki, authorX. Chen,
titleBlocking defector invasion by focusing on the most
successful partner,
journalAppl. Math. Comput. volume385
(year2020) pages125430.
[Wang et al.(2018)Wang, He, and Chen]wang_q_amc18
authorQ. Wang, authorN. He, authorX. Chen,
titleReplicator dynamics for public goods game with
resource allocation in large populations,
journalAppl. Math. Comput. volume328
(year2018) pages162–170.
[Bin and Yue(2023)]bin_l_amc23
authorL. Bin, authorW. Yue,
titleCo-evolution of reputation-based preference selection
and resource allocation with multigame on interdependent networks,
journalAppl. Math. Comput. volume456
(year2023) pages128128.
[Güth et al.(1982)Güth, Schmittberger, and
Schwarze]guth_jebo82
authorW. Güth, authorR. Schmittberger,
authorB. Schwarze,
titleAn experimental analysis of ultimatum bargaining,
journalJ. Econ. Behav. Org. volume3
(year1982) pages367–388.
[Sigmund et al.(2002)Sigmund, Fehr, and Nowak]sigmund_sa02
authorK. Sigmund, authorE. Fehr, authorM. A.
Nowak,
titleThe economics of fair play,
journalSci. Am. volume286
(year2002) pages82–87.
[Szolnoki et al.(2012)Szolnoki, Perc, and Szabó]szolnoki_prl12
authorA. Szolnoki, authorM. Perc,
authorG. Szabó,
titleDefense mechanisms of empathetic players in the
spatial ultimatum game,
journalPhys. Rev. Lett. volume109
(year2012) pages078701.
[Wang et al.(2014)Wang, Chen, and Wang]wang_xf_srep14
authorX. Wang, authorX. Chen,
authorL. Wang,
titleRandom allocation of pies promotes the evolution of
fairness in the ultimatum game,
journalSci. Rep. volume4
(year2014) pages4534.
[Chen et al.(2015)Chen, Wu, Li, Wu, and Wang]chen_w_epl15
authorW. Chen, authorT. Wu, authorZ. Li,
authorN. Wu, authorL. Wang,
titleHeterogenous allocation of chips promotes fairness in
the ultimatum game,
journalEPL volume109 (year2015)
pages68006.
[Szolnoki et al.(2012)Szolnoki, Perc, and Szabó]szolnoki_epl12
authorA. Szolnoki, authorM. Perc,
authorG. Szabó,
titleAccuracy in strategy imitations promotes the
evolution of fairness in the spatial ultimatum game,
journalEPL volume100 (year2012)
pages28005.
[Fan et al.(2017)Fan, Zhang, Luo, and Zhang]fan_rg_pa17
authorR. Fan, authorY. Zhang, authorM. Luo,
authorH. Zhang,
titlePromotion of cooperation induced by heterogeneity of
both investment and payoff allocation in spatial public goods game,
journalPhysica A volume465
(year2017) pages454–463.
[Peng et al.(2010)Peng, Yang, Wang, Chen, and Wang]peng_d_epjb10
authorD. Peng, authorH.-X. Yang, authorW.-X.
Wang, authorG. R. Chen, authorB.-H. Wang,
titlePromotion of cooperation induced by nonuniform payoff
allocation in spatial public goods game,
journalEur. Phys. J. B volume73
(year2010) pages455–459.
[Meloni et al.(2017)Meloni, Xia, and Moreno]meloni_rsos17
authorS. Meloni, authorC.-Y. Xia,
authorY. Moreno,
titleHeterogeneous resource allocation can change social
hierarchy in public goods games,
journalR. Soc. open sci. volume4
(year2017) pages170092.
[Perc and Szolnoki(2008)]perc_pre08
authorM. Perc, authorA. Szolnoki,
titleSocial diversity and promotion of cooperation in the
spatial prisoner's dilemma game,
journalPhys. Rev. E volume77
(year2008) pages011904.
[Santos et al.(2008)Santos, Santos, and Pacheco]santos_n08
authorF. C. Santos, authorM. D. Santos,
authorJ. M. Pacheco,
titleSocial diversity promotes the emergence of
cooperation in public goods games,
journalNature volume454
(year2008) pages213–216.
[Szabó and Hauert(2002)]szabo_prl02
authorG. Szabó, authorC. Hauert,
titlePhase transitions and volunteering in spatial public
goods games,
journalPhys. Rev. Lett. volume89
(year2002) pages118101.
[Li et al.(2016)Li, Szolnoki, Cong, and Wang]li_k_srep16
authorK. Li, authorA. Szolnoki,
authorR. Cong, authorL. Wang,
titleThe coevolution of overconfidence and bluffing in the
resource competition game,
journalSci. Rep. volume6
(year2016) pages21104.
[Szolnoki and Chen(2018)]szolnoki_pre18
authorA. Szolnoki, authorX. Chen,
titleReciprocity-based cooperative phalanx maintained by
overconfident players,
journalPhys. Rev. E volume98
(year2018) pages022309.
[Wang and Szolnoki(2023)]wang2023evolution
authorC. Wang, authorA. Szolnoki,
titleEvolution of cooperation under a generalized
death-birth process,
journalPhys. Rev. E volume107
(year2023) pages024303.
[Szolnoki et al.(2009)Szolnoki, Perc, Szabó, and
Stark]szolnoki_pre09
authorA. Szolnoki, authorM. Perc,
authorG. Szabó, authorH.-U. Stark,
titleImpact of aging on the evolution of cooperation in
the spatial prisoner's dilemma game,
journalPhys. Rev. E volume80
(year2009) pages021901.
[Liu et al.(2010)Liu, Rong, Jia, and Wang]liu_rr_epl10
authorR.-R. Liu, authorZ. Rong, authorC.-X.
Jia, authorB.-H. Wang,
titleEffects of diverse inertia on scale-free-networked
prisoner's dilemma games,
journalEPL volume91 (year2010)
pages20002.
[Zhang et al.(2011)Zhang, Fu, Wu, Xie, and Wang]zhang_yl_pre11
authorY. Zhang, authorF. Fu, authorT. Wu,
authorG. Xie, authorL. Wang,
titleInertia in strategy switching transforms the strategy
evolution,
journalPhys. Rev. E volume84
(year2011) pages066103.
[Wang and Szolnoki(2023)]wang2023inertia
authorC. Wang, authorA. Szolnoki,
titleInertia in spatial public goods games under weak
selection,
journalAppl. Math. Comput. volume449
(year2023) pages127941.
[Wang et al.(2023)Wang, Zhu, and Szolnoki]wang2023conflict
authorC. Wang, authorW. Zhu,
authorA. Szolnoki,
titleThe conflict between self-interaction and updating
passivity in the evolution of cooperation,
journalChaos, Solit. and Fract. volume173
(year2023) pages113667.
[Ohtsuki and Nowak(2006)]ohtsuki_jtb06
authorH. Ohtsuki, authorM. A. Nowak,
titleThe replicator equation on graphs,
journalJ. Theor. Biol. volume243
(year2006) pages86–97.
[Nowak et al.(2004)Nowak, Sasaki, Taylor, and Fudenberg]nowak_n04b
authorM. A. Nowak, authorA. Sasaki,
authorC. Taylor, authorD. Fudenberg,
titleEmergence of cooperation and evolutionary stability
in finite populations,
journalNature volume428
(year2004) pages646–650.
[McAvoy et al.(2020)McAvoy, Allen, and Nowak]mcavoy2020social
authorA. McAvoy, authorB. Allen, authorM. A.
Nowak,
titleSocial goods dilemmas in heterogeneous societies,
journalNat. Human Behav. volume4
(year2020) pages819–831.
[Clifford and Sudbury(1973)]clifford1973model
authorP. Clifford, authorA. Sudbury,
titleA model for spatial conflict,
journalBiometrika volume60
(year1973) pages581–588.
[Cox and Griffeath(1983)]cox1983occupation
authorJ. T. Cox, authorD. Griffeath,
titleOccupation time limit theorems for the voter model,
journalAnnals Prob. (year1983)
pages876–893.
[Cox and Griffeath(1986)]cox1986diffusive
authorJ. T. Cox, authorD. Griffeath,
titleDiffusive clustering in the two dimensional voter
model,
journalAnnals Prob. (year1986)
pages347–370.
[Allen and Nowak(2014)]allen2014games
authorB. Allen, authorM. A. Nowak,
titleGames on graphs,
journalEMS Surv. Math. Sci. volume1
(year2014) pages113–151.
[Nowak et al.(2010)Nowak, Tarnita, and Wilson]nowak2010evolution
authorM. A. Nowak, authorC. E. Tarnita,
authorE. O. Wilson,
titleThe evolution of eusociality,
journalNature volume466
(year2010) pages1057–1062.
|
http://arxiv.org/abs/2307.05442v1 | 20230711170739 | Channel State Information-Free Location-Privacy Enhancement: Fake Path Injection | [
"Jianxiu Li",
"Urbashi Mitra"
] | eess.SP | [
"eess.SP"
] |
Channel State Information-Free Location-Privacy Enhancement: Fake Path Injection
Jianxiu Li, Graduate Student Member, IEEE,
and Urbashi Mitra, Fellow, IEEE
J. Li and U. Mitra are with the Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA (e-mails: jianxiul, [email protected]).
This paper was presented in part at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023) <cit.>. This work has been funded largely by the USC + Amazon Center on Secure and Trusted Machine Learning as well as in part by one or more of the following: NSF CCF-1817200, DOE DE-SC0021417, Swedish Research Council 2018-04359, NSF CCF-2008927, NSF CCF-2200221, ONR 503400-78050, and ONR N00014-15-1-2550.
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper, a channel state information-free, fake path injection (FPI) scheme is proposed for location-privacy preservation. Specifically, structured artificial noise is designed to introduce virtual fake paths into the channels of the illegitimate devices. By leveraging the geometrical feasibility of the fake paths, under mild conditions, it can be proved that the illegitimate device cannot distinguish between a fake and true path, thus degrading the illegitimate devices' ability to localize. Two closed-form, lower bounds on the illegitimate devices’ estimation error are derived via the analysis of the Fisher information of the location-relevant channel parameters, thus characterizing the enhanced location-privacy. A transmit beamformer is proposed, which efficiently injects the virtual fake paths. The intended device receives the two parameters of the beamformer design over a secure channel in order to enable localization. The impact of leaking the beamformer structure and associated localization leakage are analyzed. Theoretical analyses are verified via simulation. Numerical results show that a 20dB degradation of the illegitimate devices' localization accuracy can be achieved thus validating the efficacy of the proposed FPI versus using unstructured Gaussian noise.
Location-privacy, localization, fake path injection, channel state information-free, beamforming.
§ INTRODUCTION
With the wide deployment of the Internet-of-Things, the location of user equipment (UE) is becoming an important commodity. The proliferation of wearables and the associated location-based services necessitate location-privacy preservation. Unfortunately, most existing works on localization, such as <cit.>, focus on how to leverage the features of the wireless signals to improve estimation accuracy without considering location-privacy. These wireless signals, which carry confidential location information, are easily exposed to the risk of eavesdropping, i.e., illegitimate devices can determine the location of a UE when the UE requests location-based services from legitimate devices. Even worse, additional private information, such as age, gender, and personal preferences, can be snooped and inferred from the leaked location information <cit.>. Hence, it is critical to develop new strategies to limit location-privacy leakage (LPL).
To combat eavesdropping, location-privacy preservation has predominantly been studied at the application and network layers <cit.>. To the best of our knowledge, enabling physical-layer signals for protecting the location-privacy of UEs has not been fully investigated. Due to the nature of the propagation of electromagnetic waves, as long as the illegitimate devices can listen to the wireless channel and leverage the received signals for localization, the location-privacy of the UE is inevitably jeopardized. Traditionally, to preserve the location-privacy at the physical layer, the statistics of the channel, or the actual channel, have been exploited to prevent the illegitimate devices from easily extracting the location-relevant information from the wireless signals <cit.>. Example strategies include artificial noise injection <cit.> and beamforming design <cit.>.
More precisely, to make the malicious localization harder, via artificial noise, the received signal-to-noise ratio (SNR) can be decreased for illegitimate devices; to maintain the legitimate localization accuracy, actual noise realizations have to be shared with the legitimate devices or the channel state information (CSI) is known to the UE so as to inject the artificial noise into the null space of the legitimate channel <cit.>. On the other hand, if Alice has prior knowledge of the CSI for the illegitimate channel, the transmit beamformer can be specifically designed to hide the key location-relevant information <cit.>. In particular, by exploiting the CSI to design a transmit beamformer, <cit.> misleads the illegitimate device into incorrectly treating one non-line-of-sight (NLOS) path as the line-of-sight (LOS) path such that UE’s position cannot be accurately inferred, where the obfuscation is achieved by adding a delay for the transmission over the LOS path. However, all these existing physical-layer security designs <cit.> highly rely on accurate or even perfect CSI and thus are costly.
In the sequel, motivated by the analysis of the structure of the channel model in <cit.>, we propose a fake path injection (FPI)-aided location-privacy enhancement scheme to reduce location privacy leakage to illegitimate devices. A provably, statistically hard estimation problem is created, while providing localization guarantees to legitimate devices. In contrast to <cit.>, our design does not require CSI, which avoids extra channel estimation and reduces the overhead for resource-limited UEs.
The main contributions of this paper are:
1) A general framework for FPI-aided location-privacy enhancement is proposed without CSI, where the intrinsic structure of the channel is explicitly exploited to design the fake paths.
2) The identifiability of the designed fake paths is investigated. The analysis indicates that the illegitimate devices cannot provably distinguish and remove the fake paths.
3) In the presence of the proposed fake paths, two closed-form, lower bounds on the estimation error are derived for the illegitimate devices, suggesting appropriate values of key design parameters to enhance location-privacy.
4) A beamforming strategy is designed to efficiently inject the fake paths, where LPL is mitigated, while the securely shared information (a secret key <cit.>) is characterized to maintain authorized devices' localization accuracy. Only two parameters need to be shared with the legitimate device. The potential leakage of the beamformer structure is also studied for the robustness of the proposed scheme.
5) Theoretical analyses are validated via simulation and numerical comparisons of the localization accuracy of legitimate devices. The proposed method can contribute to 20dB localization accuracy degradation for illegitimate devices even when the LOS path exists. If the structure of the designed beamformer is unfortunately leaked, there is still around a 4dB accuracy degradation for the illegitimate device. This degradation can be increased by creating more complex beamformers.
6) For low SNRs, the proposed scheme is numerically shown to be more effective for location-privacy preservation, than the injection of unstructured Gaussian noise, which requires the sharing of entire noise realizations.
The present work completes our previous work <cit.> with an analysis of identifiability of the designed fake paths, which ensures that the fake paths are challenging for the illegitimate devices to remove. In addition, considering the injected fake paths, we derive two closed-form, lower bounds on the estimation error based on the Fisher information of the location-relevant channel parameters.
Our work is different from the analysis of the stability of the super-resolution problem <cit.> with the goal of guaranteeing
high estimation accuracy with a certain statistical resolution limit at high SNR. In contrast, our lower bounds are to validate that estimation accuracy can be efficiently degraded for an eavesdropper, if the injected fake paths are close to the true paths in terms of the location-relevant parameters. We note that, in the presence of noise, the stability of the Fisher information matrix (FIM) of the line spectral estimation problem has been studied (see e.g. <cit.>) for a new algorithm-free resolution limit; our analysis is tailored to location-privacy preserving design with multi-dimensional signals.
The rest of this paper is organized as follows. Section <ref> introduces the signal model adopted for localization. Section <ref> presents the proposed CSI-free location-privacy enhancement framework, with the fake paths designed according to the intrinsic channel structure. In Section <ref>, the identifiability of the injected fake paths is investigated to show that such fake paths cannot be removed. In the presence of the fake paths, two closed-form, lower bounds on the estimation error are derived and analyzed in Section <ref>, proving the efficacy of the proposed scheme. Section <ref> proposes a transmit beamformer to practically inject the designed fake paths. The beamformer design does not rely on CSI. Numerical results are provided in Section <ref> to validate the theoretical analyses and highlight the performance degradation caused by the FPI to the eavesdropper. Conclusions are drawn in Section <ref>. Appendices <ref>, <ref>, <ref>, and <ref> provide the proofs for the key technical results.
We use the following notation. Scalars are denoted by lower-case letters x and column vectors by bold letters x. The i-th element of x is denoted by x[i]. Matrices are denoted by bold capital letters X and X[i, j] is the (i, j)-th element of X. The operators ⌊x⌋, |x|, ‖x‖_2, ℜ{x}, ℑ{x}, and diag(𝒜) represent the largest integer that is less than or equal to x, the magnitude of x, the ℓ_2 norm of x, the real part of x, the imaginary part of x, and a diagonal matrix whose diagonal elements are given by 𝒜, respectively. I_l stands for an l× l identity matrix and ℙ(·) is reserved for the probability of an event.
The operator 𝔼{·} denotes the expectation of a random variable. The operators Rank(·), Tr(·), det(·), (·)^T, and (·)^H are defined as the rank of a matrix, the trace of a matrix, the determinant of a matrix, the transpose of a matrix or vector, and the conjugate transpose of a vector or matrix, respectively.
§ SYSTEM MODEL
We consider a legitimate device (Bob) serving a UE (Alice), as shown in Figure 1. The locations of Alice and Bob are denoted by p=[p_x,p_y]^T∈ℝ^2 and q=[q_x,q_y]^T∈ℝ^2, respectively. To acquire location-based services, Alice transmits pilot signals to Bob, while Bob estimates Alice's position based on the received signals and his location q. We assume that the pilot signals are known to Bob. An illegitimate device (Eve) exists at location z=[z_x,z_y]^T∈ℝ^2. Eve also knows the same pilot signals and her location z. By eavesdropping on the channel to infer p, Eve jeopardizes Alice's location-privacy.
Without of loss of generality, we adopt the millimeter-wave (mmWave) multiple-input-single-output (MISO) orthogonal frequency-division multiplexing (OFDM) channel model of <cit.> for transmissions, where Alice is equipped with N_t antennas, while both Bob and Eve have a single antenna[The location-privacy enhancement framework proposed in Section <ref> can be extended for the single-input-multiple-output systems and multiple-input-multiple-output (MIMO) systems.]. Assume that K+1 paths, i.e., one LOS path and K NLOS paths, exist in the MISO OFDM channel, and the scatterer of the k-th NLOS path is located at an unknown position v_k=[v_k,x,v_k,y]^T∈ℝ^2, for k=1,2,⋯,K. We transmit G OFDM pilot signals via N sub-carriers with central carrier frequency φ_c and bandwidth B. It is assumed that a narrowband channel is employed, i.e., B≪φ_c. Denoting by x^(g,n) and f^(g,n)∈ℂ^N_t× 1 the g-th symbol transmitted over the n-th sub-carrier and the corresponding beamforming vector, respectively, we can express the g-th pilot signal over the n-th sub-carrier as
s^(g,n)≜ f^(g,n)x^(g,n)∈ℂ^N_t× 1 and write the received signal y^(g,n) as
y^(g,n)=h^(n)s^(g,n)+w^(g,n),
for n=0,1,⋯,N-1 and g=1,2,⋯,G, where w^(g,n)∼𝒞𝒩(0,σ^2) is an independent, zero-mean, complex Gaussian noise with variance σ^2, and h^(n)∈ℂ^1× N_t represents the n-th sub-carrier public channel vector. We assume that the pilot signals transmitted over the n-th sub-carrier are independent and identically distributed and 𝔼{s^(g,n)(s^(g,n))^H}= 1/N_t I_N_t holds for any g and n. Denote by c, d, T_s, and a_L(ϑ) ∈ℂ^L the speed of light, the distance between antennas, the sampling period T_s≜1/B, and the unit-norm Fourier vector
a_L(ϑ) ≜1/√(L)[1, e^-j 2πϑ, …, e^-j 2π (L-1)ϑ]^T, respectively. The public channel vector h^(n) is defined as
h^(n)≜√(N_t)∑_k=0^Kγ_k e^-j 2π nτ_k/N T_sα(θ_Tx,k)^H,
where k=0 corresponds to the LOS path, γ_k represents the complex channel coefficient of the k-th path, while the steering vector α(θ_Tx) is defined as α(θ_Tx) ≜a_N_t(d sin(θ_Tx)/λ_c) with λ_c≜c/φ_c being the wavelength. According to the geometry, the location-relevant channel parameters of each path, i.e., time-of-arrival (TOA) τ_k and angle-of-departure (AOD) θ_Tx, k, are given by
τ_k =v_0-v_k_2 +p-v_k_2/c,
θ_Tx, k =arctan(v_k,y-p_y/v_k,x-p_x),
for k=0,1,⋯,K, where v_0≜ q (or v_0≜ z) holds for Bob (or Eve). In addition, we assume τ_k/NT_s ∈(0,1] and d sin(θ_Tx,k)/λ_c∈(-1/2,1/2] as in <cit.>.
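As an illustration of how the geometric parameters enter the model, the following NumPy sketch (illustrative variable names and values only; not the authors' code) assembles h^(n) for a toy placement of Alice, Bob, and one scatterer; np.arctan2 is used in place of arctan for quadrant robustness, and the 1× N_t row vector is stored as a one-dimensional array.

```python
import numpy as np

def fourier_vec(L, theta):
    """Unit-norm Fourier vector a_L(theta)."""
    return np.exp(-2j * np.pi * theta * np.arange(L)) / np.sqrt(L)

def channel(n, p, v0, scatterers, gains, N, Ts, Nt, d, lam_c, c=3e8):
    """Return h^(n) (shape (Nt,), representing the 1 x Nt row vector)."""
    h = np.zeros(Nt, dtype=complex)
    points = [v0] + list(scatterers)              # k = 0 is the LOS path
    for gamma, v in zip(gains, points):
        tau = (np.linalg.norm(v0 - v) + np.linalg.norm(p - v)) / c
        theta_tx = np.arctan2(v[1] - p[1], v[0] - p[0])
        steer = fourier_vec(Nt, d * np.sin(theta_tx) / lam_c)
        h += np.sqrt(Nt) * gamma * np.exp(-2j * np.pi * n * tau / (N * Ts)) \
             * steer.conj()                        # alpha(theta)^H contribution
    return h

# toy geometry: Alice at p, Bob at q, one scatterer at v1 (all values illustrative)
p, q = np.array([0.0, 0.0]), np.array([20.0, 5.0])
v1 = np.array([8.0, 12.0])
lam_c = 3e8 / 60e9                                 # assumed 60 GHz carrier
h0 = channel(n=0, p=p, v0=q, scatterers=[v1], gains=[1.0, 0.3 + 0.2j],
             N=64, Ts=1 / 100e6, Nt=16, d=lam_c / 2, lam_c=lam_c)
print(h0.shape)
```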
Given the pilot signals, the location of Alice can be estimated using all the received signals. Hereafter, we denote by y^(g,n)_Bob and y^(g,n)_Eve the signals received by Bob and Eve, respectively, which are defined according to Equation (<ref>), while the public channels for Bob and Eve, denoted as h^(n)_Bob and h^(n)_Eve, are modelled according to Equation (<ref>), as a function of their locations. Note that the CSI is assumed to be unavailable to Alice. We aim to reduce the LPL to Eve with the designed FPI, providing localization guarantees to Bob with the shared information transmitted over a secure channel.
§ FAKE PATH INJECTION-AIDED LOCATION-PRIVACY ENHANCEMENT
In this section, we present a general framework for CSI-free location-privacy enhancement with the FPI. Since it is assumed that the CSI is unknown, we inject the fake paths tailored to the intrinsic structure of the channel, which can be considered as structured artificial noise, to effectively prevent Eve from accurately inferring Alice's position, and characterize the securely shared information to maintain Bob's localization accuracy.
According to the analysis of the atomic norm minimization based localization <cit.>, high localization accuracy relies on super-resolution channel estimation; to ensure the quality of the estimate, the TOAs and AODs of the K + 1 paths need to be sufficiently separated, respectively. Inspired by <cit.>, we propose the FPI to degrade the structure of the channel for location-privacy preservation, where the minimal separation for TOAs and AODs, i.e., Δ_min({τ_k/NT_s}) and Δ_min({d sin(θ_Tx,k)/λ_c}), is reduced, respectively[For the atomic norm minimization based localization method in <cit.> with the mmWave MIMO OFDM signaling, the conditions Δ_min({τ_k/NT_s})≥1/⌊N-1/8⌋ and Δ_min({d sin(θ_Tx,k)/λ_c})≥1/⌊N_t-1/4⌋ are desired.], with
Δ_min({κ_k})≜min_k≠ k^'min(|κ_k-κ_k^'|,1-|κ_k-κ_k^'|).
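For concreteness, the wrap-around minimal separation can be computed as in the short Python sketch below (illustrative values; the normalized TOA and AOD parameters are assumed to lie in the unit interval).

```python
def min_separation(kappa):
    """Delta_min({kappa_k}): smallest pairwise wrap-around distance on the unit circle."""
    gaps = []
    for i in range(len(kappa)):
        for j in range(i + 1, len(kappa)):
            d = abs(kappa[i] - kappa[j])
            gaps.append(min(d, 1 - d))
    return min(gaps)

# e.g. normalized TOAs tau_k/(N*T_s) of three true paths vs. the same set with a
# fake path placed close to one of them: the minimal separation collapses.
true_toas = [0.12, 0.37, 0.70]
with_fake = true_toas + [0.375]
print(min_separation(true_toas), min_separation(with_fake))
```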
To be more precise, in this framework, denoting by K̃ an integer assumed to be greater than or equal to K, we can design a virtual fake channel with K̃+1 fake paths according to the channel structure as
h̃^(n)≜√(N_t)∑_k̃=0^K̃γ̃_k̃ e^-j 2π nτ̃_k̃/N T_sα(θ̃_Tx,k̃)^H∈ℂ^1× N_t,
where γ̃_k̃, τ̃_k̃, and θ̃_Tx,k̃ can be interpreted as the artificial channel coefficient, TOA, and AOD of the k̃-th fake path, respectively. With the injection of these fake paths to the original mmWave MISO OFDM channel, which is also shown in Figure <ref>, the received signal used by Eve is
y^(g,n)_Eve =( h^(n)_Eve+h̃^(n))s^(g,n)+w_Eve^(g,n)
= h^(n)_Eves^(g,n)+ ξ^(g,n)+w_Eve^(g,n),
where w_Eve^(g,n)∼𝒞𝒩(0,σ^2). As seen in Equation (<ref>), the proposed FPI equivalently adds the structured artificial noise, denoted as ξ^(g,n), to the g-th received signal transmitted over the n-th sub-carrier, with n=0,1,⋯,N-1 and g=1,2,⋯,G, i.e.,
ξ^(g,n)≜h̃^(n)s^(g,n).
Provided that these artificial channel parameters are designed to be individually close to the true channel parameters, the introduced fake paths heavily overlap with the true paths, and the minimal separation for TOAs and AODs is thus reduced according to the definition of Δ_min(·) in Equation (<ref>), degrading the channel structure.
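The sketch below mimics the injection for one pilot on one sub-carrier: a fake channel whose parameters are slightly offset from the true ones is superimposed on the true channel, which is equivalent to adding the structured artificial noise ξ^(g,n). All numerical values and helper names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
N_t, N, T_s, d, lam, n = 16, 16, 1/15e6, 0.0025, 0.005, 3   # hypothetical constants

def a(theta):                                   # steering vector alpha(theta)
    idx = np.arange(N_t)
    return np.exp(-2j*np.pi*idx*d*np.sin(theta)/lam) / np.sqrt(N_t)

def chan(gains, taus, thetas):                  # n-th sub-carrier channel row
    return np.sqrt(N_t) * sum(g*np.exp(-2j*np.pi*n*t/(N*T_s))*np.conj(a(th))
                              for g, t, th in zip(gains, taus, thetas))

h_true = chan([1.0, 0.3], [50e-9, 80e-9], [0.40, -0.70])      # true paths
h_fake = chan([1.0, 0.3], [52e-9, 82e-9], [0.41, -0.69])      # fake paths close to the true ones
s = (rng.standard_normal(N_t) + 1j*rng.standard_normal(N_t)) / np.sqrt(2*N_t)  # pilot, E{ss^H}=I/N_t
xi = h_fake @ s                                 # structured artificial noise xi^(g,n)
w = 0.05*(rng.standard_normal() + 1j*rng.standard_normal())/np.sqrt(2)
y_eve = (h_true + h_fake) @ s + w               # what Eve observes
y_bob = h_true @ s + w                          # Bob, after removing the fake paths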
On the other hand, Bob is also affected by the FPI. To alleviate the distortion caused by the FPI for Bob, we assume Alice transmits the shared information {{γ̃_k̃}, {τ̃_k̃}, {θ̃_Tx,k̃}}[The amount of shared information is determined by the design for the FPI; one specific design will be provided in Section <ref>. ] to Bob over a secure channel that is inaccessible by Eve. Then, Bob can exploit the shared information to remove the fake paths and the signal employed by Bob for localization is still given by[Error in the shared information will reduce Bob’s localization accuracy, which is affected by quantization, channel statistics, communication protocol, etc. To show the efficacy of our design, perfect shared information is assumed; the study of such error is beyond the scope of this paper.]
y^(g,n)_Bob= h^(n)_Bobs^(g,n)+w_Bob^(g,n),
where w_Bob^(g,n)∼𝒞𝒩(0,σ^2).
In contrast to Bob who has the access to the shared information, the FPI distorts the structure of Eve's channel and thus degrades her eavesdropping ability. We note that the proposed location-privacy enhancement framework does not rely on the CSI and any specific estimation method. In Section <ref>, to decrease the minimal separation without CSI, an efficient strategy for the FPI will be shown, where the virtually added fake paths are further elaborated upon, while the specific design of γ̃_k̃, τ̃_k̃, θ̃_Tx,k̃ and the accompanying shared information are provided.
§ IDENTIFIABILITY OF FAKE PATHS
To enhance the location-privacy, FPI is proposed in Section <ref> to distort the structure of Eve's channel. However, if Eve can distinguish and remove the designed fake paths, the efficacy of the proposed method would be severely degraded. Hence, the identifiability of the proposed fake paths is investigated in this section and conditions are provided to ensure that Eve cannot distinguish the fake paths.
To show that Eve cannot distinguish the injected fake paths from the true paths, we introduce the following notion of the feasibility of paths.
Given a pair of TOA and AOD parameters, i.e., {τ,θ_Tx}, a path characterized by {τ,θ_Tx} is geometrically feasible if there exists a scatterer at location v that can be mapped to the parameters {τ,θ_Tx} according to Equation (<ref>).
From Definition <ref>, to make the introduced fake paths indistinguishable, we need to show that they are geometrically feasible, where the required conditions are provided in the following proposition.
With the injection of the structured artificial noise via fake paths proposed in Equation (<ref>), Eve cannot distinguish between the equivalently added fake paths and the true paths if cτ̃_k̃≥ z- p_2 holds for k̃=0,1,⋯,K̃.
See Appendix <ref>.
According to Proposition <ref>, with appropriate choice of the design parameters, the fake paths cannot be identified and thus cannot be removed. With the FPI, Eve will incorrectly believe that there are K+K̃+1 NLOS paths associated with K+K̃+1 scatterers at positions v_k and ṽ_k̃ with k=1,2,⋯,K and k̃=0,1,⋯,K̃, where ṽ_k is given in Equation (<ref>) and can be evaluated by substituting Equation (<ref>) into Equation (<ref>). The analysis of the identifiability of the proposed fake paths suggests the rationality of the proposed method. In Section <ref>, the efficacy of the designed FPI will be investigated.
§ FISHER INFORMATION OF CHANNEL PARAMETERS WITH FAKE PATH INJECTION
Since localization accuracy highly relies on the quality of the estimates of channel parameters, the Fisher information of the location-relevant channel parameters is analyzed in this section to show that the FPI results in a reduction of the minimal separation and thus effectively increases the estimation error. As an example, we see that <cit.> determines the minimal separation which is a sufficient condition for uniqueness and optimality of the proposed localization algorithm.
To this end, we first present the exact expression for the FIM[Though similar derivations can be found in <cit.>, we show the FIM herein for consistency, which will be used for the analysis of our design.]. Then, two lower bounds on the estimation error are derived and analyzed based on an asymptotic FIM, suggesting appropriate values of the key design parameters.
For simplicity, in this section, K̃ is assumed to be equal to K and the k-th fake path is designed to be close to the k-th true path in terms of differences of the channel coefficients, TOAs and AODs. In addition, we assume the conditions in Proposition <ref> are satisfied for the analysis in this section. To quantify how close the k-th true path is to the k-th fake path, we denote by δ_γ_k, δ_τ_k and δ_θ_Tx,k the differences for the channel coefficients, TOAs and AODs, respectively, and define them as
δ_γ_k ≜γ̃_k-γ_k,
δ_τ_k ≜τ̃_k-τ_k,
δ_θ_Tx,k ≜arcsin(sin(θ̃_Tx,k)-sin(θ_Tx,k)).
According to Equation (<ref>), if the values of δ_τ_k and δ_θ_Tx,k are sufficiently small, the minimal separation for TOAs and AODs are determined by δ_τ_k and δ_θ_Tx,k, respectively. Hence, we will analyze the effects of δ_γ_k, δ_τ_k and δ_θ_Tx,k. We note that there are lower bounds on |δ_γ_k|, |δ_τ_k| and |δ_θ_Tx,k| in practice, denoted as δ_γ_k,min, δ_τ_k,min and δ_θ_Tx,k,min, respectively, i.e., 0≤δ_γ_k,min≤|δ_γ_k|, 0<δ_τ_k,min≤|δ_τ_k| and 0<δ_θ_Tx,k,min≤|δ_θ_Tx,k| so that each pair of fake and true paths is still considered to be produced by two distinct scatterers.
§.§ Exact Expression for FIM
Considering the equivalently injected fake paths, we stack the true and artificial channel parameters as γ̅≜ [γ_0,γ_1,⋯,γ_K,γ̃_0,γ̃_1,⋯,γ̃_K]^T∈ℂ^2(K+1), τ̅≜ [τ_0,τ_1,⋯,τ_K,τ̃_0,τ̃_1,⋯,τ̃_K]^T∈ℝ^2(K+1), and θ̅_Tx≜ [θ_Tx,0,θ_Tx,1,⋯,θ_Tx,K,θ̃_Tx,0,θ̃_Tx,1,⋯,θ̃_Tx,K]^T∈ℝ^2(K+1). Denote the vector of all the channel parameters as
η≜[τ̅^T,θ̅_Tx^T,ℜ{γ̅^T},ℑ{γ̅^T}]^T∈ℝ^8(K+1).
Accordingly, the FIM J^(η)∈ℝ^8(K+1)×8(K+1) is given by <cit.>
J^(η) = [ J^(η)_τ̅,τ̅ J^(η)_τ̅,θ̅_Tx J^(η)_τ̅,ℜ{γ̅} J^(η)_τ̅,ℑ{γ̅}; J^(η)_θ̅_Tx,τ̅ J^(η)_θ̅_Tx,θ̅_Tx J^(η)_θ̅_Tx,ℜ{γ̅} J^(η)_θ̅_Tx,ℑ{γ̅}; J^(η)_ℜ{γ̅},τ̅ J^(η)_ℜ{γ̅},θ̅_Tx J^(η)_ℜ{γ̅},ℜ{γ̅} J^(η)_ℜ{γ̅},ℑ{γ̅}; J^(η)_ℑ{γ̅},τ̅ J^(η)_ℑ{γ̅},θ̅_Tx J^(η)_ℑ{γ̅},ℜ{γ̅} J^(η)_ℑ{γ̅},ℑ{γ̅}; ],
where
J^(η)[r,u] = 2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂η[r])^*∂ u^(g,n)/∂η[u]},
with r,u=0,1,⋯,8K+7.
Herein, we define
u^(g,n)≜√(N_t)∑_k=0^2K+1γ̅[k]e^-j2π nτ̅[k]/NT_sα(θ̅_Tx[k])^H s^(g,n),
and compute Equation (<ref>) using
∂ u^(g,n)/∂τ̅[k] = -j2π√(N_t) n/NT_sγ̅[k] e^-j2π nτ̅[k]/NT_sα(θ̅_Tx[k])^H s^(g,n),
∂ u^(g,n)/∂θ̅_Tx[k] = j2π√(N_t) d/λ_cγ̅[k] e^-j2π nτ̅[k]/NT_scos(θ̅_Tx[k])
×α(θ̅_Tx[k])^Hdiag([0,1,⋯,N_t-1]) s^(g,n),
∂ u^(g,n)/∂ℜ{γ̅[k]} = √(N_t) e^-j2π nτ̅[k]/NT_sα(θ̅_Tx[k])^H s^(g,n),
∂ u^(g,n)/∂ℑ{γ̅[k]} = j√(N_t)e^-j2π nτ̅[k]/NT_sα(θ̅_Tx[k])^H s^(g,n),
for k=0,1,⋯,2K+1. Denote by η̂ an unbiased estimator of η. Based on the FIM in Equation (<ref>), the mean squared error (MSE) of η̂ can be evaluated according to <cit.>
𝔼{(η̂-η)(η̂-η)^T}≽( J^(η))^-1,
which is also well known as the Cramér-Rao lower bound (CRLB).
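For a numerical sanity check of this FIM and the CRLB, J^(η) can be assembled generically from the noise-free observation map; the sketch below does so with central finite differences in place of the analytic derivatives, and the callable mu (returning the stacked noise-free samples as a function of η) is a placeholder to be supplied by the user.

import numpy as np

def fisher_information(mu, eta, sigma, eps=1e-7):
    # J[r,u] = (2/sigma^2) * sum_{g,n} Re{ (d mu/d eta_r)^* (d mu/d eta_u) };
    # mu(eta) must return the stacked complex noise-free observations
    eta = np.asarray(eta, dtype=float)
    grads = []
    for r in range(eta.size):
        e = np.zeros(eta.size); e[r] = eps
        grads.append((mu(eta + e) - mu(eta - e)) / (2 * eps))   # finite-difference derivative
    J = np.empty((eta.size, eta.size))
    for r in range(eta.size):
        for u in range(eta.size):
            J[r, u] = 2.0/sigma**2 * np.sum(np.real(np.conj(grads[r]) * grads[u]))
    return J

# CRLB on the total MSE: trace of the (pseudo-)inverse of the FIM
# crlb = np.trace(np.linalg.pinv(fisher_information(mu, eta0, sigma)))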
Denote by ϕ≜[p^T,v_1^T,v_2^T,⋯,v_K^T,ṽ_0^T,ṽ_1^T,⋯,ṽ_K^T,ℜ{γ̅^T},ℑ{γ̅^T}]^T∈ℝ^8(K+1) a parameter vector including the positions of Alice and scatterers.
The CRLB for localization can be obtained as well via the analysis of the associated FIM J^(ϕ) given by
J^(ϕ) = Π J^(η)Π^T,
where Π≜∂η^T/∂ϕ∈ℝ^8(K+1)×8(K+1).
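A sketch of this reparameterization is given below; the map eta_of_phi from positions and coefficients to channel parameters is assumed to be provided, and its Jacobian Π is approximated by finite differences.

import numpy as np

def fim_in_position_domain(J_eta, eta_of_phi, phi, eps=1e-7):
    # J^(phi) = Pi J^(eta) Pi^T, where Pi[i, j] = d eta_j / d phi_i
    phi = np.asarray(phi, dtype=float)
    eta0 = np.asarray(eta_of_phi(phi), dtype=float)
    Pi = np.zeros((phi.size, eta0.size))
    for i in range(phi.size):
        e = np.zeros(phi.size); e[i] = eps
        Pi[i] = (np.asarray(eta_of_phi(phi + e)) - np.asarray(eta_of_phi(phi - e))) / (2*eps)
    return Pi @ J_eta @ Pi.T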
§.§ Lower Bound on Estimation Error
Due to the structure of the FIM, it is complicated to theoretically analyze the exact FIM to determine how the design parameters δ_γ_k, δ_τ_k, and δ_θ_Tx,k affect the estimation error.
To show the efficacy of our framework, we derive two lower bounds on the estimation error using an approximated FIM in this subsection, which are closed-form expressions associated with δ_γ_k, δ_τ_k, and δ_θ_Tx,k, under certain mild conditions on these design parameters.
Since there are no assumptions on the path loss model and knowledge of the channel coefficients do not improve the localization accuracy <cit.>, the channel coefficients are considered as nuisance parameters and we restrict the analysis to the Fisher information of the location-relevant channel parameters for each pair of the true and fake paths, i.e., ζ_k≜[τ_k, θ_Tx,k, τ̃_k, θ̃_Tx,k]^T∈ℝ^4 with k=0,1,⋯,K. Denote by J^(ζ_k)∈ℝ^4×4 the exact expression for the FIM with respect to ζ_k. According to Equation (<ref>), J^(ζ_k) can be analogously derived, but its complicated structure still hampers the theoretical analysis of the estimation error.
To associate the design parameters δ_γ_k, δ_τ_k, and δ_θ_Tx,k with the estimation error in a closed-form expression, an asymptotic FIM J̆^(ζ_k)∈ℝ^4×4 is studied, which is defined as
J̆^(ζ_k)≐8π^2/(σ NT_s)^2ℜ{Γ^(k)},
where the matrix Γ^(k)∈ℝ^4×4 is provided in Equation (<ref>), with constants Λ, O_i and functions M_i^(k), i=1,2,⋯,6, defined as, Λ≜λ_c/NT_sd, O_1≜N(N-1)(2N-1)/6, O_2≜N(N-1)/2, O_3≜N_t-1/2, O_4≜(N_t-1)(2N_t-1)/6, O_5=N, O_6=1, M_1^(k)≜∑_n=0^N-1n^2e^-j2π n δ_τ_k/NT_s, M_2^(k)≜∑_n=0^N-1ne^-j2π n δ_τ_k/NT_s, M_3^(k)≜1/N_t∑_n_t=0^N_t-1n_te^j2π n_t d sin(δ_θ_Tx,k)/λ_c, M_4^(k)≜1/N_t∑_n_t=0^N_t-1n_t^2e^j2π n_t d sin(δ_θ_Tx,k)/λ_c, M_5^(k)≜∑_n=0^N-1e^-j2π n δ_τ_k/NT_s, and M_6^(k)≜1/N_t∑_n_t=0^N_t-1e^j2π n_t d sin(δ_θ_Tx,k)/λ_c.
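The constants O_i and the offset-dependent sums M_i^(k) can be evaluated directly, as in the sketch below; the offsets δ_τ_k and δ_θ_Tx,k passed in are hypothetical.

import numpy as np

def asymptotic_fim_terms(N, N_t, T_s, d, lam, d_tau, d_theta):
    n, nt = np.arange(N), np.arange(N_t)
    ph_n = np.exp(-2j*np.pi*n*d_tau/(N*T_s))                 # TOA-offset phase ramp
    ph_t = np.exp(2j*np.pi*nt*d*np.sin(d_theta)/lam)         # AOD-offset phase ramp
    M = {1: np.sum(n**2 * ph_n),        2: np.sum(n * ph_n),
         3: np.sum(nt * ph_t) / N_t,    4: np.sum(nt**2 * ph_t) / N_t,
         5: np.sum(ph_n),               6: np.sum(ph_t) / N_t}
    O = {1: N*(N-1)*(2*N-1)/6, 2: N*(N-1)/2, 3: (N_t-1)/2,
         4: (N_t-1)*(2*N_t-1)/6, 5: N, 6: 1}
    return M, O

M, O = asymptotic_fim_terms(16, 16, 1/15e6, 0.0025, 0.005, d_tau=2e-9, d_theta=0.01)
# as d_tau, d_theta -> 0, each M_i approaches the corresponding O_i, so the
# asymptotic FIM tends to a singular matrix (cf. the proposition below)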
We note that, as compared with 1/G J^(ζ_k), the approximation error with J̆^(ζ_k) is negligible when a large number of symbols are transmitted according to the following lemma.
As G→∞, 1/G J^(ζ_k) converges almost surely (a.s.) to J̆^(ζ_k), i.e.,
ℙ(lim_G→∞1/G J^(ζ_k)=J̆^(ζ_k))=1.
See Appendix <ref>.
We will show that, using such an asymptotic FIM as an approximation of 1/G J^(ζ_k), the theoretical analysis of the degraded estimation accuracy
is tractable.
Rank{J̆^(ζ_k)}→Ω^(ζ_k) as δ_γ_k, δ_τ_k, δ_θ_Tx,k→ 0, where Ω^(ζ_k) is an integer with Ω^(ζ_k)<4.
Since lim_δ_γ_k→0ℜ{γ^*_kγ̃_k}=lim_δ_γ_k→0ℜ{γ̃^*_kγ_k}=lim_δ_γ_k→0|γ̃_k|^2=|γ_k|^2 holds for any k while M_i^(k) is a function with respect to δ_τ_k or δ_θ_Tx,k with
lim_δ_τ_k,δ_θ_Tx,k→0ℜ{M_i^(k)M_t^(k)}=O_iO_t,
|ℜ{M_i^(k)M_t^(k)}|≤ O_iO_t,
for i,t=1,2,⋯,6, it can be verified that the third and fourth rows of Γ^(k) converge to its first and second rows, respectively, as δ_γ_k, δ_τ_k, δ_θ_Tx,k→ 0, which concludes the proof.
Proposition <ref> indicates that J̆^(ζ_k) tends to a singular matrix as δ_γ_k, δ_τ_k, δ_θ_Tx,k→ 0, which can be exploited to characterize the asymptotic property for J^(ζ_k) based on Lemma <ref> as well.
By leveraging the approximated FIM J̆^(ζ_k), we denote by ζ̂_k an unbiased estimator of ζ^(k) and bound the estimation error with a closed-form expression as follows.
For k=0,1,⋯,K, if δ_γ_k=δ_γ_k,min=0 holds, then for any real ψ>0 there always exists a positive integer 𝒢 such that, whenever G≥𝒢, the following lower bound on the MSE of ζ̂_k holds with probability 1,
𝔼{(ζ̂_k-ζ_k)^T(ζ̂_k-ζ_k)}>1/G(Ξ^(k)-ψ),
where Ξ^(k) is provided in Equation (<ref>).
For k=0,1,⋯,K, suppose that
A1) δ_γ_k=δ_γ_k,min=0;
A2) δ_τ_k,min≤|δ_τ_k|≤δ_τ_k,max;
A3) δ_θ_Tx,k,min≤|δ_θ_Tx;k|≤δ_θ_Tx,k,max
A4)
max_n_t=0,1,2,⋯,N_t-1 n=0,1,2,⋯,N-1|n_tsin(δ_θ_Tx,k)+nΛδ_τ_k|≤λ_c/4d;
A5) sin(δ_θ_Tx,k)≥(N-1)Λδ_τ_k;
then for any real γ>0 there always exists a positive integer 𝒢 such that, whenever G≥𝒢, the following lower bound on the MSE of ζ̂_k holds with probability 1,
𝔼{(ζ̂_k-ζ_k)^T(ζ̂_k-ζ_k)}
≥Tr(( J^(ζ_k))^-1)>Tr((J̆^(ζ_k))^-1)-γ≥Ψ^(k)-γ,
where Ψ^(k) is provided in Equation (<ref>).
See Appendix <ref>.
According to Equations (<ref>) and (<ref>), Ξ^(k)→∞ as δ_τ_k, δ_θ_Tx,k→ 0; constrained by the boundary, i.e., δ_τ_k,min and δ_θ_Tx,k,min, smaller values for |δ_τ_k| and |δ_θ_Tx,k| are preferred to degrade Eve's estimation accuracy.
To make the effect of δ_τ_k and δ_θ_Tx,k on the estimation error more clear, we can further bound the lower bound derived in Equation (<ref>). To this end, we note that, according to Equation (<ref>), there exist two positive real numbers, denoted as δ_τ_k,max and δ_θ_Tx,k,max, such that for a given constant ϵ≜2(O_1O_4O_5O_6-(O_2O_3)^2)/O_1O_6+2O_2O_3+O_4O_5>0, the following inequalities hold,
|ℜ{M_1^(k)M_6^(k)}-O_1O_6|<ϵ
|ℜ{M_2^(k)M_3^(k)}-O_2O_3|<ϵ
|ℜ{M_4^(k)M_5^(k)}-O_4O_5|<ϵ
if |δ_θ_Tx,k|≤δ_θ_Tx,k,max and |δ_τ_k|≤δ_τ_k,max, which will be used to bound Ξ^(k) with other additional assumptions.
For k=0,1,⋯,K, suppose that
A1) δ_γ_k=δ_γ_k,min=0,
A2) 0<δ_τ_k,min≤δ_τ_k≤δ_τ_k,max,
A3) 0<δ_θ_Tx,k,min≤δ_θ_Tx,k≤δ_θ_Tx,k,max,
A4)
max_n_t=0,1,2,⋯,N_t-1 n=0,1,2,⋯,N-1n_tsin(δ_θ_Tx,k)+nΛδ_τ_k≤λ_c/4d,
A5) (N-1)Λδ_τ_k,max≥sin(δ_θ_Tx,k)≥(N-1)Λδ_τ_k,
the MSE of ζ̂^(k) can be bounded as in Equation (<ref>), where Ξ^(k) is replaced with Ψ^(k), given in Equation (<ref>).
See Appendix <ref>.
As observed in Equation (<ref>), Ψ^(k) can be decomposed into three terms, i.e., Ψ^(k)=Ψ^(k)_1Ψ^(k)_2Ψ^(k)_3. Define SNR_k≜|γ_k|^2/σ^2 as the received SNR for the k-th true path. Several key properties of Ψ^(k) are listed as follows.
P1) According to Ψ^(k)_1, the lower bound on the MSE of ζ^(k) is not only inversely proportional to SNR_k but also related to the AODs;
P2) Coinciding with the analysis for the atomic norm minimization based method <cit.>, we have
Ψ^(k)_2=1/𝒪(BNN_t^3/2),
which suggests employing narrower bandwidth B, smaller number of transmit antennas and sub-carriers, i.e., N_t, and N, to improve location-privacy enhancement;
P3) The value of Ψ^(k)_3 increases as √(sin(δ_θ_Tx,k)) decreases; provided that the value of δ_θ_Tx,k is small enough such that δ_θ_Tx,k,min≤δ_θ_Tx,k≤π/2, the value of Ψ^(k)_3 monotonically decreases with respect to δ_θ_Tx,k and the largest value of Ψ^(k)_3 is achieved when δ_θ_Tx,k=δ_θ_Tx,k,min. In addition, since we set δ_τ_k=sin(δ_θ_Tx,k)/(N-1)Λ according to the proof of Corollary <ref>, the value of δ_τ_k is reduced as well when δ_θ_Tx,k decreases, showing that Eve's eavesdropping ability can be efficiently degraded if the injected fake paths are close to the true paths.
Assumption A1 can be realized with a transmit beamformer that will be proposed in Section <ref>. With respect to the upper bounds in assumptions A2 and A3, they are needed to show Equation (<ref>); it can be numerically shown that δ_τ_k,max and δ_θ_Tx,k,max can be quite large in practice such that δ_τ_k,min≤δ_τ_k,max and δ_θ_Tx,k,min≤δ_θ_Tx,k,max can be easily satisfied.
For small values of δ_τ_k and δ_θ_Tx,k, the assumptions A4 and A5 will be met.
Inspired by the analysis of the NLOS paths for the single-carrier mmWave MIMO channels in <cit.>, due to the low-scattering sparse nature of the mm-Wave channels, the true paths are not close to each other and we have
2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂ξ_k)^*∂ u^(g,n)/∂ξ_k^'}≈0, k≠ k^',
for a large number of symbols and transmit antennas, where ξ_k∈{τ_k,θ_Tx,k,ℜ{γ_k},ℑ{γ_k}} and ξ_k^' is defined similarly, with k,k^'=0,1,⋯,K-1. Thus, the true paths of mmWave MISO OFDM channels are approximately orthogonal to each other, i.e., by grouping the true channel parameters path-by-path, the associated FIM is almost a block diagonal matrix. Furthermore, the k-th path is designed to be close to the k-th true path so the estimation accuracy for ζ^(k) does not rely much on the uncertainties for the channel parameters of the other paths; if the channel coefficients are also assumed to be known, Tr(( J^(ζ_k))^-1) is nearly the CRLB for the MSE of ζ̂^(k).
Inspired by the analysis of the NLOS paths for the single-carrier mmWave MIMO channels in <cit.>, we can simplify the FIM for the estimation of the mmWave MISO OFDM channels using the approximation in the following lemma, provided that the SAN is not injected.
If θ_Tx,k≠θ_Tx,r holds for any k≠ r with k,r∈{0,1,⋯,K}, the cross-correlation between any two distinct paths converges almost surely (a.s.) to 0, i.e.,
𝒫(lim_G,N_t→∞2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂ξ_r)^*∂ u^(g,n)/∂ξ_k}=0)
=1,
with k≠ r, where ξ_k∈{τ_k,θ_Tx,k,ℜ{γ_k},ℑ{γ_k}} and ξ_r is defined similarly.
See Appendix <ref>.
Since Eve cannot distinguish between the true paths and fake paths, as proved in Section <ref>, all the estimated location-relevant channel parameters are used for localization. Hence, though it is unclear how the injected fake paths affect the estimation accuracy of the individual channel parameters from Proposition <ref> and Corollary <ref>, the derived lower bound in Equation (<ref>) still indicates that Eve's localization accuracy can be effectively decreased with the proper design of the FPI according to Equation (<ref>).
§.§ Approximated FIM with SAN
Due to the low-scattering sparse nature of the mm-Wave channels <cit.>, Equation (<ref>) is a valid approximation for the analysis of FIM according to Proposition <ref>. However, since the fake paths are designed to be very close to the true paths according to the proposed framework in Section <ref>, the approximation error with Equation (<ref>) is not negligible for the cross-correlation between the fake and true paths in practice.
To be more precise, we assume that the values of δ_θ_Tx,k are relatively small as compared with the minimal separation for the true AODs, and for r, z, k=0,1,⋯,K with r≠ z,
|α(θ_Tx,r)α(θ_Tx,z)^H| ≪|α(θ_Tx,k)α(θ̃_Tx,k)^H|,
|α(θ_Tx,r)α(θ̃_Tx,z)^H| ≪|α(θ_Tx,k)α(θ̃_Tx,k)^H|,
|α(θ̃_Tx,r)α(θ̃_Tx,z)^H| ≪|α(θ_Tx,k)α(θ̃_Tx,k)^H|,
hold with a certain large number of transmit antennas employed.
Then, if the number of symbols is large enough, the approximation in Equation (<ref>) still can be applied for the cross-correlation terms with ξ_k∈{τ̅[k],θ̅_Tx[k],ℜ{γ̅[k]},ℑ{γ̅[k]}} and ξ_r∈{τ̅[r],θ̅_Tx[r],ℜ{γ̅[r]},ℑ{γ̅[r]}}, where k≠ r and |k-r|≠ K, while the other terms can be simplified as presented in Appendix <ref>.
Hence, we can individually study the k-th true and fake paths for the analysis of the estimation error with SAN, where k=0,1,⋯,K.
In addition, since we have lim_δ_γ_k→0ℜ{γ^*_kγ̃_k}=lim_δ_γ_k→0|γ̃_k|^2=|γ_k|^2, it can be verified that
Γ^(k)[1,1]
=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[1,3]}=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[3,1]}
=lim_δ_τ_k,δ_θ_Tx,k→0Γ^(k)[3,3],
Γ^(k)[1,2]= Γ^(k)[2,1]
=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[1,4]}=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[2,3]}
=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[3,2]}=lim_δ_τ_k,δ_θ_Tx,k→0Γ^(k)[3,4]
=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[4,1]}=lim_δ_τ_k,δ_θ_Tx,k→0Γ^(k)[4,3]
Γ^(k)[2,2]
=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[2,4]}=lim_δ_τ_k,δ_θ_Tx,k→0ℜ{Γ^(k)[4,2]}
=lim_δ_τ_k,δ_θ_Tx,k→0Γ^(k)[4,4].
This indicates that J^(ζ_k) tends to a singular matrix as G→∞ and δ_γ_k, δ_τ_k, δ_θ_Tx,k→ 0; meanwhile, the estimation accuracy of the channel parameters is degraded by the well-designed SAN according to Equation (<ref>).
§ BEAMFORMING DESIGN FOR THE FAKE PATH INJECTION
From Section <ref>, it is clear that the injection of fake paths will increase location privacy. However, it is impractical to create extra physical scatterers to generate the fake paths. In the sequel, following the principle of the proposed framework, we design a transmit beamforming strategy that ensures the creation of fake paths that are close to the true ones, to efficiently reduce the LPL to Eve without the need for CSI.
§.§ Alice's Beamformer
Let δ̅_τ and δ̅_θ_TX represent two parameters used for beamforming design.
To enhance the location-privacy, Alice still employs the mmWave MISO OFDM signaling according to Section <ref>, but designs her transmitter beamformer f̃^(g,n) as
f̃^(g,n)≜( I_N_t + √(N_t)e^-j2π n δ̅_τ/NT_sdiag(α(δ̅_θ_Tx)^H)) f^(g,n),
for the g-th pilot signal transmitted over the n-th sub-carrier. By analyzing the received signals in the following subsections, we will observe that adopting the transmit beamformer in
Equation (<ref>) equivalently injects K virtual fake paths to the mmWave MISO OFDM channels[The proposed beamforming strategy can be directly extended to virtually add ν K paths with ν∈𝒩^+.].
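A minimal sketch of this transmit beamformer, applying Equation (<ref>) element-wise for a single pilot and sub-carrier, is shown below; all parameter values are assumed for illustration.

import numpy as np

def alice_beamformer(f, n, N, T_s, d_tau_bar, d_theta_bar, d, lam):
    # f_tilde = (I + sqrt(N_t) e^{-j 2 pi n d_tau_bar/(N T_s)} diag(a(d_theta_bar)^H)) f
    N_t = f.shape[0]
    idx = np.arange(N_t)
    a_herm = np.exp(2j*np.pi*idx*d*np.sin(d_theta_bar)/lam) / np.sqrt(N_t)  # entries of a(.)^H
    phase = np.sqrt(N_t) * np.exp(-2j*np.pi*n*d_tau_bar/(N*T_s))
    return f + phase * a_herm * f        # diag(a^H) f is an element-wise product

# toy usage with a random unit-power pilot beam (hypothetical values)
rng = np.random.default_rng(1)
f = (rng.standard_normal(16) + 1j*rng.standard_normal(16)) / np.sqrt(2*16)
f_tilde = alice_beamformer(f, n=3, N=16, T_s=1/15e6,
                           d_tau_bar=3.3e-9, d_theta_bar=0.02, d=0.0025, lam=0.005)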
§.§ Bob's Localization
Through the public channel h^(n)_Bob, Bob receives
y^(g,n)_Bob= h^(n)_Bobf̃^(g,n)x^(g,n)+w_Bob^(g,n),
for n=0,1,⋯,N-1 and g=1,2,⋯,G. Assume Bob has the knowledge of the structure of Alice's beamformer in Equation (<ref>), i.e., the construction of f̃^(g,n) based on f^(g,n). By leveraging the secure channel, Bob also knows δ̅≜[δ̅_τ,δ̅_θ_TX]^T∈ℝ^2, which is the shared information. Rather than directly removing the fake paths, he can construct effective pilot signals s̃^(g,n) based on the known pilot signal s^(g,n) as
s̃^(g,n) ≜f̃^(g,n)x^(g,n)
=( I_N_t + √(N_t)e^-j2π n δ̅_τ/NT_sdiag(α(δ̅_θ_Tx)^H)) s^(g,n),
which simplifies Equation (<ref>) into Equation (<ref>), i.e., y^(g,n)_Bob= h^(n)_Bobs̃^(g,n)+w_Bob^(g,n). Using the securely shared information, Bob can remove the fake paths. Note that the amount of shared information δ̅ does not increase with respect to the number of received signal samples, i.e., NG.
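On Bob's side, only the pair (δ̅_τ, δ̅_θ_TX) needs to be received over the secure channel; a sketch of the corresponding pilot reconstruction is given below (parameter names are ours).

import numpy as np

def bob_effective_pilot(s, n, N, T_s, d, lam, shared):
    # shared = (d_tau_bar, d_theta_bar): the only quantities Bob needs from the secure channel
    d_tau_bar, d_theta_bar = shared
    N_t = s.shape[0]
    idx = np.arange(N_t)
    a_herm = np.exp(2j*np.pi*idx*d*np.sin(d_theta_bar)/lam) / np.sqrt(N_t)
    return s + np.sqrt(N_t)*np.exp(-2j*np.pi*n*d_tau_bar/(N*T_s)) * a_herm * s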
As assumed in Section <ref>, Bob receives δ̅ noiselessly through the secure channel.
§.§ Eve's Localization
Assume that the shared information δ̅ is refreshed at a certain rate such that Eve cannot decipher it. Then, the following received signal has to be used if Eve attempts to estimate Alice's position,
y^(g,n)_Eve = h^(n)_Evef̃^(g,n)x^(g,n)+w_Eve^(g,n)
=( h^(n)_Eve+h̃^(n)) s^(g,n)+w_Eve^(g,n)
= h^(n)_Eves^(g,n)+ξ^(g,n) +w_Eve^(g,n),
where ξ^(g,n) and h̃^(n) are defined in Equations (<ref>) and (<ref>), respectively, with K̃=K and artificial channel parameters designed as
γ̃_k =γ_k,
τ̃_k =τ_k+δ̅_τ,
θ̃_Tx,k =arcsin(sin(θ_Tx,k)+sin(δ̅_θ_Tx)).
Accordingly, if Alice uses the proposed transmit beamformer, the differences defined in Equation (<ref>) are given by
δ_γ_0 =δ_γ_1=⋯=δ_γ_K=0,
δ_τ_0 =δ_τ_1=⋯=δ_τ_K=δ̅_τ,
δ_θ_Tx,0 =δ_θ_Tx,1=⋯=δ_θ_Tx,K=δ̅_θ_Tx.
Hence, given that the values of δ̅_τ and δ̅_θ_Tx are small enough, the minimal separation for TOAs and that for AODs are |δ̅_τ/NT_s| and |dsin(δ̅_θ_Tx)/λ_c|, respectively, according to the definition of Δ_min(·) provided in Equation (<ref>), which can be efficiently decreased to degrade Eve's channel structure.
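The equivalence between beamforming at Alice and injecting fake paths into Eve's channel can be checked numerically, as in the sketch below: applying the beamformer to the pilot and applying the induced fake channel to the original pilot give the same noise-free sample. The channel parameters and design offsets are hypothetical; note also that the arcsine mapping requires |sin(θ_Tx,k)+sin(δ̅_θ_Tx)|≤1.

import numpy as np

rng = np.random.default_rng(2)
N_t, N, T_s, d, lam, n = 16, 16, 1/15e6, 0.0025, 0.005, 5    # hypothetical constants
d_tau_bar, d_theta_bar = 3.3e-9, 0.02                        # shared design pair

def a_herm(theta):                                            # entries of alpha(theta)^H
    return np.exp(2j*np.pi*np.arange(N_t)*d*np.sin(theta)/lam) / np.sqrt(N_t)

def chan(gains, taus, thetas):
    return np.sqrt(N_t) * sum(g*np.exp(-2j*np.pi*n*t/(N*T_s))*a_herm(th)
                              for g, t, th in zip(gains, taus, thetas))

gains, taus, thetas = [1.0, 0.3], [30e-9, 65e-9], [0.62, -0.40]
h = chan(gains, taus, thetas)                                 # Eve's true channel
s = (rng.standard_normal(N_t) + 1j*rng.standard_normal(N_t)) / np.sqrt(2*N_t)
s_tilde = s + np.sqrt(N_t)*np.exp(-2j*np.pi*n*d_tau_bar/(N*T_s)) * a_herm(d_theta_bar) * s

# equivalent fake-path parameters induced by the beamformer
taus_f = [t + d_tau_bar for t in taus]
thetas_f = [np.arcsin(np.sin(th) + np.sin(d_theta_bar)) for th in thetas]
h_fake = chan(gains, taus_f, thetas_f)

assert np.isclose(h @ s_tilde, (h + h_fake) @ s)              # beamforming == fake-path injection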
§.§ Prior Knowledge of the Beamformer Structure
Suppose that Eve does not have any prior information regarding the beamformer structure; she would then treat f^(g,n) as Alice's beamformer and employ s^(g,n) to infer Alice's position. For such a case, the condition to ensure the geometrical feasibility of the fake paths is provided in the following corollary based on Proposition <ref>.
If Alice uses the designed beamformer in Equation (<ref>) with δ̅_τ>0 to inject the structured artificial noise, Eve cannot distinguish the fake paths, even when she can perfectly estimate the TOAs and AODs of the fake paths.
Given δ̅_τ>0, we have
cτ̃_k(m)>cτ_k(n)≥ z- p_2,
where (m) follows from Equation (<ref>) while (n) holds due to the geometry and the triangular inequality. Then, according to Proposition <ref>, the proof is concluded.
Consequently, as analyzed in Section <ref>, using a proper choice of δ̅, the proposed FPI can mislead Eve into believing that there are 2K+1 NLOS paths while the existing paths heavily overlap. Hence, it is unnecessary to refresh the shared information δ̅ in the beamforming design to prevent it from being deciphered in practice if Eve does not actively attempt to snoop Alice's beamformer structure. Furthermore, as δ̅_τ and δ̅_θ_Tx approach 0, the estimation error for all the location-relevant channel parameters tends to increase significantly based on the analysis of the associated FIM in Section <ref>; the smallest feasible values of δ̅_τ and δ̅_θ_Tx are therefore desired according to the derived lower bound on the estimation error in Equation (<ref>).
On the other hand, if Alice’s beamformer structure is unfortunately leaked, Eve would learn the shared information. To be more precise, in contrast to Bob who exploits y^(g,n)_Bob and s̃^(g,n)_Bob to estimate {{τ_k},{θ_Tx,k}, ℜ{γ_k}, ℑ{γ_k}}, Eve can infer χ≜{{τ_k},{θ_Tx,k}, ℜ{γ_k}, ℑ{γ_k},δ̅_τ,δ̅_θ_TX} using y^(g,n)_Eve and s^(g,n)_Eve. The corresponding FIM for the channel estimation and localization can be derived, similar to Equations (<ref>) and (<ref>).
We denote by J^(χ)∈ℝ^(4K+6)×(4K+6) the FIM for Eve's channel estimation with the prior knowledge of Alice's beamformer structure, whose asymptotic property is studied as follows.
As δ̅_τ, δ̅_θ_Tx→ 0 and G,N_t→∞,
Rank{J^(χ)}→Ω^(χ) a.s., where Ω^(χ) is an integer with Ω^(χ)<4K+6.
See Appendix <ref>.
According to Proposition <ref>, J^(χ) also tends to a singular matrix when δ̅_τ_k, δ̅_θ_Tx,k→ 0 and G,N→∞. Thus, the FIM for Eve's localization in this case is also asymptotically rank-deficient. Based on the definition of CRLB in Equation (<ref>), large estimation error still can be achieved with small values of δ̅_τ_k, δ̅_θ_Tx,k employed in the design of Alice's beamformer, even when the beamformer structure is snooped by Eve. However, in contrast to the case where Eve does not know the beamformer structure, the shared information δ̅ has to be refreshed for this case though the optimal design of the refresh rate is beyond the scope of this paper. We will numerically show the impact of the beamformer structure leakage in Section <ref>, yet there is still a strong degradation of Eve's estimation accuracy with a proper choice of the design parameters.
§ SIMULATION RESULTS
In this section, we evaluate the performance of our proposed scheme with the CRLB derived in Sections <ref> and <ref>, which is not restricted to any specific estimators. First, the theoretical analyses presented in Section <ref> are numerically validated. Then, our location-privacy enhanced scheme is compared with the case where location-privacy preservation is not considered to show the degraded estimation accuracy of individual location-relevant channel parameters as well as the location. Finally, a comparison to the unstructured Gaussian noise is conducted to validate the efficacy of the proposed FPI.
§.§ Signal Parameters
In all of the numerical results, unless otherwise stated, the system parameters B, φ_c, c, N_t, N, G, K, and d are set to 15 MHz, 60 GHz, 300 m/us, 16, 16, 16, 2, and λ_c/2, respectively. The free-space path loss model <cit.> is used to determine channel coefficients in the simulation, while the pilot signals s^(g,n) are random, complex values uniformly generated on the unit circle, scaled by a factor of 1/√(N_t). The scatterers of the two NLOS paths are located at [8.89 m, -6.05 m]^T and [7.45 m, 8.54 m]^T, respectively, while Alice is at [3 m,0 m]^T. To make a fair comparison, Bob and Eve are placed at the same location, i.e., [10 m,5 m]^T, and the same received signal is used for the simulation. To enhance the location-privacy, K̃=K fake paths are injected via the design of the transmit beamformer according to Section <ref>.
In the presence of independent, zero-mean, complex Gaussian noise and the proposed FPI, the received SNR is defined as 10log_10∑^G_g=1∑^N-1_n=0| h^(n)_Bobs̃^(g,n)|^2/NGσ^2[It is also equal to 10log_10∑^G_g=1∑^N-1_n=0|( h^(n)_Eve+h̃^(n)) s^(g,n)|^2/NGσ^2 for a fair comparison.]. The minimal separation constraints desired by the atomic norm minimization based method <cit.> are denoted as Υ_τ≜NT_s/⌊N-1/4⌋ and Υ_θ≜arcsin(λ_c/d⌊N_t-1/4⌋),
for the TOAs and AODs, respectively.
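For reference, the sketch below evaluates these two separation constraints for the stated system parameters, reading the flattened expressions as NT_s/⌊(N-1)/4⌋ and arcsin(λ_c/(d⌊(N_t-1)/4⌋)), and adds a small helper for the received SNR; the reading of the expressions and the helper names are our assumptions.

import numpy as np

B, f_c, c = 15e6, 60e9, 3e8
N_t, N, G = 16, 16, 16
T_s, lam = 1.0/B, c/f_c
d = lam/2

ups_tau = N*T_s / np.floor((N-1)/4)                     # minimal-separation constraint for TOAs
ups_theta = np.arcsin(lam / (d*np.floor((N_t-1)/4)))    # ... and for AODs
print(ups_tau, np.degrees(ups_theta))

def received_snr_db(signal_samples, sigma):
    # signal_samples: array of h^(n)_Bob s~^(g,n) collected over all g and n
    s2 = np.mean(np.abs(np.asarray(signal_samples))**2)
    return 10*np.log10(s2 / sigma**2)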
§.§ Validation of Theoretical Analyses in Section <ref>
In terms of the root-mean-square error (RMSE) for the estimation of the LOS path and the corresponding fake path[The estimation error for the other paths can be analogously studied as well.], Figure <ref> shows the lower bounds derived in Proposition <ref> and Corollary <ref>, where Tr((J^(ζ_0))^-1) and Tr((J̆^(ζ_0))^-1) are provided as comparisons. The received SNR is set to 0dB. To manifest the effect of δ̅_τ and δ̅_θ_TX on the estimation error, we set δ̅_θ_TX and δ̅_τ to μΥ_θ and sin(μΥ_θ)/(N-1)Λ, respectively, with μ being a constant. As seen in Figure <ref>, 1/GΞ^(0) is a lower bound for Tr((J^(ζ_0))^-1), which is further bounded by 1/GΨ^(0); with the decreasing value of μ, the values for δ̅_τ and δ̅_θ_TX decrease accordingly and the reduced distance between the truth path and fake path results in significant increases of the estimation error, though these lower bounds are not sharp, consistent with our analysis in Section <ref>. In addition, from Figure <ref>, we can also observe that 1/GTr((J̆^(ζ_0))^-1)≈Tr((J^(ζ_0))^-1), indicating the small approximation error with J̆^(ζ_0) even for a realistic setting of G.
Perturbed by the proposed fake paths, the CRLB for Eve’s localization is presented in Figure <ref> with different choices of δ̅_τ and δ̅_θ_TX. According to Figure <ref>, coinciding with the analysis of the derived lower bounds, simultaneously decreasing the values of δ̅_τ and δ̅_θ_TX is desired to effectively degrade Eve's localization accuracy. We note that, with the choices of δ̅_τ and δ̅_θ_TX in Figure <ref>, the assumptions A4 and A5 in Corollary <ref> are not satisfied, suggesting that these sufficient conditions for the derivation of Ψ^(k) are not necessary for the location-privacy enhancement.
§.§ Estimation Accuracy Comparison
According to Sections <ref> and <ref>, we assume δ_τ_k,min=1/20Υ_τ and δ_θ_Tx,k,min=1/20Υ_θ for any k and we set δ̅_τ and δ̅_θ_TX to δ_τ_k,min and δ_θ_Tx,k,min, respectively, to enhance the location-privacy. The RMSE of TOA estimation and AOD estimation is shown in Figures <ref> and <ref>, respectively, where the CRLB for Bob's estimation is compared with that for Eve's estimation. As observed in Figures <ref> and <ref>, without the leakage of the channel structure, our proposed scheme contributes to more than 25dB degradation with respect to the TOA and AOD estimation, by virtue of the fact that the FPI effectively distorts the structure of Eve's channel. In contrast, Bob can compensate for the presence of the fake paths given the secure side information.
Due to the strong degradation of the quality of channel estimates, a larger CRLB for Eve's localization is achieved as seen in Figure <ref>. With respect to the localization accuracy, there is a 20dB advantage for Bob versus Eve using our proposed scheme. As analyzed in Section <ref>, if the structure of Alice's beamformer is unfortunately leaked to Eve, Eve can actively estimate the shared information δ̅ to mitigate the perturbation of the fake paths while inferring Alice's position so the efficacy of our scheme is degraded. However, considering the uncertainties in the shared information, there is still around 4dB degradation of localization accuracy for Eve versus Bob according to Figure <ref>, indicating the robustness of our scheme with respect to the beamformer structure leakage. We note that the model order can be adjusted via changing the value of K̃ at a certain rate such that Eve has insufficient samples to learn the true beamformer structure and has to tackle the virtually introduced fake paths. In addition, as shown in Figure <ref>, if we inject an additional set of fake paths, Eve’s localization accuracy can be further degraded at the cost of higher transmit power. On the other hand, more side information needs to be shared with Bob to maintain his performance. Hence, there is an interesting trade-off to be investigated in the future.
To validate the efficacy of the FPI, CRLBs for channel estimation and localization with the injection of additional Gaussian noise <cit.> are also provided in Figures <ref>, <ref> and <ref> as comparisons. For relatively fair comparisons, the variance of the artificially added Gaussian noise w̃^(g,n)∼𝒞𝒩(0,ς^2) is set to a constant value for all the received SNRs, i.e., ς^2≜∑^G_g=1∑^N-1_n=0|h̃^(n)s^(g,n)|^2/NG, while the received SNR is still defined as previously stated. As observed in Figures <ref>, <ref> and <ref>, the injection of the extra Gaussian noise is ineffective in preserving location-privacy at low SNRs since its constant variance is relatively small as compared with that of the Gaussian noise naturally introduced during the transmission over the wireless channel. In contrast, through the degradation of the channel structure, the proposed FPI strongly enhances location-privacy. As the received SNR increases, the injection of the additional Gaussian noise can further degrade Eve's performance due to the constant variance. However, the comparison is not fully fair as the amount of side information needed by Bob to remove the artificially injected Gaussian noise without CSI is high, i.e., the realization of w̃^(g,n) for all g=1,2,⋯,G and n=0,1,⋯,N-1; our beamforming strategy requires very little side information, i.e., δ̅_τ and δ̅_θ_TX. The exact amount of shared information transmitted over the secure channel depends on the distribution of the shared information required for the FPI, the refresh rate, and the quantization and coding strategy. The associated analysis is beyond the scope of this paper.
§ CONCLUSIONS
A location-privacy enhancement strategy was investigated with the injection of fake paths. A novel CSI-free location-privacy enhancement framework was proposed, where the structure of the channel was exploited for the design of the fake paths. The injected fake paths were proved to be indistinguishable while two closed-form, lower bounds on the estimation error with the FPI were provided to validate that the proposed method can strongly degrade the eavesdropping ability of illegitimate devices. To effectively preserve location-privacy based on the proposed framework, a transmit beamformer was designed to inject the fake paths for the reduction of the LPL to illegitimate devices, while legitimate devices can maintain localization accuracy using the securely shared information. With respect to the CRLB for localization, there was 20dB degradation in contrast to legitimate devices with the shared information and the robustness to the leakage of beamformer structure was also highlighted. Furthermore, the efficacy of the FPI was numerically verified with the comparison to unstructured Gaussian noise.
§ PROOF OF PROPOSITION <REF>
From the analysis of the feasibility of paths, to show that Eve cannot make a distinction between fake paths and true paths, we seek to prove that all the fake paths are geometrically feasible. To find the scatterers that can produce the injected fake paths, we consider K̃+1 potential positions for these scatterers and denote by b_k̃ the distance between Alice and the k̃-th potential position, with k̃=0,1,⋯,K̃. Then, the k̃-th potential position, denoted as ṽ_k̃=[ṽ_k̃,x,ṽ_k̃,y]^T∈ℝ^2, can be mapped to {τ̃_k̃, θ̃_Tx,k̃} according to Equation (<ref>), i.e., the k̃-th fake path is geometrically feasible, provided that
C1) the inequality
0(a)≤ b_k̃(b)≤ cτ̃_k̃
holds;
C2) given θ̃_Tx,k̃, the equality
τ̃_k̃ = b_k̃ + [ṽ_k̃,x-z_x,ṽ_k̃,y-z_y] _2/c,
is satisfied, where [ṽ_k̃,x-z_x,ṽ_k̃,y-z_y] _2 represents the distance between Eve and the k̃-th potential position that can be expressed as
ṽ_k̃,x= b_k̃cos(θ̃_Tx,k̃)+p_x,
ṽ_k̃,y= b_k̃sin(θ̃_Tx,k̃)+p_y.
According to Equation (<ref>), we set the distance b_k̃ to
b_k̃ = (cτ̃_k̃)^2-(z_x-p_x)^2-(z_y-p_y)^2/2(cτ̃_k̃-(z_x-p_x)cos(θ̃_Tx,k̃)-(z_y-p_y)sin(θ̃_Tx,k̃)).
It can be verified that the condition C2) is satisfied using the distance b_k̃ in Equation (<ref>) if the condition C1) is met. Hence, the final step of this proof is to show Equation (<ref>) holds if b_k̃ is set according to Equation (<ref>), under the assumption of cτ̃_k̃≥ z- p_2.
Using the artificial TOAs and AODs of the fake paths, the position of the k-th virtual scatterer can be derived as,
ṽ_k̃,x= b_k̃cos(θ̃_Tx,k̃)+p_x,
ṽ_k̃,y= b_k̃sin(θ̃_Tx,k̃)+p_y.
Then, the distance between Eve and the k-th scatterer can be expressed as [ṽ_k̃,x-z_x,ṽ_k̃,y-z_y] _2 which is supposed to be equal to cτ̃_k̃-b_k̃, i.e.,
cτ̃_k̃-b_k̃=
√((b_k̃cos(θ̃_Tx,k̃)+p_x-z_x)^2+(b_k̃sin(θ̃_Tx,k̃)+p_y-z_y)^2),
if the virtual scatterer is at a valid physical position.
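As a quick numerical check of conditions C1 and C2, the sketch below places a candidate scatterer according to Equation (<ref>) and verifies that the remaining distance to Eve equals cτ̃_k̃-b_k̃; the positions and the fake TOA/AOD are arbitrary test values.

import numpy as np

def candidate_scatterer(p, z, tau_t, theta_t, c=3e8):
    # b from Eq. (<ref>), then the scatterer position v along direction theta_t from Alice
    zx, zy = z[0] - p[0], z[1] - p[1]
    b = ((c*tau_t)**2 - zx**2 - zy**2) / (2*(c*tau_t - zx*np.cos(theta_t) - zy*np.sin(theta_t)))
    v = np.array([b*np.cos(theta_t) + p[0], b*np.sin(theta_t) + p[1]])
    return b, v

p, z = np.array([3.0, 0.0]), np.array([10.0, 5.0])          # Alice and Eve (test values)
tau_t = (np.linalg.norm(z - p) + 12.0) / 3e8                 # satisfies c*tau_t >= ||z - p||
b, v = candidate_scatterer(p, z, tau_t, theta_t=0.5)
assert 0 <= b <= 3e8*tau_t                                   # condition C1
assert np.isclose(np.linalg.norm(v - z), 3e8*tau_t - b)      # condition C2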
For notational convenience, we introduce two variables ż_x and ż_y that are defined as ż_x≜ z_x-p_x and ż_y≜ z_y-p_y. To prove (a), considering the assumption cτ̃_k̃≥ z- p_2>0, we equivalently need to justify
cτ̃_k̃-ż_xcos(θ̃_Tx,k̃)-ż_ysin(θ̃_Tx,k̃)≥0,
which holds since we have
(ż_xcos(θ̃_Tx,k̃)+ż_ysin(θ̃_Tx,k̃))^2=
ż_x^2cos^2(θ̃_Tx,k̃)+ż_y^2sin^2(θ̃_Tx,k̃)+2ż_xż_ysin(θ̃_Tx,k̃)cos(θ̃_Tx,k̃)
(c)=ż_x^2+ż_y^2-(ż_xsin(θ̃_Tx,k̃)-ż_ycos(θ̃_Tx,k̃))^2
≤ż_x^2+ż_y^2 (d)≤(cτ̃_k̃)^2.
The equality (c) can be verified with the property of the trigonometric functions while inequality (d) holds according to the assumption cτ̃_k̃≥ z- p_2. In addition, we can re-express cτ̃_k̃-b_k̃ as
(cτ̃_k̃)^2+ż_x^2+ż_y^2-2cτ̃_k̃(ż_xcos(θ̃_Tx,k̃)+ż_ysin(θ̃_Tx,k̃))/2(cτ̃_k̃-ż_xcos(θ̃_Tx,k̃)-ż_ysin(θ̃_Tx,k̃)),
where the denominator has been proved to be non-negative according to Equation (<ref>). Hence, to validate (b), we need to prove that
(cτ̃_k̃)^2+ż_x^2+ż_y^2-2cτ̃_k̃(ż_xcos(θ̃_Tx,k̃)+ż_ysin(θ̃_Tx,k̃))≥0.
Since we have
(2cτ̃_k̃(ż_xcos(θ̃_Tx,k̃)+ż_ysin(θ̃_Tx,k̃)))^2
(e)≤ 4(cτ̃_k̃)^2 (ż_x^2+ż_y^2)≤((cτ̃_k̃)^2+ż_x^2+ż_y^2)^2,
where (e) follows from Equation (<ref>), Equation (<ref>) holds, concluding the proof.
§ PROOF OF LEMMA <REF>
Proving Lemma <ref> is equivalent to showing that each element of 1/G J^(ζ_k) converges a.s. to the corresponding element of J̆^(ζ_k) as G→∞. Therefore, according to their definitions with ∂ u^(g,n)/∂τ̅[k] and ∂ u^(g,n)/∂θ̅_Tx[k] presented in Equation (<ref>), we restrict the derivations to
2/Gσ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂τ̅[k])^*∂ u^(g,n)/∂τ̅[k]}
a.s.⟶8π^2 O_1O_6/(σNT_s)^2|γ̅[k]|^2, as G→∞,
in Equation (<ref>) as the others can be verified analogously, where (f) results from the law of large numbers.
Similar to Equation (<ref>), the approximation (f) in Appendix <ref> still holds if r=k, i.e.,
2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂τ̅[k])^*∂ u^(g,n)/∂τ̅[k]}
≈2/σ^2∑_n=0^N-1ℜ{(2π√(N_t) n/NT_s)^2|γ̅[k]|^2α(θ̅_Tx[k])^Hα(θ̅_Tx[k])},
which can be simplified as
2/σ^2∑_n=0^N-1 ∑_g=1^Gℜ{(∂ u^(g,n)/∂τ̅[k])^*∂ u^(g,n)/∂τ̅[k]}
≈ 2/σ^2∑_n=0^N-1(2π√(N_t) n/NT_s)^2|γ̅[k]|^2
= 8π^2 O_1O_6N_t/(σNT_s)^2|γ̅[k]|^2.
Since |α(θ_Tx,k)α(θ̃_Tx,k)^H| is not negligible according to Equation (<ref>), the others can be derived analogously and we have
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂τ̅[k])^*∂ u[n]/∂θ̅_Tx[k]}
≈ -8π^2 dN_tO_2O_3/σ^2λ_c NT_s|γ̅[k]|^2cos(θ̅_Tx[k]),
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂τ̅[k])^*∂ u[n]/∂ℜ{γ̅[k]}}
≈ℜ{ j4π N_tO_2O_6/σ^2N T_s(γ̅[k])^*},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂τ̅[k])^*∂ u[n]/∂ℑ{γ̅[k]}}
≈ℜ{-4π N_t O_2O_6/σ^2N T_s(γ̅[k])^*},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂θ̅_Tx[k])^*∂ u[n]/∂θ̅_Tx[k]}
≈8(π d)^2O_4O_5N_t/(σλ_c)^2|γ̅[k]|^2cos^2(θ̅_Tx[k]),
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂θ̅_Tx[k])^*∂ u[n]/∂ℜ{γ̅[k]}}
≈ℜ{ -j4π d O_3O_5N_t/σ^2λ_c(γ̅[k])^*cos(θ̅_Tx[k])},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂θ̅_Tx[k])^*∂ u[n]/∂ℑ{γ̅[k]}}
≈ℜ{4π d O_3O_5N_t/σ^2λ_c(γ̅[k])^*cos(θ̅_Tx[k])},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂ℜ{γ̅[k]})^*∂ u[n]/∂ℜ{γ̅[k]}}
= 2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂ℑ{γ̅[k]})^*∂ u[n]/∂ℑ{γ̅[k]}}
≈2O_5O_6N_t/σ^2,
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂ℜ{γ̅[k]})^*∂ u[n]/∂ℑ{γ̅[k]}}= j2O_5O_6N_t/σ^2,
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂τ_k)^*∂ u^(g,n)/∂τ̃_k}
≈ℜ{8π^2 M_1^(k)M_6^(k)N_t/(σNT_s)^2γ^*_kγ̃_k}
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂τ_k)^*∂ u[n]/∂θ̃_Tx,k}
≈ℜ{-8π^2 dN_tM_2^(k)M_3^(k)/σ^2λ_c NT_sγ^*_kγ̃_kcos(θ̃_Tx,k)},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂τ_k)^*∂ u[n]/∂ℜ{γ̃_k}}
≈ℜ{j4π N_tM_2^(k)M_6^(k)/σ^2N T_sγ^*_k},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂τ[k])^*∂ u[n]/∂ℑ{γ̃_k}}
≈ℜ{-4π N_t M_2^(k)M_6^(k)/σ^2N T_sγ^*_k},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂θ_Tx,k)^*∂ u[n]/∂θ̃_Tx,k}
≈ℜ{8(π d)^2M_4^(k)M_5^(k)N_t/(σλ_c)^2γ^*_kγ̃_kcos(θ_Tx,k)cos(θ̃_Tx,k)},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂θ_Tx,k)^*∂ u[n]/∂ℜ{γ̃_k}}
≈ℜ{ -j4π d M_3^(k)M_5^(k)N_t/σ^2λ_cγ_k^*cos(θ_Tx,k)},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂θ_Tx,k)^*∂ u[n]/∂ℑ{γ̃_k}}
≈ℜ{4π d M_3^(k)M_5^(k)N_t/σ^2λ_cγ_k^*cos(θ_Tx,k)},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂ℜ{γ_k})^*∂ u[n]/∂ℜ{γ̃_k}}
= 2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂ℑ{γ_k})^*∂ u[n]/∂ℑ{γ̃_k}}
≈ℜ{2M_5^(k)M_6^(k)N_t/σ^2},
2/σ^2 ∑_n=0^N-1∑_g=1^Gℜ{(∂ u[n]/∂ℜ{γ_k})^*∂ u[n]/∂ℑ{γ̃_k}}
≈ℜ{j2M_5^(k)M_6^(k)N_t/σ^2}.
§ PROOF OF PROPOSITION <REF>
For the k-th true path and the k-th fake path, considering the TOAs and AODs of the other paths as well as all the channel coefficients as the nuisance parameters, we can bound the mean squared error of ζ̂^(k) as 𝔼{(ζ̂^(k)-ζ^(k))^T(ζ̂^(k)-ζ^(k))}≥Tr(( J^(ζ_k))^-1) according to <cit.>. Following from Lemma <ref>, if G→∞, we have GTr(( J^(ζ_k))^-1)a.s.⟶Tr((J̆^(ζ_k))^-1). Hence, for any real ψ>0, there always exists a positive integer 𝒢 such that, when G≥𝒢, we have GTr(( J^(ζ_k))^-1)>Tr((J̆^(ζ_k))^-1)-ψ with probability 1. Then, the final step is to prove Tr((J̆^(ζ_k))^-1)≥Ξ^(k).
Denoting by ρ^(k)_i the i-th eigenvalue of J̆^(ζ_k) with i=1,2,3,4,
we have the following lower bound on Tr((J̆^(ζ_k))^-1),
Tr((J̆^(ζ_k))^-1) = ∑_i=1^41/ρ^(k)_i(g)≥Ξ^(k)≜4√(1/∏_i=1^4ρ^(k)_i),
where (g) follows from the inequality of arithmetic and geometric means.
Then, leveraging Equation (<ref>), the equality that ∏_i=1^4ρ^(k)_i=det(J̆^(ζ_k)), and the assumption of δ_γ_k=δ_γ_k,min=0 yields the desired statement.
§ PROOF OF COROLLARY <REF>
Under the assumption A1, the lower bound with Ξ^(k) has been derived in Proposition <ref>. Given the assumptions A2 and A3, Equation (<ref>) holds so we have the following inequalities,
2O_1O_6-ϵ<ℜ{M_1^(k)M_6^(k)}+O_1O_6,
2O_4O_5-ϵ<ℜ{M_4^(k)M_5^(k)}+O_4O_5
ℜ{M_2^(k)M_3^(k)}+O_2O_3<2O_2O_3+ϵ.
With the definition of ϵ, Ξ^(k)_1 in Equation (<ref>) is a positive value according to Equation (<ref>) so
Ξ^(k)_1≥ (NT_s)^4/(O_1O_6+ℜ{M_1M_6})(O_4O_5+ℜ{M_4M_5})
(h)≥ (NT_s)^4/4O_1O_4O_5O_6
holds, where (h) follows from Equation (<ref>). In addition, it can be verified that J̆^(ζ_k) is a positive semidefinite matrix so det(J̆^(ζ_k))≥ 0 and thus Ξ^(k)_2≥0 according to Equation (<ref>). Then, we have
Ξ^(k)_2≥1/(O_1O_6-ℜ{M_1M_6})(O_4O_5-ℜ{M_4M_5}),
where the denominator of the right-hand side can be bounded according to
O_1O_6-ℜ{M_1M_6}
= ℜ{O_1O_6-M_1M_6}
≤ |O_1O_6-M_1M_6|
= |1/N_t∑_n=0^N-1∑_n_t=0^N_t-1n^2(1-e^j2π( n_t d sin(δ_θ_Tx,k)/λ_c-nδ_τ_k/NT_s))|
(i)≤ 1/N_t∑_n=0^N-1∑_n_t=0^N_t-1n^2|(1-e^j2π( n_t d sin(δ_θ_Tx,k)/λ_c-nδ_τ_k/NT_s))|
(j)≤ 3π/N_t∑_n=0^N-1∑_n_t=0^N_t-1n^2| n_t d sin(δ_θ_Tx,k)/λ_c-nδ_τ_k/NT_s|
(k)= 3π/N_t∑_n=0^N-1(n^3δ_τ_k/NT_s+∑_n_t=1^N_t-1n^2( n_t d sin(δ_θ_Tx,k)/λ_c-nδ_τ_k/NT_s))
= π N(N-1)(2N-1)(N_t-1)dsin(δ_θ_Tx,k)/4λ_c
-3π N(N-1)^2(N_t-2)δ_τ_k/4N_tT_s,
and
O_4O_5-ℜ{M_4M_5}≤ 3π NN_t(N_t-1)^2dsin(δ_θ_Tx,k)/4λ_c
-π (N_t-1)(2N_t-1)(N-1)δ_τ_k/4T_s.
Herein, it can be verified with some algebra that (i), (j), and (k) result from the triangle inequality, the assumption A4, and the assumption A5, respectively, while Equation (<ref>) can be derived analogously. By leveraging Equations (<ref>), (<ref>) and (<ref>), the quantity Ξ^(k)_2 can be further bounded as provided in Equation (<ref>), where (l) follows from the fact that N,N_t≥ 2 holds for MISO OFDM systems. Then, substituting Equations (<ref>) and (<ref>) into Equation (<ref>) simplifies Ξ^(k) into ℵ^(k) shown in Equation (<ref>). Since the lower bound ℵ^(k) holds for any δ_τ_k and δ_θ_Tx,k that satisfy assumptions A1-A5, we can set δ_τ_k=sin(δ_θ_Tx,k)/(N-1)Λ for ℵ^(k) so that Equation (<ref>) can be verified with some algebra.
§ DERIVATIONS OF EQUATION (<REF>)
Suppose that the number of symbols is large enough; then the following approximation holds,
∑_g=1^G( s^(g,n))( s^(g,n))^H≈𝔼{s^(g,n)(s^(g,n))^H}= I_N_t
according to the law of large numbers. Furthermore, given a large number of transmit antennas and well-separated paths, we have
|α(θ_Tx,r)α(θ_Tx,k)^H|≪ 1, k≠ r.
Then, we restrict the derivations to
2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ u^(g,n)/∂τ_r)^*∂ u^(g,n)/∂τ_k}≈ 0, k≠ r,
in Equation (<ref>) as the others can be verified analogously, where (f) and (g) result from Equations (<ref>) and (<ref>), respectively.
§ PROOF OF PROPOSITION <REF>
Since we assume that Eve knows Alice's beamformer structure, the associated noise-free observation is defined as
ι^(g,n)≜√(N_t)∑_k=0^Kγ_ke^-j2π nτ_k/NT_sα(θ_Tx,k)^Hs̃^(g,n).
Let ϖ^(g,n)≜√(N_t)∑_k=0^Kγ_ke^-j2π nτ_k/NT_sα(θ_Tx,k)^H s^(g,n). It can be verified that ι^(g,n)→ 2ϖ^(g,n) as δ̅_τ, δ̅_θ_Tx→ 0.
To derive the FIM for Eve’s channel estimation with the prior knowledge of the beamformer structure, we can compute the derivative ∂ι^(g,n)/∂ξ_k
by replacing s^(g,n) in Equation (<ref>) with s̃^(g,n), where ξ_k∈{τ_k,θ_Tx,k,ℜ{γ_k},ℑ{γ_k}}, while derive ∂ι^(g,n)/∂δ̅_τ and ∂ι^(g,n)/∂δ̅_θ_Tx as follows,
∂ι^(g,n)/∂δ̅_τ = -j2πN_t n/NT_s∑_k=0^Kγ_k e^-j2π n (τ_k+δ_τ)/NT_sα(θ_Tx,k)^H
×diag(α(δ̅_θ_Tx)^H) s^(g,n),
∂ι^(g,n)/∂δ̅_θ_Tx = j2πN_t d/λ_c∑_k=0^Kγ_k e^-j2π n (τ_k+δ_τ)/NT_scos(δ̅_θ_Tx)
×α(θ_Tx,k)^Hdiag([0,1,⋯,N_t-1])
×diag(α(δ̅_θ_Tx)^H) s^(g,n).
Then, it is straightforward to show that, as
δ̅_τ, δ̅_θ_Tx→ 0,
∂ι^(g,n)/∂ξ_k → 2∂ϖ^(g,n)/∂ξ_k,
∂ι^(g,n)/∂δ̅_τ →∑_k=0^K∂ϖ^(g,n)/∂τ_k,
∂ι^(g,n)/∂δ̅_θ_Tx →∑_k=0^K∂ϖ^(g,n)/∂θ_Tx,k.
On the other hand, similar to Equation (<ref>), it can be verified that for any ξ_k and ξ_k^'∈{τ_k^',θ_Tx,k^',ℜ{γ_k^'},ℑ{γ_k^'}},
2/σ^2∑_n=0^N-1∑_g=1^Gℜ{(∂ϖ^(g,n)/∂ξ_k)^*∂ϖ^(g,n)/∂ξ_k^'}a.s.⟶ 0, k≠k^' ,
holds when G,N_t→∞. Hence, given δ̅_τ, δ̅_θ_Tx→ 0 and G,N_t→∞, for any ξ_k, we have
2 J^(χ)_δ̅_τ,ξ_k=2 J^(χ)_ξ_k,δ̅_τ → J^(χ)_τ_k,ξ_k,
2 J^(χ)_δ̅_θ_Tx,ξ_k=2 J^(χ)_ξ_k,δ̅_θ_Tx → J^(χ)_θ_Tx,k,ξ_k,
4 J^(χ)_δ̅_τ,δ̅_τ →∑_k=0^K J^(χ)_τ_k,τ_k,
4 J^(χ)_δ̅_θ_Tx,δ̅_θ_Tx →∑_k=0^K J^(χ)_θ_Tx,k,θ_Tx,k,
4 J^(χ)_δ̅_τ,δ̅_θ_Tx=4 J^(χ)_δ̅_θ_Tx,δ̅_τ →∑_k=0^K J^(χ)_τ_k,θ_Tx,k=∑_k=0^K J^(χ)_θ_Tx,k,τ_k.
Hence, two rows of J^(χ) are linearly dependent on the other rows as δ̅_τ, δ̅_θ_Tx→ 0 and G,N_t→∞, i.e.,
2[ J^(χ)_δ̅_τ,τ_0, J^(χ)_δ̅_τ,θ_Tx,0,⋯, J^(χ)_δ̅_τ,δ̅_θ_Tx]
→ ∑_k=0^K[ J^(χ)_τ_k,τ_0, J^(χ)_τ_k,θ_Tx,0,⋯, J^(χ)_τ_k,δ̅_θ_Tx],
2[ J^(χ)_δ̅_θ_Tx,τ_0, J^(χ)_δ̅_θ_Tx,θ_Tx,0,⋯, J^(χ)_δ̅_θ_Tx,δ̅_θ_Tx]
→ ∑_k=0^K[ J^(χ)_θ_Tx,k,τ_0, J^(χ)_θ_Tx,k,θ_Tx,0,⋯, J^(χ)_θ_Tx,k,δ̅_θ_Tx],
which leads to the desired statement.
|
http://arxiv.org/abs/2307.04466v1 | 20230710103140 | Decay of long-lived oscillations after quantum quenches in gapped interacting quantum systems | [
"Jacob H. Robertson",
"Riccardo Senese",
"Fabian H. L. Essler"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
|
http://arxiv.org/abs/2307.05151v2 | 20230711101441 | ExFaceGAN: Exploring Identity Directions in GAN's Learned Latent Space for Synthetic Identity Generation | [
"Fadi Boutros",
"Marcel Klemt",
"Meiling Fang",
"Arjan Kuijper",
"Naser Damer"
] | cs.CV | [
"cs.CV"
] |
ExFaceGAN: Exploring Identity Directions in GAN’s Learned Latent Space for Synthetic Identity Generation
Fadi Boutros^1, Marcel Klemt^1, Meiling Fang^1, Arjan Kuijper^1,2, Naser Damer^1,2
^1Fraunhofer Institute for Computer Graphics Research IGD, Darmstadt, Germany
^2Department of Computer Science, TU Darmstadt,
Darmstadt, Germany
Email: [email protected]
Figure: Sample images generated by our ExFaceGAN applied to the learned latent space of unconditional StyleGAN-3.
Deep generative models have recently presented impressive results in generating realistic face images of random synthetic identities.
To generate multiple samples of a certain synthetic identity, previous works proposed to disentangle the latent space of GANs by incorporating additional supervision or regularization, enabling the manipulation of certain attributes.
Others proposed to disentangle specific factors in the latent spaces of unconditional pretrained GANs to control their output, which also requires supervision by attribute classifiers.
Moreover, these attributes are entangled in GAN's latent space, making it difficult to manipulate them without affecting the identity information.
We propose in this work a framework, ExFaceGAN, to disentangle identity information in
pretrained GANs' latent spaces, enabling the generation of multiple samples of any synthetic identity.
Given a reference latent code of any synthetic image and latent space of pretrained GAN, our ExFaceGAN learns an identity directional boundary that disentangles the latent space into two sub-spaces, with latent codes of samples that are either identity similar or dissimilar to a reference image.
By sampling from each side of the boundary, our ExFaceGAN can generate multiple samples of synthetic identity without the need for designing a dedicated architecture or supervision from attribute classifiers.
We demonstrate the generalizability and effectiveness of ExFaceGAN by integrating it into learned latent spaces of three SOTA GAN approaches.
As an example of the practical benefit of our ExFaceGAN, we empirically prove that data generated by ExFaceGAN can be successfully used to train face recognition models (<https://github.com/fdbtrs/ExFaceGAN>).
§ INTRODUCTION
Recent advances in Deep Generative Models (DGM), especially Generative Adversarial Networks (GANs) <cit.> and diffusion models <cit.>, enabled the generation of photo-realistic face images.
The aim of DGMs is to learn the probability distribution of a certain training dataset, enabling the generation of completely new data points. Many DGMs also featured conditional image generation for structural and controllable outputs with a wide range of application scenarios, such as text-to-image generation <cit.>, photo-editing <cit.>, image-based virtual try-on <cit.>, and synthetic-based face recognition <cit.>.
While most of these application use cases require explicitly controllable image generation, others, such as the development of FR using synthetic data, require that the generated images are of discriminant synthetic identities and contain realistic variations that are not limited to small and predefined sets of attributes.
Recently, there was an increased interest in synthetic-based face recognition (FR) <cit.> driven by the increased legal and ethical concerns about the use, share, and management of real biometric data in FR development <cit.>.
State-of-the-art (SOTA) synthetic-based FR <cit.> proposed either explicitly learning identity-discriminant feature representations <cit.> or learning a multi-class classification problem <cit.>. In both learning strategies, these FR models relied on existing DGMs to generate multiple samples of synthetic identities.
Recent SOTA DGMs touching on the concept of generating multiple synthetic face images of synthetic identity with varying intra-class appearances can be grouped into two categories, controllable image generation by explicitly learning to generate face images with a predefined set of attributes <cit.> and conditional image generation via manipulating learned latent space of unconditional DGMs <cit.>.
The approaches in the first category proposed to design and train conditional DGMs to explicitly generate synthetic images with a certain visual attribute, such as age, pose, illumination, expression, or combination of these attributes <cit.>.
SynFace <cit.> and UsynthFace <cit.> utilized a controllable GAN, DiscoFaceGAN <cit.>, for their synthetic-based FR training. Each synthetic identity in their training datasets is formed by fixing the identity condition and randomizing the attribute conditions.
However, the use of such controllable GANs for synthetic-based FR training suffers from two main drawbacks. First, the intra-class variations in the generated images are limited to a predefined set of attributes and do not necessarily reflect real-world variations.
Second, extending these GAN models with additional attributes is extremely challenging as it requires designing and training a dedicated architecture for controlling additional attributes in the generated images.
SFace <cit.> and IDNet <cit.> aimed at mitigating this challenge by training a class-conditional GAN for class-labeled synthetic image generation. Images of each synthetic identity in SFace and IDNet are generated by fixing the class label and randomizing the generator's input latent code. However, the generated data suffer from low identity discrimination, and the number of synthetic identities is limited to the number of classes in the training dataset. Unlike previous approaches, very recently IDiff-Face <cit.> proposed a latent diffusion model conditioned on identity contexts for synthetic identity generation, enabling the generation of multiple samples of synthetic identities with realistic variations.
The second category of image generation approaches proposed methods to manipulate the learned latent space of pretrained GANs, aiming at finding meaningful directions in latent space to produce a structural output generation <cit.>. GANSpace <cit.> utilized Principal Component Analysis (PCA) applied on the feature space of pretrained GANs to create interpretable controls for image synthesis. Similar to GANSpace <cit.>, InterFaceGAN <cit.> proposed a framework for semantic face editing. InterFaceGAN trained a Support Vector Machine (SVM) on latent codes from a pretrained GAN latent space with labels from attribute classifiers to obtain a directional decision boundary for targeted attribute manipulations, enabling the generation of conditional images on visual attributes, e.g., adding eyeglasses or changing the pose. However, these approaches, GANSpace <cit.> and InterFaceGAN <cit.>, mainly relied on human labels or attribute classifiers for visual attribute manipulation without any restriction on identity information.
In this paper, we propose a novel approach, ExFaceGAN, to discover identity directions in the learned latent space (or feature space) of a pretrained GAN generator, enabling the generation of multiple synthetic face images of a specific synthetic identity with realistic intra-class appearances.
Unlike previous works, the variations in the generated images by our approach are not limited to a predefined set of visual attributes and do not require human labeling or attribute classifiers to generate multiple samples of a synthetic identity.
In a nutshell, given a reference synthetic image of a random identity with its latent code, our approach aims at disentangling the learned latent space with respect to the reference latent code into two sub-spaces, a positive and a negative one. The positive and negative sub-spaces contain latent codes of face images that are identity-similar and identity-dissimilar, respectively, to the reference image. These sub-spaces correspond to diverse face image transformations that maintain the identity information across the generated images in each discovered sub-space.
Thus, our approach can turn any pretrained unconditional GAN model into identity-conditional GAN without the need to change the model architectures or retrain the model. Also, we demonstrate that our ExFaceGAN can be integrated into attribute-conditional GAN models, enhancing the diversity in the generated images.
We additionally propose a sampling mechanism to control the inter-class variation and intra-class compactness of the generated data.
An overview of our proposed ExFaceGAN is presented in Figure <ref>.
Given a set of synthetic images, e.g., 5k images, and their corresponding latent codes, our ExFaceGAN can generate 10k discriminant synthetic identities with unlimited samples per identity. We empirically demonstrate in this paper the identity discrimination of the ExFaceGAN-generated data. As an example of the practical benefit of ExFaceGAN, we demonstrate that the data generated by our ExFaceGAN can be successfully used to train FR models, outperforming all GAN-based FR models.
§ METHODOLOGY
This section presents our framework for disentangling complex identity information in the learned latent spaces of pretrained identity-unconditional GANs.
The general idea of the proposed approach is to learn a directional boundary for each reference synthetic image that separates the latent space into two sub-spaces. The first latent subspace contains latent codes of synthetic images that share, to a certain degree, identity information with the reference image. The second latent subspace contains latent codes of images that are identity-dissimilar to the reference image.
This disentangled latent space is then used to generate two new identity-specific sets of synthetic images.
Figure <ref> illustrates the pipeline of our approach. We start by generating a set of images by sampling a set of latent codes from a Gaussian distribution and feeding it into a pretrained generator. One image, subsequently its latent code, is considered a reference for separating the latent spaces into the two sub-spaces.
We label each synthetic image and its latent code with 1 (similar to the reference image) or 0 (dissimilar to the reference image) based on the similarity between the image and the reference one. The latent codes and their corresponding labels are then used to train an SVM to obtain an identity-separating boundary. By sampling latent codes from the two sides of the boundary, two identity-specific sets of images can be generated.
The pseudo-code of our algorithm is given in Algorithm <ref>.
§.§ Disentangled Identity Representations in the learned Latent Space of GANs
Given an identity-unconditional pretrained GAN G that is designed and trained to generate face images of random identities, we start by generating m face images X. Specifically, m latent codes Z={z_1, z_2, ..., z_m} are randomly sampled from a normal Gaussian distribution z_i ∼ N(0,1)^d. All latent codes are then processed by G to output m synthesized images {x_1, x_2, ..., x_m}, where x_i is a synthetic image generated by G from z_i. In StyleGAN-based architecture <cit.>, the G consists of two networks, mapping and synthesis networks. The mapping network takes z_i ∈ Z as an input and generates a style w_i ∈ W (an intermediate latent code). w_i is then passed to the synthesis network to generate images x_i. In StyleGAN-based approaches, the output of the mapping network (W) is commonly used as the latent space for GAN as it is learned to disentangle the inherent structure of the training data and thus, it contains more meaningful semantic information than Z <cit.>.
Each synthesized image with its latent code (z_i) and the intermediate latent code (w_i) forms a triplet (z_i, w_i, x_i).
Formally, images are generated as follows:
{ x_i, w_i = G(z_i) | z_i ∼ N(0,1)^d; i ∈{1, 2, ..., m}},
where x_i ∈ℝ^W × H × C, z_i ∈ℝ^d and w_i ∈ℝ^d.
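As a concrete illustration of this generation step, the following sketch samples the triplets with NumPy; the mapping and synthesis attributes of the generator object are placeholders for the corresponding networks of a StyleGAN-like model and are not the API of any specific release.

import numpy as np

# Illustrative sketch (not part of the released implementation): sample m latent codes
# and collect the (z_i, w_i, x_i) triplets defined above.
def sample_triplets(generator, m, d, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((m, d))   # z_i ~ N(0, 1)^d
    w = generator.mapping(z)          # intermediate latent codes w_i (W-space)
    x = generator.synthesis(w)        # synthesized images x_i
    return z, w, x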
Then, for each w_i ∈ W, we split W ∖ w_i into two subsets, W_i1 and W_i2, where W_i1 contains latent codes of images that are identity similar to x_i and W_i2 contains latent codes of images that are identity dissimilar to x_i.
The similarity between x_i and each x_j | j ∈{1, 2, ..., m}∖{i} is calculated in the embedding space between the feature representations f_i ∈ℝ^l and f_j ∈ℝ^l. Feature representations are extracted using a pretrained FR model ϕ as follows:
{ f_i = ϕ (x_i) | i ∈{1, 2, ..., m}}.
The similarity cs_ij between f_i and each f_j is calculated using cosine similarity as defined in the following equation:
cs_ij = (f_i · f_j)/(‖ f_i‖ ‖ f_j‖) | j ∈{1, 2, ..., m}∖{i}.
f_j is considered to be similar to f_i if cs_ij is higher than a threshold th_i, and thus, the corresponding w_j is in W_i1 (label of 1), otherwise w_j is in W_i2 (label of 0).
We consider the median cosine similarity of all cs_ij as the threshold th_i. Formally, x_j, its corresponding f_j, and the latent codes z_j and w_j are labeled as follows:
label_j =
0 if cs_ij ≤ th_i
1 otherwise
for j ∈{1, 2, ..., m}∖{i}.
W ∖ w_i is split into two subspaces using a decision boundary obtained from an SVM. The SVM is trained on W ∖ w_i and their corresponding binary labels. The normalized weights of the SVM define the direction n^id∈ℝ^d of the decision boundary, in which latent space W ∖ w_i is split into two subspaces, i.e., latent codes located in the direction of n^id and ones located in the opposite direction of n^id:
n^id = SVM({w_j}, {label_j}) | j ∈{1, 2, ..., m}∖ i.
n^id disentangles the latent space in relation to w_i. Images of the latent codes located in the direction of n^id contain similar identity information to x_i, and thus, interpolating these codes with w_i will lead to generating a set of images of that identity, i.e., class positive that are, to a large degree, similar to the reference image x_i.
Conversely, images of latent codes located in the opposite direction of n^id are of dissimilar identities to the reference image x_i, and thus, interpolating these codes with w_i will lead to generating another set of images of a new identity (dissimilar to w_i), i.e., class negative.
This procedure is repeated for each image x_i ∈ X, i.e., each x_i is considered a reference once, resulting in m decision boundaries. Each decision boundary can be used to sample two identity-specific sets of images.
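A minimal sketch of this labeling and boundary-search step is given below. It assumes that the FR embeddings F (one row per image) and the W-space codes W have already been computed, and it uses scikit-learn's linear SVM as one possible realization of the SVM; variable names are illustrative.

import numpy as np
from sklearn.svm import LinearSVC

def identity_direction(W, F, i):
    # Cosine similarities of every embedding to the reference f_i.
    f = F / np.linalg.norm(F, axis=1, keepdims=True)
    cs = f @ f[i]
    mask = np.arange(len(W)) != i           # exclude the reference itself
    th = np.median(cs[mask])                # threshold th_i (median similarity)
    labels = (cs[mask] > th).astype(int)    # 1: identity-similar, 0: identity-dissimilar
    svm = LinearSVC().fit(W[mask], labels)  # linear decision boundary in W-space
    n_id = svm.coef_[0] / np.linalg.norm(svm.coef_[0])  # normalized boundary normal n^id
    return n_id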
§.§ Generation of Identity Specific Face Images
The decision boundary of SVM is used to obtain latent codes to synthesize new images of positive and negative identities.
To achieve this goal, we start by sampling an offset vector o ∈ℝ^d from N(0,max-off)^d, where max-off is a hyperparameter that determines the maximal offset allowed in each dimension of d. An element-wise multiplication is performed between o and n^id. Then, the resulting directional vector (o ⊗ n^id) is added to w_i, resulting in a new latent code v^1.
A synthetic image generated from v^1 is of class positive, i.e., shared identity information with x_i.
Similarly, multiplication with negative o will result in a latent code v^2 where the synthetic image using this latent code is of class negative, i.e., does not share identity information with x_i. By sampling different o ⊗ n^id and negative o ⊗ n^id, we generate two sets of images of class positive and class negative as defined in the following:
v^1 = w_i⊕ o ⊗ n^id,
v^2 = w_i⊖ o ⊗ n^id,
where v^1, v^2 ∈ℝ^d are the new latent codes sampled from class positive and negative sides, respectively.
Images sampled from latent codes close to the boundary are assumed to be similar to the image x_i. Increasing the max-off value used when sampling v^1 increases the appearance variations in the generated images while maintaining, to a very large degree, the identity information of x_i. Conversely, increasing the max-off value used when sampling v^2 will result in synthetic images of a second identity (different from x_i).
A simplified 2D example of our sampling technique is shown in Figure <ref>. w_i refers to the latent code of the image x_i for which the boundary n^id was trained. The red arrow n^id indicates the normal vector of the identity-separating boundary. The boundary separates all data points into class positive and class negative, i.e., points located in the positive and negative (opposite) direction of the decision boundary, respectively.
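The sampling step of the two equations above can be sketched as follows (our illustration; max_off plays the role of the max-off hyperparameter):

import numpy as np

def sample_identity_codes(w_i, n_id, max_off, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    # offsets o ~ N(0, max-off)^d, one per generated sample
    o = rng.normal(0.0, max_off, size=(n_samples, w_i.shape[0]))
    v1 = w_i + o * n_id   # class positive: shares identity information with x_i
    v2 = w_i - o * n_id   # class negative: forms a second, dissimilar identity
    return v1, v2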
§ EXPERIMENTAL SETUP
Identity discrimination evaluation metrics: We evaluate the identity discrimination in our ExFaceGAN datasets using the following metrics: Equal Error Rate (EER) and FMR100, which is the lowest False Non-Match Rate (FNMR) for a False Match Rate (FMR) ≤ 1.0% <cit.>. Also, we report the Fisher Discriminant Ratio (FDR) <cit.>.
FDR metric indicates the separability of the genuine and impostor distribution. To calculate the genuine and imposter comparison scores, we utilize the ArcFace model <cit.> [ArcFace model architecture is ResNet50 <cit.> trained on CASIA-WebFace dataset <cit.>.] for feature extraction.
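For reference, these separability metrics can be computed from arrays of genuine and impostor comparison scores as sketched below (our own minimal implementation, following the common definitions of FDR and EER):

import numpy as np

def fdr(genuine, impostor):
    # Fisher Discriminant Ratio between genuine and impostor score distributions.
    return (genuine.mean() - impostor.mean()) ** 2 / (genuine.var() + impostor.var())

def eer(genuine, impostor, n_thresholds=1000):
    # Equal Error Rate: operating point where FMR and FNMR cross.
    ths = np.linspace(min(genuine.min(), impostor.min()),
                      max(genuine.max(), impostor.max()), n_thresholds)
    fmr = np.array([(impostor >= t).mean() for t in ths])  # impostors wrongly matched
    fnmr = np.array([(genuine < t).mean() for t in ths])   # genuine pairs wrongly rejected
    k = np.argmin(np.abs(fmr - fnmr))
    return (fmr[k] + fnmr[k]) / 2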
Dataset generation:
We applied our ExFaceGAN to three generative models (official pretrained releases), StyleGAN2-ADA <cit.>, StyleGAN3 <cit.>, and GAN-Control <cit.>, denoted as ExFaceGAN(SG2), ExFaceGAN(SG3), and ExFaceGAN(Con), respectively. The StyleGAN models are unconditional generative models, while GAN-Control is an attribute-conditional (age, expression, illumination, and pose) model. We opt to further apply our approach to GAN-Control <cit.> to demonstrate the applicability of our ExFaceGAN to both unconditional and attribute-conditional generative models.
For each of the considered models, ExFaceGAN datasets are created by randomly sampling 5k different latent codes from a normal Gaussian distribution. Then, 5k w representations are obtained from each model mapping network. The w is then processed by each of the synthesis networks to produce 5k synthetic images. It should be noted that all experiments are conducted in the w-space and not in the z-space of GANs as w-space provides a more disentangled representation <cit.> than z-space.
All synthetic images are aligned and cropped using five landmarks similarity transformation as defined in <cit.>. The landmarks are extracted using MTCNN <cit.>.
For each model, we train 5k SVMs to obtain identity-separation boundaries, following our algorithm described in <ref>. Finally, ExFaceGAN datasets are generated as defined in Section <ref>.
FR model training setup:
We evaluate the verification accuracies of FR models trained on our ExFaceGAN datasets. Following the previous synthetic-based FR training approaches <cit.>, we use ResNet-50 network architecture <cit.> with ArcFace loss (margin penalty of 0.5 and scale term of 64 <cit.>). During the training, a dropout of 0.4 is applied.
Stochastic Gradient Descent (SGD) is employed as an optimizer with a learning rate of 0.1. The learning rate is divided by 10 at 22, 30, and 35 epochs <cit.>. The model is trained for 40 epochs with a batch size of 512.
FR Evaluation benchmarks: We evaluate FR models trained on ExFaceGAN datasets on the following benchmarks: Labeled Faces in the Wild (LFW) <cit.>, AgeDB-30 <cit.>, Celebrities in Frontal to Profile in the Wild (CFP-FP) <cit.>, Cross-Age LFW <cit.>, and Cross-Pose LFW <cit.>. Results for all benchmarks are reported as verification accuracy following their official evaluation protocol.
§ RESULTS
Identity similarity between references and samples from class positive and class negative:
Qualitative samples from class negative and positive generated by our ExFaceGAN(SG2), ExFaceGAN(SG3), and ExFaceGAN(Con) with different max-off values are shown in Figure <ref>. Larger intra-class variations can be easily perceived when samples are generated with a large max-off value e.g. 30 or 40. Samples from class positive, as expected, are visually similar to the reference images. On the other hand, the samples from class negative are dissimilar to the reference images, especially, on max-off of 30 and 40. This observation is supported by the results presented in Figure <ref>, where we performed a 1:N comparison between feature representations of a reference and samples from class negative and class positive.
It can be noticed that increasing the max-off value slightly affects the identity similarity between reference and class positive samples.
As expected, the similarity between the reference and class negative samples significantly decreases when the max-off is higher than 10, as shown in Figure <ref>. For example, one can notice that samples from class negative in <ref> are of a different gender than the reference and samples from class positive.
This motivated our next experiment to investigate whether class negative samples form a new synthetic identity.
Identity-discrimination of ExFaceGAN:
We first investigate the identity discrimination in samples from class positive and class negative generated by ExFaceGAN. For each of the 5k reference images, we generated 50 class positive samples and 50 class negative samples using our ExFaceGAN(SG3). For each reference image, we consider the class positive samples as one identity and the class negative ones as a second identity. Thus, we generated a total of 500k images (2 × 5k × 50) of 10k identities.
These generated samples formed three testing datasets. The first one contains class positive samples. The second one contains class negative samples and the third dataset contains both, class positive and negative samples.
Table <ref> presents the identity verification results on the three datasets. We made three main observations: 1) The class positive and class negative samples are each, to a very large degree, identity-discriminant; for example, the EERs are 0.0012 and 0.0014 for class positive and class negative samples, respectively. 2) Increasing the max-off value increased, to a small degree, the verification error rates. 3) Class positive and class negative samples do not (or only to a very low degree) share identity information. For example, the EER is 0.0015 when the evaluation dataset contains both class positive and class negative samples, which is only slightly higher than the EERs of class positive and class negative alone, 0.0012 and 0.0014, respectively.
This clearly indicates that class negative samples form a new synthetic identity, and thus, our ExFaceGAN can generate two discriminative synthetic identities from each reference image.
Comparison with SOTA identity-conditioned generative models:
We empirically proved the generalizability of our ExFaceGAN approach by applying it to the learned latent spaces of three SOTA GAN models and comparing the identity discrimination of our ExFaceGAN with SOTA approaches, DiscoFaceGAN <cit.>, GAN-Control <cit.>, SFace <cit.>, InterFaceGAN <cit.>, and DigiFace <cit.>.
Figure <ref> presents sample images generated by several SOTA models and our ExFaceGAN. For each approach, we show two images that belong to that same synthetic identity.
Table <ref>
presents the identity verification outcome on synthetic datasets generated by our ExFaceGAN and several SOTA GAN-based models. For each of the ExFaceGAN approaches, we present the results using a max-off of 10, 20, 30, and 40.
As shown in Table <ref>, InterFaceGAN achieved lower identity verification accuracies in comparison to our ExFaceGAN and other SOTA approaches.
Unlike previous identity-conditioned generative models, our ExFaceGAN offers a controllable trade-off between identity discrimination and intra-class variation using the max-off parameter.
As we discussed in the previous subsection, increasing the max-off value leads to more challenging comparison pairs (higher intra-class variation) which slightly decreases verification accuracies, however, such challenging samples are needed, for example, to train FR as we will present in the next section.
Unlike DiscoFaceGAN <cit.> and GAN-Control <cit.>, the intra-class variation in our ExFaceGAN is not limited to a predefined set of attributes. Nonetheless, we demonstrated that our ExFaceGAN can be integrated into GAN-Control <cit.>, enhancing identity discrimination.
Our approach, unlike DiscoFaceGAN, GAN-Control, and SFace, does not require designing or training a special architecture. Our ExFaceGAN(Con) achieved lower verification accuracies than our ExFaceGAN(SG2) and ExFaceGAN(SG3). However, combining attributes conditions of GAN-Control with our ExFaceGAN created more challenging pairs with large intra-class variations as shown in Figure <ref>, which is beneficial for application use cases such as training FR using synthetic data as we demonstrate in the next section.
Synthetic-based FR
We demonstrate that data generated by our ExFaceGAN can be successfully used to train FR. We first conducted an ablation study by generating 12 datasets, each containing 250K images from ExFaceGAN(SG2), ExFaceGAN(SG3), and ExFaceGAN(Con) with different max-off values of 10, 20, 30, and 40. These datasets are used to train FR models with ResNet-50 network architecture <cit.> and ArcFace loss <cit.>, using exact experimental setups described in <cit.> and Section <ref>.
For each ExFaceGAN, the overall comparison of the verification performance is based on the sum of the performance ranking Borda count (BC) on the considered evaluation datasets, following the comparison method in <cit.>. As shown in Table <ref>, ExFaceGAN(SG3) with a max-off of 30 achieved higher verification accuracies than ExFaceGAN(SG2). ExFaceGAN(Con) with a max-off of 10 achieved the highest verification accuracies among all ExFaceGAN approaches.
It should be noted that all results are reported without applying data augmentation during the training.
Motivated by the recent works that proposed synthetic-based FR <cit.>, we demonstrate the effectiveness of introducing data augmentation to FR model training. We utilized rand-augmentation <cit.> with settings presented by <cit.>. Table <ref> presents the verification accuracies of FR models trained with our ExFaceGAN using the best max-off values from Table <ref>. It can be clearly observed that applying data augmentation consistently improves the verification accuracies for all ExFaceGAN models.
Comparison with SOTA synthetic-based FR
We compare ExFaceGAN with the recent SOTA synthetic-based FR models.
To provide a fair comparison, we generate 500K images of 10K identities, each containing 50 images. The images are generated from both, class positive and class negative.
We first provide an ablation study on different training loss functions, including ArcFace <cit.>, AdaFace <cit.>, CosFace <cit.> and Elastic-CosFace <cit.>, as shown in Table <ref>. It can be observed that CosFace achieved the best overall verification accuracies, followed by Elastic-CosFace.
Table <ref> presents comparisons of our ExFaceGAN(SG3) and ExFaceGAN(Con) (trained using CosFace loss) with SOTA synthetic-based FR approaches.
Our ExFaceGAN approaches achieved the best verification accuracies on Cross-Age datasets (AgeDB-30 and CA-LFW). Also, ExFaceGAN achieved very competitive performance to SOTA FR models trained with synthetic data on CFP-FP and CP-LFW. On LFW, our ExFaceGAN outperformed FR models trained with data generated by GAN, including SynFace, SFace, IDNet, and UsynthFace, and scored behind DigiFace-1M which is based on a computationally expensive digital rendering pipeline. Results of SOTA approaches in Table <ref> are reported as in their corresponding works. We opted to train and evaluate an FR model on GAN-Control <cit.> to provide a direct comparison with our ExFaceGAN(Con). Our ExFaceGAN(Con) outperformed GAN-Control <cit.> on the considered benchmarks.
§ CONCLUSION
We proposed in this work a framework, ExFaceGAN, to disentangle complex identity information in the learned latent space of StyleGAN models.
The proposed ExFaceGAN enables the generation of multiple samples of a specific synthetic identity without the need to design and train a dedicated deep generative model or to rely on supervision from attribute classifiers. We also proposed a controllable sampling technique to gain control over the balance between intra-class variation and identity discrimination in our generated data.
We empirically demonstrated the generalizability and effectiveness of our framework by integrating it into the learned latent spaces of three SOTA GAN models, StyleGAN2-ADA, StyleGAN3, and GAN-Control. We also demonstrated that the data generated by ExFaceGAN can be successfully used to train FR models, advancing SOTA performance on a number of benchmarks for synthetic-based FR.
Acknowledgment
This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. This work has been partially funded by the German Federal Ministry of Education and Research (BMBF) through the Software Campus Project.
|
http://arxiv.org/abs/2307.07256v1 | 20230714100714 | Modeling laser pulses as $δ$-kicks: reevaluating the impulsive limit in molecular rotational dynamics | [
"Volker Karle",
"Mikhail Lemeshko"
] | physics.chem-ph | [
"physics.chem-ph",
"physics.atm-clus",
"physics.atom-ph"
] |
[email protected]
[email protected]
Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria
The impulsive limit (the “sudden approximation”) has been widely employed to describe the interaction between molecules and short, far-off-resonant laser pulses. This approximation assumes that the timescale of the laser–molecule interaction is significantly shorter than the internal rotational period of the molecule, resulting in the rotational motion being instantaneously “frozen” during the interaction. This simplified description of laser–molecule interaction is incorporated in
various theoretical models predicting rotational dynamics of molecules driven by short laser pulses. In this theoretical work, we develop an effective theory for ultrashort laser pulses by examining the full time-evolution operator and solving the time-dependent Schrödinger equation at the operator level. Our findings reveal a critical angular momentum, l_crit, at which the impulsive limit breaks down. In other words, the validity of the sudden approximation depends not only on the
pulse duration, but also on its intensity, since the latter determines how many angular momentum states are populated. We explore both ultrashort multi-cycle (Gaussian) pulses and the somewhat less studied half-cycle pulses, which produce distinct effective potentials. We discuss the limitations of the impulsive limit and propose a new method that rescales the effective matrix elements, enabling an improved and more accurate description of laser–molecule interactions.
Modeling laser pulses as δ-kicks: reevaluating the impulsive limit
in molecular rotational dynamics
Mikhail Lemeshko
July 14, 2023
====================================================================================================
§ INTRODUCTION
The control and manipulation of molecules with laser pulses is of paramount importance in diverse fields such as spectroscopy, chemistry, materials science, quantum optics, and even biology <cit.>. A comprehensive understanding of the post-pulse rotational dynamics of molecules is vital for the development of new technologies, including ultrafast spectroscopy and laser-induced chemistry <cit.>. Additionally, the rotational degrees of freedom of a molecule have the potential to serve as a new platform for qubits, the fundamental building blocks of quantum computing and quantum memory <cit.>.
Since the Born-Oppenheimer approximation (separation of the electronic, vibrational, and rotational timescales) works well for many of the small molecules, at low energies they can be reliably described as quantized rigid rotors <cit.>. For off-resonant ultrashort laser pulses (usually with infrared frequencies far detuned from any transitions), the rotational motion is generally considered to be slow compared to the laser modes, leading to the “frozen” rotational motion assumption during the laser–molecule interaction <cit.>. This justifies the impulsive limit, which adapts a semi-classical approach by neglecting the accumulation of quantum phases during the pulse duration.
For quantum rotors, however, the energy splittings grow linearly with the angular momentum l, which causes the corresponding change of the relevant timescales. Therefore the applicability of the sudden approximation does not solely rely on the duration of the laser pulse, but also on its intensity which determines how many l-states are populated during the laser excitation. For example, for a molecule with a rotational period τ_rot(l), for the impulsive limit to be valid, only states with l satisfying τ_rot(l) ≫τ_L should be occupied, where τ_L represents the pulse duration of the laser. Additionally, the specific shape of the laser pulse is an important factor to consider. It is not immediately evident which values of τ_rot(l) are large enough or how different laser shapes affect this relationship. Despite the widespread adoption of the impulsive limit as a theoretical framework to describe molecular rotational response to a laser pulse, a comprehensive analysis of the specific states for which this approximation is valid remains unexplored.
In this work, we aim to develop an effective theory for ultrashort laser pulses by analyzing the full time evolution of linear rotors during and after off-resonant, linearly polarized laser pulse illumination.
Our approach can be extended to more complex molecules with higher order polarizability terms and other laser polarization schemes. While the sudden limit for multi-cycle pulses is well-established <cit.>, we also investigate the effects of half-cycle pulses, which can generate unipolar fields <cit.>. Using a theoretical method accounting for the full time-evolution operator, we demonstrate that the validity of the sudden limit can be understood in terms of a critical angular momentum threshold l_crit. We propose a new method involving rescaling of matrix elements, resulting in an effective theory that accounts for deviations from the standard impulsive limit when encountering extended pulse durations.
Our findings hold significant implications for experimentalists working with ultrashort lasers and theorists who employ the sudden limit within their models.
§ METHOD
§.§ Ultrashort laser pulses
Here we focus on time-dependence of the full time-evolution operator instead of time-evolving a single initial state with respect to a given laser envelope, as commonly used to describe the dynamics of rotational wavepackets. The advantage is that we do not only learn about the time-evolution of a particular initial state, but also of all possible superpositions. The rigid rotor Hamiltonian can be written as H_0=B𝐋̂^2 with the squared angular momentum operator 𝐋̂^2. The potential energy of a polar rotor in an electromagnetic field is given by
V(t) = - μ·ℰ(t) with (total) dipole moment μ and laser field amplitude ℰ(t). Strong fields can give rise to an induced dipole moment μ_i = (μ_0)_i + 1/2∑_jα_ij ℰ_j(t)+ 𝒪[ℰ^2(t)] with the permanent dipole moment of the molecule μ_0 and the polarizability tensor α_ij. The interaction of a linear molecule with a ultrashort, off-resonant linearly polarized laser pulse is given by <cit.>
Ĥ(t) = Ĥ_0 - μ_0 ℰ(t)cos(θ̂) - 1/4 ℰ^2(t) Δαcos^2(θ̂)
with angle between field polarization and molecular axis θ∈ [0,π], the electric field in the Z-direction ℰ(t) and the difference between parallel and perpendicular polarizability Δα.
In the far-field limit the electric field of the laser pulse has to integrate to zero <cit.>
∫_-∞^∞ℰ(t)ṭ=0.
For a laser pulse with many cycles one often assumes that only the part with ℰ(t)^2 is relevant, since the linear term averages out. In that case, one can assume a purely positive Gaussian shape ℰ(t)>0 for the laser field amplitude with kick strength P_2, peak position t_0 and width σ_t. In the sudden approximation, the time-evolution propagator (for t≫ t_0) takes the simple form
Û_sudd, gaussian = e^-iĤ_0 (t-t_0)/ħ e^+i P_2 cos^2(θ̂) e^-iĤ_0 t_0 /ħ.
Note that the kick strength is dimensionless and can be calculated as <cit.>
P_2 = -Δα/4ħ∫_-∞^∞ℰ^2(t)ṭ
Although it is possible to replace pulses with kicks, for few- and half-cycle pulses one has to take into account the full spatial dependence of the laser field. Here, we analyze the half-cycle pulse as an exemplary and experimentally important case, but this analysis can be extended straightforwardly to few-cycle pulses. We consider the following parametrization from Ref. <cit.>:
ℰ(t) =
0 (t ≤ 0)
ℰ_1 cos^2(ω_L (t-t_p)/2)sin(ω_L(t-t_p)) (0 ≤ t < t_p)
ℰ_2 (1 - e^-(t-t_p)/τ_1) e^-(t-t_p)/τ_2 (t ≥ t_p),
with electric field amplitudes ℰ_1, ℰ_2>0, the laser frequency ω_L, the pulse duration of the first part of the laser pulse t_p=π/ω_L (in the following referred to as positive pulse duration), the switch-on and switch-off times τ_1, τ_2. The ratio
ξ≡ℰ_2/ ℰ_1
determines the width of the first peak relatively to the negative tail.
The condition that the electric field is smooth at t=t_p further leads to τ_1 = ℰ_2/ω_L ℰ_1=ξ/ω_L and Eq. (<ref>) leads to
τ_2 = (2ω_L^2τ_1)^-1 + √((2ω_L^2τ_1)^-2 + (ω_L)^-2)
= (2ω_L ξ)^-1 + √((2ω_L ξ)^-2 + (ω_L)^-2),
see Ref. <cit.>. The decay time is determined by τ_2. The sudden limit for this potential follows as
Û_sudd, half-cycle = e^-iĤ_0 (t-t_0)/ħ e^+i P_1 cos(θ̂) e^-iĤ_0 t_0 /ħ
with estimated peak position t_0 and kick strength P_1. Observe that t_0 does not have to match with t_p, as the pulse's peak (the pulse position) occurs for t_0 < t_p. Furthermore, the duration t_p might not align with the laser duration τ_L based on the value of ξ, since it would disregard the negative tail of the pulse. Still P_1 is frequently approximated in the literature as <cit.>
P_1 ≈μ_0/ħ∫_-∞^t_pℰ(t)ṭ,
by the integral over the positive part of the field amplitude. This is a good approximation when the half-cycle pulse looks similar to a Gaussian pulse, which we demonstrate below. Approximately, the integral over the positive peak scales as P_1∝ℰ_1 · t_p (the negative tail compensates for exactly this
value). Molecular rotation sets the timescale of the Hamiltonian, thereby justifying the representation of time in units of the rotational revival time τ_B= πħ/B, denoted as t̃ = t/τ_B. In an effort to render the Hamiltonian dimensionless, we can conveniently incorporate the ħ^-1 prefactor of the time evolution into the coupling constants, resulting in the following expression [Note that we do not employ the common units of H/B, since we are interested in expressing time in units of τ_B, which leads to an additional factor of π in front of 𝐋̂^2,]
H̃(t̃)= π𝐋̂^2 - ℰ(t̃)/ℰ_μcos(θ̂) - ℰ^2(t̃)/ℰ^2_Δαcos^2(θ̂).
This includes the constants
ℰ_μ = B/πμ, ℰ_Δα=√(4 B/πΔα),
which depend on the particular molecule under study (see Section <ref> for an illustrative example of a time-evolution for the molecule OCS). Moving forward, we will omit the tilde on t and H, keeping in mind that all expressions are now unitless.
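For later reference, the half-cycle field defined above can be evaluated numerically as in the following sketch (our own illustration; the field is returned in the same units as ℰ_1, the time axis in the same units as ω_L^-1, and τ_2 follows from the smoothness and zero-area conditions discussed above):

import numpy as np

def half_cycle_field(t, E1, xi, omega_L):
    # Piecewise half-cycle pulse: positive peak for 0 <= t < t_p, negative tail for t >= t_p.
    t_p = np.pi / omega_L
    E2 = xi * E1
    tau1 = xi / omega_L
    tau2 = 1.0 / (2 * omega_L * xi) + np.sqrt(1.0 / (2 * omega_L * xi) ** 2 + 1.0 / omega_L ** 2)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    field = np.zeros_like(t)
    rise = (t >= 0) & (t < t_p)
    field[rise] = E1 * np.cos(omega_L * (t[rise] - t_p) / 2) ** 2 * np.sin(omega_L * (t[rise] - t_p))
    tail = t >= t_p
    s = t[tail] - t_p
    field[tail] = E2 * (1.0 - np.exp(-s / tau1)) * np.exp(-s / tau2)
    return field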
In order to study the validity of the sudden approximation, we numerically integrated the differential equation of the time-evolution operator Û_full(t),
i∂_t Û_full(t) = Ĥ(t)Û_full(t)
for a reasonable cutoff l < l_max and various parameters [For each calculation we increase the cutoff scale until the results we are interested in are converged. This typically depends on the timescale (since high l correspond to high frequency) and the field strength (which determines how many l states are occupied).]. As mentioned in the introduction, each angular momentum eigenstate |l,m ⟩ oscillates with the frequency
ω_rot(l)=π· l(l+1)/τ_B
which provides a natural cutoff scale; the approximation can only succeed for states with ⟨ l|ψ⟩≈ 0 for l with τ_L > τ_rot(l). The eigenstates l with τ_L > τ_rot(l) oscillate with a frequency equal or higher than the pulse duration and a separation of timescales is not possible. The matrix elements for the potentials are
⟨ l'm'|cos(θ)|lm⟩ = -δ_mm'C^l'm_lm10C^l0_l'010
⟨ l'm'|cos^2(θ)|lm⟩ = +δ_mm' (2/3 C^l'm_lm20C^l0_l'020 + 1/3 δ_ll'),
with the Clebsch–Gordan coefficients C_lml'm'^LM <cit.>. Henceforth, our analysis will concentrate exclusively on linearly polarized laser fields for which different m-sectors are independent and we can assume m=0. Following the definitions for the sudden limit in Eqs. (<ref>) and (<ref>), the effective potential of the full time evolution can be calculated as
V̂_eff(t) = -i log[e^+iĤ_0 (t-t_0)Û_full(t) e^+iĤ_0 t_0]
where one has to use the correct branch cut of the logarithm [For values of the effective kick strength smaller than P ≈π the logarithm is straightforward to calculate. For larger values one has to resort an algorithm that guarantees a smooth transition of the operator eigenvalues in order to choose the correct branch cut.].
For times t ≫ t_0 it converges to a constant, time-independent potential V̂_eff≡V̂_eff(t=∞). This is the effective potential that an instantaneous laser pulse at t_0 would need to exert upon the molecule in order to reproduce the full time evolution. We want to know if the effective matrix elements resemble the ones given in (<ref>) and (<ref>). For perfect agreement the off-diagonal matrix elements
v^(s)_l = ⟨ l± s|V̂_eff|l⟩ with s∈{1,2}
should resemble P_s·⟨ l± s|cos^s(θ̂)|l⟩ where P_s depends on the field ℰ(t). In that case, we can find the strength by P_s = v_l^(s)/ ⟨ l± s|cos^s(θ̂)|l⟩ which should be the same for all l. However, in a realistic case the matrix elements deviate from that obtained in the sudden limit. This implies that the kick strength coefficients
p^(s)_l ≡ v_l^(s)/⟨ l± s|cos^s(θ̂)|l⟩
depend on l. In many cases, we are only interested in the convergence up to some experimentally relevant l_av. We define the average of a matrix element A_l as A̅≡1/l_av+1∑_l=0^l_avA_l
and estimate the strength P_s,eff and its error by
P_s,eff≡p̅^(s) , δ P_s,eff≡√(((p^(s))^2)̅ - (p̅^(s))^2), where the bar denotes the average over l ≤ l_av defined above.
Clearly, if the sudden approximation was exact we would find δ P_s,eff=0. For the case, where the sudden approximation is applicable, this value should be sufficiently small. However, for small kick strengths, this error becomes small as well, therefore, it is necessary to consider the relative error
r_s≡δ P_s,eff/P_s,eff.
Only the size of r_s poses a sufficient criterion whether the sudden limit approximation is valid or not. Until now we have assumed that we are looking at the impulsive limit in the form of Eqs. (<ref>) and (<ref>). However, there is a more generic possibility of
Û_sudd, generic = e^-iĤ_0 (t-t_0) e^+iV̂_eff e^-iĤ_0 t_0
with V̂_eff as defined in Eq. (<ref>). In particular, as we will see later, the numerically estimated effective potentials will often have the same off-diagonal structure as the generating potentials V̂(t). Therefore, it is possible to use rescaled matrix-elements v_l^(s) that originate from finite time pulses or pulses that are not Gaussian, such as half-cycle pulse. A rescaled potential will have the form v^(s)_l→ v^(s)_l f^(s)_l with some function f^(s)_l that depends on the laser shape. For Gaussian pulses we can find f^(2)_l straightforwardly by
f^(2)_l = p^(2)_l / P_2.
with the error factor
δ_l = 1 - f^(2)_l
that gives a good indication how much rescaling is necessary. For half-cycle pulses such a simple expression is not possible, since one has to infer additionally the effective strength P_1.
We introduce the usual interaction picture of a Hermitian operator  by
Â_I(t) = e^+i Ĥ_0 t Â e^-i Ĥ_0 t
and the time-evolution operator (with t_0=0) with U_I(t)= e^+i Ĥ_0 tÛ(t).
The Schrödinger equation then reads
i ∂_t Û_I(t) = V̂_I(t)Û_I(t).
In the following we resort to numerical integration of (<ref>) and use (<ref>) to calculate the effective potential directly.
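A minimal sketch of this procedure for the m=0 sector is given below (our own illustration, keeping only the permanent-dipole term and using a simple first-order product of exponentials; in practice the step size and the cutoff l_max have to be converged, and the matrix logarithm is valid only on the principal branch):

import numpy as np
from numpy.polynomial.legendre import leggauss, legval
from scipy.linalg import expm, logm

def cos_theta_matrix(l_max, power=1):
    # <l'0|cos^power(theta)|l0> via Gauss-Legendre quadrature over x = cos(theta),
    # using normalized Legendre polynomials (the m = 0 spherical harmonics up to a phase).
    x, wq = leggauss(2 * l_max + 4)
    P = np.array([np.sqrt((2 * l + 1) / 2.0) * legval(x, np.eye(l_max + 1)[l])
                  for l in range(l_max + 1)])
    return np.einsum('k,ik,jk->ij', wq * x ** power, P, P)

def evolve_operator(field, t_grid, l_max, e_mu):
    # Integrate i dU/dt = H(t) U in dimensionless units (time in tau_B), m = 0 sector,
    # with H(t) = pi L^2 - (E(t)/E_mu) cos(theta); `field` is E(t) sampled on t_grid.
    H0 = np.pi * np.diag([l * (l + 1) for l in range(l_max + 1)]).astype(complex)
    C1 = cos_theta_matrix(l_max, power=1)
    U = np.eye(l_max + 1, dtype=complex)
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        H = H0 - (field[k] / e_mu) * C1
        U = expm(-1j * H * dt) @ U
    return U

def effective_potential(U, t, t0, l_max):
    # V_eff = -i log( exp(+i H0 (t - t0)) U exp(+i H0 t0) ), principal branch only.
    H0 = np.pi * np.diag([l * (l + 1) for l in range(l_max + 1)]).astype(complex)
    M = expm(1j * H0 * (t - t0)) @ U @ expm(1j * H0 * t0)
    return -1j * logm(M)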
§ RESULTS
In this section, we scrutinize the application of the sudden approximation to multi-cycle pulses. Following this, we turn our attention to the analysis of half-cycle pulses. Notwithstanding the disparities in laser frequencies between these pulse types – optical frequencies for multi-cycle pulses and terahertz frequencies for half-cycle pulses – similar impacts are discerned in their interaction with rotors.
§.§ Gaussian pulses
We model multi-cycle pulses using Gaussian functions ℰ^2(t)/ℰ^2_Δα = e^-(t-t_0)^2/2σ_t^2/(σ_t √(2π)) with the squared field strength ℰ^2(t), μ_0=0, and P_2=1, which we will denote Gaussian pulses in what follows. Hence, the pulse width of the laser can be directly inferred from τ_L ≈σ_t, depending on the definition of τ_L.
Figure <ref> provides an illustration of the results calculated for a range of σ_t. As one would intuitively expect we observe that as the ratio σ_t/τ_B becomes increasingly small, the approximation aligns more closely with the sudden limit. However, with an increase in the value of σ_t, the effective potential begins to display noticeable deviations from the sudden limit. This divergence is prominently displayed in the off-diagonal matrix elements.
A detailed look at these matrix elements reveals a significant decrease for larger values of l. This contrasts with the matrix elements of the pure sudden pulse, which remains constant. One of the primary features of the perfect delta kick is its ability to transfer angular momentum even for states with high l values. However, this feature is absent in the case of pulses of finite width. Here, the transfer of angular momentum may cease altogether for large l. This can occur when the rotational periods τ_rot(l), are comparable or smaller than the laser pulse duration τ_L. We assume that such parity leads to destructive interference, inhibiting the laser's capacity to transfer energy to the molecule coherently.
The phenomenon is more clearly depicted in Figure <ref>, where the scaling factor, f_l, and its error, δ_l ≡ 1 - f_l, are showcased for different values of σ_t. When the values of f_l or δ_l are equal to 1 or 0 respectively, it indicates an agreement with a delta kick. However, if δ_l diverges from 0, it signals a deviation from a delta kick. As per our findings, the sudden limit holds true until a certain critical value, l_crit∝σ_t^-1. Once this point is surpassed, the sudden limit no longer applies, leading to decay in matrix elements and rapid growth in deviations.
Henceforth, the time-evolution of a wavepacket that is driven by a Gaussian-shaped pulse can be captured by the sudden approximation when the wavepacket has only occupations for l<l_crit(σ_t). In that case, the sudden approximation is valid and it is not necessary to integrate the Schrödinger equation fully. Another possibility is to rescale the effective potential to
⟨ l'm'|V̂_rescaled|lm⟩ = δ_mm'( f^(2)_l 2/3 C^l'm_lm20C^l0_l'020 + 1/3 δ_ll')
with the rescaling function f_l. This way we can capture the deviations that arise due to the non-zero pulse width. However, this rescaling is not possible in all cases, as we will demonstrate in Section <ref>. These findings are expected to be useful in understanding the behavior of half-cycle pulses, which we will be exploring in our subsequent analysis.
§.§ Half-cycle pulses
In this section, we shift our focus to half-cycle pulses. For simplicity we focus only on the dominant term, the permanent dipole term with finite μ_0>0. For many linear molecules this is a good approximation since the specific constants (<ref>) satisfy ℰ_μ≪ℰ_Δα. As previously mentioned, in the case of half-cycle pulses, there is a positive peak followed by a potentially long negative tail. The ⟨ l|cos(θ̂)|l' ⟩ matrix element is only non-zero for l=l'± 1. In many cases, this is also true for V̂_eff. Specifically, in the limit where the ratio ξ→ 0 from (<ref>), which we will refer to as the Gaussian limit, the behavior converges to the Gaussian pulse discussed earlier, since the depth of the negative tail is minimal and it requires an infinite amount of time to satisfy Equation (<ref>). In Fig. <ref>, we illustrate the shape of the potential for very small values of ξ. As expected, the effective potential matrix elements diverge from the cos(θ) potential for increasing t_p, exhibiting similar behavior to that of Gaussian pulses (cf. Fig. <ref>). For half-cycle pulses, the positive pulse duration t_p plays a role analogous to the width σ_t for Gaussian pulses.
In Fig. <ref> we find that the critical positive pulse width scales as t_p,crit/τ_B ∝ l^-1, which is similar to the critical pulse width for Gaussian pulses in Fig. <ref>. The primary difference arises from the fact that t_p=π/ω_L (only for half-cycle pulses) with the laser frequency ω_L, corresponding to exactly half a cycle, while the variable σ_t of the Gaussian pulses corresponds to the width of one standard deviation, or approximately 68% of the nominal pulse area. We note that in the Gaussian limit, we do not observe a dependency of the relative error on the kick strength P_1,eff. However, when leaving the Gaussian limit, when ξ is not small, it plays an important role how the potential deviates from the impulsive limit.
Now we look at the opposite limit ξ= 1, which we denote the oscillating limit, since the negative tail can not be integrated out, like we did effectively for the Gaussian limit. Also in that limit we find that it is possible to approximate the full time-evolution with the impulsive limit, see Fig. <ref>. The main difference is that for a given t_p, the sudden approximation breaks down for smaller l, which implies that one has to choose smaller widths t_p/τ_B than in the Gaussian limit to achieve the same accuracy. Further, it is important to note that unlike the Gaussian case the diagonal elements are not vanishing completely. While we confirm the relationship P_1,eff∝ℰ_1/ℰ_μ, the dependency on t_p is more complicated than in the ξ→ 0 case and we find ∂ P_1,eff/∂ (ℰ_1/ℰ_μ)∝ t_p^2, displaying a strong deviation from the generally accepted result (<ref>).
Finally, we turn our focus to the case involving arbitrary ξ. Our compiled results are presented in Fig. <ref>. This consolidates our previous analyses for the two limiting scenarios: ξ→ 0 and ξ=1. Additionally, it provides an understanding of how the Gaussian and oscillating limits respectively cease to hold for mid-range values of ξ, where the error δ_l grows large already for small l. Evidently, in the scenario of ξ→ 1, a small t_p/τ_B ratio is necessary to maintain the sudden approximation, as has been demonstrated in Fig. <ref>. Contrarily, we discover that in the opposing extreme where ξ≈ 0, a larger t_p proves beneficial, at least for the relative error.
§ WAVEPACKET TIME-EVOLUTION OF OCS
We executed a series of numerical simulations, aiming to examine the dynamics of an OCS molecule's wave-packet under illumination of different half-cycle pulses. In Figs. <ref>, <ref>, <ref>, and <ref>, we present the results using τ_B ≈ 80 ps, Δα≈ 4.67 Å^3, and μ≈ 0.66 Debye <cit.>. By using rescaled units (<ref>), we obtain the
specific field constants ℰ_μ≈ 6 kV/cm and ℰ_Δα≈ 1 MV/cm. Since ℰ_μ≪ℰ_Δα, we neglect the influence of the Δα term in what follows. In a study by Fleischer et al. <cit.>, they reported the use of half-cycle pulses with an average field strength of approximately 22 kV/cm up to 1 MV/cm when applied to OCS molecules, which is the regime we are examining here. Note that as can be inferred from Fig. <ref>, the relative field strength ℰ/ℰ_Δα should be on the order of 100- 1000 in order to see a visible effect on the molecule.
The time-dependent wave-packet evolution of a molecule (with m=0) is controlled by
∂_t C_l(t) = -i∑_l'=0⟨ l'|V̂_I(t) | l ⟩ C_l'(t),
with the potential in the interaction picture defined in (<ref>), and the solution for the wavefunction
⟨ l | ψ(t) ⟩ = C_l(t)e^-i π l(l+1) t
in units of rotational time τ_B.
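In practice, this equation of motion can be integrated with a standard ODE solver; a minimal sketch (our own illustration, reusing cos_theta_matrix from the sketch above and keeping only the permanent-dipole term for m=0) reads:

import numpy as np
from scipy.integrate import solve_ivp

def evolve_wavepacket(C0, field_func, e_mu, t_span, l_max):
    # Interaction-picture form of the equation above: dC_l/dt = -i sum_l' <l|V_I(t)|l'> C_l',
    # with V(t) = -(E(t)/E_mu) cos(theta) and time measured in units of tau_B.
    ls = np.arange(l_max + 1)
    E_l = np.pi * ls * (ls + 1)               # dimensionless rotational energies
    C1 = cos_theta_matrix(l_max, power=1)     # from the sketch above
    def rhs(t, C):
        phase = np.exp(1j * E_l * t)
        V = -(field_func(t) / e_mu) * C1
        V_I = phase[:, None] * V * np.conj(phase)[None, :]
        return -1j * (V_I @ C)
    return solve_ivp(rhs, t_span, C0.astype(complex), rtol=1e-8, atol=1e-10,
                     dense_output=True)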
In Fig. <ref>, the molecule is exposed to a half-cycle pulse in the Gaussian regime, with ξ=10^-3, whose profile is shown in Fig. <ref>(a). The pulse has a width t_p, significantly shorter than the molecule's rotational period. The wavepacket in the initial condition is in a pure l=4 angular momentum state, ⟨ l|ψ⟩ = δ_l,4. During the pulse illumination, the pulse performs akin to a Gaussian pulse, with both lower and higher angular momentum states being occupied. Post-illumination, a decrease in the occupation probability for state l=4 is evident, possibly due to destructive interference. We use the sudden approximation, defined by (<ref>), to estimate the effective kick strength of an instantaneous delta pulse. This approximation mirrors the final state of the wavepacket with high precision, demonstrating a fidelity of 97%, and it accurately predicts the dip in the l=4 state. The sudden approximation's agreement with the full time-evolution is further confirmed by the effective matrix elements (<ref>), see Fig. <ref>(d). The small relative error (<ref>) of r_1≈ 2 % underscores the appropriateness of the sudden approximation in this context.
Figures <ref>, <ref>, and <ref> were created similarly to Fig. <ref>, albeit with varied pulse parameters and widths. In Fig. <ref>, the pulse is set in the intermediate regime ξ=0.1. The sudden approximation proves challenging to apply in this scenario, as evident in the evolution of the representative wavepacket. The matrix elements of the effective potential begin to diverge for large l, failing to plateau like in the case of the sudden approximation. Consequently, finding the correct kick strength that could reproduce the full time-evolution results is problematic. Therefore, we advise against using the sudden approximation in such a scenario due to the significant deviations.
In Fig. <ref>, we adjust the pulse width to a longer duration (t_p=1 ps), while staying within the same intermediate regime (ξ=0.1). This modification leads to noticeable oscillations (see Fig. <ref>(d)) in the matrix elements of the effective potential, resulting from the compatibility of the pulse width with the rotational periods of certain angular momentum states. Evidently, in this regime, the laser's timescale overlaps with the molecule's rotational oscillations, causing interference. This interference hinders the application of the sudden approximation, corroborated by a poor agreement between the wavefunctions of the sudden approximation and the full time evolution (as low as 20%). An intriguing observation is the absence of a depopulation in high angular momentum states, likely attributable to the longer duration of the negative peak.
In the final scenario, as illustrated in Fig. <ref>, we look into the oscillating limit by setting ξ=1. We observe that the pulse's negative slope almost negates the positive peak, leading to a markedly reduced effective kick strength. Nevertheless, the agreement with the sudden approximation in this regime is remarkably high, presenting a fidelity of 98 %, reinforcing our previous analysis of Fig. <ref>.
§ CONCLUSIONS
In summary, we have assessed the validity of the impulsive limit by examining the full time-evolution operator using a method that solves the time-dependent Schrödinger equation at the operator level. Our findings demonstrate that both Gaussian pulses and half-cycle pulses can be accurately described by the sudden limit, provided that the angular momentum is below the critical threshold l_crit, the pulse width σ_t or t_p is significantly smaller than the rotational period τ_B, and, for half-cycle pulses, the pulse is either in the Gaussian limit (ξ→ 0) or the oscillating limit (ξ = 1).
This can be used to obtain experimental estimates to reliably realize delta kicks for the cases where the laser parameters fall within the sudden limit regime and the molecule's angular momentum is not excessively large. Under these constraints, it becomes impossible to differentiate between a delta pulse and a finite-width pulse when examining the matrix elements in the long time limit. However, outside this regime, we observe substantial deviations that can be attributed to the time evolution within the pulse width.
This research serves a dual purpose: elucidating the validity boundaries of the impulsive limit and pinpointing specific circumstances under which deviations from the approximation manifest. Further studies may explore a broader range of pulse shapes, such as, e.g., few-cycle pulses. Moreover, the investigation could extend to quantum numbers other than angular momentum l, more intricate polarization schemes, or more complex molecules. Our findings could be applicable to other applications involving THz pulses, such as their interaction with electrons. The enhanced control over molecular dynamics provided by our research might be valuable in fields like ultrafast spectroscopy, laser-induced chemistry, and material processing, where precision is vital for realizing targeted results. The novel viewpoints and methodologies proposed in this study could also inspire further research and innovation in molecular rotational dynamics and related fields.
M.L. acknowledges support by the European Research Council (ERC) Starting Grant No. 801770 (ANGULON).
|
http://arxiv.org/abs/2307.04089v1 | 20230709035156 | Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters | [
"Huan-Yu Liu",
"Zhao-Yun Chen",
"Tai-Ping Sun",
"Cheng Xue",
"Yu-Chun Wu",
"Guo-Ping Guo"
] | quant-ph | [
"quant-ph"
] |
[email protected]
0000-0002-6158-9627
0000-0002-5181-160X
[email protected]
0009-0009-2591-1672
0000-0003-2207-9998
[email protected]
0000-0002-8997-3030
0000-0002-2179-9507
Applying low-depth quantum neural networks (QNNs), variational quantum algorithms (VQAs) are both promising and challenging in the noisy intermediate-scale quantum (NISQ) era: despite their remarkable progress, criticism of their efficiency and feasibility has never stopped.
However, whether VQAs can demonstrate quantum advantages remains undetermined, and this is the question we investigate in this paper.
First, we will prove that there exists a dependency between the parameter number and the gradient-evaluation cost when training QNNs. Noticing there is no such direct dependency when training classical neural networks with the backpropagation algorithm, we argue that such a dependency limits the scalability of VQAs.
Second, we estimate the time for running VQAs in ideal cases, i.e., without considering realistic limitations like noise and reachability. We will show that the ideal time cost easily reaches the order of a 1-year wall time.
Third, by comparing with the time cost using classical simulation of quantum circuits, we will show that VQAs can only outperform the classical simulation case when the time cost reaches the scaling of 10^0-10^2 years.
Finally, based on the above results, we argue that it would be difficult for VQAs to outperform classical cases in view of time scaling, and therefore, demonstrate quantum advantages, with the current workflow.
Since VQAs as well as quantum computing are developing rapidly, this work does not aim to deny the potential of VQAs. The analysis in this paper provides directions for optimizing VQAs, and in the long run, seeking more natural hybrid quantum-classical algorithms would be meaningful.
Can Variational Quantum Algorithms Demonstrate Quantum Advantages? Time Really Matters
Guo-Ping Guo
August 12, 2023
======================================================================================
§ INTRODUCTION
Machine learning (ML) <cit.> is one of the most remarkable technologies of the 21st century, with applications ranging from daily life to scientific research <cit.>. Developments of ML rely on the success of computer science and the neural network (NN) model <cit.>, which provide the capability of carrying out huge computational tasks and simulating complex functions. Quantum computing <cit.> has also developed rapidly in recent decades; its features, like quantum entanglement and quantum operation parallelism, are unavailable to its classical counterparts. Quantum computing has been introduced to the ML domain, known as quantum machine learning (QML) <cit.>.
Variational quantum algorithms (VQAs) <cit.> are representative of QML, whose workflow is shown in Fig. <ref>. It is a hybrid quantum-classical algorithm. A quantum processor prepares an ansatz with the quantum neural network (QNN) <cit.> U(θ)
[It is also called parameterized quantum circuits in some works. To make it consistent with classical machine learning, we use QNN here.]
as | ψ (θ) ⟩= U( θ )|0⟩ with θ={θ_1,θ_2,⋯,θ_L } the (trainable) parameter vector. The ansatz is then used to evaluate cost functions with quantum measurements, which is usually an expectation value under some Hamiltonian H: C(θ) =⟨ψ(θ)|H|ψ(θ)⟩. The classical processor optimizes θ to minimize the cost function. QNNs in VQAs are usually low-depth, which can be performed on current noisy intermediate-scale quantum (NISQ) <cit.> devices even without the support of fault-tolerant quantum computation technology <cit.>.
This makes VQAs promising candidates for achieving quantum advantages in the NISQ era. Since their proposal, VQAs have developed rapidly and have
applications ranging from quantum chemistry simulation <cit.> to numerical computation <cit.>. Experimental demonstrations have also been performed <cit.>.
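To make this workflow concrete, the following toy example (our own illustration, emulated classically with exact matrix algebra and therefore free of sampling noise) minimizes the cost of a single-qubit ansatz |ψ(θ)⟩ = R_Y(θ)|0⟩ under H = Z by gradient descent:

import numpy as np

Z = np.diag([1.0, -1.0])

def ansatz(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]]) @ np.array([1.0, 0.0])  # R_Y(theta)|0>

def cost(theta):
    psi = ansatz(theta)
    return float(psi @ Z @ psi)        # C(theta) = <psi|H|psi> = cos(theta)

theta, eta = 0.1, 0.2
for _ in range(100):
    grad = (cost(theta + 1e-4) - cost(theta - 1e-4)) / 2e-4
    theta -= eta * grad                # classical parameter update
# cost(theta) now approaches -1, the smallest eigenvalue of H = Z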
As research progresses, the challenges of VQAs have gradually attracted attention; they can be divided into an efficiency part and a feasibility part. Efficiency challenges mean that executing VQAs requires huge resources. The well-known barren plateau phenomenon <cit.> describes exponentially vanishing gradients, implying that the number of samples required to estimate the cost function also grows exponentially with the number of qubits. Feasibility challenges, on the other hand, form the major part: they concern whether the correct answer can be obtained by running VQAs at all. Training VQAs is an NP-hard problem <cit.>; besides the barren plateau problem mentioned above, the optimization landscape of VQAs usually contains a variety of local minima <cit.>, implying that it is difficult to reach the global optimum. The expressibility of QNNs <cit.> also affects the reachability issue <cit.>: a global optimum can never be reached if it cannot be represented by the QNN. Noise <cit.> and other factors also affect the correctness of executing VQAs. Great efforts have been devoted to these challenges, including mitigating barren plateaus to improve trainability <cit.>, reducing sampling times to improve efficiency <cit.>, mitigating noise <cit.>, etc.
We focus on the efficiency challenges in this work. First, we prove that there is a dependency between the number of parameters in a QNN and the gradient-evaluation cost when training it. Noting that such a dependency does not exist when training classical NN models with the backpropagation algorithm <cit.>, we argue that the parameter number limits the scalability of VQAs. Next, we consider the time cost of running VQAs in an ideal setting, i.e., without realistic limitations such as noise, qubit connectivity, or reachability. The time cost analysis leads to the following observations:
* The time cost easily reaches a 1-year wall time at about 20 qubits.
* Comparing with the time cost of classical simulation, VQAs can only outperform classical simulations when the required time reaches a scale of 10^0-10^2 years. Therefore, quantum advantages are difficult for VQAs to achieve with the current workflow.
In performing this analysis, we do not intend to deny the potential of VQAs, or of other hybrid quantum-classical algorithms in the NISQ era, but some changes and improvements need to be made. Based on our analysis, we provide some directions for optimizing VQAs. Going one step further, we need to consider what the natural way of carrying out machine learning with quantum computing is.
The rest of this paper is organized as follows:
In Sec. <ref>, we introduce the background needed for the later analysis, including training NNs with the backpropagation algorithm and QNNs.
In Sec. <ref>, we establish the dependency between the parameter number and the gradient-evaluation cost in training QNNs.
In Sec. <ref>, we analyze the time cost of running VQAs.
Sec. <ref> gives the total time cost of running VQAs.
In Sec. <ref>, we compare the time cost of VQAs with that of classical simulation.
A conclusion is given in Sec. <ref>.
§ PRELIMINARY
§.§ Training classical neural networks using the backpropagation algorithm
The NN model is widely applied in solving ML tasks. General NNs are comprised of neurons, whose diagram is shown in Fig. <ref>. A neuron can be viewed as a non-linear function that maps n inputs x={x_1,x_2,⋯,x_n} to an output y as:
y = f( ∑_i w_ix_i-b ),
where b is a bias, w={ w_1,w_2,⋯,w_n} is the adjustable weight vector, and f is the non-linear activation function, one example being the sigmoid function:
f(x)=1/1+e^-x.
Different functions can be approximated by adjusting the weight vector, and the core idea of ML is to make such functions approach desired maps. “Learning” is exactly the process of adjusting the weights.
A single neuron has limited learning capability. To further increase the expressive power, i.e., to be able to fit more functions, neurons can be combined to construct an NN, as shown in Fig. <ref>. In the NN, the input is fed into several neurons, whose outputs are then used as inputs to neurons in the next layer. Denote by y={y_1,y_2⋯,y_m} the output of the whole NN, or equivalently, the output of the neurons in the final layer. Denote the desired value as d={d_1,d_2⋯,d_m} and the vector of weights of all neurons as W. As introduced above, the learning process adjusts W such that y is close to d.
To achieve this, one can define a cost function as:
C ≡ C(W) := 1/2∑_i=1^m (y_i-d_i)^2.
C=0 implies we have finished the learning process. To find the minimum value of the cost function, one can start from some specific set of parameters and then optimize the weight vector according to optimization algorithms like gradient descent:
W←W - η·∇ C,
where η > 0 is the learning rate, the gradient is ∇ C={∂ C/∂ w_j |w_j∈W}. Every element in the gradient can be obtained via methods like the finite difference method:
∂ C/∂ w_j=lim_δ→ 0C(w_jδ+)-C(w_jδ-)/2δ,
where w_jδ±={ w_1,⋯,w_j±δ,⋯}.
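As an illustration, the following Python sketch (assuming only a generic cost(weights) function supplied by the user; the quadratic toy cost is an arbitrary example) evaluates the gradient with the central-difference rule above. Note that it executes the network twice per weight, i.e. O(M) times in total.

import numpy as np

def finite_difference_gradient(cost, weights, delta=1e-6):
    """Central-difference gradient: two cost evaluations per weight,
    i.e. O(M) executions of the network for M weights."""
    grad = np.zeros_like(weights)
    for j in range(len(weights)):
        shift = np.zeros_like(weights)
        shift[j] = delta
        grad[j] = (cost(weights + shift) - cost(weights - shift)) / (2 * delta)
    return grad

# usage with a toy quadratic cost
cost = lambda w: 0.5 * np.sum((w - 1.0) ** 2)
w0 = np.zeros(5)
print(finite_difference_gradient(cost, w0))  # approximately [-1, -1, -1, -1, -1]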
Denote the total number of weights as M
[The parameter numbers in the NN and the QNN may not be the same, therefore we use different notations (M and L).].
If we apply Eq. (<ref>) to evaluate the gradient for every weight, we need to execute the NN O(M) times, and executing the NN once queries all M weights, so the query complexity of directly evaluating the gradient scales as O(M^2). However, executing large NNs costs huge resources, so reducing the cost of evaluating gradients is highly desirable. We introduce the backpropagation algorithm below, which achieves this goal.
Take Fig. <ref> as an example and consider the weight w_2, which is representative of the weights of neurons in the final layer. The gradient element for this weight is:
∂ C/∂ w_2 = ∂ C/∂ y_1∂ y_1/∂ w_2.
According to Eq. (<ref>), ∂ C/∂ y_1 = y_1-d_1, and ∂ y_1/∂ w_2 involves only the operation within one neuron, which is easily obtained from Eq. (<ref>).
Next, we consider evaluating the gradient with respect to w_1, which is representative of the weights in the middle layer:
∂ C/∂ w_1 = ∂ C/∂ y_m1∂ y_m1/∂ w_1
= ( ∑_i ∂ C/∂ y_i∂ y_i/∂ y_m1) ∂ y_m1/∂ w_1.
According to Eq. (<ref>), ∂ C/∂ y_i is already known once the gradients of the weights of all final-layer neurons have been obtained, so it can be reused, and the other partial derivatives are all within one neuron.
Moving further back, ∂ C/∂ w_0 can be analyzed similarly.
Therefore, when training classical NN models, one can first execute the NN and record the output (y) of every neuron. When evaluating gradients, the weights of the final-layer neurons are evaluated first, and this information is reused when evaluating the gradients of neurons in earlier layers.
Gradient evaluation with this backward propagation of information is called the backpropagation algorithm; its query complexity is O(M), a reduction compared to the direct finite-difference method. Using this method, we do not need to execute the NN once per weight, which makes training even huge NNs scalable.
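A minimal Python sketch of this procedure is given below for a two-layer sigmoid network with quadratic cost; biases are omitted and the layer sizes are arbitrary illustrative choices, not taken from the figure.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, W2):
    """Record the per-layer outputs needed later by backpropagation."""
    h = sigmoid(W1 @ x)      # hidden-layer outputs
    y = sigmoid(W2 @ h)      # final-layer outputs
    return h, y

def backprop(x, d, W1, W2):
    """One backward pass: gradients of C = 0.5*||y-d||^2 w.r.t. W1 and W2,
    reusing the final-layer error when treating the earlier layer."""
    h, y = forward(x, W1, W2)
    delta2 = (y - d) * y * (1 - y)          # error at the output layer
    delta1 = (W2.T @ delta2) * h * (1 - h)  # propagated back to the hidden layer
    return np.outer(delta1, x), np.outer(delta2, h)

# usage: one gradient-descent step on a toy problem
x, d = np.array([1.0, 0.5]), np.array([0.0, 1.0])
W1, W2 = np.random.randn(3, 2), np.random.randn(2, 3)
g1, g2 = backprop(x, d, W1, W2)
W1, W2 = W1 - 0.1 * g1, W2 - 0.1 * g2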
§.§ Quantum Neural Networks
To make the later analysis convenient, we introduce the unitary coupled-cluster singles and doubles ansatz <cit.> and the hardware-efficient ansatz (HEA) <cit.> in this section.
§.§.§ Unitary coupled-cluster singles and doubles ansatz
In quantum chemistry simulations, the unitary coupled-cluster (UCC) ansatz is widely applied. It is derived from the coupled-cluster theory <cit.>, which applies symmetry-conserved excitation operators on some initial states, usually the Hartree-Fock (HF) state, to expand wavefunctions in the target subspace.
Denote the number of spin-orbitals and electrons of a given system as n_o and n_e, and order the n_o spin-orbitals from 1 to n_o so that their corresponding energies are non-decreasing. Then the HF state |ψ_HF⟩ = | 1,1,⋯,1,0,0,⋯,0⟩, with exactly n_e 1s and n_o-n_e 0s, is the state with the lowest energy when interaction energies are ignored, and it usually serves as a ground-state approximation.
When the interaction energies are taken into account, the ground state should be |ψ⟩ = ∑_ |ψ_i⟩∈ S a_i |ψ_i⟩, where the a_i are coefficients and all states in the set S have Hamming weight, i.e., number of 1s, exactly equal to n_e. Starting from |ψ_HF⟩, symmetry-conserving operations can be applied to expand the target subspace spanned by S. This can be realized with the fermionic creation (annihilation) operators a_j^† (a_j). For instance, the operator a_i^†a_α excites one electron from the α-th spin-orbital to the i-th one and gives 0 (not the vacuum state) if the α-th orbital has no electron or the i-th one already has an electron. We therefore call it a single-excitation operator. The double-excitation operator a_i^†a_j^†a_α a_β is defined similarly.
Since considering all excitations would cost huge resources, we usually consider only single and double excitations, and the UCC ansatz restricted to them is called the UCCSD ansatz:
|ψ_UCCSD(θ)⟩ = U_UCCSD(θ) |ψ_HF⟩,
where the QNN has the form:
U_UCCSD(θ) = e^T-T^†,
where T=T_1+T_2 is a linear combination of excitation operators, expressed as:
T_1 = ∑_{α∈{1,2,⋯,n_e}, i∈{n_e+1,⋯,n_o}} θ_iα a_i^† a_α,
T_2 = ∑_{α<β∈{1,2,⋯,n_e}, i<j∈{n_e+1,⋯,n_o}} θ_ijαβ a_i^† a_j^† a_α a_β,
where θ={θ_iα,θ_ijαβ} is the parameter vector. Therefore:
T-T^† = ∑_{α∈{1,2,⋯,n_e}, i∈{n_e+1,⋯,n_o}} θ_iα (a_i^† a_α - a_α^† a_i)
+ ∑_{α<β∈{1,2,⋯,n_e}, i<j∈{n_e+1,⋯,n_o}} θ_ijαβ (a_i^† a_j^† a_α a_β - a_β^† a_α^† a_j a_i).
To further implement the ansatz on quantum processors, fermionic-to-qubit mappings are required. We apply the Jordan-Wigner (JW) transformation <cit.>.
a_j^† = 1/2[∏_k<jZ_k] (X_j-iY_j),
a_j = 1/2[∏_k<j Z_k](X_j+iY_j).
After this, the HF state is mapped to |1⟩^⊗ n_e⊗ |0⟩^⊗ (n_o-n_e), implying that under the JW transformation the number of qubits required equals the number of spin-orbitals: n=n_o. Each excitation operator becomes a linear combination of tensor products of Pauli operators (Pauli strings), and the operation T-T^† is therefore a linear combination of Pauli strings. With some order of Trotter expansion, we have:
U_UCCSD(θ) = ∏_l e^-iθ'_lP_l,
where θ' can be obtained from θ.
For every e^-iθ P, we can implement it on the quantum processor shown in Fig. <ref>.
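For illustration, the identity e^{-iθP} = cos θ I − i sin θ P (valid because any Pauli string squares to the identity) can be checked numerically with the dense-matrix sketch below; this is only a classical reference implementation for a few qubits, not the CNOT-ladder circuit of Fig. <ref>, and the string "XZZY" is an arbitrary example.

import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZZY' acts on 4 qubits."""
    return reduce(np.kron, [PAULI[c] for c in label])

def exp_pauli(theta, label):
    """e^{-i theta P} = cos(theta) I - i sin(theta) P, since P^2 = I."""
    P = pauli_string(label)
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

theta, label = 0.3, "XZZY"
U = exp_pauli(theta, label)
print(np.allclose(U.conj().T @ U, np.eye(2 ** len(label))))  # unitary -> True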
§.§.§ Hardware-efficient ansatz
HEA is a problem-agnostic ansatz, which directly applies easily implementable quantum gates of the quantum processor. We assume the HEA is composed of P blocks, each of which consists of single-qubit rotations and two-qubit entangling operations:
U_HEA(θ) = ∏_p=1^P U_entangle U_single(θ_p),
where:
U_entangle = CNOT_n,1∏_i=1^n-1CNOT_i,i+1,
U_single(θ_p) = ∏_i=1^n R_Z(θ_p^i1) R_X(θ_p^i2) R_Z(θ_p^i3),
where subscripts in CNOT gates represent the control and target qubit, respectively. The quantum circuit for the HEA described here is shown in Fig. <ref>.
It has been pointed out that the HEA has remarkable expressibility <cit.>. Combined with the fact that it is hardware-friendly, it has become the most commonly applied QNN model.
§ GRADIENTS IN VARIATIONAL QUANTUM ALGORITHMS
Training the parameters of QNNs is the main step in executing VQAs, and it is NP-hard <cit.>. On the one hand, cost functions in VQAs are obtained via repeated measurements, and achieving a sampling error ϵ requires O(1/ϵ^2) samples. Thus about 10^6 samples are required to reach the widely used chemical accuracy of 1.6× 10^-3 Hartree
[1 Hartree = 2625.5 kJ/mol.]
. On the other hand, problems like barren plateaus can cause exponentially increased sampling times. Together with noise and other factors, this makes evaluating cost functions in VQAs difficult.
Note that in the training process, measuring the cost function mainly serves to evaluate gradients. If we apply Eq. (<ref>) for gradient evaluation, the cost function needs to be evaluated O(L) times. In Sec. <ref>, we showed that the backpropagation algorithm reduces the number of executions required for classical NNs. It is therefore natural to ask whether this type of method can be applied to reduce the gradient-evaluation cost when training QNNs.
First of all, the backpropagation algorithm cannot be implemented directly, because a QNN is a parameterized unitary transformation that maps an initial state to the ansatz without recording the inter-layer states, which, however, are required by the backpropagation algorithm. As shown in <cit.>, backpropagation scaling for training QNNs is only possible when multiple copies of the ansatz are available.
Next, we consider whether there is some dependency among the gradient elements. If there were, then after evaluating some gradient elements we could use this relation to compute the remaining ones directly, without running the QNN. However, we show below that this is also impossible.
For a general ansatz U(θ) with L independent parameters, and the cost function defined as the expectation value under some Hamiltonian H, we need at least O(L) times for evaluating the cost function to obtain the gradient.
The proof of this theorem is provided below. According to the theorem, the cost of evaluating gradients when training QNNs depends on the number of parameters. This dependency heavily limits the scalability of VQAs.
In ML tasks, it is common to improve performance by increasing the number of parameters. Because the number of network executions needed for a gradient does not grow with the parameter number in classical NNs, such a performance-improving strategy works there. The scalability limitation, however, makes increasing the number of parameters a poor choice in VQAs. Since the parameter number naturally grows with the problem size or complexity, applying VQAs to large problems would be challenging.
Suppose the PQC has the form:
U(θ) = ∏_l=1^L U_l(θ_l) W_l = ∏_l=1^L (cosθ_l I - i sinθ_l P_l) W_l,
where θ = {θ_1,θ_2,⋯,θ_L } is a vector of independent parameters. P_l is a Hermitian operator and W_l is the un-parameterized gate. Denote the initial state as ρ_0, then the cost function is:
C(θ) = Tr [ U(θ) ρ_0 U^† (θ) H ].
Expand Eq. (<ref>) according to Eq. (<ref>), we have:
C(θ) = Tr[ ∏_l=1^L (cosθ_l I - i sinθ_l P_l) W_l ρ_0 ∏_l=L^1 W_l^† (cosθ_l I + i sinθ_l P_l) H ].
Observe that there are 4 terms for every θ_l. We view cosθ_l and sinθ_l as coefficients. The coefficient and the corresponding function of each term in the cost function are then:
cosθ_l cosθ_l, f(I,I);
cosθ_l sinθ_l, f(I,iP_l);
sinθ_l cosθ_l, f(-iP_l,I);
sinθ_l sinθ_l, f(-iP_l,iP_l).
Note that these four cases can be described by two bits p_lq_l, and we let the above four cases correspond to p_lq_l=00,01,10,11, respectively. Then the cost function is expressed as:
C = ∑_ pq = { p_lq_l|p_lq_l=00,01,10,11 }_l=1^L a_pq f_pq,
where:
a_pq = ∏_l a_p_lq_l,   a_p_lq_l = { cos^2θ_l,        p_lq_l = 00;
                                     sinθ_l cosθ_l,   p_lq_l = 01, 10;
                                     sin^2θ_l,        p_lq_l = 11. }
Denote:
g^l_pq = ∂ a_pq/∂θ_l.
Then the gradient is:
∂ C/∂θ_l = ∑_pq g_pq^l f_pq.
We assume the {f_pq} are unknown. Computing ∂ C/∂θ_l through the {f_pq} requires almost 4^L evaluations, which is impractical.
If we could obtain the full gradient by evaluating the QNN k<O(L) times, then after evaluating some gradient elements we could obtain the others. Since the functions {f_pq} are unknown, an unknown element would have to be a linear combination of the known gradient elements. If such a case existed, consider the simplest situation in which L-1 gradient elements have been obtained; the remaining one could be expressed as:
∂ C/∂θ_l = ∑_k≠ l m_k ∂ C/∂θ_k.
This means that the vectors {g^k_pq}_k=1^L are linearly dependent. Then there exists a set of numbers {m_l}_l=1^L, not all 0, such that:
∑_l=1^L m_l ∂ C/∂θ_l = 0.
This means:
∑_l=1^L m_l g^l_pq = 0, ∀ pq = {p_lq_l}.
We consider the following 2^L elements with indices:
pq = {00,11}^L.
And we re-order them as w_l=p_lq_l. Then the above equation will become:
∑_l=1^L m_l g^l_w=0, ∀ w={w_l}={0,1}^L.
Define w'={w_l}_l=2^L. Consider every pair of index 0,w' and 1,w', we have:
∑_l=1^L m_l g^l_0,w'=0,
∑_l=1^L m_l g^l_1,w'=0.
Add the two equations together:
∑_l=1^L m_l ( g^l_0,w' +g^l_1,w') =0.
Observe:
g^l_0,w' + g^l_1,w' = ∂ a_0,w'/∂θ_l + ∂ a_1,w'/∂θ_l = ∂/∂θ_l (a_0,w'+a_1,w').
Since
a_0,w'+a_1,w' = cos^2θ_1 a_w' + sin^2θ_1 a_w' = a_w',
which does not depend on θ_1, we have:
g^1_0,w' + g^1_1,w' = 0.
Then Eq. (<ref>) will become:
∑_l=2^L m_l ( g^l_0,w' +g^l_1,w') = ∑_l=2^L m_l ∂ a_w'/∂θ_l = 0.
This is exactly the (L-1)-parameter case.
Repeat this process and we will eventually have:
m_L ∂ a_w_L/∂θ_L = 0, w_L=0,1.
Since a_w_L=0=cos^2θ_L, we have ∂ a_w_L=0/∂θ_L = -sin (2θ_L), so m_L=0 except at isolated values of θ_L where sin(2θ_L)=0. Moving backwards, we obtain m_L-1=0 and, finally, m_l=0 for all l. This contradicts the assumption that the vectors are linearly dependent, and the proof is complete.
§ TIME COSTS FOR EXECUTING VARIATIONAL QUANTUM ALGORITHMS
In this part, we estimate the time cost for executing VQAs, especially when using the UCCSD ansatz and HEA introduced in Sec. <ref>. Since VQA is executed by repeatedly measuring cost functions and updating parameters, the total time of running a VQA is:
t_VQA = t_cost× N_cost,
where t_cost is the time needed to obtain a cost function and N_cost is the number of cost functions needed to obtain to finish the algorithm.
On the one hand, cost functions in VQAs are obtained via repeated sampling of the ansatz. Then: t_cost = t_sample× N_sample, where t_sample and N_sample are the time needed to sample the ansatz once and the number of samples needed to obtain a cost function, respectively. On the other hand, N_cost depends on the optimization algorithms applied. When using gradient-based algorithms, we have:
N_cost = N_gradient× N_iterate, where N_gradient and N_iterate are the number of cost functions needed to evaluate to obtain one gradient and the number of iteration times, respectively. Below we will analyze the above four factors. And the sketch diagram for the analysis is shown in Fig. <ref>.
N_gradient As described in Theorem <ref>, we can view N_gradient simply as the number of parameters in the ansatz. In the UCCSD ansatz, the number of parameters is exactly the sum of single- and double-excitation terms:
L_UCCSD = C_n_e^1 C_n_o-n_e^1 + C_n_e^2 C_n_o-n_e^2,
where
C_n^m = n!/m!(n-m)!.
In HEA, parameters only appear in the single-qubit rotation operations. In each of the P blocks, we apply three single-qubit gates on every qubit, then we have:
L_HEA = 3nP.
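These two counting formulas can be evaluated directly; the sketch below uses an arbitrary example of n_o = 20 spin-orbitals, n_e = 10 electrons, and P = n = 20 blocks to reproduce L_UCCSD and L_HEA.

from math import comb

def n_params_uccsd(n_o, n_e):
    """L_UCCSD: number of single- plus double-excitation amplitudes."""
    return comb(n_e, 1) * comb(n_o - n_e, 1) + comb(n_e, 2) * comb(n_o - n_e, 2)

def n_params_hea(n, P):
    """L_HEA: three single-qubit rotations per qubit per block."""
    return 3 * n * P

print(n_params_uccsd(20, 10), n_params_hea(20, 20))  # 2125 and 1200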
t_sample Generally, sampling a quantum circuit includes three parts: initializing the quantum hardware, running the circuit, and measuring the outcome. Then:
t_sample = t_initial + t_gate + t_read.
On current superconducting hardware, t_initial
and t_read together will reach the order of 1 μ s <cit.>. The time of applying a single- and two-qubit gate are t_single=30 ns and t_double=60 ns <cit.>, respectively.
[The detailed time differs in systems but is in the same order. We will apply the averaged and experienced values.]
Then:
t_gate = l_single× t_single + l_double× t_double,
where l denotes the single- and two-qubit gate layer depths; two gates in the same layer can be applied at the same time. Since the time for initializing the hardware and measuring the outcome is comparable to applying about 10^2 quantum gates, we ignore this cost and take only the circuit running time as t_sample. The following theorems provide the value of t_gate for the UCCSD ansatz and the HEA.
For a many-body system with n_o spin-orbitals and n_e electrons, the gate layer depth for the UCCSD ansatz under the first-order Trotter expansion is:
l_single = 6 C_n_e^1 C_n_o-n_e^1 +24 C_n_e^2 C_n_o-n_e^2 ,
l_double = 2 n_oC_n_e^1 C_n_o-n_e^1 + 8/3 (2n_o+1) C_n_e^2 C_n_o-n_e^2.
As introduced in Sec. <ref>, implementing the UCCSD ansatz on the quantum hardware requires transforming the ansatz into the form of Eq. (<ref>). According to Fig. <ref>, for a k-local Pauli operator, which means that the operator acts non-trivially on k qubits, the single-qubit and two-qubit depth of implementing e^-iθ P is 3 and 2k-2, respectively. Therefore, to determine the gate layer depth with the first-order Trotter expansion, we just need to determine the number of operators e^-iθ P in Eq. (<ref>) and the locality for each operator P.
Consider the single-excitation term, for every pair of i>α, the single-excitation term a_i^† a_α - a_α^† a_i is mapped with the JW transformation as:
a_i^†a_α - a_α^†a_i = [ ∏_k<i Z_k ] (X_i - i Y_i) [ ∏_k<α Z_k ] (X_α + i Y_α)
- [ ∏_k<i Z_k ] (X_i + i Y_i) [ ∏_k<α Z_k ] (X_α - i Y_α)
= Z_α (X_α + i Y_α) [ ∏_α<k<i Z_k ] (X_i-i Y_i)
- Z_α (X_α - i Y_α) [ ∏_α<k<i Z_k ] (X_i+i Y_i)
= 2 i X_α[ ∏_α<k<i Z_k ] Y_i - 2 i Y_α[ ∏_α<k<i Z_k ] X_i.
After mapping, a_i^† a_α - a_α^† a_i is mapped to a sum of 2 Pauli strings, each of which is (i-α+1)-local. Similar to Eq. (<ref>), for every group of i>j>α>β, the double-excitation term a_i^† a_j^† a_α a_β-a_β^† a_α^† a_j a_i is mapped to a sum of 8 Pauli strings, each of which is (i-β+1)-local.
Now we are going to determine the circuit depth. Every e^-iθ P contributes a single-qubit circuit depth of 3 and, according to Eq. (<ref>), the numbers of single- and double-excitation terms are C_n_e^1 C_n_o-n_e^1 and C_n_e^2 C_n_o-n_e^2, respectively. Then:
l_single = C_n_e^1 C_n_o-n_e^1 × 2 × 3 + C_n_e^2 C_n_o-n_e^2 × 8 × 3
=6 C_n_e^1 C_n_o-n_e^1 + 24 C_n_e^2 C_n_o-n_e^2.
The case for the two-qubit depth is more complex. For every pair of i,α, there are 2 Pauli strings for each single-excitation term, the two-qubit circuit depth for each of which is 2(i-α+1)-2=2(i-α). Therefore, the two-qubit gate layer depth with the single-excitation term is:
∑_i=n_e+1^n_e+(n_o-n_e)( ∑_α=1^n_e 4(i-α ) ) = ∑_i=n_e+1^n_e+(n_o-n_e)( 4in_e - n_e(n_e+1)/2× 4)
= 4n_e (n_e+1+n_o) (n_o-n_e) /2 -2 n_e(n_e+1)(n_o-n_e)
=2 n_on_e (n_o-n_e)
=2 n_o C_n_e^1 C_n_o-n_e^1.
For every group of i,j,α,β, the double-excitation operator will result in 8 Pauli strings, each of which is (i-β+1)-local. And different choices of j,α will not affect the locality. Then the two-qubit gate depth caused by the double-excitation term is:
∑_i=n_e+1^n_e+(n_o-n_e)( ∑_β=1^n_e (i-β)(n_e-β)(i-n_e-1) ) × 8 = 8/3 (2n_o+1) C_n_e^2 C_n_o-n_e^2 .
Adding Eq. (<ref>) and (<ref>), we obtain the overall two-qubit layer depth, which completes the proof of the theorem.
For the HEA described above with P blocks, we have:
l_single = 3P,
l_double = nP.
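The two theorems above translate directly into the helper functions below; the example sizes (n_o = 20, n_e = 10, P = n = 20) are illustrative only.

from math import comb

def uccsd_layer_depth(n_o, n_e):
    """Single- and two-qubit gate layer depths of the first-order-Trotter UCCSD ansatz."""
    s1 = comb(n_e, 1) * comb(n_o - n_e, 1)   # number of single excitations
    s2 = comb(n_e, 2) * comb(n_o - n_e, 2)   # number of double excitations
    l_single = 6 * s1 + 24 * s2
    l_double = 2 * n_o * s1 + (8 / 3) * (2 * n_o + 1) * s2
    return l_single, l_double

def hea_layer_depth(n, P):
    """Single- and two-qubit gate layer depths of the P-block HEA."""
    return 3 * P, n * P

print(uccsd_layer_depth(20, 10), hea_layer_depth(20, 20))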
N_sample Cost functions in VQAs are obtained via repeated sampling, where reaching a sampling error ϵ requires sampling the circuit O(1/ϵ^2) times. Then N_sample is determined by the sampling accuracy required.
Generally, the sampling error should be within the accuracy required for solving the problem. However, to perform parameter optimization, sampling accuracy should also be related to the scaling of the gradient. Suppose we are applying the parameter-shift rule <cit.> to evaluate the gradient as:
∂_jC = 1/2( C_+ -C_- ),
with C_± = C(θ_j±π/2) and ∂_jC = ∂ C/∂θ_j.
Denote the sampling error as ϵ and the sampled gradient as ∂̃_jC. The worst case is (supposing ϵ > 0):
∂̃_jC = 1/2( [C_+ - ϵ] - [C_-+ϵ] )
= ∂_jC - ϵ.
To update the parameters in the correct direction, we need:
∂̃_jC/∂_jC = (∂_jC-ϵ)/∂_jC > 0.
Then sampling accuracy is dependent on the scaling of the gradient.
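The interplay between shot noise and gradient magnitude can be seen in a one-parameter toy model: for the cost C(θ) = ⟨Z⟩ = cos θ of a single qubit rotated by R_y(θ), the exact parameter-shift gradient is −sin θ, and the sketch below (purely illustrative, not one of the ansatzes above) shows how the sampled estimate converges as the number of shots grows.

import numpy as np

rng = np.random.default_rng(0)

def sampled_cost(theta, n_shots):
    """Toy one-parameter cost C(theta) = <Z> = cos(theta), estimated from shots."""
    p0 = (1 + np.cos(theta)) / 2                  # probability of outcome +1
    counts = rng.binomial(n_shots, p0)
    return 2 * counts / n_shots - 1               # shot-noise limited estimate

def parameter_shift_gradient(theta, n_shots):
    """dC/dtheta = (C(theta+pi/2) - C(theta-pi/2)) / 2, from sampled costs."""
    return 0.5 * (sampled_cost(theta + np.pi / 2, n_shots)
                  - sampled_cost(theta - np.pi / 2, n_shots))

theta = 1.0
print("exact:", -np.sin(theta))
for shots in (10**3, 10**5, 10**7):
    print(shots, "shots:", parameter_shift_gradient(theta, shots))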
When the magnitude of the gradient is suppressed by barren plateaus, exponentially many samples would be required, which is not workable in practice. We therefore analyze the time cost for a set of given sampling numbers. In real tasks, methods can be applied to reduce the number of samples, address the barren plateau phenomenon, and reduce measurement costs.
N_iterate Generally, N_iterate is not known in advance and differs between problems. Even for the same problem, different initial parameters and different optimization algorithms make N_iterate different. In gradient-descent algorithms, both the learning rate and the gradient magnitude affect the number of iterations. Moreover, when the gradient is suppressed by barren plateaus or the landscape contains local minima, optimization takes more steps. Therefore, we treat N_iterate similarly to N_sample and provide the time cost for a set of given N_iterate. We combine these two factors as:
N_si = N_sample× N_iterate,
t_VQA
Now we provide the value of t_VQA for both UCCSD ansatz and HEA. In general,
t_VQA = t_sample× N_sample× N_gradient× N_iterate
= N_si× ( t_single× l_single + t_double× l_double ) × L
= 3× 10^-8× N_si× (l_single+2l_double )× L.
Based on the former analysis, when considering the above ansatzes, we have:
t_VQA-UCCSD = 10^-8× N_si×( C_n_e^1 C_n_o-n_e^1 + C_n_e^2 C_n_o-n_e^2 )
×[ (12n_o+18) C_n_e^1 C_n_o-n_e^1 + (16n_o+88)C_n_e^2 C_n_o-n_e^2 ],
and
t_VQA-HEA = 9× 10^-8× N_si× (2n^2+3n)P^2 .
We can see that, for a fixed N_si, the total time grows polynomially with the system size.
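These expressions can be evaluated numerically with a few lines of Python; the sketch below is self-contained, uses the 30 ns and 60 ns gate times quoted earlier, and the example values (n_o = 20 spin-orbitals, n_e = 10 electrons, N_si = 10^6) are illustrative choices.

from math import comb

T_SINGLE, T_DOUBLE = 30e-9, 60e-9            # gate times from the text, in seconds
SECONDS_PER_YEAR = 3.15e7

def t_vqa(l_single, l_double, n_params, N_si):
    """t_VQA = N_si * (l_single*t_single + l_double*t_double) * L (ideal hardware)."""
    return N_si * (l_single * T_SINGLE + l_double * T_DOUBLE) * n_params

# UCCSD example with n_o = 20, n_e = 10, N_si = 1e6
n_o, n_e, N_si = 20, 10, 1e6
s1, s2 = comb(n_e, 1) * comb(n_o - n_e, 1), comb(n_e, 2) * comb(n_o - n_e, 2)
l_s = 6 * s1 + 24 * s2
l_d = 2 * n_o * s1 + (8 / 3) * (2 * n_o + 1) * s2
print(t_vqa(l_s, l_d, s1 + s2, N_si) / SECONDS_PER_YEAR, "years")  # about one year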
§ TOTAL TIME COST
Based on the analysis in Sec. <ref>, we now provide the detailed time cost of running VQAs. We estimate the time cost under the assumption of an ideal quantum processor: we only take into account the circuit running time and the sampling process for obtaining cost functions, while other factors, including hardware noise, connectivity between physical qubits, the time for initializing the hardware and reading out the outcomes, as well as limitations of VQAs like reachability and trainability, are all ignored. The goal of ignoring these factors is to show the “best” time-scaling performance of VQAs.
As a representative application scenario, we consider applying VQAs to solve the ground states of different-sized molecular systems and label the systems according to their spin-orbital numbers n_o, which is also the number of qubits required: n. The number of electrons is set to be n_e=n_o/2.
Since N_sample and N_iterate are not pre-determined, we will provide the time cost concerning the value of the two factors, which are listed as:
N_sample ∈{ 10^4,10^5,10^6,10^7,10^8 },
N_iterate ∈{ 10^2,10^3,10^4 }.
Combining them into one factor, N_si ranges from 10^6 to 10^12.
Given n_o and n_e, the structure of the UCCSD ansatz is determined. However, the required block depth P is generally hard to determine. Therefore, we consider the following two cases: P=n and P=n^2.
In Fig. <ref> and <ref>, we plot the time cost with different values of N_si for both UCCSD ansatz and HEA. The 1-year and 1000-year time are given as benchmarks.
From the figures, it is clear that for a fixed value of N_si, the total time cost of running VQAs grows polynomially with the number of qubits. Compared to the exponential scaling of classical simulation, VQAs seem to perform better.
However, in terms of the actual time scale, this is not the case. Even at about 20 qubits, VQAs easily reach the 1-year mark. In quantum chemistry tasks, achieving chemical accuracy requires at least 10^6 samples, so the total time cost corresponding to N_si=10^6 can be viewed as the time for a single step of parameter optimization, which is already at the level of 1 year. Since this is the time on an ideal quantum computer, the real time cost will be larger.
§ VQAS VERSUS CLASSICAL SIMULATIONS
Since the term “quantum advantage” is a topic compared to classical simulations, it is insufficient to only provide the time cost for using VQAs. In this part, we also consider the time cost of simulating VQAs using classical simulation of quantum circuits.
As quantum processors are unavailable for most research, classical simulation of quantum circuits is widely applied. The major difference between running on quantum hardware and classically simulating quantum circuits is that on hardware the time of a quantum gate does not change with the number of qubits, whereas in classical simulation it does: a quantum operation U_x, with x the list of qubits the operation acts on, is in fact U_x⊗ I_x̅, where x̅={ k|k∉x}. In this case, the time of applying a quantum gate grows exponentially with the number of qubits.
We set the gate time at 10 qubits to t_10=10^-3 s, so the time for n qubits is t_n = t_10 × 2^n-10. Sampling is not required in classical simulation. We set N_sample=10^6 for the quantum runs to reach chemical accuracy, and N_iterate is listed in Eq. (<ref>).
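The comparison can be sketched as follows for the HEA with P = n blocks; the way the classical cost is accumulated here (every gate of every circuit execution costs t_n, with no sampling, and L circuit executions per optimization step) is our reading of the setup above, so the exact crossover point should be taken as illustrative only.

def t_quantum_hea(n, P, N_sample, N_iterate):
    """Ideal-hardware VQA time: N_si * t_gate * L for the P-block HEA."""
    t_gate = 3 * P * 30e-9 + n * P * 60e-9
    return N_sample * N_iterate * t_gate * 3 * n * P

def t_classical_hea(n, P, N_iterate, t10=1e-3):
    """Classical simulation: every gate costs t10 * 2**(n-10); no sampling needed."""
    gates = 3 * n * P + n * P                 # rotation gates plus CNOTs per circuit
    return N_iterate * 3 * n * P * gates * t10 * 2 ** (n - 10)

YEAR = 3.15e7
for n in (10, 20, 30, 40):
    P = n
    print(n, t_quantum_hea(n, P, 1e6, 1e3) / YEAR, t_classical_hea(n, P, 1e3) / YEAR)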
The time comparison between VQAs and classical simulations, for both the UCCSD ansatz and the HEA, is shown in Fig. <ref>. Due to the different growth rates, the time curves of VQAs and classical simulations cross; we denote the corresponding time as T, which is a function of the ansatz, the iteration number, etc.
It is only possible for VQAs to outperform classical computers when the required time is larger than T. From the figures, this time is on the scale of years, and it increases with the number of parameters.
Moreover, unlike quantum processors, classical simulations can use multiple cores, which provides a further time reduction. For instance, in <cit.>, the average gate time is 2.09 s and 1.22 s when performing a 29-qubit and a 40-qubit quantum operation, respectively, whereas quantum simulation with multiple quantum processors is still unavailable nowadays. Therefore, quantum advantages are difficult for VQAs to reach within an acceptable time scale.
§ CONCLUSION AND OUTLOOK
In this paper, we have investigated the time-scaling performance of VQAs and their potential to achieve quantum advantages. We proved that methods like backpropagation cannot be applied directly when training QNNs, since the inter-layer quantum states of a QNN are not recorded. As a consequence, the gradient-evaluation cost depends on the number of parameters in the quantum version of NN models, which limits the scalability of VQAs. Based on this result, we estimated the time cost of running VQAs in the ideal case, where realistic limitations like noise, reachability, and qubit connectivity are not considered, and only the time of performing quantum gates and the errors due to finite sampling are taken into account. The results show that even though the time grows only polynomially, it easily reaches a 1-year wall time. Finally, we considered the time of classical simulation, which grows exponentially with the number of qubits. The results show that the running time of VQAs is only shorter once the time scale exceeds about 10^2 years with the UCCSD ansatz. Given the realistic limitations mentioned above, whether VQAs can perform better is still unclear; on ordinary time scales, quantum advantages may be unavailable with VQAs.
By providing such a negative comment, we do not want to deny the potential of VQAs and of NISQ algorithms. For VQAs, optimizations need to be made to reduce the time cost, for example more efficient sampling strategies and more parameter-saving ansatzes. One of our future works is to design backpropagation-type algorithms for efficiently training QNNs.
In the long term, introducing quantum computing into machine learning, i.e., quantum machine learning, has remarkable potential. However, due to the different features of quantum and classical computation, directly replacing the NN model with a QNN may not be the optimal way to achieve quantum advantages. Seeking a more natural way to carry out QML tasks would be meaningful.
Taking one step further, many quantum algorithms are quantum-classical hybrids: a problem is solved by classical pre-processing, quantum computation, and classical post-processing. Usual algorithms simply replace one step of classical computation with quantum computation, but pre-processing tailored to the quantum computation is preferable.
§ ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (Grant No. 12034018), and Innovation Program for Quantum Science and Technology No. 2021ZD0302300.
§ DATA AVAILABILITY
All the data that support the findings of this study are available within this article.
|
http://arxiv.org/abs/2307.05022v1 | 20230711055157 | Positivity of extensions of vector bundles | ["Sho Ejiri", "Osamu Fujino", "Masataka Iwai"] | math.AG | ["math.AG", "Primary 14J60, Secondary 14E99, 14F06"] |
In this paper, we prove that an extension of an ample line bundle by a big line bundle is not necessarily pseudo-effective. In particular, this implies that an almost nef vector bundle is not necessarily pseudo-effective. We also show that an extension of a big (resp. pseudo-effective) line bundle by an ample (resp. a nef) vector bundle is big (resp. pseudo-effective).
§ INTRODUCTION
Several positivity conditions defined for line bundles, such as ampleness, nefness, bigness, or pseudo-effectivity, are naturally extended to vector bundles (see Definitions <ref>–<ref>).
When we consider a property of vector bundles, it is natural to ask whether or not it is preserved by extension: if we have an exact sequence
0→ℰ' →ℰ→ℰ”→ 0
of vector bundles, and if ℰ' and ℰ” satisfy a positivity condition, then does ℰ satisfy the same?
This problem was solved affirmatively for ampleness and nefness (cf. <cit.>), but it had been open for bigness and pseudo-effectivity (cf. <cit.>, <cit.>).
In this paper, we answer the above problem negatively for bigness and pseudo-effectivity:
Let k be an algebraically closed field.
Then there exist a smooth projective surface X over k
and a vector bundle ℰ on X with the following properties:
* there exists an exact sequence
0→ℒ→ℰ→ℳ→ 0
such that ℒ is a big line bundle on X and ℳ is an ample line bundle on X;
* ℰ is not pseudo-effective (so not big).
Since a big line bundle is weakly positive, the above theorem also tells us
that weak positivity is not necessarily preserved by extension.
Here, weak positivity is a notion introduced by Viehweg <cit.>
(see Definition <ref>),
which is a stronger condition than pseudo-effectivity.
In <cit.>, these positivities were discussed by using the base loci of vector bundles (see also <cit.>).
Note that a pseudo-effective (resp. big) vector bundle in this paper
is said to be V-psef (resp. V-big) in <cit.>.
Demailly, Peternell, and Schneider <cit.> introduced
the notion of almost nefness for vector bundles:
a vector bundle ℰ on a smooth projective variety X is
said to be almost nef if there exists a countable family A_i
of proper subvarieties of X such that ℰ|_C is nef for all curves
C ⊄⋃_i A_i.
In <cit.>, they proved that a pseudo-effective vector bundle is almost nef.
They also asked whether almost nefness implies pseudo-effectivity (<cit.>).
Theorem <ref> answers this problem negatively, since almost nefness is preserved by extension:
since almost nefness is preserved by extension:
The vector bundle ℰ in Theorem <ref> is
almost nef but not pseudo-effective.
Theorem <ref> tells us that an extension of an ample line bundle
by a big line bundle is not necessarily pseudo-effective.
It is then natural to ask whether an extension of a big line bundle
by an ample line bundle satisfies a positivity condition.
This question is solved affirmatively by the following theorem:
Let X be a normal projective variety over an algebraically closed field.
Let ℰ and 𝒢 be vector bundles on X.
Let ℒ be a line bundle on X.
Suppose that there exists the following exact sequence:
0→𝒢→ℰ→ℒ→ 0.
* If 𝒢 is nef and ℒ is pseudo-effective,
then ℰ is pseudo-effective.
* If 𝒢 is ample and ℒ is big,
then ℰ is big.
The authors thank Mihai Fulger, Shin-ichi Matsumura, and Xiaojun Wu very much for some useful comments and suggestions.
The first author was partly supported by MEXT Promotion of Distinctive Joint Research Center Program JPMXP0619217849.
The second author was partially supported by JSPS KAKENHI Grant Numbers JP19H01787, JP20H00111, JP21H00974, JP21H04994.
The third author was supported by Grant-in-Aid for Early Career Scientists JP22K13907.
§ DEFINITIONS
In this section, we recall several definitions defined for vector bundles.
Let k be an algebraically closed field of arbitrary characteristic.
A variety is an integral separated scheme of finite type over k.
Let ℰ be a vector bundle on a projective variety X.
Let π:ℙ(ℰ) → X be the projectivization of ℰ.
Let 𝒪_ℙ(ℰ)(1) be the tautological line bundle.
We say that ℰ is ample (resp. nef) if 𝒪_ℙ(ℰ)(1) is ample (resp. nef).
Let 𝒢 be a coherent sheaf on a variety X.
Let U be an open subset of X.
We say that 𝒢 is globally generated over U (resp. generically globally generated)
if the natural map
H^0(X,𝒢)⊗_k 𝒪_X →𝒢
is surjective over U (resp. surjective at the generic point of X).
Let 𝒢 be a vector bundle on
a quasi-projective variety X.
Let U be an open subset of X.
Let H be an ample Cartier divisor on X.
We say that 𝒢 is weakly positive over U
(resp. pseudo-effective)
if for every α∈ℤ_>0, there exists a β∈ℤ_>0
such that
S^αβ(𝒢) (β H)
is globally generated over U (resp. generically globally generated).
Here, S^αβ(𝒢) denotes
the αβ-th symmetric product of 𝒢.
We say that 𝒢 is weakly positive
if 𝒢 is weakly positive over an open subset of X.
Let 𝒢 be a vector bundle on
a quasi-projective variety X.
Let H be an ample Cartier divisor on X.
We say that 𝒢 is big if there exists an
α∈ℤ_>0 such that
S^α (𝒢) (-H)
is pseudo-effective.
By <cit.>, Definitions <ref> and <ref> are independent of the choice of ample Cartier divisor H.
The terminology “pseudo-effective” (resp. “big”) is often used
in a different meaning.
For example, in other papers, a vector bundle ℰ on
a projective variety X is said to be pseudo-effective (resp. big) if
𝒪_ℙ(ℰ) (1) is pseudo-effective (resp. big).
This is weaker than the pseudo-effectivity (resp. bigness) in this paper.
To the best knowledge of the authors, the notion of
weakly positive sheaves
was first introduced by Viehweg in <cit.>
(see <cit.>) and that of
big sheaves originates from
<cit.> (see <cit.>).
We note that the definition of weak positivity in
<cit.> is different from the one in <cit.>
(see also <cit.>)
and coincides with that of pseudo-effectivity.
§ PROOF OF THEOREM <REF>
Set X:=ℙ(𝒪_ℙ^1⊕𝒪_ℙ^1(-2)).
Let f:X→ℙ^1 be the projection.
Let C⊂ X be the section of f corresponding to the quotient 𝒪_ℙ^1⊕𝒪_ℙ^1(-2) ↠𝒪_ℙ^1(-2).
Then 𝒪_X(1) ≅𝒪_X(C).
Put H:=C +3f^* [y], where y ∈ℙ^1 is a closed point.
Then we see from <cit.> that H is very ample.
Since
f_*𝒪_X(C) ≅𝒪_ℙ^1⊕𝒪_ℙ^1(-2),
we have H^1(ℙ^1, f_*𝒪_X(C)) ≅ k.
Thus, from the Leray spectral sequence, we obtain
H^1(X, 𝒪_X(C))≅ k.
Take 0ξ∈Ext^1(𝒪_X, 𝒪_X(C)).
Let
♭
0→𝒪_X(C) →ℰ→𝒪_X → 0
be the exact sequence corresponding to ξ.
Since H^1(X,𝒪_X)=0, we see that the natural morphism
H^1(X, 𝒪_X(C))
→
H^1(C, 𝒪_C(C))
is injective, so the exact sequence
0 →𝒪_C(C) →ℰ|_C →𝒪_C → 0
does not split. Since 𝒪_C(C) ≅𝒪_ℙ^1(-2),
we see that
ℰ|_C ≅𝒪_ℙ^1(-1)^⊕ 2.
From now on, we divide the proof into the case of char(k)=0
and the case of char(k)=p>0.
Case of char(k)=0.
By <cit.>, there is a surjective finite morphism
π:X'→ X from a smooth projective surface
and an ample Cartier divisor H' on X' such that π^*H ∼ 4H'.
Put 𝒢:= π^*ℰ (H').
Taking the pullback of (<ref>) and the tensor product with 𝒪_X'(H'),
we obtain the exact sequence
0→𝒪_X'(π^*C +H') →𝒢→𝒪_X'(H') → 0.
We prove that 𝒢 is not pseudo-effective.
From
S^4(𝒢)
≅ S^4(π^*ℰ) (4H')
≅ S^4(π^*ℰ) (π^*H)
≅π^*( S^4(ℰ) (H) ),
it is enough to show that ℱ:=S^4(ℰ) (H) is not pseudo-effective by <cit.> and <cit.>.
For this purpose, we check that
S^4β(ℱ)(β H)
≅ S^4β(S^4(ℰ))(5β H)
is not generically globally generated for each β∈ℤ_>0.
By (<ref>), we have the following surjective morphism
σ_β:
S^4β(S^4(ℰ))(5β H)
↠𝒪_X(5β H).
Let us consider the following commutative diagram:
S^4β(S^4(ℰ))(15β f^*[y]) ↪ S^4β(S^4(ℰ))(5β H)
𝒪_X(15β f^*[y]) ↪ 𝒪_X(5β H),
where the upper horizontal inclusion is τ_β and the vertical arrows are the surjections λ_β: S^4β(S^4(ℰ))(15β f^*[y]) ↠ 𝒪_X(15β f^*[y]) and σ_β: S^4β(S^4(ℰ))(5β H) ↠ 𝒪_X(5β H).
Here, the horizontal arrows are induced from the morphism
𝒪_X(-5β C) ↪𝒪_X.
In order to prove that S^4β(S^4(ℰ))(5β H) is not generically globally generated, it is enough to see that H^0(σ_β) is the zero-map.
For this purpose, it is sufficient to prove H^0(τ_β) is bijective and
H^0(λ_β) is the zero-map.
H^0(τ_β) is bijective.
Taking the tensor product of
0 →𝒪_X(-C) →𝒪_X →𝒪_C → 0
and S^4β(S^4(ℰ))( lC +15β f^*[y]) for l∈ℤ_≥ 0,
we obtain the following exact sequence:
0 → H^0(X, S^4β(S^4(ℰ)) ((l-1)C +15β f^*[y]) )
→ H^0(X, S^4β(S^4(ℰ)) (lC +15β f^*[y]) )
→ H^0(C, S^4β(S^4(ℰ)) (lC +15β f^*[y]) |_C ).
Since 𝒪_C(C) ≅𝒪_ℙ^1(-2) and
ℰ|_C ≅𝒪_ℙ^1(-1)^⊕ 2,
we have
H^0(C, S^4β(S^4(ℰ)) (lC +15 β f^*[y]) |_C)
≅⊕ H^0(ℙ^1, 𝒪_ℙ^1(-16β -2l +15β) )
= ⊕ H^0(ℙ^1, 𝒪_ℙ^1(-β -2l) )
=0.
From H^0(X, S^4β(S^4(ℰ)) (5β H) ) = H^0(X, S^4β(S^4(ℰ)) (5β C + 15 β f^*[y] )), our claim follows.
H^0(λ_β) is the zero-map.
Consider the following commutative diagram:
H^0(X, S^4β(S^4(ℰ))(15β f^*[y])) → H^0(C, S^4β(S^4(ℰ))(15β f^*[y])|_C)
H^0(X, 𝒪_X(15β f^*[y])) → H^0(C, 𝒪_C(15β f^*[y])),
where the horizontal arrows are restriction maps to C and the vertical arrows are induced by λ_β.
The bottom horizontal arrow is bijective, since C is a section of f.
Hence, our claim follows from
H^0(C, S^4β(S^4(ℰ))(15β f^*[y]) |_C )
≅⊕ H^0(ℙ^1, 𝒪_ℙ^1(-16β +15β) )
=0.
Case of char(k)=p>0.
Set e:=1 (resp. e:=2) if p≥ 5 (resp. p<5).
Then p^e≥ 4.
Put 𝒢:=(F^e^*ℰ)(H),
where F is the absolute Frobenius morphism of X.
Taking the pullback of (<ref>) by F^e and the tensor product with 𝒪_X(H),
we obtain the exact sequence
0→𝒪_X(p^eC+H) →𝒢→𝒪_X(H) → 0.
We prove that 𝒢 is not pseudo-effective.
For this purpose, we check that
S^4β(𝒢)(β H)
≅
S^4β(F^e^*ℰ)(5β H)
is not generically globally generated for each β∈ℤ_>0.
We have the following surjective morphism
s_β: S^4β(F^e^*ℰ)(5β H)
↠𝒪_X(5β H).
Thus, it is enough to check that H^0(s_β) is the zero-map.
For each l∈ℤ_≥ 0, we have
H^0(C, S^4β(F^e^*ℰ)(lC+15β f^*[y]) |_C )
≅⊕ H^0(ℙ^1, 𝒪_ℙ^1(-4β p^e -2l +15β) )
≅⊕ H^0(ℙ^1, 𝒪_ℙ^1 ((15-4p^e)β -2l ) )
=0.
Note that 15-4p^e≤ 15-16=-1.
Hence, we can prove H^0(s_β)=0 by an argument similar to that of H^0(σ_β)=0 as in the char(k)=0 case.
The vector bundle ℰ in (<ref>) is a simple example of an almost nef but not pseudo-effective vector bundle. That ℰ is not pseudo-effective is proved implicitly in the proof above, but can also be proved directly.
By (<ref>), we get the surjective morphism
t_β: S^4β(ℰ)(β H)
↠𝒪_X(β H)
for each β∈ℤ_>0.
We can prove H^0(t_β)=0
by using the commutative diagram
S^4β(ℰ)(3β f^*[y]) ↪ S^4β(ℰ)(β H)
𝒪_X(3β f^*[y]) ↪ 𝒪_X(β H)
(with surjective vertical arrows, the right-hand one being t_β),
where the horizontal arrows are induced from the morphism 𝒪_X(-β C) ↪𝒪_X,
and a vanishing as in Claim <ref>, that is,
H^0(C, S^4β(ℰ)(lC+3β f^*[y]) |_C)
=⊕ H^0(ℙ^1, 𝒪_ℙ^1(-4β -2l +3β))
=0
for each l∈ℤ_≥ 0.
In positive characteristic, we do not know whether the
pseudo-effectivity of ℰ implies that of S^m(ℰ),
so we have to separate the proof into the case of char(k)=0 and the
case of char(k)>0.
§ PROOF OF THEOREM <REF>
Before starting the proof of Theorem <ref>,
we recall the following lemma.
Let X be a smooth projective variety.
Let ℰ be a vector bundle on X.
Then ℰ is pseudo-effective if and only if for every finite surjective morphism π:X'→ X from a smooth projective variety X' and for every ample divisor H' on X', the vector bundle π^*ℰ(H') is pseudo-effective.
First, we prove (1).
When char(k)=0 (resp. char(k)>0),
we take a resolution of singularities (resp. a smooth
alteration constructed in <cit.>).
Then, by <cit.>, we may assume that
X is smooth.
We replace X with any finite cover of X.
By Lemma <ref>, it is enough to show that
ℰ(H) is pseudo-effective for every ample divisor H on X.
Note that ℒ(H) is big.
When char(k)=0, by taking a resolution of
a suitable cyclic cover
and using <cit.>,
we may assume that there is an injective
morphism 𝒪_X ↪ℒ(H).
When char(k)>0, by using the Frobenius morphism and
<cit.>,
we can replace ℒ(H) by ℒ(H)^p^e and
obtain an injective morphism 𝒪_X ↪ℒ(H).
Let ℱ be the inverse image of 𝒪_X by
ℰ(H) ↠ℒ(H).
Then we have the morphism between exact sequences
0 → 𝒢(H) → ℱ → 𝒪_X → 0
0 → 𝒢(H) → ℰ(H) → ℒ(H) → 0,
where the vertical maps are the identity on 𝒢(H), the injection τ: ℱ ↪ ℰ(H), and the injection 𝒪_X ↪ ℒ(H).
Since 𝒢(H) and 𝒪_X are nef,
we see that ℱ is also nef.
By the generic surjectivity of τ,
we see that ℰ(H) is pseudo-effective.
Next, we prove (2).
Let H be an ample Cartier divisor on X.
Take m∈ℤ_>0 such that char(k)∤ m, S^m(𝒢)(-H) is nef, and ℒ^m(-H) is pseudo-effective.
By <cit.>, there are a surjective finite morphism
π:X'→ X from a normal projective variety X'
and an ample Cartier divisor H' on X' such that mH'∼π^*H.
Consider the exact sequence
0→π^*𝒢(-H') →π^*ℰ(-H')
→π^*ℒ(-H') → 0.
Since S^m(π^*𝒢(-H')) ≅π^*(S^m(𝒢)(-H))
is nef, so is π^*𝒢(-H').
Also, π^*ℒ(-H') is pseudo-effective,
so we see that
π^*ℰ(-H') is pseudo-effective by (1).
Then
S^(m+1)β(π^*ℰ(-H'))(β H')
≅ S^(m+1)β(π^*ℰ) (-mβ H')
≅ S^(m+1)β(π^*ℰ) (-βπ^*H)
≅π^*( S^(m+1)β(ℰ)(-β H) )
is generically globally generated for some β∈ℤ_>0,
so S^(m+1)β(ℰ)(-β H)
is pseudo-effective by <cit.>,
which means that ℰ is big.
|
http://arxiv.org/abs/2307.04806v1 | 20230710180056 | The Dragon-II simulations -- II. Formation mechanisms, mass, and spin of intermediate-mass black holes in star clusters with up to 1 million stars | ["Manuel Arca Sedda", "Albrecht W. H. Kamlah", "Rainer Spurzem", "Francesco Paolo Rizzuto", "Mirek Giersz", "Thorsten Naab", "Peter Berczik"] | astro-ph.GA | ["astro-ph.GA"] |
The processes that govern the formation of intermediate-mass black holes (IMBHs) in dense stellar clusters are still unclear. Here, we discuss the role of stellar mergers, star-BH interactions and accretion, as well as BH binary (BBH) mergers in seeding and growing IMBHs in the Dragon-II simulation database, a suite of 19 direct N-body models representing dense clusters with up to 10^6 stars. Dragon-II IMBHs have typical masses of m_ IMBH = (100-380) and relatively large spins χ_ IMBH > 0.6. We find a link between the IMBH formation mechanism and the cluster structure. In clusters denser than 3× 10^5 M_⊙ pc^-3, the collapse of massive star collision products represents the dominant IMBH formation process, leading to the formation of heavy IMBHs (m_ IMBH > 200 M_⊙), possibly slowly rotating, that form over times <5 Myr and grow further via stellar accretion and mergers in just <30 Myr. BBH mergers are the dominant IMBH formation channel in less dense clusters, for which we find that the looser the cluster, the longer the formation time (10-300 Myr) and the larger the IMBH mass, although remaining within 200 M_⊙. Strong dynamical scatterings and relativistic recoil efficiently eject all IMBHs in Dragon-II clusters, suggesting that IMBHs in this type of cluster are unlikely to grow beyond a few 10^2 M_⊙.
methods: numerical – galaxies: star clusters: general – stars: general, black holes
§ INTRODUCTION
Despite great progress in observations, marked by the detection of intermediate-mass black hole (IMBH) candidates with masses as low as 50,000 <cit.>, and the first detection of an IMBH with mass ∼ 150 formed from the merger of two massive stellar BHs <cit.>, IMBHs remain elusive objects whose existence in the M_ IMBH = 10^2-10^5 mass range is largely debated <cit.>.
Several IMBH candidates have been proposed in galactic and extragalactic clusters <cit.>, but none of the explorations conducted so far led to conclusive results, making IMBH formation processes one of the most intriguing puzzles of modern astronomy.
Numerical and theoretical works on IMBH formation in dense star clusters suggest that IMBH seeding can occur via three, rather uncertain, pathways <cit.>: multiple stellar mergers, accretion of stellar matter onto a stellar BH, or repeated BH mergers. These mechanisms are not mutually exclusive: multiple stellar mergers can form a very massive star (VMS) that eventually collides with a stellar BH, and the collision product grows further by merging with other BHs in the cluster. These processes could explain the formation of supermassive BHs (SMBHs) in galactic nuclei <cit.>. A further formation channel could be the formation and collapse of a supermassive star, the so-called direct-collapse scenario for SMBH seeding in galactic nuclei <cit.>. A similar process, aided by stellar collisions and gaseous accretion, could operate also in the most massive globular clusters, provided that they accrete a significant amount of the gas in which they are embedded at formation <cit.>.
The impact of multiple stellar mergers on the IMBH buildup depends in part on the possible onset of pair-instability (PISN) and pulsational pair-instability supernova (PPISN) mechanisms. Stars that develop a He core with mass in the range m_ He=(64-135) undergo PISN and explode leaving no remnant, whilst stars with m_ He=(32-64) suffer strong mass loss owing to PPISN and leave remnants generally lighter than 40-50. These explosive mechanisms result in the so-called upper mass-gap, a region of the mass spectrum m_ BH = 40-150 where no BHs are expected. The boundaries of the upper mass-gap are rather uncertain and depend on many details, among which are the stellar evolution model, stellar rotation, and the rates of thermonuclear reactions <cit.>. Stellar mergers can actually circumvent PISN and PPISN by mixing stars in different evolutionary stages, a mechanism that permits increasing the stellar mass while keeping the He core below the threshold for these explosive mechanisms to develop <cit.>. Stellar mergers of this type have proven to be a viable way to generate upper-mass gap BHs in star clusters and, in some cases, IMBHs <cit.>.
Whilst there is some general consensus about the outcome of stellar mergers, also thanks to the development of detailed hydrodynamical simulations coupled with stellar evolution models <cit.>, it is still rather unclear how much mass a stellar BH can accrete from a massive star. Several works have shown that in the case of a "normal" star merging with a stellar BH there is little accretion, as most of the energy is radiated away via jets, although the mechanism is highly uncertain and likely depends on the star's structure and evolutionary stage <cit.>. Hydrodynamical simulations of close star-BH interactions have shown that up to 70% of the stellar mass remains bound to the BH, but energy arguments suggest that even a tiny amount of accreted matter, O(10^-3-10^-2), generates enough energy to evaporate the accretion disk and halt the BH growth <cit.>. Nonetheless, recent simulations modelling the common envelope phase of a tight star-BH binary have shown that the BH accretes the stellar core and expels the envelope, a process, possibly accompanied by a SN-like transient, that can spin up the BH to nearly extremal values regardless of the initial spin <cit.>. In multiple main sequence (MS) star collisions, the merger product is expected to be characterised by a compact core and a tenuous envelope with densities as low as 10^-10 g cm^-3 <cit.>. Therefore, it seems reasonable to assume that a BH would eat up a significant fraction of mass from a massive companion that underwent multiple stellar mergers. Given this, recent works parametrised the amount of accreted matter through an accretion parameter f_c=0-1 <cit.>.
Repeated BH mergers can potentially build up upper-mass gap BHs and IMBHs, but their efficiency is inevitably hampered by the post-merger recoil originating from anisotropic GW emission <cit.>, which can easily eject the post-merger product from the parent environment, especially in star clusters with velocity dispersion σ < 100 km s^-1 <cit.>.
Typically, the amplitude of the kick imparted promptly after a merger on the remnant depends on the binary mass ratio and the amplitude and direction of the component spins, and can attain values that span more than two orders of magnitude.
Despite its crucial impact on post-merger dynamics, little is known about the natal spin of stellar BHs, let alone IMBHs. Observations of several high-mass X-ray binaries show that BHs in these systems are nearly maximally spinning <cit.>, while observations of GW sources suggest that merging BHs are mostly slowly rotating (χ_ BH < 0.5) <cit.>.
From the theoretical point of view, it has been suggested that the evolution of the BH stellar progenitors could significantly impact the natal spin distribution.
In single stars and binaries with negligible mass transfer, efficient angular momentum transport driven by magnetic fields could trigger the formation of BHs with natal spins as small as χ_ BH≲ 0.01 via Taylor-Spruit dynamo <cit.>.
A significant mass-transfer can, instead, significantly spin-up a BH even if it is spinless at birth, possibly explaining the observed spin of BHs in Galactic low-mass X-ray binaries (χ_ BH∼ 0.1-0.99) <cit.>. Similarly, accretion from a BH progenitor onto a close companion in a binary and subsequent accretion from the companion onto the BH can spin-up the BH in high-mass X-ray binaries, provided that the angular momentum transfer when the companion leaves the MS phase is inefficient <cit.>. High-mass X-ray binaries with highly spinning BHs are not expected to produce merging BHs, a feature that partly explains the dearth of highly spinning BHs in observed BH mergers <cit.>.
In massive binaries undergoing both Roche lobe overflow and common envelope and eventually forming a BH binary (BBH), the first-born BH can have nearly zero spin or a spin covering a wide range, depending on the stellar prescription adopted, whilst the second BH could have nearly extremal spin <cit.>. This is likely driven by tidal synchronization of BH progenitors rotation and their mutual orbit <cit.>. Nonetheless, massive binaries could also form BHs with negligible spins, provided that their progenitors lose their hydrogen envelope before undergoing SN <cit.>.
In the case of BHs formed from star-BH mergers, instead, it has been shown that the accretion of the star core onto the BH can spin-up the BH to extreme values <cit.>. The aforementioned scenarios for BH natal spin can have a significant impact on the properties of IMBHs, depending on their formation mechanism. An IMBH formed via star-BH merger, for example, could be characterised by a large spin, while one formed via the collapse of a VMS could have negligible spin.
Stellar mergers, star-BH interactions, and BBH mergers can also have an impact on the formation of BHs in the upper-mass gap. In the first three observing runs, the LIGO-Virgo-KAGRA collaboration (LVC) revolutionized our knowledge of BHs, proving the existence of BHs in and beyond the upper-mass gap. The most updated GW transient catalog (GWTC-3) contains 85 sources associated with the merger of two BHs with masses above m_ BH = 3 <cit.>. Around one-third of them (27) have one component above m_ BH > 40.5, and 8 of them have one component heavier than m_ BH > 65, i.e. two proposed lower limits for the PISN <cit.>. Moreover, 8 sources have a remnant mass m_ BH, rem > 100, 3 of which exceed the IMBH threshold at the 95% confidence level. With the forthcoming fourth observing run (O4), the LVC will possibly detect a further 30-150 merging events, so future detections will provide further insight into the development of BH mergers involving upper-mass gap BHs.
In this work, we discuss the formation of IMBHs and upper mass-gap BHs in the Dragon-II star cluster database, a suite of 19 direct N-body simulations of star clusters comprised of up to 1 million stars and with up to 33% of stars initially in binaries (details about these models are discussed in our companion paper, Arca Sedda et al., in prep.), performed with the Nbody6++GPU code[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>] <cit.>.
The paper is organised as follows: in Section <ref> we briefly summarise the main features of our models; Section <ref> describes how IMBHs form in simulations and what is the impact of different formation channels; whilst Section <ref> is devoted to discuss the impact of Newtonian and relativistic dynamics on the mass and spin of IMBHs in dense star clusters. Section <ref> summarises the main results of the work.
§ NUMERICAL METHODS
§.§ Modelling clusters with the Nbody6++GPU code
All clusters are represented by <cit.> models with a dimensionless potential well W_0 = 6, a number of stars of N = (120 - 300 - 600)× 10^3, and an initial half-mass radius of either R_h = 0.47, 0.80, or 1.75 pc. As described in the first paper of the series (Arca Sedda et al., subm., hereafter paper AS-I), this choice is compatible with observations of several Galactic young massive clusters and produces cluster models that broadly match the observed masses and half-mass radii of dense clusters in the Magellanic Clouds (see Figure 2 in paper AS-I). For all models we adopt a binary fraction f_b=0.2[Note that the binary fraction is defined as f_b = n_b/(n_s+n_b), where n_b is the number of binaries. This implies that the fraction of stars initially in binary systems is f_2b = 2f_b/(1+f_b)= 0.10-0.33, with f_b=0.05, 0.2.], defined as the number of binaries normalised to the sum of the number of single stars and binary pairs. For models with R_h = 2.2 pc, we run an additional series of models in which we adopt f_b = 0.05 and N = (120 - 300 - 1,000)× 10^3. All clusters have the same metallicity, Z = 0.0005, a value consistent with the metallicity of several globular clusters in the Milky Way that may host a substantial population of BHs <cit.>.
The reduced computational cost of modelling a smaller number of binaries permitted us to increase the total number of stars to one million, which is the largest number of stars and binaries ever simulated for realistic star cluster models with a direct N-body code <cit.>.
All clusters have been initialised with the code of <cit.>, adopting a <cit.> initial mass function limited between 0.08 and 150. Binary eccentricities are drawn from a thermal distribution, whilst semimajor axes follow a distribution flat in the logarithm, limited between the sum of the stellar radii and 50 AU <cit.>. Binary components are paired according to a uniform mass-ratio distribution if their mass exceeds m_*>5, whilst lighter stars are paired randomly <cit.>.
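As a simple illustration of these distributional choices, the Python sketch below draws binary orbital elements from a thermal eccentricity distribution (f(e) = 2e, so e = √U by inverse-transform sampling) and a log-flat semimajor-axis distribution; the lower limit a_min is only a placeholder for the pair-dependent sum of the stellar radii, and 0.02 AU is an arbitrary stand-in.

import numpy as np

rng = np.random.default_rng(42)

def sample_binary_orbits(n, a_min_au=0.02, a_max_au=50.0):
    """Thermal eccentricities and log-flat semimajor axes for n binaries."""
    ecc = np.sqrt(rng.uniform(0.0, 1.0, n))                       # f(e) = 2e
    a = np.exp(rng.uniform(np.log(a_min_au), np.log(a_max_au), n))  # flat in log a
    return a, ecc

a, e = sample_binary_orbits(5)
print(a, e)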
All clusters are assumed to move on a circular orbit 13.3 kpc away from the centre of a galaxy with total mass 1.78×10^11, assuming for the galaxy a Keplerian gravitational potential. Note that the choice of parameters is such that the velocity curve at the adopted distance is similar to the one observed in the Milky Way. This implies that all clusters are initially well contained inside their Roche lobes, so the galactic field has little effect on the cluster structural evolution. In all cases but one, we ran two different realisations of each cluster to reduce the impact of statistical fluctuations.
Table <ref> summarizes the main properties of clusters. The table shows the initial parameters of the clusters, the simulated time T_ sim, the number of merging compact objects occurring inside the cluster or after their ejection, the absolute maximum mass attained by BHs and the maximum BH mass at the end of the simulation, the number of BHs with a mass above 30 or 40. For each set of initial conditions, we provide numbers for each independent realisation.
The simulations have been performed with the Nbody6++GPU code, a state-of-the-art direct N-body integrator that exploits GPU-accelerated high-performance supercomputing <cit.>. The current version of the code follows in the footsteps of a 50-year-old tradition initiated by Sverre Aarseth <cit.>.
The code exploits a 4th-order Hermite integrator with individual block time steps <cit.> and implements a dedicated treatment of close encounters and few-body dynamics based on the Kustaanheimo-Stiefel (KS) regularisation <cit.>, the Ahmad-Cohen (AC) neighbour scheme <cit.>, and algorithmic chain regularisation <cit.>, which permits resolving the evolution of binaries with periods 10^-10 times smaller than the typical dynamical timescales of star clusters.
Recently, a series of improvements have been introduced in the code to treat the formation and merger of relativistic binaries <cit.> and quantify the fraction of stellar matter that can be fed to a stellar BH in binary systems or star-BH collisions <cit.>.
Stars in DRAGON-II clusters are evolved self-consistently from the zero-age main sequence through the stellar evolution code <cit.>, conveniently updated to feature state-of-the-art recipes for the evolution of massive stars, the mass spectrum and natal kicks of BHs and NSs, and the physics of (P)PISNe <cit.>. In this work, we use the so-called level-B of stellar evolution <cit.>.
After a series of major upgrades described in recent papers <cit.>, the code currently implements multiple choices for the distributions of BH natal spins and numerical-relativity fitting formulae to calculate the final mass and spin of merger remnants, based on <cit.>, as well as the relativistic recoil imparted onto them by asymmetric GW emission, based on <cit.>.
Although the code implements the GW recoil in a self-consistent way, the amplitude of the recoil depends primarily on the merging masses and on the spin amplitudes and orientations, making the process highly stochastic. Given the relatively small number of simulations in our sample, we decided to explore the role of post-merger kicks as follows.
Firstly, we run all simulations assuming zero GW recoil.
Secondly, we calculate the typical GW recoil experienced by merger products in clusters and infer the corresponding retention probability in post-process, following an approach widely used in the literature.
Thirdly, in case of a simulation featuring multiple generation mergers, we re-run the simulation shortly before the n-th merger with the GW kicks enabled to verify if, upon retention, the BH undergoes an n+1-th generation merger.
The aims of this simplified scheme are manifold. On the one hand, it permits us to verify whether multiple-generation mergers can occur in the absence of relativistic effects. On the other hand, it permits us to assess the impact of Newtonian and general relativistic dynamics on the formation and retention of IMBHs. Furthermore, this multi-stepped procedure helps us optimise the available computational resources and maximise the scientific output of the simulations.
§ INTERMEDIATE-MASS AND UPPER-MASS GAP BLACK HOLES FORMATION IN MASSIVE DENSE CLUSTERS
Out of 19 simulated clusters, we find 8 IMBHs with a mass M_ IMBH = (107-350), corresponding to a formation probability of P_ IMBH∼ 42±15%. Despite the small statistics, we note a moderate dependence on the binary fraction and the cluster compactness. In fact, we find an IMBH formation fraction of f_ IMBH = 0.17, 0.33, 0.75, 0.67 going from f_b=0.05 to f_b=0.2 and from R_h = 1.75, 0.8, 0.47 pc. Comparing different models makes evident the importance of binaries and cluster compactness in determining IMBH seeding.
The formation history and main properties of all IMBHs in simulations are described in detail in Appendix <ref>.
Aside from IMBHs, around N_ gap≃ 10^2 upper mass-gap BHs form within the simulation time, corresponding to a formation efficiency of
η_ gap = N_ gap/M_ sim = 3.44 × 10^-5 M_⊙^-1,
where M_ sim = 3.65× 10^6 M_⊙ is the total simulated mass.
The formation of IMBHs and upper mass-gap BHs via stellar mergers, accretion of stellar material onto a stellar BH, BH-BH mergers, or a combination of these channels intrinsically depends on the host cluster properties. The prevalence of one mechanism over another is linked to the initial cluster structure, which determines the typical timescales of dynamical processes. The earliest process that regulates the evolution of a star cluster with a broad mass spectrum is mass segregation, by which the most massive stars sink toward the cluster centre and start dominating the dynamics of the inner core <cit.>. The mass-segregation timescale of heavy stars with maximum mass m_ max can be expressed as <cit.>
T_ seg∼ 0.138 N ⟨ m_* ⟩/[m_ max ln(0.11 M_cl/m_ max)] × (R_h^3/(G M_cl))^1/2.
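As a rough numerical check of this expression (a sketch with illustrative parameter values, not the entries of Table <ref>; in particular the mean stellar mass ⟨ m_* ⟩≈ 0.59 implied by the adopted IMF is an assumption made here):

import numpy as np

G = 4.49e-3  # gravitational constant in pc^3 Msun^-1 Myr^-2

def t_seg_myr(n_stars, mean_m, m_max, r_h_pc):
    """Mass-segregation time (Myr) of stars of mass m_max, following
    the expression quoted above."""
    m_cl = n_stars * mean_m                       # total cluster mass (Msun)
    coulomb_log = np.log(0.11 * m_cl / m_max)
    return (0.138 * n_stars * mean_m / (m_max * coulomb_log)
            * np.sqrt(r_h_pc**3 / (G * m_cl)))

# Illustrative values close to the most compact configuration: ~0.3 Myr
print(t_seg_myr(n_stars=120_000, mean_m=0.59, m_max=150.0, r_h_pc=0.47))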
If the mass-segregation time is shorter than the lifetime of the most massive stars, they will sink to the centre before turning into compact objects, and their interactions can more easily trigger stellar collisions or close massive star-BH interactions.
As summarised in Table <ref>, DRAGON-II clusters have typical mass-segregation times T_ seg=0.4-3.4 Myr, and thus represent ideal laboratories to study the impact of stellar mergers and strong interactions on the early evolution of star clusters.
In the following section, we describe the impact of stellar collisions, star-BH collisions and mergers, and compact object mergers on the formation of IMBHs and mass-gap BHs.
§.§ Formation channels and formation times
Despite the relatively small database, our models support the formation of IMBHs via all three main channels, complementing previous works <cit.>.
To provide the reader with a clearer idea about how IMBHs form in clusters, we provide below two examples extracted from our simulations.
In the first example, an IMBH with final mass m_ IMBH = 350 forms in a cluster with N=120k stars, half-mass radius R_ = 0.47pc, and binary fraction f_b=0.2. The IMBH formation sequence is sketched in Figure <ref>.
Initially, a primordial binary with component masses m_p1,p2 = (132 + 99) undergoes a series of strong interactions with a single MS star with mass m_s = 133 within the first Myr of cluster evolution. The triple formed this way undergoes both a phase of resonant interactions, with an exchange between the binary secondary and the third star, and a phase of hierarchical evolution, until the third body and the companion merge, leaving behind a new binary with component masses m_p1,ps = (132+231), eccentricity e ∼ 0.001, and semimajor axis a ≃ 225 R_⊙. After 1.8 Myr, the binary captures a massive companion with mass m_3 = 115 that induces the collision of the two massive stars, eventually leaving behind a VMS with mass m_ VMS = 360, which forms a binary with m_3. The two binary components merge during the Hertzsprung-gap (HG) phase of the primary, leading to the formation of a VMS with total mass m_ VMS = 365. After capturing a small MS star (∼ 0.7) via a hyperbolic collision during the core He burning phase, the VMS collapses to a BH with final mass m_ IMBH,1 = 288 over a total time of T_ sim = 2.5 Myr. Within the subsequent 4 Myr, the newborn IMBH collides with another massive MS star with mass m_ MS = 122, accreting a fraction f_c = 0.5 of its total mass and reaching a final IMBH mass of m_ IMBH≃ 350. This case represents a clear example of how different formation channels, in this case stellar and star-BH mergers, contribute to IMBH seeding and growth.
In the second example, instead, an IMBH with mass m_ IMBH = 191 forms from the coalescence of two nearly equal-mass BHs. As sketched in Figure <ref>, the two BHs, with masses ∼ 95, form from the evolution of two initially independent primordial binaries. After formation, the two BHs are part of different binaries and undergo many binary-single and binary-binary interactions before finding each other and merging after ∼ 10^2 Myr.
§.§.§ Stellar mergers
In models we find in total 104 stellar mergers with a merger remnant mass heavier than m_ VMS>90, with 75% of them involving primordial binaries. The typical mass of the merger product is a star with mass in the range m_ VMS = 100-350. In some cases, the same star undergoes 3-4 merging events with stars in different evolutionary phases. Figure <ref> shows the post-merger mass as a function of the time at which the merger occurs for all simulations. The plot shows exclusively star-star coalescences, thus it excludes both star-BH and BH-BH merging events. Around 48% of stellar mergers produce a massive MS star, 32% produce a star in the HG, and a core-He burning star in the remaining 22% of cases.
The formation of a VMS (m_ VMS> 150) eventually leads to either no remnant owing to PISN (∼ 23 cases), a remnant with mass m_ BH = 40.5 owing to PPISN (∼ 64 cases), or an IMBH (2 cases).
Comparing models with the same R_h and different binary fractions, we find that models with f_b=0.2 host 2-5 times more mergers than those with f_b=0.05, reflecting the fact that most of the mergers involve primordial binaries.
Notably, the two IMBHs form in the densest simulated clusters, i.e. those with R_h = 0.47 pc and N=(1.2-3)× 10^5, which are also those with the shortest mass-segregation time (T_ seg∼ 0.3-0.4 Myr), much shorter than the typical BH formation time (>2 Myr).
§.§.§ Star-black hole collisions
Among all simulations, we find 454 star-BH merger events, the vast majority of which (72%) lead to the formation of BHs with a final mass m_ BH<40.5, thus they will remain mixed with the population of "ordinary" BHs that never experienced stellar accretion episodes. The remaining mergers leave behind, instead, BHs with a mass falling in the upper-mass gap. More in detail, around 18% of these events trigger the formation of a final BH with a mass in the range 40.5 < m_ BH/ < 60, 6% form BHs with masses in the 60 < m_ BH/ < 70 mass range, and the remaining ∼ 4% produces BHs heavier than m_ BH > 70. Stars involved in a star-BH merger are in different evolutionary stages: HG (40.1%), core He burning (45.2%), MS (5.5%), early/late asymptotic giant branch (AGB, 9%), giant branch (GB, 1.1%), and HG naked He star (0.2%).
Note that we have two different types of star-BH accretion events: one purely dynamical and one induced by stellar evolution. In the purely dynamical case, there are two possibilities: either the BH captures a MS star in an orbit such that the star fills its Roche lobe, or the orbit is sufficiently tight and eccentric that the BH crashes onto the star. In either case, the BH accretes a fraction f_c of the stellar mass. In the stellar evolution-driven case, instead, the star fills its Roche lobe, mainly when inflating during the HG or the core He burning phase. Even in this case, it is assumed that the BH accretes a fraction f_c of the stellar mass. Therefore, the stellar type is likely the parameter that best distinguishes the two types of star-BH accretion/merger events.
Figure <ref> shows the mass distributions of the merging stars and of the BHs before/after the merger, as well as the stellar types of the stars involved in the process.
Two events contribute to IMBH seeding or growth. One of them involves a m_ BH=40.5 BH that accretes a core He burning star with mass m_ VMS = 133, previously formed via a complex sequence of stellar mergers triggered by binary-binary and binary-single interactions. In this case, the IMBH mass is m_ IMBH = 107. The second event, which we do not show in the histogram for the sake of readability, involves an IMBH with mass m_ IMBH = 288 and a MS star with mass m_* ≃ 122. None of the other interactions leads to the formation of an IMBH, partly owing to our choice of setting the accretion factor to f_c=0.5. Adopting f_c = 1 would have led to an additional population of ∼ 20 IMBHs with masses at formation in the range m_ IMBH = 100-160.
§.§.§ Black hole mergers
The remaining 5 IMBHs in clusters form via BH-BH mergers, all involving upper mass-gap BHs. This highlights the fundamental impact of star-BH accretion events, because they are the main channel through which mass-gap BHs form. Interestingly, all the BH mergers involved in the IMBH buildup have progenitor stars originally in a primordial binary, thus highlighting the crucial role of binary dynamics in the IMBH formation process.
At formation, these 5 IMBHs have masses in the range m_ IMBH≃(140-232) and, in the case of negligible GW recoil, further grow up to m_ IMBH≃(160-260) via one or two repeated (hierarchical) merger events after being dynamically ejected from the cluster. In the case of zero GW recoil, among all IMBHs in our models only one is ejected from the cluster as a single object. All the others are ejected with a companion and merge within a Hubble time. In two cases, the IMBH undergoes two or three mergers inside the cluster and forms a binary with another BH that is eventually ejected from the cluster, merging in the field within a Hubble time.
§.§.§ The link between formation channels, formation times, and the intermediate-mass black hole mass
Although our sample is rather small, the fact that IMBHs form via all the proposed formation channels can help provide a possible answer to the intriguing question
"Is there a link between the IMBH seeding process and the environment in which this happens?"
Figure <ref> shows the IMBH mass as a function of time for the different formation channels, from the first time the IMBH mass exceeds 10^2 until the first BH merger event develops. In other words, we exclude from the plot IMBHs beyond the second generation (2g), because GW recoil drastically reduces the probability of multiple-generation mergers, as discussed in Section <ref>.
From the plot, it seems that there is a striking relation between the structure of the host cluster and the IMBH formation process. The densest clusters (ρ_ cl > 3× 10^5 pc^-3) favour the formation of IMBHs via stellar collisions on the short timescales (<10 Myr) and nurture the most massive IMBHs in our sample. IMBHs in these clusters further grow via accretion of stellar material and coalescence with stellar BHs on timescales <100 Myr <cit.>.
In lower density clusters, instead, IMBHs form on longer timescales (10-300 Myr) via star-BH accretion and BBH mergers. In such case, Figure <ref> clearly shows a trend, namely that the looser the cluster the longer the formation time and the heavier the IMBH seed mass.
This difference may be related to the core-collapse process, a mechanism driven by mass segregation and relaxation through which the cluster core contracts and its density increases up to a maximum, i.e. core collapse. The time at which core collapse occurs is generally a fraction of the relaxation time, t_ cc = 0.2 T_ rlx <cit.>. We find that in clusters with an initial density >3× 10^5 pc^-3 core collapse occurs before stellar BHs form or massive stars undergo PISN and PPISN, i.e. before t_ BH∼ 4 Myr. This supports the idea that core collapse facilitates the collision of massive stars before they collapse to BHs or undergo PISN.
In the case of clusters less dense than 3× 10^5 pc^-3, we also note that the smaller the density, the larger the IMBH mass. This may be due to the fact that in low-density clusters, where interactions are less energetic and less frequent, the ejection of the most massive BHs via the so-called BH-burning process <cit.> is less effective. As a consequence, the heaviest BHs in the loosest clusters in our sample have more time to remain in the cluster and pair up, as in the case of model IBH_Rh1.75f20N120k.
§ DISCUSSION
§.§ Newtonian versus relativistic dynamics: intermediate-mass black hole retention and hierarchical mergers frequency
In this work, we want to assess the competing role of Newtonian and relativistic dynamics in determining BH retention and IMBH seeding and growth, thus we adopt the following multi-stepped procedure: a) run all cluster simulations assuming zero GW recoil to verify the possible development of multiple mergers and quantify the impact of Newtonian dynamics on the retention of BH merger remnants, b) quantify the retention probability of remnant BHs, c) re-run models in which BHs undergo repeated mergers with GW recoil enabled.
§.§.§ Newtonian dynamics
Regardless of the formation scenario, an IMBH seed that is retained in its parent cluster upon formation will likely undergo mass segregation and quickly settle in the cluster centre, possibly capturing a companion <cit.>. The newly formed binary will undergo frequent interactions with surrounding cluster members of mass m_p, at a rate
ṅ_2-1∼ n σπ a^2(1-e)^2 [1+2G(m_1+m_2+m_p)/(a(1-e)σ^2)],
where n is the cluster number density, σ the velocity dispersion, m_1,2 the masses of the binary components, and a the binary semimajor axis. If the binary is hard, i.e. a ≪ 2G(m_1+m_2)/σ^2, or highly eccentric, the timescale for these interactions is roughly given by
t_2-1 ∼ 6 Myr (n/10^5 pc^-3)^-1(σ/20 km s^-1)((m_1+m_2+m_p)/240 M_⊙)^-1(a/1 AU)^-1 (1-e),
therefore much shorter than the typical cluster lifetime. Repeated binary-single interactions can have an important effect on the binary evolution: on the one hand, they can extract orbital energy and harden the binary <cit.>, but, on the other hand, they can become violent enough to eject the binary from the cluster, halting the IMBH growth <cit.>.
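As a rough numerical check (a sketch using the fiducial values of the scaling relation above, not simulation data), the encounter rate can be inverted as follows:

import numpy as np

G_PC = 4.301e-3        # G in pc (km/s)^2 / Msun
KMS_TO_PC_MYR = 1.023  # 1 km/s expressed in pc/Myr
AU_TO_PC = 4.848e-6

def encounter_time_myr(n_pc3, sigma_kms, m_tot_msun, a_au, e=0.0):
    """Inverse of the binary-single encounter rate quoted above,
    including the gravitational-focusing term."""
    r_p = a_au * AU_TO_PC * (1.0 - e)            # pericentre distance in pc
    focusing = 1.0 + 2.0 * G_PC * m_tot_msun / (r_p * sigma_kms**2)
    rate = n_pc3 * sigma_kms * KMS_TO_PC_MYR * np.pi * r_p**2 * focusing
    return 1.0 / rate

# Fiducial values of the scaling relation: ~6 Myr
print(encounter_time_myr(n_pc3=1e5, sigma_kms=20.0, m_tot_msun=240.0, a_au=1.0))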
The typical escape velocity of clusters described by a <cit.> model can be conveniently expressed as <cit.>
v_ esc = 2(log(1/c)/π)^1/2(1-c)^-1/2(GM/R_h)^1/2,
where c = R_c/R_h is the ratio between the core and half-mass radii of the cluster. In DRAGON-II models, we find that this parameter attains values c=0.2± 0.1 over the whole simulation time, regardless of the initial conditions. Therefore, the escape velocity can be rewritten as
v_ esc = (34± 3) km/s (M/10^5 M_⊙)^1/2(R_h/1 pc)^-1/2.
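This normalisation can be reproduced directly from the previous expression (a sketch; log is interpreted here as the natural logarithm, which recovers the quoted 34 km/s):

import numpy as np

G_PC = 4.301e-3   # G in pc (km/s)^2 / Msun

def v_esc_kms(mass_msun, r_h_pc, c=0.2):
    """Escape velocity of a cluster with core-to-half-mass radius ratio c,
    following the expression quoted above."""
    return (2.0 * np.sqrt(np.log(1.0 / c) / np.pi)
            * (1.0 - c) ** -0.5
            * np.sqrt(G_PC * mass_msun / r_h_pc))

print(v_esc_kms(1e5, 1.0))   # ~33-34 km/s for c = 0.2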
In all clusters the escape velocity remains below v_ esc < 50 km/s, with the loosest and smallest clusters attaining values in the 8-20 km/s range. This relatively small escape velocity has strong implications for the IMBH evolution. In fact, even when GW recoil is not taken into account, all IMBHs are ejected from the parent cluster after a violent interaction with a perturber.
A clear example is a simulation with N = 300k, R_h = 0.47 pc, and f_b = 0.2, in which a binary with mass m_1 + m_2 = (240 + 38) undergoes a strong scattering with a BH of mass m_p = 44, which reduces the binary semimajor axis from a = 0.35 AU to a_ fin = 0.24 AU and imparts to the binary a recoil with amplitude v_ rec = 85 km s^-1. From a theoretical standpoint, a binary undergoing a close interaction with a perturber of mass m_p and consequently shrinking from a to a_ fin receives a kick
<cit.>
v_ rec = [ G m_1 m_2/(a_ fin(m_1+m_2)) × m_p/(m_1+m_2+m_p) × (1-a_ fin/a) ]^1/2
= 37.1 km s^-1 (μ/26 M_⊙)^1/2(q_p/0.12)^1/2(a_ fin/1 AU)^-1/2((1 - x_ fin)/0.5)^1/2,
where μ = m_1m_2/(m_1+m_2), q_p = m_p/(m_1+m_2+m_p), and x_ fin = a_ fin/a. This equation returns a value v_ rec≃ 72 km s^-1 for the aforementioned example.
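A quick numerical sketch of this kick formula, applied to the scattering example quoted above:

import numpy as np

G_AU = 887.0   # G in AU (km/s)^2 / Msun

def binary_recoil_kms(m1, m2, m_p, a_au, a_fin_au):
    """Recoil speed of a binary hardened from a to a_fin by a single
    perturber of mass m_p (masses in Msun, separations in AU)."""
    mu = m1 * m2 / (m1 + m2)            # reduced mass
    q_p = m_p / (m1 + m2 + m_p)
    v2 = G_AU * mu * q_p * (1.0 - a_fin_au / a_au) / a_fin_au
    return np.sqrt(v2)

# (240 + 38) Msun binary, 44 Msun perturber, 0.35 AU -> 0.24 AU: ~72 km/s
print(binary_recoil_kms(240.0, 38.0, 44.0, 0.35, 0.24))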
This implies that as long as at least one heavy (m_p > 10) perturber remains in the cluster, Newtonian dynamics, in particular close binary-single scatterings, can represent a serious threat to the IMBH retention.
Our analysis highlights the extreme importance of Newtonian dynamics in determining the ejection of BHs from the parent cluster.
§.§.§ The impact of black hole natal spins and relativistic recoil on the properties of intermediate-mass black holes
In order to determine the possible properties of IMBHs and their retention probability in DRAGON-II models, we implement the following simple prescription to take into account the impact of spins:
* If a stellar BH involved in the IMBH build-up formed from a single star or from a “non-interacting” binary, we assign a spin of χ_ BH = 0.01 <cit.>.
* In the two cases in which an IMBH forms from the collapse of a VMS assembled via stellar mergers, we assign an initial spin of 0.5. This choice is motivated by the fact that the particularly complex formation processes leading to these IMBHs make their natal spin practically unpredictable. We note that this choice has no effect on our results, though, because both IMBHs accrete material from a stellar companion and we assume that this spins up the IMBH, as detailed in the following point.
* If the IMBH feeds on a stellar companion, or if its progenitors are upper-mass gap BHs, i.e. they underwent mass accretion at some point, we assign a spin drawn from a flat distribution in the range χ_ BH = 0.8-1 <cit.>.
* If the IMBH progenitor is a BH formed in a primordial binary, we assign a small spin (χ_ BH = 0.01) if it is the firstborn or a spin in the range χ_ BH = 0.1-1 <cit.> otherwise.
* If the IMBH formed from a BBH merger, the IMBH spin and mass are calculated according to <cit.> fitting formulae <cit.>.
Note that this model is applied in post-process to the simulation data.
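A schematic post-processing implementation of these rules is sketched below; the boolean history flags are hypothetical bookkeeping fields introduced for illustration, not actual simulation output names.

import numpy as np

rng = np.random.default_rng()

def natal_spin(bh):
    """Assign a spin following the rules listed above; `bh` is a dict of
    hypothetical history flags (illustrative bookkeeping only)."""
    if bh.get("from_vms_collapse"):          # VMS assembled via stellar mergers
        return 0.5
    if bh.get("accreted_stellar_material"):  # fed on a companion or mass-gap BH
        return rng.uniform(0.8, 1.0)
    if bh.get("in_primordial_binary"):
        return 0.01 if bh.get("firstborn") else rng.uniform(0.1, 1.0)
    return 0.01                              # single star or non-interacting binary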
To keep track of the IMBH-BH merging history, we label an IMBH as first generation (1g) if it did not undergo any merger with another compact object. IMBHs formed out of VMS collapse or star-BH accretion are considered 1g. Second generation (2g) and higher generation IMBHs are those that underwent multiple mergers with other compact objects. In models, all merging companions are stellar BHs.
Figure <ref> shows the masses and spins of IMBHs assuming zero GW recoil. It appears evident that, under our assumptions, IMBHs in DRAGON-II clusters generally form with a high spin (χ_ IMBH > 0.6), unless they form from the collapse of a VMS. Even in that case, the accretion of matter, which likely spins up the IMBH, occurs on such a short timescale (t≲ 8 Myr) that observing them as low-spin objects is rather unlikely.
In the case of IMBHs forming via multiple BH mergers, note that the IMBH spin decreases with increasing merger generation <cit.>.
Table <ref> summarizes the main properties of IMBHs in terms of generation, masses, spins, and recoil velocity at the 95% confidence level. These quantities are calculated by drawing, for each merging event, 10,000 realisations of the spin amplitudes of the merging components and assuming an isotropic distribution for the spin directions.
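This Monte Carlo estimate can be sketched as follows; here remnant_kick is only a placeholder for the adopted numerical-relativity fitting formulae, whose explicit form we do not reproduce.

import numpy as np

rng = np.random.default_rng()

def isotropic_unit_vectors(n):
    """Draw n unit vectors isotropically distributed on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def kick_interval(m1, m2, chi1_draw, chi2_draw, remnant_kick, n=10_000):
    """95% interval of the GW recoil for one merger; chi*_draw return arrays of
    n spin magnitudes, remnant_kick(m1, m2, s1_vec, s2_vec) stands in for the
    adopted numerical-relativity fitting formulae."""
    s1 = chi1_draw(n)[:, None] * isotropic_unit_vectors(n)
    s2 = chi2_draw(n)[:, None] * isotropic_unit_vectors(n)
    kicks = np.array([remnant_kick(m1, m2, a, b) for a, b in zip(s1, s2)])
    return np.percentile(kicks, [2.5, 97.5])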
Looking at the Table, we see that GW recoil has no effect on the IMBH formation probability, because all IMBHs in DRAGON-II clusters form either via stellar collapse or have a 1g BH progenitor. Nonetheless, GW recoil crucially affects second and higher generation IMBHs, which typically receive a kick, v_ GW = (200 - 800) km s^-1, much larger than the escape velocity of the parent cluster, typically v_ esc < 50 km s^-1.
Therefore, the inclusion of GW recoil affects 7 out of 8 IMBHs in our simulations, avoiding both: a) the formation of IMBH-BH binaries that merge after dynamical ejection, a process involving 5 IMBHs in our sample, and b) the development of multiple BH mergers inside the cluster (2 IMBHs).
The remaining IMBH is ejected from the cluster as a single object after a strong resonant interaction with two other fairly massive (>30) BHs.
As a consequence, we find that the number of merging events involving an IMBH decreases from 9 in the no-recoil case to just 2, although this represents the lowest possible value. The possible detection of GWs emitted by IMBH-BH binaries with future detectors, especially those operating in the deci-Hz frequency band, could help shed light on the IMBH formation efficiency and retention probability <cit.>.
§.§.§ Simulations implementing a self-consistent treatment for gravitational recoil
The post-process treatment applied to simulation data provides an effective way to place constraints on the IMBH retention probability without the need to explore the wide associated parameter space. Nonetheless, a fully self-consistent simulation implementing also GW recoils would provide useful insights on, e.g. the impact of the IMBH displacement onto the development of new merging events.
To probe the impact of GW recoil in a self-consistent way, we focus on the two models in which the IMBH undergoes repeated mergers, namely model IBH_Rh1.75f20N120k, which ultimately forms a 4g-IMBH, and model IBH_Rh0.47f20N300k, which instead leads to a 3g-IMBH.
Practically speaking, we restart the simulation from the snapshot immediately before the merging event and apply a kick to the merger remnant. For simplicity, rather than extracting the kick from a distribution, we assign the merger a fixed kick, as described below. Generally, we adopt a GW kick sufficiently small to ensure the retention of the IMBH after the merger. This choice permits us to investigate whether the IMBH is retained in the cluster, whether it grows further, or whether it is ejected anyway owing to Newtonian or relativistic effects.
*Model ID: IBH_Rh1.75f20N120k
The IMBH in this model forms from the merger of two upper mass-gap BHs with masses m_ BH1+m_ BH2 = (95.5+95.8). Therefore, the IMBH is already 2g at formation and receives a kick v_ rec > 171 km s^-1 at the 95% confidence level (see Table <ref>). For comparison, the cluster escape velocity at the time of the merger is around v_ esc = 12 km s^-1.
Adopting the spin model described in Section <ref>, based on stellar evolution models, we find that the IMBH has only a tiny probability (P_20<0.2%) of receiving a kick v_ GW < 20 km s^-1. However, if the IMBH progenitors have negligible spins for some reason, for example if the progenitor stars are slowly rotating and their angular momentum transport is essentially driven by meridional currents <cit.>, the probability for v_ GW<20 (5) km s^-1 rises up to 84% (21%), significantly increasing the IMBH retention probability.
Therefore, we re-run the simulation and assign to the IMBH promptly after formation a GW kick of either v_ GW = 5 (small kick) or 20 (large kick). As expected, in the large kick model, the kick of v_ GW = 20 exceeds the cluster escape velocity and the IMBH promptly leaves the cluster.
In the small-kick model, where v_ GW = 5 km s^-1, the 2g-IMBH is retained in the cluster and sinks back to the cluster centre where, after a long series of interactions with other stellar BHs, it captures a BH with mass m_ BH=28 and is ejected from the cluster with a velocity of just 15.3 km s^-1. The ejected IMBH-BH binary has an eccentricity e=0.57 and a period of P=190 days, and a corresponding merger time t_ GW∼ 10^3 Hubble times.
For the sake of comparison, in the zero GW recoil model, the IMBH pairs with a BH with mass m_ BH = 40.5 and is ejected from the cluster, merging within a Hubble time (see Appendix <ref>).
*Model ID: IBH_Rh0.47f20N300k
Let us now consider the other model, named IBH_Rh0.47f20N300k. Since the IMBH in this model forms via stellar collisions, its mass at birth is fairly large, m_ IMBH = 217. After only 17 Myr, when the cluster escape velocity is around v_ esc = 46.5 km s^-1, this 1g-IMBH merges with an upper mass-gap BH with mass m_ BH = 51.7. The resulting 2g-IMBH receives a GW kick with amplitude v_ kick > 99 km s^-1 at the 95% confidence level. The probability of obtaining a kick of ≃ 50 km s^-1 is of the order of ∼ 0.1%, regardless of the spin distribution choice.
Therefore, we re-run the simulation shortly before the merger event and assign to the merger remnant either a small (v_ rec = 20 km s^-1) or a large (v_ rec = 100 km s^-1) recoil kick. In the case of v_ rec=100 km s^-1, the merger remnant promptly leaves the cluster, as expected.
In the case of v_ rec=20 km s^-1, instead, the 2g-IMBH remains in the cluster core and undergoes a series of resonant interactions with two BHs, which drive the IMBH to merge after just 25.5 Myr with an upper mass-gap BH of mass m_ BH,2 = 63. The 3g-IMBH, with a mass m_ 3g≃ 300, receives a kick v_ GW > 90 km s^-1 regardless of the amplitude and direction of the progenitors' spins, hence it leaves the cluster promptly after the merging event.
The impact of relativistic effects on the chaotic nature of N-body dynamics is apparent in this case: the displacement caused by the GW recoil favours the onset of the three-body interactions that led to the merger. For comparison, in the zero-kick model the two BHs never find each other.
§ CONCLUSION
In this work we have analysed the properties of IMBHs formed in the DRAGON-II cluster models, a suite of 19 direct N-body simulations representing star clusters initially made up of ≤ 10^6 stars, up to 33% of which are initially paired in binaries. Our main results can be summarised as follows:
* Out of 19 models, 8 IMBHs form in our clusters, following three main formation channels: a) collapse of a VMS formed via repeated stellar mergers (2 IMBHs), b) accretion of stellar material onto stellar BHs (1), c) BH-BH mergers (5). The IMBHs have typical masses in the range m_ IMBH = (100-370). Aside from IMBH seeding, the aforementioned formation channels significantly contribute to the population of BHs with masses in the upper mass-gap, for which we derive a formation efficiency of η_ gap = 3.44× 10^-5 M_⊙^-1 [Table <ref> and Figures <ref>-<ref>].
* Despite the small sample, we find a striking relation between the IMBH formation channel and the host cluster properties. Stellar mergers dominate IMBH formation in the densest clusters, operating on short timescale (10 Myr) and producing the most massive IMBHs (>200). Star-BH interactions and BBH mergers, instead, dominate IMBH formation in less dense clusters, showing that the looser the cluster the longer the IMBH formation time (10-300 Myr), and the larger the IMBH seed mass [Figure <ref>].
* When relativistic recoil is neglected, Newtonian dynamics represents a serious threat to IMBH retention and growth. In fact, all IMBHs are ejected from the cluster through strong dynamical interactions. Nonetheless, in the Newtonian scenario some IMBHs undergo multiple IMBH-BH mergers, reaching up to the fourth generation. The inclusion of GW recoil severely impacts the IMBH growth process, limiting the IMBH merger history to two generations. We implement a simple model for BH natal spins, based on stellar evolution models, to infer the IMBH masses and spins. In our fiducial model, IMBHs are characterised by masses up to 376 and relatively large spins, i.e. χ_ IMBH > 0.6. The inclusion of relativistic kicks in the simulations enables a fully self-consistent description of the IMBH merging process and reveals how hard it is for IMBHs to be retained in their parent clusters. Nonetheless, even in the unlikely case in which the IMBH receives small GW kicks and avoids ejection, our simulations confirm how chaotic and unpredictable the evolution of the post-merger IMBH can be. For example, in one simulation the inclusion of the kick favours the merger of the IMBH with a BH more massive than in the zero GW kick case [Table <ref> and Figure <ref>].
The DRAGON-II simulations represent one of the few numerical models <cit.> in which all three main channels proposed for the formation of IMBHs have been confirmed. Our analysis of the database suggests that: i) IMBHs form preferentially via collapse of stellar merger products (BBH mergers) in clusters more (less) dense than 3×10^5 pc^-3; ii) they have large spins at formation, χ_ BH > 0.6; iii) they live most of their life with a BH companion; iv) they are unlikely to grow beyond a few hundred solar masses because of the efficiency of dynamical scatterings and the impact of relativistic recoil.
§ THE EVOLUTION AND GROWTH OF IMBHS IN DRAGON-II CLUSTERS
In this section, we discuss in detail the evolutionary history of the 8 IMBHs in DRAGON-II clusters, their main properties, and their retention probability. In the following we indicate with BH1, BH2 (or with letters a, b) the IMBH progenitors, and with p1, p2 the progenitors of the IMBH progenitors, so that p1a, p2a indicate the two progenitors of the primary BH that eventually led to the IMBH.
All the main properties of the IMBHs are summarised in Table <ref>.
*IMBH No. 1: IBH_Rh1.75f5N1000k.
In one cluster model with R_h=1.75 pc, f_b=0.05, and N=10^6, the IMBH forms via the merger of two BHs with masses m_ BH,1 = 86.3 and m_ BH,2 = 58.9. The primary BH is the byproduct of a merger between a PPISN BH and a massive star in the HG phase, m_p1a+m_p2a = (40.5 + 91.7), in a primordial binary, and we assume that it spins up during its growth, assigning it a spin χ_ BH,1 > 0.8. The secondary BH, instead, forms from the merger of two stars in a primordial binary, with masses m_p1b+m_p2b = (37+82), the lighter component being a naked He MS star and the heavier a star in the HG phase. We assign the companion BH a spin χ_BH,2 = 0.01.
The resulting IMBH (2g) has a mass m_ 2g = 138.4^+1.8_-3.0 and spin χ_ 2g = 0.76^+0.11_-0.27, with the spin increasing as the mass decreases. In the simulation with GW recoil disabled, the IMBH forms a binary with a BH of mass m_ BH = 40.5 (formed from a single star) and ultimately merges after being ejected from the cluster, leading to a final IMBH (3g) with a mass m_ 3g = 174.0^+2.6_-4.6 and χ_ 3g=0.68^+0.20_-0.40. However, the GW recoil associated with the formation of the 2g-IMBH is sufficiently large (v_ GW = 150-2200 km s^-1) to make the retention of the IMBH and its further growth impossible.
*IMBH No. 2: IBH_Rh1.75f20N120k.
The second IMBH in the sample (simulation with R_h=1.75 pc, f_b=0.2, N=120,000) forms through a BH-BH merger with component masses m_ BH,1 + m_ BH,2 = (95.5+95.8). The previous evolution of these massive BHs is rather complex. The primary forms from the accretion of a MS star with mass m_ p2a= 110 onto a BH (m_ p1a=40.5) previously formed from the merger of two MS stars in a primordial binary. We thus assign the primary BH a spin χ_ BH,1=0.8-1. The secondary, instead, forms from the merger of two stars in a primordial binary during the HG phase of the heavier component. We assign the secondary BH a small spin χ_ BH,2 = 0.01. The resulting IMBH (2g) has a mass m_ 2g=181.8^+1.8_-2.7 and spin χ_ 2g = 0.72^+0.10_-0.15. When GW recoil is disabled, the IMBH undergoes a second merger with a BH of mass m_ BH,2 = 40.5 that did not experience significant mass transfer, and is thus likely characterised by a low spin. After the merger, the IMBH (3g) has a mass m_ 3g = 217.8^+2.5_-4.3 and spin χ_ 3g = 0.65^+0.20_-0.45. It forms a binary that is ejected and merges outside the cluster, leaving a 4g-IMBH with final mass m_ 4g = 253.9^+2.9_-5.9 and spin χ_ 4g = 0.56^+0.28_-0.34.
There is a probability of ∼ 0.2% for the GW recoil imparted onto the 2g-IMBH to remain below v_ GW < 20 km s^-1, i.e. small enough for the IMBH to be retained in the cluster. However, when the 3g-IMBH forms, the post-merger kick is in the range v_ GW = 35-2000 km s^-1, definitely larger than the cluster escape velocity. We discuss the results from a self-consistent simulation of the evolution of the 2g-IMBH in Section <ref>.
*IMBH No. 3: IBH_Rh1.75f20N600k.
The third IMBH forms in the model with R_h = 1.75 pc, f_b=0.2, and N=600,000 through the merger of two BHs with masses m_ BH,1=74.7 and m_ BH,2 = 68.8, both being byproducts of a stellar merger event in two primordial binaries. We assume that both BHs have negligible spins, which leads to an IMBH (2g) with a mass m_ 2g = 136.6^+1.2_-1.9 and spin χ_ 2g = 0.72^+0.08_-0.15. The post-merger recoil is sufficiently small (v_ GW = 20-45 km s^-1) for the IMBH to be retained. The IMBH eventually merges with a BH with mass m_ BH,2 = 18 (for which χ_ BH,2 = 0.01) after being ejected from the cluster. The final IMBH (3g) has a mass m_ 3g = 152.7^+1.5_-2.4 and spin χ_ 3g=0.61^+0.22_-0.36.
*IMBH No. 4: IBH_Rh0.8f20N120k.
The fourth IMBH forms in the model with R_h = 0.8 pc, f_b = 0.2, and N=120,000 from two BHs with masses m_ BH,1 = 79.8 and m_ BH,2=40.5. The primary formed from a star-BH merger in a primordial binary involving a BH (m_ p1a = 40.5) and a star in the HG phase with mass m_ p2a = 78.5. We assign a spin χ_ BH,1 > 0.8 to the primary and a small spin to the secondary, which did not undergo any significant matter accretion phase. The IMBH (2g) formed this way has a mass m_ 2g = 115.6^+1.3_-3.0 and spin χ_ 2g = 0.74^+0.15_-0.36. In the absence of GW recoil, the IMBH captures a BH with mass m_ BH,2 =39, which experienced mass transfer in a primordial binary, and finally merges outside the cluster. In this case, we assign to the stellar BH a spin in the 0-1 range, which leads to an IMBH (3g) with final mass m_ 3g=149.8^+2.0_-4.6 and χ_ 3g=0.67^+0.22_-0.35. The kick received by the 2g-IMBH, however, is large enough (v_ GW > 100 km s^-1) to eject the IMBH before the binary can form.
*IMBH No. 5: IBH_Rh0.8f20N120k.
The fifth IMBH, which forms in the model with R_h=0.8 pc, f_b=0.2, and N=120,000, is also the byproduct of a BBH merger. The primary, with a mass m_ BH,1=80.7, forms from the merger of two MS stars, and we assume negligible spin. The companion, with a mass m_ BH,2=51.5, forms from mass transfer in a primordial binary, thus we assume that its spin is distributed in the χ_ BH,2 = 0.8-1 range. The resulting IMBH has a mass m_ 2g = 126.4^+0.7_-1.0 and spin χ_ 2g = 0.67^+0.06_-0.08. In the case of no GW recoil, the IMBH captures a BH with mass m_ BH = 30 formed from a single star (thus χ_ BH = 0.01), and the resulting binary is eventually ejected from the cluster, ultimately merging outside the cluster and leaving behind an IMBH with mass m_ 3g = 153.0^+1.4_-2.1 and spin χ_ 3g = 0.62^+0.19_-0.42. Even in this case, though, the GW kick imparted onto the 2g-IMBH (v_ GW > 60 km s^-1) is larger than the cluster escape velocity.
*IMBH No. 6: IBH_Rh0.8f20N300k.
The sixth IMBH forms in a cluster with R_h=0.8 pc, f_b=0.2, and N=300,000, from the coalescence of a PPISN BH (m_ BH = 40.5, negligible spin) and a massive star in the HG phase (m_ HG=133). The IMBH, with mass m_ 1g = 107, likely spins up during the interaction with its stellar companion. The IMBH is eventually ejected as a single object as a consequence of a strong resonant scattering involving two BHs with masses m_ BH,1 = 35.2 and m_ BH,2 = 67.7.
*IMBH No. 7: IBH_Rh0.47f20N120k.
The seventh, and most massive, IMBH forms in one of the most compact clusters (R_h=0.47 pc, f_b=0.2, and N=120,000). A complex series of stellar mergers triggers the IMBH seeding, leading to an IMBH with mass m_ 1g = 288 that eventually collides with a massive MS star with mass m_ MS = 122. The resulting IMBH, which can be considered halfway between first and second generation, has a mass m_ 1g* = 350 and likely a large spin, χ_ 1g*∼ 0.8-1, owing to the mass accretion process. The IMBH captures a stellar BH with mass m_ BH,2 = 29 formed from a single star, for which we assume negligible spin. The IMBH-BH binary is eventually ejected in a strong binary-single interaction and merges outside the cluster, leading to a 2g-IMBH with mass m_ 2g = 376.5^+0.8_-3.7 and spin χ_ 2g = 0.79^+0.17_-0.27.
*IMBH No. 8: IBH_Rh0.47f20N300k.
The last IMBH forms in the densest cluster (R_h=0.47 pc, f_b=0.2, and N=300,000). Initially, an IMBH seed with mass m_ 1g = 189 forms via subsequent mergers of massive stars. It later collides with a MS star with mass m_ MS = 51.7 and, shortly after, with two low-mass stars, leaving behind an IMBH (1g*) with mass m_ 1g* = 217 and a high spin triggered by mass accretion. The IMBH undergoes a merger with a low-spin BH with mass m_ BH = 27, forming a 2g-IMBH with a mass m_ 2g = 241.4^+0.8_-3.3 and spin χ_ 2g = 0.77^+0.18_-0.37.
In the absence of GW recoil, the 2g-IMBH further merges with a low-spin BH (mass m_ BH = 38) after being ejected from the cluster, leading to a 3g-IMBH characterised by m_ 3g = 275.3^+1.8_-5.4 and spin χ_ 3g = 0.63^+0.28_-0.39. When GW recoil is taken into account, the 2g-IMBH receives a kick v_ GW > 40 km s^-1, larger than the cluster escape velocity. We explore the retention of this IMBH in more detail in Section <ref>.
§ ACKNOWLEDGEMENTS
The authors thank the referee for their constructive report and feedback. The authors warmly thank Agostino Leveque for their help and assistance in using their implementation of the code, and Giuliano Iorio, Sara Rastello, and Michela Mapelli for useful comments and discussion.
This work benefited of the support from the Volkswagen Foundation Trilateral Partnership through project No. 97778 “Dynamical Mechanisms of Accretion in Galactic Nuclei” and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 138713538 – SFB 881 “The Milky Way System”), and by the COST Action CA16104 “GWverse”. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC).
MAS acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101025436 (project GRACE-BH, PI: Manuel Arca Sedda).
AWHK is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD).
The work of PB was supported by the Volkswagen Foundation under the special stipend No. 9B870.
PB acknowledges the support within the grant No. AP14869395 of the Science Committee of the Ministry of Science and Higher Education of Kazakhstan ("Triune model of Galactic center dynamical evolution on cosmological time scale").
The work of PB was supported under the special program of the NRF of Ukraine Leading and Young Scientists Research Support - "Astrophysical Relativistic Galactic Objects (ARGO): life cycle of active nucleus", No. 2020.02/0346.
RS acknowledges support by Yunnan Academician Workstation of Wang Jingxiu (No. 202005AF150025) and thanks Max Planck Institute for Astrophysics (Thorsten Naab) for hospitality during many visits.
MG was partially supported by the Polish National Science Center (NCN) through the grant No. 2021/41/B/ST9/01191.
FPR acknowledges the support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930).
§ DATA AVAILABILITY
The data from the runs of these simulations and their initial models
will be made available upon reasonable request by the corresponding author.
The Nbody6++GPU code is publicly available[<https://github.com/nbody6ppgpu/Nbody6PPGPU-beijing>]. The McLuster version used in this work will soon be available. A similar version is described in <cit.>.
|
http://arxiv.org/abs/2307.05829v1 | 20230711224210 | Distance-Preserving Graph Compression Techniques | [
"Amirali Madani",
"Anil Maheshwari"
] | cs.DS | [
"cs.DS",
"cs.DM"
] |
We study the problem of distance-preserving graph compression
for weighted paths and trees. The problem entails a weighted
graph G = (V, E) with non-negative weights, and a subset of edges
E^'⊂ E which needs to be removed from G (with
their endpoints merged as a supernode). The goal is to
redistribute the weights of the deleted edges in a way that
minimizes the error. The error is defined as the sum of the
absolute differences of the shortest path lengths between different pairs of nodes before and after contracting E^'.
Based on this error function, we propose optimal approaches for
merging any subset of edges in a path and a single edge in a
tree. Previous works on graph compression techniques aimed at
preserving different graph properties (such as the chromatic
number) or solely focused on identifying the optimal set of
edges to contract. However, our focus in this paper is on
achieving optimal edge contraction (when the contracted edges are provided
as input) specifically for weighted trees and paths.
§ INTRODUCTION AND RELATED WORK
Graphs have become increasingly relevant for solving real-world
problems, leveraging their numerous characteristics
<cit.>. However, many of these graphs
are incredibly large, consisting of trillions of edges and
vertices, which poses scalability challenges for modern systems
<cit.>. Consequently,
graph compression techniques have garnered significant research
interest in recent years, aiming to obtain a smaller graph while
retaining the essential properties of the original input graph.
Different names, such as graph compression <cit.>, graph
summarization <cit.>, graph modification
<cit.>, and graph contraction <cit.>,
have been used in the literature to describe this problem, each
within its specific context, leading to various proposed
approaches. Regardless of the terminology or context, most of
these problems focus on reducing the size of the graph while
preserving a specific property <cit.>, while some
approaches aim to modify a graph to satisfy a given property
<cit.>. Furthermore, graphs can be compressed in
different ways, such as vertex deletions, edge deletions, and
edge contractions. It is worth noting that many of these
resulting graph modification problems are NP-hard, as indicated
in <cit.>.
A very relevant problem is commonly referred to in the
literature as the blocker problem <cit.>.
Given a graph G, integers k and d, an invariant π:
𝒢→ℝ, and some modification
operations (such as edge contractions), a blocker problem asks
whether there exists a set of at most k graph modification
operations such that in the resulting graph G^',
π(G^') ≤π(G)-d holds. In recent
years, blocker problems have been studied for various graph
properties such as the chromatic number
<cit.>, maximum weight independent set and
minimum weight vertex cover <cit.>, maximum independent
set <cit.>, the clique number <cit.>,
the total domination number <cit.>, diameter
<cit.>, and maximum weight clique <cit.>.
A lot of these blocker problems are defined as
contraction problems, in which graphs can only be
modified via edge contractions. More precisely, given a graph
G, integers k and d, and an invariant π: 𝒢→ℝ, CONTRACTION(π)
asks whether there exists a set of at most k edge contractions
that results in a graph G^' with
π(G^') ≤π(G)-d. Galby et
al. <cit.> studied the contraction problems
in which a specific edge could be provided as input (in addition
to π). As an important contribution, Galby et al.
<cit.> proved that, unless P=NP,
there exists no polynomial-time algorithm that decides whether
contracting a given edge reduces the total domination number.
Biedl et al. <cit.> studied the problem of flow-
preserving graph simplification, which is the problem of finding
a set of edges whose removal does not change the maximum flow of
the underlying network.
Shortest path queries are crucial to various domains, including
search engines <cit.>, networks
<cit.> and transportation
<cit.>. In a more relevant work
to this paper, Bernstein et al. <cit.>
studied a slightly different variant of
CONTRACTION. In their work, Bernstein
et al. <cit.> focused on compressing a given
graph as much as possible, while permitting only a limited
amount of distance distortion among any pair of vertices. Given
a tolerance function φ(x)=x / α-β, with α≥ 1 and β≥ 0, Bernstein et al.
<cit.> studied the problem of finding the maximum
cardinality set of edges whose contraction results in a graph
G^' such that d_G^'(u, v) ≥φ(d_G(u, v)) for all u,v
∈ G. However, in their work, they only focused on
finding an optimal set instead of optimally
redistributing the weights. More specifically, after
finding the optimal set E^', they set the weight of
each edge e ∈ E^' to zero.
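To make this distortion constraint concrete, the small sketch below sets the weights of a chosen edge set E^' to zero (as Bernstein et al. do) and checks the tolerance condition by brute force on a toy weighted graph; this is illustrative only and not their algorithm.

import itertools
import numpy as np

def all_pairs_shortest_paths(n, edges):
    """Floyd-Warshall on a small weighted undirected graph;
    edges is a list of (u, v, weight) triples."""
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for u, v, w in edges:
        d[u, v] = d[v, u] = min(d[u, v], w)
    for k, i, j in itertools.product(range(n), repeat=3):
        d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return d

def within_tolerance(edges, contracted, n, alpha=1.5, beta=0.0):
    """Set the weights of the contracted edges to zero and check
    d_G'(u, v) >= d_G(u, v)/alpha - beta for every pair."""
    d_g = all_pairs_shortest_paths(n, edges)
    zeroed = [(u, v, 0.0 if (u, v) in contracted else w) for u, v, w in edges]
    d_gp = all_pairs_shortest_paths(n, zeroed)
    return bool(np.all(d_gp >= d_g / alpha - beta))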
Unlike the work by Bernstein et al. <cit.>,
the work by Sadri et al. <cit.> focuses on
optimally redistributing the weights. However, they do not
provide any bound guarantees on the amount of error. Precisely,
they assess the efficiency of their proposed approach by a set
of experimental studies. Moreover, their weight redistribution
approach for trees ignores the size of each subtree rooted at
the endpoints of a given contracted edge, which is a key factor
in deciding the optimal assignment as we will show in this
paper. More recently, Liang et al. <cit.>
studied the problem of reachability-preserving graph compression
techniques. There have also been other works related to graph
compression for unweighted and weighted graphs, as listed in
<cit.>. Zhou et al.
<cit.> proposed an efficient approach to remove
a large portion of the edges in a network without affecting the
overall connectivity by much. Ruan et al.
<cit.> studied the minimum gate-vertex set discovery
(MGS) problem. The MGS problem is concerned with finding the
minimum cardinality set of vertices, designating them as gate
vertices, using which every non-local pair of vertices (whose
distance is above some threshold) is able to recover its
distance in the original network. However, the work by Ruan et al. <cit.> only studies unweighted graphs.
Where our work stands in the literature: Surprisingly,
there has been little attention in the literature to one
particular side (discussed momentarily) of the distance-preserving graph compression problem. To the best of our
knowledge, all existing works have either only focused on
finding an optimal set of edges to contract or have not provided
any bounds on the amount of error. We study a different problem:
instead of choosing which optimal edge to contract, we
are interested in finding out how to contract a given
edge optimally. Even though we still study distance-preserving
graph compression, our focus is mainly on optimally modifying
the graph after a given edge has been contracted. Our primary
modification operation is changing the edge weights of the
graph. It is worth noting that this problem has received
limited attention in the literature, with the closest existing
work being the study by Sadri et al. <cit.>.
Their approach involves solving a system of equations to
determine the new edge weights in the resulting graph. However,
their analysis of the problem has certain limitations. Firstly,
they do not offer any optimal guarantees for their weight
distribution technique. Furthermore, their weight redistribution
method does not account for the sizes of the individual
subgraphs connected to a given edge. In contrast, as we will
demonstrate throughout this paper, the size of a subtree
(particularly in the context of paths and trees) plays a crucial
role in achieving optimal weight redistribution.
The organization of the paper: The remainder of this
paper is organized as follows. In Section <ref>, we
present a summary of our main results along with some comments
and details regarding each contribution. In Section <ref>,
we describe the notation used in the paper, using which we
formally define the scope of our paper in Section <ref>.
In Section <ref>, we study the problem of distance-preserving graph compression for weighted paths, where we prove
optimal approaches to contracting any set of k edges. In
Section <ref>, we study the problem of graph
compression for weighted trees, where we provide an optimal
linear-time algorithm for contracting a single edge. We present
the concluding remarks of this paper along with some potential
avenues of future work in Section <ref>.
§.§ Contributions and Results
In Section <ref>, we study the problem of distance-preserving graph compression for weighted paths.
* As a warm-up, we prove an optimal bound for merging[As defined in Section <ref>, we use the terms merging and contracting interchangeably. They both refer to the act of contracting an edge or a set of edges.] a
single edge in a path topology in Section <ref>,
whose main result is stated in Theorem <ref>.
In Section <ref>, we present a method for
transforming any weight redistribution for a given merged edge
e^* to another redistribution in which only the weights of
its neighbouring edges are altered.
* We present Algorithm <ref> for merging any
set of k≤n/2 independent edges (edges that
have no endpoints in common and induce a matching on the
path) in a path of size n.
We note that Algorithm <ref> produces suboptimal
solutions when applied to a contiguous subpath (a connected
subgraph) of the given input path. We relate this suboptimal
performance to the distinction between merging two regular
vertices and two supernodes. We thoroughly investigate
this distinction in Lemma <ref>, where we present an
optimal redistribution for merging two supernodes.
* Having the suboptimal performance of Algorithm
<ref> for merging subpaths in mind, in Section
<ref> we study the problem of finding the optimal
redistribution for any connected subgraph of a given input
path. The optimal method for contracting any contiguous
subpath of the input path is presented in Theorem
<ref>.
In Section <ref>, we study the problem of
distance-preserving graph compression for the tree topology
where we present optimal approaches for merging a single edge in
a weighted tree. To this end, we define a relevant problem,
which we refer to as the marking problem. The
objective of the marking problem is to minimize the error, as
defined in Section <ref>, by marking a subset of the
neighbouring edges of the merged edge e^* (with weight
w^*). For merging an edge e^* with weight w^*, an edge
e_i is said to be marked if its new weight w^'
(e_i)=w(e_i)+ w^*. As a warm-up, in Section <ref>, we
study the marking problem for a tree in which the neighbouring
subtrees of e^* are of equal sizes. For such edges, we show
that the optimal marking is achieved when all edges to the left
or right of (but not both) e^* are marked. In Section
<ref>, we generalize the findings of Section <ref>
and present an optimal marking for any merged edge e^* in a
weighted tree.
The definition of the marking problem implies that an edge can
either be fully marked or unmarked. It is non-trivial to see
whether fractionally marking the edges produces better
results. Therefore, in Section <ref>, we thoroughly
investigate the distinction between the marking problem
(Definition <ref>) and the fractional marking problem
(Definition <ref>) and conclude that any solution to the
latter can be transformed into another solution to the former
without worsening the error value.
* We present an 𝒪(|V|)-time algorithm for
finding an optimal marking for e^* in Algorithm
<ref>.
§ PRELIMINARIES
In this section, we first discuss the common notation (Section
<ref>) and then present some additional definitions
(Section <ref>) that help describe the scope of our paper.
In Section <ref>, we present a simple number-theoretic
lemma that is later used in some proofs of the paper. Throughout
this section, we use the path in Figure <ref> as
the running example of the definitions.
§.§ Notation
Let G=(V,E) denote a graph with V and E as its sets of
vertices and edges respectively. With every edge e ∈ E, we
associate a weight w(e) w: E ℝ_≥ 0. We sometimes denote an edge e by
(u,v), where u,v ∈ V are referred to as the
endpoints of e. Throughout this paper, we frequently
denote edges and vertices using subscripts (for instance e_i
and v_i) and superscripts for merged edges (for instance
e^*). When the context is clear, we sometimes abuse the
weight notation and denote the weight of e_i and e^* by
w_i and w^* respectively. We denote the number of vertices
(|V|) by n and a path of n vertices by P_n. Throughout
this paper, we frequently use n_L and n_R in different
contexts to denote different quantities. However, in most cases,
we denote by n_L and n_R the number of vertices to the left
and right of a given vertex of a path respectively (including
itself). For instance, in Figure <ref>-(a), n_L
and n_R denote the number of vertices to the left of (and
including) v^'_3 and the right of (and including)
v^'_4, respectively. Formally, let G_1 be one of the
two connected components of H=G-{ e_3=(v^'_3,u_1), e^*=
(u_1,v_1), e_4=(v_1,v^'_4)} that is adjacent to
v^'_3, and let G_2 be the connected component of H
that is adjacent to v^'_4. We have
n_L=|{v| v ∈ G_1 }|, n_R=|{v| v ∈ G_2 }|
For instance, in Figure <ref>-(a), n_L=3
because G_1 includes vertices {v^'_1, v^'_2,
v^'_3}, and n_R=3 because G_2 includes vertices {v^'_4,v^'_5, v^'_6}. Therefore, in this
paper, we assume that the graph is laid out in the plane and the
edge to be merged (e^* in Figure <ref>-(a)) is
horizontal. This assumption will simplify the description of our
results.
§.§ Additional Definitions
We now provide some additional definitions for defining the scope of our paper.
For a weighted graph G=(V,E), the distance between
two vertices u, v ∈ V, denoted by d_G(u,v), is the length
of the shortest weighted path between u and v in G.
A merged edge, or a contracted edge, is one
whose endpoints are merged, and the edge itself is removed from
the graph.
For instance, e^*=(u_1,v_1) in Figure <ref>-(a)
(highlighted in red) is a contracted edge. After contracting
e^*, the path of Figure <ref>-(a) is
transformed into the one in Figure <ref>-(b).
A supernode is a node containing a subset
V^'⊂ V of the nodes in the original graph,
which is a result of a series of edge contractions. We
denote the set of all supernodes by V_s.
For a supernode v∈ V_s, the cardinality of v,
denoted by 𝒞, 𝒞: V_s →ℕ, is the number of regular vertices it contains.
In the path of Figure <ref>-(b), {u_1, v_1}
is a supernode with cardinality 2.
Let G=(V,E) be a graph with weight function w: E
→ℝ_≥ 0, and let e^* ∈ E be the
merged edge. A weight redistribution is a new
weight function w^': E →ℝ_≥
0 in which w^'(e_i)=w(e_i) + ϵ_i, ∀ e_i ∈ E, ϵ_i ∈ℝ.
In the path of Figure <ref>-(b), the weight
redistribution sets the edge weights of Figure <ref>-(a) as w^'(e)=w(e)+w(e^*) if e=e_3, and w^'
(e)=w(e) otherwise.
With reference to a given merged edge e^* in a graph G=(V,E) with the associated weight function w: E →ℝ_≥ 0 and a new weight redistribution function w^': E →ℝ_≥ 0, an edge e_i is said to be marked if w^'(e_i)=w(e_i)+w(e^*), unmarked if w^'(e_i)=w(e_i), and altered otherwise.
As shown in Figure <ref>-(b), e_3 is marked and all other edges are unmarked.
With reference to a set of merged edges E_m ⊂ E, the set of merged vertices V_m consists of all vertices that are an endpoint of at least one edge in E_m, i.e., V_m={v ∈ V |∃ u ∈ V: (u,v) ∈ E_m}. The set of unmerged vertices is defined as V̅_m=V- V_m.
In the path of Figure <ref>, we have E_m={e^*}, V_m={u_1,v_1}, and V̅_m={v^'_1, v^'_2, v^'_3, v^'_4, v^'_5, v^'_6}.
With reference to a set of merged edges E_m ⊂ E and a weight redistribution w^', let G^' be the resulting graph after contracting the edges in E_m and setting the new edge weights according to w^'. The error associated with w^' with respect to E_m is denoted by |Δ E| and calculated as:
|Δ E|=∑_u ∈ V_m, v ∈V̅_m, or u, v ∈V̅_m, u≠ v |d_G(u, v)-d_G^'(u, v)|
In other words, the error is equal to the sum of the absolute differences (between G and G^') of all shortest path lengths between vertices u, v, at least one of which is in V̅_m.
Returning to our example in Figure <ref>, the error function of Eq. (<ref>) sums up the absolute values of the shortest path differences among the vertices of V̅_m={v^'_1, v^'_2, v^'_3, v^'_4, v^'_5, v^'_6}, and between the vertices of V̅_m and the vertices of V_m={u_1, v_1}. As the final example, we now explain how the distance difference between one of the aforementioned pairs of vertices is calculated. In Figure <ref>, the shortest path value between v^'_1 and u_1 changes from w_1+w_2+w_3 in G (Figure <ref>-(a)) to w_1+w_2+w_3+w^* in G^' (Figure <ref>-(b)). The error induced by this change is thus equal to |w_1+w_2+w_3+w^*-w_1-w_2-w_3|=w^*.
We are now ready to present the formal definition of our first studied problem:
Distance-Preserving Graph Compression: Given a graph G, and a set of contracted edges E_m, the problem of distance-preserving graph compression is to find a weight redistribution w^' for which |Δ E| is minimized.
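To make the definition concrete, the following is a minimal Python sketch (our own illustration, not part of the paper; the function and variable names are ours) that evaluates |Δ E| for a given set of contracted edges and a redistributed weight function by comparing all-pairs shortest paths before and after the contraction.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Shortest path lengths from src in an undirected graph given as {u: {v: w}}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def compression_error(adj, merged_edges, new_weights):
    """|Delta E|: merged_edges is a set of vertex pairs to contract, and
    new_weights maps a kept edge (u, v) to its redistributed weight w'(u, v)."""
    merged_edges = {frozenset(e) for e in merged_edges}
    merged_vertices = {x for e in merged_edges for x in e}
    # union-find: endpoints of contracted edges collapse into supernodes
    parent = {u: u for u in adj}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for e in merged_edges:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    # build the contracted, reweighted graph G'
    new_adj = {find(u): {} for u in adj}
    for u in adj:
        for v, w in adj[u].items():
            if frozenset((u, v)) in merged_edges:
                continue
            w = new_weights.get((u, v), new_weights.get((v, u), w))
            a, b = find(u), find(v)
            if a != b:
                new_adj[a][b] = min(new_adj[a].get(b, float("inf")), w)
                new_adj[b][a] = new_adj[a][b]
    old = {u: dijkstra(adj, u) for u in adj}
    new = {r: dijkstra(new_adj, r) for r in new_adj}
    # sum over pairs of vertices with at least one unmerged vertex
    return sum(abs(old[u][v] - new[find(u)][find(v)])
               for u, v in combinations(adj, 2)
               if not (u in merged_vertices and v in merged_vertices))
```

A brute-force evaluation of this kind is only meant as a specification of the objective; the algorithms discussed below never need to compute all-pairs distances explicitly.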
§.§ A Number-Theoretic Lemma
The following lemma is used in some of the proofs of this paper:
For all real numbers A, B, C, x, y, let α_1 = |x-A|+|x-A-B| and α_2=|y-C|+|y-B-C|. We have α_1 ≥ B and α_2 ≥ B. Furthermore, α_1=B and α_2=B for A ≤ x ≤ A+B and C ≤ y ≤ B+C, respectively.
We prove this using contradiction. Let us prove α_1 ≥ B, the other proof will be analogous. For the sake of contradiction, assume that α_ 1< B. We have four cases depending on whether the values inside the absolute value function (x-A and x-A-B) are positive or negative. Note that |a| = a when a≥ 0, and |a|=-a otherwise.
* Case 1: x < A and x<A+B:
α_1= A-x + A+B -x < B ⟹ A < x
which contradicts the assumption (x<A).
* Case 2: x<A and x ≥ A+B: These two conditions imply that B<0, we have:
α_1 = A-x + x-A-B < B ⟹ 0 < 2B
which is a contradiction since B<0.
* Case 3: x≥ A and x<A+B:
α_1 = x-A + A+B-x < B ⟹ 0 < 0
which is impossible.
* Case 4: x≥ A and x≥ A+B:
x-A+x-A-B < B ⟹ x < A+B
which contradicts the assumption.
Since we get a contradiction for every possible case, we have α_1≥ B and α_1=B for A ≤ x ≤ A+B. Similarly, we have α_2≥ B and α_2= B for C ≤ y ≤ B+C.
We will also use the following corollary.
Given two real numbers x, z we have |z|-|x| ≤ |z-x|.
Using Lemma <ref>, we have the following for two real numbers x and z:
|z| ≤ |x|+|z-x| ⟹ |z|-|x| ≤ |z-x|
§ GRAPH COMPRESSION FOR PATHS
In this section, we study the problem of distance-preserving graph compression for a weighted path with non-negative weights. The paths in this section all have n ≥ 3 vertices since the compression problem for a two-vertex path is trivial.
The remainder of this section is organized as follows. As a warm-up, we provide optimal bounds for merging a single edge in Section <ref>. In Section <ref>, we study the path compression problem for an edge connecting two supernodes, each consisting of a subset of nodes from the path. Two generalizations of the results of Section <ref> for contracting any subpath (a contiguous subpath of the original graph) and any set of independent edges (that induce a matching in the original path) are provided in Section <ref> and Section <ref> respectively.
§.§ A Tight Lower Bound for Merging One Edge
This section presents a tight lower bound on the optimal error (Eq. (<ref>)) associated with merging a single edge in a path topology.
As seen in Figure <ref>, the edge between v_2 and v_3 is merged, and only the immediate edge weights are altered to x and y. Later in this section (Lemma <ref>), we show why it is sufficient to alter only the immediate edge weights (A and C in Figure <ref>) to get the minimum amount of error. Note that, for merging a single edge (Figure <ref>), we have:
n_L + n_R =n-2= |V̅_m|
The following theorem is now presented:
Let |Δ E| be the error associated with merging a single edge e^*=(v_2, v_3) (with weight B) in a path P_n, n ≥ 3 (Figure <ref>). Furthermore, let V_m={v_2, v_3} and V̅_m= V- V_m. We have |Δ E| ≥ (n-2) B=|V̅_m|B. Moreover, this lower bound is tight and can be achieved by marking the left neighbour of the merged edge. If the merged edge has no left or right neighbour, the lower bound can be achieved by simply contracting the edge, and no further modifications (weight changes) are required.
Figure <ref> depicts the situation in which edge e^* with weight B is merged. We first assume that e^* has a left neighbour, and we handle the no-neighbour exception at the end of the proof. As seen in Figure <ref>-(b), let x and y denote the new edge weights of the neighbouring edges of e^*, and let G_1 (with n_L vertices) and G_2 (with n_R vertices) denote the subpaths rooted at v_1 and v_4 respectively. We denote the error by |Δ E| and classify it into different parts (in accordance with Eq. (<ref>)):
* The error between two vertices u ∈ G_1, v∈ G_2 is |x+y-A-B-C|. The only affected portion of such a shortest path is the subpath between v_1 and v_4, the value of which changes from A+B+C (in Figure <ref>-(a)) to x+y (in Figure <ref>-(b)). Summing over all such pairs u ∈ G_1,v ∈ G_2, the total amount of error is n_L n_R |x+y-A-B-C|.
* Between two vertices u,v ∈ G_1, there is no error, because the shortest path value between all such pairs of vertices is unchanged. Similarly, between two vertices u,v ∈ G_2, there exists no error.
* The error between a vertex u ∈ G_1 and the vertices in V_m={v_2, v_3} is |x-A|+ |x-A-B|. In the path from u to v_2, the only changed (with reference to edge weights) subpath is the subpath between v_2 and v_1 which changes from A (Figure <ref>-(a)) to x (Figure <ref>-(b)), inducing an error of |x-A|. Similarly, the error between some vertex u ∈ G_1 and v_3 is |x-A-B| as that is the amount by which the weight of the subpath from v_1 to v_3 changes. The total amount of error between all vertices u ∈ G_1 and the vertices in V_m={v_2, v_3} is therefore n_L (|x-A|+|x-A-B|).
* By similar reasoning to the one provided above, the total amount of error between all vertices u ∈ G_2 and the vertices in V_m={v_2, v_3} is equal to n_R(|y-C|+|y-B-C|).
Therefore, we can formulate |Δ E| as
|Δ E|=n_L (|x-A|+|x-A-B|) + n_R (|y-C|+|y-B-C|) +n_Ln_R |x+y-A-B-C|= n_L α_1 +n_R α_2 + n_L n_R |x+y-A-B-C|
where α_1 and α_2 are the values defined in Lemma <ref>. Using Lemma <ref>, we know α_1 ≥ B and α_2 ≥ B:
|Δ E| ≥ B(n_L +n_R)+ n_L n_R |x+y-A-B-C|
Using Eq. (<ref>):
|Δ E| ≥ B(n-2)+ n_L n_R |x+y-A-B-C| ≥ B(n-2)= |V̅_m| B,
which proves the first part of the theorem. As for the second part, we now show that this lower bound is tight. By marking the left neighbouring edge of e^* (effectively setting x=A+B and y=C) we get:
|Δ E|= n_L (|x-A|+|x-A-B|) + n_R (|y-C|+|y-B-C|) +n_Ln_R |x+y-A-B-C|
= (n_L+n_R)B= (n-2)B =|V̅_m|B
This analysis concludes the proof for the case where e^* has a left neighbour.
If e^* has no left neighbour, n_L=0 and no shortest path crossing e^* is affected. For each shortest path starting from v_4 (and its right-side vertices) and terminating at v_2 and v_3 there is an error of |y-C|+ |y-B-C|. According to Lemma <ref>, |y-C|+|y-B-C| is minimized as long as C≤ y≤ B+C, which is the case if all edges are unmarked, i.e. y=C. We can use a similar argument if e^* has neither a right nor a left neighbour.
Observe that marking the left neighbouring edge is not the only way of achieving the lower bound as it can also be achieved by marking the right neighbouring edge. In fact, any assignment of values to x and y such that x=A+ϵ_1, y=C + ϵ_2 with ϵ_1, ϵ_2 ≥ 0 and ϵ_1 + ϵ_2= B will have the same impact. Therefore, for merging a single edge in a weighted path, the marked neighbour can be chosen arbitrarily, and the error value is oblivious to the marking direction. However, this observation (being oblivious to the marking direction) only holds for merging two regular nodes. As we will show in Lemma <ref>, for merging two supernodes the optimal error is obtained by marking the edge adjacent to the smaller node with respect to cardinality.
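For a quick numerical sanity check of the theorem (the weights below are toy values of our own, not taken from any figure), the closed-form error used in the proof can be evaluated directly for the three natural redistributions:

```python
def single_edge_error(n_L, n_R, A, B, C, x, y):
    """Error of Eq. (ref) when the edge of weight B is contracted and its left and
    right neighbouring edges (weights A and C) are reweighted to x and y."""
    return (n_L * (abs(x - A) + abs(x - A - B))
            + n_R * (abs(y - C) + abs(y - B - C))
            + n_L * n_R * abs(x + y - A - B - C))

# toy instance: n = n_L + n_R + 2 = 6 vertices, so the lower bound is (n-2)*B = 20
n_L, n_R, A, B, C = 2, 2, 1.0, 5.0, 2.0
print(single_edge_error(n_L, n_R, A, B, C, x=A + B, y=C))  # mark left neighbour: 20.0
print(single_edge_error(n_L, n_R, A, B, C, x=A, y=B + C))  # mark right neighbour: 20.0
print(single_edge_error(n_L, n_R, A, B, C, x=A, y=C))      # no marking: 40.0
```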
Theorem <ref> assumes that to achieve the minimum amount of error, we have to alter only the immediate edges directly connected to the endpoints of the merged edge. We now prove the correctness of this assumption. For modelling the proof, we now define some notation. For any edge e_i ∈ E, we denote its new weight as w^'(e_i)=w(e_i) + ϵ_i where ϵ_i is a real number (see Figure <ref>). This definition allows us to increase or decrease the weight of any given edge e_i by ϵ_i. We call this assignment of weights a redistribution for e^*. We refer to an edge e_i as altered if ϵ_i ≠ 0, and unaltered otherwise. Moreover, let V_L, V_R ⊂ V be the vertices to the left and right of the merged edge respectively as depicted in Figure <ref>. Therefore, the problem is now to show that there exists an optimal solution, with only the immediate edges altered. For simplicity, we slightly abuse the notation and write w(e_i) as w_i and w(e^*) as w^*. In the following lemma, we show a construction for transforming any redistribution into another equivalent redistribution in which only the immediate edges are altered.
(See Figure <ref> and Figure <ref>) For a merged edge e^* (in a weighted path) that has both left and right neighbouring edges, any weight redistribution can be transformed into another weight redistribution in which only the left neighbouring edge of e^* is altered (ϵ_i =0 ∀ i ≠ n_1 in Figure <ref>). The error associated with this redistribution is no worse than that of the original one.
We prove the lemma by presenting a construction method for transforming any arbitrary weight redistribution to another one in which only the left neighbouring edge is altered. Furthermore, we show this transformation does not worsen the error. The illustration is mainly based on Figure <ref> and Figure <ref>. Figure <ref>-(b) depicts an arbitrary weight redistribution for merging e^*=(v_n_1 +1, v_n_1 +2), which is transformed into another weight redistribution (depicted in Figure <ref>-(b)).
We now present a simple construction as follows. For
illustration, see Figure <ref>. Set ϵ_i
=0 ∀ i ≠ n_1, and ϵ_n_1=
w^*. Note that this new redistribution may cause
some parts of the error to increase. However, we will
use Corollary <ref> to provide an upper bound on
any potential error increase and show that there will
always be enough decrease in error to counterbalance
the increase. In the original redistribution, the error
between two vertices v_i, v_j∈ V_L (i< j) is:
|∑_k=i^j-1 (w_k + ϵ_k)-
∑_k=i^j-1 w_k |= |∑_k=i^j-1ϵ_k |
The indices i and j used in this proof are based on the ones depicted in Figure <ref> and Figure <ref>. Assume that the path of Figure <ref> has n_2+1 edges with V_L={v_i| 1 ≤ i ≤ n_1 +1}, V_R={v_i|n_1+2 ≤ i≤ n_2 +2}, n_1=n_L and n_2=n_R. For instance, v_1 and v_3 are two vertices from V_L in Figure <ref>. The original shortest path length between v_1 to v_3 is w_1+w_2 =∑_k=1^2w_k (Figure <ref>-(a)). In the original redistribution of Figure <ref>-(b) (which we transform into Figure <ref>-(b)), this length is w_1+ ϵ_1 + w_2 + ϵ_2 =∑_k=1^2(w_k +ϵ_k), resulting in Eq. (<ref>). A similar equation can also be defined for any two vertices v_i, v_j ∈ V_R. Moreover, the error between a vertex v_i ∈ V_L and a vertex v_j∈ V_R (i<j) in the original redistribution is equal to:
|∑_k=i^j-2 (w_k + ϵ_k)- w^*-∑_k=i^j-2 w_k |= | w^*-∑_k=i^j-2ϵ_k |
Transforming the weight redistribution (as shown in Figure <ref>) changes the error value. We break this change down into five different cases:
* Case 1: The error between two
vertices v_i, v_j∈ V_L (i<j, j ≠ n_1+1) decreases by - |∑_k=i^j-1ϵ_k |, because after the construction
this error is equal to zero (compare Figure <ref>-(b) with Figure <ref>-(a) for v_1 and v_3) and using Eq.
(<ref>), the change is equal to 0-
|∑_k=i^j-1ϵ_k |.
* Case 2: The error
between some vertex v_i ∈ V_L, v_i ≠ v_n_1+1, and
every vertex v_j ∈ V_R decreases. Specifically, the
error between v_i and v_n_1 +2 decreases by -
|w^*-∑_k=i^n_1ϵ_k | using Eq.
(<ref>) (see Figure <ref>).
* Case 3: The error between some vertex v_i ∈ V_L, v_i ≠ v_n_1+1, and v_n_1+1 changes by |w^*| - | ∑_k=i^n_1ϵ_k |, because in the original redistribution, the error is equal to |∑_k=i^n_1ϵ_k| (Eq. (<ref>)) and in the new redistribution, it is equal to |w^*|. This change might lead to an increase in error; however, by using Corollary <ref> and setting z=w^* and x=∑_k=i^n_1ϵ_k we have:
|w^*|-| ∑_k=i^n_1ϵ_k | ≤| w^*- ∑_k=i^n_1ϵ_k |
In other words, if the construction causes an increase in error, it is at most equal to | w^*- ∑_k=i^n_1ϵ_k |. However, from Case 2 we know that each such vertex v_i also has an error decrease of -| w^*-∑_k=i^n_1ϵ_k |, which will be enough to nullify this increase.
* Case 4: The error between some vertex v_j∈ V_R, v_j≠ v_n_1 +2, and v_n_1 +2 decreases by -| ∑_k=n_1+1^j-2ϵ_k | using Eq. (<ref>).
* Case 5: The error between some vertex v_j∈ V_R, v_j≠ v_n_1 +2, and v_n_1+1 changes by |w^*|- | w^*-∑_k=n_1+1^j-2ϵ_k |. This change may lead to an increase in error; however, we use Corollary <ref> to bound this increase:
|w^*|- | w^*-∑_k=n_1+1^j-2ϵ_k | ≤|∑_k=n_1+1^j-2ϵ_k |
Therefore, we have enough decrease from Case 4 to nullify this increase.
For a given merged edge e^* with left and right neighbouring edges, there exists an optimal redistribution in which only the left-neighbouring edge of e^* is marked and all other edges are unmarked.
Based on Theorem <ref>, the algorithm for merging a set S of edges in a path G=P_n is presented in Algorithm <ref>. Algorithm <ref> continuously applies Theorem <ref> and marks the left neighbouring edge of each edge e^* ∈ S taken in an arbitrary order.
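A minimal sketch of this procedure (our own rendering; Algorithm <ref> itself is given in pseudocode) for a path represented by its list of edge weights, assuming the contracted edges are pairwise non-adjacent, which is the case for which optimality is established later:

```python
def contract_in_path(weights, merged):
    """weights[i] is the weight of the i-th edge of P_n; merged is a set of edge
    indices to contract, assumed pairwise non-adjacent.  Each contracted edge's
    weight is folded into its left neighbour (if it has one); the contracted
    edges are then removed."""
    new_weights = list(weights)
    for i in merged:
        if i > 0:                      # the left neighbour absorbs w(e*)
            new_weights[i - 1] += weights[i]
        # an edge with no left neighbour is simply contracted, no weight is moved
    return [w for i, w in enumerate(new_weights) if i not in merged]

print(contract_in_path([3, 1, 4, 1, 5, 9, 2], {2, 5}))   # -> [3, 5, 1, 14, 2]
```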
Unfortunately, Algorithm <ref> may produce suboptimal results when applied to specific kinds of inputs. Precisely, it may produce suboptimal results when merging a connected subpath of the given path. The reason behind this suboptimal performance lies in the difference between merging two regular nodes and two supernodes. An example of merging a subpath of size four is depicted in Figure <ref>-(a), for which Algorithm <ref> may produce the suboptimal solution Figure <ref>-(b). Later in Section <ref>, we shall show that the optimal solution for this example is the one depicted in Figure <ref>-(c). Furthermore, in Theorem <ref>, we prove that when the input to Algorithm <ref> consists of independent edges in G (if S induces a matching in G), it produces the optimal results.
§.§ Merging Supernodes
As seen in the previous section, Algorithm <ref> may find suboptimal solutions when given an entire subpath of size k. The main reason behind this suboptimal performance lies in the difference between merging regular nodes and supernodes.
Recall from Definition <ref> that a supernode contains more than one node of the original graph. In this section, we show that merging two supernodes differs from merging two regular vertices, and we provide a generalized version of Theorem <ref>. Interestingly, we observe that, unlike merging two regular nodes in which the error value was oblivious to the marking direction, for merging supernodes this direction is directly affected by the cardinality (Definition <ref>) of each endpoint. In the following lemma, we shall see that for merging an edge e^*= (u,v) connecting two supernodes u and v, the optimal solution is obtained by marking the edge adjacent to the lighter vertex (the one with the smaller cardinality) among u and v.
Suppose we have supernodes v and u (as shown in Figure <ref>), with 𝒞(v) =k and 𝒞(u) =k^' (where k≥ k^'), connected to vertices w_1 and w_2, respectively. The error incurred by merging the edge e^* = (u, v) (with weight B) is at least B × k^'× (n-(k+ k^')). Furthermore, this lower bound can be achieved by marking the neighbouring edge adjacent to the smaller vertex among v and u in terms of cardinality (e = (u, w_2) in Figure <ref>). If the smaller vertex, with reference to cardinality, has no neighbouring edge other than e^* = (u, v), then the optimal error can be achieved by contracting e^* without any further modifications or weight changes.
The analysis is similar to the case of merging regular vertices: we enumerate all possible error values and then deduce the optimal assignment. We first assume that u (the smaller vertex) is adjacent to another edge e^'≠ e^*, and we handle the other case (only adjacent to e^*) later in the proof. We denote the error by |Δ E|. Note that n_L +n_R= |V̅_m|=n-(k+k^'). Letting x and y denote the new weights of the edges adjacent to (v,u), we have:
|Δ E| =n_L × |x-A| × k + n_L ×|x-A-B|× k^' + n_R × |y-C| × k^' + n_R ×|y-B-C|× k + n_L × n_R ×|x+y -A-B-C|,
where the five terms account, respectively, for the error between the subpath of w_1 and the vertices in v, between the subpath of w_1 and the vertices in u, between the subpath of w_2 and the vertices in u, between the subpath of w_2 and the vertices in v, and between the subpaths of w_1 and w_2.
Because k≥ k^', we further simplify |Δ E| as:
|Δ E| = (k-k^') × n_L × |x-A| + n_L × k^'×(|x-A| + |x-A-B|)
+ (k-k^') × n_R × |y-B-C| + n_R × k^'×(|y-C| + |y-B-C|)
+ n_L × n_R ×|x+y -A-B-C|
Using Lemma <ref>, and the fact that k-k^'≥ 0, we have:
|Δ E| ≥ n_L × k^'×(|x-A| + |x-A-B|) + n_R × k^'×(|y-C| + |y-B-C|),
where each parenthesized term is at least B by Lemma <ref>, so that
|Δ E| ≥ B × k^'×(n_L + n_R ) = B × k^'× (n-(k+ k^'))
We can observe that this lower bound (Eq. (<ref>)) is tight and can be achieved by setting y=B+C and x=A in Eq. (<ref>).
Now if u is only adjacent to e^*, the lower bound can be achieved by just contracting e^* and leaving the weight function unchanged. To see why, suppose u is only adjacent to e^*. We have n_R=0 and the error is equal to:
|Δ E| = (k-k^') × n_L × |x-A| + n_L × k^'×(|x-A| + |x-A-B|)
+ (k-k^') × n_R × |y-B-C| + n_R × k^'×(|y-C| + |y-B-C|)
+ n_L × n_R ×|x+y -A-B-C|
= (k-k^') × n_L × |x-A| + n_L × k^'×(|x-A| + |x-A-B|)
= n_L × k^'× B
On the other hand, note that n_L+n_R=n-(k+ k^'), and n_R=0 implies that n_L=n-(k+ k^'). We have:
|Δ E|=n_L × k^'× B=(n-(k+ k^')) × k^'× B
and the lower bound is achieved without any weight changes.
It is worth noting that similar to Theorem <ref>, we assume that it is sufficient to only alter the neighbouring edges of the merged edge e^*=(u,v). This proof for this assumption is almost identical to that of Lemma <ref>, where any arbitrary redistribution can be transformed into another redistribution in which only the edge adjacent to the smaller vertex (e=(u,w_2)) is marked. Then, similar to the proof of Lemma <ref>, the decrease in error is always sufficient to counterbalance any potential error increase. The only difference is that in the new proof, the decrease in error and any potential error increase are weighted by k and k^', respectively. Since k ≥ k^', the proof follows.
Lemma <ref> is a generalization of Theorem <ref>. Thinking of each regular vertex as a supernode with cardinality one, we have k=k^'=1 and using Lemma <ref>, the error is equal to B × k^'× (n-(k+ k^'))= B × (n-2) by arbitrarily marking one of the neighbouring edges (since the endpoints have equal cardinalities).
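A small helper (ours, for illustration only) captures the rule of Lemma <ref>: mark the neighbouring edge next to the lighter supernode and pay B per outside vertex per vertex of that supernode.

```python
def merge_supernodes(k, k_prime, n_L, n_R, B):
    """Contract an edge of weight B joining supernodes of cardinalities k (left)
    and k_prime (right); n_L and n_R count the vertices outside the supernodes
    on each side.  Returns the side whose neighbouring edge should be marked
    and the optimal error B * min(k, k_prime) * (n_L + n_R)."""
    side = "right" if k >= k_prime else "left"   # edge adjacent to the lighter supernode
    return side, B * min(k, k_prime) * (n_L + n_R)

print(merge_supernodes(k=1, k_prime=1, n_L=2, n_R=2, B=5.0))  # regular vertices: ('right', 20.0)
```

With k = k^' = 1, the returned error reduces to (n-2)B, consistent with the single-edge theorem above.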
Using Lemma <ref>, we can now explain the suboptimal performance of Algorithm <ref> for edges that are not independent and form a contiguous subpath. For inputs of such kind, Algorithm <ref> continuously marks the left neighbouring edges of all edges e^* ∈ S, potentially marking an edge adjacent to the heavier endpoint of some e^* ∈ S along the way and violating the conditions of Lemma <ref>.
In the next section, we study the problem of optimally merging an entire contiguous subpath of the path.
§.§ Merging Contiguous Subpaths
This section presents an optimal way of merging any contiguous subpath (or connected subpath) of a given path. For convenience, we refer to contiguous subpaths as subpaths. Let P ^'⊆ P be the desired subpath consisting of k edges (see Figure <ref> for an illustration). Throughout this section, we assume k is even; otherwise, we can convert P^' into an equivalent subpath of even length by adding a dummy edge of weight zero. As depicted in Figure <ref>, we assume P^' partitions the set of vertices into two subsets, V_L and V_R, with n_L and n_R vertices respectively. We denote the error associated with contracting P^' by ℰ and break it down into three components:
* ℰ_L, the error between the vertices in V_L and the ones inside P^',
* ℰ_R, the error between the vertices in V_R and the ones inside P^', and
* ℰ_LR the error between the vertices of V_L and V_R.
With this in mind, we formulate ℰ as:
ℰ=ℰ_L +ℰ_R +ℰ_LR
such that:
ℰ_L=n_L ×( |x-w_0|+ |x-w_0-w_1|+… + |x-w_0-w_1-… -w_k|),
ℰ_R=n_R ×( |y-w_k+1|+ |y-w_k+1-w_k|+… + |y-w_k+1-w_k-…- w_1|),
where the successive terms of ℰ_L account for the error between the vertices of V_L and v_2, v_3, …, v_k+2 respectively, and the successive terms of ℰ_R for the error between the vertices of V_R and v_k+2, v_k+1, …, v_2 respectively, and
ℰ_LR=n_L × n_R × |x+y - w_0-w_1 - … - w_k+1|
where x and y are the new weights of the neighbouring edges of P^' (Figure <ref>-(b)).
We first prove the optimal solution for ℰ_L and derive the optimal solution for ℰ_R by symmetry. Let ℰ_L ^(i) denote the value of ℰ_L when x=w_0+w_1 +… + w_i for 0 ≤ i ≤ k. We prove the following lemma using induction on i.
ℰ_L ^(i)= n_L ×(∑_j=0^i j w_j + ∑_j=i+1^k(k+1-j) w_j)
For the base case, ℰ_L ^(0), assume x=w_0. By a simple replacement into Eq. (<ref>) we get:
ℰ_L ^(0)= n_L×(w_1 +w_1 +w_2 +w_1+w_2+w_3 +… + w_1 +w_2 +… +w_k )
In other words, every w_j, 1 ≤ j ≤ k, is repeated k+1-j times, and:
ℰ_L ^(0)= n_L ×( ∑_j=1^k (k+1 -j) w_j )
Now assume the lemma holds for all j<i+1. By the inductive hypothesis, we have ℰ_L ^(i)= n_L ×(∑_j=0^i j w_j + ∑_j=i+1^k(k+1-j) w_j). We break Eq. (<ref>) into k+1 clauses, such that c_j=|x-w_0-w_1-… - w_j| for 0 ≤ j ≤ k. Going from x=∑_j=0^iw_j to x=∑_j=0^i+1w_j, ℰ_L ^(i) first increases by n_L ×((i+1) w_i+1) because there are i+1 clauses c_0, c_1, …, c_i that do not include w_i+1, and then decreases by n_L ×((k-i) w_i+1) because there are k-i clauses c_i+1, …, c_k that include w_i+1 and were not covered by the previous assignment of x (x=∑_j=0^iw_j). Therefore, we have:
ℰ_L ^(i+1)= ℰ_L ^(i) + n_L ×((i+1) w_i+1 - (k-i) w_i+1)
= n_L ×(∑_j=0^i j w_j + ∑_j=i+1^k(k+1-j) w_j +(i+1) w_i+1 - (k-i) w_i+1)
= n_L ×(∑_j=0^i+1 j w_j + ∑_j=i+2^k(k+1-j) w_j)
The following lemma states that the optimal value of ℰ_L is equal to ℰ_L^(k/2).
The optimal value of ℰ_L is obtained when x=w_0+w_1+…+w_k/2.
It suffices to show the optimal value of ℰ_L is equal to ℰ_L ^(k/2). From the proof of Lemma <ref>, we know that ℰ_L^(i+1)-ℰ_L^(i)=n_L × ((i+1) w_i+1 - (k-i) w_i+1). Therefore, ℰ_L^(i+1)-ℰ_L^(i) < 0 if:
i+1-k+i < 0 ⟺ 2i < k-1 ⟺ i < k/2 - 1/2 ⟺ i ≤ k/2-1
In other words, ℰ_L^(k/2) is strictly better than (less than) any ℰ_L^(j), j≠k/2. Note that the optimal solution also cannot happen when x= ϵ + ∑_j=0^k/2 w_j for some 0< ϵ < w_k/2+1, because in that case, the error would be equal to:
ℰ_L^(k/2) + (k/2 +1) ϵ - (k/2) ϵ > ℰ_L^(k/2)
Using simple replacements, we can deduce that ℰ_L ^(k/2) is also smaller than ℰ_L when x<w_0 or x> w_0 +… +w_k. Let ℰ^(x<w_0)_L denote the value of ℰ_L for some x<w_0. For some x<w_0, all clauses in Eq. (<ref>) have negative values. Recalling that |x|=-x when x<0, we have:
ℰ^(x<w_0)_L =n_L×(w_0-x + w_0 +w_1 -x + … +w_0 +w_1 +… + w_k -x )= n_L ×((∑_j=0^k(k+1-j) w_j) - (k+1) × x)
>n_L ×( ∑_j=1^k(k+1-j) w_j )=ℰ^(0)_L >ℰ^(k/2)_L
The other case (x>w_0 +… + w_k) can be handled analogously.
The optimal value of ℰ_R is obtained when y=w_k/2+1+w_k/2+2+…+w_k+1.
By symmetry and using Lemma <ref> and Lemma <ref>.
We now derive the following theorem, which states that the optimal way of contracting an entire subpath is by distributing the left and right halves of the edges in the subpath to the left and right neighbours respectively.
Let P^'⊆ P be a contiguous subpath of P (a weighted path on n vertices) consisting of k edges {e_1, …, e_k}, and let e_0 and e_k+1 be the left and right neighbouring edges of P^' respectively. Furthermore, let w_i=w(e_i) ∀ i ∈{ 0,… , k+1}. The optimal error for contracting P^' is obtained by setting x=w_0+w_1+…+w_k/2 and y=w_k/2+1+w_k/2+2+…+w_k+1, where x and y are the new edge weights of e_0 and e_k+1 respectively (see Figure <ref>). If P^' has no left neighbour (e_0 does not exist), the optimal error can be achieved by setting y=w_k/2+1+w_k/2+2+…+w_k+1. If P^' has no right neighbour (e_k+1 does not exist), the optimal error can be achieved by setting x=w_0+w_1+…+w_k/2. Finally, if P^' has neither a left nor a right neighbour, the optimal error can be achieved by simply contracting P^' and no further modifications (weight changes) are required.
The case with both neighbours existing is immediate from Lemma
<ref>, Lemma <ref>, Eq. (<ref>), and the fact
that ℰ_LR=0 when x=w_0+w_1+…+w_k/2 and
y=w_k/2+1+w_k/2+2+…+w_k+1.
If P^' has no left neighbour (e_0 does not exist), we have n_L=0
and consequently ℰ_LR=E_L=0. It follows that
ℰ=ℰ_R whose optimal value is obtained by setting
y=w_k/2+1+w_k/2+2+…+w_k+1 using Lemma
<ref>. The other cases can be shown analogously.
To prove that it is sufficient to alter only the immediate neighbouring edges
of P^', we only provide a sketch to avoid repetition. The idea is
very similar to the proof of Lemma <ref> and Lemma <ref>.
Suppose we have any arbitrary weight redistribution, which we transform to the
one provided in this theorem. Let u be some vertex in V_L (as in Figure
<ref>). In the original redistribution, let x be the length of the
shortest path from u to the super vertex v^*={v_2, v_3, …,
v_k+2} in P^' (Figure <ref>-(b)). It is easy to see that in
the original distribution, the error between u and all of the vertices in
v^* is equal to:
ℰ_1=|x-w_0|+…+ |x-w_0-…- w_k|
For a fixed x, ℰ_1 corresponds to ℰ_L/n_L
(Eq. (<ref>)). It is easy to see that in the new redistribution, the
error between u and all vertices in v^* is equal to
ℰ^(k/2)_L/n_L. Therefore, using Lemma
<ref>, we know that ℰ^(k/2)_L/n_L -
ℰ_L/n_L≤ 0 for any x, and this change in the weight
redistribution cannot worsen the error associated with any u ∈ V_L. Other
cases can be handled analogously.
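As a compact illustration of the theorem (our own sketch; the index conventions follow the statement above), the contracted subpath's weights can be folded into the two neighbouring edges as follows:

```python
def contract_subpath(weights, start, k):
    """Contract the k consecutive edges weights[start..start+k-1] of a path
    (k even; an odd run can be padded with a dummy zero-weight edge).  The left
    neighbour absorbs the first k/2 contracted weights and the right neighbour
    the last k/2; a missing neighbour simply drops its half."""
    assert k % 2 == 0 and 0 <= start and start + k <= len(weights)
    left, mid, right = weights[:start], weights[start:start + k], weights[start + k:]
    if left:
        left = left[:-1] + [left[-1] + sum(mid[:k // 2])]
    if right:
        right = [right[0] + sum(mid[k // 2:])] + right[1:]
    return left + right

print(contract_subpath([2, 1, 3, 5, 7, 4], start=1, k=4))   # -> [6, 16]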
§.§ Merging A Set of Independent Edges
We now generalize the results of Section <ref> by proving the correctness of Algorithm <ref> for merging any set of independent edges. The proof of correctness consists of the following lemma and theorem which are similar to Lemma <ref> and Theorem <ref> respectively.
For merging a set of independent edges E_m from a path on n vertices P_n, there exists an optimal redistribution in which for each e ∈ E_m, only its left neighbouring edge is marked. If e^'∈ E_m is the leftmost edge on P_n, then this optimal solution is obtained by marking the left neighbouring edge of all edges in E_m except for e^'.
The proof is similar to the proof of Lemma <ref> and we will provide a sketch using Figure <ref>. In Figure <ref>, the edges in E_m and the vertices in V_m are highlighted in red, and the vertices in V̅_m are depicted in blue. We assign an ordering to the vertices (of V_m and V̅_m) and the edges (of E_m and E̅_m) from left to right, as illustrated in Figure <ref>. Let v_i and u_j be the i-th and the j-th vertex in V̅_m and V_m respectively according to this ordering. Similarly, let e_i and e^*_j be the i-th and the j-th edge in E̅_m=E- E_m and E_m respectively. For convenience, we denote w(e_i) and w(e^*_i) by w_i and w^*_i respectively. Figure <ref>-(b) depicts some arbitrary weight redistribution in which the new weight of each edge e_i is set to w(e_i) + ϵ_i. We shall show that the error associated with the weight redistribution of Figure <ref>-(c) (in which the left neighbours of E_m are marked) is no worse than that of Figure <ref>-(b). We again assume that all edges in E_m have left neighbours. First, observe how this new weight redistribution removes any error between the vertices in V̅_m. For instance, in the path of Figure <ref>-(c), the shortest path value between v_1, v_3 ∈V̅_m is the same as the one in the original path (Figure <ref>-(a)). Therefore, it suffices to study only the error between all pairs of vertices (u,v), u ∈ V_m, v ∈V̅_m. Using our ordering of edges, let e^*_k=(u_j,u_j+1 ) ∈ E_m and let v_i ∈V̅_m be a vertex to the left of e^*_k (we will explain how the other case can be handled analogously). Continuing with our example of Figure <ref>, let e^*_k=e^*_3=(u_5, u_6) and v_i=v_1. Observe how between v_i (v_1 in Figure <ref>-(a)) and u_j+1 (u_6 in Figure <ref>-(c)), there exists no error in the new redistribution as they have equal shortest path values in the original graph (Figure <ref>-(a)) and the new distribution (Figure <ref>-(c)). We show that, going from the distribution of Figure <ref>-(b) to the one in Figure <ref>-(c), any increase in the error between v_i and the left endpoint of e^*_k (u_j) can be nullified by the decrease in the error between v_i and u_j+1. The case where v_i is located on the right of e^*_k can be handled similarly.
For any E^'⊆ E we define the following quantities:
W(E^')=∑_e ∈ E^'∩E̅_m w(e), W^*(E^')=∑_e ∈ E^'∩ E_m w(e), W^'(E^')=∑_e_i ∈E̅_m∩ E^'ϵ_i
where W^'(E^') denotes the sum of all ϵ_i's in the distribution of Figure <ref>-(b). Let π_v, u, π^'_v, u, and π^''_v, u denote the shortest path values between v and u in the original graph (Figure <ref>-(a)), the first redistribution (Figure <ref>-(b)), and the second redistribution (Figure <ref>-(c)) respectively. Moreover, let E^(u,v) denote the set of edges on the unique shortest path from u to v. We have:
π_v_i, u_j = W(E^(v_i, u_j))+ W^*(E^(v_i, u_j))
π^'_v_i, u_j = W(E^(v_i, u_j))+ W^'(E^(v_i, u_j))
π^''_v_i, u_j = W(E^(v_i, u_j))+ W^*(E^(v_i, u_j))+ w^*_k
We provide some examples of these quantities in Example <ref> for better readability.
Note that:
π_v_i, u_j+1= π_v_i, u_j+w^*_k , π^'_v_i, u_j=π^'_v_i, u_j+1, and π^''_v_i, u_j=π^''_v_i, u_j+1
The error between v_i and u_j+1 in the redistribution of Figure <ref>-(b) is:
ℰ^v_i, u_j+1_1=|π_v_i,u_j+1-π^'_v_i, u_j+1|=| π_v_i, u_j+w^*_k -π^'_v_i, u_j| = |w^*_k+ W^*(E^(v_i, u_j))- W^'(E^(v_i, u_j))|
As mentioned before, the error between v_i and u_j+1 in the weight redistribution of Figure <ref>-(c) is equal to zero:
ℰ^v_i, u_j+1_2=0
Therefore, transforming Figure <ref>-(b) into Figure <ref>-(c) changes the error between v_i to u_j+1 by:
Δ_v_i, u_j+1=ℰ^v_i, u_j+1_2- ℰ^v_i, u_j+1_1=-|w^*_k+ W^*(E^(v_i, u_j))- W^'(E^(v_i, u_j))|
The error between v_i and u_j in the redistribution of Figure <ref>-(b) is:
ℰ^v_i, u_j_1=|π_v_i,u_j-π^'_v_i, u_j|=|π^'_v_i,u_j-π_v_i, u_j|= |W^'(E^(v_i, u_j))- W^*(E^(v_i, u_j))|
The error between v_i and u_j in the weight redistribution of Figure <ref>-(c) is equal to:
ℰ^v_i, u_j_2=|π^''_v_i, u_j-π_v_i, u_j|=|w^*_k|
Transforming Figure <ref>-(b) into Figure <ref>-(c) changes the error between v_i to u_j by:
Δ_v_i, u_j=ℰ^v_i, u_j_2- ℰ^v_i, u_j_1=|w^*_k|-|W^'(E^(v_i, u_j))- W^*(E^(v_i, u_j))|≤|w^*_k-W^'(E^(v_i, u_j))+ W^*(E^(v_i, u_j))|
using Corollary <ref>. Therefore, going from the first redistribution to the second one changes the error between the endpoints of e^*_k=(u_j, u_j+1) and v_i by:
Δ_v_i, u_j+Δ_v_i, u_j+1≤|w^*_k-W^'(E^(v_i, u_j))+ W^*(E^(v_i, u_j))|-|w^*_k+W^*(E^(v_i, u_j))- W^'(E^(v_i, u_j))| ≤ 0
Since each e^*_k edge in E_m has exactly two endpoints, this concludes the proof for the first case (v_i is on the left of e^*_k). The other case can be handled analogously.
Returning to our example of Lemma <ref> and Figure <ref>, let e^*_k=(u_j, u_j+1)= e^*_3=(u_5, u_6), and v_i=v_1. Then:
* E^(v_1, u_5)={e_1, e^*_1, e_2, e_3, e^*_2, e_4}
* W(E^(v_1, u_5))= w_1+w_2+w_3+w_4
* W^*(E^(v_1, u_5))= w^*_1+w^*_2
* W^'(E^(v_1, u_5))=ϵ_1+ϵ_2+ϵ_3+ϵ_4
Let |Δ E| be the optimal error resulting from merging a set of k independent edges e_1, e_2,…, e_k with respective weights w^*_1, w^*_2, …, w^*_k from a path on n vertices P_n. Let (u_2i-1, u_2i) be the endpoints of e_i ∈ E_m, 1≤ i ≤ k. Furthermore, let V_m={u_1,…, u_2k} and V̅_m=V-V_m. We have |Δ E|= |V̅_m| (w^*_1+…+w^*_k)= (n-2k)(w^*_1+…+w^*_k). This optimal value can be achieved by marking the left neighbour of each edge in E_m after contraction. If the leftmost edge in E_m has no left neighbour, the optimal error can be achieved by marking the left neighbours of all other edges in E_m.
Let w^': E →ℝ_≥ 0 be the weight redistribution that marks the left neighbouring edge (if any) of each edge in E_m (Figure <ref>-(c)). That w^' is optimal follows directly from Lemma <ref>. We now compute the error associated with w^'.
Since the edges in E_m induce a matching on P_n, |V̅_m|= n-2|E_m|=n-2k. Recall from the proof of Lemma <ref> that in w^', there exists no error between two vertices v_1, v_2 ∈V̅_m. Let us fix some e^*_k∈ E_m. Using the proof of Lemma <ref>, we know that each vertex v_i ∈V̅_m induces an error of w^*_k with exactly one endpoint of e^*_k (and no error with the other endpoint). Summing over all vertices v_i ∈V̅_m, we get that each edge e^*_k ∈ E_m accumulates a total of (n-2k)w^*_k in error. Summing again over all edges e^*_k ∈ E_m yields the desired bound.
§ GRAPH COMPRESSION FOR TREES
In this section, we study the problem of distance-preserving graph compression for weighted trees. Precisely, we study a relevant problem, referred to as the marking problem, for a tree T=(V,E), |V|=n, and weight function w: E →ℝ_≥ 0.
The remainder of this section is organized as follows. In Section <ref>, we formally define the marking problem. The adaptation of the error function (Eq. (<ref>)) to the marking problem is thoroughly explained in Section <ref>.
As a warm-up, we study a special case of the marking problem in Section <ref>, after which we generalize the results in Section <ref> and present a linear-time algorithm for solving the marking problem in Algorithm <ref>. As the final component of this section, we thoroughly study the difference between the marking problem (Definition <ref>) and the fractional marking problem (Definition <ref>) in Section <ref>.
§.§ The Marking Problem for a Single Edge
As seen in Section <ref>, for merging a single edge in a weighted path, marking one of the neighbouring edges produces the optimal amount of error. An important question is how to generalize this result to solve the same problem for weighted trees.
We formally state the marking problem as:
The Marking Problem for Weighted Trees: Given a contracted edge e^* in a weighted tree T, what subset of the neighbouring edges of e^* should we mark such that the error value of Eq. (<ref>) is minimized over all such possible subsets?
An example of the marking problem is depicted in Figure <ref>-(a), where edge e^* with weight w^* is contracted. As shown in Figure <ref>-(b), in the marking problem, the goal is to mark a subset of the neighbouring edges of e^*, by setting the new weight of each neighbouring edge e_i to w^'(e_i)=w(e_i) +ϵ_i, ϵ_i ∈{0, w^*}, in a way that minimizes the error function of Eq. (<ref>) over all such possible subsets. Note that the fractional case (when the weight of each neighbouring edge e_i is set to w^'(e_i)=w(e_i) +ϵ_i, ϵ_i ∈ [0, w^*]) is thoroughly studied in Section <ref>.
In the tree of Figure <ref>-(a), e^* has four neighbouring edges, namely e_1=(v_1, v_3), e_2=(v_1,v_4), e_3=(v_2, v_5), and e_4=(v_2, v_6). Different subsets of these neighbouring edges can be marked, for instance, in Figure <ref>-(a), {e_1, e_2} is marked. In the remainder of this section, we may refer to each of these marked subsets as a marking for simplicity. For example, in Figure <ref>-(c), {e_1, e_3} is a marking. An optimal marking is one that minimizes the error function of Eq. (<ref>) over all possible markings.
Since for merging an edge in a weighted path marking one of the neighbouring edges gives the optimal amount of error, our intuition tells us that in a weighted tree, we have to mark all neighbouring edges on one side of the contracted edge e^*. As we shall show later, this intuition, though not completely correct, is optimal for specific kinds of input. To study the marking problem, we first present some definitions and observations using Figure <ref> and Figure <ref> as our running examples. We assume the tree is laid out in the plane and e^* (the edge to be merged) is horizontal. This assumption will simplify the description of our results.
Let T=(V, E) be a weighted tree with non-negative weights, and let e^*=(v_1, v_2) be the merged edge with weight w^*, V_m={v_1, v_2}, and V̅_m=V-V_m. We denote by d_L the number of subtrees to the left of v_1 and by d_R the number of subtrees to the right of v_2. More formally, let E^'=E- e^*. We have:
V_L= {u| (u,v_1) ∈ E^'}, d_L=|V_L|
V_R= {w| (v_2,w) ∈ E^'}, d_R= |V_R|
For instance, in the tree of Figure <ref>, we have V_L={v_3, v_4} and V_R={v_5, v_6} and therefore d_L=d_R=2.
Given e^*=(v_1, v_2) in T, T - {v_1, v_2} is a forest ℱ, the components of which are used in our analyses and defined as follows:
Let T, e^*=(v_1, v_2), V_L and V_R be as defined in Definition <ref>. Let ℱ be the forest T- {v_1, v_2}. Furthermore, assume that the connected components of ℱ are rooted at the vertices of V_L or V_R, and let C_L and C_R be the sets of components of ℱ rooted at the vertices of V_L and V_R respectively. Then, we denote by T^L_i, i ∈{1, …, d_L}, the i-th member of C_L, and by T^R_j, j ∈{1,…, d_R}, the j-th member of C_R, given some arbitrary ordering on the members of C_L and C_R.
In the tree of Figure <ref>, d_L=2, and C_L has two members (the subtrees rooted at v_3 and v_4). Given some arbitrary ordering on the members of C_L, T^L_1 is the subtree rooted at v_3.
We also formally define the cardinality of the subtrees of Definition <ref> as follows:
Let T^L_i, i ∈{1, …, d_L}, and T^R_j, j ∈{1,…, d_R}, be as defined in Definition <ref>. We have L_i=|{v|v∈ T^L_i}| and R_j=|{v|v∈ T^R_j}|. We refer to L_i as the cardinality of the i-th edge on the left and R_j as the cardinality of the j-th edge on the right.
A few examples of marking the edges of Figure <ref> are provided in Figure <ref>-(a) to Figure <ref>-(c). In Figure <ref>-(a) and Figure <ref>-(b), all edges on one side of e^* are marked, and in Figure <ref>-(c), a subset of edges from both sides is marked. Marking an edge could both increase and decrease the total amount of error. Before proceeding with the remainder of this section, we note the following lemma to justify our focus on minimizing the error between all pairs of vertices in V̅_m.
(See Figure <ref>) Let e^*=(v_1,v_2) be the single merged edge in a weighted tree T=(V, E), and let V̅_m= V- {v_1, v_2}. Then, as long as every neighbouring edge of e^* is either fully marked or unmarked, the error between any vertex u ∈V̅_m and the vertices in {v_1, v_2} is minimized.
This lemma is a direct result of Lemma <ref> and Theorem <ref>. Let us fix some vertex u ∈ T_2^L (see Figure <ref>-(b)), the error between u and the endpoints of e^*, v_1 and v_2, can be formulated as:
|Δ E|^'= |w_2-(w_2+ϵ_2)| + |w_2+w^*-(w_2+ϵ_2)| =|ϵ_2|+|w^*-ϵ_2|= |ϵ_2|+|ϵ_2- w^*|,
where the first term is the error between u and v_1 and the second the error between u and v_2. Using Lemma <ref>, we have |Δ E|^'≥ w^*, and |Δ E|^' = w^* for 0≤ϵ_2 ≤ w^*. Therefore, when (v_1, v_4) is either marked or unmarked, we have ϵ_2 ∈{0, w^*}, which satisfies the desired conditions. This analysis applies to all nodes u ∈V̅_m, thus the lemma follows.
In the remainder of this section, we therefore only focus on minimizing the error between all pairs of vertices u_1, u_2 ∈V̅_m, because by the definition of the marking problem (Definition <ref>), the conditions of Lemma <ref> are automatically satisfied.
§.§ Formulating The Error
This section formally explains how marking a set of edges affects the error function. Using Figure <ref>, we first present some examples, which we generalize later in Observation <ref>. Throughout this section, we may sometimes refer to this error as units of error, where each unit is equal to w^*.
The error between v_3 and v_4 in Figure <ref>-(a) is equal to |w_1 +w^* +w_2 +w^*- w_1 -w_2|=2w^*. In the original graph (Figure <ref>-(a)), e^* does not appear on the unique path between v_3 and v_4, while in the modified graph (Figure <ref>-(a)), the weight of e^* appears twice. In the marking of Figure <ref>-(a), the total amount of error between all pairs of vertices u_1 ∈ T_1^L, u_2 ∈ T_2^L is L_1 × L_2 × 2w^*.
The error between v_3 and v_5 in Figure <ref>-(c) is |w_1+w^*+w_3+w^*-w_1-w^*-w_3|=w^*. Because in the original graph (Figure <ref>-(a)), e^* appears only once on the unique path from v_3 to v_5, while in the modified graph (Figure <ref>-(c)), the weight of e^* appears twice. The total amount of error between all pairs of vertices u_1 ∈ T_1^L, u_2 ∈ T_1^R is L_1 × R_1 × w^*.
In Figure <ref>-(c), the error between v_5 and v_6 is |w_3+w^*+w_4-w_3-w_4|=w^*. The total amount of error between all pairs of vertices u_1 ∈ T_1^R, u_2 ∈ T_2^R is R_1 × R_2 × w^*.
In Figure <ref>-(c), the total amount of error between all pairs of vertices u_1 ∈ T_1^L, u_2 ∈ T_2^L is L_1 × L_2 × w^*.
In Figure <ref>-(a), the error between v_3 and v_5 is equal to |w_1+w^*+w_3-w_1-w^*- w_3|=0. The length of the unique path between v_3 and v_5 does not change compared with Figure <ref>-(a).
Between the vertices of two edges (vertices belonging to the subtree rooted at that edge) adjacent to the endpoints of e^*, there might exist some error. We classify this observation into the following cases:
* Let T_i^L and T_j^L be the subtrees adjacent to two distinct marked edges on the left. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^L, u_2 ∈ T_j^L is L_i × L_j × 2w^* (see Example <ref>).
* Let T_i^R and T_j^R be the subtrees adjacent to two marked edges on the right. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^R, u_2 ∈ T_j^R is R_i × R_j × 2w^*.
* Let T_i^L and T_j^R be the subtrees adjacent to two marked edges on the left and right respectively. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^L, u_2 ∈ T_j^R is L_i × R_j × w^* (see Example <ref>).
* Let T_i^R and T_j ^R be the subtrees adjacent to a marked edge and an unmarked edge on the right respectively. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^R, u_2 ∈ T_j^R is R_i × R_j × w^* (see Example <ref>).
* Let T_i^L and T_j ^L be the subtrees adjacent to a marked edge and an unmarked edge on the left respectively. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^L, u_2 ∈ T_j^L is L_i × L_j × w^* (see Example <ref>).
* Let T_i^L be the subtree adjacent to a marked edge on the left, and T_j^R be the subtree adjacent to an unmarked edge on the right. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^L, u_2 ∈ T_j^R is equal to zero (see Example <ref>).
* Let T_i^L be the subtree adjacent to an unmarked edge on the left, and T_j^R be the subtree adjacent to a marked edge on the right. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^L, u_2 ∈ T_j^R is equal to zero.
* Let T_i^L and T_j^R be the subtrees adjacent to unmarked edges on the left and right respectively. Then, the total amount of error between all pairs of vertices u_1 ∈ T_i^L, u_2 ∈ T_j^R is L_i × R_j × w^*, because the weight of e^* no longer appears on the path between them; between the subtrees adjacent to two unmarked edges on the same side, there is no error. (These cases are summarised in the short sketch below.)
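The case analysis above translates directly into a short error counter, sketched below (our own helper; the fixed (n-2)w^* contribution between V̅_m and {v_1, v_2} from Lemma <ref> is left out because it does not depend on the marking).

```python
def marking_error_units(L, R, marked_L, marked_R):
    """Error, in units of w*, between vertices of the subtrees around the
    contracted edge e*.  L[i] (resp. R[j]) is the cardinality of the i-th left
    (resp. j-th right) subtree; marked_L / marked_R are the sets of marked
    edge indices on each side."""
    err = 0
    # pairs of subtrees on the same side: 2 units if both marked, 1 if exactly one
    for side, marked in ((L, marked_L), (R, marked_R)):
        for i in range(len(side)):
            for j in range(i + 1, len(side)):
                err += side[i] * side[j] * ((i in marked) + (j in marked))
    # pairs across e*: 1 unit if both marked or both unmarked, 0 otherwise
    for i in range(len(L)):
        for j in range(len(R)):
            err += L[i] * R[j] * int((i in marked_L) == (j in marked_R))
    return err
```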
§.§ Equal-Sized Subtrees
We now investigate a special case where each subtree on the left has n_L vertices and each subtree on the right has n_R vertices, i.e. L_i = n_L, 1≤ i ≤ d_L, and R_i= n_R, 1≤ i ≤ d_R. Recall that every merged edge has two sides, left and right, one of which is designated as the preferable side. A given side is preferable if fully marking it produces a smaller amount of error than fully marking the other side. For example, if the left side is preferable, we have:
n_L^2 × d_L × (d_L -1) ≤ n_R^2 × d_R × (d_R -1)
The above inequality compares the error between the marking with the left side fully marked and the right side fully unmarked (Figure <ref>-(a)), and the opposite marking with the right side fully marked and the left side fully unmarked (Figure <ref>-(b)). In the first marking, there exists no error between the left and the right sides (Observation <ref>, Case 6), but there are d_L(d_L-1)/2 distinct pairs of marked edges on the left, each inducing an error of n_L × n_L × 2w^* (Observation <ref>, Case 1). Therefore, the total amount of error for the first marking is equal to d_L(d_L-1)/2 × n_L × n_L × 2w^*= n_L^2 × d_L ×(d_L -1 ) × w^*. The other marking can be analyzed analogously. Note that in the remainder of this section, we drop w^* from each error term, and each error term counts the error units, where each unit is equal to w^*. Therefore, all quantities are implicitly multiplied by w^* in the remainder of this section.
The following lemma states that, for a contracted edge e^* that has equal-sized subtrees on each side, the optimal solution is obtained by marking all edges on the preferable side of e^* and leaving the other side completely unmarked.
Given a merged edge (in a weighted tree) with two sides left and right, such that the subtrees on each side have equal sizes, the optimal marking is obtained if one side (the preferable side) is fully marked and the other side is fully unmarked.
By contradiction. This lemma assumes each subtree on the left and right side has n_L and n_R vertices respectively, i.e. L_i = n_L, 1≤ i ≤ d_L, and R_i= n_R, 1≤ i ≤ d_R. Without loss of generality, we assume the left side is preferable throughout this proof. Therefore, we have:
n_L^2 × d_L × (d_L -1) ≤ n_R^2 × d_R × (d_R -1)
Let i and j denote the number of marked edges on the left and right respectively. We define two functions, MARK_LEFT, which marks one of the edges on the left, and UNMARK_RIGHT, which unmarks one edge on the right. We will show that for all values i< or j>0, one can achieve smaller error values by applying a series of MARK_LEFT's and UNMARK_RIGHT's and ending up at i= and j=0, as desired. For a function f ∈ℱ= {MARK_LEFT,UNMARK_RIGHT}, we define Δ(f) as the amount of change in the error value after applying f to the tree. Since we are interested in decreasing the error value using the functions in ℱ, in this proof we will look for conditions under which Δ(MARK_LEFT) ≤ 0 and Δ(UNMARK_RIGHT) ≤ 0.
We begin by investigating MARK_LEFT. Note that this function sets i ← i+1 and j ← j. We observe the following:
* Because we are marking a new edge, the total amount of error between the marked edges on the left changes by:
n_L^2 × 2 ×( (i+1)i/2 - i(i-1)/2 )= n_L^2 × 2i
* The total amount of error between the unmarked edges and the marked ones on the left changes by:
n_L^2 × ((i+1)(d_L-i-1)-i(d_L-i))= n_L^2 × (d_L i-i^2-i+d_L-i-1-d_L i+i^2)= n_L^2 ×( d_L -2i-1)
* The total amount of error between the marked edges on the left and right changes by:
n_L n_R ((i+1)j-ij)= n_L n_R × j
* The total amount of error between the unmarked edges on the left and right changes by:
n_L n_R × ((d_L-i-1)(d_R -j)-(d_L-i)(d_R-j))= n_L n_R × (d_L d_R - d_L j-i d_R +ij-d_R+j- d_L d_R + d_L j+i d_R -ij)= n_L n_R × (j-d_R)
Therefore, Δ (MARK_LEFT) is equal to:
Δ (MARK_LEFT)= n_L^2 × (2i + d_L -2i -1)+ n_L n_R (j+j-d_R)= n_L^2 × ( d_L -1)+ n_L n_R (2j-d_R)
Since we are looking for conditions under which Δ (MARK_LEFT) ≤ 0, we have:
Δ (MARK_LEFT)≤ 0 ⟺ n_L^2 × ( d_L -1)+ n_L n_R (2j-d_R) ≤ 0
Therefore,
Δ (MARK_LEFT)≤ 0 if n_L × ( d_L -1) ≤ n_R (d_R-2j) ⟺ j ≤ d_R/2+ n_L(1-d_L)/(2 n_R)
Similar reasoning can be used for UNMARK_RIGHT. This function sets i ← i and j ← j-1. We have:
* The total amount of error between the marked edges on the right changes by:
n_R^2 × 2 ×( (j-1)(j-2)/2 - j(j-1)/2 )= n_R^2 × (-2(j-1))
* The total amount of error between the unmarked edges and the marked ones on the right changes by:
n_R^2 × ((j-1)(d_R -j +1)-j(d_R-j))= n_R^2 × (d_R j-j^2+j-d_R+j-1-d_R j +j^2)= n_R^2 ×(2j-d_R-1)
* The total amount of error between the marked edges on the left and right changes by:
n_L n_R (i(j-1)-ij)= n_L n_R × (-i)
* The total amount of error between the unmarked edges on the left and right changes by:
n_L n_R × ((d_L-i)(d_R -j+1)-(d_L-i)(d_R-j))= n_L n_R × (d_L d_R - d_L j+d_L- i d_R +ij-i-d_L d_R + d_L j+i d_R -ij)= n_L n_R × (d_L -i )
Thus, we have:
Δ(UNMARK_RIGHT)= n_R^2 (-2(j-1)+2j-d_R-1)+ n_L n_R ( d_L -i -i )= n_R^2 (1-d_R)+ n_L n_R ( d_L -2i )
and
Δ (UNMARK_RIGHT)≤ 0 if n_L ( d_L -2i )≤ n_R (d_R-1) ⟺ i ≥ d_L/2+ n_R(1-d_R)/(2 n_L)
We conclude the proof by stating that whenever i< d_L or j>0, one can achieve smaller error values by applying a series of MARK_LEFT's and UNMARK_RIGHT's and ending up at i= d_L and j=0. When j ≤ d_R/2+ n_L(1-d_L)/(2 n_R), Eq. (<ref>) is satisfied. Therefore, we repeatedly apply MARK_LEFT until i= d_L, at which point Eq. (<ref>) is satisfied and we repeatedly apply UNMARK_RIGHT until j=0, as desired.
Now suppose j > d_R/2+ n_L(1-d_L)/(2 n_R) edges are marked on the right side. If i ≥ d_L/2+ n_R(1-d_R)/(2 n_L), Eq. (<ref>) is satisfied, which allows us to repeatedly apply UNMARK_RIGHT until j=0, at which point Eq. (<ref>) is satisfied and we repeatedly apply MARK_LEFT until i= d_L, as desired.
Assume i < d_L/2+ n_R(1-d_R)/(2 n_L) and j > d_R/2+ n_L(1-d_L)/(2 n_R), so that both Eq. (<ref>) and Eq. (<ref>) are unsatisfied. We first apply d_L/2+ n_R(1-d_R)/(2 n_L) -i MARK_LEFT's, increasing the error by ( d_L/2+ n_R(1-d_R)/(2 n_L) -i) ( n_L^2 × ( d_L -1)+ n_L n_R (2j-d_R)), at which point i= d_L/2+ n_R(1-d_R)/(2 n_L) and Δ(UNMARK_RIGHT)=0. Therefore, we set j ← 0 without changing the error (since Δ(UNMARK_RIGHT)=0), and then we apply d_L - ( d_L/2+ n_R(1-d_R)/(2 n_L) )= d_L/2- n_R(1-d_R)/(2 n_L) MARK_LEFT's until i= d_L, as desired. We now show that this sequence of MARK_LEFT's and UNMARK_RIGHT's results in an error value no worse than that of the original one:
Δ = ( d_L/2+ n_R(1-d_R)/(2 n_L) -i) ( n_L^2 ( d_L -1)+ n_L n_R (2j-d_R)) + j ( n_R^2 (1-d_R)+ n_L n_R ( d_L -2 ×( d_L/2+ n_R(1-d_R)/(2 n_L)) )) + ( d_L/2- n_R(1-d_R)/(2 n_L)) ( n_L^2 ( d_L -1)+ n_L n_R (-d_R))
Here the middle term vanishes, the first factor of the first term is at most d_L/2+ n_R(1-d_R)/(2 n_L), and its second factor is positive (Eq. (<ref>) is unsatisfied) and at most n_L^2 ( d_L -1)+ n_L n_R d_R because 2j-d_R ≤ d_R. Hence
Δ ≤ ( d_L/2+ n_R(1-d_R)/(2 n_L)) ( n_L^2 ( d_L -1)+ n_L n_R d_R ) + ( d_L/2- n_R(1-d_R)/(2 n_L)) ( n_L^2 ( d_L -1)+ n_L n_R (-d_R))
= n_L^2 × d_L × ( d_L -1) - n_R^2 × d_R × ( d_R -1)
≤ 0
and we arrive at i= d_L and j=0 without increasing the error value.
In the next section, we generalize Lemma <ref> to the case in which different subtrees can have varying sizes.
§.§ Varying-Size Subtrees
As a generalization of Section <ref>, now assume the i-th subtree on the left (1 ≤ i ≤ d_L) has L_i nodes, and the j-th subtree on the right (1 ≤ j ≤ d_R) is of size R_j. We observe that when each side has subtrees of different sizes, marking all edges on one side does not necessarily produce the optimal error. An example is depicted in Figure <ref>, where marking only one edge on the right produces the optimal amount of error.
Although marking all edges on one side does not necessarily produce the optimal error, we observe that no optimal solution has markings on both sides, as the following lemma states. Similar to Section <ref>, we remove w^* from all calculations and expressions in this section. Therefore, all calculations in this section are implicitly multiplied by w^*.
Given a merged edge (in a weighted tree) with two sides left and right, no optimal marking has marked edges on both sides.
By contradiction. We assume there exists such an optimal marking, and we strictly improve its error by unmarking everything on one of the two sides (thus obtaining a contradiction). Expanding the proof of Lemma <ref>, we define four operations, MARK_LEFT, MARK_RIGHT, UNMARK_LEFT, and UNMARK_RIGHT, for marking and unmarking edges on both ends. For a function
f ∈ℱ= {MARK_LEFT, MARK_RIGHT, UNMARK_LEFT, UNMARK_RIGHT}
we define Δ(f) as the amount of change in the error value after applying f to the tree. Let S_L=∑_i=1^d_L L_i and S_R=∑_i=1^d_R R_i denote the total sum of all edge cardinalities on the left and right sides respectively. Furthermore, let M_L and U_L denote the sum of the cardinalities of the marked and unmarked edges on the left side, and M_R and U_R the corresponding sums on the right side. Note that S_L = M_L + U_L and S_R = M_R + U_R.
First, we calculate Δ(UNMARK_RIGHT) and derive Δ(UNMARK_LEFT) by symmetry. Assume we are unmarking the i-th edge on the right, e_i, with cardinality R_i. We break the change in the error value down into four parts as follows:
* The total amount of error between the marked edges on the right changes by:
-2 × R_i × ( M_R - R_i)
because between two marked edges on the right, there exist two units of error (equal to twice the weight of the merged edge e^*). Therefore, unmarking e_i relieves some of this error.
* The total amount of error between the unmarked and the marked edges on the right changes by:
-R_i × U_R + R_i × ( M_R -R_i)
because between a marked and an unmarked edge on the right, there exists one unit of error and unmarking e_i relieves some error with other unmarked edges (the first part of the expression), making e_i an unmarked edge itself (the second part of the expression).
* The total amount of error between e_i and the marked edges on the left changes by:
-R_i × M_L
because between two marked edges on the right and the left, there exists one unit of error and unmarking e_i relieves some of this error.
* The total amount of error between e_i and the unmarked edges on the left changes by:
R_i × U_L
Summing all four parts together, we get:
Δ(UNMARK_RIGHT)= R_i ×(- ( M_R - R_i) - U_R - M_L + U_L)
By symmetry, we also have:
Δ(UNMARK_LEFT)= L_i ×(- ( M_L - L_i) - U_L - M_R + U_R)
Next, we calculate Δ(MARK_LEFT) and derive Δ(MARK_RIGHT) by symmetry. Assume we are marking the i-th edge on the left, e_i, with cardinality L_i. We break the change in the error value into four parts:
* The total amount of error between the marked edges on the left changes by:
2 × L_i × M_L
because between two marked edges on the left, there exist two units of error (equal to twice the weight of the merged edge e^*). Therefore, marking e_i introduces some error between e_i and all other marked edges on the left.
* The total amount of error between the unmarked and the marked edges on the left changes by:
-L_i × M_L + L_i × ( U_L -L_i)
because between a marked and an unmarked edge on the left, there exists one unit of error, and marking e_i relieves some error with all other marked edges on the left (the first part of the expression), making e_i a marked edge itself (the second part of the expression).
* The total amount of error between e_i and the marked edges on the right changes by:
+L_i × M_R
because between two marked edges on the right and the left, there exists one unit of error, thus marking e_i introduces some error between e_i and all marked edges on the right.
* The total amount of error between e_i and the unmarked edges on the right changes by:
-L_i × U_R
because between two unmarked edges on the left and the right, there exists one unit of error, and marking e_i relieves some of this error.
Summing all four parts together, we get:
Δ(MARK_LEFT)= L_i ×( M_L + ( U_L -L_i)+ M_R - U_R)
By symmetry, we also have:
Δ(MARK_RIGHT)= R_i ×( M_R + ( U_R -R_i)+ M_L - U_L)
Now, we can complete the proof. For the sake of contradiction, assume that there exists an optimal marking with edges marked on both sides. Therefore, we have M_L>0 and M_R>0. Without loss of generality, assume U_R ≥ U_L (see Figure <ref>-(a)). Using Eq. (<ref>), we can unmark any edge on the right, say the i-th one, e_i, connected to R_i vertices, such that the change in the error value is equal to:
Δ = R_i ×(- ( M_R - R_i) - U_R - M_L + U_L) < R_i ×(- U_R + U_L) ≤ 0,
where the strict inequality drops the non-positive term -( M_R - R_i) and the strictly negative term - M_L (recall M_L>0), and the last inequality uses U_R ≥ U_L. Hence Δ<0 and we obtain a strictly better marking by unmarking e_i; therefore, the original marking could not have been optimal. After unmarking e_i, we again have U_R > U_L and we can keep unmarking all edges on the right until the right side is fully unmarked and we have a strictly better marking than the original one (see Figure <ref>). Note that the other case ( U_R < U_L) can be handled symmetrically by fully unmarking the left side and repeatedly applying Eq. (<ref>).
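The quantities derived in this proof are simple enough to evaluate directly; the helpers below (our own names, mirroring the equations above) make the marking/unmarking trade-off explicit.

```python
def delta_mark_left(L_i, M_L, U_L, M_R, U_R):
    """Change in error (units of w*) when an unmarked left edge of cardinality L_i is marked."""
    return L_i * (M_L + (U_L - L_i) + M_R - U_R)

def delta_unmark_right(R_i, M_L, U_L, M_R, U_R):
    """Change in error (units of w*) when a marked right edge of cardinality R_i is unmarked."""
    return R_i * (-(M_R - R_i) - U_R - M_L + U_L)

# e.g. with S_L = 7, S_R = 21 and nothing marked yet, marking a left edge of cardinality 2 helps:
print(delta_mark_left(L_i=2, M_L=0, U_L=7, M_R=0, U_R=21))     # -> -32
# with the whole right side marked, unmarking the right edge of cardinality 1 also helps:
print(delta_unmark_right(R_i=1, M_L=0, U_L=7, M_R=21, U_R=0))  # -> -13
```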
§.§.§ Partial Markings
In Lemma <ref>, we observed that no optimal marking has edges marked on both sides. In this section, we introduce the concept of partial markings, used to form an optimal marking after merging a given edge e^*. A partial left (respectively right) marking, denoted by P_L (respectively P_R), is a marking with all edges on the right (respectively on the left) unmarked, and a subset of the edges on the left (respectively on the right) marked. We call a partial marking optimal if its error count is no larger than that of any other partial marking for its respective side. Let P_L^* and P_R^* denote the optimal partial left and right markings respectively; the following lemma is easy to prove.
After merging an edge e^* in a weighted tree with non-negative weights, the optimal marking is either P_L^* or P_R^*, depending on which one produces a smaller amount of error.
Immediate from Lemma <ref>.
Applying the results of Lemma <ref>, we can find the optimal marking by finding the optimal partial markings P_L^* and P_R^*, comparing their respective error values, and choosing the one with the smaller error value as the optimal marking. The question is how to find the optimal partial markings, and it is answered in the following lemma.
The optimal partial marking ^* consists of all edges e_i (adjacent to L_i vertices) such that
-≤ L_i
Similarly, the optimal partial marking ^* consists of all edges e_i (adjacent to R_i vertices) such that
-≤ R_i
We first prove Eq. (<ref>) and derive Eq. (<ref>) by symmetry. Suppose we are trying to construct an optimal partial marking for the left side. We can do so by keeping the right side unmarked and marking edges on the left until the error can no longer be improved. Recall Eq. (<ref>) from the proof of Lemma <ref>, we have to keep marking all edges e_i (with cardinality L_i) until the error can no longer be improved, the change in the error value at each step is equal to:
Δ()= L_i ×( + ( -L_i)+ - )
At each step, to get an improvement, we must have Δ() ≤ 0:
( + ( -L_i)+ - )≤ 0 -L_i+ - ≤ 0
However, since we are calculating a partial marking for the left side, we know by definition that the right side has to remain fully unmarked at all times, so we have = and =0. Inserting these values in the above equation we get the following inequality for edges that improve the partial left marking:
-L_i - ≤ 0 - ≤ L_i
Then, we can deduce that if an edge e_i on the left satisfies S_L-S_R ≤ L_i, it must be marked in ^*. Conversely, if an edge on the left e_i is marked in ^*, it must satisfy S_L-S_R ≤ L_i. To see why, assume ^* includes an edge e_i with S_L-S_R > L_i. Then, we can improve ^* by unmarking e_i (see Eq. (<ref>)):
Δ()= L_i ×(- ( - L_i) - - + ) =L_i×( L_i -S_L+S_R) <0
which contradicts the optimality of ^* and proves Eq. (<ref>). The other inequality (Eq. (<ref>)) can be proven analogously by applying Eq. (<ref>).
We present our linear-time algorithm for finding the optimal marking after merging an edge e^* in Algorithm <ref>.
As an example, let us demonstrate how Algorithm <ref> finds the optimal marking for the tree of Figure <ref>. The optimal partial marking M_L^* consists of all edges on the left, because:
* 7-21≤ 2=L_1
* 7-21≤ 2=L_2
* 7-21≤ 3=L_3
On the other hand, the optimal partial marking M_R^* consists of only one edge on the right, because:
* 21-7 ≤ 20 = R_1
* 21-7 > 1 =R_2
Moreover, because M_R^* has a better error count than M_L^*, Algorithm <ref> returns M_R^* as the overall optimal marking, which is the correct answer as depicted in Figure <ref>.
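To make the procedure concrete, the following is a minimal Python sketch of this marking step (the function and variable names are ours, not taken from Algorithm <ref>): it builds both optimal partial markings from the cardinalities L_i and R_i and keeps the one with the smaller error. Only the error change relative to the empty marking is compared, since the error of the empty marking is common to both sides.

```python
def optimal_marking(left_cards, right_cards, w_star):
    """Optimal (integral) marking after merging an edge of weight w_star.

    left_cards[i] = L_i and right_cards[i] = R_i are the cardinalities of the
    edges adjacent to the two endpoints of the merged edge.  Returns the side
    to mark, the marked cardinalities, and the error change w.r.t. the empty
    marking (non-positive; smaller is better).
    """
    S_L, S_R = sum(left_cards), sum(right_cards)

    def partial_marking(cards, S_same, S_other):
        # Mark e_i iff S_same - S_other <= cardinality of e_i (Lemma above);
        # each marked edge changes the error by card * w_star * (S_same - card - S_other).
        marked = [c for c in cards if S_same - S_other <= c]
        delta = sum(c * w_star * (S_same - c - S_other) for c in marked)
        return marked, delta

    left_marked, d_left = partial_marking(left_cards, S_L, S_R)
    right_marked, d_right = partial_marking(right_cards, S_R, S_L)
    if d_left <= d_right:
        return "left", left_marked, d_left
    return "right", right_marked, d_right


# Running example from the figure: L = (2, 2, 3), R = (20, 1), unit merged weight.
print(optimal_marking([2, 2, 3], [20, 1], 1.0))   # -> ('right', [20], -120.0)
```

On the example above the sketch marks only the edge with cardinality 20 on the right, in agreement with the discussion of Figure <ref>.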
Now, we summarize our result in the following theorem.
Algorithm <ref> computes the optimal marking for a merged edge in 𝒪(|V|) time.
Immediate from Lemma <ref>, Lemma <ref>, and Lemma <ref>.
§.§ Fractional Markings
In the previous section, we studied the marking problem under the assumption that each edge could either be fully marked or fully unmarked. In this section, we study a generalized version of the marking problem, called the fractional marking problem (to be defined momentarily). We show that Algorithm <ref> does not err by assuming that each edge can either be fully marked or fully unmarked.
With reference to a given merged edge e^* in a graph G=(V, E) with the associated weight function w: E →ℝ_≥ 0, and a new weight redistribution function w^': E →ℝ_≥ 0, an edge e_i is said to be fractionally marked if w^'(e_i)=w(e_i)+c_iw(e^*), 0<c_i<1.
Each neighbouring edge e_i has thus an assigned c_i, which denotes the (possibly fractional) amount by which it is marked. We may sometimes refer to an edge e_i with c_i=1 as a fully marked edge. An edge e_i is marked by ϵ if its corresponding c_i is set to c^'_i=c_i + ϵ, and it is unmarked by ϵ if its corresponding c_i is set to c^'_i=c_i - ϵ.
The Fractional Marking Problem for Weighted Trees: Given a contracted edge e^* in a weighted tree T with non-negative weights,
what subset of the neighbouring edges of e^* should we fully mark or fractionally mark such that the error value of Eq. (<ref>) is minimized over all such possible subsets?
Similar to the previous section, we may omit some occurrences of w^* from our calculation for convenience. We borrow our previous running example (Figure <ref> and Figure <ref>) and extend it to present an example of fractional markings in Figure <ref>. Figure <ref>-(a) depicts the tree of Figure <ref> with two edges fractionally marked. Figure <ref>-(b) illustrates a succinct representation of Figure <ref>-(a), where each fractionally marked edge e_i is shown using its respective c_i (Definition <ref>) and the weights of the unmarked edges are omitted. We use this succinct version often in the remainder of this section.
As a warm-up, we first present a property of any optimal marking that has at least one fractionally marked edge.
Let M be an optimal marking for a contracted edge e^* (in a weighted tree) that has at least one fractionally marked edge e^'. Then, M necessarily has marked edges on both sides.
By contradiction. Suppose M is an optimal marking with a
fractionally marked edge e^', and suppose M is a
partial left or right marking (Section <ref>) with
marked edges only on the left or right respectively. Without
loss of generality, assume M is a partial left marking
with a fractionally marked edge e^'=e_1. As
depicted in Figure <ref>-(a), assume e_1 has
cardinality L_1 and marking value c_1 (Definition
<ref>). We can obtain another marking
M^' by unmarking e_1 (Figure <ref>-(b)).
Let ℰ be the error function, then ℰ(M) = ℰ(M^') + Δ_1 and ℰ(M) < ℰ(M^') because M is an optimal marking. Therefore, Δ_1 < 0 when marking e_1 back in M^'. We now formulate Δ_1 when marking e_1 in M^' by c_1.
Δ_1 = c_1 × X
where X = L_1 × (S_L^M + (S_L^U - L_1) + S_R^M - S_R^U) = L_1 × (S_L^M + (S_L^U - L_1) - S_R) because S_R^M = 0 and S_R^U = S_R (M^' is a partial left marking). However, because Δ_1 < 0, we have that X < 0 and we can fully mark e_1 in M^' to get another marking M^'' (Figure <ref>-(c)). The amount of error change of this mark operation is equal to:
Δ_2 = c_1 × X + (1-c_1) × X
Noting that ℰ(M) = ℰ(M^') + Δ_1 we have:
ℰ(M^'') = ℰ(M^') + Δ_2 = ℰ(M^') + c_1 × X + (1-c_1) × X = ℰ(M) + (1-c_1) × X < ℰ(M)
Therefore M^'' is a strictly better marking
than M, contradicting our assumption that M is an
optimal marking.
Lemma <ref> states that an optimal marking with fractionally marked edges cannot be a partial left or right marking (as defined in Section <ref>).
We now present the main result of this section.
Let M be an optimal marking for a contracted edge e^* (in a weighted tree) that contains both fractionally and fully marked edges. Then, M can be transformed into another optimal marking M^' that contains no fractionally marked edges.
We assume M is an optimal marking that contains fractionally marked edges. We consider several possible cases and for each case, we present a transformation technique that does not worsen the marking with reference to the error function (Eq. (<ref>)). The repeated application of these transformations converts M into another marking M^' with no fractionally marked edges.
Let M be an optimal marking that contains at least one fractionally marked edge. Using Lemma <ref>, we may assume M contains marked edges on both sides. Throughout this proof, we use 0<c_i≤ 1 to denote the marking value for an edge e_i, such that e_i is fractionally marked if 0<c_i< 1 (see Definition <ref>).
* Case 1: M contains two marked edges e_1 (with cardinality L_1) and e_2 (with cardinality R_1) on the left and right respectively such that c_1 + c_2 > 1 (Figure <ref>-(a)).
Let c_1 + c_2 =1+ ϵ for ϵ >0. We show
that fractionally unmarking either e_1 or e_2 by
ϵ does not worsen the error, and this change
transforms M into another optimal marking M^'
in which c^'_1+c^'_2=1.
We generalize the proof of Lemma <ref> and define S_L = ∑_i L_i and S_R = ∑_i R_i as the total sum of all edge cardinalities on the left and right sides respectively. Let S_L^' = S_L - L_1 and S_R^' = S_R - R_1 and, without loss of generality, assume S_R^' ≥ S_L^'. We unmark e_2 by ϵ, setting c^'_2 = c_2 - ϵ (Figure <ref>-(b)). Note that we must necessarily have c_2 ≥ ϵ because otherwise c_1 + c_2 < 1 + ϵ = c_1 + c_2, which is a contradiction. We now show this operation does not worsen the marking. Between the vertices of e_2 and the vertices of all other edges on the right side, the error is reduced by ϵ× w^* per vertex pair. Now, let e_j ≠ e_1 be some edge on the left. The error between the vertices of e_2 and e_j is increased by at most ϵ× w^*. If e_j has marking value c_j, then this operation may increase the error between the vertices of e_2 and e_j by |w^* - (c_2 × w^* + c_j × w^* - ϵ× w^*)| - |w^* - (c_2 × w^* + c_j × w^*)| ≤ ϵ× w^* using Corollary <ref>.
Therefore, this unmark operation changes the error by:
Δ_1 ≤ R_1 × ϵ× w^* × (- S_R^' - L_1 + S_L^') < R_1 × ϵ× w^* × (- S_R^' + S_L^') ≤ 0
where the strict inequality drops the term -L_1 < 0 and the last inequality uses S_R^' ≥ S_L^'.
* Case 2: For all pairs of marked edges e_1 and e_2 on the left and right respectively c_1 +c_2 ≤ 1 (Figure <ref>).
We consider two subcases:
* Case 2-1: There exist two edges e_i and e_j on one side (left or right) such that c_i ≠ c_j (Figure <ref>-(a)).
Figure <ref>-(a) is an example of such marking M in
which c_1+ c_2 ≤ 1 for any two edges e_1 and e_2 on
opposite sides and at least two edges e_i and e_j on the
left with c_i ≠ c_j. Without loss of generality, we may
assume c_i < c_j. Then, we can set c^'_i= c_j without
increasing the error. Due to the properties of this subcase, we
may get another marking M^' by unmarking e_i (setting
c^'_i=0) and then marking it by c_j to get a third
marking M^'' with ℰ(M^'') ≤ℰ (M). Similar to the proof of Lemma
<ref>, we have:
ℰ(M^'') = ℰ(M^') + c_i × X + (c_j - c_i) × X = ℰ(M) + (c_j - c_i) × X ≤ ℰ(M)
* Case 2-2: For all marked edges e_i (with
c_i >0) on the left c_i=ϵ_1, and for all marked
edges e_j (with c_j>0) on the right c_j =ϵ_2
(ϵ_1 + ϵ_2 ≤ 1) (Figure <ref>-(b)).
For this case, we simply show that the error associated with the optimal partial marking (Lemma <ref>) is a lower bound on ℰ(M), or min(ℰ(M_L^*), ℰ(M_R^*)) ≤ ℰ(M). Without loss of generality, we assume that min(ℰ(M_L^*), ℰ(M_R^*)) = ℰ(M_L^*). Let E_L and E_R be the set of marked edges in M_L^* and M_R^* respectively. From Lemma <ref>, we know that for each e_i ∈ E_L, S_L - S_R ≤ L_i and for each e_i ∈ E_R, S_R - S_L ≤ R_i. We show that the set of marked edges in M is precisely
equal to E_L ∪ E_R. Let e_i be any marked (with reference
to M) edge on the left, and let e_j be any marked edge on
the right. Because c_i +c_j ≤ 1, unmarking e_i by
ϵ≤ c_i increases the error between the vertices of
e_i and e_j by ϵ× w^*:
|w^* - (c_i × w^* + c_j × w^* - ϵ× w^*)| - |w^* - (c_i × w^* + c_j × w^*)| = ϵ× w^*
where both absolute values can be dropped, i.e. |w^* - (c_i + c_j - ϵ) w^*| = w^* - (c_i + c_j - ϵ) w^* and |w^* - (c_i + c_j) w^*| = w^* - (c_i + c_j) w^*, because c_i + c_j ≤ 1.
Furthermore, unmarking e_i by ϵ decreases the error
between the vertices of e_i and the vertices of all other
edges on the left by -ϵ× w^*. Therefore,
unmarking e_i by ϵ changes ℰ(M) by:
Δ_1 = L_i × ϵ× w^* × (-(S_L - L_i) + S_R) = L_i × ϵ× w^* × (-S_L + L_i + S_R)
Because M is an optimal marking, Δ_1 ≥ 0 and:
-S_L + L_i + S_R ≥ 0 ⟺ L_i ≥ S_L - S_R
Conversely, we may assume that any edge e_i on the left satisfying L_i ≥ S_L - S_R is marked in M; because
otherwise, we could improve M by marking e_i[The
proof for this claim is almost identical to the one provided in
the proof of Lemma <ref>. Here, we omitted the details
to avoid repetition.]. Similar reasoning can be applied to any
marked edge e_i on the right.
We now conclude the proof. First, note that:
ℰ(M_L^*) = ℰ(M_0) + ∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R)
and
ℰ(M_R^*) = ℰ(M_0) + ∑_e_i ∈ E_R R_i × w^* × (S_R - R_i - S_L)
(the first term is the error associated with the empty marking; each sum is the total of the Δ's that transform M_0 into M_L^* and M_R^*, respectively)
where M_0 is a trivial marking with no marked edges.
On the other hand, M can be constructed by first marking all
edges e_i in E_L by ϵ_1 and then marking all edges in E_R by ϵ_2.
ℰ(M) = ℰ(M_0) + ϵ_1 ×∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R) + ϵ_2 ×∑_e_i ∈ E_R R_i × w^* × (S_R - R_i - S_L)
where the second and third terms are the sums of all Δ's scaled by ϵ_1 and ϵ_2, respectively.
From our assumption, ℰ(M_L^*) ≤ ℰ(M_R^*). We have S_L - L_i - S_R ≤ 0 for all e_i ∈ E_L and S_R - R_i - S_L ≤ 0 for all e_i ∈ E_R. We get:
ℰ(M_L^*) ≤ ℰ(M_R^*) ⟺ ∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R) ≤ ∑_e_i ∈ E_R R_i × w^* × (S_R - R_i - S_L)
On the other hand:
ℰ(M) = ℰ(M_0) + ϵ_1 ×∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R) + ϵ_2 ×∑_e_i ∈ E_R R_i × w^* × (S_R - R_i - S_L)
≥ ℰ(M_0) + ϵ_1 ×∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R) + ϵ_2 ×∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R)
= ℰ(M_0) + (ϵ_1 + ϵ_2) ×∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R)
≥ ℰ(M_0) + ∑_e_i ∈ E_L L_i × w^* × (S_L - L_i - S_R) = ℰ(M_L^*)
where the last inequality uses ϵ_1 + ϵ_2 ≤ 1 and the fact that each summand is non-positive; hence ℰ(M) ≥ ℰ(M_L^*).
Thus, min(ℰ(M_L^*), ℰ(M_R^*)) is a lower bound on the error of any such marking.
We now conclude the proof by stating that any optimal marking
with fractionally marked edges can be transformed into another
optimal marking with no fractionally marked edges. Let M be
any such marking. If M satisfies the conditions of Case 1, we
repeatedly apply the transformation method of Case 1 until it
satisfies the conditions of Case 2-1. We then repeatedly apply
the construction method of Case 2-1 until M satisfies the
conditions of Case 2-2. Finally, if M satisfies the conditions
of Case 2-2, we have already shown that ℰ(M) is
lower bounded by the optimal partial marking of Lemma
<ref> which has no fractionally marked edges.
§ CONCLUSION AND OPEN PROBLEMS
In this paper, we studied the problem of distance-preserving
graph compression for weighted paths and trees. We first
presented a brief literature review of some related work in this
domain, noting that one particular aspect of the problem is
understudied. More specifically, there has been little attention
in the literature to the problem of optimally compressing a
given set of edges. To address this, we presented optimal
algorithms for compressing any set of k edges in a weighted
path and for optimally compressing a single edge in a weighted
tree. We tackled the problems in an incremental order of
difficulty. For weighted paths, we first solved the problem of
optimally compressing a single edge, then we generalized it to
any set of k independent edges. Finally, we provided an
optimal approach to compressing any contiguous subset of edges
in a weighted path.
We then generalized our scope to weighted trees, where we
studied the problem of optimally compressing a single edge. To
this end, we first studied the easier case in which the subtrees
of both sides of the merged edge had equal sizes. Finally, we
generalized our results to the case in which different subtrees
were of different sizes.
We now note some potential avenues of future studies:
Can we solve the distance-preserving graph compression problem for general graphs in polynomial time?
The above problem would indeed be a natural extension of this
paper. The complexity of the weight redistribution problem for
general graphs is still unknown. However, it appears that the
related problem of finding the contracted edges is unlikely to
be solved in polynomial time. Bernstein et al.
<cit.> showed that CONTRACTION
(defined in Section 1) is NP-hard even if the underlying graph
is just a weighted cycle. In a graph with cycles, some vertices
are connected via multiple paths. Therefore, after merging a
single edge, several shortest paths that traverse that edge may
need to be rerouted using completely different edges, making the
analysis much more difficult.
How could we find an optimal redistribution strategy that also
minimizes the error between all pairs of vertices in V_m?
Note that even if some weight redistribution minimized the error
between two nodes in different supernodes, it would still be non-trivial to do the same for two vertices that are placed in a
single supernode. Obviously, a trivial solution would be to
store the shortest path weights between the vertices in one
supernode as separate table entries. However, such an approach
would defeat the whole purpose of graph compression, which is to
reduce memory requirements.
For the optimal weight redistribution problem, are there any
better cost models (error functions)?
As stated in Section <ref>, in this paper we defined the
error function as the sum of
the absolute differences of the shortest path lengths between different pairs of nodes before and after redistributing the
weights. However, exploring alternative cost functions that better capture the distance-based similarity between a modified graph and its original version, and that provide a more accurate measure of closeness between graphs, could open up promising research avenues.
|
http://arxiv.org/abs/2307.04243v1 | 20230709184355 | Swimming Efficiently by Wrapping | [
"H. Gidituri",
"M. Ellero",
"F. Balboa Usabiaga"
] | cond-mat.soft | [
"cond-mat.soft",
"physics.flu-dyn"
] |
1 BCAM - Basque Center for Applied Mathematics, Alameda de Mazarredo 14, E48009 Bilbao, Basque Country - Spain
2 Ikerbasque, Basque Foundation for Science, Calle de Maria Diaz de Haro 3, E48013 Bilbao, Basque Country - Spain
3 Zienkiewicz Center for Computational Engineering (ZCCE), Swansea University, Bay Campus, Swansea SA1 8EN, UK
Swimming Efficiently by Wrapping
H. Gidituri1
M. Ellero1,2,3
F. Balboa Usabiaga1 [email protected]
August 12, 2023
===========================================================================
Single flagellated bacteria are ubiquitous in nature.
They exhibit various swimming modes using their flagella to explore complex surroundings such as soil and porous polymer networks.
Some single-flagellated bacteria swim with two distinct modes,
one with its flagellum extended away from its body and another with its flagellum wrapped around it.
The wrapped mode has been observed when the bacteria swim under tight confinements or in highly viscous polymeric melts.
In this study we investigate the hydrodynamics of these two modes inside a circular pipe.
We find that the wrap mode is slower than the extended mode in bulk but more efficient under strong confinement
due to a hydrodynamic increase of its flagellum translation-rotation coupling.
§ INTRODUCTION
Bacteria are prokaryotic microorganisms forced to live in a zero Reynolds number environment.
Due to the kinematic reversibility of viscous flows, some bacteria have developed a non-reciprocal propulsion mechanism for locomotion, the rotation of flagella.
The cell body and the flagella are rotated in opposite directions by molecular motors.
Under rotation the flagella adopt an helical shape and propel the bacterium by working as a screw.
Some bacteria can move both forward or backward, in a push or pull mode, depending on the direction of rotation of the molecular motors
and on the chirality of their flagella.
As bacteria are often found in confined environments they have developed different strategies to swim while foraging in those conditions.
One example is a swimming mode used by some monotrichous and bipolar bacteria
where bacteria wrap their flagella around their own bodies resembling an Archimedes' screw <cit.>.
These bacteria swim alternating between two different modes, the wrapped mode and the extended mode, where the latter has the flagella extended away from their bodies.
The wrap mode emerges when a cell encounter highly viscous or strongly confined environments <cit.>.
When a cell gets trapped during its forward pushing mode
a buckling instability occurs in the flagellar hook that triggers the flagellum wrapped mode <cit.>.
The number of known bacterial species showcasing a wrap mode under confinement is growing <cit.>.
Thus, a natural question arises: is the wrapped mode a mere accident or is it selected due to some advantage to the bacteria?
Some studies suggest that the wrapped mode confer advantages to the motion in confinement environments.
Kühn et al. observed experimentally that the wrapped mode can enhance the motion in highly viscous and structured environments <cit.>.
Kinosita et al. studied the motion of bacteria with wrapped mode in very tight confinements and concluded that the wrapped mode
can allow the bacteria to glide over the substrate <cit.>.
Along this line of work we investigate how the flagella motion in the wrapped mode favors the motion
of bacteria under strong confinement by hydrodynamic interactions only.
To this end we investigate the swimming of bacteria inside circular pipes by means of CFD simulations.
We show that the extended mode is more efficient in bulk and wide pipes while the wrapped mode can be more efficient in tight pipes.
The scheme of the paper is the following.
In Sec. <ref> we describe our numerical method, describe our results in Sec. <ref>
and conclude in Sec. <ref>.
§ NUMERICAL METHOD
We model a monotrichous bacterium as a rigid ellipsoid with an helical flagellum attached to one of its poles.
The flagellum is also modeled as a rigid object, which is a good approximation to study steady state swimming <cit.>.
The body and the flagellum are connected by inextensible links that allow the flagellum to rotate freely around its main axis
but otherwise it is forced to move concomitant to the rigid ellipsoid.
The rigid objects, ℬ_n, move with linear and angular velocities, u_n and ω_n, where we use the subindex n to denote
either the bacterium body or the flagellum.
Due to the small bacterium size, the flow Reynolds number is vanishingly small, Re∼ 10^-5.
Thus, the flow can be modeled with the Stokes equations
- ∇ p + μ∇^2 v = 0,
∇·v = 0,
where p and v are the fluid pressure and velocity and μ its viscosity.
The no-slip boundary condition is imposed on the surface of the bacterium body and its flagellum
v(r) = u_n + ω_n× (r-q_n ) for r on the bacterium,
where q_n is the tracking point of the rigid bodies (e.g. the bacterium body center and the flagellum attaching point respectively).
To solve the coupled fluid-structure interaction problem we use the rigid multiblob method for articulated bodies.
We summarized the numerical method while a detailed description can be found elsewhere <cit.>.
The rigid bodies are discretized with a finite number of blobs with positions r_i as shown in Fig. <ref>.
As the inertia is negligible the conservation of momentum reduces to the balance of force and torque.
The discrete force and torque balance for the rigid object n can be written as,
∑_i∈ℬ_nλ_i - ∑_i∈ℒ_nϕ_n = F_n,
∑_i∈ℬ_n (r_i -q_n) ×λ_i - ∑_i∈ℒ_n (Δl_np -q_n) ×ϕ_n = τ_n,
where F_n and τ_n are the external forces and torques acting on the rigid objects, while λ_i are the constraint forces acting on the blobs that ensure the rigid motion of the bacterium body and the flagellum.
The second sums in (<ref>)-(<ref>) run over the links, ℒ_n,
attached to the rigid object n, and ϕ_n is the force exerted by the link n
to keep the rigid bodies connected, while |Δl_np| is the link length.
The discrete no-slip condition evaluated at each blob i is,
v(r_i) = ∑_j M_ijλ_j = u_n + ω_n× (r_i-q_n ) for i∈ℬ_n.
The mobility matrix M_ij gives the hydrodynamic interaction between any two blobs, i and j, of radii a_i and a_j.
We use a regularized version of the Oseen tensor, the Rotne-Prager tensor, <cit.>.
M_ij = 1/(4π a_i a_j)^2 ∫∫δ(|r'-r_i| - a_i) G(r',r”) δ(|r”-r_j| - a_j) d^3r' d^3r” ,
where G(r,r') is the Green's function of the Stokes equation and δ(r) the Dirac's delta function.
The advantage of this formulation is that the regularized mobility has no divergence even when blobs get close
and it is not necessary to use special quadrature rules.
The equations (<ref>)-(<ref>) form a linear system for the unknown velocities, u_n and ω_n,
and constraint forces, λ_j and ϕ_n, that can be solved efficiently with iterative methods such as GMRES <cit.>.
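As an illustration of the blob-blob coupling used above, the snippet below builds the dense Rotne-Prager mobility matrix for a set of equal-radius blobs. It is a minimal sketch with our own function names, not the implementation used in this work, which additionally assembles the rigid-body geometric operators and solves the resulting saddle-point system with GMRES.

```python
import numpy as np

def rpy_mobility(r, a, mu):
    """Dense 3N x 3N Rotne-Prager mobility for N equal blobs of radius a.

    r: (N, 3) blob positions, mu: fluid viscosity.  The pairwise blocks are
    regularized, so the matrix stays well behaved even for overlapping blobs.
    """
    N = len(r)
    M = np.zeros((3 * N, 3 * N))
    c0 = 1.0 / (6.0 * np.pi * mu * a)          # self mobility
    for i in range(N):
        for j in range(N):
            if i == j:
                block = c0 * np.eye(3)
            else:
                d = r[i] - r[j]
                s = np.linalg.norm(d)
                e = np.outer(d, d) / s**2
                if s >= 2 * a:                  # far-field Rotne-Prager
                    block = (1.0 / (8.0 * np.pi * mu * s)) * (
                        (1 + 2 * a**2 / (3 * s**2)) * np.eye(3)
                        + (1 - 2 * a**2 / s**2) * e)
                else:                           # regularized overlap correction
                    block = c0 * ((1 - 9 * s / (32 * a)) * np.eye(3)
                                  + (3 * s / (32 * a)) * e)
            M[3*i:3*i+3, 3*j:3*j+3] = block
    return M
```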
§ RESULTS AND DISCUSSION
In this section we study the swimming of bacteria inside circular pipes of radius r_0 and length L_0 ≈ 21 r_0 aligned along z.
Keeping the aspect ratio constant ensures that the flow disturbance created by a bacterium decays to negligible values at the pipes ends <cit.>.
We model the pipes as immobile rigid objects <cit.>.
We place the bacteria in the middle of the pipes and we use that configuration to compute the bacteria velocity.
As the Stokes equations assume a steady state flow, solving one mobility problem is enough to determine the velocities.
Later, we will consider the case where bacteria freely swim in a pipe periodic along its main axis.
We consider two different swimming modes.
First, the extended mode where the flagellum is attached to the body front part and it extends away from it.
In the second mode the flagellum is wrapped around the bacteria body, see Fig. <ref>.
In both cases we apply constant and opposite torques, of magnitude τ=0.46 pNμ m, to the body and the flagellum to model the work exerted by a molecular motor.
Thus, we assume that the molecular motor always works on the low frequency (constant torque) regime <cit.>.
In most numerical experiments the flagellum extends along its main axis a length similar to the bacterium body.
Thus, in the wrapped mode the body is fully covered by the flagellum.
The bacterium body, always 2.04 μ m long and 0.49 μ m wide, is discretized with 292 blobs of radius a=0.0425 μ m.
The geometric details of the helical flagella and pipes used in this work
are presented in Tables <ref> and <ref>.
All the motion is driven by the rotation of the flagellum.
Therefore, we start looking at its angular velocity, ω_z, see Fig. <ref>a.
In bulk the flagellum rotates two times faster in the extended mode than in the wrapped mode.
The slower rotation can be explained by the additional drag experienced by the flagellum in the wrapped mode,
which is caused by the proximity of the flagellum to the bacterium body.
Both modes reduce their angular velocities as r_0 decreases due to the additional hydrodynamic drag generated by the pipe walls.
However, the decrease is proportionally less important in the wrapped mode as its initial drag was larger.
Thus, the ratio between the angular frequencies of the two modes falls from a factor 2.0 in bulk to a factor 1.6 in the smallest pipe considered.
Next, we look at the swimming speed along the pipe axis, u_z, see Fig. <ref>b.
We observe that in bulk the wrapped mode swims about twice slower than the extended mode.
This result is consistent with experimental observations <cit.>.
The slower swimming speed in the wrapped mode is a consequence of the slower rotation of its flagellum.
Under confinement the swimming speed, u_z, decreases for the extended mode as the pipe radius is decreased.
Again, the additional hydrodynamic drag generated by the pipe walls is responsible for this effect.
In contrast, the wrapped mode exhibits a non-monotonic trend in its swimming speed.
As the pipe radius is decreased the bacterium swims faster up to the point where the ratio between the pipe radius
and the flagellum amplitude is r_0 / α≈ 1.5.
Beyond that point the swimming speed decreases with r_0.
The Stokes equations are linear and thus the linear and angular velocity are proportional when keeping all geometric parameters constant.
We could have imagined that changing the pipe radius would affect the flagellum rotation and the bacterium translation to a similar degree.
That is approximately true for the extended mode but completely false for the wrapped mode as shown in the inset of Fig. <ref>a.
To understand this difference and the unusual swimming speed increase observed with the wrapped mode we consider the motion of a single helical flagellum inside a pipe.
We apply a constant torque on the helical flagellum and measure its translational and rotational speeds.
Note that in this case the flagellum is not a torque-free swimmer, as there is no body to which apply an opposite torque.
Nonetheless, this numerical experiment is useful to understand the more complex wrapped mode.
We observe an increase in the swimming speed for decreasing pipe radius with respect to the bulk value above a critical pipe radius, see Fig. <ref>a,
similar to the wrapped mode results.
For the single flagellum its swimming speed can be written as u_z = M_trτ_z.
For moderate confinements the hydrodynamic interactions with the wall increase the value of the mobility coupling term, M_tr, with respect to the bulk values,
thus, the swimming speed is increased.
For very tight confinements the lubrication interactions dominate the interactions with the wall and M_tr decreases below the bulk values.
These effects were already reported by Liu et al. for an infinite flagellum within an infinite pipe <cit.>.
This speed increase is observed despite the reduction in the flagellum angular velocity, ω_z, with r_0, see Fig. <ref>a inset.
The wrapped mode takes advantage of the increased translation-rotation coupling of its flagellum under confinement to increase its speed.
In the extended mode the flagellum translation-rotation coupling is increased just as in the wrapped mode.
However, the drag on the body increases faster with smaller r_0, the combined effect is to reduce the swimming speed.
In the wrapped mode the body is protected by the flagellum, moving in the same direction, and thus the increase in the body drag is less important.
This interplay between the enhanced translation-rotation coupling, which increases thrust and the swimming speed, and the drag on the bacterium
body which reduces it, has been observed in a recent experimental study with E. coli <cit.>.
Vizsnyiczai et al. observed that a bacterium swimming in an extended mode inside a pipe swims slower than a bacterium in a channel.
However, when the bacterium is exiting the pipe and only its flagella remain inside, the swimming speed is larger than in the channel.
The reason is the increased translation-rotational coupling experienced by the flagella and the lack of an additional drag acting on the bacterium body.
This result was nicely reported in Fig. 5 of Ref. <cit.>.
After the flagella exit the pipe the speed decreases to the bulk value.
Our results agree with their observations.
§.§ Power and Efficiency
The power consumption is an important quantity for a microswimmer propelling in a viscous environment
and the efficiency can be more important than the absolute swimming speed.
Thus, we measure these quantities.
Considering the chemical energy used within the cell is beyond the scope of our work; thus, we limit ourselves to studying the power dissipated by the Stokes flow
and the microswimmers hydrodynamic efficiency.
The power exerted by a microswimmer to the medium and dissipated by the flow is
P = ∑_n (F_n·u_n + τ_n·ω_n),
where the sum is over rigid bodies, in our case the bacterium body and its flagellum.
As the power is generated by the motor, the power consumed by a bacterium during its swimming can be rewritten as
P_m = τ_m·ω_m = τ_m· (ω_flag - ω_body).
In the absence of elastic or soft steric interactions both expressions are equivalent.
We will always use (<ref>) to account for soft steric interactions used in Sec. <ref>.
The wrapped mode consumes less power for all pipe radii owing to the slower rotation of its flagellum, see Fig. <ref>b inset.
Under confinement the power exerted by the motor decays for both swimming modes.
Of more interest is the hydrodynamic efficiency of the swimmers to propel themselves.
There are several approaches to define the hydrodynamic efficiency <cit.>.
We follow a classical approach and define the inverse efficiency as the power normalized with the
power necessary to pull the body with the same speed <cit.>
η^-1 = M_zz P / u_z^2,
where M_zz = u_z/f_z is the body mobility along the pipe axis and u_z the velocity.
The Fig. <ref>b shows the variation of the inverse efficiency as a function of the pipe radius.
It is evident from the figure that in bulk and wide pipes the extended mode is more efficient.
However, there is a crossover and for tight confinements the wrapped mode becomes more efficient.
This is a result of the lower power consumption of the wrapped mode and, importantly, its enhanced velocity within the pipe.
This result suggest that the wrapped mode is beneficial to selfpropel in confined spaces.
So far we have only used one flagellum, model II, and a bacterium placed exactly on the middle of the pipe.
In the next two sections we explore whether these results are robust under a change of these conditions.
§.§ Robustness of results: Effect of N_λ and L
Bacteria species present flagella of different lengths, amplitudes and pitch angles
which affect the bacteria bulk speeds and efficiencies <cit.>.
Here, we explore if the wrapped mode is a more efficient swimming style in confined environments for a wide variety of flagella models.
We build five flagella models by varying simultaneously the flagellum length, L, and the number of waves along its length,
N_λ=L_z / λ, where L_z is the flagellum extension along its axis and λ the
wavelength of the helical wave, see Fig. <ref>(a,b) and Table <ref>.
We present the inverse efficiency for all flagella models and pipe radii in Fig. <ref>(c,d).
The general trend is the same as before.
For wide pipes the extended mode swims more efficiently than the wrapped mode for all flagella models except one (N_λ=2.5).
Under confinement both swimmers increase their efficiency but the improvement is stronger for the wrapped mode which becomes the most efficient
for pipes with r_0 / α⪅ 1.7.
In those situations the wrapped mode is approximately two times more efficient than the extended mode.
The efficiency, for both swimming modes, is non-monotonic in N_λ.
When N_λ≪ 1 the flagellum is almost straight, thus, it cannot propel the bacterium.
Therefore, the swimming speed and the efficiency initially grow with N_λ.
Beyond a certain value of N_λ the flagellum tangent forms a large angle with the direction of motion, which again reduces the propulsion efficiency.
For intermediate values of N_λ the flagellum is helical-shaped which allows propulsion.
For both modes the flagellum with N_λ=1.5 is the most efficient under confinement for the flagella lengths considered.
For bacteria swimming in bulk the optimum is also close to N_λ=1.5, although the exact optimum N_λ depends on the flagellum length <cit.>.
For the extended mode, optimal swimming occurs around the non-dimensional pipe radius, r_0/α = 1.5 for all values of N_λ.
For the wrap mode the optimal swimming occurs for lower values of r_0/α.
§.§ Robustness of results: dynamical simulations
So far we have computed the swimming speed when the bacteria are located in the middle of the pipe and aligned along it.
However, freely swimming bacteria can tilt and move towards the pipe wall.
To verify if the results reported so far are robust, we perform dynamic simulations where the bacteria are free to displace away
from the pipe centerline and to change orientations.
We use the same pipe models as before but imposing periodic boundary conditions along the pipe.
To solve the Stokes equations with these boundary conditions we use a periodic Fast Multipole Method implemented in the library STKFMM <cit.>.
To avoid the overlap of the bacterium with the pipe we include
a steric repulsion interaction between the blobs of pipe and bacterium with a repulsion strength f=5×10^-5pN μ m
for overlapping blobs and with an exponential decay with a characteristic length ξ=0.01 μ m for non-overlapping blobs.
For all models considered in this section we simulate the bacteria for 10 s so the bacteria can swim at least 70 μ m.
We use the last 8 s to extract the swimming speed and the power consumption.
The results for bacteria with the flagella model VII, the one used in Fig. <ref>, are shown as full symbols in Fig. <ref>.
The same general trend as for the static simulations is observed.
However, the efficiency curves do not cross over.
The cross over is not observed because this time the wrapped swimming speed along the pipe, u_z, barely increases with confinement,
and the efficiency depends strongly on u_z.
The magnitude of u_z does not increase because the bacterium swims with a tilt towards the wall, see Fig. <ref>c and Movie 1.
In contrast, the extended mode cannot tilt significantly on small pipes as that is prevented by its rigid flagellum, which favours the motion along the pipe.
To verify the role of the tilt we run another set of simulations using a longer flagellum, model VI, that extends beyond the bacterium body,
see Fig. <ref>c and Movie 2.
The results are presented as open symbols in Fig. <ref>.
In this case the speed of the wrapped mode is approximately independent on the confinement but larger than with the shorter flagellum.
As a result we observe a crossover between the efficiencies of the wrapped and extended modes.
Overall, these results show that (i) the swimming speed is less sensitive to confinement for the wrapped mode than for the extended mode,
(ii), the efficiency improves strongly for the wrapped mode and (iii), depending on the flagellum details,
the wrapped mode can be the most efficient way to swim.
§ CONCLUSIONS
In this paper we have presented the dynamics of two different swimming modes, namely the extended and wrapped modes of monotrichous type bacteria.
Under bulk conditions the extended mode swims faster and more efficiently than the wrapped mode.
However, under strong confinement the efficiency of the wrapped mode improves faster than for the extended mode.
For a wide number of flagella shapes, with different lengths and wavelengths, the bacteria in the wrapped mode swim more efficiently.
These results are complementary to the experimental work of Kinosita et al.
where the bacteria Burkholderia adopting the wrapped mode was observed to glide in very narrow ducts <cit.>.
It seems that, either by gliding over a substrate or by means of hydrodynamic interactions, the wrapped mode promotes the motion
of bacteria in tight confinements.
It is interesting to note that some bipolar flagellated bacteria can display a wrapped and an extended mode simultaneously,
where the flagellum at the front pole wraps around the body and the rear one remains extended <cit.>.
Such mixed mode could present some advantages under confinement that should be investigated.
§ ACKNOWLEDGMENTS
The project that gave rise to these results received the support of a fellowship from “la Caixa”
Foundation (ID 100010434), fellowship LCF/BQ/PI20/11760014, and from the European Union's Horizon 2020 research and innovation
programme under the Marie Skłodowska-Curie grant agreement No 847648.
Funding provided by the Basque Government through the BERC 2022-2025 program
and by the Ministry of Science and Innovation: BCAM Severo Ochoa accreditation
CEX2021-001142-S/MICIN/AEI/10.13039/501100011033 and the project PID2020-117080RB-C55
“Microscopic foundations of soft matter experiments: computational nano-hydrodynamics (Compu-Nano-Hydro)” are also acknowledged.
|
http://arxiv.org/abs/2307.03886v1 | 20230708033922 | On Regularization and Inference with Label Constraints | [
"Kaifu Wang",
"Hangfeng He",
"Tin D. Nguyen",
"Piyush Kumar",
"Dan Roth"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
[
ICML'2023
On Regularization and Inference with Label Constraints
Kaifu Wang (University of Pennsylvania, Philadelphia, PA, USA)
Hangfeng He (University of Rochester, Rochester, NY, USA; part of the work done while at the University of Pennsylvania)
Tin D. Nguyen (Massachusetts Institute of Technology, Cambridge, MA, USA)
Piyush Kumar (Systems and Technology Research, Woburn, MA, USA)
Dan Roth (University of Pennsylvania, Philadelphia, PA, USA)
Correspondence: Piyush Kumar <[email protected]>, Dan Roth <[email protected]>
Keywords: Machine Learning, ICML
]
Prior knowledge and symbolic rules in machine learning are often expressed in the form of label constraints, especially in structured prediction problems.
In this work, we compare two common strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference, by quantifying their impact on model performance.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints. However, its preference for small violations introduces a bias toward a suboptimal model.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
Given these differences, we further explore the use of two approaches together and propose conditions for constrained inference to compensate for the bias introduced by regularization, aiming to improve both the model complexity and optimal risk.
§ INTRODUCTION
Domain knowledge in machine learning is often framed as constraints on the output label space.
Such label constraints have been widely identified in natural language processing tasks
<cit.>
and studied in the context of structured prediction
<cit.>.
For example, in temporal reasoning <cit.> where the model is asked to label the relations (“before” or “after”) among a set of events, the assigned labels will need to satisfy a transitivity constraint which means, for example, the facts that an event E_1 is after E_2 and that E_2 is after E_3 imply that E_1 is after E_3.
The central question is how to encode such a constraint into a learning algorithm to ensure better performance and generalization of the learned model.
Practitioners have developed two techniques to encode a label constraint in a machine learning pipeline. The first, called regularization with constraints, penalizes a model for its violation of the constraint in addition to the classification loss <cit.>. The second, called inference with constraints, modifies prediction rules directly by enforcing strictly constrained inference <cit.> or balancing the original model's output with the constraint in a soft way <cit.>.
Although these two learning algorithms have been shown to be empirically successful, we are not aware of theoretical analyses that elucidate each algorithm's advantages or disadvantages in comparison with the other one. Natural questions include, how do these two differ in their impact on the learned model? Moreover, in practice, the constraints could be noisy i.e. <cit.>. In such cases, do they still improve the model performance? If so, by how much?
Focusing on multiclass classification with label constraints, we compare regularization with constraints and constrained inference.
For each algorithm, we quantify its optimal risk (aka approximation error) and its generalization gap (aka estimation error).
Specifically, in Section <ref>, we show that regularization with constraints achieves a smaller generalization error by reducing the model complexity but will introduce a bias towards a suboptimal model if the risk minimizer and violation minimizer does not coincide.
In Section <ref>, we study a broad family of constrained inference model called Constrained Conditional Model (CCM) <cit.> and point out that the constrained inference could reduce the risk of a model if and only if the model violates the constraint more than the true data distribution.
This further suggests finding models with higher violation, which contrasts the learning objective used in regularization that discourages violation.
Given these contrasts, we further study the combination and interaction of the two methods in Section <ref> and describe how constrained inference could compensate for the bias introduced by regularization.
To the best of our knowledge, our analysis is the first to provide a theoretical view on comparing the two approaches. We believe in the importance of this comparison and hope to bring this problem to the attention of the machine learning community.
In summary, our contributions include:
* We provide an error bound (Theorem <ref>) that describes the tradeoff between the generalization gap and the optimal risk when performing regularization with constraints.
* We propose a sufficient and necessary condition (Theorem <ref>) for constrained inference to improve a model by quantifying its reduction in risk.
Based on this, we further argue that constrained inference, when used at training time, implicitly modifies the training objective in an opposite direction as in the regularization approach (Proposition <ref>).
* We study the combination of regularization and constrained inference, and propose sufficient (Theorem <ref>) as well as necessary (Theorem <ref>) conditions for the combined algorithm to achieve improvement in both optimal risk and model complexity.
Proofs of all the theoretical results are in the appendix.
§ PRELIMINARIES
Our goal is to learn a mapping from the instance space X to the output space Y.
The learner has access to a set of labeled training data S_ L of size m_ L, which contains i.i.d. samples of a distribution P on X ×Y.
The marginal distribution of X is denoted as P_X.
In this work, we assume the ground truth label associated with x ∈X is generated by a deterministic mapping y_o:X →Y (the subscript o is short for oracle). We also denote the true label as y_o when the context is clear.
Model.
The scoring class F contains scoring functions f:X ×Y →R.
We will also call an f∈F a classifier.
Let Δ_Y be the |Y|-dimensional probability simplex. Each scoring function
induces a probabilistic prediction P_f(·|x) ∈Δ_Y by performing softmax inference as P_f(y|x) ∝exp(f(x,y)).
Loss Function.
The prediction of f at x is evaluated by the classification error (or ℓ^1 loss) L(x,y_o,f) := 1 - P_f(y_o|x), which is half the ℓ^1 distance between the one-hot distribution e_y_o and P_f on Δ_Y.
It can also be viewed as a smoothed version of the standard zero-one loss in the sense that lim_t →∞ L(x,y_o,tf) = 1{argmax_y∈Y f(x,y) ≠ y_o}.
More background on the definition of the ℓ^1 loss is provided in Appendix <ref>.
A scoring function f is evaluated by its risk R(f) := E[L(x,y_o,f)]. The empirical estimate of the risk using the labeled examples in S_ L is denoted as R̂(f; S_ L).
We also consider the cross-entropy surrogate loss defined as L_ce(x,y_o,f) := -logP_f(y_o|x) and refer to its expectation R_ce(f) = E[L_ce(x,y_o,f)] as the cross-entropy risk.
Label constraint.
A label constraint (or constraint for short) is a deterministic mapping C:X → 2^Y-{∅}. Namely, C maps an instance x to a nonempty subset of Y, which may or may not contain the true label y_o(x). In particular, we say a constraint C is noise-free if P(y_o∈ C(x))=1. Otherwise, C is said to be a noisy constraint and its noise rate is denoted as V_o := P(y_o(x) ∉ C(x)).
Violation.
A constraint C is equipped with a violation function, which is an indicator function v_C(x,y) = 1{y∉ C(x)}. We also overload the notation v and define the violation of a classifier f at an instance x as v_C(x,f):= 1-P_f(C(x)|x) = ∑_y∉ C(x)P_f(y|x). Its expectation is V_C(f):= E[v_C(x,f)]. We elide the subscript C and write them as v(x,y), v(x,f) and V(f) when the context is clear. Similar to the classification error, we consider a cross-entropy surrogate of the violation function defined as v_ce(x,f):=-logP_f(C(x)|x) and its expectation V_ce(f) = E[v_ce(x,f)].
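To fix ideas, here is a small NumPy sketch of these quantities (the naming is ours, not from the paper): given the scores f(x, ·) for a single instance, it computes the softmax prediction, the ℓ^1 classification error, the violation of a constraint set, and their cross-entropy surrogates.

```python
import numpy as np

def softmax(scores):
    z = np.exp(scores - scores.max())
    return z / z.sum()

def losses(scores, y_true, constraint_mask):
    """scores: (|Y|,) values of f(x, y); constraint_mask[y] = True iff y in C(x)."""
    p = softmax(scores)                       # P_f(. | x)
    L = 1.0 - p[y_true]                       # classification error (ell^1 loss)
    v = 1.0 - p[constraint_mask].sum()        # violation v_C(x, f)
    L_ce = -np.log(p[y_true])                 # cross-entropy surrogate of L
    v_ce = -np.log(p[constraint_mask].sum())  # cross-entropy surrogate of v
    return L, v, L_ce, v_ce

# Example: 4 labels, the constraint keeps labels {0, 1}, the true label is 1.
print(losses(np.array([2.0, 1.0, 0.5, -1.0]), 1, np.array([True, True, False, False])))
```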
Rademacher complexity.
We use the following version of Rademacher complexity that is adopted from <cit.> to characterize the generalization ability of the scoring space of multiclass classifiers F:
The empirical Rademacher complexity of scoring class F with respect to a set S = {x_i}_i=1^m that contains m samples of the instance is defined as
ℜ_m(F;S)
:=
1/mE_ϵ[
sup_f∈F∑_i=1^m
∑_y∈Yϵ_i,y f(x_i,y)
]
where ϵ=(ϵ_i,y)_i∈ [m],y∈Y are independent Rademacher random variables, each of which is uniformly distributed over {-1,+1}. The Rademacher complexity of scoring class F is the expectation of the empirical version:
ℜ_m(F)
:= E_S ∼P_X^m[ℜ_m(F;S)]
This definition of Rademacher complexity is a special case of the factor graph complexity proposed by <cit.>, which is defined for more general structured prediction models. It is hence possible to extend our results of the generalization bounds to structured models by replacing the Rademacher complexity with factor graph complexity. In this work, we focus on multiclass classifiers for the simplicity of presentation.
§ REGULARIZATION WITH CONSTRAINTS
In a standard machine learning algorithm, the learner receives a set of labeled data S_ L ∈∪_m=1^∞(X ×Y)^m and finds the empirical risk minimizer, which is defined as argmin_f ∈FR̂(f;S_ L).
In this section, we consider a method that modifies this learning objective by adding a regularization term defined with the constraint C. Precisely, we consider minimizing an augmented objective defined as
L_ρ (f)
:= R(f) + ρ V(f)
where ρ≥ 0 is a fixed tradeoff parameter.
The idea of regularizing the model by adding a penalty for the violation of the constraints on an unlabeled dataset is widely adopted in the literature. In particular, the cross entropy violation is known as the semantic loss <cit.> in the context of logical constraints. Other designs of the regularization term include using the KL-divergence on the probability space in the posterior regularization algorithm <cit.> and using the t-norms from fuzzy logic <cit.>.
We will show this algorithm improves the generalization error by reducing the complexity of the scoring space (Theorem <ref>), but in general leads to a larger classification risk in the long run (Proposition <ref>), thus resulting in a tradeoff between estimation and approximation errors.
§.§ Semi-supervised Regularization with Constraints
We consider a semi-supervised approach where the learner has access to an unlabeled dataset S_ U that contains m_ U independent samples of the instance X, resulting in the following definition.
Given a labeled dataset S_ L of size m_ L and an unlabeled dataset S_ U of size m_ U, a scoring space F and a tradeoff parameter ρ≥ 0, we define and denote the empirical risk and violation minimizer (ERVM) as:
f_ρ(S_ L,S_ U) := argmin_f∈F ( 1/m_ L∑_(x,y)∈ S_ L L(x,y,f) + ρ/m_ U∑_x∈ S_ U v_C(x,f) ).
We also denote the expected version as:
f_ρ := argmin_f ∈F R(f) + ρ V_C(f).
For example, with our notation, f̂_0 is the ERM and f_∞ is the minimizer of the expected violation function. Notice that the minimizer in general is non-unique. Therefore, when we state any proposition that is related to f_ρ or f̂_ρ, we mean the proposition will hold for any of the minimizers.
§.§ Deviation from The Optimal Risk
In this section, we study how the risk of the minimizer f_ρ will deviate from the optimal risk in F. The reason that we are interested in bounding R(f_ρ) is that in general the minimizer R(f_ρ) is non-unique and may have different values of risks. Therefore, to describe the risk of ERVM in the long run (in Theorem <ref>), we provide an upper bound for all the possible risks of f_ρ.
For any constraint C and ρ≥ 0, the following holds.
R(f_0)
≤R(f_ρ)
≤R(f_0) + ρ (V(f_0) - V(f_∞))
.
The same relation also holds for the empirical estimates R̂ and V̂. Moreover, for any ρ>0, there exists a scoring space and data distribution so that the RHS can be reached even with a noise-free constraint C.
This result shows the minimizer of the regularized objective in general has a suboptimal risk over F. On the other hand, if the risk minimizer is simultaneously a violation minimizer, i.e., V(f_0) = V(f_∞), this relation implies consistency, i.e., R(f_ρ) = R(f_0).
This quantity V(f_0) can be small when the noise rate V_ is small and the model is expressive enough (e.g., a deep neural net) to approximate the true model.
§.§ Generalization Bounds
Now we discuss how regularization could reduce the complexity of the hypothesis class. The first step is to show that the violation of the target hypothesis is not too large. In particular, the following bound is a direct consequence of minimizing the regularized objective:
Let f_ρ be a minimizer of the regularized learning objective defined in (<ref>). If the minimum violation in F is upper bounded by a known constant u ≥ 0, i.e., V(f_∞) ≤ u, then V(f_ρ) ≤ 1/ρ + u.
The upper bound u can be set to arbitrarily small by adding a baseline model defined as f_t(x,y) = t·1{y∈ C(x)} and driving t to infinite. This construction is possible due to the fact that the mapping C is known to the learner. The benefits of knowing C will be further explored in Section <ref> when we discuss inference with constraints.
For any B ≥ 0, we let F_B := {f ∈F| V(f) ≤ B} be the set of classifiers with small violation.
From the above discussion, we know that the target hypothesis f_ρ will lie in a smaller space F_u+1/ρ, which is characterized by the violation function and hence can be identified only with unlabeled data. To this end, we describe how the violation as well as the risk can be estimated with data.
Given a labeled dataset S_ L of size m_ L, for any δ>0, with probabilistic at least 1-δ, the following inequality holds uniformly for f ∈F:
R(f)
≤R̂(f;S_ L) + ℜ_m_ L(F) + √(log(1/δ)/2m_ L)
Given a unlabeled dataset S_ U of size m_ U, for any δ>0, with probabilistic at least 1-δ, the following inequality holds uniformly for f ∈F:
V(f)
≤V̂(f;S_ U) + ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
The proof of this result relies on a contraction lemma established in <cit.>, which was used to analyze the argmax inference with margin losses. Our analysis extends their results to softmax inference, which may be of independent interest.
Furthermore, if the size of the constrained set C(x) is a constant, namely |C(x)|=c_0 < c = |Y| for all x ∈X, then the Rademacher complexity term of equation (<ref>) can be improved to √(2)/2√(1/c-c_0 + 1/c_0)ℜ_m_ U(F) (see the discussion in the proof).
This term is symmetric with the transformation c_0 ↦ c-c_0, due to the fact that estimating the violation V_C of a constraint C is equivalent to estimating V_Y-C.
In particular, when c_0 < c/2, if the constraint is more restrictive and informative (so that c_0 is small), it can be more difficult to estimate the violation.
Assuming lim_m →∞ℜ_m(F) = 0, this result implies L_ρ can be approximated by its empirical version L̂_ρ with a sufficient amount of data. On the other hand, since L̂_ρ is upper bounded by its cross-entropy surrogate R̂_ce + ρV̂_ce, we further have that
L_ρ(f)
≤R̂_ce(f,S_ L) + ρV̂_ce(f,S_ U) + o_m_ L, m_ U(1)
where o_m_ L, m_ U(1) converges to 0 as m_ L, m_ U →∞.
Therefore, in practice one can minimize this upper bound by solving the convex surrogate problem
min_f ∈FR̂_ce(f,S_ L) + ρV̂_ce(f,S_ U),
where R̂_ce(f,S_ L) and V̂_ce(f,S_ U) are the empirical averages of the cross-entropy loss and violation.
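For concreteness, a minimal PyTorch-style sketch of this surrogate objective is given below (the function name and tensor layout are ours): it combines the empirical cross-entropy risk on a labeled batch with ρ times the cross-entropy violation (the semantic-loss-style term) on an unlabeled batch, where the constraint is supplied as a boolean mask over labels.

```python
import torch
import torch.nn.functional as F

def regularized_objective(scores_lab, y_lab, scores_unlab, mask_unlab, rho):
    """Empirical surrogate R_ce(f, S_L) + rho * V_ce(f, S_U).

    scores_*: (batch, |Y|) raw scores f(x, y); y_lab: (batch,) gold labels;
    mask_unlab: (batch, |Y|) boolean, mask_unlab[i, y] = True iff y in C(x_i).
    """
    # Labeled part: average cross-entropy loss.
    r_ce = F.cross_entropy(scores_lab, y_lab)
    # Unlabeled part: -log P_f(C(x) | x), i.e. minus the log of the probability
    # mass assigned to the constraint set (cross-entropy violation).
    log_p = F.log_softmax(scores_unlab, dim=-1)
    log_p_C = torch.logsumexp(log_p.masked_fill(~mask_unlab, float("-inf")), dim=-1)
    v_ce = -log_p_C.mean()
    return r_ce + rho * v_ce
```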
Finally, using these results, we bound the risk of the classifier learned by ERVM. For simplicity, we will denote the generalization gap B(δ, m, F) := ℜ_m(F) + 2√(log(1/δ)/2m).
We have with probability at least 1-6δ that
R(f̂_ρ)
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞)
+ ℜ_m_ L(F_1/ρ + u + B(δ, m_ U, ℱ))
+ ρℜ_m_ U(F_1/ρ + u + B(δ, m_ U, ℱ))
+ 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)
where ℜ(·) is the Rademacher complexity defined in (<ref>).
First, we show f̂_ρ and f_ρ both lie in the subspace F_1/ρ + u + B(δ, m_ U, ℱ) with high probability since the violation can be well-approximated, according to Lemma <ref>.
Then, the gap between the objective L(f_ρ) and L(f̂_ρ) is controlled by the Rademacher complexity of F_1/ρ + u + B(δ, m_ U, ℱ).
Finally, we use the inequalities established in Lemma <ref> to further upper bound the term L(f_ρ) using the risk and violation of f_0.
Using the same proof technique, this result can be extended to other choices of loss function as long as:
(a) The loss is bounded so that the optimal regularized model has a small violation, as in Lemma <ref>. (b) The loss is Lipschitz with the model scores so that a generalization bound associated with the loss holds, as in Lemma <ref>.
Reducing the generalization gap.
The bound (<ref>) contains three parts: the first line is the worst risk that can be achieved by f_ρ as we described in Proposition <ref>, the second and the third line is the complexity of the classifiers that have a small violation, and the last line is the errors that are independent of the model.
This bound (<ref>) is most preferable when a large set of unlabeled data is available so that the approximation errors of violations (i.e., term B(δ/2, m_ U, ℱ), ℜ_m_ U(F_1/ρ + u + B(δ/2, m_ U, ℱ)) and √(log(1/δ)/2m_ U)) are all small. Then, the model complexity is mainly described by the term ℜ_m_ L(F_1/ρ + u), which is the Rademacher complexity of a proper subset of F.
In this sense, the regularization method reduces the generalization gap by reducing the model complexity of the scoring space.
Tradeoff in regularization.
In situations where m_ U is large, the tradeoff parameter ρ balances two quantities: a larger ρ leads to a smaller scoring space F_1/ρ + u, but brings more bias depending on the suboptimality of f_0 in violation, measured by V(f_0)-V(f_∞).
The benefit of regularization is greater if fewer classifiers can achieve a violation that is close to the optimal value V(f_∞).
We provide the following example to illustrate how the Rademacher complexity can be reduced in linear models.
[Logistic Regression]
Consider a linear model for multiclass classification where Y=[c] and f(x,j)=w_j^ T x with ∑_j=1^c w_j_2^2 ≤ 1.
Suppose x ∈R^p is distributed in the unit sphere x_2 ≤ 1 with expectation E[x] = α∈R^p and covariance matrix σ^2I_p× p.
Without constraint, the Rademacher complexity is upper bounded as ℜ_m(F) ≤√(c/m) as in <cit.> (Theorem 2).
Now, consider a constraint that removes exactly one label so that C(x) ≡ [c-1].
With regularization, for sufficiently small t<1/(c+2), we have the following bound
ℜ_m(F_t) ≤1/2(√(c/m) + √((c-σ^2-‖α‖_2^2)/m)) ,
which is strictly tighter than the standard bound. Intuitively, if x is concentrated around the origin 0, the prediction of any classifier tends to be close to the uniform distribution. Therefore, a large bias and variance in x (captured by σ^2+‖α‖_2^2) help to distinguish models with different levels of violation.
Comparison to existing results.
Previous works mostly consider a zero-one loss for both classification and violation under the assumption that the risk minimizer also achieves zero violation.
Then, one can simply preclude all the classifiers f∈F that have nonzero empirical violations on the unlabeled dataset and find the ERM among the remaining classifiers.
This approach has been theoretically studied in <cit.> for binary classification and <cit.> in a similar manner for regression by characterizing the complexity of the reduced set of hypotheses that achieve zero violation.
Conceptually, we can regard this algorithm as a special case of problem (<ref>) when ρ = ∞.
Our study, therefore, extends previous works with a soft learning objective to multiclass classification problems.
§ INFERENCE WITH CONSTRAINTS
An inference algorithm is a mapping F ×X →Δ_Y.
By default, we define it as the softmax inference: (f,x) ↦P_f(·|x).
When performing inference with constraints (or constrained inference), we modify this softmax mapping for the given function f using the additional information of C.
In this section, we study the Constrained Conditional Model (CCM) <cit.>, a broad family of models that perform inference with constraints.
We show that, at testing time, whether CCM reduces the risk depends on whether the model's expected violation is larger than the noise rate of the constraint, V_ (Theorem <ref>).
In particular, when the constraint is noise-free, CCM always achieves a smaller or equal risk.
Furthermore, we show better risks are achieved if the constrained inference is also performed at training time, and pursuing this optimal risk leads to a learning objective that contrasts with the one used in the regularization approach (Proposition <ref>).
To distinguish the two, we will refer to a model in the original space F as a base model and to an augmented model as a constrained model.
§.§ Constrained Conditional Model
CCM augments existing scoring functions using a linear combination with the violation function. Precisely, given a vanilla scoring space F, the scoring space of CCM is defined as follows.
Given a scoring space F, a constraint C and a fixed tradeoff parameter μ∈ [0, ∞], the scoring space of the Constrained Conditional Model (CCM) is defined as:
F^μ
:= { (x,y) ↦ f(x,y) - μ v_C(x,y) | f∈F}
We will also denote
f^μ(x,y)
:= f(x,y) - μ v_C(x,y)
to be the augmented scoring function for a given f∈F. In particular, setting μ = ∞ will assign a score -∞ to any y ∉ C(x), which implies P_f^∞(y|x)=0, namely forcing strictly-constrained inference.
The tradeoff parameter μ allows CCM to improve the base model f despite noisy constraints, as we will discuss in detail in the following sections. Otherwise, if the noise rate is large, performing strictly-constrained inference can be harmful because it assigns 0 probability mass to any label y that is outside C(x) and hence has a classification loss L(x,y_,f^∞)=1 at any x where y_∉ C(x).
The learner can choose whether or not to also perform constrained inference at training time. This choice leads to the following two approaches:
* On-training approach: perform constrained inference both at training and testing time, and directly find the ERM over F^μ using labeled data (also known as Inference Based Training in <cit.>).
* Post-training approach: first find the ERM over the vanilla F using labeled data, and then perform constrained inference at the testing time (also known as Learning Plus Inference in <cit.>).
For both approaches, the generalization ability of CCM is characterized by the complexity of F^μ. So, we first point out that CCM does not increase the Rademacher complexity.
For any fixed μ≥ 0 and m ∈N, we have the following identity:
ℜ_m(F^μ)
= ℜ_m(F)
§.§ Post-training Constrained Inference
For a given and fixed classifier f (presumably trained with data), how does performing constrained inference impact the model performance?
In this section, we study the change in risk when the learner chooses to augment f as a CCM f^μ defined in (<ref>).
It is most convenient to characterize the risk of a CCM using the cross-entropy loss, although we will also conduct the same analysis for the hinge and ℓ^1 losses, as we will point out later.
To start with, for any f and μ∈ [0, ∞], we let
Δ^μ_(f)
:=R_(f) - R_(f^μ)
be the difference in the risk between the base model and the CCM (the larger the better).
We have:
* For any fixed model f, there exists a μ_0 > 0 such that R_(f^μ_0) < R_(f) if and only if
V(f) > V_ .
* The change in risk can be lower bounded as
Δ^μ_(f) ≥ V(f)(1-e^-μ) - μ V_ .
* In particular, if the constraint is noise-free, we have
Δ^∞_(f) = V_(f) .
The first result describes the sufficient and necessary condition for constrained inference to be helpful.
It requires f to have a larger violation (measured by ℓ^1 violation) than the true data on average so that it has the potential to be improved. This condition is easier to satisfy when the constraint is less noisy.
The second result further quantifies the risk reduction as an explicit function of μ.
The last result shows that in the noise-free case, the maximum risk reduction is exactly the expected violation measured by cross-entropy. Its consequences will be further discussed in the next section.
We present the counterparts of Theorem <ref> for hinge loss and ℓ^1 loss in the Appendix <ref>.
The information delivered by those results is consistent with Theorem <ref> in the sense that (1) whether CCM can reduce the risk depends on the comparison between the violation of the original model and the oracle.
(2) the reduction can be described or lower bounded by some measures of the violation.
The drawback of the hinge loss is its non-smoothness due to the discontinuity of the argmax inference. The drawback of the ℓ^1 loss is that the range of μ such that R(f^μ) ≤ R(f) can be disconnected and difficult to describe. Therefore, we provide weaker results by deriving only sufficient or necessary conditions for CCM to reduce the risks.
As an application of Theorem <ref>, we derive a sufficient condition under which CCM achieves smaller risks.
Assuming V(f) ≥ V_, then R_(f^μ) ≤ R_(f) if the following condition holds:
μ≤ W(-η/e^η)+η
where η := V(f)/V_ is the relative violation rate and W is the Lambert W function, whose value W(t) is defined as the solution in w of the equation w e^w = t.
The RHS of (<ref>) increases with η and vanishes as η→ 1.
In particular, when the constraint is noise-free, one should encourage strictly-constrained inference and set μ = ∞. We also provide a plot of the RHS in the proof in the appendix.
§.§ On-training Constrained Inference
In this subsection, we study the on-training approach, where constrained inference is performed both at training and testing time. We use the results established in the last subsection to describe the learning objective of the on-training approach and argue that it achieves better risks than the post-training approach. Based on this, we further show that minimizing the cross-entropy over CCM encourages a large violation of the base model, which contrasts with the learning objective (<ref>) used in regularization.
We provide a simplified analysis for the noise-free setting where we choose μ = ∞ and perform strictly-constrained inference.
Then, the on-training approach aims to find the optimal (in terms of cross entropy) base model as follows:
:=
_f ∈F R_(f^∞)
(recall f^∞ means performing strictly-constrained inference with f) We characterize the behavior of with the following results, which are direct corollaries of Theorem <ref>.
Assuming C is noise-free, we can reformulate the learning objective (<ref>) as
= _ f∈F R_(f) - V_(f)
A fundamental difference.
Surprisingly, the reformulated learning objective (<ref>) is opposite to the surrogate regularized objective defined in (<ref>) in its attitude towards violations. This contrast suggests a fundamental difference between regularization and constrained inference: the regularization method views violations as harmful and precludes classifiers with substantial violations, whereas constrained inference corrects a model's violations, so a large violation means a large potential for improvement.
On-training vs post-training.
Loosely speaking, this result also suggests that in general, the best constrained model is not the constrained best model. To be more precise, suppose we perform post-training constrained inference for the cross-entropy risk minimizer in the vanilla model, i.e., := _f∈F R_ (f).
Then, we can reformulate the definition of as
:= _f∈F(R_(f) - V_(f))_objective in (<ref>), post-training risk + V_(f)
which can be regarded as a “regularized” version of (<ref>). Therefore, similar to Proposition <ref>, we can argue that the risk minimizer over F, as a base model of CCM, contains a bias towards a higher risk than the on-training method's as follows:
R_(^∞)
≤ R_(^∞)
≤ R_() - min_f∈F V_(f)
The proof is included in the proof of Proposition <ref>.
Computational considerations.
In practical structured prediction problems where the output is sequential or graphical, performing constrained inference during training time is typically expensive due to the complexity of the constraints. For example, as pointed out by <cit.>, when the constraint is defined by a logical expression over several output variables, computing the probability of constraint being satisfied corresponds to the problem of weighted model counting (WMC) and is #P-complete <cit.>.
Therefore, to implement the on-training approach in practice, one can alternatively use approximate inference to ensure tractability.
For example, strictly constrained inference, formulated as Integer Linear Programming <cit.>, can be further relaxed as Linear Programming <cit.>.
Another example is amortized inference <cit.>, which accelerates the convergence to the optimal model while only performing exact inference in every τ>1 iterations.
Comparison to existing results.
There has been limited theoretical work discussing the impact of performing constrained inference. The most related one is <cit.>, which derives VC-style generalization bounds for linear structured models to argue that (1) performing strictly constrained inference in a post-training manner (Learning Plus Inference in the paper) improves the model performance and (2) the on-training approach (Inference Based Training in the paper) further reduces the error in the long run. Our approach directly analyses the classification risk and extends the comparison to noisy constraints and soft-constrained inference with CCM.
§ REGULARIZATION WITH CONSTRAINED INFERENCE
We have seen that regularization and constrained inference have different impacts on the generalization gap and the risk.
On one hand, CCM has the same Rademacher complexity as the original model ℜ(F) (Proposition <ref>), which can be reduced by regularization. Hence, applying the regularized algorithm to CCM also reduces the generalization gap.
On the other hand, their impacts on the risks are contradicting, as summarized in figure <ref>.
In this section, we aim to describe how these impacts can interact with each other by applying our established results to explore the usage of these two methods together.
We show both positive and negative results for the combination. On one hand, we propose sufficient conditions under which the bias introduced by regularization can be compensated by performing constrained inference (Proposition <ref>).
On the other hand, we study whether post-training constrained inference can reduce the risk of the optimal classifier f_ρ. We show that, with a noisy constraint, choosing a large value of ρ in the regularized objective (<ref>) makes CCM incapable of reducing the risk (Proposition <ref>).
§.§ CCM Compensates for Regularization Bias
As the red part of Figure <ref> summarizes, we have shown that the regularization and constrained inference have contradicting influences on the risk. Moreover, the regularization bias is controlled by the violation of the risk minimizer (Proposition <ref>), which can be reduced by constrained inference. This suggests the possibility for CCM to reduce the additional risk introduced by regularization.
We formally describe this phenomenon by considering the following combination: an on-training approach that aims to find the minimizer of the following regularized surrogate objective over the CCM F^μ:
f_⋆^μ
:= _g∈F^μ R_(g) + ρ V_(g)
Recall that R_() is the minimum cross-entropy risk that can be achieved in F.
We show that unlike the vanilla regularized objective (<ref>), it is possible for this algorithm to achieve a smaller risk than R_() as follows.
If
CCM improves so that Δ^μ_()> 0,
then letting
ρ < [V_() - μ V_]/V_(^μ) - 1
will imply R_(f_⋆^μ) < R_().
This result shows a small choice of ρ allows the regularized optimizer f_⋆^μ to achieve better cross-entropy.
A less noisy constraint allows more choices of ρ to make this happen.
In particular, when the constraint is noise-free, since V_(^μ) → 0 as μ→∞, driving μ to ∞ will make R(f_⋆^μ) < R() for all ρ > 0.
As a cost, regularization will be less effective in reducing the Rademacher complexity with a large value of μ. In the extreme case, all the classifiers in F^∞ make zero violation, and hence cannot be distinguished by the regularization objective.
§.§ Post-regularized-training Constrained Inference
Finally, as the blue part of Figure <ref> summarizes, we have shown that post-training inference is beneficial only if the average violation of f is larger than V_ (Theorem <ref>). However, the minimizer of the regularized objective f_ρ tends to have a small violation (Proposition <ref>) scaled with 1/ρ.
Therefore, it is possible that, with a noisy constraint, choosing a large value of ρ makes post-training constrained inference incapable of reducing the risk.
Formally, assuming a model is already trained with the vanilla regularized ℓ^1 objective as in (<ref>), we have the following holds.
Recall V(f_∞) is the minimal expected violation that can be achieved by F. If V_≥ V(f_∞) and
ρ≥ 1/(V_ - V(f_∞))
then the minimizer f_ρ of the regularized objective (<ref>) will not be improved by post-training constrained inference for any μ∈ (0, ∞] in the sense that R_(f_ρ) ≤ R_((f_ρ)^μ).
The RHS of (<ref>) shrinks with a larger noise rate V_ and smaller V(f_∞). Intuitively, a more noisy constraint is less helpful (Theorem <ref>), while a small value of V(f_∞) allows f_ρ to violate less (Proposition <ref>) and hence gains fewer benefits from constrained inference (Theorem <ref>).
As a consequence, with a noisy constraint, choosing a large ρ in the regularized objective will make post-training constrained inference unnecessary or even harmful.
§ RELATED WORKS
Regularization with constraints.
In the context of structured prediction, the Posterior Regularization (PR) framework <cit.> proposed to regularize the log-likelihood by adding a distance of the probabilistic prediction to the constrained subspace of distributions.
The CoDL algorithm <cit.> is a semi-supervised algorithm that iteratively assigns constrained pseudo-labels to the unlabeled dataset and uses the pseudo-labels to retrain the model.
CoDL and PR are further unified in <cit.> as special cases of a parameterized EM algorithm.
More recent works have proposed injecting logical constraints into deep models by augmenting the training objective with explicitly defined violation functions, such as the semantic loss <cit.>, the DL2 loss <cit.> and the inconsistency loss <cit.>, which motivate our theoretical formulation in (<ref>).
Inference with constraints.
The idea of injecting prior knowledge directly into a predictive model dates back to <cit.>, which formulates the problem of inference with hard constraints as Integer Linear Programming (ILP).
The idea of constrained inference has been followed and developed by NLP researchers and empirically shown to be effective in various problems such as summarization <cit.>, temporal reasoning <cit.>, semantic parsing <cit.> and text generation <cit.>.
<cit.> further defines the CCM to incorporate soft constraints into linear models.
Another related work is <cit.>, which uses Bayesian networks to model the label correlations and define an order to the labels.
The order information is then taken as extended features at inference time.
Theoretically, <cit.> provides a comparison between the on-training and post-training constrained inference using VC-style error bounds.
Semi-supervised learning theory.
Several theoretical semi-supervised learning frameworks such as <cit.> and <cit.> illustrate how hard constraints on the hypothesis space could reduce the generalization error. A detailed comparison can be seen in the discussion at the end of Section <ref>.
Learning with partial labels.
The problem of learning with constraints is closely related to the problem of learning from partial labels (also known as superset labels) <cit.>, where each instance x in the dataset is assigned a partial label s that also takes values in 2^Y.
The difference is that the constraint mapping itself is known to the learner and hence can be encoded in the inference algorithm directly, for example, via the CCM. Another difference is that partial labels are typically more informative and can guarantee learnability on their own <cit.>. In contrast, the constraints that appear in practice typically provide only side information and need to be used together with gold labels.
§ CONCLUSION AND FUTURE WORKS
In this paper, we presented a theoretical study of two methods to encode label constraints into a learning system: regularization and constrained inference.
We compared these two approaches by quantifying their impact on the optimal risk as well as the generalization error.
Our study revealed that the success of these two approaches relies on different data assumptions:
the regularization method requires the optimal classifier in the model to have a small violation while constrained inference requires the true data to have a small violation.
We further elucidated the detrimental consequences that arise when these assumptions fail to hold.
Finally, we demonstrate how their impacts on the model can interact when used together.
We have focused on multiclass classification, aiming to provide a starting point for understanding the different mechanisms of the two methods. For future work, we will extend the discussion to structured prediction problems where complex constraints are naturally defined. In particular, while the presence of constraints can improve the model performance, it also suggests a strong dependency inside the structure, which may hurt the generalization performance, as pointed out by <cit.>.
§ ACKNOWLEDGEMENTS
This work was partially supported by Contract FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
This work was also partially sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0080. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
This work was also partially funded by ONR Contract N00014-19-1-2620.
Appendix
§ DETAILS ON LOSS FUNCTION
The ℓ^1 loss is a smoothed alternative to the zero-one loss and has been used in theoretical analyses of the generalization error; see, for example, <cit.> (Section 6.2). It can be related to other common loss functions as follows.
As distances on the probability simplex.
Let e_y ∈R^|Y| be the one-hot vector whose y-th coordinate is 1 and all others are 0. We then have that
L(x,y_,f) := 1 - P_f(y_|x) = 1/2‖e_y_ - P_f‖_1 .
Moreover, since our label space Y is of finite cardinality, we further have that 1/2‖e_y_ - P_f‖_1 = TV(e_y_, P_f), the total variation distance.
Relation to zero-one loss.
By introducing a temperature parameter t ∈R_≥ 0 into the softmax function, it is well known that, as t →∞, the softmax of tu concentrates on the largest coordinate of a vector u. This implies
lim_t →∞ L(x,y_,tf) = 1 - 1{argmax_y∈Y f(x,y) = y_} = 1{argmax_y∈Y f(x,y) ≠ y_}
which is the zero-one loss.
Since performing softmax inference with temperature t can be equivalently regarded as performing softmax inference for the scoring space tF, for the simplicity of our presentation, we omit the temperature parameter in the softmax inference.
Relation to cross-entropy.
The total variation distance to a one-hot probability vector can be upper bounded in terms of the cross-entropy due to Pinsker's inequality. More directly, in our case, we have 1-p ≤ -log(p) for any p ∈ [0,1] from a basic inequality. This implies L(x,y,f) ≤ L_(x,y,f).
In conclusion, the ℓ^1 loss is a ℓ^1 and total variation distance on the probability space, is a smoothed version of the zero-one loss, and is upper bounded by cross-entropy. It is differentiable and bounded so that we can derive generalization bounds with Rademacher complexity. Another reason that we are interested in softmax inference will be clearer in the discussion for constrained inference, where in Theorem <ref>, <ref> and <ref>, the change of expected cross entropy and ℓ^1 loss can be lower bounded by a smooth function. But with the argmax inference, the risk is in general not continuous and needs to be assumed to be Lipschitz to obtain similar results.
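These relations are easy to check numerically; the following is a small sketch with our own toy numbers comparing the ℓ^1 loss, its zero-one limit under a large temperature, and the cross-entropy upper bound.

```python
import numpy as np

def softmax(u, t=1.0):
    z = np.exp(t * (u - np.max(u)))
    return z / z.sum()

scores = np.array([2.0, 1.0, -0.5])   # toy scores f(x, .)
y_gold = 1

p = softmax(scores)
l1_loss = 1.0 - p[y_gold]                      # = 0.5 * ||e_y - P_f||_1
ce_loss = -np.log(p[y_gold])
print(l1_loss <= ce_loss)                      # True: 1 - p <= -log(p)
print(1.0 - softmax(scores, t=1e3)[y_gold])    # ~1, the zero-one loss as t grows
```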
§ PROOFS FROM SECTION 3
§.§ Proof of Proposition <ref>
The first inequality is straightforward. For the second inequality, by definition (<ref>) we have
R(f_ρ) + ρ V(f_ρ)
≤ R(f_0) + ρ V(f_0)
and
V(f_ρ) ≥ V(f_∞)
.
Combining the two above inequalities yields
R(f_ρ) + ρ V(f_∞)
≤ R(f_0) + ρ V(f_0)
.
The desired inequality follows by rearranging these terms. This argument also holds if we replace the expectations with empirical estimates.
To see how the RHS bound can be reached, consider the following scoring space that contains two classifiers, f_0 and f_∞, and an instance space X that only contains one point x. Let C(x) = {y_,y'}. Let f_0 be such that P_f_0(y_)=a∈(0,1) and P_f_0(y')=b. Let f_∞ be such that P_f_∞(y_)=a-ϵ_1 and P_f_∞(y')=b+ϵ_2 so that ϵ_1 < ρϵ_2. Then
R(f_∞) + ρ V(f_∞)
≤ 1 - (a - ϵ_1) + ρ (b-ϵ_2)
< 1-a + ρ b
= R(f_0) + ρ V(f_0)
which means f_∞ will be preferred to f_0 by the regularized objective.
§.§ Proof of Lemma <ref>
By definitions, we have
ρV(f_ρ)
≤R(f_ρ) + ρV(f_ρ)
≤R(f_∞) + ρV(f_∞)
≤ 1 + ρV(f_∞)
≤ 1 + ρ u
Therefore, we have that V(f_ρ) ≤ u + 1/ρ.
§.§ Proof of Lemma <ref>
To prove this theorem, we need the following lemmas. The first one is a contraction inequality established in <cit.>.
Let H be a set of functions mapping X to R^N. Suppose Φ_i is μ_i-Lipschitz with respect to the 2-norm, i.e.,
|Φ_i(v') - Φ_i(v)| ≤μ_i ‖v'-v‖_2 ∀ v,v'∈R^N
Then for any set of m points x_1,…, x_m ∈X, the following inequality holds
1/mE_σ[
sup_h ∈H∑_i=1^m σ_i Φ_i(h(x_i))
]
≤√(2)/mE_ϵ[
sup_h∈H∑_i=1^m ∑_j=1^N ϵ_ijμ_i h_j(x_i)
]
where σ_is and ϵ_ijs are independent Rademacher variables uniformly distributed over {-1,+1}.
The second one computes the Lipschitz constants of the ℓ^1 losses by bounding its gradient's 2-norm.
Given a scoring function f:X ×Y →R, let f(x) = [f(x,y)]_y ∈Y∈R^|Y| be the vector of scores for each label.
For any two scoring functions f,f' and data (x,y), we have that
|P_f(y|x) - P_f'(y|x)|
≤√(2)/4f(x) - f'(x)_2
Furthermore, for any constraint C, we have
|P_f(C|x) - P_f'(C|x)|
≤1/4√(1 + 1/|C(x)|)f(x) - f'(x)_2
where P_f(C|x)=P_f(C(x)|x)=∑_y ∈ C(x)P_f(y|x).
We start with the second claim.
Suppose C(x) = Y, then P_f(C|x) = 0 for any scoring function f, so the inequality trivially holds.
Next, we assume C(x) ⊂Y.
Given a constraint C:X → 2^𝒴, the derivative of its violation function with respect to the score for a label y is
P_f(C|x)/ f(x,y) = ∑_y' ∈ C(x)P_f(y'|x)/ f(x,y)
= ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x)
The 2-norm of the gradient of the mapping f(x) ↦P_f(C|x) is then
(
∑_y ∈Y( ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x) )^2
)^1/2
which is maximized when P_f(y|x) = 1/2|C(x)| for all y ∈ C(x) and P_f(y|x) = 1/2(Y-|C(x)|) for all y ∉ C(x) (so that P_f(C|x)=1/2). The maximum is then
(
∑_y ∈ C(x)( ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x) )^2
+ ∑_y ∉ C(x)( ∑_y' ∈ C(x)P_f(y|x) P_f(y'|x) )^2
)^1/2
= √(|C(x)|(1/4|C(x)|)^2 + |Y-C(x)| (1/2|Y-C(x)|)^2)
= √(1/16 |C(x)| + 1/16|Y-C(x)|)
≤√(1/16 |C(x)| + 1/16)
= 1/4√(1 + 1/|C(x)|)
The boundedness of the gradient implies that the function f(x) ↦P_f(C|x) is Lipschitz with a Lipschitz constant 1/4√(1 + 1/|C(x)|).
The first claim then follows by considering the special constraint C(x) := {y_(x)} so that |C(x)| = 1.
Next, we present the proof of the theorem. By standard Rademacher complexity bounds, given a labeled dataset S of size m, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
R(f)
≤R̂(f;S_ L) + 2 ℜ_m(H) + √(log (1/δ)/2m)
where
H
:= {(
x,y) ↦ 1- P_f(y|x): f ∈F
}
By the contraction lemma and Lipschitzness, we have
ℜ_m(H)
= 1/mE_SE_σ[
sup_f ∈F∑_i=1^m σ_i ( 1 - P_f(y_i|x_i))
]
≤√(2)/mE_SE_ϵ[
sup_f ∈F∑_i=1^m ∑_y ∈Yϵ_iy√(2)/4 f(x, y)
]
= 1/2mE_SE_ϵ[
sup_f ∈F∑_i=1^m ∑_y ∈Yϵ_iy f(x, y)
]
This implies
R(f)
≤R̂(f;S_ L) + ℜ_m(F) + √(log (1/δ)/2m)
The proof for the generalization bound of the violation follows from the same argument. In particular, if the size of the constrained set C(x) is a constant, namely |C(x)|=c_0 < c = |Y| for all x ∈X, then from Equation (<ref>) we know that the mapping f(x) ↦ 1- P_f(C|x) is Lipschitz with Lipschitz constant (1/4)√(1/c_0 + 1/(c-c_0)). So in this case, the generalization bound for the violation function can be improved as
V(f) ≤V̂(f;S_ U) + (√(2)/2)√(1/c_0 + 1/(c-c_0)) ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
§.§ Proof of Theorem <ref>
Step 1. Showing the expected violation of f̂_̂ρ̂ is bounded.
First, we have with probability 1-δ,
ρV̂(f̂_ρ)
≤R̂(f̂_ρ) + ρV̂(f̂_ρ)
≤R̂(f_∞) + ρV̂(f_∞)
≤ 1 + ρV̂(f_∞)
≤ 1 + ρ(u + √(log(1/δ)/2m_ U))
where the last step follows by applying Hoeffding's inequality to V̂(f_∞). This result implies V̂(f̂_ρ) ≤1/ρ + u + √(log(1/δ)/2m_ U).
Second, Theorem <ref> claims that with probability 1-δ, the following inequality holds:
V(f̂_ρ) - V̂(f̂_ρ) ≤ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
Putting these two inequalities together using union bound, we know with probability 1-2δ,
V(f̂_ρ)
≤1/ρ + u + ℜ_m_ U(F) + √(log(1/δ)/2m_ U) + √(log(1/δ)/2m_ U)
= 1/ρ + u + B(δ,m_ U,F)
Namely, with probability no less than 1-2δ, f̂_ρ lies in F_1/ρ + u + B(δ,m_ U,F), which is a fixed hypothesis class.
Step 2. Bounding the generalization gap of L_ρ.
Since f̂_ρ∈F_1/ρ + u + B(δ,m_ U,F), we can bound the generalization gap of L_ρ using the uniform convergence property of F_1/ρ + u + B(δ,m_ U,F). By standard decomposition,
L_ρ (f̂_ρ) - L_ρ (f_ρ)
=
L_ρ (f̂_ρ) - L̂_ρ (f̂_ρ)_(*)
+ L̂_ρ (f̂_ρ) - L̂_ρ (f_ρ)_≤ 0
+ L̂_ρ (f_ρ) - L_ρ (f_ρ)_(**)
For term (*), combining the two inequalities in Lemma <ref> and Step 1 via union bound, we know with probability 1-4δ,
(*)
≤ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + √(log(1/δ)/2m_ L) + ρ( ℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + √(log(1/δ)/2m_ U))
For term (**), using Hoeffding's inequality for the risk and violation separately, we have with probability 1-2δ,
(**)
≤√(log(2/δ)/2m_ L) + ρ√(log(2/δ)/2m_ U)
By union bound, with probability 1-6δ,
L_ρ (f̂_ρ) - L_ρ (f_ρ)
≤ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + ρℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)_for convenience, denote these terms as B'
Step 3. Bounding the risk of f_ρ.
By Step 2, we have with probability 1-6δ,
R(f̂_ρ)
≤ R(f_ρ) + ρ V(f_ρ) - ρ V(f̂_ρ) + B'
≤ R(f_0) + ρ V(f_0) - ρ V(f̂_ρ) + B'
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞) + B'
We conclude that with probability 1-6δ,
R(f̂_ρ)
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞)
+ ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + ρℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)
as claimed.
§.§ Proof of Example <ref>
The normalizing factor ∑_j=1^c e^w_j^ T x is maximized at w_1=x=[1,0,0,…,0] and w_2=…=w_c=0, so that
∑_j=1^c e^w_j^ T x ≤ e + (c-1) ≤ c+2 .
This implies P_w(y_c) ≥ e^w_c^ T x/(c+2). Therefore, E[P_w(y_c)] ≤ t implies t(c+2) ≥E[e^w_c^ T x] ≥ e^E[w_c^ T x] = e^α^ T w_c by Jensen's inequality, or equivalently α^ T w_c ≤log(t(c+2)).
Therefore, given a set of data S={x_i}_i=1^m and Rademacher random variables ϵ, the inner supremum in the definition of Rademacher complexity can be upper bounded by solving the following program
max ∑_i=1^m ∑_j=1^c ϵ_i, j w_j^ T x_i
s.t. ∑_j=1^c w_j^ T w_j ≤ 1
α^ T w_c ≤log(t(c+2))
Consider its Lagrangian
L(w, λ, μ)
= ∑_i=1^m ∑_j=1^c ϵ_i,j w_j^ T x_i
+ λ(1 - ∑_j=1^n w_j^ T w_j )
+ ν(log(t(c+2)) - α^ T w_c )
Denote ξ_j := ∑_i=1^m ϵ_i,jx_i. The Lagrangian is then maximized at w_j = ξ_j/(2λ) for j<c and w_c = (ξ_c- να)/(2λ). The dual function then writes:
g(λ, ν)
= νlog(t(c+2)) + λ + ∑_j=1^c-1ξ_j ^2_2/4 λ +ξ_c - να^2_2/4 λ≥νlog(t(c+2)) + √(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 )
By weak duality, we have that
ℜ̂_m (F_t)
≤1/mE_ϵ[
min_ν≥ 0(
νlog(t(c+2)) + √(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
Assuming t<1/(c+2) so that log(t(c+2))<0. We can upper bound (<ref>) as
1/mE_ϵ[
min_ν≥ 0(
√(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
The function ∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 is minimized at ν = 0 if ξ_c^ T α≤ 0 and ν = ξ_c^ T α /α_2^2 otherwise. Denote the event ξ_c^ T α≤ 0 as E. By symmetry, we have that P(E) = 1/2 so that
1/mE_ϵ[
min_ν≥ 0(
√(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
= 1/2E_ϵ[ √(∑_j=1^cξ_j _2^2)| E ]
+ 1/2E_ϵ[√(∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2)| E]
Again by symmetry, the quantity (ξ_c^ T α)^2 is independent of E. Therefore, by Jensen's inequality, we have that
E_S,ϵ[√(∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2)| E]
≤√(E_S,ϵ[
∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2]
)
≤√(
cm - E_S,ϵ[ (ξ_c^ T α)^2/α_2^2]
)
= √(
cm - Var(ξ_c^ T α)/α_2^2)
= √(
cm - mσ^2 α_2^2+α_2^4/α_2^2)
= √(
(c-σ^2-α_2^2)m
)
Similarly, we can use Jensen's inequality to bound E_S,ϵ[ √(∑_j=1^cξ_j _2^2)| E ] ≤√(cm). Putting these together, we have that
ℜ_m (F_t)
=E_x[ℜ̂_m (F_t)]
≤1/2√(c/m) +1/2√(c-σ^2-α_2^2/m)
§ PROOFS FROM SECTION 4
§.§ Proof of Propostion <ref>
First, we show the Rademacher complexity of the singleton mapping is zero:
ℜ_m({(x,y)↦ -μ v(x,y)})
= 1/mE_x, ϵ[
∑_i=1^m∑_y ∈Y -ϵ_i,yμ v(x_i,y)
]
= 1/mE_x[
∑_i=1^m∑_y ∈Y -E[ϵ_i,y] μ v(x_i,y)
]
= 0
Second, we use the linearity of Rademacher complexity to obtain the desired result.
ℜ_m(F^μ)
= 1/mE_x, ϵ[ sup_f ∈F∑_i=1^m∑_y ∈Yϵ_i,y (f(x_i,y) - μ v(x_i,y))
]
= 1/mE_x, ϵ[ sup_f ∈F∑_i=1^m∑_y ∈Yϵ_i,y f(x_i,y)
] + 1/mE_x, ϵ[
∑_i=1^m∑_y ∈Y -ϵ_i,yμ v(x_i,y)
]
= ℜ_m(F) + ℜ_m({(x,y)↦ -μ v(x,y)}) = ℜ_m(F)
§.§ Proof of Proposition <ref>
* Given any scoring function f, let Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y)) and Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y)). We have
μΔ^μ_ (f)
= μE[logexp(f(x,y_)-μ v(x,y_))/Z_f^C(x) + Z_f^-C(x)/^μ]
= E[ μlogexp(f(x,y_)-μ v(x,y_))/Z_f^C(x) + Z_f^-C(x)/^μ]
= E[
Z^-C_f(x)/^μ/Z_f^C(x) + Z_f^-C(x)/^μ - v(x,y_)
]
= V(f^μ) - V_
Moreover,
μ V(f^μ)
= E[ μZ_f^μ^-C(x)/Z_f^μ(x)]
= E[ Z_f^μ(x)(-Z_f^μ^C(x)) + (Z_f^μ^C(x))^2/(Z_f^μ(x))^2]
= E[ P_f^μ^2(-C) - P_f^μ(-C) ]
which is negative and bounded, implying V(f^μ) - V_ is decreasing and Lipschitz with μ. Therefore, there is a μ > 0 such that R_(f^μ) < R_(f) if and only if the derivative is positive at μ = 0, i.e., V(f) > V_.
* By (<ref>),
Δ^μ_ (f)
= ∫^μ_0 (V(f^t) - V_) t
= E[ ∫^μ_0
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t t
] - μ V_
≥E[ ∫^μ_0
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x) t
] - μ V_
= (1-^-μ) E[
Z^-C_f(x)/Z_f^C(x) + Z_f^-C(x)] - μ V_
= (1-^-μ) V(f) - μ V_
* If V_=0, we have
Δ^∞_ (f)
= ∫^∞_0 E[
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t] t
= E[ ∫^∞_0
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t t ]
= E[
log(Z_f^C(x) + Z_f^-C/Z_f^C)
]
= V_(f)
§.§ Proof of Corollary <ref>
Using Proposition <ref> (b), this result follows by solving the following inequality:
(1-e^-μ) V(f) - μ V_≥ 0 .
It is known that the solution to the inequality u ≤ a + b e^cu in u is u ≤ a-(1/c)W(-bc e^ac). Substituting a=η=V(f)/V_=-b and c=-1 yields the desired result:
μ≤ W(-η/e^η)+η
where the RHS is positive only when η>1. A plot of this solution as a function of η is presented below in Figure <ref>.
§.§ Proof of Proposition <ref>
This claim follows from the fact that R_(f^∞)=R_(f)-V_(f) from Proposition <ref> (c).
For equation (<ref>), the first inequality follows from the optimality of . For the second inequality, by definition we have
R_(^∞) + V_() = R_()
≤ R_()
⇒ R_(^∞) ≤ R_() - V_() ≤ R_() - min_f∈F V_(f)
§ ANALYSIS FOR HINGE LOSS AND ℓ^1 LOSS
§.§ Hinge Loss
The margin of a scoring function f at a sample (x,y_) is defined as
m(x,y_, f)
:= max_y∈Y{f(x,y)} - f(x,y_)
We denote its expectation as M(f) = E[m(x,y_,f)].
Given a loss function ℓ:Y×Y →R, the structured hinge loss <cit.> is defined as the margin of the loss augmented scoring function f+ℓ: (x,y)↦ f(x,y) + ℓ(y, y_). Namely,
L_hinge (x,y_, f)
:= m(x,y_, f+ℓ)
Therefore, we can study the impact of constrained inference on the hinge loss via the impact on the margin. Let Δ_margin^μ(f) = M(f) - M(f^μ). We present the following result.
The following results hold:
* For any fixed model f, there exists an μ_0 > 0 such that M(f^μ) ≤ M(f) only if
V_01(f) > V_
where V_01(f) is the zero-one style violation defined as E[1{_y ∈Yf(x,y) y_}].
* In particular, if the constraint is noise-free, we have
Δ^∞_margin(f)
= E[ max_y ∈Y f(x,y) - max_y∈ C(x) f(x,y) ]
= E[ (max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y))_+ ]
* The derivative of the change of the margin is
μΔ^μ_margin(f) =
-μ M(f^μ)
= - μE [
max_y ∈Y{ f(x,y) - μ v(x,y) } - f(x,y_) + μ v (x,y_)
]
= E[v(x,y_f^μ) - v(x,y_)]
where y_f^μ:= _y ∈Y{ f(x,y) - μ v(x,y)} is the argmax inference output of CCM. Moreover, this derivative is non-increasing with μ. Therefore, a necessary condition for CCM to reduce the margin is
E[v(x,y_f)] = V_01(f)
> V_
* This follows directly by taking the difference between M(f) and M(f^∞).
Due to the discontinuous nature of the argmax inference, the function v(x,y_f^μ) is in general not continuous with μ. On the other hand, if we assume μ↦E[v(x,y_f^μ)] is Lipschitz continuous, the condition proposed in (a) is also sufficient, as in the analysis for cross-entropy.
The impact of constrained inference on the hinge loss can be investigated by substituting f by f+ℓ. For example, a sufficient for improving the average hinge loss will be V_01(f+ℓ) > V_.
The quantity (max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y))_+ is closely related to the integrality loss defined in <cit.>. It is a hinge-stye surrogate loss function for the zero-one style violation function of f with argmax inference:
P{max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y)
≥ 0
}
= V_01(f)
§.§ ℓ^1 Loss
To facilitate our discussion, we first present the following lemmas that will be useful in this section.
For any constraint C we have the following:
* The derivative of the predicted probability is
μP_f^μ(y|x)
= P_f^μ(y) (P_f^μ(-C|x) - v(x,y))
* The second order derivative of the probability is
μP_f^μ(-C|x)
= P_f^μ(y|x) (
( P_f^μ(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
)
Recall that given any scoring function f, we denote
Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y))
and
Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y))
We also let Z_f(x) = Z_f^C(x) + Z_f^-C(x).
* The pointwise derivative of CCM's l^1 risk with respect to μ is then
μP_f^μ(y|x)
= μ^f(x,y) - μ v(x,y)/Z_f^μ(x)
= 1/(Z_f^μ(x))^2( Z_f^μ(x) (-v(x,y) ^f(x,y) - μ v(x,y)) + Z_f^μ^-C(x) ^f(x,y) - μ v(x,y))
= P_f^μ(y) (P_f^μ(-C) - v(x,y))
where the second equality follows from the fact that μ Z_f^μ(x) = -Z_f^μ^-C(x).
* Based on (a),
^2/^2 μP_f^μ(y|x)
= (P_f^μ(y) (P_f^μ(-C) - v(x,y)))(P_f^μ(-C) - v(x,y))
+ P_f^μ(y) (P_f^μ^2(-C) - P_f^μ(-C))
= P_f^μ(y|x) (
( P_f^μ(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
)
Now we discuss the change in ℓ^1 risk that is defined as Δ^μ(f):=R(f)-R(f^μ).
The following results hold:
* For any fixed model f, there exists an μ_0 > 0 such that R(f^μ) < R(f) if
E[P_f(y_)P_f(-C)]
> E[P_f(y_)v(x,y_)]
* The change of risk can be lower bounded by
Δ^μ(f)
≥1-^-2μ/2E_x[P_f(y_)P_f(-C)] - μ V_
* In particular, if the constraint is noise-free, we have
Δ^∞(f)
≥E_x[P_f(y_)P_f(-C)]
* From Lemma <ref> (a) we know the derivative of the risk with respect to μ at μ=0 is
E[P_f(y_)P_f(-C)] - E[P_f(y_)v(x,y_)]
Further, Lemma <ref> (b) implies this derivative is Lipschitz with respect to μ since for any μ,
| P_f^μ(y|x) (
( P_f(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
) |
≤ 1
Therefore, a sufficient condition for the existence of an μ_0 > 0 such that R(f^μ) < R(f) is that E[P_f(y_)P_f(-C)] > E[P_f(y_)v(x,y_)].
* First, we note for any y and μ that
P_f^μ(y)P_f^μ(-C)
= ^f(x,y)-μ v(x,y) Z_f^-C(x)/^μ/(Z_f^μ(x))^2
≥^f(x,y)-μ v(x,y) Z_f^-C(x)/^μ/(Z_f(x))^2
≥^f(x,y)-μ Z_f^-C(x)/^μ/(Z_f(x))^2
= P_f(y)P_f(-C)^-2μ
Also,
E[P_f(y_)v(x,y_)]
≤E[v(x,y_)]
= V_
Integrating the derivative gives
Δ^μ(f)
≥∫^μ_0 E[
P_f(y_)P_f(-C)^-2t - V_] t
= 1-^-2μ/2E_x[P_f(y_)P_f(-C)] - μ V_
* With noise-free constraints,
P_f^μ(y_)P_f^μ(-C)
= ^f(x,y_) Z_f^-C(x)/^μ/(Z_f^μ(x))^2
≥^f(x,y_) Z_f^-C(x)/^μ/(Z_f(x))^2
= P_f(y_)P_f(-C)^-μ
Integrating both sides gives
Δ^μ(f)
≥∫^μ_0 E[
P_f(y_)P_f(-C)^-t] t
= E_x[P_f(y_)P_f(-C)]
The term E_x[P_f(y_)P_f(-C)] plays a key role in these results, and it measures the average violation of the model f, weighted by the model's confidence of the true label. The first result shows that if this weighted average violation is larger than that of the true data distribution, then CCM is helpful. The last result shows that a model with a larger weighted violation obtains more benefits from strictly constrained inference.
§ PROOFS FROM SECTION 5
§.§ Proof of Theorem <ref>
Recall f_⋆^μ = _g∈F^μ R_(g) + ρ V_(g) is the optimal CCM for the regularized surrogate objective and is the cross entropy risk minimizer in F. According to our notation, ^μ is the constrained model with base model .
By this definition, we have
R_(f_⋆^μ ) +ρ V_(f_⋆^μ)
≤ R_(^μ) +ρ V_(^μ)
Therefore,
R_(f_⋆^μ)
≤ R_(^μ) + ρ (V_(^μ) - V_(f_∞^μ))
≤ R_(^μ) + ρ V_(^μ)
≤ R_() - Δ_^μ() + ρ V_(^μ)
Therefore, a sufficient condition for R_(f_⋆^μ) ≤ R_() is that ρ V_(^μ) < Δ_^μ(). Furthermore, recall for any scoring function f, we define Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y)) and Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y)). We then have
V_(f) - V_(f^μ)
= E[
-log( Z_f^C(x)/Z_f^C(x) + Z_f^-C(x))
] - E[
-log( Z_f^C(x)/Z_f^C(x) + Z_f^-C(x)/^μ)
]
= E[
-log( Z_f^C(x) + Z_f^-C(x)/^μ/Z_f^C(x) + Z_f^-C(x))
]
= ∫^μ_0 E[
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t] t
= Δ^μ_(f) + μ V_ (compare to equation (<ref>))
Therefore, Δ^μ_() = V_() - V_(^μ) - μ V_. So, the sufficient condition can be reformulated as
ρ
< V_() - V_(^μ) - μ V_/V_(^μ)
§.§ Proof of Theorem <ref>
We have seen in Theorem <ref> that for any scoring function f, there is a μ > 0 such that R_(f^μ) < R_(f) if and only if V(f) ≥ V_. On the other hand, we know from Lemma <ref> that
V(f_ρ)
≤ V(f_∞) + 1/ρ
Therefore, if
ρ≥ 1/(V_ - V(f_∞))
we must have V(f_ρ) ≤ V_, which implies there is no μ > 0 such that R_((f_ρ)^μ) < R_(f_ρ).
|
http://arxiv.org/abs/2307.04685v1 | 20230710164026 | The Mikheyev-Smirnov-Wolfenstein Matter Potential at the One-loop Level in the Standard Model | [ "Jihong Huang", "Shun Zhou" ] | hep-ph | [ "hep-ph", "hep-ex" ] |
The Mikheyev-Smirnov-Wolfenstein Matter Potential at the One-loop Level in the Standard Model
Jihong Huang [E-mail: [email protected]],
Shun Zhou [E-mail: [email protected] (corresponding author)]
Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
When neutrinos are propagating in ordinary matter, their coherent forward scattering off background particles results in the so-called Mikheyev-Smirnov-Wolfenstein (MSW) matter potential, which plays an important role in neutrino flavor conversions. In this paper, we present a complete one-loop calculation of the MSW matter potential in the Standard Model (SM). First, we carry out the one-loop renormalization of the SM in the on-shell scheme, where the electromagnetic fine-structure constant α, the weak gauge-boson masses m^_W and m^_Z, the Higgs-boson mass m^_h and the fermion masses m^_f are chosen as input parameters. Then, the finite corrections to the scattering amplitudes of neutrinos with the electrons and quarks are calculated, and the one-loop MSW matter potentials are derived. Adopting the latest values of all physical parameters, we find that the relative size of one-loop correction to the charged-current matter potential of electron-type neutrinos or antineutrinos turns out to be 6%, whereas that to the neutral-current matter potential of all-flavor neutrinos or antineutrinos can be as large as 8%. The implications of such corrections for neutrino oscillations are briefly discussed.
§ INTRODUCTION
In the past quarter of a century, neutrino oscillation experiments have provided us with robust evidence that neutrinos are massive and leptonic flavor mixing is significant <cit.>. For the neutrinos propagating in matter, the coherent forward scattering of neutrinos off the background particles leads to the Mikheyev-Smirnov-Wolfenstein (MSW) matter potential and could modify neutrino flavor conversions in a remarkable way <cit.>. To be explicit, at the tree level in the Standard Model (SM), the effective Hamiltonian for neutrino oscillations in matter receives extra potential terms, i.e., V_e^ = V_ CC^ + V_ NC^ for electron neutrinos and V_μ^ = V_τ^ = V_ NC^ for muon and tau neutrinos, where the charged-current (CC) and the neutral-current (NC) contributions are given by
V_ CC^ = √(2) G_μ^ N_e^ , V_ NC^ = -G_μ^/√(2)[(1 - 4 sin^2θ_ w^) (N_e^ - N_p^) + N_n^] .
In Eq. (<ref>), G_μ^ is the Fermi constant determined from the muon lifetime, N_e^, N_p^ and N_n^ are respectively the net number densities of electrons, protons and neutrons, and θ_ w^ is the weak mixing angle. For antineutrinos, the MSW matter potentials V^_α (for α = e, μ, τ) change accordingly to opposite signs. As the NC potential V^_ NC is universal for three neutrino flavors, only the CC potential V^_ CC for electron (anti)neutrinos is relevant for neutrino flavor conversions in matter.
At the one-loop level in the SM, it has been known for a long time that the NC potentials V^α_ NC become dependent on the charged-lepton masses m^_α (for α = e, μ, τ). Given the strong hierarchy of charged-lepton masses m_e^≪ m_μ^≪ m_τ^ and N_n^ = N_p^ = N_e^ for ordinary matter, one can estimate the ratio of the flavor-dependent part of one-loop NC potential to the tree-level CC potential as below <cit.>
ϵ^_μτ≡ (V^τ_ NC - V^μ_ NC)/V_ CC≈ - [3α/(2πsin^2θ_ w^)] (m_τ^2/m_W^2) [ln(m_τ^2/m_W^2) + 5/6] ,
where α≡ e^2/(4π) denotes the electromagnetic fine-structure constant. With the input values of α = 1/137, m^_W = 80.377 GeV, m^_Z = 91.1876 GeV and m^_τ = 1.777 GeV, one has sin^2θ^_ w = 1 - m^2_W/m^2_Z ≈ 0.223 and thus finds the ratio in Eq. (<ref>) to be ϵ^_μτ≈ 5.19× 10^-5. Although such a correction is extremely small, it causes the difference between the matter potential of ν^_μ and that of ν^_τ, which affects greatly the flavor conversions of supernova neutrinos in the dense-matter environment <cit.>. Further discussions about the impact of ϵ^_μτ on neutrino oscillations can be found in Refs. <cit.>.
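As a quick numerical cross-check of this estimate (the script is our own, using the input values quoted above):

```python
import numpy as np

alpha, m_W, m_Z, m_tau = 1/137.0, 80.377, 91.1876, 1.777   # masses in GeV
s2w = 1.0 - (m_W / m_Z)**2        # sin^2(theta_w) ~ 0.223 (on-shell definition)
x_tau = (m_tau / m_W)**2          # m_tau^2 / m_W^2

eps_mutau = -3.0 * alpha / (2.0 * np.pi * s2w) * x_tau * (np.log(x_tau) + 5.0/6.0)
print(eps_mutau)                   # ~ 5.2e-5, in agreement with the text
```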
In the calculation of ϵ^_μτ, however, the previous works <cit.> concentrate on the flavor-dependent radiative corrections, e.g., V^τ_ NC - V^μ_ NC, instead of the one-loop NC potentials V^α_ NC themselves (for α = e, μ, τ). Moreover, the one-loop radiative corrections to the CC potential have not been studied thus far. Therefore, it is interesting to calculate neutrino matter potentials in the SM at the one-loop level, including the NC potential V^α_ NC for three-flavor neutrinos and the CC potential V^_ CC for the electron neutrino. The motivation for such a calculation is two-fold. First, the flavor-independent part of the one-loop NC potential V^α_ NC is irrelevant for flavor oscillations of three active neutrinos, but may be important for active-sterile neutrino oscillations, particularly in the supernova environment <cit.>. Second, the future long-baseline accelerator neutrino oscillation experiments, such as DUNE <cit.> and T2HK <cit.>, will be able to determine neutrino mass ordering and probe leptonic CP violation, and they are already sensitive enough to the Earth matter effects. Obviously, the precise calculation of V^_ CC at the one-loop level is necessary to achieve high-precision measurements of the neutrino mass ordering and the CP-violating phase.
In this work, we carry out a complete one-loop calculation of the MSW potentials in the SM. More explicitly, after performing one-loop renormalization of the SM in the on-shell scheme <cit.>, we compute the scattering amplitudes for ν_α^ + f →ν_α^ + f at one loop, where f = u, d, e are the SM fermions in ordinary matter. For the electron neutrino ν_e^, both CC and NC interactions must be taken into account, while only the latter is considered for ν_μ,τ^. For both NC and CC interactions, since the distributions of background particles are assumed to be homogeneous and isotropic, only the vector-type couplings c_ V, NC^f and c^f_ V, CC are directly involved in matter potentials. After obtaining finite scattering amplitudes, we extract the matter potentials by comparing the obtained amplitudes and those generated by the effective weak Hamiltonian of neutrino interactions in the forward limit. After inputting the latest values of all physical parameters, we find that the one-loop correction to the NC potential is about 8%, while that to the CC potential is about 6%. In the future long-baseline accelerator neutrino oscillation experiments, e.g., DUNE and T2HK, it is promising to probe the one-loop correction to the CC potential.
The remaining part of this paper is organized as follows. In Sec. <ref>, we outline the basic strategy for one-loop calculations of the MSW matter potentials in the SM, and explain the notations and the on-shell scheme of the one-loop renormalization implemented in our calculations. The analytical results for the one-loop NC and CC potentials are presented in Sec. <ref> and Sec. <ref>, respectively. Then, in Sec. <ref>, we specify the input parameters and evaluate the one-loop corrections. The impact of such corrections on the long-baseline accelerator neutrino oscillation experiments is briefly discussed. We summarize our main results in Sec. <ref>. For completeness, the renormalization of the SM and some details of our calculations are given in Appendix <ref>.
§ STRATEGY FOR ONE-LOOP CALCULATIONS
In this section, we explain how to calculate the one-loop potentials in the SM. For the low-energy neutrinos propagating in ordinary matter, the coherent forward scattering with background particles modifies their dispersion relations and its impact on neutrino flavor conversions can be described by the effective potentials at the amplitude level. The ordinary matter is composed of protons, neutrons and electrons, so the NC interactions contribute to the matter potentials for all-flavor neutrinos, whereas the CC interaction is relevant only for the electron neutrinos.
§.§ Effective Hamiltonians and Matter Potentials
The amplitudes for relevant two-body scattering processes ν^_α + f →ν^_α + f, with α = e, μ, τ and f = u, d, e, can be divided into the NC and CC parts. For the NC part, we can directly read it off from the low-energy effective Hamiltonian
H_ eff^ NC (x) = G_μ^/√(2)[ν_α^ (x)γ^μ(1-γ^5) ν_α^ (x)] [f(x)γ_μ(c_ V, NC^f - c_ A, NC^f γ^5 ) f (x)] ,
where c^f_ V, NC and c^f_ A, NC refer respectively to the vector-type and axial-vector-type couplings for the NC interaction. At the tree level, these couplings in the SM have been collected in Table <ref>.
Assuming the distribution of background fermions to be homogeneous and isotropic, one can average the effective Hamiltonian over all possible states of background fermions and then obtain the effective potential for the SM left-handed neutrinos <cit.>
V_ NC^ = √(2) G_μ^ N_f^ c_ V, NC^f ,
where N^_f is the net number density of the background fermion f and only the vector-type coupling c^f_ V, NC is involved. Notice that the NC potential is independent of neutrino flavors at the tree level.
For electron neutrinos, the CC part of the two-body scattering amplitude can be derived from the effective Hamiltonian
H_ eff^ CC (x) = G_μ^/√(2)[ν_e^ (x)γ^μ(1 - γ^5 ) ν_e^ (x)] [e(x)γ_μ(c^e_ V, CC -c^e_ A, CCγ^5) e (x)] ,
where the Fierz transformation has been performed and c^e_ V, CC = c^e_ A, CC = 1 in the SM. In a similar way to the derivation of the NC potential, one can easily get the CC potential of electron neutrinos
V_ CC^ = √(2) G_μ^ N_e^ c^e_ V, CC .
Therefore, the total matter potential for electron neutrinos is V^_e = V^_ CC + V^_ NC, while those for muon and tau neutrinos are V^_μ = V^_τ = V^_ NC. For ordinary matter composed of protons, neutrons and electrons, together with the vector-type couplings in Table <ref>, one can simply use N^_u = 2N^_p + N^_n and N^_d = 2 N^_n + N^_p and the condition of charge neutrality N^_p = N^_e to reproduce the results of V^_ CC and V^_ NC in Eq. (<ref>).
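For orientation, the tree-level potentials can be evaluated numerically as in the following sketch; the numerical constants (the Fermi constant and ħc) are assumed reference values quoted only to the precision needed here.

```python
import numpy as np

G_F   = 1.1664e-5      # Fermi constant in GeV^-2 (assumed value)
hbarc = 1.9733e-14     # hbar*c in GeV*cm (assumed conversion constant)
s2w   = 0.223          # sin^2(theta_w)

def V_CC_eV(N_e):
    """Tree-level CC potential in eV for an electron number density N_e [cm^-3]."""
    return np.sqrt(2.0) * G_F * N_e * hbarc**3 * 1e9

def V_NC_eV(N_e, N_p, N_n):
    """Tree-level NC potential in eV for the given net number densities [cm^-3]."""
    return -G_F / np.sqrt(2.0) * ((1 - 4*s2w) * (N_e - N_p) + N_n) * hbarc**3 * 1e9

N_e = 6.0e23           # illustrative electron density, roughly N_A per cm^3
print(V_CC_eV(N_e))              # ~ 7.6e-14 eV
print(V_NC_eV(N_e, N_e, N_e))    # neutral matter with N_n = N_p = N_e
```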
From the above derivations of the tree-level matter potentials, it is evident that one should calculate the renormalized scattering amplitude of ν^_α + f →ν^_α + f at the one-loop level and then find out the effective Hamiltonian corresponding to the loop-corrected amplitude. Starting with the loop-level effective Hamiltonian, we can extract the coefficient for the vector-type interactions involving the background particles. More explicitly, for the NC part, we identity the correction to the vector-type coupling c_ V,NC^f, which will be denoted as Δ c_ V,NC^f ≡c^f_ V,NC - c^f_ V,NC with c^f_ V,NC being the loop-corrected coupling. For definiteness, we take the Fermi constant to be G^_μ as determined precisely from muon decays. The one-loop NC potential is given by V^α_ NC = √(2)G^_μ N^_f c^f_ V,NC, whereas the tree-level one reads V^_ NC = √(2)G^_μ N^_f c^f_ V,NC. In this case, the relative magnitude of one-loop correction to the NC potential is characterized by Δ c^f_ V,NC/c^f_ V,NC, as G^_μ will be anyway assigned the experimentally measured value in both tree- and loop-level calculations. Similarly, the correction to the CC potential will be represented by Δ c^e_ V, CC/c^e_ V, CC, where Δ c_ V,CC^e ≡c^e_ V,CC - c^e_ V,CC and c^e_ V,CC is the loop-level coupling.
§.§ On-shell Renormalization
The one-loop renormalization of the SM in the on-shell scheme can be found in the monograph <cit.> and also in many excellent review papers <cit.>. For completeness, a brief summary of the on-shell renormalization of the SM is presented in Appendix <ref>, and the basic procedure is sketched in this subsection in order to explain our conventions.
For the classical Lagrangian of the standard electroweak theory, we shall closely follow the definitions and notations in Ref. <cit.>. As usual, the quantization of the SM can be performed by introducing the gauge-fixing terms and the Faddeev-Popov ghosts, and then the Feynman rules can be derived, where the 't Hooft-Feynman gauge will be chosen for simplicity. At the one-loop level, the ultraviolet (UV) divergences in the one-point Green's function (i.e., the Higgs tadpole diagrams), one-particle-irreducible two-point Green's functions and three-point vertex functions can be separated out by using the dimensional regularization, where the space-time dimension is set to d = 4 - 2ϵ and the UV-divergent term in the limit of ϵ→ 0 shows up as
Δ≡1/ϵ - γ_ E^ + ln (4π) ,
where γ_ E^≈ 0.577 is the Euler-Mascheroni constant. In principle, only the particle masses and coupling constants need to be renormalized to guarantee finite S-matrix elements in the SM <cit.>, but the wave-function renormalization of physical fields is necessary to keep the Green's functions finite as well.
After expressing the bare model parameters and physical fields in terms of the renormalized ones and the corresponding counterterms, as summarized in Appendix <ref>, one can calculate the Higgs tadpole diagrams, two-point Green's functions and three-point vertex functions, which are in general UV-divergent. Then, the on-shell renormalization conditions on the renormalized Green functions are imposed to remove the UV-divergences and thus determine the counterterms. Finally, a complete set of renormalized parameters are chosen as inputs and implemented to calculate the S-matrix elements of our interest. Some comments are helpful.
* Input parameters. As has been done in Ref. <cit.>, we shall choose the input parameters as the fine structure constant α, the W-boson mass m_W^, the Z-boson mass m^_Z, the Higgs-boson mass m_h^, and the charged-fermion masses m_f^. Since m_W^ and m_Z^ have been chosen as input parameters, the weak mixing angle is defined via cosθ_ w≡ m_W^ / m_Z^. For later convenience, the abbreviations c ≡cosθ_ w^ and s ≡sinθ_ w^ will be used. Moreover, s^_2 w≡sin2θ_ w^ = 2sc and c_2 w^≡cos 2θ_ w^ = c^2 - s^2 are also implemented to simplify the expressions.
With the physical parameters chosen above, the electromagnetic coupling constant e = √(4πα) is related to the weak gauge coupling constant g via the weak mixing angle, i.e., e = g s. Whenever the coupling constants e and g appear, their definitions should be understood in terms of the fine-structure constant α and the weak mixing angle θ^_ w.
* One-loop amplitudes. The contributions to the amplitudes of ν^_α + f →ν^_α + f at the one-loop level can be divided into three categories, i.e., the self-energies of weak gauge bosons including the tadpole diagrams, the vertex corrections and the box diagrams. With all the counterterms previously determined in the on-shell scheme, the UV-divergent terms are all canceled out and the finite corrections are obtained. The one-loop diagrams have been calculated by using Package-X <cit.>, and the Passarino-Veltman functions <cit.> are implemented to express one-loop integrals as in Appendix <ref>.
In the following expressions, x_i^≡ m_i^2/m_W^2 and y_i^≡ m_i^2/m_Z^2 are introduced with “i" referring to the particle type. The fermion masses for external legs are retained, but they are much smaller compared to the gauge-boson masses and thus all the terms of O(x_e^) or O(x_q^) for q = u,d can be safely neglected. It should be noticed that as we are interested in the forward scattering amplitudes, the diagrams with the photon propagator with p^2 = 0 attached to the charged fermions will not contribute due to the on-shell renormalization of the electric charge. In addition, neutrinos are massless in the SM and the quark flavor mixing is ignored. For the latter assumption, the reason is simply that the off-diagonal elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix are much smaller than the diagonal ones and the vertices involving a pair of quarks not in the same isospin-doublet are highly suppressed.
* Finite corrections. Once the finite corrections to the amplitudes are obtained, one can extract the vector-type coefficients in the corresponding effective Hamiltonian and derive the one-loop corrections to the matter potentials of neutrinos. For the NC part, the renormalized self-energy of the Z-boson, the neutrino or charged-fermion vertex, and the box diagrams are denoted as iΣ_Z^ r, i e Γ^ r_ν_αν_α Z or i e Γ^ r_ffZ and i M_ NC^f, respectively, so the correction to the vector-type coupling is
Δ c_ V,NC^f = (-Σ_Z^ r/m_Z^2 + s_2 w^Γ_ν_α^ν_α^ Z^ r) c_ V,NC^f + s_2 w^Γ_ffZ^ r - 4m_W^2/g^2 M_ NC^f .
Similarly, for the CC part, with the renormalized self-energy of the W-boson, the corrected vertex, and the box diagrams denoted as iΣ_W^ r, i e Γ^ r_ν_e^ e W and i M_ CC^, respectively, the correction to the vector-type coupling turns out to be
Δ c_ V,CC^e = (-Σ_W^ r/m_W^2 + 2×√(2) s Γ_ν_e^ e W^ r) c_ V,CC^e - 4m_W^2/g^2 M_ CC^ .
Note that the factor of two associated with the vertex correction Γ_ν_e^ e W^ r in Eq. (<ref>) arises from the fact that the ν^_e-e-W vertex appears twice in the diagrams.
The self-energy, vertex and box contributions on the right-hand sides of Eqs. (<ref>) and (<ref>) will be presented in Sec. <ref> and Sec. <ref>, respectively. With the latest values of the input parameters, we shall evaluate these finite corrections in Sec. <ref>.
§ THE NEUTRAL-CURRENT POTENTIAL
§.§ The Fermi Constant
As shown in Eqs. (<ref>) and (<ref>), the NC and CC potentials at the tree level are usually given in terms of the Fermi constant G^_μ, which is related to the adopted physical parameters by G^_μ = g^2/(4√(2)m^2_W) = πα/(√(2) m^2_W s^2). At the one-loop level, however, such a relation is corrected as
g^2/4 √(2) m_W^2≡G_μ^(1 - Δ r ) ,
where G_μ^ stands for the one-loop corrected Fermi constant and the finite radiative corrections are collected in Δ r. With the help of Eqs. (<ref>)-(<ref>), we can evaluate Δ r by <cit.>
Δ r = - . ∂Σ_ T^A(p^2)/∂ p^2|_p^2=0 + c^2/s^2[Σ_ T^Z(m_Z^2)/m_Z^2 - Σ_ T^W(m_W^2)/m_W^2] + Σ_ T^W(m_W^2) - Σ_ T^W(0)/m_W^2
- 2c/sΣ_ T^AZ (0)/m_Z^2 + α/4π s^2[6+7-4s^2/2s^2ln(m_W^2/m_Z^2)] .
Since the Fermi constant determined from the muon lifetime is the most precise, it is convenient to use it in the studies of low-energy weak interactions. For the tree-level matter potential, one may just input the value of G^_μ extracted from the muon lifetime, namely, G^_μ = G^ exp_μ. At the one-loop level, we instead implement the relation in Eq. (<ref>) to determine G^_μ from the same experimental observation, i.e., G^_μ (1 - Δ r) = G^ exp_μ. In this case, the tree-level matter potential is given by V = √(2)G^_μ N^_f c^f_ V, while the one-loop potential is Ṽ = √(2)G^_μ (1 - Δ r) N^_f (c^f_ V + Δ c^f_ V). As the experimental value G^ exp_μ is used to evaluate the matter potential at either the tree or the one-loop level, we shall characterize the magnitude of radiative corrections by
Ṽ/V - 1 = √(2) G_μ^ exp N_f^(c_ V^f + Δ c_ V^f )/[√(2)G^ exp_μ N^_f c^f_ V] - 1 = Δ c_ V^f/c_ V^f .
It is worthwhile to mention that Eq. (<ref>) is applicable to both NC and CC potentials, for which one should make use of the corresponding vector-type couplings and their radiative corrections. Therefore, in the subsequent discussions, we focus only on the radiative corrections to the vector-type couplings.
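As a rough numerical illustration (ours, not part of the original analysis), the size of Δ r can be anticipated directly from the on-shell inputs used later (α = 1/137.036, m_W = 80.377 GeV, m_Z = 91.1876 GeV) by comparing the tree-level combination g^2/(4√(2) m_W^2) = πα/(√(2) m_W^2 s^2) with the measured G^ exp_μ through Eq. (<ref>), treating G^_μ ≈ G^ exp_μ on the right-hand side. A minimal Python sketch:

import math

alpha = 1/137.035999084           # fine structure constant
mW, mZ = 80.377, 91.1876          # on-shell masses in GeV
s2 = 1.0 - mW**2 / mZ**2          # sin^2(theta_w) in the on-shell definition, ~0.223
G_tree = math.pi * alpha / (math.sqrt(2) * mW**2 * s2)   # pi*alpha/(sqrt(2) mW^2 s^2), GeV^-2
G_exp = 1.1663787e-5              # Fermi constant from the muon lifetime, GeV^-2 (PDG)
delta_r = 1.0 - G_tree / G_exp    # identify the mismatch with Delta r via Eq. (<ref>)
print(s2, G_tree, delta_r)        # gives Delta r ~ 0.035, the familiar size of the SM correction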
§.§ Self-energy of Z-boson
The relevant Feynman diagrams of the scattering ν_α^ + f →ν_α^ + f for the NC potential have been shown in Fig. <ref>. After calculating the one-loop amplitudes, we can extract the corrections to the vector-type coupling c_ V,NC^f.
First, let us look at the self-energy of Z-boson in Fig. <ref>-(3), where the shaded circle represents all possible contributions. The self-energy of Z-boson contributes to Δ c_ V,NC^f as -(c_ V,NC^f / m_Z^2) Σ_Z^ r, where iΣ_Z^ r denotes the renormalized self-energy.
* Bosonic Contributions. The bosonic contributions to the Z-boson self-energy involve gauge bosons, the Higgs boson, the Goldstone bosons and the Faddeev-Popov ghosts running in the loop. The final result can be written as
(4π)^2 Σ_Z- b^ r = g^2 m_Z^2/8 c^2 (1-y_h^)(y_h^4-6y_h^3+17y_h^2-22y_h^+4)ln y_h^
-3/2 g^2 m_Z^2 (4 c^4+4 c^2-1) DiscB(m_Z^2,m_W^,m^_W)
+g^2 m_Z^2/4 c^2 (y_h^ - 4 )(y_h^3 -7y_h^2 + 20y_h^ -28) DiscB(m_Z^2,m_h^,m_Z^)
+g^2 m_Z^2/24 c^2(6y_h^2 -21y_h^ -288c^6-264c^4 +112c^2 +49 ) ,
where the function DiscB(p^2,m_0^,m_1^) is related to the Passarino-Veltman function via
B^_0 (p^2;m^_0,m^_1) = Δ + ln(μ ^2/m_1^2) +2+ DiscB(p^2,m^_0,m^_1)-m_0^2-m_1^2+p^2/2 p^2ln(m_0^2/m_1^2) ,
with μ being the renormalization mass scale. The explicit form of DiscB(p^2,m_0^,m_1^) reads
DiscB(p^2, m^_0, m^_1) = √(λ(m_0^2,m_1^2,p^2))/p^2ln[m_0^2 + m_1^2 -p^2 + √(λ(m_0^2,m_1^2,p^2))/2m^_0 m^_1] ,
where the Källén function
λ(x,y,z) ≡ x^2+y^2+z^2-2xy-2yz-2zx
has been defined.
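For numerical evaluations, the two functions above are straightforward to implement; the following Python sketch (ours, following the definitions just given, not the Package-X internals) uses complex arithmetic, so that below the threshold p^2 < (m_0 + m_1)^2 the result is automatically real, while above threshold the physical branch would additionally require the usual p^2 → p^2 + iε prescription, which is not handled carefully here.

import cmath

def kallen(x, y, z):
    # Kallen function: lambda(x, y, z) = x^2 + y^2 + z^2 - 2xy - 2yz - 2zx
    return x*x + y*y + z*z - 2*(x*y + y*z + z*x)

def DiscB(p2, m0, m1):
    # DiscB(p^2, m0, m1) = sqrt(lambda(m0^2, m1^2, p^2))/p^2
    #                      * ln[(m0^2 + m1^2 - p^2 + sqrt(lambda)) / (2 m0 m1)]
    lam = kallen(m0*m0, m1*m1, p2)
    sq = cmath.sqrt(lam)
    return sq / p2 * cmath.log((m0*m0 + m1*m1 - p2 + sq) / (2*m0*m1))

# example: the combination DiscB(m_Z^2, m_W, m_W) that enters the Z-boson self-energy
mW, mZ = 80.377, 91.1876
print(DiscB(mZ**2, mW, mW))   # real (up to rounding) since m_Z < 2 m_W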
* Fermionic Contributions. For the fermions running in the loop, we have
(4π)^2 Σ_Z- f^ r = ∑_f 4 e^2 m_Z^2 /12 y^_f-3{ 6 y^_f [a_f^2 (1-4 y^_f)+2 v_f^2 y^_f] DiscB(m_Z^2,m^_f,m^_f) .
. +(4 y^_f-1) [a_f^2 (1-12 y^_f)+v_f^2 (6 y^_f+1)] } ,
where we have defined v_f^≡ c_ V,NC^f/s^_2 w and a_f^≡ c_ A,NC^f/s^_2 w. Note that the summation is over all the SM fermions and three colors for each type of quarks are taken into account.
§.§ Vertex Contributions
Then, we calculate the vertex corrections, for which the Feynman diagrams have been depicted in Fig. <ref>-(2) and (4). For later convenience, we introduce the following functions
F_Z^ (p^2) = ∑_f {[4 a_f^2 m_f^2-p^2(a_f^2+v_f^2)] B_0^(p^2;m_f^,m_f^) .
. -4 (a_f^2+v_f^2) B^_00(p^2;m_f^,m_f^) +2 (a_f^2+v_f^2) A^_0(m_f^) } ,
F_W^ (p^2) = ∑_{f,f'}[ (m_f^2+m_f^'^2) B^_0(p^2;m^_f,m^_f')-4 B^_00(p^2;m^_f,m^_f') .
. -p^2 B^_0(p^2;m^_f,m^_f') + A^_0(m^_f)+ A^_0(m^_f') ] ,
F_A^ (p^2) = ∑_f Q_f^2 [ -4 B^_00(p^2;m^_f,m^_f)-p^2 B^_0(p^2;m^_f,m^_f) + 2 A^_0(m^_f) ] ,
F_AZ^ (p^2) = ∑_f Q_f^ v_f^[ -4
B^_00(p^2;m^_f,m^_f)-p^2 B^_0(m_Z^2;m^_f,m^_f)+2 A^_0(m^_f) ] ,
where Q^_f denotes the electric charge and {f, f^'} refers to the pair of fermions in the same isospin-doublet. As the subscripts of these functions indicate, they represent the contributions from the self-energies of Z-boson, W-boson, photon and the A-Z mixing in Eqs. (<ref>)-(<ref>). In addition, their derivatives F_V^' (m_V^2) ≡. d F_V^(p^2)/ dp^2|^_p^2=m_V^2 for V = W,Z,A are also needed.
* The ν_α^-ν_α^-Z Vertex. The contribution to Δ c_ V,NC^f is given by s^_2 w c_ V,NC^f Γ_ν_α^ν_α^ Z^ r with
(4π)^2 Γ_ν_α^ν_α^ Z^ r = -g^2 x_α^/s^_2 w(ln x_α^ +3) + g^2 c^_2 w/s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2 ] + g^2 s/2c[ F_Z^'(m_Z^2) - F_A^'(0)]
+g^2/48 c s^3(120 c^6+68 c^4-106 c^2+17) DiscB(m_Z^2,m^_W,m^_W)
-g^2/6 s^3_2 w(y^_h-4)[ (4 c^2-3) y_h^3-(29 c^2-21) y_h^2 .
.+(88 c^2-60) y^_h -132 c^2+84 ] DiscB(m_Z^2,m^_h,m^_Z)
-g^2 /48 c^5 s^3(96 c^8+88 c^6-100 c^4+14c^2+1) DiscB(m_W^2,m^_W,m^_Z)
+ g^2 c^_2 w/48 c s^3(x_h^2-4 x^_h +12 ) DiscB(m_W^2,m^_h,m^_W)
+g^2 /12 s_2 w^3 [(4c^2-3) y_h^3-(21 c^2-15) y_h^2+(42 c^2-30) y^_h-60 c^2+36] ln y^_h
-g^2 c^_2 w/12 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x^_h-24)ln x^_h
+g^2 /96 c^7 s^3[(12 c^6-6 c^4) y^_h-158 c^6+106 c^4-12 c^2-1 ]ln(m_W^2/m_Z^2)
+g^2 /48 c^5 s [(4 c^2-1) y_h^2-6 c^2 y^_h-240 c^8-356 c^6+252 c^4+10 c^2-1] .
Notice that the flavor-dependent terms proportional to x_α^ are the same as those in Ref. <cit.>, and our results are also consistent with Eqs. (5.46) and (5.47) in Ref. <cit.>.
* The f-f-Z Vertex. With the radiative corrections to the vector-type couplings in the renormalized vertices Γ_ffZ^ r, the total contributions to Δ c_ V,NC^f can be expressed as s_2 w^Γ_ffZ^ r for f=u,d,e. All the terms proportional to the quark and electron masses of O(x_f^) can always be neglected due to the suppression by the W-boson mass.
* u-u-Z Vertex. The renormalized vertex reads
(4π)^2Γ_uuZ^ r = g^2(5-2c^2)/6 s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2] + e^2 (8c^2-5)/6 s^_2 w[ F_Z^'(m_Z^2) - F_A^'(0)]
+4 e^2 /3 m_Z^2 F_AZ^ (m_Z^2) -g^2 (2 c^2-5)/288 c s^3(x_h^2-4 x_h^+12) DiscB(m_W^2,m_h^,m_W^)
-g^2/36 s^3_2 w(y_h^-4)[ (16 c^4-28 c^2+15) y_h^3 - (104 c^4 - 185 c^2+105) y_h^2 .
. +(256 c^4-472 c^2+300) y_h^ -288 c^4+564 c^2-420 ] DiscB(m_Z^2,m_h^,m_Z^)
+g^2 /96 c s^3(320 c^8-360 c^6-236 c^4+398 c^2-23) DiscB(m_Z^2,m_W^,m_W^)
+ g^2/288 c^5 s^3(96 c^8-104 c^6-372 c^4+78 c^2+5) DiscB(m_W^2,m_Z^,m_W^)
+g^2 /72 s^3_2 w[ (16 c^4-28 c^2+15) y_h^3 -(72 c^4 - 129 c^2 + 75) y_h^2 .
. +(144 c^4-258 c^2+150) y_h^ - 96 c^4+204 c^2-180 ] ln y_h^
+ g^2(2 c^2-5) /72 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x_h^-24) ln x_h^
+ g^2 /576 c^7 s^3[(30 c^4-12 c^6) y_h^ + 16 c^8+134 c^6-418 c^4+68 c^2+5]ln(m_W^2/m_Z^2)
+ g^2/288 c^5 s [ (16 c^4-12 c^2+5) y_h^2 -6 c^2 (8 c^2-5) y_h^.
. -1920 c^10+400 c^8+1652 c^6-1100 c^4-42 c^2+5 ] .
* d-d-Z Vertex. The renormalized vertex is given by
(4π)^2Γ_ddZ^ r = - g^2(2c^2+1)/6 s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2 ] + e^2(1-4c^2)/6 s^_2 w[ F_Z^'(m_Z^2) - F_A^'(0)]
-2 e^2/3 m_Z^2 F_AZ^ (m_Z^2) -g^2(2 c^2+1)/288 c s^3(x_h^2-4 x^_h+12) DiscB(m_W^2,m^_h,m^_W)
+g^2/36 s^3_2 w(y^_h-4)[ (8 c^4-8 c^2+3) y_h^3-(52 c^4-49 c^2+21) y_h^2 .
. +(128 c^4-104 c^2+60) y^_h - 144 c^4+84 c^2-84 ] DiscB(m_Z^2,m^_h,m^_Z)
-g^2 /96 c s^3(160 c^8-120 c^6-84 c^4+146 c^2-3) DiscB(m_Z^2,m^_W,m^_W)
+g^2 /288 c^5 s^3(96 c^8+184 c^6+36 c^4-18 c^2-1) DiscB(m_W^2,m^_Z,m^_W)
-g^2 /72 s^3_2 w[ (8 c^4-8 c^2+3) y_h^3 - (36 c^4 - 33 c^2 + 15) y_h^2 .
. +(72 c^4-66 c^2+30) y^_h -48 c^4+12 c^2 -36 ] ln y^_h
+ g^2 (2 c^2+1)/72 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x^_h-24)ln x^_h
-g^2 /576 c^7 s^3[6 (2 c^2+1) c^4 y_h + 8 c^8-170 c^6-50 c^4+16 c^2+1] ln(m_W^2/m_Z^2)
-g^2 /288 c^5 s [ (8 c^4+1) y_h^2 + 6 c^2 (1-4 c^2) y^_h .
. + (960 c^10+160 c^8-292 c^6+172 c^4+6 c^2-1) ] .
* e-e-Z Vertex. The renormalized vertex is
(4π)^2Γ_eeZ^ r = g^2 (2 c^2-3) /2 s^_2 w[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2] + e^2 (3-4 c^2)/2 s^_2 w[ F_Z^'(m_Z^2) - F_A^'(0)]
- 2 e^2/m_Z^2 F_AZ^ (m_Z^2) +g^2 (2 c^2-3) /96 c s^3(x_h^2-4 x_h^+12) DiscB(m_W^2,m_h^,m_W^)
+g^2/12 s^3_2 w(y_h-4)[ (8 c^4-16 c^2+9) y_h^3-(52 c^4-107 c^2+63) y_h^2 .
. +(128 c^4-280 c^2+180) y_h^-144 c^4+348 c^2-252 ] DiscB(m_Z^2,m_h^,m_Z^)
-g^2 /96 c s^3(480 c^8-600 c^6-388 c^4+650 c^2-43) DiscB(m_Z^2,m_W^,m_W^)
-g^2/96 c^5 s^3(96 c^8-8 c^6-236 c^4+46 c^2+3) DiscB(m_W^2,m_W^,m_Z^)
-g^2 /24 s^3_2 w[ (8 c^4-16 c^2+9) y_h^3 - (36 c^4 - 75 c^2 + 45) y_h^2 .
. +(72 c^4-150 c^2+90) y_h^-48 c^4+132 c^2 -108 ] ln y_h^
-g^2 (2 c^2-3)/24 s^3_2 w(c^2 x_h^3-6 c^2 x_h^2+12 c^2 x_h^-24) ln x_h^
-g^2/192 c^7 s^3[ (18 c^4-12 c^6) y_h^ + 96 c^12-240 c^10+224 c^8 .
. +62 c^6-250 c^4+40 c^2 +3 ] ln(m_W^2/m_Z^2)
- g^2 /96 c^5 s[ (8 c^4-8 c^2+3) y_h^2 - 6 c^2 (4 c^2-3) y_h^.
. - 960 c^10+320 c^8+1004 c^6-676 c^4-26 c^2+3 ] .
This renormalized vertex has also been calculated in Ref. <cit.>, where the results in Eqs. (5.42)-(5.44) agree perfectly with ours.
§.§ Box-diagram Contributions
Finally, we consider the box diagrams shown in Fig <ref>-(5). The contribution to Δ c_ V,NC^f is actually given by -(4 m_W^2/g^2) M^f_ NC, where the relevant amplitudes from the one-loop box diagrams are expressed as i M^f_ NC with f=u,d,e. These amplitudes are UV-finite and no renormalization is needed. For the scattering with the neutrino ν_α^, the box diagrams for three different types of background particles lead to
(4π)^2 M_ NC^u = -g^4/8 m_W^2[5-4c^2/4c^2+ x_α^(ln x_α^ +1)] ,
(4π)^2 M_ NC^d = +g^4/2m_W^2[20 c^2-1/16 c^2 +x_α^(ln x_α^+1)] ,
(4π)^2 M_ NC^e = +g^4/2m_W^2[28c^2-9/16c^2+x_α^(ln x_α^+1)] .
The first two results are consistent with Eqs. (7.1)-(7.3) in Ref. <cit.>, whereas the final one is the same as in Eq. (5.51) of Ref. <cit.>. The neutrino flavor-dependent parts have been found to be compatible with the previous calculations in Refs. <cit.>.
§ THE CHARGED-CURRENT POTENTIAL
In parallel with the discussions about the NC potential, there are also three types of radiative corrections to the CC potential V_ CC^, which will be denoted by Δ c_ V,CC^e. The relevant Feynman diagrams of the elastic scattering between electron neutrinos and electrons ν_e^ +e →ν_e^ + e for the CC potential have been given in Fig. <ref>.
§.§ Self-energy of W-boson
First, we consider the self-energy of W-boson in Fig. <ref>-(3), where the shaded circle represents all possible contributions. The contribution to Δ c_ V,CC^e from the W-boson self-energy can be expressed as -(c_ V,CC^e / m_W^2) Σ_W^ r, where iΣ_W^ r denotes the renormalized self-energy.
* Bosonic Contributions. The W-boson self-energy receives the contributions from all the bosons running in the loop, and the renormalized self-energy is
(4π)^2Σ_W- b^ r = -g^2 m_Z^2/4 c^2(12 c^6+44 c^4-13 c^2-1) DiscB(m_W^2,m_Z^,m_W^)
+g^2 m_W^2/4(x_h-4)(x_h^3-7 x_h^2+20 x_h^ -28 ) DiscB(m_W^2,m_h^,m_W^)
-g^2 m_W^2 /8 (x_h^-1)(x_h^4-6 x_h^3+17 x_h^2-22 x_h^+4) ln x_h^
- g^2 m_Z^2/8 c^4 s^2(16 c^10-4 c^8-118 c^6+83 c^4-10 c^2-1) ln(m_W^2/m_Z^2)
+ g^2 m_W^2 /24 c^4[c^4 (6 x_h^2-21 x_h^ -370)+75 c^2+6] .
* Fermionic Contributions. The self-energy correction with fermions in the loop reads
(4π)^2Σ_W- f^ r = g^2 m_W^2∑_{f,f^'}{m_W^4/6{-x_f^3+x_f^2 x_f^'^+x_f^(x_f^'^2-4 x_f^'^ +3 )-x_f^'^3+3 x_f^'^ -2 /λ(m_f^2,m_f^'^2,m_W^2). .
. -1/m_W^4[3 x_f^2-2x_f^(3 x_f^'^-1)+3 x_f^'^2+2 x_f^'^-2] } DiscB(m_W^2,m_f^,m_f^'^)
+ 1/4(x_f^-x_f^'^)[x_f^4-4 x_f^3 x_f^'^ +x_f^2 (6 x_f^'^2-1)-4 x_f^ x_f^'^3+x_f^'^4-x_f^'^2] ln(x_f^/x_f^'^)
. -1/12[6 x_f^2+3 x_f^(1-4 x_f^'^)+6 x_f^'^2+3 x_f^'^-4 ] } .
We should sum over all the contributions from the SM fermions, where {f, f^'} denotes the pair of fermions in the same isospin-doublet, and take into account three colors for each type of quarks.
§.§ Vertex Contributions
Then, we turn to the CC vertex corrections, which have been shown in Fig. <ref>-(2) and (4). The total contribution to Δ c_ V,CC^e from the ν_e^-e-W vertex can be expressed as √(2) s Γ_ν_e^ e W^ r c_ V,CC^e with the renormalized vertex iΓ_ν_e^ e W^ r defined as follows
(4π)^2Γ_ν_e^ e W^ r = g^2/c^2[ F_Z^(m_Z^2)/m_Z^2 - F_W^(m_W^2)/4 s^2 m_W^2] - e^2 [ F_A^'(0) - F_W^'(m_W^2)/4 s^2]
+g^2 /24 s^2(4-x_h^)[ (c^2-2) x_h^3 - (5 c^2-13) x_h^2 .
. +4 (c^2-8) x_h^ +12 (c^2+3) ] DiscB(m_W^2,m_h^,m_W^)
-g^2/24 c^4 s^2 (60 c^8-8 c^6+71 c^4-22 c^2-2) DiscB(m_W^2,m_W^,m_Z^)
- g^2/24 s^2(y_h^2-4 y_h^+12) DiscB(m_Z^2,m_h^,m_Z^)
+g^2/24 s^2 (48 c^6+68 c^4-16 c^2-1) DiscB(m_Z^2,m_W^,m_W^)
+g^2/48 s^2(y_h^3-6 y_h^2 +18 y_h^ -20 c^2 )ln y_h^
-g^2/48[(c^4+c^2+2) x_h^3 - (6 c^2+9) x_h^2 +18 x_h^ +168 c^2-8] ln x_h^
+g^2/48 c^6 s^2( c^6 y_h^3 -6 c^6 y_h^2 +18 c^6 y_h^ - 48 c^10-36 c^8 .
. +166 c^6-119 c^4+18 c^2+2 )ln(m_W^2/m_Z^2)
+g^2/24 c^4[(c^2+2) y_h^2-6 c^2 y_h^ -96 c^8-224 c^6+32 c^4+23 c^2+2 ] .
As mentioned before, the same CC vertex appears both in Fig. <ref>-(2) and (4), so a factor of two is present in the vertex correction in Eq. (<ref>).
§.§ Box-diagram Contributions
Finally, the contributions from the UV-finite box diagrams should be included, for which the Feynman diagram has been shown in Fig. <ref>-(5). Since electrons are present in the background, electron neutrinos interact with them via both NC and CC processes, and for the box diagrams it is impossible to categorize the contributions into either the NC or the CC type. However, it is clear that both ν^_μ and ν^_τ interact with the background particles only through the NC interaction. For this reason, we classify the box diagrams that are universal for all three types of neutrinos as the NC part and the remaining ones as the CC part. The contribution from the box diagrams can be written as -(4 m_W^2/g^2) M^_ CC with the amplitude
(4π)^2 M^_ CC = -g^4/8 m_W^2 s^2[2 s^4 (ln x_e^-1)+(2 c^4+6 c^2-3) ln(m_W^2/m_Z^2)] .
Here it is worth mentioning that for the box diagram involving the internal photon propagator, the generalized Fierz identity <cit.>
ν̄_e^ (x)(1+γ^5) e(x) ē(x)(1-γ^5) ν_e^ (x) = -1/2ν̄_e^ (x)γ_μ(1-γ^5) ν_e^ (x) ē(x)γ^μ(1+γ^5) e(x) ,
has been utilized to transform the contributions into the correction to the vector-type coupling.
§ NUMERICAL RESULTS
Given the finite corrections in the previous sections, we now specify the input parameters and evaluate the one-loop corrections to the matter potentials. The latest values of relevant input parameters are quoted from the Particle Data Group <cit.> and summarized below:
* The fine structure constant
α≡ e^2/(4π) = 1/137.035999084 ;
* The gauge-boson and Higgs-boson masses[The latest measurement of W-boson mass given by the CDF-II collaboration is m_W^ = 80.433 GeV <cit.>, yielding a 7σ discrepancy with the SM expectation. However, we have checked that the difference in the correction to the matter potential caused by such a discrepancy appears at the order of O(10^-4).]
m_W^ = 80.377 GeV , m_Z^ = 91.1876 GeV , m_h^ = 125.25 GeV ;
* The quark masses
m_u^ = 2.16 MeV , m_c^ = 1.67 GeV , m_t^ = 172.5 GeV ,
m_d^ = 4.67 MeV , m_s^ = 93.4 MeV , m^_b = 4.78 GeV ;
* The charged-lepton masses
m_e^ = 0.511 MeV , m_μ^ = 105.658 MeV , m_τ^= 1.777 GeV .
All the particle masses quoted above refer to the on-shell masses, except for those of three light quarks (i.e., u, d and s). Instead, the running masses of three light quarks at the energy scale of μ = 2 GeV are used, since the on-shell masses of light quarks are not well-defined due to the non-perturbative nature of quantum chromodynamics at low energies.
From Eq. (<ref>), we can observe that the tree-level NC potential induced by each type of fermions in the matter is proportional to the vector-type coupling c_ V,NC^u = 0.2026, c_ V,NC^d = -0.3514 and c_ V,NC^e = -0.0539, where these couplings have been displayed in Table <ref> and evaluated by using s^2 = 1 - m^2_W/m^2_Z ≈ 0.223. The corresponding corrections to these vector-type couplings from the Z-boson self-energy, vertex corrections and box diagrams are listed in Table <ref>, accordingly. The flavor-dependent corrections are labeled as “fd", where we have chosen the flavor α = τ for example. It shows clearly that the flavor-dependent contributions are two to three orders of magnitude smaller than the flavor-independent ones. Therefore, in the final results of Δ c_ V,NC^f in the last column of Table <ref>, we only list the dominant flavor-independent values.
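For orientation, the quoted tree-level NC vector couplings follow from the standard SM expression c_ V,NC^f = I_3^f - 2 Q_f^ s^2 (our reading of the normalization in Table <ref>, which reproduces the numbers above); a small Python sketch, purely illustrative:

mW, mZ = 80.377, 91.1876
s2 = 1 - mW**2 / mZ**2                     # ~ 0.223
I3 = {'u': +0.5, 'd': -0.5, 'e': -0.5}     # weak isospin of the left-handed fermion
Q  = {'u': +2/3, 'd': -1/3, 'e': -1.0}     # electric charge
cV = {f: I3[f] - 2*Q[f]*s2 for f in I3}    # ~ {u: 0.2026, d: -0.3513, e: -0.0539}
print(s2, cV)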
Then, we can translate the NC potential induced by quarks and electrons into that by protons, neutrons and electrons via the relations among their number densities, namely, N_u^ = 2 N_p^ + N_n^, N_d^ = N_p^ + 2 N_n^ and N_e^ = N_p^. The one-loop correction to the NC potential is thus given by
Δ c_ V,NC^/c_ V,NC^ = N_p^(2Δ c_ V,NC^u + Δ c_ V,NC^d + Δ c_ V,NC^e) + N_n^(Δ c_ V,NC^u + 2 Δ c_ V,NC^d)/N_n^( c_ V,NC^u + 2 c_ V,NC^d)≈ 0.062 + 0.02 N^_p/N^_n ,
where the relation 2 c_ V,NC^u + c_ V,NC^d + c_ V,NC^e = 0 has been implemented. Therefore, for the ordinary matter with N^_p ≈ N^_n, the one-loop correction to the NC potential is about 8.2%.
Similar to the case of the NC potential, we collect all the contributions to Δ c_ V,CC^e in Table <ref>. It shows that there is a correction of about 6% to the CC matter potential. Whereas the NC potentials are the same for three-flavor neutrinos, except for the tiny flavor-dependent contributions, this correction to the CC potential of electron neutrinos will play an important role in neutrino flavor conversions. In the near future, the long-baseline accelerator neutrino experiments DUNE and T2HK will make use of the MSW effect to resolve the sign of Δ m^2_31, and also determine the octant of θ_23^ and the CP-violating phase δ_ CP^. The oscillation probability in the appearance channel ν^_μ→ν_e^ with matter effects can be written as <cit.>
P(ν_μ^→ν_e^) ≈ sin ^2θ_23sin ^2 2 θ_13sin ^2(Δ_31-a L)/(Δ_31-a L)^2Δ_31^2
+sin 2 θ_23sin 2 θ_13sin 2 θ_12sin(Δ_31-a L)/(Δ_31-a L)Δ_31sin (a L)/(a L)Δ_21cos(Δ_31+δ_ CP^)
+cos ^2θ_23sin ^2 2 θ_12sin ^2(a L)/(a L)^2Δ_21^2 ,
where Δ_ij^≡Δ m_ij^2 L/(4E) with Δ m^2_ij≡ m^2_i - m^2_j for ij = 21, 31 being the neutrino mass-squared differences and a ≡ V/2 have been defined. Here L is the baseline length and E is the beam energy of neutrinos. The first line on the right-hand side of Eq. (<ref>) denotes the dominant oscillation term driven by Δ m^2_31. As the contributions from the NC potential are identical for three neutrino flavors (when the tiny flavor-dependent parts are neglected), only the CC potential is relevant. At the tree level, we have a = G_μ^ N_e^ /√(2), while the 5.8% correction to the CC potential at the one-loop level should be included.
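To make the size of this effect concrete, the following Python sketch (ours, not from the paper) evaluates the approximate appearance probability of Eq. (<ref>) for DUNE-like parameters, modeling the one-loop effect as a simple rescaling of a by the ~5.8% CC correction quoted above; the oscillation parameters are illustrative global-fit-like values and Y_e = 0.5 is assumed for the matter composition.

import math

GF = 1.1663787e-5          # Fermi constant, GeV^-2
NA = 6.02214076e23         # Avogadro constant
hbarc_cm = 1.973269804e-14 # hbar*c in GeV*cm

def a_matter(rho_gcm3, Ye=0.5, one_loop_factor=1.0):
    """a = V/2 = G_F n_e/sqrt(2) in GeV, optionally rescaled by the one-loop factor."""
    n_e = Ye * rho_gcm3 * NA * hbarc_cm**3        # electron number density in GeV^3
    return one_loop_factor * GF * n_e / math.sqrt(2)

def P_mue(E_GeV, L_km, a_GeV, dm31=2.51e-3, dm21=7.42e-5,
          s23sq=0.57, s13sq=0.022, s12sq=0.31, delta_cp=-math.pi/2):
    """Three-term approximation of Eq. (<ref>); dm^2 in eV^2, angles illustrative."""
    L = L_km * 1e5 / hbarc_cm                     # baseline in GeV^-1
    D31 = dm31 * 1e-18 * L / (4*E_GeV)            # Delta_31
    D21 = dm21 * 1e-18 * L / (4*E_GeV)            # Delta_21
    aL = a_GeV * L
    s2_23 = 2*math.sqrt(s23sq*(1-s23sq)); s2_13 = 2*math.sqrt(s13sq*(1-s13sq))
    s2_12 = 2*math.sqrt(s12sq*(1-s12sq))
    sinc = lambda x: math.sin(x)/x if x != 0 else 1.0
    P  = s23sq * s2_13**2 * sinc(D31-aL)**2 * D31**2
    P += s2_23*s2_13*s2_12 * sinc(D31-aL)*D31 * sinc(aL)*D21 * math.cos(D31+delta_cp)
    P += (1-s23sq) * s2_12**2 * sinc(aL)**2 * D21**2
    return P

a_tree = a_matter(2.848)                          # DUNE-like average density, g/cm^3
a_loop = a_matter(2.848, one_loop_factor=1.058)   # with the ~5.8% CC correction
print(P_mue(2.0, 1300, a_tree), P_mue(2.0, 1300, a_loop))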
As an example, we now investigate the impact of the one-loop correction to the matter potential on the sensitivity to neutrino mass ordering at DUNE, for which the baseline length is L = 1300 km <cit.> and the average matter density is ρ_ avg^ = 2.848 g/ cm^3 <cit.>. With the global-fit results of neutrino oscillation parameters <cit.>, the oscillation probability in Eq. (<ref>) can be numerically calculated in both cases of normal mass ordering (NO) and inverted mass ordering (IO). Since DUNE is sufficiently sensitive to the difference in the oscillation probabilities between NO and IO cases, we expect that the difference caused by the one-loop correction to the matter potential can also be observed experimentally.
The difference of oscillation probabilities Δ P(ν^_μ→ν^_e) ≡ P^ NO_(ν_μ^→ν_e^) - P^ IO_(ν_μ^→ν_e^) between the NO and IO cases at DUNE has been plotted in Fig. <ref>. In the left and middle panels, the tree- and one-loop-level results for Δ P(ν^_μ→ν^_e) are denoted by the black solid curve and the blue dashed curve, respectively. The difference between the tree- and one-loop-level results is represented by the red dot-dashed curve, where the CP phase has been chosen as δ_ CP^ = -90^∘ in the left panel and δ^_ CP = 0 in the middle panel. To display the impact of the one-loop correction to the matter potential on the sensitivity to the neutrino mass ordering, we show the difference between the tree- and one-loop-level results as a function of the neutrino energy E ∈ [1, 5] GeV and δ_ CP^∈ [-180^∘, 180^∘] in the right panel. It can be seen that a per mille level difference can be generated by radiative corrections. In particular, in the region where E ≈ 2 GeV and δ_ CP^ is negative, the difference reaches more than 3‰. Although such a difference is small, it is promising that it can be observed at DUNE, for which it has been demonstrated that percent-level variations in the matter density can be probed <cit.>. Taking into account both the one-loop corrections to the matter potential and the uncertainty in the matter density, we shall carry out a more dedicated study to explore their impact on the determination of the neutrino mass ordering and the precise measurement of the CP-violating phase at both DUNE and T2HK in a separate work.
§ SUMMARY
In this paper, we have performed a complete calculation of the MSW matter potential for all-flavor neutrinos at the one-loop level in the SM. Following the on-shell renormalization of the SM, we have calculated the one-loop amplitudes for the coherent forward scattering of neutrinos with the SM fermions present in the ordinary matter. The radiative corrections to the vector-type couplings of neutrinos in both NC and CC processes have been obtained and used to determine the MSW matter potential. With the latest values of the SM parameters, we evaluate the finite corrections to the matter potentials and find that the correction to the NC potential is about 8% while that to the CC potential is about 6%.
In the coming precision era of neutrino oscillation physics, one has to reconsider the radiative corrections at the percent level to the interactions of neutrinos with matter. For instance, the JUNO experiment will push the relative errors in the measurement of the oscillation parameters sin^2θ^_12, Δ m^2_21 and Δ m^2_31 even down to the sub-percent level <cit.>. The next-generation long-baseline accelerator neutrino experiments are expected to determine the neutrino mass ordering, the octant of θ_23^ and the value of the CP-violating phase δ_ CP^. The experimental sensitivities of DUNE and T2HK to these unknown parameters are also sufficiently high to probe the one-loop corrections to the MSW matter potential. In this sense, we believe that our calculations are not only useful for the study of neutrino oscillation phenomenology, but also serve as an instructive example for precision calculations in the whole field of neutrino physics.
§ ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China under grant No. 11835013. One of the authors (J.H.) would like to thank Dr. Di Zhang for helpful suggestions on using FeynArts. All Feynman diagrams in this work are generated by FeynArts <cit.>, and the loop integrals are calculated with the help of Package-X <cit.>.
§ RENORMALIZATION OF THE STANDARD MODEL
In this appendix, we explain some details about the on-shell renormalization of the Standard Model (SM) and list all the relevant one-loop diagrams for completeness.
The renormalization procedure that we have adopted follows closely that in Ref. <cit.>. Instead of repeating the derivations of all the counterterms, we just highlight some key points relevant to our calculations. More details of the on-shell renormalization can be found in a number of excellent reviews <cit.>, where the SM Lagrangian and the Feynman rules are explicitly given.
§.§ Renormalization Constants
Once the set of input physical parameters is chosen, one can decompose the bare parameters and fields, which will be marked by the subscript “0", into the renormalized ones and the counterterms. More explicitly, the bare parameters are given by
e_0^ = Z_e^ e = (1 + δ Z_e^) e ,
m_W,0^2 = m_W^2 + δ m_W^2 ,
m_Z,0^2 = m_Z^2 + δ m_Z^2 ,
m_h,0^2 = m_h^2 + δ m_h^2 ,
m_f,0^2 = m_f^2 + δ m_f^2 ,
while the renormalization of the physical fields is as follows
W_0μ^± = √(Z_W^) W_μ^± = (1 + 1/2δ Z_W^) W_μ^± ,
[ Z_0μ^; A_0μ^ ] = [ √(Z_ZZ^) √(Z_ZA^); √(Z_AZ^) √(Z_AA^) ][ Z_μ^; A_μ ] = [ 1+ 1/2δ Z_ZZ^ 1/2δ Z_ZA^; 1/2δ Z_AZ^ 1 + 1/2δ Z_AA^ ][ Z_μ^; A_μ^ ] ,
h_0^ = √(Z_h^) h = (1 + 1/2δ Z_h^) h ,
f_i,0^ L = √(Z_ij^f, L) f_j^ L = (1+1/2δ Z_ij^f, L) f_j^ L ,
f_i,0^ R = √(Z_ij^f, R) f_j^ R = (1+1/2δ Z_ij^f, R) f_j^ R .
The subscripts i and j of the fermion fields refer to different generations. In our calculations, the flavor mixing among different generations of quarks plays an insignificant role, so we ignore it and its radiative corrections. Hence only the i=j case is considered and the CKM matrix is taken to be the identity matrix. A more careful treatment of the renormalization of the CKM matrix can be found in Refs. <cit.>. In addition, the renormalization of unphysical fields is irrelevant to the one-loop scattering amplitudes and will be neglected as well.
§.§ Fixing the Counterterms
The one-loop self-energies of the scalar and fermion fields are denoted as iΣ, while those of gauge fields as iΣ^_ T with
iΣ_μν^V (p^2) = iΣ_ T^V(g_μν^ - p_μ^ p_ν^/p^2) + iΣ_ L^Vp_μ^ p^_ν/p^2 ,
for V = W, Z, A, AZ. The counterterms are fixed by imposing the on-shell conditions and can be expressed in terms of the self-energies. The mass and wave-function counterterms of gauge bosons and the Higgs boson are given by
δ m_W^2 = - Re Σ_ T^W (m_W^2) , δ Z_W^ = . Re ∂Σ_ T^W (p^2)/∂ p^2|_p^2_ = m_W^2 ,
δ m_Z^2 = - Re Σ_ T^Z (m_Z^2) , δ Z_Z^ = . Re ∂Σ_ T^Z (p^2)/∂ p^2|_p^2_ = m_Z^2 ,
δ m_h^2 = + Re Σ_^h (m_h^2) , δ Z_h^ = - . Re ∂Σ_^h (p^2)/∂ p^2|_p^2_ = m_h^2 .
The counterterms for the photon and A-Z mixing are
δ Z_AA^ = .∂Σ_ T^AA(p^2)/∂ p^2|_p^2 = 0 , δ Z_AZ^ = 2 Re Σ_ T^AZ(m_Z^2)/m_Z^2 , δ Z_Z A^ = - 2 Σ_ T^A Z(0)/m_Z^2 .
Notice that there is a minus sign for the gauge-boson self-energy in our notations compared to those in Refs. <cit.>. Such a difference just arises from the definition of the gauge-boson self-energy, which is denoted as iΣ_ T^ in our work while as - iΣ_ T^ in the previous literature. As a result, all the counterterms corresponding to the gauge-boson self-energies in Eqs. (<ref>) and (<ref>) have an opposite sign.
For the fermion masses and wave functions, the counterterms are fixed by
δ m_f^ = m_f^/2 Re[Σ^f, L_ii(m_f^2) + Σ^f, R_ii(m_f^2) + 2 Σ^f, S_ii(m_f^2)] ,
δ Z_ii^f, L = - Re Σ_i i^f, L(m_f^2) - .m_f^2∂/∂ p^2 Re[Σ_ii^f, L(p^2) + Σ_ii^f, R(p^2) + 2 Σ_ii^f, S(p^2)]|_p^2=m_f^2 ,
δ Z_ii^f, R = - Re Σ_i i^f, R(m_f^2) - .m_f^2∂/∂ p^2 Re[Σ_ii^f, L(p^2) + Σ_ii^f, R(p^2) + 2 Σ_ii^f, S(p^2)]|_p^2=m_f^2 .
As has been mentioned in the main text, the terms of O(x^_f) can be safely neglected, so only the first terms in the wave-function counterterms of fermions need to be taken into account. Note that the fermion self-energy has been decomposed as below
Σ_ii^f(p^2) = p̸ P_ L^Σ_ii^f, L(p^2) + p̸ P_ R^Σ_ii^f, R(p^2) + m^_f Σ_ii^f, S(p^2) ,
with the chiral projection operators P_ L, R^ = (1∓γ^5_)/2.
The renormalization constant of the electric charge can be expressed in terms of the self-energies by implementing the Ward identity, namely,
δ Z_e^ = -1/2δ Z_AA^ - s/2cδ Z_ZA^ ,
which is independent of the fermion species. This occurs as the consequence of the universality of the electric charge.
Finally, although the weak mixing angle has not been chosen as an input parameter, it is usually convenient to introduce a counterterm for it as well and use it to simplify the Feynman rules of the vertex counterterms. However, the counterterms of the cosine and sine of the weak mixing angle are related to the counterterms of gauge-boson masses by
δ c/c = 1/2(δ m_W^2/m_W^2 - δ m_Z^2/m_Z^2) , δ s/s = - c^2/2s^2(δ m_W^2/m_W^2 - δ m_Z^2/m_Z^2) .
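These two relations are simply the linearization of c^2 = m_W^2/m_Z^2 and s^2 = 1 - c^2 in the mass counterterms; a short sympy check (ours, for illustration):

import sympy as sp

mW2, mZ2, dW, dZ = sp.symbols('mW2 mZ2 dW dZ', positive=True)
c = sp.sqrt(mW2/mZ2)          # on-shell definition cos(theta_w) = m_W/m_Z
s = sp.sqrt(1 - mW2/mZ2)
# vary the squared masses: dW = delta m_W^2, dZ = delta m_Z^2
dc = sp.diff(c, mW2)*dW + sp.diff(c, mZ2)*dZ
ds = sp.diff(s, mW2)*dW + sp.diff(s, mZ2)*dZ
print(sp.simplify(dc/c - sp.Rational(1, 2)*(dW/mW2 - dZ/mZ2)))                      # 0
print(sp.simplify(ds/s + (c**2/s**2)*sp.Rational(1, 2)*(dW/mW2 - dZ/mZ2)))          # 0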
§.§ Self-energies
As all the relevant counterterms are governed by the self-energies, we shall explicitly show the results of the self-energies and give some explanations whenever necessary. In our calculations, the tadpole contribution to the gauge-boson self-energies is included. In subsequent discussions, we focus only on the real parts of the transverse self-energies that contribute to the counterterms.
§.§.§ Tadpole
The inclusion of the tadpole diagrams iT renders the mass counterterms of gauge bosons to be gauge-independent. All the tadpole diagrams are plotted in Fig. <ref>, and the total contribution is
i T = i g/[(4π)^2 4 m_W][ -8 m_f^2 A_0(m_f) + 2 m_h^2 A_0(m_W) + m_h^2 A_0(m_Z) + 3 m_h^2 A_0(m_h) + 4 d m_W^2 A_0(m_W) - 4 m_W^2 A_0(m_W) + 2 d m_Z^2 A_0(m_Z) - 2 m_Z^2 A_0(m_Z) ] .
Notice that a symmetry factor of 1/2 should be considered in Fig. <ref>-(1), -(2) and -(7), while a minus sign for the ghost loops in the diagrams (4)-(6) and the fermion loop in the diagram (9) must be included.
§.§.§ Z-boson
The one-loop self-energy corrections for Z-boson are shown in Fig. <ref>. The contribution to the self-energy of Z-boson is
Σ^Z_ T (p^2) = g^2 /(4π)^2 4 c^2{(16 c^4 p^2+8c_2 w^ m_W^2) B_0^(p^2;m_W^,m_W^)-4m_Z^2 B_0^(p^2;m_Z^,m_h^) .
+4 B_00^(p^2;m_h^,m_Z^)+[16 c^4 (d-1)-16 c^2+4] B_00^(p^2;m_W^,m_W^) - A_0^(m_h^)
. - A_0^(m_Z^) -[8 c^4 (d-1)-8 c^2+2] A^_0(m_W^) }
+ 2 e^2/(4π)^2∑_f{[4 a_f^2 m_f^2-p^2 (a_f^2+v_f^2)] B_0^(p^2;m_f^,m_f^) .
. -4 (a^2_f+v^2_f) B_00^(p^2;m_f^,m^_f) +2 (a^2_f+v^2_f) A_0^(m^_f) } .
The summation is over all the SM fermions. In addition, the tadpole diagrams contribute the term of g m_Z^ T/(m_h^2 c).
§.§.§ W-boson
The one-loop diagrams for the W-boson self-energy are listed in Fig. <ref>. The total result is
Σ_ T^W (p^2) = g^2/4(4π)^2{(8 m_W^2 - 4m_Z^2 s^2 + 16 p^2 c^2) B_0^(p^2;m^_W,m^_Z)-4m_W^2 B_0^(p^2;m^_W,m_h^) .
+ 4[4 c^2 (d-2) + 1] B_00^(p^2;m_W^,m_Z^)+4 B_00^(p^2;m_W^,m_h^)
+ 4s^2(λ ^2+4 p^2) B_0^(p^2;m_W^,λ)+16 s^2 (d-2) B_00^(p^2;m_W^,λ)
. +(6-4 d) A_0^(m_W^) -4(d-2) s^2 A_0^(λ)- A_0^(m_h^) +[4c^2(2-d)-1] A_0^(m_Z^) }
+ g^2/2(4π)^2∑_{f,f'}[ (m_f^2 +m_f^'^2) B_0^(p^2;m_f^,m_f^'^)-4 B_00^(p^2;m_f^,m^_f^') .
. -p^2 B_0^(p^2;m_f^,m_f^'^)+ A_0^(m_f^)+ A_0^(m_f^'^) ] .
To avoid the infrared divergence, we have introduced a tiny mass λ for the photon, which should be kept during the whole calculation and then set to zero in the end. The summation is performed over {f,f^'}, which denotes a pair of fermions in the same isospin-doublet. The tadpole contribution is given by g m_W^ T/ m_h^2.
§.§.§ Photon and A-Z Mixing
Different from the massive gauge bosons, whose self-energies contribute directly to the corrections to the matter potential, the self-energy of the photon and the A-Z mixing are relevant only for the counterterm of the electric charge, as indicated in Eq. (<ref>). Given the one-loop diagrams in Fig. <ref>, the self-energy of the photon A reads
Σ_ T^A (p^2) = 2 e^2/(4π)^2[ (3p^2 + 4m_W^2) B_0^(p^2;m_W^,m_W^)-2(d-2) A_0^(m_W^)]
+ 2 e^2/(4π)^2∑_f Q_f^2 [-4 B_00^(p^2;m_f^,m_f^)-p^2 B_0^(p^2;m_f^,m_f^)+2 A_0^(m_f^)] .
Note that there is no correction to the longitudinal self-energy of the photon, as expected from the unbroken U(1) gauge symmetry.
The Feynman diagrams for the A-Z mixing are similar to those for the photon self-energy, as shown in Fig. <ref>. The analytical expression reads
Σ_ T^AZ(p^2) = g^2 s /(4π)^2 c{[c^2 (3-2d)+s^2] A_0^(m_W^) +2[c^2 (2 d-3)-s^2] B_00^(p^2;m_W^,m_W^) .
. + 2[c^2 (m_W^2+2 p^2)+m_W^2 s^2] B_0^(p^2;m_W^,m_W^) }
+ 2 e^2/(4π)^2∑_f Q_f^ v_f^[-4 B_00^(p^2;m_f^,m_f^)-p^2 B_0^(p^2;m_f^,m_f^) +2 A_0^(m_f^)] .
As in the case of the photon self-energy, the diagrams with the ghost loops and those with the W-ϕ loops give identical corrections.
§.§.§ Fermion
The fermion self-energy will be involved in the vertex counterterms. From the one-loop diagrams in Fig. <ref> and with the decomposition in Eq. (<ref>), we obtain
Σ^f, L (p^2) = -g^2/4(4π)^2{[4 (d-2) s^2 (a_f^+v_f^)^2+x_f^] B_1(p^2;m_f^,m_Z^)+x_f^ B_1(p^2;m_f^,m_h^) .
. +4 (d-2) Q_f^2 s^2 B_1(p^2;m_f^,λ)+2 (d+x_f^'^-2) B_1(p^2;m_f^'^,m_W^) } ,
Σ^f, R (p^2) = -g^2/4(4π)^2{ 4 (d-2) s^2 [(a_f^-v_f^)^2 B_1(p^2;m_f^,m_Z^)+Q_f^2 B_1(p^2;m_f^,λ)] .
+ . x_f^[ B_1(p^2;m_f^,m_h^)+ B_1(p^2;m_f^,m_Z^)+2 B_1(p^2;m_f^'^,m_W^)] } ,
Σ^f, S (p^2) = g^2/4(4π)^2{[4 s^2 d (a_f^2-v_f^2)-x_f^] B_0^(p^2;m_f^,m_Z^) .
. -2 [2 d Q_f^2 s^2 B_0^(p^2;m_f^,λ)+x_f^'^ B_0^(p^2;m_f^'^,m_W^)]+x_f^ B_0^(p^2;m_f^,m_h^) } .
For massless and electrically-neutral neutrinos, the contributions from diagrams (1), (2) or (4) are vanishing, since the relevant interaction vertices are proportional to either the fermion mass or the electric charge.
It is worthwhile to mention that although the obtained self-energies are seemingly different from those in Ref. <cit.>, cf. Eqs. (B.1)-(B.4) and (B.6)-(B.8) therein, they are actually identical after transforming the Passarino-Veltman functions A_0^, B_00^ and B_1^ into B_0^. With these self-energies, we can fix all the counterterms as in Eqs. (<ref>)-(<ref>).
§.§ Amplitudes from the Counterterms
The counterterms result in new interaction vertices and additional diagrams to the scattering amplitudes of our interest. The Feynman rules for the counterterms have been derived in the previous literature <cit.>, and the amplitudes from the counterterms can be easily obtained.
§.§.§ Self-energies of Gauge Bosons
The mass and wave-function counterterms of gauge bosons induce the following contribution
i(m_Z,W^2 δ Z_ZZ,W^ + δ m_Z,W^2) g_μν^ ,
where p^2 = 0 has been assumed for the intermediate gauge bosons in the case of forward scattering. Furthermore, considering the external fermions, one obtains the scattering amplitudes of ν_α^ + f →ν_α^ + f from the self-energy counterterms
i M_ c^Z = ig^2/4m_Z^4 c^2(m_Z^2 δ Z_ZZ^ + δ m_Z^2) ν_α^γ_μ P_ L^ν_α^ fγ^μ(c_ V,NC^f-c_ A,NC^fγ^5)f ,
i M_ c^W = ig^2/4m_W^4(m_W^2 δ Z_W^ + δ m_W^2) ν_α^γ_μ P_ L^ν_α^ fγ^μ(c_ V,CC^f - c_ A,CC^f γ^5 ) f .
§.§.§ Vertex Counterterms
The general fermion-vector-boson interaction from the counterterms can be expressed as
δΓ^FFV_μ = i e γ_μ( C^-_f P_ L^ + C^+_f P_ R^) ,
with F standing for the relevant fermions interacting with a given gauge boson V. All the one-loop diagrams for the corrections to the f-f-Z vertex have been shown in Fig. <ref>. The coefficients in front of the chiral projection operators are defined as
C^±_f = g_f^±(δ g_f^±/g_f^±+1/2δ Z_Z Z^ + δ Z_i i^f, R(L)) + 1/2 Q_f^δ Z_A Z^ ,
where
g_f^+ = -s/c Q_f^ , δ g_f^+=-s/c Q_f^(δ Z_e+1/c^2δ s/s) ,
g_f^- = I_f^3 - s^2 Q_f^/sc , δ g_f^-=I_f^3/s c(δ Z_e+s^2-c^2/c^2δ s/s)+δ g_f^+ ,
with the weak isospin generator I_f^3 of the SM fermions. The scattering amplitude from the counterterms turns out to be
i M_ c^Γ = - i g^2 s/2m_Z^2 c[ν_α^γ_μ C^-_ν_α^ P_ L^ν_α^ fγ^μ(c_ V,NC^f - c_ A,NC^f γ^5) f + fγ_μ( C^-_f P_ L^ + C^+_f P_ R^) f ν_α^γ^μ P_ L^ν_α^] ,
from which one can see that some corrections to the vector-type coupling are proportional to the tree-level coupling c_ V,NC^f whereas others are not.
Several comments on Fig. <ref> are helpful. For massless and electrically-neutral neutrinos, the contributions from the diagrams (1), (2), (4), (5), (7), (10) or (12) are vanishing. As the corrections of O(x_f^) for f=u,d,e are highly suppressed, the contributions from those diagrams can also be neglected. The flavor-dependent terms in the vertex correction come from the diagrams (3), (6), (9), (11), (13) and (14), which are consistent with the observations in Refs. <cit.>. Meanwhile, since neutrinos are purely left-handed in the SM, only C^-_ν_α^ takes part in the correction.
The one-loop diagrams for the corrections to the ν_e^-e-W vertex are given in Fig. <ref>. The counterterm is similar to that in Eq. (<ref>) but with
C^-_f = 1/√(2)s[δ Z_e^ - δ s/s + 1/2δ Z_W^ + 1/2(δ Z_ii^α, L+δ Z_ii^ν_α^, L)] , C^+_f = 0 .
As the diagrams with the vertices proportional to the electron mass can be neglected, we just concentrate on those in (3), (8), (9) and (10).
§.§ Box Diagrams
The box diagrams are presented in Figs. <ref>, <ref> and <ref>, which are actually UV-finite. The final results of the amplitudes have been given and discussed in the main text. Notice that the diagrams involving W or ϕ lead to the flavor-dependent corrections.
To simplify the expressions, one can expand the analytical formulas around the small fermion masses. However, there are two types of small fermion masses, namely, the charged-lepton masses and light quark masses. Given the strong mass hierarchy, i.e., m_e^≪ m_u^≈ m_d^≪ m_μ^≪ m_τ^, we should first expand the results around m^_u,d=0 and m^_e = 0 and safely neglect O(x_u,d,e) terms.
99
ParticleDataGroup:2022pth
R. L. Workman et al. [Particle Data Group],
“Review of Particle Physics,”
PTEP 2022, 083C01 (2022)
Xing:2020ijf
Z. z. Xing,
“Flavor structures of charged fermions and massive neutrinos,”
Phys. Rept. 854, 1-147 (2020)
[arXiv:1909.09610 [hep-ph]].
Wolfenstein:1977ue
L. Wolfenstein,
“Neutrino Oscillations in Matter,”
Phys. Rev. D 17, 2369-2374 (1978)
Wolfenstein:1979ni
L. Wolfenstein,
“Neutrino Oscillations and Stellar Collapse,”
Phys. Rev. D 20, 2634-2635 (1979)
Mikheyev:1985zog
S. P. Mikheyev and A. Y. Smirnov,
“Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos,”
Sov. J. Nucl. Phys. 42, 913-917 (1985)
Mikheev:1986wj
S. P. Mikheev and A. Y. Smirnov,
“Resonant amplification of neutrino oscillations in matter and solar neutrino spectroscopy,”
Nuovo Cim. C 9, 17-26 (1986)
Botella:1986wy
F. J. Botella, C. S. Lim and W. J. Marciano,
“Radiative Corrections to Neutrino Indices of Refraction,”
Phys. Rev. D 35, 896 (1987)
Mirizzi:2009td
A. Mirizzi, S. Pozzorini, G. G. Raffelt and P. D. Serpico,
“Flavour-dependent radiative correction to neutrino-neutrino refraction,”
JHEP 10, 020 (2009)
[arXiv:0907.3674 [hep-ph]].
Dutta:1999ir
G. Dutta, D. Indumathi, M. V. N. Murthy and G. Rajasekaran,
“Neutrinos from stellar collapse: Effects of flavor mixing,”
Phys. Rev. D 61, 013009 (2000)
[arXiv:hep-ph/9907372 [hep-ph]].
Dighe:1999bi
A. S. Dighe and A. Y. Smirnov,
“Identifying the neutrino mass spectrum from the neutrino burst from a supernova,”
Phys. Rev. D 62, 033007 (2000)
[arXiv:hep-ph/9907423 [hep-ph]].
Zhu:2020wuy
J. y. Zhu,
“Radiative corrections to the lepton flavor mixing in dense matter,”
JHEP 05, 097 (2020)
[arXiv:2002.12182 [hep-ph]].
Xing:2022efm
Z. z. Xing and J. y. Zhu,
“One-loop radiative correction to the Toshev relation for neutrino oscillations in matter,”
[arXiv:2208.03488 [hep-ph]].
Tamborra:2011is
I. Tamborra, G. G. Raffelt, L. Hudepohl and H. T. Janka,
“Impact of eV-mass sterile neutrinos on neutrino-driven supernova outflows,”
JCAP 01, 013 (2012)
[arXiv:1110.2104 [astro-ph.SR]].
Wu:2013gxa
M. R. Wu, T. Fischer, L. Huther, G. Martínez-Pinedo and Y. Z. Qian,
“Impact of active-sterile neutrino mixing on supernova explosion and nucleosynthesis,”
Phys. Rev. D 89, no.6, 061303 (2014)
[arXiv:1305.2382 [astro-ph.HE]].
DUNE:2020ypp
B. Abi et al. [DUNE],
“Deep Underground Neutrino Experiment (DUNE), Far Detector Technical Design Report, Volume II: DUNE Physics,”
[arXiv:2002.03005 [hep-ex]].
Hyper-Kamiokande:2022smq
J. Bian et al. [Hyper-Kamiokande],
“Hyper-Kamiokande Experiment: A Snowmass White Paper,”
[arXiv:2203.02029 [hep-ex]].
Aoki:1982ed
K. I. Aoki, Z. Hioki, M. Konuma, R. Kawabe and T. Muta,
“Electroweak Theory. Framework of On-Shell Renormalization and Study of Higher Order Effects,”
Prog. Theor. Phys. Suppl. 73, 1-225 (1982)
Bohm:1986rj
M. Bohm, H. Spiesberger and W. Hollik,
“On the One Loop Renormalization of the Electroweak Standard Model and Its Application to Leptonic Processes,”
Fortsch. Phys. 34, 687-751 (1986)
Hollik:1988ii
W. F. L. Hollik,
“Radiative Corrections in the Standard Model and their Role for Precision Tests of the Electroweak Theory,”
Fortsch. Phys. 38, 165-260 (1990)
Denner:1991kt
A. Denner,
“Techniques for calculation of electroweak radiative corrections at the one loop level and results for W physics at LEP-200,”
Fortsch. Phys. 41, 307-420 (1993)
[arXiv:0709.1075 [hep-ph]].
Giunti:2007ry
C. Giunti and C. W. Kim,
“Fundamentals of Neutrino Physics and Astrophysics,”
Oxford University Press, 2007,
Xing:2011zza
Z. z. Xing and S. Zhou,
“Neutrinos in particle physics, astronomy and cosmology,”
Springer Berlin, 2011,
Bohm:2001yx
M. Bohm, A. Denner and H. Joos,
“Gauge theories of the strong and electroweak interaction,”
Vieweg+Teubner Verlag, 2001
Sirlin:1977sv
A. Sirlin,
“Current Algebra Formulation of Radiative Corrections in Gauge Theories and the Universality of the Weak Interactions,”
Rev. Mod. Phys. 50, 573 (1978)
[erratum: Rev. Mod. Phys. 50, 905 (1978)]
Sirlin:1980nh
A. Sirlin,
“Radiative Corrections in the SU(2)-L x U(1) Theory: A Simple Renormalization Framework,”
Phys. Rev. D 22, 971-981 (1980)
Patel:2015tea
H. H. Patel,
“Package-X: A Mathematica package for the analytic calculation of one-loop integrals,”
Comput. Phys. Commun. 197, 276-290 (2015)
[arXiv:1503.01469 [hep-ph]].
Patel:2016fam
H. H. Patel,
“Package-X 2.0: A Mathematica package for the analytic calculation of one-loop integrals,”
Comput. Phys. Commun. 218, 66-70 (2017)
[arXiv:1612.00009 [hep-ph]].
Passarino:1978jh
G. Passarino and M. J. G. Veltman,
“One Loop Corrections for e+ e- Annihilation Into mu+ mu- in the Weinberg Model,”
Nucl. Phys. B 160, 151-207 (1979)
Sakakibara:1980hw
S. Sakakibara,
“Radiative Corrections to the Neutral Current Interactions in the Weinberg-Salam Model,”
Phys. Rev. D 24, 1149 (1981)
Nieves:2003in
J. F. Nieves and P. B. Pal,
“Generalized Fierz identities,”
Am. J. Phys. 72, 1100-1108 (2004)
[arXiv:hep-ph/0306087 [hep-ph]].
CDF:2022hxs
T. Aaltonen et al. [CDF],
“High-precision measurement of the W boson mass with the CDF II detector,”
Science 376, no.6589, 170-176 (2022)
Nunokawa:2007qh
H. Nunokawa, S. J. Parke and J. W. F. Valle,
“CP Violation and Neutrino Oscillations,”
Prog. Part. Nucl. Phys. 60, 338-402 (2008)
[arXiv:0710.0554 [hep-ph]].
DUNE:2015lol
R. Acciarri et al. [DUNE],
“Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE): Conceptual Design Report, Volume 2: The Physics Program for DUNE at LBNF,”
[arXiv:1512.06148 [physics.ins-det]].
DUNE:2021cuw
B. Abi et al. [DUNE],
“Experiment Simulation Configurations Approximating DUNE TDR,”
[arXiv:2103.04797 [hep-ex]].
Esteban:2020cvm
I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz and A. Zhou,
“The fate of hints: updated global analysis of three-flavor neutrino oscillations,”
JHEP 09, 178 (2020)
[arXiv:2007.14792 [hep-ph]].
Kelly:2018kmb
K. J. Kelly and S. J. Parke,
“Matter Density Profile Shape Effects at DUNE,”
Phys. Rev. D 98, no.1, 015025 (2018)
[arXiv:1802.06784 [hep-ph]].
JUNO:2015zny
F. An et al. [JUNO],
“Neutrino Physics with JUNO,”
J. Phys. G 43, no.3, 030401 (2016)
[arXiv:1507.05613 [physics.ins-det]].
JUNO:2022mxj
A. Abusleme et al. [JUNO],
“Sub-percent precision measurement of neutrino oscillation parameters with JUNO,”
Chin. Phys. C 46, no.12, 123001 (2022)
[arXiv:2204.13249 [hep-ex]].
Hahn:2000kx
T. Hahn,
“Generating Feynman diagrams and amplitudes with FeynArts 3,”
Comput. Phys. Commun. 140, 418-431 (2001)
[arXiv:hep-ph/0012260 [hep-ph]].
Denner:1990yz
A. Denner and T. Sack,
“Renormalization of the Quark Mixing Matrix,”
Nucl. Phys. B 347, 203-216 (1990)
Gambino:1998ec
P. Gambino, P. A. Grassi and F. Madricardo,
“Fermion mixing renormalization and gauge invariance,”
Phys. Lett. B 454, 98-104 (1999)
[arXiv:hep-ph/9811470 [hep-ph]].
Pilaftsis:2002nc
A. Pilaftsis,
“Gauge and scheme dependence of mixing matrix renormalization,”
Phys. Rev. D 65, 115013 (2002)
[arXiv:hep-ph/0203210 [hep-ph]].
|
http://arxiv.org/abs/2307.04491v2 | 20230710112824 | Thermal Corrections to Rényi Entropy in BMS Field Theory | [
"Yuan Zhong"
] | hep-th | [
"hep-th"
] |
§ INTRODUCTION
On the journey toward understanding quantum gravity, one of the most remarkable ideas is the holographic principle <cit.>, which relates (d+1)-dimensional quantum gravity to a d-dimensional quantum field theory. The most fruitful incarnation of the holographic principle is the AdS/CFT correspondence <cit.>, which equates quantum gravity on (d+1)-dimensional asymptotically anti-de Sitter (AdS) spacetime with the d-dimensional conformal field theory (CFT) on its asymptotic boundary. An important entry in the holographic dictionary is that the asymptotic symmetry of the bulk theory agrees with the symmetry of the boundary theory. Symmetry constraints are powerful, and, combined with other consistency conditions, they yield many universal results in a rather general way.
In the study of the holographic description of asymptotically flat gravity, inspired by the role that asymptotic symmetries play in the AdS/CFT correspondence, the asymptotic symmetry of asymptotically flat spacetimes, known as the Bondi–van der Burg–Metzner–Sachs symmetry <cit.>, has received much interest in the last few years. A simpler setting is three-dimensional asymptotically flat gravity with its BMS_3 symmetry. Based on the BMS_3 symmetry, three-dimensional flat holography was proposed <cit.>: three-dimensional asymptotically flat gravity is holographically described by a two-dimensional quantum field theory governed by the BMS_3 symmetry, known as the BMS field theory (BMSFT) or Carrollian conformal field theory, since the BMS_3 algebra is isomorphic to the Carrollian conformal algebra. This algebra is infinite-dimensional, and it places powerful constraints on the study of BMS field theories.
One important probe in the AdS/CFT correspondence is the holographic entanglement entropy. The Ryu-Takayanagi formula <cit.> proposes that the entanglement entropy of a boundary region corresponds to the area of a minimal surface in the bulk. In the case of flat holography, the analogue of the Ryu-Takayanagi formula was proposed in <cit.>. On the BMS field theory side, the entanglement entropy for a single interval on the cylinder or on the plane in the vacuum state can be obtained with the help of the replica trick <cit.>.
The entanglement entropy is a good measure of entanglement only when the system is in a pure state; in practice, however, it is always thermally polluted. In this paper, we are interested in the entanglement entropy for a single interval in a thermal state. Since there is both a thermal circle and a spatial circle, this task is generally very difficult. However, in the low-temperature limit β_ϕ≫ L, β_u/β_ϕ≤ O(1), the leading thermal correction to the Rényi entropy is dominated by the first excited state and is calculable. Here, L is the circumference of the cylinder coordinated by ϕ and u, and β_ϕ and β_u are the lengths of the thermal circle along the ϕ- and u-directions. Inspired by the universal results for the thermal correction to the entanglement entropy in the low-temperature limit in CFT <cit.>, we use the replica trick to rewrite the leading term in the thermal correction as a correlation function on the branched covering space and work it out with the help of the uniformizing map. It turns out that the leading thermal correction to the Rényi entropy takes the universal form
δ S_n = n/(1-n)[ ( sin(πl_ϕ/L)/(n sin(πl_ϕ/(nL))) )^2Δ exp{ (2π l_u ξ/L)( cot(πl_ϕ/L) - (1/n)cot(πl_ϕ/(nL)) ) } - 1 ] e^{-2πβ_ϕΔ/L - 2πβ_uξ/L},
which only depends on the scaling dimension Δ and the boost charge ξ of the first excited state and the geometric configuration of the entanglement interval. The thermal correction to the entanglement entropy is obtained by δ S_E = δ S_n→ 1.
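Since the entanglement entropy is obtained from the n → 1 limit, it is easy to verify that this limit is finite and that it reduces to the known CFT_2 result when ξ → 0; a short sympy sketch of ours, assuming the form displayed above:

import sympy as sp

n, D, xi, lphi, lu, L, bphi, bu = sp.symbols('n Delta xi l_phi l_u L beta_phi beta_u', positive=True)

# F(n) is the bracketed factor of delta S_n (with the overall Boltzmann factor pulled out);
# since F(1) = 1, the n -> 1 limit of n/(1-n)*(F - 1) equals -F'(1)
F = (sp.sin(sp.pi*lphi/L)/(n*sp.sin(sp.pi*lphi/(n*L))))**(2*D) \
    * sp.exp((2*sp.pi*lu*xi/L)*(sp.cot(sp.pi*lphi/L) - sp.cot(sp.pi*lphi/(n*L))/n))
boltz = sp.exp(-2*sp.pi*bphi*D/L - 2*sp.pi*bu*xi/L)

dSE = sp.simplify(-sp.diff(F, n).subs(n, 1)) * boltz
print(sp.simplify(dSE.subs(xi, 0)))
# expected: 2*Delta*(1 - (pi*l_phi/L)*cot(pi*l_phi/L)) * exp(-2*pi*beta_phi*Delta/L),
# the n -> 1 limit of the CFT_2 thermal correction reviewed below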
As a double check, we also use the entanglement first law to translate the calculation of the variation δ S_E of the entanglement entropy into the variation δ⟨ K_A ⟩ of the expectation value of the modular Hamiltonian. The latter can be calculated directly, as the modular Hamiltonian for a single interval on the cylinder in the vacuum state can be written down explicitly. We show that these two approaches agree.
This paper is organized as follows. In Sec. 2, we give a quick review on BMS field theory. In Sec. 3, we calculate the thermal correction to the Rényi entropy in a type of low-temperature limit with the help of the replica trick and the uniformizing map. We also provide an alternative way to calculate the thermal correction to the entanglement entropy from the modular Hamiltonian and the entanglement first law as a double check. We conclude in Sec. 4 with a summary and some future directions.
§ REVIEW ON THE BMS FIELD THEORY
In this section, we give a quick review on some aspects of the BMS field theory.
∙ BMSFT on the cylinder
A BMSFT on a cylinder (ϕ,u) with a circumference
ϕ∼ϕ+L
is a two-dimensional quantum field theory that is invariant under the following BMS transformations
ϕ→ f(ϕ),
u → f'(ϕ) u +g(ϕ).
Here, f(ϕ) and g(ϕ) are periodic functions in ϕ with periodicity L. Then, the infinitesimal BMS transformation generators are obtained by taking the Fourier modes
l_n = i L/2π e^i n 2π/Lϕ∂_ϕ -n e^i n 2π/Lϕ u∂_u,
m_n =i L/2π e^i n 2π/Lϕ∂_u.
∙ BMSFT on the plane
The BMSFT on the (x,y)-plane is obtained from the following plane-to-cylinder transformation
x =e^2π i /Lϕ,
y = 2π i /L e^2π i/Lϕ u.
The infinitesimal symmetry generators on the plane are
l_n =-x^n+1∂_x -(n+1) x^n y ∂_y,
m_n = -x^n+1∂_y.
They form the BMS algebra without a central term via the Lie bracket
[l_n ,l_m] =(n-m) l_m+n,
[l_n, m_m] =(n-m) m_m+n,
[m_n, m_m] =0.
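The closure of the plane generators on this algebra can be checked by a direct computation of Lie brackets of the vector fields; a short sympy sketch of ours, for illustration:

import sympy as sp

x, y = sp.symbols('x y')
n, m = 2, -1   # any pair of integers works

def l(k):      # l_k = -x^{k+1} d_x - (k+1) x^k y d_y, as a component pair (X^x, X^y)
    return (-x**(k+1), -(k+1)*x**k*y)

def mgen(k):   # m_k = -x^{k+1} d_y
    return (sp.Integer(0), -x**(k+1))

def bracket(X, Y):
    """Lie bracket of vector fields: [X, Y]^i = X(Y^i) - Y(X^i)."""
    act = lambda V, f: V[0]*sp.diff(f, x) + V[1]*sp.diff(f, y)
    return tuple(sp.expand(act(X, Y[i]) - act(Y, X[i])) for i in range(2))

# [l_n, l_m] = (n-m) l_{n+m},  [l_n, m_m] = (n-m) m_{n+m},  [m_n, m_m] = 0
print([sp.simplify(a - (n-m)*b) for a, b in zip(bracket(l(n), l(m)), l(n+m))])       # zeros
print([sp.simplify(a - (n-m)*b) for a, b in zip(bracket(l(n), mgen(m)), mgen(n+m))]) # zeros
print([sp.simplify(a) for a in bracket(mgen(n), mgen(m))])                           # zeros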
At the quantum level, these symmetry generators l_n and m_n become the operators L_n and M_n, which act on the state space. They form the BMS algebra with central charges c_M and c_L as
[L_n ,L_m] =(n-m) L_m+n +c_L/12n(n^2-1)δ_m+n,
[L_n, M_m] =(n-m) M_m+n+c_M/12n(n^2-1)δ_m+n,
[M_n, M_m] =0.
A primary operator ψ of boost charge ξ and conformal dimension Δ is specified by the following conditions
[L_0, ψ] =Δψ,
[M_0,ψ] =ξψ,
[L_n, ψ] =0, n>0,
[M_n, ψ] =0, n>0.
Under a BMS transformation
x̃ =f(x),
ỹ = f'(x)y +g(x),
a primary operator ψ transforms as
ψ̃(x̃,ỹ) =(f')^-Δ e^-ξy f” +g'/f'ψ(x,y).
On the plane, the currents J(x) and P(x) admit the following mode expansions
J(x) = ∑_n L_n x^-n-2,
P(x) =∑_n M_n x^-n-2.
Under the BMS transformation (<ref>) and (<ref>), the currents J(x) and P(x) transform as <cit.>
P̃(x̃) =( ∂ f/∂ x)^-2( P(x) -c_M/12{f,x}),
J̃(x̃) =( ∂ f/∂ x)^-2( J(x) -c_L/12{f,x}) + ( ∂ g/∂ x)^-2( P(x) -c_M/12{g,x}) .
∙ State-operator correspondence
On the (x,y)-plane, the in-state corresponds to an operator inserted at x=0. From the plane-to-cylinder map (<ref>), in the cylinder coordinates, the in-state is inserted at ϕ=i∞. Similarly, the out-state is inserted at ϕ=-i∞ in the cylinder coordinates.
§ THERMAL CORRECTIONS TO THE RÉNYI ENTROPY
In this section, we use the replica trick and the uniformizing map to calculate the thermal correction to the Rényi entropy in the BMSFT for a single interval A on the cylinder with circumference L.
§.§ Thermal Corrections to Rényi Entropy in CFT_2
Before we continue our calculation of the thermal correction to the Rényi entropy for a single interval on the cylinder in the BMSFT, we first review the analogous calculation in the case of CFT_2 <cit.>.
We assume that the theory is put on a cylinder with circumference L, coordinatized by w=x-it; the thermal density matrix, written in terms of a complete set of states, is
ρ = 1/[Tr( e^-β H)] ∑_ϕ |ϕ⟩⟨ϕ| e^-β E_ϕ .
The Hamiltonian on the cylinder in the CFT is the combination of the left- and the right-moving zeroth-level Virasoro generators and the central charge,
H =2π/L( L_0 +L̅_0 -c/12).
Here, we have assumed that c_L=c_R=c. With the assumptions that there exists a unique ground state |0⟩ and that the spectrum of conformal dimensions Δ=h+h̅ is positive and gapped, there should exist an operator ψ of conformal weights (h,h̅) carrying the smallest positive Δ. This ψ has the smallest energy E_ψ=2π/L(Δ -c/12). Then, in the low-temperature limit β≫ L, the thermal density matrix admits the following expansion
ρ = ( |0⟩⟨0| + |ψ⟩⟨ψ| e^-2πΔβ/L + ⋯ )/( 1 + e^-2πΔβ/L + ⋯ ).
We consider the entanglement region to be a single interval with two endpoints
∂_- : w=w̅=w_1, ∂_+ : w=w̅=w_2.
For convenience, we also introduce the rescaled endpoints
θ_1,2 = 2πw_1,2/L
and their difference
l=w_2-w_1.
The trace of the reduced density matrix ρ_A can be expanded according to the expansion (<ref>) of the thermal density matrix as
Tr ρ_A^n = Tr[ Tr_B(|0⟩⟨0| + |ψ⟩⟨ψ| e^-2πΔβ/L + ⋯) ]^n/(1 + e^-2πΔβ/L + ⋯)^n
= Tr( Tr_B |0⟩⟨0| )^n [ 1 + ( Tr[ Tr_B|ψ⟩⟨ψ| (Tr_B|0⟩⟨0|)^n-1 ]/Tr( Tr_B|0⟩⟨0| )^n - 1 ) n e^-2πΔβ/L + ⋯ ].
The first term Tr( Tr_B|0⟩⟨0| )^n reproduces the zero-temperature Rényi entropy, and the expression in the second term,
Tr[ Tr_B|ψ⟩⟨ψ| (Tr_B|0⟩⟨0|)^n-1 ]/Tr( Tr_B|0⟩⟨0| )^n ,
which determines the leading thermal correction, can be recast as a 2-point function of the operator ψ(w) on an n-sheeted copy C_n of the cylinder branched over ∂_± via the state-operator correspondence |ψ⟩ ∼ lim_t→-∞ ψ(x,t)|0⟩ and ⟨ψ| ∼ lim_t→∞ ⟨0|ψ(x,t), as
Tr[ Tr_B |ψ⟩⟨ψ| (Tr_B |0⟩⟨0|)^n-1 ]/Tr( Tr_B |0⟩⟨0| )^n = lim_t_2 →∞, t_1 → -∞ ⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n/⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_1 .
To calculate the 2-point function ⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n on the n-sheeted copy C_n, we can use the following uniformizing map
ζ^(n) =( e^2π i w/L -e^iθ_2/e^2π i w/L -e^iθ_1)^1/n
to send C_n to the ζ-plane. The 2-point function on a plane in the CFT is just
⟨ψ(ζ^(n)_2,ζ̅^(n)_2)ψ(ζ^(n)_1,ζ̅^(n)_1)⟩ = 1/[(ζ^(n)_21)^2h (ζ̅^(n)_21)^2h̅].
Mapping it back to the n-sheeted copy C_n along the uniformizing map (<ref>), we obtain the expression of the 2-point function on C_n as
⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n = (d ζ_1/d w_1 · d ζ_2/d w_2)^h/ζ_12^2h · (d ζ̅_1/d w̅_1 · d ζ̅_2/d w̅_2)^h̅/ζ̅_12^2h̅ .
Substituting this into (<ref>), we have
⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_n/⟨ψ(w_2,w̅_2)ψ(w_1,w̅_1)⟩_C_1 = [ 1/n^2h( ζ_1^(n)ζ_2^(n)/ζ_1^(1)ζ_2^(1))^h( ζ_2^(1) -ζ_1^(1)/ζ_2^(n) -ζ_1^(n))^2h] · [complex conjugate].
After taking the limit t_1→-∞ and t_2→∞, we have
⟨ψ(i∞)ψ(-i∞)⟩_C_n/⟨ψ(i∞)ψ(-i∞)⟩_C_1 = 1/n^2Δ ( sin((θ_2 -θ_1)/2)/sin((θ_2 -θ_1)/(2n)) )^2Δ .
Then, from (<ref>) and the definition of the Rényi entropy, we obtain the leading thermal correction to the Rényi entropy as
δ S_n = 1/1-n( sin^2Δ(π l/L)/n^2Δ-1sin^2Δ(π l/nL)-n ) e^-2πΔβ/L+ o(e^-2πΔβ/L).
In this calculation, suitable assumptions about the spectrum have been made so that the leading contribution to the thermal correction of the Rényi entropy is captured by the correlation function of the lightest operator on the branched covering space. The latter is further worked out with the help of the uniformizing map that sends this n-sheeted copy space to the plane.
§.§ Thermal Correction Dominated by the Singlet Primary
Consider a two-dimensional BMS field theory on the cylinder coordinated by (ϕ, u) with circumference L,
ϕ∼ϕ +L.
To introduce the temperature, we consider the following thermal identification [Here, we consider the case that β_u takes the same sign as β_ϕ, because we are going to assume the boost charge ξ is bounded from below. If ξ is bounded from above instead, then we should consider (ϕ, u) ∼ (ϕ +iβ_ϕ, u -iβ_u ) instead.]
(ϕ, u) ∼ (ϕ +iβ_ϕ, u +iβ_u ),
the corresponding thermal density matrix is
ρ = e^-β_ϕ L_0^cyl -β_u M_0^cyl/Tr( e^-β_ϕ L_0^cyl -β_u M_0^cyl) .
Here, L_0^cyl and M_0^cyl are the charges generating translations along the ϕ- and u-directions, respectively. Under the plane-to-cylinder transformation of the currents (<ref>), these cylinder translation generators are related to the canonical BMS generators L_0 and M_0 as
L_0^cyl =2π/L( L_0-c_L/24), M_0^cyl=2π/L( M_0-c_M/24).
Substituting this back into (<ref>), the thermal density matrix written in terms of canonical BMS generators is
ρ = e^-β_ϕ2π/L( L_0-c_L/24) -β_u 2π/L( M_0-c_M/24)/Tr( e^-β_ϕ2π/L( L_0-c_L/24) -β_u 2π/L( M_0-c_M/24))
= e^-β_ϕ2π/L L_0 -β_u 2π/L M_0/Tr( e^-β_ϕ2π/L L_0 -β_u 2π/L M_0).
∙ Low Temperature Expansion
We consider the BMSFT whose spectrum satisfies the following conditions so that the low-temperature expansion of the thermal density matrix is dominated by the first excited state.
– There exists a unique ground state |0⟩, around which we can turn on a small temperature and expand the thermal density matrix.
– In the spectrum both the conformal weight Δ and the boost charge ξ are bounded from below.
– There exists a gap between the ground state |0⟩ and the lightest state |ψ⟩ corresponding to the primary operator ψ labelled by (Δ,ξ).
The last condition requires more explanation. As we turn on a small temperature, there might be several candidate lightest states above the ground state. Depending on the approach to the low-temperature limit, the operatorψwith the smallestΔ+β_u/β_ϕ ξexcites first.
There are still several difficulties to obtain an expansion dominated byψ. First, due to the non-unitary nature, althoughM_0is self-adjoint, it is not diagonalizable. For example, there are two descendants ofψat the level 1,M_-1|ψ⟩andL_-1|ψ⟩.M_0acts on them non-diagonally as a Jordan block
M_0 \begin{pmatrix} M_{-1}|ψ⟩ \\ L_{-1}|ψ⟩ \end{pmatrix} = \begin{pmatrix} ξ & 0 \\ 1 & ξ \end{pmatrix} \begin{pmatrix} M_{-1}|ψ⟩ \\ L_{-1}|ψ⟩ \end{pmatrix}.
As a consequence, the thermal density matrixρis also non-diagonalizable, and it is not possible to expandρin terms of eigenstates{Φ}ofL_0andM_0such as
ρ ∝ ∑_Φ e^{-(2π/L)(β_ϕ L_0^Φ +β_u M_0^Φ)} |Φ⟩⟨Φ| .
Another problem is that there are infinitely many descendants created byM_-k's with the same boost chargeξasψitself, becauseM_-kall commute withM_0. So, in a low-temperature limit withβ_u ≫β_ϕ, these descendants will not be suppressed.
At this point, we will not try to answer the interesting question of the meaning of a non-diagonalizable density matrix. Instead, we restrict to a particular type of low-temperature limit to avoid the above difficulties.
– Consider the following low-temperature limit
β_ϕ≫ L, β_u/β_ϕ≤ O(1).
Then, the primary operator ψ dominates the thermal density matrix expansion.
Under these assumptions, the thermal density matrix is dominated byψat this low temperature as
ρ = |0⟩⟨ 0| +|ψ⟩⟨ψ| e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯/1 +e^-2πβ_ϕ/LΔ -2πβ_u/Lξ +⋯.
∙ Entanglement measurements
Consider the entanglement region A; the reduced density matrix on A is
ρ_A = tr_{\bar A} ρ.
We are interested in the following entanglement measures: the Rényi entropy
S_n = \frac{1}{1-n} \log tr(ρ_A^n)
and the entanglement entropy
S_E = - tr(ρ_A \log ρ_A) = \lim_{n→1} S_n.
Concretely, we consider the entanglement region to be a single intervalAspecified by its endpoints
∂_- A =(ϕ_-,u_-), ∂_+ A =(ϕ_+,u_+).
For convenience, let us introduce the range of the intervalin theϕ- and theu-directions as
l_ϕ =ϕ_+ -ϕ_-, l_u=u_+ -u_-.
Under the above low-temperature expansion (<ref>), ρ_A^n can be expanded as
ρ_A^n = [ tr_{\bar A}( |0⟩⟨0| +|ψ⟩⟨ψ| e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯ ) ]^n / (1 +e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯)^n
=(tr_{\bar A} |0⟩⟨0|)^n [1+ ( tr[ tr_{\bar A}(|ψ⟩⟨ψ|) (tr_{\bar A} |0⟩⟨0|)^{n-1}] / tr[ (tr_{\bar A} |0⟩⟨0|)^{n}] -1 ) n e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯].
The first term (tr_{\bar A} |0⟩⟨0|)^n corresponds to the ground-state Rényi entropy. The second term determines the leading contribution to the low-temperature thermal correction. To calculate this term, we use the replica trick and the state-operator correspondence to replace it by a 2-point function of ψ on the n-sheeted copy C_n of the original space branched over ∂A. Using the state-operator correspondence, the in-state |ψ⟩ corresponds to
|ψ⟩ = lim_{ϕ→ i∞} ψ(ϕ,u)|0⟩,
and the out-state ⟨ψ| corresponds to
⟨ψ| = lim_{ϕ→ -i∞} ⟨0|ψ(ϕ,u).
Together with the replica trick, the coefficient in the thermal correction term can be written as
tr[ tr_{\bar A}(|ψ⟩⟨ψ|) (tr_{\bar A} |0⟩⟨0|)^{n-1}] / tr[ (tr_{\bar A} |0⟩⟨0|)^{n}]
= lim_{ϕ_1→ +i∞, ϕ_2→ -i∞} tr[ tr_{\bar A}( ψ(ϕ_1,u_1)|0⟩⟨0|ψ(ϕ_2,u_2) ) (tr_{\bar A} |0⟩⟨0|)^{n-1}] / tr[ (tr_{\bar A} |0⟩⟨0|)^{n}]
= lim_{ϕ_1→ +i∞, ϕ_2→ -i∞} ⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_{C_n} / ⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_{C_1}.
Now, we can use the uniformizing map to calculate this 2-point function ofψonC_n.
∙ Uniformizing Map
To calculate the 2-point function onC_n, we use the following uniformizing map fromC_nto the plane,
x = ( (e^{2π i ϕ/L} -e^{2π i ϕ_-/L}) / (e^{2π i ϕ/L} -e^{2π i ϕ_+/L}) )^{1/n} =: f^{(n)}(ϕ),
y = ( u - (l_u / (2 sin(π l_ϕ/L))) sin(π( 2ϕ -ϕ_- -ϕ_+)/L) ) (d/dϕ) f^{(n)}(ϕ).
This transformation can be decomposed into several steps. In thex-direction, the plane-to-cylinder mapz=e^2πiϕ/Lmaps theS^1-coordinateϕto the analytically continued complexz-plane. Then, on this complex plane, thez-coordinate of∂becomesz_±=e^2πi ϕ_±/L. To introduce then-sheeted copy of this analytically continued space branched overz_±, we apply anSL(2,ℂ)transformationw=z-z_-/z-z_+which sendsz_-to0andz_+to∞, and take then-th root of it. In they-direction, the subtraction( u -l_u/2sinπl_ϕ/Lsinπ( 2ϕ-ϕ_- -ϕ_+)/L )cancels the rangel_uof the intervalAinu-direction.
The 2-point function of the primary operators on the plane is determined by the symmetry up to a normalization factorN,
⟨ψ(x_1,y_1)ψ(x_2,y_2)⟩ = N x_{12}^{-2Δ} e^{-2ξ y_{12}/x_{12}}.
Mapped to the cylinder coordinate along (<ref>), the primary operatorψtransforms as
ψ(ϕ, u) =( d x/d ϕ)^Δe^-ξyd^2ϕ/dx^2/dϕ/dx -ξd/dϕ( l_u/2sinπ l_ϕ/Lsinπ( 2ϕ -ϕ_- -ϕ_+)/L)ψ(x,y)
=f^(n)'Δ(ϕ) e^-ξ( u f^(n)'(ϕ)d(f^(n)'(ϕ)^-1)/dϕ +π l_u/L sinπ l_ϕ/Lcosπ( 2ϕ -ϕ_- -ϕ_+)/L)ψ(x,y).
Thus, the correlation function onC_nbecomes
⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_{C_n}
= N f^{(n)'}(ϕ_2)^{Δ} e^{-ξ( u_2 f^{(n)'}(ϕ_2) d(f^{(n)'}(ϕ_2)^{-1})/dϕ_2 + (π l_u / (L sin(π l_ϕ/L))) cos(π( 2ϕ_2 -ϕ_- -ϕ_+)/L) )}
× f^{(n)'}(ϕ_1)^{Δ} e^{-ξ( u_1 f^{(n)'}(ϕ_1) d(f^{(n)'}(ϕ_1)^{-1})/dϕ_1 + (π l_u / (L sin(π l_ϕ/L))) cos(π( 2ϕ_1 -ϕ_- -ϕ_+)/L) )} x_{12}^{-2Δ} e^{-2ξ y_{12}/x_{12}}.
Substituting this into (<ref>) and taking the limit, we obtain the correction term
tr[ tr_{\bar A}(|ψ⟩⟨ψ|) (tr_{\bar A} |0⟩⟨0|)^{n-1}] / tr[ (tr_{\bar A} |0⟩⟨0|)^{n}] = lim_{ϕ_1→ +i∞, ϕ_2→ -i∞} ⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_{C_n} / ⟨ψ(ϕ_2,u_2) ψ(ϕ_1,u_1)⟩_{C_1}
= ( sin(π l_ϕ/L) / (n sin(π l_ϕ/(n L))) )^{2Δ} e^{ (2 π l_u ξ/L)( cot(π l_ϕ/L) - (1/n) cot(π l_ϕ/(nL)) ) } .
In the explicit calculation, to take the limitϕ_1→+i∞, ϕ_2→-i∞, we have setϕ_1=i T_1andϕ_2 =-i T_2and expand the above in order ofϵ_1=e^-2πT_1/Landϵ_2=e^-2πT_2/L.
Substituting this back to the definition (<ref>) of the Rényi entropy, we obtain the thermal correction to the Rényi entropy
δ S_n = \frac{n}{1-n}[ ( sin(π l_ϕ/L) / (n sin(π l_ϕ/(n L))) )^{2Δ} e^{ (2 π l_u ξ/L)( cot(π l_ϕ/L) - (1/n) cot(π l_ϕ/(nL)) ) } -1] e^{-2πβ_ϕΔ/L -2πβ_uξ/L}.
The thermal correction to the entanglement entropy can be obtained by taking then→1limit,
δ S_E = [ 2Δ(1- (π l_ϕ/L) cot(π l_ϕ/L)) + 2ξ( π^2 l_u l_ϕ/(L^2 sin^2(π l_ϕ/L)) - (π l_u/L) cot(π l_ϕ/L) ) ] e^{-2πβ_ϕΔ/L -2πβ_uξ/L}.
For a pure state, S_n(A)=S_n(\bar A). However, the thermal correction violates this equality. The complement \bar A of A is an interval of range L-l_ϕ in the ϕ-direction and -l_u in the u-direction. Since δS_n(L-l_ϕ,-l_u) ≠ δS_n(l_ϕ,l_u), the Rényi entropy is indeed thermally polluted.
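As a numerical cross-check of the two expressions above, the following Python snippet (all parameter values are arbitrary illustrative choices, not taken from the text) confirms that δS_n approaches δS_E as n → 1 and illustrates the asymmetry δS_n(l_ϕ,l_u) ≠ δS_n(L−l_ϕ,−l_u) noted above:

import numpy as np

# arbitrary illustrative values
Delta, xi = 0.7, 0.3
L, l_phi, l_u = 1.0, 0.37, 0.12
beta_phi, beta_u = 5.0, 3.0

boltz = np.exp(-2*np.pi*beta_phi*Delta/L - 2*np.pi*beta_u*xi/L)

def delta_S_n(n, l_phi, l_u):
    a = np.pi*l_phi/L
    amp = (np.sin(a)/(n*np.sin(a/n)))**(2*Delta)
    exp = np.exp(2*np.pi*l_u*xi/L*(1/np.tan(a) - (1/n)/np.tan(a/n)))
    return n/(1-n)*(amp*exp - 1)*boltz

def delta_S_E(l_phi, l_u):
    a = np.pi*l_phi/L
    return (2*Delta*(1 - a/np.tan(a))
            + 2*xi*(np.pi**2*l_u*l_phi/(L**2*np.sin(a)**2)
                    - np.pi*l_u/L/np.tan(a)))*boltz

print(delta_S_n(1 + 1e-6, l_phi, l_u))  # numerically close to delta_S_E(l_phi, l_u)
print(delta_S_E(l_phi, l_u))
print(delta_S_n(2, l_phi, l_u), delta_S_n(2, L - l_phi, -l_u))  # the two values differ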
§.§ Thermal Correction Dominated by the Multiplet Primary
Previously, we obtained the thermal correction to the Rényi entropy and the entanglement entropy in the case that a singlet primary dominates the thermal correction. Now, we consider the case that a multiplet primary dominates the thermal correction. As we will see, the thermal correction to the Rényi entropy is just that of a singlet multiplied by the rank of the multiplet. However, this seemingly intuitive result is not that trivial. Actually, the off-diagonal terms dominate the expansion of the thermal density matrix, but they just do not contribute to the thermal correction to the Rényi entropy.
M_0 acts on a rank-r primary multiplet 𝒪 = (O_0,O_1,⋯,O_{r-1})^T as
M_0 | O_a ⟩ = ξ | O_a ⟩ + | O_{a-1}⟩, a=1,⋯,r-1,
M_0 | O_0 ⟩ = ξ | O_0 ⟩, a=0.
Or, in a more compact form, M_0 𝒪 = (ξ 𝟙_r + J_r) 𝒪. Here, 𝟙_r is the rank-r identity matrix, and J_r is the rank-r Jordan cell
J_r =
\begin{pmatrix} 0 & & & \\ 1 & 0 & & \\ & ⋱ & ⋱ & \\ & & 1 & 0 \end{pmatrix}_{r× r},
which is nilpotent, (J_r)^r = 0. The action of e^{-β_ϕ (2π/L) L_0 -β_u (2π/L) M_0} on the primary part of this multiplet becomes e^{-β_u (2π/L) J_r} e^{-2πβ_ϕΔ/L -2πβ_uξ/L}. The matrix part e^{-β_u (2π/L) J_r} can be expanded into finitely many terms as
e^{-β_u (2π/L) J_r} = ∑_{k=0}^{r-1} \frac{(-β_u 2π/L)^k}{k!} (J_r)^k.
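A minimal numpy check of this finite expansion (the rank r and the value of 2πβ_u/L below are arbitrary illustrative choices):

import numpy as np
from math import factorial
from scipy.linalg import expm

r, c = 4, 2.5                      # rank r and c = 2*pi*beta_u/L (illustrative)
J = np.diag(np.ones(r - 1), k=-1)  # rank-r Jordan cell: ones on the first subdiagonal

assert np.allclose(np.linalg.matrix_power(J, r), 0)  # nilpotency: J^r = 0

# the exponential series terminates at k = r-1 because of nilpotency
finite_sum = sum((-c) ** k / factorial(k) * np.linalg.matrix_power(J, k)
                 for k in range(r))
assert np.allclose(expm(-c * J), finite_sum)
print(finite_sum)  # the k = r-1 term contributes only to the lower-left corner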
Since β_u ≫ L, it seems that the k=r-1 term dominates the expansion (<ref>). However, as we will see later, although this J_r^{r-1} term dominates the expansion of the matrix, it does not contribute to the thermal correction term of the Rényi entropy after taking the trace. It is the J_r^{0} term that dominates the thermal correction. Explicitly, the k=r-1 term is
\frac{(-β_u 2π/L)^{r-1}}{(r-1)!} (J_r)^{r-1} = \frac{(-β_u 2π/L)^{r-1}}{(r-1)!} \begin{pmatrix} 0 & ⋯ & & 0 \\ ⋮ & ⋱ & & \\ 0 & & ⋱ & \\ 1 & 0 & ⋯ & 0 \end{pmatrix}_{r× r}
= \frac{(-β_u 2π/L)^{r-1}}{(r-1)!} |O_0⟩⟨O_{r-1}^{∨}|.
Here, the dual basis ⟨O_a^{∨}| is defined by
⟨O_a^{∨} |O_b⟩ = δ_{a,b}.
Putting everything together, the multiplet version of the low-temperature expansion of density matrix (<ref>) is
ρ = ( |0⟩⟨0| + |O_0⟩⟨O_{r-1}^{∨}| \frac{(-2πβ_u/L)^{r-1}}{(r-1)!} e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯ ) / ( 1 + r e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯ ).
We can use the inner product between the in-states and the out-states within a multiplet <cit.>, ⟨O_a | O_b⟩ = δ_{a+b,r-1},
to transform from the dual basis to the out-states,
⟨O_a^{∨}| = ⟨O_{r-1-a}|.
Then, the density matrix can be written as
ρ = ( |0⟩⟨0| + |O_0⟩⟨O_0| \frac{(-2πβ_u/L)^{r-1}}{(r-1)!} e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯ ) / ( 1 + r e^{-2πβ_ϕΔ/L -2πβ_uξ/L} +⋯ ).
The correlation function <cit.> among the rank-rmultiplet is
⟨ O_a(x_2,y_2) O_b(x_1,y_1)⟩ = \begin{cases} 0, & q<0 \\ d_r x_{12}^{-2Δ} e^{-2ξ y_{12}/x_{12}} \frac{1}{q!}(-2y_{12}/x_{12})^q, & q ≥ 0 \end{cases}, with q=a+b-r+1.
In particular, for r>1, ⟨O_0(x,y) O_0(x',y')⟩ = 0.
So, we see that this J_r^{r-1} term does not contribute to the thermal correction term at all. Moreover, it turns out that all the off-diagonal terms do not contribute to the leading correction to the Rényi entropy. To see this, consider the J_r^{k} summand in (<ref>) written in the basis
J_r^k = ∑_{a=0}^{r-1-k} |O_a⟩⟨O_{a+k}^{∨}| = ∑_{a=0}^{r-1-k} |O_a⟩⟨O_{r-1-a-k}|.
Since q = a +(r-1-a-k) -r+1 = -k ≤ 0, with equality only if k=0,
the correlation function ⟨O_a(x,y) O_{r-1-a-k}(x',y')⟩ vanishes for any k>0 because of (<ref>). Only for k=0 does the correlation function not vanish, i.e.,
⟨O_a(x_2,y_2) O_{r-1-a}(x_1,y_1)⟩ = N x_{12}^{-2Δ} e^{-2ξ y_{12}/x_{12}}, a=0,⋯,r-1,
which is the same as the correlation function of a singlet (<ref>). So, the thermal correction to the Rényi entropy is just that of a singlet multiplied by r,
δS_n = \frac{rn}{1-n}[ ( sin(π l_ϕ/L) / (n sin(π l_ϕ/(n L))) )^{2Δ} e^{ (2 π l_u ξ/L)( cot(π l_ϕ/L) - (1/n) cot(π l_ϕ/(nL)) ) } -1] e^{-2πβ_ϕΔ/L -2πβ_uξ/L}.
We see that when a multiplet primary dominates the low-temperature expansion, although the off-diagonal contributions dominate the correction to the thermal density matrix, they do not contribute to the correction of the Rényi entropy. The result is just that of the singlet multiplied by the rank r. It will be interesting to find if there exist any other entanglement measurements to which the off-diagonal contributions do not vanish. We leave this to future work.
§.§ Comments on Another Limit
So far, we have considered the particular low-temperature limit (<ref>), but there is also a complementary way to reach the low-temperature limit so that the boost charge ξ dominates the first excited state. An extreme case is that the thermal circle is purely along the u-direction. The thermal circle is
u ∼u+ iβ_u, β_u ≫L.
Then, the density matrix is proportional to e^{-β_u M_0}. In this case, any primary ψ with boost charge ξ>0 is heavier than not only the vacuum state but also all the descendants of the vacuum (e.g., M_{-k⃗}|0⟩), because these descendants of the vacuum all have the boost charge ξ=0.
If in the spectrum the boost charge is gapped, then in theβ_u ≫Llimit, the density matrix is dominated by the vacuum block, and all the vacuum descendants are just as heavy as the vacuum, thus a low-temperature expansion is hardly accessible. However, the result of such thermal correction to the entanglement entropy might be even more universal than the previous case, as it depends only on the vacuum block and the algebraic structure, not on the details of the spectrum.
On the other hand, if there exist any other primary operators with boost charge0, then the density matrix is dominated by these blocks together with the vacuum block. Since the operatore^-β_u M_0does not care about the conformal weight at all, the results of the thermal correction might be similar to the case that only the vacuum block dominates.
To summarize, in this type of low-temperature limit, since all descendants of the vacuum are equally heavy as measured by their boost charge, an honest calculation must include them all. Even though it is still possible to expand the density matrix e^{-β_u M_0}, organized by the orders of the Taylor expansion and the levels of the descendants, it is still hard to trace out \bar A and obtain the reduced density matrix on A in a workable way. However, since we expect the result to be universal, once it is worked out in one explicit example, we might hopefully be able to read off the general answer. Currently, since this type of thermal circle in BMSFT is still not well understood, we leave this to future work.
§.§ Modular Hamiltonian Approach
In this subsection, we calculate the thermal correction to the entanglement entropy from the modular Hamiltonian. As a double check, the result agrees with the previous calculation (<ref>). The modular Hamiltonian for the reduced density matrix on A is defined to be
K_A = -\log ρ_A.
From the entanglement first law, for an infinitesimal variation of the state, the calculation of the variation of the entanglement entropy can be replaced by the variation of the expectation value of the modular Hamiltonian
δ S_A = δ⟨K_A⟩.
In general, the modular Hamiltonian K_A cannot be written down in terms of local data. Only in theories with enough symmetry does the modular Hamiltonian have an explicit formula for simple entanglement regions and special states. In particular, the modular Hamiltonian in BMSFT <cit.> can be written down explicitly for a single interval on the cylinder in the vacuum state.
For the single interval A in the vacuum state on the cylinder with circumference L, the modular Hamiltonian K_A can be written as a local integral of the modular generator ζ_A against the currents J(ϕ) and P(ϕ) as
K_A = ∫_{ϕ_-}^{ϕ_+} dϕ [ \frac{L}{2π} \frac{cos(π l_ϕ/L) - cos(π(2ϕ -ϕ_+ -ϕ_-)/L)}{sin(π l_ϕ/L)} J(ϕ) + \frac{l_u}{2} \frac{cot(π l_ϕ/L) cos(π(2ϕ -ϕ_+ -ϕ_-)/L) - csc(π l_ϕ/L)}{sin(π l_ϕ/L)} P(ϕ) ].
To calculate the variation of the modular Hamiltonian, we need to calculate the variation of the currentsJ(ϕ)andP(ϕ),
δ⟨J⟩ = ⟨J⟩_ρ - ⟨J⟩_{|0⟩},
δ⟨P⟩ = ⟨P⟩_ρ - ⟨P⟩_{|0⟩}.
Substituting the low-temperature expansion (<ref>) of the thermal density matrix ρ, we get
δ⟨J(ϕ)⟩ = e^{-2πβ_ϕΔ/L -2πβ_uξ/L} ( ⟨J(ϕ)⟩_{|ψ⟩} - ⟨J(ϕ)⟩_{|0⟩} ),
δ⟨P(ϕ)⟩ = e^{-2πβ_ϕΔ/L -2πβ_uξ/L} ( ⟨P(ϕ)⟩_{|ψ⟩} - ⟨P(ϕ)⟩_{|0⟩} ).
So, we need to calculate the difference of the expectation values of the currents between the primary state |ψ⟩ and the vacuum |0⟩. For this, we apply the plane-to-cylinder transformation (<ref>) and insert the primary operator ψ at the origin of the (x,y)-plane. Recall the mode expansion of the currents on the plane
J(x) =∑_n L_n x^-n-2, P(x)=∑_n M_n x^-n-2.
Thus, the expectation values of the currents on the plane under a primary state are
⟨J^{pl}(x)⟩ = Δ x^{-2}, ⟨P^{pl}(x)⟩ = ξ x^{-2}.
Applying the transformation of currents (<ref>), the expectation values of the currents on the cylinder become
⟨J(ϕ)⟩ = ( ∂x/∂ϕ )^2 ⟨J^{pl}(x)⟩ + \frac{c_L}{12}{x,ϕ} = -\frac{4π^2}{L^2}Δ + \frac{π^2}{L^2}\frac{c_L}{6},
⟨P(ϕ)⟩ = ( ∂x/∂ϕ )^2 ⟨P^{pl}(x)⟩ + \frac{c_M}{12}{x,ϕ} = -\frac{4π^2}{L^2}ξ + \frac{π^2}{L^2}\frac{c_M}{6}.
Thus, the differences of the expectation values of the currents between |ψ⟩ and |0⟩ are
⟨J(ϕ)⟩_{|ψ⟩} - ⟨J(ϕ)⟩_{|0⟩} = -\frac{4π^2}{L^2}Δ,
⟨P(ϕ)⟩_{|ψ⟩} - ⟨P(ϕ)⟩_{|0⟩} = -\frac{4π^2}{L^2}ξ.
Substituting this into (<ref>), we obtain the variation of the currents
δ⟨J(ϕ)⟩ = e^{-2πβ_ϕΔ/L -2πβ_uξ/L} ( ⟨J(ϕ)⟩_{|ψ⟩} - ⟨J(ϕ)⟩_{|0⟩} ) = -\frac{4π^2}{L^2}Δ e^{-2πβ_ϕΔ/L -2πβ_uξ/L},
δ⟨P(ϕ)⟩ = e^{-2πβ_ϕΔ/L -2πβ_uξ/L} ( ⟨P(ϕ)⟩_{|ψ⟩} - ⟨P(ϕ)⟩_{|0⟩} ) = -\frac{4π^2}{L^2}ξ e^{-2πβ_ϕΔ/L -2πβ_uξ/L}.
For the modular Hamiltonian (<ref>), the variation of the modular Hamiltonian is
δ⟨K_A⟩ = ∫_{ϕ_-}^{ϕ_+} dϕ [ \frac{L}{2π} \frac{cos(π l_ϕ/L) - cos(π(2ϕ -ϕ_+ -ϕ_-)/L)}{sin(π l_ϕ/L)} δ⟨J(ϕ)⟩ + \frac{l_u}{2} \frac{cot(π l_ϕ/L) cos(π(2ϕ -ϕ_+ -ϕ_-)/L) - csc(π l_ϕ/L)}{sin(π l_ϕ/L)} δ⟨P(ϕ)⟩ ]
= [ 2Δ(1- (π l_ϕ/L) cot(π l_ϕ/L)) + 2ξ( π^2 l_u l_ϕ/(L^2 sin^2(π l_ϕ/L)) - (π l_u/L) cot(π l_ϕ/L) ) ] e^{-2πβ_ϕΔ/L -2πβ_uξ/L}.
This result agrees with the previous calculation (<ref>) of the variation of the entanglement entropy.
§ DISCUSSION
In this paper, we consider the single interval entanglement region on the cylinder in the BMSFT. We find a suitable low-temperature limit under which an expansion of the thermal density matrix dominated by the first excited operator is possible. In this limit, we calculate the thermal correction to the Rényi entropy by the replica trick and the uniformizing map. As a double check, for the thermal correction to the entanglement entropy, we also provide an alternative calculation by the modular Hamiltonian and the entanglement first law.
Though we provide a double check from another calculation of the entanglement entropy by modular Hamiltonian, it will be more satisfactory to have a numerical check in the concrete model. Despite the fact that several concrete BMSFT models have been found and studied recently, it seems that we still do not have a satisfactory understanding of their underlying Hilbert space structure and the correct way to discretize the models in a meaningful way. We leave this to future work until we have a better understanding of these concrete models. Also, a concrete model analysis might be helpful to understand another type of low-temperature limit in Sec. <ref>.
Another interesting thing is to test this thermal correction term in the holographic entanglement proposals. For the finite temperature, the calculation on the cylinder is secretly on a torus, and the replica trick fails as the covering space is of high genus. However, a holographic calculation with temperature in the bulk is still possible using the geometric picture. Hence, a comparison between the low-temperature result in the bulk and boundary is possible.
I would like to thank Peng-xiang Hao, Wenxin Lai and Jun Nian for useful discussions. I would like to specially thank Jun Nian for proofreading the manuscript. This work was supported in part by the NSFC under grant No. 12147103.
JHEP

http://arxiv.org/abs/2307.04042v1 | 20230708202414 | Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training | ["Masaaki Imaizumi"] | stat.ML | ["stat.ML", "cs.LG"]

Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training
Masaaki Imaizumi
===========================================================================================================
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme. For the nonparametric regression problem, it has been shown that an estimator using deep neural networks can achieve better performances in the sense of the L2-norm. In contrast, it is difficult for the neural estimator with least-squares to achieve the sup-norm convergence, due to the deep structure of neural network models. In this study, we develop an adversarial training scheme and investigate the sup-norm convergence of deep neural network estimators. First, we find that ordinary adversarial training makes neural estimators inconsistent. Second, we show that a deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction. We extend our adversarial training to general setups of a loss function and a data-generating function. Our experiments support the theoretical findings.
§ INTRODUCTION
We study the nonparametric regression problem.
Suppose we observe (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d × ℝ with dimension d ∈ ℕ that are independent and identical copies of a [0,1]^d × ℝ-valued random element (X,Y) which follows the regression model:
Y = f^*(X) + ξ,
where f^*: [0,1]^d → ℝ is an unknown function, ξ is a random noise variable with zero mean and finite variance and is independent of X, and X follows a marginal measure P_X on [0,1]^d.
Our interest is to utilize a deep neural network model and develop an estimator f̂ from the model and the n observations, then study its estimation risk in terms of the sup-norm, referred to as an L^∞-risk:
sup_x ∈ [0,1]^d |f̂(x) - f^*(x)|,
which implies uniform convergence of the estimator.
In this study, we prove that an adversarial training framework can provide an estimator with deep neural networks whose L^∞-risk converges, then derive a convergence rate of the risk and show the minimax optimality of the rate.
§.§ Background and Question
Deep learning is a data-driven statistical method using deep neural network models <cit.>, which have multiple layers.
It has many well-known extensions, such as a deep convolutional network <cit.>, a residual network <cit.>, and an attention mechanism <cit.>.
Owing to the multiple layers and the well-designed training algorithm, deep learning has achieved quite accurate prediction performance in various tasks.
The framework of nonparametric regression has been actively used to analyze deep neural networks, and many roles of deep learning have been revealed.
A deep neural network is a model of functions f:[0,1]^d → ℝ with multiple layers such that
f(x) = g_L ∘ g_L-1∘⋯∘ g_1(x),
where g_1(·),...,g_L(·) are trainable functions by L layers.
Deep learning is a method of fitting the function by deep neural networks to observed data, hence it is obviously regarded as a method for the nonparametric regression problem.
Specifically, in most studies on the nonparametric regression with deep neural networks, the following least-square estimator has been studied:
f̂^{LS} ∈ argmin_{f ∈ ℱ} \frac{1}{n}∑_{i=1}^n (Y_i - f(X_i))^2,
where ℱ is a set of functions by deep neural networks with the form (<ref>).
Further, performance of the estimator f̂^{LS} has been studied by its L^2-risk
‖f̂^{LS} - f^*‖_{L^2}^2 := 𝔼[ (f̂^{LS}(X) - f^*(X))^2 ].
Using this framework, seminal works <cit.> show that the multilayer structure of deep neural networks fits an internal structure of the unknown function f^* and that its estimation error achieves a faster convergence.
<cit.> investigate statistical properties of the neural estimators such as asymptotic distribution and robustness.
<cit.> show that the multilayer structure of the neural estimator is effective when the target function f^* has irregular properties such as discontinuity and heterogeneous smoothness.
<cit.> shows an adaptive property of the neural estimators to an intrinsic low-dimensionality of the observations, e.g., data concentrates on a low-dimensional manifold in its domain.
Studying a sup-norm value of the estimation error has been an important interest in nonparametric regression problems.
The sup-norm value, referred to as an L^∞-risk, is a sharper measure of accuracy and sensitivity of estimators than the L^2-risk.
Furthermore, the sup-norm convergence of errors is useful for statistical inference, such as a uniform confidence band, and is effective in the case with covariate shift of the transfer learning <cit.>.
For several conventional (non-deep) nonparametric estimators for f^*, their sup-norm convergence has been actively studied.
Classically, the convergence of kernel methods <cit.> and series methods <cit.> have been investigated.
More recently, the convergence of wavelet methods <cit.>, methods with reproducing kernel Hilbert spaces <cit.>, and Gaussian process methods <cit.> have been clarified.
Roughly speaking, when studying the sup-norm convergence of these non-deep estimators f̂^ND, the following linear-in-basis form plays an effective role:
f̂^ND = ∑_j ∈ J w_j ψ_j(·),
where J is an index set, {w_j}_{j ∈ J} is a set of weights in ℝ trained by the least-square approach, and {ψ_j(·)}_{j ∈ J} is a family of basis functions (possibly depending on covariates) such as wavelets or kernels.
Since the non-deep estimators have the linear form, it is possible to control the L^∞-risk effectively and show its convergence, except a general result by <cit.>.
Our interest is to evaluate the L^∞-risk of an estimator using deep neural networks (<ref>).
Since the deep neural network model (<ref>) does not have the linear-in-basis form (<ref>) as the non-deep methods, the existing analysis cannot study the L^∞-risk of deep neural networks.
Based on the background, we have the following questions:
Is it possible to achieve an estimator by deep neural networks f^* whose L^∞-risk converges?
If so, is it possible to show the optimality of a convergence rate of the L^∞-risk?
§.§ Introduction to Adversarial Training
The adversarial training is a training scheme for deep neural networks, which has been developed to deal with an adversarial attack on prediction by neural networks.
An adversarial attack is a methodology to mislead deep neural networks in its predictions, by putting a tiny perturbation into a covariate for a trained deep neural network.
Since functions by trained deep neural networks are unstable, the perturbed samples, called adversarial samples, vary the outputs of deep neural networks drastically.
<cit.> reported that the phenomenon by introducing a case in which a deep neural network misclassified an image of a panda as an image of gibbons by adding very fine noise to the image.
After the finding, many adversarial attack methods have been developed <cit.>, threatening the robustness of neural networks.
A standard approach to adversarial training is to minimize a robustified empirical risk, which is measured by adding perturbations to the observed input variable <cit.>.
Rigorously, an estimator by the adversarial training for regression is defined as the minimizer of the following empirical risk:
min_{f ∈ ℱ} \frac{1}{n}∑_{i=1}^n max_{x' : ‖x' - X_i‖_∞ ≤ h} (Y_i-f(x'))^2,
with some h > 0.
The outer minimization is solved by the gradient descent method as well as the usual least-square loss, and the inner maximization is solved by a gradient ascent method.
Several efficient algorithms have been proposed to solve this problem effectively <cit.>, such as the fast gradient sign method <cit.>.
The optimization process is summarized in the following:
i. Initialize f ∈ ℱ and repeat the following steps ii and iii:
ii. For each (Y_i,X_i), find x^*_i = argmax_{x' ∈ {x: ‖x-X_i‖_∞ ≤ h}} (Y_i - f(x'))^2.
iii. Update function f ← f - η∇ ( n^-1∑_i=1^n (Y_i - f(x^*_i))^2),
where η > 0 is a learning rate and ∇ denotes a derivative with respect to neural network parameters of f.
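The following is a minimal PyTorch-style sketch of steps i–iii for the regression setting (the small fully-connected architecture, the single sign-gradient ascent step used for the inner maximization, and all numerical constants are illustrative assumptions, not specifications from the text):

import torch

def adversarial_training_step(model, X, Y, h, eta):
    """One outer iteration of steps ii-iii on a batch (X, Y) with X in [0,1]^d."""
    # step ii: inner maximization over the L-infinity ball of radius h (one ascent step)
    X_adv = X.clone().requires_grad_(True)
    inner_loss = ((Y - model(X_adv).squeeze(-1)) ** 2).mean()
    grad_x, = torch.autograd.grad(inner_loss, X_adv)
    X_adv = (X + h * grad_x.sign()).clamp(0.0, 1.0).detach()

    # step iii: outer minimization over the network parameters
    outer_loss = ((Y - model(X_adv).squeeze(-1)) ** 2).mean()
    model.zero_grad()
    outer_loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= eta * p.grad
    return outer_loss.item()

# illustrative usage with a small fully-connected ReLU network
d, n = 2, 128
model = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
X = torch.rand(n, d)
Y = torch.sin(4 * torch.pi * X[:, 0]) + 0.1 * torch.randn(n)
for _ in range(100):
    adversarial_training_step(model, X, Y, h=0.05, eta=1e-2)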
Note that the efficiency of the algorithm is not a primary interest of this study, hence we focus on the estimation error by the global minimizer of the adversarial risk.
Several works actively pursue a theoretical understanding of adversarial training.
One of the most significant issues is a trade-off between the robustness and accuracy of the adversarial training, which studies the possibility of balancing the predictive performance of deep neural networks with their ability to defend against adversarial samples.
A risk bound and the sample complexity of the adversarial training in general settings is widely examined <cit.>.
The predictive performance of the adversarial training has been also studied, particularly in linear regression models with over-parameterization <cit.>.
§.§ This Study
The purpose of this study is to investigate the sup-norm convergence of an error by deep neural networks using the adversarial training scheme.
For this aim, we develop a novel formulation of adversarial training and study its efficiency.
Specifically, our formulation includes a preprocessing for smoothing the output variable at the first step, then formulates a neural estimator as a minimizer of an empirical adversarial risk associated with the preprocessing.
The preprocessing has a role to reduce a bias on the estimator from the perturbation of the adversarial training scheme.
As a specific form of preprocessing, we can employ several nonparametric estimators including the nearest neighbor method and the kernel method.
As a result, we derive an upper bound on the L^∞-risk of the estimator with deep neural networks using our adversarial training scheme, then reveal some properties of its convergence rate.
Specifically, our contributions are summarized as follows.
(i) We derive a convergence rate of the L^∞-risk of the estimator when the true function f^* belongs to the Hölder space.
The derived rate achieves the minimax optimal rate with an appropriately designed preprocessing.
(ii) We show the inconsistency of the ordinary adversarial training without preprocessing.
This is due to the inability of an output variable in the regression problem to accommodate perturbations of the adversarial training.
(iii) Our approach applies to not only the adversarial training with a squared loss but also a general convex loss.
Specifically, we study an L^∞-risk of the regression problem of general loss, which is useful for handling data that have heavy-tailed noise.
(iv) We additionally study the L^∞-risk when the true function f^* has a heterogeneous smoothness, i.e. it belongs to the Besov space.
Our analysis shows the minimax optimality of the convergence rate of the L^∞-risk in this case.
(v) Our result is applicable to a wide range of architectures of deep neural networks, such as a fully-connected dense layer.
Also, it allows both finite depth networks and finite width networks.
We conduct numerical experiments and confirm that our theoretical results are consistent with the result.
Our results provide new implications for the understanding of adversarial training, which argues the trade-off between robustness and accuracy of prediction by adversarial training.
Along with this line, we show that (i) the ordinary adversarial learning is not consistent in the regression problem in the first place, (ii) the robustness obtained by adversarial learning is described by sup-norm convergence of the estimation error, and (iii) the adversarial training achieves the optimal rate with appropriate preprocessing.
Technical contributions in our proof are summarized as follows.
First, we derive an upper bound of the sup-norm of an estimation error by the adversarial risk up to constants.
This bound uses a volume of a neighborhood set of an input variable, which is utilized to design the adversarial perturbation.
Second, we develop an empirical process technique for the evaluation of preprocessing.
To control the effects of the preprocessing and the adversarial training simultaneously, we involve two levels of evaluation of biases and variances as appropriate.
§.§ Organization
The rest of this paper is organized as follows.
Section <ref> gives a setup for the nonparametric regression problem and the definition of deep neural networks.
Section <ref> gives a general formulation of adversarial training and an overview of analysis on it.
Furthermore, the section shows that naive adversarial training does not give a consistent estimator.
In Section <ref>, as a main result, we derive an upper bound by a sup-norm of an estimation error by the developed estimator
Section <ref> gives extensions and applications.
Section <ref> gives numerical simulations, and Section <ref> concludes.
§.§ Notation
For n ∈ ℕ, [n] := {1,2,...,n} is the set of natural numbers no more than n.
For a,a' ∈ ℝ, a ∨ a' := max{a,a'} is the maximum.
⌊ a ⌋ denotes the largest integer which is no more than a.
The Euclidean norm of a vector b ∈ ℝ^d is denoted by ‖b‖_2 := √(b^⊤ b).
Let C_w be a positive finite constant depending on a variable w.
𝟙{E} denotes the indicator function. It is 1 if the event E holds and 0 otherwise.
For a matrix A ∈ ℝ^{N × N}, A_{i,j} denotes the (i,j)-th element of A for i,j=1,...,N.
For a measurable function f: Ω → ℝ on a set Ω ⊂ ℝ^d, ‖f‖_{L^p(μ)} := (∫ |f(x)|^p dμ(x) )^{1/p} denotes an L^p-norm for p ∈ [1,∞) with a measure μ, and ‖f‖_{L^∞} := sup_{x ∈ Ω} |f(x)| denotes the sup-norm.
Also, L^p(Ω) denotes the set of measurable functions such that ‖f‖_{L^p(λ)} < ∞ with the Lebesgue measure λ.
For x ∈ ℝ^d, δ_x denotes the Dirac measure at x.
For a function f : ℝ^d → ℝ with a multivariate input (x_1,...,x_d) ∈ ℝ^d and a multi-index a = (a_1,...,a_d) ∈ ℕ^d, ∂^a f(x_1,...,x_d) := ∂_{x_1}^{a_1} ∂_{x_2}^{a_2} ⋯ ∂_{x_d}^{a_d} f(x_1,...,x_d) denotes a partial derivative with the multi-index.
For a variable x, C_x denotes some positive finite constant that polynomially depends on x, and it can have different values in different places.
For sequences of reals {a_n}_{n ∈ ℕ} and {b_n}_{n ∈ ℕ}, a_n ≍ b_n denotes lim_{n →∞} a_n/b_n → c with some c ∈ (0,∞), a_n = O(b_n) denotes |a_n| ≤ M|b_n|, and a_n = Ω(b_n) denotes |a_n| ≥ M |b_n| with some M > 0 for all sufficiently large n. a_n = o(b_n) denotes |a_n| ≤ M |b_n| for any M > 0 and for all sufficiently large n.
Õ(·) and Ω̃(·) are the notations O(·) and Ω(·) ignoring multiplied polynomials of log(n), respectively.
For a sequence of random variables {X_n}_{n ∈ ℕ}, X_n = O_P(a_n) denotes Pr(|X_n/a_n| > M) ≤ ε for any ε > 0 and some M>0 for all sufficiently large n, and X_n = o_P(a_n) denotes lim_{n →∞} Pr(|X_n/a_n| > ε) = 0 for any ε > 0.
§ PROBLEM SETTING AND PRELIMINARIES
§.§ Nonparametric Regression and L^∞-Risk
§.§.§ Model and Observations
For the nonparametric regression, suppose that we have n observations (X_1,Y_1),...,(X_n,Y_n) ∈ [0,1]^d × ℝ that are independent and identical copies of a random variable (X,Y) which follows the regression model (<ref>).
Note that the model is characterized by the unknown function f^* and the noise variable ξ.
Let P_X be a marginal measure of X.
§.§.§ Basic Assumption
We introduce a standard assumption on the regression model.
P_X has a density function that is uniformly lower bounded by C_P_X > 0 on [0,1]^d.
Assumption <ref> is important to estimate f^* on the entire domain [0,1]^d.
Both of the assumptions are commonly introduced in the nonparametric regression for neural networks <cit.>.
We suppose that f^* belongs to a function class with the Hölder smoothness with an index β > 0.
To the end, we define a ball of the Hölder space with β > 0 as
ℋ^β([0,1]^d) := { f: [0,1]^d → ℝ |
∑_{b ∈ ℕ^d: ‖b‖_1 < ⌊β⌋} ‖∂^b f‖_{L^∞} + ∑_{b ∈ ℕ^d: ‖b‖_1 = ⌊β⌋} sup_{x,x' ∈ [0,1]^d, x ≠ x'} \frac{|∂^b f(x) - ∂^b f(x')|}{‖x - x'‖_∞^{β - ⌊β⌋}} ≤ B },
with its radius B ≥ 1.
Intuitively, ℋ^β([0,1]^d) is a set of functions on [0,1]^d that are ⌊β⌋ times partially differentiable and their derivatives are (β - ⌊β⌋)-Hölder continuous.
There exists β > 0 such that f^* ∈ ℋ^{β'}([0,1]^d) holds for all β' ∈ (0,β].
To impose differentiability for f^* is the usual setting for nonparametric regression (see <cit.>, for example).
Further, in the statistical studies on deep neural networks, it has also studied the estimation of functions with more complex structures <cit.>.
We will discuss an extension on this assumption in Section <ref>.
§.§.§ Goal: Sup-norm Convergence
Our goal is to estimate the true function f^* in the model (<ref>) and study an estimation error of an estimator in terms of the sup-norm ·_L^∞.
Rigorously, we will develop an estimator f̂ and study its L^∞-risk defined as follows:
‖f̂ - f^*‖_{L^∞} := sup_{x ∈ [0,1]^d} |f̂(x) - f^*(x)|.
The L^∞-risk is a sharp measure for the robustness of estimators and is applied to statistical inference such as a uniform confidence band.
To understand this point, we discuss its relation to the commonly used L^2-risk measured by the L^2-norm, which is a typical case with the following L^p-norm (p ∈ [1,∞)) with p=2:
‖f̂ - f^*‖_{L^p(P_X)}^p := 𝔼_X[ |f̂(X) - f^*(X)|^p ].
Since the L^∞-risk bounds the L^p-risk, i.e. ‖f̂ - f^*‖_{L^∞} ≥ ‖f̂ - f^*‖_{L^p(P_X)} holds for every p ≥ 1, the L^∞-risk leads to stronger convergence.
Figure <ref> illustrates the difference between the convergences in the L^2-norm and the sup-norm.
In the related studies with neural networks (e.g. <cit.>), the L^2-risk has been mainly studied, but the L^∞-risk of neural network estimators has not been proved to converge.
§.§ Deep Neural Network Model
We define a deep neural network, which is a model of functions by multiple layers.
Specifically, we consider deep neural networks with fully-connected layers and the rectified linear unit (ReLU) activation function, which is one of the most commonly used activations.
Let L ∈ ℕ be a number of layers, and 𝐖 = (W_1,...,W_{L+1}) ∈ ℕ^{L+1} be a tuple of width parameters, where W_ℓ denotes the width of the ℓ-th layer.
Deep neural networks have a weight matrix A_ℓ ∈ ℝ^{W_{ℓ+1} × W_ℓ} and a weight vector b_ℓ ∈ ℝ^{W_ℓ} for each ℓ ∈ [L].
For each d ∈ ℕ, we introduce a ReLU activation function σ: ℝ^d → ℝ^d such that σ(z) = ((z_1 ∨ 0), (z_2 ∨ 0),...,(z_d ∨ 0))^⊤ for z = (z_1,...,z_d) ∈ ℝ^d.
For each ℓ ∈ [L-1], we define a map g_ℓ: ℝ^{W_ℓ} → ℝ^{W_{ℓ+1}} by the ℓ-th layer as
g_ℓ(z) = σ(A_ℓ z + b_ℓ), z ∈ ℝ^{W_ℓ}.
For the last L-th layer, we define g_L(z) = A_L z + b_L with z ∈ ℝ^{W_L}.
For L and 𝐖, we define a parameter space Θ_{L,𝐖} := (ℝ^{W_2 × W_1} × ℝ^{W_1}) × (ℝ^{W_3 × W_2} × ℝ^{W_2}) × ⋯ × (ℝ^{W_{L+1} × W_L} × ℝ^{W_L}) whose element is θ = ((A_1,b_1),(A_2,b_2),...,(A_L,b_L)), then we define a function f_θ: ℝ^d → ℝ by a deep neural network with d = W_1 and W_{L+1} = 1 as
f_θ(x) = g_L ∘ g_L-1∘⋯∘ g_1(x), x ∈ [0,1]^d.
Intuitively, f_θ(x) is constituted by compositions of L maps by the multiple layers with the maximum width ‖𝐖‖_∞ = max_{ℓ ∈ [L+1]} W_ℓ.
There are at most ∑_{ℓ=1}^L (W_ℓ + 1) W_{ℓ+1} ≤ L (‖𝐖‖_∞ +1)^2 parameters in the deep neural network model.
We introduce a set of functions by deep neural networks with L layers and W maximum width.
With a tuple (L, W) ∈ ℕ^2 and an upper bound B ≥ 1, we define the set of functions by deep neural networks as
ℱ(L,W) := { f_θ | ‖f_θ‖_{L^∞} ≤ B , θ ∈ Θ_{L,𝐖}, ‖𝐖‖_∞ ≤ W }.
The condition on the upper bound B can be satisfied by a clipping operation using the ReLU activation function <cit.>.
This definition of deep neural networks includes several variations of neural networks.
If the parameter matrix A_ℓ is not sparse, the defined neural network is a fully-connected neural network.
If the matrix A_ℓ is constrained to be sparse with some structure, it is equivalent to a convolutional neural network <cit.> or a residual network <cit.>.
One advantage of the definition (<ref>) is that it controls the easily manipulated values of width W and depth L of neural networks, that can be easily specified when designing neural network models.
This is in contrast to manipulating the number of nonzero parameters and the maximum parameter value, which are difficult to control in practice (for example, see <cit.>).
§ ADVERSARIAL TRAINING ESTIMATOR FOR REGRESSION
§.§ Ordinary Adversarial Training and its Inconsistency
We introduce a framework of adversarial training.
The adversarial training framework defines its loss using an input point in the neighborhood of a data point that maximizes loss, as reviewed in (<ref>).
Rigorously, with a scale multiplier h ∈ (h̲, 1) with h̲ > 0, we consider a neighbourhood of x ∈ [0,1]^d as
Δ_h^p(x) = {x' ∈ [0,1]^d | ‖x - x'‖_p ≤ h} ⊂ [0,1]^d.
Then, we consider the following estimator by the empirical adversarial risk with a function f: [0,1]^d → ℝ and p ≥ 1:
R_n^o(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i) (Y_i - f(x'))^2.
We can define an estimator of f^* by the minimizer of this empirical adversarial risk as
f̌ := argmin_{f ∈ ℱ(L,W)} R_n^o(f).
The minimax optimization in the problem (<ref>) is solved by various algorithms <cit.>.
§.§.§ Inconsistency of Ordinary Adversarial Training
In this section, we show the inconsistency of f̌ by ordinary adversarial training.
Specifically, we obtain the following result.
Suppose n ≥ 3.
There exists a sub-Gaussian noise ξ_i, f^* ∈ ℋ^1([0,1]^d), P_X, and h ∈ (0,1) such that the estimator f̌ in (<ref>) satisfies the following inequality with an existing constant c^* > 0 with probability at least 0.5:
‖f̌ - f^*‖_{L^2(P_X)}^2 ≥ c^*.
This result shows that the L^∞-risk of f̌ does not converge to zero with the ordinary adversarial training, regardless of the sample size n and the neural network architecture.
Since the L^∞-risk is bounded below by the L^2-risk, hence the ordinary adversarial training also yields an inconsistent estimator in the sense of a sup-norm.
This result is not limited to the choice of model used for the estimator, hence it occurs with methods other than neural networks.
Intuitively, ordinary adversarial training produces a bias by the design of perturbations on inputs (see the middle panel of Figure <ref>).
This is because the perturbation makes f̌(X_i) fit to an output with a shift ς = x' - X_i, which creates the inconsistency.
Hence, we need to correct the bias by the ordinary adversarial training in the regression problem.
§.§ Proposed Framework of Adversarial Training
We introduce an empirical risk function for adversarial training based on a quadratic loss.
We develop a random map Ŷ: [0,1]^d → ℝ for surrogate outputs, which is referred to as a preprocessed output.
This notion is a general expression of several methods, and its specific configurations will be given later.
With Ŷ, we define an empirical preprocessed adversarial risk as
R_n(f) := 1/n∑_i=1^nsup_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
for a function f ∈ L^2([0,1]^d).
This loss function is a generalized version of the ordinary adversarial risk (<ref>) with the preprocessing Ŷ.
Using this notion, we define an estimator as the minimizer of the empirical risk as
f̂ ∈ argmin_{f ∈ ℱ(L,W)} R_n(f).
This framework intends to perturb an output variable in response to the perturbation on the input X_i.
That is, when the input point X_i is shifted by ς = x' - X_i due to the adversarial training, we also shift the output side by ς.
Hence, the observed outputs may not be able to accommodate the shift.
To address this issue, we prepare the corresponding output using a preprocessing approach, such as the nearest neighbor method.
Figure <ref> illustrates the differences between the least-square estimator f̂^{LS}, the ordinary adversarial training f̌, and our proposed estimator by the adversarial training with preprocessing f̂.
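Compared with the sketch of steps i–iii above, only the targets change: the fixed label Y_i is replaced by the preprocessed output evaluated at the perturbed point. A hedged sketch follows, where the name y_hat stands for any preprocessing map satisfying Assumption <ref> that returns a torch tensor, and treating it as locally constant in the inner ascent step is a simplification:

import torch

def preprocessed_adversarial_loss(model, y_hat, X, h):
    """Empirical preprocessed adversarial risk: the target moves with the perturbation."""
    # inner maximization (one sign-gradient step; y_hat is treated as locally constant
    # when computing the ascent direction, which suffices for a sketch)
    X_adv = X.clone().requires_grad_(True)
    inner = ((y_hat(X).detach() - model(X_adv).squeeze(-1)) ** 2).mean()
    grad_x, = torch.autograd.grad(inner, X_adv)
    X_adv = (X + h * grad_x.sign()).clamp(0.0, 1.0).detach()
    # outer objective: both the network and the preprocessed target are evaluated at x'
    return ((y_hat(X_adv) - model(X_adv).squeeze(-1)) ** 2).mean()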
§.§.§ Preprocessing Design
We impose the following assumptions on the preprocessing.
[Preprocessing]
Ŷ(x) is continuous and 𝔼[‖Ŷ‖_{L^∞}^2] ≤ V^2 with some V > 0.
Also, there exists a non-negative sequence {ζ_n}_{n ∈ ℕ} with ζ_n → 0 as n → ∞ such that the following holds for all n ∈ ℕ:
ζ_n^2 ≥ 𝔼[ ‖Ŷ - f^*‖_{L^∞}^2 ].
The sequence {ζ_n}_{n ∈ ℕ} represents a convergence rate of the preprocessing Ŷ to f^*.
Importantly, the data used to construct the preprocessed output Ŷ here may overlap the data for the estimator as (<ref>).
There are several examples for preprocessing as follows.
[Nearest neighbour]
First, we consider the k-nearest neighbor method.
For k ∈ ℕ and x ∈ [0,1]^d,
we define a ball B_x(r) := {x' ∈ [0,1]^d | ‖x-x'‖_2 ≤ r} with r>0, the k-nearest neighbour radius r_k(x) := inf{r >0 : |B_x(r) ∩ {X_1,...,X_n}| ≥ k}, and its corresponding dataset N_k(x) := B_x(r_k(x)) ∩ {X_1,...,X_n}.
With this notion, we define the k-nearest neighbor preprocessing as
Ŷ(x) = \frac{1}{|N_k(x)|} ∑_{i=1}^n Y_i 𝟙{X_i ∈ N_k(x)}.
In this example, if Assumption <ref> holds with β∈ (0,1], we have ζ_n^2 = O(n^-2β/(2β + d)log n) with k ≍ n^2β/(2β + d) by Theorem 1 in <cit.>.
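A small numpy sketch of this k-nearest-neighbor preprocessing (brute-force distance computation; the value of k and the synthetic data are illustrative choices):

import numpy as np

def knn_preprocess(X_train, Y_train, k):
    """Return Y_hat: a k-nearest-neighbor surrogate output x -> mean of the k closest Y_i."""
    def y_hat(x_query):                      # x_query: (m, d) array of evaluation points
        d2 = ((x_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        idx = np.argsort(d2, axis=1)[:, :k]  # indices of the k nearest training inputs
        return Y_train[idx].mean(axis=1)
    return y_hat

# illustrative usage on synthetic 1-d data
rng = np.random.default_rng(0)
n, d = 200, 1
X = rng.uniform(size=(n, d))
Y = 0.3 * np.sin(4 * np.pi * X[:, 0]) - X[:, 0] + 0.5 + 0.1 * rng.normal(size=n)
y_hat = knn_preprocess(X, Y, k=15)
print(y_hat(np.array([[0.25], [0.75]])))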
[Posterior mean by Bayesian method]
We consider a mean of a posterior distribution by a prior distribution on functions.
The method considers a B-spline series (see <cit.> for overview and specific constructions).
With some tuple of numbers of basis functions (J_1,...,J_d) ∈ ℕ^d and orders (q_1,...,q_d) ∈ ℕ^d, we consider parameters {θ_{j_1,...,j_d}}_{j_1,...,j_d = 1}^{J_1,...,J_d} and the B-spline series {B_{j_k,q_k}(x_k)}_{j_k = 1}^{J_k} for k=1,...,d.
Then, the method constructs a prior distribution on a function f with the form
f(x) = ∑_j_1=1^J_1⋯∑_j_d=1^J_dθ_j_1,...,j_d∏_k=1^d B_j_k,q_k(x_k),
by putting a Gaussian prior on the parameters θ_j_1,...,j_d.
If Assumption <ref> holds with β > 0, Theorem 4.4 in <cit.> shows that ζ_n^2 = O(n^-2β/(2β + d)log^2β/(2β + d) n), which is implied by a contraction of the posterior shown by the theorem.
We can pick other methods for preprocessing.
The required property is that an error in estimating a smooth function converges in the sup-norm sense.
§ MAIN RESULT: L^∞-RISK ANALYSIS
We present our main results on the consistency of the estimator and a non-asymptotic upper bound on the estimation error with its convergence rate in n.
We further discuss the minimax optimality of the obtained convergence rate.
To achieve optimality, we need to discuss the design of the preprocessing Ŷ and the architecture of deep neural networks.
§.§ Consistency
We present an upper bound of an expectation of the L^∞-risk of the estimator.
The first result is consistency in the sense of the L^∞-risk.
In an asymptotic analysis with n →∞, a product of the depth and width of deep neural networks should also increase in n.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class by deep neural networks with a tuple (L,W).
Suppose Assumptions <ref> and <ref> hold and f^* is continuous.
Then, there exists a tuple (L,W) with LW = o(n) such that it holds that
𝔼[‖f̂ - f^*‖_{L^∞}^2 ] → 0,
as n →∞.
The results show that under divergent widths and depths and appropriate preprocessing, we obtain consistency in the sense of sup-norm.
Note that f^* needs only be continuous, and conditions on derivatives are not necessary.
Also, it provides the following important implications: (i) we can control the L^∞-risk even though the deep neural network model does not have the linear-in-feature structure, and (ii) the preprocessing solves the problem of inconsistency in adversarial training presented in Section <ref>.
Its proof is based on the procedure in Section <ref>.
We note the importance of sup-norm convergence in the context of estimation.
In the theory of approximation, the sup-norm convergence by neural networks has been an important topic, that is, inf_{f ∈ ℱ(L,W)} ‖f - f^*‖_{L^∞} → 0 as L →∞ or W →∞, and numerous studies have studied the problem, e.g. <cit.>.
Conversely, in the nonparametric regression problem, the sup-norm convergence has been difficult due to noise in observations.
Theorem <ref> shows that the adversarial training with preprocessing enables convergence in the sup-norm.
§.§ Non-Asymptotic Bound and Convergence Rate
As a more rigorous error evaluation, we derive a non-asymptotic upper bound for the L^∞-risk of the estimator with the adversarial training.
This result is also useful in studying convergence rates of the risk and discussing its optimality.
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W) by deep neural networks.
Suppose Assumption <ref>, <ref>, and <ref> hold for some β > 0.
Then we have
𝔼[‖f̂ - f^*‖_{L^∞}^2 ] ≤ C_{P_X,p,d,B,β} h^{-d}( \frac{(WL)^2 \log(WL) \log n}{n} + (WL)^{-4β/d} + h^{-d}ζ_n^2 ),
for every n ≥n̅ with some n̅∈ℕ.
This result gives some implications: (i) we develop an upper bound on the L^∞-risk of the estimator, and
(ii) the bound is proportional to h^-d, which appears when evaluating the L^∞-risk using the adversarial loss.
Note that we can select h as strictly positive and thus it does not affect an order of the bound in n.
More precisely, this upper bound consists of the three terms.
The first term O((WL)^2 log (WL) /n) is the complexity error, the second term O((WL)^-4s/d) is the approximation error by the deep neural network, and the third term O(ζ_n^2) is the error by the preprocessing.
The complexity and approximation errors also appear in several risk bounds on an L^2-risk of deep neural network (e.g., Theorem 4.3 in <cit.>).
In contrast, the preprocessing error term is a new term needed to derive an upper bound on the L^∞-risk.
We derive the convergence rate of the L^∞-risk with respect to n.
Specifically, we select the width and depth of deep neural networks in order to balance the trade-off in the error terms presented in Theorem <ref>.
Consider the setting in Theorem <ref>.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L and W as LW ≍ n^2β/(2β + d).
Then, we obtain the following as n →∞:
𝔼[‖f̂ - f^*‖_{L^∞}^2 ] = O( n^{-2β/(2β + d)} \log^{2 ∨ β^*} n ).
The rate obtained in Corollary <ref> is identical to the minimax optimal rate of risk measured in the sup-norm in the problem of estimating a function from ^β([0,1]^d) <cit.>.
Specifically, the derived rate corresponds to the following lower bound:
inf_{f̅_n} sup_{f^* ∈ ℋ^β([0,1]^d)} 𝔼[‖f̅_n - f^*‖_{L^∞}^2 ] = Ω̃( n^{-2β/(2β + d)}), (n →∞),
where f̅_n is taken from all estimators depending on the n observations.
Since the derived rate is the same as the lower bound, we show that the adversarial training estimator achieves the minimax optimal rate.
§.§ Proof Overview
We give an overview of proof of the main theorem.
As preparation, we introduce several notations related to adversarial training.
With h, an order p, and a base measure P, we define an adversarial (pseudo-)norm of f: [0,1]^d → ℝ and its empirical analogue
‖f‖_{P,Δ}^2 := 𝔼_{X ∼ P}[ max_{x' ∈ Δ_h^p(X)} |f(x')|^2 ], ‖f‖_{n,Δ}^2 := n^{-1} ∑_{i=1}^n max_{x' ∈ Δ_h^p(X_i)} |f(x')|^2.
These norms correspond to the adversarial risks with a squared loss for the regression problem (<cit.>).
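For illustration, the empirical adversarial norm can be approximated by brute force; the following sketch replaces the exact inner maximum over the ℓ^∞-ball Δ_h^∞(X_i) with a grid search (the grid resolution and the example functions are arbitrary choices):

import numpy as np

def empirical_adversarial_sq_norm(f, X, h, n_grid=21):
    """Approximate ||f||_{n,Delta}^2: for each X_i, maximize |f(x')|^2 over the
    l-infinity ball of radius h intersected with [0,1]^d by brute force on a grid."""
    n, d = X.shape
    offsets = np.linspace(-h, h, n_grid)
    mesh = np.stack(np.meshgrid(*([offsets] * d)), axis=-1).reshape(-1, d)
    vals = []
    for x in X:
        cand = np.clip(x + mesh, 0.0, 1.0)   # gridded neighborhood Delta_h(x)
        vals.append(np.max(f(cand) ** 2))
    return np.mean(vals)

# example: squared adversarial norm of the error of a candidate function against f^*
f_star = lambda x: 0.3 * np.sin(4 * np.pi * x[:, 0]) - x[:, 0] + 0.5
f_cand = lambda x: 0.25 * np.sin(4 * np.pi * x[:, 0]) - x[:, 0] + 0.5
X = np.random.default_rng(1).uniform(size=(50, 1))
print(empirical_adversarial_sq_norm(lambda x: f_cand(x) - f_star(x), X, h=0.05))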
We also define an approximation error of deep neural networks in ℱ(L,W) as
Φ_{L,W} := inf_{f ∈ ℱ(L,W)} ‖f - f^*‖_{L^∞}.
This term represents an expressive power of neural networks in ℱ(L,W), which decreases as L or W increases (see <cit.> for an example).
We further use a uniform covering number of ℱ(L,W).
Let Q_n be an empirical measure with n samples.
Given δ ∈ (0,1],
we define a δ-covering set of ℱ(L,W) as {f_1,...,f_N} ⊂ ℱ(L,W) and the uniform covering number from the empirical process theory (e.g., <cit.>):
N_{L,W}(δ) := sup_{Q_n} N(δ, ℱ(L,W), ‖·‖_{L^2(Q_n)}),
where the supremum is taken over all possible empirical measures Q_n.
This notion is useful to evaluate the complexity of the set of deep neural networks, because it gives an upper bound without boundedness or sparsity of parameters of neural networks (See Lemma <ref>, for example).
Our proof consists of three main elements: (i) the derivation of an upper bound of the adversarial norm of the estimation error, (ii) to develop an upper bound of the L^∞ norm of the estimation error by the adversarial norm, and (iii) a comb of the above results using the localization technique.
Each of these is described below.
In the first step, we derive an upper bound for the adversarial norm of the estimation error.
Rigorously, Lemma <ref> will state the following upper bound
𝔼[‖f̂ - f^*‖_{P_X,Δ}^2 ] ≤ C { 𝔼[‖f̂ - f^*‖_{n,Δ}^2] + \frac{B^2 (\log N_{L,W}(δ) +1)}{n} + δ B + δ^2 },
for any δ ∈ (0,1) with some universal constant C > 0.
Furthermore, Proposition <ref> will bound the empirical adversarial norm 𝔼[‖f̂ - f^*‖_{n,Δ}^2] as
𝔼[‖f̂ - f^*‖_{n,Δ}^2 ] ≤ C { (𝔼[‖f̂ - f^*‖_{L^∞}^2 ]^{1/2} + δ) ( \frac{\log N_{L,W}(δ)}{n} + ζ_n )^{1/2} + (Φ_{L,W} + ζ_n )^2 }.
We achieve these bounds by extending the empirical process technique by <cit.> to the adversarial norm.
There are several points for noting: (i) the term Φ_L,W represents a bias, and the term O(log N_L,W(δ) / n) represents a variance of the estimator, that are similar to the least square estimator, (ii) the variance term is described by the uniform covering number, which is useful to study neural networks whose parameters are unbounded and non-sparse, and (iii) there is a term ζ_n which represents the error by the preprocessing, unlike the case of the least square estimator.
In the second step, we construct an upper bound for the sup-norm using the adversarial norm.
That is, we develop the following statement:
Consider the estimator as (<ref>) and the adversarial norm as (<ref>).
Suppose P_X satisfies Assumption <ref>.
Then, we have
‖f̂ - f^*‖_{P_X,Δ}^2 ≥ C_{P_X,p,d} h^d ‖f̂ - f^*‖_{L^∞}^2 .
Intuitively, we utilize the similarity between the adversarial norm and the sup-norm to achieve the result.
That is, the maximization over Δ_h^p in the adversarial norm has a similar property to the sup-norm.
Using this property, we give an upper bound on the sup-norm while taking into account the volume of the hypercube.
We will give a generalized version of this result as Lemma <ref> in the supplementary material.
In the last step, we combine these results and derive the main statement of Theorem <ref>.
Here we apply the peeling argument to obtain convergence rates. Note that a simple combination of the above results would lose optimality.
To obtain the minimax optimal rate, we evaluate the approximation error and the uniform covering number based on the localization techniques.
§ APPLICATIONS
§.§ Extension to General Loss Function
§.§.§ Motivation and Setting
We can extend our adversarial training results to the case of non-squared loss functions.
Specifically, we can handle loss functions such as absolute value loss, quantile loss, and Huber loss, which are used in the presence of heavy-tailed noise.
This setting with deep neural networks is studied in <cit.>.
We introduce a generic loss function, which satisfies the following assumption:
A loss function ℓ: ℝ × ℝ → ℝ is symmetric and ℓ(x,y) is Lipschitz-continuous in each x and y with its Lipschitz constant C_ℓ > 0.
Further, ℓ(y,x)=0 holds if and only if y=x, and there exist constants c_ℓ > 0 and q ≥ 1 such that
ℓ(y,x) ≥ c_ℓ |y-x|^q, ∀ x,y ∈ ℝ.
A class of loss function satisfying Assumption <ref> includes several representative loss functions, e.g., an absolute loss ℓ(y,x) = |y-x|, a quantile loss ℓ(y,x) = ({y ≥ x}τ + {y ≤ x}(τ - 1)) (y-x) for τ∈ (0,1), and the Cauchy loss ℓ(y,x) = log (1 + κ^2 (y-x)^2) for κ > 0.
We introduce an empirical risk function for adversarial training based on ℓ.
Using the neighbourhood set Δ_h^p(x) and the preprocessing Ŷ, we define an empirical risk function as
R̃_n(f) := 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ(Ŷ(x'), f(x')).
This loss function is a generalized version of the ordinary loss for the adversarial training (<ref>).
Using this notion, we define its minimizer as
f̃ ∈ argmin_{f ∈ ℱ(L,W)} R̃_n(f).
§.§.§ Error Analysis
We study an L^∞-risk of this estimator by deriving a non-asymptotic upper bound.
The proof differs from that of Theorem <ref>, requiring a more general treatment of loss combined with adversarial training.
Consider the regression model (<ref>) and the adversarial estimator f̃ in (<ref>) with the function class by deep neural networks with a tuple (L,W) and h ∈ (0,1).
Suppose Assumption <ref> and <ref> for β > 0, Assumption <ref> holds with ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0 and Ŷ is independent of {(X_i,Y_i)_i=1^n},
and Assumption <ref> holds with q ∈ [1,∞).
Then, we have the following as n →∞:
𝔼[‖f̃ - f^*‖_{L^∞}^2] = O( h^{-2d/q} n^{-β/(q(β + d))} \log^{(2/q) ∨ β^*} n ).
This result shows that the L^∞-risk is bounded with the setup with general loss functions.
The convergence rate of Proposition <ref> of the L^∞-risk corresponds to a convergence rate of excess risks derived by Theorem 4.2 in <cit.> under general losses.
The key to this result is the bound V on 𝔼[‖Ŷ‖_{L^∞}^2] given in Assumption <ref>.
The independence of the preprocessing Ŷ is imposed because of a technical reason, however, it is easy to satisfy it.
For example, we can randomly split the observed data into two and then conduct the preprocessing using one of the two.
The technical derivation is similar to that of Theorem <ref>.
First, we define an expected value of the adversarial risk with the general loss and the preprocessing: for f ∈ ℱ(L,W), we define
R̃(f) := 𝔼_X[ sup_{x' ∈ Δ_h^p(X)} ℓ(f(x'), Ŷ(x')) ].
Then, we derive an upper bound for an excess value of the risk R̃ (f̃) - R̃(f^*) in Proposition <ref>.
Next, we bound the L^∞-risk by properties of the expected adversarial risk as
‖f̃ - f^*‖_{L^∞}^q = O( h^{-d}( R̃(f̃) - R̃(f^*) + ‖Ŷ - f^*‖_{L^∞})).
in Lemma <ref>.
This result is an extension of the bound for the L^∞-risk by the L^2-risk as shown in Lemma <ref>.
Combining the results, we obtain the result of Proposition <ref>.
§.§ Adaptation to Heterogeneous Smoothness with Besov Space
§.§.§ Motivation and Setting
In this section, we show that our proposed method can be adapted to estimate functions with heterogeneous smoothness, that is, we study the case that the true function f^* is an element of the Besov space (see <cit.> for an introduction).
The Besov space has an interesting property that linear estimators, a certain type of non-deep estimators, cannot estimate its elements with the optimal convergence rate.
First, we give the definition of the Besov space following <cit.>.
Note that there are several equivalent definitions for Besov spaces, and the following is based on the notion of difference of functions.
Consider parameters p,q ∈ (0,∞] and β > 0.
For r ∈ ℕ, h ∈ ℝ^d, and f:[0,1]^d → ℝ, we define an r-th difference of f at x ∈ [0,1]^d as
Δ_h^r[f](x) = 𝟙{x + rh ∈ [0,1]^d} ∑_{j=0}^r \binom{r}{j} (-1)^{r-j} f(x + jh).
We also define the r-th modulus of smoothness of f with u > 0 as
ω_{r,p}(f,u) = sup_{‖h‖_2 ≤ u} ‖Δ_h^r[f]‖_{L^p(λ)}.
Recall that ·_L^p(λ) denotes the L^p-norm with the Lebesgue measure λ.
Using these notions, we define a ball in the Besov space as follows.
With r ∈ ℕ such that r > β, we define a semi-norm of f: [0,1]^d → ℝ as
‖f‖_{ℬ_{p,q}^β} :=
\begin{cases}
( ∫_0^∞ (u^{-β} ω_{r,p}(f,u))^q u^{-1} du )^{1/q}, & q < ∞ \\
sup_{u > 0} u^{-β} ω_{r,p}(f,u), & q = ∞.
\end{cases}
Then, we define a ball of the Besov space with its radius B ≥ 1 as
ℬ_{p,q}^β := { f: [0,1]^d → ℝ | ‖f‖_{L^p(λ)} + ‖f‖_{ℬ_{p,q}^β} ≤ B }.
The Besov space can represent functions with discontinuity and heterogeneous smoothness, which means that the degree of smoothness of functions varies depending on x.
These properties follow from the fact that ℬ_{1,1}^1 coincides with the space of bounded total variation <cit.>.
An important property of heterogeneous smoothness is that deep estimators, such as deep neural networks, tend to have an advantage in estimating such functions.
Specifically, a linear estimator, which is one certain family of non-deep estimators <cit.>, becomes sub-optimal when estimating elements of the Besov space.
The linear estimator has a form f̂^{lin}(·) = ∑_{i=1}^n Ψ_i(·;X_1,...,X_n) Y_i with arbitrary measurable maps Ψ_i, and includes major estimators such as the kernel ridge estimator.
Then, Theorem 1 in <cit.> implies the following minimax lower bound with d=1 case:
min_f̂^linmax_f^* ∈_p,q^β[ f̂^lin - f^*_L^2(λ)^2 ] ≥ C n^-2 β' / (2β' + d ),
with some C > 0 and β' = β + 1/2 - 1/p.
In the case p < 2, the linear estimator is thus sub-optimal: its rate is slower than the minimax optimal rate Õ(n^-2 β / (2β + d )).
Several studies <cit.> show similar statements.
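As a concrete instance of this gap (our own illustration), take d=1, p=1, and β=2.
Then β' = β + 1/2 - 1/p = 3/2, so the lower bound for linear estimators is of order
n^-2β'/(2β'+d) = n^-3/4,
whereas the minimax optimal rate is n^-2β/(2β+d) = n^-4/5.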
Therefore, it is important to estimate functions in the Besov space with deep neural networks, since it overcomes the limitations of linear estimators.
§.§.§ Error Analysis
We give a convergence rate of the adversarial estimator with deep neural networks and the preprocessing in (<ref>).
Note that we consider the adversarial risk (<ref>) based on the squared loss function.
We first give the following assumption.
There exists β > 0 such that f^* ∈_p,q^β' holds for every β' ∈ (0,β].
To estimate functions in the Besov space, we have to restrict a set of neural network functions.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ non-zero parameters and each value is included in [-B̅, B̅] with B̅≥ 1, then consider the empirical preprocessed adversarial risk (<ref>) on (L,W,S,B) as
f̂∈_f ∈(L,W,S,B) R_n(f).
Then, we give the convergence rate of the estimator, which corresponds to the minimax optimal rate Õ(n^-2 β / (2β + d )) <cit.>.
Note that this rate is valid regardless of the values of p and q.
Fix p,q ∈ (0,∞].
Consider the regression model (<ref>) and the adversarial estimator f̂ in (<ref>) with the function class (L,W,S,B) by deep neural networks.
Suppose that Assumptions <ref> and <ref> hold with β > d/p.
Further, suppose that ζ_n^2 = O(n^-2β/(2β + d)log^β^* n) for some β^* > 0.
We set L, W, S, and B as L ≥ C_d,p,β,Blog n, S ≍ W ≍ n^d/(2β + d), and B = O(n^a) with some a > 0.
Then, we obtain the following as n →∞:
[f̂ - f^*_L^∞^2 ] = O( n^-2β / (2β + d)log^3 ∨β^* n ).
The result shows that our estimator with deep neural networks inherits the advantages of both deep and non-deep estimators.
Rigorously, first, it achieves the minimax optimal rate up to log factors.
This optimality is not achieved by the linear estimator and is one of the advantages of using deep neural networks.
Next, the errors are convergent in the sup-norm sense.
This has not been shown for deep neural network estimators based on least squares, and it is achieved here by adversarial training with preprocessing.
Note that the requirement on the preprocessing is satisfied by, for example, the wavelet estimator with β^* = 2β / (2β + d) <cit.>.
The proof of this proposition is a slight modification of the proof of Proposition <ref> in Appendix.
The main update is an analysis of the approximation error by deep neural networks to a function in the Besov space.
Here, we apply the seminal result by <cit.> on the approximation error in the sup-norm.
§ SIMULATIONS
In this section, we conduct simulation experiments to justify the theoretical results.
Specifically, we generate data from a function and then numerically compute the L^∞-risk of the proposed estimator and other standard methods.
We generate n samples from the regression model (<ref>) with the sample size n ∈{400,800,1200,1600} and the noise variance σ^2 ∈{0.0001,0.01,1.0}.
We consider the following three cases as values of f^* on [0,1]^d.
In Case 1, we set d=1 and f^*(x) = 0.3 sin(4 π x) - x + 0.5.
In Case 2, we set d=2 and f^*(x_1,x_2) = sin(4 π x_1) + cos(2 π x_2).
In Case 3, we set d=7 and f^*(x_1,x_2,...,x_7) = 2/x_1 + 0.01 + 3 log (x_2^7 x_3 + 0.1) x_4 + 0.1 x_5^4 x_6^2 x_7.
For estimation, we use a three-layer fully-connected neural network with the ReLU activation function.
The width of each layer is 40.
For training, we use three methods: (i) adversarial training without preprocessing, (ii) adversarial training with preprocessing (our proposal), and (iii) ordinary least squares.
In the adversarial training case (i) and (ii), the value of h is set to 2^-3.
For the adversarial training, we employ the projected descent algorithm <cit.>.
For the preprocessing, we employ the k-nearest neighbor with setting k=3.
To measure the L^∞-risk, we generate 10,000 uniform random points on the support [0,1]^d and use the maximum absolute error over these points to approximate the risk.
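The following PyTorch-style sketch (our own illustration, not the authors' experimental code) continues the sample-splitting sketch above for Case 1 with d=1. It mimics the procedure described in this section: the inner maximization over Δ_h^p(X_i) is taken with p=∞ and h=2^-3 for concreteness and approximated by a few projected gradient steps, the outer optimizer (Adam) and the number of epochs are our own arbitrary choices, and, as a simplification, the preprocessed target is kept fixed at Ŷ(X_i) rather than re-evaluated at the perturbed point.

# Sketch: adversarial training on preprocessed targets and Monte Carlo
# approximation of the L-infinity risk (continues the previous sketch).
import math
import torch
import torch.nn as nn

h = 2.0 ** -3
X_tr = torch.tensor(X[train_idx], dtype=torch.float32)
Y_tr = torch.tensor(knn.predict(X[train_idx]), dtype=torch.float32).unsqueeze(1)

# three-layer fully-connected ReLU network of width 40, as in the experiments
net = nn.Sequential(nn.Linear(d, 40), nn.ReLU(),
                    nn.Linear(40, 40), nn.ReLU(),
                    nn.Linear(40, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def attack(x, y, steps=10):
    # approximate the maximizer of the squared loss over the L-infinity ball
    # of radius h around x by projected gradient ascent on the input
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = ((y - net(x_adv)) ** 2).sum()
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + (h / 4) * grad.sign()).detach()
        x_adv = torch.max(torch.min(x_adv, x + h), x - h)   # project onto the ball
        x_adv = x_adv.clamp(0.0, 1.0).requires_grad_(True)  # stay inside [0,1]^d
    return x_adv.detach()

for epoch in range(200):
    x_adv = attack(X_tr, Y_tr)
    opt.zero_grad()
    loss = ((Y_tr - net(x_adv)) ** 2).mean()
    loss.backward()
    opt.step()

# approximate the L-infinity risk with 10,000 uniform points on [0,1]^d
with torch.no_grad():
    X_test = torch.rand(10_000, d)
    f_star = 0.3 * torch.sin(4 * math.pi * X_test[:, 0]) - X_test[:, 0] + 0.5
    sup_err = (net(X_test).squeeze(1) - f_star).abs().max().item()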
Figure <ref> shows the measured L^∞-risk against the sample size n.
We have mainly three findings:
(i) In almost all cases, our proposed estimator from adversarial training with preprocessing monotonically reduces the L^∞-risk as n grows.
(ii) The adversarial estimators without preprocessing may or may not be as good as those with preprocessing.
This implies that the magnitude of the bias from adversarial training depends on the shape of the true function f^*.
(iii) In all cases, the L^∞-risk of the least-squares estimator either decreases at a slower rate or does not decrease at all.
This supports the possibility that training a deep neural network with least-squares may have difficulty in reducing the L^∞-risk.
§ CONCLUSION AND DISCUSSION
We consider the nonparametric function estimator by deep neural networks that converge in the sense of the sup-norm, i.e., L^∞-norm.
Since deep neural networks do not have a tractable structure such as a linear sum of basis functions as the conventional non-deep estimators, they are not guaranteed to converge in the sup-norm sense.
In this study, we tackle this problem by considering the estimator based on adversarial training.
For the bias due to the adversarial training, we solve this problem by introducing the preprocessing for the data.
As a result, our proposed corrected adversarial estimator converges to the smooth true function at the minimax optimal rate in the sup-norm sense.
Our approach also extends to general losses and to functions with heterogeneous smoothness.
The experiments support our theoretical results.
Future research directions include sup-norm convergence for estimating non-smooth functions.
Although we expect that there are significant obstacles to the sup-norm convergence of estimators for the non-smooth functions, it is interesting to argue how far we can relax the conditions to estimate such functions.
Another direction is the application of uniform confidence bands for functions.
Our sup-norm convergence is useful for studying the uncertainty of neural network estimators and for constructing uniform confidence bands.
These directions may be a step toward statistical inference with deep neural networks.
§ PROOF FOR MAIN RESULT IN SECTION <REF>
§.§ Overview
We first develop a general theorem with arbitrary preprocessing, then apply the result and prove the results in Section <ref>.
For a preprocessed output Ŷ, we define its residual as
Ξ(x) := Ŷ(x) - f^*(x), x ∈ [0,1]^d.
This notion expresses an error in estimating the true function f^* by the preprocessing Ŷ.
Consider the regression model (<ref>) and the corrected adversarial estimator f̂ as (<ref>) with the function class (L,W) by deep neural networks.
Suppose that Assumption <ref> and <ref> hold.
Then, we obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d( W^2 L^2 log(WL) log n/n + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ] ).
We apply Lemma <ref> to bound the sup-norm as
f̂ - f^*_L^∞^2 ≤ 2(C_P_X,p,d h^d)^-1f̂ - f^*_P_X, Δ^2
Note that any f ∈(L,W) is continuous, since it has a form of deep neural network with the ReLU activation with continuity.
We then take an expectation of the bounds and apply Lemma <ref> and Proposition <ref> to obtain
[f̂ - f^*_P_X, Δ^2 ]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2
≤( 16[f̂ - f^*_L^∞^2 ]^1/2 + 40 δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2 + 4Φ_L,W^2+ 8 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2 ],
for δ∈ (0,1].
Note that both f ∈(L,W) and f^* are bounded, the expectations are guaranteed to exist.
We combine this fact with the above inequality to (<ref>), then obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d h^-d( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ C_P_X,p,dh^-d( B^2 log N_L,W(δ) + B^2/n + δ B + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + [Ξ_L^∞^2 ] ),
by setting δ≤ B ∨Φ_L,W, which will be verified later.
We arrange the terms in the above inequality.
For a,b ≥ 0 and z ∈, z^2 ≤ az + b implies z^2 ≤ 3a^2 + 2b (indeed, z ≤ (a+√(a^2+4b))/2, hence z^2 ≤ a^2 + 2b ≤ 3a^2 + 2b).
We apply this fact with z = [f̂ - f^*_L^∞^2 ]^1/2 and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B h^-d{log N_L,W(δ)/n + δ + Φ_L,W^2+ [Ξ_L^∞] Φ_L,W + h^-d[Ξ_L^∞^2 ]
+ ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2δ}.
Further, we set δ = 1/n then Lemma <ref> shows
log N_L,W(1/n) = logsup_Q_n N(1/n, (L,W), ·_L^2(Q_n)) ≤ C W^2 L^2 log(WL) log (B n^2).
We substitute these results and obtain the statement.
Suppose P_X satisfies Assumption <ref> and f^* is continuous.
For any bounded and continuous f:[0,1]^d →, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d f - f^*_L^∞^2 .
We apply Lemma <ref> to achieve the statement.
To apply the lemma, we verify that the map x' ↦ (f(x') - f^*(x'))^2 is bounded and continuous by the compactness of the domain [0,1]^d and the assumptions.
Then, we have
f - f^*_P_X,Δ^2 ≥ C_P_X,p,d h^d sup_x' ∈ [0,1]^d (f(x') - f^*(x'))^2 = C_P_X,p,d h^d f - f^*_L^∞^2 .
The inequality follows Lemma <ref> by setting g(·) = (f(·) - f^*(·))^2.
All f ∈(L,W) are continuous.
Suppose that f^* is continuous and f^*_L^∞≤ B holds.
Then, for any δ > 0, we have
[f̂ - f^*_P_X,Δ^2]
≤ 4 [f̂ - f^*_n,Δ^2] + 800 B^2 log N_L,W(δ) + 4118B^2/n + 32 δ B + 8 δ^2.
Without loss of generality, we assume that N_L,W(δ) ≥ 3 and log N_L,W(δ) ≤ n.
Also, we define the nearest element of the covering set to f̂; that is, we define ĵ := argmin_j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Let X_i', i = 1,...,n, be i.i.d. samples from P_X.
Note that Ŷ depends on X_1,...,X_n.
We give a bound on the following difference as
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
= | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f̂(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f̂(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i') (f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2_=: g_ĵ(X_i,X_i')] |
+ 2 | [ 1/n∑_i=1^n sup_x' ∈Δ_h^p (X_i) (f̂(x') - f_ĵ(x') + f_ĵ(x') - f^*(x'))^2 - sup_x' ∈Δ_h^p (X_i) (f_ĵ(x') - f^*(x'))^2 ] |
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 [sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f^*_L^2(Q_n)^2 ]^1/2
+ 2 [ sup_Q_nf̂ - f_ĵ_L^2(Q_n)^2]
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 4 δ[ f_ĵ - f^*_L^∞^2 ]^1/2+ 2 δ^2
≤| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] | + 8 δ B + 2δ^2.
Here, the second-to-last inequality follows from Lemma <ref>, using the continuity of f^* and of f ∈(L,W).
The last inequality follows from the definition of ĵ and the boundedness of f ∈(L,W) and f^* by B.
We further study the first term of the bound (<ref>).
As preparation, we define
r_j = Bmax{[f_j - f^*_P_X,Δ^2 ]^1/2 , (n^-1log N_L,W(δ))^1/2},
for j=1,...,N, and it yields
r_ĵ ≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f_ĵ(x') - f^*(x'))^2 ]^1/2 + B (n^-1log N_L,W(δ))^1/2
≤ B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ.
Here, _X| X_1:n, Y_1:n[ · ] denotes a conditional expectation with given X_1,...,X_n and Y_1,...,Y_n.
By the law of iterated expectation, the first term of the bound is decomposed as
| [ 1/n∑_i=1^n g_ĵ(X_i,X_i') ] |
= 1/n| [ ∑_i=1^n g_ĵ(X_i,X_i') /r_ĵ_=: g̃_ĵ(X_i,X_i')r_ĵ] |
≤1/n| [ ∑_i=1^n g̃_ĵ(X_i,X_i')( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2 +B (n^-1log N_L,W(δ))^1/2 + Bδ)] |
≤1/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( B _X| X_1:n, Y_1:n[ sup_x' ∈Δ_h^p(X) (f̂(x') - f^*(x'))^2]^1/2)] |
+ B/n| [ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ( (n^-1log N_L,W(δ))^1/2 + δ)]^1/2|
≤B/n| [ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2 ]^1/2[f̂ - f^*_P_X,Δ^2 ]^1/2|
+ B/n[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]((n^-1log N_L,W(δ))^1/2 + δ)
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2+ B/n (6 log N_L,W(δ) + 11).
The first inequality follows from (<ref>) and the second-to-last inequality follows from the Cauchy-Schwarz inequality.
We also apply Lemma <ref> and 1 ≤log N_L,W(δ) ≤ n to achieve the last inequality.
We substitute the result (<ref>) into the bound (<ref>), then obtain the inequality:
|[f̂ - f^*_P_X,Δ^2] - [f̂ - f^*_n,Δ^2] |
≤B/n(36 n log N_L,W(δ) + 256 n)^1/2[ f̂ - f^*_P_X,Δ^2]^1/2 + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2.
We rearrange the term and obtain that
[f̂ - f^*_P_X,Δ^2]
≤ 2 ([f̂ - f^*_n,Δ^2] + B/n (6 log N_L,W(δ) + 11) + 8 δ B + 2δ^2 ) + 8B^2(36 n log N_L,W(δ) + 256 n)/n^2.
Then, we obtain the statement.
Suppose that N_L,W(δ) ≥ 3.
For the function g̃_j(X_i,X_i') defined in the proof of Lemma <ref>, we have
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')] ≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2,
and
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2] ≤ 36 n log N_L,W(δ) + 256 n.
We first note that for any j = 1,...,N_L,W(δ), we have [g̃_j(X_i,X_i')] = 0, |g̃_j(X_i,X_i')| ≤ 4B^2 /r_j ≤ 4 n^1/2/ (log N_L,W(δ))^1/2 =: M, and
(g̃_j(X_i,X_i')) = 2 r_j^-2( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )
≤ 2 r_j^-2[ ( sup_x' ∈Δ_h^p(X_1) (f_j(x') - f^*(x'))^2 )^2]
≤ 8 r_j^-2[f_j - f^*_P_X,Δ^2] B^2
≤ 8.
The second inequality follows Hölder's inequality.
Using the bounds above, we apply the Bernstein inequality as
( ∑_i=1^n g̃_j(X_i,X_i') ≥ t) ≤exp( - t^2/2t M/3 + 2n (g̃_j(X_1,X_1')))
≤exp( - t^2/8t n^1/2(log N_L,W(δ))^-1/2 /3 + 16n)
≤exp( - t^2/16t n^1/2(log N_L,W(δ))^-1/2 /3)
= exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2),
for t ≥ 6 (n log N_L,W(δ))^1/2.
The last inequality follows from 8t n^1/2(log N_L,W(δ))^-1/2 /3 ≥ 16n for t larger than the threshold 6 (n log N_L,W(δ))^1/2.
Using the result (<ref>) associated with t ≥ 6 (n log N_L,W(δ))^1/2, we bound the following expectation:
[ max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i')]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t)dt
≤ 6 (n log N_L,W(δ))^1/2 + 2N_L,W(δ) ∫_6 (n log N_L,W(δ))^1/2^∞exp( - 3t (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 6 (n log N_L,W(δ))^1/2 + 32 n^1/2/ 3(log N_L,W(δ))^1/2.
Then, the first statement is proved.
For the second statement, we similarly use the result (<ref>) with t ≥ 6 (n log N_L,W(δ))^1/2 to bound the following expectation:
[ ( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') )^2]
= ∫_0^∞( max_j =1,...,N_L,W(δ)∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞max_j =1,...,N_L,W(δ)( ∑_i=1^n g̃_j(X_i,X_i') ≥ t^1/2)dt
≤ 36 n log N_L,W(δ) + 2N_L,W(δ) ∫_6 n log N_L,W(δ)^∞exp( - 3t^1/2 (log N_L,W(δ))^1/2/16 n^1/2)dt
≤ 36 n log N_L,W(δ) + 256 n.
Then, the second statement is also proved.
Consider the setting in Theorem <ref>.
Then, for any δ∈ (0,1], we have
[f̂ - f^*_n,Δ^2] ≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2
+ Φ_L,W^2+ 2 [Ξ_L^∞] Φ_L,W + 2 [Ξ_L^∞^2].
By the definition of the minimization problem, L_n(f̂) ≤L_n(f) holds for any f ∈(L,W), hence we have the following basic inequality as
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (Ŷ(x') - f(x'))^2,
which can be rewritten as
1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f̂(x'))^2 ≤1/n∑_i=1^n max_x' ∈Δ_h^p(X_i) (f^*(x') + Ξ(x') - f(x'))^2.
We bound the both-hand side of (<ref>).
The left-hand side (LHS) of (<ref>) is lower bounded as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f̂(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f̂(x'))}
≥f^* - f̂_n,Δ^2 - Ξ_n,Δ^2 - 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))|,
by applying Lemma <ref>.
Similarly, we bound the right-hand side of (<ref>) as
= 1/n∑_i=1^n max_x' ∈Δ_h^p(X_i){ (f^*(x') - f(x'))^2 + Ξ(x')^2 + 2 Ξ(x') (f^*(x') - f(x'))}
≤f^* - f_n,Δ^2 + Ξ_n,Δ^2 +2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|.
Combining (<ref>) and (<ref>) with (<ref>), we obtain
f^* - f̂_n,Δ^2 ≤f^* - f_n,Δ^2 + 2 Ξ_n,Δ^2 + 2/n∑_i=1^nmax_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f̂(x'))| _=: T_1
+ 2/n∑_i=1^n max_x' ∈Δ_h^p(X_i) | Ξ(x') (f^*(x') - f(x'))|
≤Φ_L,W^2 + 2 Ξ_L^∞^2 + T_1 + 2 Ξ_L^∞Φ_L,W,
by the definition of Φ_L,W in (<ref>).
We will bound the expectations of these terms.
Note that the expectations of the terms are guaranteed to exist, by the boundedness of f^* and f̂,f ∈(L,W), and Ŷ.
We bound [T_1].
We define the nearest element of the covering set to f̂, that is, we define ĵ := argmin_j' = 1,...,Nsup_Q_nf_j' - f̂_L^2(Q_n).
Then, we bound [T_1] as
[T_1] = [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x') + f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| ] + [ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') ( f_ĵ(x') - f̂(x'))| ]
≤[ 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))| ·(f̂ - f^*_L^∞ + δ)/f_ĵ - f^*_L^∞]
+ 2 [ sup_Q_nΞ_L^2(Q_n)^2 ]^1/2[ sup_Q_nf_ĵ - f̂_L^2(Q_n)^2]^1/2
≤[ (f̂ - f^*_L^∞ + δ) 2/n∑_i=1^n max_x' ∈Δ_h(X_i) | Ξ(x') (f^*(x') - f_ĵ(x'))|/f_ĵ - f^*_L^∞_=: Z_ĵ] + 2 [Ξ_L^∞^2 ]^1/2δ.
Since we have
|Z_j| ≤2/n∑_i=1^n | max_x' ∈Δ_h(X_i){| Ξ(x') | | (f^*(x') - f_j(x'))| }/f_j - f^*_L^∞| ≤ 2Ξ_L^∞,
for any j = 1,...,N,
the Cauchy-Schwartz inequality yields
[ (f̂ - f^*_L^∞ + δ) Z_ĵ] ≤[ (f̂ - f^*_L^∞ + δ)^2 ]^1/2[ Z_ĵ^2 ]^1/2
≤ 2( [f̂ - f^*_L^∞^2 ]^1/2 + δ)[ max_j=1,...,N_L,W(δ) Z_j^2 ]^1/2
≤ 4( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2.
The last inequality follows the maximal inequality (Theorem 3.1.10 in <cit.>) for the bounded random process.
Using this result, we obtain
[T_1] ≤ 4 ( [f̂ - f^*_L^∞^2 ]^1/2 + δ) ( log N_L,W(δ) + [ Ξ_L^∞^2 ]/n)^1/2 + 2 [Ξ_L^∞^2 ]^1/2δ
≤( 4[f̂ - f^*_L^∞^2 ]^1/2 + 10δ) ( log N_L,W(δ)/n + [ Ξ_L^∞^2 ] )^1/2.
We substitute the bound (<ref>) into the expectation of (<ref>), then obtain the statement.
Fix ε > 0 arbitrary.
Also, we fix C_* = C_P_X,p,d,B as used in the statement of Proposition <ref>.
By the universal approximation theorem (e.g., Theorem 1 in <cit.>) combined with the continuity of f^*, there exists a tuple (L',W') such that
Φ_L',W'≤√(ε h^d/( 4C_*)).
Further, by Assumption <ref>, there exists n̅∈ such that
[Ξ_L^∞^2] ≤√(ε h^2d/(4 C_*)).
Then, for all n ≥n̅, Proposition <ref> yields that
[f̂ - f^*_L^∞^2 ] ≤ C_* h^-d(W'L')^2 log(W'L') log n/n + 3 ε/4.
Then, for any n ≥n̅∨ (4 C_* (W'L')^2 log(W'L') h^-dε^-1), we have [f̂ - f^*_L^∞^2 ] ≤ε/4 + 3ε/4 = ε, which shows the statement.
As preparation, Lemma <ref> gives the following bound
Φ_L,W≤ C_d,β (LW)^-2β/d.
With this bound on Φ_L,W, we apply Proposition <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( (WL)^2 log(WL) log n/n + (LW)^-4β/d+ [Ξ_L^∞] (LW)^-2β/d + h^-d[Ξ_L^∞^2] ).
Further, we have
(LW)^-4β/d+ [Ξ_L^∞] (LW)^-2β/d + h^-d[Ξ_L^∞^2] ≤{(LW)^-2β/d + h^-d/2[Ξ_L^∞^2]^1/2}^2,
by applying Jensen's inequality.
Arranging the terms, we obtain the statement.
We start with the inequality (<ref>) in the proof of Theorem <ref> and obtain
[f̂ - f^*_L^∞^2 ]
≤ C_P_X,p,d,B,d,β h^-d( n^-2β/(2β+d) (log^2 n + 1) + [Ξ_L^∞] n^-β/(2β+d) + h^-d[Ξ_L^∞^2] )
by the setting WL ≍ n^d/(4β + 2d).
§ PROOF FOR APPLICATIONS
§.§ Proof for General Loss Setting
We give proofs of the result in Section <ref>.
Consider the setting in Proposition <ref>.
Then, for n such that log N_L,W(1/n) ≥ 1, we have:
[R̃ (f̃) - R̃(f^*)] ≤C_ℓ, B ( log N_L,W(1/n) + V^2 )/n^1/2 + C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
This proof is similar to Lemma 3.1 in <cit.>.
A difference between <cit.> and our result is that, in our setting, the relevant property of the loss depends on f, so we have to modify the argument.
Hence, we write down the proof.
We develop the proof in the following four steps: (i) a basic decomposition, (ii) bounding a variance, (iii) bounding a bias, and (iv) combining every bound.
Step 1: Basic decomposition.
We define i.i.d. copies of the observations D := {(X_i,Y_i)_i=1^n} as D' := {(X_i',Y_i')_i=1^n}, and also define an excess loss as
g(x,Ŷ,f) = sup_x' ∈Δ_h^p(x)ℓ(f(x'), Ŷ(x')) - sup_x' ∈Δ_h^p(x)ℓ(f^*(x'), Ŷ(x'))
We further define empirical means of the excess loss as G_n(f) := n^-1∑_i=1^n g(X_i,Ŷ,f) with the observations D, and G_n'(f) := n^-1∑_i=1^n g(X_i',Ŷ,f) with the copies D'.
Since f̂ is independent to D', we can rewrite the expected risk as
[R̃(f̂) - R̃(f^*)] = [ _D'[G_n'(f̂) ]].
Since f̂ is the minimizer of the empirical risk and the loss is bounded, we obtain the following inequality of expectations:
[G_n(f̂)] ≤[G_n(f) ],
for any f∈(L,W).
We set f such that f - f^*_L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Using this fact, we decompose the excess risk as
[R̃(f̂) - R̃(f) ] = [ _D'[ G_n'(f̂)]] ≤[ - 2G_n(f̂) + _D'[ G_n'(f̂)]_=:] + 2[ G_n(f)_=: ].
The inequality follows (<ref>).
Step 2: Bound the variance [].
We bound an expectation of the term .
By the boundedness of both Ŷ and f̃ by Assumption <ref> and (<ref>), the expectation [] exists.
We prepare additional notations.
Fix δ∈ (0,1].
We consider a covering set {f_j}_j=1^N_L,W(δ)⊂(L,W), then we pick f_j from the set such that sup_Q_nf_j - f̃_L^2(Q_n)≤δ.
We define a term g̃(X_i,Ŷ,f̃) by the following reform of as
= 1/n∑_i=1^n {_D'[ G_n'(f̃)] - 2 g(X_i,Ŷ,f̃) } =: 1/n∑_i=1^ng̃(X_i,Ŷ,f̃),
which yields the following form
[] = [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)]
= [1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_:= _1] + [1/n∑_i=1^ng̃(X_i,Ŷ,f̃)- 1/n∑_i=1^ng̃(X_i,Ŷ,f_j)_=: _2] .
We will bound both [_1] and [_2], separately.
We bound the term [_2].
Since g in (<ref>) is Lipschitz continuous in f with its Lipschitz constant C_ℓ by Lemma <ref>, we easily see that g̃ is Lipschitz continuous in f with its Lipschitz constant 6C_ℓ.
Thus, we obtain that
[_2] ≤| [1/n∑_i=1^ng̃ (X_i,Ŷ,f̃)] - [1/n∑_i=1^ng̃ (X_i, Ŷ,f_j)] | ≤ 6 C_ℓδ.
Next, we bound the term [_1].
Here, we need to consider a uniformly bounded function y:[0,1]^d → [-B,B].
For each f_j in the covering set, t > 0, and the bounded function y, we use the Bernstein inequality to derive a stochastic upper bound.
As preparation, we consider a threshold B_n ≥ 1 depending on n and define a clipped preprocessing Ŷ_B_n(·) := max{min{Ŷ(·), B_n}, -B_n}.
We firstly approximate [_1] by the Lipschitz continuity of ℓ as
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ - Ŷ_B_n_L^∞].
Since |Ŷ(x) - Ŷ_B_n(x)| = |Ŷ(x)| {|Ŷ(x)| ≥ B_n} holds, we can bound the expectation in the second term of the right-hand side as
[Ŷ - Ŷ_B_n_L^∞] = [ sup_x ∈ [0,1]^d |Ŷ(x)| {|Ŷ(x)| ≥ B_n}|]
≤[ sup_x ∈ [0,1]^d |Ŷ(x)| sup_x ∈ [0,1]^d{|Ŷ(x)| ≥ B_n}|]
≤[Ŷ_L^∞{Ŷ_L^∞≥ B_n}]
≤[Ŷ_L^∞^2 / B_n].
The last inequality follows {x ≥ 1}≤ x for any x ≥ 0.
The existence of the second moment is guaranteed by Assumption <ref>.
We put this result to (<ref>) and obtain
[_1] ≤[1/n∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)] + 6 C_ℓ[Ŷ_L^∞^2 / B_n].
Then, we bound the first term [n^-1∑_i=1^ng̃(X_i,Ŷ_B_n,f_j)].
Since we have |g(x,Ŷ_B_n,f)| ≤ C_ℓ ( B_n ∨ B) for any x ∈ [0,1]^d and f: f_L^∞≤ B, we obtain the following inequality with fixed Ŷ_B_n:
( 1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t)
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 2/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t )
=(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'[ g(X_i',Ŷ_B_n,f_j)] )
≤(_D'[ g(X_i',Ŷ_B_n,f_j)] - 1/n∑_ i=1^n g(X_i,Ŷ_B_n,f_j) > t/2 + 1/2_D'(g(X_i, Ŷ_B_n, f_j))/4 C_ℓ B_n)
≤exp( - n(t')^2/2 _D'(g(X_i, Ŷ_B_n, f_j)) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/2 t' C_ℓ ( B_n ∨ B) + C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - n(t')^2/16 t' C_ℓ ( B_n ∨ B) + 16 C_ℓ ( B_n ∨ B) t'/3 )
≤exp( - 3 n t'/64 C_ℓ ( B_n ∨ B))
≤exp( - 3 n t/128 C_ℓ ( B_n ∨ B)).
The first and third inequalities follow from _D'(g(X_i, Ŷ_B_n, f_j)) ≤ 4 C_ℓ B_n _D'[g(X_i, Ŷ_B_n, f_j)], and the second and last inequalities follow from the setting t' = t/2 + _D'(g(X_i, Ŷ_B_n, f_j))/(8 C_ℓ (B ∨ B_n)).
Using this inequality for a uniform bound in terms of the covering set {f_j}_j=1^N_L,W(δ) and the independent random functions Ŷ and Ŷ_B_n, we obtain
( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) ≤ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ).
Then, by the maximal inequality (Corollary 2.2.8 in <cit.>), for any η > 0, we have
[max_j=1,...,N_L,W(δ)[1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j)]]
≤η + ∫_η^∞( max_j = 1,...,N_L,W(δ)1/n∑_i=1^ng̃ (X_i,Ŷ_B_n,f_j) > t ) dt
≤η + ∫_η^∞ N_L,W(δ) exp( - 3nt/128 C_ℓ ( B_n ∨ B) t ) dt
≤η + N_L,W(δ) (128 C_ℓ ( B_n ∨ B))/3nexp( - 3 n η/ 128 C_ℓ ( B_n ∨ B) ) .
We set B_n = n^1/2, hence we have (B ∨ B_n) ≤ C_B n^1/2.
Also, setting η = (128 C_B,ℓ n^1/2) log N_L,W(δ) / (3n) and putting this result into (<ref>), we obtain
[_1] ≤[max_j=1,...,N[1/n∑_i=1^ng̃ (X_i,Ŷ,f_j)]] ≤C_ℓ,B (log N_L,W(δ) + [Ŷ_L^∞^2 ])/n^1/2.
Combining the inequalities (<ref>) and (<ref>) into (<ref>) and set δ = 1/n, we obtain
[] ≤(2 C_ℓ^2 B_2 + C_ℓ B/3) (log N_L,W(1/n) + [Ŷ_L^∞^2 ])/n^1/2.
Step 3: Bound the bias [].
By the Lipschitz continuity of the loss ℓ by Assumption <ref>, we have
[] = [ 1/n∑_i=1^n sup_x' ∈Δ_h^p(X_i)ℓ( f̅(x'), Ŷ(x')) ]
≤[ sup_x ∈[0,1]^dℓ( f̅(x), Ŷ(x)) ]
≤[sup_x ∈[0,1]^d{ C_ℓ |f̅(x) - Ŷ(x)| + ℓ(Ŷ(x), Ŷ(x)) }]
≤ C_ℓ[f̅ - Ŷ_L^∞]
≤ C_ℓ (f̅ -f^*_L^∞ + [f^*- Ŷ_L^∞ ])
≤ C_ℓ (Φ_L,W + [Ξ_n_L^∞]).
The last inequality holds by setting f̅ such that f̅ - f^*_L^∞ = inf_f ∈(L,W)f - f^*_L^∞.
Step 4: Combining the bounds.
We combine the result in Step 3 and Step 4 into the decomposition (<ref>), then obtain the statement.
Consider the expected adversarial risk R̃(·) with general losses as (<ref>).
Then, for the estimator f̃ as (<ref>) and q ∈ [1,∞), we have
f^* - f̃_L^∞^q ≤ C_P_X,p,d,ℓ,q h^-d( R̃(f̃) - R̃(f^*) + Ξ_L^∞^q ∨Ξ_L^∞).
We develop a lower bound of R̃(f̃) - R̃(f^*) as
R̃(f̃) - R̃(f^*) = _X[sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f̃(x')) - sup_x' ∈Δ_h^p(X)ℓ(Ŷ(x'), f^*(x')) ]
≥ C_P_X,p,d h^d sup_x ∈ [0,1]^d |ℓ(Ŷ(x), f̃(x))| - C_ℓŶ - f^*_L^∞
≥ C_P_X,p,d,ℓ h^d Ŷ - f̃_L^∞^q - C_ℓΞ_L^∞
≥ C_P_X,p,d,ℓ,q h^d ( f^* - f̃_L^∞^q - Ξ_L^∞^q ) - C_ℓΞ_L^∞ .
Here, the first inequality follows Lemma <ref> and the Lipschitz continuity of ℓ by Assumption <ref>, and the last inequality follows (a+b)^q ≤ C_q (a^q + b^q) for q ∈ [1,∞) and a,b ≥ 0.
By Proposition <ref> and Lemma <ref>, we have
[f^* - f̃^2_L^∞] ≤ C_P_X, p,d,ℓ,q h^-2d/q( [(R̃(f̃) - R(f^*))^2/q] + [ Ξ_L^∞^2] )
≤ C_P_X,B, p,d,ℓ,q, h^-2d/q{(log N_L,W(1/n) /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }
≤ C_P_X,B, p,d,ℓ,q,V h^-2d/q{( W^2L^2 log(WL) log n /n^1/2)^2/q + Φ_L,W^2/q + ζ_n^2 }.
The last inequality follows Lemma <ref>.
We set WL ≍ n^d/(4β + 4d) and obtain the statement.
§.§ Proof of Adaptation to Besov Space
We give proof of the result in Section <ref>.
To show the statement, we slightly modify the proof of Proposition <ref>.
We start from the inequality (<ref>) with setting δ = 1/n.
Since we use (L,W,S,B) as a set of candidate functions instead of (L,W), we obtain the following updated inequality of (<ref>) as
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B h^-d{logÑ_L,W,S,B(1/n)/n + Φ̃_L,W,S,B^2 + ζ_n^2 },
which replaces N_L,W(1/n) by Ñ_L,W,S,B(1/n) := sup_Q_n N(1/n, (L,W,S,B), ·_L^2(Q_n)) and Φ_L,W by Φ̃_L,W,S,B := inf_f ∈(L,W,S,B)f - f^*_L^∞.
We study the terms Ñ_L,W,S,B(1/n) and Φ̃_L,W,S,B.
For the approximation error term Φ̃_L,W,S,B, we apply Lemma <ref> by setting r = ∞ and obtain
Φ̃_L,W,S,B≤ C_d,β N^-β/d,
for sufficiently large N such that L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N.
About the entropy term Ñ_L,W,S,B(1/n), we apply Lemma <ref> and obtain
logÑ_L,W,S,B(1/n) ≤log N(1/n, (L,W,S,B), ·_L^∞)
≤ LS log(n LB(1+S))
≤ C_d,β L^2 N log (n L^2 B N)
≤ C_d,p,β,B N log^2(N) log (nN log^2(N)),
by substituting the setup of L,S,W and B.
We substitute (<ref>) and (<ref>) into (<ref>) and obtain
[f̂ - f^*_L^∞^2 ] ≤ C_P_X,p,d,B,β h^-d{ N log^2(N) log (nN log^2(N))/n + N^-2β/d + ζ_n^2 }.
We set N ≍ n^d/(2β + d) and obtain the statement.
§ SUPPORTIVE RESULT
Consider a non-negative bounded continuous function g:[0,1]^d →_+.
Then, we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ P_X(Δ_h^p(x^*)),
with x^* ∈_x ∈ [0,1]^d g(x).
Further, if Assumption <ref> holds, then we have
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥g_L^∞ h^d C_P_X,p,d.
Let A := {x ∈ [0,1]^d | g(x) = max_x' ∈ [0,1]^d g(x')} be a set of argmax of g(x), which is non-empty because of the compactness of [0,1]^d and boundedness/continuity of g.
Also, we define a union Δ_A := ∪_x ∈ AΔ_h^p({x}).
By the non-negativity of g, we obtain
_X[sup_x' ∈Δ_h^p(X) g(x') ] ≥_X[sup_x' ∈Δ_h^p(X) g(x') {X ∈Δ_A }]
= _X[sup_x ∈ [0,1]^d g(x) {X ∈Δ_A }]
= g_L^∞ P_X(Δ_A).
Hence, we obtain the first statement.
We consider that Assumption <ref> holds.
We develop a lower bound of P_X(Δ_A) as
P_X(Δ_A) ≥inf_x ∈ A P_X( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ A P_X( Δ_h^p({x})) ≥ C_P_Xinf_x ∈ [0,1]^dλ( Δ_h^p({x})),
where C_P_X is a lower bound of a density function of P_X defined in Assumption <ref>, and λ(·) is the Lebesgue measure.
Since the Lebesgue of the L^p-ball is known, we obtain that
inf_x ∈ [0,1]^dλ( Δ_h^p({x})) = Γ(1/p + 1)^d/Γ(d/p + 1)h^d,
where Γ (·) is the Gamma function.
Then, we obtain the second statement.
We develop the following covering number bound.
The following lemma immediately holds by <cit.> and <cit.>.
Consider the set of deep neural networks as (<ref>) with the depth L, the width W, and the upper bound B.
For any δ > 0 and sufficiently large n, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ C W^2 L^2 log(WL) log (B n /δ).
Let D be the VC-dimension of , and S(≤ W^2 L) be a number of parameters in .
By Theorem 3 in <cit.>, we bound the VC-dimension as D = O(S L log(S)) ≤ O(W^2 L^2 log (WL)).
Using this inequality and Theorem 12.2 in <cit.>, we have
log N(δ, (L,W), ·_L^2(P_n)) ≤ D log( en B/δ D) ≤ C W^2 L^2 log(WL) log (B n /δ)
for n = Ω(W^2 L^2 log (WL)).
Consider a non-empty compact set T ⊂^d with some d and continuous bounded functions f,f':T →.
Then, we have
|sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t)^2 | ≤ 2f_L^∞f'_L^∞ + f'_L^∞^2.
We define the optimal values t^* ∈ T and t^†∈ T such that sup_t ∈ T(f(t) + f'(t))^2 = (f(t^*) + f'(t^*))^2 and sup_t ∈ Tf(t) ^2 = f(t^†)^2.
Note that such t^* ∈ T and t^†∈ T exist by the compactness of T and the continuity of f and f'.
We first derive the following inequality
sup_t ∈ T(f(t) + f'(t))^2 - sup_t ∈ Tf(t) ^2 ≤ f(t^*)^2 + 2 f(t^*)f'(t^*) + f'(t^*)^2 - f(t^*)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Second, we develop a bound for the opposite side as
sup_t ∈ Tf(t)^2 - sup_t ∈ T(f(t) + f'(t))^2 ≤ f(t^†)^2 - (f(t^†) + f'(t^†))^2
≤ 2f(t^†) f'(t^†) - f'(t^†)^2
≤ 2 f_L^∞f'_L^∞ + f'_L^∞^2.
Then, we obtain the statement.
For any continuous and bounded functions f,g on a compact set I, we have
max_t ∈ I (f(t) + g(t)) ≥max_t ∈ I f(t) - max_t ∈ I |g(t)|.
Let t' ∈ I be a point such that max_t ∈ I (f(t) + g(t)) = f(t') + g(t'), which is guaranteed to exist by the compactness of I and the boundedness/continuity of f,g.
The statement simply follows
max_t (f(t) + g(t)) = f(t') + g(t') ≥ f(t') - |g(t')| ≥max_t(f(t)) - max_t |g(t')|.
Consider functions f,f', y: [0,1]^d → [-B,B], and a loss function ℓ satisfying Assumption <ref>.
Also, consider a function g as in (<ref>).
For any x ∈ [0,1]^d, we have
g(x,y,f) - g(x,y,f') ≤ C_ℓ |f(x̅) - f'(x̅)|,
for some x̅∈ [0,1]^d.
We define x^* ∈Δ_h^p(x) such that ℓ(y(x^*), f(x^*)) = max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')).
Its existence follows the continuity of f, f',y, and ℓ.
For f,f' ∈ L^2([0,1]^d), we have
g(x,y,f) - g(x,y,f') = max_x' ∈Δ_h^p(x)ℓ(y(x'),f(x')) -max_x' ∈Δ_h^p(x)ℓ(y(x'),f'(x'))
≤ℓ(y(x^*),f(x^*)) - ℓ(y(x^*),f'(x^*))
≤ C_ℓ |f(x^*) - f'(x^*)|.
The first inequality follows max_x' ∈Δ_h^p(x)ℓ(y(x'), f(x')) = ℓ(y(x^*), f(x^*)), and the second inequality follows the Lipschitz continuity of ℓ in the second argument from Assumption <ref>.
Thus, we obtain the statement.
Fix N,M ∈ arbitrarily.
If (L,W) is a set of functions with W= C_d (N+2) log_2 (8N) and L= C_s (M+2) log_2 (4M) + 2d, we have
inf_f ∈sup_f^* ∈ C^s_1([0,1]^d)f - f^*_L^∞≤ C_d,s N^-2s/d M^-2s/d.
Fix p,q,r∈ (0, ∞] and β∈ (0,∞).
Suppose that β > d max{1/p-1/r, 0} holds.
Let (L,W,S,B) be a set of neural network functions (<ref>) such that there are S ∈ non-zero parameters and each value is included in [-B̅, B̅] with B≥ 1.
Let N be a sufficiently large number and set L ≥ C_d,p,β,Blog (N), W = C_d,βN, S=(L-1)C_d,βN + N, and B̅ is a polynomially increasing in N.
Then, we have
sup_f^0 ∈_p,q^βinf_f ∈(L,W,S,B)f^0 - f_L^r(λ)≤ C N^-β/d,
with some constant C > 0 independent of N.
For ε∈ (0,1], we obtain
log N(ε, F(L,W,S,B)) ≤ LS log(ε^-1 LB(1+S)).
§ PROOF OF INCONSISTENCY
We first specify the coordinates of the setting.
We consider two points x = (0.3, 0.5, 0.5, ...,0.5), x' = (0.7,0.5, 0.5, ...,0.5)∈ [0,1]^d, and a marginal measure as a mixture of Dirac measures on the points; P_X = 0.5 δ_{x} + 0.5 δ_{x'}.
We also specify the true function with an input x = (x_1,...,x_d) ∈ [0,1]^d as f^*(x) = - {x_1 < 0.4} + 10 (x_1 - 0.5){0.4 ≤ x_1 ≤ 0.6} + {x_1 > 0.6}, and the noise variable ξ_i as a uniform random variable on [-0.1,0.1].
For the adversarial training, we set p=∞ and h = 0.5.
We study an empirical risk minimizer in this setting.
Since the inputs X_1,...,X_n are either of x or x', we set n_1 := |{i: X_i = x}| and n_2 := |{i: X_i = x'}| such that n = n_1 + n_2.
With the specified coordinates above, we rewrite an empirical risk of f:[0,1]^d → with the adversarial training as
1/n∑_i=1^n max_x ∈Δ_h^p(X_i) (Y_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈Δ_h^p(X_i) (f^*(X_i) + ξ_i - f(x))^2
=1/n∑_i: X_i = xmax_x ∈ [0,1]^d: x_1 ∈ [0,0.8] (ξ_i - f(x))^2 + 1/n∑_i: X_i = x'max_x ∈ [0,1]^d: x_1 ∈ [0.2,1] (1 + ξ_i - f(x))^2,
which follows f^*(x) = 0 and f^*(x') = 1.
To minimize this empirical risk in terms of f, we restrict a class of f.
Specifically, we set f with an input x = (x_1,...,x_d) as having a form f(x) = c_1 {x_1 ≤ 0.2} + c_2 {0.2 < x_1 < 0.8} + c_3 {0.8 ≤ x_1} with some constants c_1,c_2,c_3 ∈.
This form of f can minimize the risk, since the risk depends only on the value of f on each region.
Then, we rewrite the risk as
(<ref>) =1/n∑_i: X_i = xmax{ (ξ_i - c_1)^2 , (ξ_i - c_2)^2} + 1/n∑_i: X_i = x'max{ (1 + ξ_i - c_2)^2 , (1 + ξ_i - c_3)^2 }.
Here, we consider an event |n_1/2 - n/2| ≤ 1, which appears with probability 1-2 exp(-2/n) ≥ 0.5 with n ≥ 3, by Hoeffding's inequality.
In this case, a simple calculation yields that c_2 ∈ [-0.2, 0.2] minimizes the (<ref>) since it prevents quadratic growth of the risk in terms of c_2, which gives (ξ_i - c_1)^2 < (ξ_i - c_2)^2 and (1 + ξ_i - c_2)^2 > (1 + ξ_i - c_3)^2.
Then, we rewrite the risk (<ref>) as
(<ref>) = 1/n∑_i: X_i = x (ξ_i - c_2)^2 + 1/n∑_i: X_i = x'(1 + ξ_i - c_2)^2,
and minimizing it over c_2 yields the following optimal choice
c_2^* = n_2 - n_1/n + 1/n∑_i=1^n ξ_i.
Then, we have that the original risk (<ref>) is minimized by the following function
f̌(x) := c_1^* {x_1 ≤ 0.2} + c_2^* {0.2 < x_1 < 0.8} + c_3^* {0.8 ≤ x_1},
where c_1^* = n_1^-1∑_i: X_i = xξ_i and c_3^* = n_2^-1∑_i: X_i = x'ξ_i.
Finally, we evaluate the L^∞-risk of f̌.
Simply, we have
f̌ - f^*_L^∞^2 ≥f̌ - f^*_L^2(P_X)^2
= _X ∼ P_X[ (f̌(X) - f^*(X) )^2 ]
= 1/2{ (f̌(x) - f^*(x) )^2 + (f̌(x') - f^*(x') )^2}
= 1/2{ (c_2^* +1 )^2 + (c_2^* - 1)^2}
= 1 + (c_2^*)^2
≥ 1.
Hence, we show the statement of Proposition <ref>.
alpha
|
http://arxiv.org/abs/2307.05789v1 | 20230711203333 | Implicit regularisation in stochastic gradient descent: from single-objective to two-player games | [
"Mihaela Rosca",
"Marc Peter Deisenroth"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
calc,patterns,decorations.pathreplacing,shapes.multipart
|
http://arxiv.org/abs/2307.04899v1 | 20230710205644 | Application of the Duperier method to the analysis of the cosmic muon flux dependence on the meteorological parameters, based on the DANSS detector data | [
"I. Alekseev",
"V. Belov",
"M. Danilov",
"D. Filosofov",
"M. Fomina",
"S. Kazartsev",
"A. Kobyakin",
"A. Kuznetsov",
"I. Machikhiliyan",
"D. Medvedev",
"V. Nesterov",
"D. Ponomarev",
"I. Rozova",
"N. Rumyantseva",
"V. Rusinov",
"E. Samigullin",
"Ye. Shevchik",
"M. Shirchenko",
"Yu. Shitov",
"N. Skrobova",
"D. Svirida",
"E. Tarkovsky",
"A. Yakovleva",
"E. Yakushev",
"I. Zhitnikov",
"D. Zinatulina"
] | physics.ins-det | [
"physics.ins-det",
"hep-ex"
] |
The angular dependence of spin-orbit torque in monolayer Fe3GeTe2
Paul M. Haney
August 12, 2023
=================================================================
1. . ݣ 20- , , . - , , . , , ԣ . , — . , , - , , ģ , . , , , , , . , , . , , , ̣, DANSS — , ∼50 ... . ޣ , , ԣ , .
, , <cit.>, . <cit.>, ݣ DANSS, <cit.>. , . :
(I-⟨ I⟩)/⟨ I⟩=α(T_eff-⟨ T_eff⟩)/⟨ T_eff⟩+β(P-⟨ P⟩),
I — ޣ , T_eff — , P — , α β — . β , , , . , α , , <cit.> 20 40 ..., , , ̣.
, β, ,́ <cit.> , . , , ޣ . , , , , , , , , . , , 100 , :
(I-⟨ I⟩)/⟨ I⟩ = β(P-⟨ P⟩)+μ'(H_100-⟨ H_100⟩)+μ”(T_100-⟨ T_100⟩) ,
H_100 T_100 — 100 , β, μ' μ” — .
2. . DANSS <cit.> (57.91 .., 35.06 ..), -1000 ߣ , 10.9 – 12.9 . , , ∼50 .
ߣ 1 ^3 2500 , . , ϣ: (5 ), (8 ), (5 ) (8 ). , - . ޣ, .
100 × 4 × 1 , 3 , . , ɣ. , , 10 ϣ, 5 . . . , . -, ; , .
, DANSS , - , , .
3. . , 05.10.2016 31.08.2020, 4 . , 40 ߣ. ң ߣ . - , ߣ — , — , . ң , : cosθ > 0.9, cosθ < 0.36 , .
ERA5 <cit.>. , , , .
0.25×0.25, . 37
1 1000 , DANSS (57.9 .., 35.1 ..). ERA5 100 <cit.>, , 60 , 0.81 . ERA5 <cit.>, ̣ 0.59 . , ERA5 100 , . ޣ 100 <cit.>:
Δ H = 18400(1+aT)lg(p_1/p_2),
Δ H — , a = 0.00366 K^-1 — ߣ , T — . p_1 p_2. , H_100 <ref> ERA5, 100 , H_100. ޣ . , 100 ∼16 .
β, μ' μ” . ޣ , . , . , , , . , , , . , β β. "" . <ref>. β, μ' μ” <ref>.
4. . , ݣ . , ţ, E_thr — , , . , , , . ң ң <cit.> <ref>.
, DANSS, . Global Muon Detector Network <cit.>, . -, , , . -, , , . -, أ , E_thr . , ݣ , , . 40 ... Budapest<cit.>, 42 ... Hobart<cit.> 60 ... London<cit.>, . Particle Data Group<cit.>, 24 10 40 . — 10 40, 42 60 , , , . , , , , , .
μ' μ” , , <ref>. , , . , , , . .
β <cit.> <ref>. - , β E_thr , , , <cit.>, DANSS . β <cit.> . , <ref> <ref>, , E_thr Budapest, Hobart London <cit.>. <cit.>, ( ), , <ref>. Σ 6, 7 9, <cit.> E_thr, .
5. . DANSS , , . β, μ' μ” ң , ң . β ޣ , ∼ 30 %. μ' μ”, , . β , α. <cit.> <cit.> Budapest, Hobart London, β .
. . , .
"" № .4.44.90.13.1119 № .4.44.9.16.1006
(2013–2016 .). , № 17-12-01145 (2017–2021 .).
№ 23-12-00085.
ieeetr
|
http://arxiv.org/abs/2307.04857v1 | 20230710190112 | The geproci property in positive characteristic | [
"Jake Kettinger"
] | math.AG | [
"math.AG",
"math.CO",
"14"
] |
Bragg-Primakoff Axion Photoconversion in Crystal Detectors
Adrian Thompson
^1Central European University, Quellenstraße 51, 1100 Vienna, Austria
^2Computational Social Science - Research Center for Educational and Network Studies, Centre for Social Sciences, Tóth Kálmánutca 4,Budapest, 1097, Hungary
^3Department of Social Research Methodology, Faculty of Social Sciences, Eötvös Loránd University, Pázmány Péter s étány 1/A, Budapest, 1117, Hungary.
^4 National Laboratory for Health Security, Hungary.
^5 Rényi Institute of Mathematics, Reáltanodautca 13-15, Budapest, 1053, Hungary.
^*Corresponding author: [email protected]
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The geproci property is a recent development in the world of geometry. We call a set of points Z_k^3 an (a,b)-geproci set (for GEneral PROjection is a Complete Intersection) if its projection from a general point P to a plane is a complete intersection of curves of degrees a≤ b. Nondegenerate examples known as grids have been known since 2011. Nondegenerate nongrids were found starting in 2018, working in characteristic 0. Almost all of these new examples are of a special kind called half grids.
Before the work in this paper (based partly on the author's thesis), only a few examples of geproci nontrivial non-grid non-half grids were known and there was no known way to generate more. Here, we use geometry in the positive characteristic setting to give new methods of producing geproci half grids and non-half grids.
§ INTRODUCTION
While complete intersections have been a topic of much study for many years in algebraic geometry, the study of the geproci property has emerged relatively recently. Much of the groundwork in this study has been laid in the works <cit.>, <cit.>, and <cit.>, which will be cited often in this paper. We will begin with the definition of geproci (from: general projection complete intersection).
Let K be an algebraically closed field. A finite set Z in ^n_K is geproci (pronounced je-PRO-chee) if the projection Z of Z from a general point P∈^n_K to a hyperplane H is a complete intersection in H≅^n-1_K.
An easy but degenerate example of a geproci set in ^n is a complete intersection in a hyperplane H≅^n-1 of ^n. In this paper, we are specifically interested in geproci sets in ^3_K. (No nondegenerate examples are known in ^n, n>3.) In the three-dimensional setting, we will specify that a configuration Z^3_K is (a,b)-geproci (where a≤ b) if the image of Z under a general projection into ^2_K is the complete intersection of a degree a curve and a degree b curve. We will use the notation {a,b}-geproci in instances when we do not want to require a≤ b.
There are two easy-to-understand types of geproci sets. One type as noted above is any complete intersection in a plane: it will project from a general point isomorphically to another complete intersection in any other plane, and so is geproci. The other type is a grid, which we will now define.
Given a curve A^3 comprising a finite set of a pairwise-disjoint lines a curve B^3 comprising a finite set of b pairwise-disjoint lines, such that every line in A intersects every line in B transversely, the ab points of intersection form an (a,b)-grid.
The set of points Z of an (a,b)-grid is (a,b)-geproci. The image Z of Z under a general projection is equal to the intersection of the images A and B of A and B, which are unions of a lines in the plane and b lines in the plane respectively, and thus A and B are curves of degrees a and b, respectively, meeting at ab points. Thus Z is a complete intersection.
These two types (sets of coplanar points and grids) are well understood, so are called trivial. What is not yet well understood is how nontrivial geproci sets can arise. The existing work on the geproci property has been done over fields of characteristic 0. What is new with this paper are the results in characteristic p>0, starting in the second section. For the rest of this section we will only discuss work which has been done in characteristic 0.
The first nontrivial examples of geproci sets came from the root systems D_4 and F_4 <cit.> and so themselves are called D_4 and F_4. These are configurations in ^3 containing 12 points and 24 points, respectively <cit.>. It was also shown that D_4 is the smallest nontrivial geproci set <cit.>, and the only nontrivial (3,b)-geproci set <cit.>. (See Figure <ref> for the 12 points of D_4 and its 16 sets of 3 collinear points.)
The configurations D_4 and F_4 are examples of half grids.
A set Z^3 is a {μ,λ}-half grid if Z is a nontrivial {μ,λ}-geproci set contained in a set of μ mutually-skew lines, with each line containing λ points of Z.
For example, the D_4 configuration is a 4, 3-geproci half grid and can be covered by four mutually-skew lines, with each line containing three points, as Figure <ref> shows. The general projection of an {a,b} half grid is a complete intersection of a union of a lines and a degree b curve that is not a union of lines. It is known that there is an (a,b)-half grid for each 4≤ a≤ b <cit.>. No other infinite families of nontrivial geproci sets were known before the results in this paper, and only finitely many (indeed, three <cit.>) non-half grid nontrivial geproci sets were known before the results in the next section.
There seem to be strong links between geproci sets Z and sets Z admitting unexpected cones <cit.>.
A finite set Z^n_k admits an unexpected cone of degree d when
[I(Z)∩ I(P)^d]_d>max(0,[I(Z)]_d-\binom{d+n-1}{n})
for a general point P∈^n_K, where I(Z) is the homogeneous ideal of Z in K[^n] and [I(Z)]_d is its homogeneous component of degree d <cit.>.
This is said to be unexpected because one expects by a naive dimension count that the vector subspace of homogeneous polynomials in [I(Z)]_d that are singular with multiplicity d at a general point P would have codimension \binom{n+d-1}{n} (since being singular at P to order d imposes \binom{n+d-1}{n} conditions on [I(Z)]_d). Therefore it is called unexpected when more such hypersurfaces exist than a naive dimension count would lead one to expect. Chiantini and Migliore showed that every (a,b)-grid with 3≤ a≤ b admits unexpected cones of degrees a and b <cit.>.
§ THE GEPROCI PROPERTY OVER FINITE FIELDS
§.§ Spreads
While examples of nontrivial geproci configurations (especially nontrivial non-half grids) have proven rather elusive in the characteristic 0 setting, we will see in this paper that they arise quite naturally over finite fields. In the finite field setting, we make generous use of the study of spreads over projective space, which we will define now.
Let ^2t-1_k be a projective space of odd dimension over a field k. Let S be a set of (t-1)-dimensional linear subspaces of ^2t-1_k, each of which is definedover k. We call S a spread if each point of ^2t-1_k is contained in one and only one member of S.
Over a finite field, spreads always exist for each t≥ 1 <cit.>. In our three-dimensional case, we have t=2. Therefore a spread in ^3_k will be a set of mutually-skew lines defined over k that cover ^3_k.
Here we show an example of a spread based on <cit.>. Given a field extension k L with (as vector spaces) _k L=t, we get a map
_k^2t-1=_k(k^2t)=_k(L^2)⟶_L(L^2)=^1_L
with linear fibers _k(L)=(k^t)=_k^t-1, giving a spread. When we take k=, t=2, and L=, we get
^3_⟶^1_=S^2.
Composing with the antipodal map S^3→^3_ gives the well-known Hopf fibration S^3→ S^2 with fibers S^1.
Here we give another construction of spreads for ^3 for fields of positive characteristic based on <cit.> and <cit.>. Let _q be a finite field of size q and characteristic p, first where p is an odd prime. Let r∈_q be such that the polynomial x^2-r∈_q[x] is irreducible; that is, r has no square root in _q. Denote by L_r(a,b) the line in ^3__q through the points (1,0,a,b) and (0,1,rb,a). Denote by L(∞) the line through the points (0,0,1,0) and (0,0,0,1). Then the set of lines
S_r={L_r(a,b),L(∞):a,b∈_q}
is a spread in ^3__q (since ^3__q has (q+1)(q^2+1)=q^3+q^2+q+1 points and one can check (using the fact that r is not a square in _q) that the lines are skew, but there are q^2+1 lines and each line has q+1 points).
In the case where _q has characteristic 2, we want to choose r∈_q to be such that the polynomial x^2+x+r is irreducible in _q[x]. Then define L_r(a,b) to be the line in ^3__q through the points (1,0,a,b) and (0,1,br,a+b). Then S_r={L_r(a,b),L(∞):a,b∈_q} is a spread.
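As a small computational check (our own illustration, not taken from the cited sources), the following Python script enumerates the lines of S_r for q=3 and r=2 (a non-square in _3, so the odd-characteristic construction applies) and verifies that they form q^2+1 pairwise-disjoint lines covering all (q+1)(q^2+1)=40 points of ^3__3; the modular inverse used in normalize assumes that q is prime.

# Sketch: verify that S_r is a spread of P^3 over F_q for q = 3, r = 2.
from itertools import product

q, r = 3, 2  # x^2 - 2 is irreducible over F_3, so r = 2 is a non-square

def normalize(pt):
    # scale a nonzero coordinate vector over F_q so its first nonzero entry is 1
    pt = tuple(c % q for c in pt)
    lead = next(c for c in pt if c != 0)
    inv = pow(lead, q - 2, q)  # inverse of lead in F_q (valid since q is prime)
    return tuple((inv * c) % q for c in pt)

def line_points(p1, p2):
    # all projective points on the line spanned by p1 and p2
    pts = set()
    for u, v in product(range(q), repeat=2):
        vec = tuple((u * a + v * b) % q for a, b in zip(p1, p2))
        if any(vec):
            pts.add(normalize(vec))
    return pts

lines = [line_points((1, 0, a, b), (0, 1, (r * b) % q, a))
         for a, b in product(range(q), repeat=2)]
lines.append(line_points((0, 0, 1, 0), (0, 0, 0, 1)))  # the line L(infinity)

all_pts = set().union(*lines)
assert len(lines) == q ** 2 + 1
assert all(len(L) == q + 1 for L in lines)
# equality below holds exactly when the lines are pairwise disjoint and cover P^3(F_q)
assert sum(len(L) for L in lines) == len(all_pts) == (q + 1) * (q ** 2 + 1)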
Let _q be the field of size q, where q is some power of a prime. Then Z=^3__q^3__q is a (q+1,q^2+1)-geproci half grid.
First we will show that there is a degree (q+1) cone containing Z having a singularity of multiplicity q+1 at a general point P∈^3__q. Let P=(a,b,c,d)∈^3__q. Let
M= [ a b c d; a^q b^q c^q d^q; x y z w; x^q y^q z^q w^q; ].
Then we claim F = det M is such a cone.
First note that F contains every point of Z, because x^q=x for each x∈_q. Furthermore, the terms of F can be combined into groups of 4 so that F is the sum of terms of the form
(x^qyc^qd-x^qwc^qb)-(z^qya^qd-z^qwa^qb)=x^qc^q(yd-wb)-z^qa^q(yd-wb)
=(x^qc^q-z^qa^q)(yd-wb)=(xc-za)^q(yd-wb)∈ I^q+1((a,b,c,d))
Thus F is a cone C_1 of degree q+1 with vertex (a,b,c,d) of multiplicity q+1.
Now we will show there is a degree q^2+1 cone C_2 containing Z having a general point P of multiplicity q^2+1. By Example <ref>, the space ^3__q admits a spread of q^2+1 mutually-skew lines that covers all of ^3__q. Each line together with a fixed general point P determines a plane. The union of the planes gives C_2.
Projecting the q^2+1 lines from a general point P∈^3__q to a general plane Π=^2__q yields a set of q^2+1 lines in ^2__q containing the (q+1)(q^2+1) points of the image of Z.
Now we will show that C_1 and C_2 do not have components in common; to this end, we will show that C_1 contains no line in ^3__q defined over ^3__q. Note that C_1 vanishes on such a line if and only if F=0, where F = det M and
M=[ a b c d; a^q b^q c^q d^q; X Y Z W; X^q Y^q Z^q W^q ]
for X=η_0u+μ_0v, Y=η_1u+μ_1v, Z=η_2u+μ_2v, and W=η_3u+μ_3v for all (u,v)∈^1__q where (η_0,η_1,η_2,η_3) and (μ_0,μ_1,μ_2,μ_3) are points on the line. If r_1, r_2, r_3, and r_4 are the rows of a 4× 4 matrix, we will denote the determinant of that matrix by |r_1,r_2,r_3,r_4|. In particular, taking the r_i to be the rows of M, we have F=|r_1,r_2,r_3,r_4|=|r_1,r_2,η u+μ v,η u^q+μ v^q|=0 for all (u,v).
Since determinants are multilinear, we have
|r_1,r_2,η u+μ v,η u^q+μ v^q|
= |r_1,r_2,η u,η u^q|+|r_1,r_2,η u,μ v^q|+|r_1,r_2,μ v,η u^q|+|r_1,r_2,μ v,μ v^q|
= |r_1,r_2,η u,η u|u^q-1+|r_1,r_2,η u,μ v^q|+|r_1,r_2,μ v,η u^q|+|r_1,r_2,μ v,μ v|v^q-1
= |r_1,r_2,η u,μ v^q|+|r_1,r_2,μ v,η u^q|=|r_1,r_2,η,μ|uv^q+|r_1,r_2,μ,η|u^qv
= |r_1,r_2,η,μ|uv^q-|r_1,r_2,η,μ|u^qv=|r_1,r_2,η,μ|(v^q-1-u^q-1)uv.
But v^q-1-u^q-1≠ 0 unless u=v=0 or u/v∈_q. Therefore F is 0 for all (u,v) only if |r_1,r_2,η,μ|=0. By an appropriate choice of coordinates we get η=(1,0,0,0), μ=(0,1,0,0), r_1=(a',b',c',d'), and r_2=(a'^q,b'^q,c'^q,d'^q) for some point (a',b',c',d') which is general since (a,b,c,d) is general. Since |r_1,r_2,η,μ| is nonzero for a'=b'=0, c'=1, d'∈_q∖_q, we see |r_1,r_2,η,μ|≠ 0 for general (a',b',c',d'). We conclude that C_1 does not contain a line of ^3__q defined over ^3__q, and so C_1 has no components in common with C_2. (In fact, since C_1 contains the q+1 points of each line of ^3__q defined over ^3__q but does not contain the line, C_1 meets each line of ^3__q defined over ^3__q transversely.) Thus C_1∩ C_2 is a curve of degree (q+1)(q^2+1) and contains the (q+1)(q^2+1) lines through P and points of Z, hence C_1∩ C_2 is exactly this set of lines.
So Z is a set of (q+1)(q^2+1) points, which is the intersection of the curves C_1∩Π (of degree q+1) and C_2∩Π (of degree q^2+1), so Z is a (q+1,q^2+1)-complete intersection. Thus Z is (q+1,q^2+1)-geproci.
Furthermore, the degree q+1 and q^2+1 cones in the above proof are unexpected. We will show this with the help of the following lemma.
Let Z=^n__q in variables x_0,…,x_n. Then [I(Z)]_q+1=1+2+⋯+n=\binom{n+1}{2}.
We will induct on n, starting with n=1. The product
x_0(x_0-x_1)(x_0-2x_1)⋯(x_0-(q-1)x_1)x_1
is the unique q+1 form (up to scalar multiplication) vanishing on all points of Z. So [I(Z)]_q+1=1.
Now let n>1 and let Z'=V(x_n)∩ Z, where Z=^n__q, so we can regard Z' as Z'=^n-1__q.
Now let Y be the complement of Z' in Z. Then we have x_n[I(Y)]_q [I(Z)]_q+1. Furthermore, for all f∈ [I(Z)]_q+1, we see that ρ(f)=0 if and only if f=0 or f=x_n· h for some degree q polynomial h vanishing on Y. Hence x_n[I(Y)]_q=ρ. This gives us the short exact sequence
0 ⟶ x_n[I(Y)]_q⟶ [I(Z)]_q+1⟶ [I(Z')]_q+1⟶ 0
where [I(Z')]_q+1=1+⋯+(n-1) by the induction hypothesis. Now we must show that x_n[I(Y)]_q=n. But x_n[I(Y)]_q=[I(Y)]_q and Y is a complete intersection of n forms of degree q. For example, we can cut out Y by the n forms given by
x_i(x_i-x_n)(x_i-2x_n)⋯(x_i-(q-1)x_n)
for 0≤ i≤ n-1. Hence [I(Y)]_q=n, and so [I(Z)]_q+1=[I(Z')]_q+1+[I(Y)]_q=1+⋯+n.
The degree q+1 cone and degree q^2+1 cone in the proof of Theorem <ref> are unexpected.
From Lemma <ref>, we see that [I(Z)]_q+1=6. In particular, [I(Z)]_q+1 is generated by the 2× 2 minors of the matrix
[ x y z w; x^q y^q z^q w^q; ].
Since 6-q+33<0 for q≥ 2, and [I(Z)∩ I(P)^q+1]_q+1≥ 1>0, we have that the above q+1 cone is indeed unexpected.
To show that the degree q^2+1 cone is unexpected, we will first show that the (q^2+1)(q+1) points of ^3__q impose independent conditions on forms of degree q^2+1. We will show that for each Q∈^3__q that there is a degree q^2+1 form vanishing at every point ^3__q except Q. Without loss of generality, we will take Q=(0,0,0,1).
We will start with the case q≠ 2. Then the union of planes given by the product
π_x=∏_i=0^q-1(w-ix)
contains every point of ^3__q except those on the affine plane {(0,*,*,1)}. Similarly, the products
π_y=∏_i=0^q-1(w-iy) and π_z=∏_i=0^q-1(w-iz)
vanish everywhere except on the affine planes {(*,0,*,1)} and {(*,*,0,1)}, respectively. Therefore, the product π_xπ_yπ_z vanishes everywhere on ^3__q except the point (0,0,0,1). Since π_xπ_yπ_z=3q, taking π=w^q^2-3q+1π_xπ_yπ_z gives us a degree q^2+1 form vanishing at every point of ^3__q except Q. Note that since q>2, q^2-3q+1>0, so π is well-defined.
Since the points of Z=^3__q impose independent conditions on the q^2+1 forms, we have
[I(Z)]_q^2+1=\binom{q^2+4}{3}-(q^2+1)(q+1).
Using our degree q+1 cone from the proof of Theorem <ref> as F, we have
F·[I(P)^q^2-q]_q^2-q⊆ [I(Z)∩ I(P)^q^2+1]_q^2+1,
giving us
[I(P)^q^2-q]_q^2-q≤[I(Z)∩ I(P)^q^2+1]_q^2+1.
We know that [I(P)^q^2-q]_q^2-q=\binom{q^2-q+3}{3}-\binom{q^2-q+2}{3}=\binom{q^2-q+2}{2}, so in order to show the degree q^2+1 cone is unexpected it is sufficient to see that the following inequality holds:
\binom{q^2-q+2}{2}>\binom{q^2+4}{3}-(q^2+1)(q+1)-\binom{q^2+3}{3}.
This inequality holds for q≥ 3. Thus for all prime powers q≥ 3, the degree q^2+1 cone in the proof of Theorem <ref> is unexpected.
Now for the case q=2: First we wish to show that the fifteen points of Z=^3__2 impose independent conditions on the quintic forms. Again taking Q=(0,0,0,1) without loss of generality, we can take π=w^2(w+x)(w+y)(w+z) as our degree 5 form vanishing at every point of ^3__2 except Q. Therefore the points indeed impose independent conditions. Thus
[I(Z)]_5=\binom{5+3}{3}-15=41
and so [I(Z)]_5-\binom{5+2}{3}=41-35=6. A computation in Macaulay2 reveals that
[I(Z)∩ I(P)^5]_5=7>6,
thus the degree q^2+1 cone from the proof of Theorem <ref> is unexpected for q=2 as well.
The Macaulay2 commands used to show [I(Z)∩ I(P)^5]_5=7 are as follows.
§.§ Maximal Partial Spreads
Of particular interest to the hunt for geproci sets is the existence of maximal partial spreads.
A partial spread of ^3__q with deficiency d is a set of q^2+1-d mutually-skew lines of ^3__q. A maximal partial spread is a partial spread of positive deficiency that is not contained in any larger partial spread. We will denote the set of points of ^3__q contained in the lines in a spread S by (S).
Maximal partial spreads allow us to construct examples of many geproci sets as subsets of ^3__q, using the following corollary.
Let S be a partial spread of s lines in ^3__q. Then the set of points (S)^3__q is {s,q+1}-geproci.
The same degree q+1 cone C_1 from the proof of Theorem <ref> works in this case. The degree s cone is the join of the s lines with the general point P. It follows from the proof of Theorem <ref> that C_1 meets every line of ^3__q transversely and thus that (S) is geproci.
Let Z be an {a,b}-geproci set and let Z' Z be a {c,b}-geproci subset, whose general projection shares with the general projection of Z a minimal generator of degree b. Then the residual set Z”=Z∖ Z' is {a-c,b}-geproci.
This is Lemma 4.5 of <cit.>, and the proof still works in positive characteristic.
The complement Z^3__q of a maximal partial spread of deficiency d is a nontrivial {q+1,d}-geproci set. Furthermore, when d>q+1, Z is also not a half grid.
The first sentence of the Theorem comes directly from Corollary 1 and Lemma 1, except for being nontrivial. To demonstrate that Z is nontrivial, suppose Z is contained in a plane H. Let Z' be the complement of Z. Then Z' consists of q+1 points on q^2+1-d lines. At most one of those lines can be in H, but each of the lines meet H. Thus Z' has at least q^2+1-d points in H, so Z consists of at most q^2+q+1-(q^2+1-d)=q+d points. This is impossible since |Z|=(q+1)d>q+d.
Now suppose that Z is a grid. Thus it consists of q+1 points on each of d lines. But Z' comes from a maximal partial spread, so Z contains no set of q+1 collinear points. Thus Z cannot be a grid, so Z is nontrivial.
Now we will prove that Z is a nontrivial non-half grid if d>q+1. Recall that every line in ^3__q consists of q+1 points. If Z were a half grid, then either it contains subsets of d collinear points or subsets of q+1 collinear points, but d>q+1, so the latter would be true. But we know from the above that Z contains no subset of q+1 collinear points.
§.§ Examples
By <cit.>, if q≥ 7 and q is odd, then ^3__q has a maximal partial spread of size n for each integer n in the interval (q^2+1)/2+6≤ n≤ q^2-q+2. In terms of deficiency d=q^2+1-n, we get the inequalities q-1≤ d≤(q^2+1)/2-6. Thus for every odd prime power q≥ 7 there is a maximal partial spread in ^3__q of deficiency d>q+1 and thus a nontrivial non-half grid (q+1,d)-geproci set.
In addition to Heden's bounds <cit.> showing the existence of maximal partial spreads, Mesner has provided the lower bound √(q)+1≤ d on the deficiency <cit.>, and Glynn has provided the upper bound d≤ (q-1)^2 <cit.>.
By Lemma <ref>, for any line L⊆^3__2, the set Z=^3__2∖ L is a (3,4)-geproci half grid. In fact, Z has the same combinatorics as D_4, shown in Figure <ref> (that is, Z consists of 12 points, each of which is on 4 lines, with each line containing 3 of the points). Specifically, in Figure <ref> we see ^3__2∖ V(x+y+z,w).
There is (up to projective equivalence) a unique maximal partial spread in ^3__3 <cit.>. This spread contains seven lines (as opposed to a complete spread, which contains ten). The complement Z of the points of the maximal partial spread is a set of 12 points in ^3__3 that is (3,4)-geproci and nontrivial. Furthermore, Z has the same combinatorics as the D_4 configuration (that is, Z is a set of 12 points, each of which is on 4 lines, with each line containing 3 of the points). Note that Z is then a half grid, as shown in Figure <ref>. Specifically, Figure <ref> exhibits the points of ^3__3 in the complement of the maximal partial spread given by the seven lines V(x+y,y+z+w), V(x-y-z,y+w), V(x-y+w,y+z), V(x+y+z,w), V(x-y+z, z+w), V(x+y-z,x+w), and V(x+z,x+y+w).
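The skewness of the seven listed lines can be verified mechanically: two lines, each cut out by two linear forms, are disjoint in ^3__3 exactly when the four forms are linearly independent over _3, i.e. when the 4x4 coefficient matrix has nonzero determinant mod 3. The Python sketch below is an illustrative check, with the coefficient vectors transcribed from the list above.

import itertools

# Coefficient vectors (x, y, z, w) of the two linear forms cutting out each line in P^3 over F_3.
lines = [
    [(1, 1, 0, 0), (0, 1, 1, 1)],    # V(x+y,   y+z+w)
    [(1, -1, -1, 0), (0, 1, 0, 1)],  # V(x-y-z, y+w)
    [(1, -1, 0, 1), (0, 1, 1, 0)],   # V(x-y+w, y+z)
    [(1, 1, 1, 0), (0, 0, 0, 1)],    # V(x+y+z, w)
    [(1, -1, 1, 0), (0, 0, 1, 1)],   # V(x-y+z, z+w)
    [(1, 1, -1, 0), (1, 0, 0, 1)],   # V(x+y-z, x+w)
    [(1, 0, 1, 0), (1, 1, 0, 1)],    # V(x+z,   x+y+w)
]

def det(M):
    """Exact integer determinant by permutation expansion (fine for 4x4)."""
    n = len(M)
    total = 0
    for perm in itertools.permutations(range(n)):
        sign = (-1) ** sum(p1 > p2 for i, p1 in enumerate(perm) for p2 in perm[i + 1:])
        prod = 1
        for r, c in enumerate(perm):
            prod *= M[r][c]
        total += sign * prod
    return total

# Two lines are skew iff the four forms are linearly independent, i.e. det != 0 mod 3.
skew = all(det(list(L1) + list(L2)) % 3 != 0
           for L1, L2 in itertools.combinations(lines, 2))
print("pairwise skew:", skew)   # should report True for the coordinates as transcribed above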
There are (up to projective equivalence) fifteen maximal partial spreads in ^3_/7 of size 45 and invariant under a group of order 5 (as opposed to a complete spread, which contains 50 lines) <cit.>. Let Z be the complement of the set of points of any of these maximal partial spreads. Then Z is a set of 40 points that is a nontrivial (5,8)-geproci non-half grid. Furthermore, Z has the same combinatorics as the Penrose configuration of 40 points <cit.>.
Note that if we look at two non-isomorphic maximal partial spreads M and M', and consider their complements Z and Z', then Z and Z' are non-isomorphic nontrivial non-half grid (5,8)-geproci sets. In fact, some such sets have stabilizers of different sizes! Of the fifteen up to isomorphism, there are nine with stabilizers of size 10, there is one with a stabilizer of size 20, there is one with a stabilizer of size 60, and there are four with stabilizers of size 120.
An example of such a geproci set is
{(0,0,1,3),(0,1,3,3),(0,1,3,5),(0,1,4,6),
(0,1,6,5),(1,0,1,3),(1,0,2,6),(1,0,4,5),
(1,0,4,6),(1,1,0,1),(1,1,0,4),(1,1,1,4),
(1,1,5,2),(1,2,1,6),(1,2,3,3),(1,2,5,2),
(1,2,6,5),(1,3,2,1),(1,3,4,4),(1,3,5,2),
(1,3,6,0),(1,4,0,5),(1,4,2,4),(1,4,4,1),
(1,4,6,2),(1,5,0,4),(1,5,1,0),(1,5,2,0),
(1,5,3,0),(1,5,3,1),(1,5,3,3),(1,5,3,6),
(1,5,4,5),(1,5,5,0),(1,5,5,2),(1,5,6,3),
(1,6,0,3),(1,6,1,5),(1,6,2,1),(1,6,6,6)}.
This example is the complement of a maximal partial spread of size 45 with a stabilizer of size 60.
We also used Macaulay2 to check that at least one configuration of each size stabilizer is Gorenstein. This contrasts with the case in characteristic 0, where only one nontrivial Gorenstein geproci set is known, up to projective equivalence: the Penrose configuration. <cit.>
One can determine this using the following commands in Macaulay2 with the example set of points from above.
0 1 2 3
total: 1 5 5 1
0: 1 · · ·
1: · · · ·
2: · · · ·
3: · 5 · ·
4: · · 5 ·
5: · · · ·
6: · · · ·
7: · · · 1
We can see from the Betti table that this set of points is Gorenstein. A similar calculation works to show the other geproci sets are Gorenstein.
This pattern leads us to the following question:
Given the complement of a maximal partial spread Z⊆^3__q, when does Z correspond to a nontrivial geproci set that exists in ^3_ℂ? That is, when does there exist a nontrivial geproci set in ^3_ℂ that has the same combinatorics as Z?
§ THE GEPROCI PROPERTY WITH INFINITELY-NEAR POINTS
We can also consider configurations of points that include infinitely-near points.
Let A be a smooth point on an algebraic variety X. Let Bl_A(X) denote the blowup of X at A. Then a point B∈Bl_A(X) is infinitely-near A if π_A(B)=A, where π_A:Bl_A(X)→ X is the standard blowup map.
On the other hand, if π_A(B)≠ A, then B and A are distinct.
Intuitively, B corresponds to the direction of a line through A. In the plane, we can consider how a point A and a point B that is infinitely-near A can uniquely determine a line, the same way a line can be uniquely determined by two distinct points. This is akin to determining a line from a point and a slope. In ^3, we will consider how infinitely-near points impose conditions on forms the same way distinct points can.
We can extend the definition of geproci to include configurations with infinitely-near points by realizing Z as a non-reduced 0-dimensional subscheme of ^3. For example, let A∈^3 be a point and L a line through A. Let B be the point infinitely near A corresponding to L. Then I({A,B})=I(L)+I(A)^2 and the ideal of the image {A,B} of {A,B} under projection from a point P∉ L is I(L)+I(A)^2, where L is the image of L. A scheme Z including infinitely near points is geproci if the projection Z of Z from a general point P to a plane is a complete intersection as a subscheme of ^2.
In the following sets of points in ^3__2, we will denote a point A together with a point infinitely-near A as A× 2. We will then specify what line the infinitely-near point corresponds to.
We will consider the set of nine (not distinct) points in ^3_K where K=2:
Z={(1,0,0,0)× 2, (0,1,0,0)× 2,(0,0,1,0)× 2, (0,0,0,1)× 2,(1,1,1,1)}
by choosing infinitely-near points for each of (1,0,0,0), (0,1,0,0), (0,0,1,0), and (0,0,0,1) to be the point that corresponds to the (respective) direction of the line through the given point and the point (1,1,1,1).
The projection Z of these 9 points to the plane w=0 from a general point takes (0,0,1), (0,1,0), (1,0,0) to themselves and (1,1,1,1) and (0,0,0,1) to general points. After a change of coordinates we can map the image of (1,1,1,1) to (1,1,1) and the image of (0,0,0,1) to (a,b,c). We will denote
Z'={(0,0,1)× 2,(0,1,0)× 2,(1,0,0)× 2,(a,b,c)× 2,(1,1,1)},
where the tangent directions of each point of multiplicity 2 correspond to the line connecting the point with (1,1,1). Then Z' is the base locus of a specific type of pencil of cubics called a quasi-elliptic fibration. Specifically, the quasi-elliptic pencil given by Z has Dynkin diagram A_1^8. One can read more about the connection between Dynkin diagrams and (quasi-)elliptic fibrations in e.g. Cossec and Dolgachev <cit.>.
We can see that the conic C_1=V(xy+xz+yz) contains the points (0,0,1), (0,1,0), and (1,0,0), and the tangent lines of the three points all meet (1,1,1). Additionally, the line L_1 connecting (a,b,c) and (1,1,1) has the appropriate slope to contain the remaining infinitely-near point. Therefore the cubic given by C_1∪ L_1 contains Z'.
Similarly, we can also construct a conic C_2=V(cxy+bxz+ayz+(a+b+c)y^2) that contains the points (0,0,1), (0,1,0), (a,b,c), and their respective infinitely-near points. Letting L_2 denote the line connecting (1,0,0) and (1,1,1), we get another cubic C_2∪ L_2 containing Z'. The two cubics share no components in common, and so Z' is a complete intersection of two cubics.
Since Z' is projectively equivalent to Z, we get Z is a complete intersection. Therefore Z is (3,3)-geproci. Note that Z is a nontrivial non-half grid. What makes this work is the fact that the tangent lines of a conic in characteristic 2 are concurrent.
We can also see that Example <ref> provides examples of unexpected cones. Letting {α_0,α_1,α_2,α_3}={x,y,z,w}, we can construct a (non-minimal) generating set for I(Z) as
𝒜={α_iα_j(α_k+α_ℓ):i,j≠ k, i,j≠ℓ, k≠ℓ}.
(Note that this set includes both the polynomials where i=j and i≠ j.) A computation in Macaulay2 reveals that the ideal generated by 𝒜 can be minimally generated by 11 cubic polynomials. Therefore [I(Z)]_3=11. We also have \binom{3+2}{3}=10, so [I(Z)]_3-\binom{3+2}{3}=1.
But we also know that [I(Z)+I(P)^3]_3≥ 2, for example by taking the joins with the vertex P of the two planar cubics making up the complete intersection containing Z'. Therefore we have the inequality [I(Z)+I(P)^3]_3>[I(Z)]_3-\binom{3+2}{3}>0, and so the cubic cones are indeed unexpected.
Let K=2. Now consider the 6 points
Z={(1,0,0,0)× 2, (0,1,0,0)× 2,(0,0,1,0)× 2},
where the infinitely near point for each is in the direction of (0,0,0,1). We will show that this is (2,3)-geproci.
First we will look at the following scheme of points in ^2:
Z'={(1,0,0)× 2, (0,1,0)× 2,(0,0,1)× 2}
where the infinitely-near point for each is in the direction of (1,1,1). We will show that this set of 6 points is a complete intersection of a conic and a cubic, and then show that a general projection of Z onto any plane is projectively equivalent to Z'. Note that Z' is contained in the conic A=V(xy+xz+yz) and the cubic B=V((x+y)(x+z)(y+z)). Also note that A and B have no components in common, since A is an irreducible conic and B is the union of three lines. Therefore Z' is a complete intersection of a conic and a cubic.
Now let us return to Z^3. Let us project Z from a general point P∈^3 onto a general plane Π^3. Since the lines corresponding to each infinitely-near point meet at (0,0,0,1), and since projection from a point preserves lines (and therefore the intersection of lines), the images of the three infinitely-near points under the projection π_P,Π will also correspond to three concurrent lines. In other words, Z will map to the set
π_P,Π(Z)={π_P,Π(1,0,0,0)× 2,π_P,Π(0,1,0,0)× 2,π_P,Π(0,0,1,0)× 2}
where each infinitely-near point is in the direction of π_P,Π(0,0,0,1). For a general point P, the images of the three ordinary points in Z and the point π_P,Π(0,0,0,1) will not be collinear. Therefore we can map Π to ^2 and use an automorphism of the plane to map π_P,Π(1,0,0,0) to (1,0,0), π_P,Π(0,1,0,0) to (0,1,0), π_P,Π(0,0,1,0) to (0,0,1), and π_P,Π(0,0,0,1) to (1,1,1). Then we are in the same situation as Z', which is a complete intersection of a conic and a cubic.
Note that Z is a half grid, since the cubic containing Z is a union of three lines, but the conic is irreducible.
The unique quadric cone containing Z with a vertex at (a,b,c,d) is given by cdxy+bdxz+adyz+abw^2.
Let K=2. Now consider the 9 points
Z={(1,0,0,0)× 2, (1,1,0,0)× 2, (0,1,0,0)× 2, (0,0,1,0)× 2, (0,0,0,1)},
by choosing as our infinitely-near points for (1,0,0,0), (1,1,0,0), (0,1,0,0), and (0,0,1,0) the points that correspond to the respective directions to the point (0,0,0,1). First we will look at the following set of points in ^2_K:
Z'={(1,0,0)× 2,(a,0,1)× 2,(0,0,1)× 2,(1,1,1)× 2,(0,1,0)}
where a≠ 0 and each infinitely-near point is in the direction of (0,1,0). These nine points are a complete intersection of (y^2+xz)(x+az) and y^2(x+z). Since every set of four points, no three of which are collinear, can be mapped to every other such set of four points by a linear automorphism, every projection of Z onto any plane Π will be isomorphic to the configuration Z' for some a∈ K∖{1,0}, and so Z is a nontrivial (3,3)-geproci set.
The preceding example is particularly interesting because the general projection of Z is not only a (3,3) complete intersection, but as in Example <ref> it is also the set of base points of a quasi-elliptic fibration (specifically one with Dynkin diagram A_1^4D_4).
|
http://arxiv.org/abs/2307.06165v1 | 20230712134620 | The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset | [
"Manuel Hetzel",
"Hannes Reichert",
"Günther Reitberger",
"Erich Fuchs",
"Konrad Doll",
"Bernhard Sick"
] | cs.CV | [
"cs.CV",
"cs.DB"
] |
The IMPTC Dataset: An Infrastructural Multi-Person Trajectory and Context Dataset
    Manuel Hetzel, Hannes Reichert, Günther Reitberger, Erich Fuchs, Konrad Doll, Bernhard Sick
    August 12, 2023
================================================================================================================
Inner-city intersections are among the most critical traffic areas for injury and fatal accidents. Automated vehicles struggle with the complex and hectic everyday life within those areas. Sensor-equipped smart infrastructures, which can cooperate with vehicles, can benefit automated traffic by extending the perception capabilities of drivers and vehicle perception systems. Additionally, they offer the opportunity to gather reproducible and precise data of a holistic scene understanding, including context information as a basis for training algorithms for various applications in automated traffic. Therefore, we introduce the Infrastructural Multi-Person Trajectory and Context Dataset (IMPTC). We use an intelligent public inner-city intersection in Germany with visual sensor technology. A multi-view camera and LiDAR system perceives traffic situations and road users' behavior. Additional sensors monitor contextual information like weather, lighting, and traffic light signal status. The data acquisition system focuses on Vulnerable Road Users (VRUs) and multi-agent interaction. The resulting dataset consists of eight hours of measurement data. It contains over 2,500 VRU trajectories, including pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users, and over 20,000 vehicle trajectories at different day times, weather conditions, and seasons. In addition, to enable the entire stack of research capabilities, the dataset includes all data, starting from the sensor-, calibration- and detection data until trajectory and context data. The dataset is continuously expanded and is available online for non-commercial research at <https://github.com/kav-institute/imptc-dataset>.
Dataset, Trajectories, Vulnerable Road Users, Intelligent Infrastructure, Machine Learning
§ INTRODUCTION
Automated driving (AD) is a major goal and point of interest for the industry and the research community. In the recent past, data-driven approaches have been a thriving factor in making this goal reachable. Datasets naturally play a central role as the performance of the algorithms strongly depends on them. Famous examples are JAAD <cit.>, SHIFT <cit.>, and inD <cit.>. It is not just about the datasets themselves but also about how to create datasets, how to be able to evaluate results, and especially how to make sure that the data developed for algorithms represent the real world. In the case of synthetic datasets, e.g., SHIFT or sets created with CARLA <cit.>, it must be ensured that synthetic data can overcome domain gaps to improve real-world applications. With our work, we want to contribute a piece to the picture of holistic algorithms towards AD. We provide a sensor setup at a public inner-city intersection with everyday traffic. The setup can cover the whole intersection and parts of the incoming streets. Our main focus lies in predicting future residence probabilities and investigating the behavior of VRUs to include them in the AD environment, especially in complex inner-city scenarios.
For VRU location and behavioral prediction, trajectory and additional context data are essential. If only trajectories are used, then the prediction relies on the past observed motion of the VRU to predict future locations. These algorithms react to an action already in progress instead of anticipating it. Additional information, e.g., VRU body poses and hand gestures or traffic light signal status, can help to improve the reliability and precision of VRU predictions. Many datasets do not provide this; for example, CityScapes <cit.> or NuScenes <cit.> provides too short sequences of VRUs, making it more difficult to determine their intentions profoundly and especially to include group or context-induced behavior. Other datasets like JAAD or Euro-PVI (PVI) <cit.> provide additional context information to improve scene understanding. We introduce an extensive real-world dataset, Infrastructural Multi-Person Trajectory and Context Dataset (IMPTC), providing complete scene understanding with additional context information and top-to-bottom data availability for VRU intention prediction modeling created by our infrastructural setup. With the publication of the dataset, we want to support research on safety validation for AD, VRU intention prediction, and other topics which rely on naturalistic VRU trajectories.
We use context as a superordinate term to describe all types of information regarding VRU behavior influencing road safety. That includes VRU attributes like gender, age, body pose, hand gesture, and view direction, as well as static and dynamic conditions like traffic light signals, traffic rules, maps, weather, lighting, and interactions with other road users.
In <ref>, we give an overview of existing intelligent intersections and a summary of available VRU trajectory datasets. Afterward, <ref> presents our public research intersection, including sensor setup, data acquisition, and post-processing methods. Next, in <ref>, we will detail the IMPTC dataset and compare it with currently available datasets. Finally, <ref> will summarize our work.
§ RELATED WORK
This chapter references prior work in two related topic areas: intelligent infrastructure systems, including datasets for VRU safety (<ref>) and other existing methods for recording publicly available multi-VRU trajectory datasets (<ref>).
§.§ Intelligent Intersections
There are multiple intelligent intersections used for road safety research applications. However, most research focuses on high-level traffic flow understanding and optimization <cit.>. In contrast, some research targets VRU safety topics requiring high-resolution sensing technologies.
In Aschaffenburg, Germany, a research intersection was introduced in 2012 for the German Ko-PER <cit.> and DeCoInt^2 <cit.> projects. A precise 90-degree stereo camera setup using two gray-scale full HD cameras has been used to detect and track VRU behavior focusing on one corner of the intersection. Ko-PER investigates prediction behavior models for VRUs crossing the street; DeCoInt^2 covers VRU intention detection under the cooperative aspect between intelligent infrastructure and mobile research vehicles. For motion anticipation, Reitberger et al. <cit.> provided a cooperative tracking algorithm for cyclists, and Bieshaar et al. <cit.> used Convolutional Neural Networks to detect starting movements of cyclists. Zernetsch et al. <cit.> developed a probabilistic VRU trajectory forecasting method. Kress et al. <cit.> used this sensor setup as a reference to evaluate a human keypoint detection model deployed to a mobile research vehicle. It is worth mentioning that this sensor setup and the knowledge from the Ko-PER and DeCoInt^2 projects were utilized to develop the novel proposed sensor setup. Regarding publicly available data, the Ko-PER project only provides a small amount of trajectory data for public research <cit.>. The dataset contains VRUs and vehicles with 340 trajectories extracted from less than one hour of recordings. In contrast, DeCoInt^2 released a pedestrian and cyclist trajectory dataset <cit.> extracted from the same intersection containing 2,700 VRU trajectories. Furthermore, a cyclist trajectory dataset is provided by Bieshaar et al. <cit.> with 84 sequences recorded by cellphone GPS data.
In Braunschweig, Germany, a comparable research intersection serves as a field instrument for detecting and assessing traffic behavior <cit.>. The intersection can provide trajectory data of road users acquired by multi-modal sensor setups. Mono cameras and radars are utilized for the 3D detection of vehicles. For VRU detection, multiple binocular stereo camera setups facing the pedestrian crossings are used. From this intersection, no data is publicly available.
In Auburn Hills, Michigan, Continental operates two intelligent intersections in public use <cit.>. The systems improve traffic flow, reduce pollution, and increase the intersection’s safety by communicating hidden dangers to approaching connected vehicles and pedestrians. Camera and radar sensors create an environment model providing information about road users, traffic infrastructure, and static objects to connected vehicles using infrastructure-to-everything (I2X) communication. Continental does not provide publicly available data from its intelligent intersections.
In Ulm, Germany, another research intersection is located <cit.>. A combination of camera, LiDAR, and radar sensors creates road user trajectory data. In addition to the core intersection area, the sensors cover all three approaching streets for several hundred meters. Therefore, the general object tracking area is more extensive than all previously described research intersections. Just like in Braunschweig, no data is publicly available for researchers.
Besides intelligent infrastructures, other recording areas and techniques are used to create VRU trajectory datasets. The following section will present recent works.
§.§ VRU Trajectory Datasets
Several VRU trajectory datasets have been introduced and published in recent years. Different recording techniques, contents, scopes, and target research applications differentiate these sets. In terms of recording techniques, one can group them into three sub-classes, drone-based, vehicle-based, and stationary-based. The main focus of many datasets is trajectory prediction, but within the last couple of years, more datasets focusing on behavior and intention prediction topics have been published. The following will give an overview of existing and publicly available datasets.
Drones have become very popular for road user trajectory recording within the last few years. A drone equipped with a camera monitors a specific environment on the ground. In most cases, the drone holds a static position up to 100 meters above the target area for recording. The inD <cit.>, rounD <cit.>, HighD <cit.>, SSD <cit.>, and CITR+DUT <cit.> datasets use drones to acquire trajectories from critical intersections, public places of interest, or highways. This method is very flexible and enables fast data recording. Thanks to the top view, typical roadside occlusions do not occur; in return, useful context information is lost, such as traffic light signals, VRU body poses and gestures, vehicle flashing lights, and object heights. Furthermore, the trajectories of small objects like VRUs are error-prone, and drones can only operate in good weather conditions. Therefore, all drone datasets are recorded in calm and sunny weather. Rain, snowfall, or wind can alter the behavior of road users, which these sets do not cover. All previously mentioned drone-based datasets are publicly available.
Sensor-equipped mobile research vehicles are another popular way to record environment perception data. Cityscapes <cit.> is among the first high-quality and extensive vehicle-based datasets. It was followed by NuScenes <cit.>, Waymo Open Dataset <cit.>, BDD100K <cit.>, and ONCE <cit.>. In 2022, SHIFT <cit.> was introduced as a fully synthetic dataset with over 2 million annotated frames. These vehicle-based datasets cover all types of road typologies and users. Therefore, VRU-related scenes must be filtered explicitly. Due to the amount of recorded data, only some frames are annotated, reducing the sample rate. Cityscapes uses a 17 Hz camera recording frame rate; every 20th frame is annotated, resulting in a less than 1 Hz sample rate. NuScenes has a frame rate of 2 Hz. Drone-based and infrastructure-based datasets provide much higher frame rates, resulting in a more precise mapping of the ongoing situation and VRU behavior. Furthermore, the sensors' point of view is vulnerable to occlusions. In contrast, the Pedestrian Intention Estimation Dataset (PIE) <cit.> and the PVI dataset focus on VRU behavior. All scenes include VRUs with additional labels to describe their behavior and the possible intention to cross the street. In-vehicle cameras depend on good weather and lighting conditions. Therefore, PIE only includes scenes in sunny and calm weather. Moreover, only VRU and ego-vehicle trajectories are included, and no other road users are considered. All previously mentioned vehicle-based datasets are publicly available.
In contrast to intelligent intersections, there are other stationary mounting areas to capture trajectory data. The datasets from ETH <cit.>, UCY <cit.>, ATC <cit.>, and GC <cit.> use stationary mounted sensors on top of or inside buildings. These datasets cover crowded areas with high VRU activities targeting human-to-human interactions. However, the datasets are not recorded under real-world traffic scenarios and do not focus on road traffic situations. For example, the ATC dataset consists of pedestrian trajectories in a shopping center, and in GC the main hall of the grand central station in New York is monitored. All previously presented stationary and drone- and vehicle-based datasets provide trajectory data of VRUs. However, the datasets vary in scope and research targets. Beyond that, only some datasets provide additional context data. In the case of VRU intention detection, trajectories are one of many parameters that need to be considered. Additional context information is necessary to create reliable VRU behavior predictions. Recent research demonstrates that additional context can significantly improve VRU trajectory forecasting. Rasouli et al. <cit.> used local context, e.g., curbs or crosswalks, and VRU appearance, e.g., gestures, to create a pedestrian intention estimation model. The model's outcome is used with trajectory data as additional information for VRU trajectory prediction. Kress et al. <cit.> demonstrated that human body-pose information helps to improve VRU intention detection. Instead of using one real-world coordinate representing a VRU's 3D location, 17 body joints are used. That enables a more precise presentation of VRU's posture and gestures. The lack of context and the imbalance of presented datasets is a gap for ongoing research in VRU intention prediction. We introduce our research intersection and the IMPTC dataset to address this issue.
§ SYSTEM SETUP
In this chapter, we introduce our research intersection. In <ref>, we present the topology of the intersection, followed by <ref> detailing the object detection and classification process. Next, <ref> explains the object assignment and tracking, followed by <ref> describing included context data. Finally, <ref> describes the dataset format.
§.§ Research Intersection
A detailed description of the research intersection can be found in <cit.>. The research intersection is located in the inner-city of Aschaffenburg and is highly frequented by VRUs and vehicles. The intersection is signalized and includes three pedestrian crosswalks and an additional bike lane. The speed limit is 50 km/h. The intersection has six high-resolution wide-angle color cameras (4096x2160 pixels, 71-degree aperture angle), three high-resolution spinning LiDAR sensors (128-Layers, 45-degree aperture angle), alongside a weather station, and a traffic light signal tracker for contextual data. The cameras form a correlated multi-camera stereo system and are mounted approximately eight meters above ground level to reduce occlusions. The LiDARs are mounted on light poles at five meters in height at three corners of the core intersection. The camera systems' stereo coverage, the LiDAR sensors coverage, and the topology of the intersection are illustrated in <ref>. The sensor setup covers a total area of 50x40 meters for 3D road user tracking. The camera setup is used to achieve highly accurate VRU trajectories and to extract visual context information. The LiDAR setup handles all common types of road users.
§.§ Detection and Classification
The reliable detection of road users in camera images is essential. Therefore, we extended and fine-tuned a Faster-RCNN R101 FPN model <cit.> with additional classes using 10,000 human annotated and equally distributed camera images, split into 80:20 for training and evaluation. To ensure variability, the images were collected at various seasons, weather conditions, and day-times. The model focuses on VRUs. Therefore, we split VRUs into sub-classes, e.g., pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users. As a result, the model achieves an mAP score of 90.5% within our static setup.
Intrinsic and extrinsic camera calibration is necessary to achieve accurate world object coordinates from 2D detections. After mounting, a checkerboard pattern was used for the intrinsic calibration of the cameras. For the extrinsic calibration, we scattered 61 global geographic referenced points at the corners of lane markings and other road marks within the intersection area. Due to the bright white color and high reflectivity, they are easy to see in camera images and LiDAR point clouds. The global geographic referenced points are provided in Universal Transverse Mercator (UTM) coordinates, and the height is meters above sea level. For the cameras, we used the methods described in <cit.><cit.> to solve Perspective-n-Point (PnP) pose computations to obtain the sensor's extrinsic parameters concerning an origin point. One marker is treated as a reference point to define a local coordinate system, further called intersection coordinate system. Although we were careful to perform the calibration process with maximal accuracy, we discovered inconsistencies moving from one stereo camera setup to the next. <ref> shows a person crossing the street being visible in multiple camera views. Labeling the center of the head of the person in every image and calculating the 3D positions concerning the stereo settings lead to a difference exceeding the expected error, especially at the margins of the image areas. This effect causes detections of the same object not to be considered corresponding between stereo camera setup changes.
Our solution is to optimize the extrinsic camera parameters to achieve a minimal re-projection error concerning the measured intersection points and minimize the pairwise distance of triangulations of corresponding points in the common fields of view. For 𝔾 being the set of geo-referenced points, γ_i(𝕘) being a manually labeled image point in camera i corresponding to 𝕘∈𝔾, and π_i being the projection of a real-world point to camera i with i∈{1,…,6} based on the calibration, then
ϵ_r := ∑_𝕘∈𝔾∑_i ∈{1,…,6}‖π_i(𝕘) - γ_i(𝕘) ‖
is the re-projection error. Additionally, we collect a set of real-world points ℍ we can see in some or all of the cameras, but we do not know their coordinates. An example is the center of the head of the person crossing the street in <ref>. Next, we label the corresponding positions γ_i(𝕙) of such an 𝕙∈ℍ with i∈ I(𝕙) and I(𝕙) ⊂{1,…,6} being the cameras that have a view of 𝕙. The triangulation τ_i,j(p,q) calculates the 3D world point for two points p from camera i and q from camera j, assuming (i,j) is a calibrated stereo camera setup. We formulate the consistency error ϵ_c as the summed-up distance of the triangulations of the same real-world points viewed by different stereo setups.
The aforementioned set of stereo camera setups shall be called 𝕊 and 𝕊_I(𝕙) shall be the set of stereo setups containing only cameras from I(𝕙) of a 3D point 𝕙. Our consistency term is the following:
ϵ_c = ∑_𝕙∈ℍ ∑_(i,j)∈𝕊_I(𝕙), (i',j')∈𝕊_I(𝕙) ‖τ_i,j(γ_i(𝕙),γ_j(𝕙)) - τ_i',j'(γ_i'(𝕙),γ_j'(𝕙))‖.
To smooth the transition between the camera stereo setups, we choose the elements of ℍ to be in intersecting areas of at least two stereo setups. We use Nelder-Mead <cit.> optimization on the extrinsic parameters of every camera to minimize our objective function
ϵ_r + λϵ_c.
Depending on λ, the consistency argument is more or less important than the fit of the measured intersection points. In our case, λ = 0.04 is a good choice. We can achieve a precise and consistent multi-camera calibration setup by following this procedure. The mean distance of the triangulations τ_i,j(γ_i(𝕘),γ_j(𝕘)) over all stereo camera sets (i,j) and geo-referenced points 𝕘 to the points 𝕘 is 3.4cm and the maximum distance is 9.6cm.
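The structure of this optimisation is easy to reproduce in principle. The sketch below is a strongly simplified, self-contained toy version of the objective ϵ_r + λϵ_c (synthetic data, three cameras, only the translational part of the extrinsics perturbed, intrinsics folded into the 3x4 projection matrices); all names and the setup are illustrative and do not reflect the implementation used at the intersection.

import itertools
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def project(P, X):
    """Pinhole projection of 3D points X (N, 3) with a 3x4 camera matrix P."""
    Xh = np.c_[X, np.ones(len(X))]
    uvw = Xh @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of a single image correspondence."""
    A = np.vstack([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# toy ground truth: three cameras looking along the z axis from slightly different positions
t_true = np.array([[0.0, 0.0, 5.0], [-1.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
def cams(t):
    return [np.hstack([np.eye(3), ti.reshape(3, 1)]) for ti in t]

G = rng.uniform(-1, 1, (8, 3))                      # surveyed (geo-referenced) points
H = rng.uniform(-1, 1, (5, 3))                      # extra points with unknown coordinates
obs_G = [project(P, G) for P in cams(t_true)]       # labelled image points of G
obs_H = [project(P, H) for P in cams(t_true)]       # labelled image points of H
pairs = list(itertools.combinations(range(3), 2))   # the stereo camera setups
lam = 0.04

def objective(x):
    Ps = cams(t_true + x.reshape(3, 3))             # perturbed extrinsics (translations only)
    eps_r = sum(np.linalg.norm(project(P, G) - o, axis=1).sum()
                for P, o in zip(Ps, obs_G))
    eps_c = 0.0
    for k in range(len(H)):
        tri = [triangulate(Ps[i], Ps[j], obs_H[i][k], obs_H[j][k]) for i, j in pairs]
        eps_c += sum(np.linalg.norm(a - b) for a, b in itertools.combinations(tri, 2))
    return eps_r + lam * eps_c

x0 = rng.normal(scale=0.05, size=9)                 # start from a slightly wrong calibration
res = minimize(objective, x0, method="Nelder-Mead", options={"maxiter": 5000})
print(res.fun)                                      # should end up close to zero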
The LiDAR setup consists of three sensors. We use a two-step data post-processing pipeline to achieve highly reliable 3D object trajectories. First, each sensor's data is processed independently, using foreground-background subtraction combined with a classifier. In our case, this method achieves excellent results because of our static environment. The subtraction uses an exact digital scan of the intersection as a reference. Second, the individual sensor results are merged using least-squares fitting of 3D point sets <cit.>. As a result, and in contrast to our vision system, we receive trajectory data for all road users. The reference point cloud and two exemplary results are illustrated in <ref>.
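The cited least-squares fitting of two 3D point sets is the classical SVD-based method of Arun et al. (1987). A compact sketch of that alignment step is given below; it is illustrative only and not the actual fusion code used at the intersection.

import numpy as np

def fit_rigid_transform(A, B):
    """Least-squares rigid transform (R, t) with R @ A[i] + t ~ B[i], after Arun et al. (1987)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

# quick self-check with a known rotation about z and a translation
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
B = A @ Rz.T + np.array([0.5, -1.0, 2.0])
R, t = fit_rigid_transform(A, B)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -1.0, 2.0]))   # expect True True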
§.§ Tracking and Post-processing
To perform further work, such as movement prediction, it is essential to have both detections and consistent, reliable tracks available. The aim of tracking is twofold: on the one hand, consistent identifiers are assigned; on the other hand, missed detections due to failures or short-time occlusions can be bridged. For this purpose, it is essential to have a fitting movement model for the tracked object. VRUs are not only pedestrians, but every VRU sub-class involves a person. We present a way of tracking pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users as VRU sub-classes incorporating individual movement models and class probabilities. For pedestrians, we use an Interacting Multiple Model (IMM) Kalman filter that is flexible enough to cover abrupt changes of movement direction and rapidly changing accelerations. The model state space is [x, ẋ, y, ẏ, z, ż]. Cyclist, e-scooter rider, and wheelchair user movements follow arcs or straight lines. They differ in acceleration and average speed. For all these classes, we use adjusted parameters. The procedure for all sub-classes is the same. We describe the method for cyclists in <cit.>, which goes into further detail about how we initialize tracks and assign measurements.
Therefore, we choose the so-called bicycle model, adapted from <cit.>, based on the state space [x, y, z, ż, γ, γ̇, v] with x, y, and z being world coordinates, γ being the yaw angle, v the velocity, and an overdot denoting the time derivative. The state transition is given by
f(𝐱) := [x + cos(γ) a - sin(γ) b, y + sin(γ) a + cos(γ) b, z + ż T, ż, γ + γ̇ T, γ̇, v]
with a = v sin(γ̇ T)/γ̇ and b = v(1 - cos(γ̇ T))/γ̇ for a time step T.
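A direct implementation of this transition is short. The following Python sketch is illustrative; the guard for the straight-line limit (turn rate approaching zero) is a numerical safeguard added here and not spelled out in the text.

import numpy as np

def bicycle_step(state, T):
    """One step of the bicycle model transition f(x) with state [x, y, z, z_dot, gamma, gamma_dot, v]."""
    x, y, z, z_dot, gamma, gamma_dot, v = state
    if abs(gamma_dot) > 1e-9:
        a = v * np.sin(gamma_dot * T) / gamma_dot
        b = v * (1.0 - np.cos(gamma_dot * T)) / gamma_dot
    else:                               # straight-line limit as gamma_dot -> 0
        a, b = v * T, 0.0
    return np.array([
        x + np.cos(gamma) * a - np.sin(gamma) * b,
        y + np.sin(gamma) * a + np.cos(gamma) * b,
        z + z_dot * T,
        z_dot,
        gamma + gamma_dot * T,
        gamma_dot,
        v,
    ])

# example: a cyclist turning at 0.2 rad/s with speed 4 m/s, sampled at 10 Hz
state = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 4.0])
for _ in range(5):
    state = bicycle_step(state, T=0.1)
print(state[:2])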
A cyclist detection consists of a bicycle bounding box and a person bounding box whose mutual overlap exceeds a predefined Intersection over Union (IoU) threshold.
Tracking cyclists solely on the basis of cyclist detections does not perform well, mainly because the bicycle detection is not yet stable across all viewing angles.
With Neural Network (NN) based detection algorithms, we observed an absolute difference in the bicycle detection rate between frontal and side views of about 0.65. Bicycle detections are therefore not robust enough in certain constellations, so that bicycle-detection-only tracking cannot provide gap-free results. We tackle this issue by tracking pedestrians and bicycles simultaneously and introducing a class probability. The algorithm we selected to achieve this goal is the Interacting Multiple Model (IMM) <cit.><cit.> approach based on multiple Kalman filter models.
It shows robust behavior with respect to model mismatch <cit.>. To make the different model states compatible, the IMM state is lifted by merging the individual state spaces <cit.>.
A probability score evaluates at every tracking step how well each set of models, i.e., the pedestrian and bicycle models, fits the perception. Additionally, the object class predictions of the NN classifier are available. We combine both indicators to label the class of the IMM-tracked object and achieve a classification precision of at least 0.97. Concerning the MOTA and MOTP tracking scores <cit.>, adding person detections increases the cyclist tracking accuracy (MOTA) of the IMM approach from 32.5% to 97.6%. Furthermore, the precision (MOTP) improves from 15.5cm to 7.6cm by mixing in the pedestrian models.
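The part of the IMM recursion that produces such a class probability is the model-probability update. The sketch below isolates only that bookkeeping; the per-model Kalman filters, the state mixing, and the NN class scores are omitted, and the transition matrix and likelihood values are made up purely for illustration.

import numpy as np

def imm_model_probabilities(mu, Pi, likelihoods):
    """One IMM cycle for the model probabilities only.

    mu          -- current model probabilities, shape (M,)
    Pi          -- Markov model-transition matrix, Pi[i, j] = P(model j | model i)
    likelihoods -- measurement likelihood under each model's Kalman filter, shape (M,)
    """
    c = Pi.T @ mu                    # predicted (mixed) model probabilities
    mu_new = likelihoods * c         # Bayes update with each filter's likelihood
    return mu_new / mu_new.sum()

# two models (pedestrian-like, bicycle-like) with sticky transitions
Pi = np.array([[0.95, 0.05],
               [0.05, 0.95]])
mu = np.array([0.5, 0.5])
for lik in ([0.2, 1.4], [0.1, 1.9], [0.3, 1.2]):   # the bicycle model explains the data better
    mu = imm_model_probabilities(mu, Pi, np.array(lik))
print(mu)   # probability mass shifts towards the second (bicycle) model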
§.§ Context Data
To achieve highly reliable VRU predictions, additional information about the current situation is essential. Therefore, we use additional sensors to perceive different types of context data in real-time. For example, a weather station provides features like temperature, wind, precipitation, and visibility. A traffic light signal tracker provides all traffic light signal statuses. Furthermore, the camera system can extract VRU human body poses, providing additional information for viewing direction detection and gesture recognition. In addition, the intersection was measured by combining photogrammetry and road-level laser scans resulting in a highly accurate digital model with a better than 1cm textural resolution and a 3cm or better structural resolution. The LiDAR reference map and an exact Open Street Map (OSM) are derived from this model. Besides VRU and other road user trajectories, our setup provides a comprehensive set of additional context information to achieve a precise environmental perception of every traffic situation.
§.§ Dataset Format and Tools
Our goal is to ensure that the dataset is easy to use. Therefore, we provide all data synchronized and in the established JSON format. A global keyword catalog will enable fast scene browsing, including the number of tracks and the weather, lighting, and seasonality attributes. Every scene will include the following scope:
High-level data: A timestamp-ordered Json-file including all detected and classified objects represented by their current 3D world coordinates and corresponding meta attributes like detection scores. Furthermore, all weather- and traffic light signal data is included. Next, the intersection topology is given by a precise OSM reference map. The map represents all static circumstances. Finally, additional visual support is given by a scene preview video for a better scene understanding. All six camera images and a trajectory top-view presentation are stitched together for the preview. <ref> depicts a sample frame of a scene preview video.
Low-level data: Object detection lists and human key point annotations are provided for every frame. Furthermore, the LiDAR sensors' point clouds will be included. Every point is described by 3D world coordinates and its reflectivity (x, y, z, r). In addition, object detection lists are provided for every point cloud. Finally, the extrinsic and intrinsic camera parameters are included, just as all 61 survey points. The low-level data can be used to extract additional context information from sensor data if necessary and not already provided.
§ DATASET
After describing our research intersection and data processing pipeline in the previous section, the following chapter shifts the focus toward the IMPTC dataset. First, <ref> will give a detailed dataset overview. Afterward, <ref> compares our dataset with existing ones.
§.§ IMPTC at a Glance
In <ref>, we elaborate that there is a clear need for more detailed and complete datasets in VRU behavior prediction and trajectory forecasting, especially in critical urban traffic situations like intersections. It is the goal of our dataset to fill this gap. Therefore, we provide one of the most extensive trajectory datasets so far. IMPTC will be released with an initial number of 250 sequences recorded in 2022 and beyond. Each sequence has an average length of 90 to 120 seconds and represents public everyday traffic situations at our intersection. The dataset includes 2,500 VRU- and 20,000 vehicle trajectories. VRUs are classified into subgroups: pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users. <ref> illustrates 100 randomly selected VRU trajectories and corresponding vehicle tracks. Vehicles are divided into cars, motorbikes, and trucks/buses. An unknown class is also included representing uncommon corner cases. Besides object classification and 3D trajectories, IMPTC contains multiple types of additional context information and low-level sensor data like LiDAR point clouds as described in <ref> and <ref>. The initial set will be extended over time to enlarge the dataset's variability, including different daytime and light conditions, weather and seasonality conditions, and VRU amounts and types. The dataset's goal is to support researchers within the topics of VRU trajectory and behavior/intention prediction, traffic flow understanding, and social-behavioral analysis in complex urban traffic scenarios.
§.§ Comparison with Existing Datasets
In <ref>, we compare the IMPTC dataset with currently available VRU-focused datasets: SSD <cit.>, CITR+DUT <cit.>, DeCoInt^2 <cit.>, inD <cit.>, PIE <cit.>, JAAD <cit.>, and PVI <cit.>. No dataset meets all requirements. A comprehensive roadside VRU trajectory dataset should include a balanced number of road user trajectories at different weather, lighting, and seasonality conditions. Furthermore, additional context information should be included to create the best environmental perception model possible as the basis for VRU behavior prediction. Regarding the total number of available VRU trajectories, our initial set cannot match inD, PVI, or SSD. Nevertheless, IMPTC covers the broadest range of object classes, eight in total, with five VRU subclasses, including new means of transportation like e-scooters. Only a few of the compared datasets include a wide range of weather or seasonality; sunny and cloudy weather is omnipresent. Thanks to our LiDAR sensors, we can record data under poor weather and lighting conditions and at night. PIE, JAAD, and PVI provide additional context information, but with varying scopes. In <ref>, we detailed the context information included in IMPTC, which exceeds that of all other datasets. Our dataset is the most balanced and extensive roadside VRU dataset yet. All information, download links, detailed descriptions, instructions, and additional code will be available at <https://github.com/kav-institute/imptc-dataset>.
§ CONCLUSION
This paper has motivated the need for an extensive VRU trajectory dataset at urban intersections, a need that other publicly available datasets do not yet meet. We have shown that the available datasets use different recording strategies, e.g., static setups, drones, or research vehicles, and vary in size and scope of provided trajectory data. So far, these datasets do not account for important additional context information, i.e., static circumstances, weather, traffic light signals, or VRU body poses, which is necessary for reliable VRU behavior prediction. Furthermore, we subdivide VRUs into five groups, i.e., pedestrians, cyclists, e-scooter riders, strollers, and wheelchair users, for a more precise analysis of the movement behavior. Our dataset fills that gap and provides researchers with the densest environmental perception available. After implementing a complete processing pipeline, we used that pipeline to create the IMPTC dataset. The initial set contains 250 scenes with over eight hours of recorded data and 2,500 VRU trajectories. The set will be continuously extended. We surpassed any comparable dataset regarding data scope in high-level and low-level aspects and outlined multiple application domains in which the IMPTC dataset supports researchers. The dataset will be released after the conference date, and it will be updated at regular intervals.
00
jaadKotseruba, I. & Rasouli, A. Joint Attention in Autonomous Driving (JAAD). arXiv preprint arXiv:1609.04741 (2020)
shift_2022Sun, T. & Segu, M. SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain Adaptation. Computer Vision And Pattern Recognition (CVPR). (2022)
inDBock, J. & Krajewski, R. The inD Dataset: A Drone Dataset of Naturalistic Road User Trajectories at German Intersections. 2020 IEEE Intelligent Vehicles Symposium (IV). pp. 1929-1934 (2020)
carla_2017Dosovitskiy, A. & Ros, G. CARLA: An Open Urban Driving Simulator. 1st Annual Conference On Robot Learning. pp. 1-16 (2017)
cityscapes_2016Cordts, M. & Omran, M. The Cityscapes Dataset for Semantic Urban Scene Understanding. Proc. Of The IEEE Conference On Computer Vision And Pattern Recognition (CVPR). (2016)
nuscenesCaesar, H. & Bankiti, V. nuScenes: A multimodal dataset for autonomous driving. ArXiv Preprint ArXiv:1903.11027. (2019)
euro_pviBhattacharyya, A. & Reino, D. Euro-PVI: Pedestrian Vehicle Interactions in Dense Urban Centers. IEEE Conference On Computer Vision And Pattern Recognition (CVPR). pp. 6408-6417 (2021)
surveyShirazi, M. & Morris, B. Looking at Intersections: A Survey of Intersection Monitoring, Behavior and Safety Analysis of Recent Studies. IEEE Transactions On Intelligent Transportation Systems. pp. 1-21 (2016,8)
koperGoldhammer, M. & Stiegel, E. Cooperative multi sensor network for traffic safety applications at intersections. 2012 15th International IEEE Conference On Intelligent Transportation Systems. pp. 1178-1183 (2012)
decointBieshaar, M. & Reitberger, G. Detecting Intentions of Vulnerable Road Users Based on Collective Intelligence. CoRR. abs/1809.03916 (2018)
reitbergerReitberger, G. & Zernetsch, S. Cooperative Tracking of Cyclists Based on Smart Devices and Infrastructure. 2018 21st International Conference On Intelligent Transportation Systems (ITSC). pp. 436-443 (2018)
bieshaarBieshaar, M. & Zernetsch, S. Cooperative Starting Movement Detection of Cyclists Using Convolutional Neural Networks and a Boosted Stacking Ensemble. CoRR. abs/1803.03487 (2018)
zernenschZernetsch, S. & Reichert, H. Trajectory Forecasts with Uncertainties of Vulnerable Road Users by Means of Neural Networks. 2019 IEEE Intelligent Vehicles Symposium (IV). pp. 810-815 (2019)
kressKress, V. Human Pose Estimation in Real Traffic Scenes. IEEE Symposium Series On Computational Intelligence (SSCI). pp. 518-523 (2018)
koper_dataStrigel, E. & Meissner, D. The Ko-PER intersection laserscanner and video dataset. 17th International IEEE Conference On Intelligent Transportation Systems (ITSC). pp. 1900-1901 (2014)
decoint_datasetZernetsch, S. VRU Trajectory Dataset. https://www.th-ab.de/vru-trajectory-dataset (2020)
dlrDLR AIM Research Intersection: Instrument for traffic detection and behavior assessment for a complex urban intersection. Journal Of Large-scale Research Facilities. Volume 2 pp. A65 (2016)
continentalAG, C. Continental Launches Smart City Mobility and Transportation Hub for Safer and Smarter Cities. (https://www.continental.com/en-us/press-/press-releases/smart-city-mobility-205048) (2018)
ulmUlm University. Pilotanalage für vernetztes Fahren. (https://www.uni-ulm.de/in/mrm/forschung/infrastruktur/pilotanlage-fuer-vernetztes-fahren) (2019)
rounDKrajewski, R. & Moers, T. The rounD Dataset: A Drone Dataset of Road User Trajectories at Roundabouts. IEEE 23rd International Conference On Intelligent Transportation Systems (ITSC). pp. 1-6 (2020)
highDKrajewski, R. & Bock, J. The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems. 2018 21st International Conference On Intelligent Transportation Systems (ITSC). pp. 2118-2125 (2018)
ssdRobicquet, A. & Sadeghian, A. Learning social etiquette: Human trajectory understanding in crowded scenes. European Conference On Computer Vision. pp. 549-565 (2016)
citrYang, D. & Ozguner, U. Top-view Trajectories: A Pedestrian Dataset of Vehicle-Crowd Interaction from Controlled Experiments and Crowded Campus. (2019)
waymo_predictionSun, P. & Kretzschmar, H. Scalability in Perception for Autonomous Driving: Waymo Open Dataset. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition (CVPR). (2020,6)
bdd100kYu, F., Chen, H. & Others. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. CVPR pp. 2633-2642 (2020)
onceMao, J., Niu, M. & Others One Million Scenes for Autonomous Driving: ONCE Dataset. NeurIPS. (2021)
pieRasouli, A. & Kotseruba, I. PIE: A Large-Scale Dataset and Models for Pedestrian Intention Estimation and Trajectory Prediction. International Conference On Computer Vision (ICCV). (2019)
ethPellegrini, S. & Ess, A. You'll never walk alone: Modeling social behavior for multi-target tracking. 2009 IEEE 12th International Conference On Computer Vision. pp. 261-268 (2009)
ucyLerner, A. & Chrysanthou, Y. Crowds by example. Computer Graphics Forum. pp. 655-664 (2007)
atcBrscic, D., Kanda, T., Ikeda, T. & Miyashita, T. Person Tracking in Large Public Spaces Using 3-D Range Sensors. Human-Machine Systems, IEEE Transactions On. pp. 522-534 (2013)
gcYi, S. & Li, H. Understanding pedestrian behaviors from stationary crowd groups. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 3488-3496 (2015)
xungHetzel, M. & Reichert, H. Smart infrastructure: A research junction. IEEE International Smart Cities Conference (ISC2). pp. 1–4 (2021)
faster_rcnnLin, T. & Dollár, P. Feature Pyramid Networks for Object Detection. CoRR. abs/1612.03144 (2016), http://arxiv.org/abs/1612.03144
hartley_2004Hartley, R. & Zisserman, A. Multiple View Geometry in Computer Vision. (Cambridge University Press,2004)
openCV3_kaehlerKaehler, A. & Bradski, G. Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library. (O'Reilly Media, Inc.,2016)
nelder_meadNelder, J. & Mead, R. A Simplex Method for Function Minimization. The Computer Journal. 7, 308-313 (1965,1)
arun_1987Arun, K. & Huang, T. Least-Squares Fitting of Two 3-D Point Sets. IEEE Transactions On Pattern Analysis And Machine Intelligence. PAMI-9, 698-700 (1987)
imm_blair_shalomBlair, W. & Bar-Shalom, T. Tracking maneuvering targets with multiple sensors: does more data always mean better estimates?. IEEE Transactions On Aerospace And Electronic Systems. 450-456 (1996)
imm_genoveseGenovese, A. The interacting multiple model algorithm for accurate state estimation of maneuvering targets. Johns Hopkins APL Technical Digest (Applied Physics Laboratory). Volume 22 pp. 614-623 (2001,10)
bernardin_2008Bernardin, K. & Stiefelhagen, R. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal On Image And Video Processing. 2008, 246-309 (2008)
|
http://arxiv.org/abs/2307.05751v1 | 20230709165255 | Exponential Resummation of QCD at finite chemical potential | [
"Sabarnya Mitra"
] | hep-ph | [
"hep-ph",
"hep-lat",
"nucl-ex",
"nucl-th"
] | |
http://arxiv.org/abs/2307.06314v2 | 20230712172255 | Coexistence of Competing Microbial Strains under Twofold Environmental Variability and Demographic Fluctuations | [
"Matthew Asker",
"Lluís Hernández-Navarro",
"Alastair M. Rucklidge",
"Mauro Mobilia"
] | q-bio.PE | [
"q-bio.PE",
"cond-mat.stat-mech",
"nlin.AO",
"physics.bio-ph"
] |
[email protected]
[email protected]
Department of Applied Mathematics, School of Mathematics, University of Leeds, Leeds LS2 9JT, United Kingdom
https://eedfp.com
Microbial populations generally evolve in volatile environments,
under conditions fluctuating between harsh and mild, e.g. as the result of sudden changes in toxin concentration or nutrient abundance. Environmental variability
thus shapes the population long-time dynamics, notably by influencing the ability of different strains of microorganisms to coexist.
Inspired by the evolution of antimicrobial resistance, we study the dynamics
of a community consisting of two competing strains subject to twofold environmental variability. The level of toxin varies in time, favouring the growth of one strain
under low levels and the other strain when the toxin level is high.
We also model time-changing resource abundance
by a randomly switching carrying capacity that drives the fluctuating size of the community. While one strain dominates in a static environment, we show that species
coexistence is possible in the presence of
environmental variability. By computational and analytical means, we determine the environmental conditions under which long-lived coexistence is possible and when it is almost certain. Notably, we study the circumstances under which environmental and demographic
fluctuations promote, or hinder, the strains coexistence. We also determine how the make-up
of the coexistence phase and the average abundance of each strain
depend on the environmental variability.
Coexistence of Competing Microbial Strains under Twofold Environmental Variability and Demographic Fluctuations
Mauro Mobilia
August 12, 2023
===============================================================================================================
§ INTRODUCTION
Microbial communities evolve in volatile environments that often fluctuate between mild and harsh conditions. For instance, the concentration of toxin and the abundance of nutrients
in a community can suddenly and radically change <cit.>. This results
in environmental variability (EV) that shapes the population
evolution <cit.>. In particular, EV greatly influences the ability of species to coexist <cit.>, which is a characteristic of key importance in biology and ecology, with direct applications
in subjects of great societal concern <cit.> like the maintenance of biodiversity in ecosystems <cit.> and the evolution of antimicrobial resistance (AMR) <cit.>.
In the absence of detailed
knowledge about the time-variation of external factors, EV is generally
modelled by means of noise terms affecting the species growth and/or death rates <cit.>. Demographic noise (DN) is another important source of fluctuations: it can lead to fixation, which is the phenomenon arising when one strain takes over the entire community. The effect of DN is significant in communities of small size, and becomes negligible in large populations <cit.>.
Significantly, the time development of the size and composition of
populations are often interdependent <cit.>, with fluctuations of the population size modulating the strength of DN <cit.>.
The interplay between EV and DN is crucial in shaping microbial communities, but the quantitative effects of their coupling
are as yet still mostly unknown.
Environmental and demographic fluctuations play a crucial role in the evolution of AMR, when treatments reduce a microbial community to a very small size, but fail to eradicate the microorganisms resistant to the drug <cit.>. After antibiotic treatment finishes, resistant cells in the small community, may replicate and restore infection, hence possibly leading to the spread of antibiotic resistance. On the other hand, with a small population, DN may lead to the extinction of the resistant strain. How the coexistence of
cells resistant and sensitive to antibiotics
is affected by the joint effect of EV and DN,
and how the fraction of resistant cells depends on environmental conditions, are thus central questions in the effort to understand the evolution of AMR <cit.>.
Here, inspired by the AMR evolution in a chemostat setup <cit.>,
we study the eco-evolutionary dynamics
of an idealised microbial community consisting of two competing strains subject to a time-varying level of toxin, with the growth of one strain favoured
under low toxin level and a selective advantage to the other strain under high toxin level. We also assume that the resource abundance varies according to a time-switching
carrying capacity that drives the fluctuating size of the community.
In most previous works, EV is either encoded in fluctuating growth rates, with the size or carrying capacity of the population kept constant
<cit.>, or EV is modelled
by a time-varying carrying capacity that affects the species death rates and drives the population size <cit.> (see also <cit.>). The distinctive feature of this study is the twofold EV
accounting for environmental fluctuations stemming from the variation of the
toxin level and the switches of the
carrying capacity resulting in the coupling of DN and EV; see Fig. <ref>.
We determine the fixation-coexistence diagrams of the system, and these allow us to determine the environmental conditions under which long-lived coexistence of the strains is possible or certain, and when one strain dominates the other. We also analyse the make-up
of the population when the strains coexist, and their average abundance.
The organisation of the paper is as follows: the model is introduced in Sec. <ref>. Sec. <ref> is dedicated to the
study of the case with a constant carrying capacity (subject to a static or varying toxin level) by means of a mean-field analysis and a mapping onto a suitable Moran process. The twofold influence of time-varying fitness and carrying capacity on the
coexistence and fixation of the species is analysed in Sec. <ref>. Sec. <ref> is dedicated to the influence of the EV on the make-up of the coexistence phase and strains abundance. We present our conclusions in Sec. <ref>. Additional technical details are given in the supplementary material (SM) <cit.>.
§ MODEL
Here, we consider a well-mixed population of fluctuating size N(t)=N_R(t)+N_S(t) which, at time t, consists of N_R bacteria of strain R and N_S of type S, which compete for the same resources. The former refers to a strain that can resist a certain toxin, and the latter to microorganisms sensitive to that toxin.
Based on mounting evidence showing that microbial communities
generally evolve in volatile environments <cit.>, we study the eco-evolutionary dynamics of this population under twofold environmental variability: external conditions fluctuate between harsh and mild, and affect
the level of toxin and resources that are available in the population; see Fig. <ref>.
For concreteness, we assume that
the toxin is biostatic and reduces the growth rate of the sensitive strain, but does not affect the resistant bacteria <cit.>[The case where the toxin increases the death rate of the strain S corresponds to a biocidal toxin, and is not directly considered here. This is not particularly limiting since the same drug can often act as a biostatic or biocidal toxin at low/high concentration <cit.>.].
In this setting, resistant R bacteria have
a constant fitness f_R, whereas the sensitive S bacteria have an environment-dependent fitness f_S(ξ_T),
where ξ_T(t) is a time-varying environmental random variable encoding the toxin level: ξ_T>0 for the low toxin level and ξ_T < 0 for the high toxin level. As in previous studies <cit.>, we here consider
f_R=1 and f_S=exp(s ξ_T),
where s>0 denotes the selection bias favouring the strain S when ξ_T>0, and strain R when ξ_T<0. The parameter s therefore
encodes both the selection and
strength of the environmental variability associated with the changes in toxin level (T-EV).
As in many recent theoretical studies <cit.>,
T-EV is here modelled by coloured
dichotomous Markov noise (DMN) <cit.>, so that ξ_T∈{-1,1}; see below. DMN aptly models suddenly changing conditions occurring in bacterial life, like the environmental stress resulting from exposure to antibiotics <cit.>, and is amenable to
mathematical analysis <cit.>, as well as to laboratory-controlled experimental probes <cit.>.
The environmental effect on the level of nutrients (K-EV), fluctuating between scarcity and abundance, is modelled by a binary switching carrying capacity K(t)∈{K_-,K_+}
that is driven by the binary random variable (also following a DMN process) ξ_K(t)∈{-1,1}. The state ξ_K=-1 thus corresponds to a harsh state with scarce resources, where the carrying capacity is K_-, whereas nutrients are abundant in the mild state ξ_K=+1 where the carrying capacity is K_+> K_-≫ 1. As in <cit.>, this is encoded in the time-switching carrying capacity
K(t)=1/2[K_+ +K_- +ξ_K(t)(K_+ - K_-)],
which, with K_0 ≡(K_+ + K_-)/2 and γ = (K_+ - K_-)/(2K_0),
can conveniently be written as
K(t)=K_0[1 +γξ_K(t)].
This randomly switching carrying capacity
drives the population size N(t) and is hence responsible for its fluctuations, with K(t)≫ 1 ensuring that the population dynamics is never solely dominated by demographic fluctuations.
The population thus evolves under twofold EV encoded in the
environmental states ξ_T(t),ξ_K(t), see Fig. <ref>, subject to
time switching according to the reactions
ξ_α=+1 → -1 with rate ν_α^+, and ξ_α=-1 → +1 with rate ν_α^-,
where ν_α^± are the switching rates of the α-DMN, with α∈{T,K} indicating the relevant environmental noise.
It is also useful to define the average switching rates
ν_α and switching biases δ_α for each
α-DMN as
ν_α≡(ν_α^- + ν_α^+)/2 and δ_α = (ν_α^- - ν_α^+)/(2ν_α),
such that ν_α^±=ν_α(1∓δ_α).
This means that δ_T>0 corresponds to a bias towards low toxin level (mild T state, ξ_T=+1) favouring the S strain, whereas δ_T<0 indicates a bias towards high toxin level
(harsh T state, ξ_T=-1) where the growth of S is hampered and the spread of R is favoured, see Fig. <ref>. Similarly, δ_K>0 corresponds to
bias towards the environmental state rich in nutrients (where K=K_+), while
δ_K<0 is associated with a bias towards an environment where nutrients are scarce (K=K_-). In all cases, we consider α-DMN at stationarity, where
ξ_α=± 1 with probability (1±δ_α)/2,
yielding the average ξ_α=δ_α and
autocorrelation ξ_α(t)ξ_α(t') = (1-δ_α^2)exp(-2ν_α|t-t'|), where · denotes the α-DMN ensemble average <cit.>.
From Eq. <ref>, it is useful to note that the average carrying capacity is ⟨ K⟩=K_0(1+γδ_K)
and the variance of K(t)
is (K_0γ)^2(1-δ_K^2), with the amplitude of K-EV thus scaling as K_0γ, while the variance and amplitude of the T-EV increase with s; see Sec. SM1 in <cit.>.
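To make the statistics of the α-DMN concrete, the following minimal Python sketch (not part of the original analysis; the function name and parameter values are ours and purely illustrative) samples ξ_K(t), and checks the stationary mean ⟨ξ_K⟩=δ_K as well as the mean and variance of the resulting carrying capacity K(t)=K_0[1+γξ_K(t)].

import numpy as np

rng = np.random.default_rng(1)

def sample_dmn(nu, delta, t_max, dt):
    # Dichotomous Markov noise xi(t) in {-1,+1}, sampled on a regular time grid.
    # Switching rates: nu^+ = nu*(1-delta) out of xi=+1 and
    #                  nu^- = nu*(1+delta) out of xi=-1, so that <xi> = delta.
    rate_out = {+1: nu * (1.0 - delta), -1: nu * (1.0 + delta)}
    xi = +1 if rng.random() < 0.5 * (1.0 + delta) else -1   # stationary start
    t_switch = rng.exponential(1.0 / rate_out[xi])
    samples = []
    for t in np.arange(0.0, t_max, dt):
        while t_switch < t:                  # apply all switches up to time t
            xi = -xi
            t_switch += rng.exponential(1.0 / rate_out[xi])
        samples.append(xi)
    return np.array(samples)

nu_K, delta_K, gamma, K0 = 1.0, 0.3, 0.5, 200.0   # illustrative values only
xi_K = sample_dmn(nu_K, delta_K, t_max=5.0e3, dt=0.05)
K = K0 * (1.0 + gamma * xi_K)
print("<xi_K> =", xi_K.mean(), " vs delta_K =", delta_K)
print("<K>    =", K.mean(), " vs K0*(1+gamma*delta_K) =", K0 * (1 + gamma * delta_K))
print("var(K) =", K.var(), " vs (K0*gamma)^2*(1-delta_K^2) =",
      (K0 * gamma) ** 2 * (1 - delta_K ** 2))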
For concreteness, we here assume that
ξ_T and ξ_K are totally uncorrelated. In our motivating example, this corresponds to the reasonable assumption that nutrient and antibiotic uptake are independent processes. The case where ξ_T and ξ_K are fully correlated or anti-correlated, with ξ_T=ξ_K=ξ or ξ_T=-ξ_K=ξ, where ξ is a single DMN process, is briefly discussed in the Supplementary Material, see Sec. SM7 in <cit.>.
The system considered here best translates to a chemostat setup whereby toxin and nutrient levels can be maintained at a constant level through time and switched via changing concentrations of medium coming into the system <cit.>. The switch ξ_T→ -ξ_T with ξ_T=-1 can thus be envisioned as corresponding to switching the concentration of an antibiotic drug from
above the minimum inhibitory concentration (MIC),
where the growth of the sensitive strain is hampered, to a concentration below the MIC where the S strain can spread at the expense of R <cit.>.
At time t the fraction of R-types in the system is x(t)=N_R(t)/N(t) and the average population fitness is
f(x,ξ_T)=x+(1-x)exp(sξ_T), which depends on the population composition x and the toxin state ξ_T. We assume that mutation rates between strains are negligible, and seek to characterise the population dynamics by the evolution of its size and composition according to the multivariate birth-death process <cit.>
N_R/S → N_R/S + 1 and N_R/S → N_R/S - 1,
where
the time-dependent birth and death transition rates are respectively
T_R/S^+ = (f_R/S/f) N_R/S and T_R/S^-= (N/K) N_R/S.
The per-capita birth rates f_R/S/f (where we normalise with f in line with the standard Moran process) thus vary with the toxin level
and population composition, while the logistic-like per capita death rate
N/K varies with nutrient level and population size.
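For illustration, the birth and death rates defined above can be encoded as follows; this is a minimal Python sketch of ours (the function names are not the authors') that simply mirrors the definitions of f_R, f_S, f and T^±_R/S.

import numpy as np

def fitnesses(x, s, xi_T):
    # f_R = 1, f_S = exp(s*xi_T), and average fitness f = x + (1-x)*f_S.
    f_S = np.exp(s * xi_T)
    f_avg = x + (1.0 - x) * f_S
    return 1.0, f_S, f_avg

def transition_rates(N_R, N_S, s, xi_T, K):
    # Birth rates T^+_{R/S} = (f_{R/S}/f) * N_{R/S} and
    # death rates T^-_{R/S} = (N/K) * N_{R/S}.
    N = N_R + N_S
    f_R, f_S, f_avg = fitnesses(N_R / N, s, xi_T)
    births = {"R": f_R * N_R / f_avg, "S": f_S * N_S / f_avg}
    deaths = {"R": N * N_R / K, "S": N * N_S / K}
    return births, deaths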
With 𝐍≡(N_R,N_S),
the master equation
giving the probability P(𝐍,ξ_T,ξ_K,t) for the population to consist of N_R and N_S
bacteria of type R and S, respectively,
in
the environmental state (ξ_T,ξ_K) at time t is
∂ P(𝐍,ξ_T,ξ_K,t)/∂ t = ( 𝔼_R^–1)[T^+_R P(𝐍,ξ_T,ξ_K,t)]
+( 𝔼_S^–1)[T^+_S P(𝐍,ξ_T,ξ_K,t)]
+
( 𝔼_R^+-1)[T^-_R P(𝐍,ξ_T,ξ_K,t)]
+( 𝔼_S^+-1)[T^-_S P(𝐍,ξ_T,ξ_K,t)]
+ ν_T^-ξ_T P(𝐍,-ξ_T,ξ_K,t)-ν_T^ξ_T P(𝐍,ξ_T,ξ_K,t)
+ ν_K^-ξ_K P(𝐍,ξ_T,-ξ_K,t)-ν_K^ξ_K P(𝐍,ξ_T,ξ_K,t),
where 𝔼^±_R/S are shift operators such that
𝔼^±_R f(N_R,N_S,ξ_T,ξ_K,t) =f(N_R± 1,N_S,ξ_T,ξ_K,t), and
ν_α^ξ_α≡ν_α^± when ξ_α=± 1. We note that P(𝐍,ξ_T,ξ_K,t)=0 whenever N_R<0 or N_S<0, and the last two lines on the right-hand-side of Eq. <ref>
account for the random environmental switching of toxin (ξ_T→ -ξ_T) and carrying capacity (ξ_K→ -ξ_K).
Since T^±_R/S=0 whenever N_R/S=0, there is extinction of R and fixation
of S (N_R=0, N=N_S), or fixation of R and extinction of S (N_S=0, N=N_R). When one strain fixates and replaces the other, the population composition no longer changes while its size continues to fluctuate[
Finally, the population will settle in the absorbing state N_R=N_S=0 corresponding to the eventual extinction of the
entire population. This occurs after a time that grows exponentially with the system size <cit.>. This phenomenon, irrelevant for our purposes (since we always have K(t)≫ 1), is not considered here.
].
Fixation of one strain and extinction of the other is expected
when strains compete for the same resources (competitive exclusion principle), and always occurs in a finite population even when its size fluctuates <cit.>. In stark contrast, here we show that environmental fluctuations can lead to the long-lived coexistence of competing species
and nontrivially shape the abundance distribution of both strains.
§ CONSTANT CARRYING CAPACITY: MEAN-FIELD ANALYSIS AND MORAN PROCESS
Since ξ_T and ξ_K are independent,
it is useful to first consider the case of a constant
carrying capacity, with environmental variability stemming only from the fluctuations of the toxin level in the birth rates of Eq. (<ref>).
In this section, we thus assume that the carrying capacity is constant and large: K(t)=K_0≫ 1. After a short transient the population size
fluctuates about K_0, with N≈ K_0. When K_0≫ 1, we can approximate the population size by N=K_0
and make analytical progress by using the well-known results of the Moran process <cit.>.
In this approximation, the population is kept constant, which requires the simultaneous birth and death
of individuals of either species, and the population evolves according to a fitness-dependent Moran process <cit.>, defined in terms of Eqs. <ref> by the reactions
(N_R,N_S) → (N_R + 1, N_S - 1),
(N_R,N_S) → (N_R - 1,N_S + 1),
corresponding, respectively, to the simultaneous birth of an R and death of an S with
rate T_R^+, and death of an R and birth of an S with
rate T_R^-, where
T_R^+ = T_R^+ T_S^-/N=Nx(1-x)f_R/f(t),
T_R^- =T_R^- T_S^+/N=Nx(1-x)f_S(t)/f(t).
§.§ Mean-field analysis
We now consider the case where N=K_0→∞, and thus ignore demographic fluctuations. In this case, the population composition evolves according to the mean-field equation <cit.>:
ẋ= T_R^+ - T_R^-/N
=x(1-x)(1-e^sξ_T/x+(1-x)e^sξ_T),
where the dot denotes the time derivative. It is important to notice that, owing to
environmental noise
ξ_T,
Eq. (<ref>) is a mean-field stochastic differential equation that defines a so-called “piecewise deterministic Markov process” (PDMP) <cit.>:
according to this PDMP,
after a switch to an environmental state ξ_T,
x evolves deterministically with
Eq. (<ref>) and a fixed value of ξ_T,
until a switch ξ_T→-ξ_T occurs, see Sec. <ref>.
We consider Eq. (<ref>) in the regimes of (i) low, (ii) high, and (iii) intermediate switching rate ν_T:
(i) Under low switching rate, ν_T→ 0, the population settles in its final state without experiencing any T-switches: it reaches its final state in its initial toxin level ξ_T(0), i.e. ξ_T(0)=ξ_T(∞)=± 1 with probability (1±δ_T)/2. In this regime,
Eq. (<ref>) thus boils down to
ẋ
=
-x(1-x) (e^s-1)/x+(1-x)e^s with probability 1+δ_T/2
x(1-x)(1-e^-s)/x+(1-x)e^-s with probability 1-δ_T/2.
Since s>0,
with a probability (1+ δ_T)/2 we have ξ_T(0)=ξ_T(∞)=+1 and x→ 0
(R vanishes), while with a probability (1- δ_T)/2 we have ξ_T(0)=ξ_T(∞)=-1 and x→ 1 (S vanishes). In either case, the mean-field dynamics are characterised by the dominance of one of the strains. Therefore, in the absence of demographic fluctuations, there is never long-lived coexistence of
the strains R and S under low switching rate of the toxin level.
(ii) Under high switching rate, ν_T≫ 1, the population experiences a large number of T-switches before relaxing into its final state; see below. In this case
the T-DN self-averages <cit.> and we are left with a Moran process defined by the effective rates
T_R^± obtained by averaging ξ_T over its stationary distribution, yielding
T_R^+ =Nx(1-x)/2(1+δ_T/x+(1-x)exp(s) +1-δ_T/x+(1-x)exp(-s)),
T_R^- =Nx(1-x)/2((1+δ_T)exp(s)/x+(1-x)exp(s)+(1-δ_T)exp(-s)/x+(1-x)exp(-s)).
When N→∞, the mean-field (MF) rate equation associated with this effective Moran process
thus reads <cit.>:
ẋ =T_R^+ -T_R^-/N
=x(1-x)/2[(1+δ_T) (1-exp(s))/x+(1-x)exp(s) + (1-δ_T)(1-exp(-s))/x+(1-x)exp(- s)],
where the right-hand-side (RHS) can be interpreted as the RHS of Eq. (<ref>) averaged over ξ_T. In addition to the trivial fixed points x=0, 1, Eq. (<ref>) admits a coexistence equilibrium
x^*=1/2-δ_T/[2tanh(s/2)],
when -tanh(s/2)<δ_T<tanh(s/2).
This equilibrium stems from the T-DMN and thus is a fluctuation-induced coexistence point. In the case of large s we have tanh(s/2)→1, and x^* exists (0<x^*<1) for all values of δ_T.
Since dẋ/dx|_x^*=-[4/(1-δ_T^2)]tanh^2(s/2) x^*(1-x^*)
is negative, linear stability analysis
reveals that x^* is the sole asymptotically stable equilibrium of Eq. (<ref>) when it exists (x=0,1 are thus unstable).
When s≪ 1, 1/tanh(s/2)→2/s
and x^* exists only for -s/2<δ_T<s/2.
This means that for s≪ 1, coexistence is essentially possible only under symmetric switching (δ_T=0), see Sec. SM1 in <cit.>. In what follows, we focus
on the less restrictive case s= O(1), for which coexistence is possible for a broad range of parameters (ν_T,δ_T); a numerical illustration of the relaxation to x^* in this fast-switching regime is sketched after item (iii) below.
(iii) In the regime of intermediate switching rate, where ν_T∼ 1, the population experiences
a finite number of T-switches prior to settling in its final state. Depending
on this number, as well as the selection strength s and the population size,
the dynamics may be closer to either the low or high ν_T regime, with dominance or coexistence possible but not certain; see Fig. <ref> below.
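As an illustration of the fast-switching regime (ii), the minimal sketch below (ours, with purely illustrative parameter values) integrates the self-averaged mean-field equation with a crude forward-Euler scheme and confirms the relaxation towards the coexistence equilibrium x^*.

import numpy as np

def xdot_averaged(x, s, delta_T):
    # RHS of the self-averaged mean-field equation (fast T-switching regime).
    es, ems = np.exp(s), np.exp(-s)
    return 0.5 * x * (1.0 - x) * ((1.0 + delta_T) * (1.0 - es) / (x + (1.0 - x) * es)
                                  + (1.0 - delta_T) * (1.0 - ems) / (x + (1.0 - x) * ems))

s, delta_T = 1.0, 0.3      # with |delta_T| < tanh(s/2) the interior fixed point exists
x, dt = 0.9, 1.0e-2        # arbitrary interior initial composition, Euler time step
for _ in range(100_000):
    x += dt * xdot_averaged(x, s, delta_T)

x_star = 0.5 - delta_T / (2.0 * np.tanh(0.5 * s))
print("numerical fixed point:", x)       # approximately 0.175 for these parameters
print("analytical x^*       :", x_star)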
§.§ Finite populations - fixation and long-lived coexistence
From the MF analysis, we have found that when N→∞ species coexistence is feasible under fast T-EV switching, whereas only dominance occurs under slow switching. Here, we study how these results nontrivially morph when the population is fixed and finite.
Since the model is defined as a finite Markov chain with absorbing boundaries, see Eqs. (<ref>) and (<ref>), its final state unavoidably
corresponds
to the fixation of one strain and the extinction of the other, i.e. the population eventually ends up in either the state (N_R,N_S)=(N,0) or (N_R,N_S)=(0,N)
<cit.>. This means that, strictly, the finite population does not admit stable coexistence: when it exists, x^* is metastable <cit.>. In fact,
while it is guaranteed that eventually only one of the strains will finally survive, fixation can occur after a very long time and can follow a long-term coexistence of the strains, as suggested by the MF analysis of the regime with ν_T≫ 1. It is thus relevant to study under which circumstances there is long-lived coexistence of the strains.
The evolutionary dynamics is characterised by the fixation probability of the strain R, here denoted by ϕ. This is the probability that a population, consisting initially of a fraction x_0 of R bacteria, is eventually taken over by the strain R.
A related quantity is the unconditional mean fixation time (MFT), here denoted by τ, which is the average time for the fixation of either species to occur.
In what follows, we study how the R fixation probability ϕ(ν_T) and the MFT τ(ν_T) vary with the average switching rate of the T-EV for different values of K_0, δ_T, and s (treated as parameters), and determine when there is long-lived coexistence of the strains.
In the limits ν_T→0, ∞,
we can use the well-known properties of the Moran model <cit.> to
obtain ϕ(ν_T) and τ(ν_T) from their Moran approximation (MA) counterparts
ϕ_ MA(N) and τ_ MA(N), which are respectively the R fixation probability and mean fixation time in the associated Moran process for a population of constant size N; see Sec. SM2 in <cit.>.
For a given initial resistant fraction[In all our examples we set x_0=0.5.], the fixation probability in the low-switching regime, ϕ(ν_T→0)
is obtained by averaging ϕ_ MA(N)|_ξ_T,
denoting
the R fixation probability in the realm of the MA in static environment ξ_T, over the stationary distribution of ξ_T <cit.>:
ϕ(ν_T→0) =(1+δ_T/2)ϕ_ MA(N)|_ξ_T=+1
+(1-δ_T/2)ϕ_ MA(N)|_ξ_T=-1.
When N≫1 and ξ_T=+1 the strain S is always favoured and ϕ_ MA(N)|_ξ_T=+1≈ 0, whereas R is favoured when
ξ_T=-1 and in this case ϕ_ MA(N)|_ξ_T=-1≈ 1.
Since ξ_T(0) = - 1
with probability (1- δ_T)/2, this coincides with the
R fixation probability:
ϕ(ν_T→0)≈ (1- δ_T)/2. The probability that S fixates
when ν_T→0 is thus 1-ϕ(ν_T→0)≈ (1+ δ_T)/2.
In the high-switching regime the fixation probability is that of the Moran process defined by the effective rates in
Eq. (<ref>). Using Eq. (<ref>) with x=n/N, we thus find <cit.>:
ϕ(ν_T→∞)=1+∑_k=1^Nx_0-1∏_n=1^kT_R^-(n/N)/T_R^+(n/N)/1+∑_k=1^N-1∏_n=1^kT_R^-(n/N)/T_R^+(n/N);
see Sec. SM2.A in <cit.>. A similar analysis can also be carried out for τ, see Sec. SM2.B in <cit.>.
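The fast-switching limit above is straightforward to evaluate numerically; the sketch below (ours, with illustrative parameters) computes ϕ(ν_T→∞) from cumulative products of the ratio of the effective rates, in which the common factor Nx(1-x) cancels.

import numpy as np

def phi_fast_switching(N, s, delta_T, x0=0.5):
    # R-fixation probability of the effective (self-averaged) Moran process.
    n = np.arange(1, N)                       # interior states n = 1, ..., N-1
    x = n / N
    es, ems = np.exp(s), np.exp(-s)
    T_plus = (1.0 + delta_T) / (x + (1.0 - x) * es) + (1.0 - delta_T) / (x + (1.0 - x) * ems)
    T_minus = ((1.0 + delta_T) * es / (x + (1.0 - x) * es)
               + (1.0 - delta_T) * ems / (x + (1.0 - x) * ems))
    prods = np.cumprod(T_minus / T_plus)      # prod_{n=1}^{k} of the rate ratio
    k0 = int(round(N * x0))                   # initial number of R individuals
    return (1.0 + prods[:k0 - 1].sum()) / (1.0 + prods.sum())

print(phi_fast_switching(N=100, s=1.0, delta_T=0.3))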
Results reported in Fig. <ref> show that Eqs. (<ref>) and (<ref>) accurately capture the behavior of ϕ in the limiting regimes ν_T→0,∞, see Fig. <ref>(a). Fig. <ref>(b) shows that the predictions for τ when ν_T→0,∞, are also in good agreement with simulation results, with a much larger MFT under high ν_T than under low switching rate (at fixed δ_T). In Fig. <ref>(b), the MFT when ν_T≫ 1 for δ_T=0 is significantly larger than under δ_T≠ 0. This stems from
x^*=x_0=1/2 being the
attractor of Eq. (<ref>) when δ_T=0, but not being an equilibrium when δ_T=0.3 or δ_T=0.5. Fig. <ref>(a,b) also illustrate the excellent agreement between the predictions of the MA with N=K_0 and those obtained from stochastic simulations with K=K_0 constant.
The MF analysis and results of Fig. <ref> suggest that under sufficiently high switching rate ν_T
there is long-lived coexistence of the strains. We can rationalise this picture by noting that in the regime of dominance of one strain the MFT scales sublinearly with the population size N, while the MFT grows superlinearly (exponentially when N=K_0≫1, see Fig. <ref>(c)) in the regime
of long-lived coexistence <cit.>. The dominance and long-lived coexistence scenarios are separated by a
regime where the MFT scales with the population size, i.e. τ∼ N, where the dynamics is governed by random fluctuations. This leads us to
consider that long-lived coexistence of the R and S strains arises whenever
the MFT exceeds 2⟨ N ⟩, i.e. when τ>2⟨ N ⟩,
where ⟨ N ⟩ is the mean population size at (quasi-)stationarity; see below[The factor 2 has been chosen arbitrarily to prevent τ∼N from appearing as coexistence. Other choices are of course possible, and would have only modest effects on the crossover regime between the phases of dominance and coexistence.]. This is illustrated
in the provided videos of <cit.> commented in Sec. SM5 of <cit.>. When, as in this section,
N=K_0 or N fluctuates about the constant carrying capacity K_0 (N≈ K_0), we simply have ⟨ N ⟩=K_0. The criterion τ>2⟨ N ⟩=2K_0 thus prescribes that long-lived coexistence occurs when the MFT scales superlinearly with K_0 and hence exceeds
the double of the average population size,
2K_0≫ 1.
The conditions under which the long-lived coexistence criterion, τ>2⟨ N ⟩, is satisfied can be estimated by noting that, from the MF analysis, we expect coexistence to occur when ξ_T self-averages
under sufficiently high switching rate ν_T. Since the average number of T-switches by
t=2⟨ N ⟩ scales as ν_T ⟨ N ⟩, self-averaging occurs when ν_T ⟨ N ⟩≫ 1. We thus consider that there is fast T-EV switching when ν_T ≫ 1/⟨ N ⟩,
while ν_T ≪ 1/⟨ N ⟩ corresponds to the slow T-EV regime.
To ensure long-lived coexistence, the necessary condition ν_T ≫ 1/⟨ N ⟩ is supplemented by the requirement that s∼ 1. This ensures enough environmental variability
and a regime of coexistence where
the MFT is generally τ∼ e^c⟨ N ⟩ (where c is some positive constant) when s= O(1) <cit.>, guaranteeing
τ>2⟨ N ⟩.
Hence, the expected conditions for long-lived coexistence
are ν_T ≫ 1/⟨ N ⟩ (fast T-switching) and s= O(1)
(enough EV),
which are satisfied in the examples considered here
when ν_T ∼ 1, s∼ 1 or greater and N≫ 1.
We have studied the influence of T-EV on the fixation and coexistence
properties of the model with constant carrying capacity K=K_0 and selection strength s by
running a large number of computer simulations up to a time t=2K_0 across the ν_T-δ_T parameter space.
If both species are still present just after t=2K_0, the run for (ν_T,δ_T,s) is characterised by long-lived coexistence, which is RGB coded (0, 1, 0).
There is no long-lived coexistence for the run (ν_T,δ_T)
if one of the species fixates by t≤ 2K_0: either the strain R, which is RGB coded (1,0,0), or the strain S, which is coded by (0,0,1). This procedure is repeated for 10^3 realisations for different (ν_T,δ_T) and, after sample averaging, yields the RGB-diagram of Fig. <ref>(a-c); see Sec. SM3 in <cit.>.
It is also useful to study the effect of the T-EV
in the realm of the MA by means of numerically exact results. For this,
with the transition rates Eq. (<ref>),
we notice that when N=K_0
is constant and there are initially n cells of type R,
the R-strain fixation probability, ϕ_n^ξ_T,
in the environmental state ξ_T satisfies
the first-step analysis equation <cit.>
(T_R^+(n) + T_R^-(n)+ν_T^ξ_T)ϕ_n^ξ_T= T_R^+(n)ϕ_n+1^ξ_T
+ T_R^-(n)ϕ_n-1^ξ_T
+ ν_T^ξ_Tϕ_n^-ξ_T,
subject to the boundary conditions ϕ_0^ξ_T = 0 and ϕ_N^ξ_T = 1. The mean fixation time in the environmental state ξ_T, τ_n^ξ_T,
satisfies a similar equation:
(T_R^+(n) + T_R^-(n)+ν_T^ξ_T)τ_n^ξ_T= 1+T_R^+(n)τ_n+1^ξ_T
+ T_R^-(n)τ_n-1^ξ_T
+ ν_T^ξ_Tτ_n^-ξ_T,
with boundary conditions τ_0^ξ_T=τ_N^ξ_T=0. Eqs. (<ref>) and (<ref>)
are thus solved numerically, and the fixation probability and MFT are obtained
after averaging over the stationary distribution of ξ_T, yielding
ϕ_n = (1+δ_T/2) ϕ_n^+ + (1-δ_T/2) ϕ_n^-, and
τ_n = (1+δ_T/2) τ_n^+ + (1-δ_T/2) τ_n^-.
In our examples, we always consider x_0=1/2, and henceforth write ϕ_ MA(N)≡ϕ_N/2 for the R-fixation probability and
τ_ MA(N)≡τ_N/2
for the MFT in the realm of the MA.
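In practice, the first-step equations above define linear systems for the 2(N-1) unknowns ϕ_n^± (and similarly for τ_n^±). A minimal Python sketch of the fixation-probability solver is given below (ours; it assumes the effective Moran rates in a static environment, and uses dense linear algebra, which suffices for moderate N).

import numpy as np

def moran_rates(n, N, s, xi):
    # Effective Moran rates in the static toxin environment xi = +/-1, with x = n/N.
    x = n / N
    f_S = np.exp(s * xi)
    f_avg = x + (1.0 - x) * f_S
    return N * x * (1.0 - x) / f_avg, N * x * (1.0 - x) * f_S / f_avg

def phi_first_step(N, s, nu_T, delta_T):
    # Solve the coupled first-step equations for phi_n^{+/-}, with phi_0 = 0 and
    # phi_N = 1, and return the stationary-xi_T-averaged fixation probability phi_n.
    nu = {+1: nu_T * (1.0 - delta_T), -1: nu_T * (1.0 + delta_T)}

    def idx(n, xi):
        return (n - 1) + (0 if xi == +1 else N - 1)

    A = np.zeros((2 * (N - 1), 2 * (N - 1)))
    b = np.zeros(2 * (N - 1))
    for xi in (+1, -1):
        for n in range(1, N):
            Tp, Tm = moran_rates(n, N, s, xi)
            i = idx(n, xi)
            A[i, i] = Tp + Tm + nu[xi]
            A[i, idx(n, -xi)] = -nu[xi]
            if n + 1 < N:
                A[i, idx(n + 1, xi)] = -Tp
            else:
                b[i] += Tp                    # boundary condition phi_N^{xi} = 1
            if n - 1 > 0:
                A[i, idx(n - 1, xi)] = -Tm    # phi_0^{xi} = 0 contributes nothing
    sol = np.linalg.solve(A, b)
    phi = np.zeros(N + 1)
    phi[N] = 1.0
    for n in range(1, N):
        phi[n] = (0.5 * (1.0 + delta_T) * sol[idx(n, +1)]
                  + 0.5 * (1.0 - delta_T) * sol[idx(n, -1)])
    return phi

phi = phi_first_step(N=100, s=1.0, nu_T=2.0, delta_T=0.2)
print("phi_MA(N) with x0 = 1/2:", phi[50])

The MFT τ_n^{ξ_T} follows from the same matrix with the right-hand side replaced by a vector of ones and no boundary contribution, since τ_0^{ξ_T}=τ_N^{ξ_T}=0.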
For each triple (ν_T,δ_T,s), we numerically
solved Eq. (<ref>) and, in the region of the parameter space where τ_ MA(N)> 2K_0,
there is long-lived coexistence, which is coded by (0,1,0) in the RGB-diagram of Fig. <ref>(d-f). When τ_ MA(N)≤ 2K_0, there is dominance of one of the species, characterised by the fixation probabilities ϕ_ MA(N) and 1-ϕ_ MA(N)
of R and S, respectively, obtained from Eq. (<ref>) and
coded by (ϕ_ MA(N),0,1-ϕ_ MA(N)) in Fig. <ref>(d-f).
Exact numerical results for the MA with N=K_0
in Fig. <ref>(d-f) are in excellent agreement with those
of simulations obtained
for K=K_0 in Fig. <ref>(a-c).
In line with the MF analysis, we find that long-lived coexistence occurs for T-EV of sufficiently large magnitude, i.e. s∼ 1 or higher,
and under high enough switching rate, i.e. ν_T∼ 1 or higher, shown as green areas in Fig. <ref>. The region of coexistence separates regimes dominated by either species, especially at high ν_T where ϕ≈ 0 when δ_T>0 while ϕ≈ 1
when δ_T<0. The boundaries between the regimes of
R/S dominance (blue/red) and coexistence (green),
are interspersed by crossover regimes where both species are likely to fixate (magenta in Fig. <ref>), or
coexist with probability between 0 and 1 (faint green in Fig. <ref>), as coded in Fig. <ref>(g).
§ TWOFOLD ENVIRONMENTAL VARIABILITY: COEXISTENCE AND FIXATION UNDER TIME-VARYING FITNESS AND SWITCHING CARRYING CAPACITY
We have seen that under a constant carrying capacity,
long-lived coexistence of the strains is feasible when
s and ν_T are of order 1 or higher (enough T-EV and fast T-switching). We now study how this picture morphs when, in addition to the time-variation of f_S and f, the carrying capacity K(t) switches according to Eq. (<ref>) and drives the fluctuating population size N. EV is thus twofold, and the population evolves under the joint effect of T-EV and K-EV.
We consider K(t)∈{K_-,K_+} with
1≪ K_-<K_+, and in the first instance assume that N is always sufficiently large to
allow us to neglect the DN, yielding
<cit.>
Ṅ=T_R^+-T_R^-+T_S^+-T_S^-=N(1-N/K),
ẋ=T_R^+-T_R^-/N-xṄ/N=x(f_R/f-1),
where N is independent of s and affected only by K-EV via ξ_K in Eq. (<ref>), while
the evolution of x in Eq. (<ref>)
is impacted by ξ_K, ξ_T, and s via x=N_R/N
and f(t)=x+(1-x)exp(sξ_T). The population composition is hence coupled
to the evolution of the population size (eco-evolutionary dynamics), while the statistics of N, like its average, denoted by ⟨ N⟩,
are obtained by ensemble averaging over ξ_K.
The stochastic logistic differential equation Eq. (<ref>) defines an
N-PDMP
whose properties allow us to characterise the distribution of N <cit.>.
As discussed in <cit.>,
the (marginal) stationary probability density function (PDF) p(N) of the N-PDMP Eq. (<ref>), while ignoring DN, provides a useful approximation of the actual quasi-stationary population size distribution (QPSD). Here, the stationary PDF is <cit.>
p(N) = 𝒵/N^2(K_+ - N/N)^ν_K(1-δ_K)-1(N-K_-/N)^ν_K(1+δ_K)-1,
of support [K_-,K_+], with the normalisation
constant 𝒵 ensuring that ∫_K_-^K_+ p(N) dN=1.
§.§ N-PDMP approximation
The PDF p(N) captures well the main effects of the K-EV on the QPSD,
which is bimodal under low ν_K and becomes unimodal when ν_K≫ 1, see Fig. <ref>. In the realm of the N-PDMP approximation,
p(N) aptly reproduces the location of the QPSD peaks and the
transition from a bimodal to unimodal distribution as ν_K increases:
The distribution is sharply peaked around
N≈ K_±
when ν_K → 0 (with probability (1±δ_K)/2),
flattens when ν_K∼ 1, and then sharpens
about
N≈𝒦≡ K_0(1-γ^2)/(1-γδ_K)
when ν_K →∞. Since p(N) ignores DN, it cannot
capture the width of the QPSD about the peaks, but
it provides an accurate description of the mean population size, see Fig. <ref>,
that is well approximated by
⟨ N⟩=∫_K_-^K_+ Np(N) dN,
with ⟨ N⟩≈ K_0(1+γδ_K) when ν_K≪ 1
and ⟨ N⟩≈𝒦 when ν_K≫ 1 <cit.>, as shown in Fig. <ref>.
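A direct numerical evaluation of p(N) and of the resulting ⟨ N⟩ is straightforward; the sketch below (ours, with illustrative parameters) normalises the PDF on [K_-,K_+] by quadrature. Care is only needed near the endpoints, where p(N) diverges (integrably) when ν_K(1∓δ_K)<1.

import numpy as np

def n_pdmp_pdf(K_minus, K_plus, nu_K, delta_K, n_grid=20001):
    # Stationary PDF of the N-PDMP on (K_-, K_+), normalised by the trapezoidal rule.
    N = np.linspace(K_minus, K_plus, n_grid + 2)[1:-1]       # endpoints excluded
    a = nu_K * (1.0 - delta_K) - 1.0                         # exponent of (K_+ - N)/N
    b = nu_K * (1.0 + delta_K) - 1.0                         # exponent of (N - K_-)/N
    p = ((K_plus - N) / N) ** a * ((N - K_minus) / N) ** b / N ** 2
    p /= np.trapz(p, N)
    return N, p

K0, gamma, nu_K, delta_K = 1000.0, 0.6, 5.0, 0.0             # illustrative values only
N, p = n_pdmp_pdf(K0 * (1.0 - gamma), K0 * (1.0 + gamma), nu_K, delta_K)
N_mean = np.trapz(N * p, N)
print("<N>                  :", N_mean)
print("slow-switching limit :", K0 * (1.0 + gamma * delta_K))
print("fast-switching limit :", K0 * (1.0 - gamma ** 2) / (1.0 - gamma * delta_K))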
The PDF p(N) is particularly useful to obtain the fixation/coexistence diagrams under
the effects of both T-EV and K-EV. Theoretical/numerical
predictions of the fixation and coexistence probabilities can indeed be derived in the vein of <cit.> by focusing on situations where coexistence occurs when x and N relax on similar timescales. Long-lived coexistence of the strains thus arises when
s∼ 1, with fixation typically occurring after N
has settled in the QPSD, see Sec. SM6 in <cit.> and videos in <cit.>. Hence, the R fixation probability (with x_0=1/2)
can be approximated by averaging ϕ_ MA(N) over p(N):
ϕ≃∫_K_-^K_+ϕ_ MA(N) p(N) dN,
where ϕ_ MA(N) is obtained from solving the corresponding Eq. (<ref>) for the
R fixation probability of the associated Moran process, as seen in Sec. <ref>.
We can use the PDF p(N) and the results for the mean fixation time
τ_ MA(N), obtained from solving Eq. (<ref>), to determine the probability of coexistence in the realm of the
N-PDMP approximation.
For this, we first solve
Eq.(<ref>) for
τ_ MA(N^*)=2⟨ N⟩, where ⟨ N⟩ is given by Eq. (<ref>). Since τ_ MA is an increasing function of N,
see Fig. <ref>(c), we have
τ_ MA(N)>2⟨ N⟩ for all N>N^*, which is the long-lived coexistence condition. Within the
N-PDMP approximation, the lowest possible value of N^* is K_- (since N∈ [K_-,K_+]).
We then determine the probability
η that this condition is satisfied
by integrating p(N) over [ max(N^*,K_-),K_+]:
η≡Prob.{τ_ MA(N)>2⟨ N⟩}=∫_ max(N^*,K_-)^K_+ p(N) dN,
where N^* depends on both T-EV and K-EV (via ⟨ N⟩), while
the integrand depends only on K-EV. Clearly, η→ 1 when N^*→ K_-. Hence, long-lived coexistence is almost certain when N^*≈ K_-, i.e. whenever the mean-fixation time of the population of fixed size N=K_- exceeds 2⟨ N⟩. Based on the results of Sec. <ref>,
η increases with ν_T and s, and thus for sufficiently large ν_T and s (but not too large δ_T), we expect N^*→ K_- and η→ 1.
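The construction of η can be sketched as follows (ours; tau_MA and p_of_N are assumed callables obtained, e.g., from the first-step and p(N) sketches above, with non-integer arguments of tau_MA handled by rounding or interpolation, and tau_MA assumed to increase with N, as in Fig. <ref>(c)).

import numpy as np

def coexistence_probability(tau_MA, p_of_N, K_minus, K_plus, N_mean, n_grid=2001):
    # eta = Prob[ tau_MA(N) > 2<N> ] with N distributed according to p(N).
    if tau_MA(K_minus) > 2.0 * N_mean:        # coexistence already certain at N = K_-
        return 1.0
    if tau_MA(K_plus) <= 2.0 * N_mean:        # coexistence never long-lived on [K_-, K_+]
        return 0.0
    lo, hi = K_minus, K_plus                  # bisection for tau_MA(N*) = 2<N>
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tau_MA(mid) > 2.0 * N_mean:
            hi = mid
        else:
            lo = mid
    N = np.linspace(hi, K_plus, n_grid)       # integrate p(N) over [N*, K_+]
    return np.trapz(p_of_N(N), N)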
§.§ Fixation/coexistence diagrams under T-EV and K-EV
The fixation/coexistence diagrams under joint effect of T-EV and K-EV
are obtained as in Sec. <ref>, with the difference that
long-lived coexistence arises when t>2⟨ N⟩, a condition that depends on (ν_K,δ_K), see Fig. <ref>(c).
In our simulations, we considered different values of ν_K (letting δ_K=0 for simplicity), and ran simulations until t=2⟨ N⟩. Each run in which both species still coexist just after
t=2⟨ N⟩ is RGB-coded (0,1,0), whereas those in which R or S fixates
by t≤ 2⟨ N⟩
are respectively RGB-coded (1,0,0) or (0,0,1). The RGB fixation/coexistence diagrams of Fig. <ref>(a-c) are obtained after sample-averaging the outcome of this procedure,
repeated 10^3 times for each pair (ν_T,δ_T) and different values of ν_K.
Theoretical RGB diagrams are obtained from the N-PDMP based approximation built on Eqs. (<ref>)
and (<ref>): for a given ν_K,
we allocate the RGB value ((1-η)(1-ϕ), η, (1-η)ϕ) obtained for each pair (ν_T,δ_T) of the diagram, see Fig. <ref>(d-f). This triple corresponds to the probability of having, by t=2⟨ N⟩, either no long-lived coexistence (with probability 1-η) and
fixation of R or S (with respective probabilities ϕ and 1-ϕ), or long-lived coexistence (with probability η).
In practice, Eqs. (<ref>)
and (<ref>) have been used for relatively small systems whereas an equivalent, but more efficient, method was used for large systems, see Sec. SM3 in <cit.>.
The comparison of the top and bottom rows of Fig. <ref> shows that the theoretical RGB diagrams quantitatively reproduce the features of those obtained from simulations. In general, we find that coexistence regions
are brighter in the diagrams obtained from the N-PDMP based approximation than in those stemming from simulations. This difference stems from the former ignoring demographic fluctuations, which slightly broaden the crossover (magenta and faint green) regimes in the latter.
The regions of Fig. <ref> where |δ_T|→ 1
are characterised by the dominance of one of the strains; in this limit the model essentially reduces to that studied in <cit.>, and we can therefore focus
on characterising the coexistence phase.
When K_0 is large, under sufficient environmental variability (s=0.5, γ=2/3 in Fig. <ref>), the joint effect of T-EV and K-EV
on the phase of long-lived coexistence in the RGB diagrams of Fig. <ref>
can be summarised as follows: (i) when ν_K→ 0, a (bright green) region where η≈ 1 and coexistence is almost certain is surrounded by a (faint green) “outer shell” where coexistence is possible but not certain (0<η<1), see Fig. <ref>(a,d); (ii) at low, but non-vanishingly small, values of ν_K,
the outer-shell where 0<η<1 fades, and there is essentially only a (bright green)
region of coexistence where η≈ 1, see Fig. <ref>(b,e); (iii) when ν_K≫ 1, the coexistence region corresponds essentially to η≈ 1 (bright green) and is broader than under low ν_K, Fig. <ref>(c,f).
In all scenarios (i)-(iii),
η increases with ν_T≳ 1 (for not too large δ_T) and hence all the green coexistence phases in Fig. <ref> become brighter as ν_T is raised and η→ 1.
These different scenarios can be explained by the dependence of the QPSD on ν_K, well captured by the PDF Eq. (<ref>). In regime (i) where ν_K ≪ 1/K_0, the QPSD and p(N) are bimodal,
N≈ K_± with probability (1±δ_K)/2,
and any K-switches by t=2⟨ N⟩∼ K_0 are unlikely, yielding the faint green outer shell of
Fig. <ref>(a,d) corresponding to long-lived coexistence arising only when N≈ K_+, with a probability
η≈ (1+δ_K)/2. In regime (ii), where 1/K_0≪ν_K ≪ 1, the QPSD and p(N) are still bimodal but some
K-switches occur by t∼ K_0, resulting in long-lived coexistence
arising almost only when ν_T is high enough
to ensure η≈ 1
when N≈ K_-. In regime (iii), where ν_K≫ 1,
the QPSD and p(N) are unimodal with average ⟨ N⟩≈𝒦≥ K_-, which results in a long-lived coexistence
region where η≈ 1 that is broader than in (i) and (ii), Fig. <ref>(c,f).
The size of the coexistence region in regime (iii) actually depends nontrivially on ν_K, as revealed by the modal value
of the PDF Eq. (<ref>) when ν_K(1-|δ_K|)>1, which reads
N̂= K_0/2[1+ν_K(1-γδ_K)]
-K_0/2√((1+ν_K(1-γδ_K))^2-4ν_K(1-γ^2)),
with lim_ν_K→∞N̂=⟨ N⟩=𝒦.
We notice that N̂ is an increasing function of ν_K when γ>δ_K and it decreases if γ<δ_K (remaining constant when γ=δ_K). As a consequence,
the long-lived coexistence region under high K switching rate grows with ν_K when γ>δ_K, as in Fig. <ref>(c,f), and, when γ<δ_K, shrinks as ν_K is increased, see Sec. SM5 Fig. <ref> in <cit.>.
§.§ Influence of the K-EV amplitude on coexistence
We have seen that increasing the selection bias s, raises the amplitude of the T-EV and facilitates the emergence of long-lived coexistence. Here, we investigate the influence of the parameter γ, which controls the amplitude of K-EV, on the fixation/coexistence diagrams. When γ→ 1 and K_0≫ 1, there is K-EV of large amplitude, with the population subject to a harsh population bottleneck (K_-→ 0) accompanied by strong demographic fluctuations.
The latter facilitate fixation of either strain and counter the
effect of T-EV that drives the community to
coexistence. Results of Fig. <ref> illustrate the influence of γ under low and high K-switching rate (δ_K=0):
- Under low ν_K, the
probability of long-lived coexistence η
decreases together with the value of K_-=K_0(1-γ)
when γ is increased (all other parameters being kept fixed). As a consequence,
the (bright green) region
in Fig. <ref>(a) where
long-lived coexistence is almost certain (η≈ 1) shrinks with γ and is gradually replaced by a faint green area where coexistence occurs with a lower probability (η=(1+δ_K)/2< 1), see Fig. <ref>(b,c).
- Under high ν_K, we have N≈𝒦 and the effect of γ is encoded in the expression of 𝒦= K_0 (1-γ^2)/(1-γδ_K). When δ_K≤ 0,
𝒦 and η decrease with γ, and as a result the
bright green region
in Fig. <ref>(d) shrinks and is eventually replaced by a smaller faint green region where coexistence is possible but not certain (0<η< 1), see Fig. <ref>(e,f).
When δ_K> 0,
there is a bias towards K=K_+ and
𝒦 increases with γ until
γ=γ̅≡(1-√(1-δ_K^2))/δ_K and then decreases, with 𝒦<K_0, when γ>δ_K. This results in a non-monotonic dependence of the coexistence region where η≈ 1: under ν_K≫ 1 and δ_K>0, the long-lived (bright-green) coexistence region grows with γ up to γ̅ and shrinks when γ> γ̅.
We have thus found that the
environmental fluctuations have
opposite effects on the species coexistence: increasing the amplitude of T-EV (by raising s) prolongs the coexistence of the strains and expands the coexistence region, but raising the amplitude of K-EV (by raising γ)
can significantly reduce the probability of long-lived coexistence for all values of ν_K.
§ MAKE-UP OF THE COEXISTENCE PHASE AND STRAINS AVERAGE ABUNDANCE
Having characterised in detail the conditions under which long-lived coexistence and fixation occurs, we now study the make-up of the coexistence phase and then use this result to determine the stationary average abundance of each strain.
§.§ Coexistence phase make-up
We are interested in the characteristic fraction of the resistant strain R in the coexistence phase, here defined as x^*. This is the fraction of R expected, given that we have coexistence at t=2⟨ N⟩. According to the mean-field theory, the fraction of the strain R in the coexistence phase is given by the expression Eq. (<ref>) of x^*. It turns out that
deep in the coexistence region, where η≈ 1 and ν_T is sufficiently high, there is good agreement between theory and simulations, see Fig. <ref>(a,b). In addition, even when η<1, the theoretical prediction remains remarkably close to the measured value, differing only by a small amount as η approaches 0. We notice that the characteristic fraction of R, for given δ_T, is almost independent of ν_T.
We can also predict the fraction of R regardless of coexistence or fixation, here denoted by x.
The quantity x thus characterises the fraction of R in the coexistence, fixation, and crossover regime where both coexistence and fixation are possible, with respective probabilities η and ϕ, but neither is certain.
Making use of Eqs. (<ref>), (<ref>), and (<ref>) we thus define x as
x=η x^* + (1-η)ϕ.
This captures well the dependence of x on ν_T and reduces to the fraction of R in the coexistence phase, x=x^*, when η≈ 1 and long-lived coexistence is almost certain (see SM4, Fig. <ref>). As shown in Sec. SM4 in <cit.>, a closed-form alternative to x is provided by the modal value of the stationary PDF of the PDMP defined by Eq. (<ref>), which, while less accurate than x, matches qualitatively well to simulations.
§.§ Strain average abundance
In this section, we study the (quasi-)stationary average abundance of the strains R and S, respectively denoted by N_R and N_S.
Since N_S=N-N_R, and N is well described by Eq. (<ref>), see Fig. <ref>, we only need to focus on studying N_R.
In fact, while the evolution of N
is governed by K-EV and is well-captured by the stochastic logistic equation Eq. (<ref>) and the corresponding N-PDMP, the dynamics of the abundance of the R strain depends on both T-EV and K-EV. In the mean-field limit, where demographic fluctuations are neglected, we indeed have <cit.>
Ṅ_R=T_R^+-T_R^- =(1/f(t)-N/K(t))N_R
=(1/x+(1-x)e^sξ_T-N/K_0(1+γξ_K))N_R,
which is a stochastic differential equation depending on both
ξ_K and ξ_T, and coupled to
the N and x-PDMPs defined by Eqs. (<ref>) and (<ref>). In the dominance regimes, N_R≈ 0 (S dominance) or N_R≈N (R dominance),
which can be obtained from Eq. (<ref>). However, finding N_R in the coexistence phase is a nontrivial task. Progress can be made
by noticing that, ξ_K and ξ_T being independent, we can write
N_R≈Nx≡N(η x^* + (1-η)ϕ),
where Nη x^* is the contribution to N_R when there is coexistence (with probability η), and N(1-η)ϕ is the contribution arising when there is fixation of the strain R (with probability (1-η)ϕ).
In our theoretical analysis, x^*, N, ϕ and η
are obtained from Eqs. (<ref>) and (<ref>)-(<ref>). Eq. (<ref>)
thus captures the behavior of N_R in each regime: the dominance regime where η≈ 0 and we have N_R≈Nϕ,
deep in the coexistence phase where we have η≈ 1 and
N_R≈N x^*, and where 0<η< 1 and coexistence is possible but not certain where we have
N_R≈Nx.
In Fig. <ref>, we find that the theoretical predictions based on Eq. (<ref>) agree well with simulation results over a broad range of ν_K and ν_T, and for different values of δ_K and δ_T. The dependence of
N_R on ν_K reflects that of N shown in Fig. <ref>:
N_R
decreases with ν_K at fixed δ_K, see Fig. <ref>(a), and we have N≈𝒦 when ν_K→∞, yielding N_R≈𝒦 x^* deep in the coexistence phase where ν_T≫ 1,
and similarly N≈ K_0 (1+γδ_K) when ν_K→ 0 yields N_R≈ K_0 (1+γδ_K) x^*. Not shown in Fig. 8(a) is the case of ν_T≪1, whereby we have only dominance such that N_R≈N(1-δ_T)/2.
Fig. <ref>(b) shows that the dependence of
N_R on ν_T can be non-monotonic and exhibit an extreme dip (δ_T>0) or peak (δ_T<0) at intermediate T-switching rate, ν_T∼ 1. This behavior can be understood by referring to the diagrams of Fig. <ref>:
as ν_T is raised from ν_T=0 with δ_T<0 kept fixed, the R fixation
probability first slowly increases across the slightly R-dominant phase (blueish-magenta regions in Fig. <ref>) where coexistence is unlikely (η≈0) and N_R≈Nϕ.
When ν_T is increased further, R becomes the strongly dominant species (bright blue phases in Fig. <ref>), with ϕ≈ 1, and N_R≈N is maximal;
coexistence then becomes first possible (0<η<1, faint green in Fig. <ref>) and then almost certain (η≈ 1, bright green in Fig. <ref>) when ν_T is increased further, which results in a reduction of the R abundance to N_R≈Nx^*<N. A similar reasoning holds for the S strain when δ_T>0 and results in a maximal value N_S≈N
and therefore a dip of the R abundance, with a minimal value N_R≈ 0,
when ν_T∼ 1.
The results of this section hence show that the twofold EV
has nontrivial effects on the make-up of the coexistence phase, and on the average number of cells of each strain, as shown by Fig. <ref>
and the nonmonotonic dependence of N_R on ν_T in Fig. <ref>.
§ CONCLUSION
Microorganisms live in environments that unavoidably fluctuate between mild and harsh conditions. Environmental variability can cause endless changes in the concentration of toxins and amount of available nutrients, and thus shapes the eco-evolutionary properties of microbial communities including the ability of species to coexist.
Understanding under which circumstances various microbial species can coexist, and how their coexistence and abundance vary with environmental factors, is crucial to shed further light on the mechanisms promoting biodiversity in ecosystems and to elucidate the evolution of antimicrobial resistance.
Motivated by these considerations, and inspired by the antimicrobial resistance (AMR) evolution in a chemostat setup,
we have studied the eco-evolutionary dynamics
of an idealised microbial community of fluctuating size consisting of two strains competing for the same resources under twofold environmental variability (T-EV and K-EV): the level of toxin and the abundance of nutrients in the community both vary in time.
One of the strains is resistant while the other is sensitive to the drug present in the community, and both compete for the same resources.
Environmental variability is thus assumed to affect the strains growth and death rates, and is modelled by means of binary randomly time-switching fitness (T-EV) and carrying capacity (K-EV).
Under harsh conditions, the level of toxin is high and resources are scarce, while environmental conditions are mild when the level of toxin is low and resources are abundant.
In this setting, the strain resistant to the drug has a selective advantage under high toxin-level, whereas it is outgrown by the sensitive strain when the level of toxin is low. Moreover, the time-switching carrying capacity drives the fluctuating size of the microbial community, which in turn modulates the amplitude of the demographic fluctuations, resulting in their coupling with the variation of the available resources.
When the environment is static, there is no lasting coexistence since one species dominates and rapidly takes over the entire population.
Here, we have shown that this picture changes radically in fluctuating environments: we have indeed found that long-lived species coexistence is possible in the presence of environmental fluctuations. Using stochastic simulations and the properties of suitable piecewise-deterministic
and Moran processes, we have computationally and analytically
obtained the fixation-coexistence phase diagrams of the system. These have allowed us to
identify the detailed environmental conditions under which species coexist almost certainly for extended periods of time, and the phases
where one species dominates, as well as the crossover regimes where both coexistence and fixation are possible but not guaranteed. We have found that long-lived coexistence requires sufficient variation of the toxin level, while resource variability can oppose coexistence when strong K-EV leads to population bottlenecks responsible for large demographic fluctuations facilitating fixation. More generally, our analysis has allowed us to assess the influence of the population size distribution, whose shape changes greatly with the rate of K-EV, on the fixation-coexistence phase diagram.
We have also determined
how the make-up of the coexistence phase and average abundance of each strain depend on the rates of environmental change.
Environmental variability comes about in many forms in a variety of settings throughout biology and ecology and, alongside demographic fluctuations, it strongly affects whether species within a system can coexist, leading to complex eco-evolutionary dynamics.
In particular, how microbial communities evolve according to environmental variability is vital when considering the issue of AMR, so that the effectiveness of treatments can be maximised, while minimising the harmful effects.
In considering twofold environmental variations, we have shown that these can have qualitative effects on the population evolution as they can either promote or jeopardise lasting species coexistence.
In summary, our analysis allows us to understand under which circumstances environmental variability, together with demographic fluctuations, favours or hinders the long-lived coexistence of species, and how it affects the fraction and abundance of each strain in the community.
In particular, our findings demonstrate the influence of environmental fluctuations on biodiversity in microbial communities, and may thus have potential impacts on
numerous applications. For instance, the model studied here
is well suited to describe the in vitro
evolution of antimicrobial resistance
in a chemostat setup where the level of antibiotics would fluctuate below and above the minimum inhibitory concentration. In this context, the model is able to predict, under a broad range of external constraints, the best conditions to avoid the fixation of the strain resistant to the drug and when both strains coexist. A more realistic model of AMR evolution would take into account that the drug resistance is often mediated by a form of public goods <cit.>.
§ DATA ACCESSIBILITY
Simulation data for all figures are electronically available, see <cit.>.
§ AUTHOR CONTRIBUTIONS
Matthew Asker: Conceptualisation (supporting), Methodology, Formal Analysis (lead), Software, Writing - Original Draft, Writing - Review & Editing, Visualisation, Investigation, Validation.
Lluís Hernández-Navarro: Formal Analysis (supporting), Writing - Review & Editing (supporting), Supervision (supporting).
Alastair M. Rucklidge: Writing - Review & Editing (supporting), Supervision (supporting), Funding acquisition (supporting).
Mauro Mobilia: Conceptualisation (lead), Methodology (lead), Formal Analysis (supporting), Writing - Original Draft, Writing - Review & Editing, Visualisation, Supervision (lead), Project administration, Funding acquisition (lead).
We would like to thank K. Distefano, J. Jiménez, S. Muñoz-Montero, M. Pleimling, M. Swailem, and U. C. Täuber for helpful discussions. L.H.N., A.M.R and M.M. gratefully acknowledge funding from the U.K.
Engineering and Physical Sciences Research Council (EPSRC)
under the grant No. EP/V014439/1
for the project `DMS-EPSRC Eco-Evolutionary Dynamics of Fluctuating Populations’. The support of a Ph.D. scholarship to M.A.
by the EPSRC grant No. EP/T517860/1 is also thankfully acknowledged. This work was undertaken on ARC4,
part of the High Performance Computing facilities at the
University of Leeds, UK.
§ APPENDIX
In this appendix, we provide some further technical details and supplementary information in support of the results discussed in the main text. We give additional information concerning the mean-field, Moran and PDMP approximations used in the main text and the simulation methods, illustrate our main findings by discussing typical sample paths, and briefly discuss the generalisation of the model to correlated/anticorrelated EV.
§ SM1. T-SWITCHING COEXISTENCE: FLUCTUATION SIZE AND TIME SCALE
The selection strength s not only shapes the dynamics of the composition, see Eq. (<ref>), but also determines the amplitude of the T-EV fluctuations, i.e. their variance, which is linked to stronger coexistence in the fast-switching regime (see Sec. <ref>). To show this, we first consider the normalised fitness of the resistant subpopulation
f_R/f=1/x+(1-x)exp(ξ_T s).
Since we here focus on how s may shape coexistence through the toxin level fluctuation size, and coexistence dominates in the fast toxin-switching regime, we assume ν_T≫1. As discussed in the main text, in the fast-switching regime, the per capita growth rate
(normalised fitness) of strain R averaged over the stationary distribution of ξ_T reads:
f_R/f=1-δ_T/21/x+(1-x)exp(-s)+1+δ_T/21/x+(1-x)exp(s).
To derive its variance, we also have to compute the average of its square as
(f_R/f)^2=1-δ_T/21/(x+(1-x)exp(-s))^2+1+δ_T/21/(x+(1-x)exp(s))^2.
Combining both, we obtain the variance of the normalised resistant fitness due to the environmental fluctuations in the toxin level
var(f_R/f) =(f_R/f)^2-f_R/f^2
=(1-δ_T^2)/4((exp(2s)-1)(1-x)/(x+(1-x)exp(s))(1+x(exp(s)-1)))^2.
A similar analysis for the sensitive strain, which has a normalised fitness exp(ξ_T s) times that of the resistant strain, provides
var(f_S/f)=(1-δ_T^2)/4((exp(2s)-1)x/(x+(1-x)exp(s))(1+x(exp(s)-1)))^2.
In both cases, we conclude that the variance (arising from the T-EV) indeed increases with the selection strength s. Therefore, s does shape the strength of coexistence.
Note that the amplitude of the fluctuations of the carrying capacity K-EV (at no bias δ_K=0, for simplicity) increases with γ. However, in this case, the larger the K-EV the weaker the coexistence, as the harsh state K_-→0 further promotes extinction. In conclusion, and as discussed in Sec. <ref> and Fig. <ref>, the environmental variability can either promote coexistence (T-EV) or jeopardise it (K-EV), and both parameters s and γ determine long-lived coexistence.
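The variance formula above is easily checked by direct sampling of ξ_T from its stationary distribution; a minimal sketch (ours, with illustrative values of s, δ_T and x) is given below.

import numpy as np

rng = np.random.default_rng(7)
s, delta_T, x = 1.0, 0.2, 0.3                      # illustrative values only
xi = rng.choice([+1, -1], size=10 ** 6, p=[(1 + delta_T) / 2, (1 - delta_T) / 2])
f_R_over_f = 1.0 / (x + (1.0 - x) * np.exp(s * xi))

var_formula = (1.0 - delta_T ** 2) / 4.0 * (
    (np.exp(2.0 * s) - 1.0) * (1.0 - x)
    / ((x + (1.0 - x) * np.exp(s)) * (1.0 + x * (np.exp(s) - 1.0)))) ** 2
print("sampled variance :", f_R_over_f.var())
print("formula variance :", var_formula)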
Regarding the coexistence timescale, let us now consider the small s regime by expanding
the numerator and denominator of the right-hand-side of Eq. (<ref>) to order O(s^2), which yields
ẋ ≈x(1-x)/2[(1+δ_T) (1-(1+s+s^2/2))/x+(1-x)(1+s+s^2/2) + (1-δ_T)(1-(1-s+s^2/2))/x+(1-x)(1-s+s^2/2)]
≈x(1-x)/2[(1+δ_T)(-s-s^2/2)(1-s(1-x))+(1-δ_T)(s-s^2/2)(1+s(1-x))/1-s^2(1-x)]
≈ -s^2x(1-x)[x-(1/2-δ_T/s)]=-s^2x(1-x)[x-x^*],
where x^*=1/2-δ_T/s is the coexistence
equilibrium under s≪1. The equilibrium x^* is physical (and stable) only when -s/2<δ_T<s/2; i.e. only in the special case of almost symmetric switching (δ_T= O(s) or smaller).
From Eq. (<ref>), the coexistence equilibrium is approached slowly, with a relaxation time on the order of ∼ 1/s^2; thus, taking into account demographic noise, the expected fixation time is τ∼ e^Ns^2 when Ns^2≫ 1.
§ SM2. MORAN APPROXIMATION: PROBABILITY AND MEAN FIXATION TIME
§.§ Moran fixation probability
The exact Moran fixation probability ϕ(N_R^0,N,s,ξ_T) for the resistant strain to take over the entire population of constant size N in the static toxin environment ξ_T, starting with N_R^0 resistant individuals, is <cit.>
ϕ(N_R^0,N,s,ξ_T)=1+∑_k=1^N_R^0-1∏_i=1^kγ(i,N,s,ξ_T)/1+∑_k=1^N-1∏_i=1^kγ(i,N,s,ξ_T), for γ(N_R^0,N)≡T^-_R(N_R^0,N,s,ξ_T)/T^+_R(N_R^0,N,s,ξ_T) and N_R^0=1, 2,..., N.
The effective Moran transition rates are T^+_R=T^+_RT^-_S/N and T^-_R=T^-_RT^+_S/N (see Sec. <ref>) <cit.>, which read
T^+_R(x,N,s,ξ_T) =x(1-x)/x+(1-x)e^sξ_TN, and
T^-_R(x,N,s,ξ_T) =x(1-x)e^sξ_T/x+(1-x)e^sξ_TN,
where x≡ N_R/N as in the main text. Hence, our particular factor γ yields
γ(s,ξ_T) =e^sξ_T.
The general Moran fixation probability Eq. (<ref>) is valid for fixed N and time-independent T^±_R, i.e. fixed ξ_T=±1 (static environments). Therefore, we compute our particular resistant fixation probability in static environments as
ϕ(N_R^0,N,s,ξ_T)=∑_k=0^N_R^0-1(e^sξ_T)^k/∑_k=0^N-1(e^sξ_T)^k=1-e^sN_R^0ξ_T/1-e^sNξ_T,
where, in the last step, we use that this is a finite geometric series. As noted in the main text, we always use a starting resistant fraction x_0=0.5, a starting resistant population of N_R^0=N/2=K_0/2. For brevity, in the main text, the fixation probability ϕ(N/2,N,s,ξ_T) is thus denoted by
ϕ_ MA|_ξ_T.
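For reference, the closed-form static-environment fixation probability and the slow-switching average of the main text can be evaluated as follows (a minimal sketch of ours; values are illustrative and N is assumed large but not so large that exp(sN) overflows).

import numpy as np

def phi_static(N_R0, N, s, xi_T):
    # Moran fixation probability of R in the static environment xi_T (geometric series).
    if s * xi_T == 0.0:
        return N_R0 / N                       # neutral limit
    return (1.0 - np.exp(s * N_R0 * xi_T)) / (1.0 - np.exp(s * N * xi_T))

def phi_slow_switching(N, s, delta_T, x0=0.5):
    # nu_T -> 0: average over the stationary distribution of the initial toxin state.
    N_R0 = int(round(x0 * N))
    return (0.5 * (1.0 + delta_T) * phi_static(N_R0, N, s, +1)
            + 0.5 * (1.0 - delta_T) * phi_static(N_R0, N, s, -1))

print(phi_slow_switching(N=200, s=1.0, delta_T=0.3))   # ~ (1 - delta_T)/2 for N >> 1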
In the regime ν_T≫1, the effective Moran transition rates become time-independent; see Eq (<ref>). However, in this case, as the γ factor shows a complex dependency on the resistant fraction x, the fixation probability
ϕ_ MA|_ξ_T in the main text is computed numerically through Eq. (<ref>), as shown in Eq (<ref>).
§.§ Moran unconditional Mean Fixation Time (MFT)
When the Moran `birth-death' transition rates are time-independent, the unconditional MFT τ(N_R^0,N,s,ξ_T) in a population of constant size N in the static toxin environment ξ_T, and consisting initially of N_R^0 resistant individuals,
reads <cit.>:
τ(N_R^0,N,s,ξ_T)=-τ_1(N,s,ξ_T)∑_k=N_R^0^N-1∏_i=1^kγ(i,N,s,ξ_T)+
∑_k=N_R^0^N-1∑_n=1^k∏_m=n+1^kγ(m,N,s,ξ_T)/T^+_R(n,N,s,ξ_T),
for N_R^0=1, 2,..., N.
where γ(N_R^0,N,s,ξ_T) is defined as in Eq. (<ref>); T^±_R(N_R^0,N,s,ξ_T) are defined either in Eq. (<ref>) or (<ref>) for static or fast toxin switching environments, respectively; and
τ_1(N,s,ξ_T)=∑_k=1^N-1∑_n=1^k∏_m=n+1^kγ(m,N,s,ξ_T)/T^+_R(n,N,s,ξ_T)/1+∑_k=1^N-1∏_i=1^kγ(i,N,s,ξ_T)
is the unconditional MFT starting with a single resistant individual. In our examples, we always use x_0=0.5, N_R^0=N/2=K_0/2. For brevity, in the main text, the MFT τ(N/2,N,s,ξ_T) is thus denoted by
τ_ MA|_ξ_T.
As for the fixation probability in the main text, we can obtain analytical expressions of τ_ MA|_ξ_T in the very slow and fast toxin switching regimes (ν_T≪ 1 and ν_T≫ 1). For the former, we have to substitute ϕ_ MA|_ξ_T in Eq. (<ref>) by the MFT expression of Eq. (<ref>). And, for the latter, we have to substitute each T^±_R by its corresponding average across the stationary ξ_T distribution, i.e. ⟨T^±_R⟩.
§ SM3. SIMULATION METHODS
To study the stochastic behaviour of the full model presented here, we perform exact stochastic numerical simulations <cit.>. Simulations start at an initial time t=t_0=0 with an initial toxin level ξ_T(t_0) and resource level ξ_K(t_0) always at stationarity (with <ξ_T/K(t_0)>=δ_T/K), initial populations N_R(t_0)=N_S(t_0)=K_0/2=(K_+ + K_-)/4, and we take into account all the possible reactions that can take place.
In the case of the full model this means: (1) the four possible birth or death reactions with rates {T^+_R(t), T^-_R(t), T^+_S(t), T^-_S(t)} that depend on the variables {N_R(t), N_S(t), K(t), ξ_T(t)} and the constant parameter s; (2) the nutrient level switches stochastically with constant rate ν_K^± for the state K(t)=K_±; and (3) the toxin level switches stochastically with constant rate ν_T^± for the state ξ_T(t)=±1. We perform efficient stochastic simulations by implementing the Next Reaction Method <cit.>.
Regarding Fig. <ref>, the direct numerical implementation of Eqs. (<ref>) and (<ref>) to predict fixation and coexistence probabilities is not feasible, because calculating ϕ(N) and the coexistence probability for each integer N∈[K_-=400,K_+=2000] is computationally too expensive. Therefore, we capitalise on the exact Moran results for static environments (see Eqs. (<ref>) and (<ref>)), and the N-PDMP PDF p(N) (see Eq. (<ref>) and Fig. <ref>), to provide the theoretical prediction for the fixation and coexistence probabilities. For the first regime at very low ν_K (Fig. <ref>(a,d)), the starting environment is unlikely to switch, and the distribution of the total population is bimodal; see left inset in Fig. <ref>. The fixation and coexistence probabilities of Fig. <ref>(d) are then computed as the weighted average of the Moran probabilities for the static environment cases N=K_±, with weights (1±δ_K)/2. For the regime of Fig. <ref>(b,e), irrespective of the starting environment, the population size will, on average, spend significant time on both N=K_- and K_+. Since relative demographic fluctuations are larger at K_-, the simulated fixation and coexistence probabilities of Fig. <ref>(b) are captured in Fig. <ref>(e) by a Moran process under total population N=K_-. Finally, for high ν_K, the environmental noise averages out and the behaviour of the system corresponds to that of a Moran process at N=𝒦; see right inset in Fig. <ref>.
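For completeness, the sketch below (ours) implements a direct Gillespie simulation of the same set of reactions; the actual results were produced with the more efficient Next Reaction Method, but the reaction rates are identical. Parameter values are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def simulate(K0, gamma, nu_K, delta_K, nu_T, delta_T, s, t_max):
    # Direct Gillespie simulation of the full model: births/deaths of R and S,
    # toxin switches (xi_T) and carrying-capacity switches (xi_K).
    xi_T = +1 if rng.random() < 0.5 * (1.0 + delta_T) else -1   # stationary start
    xi_K = +1 if rng.random() < 0.5 * (1.0 + delta_K) else -1
    N_R = N_S = int(K0 / 2)
    t = 0.0
    while t < t_max and N_R > 0 and N_S > 0:
        N = N_R + N_S
        x = N_R / N
        K = K0 * (1.0 + gamma * xi_K)
        f_S = np.exp(s * xi_T)
        f_avg = x + (1.0 - x) * f_S
        rates = np.array([
            N_R / f_avg,                       # R birth,  T_R^+
            f_S * N_S / f_avg,                 # S birth,  T_S^+
            N * N_R / K,                       # R death,  T_R^-
            N * N_S / K,                       # S death,  T_S^-
            nu_T * (1.0 - delta_T * xi_T),     # toxin switch, rate nu_T^{xi_T}
            nu_K * (1.0 - delta_K * xi_K),     # resource switch, rate nu_K^{xi_K}
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(6, p=rates / total)
        if event == 0:
            N_R += 1
        elif event == 1:
            N_S += 1
        elif event == 2:
            N_R -= 1
        elif event == 3:
            N_S -= 1
        elif event == 4:
            xi_T = -xi_T
        else:
            xi_K = -xi_K
    return t, N_R, N_S

t_end, N_R, N_S = simulate(K0=100, gamma=0.5, nu_K=1.0, delta_K=0.0,
                           nu_T=2.0, delta_T=0.2, s=1.0, t_max=200.0)
print("stopped at t =", t_end, "with (N_R, N_S) =", (N_R, N_S))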
§ SM4. STATIONARY PROBABILITY DENSITY FUNCTION OF THE X-PDMP
The joint stationary PDF
of the x-PDMP
is labeled by ρ_±(x)≡ρ(x,ξ_T=±), and satisfies <cit.>
∂_t ρ_± = -∂_x(ẋ_±ρ_±)-ν_±ρ_± + ν_∓ρ_∓,
where ẋ_±≡ x(1-x)(1-e^± s)/[
x+(1-x)e^± s],
and the marginal stationary PDF of the x-PDMP defined by Eq. (<ref>)
is ρ(x)≡ρ_+(x)+ρ_-(x).
Following <cit.>, from Eq. (<ref>), we can define J_±=ẋ_±ρ_± + ∫^x_0(ν_±ρ_± - ν_∓ρ_∓) dx' as the probability flux of the system, and then rewrite Eq. (<ref>) as ∂_t ρ_± = -∂_x J_±. Note that by definition the x-PDMP ignores
intrinsic noise, hence x=0,1 states cannot be reached. We thus set probability flux at the boundaries to zero as natural boundary conditions (BCs) <cit.>. Since we want to derive the stationary joint PDF ρ(t→∞)
under zero-current BCs, we use
ρ =ρ_+ + ρ_- and take ∂_t ρ = -∂_x (J_+ + J_-) = 0. In this case, we obtain J_+ + J_- = 0, where the flux boundary conditions set the integration constant to 0. From this relation we can find that ρ_± = -ẋ_∓ρ_∓/ẋ_±. It is useful to
introduce the auxiliary variable q ≡ρ_+ - ρ_-, and write J_+ + J_- = 0 as
ẋ_+ (ρ+q)/2 + ẋ_- (ρ-q)/2 = 0. After rearranging for q we can substitute this into our expression for ρ_± to find ρ_± = (ρ± q)/2=±[ẋ_∓/(ẋ_- - ẋ_+)]ρ. The equation for the PDMP density of the resistant fraction x at quasi-stationary coexistence then reads
∂_x (ẋ_- ẋ_+/ẋ_- - ẋ_+ρ) + ẋ_- ẋ_+/ẋ_- - ẋ_+ρ(ν_+/ẋ_+ + ν_-/ẋ_-) = 0.
Multiplying by -1, rearranging, and integrating, leads to
ln(ρ/1/-ẋ_+ + 1/ẋ_-) - ln(C) = ∫^x(ν_+/-ẋ_+ - ν_-/ẋ_-) dx',
where C is an integration constant, with
1/(-ẋ_+)=[e^s/x+1/(1-x)]/(e^s-1) and 1/ẋ_-=[1/x+e^s/(1-x)]/(e^s-1),
see Eq. (<ref>) with ξ_T=±1. Eq. (<ref>) yields
ln(ρ/Ce^s+1/e^s-11/x(1-x)) = ν_+(e^sln(x)-ln(1-x))-ν_-(ln(x)-e^sln(1-x))/e^s-1,
and the normalised x-PDMP stationary probability density function then becomes
ρ = Γ(λ+μ)/Γ(λ)Γ(μ)x^λ-1(1-x)^μ-1,
which corresponds to the well-known beta distribution with
λ≡ν_+e^s-ν_-/e^s-1 and μ≡ν_-e^s-ν_+/e^s-1.
Under the change of variables ν_±=ν_T(1∓δ_T), and after some algebra, the exponents read
λ=ν_T(1-δ_T coth(s/2)) and μ=ν_T(1+δ_T coth(s/2)),
giving the probability density Eq. (<ref>) in terms of the environmental parameters and the selection bias. Fig. <ref> shows the predictions of Eq. (<ref>) and its excellent match with simulation data.
As mentioned in Sec. <ref>, a complementary characterisation of the coexistence phase is provided by the modal value, denoted by x̂, of the PDF of the x-PDMP, derived in Eq. (<ref>). As the x-PDMP density is a beta distribution, its modal value is
x̂ = (λ-1)/(λ+μ-2) = 1/2 ( 1 - ν_T/(ν_T-1) δ_T coth(s/2) ).
Note that the x-PDMP distribution is unimodal only for λ,μ>1, i.e. ν_T(1-|δ_T|)>1. However, we use x̂ only as an analytical proxy for the expected value of x in the coexistence and fixation-coexistence crossover regions (coexistence probability 0<η≤1), which coincide with the unimodal regime. For pure coexistence η→1, observed at ν_T→∞ and δ_T≠±1, the modal x̂ value reduces to the mean field expression x^* of Eq. (<ref>).
As shown in Fig. <ref>, the modal value x̂ approximates the unconditional expected R fraction ⟨ x⟩ (red crosses) in the regime ν_T>1/(1-|δ_T|), where there is non-zero coexistence probability (0<η<1).
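For concreteness, the stationary density of Eq. (<ref>) and the modal value above can be evaluated numerically as in the following sketch, which assumes SciPy's beta distribution; the function names are ours.

import numpy as np
from scipy.stats import beta

def x_pdmp_exponents(nu_T, delta_T, s):
    coth = np.cosh(s / 2) / np.sinh(s / 2)
    return nu_T * (1 - delta_T * coth), nu_T * (1 + delta_T * coth)   # lambda, mu

def x_pdmp_pdf(x, nu_T, delta_T, s):
    lam, mu = x_pdmp_exponents(nu_T, delta_T, s)
    return beta.pdf(x, lam, mu)                # stationary x-PDMP density rho(x)

def x_hat(nu_T, delta_T, s):
    lam, mu = x_pdmp_exponents(nu_T, delta_T, s)
    if lam <= 1 or mu <= 1:
        return None                            # mode defined only in the unimodal regime
    return (lam - 1) / (lam + mu - 2)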
In the fixation regime where ν_T<1/(1-|δ_T|), we find a higher (lower) fixation probability for the resistant strain than for the sensitive one when δ_T is negative (positive), with the fraction of R (hence ϕ) approaching one (Fig. <ref>, left) or zero (Fig. <ref>, right) when ν_T≈ 1.
The rationale is that, for very small ν_T, the strain that fixates is set by the initial toxin state, as we expect no toxin switches before fixation has occurred; the probability to start at ξ_T=±1 is (1±δ_T)/2. When ν_T is increased further, the system experiences both environmental states, and the toxin bias δ_T sets which strain is more likely to fixate. For larger ν_T>1/(1-|δ_T|), coexistence takes over as the result of the self-averaging of the transition rates over the stationary ξ_T distribution.
§ SM5. COEXISTENCE UNDER K-EV AND FINAL FIXATION
In Fig. <ref>(a-c) we find that the region of coexistence grows when ν_K is increased.
This behaviour is expected for an effective population size that would increase with ν_K, as suggested by the MFT, which increases with the population size (Fig. <ref>(c)). However, this seems at odds with the average population size ⟨ N⟩ decreasing with ν_K, as shown in Fig. <ref>.
A more suitable characterisation of the influence of ν_K on the coexistence phase is thus provided by the modal value N̂ of the N-PDMP PDF, given by Eq. (<ref>). Indeed, Fig. <ref> illustrates that N̂, unlike ⟨ N⟩, increases with ν_K for δ_K<γ, in line with the results reported in Fig. <ref>(a-c).
For a complete characterisation of the coexistence regime we now briefly discuss the final state that is attained after a long transient, once a large fluctuation drives the system from the long-lived metastable coexistence to one of the two absorbing states where a single strain takes over the entire population <cit.>. Fig. <ref> illustrates the full fixation outcome diagram (under no K-EV for simplicity), even in the phase of long-lived coexistence. Fixation in this regime is determined by the sign of δ_T, with a sharp transition at δ_T=0. This is because fixation occurs most likely in the absorbing state closest to the coexistence equilibrium x^* given by Eq. (<ref>): here,
the toxin bias
eventually imposes the fixation of S (δ_T>0) or R (δ_T<0).
§ SM6. VIDEOS OF SAMPLE PATHS
In this section we provide example trajectories for representative realisations of the full model under both fluctuations in the toxin level and in the carrying capacity. The corresponding movies are provided online in <cit.>.
Fig. <ref> captures the dynamics of the system in six example realisations under different EV parameters (ν_T,δ_T,ν_K) for a selection strength s=0.5, which sets the magnitude of T-EV; and mean carrying capacity K_0=1200 with γ=2/3 (K_-=400 and K_+=2000), setting the magnitude of K-EV. For simplicity, we keep δ_K=0.
- The example of Fig. <ref>(a)
illustrates an unbiased fluctuating environment with intermediate T-switching rate
(white background: low toxin level, grey: high toxin level; ν_T=0.5, δ_T=0) and K-switching rate (solid black line; ν_K=0.2). We notice that total population N rapidly attains quasi-stationarity well before a fixation event occurs (S fixation in this example).
- In Fig. <ref>(b), we show faster environmental T and K switching, with ν_T=10 and ν_K=5. There is also a bias towards the harsh T state favouring the strain R (δ_T=-0.15, high toxin level)
that is responsible for a robust offset in strain abundance eventually leading to
extinction of S and fixation of R.
In this example, the K-EV switching rate is sufficiently high to ensure its self-averaging: in this fast K switching regime, the population size is distributed about the effective carrying capacity 𝒦= K_0(1-γ^2)/(1-γδ_K) (here N≈𝒦=667), see Fig. <ref> and Sec. <ref>.
- Fig. <ref>(c) illustrates the case of fast T-EV (ν_T=10)
with a bias towards the mild/low T state favouring the strain S (δ_T=0.15). In this example, the K-EV switching rate is low (ν_K=0.05)
and the population size N follows K(t) and fluctuates about K_- and K_+.
The T-EV bias (δ_T>0) is here responsible for a systematic
offset in the strain abundance, with N_S>N_R, resulting in the fixation of S and extinction of R, see Fig. <ref>(a). In this example, K-EV is responsible for larger demographic fluctuations in the environmental state ξ_K=-1, where N≈ K_- and S fixation is more likely than when N≈ K_+ (ξ_K=+1).
- Fig. <ref>(d) shows typical trajectories in the long-lived coexistence phase in the case of unbiased fast switching T-EV and K-EV, with ν_T=10, δ_T=0 and ν_K=50. This set of parameters essentially corresponds (here ν_K=50 instead of ν_K=5) to a point in the bright green region in the diagram of Fig. <ref>(c), where η≈ 1
and long-lived coexistence is almost certain.
As in panel (b), the fast K switching (solid black line not shown in Fig. <ref>(d))
leads to fluctuations of the total population size about 𝒦, i.e. N≈𝒦=667. In this example x^*=1/2, see Fig. <ref>, and the number of R and S cells fluctuates about their averages: N_R≈N_S≈𝒦/2≈ 333.5, see Sec. <ref>.
- Fig. <ref>(e,f) illustrate the
dynamics under fast T-EV (ν_T=10) and extremely slow K-switching rate (ν_K=5×10^-6), with an initial carrying capacity K(0)=K_- in (e) and K(0)=K_+ (f). There is also a small bias towards the mild T state favouring S (δ_T=0.1). This choice of parameters corresponds to a point in the faint green region in the diagram of Fig. <ref>(a), where 0<η<1
and long-lived coexistence is possible but not certain. In panel (e), the population size fluctuates about K(0)=K_- (N≈ K_-) and is not able to sustain long-lived coexistence: demographic fluctuations yield S fixation in a time t≲ 2K_-=800. In panel (f), we have N≈ K_+ and, as demographic fluctuations are unable to cause the fixation/extinction of either strain in a time less than 2N=2K_+=4000 (for clarity, the time series in Fig. <ref> have been truncated at t=2400), we have long-lived coexistence.
In all examples in the panels of Fig. <ref>,
the population size N reaches quasi-stationarity (settles in the QPSD) a significant time before the
fixation of one strain and extinction of the other, in line with the considerations underpinning Eqs. (<ref>) and (<ref>).
§ SM7. FULLY CORRELATED AND ANTI-CORRELATED ENVIRONMENTAL VARIABILITY
In the case of fully correlated/anti-correlated T-EV and K-EV,
environmental variability is no longer twofold since
we have ξ≡ξ_T and ξ_K=ξ (fully correlated EV) or ξ_K=-ξ (fully anti-correlated EV). The switching carrying capacity can thus be written as
K(t)=K_0[1+γ̃ξ(t)],
where γ̃=γ in the fully correlated case, and γ̃=-γ when T-EV and K-EV are fully anti-correlated. For instance, this implies that, in the correlated case, the environmental state ξ=+1 corresponds to f_S=e^s>1 and K=K_+ (low toxin level, abundant resources), while ξ=-1 is associated with f_S=e^-s<1 and K=K_- (high toxin level, scarce resources). As said, under fully correlated/anti-correlated T/K-EV, environmental variability is no longer twofold: ξ simultaneously drives the level of toxin and the abundance of resources. Hence, we can characterise the effect of fully correlated/anti-correlated EV in terms of ν≡ν_T and δ≡δ_T, and the fully anti-correlated case is related to the fully correlated one via γ→ -γ. An example of fully anti-correlated EV modelled in terms of a dichotomous process driving the level of toxins and resources in the context of competitive exclusion is discussed in <cit.>.
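As an illustration, a sample path of this dichotomous carrying capacity can be generated as below. The time step and seed are arbitrary, and we assume the convention that the state ξ=±1 is left at rate ν(1∓δ), which reproduces the stationary weights (1±δ)/2.

import numpy as np

def simulate_K(T, dt, nu, delta, K0, gamma_eff, xi0=1, seed=0):
    # gamma_eff = +gamma for fully correlated EV, -gamma for fully anti-correlated EV
    rng = np.random.default_rng(seed)
    xi, t, ts, Ks = xi0, 0.0, [], []
    while t < T:
        leave_rate = nu * (1 - delta) if xi == 1 else nu * (1 + delta)
        if rng.random() < leave_rate * dt:     # valid only for leave_rate * dt << 1
            xi = -xi
        ts.append(t)
        Ks.append(K0 * (1 + gamma_eff * xi))
        t += dt
    return np.array(ts), np.array(Ks)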
Fig. <ref> shows the comparison between the uncorrelated T-EV and K-EV studied in the main text (top row), and the fully correlated (middle row) and fully anti-correlated (bottom row) cases reported here, all under K_0=1000 and γ=0.9, and for different selection strengths s∈{0.2,2,20} (left to right columns). For the uncorrelated case, the parameters of K-EV are independent of those of T-EV,
and are here chosen to be (ν_K,δ_K)=(10^-4,0), i.e. unbiased slow-switching K-EV (similar to the example of Fig. <ref>(a)).
Since the fully correlated and anti-correlated cases (middle and bottom rows)
are mirror images through the vertical axis
and under a red-blue colour change, we focus on the correlated case only (middle row). For this case, γ̃=γ=0.9, with ξ_K=ξ_T≡ξ, ν_K=ν_T, and δ_K=δ_T. In the fully correlated case, we observe that both the blue (resistant fixation) and the bright green (coexistence) regions shift upwards (to higher toxin level biases δ_T) as the selection strength increases (from left to right column). We can understand this phenomenon in light of the T-correlated K-EV fluctuations. Since δ_K=δ_T, a lower value of δ_T in the diagrams implies longer cumulative periods in the harsh toxin level, but also in the low carrying capacity environment. Therefore, lower δ_T provides a selective advantage to the resistant strain at the same time that it shrinks the total population. Demographic fluctuations being stronger in smaller populations, correlated T-EV and K-EV thus provide a higher R fixation probability (blue region shifted up), as well as a lower coexistence probability (green region shifted up). Moreover, since the total population increases with the bias towards the high carrying capacity (here δ_K=δ_T), see Sec. <ref>,
the MFT increases with δ_T (see Fig. <ref>(c)), and the coexistence probability thus shifts upwards. The magnitude of the upward shift in the correlated case is small but increases with the selection strength s, which sets the amplitude of the T-EV fluctuations.
In summary, we obtain the same qualitative results for fully correlated/anti-correlated T/K-EV as when ξ_T/K are independent (uncorrelated environmental noise, twofold EV), with some minor quantitative differences, as shown in Fig. <ref> and discussed above. We conclude that the similar behaviour observed for uncorrelated and (anti-)correlated T/K-EV indicates that our findings are robust against the detailed model specifications: the results are expected to be valid for the general case of twofold environmental variability where T/K-EV are neither completely independent nor fully correlated/anti-correlated.
|
http://arxiv.org/abs/2307.04668v2 | 20230710161133 | Quantifying the Echo Chamber Effect: An Embedding Distance-based Approach | [
"Faisal Alatawi",
"Paras Sheth",
"Huan Liu"
] | cs.SI | [
"cs.SI",
"cs.AI",
"cs.LG"
] |
Quantifying the Echo Chamber Effect:
An Embedding Distance-based Approach
Faisal Alatawi, Paras Sheth, Huan Liu
Arizona State University
{faalataw,psheth5,huanliu}@asu.edu
August 12, 2023
==========================================================================================================
The rise of social media platforms has facilitated the formation of echo chambers, which are online spaces where users predominantly encounter viewpoints that reinforce their existing beliefs while excluding dissenting perspectives. This phenomenon significantly hinders information dissemination across communities and fuels societal polarization. Therefore, it is crucial to develop methods for quantifying echo chambers. In this paper, we present the Echo Chamber Score (ECS), a novel metric that assesses the cohesion and separation of user communities by measuring distances between users in the embedding space. In contrast to existing approaches, ECS is able to function without labels for user ideologies and makes no assumptions about the structure of the interaction graph.
To facilitate measuring distances between users, we propose EchoGAE, a self-supervised graph autoencoder-based user embedding model that leverages users' posts and the interaction graph to embed them in a manner that reflects their ideological similarity. To assess the effectiveness of ECS, we use a Twitter dataset consisting of four topics - two polarizing and two non-polarizing. Our results showcase ECS's effectiveness as a tool for quantifying echo chambers and shedding light on the dynamics of online discourse.
Echo Chamber, Polarization, Social Media, Ideology Detection, User Representation, Graph Auto-Encoder
§ INTRODUCTION
In the age of digital communication, social media platforms have revolutionized the way we disseminate and consume information. Nevertheless, this evolution has brought about notable challenges, particularly the emergence of echo chambers and polarization <cit.>. These phenomena are often characterized by high levels of controversy between members of different groups and homogeneity among members of the same group <cit.>. This reinforces pre-existing beliefs <cit.>, discourages critical thinking <cit.>, promotes the spread of misinformation <cit.>, and leads to societal divisions. Hence, it is crucial to devise methods for measuring the extent and impact of echo chambers on social media. By quantifying them, we can better understand these phenomena and, consequently, devise strategies to mitigate echo chamber effects and foster more balanced and nuanced discussions. Ultimately, this could contribute to a better informed, open-minded, and empathetic society. Such efforts are particularly crucial in today's world, where topics such as politics, health, economics, and environmental issues, which are susceptible to echo chambers <cit.>, have far-reaching implications for society.
Echo chambers are contingent on two dynamics: the interaction among users and the individual ideological leanings of these users. Numerous measures and metrics have been developed to leverage these dynamics, either separately or in conjunction. One such method, is to leverage the interactions graph to compute graph-specific metrics such as modularity <cit.>, or resort to other techniques like random walkers <cit.>. However, utilizing the graph introduces a difficulty, as a graph may exhibit modularity without necessarily being polarized or containing an echo chamber <cit.>. An alternate approach involves assessing the ideological disparity between users and their adjacent nodes within the graph, investigating correlations between a user's ideology and that of their neighbors <cit.>, or observing ideological deviations from the center of an opinion scale after deploying opinion-spreading models <cit.>. These methodologies, although insightful, are fraught with challenges. Labeling users to ascertain their ideologies or opinions is a laborious task that is susceptible to errors. Similarly, semi-supervised methods that depend on weak labels also present their own unique set of complications.
In response to these issues, we introduce the Echo Chamber Score (ECS) a metric that captures the essence of the echo chamber concepts by focusing on the dynamic interactions both within and across different user communities. The crux of our approach is to gauge the similarity of users with their respective communities (i.e., cohesion) and across different communities (i.e., separation). Here, an interaction graph can be characterized as exhibiting an echo chamber-like structure if it exhibits a low average distance between users of a single community (i.e., high cohesion) and a high average distance between users across different communities (i.e., high separation). This strategy of using the distance allows us to bypass reliance on potentially incidental graph structures and eliminates the need to split the graph into two separate communities, an action that erroneously assumes inherent polarization. Further, our method uses similarity in the embedding space as a proxy for ideological distance, thereby circumventing the arduous and error-prone task of detecting individual users' ideologies.
To facilitate the measurement of ideological distance, we propose EchoGAE, a self-supervised Graph Auto-Encoder <cit.> (GAE) based user embedding model. EchoGAE is designed to capture the ideological similarities among users through their interactions and shared posts, operating on two core principles: homophily <cit.>, where individuals associate and interact with those similar to themselves, and linguistic homophily <cit.>, the tendency of socially connected users to use language in similar ways. EchoGAE leverages homophilic interactions such as retweets, regarded as endorsements of similar ideologies <cit.>, along with the content of user posts. Both serve as inputs to capture and map these ideological similarities. The model architecture comprises an encoder that positions similar nodes closely together in the embedding space, and a decoder that uses users' embedding to reconstruct the graph structure in a self-supervised manner. Additionally, it utilizes Sentence-BERT <cit.>, a BERT-based language model, to embed tweets, thus reflecting their semantic similarities. By uniquely combining the interaction graph structure and linguistic information from user posts, EchoGAE generates representations that accurately reflect ideological similarities, establishing it as a robust tool for measuring the effects of echo chambers and polarization.
In this research, we evaluate the ability of the Echo Chamber Score (ECS) to measure echo chamber effects within homophilic social interaction networks. Our experiments are based on real-life Twitter datasets related to four topics: two polarizing and two non-polarizing. Our findings confirm that the ECS metric accurately identifies polarized interaction graphs and quantifies the echo chamber effect in a manner consistent with existing state-of-the-art methods. Furthermore, ECS proves successful in determining which communities within the interaction graph are more polarized, demonstrating its unique ability to rank communities based on their polarization. We also verify that EchoGAE's user embedding effectively reflects ideological distances between users, showcasing its capacity to detect user ideologies. To promote reproducibility and foster further development in this field, we make our datasets and code available to the public[https://github.com/faalatawi/echo-chamber-scorehttps://github.com/faalatawi/echo-chamber-score].
§ RELATED WORK
Echo chambers and polarization measures can be divided into two main types: graph-based and ideology-based methods. Graph-based methods are based on the concept of a graph representing interactions between users on a given topic. These methods operate on the assumption that polarization can be observed within the graph itself. For instance, the modularity of a graph, which quantifies how well a graph can be divided into distinct communities, has been used to measure echo chambers <cit.>. However, challenges arise from this approach, as modularity and other similar methods may not accurately represent echo chamber phenomena due to the possibility that non-polarized graphs can also exhibit high modularity <cit.>.
To address these limitations, new methods have been developed that scrutinize the interactions between communities within a graph. These improved methods involve dividing the graph into two distinct communities and measuring polarization at the boundaries between them <cit.>. An alternative approach involves using the Random Walk Controversy <cit.> (RWC), a popular polarization method <cit.> that calculates the probability of a random walker starting at one community and ending at another. Nonetheless, these methods have their own drawbacks, such as the necessity of splitting the communities in the graph and making an inherent assumption that the graph is already polarized. This results in difficulties in measuring polarization that may not actually exist.
Our novel approach, the Echo Chamber Score (ECS), alleviates these issues. The ECS does not require the division of the graph into two communities and is capable of measuring the effects of echo chambers and polarization across any number of communities, making it a more flexible and accurate method for assessing polarization.
Ideology-based methods for measuring echo chambers and polarization take a different approach, focusing on a user's ideological leaning and the users they interact with. Two primary approaches exist within this category: (1) measuring the ideological distance between a user and their neighboring users in the graph, and (2) measuring the divergence from an ideological center after applying an opinion-spreading model.
In the first approach, the ideological leanings of all users are estimated and then compared to their neighboring users. The fundamental idea here is that an echo chamber is formed when users mostly interact with others who share similar opinions <cit.>. For instance, the ideology of users can be inferred from the hashtags they share or the content they post <cit.>. The polarization is then quantified by measuring the Pearson correlation between a user's ideological score and the average ideological score of their neighbors <cit.>.
In the second approach, opinion-spreading models such as the Friedkin-Johnsen or DeGroot opinion model are utilized <cit.>. For instance, the Friedkin-Johnsen model operates by updating a node's opinion through repeatedly averaging the opinions of its neighbors until reaching equilibrium <cit.>. Polarization is then measured by how much opinions at equilibrium deviate from the average <cit.>. Alternatively, the DeGroot opinion model is used to construct a Polarization Index (PI) based on the probability density distribution of individuals' opinions <cit.>. A bimodal distribution would suggest the existence of polarization, while a unimodal distribution would indicate its absence <cit.>.
Both these ideology-based approaches have challenges, such as the laborious and error-prone task of estimating users' ideological leanings from their content or interactions. Therefore, we have opted instead for a model based on similarity in the embedding space as a proxy for ideology, eliminating the need for ideology estimation.
§ METHODOLOGY
This section presents our approach to quantifying echo chambers in online conversations. Our objective is to assess whether the discussion surrounding a given topic exhibits polarization and whether the communities formed by users can be characterized as echo chambers or comprise a diverse group of individuals with varying ideologies. To achieve this, we construct a graph G = (V, E), where V represents the set of social media users, and E represents the edges denoting homophilic interactions, such as retweets. Additionally, we obtain a set of communities Ω from a community detection algorithm, where each community consists of a group of users. Our primary aim is to measure the level of polarization within the entire graph by computing the Echo Chamber Score (ECS) for each community. Consequently, this section presents our novel ECS metric for quantifying echo chambers. However, as ECS relies on user embedding, we begin by introducing our user embedding framework, EchoGAE, which enables the representation of users based on their ideological similarity.
§.§ Embedding Social Media Users
The EchoGAE model (see figure <ref>) is essential to our methodology for quantifying echo chambers in online conversations. Its purpose is to embed users in a way that reflects their ideological similarity, facilitating the calculation of the Echo Chamber Score (ECS). By placing ideologically similar users closer in the embedding space, EchoGAE enables the measurement of cohesion and separation of communities in the graphs, the two components of ECS, as we will explain later in the section.
EchoGAE is an adaptation of the Graph Auto-Encoder (GAE) model <cit.>, tailored for user embedding based on tweets and interactions. As a self-supervised model, EchoGAE eliminates the need for user ideological labeling. It employs two graph convolutional layers to encode the graph into a latent representation, which is subsequently decoded to reconstruct the graph structure. EchoGAE aims to minimize the binary cross-entropy between the real and reconstructed adjacency matrices.
The EchoGAE model consists of two main components: an Encoder and a Decoder. The Encoder takes both the tweets and the graph as input to create node embeddings, which serve as the user embeddings. The Encoder is divided into two parts. Firstly, the tweets component utilizes Sentence-BERT <cit.> to embed the user's tweets, and the average of these tweet embeddings is taken to form the content embeddings (represented as the matrix 𝐗 in Fig. <ref>). Secondly, the network component leverages the adjacency matrix (𝐀 in Fig. <ref>) of the graph. Together, these components contribute to the creation of node embeddings (i.e., the user embeddings 𝐙∈ℝ^n × d, where n is the number of users in the graph and d is the dimension of the user embedding) that capture the information from both the users' content and their network interactions.
The Decoder performs an inner product operation <cit.> on the node representations (σ(𝐙 * 𝐙^𝐓)) obtained from the Encoder, resulting in a reconstructed adjacency matrix (Â). Subsequently, the binary cross-entropy loss is used to train the model and ensure accurate graph reconstruction.
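A minimal sketch of such an encoder/decoder pair is given below, assuming PyTorch Geometric for the graph convolutions and the sentence-transformers package for Sentence-BERT. The layer widths, the checkpoint name, and the training-loop details are illustrative assumptions, not the exact EchoGAE configuration.

import numpy as np
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from sentence_transformers import SentenceTransformer

class EchoGAESketch(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=64, out_dim=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def encode(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)            # Z: user embeddings

    def decode(self, z):
        return torch.sigmoid(z @ z.t())             # reconstructed adjacency A_hat

def content_features(user_tweets, model_name="all-MiniLM-L6-v2"):
    # X: one row per user, the mean of that user's Sentence-BERT tweet embeddings
    sbert = SentenceTransformer(model_name)
    feats = [sbert.encode(tweets).mean(axis=0) for tweets in user_tweets]
    return torch.tensor(np.array(feats), dtype=torch.float)

def train_step(model, optimizer, x, edge_index, adj_true):
    # adj_true: dense float adjacency matrix used as the reconstruction target
    model.train()
    optimizer.zero_grad()
    loss = F.binary_cross_entropy(model.decode(model.encode(x, edge_index)), adj_true)
    loss.backward()
    optimizer.step()
    return loss.item()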
§.§ Measuring the Echo Chamber Effect
We introduce ECS (Echo Chamber Score), a measure for quantifying the echo chamber and polarization effects on social media. To measure the echo chamber effect using user embedding, we assess in-group cohesion and between-group separation <cit.>. We utilize the distance in the embedding space as a proxy for these factors, reflecting how closely related users within a community are (cohesion) and how distinct a community is from others (separation).
Let Z ∈ℝ^n × d represent user embeddings, where n is the number of users and d is the embedding dimension. Additionally, let Ω = {ω_1, ω_2, …, ω_M} denote the set of communities, where ω_i ⊂ V represents the i^th community consisting of users. For a user u ∈ω, we compute the cohesion value (λ_u) as the average distance between u and other users in the same community using Equation <ref>.
λ_u = (1/|ω|) ∑_{v ∈ω, v ≠ u} dist(u, v)
Here, |ω| denotes the number of users in the community ω, and dist(u, v) represents the distance (e.g., Euclidean) between users u and v in the embedding space (Z^(u) and Z^(v) respectively). Similarly, we compute the separation value (Δ_u) as the average distance between u and the nearest community other than ω using Equation <ref>.
Δ_u = min_{ω∈Ω, u ∉ω} [ (1/|ω|) ∑_{v ∈ω} dist(u, v) ]
To calculate the Echo Chamber Score (ECS) for a community ω = {u_1, u_2, …, u_N}, we use a formula inspired by the silhouette score <cit.> (in the appendix we show how to derive the ECS from the silhouette). Equation <ref> produces a score between 0 and 1, with a higher score indicating a greater likelihood of an echo chamber effect within the community.
ECS^*(ω) = (1/|ω|) ∑_{u ∈ω} [ max(Δ_u, λ_u) + Δ_u - λ_u ]/[ 2 max(Δ_u, λ_u) ]
The Echo Chamber Score can be computed for the entire graph using Equation <ref>, where Ω represents the set of communities obtained from a community detection algorithm such as Louvain <cit.> or Leiden <cit.>.
ECS(Ω) = 1/|Ω|∑_ω∈Ω ECS^*(ω)
The Echo Chamber Score (ECS) allows for comparison across different graphs representing various controversial topics. A higher ECS indicates a higher degree of echo chamber within a conversation. The components of ECS can provide additional insights, such as ranking communities based on their polarization, using Equation <ref>.
Note that our approach does not assume a specific number or size of communities and is independent of the community detection method. Moreover, it does not require prior knowledge of users' internal ideologies, setting it apart from related works <cit.>.
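To make the computation concrete, the cohesion, separation, and ECS expressions above can be evaluated directly from an embedding matrix and a list of detected communities, as in the following sketch (Euclidean distances and communities with at least two users are assumed; variable names are ours).

import numpy as np
from scipy.spatial.distance import cdist

def echo_chamber_scores(Z, communities):
    # Z: (n, d) user embeddings; communities: list of integer index arrays, one per community
    D = cdist(Z, Z)                                   # pairwise Euclidean distances
    per_community = []
    for i, com in enumerate(communities):
        vals = []
        for u in com:
            others = [v for v in com if v != u]
            lam = D[u, others].mean()                 # cohesion of user u
            delta = min(D[u, oc].mean()               # separation of user u
                        for j, oc in enumerate(communities) if j != i)
            m = max(delta, lam)
            vals.append((m + delta - lam) / (2 * m))
        per_community.append(float(np.mean(vals)))    # ECS* of this community
    return float(np.mean(per_community)), per_community   # graph-level ECS, per-community ECS*

The per-community values returned here are what allow communities to be ranked by their degree of polarization, as discussed above.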
§.§ Estimating Users' Ideology
Our embedding model, EchoGAE, aims to position users with similar ideological leanings closer to each other in the embedding space. Therefore, we assume that we can utilize the distance in the embedding space to infer users' ideological leanings. This helps us evaluate whether EchoGAE embeds users in a way that reflects their ideology, which is the core idea behind ECS.
After applying the EchoGAE embedding, we employ a clustering algorithm (e.g., KMeans) to detect two communities of users in the embedding space, denoted as ω_1 and ω_2. These communities represent the pro and anti sides of the debate, respectively. We follow similar works <cit.> that split the ideology spectrum into two sides.
The ideology score for each user is calculated using Equation <ref>. It is determined by the difference between the average distance of the user u to other users in ω_1 and the average distance to users in ω_2.
I(u) = (1/|ω_1|) ∑_{v ∈ω_1, v ≠ u} dist(u, v) - (1/|ω_2|) ∑_{v ∈ω_2, v ≠ u} dist(u, v)
Here, dist represents any distance function normalized between 0 and 1. In our implementation, we employ the Euclidean distance, but other distance measures can be used. The ideology scores I(u) range from -1 to +1. Importantly, values of -1 and +1 do not inherently indicate "good" or "bad" ideologies. In Equation <ref>, the order of the communities (ω_1 and ω_2) affects the sign of the ideology score. If a user belongs to ω_1, their score is positive when ω_1 is in the first term. Reversing the order of communities changes the sign but not the magnitude of the score. This introduces an additional layer of complexity in evaluating our method, which we address in the experimental results section.
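One possible implementation of this scoring, assuming scikit-learn's KMeans and Euclidean distances normalised to [0, 1], is sketched below; recall that the sign of the result depends on the arbitrary order of the two detected clusters.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def ideology_scores(Z, random_state=0):
    # Z: (n, d) user embeddings produced by EchoGAE
    labels = KMeans(n_clusters=2, n_init=10, random_state=random_state).fit_predict(Z)
    D = cdist(Z, Z)
    D = D / D.max()                                   # normalise distances to [0, 1]
    w1, w2 = np.where(labels == 0)[0], np.where(labels == 1)[0]
    scores = []
    for u in range(Z.shape[0]):
        d1 = D[u, w1[w1 != u]].mean()                 # mean distance to omega_1
        d2 = D[u, w2[w2 != u]].mean()                 # mean distance to omega_2
        scores.append(d1 - d2)
    return np.array(scores)                           # I(u) in [-1, 1], sign is convention-dependent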
§ EXPERIMENTS
In this section, we present the experiments we used to assess the effectiveness of our proposed method, Echo Chamber Score (ECS), in analyzing the echo chamber effect. To evaluate its performance and reliability, we compare ECS with two commonly used methods, Random Walk Controversy (RWC) and Polarization Index (PI). Additionally, we utilize ECS to analyze echo chambers at the community level, examining the distances between users in the embedding space to gain insights into the cohesion and separation of user communities. Furthermore, We conduct an experiment to determine if the distances in the embedding space can predict the ideological leaning of users. Finally, we perform an ablation study to examine the impact of using tweets in measuring the echo chamber effect and predicting user ideology. These experiments provide valuable insights into the performance and applicability of ECS in analyzing echo chambers, predicting user ideology, and assessing the role of tweets in these measurements.
§.§ Datasets
To investigate the echo chamber phenomenon, we selected four topics to examine user interactions related to these subjects. Two topics were controversial: abortion and gun debates, while the other two were non-controversial: the SXSW conference and the Super Bowl. The inclusion of non-controversial topics aimed to assess our method's performance in non-polarized settings. The datasets used in our experiments are outlined in Table <ref>, and we have made them publicly available[https://github.com/faalatawi/echo-chamber-scorehttps://github.com/faalatawi/echo-chamber-score] to ensure reproducibility and facilitate further research in the field of echo chamber analysis and detection.
Data collection. To collect data for each topic, we identified frequently used keywords in discussions (see Table <ref>) and monitored the conversation. We then gathered the retweeters of the most popular tweets associated with these keywords. This data was used to construct a graph for each topic, where users were represented as nodes, retweet interactions formed the edges, and users' tweets provided node attributes. We collected up to 200 of the users' most recent tweets (excluding retweets) to ensure an adequate amount of user-generated text for analysis.
The gun debate dataset was collected during the period of intense debate following the Uvalde school shooting in Uvalde, Texas, on May 24, 2022. Unfortunately, school shootings in the United States often ignite polarized discussions <cit.> on gun violence and constitutional gun ownership rights. To capture this discourse, we selected commonly used words from both sides of the debate and monitored the conversation from May to July. We then selected the top 1200 most retweeted tweets and constructed the retweet graph. The resulting graph (shown in the lower left panel of Figure <ref>) exhibited two communities, as identified by the Louvain algorithm <cit.>, indicating the presence of two polarized communities <cit.>. Similarly, we collected the retweet graph from the abortion rights debate following the US Supreme Court ruling on abortion that was issued on June 24, 2022, using relevant keywords. Both the gun debate <cit.> and abortion <cit.> have been widely studied as topics for analyzing echo chambers and polarization.
On the other hand, for non-controversial topics, we selected the topics that have been used to study echo chambers Super Bowl <cit.> and SXSW <cit.>. The Super Bowl is an annual sports event in the US, while the SXSW conference is an annual event that combines music, film, and interactive media in Austin, Texas. We followed the same data collection procedure as with the controversial topics.
Labeling. To evaluate the embedding quality of EchoGAE in capturing ideological similarity, we estimated users' ideological leanings. Following previous works that used news URLs to infer political leanings <cit.>, we obtained ideological labels for URLs from the non-partisan media watchdog AllSides[https://www.allsides.com/media-bias]. To assign labels to users, we utilized the news URLs they post as indicators of their ideology, using AllSides' political leanings for news websites' URLs. A user's political leaning is calculated as the average of the news articles they share. AllSides' ratings consist of five categories: left, center-left, center, center-right, and right, to which we assigned values of -1, -0.5, 0, 0.5, and 1, respectively. It is important to note that these values indicate opposing sides of the debate and do not inherently represent good or bad ideologies. We only used labels for users who shared at least five links. The number of labeled users for each dataset is specified in Table <ref>. Notably, controversial topics tend to have more labeled users due to the nature of user engagement with these topics, as users are more likely to express their ideological leanings in these topics.
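The URL-based labeling can be expressed compactly as below. Only the five-step numeric scale is taken from the text; the lookup url_rating from a news URL to its AllSides category is a placeholder, not an existing API.

ALLSIDES_SCORE = {"left": -1.0, "center-left": -0.5, "center": 0.0,
                  "center-right": 0.5, "right": 1.0}

def user_leaning(shared_urls, url_rating, min_links=5):
    # url_rating: placeholder lookup returning an AllSides category or None for unrated URLs
    ratings = [url_rating(u) for u in shared_urls]
    ratings = [r for r in ratings if r is not None]
    if len(ratings) < min_links:
        return None                                   # user left unlabeled
    return sum(ALLSIDES_SCORE[r] for r in ratings) / len(ratings)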
§.§ Measuring the Echo Chamber Effect
In this experiment, our objective is to evaluate the effectiveness of our proposed method in measuring the echo chamber effect. To accomplish this, we compare our method with commonly used techniques for calculating polarization and echo chamber effects. This comparison aims to demonstrate that our method performs comparably to existing methods and produces reliable results for measuring the echo chamber effect.
For our experiments, we utilize two widely used baselines: Random Walk Controversy (RWC) <cit.> and Polarization Index (PI) <cit.>. We then compare these baselines with our proposed method, Echo Chamber Score (ECS). RWC measures the likelihood of transitioning from one community to another in a network, where a value close to one indicates polarization and close to zero indicates no polarization. On the other hand, PI measures the degree of segregation within a population by modeling the propagation of opinions based on the probability density distribution of individuals' opinions.
To compute RWC, we partition the graph into two communities using the FluidC <cit.> algorithm. Subsequently, we calculate the probability of transitioning from one partition to another. For PI, we employ the DeGroot opinion model <cit.> with labeled users as seeds to disseminate opinions, and then we compute the PI index for each graph. In contrast to RWC, our proposed method ECS does not require dividing the graph into two communities. The graph may consist of multiple communities, and any community detection method can be employed. In this study, we use the Louvain algorithm <cit.> to identify the communities, which are then used to compute ECS. Furthermore, unlike PI, our method does not rely on any labeled users, as we utilize the embeddings obtained from EchoGAE.
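For reference, one simple reading of the DeGroot-style propagation used for PI, in which labeled seed users keep their score fixed while every other user repeatedly adopts the mean score of their neighbors, is sketched below; the iteration count and the handling of isolated nodes are our assumptions.

import numpy as np

def degroot_opinions(A, seed_scores, n_iter=200):
    # A: (n, n) adjacency matrix; seed_scores: dict {node index: ideology in [-1, 1]}
    n = A.shape[0]
    x = np.zeros(n)
    for u, s in seed_scores.items():
        x[u] = s
    deg = A.sum(axis=1)
    for _ in range(n_iter):
        x = np.where(deg > 0, (A @ x) / np.maximum(deg, 1), x)   # neighbor averaging
        for u, s in seed_scores.items():                         # seeds stay fixed
            x[u] = s
    return x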
As shown in Table <ref>, our approach effectively assigns higher scores to controversial topics (e.g., Gun debate and Abortion) compared to non-controversial ones, demonstrating its ability to perform on par with existing methods. Our method aligns with PI, a highly regarded technique that employs ideology labels to gauge polarization. PI's approach closely approximates the actual labels, and our method exhibits strong agreement with it, as evidenced by a 0.99 Pearson correlation. In contrast, there are notable differences between our method and RWC. For instance, both ECS and PI indicate that the Gun Control debate is more polarized than the Abortion debate, which contradicts the findings of RWC. We posit that the requirement of RWC to partition the graph into only two communities hinders its performance. By relaxing this requirement, our measure ECS can evaluate any number of communities identified by various community detection algorithms.
These techniques (RWC, PI, and ECS) enable us to rank topics based on their polarization levels, from highest to lowest. Both PI and our method (ECS) consistently rank the topics in a similar manner. It is worth noting that our method considers the Gun debate more polarized than the Abortion debate, aligning with opinion polls. According to the Pew Research Center[https://www.pewresearch.org/], in 2022, 61% of Americans supported abortion access, while only 53% advocated for stricter gun laws. This demonstrates greater disagreement and polarization within the Gun debate compared to the Abortion debate.
§.§ Analysing the Echo Chamber Effect on Community Level
To showcase ECS's capability in analyzing the echo chamber at a more detailed level, we conducted an experiment to examine the insights provided by our measure at the community level. The objective was to determine which community within a topic exhibited a higher level of polarization. For this experiment, we focused on the controversial topics, namely the Gun debate and Abortion, and explored how we could investigate the interaction both between and within communities. These topics were chosen due to the presence of echo chambers, as identified in the previous experiment.
Upon examining the gun dataset, we observed that the debate surrounding guns and school shootings exhibited a higher level of polarization compared to abortion, as evidenced by an ECS score of 0.714 compared to 0.626 (see Table <ref>). Applying the Louvain algorithm, we identified two communities in the interaction graph, with sizes of 3984 and 2582 nodes, respectively. Computing the ECS* (equation <ref>) for each community, we obtained echo chamber scores of 0.739 and 0.676, indicating polarization and ideological homogeneity within both communities. Notably, the larger community demonstrated a slightly higher level of polarization.
Upon labeling a sample of ten users from each community, we discovered that the larger community aligned with the anti-gun group, while the smaller community represented the pro-gun group. By examining the 2D projection of the EchoGAE embedding of users (refer to Figure <ref>), we observed that the blue community (anti-gun) appeared to be of a similar size to the other community (pro-gun), suggesting close levels of polarization between the two communities. However, the higher ECS score of the anti-gun community indicates that this group is more homogeneous than the other, which is surprising. It is possible that the gun debate is not a strictly left-versus-right issue and that more centrist voices are participating in it. This analysis would be challenging to perform using PI or RWC techniques. However, ECS, being community-independent and not reliant on ideology labels, enables such analysis without prior knowledge of community divisions and ideologies.
In the abortion dataset, we identified two communities with sizes of 3933 and 1154. The ECS* scores for these communities were 0.6 and 0.69, respectively. To gain deeper insights, we conducted a random sampling of ten users from each community and manually examined their Twitter accounts. Our analysis revealed that the larger community primarily consisted of supporters of abortion rights. On the other hand, the anti-abortion community exhibited a higher level of polarization compared to the other community. This finding aligns with the opinion polls mentioned earlier, as the anti-abortion group tends to hold a more fringe position compared to the pro-abortion group. Additionally, this alignment can be observed in the abortion rights vote that took place in Kentucky, which is considered a conservative state. During the vote, the majority of voters rejected[https://www.pbs.org/newshour/politics/kentucky-voters-reject-constitutional-amendment-on-abortion] the proposal to restrict abortion rights.
§.§ Using Ideology Detection to Verify the Embedding Space
We assume that the distance in the embedding space could be used to predict the political leaning of users and that users with similar ideological leanings are closer to each other in the embedding space. If we prove that the distance in the embedding space could be used to estimate the ideology of users, we could then use the distance to measure the echo chamber effect, as we rely on the distance to measure the separation (eq <ref>) and cohesion (eq <ref>) of communities in order to gauge the echo chamber effect.
After labeling users, we then split the labeled users into training and validation sets (10% - 90% respectively). Since our model is unsupervised, the training set is used by the baseline model only, and we use the validation set to validate the estimation of both models. For the baseline model, we used the DeGroot opinion model <cit.>, in which the user’s ideology is the average ideology of their neighbors. After embedding users using EchoGAE, we employed the KMeans algorithm to detect two communities of users in the embedding space, referred to as ω_1 and ω_2, representing the pro and anti sides of the debate. Lastly, we calculated the ideology score of each user, taking into account their distances to the members of communities ω_1 and ω_2 in the embedding space as shown in equation <ref>.
In Table <ref>, we present our method's outcomes for estimating ideology compared to the baseline. The resulting ideology scores were compared to the pseudo-scores obtained from AllSides labeling. Our analysis involved comparing our ideology scores to those obtained from the AllSides labeling and the baseline model using Mean Absolute Error (MAE), and Mean Squared Error (MSE). The results shown in Table <ref> demonstrate that our model performs comparably to the semi-supervised baseline, even though our method is unsupervised (we do not use any labels in our model). Furthermore, as depicted in Figure <ref>, a high degree of concurrence is observed between the distributions of the predicted and actual ideologies.
It should be noted that in Equation <ref>, the order of the communities (i.e., ω_1 and ω_2) influences the sign of the ideology score. For instance, if a user belongs to ω_1 (i.e., is more closely associated with users in ω_1), their ideology score would be positive if ω_1 appeared in the equation's first term. However, if the order of the communities is reversed, the score's magnitude remains the same, but the sign changes. Consequently, in our measurement, we evaluated both orderings (i.e., with ω_1 appearing first and then second) and report the minimum value.
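In practice this amounts to evaluating the error under both sign conventions and keeping the smaller one, e.g.:

import numpy as np

def sign_invariant_mae(I_pred, I_true):
    # the order of omega_1 and omega_2 is arbitrary, so score both sign conventions
    mae = lambda a, b: float(np.mean(np.abs(a - b)))
    return min(mae(I_pred, I_true), mae(-I_pred, I_true))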
§.§ Ablation Study
The primary objective of this study is to examine the impact of the components of EchoGAE on the performance of two tasks: measuring the echo chamber effect and predicting the ideology of users. Specifically, the study explores the significance of using textual information, i.e., tweets, in these tasks. Table <ref> presents the results obtained from this study. It demonstrates that the model's performance is enhanced when tweets are utilized. This finding emphasizes the importance of linguistic similarity in measuring echo chambers and estimating ideology. Therefore, the study suggests that investing more resources to extract knowledge from tweets could lead to improved accuracy in both tasks.
However, the study also observes that good results can be achieved with the graph component alone, in situations where textual information is unavailable. Notably, even in cases where the difference in echo chamber scores between controversial and non-controversial topics is not substantial, the tweet-less model still performs well by assigning higher scores to controversial topics.
In conclusion, this study provides empirical evidence supporting the importance of incorporating textual information, such as tweets, in measuring echo chambers and estimating ideology. Nevertheless, it also highlights that satisfactory results can be obtained with graph-only models in the absence of textual data.
§ CONCLUSION
In this paper, we introduced Echo Chamber Score (ECS), a novel metric for quantifying echo chambers and polarization in social media networks. ECS leverages an embedding space to measure the cohesion and separation of user communities, providing insights into the echo chamber effect. To enable this measurement, we presented EchoGAE, a self-supervised user embedding model that captures ideological similarities among users and generates accurate embeddings.
Our evaluation of ECS on a Twitter dataset demonstrated its effectiveness in ranking topics based on echo chamber scores and ordering communities by polarization levels. Compared to existing metrics, ECS showcased unique capabilities in capturing the dynamics of online discourse. Our research contributes to understanding and quantifying echo chambers and polarization, which could help the development of strategies to mitigate their negative impacts and promote a more informed and open-minded society.
[Deriving the ECS* equation]
Here we derive the ECS* equation. We start with the silhouette score <cit.>:
ECS^*(ω) = (1/|ω|) ∑_{u ∈ω} [ (Δ_u - λ_u)/max(Δ_u, λ_u) ]
We want to scale it from 0 to 1 instead of -1 to +1. So we have:
ECS^*(ω) = (1/|ω|) ∑_{u ∈ω} (1/2) [ (Δ_u - λ_u)/max(Δ_u, λ_u) + 1 ]
Writing 1 as max(Δ_u, λ_u)/max(Δ_u, λ_u) and combining the terms within the square brackets over this common denominator gives:
ECS^*(ω) = (1/|ω|) ∑_{u ∈ω} (1/2) [ (Δ_u - λ_u + max(Δ_u, λ_u))/max(Δ_u, λ_u) ]
Finally, rearrange the terms in the numerator and absorb the factor 1/2 into the denominator to obtain the ECS equation:
ECS^*(ω) = (1/|ω|) ∑_{u ∈ω} [ max(Δ_u, λ_u) + Δ_u - λ_u ]/[ 2 max(Δ_u, λ_u) ]
|
http://arxiv.org/abs/2307.05988v1 | 20230712080429 | A Comprehensive Review of Automated Data Annotation Techniques in Human Activity Recognition | [
"Florenc Demrozi",
"Cristian Turetta",
"Fadi Al Machot",
"Graziano Pravadelli",
"Philipp H. Kindt"
] | cs.LG | [
"cs.LG",
"cs.HC"
] |
A Comprehensive Review of Automated Data Annotation Techniques in Human Activity Recognition
Department of Electrical Engineering and Computer Science, University of Stavanger
Kitty Kiellands hus, Rennebergstien 30
Stavanger
Norway
[email protected]
Department of Computer Science, University of Verona
Strada Le Grazie, 15
Verona
Italy
[email protected]
Faculty of Science and Technology, Norwegian University of Life Sciences
Campus Ås, Universitetstunet 3
Ås
Norway
[email protected]
Department of Computer Science, University of Verona
Strada Le Grazie, 15
Verona
Italy
[email protected]
Faculty of Computer Science, TU Chemnitz
Str. der Nationen 62, 09111
Chemnitz
Germany
[email protected]
This is the author’s draft submitted to IEEE/ACM. Copyright may be transferred without notice, after which this version may no longer be accessible. A copyright notice will be added here upon submission, and additional information in the case of an acceptance/publication.
Human Activity Recognition (HAR) has become one of the leading research topics of the last decade. As sensing technologies have matured and their economic costs have declined, a host of novel applications, e.g., in healthcare, industry, sports, and daily life activities have become popular. The design of HAR systems requires different time-consuming processing steps, such as data collection, annotation, and model training and optimization.
In particular, data annotation represents the most labor-intensive and cumbersome step in HAR, since it requires extensive and detailed manual work from human annotators. Therefore, different methodologies concerning the automation of the annotation procedure in HAR have been proposed.
The annotation problem occurs in different notions and scenarios, which all require individual solutions. In this paper, we provide the first systematic review on data annotation techniques for HAR. By grouping existing approaches into classes and providing a taxonomy, our goal is to support the decision on which techniques can be beneficially used in a given scenario.
[300]Human-centered computing Ambient intelligence
[300]Human-centered computing HCI theory, concepts and models
[100]Human-centered computing Accessibility technologies
20 December 2022
Philipp H. Kindt
August 12, 2023
====================
§ INTRODUCTION
In the last decade, we have witnessed the spread and adoption of sensors, wearables, the Internet of Things (IoT), the Internet of Medical Things (IoMT), and edge computing technologies <cit.>.
Sensors can detect and measure physical properties such as temperature, pressure, light, and motion. They are becoming ubiquitous in various industries, including automotive, aerospace, and consumer electronics. Moreover, their miniaturization has led to their integration into wearables, such as fitness trackers, smartwatches, clothes, and dedicated devices.
Wearables are frequently used to track various aspects of a person's health and activity. Recent developments even involve integrating medical sensors for remote patient monitoring, digital therapeutics, and real-time intervention into wearables <cit.>.
On the other side, the IoT is formed by networks of interconnected devices, vehicles, and buildings that communicate with each other and exchange data. It has been adopted across various industries, including home automation, agriculture, and manufacturing. In addition, IoT devices can be remotely monitored and controlled, improving efficiency and productivity <cit.>.
Instead, IoMT refers to using IoT devices in medical applications, enabling healthcare providers to monitor patients remotely, collect data for analysis, improve patient outcomes, and reduce healthcare costs <cit.>.
Finally, edge computing refers to processing data at or near the source rather than sending it to a central/remote server for processing. This technology has become increasingly important as the amount of data generated by IoT and IoMT devices grows. Edge computing enables faster processing times and reduces the latency and amount of data that needs to be transmitted over the network <cit.>.
The adoption and spread of these technologies have revolutionized various industries and enabled new applications and capabilities. With such systems now being ubiquitous, they serve as a common infrastructure for recognizing human activity, as described next.
Human Activity Recognition (HAR): In such a context, HAR is a central research field that finds applications in various areas, including healthcare, sports, industry, and smart homes. HAR refers to the ability to identify and classify human activities using sensors, wearables, or other devices that capture data about the person's movements and actions. With regard to healthcare, HAR can be used to monitor a patient's status and detect abnormalities or changes in their behavior that may indicate a deterioration of health or the onset of a medical condition. For example, HAR can be used to detect falls of elderly patients or to monitor the movements of patients with Parkinson's disease or other motor disorders <cit.>.
Moreover, HAR also has applications in sports and fitness to monitor the athletes' performance and technique, helping them to improve their training and prevent injuries.
HAR can also be used in activity tracking devices, such as fitness trackers, to provide users with insights into their daily activity levels and help them to achieve their fitness goals.
In addition, HAR automates various tasks in smart homes based on the occupant's activities. For example, lights can be turned on or off automatically based on the person's movements, or the thermostat can be adjusted based on the person's activity level <cit.>.
HAR is related to various technologies, including sensors, wearables, IoT, IoMT, edge computing, machine learning (ML), Deep Learning (DL), and Artificial Intelligence (AI). Sensors and wearables are used to capture data about the person's movements and actions, which is then used to identify and classify human activities in HAR applications. IoT and IoMT systems are used to collect data from sensors and wearables, which can be transmitted over the network for processing and analysis. Edge computing can process this data at or near the source, reducing latency and enabling real-time processing of HAR data <cit.>.
In HAR systems, the data collected from such devices is analyzed to classify a user's activity. While, in principle, this analysis can be done based on heuristics (e.g., a feature exceeding certain thresholds), ML- and DL-based HAR techniques have become the most popular solution. Using them, more complex analyses can also be carried out, allowing for reliable recognition of activities even in data in which the properties or patterns that represent a certain activity or behavior are not obvious. ML- and DL-based HAR methods can also integrate other data sources, such as environmental data, to provide more comprehensive insights into human behavior and activity <cit.>.
As the technology continues to improve and becomes widely available, we expect to see further advancements and new applications for ML-based HAR <cit.>.
When generating a HAR model, a set of sensor data is recorded first. This data is then labeled with the activities under consideration. This step is called annotation. Next, a machine-learning model is trained, which can then be used to classify unlabeled data. In the following, we describe the individual steps <cit.>
that are involved in creating a HAR system in more detail. An overview is shown in Figure <ref>.
* Definition of Target Activities: Definition and analysis of the real-world characteristics of the target activities to be recognized. For example, this can be their duration, distribution, similarity with other activities, etc.
* Device Setup: Identification and study of requirements and determination of the devices to be used in the data collection phase, based on the target human activities.
* Data Collection: In this phase, data is collected from sensors, wearables, or other devices that capture information about the person's movements and actions.
* Data Annotation: The process of assigning labels to the human activities being performed. Labels are crucial in supervised learning as they provide the ground truth or correct answers that guide the learning process. By associating input data with corresponding labels, the model can learn to make accurate predictions and generalize its knowledge to unseen examples.
* Data Preprocessing: The collected data is then preprocessed: noise is removed, irrelevant information is filtered out, and the data is prepared for analysis. As part of this, the following steps are carried out:
* Feature extraction: The preprocessed data is analyzed to extract relevant features that can be used to classify human activities. These features may include movement patterns, body position, or other characteristics.
* Feature selection: Once the features have been extracted, a subset of features may be selected for use in the classification model. This helps to reduce the dimensionality (e.g., the number of features) of the data and improve the accuracy of the model.
* Model generation and testing: A HAR (i.e., ML or DL) model is developed to classify human activities based on the selected features in this phase. The model may be trained using a labeled dataset or unsupervised learning techniques. After the model has been generated, the following steps are carried out before the model is ready to be used:
* Model evaluation: The developed model is then evaluated using a test dataset to assess its accuracy and performance. This phase helps to identify any issues or areas for improvement in the model.
* Deployment: Finally, the developed model is deployed to a real-world environment, where it is used to classify human activities.
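To make the model-generation steps above more concrete, the following sketch segments a stream of tri-axial accelerometer readings into overlapping windows, extracts simple statistical features, and trains and evaluates a random-forest classifier with scikit-learn. The window length, feature set, and synthetic data are illustrative assumptions rather than recommendations for any particular HAR system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def window(signal, labels, size=128, step=64):
    """Split a (n_samples, 3) accelerometer stream into overlapping windows."""
    X, y = [], []
    for start in range(0, len(signal) - size, step):
        X.append(signal[start:start + size])
        # assign the majority label within the window (simplified annotation)
        y.append(np.bincount(labels[start:start + size]).argmax())
    return np.array(X), np.array(y)

def extract_features(windows):
    """Basic time-domain features per axis: mean, std, min, max."""
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

# Synthetic placeholder data: 10,000 samples of 3-axis acceleration, 4 activities
rng = np.random.default_rng(0)
raw = rng.normal(size=(10_000, 3))
raw_labels = rng.integers(0, 4, size=10_000)

X, y = window(raw, raw_labels)
X = extract_features(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```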
Data Annotation in HAR: The most labor-intensive step in creating a HAR system is data annotation, which involves creating a labeled dataset for training the ML/DL models.
Manual labeling, in which human annotators manually label each recorded sample with the corresponding activity, is a common approach in data annotation. Although time-consuming and resource-intensive, it can produce high-quality labels that are accurate and consistent.
Nevertheless, several factors can pose challenges in the manual data annotation process for HAR systems. Firstly, subjectivity can lead to inconsistencies and errors in labeling as the interpretation of the activity being performed can vary among annotators. This can ultimately affect the accuracy of the ML/DL model.
Secondly, the data annotation process can be time-consuming, particularly when labeling large amounts of data, which can cause delays in the development of the HAR system and increase project costs. Thirdly, the economic cost can be a limiting factor since hiring human annotators or utilizing crowdsourcing platforms for data labeling can become expensive, mainly when the studied activities are complex.
Fourthly, the variability of human activities can also pose a challenge in the annotation process. Since different individuals can perform activities differently, creating accurate and consistent labels for the data can be challenging. Lastly, label noise may exist in annotated data, resulting in errors in the labeling process. Label noise can occur due to human error, subjectivity, or inconsistencies in the annotation process, which ultimately reduces the performance of the HAR system's ML/DL model.
Careful consideration of these limitations and appropriate methods can help mitigate these challenges and improve the accuracy and performance of the final HAR system.
Alternatively, automated methods, such as rule-based systems or unsupervised learning algorithms, can be employed for data annotation. These approaches are more efficient and scalable but may be less precise or necessitate additional manual validation.
The quality of the annotated data is pivotal to the efficacy of the HAR system. Inaccurate or inconsistent labeling can cause poor ML/DL model performance, leading to the misclassification of human activities <cit.>.
There are several (partial) possible solutions to the limitations of the annotation process in HAR <cit.>.
Some of these solutions include:
* Standardization: Standardizing the annotation process can help to reduce subjectivity and increase consistency in the labeling process. This can be achieved by defining clear guidelines and procedures for annotators to follow and providing training and feedback to ensure the quality of the annotations.
* Automation: Automated methods, such as unsupervised learning algorithms or rule-based systems, can be used to annotate data. These methods can be faster and more scalable than manual labeling and can reduce the cost of the annotation process.
* Active learning: Active learning techniques can reduce the amount of labeled data needed for training an ML or DL model. This involves selecting the most informative data samples for annotation, which can reduce the time and cost of the labeling process.
* Crowdsourcing: Crowdsourcing platforms can be used to engage many annotators to label the data. This can be a cost-effective solution, as well as provide a diverse range of perspectives on the activity being performed.
* Quality control: Quality control measures can be implemented to ensure the accuracy and consistency of the labeled data. This can include using multiple annotators to label the same data samples and comparing their annotations, as well as conducting regular checks on the quality of the annotations.
While these solutions can enhance the accuracy and performance of the final HAR system, they do not completely eliminate the cost and time needed for the annotation process.
Systematic Review Objectives: This paper aims to systematically review existing methodologies for automating data annotation in HAR. The objective is to identify the strengths and limitations of different techniques and provide insights into the current research and ongoing trends in this area. Specifically, the paper explores different approaches and algorithms used in automatic data annotation techniques. This not only helps in developing novel techniques in the future, but also supports the choice of an appropriate labeling technique for a given application.
This review considers 2401 publications on automating data annotation in HAR. To the best of our knowledge, no systematic review has been published on this topic prior to this paper. The absence of such a review makes it hard to keep track of the different technologies used in this area, to follow recent trends, and to decide which technical solution is most beneficial for realizing a given scenario.
In this paper, we close this gap by providing the first systematic review of this field of research.
Paper organization: The rest of the paper is organized as follows.
Section <ref> delves into the background of HAR, presenting a comprehensive overview of the field, including its applications and challenges.
Following that, Section <ref> discusses the selection criteria for annotation methods in HAR, examining the key factors that we consider when choosing appropriate techniques.
Section <ref> presents an in-depth analysis and discussion of various annotation methods employed in HAR, exploring their strengths, limitations, and effectiveness in accurately identifying and classifying human activities.
Finally, Sections <ref> and <ref> conclude the paper by summarizing the key findings and contributions of the study, emphasizing the significance of automatic annotation methods in advancing HAR research and suggesting potential avenues for future exploration in this area.
§ BACKGROUND
In this section, we provide the necessary background on data annotation techniques.
Figure <ref> illustrates the different annotation techniques utilized in HAR. Each of them has unique benefits and drawbacks. This section examines and analyzes the advantages and disadvantages of these techniques. While this section provides a comprehensive overview of the technical background of annotation, Section <ref> describes in detail the different solutions proposed in the literature.
§.§ Manual Annotation Systems
Manual annotation systems require human experts to label and annotate the data manually. This approach is time-consuming, labor-intensive, and prone to errors. While manual annotation is, in principle, the golden standard and provides high-quality annotations <cit.>, it is known to be subjective. Hence, the results may vary between different annotators, leading to inter-annotator disagreements. The subjectivity of manual annotation can arise due to differences in annotator expertise, biases, and interpretation of the annotation guidelines. Inter-annotator disagreement can occur when multiple annotators are asked to label the same data, leading to differences in their annotations. This can reduce the reliability and validity of the annotation data, making it challenging to build machine learning models that generalize well to new, unseen data <cit.>.
To mitigate these issues, manual annotation systems can incorporate various strategies, such as using multiple annotators and measuring inter-annotator agreement to ensure consistency, providing clear annotation guidelines and training to reduce subjectivity and error, and using quality control measures, such as random spot-checks and review of annotations, to ensure accuracy and completeness. Additionally, manual annotation can be supplemented with semi-automated or fully automated approaches, such as active learning, crowd-sourcing, or machine learning-assisted annotation, to increase efficiency and reduce costs.
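Inter-annotator agreement, mentioned above as a quality-control measure, is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. The following minimal sketch assumes two hypothetical annotators who labeled the same ten windows:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical activity labels assigned by two annotators to the same 10 windows
annotator_a = ["walk", "walk", "sit", "stand", "walk", "run", "sit", "sit", "stand", "run"]
annotator_b = ["walk", "sit",  "sit", "stand", "walk", "run", "sit", "walk", "stand", "run"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values close to 1 indicate strong agreement
```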
§.§ Semi-Automated Annotation Systems
Semi-automated annotation systems use a combination of manual and automated annotation methods. For example, a human annotator may label a small subset of the data, and an algorithm can propagate those annotations to the rest of the dataset <cit.>. This approach can speed up the annotation process while maintaining high-quality annotations. Semi-automated annotation systems can also reduce inter-annotator disagreement <cit.> and can provide a middle ground between fully manual and fully automated approaches. By combining the strengths of both approaches, they can offer a more efficient and cost-effective solution for annotation tasks.
Among the semi-automated techniques, active learning and transfer learning have emerged as particularly innovative and accurate solutions.
§.§.§ Active learning (AL)
To further improve the performance of semi-automated annotation systems, AL algorithms are designed to incorporate feedback from human annotators. For instance, an algorithm can present the most uncertain instances for annotation to human annotators, allowing them to correct errors and improve the overall quality of the labeled data. This process can reduce the number of instances that need to be labeled while maintaining the annotation quality <cit.>.
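The query step can be illustrated with a simple uncertainty-sampling loop: a model is trained on a small labeled pool, the least confident unlabeled instances are queried, and the pool grows. The sketch below assumes that feature vectors have already been extracted and simulates the human annotator with held-back ground-truth labels; it is a generic illustration rather than the procedure of any specific paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12))           # extracted feature windows (assumed)
y_true = rng.integers(0, 4, size=1000)    # oracle labels, normally unknown

labeled = list(range(20))                 # small seed set labeled manually
unlabeled = list(range(20, 1000))

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y_true[labeled])
    # uncertainty = 1 - maximum predicted class probability
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)
    # query the 10 most uncertain instances and "ask the annotator" for their labels
    queried = np.argsort(uncertainty)[-10:]
    for idx in sorted(queried, reverse=True):
        labeled.append(unlabeled.pop(idx))
    print(f"round {round_}: labeled pool size = {len(labeled)}")
```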
§.§.§ Transfer learning (TL)
Transfer learning systems leverage pre-existing annotated datasets to train models that can be applied to new datasets <cit.>. Such systems can reduce the annotation effort required and improve the accuracy of HAR algorithms, especially for similar activities across different datasets <cit.>.
Transfer learning can be particularly advantageous when no annotated data exists for a specific task or activity. By leveraging pre-existing annotated datasets, transfer learning annotation systems can effectively "transfer" knowledge from one dataset to another, allowing models to learn from the annotated data in one dataset and generalize to new datasets with similar activities.
However, transfer learning annotation systems also have their own challenges, such as the need to identify appropriate pre-existing datasets that are relevant to the new dataset, and the need to carefully tune the transfer learning approach to ensure optimal performance.
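As a minimal illustration of the transfer-learning idea, the sketch below pretrains a small network on a synthetic source HAR dataset, freezes its feature-extraction layers, and retrains only the classification head on a much smaller target dataset. The PyTorch architecture, data shapes, and activity counts are illustrative assumptions, not a reimplementation of any surveyed system.

```python
import torch
import torch.nn as nn

class HARNet(nn.Module):
    def __init__(self, n_features=12, n_classes=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

# 1) Pretrain on a (synthetic) source dataset with 6 activities
src_x, src_y = torch.randn(2000, 12), torch.randint(0, 6, (2000,))
model = HARNet(n_classes=6)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(src_x), src_y).backward()
    opt.step()

# 2) Transfer: freeze the encoder, replace the head for a 4-activity target task
for p in model.encoder.parameters():
    p.requires_grad = False
model.head = nn.Linear(32, 4)

tgt_x, tgt_y = torch.randn(200, 12), torch.randint(0, 4, (200,))  # few target labels
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(tgt_x), tgt_y).backward()
    opt.step()
```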
§.§ Automated Annotation Systems
Automated annotation systems are commonly used in large-scale HAR applications, where manual annotation is not feasible due to a large amount of data <cit.>.
In such applications, automated annotation can help to provide a baseline for labeling the data, which can then be refined by human experts or through semi-automated methods. Such systems are fast and efficient, but their accuracy may be lower than manual or semi-automated systems, especially for complex activities, and they may require significant computational resources to train and execute <cit.>.
Various techniques can be employed to improve the accuracy of automated annotation systems, such as feature selection and engineering, model selection, and the optimization of hyperparameters. Moreover, automated annotation techniques can be enhanced by leveraging additional sources of information, such as sensor fusion, context awareness, and domain-specific knowledge. Additionally, manual or semi-automated methods can be used to correct errors or refine the annotations produced by automatic systems.
§.§ Sensor Fusion Annotation Systems
Sensor fusion annotation systems combine data from multiple sensors to provide more accurate annotations. For example, combining data from accelerometers, gyroscopes, and magnetometers can give a more comprehensive picture of the user's movements <cit.>. Sensor fusion annotation systems can improve the accuracy of HAR algorithms, especially for complex activities that are difficult to annotate with a single sensor <cit.>.
Sensor fusion annotation systems can also help in overcoming some of the limitations of individual sensors, such as their sensitivity to environmental factors or their limited coverage of certain types of movements.
However, sensor fusion annotation systems also have their own challenges, such as the need for careful calibration and synchronization of multiple sensors, and the complexity of combining data from different sources. Moreover, the increased amount of data generated by sensor fusion systems can require more powerful computational resources and more sophisticated algorithms to process and analyze.
§.§ Crowdsourcing Annotation Systems
Crowdsourcing annotation systems use crowdsourcing platforms to collect annotations from a large pool of non-expert annotators <cit.>. Crowdsourcing can provide access to a diverse pool of annotators, allowing for annotations to be collected from a range of perspectives and backgrounds.
This approach can be cost-effective and scalable, but the quality of the annotations may vary depending on the expertise and motivation of the crowd workers. Such systems can also introduce noise and errors in the annotations, which may require additional quality control measures <cit.>, such as redundant annotations or expert reviews.
Moreover, crowdsourcing annotation systems can introduce challenges related to task design and management, such as the need to design effective annotation tasks that are understandable and accessible by non-expert annotators, and the need to manage and monitor the crowd workers to ensure that high-quality annotations are collected.
In summary, the choice of an annotation system for HAR depends on various factors, such as the availability of annotated data, the complexity of the activities to be annotated, the size of the dataset, and the resources available. Every annotation technique, as summarized in Tables <ref> and <ref>, has advantages and disadvantages, and researchers must carefully evaluate which approach is most suitable for their specific HAR task.
§ SELECTION CRITERIA
This section describes the selection criteria of this systematic review, i.e., how the papers that were considered were selected.
This review includes only studies focused on developing and evaluating (semi-, fully-) automated data annotation techniques for HAR. The participants were required to be human, while studies involving non-human subjects were excluded. In addition, studies had to report on the accuracy, precision, and other relevant performance metrics of the annotation systems.
Only publications in the English language were considered, and all studies had to be published in peer-reviewed journals or conference proceedings.
The search strategy and selection criteria were developed in consultation with all authors. Any disagreements between reviewers were resolved through discussion and consensus. The study selection process was documented using a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart to ensure transparency and applicability <cit.>.
PRISMA Flowchart: Table <ref> presents the search query used to identify relevant studies during the research phase of the systematic review. The query was structured into three categories or "leaves" that represent the main concepts of interest in the review: 1) algorithms, 2) automated annotation systems, and 3) devices. These three concepts form the basis of the inclusion criteria for selecting studies considered by the systematic review. By specifying the types of algorithms, annotation processes, and devices of interest, the query helps to ensure that the studies selected for the review are relevant and meet the specific research objectives.
Table <ref> shows the number of search results retrieved from each of the four databases (i.e., IEEE Xplore, ACM Digital Library, Scopus, and Web of Science) on January 21, 2023, using the search strategy defined for the systematic review.
Figure <ref> illustrates the PRISMA flowchart, which serves as a transparent and replicable means of reporting the systematic review's search and selection process.
The chart shows that 2401 research articles were initially retrieved through the search process described in Table <ref>.
It then depicts the screening process, ultimately leading to the inclusion of 39 studies in the review.
Excluded research items did not meet the pre-defined selection criteria outlined at the beginning of this section.
Table <ref> illustrates the distribution of the 115 research items assessed for eligibility over time and reveals a growing interest in the field of HAR technologies and automatic data annotation techniques.
The table provides information on the number of studies published in each year, distinguishing between included and excluded items, and offers a glimpse into the research activity in this field over time.
Notably, the table shows that despite the search starting as early as 01/01/1980, no work in this field was presented until 2006.
By showcasing the increasing number of studies on HAR technologies and automated data annotation, Table <ref> indicates that this subject is gaining prominence and significance in the field. It thereby provides an overview of the literature landscape that can assist researchers in identifying trends, gaps, and areas for further exploration.
Finally, none of the 2401 reviewed papers was found to be a survey on data annotation techniques in HAR. As a result, we claim that this is the first systematic review to address this topic.
§ ANNOTATION SYSTEMS IN HAR
Based on the analysis of the 39 out of 2401 papers identified through our selection procedure, this systematic review will focus solely on semi-automated and fully-automated data annotation techniques and exclude manual techniques, sensor fusion, and crowdsourcing.
Thus, the techniques will be categorized into semi- and fully-automated and subsequently into three categories: a) data-driven, b) environment-driven, and c) hybrid.
Data-driven: Data-driven techniques leverage the patterns, structures, and characteristics inherent in the data itself to guide the annotation process.
Environment-driven: The environment-driven techniques use information about the context and environment in which the data was collected to perform the annotation. For example, the interaction of users with ODLs can be used to recognize the performed activity.
Hybrid: The hybrid techniques combine both data-driven and environment-driven approaches, often using multiple sources of information to achieve more accurate and robust annotation results.
This taxonomy is shown in Figure <ref>, while Table <ref> provides an overview of the studies included in the review and categorized using the above taxonomy.
In the order of their date of publication, this section will provide a comprehensive overview of each paper included in this survey. Additionally, for each category of our taxonomy, a table that summarizes the most important properties of all publications that fall under the corresponding category is provided.
In particular, Tables <ref>, <ref>, <ref>, <ref>, and <ref> contain detailed information on aspects such as the ownership of the tested datasets (i.e., whether the authors used an existing open-access dataset or designed and collected new data on their own) and the number of datasets utilized (Column 2), the devices (Column 3) and sensors (Column 4) employed, the total number of used sensors (Column 5), their placement on the body or in the environment (Column 6), the ML/DL/AI models utilized (Column 7), the number of subjects involved in data collection (Column 8), the performed activities (Column 9), the total number of activities (Column 10), and the type of considered daily environment (Column 11).
In turn, Tables <ref>, <ref>, <ref>, <ref>, and <ref> list the annotation techniques used by the individual publications and summarize the advantages (Column 3) and disadvantages (Column 4) of every approach.
Moreover, in order to improve the legibility of Tables <ref> through <ref>, Table <ref> offers a summary of the terminology utilized to classify the different types of activities discussed in the methodologies of the examined papers.
Conversely, Table <ref> presents an overview of the acronyms used for the annotation models.
§.§ Semi-Automated Annotation
Within this section, the 26[paper <cit.> fits into both fully- and semi-automated categories, since it provides different approaches.] papers on semi-automated data annotation techniques for HAR are categorized into the three subcategories mentioned earlier, and their proposed methods and key features are described in detail.
§.§.§ Data-driven approaches
These studies propose semi-automated data-driven methodologies for automated data annotation in HAR, leveraging the existence of data patterns being extracted through various techniques such as AL, augmented and TL frameworks, or self-supervised learning.
Tables <ref> and <ref> offer a comprehensive summary of the 15 identified works falling in this category.
In their study, Saeedi et al. <cit.> propose a multi-expert mobile health system utilizing AL techniques. The architecture addresses challenges related to reconfiguring mobile sensor devices for health monitoring. One challenge is the expensive cost of data labeling, which can interfere with the user's life and requires feedback from healthcare experts or costly equipment. Another challenge involves identifying the most suitable expert for each data instance, and a third challenge is the uncertainty of labels due to limited expert knowledge.
To overcome these challenges, the proposed architecture selects the most cost-effective and confident expert for each query, considering collaboration among experts to minimize cost and improve data labeling accuracy. The authors also develop new algorithms for system initialization, utilizing clustering algorithms and ensemble classification methods.
The effectiveness of the architecture and algorithms is demonstrated through a case study on activity monitoring, achieving an accuracy of 85% in activity recognition while labeling only 15% of the unlabeled data and reducing annotation costs.
Future work aims to enhance the architecture to handle multi-modality sensory systems, to conduct a real pilot study, and to integrate TL and AL approaches to further reduce the number of queries and improve the learner's accuracy.
In <cit.>, Martindale et al. proposed a HAR pipeline for semi-automated labeling and efficient collection of daily activity data. This data was labeled by identifying the walking cycle phases based on video, IMU, and pressure insole data. Although the setup was designed in a controlled environment, the same principles could be applied to more natural or specific applications. The authors proposed an on-the-edge video detection method to detect on-the-ground and off-the-ground stride phases with only 17% manual labeling or correction required.
This technique reduced the labeling time by 83% compared to complete manual labeling without assistance.
Gan et al., in <cit.>, devised two semi-automated labeling algorithms: one for training a personalized online fall-detection model using k-Means clustering, and another for training a personalized online localization model, which employed a set of peak-trough magnitudes sorted in descending order. Based on their experiments, the authors reported that the proposed approach resulted in a fall detection accuracy of up to 97%. In turn, the labeling of peak-trough magnitudes led to a step counting accuracy of at least 96%.
Bota et al., in <cit.>, proposed a Semi-Supervised AL (SSAL) approach to address the challenges posed by the significant volume of data recorded by unobtrusive and pervasive sensors, such as smartphones and wearables. The proposed approach consists of two steps: (1) selecting the most relevant samples to be labeled by an expert using a Query Strategies (QSs) criterion, and (2) propagating the labels of annotated samples to similar samples on the entire dataset using an automatic method.
The approach was evaluated on two HAR datasets through a comprehensive study of state-of-the-art QS and Stopping Criteria (SC) techniques and a comparison to AL.
The methods were evaluated over several automatic annotation strategies based on different distance functions to optimize the SSAL model.
This paper extended the work conducted by <cit.> on HAR by applying Self-Training (ST) on the labels previously selected by AL.
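The general idea of propagating a handful of expert labels to similar samples can be sketched with off-the-shelf self-training, in which unlabeled windows are marked with -1 and a base classifier iteratively labels the instances it is confident about. This is a generic illustration, not a reimplementation of the SSAL method; the synthetic data and the 5% labeling budget are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 12))            # extracted feature windows (assumed)
y = rng.integers(0, 3, size=500)          # true activities, mostly unknown

# Only 5% of the windows are labeled by the expert; the rest are marked -1
y_partial = np.full(500, -1)
labeled_idx = rng.choice(500, size=25, replace=False)
y_partial[labeled_idx] = y[labeled_idx]

# Self-training: the base model iteratively labels confident unlabeled windows
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X, y_partial)
pseudo_labels = model.predict(X)          # labels for the whole dataset
print("windows auto-labeled during self-training:", (model.labeled_iter_ > 0).sum())
```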
In <cit.>, Martindale et al. proposed a pipeline to overcome the lack of realistic and labeled datasets for medical applications of cyclic activity monitoring, such as step-counting and gait analysis.
The pipeline reduces the percentage of labels that require manual adjustment to only 14%, making it possible to produce a public dataset of over 150,000 labeled cycles from 80 participants. The dataset includes 12 activities, 10 of which are cyclic, and features diverse ranges of bouts, transitions, and non-straight walking. For datasets related to e.g., home monitoring, where mostly walking data is expected, the labeling effort for new datasets can be as low as 8%.
Furthermore, the authors proposed an iterative training technique for a hierarchical Hidden Markov Model (hHMM) in this paper. The hHMM hierarchy includes cycle phases for each of the 10 cyclic activities, and this method allowed the dataset, which has been made publicly available in <cit.>, to be expanded fourfold with a final miss rate of 0.6% and a false discovery rate of 7.6%. The complete pipeline achieved an F1-score of 89.5%, with an expected F1-score for new data of 93.0%.
In <cit.>, Ponnada et al. introduced two design prototypes for Human Computation Games (HCG), namely Mobots and Signaligner, with the purpose of motivating players to label raw accelerometer data. These games were trained using annotated data, which gave players an initial assessment of the presented accelerometer data. The objective for Mobots players was to annotate data fragments with activity names, while Signaligner players aimed to match input data patterns with visual pattern templates. In terms of performance, Mobots players successfully annotated 8.7 hours of accelerometer data using only 9.5 minutes of annotated data, achieving an overall accuracy of 89.7%. On the other hand, Signaligner players achieved a 99.5% accuracy in labeling 11.69 hours of acceleration data, starting from 3.8 hours of annotated data.
This difference in performance was attributed to the fact that Signaligner players were provided with more signal context and visual patterns to match with an on-screen reference, whereas Mobots players had to rely on their memory of signals and activity categories to label short data fragments.
Faridee et al.<cit.> introduced the AugToAct framework, a flexible and innovative semi-supervised TL framework with augmented capabilities.
The framework can be applied to different classification and domain adaptation tasks, showcasing its suitability for complex HAR. In particular, the proposed technique, starting from an annotated dataset, aims to augment the dataset with artificial data samples that carry the same labels as the initial dataset.
The authors aim to automate the process of identifying optimal augmentation parameters in future research.
Additionally, they plan to evaluate the model's generalizability on a broader range of datasets, encompassing diverse human activities.
Notably, the current experiment does not address unseen labels in the target domain, a limitation the authors plan to overcome in their future work.
In <cit.>, Hossain et al. proposed a DL model for activity recognition that incorporates AL in hyperparameter tuning, as opposed to previous works that only focused on identifying the most informative instance through AL. To achieve this, the authors suggested optimizing network parameters using a joint loss function that combined the cross-entropy loss of the DL model and the entropy function of the AL pipeline. To validate their approach, they used a mobile application to collect data in real-world settings, and the results showed that the joint loss function helped the DL model to generalize better with lower weight values, even in the presence of outliers.
The authors also introduced an annotator selection model based on the contextual similarity between annotators and users, outperforming other algorithms by converging faster into optimal accuracy.
Sawano et al. <cit.> proposed a method for estimating the user and device status, based on user responses to notifications generated by a smartphone. The experiments showed that the proposed method had an average precision of 76.9% and 96.3% for user-independent and user-dependent experiments, respectively. Although the proposed method had a high annotation precision, the recall was low, meaning that accurate annotations can only be assigned to a limited amount of data. However, since the method has an automatic annotation collection mechanism, it can collect a large amount of annotated data for many people over a long period of time.
In <cit.>, Kwon et al. addressed the lack of labeled data in HAR by introducing IMUTube. This automated processing pipeline generates virtual streams of IMU data from human activity videos.
The authors demonstrated the effectiveness of the virtually-generated IMU data in improving the performance of existing HAR models.
Avsar et al. <cit.> present an approach for generating high-quality data in the context of multi-channel time series HAR. Their method utilizes optical motion capturing and inertial measurements from on-body devices to combine temporal CNN predictions with manual revisions, resulting in fine-grained annotations.
The approach was evaluated in terms of time consumption and annotation consistency, revealing a substantial reduction in annotation effort by up to 62.8%.
Korpela et al., in <cit.>, propose a technique that utilizes an image segmentation algorithm called SLIC (Simple Linear Iterative Clustering) to perform temporal clustering of the classifier output.
The time-series data was fed to the algorithm as a 1D image, with the class probabilities serving as the color channels. The proposed method was evaluated on 233 minutes of time-series data, and it achieved an average reduction of 56% in annotation time compared to the baseline method that used raw classifier output.
In <cit.>, Tang et al., proposed SelfHAR, a semi-supervised model that leverages unlabeled mobile sensing datasets to improve the performance of HAR models. SelfHAR uses a combination of teacher-student self-training and multi-task self-supervision to learn robust signal-level representations and augment small labeled datasets.
This technique was evaluated on various HAR datasets and outperformed other supervised and semi-supervised approaches, achieving up to a 12% increase in F1-score with the same number of model parameters at inference.
Additionally, SelfHAR achieved similar performance by using up to 10 times less labeled data than supervised approaches.
In their work on fall detection <cit.>, Yhdego et al. introduced a self-supervised learning approach that utilizes unlabeled data to pre-train Fully Connected Network (FCN) and Residual Neural Network (ResNet) models. These pre-trained models are then fine-tuned using labeled data. The method incorporates overlapping sliding windows for feature extraction and addresses the issue of imbalanced classes in the dataset through oversampling and a modified weighted focal loss function.
Experimental results demonstrated that the ResNet self-supervised DL method, combined with random oversampling, achieved an impressive average F1-score of 98% for accurately detecting falls.
Mohamed et al., in <cit.>, present HAR-GCNN, a deep graph CNN model for HAR using mobile sensor data.
They proposed leveraging the implicit chronology of human behavior to learn unknown labels and classify future activities. This was done using a new training strategy that predicts missing activity labels by leveraging the known ones.
HAR-GCNN outperformed baseline methods, improving classification accuracy by up to 68% on different datasets. In addition, they reported that HAR-GCNN has stable performance, independently of the number of chronologically ordered activities considered within the input graph.
§.§.§ Environment-driven approaches
We next describe an approach to semi-automatic annotation techniques that makes primarily use of human and environment-driven knowledge to annotate HAR data.
Tables <ref> and <ref> offer a comprehensive summary of the 4 articles falling into such category.
One of the first environment-based, semi-automated methodologies was published by Szewcyzk et al. <cit.>[In this article, the authors explore an annotation technique falling into both the semi- and fully-automated environment-driven categories.].
Szewcyzk et al. explored four alternative mechanisms for annotating sensor data with corresponding activity labels to monitor the functional health of smart home residents. The first method utilizes the raw data from sensors along with a map of the apartment to identify the activities being performed. The location of the sensors and the time of day are used to infer the activities. For example, motion and water sensors triggered during a specific time could indicate meal preparation.
In the second method, the residents provide time diaries reporting their activities every half an hour. This approach is less invasive than others but relies on the residents' self-reports, which may not always be reliable.
The third and fourth methods involve using a visualization tool to analyze the sensor events. Method 3 uses the visualization tool for manual annotation, while method 4 includes resident feedback. A 3D environment simulator called CASASim displays sensor readings in real-time. Researchers rely on the combined information from the simulator and resident time diaries to interpret and annotate the sensor events.
Subramanya et al. <cit.> proposed a dynamical graph model to jointly estimate activity and spatial context over time, based on asynchronous observations from GPS measurements and a wearable sensor.
The graph model's parameters are trained on partially labeled data, and the authors applied virtual evidence to improve data annotation, providing high flexibility in labeling training data.
Experiments suggest that the proposed system achieves a recognition accuracy of 95%. This is significantly higher than existing techniques that do not perform joint reasoning about a person's activities and spatial context.
In the studies conducted by Woznowski et al. <cit.> and Tonkin et al. <cit.>, various approaches were explored to allow users to self-annotate their activities in near-real-time for the development of accurate HAR algorithms.
The study proposed a mobile app with multiple logging capabilities for self-annotation of activities. These capabilities included model-based, voice-based, location-based, and NFC-based methods. Users interacted directly with the app, except for the NFC-based approach, which was fully automatic upon contact with NFC tags.
Finally, in <cit.>, Solis et al. proposed a methodology to improve the recognition of activities related to eating using wearable computers in natural environments. They utilized location information from wearable sensors in IoT platforms to learn the users' behavior patterns without prior knowledge. Annotations were requested only when automatic annotation failed. In a case study with 12 participants wearing smartwatches, audio recordings were used for labeling eating moments. The study showed a 2.4% accuracy improvement with a limit of 20 requested annotations per day.
A dietary monitoring study validated the algorithm based on classifier uncertainty, allowing long-term data collection with minimal annotations.
§.§.§ Hybrid
The focus of hybrid methodologies is to further reduce the manual effort and expenses associated with the annotation process by combining data and environmental information, while enhancing the accuracy and performance of activity recognition models.
Tables <ref> and <ref> provide an overview over the 6 methodologies falling into this category.
With this aim, in <cit.>, Alam et al. proposed Mobeacon: a mobile phone and iBeacon sensor-based smart home activity recognition system, which uses Bagging Ensemble Learning (BEL) and Packaged Naive Bayes (PNB) classification algorithms for high-level activity recognition on smartphones.
The authors incorporated the semantic knowledge of the testing environment and use it with the built-in adaptive learning models on the smartphone to facilitate the ground truth data annotation. They demonstrated that Mobeacon outperforms existing lightweight activity recognition techniques in terms of accuracy (max. 94%) in a low-resource scenario and is sufficiently efficient to reside on smartphones for recognizing ADLs in real-time. The authors also designed an efficient smartphone application interface for defining and creating an initial semantic knowledge base about the smart home environment. They used their semantic knowledge base, expression tree-based activity construction, and an inference cache to accelerate the activity recognition process of their lightweight BEL-based approach.
Meurisch et al. <cit.> proposed "Labels," a self-tracking mobile application that provides a user interface for annotating automatically collected sensor data from mobile, desktop, and social media platforms with metadata, such as performed activities. The study evaluated Labels with 163 participants over a four-week field study, collecting over 43,000 manually annotated data samples. Results show that the participants annotated about 82.5% of their place-related time slots with their performed activities.
Nino et al. <cit.> presented a methodology for the semi-automatic generation of reliable position annotations to evaluate multi-camera people trackers on large video data sets. The methodology automatically computed most of the annotation data by recognizing the person's position and interaction with daily life objects.
The proposed framework is generic and can handle additional trackers.
The authors provided guidelines on applying the proposed methodology to new data sets and presented an exploratory study for the multi-target case.
Cruciani et al. <cit.> proposed an annotation system that integrates GPS and a step counter as two information sources. The GPS data is utilized to differentiate activities based on position, estimated speed, and predefined heuristics. Speed ranges associated with each activity (e.g., walking: 1.4 - 2.0 m/s, running: 3.0 - 6.0 m/s, transportation: > 8 m/s) are employed for labeling activities. To enhance the accuracy and reduce mislabeled samples, a step counter is incorporated into the system.
The system combines the GPS and step counter data through a rule-based intersection of these information sources. This combination allows for more precise labeling and discrimination between activities. For instance, it can distinguish between running and driving a vehicle, or detect running activities in a gym environment that may not be identifiable using GPS alone.
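A minimal sketch of such a rule-based intersection is given below; the speed ranges follow those listed above, while the step-rate thresholds and the window representation are illustrative assumptions.

```python
def label_window(mean_speed_mps, steps_per_min):
    """Heuristic activity label from GPS speed and step counter (sketch)."""
    if mean_speed_mps > 8.0:
        return "transportation"            # fast and typically few steps
    if 3.0 <= mean_speed_mps <= 6.0 and steps_per_min > 120:
        return "running"                   # the step counter confirms running
    if 1.4 <= mean_speed_mps <= 2.0 and steps_per_min > 60:
        return "walking"
    return "unknown"                       # left for manual review

# Hypothetical windows: (mean GPS speed in m/s, steps per minute)
windows = [(1.6, 95), (4.5, 160), (12.0, 3), (4.5, 0)]
print([label_window(s, c) for s, c in windows])
# ['walking', 'running', 'transportation', 'unknown'] — the last window moves at
# running speed but records no steps, so it is not mislabeled as running
```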
In <cit.>, the authors proposed two approaches for semi-automated online data labeling. The first approach is based on the recognition of subtle finger gestures performed in response to a data-labeling query. In contrast, the second approach focuses on labeling activities with an auditory manifestation and uses a classifier to estimate the activity and a conversational agent to ask the participant for clarification or additional data. Results show that while both studies have limitations, they achieve a precision between 80% and 90%. In addition, the authors described an approach for the semi-automatic labeling of environmental audio data and presented the results of experiments to assess its feasibility.
In <cit.>, the authors proposed a participant-centric free-text annotation process to facilitate activity recognition in a kitchen environment and characterized the resulting annotations. They reviewed the data from the study for assessing the complexity of cooking activities using the dataset, and found that the annotations explored in the paper constitute a useful basis for an exploratory analysis of the data.
However, they noted that the granularity of the annotations is not optimal for certain tasks and may benefit from a more detailed set of annotations. The authors also identified several features of meal preparation complexity that are readily detectable in their sensor data, including monitored appliance use, water use, and the energy released as heat and humidity during the task.
§.§ Fully automated
This section will discuss the 14 papers on fully-automated data annotation techniques for HAR in the three previously mentioned categories and provide detailed descriptions of their proposed methods and key features. As shown in Table <ref>, none of these papers focus on hybrid fully-automated methodologies. Therefore, this section will focus solely on data-driven and environment-driven methods.
§.§.§ Data-driven approaches
These studies propose fully-automated data-driven methodologies for automated data annotation in HAR, leveraging the existence of data patterns being extracted through various techniques such as AL, augmented and TL frameworks, or self-supervised learning.
Tables <ref> and <ref> provide an overview of the 9 publications falling into this category.
The initial study that introduced fully-automatic data-driven annotation techniques for HAR was conducted by Jardim et al. in <cit.>. The researchers proposed a method for recognizing human actions from a continuous sequence of images captured by a Kinect sensor. They designed an automatic temporal segmentation approach to divide the sequence into individual actions and employed a straightforward filtering technique based on joint movement. Furthermore, they presented an automatic labeling method utilizing a clustering algorithm on a subset of available features.
To enhance the outcomes, they recommended the utilization of Euler angles and dynamic time warping (DTW) techniques. They successfully demonstrated that combining clustering and filtering techniques allows for the unsupervised labeling of human actions captured by a depth-sensing camera that tracks skeleton body joints.
In <cit.>, Rokni et al. introduced an autonomous multi-view learning approach capable of dynamically retraining ML algorithms in real-time without the need for labeled training data.
By employing the approach in batch mode, they achieved an 83.7% accuracy in activity recognition, representing a 9.3% improvement facilitated by automatic data labeling in the new sensor node. In the online mode, the approach achieved an 82.2% accuracy in activity recognition.
This study represents an initial step towards developing next-generation wearables with computational autonomy and automatically learning ML algorithms.
A paper by Liang et al. <cit.> presents ALF, an Automatic Labeling Framework for in-laboratory HAR, eliminating the need for a small initial set of labeled data. The proposed framework converts time series activity data into absolute wavelet energy entropy and detects activity endpoints using constraints and information extracted from a predefined human activity sequence.
The authors evaluated the framework's performance on a collected dataset and the UCI HAR dataset<cit.>, achieving average precision and recall scores above 81.9%, and average F-measure scores above 88.9%. The ALF framework significantly reduces labeling efforts, while maintaining the labeling accuracy. It provides a fast and reliable method for generating labeled datasets, with a total labeling time of approximately 18.6 minutes, which is 75.8% shorter than the average manual labeling time of 76.8 minutes.
In <cit.>, Zhang et al. proposed two deep generative cross-modal architectures to synthesize accelerometer data streams from video data streams. The approach utilizes a conditional generative adversarial network (cGAN) to generate sensor data based on video data and incorporates a conditional variational autoencoder (cVAE)-cGAN to further enhance the data representation.
The proposed method was evaluated through experiments on publicly available sensor-based activity recognition datasets, comparing models trained on synthetic data against those trained on real sensor data.
In <cit.>, Jeng et al. introduced iSleePost, a sleep posture monitoring system for home care that automatically recognizes body posture during sleeping for labeling data. By analyzing data from a single wrist sensor, the system achieves an accuracy of up to 85% in posture recognition.
The authors evaluated two different learning algorithms, with the RF algorithm achieving over 70% accuracy and the SVM algorithm achieving 73% accuracy. iSleePost is more cost-effective than existing approaches relying on pressure mats, cameras, or specialized equipment.
In <cit.>, the authors introduced a pipeline for the automated detection of unsupervised standardized gait tests from continuous real-world IMU data.
The proposed approach involves gait sequence detection, peak enhancement, and subsequence DTW to identify gait test series, which are further decomposed into individual 4×10 m walking tests. These tests were used to assess the walking velocity. The algorithm was evaluated using 419 gait test series, achieving an F1-score of 88.9% for detection and 94.0% for decomposition.
In <cit.>, Ma et al. presented an end-to-end multi-task deep clustering framework that integrates feature representation, clustering, and classification tasks into a uniform learning framework.
The framework comprises an autoencoder neural network structure to extract features from the raw signals and form a compressed latent feature representation. Furthermore, it contains a k-Means clustering algorithm to partition the unlabeled dataset into groups to produce pseudo labels for the instances. Finally, it contains a DNN classifier to train the human activity classification model based on the latent features and pseudo labels.
The authors conducted extensive experiments on three publicly available datasets showing that the proposed approach outperforms existing clustering methods under completely unsupervised conditions and achieves a performance similar to fully supervised learning when retraining the extracted latent feature representation.
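The pseudo-labeling step of such clustering-based pipelines can be sketched compactly: latent features are partitioned with k-Means, the cluster ids serve as pseudo labels, and a classifier is trained on them. The sketch below uses scikit-learn components and random placeholder features instead of the autoencoder and DNN of the original framework.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
latent = rng.normal(size=(800, 16))       # latent features (autoencoder output, assumed)

# 1) Partition the unlabeled windows; cluster ids become pseudo labels
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(latent)
pseudo_labels = kmeans.labels_

# 2) Train the activity classifier on the pseudo-labeled data
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(latent, pseudo_labels)
print("pseudo-label distribution:", np.bincount(pseudo_labels))
```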
In <cit.>, Qi et al. proposed a framework for smartphone-based HAR that combines data from the Microsoft Kinect camera and the smartphone's IMU signals to identify 12 complex daily activities. The proposed framework comprises five clustering layers and a DL-based classification model. The authors employed a hierarchical k-medoids (Hk-medoids) algorithm to obtain labels with a high accuracy.
Additionally, the performance of a deep convolutional neural network (DCNN) classification model was evaluated and compared to other ML and DL methods. Moreover, the authors proposed a calibration approach to mitigate the effect of artifact and drifting noise on the obtained 3D skeleton joints data.
Finally, Lin et al. <cit.> designed a feature selection technique based on Fuzzy C-means particle swarm optimization (FS-FCPSO) to annotate six human activities automatically.
The results of this method were compared with those of k-means and fuzzy C-means algorithms. The authors used a dataset that included 30 volunteers aged 19 to 48 and captured 3-axis linear acceleration and 3-axis angular velocity using a Samsung Galaxy S II smartphone with an embedded accelerometer and gyroscope.
The FS-FCPSO method was more suitable for automatic labeling in HAR than the k-means and fuzzy C-means algorithms.
The main contribution of this research was the adoption of a feature selection method based on fuzzy C-means particle swarm optimization (PSO) to improve the accuracy of the automatic labeling results.
The authors reduced the 561 features to a cluster subset of 163 features and showed that feature selection based on binary PSO can effectively enhance the application of the fuzzy C-means clustering method in automatic labeling for HAR.
§.§.§ Environment-driven approaches
This section concludes the description of the categories introduced by the taxonomy given in Figure <ref> by discussing methodologies presenting fully automated, environment-driven techniques. Tables <ref> and <ref> offer a comprehensive summary of the 5 articles included in this category.
The earliest approach in this category was proposed by Loseto et al. in <cit.>.
The authors proposed an agent running on an Android mobile app that utilizes semantic web languages to perform automated profile annotation based on the data collected by embedded micro-devices, logs, and applications on a smartphone.
The system uses motion, location, and smartphone usage to annotate the user's activity automatically. The resulting semantic-based daily profile can be leveraged in an ambient intelligence scenario to adapt the environment to user preferences.
In <cit.>, Al Zamil et al. proposed a methodology for automated data annotation in smart home environments, specifically for modeling activities based on spatially recognized actions and validating the assignment of labels through temporal relations.
The proposed technique utilized Hidden Markov Models (HMM) and Conditional Random Field (CRF) models to accurately detect segment labels.
The authors defined the segmentation problem as an optimization problem that minimizes the ambiguity to improve the overall accuracy. The experiments that were performed on the CASAS data sets <cit.> indicated that the proposed methodology achieved a better performance than state-of-the-art methodologies, with contributions including the modeling of activity actions as states and transitions, the incorporation of spatial and temporal relationships, and the algorithmic segmentation of incoming actions.
In the approach presented by Demrozi et al. <cit.>, BLE beacons were mapped to the locations or objects where a human subject typically performs activities, such as cooking or working. Furthermore, the data collected by the sensors embedded in the user's smartwatch were associated with the nearest BLE beacon. This allows data from the smartwatch sensors to be automatically labeled with the human activity that corresponds to the closest beacon.
The proposed methodology is low-cost and uses regression models to estimate the distance between the user and the beacons accurately.
The methodology was found to estimate the distance between emitters and receivers with an RMSE of 13 cm and an MAE of 10 cm. The outcome is an automatically-annotated dataset that can be used to design dedicated HAR models.
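The core of this environment-driven scheme can be sketched in a few lines: each beacon is mapped to the activity performed at its location, distances are estimated from the received signal strength, and every sensor window inherits the label of the nearest beacon. The path-loss parameters and the beacon-activity map below are illustrative assumptions, not the regression models used in the original work.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Log-distance path-loss model (sketch); returns an estimated distance in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Beacons placed at locations where specific activities occur (assumed mapping)
beacon_activity = {"kitchen_beacon": "cooking",
                   "desk_beacon": "working",
                   "sofa_beacon": "watching TV"}

def label_from_beacons(rssi_readings):
    """rssi_readings: dict beacon_id -> RSSI (dBm) observed in the current window."""
    distances = {b: rssi_to_distance(r) for b, r in rssi_readings.items()}
    nearest = min(distances, key=distances.get)
    return beacon_activity[nearest]

# Hypothetical window: the kitchen beacon is strongest, so the label is "cooking"
print(label_from_beacons({"kitchen_beacon": -55, "desk_beacon": -78, "sofa_beacon": -83}))
```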
Finally, in <cit.>, Dissanayake et al. present IndoLabel, a method that automatically detects short sensor-data motifs specific to a location class and builds an environment-independent location classifier without requiring handcrafted rules and templates.
The authors state that this method can be utilized to extract class-specific sensor data segments from any type of time-series sensor data and can assign semantic labels to any WiFi cluster in daily life, e.g., in hospitals and factories. The authors evaluated the proposed method in real house environments using a leave-one-environment-out cross-validation method and achieved state-of-the-art performance despite the unavailability of labeled training data in the target environment.
§.§ Classification based on employed sensing device
As can be inferred from the previous sections, sensing devices play a pivotal role in HAR and the related data annotation techniques. Hence, in this section, we classify the reviewed approaches based on the sensing devices they use.
Sensing devices, ranging from wearable sensors to environmental sensors, enable the collection of essential data that provide insights into individuals' activities and behavior patterns. By accurately capturing information such as motion, location, heart rate, and environmental context, sensing devices serve as the foundation for HAR systems. The data collected by these sensing devices serve as the raw material for data annotation techniques in HAR.
Sensing devices facilitate data annotation techniques in several ways. Firstly, they provide objective and quantitative measurements of various physical and environmental parameters, ensuring the accuracy and reliability of the annotated data. This reliability is essential for developing robust HAR models.
Secondly, sensing devices offer real-time or near real-time data, allowing for immediate feedback and annotation during data collection. This feature is particularly valuable in scenarios where prompt intervention or feedback is necessary, such as monitoring athletic performance or tracking a patient's rehabilitation progress.
Furthermore, sensing devices allow for the annotation of contextual information, such as the location and environmental conditions during specific activities. This additional contextual data enriches the understanding of human behavior and contributes to more nuanced and comprehensive activity recognition models.
To underscore the significance of sensing devices, Table <ref> offers a comprehensive overview of the various sensing devices and associated sensors employed in the methodologies and data annotation techniques explored in the preceding sections.
By providing a structured representation of the devices and sensors used, the table highlights their crucial role in capturing and annotating data in the defined annotation categories.
Empirical evidence demonstrates that among the different sensing devices employed, inertial sensors have emerged as the most widely utilized for data collection and annotation purposes. Inertial sensors, which encompass accelerometers, gyroscopes, and magnetometers, offer the ability to measure and record an individual's motion, orientation, and spatial positioning. Their popularity can be attributed to their versatility, portability, and ability to provide real-time and fine-grained data.
The inherent advantages of inertial sensors have positioned them as a dominant choice for researchers and practitioners in the field, facilitating accurate and reliable data annotation for a wide range of applications.
§ DISCUSSION
Data Annotation Significance in HAR: In recent years, as shown by our search strategy, there has been a notable surge in studies investigating methods for data annotation in HAR. This trend reflects the growing recognition of the importance of accurate and efficient annotation techniques in extracting meaningful insights from individuals' daily life activities.
Such rising interest is driven by the recognition of its potential applications in various domains, such as healthcare, smart environments, and personalized services.
Nevertheless, the complexity of daily life activities and the challenge of collecting and annotating corresponding data are intertwined in a mutually reinforcing manner. As individuals go about their routines, the range and intricacy of activities they engage in can be overwhelming. From personal tasks like commuting, shopping, and exercising to professional responsibilities, social interactions, and leisure pursuits, the spectrum of daily activities is vast. Each activity comprises numerous elements, such as time, location, duration, and context, which need to be captured accurately to gain a comprehensive understanding of an individual's life.
Thus, collecting and annotating such data poses significant complexities.
In addition, the diversity of data sources, including smartphones, wearables, and environmental sensors, and the subjective nature of annotating data, such as categorizing activities and determining their significance, introduce inherent biases and uncertainties.
Our perspective: To this end, deciding between fully automated and semi-automated data annotation techniques is crucial in HAR. Fully automated methods offer scalability and the ability to process large volumes of data, but they may lack accuracy and transparency. In contrast, semi-automated methods combine machine learning models with human expertise, providing higher precision, adaptability, and a deeper understanding of the data.
Fully automated techniques excel at speeding up the annotation process and detecting complex patterns, but they may be less accurate and suffer from interpretability issues. They rely heavily on software (algorithms or machine learning models) or hardware support (sensors or smart devices), which makes it challenging to explain their annotations.
Moreover, such techniques are more prone to biases and errors in the training data and require significant computing power and specialized software or hardware.
Semi-automated methods, instead, combine machine learning models with human experts, resulting in higher precision, adaptability, and a better understanding of the data. Involving human annotators improves accuracy and quality, but demands time and resources, and the resulting quality depends on the annotators' capabilities. Despite these challenges, semi-automated methods offer interpretability and transparency because human experts contribute to decision-making.
To choose the appropriate automated data annotation approach for human activity recognition, it is essential to consider the advantages and disadvantages of fully automated and semi-automated methods. Factors such as performance trade-offs, human-in-the-loop approaches, data quality challenges, interpretability and transparency, and resource requirements play a significant role in selecting the most suitable method based on specific demands, goals, available resources, and existing knowledge.
In this paper, we categorized these approaches into data-driven, environment-driven, and hybrid methods, based on their underlying principles and methodologies.
Data-driven techniques rely on data characteristics for annotation, while environment-driven techniques consider the context and environment in which the data was collected.
Hybrid methods combine both approaches, aiming for more accurate and robust results by integrating data-driven analysis with contextual information.
Categorizing methods into data-driven, environment-driven, and hybrid approaches allows for informed decision-making, helping researchers and practitioners to select the most suitable approach that aligns with their objectives and requirements.
Advantages and Disadvantages: To summarize, Table <ref> provides an overview of the characteristics that need to be considered when designing a new methodology for data annotation in HAR.
Recent trends: Moreover, new techniques (e.g., zero-shot, few-shot, and self-supervised learning) for data annotation are gaining ground. In particular, zero-shot learning techniques address the problem of annotating instances or activities that are not included in the training data. This involves identifying and categorizing activities that were only partially observed or not observed at all during the training phase. By leveraging prior knowledge and auxiliary information about related activities, zero-shot learning enables the annotation system to generalize and make accurate predictions for unobserved classes or activities <cit.>.
In addition, few-shot learning enables the annotation system to learn from a small number of annotated instances instead of requiring a large amount of annotated data for each activity. This is especially useful when obtaining a large annotated dataset for all possible activities is difficult or time-consuming. Few-shot learning allows the annotation system to generalize from limited labeled data to annotate new instances or activities accurately <cit.>.
Furthermore, self-supervised learning algorithms in HAR utilize unlabeled data's intrinsic structure or information to discover meaningful representations.
These learned representations can then be utilized to enhance the precision and efficiency of activity recognition during the annotation process. Self-supervised learning enables the annotation system to maximize available data and employ innate knowledge to drive the annotation process <cit.>.
Consequently, incorporating zero-shot learning, few-shot learning, and self-supervised learning techniques into HAR annotation systems makes it possible to annotate a broader range of activities, handle limited annotated data and increase the system's adaptability to diverse scenarios and activity recognition tasks, thereby expanding its capabilities.
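As a concrete illustration of the few-shot setting described above, the sketch below annotates unlabeled sensor windows by assigning each one to the nearest class prototype computed from a handful of labeled examples. The function name, the Euclidean metric, and the assumption that window features come from an upstream (e.g., self-supervised) encoder are our own illustrative choices, not drawn from any specific surveyed system.

```python
import numpy as np

def few_shot_annotate(support_feats, support_labels, query_feats):
    """Annotate unlabeled windows with the nearest class prototype.

    support_feats : (S, D) features of the few labeled example windows.
    support_labels: (S,) activity labels of those examples.
    query_feats   : (Q, D) features of the windows still to be annotated.
    """
    classes = np.unique(support_labels)
    # One prototype per activity class: the mean of its labeled features.
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance of every query window to every prototype.
    dists = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]
```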
§ CONCLUSION
In conclusion, the complexity of daily life activities and the intricacies of collecting and annotating relevant data create a multifaceted challenge in HAR (Section <ref>). As summarized in Figure <ref>, this paper presents the first systematic review of (semi-)automatic data annotation techniques in HAR from 01/01/1980 to 21/01/2023 (Section <ref>).
In particular, concerning the HAR annotation taxonomy introduced in Section <ref>, different approaches have been exploited, e.g., manual, sensor fusion, semi-automated (Section <ref>), fully-automated (Section <ref>), and crowdsourcing.
The decision between fully automated and semi-automated data annotation techniques is crucial in addressing this challenge.
Fully automated methods (Section <ref>) provide scalability and the ability to quickly process large volumes of data.
They excel in detecting complex patterns but may be less accurate and suffer from interpretability issues.
On the other hand, semi-automated methods (Section <ref>) leverage machine learning models and human expertise, resulting in higher precision, adaptability, and a better understanding of the data. Involving human annotators thereby improves accuracy and quality control.
Choosing the appropriate automated data annotation approach for HAR requires considering factors such as performance trade-offs, human-in-the-loop approaches, data quality challenges, interpretability and transparency, and resource requirements. Both fully automated and semi-automated methods can be developed in a data-driven (Section <ref> and Section <ref>), environment-driven (Section <ref> and Section <ref>), or hybrid (Section <ref>) manner. The decision depends on the application's demands, goals, available resources, and existing knowledge. Besides, when exploiting the annotation system, the decision must also consider the used sensing technology (Section <ref>).
All approaches have shown promising results in reducing the amount of work and time to be spent on data annotation, while maintaining the annotation accuracy. However, the choice between fully automated and semi-automated methods should be based on specific demands, goals, available resources, and knowledge. Understanding the advantages and limitations of each approach enables informed decision-making, allowing researchers and practitioners to select the most suitable method that aligns with their objectives and requirements.
Finally, the use of zero/few-shot learning and self-supervised learning as part of HAR can potentially improve the practicality and applicability of annotation systems in real life. HAR systems may thus be deployed in more contexts and fields, even when the scope of annotated actions is limited. This paves the way for using HAR in industries like healthcare, sports analytics, intelligent settings, and surveillance.
§ REFERENCES
[Adaimi and Thomaz(2019)] Rebecca Adaimi and Edison Thomaz. 2019. Leveraging active learning and conditional mutual information to minimize data annotation in human activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 3 (2019), 1–23.
[Al Machot et al(2020)] Fadi Al Machot, Mohammed R. Elkobaisi, and Kyandoghere Kyamakya. 2020. Zero-shot human activity recognition using non-visual sensors. Sensors 20, 3 (2020), 825.
[Al Zamil et al(2017)] Mohammed Gh Al Zamil, Majdi Rawashdeh, Samer Samarah, M Shamim Hossain, Awny Alnusair, and Sk Md Mizanur Rahman. 2017. An annotation technique for in-home smart monitoring environments. IEEE Access 6 (2017), 1471–1479.
[Alam et al(2015)] Mohammad Arif Ul Alam, Nilavra Pathak, and Nirmalya Roy. 2015. Mobeacon: An iBeacon-Assisted Smartphone-Based Real Time Activity Recognition Framework. UMBC Student Collection (2015).
[Anguita et al(2013)] Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, Jorge Luis Reyes-Ortiz, et al. 2013. A public domain dataset for human activity recognition using smartphones. In ESANN, Vol. 3. 3.
[Avsar et al(2021)] Hülya Avsar, Erik Altermann, Christopher Reining, Fernando Moya Rueda, Gernot A Fink, and Michael ten Hompel. 2021. Benchmarking annotation procedures for multi-channel time series HAR dataset. In 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 453–458.
[Baker and Xiang(2023)] Stephanie Baker and Wei Xiang. 2023. Artificial Intelligence of Things for Smarter Healthcare: A Survey of Advancements, Challenges, and Opportunities. IEEE Communications Surveys & Tutorials (2023).
[Bota et al(2019)] Patrícia Bota, Joana Silva, Duarte Folgado, and Hugo Gamboa. 2019. A semi-automatic annotation approach for human activity recognition. Sensors 19, 3 (2019), 501.
[Bulling et al(2014)] Andreas Bulling, Ulf Blanke, and Bernt Schiele. 2014. A tutorial on human activity recognition using body-worn inertial sensors. ACM Computing Surveys (CSUR) 46, 3 (2014), 33.
[Capponi et al(2019)] Andrea Capponi, Claudio Fiandrino, Burak Kantarci, Luca Foschini, Dzmitry Kliazovich, and Pascal Bouvry. 2019. A survey on mobile crowdsensing systems: Challenges, solutions, and opportunities. IEEE Communications Surveys & Tutorials 21, 3 (2019), 2419–2465.
[Cheng et al(2021)] Yuemeng Cheng, Kan Wang, Hao Xu, Tangan Li, Qinghui Jin, and Daxiang Cui. 2021. Recent developments in sensors for wearable device applications. Analytical and Bioanalytical Chemistry 413, 24 (2021), 6037–6057.
[Cook et al(2013)] Diane Cook, Kyle D Feuz, and Narayanan C Krishnan. 2013. Transfer learning for activity recognition: A survey. Knowledge and Information Systems 36 (2013), 537–556.
[Cook et al(2009)] Diane Cook, Maureen Schmitter-Edgecombe, Aaron Crandall, Chad Sanders, and Brian Thomas. 2009. Collecting and disseminating smart home sensor data in the CASAS project. In Proceedings of the CHI Workshop on Developing Shared Home Behavior Datasets to Advance HCI and Ubiquitous Computing Research. 1–7.
[Cruciani et al(2018a)] Federico Cruciani, Ian Cleland, Chris Nugent, Paul McCullagh, Kåre Synnes, and Josef Hallberg. 2018a. Automatic annotation for human activity recognition in free living using a smartphone. Sensors 18, 7 (2018), 2203.
[Cruciani et al(2018b)] Federico Cruciani, Ian Cleland, Kåre Synnes, and Josef Hallberg. 2018b. Personalized Online Training for Physical Activity monitoring using weak labels. In 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 567–572.
[Cruz-Sandoval et al(2019)] Dagoberto Cruz-Sandoval, Jessica Beltran-Marquez, Matias Garcia-Constantino, Luis A Gonzalez-Jasso, Jesus Favela, Irvin Hussein Lopez-Nava, Ian Cleland, Andrew Ennis, Netzahualcoyotl Hernandez-Cruz, Joseph Rafferty, et al. 2019. Semi-automated data labeling for activity recognition in pervasive healthcare. Sensors 19, 14 (2019), 3035.
[Demrozi et al(2021)] Florenc Demrozi, Marin Jereghi, and Graziano Pravadelli. 2021. Towards the automatic data annotation for human activity recognition based on wearables and BLE beacons. In 2021 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL). IEEE, 1–4.
[Demrozi et al(2020)] Florenc Demrozi, Graziano Pravadelli, Azra Bihorac, and Parisa Rashidi. 2020. Human Activity Recognition Using Inertial, Physiological and Environmental Sensors: A Comprehensive Survey. IEEE Access 8 (2020), 210816–210836. https://doi.org/10.1109/ACCESS.2020.3037715
[Diete et al(2017)] Alexander Diete, Timo Sztyler, and Heiner Stuckenschmidt. 2017. A smart data annotation tool for multi-sensor activity recognition. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 111–116.
[Dissanayake et al(2021)] Thilina Dissanayake, Takuya Maekawa, Takahiro Hara, Taiki Miyanishi, and Motoaki Kawanabe. 2021. Indolabel: Predicting indoor location class by discovering location-specific sensor data motifs. IEEE Sensors Journal 22, 6 (2021), 5372–5385.
[Do and Gatica-Perez(2011)] Trong Do and Daniel Gatica-Perez. 2011. Crowdsourcing annotations for human activity recognition. Computer Communications 34, 16 (2011), 1939–1949.
[Dunn et al(2018)] Jessilyn Dunn, Ryan Runge, and Michael Snyder. 2018. Wearables and the medical revolution. Personalized Medicine 15, 5 (2018), 429–448.
[Faridee et al(2019)] Abu Zaher Md Faridee, Md Abdullah Al Hafiz Khan, Nilavra Pathak, and Nirmalya Roy. 2019. AugToAct: Scaling complex human activity recognition with few labels. In Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. 162–171.
[Gan(2018)] Oon Peen Gan. 2018. Automatic labeling for personalized IoT wearable monitoring. In IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2861–2866.
[Gupta et al(2022)] Neha Gupta, Suneet K Gupta, Rajesh K Pathak, Vanita Jain, Parisa Rashidi, and Jasjit S Suri. 2022. Human activity recognition in artificial intelligence framework: A narrative review. Artificial Intelligence Review 55, 6 (2022), 4755–4808.
[Hossain and Roy(2019)] HM Sajjad Hossain and Nirmalya Roy. 2019. Active deep learning for activity recognition with context aware annotator selection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1862–1870.
[Jardim et al(2016)] David Jardim, Luís Nunes, and Miguel Sales Dias. 2016. Automatic human activity segmentation and labeling in RGBD videos. In International Conference on Intelligent Decision Technologies. Springer, 383–394.
[Jeng et al(2021)] Po-Yuan Jeng, Li-Chun Wang, Chaur-Jong Hu, and Dean Wu. 2021. A wrist sensor sleep posture monitoring system: An automatic labeling approach. Sensors 21, 1 (2021), 258.
[Korpela et al(2021)] Joseph Korpela, Takayuki Akiyama, Takehiro Niikura, and Katsuyuki Nakamura. 2021. Reducing Label Fragmentation During Time-series Data Annotation to Reduce Annotation Costs. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers. 328–333.
[Kwapisz et al(2011)] Jennifer R Kwapisz, Gary M Weiss, and Stacey A Moore. 2011. Activity recognition using cell phone accelerometers. SIGKDD Explorations 12, 2 (2011), 74–82.
[Kwon et al(2020)] Hyeokhyen Kwon, Catherine Tong, Harish Haresamudram, Yan Gao, Gregory D Abowd, Nicholas D Lane, and Thomas Ploetz. 2020. IMUTube: Automatic extraction of virtual on-body accelerometry from video for human activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 3 (2020), 1–29.
[Li et al(2015)] Shancang Li, Li Da Xu, and Shanshan Zhao. 2015. The internet of things: a survey. Information Systems Frontiers 17 (2015), 243–259.
[Liang et al(2018)] Guanhao Liang, Qingsheng Luo, and Yan Jia. 2018. Automatic Labeling Framework for Wearable Sensor-based Human Activity Recognition. Sensors and Materials 30, 9 (2018), 2049–2071.
[Lin and Lin(2022)] Bo-Yan Lin and Yu-Da Lin. 2022. A Clustering-based Feature Selection for Automatic Labeling in Human Activity Recognition. In 2022 IEEE 4th Global Conference on Life Sciences and Technologies (LifeTech). IEEE, 308–309.
[Loseto et al(2013)] Giuseppe Loseto, Michele Ruta, Floriano Scioscia, Eugenio Di Sciascio, and Marina Mongiello. 2013. Mining the User Profile from a Smartphone: a Multimodal Agent Framework. In WOA@AI*IA. Citeseer, 47–53.
[Ma et al(2021)] Haojie Ma, Zhijie Zhang, Wenzhong Li, and Sanglu Lu. 2021. Unsupervised human activity representation learning with multi-task deep clustering. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 1 (2021), 1–25.
[Martindale et al(2018)] Christine F Martindale, Nils Roth, Julius Hannink, Sebastijan Sprager, and Bjoern M Eskofier. 2018. Smart annotation tool for multi-sensor gait-based daily activity data. In 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 549–554.
[Martindale et al(2019)] Christine F Martindale, Sebastijan Sprager, and Bjoern M Eskofier. 2019. Hidden Markov model-based smart annotation for benchmark cyclic activity recognition database using wearables. Sensors 19, 8 (2019), 1820.
[Meurisch et al(2015)] Christian Meurisch, Benedikt Schmidt, Michael Scholz, Immanuel Schweizer, and Max Mühlhäuser. 2015. Labels: Quantified self app for human activity sensing. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers. 1413–1422.
[Mohamed et al(2022)] Abduallah Mohamed, Fernando Lejarza, Stephanie Cahail, Christian Claudel, and Edison Thomaz. 2022. HAR-GCNN: Deep Graph CNNs for Human Activity Recognition From Highly Unlabeled Mobile Sensor Data. In 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). IEEE, 335–340.
[Moher et al(2009)] David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G Altman, and the PRISMA Group. 2009. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Annals of Internal Medicine 151, 4 (2009), 264–269.
[Niño-Castañeda et al(2016)] Jorge Niño-Castañeda, Andrés Frías-Velázquez, Nyan Bo Bo, Maarten Slembrouck, Junzhi Guan, Glen Debard, Bart Vanrumste, Tinne Tuytelaars, and Wilfried Philips. 2016. Scalable semi-automatic annotation for multi-camera person tracking. IEEE Transactions on Image Processing 25, 5 (2016), 2259–2274.
[Ponnada et al(2019)] Aditya Ponnada, Seth Cooper, Binod Thapa-Chhetry, Josh Aaron Miller, Dinesh John, and Stephen Intille. 2019. Designing videogames to crowdsource accelerometer data annotation for activity recognition research. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 135–147.
[Qi et al(2022)] Wen Qi, Ning Wang, Hang Su, and Andrea Aliverti. 2022. DCNN based human activity recognition framework with depth vision guiding. Neurocomputing 486 (2022), 261–271.
[Rokni and Ghasemzadeh(2018)] Seyed Ali Rokni and Hassan Ghasemzadeh. 2018. Autonomous training of activity recognition algorithms in mobile sensors: A transfer learning approach in context-invariant views. IEEE Transactions on Mobile Computing 17, 8 (2018), 1764–1777.
[Saeed et al(2019)] Aaqib Saeed, Tanir Ozcelebi, and Johan Lukkien. 2019. Multi-task self-supervised learning for human activity detection. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 2 (2019), 1–30.
[Saeedi et al(2017)] Ramyar Saeedi, Keyvan Sasani, and Assefaw H Gebremedhin. 2017. Co-MEAL: Cost-optimal multi-expert active learning architecture for mobile health monitoring. In Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. 432–441.
[Sawano and Murao(2020)] Ryota Sawano and Kazuya Murao. 2020. Annotation Method for Human Activity and Device State Recognition Based on Smartphone Notification Removals. Journal of Information Processing 28 (2020), 679–688.
[Seneviratne et al(2017)] Suranga Seneviratne, Yining Hu, Tham Nguyen, Guohao Lan, Sara Khalifa, Kanchana Thilakarathna, Mahbub Hassan, and Aruna Seneviratne. 2017. A survey of wearable devices and challenges. IEEE Communications Surveys & Tutorials 19, 4 (2017), 2573–2620.
[Settles(2009)] Burr Settles. 2009. Active learning literature survey. University of Wisconsin-Madison (2009).
[Solis et al(2019)] Roger Solis, Arash Pakbin, Ali Akbari, Bobak J Mortazavi, and Roozbeh Jafari. 2019. A human-centered wearable sensing platform with intelligent automated data annotation capabilities. In Proceedings of the International Conference on Internet of Things Design and Implementation. 255–260.
[Stikic and Schiele(2009)] Maja Stikic and Bernt Schiele. 2009. Activity recognition from sparsely labeled data using multi-instance learning. In Location and Context Awareness: 4th International Symposium, LoCA 2009, Tokyo, Japan, May 7-8, 2009, Proceedings 4. Springer, 156–173.
[Subramanya et al(2012)] Amarnag Subramanya, Alvin Raj, Jeff A Bilmes, and Dieter Fox. 2012. Recognizing activities and spatial context using wearable sensors. arXiv preprint arXiv:1206.6869 (2012).
[Szewcyzk et al(2009)] S Szewcyzk, K Dwan, B Minor, B Swedlove, and D Cook. 2009. Annotating smart environment sensor data for activity learning. Technology and Health Care 17, 3 (2009), 161–169.
[Tang et al(2021)] Chi Ian Tang, Ignacio Perez-Pozuelo, Dimitris Spathis, Soren Brage, Nick Wareham, and Cecilia Mascolo. 2021. SelfHAR: Improving human activity recognition through self-training with unlabeled data. arXiv preprint arXiv:2102.06073 (2021).
[Tonkin et al(2018)] Emma L Tonkin, Alison Burrows, Przemysław R Woznowski, Pawel Laskowski, Kristina Y Yordanova, Niall Twomey, and Ian J Craddock. 2018. Talk, text, tag? Understanding self-annotation of smart home data from a user's perspective. Sensors 18, 7 (2018), 2365.
[Tonkin et al(2019)] Emma L Tonkin, Ola Bykowska, Hannah Berg, and Ian Craddock. 2019. Towards estimation of cooking complexity: Free-text annotations in the kitchen environment. In Proceedings of the 6th International Workshop on Sensor-based Activity Recognition and Interaction. 1–7.
[Tseng et al(2022)] Mu-Ruei Tseng, Abhishek Gupta, Chi-Keung Tang, and Yu-Wing Tai. 2022. HAA4D: Few-shot human atomic action recognition via 3D spatio-temporal skeletal alignment. arXiv preprint arXiv:2202.07308 (2022).
[Ullrich et al(2021)] Martin Ullrich, Annika Mücke, Arne Küderle, Nils Roth, Till Gladow, Heiko Gaßner, Franz Marxreiter, Jochen Klucken, Bjoern M Eskofier, and Felix Kluge. 2021. Detection of unsupervised standardized gait tests from real-world inertial sensor data in Parkinson's disease. IEEE Transactions on Neural Systems and Rehabilitation Engineering 29 (2021), 2103–2111.
[Vishnu et al(2020)] S Vishnu, SR Jino Ramson, and R Jegan. 2020. Internet of medical things (IoMT) - An overview. In 2020 5th International Conference on Devices, Circuits and Systems (ICDCS). IEEE, 101–104.
[Woznowski et al(2017)] Przemyslaw Woznowski, Emma Tonkin, Pawel Laskowski, Niall Twomey, Kristina Yordanova, and Alison Burrows. 2017. Talk, text or tag?. In 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). IEEE, 123–128.
[Yhdego et al(2022)] Haben Yhdego, Michel Audette, and Christopher Paolini. 2022. Fall Detection Using Self-Supervised Pre-Training Model. In 2022 Annual Modeling and Simulation Conference (ANNSIM). IEEE, 361–371.
[Yu et al(2012)] Zhongmin Yu, James Lin, and Yung Chi. 2012. Crowdsourcing annotations for accelerometer data collected from older adults. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1, 3 (2012), 1–18.
[Zhang and Alshurafa(2020)] Shibo Zhang and Nabil Alshurafa. 2020. Deep generative cross-modal on-body accelerometer data synthesis from videos. In Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers. 223–227.
[Zhang et al(2012)] Zhiwu Zhang, Yongqing Huang, Yifei Wang, and Yaonan Liu. 2012. A survey on recent advances in human activity recognition using vision, depth, and inertial sensors. Sensors 12, 9 (2012), 12334–12374.
|
http://arxiv.org/abs/2307.04106v2 | 20230709060722 | Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View | [
"Jiayu Yang",
"Enze Xie",
"Miaomiao Liu",
"Jose M. Alvarez"
] | cs.CV | [
"cs.CV"
] |
Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird’s-Eye View
Jiayu Yang^1,3^*, Enze Xie^2, Miaomiao Liu^1, Jose M. Alvarez^3
^1Australian National University, ^2The University of Hong Kong, ^3NVIDIA
{jiayu.yang, miaomiao.liu}@anu.edu.au, [email protected], [email protected]
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
[Figure: Given multi-view images and camera parameters, our framework utilizes parametric depth to transform image features into BEV space for jointly estimating 3D object detection, BEV segmentation, and a BEV visibility map.]
Recent vision-only perception models for autonomous driving have achieved promising results by encoding multi-view image features into Bird's-Eye-View (BEV) space. A critical step and the main bottleneck of these methods is transforming image features into the BEV coordinate frame. This paper focuses on leveraging geometry information, such as depth, to model such feature transformation. Existing works rely on non-parametric depth distribution modeling, leading to significant memory consumption, or ignore the geometry information to address this problem. In contrast, we propose to use parametric depth distribution modeling for feature transformation. We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view. Then, we aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame. Finally, we use the transformed features for downstream tasks such as object detection and semantic segmentation. Existing semantic segmentation methods also suffer from a hallucination problem, as they do not take visibility information into account. This hallucination can be particularly problematic for subsequent modules such as control and planning. To mitigate the issue, our method provides depth uncertainty and reliable visibility-aware estimations.
[^*This work was done during an internship at NVIDIA]
We further leverage our parametric depth modeling to present a novel visibility-aware evaluation metric that, when taken into account, can mitigate the hallucination problem.
Extensive experiments on object detection and semantic segmentation on the nuScenes datasets demonstrate that our method outperforms existing methods on both tasks.
§ INTRODUCTION
In autonomous driving, multiple input sensors are often available, each of which has its own coordinate frame, such as the image coordinate frame used by RGB cameras or the egocentric coordinate frame used by the Lidar scanner. Downstream tasks, such as motion planning, usually require inputs in a unified egocentric coordinate system, like the widely used Bird's Eye View (BEV) space. Thus, transforming features from multiple sensors into the BEV space has become a critical step for autonomous driving. Here, we focus on this transformation for the vision-only setup where we take as input multi-view RGB images captured at a single time stamp by cameras mounted on the ego vehicle and output estimation results, such as object detection and segmentation, in a unified BEV space, see Fig. <ref>.
In general, accurate depth information is crucial to achieve effective transformations.
Early methods <cit.> forgo explicit depth estimation and learn implicit feature transformations using neural networks, which suffer from a generalization problem since the network does not have an explicit prior on the underlying geometric relations. More recent methods <cit.> adopt explicit but simplified depth representations for the transformation, which either require large memory consumption, limiting the resolution <cit.>, or over-simplify the representation, leading to noise in the BEV space <cit.>. Moreover, these simplified depth representations do not have the ability to efficiently provide visibility information. As downstream tasks such as semantic segmentation are trained using aerial map ground truth, the lack of visibility estimation usually results in hallucination effects where the network segments areas that are not visible to the sensor <cit.>, see Figure <ref>. As a consequence, those estimations can mislead downstream planning tasks, as it is extremely dangerous to drive towards a road region that is hallucinated as driveable but is actually not, especially at high speed.
To address these limitations, we propose to adopt explicit parametric depth representation and geometric derivations as guidance to build a novel
feature transformation pipeline. We estimate a parametric depth distribution and use it to derive both a depth likelihood map and an occupancy distribution to guide the transformation of image features into the BEV space. Our approach consists of two sequential modules: a geometry-aware feature lifting module and an occupancy-aware feature aggregation module. Moreover, our parametric depth-based representation enables us to efficiently derive a visibility map in BEV space, which provides valuable information to decouple visible and occluded areas in the estimations and thus mitigate the hallucination problem. We also derive ground-truth visibility in BEV space, which enables us to design a novel evaluation metric for BEV segmentation that takes visibility into account and reveals insights into selected recent methods <cit.> in terms of estimation on visible regions and hallucination on occluded regions.
Our contributions can be summarized as follows:
* We propose a geometry-aware feature transformation based on parametric depth distribution modeling to map multi-view image features into the BEV space. Our depth distribution modeling enables the estimation of visibility maps to decouple visible and occluded areas for downstream tasks.
* The proposed feature transformation framework consists of a novel feature lifting module that leverages the computed depth likelihood to lift 2D image features to the 3D space; and a feature aggregation module to project feature to the BEV frame through the derived 3D occupancy.
* We further propose a novel visibility-aware evaluation metric for segmentation in BEV space that reveals the insight of estimation on visible space and hallucination on occluded space.
Extensive experiments on the nuScenes dataset on object detection and semantic segmentation demonstrate the effectiveness of our method, yielding state-of-the-art results for these two tasks with negligible compute overhead.
§ RELATED WORK
External depth based feature transformations.
When depth input is available, either from a Lidar sensor or from stereo matching, image features can easily be transformed into BEV space <cit.>. PointPillars <cit.> extracts features from a 3D point cloud and aggregates them into BEV space. PseudoLidar-based methods <cit.> first estimate depth via stereo matching from a stereo image pair and then unproject features based on the estimated depth. However, in real-life applications, Lidar sensors or stereo image inputs are not always available, which limits this line of methods.
Feature transformations without reliable depth input.
Without reliable depth input, various feature transformation methods have been proposed<cit.>, starting from early methods<cit.> that learn implicit feature transformations using neural networks. Learned transformation can suffer from the generalization problem, since the neural network does not explicitly account for changes in cameras' intrinsic and extrinsic parameters. Recent methods <cit.> adopt various depth representations to explicitly transform features based on multi-view geometry to the BEV space. The key in these methods is the underlying depth representation, which dominates the resolution and accuracy the feature transformation module can achieve. For instance, LSS <cit.> adopts a non-parametric depth representation. It represents depth as a discretized probability density function along each visual ray, which can be treated as a categorical distribution of depth. It can further form the depth probability volume in LSS for all pixels in an image. When the sampling rate is sufficient, such non-parametric depth distribution can adequately represent a large variety of depths, including multi-modal depth distributions. In practice, however, to estimate such depth representation, the backbone needs to estimate a probability volume that is cubic with the input image size and increases significantly along the number of input images, which limits the image and depth resolution.
To address this limitation, M^2BEV <cit.> adopts a simplified depth representation assuming the depth of all pixels follows a uniform distribution. Under this assumption, features are directly lifted to every location on the visual ray, resulting in identical features along the entire ray. Subsequent works <cit.> adopted a similar depth representation. Such a simplified representation has an efficiency advantage, as the backbone network does not need to estimate any depth parameters, but it can cause ambiguity and noise in the 3D space.
Unlike the non-parametric depth distribution used in <cit.> or the uniform depth distribution in M2BEV <cit.>, we adopt a parametric depth distribution to model pixel-wise depth for feature lifting. A parametric depth distribution represents depth as a continuous distribution such as a Gaussian or Laplacian distribution, and its estimated distribution parameters can be used to evaluate the depth likelihood or depth probability at any given depth value along each ray. Modeling the depth of a pixel takes only two parameters: (μ,σ) for a Gaussian and (μ,b) for a Laplacian, so it can be more efficient than a non-parametric distribution. Moreover, its continuous nature allows evaluating the depth likelihood at any point along the visual ray, which can achieve a higher depth resolution than the discretized non-parametric distribution. We specifically designed our pipeline to incorporate parametric depth to improve the 2D-to-BEV feature transformation, and we further propose the derivation of visibility for subsequent planning tasks and visibility-aware evaluations.
Aggregating 3D features into BEV space. Given the lifted features in 3D space, most existing works, including LSS <cit.> and M^2BEV <cit.>, use the feature concatenation method introduced by PointPillars <cit.> for transforming 3D features into BEV space. The 3D feature volume is split along the horizontal dimensions and interpreted as pillars of features. Then, a feature vector is created by concatenating features along the vertical dimension for each pillar. All the concatenated features form a 2D feature map, which is converted into a BEV feature map by a few convolution layers. This design allows each voxel along the Z-axis to have an equal contribution to the final BEV feature. However, this method can be affected by noisy features in empty space. We thus propose to compress the features based on a space occupancy probability calculated from the parametric depth distribution. Our proposed method largely reduces the influence of empty voxels on the aggregated features.
Joint Detection and Segmentation in BEV space.
M^2BEV recently proposed a unified detection and segmentation framework in BEV space, which we leverage to evaluate the effectiveness of our method. Specifically, the image features are transformed into a unified BEV feature, which is used by two parallel heads, a detection head and a segmentation head, to achieve multi-task estimation. M^2BEV leverages a detection head design from Lidar-based detection methods <cit.> and modifies it to better suit camera-based methods. Its segmentation head is inspired by the design from <cit.>. However, in contrast to prior work, we leverage the proposed explicit feature transformations based on parametric depth to address its weaknesses.
Temporal extension.
A few concurrent methods <cit.> propose to utilize temporal information to further boost segmentation and detection performance in BEV space and have achieved promising results. Most of these methods, including BEVFormer <cit.>, BEVerse <cit.>, and BEVDet4D <cit.>, are based on the feature transformation module in LSS <cit.>.
<cit.> adopt depth supervision and temporal stereo matching to improve depth quality and further propose a more efficient implementation of LSS's lift-splat step. <cit.> query 2D features from the projected locations of 3D voxels, which does not explicitly use depth and is similar to the uniform depth assumption in M^2BEV <cit.>. Our contributions, focusing on depth representation, feature transformation, and visibility estimation, are orthogonal to the temporal extensions of these methods, and our method can potentially be applied to them to further boost their performance and enable efficient visibility inference.
§ METHOD
Let us now introduce our framework to jointly perform segmentation and object detection. As shown in Fig. <ref>, our framework is comprised of three fundamental components: feature extraction, feature transformation, and multi-task estimation. The framework's key contributions include a parametric depth decoder integrated into the feature extraction, a geometry-aware feature lifting module, and an occupancy-aware feature aggregation module. Furthermore, we introduce a visibility estimation module as a constituent of the multi-task estimation that provides crucial visibility information for downstream planning tasks.
§.§ Problem Statement
Let { I_i} _i=1^N, I_i∈ℝ^H× W × 3,
be a set of RGB images taken at the same time slot, H and W define the image dimension, and { K_i, R_i, T_i}_i=1^N represent the intrinsic and extrinsic parameters for their corresponding camera poses, respectively. We focus on lifting the image features f_i^2D∈ℝ^H× W × CH to the 3D space as f^3D∈ℝ^X'× Y' × Z'× CH and then aggregate them to the BEV space as f^BEV∈ℝ^X× Y × CH_B for 3D object detection and segmentation.
§.§ Parametric Depth Distribution Modelling
Let us first introduce our parametric depth distribution modelling. Given an image I_i, we extract its latent features f_i^T using a backbone network followed by an image feature decoder network to extract 2D image features, f_i^2D, see Fig. <ref>. Then, following depth estimation methods <cit.>, we adopt a Laplacian distribution to model depth in real-world scenarios, where the depth distribution for each pixel is given by
ℒ(d|μ,b) = 1/2bexp(-|d-μ|/b),
where μ provides an estimation of the depth, and b is the diversity parameter of the distribution, see Fig. <ref>. The goal in this module is to estimate (μ, b).
We design the parametric depth decoder network Φ_θ to map the latent feature to the parameter space of the depth distribution: Φ_θ: ℝ^H× W× CH_T→ℝ^H× W× 2,
where CH_T is the latent feature dimension. Note that when the ground-truth depth for each pixel is known, the depth distribution becomes a delta function, where the depth probability p(d_gt) on ground-truth depth d_gt is one and zero anywhere else. However, in practice, the depth is unknown for each pixel. Given our modelled depth distribution, we can calculate the depth likelihood analytically based on our parametric modelling.
Fig. <ref> shows an example of depth distribution where μ gives an estimate of the depth and b could be interpreted as the uncertainty of each estimation. Larger values of b correspond to areas where the estimation is more uncertain.
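A minimal NumPy sketch of this modelling step is given below: the decoder's two output channels are interpreted as (μ, b), with b kept strictly positive, and the Laplacian likelihood above can then be evaluated at any query depth. The function names and the softplus-plus-floor choice for positivity are our assumptions, not details taken from the released implementation.

```python
import numpy as np

def decode_depth_params(decoder_out):
    """Split a (H, W, 2) decoder output into (mu, b), keeping b positive."""
    mu = decoder_out[..., 0]
    b = np.log1p(np.exp(decoder_out[..., 1])) + 1e-3  # softplus + small floor
    return mu, b

def laplacian_likelihood(d, mu, b):
    """Evaluate the Laplacian depth likelihood L(d | mu, b) element-wise."""
    return np.exp(-np.abs(d - mu) / b) / (2.0 * b)
```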
§.§ Geometry-aware Feature Lifting
Fig. <ref> depicts our geometry-aware feature lifting module to transform the 2D image features f_i^2D∈ℝ^H× W× CH from the camera coordinate system into 3D space defined for the ego vehicle coordinate system, generating the 3D feature volume f_i^3D∈ℝ^X'× Y'× Z'× CH_I.
Ideally, the 2D image feature for each pixel is back-projected along the visual ray to the 3D location defined by its ground truth depth value f^3D( P_gt) = f^2D( p), where P_gt = d_gt K_i^-1p̃, p̃ is the homogeneous coordinate for p. Without knowing the true depth value for each pixel, we discretize the 3D space into voxels and thus aggregate the feature for each voxel by forward projecting it to multi-view images.
Precisely, let P_j = (x_j, y_j, z_j)^T define the 3D coordinate of centre for voxel j. Given the camera poses for multiple views, we project it to image I_i as
d^i_jp̃^i_j = K_i( R_iP̃_j+ T_i) where p̃^i_j denotes the homogenous coordinate of p^i_j in image I_i. Meanwhile, we can obtain the depth value of P_j in view i as d^i_j. Based on our parametric depth modelling, we obtain the likelihood of d^i_j being on the object surface as
α_d^i_j = ℒ(d^i_j|μ^i_ p^i_j,b^i_ p^i_j) = 1/2b^i_ p^i_jexp(-|d^i_j-μ^i_ p^i_j|/b^i_ p^i_j).
We similarly project the voxel to all views and aggregate the feature for the j-th voxel as
f_j^3D = ∑_i=1^Nα_d^i_j f_i^2D( p^i_j),
where f_i^2D is the extracted image feature. We adopt bilinear interpolation to obtain f_i^2D( p^i_j) when p^i_j is a non-grid coordinate. All lifted 3D features form the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH, which is then aggregated into a 2D BEV feature by our occupancy-aware feature aggregation module, introduced in the following section.
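The sketch below illustrates this lifting step for a single view: each voxel centre is projected into the image, the depth likelihood α is evaluated from the predicted (μ, b) maps at the projected pixel, and the sampled image feature is weighted by α. For brevity it uses nearest-neighbour instead of bilinear sampling and omits the sum over views; all names and shapes are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def lift_features_single_view(voxels, feats_2d, mu, b, K, R, T):
    """Lift 2D features of one view to 3D voxel centres, weighted by depth likelihood.

    voxels   : (M, 3) voxel centres P_j in the ego-vehicle frame.
    feats_2d : (H, W, C) image features; mu, b: (H, W) Laplacian parameters.
    K, R, T  : (3, 3), (3, 3), (3,) camera intrinsics and extrinsics.
    """
    cam = R @ voxels.T + T.reshape(3, 1)      # ego frame -> camera frame
    pix = K @ cam
    d = pix[2]                                # depth d_j of each voxel in this view
    u, v = pix[0] / d, pix[1] / d             # projected pixel coordinates
    H, W, C = feats_2d.shape
    valid = (d > 0) & (u >= 0) & (u < W - 1) & (v >= 0) & (v < H - 1)
    out = np.zeros((voxels.shape[0], C), dtype=feats_2d.dtype)
    ui, vi = u[valid].astype(int), v[valid].astype(int)   # nearest-neighbour sample
    alpha = np.exp(-np.abs(d[valid] - mu[vi, ui]) / b[vi, ui]) / (2.0 * b[vi, ui])
    out[valid] = alpha[:, None] * feats_2d[vi, ui]
    return out
```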
§.§ Occupancy-aware Feature Aggregation
Our occupancy-aware feature aggregation module aggregates the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH from ego vehicle 3D coordinate frame into BEV space, forming BEV feature map f^BEV∈ℝ^X× Y× CH_B.
As shown in Fig. <ref>, the 2D BEV coordinate system is aligned with the XY plane of the ego vehicle coordinate system where the shared origin is defined on the center of the ego vehicle. Note that the BEV coordinate system only has 2 dimensions, forgoing the Z dimension. The goal of the feature aggregation is to transform the 3D feature volume in ego vehicle coordinate into a 2D feature map in the BEV space, which can be treated as aggregating the 3D feature volume along its Z axis. To this end, we first rearrange the previously computed depth likelihood for all voxels by Eq. <ref> into a depth likelihood volume P^3D∈ℝ^X'× Y'× Z', which shares the same volumetric coordinate as that of 3D feature volume f^3D. For each column along the Z-axis in the depth likelihood volume, the likelihood of each voxel of different height reflects its spatial occupancy. Thus, we normalize the depth likelihood along Z axis into a spatial occupancy distribution, forming a spatial occupancy volume O^3D∈ℝ^X'× Y'× Z' defined as
O^3D(x,y,z) = P^3D(x,y,z) + b_o/∑_z_i=0^Z'-1P^3D(x,y,z_i) + b_o,
where the b_o is a bias term to encourage an equal contribution of feature on completely occluded region.
Our feature aggregation along the Z-axis could minimize the influence of features from empty voxels to the final feature in the BEV frame. Given the spatial occupancy volume O^3D, we compute the final 2D BEV feature as a weighted sum of 3D features
f̂^BEV(x,y) = ∑_z_i=0^Z'-1 (O^3D(x,y,z_i)× f^3D(x,y,z_i)),
where we use the normalized spatial occupancy distribution as the 3D feature weight.
We further transform f̂^BEV via a few layers of convolution to obtain the final feature for BEV space f^BEV which is then applied to detection and segmentation tasks.
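The aggregation step can be summarised by the short sketch below: the per-voxel depth likelihood is normalised along the height axis into occupancy weights, and the 3D features are collapsed to BEV by an occupancy-weighted sum. Array shapes and the default value of the bias b_o are illustrative assumptions.

```python
import numpy as np

def aggregate_to_bev(feat_3d, likelihood_3d, b_o=1e-3):
    """Collapse a 3D feature volume to a BEV feature map with occupancy weights.

    feat_3d       : (X, Y, Z, C) lifted 3D features.
    likelihood_3d : (X, Y, Z) per-voxel depth likelihood P^3D.
    b_o           : small bias so fully occluded columns still contribute.
    """
    # Normalise the likelihood along the height axis into occupancy weights.
    occ = (likelihood_3d + b_o) / (likelihood_3d.sum(axis=2, keepdims=True) + b_o)
    # Occupancy-weighted sum along Z yields the BEV feature of each (x, y) cell.
    return (occ[..., None] * feat_3d).sum(axis=2)
```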
§.§ Object Detection and Segmentation
Given the BEV feature map, we use two heads for detection and segmentation. Specifically, we adopt the detection head and segmentation head from M^2BEV <cit.> without modification for fair comparison. The detection head consists of three convolution layers and outputs dense 3D anchors in BEV space along with category, box size, and direction of each object. The segmentation head consists of five convolution layers and outputs 2 classes predictions, road and lane, as originally defined by LSS<cit.>.
§.§ Training Strategy
We adopt a supervised training strategy. We supervise the parametric depth estimation by maximizing its likelihood at ground-truth depth observations. Specifically, we minimize the negative log-likelihood loss ℒ_D using sparse ground-truth depths d_gt generated from sparse lidar measurements; in the loss below, ℒ denotes the Laplacian distribution.
ℒ_D(θ) =∑_i=1^N∑_p∈𝒫^i-log(ℒ(d^p_gt,i|μ_i^p(θ), b_i^p(θ)))
where 𝒫^i defines the set of pixel coordinates with valid ground truth depth map for view i.
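For a Laplacian, the per-pixel negative log-likelihood reduces to log(2b) + |d_gt - μ| / b, so the depth loss can be sketched in a few lines; the masking convention below is an illustrative assumption, not the authors' code.

```python
import numpy as np

def depth_nll_loss(mu, b, d_gt, valid_mask):
    """Laplacian negative log-likelihood averaged over pixels with lidar ground truth."""
    nll = np.log(2.0 * b) + np.abs(d_gt - mu) / b   # -log L(d_gt | mu, b)
    return nll[valid_mask].mean()
```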
For the detection head, we use the 3D detection loss of PointPillars <cit.> as follows, where ℒ_loc is the total localization loss, ℒ_cls is the object classification loss, ℒ_dir is the direction classification loss, N_pos refers to the number of positive samples, and β_cls, β_loc, β_dir are set to 1.0, 0.8, and 0.8, respectively.
ℒ_det = 1/N_pos(β_clsℒ_cls + β_locℒ_loc + β_dirℒ_dir)
Please refer to <cit.> for more details.
For the segmentation head, we combine the Dice loss ℒ_dice and the binary cross-entropy loss ℒ_bce into the segmentation loss ℒ_seg with equal weights β_dice = β_bce = 1.
ℒ_seg = β_diceℒ_dice + β_bceℒ_bce
Since the visibility map and additional outputs are geometrically derived from the estimated parametric depth representation without any learned parameters, it is not necessary to apply supervision to them.
§ VISIBILITY
§.§ Visibility Map
The segmentation in BEV space mainly focuses on segmenting lane regions. However, those regions are not always visible in the camera views due to occlusion caused by vertical scene structures such as buildings (see Fig. <ref>). We thus propose to use our parametric depth modeling to infer a visibility map which decouples visible and occluded areas and will contribute to mitigating the hallucination effect.
We define a visibility map V^BEV∈ℝ^X× Y to describe the visibility range of ego vehicle's multi-view cameras. Starting from the likelihood of the Laplacian distribution in Eq. <ref>, the occlusion probability B(d) of a voxel in 3D space that has a back-projected depth d in camera view is
B(d) = ∫_0^dℒ(x|μ,b) dx.
We derive this occlusion probability as follows. Firstly we find the indefinite integral of Eq. <ref> as
F(x) = ∫_-∞^xℒ(x|μ,b)dx = 1/2exp(x-μ/b) if x < μ
1-1/2exp(-x-μ/b) if x ≥μ.
Then we calculate the definite integral between [0,d] as the occlusion probability B(d), which is defined as
B(d) = F(d) - F(0) = F(d)-1/2exp(-μ/b).
In practice, this is computed very efficiently, without the need to perform the discrete integration of the depth likelihood over the range [0,d]. Based on the relationship between visibility and occlusion, we convert the occlusion probability B to visibility probability V by
V(d) = 1-B(d) = 1 + 1/2exp(-μ/b)-F(d).
To finally compute the visibility in BEV space, we take the maximum visibility probability along the Z axis to form the visibility map V^BEV.
Ṽ^BEV(x,y) = max_z∈𝒵'V(x,y,z)
where 𝒵'={0,1,2⋯ Z'-1}. The V^BEV is obtained via interpolation from Ṽ^BEV.
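The derivation above translates directly into code: the closed-form Laplacian CDF gives the occlusion probability B(d) = F(d) - F(0), visibility is its complement, and the BEV map is the maximum over the height axis. In the sketch below, the per-voxel back-projected depths and the gathered (μ, b) parameters are assumed to be precomputed; names and shapes are illustrative.

```python
import numpy as np

def laplace_cdf(x, mu, b):
    """Closed-form CDF of the Laplacian distribution."""
    return np.where(x < mu,
                    0.5 * np.exp((x - mu) / b),
                    1.0 - 0.5 * np.exp(-(x - mu) / b))

def visibility_bev(depth_in_view, mu, b):
    """Visibility map in BEV space as the max over the height axis.

    depth_in_view, mu, b : (X, Y, Z) back-projected voxel depths and the
    per-voxel Laplacian parameters gathered from the camera view.
    """
    occlusion = laplace_cdf(depth_in_view, mu, b) - 0.5 * np.exp(-mu / b)  # B(d) = F(d) - F(0)
    return (1.0 - occlusion).max(axis=2)
```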
§.§ Visibility-aware Evaluation
For semantic segmentation, where the ground truth is usually generated using aerial images, it is not possible to evaluate predictions in visible and occluded areas separately using the standard evaluation metrics. Therefore, in this section, we follow a process similar to the one used to generate the visibility map to derive a visibility-aware evaluation method for segmentation in BEV space. In this case, however, we project the lidar 3D points (ground truth) into multi-view image space and use a depth completion network to obtain multi-view dense depth maps. This depth map is then used as the expected depth value to build a parametric depth representation F(θ_gt). We then evaluate the ground-truth depth likelihood on each voxel in 3D space using Eq. <ref>, forming the ground-truth depth likelihood volume L_gt. Finally, we derive the ground-truth visibility map in BEV space V using Eq. <ref> and Eq. <ref>.
In this case, V reflects the maximum visibility of the multi-view cameras in BEV space. Thus, it can be used as a mask to explicitly evaluate results in BEV space subject to visibility. Specifically, we use a threshold τ_vis to split the predicted segmentation s_pred and ground-truth segmentation label s_gt into visible region {s^vis_pred,s^vis_gt} and occluded region {s^occ_pred,s^occ_gt}. We can then compute the IoU for the visible (IoU_vis) and occluded (IoU_occ) regions separately as
s^vis = ∑_x∈𝒳,y∈𝒴s(x,y)× 1(V(x,y) ≥τ _vis),
s^occ = ∑_x ∈𝒳, y∈𝒴s(x,y)×1(V(x,y) < τ _occ),
IoU_vis = s^vis_pred∩ s^vis_gt/s^vis_pred∪ s^vis_gt, IoU_occ = s^occ_pred∩ s^occ_gt/s^occ_pred∪ s^occ_gt where 𝒳={0,1,⋯,X-1}, 𝒴={0,1,⋯,Y-1}, and 1(·) is the indicator function.
We also report the occlusion rate on nuScenes as the percentage of visible or occluded segmentation labels over total number of segmentation labels.
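A compact sketch of the resulting metric is given below: the ground-truth visibility map is thresholded to split BEV cells into visible and occluded sets, and the IoU is computed separately on each. Using a single threshold for both sets and its default value of 0.5 are our assumptions for illustration.

```python
import numpy as np

def visibility_aware_iou(pred, gt, vis_map, tau=0.5):
    """IoU restricted to visible and occluded BEV cells.

    pred, gt : (X, Y) boolean segmentation masks in BEV space.
    vis_map  : (X, Y) ground-truth visibility probabilities V.
    tau      : visibility threshold separating visible from occluded cells.
    """
    def iou(p, g):
        union = np.logical_or(p, g).sum()
        return np.logical_and(p, g).sum() / union if union > 0 else float('nan')

    visible = vis_map >= tau
    return iou(pred & visible, gt & visible), iou(pred & ~visible, gt & ~visible)
```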
§ EXPERIMENTS
In this section, we first detail our experimental settings, then we demonstrate the effectiveness of our approach on the nuScenes dataset, and, finally, we provide ablation studies on the main components of our method.
§.§ Implementation Details
Dataset. We conduct our experiments on the nuScenes dataset <cit.>. The nuScenes dataset provides video sequences along with multiple sensor outputs including Lidar, Radar, GPS and IMU, all of which are collected by calibrated and synchronized sensors mounted on a vehicle driving across Boston and Singapore. The dataset consists of 1000 sequences, split into 700 for training and 150 each for validation and testing. Each sample provides six RGB images captured by 6 cameras with divergent viewing directions along with Lidar sparse 3D points, Radar sparse 3D points, GPS pose and IMU readouts. We follow <cit.> to generate ground-truth segmentation labels from the global map provided by the nuScenes dataset.
Evaluation metrics. We report our results using the same metrics as in the nuScenes benchmark. For detection, we report mean Average Precision (mAP) and the nuScenes detection score <cit.>. For segmentation, we follow LSS <cit.>, and report the mean IoU score (mIoU). In addition, we report results using the proposed visibility-aware evaluation detailed in Sec. <ref>. Unless specified, we report numbers on the validation set.
Network architecture. We use a unified framework to demonstrate benefits of our depth-based feature transformation module. The network consists of a backbone image encoder and two decoding heads, one for segmentation and one for detection. We use ResNet with deformable convolution as the image encoder. For the decoding heads, we use the same architecture as the one in PointPillars <cit.>.
We set the size of the intermediate 3D volume consisting of X'× Y'× Z' = 400×400×12 voxels, with a voxel size of 0.25m× 0.25m× 0.5m, respectively. The final BEV space dimension consists of X× Y = 200×200 grids. Each grid is of size 0.5m× 0.5m.
Training and inference. During training, we use 6 RGB images and corresponding camera parameters as input.
The training for parametric depth estimation is supervised by the ground-truth sparse Lidar points provided in the dataset. Ground-truth detection and segmentation labels are used to supervise the detection and segmentation heads. We set batch size to 1 per GPU and use 3 nodes with 8 Nvidia V100 GPUs. For inference, our method only requires the 6 input RGB images together with the corresponding camera parameters.
§.§ Results
We now compare our results with M^2BEV and other state-of-the-art methods on the nuScenes dataset. To facilitate the comparison to other approaches, we use ResNeXt-101 as the backbone of our method for the detection and segmentation experiments and ResNet-50 as the backbone for the multi-task learning experiments and efficiency analysis.
Detection. We report the results of our method and related state-of-the-art methods in Tab. <ref> and Tab. <ref>, for the validation set and the test set respectively. For the validation set, we only include frame-wise camera-based methods, i.e., we exclude approaches that use temporal information. For the test set, we include the latest results covering Camera, Lidar, Radar and their combinations. In both sets, our approach outperforms all existing camera-based methods on both mAP and the NDS score.
Segmentation. We now focus on evaluating our semantic segmentation results. We report our performance compared to state-of-the-art methods on the nuScenes validation set in Tab. <ref>.
We also report a variant of our model trained without depth supervision (Ours*) to fairly compare with LSS <cit.>.
Our method performs significantly better compared to LSS <cit.> on both road and lane segmentation and slightly better compared to M^2BEV <cit.>, the closest method to ours.
Our model without depth supervision still outperforms existing methods.
Interestingly, if we take visibility into account, as shown in Tab. <ref> and Fig. <ref>, our method clearly outperforms the baselines in the visible areas while maintaining performance comparable to M^2BEV in the occluded regions. These results evidence the benefits of our parametric depth approach.
Joint detection and segmentation. Finally, we report results for jointly evaluating both tasks. In this case, we compare our results to the multi-task version of M^2BEV. We show results for this experiment in Tab. <ref>. Our method, once again, outperforms the baseline on both detection and segmentation tasks. These results further evidence the benefits of an improved depth representation in the 2D to 3D feature transformation process.
Efficiency. Our parametric depth estimation requires estimating additional parameters compared to simplified depth estimation approaches. As shown in Tab. <ref>, our model requires a slightly larger amount of memory; however, this does not lead to a significant increase in inference time.
§.§ Ablation Studies
We carry out ablation experiments to study the influence of feature transformations on final detection and segmentation performance and the robustness of our model to calibration error. More ablation experiments can be found in the supplementary material. We use ResNet-50 as the backbone for all ablation experiments.
Feature transformations. We evaluate the effectiveness of the parametric depth-based feature lifting and aggregation module by comparing it with the baseline non-parametric depth-based lifting of LSS <cit.>, a baseline uniform depth-based lifting similar to M^2BEV, and the widely used PointPillars <cit.> feature aggregation. Results are reported in Tab. <ref>. Our proposed parametric depth-based lifting coupled with occupancy-based feature aggregation achieves the best performance for both detection and segmentation.
Limitations. Like all camera-based methods, our method can only provide reliable detection and segmentation results in visible regions. In occluded regions, although our method can provide hallucinated results and visibility information, these results are not reliable for making critical driving decisions. Downstream planning tasks should utilize the visibility and uncertainty information to achieve reliable planning.
§ CONCLUSION
We propose a parametric depth distribution modeling-based feature transformation that efficiently transforms 2D image features to BEV space. By incorporating visibility inference, our method can provide crucial visibility information to downstream planning tasks. Moreover, our approach outperforms existing methods in both detection and segmentation tasks, making it a promising candidate for feature transformation in future works. In future work, we aim to investigate the integration of temporal information to improve estimation accuracy.
|
http://arxiv.org/abs/2307.04948v1 | 20230711003220 | Viscous tweezers: controlling particles with viscosity | [
"Tali Khain",
"Michel Fruchart",
"Vincenzo Vitelli"
] | cond-mat.soft | [
"cond-mat.soft",
"physics.flu-dyn"
] |
James Franck Institute, The University of Chicago, Chicago, IL 60637, USA
James Franck Institute, The University of Chicago, Chicago, IL 60637, USA
Gulliver, UMR CNRS 7083, ESPCI Paris PSL, 75005 Paris, France
James Franck Institute, The University of Chicago, Chicago, IL 60637, USA
Kadanoff Center for Theoretical Physics, The University of Chicago, Chicago, IL 60637, USA
Control of particle motion is generally achieved by applying an external field that acts directly on each particle.
Here, we propose a global way to manipulate the motion of a particle by dynamically changing the properties of the fluid in which it is immersed.
We exemplify this principle by considering a small particle sinking in an anisotropic fluid
whose viscosity depends on the shear axis.
In the Stokes regime, the motion of an immersed object is fully determined by the viscosity of the fluid through the mobility matrix, which we explicitly compute
for a pushpin-shaped particle.
Rather than falling upright under the force of gravity, as in an isotropic fluid, the pushpin tilts to the side, sedimenting at an angle
determined by the viscosity anisotropy axis.
By changing this axis, we demonstrate control over the pushpin orientation as it sinks, even in the presence of noise, using a closed feedback loop.
This strategy to control particle motion, that we dub viscous tweezers, could be experimentally realized in a fluid comprised of elongated molecules by suitably changing their global orientation.
Viscous tweezers: controlling particles with viscosity
Vincenzo Vitelli
October 2023
======================================================
The control of small particles in a fluid is crucial in applications including sedimentation <cit.>, swimming <cit.>, active matter <cit.>, crystal growth <cit.>, or cell manipulation and drug delivery <cit.>.
To achieve control over the state of a single particle, it is common to apply external fields that act directly on the particle by exerting a force or a torque <cit.>. Examples include magnetic <cit.>, electric <cit.>, optical <cit.>, or acoustic <cit.> forces as well as surface Faraday waves <cit.>.
In this Letter, we take an alternative route towards particle control. Instead of acting directly on the particle, we act on the fluid.
We show that modulating fluid properties such as viscosity implements a way to indirectly control the motion or orientation of the immersed object. This method of object manipulation is independent of the nature of the particle and does not impose a predetermined flow in the fluid.
The basic requirement, tunable anisotropic viscosities, is present in systems ranging from fluids under electric or magnetic fields <cit.> and electron fluids <cit.> to so-called viscosity metamaterials, complex fluids whose viscosity can be controlled by applying acoustic perturbations <cit.>.
Stokes flow and mobility.—Let us consider a small rigid particle immersed in a viscous incompressible fluid.
In this low Reynolds number regime, the fluid flow is well-described by the Stokes equation,
∂_t v_i = ∂_j σ_ij + f_i
with σ_ij = -δ_ij P + η_ijkℓ∂_ℓ v_k
along with the incompressibility condition ∂_i v_i = 0.
Here, v_i is the fluid velocity, P the pressure, σ_ij the stress tensor, f_i an external force, and η_ijkℓ the viscosity tensor.
The overdamped motion of a particle in a fluid is described by the linear equation
[ V; Ω ] = 𝕄(η) [ F; τ ],
where the 6 × 6 mobility matrix 𝕄 relates the force F and torque τ applied to the particle with its velocity V and angular velocity Ω <cit.>.
The form of 𝕄 depends on both the geometry of the object and the viscosity tensor η of the fluid.
The position and orientation of the particle can then be obtained by integrating the velocity and angular velocity.
We focus here on the orientation of a sedimenting particle that sinks under the force due to gravity F = F ẑ. Note that here we apply no torque (τ = 0), which is the most common way of changing the orientation of the particle. Equation (<ref>) then reduces to
Ω = T(η) F
in which T is a sub-block of 𝕄, see Supplemental Material (SM). As the force and the object are given, our only handle on the orientation dynamics is the viscosity tensor η in Eq. (<ref>).
Viscosity of an anisotropic fluid.—In familiar fluids such as water, this viscosity tensor reduces to one scalar coefficient, the shear viscosity μ.
When the fluid is anisotropic (for example, a fluid consisting of elongated molecules that are aligned to an externally applied magnetic field, B, as in Fig. <ref>a), the shear viscosity of the fluid may not be the same in all directions, but depends on the shear axis.
Assuming that the viscosity tensor is invariant under rotations about the anisotropy (alignment) axis, the most general equation of motion can contain three shear viscosities (see SM).
The shear stress and strain rate deformations corresponding to these viscosities are visualized in Fig. <ref>b, for an anisotropy axis chosen along the z direction.
In a generic fluid, the magnitude of the anisotropy could depend on both B and on the microscopic details of the system.
Here, we separate out the orientation and magnitude: B̂ controls the direction of the anisotropy axis and ϵ sets the strength of the anisotropy.
In this case, the Stokes equation is
-1/μ∇ P + Δv + ϵ𝒟(B̂)v = 0,
where 𝒟 is a matrix of second derivatives.
As an example, consider a weakly anisotropic fluid with shear viscosities μ_1 = μ, μ_2 = μ(1 + ϵ), and μ_3 = μ(1 + 4/3ϵ) when the anisotropy axis is along the z direction (see Fig. <ref>b and SM). This particular form allows for analytical calculations when ϵ is small (SM), but our general strategy applies to any anisotropic viscosity. The operator 𝒟 then takes the form
𝒟(B̂ = ẑ) = [ ∂_z^2, 0, -∂_x ∂_z/3; 0, ∂_z^2, -∂_y ∂_z/3; 0, 0, Δ + 2∂_z^2/3 ].
The Green function of the Stokes equation (Stokeslet) can be computed numerically for any value of ϵ using fast Fourier transforms. We compute it analytically in the perturbative regime to linear order in ϵ (SM).
Motion of a pushpin in an anisotropic fluid.—To determine the form of 𝕄, we now need to specify the shape of our particle.
In principle, this requires solving boundary value problems for this specific shape <cit.>. We use a shortcut by which the mobility matrix for a given shape is obtained by constructing the object out of Stokeslets (see SM and Refs. <cit.>).
To validate this method, we first consider a sphere. In this case, we can analytically solve the boundary value problem of a fluid flowing past the sphere in the limit of weak anisotropy (small ϵ), calculate the force and torque that the fluid exerts on the object, and compare with the results of the Stokeslet method (SM).
The main consequence is that a sphere settling under the force of gravity in an anisotropic fluid sinks slower than in an isotropic one. The familiar Stokes drag law is modified: the drag coefficient is increased in the x and y directions by a factor of (1 + ϵ/2) and in the z direction by (1 + ϵ).
When the shape of the particle is not spherically symmetric, both its velocity and angular velocity can change as compared to the isotropic case.
We consider the simplest shape which exhibits non-trivial orientation evolution: a cylindrically symmetric pushpin, shown in Fig. <ref>a. The orientation of the pushpin is described by two angles: θ, the angle the pushpin long axis makes with the lab z axis, and ϕ, the angle between the plane projection of the pushpin long axis and the lab x axis. Equivalently, the pushpin orientation is given by the radial unit vector
n̂(θ,ϕ) = (sin(θ)cos(ϕ), sin(θ)sin(ϕ), cos(θ)).
The mobility matrix 𝕄 = 𝕄 (ϵ, B̂, n̂), which determines how the pushpin moves, depends on the orientation n̂ of the pushpin, on the anisotropy axis B̂ = (cos(ϕ_B)sin(θ_B), sin(ϕ_B)sin(θ_B),cos(θ_B)) of the fluid (Eq. <ref> is written with B̂ = ẑ), and on the strength of the anisotropy ϵ. By constructing the pushpin out of Stokeslets, we can compute the mobility matrix for any anisotropy direction and pushpin orientation (see SM for more details).
Examples of mobility matrices for a tilted pushpin in an isotropic and an anisotropic fluid can be visualized schematically [schematic matrix figure omitted], in which red/blue represent positive/negative entries whose magnitude is represented by lightness (see SM).
Orientation dynamics of a sedimenting pushpin.—We investigate the dynamics of a pushpin sinking under the force of gravity.
Applying a constant force in the -z direction determines the angular velocity Ω of the pushpin, as in Eq. <ref>.
Then, the equation of motion for the orientation of the pushpin is given by
∂_t n̂ = N(n̂) ≡Ω×n̂
in which Ω is given by Eq. (<ref>).
Since N·n̂ = 0, the vector field N describing the orientation dynamics of the pushpin is tangent to the sphere (there is no radial component), as shown in Fig. <ref>c.
The arrows show the instantaneous motion of the tip of a pushpin embedded in the center of the sphere.
In spherical coordinates, Eq. <ref> reads
θ̇ = N_θ
sin(θ)ϕ̇ = N_ϕ,
which we numerically solve with an explicit Runge-Kutta method of order 5(4) as implemented in SciPy <cit.>.
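For illustration, the orientation dynamics can be integrated as in the short Python sketch below. Evaluating the true orientation-dependent block T(ϵ, B̂, n̂) requires the Stokeslet construction described in the SM, so here the angular velocity is replaced by a toy closure Ω = γ (n̂ × ẑ), which coincides with the isotropic approximation N_iso = (0, -sinθ, 0) introduced below; the closure and the parameter values are placeholders, not the full model.

import numpy as np
from scipy.integrate import solve_ivp

z_hat = np.array([0.0, 0.0, 1.0])

def angular_velocity(n, gamma=1.0):
    # Placeholder for Omega = T(eps, B, n) F.  In the full model, T is the
    # rotation-translation block of the mobility matrix built from Stokeslets
    # (see SM); the toy closure Omega = gamma * (n x z) used here reproduces
    # the isotropic relaxation towards the upright orientation.
    return gamma * np.cross(n, z_hat)

def rhs(t, n):
    return np.cross(angular_velocity(n), n)   # dn/dt = Omega x n

theta0, phi0 = 0.3 * np.pi, 0.0
n0 = np.array([np.sin(theta0) * np.cos(phi0),
               np.sin(theta0) * np.sin(phi0),
               np.cos(theta0)])
sol = solve_ivp(rhs, (0.0, 10.0), n0, method="RK45", rtol=1e-8)
n_end = sol.y[:, -1] / np.linalg.norm(sol.y[:, -1])
print("theta(t=10) =", np.arccos(n_end[2]))   # relaxes towards the north pole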
We now ask what is the eventual orientation of the pushpin. Fixed points of the orientation dynamics satisfy
N (θ^*, ϕ^*) = 0.
In the isotropic case (ϵ=0), we find that after a transient, the pushpin orients itself to fall upright, with θ=0 (Fig. <ref>c).
We expect that the anisotropy in the direction B will tilt the pushpin at an angle depending on the anisotropy direction and strength (Fig. <ref>d).
Such a setup would allow us to control the orientation of the pushpin by acting on the fluid (Fig. <ref>a). We confirm that this is indeed the case using numerical simulations of the orientation dynamics.
The results of our numerical simulations are presented in Fig. <ref>, in which we zoom in on the region of the sphere around the north pole, which corresponds to the stable fixed point in an isotropic fluid (Fig. <ref>a-b).
In the anisotropic case (ϵ≠ 0), the steady state orientation of the pushpin can change: in Fig. <ref>c, the fixed point moves off of the north pole.
We numerically compute the dependence of the fixed point position (θ^*, ϕ^*) on the orientation of the anisotropy axis (θ_B, ϕ_B) in Fig. <ref>d.
As long as the anisotropy axis B̂ is neither exactly parallel nor perpendicular to F, the stable fixed point shifts off of the north pole (θ^* ≠ 0).
Note that θ^* = θ^* (θ_B) and ϕ^* = ϕ^* (ϕ_B), with the exception of the case θ^* = 0, in which case ϕ^* is not defined.
In this perturbative regime, we find that the numerical results are summarized by
θ^* = ϵ A sin(k θ_B) and ϕ^* = ϕ_B - π
where A/π≃ 0.0073 and k ≃ 2 for 0 < θ_B < π/2. Increasing ϵ moves the fixed point further from the north pole.
If π/2 < θ_B < π, the θ^* dependence remains the same as shown in Fig. <ref>d, and ϕ^* shifts by π.
With the help of Eq. <ref>, it is possible to adiabatically change the axis of B over time to induce the orientation of the pushpin to follow some desired trajectory.
Fig. <ref> provides the necessary protocol for θ_B(t) and ϕ_B(t) that drives the pushpin to rotate in such a way as to trace out the rose trajectory.
The control loop here is open: the axis of B affects the orientation of the pushpin, but there is no feedback on B from the current orientation.
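A possible open-loop protocol built on the fitted relation above is sketched below: for a requested fixed point (θ^*, ϕ^*) within the perturbative range, the relation θ^* = ϵ A sin(k θ_B), ϕ^* = ϕ_B - π is inverted to give the anisotropy axis (θ_B, ϕ_B). The rose-like target curve and the value of ϵ are illustrative.

import numpy as np

A_fit = 0.0073 * np.pi   # fitted amplitude, A/pi ~ 0.0073
k_fit = 2.0              # fitted wavenumber
eps = 0.1                # anisotropy strength (illustrative)

def anisotropy_axis_for_target(theta_star, phi_star):
    # Invert theta* = eps*A*sin(k*theta_B), phi* = phi_B - pi to obtain the
    # anisotropy axis that places the stable fixed point at (theta*, phi*).
    if theta_star > eps * A_fit:
        raise ValueError("target tilt exceeds the perturbative range eps*A")
    theta_B = np.arcsin(theta_star / (eps * A_fit)) / k_fit
    phi_B = phi_star + np.pi
    return theta_B, phi_B

# Illustrative target: the pushpin tip traces a rose-like curve
t = np.linspace(0.0, 2.0 * np.pi, 200)
theta_target = eps * A_fit * np.abs(np.cos(3.0 * t))
phi_target = t
protocol = [anisotropy_axis_for_target(th, ph)
            for th, ph in zip(theta_target, phi_target)]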
We now introduce a simplified description of the orientation dynamics.
In the isotropic case, the orientation vector field N in Eq. <ref>-<ref> is well-approximated by N_iso = (0, -sinθ, 0) in spherical coordinates (Fig. <ref>c).
From this, we can construct a toy model of N in the case ϵ≠ 0.
To obtain the flow to a fixed point (θ^*, ϕ^*) which is off of the north pole, we can simply rotate the isotropic vector field to find
N_an = [ 0; cosθ sinθ^* cos(ϕ^* - ϕ) - cosθ^* sinθ; sinθ^* sin(ϕ^* - ϕ) ] in spherical coordinates, as shown in Fig. <ref>d.
We achieve open loop control in the same way as in the full system: the fixed point (θ^*, ϕ^*) (the control variable) is simply set to the desired target (θ_set, ϕ_set) (Fig. <ref>a).
In the presence of slowly varying noise, the orientation (θ(t), ϕ(t)) evolves through Eq. <ref> to be near the target, but does not follow it exactly (Fig. <ref>b).
To improve control over the pushpin orientation, we close the feedback loop with a proportional-integral-derivative (PID) controller <cit.> (Fig. <ref>c-e) by setting
[ θ^*; ϕ^* ](t) = K_p e(t) + K_i ∫_0^t e(τ) dτ + K_d de(t)/dt,
where the error e(t) = (θ_set(t) - θ^*(t), ϕ_set(t) - ϕ^*(t)), and K_p, K_i, and K_d are parameters of the PID controller.
Practically, we differentiate Eq. <ref> with respect to time to obtain a set of ordinary differential equations, which we numerically solve in conjunction with Eq. <ref> using the forward Euler method.
We find that the closed loop successfully controls the orientation (Fig. <ref>d, compare with Fig. <ref>b) by changing the fixed point (Fig. <ref>e) in response to the noise (Fig. <ref>f).
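A minimal closed-loop sketch is given below, using the toy vector field N_an introduced above (in its Cartesian form dn/dt = n_fp - (n·n_fp)n), forward-Euler time stepping, and a textbook PID update. Here the error is read as the difference between the setpoint and the current orientation (θ, ϕ), and the gains, noise amplitude and setpoints are illustrative rather than the values used in the figures.

import numpy as np

def sph_to_cart(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def toy_field(n, n_fp):
    # Toy orientation dynamics: isotropic relaxation rotated so that the
    # stable fixed point sits at the unit vector n_fp.
    return n_fp - np.dot(n, n_fp) * n

Kp, Ki, Kd = 2.0, 0.5, 0.1              # illustrative PID gains
dt, n_steps = 1e-3, 50_000
n = sph_to_cart(0.05, 0.0)               # initial orientation
theta_set, phi_set = 0.02, 0.3           # setpoint
integ = np.zeros(2)
prev_err = np.zeros(2)
rng = np.random.default_rng(1)

for step in range(n_steps):
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))
    phi = np.arctan2(n[1], n[0])
    err = np.array([theta_set - theta, phi_set - phi])     # setpoint minus current orientation
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    theta_fp, phi_fp = Kp * err + Ki * integ + Kd * deriv  # control output: the fixed point
    n_fp = sph_to_cart(theta_fp, phi_fp)
    noise = 0.01 * rng.standard_normal(3)                  # crude stand-in for slow noise
    n = n + (toy_field(n, n_fp) + noise) * dt
    n /= np.linalg.norm(n)                                 # keep n a unit vector

print("final (theta, phi):", theta, phi)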
Discussion.—Our work suggests a novel method of indirect control of objects through the modulation of the properties of the medium in which they are immersed. By changing the viscosity of an anisotropic fluid, we manipulate the orientation of a small particle that sediments under the force of gravity. Such control could be experimentally realized in anisotropic fluids by varying the alignment axis of the fluid molecules.
We thank Tom Witten, Colin Scheibner, Bryan VanSaders, and Yael Avni for discussions.
T.K. acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1746045.
M.F. acknowledges support from the Simons Foundation, the National Science Foundation under grant DMR-2118415, and a MRSEC-funded Kadanoff–Rice fellowship (DMR-2011854).
V.V. acknowledges support from the Simons Foundation, the Complex Dynamics and Systems Program of the Army Research Office under grant W911NF-19-1-0268, the National Science Foundation under grant DMR-2118415 and the University of Chicago Materials Research Science and Engineering Center, which is funded by the National Science Foundation under award no. DMR-2011854.
§ ANISOTROPIC VISCOSITIES
A passive anisotropic fluid with cylindrical symmetry can contain three independent shear viscosity coefficients.
These can be obtained by explicitly writing down the transformation law for the viscosity tensor (see for instance Refs. <cit.> and references therein). At steady state, the Stokes equation yields
0 = -∇ P + μ_1 [ (∂_x^2 + ∂_y^2) v_x; (∂_x^2 + ∂_y^2) v_y; 0 ] + μ_2 [ ∂_z^2 v_x + ∂_x ∂_z v_z; ∂_z^2 v_y + ∂_y ∂_z v_z; (∂_x^2 + ∂_y^2) v_z - ∂_z^2 v_z ] + μ_3 [ -∂_x ∂_z v_z; -∂_y ∂_z v_z; 2 ∂_z^2 v_z ].
If μ_1 = μ_2 = μ_3 ≡μ, the viscous contribution reduces to the familiar μΔv. In the main text, we consider the case μ_1 = μ, μ_2 = μ(1 + ϵ), and μ_3 = μ(1 + 4/3ϵ), where ϵ is small. In this case, the above equation reduces to Eqs. <ref>-<ref>.
In a generic fluid, additional viscosity coefficients can be present, see Refs. <cit.> and references therein for more details.
The viscosities in Eq. <ref> can be expressed in terms of the Leslie viscosity coefficients α_i (Eq. 6.50 of <cit.>) as
μ_1 = α_4/2
μ_2 = α_4/2 + α_5 + α_6/4
μ_3 = α_4/2 + α_1 + α_5 + α_6/3
in which we have considered a uniform nematic director n = ẑ.
§ GREEN'S FUNCTION (STOKESLET)
We compute the Green's function (Stokeslet) corresponding to Eqs. <ref>-<ref> in the perturbative regime, for small anisotropy in the z direction (B̂ = ẑ).
The general case (for arbitrarily large ϵ) can be computed numerically with fast Fourier transforms. The Stokeslet is the solution to the Stokes equation with an applied point force, F:
Fδ^3(r) = -∇ P + μΔv + ϵμ[ ∂_z^2 v_x - ∂_x ∂_z v_z/3; ∂_z^2 v_y - ∂_y ∂_z v_z/3; Δ v_z + 2∂_z^2 v_z/3 ]
where we take ϵ≪ 1. To solve for v, we write Eq. <ref> in Fourier space, solve for the pressure and velocity fields, and integrate using contour integration to find the real-space solutions, in the same way as in <cit.>.
The Stokeslet velocity field is expressed through the Green's function, 𝔾, as v(r) = 𝔾(r) F.
Expanding the Green's function to linear order in ϵ, 𝔾(r) = 𝔾_0(r) + ϵ𝔾_1(r), we recover the familiar solution for a normal fluid,
𝔾_0,ij(r) = 1/8πμ r^3(r^2δ_ij + r_i r_j)
and derive the first order correction due to anisotropy, 𝔾_1(r):
𝔾_1(r) = 1/8πμ r^3[ -(x^2 + y^2) 0 -xz; 0 -(x^2 + y^2) -yz; -xz -yz -(x^2 + y^2 + 2z^2) ].
The associated pressure field is P(r) = P_0(r) + ϵ P_1(r), with
P_0(r) = 1/4π r^3F·r
P_1(r) = -x^2 + y^2 - 2z^2/12π r^5(F·r - 2 F·z).
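For reference, a short Python sketch evaluating the perturbative Green's function 𝔾_0 + ϵ𝔾_1 at a field point (anisotropy axis along z) is given below; the function name is illustrative.

import numpy as np

def stokeslet_green(r, mu=1.0, eps=0.0):
    # Green's function G = G0 + eps*G1 of the weakly anisotropic Stokes
    # equation with the anisotropy axis along z, to linear order in eps.
    x, y, z = r
    rn = np.linalg.norm(r)
    G0 = (rn**2 * np.eye(3) + np.outer(r, r)) / (8 * np.pi * mu * rn**3)
    rho2 = x**2 + y**2
    G1 = -np.array([[rho2, 0.0, x * z],
                    [0.0, rho2, y * z],
                    [x * z, y * z, rho2 + 2 * z**2]]) / (8 * np.pi * mu * rn**3)
    return G0 + eps * G1

# Velocity at a field point due to a point force along -z
F = np.array([0.0, 0.0, -1.0])
r = np.array([1.0, 0.5, 2.0])
print(stokeslet_green(r, mu=1.0, eps=0.1) @ F)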
The Green's function in Eq. <ref> holds for an anisotropy axis B̂ = ẑ.
Under a rotation to an arbitrary anisotropy axis given by B̂ = (cosϕ_Bsinθ_B, sinϕ_Bsinθ_B, cosθ_B), the Green's function in Eq. <ref> transforms as
𝔾_1(r) → R 𝔾_1(R^-1r)R^-1,
where R is the rotation matrix
R = [ cos(θ_B)cos(ϕ_B), -sin(ϕ_B), cos(ϕ_B)sin(θ_B); cos(θ_B)sin(ϕ_B), cos(ϕ_B), sin(ϕ_B)sin(θ_B); -sin(θ_B), 0, cos(θ_B) ].
§ FLOW PAST A SPHERE
We solve the anisotropic Stokes equation (Eqs. <ref>-<ref>) for the flow past a sphere to linear order in ϵ by writing v(r) = v_0(r) + ϵv_1(r), P(r) = P_0(r) + ϵ P_1(r), as in <cit.>.
We take the velocity of the fluid at infinity to be U, and the boundary condition on the sphere surface to be no-slip, v(r = a) = 0, where a is the sphere radius.
Since the viscosity is anisotropic, we have two cases to consider: one in which U is parallel to the anisotropy axis B, and one in which U and B are perpendicular.
Let us take B̂ = ẑ. We first consider the parallel case, U = U ẑ. In this situation, the velocity field around the sphere is not modified at first order, v_1(r) = 0, but the pressure is:
P_1(r) = -μ a U/2r^7 z(4x^4 + 4y^4 + 5y^2 z^2 + z^4
+ 8x^2y^2 + 5x^2 z^2 + a^2(-3x^2 - 3y^2 + 2z^2)).
We repeat in the perpendicular case, for U = U x̂. Here, both the velocity and pressure fields are modified,
v_1,x(r) = 3aU/8r^5(y-z)(y+z)(r^2 - a^2)
v_1,y(r) = -3aU/8r^5xy(r^2 - a^2)
v_1,z(r) = 3aU/8r^5xz(r^2 - a^2)
P_1(r) = μ U a/4r^7 x (-5(x^2 + y^2)^2 - 4(x^2 + y^2)z^2
+ z^4 + 2a^2 (x^2 + y^2 -4z^2))
The velocity field can be more compactly written in terms of the Green's function and its Laplacian.
To linear order, the velocity is
v(r) = -6πμ U a (1 + ϵ/2)𝔾(r) ·x̂
-πμ U a^3 (1 + ϵ/2) Δ𝔾(r)·x̂.
The first order term can be written explicitly as
v_1(r) = -6πμ U a(𝔾_0(r)/2·x̂ + 𝔾_1(r) ·x̂)
-πμ U a^3Δ(𝔾_0(r)/2·x̂ + 𝔾_1(r) ·x̂).
§ FORCES ON A SPHERE
To solve for the forces on the sphere due to the fluid flow, we compute the stress from the above velocity fields and integrate it over the surface of the sphere,
F_i = ∮σ_ij n_j dS,
where n̂ = r̂ is the unit vector normal to the sphere surface.
Here, in addition to the familiar pressure and shear viscosity contributions, the stress contains a third term due to the anisotropic viscosity,
σ_ij = -Pδ_ij + μ(∂_i v_j + ∂_j v_i) + ϵσ_an,
where
σ_an = μ[ 4/9 (∂_x v_x + ∂_y v_y - 2∂_z v_z) 0 ∂_z v_x + ∂_x v_z; 0 4/9(∂_x v_x + ∂_y v_y - 2∂_z v_z) ∂_z v_y + ∂_y v_z; ∂_z v_x + ∂_x v_z ∂_z v_y + ∂_y v_z -8/9 (∂_x v_x + ∂_y v_y - 2∂_z v_z) ]
for an anisotropy axis B̂ = ẑ.
Computing the forces yields the following subset of the propulsion matrix:
[ F_x; F_y; F_z ] = 6πμ a [ 1+ϵ/2, 0, 0; 0, 1+ϵ/2, 0; 0, 0, 1+ϵ ] [ V_x; V_y; V_z ].
Due to the anisotropy of the viscosity, the Stokes drag law is modified. The drag coefficients in the x and y directions are increased by a factor of (1 + ϵ/2) and in the z direction (along the anisotropy axis) by (1 + ϵ). The fluid does not exert torques on the sphere.
The A block of the mobility matrix 𝕄 (see Eq. <ref>) is simply the inverse of the matrix above:
[ V_x; V_y; V_z ] = 1/(6πμ a) [ 1-ϵ/2, 0, 0; 0, 1-ϵ/2, 0; 0, 0, 1-ϵ ]_A [ F_x; F_y; F_z ].
Eq. <ref> holds for an anisotropy axis B̂ = ẑ.
To obtain Eq. <ref> for an arbitrary anisotropy axis B̂ = (cosϕ_Bsinθ_B, sinϕ_Bsinθ_B, cosθ_B), we transform A as follows:
A → R A R^-1,
where R is the rotation matrix in Eq. <ref>.
Note that we can transform 𝕄 in this way only for the sphere due to its rotational invariance.
For the pushpin, which has its own anisotropy axis n, 𝕄 only transforms as in Eq. <ref> if B and n rotate together.
In the general case, the mobility matrix must be recomputed for different anisotropy axes, as described below.
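Numerically, the rotation can be carried out as in the sketch below, which assembles the A block quoted above and conjugates it with the rotation matrix R; the parameter values in the example are illustrative.

import numpy as np

def rotation_matrix(theta_B, phi_B):
    # Rotation R taking the z axis onto the anisotropy axis B (matrix above).
    ct, st = np.cos(theta_B), np.sin(theta_B)
    cp, sp = np.cos(phi_B), np.sin(phi_B)
    return np.array([[ct * cp, -sp, cp * st],
                     [ct * sp,  cp, sp * st],
                     [-st,     0.0, ct]])

def sphere_mobility_A(a, mu, eps, theta_B=0.0, phi_B=0.0):
    # Translational mobility block A of a sphere (anisotropy axis along z),
    # rotated to an arbitrary anisotropy axis via A -> R A R^{-1}.
    A_z = np.diag([1 - eps / 2, 1 - eps / 2, 1 - eps]) / (6 * np.pi * mu * a)
    R = rotation_matrix(theta_B, phi_B)
    return R @ A_z @ R.T          # R^{-1} = R^T for a rotation

# Sedimentation velocity for B tilted 45 degrees in the x-z plane
A = sphere_mobility_A(a=1.0, mu=1.0, eps=0.1, theta_B=np.pi / 4, phi_B=0.0)
print(A @ np.array([0.0, 0.0, -1.0]))   # note the small lateral drift from the anisotropy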
§ STOKESLET APPROXIMATION OF THE MOBILITY MATRIX
To isolate the coefficients that relate different degrees of freedom, the mobility matrix can be conveniently arranged in four blocks
𝕄 = [ A, B; T, S ].
For shapes that are less symmetric than the sphere, it is difficult to obtain an analytical form of the mobility matrix.
To derive the mobility matrix of the pushpin-shaped object, we construct the pushpin out of small spheres of radius a (denoted by markers in Fig. <ref>) with the method reviewed in <cit.>.
With this method, we apply a force to the pushpin at some reference point, which is then distributed amongst the small spheres.
Reference <cit.> provides an algorithm to determine how to distribute these forces that depends on two main ingredients.
The first ingredient is the velocity field generated by a small sphere moving in a fluid due to an applied force.
The distance between the spheres that compose the pushpin is taken to be much larger than the radius of the spheres, which allows us to treat the spheres as Stokeslets, and approximate the velocity field by the Green's function in Eq. <ref>.
The second ingredient is the force on a sphere moving with some velocity in a fluid (i.e. the A block of the mobility matrix), which we derived in Eq. <ref>.
Combining these two ingredients, we impose the constraint of a rigid body (we insist that the small spheres cannot move relative to one another) which yields the mobility matrix of the pushpin.
Moreover, with the help of Eqs. <ref> and <ref>, we can compute the mobility matrix of the pushpin for any orientation of the anisotropy axis B̂.
For the computations in this work, the pushpin is composed of fourteen spheres of radius a = 0.01. The four which lie along the axis are spaced with unit distance, and the ten that are along the base lie on the vertices of a regular decagon. The line segments that connect the markers in Fig. <ref> are not real and are meant to guide the eye.
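A schematic version of this rigid-body construction is sketched below, reusing the stokeslet_green and sphere_mobility_A sketches above (anisotropy axis along z). The blob-level grand mobility is inverted under the rigid-body kinematic constraint to yield the 6×6 mobility of the assembly; this is a simplified stand-in for the full procedure of <cit.>, not a faithful reimplementation, and the blob positions in the example are illustrative.

import numpy as np

def skew(d):
    # Matrix such that skew(d) @ v = d x v
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])

def grand_mobility(points, a, mu, eps):
    # 3N x 3N blob mobility: single-sphere block A on the diagonal and the
    # perturbative Stokeslet G between distinct blobs.
    N = len(points)
    M = np.zeros((3 * N, 3 * N))
    for i in range(N):
        for j in range(N):
            blk = (sphere_mobility_A(a, mu, eps) if i == j
                   else stokeslet_green(points[i] - points[j], mu, eps))
            M[3 * i:3 * i + 3, 3 * j:3 * j + 3] = blk
    return M

def rigid_body_mobility(points, ref, a, mu, eps):
    # 6x6 mobility about the reference point `ref`: blob velocities
    # u_i = V + Omega x (r_i - ref) define the kinematic matrix K, the total
    # force/torque is (F, tau) = K^T lambda, so the resistance is K^T M^-1 K
    # and the rigid-body mobility is its inverse.
    N = len(points)
    K = np.zeros((3 * N, 6))
    for i, p in enumerate(points):
        K[3 * i:3 * i + 3, 0:3] = np.eye(3)
        K[3 * i:3 * i + 3, 3:6] = -skew(p - ref)
    Minv = np.linalg.inv(grand_mobility(points, a, mu, eps))
    resistance = K.T @ Minv @ K
    return np.linalg.inv(resistance)

# Illustrative example: three well-separated blobs forming a right triangle
pts = [np.array([0.0, 0.0, 0.0]),
       np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0])]
M6 = rigid_body_mobility(pts, ref=np.mean(pts, axis=0), a=0.01, mu=1.0, eps=0.1)
print(M6 @ np.array([0.0, 0.0, -1.0, 0.0, 0.0, 0.0]))   # sedimentation under F = -z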
Below, we provide the numerical values of T, the bottom left block of the mobility matrix shown pictorially in Eq. <ref>. For these matrices, the pushpin orientation is ϕ = 0, θ = 0.3π. For the anisotropic case, we take ϵ = 0.1, B̂ = 1/√(2)(0, 1, 1).
T_iso ≃[ 0 0.00367034 0; -0.00367034 0 0.00505179; -0 -0.00505179 -0 ]
T_an ≃[ 0.00007423 0.00345387 -0.00010948; -0.00350251 -0.00020758 0.00481074; -0.00008957 -0.00473651 0.00013335 ]
[1] S. Ramaswamy, Advances in Physics 50, 297 (2001).
[2] E. Guazzelli, J. F. Morris, and S. Pic, A Physical Introduction to Suspension Dynamics (Cambridge University Press, 2009).
[3] E. Lauga and T. R. Powers, Reports on Progress in Physics 72, 096601 (2009).
[4] M. Bär, R. Großmann, S. Heidenreich, and F. Peruani, Annual Review of Condensed Matter Physics 11, 441 (2020).
[5] G. Gompper et al., Journal of Physics: Condensed Matter 32, 193001 (2020).
[6] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Reviews of Modern Physics 88, 045006 (2016).
[7] M. A. Boles, M. Engel, and D. V. Talapin, Chemical Reviews 116, 11220 (2016).
[8] B. J. Nelson, I. K. Kaliakatsos, and J. J. Abbott, Annual Review of Biomedical Engineering 12, 55 (2010).
[9] B. Walker, K. Ishimoto, E. Gaffney, and C. Moreau, Journal of Fluid Mechanics 942 (2022).
[10] J. Lim, C. Lanni, E. R. Evarts, F. Lanni, R. D. Tilton, and S. A. Majetich, ACS Nano 5, 217 (2011).
[11] R. Venu, B. Lim, X. Hu, I. Jeong, T. Ramulu, and C. Kim, Microfluidics and Nanofluidics 14, 277 (2013).
[12] F. Alnaimat, S. Dagher, B. Mathew, A. Hilal-Alnqbi, and S. Khashan, The Chemical Record 18, 1596 (2018).
[13] T. Hunt and R. Westervelt, Biomedical Microdevices 8, 227 (2006).
[14] R. Pethig, Biomicrofluidics 4, 022811 (2010).
[15] D. Fan, F. Zhu, R. Cammarata, and C. Chien, Nano Today 6, 339 (2011).
[16] K. Svoboda and S. M. Block, Annual Review of Biophysics and Biomolecular Structure 23, 247 (1994).
[17] Y. Roichman, V. Wong, and D. G. Grier, Physical Review E 75, 011407 (2007).
[18] J. R. Moffitt, Y. R. Chemla, S. B. Smith, and C. Bustamante, Annual Review of Biochemistry 77, 205 (2008).
[19] M.-C. Zhong, X.-B. Wei, J.-H. Zhou, Z.-Q. Wang, and Y.-M. Li, Nature Communications 4, 1768 (2013).
[20] C. R. Courtney, C. E. Demore, H. Wu, A. Grinenko, P. D. Wilcox, S. Cochran, and B. W. Drinkwater, Applied Physics Letters 104, 154103 (2014).
[21] D. J. Collins, B. Morahan, J. Garcia-Bustos, C. Doerig, M. Plebanski, and A. Neild, Nature Communications 6, 8686 (2015).
[22] A. Ozcelik, J. Rufo, F. Guo, Y. Gu, P. Li, J. Lata, and T. J. Huang, Nature Methods 15, 1021 (2018).
[23] D. Hardman, T. George Thuruthel, and F. Iida, Scientific Reports 12, 1 (2022).
[24] J. J. M. Beenakker and F. R. McCourt, Annual Review of Physical Chemistry 21, 47 (1970).
[25] G. Varnavides, A. S. Jermyn, P. Anikeeva, C. Felser, and P. Narang, Nature Communications 11 (2020).
[26] G. M. Gusev, A. S. Jaroshevich, A. D. Levin, Z. D. Kvon, and A. K. Bakarov, Scientific Reports 10 (2020).
[27] C. Q. Cook and A. Lucas, Physical Review Letters 127, 176603 (2021).
[28] P. Sehgal, M. Ramaswamy, I. Cohen, and B. J. Kirby, Physical Review Letters 123, 128001 (2019).
[29] P. Sehgal, M. Ramaswamy, E. Y. Ong, C. Ness, I. Cohen, and B. J. Kirby, arXiv:2206.01141 (2022).
[30] T. Gibaud, N. Dagès, P. Lidon, G. Jung, L. C. Ahouré, M. Sztucki, A. Poulesquen, N. Hengl, F. Pignon, and S. Manneville, Physical Review X 10, 011028 (2020).
[31] S. Kim and S. J. Karrila, Microhydrodynamics (Butterworth-Heinemann, 1991).
[32] T. A. Witten and H. Diamant, Reports on Progress in Physics 83, 116601 (2020).
[33] A. J. Mowitz and T. Witten, Physical Review E 96, 062613 (2017).
[34] P. Virtanen et al. (SciPy 1.0 Contributors), Nature Methods 17, 261 (2020).
[35] J. Bechhoefer, Control Theory for Physicists (Cambridge University Press, 2021).
[36] T. Khain, C. Scheibner, M. Fruchart, and V. Vitelli, Journal of Fluid Mechanics 934 (2022).
[37] M. Kleman and O. D. Lavrentovich, Soft Matter Physics: An Introduction (Springer, 2003).
|
http://arxiv.org/abs/2307.05721v1 | 20230709084446 | HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding | [
"Hao Zheng",
"Regina Lee",
"Yuqian Lu"
] | cs.CV | [
"cs.CV"
] |
HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
Hao Zheng, Regina Lee, Yuqian Lu
July 2023
==========================================================================================
Understanding comprehensive assembly knowledge from videos is critical for futuristic ultra-intelligent industry. To enable technological breakthroughs, we present HA-ViD – the first human assembly video dataset that features representative industrial assembly scenarios, a natural procedural knowledge acquisition process, and consistent human-robot shared annotations. Specifically, HA-ViD captures diverse collaboration patterns of real-world assembly, natural human behaviors and learning progression during assembly, and granulates action annotations into subject, action verb, manipulated object, target object, and tool. We provide 3222 multi-view, multi-modality videos (each video contains one assembly task), 1.5M frames, 96K temporal labels and 2M spatial labels. We benchmark four foundational video understanding tasks: action recognition, action segmentation, object detection and multi-object tracking. Importantly, we analyze their performance for comprehending knowledge in assembly progress, process efficiency, task collaboration, skill parameters and human intention. Details of HA-ViD are available at: <https://iai-hrc.github.io/ha-vid>
§ INTRODUCTION
Assembly knowledge understanding from videos is crucial for futuristic ultra-intelligent industrial applications, such as robot skill learning <cit.>, human-robot collaborative assembly <cit.> and quality assurance <cit.>. To enable assembly video understanding, a video dataset is required. Such a video dataset should (1) represent real-world assembly scenarios and (2) capture the comprehensive assembly knowledge via (3) a consistent annotation protocol that aligns with human and robot assembly comprehension. However, existing datasets cannot meet these requirements.
First, the assembled products in existing datasets are either too scene-specific <cit.> or lack typical assembly parts and tools <cit.>. Second, existing datasets did not design assembly tasks to foster the emergence of natural behaviors (e.g., varying efficiency, alternative routes, pauses and errors) during procedural knowledge acquisition. Third, thorough understanding of nuanced assembly knowledge is not possible via existing datasets as they fail to annotate subjects, objects, tools and their interactions in a systematic approach.
Therefore, we introduce HA-ViD: a human assembly video dataset recording people assembling the Generic Assembly Box (GAB, see Figure <ref>). We benchmark on four foundational tasks: action recognition, action segmentation, object detection and multi-object tracking (MOT), and analyze their performance for comprehending application-oriented knowledge. HA-ViD features three novel aspects:
* Representative industrial assembly scenarios: GAB includes 35 standard and non-standard parts frequently used in real-world industrial assembly scenarios and requires 4 standard tools to assemble it. The assembly tasks are arranged onto 3 plates featuring different task precedence and collaboration requirements to promote the emergence of two-handed collaboration and parallel tasks. Different from existing assembly video datasets, GAB represents generic industrial assembly scenarios (see Table <ref>).
* Natural procedural knowledge acquisition process: Progressive observation, thought and practice process (shown as varying efficiency, alternative assembly routes, pauses, and errors) in acquiring and applying complex procedural assembly knowledge is captured via the designed three-stage progressive assembly setup (see Figure <ref>). Such a design allows in-depth understanding of the human cognition process, where existing datasets lack (see Table <ref>).
* Consistent human-robot shared annotations: We designed a consistent fine-grained hierarchical task/action annotation protocol following a Human-Robot Shared Assembly Taxonomy (HR-SAT[HR-SAT, developed by the same authors, is a hierarchical assembly task representation schema that both humans and robots can comprehend. See details via: <https://iai-hrc.github.io/hr-sat>] , to be introduced in Section 2.3). Using this protocol, we, for the first-time, (1) granulate action annotations to subject, action verb, manipulated object, target object, and tool; (2) provide collaboration status annotations via separating two-handed annotations; and (3) annotate human pauses and errors. Such detailed annotation embeds more knowledge sources for diverse understanding of application-oriented knowledge (see Table <ref>).
§ DATASET
In this section, we present the process of building HA-ViD and provide essential statistics.
§.§ Generic Assembly Box
To ensure the dataset can represent real-world industrial assembly scenarios, we designed the GAB shown in Figure <ref>.
First, GAB[Find GAB CAD files at: <https://iai-hrc.github.io/ha-vid>.] is a 250×250×250mm box including 11 standard and 24 non-standard parts frequently used in real-world industrial assembly. Four standard tools are required for assembling GAB. The box design also allows participants to naturally perform tasks on a top or side-facing plate, closer to the flexible setups of real-world assembly.
Second, GAB consists of three plates featuring different task precedence and collaboration requirements. Figure <ref> shows the subject-agnostic task precedence graphs (SA-TPG) for the three plates with different precedence constraints. These different task precedence graphs provide contextual links between actions, enabling situational action understanding with different complexities. The cylinder plate also has more collaboration tasks, posing greater challenges for understanding collaborative assembly tasks. Gear and cylinder plates contain parts that become hidden after assembly, e.g., spacers under the gears. This introduces additional complexities for understanding assembly status.
§.§ Dataset Collection
Data was collected on three Azure Kinect RGB+D cameras mounted to an assembly workbench facing the participant from left, front and top views, as shown in Figure <ref>. Videos were recorded at 1280×720 RGB resolution and 512×512 depth resolution under both lab lighting and natural lighting conditions. 30 participants (15 males, 15 females) assembled each plate 11 to 12 times during a 2-hour session.
To capture the progression of human procedural knowledge <cit.> acquisition and behaviors (e.g., varying efficiency, alternative routes, pause, and errors) during learning, a three-stage progressive assembly setup is designed. Inspired by discovery learning <cit.>, we design the three stages as[The instruction files can be found at <https://iai-hrc.github.io/ha-vid>. The detailed instructions were written following HR-SAT to align assembly instructions with our annotations.]: Discovery – participants are given minimal exploded view instructions of each plate; Instruction – participants are given detailed step-by-step instructions of each plate; Practice – participants are asked to complete the task without instruction.
The first stage encourages participants to explore assembly knowledge to reach a goal, the second stage provides targeted instruction to deepen participants’ understanding, and the last stage encourages participants to reinforce their learning via practicing. During Instruction and Practice stages, the participants were asked to perform the assembly with the plate facing upwards and sideways.
§.§ Dataset Annotations
We provide temporal and spatial annotations to capture rich assembly knowledge shown in Figure <ref>.
To enable human-robot assembly knowledge transfer, the structured temporal annotations are made following HR-SAT. According to HR-SAT (shown in Figure <ref>), an assembly task can be decomposed into primitive tasks and further into atomic actions. Each primitive task and atomic action contain five description elements: subject, action verb, manipulated object, target object and tool. Primitive tasks annotations describe a functional change of the manipulated object, such as inserting a gear on a shaft or screwing a nut onto a bolt. Atomic actions describe an interaction change between the subject and manipulated object such as a hand grasping the screw or moving the screw. HR-SAT ensures the annotation transferability, adaptability, and consistency.
The ST-TPGs files can be downloaded at: <https://iai-hrc.github.io/hr-sat>
We annotate human pause and error as null and wrong respectively to enable research on understanding assembly efficiency and learning progression. Our annotations treat each hand as a separate subject. Primitive tasks and atomic actions are labeled for each hand to support multi-subject collaboration related research. Alongside the primitive task annotations, we annotate the two-handed collaboration status as: collaboration, when both hand work together on the same task; parallel, when each hand is working on a different task; single-handed, when only one hand is performing the task while the other hand pauses; and pause, when neither hand is performing any task. More details about the temporal annotations can be found in Supplementary Section 2.3.
For spatial annotations, we use CVAT[<https://www.cvat.ai/>], a video annotation tool, to label bounding boxes for subjects, objects and tools frame-by-frame. Different from general assembly datasets, we treat important assemblable features, such as holes, stud and USB female, as objects, to enable finer-grained assembly knowledge understanding.
§.§ Statistics
In total, we collected 3222 videos with side, front and top camera views. Each video contains one task – the process of assembling one plate. Our dataset contains 86.9 hours of footage, totaling over 1.5 million frames with an average of 1 min 37 sec per video (1456 frames). To ensure annotation quality, we manually labeled temporal annotations for 609 plate assembly videos and spatial annotations for over 144K frames. The selected videos for labeling collectively capture the dataset diversity by including videos of different participants, lighting, instructions and camera views.
Overall, our dataset contains 18831 primitive tasks across 75 classes, 63864 atomic actions across 219 classes, and close to 2M instances of subjects, objects and tools across 42 classes. Figure <ref> presents the annotation statistics of the dataset. Our dataset shows potential for facilitating small object detection research as 46.6% of the annotations are of small objects. More statistics can be found in Supplementary Section 2.4.
Our temporal annotations can be used to understand the learning progression and efficiency of participants over the designed three-stage progressive assembly setup, shown in Figure <ref>. The combined annotation of wrong primitive task, pause collaboration status and total frames can indicate features such as errors, observation patterns and task completion time for each participant. Our dataset captures the natural progress of procedural knowledge acquisition, as indicated by the overall reduction in task completion time and pause time from stage 1 to 3, as well as the significant reduction in errors. The wrong and pause annotations enable research on understanding varying efficiency between participants.
By annotating the collaboration status and designing three assembly plates with different task precedence and collaboration requirements, HA-ViD captures the two-handed collaborative and parallel tasks commonly featured in real-world assembly, shown in Figure <ref>. Overall, 49.6% of the annotated frames consist of two-handed tasks. The high percentage of two-handed tasks enables research in understanding the collaboration patterns of complex assembly tasks.
§ BENCHMARK EXPERIMENTS
We benchmark SOTA methods for four foundational techniques for assembly knowledge understanding, i.e., action recognition, action segmentation, object detection, and MOT. Due to page limit, we highlight key results and findings in this section, and present implementation details, more results and discussions in the Supplementary Section 3.
§.§ Action Recognition, Action Segmentation, Object Detection and MOT
Action recognition is to classify a sequence of video frames into an action category. We split 123 out of 609 temporally labeled videos to be the testset, and the rest is trainset. We benchmark five action recognition methods from three categories: 2D models (TSM <cit.>, TimeSFormer <cit.>), 3D models (I3D <cit.>, MVITv2 <cit.>), and skeleton-based method (ST-GCN <cit.>) and report the Top-1 accuracy and Top-5 accuracy in Table <ref>.
Action segmentation is to temporally locate and recognize human action segments in untrimmed videos <cit.>. Under the same train/test split, we benchmark three action segmentation methods, MS-TCN <cit.>, DTGRM <cit.> and BCN <cit.>, and report the frame-wise accuracy (Acc), segmental edit distance (Edit) and segmental F1 score at overlapping thresholds of 10% in Table <ref>.
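For reference, a minimal implementation of the segmental edit score in its commonly used form (100 times one minus the Levenshtein distance between segment-label sequences, normalized by the longer sequence) is sketched below; it is illustrative rather than the exact evaluation code used for Table <ref>.

import numpy as np

def to_segments(frame_labels):
    # Collapse a frame-wise label sequence into its ordered segment labels.
    segs = []
    for lab in frame_labels:
        if not segs or segs[-1] != lab:
            segs.append(lab)
    return segs

def edit_score(pred_frames, gt_frames):
    # 100 * (1 - Levenshtein distance between segment-label sequences,
    # normalized by the longer of the two sequences).
    p, g = to_segments(pred_frames), to_segments(gt_frames)
    D = np.zeros((len(p) + 1, len(g) + 1))
    D[:, 0] = np.arange(len(p) + 1)
    D[0, :] = np.arange(len(g) + 1)
    for i in range(1, len(p) + 1):
        for j in range(1, len(g) + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1, D[i - 1, j - 1] + cost)
    return 100.0 * (1.0 - D[-1, -1] / max(len(p), len(g), 1))

print(edit_score(["null", "grasp", "move", "null"], ["null", "grasp", "insert", "null"]))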
Object detection is to detect all instances of objects from known classes <cit.>. We split 18.4K out of 144K spatially labeled frames to be testset, and the rest is trainset. We benchmark classical two-stage method FasterRCNN <cit.>, one-stage method Yolov5 <cit.>, and the SOTA end-to-end Transformer-based method DINO <cit.> with different backbone networks, and report parameter size (Params), average precision (AP), AP under different IoU thresholds (50% and 75%) and AP under different object scales (small, medium and large) in Table <ref>.
MOT aims at locating multiple objects, maintaining their identities, and yielding their individual trajectories given an input video <cit.>. We benchmark SORT <cit.> and ByteTrack <cit.> on the detection results of DINO and ground truth annotations (test split of object detection), respectively. We report average multi-object tracking accuracy (MOTA), ID F1 score (IDF1), false positive (FP), false negative (FN), and ID switch (IDS) over the videos in our testing dataset in Table <ref>.
The baseline results show that our dataset presents great challenges on the four foundational video understanding tasks compared with other datasets. For example, BCN has 70.4% accuracy on Breakfast <cit.>, MViTv2 has 86.1% Top-1 accuracy on Kinetics-400 <cit.>, DINO has 63.3% AP on COCO test-dev <cit.>, and ByteTrack has 77.8% MOTA on MOT20 <cit.>.
Compared to the above baseline results, we are more concerned with whether existing video understanding methods can effectively comprehend the application-oriented knowledge (in Figure <ref>). We present our subsequent analysis in Sections 3.2-3.5.
§.§ Assembly progress
Insight #1: Assembly action recognition could focus on compositional action recognition and leveraging prior domain knowledge.
Understanding assembly progress, as an essential application-oriented task, requires real-time action (action verb + interacted objects and tools) recognition and comparing the action history with a predefined assembly plan (represented as a task graph). After further analysis of the sub-optimal action recognition performance in Table <ref>, we found that recognizing interacting objects and tools is more challenging than recognizing action verbs (as shown in Table <ref>). Therefore, a promising research direction could be compositional recognition of action verbs and interacted objects and tools.
Leveraging prior domain knowledge, such as task precedence and the probabilistic correlation between action verbs and feasible objects and tools, one may improve the performance of action recognition. With defined task precedence graphs and a rich list of action verb/object/tool pairs, HA-ViD enables research on this aspect.
Insight #2: Assembly action segmentation should focus on addressing under-segmentation issues and improving segment-wise sequence accuracy. Assembly progress tracking requires obtaining the accurate number of action segments and their sequence. For obtaining the accurate number of action segments from a given video, previous action segmentation algorithms <cit.> focused on addressing over-segmentation issues, but lack metrics for quantifying under/over-segmentation. Therefore, we propose segmentation adequacy (SA) to fill this gap. Consider the predicted segments s_pred={s_1',s_2',…,s_F'} and the ground-truth segments s_gt={s_1,s_2,…,s_N} for a given video, where F and N are the numbers of segments; then SA = tanh(2(F-N)/(F+N)). Table <ref> reveals significant under-segmentation issues on our dataset. This reminds the community to pay attention to addressing under-segmentation issues for assembly action understanding. The proposed SA can offer evaluation support, and even assist in designing the loss function as it utilizes the hyperbolic tangent function.
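SA can be computed directly from frame-wise label sequences, as in the short illustrative sketch below.

import numpy as np

def count_segments(frame_labels):
    # Number of maximal runs of identical frame-wise labels.
    return sum(1 for i, lab in enumerate(frame_labels)
               if i == 0 or lab != frame_labels[i - 1])

def segmentation_adequacy(pred_frames, gt_frames):
    # SA = tanh(2(F - N)/(F + N)): negative for under-segmentation, positive
    # for over-segmentation, zero when the segment counts match.
    F = count_segments(pred_frames)
    N = count_segments(gt_frames)
    return np.tanh(2.0 * (F - N) / (F + N))

# A prediction that merges two ground-truth segments is under-segmented: SA < 0
print(segmentation_adequacy(["a", "a", "b"], ["a", "b", "a"]))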
As for segment-wise sequence accuracy, the low Edit values in Table <ref> suggest that significant research efforts are still required. Compared with Breakfast <cit.> (66.2% Edit score with the BCN algorithm), our dataset presents greater challenges.
§.§ Process Efficiency
Understanding process efficiency is essential for real-world industry. It requires video understanding methods to be capable of recognizing human pause and error. HA-ViD supports this research by providing null and wrong labels.
Insight #3: For null action understanding, efforts need to be made on addressing imbalanced class distribution. Table <ref> shows the recall and precision of action recognition and action segmentation for null actions. We suspect the high recall and low precision are caused by the imbalanced class distribution, as null is the largest head class (see Figure <ref>).
Insight #4: New research from wrong action annotations. A wrong action is an assembly action (at the primitive task level) performed at the wrong position or in the wrong order. Our wrong action annotations allow in-depth research on how such actions appear across participants and across the three stages. Jointly analyzing wrong actions and their adjacent actions could also trigger new research on predicting wrong actions from the action history.
§.§ Task Collaboration
Insight #5: New research on understanding parallel tasks from both hands. Table <ref> shows that both action recognition and segmentation perform worst on parallel tasks during assembly. One possible reason is that the foundational video understanding methods rely on global features of each image and do not explicitly detect and track the action of each hand. This calls for new methods that can independently track both hands and recognize their actions through local features. Recent research on human-object interaction detection in videos <cit.> could offer valuable insights.
§.§ Skill Parameters and Human Intention
Understanding skill parameters and human intentions from videos is essential for robot skill learning and human-robot collaboration (HRC) <cit.>.
Typically, skill parameters vary depending on the specific application. However, there are certain skill parameters that are commonly used, including trajectory, object pose, force and torque <cit.>. While videos cannot capture force and torque directly, our dataset offers spatial annotations that enable tracking the trajectory of each object. Additionally, the object pose can be inferred from our dataset via pose estimation methods. Therefore, HA-ViD can support research in this direction.
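As a simple illustration of the trajectory point above, an object trajectory can be read directly off the per-frame bounding boxes by taking the box centers; the (x, y, w, h) box convention in this sketch is an assumption made for the example, not the dataset's annotation schema.

```python
def box_center_trajectory(boxes):
    """Object trajectory as the sequence of bounding-box centers.

    `boxes` is a list of per-frame boxes in (x, y, w, h) format (assumed
    convention for this sketch); the result is a list of (cx, cy) points.
    """
    return [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in boxes]

# Three consecutive frames of one object
print(box_center_trajectory([(10, 20, 40, 30), (14, 22, 40, 30), (20, 25, 40, 30)]))
```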
Understanding human intention in HRC refers to a combination of trajectory prediction, action prediction and task goal understanding <cit.>. Our spatial annotations provide trajectory information, SA-TPGs present action sequence constraints, and GAB CAD files offer the final task goals. Therefore, HA-ViD can enhance the research in this aspect.
§ CONCLUSION
We present HA-ViD, a human assembly video dataset, to advance comprehensive assembly knowledge understanding toward real-world industrial applications. We designed a generic assembly box to represent industrial assembly scenarios and a three-stage progressive learning setup to capture the natural process of human procedural knowledge acquisition. The dataset annotation follows a human-robot shared assembly taxonomy. HA-ViD includes (1) multi-view, multi-modality data, fine-grained action annotations (subject, action verb, manipulated object, target object, and tool), (2) human pause and error annotations, and (3) collaboration status annotations to enable technological breakthroughs in both foundational video understanding techniques and industrial application-oriented knowledge comprehension.
As for limitations of HA-ViD, the imbalanced class distribution of primitive tasks and atomic actions could cause biased model performance and insufficient learning. In addition, the dataset may still not fully capture the true complexity and diversity of real-world assembly scenarios.
We benchmarked strong baseline methods for action recognition, action segmentation, object detection, and multi-object tracking, and analyzed their performance on comprehending application-oriented knowledge about assembly progress, process efficiency, task collaboration, skill parameters, and human intention. The results show that our dataset captures essential challenges for foundational video understanding tasks and that new methods need to be explored for application-oriented knowledge comprehension. We envision that HA-ViD will open opportunities for advancing video understanding techniques toward a futuristic ultra-intelligent industry.
§ ACKNOWLEDGEMENTS
This work was supported by The University of Auckland FRDF New Staff Research Fund (No. 3720540).
10
Duque2019
D. A. Duque, F. A. Prieto, and J. G. Hoyos, “Trajectory generation for
robotic assembly operations using learning by demonstration,” Robotics
and Computer Integrated Manufacturing, vol. 57, no. December 2018,
pp. 292–302, 2019.
Lamon2019
E. Lamon, A. De Franco, L. Peternel, and A. Ajoudani, “A Capability-Aware
Role Allocation Approach to Industrial Assembly Tasks,” IEEE Robotics
and Automation Letters, vol. 4, no. 4, pp. 3378–3385, 2019.
Frustaci2020
F. Frustaci, S. Perri, G. Cocorullo, and P. Corsonello, “An embedded machine
vision system for an in-line quality check of assembly processes,” Procedia Manufacturing, vol. 42, pp. 211–218, 2020.
Cicirelli2022
G. Cicirelli, R. Marani, L. Romeo, M. G. Domínguez, J. Heras, A. G.
Perri, and T. D'Orazio, “The HA4M dataset: Multi-Modal Monitoring of an
assembly task for Human Action recognition in Manufacturing,” Scientific Data, vol. 9, p. 745, dec 2022.
Ben-Shabat2021
Y. Ben-Shabat, X. Yu, F. Saleh, D. Campbell, C. Rodriguez-Opazo, H. Li, and
S. Gould, “The IKEA ASM Dataset: Understanding people assembling furniture
through actions, objects and pose,” Proceedings - 2021 IEEE Winter
Conference on Applications of Computer Vision, WACV 2021, pp. 846–858,
2021.
Sener2022
F. Sener, R. Wang, and A. Yao, “Assembly101: A Large-Scale Multi-View Video
Dataset for Understanding Procedural Activities,” Cvpr, 2022.
Toyer2017
S. Toyer, A. Cherian, T. Han, and S. Gould, “Human Pose Forecasting via Deep
Markov Models,” DICTA 2017 - 2017 International Conference on Digital
Image Computing: Techniques and Applications, vol. 2017-Decem, pp. 1–8,
2017.
Zhang2020
J. Zhang, P. Byvshev, and Y. Xiao, “A video dataset of a wooden box assembly
process: Dataset,” DATA 2020 - Proceedings of the 3rd Workshop on Data
Acquisition To Analysis, Part of SenSys 2020, BuildSys 2020, pp. 35–39,
2020.
Ragusa2021
F. Ragusa, A. Furnari, S. Livatino, and G. M. Farinella, “The MECCANO
Dataset: Understanding Human-Object Interactions from Egocentric Videos in an
Industrial-like Domain,” in 2021 IEEE Winter Conference on
Applications of Computer Vision (WACV), pp. 1568–1577, IEEE, jan 2021.
Georgeff1986
M. Georgeff and A. Lansky, “Procedural knowledge,” Proceedings of the
IEEE, vol. 74, no. 10, pp. 1383–1398, 1986.
Mayer2004
R. E. Mayer, “Should There Be a Three-Strikes Rule Against Pure Discovery
Learning?,” American Psychologist, vol. 59, no. 1, pp. 14–19, 2004.
Lin2019
J. Lin, C. Gan, and S. Han, “TSM: Temporal Shift Module for Efficient Video
Understanding,” in 2019 IEEE/CVF International Conference on Computer
Vision (ICCV), pp. 7082–7092, IEEE, oct 2019.
Bertasius2021
G. Bertasius, H. Wang, and L. Torresani, “Is Space-Time Attention All You
Need for Video Understanding?,” in Proceedings of the 38th
International Conference on Machine Learning, pp. 813–824, feb 2021.
Carreira2017
J. Carreira and A. Zisserman, “Quo Vadis, Action Recognition? A New Model and
the Kinetics Dataset,” in 2017 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), pp. 4724–4733, IEEE, jul 2017.
Li2022
Y. Li, C.-Y. Wu, H. Fan, K. Mangalam, B. Xiong, J. Malik, and C. Feichtenhofer,
“MViTv2: Improved Multiscale Vision Transformers for Classification and
Detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 4794–4804, IEEE, jun 2022.
Yan2018
S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks
for skeleton-based action recognition,” in 32nd AAAI Conference on
Artificial Intelligence, AAAI 2018, pp. 7444–7452, jan 2018.
Wang2021
D. Wang, D. Hu, X. Li, and D. Dou, “Temporal Relational Modeling with
Self-Supervision for Action Segmentation,” Proceedings of the AAAI
Conference on Artificial Intelligence, vol. 35, pp. 2729–2737, dec 2021.
Farha2019
Y. A. Farha and J. Gall, “MS-TCN: Multi-Stage Temporal Convolutional Network
for Action Segmentation,” in 2019 IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), vol. 2019-June, pp. 3570–3579, IEEE,
jun 2019.
Wang2020
Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, “Boundary-Aware Cascade Networks
for Temporal Action Segmentation,” in ECCV, vol. Part XXV 1,
pp. 34–51, 2020.
Amit2014
Y. Amit and P. Felzenszwalb, “Object Detection,” in Computer Vision,
pp. 537–542, Boston, MA: Springer US, 2014.
Ren2017
S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time
Object Detection with Region Proposal Networks,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137–1149, jun
2017.
Jain
G. Jocher et al., “YOLOv5.”
Zhang2022a
H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum,
“DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object
Detection,” mar 2022.
Luo2021
W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T. K. Kim, “Multiple object
tracking: A literature review,” Artificial Intelligence, vol. 293,
p. 103448, apr 2021.
Bewley2016
A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and
realtime tracking,” in 2016 IEEE International Conference on Image
Processing (ICIP), pp. 3464–3468, IEEE, sep 2016.
Zhang2022
Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and
X. Wang, “ByteTrack: Multi-Object Tracking by Associating Every Detection
Box,” in Proceedings of the European Conference on Computer Vision
(ECCV), vol. 2, oct 2022.
Kuehne2014
H. Kuehne, A. Arslan, and T. Serre, “The Language of Actions: Recovering the
Syntax and Semantics of Goal-Directed Human Activities,” in 2014 IEEE
Conference on Computer Vision and Pattern Recognition, pp. 780–787, IEEE,
jun 2014.
Kay2017
W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan,
F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The
Kinetics Human Action Video Dataset,” may 2017.
Lin2014
T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona,
D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common
Objects in Context,” may 2014.
Dendorfer2020
P. Dendorfer, H. Rezatofighi, A. Milan, J. Shi, D. Cremers, I. Reid, S. Roth,
K. Schindler, and L. Leal-Taixé, “MOT20: A benchmark for multi object
tracking in crowded scenes,” mar 2020.
Tu2022
D. Tu, W. Sun, X. Min, G. Zhai, and W. Shen, “Video-based Human-Object
Interaction Detection from Tubelet Tokens,” in Advances in Neural
Information Processing Systems 35, pp. 23345—-23357, 2022.
Chiou2021
M.-J. Chiou, C.-Y. Liao, L.-W. Wang, R. Zimmermann, and J. Feng, “ST-HOI: A
Spatial-Temporal Baseline for Human-Object Interaction Detection in
Videos,” in Proceedings of the 2021 Workshop on Intelligent Cross-Data
Analysis and Retrieval, (New York, NY, USA), pp. 9–17, ACM, aug 2021.
Mees2020
O. Mees, M. Merklinger, G. Kalweit, and W. Burgard, “Adversarial Skill
Networks: Unsupervised Robot Skill Learning from Video,” in 2020 IEEE
International Conference on Robotics and Automation (ICRA), pp. 4188–4194,
IEEE, may 2020.
Zheng2022
P. Zheng, S. Li, L. Xia, L. Wang, and A. Nassehi, “A visual reasoning-based
approach for mutual-cognitive human-robot collaboration,” CIRP
Annals, vol. 71, no. 1, pp. 377–380, 2022.
Jeon2022
J. Jeon, H.-r. Jung, F. Yumbla, T. A. Luong, and H. Moon, “Primitive Action
Based Combined Task and Motion Planning for the Service Robot,” Frontiers in Robotics and AI, vol. 9, feb 2022.
Berger2016
E. Berger, S. Grehl, D. Vogt, B. Jung, and H. B. Amor, “Experience-based
torque estimation for an industrial robot,” in 2016 IEEE International
Conference on Robotics and Automation (ICRA), pp. 144–149, IEEE, may 2016.
Lu2022
Y. Lu, H. Zheng, S. Chand, W. Xia, Z. Liu, X. Xu, L. Wang, Z. Qin, and J. Bao,
“Outlook on human-centric manufacturing towards Industry 5.0,” Journal of Manufacturing Systems, vol. 62, pp. 612–627, jan 2022.
Supplementary Document for HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
§ OVERVIEW
This supplementary document contains additional information about HA-ViD.
Section <ref> further describes the process of building HA-ViD, including the design of the Generic Assembly Box, data collection, data annotation, and annotation statistics.
Section <ref> presents the implementation details of our baselines, discusses the experimental results, and provides the licenses of the benchmarked algorithms.
Section <ref> discusses the bias and societal impact of HA-ViD.
Section <ref> presents the research ethics for HA-ViD.
§ HA-VID CONSTRUCTION
In this section, we further discuss the process of building HA-ViD. First, we introduce the design of the Generic Assembly Box. Second, we describe the three-stage data collection process. Third, we describe data annotation details. Finally, we present critical annotation statistics.
§.§ Generic Assembly Box Design
To ensure the dataset is representative of real-world industrial assembly scenarios, we designed the Generic Assembly Box (GAB), a 250×250×250mm box (see Figure <ref>), which consists of 11 standard parts and 25 non-standard parts and requires 4 standard tools during assembly (see Figure 2).
GAB has three assembly plates, namely the General Plate, the Gear Plate, and the Cylinder Plate, plus three blank plates. The opposite face of each assembly plate is intentionally left blank to allow a different assembly orientation. The three assembly plates serve different design purposes.
General Plate (see Figure <ref>) was designed to capture action diversity. The General Plate consists of 11 different parts, chosen to cover the different directions, shapes, and forces with which common assembly actions can be performed. Since there are almost no precedence constraints between assembling different parts, the General Plate allows the greatest variety of possible assembly sequences.
Gear Plate (see Figure <ref>) was designed to capture parallel two-handed tasks, e.g., two hands inserting two spur gears at the same time. Gear Plate has three gear sub-systems: large gear, small gear, and worm gear, which mesh together to form a gear mechanism. The plate consists of 12 different parts. Gear Plate has a higher precedence constraint on assembly sequence than the general plate.
Cylinder Plate (see Figure <ref>) was designed to capture two-handed collaborative tasks, e.g., two hands collaborating on screwing the cylinder cap onto the cylinder base. Cylinder Plate requires assembling a cylinder subassembly and fastening it onto the plate. This plate consists of 11 parts. The parts were designed to represent assembling a subassembly where parts become fully occluded or partially constrained to another part (see the cylinder in Figure <ref>).
Table <ref> shows a summary of the three assembly plates. The box can be easily replicated using standard components, laser cutting, and 3D printing. The CAD files and bill of material can be downloaded from our website[<https://iai-hrc.github.io/ha-vid>].
§.§ Data Collection
Data was collected on three Azure Kinect RGB+D cameras mounted to an assembly workbench. 30 participants (15 male, 15 female) were recruited for a 2-hour session to assemble the GAB. During the data collection session, participants were given a fully disassembled assembly box, assembly parts, tools, and instructions. To capture the natural progress of human procedural knowledge acquisition and behaviors (varying efficiency, alternative routes, pauses, and errors), we designed a three-stage progressive assembly setup:
Discovery: Participants were asked to assemble a plate twice following the minimal visual instructions (see Figure <ref>).
Instruction: Participants were asked to assemble a plate six times following the detailed step-by-step instructions (see Figure <ref>). Six different instruction versions were created, each presenting a different assembly sequence. Each participant was given three different instruction versions, where two attempts were completed following each instruction version. The three instruction versions given to one participant must contain assembling the plate facing both upwards and sideways.
Practice: After the first two stages, participants were asked to assemble a plate four times without any instructions. During this stage, participants performed two attempts of each plate facing upwards and two attempts of each plate facing sideways.
The instruction files are available on our website[https://iai-hrc.github.io/ha-vid].
§.§ Data Annotation
To capture rich assembly knowledge, we provide temporal and spatial annotations.
Temporal Annotations: In HR-SAT[Details for the definitions of primitive task and atomic action can be found at: https://iai-hrc.github.io/hr-sat], an assembly task can be decomposed into a series of primitive tasks, and each primitive task can be further decomposed into a series of atomic actions. For both primitive task and atomic action, there are five fundamental description elements: subject, action verb, manipulated object, target object, and tool (see Figure <ref>). We follow HR-SAT to provide primitive task and atomic action annotations for the assembly processes recorded in the videos. To enable the research in two-handed collaboration task understanding, we defined the two hands of each participant as two separate subjects, and we annotated action verb, manipulated object, target object, and tool for each subject. For both primitive task and atomic action annotations, we follow the annotation specification shown in Figure <ref>.
Spatial Annotations: For spatial annotations, we use CVAT[https://www.cvat.ai/] to annotate the subjects (two hands), objects (manipulated object, target object), and tools via bounding boxes, shown in Figure <ref>.
§.§ Annotation Statistics
Overall, the dataset contains temporal annotations of 81 primitive task classes and 219 atomic action classes. The trainset and testset were split by subjects to balance data diversity. Figure <ref> and Figure <ref> show the class distributions of primitive task and atomic action annotations in the trainset and testset, respectively.
Overall, the dataset contains spatial annotations of 42 classes. The trainset and testset were split by subjects to balance data diversity. Figure <ref> shows the class distributions of spatial annotation classes in the trainset and testset.
§ EXPERIMENT
In this section, we provide the implementation details of the baselines, the results unreleased in the main paper, further discussions on the results, and the licenses of the benchmarked algorithms.
§.§ Action Recognition
We use the MMSkeleton[https://github.com/open-mmlab/mmskeleton] toolbox to benchmark ST-GCN <cit.>; the MMAction2[https://github.com/open-mmlab/mmaction2] toolbox to benchmark I3D <cit.>, TimeSformer <cit.>, and MVITv2 <cit.>; and the original code to benchmark TSM <cit.>. For ST-GCN, we first extracted the upper 26 skeleton joints from each frame as the input. Action clips containing frames from which the skeleton could not be extracted were excluded when reporting performance. For I3D (rgb), TSM, MVITv2, and TimeSformer, the RGB frames of each clip were used as input. For I3D (flow), we extracted TV-L1 optical flow frames from each clip as input. To compare model performance across views (side, front, and top), hands (left and right), and annotation levels (primitive task and atomic action), we conducted a combinational benchmark, i.e., we benchmark each model on 12 sub-datasets (see Figure <ref>). We report the Top-1 and Top-5 accuracy on these sub-datasets in Table <ref>.
ST-GCN: Following the default parameters from MMSkeleton, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.1 and decayed by a factor of 10 after epochs 10 and 50. We sampled all frames as the input. The ST-GCN was pretrained on NTU <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 70 epochs, we set the total training epochs to be 80 with a batch size of 16.
TSM: Following the original paper’s suggestions, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.0025 and decayed by a factor of 10 after epochs 20 and 40. 8 frames were uniformly sampled from each clip. The TSM was pretrained on ImageNet <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 40 epochs, we set the total training epochs to be 50 with a batch size of 16.
TimeSformer: Following the default parameters from MMAction2, we use the SGD optimizer. The learning rate was initialized as 0.005 and decayed by a factor of 10 after epochs 5 and 10. 8 frames were uniformly sampled from each clip. The TimeSformer was pretrained on ImageNet-21K <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 8.
I3D (rgb) and (flow): Following the default parameters from MMAction2, we use the SGD optimizer with a dropout of 0.5. The learning rate was initialized as 0.01 and decayed by a factor of 10 after epochs 40 and 80. 32 frames were uniformly sampled from each clip. I3D takes ResNet50 pretrained on ImageNet-1K <cit.> as the backbone, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 4.
MVITv2: Following the default parameters from MMAction2, we use the AdamW optimizer with a cosine annealing learning rate with the minimum learning rate of 0.00015. 16 frames were uniformly sampled from each clip. The MVITv2 was pre-trained on Kinetics-400 <cit.> via MaskFeat <cit.>, and we finetuned it on our 12 sub-datasets. As the slowest convergence of the 12 sub-datasets was observed around 90 epochs, we set the total training epochs to be 100 with a batch size of 4.
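Most of the recipes above share a common pattern: SGD with a step-wise learning-rate decay at fixed epochs, trained for a fixed number of epochs. The following PyTorch sketch illustrates that pattern with the I3D numbers (initial lr 0.01, decay by 10 after epochs 40 and 80, 100 epochs); the tiny linear model, random tensors, class count, and the momentum value are placeholders and assumptions, not the actual training code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for a pretrained video backbone and HA-ViD clips (illustration only).
model = nn.Linear(2048, 75)                 # dummy classifier over clip-level features
features = torch.randn(64, 2048)
labels = torch.randint(0, 75, (64,))
loader = DataLoader(TensorDataset(features, labels), batch_size=4, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum value assumed
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 80], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # learning rate drops by 10x after epochs 40 and 80
```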
The benchmarking results of action recognition are shown in Table <ref>. We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model for each sub-dataset.
§.§ Action Segmentation
We benchmark three action segmentation algorithms: MS-TCN, DTGRM, and BCN, and report the frame-wise accuracy (Acc), segmental edit distance (Edit), and segmental F1 score at an overlapping threshold of 10% in Table <ref>. Before benchmarking, we extract I3D features for each frame as the input of the action segmentation algorithms. We use the PyTorch version of the I3D implementation[https://github.com/piergiaj/pytorch-i3d] and the model pretrained on ImageNet <cit.> and Kinetics <cit.>. For action segmentation, we also conducted a combinational benchmark.
MS-TCN: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a fixed learning rate of 0.0005, dropout of 0.5 and sampling rate of 1 (taking all frames into the network). As the slowest convergence of the 12 sub-datasets was observed around 800 epochs, we set the total training epochs to be 1000 with a batch size of 10.
DTGRM: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with a fixed learning rate of 0.0005, dropout of 0.5 and sampling rate of 1. As the slowest convergence of the 12 sub-datasets was observed around 800 epochs, we set the total training epochs to be 1000 with a batch size of 16.
BCN: We follow the model settings provided by <cit.>. More specifically, we use the Adam optimizer with the learning rate of 0.001 for the first 30 epochs and 0.0001 for the rest epochs, dropout of 0.5 and sampling rate of 1. As the slowest convergence of the 12 sub-datasets was observed around 200 epochs, we set the total training epochs to be 300 with a batch size of 1.
The benchmarking results of action segmentation are shown in Table <ref>. We use a single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model for each sub-dataset.
§.§ Object Detection
We benchmark three object detection algorithms: Faster-RCNN <cit.>, YOLOv5 <cit.> and DINO <cit.> with different backbone networks. The results have been reported in the main paper. Therefore, we only discuss the implementation details here. We train Faster-RCNN and DINO using the implementation provided by the MMDetection <cit.> and train YOLOv5 using the implementation provided by the MMYOLO[https://github.com/open-mmlab/mmyolo].
Faster-RCNN: We train Faster-RCNN with three backbone networks: ResNet50, ResNet101, and ResNext101. All the networks have been pretrained on the coco_2017_train dataset <cit.> and finetuned on our dataset. Following the default setting provided by MMDetection, we use the SGD optimizer with a momentum of 0.9 and weight decay of 0.0001. The learning rate was initialized as 0.02 and decayed by a factor of 10 at epochs 8 and 11. As the slowest convergence of the three models was observed around 14 epochs, we set the total training epochs to be 20. We set the batch size as 4, 1, and 5, respectively, for ResNet50, ResNet101, and ResNext101.
YOLOv5: We train YOLOv5-small and YOLOv5-large using MMDetection. These two models have been pretrained on the coco_2017_train dataset, and finetuned on our dataset. Following the default setting provided by MMDetection, we use the SGD optimizer with a momentum of 0.937, weight decay of 0.0005 for both models. The linear learning rate with base learning rate of 0.0025 and factor of 0.01 was applied to YOLOv5-small. The linear learning rate with base learning rate of 0.0025 and factor of 0.1 was applied to YOLOv5-large. We set the total training epochs to be 100 epochs with a batch size of 32 and 50 epochs with a batch size of 10, respectively, for YOLOv5-small and YOLOv5-large to ensure convergence.
DINO: We benchmark the DINO model with the Swin-large network as the backbone. The model has been pretrained on the coco_2017_train dataset, and finetuned on our dataset. Following the default setting provided by MMDetection, we use the AdamW optimizer with a learning rate of 0.0001 and weight decay of 0.0001. As the convergence was observed around 6 epochs, we set the total training epochs to be 10 with a batch size of 1.
We use single RTX 3090 GPU to train each model, and Table <ref> shows the average training time of each model.
§.§ Multi-Object Tracking
In this paper, we focus on tracking-by-detection methods because they normally perform better than joint-detection-association methods <cit.>. Since we have already benchmarked the object detection methods, we only need to test the SOTA trackers. We benchmark the SORT <cit.> and ByteTrack <cit.> trackers on the DINO detection results and on the ground truth annotations, respectively. The results have been reported in the main paper. Since the trackers are not neural networks, we neither train them nor detail their implementation; we always use the default parameters of each algorithm. For more details, please refer to the papers <cit.> and their GitHub repositories.
§.§ Discussion
In this section, we further discuss the results from the above experiments and analyze a prevalent problem of video understanding – occlusion.
§.§.§ General Discussion
Action recognition: We found that the Top-1 accuracy of primitive task recognition is 15.6% higher on average than that of atomic action recognition, and that the atomic action recognition performance for the left hand is 2.4% higher on average than for the right hand. One possible reason behind both observations is occlusion, since (1) primitive task recognition is less affected by occlusion because it can rely on the key motion or on recognizing the relevant objects; and (2) the left hand is less occluded because the side-view camera is mounted on the left side of the participant.
Action segmentation: We found (1) the frame-wise accuracy (Acc) of atomic action segmentation is 4% lower on average than primitive task segmentation, as atomic actions have higher diversity and current methods face under-segmentation issues (refer to the main paper); and (2) on the atomic action level, the Acc of the left hand is 6% higher on average than the right hand, where one possible reason could be that the left hand is less occluded.
Object detection: From Table 4 of the main paper, we found that (1) the large-scale end-to-end Transformer based model (DINO) performs the best, and the traditional two-stage method (Faster-RCNN) has better performance on small objects but worse performance on large objects than the one-stage method (YOLOv5), which is consistent with the conclusion of <cit.>; (2) current methods still face great challenges in small object detection, as the best model only has 27.4% average precision on small object detection; and (3) recognizing objects with same/similar appearances but different sizes is challenging (see Figure <ref>, e.g., Bar and Rod, Hole C1-C4, and two Wrenches).
Multi-object tracking: From Table 5 of the main paper, we found that (1) object detection performance is the decisive factor in tracking performance; (2) with perfect detection results, even a simple tracker (SORT) can achieve good tracking results, as SORT reaches 94.5% multi-object tracking accuracy on the ground truth object bounding boxes; and (3) ByteTrack tracks blurred and occluded objects better (compare b1-2, c1-2, and f1-2 in Figure <ref>) because it takes low-confidence detection results into the association step, but it generates more ID switches (IDS) (see a2-f2 in Figure <ref>) due to its preference for creating new tracklets.
§.§.§ Occlusion Analysis
From the discussion in Section <ref>, we can see that occlusion is a prevalent problem in video understanding. Therefore, we further explore the impact of occlusion on video understanding tasks in this section. Table <ref> reports the results of action recognition and segmentation, averaged over both hands, on the three individual views and the combined view (Com). To evaluate the combined view, we fuse the features from the three views before the softmax layer. The results show the significant benefit of combining the three views, which offers a viable solution for mitigating occlusion challenges in industrial settings.
Figure <ref> shows the impact of occlusion on tracking and reidentification by visualizing SORT and ByteTrack tracking results on sampled ground truth object annotations. To quantitatively analyze the occlusion problem, we design two metrics: occlusion duration (OD) and occlusion frequency (OF). Given a video of n frames v=[f_1,…,f_n], the observation of object k is denoted O_k=[o_t^k,o_{t+1}^k,…,o_{t+m}^k], where t and t+m are the frames in which object k first and last appears, respectively, and o_j^k ∈ {0,1}, with 0 denoting observed and 1 denoting unobserved. OD_k = (1/m)∑_{j=t}^{t+m} o_j^k and OF_k = (1/2)∑_{j=t}^{t+m-1} |o_{j+1}^k - o_j^k|. OD_k and OF_k describe for how long and how frequently object k is occluded in a video. We calculate the average OD and OF over every object in our testing dataset and compare them with the tracking results on ground truth object annotations in Table <ref>. Table <ref> shows a negative correlation of mOD and mOF with MOTA and IDS, which is also consistent with the findings in Figure <ref>. We envision OD and OF serving as effective occlusion evaluation tools for developing better object association and reidentification modules in MOT.
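A minimal sketch of the two occlusion metrics follows; the observation sequence is encoded exactly as defined above (0 = observed, 1 = unobserved), and the normalization follows the formulas in the text.

```python
def occlusion_metrics(obs):
    """Occlusion duration (OD) and occlusion frequency (OF) for one object.

    `obs` is the list [o_t, ..., o_{t+m}] for one object, with 0 = observed
    and 1 = unobserved, spanning the frames between its first and last
    appearance. OD is the occluded fraction (normalized by m, per the text),
    and OF is half the number of observed/unobserved transitions.
    """
    m = len(obs) - 1
    od = sum(obs) / m if m > 0 else 0.0
    of = 0.5 * sum(abs(obs[j + 1] - obs[j]) for j in range(m))
    return od, of

# Example: an object visible, occluded for three frames, then visible again.
print(occlusion_metrics([0, 0, 1, 1, 1, 0, 0]))  # -> (0.5, 1.0)
```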
§.§ Licenses of the benchmarked algorithms
The licenses of the benchmarked algorithms are listed in Table <ref>.
§ DATASET BIAS AND SOCIETAL IMPACT
Our objective is to construct a dataset that can represent interesting and challenging problems in real-world industrial assembly scenarios. Based on this objective, we developed the Generic Assembly Box that encompasses standard and non-standard parts widely used in industry and requires typical industrial tools to assemble. However, there is still a gap between our dataset and the real-world industrial assembly scenarios. The challenges lie in:
1) the existence of numerous unique assembly actions, countless parts, and tools in the industry;
2) the vast diversity of operating environments in the industry;
3) various agents and multi-agent collaborative assembly scenarios in the industry.
Therefore, additional efforts would be needed to apply the models trained on our dataset to real-world industrial applications. We hope the fine-grained annotations of this dataset can advance the technological breakthrough in comprehensive assembly knowledge understanding from videos. Then, the learned knowledge can benefit various real-world applications, such as robot skill learning, human-robot collaboration, assembly process monitoring, assembly task planning, and quality assurance. We hope this dataset can contribute to technological advancements facilitating the development of smart manufacturing, enhancing production efficiency, and reducing the workload and stress on workers.
§ ETHICS APPROVAL
HA-ViD was collected with ethics approval from the University of Auckland Human Participants Ethics Committee. The Reference Number is 21602. All participants were sent a Participant Information Sheet and Consent Form[The participant consent form is available at: <https://www.dropbox.com/sh/ekjle5bwoylmdcf/AACLd_NqT3p2kxW7zLvvauPta?dl=0>] prior to the collection session. We confirmed that they had agreed to and signed the Consent form before proceeding with any data collection.
§ DATA DOCUMENTATION
We follow the datasheet proposed in <cit.> for documenting our HA-ViD dataset:
1. Motivation
(a) For what purpose was the dataset created?
This dataset was created to understand comprehensive assembly knowledge from videos. The previous assembly video datasets fail to (1) represent real-world industrial assembly scenarios, (2) capture natural human behaviors (varying efficiency, alternative routes, pauses and errors) during procedural knowledge acquisition, (3) follow a consistent annotation protocol that aligns with human and robot assembly comprehension.
(b) Who created the dataset, and on behalf of which entity?
This dataset was created by Hao Zheng, Regina Lee and Yuqian Lu. At the time of creation, Hao and Regina were PhD students at the University of Auckland, and Yuqian was a senior lecturer at the University of Auckland.
(c) Who funded the creation of the dataset?
The creation of this dataset was partially funded by The University of Auckland FRDF New Staff Research Fund (No. 3720540).
(d) Any other Comments?
None.
2. Composition
(a) What do the instances that comprise the dataset represent?
For the video dataset, each instance is a video clip recording a participant assembling one of the three plates of the designed Generic Assembly Box. Each instance consists of two-level temporal annotations: primitive task and atomic action, and spatial annotations, which means the bounding boxes for subjects, objects, and tools.
(b) How many instances are there in total?
We recorded 3222 videos over 86.9 hours, totaling over 1.5M frames. To ensure annotation quality, we manually labeled temporal annotations for 609 plate assembly videos and spatial annotations for over 144K frames.
(c) Does the dataset contain all possible instances, or is it a sample (not necessarily random) of instances from a larger set?
Yes, the dataset contains all possible instances.
(d) What data does each instance consist of?
See 2. (a).
(e) Is there a label or target associated with each instance?
See 2. (a).
(f) Is any information missing from individual instances?
No.
(g) Are relationships between individual instances made explicit?
Yes, each instance (video clip) contains one participant performing one task (assembling one of the three plates of the designed Generic Assembly Box.)
(h) Are there recommended data splits?
For action recognition and action segmentations, we provide two data splits: trainset and testset.
For object detection and multi-object tracking, we provide another two data splits: trainset and testset.
Refer to Section <ref> for details.
(i) Are there any errors, sources of noise, or redundancies in the dataset?
Given the scale of the dataset and complexity in annotation, it is possible that some ad-hoc errors exist in our annotations. However, we have given our best efforts (via human checks and quality checking code scripts) in examining manually labelled annotations to minimize these errors.
(j) Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?
The dataset is self-contained.
(k) Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?
No.
(l) Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
No.
(m) Does the dataset relate to people?
Yes, all videos are recordings of human assembly activities, and all annotations are related to the activities.
(n) Does the dataset identify any subpopulations (e.g., by age, gender)?
No. Our participants have different ages and genders. But our dataset does not identify this information. To ensure this, we have blurred participants’ faces in the released videos.
(o) Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?
No, as explained in 2. (n), we have blurred participants’ faces in the released videos.
(p) Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?
No.
(q) Any other comments?
None.
3. Collection Process
(a) How was the data associated with each instance acquired?
For each video instance, we provide temporal annotations and spatial annotations. We follow HR-SAT to create temporal annotations to ensure the annotation consistency. The temporal annotations were manually created and checked by our researchers. The spatial annotations were manually created by postgraduate students at the University of Auckland, who were trained by one of our researchers to ensure the annotation quality.
(b) What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?
Data were collected on three Azure Kinect RGB+D cameras via live video capturing while a participant is performing the assembly actions, and we manually labeled all the annotations.
(c) If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?
No, we created a new dataset.
(d) Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?
For video recordings, volunteer participants were rewarded gift cards worth NZ$50.00 upon completion of the 2-hour data collection session.
For data annotations, we contracted students at the University of Auckland, and they were paid at a rate of NZ$23.00 per hour.
(e) Over what timeframe was the data collected?
The videos were recorded during August to September of 2022, and the annotations were made during October of 2022 to March of 2023.
(f) Were any ethical review processes conducted (e.g., by an institutional review board)?
Yes, we obtained ethics approval from the University of Auckland Human Participants Ethics Committee. More information can be found in Section <ref>.
(g) Does the dataset relate to people?
Yes, we recorded the process of people assembling the Generic Assembly Box.
(h) Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?
We collected the data from the individuals in question directly.
(i) Were the individuals in question notified about the data collection?
Yes, all participants were informed of the data collection purpose, process and the intended use of the data. They were sent a Participant Information Sheet and signed Consent Form prior to the collection session. All sessions started with an introduction where instructions on data collection, health and safety and confirmation of the Consent Form were discussed.
(j) Did the individuals in question consent to the collection and use of their data?
Yes, all participants were sent a Participant Information Sheet and Consent Form prior to the collection session. We confirmed that they had agreed to and signed the Consent form regarding the collection and use of their data before proceeding with any data collection. Details can be found in Section <ref>.
(k) If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?
Yes. The Participant Information Sheet and Consent Form addressed how they can request to withdraw and remove their data from the project and how the data will be used.
(l) Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?
No, all data have been processed to be made de-identifiable and all annotations are on objective world states. The potential impact of the dataset and its use on data subjects were addressed in the Ethics Approval, Participant Information Sheet and Consent Form. Details can be found in Section <ref>.
(m) Any other comments?
None.
4. Preprocessing, Cleaning and Labeling
(a) Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?
Yes, we have cleaned the videos by blurring participants’ faces. We have also extracted I3D features from the video for action segmentation benchmarking.
(b) Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?
No, we only provide the cleaned videos (participants’ faces being blurred) to the public due to the ethics issues.
(c) Is the software used to preprocess/clean/label the instances available?
Yes, we used CVAT to draw bounding boxes. Details can be found in Section <ref>.
(d) Any other comments?
None.
5. Uses
(a) Has the dataset been used for any tasks already?
No, the dataset is newly proposed by us.
(b) Is there a repository that links to any or all papers or systems that use the dataset?
Yes, we provide the link to all related information on our website.
(c) What (other) tasks could the dataset be used for?
The dataset can also be used for Compositional Action Recognition, Human-Object Interaction Detection, and Visual Question Answering.
(d) Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?
We granulated the assembly action annotation into subject, action verb, manipulated object, target object and tool. We believe the fine-grained and compositional annotations can be used for more detailed and precise descriptions of the assembly process, and the descriptions can serve various real-world industrial applications, such as robot learning, human robot collaboration, and quality assurance.
(e) Are there tasks for which the dataset should not be used?
The usage of this dataset should be limited to the scope of assembly activity or task understanding, e.g., action recognition, action segmentation, action anticipation, human-object interaction detection, visual question answering, and the downstream industrial applications, e.g., robot learning, human-robot collaboration, and quality assurance. Any work that violates our Code of Conduct are forbidden. Code of Conduct can be found at our website[<https://iai-hrc.github.io/ha-vid>.].
(f) Any other comments?
None.
6. Distribution
(a) Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?
Yes, the dataset will be made publicly available.
(b) How will the dataset will be distributed (e.g., tarball on website, API, GitHub)?
The dataset could be accessed on our website.
(c) When will the dataset be distributed?
We provide private links for the review process. Then the dataset will be released to the public after the review process.
(d) Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?
We release our dataset and benchmark under CC BY-NC 4.0[<https://creativecommons.org/licenses/by-nc/4.0/>.] license.
(e) Have any third parties imposed IP-based or other restrictions on the data associated with the instances?
No.
(f) Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?
No.
(g) Any other comments?
None.
7. Maintenance
(a) Who is supporting/hosting/maintaining the dataset?
Regina Lee and Hao Zheng are maintaining, with continued support from Industrial AI Research Group at The University of Auckland.
(b) How can the owner/curator/manager of the dataset be contacted (e.g., email address)?
E-mail addresses are at the top of the paper.
(c) Is there an erratum?
Currently, no. As errors are encountered, future versions of the dataset may be released and updated on our website.
(d) Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances’)?
Yes, see 7.(c).
(e) If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?
No.
(f) Will older versions of the dataset continue to be supported/hosted/maintained?
Yes, older versions of the dataset and benchmark will be maintained on our website.
(g) If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?
Yes, errors may be submitted to us through email.
(h) Any other comments?
None.
10
Yan2018
S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks
for skeleton-based action recognition,” in 32nd AAAI Conference on
Artificial Intelligence, AAAI 2018, pp. 7444–7452, jan 2018.
Carreira2017
J. Carreira and A. Zisserman, “Quo Vadis, Action Recognition? A New Model and
the Kinetics Dataset,” in 2017 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), pp. 4724–4733, IEEE, jul 2017.
Bertasius2021
G. Bertasius, H. Wang, and L. Torresani, “Is Space-Time Attention All You
Need for Video Understanding?,” in Proceedings of the 38th
International Conference on Machine Learning, pp. 813–824, feb 2021.
Li2022
Y. Li, C.-Y. Wu, H. Fan, K. Mangalam, B. Xiong, J. Malik, and C. Feichtenhofer,
“MViTv2: Improved Multiscale Vision Transformers for Classification and
Detection,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pp. 4794–4804, IEEE, jun 2022.
Lin2019
J. Lin, C. Gan, and S. Han, “TSM: Temporal Shift Module for Efficient Video
Understanding,” in 2019 IEEE/CVF International Conference on Computer
Vision (ICCV), pp. 7082–7092, IEEE, oct 2019.
Shahroudy2016
A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “NTU RGB+D: A Large Scale
Dataset for 3D Human Activity Analysis,” in 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), pp. 1010–1019, IEEE, jun
2016.
Deng2009
J. Deng, W. Dong, R. Socher, L.-J. Li, Kai Li, and Li Fei-Fei, “ImageNet:
A large-scale hierarchical image database,” in 2009 IEEE Conference on
Computer Vision and Pattern Recognition, pp. 248–255, IEEE, jun 2009.
Kay2017
W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan,
F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The
Kinetics Human Action Video Dataset,” may 2017.
Wei2022
C. Wei, H. Fan, S. Xie, C.-Y. Wu, A. Yuille, and C. Feichtenhofer, “Masked
Feature Prediction for Self-Supervised Visual Pre-Training,” in 2022
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
pp. 14648–14658, IEEE, jun 2022.
Farha2019
Y. A. Farha and J. Gall, “MS-TCN: Multi-Stage Temporal Convolutional Network
for Action Segmentation,” in 2019 IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), vol. 2019-June, pp. 3570–3579, IEEE,
jun 2019.
Wang2021
D. Wang, D. Hu, X. Li, and D. Dou, “Temporal Relational Modeling with
Self-Supervision for Action Segmentation,” Proceedings of the AAAI
Conference on Artificial Intelligence, vol. 35, pp. 2729–2737, dec 2021.
Wang2020
Z. Wang, Z. Gao, L. Wang, Z. Li, and G. Wu, “Boundary-Aware Cascade Networks
for Temporal Action Segmentation,” in ECCV, vol. Part XXV 1,
pp. 34–51, 2020.
Ren2017
S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time
Object Detection with Region Proposal Networks,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137–1149, jun
2017.
Jain
G. Jocher et al., “YOLOv5.”
Zhang2022a
H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. M. Ni, and H.-Y. Shum,
“DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object
Detection,” mar 2022.
Chen2019
K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu,
J. Xu, Z. Zhang, D. Cheng, C. Zhu, T. Cheng, Q. Zhao, B. Li, X. Lu, R. Zhu,
Y. Wu, J. Dai, J. Wang, J. Shi, W. Ouyang, C. C. Loy, and D. Lin,
“MMDetection: Open MMLab Detection Toolbox and Benchmark,” jun 2019.
Lin2014
T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona,
D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common
Objects in Context,” may 2014.
Luo2021
W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T. K. Kim, “Multiple object
tracking: A literature review,” Artificial Intelligence, vol. 293,
p. 103448, apr 2021.
Bewley2016
A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, “Simple online and
realtime tracking,” in 2016 IEEE International Conference on Image
Processing (ICIP), pp. 3464–3468, IEEE, sep 2016.
Zhang2022
Y. Zhang, P. Sun, Y. Jiang, D. Yu, F. Weng, Z. Yuan, P. Luo, W. Liu, and
X. Wang, “ByteTrack: Multi-Object Tracking by Associating Every Detection
Box,” in Proceedings of the European Conference on Computer Vision
(ECCV), vol. 2, oct 2022.
Zhao2019
Z.-q. Zhao, P. Zheng, S.-T. Xu, and X. Wu, “Object Detection With Deep
Learning: A Review,” IEEE Transactions on Neural Networks and Learning
Systems, vol. 30, pp. 3212–3232, nov 2019.
Gebru2018
T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach,
H. Daumé, and K. Crawford, “Datasheets for Datasets,” mar 2018.
|
http://arxiv.org/abs/2307.04620v2 | 20230710150757 | Surface magnon spectra of nodal loop semimetals | [
"Assem Alassaf",
"János Koltai",
"László Oroszlány"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.other",
"cond-mat.str-el"
] |
Department of Physics of Complex Systems, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
[email protected]
Department of Biological Physics, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
Department of Physics of Complex Systems, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary; MTA-BME Lendület Topology and Correlation Research Group, Budafoki út 8., H-1111 Budapest, Hungary
In this paper we establish a connection between the bulk topological structure and the magnetic properties of drumhead surface states of nodal loop semimetals. We identify the magnetic characteristics of the surface states and compute the system's magnon spectrum by treating electron-electron interactions on a mean-field level. We draw attention to a subtle connection between a Lifshitz-like transition of the surface states driven by mechanical distortions and the magnetic characteristics of the system. Our findings may be experimentally verified e.g. by spin polarized electron energy loss spectroscopy of nodal semimetal surfaces.
Surface magnon spectra of nodal loop semimetals
László Oroszlány
August 12, 2023
===============================================
§ INTRODUCTION
Due to their unique electronic properties and potential applications in numerous fields, topological materials have attracted significant attention <cit.>. These materials possess nontrivial topological properties, which in some cases need to be protected by symmetries, resulting in the existence of robust surface or edge states. Topological semimetals are a class of topological materials that have been extensively studied in recent years <cit.>. Weyl and nodal semimetals are two types of topological semimetals that possess distinct surface states. Weyl semimetals are distinguished by the presence of Weyl nodes in the bulk band structure, resulting in Fermi arcs on the surface <cit.>. These Fermi arcs connect the Weyl node projections and exhibit a variety of fascinating transport properties. In contrast, the drumhead states on the surfaces of nodal semimetals are dispersionless states associated with the surface projection of the nodal line structure. Due to their small kinetic energy, these drumhead states are susceptible to interactions and can thus be an ideal platform for superconductivity <cit.> or emergent surface magnetism <cit.>.
Rhombohedral graphite is a prime example of such a material, whose interaction-induced magnetic properties have already been studied theoretically <cit.> and observed experimentally <cit.>.
In this paper we investigate, through a simple model, the surface magnon spectrum of nodal loop semimetals. In the next section we introduce our model and describe the connection between the bulk nodal loop and the drumhead surface states. Treating the electron-electron interaction on a mean-field level, we obtain the magnetic properties of the surface states. Mapping to an isotropic Heisenberg spin model, we calculate the magnon spectrum of the system. We highlight a nuanced connection between the connectivity of the topological flat band and the magnon energies. Our findings should be relevant for the experimental characterization of topological flat bands arising in nodal semimetals, especially when the flat bands extend over a considerable portion of the projected Brillouin zone, such as those in Ca_3P_2 <cit.>.
§ THE MODEL
In this section we introduce the investigated model and describe its real-space structure and momentum-space spectrum. Our model is distinguished by the presence of a nodal loop, a closed curve of band touchings in momentum space. As we show, the shape of the nodal loop and of the flat surface states stabilized by its presence can be controlled by a parameter that corresponds to mechanical distortion in an experimental setting.
§.§ Real space structure
We consider a three-dimensional cubic system spanned by the lattice vectors 𝐚_i, with two sublattices (A and B). The real-space structure is depicted in Fig. <ref>(a). We take a single spinful orbital degree of freedom on each site into account. Electrons are allowed to hop from one site to another without breaking the sublattice symmetry, as characterized by the real-space Hamiltonian:
Ĥ_0 = ∑_𝐫,s δξ t â_𝐫,s^†b̂_𝐫,s + t â_𝐫,s^†b̂_𝐫+𝐚_1,s
+ t â_𝐫,s^†b̂_𝐫+𝐚_2,s + 2ξ t â_𝐫,s^†b̂_𝐫+𝐚_3,s +h.c. ,
where 𝐫 represents a unit cell of the system, while s is the spin degree of freedom. The annihilation operators â_𝐫,s and b̂_𝐫,s act on the appropriate sublattice and spin degree of freedom. The hopping amplitude t controls the strength of electron movement between neighboring lattice sites and serves as the unit of energy for our model. The sublattice symmetry is the fundamental symmetry of the system which allows for the emergence of the nodal loop.
There are two more important dimensionless parameters in the considered system. The parameter δ serves as an internal parameter that mimics experimentally hard-to-control properties of the system, such as particular matrix elements of the Hamiltonian related to hopping from one orbital to the other, while ξ, multiplying all hopping amplitudes in the z direction, captures the effect of applying mechanical pressure on the system. As we shall see below, both of these parameters have a significant impact on the electronic structure and magnetic properties of the system, as they both control the shape of the nodal loop and the associated drumhead surface states.
§.§ Momentum space structure
As the investigated system is cubic, the corresponding Brillouin zone, spanned by the reciprocal lattice vectors 𝐛_i, is also cubic, as depicted in Fig. <ref>(b). Since we will connect the topological properties of the bulk to the surface magnetic properties of a slab with finite thickness, it is instructive to also introduce the projected Brillouin zone with its appropriate high symmetry points, as shown in the figure.
In order to elucidate the momentum space structure defined by the kinetic Hamiltonian (<ref>)
we introduce Fourier transformed operators as
â_𝐤,s = ∑_𝐫e^i𝐤𝐫â_𝐫,s , b̂_𝐤,s = ∑_𝐫e^i𝐤𝐫b̂_𝐫,s,
where 𝐤 is a wavevector indexing states in the three dimensional Brillouin zone. With these we can recast (<ref>) as
Ĥ_0 = ∑_𝐤,s[ â_𝐤,s^† b̂_𝐤,s^† ]ℋ(𝐤)
[ â_𝐤,s; b̂_𝐤,s ]
where we introduce the matrix ℋ(𝐤) as
ℋ(𝐤) = [δ t_z - 2∑_i = x,y,z t_i cos k_i ]σ_x +
2 t_z sin k_z σ_y
= 𝐝_δ,ξ(𝐤)·σ,
with t_x,y = t, t_z = ξ t, and σ_x,y are Pauli matrices acting on the sublattice space.
The absence of σ_z from the above expression is the fingerprint of the sublattice symmetry of the model. Three dimensional Hamiltonians with sublattice symmetry can be characterized by a winding number <cit.> associated to the 𝐝_δ,ξ(𝐤) vector for specific paths in momentum space. For a given value of k_x and k_y, the system mimics the behaviour of the SSH model <cit.>. We calculate this winding number along k_z as we cross the Brillouin zone. For a given value of k_x, k_y, δ and ξ the winding number is evaluated as
ν(k_x,k_y,δ,ξ)=
1 |C_δ,ξ(k_x,k_y)/2ξ t|<1
0 |C_δ,ξ(k_x,k_y)/2ξ t|>1
where we introduce the shorthand C_δ,ξ(k_x,k_y)=δξ t -2 t cosk_x-2 t cosk_y.
The winding number, which is a bulk property, signals the presence or absence of topological drumhead states for slabs. This is a manifestation of the bulk boundary correspondence <cit.>.
If the winding number is nonzero for a given set of bulk parameters δ and ξ and wavevector components k_x and k_y then in a slab geometry there will be a zero energy surface state present at the corresponding wavevector.
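As an illustration (not part of the original text), the following Python sketch evaluates the winding-number map ν(k_x,k_y) directly from the closed-form criterion above and reports the fraction of the projected Brillouin zone with ν=1 (the ratio r introduced below). The grid size and parameter values are arbitrary illustrative choices.

import numpy as np

def winding_map(delta, xi, t=1.0, nk=201):
    """Evaluate nu(kx, ky) on a grid over the projected Brillouin zone
    using the criterion |C_{delta,xi}(kx, ky) / (2 xi t)| < 1."""
    ks = np.linspace(-np.pi, np.pi, nk)
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    C = delta * xi * t - 2 * t * np.cos(kx) - 2 * t * np.cos(ky)
    return (np.abs(C / (2 * xi * t)) < 1).astype(int)

nu = winding_map(delta=3.0, xi=1.0)
r = nu.mean()   # fraction of the projected BZ with nu = 1
print(f"flat-band coverage r = {r:.3f}")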
The geometry of the nodal loop, the map of the winding number, and the spectrum of a slab of finite thickness can be observed for different values of δ but fixed ξ in Fig. <ref>, while in Fig. <ref> the same is depicted for fixed δ and varying ξ.
Let us now discuss the evolution of the nodal loop and the drumhead states associated with it as a function of the parameters δ and ξ.
First, focusing on Fig. <ref>, that is, keeping ξ=1.0, we can observe that, as one decreases δ, a nodal loop first appears at the Γ point of the bulk Brillouin zone and then grows in size. At δ=2.0 two drastic changes occur. First, the nodal loop around the Γ point is enlarged to the point where it coalesces with the nodal loops from the neighboring Brillouin zones, effectively transforming itself from a loop around Γ to a loop around M. Second, an additional nodal loop is nucleated at the Z point of the bulk Brillouin zone, due to a band crossing. The appearance and evolution of the nodal loops leave an imprint on the winding number maps as well. For larger values of δ, where only a single loop is present, the region with ν=1 is a simply connected region in the shadow of the nodal loop. For δ<2.0, however, the appearance of the second loop and the coalescence of the original loop cause a drastic change in the connectivity of the region with a finite winding number, changing a simply connected region into a multiply connected one. Let us denote this type of transition as a connectivity shift. This transition is similar to a Lifshitz transition, whereby the topology of the Fermi surface changes.
However, in contrast to the case of other systems with a two-dimensional Brillouin zone, for instance, bilayer graphene <cit.>, in our special case the Fermi-surface is also a two-dimensional object.
As δ is decreased even further to δ=0.0 the area Ω_0 of the region with ν=1 reaches a maximum. Let us introduce the ratio r of this area to the total area of the projected Brillouin zone Ω_BZ as
r = Ω_0/Ω_BZ.
As expected from the bulk boundary correspondence of topological systems, finite winding numbers herald non-dispersing zero energy surface states. As one can observe in Fig. <ref>(f)-(j), where the spectrum of a slab with finite thickness is depicted, the region corresponding to ν=1 indeed harbors drumhead surface states. The spatial localization of these states follows from their analogy with the SSH model <cit.>.
Turning our attention now to the parameter ξ and to Fig. <ref>, we can see that for a fixed value of δ the parameter ξ, which mimics mechanical distortions, can also be used to change the connectivity of the flat portion of the surface localized zero energy states. As one decreases ξ, a band crossing can be engineered at the Z point, introducing again a second nodal loop and thus transforming a simply connected, disk-like region with ν=1 into an annulus-like region. This again leads to a connectivity shift.
§.§ Interactions
In the previous sections, we showed that the presented model exhibits drumhead surface states. For these states, which occupy a considerable portion of the projected Brillouin zone, the kinetic energy vanishes. Interactions between charge carriers will thus undoubtedly play a major role in shaping their behavior. The simplest consequence of interactions may be the formation of an ordered magnetic pattern on the surface of the system. This emergent magnetism parallels the edge magnetization of zigzag graphene nanoribbons, already observed experimentally <cit.>.
We take interactions into account through a Hubbard term, thus the full Hamiltonian Ĥ for the electronic degrees of freedom is cast in the form
Ĥ=Ĥ_0 + U ∑_in̂_i ,↑n̂_i, ↓,
where n̂_i,s = ĉ^†_i,sĉ_i,s with ĉ_i,s = â_𝐫_i,s, b̂_𝐫_i,s. In the present work we focus on the case of a half-filled system; thus, in all calculations the Fermi level E_F is set to guarantee this condition. We stress that, in order for magnetism to arise, the system needs to be in the vicinity of half filling; otherwise the spin polarization of the surface states vanishes. This behavior is expected for nodal line semimetals and was already observed in rhombohedral graphene <cit.>. We also note, however, that any mechanism which makes the surface states dispersive, by enhancing their kinetic energy, will also extend the range of chemical potentials at which magnetism can be stabilized.
To proceed further, we analyse the system defined by (<ref>) at the mean-field level <cit.>. That is, we obtain an effective spin-dependent single-particle description of the system through a self-consistent procedure. Thus, instead of the interacting Hamiltonian (<ref>), we work with the mean-field Hamiltonian Ĥ^s_MF ({n_i,↑,n_i,↓} ) for spin channel s, which depends explicitly on the self-consistently obtained occupation numbers n_i,s at each site. The results of such a mean-field calculation can be observed in Fig. <ref>(a), where the spectrum of a slab with finite thickness is presented.
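The following Python sketch (not from the paper) illustrates such a self-consistency loop for a Hubbard mean-field Hamiltonian; the toy open-chain stand-in, the mixing parameter, the seed and the iteration count are assumptions chosen purely for illustration, not the actual slab Hamiltonian of the model.

import numpy as np

def mean_field_hubbard(h0, U, n_elec, n_iter=200, mix=0.3):
    """Self-consistent Hartree loop for a Hubbard term:
    H^s_MF = h0 + U * diag(n_{-s}), occupations from the lowest n_elec states."""
    n_sites = h0.shape[0]
    # small symmetry-breaking seed so a magnetic solution can develop
    n_up = 0.5 + 0.05 * (-1.0) ** np.arange(n_sites)
    n_dn = 1.0 - n_up
    for _ in range(n_iter):
        new = []
        for n_other in (n_dn, n_up):        # spin up sees n_dn, spin down sees n_up
            e, v = np.linalg.eigh(h0 + U * np.diag(n_other))
            occ = (np.abs(v[:, :n_elec]) ** 2).sum(axis=1)
            new.append(occ)
        n_up = (1 - mix) * n_up + mix * new[0]
        n_dn = (1 - mix) * n_dn + mix * new[1]
    return n_up, n_dn

# toy stand-in: open chain with nearest-neighbour hopping t = 1, at half filling
N, t = 12, 1.0
h0 = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
n_up, n_dn = mean_field_hubbard(h0, U=2.0, n_elec=N // 2)
print("site magnetization m_i / mu_B:", np.round(n_up - n_dn, 3))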
The impact of interactions is the visible splitting of the zero energy flat band. The splitting is due to the local difference of the occupation of the two spin species on the surfaces of the system. The magnetization m_i on site i is obtained as
m_i = (n_i ,↑ - n_i, ↓) μ_B,
where the occupation numbers n_i,s are the expectation value of n̂_i,s in the ground state for site i and spin s while μ_B is the Bohr magneton.
Fig. <ref>(b) shows the magnetization for each site in the cross section of a slab of finite thickness. One can observe that the sites at the very top and bottom carry a considerable portion of the overall magnetization. The magnetization drops off exponentially towards the bulk of the system, with neighbouring layers exhibiting opposite magnetization.
For moderate system thickness where there is still some overlap between the states localized on the two opposing surfaces of the system, an antiferromagnetic configuration is energetically more favorable where the magnetization of the top layer is reversed as compared to that of the bottom layer, as can be observed in Fig. <ref> (b). In these situations the ground state of the system possesses an overall spectral gap as can be also seen in Fig. <ref> (a).
For wide enough slabs though, the difference in ground state energy of the parallel and anti-parallel alignment of the magnetization of the opposing surfaces vanishes as the two surfaces effectively decouple from each other.
§ SURFACE MAGNONS
In this section, we are going to analyze the magnetic characteristics of the topmost surface sites of our model. This layer of sites is characterized at zero temperature by an ordered ferromagnetic spin configuration. We start by mapping the localized magnetic moments of the surface, with magnitude m, to that of an isotropic Heisenberg model. The mapping will allow us to find the surface magnon spectrum of the system. From the magnon spectrum, we extract experimentally accessible quantities such as the spin wave stiffness D and the effective exchange constant J(0). We finish this section by discussing how these quantities depend on the parameters of the model. We shall concentrate on possible observable fingerprints of the connectivity shift discussed in the previous sections.
The classical Heisenberg model describes coupled classical magnetic moments at site i with an orientation 𝐞_i and coupling constants J_i j through the classical Hamiltonian
h = -1/2∑_i, j J_i j 𝐞_i·𝐞_j.
For tight binding like electronic systems, with a single spinfull orbital on each site, where interactions are taken into account through a Hubbard term with interaction strength U, on the mean-field level, the coupling constants appearing in the above expression can be cast into the rather simple form <cit.>
J_i j= 2/π (mU/μ_B)^2 ∫_-∞^E_Fd E Im[ G^↑_i j(E) G^↓_ j i(E)], i ≠ j.
In this expression G^s_ij(ε) are the matrix elements of the Green's function Ĝ^s(ε) for spin channel s and between surface sites i and j which in turn are obtained from the mean-field Hamiltonian Ĥ^s_MF as
Ĝ^s(E)=lim_η→ 0 ((E+iη) Î-Ĥ^s_MF)^-1.
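As an illustrative sketch (not part of the paper), the exchange constants of the above form could be estimated numerically by replacing the η → 0 limit with a small finite broadening and performing a simple quadrature of the energy integral; the broadening, the energy window, and the toy spin-split chain used as a stand-in for the actual surface mean-field Hamiltonians are all assumptions made for this example.

import numpy as np

def greens_matrix(H, E, eta=1e-2):
    """G(E) = ((E + i*eta) I - H)^{-1} with a small finite broadening eta."""
    n = H.shape[0]
    return np.linalg.inv((E + 1j * eta) * np.eye(n) - H)

def exchange_J(H_up, H_dn, i, j, E_F, m, U, mu_B=1.0, nE=400, Emin=-10.0):
    """Crude quadrature of J_ij = (2/pi)(mU/mu_B)^2 Int dE Im[G^up_ij(E) G^dn_ji(E)]."""
    Es = np.linspace(Emin, E_F, nE)
    vals = [np.imag(greens_matrix(H_up, E)[i, j] * greens_matrix(H_dn, E)[j, i])
            for E in Es]
    dE = Es[1] - Es[0]
    return 2 / np.pi * (m * U / mu_B) ** 2 * float(np.sum(vals) * dE)

# toy spin-split chain as a stand-in for the surface mean-field Hamiltonians
N, t, split = 8, 1.0, 0.5
h0 = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
H_up, H_dn = h0 - split * np.eye(N), h0 + split * np.eye(N)
print(exchange_J(H_up, H_dn, 0, 1, E_F=0.0, m=1.0, U=2.0))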
The Fourier transform of the coupling constants, J(𝐪), can be cast in terms of an integral over the projected Brillouin zone for each wave vector 𝐪 as
J(𝐪)=∑_j ≠ 0 e^i 𝐪𝐑_j J_0 j
= 2/π (mU/μ_B)^2 Im∫_-∞^E_Fd E ℐ_𝐪 (E )
with
ℐ_𝐪 (E ) = (∑_k𝒢^↑_00(E, 𝐤) 𝒢^↓_00(E, 𝐤+𝐪) ) - G_00^↑(E) G_00^↓(E).
Here 𝒢^s_00(E, 𝐤) is the surface component of the momentum dependent Green's function for an infinite slab geometry of finite thickness at momentum 𝐤 and spin component s.
The coupling constants can be used to define a temperature scale analogous to the mean-field Curie temperature as J(0)/3k_B. Thus we shall use J(0), the effective exchange parameter <cit.>, as a key characteristic property as well.
The dynamics of spin fluctuations is captured by the dispersion relation of magnons, which in turn, for a ferromagnetic reference state, is given by
ε(𝐪)=2 μ_B/m(J(0)-J(𝐪)).
This spectrum can be measured for instance by spin polarized electron energy loss spectroscopy<cit.>.
For ferromagnetic systems the curvature D of the magnon spectrum at 𝐪=0 is again an important attribute which is more commonly referred to as spin wave stiffness. That is
ε(𝐪)|_𝐪≈0 = D q ^2.
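The short Python sketch below (not from the paper) shows how the magnon dispersion and the stiffness D could be extracted once J(𝐪) is available; the nearest-neighbour toy exchange used here, as well as the units m = μ_B = 1, are assumptions made purely to keep the example self-contained.

import numpy as np

def magnon_spectrum(J_of_q, qs, m=1.0, mu_B=1.0):
    """epsilon(q) = (2 mu_B / m) * (J(0) - J(q)); the stiffness D is read off
    from the small-q curvature by a finite difference."""
    J0 = J_of_q(np.array([0.0]))[0]
    eps = 2 * mu_B / m * (J0 - J_of_q(qs))
    dq = 1e-3
    D = 2 * mu_B / m * (J0 - J_of_q(np.array([dq]))[0]) / dq ** 2
    return eps, D

# toy exchange: nearest-neighbour coupling J1 on a square surface lattice,
# evaluated along the q = (q, 0) direction, J(q) = 2 J1 (cos q + 1)
J1 = 0.1
J_line = lambda q: 2 * J1 * (np.cos(q) + 1.0)
qs = np.linspace(0, np.pi, 200)
eps, D = magnon_spectrum(J_line, qs)
print(f"spin-wave stiffness D = {D:.4f} (expected 2*J1 = {2*J1:.4f})")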
In the following we present and discuss results for the quantities mentioned above. We put an emphasis on how the energetics of surface magnons are impacted by the two model parameters δ and ξ particularly around a connectivity shift of the surface flat band.
Finite size scaling shows that as one increases the number of layers N towards the macroscopic limit the identified signatures of the connectivity shift presented below will manifest precisely at the critical values of parameters, even for weak interaction strengths. For stronger interactions the fingerprints of the transition will occur already for a moderate number of layers. In the calculations shown we considered a slab of thickness N=20 layers and an interaction strength of U/t=1.0 which proved to be a pragmatic choice in order to illustrate our main message.
As in the previous sections, we start our analysis by focusing on the parameter δ while keeping ξ=1.0, that is, we consider a system in the absence of mechanical distortions. The magnon spectrum along a high symmetry path of the projected Brillouin zone for various values of δ is depicted in Fig. <ref>. As one can deduce from the graph, reducing the value of δ increases the energy of magnons around the Γ point. A curious observation can also be made regarding the spectrum for δ=0.0, namely that it vanishes not just at Γ but also at M. This property, which would in general point towards an instability of the ferromagnetic phase, can be explained in this particular case. Here the absence of the hopping terms proportional to δ from the kinetic term Ĥ_0 means that the system falls apart into two interlocked but decoupled subsystems, which can be oriented parallel or anti-parallel with respect to each other without any energy cost.
In order to further elucidate important characteristic features of the obtained magnon spectrum, we plot key properties as a function of δ in Fig. <ref>. We comment first on the evolution of r, depicted in subfigure (a). As the nodal loop enlarges with decreasing δ, the drumhead surface states occupy more and more of the projected Brillouin zone. However, decreasing δ beyond the connectivity shift at δ=2.0, the growth of the ratio r, depicted by the orange dashed line in the figure, suffers a discontinuity. A qualitative observation regarding the connectivity shift can also be made based on the evolution of the magnon energies at the high symmetry points shown in subfigure (b): in the vicinity of the connectivity shift a maximum is present at the M point, together with a nearby local minimum. Signatures of the connectivity shift are also present in the magnetization m, the effective exchange coupling J(0), and the stiffness D, visualized in subfigures (c), (d) and (e), respectively. Although somewhat hard to discern directly, these signatures are more readily visible through the derivatives with respect to δ. The derivative of the magnetization ∂_δ m jumps, the derivative of the effective coupling ∂_δ J(0) shows a local maximum, while the derivative of the stiffness ∂_δ D has a local minimum in the vicinity of the connectivity shift at δ=2.0.
In an experimental setting the parameter δ is typically hard to control; ξ, on the other hand, is directly linked to a uniaxial distortion of the sample in the z direction. As discussed previously, a connectivity shift occurs for δ=3.0 if we decrease ξ below the critical value of 0.8, thus examining the behaviour of the characteristic features detailed above for this case as well might highlight experimentally observable fingerprints of this transition.
In Fig. <ref> the magnon dispersion relation is depicted for distinct values of ξ above, below, and exactly at the connectivity shift. In the panels of Fig. <ref> the detailed ξ dependence of the characteristic magnon spectral features is collected. The discontinuity in the evolution of the ratio r at the connectivity shift is evident, as in this case r peaks at the transition point. The magnon energies at the high symmetry M and X points, as well as the magnetisation and the effective exchange coupling, show a local maximum in the vicinity of the connectivity shift, while in the evolution of the stiffness a considerable decrease in the slope is observable as ξ increases past the transition point.
In this case it will also be insightful to evaluate the derivatives, now with respect to ξ. The derivatives of all the characteristic properties show a clear transition at the connectivity shift. The derivatives of m and J(0) both drop sharply, while ∂_ξ D jumps abruptly at the transition point. We note that the oscillations present in this quantity at small ξ values are due to numerical limitations, and as such they should be considered a computational artefact.
§ SUMMARY
In conclusion, we investigated the magnons associated with the drumhead surface states in a simple model of a nodal loop semimetal.
The model without interactions exhibits topological flat bands whose shape, and crucially their connectivity, can be controlled by mechanical distortions.
Including interactions at the mean-field level, we showed that magnetization is stabilized on the surface.
Employing a standard Green's function based technique we obtained the dispersion relation of surface magnons.
Determining key, experimentally accessible characteristics of the magnon spectrum, such as the magnetization, the effective exchange coupling and the spin wave stiffness, we show that the Lifshitz-like transition of the electronic states can in principle be observed through the magnetic properties of the surface.
On the one hand, we emphasise that our phenomenological observations would greatly benefit from future analytic calculations, which may shed light on the intricate interplay of topology, interactions and magnetism in this system.
On the other hand, our calculations will hopefully encourage experimental exploration of magnetism on the surface of nodal loop semimetals. For instance Ca_3P_2 <cit.>, with a relatively large ratio r, might be an excellent candidate for future investigations.
§ ACKNOWLEDGEMENT
The authors wish to express their gratitude to Edward McCann, Rahul Nandkishore, Jaime Ferrer, Amador García Fuente, Gabriel Martinez Carracedo, László Szunyogh and László Udvardi, for valuable discussions and their comments regarding the present work.
This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) and by NKFIH Grants No. K131938, K134437 and K142179. A.A. greatly acknowledges the support from Stipendium Hungaricum No. 249316.
L.O. also acknowledges support of the National Research, Development and Innovation (NRDI) Office of Hungary and the Hungarian Academy of Sciences through the Bolyai and Bolyai+ scholarships.
| http://arxiv.org/abs/2307.04813v2 | 20230710180256 | Cohomologies of tautological bundles of matroids | ["Christopher Eur"] | math.AG | ["math.AG", "math.CO"] |
Tautological bundles of realizations of matroids were introduced in <cit.> as a unifying geometric model for studying matroids. We compute the cohomologies of exterior and symmetric powers of these vector bundles, and show that they depend only on the matroid of the realization. As an application, we show that the log canonical bundle of a wonderful compactification of a hyperplane arrangement complement, in particular the moduli space of pointed rational curves ℳ_0,n, has vanishing higher cohomologies.
§ INTRODUCTION
Let E = {1, …, n} be a finite set, and let 𝕜 be an algebraically closed field.
Let L⊆𝕜^E be an r-dimensional linear subspace.
The matroid of L is the data of the set
{B ⊆ E : the composition L↪𝕜^E ↠𝕜^B is an isomorphism}
called the set of bases of the matroid of L. We say that L realizes this matroid.
We will explain notions from matroid theory as necessary, and refer to <cit.> for a general background.
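As an illustration of the definition above (not part of the original text), the following Python sketch enumerates the bases of the matroid of the row space of a matrix; the rationals/reals are used here as a computational stand-in for an algebraically closed field, and the example matrix is an arbitrary realization of the uniform matroid U_{2,4}.

import numpy as np
from itertools import combinations

def bases_of(A, tol=1e-9):
    """Bases of the matroid of L = row space of the r x n matrix A:
    B is a basis iff the columns of A indexed by B are linearly independent,
    i.e. iff the coordinate projection L -> k^B is an isomorphism."""
    r, n = A.shape
    return [B for B in combinations(range(n), r)
            if abs(np.linalg.det(A[:, B])) > tol]

# a generic 2-dimensional subspace of k^4: realizes the uniform matroid U_{2,4}
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 2.0]])
print(bases_of(A))   # all 6 two-element subsets of {0, 1, 2, 3}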
In <cit.>, Berget, Spink, Tseng, and the author introduced tautological bundles of realizations of matroids as a new geometric model for studying matroids.
Let us recall the construction. Let T be the algebraic torus (^*)^E, which acts standardly on ^E via (t_1, …, t_n) · (x_1, …, x_n) = (t_1x_1, …, t_nx_n).
Let T = T/^* be its projectivization, i.e. its quotient by the diagonal. The image of t∈ T in T is denoted t.
The permutohedral variety X_E (Definition <ref>) is a smooth projective toric variety with the open dense torus T,
considered here as a T-variety.
Let 𝒪_X_E^⊕ E be the trivial vector bundle X_E ×^E whose T-equivariant structure is given by the inverse action of T on ^E, i.e. t ·_inv x = t^-1 x.
The tautological subbundle and quotient bundle of L⊆^E are the T-equivariant vector bundles _L and _L (respectively) on X_E defined by
_L = the T-equivariant subbundle of 𝒪_X_E^⊕ E whose fiber over t∈ T ⊂ X_E is t^-1L, and
_L = the T-equivariant quotient bundle of 𝒪_X_E^⊕ E whose fiber over t∈ T ⊂ X_E is ^E/t^-1L.
For well-definedness, see <cit.>. The authors of <cit.> showed that the K-classes [_L] and [_L] of these vector bundles depend only on the matroid . Moreover, by studying the Chern classes and sheaf Euler characteristics of the tautological bundles, both of which depend only on the K-class, they were able to unify, recover, and extend various recent developments in algebro-geometric studies of matroids.
Here, we ask: How do the sheaf cohomologies of _L and _L depend on the matroid ?
Our main results are as follows. We say that an element e∈ E is a coloop (resp. loop) of L if the decomposition 𝕜^E∖ e⊕𝕜^{e} of 𝕜^E decomposes L into L' ⊕𝕜 (resp. L' ⊕{0}) for some L' ⊆𝕜^E∖ e, or equivalently, if every basis of the matroid includes (resp. excludes) e.
Exterior powers of _L and _L have vanishing higher cohomologies, i.e.
H^i(^p _L) = 0 and H^i(^p _L) =0 for all i > 0 and p≥ 0,
and we have
∑_p ≥ 0 H^0(^p _L)u^p = (u+1)^|coloops()| and ∑_p≥ 0 H^0(^p _L) u^p = ∑_S⊆ E
S contains
a basis of u^|E|-|S|
where u is a formal variable.
The symmetric powers of _L have vanishing higher cohomologies, i.e.
H^i(Sym^p _L) = 0 for all i > 0 and p≥ 0,
and we have
∑_p ≥ 0 H^0(Sym^p_L) u^p = (1/1-u)^|E| - |coloops()|
where u is a formal variable.
In particular, the theorems imply that the cohomologies of exterior and symmetric powers of _L, and those of exterior powers of _L, depend only on the matroid that L realizes.
One may contrast this to the fact that exterior and symmetric powers of a realization L of are not in general determined by the matroid <cit.>.
Similar results for the dual vector bundles _L^∨ and _L^∨ can be obtained as follows.
Using the standard dot product on ^E, let us identify ^E ≃ (^E)^∨, so the trivial bundle (𝒪_X_E^⊕ E)^∨ is identified with X_E ×^E where T now acts standardly on ^E. Denoting L^⊥ for the space (^E/L)^∨ considered as a subspace of ^E ≃ (^E)^∨, we identify _L^∨ as the subbundle of (𝒪_X_E^⊕)^∨ whose fiber over t∈ T ⊂ X_E is t L^⊥.
The permutohedral variety X_E has the Cremona involution crem: X_E ∼→ X_E, induced by sending t∈ T to t^-1 (see for instance <cit.>).
Our description of _L^∨ above shows that _L^∨≃crem_L^⊥, and similarly one has _L^∨≃crem_L^⊥. In particular, symmetric and exterior powers of _L^∨ have vanishing higher cohomologies.
We prove Theorems <ref> and <ref> by establishing a “deletion-contraction” property for the tautological bundles, which we now describe.
For a subset S⊆ E, we denote
L\ S = the image of L under the projection ^E ↠^E∖ S, and
L/S = L ∩ (^E∖ S×{0}^S), considered as a subspace of ^E∖ S.
When S = {e} is a singleton we often omit the brackets to write L\ e and L/e, called the deletion and contraction of L by e, respectively.
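A small Python sketch (added here for illustration, not from the paper) of how deletion and contraction of a realization could be computed numerically, again working over the reals as a stand-in; the SVD-based null space computation is one possible implementation choice among others.

import numpy as np

def deletion(A, S):
    """L \\ S: project the row space of A onto the coordinates E \\ S."""
    keep = [j for j in range(A.shape[1]) if j not in S]
    return A[:, keep]

def contraction(A, S, tol=1e-9):
    """L / S: elements of L vanishing on S, viewed inside k^(E \\ S).
    Coefficient vectors c with c^T A[:, S] = 0 form the null space of A[:, S]^T."""
    keep = [j for j in range(A.shape[1]) if j not in S]
    _, s, Vt = np.linalg.svd(A[:, list(S)].T, full_matrices=True)
    rank = int((s > tol).sum())
    C = Vt[rank:]               # rows span the null space of A[:, S]^T
    return C @ A[:, keep]

A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 2.0]])
print(deletion(A, {3}))         # L \ 3: a 2-dimensional subspace of k^3
print(contraction(A, {3}))      # L / 3: a 1-dimensional subspace of k^3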
For an element of E, say n∈ E for concreteness, there is a natural projection map f: X_E → X_E∖ n (Definition <ref>).
We show the following deletion-contraction property for the pushforward f_* of the tautological bundles of L.
For all p ≥ 0, we have
R^if_*(^p _L) = 0 for all i>0, and
f_*(^p _L) = ^p(_L/n⊕𝒪_X_E∖ n) if n is a coloop in
^p _L/n if n is not a coloop in .
Similarly, for all p ≥ 0, we have
R^if_*(^p _L) = 0 for all i>0, and
f_*(^p _L) = ^p(_L\ n⊕𝒪_X_E∖ n) if n is a loop in
^p_L/n if n is a coloop in
^p_L/n⊕^p-1_L∖ n if n is neither a loop nor a coloop in .
For all p≥ 0, we have
f_*Sym^p_L = Sym^p(_L/n⊕𝒪_X_E∖ n) if n a coloop
Sym^p_L/n if n not a coloop.
For all p ≥ 0, we have
R^if_*_L = 0 for all i > 0, and
f_*_L = Sym^p _L/n if n a coloop
Sym^p(_L/n⊕𝒪_X_E∖ n) if n not a coloop.
We induct on the cardinality of E, where the statements in the base case |E| = 1 are straightforward since X_E is a point in that case.
When |E|>1, for all p≥ 0, the Leray spectral sequence E_2^a,b = H^a(X_E∖ n, R^b f_* (^p_L)) satisfies E_2^a,b = 0 for all b >0 by Theorem <ref>, so that
H^i(X_E, ^p_L) ≃ H^i(X_E∖ n, f_* (^p_L)) for all i≥ 0.
Similar statements hold for ^p_L and Sym^p_L by the same argument.
From the formula for the pushforward f_* of these bundles in Theorems <ref> and <ref>, we conclude by induction hypothesis the vanishing of higher cohomologies.
Moreover, the formula for f_*(^p _L) implies that the polynomial g(L,u) = ∑_p ≥ 0 H^0(^p _L) u^p satisfies the relation
g(L,u) =
(u+1) · g(L/n, u) if n is a coloop in
g(L/n,u) if n is not a coloop in ,
hence g(L,u) = (u+1)^|coloops()|.
One similarly computes ∑_p ≥ 0 H^0(Sym^p _L) u^p.
Lastly, the formula for f_*(^p_L) implies that the polynomial h(L,u) = ∑_p≥ 0 H^0(^p _L) u^p satisfies
h(L,u) =
(u+1) · h(L\ n, u) if n is a loop in
h(L/n, u) if n is a coloop in
u · h(L\ n, u)+ h(L/n,u) if n is neither a loop nor a coloop in .
Feeding this into the recipe formula for deletion-contraction invariants <cit.> gives h(L,u) = u^|E| - rT_(1, 1+ u^-1) where T_ is the Tutte polynomial of , whose corank-nullity description [(2.13), loc. cit.] gives the desired formula for h(L,u).
Introduced in <cit.>, a wonderful compactification (Definition <ref>) of L⊆^E is a compactification W_L of L ∩ T that served as a key geometric model behind the Hodge theory of matroids <cit.>.
Its boundary ∂ W_L = W_L ∖ ( L ∩ T) is a simple normal crossings divisor.
We use Theorem <ref> to deduce the following.
The log canonical divisor K_W_L + ∂ W_L of a wonderful compactification W_L of L has vanishing higher cohomologies, i.e.
H^i(𝒪_W_L(K_W_L + ∂ W_L)) = 0 for all i > 0,
and we have
H^0(𝒪_W_L(K_W_L + ∂ W_L)) = ∑_S⊆ E
S contains
a basis of (-1)^|S|-r.
The moduli space ℳ_0,n of pointed rational curves arises as a wonderful compactification of a linear subspace whose matroid is the cyclic matroid of the complete graph on n-1 vertices <cit.> (see also <cit.>).
Hence, the corollary in particular implies that the log canonical divisor of ℳ_0,n has vanishing higher cohomologies, and recovers the classical result that H^0(𝒪_ℳ_0,n(K_ℳ_0,n + ∂ℳ_0,n)) = (n-2)!.
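As a numerical illustration of the corollary (not part of the original text), the Python sketch below brute-forces the signed sum over spanning sets for the graphic matroid of the complete graph K_4, i.e. the matroid behind ℳ_{0,5}; the output should be (5-2)! = 6. The signed incidence matrix is one standard realization choice.

import numpy as np
from itertools import combinations

# signed incidence matrix of K_4 (rows: vertices, columns: 6 edges)
edges = list(combinations(range(4), 2))
A = np.zeros((4, len(edges)))
for j, (a, b) in enumerate(edges):
    A[a, j], A[b, j] = 1.0, -1.0
A = A[:3]                       # drop one redundant row; rank r = 3

r, n = np.linalg.matrix_rank(A), A.shape[1]
def rk(S):
    return 0 if not S else np.linalg.matrix_rank(A[:, list(S)])

spanning = [S for k in range(n + 1) for S in combinations(range(n), k) if rk(S) == r]
print(sum((-1) ** (len(S) - r) for S in spanning))   # expected (5-2)! = 6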
Corollary <ref> is the “dual version” of the following outstanding question in matroid theory due to Speyer about the anti log canonical divisor.
Speyer asked whether
(-1)^r-1χ( 𝒪_W_L(-K_W_L - ∂ W_L)) ≥ 0
for all L⊆^E such that its matroid is a connected matroid.[This is an equivalent formulation of the original question, which asked whether <cit.> holds over positive characteristic. We omit the details of the equivalence, which was communicated to the author by David Speyer.]
One can ask more strongly whether H^i( 𝒪_W_L(-K_W_L - ∂ W_L)) = 0 for all i<r-1, which implies the nonnegativity.
Speyer showed that the validity of this nonnegativity implies a bound on the f-vectors of matroidal subdivisions <cit.>.
Over characteristic zero, he proved the nonnegativity via Kawamata–Viehweg vanishing.
Corollary <ref> also implies that the cohomologies of the log canonical divisor on a wonderful compactification W_L depends only on the matroid of L.
In tropical geometry, for an arbitrary matroid possibly with no realization, instead of the wonderful compactifcation we have its tropical linear space <cit.>, which serves as building blocks of tropical manifolds.
With the theory of tropical vector bundles in its infancy, we ask:
Is there a theory of tropical line bundles and their sheaf cohomology on tropical manifolds such that it agrees with Corollary <ref>?
Related discussions and questions can be found in Section <ref>.
§.§ Previous works
When the characteristic of is zero, so that tools like resolution of singularities and Kawamata–Viehweg vanishing are available, parts of the results here have been established in previous literature <cit.>. For instance, <cit.> states that any Schur functor applied to _L^∨ has vanishing higher cohomologies. The vanishing higher cohomologies of the log canonical divisor of W_L (Corollary <ref>) is also immediate from Kawamata–Viehweg vanishing when one notes that ∂ W_L is big and nef.
The proofs of these previous results crucially depend on characteristic zero methods.
The vanishing statements here are established over fields of arbitrary characteristic by elementary methods.
§.§ Organization
In Section <ref>, we review permutohedral varieties, and detail the behavior of the projection map f: X_E → X_E\ n.
In Section <ref>, after some preparatory computations on ^1, we prove Theorems <ref> and <ref>.
In Section <ref>, we explain the application to wonderful compactifications.
In Section <ref>, we collect some questions.
§.§ Acknowledgements
The author thanks Andrew Berget, Alex Fink, Dhruv Ranganathan, and David Speyer for helpful conversations, and thanks Matt Larson for helpful conversations and comments on a preliminary draft of the paper.
The author is supported by US National Science Foundation (DMS-2001854 and DMS-2246518).
§ PERMUTOHEDRAL VARIETIES
For a subset S⊆ E, let _S be the sum of standard basis vectors ∑_i∈ S_i ∈^E, and let _S be its image in ^E/_E.
For background and conventions for polyhedral geometry and toric geometry, we refer to <cit.>.
An ordered set partition ℱ of E is a sequence (F_1, …, F_ℓ) of nonempty subsets of E that partition E.
The permutohedral fan Σ_E is the fan in ^E/_E consisting of cones
σ_ℱ = cone{_F_1, _F_1∪ F_1, …, _F_1 ∪⋯∪ F_ℓ}
for each ordered set partition ℱ = (F_1, …, F_ℓ) of E.
The permutohedral variety X_E is the (smooth projective) toric variety associated the fan Σ_E, considered as a rational fan over ^E/_E.
We identify the cocharacter lattice of T = (^*)^E with ^E, which identifies the cocharacter lattice of T with ^E/_E. This identifies the dense open torus of X_E with T, and so we treat X_E as a T-variety.
We refer to <cit.> for a background on torus-orbits of toric varieties, and fix the following notations.
For an ordered set partition ℱ = (F_1, ⋯, F_ℓ) of E, denote by
* p_ℱ = lim_t→ 0λ(t) the limit point in X_E where λ: ^* → T is the one-parameter map of any cocharacter λ∈^E / _E in the relative interior relint(σ_ℱ) of σ_ℱ,
* O_ℱ the T-orbit of corresponding to σ_ℱ, i.e. the orbit T· p_ℱ, and
* Z_ℱ the closure of the T-orbit O_ℱ.
We now describe the map f: X_E → X_E∖ n. First, note that the projection map f: ^E →^E∖ n induces a map of fans Σ_E →Σ_E∖ n.
We record the following observation, whose verification is straightforward and is omitted.
Let ℱ =(F_1, …, F_ℓ) be an ordered set partition of E∖ n.
The inverse image of the cone σ_ℱ∈Σ_E∖ n under the map Σ_E →Σ_E∖ n consists of cones in Σ_E corresponding to the following two kinds of ordered set partitions of E:
* For 1≤ i ≤ℓ+1, let ℱ^i = (F_1, …, F_i-1, n, F_i, …, F_ℓ).
* For 1≤ i ≤ℓ, let ℱ(i) = (F_1, …, F_i-1, F_i ∪ n, F_i+1, …, F_ℓ).
Note that σ_ℱ^i∩σ_ℱ^i+1 = σ_ℱ(i).
Let f: X_E → X_E∖ n be the toric map associated to the map of fans Σ_E →Σ_E∖ n induced by the projection map f: ^E →^E∖ n.
Translating the polyhedral statement in Proposition <ref> to toric geometry gives the following.
The map f: X_E → X_E∖ n is a flat and projective map whose fibers are chains of rational curves. More specifically, for any (t,1)∈ T where t∈ (^*)^E∖ n and an ordered set partition ℱ = (F_1,…, F_ℓ) of E∖ n, we have that the fiber
f^-1(t · p_ℱ) = ⋃_i=1^ℓ C(t,i)
where
C(t, i) = {(t,1)· p_ℱ^i}⊔{(t,1) · p_ℱ^i+1}⊔{(t,t_n)· p_ℱ(i) : t_n ∈^*}
= {0}⊔{∞}⊔^* ≃^1.
One may also deduce the first statement of the corollary by noting that X_E is the Losev-Manin space <cit.>, which is a particular Hassett space of rational curves with weighted markings, and that the map f is the universal curve map.
For proving Thereoms <ref> and <ref>, Corollary <ref> primes us to use Grauert's theorem, which we recall here for convenience <cit.>:
If φ: X → Y is projective and ℱ∈Coh(X) is flat over Y such that H^i(X_y, ℱ_y) is constant over the fibers X_y = φ^-1(y) of y∈ Y, then R^iφ_*ℱ is a vector bundle on Y whose fiber at y∈ Y is H^i(X_y, ℱ_y).
In particular, if φ is itself flat, then because the Euler characteristic χ(X_y, ℰ_y) is constant for a vector bundle ℰ on X, the pushforward φ_* ℰ is a vector bundle on Y with fibers H^0(X_y, ℰ_y) if H^i(X_y, ℰ_y) = 0 for all i>0 and y∈ Y.
We conclude this section by discussing the behavior of the tautological bundles of L on the fibers of the map f.
Let us write L|S = L\(E\ S) for a subset S ⊆ E. For an ordered set partition ℱ = (F_1, …, F_ℓ) of E, let L_ℱ be the linear subspace
L_ℱ= L | F_1 ⊕ L|(F_1∪ F_2)/F_1 ⊕⋯⊕ L|(F_1 ∪⋯∪ F_ℓ-1)/(F_1 ∪⋯∪ F_ℓ-2) ⊕ L/(F_1 ∪⋯∪ F_ℓ-1)
of ^F_1⊕^F_2⊕⋯⊕^F_ℓ = ^E. We will need the following fact, which follows from <cit.>.
The restriction _L|_Z_ℱ (resp. _L|_Z_ℱ) is the unique T-equivariant subbundle (resp. quotient bundle) of (𝒪_X_E^⊕ E)|_Z_ℱ = Z_ℱ×^E whose fiber over p_ℱ is L_ℱ (resp. ^E/L_ℱ).
The tautological bundles of L restricted to a fiber of f are simple in the following sense.
Let notations be as in Corollary <ref>.
As a subbundle (resp. quotient bundle) of the trivial bundle 𝒪^⊕ E, the fibers of the restricted bundle _L|_f^-1(t· p_ℱ) (resp. _L|_f^-1(t · p_ℱ)) are constant along all ^1-components C(t,i) if n is a loop or a coloop of L, and non-constant at exactly one component if n is neither a loop nor a coloop of L.
If n is a loop or a coloop of L, then (t,t_n) · L = (t,1) · L for all t_n ∈^*, and for any S⊆ E∖ n, the element n is again a loop or a coloop of L\ S and L/S.
Thus, in this case n is a loop or a coloop of L_ℱ(i) for all 1≤ i≤ℓ, and so Lemma <ref> and Corollary <ref> together imply that _L and _L are constant along each component C(t,i).
Suppose now that n is neither a loop nor a coloop of L. We need to show that n is neither a loop nor a coloop in L_ℱ(k) for exactly one 1≤ k ≤ℓ.
For this end, we will use the following property of matroids that follows from its greedy algorithm structure (see <cit.>):
For an ordered partition ℱ' = (F'_1, …, F'_ℓ) of E, let w_ℱ': E → be any weighting such that w_ℱ' is constant on each F'_i and w(f'_i) > w(f'_j) whenever f'_i∈ F'_i and f'_j∈ F'_j with i<j. Then, the w_ℱ'-maximal bases of the matroid of L are the bases of the matroid of L_ℱ.
Now, note that if n is neither a loop nor a coloop in L_ℱ(k), then it is a coloop in L_ℱ^k and a loop in L_ℱ^k+1 by construction. In particular, every w_ℱ^k-maximal bases of includes n, and every w_ℱ^k+1-maximal bases of excludes n. Thus, every w_ℱ(i)-maximal bases of must include n if i<k, and must exclude n if i>k.
§ PROOF OF THEOREMS <REF> AND <REF>
Since _L or _L along a fiber of the map f is non-constant on at most one ^1-component of the fiber (Proposition <ref>), we begin with preparatory observations for vector bundles on ^1.
Consider ^1 = {0}⊔{∞}⊔^* as a ^*-toric variety, and let 𝒪_^1^⊕ E be the trivial vector bundle ^1 ×^E where ^* acts on ^E by t· (x_1, …, x_n-1, x_n) = (x_1, …, x_n-1, t^-1 x_n).
We write ^{n} for last coordinate of ^E with the inverse standard action of ^*.
For a subspace L⊆^E, let _L' and _L' be the ^*-equivariant sub and quotient bundles of 𝒪_^1^⊕ E, respectively, fitting into a short exact sequence 0→_L' →𝒪_^1^E →_L' → 0 such that its fiber over the identity of ^* is 0→ L →^E →^E/L → 0.
Note that if n is a loop or a coloop of L, so that L = L' ⊕ L|n ⊆^E∖ n⊕^{n}, we have that '_L ≃𝒪_^1⊗ (L' ⊕ L|n) is a trivial bundle, and similarly for '_L.
We have short exact sequences
0→'_L/n ⊕ 0→'_L →ℒ_→ 0 and 0→ℒ_→'_L →'_L\ n ⊕ L|n→ 0,
where ℒ_ and ℒ_ are ^*-equivariant line bundles (or zero) defined by
ℒ_ = the ^*-equivariant subbundle of
𝒪_^1⊗( (^E∖ n / (L/n)) ⊕^{n})
whose fiber at identity is L/ (L/n ⊕ 0)≃
0 if n a loop
𝒪_^1⊗^{n} if n a coloop
𝒪_^1(-1) if n neither
and
ℒ_ = the ^*-equivariant quotient bundle of
𝒪_^1⊗( (L\ n)/(L/n) ⊕ L|n )
whose fiber at identity is the quotient by L/(L/n ⊕ 0)≃
0 if n a loop
0 if n a coloop
𝒪_^1(1) if n neither.
If n is a loop or a coloop, so that '_L and '_L are trivial bundles, the statements of the lemma are immediate. Let us now assume that n is neither a loop nor a coloop.
For all t∈^* ⊂^1, we have L/n ⊕ 0 = (t· L) ∩ (^E∖ n⊕ 0) ↪ t· L, and at the boundaries we have lim_t→ 0 t· L = L/ n ⊕ 0 and lim_t→∞t· L = L\ n ⊕ 0.
Since L/n ⊆ L\ n, we thus have an injective map of vector bundles '_L⊕ 0↪'_L.
We hence obtain the following diagram of commuting short exact sequences
0 [d] 0 [d]
0 [r] '_L/n ⊕ 0[r,equal][d] 𝒪_^1⊗ (L/n ⊕ 0) [r] [d] 0 [d]
0 [r] '_L [r] [d] 𝒪_^1⊗^E [r][d] '_L [r][d,equal] 0
0 [r] ℒ_[r] [d] 𝒪_^1⊗(^E∖ n/(L/n) ⊕^{n}) [r][d] '_L [r][d] 0
0 0 0
by starting with the first two rows and then applying the snake lemma.
We have the desired short exact sequence for '_L.
Now, over the identity point of ^* ⊂^1, the ^*-equivariant embedding ℒ_↪𝒪_^1⊗( (^E∖ n / (L/n)) ⊕^{n}) is L/(L/n⊕ 0) ↪ (^E∖ n / (L/n)) ⊕^{n}.
Because (L\ n)/(L/n) ⊕ L|n is the direct sum of the projections of L/(L/n ⊕ 0) to the two direct summands of (^E∖ n / (L/n)) ⊕^{n}, we have that
ℒ_ in fact embeds in 𝒪_^1⊗( (L\ n)/(L/n) ⊕ L|n ).
In other words, ℒ_ is the pullback of the tautological subbundle of ^1 ≃((L\ n)/(L/n) ⊕ L|n) where the isomorphism ^1 ∼→((L\ n)/(L/n) ⊕ L|n) is defined by
^*∋ t ↦the image in (L\ n)/(L/n) ⊕ L|n of t· L / (L/n ⊕ 0).
The Euler sequence 0→𝒪_^1(-1) →𝒪_^1^2 →𝒪_^1(1) → 0 on ^1 then becomes
0 →ℒ_→𝒪_^1⊗( (L\ n)/(L/n) ⊕ L|n ) →ℒ_→ 0,
which defines the line bundle ℒ_, and proves the statements about the isomorphism types of ℒ_ and ℒ_.
Lastly, we have the following commuting diagram of short exact sequences
0 [d] 0 [d] 0[d]
0 [r] ℒ_[r][d,equal] 𝒪_^1⊗( (L\ n)/(L/n) ⊕ L|n) [r] [d] ℒ_[d][r] 0
0 [r] ℒ_[r] [d] 𝒪_^1⊗(^E∖ n/(L/n) ⊕^{n})[r][d] '_L [r][d] 0
0 [r] 𝒪_^1⊗(^E∖ n/(L\ n) ⊕^{n}/(L|n) ) [r,equal] [d] '_L\ n ⊕ L|n[r][d] 0
0 0
by starting with the first two columns and then applying the snake lemma. The desired short exact sequence for '_L follows.
The two short exact sequences in <Ref> split: For the first sequence, it follows from the possible isomorphism types of ℒ_ that Ext^1_^1(ℒ_, '_L/n ⊕ 0) ≃ H^1(^1, ℒ_^∨⊗'_L/n ⊕ 0) = 0, and similarly for the second sequence.
The lemma implies the following about the cohomologies of exterior powers of '_L and '_L.
For all p≥ 0, we have H^1(⋀^p '_L) = 0 and H^1(⋀^p '_L) = 0, and we have natural isomorphisms
H^0(^p '_L) ≃^p (L/n ⊕) if n a coloop,
^p(L/n) if n not a coloop,
and
H^0(^p '_L) ≃^p(^E/(L/n ⊕ 0)) if n a loop
^p ( ^E / (L/n ⊕)) if n a coloop
^p ( ^E∖ n/(L/n) ) ⊕^p-1(^E∖ n/(L\ n)) if n neither.
By standard multilinear algebra (e.g. <cit.>), applying exterior powers to the short exact sequences of <Ref> yields short exact sequences
†
0 →^p '_L/n ⊕ 0→ ^p '_L →^p-1'_L/n⊕ 0⊗ℒ_→ 0 and
0→ℒ_⊗^p-1'_L\ n ⊕ L|n→ ^p '_L →^p '_L\ n ⊕ L|n→ 0
for all p ≥ 0.
In the resulting long exact sequences of cohomologies, we have H^1(^p '_L)=H^1(^p '_L) = 0 because of the descriptions of ℒ_ and ℒ_ in <Ref> and H^1(𝒪_^1(-1)) = H^1(𝒪_^1) = H^1(𝒪_^1(1)) = 0, keeping in mind that '_L/n ⊕ 0 and '_L\ n ⊕ L|n are trivial bundles.
As H^1's vanish, note that applying H^0 yields the short exact sequences of vector spaces.
We now treat the statements about H^0. When n is a loop or a coloop, the desired follows since all vector bundles involved are trivial in such case. So, assume now that n is neither a loop nor a coloop.
The statement for H^0(^p '_L) follows since H^0(𝒪_^1(-1)) = 0.
For H^0(^p '_L), note first that H^0(ℒ) = V for any V ≃^2 and ℒ≃𝒪_^1(1) such that
0 →𝒪_^1(-1) →𝒪_^1⊗ V →ℒ→ 0.
Applying this with V = (L\ n)/(L/n) ⊕ L|n and ℒ = ℒ_, we obtain that the short exact sequence from applying H^0 to the second sequence in (<ref>) with p = 1 is natural isomorphic to
0→ (L\ n)/(L/n) ⊕ L|n →^E∖ n/(L/n) ⊕^{n}→^E∖ n/(L\ n) ⊕^{n}/(L|n) → 0,
(i.e. the middle column of the second diagram in the proof of <Ref>) which is the direct sum of two sequences
0→ (L\ n)/(L/n) →^E∖ n/(L/n) →^E∖ n/(L\ n) → 0 and 0→→→ 0 → 0.
In general, applying H^0 for p≥ 1 yields the short exact sequence which is the direct sum of
0 → (L\ n)/(L/n) ⊗^p-1^E∖ n/(L\ n) → ^p ^E∖ n/(L/n) →^p^E∖ n/(L\ n) → 0 and
0 →⊗^p-1^E∖ n/(L\ n) → ^p-1^E∖ n/(L\ n) → 0 → 0.
The desired statement for H^0(^p'_L) follows.
We also use <Ref> to deduce the following symmetric powers analogue of <Ref>. Note that for any V ≃^2 and ℒ≃𝒪_^1(1) fitting into 0 →𝒪_^1(-1) →𝒪_^1⊗ V →ℒ→ 0, we have natural isomorphisms
H^0(ℒ^⊗ p) ≃Sym^p V and H^1(𝒪_^1(-p-2)) ≃ V^∨⊗Sym^p V^∨ for all p≥ 0.
For all p ≥ 0, we have a natural isomorphism
H^0(Sym^p 𝒮'_L) ≃Sym^p(L/n ⊕) if n a coloop
Sym^p(L/n) if n not a coloop.
When n is a loop or a coloop, we have H^1(Sym^p '_L) = 0, and when n is neither, we have a filtration H^1(Sym^p '_L) = F_0 ⊇ F_1 ⊇⋯⊇ F_p-2⊇ F_p-1 = 0 such that
F_i/F_i+1≃(((L\ n)/(L/n) ⊕ L|n)^∨)^⊗ p-1-i⊗Sym^p-2-i((L\ n)/(L/n) ⊕ L|n) ⊗Sym^i(L/n ⊕ 0)
for all 0≤ i ≤ p-2. Similarly, for all p≥ 0, we have H^1(Sym^p '_L) = 0, and we have a natural isomorphism
H^0(Sym^p'_L) ≃Sym^p (^E∖ n/(L/n)) if n a coloop
Sym^p (^E∖ n/(L/n)⊕) if n not a coloop.
When n is a loop or a coloop, the bundles '_L and '_L are trivial, and the claimed statements follow easily. Suppose n is neither a loop or coloop now. For the statements about Sym^p '_L, we first note that the short exact sequence 0→'_L/n ⊕ 0→'_L →ℒ_→ 0 in <Ref>, along with some multilinear algebra (e.g. <cit.>), gives a filtration Sym^p'_L = ℱ_0 ⊇ℱ_1 ⊇⋯⊇ℱ_p ⊇ℱ_p+1 = 0 with ℱ_i / ℱ_i+1≃Sym^i '_L/n ⊕ 0⊗ℒ_^⊗ p-i. In the long exact sequences
0 → H^0(ℱ_i+1) → H^0(ℱ_i) → H^0(ℱ_i / ℱ_i+1) → H^1(ℱ_i+1) → H^1(ℱ_i) → H^1(ℱ_i / ℱ_i+1) → 0,
we have H^0(ℱ_i / ℱ_i+1) = 0 if i < p since ℒ_S ≃𝒪_^1(-1), and thus we have H^0(Sym^p '_L) ≃ H^0(ℱ_p / ℱ_p+1) = Sym^p (L/n). The filtration for H^1 also follows since the H^1's form a short exact sequence for each i, and
H^1(ℱ_i / ℱ_i+1)
≃((L\ n)/(L/n) ⊕ L|n)^∨⊗Sym^p-2-i((L\ n)/(L/n) ⊕ L|n)^∨⊗Sym^i(L/n ⊕ 0)
≃(((L\ n)/(L/n) ⊕ L|n)^∨)^⊗ p-1-i⊗Sym^p-2-i((L\ n)/(L/n) ⊕ L|n) ⊗Sym^i(L/n ⊕ 0).
For the statements about Sym^p'_L, we similarly have from 0→ℒ_→'_L →'_L\ n⊕ L|n→ 0 a filtration
Sym^p'_L = ℱ_0 ⊇ℱ_1 ⊇⋯⊇ℱ_p ⊇ℱ_p+1 = 0 with ℱ_i / ℱ_i+1≃ℒ_^⊗ i⊗Sym^p-i'_L/n ⊕ 0. Note that ℒ_≃𝒪_^1(1), so all H^1(ℱ_i / ℱ_i+1) vanish, and hence H^1(ℱ_i) = 0 for all i as well. Now, the resulting short exact sequences
0 → H^0(ℱ_i+1) → H^0(ℱ_i) → H^0(ℱ_i / ℱ_i+1) → 0
give a filtration of H^0(Sym^p'_L) whose successive quotients are
Sym^i((L\ n)/(L/n) ⊕ L|n) ⊗Sym^p-i(^E∖ n/(L\ n) ⊕/(L|n)).
This is exactly the filtration of Sym^p(^E∖ n/(L/n) ⊕) arising from the short exact sequence 0→ (L\ n)/(L/n) ⊕ L|n →^E∖ n/(L/n) ⊕→^E∖ n/(L\ n) ⊕/(L|n) → 0.
We are now ready to prove the main theorems.
We prove the statement for ^p_L. The statements for ^p_L and Sym^p_L are proved similarly.
We will compute the cohomologies of the restriction of ^p_L to a fiber of f: X_E → X_E∖ n, and then apply Grauert's theorem.
A point y∈ X_E∖ n is of the form t· p_ℱ for some t∈ (^*)^E∖ n and an ordered set partition ℱ = (F_1, …, F_ℓ) of E∖ n.
To reduce notational burden (such as (t· L)_ℱ(i)), we assume without loss of generality that t is the identity.
By Corollary <ref>, the restriction ^p _L|_f^-1(y) is constant on all ^1-components of f^-1(y) except possibly on C(t,k) for some 1≤ k ≤ℓ (if no such then fix an arbitrary 1≤ k ≤ℓ). By Lemma <ref>, the restriction ^p _L|_C(t,k) to C(t,k) ≃^1 is isomorphic to ^p'_L_ℱ(k), where '_L_ℱ(k) is as defined in Definition <ref>.
Hence we have H^i(^p _L|_f^-1(y)) ≃ H^i(^p'_L_ℱ(k)) for all i.
Now, Proposition <ref> implies that H^i(^p'_L_ℱ(k)) = 0, and moreover, when we note that L_ℱ(i)/n = (L/n)_ℱ for any 1≤ i ≤ℓ, the proposition gives
H^0(^p '_L_ℱ(k)) ≃^p ((L/n)_ℱ⊕) if n a coloop,
^p((L/n)_ℱ) if n not a coloop.
The desired statements for ^p _L now follow from Grauert's theorem.
Let ℰ be any globally generated vector bundle on X_E, such as ^2_L ⊗Sym^3_L.
Its restriction to a fiber of f has no higher cohomology since it is a globally generated bundle on a chain of ^1, and hence f_*ℰ is a vector bundle with R^i f_*ℰ = 0 for all i>0. However, without sufficiently explicit description of f_*ℰ like the one in Theorem <ref>, one cannot conclude much about H^i(ℰ). We currently do not have an analogue of Proposition <ref> for arbitrary Schur/Weyl functors applied to '_L, and thus our treatment is restricted to exterior and symmetric powers.
§ WONDERFUL COMPACTIFICATIONS
We begin with a review of wonderful compactifications introduced in <cit.>.
To avoid trivialities, throughout this section we assume L⊆^E to be loopless, so that the intersection L ∩ T is nonempty.
Let 𝒜 be the arrangement of hyperplanes { L ∩ H_e : e∈ E} where H_e is the e-th coordinate hyperplane of (^E). Notice that L ∩ T = L ∖ (⋃𝒜).
Let 𝒫 be the poset whose elements are the linear subvarieties P⊆ L that arise as intersections of hyperplanes in 𝒜, with partial ordering G≤ G' given by reverse inclusion P ⊇ P'. The poset 𝒫 has the top and bottom elements 1̂ = ∅ and 0̂ = L, respectively.
In matroid theory, this poset is known as the lattice of flats of the matroid of L.
A building set 𝒢 is a subset of 𝒫∖{0̂, 1̂} such that for every P ∈𝒫∖{0̂ , 1̂}, the set max𝒢_≤ P of maximal elements of 𝒢 in the interval [0̂, P] satisfies
[0̂, P] ≃∏_G∈max𝒢_≤ P [0̂, G].
The wonderful compactification of L with building set 𝒢 is the variety W_L^𝒢 obtained from L by sequentially blowing-up the linear subvarieties of L in 𝒢, starting with the smallest dimensional ones to the largest.
The boundary ∂ W_L^𝒢 = W_L ∖ ( L ∩ T) of W_L^𝒢 is a simple normal crossings divisor <cit.>.
A stratum in the boundary is the intersection of a subset of the irreducible components of the boundary divisor ∂ W_L^𝒢, which is necessarily smooth.
Note that 𝒫∖{0̂, 1̂} itself is a building set, in which case we abuse notation to denote W_L^𝒫=W_L^𝒫∖{0̂, 1̂}.
We now recall as a lemma two facts from the literature to prepare for the proof of Corollary <ref>, which stated that the log canonical divisor K_W_L^𝒢 + ∂ W_L^𝒢 has vanishing higher cohomology.
Let notations be as above.
* If 𝒢 and ℋ are building sets on 𝒫 such that 𝒢⊇ℋ, then there exists a sequence of building sets (𝒢 = 𝒢_1, 𝒢_2, ⋯, 𝒢_ℓ = ℋ) such that W_L^𝒢_i is the blow-up of a stratum in the boundary of W_L^𝒢_i+1 for each i = 1, …, ℓ-1.
* The variety W_L^𝒫 is isomorphic to the vanishing locus in X_E of a global section of _L. Under this isomorphism, we have 𝒪_W_L^𝒫 (K_W_L^𝒫 + ∂ W_L^𝒫) ≃_L|_W_L^𝒫.
(1) is a translation of <cit.> into geometric language under the dictionary provided in <cit.> between the boundary strata structure of W_L^𝒢 and the simplicial complex known as the nested complex of 𝒢.
The first statement of (2) is <cit.>.
The second statement follows from <cit.>, which implies that 𝒯_W_L^𝒫(-log∂ W_L^𝒫) ≃_L|_W_L^𝒫, and _L^∨≃_L from 0→_L →𝒪_X_E^⊕ E→_L → 0.
We first claim that H^i(𝒪_W_L^𝒢(K_W_L^𝒢 + ∂ W_L^𝒢)) ≃ H^i(𝒪_W_L^𝒫(K_W_L^𝒫 + ∂ W_L^𝒫)) for any building set 𝒢.
Let π: W_L^𝒫→ W_L^𝒢 be the composition of blow-down maps given by Lemma <ref>(1).
Recall that, for any blow-up φ: X→ X of a smooth subvariety Y in a smooth variety X, we have K_X = φ^* K_W + (codim_X(Y) -1) E where E is the exceptional divisor <cit.>.
Applying this to each of the blow-down maps making up π, we find π^*(K_W_L^𝒢 + ∂ W_L^𝒢) =
K_W_L^𝒫 + ∂ W_L^𝒫.
Moreover, Lemma <ref>(1) further implies that π_* 𝒪_W_L^𝒫 = 𝒪_W_L^𝒢 and R^iπ_* 𝒪_W_L^𝒫 = 0 for all i>0 <cit.>. Our claim now follows from the projection formula.
To finish, the first statement of Lemma <ref>(2) implies that we have the Koszul resolution
0→_L^∨→⋯→^2 _L^∨→_L^∨→𝒪_X_E→𝒪_W_L^𝒫→ 0.
Since ℰ⊗^i ℰ^∨≃^rank(ℰ) - iℰ for a vector bundle ℰ, twisting the above resolution by _L and noting the second statement of Lemma <ref>(2) gives the resolution
0→𝒪_X_E→_L →^2 _L →⋯→_L→𝒪_W_L^𝒫 (K_W_L^𝒫 + ∂ W_L^𝒫) → 0.
Applying Theorem <ref> now yields the desired corollary by standard homological algebra <cit.>.
§ QUESTIONS
A broader theme behind Question <ref> is to ask: Which sheaf theoretic properties of realizations of matroids extend to all matroids?
We collect some related observations and questions.
We will now assume familiarity with matroid theory.
As in the previous section, we suppose L⊆^E to be loopless to avoid trivialities.
For an arbitrary not necessarily realizable matroid , there are K-classes [_] and [_] in the Grothendieck K-ring of vector bundles on X_E such that [_] = [_L] and [_] = [_L] whenever has a realization L⊆^E <cit.>.
Let us denote by D_-P() = c_1(_), the first Chern class of [_]. See <cit.> for an explanation of the notation D_-P().
If has a realization L, Lemma <ref>(2) states that the log canonical divisor of W_L^𝒫 is D_-P()|_W_L^𝒫.
Even if is not realizable, we may consider the line bundle 𝒪_X_E(D_-P()).
§.§ Immaculate line bundles
We have the following variation of Corollary <ref>.
Let L'⊆^E be a subspace containing L such that L' = L+1, and let ' be the matroid of L'. Then, the line bundle 𝒪_W_L^𝒫(D_-P() - D_-P(')) on W_L^𝒫 satisfies
H^i(𝒪_W_L^𝒫(D_-P() - D_-P('))) = 0 for all i>0, and
H^0(𝒪_W_L^𝒫(D_-P() - D_-P('))) = 1 if ' has loops
0 if ' is loopless.
In particular, the line bundle 𝒪_W_L^𝒫(D_-P() - D_-P(')) on W_L^𝒫 is immaculate, i.e. has no nonzero cohomologies, if ' is loopless.
By construction, from L'⊂ L ⊆^E we have the surjective map _L'→_L.
Let ℒ_', be the kernel, so that we have the short exact sequence
0 →ℒ_',→_L'→_L → 0.
By taking of the sequence, we see that ℒ_',≃𝒪_X_E(D_-P(') - D_-P()). Applying duality and exterior power, we obtain for each p≥ 1 a short exact sequence
0 →^p _L^∨→^p _L'^∨→^p-1_L^∨⊗ℒ^∨_',→ 0.
Applying Theorem <ref> and Remark <ref> to the long exact sequence of cohomologies, we thus obtain H^i(^p-1_L^∨⊗ℒ^∨_',) = 0 for all i>0 and p ≥ 1. Moreover, we have
H^0(^p-1_L^∨⊗ℒ^∨_',) = H^0(^p _L'^∨) - H^0(^p _L^∨) = \binom{|loops(')|}{p} - \binom{|loops()|}{p},
where for the last equality we used that crem_L^∨≃_L^⊥, and L^⊥ realizes the dual matroid ^⊥ whose coloops correspond to the loops of the original matroid . To finish, recall the Koszul resolution ^∙_L^∨→𝒪_W_L^𝒫→ 0 from the proof of Corollary <ref>. Twisting the resolution by ℒ^∨_', and taking cohomology, keeping in mind standard homological algebra <cit.>, one obtains the desired result.
An elementary matroid quotient ↠' consists of two matroids and ' whose ranks differ by 1 such that every flat of ' is a flat of . It is realizable if there is a flag of linear subspaces L' ⊆ L ⊆^E such that L' and L respectively realize ' and .
In light of the corollary above, we ask the following question.
For any elementary matroid quotient ↠', not necessarily realizable, is the line bundle 𝒪_W_L^𝒫(D_-P() - D_-P(')) on W_L^𝒫 immaculate, i.e. has no nonzero cohomologies, if ' is loopless?
Is there a theory of tropical line bundles and their sheaf cohomology on tropical manifolds such that it agrees with the above corollary?
§.§ Log canonical image
We conclude with a discussion of the log canonical image of a wonderful compactification of L.
The line bundle _L ≃𝒪_X_E(D_-P()) is globally generated, with torus-invariant sections in bijection with the bases of (see <cit.> and <cit.>).
We may thus consider the embedded projective variety
X_L = the closure of the image of L ∩ T under the map φ: X_E →(H^0(_L)).
This variety X_L is also known as Kapranov's visible contour <cit.>. When is connected, as we shall assume from now, the variety X_L is the log canonical model of L ∩ T with (étale locally) toric singularities <cit.>. For a building set 𝒢, the map W_L^𝒢→ X_L given by the log canonical bundle of W_L^𝒢 is a (étale locally) toric resolution of singularities,
and thus H^i(𝒪_X_L(ℓ)) ≃ H^i(_L|_W_L^𝒫^⊗ℓ) for all ℓ∈.
In particular, applying Theorem <ref> and Remark <ref> to the Koszul complexes in the proof of Corollary <ref> yields the following.
The ideal sheaf ℐ_X_L satisfies
H^i(ℐ_X_L) = 0 and H^i(ℐ_X_L(1)) = 0 for all i>0,
and hence H^i(𝒪_X_L) =0 and H^i(𝒪_X_L(1)) = 0 for all i>0.
Over characteristic zero, applying <cit.> further implies that H^i(ℐ_X_L(ℓ)) = 0 and H^i(𝒪_X_L(ℓ)) = 0 for all i>0 and ℓ≥ 0.
Moreover, over characteristic zero, by Kawamata–Viehweg vanishing we have H^i(𝒪_X_L(-ℓ)) = 0 for all i< X_L and ℓ >0.
We thus ask the following, part of which is a strengthening of Speyer's question in Remark <ref>.
Suppose has positive characteristic. Is H^i(ℐ_X_L(ℓ)) = 0 and H^i(𝒪_X_L(ℓ))=0 for all i>0 and ℓ≥ 0? Is H^i(𝒪_X_L(-ℓ)) = 0 for all i< X_L and ℓ >0? In particular, is the embedded variety X_L projectively normal and/or arithmetically Cohen-Macaulay?[Matt Larson also proposed this question during the Banff workshop “Algebraic Aspects of Matroid Theory.”]
Given a fixed total order of E, one can show that the restrictions to W_L^𝒫 of the torus-invariant sections of _L are spanned by those that correspond to the nbc-bases of the matroid .
One can moreover show that they not only span but also form a basis of H^0(𝒪_X_L(1)), by using Corollary <ref> and by noting that the quantity
H^0(𝒪_X_L(1)) = H^0 (𝒪_W_L^𝒫 (K_W_L^𝒫 + ∂ W_L^𝒫) ) = ∑_S⊆ E
S contains
a basis of (-1)^|S|-r
is the Möbius invariant T_(1,0) of , which equals the number of nbc-bases of <cit.>.
| http://arxiv.org/abs/2307.05039v1 | 20230711064633 | Strong convergence in the infinite horizon of numerical methods for stochastic differential equations | ["Wei Liu", "Yudong Wang"] | math.NA | ["math.NA", "cs.NA", "math.PR", "65C30, 60H35"] |
The strong convergence of numerical methods for stochastic differential equations (SDEs) for t∈[0,∞) is proved. The result is applicable to any one-step numerical method with the Markov property that has the finite time strong convergence and the uniformly bounded moment. In addition, the convergence of the numerical stationary distribution to the underlying one can be derived from this result. To demonstrate the application of this result, the strong convergence in the infinite horizon of the backward Euler-Maruyama method in the L^p sense for some small p∈ (0,1) is proved for SDEs with super-linear coefficients, which is also a standalone new result. Numerical simulations are provided to illustrate the theoretical results.
§ INTRODUCTION
The strong convergence of numerical methods for stochastic differential equations (SDEs) in a finite time interval has been an essential topic and has attracted a lot of attention in the past decades. For any newly proposed numerical method for SDEs, the finite time strong convergence is always one of the fundamental properties to be investigated, for example for the semi-implicit Euler-Maruyama method <cit.>, the truncated Euler-Maruyama method <cit.>, the tamed Euler method <cit.>, and the fundamental mean-square finite time strong convergence theorem for any one-step method <cit.>. Briefly speaking, for the solution x(t) of some SDE and the corresponding numerical solution X(t) produced by some numerical method, the study of the strong convergence in a finite time interval seeks an upper bound on the difference between x(t) and X(t), i.e. for some positive constants T and p
sup_0≤ t ≤ T𝔼 |x(t) - X(t)|^p ≤ C_T h,
where h is the step size and C_T is a constant depending on T. In most of the existing literature, C_T is an increasing function of T, which means that the above estimate of the error of the numerical method becomes useless as T →∞.
In this paper, by contrast, we try to obtain the error bound with some constant C that is independent of T, i.e.
sup_t ≥ 0𝔼 |x(t) - X(t)|^p ≤ C h,
which is what the title of this paper indicates.
A naive question would be whether every numerical method that has the property (<ref>) naturally possesses the property (<ref>). The obvious answer is no. The geometric Brownian motion provides a quite illustrative example: if 2a+b^2 > 0, then the numerical solution X(t), generated for example by the EM method, to
dx(t) = ax(t)dt + bx(t)dB(t)
satisfies (<ref>) but not (<ref>) for p=2. However, if the relation between a and b is changed to 2a+b^2 < 0, we have both (<ref>) and (<ref>) for p=2. In addition, in the case 2a + (p-1)b^2 <0, both (<ref>) and (<ref>) hold only for some small p ∈ (0,1).
Those observations for the geometric Brownian motion give us the hint that, if the underlying solution has some moment boundedness property in the infinite horizon, then there could be some numerical method that has the property (<ref>) as well as the property (<ref>). SDE models that naturally possess such a boundedness property are not rare. SDE models used to describe populations, such as the stochastic susceptible-infected-susceptible epidemic model <cit.>, usually have their solutions naturally bounded within a finite region. In addition, the above discussion also suggests that the exponent p may affect the results.
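The following Python sketch (added for illustration, not from the paper) makes the geometric Brownian motion dichotomy concrete by comparing the EM approximation with the exact solution driven by the same Brownian increments over increasingly long time horizons; the parameter values, step size and sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def em_vs_exact(a, b, p, T, h=1e-3, x0=1.0, M=2000):
    """Monte Carlo estimate of E|x(T) - X(T)|^p for dx = a x dt + b x dB,
    with the exact solution and the EM approximation sharing the increments."""
    n = int(T / h)
    X = np.full(M, x0)            # EM paths
    W = np.zeros(M)               # driving Brownian motions
    for _ in range(n):
        dW = np.sqrt(h) * rng.standard_normal(M)
        X = X + a * X * h + b * X * dW
        W = W + dW
    x_exact = x0 * np.exp((a - 0.5 * b ** 2) * (n * h) + b * W)
    return np.mean(np.abs(x_exact - X) ** p)

for T in (1.0, 5.0, 10.0):
    stable   = em_vs_exact(a=-1.0, b=1.0, p=2, T=T)   # 2a + b^2 = -1 < 0
    unstable = em_vs_exact(a= 1.0, b=1.0, p=2, T=T)   # 2a + b^2 =  3 > 0
    print(f"T={T:5.1f}  E|err|^2 (stable) = {stable:.2e}   (unstable) = {unstable:.2e}")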
This paper is devoted to studying numerical methods for SDEs that converge strongly in the infinite horizon, i.e. satisfy (<ref>). Since (<ref>) has been well studied for many numerical methods, we do not want to discard those nice results in this paper. Hence, our strategy for proving (<ref>) for a given numerical method is to assume that (<ref>) holds, together with some moment boundedness properties of the underlying and numerical solutions.
It is clear that the strong convergence (<ref>) has its own importance. Moreover, strong convergence in the infinite horizon may help to transfer properties of the underlying equations to the numerical methods. For example, if the underlying SDE is stable in distribution, then with the help of (<ref>) we can conclude that the numerical solution is also stable in distribution (see Section 2 for a more detailed discussion). Other applications of such an infinite-horizon result include approximating the mean exit time of the underlying solution from a bounded domain; as mentioned on page 124 of <cit.>, finite-time convergence does not apply to this problem. It should also be mentioned that such error estimates for t ∈ [0,∞) of numerical methods for SDEs have been attracting increasing attention; see for example <cit.> and <cit.>.
The main contribution of this paper is that we prove a new theorem that connects (<ref>) and (<ref>) without giving much attention to the detailed structures of the numerical methods and the coefficients of the underlying SDEs.
To demonstrate the application of the new theorem, we use the backward Euler-Maruyama (BEM) method as an example. For some small enough p, the BEM method is proved to be convergent in the infinite horizon for SDEs with both superlinear drift and diffusion coefficients. This is also a standalone new result, as in this paper the coefficient of the linear term in the drift is allowed to be positive, which is different from the existing result <cit.>.
This paper is organized as follows. The main theorems and their proofs are given in Section 2. The strong convergence in the infinite horizon of the BEM method is proved in Section 3. Numerical results are displayed in Section 4. Section 5 concludes the paper and discusses future work.
§ ASSUMPTIONS AND MAIN RESULTS
Throughout this paper, we let (Ω, F,ℙ) be a complete probability space with a filtration { F_t}_t ≥ 0 satisfying the usual conditions (that is, it is right continuous and F_0 contains all ℙ-null sets). Let W(t) be a scalar Brownian motion. Let |·| and ⟨·,·⟩ denote the Euclidean norm and inner product in ℝ^d.
In this paper, we consider the d-dimensional Itô SDE
dx(t)=f(x(t))dt+g(x(t))dW(t), t ≥ 0,
with initial value x(0)=x_0 ∈ℝ^d, where f:ℝ^d →ℝ^d and g:ℝ^d →ℝ^d. The underlying and numerical solutions of (<ref>) are sometimes denoted by x(t) and X_k, and sometimes, to emphasize the initial value x_0, by x(t, x_0) and X_k^x_0. Now we list the following conditions.
(i) There is a numerical method that can be used to approximate the solution of (<ref>), and the numerical solution {X_k}_k ≥ 0 defined by this method is a time-homogeneous Markov process.
(ii) The underlying solution of (<ref>) is attractive, i.e., for some p > 0 and any compact subset K ⊂ℝ^d, any two underlying solutions with different initial values satisfy
𝔼(|x(t,x_0)-x(t, y_0)|^p ) ≤𝔼(|x_0-y_0|^p )e^-M_1t,
where M_1 is a positive constant and (x_0, y_0) ∈ K × K.
(iii) On the finite time interval [0, T], given a positive integer n, let h=T/n; then for some p > 0 consistent with (ii) and any j= 1, 2, ⋯ , n, the pth moment of the error of the numerical solution satisfies
𝔼(|x(jh) - X_j|^p )
≤ C_T Ψ(𝔼(|x(0)|^r_1)) h^q,
where C_T is a constant depending on T, q is a positive constant, r_1 is a nonnegative constant, and Ψ (·) is a continuous function.
(iv) The numerical solution is uniformly moment bounded, i.e., for some r_1 > 0 and any k ∈ℕ
𝔼(|X^x_0_k|^r_1 ) ≤ M_2.
where r_1 is consistent with (iii) and M_2 is a positive constant depending on r_1 and x_0.
Suppose Condition <ref> holds. Then the numerical solution {X_k}_k ≥ 0 converges uniformly to the underlying solution of (<ref>), which means that for any fixed h > 0
sup_j ∈ℕ 𝔼|x(jh, x_0)-X_j^x_0|^p ≤C h^q
where C is a positive constant.
First, choose a fixed T>plog2/M_1. For any i ∈ℕ, let x(t,X_in^x_0 ) denote a new underlying solution of (<ref>) with initial data X_in^x_0. Then, by the third and fourth assumptions of Condition <ref>, Ψ is a continuous function evaluated on the bounded set [0, M_2], so that for any x ∈ [0, M_2]
Ψ(x) ≤ M_3,
where M_3 is a positive constant.
This implies that for any jh ∈ [0,T]
𝔼|x(jh,x_0) - X_j^x_0|^p
≤ C_T Ψ(𝔼(|x_0|^r_1) ) h^q ≤ C_T M_3 h^q,
As for any T+jh ∈ [T,2T], by the elementary inequality
(a+b)^p ≤ (2(a∨ b))^p ≤ 2^p (a^p+b^p),
we derive that
𝔼(|x(T+jh,x_0) - X_n+j^x_0|^p)
≤𝔼(|x(T+jh,x_0) -x(jh,X_n^x_0) + x(jh,X_n^x_0) - X_n+j^x_0|^p)
≤ 2^p 𝔼(|x(T+jh,x_0) -x(jh,X_n^x_0)|^p) + 2^p 𝔼(|x(jh,X_n^x_0) - X_n+j^x_0|^p).
Since the underlying solution is time-homogeneous, then by (<ref>) and the second assumption of Condition <ref>, we have
𝔼(|x(T+jh,x_0) -x(jh,X_n^x_0)|^p)
=𝔼(|x(jh,x(T,x_0)) -x(jh,X_n^x_0)|^p)
≤𝔼(|x(T,x_0) - X_n^x_0|^p) e^-M_1 jh
≤ e^-M_1jhC_T M_3 h^q.
Similarly, by the first and third assumptions of Condition <ref> as well as (<ref>), we get
𝔼(|x(jh,X_n^x_0) - X_n+j^x_0|^p)
= 𝔼(|x(jh,X_n^x_0) - X_j^X_n^x_0|^p)
≤ C_T Ψ(𝔼(|X_n^x_0|^r_1) ) h^q
≤ C_T M_3 h^q.
Inserting (<ref>) and (<ref>) into (<ref>) yields
𝔼(|x(T+jh,x_0) - X_n+j^x_0|^p)
≤ 2^p e^-M_1jhC_T M_3 h^q + 2^p C_T M_3 h^q.
Next, since T>plog2/M_1, let
γ :=2^p e^-M_1T < 1.
Continuing this approach, for any 2T+jh ∈ [2T, 3T]
𝔼(|x(2T+jh,x_0) - X_2n+j^x_0|^p)
≤ 2^p 𝔼(|x(2T+jh,x_0) -x(jh,X_2n^x_0)|^p) + 2^p 𝔼(|x(jh,X_2n^x_0) - X_2n+j^x_0|^p).
By (<ref>) and the second assumption of Condition <ref>
𝔼(|x(2T+jh,x_0) -x(jh,X_2n^x_0)|^p)
≤𝔼(|x(2T,x_0) - X_2n^x_0|^p) e^-M_1 jh
≤ e^-M_1jh(γ C_T M_3 h^q + 2^p C_T M_3 h^q).
By the first and third assumptions of Condition <ref> as well as (<ref>) again, we have
𝔼(|x(jh,X_2n^x_0) - X_2n+j^x_0|^p) ≤ C_T Ψ(𝔼(|X_2n^x_0|^r_1) ) h^q
≤ C_T M_3 h^q.
Inserting (<ref>) and (<ref>) into (<ref>), we get
𝔼(|x(2T+jh,x_0) - X_2n+j^x_0|^p)
≤ 2^p e^-M_1jh(γ C_T M_3 h^q + 2^p C_T M_3 h^q) + 2^p C_T M_3 h^q.
Similarly, for any 3T+jh ∈ [3T,4T], we have
𝔼(|x(3T+jh,x_0) - X_3n+j^x_0|^p)
≤ 2^p e^-M_1jh(γ^2 C_T M_3 h^q + γ 2^p C_T M_3 h^q + 2^p C_T M_3 h^q)
+ 2^p C_T M_3 h^q.
Now, we assume that for any (m-1)T + jh ∈ [(m-1)T, mT], the following inequality holds
𝔼(|x((m-1)T+jh,x_0) - X_(m-1)n+j^x_0|^p)
≤ 2^p e^-M_1jh(γ^m-2 C_T M_3 h^q + 2^p C_T M_3 h^q ×∑_i = 0^m-3γ ^i) + 2^p C_T M_3 h^q .
Then for any mT+jh ∈ [mT, (m+1)T],
𝔼(|x(mT+jh,x_0) - X_mn+j^x_0|^p)
≤ 2^p 𝔼(|x(mT+jh,x_0) -x(jh,X_mn^x_0)|^p) + 2^p 𝔼(|x(jh,X_mn^x_0) - X_mn+j^x_0|^p).
By (<ref>) and Condition <ref> as well as (<ref>), we obtain
𝔼(|x(mT+jh,x_0) -x(jh,X_mn^x_0)|^p)
=𝔼(|x(jh,x(mT,x_0)) -x(jh,X_mn^x_0)|^p)
≤𝔼(|x(mT,x_0) - X_mn^x_0|^p) e^-M_1 jh
≤ e^-M_1jh(γ^m-1 C_T M_3 h^q + 2^p C_T M_3 h^q ×∑_i = 0^m-2γ ^i) ,
and
𝔼(|x(jh,X_mn^x_0) - X_mn+j^x_0|^p)
≤ C_T Ψ(𝔼(|X_mn^x_0|^r_1) ) h^q
≤ C_T M_3 h^q.
Inserting (<ref>) and (<ref>) into (<ref>), we have
𝔼(|x(mT+jh,x_0) - X_mn+j^x_0|^p)
≤ 2^p e^-M_1jh(γ^m-1 C_T M_3 h^q + 2^p C_T M_3 h^q ×∑_i = 0^m-2γ ^i) + 2^p C_T M_3 h^q.
Since 0<γ<1, there exists a positive constant upper bound M_4 for the series ∑_i = 0^∞γ ^i. Then for any m, k ∈ℕ, let
C := 2^pC_T M_3 +2^2p C_T M_3 M_4 + 2^p C_T M_3.
The desired assertion follows.
(1) The uniform convergence rate is consistent with the finite-time convergence rate.
(2) If the second assumption of Condition <ref> is replaced by the global attractivity of the numerical solution, (<ref>) still holds.
Next, we will discuss the connection between the uniform convergence and the stationary distribution.
Before we proceed, let us introduce some necessary notions about the stationary distribution. For any x ∈ℝ^d and any Borel set 𝐵⊂ℝ^d, the transition probability kernel of the underlying solution x(t) and the numerical solution X_k with initial value x(0)=X_0=x_0 is defined as
ℙ_t(x_0,𝐵) := ℙ(x(t) ∈𝐵|x(0) = x_0) and ℙ_k(x_0,𝐵) := ℙ(X_k ∈𝐵|X_0 = x_0).
Denote the family of all probability measures on ℝ^d by 𝒫(ℝ^d). Define by 𝕃 the family of mappings 𝐹 : ℝ^d →ℝ satisfying
|𝐹(x) - 𝐹(y)| ≤ |x-y| and |𝐹(x)| ≤ 1,
for any x,y ∈ℝ^d. For ℙ_1, ℙ_2 ∈𝒫(ℝ^d), define metric 𝑑_𝕃 by
𝑑_𝕃(ℙ_1 , ℙ_2) = sup_𝐹∈𝕃| ∫_ℝ^d𝐹(x) ℙ_1(dx) - ∫_ℝ^d𝐹(x)ℙ_2(dx) |.
The weak convergence of probability measures can be characterized in terms of the metric 𝑑_𝕃 <cit.>. That is, a sequence of probability measures {ℙ_k}_k ≥ 1 in 𝒫(ℝ^d) converges weakly to a probability measure ℙ∈𝒫(ℝ^d) if and only if
lim_ k →∞𝑑_𝕃(ℙ_k, ℙ) = 0.
Then we define the stationary distribution for the underlying solution of (<ref>) by using the concept of weak convergence.
For any initial value x ∈ℝ^d, the underlying solution of (<ref>) is said to have a stationary distribution π∈𝒫(ℝ^d) if the transition probability measure ℙ_t(x,·) converges weakly to π( ·) as t →∞ for every x ∈ℝ^d, that is
lim_t →∞(sup_𝐹∈𝕃|𝔼(𝐹(x(t)))-𝔼_π(𝐹)|) = 0,
where
𝔼_π(𝐹) = ∫_ℝ^d𝐹(y)π(dy).
We now impose an additional condition.
The underlying solution is uniformly moment bounded, i.e., for any t ≥ 0 and some p > 0 consistent with the second assumption of Condition <ref>
𝔼(|x(t,x_0)|^p) ≤ M_5,
where M_5 is a positive constant depending on p and x_0.
Then, from Theorem 3.1 in <cit.>, we know that the solution of (<ref>) has a unique stationary distribution denoted by π (·) under Conditions <ref> and <ref>.
Thus, we have the following theorem.
Suppose Conditions <ref> and <ref> hold.
Then, the probability measure of the numerical solution converges to the stationary distribution of the underlying solution, that is
lim_jh →∞, h → 0𝑑_𝕃(ℙ_j(x_0,·), π(·)) = 0.
First, since the underlying solution of (<ref>) has a unique stationary distribution π, for any ϵ >0 there exists a T_1 >0 such that for any jh ≥ T_1
𝑑_𝕃(ℙ_jh(x_0, ·), π(·)) ≤ϵ/2.
Next, from the definition of 𝕃, for any F ∈𝕃, we can get
|𝔼(F(x(jh))) - 𝔼(F(X_j))| ≤𝔼(2 ∧ |x(jh) - X_j|).
Since the numerical solution satisfies (<ref>), choose h sufficiently small such that Ch^q ≤min{ϵ/8, ϵ^p/2^p}. Then, if p ≥ 1, we have
𝔼(2 ∧ |x(jh) - X_j|) ≤𝔼(|x(jh) - X_j|)≤ (𝔼 |x(jh) - X_j|^p)^1/p≤ϵ/2,
and if p ∈ (0, 1), we have
𝔼(2 ∧ |x(jh) - X_j|) ≤ 2ℙ(|x(jh) - X_j| ≥ 2) + 𝔼(I_{|x(jh) - X_j| <2}|x(jh) - X_j|)
≤ 2^1-p𝔼 |x(jh) - X_j|^p + 𝔼(2^1-p |x(jh) - X_j|^p)
≤ 2^2-p𝔼 |x(jh) - X_j|^p
≤ϵ/2.
Hence, it follows from (<ref>), (<ref>), (<ref>) that
|𝔼(F(x(jh))) - 𝔼(F(X_j))| ≤ϵ/2.
Consequently, it is obvious that
sup_𝐹∈𝕃|𝔼(F(x(jh))) - 𝔼(F(X_j))| ≤ϵ/2,
that is
𝑑_𝕃(ℙ_j(x_0,·), ℙ_jh(x_0, ·)) ≤ϵ/2.
Then, the triangle inequality yields
𝑑_𝕃(ℙ_j(x_0,·), π(·)) ≤ϵ.
The proof is hence completed.
§ THE BEM METHOD
In this section, the BEM method is used as an example. Under conditions weaker than those in existing results, we not only obtain the uniform convergence of the BEM method but also prove that it can be used to numerically approximate the stationary distribution of the underlying solution. We now make the following assumptions.
There exists a pair of constants q ∈ [1,∞) and L_1 ∈ (0,∞) such that
|f(x_1)-f(x_2)| ≤ L_1(1+|x_1|^q-1+|x_2|^q-1)|x_1-x_2|
for all x_1,x_2 ∈ℝ^d.
There exist c_1, c_2, c_3∈ (0,∞) and l_1 ≥max{2q, 3}, l_2 ≥ 3 such that
2⟨ x,f(x) ⟩ +l_1|g(x)|^2 ≤ c_1|x|^2 + c_2
2⟨ x_1-x_2,f(x_1)-f(x_2) ⟩ + l_2|g(x_1)-g(x_2)|^2≤ c_3|x_1-x_2|^2
for all x, x_1, x_2 ∈ℝ^d.
From Assumptions <ref> and <ref>, we can get the following result <cit.>:
|g(x_1)-g(x_2)| ≤ L_2(1+|x_1|^q-1+|x_2|^q-1)|x_1-x_2|
for all x_1, x_2 ∈ℝ^d, where L_2 is a positive constant depending on L_1 and l_2.
From (<ref>) and (<ref>), we further deduce the following polynomial growth bound
|f(x)| ∨ |g(x)| ≤ L_3 (1 + |x|^q),
where L_3 is a positive constant depending on L_1, L_2, f(0) and g(0).
This means that under the above assumptions, the solution of (<ref>) is uniquely determined <cit.>.
The BEM method applied to (<ref>) produces approximations X_k ≈ x(kh) by setting X_0=x(0)=x_0 and forming
X_k=X_k-1+f(X_k)h+g(X_k-1) Δ W_k-1,
where h > 0 is the timestep and Δ W_k-1 :=W(kh)-W((k-1)h) is the Brownian increment.
We point out that the BEM method (<ref>) is well-defined under (<ref>) (see, e.g., <cit.>). Following the same argument as in Theorem 2.7 of <cit.>, we obtain the following result.
The BEM method (<ref>) is a homogeneous Markov process.
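Since the drift enters implicitly, each BEM step requires solving a nonlinear equation in X_k. The sketch below is our own illustration, not part of the paper: the helper names are ours, scipy's fsolve is one possible choice of nonlinear solver, and the cubic coefficients at the end are merely in the spirit of the superlinear coefficients considered in this section.

```python
import numpy as np
from scipy.optimize import fsolve

def bem_step(x_prev, f, g, h, dW):
    """One backward Euler-Maruyama step: solve X = x_prev + f(X)*h + g(x_prev)*dW."""
    c = x_prev + g(x_prev) * dW                          # known, explicit part
    return fsolve(lambda x: x - h * f(x) - c, x_prev)    # x_prev is a natural initial guess

def bem_path(x0, f, g, h, n_steps, seed=0):
    """Simulate one BEM trajectory driven by freshly sampled Brownian increments."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        x = bem_step(x, f, g, h, dW)
        path.append(x.copy())
    return np.array(path)

# Illustrative superlinear example (assumed here for demonstration only):
# drift f(x) = x - x^3 and diffusion g(x) = x.
path = bem_path(1.0, lambda x: x - x ** 3, lambda x: x, h=1e-3, n_steps=1000)
```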
§.§ The uniform moment boundedness.
In this subsection, we need some additional assumptions on the diffusion coefficient g for the proofs.
There exist three constants α > 0, D > 0, and C_3 ≥ 0 such that for any x∈ℝ^d
(1-l_1)|g(x)|^2/D+|x|^2+α|g(x)|^2-2|⟨ x,g(x) ⟩|^2/(D+|x|^2+α|g(x)|^2)^2≤ k_1 + P(x)/(D+|x|^2+α|g(x)|^2)^2 ,
where k_1 is a constant with k_1+c_1 < 0 and P(x) is a polynomial of x that satisfies the following condition for a sufficiently small constant p
sup_x∈ℝ^d| P(x)/(D+|x|^2+α|g(x)|^2)^2-p/2| ≤ C_3.
There exists a constant β > 0 such that for any x, y ∈ℝ^d
(1-l_2)|g(x)-g(y)|^2/|x-y|^2+β|g(x)-g(y)|^2-2 |⟨ x-y,g(x)-g(y) ⟩|^2/(|x-y|^2+β|g(x)-g(y)|^2)^2≤ k_2,
where k_2 is a constant with k_2+c_3 < 0 and x ≠ y.
We emphasize that the family of drift and diffusion functions that satisfy (<ref>) - (<ref>) is large. For example, for any polynomial g(x) = a_0 + ∑_i=0^n a_i x^2i+1 with a_i > 0 and n ≥ 0, if we choose α and β sufficiently small, then k_1 and k_2 will be very close to -a_1^2 (l_1 + 1) and - a_1^2 (l_2 + 1), and P (x) is usually a polynomial of degree less than (D+|x|^2+α|g(x)|^2)^2-p/2. We can choose c_1 and c_3 that are less than a_1^2 (l_1 + 1) and a_1^2 (l_2 + 1). Then for this g(x), it is not difficult to see that (<ref>) and (<ref>) are satisfied and there are many f(x) satisfying Assumptions <ref> and <ref>.
Suppose (<ref>) and (<ref>) hold. Then there exists a pair of constants (p^*, h^*) with p^* ∈ (0, 1) and h^* ∈ (0, 1) such that for any p ∈ (0, p^*], h ∈ (0, h^*] and D ∈ (0,∞), the BEM solution (<ref>) satisfies
𝔼|X_k|^p ≤𝔼(D+|x_0|^2+l_1h|g(x_0)|^2 )^p/2-4M/p(k_1+c_1),
where M is a positive constant depending on D, c_2 and p.
First, by (<ref>) and (<ref>) , we have
|X_k|^2-|X_k-1|^2+|X_k-X_k-1|^2
=2 ⟨ X_k-X_k-1,X_k ⟩
=2 ⟨ f(X_k),X_k ⟩ h + 2 ⟨ g(X_k-1) Δ W_k-1,X_k ⟩
=2 ⟨ f(X_k),X_k ⟩ h + 2 ⟨ g(X_k-1) Δ W_k-1,X_k -X_k-1⟩ + 2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩
≤ (c_2+c_1|X_k|^2)h - l_1|g(X_k)|^2 h+|g(X_k-1) Δ W_k-1|^2+|X_k-X_k-1|^2 +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩.
Canceling the same terms on both sides gives
(1-c_1h)|X_k|^2+l_1|g(X_k)|^2 h
≤ |X_k-1|^2+|g(X_k-1) Δ W_k-1|^2
+2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩ +c_2 h.
Since l_1 ≥ 3 and c_1 > 0, we see that for any constant D > 0
(1-c_1h)(D+|X_k|^2+l_1|g(X_k)|^2 h ) ≤ (D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )(1+ξ _k-1),
where
ξ _k-1=|g(X_k-1) Δ W_k-1|^2-l_1|g(X_k-1)|^2 h +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩ +c_2 h/D+|X_k-1|^2+l_1|g(X_k-1)|^2 h .
It is obvious that ξ_k-1>-1. Then taking conditional expectations with respect to ℱ_(k-1)h in (<ref>) leads to
𝔼((D+|X_k|^2+l_1|g(X_k)|^2 h)^p/2|ℱ_(k-1)h)
≤ (1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2𝔼( (1+ξ _k-1)^p/2 | ℱ_(k-1)h)
≤ (1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2𝔼( 1+p/2ξ _k-1+p(p-2)/8ξ _k-1^2
+p(p-2)(p-4)/2^3 × 3!ξ _k-1^3 | ℱ_(k-1)h),
where in the last step we used the following inequality
(1+u)^p/2≤ 1+p/2u+p(p-2)/8u^2+p(p-2)(p-4)/2^3 × 3!u^3, ∀ p ∈(0, 1),u∈ (-1,∞).
Since Δ W_k-1 is independent of ℱ_(k-1)h, for any i ∈ℕ^+, it is not difficult to see that
𝔼((Δ W_k-1)^2i-1|ℱ_(k-1)h)=𝔼((Δ W_k-1)^2i-1)=0,
𝔼(|Δ W_k-1|^2i|ℱ_(k-1)h)=𝔼(|Δ W_k-1|^2i)=(2i-1)!! h^i.
This, together with (<ref>) yields
𝔼(ξ _k-1 | ℱ_(k-1)h)
= 𝔼(|g(X_k-1) Δ W_k-1|^2-l_1|g(X_k-1)|^2 h +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩ +c_2 h/D+|X_k-1|^2+l_1|g(X_k-1)|^2 h | ℱ_(k-1)h)
= (1-l_1)|g(X_k-1)|^2 h +c_2h/D+|X_k-1|^2+l_1|g(X_k-1)|^2 h.
Similarly, we can get
𝔼(ξ _k-1^2 | ℱ_(k-1)h)
=𝔼((D+|X_k-1|^2+l_1|g(X_k-1)|^2 h)^-2((3-2l_1+l_1^2)|g(X_k-1)|^4 h^2
+4|⟨ g(X_k-1)Δ W_k-1, X_k-1⟩|^2
+ 2(1-l_1)c_2|g(X_k-1)|^2 h^2+c_2^2h^2) | ℱ_(k-1)h)
=𝔼((D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^-2(2|g(X_k-1)|^4 h^2 + (1-l_1)^2|g(X_k-1)|^4 h^2
+ 2(1-l_1)c_2|g(X_k-1)|^2 h^2+c_2^2h^2
+4|⟨ g(X_k-1)Δ W_k-1, X_k-1⟩|^2) | ℱ_(k-1)h)
≥4|⟨ g(X_k-1), X_k-1⟩|^2h/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^2
and
𝔼(ξ _k-1^3 | ℱ_(k-1)h)
=𝔼([|g(X_k-1) Δ W_k-1|^2-l_1|g(X_k-1)|^2 h +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩ +c_2 h ]^3/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3 | ℱ_(k-1)h)
:= B_1+B_2+B_3+B_4.
In the sequel, we will estimate B_1, B_2, B_3, B_4 separately. Firstly, we have
B_1 =𝔼([|g(X_k-1) Δ W_k-1|^2-l_1|g(X_k-1)|^2 h +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩]^3/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3 | ℱ_(k-1)h)
= (15-9l_1 + 3l_1^2 -l_1^3)|g(X_k-1)|^6 h^3 + 12(3-l_1)|g(X_k-1)|^2 |⟨ g(X_k-1),X_k-1⟩|^2 h^2/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3.
Then,
B_2 = 𝔼(3[|g(X_k-1) Δ W_k-1|^2-l_1|g(X_k-1)|^2 h +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩]^2 × c_2h/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3 | ℱ_(k-1)h)
= 3(3-2l_1+l_1^2)|g(X_k-1)|^4 h^2 × c_2 h +12|⟨ g(X_k-1),X_k-1⟩|^2 h × c_2 h/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3.
Next,
B_3 = 𝔼(3[|g(X_k-1) Δ W_k-1|^2-l_1|g(X_k-1)|^2 h +2 ⟨ g(X_k-1) Δ W_k-1,X_k-1⟩]×(c_2h)^2 /(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3 | ℱ_(k-1)h)
= 3(1 - l_1)|g(X_k-1)|^2 h ×(c_2 h )^2/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3 .
Finally,
B_4 = 𝔼((c_2h)^3/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3 | ℱ_(k-1)h)
= (c_2 h )^3/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3.
Since 15-9l_1 + 3l_1^2 -l_1^3 < 0 and 1-l_1 < 0 as well as 3 - l_1 ≤ 0 when l_1 ≥ 3, by Young's inequality |a|^(2-p)/(6-p)|b|^4/(6-p)≤(2-p)/(6-p)|a| + 4/(6-p)|b| ≤ |a| +|b| and |a||b| ≤ |a|^2+|b|^2 for any a,b ∈ℝ^d, inserting (<ref>)- (<ref>) into (<ref>) leads to
𝔼(ξ _k-1^3 | ℱ_(k-1)h)
≤3(3-2l_1+l_1^2)|g(X_k-1)|^4 c_2 h^3 +12|⟨ g(X_k-1),X_k-1⟩|^2 c_2 h^2 + (c_2 h)^3/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^3
≤1/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^p/2( 3(3-2l_1+l_1^2)|g(X_k-1)|^4 c_2 h^3/D^1-p/2× l_1^2|g(X_k-1)|^4 h^2
+ 12|g(X_k-1)|^2 |X_k-1|^2 c_2 h^2/D^1-p/2× 4 l_1 |g(X_k-1)|^2 |X_k-1|^2 h + (c_2 h)^3/D^3-p/2)
≤1/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^p/2(3 c_2 h/D^1-p/2 + 3 c_2 h/D^1-p/2 l_1 + c_2 ^3 h^3/D^3-p/2).
Choosing h sufficiently small such that l_1 h ≤α, and inserting (<ref>), (<ref>) and (<ref>) into (<ref>), we have
𝔼((D+|X_k|^2+l_1|g(X_k)|^2 h)^p/2|ℱ_(k-1)h)
≤(1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2( 1 + p/2(1-l_1)|g(X_k-1)|^2 h +c_2h/D+|X_k-1|^2+l_1|g(X_k-1)|^2 h
+p(p-2)/84|⟨ g(X_k-1), X_k-1⟩|^2h/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^2 +p(p-2)(p-4)/2^3 × 3!1/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^p/2
×(3 c_2 h/D^1-p/2 + 3c_2 h/D^1-p/2 l_1 + c_2 ^3 h^3/D^3-p/2))
≤(1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2( 1+p/2(1-l_1)|g(X_k-1)|^2 h/D+|X_k-1|^2+α|g(X_k-1)|^2
+p(p-2)/84|⟨ g(X_k-1), X_k-1⟩|^2h/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2+p/2c_2h/D+|X_k-1|^2+l_1|g(X_k-1)|^2 h
+p(p-2)(p-4)/2^3 × 3!1/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^p/2(3 c_2 h/D^1-p/2 + 3c_2 h/D^1-p/2 l_1 + c_2 ^3 h^3/D^3-p/2))
≤(1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2
×( 1+ph/2((1-l_1)|g(X_k-1)|^2 /D+|X_k-1|^2+α|g(X_k-1)|^2
-2|⟨ g(X_k-1), X_k-1⟩|^2/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2)
+ p^2h/2|⟨ g(X_k-1), X_k-1⟩|^2/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2
+p/2c_2h/D+|X_k-1|^2+l_1|g(X_k-1)|^2 h
+p(p-2)(p-4)/2^3 × 3!1/(D+|X_k-1|^2+l_1|g(X_k-1)|^2 h )^p/2(3 c_2 h/D^1-p/2 + 3c_2 h/D^1-p/2 l_1 + c_2 ^3 h^3/D^3-p/2)).
Note that, by (<ref>) as well as the elementary inequalities a^2 +b^2 ≥ 2|a| |b| and ⟨ a, b ⟩≤ |a||b|, we have
(1-l_1)|g(X_k-1)|^2 /D+|X_k-1|^2+α|g(X_k-1)|^2
-2|⟨ g(X_k-1), X_k-1⟩|^2/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2≤ k_1 + P(X_k-1)/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2,
and
|⟨ g(X_k-1), X_k-1⟩|^2/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2 ≤|X_k-1 |^2| g(X_k-1)|^2/(|X_k-1|^2+α|g(X_k-1)|^2 )^2
≤|X_k-1 |^2| g(X_k-1)|^2/(2√(α)|X_k-1 || g(X_k-1)|)^2
=1/4α.
If we choose p sufficiently small, then by (<ref>) again,
(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2P(X_k-1)/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2≤ C_3.
Therefore, substituting (<ref>), (<ref>) and (<ref>) into (<ref>) and letting h ≤ 1/2c_1, we obtain
𝔼((D+|X_k|^2+l_1|g(X_k)|^2 h)^p/2|ℱ_(k-1)h)
≤(1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2( 1+p/2k_1h+p^2/2h/4α)
+(1-c_1h)^-p/2(ph/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2)^p/2P(X_k-1)/(D+|X_k-1|^2+α|g(X_k-1)|^2 )^2h +p/2c_2h/D^1-p/2
+p(p-2)(p-4)/2^3 × 3!(3 c_2 h/D^1-p/2 + 3c_2 h/D^1-p/2 l_1 + c_2 ^3 h^3/D^3-p/2)).
≤(1-c_1h)^-p/2(D+|X_k-1|^2+l_1|g(X_k-1)|^2h)^p/2( 1+p/2k_1h+p^2h/8α)+ Mh,
where M is a positive constant depending on D, C_3, c_2 and p. Furthermore, for any p ∈ (0,1), h ∈ (0,1/c_1) and κ∈ [-1/2,1/2], it is obvious that
(1-c_1h)^p/2≥ 1-p/2c_1h-K_1h^2
as well as
1/1-κ≤ 1+κ+κ^2∑ _i=0^∞(1/2)^i=1+κ+2κ^2,
where K_1=K_1(c_1,p) is a positive constant. Define ϵ :=1/4|k_1+c_1|. If necessary, we choose h^* and p^* sufficiently small such that for any h ∈ (0, h^*] and p ∈ (0, p^*],
p^∗/4α≤ϵ,
|p/2c_1h+K_1h^2| ≤1/2,
and
K_1h+2(p/2c_1+K_1h)^2h+p/2(k_1+ϵ)((p/2c_1h+K_1h^2)
+2(p/2c_1h+K_1h^2)^2)
≤p/2ϵ.
Therefore, combining (<ref>)-(<ref>) together, we obtain that
𝔼(D+|X_k|^2+l_1h|g(X_k)|^2 )^p/2
≤ (1+p/2k_1h+p/2ϵ h/1-(p/2c_1h+K_1h^2))𝔼(D+|X_k-1|^2+l_1h|g(X_k-1)|^2 )^p/2+Mh
≤ (1+p/2k_1h+p/2ϵ h)(1+(p/2c_1h+K_1h^2)+2(p/2c_1h+K_1h^2)^2 )𝔼(D+|X_k-1|^2 +l_1h|g(X_k-1)|^2 )^p/2+Mh
≤ (1+p/2(k_1+c_1+ϵ)h+p/2ϵ h)𝔼(D+|X_k-1|^2+l_1h|g(X_k-1)|^2 )^p/2+Mh
≤ (1+p/4(k_1+c_1)h)𝔼(D+|X_k-1|^2+l_1h|g(X_k-1)|^2 )^p/2+Mh.
Let h < -4/(p(k_1+c_1)). By iteration, we get
𝔼(D+|X_k|^2+l_1h|g(X_k)|^2 )^p/2
≤ (1+p/4(k_1+c_1)h)^k𝔼(D+|x_0|^2+l_1h|g(x_0)|^2 )^p/2+1-(1+p/4(k_1+c_1)h)^k/1-(1+p/4(k_1+c_1)h)Mh.
This implies that
𝔼(|X_k|^p )≤𝔼(D+|x_0|^2+l_1h|g(x_0)|^2 )^p/2-4M/p(k_1+c_1).
The proof is completed.
Suppose Assumptions (<ref>) and (<ref>) hold. Then there exists a constant p^* ∈ (0,1) such that for any p ∈ (0, p^*] and D ∈ (0, ∞), the solution of (<ref>) satisfies
𝔼|x(t)|^p≤𝔼(D+|x_0|^2)^p/2+K_2
for any t > 0, where K_2 is a constant depending on p, c_1, c_2 and D.
First, let ϵ := p|c_1+k_1|/4; by (<ref>), (<ref>) and the Itô formula, we derive that
𝔼(e^ϵ t(D+|x(t)|^2)^p/2)
≤𝔼(D+|x_0|^2)^p/2+𝔼∫_0^t (ϵ e^ϵ s (D+|x(s)|^2)^p/2+p/2e^ϵ s (D+|x(s)|^2)^p/2-1
(2⟨ x(s),f(x(s))⟩+|g(x(s))|^2 + (p-2)|⟨ x(s),g(x(s))⟩|^2/D+|x(s)|^2))ds
where
2⟨ x(s),f(x(s))⟩+|g(x(s))|^2+ (p-2)|⟨ x(s),g(x(s))⟩|^2/D+|x(s)|^2
=(2⟨ x(s),f(x(s))⟩+l_1|g(x(s))|^2 )+(D+|x(s)|^2)((1-l_1)|g(x(s)))|^2/D+|x(s)|^2 + (p-2)|⟨ x(s),g(x(s))⟩|^2/(D+|x(s)|^2)^2)
≤ c_2+c_1|x(s)|^2+(D+|x(s)|^2)((1-l_1)|g(x(s)))|^2/D+|x(s)|^2+α|g(x(s)|^2 + (p-2)|⟨ x(s),g(x(s))⟩|^2/(D+|x(s)|^2+α|g(x(s))|^2)^2)
≤ c_2+c_1|x(s)|^2+(D+|x(s)|^2)(k_1+P(x(s))/(D+|x(s)|^2+α|g(x(s))|^2)^2 +p|⟨ x(s),g(x(s))⟩|^2/(D+|x(s)|^2+α|g(x(s))|^2)^2)
≤ c_2+(D+|x(s)|^2)(c_1+k_1+P(x(s))/(D+|x(s)|^2+α|g(x(s))|^2)^2 +p|⟨ x(s),g(x(s))⟩|^2/(D+|x(s)|^2+α|g(x(s))|^2)^2).
Following the same arguments used in the derivation of (<ref>), we can get
(D+|x(s)|^2)^p/2-1(D+|x(s)|^2)(P(x(s))/(D+|x(s)|^2+α|g(x(s))|^2)^2) ≤ C_3 ,
and
p|⟨ x(s),g(x(s))⟩|^2/(D+|x(s)|^2+α|g(x(s))|^2)^2≤p/4α,
where C_3 is the constant specified in (<ref>). Let p be sufficiently small such that p/4α≤ |c_1+k_1|/2; then, substituting (<ref>)-(<ref>) into (<ref>),
𝔼(e^ϵ t(D+|x(t)|^2)^p/2) ≤𝔼(D+|x_0|^2)^p/2+∫_0^t p/2 e^ϵ s(c_2/D^1-p/2+C_3)ds
≤𝔼(D+|x_0|^2)^p/2+p/21/ϵ(e^ϵ t-1)(c_2/D^1-p/2+C_3).
Let K_2 := (p/(2ϵ))((c_2/D^1-p/2)+C_3), and then divide both sides of (<ref>) by e^ϵ t to obtain
𝔼((D+|x(t)|^2)^p/2)
≤𝔼(D+|x_0|^2)^p/2× e^p/4(c_1+k_1)t + K_2.
Since c_1+k_1 < 0, then the desired result follows.
§.§ The global attractivity.
In this subsection, we will show the global attractivity of the underlying solution under the above assumptions.
Suppose (<ref>) and (<ref>) hold. Then there exists a sufficiently small constant p ∈ (0, 1) such that for any t > 0, the solution of (<ref>) satisfies
𝔼|x(t,x_0)-x(t, y_0)|^p ≤𝔼(|x_0-y_0|^p)e^p/4(c_3+k_2)t.
First of all, we define D(t):=x(t, x_0)-x(t, y_0), G(t):=g(x(t, x_0))-g(x(t, y_0)), F(t):=f(x(t, x_0))-f(x(t, y_0)), and γ := p|c_3 + k_2|/4; then by the Itô formula
𝔼(e^γ t|D(t)|^p) = 𝔼(|D(0)|^p)+𝔼∫_0^t γ e^γ s |D(s)|^p+p/2e^γ s(|D(s)|^2)^p/2-1
×(2⟨ D(s),F(s)⟩+|G(s)|^2 +(p-2)|⟨ D(s),G(s)⟩|^2/|D(s)|^2)ds.
Since l_2 > 1 and p ∈ (0,1), we have
2⟨ D(s),F(s)⟩+|G(s)|^2 +(p-2)|⟨ D(s),G(s)⟩|^2/|D(s)|^2
=(2⟨ D(s), F(s)⟩+l_2|G(s)|^2)+(|D(s)|^2)((1-l_2)|G(s)|^2/|D(s)|^2+(p-2)|⟨ D(s),G(s)⟩|^2/(|D(s)|^2)^2)
≤( c_3|D(s)|^2)+(|D(s)|^2)((1-l_2)|G(s)|^2/|D(s)|^2+β|G(s)|^2+(p-2)|⟨ D(s),G(s)⟩|^2/(|D(s)|^2+β|G(s)|^2)^2)
≤( c_3|D(s)|^2)+(|D(s)|^2)((1-l_2)|G(s)|^2/|D(s)|^2+β|G(s)|^2-2|⟨ D(s),G(s)⟩|^2/(|D(s)|^2+β|G(s)|^2)^2+p|⟨ D(s),G(s))⟩|^2/(|D(s)|^2+β|G(s)|^2)^2),
Similar to the derivation of (<ref>), we can see that
p|⟨ D(s),G(s)⟩|^2/(|D(s)|^2+β|G(s)|^2)^2≤p|D(s)|^2|G(s)|^2/(2√(β)|D(s)||G(s)|)^2 =p/4β
Let p=2β|c_3+k_2|; then by (<ref>), we have
2⟨ D(s),F(s)⟩+|G(s)|^2 +(p-2)|⟨ D(s),G(s)⟩|^2/|D(s)|^2 ≤ c_3|D(s)|^2+|D(s)|^2(k_2+p/4β )
≤1/2(c_3+k_2)(|D(s)|^2),
Inserting (<ref>) into (<ref>) yields
𝔼(e^γ t|D(t)|^p)
≤𝔼(|D(0)|^p) + 𝔼∫_0^t γ e^γ s |D(s)|^p+p/2e^γ s(|D(s)|^2)^p/2-1(1/2(c_3+k_2)(|D(s)|^2)) ds
= 𝔼(|D(0)|^p) + 𝔼∫_0^t (γ + p(c_3+k_2)/4)e^γ s |D(s)|^p ds
=𝔼(|D(0)|^p).
Dividing both sides by e^γ t gives
𝔼(|D(t)|^p)≤𝔼(|D(0)|^p)e^p/4(c_3+k_2)t.
Thus, the proof is completed.
§.§ Convergence
First, we will show the finite-time convergence result that we need. In fact, this result has been proved in <cit.>, but we need to make a little modification to meet our requirements.
Suppose (<ref>) and (<ref>) hold with l_1 ≥ 2q. Then there exists a pair of constants (p^*, h^*) consistent with Lemmas <ref> and <ref> such that for any p ∈ (0, 2p^*/l_1] and h ∈ (0, h^*], the solution of (<ref>) and the BEM solution (<ref>) satisfy
sup _0≤ k ≤ n𝔼(|x(kh) - X_k|^p) ≤ C_T(1+𝔼(|x(0)|^l_1 p/2))h^p/2
for any T > 0, where n:=T/h and C_T is a positive constant depending on T.
When the initial value x(0) of (<ref>) is constant, we state the convergence result in <cit.> as follows.
𝔼(|x(kh) - X_k|^2) ≤ C_T|x(0) - X_0|^2 + C_T(1+|x(0)|^l_1)h.
And if the initial value x(0) is a random variable, we can easily get that
𝔼(|x(kh) - X_k|^2| ℱ_0) ≤ C_T|x(0) - X_0|^2 + C_T(1+|x(0)|^l_1)h.
Then by the Hölder inequality and the elementary inequality (<ref>), we have
𝔼(|x(kh) - X_k|^p| ℱ_0) ≤( 𝔼(|x(kh) - X_k|^2| ℱ_0))^p/2
≤( C_T|x(0) - X_0|^2 + C_T(1+|x(0)|^l_1)h)^p/2
≤ C_T|x(0) - X_0|^p + C_T(1+(|x(0)|^l_1 p/2))h^p/2,
this implies
𝔼(|x(kh) - X_k|^p) ≤ C_T𝔼(|x(0) - X_0|^p) + C_T(1+𝔼(|x(0)|^l_1 p/2))h^p/2.
The required assertion follows.
From (<ref>)-(<ref>), letting r_1 = l_1p/2 and Ψ(x)=1+x^l_1 p/2, Conditions <ref> and <ref> are satisfied. Then, by (<ref>) and (<ref>), we can conclude this section with the following theorems.
Under (<ref>)-(<ref>), the BEM solution converges uniformly to the underlying solution of (<ref>).
Next, since (<ref>) and (<ref>) hold, from (<ref>), we get the last theorem.
Under (<ref>)-(<ref>), the probability measure of the BEM solution {X_k}_k ≥ 0 converges to the underlying stationary distribution π (·).
§ NUMERICAL EXAMPLES
The Ginzburg-Landau equation arises in the theory of superconductivity. Its
stochastic version with multiplicative noise can be written as
dx(t) = ((α + 1/2σ^2)x(t) - x^3(t))dt + σ x(t)dW(t),
and its exact solution is known to be <cit.>
x(t) = x(0)exp(α t + σ W(t))/√(1+2x^2(0)∫_0^t exp (2 α s + 2 σ W(s))ds).
With α = -1/4 and σ = 1, by setting l_1 = 10, l_2=3, and taking the constants α = β =0.01 in the assumptions above, it is not hard to see that the
drift and diffusion coefficients of (<ref>) satisfy (<ref>) - (<ref>) with p = 0.001, q=3, L_1=1.5, k_1=-10.75, c_1=10.5, c_2=0, k_2=-3.75 and c_3=3.5.
From our previous analysis, we can see that the BEM method is strongly uniformly convergent, which means that for any j>0 and h ∈ (0, h^*), there is a constant C>0 such that
e^strong_h := 𝔼|x(jh) - X_j|^p ≤ C h^p/2.
We then use the BEM method to simulate 1000 sample paths with x_0 = 1 and h = 0.001, and the mean of the sample points generated by these paths at each time point is used to construct the approximation of e^strong_h against h^p/2 over [0, 200]. As shown in Figure <ref>, the curve is roughly stable, which indicates that there is indeed a constant C satisfying inequality (<ref>).
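A scaled-down version of this experiment could be reproduced along the following lines (our own sketch, not the original code: it uses fewer paths and a shorter horizon than the figure, a hand-rolled Newton iteration for the implicit BEM step, and a Riemann-sum approximation of the time integral in the exact solution).

```python
import numpy as np

alpha, sigma, p, h = -0.25, 1.0, 0.001, 1e-3
T, n_paths = 10.0, 200                      # the paper uses [0, 200] and 1000 paths
n_steps = int(T / h)
rng = np.random.default_rng(1)

err_p = np.zeros(n_steps)                   # running estimate of E|x(jh) - X_j|^p
for _ in range(n_paths):
    dW = rng.normal(0.0, np.sqrt(h), size=n_steps)
    W = np.cumsum(dW)
    t = h * np.arange(1, n_steps + 1)
    # Exact solution with x(0) = 1, the time integral approximated by a Riemann sum.
    integral = h * np.cumsum(np.exp(2 * alpha * t + 2 * sigma * W))
    x_exact = np.exp(alpha * t + sigma * W) / np.sqrt(1.0 + 2.0 * integral)
    # BEM solution driven by the same Brownian increments.
    X, x_prev = np.zeros(n_steps), 1.0
    for j in range(n_steps):
        c = x_prev + sigma * x_prev * dW[j]
        x_new = x_prev
        for _ in range(5):                  # Newton iterations for the implicit step
            F = x_new - h * ((alpha + 0.5 * sigma ** 2) * x_new - x_new ** 3) - c
            dF = 1.0 - h * ((alpha + 0.5 * sigma ** 2) - 3.0 * x_new ** 2)
            x_new -= F / dF
        X[j] = x_prev = x_new
    err_p += np.abs(x_exact - X) ** p / n_paths

print((err_p / h ** (p / 2)).max())  # stays bounded in time if the uniform bound holds
```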
Consider a two dimensional SDE
d\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}
=
\begin{pmatrix} 1+ 0.1x_1(t) -x_2(t) - 21x_1^3(t) -21x_1^5(t) \\ 1+ x_1(t)+ 0.1x_2(t) - 21x_2^3(t) -21x_2^5(t) \end{pmatrix}
dt
+
\begin{pmatrix} x_1(t) -0.2x_2(t) + x_1^3(t) - 0.2 x_2^3(t) \\ 0.2x_1(t) + x_2(t) + 0.2 x_1^3(t) + x_2^3(t) \end{pmatrix}
dW(t).
Letting l_1 =20, l_2 = 5 and α = β =0.025, it is not hard to see that (<ref>)-(<ref>) are satisfied with p = 0.001, q=5, L_1=40, k_1=-21, c_1=20.5, c_2=10, k_2=-6 and c_3=5.5.
We use the BEM method to simulate 1000 sample paths with x_0=[1,1]^T and h=0.001, and then the sample points generated by these paths at the same time point are used to construct the corresponding empirical density function.
According to (<ref>), the underlying solution has a unique stationary distribution. However, its explicit form is hard to find.
Therefore, to intuitively show that the underlying solution does have a unique stationary distribution, the empirical distribution at t=5 is regarded as the stationary distribution. Then we use the Kolmogorov-Smirnov test (K-S test) to measure the difference between the empirical distribution and the stationary distribution at each time point. As shown in Figure <ref>, the difference tends to 0, which indicates the numerical stationary distribution is quite a good approximation to the underlying one.
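The sketch below (entirely our own illustration) demonstrates this diagnostic on a simple ergodic scalar SDE, dx = (1-x)dt + dW, simulated by the explicit Euler-Maruyama method; the two-dimensional system above would additionally require an implicit solver. The empirical sample at a late reference time stands in for the stationary distribution, and scipy's two-sample K-S test measures the discrepancy at earlier times (applied coordinate-wise in the multi-dimensional case).

```python
import numpy as np
from scipy.stats import ks_2samp

h, n_steps, n_paths = 1e-3, 5000, 1000
save_at = {200, 1000, 2000, 5000}            # time indices at which to store samples
rng = np.random.default_rng(2)
x = np.zeros(n_paths)                        # all paths start at x_0 = 0
snapshots = {}
for j in range(1, n_steps + 1):
    dW = rng.normal(0.0, np.sqrt(h), size=n_paths)
    x = x + (1.0 - x) * h + dW               # EM step for dx = (1 - x) dt + dW
    if j in save_at:
        snapshots[j] = x.copy()

reference = snapshots[5000]                  # empirical law at t = 5 as a proxy for pi
for j in sorted(snapshots):
    res = ks_2samp(snapshots[j], reference)
    print(f"t = {j * h:.1f}: K-S statistic vs. t = 5 reference = {res.statistic:.3f}")
```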
§ CONCLUSION AND FUTURE WORKS
In this paper, a quite general result about the strong convergence in the infinite horizon of numerical methods for SDEs is proved. This result could cover many different numerical methods, as the proof does not need the detailed structure of the numerical methods.
In addition, since the driving noise is only required to have independent and stationary increments, the results still hold if the Brownian motion is replaced by other suitable processes.
We are currently working on a similar result for stochastic delay differential equations (SDDEs) and trying to build a connection between the numerical methods for SDEs and SDDEs.
|
http://arxiv.org/abs/2307.03997v1 | 20230708154148 | Efficient Model-Free Exploration in Low-Rank MDPs | [
"Zakaria Mhammedi",
"Adam Block",
"Dylan J. Foster",
"Alexander Rakhlin"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, and Jun Xu
This work was sponsored by the National Natural Science Foundation of China (No. 62002176, 62176068, and 12101334), CAAI-Huawei MindSpore Open Fund, the Natural Science Foundation of Tianjin (No. 21JCQNJC00030), and the Fundamental Research Funds for the Central Universities. Corresponding author: Xiantong Zhen ([email protected]) and Jun Xu ([email protected]).
Liqi Xue, Tianyi Xu, Yan Liu, and Jun Xu are with the School of Statistics and Data Science, Nankai University, Tianjin 300071, China.
Yongbao Song is with the School of Mathematical Science, Nankai University, Tianijn 300071, China.
Lei Zhang and Xiantong Zhen are with the Computer Science College, Guangdong University of Petrochemical Technology, Maoming 525000, China.
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
A major challenge in reinforcement learning is to develop practical, sample-efficient algorithms for exploration in high-dimensional domains where generalization and function approximation is required. Low-Rank Markov Decision Processes—where transition probabilities admit a low-rank factorization based on an unknown feature embedding—offer a simple, yet expressive framework for RL with function approximation, but existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions such as latent variable structure, access to model-based function approximation, or reachability. In this work, we propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs that is both computationally efficient and model-free, allowing for general function approximation and requiring no additional structural assumptions. Our algorithm, , uses the notion of a generalized optimal design for the feature embedding as an efficiently computable basis for exploration, performing efficient optimal design computation by interleaving representation learning and policy optimization. Our analysis—which is appealingly simple and modular—carefully combines several techniques, including a new reduction from optimal design computation to policy optimization based on the Frank-Wolfe method, and an improved analysis of a certain minimax representation learning objective found in prior work.
§ INTRODUCTION
In reinforcement learning and control, many of the most promising
application domains require the agent to navigate complex,
high-dimensional state and action spaces, where generalization and function approximation
is necessary. The last decade has
witnessed impressive empirical success in domains where
data are abundant <cit.>,
but when data are limited, ensuring efficient exploration in
large domains is a major research question. For
statistical efficiency, the foundations have recently begun to
take shape, with a line
of research providing structural conditions that facilitate
sample-efficient exploration, as well as fundamental limits
<cit.>. Computational
efficiency, however, remains a major challenge: outside of simple
settings <cit.>, existing algorithms
with provable sample complexity guarantees are computationally
inefficient, and typically require solving intractable non-convex
optimization problems
<cit.>. The
prospect of
developing practical algorithms for exploration in
high-dimensional state spaces that are both computationally and
statistically efficient raises three fundamental questions:
* What are the right computational primitives for exploration?
That is, how can one efficiently represent and compute exploratory policies that
allow the learner
to explore the state
space and gather useful data?
* How should one leverage function approximation—for
example, via
representation learning—to
discover such primitives in a computationally and statistically
efficient fashion?
* Given answers to the first two questions, how can one efficiently interleave function approximation and exploration to provide provably efficient algorithms?
In this paper, we investigate these questions through the
model <cit.>. In a , the state space is large
and potentially continuous, but the transition probabilities admit an
(unknown) low-rank factorization. Concretely, for a finite-horizon
with horizon H, the transition densities for layer
h∈H satisfy
T_h(x_h+1|x_h,a_h) = [h+1](x_h+1)^(x_h,a_h),
where (·,·)∈^d and
(·)∈^d are state-action and next-state
embeddings. The low-rank structure in (<ref>)
facilitates tractable exploration: if the embedding is known
to the learner, one can efficiently learn a near-optimal policy with sample
complexity polynomial in the feature dimension d, and independent of
the size of the state space <cit.>; in this regard,
can be thought of as a low-dimensional representation that enables
sample-efficient RL. Following
<cit.>, we consider the challenging setting in
which both and are unknown to the
learner. This formulation generalizes well-known frameworks such as
the Block MDP (BMDP) model <cit.>,
and necessitates the use of representation
learning: the agent must learn an embedding that approximates
as it explores the environment, and must use this learned embedding
to drive subsequent exploration. This form of function approximation allows
for great flexibility, as can be an arbitrary, nonlinear
function of the state; in practice, it is common to model as a neural net <cit.>.
The Low-Rank MDP is perhaps the simplest MDP structure that demands
systematic exploration and nonlinear function approximation while allowing for a continuum of states, yet understanding of
efficient algorithm design for this model is surprisingly
limited. Existing algorithms suffer from at least one of the following drawbacks:
* Computational intractability <cit.>.
* Strong modeling assumptions (e.g., ability to model
[h+1](·), which facilitates application of model-based
RL techniques)
<cit.>;
in this work, we aim for model-free methods that only require
learning .
* Restrictive structural assumptions (e.g.,
non-negativity or latent variable
structure for the embeddings in (<ref>)) <cit.>.
At the root of these limitations is the complex interplay between
exploration and representation learning:
the agent must learn a high-quality representation to guide
in exploring
the state space, but learning such a representation requires gathering
diverse and informative data, which is difficult to acquire without
having already explored the state space to begin with. Overcoming
this challenge—particularly where computational efficiency is
concerned—requires (1) representation learning procedures that lead to sufficiently expressive
representations for downstream applications, (2) efficient exploration procedures that are
robust to errors in learned representations, and 3) understanding the
interaction between these procedures, which must be interleaved. In
this work, we propose an algorithm that addresses each of these challenges, as detailed below.
Contributions
We provide the first provably computationally efficient and model-free
algorithm for general Low-Rank MDPs.
Our algorithm, (“Volumetric Exploration”), uses
the notion of a generalized optimal design for the
embedding as an efficiently computable
basis for exploration, and combines this with a minimax representation
learning objective <cit.>. interleaves exploration with representation learning in a layer-wise
fashion, learning a new representation at each layer h using exploratory
data gathered at previous layers, then uses this representation to
facilitate computation of a collection of exploratory policies (a
policy cover), which act as an approximate optimal design
for the features at layer h+1, ensuring good coverage for subsequent
iterations. is simple and modular, and its analysis is
surprisingly compact given the greater generality compared to prior
work
<cit.>.
accommodates general-purpose function approximation
to learn the representation (e.g., neural
nets or other flexible classes), and is efficient whenever a certain minimax
representation learning objective <cit.> can be solved efficiently for the
function class of interest. Compared to efficient algorithms from
prior work, : (1) is model-free (i.e., only requires access to a function class
Φ capable of modeling , and does not need to model
), and (2) applies to general Low-Rank MDPs, removing
the need for strong assumptions such as reachability or non-negativity of the feature embeddings
(so-called latent variable structure); see
<Ref>).
As a secondary benefit, the algorithm is reward-free.
Our analysis carefully combines several new techniques, including (1) a new reduction from optimal design
computation to policy optimization based on the Frank-Wolfe method, and (2) a new analysis of a minimax representation learning
objective introduced in <cit.>,
which leads to faster rates and shows for
the first time that this objective can lead to meaningful guarantees in general Low-Rank
MDPs without latent variable structure.
The algorithm follows a simple and modular template. To highlight this, we use the same template to give a
variant of the algorithm, (<ref>), which
leverages barycentric spanners <cit.> for
exploration, and obtains a tighter
sample complexity bound under an additional reachability assumption; see <ref>.
Organization
sec:setting formally introduces the model and the online
reinforcement learning framework we consider. In
<ref>, we highlight challenges faced
by previous approaches, introduce our main algorithm, , and
show how it overcomes these challenges, and then present its main
sample complexity guarantee. We conclude
with discussion in <ref>.
§ PROBLEM SETTING
§.§ Model
We work in an episodic, finite-horizon reinforcement learning framework, where H∈ denotes the horizon. A <cit.> is a tuple =(,, ()_h∈ [H],([h])_h∈[H],) consisting of a state space , action space with =A, distribution over initial states ∈Δ(), and mappings :→^d and : ×→^d.[We emphasize that neither [h] nor is known to the agent, in contrast to the linear MDP setting <cit.>.]
Beginning with _1∼, an episode proceeds in H steps, where for each step h∈H, the state _h evolves as a function of the agent's action _h via
_h+1∼T_h(·|_h,_h),
where T_h is a probability transition kernel, which is assumed to factorize based on and . In detail, we assume that there exists a σ-finite measure ν on such that for all 1 ≤ h ≤ H-1, and for all x ∈ and a ∈, the function x' ↦(x')^⊤(x, a) is a probability density with respect to ν (i.e. the function is everywhere non-negative and integrates to 1 under ν). For any '⊆, the probability that _h+1∈' under _h+1∼T_h(·|x_h,a_h) is then assumed to follow the law
T_h('|x_h,a_h) = ∫_'(x)^⊤(x_h, a_h) ν(x).
For notational compactness, we assume (following, e.g., <cit.>) that the MDP is layered so that = _1∪…∪_H for _i ∩_j=∅ for all i≠ j, where _h⊆ is the subset of states in that are reachable at layer h∈[H]. This can be seen to hold without loss of generality (modulo dependence on H), by augmenting the state space to include the layer index.
Our formulation, in which the transition dynamics (<ref>) are stated with respect to a base measure ν, is a rigorous generalization of formulations found in previous works <cit.>, which tend to implicitly assume the state space is countable and avoid rigorously defining integrals. We adopt this more general formulation to emphasize the applicability of our results to continuous domains. However, in the special case where the state space is countable, choosing ν as the counting measure yields T_h('|x_h,a_h) = ∑_x∈'(x)^⊤(x_h, a_h), which is consistent with prior work.
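As a concrete and entirely illustrative instance of this factorization, the following sketch (ours, not from the paper) builds a small tabular Low-Rank MDP in which ν is the counting measure; the factors are chosen non-negative purely so that normalization is easy to enforce (the latent-variable special case discussed below), and a trajectory is sampled under the uniformly random policy.

```python
import numpy as np

rng = np.random.default_rng(0)
H, S, A, d = 4, 6, 3, 2          # horizon, states per layer, actions, feature dimension

# phi_h(x, a) lies in the simplex of R^d and each column of mu_{h+1} is a distribution
# over next states, so mu_{h+1}(.)^T phi_h(x, a) is a valid transition distribution.
phi = rng.dirichlet(np.ones(d), size=(H - 1, S, A))                  # shape (H-1, S, A, d)
mu = rng.dirichlet(np.ones(S), size=(H - 1, d)).transpose(0, 2, 1)   # shape (H-1, S, d)

def transition(h, x, a):
    """T_h(. | x, a) = sum_i phi_h(x, a)[i] * mu_{h+1}(.)[i], a distribution over S states."""
    return mu[h] @ phi[h, x, a]

def sample_trajectory():
    x = int(rng.integers(S))          # rho: uniform initial distribution over layer-1 states
    traj = []
    for h in range(H - 1):
        a = int(rng.integers(A))      # uniformly random policy
        traj.append((h + 1, x, a))
        x = int(rng.choice(S, p=transition(h, x, a)))
    traj.append((H, x, None))
    return traj

print(sample_trajectory())
```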
Policies and occupancy measures
We define =*π:→Δ() as the set of all randomized, Markovian policies. For a policy π∈, we let ^π denote the law of (_1,_1),…,(_H,_H) under _h∼π(_h), and let ^π denote the corresponding expectation. For any '⊆_h, we let _h^π[']^π[_h ∈'] denote the marginal law of _h under π. For x∈_h, we define the occupancy measure d^π(x) _h^π/ν(x) as the density of ^π_h with respect to ν.
§.§ Online Reinforcement Learning and Reward-Free Exploration
We consider a standard online reinforcement learning framework where the Low-Rank MDP is unknown, and the learning agent interacts with it in episodes, where at each episode the agent executes a policy of the form π:→Δ() and observes the resulting trajectory (_1,_1),…,(_H,_H).
While the ultimate goal of reinforcement learning is to optimize a policy with respect to a possibly unknown reward function, here we focus on the problem of
reward-free exploration, which entails learning a collection of policies that almost optimally “covers” the state space, and can be used to efficiently optimize any downstream reward function <cit.>. To wit, we aim to construct an policy cover, a collection of policies that can reach any state with near-optimal probability.
For α,∈(0,1], a subset Ψ⊆ is an (α,)-policy cover for layer h if
max_π∈Ψ d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
Informally, an (α,)-policy cover Ψ has the property that for every state x∈ that is reachable with probability at least ·[h](x), there exists a policy in Ψ that reaches it with probability at least α··[h](x). We show (<ref>) that given access to such a policy cover with α =(, d^-1 ,A^-1), it is possible to optimize any downstream reward function to () precision with polynomial sample complexity.
def:polcover101 generalizes the notion of approximate policy cover used by <cit.> for the Block MDP setting; as in that work, the definition allows one to sacrifice states for which the maximum occupancy is small, which is necessary in the absence of reachability-style assumptions <cit.>. Compared to <cit.>, we replace the Block MDP condition max_π∈ d^π(x) ≥ by max_π∈ d^π(x) ≥·[h](x). As our analysis shows, the latter condition turns out to be better suited to the ℓ_2 geometry of the model, and is sufficient for the purpose of optimizing downstream reward functions up to O() precision (<ref>).
Function approximation and desiderata
We do not assume that the true features ()_h∈[H] or the mappings ([h])_h∈[H] are known to the learner.
To provide sample-efficient learning guarantees we make use of function approximation as in prior work <cit.>, and assume access to a feature class Φ⊆{ϕ : ×→^d} that contains , for h∈[H-1].
[Realizability]
The feature class Φ⊆{ϕ : ×→^d} has ∈Φ for all h∈[H]. Moreover, for all ϕ∈Φ, x ∈, and a ∈, it holds that ϕ(x, a)≤ 1.
The class Φ may consist of linear functions, neural networks, or other standard models depending on the application, and reflects the learner's prior knowledge of the underlying MDP. We assume that Φ is finite to simplify presentation, but extension to infinite classes is straightforward, as our results only invoke finiteness through standard uniform convergence arguments.
Note that unlike model-based approaches <cit.>, we do not assume access to a class capable of realizing the features , and our algorithm does not attempt to learn these features; this is why we distinguish our results as model-free.
Beyond realizability, we assume (following <cit.>) for normalization that, for all h∈[H] and (x,a)∈_h×, *_h(x,a)≤1, and that for all g:_h→0,1,
*∫__h[h](x)g(x) ν(x)≤√(d).
For ∈(0,1), our goal is to learn an (α,)-policy cover with α= (,d^-1,A^-1) using
(d,A,H,logΦ,^-1)
episodes of interaction.
This guarantee scales with the dimension d of the feature map and the complexity logΦ of the feature class but, critically, does not depend on the size of the state space ; note that by <cit.>, dependence on both H and A= is necessary when is unknown. Given such a guarantee, we show in <Ref> that it is possible to optimize any downstream reward function to error with polynomial sample complexity.
Additional preliminaries
For any m,n ∈ℕ, we denote by [mn] the integer interval {m,…, n}. We also let [n] [1n]. For any sequence of objects o_1, o_2,…, we define o_m:n (o_i)_i∈[m n].
A partial policy is a policy defined over a contiguous subset of layers ℓr⊆H. We denote by ^ℓ:r{π⋃_h=ℓ^r _h →Δ()} the set of all partial policies over layers ℓ to r; note that ≡^1:H. For a policy π∈^ℓ:r and h∈ℓr, π(x_h) denotes the action distribution for the policy at layer h when x_h∈_h is the current state. For 1≤ t≤ h≤ H and any pair of partial policies π∈^1:t-1, π'∈^t:h, we define π∘_t π'∈^1:h as the partial policy given by (π∘_t π')(x_ℓ) = π(x_ℓ) for all ℓ<t and (π∘_t π')(x_ℓ) = π'(x_ℓ) for all ℓ∈ [t h]. We define π∘_t π' in the same fashion for π∈^1:ℓ for ℓ≥ t.
We use the _h∼π as shorthand to indicate that _h is drawn from the law ^π, and likewise for (_h,_h)∼π and so on. For a set of partial policies Ψ{π^(i) i ∈ [N]}, we define (Ψ) as the random partial policy obtained by sampling ∼([N]) and playing π^(). We define ∈ as the random policy that selects actions in uniformly at random at each layer.
We use *· to denote the Euclidean norm, *·_∞ to denote the supremum norm on functions, and let (r)⊆^d denote the Euclidean ball of radius r. We let _(r) be the Frobenius ball of radius r>0 in ^d× d. We denote by the set of positive semi-definite matrices in ^d× d, and by “≼” the corresponding partial order. For a vector v∈^d, we denote by v[i] its ith coordinate.
We refer to a scalar c>0 as an absolute constant to indicate that it is independent of all problem parameters and use (·) to denote a bound up to factors polylogarithmic in parameters appearing in the expression.
§ : ALGORITHM AND MAIN RESULTS
In this section, we present the algorithm. We begin by describing
challenges in deriving efficient, model-free algorithms using existing
approaches (<ref>). We then formally describe (<ref>) and build intuition as to how it is able to overcome these challenges, and finally state our main sample
complexity guarantee (<ref>).
§.§ Challenges and Related Work
Designing algorithms with provable guarantees in the Low-Rank MDP setting is challenging because of the complicated interplay between representation learning and exploration. Indeed, while there are many efficient algorithms for the so-called linear MDP setting where the feature maps ()_h∈[H] are known (removing the need for representation learning) <cit.>, these approaches do not readily generalize to accommodate unknown features. For Low-Rank MDPs, previous algorithms suffer from at least one of the following three drawbacks: (1) the algorithms are computationally inefficient; (2) the algorithms are model-based; or (3) the algorithms place strong assumptions on the MDP that are unlikely to hold in practice. To motivate the algorithm, we briefly survey these results, highlighting several key challenges in avoiding these pitfalls.
Let us first discuss the issue of computational efficiency. While there are a number of algorithms—all based on the principle of optimism in the face of uncertainty—that provide tight sample complexity guarantees for Low-Rank MDPs in reward-based <cit.> and reward-free <cit.> settings, these algorithms involve intractable optimization problems, and cannot be implemented efficiently even when the learner has access to an optimization oracle for the representation class Φ <cit.>. This intractability arises because these algorithms implement optimism via a “global” approach, in which the algorithm explores at each round by choosing the most optimistic value function in a certain version space of candidate value functions; optimizing over this version space is challenging, as it involves satisfying non-convex constraints with a complicated dependence on the learned representation that are coupled globally across layers h∈H.
To avoid the intractability of global optimism, several works have restricted attention to a simpler model-based setting. Here, in addition to assuming that the feature maps ()_h∈[H] are realizable with respect to Φ, one assumes access to a second feature class Υ capable of modeling the mappings ()_h∈[H]; this facilitates direct estimation of the transition probability kernel T_h(·|x,a). For the model-based setting, it is possible to efficiently implement certain “local” forms of optimism <cit.>, as well as certain non-optimistic exploration techniques based on policy covers <cit.>. For example, one can estimate features using maximum likelihood, and then apply efficient algorithms for the known-feature setting with the estimated features plugged-in <cit.>; here, a key insight is that model-based estimation leads to strong distribution transfer guarantees for the learned features. As a result, there are now a number of efficient model-based algorithms <cit.>, some of which have been practically implemented <cit.>. Unfortunately, model-based realizability is a restrictive assumption, and falls short of the model-free guarantees we aim for in this work; indeed, in general, one cannot hope to estimate the feature map without sample complexity scaling with the number of states.[For example, in the special case of the Block MDP setting <cit.>, model-based realizability entails modeling a certain emission process, which is not required by model-free approaches.]
When one moves from model-based learning to model-free learning, representation learning becomes substantially more challenging—both for optimistic and non-optimistic approaches. Here, a key challenge is to develop representation learning procedures that are (1) efficient, yet (2) provide meaningful guarantees when the learned features are used downstream for exploration.
To our knowledge, the only proposal for a representation learning procedure satisfying both desiderata comes from the work of <cit.>, who introduced a promising “minimax” representation learning objective (described in detail in the sequel; cf. <ref>), which <cit.> subsequently showed to have encouraging empirical performance. However, to provide guarantees for this objective, both works place substantial additional restrictions on the low-rank factorization. In particular, <cit.> make the so-called latent variable assumption <cit.>, which asserts that and are non-negative coordinate-wise, and <cit.> further restrict to the Block MDP model <cit.>.
Non-negativity is a substantial restriction, as the best non-negative factorization can have exponentially large dimension relative to the best unrestricted factorization <cit.>. Beyond non-negativity, many prior works <cit.> require reachability assumptions, the weakest of which asserts that there exists η>0 such that for all x∈_h,
max_π∈ d^π(x)≥η·[h](x).
These works give sample complexity bounds that scale polynomially in η^-1, and do not give any guarantee when η=0; see <ref> for further background.[When specialized to tabular MDPs, reachability asserts that for each state x∈, there exists a policy that reaches x with probability at least η.] The source of both restrictions is the problem of how to quantify how close a learned representation ϕ is to the ground truth , which depends strongly on the downstream exploration strategy. In what follows, we show that with the right exploration strategy, this challenge can be ameliorated, but prior to our work it was unclear whether the minimax objective could lead to meaningful guarantees in the absence of non-negativity.
§.§ The Algorithm
Our algorithm, , is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. To describe the algorithm in detail, we slightly generalize <ref>.
For α,∈(0,1], a distribution P∈Δ() is an (α,)-randomized policy cover for layer h if
_π∼ P*d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
If P is a randomized policy cover, then the set Ψ(P) is a policy
cover in the sense of <ref>, but is most
naturally described in terms of randomized policy covers, which allow
for non-uniform mixtures of policies. Critically, the randomized
policy covers used in have support size polynomial in d and H,
which allows them to be computed and represented efficiently.
For each layer h≥2, uses a randomized policy cover
Ph built at a previous iteration to perform K steps of
interleaved representation learning and exploration. Starting from
h,0Ph, for each step k∈K, first
invokes a subroutine,
(<ref>; deferred to <ref>) with the
randomized policy cover h,k-1 to produce a
feature map ϕh,k that approximates . Using
this feature map, the algorithm invokes a second subroutine,
(<ref> in <ref>) to produce a (sparsely
supported) policy distribution
Ph,k∈Δ() that acts as a generalized optimal design for the
estimated feature map ϕh,k, ensuring maximal coverage in
a certain sense; given this distribution, the algorithm defines
h,k=1/2k∑_ℓ=1^kPh,k +
1/2Ph and proceeds to step k+1. Once this process
completes, a new randomized policy cover for layer h+2 is formed via Ph+2=1/K∑_k=1^K∑_π∈(Ph,k)Ph,k(π)·_π∘_h+1. To
invoke the
subroutine, makes use of additional subroutines for policy optimization
(; <ref> in
<ref>) and estimation of certain
matrix-valued functionals (; <ref>
in <ref>). The use of multiple
(K>1) inner loop iterations within this scheme is
necessary to handle certain distribution shift
issues, which we will elaborate on momentarily.
We now describe
each component of the algorithm in detail,
highlighting how they allow us to overcome the
challenges in the prequel.
Generalized optimal design
At the heart of is the notion of a generalized
optimal design as an efficient basis for exploration. We
begin by defining a generalized optimal design for an abstract of
positive-semidefinite matrices ⊆.
Given a set ⊂ and parameters γ∈(0,1/d),
C≥1, we say that a distribution P∈Δ() is a
(C,γ)-generalized optimal design for if the matrix
M_PγI_d+_W∼P*W satisfies
sup_W∈(M_P^-1W) ≤ (1+C)d.
This definition generalizes the classical notion of G-optimal
design <cit.>, which corresponds to the
special case in which each W∈ is a rank-one matrix, and where γ=C=0.
The utility of generalized optimal designs for reward-free exploration is
highlighted in the following lemma.
Let h∈[H]. If a distribution P∈Δ() over policies is a
(C,γ)-generalized optimal design for the set
_h{^π[
(_h, _h)(_h, _h) ^]|π∈},
then the distribution
P'=∑_π∈(P)P(π)·_π∘_h+1 is an
(α,η)-randomized policy cover for layer h+2 with α = η/(2 d A) and η = 4 d √((1+C)γ).
<Ref>, proven in <Ref>, shows that to compute a policy cover for layer h+2, it suffices to compute a distribution over policies that acts as a generalized optimal design for the set _h{^π[
(_h, _h) (_h, _h) ^]|π∈}⊆^d. Of course, even if is known, this observation is only useful if we
can compute a spanner without explicitly enumerating over the set
, since our goal is to develop an efficient
algorithm. In what follows, we will show:
* By applying the Frank-Wolfe method
<cit.> to a certain determinantal/volumetric objective,
it holds that for any ϕ∈Φ, a sparsely supported
generalized optimal design for the set {^π[
ϕ(_h, _h)ϕ(_h, _h) ^ ]|π∈} can be computed
efficiently whenever, for any M∈ with
*M_≤1, one can (approximately) solve policy optimization problems of the form
_π∈^π[ϕ(_h,_h)^⊤ M ϕ(_h,_h)].
* Given access to policy covers P1,…,Ph for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to the algorithm for policy
optimization (<ref>).
To handle the fact that is unknown, <ref>
uses the approach above to compute a generalized optimal design for the set {^π[
ϕh(_h, _h)ϕh(_h, _h)^⊤ ]|π∈}, where
ϕh∈Φ is a learned feature map. In what follows, we
first give a detailed overview of our optimal design computation approach, then show
how applies this approach to a feature map estimated via
representation learning.
Prior work <cit.> makes use
of elliptic planning objectives similar to the notion of optimal
design in
<ref>. An
important difference in our approach, which follows from the explicit
connection to optimal design, is that the right-hand side in
(<ref>) is bounded by an absolute (problem-dependent)
constant (d), and does not scale inversely proportional to the
target precision >0 or any sort of reachability parameter. This
property is essential to our reachability-free analysis.
Optimal design computation via approximate linear optimization
To describe generalized optimal design in , we take a brief detour
and consider an abstract approach to optimal design computation, which generalizes our problem. Suppose that we wish
to compute a spanner for an implicitly specified set of matrices
= {W^z}_z∈⊆ indexed by an abstract set
. The set (which will be set to when we apply this
framework to RL) may be exponentially large, and cannot be efficiently enumerated. In addition, given z∈, we
cannot explicitly compute W^z, and have to settle for a noisy approximation.
To allow for optimal design computation, we assume access to two
oracles for the set , a linear optimization oracle :∩_(1)→ and
an index-to-matrix oracle :Δ()→. We assume
that for some _, _>0:
* For all M∈ with *M_=1, the output
ẑ_M(M) satisfies
tr(MW^ẑ_M) ≥ sup_z∈ tr(MW^z) - _.
* For all P∈Δ(), the output W_P(P)
satisfies
W_P - _z∼P*W^z_≤_.
Given access to oracles and with _=(γ) and _=(γ^2), the algorithm
(<ref>) computes a (C,γ)-approximate spanner for
using *γ^-2C^-2 d^-1ln (1 + 1/γ)
oracle calls. can be viewed as an application of the Frank-Wolfe
algorithm <cit.> for first-order optimization to
maximize the determinantal/volumetric objective
F(P) = log det(γ I_d + _z∼ P[W^z]),
which is inspired by the well-known duality of G-optimal and D-optimal
design <cit.>. Frank-Wolfe is well-suited to
our setting because it produces a sparsely supported
distribution P∈Δ(), with the sparsity bounded by the
number of iterations (d,γ^-1) and independent of
. This feature is critical for computational efficiency
when applied to RL, as the set = is too large for one to even
represent a general distribution P∈Δ() efficiently.
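The sketch below illustrates this Frank-Wolfe scheme in the idealized case where the matrix set is finite and the two oracles are exact, so the argmax and the averaging are computed directly rather than via policy optimization and Monte Carlo estimation. The step size and termination test mirror those used in the analysis, but the function name and the exact-oracle simplification are ours.

```python
import numpy as np

def frank_wolfe_design(Ws, gamma, C=2.0, max_iters=1000):
    """Maximize log det(gamma*I + E_{z~P}[W^z]) over distributions P by
    Frank-Wolfe, for a finite set Ws of PSD matrices with shape (n, d, d)."""
    Ws = np.asarray(Ws)
    n, d, _ = Ws.shape
    mu = C * gamma ** 2 * d / 8                      # step size from the analysis
    P = np.zeros(n)
    P[0] = 1.0                                       # arbitrary initialization
    for _ in range(max_iters):
        M = gamma * np.eye(d) + np.einsum("i,ijk->jk", P, Ws)
        M_inv = np.linalg.inv(M)
        scores = np.einsum("jk,ikj->i", M_inv, Ws)   # tr(M^{-1} W^z) for each z
        z = int(np.argmax(scores))                   # exact linear optimization oracle
        if scores[z] <= (1 + C) * d:                 # generalized optimal design reached
            break
        P = (1 - mu) * P + mu * np.eye(n)[z]         # move toward the point mass on z
    return P                                         # support grows by at most one per step
```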
Representation learning
Ideally, we would
like to use to construct a generalized optimal design for the set {^π[_h(_h, _h) _h(_h, _h)^]|π∈} with =.
Because we do not have access to _h, each inner loop iteration
k∈K in <ref> instead applies with {^π[ϕh,k(_h, _h)ϕh,k(_h, _h)^⊤]|π∈},
where ϕh,k is a learned
representation. We now describe how the feature map
ϕh,k is learned, then show how to use these learned features to
efficiently implement the oracles (·) and (·).
To learn representations for layer h, we use the algorithm (<ref>),
which was originally introduced in
<cit.>. When invoked in each inner loop
iteration k∈K via ϕh,k = (h, ,Φ,
Ph,k-1,n_) (<ref>), the
algorithm gathers a
collection of triples (_h, _h, _h+1) by rolling in to
_h with a policy sampled from the randomized policy cover h,k-1 and selecting _h
uniformly at random, then observing the resulting state _h+1. Using this dataset, the algorithm
solves a sequence of adversarial training sub-problems
(<ref> of <ref>) which involve
the feature class Φ and an auxiliary discriminator class :
→. As we discuss in detail in the sequel, these
sub-problems, described in (<ref>),
are amenable to standard gradient-based training methods. The
sub-problems are designed to approximate the following “idealized”
min-max-min representation learning objective:
ϕh,k∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k-1^π∘_h[(
ϕ(_h, _h)w - *f(_h+1)|_h,_h)^2
].
The intuition for
this objective lies in the fact that in a Low-Rank MDP, for any function f:→, the mapping (x,a)↦[ f(_h+1)
|_h=x, _h=a ] is linear in
_h(x, a). Thus, if is sufficiently expressive, we may
hope that any ϕh,k which solves (<ref>) will approximate
well. We adopt the simple discriminator class
= { x ↦max_a∈θ^⊤ϕ(x, a) | θ∈(1), ϕ∈Φ}.
We show that solving
(<ref>) with this choice for , which is slightly
simpler than those considered in <cit.>, yields an approximation
guarantee for ϕh,k that is suitable for downstream use in
optimal design computation.
To facilitate an analysis of that does not require reachability assumptions, we use
slightly different parameter values for than in
<cit.>, and provide a tighter sample
complexity bound (<ref>) which may be of independent interest.
In more detail, prior work shows that the algorithm solves
a variant of (<ref>) with
w∈(d^1/2·(^-1)), where >0 is the desired
bound on mean-squared error. Due to the polynomial dependence on
^-1, such a guarantee would lead to vacuous
guarantees when invoked within our analysis of . Our improved
analysis of , which is based on a determinantal potential
argument, shows that w∈((d)) suffices. A secondary benefit of our improved bound is a faster rate with
respect to the number of trajectories.
Putting everything together Having learned ϕh,k
using , each inner loop iteration k∈K of applies with {^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^]|π∈},
=, C = 2, and γ chosen as a function of the
target accuracy; that is, we use the learned
representation ϕh,k as a plug-in estimate for the true representation
.[Though the policies produced by the
algorithm may not necessarily induce an optimal design for _h= {^π[
(_h, _h)(_h, _h)^⊤ ]|π∈} (this would
require a stronger coordinate-wise approximation guarantee, which does not
necessarily follow from <ref>), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ ϕh,k(_h, _h)^M ϕh,k(_h, _h)]
for a given matrix M∈∩_(1), and implementing entails estimating
the second moment matrix
^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^]
for a given policy π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>), which simply samples trajectories according to π and returns the sample average of ϕh,k(_h, _h) ϕh,k(_h, _h)^.
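For concreteness, a minimal sketch of this Monte Carlo estimator is given below; `sample_state_action` (returning the pair reached at layer h when executing π) and `phi` are user-supplied stand-ins for the sampling access and the learned feature map.

```python
import numpy as np

def est_second_moment(sample_state_action, phi, n):
    """Average phi(x_h, a_h) phi(x_h, a_h)^T over n independent episodes."""
    total = None
    for _ in range(n):
        x, a = sample_state_action()            # one rollout of pi, observed at layer h
        v = np.asarray(phi(x, a))               # feature vector, assumed to have norm <= 1
        if total is None:
            total = np.zeros((v.shape[0], v.shape[0]))
        total += np.outer(v, v)
    return total / n
```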
To
implement (θ), we appeal to (<ref>). , given an arbitrary reward function r_1:h:×→ and a function class ⊆{g:
×→} capable of realizing all possible value
functions induced by these rewards, can use the policy covers
P1,…,Ph to efficiently compute a policy = (h,r_1:h, ,
P1:h, n) that approximately solves
_π∈^π[∑_t=1^h r_t(_t,_t)],
and does so using polynomially many episodes; see <ref> for
details and formal guarantees.[This is the main
place where the analysis uses the inductive hypothesis
that P1:h are policy covers.] Thus, implementing (M)
for M∈∩_(1) is as
simple as invoking with the rewards
r_t(x,a;M) = ϕh,k(x,a)^⊤ Mϕh,k(x,a) for t=h, and r_t(x,a;M) = 0 otherwise.
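This reduction can be summarized in a few lines: the only nonzero reward is the quadratic form at layer h, so maximizing the resulting cumulative reward coincides with the policy optimization problem displayed above. In the sketch below, `phi_hk` is the learned feature map and `policy_optimizer` is a hypothetical stand-in for the policy optimization subroutine run on the covers P1:h.

```python
import numpy as np

def make_quadratic_reward(phi_hk, M, h):
    """Reward r_t(x, a; M) = phi(x,a)^T M phi(x,a) at layer h and 0 elsewhere."""
    def reward(t, x, a):
        if t != h:
            return 0.0
        v = np.asarray(phi_hk(x, a))
        return float(v @ M @ v)
    return reward

def lin_opt(phi_hk, M, h, policy_optimizer):
    """Approximate argmax_pi E^pi[phi(x_h,a_h)^T M phi(x_h,a_h)] by handing the
    quadratic reward to a (hypothetical) policy optimization routine."""
    return policy_optimizer(h, make_quadratic_reward(phi_hk, M, h))
```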
Addressing distribution shift
With this, we have all the
ingredients needed for optimal design computation, and can prove that
Ph,k is an approximate optimal design with respect to
ϕh,k. However, we are not quite done, due to the issue of
distribution shift, which motivates the use of multiple (K>1)
inner loop iterations within . In particular, while the
objective in (<ref>) ensures that ϕh,k approximates
well under Ph,k-1, the representations may be far
away from one another under the new distribution Ph,k produced
when we invoke with ϕh,k.[If Ph were
an exact (i.e., (α,0)-) policy cover, this would be a
non-issue. However with an approximate policy cover, which is all that
one can for in the absence of reachability, distribution shift must
be addressed.] To address this issue, we use a potential argument <cit.>
to show that as long as K is chosen to be sufficiently large, there exists
k^⋆∈*K such that ϕh,k^⋆
(approximately) enjoys a stronger on-policy approximation guarantee:
ϕh,k^⋆∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k^⋆^π∘_h[(
ϕ(_h, _h)w - *f(_h+1)|_h,_h)^2
].
This suffices to prove that the distribution Ph+2 constructed
in is an approximate policy cover
for layer h+2.
§.§ Main Guarantee for
The following result is the main sample complexity guarantee for (<ref>).
Let δ, η∈(0,1), and suppose that realizability holds (<ref>). If = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are a
(η^3/· d^6 A^2,)-randomized policy cover with probability at least
1-δ, where 4 H d^3/2η.
The total number of episodes used by is at most:
(A^4 d^20 H^17 (d + ln (|Φ|/δ))· 1/^14).
The next corollary follows immediately from the definition of a policy cover (<ref>).
Consider the setting of <ref> and let P1:H be the distributions
produced by . Then, under the same success event as in <ref>, the collection of policies Ψ1,…, ΨH, where Ψh Ph for each h∈[H], are a (η^3/· d^6 A^2,)-policy cover in the sense of <ref>, where η/(4 H d^3/2).
<ref> is the first provable, model-free sample complexity
guarantee for general Low-Rank MDPs that is attained by an
efficient algorithm. Prior to our work, all efficient model-free algorithms required non-negative features (latent
variable structure), reachability, or stronger assumptions
<cit.>; see <ref>.
While our guarantee is polynomial in
all relevant problem parameters, improving the dependence further
(e.g., to match that of the best known inefficient algorithms) is
an interesting direction for future research.
Application to reward-based RL
By using the policy cover produced by within (<ref>),
we can optimize any downstream reward function to error using
(d,A,H,logΦ,^-1) episodes. See
<ref> for details. A technical novelty here compared to, e.g. <cit.> (who also used and policy covers to optimize downstream reward functions), is in proving that our notion of approximate policy cover (<ref>) is sufficient for downstream reward optimization in s.
Efficiency and practicality The algorithm is simple and practical. Defining _(ϕ, w, f) = ∑_(x, a,
x')∈ (ϕ(x,a)^⊤ w - f(x'))^2, where
is a dataset consisting of (_h,_h,_h,_h+1)
tuples, the algorithm is provably efficient whenever the adversarial
objective
ft∈_f∈max_ϕ̃∈Φ{min_w∈(3d^3/2)_(ϕt, w, f) - min_w̃∈(2d^1/2)_(ϕ̃, w̃, f) },
in <ref> of (<ref>),
can be implemented efficiently. This objective was also assumed to be efficiently solvable in
<cit.> and was empirically shown to
be practical in <cit.>.[In
addition to <ref>, also solves the
objective
ϕt+1∈_ϕ∈Φmin_(w_1,…,w_t)∈(2√(d))^t∑_ℓ=1^t _(ϕ,w_ℓ,fℓ)
in <ref> of <ref>. Compared the
adversarial objective in (<ref>), this objective is
simpler, and only
requires minimization.] Note that both objectives
are amenable to standard gradient-based optimization techniques, and allow
the class to be over-parameterized. While a detailed
experimental evaluation is outside of the scope of this paper, we are
optimistic about the empirical performance of the algorithm in light
of the encouraging results based on the same objective in
<cit.>.
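To illustrate what the training sub-problems look like, here is a small sketch of the empirical squared loss defined above together with the discriminator class; the norm constraint on w is replaced by unconstrained least squares for simplicity, and in practice the outer maximization over (θ, ϕ̃) and the minimization over ϕ are carried out by gradient-based training, as noted above.

```python
import numpy as np

def squared_loss(phi, w, f, dataset):
    """L_D(phi, w, f) = sum over (x, a, x') in the dataset of (phi(x,a)^T w - f(x'))^2."""
    return sum((float(np.dot(phi(x, a), w)) - f(xp)) ** 2 for (x, a, xp) in dataset)

def best_fit_loss(phi, f, dataset):
    """min_w L_D(phi, w, f), solved here by unconstrained least squares (the
    algorithm additionally restricts w to a Euclidean ball)."""
    X = np.stack([phi(x, a) for (x, a, _) in dataset])
    y = np.array([f(xp) for (_, _, xp) in dataset])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((X @ w - y) ** 2))

def discriminator(theta, phi, actions):
    """A member of the discriminator class: x' -> max_a theta^T phi(x', a)."""
    return lambda xp: max(float(np.dot(theta, phi(xp, a))) for a in actions)

# The adversarial step searches for (theta, phi_tilde) maximizing
#   best_fit_loss(phi_t, f, dataset) - best_fit_loss(phi_tilde, f, dataset)
# with f = discriminator(theta, phi_tilde, actions).
```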
Outside of representation learning, the only computational overhead in is
in the subroutine, which has runtime polynomial in all parameters. Indeed,
requires only polynomially many calls to the linear optimization oracle, instantiated as , which is
efficient whenever standard least-squares regression problems based on
the class Φ can be solved efficiently, analogous to
<cit.>. The
distributions Ph,k returned by each invocation of have
support size (d,^-1), and hence can be represented with
polynomial space; it follows that all of the policy
distributions maintained throughout the execution of <ref> have
polynomial support size as well.
Under the setting of <ref>, if = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are such that max_h∈[H]| Ph| ≤· d^7/η^4.
Analysis and proof techniques
A significant challenge overcome by the proof of <ref> (given
in <ref>) is to show that
—despite being non-optimistic—succeeds in the absence of
reachability-type assumptions. To achieve this, we use a novel
adaptation of the extended
MDP technique introduced in the recent work
<cit.> in the context of Block MDPs. This
technique allows us to analyze in a modified version of the
true MDP which emulates certain properties of reachability; see
<ref> for details. Within the extended MDP, the crux of
the proof is to show that the
representation learning guarantee in (<ref>) is strong
enough to ensure that the downstream optimal design computation in
succeeds. It is straightforward to show that optimal design
computation would succeed if we had access to an estimated
representation ϕh,k that approximates
point-wise (i.e., uniformly for all (x,a) pairs), but the key challenge is that the guarantee in
(<ref>) only holds on average under the roll-in
distribution h,k-1. Prior works that make use of the same representation
learning objective ( <cit.> and
<cit.>) make use of additional structural assumptions
(non-negativity of the factorization for , and Block MDP
structure for ) to facilitate change-of-measure arguments
that address this issue. We avoid such assumptions by inductively appealing to
the optimal design objective in (<ref>), which provides a
stronger coverage guarantee compared to elliptic objectives from prior
work; see <ref>. While the high-level schema for the
proof is quite simple, there are
several subtle technical challenges that arise in analyzing in the
extended MDP, including:
* Showing that succeeds when invoked within , despite
the lack of uniform coverage.
* Proving that gives a sufficiently strong
approximation guarantee even when the weights used by the algorithm
are kept uniformly bounded throughout training; see <ref>.
* Addressing distribution shift that occurs when the algorithm updates policies using the
representations produced by .
See <ref> for
details.
§.§ Stronger Guarantees under Reachability:
The algorithm is appealing in its simplicity and
modularity. To highlight this, we use the same template to give a variant of the
algorithm, (<ref>), which obtains a tighter
sample complexity bound whenever a reachability assumption is satisfied.
Concretely, we make the following assumption.
[η-reachability]
For any h∈[H] and x∈_h,
max_π∈ d^π(x)≥η·[h](x).
<ref> generalizes and subsumes all
previous reachability-like conditions of which we are aware
<cit.>. Notably,
reachability is implied by the notion of feature
coverage <cit.> (used in the context of
transfer learning in Low-Rank MDPs), which asserts that
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤])
≥η, for some η>0. It is also implied by
explorability <cit.>, which is
similar to feature coverage, but involves the first moments of
. Our reachability assumption is also weaker than
the notion used in <cit.>
under the latent variable model, and generalizes the
notions of reachability for BMDPs <cit.>. See <ref> for details, as well as an exponential separation between <ref> and analogous assumptions in <cit.>.
follows the same template as , with two
differences. First, we remove the inner loop (which corresponds to
setting K=1 in ). Second, and more importantly, the subroutine is replaced
with a new subroutine, . Instead of computing an optimal
design, computes an alternative basis for exploration known as
a barycentric spanner <cit.>. is
an error-tolerant variant of a classical spanner computation
algorithm of <cit.>, and may be of independent
interest; we use the algorithm to compute a spanner for learned feature maps via reduction to policy
optimization. The sample complexity of improves upon ,
but its analysis leverages reachability. See <ref> for a detailed overview.
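For intuition, the following is a sketch of the classical swap-based barycentric spanner computation for a finite, explicitly given set of vectors. The subroutine used by the algorithm is an error-tolerant variant that can only access the set through approximate policy optimization, so the code below should be read as a simplified illustration rather than the actual subroutine.

```python
import numpy as np

def barycentric_spanner(X, C=2.0):
    """Compute (indices of) a C-approximate barycentric spanner of the rows of X
    (shape (n, d)), assuming the rows span R^d and C > 1."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    B = np.eye(d)                      # columns of B hold the current basis
    chosen = [-1] * d

    def abs_det_with(i, x):
        B2 = B.copy()
        B2[:, i] = x
        return abs(np.linalg.det(B2))

    # Phase 1: replace each coordinate vector by the element maximizing |det|.
    for i in range(d):
        j = max(range(n), key=lambda j: abs_det_with(i, X[j]))
        B[:, i], chosen[i] = X[j], j

    # Phase 2: keep swapping while some element grows |det| by more than a factor C.
    swapped = True
    while swapped:
        swapped = False
        base = abs(np.linalg.det(B))
        for i in range(d):
            j = max(range(n), key=lambda j: abs_det_with(i, X[j]))
            if abs_det_with(i, X[j]) > C * base:
                B[:, i], chosen[i] = X[j], j
                swapped = True
                break
    return chosen
```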
The main sample complexity guarantee for is as follows.
Let δ∈(0,1) be given, and suppose that realizability holds (<ref>) and that reachability (<ref>) is satisfied with parameter η>0. If = η/36 d^5/2 and = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the policies Ψ1:H
produced by (Φ, , , δ) are a
(1/4 Ad,0)-policy cover with probability at least
1-δ.
The total number of episodes used by is at most:
( A^4 d^9 H^4 (d + ln (|Φ|/δ))· 1/η^2).
The sample complexity bound in <ref> scales
with the reachability parameter η as η^-2, which
significantly improves upon the dependence on the accuracy parameter
in <ref>. The dependence on the
dimension d is also tighter. We
find this result to be notable in its own right, as even in the
presence of similar reachability assumptions, all efficient model-free
algorithms in prior work required non-negative features (latent
variable structure) or stronger assumptions
<cit.>.
A secondary benefit of this variant lies in memory: The algorithm
maintains policy covers with support size (d,^-1), while
the policy covers used in have support size (d),
which is independent of the target accuracy.
The proof of <ref> is similar to that of
<ref>, but is somewhat simpler, and does not require
appealing to the extended MDP analysis of
<cit.>. A useful feature of our proof is to show that the notion of
reachability in <ref>, which generalizes and
extends all previous reachability conditions in the and Block
MDP literature <cit.>,
is sufficient to build an exact (i.e., (α,0)-) policy cover. We
anticipate that this observation will find broader use.
§ DISCUSSION
Our work shows for the first time how to achieve efficient, model-free
exploration in general Low-Rank MDPs. On the technical side, our
results leave open a number of interesting technical questions,
including (1) regret (as opposed to PAC) guarantees, and (2) matching the minimax rate achieved by
inefficient algorithms using an efficient
algorithm.
More broadly, our work highlights the power of non-optimistic
algorithms that explore by building policy covers. In light of this, perhaps the most interesting question
is how to extend our techniques to more general function approximation
settings beyond the Low-Rank MDP model; this will likely entail
replacing the notion of optimal design with a more general form of
exploration basis.
§.§ Acknowledgements
We thank Noah Golowich, Dhruv Rohatgi, and Ayush Sekhari for
several helpful discussions. ZM and AR acknowledge support from the ONR through awards N00014-20-1-2336 and N00014-20-1-2394, and ARO through award W911NF-21-1-0328. AB acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
§ ADDITIONAL RELATED WORK
In this section, we discuss relevant related work not already covered.
Block MDPs
A particularly well-studied special case of low-rank MDPs with the latent variable structure assumed in <cit.> (defined in <Ref>) is the Block MDP (BMDP) model <cit.>. For this setting, <cit.> provide algorithms that conduct exploration in a provably oracle-efficient manner under a reachability assumption. This reachability assumption was removed by subsequent work of <cit.> (with a suboptimal rate) and <cit.> (with optimal error dependence). These works are tailored to the BMDP model, and it is unclear whether it is possible to extend them to general low-rank MDPs.
Barycentric spanners
<cit.> consider a variant of the framework in which we are given a class Υ that realizes the
next-state feature map , but do not have access to a class
Φ for the feature map , which is unknown. Their
algorithm, like , is based on barycentric spanners, though the algorithm
design considerations and analysis are significantly
different. Notably, their algorithm is not computationally efficient,
and their analysis takes advantage of the fact that realizability of
facilitates estimation of the occupancies {d^π(·)}_π∈ in ℓ_1-error. Barycentric spanners were also used in the work of <cit.> for reinforcement learning in Partially Observable MDPs (POMDPs). Their analysis is substantially different from ours, and their algorithm appeals to the barycentric spanner computation approach in <cit.> in an off-the-shelf fashion.
Frank-Wolfe method in RL
Similar to our work, <cit.> make use of the Frank-Wolfe method for policy cover computation, but their algorithm is tailored to the known-feature (linear MDP) framework, and the design and analysis are quite different.
PART:
Analysis of
§ ORGANIZATION OF PART:ANALYSISVOX
<ref> of the appendix contains the proof of our main
result, <ref>, as well as other proofs. This
section is organized as follows:
* <ref> contains the analysis of <ref>.
* <ref>, <ref>, and <ref> contain results we rely on in the proof of <ref>. In particular, <ref>, <ref>, and <ref> provide generic guarantees for the subroutines (<ref>), (<ref>), and (<ref>) of (<ref>), respectively.
* In <ref>, we show how an approximate policy cover can be used to optimize downstream reward functions.
* In <ref>, we present some useful structural results concerning the extended MDP introduced in <ref>.
* Finally, <ref> contains a set of helper
results used throughout the analysis.
§ ANALYSIS: PROOF OF THM:VOXMAIN
In this section, we present the full proof of the main guarantee for (<ref>). In <ref>, we define key concepts needed for the analysis. <ref>, <ref>, and <ref> give guarantees for (<ref>), (<ref>), and (<ref>) as instantiated within . <ref> gives guarantees for the subroutine within . We then combine these results in <ref> to prove <ref>.
§.§ Extended Low-Rank MDP and Truncated Policies
In this section, we present two tools, the extended MDP and a truncated policy class, that will be used throughout the analysis of , and facilitate an analysis that does not require reachability assumptions. The definitions we give generalize analogous definitions given in <cit.> for the special case of Block MDPs, though the generalization to the low-rank MDP setting is non-trivial.
Extended MDP As in <cit.>, we define the extended MDP to be the result of augmenting the true MDP by adding a set of H terminal states _1:H, and a terminal action with the property that taking from any state at layer h∈ [H-1] leads to _h+1 deterministically, and any action in ∪{} at latent state _h transitions to _h+1 deterministically. To express as a low-rank MDP, we increase the feature dimension by 1. First, for any ϕ∈Φ, we define the extension
ϕ̅(x,a) = [ϕ(x,a)^⊤, 0]^⊤∈^d+1 for all a∈ and x∈; ϕ̅(x,a) = e_d+1∈^d+1 when a = (the terminal action), for all x∈; and ϕ̅(x,a) = e_d+1∈^d+1 for all a∈ when x ∈{_1,…, _H}; with ϕ̅^⋆ denoting the extension of ϕ^⋆. We similarly define [h](x) = [[h](x)^⊤, 0]^⊤∈^d+1 for x∈, and [h](x) = e_d+1∈^d+1 for x=_h,
for h∈[H]. With these definitions, we formally define =(∪{_1,⋯, _H}, ∪{}, ρ, ([h])_h∈[H], (ϕ̅_h^⋆)_h∈[H]) as the extended MDP, which one can verify is indeed a low-rank MDP in d+1 dimensions.
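As a small illustration of the construction, the sketch below (with hypothetical placeholders for the terminal action and terminal states) builds the extended feature map ϕ̄ from ϕ by appending one coordinate that is active exactly on the terminal action and terminal states.

```python
import numpy as np

def extend_features(phi, terminal_action, terminal_states):
    """Return phi_bar: equal to [phi(x,a)^T, 0]^T on ordinary state-action pairs,
    and to the last standard basis vector e_{d+1} on the terminal action or a
    terminal state, matching the definition above."""
    def phi_bar(x, a):
        v = np.asarray(phi(x, a), dtype=float)
        d = v.shape[0]
        if a == terminal_action or x in terminal_states:
            e = np.zeros(d + 1)
            e[d] = 1.0
            return e
        return np.concatenate([v, [0.0]])
    return phi_bar
```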
We let be the set of all randomized Markov policies in , with the convention that π(_h)= for all π∈ and h∈ [H]. For any policy π→, we extend it to ∪{_1, …, _H} by taking π(_h)= for all h∈[H]. Moving forward, for any h∈[H], we let _h _h ∪{_h}, and define =∪.
We denote expectations and probability laws for trajectories in by and , respectively, and for any '⊆_h, we let _h^π[']^π[_h ∈'] denote the induced law of _h under a policy π in . Furthermore, for any x∈_h, we define the occupancy measure ^π(x) _h^π/ν̅(x) as the density of ^π_h with respect to ν̅= ν +∑_h∈[H]𝕀__h.
We define Φ be the set of all extended feature maps (as in (<ref>)) for ϕ∈Φ. In some proofs, it will be convenient to work with the restriction of the extended feature maps to their first d coordinates; for any ϕ∈Φ, we define
ϕ̃(·,·) (ϕ̅(·,·)[1], …, ϕ̅(·,·)[d])^⊤.
Finally, we extend the notion of a policy cover to the extended MDP as follows.
For α∈(0,1], η≥ 0, a distribution P∈Δ() is a (α, η)-randomized policy cover relative to Π⊆ for layer h in if
_π∼ P [^π(x)] ≥α·max_π'∈Π^π'(x), for all x∈_h such that max_π'∈Π^π'(x)≥η·[h](x).
Truncated policy class
Next, we introduce the notion of the truncated policy class, generalizing <cit.>. We begin with some preliminary definitions.
For any h ∈ [H], given a collection of policies Π'⊆, we let
_h(Π') {ϕ̃^⋆,π_h|π∈Π'}, where ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)].
Using this, we define the notion of η-reachable states relative to Π'.
For h∈[H] and a policy class Π'⊆, we define the set of η-reachable states at layer h relative to the set Π' as:
_h, η(Π') {x∈_h |∃ u ∈_h-1(Π') : [h](x)^⊤ u ≥[h](x)·η}.
Given a parameter η>0, we now define the truncated policy class _η inductively as follows: Let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t=h and x ∈_h,η(_h-1,η),; , otherwise. ].
Finally, we define _η_H,η.
As in <cit.>, the utility behind the extended MDP and truncated policy class is as follows:
* While the extended BMDP does not necessarily enjoy the reachability property (<ref>), it emulates certain properties of reachable MDPs, but only if we compare performance to policies in _η.
* For all reward functions of interest, the best reward that can be achieved by a policy in _η is close to what can be achieved using arbitrary policies in .
§.§ Proof Overview
The proof of <ref> is inductive. For fixed h, the inductive hypothesis is that the distributions over policies P1:h+1 produced by satisfy the property:[By extending policies in to in the fashion described in <ref>, the distributions P1:h can be viewed as distribution over policies in .]
P1,… Ph+1 are (η/(32 dK A), η)-randomized policy covers relative to _η for layers 1 through h+1 in ,
where K is defined as in <ref>. Assuming the inductive hypothesis holds, we prove that with high probability, the distribution Ph+2 is a (η/32 dK A, η)-randomized policy cover relative to _η in for layer h+2. This inductive hypothesis is primarily used to show that , as invoked in <ref> is a valid choice for the oracle required by (that is, implements approximate linear optimization over = {^π[ ϕ(_h, _h)ϕ(_h, _h)^⊤] |π∈}, for any choice of ϕ∈Φ), which is proven in <Ref>. With this established, we instantiate the guarantee for from <ref> with and set to the instances of (<ref>) and (<ref>) in , respectively. To conclude the proof of the inductive step, we combine the guarantee for and the guarantee for in <Ref> with a change of measure argument, also enabled by the inductive hypothesis that P1:h are approximate policy covers (i.e. (<ref>)). As in <cit.>, a key feature of the analysis is that we work with the extended MDP and truncated policy class throughout the proof, only passing back to the true MDP once the induction is complete and <ref> has been proven to hold for all layers H. To pass back to the true MDP, we use the following (proven in <ref>).
Let h∈ [H], α∈ (0,1), and η >0 be given.
If P∈Δ() is an (α,η)-randomized policy cover relative to _η for layer h in , then P is an (α/2,)-randomized policy cover relative to for layer h in the true MDP , where 4 H d^3/2η.
In <ref> [reps. <ref>] we show that [resp. ], as invoked in <ref>, instantiates the approximate linear optimization oracle [resp. index-to-matrix oracle ] required by . In <ref> and <ref>, we prove guarantees for the instantiations of and within , respectively. In <ref>, we conclude the proof of <ref>.
§.§ Guarantee for as a Subroutine for
We begin by showing that , as invoked in <ref>, instantiates the approximate linear optimization oracle required by . In particular, we fix a layer h∈[H] and assume that P1:h+1 satisfy (<ref>) and apply the generic guarantees for given <Ref>.
For M ∈∩_(1) and ϕ∈Φ, define function classes '_1:h(M,ϕ) as follows:
'_t(M,ϕ) {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(√(d))}, ∀ t ∈[h-1] and '_h(M,ϕ) {r'_h(·,·; M,ϕ)} ,
where we define reward functions r'_1:h(·,·;M, ϕ) by:
∀ (x,a)∈×, r'_t(x,a;M,ϕ) = ϕ(x,a)^⊤ M ϕ(x,a) for t=h, and r'_t(x,a;M,ϕ) = 0 otherwise.
With these rewards and function classes, we show will that for any M ∈∩_(1) and ϕ∈Φ, the output
= (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n)
satisfies the property that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤ M ϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + ,
with high probability once n≥ 1 is sufficiently large; recall that ϕ̃ is the restriction of to its first d coordinates, with defined as in <ref>.
Note that we can equivalently formulate (<ref>) as, for fixed M ∈∩_(1) and ϕ∈Φ, maximizing the sum of the reward functions r'_1:h(·,·;M, ϕ) in (<ref>).
Note that this matches the choice of reward functions in (<ref>) at iteration h, with ϕ = ϕh,k, the feature map returned by in <ref>.
We first verify that the function classes '_1:h(M,ϕ) realize the reward functions specified in (<ref>) in the sense of <Ref>.
For any ϕ∈Φ and M∈∩_F(1), under <ref>, the function classes '_1:h(M,ϕ) in (<ref>) realize the reward functions in (<ref>) in the sense of <ref> (in the true MDP). Furthermore:
* All functions in '_1:h(M,ϕ) take values in [-√(d), √(d)].
* max_t∈[h]ln_'_t(M,ϕ)()≤ln |Φ|+ d ln (√(d) /), where we recall that _() denotes the -covering number for a function class in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and M∈∩_(1), and let r'_t(·,·)≡ r'_t(·,·; M, ϕ) and _t'_t'(M,ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈'_h. For t<h and any π∈^t+1:h, we have from the low-rank structure that for any (x,a)∈_t×, the Q-function Q^π_t satisfies
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, note that for all y∈_t+1,
0≤^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ≤r'_t(·, ·)_∞,
≤M_·sup_x∈_t,a∈ϕ(x,a)^2, (by Cauchy-Schwarz)
≤ 1,
where the last inequality follows by the fact that ϕ(·,·)≤ 1 for all ϕ∈Φ, and that M_≤M_≤ 1. Combining (<ref>) with the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(√(d)).
Thus, by (<ref>) we have
Q_t^π(·,·) ≡ϕ^⋆_t(·,·)^⊤ w_t, with w_t ∈(√(d)).
This, together with the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈'_t, which establishes that '_1:h realize the rewards r'_1:h. The bound on the covering number _'_t() follows from a standard bound on the covering number of the ball (√(d)) <cit.>.
Combining <Ref> with <Ref> gives the following bound on the quality of as an approximate linear optimization oracle over the space of policies.
Fix δ∈(0,1) and h∈[H]. Let M∈∩_(1), ϕ∈Φ, and be the output of when given input (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n), where
* The reward functions r'_1:h(·, ·;M,ϕ) are as in (<ref>).
* The function classes '_1:h(M,ϕ) are as in (<ref>).
* The distributions P1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + _(n,δ),
where _(n,δ) cH d A√(K η^-1 n^-1 (d ln (n d^1/2)+ln (|Φ|/δ))) and c>0 is a sufficiently large absolute constant.
§.§ Guarantee for as a Subroutine for
We now state a performance guarantee for the subroutine (<Ref>), which simply estimates the second moment of the feature embedding of (_h, _h) under policy π by sampling sufficiently many trajectories and taking the empirical second moment. The following result shows that is a valid choice for the subroutine passed to within .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h= (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and, with probability at least 1-δ,
M_h - ^π[ϕ(_h,_h)ϕ(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and c>0 is a sufficiently large absolute constant.
Let ϕ∈Φ and π∈. The claim that M_h ∈ follows by the fact that M_h is an empirical average of rank-1 matrices in .
Now, we show (<ref>). By a standard matrix concentration inequality (see for example <cit.>) and the fact that ϕ(x, a)ϕ(x, a)^⊤_≤ 1 for all x ∈ and a ∈ (following from ϕ(·,·)≤ 1), there exists an absolute constant c>0 such that with probability at least 1 - δ,
M_h - ^π[ ϕ(_h, _h) ϕ(_h, _h)^⊤]_≤ c ·√(log(1/δ)/n) .
Since policies in never take the terminal action, the guarantee in <ref> can also be expressed in the extended MDP as we do in the next corollary.
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h of (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and for a sufficiently large absolute constant c>0, with probability at least 1-δ,
M_h - ^π[ϕ̃(_h,_h)ϕ̃(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and ϕ̃ is the restriction of to the first d coordinates; see <ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the instantiation of (<ref>) within .
For the rest of this section, we recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h∈[H-2], and that (Ph,k)_k∈[K] denote the distributions returned by within <ref> at iteration h∈[H-2]. We define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤].
In , we instantiate passing as and as . Combining <Ref> with the general guarantee of in <Ref>, we have the following result.
Let δ,γ∈(0,1) and K≥ 1 be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>, and that P1:h in <ref> satisfy (<ref>). Then, with probability at least 1-δ/3H:
* The number of iterations used by (<ref>) when invoked in <Ref> of <Ref> is at most T ⌈4/γ^2dlog( 1+1/γ)⌉.
* The distribution Ph,k output by is such that | Ph,k|≤ T and for Mh,k as in (<ref>), we have
sup_π∈_η^π[ ϕ̃h,k(_h,_h) ^2_( Mh,k)^-1] ≤ 3 d,
where we recall that ϕ̃h,k is the restriction of h,k to its first d coordinates, and h,k is the extension of ϕh,k to ; see <ref>.
By <Ref>, on the event that the instances of (resp. ) used by satisfy <Ref> with _=2γ/5 (resp. _ = 2 γ^2/10), the two desiderata of the lemma hold; here, we instantiate the guarantee in <ref> with C=2, which is what it is set to in <ref>. We claim that, with probability at least 1- δ/6 T H, each call to and to satisfies <Ref> with
=, _ref=_η, _=, and = {^π[ϕ̃h,k(_h,_h)ϕ̃h,k(_h,_h)^⊤] |π∈}.
Since and are called at most two times per iteration of , a union bound (see <ref>) concludes the proof contingent on the above claim.
We now prove the claim. First, note that the instance of that (<ref>) uses within <ref> is always of the form (see <ref> of <ref>):
(h, r_1:h(·, ·, M/M_), _1:h(M/M_), P1:h, n_)
with r_1:h and _1:h as in <Ref> and M ∈∖{0}; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh,k, which implies that with probability at least 1- δ/6 T K, the output of _M of the instance in (<ref>) satisfies:
max_π∈_η^π[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]- ^_M[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]
≤ cM_· H d A√(K (d ln (n_ d^1/2)+ln (6 TK|Φ|/δ))/η n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ =·η^-1γ^-2 H^2 d^2K A^2· (d + ln (|Φ|/δ)),
for = (A,d,H,log(|Φ|/δ)) sufficiently large, the of (<ref>) is bounded by 2M_γ/5, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within , by <Ref>. The result follows.
§.§ Guarantee for as a Subroutine for
In this subsection, we prove a guarantee for the instantiation of within . Recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h, and let (Ph,k)_k∈[0 K-1] and ( Ph,k)_k∈[K] be as in <ref>.
Recall that Ph,k-1∈Δ() is the distribution over policies that passes to at outer iteration h∈[H-2] and inner iteration k∈[K] to compute ϕh,k. Thus, by invoking <ref> in <ref> and using that
n_ = ·η^-5 A^2 d^10log (|Φ|/δ)
in <ref> for = (A,d,H,log(|Φ|/δ)) sufficiently large, we immediately obtain the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the class Φ satisfies <ref>. Then, with probability at least 1-δ/3HK, the instance of in <ref> of <ref> runs for t≤'· d iterations for ' = (A,d,H,log(|Φ|/δ)) sufficiently large, and outputs ϕh,k such that for all f∈, there exists w_fh,k∈(3d^3/2) satisfying
_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤' · d^4 n^-1_log
(|Φ|/δ) ≤αη^2/32,
where w_f ∫__h+1 f(y) (y) ν(y) and αη/32 d K A.
We note that by the definition of Ph,k-1 in <ref> of <ref>, <ref> implies that, with probability at least 1-δ/3HK, for all k∈[2 K], f∈ and w_f,w_fh,k∈^d as in <ref>,
1/k-1∑_ℓ=1^k-1_π∼Ph,ℓ^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤2 ' · d^4 n^-1_log
(|Φ|/δ),
We now instantiate <ref> with B=3d^3/2A^1/2, ^2 =2 ' · d^4 n^-1_log
(|Φ|/δ), πℓ = _π∼ Ph,ℓ [π] ∈, for each ℓ∈[k], and
δk=√(∑_a∈(ϕh,k(·,a)^⊤wh,k_f - ϕ_h^⋆(·,a)^⊤w_f)^2),
and make use of the following facts:
* δk_∞≤ 3d^3/2 A^1/2 (since w_f∨w_fh,k≤3 d^3/2 and ϕ_h^⋆(·,·)∨ϕh,k(·,·)≤ 1).
* <ref> sets K = · d^5A/η^2 and n_≥·η^-4A d^10log (|Φ|/δ) with = (A,d,H,log(|Φ|/δ)) sufficiently large.
This leads to the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/3H, the outputs (ϕh,k)_k∈[K] of in <ref> at iteration h of <ref> are such that for all f∈, with w_f, w_fh,k∈^d defined as in <ref>,
min_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/128 d.
§.§ Concluding the Proof of thm:voxmain
In this section, we conclude the proof of <ref>. We prove the result as a direct consequence of the following inductive statement.
Consider iteration h∈[H] of (Φ, η, ,δ) (<ref>) with parameters >0,δ, η∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The distributions P1:h+1 at the start of the hth iteration of satisfy (<ref>).
* P1:h+1 are supported on policies that never take the terminal action .
* The input parameter = (A,d,H,log(|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the distribution Ph+2 produced by (Φ,η,,δ) at the end of the hth iteration is an ( η/32 dK A,η)-randomized policy cover relative to _η in for layer h+2, where K is as in <ref>. In addition, Ph+2⊆, and | Ph+2|≤576 d^7/η^4log (1+576 d^4/η^2).
This immediately implies <ref>, which bounds the cardinality of the supports of the distributions returned by <ref>
Follows immediately from <ref>.
In a first step, we prove that with probability at least 1-δ, P1,… PH are (η/(32 dK A), η)-randomized policy covers relative to _η for layers 1 through H in ; that is, we need to show that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>). Now, <ref> implies that P1,… PH are (η/(64 dK A), )-randomized policy covers relative to for layers 1 through H in the real MDP M, where 4 H d^3/2η. Plugging in the choice of K in <ref> implies the claim on P1,…, PH.
We now bound the number of trajectories <ref> requires. The total number of trajectories is equal to the sum of the number of trajectories , , and require. We know that and are called T = O(γ^-2 d) times by (<ref>) at each inner iteration k∈[K] of <ref> (γ is defined in <ref>), and is called once. Furthermore, each call to requires H · n_ trajectories, and and require n_ and n_ trajectories, respectively. Thus, the total number of trajectories is equal to
n_· H^2 K T+ n_· H K T + n_· H K
≤O(η^-13 d^27 H^4 A^4 (d + ln (|Φ|/δ))) +O(η^-14 d^28 H A ln (1/δ)) +O(η^-7 d^15 A^3 H ln (|Φ|/δ)),
where the inequality follows by the choice of parameters in <ref>.
This implies the desired bound on the number of trajectories.
Let _h, _h', and _h” denote the success events in <ref>, <ref>, and <ref>, respectively, and note that by the union bound, we have [_h ∩_h'∩”_h]≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'∩”_h.
Using <ref>, the assumption that P1:h+1 satisfy (<ref>) implies that the distributions P1, …, Ph+1 have the property that for all ℓ∈[h+1], x∈_ℓ,η(_η), then
_π∼ Pℓ*[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π≥α·sup_π∈_η[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π, for αη/32 dK A.
We will show that with probability at least 1-δ/H, the policy distribution Ph+2 satisfies the same property:
∀ x∈_h+2,η(_η), _π∈ Ph+2*[h+2](x)^⊤ϕ̅_h+1^⋆,π≥α·sup_π∈_η[h+2](x)^⊤ϕ̅_h+1^⋆,π.
By <ref> this is equivalent to the statement that Ph+2 is an ( η/32 dK A,η)-randomized policy cover relative to _η for layer h+2 in .
Throughout the proof, for any ℓ∈[2 H] and z∈_ℓ, we define
π_z ∈_π∈_η^π(z),
and note that by <ref>, we have
π_z ∈_π∈_η[ℓ](z)^⊤ϕ̅_ℓ-1^⋆,π, where ϕ̅_ℓ-1^⋆,π^π[^⋆_ℓ-1(_ℓ-1, _ℓ-1)].
Fix x∈_h+2,η(_η).
In the remainder of the proof, we will argue that Ph+2 satisfies the coverage property <ref> for x.
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)_x^⊤ϕ̅^⋆_h+1(y,π_x(y)), where _x [θ_x^⊤, 0]^⊤ and θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(_η). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y), and w̅_x [w_x^⊤, 0]^⊤∈^d+1.
By definition of π_x, we have that for all y∈_h+1,
_x^⊤ϕ̅^⋆_h+1(y,π_x(y)) = max_a∈_x^⊤ϕ̅^⋆_h+1(y,a),
≤max_a∈_x^⊤ϕ̅^⋆_h+1(y,a), (justified below)
= max_a∈θ_x^⊤ϕ^⋆_h+1(y,a), (since y≠_h+1 and [θ̅_x]_d+1=0)
where (<ref>) follows by the facts that _x^⊤ϕ̅^⋆_h+1(y,)=0 (since ϕ̅^⋆_h+1(·,)≡ e_d+1 and [_x]_d+1=0) and that
∀ a∈, _x^⊤ϕ̅^⋆_h+1(y,a) y≠_h+1=θ_x^⊤ϕ^⋆_h+1(y,a) = [h+2](x)^⊤ϕ_h+1^⋆(y,a)/[h+2](x),
≥ 0. ([h+2](·)^⊤ϕ_h+1^⋆(y,a) is a conditional law)
(<ref>) and the fact that θ_x=1 implies that
f_x|__h+1∈,
where f_x|__h+1 denotes the restriction of f_x to _h+1. We also note that since x∈_h+2,η(_η), we have
_x^⊤ϕ̅_h^⋆, π_x = [ ∫__h+1 f_x(y) (y)^⊤ν(y), 0] ϕ̅_h^⋆, π_x, (by definition of w̅_x in (<ref>))
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_xν(y), (since (y)=[(y)^⊤, 0], for all y≠_h+1)
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_x(y), (since f_x(_h+1)=0)
=_x^⊤ϕ̅_h+1^⋆,π_x, (by definition of f_x in (<ref>))
= 1/*[h+2](x)max_π∈_η[h+2](x)^⊤ϕ̃_h+1^⋆,π, (by definition of θ̅_x in (<ref>))
≥η>0,
where (<ref>) uses the definition of reachable states _h+2,η(_η) (see <ref>); we recall (see <ref>) that ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)] and ϕ̃^⋆_h represents the restriction of ϕ̅^⋆_h to its first d coordinates.
Applying the guarantee for
Moving forward, we let (ϕh,k)_k∈[K] be the feature maps returned by within (<ref>) at iteration h, and define ϕ̅^k,π_h^π[h,k(_h,_h)], for any π∈, where we recall that h,k is the extension of ϕh,k to ; see <ref>. Further, for k∈[K], let wh,k_x be the vector wh,k_f in <ref> with f=f_x|__h+1, and note that
w_xh,k≤3d^3/2.
We will use the extended vector w̅_xh,k [(w_xh,k)^⊤,0]^⊤∈^d+1. By Jensen's inequality, we have for all k∈[K],
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤^π_x[(h,k(_h,_h)^⊤h,k_x - ϕ̅_h^⋆(_h,_h)^⊤_x)^2],
= ^π_x[(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
= ^π_x[𝕀{_h ∈_h,η(_η)}·(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
≤^π_x[𝕀{_h ∈_h,η(_η)}·∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
where the last inequality follows by the fact that h,k(·,)≡ϕ̅^⋆_h(·,) ≡ e_d+1 and [w̅_xh,k]_d+1=[w̅_x]_d+1=0 (by definition). Thus, for g(y) 𝕀{y∈_h,η(_η)}·∑_a∈(ϕ̅h,k(y,a)^⊤_xh,k - ϕ̅_h^⋆(y,a)^⊤_x )^2, (<ref>) implies that
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_x_h-1(y),
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_y_h-1(y), (by definition of π_y ((<ref>)) and (<ref>)))
≤α^-1_π∼ Ph[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (by (<ref>) with ℓ=h, and g(y)=0 for all y∉_h,η(_η))
≤ 2 α^-1_π∼Ph,k-1[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (Ph,k-1 as in <ref> of <ref>)
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2],
where (<ref>) follows by the fact that the policies in the support of Ph,k-1 never take the terminal action (by assumption) and that h,k(x,a)^⊤h,k_x - ϕ̅_h^⋆(x,a)^⊤_x=ϕh,k(x,a)^⊤wh,k_x - ϕ_h^⋆(x,a)^⊤w_x for all a∈ whenever x≠_h. We note that Ph,k-1 is the distribution over policies that passes to to compute ϕh,k. Thus, since w_x = ∫__h+1 f_x(y) (y) ν(y) (see (<ref>)) and f_x|__h+1∈ (see (<ref>)), the guarantee for in <ref> together with (<ref>), implies that (recall that we condition on the event )
∀ k∈[K], | h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x| ≤η/4,
Since _xϕ̅_h^⋆, π_x≥η (see (<ref>)), (<ref>) implies that under , we have
∀ k∈[K], _xϕ̅_h^⋆, π_x≤4/3h,k_x_h^k,π_x.
Applying the guarantee for
To proceed, define
ℓ∈_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2].
Note that by <ref>, we have,
_π∼ Ph,ℓ^π[∑_a∈(ϕh,ℓ(_h,a)^⊤wh,ℓ_x - ϕ_h^⋆(_h,a)^⊤w_x)^2] ≤η^2/128 d.
Let γ be as in <ref>, and for each k∈[K] define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤], and Mh,k[ Mh,k 0_d × 1; 0_1 × d 0 ]∈^(d+1)× (d+1).
From (<ref>), Hölder's inequality, and AM-GM, we have
_xϕ̅_h^⋆, π_x ≤4/3*w̅h,ℓ_x _ Mh,ℓ·^ℓ, π_x_h_( Mh,ℓ)^, (( Mh,k)^ denotes the pseudo-inverse of Mh,k)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^ℓ, π_x_h^2_( Mh,ℓ)^,
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[h,k(_h,_h)^2_( Mh,ℓ)^], (Jensen's inequality)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[ϕ̃h,k(_h,_h)^2_( Mh,ℓ)^-1].
By <ref> (in particular (<ref>)), we have that under the event _h”,
^π_x[ϕ̃h,k(_h,_h)^2_( Mh,ℓ)^-1] ≤ 3 d.
Combining this with (<ref>), it follows that
_xϕ̅_h^⋆, π_x ≤η/4 + 8d/η*w̅h,ℓ_x ^2_ Mh,ℓ ,
= η/4 + 8d/η·*wh,ℓ_x^2_ Mh,ℓ,
=η/4+ 8dγ/η·*wh,ℓ_x^2 + 8d/η·_π∼ Ph,ℓ^π[ ( ϕh,ℓ(_h,_h)^⊤wh,ℓ_x)^2 ],
≤η/4+ 72 d^4γ/η + 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ]+ η/8, (see below)
≤η/2+ 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ],
where (<ref>) follows by (<ref>), (<ref>), and that (a+b)^2 ≤ 2a^2 +2b^2. The last inequality follows by the parameter choice γ = η^2/576 d^4 (see <ref>).
Concluding
By the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1.
Plugging this into (<ref>), we have
_xϕ̅_h^⋆, π_x - η/2
≤16 d/η·_π∼ Ph,ℓ^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] ,
≤16 d A/η·_π∼ Ph,ℓ^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] , (see below)
= 16 d A/η·_π∼ Ph,ℓ^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
≤16 d A K/η·_π∼ Ph+2^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= 16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and (<ref>) follows by definition of Ph+2 in <ref>.
Combining (<ref>) with the fact that _xϕ̅_h^⋆, π_x≥η (see (<ref>)) yields
1/2·μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_x_h+1 ≤16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
= 16 d A K/η·_π∼ Ph+2[ μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_h+1],
where the last equality follows by the fact that policies in the support of Ph+2 never take the terminal action. This establishes (<ref>). Since this argument holds uniformly for all x∈_h+2,η(_η), the proof is completed. The bound on | Ph+2| follows immediately from <ref> and the choice of γ in <ref>.
§.§ Proof of <ref>
Let h∈ [H] and P∈Δ() be a (C,γ)-generalized optimal design (see <ref>) for the set
_h{^π[
(_h, _h)(_h, _h) ^]|π∈}.
Further, define P'=∑_π∈(P)P(π)·_π∘_h+1 and
M_PγI_d+_π∼P^π*(_h, _h)(_h, _h) ^.
We will show that P' is an (α,η)-randomized policy cover for layer h+2 with α = η/(2 d A) and η = 4 d √((1+C)γ).
Let x∈_h+2,η() and π_x ∈_π∈ d^π(x).
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y) ∈^d.
Since f_x takes values in [-1,1] (because ϕ_h+1^⋆(· , ·)≤ 1 and θ_x≤ 1), the normalizing assumption on μ^⋆_h+1 in (<ref>) implies that
w_x ∈(2√(d)).
We also note that the definitions of f_x and w_x imply that
w_x^⊤ϕ_h^⋆, π_x = θ_x^⊤ϕ_h+1^⋆,π_x = sup_π∈θ_x^⊤ϕ_h+1^⋆,π, (by definition of π_x)
= 1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π, (by definition of θ_x in (<ref>))
≥η>0,
where the penultimate inequality follows by the fact that x∈_h+2,η().
Using the generalized optimal design property
By Hölder's inequality, we have for any ν>0,
w_x^⊤ϕ_h^⋆,π_x ≤w_x_M_P·ϕ^⋆, π_x_h_M_P^-1,
≤1/2νw_x^2_M_P + ν/2ϕ^⋆, π_x_h^2_M_P^-1, (AM-GM)
≤1/2νw_x^2_M_P + ν/2^π_x[ ϕ^⋆_h(_h, _h)^2_M_P^-1], (Jensen's inequality)
= 1/2νw_x^2_M_P + ν/2(M_P^-1^π_x[ ϕ^⋆_h(_h, _h) ϕ^⋆_h(_h, _h)^⊤] ),
≤1/2νw_x^2_M_P + ν· d(1+C)/2, (P is a (C,γ)-generalized optimal design)
= γ/2νw_x^2 + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2, (by definition of M_P)
≤2γ d/ν + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2,
where the last inequality follows by the bound on w_x in (<ref>). Now, by the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1. Plugging (<ref>) into (<ref>) and rearranging, we obtain: for all ν>0,
w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2
≤1/2ν_π∼ P^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)],
≤A/2ν_π∼ P^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)], (see below)
= A/2ν_π∼ P^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and the last inequality follows by definition of P'. Now, using (<ref>), we get: for ν2 √(γ (1+C)^-1),
1/2w_x^⊤ϕ_h^⋆,π_x ≤w_x^⊤ϕ_h^⋆,π_x - η/2,
≤w_x^⊤ϕ_h^⋆,π_x -2 d√((1+C)γ), (using that γ = η^2 d^-2 (1+C)^-1/16)
≤w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2, (by the choice of ν)
≤A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ], (by (<ref>))
= A/4 √(γ (1+C)^-1)_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= Ad/η_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where the last equality uses that γ = η^2 d^-2 (1+C)^-1/16.
Rearranging, implies that P' is an (η/2d A,η) randomized policy cover for layer h+2.
§ GENERIC GUARANTEE FOR
In this section we give a generic guarantee for the (<ref>). We consider the abstract framework introduced in <ref>, in which the aim is to compute a generalized optimal design for an implicitly specified set of matrices
=*W^z_z∈⊆ indexed by an abstract set . We assume that subroutines and used by satisfy the following assumption.
[Approximation guarantee for and ]
Consider an abstract set and a collection of PSD matrices {W^z∈^d× d| z∈} indexed by elements in . There exist _,_>0 and reference subsets _ref, _⊆ such that for any M ∈ and P∈Δ(_), the outputs ẑ_M (M/M_) and W_P (P)∈ satisfy ẑ_M∈_ and
sup_z∈_ref(M W^z) ≤(M W^ẑ_M)+_·M_ , and W_P - _z∼ P[W^z]_≤_.
For our application to RL, the sets _ref and _ are useful to accommodate algorithms that optimize relative to restricted policy sets.
Given such subroutines and , and γ>0, ((·),(·), ·,γ) applies the Frank-Wolfe (conditional gradient method) to approximately solve the optimization problem
_P ∈Δ() F(P), where F(P) = -log det(γ I_d + _z∼ P[W^z]).
Letting {W^z | z∈} and assuming that ⊆_(1), the main result for this subsection (<ref>) bounds the number of iterations used by ((·),(·), ·,γ) under <ref> and gives a guarantee for the output.
Let C∈(1,2] and γ∈(0,1) be such that γ C<5/2, and suppose that the collection {W^z | z ∈} consists of PSD matrices of Frobenius norm bounded by 1. If (<Ref>) is run with parameters C, γ and , satisfying <ref> with _=Cγ/5 and _=Cγ^2 /10, then the algorithm terminates after t ≤16 γ^-2C^-2 d^-1ln (1 + 1/γ) iterations,[While it may seem odd at first glance that the iteration complexity for scales with d^-1, we note that the non-trivial regime in <ref> is when γ≤ 1/d. This is because for γ≥ 1/d, we have (M_P^-1 W^z)≤ d for any P∈Δ() and z∈, since M_P≽ I_d/d and W^z∈∩_(1). Whenever γ≤1/d, the iteration complexity for increases with d, as expected.] and requires at most twice that many calls to each of and . Furthermore, the output P_t of is such that P_t∈Δ(_),
|supp P_t|≤ t, and
sup_z∈_ref(M_P_t^-1 W^z) ≤ (1+3C/2) · d, where M_P_tγ I_d +_z∼ P_t[ W^z].
Let F be as in (<ref>). For z∈ and P∈Δ(), define M^zγ I_d
+ W^z, W_P_z∼ P[W^z], and M_P γ I_d + W_P. Throughout the proof, we will use that the function f: M ↦ -log det M defined over has the following gradient and Hessian expressions:
∇ f(M)[H] = - tr(M^-1 H) and ∇^2 f(M)[H,H] = tr(M^-1 H M^-1 H),
for all H∈^d× d.
To begin, by Taylor's theorem and the fact that the set of PSD matrices is convex, there exists λ∈[0,1] such that for any P,P'∈, defining M_λλ M_P + (1-λ) M_P'∈,
F(P') - F(P) = f(M_P') -f(M_P),
= ∇ f(M_P)[M_P'-M_P] + 1/2∇^2 f(M_λ)[M_P'-M_P, M_P'-M_P] ,
= - (M_P^-1 (W_P'- W_P)) + 1/2(M^-1_λ (W_P'-W_P) M^-1_λ(W_P'- W_P)),
≤- (M_P^-1 (W_P'- W_P)) + 1/2γ^2W_P' - W_P^2_,
where the last inequality follows because for all z∈, M^z = γ I_d + W^z≽γ I_d, since W^z∈. We also note that by definition of F in (<ref>) and the fact that ⊂∩_(1), we have
sup_P,P'∈Δ() F(P') - F(P) ≤ dln (1 + 1/γ),
since the determinant of a matrix is bounded by the product of the norms of its columns.
Bounding the number of iterations
If <ref> has not terminated at iteration ℓ≥ 1, then
(M_ℓ^-1W_ℓ)>(1+C)d,
where M_ℓ = γ I_d + (P_ℓ), W_ℓ =
(𝕀_z̃_ℓ), and z̃_ℓ =
(M_ℓ^-1/M_ℓ^-1_F). Since satisfies <ref> with _=
γ^2 C/10, we have that
M_P_ℓ - M_ℓ_∨W^z̃_ℓ - W_ℓ_≤γ^2 C/10.
Furthermore, since M_P_ℓ≽γ I_d (because ⊆), we have using Cauchy-Schwarz
rM_P_ℓ^-1· (M_ℓ - M_P_ℓ)_≤M_P_ℓ^-1_·M_P_ℓ - M_ℓ_≤γ C/10<1/4,
where the last inequality follows by the fact that γ C<5/2.
On the other hand, by <ref>, instantiated with A = M_P_ℓ and E = M_ℓ -M_P_ℓ, we have that
M_P_ℓ^-1 - M_ℓ^-1_≤M_ℓ -M_P_ℓ_/1-r·M_P_ℓ^-1_^2 ≤4/3 γ^2γ^2 C/10 , (by (<ref>), (<ref>), and M_P_ℓ≽γ I_d)
= 2C/15≤C/5.
Note also that since only returns matrices in (see <ref>), we have M_ℓ≽γ I_d, and so
M_ℓ^-1_≤1/γ.
Using (<ref>)-(<ref>) and the triangle inequality, we obtain
(M_P_ℓ^-1 W^z̃_ℓ) = ((M_P_ℓ^-1 -M_ℓ^-1) W^z̃_ℓ) + (M_ℓ^-1 (W^z̃_ℓ-W_ℓ)) + (M_ℓ^-1 W_ℓ),
> - M_P_ℓ^-1 -M_ℓ^-1_·W^z̃_ℓ_ -M_ℓ^-1_·W^z̃_ℓ-W_ℓ_ + (1+C)d, (by (<ref>))
≥ - C/5 - 1/γ·γ C/5+ (1+C)d, (by ⊆_(1) and (<ref>)-(<ref>))
≥ - C/2 + (1+C)d.
Now, recall that μ = Cγ^2 d/8. Instantiating (<ref>) with P'=P_ℓ+1 and P=P_ℓ and using (<ref>), we have
F(P_ℓ+1) ≤ F(P_ℓ) + (M_P_ℓ^-1 (W_P_ℓ- W_P_ℓ+1)) + 2/γ^2W_P_ℓ+1- W_P_ℓ^2_,
= F(P_ℓ) + μ·(M_P_ℓ^-1 (W_P_ℓ- W^z̃_ℓ)) + μ^2/2γ^2W^z̃_ℓ- W_P_ℓ^2_,
< F(P_ℓ) + μ·(C/2 - (1+C)d + (M_P_ℓ^-1 W_P_ℓ) ) + 2 μ^2/γ^2, (by ⊆_(1) and (<ref>))
≤ F(P_ℓ) - μ Cd/2 + 2μ^2/γ^2, (see below)
≤ F(P_ℓ) - γ^2 C^2 d^2/16 ,
where (<ref>) follows by the fact that (M_P_ℓ^-1 W_P_ℓ) ≤ d, and the last inequality follows by the choice of μ in <ref>. If the algorithm runs for t≥ 1 iterations, then summing (<ref>) and telescoping, we have
- (t-1) γ^2 C^2 d^2/16 > F(P_t)- F(P_1) ≥inf_P,P'∈Δ() F(P)-F(P') ≥ -d ln (1+1/γ),
where the last inequality follows by (<ref>). By rearranging, we conclude that
t < 1 + 16 γ^-2C^-2 d^-1ln (1 + 1/γ),
giving the claimed bound on the number of iterations.
Guarantee for the last iterate
Suppose the algorithm terminates at step t. Since and satisfy <ref> with _= C
γ/5, the iterates at step t satisfy (<ref>) in addition to
sup_z∈_(M_t^-1 W^z) ≤(M_t^-1 W^z̃_t) + C γM_t^-1_/5,
≤(M_t^-1 W^z̃_t) + C d^1/2M_t^-1_ /5,
≤(M_t^-1 W^z̃_t) + Cd^1/2 /5,
where the last inequality follows by (<ref>).
Combining this with the termination condition (M_t^-1W_t) ≤
(1+C)d, we have that
sup_z ∈_(M_P_t^-1 W^z)
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z)+ sup_z ∈_(M_t^-1 W^z),
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W^z̃_t) +C d^1/2/5, (by (<ref>))
= sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W_t)+ (M_t^-1 (W^z̃_t -W_t)) +C d^1/2/5,
≤sup_z ∈_M_P_t^-1 -M_t^-1_·W^z_ + (1+C)d+M_t^-1_·W^z̃_t- W_t_ + C d^1/2/5, (see below)
≤2C/15+ (1+C)d+1/γ·C γ^2/10 + C d^1/2/5, (by (<ref>)-(<ref>) and ⊆_(1))
≤ (1+3C/2)· d,
where (<ref>) follows by Cauchy-Schwarz and (M_t^-1W_t) ≤
(1+C)d. This completes the proof.
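For concreteness, the iteration analyzed above can be summarized by the following minimal Python sketch. It is an illustration of the proof's update rule only, not the paper's implementation: the callback W_oracle is a stand-in for the approximate optimization and estimation subroutines, and is assumed to return an index z together with the matrix W^z that (approximately) maximizes (M_P^-1 W^z).

import numpy as np

def greedy_design(W_oracle, d, gamma, C, max_iter=100000):
    # Maintain a sparse distribution P over indices z with M_P = gamma*I + E_{z~P}[W^z];
    # greedily mix in the matrix maximizing tr(M_P^{-1} W^z) until the trace test passes.
    mu = C * gamma**2 * d / 8                  # step size used in the analysis
    support = {}                               # z -> probability mass; |supp P_t| <= t
    W_bar = np.zeros((d, d))                   # current E_{z~P}[W^z]
    for _ in range(max_iter):
        M_inv = np.linalg.inv(gamma * np.eye(d) + W_bar)
        z, W_z = W_oracle(M_inv)               # assumed: approx. argmax_z tr(M_inv @ W^z)
        if np.trace(M_inv @ W_z) <= (1 + C) * d:
            break                              # termination test from the proof
        for key in support:                    # P <- (1 - mu) * P + mu * delta_z
            support[key] *= (1 - mu)
        support[z] = support.get(z, 0.0) + mu
        W_bar = (1 - mu) * W_bar + mu * W_z
    return support, gamma * np.eye(d) + W_bar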
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for (<ref>). Compared to previous guarantees in <cit.>, we prove a fast 1/n-type rate of convergence for , and show that the algorithm succeeds even when the norm of the weight w in <ref> does not grow with the number of iterations. We also use the slightly simpler discriminator class:
{. f x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ}.
The main guarantee for is as follows.
Let h∈ [H], δ∈(0,e^-1), and n∈ℕ be given, and suppose that satisfies the normalization assumption in <ref>.
For any function f ∈, define
w_f = ∫__h+1 f(x) _h+1(x) ν(x).
Let P∈Δ() be a distribution over policies, be as (<ref>), and
Φ be a feature class satisfying <ref>. With probability at least 1 - δ, with input (h, , Φ, P, n) terminates after t≤ T*d log_3/2 (2n d^-1/2) iterations, and its output ϕt satisfies
sup_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤_^2(n,δ),
where _^2(n,δ) c T d^3 n^-1log
(|Φ|/δ), for some sufficiently large absolute constant c>0.
To prove the theorem, we need a technical lemma, which follows from <cit.>.
Consider a call to (h, , Φ, P, n) (<ref>) in the setting of <ref>. Further, let _ be as in <ref> and define
(ϕt, wt_1,…, wt_t-1)∈_ϕ∈Φ,(w_1,…,w_t-1)∈(2√(d))^t-1∑_ℓ=1^t-1_(ϕ,w_ℓ,fℓ).
For any δ∈(0,1), there is an event t(δ) of probability at least 1-δ such that under t(δ), if <ref> does not terminate at iteration t≥ 1, then for wℓ w_fℓ:
∑_ℓ =1^t-1_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ - ϕ_h^⋆(_h,_h)^⊤ wℓ)^2] ≤ t _^2(n,δ),
inf_w ∈3/2(d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2] > 8 d t_^2(n,δ),
where ^2_(n,δ) c d^2 n^-1ln
(|Φ|/δ) and c≥1 is a sufficiently large absolute constant.
With this, we prove <ref>.
Let us abbreviate _(n,δ),
with _(n,δ) defined as in <ref>. Further, let N 1+ *d log_3/2 (2d^3/2/), δ' δ/2N, and define
__(n,δ').
Note that ≤_ and N -1 ≤ T, where T is the number of iterations in the theorem statement; the latter inequality follows by the facts that the absolute constant c in <ref> is at least 1 and ln (|Φ|/δ)≥1. We define an event 1(δ')∩…∩N(δ'), where (^t(·))_t are the success events in <ref>. Note that []≥ 1 - δ/2 by the union bound. Throughout this proof, we condition on the event .
To begin the proof, we define a sequence of vectors (v_1:dℓ)_ℓ≥ 0 in an inductive
fashion, with v_iℓ∈^d for all
i∈d and ℓ≥0. For ℓ=0, we let
v_i0 = e_i/d, for all i∈[d]. For
ℓ≥ 1, we consider two cases:
* Case I: If
ℓ{j ∈[d] | |(V_-jℓ-1, wℓ)|>(1+C)· |(Vℓ-1)| . }≠∅,
where
Vℓ-1 (v_1ℓ-1,…,
v_dℓ-1)∈^d× d and
wℓw_fℓ, then we let
j_j'∈ℓj' and define
v_iℓ{[ wℓ , if i=j,; v_iℓ-1, otherwise. ].
* Case II: If ℓ=∅, we let
v_iℓ = v_iℓ-1, for all i∈[d].
We first show that t≠∅ at any iteration t∈[N] where does not terminate. Let t∈[N] be an iteration where the algorithm does not terminate, and suppose that t=∅. This means that
∀ j∈[d] , |(V_-jt-1, wt)|≤ (1+C)· |(Vt-1)|.
Now, since (Vt-1)≠ 0 (note that
|(Vt)| is non-decreasing with t), we have
that span( Vt-1)= ^d. Thus, there exist
β_1,…, β_d such that wt=
∑_i=1^d β_i vt-1_i. By the linearity of the
determinant and (<ref>), we have
∀ j ∈[d], (1+C)|·(Vt-1)| ≥ |(V_-jt-1, wt)|,
= |(V_-jt-1, ∑_i=1^d β_i vt-1_i )|,
= *∑_i∈[d]β_i·(V_-jt-1, v_it-1),
= |β_j| · |(Vt-1)|.
This implies that |β_j|≤ (1+C) for all
j∈[d]. Now, note that by the definition of (v_it-1), we have that for any i∈[d] such that v_it-1≠ e_i/d, there exists ℓ∈ [t-1] such that wℓ= v_it-1. Let
t{i∈[d]| v_it-1≠ e_i/d},
and for any i∈t, let ℓ_i∈[t-1] be such that wℓ_i= v_it-1. Further, define
wt∑_i∈tβ_i wℓ_i= ∑_i∈tβ_i v_it-1,
and note that by the triangle inequality and the fact that wt=∑_i=1^d β_i v_it-1, we have
wt- wt≤ (1+C)_.
Finally, with the notation in (<ref>), define
wt_t ∑_i∈tβ_i wt_ℓ_i, and note that wt_t ∈ (1+C) (2d^3/2),
since |β_i| ≤ (1+C) for all i∈[d], |t|≤ d, and wt_ℓ∈(2√(d)), for all ℓ∈[t-1]. Now, by <ref>, in particular (<ref>), we have
∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ≤ t _^2,
where _ is as in (<ref>). Using the
expressions in <ref> with (<ref>) and Jensen's inequality, we have that under t,
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2]
≤(∑_j∈t |β_j|) ·∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ,
≤ (1+C) d t _^2.
Now, using (<ref>) and the facts that (a+b)^2 ≤ 2a^2 + 2 b^2 and ϕ^⋆_h_2≤ 1, we have that
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2] ≤ 2(1+C)^2 ^2 + 2(1+C)dt _^2,
≤ 2(1+C)^2 ^2_ + 2(1+C)dt _^2.
Using that C=1/2, we conclude that the right-hand side of this inequality is bounded by 8 d t_^2 which is a contradiction, since wt_t ∈ (1+C)(2d^3/2) = (3d^3/2) and by <ref>, we must have
inf_w∈(3d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2]> 8 t _
if does not terminate at round t.
Therefore, we have that t≠∅, for any
iteration t∈[2 N] where does not
terminate.
We now bound the iteration count and prove that the guarantee in
<ref> holds at termination. Note that whenever ℓ≠∅ for ℓ>1, we have by construction:
|(Vℓ)| > 3/2 · |(Vℓ-1)|.
Thus, if runs for t∈[2 N] iterations, then
|(Vt)| > (3/2)^t-1· |(V1)|.
On the other hand, since the determinant of a matrix is bounded by the product of the norms of its columns and v_1:dt∈(2√(d)), we have
|(Vt)| ≤ 2^d d^d/2.
Note also that |(V0)| = (/d)^d. Plugging this
into (<ref>), we conclude that
(3/2)^t-1 < (2d^3/2/)^d.
Taking the logarithm on both sides and rearranging yields
t < 1+ d log_3/2 (2d^3/2/)≤ N.
Thus, the algorithm must terminate after at most N-1 iterations. Furthermore, by <cit.>, we have that with probability at least 1-δ/2N, if the algorithm terminates at iteration t, then
max_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤ 32 t _^2,
≤ 32 (N-1)_^2,
≤ 32 T _^2.
Applying a
union bound completes the proof.
§ GENERIC GUARANTEES FOR
In this section, we present self-contained guarantees for (<ref>). We show that, given any reward functions r_1:h:×→_≥ 0 and function classes _1:h, where _t⊆{g: _t×→} for t∈[h], that “realize” these reward functions (we formalize this in the next definition), if P1:h are (approximate) policy covers for layers 1 through h, then for sufficiently large n≥ 1, with high probability, the output = (h,r_1:h, _1:h, P1:h, n) is an approximate maximizer of the objective
max_π∈^π[∑_t=1^h r_t(_t,_t)].
To formalize this result, we define the notion of realizability we require for the function classes _1:h.
We say that function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize reward functions r_1:h:×→ if for all t∈[h] and all π∈^t+1:h,
Q_t^π∈_t, where Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that Q^π_t in (<ref>) represents the state-action value function (Q-function) at layer t∈[h] with respect to the rewards r_1:h and partial policy π.
In what follows, given a function class ⊆{g: ×→}, we use _() to denote the -covering number of in ℓ_∞ distance.
A set of functions {g_1, …, g_N}⊂{g: ×→} is an -cover of ⊆{g:×→} in ℓ_∞-distance if for all g∈, there exists i ∈ [N] such that
g - g_i_∞≤.
The -covering number _() is the size N of the smallest -cover of .
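Before stating the intermediate results, the following minimal Python sketch records the backward dynamic-programming scheme whose guarantees this section establishes: for each layer t from h down to 1, roll in with a policy drawn from the cover Pt, take a uniform action at layer t, roll out with the already-computed partial policy, and regress the observed sum of rewards over the class _t. The environment interface (env.rollin, env.step, env.actions) and the routine fit_least_squares are hypothetical placeholders introduced here for illustration, not the paper's API.

import random

def psdp_sketch(env, h, rewards, fit_least_squares, policy_covers, n):
    # policy_covers[t]: list of roll-in policies for layer t; rewards[t](x, a): reward at layer t
    pi_hat = {}                                        # layer -> greedy policy
    for t in range(h, 0, -1):                          # backwards over layers
        data = []
        for _ in range(n):
            x = env.rollin(random.choice(policy_covers[t]), layer=t)
            a = random.choice(env.actions)             # uniform action at layer t
            target = rewards[t](x, a)
            x_next, done = env.step(x, a)
            for s in range(t + 1, h + 1):              # roll out with pi_hat[t+1:h]
                if done:
                    break
                a_s = pi_hat[s](x_next)
                target += rewards[s](x_next, a_s)
                x_next, done = env.step(x_next, a_s)
            data.append((x, a, target))                # regression target: observed return
        g_t = fit_least_squares(t, data)               # least-squares fit over G_t
        pi_hat[t] = (lambda x, g=g_t: max(env.actions, key=lambda a: g(x, a)))
    return pi_hat

The greedy step in the last line mirrors the argmax selection applied to the learned regressor in the analysis below.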
§.§ Intermediate Results for
To prove our main guarantees for (stated in the next subsection), we first establish two intermediate lemmas. The first shows that for any policy π, the corresponding Q-function is the Bayes-optimal predictor for the regression problem solved in when π is executed.
Let reward functions r_1:h:×→, P∈Δ(), and ∈^t+1:h be given. Fix t∈h, and let g^P,_ denote the Bayes-optimal predictor[Observe that because this loss is strongly convex with respect to the prediction, the Bayes-optimal predictor is unique up to sets of measure zero.] for the sum of rewards under a policy π sampled from P and composed with via π∘_t∘_t+1; that is,
g^P,_∈_ g : _t ×→_π∼ P^π∘_t π_∘_t+1[( g(_t, _t) - ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) )^2].
Then, g^P,_(·,·)≡ Q^_t(·,·), where Q^_t is the Q-function defined in (<ref>) for the partial policy ∈^t+1,h and rewards r_1:h.
The least-squares solution g^P,_ of the problem in (<ref>) satisfies, for all a∈ and x∈_t,
g^P,_ (x,a) = _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) | _t =x ,_t =a ],
= [ r_t(_t,_t)|_t = x,_t = a]+ _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a],
= r_t(x,a) +^[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a], (see below)
= Q_t^(x,a),
where (<ref>) follows by the fact that, conditioned on (_t,_t)=(x,a), the sum of rewards ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) depends only on and not on the policy used to roll in to layer t.
The next lemma shows that the solution t to the least-squares problem in (<ref>) of <ref> is close to the Q-function in the appropriate sense.
Let δ∈(0,1), B>0, n≥ 1, and h ∈[H] be fixed. Further, let (_, r_1:h, _1:h, P1:h) be such that
* _(n,δ)^2 = cB^2A/n (max_t∈[h]ln__t(1/n)+ln (n/δ)), where c>0 is a sufficiently large absolute constant.
* The function classes _1:h realize the reward functions r_1:h: ×→ (in the sense of <Ref>).
* The functions in _1:h are bounded in absolute value by B uniformly.
* P1,…,Ph∈Δ().
Then, for t∈[h], the solution t to the least-squares problem in (<ref>) in <ref> when invoked as (h, r_1:h, _1:h, P1:h, n) satisfies with probability at least 1-δ,
_π∼ Pt^π[ max_a∈( t(_t,a) - Q_t^t+1(_t, a) )^2 ]≤^2_(n,δ),
where t+1∈^t+1:h is defined as in <ref>.
Fix t∈[h] and abbreviate
gt_ g^Pt,t+1_,
where g^Pt,t+1 is defined as in <ref> (with P= Pt, = t+1, and reward functions r_1:h as in the lemma statement). By <ref>, gt_ is the Bayes-optimal solution to the least-squares problem in (<ref>) of <ref>. Thus, since _1:h realize the reward functions r_1:h, a standard uniform-convergence guarantee for least-squares regression (see e.g. <cit.> with = 0 almost surely) implies that there exists an absolute constant c>0 (independent of t,h, and any other problem parameters) such that with probability at least 1-δ,
_π∼ Pt^π∘_tπ_∘_t+1t+1[ ( t(_t,_t) - gt_(_t,_t) )^2 ]≤ c· B^2 ·ln__t(1/n)+ln (n/δ)/n.
Since actions at layer t are taken uniformly at random, (<ref>) implies that
_π∼ Pt^π∘_tπ_∘_t+1t+1[ max_a∈( t(_t,a) - gt_(_t,a) )^2 ]≤ c· B^2A ·ln__t(1/n)+ln (n/δ)/n.
The desired result follows by observing that:
* For all (x,a)∈_t×, gt_(x,a)=Q^t+1_t(x,a), by <ref>.
* The term max_a∈( t(_t,a) - gt_(_t,a) )^2 in (<ref>) does not depend on the actions _t:h, and so the expectation _π∼ Pt^π∘_tπ_∘_t+1t+1· can be simplified to _π∼ Pt^π·.
§.§ Main Guarantee for With Non-Negative Rewards
We now state and prove the main guarantee for used within <ref>, which is stated with respect to the extended MDP defined in <ref>. This result requires non-negative rewards. For the rest of this section, we make use of the extended MDP notation and definitions introduced in <ref>. In addition, given non-negative reward functions r_1:h×→_≥ 0, we define their extensions r̅_1:h in as
r̅_t(x,a){[ r_t(x,a), (x,a)∈_t×; 0, if x= or a=. ].
With this, we now state the guarantee of .
Let α, δ,η∈(0,1), B>0, and h∈[H] be given. Consider reward functions r_1:h: ×→_≥ 0, function classes _1:h, policy distribution P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref> with respect to the true MDP), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,η)-randomized policy cover relative to _η for layer t in (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> (when applied to the true MDP), satisfies the following guarantee for r̅_1:h as in (<ref>):
max_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] ≤^[∑_t=1^hr̅_t(_t,_t)] + _(n,δ),
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define extensions of Q-functions to the extended MDP using the extended rewards r̅_1:h in (<ref>); for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t in the extended MDP with respect to the extended rewards r̅_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r̅_t(x,a)+^π[.∑_ℓ=t+1^hr̅_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that for any partial policy π∈^t+1:h that never takes the terminal action, we have
Q^π_t(x,a)= {[ Q^π_t(x,a)≥ 0, if (x,a)∈_t ×,; 0 , if x = or a = , ].
where the fact that Q^π_t(·,·)≥ 0 follows because the rewards are non-negative. Further, for the function ĝt in <ref>, we define its (clipped) extension
g̅t(x,a){[ max(0,ĝt(x,a)), if (x,a)∈_t ×,; 0 , if x = or a = . ].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH),
where π_⋆∈_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] is the optimal policy with respect to the truncated policy set _η (definition in <ref>) and Q^π_t is the Q-function defined in (<ref>). Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref>) and the union bound to obtain the desired result.
Let π_⋆∈_π∈_η^π[∑_ℓ=1^h r̅_ℓ(_ℓ,_ℓ)]. Observe that the following properties hold:
* For all x∉_t,η(_η), π_⋆(x)= (by definition of _η); and
* For all policies π∈^t+1:h that never take the terminal action, Q^π_t(·,)≡ 0 ≤min_a∈, y∈_tQ^π_t(y,a) (see (<ref>)),
As a result, we have that for any t∈[h] and _t,η_t,η(_η),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤^π_⋆[ 𝕀{_t ∈_t,η}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
=
^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the facts that:
* t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>).
* g̅t(·, )≡ 0 ≤g̅t(·, a), for all a∈, by definition of g̅t in (<ref>).
Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤ 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ],
= 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ], (since Q^t+1_t(·,)≡g̅t(·,)≡ 0)
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,η}·max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^(x) ν̅(x)),
≤ 2 √(α^-1∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 _π∼ Pt[^π(x)] ν̅(x)), (justified below)
≤ 2 √(α^-1_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^π(x) ν̅(x)]), (Fubini's theorem)
= 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]),
= 2√(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-max(0,t(_t,a)))^2 ]),
≤ 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,η)-cover relative to _η for layer t in and π_⋆∈_η, and (<ref>) follows because:
* The policies in the support of Pt never take the terminal action; and
* | Q^t+1_t(x',a')-t(x',a')| = | Q^t+1_t(x',a')-max(0,g̅t(x',a'))|, ∀ (x',a')∈_t× (see (<ref>) and (<ref>)).
Finally, (<ref>) follows by the fact that the Q-functions are non-negative (since the rewards are non-negative), and so replacing max(0,ĝt(_t,a)) by ĝt(_t,a) on the right-hand side of (<ref>) only increases the value of the latter.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋂_t=1^h_t, we have that
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)] ≤ 2H α^-1/2_(n,δH).
The desired result follows from the union bound, which gives []≥ 1-δ.
Let π,∈ be policies, and assume that π never takes the terminal action. Let Q_t^π be defined as in (<ref>). Then for any h≥ 1,
^[ ∑_t = 1^h r̅_t(_t, _t) ] - ^π[ ∑_t = 1^h r̅_t(_t, _t) ] = ∑_t= 1^h ^[Q_t^π(_t, (_t)) - Q_t^π(_t, π(_t)) ].
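Since the lemma is stated here without proof, the following short Python check verifies the identity numerically in a randomly generated tabular MDP (the special case in which no terminal action is ever taken); it is included purely as a sanity check and the construction is not part of the formal development.

import numpy as np

rng = np.random.default_rng(1)
S, A, H = 3, 2, 4                                   # states per layer, actions, horizon
P = rng.random((H - 1, S, A, S)); P /= P.sum(-1, keepdims=True)   # transition kernels
r = rng.random((H, S, A))                           # rewards r[t, x, a]
rho = rng.random(S); rho /= rho.sum()               # initial state distribution
pi = rng.integers(A, size=(H, S))                   # comparator policy (deterministic)
pi_hat = rng.integers(A, size=(H, S))               # evaluated policy

def q_values(policy):
    # backward induction for Q^policy
    Q = np.zeros((H, S, A))
    for t in range(H - 1, -1, -1):
        Q[t] = r[t]
        if t < H - 1:
            V_next = Q[t + 1][np.arange(S), policy[t + 1]]
            Q[t] += P[t] @ V_next
    return Q

def value(policy):
    Q = q_values(policy)
    return rho @ Q[0][np.arange(S), policy[0]]

def occupancy(policy):
    d_occ = np.zeros((H, S)); d_occ[0] = rho
    for t in range(H - 1):
        for x in range(S):
            d_occ[t + 1] += d_occ[t, x] * P[t, x, policy[t, x]]
    return d_occ

Q_pi = q_values(pi)                                 # Q-functions of the comparator
d_hat = occupancy(pi_hat)                           # occupancies of the evaluated policy
lhs = value(pi_hat) - value(pi)
rhs = sum(d_hat[t] @ (Q_pi[t][np.arange(S), pi_hat[t]] - Q_pi[t][np.arange(S), pi[t]])
          for t in range(H))
assert np.isclose(lhs, rhs)                         # the performance difference identity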
§.§ Main Guarantee for With Signed Rewards
We now state and prove a guarantee for in the true MDP , when invoked with signed rewards. We make use of the following lemma, which bounds the total probability mass for the set of states that are not reachable with sufficiently high probability.
For any t∈[H], it holds that
sup_π∈^π[_t ∈_t ∖_t,η()] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(), we have that
∀ x∈_t ∖_t,η(), sup_π∈ d^π(x) ≤η·μ^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(), we obtain
sup_π∈^π[_t ∈_t ∖_t,η()] = sup_π∈∫__t ∖_t,η() d^π(x) ν(x),
= η·∫__t ∖_t,η()μ^⋆_t(x)ν(x), (by (<ref>))
≤η·∫__tμ^⋆_t(x)ν(x),
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
With this, we now state the guarantee of .
Let α, δ,∈(0,1), B,B_1:h>0, and h∈[H] be given. Consider reward functions r_1: _1×→ [-B_1,B_1],…,r_h: _h×→ [-B_h,B_h], function classes _1:h, distributions over policies P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref>), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,)-randomized policy cover for layer t (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> satisfies the following guarantee:
max_π∈^π[∑_t=1^hr_t(_t,_t)] ≤^[∑_t=1^hr_t(_t,_t)] + _(n,δ) + 2 h d^3/2·∑_t=1^h B_t,
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define the Q-functions for the reward r_1:h; for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t with respect to the rewards r_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^hr_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH) + 2 d^3/2·∑_ℓ=1^h B_ℓ,
where π_⋆∈_π∈^π[∑_t=1^h r_t(_t,_t)] is the optimal policy. Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref> instantiated in the true MDP) and the union bound to obtain the desired result.
Let π_⋆∈_π∈^π[∑_ℓ=1^h r_ℓ(_ℓ,_ℓ)]. We have that for any t∈[h] and _t,_t,(),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
= ^π_⋆[𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ]
+ ^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ].
We now bound the last term in (<ref>). Note that by the range assumption on the rewards r_1:h and the definition of the Q-function, we have Q^π_t(x,a)∈ [-∑_ℓ=t^h B_ℓ, ∑_ℓ=t^h B_ℓ], for all π∈^t+1:h. Thus, we have
^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ] ≤ 2^π_⋆[_t ∈_t ∖_t,] ·∑_ℓ=t^h B_ℓ,
≤2 · d^3/2·∑_ℓ=1^h B_ℓ,
where the last inequality follows by <ref>.
Plugging (<ref>) into (<ref>) and using that B_1:h≥ 0 implies that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤^π_⋆[ 𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
=
^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the fact that t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>). Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤ 2 ·^π_⋆[𝕀{_t ∈_t,}·max_a∈| Q^t+1_t(_t,a)-ĝt(_t,a)| ],
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,}·max_a∈( Q^t+1_t(_t,a)-ĝt(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^(x) ν(x)),
≤ 2 √(1/α∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 _π∼ Pt[d^π(x)] ν(x)), (justified below)
≤ 2 √(1/α_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^π(x) ν(x)]), (Fubini's theorem)
= 2√(1/α·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,)-randomized policy cover for layer t.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋂_t=1^h_t, we have that
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)] ≤ 2 H α^-1/2_(n,δH) +2 hd^3/2·∑_t=1^h B_t.
The desired result follows from the union bound, which gives []≥ 1-δ.
§ APPLICATION TO REWARD-BASED RL
In this section, we show how the output P1:H of (<ref>), which is a (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (see <ref>), can be used to optimize downstream reward functions r_1:H; our treatment also applies to (for Ph(Ψh) for all h∈[H]). Since the output of is a randomized policy cover, one way to optimize the sum of rewards S_H ∑_h=1^H r_h is by first generating trajectories using policies in P1:H, then applying an offline RL algorithm, e.g. Fitted Q-Iteration () <cit.>, to optimize S_H. It is also possible to use with the randomized policy cover P1:H to achieve the same goal. We will showcase the latter approach, since we can make use of the guarantees for given in <ref>.
As in <ref>, we assume access to a function class _1:H, where _h ⊆{g: _h×→} for each h∈[H], that realize the rewards r_1:H in the following sense: for all h∈[H] and all π∈^h+1:H,
Q_h^π∈_h, where Q^π_h(x,a) r_h(x,a)+^π[.∑_t=h+1^H r_t(_t,_t) | _h=x,_h=a].
Note that when the reward functions r_1:H are linear in the feature map ; that is, when for all h∈[H] and (x,a)∈_h×,
r_h(x,a)=θ_h^⊤(x,a)
for some θ_h∈(1) (this is a common assumption in the context of RL in Low-Rank MDPs <cit.>), then the function classes _1:H, where
∀ h∈[H], _h = {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(2H√(d))},
realize r_1:H. We show this claim next.
Under <ref>, the function classes _1:H in (<ref>) realize the reward functions in (<ref>). Furthermore, the functions in _1:H are uniformly bounded by 2√(d)H, and ln__h()≤ln |Φ|+ d ln (2√(d)H /), for all h∈[H], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
For h=H, we clearly have that for any π∈^H:H, Q^π_H(·,·)=r_H(·,·)∈_H. For h<H and π∈^h+1:H, we have, by the low-rank MDP structure and the expression of the rewards in (<ref>), that
Q^π_h(x,a) =r_h(_h,_h)+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·ϕ^⋆_h(x,a)^⊤μ_h+1^⋆(y) ν (y),
= ϕ^⋆_h(x,a)^⊤( θ_h + ∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y)).
Now, by the fact that ^π[∑_t=h+1^H r_t(_t,_t)|_h+1=y,_h+1=π(y)] ∈ [-H-h,H-h], for all y∈_h+1 (since the rewards take values between -1 and 1 thanks to ϕ(·,·),θ_h∈(1), for all h∈[H]), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_h+1→0,1, *∫__h+1[h+1](y)g(y) ν(y)≤√(d)), we have that
w_h θ_h+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y) ∈(2H√(d)).
This, together with (<ref>) and the fact that [h]∈Φ (by <ref>), implies that Q_h^π∈_h. The bound on the covering number __h() follows from a standard bound on the covering number of the ball (2H√(d)) <cit.>.
Combining <Ref> with <Ref> results in the following guarantee for .
Let α,,δ∈(0,1) be given and fix h∈[H]. Let be the output of when given input (H, r_1:H, _1:H, P1:H, n), where
* The reward functions r_1:H are as in (<ref>), with θ_1:H∈(1)
* The function classes _1:H are as in (<ref>).
* For each 1≤ h≤ H, it holds that Ph is a (α,)-randomized policy cover for layer h (see <ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈^π[∑_h=1^H r_h(_h,_h)]≤^[∑_h=1^H r_h(_h,_h)] + c H^2 √(d A · (d log(2n √(d)H) +ln (n|Φ|/δ)) /α n ) + 2 H^2 d^3/2,
for a sufficiently large absolute constant c>0.
By using that the distributions returned by form an (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (<ref>), we obtain the claimed sample complexity for <ref> in <ref>.
§ STRUCTURAL RESULTS FOR EXTENDED LOW-RANK MDP
In this section, we present some structural results involving the extented MDP and truncated policy class defined in <ref>. First, we recall the definition of the truncated policy class. Given a parameter η>0, let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t=h and x ∈_h,η(_h-1,η),; , otherwise, ].
where for a set of policies Π'⊆, we let
_h, η(Π') {x∈_h | max_π∈Π'^π(x) ≥μ̅_h^⋆(x)·η. }.
Note that this matches the definition in (<ref>) because [μ̅^⋆_h(x)]_d+1=0, for all x≠_h. Finally, we let _η_H,η.
The next lemma bounds the probability of the set of states that are not reachable with sufficiently high probability.
Under the normalization assumption (<ref>), we have that for any t∈[H],
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(_η), we have that
∀ x∈_t ∖_t,η(_η), sup_π∈_η^π(x) ≤η·^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(_η), we obtain
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] = sup_π∈_η∫__t ∖_t,η(_η)^π(x) (x),
= η·∫__t ∖_t,η(_η)μ̅^⋆_t(x)(x), (by (<ref>))
≤η·∫__tμ̅^⋆_t(x)ν̅(x),
= η·∫__tμ^⋆_t(x)ν(x), (since [_t(x)]_d+1=0, ∀ x ≠_t)
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
The next lemma generalizes <cit.> to s.
For all h ∈[H], x∈_h, and ℓ∈[h … H], we have max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x). Further,
∀ x∈_h, max_π∈_h-1, η^π(x) = max_π∈_η^π(x) .
We will show that for all ℓ∈[h … H],
∀ x∈_h, max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x).
This implies (<ref>) by summing both sides of (<ref>) over ℓ=h,…, H, telescoping, and using that _η=_H, η. To prove the result, let ℓ∈[h … H], x∈_h, and π̃∈_π'∈_ℓ-1,η^π'(x). Further, let π∈_ℓ, η be as in (<ref>) with π'=π̃. In this case, by (<ref>), we have π̃(x')=π(x'), for all x'∈_τ and τ∈ [ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ-1,η^π̆(x) =^π̃(x)= ^π(x) ≤max_π̆∈_ℓ, η^π̆(x).
We now show the inequality in the other direction. Let ℓ∈[h … H], x∈_h, and π̃∈_π̆∈_ℓ,η^π̆(x). Further, let π'∈_ℓ-1, η be as in (<ref>) for π = π̃. In this case, by (<ref>), we have π̃(x')=π'(x'), for all x'∈_τ and τ∈ [ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ,η^π̆(x) =^π̃(x)= ^π'(x) ≤max_π̆∈_ℓ-1, η^π̆(x).
This shows (<ref>) and completes the proof.
Using <ref> and the definition of _h,η(·) in (<ref>), we obtain the following corollary.
For all h∈[H], it holds that
_h,η(_h-1,η) = _h,η(_η).
The next lemma quantifies the “cost of truncation” incurred by optimizing reward functions using policies in the truncated class _η instead of the full policy class.
Let η∈(0,1), and B_1:H>0, and consider reward functions r_1: _1×→ [-B_1,B_1],…,r_H: _H×→ [-B_H,B_H]. We have
sup_π∈_η^π[ ∑_h=1^H r̅_h(_h,_h) ] ≥sup_π∈^π[ ∑_h=1^H r̅_h(_h,_h) ] - 2 H d^3/2η∑_h=1^H B_h,
where, for each h∈[H], r̅_h(x,a)=r_h(x,a) for all (x,a)∈_h×, and r̅_h(x,a)=0 when x=_h or a=.
Let r̅_1:H be the “extended” reward functions as in the lemma's statement. Let h∈[H] and π_h-1∈_π∈_h-1,η^π[∑_h=1^H r̅_h(_h,_h)]. Further, define π_h as π∈_h,η in (<ref>) with π'=π_h-1. Note that since for all t∈[h-1] and x∈_t, π_h(x)=π_h-1(x) (by (<ref>)), we have
^π_h-1[∑_t=1^h-1r̅_t(_t,_t)] = ^π_h[∑_t=1^h-1r̅_t(_t,_t)].
On the other hand, for _h,η_h,η(_h-1,η) we have
^π_h-1[∑_t=h^H r̅_t(_t,_t)]
= ^π_h-1[∑_t=h^H r̅_t(_t,_t)],
= ^π_h-1[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)]+ ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] ,
= ^π_h[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] , (by definition of _h,η and π_h)
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)],
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)],
where the last equality follows by the fact that I) if _h =_h, then _t=_t for all t∈ [h … H], and II) r̅_t(,·)≡ 0, for all t∈ [h … H].
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)] +(^π_h[_h ∈_h ∖_h,η] + ^π_h-1[_h ∈_h ∖_h,η]) ∑_t=h^H B_t.
On the other hand, by <ref> and the fact that π_h-1∈_h-1,η and π_h∈_h,η, we have that
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η].
Furthermore, by <ref>, we have _h,η = _h,η(_η). Combining this with (<ref>) and <ref>, we get
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η(_η)] ≤η d^3/2.
Plugging this into (<ref>) and using (<ref>) implies that
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)]+ 2 η d^3/2∑_h=1^H B_h.
Summing both sides of (<ref>) for h=1,…, H, telescoping, and using that _0,η= and _H,η= _η, we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2∑_h=1^H B_h.
Using this, we now prove <ref>, which allows us to transfer any guarantees in the extended MDP and truncated policies _η back to the original MDP with the unrestricted policy class .
Fix h∈[H], and let y∈_h be such that μ_h^⋆(y)>0. To prove <ref>, we will instantiate <ref> with rewards (r_t) given by
r_t(x,a) = {[ μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ^⋆_h-1(x,a), if t=h and (x,a)∈_h×,; 0, otherwise. ].
We define the extended rewards (r̅_t) such that for all t∈[H], r̅_t(x,a)=r_t(x,a) for all (x,a)∈_t×, and r̅_t(x,a)=0 when x=_t or a=. By applying <ref> (with B_h =1 and B_t=0 for all t≠ h) and using that |r_h(·,·)|≤ 1 (since ϕ^⋆_h-1(·, ·)≤ 1), we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2.
On the other hand, the definition of (r_t) implies that for any π∈,
^π[∑_t=1^Hr̅_t(_t,_t)] = μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ̃^⋆,π_h-1,
where ϕ̃^⋆,π_h-1^π[ϕ̃^⋆_h-1(_h-1,_h-1)] and ϕ̃^⋆_h-1 is the restriction of ^⋆_h-1 to its first d coordinates (^⋆_h-1 is defined in <ref>). Now, since y≠_h, we have [μ̅_h^⋆(y)]_d+1=0, and so μ^⋆_h(y)^⊤ϕ̃^⋆,π_h-1= ^⋆_h(y)^⊤ϕ̅^⋆, π_h-1. Thus, plugging this into (<ref>) and using <ref>, we get
∀π∈, ^π[∑_t=1^Hr̅_t(_t,_t)] = _h^⋆(y)^⊤/μ_h^⋆(y)ϕ̅^⋆,π_h-1= ^π(y)/μ^⋆_h(y).
Plugging this into (<ref>) and using that ⊆, we have
max_π∈d^π(y)/μ^⋆_h(y) =max_π∈^π(y)/μ^⋆_h(y)≤max_π∈^π(y)/μ^⋆_h(y)≤max_π∈_η^π(y)/μ^⋆_h(y) + 2Hη d^3/2.
Now, suppose that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. By (<ref>), this implies that
max_π∈_η^π(y)/μ^⋆_h(y)≥ 2H η d^3/2≥η,
and so since P is a (α,η)-randomized policy cover relative to _η for layer t in , we have that
max_π∈_η^π(y)/μ^⋆_h(y)≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)].
Combining this with (<ref>) implies that
max_π∈d^π(y)/μ^⋆_h(y) ≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] + 2Hη d^3/2,
≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] +1/2max_π∈d^π(y)/μ^⋆_h(y),
where the last inequality follows by the fact that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. Rearranging the previous display and using that ^π(·)≡ d^π(·) for all policies π that never take the terminal action, we get:
α/2max_π∈d^π(y)/μ^⋆_h(y)≤_π∼ P^π[d^π(y)/μ^⋆_h(y)].
This shows that P is a (α/2, 4 Hη d^3/2)-randomized policy cover.
§ HELPER LEMMAS
For any h∈[2 … H], x∈_h, and π∈, we have
d^π(x) = [h](x)^⊤ϕ^⋆, π_h-1, where ϕ^⋆, π_h-1^π[ϕ^⋆_h-1(_h-1,_h-1)],
Let δ∈(0,1) and H≥ 1 be given. If a sequence of events _1,…,_H satisfies [_h|_1,…,_h-1]≥1-δ/H for all h∈[H], then
[_1:H]≥1-δ.
By the chain rule, we have
[_1:H] = ∏_h∈[H][_h|_1,…,_h-1] ≥∏_h∈[H] (1-δ/H) =(1-δ/H)^H ≥ 1-δ.
The normalization assumption in (<ref>) has the following useful implication.
For any h∈[H], if the normalization condition (<ref>) holds, then
∫__hμ^⋆_h(x)ν(x) ≤ d^3/2.
For each i∈[d], if we define g(x)sgn([μ^⋆_h(x)]_i), we have
∫__h |[μ^⋆_h(x)]_i| ν (x) = ∫__h g(x) · [μ^⋆_h(x)]_i ν (x),
= √((∫__h g(x) · [μ^⋆_h(x)]_i ν (x))^2),
≤√(∑_j∈[d](∫__h g(x) · [μ^⋆_h(x)]_j ν (x))^2),
= ∫__h g(x) ·μ^⋆_h(x)ν(x) ,
≤√(d).
Therefore, we have
∫__hμ^⋆_h(x)ν (x)≤∑_i∈[d]∫__h |[μ^⋆_h(x)]_i| ν (x)≤ d^3/2.
Next, we show that the coverability constant <cit.> for Low-Rank MDPs is bounded by d.
For all h∈[H], there exists a measure ρ_h on _h × such that
sup_(x,a)∈_h×sup_π∈d^π(x,a)/ρ_h(x,a)≤ d.
Consider layer h+1. By definition for x ∈_h+1, we have that for any
π, d^π(x) = ^π[
μ_h+1^⋆(x)^⊤ϕ_h^⋆(_h, _h)]=μ_h+1^⋆(x)^⊤ϕ_h^⋆, π. Let
Ψ{π_1, …, π_d} be a barycentric
spanner for the set {ϕ^⋆, π_h |π∈} (see <ref>). Let
π_x denote the policy maximizing d^π(x) (if no such
maximizer exists, we may pass to a maximizing sequence). By definition of a barycentric spanner, there exist β_1, …, β_d ∈ [-1, 1] such that ϕ_h^⋆, π_x = ∑_i=1^d β_i ϕ_h^⋆, π_i, and so
d^π_x(x) = ∑_i = 1^d β_i
μ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i≤∑_i = 1^d *β_iμ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i
≤ d ·∑_i = 1^d 1/dμ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i
=d ·∑_i = 1^d 1/d
d^π_i(x),
where we have used that μ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i is non-negative.
Thus, by defining ρ_h+11/d∑_i=1^d d^π_i, we obtain the desired result.
Let >0 and B>0 be given. Fix h∈[H] and consider
a sequence of policies π1:K∈ and functions δ1:K:_h×→ [-B,B] such that for all k∈ [2 … K],
^k-1[ δk(_h,_h)^2 ] ≤^2, where k-11/k-1∑_ℓ=1^k-1πℓ. Then min_k∈[K]^πk[ δk(_h,_h) ] ≤√(2 d ln K) + 2 d B K^-1.
Define k-1(·,·) ^k-1[d^π(·,·)], if k∈[2 … K],
and k-1(·,·)≡ 0 if k=1. Further, let
ρ̃k(·,·) d/kρ_h(·,·), where
ρ_h(x,a) is as in <ref>. Finally, for any (x,a)∈_h ×, we define the “burn-in” index
τ_h(x,a) min{ k ∈[K] |d̅k-1(x,a) > (k-1) · d ·ρ_h(x,a) },
and note that τ_h(·,·)>1. Since the coverability constant is bounded by d in s (see <ref>), we have the following facts which follow from the derivations in <cit.>:
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] ≤ 2d B,
∀ (x,a)∈_h ×,∀ k≥τ_h(x,a), d̅k-1(x,a) + ρ̃k(x,a) ≤ 2d̅k-1(x,a).
With this, we have
∑_k=1^K ^πk[ δk(_h,_h) ]
= ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
≤ 2 d B + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
where the last inequality uses (<ref>). We now bound the second term on the of (<ref>). We have
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)]
=∑_k=1^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k } ,
=∑_k=2^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k }, (since τ_h(·,·)>1)
= ∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)(k-1(x,a)/k-1(x,a))^1/2δk(x,a)·𝕀{τ_h(x,a) ≤ k } ,
≤√(∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2 ·𝕀{τ_h(x,a) ≤ k }/k-1(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2), (Cauchy Schwarz)
≤√(∑_k=2^K ∑_(x,a)∈_h×2d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2),
where the last step follows by (<ref>). For the second term in (<ref>), we have
∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) δk(x,a)^2 ≤ K ^2,
by (<ref>).
On the other hand, for the first term on the right-hand side of (<ref>), we have
∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a) ≤∑_k=2^K ∑_(x,a)∈_h×max_ℓ∈ [K] d^πℓ(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a) ,
≤∑_k=2^K ∑_(x,a)∈_h× d ρ_h(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a),
≤∑_k=1^K ∑_(x,a)∈_h×d ρ_h(x,a)k · d^πk(x,a)/∑_ℓ∈[k-1] d^πℓ(x,a) + dρ_h(x,a),
≤ K d∑_(x,a)∈_h×ρ_h(x,a) ln K,
=K dln K,
where (<ref>) follows by <ref>
and <cit.>. Plugging (<ref>)
and (<ref>) into (<ref>), we get that
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ≤ K √(2 d ln K).
Combining this with (<ref>), we get
K ·min_k∈[K]^πk[ δk(_h,_h) ] ≤∑_k=1^K ^πk[ δk(_h,_h) ] ≤ K √(2 d ln K) + 2 d B.
This implies the desired result.
The following is a restatement of Theorem 2.2 in <cit.>.
Let A, E∈^d× d. If A is non-singular and r ≔A^-1E_< 1, then A+E is non-singular and (A+E)^-1- A^-1_≤E_A^-1^2_/(1-r).
PART:
Analysis of
§ ORGANIZATION OF THIS PART
<ref> of the appendix contains the proof
of <ref>, the guarantee for <ref>. This
section is organized as follows:
* In <ref>, we give an overview of (<ref>) and highlight its key differences to (<ref>).
* <ref> contains the proof of <ref>.
* <ref> provides generic guarantees
for the subroutine of ,
which are used within the proof of <ref>.
* Finally, <ref> compares the
reachability assumption used in the analysis of
to other notions used throughout the literature on RL in Low-Rank MDPs.
We note that the analysis of <ref> in <ref> also makes use of the guarantee of from <ref> in <ref>.
§ : ALGORITHM OVERVIEW
The algorithm is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. The structure of the algorithm is similar to that
of , with the main difference being that instead of computing
an optimal design, the algorithm computes a barycentric spanner
for the feature map.
In more detail, for each layer h≥2, uses a policy cover
Ψh built at a previous iteration within the
(<ref>) subroutine to produce a
feature map h that approximates . Using this feature map, the algorithm invokes a second subroutine, (<ref>) to produce a collection of policies π_1,…,π_d that act as a barycentric spanner for the
feature map, ensuring maximal coverage; given these policies, a new policy cover for layer h+2 is formed via Ψh+2={π_i∘_h+1π_ : i∈[d] }. To invoke the
subroutine, makes use of for policy optimization and
(<ref>) for estimation of vector-valued
functionals. Compared to , there is no inner loop (i.e.,
K=1); this is facilitated by the reachability assumption.
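Schematically, the layer-by-layer construction just described has the following shape (a Python sketch only; rep_learn, robust_spanner, and compose_with_uniform are placeholder names for the representation-learning, spanner-computation, and policy-composition steps, and this is not the paper's pseudocode).

def build_policy_covers(H, rep_learn, robust_spanner, compose_with_uniform):
    covers = {1: [], 2: ["uniform"]}                       # base case used in the analysis
    for h in range(1, H - 1):                              # h = 1, ..., H-2
        phi_hat = rep_learn(h, covers)                     # estimated feature map at layer h
        spanner_policies = robust_spanner(h, phi_hat, covers)   # pi_1, ..., pi_d
        covers[h + 2] = [compose_with_uniform(pi, layer=h + 1)  # pi_i composed at layer h+1
                         for pi in spanner_policies]
    return covers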
In what follows, we expand on the main differences between the two algorithms, focusing on the role of barycentric spanners.
Barycentric spanners
The algorithm uses the notion of a barycentric spanner
<cit.> as an efficient basis for exploration. We
define a barycentric spanner for an abstract set as follows
Given a set ⊂^d such that () = ^d, we say that a set { w_1, …, w_d }⊆ is a (C, )-approximate barycentric spanner for if for every w ∈, there exist β_1, …, β_d ∈ [-C, C] such that w - ∑_i = 1^d β_i w_i≤.[Note that our definition is a slight generalization of <cit.>; the latter is recovered with = 0.]
The following result shows that for Low-Rank MDPs, barycentric spanners
offer a compact representation for policy covers.
Suppose <ref> holds with η>0. If Ψ⊆ is a collection of policies such that {^π[
(_h, _h) ]|π∈Ψ}⊆^d is a (C,
)-approximate barycentric spanner for _h{^π[
(_h, _h) ]|π∈} with ≤η/2, then Ψ is an (α,0)-policy cover for layer h+1 with α = (2dC)^-1.
<Ref>, proven in <Ref>, shows that to compute a policy
cover for layer h+1, it suffices to find a barycentric spanner for the
set _h{^π[
(_h, _h) ]|π∈}⊆^d. Similar to the approach to optimal design computation in
, we show that barycentric spanner computation can be
efficiently reduced
to policy optimization:
* Using , a novel adaptation of the classical algorithm of
<cit.>, it holds that for any ϕ∈Φ,
spanner computation for the set {^π[
ϕ(_h, _h) ]|π∈} can be performed efficiently whenever, for any θ∈(1), one can (approximately) solve linear optimization problems of the form
_π∈^π*θ^ϕ(_h,_h).
* Given access to policy covers Ψ1:h for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to (<ref>).
To handle the fact that is unknown, <ref> computes policies π_1:d that induce a barycentric spanner for the set {^π[
h(_h, _h) ]|π∈}, where
h∈Φ is an estimated feature map produced by
. In what follows, we give a detailed overview of how the
subroutine achieves efficient spanner computation.
Barycentric spanner computation via approximate linear optimization
We consider an abstract framework for
barycentric spanner computation, which generalizes the problem faced
within . Suppose that we wish
to compute a spanner for an implicitly specified set
=*w^z_z∈⊆^d indexed by an abstract set
.
To allow for efficient spanner computation without resorting to
enumeration over the set , we assume access to two
oracles for the set , a linear optimization oracle :(1)→ and
an index-to-vector oracle :→^d. We assume that for some >0:
* For all θ∈^d with *θ=1, the output
ẑ_θ(θ) satisfies
θ^⊤w^ẑ_θ≥sup_z∈θ^⊤ w^z -.
* For all z∈, the output ŵ_z(z)
satisfies
ŵ_z - w^z≤.
The algorithm
(<ref>) computes a (C,)-approximate spanner for
using
(dlog(d/)) total calls to and . is an error-tolerant variant of the classical spanner computation algorithm of
<cit.>, which was originally introduced and
analyzed for
spanner computation with an exact linear optimization
oracle. Tolerance to approximation errors in the linear optimization oracle
is critical for our application to RL, where additive
errors will arise in sampling trajectories, as well as estimating
the feature maps ()_h∈[H]. achieves error tolerance by
perturbing the vectors returned by (θ) in the direction of
θ, which amounts to running the classical algorithm on an -fattening of , and is necessary in order to ensure that the approximation error of does not swamp the signal in directions θ in which is too “skinny.” This technique may be of independent interest; see <ref>
for additional details and formal guarantees.
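The scheme just described is illustrated by the Python sketch below, phrased in the abstract oracle model: lin_opt(theta) is assumed to return an index approximately maximizing theta·w^z, and lin_est(z) an approximation of w^z. This is a simplified rendering of the classical spanner-computation procedure with the eps-perturbation discussed above, not the paper's exact pseudocode.

import numpy as np

def robust_spanner_sketch(lin_opt, lin_est, d, C=2.0, eps=1e-3, max_rounds=1000):
    V = np.eye(d)                          # columns w_1, ..., w_d, initialized to e_1, ..., e_d
    idx = [None] * d

    def cofactor_direction(i):
        # theta[j] = det of V with column i replaced by e_j, so that
        # det(V with column i replaced by w) = theta . w; theta is nonzero
        # as long as det(V) is nonzero, which the iterations maintain.
        theta = np.empty(d)
        for j in range(d):
            Vj = V.copy()
            Vj[:, i] = np.eye(d)[:, j]
            theta[j] = np.linalg.det(Vj)
        return theta

    def best_in_direction(theta):
        # query both signs, keep the better candidate, and "fatten" it by eps in the
        # direction of theta to absorb the additive error of the approximate oracles
        theta = theta / np.linalg.norm(theta)
        zp, zm = lin_opt(theta), lin_opt(-theta)
        wp, wm = np.asarray(lin_est(zp)), np.asarray(lin_est(zm))
        if theta @ wp >= -(theta @ wm):
            return zp, wp + eps * theta
        return zm, wm - eps * theta

    for i in range(d):                     # first pass: build a (perturbed) basis
        idx[i], V[:, i] = best_in_direction(cofactor_direction(i))

    for _ in range(max_rounds):            # second pass: C-factor determinant improvements
        improved = False
        for i in range(d):
            z, w = best_in_direction(cofactor_direction(i))
            Vi = V.copy()
            Vi[:, i] = w
            if abs(np.linalg.det(Vi)) > C * abs(np.linalg.det(V)):
                idx[i], V[:, i] = z, w
                improved = True
                break
        if not improved:
            break
    return idx, V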
Putting everything together
Equipped with an estimated
feature map h from , applies
to the set {^π[h(_h,
_h)]|π∈} with
= and C = 2; that is, we plug in the learned
representation h for the true representation
.[Though the policies produced by the algorithm may not necessarily induce a spanner for _h= {^π[
(_h, _h) ]|π∈} (this would
require “point-wise” representation learning guarantees, which we do
not have), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ θ^⊤h(_h, _h)]
for a given θ∈(1), and implementing the oracle
entails estimating
^π[h(_h, _h)]
for a given π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>). To
implement (θ), we invoke with the rewards
r_t(x,a;θ){[ h(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
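As a small illustration of this reduction, a wrapper of the following form can play the role of the linear-optimization oracle expected by the spanner routine; here planner(h, rewards) is a hypothetical planning interface (for instance, a wrapper around the dynamic-programming sketch given earlier), feature_map stands for the estimated representation at layer h, and the returned policy plays the role of the abstract index z.

import numpy as np

def make_lin_opt(planner, feature_map, h):
    # linear-optimization oracle: plan for the reward theta . feature_map(x, a)
    # at layer h, with zero reward at all other layers
    def lin_opt(theta):
        rewards = {t: (lambda x, a: 0.0) for t in range(1, h)}
        rewards[h] = lambda x, a: float(np.dot(theta, feature_map(x, a)))
        return planner(h, rewards)
    return lin_opt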
§ ANALYSIS: PROOF OF THM:SPANRLMAIN
In this section, we prove the main guarantee for (<ref>). First, we outline our proof strategy in <ref>. Then, in <ref> and <ref>, we present guarantees for the instances of (<ref>) and (<ref>) used within . We then combine these results in <ref> to complete the proof of <ref>. A self-contained guarantee for (<Ref>) is given in <Ref>.
§.§ Proof Strategy
Like the proof of <ref> for , the proof of <ref> is inductive. However, due to the assumption of reachability, the proof does not make use of the extended MDP analysis used in the proof of <ref>, making it somewhat simpler.
For fixed h, we assume that the policy set Ψ1:h+1 produced by satisfies the property:
Ψ1,…Ψh+1 are (1 Ad,0)-policy covers for layers 1 through h+1, and max_t∈[h+1]|Ψt|≤ d.
Conditioned on this claim, we show that with high probability, the set Ψh+2 is a (1/4 A d,0)-policy cover for layer h+2. To prove this, we use the inductive assumption to show that acts as an approximate linear optimization oracle over = {^π[ h(_h, _h) ] |π∈} (<Ref>). Using this, we then instantiate the guarantee of from <ref> with and instantiated with and . To conclude the proof of the inductive step, we combine the main guarantee for together with the main guarantee for (<Ref>), along with a change of measure argument enabled by the assumption that Ψ1:h are policy covers (i.e., (<ref>)).
§.§ Guarantee for as a Subroutine for
We begin by showing that , as configured within , acts as an approximate linear optimization oracle as required by . In particular, we fix a layer h, assume that Ψ1:h+1 satisfy (<ref>), and apply the generic guarantees for in <Ref>.
Define function classes _1:h such that for each t∈[h],
_t {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ, w ∈(2√(d))}.
Given θ∈(1) and ϕ∈Φ, consider the reward functions r'_1:h(·,·;θ, ϕ) given by:
∀ (x,a)∈×, r'_t(x,a;θ,ϕ){[ ϕ(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
With these rewards and function classes, we show that the output
= (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n),
where Pt(Ψt), for each t∈[h], approximately solves
max_π∈θ^π[ ϕ(_h, _h) ]
with high probability if n≥ 1 is sufficiently large. Note that this matches the choice of reward functions in (<ref>) at iteration h with ϕ = ϕh, the feature map returned by in <ref>.
We first verify that the classes _1:h realize the reward functions specified in (<ref>) in the sense of <Ref>.
Under <ref>, the function classes _1:h in (<ref>) realize (in the sense of <ref>) the reward functions in (<ref>) for any ϕ∈Φ and θ∈(1). Furthermore, the functions in _1:h are uniformly bounded by 2√(d), and ln__t()≤ln |Φ|+ d ln (2√(d) /), for all t∈[h], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and θ∈(1), and let r'_t(·,·)≡ r'_t(·,·; θ, ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈_h. For t<h and π∈^t+1:h, we have by the low-rank structure that
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, by the fact that ^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ∈ [-1,1], for all y∈_t+1 (since ϕ(·,·)∈(1), for all ϕ∈Φ), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(2√(d)).
This, together with (<ref>) and the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈_t. The bound on the covering number __t() follows from a standard bound on the covering number of the ball (2√(d)) <cit.>.
Combining <Ref> with <Ref> (with =0) results in the following bound on the quality of as an approximate linear optimization oracle.
Let ,δ∈(0,1) be given and fix h∈[H]. Given θ∈(1) and ϕ∈Φ, let be the output of when given input (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n), where
* The reward functions r'_1:h(·, ·;θ,ϕ) are as in (<ref>).
* The function classes _1:h are as in (<ref>).
* Pt(Ψt), for each t∈[h], and the collection of policies Ψ1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + _(n,δ),
where _(n,δ) c H A d √(d n^-1 (d ln (2n d^1/2)+ln (|Φ|/δ))) for a sufficiently large absolute constant c>0.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within . We first show that (<Ref>) is a valid choice for the subroutine passed to .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output _h= (h,ϕ,π, n) (<ref>) satisfies, with probability at least 1-δ,
_h - ^π[ϕ(_h,_h)] ≤_(n,δ),
where _ c ·√(n^-1·log (1/δ)) and c>0 is a sufficiently large absolute constant.
By a standard vector-valued concentration bound in Euclidean space (see for example <cit.>) and the fact that ϕ(x, a)≤ 1 for all x ∈ and a ∈, there exists an absolute constant c>0 such that with probability at least 1 - δ,
_h - ^π[ ϕ(_h, _h) ]≤ c ·√(log(1/δ)/n).
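For completeness, the estimator analyzed above is simply an empirical average of the layer-h feature vector over n on-policy trajectories; a minimal Python sketch, with a hypothetical environment interface (env.reset, env.step), is as follows.

import numpy as np

def estimate_feature_expectation(env, policy, phi, h, n):
    # Monte Carlo estimate of E^pi[ phi(x_h, a_h) ]
    total = None
    for _ in range(n):
        x = env.reset()
        for t in range(1, h + 1):
            a = policy(x, t)
            if t == h:
                feat = np.asarray(phi(x, a))
                total = feat if total is None else total + feat
                break
            x, _reward = env.step(a)
    return total / n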
Recall that in , we instantiate passing as and as . Combining <Ref> with the general guarantee for in <Ref>, we have the following result.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with ,>0, δ∈(0,1), and feature class Φ satisfying <ref>. Further, let h denote the feature map returned by in <Ref> at iteration h. If Ψ1:h in <ref> satisfy (<ref>) and =(A,d,H,ln(|Φ|/δ)) is sufficiently large, then with probability at least 1 - δ/2H, we have that
* The number of iterations of in <Ref> of <Ref> is at most N ⌈d/2log_2( 100d/)⌉.
* The output (π_1, …, π_d) of has the property that for all π∈, there exist β_1,…,β_d∈[-2,2] such that
*^(h),π - ∑_i=1^d β_i ^(h),π_i≤ 3 d , where ^(h),π'^π'[h(_h,_h)].
By <Ref>, on the event that the instances of and used by satisfy <Ref> with ' = /2, the two prerequisite assumptions of the lemma hold; we instantiate the guarantee in <ref> with C=2, as used by <ref>. We claim that each call to and to satisfies <Ref> with probability at least 1- δ/8 d N H. Because each of and is called at most 4 d N times per iteration of , a union bound concludes the proof, contingent on this claim.
We now prove the claim. First, note that the instance of that uses within <ref> is of the form:
(h, r_1:h(·, ·, θ), _1:h, P1:h, n_)
with r_1:h and _1:h as in <Ref>, and Pt(Ψt) for each t∈[h]; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh, which implies that with probability at least 1- δ/8 d N H, the output of of the instance in (<ref>) satisfies:
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + c θ H A d √(d · (d ln (2n_ d^1/2)+ln (8 dNH|Φ|/δ))/n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ = ·^-2 A^2 d^3 H^2 · (d +ln (|Φ|/δ)),
for =(A,d,H,ln(|Φ|/δ)) sufficiently large, the right-hand side of (<ref>) is bounded by θ/2, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within by <Ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within .
Recall that Ph= (Ψh) is the distribution over policies that passes to at iteration h∈[H-2] to compute feature map ϕh. Thus, by invoking <ref> in <ref> and using the choice of n_ in <ref>, we immediately obtain the following corollary.
Let δ,∈(0,1), and be as in <ref>, and fix h∈[H-2]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/2H, the instance of in <ref> of <ref> runs for t≤· d iterations for = (A,d,H,log(|Φ|/δ)) sufficiently large, and returns output ϕh such that for all f∈, there exists w_fh∈(3d^3/2) satisfying
^(Ψh)[∑_a∈(ϕh(_h,a)^⊤wh_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/64 A^2 d^2,
where w_f ∫__h+1 f(y) (y) ν(y).
§.§ Concluding the Proof of thm:spanrlmain
In this section, we conclude the proof of the main guarantee (<ref>). We derive the guarantee from the following inductive claim.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with parameters ,>0, δ∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The collection of policies Ψ1:h+1 at the start of the hth iteration of satisfy (<ref>).
* <ref> (reachability) holds with η>0.
* The input parameter to is set to =η/36 d^5/2.
* The input parameter =(A,d,H,ln (|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the set of policies Ψh+2 produced by (Φ,,,δ) at the end of iteration h is an (1/ Ad,0)-policy cover for layer h+2.
With this, we can now prove <ref>.
Note that it suffices to prove that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>).
The number of trajectories used by is dominated by calls to . Since is called O(dln (d/)) times at each iteration of (<ref>), and each call to requires at most H n_ trajectories, the total number of trajectories after H iterations of is bounded by O(H^2 d n_). By plugging the choices for n_ and from the theorem statement, we obtain the claimed sample complexity.
Before proving <ref>, we make the following simple observation.
For any π∈, h∈ [H-1], any x∈_h+1, we have
(x)^⊤^π[ϕ_h^⋆(_h,_h)]=d^π(x)≥ 0.
The equality follows by construction. The non-negativity of d^π(x) follows by definition of a probability density.
We now prove <ref>.
Let _h and _h' denote the success events in <ref> and <ref>, respectively, and note that by the union bound, we have [_h ∩_h']≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'.
Throughout, we denote
ϕ_h^⋆,π^π[ϕ_h^⋆(_h,_h)], ∀ h∈[H], ∀π∈.
Because Ψ1:h+1 satisfy (<ref>) (i.e., are a policy cover) it holds by <Ref> that for all x∈_h,
max_π∈Ψh[h](x)^⊤ϕ_h-1^⋆,π≥α·sup_π∈[h](x)^⊤ϕ_h-1^⋆,π, for α1/ A d.
We will show that with probability at least 1-δ/H, the policy set Ψh+2 has the same property for layer h+2; that is, for all x∈_h+2,
max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆,π≥α·sup_π∈[h+2](x)^⊤ϕ_h+1^⋆,π.
Again, by <ref> this is equivalent to the statement that Ψh+2 is an (1/ Ad,0)-policy cover for layer h+2.
For the remainder of the proof, we will fix x∈_h+2 and let π_x ∈_π∈[h+2](x)^⊤ϕ_h+1^⋆,π. Our goal is to show that the inequality <ref> holds for x.
Preliminaries
Note that since x∈_h+2, we have [h+2](x)>0. It will be convenient to introduce a function f: _h+1→ defined by
f(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Further, we define
w_x ∫__h+1 f(y) (y) ν(y).
By definition of π_x, we have that for all y∈_h+1,
θ_x^⊤ϕ^⋆_h+1(y,π_x(y)) = max_a∈θ_x^⊤ϕ^⋆_h+1(y,a).
This together with the fact that θ_x=1 implies that
f ∈ = {. x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ};
the discriminator class in <ref> of .
Note also that since x∈_h+2, we have by reachability that
w_x^⊤ϕ_h^⋆, π_x= θ_x^⊤ϕ_h+1^⋆,π_x=1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π≥η>0.
Applying the guarantee for
Moving forward, let h be the feature map returned by at the hth iteration of <ref>, and define ϕ^(h),π^π[ϕh(_h,_h)], for any π∈. Further, let w_xh be the vector w_fh in <ref> with f=f_x, and note that
w_xh≤ 3 d^3/2.
By Jensen's inequality, we compute
( wh_xϕ^(h),π_x- w_xϕ_h^⋆, π_x)^2
≤^π_x[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2], (Jensen's inequality)
= ∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π_x_h-1ν(y), (Low-Rank MDP)
≤α^-1max_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x -ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by (<ref>))
≤α^-1∑_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by <ref>)
≤α^-1∑_π̃∈Ψh∑_a∈∫__h( h(y,a)^⊤ wh_x - ϕ_h^⋆(y,a)^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y),
=A α^-1 d·^(Ψh)[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2],
where the last step follows by the definition of Ψh in <ref> and that Ψh = d. Now, since w_x = ∫__h+1 f(y) (y) ν(y) (see (<ref>)) and f∈ (see (<ref>)); the guarantee for in <ref> together with (<ref>) implies that (conditioned on the event )
| wh_x^(h),π_x- w_xϕ_h^⋆, π_x| ≤√(A dη^2/64 α A^2 d^2)≤η/4.
Applying the guarantee for
Letting π_1,…,π_d be the policies returned by at iteration h of , the guarantee of in <ref> implies that there exist β_1, …, β_d∈[-2,2] such that
*ϕ^(h),π_x-∑_i=1^d β _iϕ^(h),π_i≤ 3 d ≤η/12 d^3/2,
where the last inequality follows by the fact that = η/36 d^5/2. Combining (<ref>) with (<ref>) and using the triangle inequality, we get that
w_x^⊤ϕ_h^⋆, π_x ≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + wh_x·η/12 d^3/2 +η/4,
≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + η/4+η/4, (by (<ref>))
≤ 2d max_i∈[d] w_x^⊤ϕ_h^⋆, π_i + η/2.
Combining this with (<ref>) and rearranging implies
w_x^⊤ϕ_h^⋆, π_x≤ 4d·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i.
On the other hand, by definition of w_x, we have
max_i∈[d] w_x^⊤ϕ_h^⋆, π_i = max_i∈[d]θ_x^⊤ϕ_h+1^⋆, π_i∘_h+1π_x,
= 1/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_x[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)],
≤A/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)], (see below)
= A/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π,
where the inequality follows from the non-negativity of _h+1(·)_h+1(x,a), for all (x,a)∈_h× (due to <Ref>), and (<ref>) follows from the definition of Ψh+2 in <Ref> of <Ref>. Combining (<ref>) and (<ref>) then implies that
1/*[h+2](x)[h+2](x)^⊤ϕ_h+1^⋆, π_x =θ_x^⊤ϕ_h+1^⋆,π_x= w_x^⊤ϕ_h^⋆, π_x ≤ 4d ·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i,
≤4 A d/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π.
This, together with <ref>, implies that (<ref>) holds. Since this argument holds uniformly for all x∈_h+2, this completes the proof.
§.§ Proof of lem:barycentricspannerknownphi
By definition for x ∈_h+1, we have d^π(x) = ^π[ (x)^⊤[h](_h, _h)]. Let π_x denote the policy maximizing d^π(x) (if no such maximizer exists, we may pass to a maximizing sequence) and let Ψ = {π_1, …, π_d }. Then, we have for some β_1, …, β_d ∈ [-C, C],
d^π_x(x) = (x)^⊤(∑_i = 1^d β_i [π_i]) + (x)^⊤( [π_x] - ∑_i = 1^d β_i[π_i]),
≤ C d ·max_i ∈[d](x)^⊤[π_i] + ·(x)
, (Cauchy-Schwarz)
≤ C d ·max_i ∈[d](x)^⊤[π_i] + 1/2d^π_x(x),
where the inequality follows by the fact that <ref> holds with ≤η/2. The result now
follows by rearranging.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for the spanner subroutine when invoked with oracles satisfying the following assumption.
[Approximate linear optimization oracles]
For some abstract set 𝒵 and a collection of vectors {w^z∈ℝ^d | z∈𝒵} indexed by elements in 𝒵, there exists ε'>0 such that for any θ∈ℝ^d∖{0} and z∈𝒵, the output ẑ_θ of the linear optimization oracle on input θ/‖θ‖ and the output ŵ_z of the estimation oracle on input z satisfy
sup_z∈𝒵 θ^⊤ w^z ≤ θ^⊤ w^ẑ_θ + ε'·‖θ‖, and ‖ŵ_z − w^z‖ ≤ ε'.
Letting 𝒲 ≜ {w^z | z∈𝒵} and assuming that 𝒲 ⊆ ℬ(1), the next theorem bounds the number of iterations of the subroutine under <ref>, and shows that the output is an approximate barycentric spanner for 𝒲 (<ref>). Our result extends those of <cit.>, in that it only requires an approximate linear optimization oracle, which is potentially of independent interest.
Fix C>1 and ε∈(0,1), and suppose that {w^z | z ∈ 𝒵} ⊆ ℬ(1). If the subroutine (<Ref>) is run with parameters C, ε>0 and oracles satisfying <ref> with ε' = ε/2, then it terminates after d + (d/2)·log_C(100d/ε²) iterations, and requires at most twice that many calls to each oracle. Furthermore, the output z_1:d has the property that for all z∈𝒵, there exist β_1,…,β_d∈[-C,C], such that
‖ w^z − ∑_i=1^d β_i w^z_i ‖ ≤ 3Cd·ε/2.
The proof follows similar steps to those in <cit.>, with modifications to account for the fact that linear optimization over the set {w^z | z∈𝒵} is only performed approximately.
Part I: Bounding the number of iterations
In <Ref>, there are two loops, both of which require two calls to each of the two oracles per iteration. As the first loop has exactly d iterations, it suffices to bound the number of iterations in the second loop.
Let M^(i) ≜ (w_1,…, w_i, e_i+1, …, e_d) be the matrix whose columns are the vectors at the end of the ith iteration of the first loop (<ref>) of <ref>; note that columns i+1 through d are unchanged at this point in the algorithm. For i∈[d], we define ℓ_i(w) ≜ det(w, M^(i)_-i) and θ_i ≜ (det(e_j, M^(i)_-i))_j∈[d] ∈ ℝ^d, where we recall that for any matrix A, the matrix A_-i is defined as the result of removing the ith column from A. Note that ℓ_i is linear in w, and in particular
ℓ_i(w) = w^⊤ θ_i.
Let W^(0) ≜ M^(d) = (w_1, …, w_d), and let W^(j) denote the resulting matrix after j iterations of the second loop (<Ref>) of <ref>. We will show that for any J ≥ 1,
|det(W^(J))| ≤ |det(W^(0))| · (100d/ε²)^d/2.
By construction of the loop, we have |det(W^(j))| ≥ C · |det(W^(j-1))| for each j ∈ [J], and thus |det(W^(J))| ≥ |det(W^(0))| · C^J. Combining these two facts will establish the bound on the iteration complexity. We now prove (<ref>).
Let u_i ≜ e_i^⊤ (M^(i))^-1 (note that u_i is a row vector) and let U denote the matrix whose ith row is u_i. We observe that for all w ∈ ℝ^d,
u_i w = ℓ_i(w)/ℓ_i(w_i),
where we note that ℓ_i(w_i) ≠ 0 by construction; indeed, the columns of M^(i) are a basis for ℝ^d because det(M^(i)) ≠ 0, and the equality holds on the columns, so the two linear functions must be equal. Now, since <ref> holds with ε' = ε/2, we have
θ_i^⊤ w_i^+ ≥ sup_z∈𝒵 θ_i^⊤ w^z − (ε/2)·‖θ_i‖, and θ_i^⊤ w_i^- ≤ inf_z∈𝒵 θ_i^⊤ w^z + (ε/2)·‖θ_i‖,
where w_i^± denotes the estimation oracle's output on z_i^±. We will now show that
ℓ_i(w_i) ≥ (ε/2)·‖θ_i‖.
There are two cases. First, suppose that θ_i^⊤ w_i^+ ≥ −θ_i^⊤ w_i^-, corresponding to the conditional in <Ref> of <ref> being satisfied. Combining this with (<ref>), we have
θ_i^⊤ w_i^+ ≥ ( sup_z∈𝒵 θ_i^⊤ w^z − (ε/2)‖θ_i‖ ) ∨ ( −θ_i^⊤ w_i^- ),
≥ ( sup_z∈𝒵 θ_i^⊤ w^z − (ε/2)‖θ_i‖ ) ∨ ( sup_z∈𝒵 −θ_i^⊤ w^z − (ε/2)‖θ_i‖ ), (by (<ref>))
= ( sup_z∈𝒵 θ_i^⊤ w^z ) ∨ ( sup_z∈𝒵 −θ_i^⊤ w^z ) − (ε/2)‖θ_i‖,
≥ −(ε/2)‖θ_i‖.
Because the conditional is satisfied, w_i = w_i^+ + ε·θ_i/‖θ_i‖, and so by plugging this into (<ref>), we have
ℓ_i(w_i) = θ_i^⊤ w_i ≥ (ε/2)·‖θ_i‖.
The case that θ_i^⊤ w_i^+ ≤ −θ_i^⊤ w_i^- is essentially identical, establishing (<ref>). Now, recall that 𝒲 ≜ {w^z | z∈𝒵} and let 𝒲⊕ℬ(3ε/2) ≜ {w + b | w∈𝒲 and b∈ℬ(3ε/2)} denote the Minkowski sum with ℬ(3ε/2). By Cauchy-Schwarz, it holds that for all w' ≜ w + b ∈ 𝒲⊕ℬ(3ε/2),
ℓ_i(w') = θ_i^⊤ w' = θ_i^⊤ w + θ_i^⊤ b ≤ (1 + 3ε/2)·‖θ_i‖,
where we used that 𝒲 ⊆ ℬ(1) (by assumption). Thus, for any w' ∈ 𝒲⊕ℬ(3ε/2), we have
u_i w' = ℓ_i(w')/ℓ_i(w_i) ≤ (1 + 3ε/2)/(ε/2).
We now observe that by construction and the fact that <ref> holds with ε' = ε/2, the kth column w_k' of W^(J) belongs to 𝒲⊕ℬ(3ε/2), for any k∈[d]. Thus, the (i,k) entry u_i w_k' of U W^(J) satisfies |u_i w_k'| ≤ (1 + 3ε/2)/(ε/2), and so the columns of U W^(J) have Euclidean norm at most 10√d/ε. Since the magnitude of the determinant of a matrix is upper bounded by the product of the Euclidean norms of its columns, it holds that |det(U W^(J))| ≤ (100d/ε²)^d/2.
On the other hand, again by construction, we see that the columns w_1,…, w_d of W^(0) satisfy u_i w_j = 0 for j < i, and u_i w_i = 1. Thus, U W^(0) is an upper-triangular matrix with 1s on the diagonal, and hence has determinant 1. Because determinants are multiplicative, this implies that det(U) ≠ 0. We now compute:
|det(W^(J))|/|det(W^(0))| = |det(U W^(J))|/|det(U W^(0))| = |det(U W^(J))| ≤ (100d/ε²)^d/2.
Thus, the upper bound on |det(W^(J))| holds and the claim is proven. Therefore, we have
C^J ≤ (100d/ε²)^d/2,
and so J ≤ ⌈(d/2)·log_C(100d/ε²)⌉.
Part II: Spanner property for the output. Having shown that the algorithm terminates, we now show that the result is an approximate barycentric spanner for 𝒲. Let W ≜ (w_1, …, w_d) be the matrix at termination of the algorithm. By definition, if the second loop (<Ref>) has terminated, then for all i∈[d],
max(θ_i^⊤ w_i^+, −θ_i^⊤ w_i^-) + ε·‖θ_i‖ ≤ C · |det(w_i, W_-i)|,
where θ_i = (det(e_j, W_-i))_j∈[d] ∈ ℝ^d. On the other hand, by <ref>, (<ref>) holds, and so
∀ z∈𝒵, ∀ i ∈ [d], |det(w^z, W_-i)| = |θ_i^⊤ w^z| ≤ max(θ_i^⊤ w_i^+, −θ_i^⊤ w_i^-) + ε·‖θ_i‖,
≤ C · |det(w_i, W_-i)|.
Now, fix z∈𝒵. Since det(W) ≠ 0, there exist β_1:d ∈ ℝ such that w^z = ∑_i=1^d β_i w_i. By plugging this into (<ref>) and using the linearity of the determinant, we have
∀ i∈[d], C · |det(w_i, W_-i)| ≥ |det(w^z, W_-i)| = |∑_j=1^d β_j det(w_j, W_-i)| = |β_i| · |det(w_i, W_-i)|.
Therefore, |β_i| ≤ C for all i∈[d]. Now, by definition of w_1:d and of the oracle outputs ŵ_z_1, …, ŵ_z_d, for all i∈[d] we have ‖w_i − ŵ_z_i‖ ≤ ε. Furthermore, by <ref>, we also have ‖ŵ_z_i − w^z_i‖ ≤ ε/2. Therefore, by the triangle inequality, we have
‖w^z − ∑_i=1^d β_i w^z_i‖ ≤ ‖w^z − ∑_i=1^d β_i w_i‖ + ∑_i=1^d |β_i| ‖ŵ_z_i − w^z_i‖ + ∑_i=1^d |β_i| ‖w_i − ŵ_z_i‖ ≤ 3dCε/2.
This completes the proof.
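For concreteness, the following is a minimal Python sketch of the spanner computation analyzed above, written against two user-supplied callables standing in for the approximate linear optimization and estimation oracles of <ref>. The names robust_spanner, lin_opt, and lin_est are ours, the determinant-based quantities mirror ℓ_i and θ_i from the proof, and the snippet is an illustration of the scheme under these assumptions rather than a faithful reproduction of the referenced pseudocode.

```python
import numpy as np

def cofactor_row(W, i):
    """theta_i with entries det(e_j, W_{-i}); represents the linear map w -> det(w, W_{-i})."""
    W_minus_i = np.delete(W, i, axis=1)
    return np.array([np.linalg.det(np.column_stack([e, W_minus_i]))
                     for e in np.eye(W.shape[0])])

def robust_spanner(d, lin_opt, lin_est, C=2.0, eps=1e-3, max_rounds=1000):
    """Approximate C-barycentric spanner from approximate linear-optimization oracles (sketch)."""
    W = np.eye(d)          # working columns w_1..w_d, initialized to the standard basis
    Z = [None] * d         # oracle indices z_1..z_d

    def propose(i):
        theta = cofactor_row(W, i)
        nrm = np.linalg.norm(theta)
        z_p, z_m = lin_opt(theta / nrm), lin_opt(-theta / nrm)
        w_p, w_m = lin_est(z_p), lin_est(z_m)
        if theta @ w_p >= -(theta @ w_m):      # pick the better of the +/- directions
            return z_p, w_p + eps * theta / nrm, theta @ w_p + eps * nrm, theta
        return z_m, w_m - eps * theta / nrm, -(theta @ w_m) + eps * nrm, theta

    for i in range(d):                         # first loop: fill each column once
        Z[i], w_new, _, _ = propose(i)
        W[:, i] = w_new

    for _ in range(max_rounds):                # second loop: swap while |det| can grow by factor C
        swapped = False
        for i in range(d):
            z_new, w_new, val, theta = propose(i)
            if val > C * abs(theta @ W[:, i]):
                Z[i], W[:, i] = z_new, w_new
                swapped = True
        if not swapped:
            break
    return Z, W

# Tiny usage example with an exact oracle over a finite set of vectors inside B(1)
vecs = np.random.default_rng(0).normal(size=(10, 3))
vecs /= np.maximum(1.0, np.linalg.norm(vecs, axis=1, keepdims=True))
Z, W = robust_spanner(3, lambda t: int(np.argmax(vecs @ t)), lambda z: vecs[z])
```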
§ PROPERTIES OF REACHABILITY ASSUMPTION
In this section, we compare the η-reachability assumption used by
(<ref>) to different reachability
assumptions used throughout the literature on RL in Low-Rank MDPs. In
<ref>, we demonstrate an exponential separation
between our notion of reachability and notions considered in the so-called latent variable model <cit.>. In <ref>, we consider a number of other reachability assumptions and show that they imply <Ref>.
§.§ Comparison to Latent Variable Model
In this subsection, we show that our reachability assumption is
implied by a reachability assumption used by
<cit.> in the latent
variable/non-negative feature model, and show that our reachability
assumption can hold even when the best possible latent variable
embedding dimension is exponential in the dimension d. We begin by
defining the latent variable model.
Given a transition operator T: 𝒳×𝒜→Δ(𝒳), a latent variable representation consists of a countable latent space 𝒵 and functions ψ: 𝒳×𝒜→Δ(𝒵) and q: 𝒵→Δ(𝒳), such that T(·| x,a) = ∑_z∈𝒵 q(·| z) ψ(z | x,a). The latent variable dimension of T, denoted d_LV, is the cardinality of the smallest latent space 𝒵 for which T admits a latent variable representation.
The interpretation for the latent variable model is as follows:
* Each (x,a) pair induces a distribution ψ(x,a) ∈ Δ(𝒵) over z∈𝒵.
* The latent variable is sampled as z ∼ ψ(x,a).
* The next state is sampled as x' ∼ q(·| z).
Note that in discrete state spaces, all transition operators admit a trivial latent variable representation, as we may take ψ(x,a) = T(·| x,a), but the dimension of such a representation is potentially infinite. A latent variable representation certifies that there exists a factorization T(x' | x,a) = ψ(x,a)^⊤ q(x') with embedding dimension |𝒵|, and so the rank of T is at most d_LV; hence the latent variable dimension gives an upper bound on the rank of the transition operator. On the other hand, compared with the general Low-Rank factorization, the latent variable factorization additionally requires that ψ(x,a) and q(·| z) are probability distributions, and thus non-negative, for all z∈𝒵 and (x,a)∈𝒳×𝒜, implying that d_LV is equivalent to the non-negative rank <cit.> of the transition operator.
Assuming that a latent variable representation exists, <cit.> consider the following notion of reachability.
There exists η>0 such that
∀ h∈[H-1], ∀ z∈𝒵_h+1, sup_π∈Π ℙ^π[z_h+1 = z] ≥ η.
We first show the latent variable reachability condition above implies our more general assumption.
Consider a Low-Rank MDP with rank d ≥ 1. Under the latent variable model in <ref>, if the latent variable reachability condition in (<ref>) is satisfied for some η>0, then, for all h∈[H], the transition kernel T_h admits a factorization T_h(·| x,a) = μ_h+1(·)^⊤ ϕ_h(x,a), where μ_h+1(·)∈ℝ^d_LV and ϕ_h(·,·)∈ℝ^d_LV, such that d_LV ≤ d A²/η² and η²/(A√d)-reachability (in the sense of <ref>) is satisfied.
Suppose that the latent variable reachability condition (<ref>) holds with parameter η. By <cit.>, the non-negative rank of T_h is bounded as d_LV ≤ d A²/η².
Letting q and ψ be as in the definition of the latent variable representation in <ref>, we define μ and ϕ as: for all h∈[H-1],
μ_h+1(·) ≜ (q(·| z))_z∈𝒵 ∈ ℝ^d_LV, and ϕ_h(·,·) ≜ (ψ(z| ·, ·))_z∈𝒵 ∈ ℝ^d_LV.
Now, fix h∈[H-1] and x∈𝒳_h+1. For z_0 ∈ argmax_z∈𝒵_h+1 q(x| z), we have
sup_π∈Π d^π(x) = sup_π∈Π ∑_z∈𝒵_h+1 q(x | z)·𝔼^π[ψ(z | x_h,a_h)],
≥ sup_π∈Π q(x | z_0)·𝔼^π[ψ(z_0 | x_h,a_h)],
= ‖μ_h+1(x)‖_∞ · sup_π∈Π ℙ^π[z_h+1 = z_0],
≥ η·‖μ_h+1(x)‖_∞, (using reachability)
≥ (η/√d_LV)·‖μ_h+1(x)‖.
We now complement the result above by showing that there exist Low-Rank MDPs for which our notion of reachability (<ref>) is satisfied with η polynomially small, yet the best possible latent variable embedding has dimension d_LV = 2^Ω(d). This contrasts with the results in <cit.>, which show that latent variable reachability implies a polynomial bound on the latent variable dimension.
There exists a one-step Low-Rank MDP of rank d≥1, where η-reachability (<ref>) is satisfied with η = 1/(2√d), but where the non-negative rank satisfies d_LV = 2^Ω(d).
Let n ∈ ℕ and d ≜ \binom{n}{2} + 1. As shown in the proof of <cit.>, there exists a horizon-two MDP with the following properties:
* The state spaces _1 and _2 at layers 1 and 2, respectively, are finite.
* The cardinality of is d; i.e. = {a_1,…, a_d}.[Technically, the example in the proof of <cit.> does not explicitly specify the number of actions. Instead, the example assigns a number of state-action pairs to vectors in ^d, without specifying the number of actions. The number of actions in their example is a degree of freedom, which we set to d here without loss of generality.]
* The transition kernel T_1 admits the factorization:
T_1(·| x,a) = μ^⋆_2(·)^⊤ ϕ_1^⋆(x,a) ∈ Δ(𝒳_2), ∀ (x,a)∈𝒳_1×𝒜,
where for all x'∈𝒳_2, μ^⋆_2(x')∈ℝ_≥0^d, and for all (x,a)∈𝒳_1×𝒜, ϕ_1^⋆(x,a)∈ℝ_≥0^d.
* The non-negative rank of T_1 is 2^Ω(d).
We augment this MDP by adding an extra state x̄, and let 𝒳̄_1 ≜ 𝒳_1 ∪ {x̄}. We define ϕ̄_1^⋆: 𝒳̄_1×𝒜→ℝ_≥0^d to be the extension of ϕ_1^⋆ given by
∀ i∈[d], ϕ̄_1^⋆(x̄, a_i) = e_i, and ∀ x ∈𝒳_1, ϕ̄_1^⋆(x, a_i) = ϕ_1^⋆(x,a_i),
where e_i is the ith basis element in ℝ^d. We define the initial state distribution ρ̄ to have ρ̄(x̄) = 1/2 and ρ̄(x) = 1/(2|𝒳_1|), for all x∈𝒳_1.[We note that <cit.> did not specify the initial distribution, which is not needed for the conclusion of their result.] We let M̄ ≜ (𝒳̄_1∪𝒳_2, 𝒜, ϕ̄_1^⋆, (μ^⋆_h)_h∈[2], ρ̄) denote the resulting MDP. Note that adding an extra state at layer 1 in this fashion only adds d additional rows to the transition matrix T̄ (viewed as a (|𝒳̄_1×𝒜|)×|𝒳_2| matrix). Therefore, the non-negative rank of T̄ is at least that of T_1.
We now show that reachability is satisfied in the augmented MDP M̄. Let π_i be the policy that always plays action a_i. With this, we have that for any x'∈𝒳_2,
sup_π∈Π d^π(x') ≥ max_i∈[d] d^π_i(x'),
= max_i∈[d] μ^⋆_2(x')^⊤ 𝔼[ϕ̄_1^⋆(x_1,a_i)],
= max_i∈[d] { 𝔼[𝕀{x_1 = x̄}·μ^⋆_2(x')^⊤ ϕ̄_1^⋆(x_1,a_i)] + 𝔼[𝕀{x_1 ≠ x̄}·μ^⋆_2(x')^⊤ ϕ̄_1^⋆(x_1,a_i)] },
≥ max_i∈[d] ρ̄(x̄)·μ^⋆_2(x')^⊤ ϕ̄_1^⋆(x̄,a_i),
where the last inequality follows by the fact that, for all (x,a)∈𝒳_1×𝒜, μ^⋆_2(x')^⊤ ϕ̄_1^⋆(x,a) = μ^⋆_2(x')^⊤ ϕ_1^⋆(x,a) ≥ 0 (since μ^⋆_2(x')^⊤ ϕ_1^⋆(x,a) is a conditional density). On the other hand, from the construction of ϕ̄_1^⋆ and the fact that μ^⋆_2(x')∈ℝ^d_≥0, we have
max_i∈[d] μ^⋆_2(x')^⊤ ϕ̄_1^⋆(x̄,a_i) = ‖μ^⋆_2(x')‖_∞ ≥ ‖μ^⋆_2(x')‖/√d.
Combining this with (<ref>) and using that ρ̄(x̄) = 1/2 implies that 1/(2√d)-reachability is satisfied in M̄.
§.§ Relation to Other Reachability Assumptions
In this subsection, we show that <ref> is implied
by a notion of feature coverage used in the context of transfer
learning in Low-Rank MDPs <cit.>, as well as a notion of
explorability used in the context of reward-free RL in linear
MDPs <cit.>.
§.§.§ Feature Coverage
We first consider the coverage condition used by <cit.>, which involves the second moments of the feature map ϕ^⋆.
We say that the linear MDP with featurization ϕ^⋆_h satisfies η-feature coverage if for all h ∈ [H],
sup_π∈Π λ_min(𝔼^π[ϕ^⋆_h(x_h,a_h) ϕ^⋆_h(x_h,a_h)^⊤]) ≥ η.
We show that η-feature coverage implies
(η/2)^3/2-reachability. Thus, up to polynomial dependence,
η-feature coverage is a special case of <ref>.
Suppose that an MDP satisfies η-feature coverage as in <ref> for some η>0. If ϕ^⋆_h(x,a)∈ℬ(1) for all (x,a), then the MDP satisfies (η/2)^3/2-reachability in the sense of <Ref>.
Let h∈[H] and x∈𝒳_h+1 be given, and define
θ ≜ μ^⋆_h+1(x)/‖μ^⋆_h+1(x)‖.
To keep notation compact, we define Φ_h ≜ ϕ_h^⋆(x_h,a_h). By η-feature coverage, there exists π∈Π such that
η ≤ 𝔼^π[(θ^⊤ Φ_h)²] = 𝔼^π[𝕀{(θ^⊤ Φ_h)² < η/2}·(θ^⊤ Φ_h)²] + 𝔼^π[𝕀{(θ^⊤ Φ_h)² ≥ η/2}·(θ^⊤ Φ_h)²],
≤ η/2 + ℙ^π[(θ^⊤ Φ_h)² ≥ η/2],
where we have used that ‖θ‖ = 1 and ‖ϕ_h^⋆(x,a)‖ ≤ 1 for all (x,a)∈𝒳_h×𝒜. Rearranging (<ref>) and using that θ^⊤ Φ_h ≥ 0 (it is a scaled conditional density), we have
ℙ^π[θ^⊤ Φ_h ≥ √(η/2)] = ℙ^π[(θ^⊤ Φ_h)² ≥ η/2] ≥ η/2.
Now, by Markov's inequality, we have that
θ^⊤ ϕ_h^⋆,π = 𝔼^π[θ^⊤ Φ_h] ≥ √(η/2)·ℙ^π[θ^⊤ Φ_h ≥ √(η/2)] ≥ (η/2)^3/2,
where we have once more used that θ^⊤ Φ_h ≥ 0 almost surely.
§.§.§ Explorability
We now consider the explorability assumption of <cit.>, which involves the first moment of the feature map . This notion is defined as follows.
We say that a linear MDP satisfies η-explorability if for any h∈[H] and any θ∈ℝ^d∖{0} it holds that
sup_π∈Π |θ^⊤ 𝔼^π[ϕ^⋆_h(x_h,a_h)]| ≥ η·‖θ‖.
We now show that η-explorability is a special case of η-reachability:
Suppose that the explorability condition in <ref> is satisfied with η>0. Then, η-reachability is satisfied.
Let x∈𝒳_h+1 and define θ ≜ μ^⋆_h+1(x). By explorability, we have that
sup_π∈Π d^π(x) = sup_π∈Π 𝔼^π[μ^⋆_h+1(x)^⊤ ϕ^⋆_h(x_h,a_h)],
= sup_π∈Π |𝔼^π[μ^⋆_h+1(x)^⊤ ϕ^⋆_h(x_h,a_h)]|, (μ^⋆_h+1(·)^⊤ ϕ^⋆_h(x,a) is a conditional law)
= sup_π∈Π |θ^⊤ 𝔼^π[ϕ^⋆_h(x_h,a_h)]|,
≥ η·‖θ‖, (by explorability)
= η·‖μ^⋆_h+1(x)‖.
This shows that <ref> is satisfied with
parameter η.
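To make the quantities appearing in this appendix concrete, the sketch below evaluates, for a toy one-step featurization of our own choosing, the empirical analogues of η-feature coverage (smallest eigenvalue of the second-moment feature matrix), η-explorability (worst-case first-moment margin over sampled directions θ), and η-reachability (occupancy relative to ‖μ^⋆(x)‖). The feature map, occupancy vectors, and policy set are illustrative assumptions, not constructions from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_sa, n_next = 3, 30, 10

phi = np.abs(rng.random((n_sa, d)))                  # features phi*(x,a) >= 0
phi /= np.linalg.norm(phi, axis=1, keepdims=True)    # normalize into the unit ball
mu = np.abs(rng.random((n_next, d)))                 # mu*(x') >= 0

# Each "policy" is represented directly by its occupancy measure over (x,a) pairs
policies = [rng.dirichlet(np.ones(n_sa)) for _ in range(5)]

# eta-feature coverage: sup_pi lambda_min( E^pi[ phi phi^T ] )
cov = max(np.linalg.eigvalsh(phi.T @ np.diag(p) @ phi)[0] for p in policies)

# eta-explorability: min over sampled directions theta of sup_pi |theta^T E^pi[phi]|
thetas = rng.normal(size=(200, d))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
expl = min(max(abs(t @ (p @ phi)) for p in policies) for t in thetas)

# eta-reachability: min over x' of sup_pi d^pi(x') / ||mu(x')||, with d^pi(x') = mu(x')^T E^pi[phi]
reach = min(max(mu[i] @ (p @ phi) for p in policies) / np.linalg.norm(mu[i])
            for i in range(n_next))

print(f"coverage ~ {cov:.3f}, explorability ~ {expl:.3f}, reachability ~ {reach:.3f}")
```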
|
http://arxiv.org/abs/2307.04260v1 | 20230709202754 | Cluster tomography in percolation | ["Helen S. Ansell", "Samuel J. Frank", "István A. Kovács"] | cond-mat.dis-nn | ["cond-mat.dis-nn", "cond-mat.stat-mech"] | |
http://arxiv.org/abs/2307.04414v1 | 20230710084225 | Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond | ["Shuji Ito", "Moeta Tsukamoto", "Kensuke Ogawa", "Tokuyuki Teraji", "Kento Sasaki", "Kensuke Kobayashi"] | cond-mat.mes-hall | ["cond-mat.mes-hall", "physics.app-ph"] |
Department of Physics, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
National Institute for Materials Science, Tsukuba, Ibaraki 305-0044, Japan
Institute for Physics of Intelligence, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-scale Quantum Science Institute, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
Nitrogen-vacancy (NV) centers in diamonds are a powerful tool for accurate magnetic field measurements.
The key is precisely estimating the field-dependent splitting width of the optically detected magnetic resonance (ODMR) spectra of the NV centers.
In this study, we investigate the optical power dependence of the ODMR spectra using NV ensemble in nanodiamonds (NDs) and a single-crystal bulk diamond.
We find that the splitting width exponentially decays and is saturated as the optical power increases.
Comparison between NDs and a bulk sample shows that while the decay amplitude is sample-dependent, the optical power at which the decay saturates is almost sample-independent.
We propose that this unexpected phenomenon is an intrinsic property of the NV center due to non-axisymmetric deformation or impurities.
Our finding indicates that diamonds with less deformation are advantageous for accurate magnetic field measurements.
Optical-power-dependent splitting of magnetic resonance in nitrogen-vacancy centers in diamond
§ INTRODUCTION
A nitrogen-vacancy (NV) center in a diamond is a defect where a nitrogen atom replaces a carbon atom in the lattice with a vacancy at its neighboring site.
The NV center has an electron spin S=1, and its peculiar spin-dependent optical transitions enable the optical initialization and readout of the ground-state spin.
This property has been applied to the quantum sensing of local magnetic fields <cit.> and temperature <cit.>.
Researchers have applied the technique to measure various physical properties, such as observing the electron flow in graphene <cit.> and the stray fields from magnetic domain walls of a single-crystal antiferromagnet Cr_2O_3 <cit.>.
The basis for these achievements is the ability to accurately measure local magnetic fields on the order of μT using NV centers.
Optically detected magnetic resonance (ODMR) is a typical and basic measurement technique for quantum sensing using NV centers.
This technique measures the microwave (MW) frequency dependence of the photoluminescence (PL) intensity (red) when the NV centers are continuously irradiated with an excitation light (green) and MW.
The ODMR spectrum presents a magnetic resonance signal between the ground state spin levels m_S=0 and m_S=±1.
The resonance frequency splits against the magnetic field due to the Zeeman effect <cit.> and shifts in the same direction against temperature change <cit.>.
In addition, the splitting of the resonance frequency is affected by crystal strain <cit.>, electric field <cit.>, and hyperfine interactions <cit.>.
Therefore, it is essential for accurate sensing to estimate the splitting width purely due to the magnetic field from the ODMR spectra.
Commonly used diamond samples are single-crystal bulk diamonds and nanodiamonds (NDs) with grain sizes ranging from tens to hundreds of nanometers <cit.>.
Depending on whether the diamond is a bulk crystal or nanoparticles, there are variations in crystal strains, impurity density, and crystal orientation.
The ODMR spectra of NV centers vary with the excitation light power.
For example, the contrast and linewidth vary with the degree of initialization and spin relaxation associated with optical excitation <cit.>.
These dependencies only affect sensitivity but not accuracy.
Recently, however, it was reported that the ODMR spectra of NV centers in NDs at low magnetic fields change with the optical power, degrading the accuracy of temperature measurements <cit.>.
They found that a change in the ODMR splitting up to 2.8 MHz (equivalent to Zeeman splitting for 50 μT) occurred depending on the optical power.
This unexpected observation directly affects the accuracy of the conversion of the ODMR splitting to magnetic field, which is a critical issue in achieving the μT-order magnetic field measurements necessary for the physical properties measurements.
In particular, in wide-field imaging of magnetic field and temperature using a CMOS camera and NV ensembles <cit.>, inhomogeneity of the optical power within the field of view could result in degradation of the measurement of the magnetic field and temperature distributions.
Thus, it is crucial to investigate the extent to which this phenomenon is universal for various samples, i.e., bulk diamonds as well as NDs.
In this study, we investigate the dependence of the ODMR splitting on the optical power using several NV ensemble samples.
We first investigate the NV ensembles in NDs with a grain size of 100 nm, the same size as in the previous study <cit.>.
We confirm the reported behavior of the ODMR splitting to decrease with increasing optical power.
In addition, we measure the ODMR spectra over a broader optical power range than in the previous study.
We thereby find the splitting decays exponentially with the optical power and saturates at a constant value.
We observe similar behavior in NDs with a different grain size of 50 nm.
We then investigate NV ensembles in a single-crystal bulk diamond with much fewer impurities and strain than NDs and find a weaker but similar behavior.
We prove the irrelevance of magnetic field and temperature on this observation and discuss possible mechanisms to account for this phenomenon.
Finally, we propose the possibility that repetitive photoionization of impurities averages the local non-axisymmetry environment of NV centers and a systematic method to deal with this phenomenon.
This paper is organized as follows.
Sec. <ref> describes the experimental setup and defines the optical power in this study.
Sec. <ref> reproduces the previous study <cit.> using NDs and confirms that the ODMR spectra change with optical power.
Sec. <ref> shows that a similar phenomenon occurs even in the single-crystal bulk diamond.
Sec. <ref> analyzes the dependence of the ODMR splitting on the optical power.
In Sec. <ref>, we discuss the influence of the magnetic field and temperature, possible mechanisms, and implications of the present finding.
Sec. <ref> presents our conclusions.
§ EXPERIMENTS
Figure 1(a) shows an overview of the experimental setup <cit.>.
All measurements in this study are performed in a confocal system at room temperature.
A green laser with a wavelength of 520 nm (Oxxius, LBX-520-70-CSB-PPA) is applied for initialization and readout of the NV centers.
The intensity of the green laser is adjusted using several fixed neutral density filters as appropriate.
The intensity of the red emission from the NV centers is detected by an avalanche photodiode (APD) after passing through a dichroic mirror, a 514 nm notch filter, a 650 nm long-pass filter, and an 800 nm short-pass filter.
When measuring NV centers in nanodiamonds, the red emission counts were suppressed using a fixed neutral density filter to match the APD measurement range.
We use a MW antenna for spin manipulation of the NV centers, which is a coplanar waveguide with ground consisting of a 1.6 mm thick PCB substrate and an 18 μm thick copper foil with a 2 mm width centerline terminated with a 50 Ω resistor.
The antenna is impedance matched so that no frequency dependence of the MW power at the sample position is present during the measurement, which we confirm from the S11 parameter.
Microwaves are output from a vector signal generator at approximately -13 dBm and input to a microwave antenna after passing through an MW amplifier (typ. +45 dB).
In all measurements in this paper, the microwave power is fixed at the above values.
We use three types of diamond samples, #1, #2, and #3, in the present study: NDs with nominal grain sizes of ϕ50 nm (#1) and of ϕ100 nm (#2), and NV ensemble in a bulk diamond film (#3).
The NDs are those commercially available from Adámas Nanotechnologies, NDNV50nmHi10ml for #1 and NDNV100nm10ml for #2.
In the measurements of #1 and #2, we prepare a ND film [see Fig. 1(b)], which is the NDs spin-coated on a cover glass at 600 rpm <cit.>.
The thickness of the ND film made by this method is typically about 200–1000 nm <cit.>.
The number of NDs in #1 and #2 within a laser irradiation area is estimated to be several hundred and more than 20, respectively.
The ND film is fixed to the antenna with carbon tape.
A surface of the ND film is at a height of 0.44 mm above the antenna.
In addition to NDs, this study investigates a bulk diamond film (#3).
It was synthesized using a custom-built microwave plasma chemical vapor deposition (MPCVD) system <cit.>.
High-pressure and high-temperature type-Ib (100) single crystalline diamond plates were used as substrates. ^12C concentrated (>99.95%) methane gas was used as a carbon source.
First, an undoped thick film with a total thickness of ∼70 μm was grown on the substrate by chemical vapor deposition (CVD).
A ^15N doped CVD layer was then overgrown on the undoped film with a gas ratio of ^15N/C of 4000 ppm.
The expected ^15N concentration is ∼10 ppm, and the film thickness is ∼5 μm.
This nitrogen density is consistent with the NV's coherence T_2 = 29 μs obtained by Hahn echo <cit.>.
We fix #3 directly to the antenna with carbon tape for the measurement.
A surface of the bulk diamond film is at a height of 0.73 mm above the antenna.
In this study, NV centers spontaneously formed during the MPCVD process are used for characterization.
We perform the present study under three different magnetic fields: a zero field (A), an environmental field (B), and a biased field (C).
We apply magnetic fields for the conditions A and C.
We use two coils beside and beneath the sample stage to generate magnetic fields perpendicular and parallel to the optical axis, respectively, as shown in Fig. 1(a).
Using a tesla meter (Lake Shore Cryotronics F71), we evaluate the magnetic fields at the sample position as 6.3 μ T, 88.7 μ T, and 196.7 μ T for the conditions A, B, and C, respectively.
The upper panel of Fig. 1(b) shows an optical microscope image of the spin-coated NDs ϕ50 nm (#1).
The lower panel shows the PL intensity map at the spot surrounded by a red frame in the upper panel.
The color bar indicates PL intensity in a unit of kilo counts per sec (kcps).
The data set for #1 is obtained using the standard ODMR measurement at the red circle.
As the dependence of the ODMR spectra on the optical power of the excitation light is the central topic in this study, it is important to calibrate the optical power (P_opt).
We evaluate P_opt from the green laser intensity and the irradiated area with an accuracy of 10 %.
The green laser intensity is measured between the objective lens and the diamond sample using an optical power meter (Thorlab, Power Meter PM100D, sensor S121C).
The irradiation area is estimated as the spot size of the red luminescence from a single NV center near the surface of a high quality bulk diamond provided by H. Watanabe in AIST, Japan <cit.>.
The spot size is calculated as a circle whose diameter is the full width at half maximum of the intensity distribution.
Figure 1(c) presents an example of the PL intensity map from a single NV center used to determine the spot size, where the diamond surface is defined as the xy-plane.
Ten PL intensity maps of a single NV center are fitted by the two-dimensional (2D) Gaussian function, and the obtained average of their full width at half-maximum, 386 ± 2 nm, is used as the laser spot diameter.
The cross sections of the experimental data (markers) and the 2D Gaussian fitting (solid line) are shown in the upper side and right side panels of Fig. 1(c).
Both panels show that the fits are consistent with the experimental data.
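A minimal sketch of this spot-size estimation, assuming a PL intensity map img on a square pixel grid, is given below; it fits a 2D Gaussian with scipy.optimize.curve_fit and converts the fitted widths to a full width at half maximum. The function names, the pixel size, and the synthetic test image are our own illustrative choices, not the actual analysis code used for Fig. 1(c).

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2) / (2 * sx ** 2)
                         - ((y - y0) ** 2) / (2 * sy ** 2)) + offset).ravel()

def spot_fwhm(img, pixel_nm):
    """Fit a 2D Gaussian to a PL map and return (FWHM_x, FWHM_y) in nm."""
    ny, nx = img.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = [img.max() - img.min(), nx / 2, ny / 2, 3, 3, img.min()]
    popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
    fwhm = 2 * np.sqrt(2 * np.log(2)) * np.array([popt[3], popt[4]])
    return np.abs(fwhm) * pixel_nm

# Synthetic example: 50 nm pixels, ~390 nm FWHM spot plus noise (illustration only)
pix = 50.0
xx, yy = np.meshgrid(np.arange(21), np.arange(21))
true_sigma = 390 / (2 * np.sqrt(2 * np.log(2))) / pix
img = np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / (2 * true_sigma ** 2))
img += 0.02 * np.random.default_rng(0).normal(size=img.shape)
print(spot_fwhm(img, pix))   # ~ (390, 390) nm
```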
All the experimental conditions in this study are compiled in Table <ref>.
NDs ϕ100 nm #2' in Table <ref> indicates the data set obtained at a different location of the same sample as NDs ϕ100 nm #2.
The estimated densities of nitrogen, [N], and NV center, [NV], are also given in Table <ref>.
We include the previous study (Ref. <cit.>) in Table <ref> in the same cell as 2B as their measurements were carried out in an environmental geomagnetic field (∼50 μ T) using NDs ϕ100 nm supplied by Adámas Nanotechnologies.
§ RESULTS AND DISCUSSIONS
§.§ ODMR Spectra of Nanodiamond NVs
The upper panel of Fig. 2(a) shows the ODMR spectrum (markers) as a function of the MW frequency, obtained at P_opt=0.55 kW/cm^2.
This result is for condition 2A (see Table <ref>).
The vertical axis indicates the PL contrast, namely the normalized contrast of the PL intensities with and without MW.
In this measurement, the swept frequency range is 60 MHz.
The splitting between the dips in the ODMR spectrum is due to crystal strain and electric fields that break the axial symmetry of the NV centers. The impacts of such non-axisymmetric deformation were treated in Refs. <cit.>.
Below, we refer to these factors collectively as “deformation”.
We note that similar observations for the NDs ensemble were reported before, for example, in Fig. 1(d) of Ref. <cit.>. Their shapes are generally consistent with ours, while the splitting is slightly larger than that in the present study as they applied a magnetic field of 100 μ T.
Also, similar ODMR spectra obtained in a single ND were reported in Fig. 3(a) of Ref. <cit.>.
From now on, we focus on splitting quantitatively based on the values obtained from fitting with a double Lorentzian function.
This fitting method is meaningful because it is often used for magnetometry using NVs.
We will discuss the validity and limitations of this method later in Sec. <ref>.
The solid line in the upper panel of Fig. 2(a) is a fitted curve.
We define the difference between the two dip frequencies obtained by this fitting as the splitting Δ.
Δ is 11.5±0.2 MHz in this specific case, which is consistent with the literature values of 10–20 MHz for NDs <cit.>.
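A minimal sketch of such a double Lorentzian analysis, assuming arrays freq (MHz) and contrast for a measured spectrum, is shown below; the model function, initial guesses, and synthetic test spectrum are our own illustrative choices and may need adjustment for real data.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_lorentzian(f, c1, f1, w1, c2, f2, w2, base):
    """Two Lorentzian dips on a flat baseline."""
    dip1 = c1 * (w1 / 2) ** 2 / ((f - f1) ** 2 + (w1 / 2) ** 2)
    dip2 = c2 * (w2 / 2) ** 2 / ((f - f2) ** 2 + (w2 / 2) ** 2)
    return base - dip1 - dip2

def odmr_splitting(freq, contrast):
    """Return (Delta, center frequency) in MHz from a double Lorentzian fit."""
    depth = np.ptp(contrast) / 2
    p0 = [depth, freq.mean() - 5, 5, depth, freq.mean() + 5, 5, contrast.max()]
    popt, _ = curve_fit(double_lorentzian, freq, contrast, p0=p0)
    f_dips = np.sort([popt[1], popt[4]])
    return f_dips[1] - f_dips[0], f_dips.mean()

# Synthetic spectrum with dips at 2864 and 2875 MHz (Delta ~ 11 MHz), for illustration only
freq = np.linspace(2840, 2900, 301)
contrast = double_lorentzian(freq, 0.02, 2864, 8, 0.02, 2875, 8, 1.0)
contrast += 0.001 * np.random.default_rng(0).normal(size=freq.size)
delta, center = odmr_splitting(freq, contrast)
print(f"Delta = {delta:.2f} MHz, center = {center:.1f} MHz")
```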
We measure the ODMR spectra by increasing P_opt from 0.55 kW/cm^2.
The lower panel of Fig. 2(a) shows the spectrum for 2A obtained at P_opt=38.4 kW/cm^2, which is the maximum optical power used in the present study.
We discuss later that the temperature increase due to laser heating is inconsequential within the present optical power range.
As in the upper panel, the markers show experimental data, and the solid curved line results from a double Lorentzian fitting.
The PL contrast decreases from 2.7% at P_opt=0.55 kW/cm^2 to 0.5% at P_opt=38.4 kW/cm^2 because the increase in the optical power enhances the spin initialization rate, i.e., the transition rate from m_S=±1 to m_S = 0.
The spectrum also possesses two dips, but careful inspection reveals a slight change in shape between the upper and lower panels.
The dashed and solid vertical lines show the dip positions obtained by the fitting at P_opt=0.55 kW/cm^2 and P_opt=38.4 kW/cm^2, respectively.
Δ is determined to be 9.4±0.3 MHz for P_opt=38.4 kW/cm^2.
Thus, Δ decreases with increasing P_opt.
Similar behavior was reported in Fig. 3(a) of Ref. <cit.>, suggesting that Δ of NVs in NDs actually depends on the optical power, which is usually not considered.
In our case, Δ changes by approximately 2.1 MHz between the two different P_opt.
Significantly, ignoring deformation, this variation corresponds to about 38 μT according to a magnetic field conversion widely used in the NV research field.
Therefore, this phenomenon can be relevant in applying NVs to magnetic field measurements.
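The conversion quoted here follows from the NV Zeeman term: ignoring deformation, the two resonances split by 2γ_NV B, with γ_NV ≈ 28 MHz/mT (the standard NV gyromagnetic ratio, taken here as an assumed literature value), so a 2.1 MHz change in Δ maps to roughly 38 μT:

```python
GAMMA_NV_MHZ_PER_MT = 28.0          # assumed NV gyromagnetic ratio, ~28 MHz/mT (2.8 MHz/G)
delta_change_mhz = 2.1
b_equiv_ut = delta_change_mhz / (2 * GAMMA_NV_MHZ_PER_MT) * 1000  # mT -> uT
print(f"{b_equiv_ut:.0f} uT")       # ~ 38 uT
```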
The above finding is not an artifact caused by a double Lorentzian fitting.
To confirm this, Fig. 2(b) presents the ODMR spectra measured at P_opt=0.55, 2.12, 4.24, 8.21, 15.2, and 31.3 kW/cm^2, which are incrementally shifted from bottom to top.
The markers are the experimental data, where the spline interpolation curves are superposed by the solid lines.
Since the PL contrast varies depending on P_opt, we appropriately normalize the spectra to focus only on the shape.
The cross markers (+) point to the dip positions in the spline interpolation curves.
Their behavior again supports that the two dips become closer for a larger P_opt.
While we do not show the data, the results of the condition 2'A and the NDs of ϕ50 nm (1A) are consistent with the results of 2A. Some results are later shown in Figs. 4(d), 4(e), and 4(f).
§.§ ODMR Spectra of Bulk Diamond NVs
We focus on the bulk diamond film #3 to investigate whether or not the optical power dependence observed in NDs is relevant here.
The upper panel of Fig. 3 presents the ODMR spectrum for the condition 3A obtained at P_opt=0.55 kW/cm^2.
The horizontal axis range is 10 MHz, much smaller than that in Fig. 2(a).
The obtained spectrum shown by the markers has two sharp dips, as expected for the NVs in bulk diamonds.
As performed for the analysis of NDs, we fit the experimental data with a double Lorentzian function.
We estimate the splitting between the two dips to be Δ=3.55±0.02 MHz, a comparable value to the width of 3.03 MHz due to the hyperfine interaction in ^15N <cit.>.
Presumably, the deformation is much less than 1 MHz because it is buried in this hyperfine splitting.
Thus, the bulk diamond differs from NDs because the hyperfine interaction prevails over the deformation.
In addition, the resonance line width is significantly narrower than in the NDs.
This reflects that the density of impurities, such as nitrogen impurities (P1 centers), which cause the decoherence <cit.>, is low in #3.
Indeed, the typical nitrogen concentration of a type 1b diamond, the raw material of NDs, is about 100 ppm, whereas the single-crystal diamond in this study is about 10 ppm.
Now, we discuss the ODMR spectra at increased optical powers.
The lower panel in Fig. 3 shows the ODMR spectrum by the markers in the condition 3A obtained at P_opt=38.4 kW/cm^2.
The markers are experimental data, and the solid curved line results from a double Lorentzian function fitting.
As seen in NDs, the contrast decrease is also due to a larger initialization rate in larger optical power.
In Fig. 3, the dashed and solid vertical lines indicate the dip positions obtained by the fitting at P_opt=0.55 kW/cm^2 and P_opt=38.4 kW/cm^2, respectively.
Δ is now 3.44±0.01 MHz, smaller than Δ=3.55±0.02 MHz.
As in the NDs case, Δ becomes smaller in the larger optical power in the bulk diamond.
Interestingly, the optical power dependence is present even when the ^15N hyperfine interaction causes the splitting.
However, the reduction of Δ in the bulk diamond is much smaller than in NDs.
§.§ Analysis of Splitting
We systematically examine the dependence of Δ on P_opt.
We start with the condition 2A.
The upward triangle markers in Fig. 4(a) show the experimentally observed Δ as a function of P_opt between 0.55 kW/cm^2 and 38.4 kW/cm^2.
We already showed the results of Δ at the minimum (P_opt=0.55 kW/cm^2) and maximum (P_opt=38.4 kW/cm^2) optical powers in the upper and lower panels in Fig. 2(a), respectively.
Figure 4(a) clearly tells that Δ monotonously decays with increasing P_opt and saturates at P_opt≳ 15 kW/cm^2.
Previous study <cit.> reported a similar dependence of Δ on P_opt.
Their results are superposed in Fig. 4(a) by the markers (+).
Significantly, the decaying behavior is almost the same between their results and ours, while they did not reach the optical power to saturate Δ.
It is well established that the PL intensity from an NV center, which is determined by the relaxation rate peculiar to its optical process, saturates for a large P_opt <cit.>.
However, the present observation is irrelevant as we perform the experiment using a sufficiently small laser intensity such that the PL intensity is linear to P_opt.
Figure 4(c) confirms that the PL intensity from NDs in the condition 2A is proportional to P_opt.
Ref. <cit.> also treated this sufficiently small optical power region.
The optical power dependence in such a very small intensity region is unexpected. Our work has quantitatively confirmed Ref. <cit.> for a wider optical power region.
It was previously reported <cit.> that the linewidth of the ODMR spectrum of the NV ensemble decreases with increasing P_opt for an optical power as small as in the present study.
However, they did not mention a decrease in Δ of the ODMR spectra.
While we observe a systematic change in Δ, no systematic change in the linewidth is detected.
For more quantitative discussion, we analyze the behavior of 2A shown in Fig. 4(a) using the following exponential fit.
Δ(P_opt) = Aexp(-P_opt/P_0)+Δ_0,
where A, P_0, and Δ_0 are the amplitude, the saturation power, and the offset, respectively.
The dotted line in Fig. 4(a) is the result of this fitting.
A semi-log plot of only the first term of Eq. (<ref>) is shown in Fig. 4(b) with the same markers as Fig. 4(a).
The linear variation is consistent with the exponential function.
Unlike Fig. 4(a), Fig. 4(b) does not include the previous result <cit.> because no convergence value (offset Δ_0) is available.
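A minimal sketch of the fit to Eq. (<ref>), assuming arrays p_opt (kW/cm²) and delta (MHz), is given below; the synthetic data and initial guesses are our own illustrative choices and only demonstrate how A, P_0, and Δ_0 are extracted.

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_model(p_opt, A, P0, delta0):
    """Eq. (1): exponential decay of the ODMR splitting with optical power."""
    return A * np.exp(-p_opt / P0) + delta0

# Synthetic data resembling the ND case: A ~ 2 MHz, P0 ~ 4 kW/cm^2, Delta_0 ~ 9.5 MHz
p_opt = np.array([0.55, 1.1, 2.1, 4.2, 8.2, 15.2, 31.3, 38.4])
delta = delta_model(p_opt, 2.0, 4.0, 9.5)
delta += 0.05 * np.random.default_rng(0).normal(size=p_opt.size)

popt, pcov = curve_fit(delta_model, p_opt, delta, p0=[1.0, 5.0, delta.min()])
A_fit, P0_fit, d0_fit = popt
print(f"A = {A_fit:.2f} MHz, P0 = {P0_fit:.1f} kW/cm^2, Delta_0 = {d0_fit:.2f} MHz")
```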
Then, how about the behavior of the bulk diamond film (the condition 3A)?
Figure 4(a) shows the P_opt dependence of Δ.
While the decrease of Δ is not as significant as in NDs (2A), the magnified view in the inset of Fig. 4(a) proves that an exponential decay of Δ is also present in the bulk diamond case.
Figure 4(b) depicts the decaying component extracted by the fitting to Eq. (<ref>), which looks very similar to the 2A case.
The fact suggests a common mechanism behind the present exponential decay of Δ in the NDs and the bulk diamond, even though different reasons cause the dip splitting.
We find similar behavior in all the measured conditions at zero fields (1A, 2A, 2'A, and 3A in Table I) and obtain the parameters A, P_0, and Δ_0.
Figure 4(d) shows the obtained amplitude A for the four conditions.
From left to right, the bars indicate the conditions 1A, 2A, 2'A, and 3A, and the vertical axis is expressed on a semi-log scale.
Comparing 1A, 2A, and 2'A, the A values are almost the same for NDs with different grain sizes.
On the other hand, the bulk diamond (3A) has A, one order of magnitude smaller than those of NDs (about 1/20).
Figure 4(e) shows the saturation power P_0 for different conditions.
While the amplitude A significantly differs between NDs and the bulk diamond, there is relatively little difference in P_0 between the two; P_0 ∼ 3.8 kW/cm^2 for NDs and P_0 ∼ 7.4 kW/cm^2 for the bulk diamond.
It is noteworthy that the values of P_0 are close for such different diamond samples.
The offsets Δ_0 are shown in Fig. 4(f).
They reduce in the order of conditions 1A, 2A, 2'A, and 3A, which seems to coincide with the degree of deformation of NVs.
We intuitively expect that the smaller the crystal size is, the greater the deformation tends to be, affecting the sensitivity of the NVs to the optical power.
We come back to this fact later.
With the results and analysis explained so far, we have established that the ODMR spectra of NVs depend on the excitation light power even when the power is sufficiently small.
This phenomenon occurs in both NDs and the bulk diamond.
The amplitude of the decay (A) largely depends on the samples, but the behavior of exponentially decaying with the optical power characterized by P_0 seems an essential feature of NVs.
The quantitative establishment of the universality of this phenomenon is the main achievement of the present study.
The fact also means that the excitation light power can be relevant for accurate magnetic field measurements using NVs.
§.§ Possible Mechanisms
We are interested in the possible causes of the observed optical power dependence.
The zero-field splitting (ZFS), the coupling between the NV spin and the magnetic field, and the deformation are the most critical factors in defining the energy structure of an NV center in the ground state <cit.>.
The hyperfine interaction between the NV spin and the neighboring nuclear spins is also often relevant.
Therefore, it is essential as a starting point to investigate whether the present phenomenon is related to these four factors.
This section will examine them individually and then explore other possibilities.
We start with the ZFS, which might be subject to the optical power through the heating by the laser.
We define the ZFS as the average of the frequencies of the two dips obtained by a double Lorentzian fit.
Around room temperature at zero magnetic fields, the ZFS in the ODMR spectrum decreases linearly with increasing temperature <cit.>.
The dependences of ZFS on the optical power in the conditions 1A, 2A, and 3A are shown in Figs. 5(a), (b), and (c), respectively.
The figures indicate no signal of systematic change in ZFS due to the optical power.
Indeed, the variation of ZFS is much smaller than the amplitude A in Fig. 4(d).
Thus, heating by laser irradiation is not responsible for the present optical power dependence.
We estimate the maximum temperature change in this experiment to be about 12 K since the maximum frequency shift observed is approximately 850 kHz, as shown in Fig. 5(a).
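This estimate follows from the known thermal shift of the ZFS near room temperature, dD/dT ≈ -74 kHz/K (the commonly quoted value from the temperature-dependence literature cited above, taken here as an assumption): 850 kHz divided by 74 kHz/K gives about 11.5 K, i.e. roughly 12 K.

```python
DD_DT_KHZ_PER_K = -74.0     # assumed thermal shift of the NV zero-field splitting near room temperature
max_shift_khz = 850.0
print(abs(max_shift_khz / DD_DT_KHZ_PER_K))   # ~11.5 K, consistent with the ~12 K quoted above
```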
Next, we discuss the influence of the magnetic field.
The upper and lower panels of Fig. 6 show the ODMR spectra in conditions 2A (zero magnetic field) and 2C (biased magnetic field of 196.7 μ T), respectively [the spectrum shown in the upper is the same as that in the upper panel in Fig. 2(a)].
Both are obtained with the minimum optical power (P_opt=0.55 kW/cm^2).
The markers are experimental data, and the solid curved lines are fitted by a double Lorentzian function.
The dashed and solid vertical lines show the dip positions obtained by the fit for 2A and 2C, respectively.
As expected from the Zeeman effect, the solid vertical lines are outside the two dashed lines, confirming that Δ increases in the magnetic field.
We obtain the spectra for the conditions 2A, 2B, and 2C as P_opt is modulated.
The acquired behaviors of Δ are plotted as a function of P_opt in the inset of Fig. 7(a).
Due to the Zeeman effect, Δ vertically shifts from 2A to 2B to 2C.
Importantly, there is no significant variation in the spectral shapes of 2A, 2B, and 2C except for this vertical shift.
We obtain the offset Δ_0 by the fitting to Eq. (<ref>) and plot Δ-Δ_0 against P_opt in the main panel of Fig. 7(a).
The behavior of 2A, 2B, and 2C are superposed on each other almost perfectly.
We plot the amplitude A, the saturation power P_0, and the offset Δ_0 for each field obtained by the fitting in Figs. 7(b), (c), and (d), respectively.
Δ_0 increases with increasing magnetic field [Fig. 7(d)], reflecting the Zeeman effect, although further quantitative analysis is complicated in this magnetic field region due to the considerable influence of deformation in NDs <cit.>.
On the other hand, A and P_0 do not change significantly as shown in Figs. 7(b) and (c), respectively.
Thus, in our examined regime, there is no visible correlation between the optical power dependence and the magnetic field.
Third, we consider the hyperfine interaction. The optical power dependence in the bulk diamond NVs is minimal, only about 1/20 of that in the nanodiamond NVs [see Figs. 4(a) and 4(d)].
However, the contribution of the hyperfine interaction to Δ is reasonably assumed to be almost similar in the two types of diamonds.
Therefore, if the hyperfine interaction was responsible for the present phenomenon, it would be difficult to explain the marked difference between both.
Consequently, we can conclude that the hyperfine interaction is not the leading cause of this phenomenon.
As the final factor, we examine the deformation.
In NDs, the deformation is about 10 MHz [Figs. 2(a) and 4(a)], while the value is well below 1 MHz in the bulk diamond, as discussed in Sec. IIIB.
Now, the amplitude A to characterize the optical power dependence is ∼ 2 MHz for NDs and ∼ 0.1 MHz for the bulk diamond [Fig. 4(d)].
For the former, the ratio of A to the deformation is about 2/10 = 0.2.
For the latter, the ratio is at least 0.1/1 = 0.1 and is comparable to the NDs' case.
The ratio of ND to bulk diamond deformation also corresponds to the ratio of nitrogen impurity density [see Table <ref>].
This suggests that either the deformation/impurity itself or the impurity-derived deformation would be responsible for this phenomenon.
Although this argument is not fully quantitative, it suggests a correlation between the deformation/impurity and the optical power dependence.
We infer a reasonable idea of the possible mechanism based on the deformation caused by impurities.
Previous work on single NV centers indicated that the electric field from charge traps causes deformation <cit.>.
This might also be the cause with the deformations in the NV ensemble case in our study.
If the charge traps originate from impurities, the magnitude of the deformation will correlate with the impurity density, consistent with our observations.
It is known that the charge state of impurities changes with photoionization.
For example, as the optical power is increased, the time that the NV center retains its charge state decreases exponentially on the millisecond scale <cit.>.
As this charge generated by photoionization moves around, the electric field would be time-averaged, suppressing deformation.
The relationship between the ionization rate at thermal equilibrium and the photoionization rate determines the coefficient of the exponential change.
When the optical power is sufficiently large, the electric field and crystal strain, which cannot be averaged, remain as a finite deformation.
Ref. <cit.> also noted that deformation due to charge can change the shape of the ODMR spectrum to a non-Lorentzian distribution.
This is consistent with the fact that the ODMR spectrum deviates from the double Lorentzian fitting, and its shape changes with optical power [see Figs. 2(a) and (b)].
Investigating both the dip position and its shape will help to elucidate the mechanism.
We note further experimental and theoretical efforts are needed because many parameters could be involved in the mechanism.
On the experimental side, comparing bulk samples with systematically varying impurities and deformations and investigating this optical power-dependent splitting in a single NV center with charge-induced deformation <cit.> are helpful.
The magnetic field can be swept over a sufficiently wide range compared to the deformation for bulk samples. This will clarify which parameters of the ground-state Hamiltonian appear to depend on optical power.
Pulsed ODMR <cit.> will provide information on the time the effect of the laser irradiation remains, which can be used to validate the mechanism.
On the theoretical side, it is helpful to investigate what fitting function is appropriate to reproduce the ODMR spectral shape and what defects are candidates for photoionization.
§ CONCLUSION
We investigate the optical power dependence of splitting of the ODMR spectra using various NV ensemble samples.
In addition to reproducing the previous study using NDs <cit.>, we find that the optical power dependence saturates at a larger optical power than reached in their study.
Since we also observe the same phenomenon in the single-crystal diamond, which has far fewer impurities and much less non-axisymmetric deformation than NDs, we consider this observation to reflect an intrinsic property of the NV center.
We quantitatively discuss the parameters that could be responsible for this phenomenon and infer that deformation is an important parameter.
We point out the possible responsibility of slow dynamics in the optical excitation and emission process of single NV centers.
The present optical power dependence can be critical in accurate magnetometry using NVs.
This effect may degrade the accuracy of magnetometry using NDs by a few tens of μT.
Even when using high-quality bulk diamonds, we must be careful when discussing a few μT magnetic fields around zero magnetic fields.
We can minimize degradation by introducing strong optical power based on the phenomenological exponential behavior discussed here.
Also, we suggest that using diamonds with fewer impurities and deformation can reduce the influence on the accurate magnetic field measurement.
Further experimental verification and theoretical discussion on deformation, impurity densities, and a comprehensive range of magnetic fields will help to identify the mechanism of this phenomenon.
§ ACKNOWLEDGEMENTS
We thank K. M. Itoh for letting us use the confocal microscope system, and H. Watanabe for his high quality diamond, which we used in the estimation of the spatial resolution of our system [Fig. 1(c)].
We appreciate the fruitful discussion with J. Inoue.
We also thank MEXT-Nanotechnology Platform Program “Microstructure Analysis Platform" for technical support.
K.S. acknowledges the support of Grants-in-Aid for Scientific Research No. JP22K03524.
K.K. acknowledges the support of Grants-in-Aid for Scientific Research (Nos. JP23H01103, JP19H00656, and JP19H05826).
T.T. acknowledges the support of MEXT Q-LEAP (JPMXS0118068379), JST CREST (JPMJCR1773), JST Moonshot R&D (JPMJMS2062), MIC R&D for construction of a global quantum cryptography network (JPMI00316), JSPS KAKENHI (Nos. JP20H02187 and JP20H05661).
§ REFERENCES
[MazeNature2008] J. R. Maze et al., Nature 455, 644 (2008). https://doi.org/10.1038/nature07279
[DegenAPL2008] C. L. Degen, Applied Physics Letters 92, 243111 (2008). https://doi.org/10.1063/1.2943282
[BalasubramanianNature2008] G. Balasubramanian et al., Nature 455, 648 (2008). https://doi.org/10.1038/nature07278
[Taylor2008] J. M. Taylor et al., Nature Physics 4, 810 (2008). https://doi.org/10.1038/nphys1075
[SchirhaglARPC2014] R. Schirhagl, K. Chang, M. Loretz, and C. L. Degen, Annual Review of Physical Chemistry 65, 83 (2014). https://doi.org/10.1146/annurev-physchem-040513-103659
[Rondin2014] L. Rondin et al., Reports on Progress in Physics 77, 056503 (2014). https://doi.org/10.1088/0034-4885/77/5/056503
[Levine2019] E. V. Levine et al., Nanophotonics 8, 1945 (2019). https://doi.org/10.1515/nanoph-2019-0209
[Barry2020] J. F. Barry et al., Reviews of Modern Physics 92, 015004 (2020). https://doi.org/10.1103/revmodphys.92.015004
[AcostaPRL2010] V. M. Acosta et al., Physical Review Letters 104, 070801 (2010).
[NeumannNL2013] P. Neumann et al., Nano Letters 13, 2738 (2013).
[ToyliPNAS2013] D. M. Toyli et al., Proceedings of the National Academy of Sciences 110, 8417 (2013).
[TetienneSciAdv2017] J.-P. Tetienne et al., Science Advances 3, e1602429 (2017). https://doi.org/10.1126/sciadv.1602429
[ku2020] M. J. H. Ku et al., Nature 583, 537 (2020). https://doi.org/10.1038/s41586-020-2507-2
[hedrich2021] N. Hedrich et al., Nature Physics 17, 659 (2021). https://doi.org/10.1038/s41567-021-01205-3
[FoyAPMI2020] C. Foy et al., ACS Applied Materials & Interfaces 12, 26525 (2020). https://doi.org/10.1021/acsami.0c01545
[VanOort1990] E. V. Oort and M. Glasbeek, Chemical Physics Letters 168, 529 (1990). https://doi.org/10.1016/0009-2614(90)85665-y
[Dolde2011] F. Dolde et al., Nature Physics 7, 459 (2011). https://doi.org/10.1038/nphys1969
[felton2009] S. Felton et al., Physical Review B 79, 075203 (2009). https://doi.org/10.1103/PhysRevB.79.075203
[igarashi2012] R. Igarashi et al., Nano Letters 12, 5726 (2012).
[fu2007] C.-C. Fu et al., Proceedings of the National Academy of Sciences 104, 727 (2007).
[dreau2011] A. Dréau et al., Physical Review B 84, 195204 (2011). https://doi.org/10.1103/PhysRevB.84.195204
[acosta2013] K. Jensen, V. M. Acosta, A. Jarmola, and D. Budker, Physical Review B 87, 014115 (2013). https://doi.org/10.1103/PhysRevB.87.014115
[fujiwara2020] M. Fujiwara et al., Physical Review Research 2, 043415 (2020). https://doi.org/10.1103/PhysRevResearch.2.043415
[ScholtenJAP2021] S. C. Scholten et al., Journal of Applied Physics 130, 150902 (2021). https://doi.org/10.1063/5.0066733
[TsukamotoAPL2021] M. Tsukamoto et al., Applied Physics Letters 118, 264002 (2021). https://doi.org/10.1063/5.0054809
[Tsukamoto2022] M. Tsukamoto et al., Scientific Reports 12, 13942 (2022). https://doi.org/10.1038/s41598-022-18115-w
[misonou2020] D. Misonou et al., AIP Advances 10, 025206 (2020). https://doi.org/10.1063/1.5128716
[OgawaJPSJ2023] K. Ogawa, M. Tsukamoto, K. Sasaki, and K. Kobayashi, Journal of the Physical Society of Japan 92, 014002 (2023). https://doi.org/10.7566/JPSJ.92.014002
[TerajiPSSA2015] T. Teraji et al., physica status solidi (a) 212, 2365 (2015).
[Bauch2020] E. Bauch et al., Physical Review B 102, 134210 (2020). https://doi.org/10.1103/physrevb.102.134210
[ohashi2013] K. Ohashi et al., Nano Letters 13, 4733 (2013).
[Jelezko and Wrachtrup(2006)]JelezkoPSS2006
author author F. Jelezko and author J. Wrachtrup, https://doi.org/https://doi.org/10.1002/pssa.200671403 journal journal physica status solidi (a) volume 203, pages 3207 (year
2006)NoStop
[Mittiga et al.(2018)Mittiga, Hsieh, Zu, Kobrin,
Machado, Bhattacharyya, Rui,
Jarmola, Choi, Budker, and Yao]Mittiga2018
author author T. Mittiga, author S. Hsieh,
author C. Zu, author
B. Kobrin, author F. Machado, author P. Bhattacharyya, author N. Z. Rui, author A. Jarmola, author S. Choi,
author D. Budker, and author N. Y. Yao, https://doi.org/10.1103/physrevlett.121.246402 journal
journal Physical Review Letters volume
121, pages 246402 (year 2018)NoStop
[Aslam et al.(2013)Aslam,
Waldherr, Neumann, Jelezko, and Wrachtrup]Aslam2013
author author N. Aslam, author G. Waldherr,
author P. Neumann, author F. Jelezko, and author
J. Wrachtrup, https://doi.org/10.1088/1367-2630/15/1/013064 journal
journal New Journal of Physics volume
15, pages 013064 (year 2013)NoStop
|
http://arxiv.org/abs/2307.05901v1 | 20230712041536 | Single Domain Generalization via Normalised Cross-correlation Based Convolutions | [
"WeiQin Chuah",
"Ruwan Tennakoon",
"Reza Hoseinnezhad",
"David Suter",
"Alireza Bab-Hadiashar"
] | cs.CV | [
"cs.CV"
] |
Single Domain Generalization via Normalised Cross-correlation Based Convolutions
WeiQin Chuah[1] Ruwan Tennakoon[1] Reza Hoseinnezhad[1] David Suter[2] Alireza Bab-Hadiashar[1]
RMIT University, Australia[1] Edith Cowan University (ECU), Australia[2]
{wei.qin.chuah,ruwan.tennakoon,rezah,abh}@rmit.edu.au, [email protected]
Received: date / Accepted: date
=============================================================================================================================================================================================================================================================
Deep learning techniques often perform poorly in the presence of domain shift, where the test data follows a different distribution than the training data. The most practically desirable approach to address this issue is Single Domain Generalization (S-DG), which aims to train robust models using data from a single source. Prior work on S-DG has primarily focused on using data augmentation techniques to generate diverse training data. In this paper, we explore an alternative approach by investigating the robustness of linear operators, such as convolution and dense layers commonly used in deep learning. We propose a novel operator called “XCNorm” that computes the normalized cross-correlation between weights and an input feature patch. This approach is invariant to both affine shifts and changes in energy within a local feature patch and eliminates the need for commonly used non-linear activation functions. We show that deep neural networks composed of this operator are robust to common semantic distribution shifts.
Furthermore, our empirical results on single-domain generalization benchmarks demonstrate that our proposed technique performs comparably to the state-of-the-art methods.
§ INTRODUCTION
Deep learning techniques have achieved practical success in a variety of fields, including computer vision, natural language processing, and speech processing. However, this success is often limited to settings where the test data follows the same distribution as the training data.
In many real-world situations, this assumption breaks down due to shifts in data distribution, known as domain-shift <cit.>, which can significantly degrade performance <cit.>.
Dealing with domain-shift is a challenging problem with important practical implications. There are two main approaches to address domain shift and enable the transfer of knowledge from previously seen environments (source domains) to a new environment (target domain) without using any labeled data of the target domain:
(1) Domain Adaptation <cit.> (DA) where a model trained with source data is recalibrated using unlabeled data from the target domain, and (2) Domain generalisation <cit.> (DG) where a model is trained on multiple source domains but no target domain data is available for recalibration.
The most data-efficient domain generalisation technique is
the single domain generalisation (S-DG), which requires data from only a single source domain to train a model that is robust against unforeseen data shifts. Although practically desirable, S-DG has received little attention in the past.
S-DG presents a significant challenge due to two main factors. Firstly, the input data, derived from only one source domain, does not provide sufficient opportunity to observe the possible diversity in out-of-domain data. Secondly, the presence of spurious correlations or shortcuts can further complicate the issue by introducing biases and hindering generalization.
Prior work on S-DG has primarily focused on increasing the diversity of input data using adaptive data augmentation techniques.
These include creating fictitious examples that mimic anticipated shifts in data distribution using random <cit.>, adversarial <cit.> or causality <cit.> based data augmentation, as well as image style diversification <cit.>.
The generalization of a model is largely influenced by its support, which refers to the diversity of its input data and its inductive biases <cit.>. While not explicitly stated as such, the success of the above-mentioned S-DG methods hinges on increasing the input diversity via data augmentation. An alternative and complementary approach that has received less attention is to incorporate inductive biases into the network components to make them more robust to domain shifts. In this paper, we explore the above approach and propose a robust alternative to linear operators, such as convolution and dense layers, which are fundamental components of most neural networks.
We draw on the classical idea of template matching and consider linear layers in a neural network as computing a matching between the template (represented by the weights) and the signal (represented by input feature maps) using cross-correlation, as detailed in <Ref>.
Early works in template matching have shown that cross-correlation is not ideal for pattern matching as it fails when the local energy of the input (i.e., ∑_u,v z[u,v]^2) varies, and is not robust to affine shifts in the input signal <cit.>. More recently, Jin <cit.> empirically showed that domain shift primarily causes variation in the local energy of feature representations. This suggests that the linear operator, which is sensitive to local energy, might degrade out-of-domain (OOD) generalization.
The above perspective enables us to use more robust template-matching techniques such as normalized cross-correlation <cit.> to replace convolutions or dense layers in neural networks and recover the underlying invariant features in the input.
We call our method “XCNorm”, which performs cross-correlation between standardized (i.e., Z-score normalized) weights and patch-wise standardized input feature maps.
This reduces the influence of input feature magnitude on the output and makes the operator invariant to affine transformations of the input. Moreover, we leverage robust statistics to improve the resilience of XCNorm to outliers and introduce a refined version of our method named R-XCNorm.
As <Ref> demonstrates, our methods achieve significantly better robustness to semantic distribution shifts on CIFAR-10-C, in contrast to other normalization techniques. Moreover, the advantage of our methods becomes more pronounced as the domain discrepancy increases.
The contributions of this paper include:
* We propose a novel nonlinear operator called “XCNorm”, based on normalized cross-correlation, that reduces the influence of input feature magnitude on the output and is invariant to affine transformations.
* Leveraging robust statistics, we further enhance the robustness of XCNorm to outliers. Our experiments on several commonly used benchmarks in S-DG show that the proposed robust operator (“R-XCNorm”) is also complementary to augmentation-based methods and achieves state-of-the-art performance.
* We empirically show that a neural network based on “XCNorm” or “R-XCNorm” is significantly more robust to semantic shifts compared to a network based on a typical linear operator.
§ RELATED WORK
Domain Generalization:
Domain generalization (DG) methods aim to learn robust models from several source domains that can generalize to unseen target domains, addressing the issue of domain shifts.
A particularly challenging yet practical variant of domain generalization (DG) is a single-domain generalization (S-DG), where only one source domain is available during training. S-DG is especially challenging because, unlike in DG, there is no access to multiple source domains that would allow for the observation of possible shifts in data and invariances between domains. To address this challenge, researchers have primarily focused on using data augmentation techniques to generate diverse training data and increase input diversity.
A common technique is posing S-DG as a “distributionally robust optimization” problem and solving it using adversarial data augmentation (ADA) <cit.>. ADA lacks the ability to produce large semantic shifts that are common in real data. As a result, subsequent works have added additional constraints to adversarial augmentation <cit.> or incorporated background knowledge about anticipated semantic shifts via random augmentations <cit.>, causality based data augmentations <cit.>, or image style diversification <cit.>.
An alternative that has received little attention is to incorporate inductive biases into the network components to make them more robust to domain shifts. The most closely related work in this direction is the Meta Convolution Neural Networks (Meta-CNN) proposed by Wan <cit.>, where the output feature-maps of each layer are reconstructed using templates learned from training data, resulting in universally coded images without biased information from unseen domains. Our proposed operator, XCNorm, offers a simpler implementation compared to <cit.>. While their method involves more complex operations, our approach simply replaces the convolution function with our operator. This simplicity makes our method more straightforward to implement and integrate into existing frameworks.
Normalization in Neural Networks: During the training process of a deep neural network, the input distribution of an intermediate layer continuously changes, a phenomenon known as covariate shift. This makes it challenging to set the hyper-parameters of a layer, such as the learning rate. To tackle this issue, various normalization techniques have been proposed, including Batch Norm <cit.>, Instance Norm <cit.>, GroupNorm <cit.>, and Layer Norm <cit.>, which aim to normalize the output of each layer using batch statistics, feature channels, groups of features, or the entire layer's output, respectively. Instead of operating on features, Weight Norm <cit.> proposes normalising the filter weights.
While most of the aforementioned work has focused on in-domain generalization, there are a few studies that have examined generalization ability under domain shift. For instance, BN-Test <cit.> computed batch normalization statistics on the test batch while DSON <cit.> used multi-source training data to compute the statistics.
Fan <cit.> investigated normalization for single-domain generalization, where adaptive normalization statistics for each individual input are learned. These adaptive statistics are learned by optimizing a robust objective with adversarial data augmentation.
The above works view normalization as being independent of the base operator (e.g., convolution, fully-connected). In contrast, our approach considers normalization to be an integral part of the base operator. We normalize both weights and input for each local spatial region of the input.
Non-linear Transforms: Several works have explored the use of non-linear transforms to replace the linear transform in the typical convolution operator <cit.>. The works most closely related to ours are those by <cit.> assess the cosine similarity between weights and inputs to improve both model performance and interpretability. In those methods, the convolution is viewed as an inner product between an input patch 𝐳 and the weights 𝐰:
Conv(𝐳; 𝐰) = < 𝐳, 𝐰 > = ‖𝐳‖ ‖𝐰‖ cos(φ)
= h( ‖𝐳‖, ‖𝐰‖ ) g( cos(φ) )
where φ is the angle between 𝐳 and 𝐰. This view allows for the separation of norm terms from the angle term (decoupling), and for changing the form of h() and g() independently. Liu <cit.> derived several decoupled variants of the functions h() and g(). They demonstrated that the decoupled reparameterizations lead to significant performance gains, easier convergence, and stronger adversarial robustness. Later, <cit.> introduced the “B-cos” operator and showed that it leads to better neural network interpretations.
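To make the decoupled view above concrete, the following PyTorch-style sketch computes a purely cosine-similarity convolution, i.e., h ≡ 1 and g(cos(φ)) = cos(φ); it is our illustration rather than code from the cited works, and the function name, odd kernel size, and zero padding are our own assumptions.

import torch
import torch.nn.functional as F

def cosine_conv2d(x, weight, eps=1e-6):
    # x: (B, C_in, H, W); weight: (C_out, C_in, K, K), odd K assumed
    pad = weight.shape[-1] // 2
    dot = F.conv2d(x, weight, padding=pad)                    # <z, w> for every patch/filter
    ones = torch.ones_like(weight[:1])                        # (1, C_in, K, K)
    z_norm = F.conv2d(x * x, ones, padding=pad).sqrt()        # ||z|| per patch
    w_norm = weight.flatten(1).norm(dim=1).view(1, -1, 1, 1)  # ||w|| per filter
    return dot / (z_norm * w_norm + eps)                      # cos(phi)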
Our proposed operator can also be seen within this framework, where the dot product is taken between the centered and normalized versions of the input patch and the weights.
However, unlike the methods mentioned above, we investigate the use of XCNorm for out-of-domain generalization in a single source domain setting.
§ METHOD
Given a source domain 𝒮 = {( x^𝒮_i, y^𝒮_i )}_i=1^N_𝒮∼ P_XY^𝒮 the goal of Single Domain Generalization is to learn a robust and generalizable predictive function f_θ : 𝒳→𝒴 that can achieve a minimum prediction error on an unseen target domain 𝒯∼ P_XY^𝒯. Here, the joint distribution between the domains is different i.e. P_XY^𝒮≠ P_XY^𝒯.
Ben-David <cit.> showed that it would not be possible to learn models that can generalize to any distribution beyond the source distribution using solely the data sampled from that source distribution. Therefore, to generalize, it is essential to impose restrictions on the relationship between the source and target distributions. One common assumption in S-DG is that the target variable Y depends on an underlying latent representation in the covariate space X_0, that remains invariant across domains. However, X_0 cannot be directly observed; instead, we observe a mapping of X_0 into the observable space X, controlled by decision attributes R such as rendering (synthetic data) or data capture (real data) parameters. These attributes often change between the source and target domains, causing domain shifts. This assumption is represented by a Probabilistic graphical model, which is shown in <Ref>.
Most S-DG methods based on data augmentation aim to diversify X so that it spans the range of possible R values <cit.>. In contrast, our approach is to modify the model to make it robust to variations caused by R. For this purpose, we draw on the classical idea of template matching and consider linear layers in neural networks as computing a matching between the template (represented by the weights) and the signal (represented by input feature maps). This perspective enables us to use more robust template-matching techniques such as normalized cross-correlation to replace convolutions or dense layers in neural networks and recover the underlying invariant features X_0 in the input.
§.§ Normalized Cross-Correlation Layer
Typically, the linear units of a DNN layer compute the cross-correlation between the input feature maps 𝐳∈ℝ^H × W × C_in [Batch dimension is omitted for simplicity.] and the weights 𝐰∈ℝ^K × K × C_in× C_out [We use notations that are consistent with convolutional layers for convenience. For a fully connected (dense) layer, we assume that H = W = K = 1.]. Here C_in(out) is the number of channels in the input (or output), [H, W] are the feature map spatial dimensions and K is the kernel size. The pixel (u,v) of the c^th output feature channel is computed as:
Φ_u,v,c(𝐳; 𝐰) = < 𝐳_u,v , 𝐰_c > = ∑_j z_u,v^(j)·w_c^(j)
Here 𝐳_u,v is a patch of the input feature map centered at u,v with the same shape as the weight tensor 𝐰_c, and j index the pixels within the patch. The map ν: (u, v) → (u,v) is determined by parameters of the convolution (or dense) layer (i.e., stride, kernel width).
The operator above is ineffective for pattern matching when the patch energy (i.e., ‖𝐳_u,v‖^2 = ∑_j (z_u,v^(j))^2) is not uniform across a feature map. It also lacks robustness to affine transformations of the input, i.e., < 𝐳_u,v , 𝐰_c > ≠ < 𝐀𝐳_u,v + 𝐛 , 𝐰_c > <cit.>.
To overcome the above limitations, we propose using the normalized cross-correlation operator as a replacement for the linear operator. With this new operator, which we coin XCNorm, the output feature at position (u,v) of the c^th feature channel is computed as:
Ψ_u,v,c(𝐳; 𝐰)
= ∑_j [ z_u,v^(j) - μ_𝐳(u,v) ] [ w^(j)_c - μ_𝐰_c ] / ( ‖ 𝐳_u,v - μ_𝐳(u,v) ‖ ‖ 𝐰_c - μ_𝐰_c ‖ )
= ∑_j z_u,v^(j)·w^(j)_c - αμ_𝐳(u,v)μ_𝐰_c/ασ_𝐰_c√(1/α∑_j z_u,v^(j)·z_u,v^(j) - μ_𝐳(u,v)^2 ) + ϵ.
Here, μ_∙ is the mean of ∙, σ_𝐰_c = √(1/α∑_j [ w^(j)_c - μ_𝐰_c ]^2), ϵ is a small constant to ensure numerical stability, and α = K × K × C_in is the number of pixels in the patch.
Since the patch-wise mean, μ_∙, can be computed using linear operations (i.e., convolving with constant weight tensor with all elements equal to 1/α), the can be realized using linear operators:
Ψ(𝐳; 𝐰)
= [ Φ(𝐳; 𝐰) - α Φ(𝐳; 𝐰_α) μ_𝐰 ] / [ α √( Φ(𝐳^2 ; 𝐰_α) - Φ(𝐳 ; 𝐰_α)^2 ) σ_𝐰 + ϵ ]
= [ Φ(𝐳; 𝐰) - α μ_𝐳 μ_𝐰 ] / [ α √( μ_{𝐳^2} - μ_𝐳^2 ) σ_𝐰 + ϵ ].
Here, 𝐰_α∈ℝ^K × K × C_in× 1 is a constant matrix with all elements equal to 1/α, μ_𝐰∈ℝ^1 × C_out is the mean of the weights within each output channel, σ_𝐰∈ℝ^1 × C_out is the standard deviation of the weights within each output channel, and μ_{𝐳^2} = Φ(𝐳^2; 𝐰_α) denotes the patch-wise mean of 𝐳^2 (so that μ_{𝐳^2} - μ_𝐳^2 is the patch-wise variance).
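A minimal PyTorch-style sketch of the operator defined above is given below. It is our own illustration, not the released implementation: it assumes square kernels of odd size with zero padding, and omits the sharpening, gradient scaling, and NBAM extensions introduced in the next subsection.

import torch
import torch.nn.functional as F

def xcnorm_conv2d(x, weight, eps=1e-6):
    # Normalized cross-correlation between the patch-wise standardized input and
    # standardized weights; x: (B, C_in, H, W), weight: (C_out, C_in, K, K).
    cout, cin, k, _ = weight.shape
    alpha = cin * k * k                                       # pixels per patch
    pad = k // 2
    ones = torch.ones(1, cin, k, k, device=x.device, dtype=x.dtype) / alpha
    mu_z = F.conv2d(x, ones, padding=pad)                     # patch-wise mean of z
    var_z = (F.conv2d(x * x, ones, padding=pad) - mu_z ** 2).clamp_min(0.0)
    z_norm = (alpha * var_z).sqrt()                           # ||z - mu_z|| per patch
    w_c = weight - weight.mean(dim=(1, 2, 3), keepdim=True)   # centered weights
    w_norm = w_c.flatten(1).norm(dim=1).view(1, cout, 1, 1)   # ||w - mu_w|| per filter
    # sum_j (z - mu_z)(w - mu_w) = sum_j z (w - mu_w), since centered weights sum to zero
    num = F.conv2d(x, w_c, padding=pad)
    return num / (z_norm * w_norm + eps)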
§.§ Robust XCNorm
The XCNorm operator is sensitive to outliers in the input patch <cit.>. This can lead to issues when the input distribution changes unpredictably and introduces outliers. For instance, salt-and-pepper noise can cause large variations in the input energy (first term in the denominator of <Ref>) and affect the output of XCNorm. To overcome this, we propose a robust version of the operator called R-XCNorm, which modifies <Ref> as follows:
Γ_u,v,c(𝐳; 𝐰)
= ∑_j [ ϕ( z_u,v^(j) - μ_𝐳(u,v) ) ] [ w^(j)_c - μ_𝐰_c ] / ( ‖ ϕ( 𝐳_u,v - μ_𝐳(u,v) ) ‖ ‖ 𝐰_c - μ_𝐰_c ‖ + ϵ ).
Here, ϕ(·) can be any robust function such as the Huber, Cauchy (aka Lorentzian), Tukey, or Welsch function. In this work, we use the Welsch function due to its simplicity, which is defined as follows <cit.>:
ϕ(z) = c[1 - exp( -z^2/2c^2)]
Here, c is a learnable parameter that controls the amount of penalty for the outliers. The influence of c is depicted in <Ref>. During the training process, we adopt an initialization strategy where we set the value of the parameter c to a large value. Subsequently, we update the value of c for each layer by computing the mean of the patchwise standard deviation of the input over the training dataset. Similar to batch normalization, we incorporate a moving average component to enhance the stability and effectiveness of the normalization process.
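The Welsch penalty is a one-line function; the sketch below (our own, with a scalar c) shows how it could be applied elementwise to a centered patch before the correlation is taken.

import torch

def welsch(z, c):
    # phi(z) = c * (1 - exp(-z^2 / (2 c^2))); bounded by c, so outlier pixels
    # contribute only a limited amount to the patch energy.
    return c * (1.0 - torch.exp(-z ** 2 / (2.0 * c ** 2)))

centered_patch = torch.randn(32)                 # z - mu_z for one hypothetical patch
robust_patch = welsch(centered_patch, c=torch.tensor(1.0))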
§.§ Improving Convergence
To enhance the convergence of XCNorm and R-XCNorm, we incorporate some modifications to their base formulations. We use the notation Υ(𝐳; 𝐰) to denote either Ψ(𝐳; 𝐰) or Γ(𝐳; 𝐰) in the following text.
Sharpening:
The peaks and valleys of the output Υ(𝐳; 𝐰) can be emphasized (or de-emphasized) by raising it to power τ:
Υ_[1](𝐳; 𝐰) = {max[ 0, Υ (𝐳; 𝐰) ] }^τ
.
Here, we solely consider the positive outputs, as our empirical observations indicate that this choice stabilizes the training process and prevents convergence to undesirable optima.
<Ref> show the relationship between the input and output of the above operation.
Similar, techniques have also been used in cosine similarity-based techniques <cit.>. However, our approach differs from cosine similarity-based methods in that we do not pre-determine the value of τ. Instead, we treat it as a learnable parameter and optimize it alongside the weights.
Gradient Scaling:
The weight normalization in <Ref> tends to reduce the gradient magnitude, which in turn leads to slower convergence <cit.>.
To mitigate this issue, we propose gradient scaling using a learnable scaling factor 𝐀. More specifically, we apply the 𝐀 to the output of Υ_[1](𝐳; 𝐰) at every layer, as shown below:
Υ_[2](𝐳; 𝐰) = 𝐀⊙Υ_[1](𝐳; 𝐰).
Given that scaling the output of a function by a constant is equivalent to augmenting the gradient of the function by that constant, the proposed method effectively addresses the reduced gradient issue and accelerates the convergence process.
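A short sketch of the two modifications is given below; the paper does not fix the shapes of τ and 𝐀, so treating τ as a scalar and 𝐀 as a per-channel scale is our assumption.

import torch

def sharpen_and_scale(out, tau, A):
    # out: (B, C_out, H, W); keep positive responses, raise to a learnable power tau,
    # then multiply by a learnable scale A that acts as a gradient scaler.
    return A * torch.clamp(out, min=0.0) ** tau

tau = torch.nn.Parameter(torch.ones(()))
A = torch.nn.Parameter(torch.ones(1, 64, 1, 1))  # 64 output channels assumed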
Norm-based Attention Mask (NBAM):
The input norm ‖𝐳̃‖ signifies the importance of the local patch within the input. Here, 𝐳̃ = 𝐳_u,v - μ_𝐳(u,v) for XCNorm and 𝐳̃ = ϕ( 𝐳_u,v - μ_𝐳(u,v) ) for R-XCNorm.
Normalizing with ‖𝐳̃‖ assigns equal importance to all patches, but this may be problematic when ‖𝐳̃‖ is very small. Such small values indicate low-variation (low-information) input areas that should not be weighted equally with high-information areas, as this may cause spurious matches with templates.
To address this issue, we propose the Norm-based Attention Mask technique, which leverages a lightweight single convolution layer with sigmoid activation, denoted as ψ. This function learns to dynamically assign importance weights to image patches. Specifically, taking ‖𝐳̃‖ as input, ψ learns to generate patch-wise importance weights m ∈ [0, 1].
Subsequently, the output is redefined using the following computation:
Υ_[3](𝐳; 𝐰) = ψ( ‖𝐳̃‖ ) ⊙ Υ_[2](𝐳; 𝐰)
+ (1-ψ( ‖𝐳̃‖ )) ⊙ { Υ_[2](𝐳; 𝐰) ⊙ ‖𝐳̃‖ }
where Υ_[2](𝐳; 𝐰) ⊙ ‖𝐳̃‖ represents the output without normalization.
To further ensure that each feature is weighted equally, we normalize each channel in the output tensor:
Υ_[4](𝐳; 𝐰) = Υ_[3](𝐳; 𝐰) - μ_Υ/σ_Υ
where μ_Υ and σ_Υ are the mean and the variance of Υ_[3](𝐳; 𝐰) along each channel.
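A sketch of NBAM combined with the per-channel standardization above is shown below. The 1×1 kernel for ψ and the axes used for the channel statistics are our assumptions; the text only specifies a lightweight convolution with a sigmoid.

import torch
import torch.nn as nn

class NBAM(nn.Module):
    # Gate the normalized output with m = psi(||z||) and blend it with the
    # un-normalized output (normalized output times ||z||), then standardize channels.
    def __init__(self):
        super().__init__()
        self.psi = nn.Sequential(nn.Conv2d(1, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, out_norm, z_norm, eps=1e-6):
        m = self.psi(z_norm)                                  # (B, 1, H, W), patch importance
        out = m * out_norm + (1.0 - m) * out_norm * z_norm
        mu = out.mean(dim=(0, 2, 3), keepdim=True)            # per-channel statistics
        sigma = out.std(dim=(0, 2, 3), keepdim=True)
        return (out - mu) / (sigma + eps)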
§ EXPERIMENTS
§.§ Datasets and Settings
Digits-DG: consists of five distinct subsets: MNIST, MNIST-M, SVHN, SYN and USPS. Each subset represents a different domain with variations in writing styles and quality, scales, backgrounds, and strokes. We mainly utilise the Digits-DG benchmark for single-source domain evaluation and ablation studies. Following <cit.>, we chose the first 10,000 images from both the MNIST training and validation sets as the source dataset.
CIFAR-10-C <cit.>: is typically used for corruption robustness benchmarking. It contains 15 different corruption types that mimic real-world scenarios, such as noise, blur, weather, and digital artifacts. Each corruption type has five levels of severity. We follow the setup of <cit.> and use the CIFAR-10 training set as the source dataset while images in CIFAR-10-C are used for evaluation.
Camelyon-17-Wilds <cit.>: comprises 455k histopathology image patches extracted from 1000 whole-slide images (WSIs) of sentinel lymph nodes <cit.>. These WSIs were obtained from five distinct medical centres, with each centre representing a unique domain within the dataset. The primary objective of this dataset is to classify input patches and determine whether the central region contains any tumour tissue. In our experimental setup, we selected each of the domains as the source domain in turn and used the rest of the domains as target domains.
Implementation Details: Complete details regarding the experimental setup, including the network architecture and model selection, can be found in the supplementary document, providing a comprehensive understanding of our methodology.
§.§ Comparisons on Digits-DG
Results: Table <ref> provides a comprehensive comparison of the out-of-domain generalization performance between our proposed method and state-of-the-art approaches. The results highlight the significant improvements achieved by our proposed method. Both our base method (XCNorm) and robust variant (R-XCNorm) showcase substantial enhancements over the ERM baseline (49.3%→69.8% and 49.3%→74.2%, respectively), without the need for data augmentation techniques or extensive network modifications. Notably, our methods outperform adversarial augmentation-based domain generalization approaches, including ADA, M-ADA, and ME-ADA, by a considerable margin. Also, Table <ref> highlights the complementary nature of our method with data augmentation techniques, such as Random Convolution (RC) <cit.>. The combined approach achieves even higher performance, positioning it competitively alongside the complex state-of-the-art method, MetaCNN.
§.§ Comparisons on CIFAR-10-C
Results: The average accuracy for four categories of level-5 severity corruptions is presented in <Ref>. Our proposed method showcases substantial improvements over the baseline model (ERM), achieving an impressive accuracy boost from 54.08% to 68.10% without relying on data augmentation techniques. Notably, our approach demonstrates a remarkable 17.03% improvement for blur corruptions and a noteworthy 15.22% improvement for digital corruptions. Furthermore, our R-XCNorm method effectively enhances model robustness against noise corruption, elevating the accuracy from 48.15% to 60.81%. It is worth highlighting that our R-XCNorm outperforms most of the data augmentation-based approaches by a respectable margin. Additionally, when combined with RC <cit.>, our method exhibits exceptional performance across all four categories and demonstrates competitive results compared to the current state-of-the-art method.
§.§ Comparisons on Camelyon-17
Results: While the Camelyon17 dataset is commonly employed for conventional domain generalization tasks (generalizing from multiple source domains to a single target domain), it has not been extensively explored for single-source domain generalization. In this study, we investigate the single domain generalization performance on the Camelyon17 dataset using the AUROC metric, as shown in <Ref>. Notably, the ERM model exhibits poor generalization when trained on a single source domain, particularly in domains 2 and 3. We attribute this observation to significant variations in the staining agent's color across different hospitals (refer to the supplementary document for examples of training images from different domains). In contrast, the Random Convolution (RC) approach demonstrates impressive domain generalization capabilities on the Camelyon17 dataset.
Remarkably, our proposed method and its robust variant, R-XCNorm, consistently outperform the ERM baseline across all domains, without relying on data augmentation techniques. In fact, our methods even surpass the performance of the RC method. Moreover, when combined with RC, our approach achieves a robust model with exceptional domain generalization performance. These results highlight the effectiveness of our method in enhancing domain generalization and its potential to improve model robustness in practical applications, such as medical imaging classification.
§ DISCUSSION
§.§ Ablation Study
In this section, we present the results of our ablation study conducted on the Digits-DG benchmark to evaluate the efficacy of each component of our proposed method. Specifically, we examine the effectiveness of our XCNorm method, the robust variant (R-XCNorm), and the norm-based attention mask (NBAM) proposed to improve model convergence, as discussed in <Ref>.
<Ref> reports the classification results of the four variants of our original framework, including the baseline ERM model for comparison. Our method, without any extension, achieves a significant 19.7% performance improvement over the baseline, demonstrating the effectiveness of our approach. Furthermore, the integration of NBAM, which relaxes the normalization of input features, leads to additional performance gains. This highlights the importance of selectively normalizing important input regions, as a global normalization approach may mistakenly assign equal importance to insignificant regions and hinder training.
Furthermore, we evaluate the effectiveness of the robust variant, R-XCNorm, specifically designed for outlier rejection. As illustrated in <Ref>, the integration of the Welsch robust function enhances performance across all domains except for the USPS dataset. Moreover, combining the robust variant with NBAM yields even greater improvements across all domains without sacrificing performance. These results highlight the benefits of incorporating the robust variant and NBAM in our framework, demonstrating their potential for enhancing model performance and domain generalization.
§.§ Gradient Scaling
In this section, we investigate the effect of gradient scaling, as proposed in <Ref>, on the performance of XCNorm. As mentioned, the weight normalization employed in <Ref> tends to reduce the gradient magnitude during backpropagation, as shown in <Ref>:
∂/∂w Γ(z;w) = ( ẑ - (ŵ^⊤ẑ)·ŵ ) / ‖w‖
where ẑ = z/‖z‖ and ŵ = w/‖w‖. From <Ref>, we observe that a large norm ‖w‖ can diminish the gradients, significantly slowing down the convergence of the optimization process.
To address this issue, we propose a gradient scaling method. As shown in <Ref>, we compare the convergence of XCNorm with and without the proposed gradient scaling. The results clearly demonstrate that XCNorm with gradient scaling converges faster than the variant without it. This confirms the effectiveness of the proposed gradient scaling method in overcoming the diminished gradients caused by weight normalization, ultimately speeding up training convergence.
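This effect is easy to reproduce with autograd; the toy check below (ours, not from the paper) shows that scaling w by a factor of 10 shrinks the gradient of the cosine term by roughly the same factor.

import torch

def cos_grad_norm(w, z):
    w = w.clone().requires_grad_(True)
    torch.dot(w / w.norm(), z / z.norm()).backward()
    return w.grad.norm().item()

torch.manual_seed(0)
w, z = torch.randn(8), torch.randn(8)
print(cos_grad_norm(w, z), cos_grad_norm(10.0 * w, z))   # second value is ~10x smaller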
§.§ Model Sensitivity to Input Perturbations
In this section, we investigate the sensitivity of our model to input perturbations using the CiFAR-10-C dataset. Our analysis focuses on the variance in predicted class probabilities, P(y|x), when using a set of corrupted images with different severity levels (denoted as s) ranging from 0 to 5. A severity level of 0 represents no corruption, while a severity level of 5 indicates the most severe corruption.
As depicted in <Ref>, the input X is influenced by the causal factor X_0 (e.g., semantics) and the rendering factor R, which contributes to domain shift. Since the objective of S-DG is to develop a model that is invariant to R as much as possible, it is crucial to examine the effect of R on the predictions of the developed model. We measure this by computing the Model Robustness Score (MRS), which quantifies the discrepancy between the predictions obtained using clean and perturbed inputs:
MRS(x, ξ) = 1/5∑_s=1^5 KL( f_θ( ξ( x;0 ) ) ; f_θ(ξ(x;s)))
where ξ(x;s) represents the perturbation (e.g., blur, noise, compression, weather) included in the CIFAR-10-C dataset. A model that relies solely on X_0 for prediction would exhibit a lower MRS compared to a model that incorporates R in its predictions.
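A sketch of how MRS could be computed is given below; taking the clean predictive distribution as the KL reference and using a batch-mean reduction is our reading of the equation, corrupt_fn stands in for the CIFAR-10-C corruption ξ, and the model is assumed to return logits.

import torch
import torch.nn.functional as F

def mrs(model, x_clean, corrupt_fn, severities=(1, 2, 3, 4, 5)):
    # Average KL( f(clean) || f(corrupted at severity s) ) over the five severities.
    with torch.no_grad():
        p_clean = F.softmax(model(x_clean), dim=1)
        kls = []
        for s in severities:
            log_p_s = F.log_softmax(model(corrupt_fn(x_clean, s)), dim=1)
            kls.append(F.kl_div(log_p_s, p_clean, reduction="batchmean"))
    return torch.stack(kls).mean()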
<Ref> shows that the ERM model is highly susceptible to R, leading to large MRS values across all categories. In contrast, the proposed XCNorm effectively improves the model's robustness, resulting in models that are less reliant on R for prediction. Specifically, our method significantly reduces MRS for the weather, blur, and compression categories. Moreover, our robust variant R-XCNorm further diminishes the impact of R on the model's predictions, particularly in the noise category, thereby enhancing both the model's robustness and domain generalization performance. These findings highlight the effectiveness of our approach in mitigating the influence of domain-specific factors and improving the model's sensitivity to the underlying semantics X_0 rather than the rendering factor R.
§ CONCLUSION
In this paper, we introduce XCNorm, a novel normalization technique based on normalized cross-correlation. XCNorm exhibits invariance to affine shifts and changes in energy within local feature patches, while also eliminating the need for non-linear activation functions. The robust variant, R-XCNorm, focuses on outlier rejection, resulting in improved performance in challenging domains while maintaining competitiveness in others. Our proposed masking method selectively normalizes important input regions, enhancing model stability and out-of-domain performance. The integration of both methods showcases their complementary nature, leading to further improvements across all domains. We demonstrate the practical applicability of XCNorm in medical imaging classification, where it enhances model robustness and sensitivity to underlying semantics. Overall, our work provides effective methods for enhancing model performance and robustness, making notable contributions to the field of single-source domain generalization. These contributions have potential implications for various real-world applications.
ieee_fullname
|
http://arxiv.org/abs/2307.04060v1 | 20230708233916 | Double instability of Schwarzschild black holes in Einstein-Weyl-scalar theory | [
"Yun Soo Myung"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
Double instability of Schwarzschild black holes
in Einstein-Weyl-scalar theory
Yun Soo Myung^a[e-mail address: [email protected]]
^aInstitute of Basic Sciences and Department of Computer Simulation, Inje University,
Gimhae 50834, Korea
We study the stability of Schwarzschild black
hole in Einstein-Weyl-scalar (EWS) theory with a quadratic scalar coupling to the Weyl term.
Its linearized theory admits the Lichnerowicz equation for Ricci tensor as well as scalar equation.
The linearized Ricci-tensor carries with a regular mass term (m^2_2), whereas the linearized scalar has a tachyonic mass term (-1/m^2_2).
It turns out that the double instability of Schwarzschild black hole in EWS theory is given by Gregory-Laflamme and tachyonic instabilities.
In the small mass regime of m_2<0.876, the Schwarzschild black hole becomes unstable against Ricci-tensor perturbations,
while tachyonic instability is achieved for m_2<1.174. The former would provide a single branch of scalarized black holes, whereas the latter would induce infinite branches of scalarized black holes.
§ INTRODUCTION
Recently, black hole solutions with scalar hair obtained from Einstein-Gauss-Bonnet-scalar (EGBS) theories <cit.> and Einstein-Maxwell-scalar theory <cit.> have received much attention
because they provide a simple evasion of the no-hair theorem <cit.> by introducing
We note that these scalarized black hole solutions are closely related to the appearance of tachyonic instability for bald black holes.
In these linearized theories, the instability of Schwarzschild black hole is determined solely by the linearized scalar equation where the Gauss-Bonnet term acts as an effective mass term <cit.>, while
the instability of Reissner-Nordström (RN) black hole is given just by the linearized scalar equation where the Maxwell term plays the role of an effective mass term <cit.>.
This is allowed because their linearized Einstein and Einstein-Maxwell equations reduce to those for the linearized Einstein theory around Schwarzschild black hole and the Einstein-Maxwell theory around RN black hole, which turned out to be stable against tensor (metric) and vector-tensor perturbations.
It was well known that a higher curvature gravity (Einstein-Weyl theory) with a mass coupling parameter m^2_2 has provided the non-Schwarzschild black hole solution which crosses the Schwarzschild black hole solution at the bifurcation point of m_2=0.876 <cit.>.
This solution indicates the black hole with non-zero Ricci tensor (R̅_μν≠0), comparing to zero Ricci tensor (R̅_μν=0) for Schwarzschild black hole.
We note that the trace no-hair theorem for Ricci scalar played an important role in obtaining the non-Schwarzschild black hole solution.
It is worth noting that the instability of Schwarzschild black hole was found in the massive gravity theory <cit.> since the Schwarzschild black hole was known to be dynamically stable against tensor perturbations in Einstein theory <cit.>.
In the linearized Einstein-Weyl theory, the instability bound of Schwarzschild black hole was found as m_2<0.876 with r_+=1 when solving the Lichnerowicz equation for the linearized Ricci tensor <cit.>, which is the same equation as the linearized Einstein equation around a (4+1)-dimensional black string where the Gregory-Laflamme (GL) instability appeared firstly <cit.>.
A little difference is that the instability of Schwarzschild black hole arose from the massiveness of m_2≠0 in the Einstein-Weyl theory, whereas the GL instability appeared from the geometry of an extra z dimension in (4+1)-dimensional black string theory. This means that the mass m_2 trades for the extra dimension z.
In the present work, we wish to study two instabilities of Schwarzschild black holes simultaneously by introducing the Einstein-Weyl-scalar theory with a quadratic scalar coupling to Weyl term, instead of Gauss-Bonnet term. In this case, the linearized Ricci-tensor δ R_μν has a regular mass term m^2_2, whereas the linearized scalar δϕ possesses a tachyonic mass term (-1/m^2_2).
The linearized scalar equation around Schwarzschild black hole undergoes tachyonic instability for m_2<1.174, while the Lichnerowicz equation for linearized Ricci-tensor reveals GL instability for m_2<0.876.
We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits a single branch (m_2≠0) of scalarized black holes.
This means that their role of the mass term are quite different for producing scalarized black holes.
§ EINSTEIN-WEYL-SCALAR (EWS) THEORY
We introduce the EWS theory defined by
S_ EWS=1/16 π∫ d^4 x√(-g)[ R-2∂_μϕ∂^μϕ-f(ϕ)/2m^2_2 C^2],
where f(ϕ)=1+ϕ^2 is a quadratic scalar coupling function, m_2^2 denotes a mass coupling parameter, and C^2 represents the Weyl term (Weyl scalar invariant) given by
C^2(≡ C_μνρσC^μνρσ)=2(R_μνR^μν-R^2/3)+ R_ GB^2
with the Gauss-Bonnet term R_ GB^2=R^2-4R_μνR^μν+R_μνρσR^μνρσ. In the limit of m_2^2→∞, the Weyl term decouples and the theory reduces to the tensor-scalar theory.
We wish to emphasize that scalar couplings to Gauss-Bonnet term were mostly used to find black holes with scalar hair within EGBS theory because it provides an effective mass term for a linearized scalar without modifying metric perturbations <cit.>. This is so because the Gauss-Bonnet term is a topological term in four dimensions.
Actually, the Weyl term is similar to the Maxwell term (F^2) because both they are conformally invariant and their variations with respect to g_μν are traceless.
From the action (<ref>), we derive the Einstein equation
G_μν=2∂ _μϕ∂ _νϕ -(∂ϕ)^2g_μν+2(1+ϕ^2)B_μν/m^2_2-Γ_μν/m^2_2,
where G_μν=R_μν-(R/2)g_μν is the Einstein tensor.
Here, B_μν (B^μ _μ=0) coming from the first part of (<ref>) is the Bach tensor defined as
B_μν = R_μρνσR^ρσ-g_μν/4 R_ρσR^ρσ- R/3(R_μν-g_μν/4R)
+ 1/2(∇^2R_μν-g_μν/6∇^2 R-1/3∇_μ∇_ν R)
and Γ_μν is given by
Γ_μν = -4/3R∇_(μΨ_ν)-∇^αΨ_α(3R_μν-4g_μν/3R)+ 6R_(μ|α|∇^αΨ_ν)
- 3 R^αβ∇_αΨ_β g_μν
+4R^β_ μαν∇^αΨ_β
with
Ψ_μ= 2ϕ∂_μϕ.
Its trace is not zero as Γ^μ _μ=R∇^ρΨ_ρ-2R^ρσ∇_ρΨ_σ.
Importantly, the scalar equation is given by
∇^2 ϕ +C^2/4m^2_2ϕ=0 .
Considering ϕ̅=0, the Schwarzschild solution is found from Eqs.(<ref>) and (<ref>) as
ds^2_ SBH= g̅_μνdx^μ dx^ν=-(1-r_+/r)dt^2+dr^2/(1-r_+/r)+r^2dΩ^2_2
with horizon radius r_+=2M. This Schwarzschild background gives us R̅_μνρσ≠0, R̅_μν=0, and R̅=0.
In this case, one finds easily that C̅^2=R̅_μνρσR̅^μνρσ=12r_+^2/r^6=R̅^2_ GB.
§ DOUBLE INSTABILITY FOR SCHWARZSCHILD BLACK HOLE
For the stability analysis of Schwarzschild black hole, we need the two linearized equations which describe the metric perturbation h_μν in (g_μν=g̅_μν+h_μν) and scalar perturbation δϕ in (ϕ=0+δϕ) propagating around (<ref>). They are obtained by linearizing Eqs.(<ref>) and (<ref>) as
∇̅^2δ G_μν+2R̅_μρνσδ G^ρσ-1/3(∇̅_μ∇̅_ν-g̅_μν∇̅^2)δ R-m^2_2 δ
G_μν=0 ,
(∇̅^2+ 3r_+^2/m^2_2r^6)δϕ= 0
with δ G_μν=δ R_μν-δ R g̅_μν/2 the linearized Einstein tensor. Here, we note that `m^2_2' in Eq.(<ref>) is regarded as a regular mass term, while `3r_+^2/m^2_2r^6' in Eq.(<ref>) corresponds to a tachyonic mass term for m^2_2>0.
Taking the trace over Eq.(<ref>) leads to
m^2_2 δ R=0,
which implies the non-propagation of a linearized Ricci scalar as
δ R=0.
We confirm Eq.(<ref>) by linearizing R=2(∂ϕ)^2+Γ^μ _μ/m^2_2.
This non-propagation of linearized scalar plays an important role in obtaining a linearized theory of the EWS theory.
Plugging Eq.(<ref>) into Eq.(<ref>), one finds
the Lichnerowicz-Ricci tensor equation for the traceless and transverse Ricci tensor δ R_μν as
(Δ̅_ L+m^2_2 ) δ R_μν=0,
where the Lichnerowicz operator on the Schwarzschild background is given by
Δ̅_ Lδ R_μν=-∇̅^2δ R_μν-2R̅_μρνσδ R^ρσ.
Here, we consider m^2_2>0 for non-tachyonic case.
Actually, Eq.(<ref>) describes a massive spin-2 mode (δ R_μν) with mass m_2 propagating on the Schwarzschild black hole background.
Let us solve the Lichnerowicz-Ricci tensor equation (<ref>) by adopting δ R_μν(t, x)=e^Ω tδR̃_μν( x).
Its s(l=0)-mode in polar sector satisfies the Schrödinger-type equation when introducing a tortoise coordinate r_*=∫[dr/(1-r_+/r)]
d^2δR̃^l=0_μν/dr^2_*-[Ω^2+V_ Z(r)]δR̃^l=0_μν=0,
where the Zerilli potential V_ Z(r) is given by <cit.>
V_ Z(r)=(1-r_+/r)[ m^2_2 + r_+/r^3 - ( 12 m^2_2 r_+ (r-0.5 r_+) + 6 m^4_2 r^3 (2r_+-r) ) / (r_+ + m^2_2 r^3)^2 ].
As is shown in (Left) Fig. 1, all potentials with m_2≠0 induce negative region near the horizon, while their asymptotic forms are given by m^2_2>0.
The negative region becomes wide and deep as the mass parameter m_2 decreases, implying GL instability of the Schwarzschild black hole.
In case of m_2=0, however, there is no GL instability because its potential V_ Z(r) is positive definite outside the horizon.
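These features of V_Z(r) can be checked with a few lines of Python (our script, using the grouping of the potential as written above):

import numpy as np

def V_Z(r, m2, rp=1.0):
    f = 1.0 - rp / r
    num = 12.0 * m2**2 * rp * (r - 0.5 * rp) + 6.0 * m2**4 * r**3 * (2.0 * rp - r)
    return f * (m2**2 + rp / r**3 - num / (rp + m2**2 * r**3) ** 2)

r = np.linspace(1.001, 30.0, 4000)
for m2 in (0.3, 0.876, 1.5):
    v = V_Z(r, m2)
    print(f"m2={m2}: min V_Z={v.min():.3f}, V_Z(r=30)={v[-1]:.3f}")
# The negative well near the horizon deepens as m2 decreases, while V_Z approaches
# m2^2 at large r, consistent with the description of (Left) Fig. 1.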
Solving Eq.(<ref>) numerically with appropriate boundary conditions, one finds the GL instability bound from (Left) Fig. 2 as
0<m_2<m_2^ th=0.876, for r_+=1,
where m_2^ th denotes threshold of GL instability. It is important to note that this bound is found in the EWS theory, but there is no such bound in the EGBS theory.
In the study of the instability for the Euclidean Schwarzschild black hole together with Einstein gravity, Gross, Perry, and Yaffe have found that there is just one normalizable negative-eigenvalue mode of the Lichnerowicz
operator [(Δ^ E_ L-λ_ GPY)h_μν=0] <cit.>. This connection could be realized from Eq.(<ref>) because when one considers δ R_μν=Δ̅_ Lh_μν/2
for ∇̅^μ h_μν=0 and h^μ _μ=0, Eq.(<ref>) implies that Δ̅_ Lh_μν=0 or (Δ̅_ L+m^2_2)h_μν=0.
Its eigenvalue is given by λ_ GPY[=-(m_2^ th)^2]=-0.768/r_+^2, which was noted in the early study of Schwarzschild black hole within higher curvature gravity <cit.>. Indeed, λ_ GPY is related to the thermodynamic instability of negative heat capacity C=-2π r_+^2 for Schwarzschild black hole in canonical ensemble.
On the other hand, we focus on the linearized scalar equation (<ref>) which is the same form as found in the linearized EGBS theory.
Considering
δϕ(t,r,θ,φ)=u(r)/re^-iω tY_lm(θ,φ),
the radial equation for s(l=0)-mode scalar leads to the Schrödinger-type equation
d^2u/dr_*^2+[ω^2-V_ S(r)]u(r)=0,
where the scalar potential V_ S(r) is given by
V_ S(r)=(1-r_+/r)[r_+/r^3-3r_+^2/m^2_2r^6],
where the last term corresponds to a tachyonic mass term.
Considering ∫^∞_r_+ dr [V_ S(r)/(1-r_+/r)]<0,
one may introduce a sufficient condition of tachyonic instability for a mass parameter m_2 <cit.>
m^2_2r_+^2<12/10⇒ m_2<m_2^ sc=1.095/r_+.
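This bound follows from an elementary integral; a short symbolic check (our script) reproduces it:

import sympy as sp

r, rp, m2 = sp.symbols('r rp m2', positive=True)
# integrand: V_S/(1 - rp/r) = rp/r**3 - 3*rp**2/(m2**2 * r**6)
I = sp.integrate(rp / r**3 - 3 * rp**2 / (m2**2 * r**6), (r, rp, sp.oo))
print(sp.simplify(I))               # 1/(2*rp) - 3/(5*m2**2*rp**3)
print(sp.solve(sp.Eq(I, 0), m2))    # m2 = sqrt(6/5)/rp ≈ 1.095/rp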
However, Eq.(<ref>) is not a necessary and sufficient condition for tachyonic instability.
Observing (Right) Fig. 1, one finds that the negative region becomes wide and deep as the mass parameter m_2 decreases, implying tachyonic instability of the Schwarzschild black hole.
To determine the threshold of tachyonic instability, one has to solve the second-order differential equation (<ref>) with ω=iΩ numerically,
which may allow an exponentially growing mode of e^Ω t as an unstable mode.
In this case, we choose two boundary conditions: a normalizable
solution of u(∞)∼ e^-Ω r_* at infinity and
a solution of u(r_+)∼(r-r_+)^Ω r_+ near the horizon.
By observing (Right) Fig. 2 together with r_+=1, we read off the
bound for tachyonic instability as
m_2<m_2^ sth=1.174
which implies that the threshold of tachyonic instability is given by 1.174 being greater than 1.095 (sufficient condition for tachyonic instability).
This corresponds to a bifurcation point between Schwarzschild and n=0 branch of scalarized black holes. In the limit of m^2_2 → 0, one has an infinitely negative potential which implies a large Ω as seen from (Right) Fig. 2.
Finally, we obtain an inequality bound for threshold of GL and tachyonic instabilities as
m_2^ th<m_2^ sth.
However, we remind the reader that the linearized Ricci-tensor δ R_μν carries with a regular mass term (m^2_2), whereas the linearized scalar δϕ has a tachyonic mass term (-1/m^2_2).
In this sense, the GL instability is quite different from the tachyonic instability <cit.>.
§ DISCUSSIONS
In this work, we have investigated two instabilities of Schwarzschild black holes simultaneously by introducing the EWS theory with a quadratic scalar coupling to Weyl term. Here, the linearized Ricci-tensor has a regular mass term (m^2_2), whereas the linearized scalar possesses a tachyonic mass term (-1/m^2_2).
The linearized scalar equation around black hole indicates tachyonic instability for m_2<1.174, while the Lichnerowicz equation for linearized Ricci-tensor shows GL instability for m_2<0.876.
This suggests that their mass terms play different roles for generating scalarized black holes because the GL instability is quite different from the tachyonic instability.
We expect that the former may induce infinite branches (n=0,1,2,⋯) of scalarized black holes, while the latter admits single branch (m_2>0) of scalarized black holes.
Now, we would like to mention the non-Schwarzschild black hole solutions obtained from the Einstein-Weyl theory (ϕ=0 EWS theory with m_2^2>0). This solution can be obtained numerically by requiring the no-hair theorem for Ricci scalar (R=0) <cit.>.
Actually, it corresponds to single branch of non-Schwarzschild black holes with Ricci-tensor hair <cit.>. Recently, it was shown that the long-wave length instability bound for non-Schwarzschild black holes is given by m_2<0.876 <cit.>, which is the same bound as the GL instability for Schwarzschild black hole <cit.>, but it contradicts to the conjecture from black hole thermodynamics addressed in <cit.>. We expect that a single branch of non-Schwarzschild black holes with Ricci-tensor and scalar hairs would be found from the EWS theory with f(ϕ)=1+ϕ^2.
On the other hand, we consider the scalar equation (<ref>) with tachyonic mass. From its static equation with ω=0, we obtain an infinite spectrum of parameter m_2 : m_2∈ [1.174=m_2^ sth, 0.453, 0.280, 0.202, · · ·], which defines infinite branches of scalarized black holes: n=0((0,1.174]), n=1((0,0.453]), n=2((0,0.28]), n=3((0,0.202]),⋯. Also, n=0, 1, 2, 3,⋯ are identified with the number of nodes for δϕ(z) = u(z)/z profile.
Thus, it is expected that infinite branches (n=0, 1, 2, 3,⋯) of black hole with scalar hair would be found when solving Eqs.(<ref>) and (<ref>) numerically.
However, this computation seems not to be easy because Eq.(<ref>) includes fourth-order derivatives and its Ricci scalar is not zero (R=2(∂ϕ)^2+Γ^μ _μ/m^2_2).
We wish to introduce a conventional case of f(ϕ)=ϕ^2 quadratic coupling function. In this case, there is no GL instability because the Bach tensor-term does not contribute to the linearized Einstein equation (<ref>).
Here, the linearized EWS theory reduces to the linearized EGBS theory which provides n=0 band with bandwidth
of 1.174 < m_2 < 1.272 <cit.>. This band of black holes with scalar hair is unstable against radial perturbations <cit.>. This is the reason why we choose the EWS theory with the quadratic coupling function f(ϕ)=1+ϕ^2.
Finally, for the EWS theory with a quartic coupling function f(ϕ)=(1-e^-κϕ^4)/4κ <cit.>, the linearized scalar equation leads to ∇̅^2δϕ=0, which implies that there is no tachyonic instability. Also, its linearized Einstein equation is given by δ G_μν=0 which indicates that there is no GL instability. In this quartic coupling case, the linearized EWS theory reduces to the linearized EGBS theory, showing tachyonic stability. Without tachyonic instability, one expects to have a single branch of nonlinearly scalarized black holes but not infinite branches of scalarized black holes.
Acknowledgments
The author thanks De-Cheng Zou for helpful discussions.
99
Antoniou:2017acq
G. Antoniou, A. Bakopoulos and P. Kanti,
Phys. Rev. Lett. 120, no.13, 131102 (2018)
doi:10.1103/PhysRevLett.120.131102
[arXiv:1711.03390 [hep-th]].
Doneva:2017bvd
D. D. Doneva and S. S. Yazadjiev,
Phys. Rev. Lett. 120, no.13, 131103 (2018)
doi:10.1103/PhysRevLett.120.131103
[arXiv:1711.01187 [gr-qc]].
Silva:2017uqg
H. O. Silva, J. Sakstein, L. Gualtieri, T. P. Sotiriou and E. Berti,
Phys. Rev. Lett. 120, no.13, 131104 (2018)
doi:10.1103/PhysRevLett.120.131104
[arXiv:1711.02080 [gr-qc]].
Herdeiro:2018wub
C. A. R. Herdeiro, E. Radu, N. Sanchis-Gual and J. A. Font,
Phys. Rev. Lett. 121, no.10, 101102 (2018)
doi:10.1103/PhysRevLett.121.101102
[arXiv:1806.05190 [gr-qc]].
Bekenstein:1995un
J. D. Bekenstein,
Phys. Rev. D 51, no.12, R6608 (1995)
doi:10.1103/PhysRevD.51.R6608
Myung:2018iyq
Y. S. Myung and D. C. Zou,
Phys. Rev. D 98, no.2, 024030 (2018)
doi:10.1103/PhysRevD.98.024030
[arXiv:1805.05023 [gr-qc]].
Myung:2018vug
Y. S. Myung and D. C. Zou,
Eur. Phys. J. C 79, no.3, 273 (2019)
doi:10.1140/epjc/s10052-019-6792-6
[arXiv:1808.02609 [gr-qc]].
Lu:2015cqa
H. Lu, A. Perkins, C. N. Pope and K. S. Stelle,
Phys. Rev. Lett. 114, no.17, 171601 (2015)
doi:10.1103/PhysRevLett.114.171601
[arXiv:1502.01028 [hep-th]].
Babichev:2013una
E. Babichev and A. Fabbri,
Class. Quant. Grav. 30, 152001 (2013)
doi:10.1088/0264-9381/30/15/152001
[arXiv:1304.5992 [gr-qc]].
Brito:2013wya
R. Brito, V. Cardoso and P. Pani,
Phys. Rev. D 88, no.2, 023514 (2013)
doi:10.1103/PhysRevD.88.023514
[arXiv:1304.6725 [gr-qc]].
Regge:1957td
T. Regge and J. A. Wheeler,
Phys. Rev. 108, 1063-1069 (1957)
doi:10.1103/PhysRev.108.1063
Zerilli:1970se
F. J. Zerilli,
Phys. Rev. Lett. 24, 737-738 (1970)
doi:10.1103/PhysRevLett.24.737
Myung:2013doa
Y. S. Myung,
Phys. Rev. D 88, no.2, 024039 (2013)
doi:10.1103/PhysRevD.88.024039
[arXiv:1306.3725 [gr-qc]].
Gregory:1993vy
R. Gregory and R. Laflamme,
Phys. Rev. Lett. 70, 2837-2840 (1993)
doi:10.1103/PhysRevLett.70.2837
[arXiv:hep-th/9301052 [hep-th]].
Lu:2017kzi
H. Lü, A. Perkins, C. N. Pope and K. S. Stelle,
Phys. Rev. D 96, no.4, 046006 (2017)
doi:10.1103/PhysRevD.96.046006
[arXiv:1704.05493 [hep-th]].
Gross:1982cv
D. J. Gross, M. J. Perry and L. G. Yaffe,
Phys. Rev. D 25, 330-355 (1982)
doi:10.1103/PhysRevD.25.330
Whitt:1985ki
B. Whitt,
Phys. Rev. D 32, 379 (1985)
doi:10.1103/PhysRevD.32.379
Held:2022abx
A. Held and J. Zhang,
Phys. Rev. D 107, no.6, 064060 (2023)
doi:10.1103/PhysRevD.107.064060
[arXiv:2209.01867 [gr-qc]].
Stelle:2017bdu
K. S. Stelle,
Int. J. Mod. Phys. A 32, no.09, 1741012 (2017)
doi:10.1142/S0217751X17410123
Blazquez-Salcedo:2018jnn
J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev,
Phys. Rev. D 98, no.8, 084011 (2018)
doi:10.1103/PhysRevD.98.084011
[arXiv:1805.05755 [gr-qc]].
Doneva:2021tvn
D. D. Doneva and S. S. Yazadjiev,
Phys. Rev. D 105, no.4, L041502 (2022)
doi:10.1103/PhysRevD.105.L041502
[arXiv:2107.01738 [gr-qc]].
Blazquez-Salcedo:2022omw
J. L. Blázquez-Salcedo, D. D. Doneva, J. Kunz and S. S. Yazadjiev,
Phys. Rev. D 105, no.12, 124005 (2022)
doi:10.1103/PhysRevD.105.124005
[arXiv:2203.00709 [gr-qc]].
Lai:2023gwe
M. Y. Lai, D. C. Zou, R. H. Yue and Y. S. Myung,
[arXiv:2304.08012 [gr-qc]].
|
http://arxiv.org/abs/2307.04110v1 | 20230709065359 | Learning Space-Time Continuous Neural PDEs from Partially Observed States | [
"Valerii Iakovlev",
"Markus Heinonen",
"Harri Lähdesmäki"
] | cs.LG | [
"cs.LG"
] |
We introduce a novel grid-independent model for learning partial differential equations (PDEs) from noisy and partial observations on irregular spatiotemporal grids. We propose a space-time continuous latent neural PDE model with an efficient probabilistic framework and a novel encoder design for improved data efficiency and grid independence. The latent state dynamics are governed by a PDE model that combines the collocation method and the method of lines. We employ amortized variational inference for approximate posterior estimation and utilize a multiple shooting technique for enhanced training speed and stability. Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, overcoming limitations of previous approaches and effectively handling partially-observed data. The proposed model outperforms recent methods, showing its potential to advance data-driven PDE modeling and enabling robust, grid-independent modeling of complex partially-observed dynamic processes.
§ INTRODUCTION
All source code and datasets will be made publicly available after review.
Modeling spatiotemporal processes allows to understand and predict the behavior of complex systems that evolve over time and space <cit.>. Partial differential equations (PDEs) are a popular tool for this task as they have a solid mathematical foundation <cit.> and can describe the dynamics of a wide range of physical, biological, and social phenomena <cit.>. However, deriving PDEs can be challenging, especially when the system's underlying mechanisms are complex and not well understood. Data-driven methods can bypass these challenges <cit.>. By learning the underlying system dynamics directly from data, we can develop accurate PDE models that capture the essential features of the system. This approach has changed our ability to model complex systems and make predictions about their behavior in a data-driven manner.
While current data-driven PDE models have been successful at modeling complex spatiotemporal phenomena, they often operate under various simplifying assumptions such as regularity of the spatial or temporal grids <cit.>, discreteness in space or time <cit.>, and availability of complete and noiseless observations <cit.>. Such assumptions become increasingly limiting in more realistic scenarios with scarce data and irregularly spaced, noisy and partial observations.
We address the limitations of existing methods and propose a space-time continuous and grid-independent model that can learn PDE dynamics from noisy and partial observations made on irregular spatiotemporal grids. Our main contributions include:
* Development of an efficient generative modeling framework for learning latent neural PDE models from noisy and partially-observed data;
* Novel PDE model that merges two PDE solution techniques – the collocation method and the method of lines – to achieve space-time continuity, grid-independence, and data efficiency;
* Novel encoder design that operates on local spatiotemporal neighborhoods for improved data-efficiency and grid-independence.
Our model demonstrates state-of-the-art performance on complex synthetic and real-world datasets, opening up the possibility for accurate and efficient modeling of complex dynamic processes and promoting further advancements in data-driven PDE modeling.
§ PROBLEM SETUP
In this work we are concerned with modeling of spatiotemporal processes. For brevity, we present our method for a single observed trajectory, but extension to multiple trajectories is straightforward. We observe a spatiotemporal dynamical system evolving over time on a spatial domain Ω. The observations are made at M arbitrary consecutive time points t_1:M:=(t_1, …, t_M) and N arbitrary observation locations _1:N:=(_1, …, _N), where _i ∈Ω. This generates a sequence of observations _1:M:=(_1, …, _M), where _i ∈ℝ^N × D contains D-dimensional observations at the N observation locations. We define _i^j as the observation at time t_i and location _j. The number of time points and observation locations may vary between different observed trajectories.
We assume the data is generated by a dynamical system with a latent state (t, ) ∈ℝ^d, where t is time and ∈Ω is spatial location. The latent state is governed by an unknown PDE and is mapped to the observed state (t, ) ∈ℝ^D by an unknown observation function g and likelihood model p:
∂(t, x)/∂ t = F((t,), ∂_(t,), ∂^2_(t,),…),
(t,) ∼ p(g((t,))),
where ∂^∙_(t,) denotes partial derivatives wrt .
In this work we make two assumptions that are highly relevant in real-world scenarios. First, we assume partial observations, that is, the observed state (t,) does not contain all information about the latent state (t,) (e.g., (t,) contains pressure and velocity, but (t,) contains information only about the pressure). Second, we assume out-of-distribution time points and observation locations, that is, their number, positions, and density can change arbitrarily at test time.
§ MODEL
[Figure: Model sketch. Initial latent state (t_1,) is evolved via F_θ_dyn to the following latent states, which are then mapped to the observed states by g_θ_dec.]
Here we describe the model components (Sec. <ref>) which are then used to construct the generative model (Sec. <ref>).
§.§ Model components
Our model consists of four parts: space-time continuous latent state (t, ) and observed state (t, ), a dynamics function F_θ_dyn governing the temporal evolution of the latent state, and an observation function g_θ_dec mapping the latent state to the observed state (see Figure <ref>). Next, we describe these components in detail.
Latent state.
To define a space-time continuous latent state (t, ) ∈ℝ^d, we introduce (t):=(^1(t), …, ^N(t)) ∈ℝ^N × d, where each ^i(t) ∈ℝ^d corresponds to the observation location _i.
Then, we define the latent state (t, ) as a spatial interpolant of (t):
(t, ) := Interpolate((t))(),
where Interpolate(·) maps (t) to an interpolant which can be evaluated at any spatial location ∈Ω (see Figure <ref>). We do not rely on a particular interpolation method, but in this work we use linear interpolation as it shows good performance and facilitates efficient implementation.
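As a concrete illustration, the following minimal sketch (not the authors' code) builds such a latent-state interpolant with linear interpolation, assuming 2D observation locations xs of shape (N, 2) and latent node values z of shape (N, d); all names are illustrative.

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def make_latent_interpolant(xs: np.ndarray, z: np.ndarray):
    """Return a callable z(x) built by piecewise-linear interpolation of the node values."""
    interp = LinearNDInterpolator(xs, z)    # linear interpolation on a Delaunay triangulation
    return lambda x_query: interp(x_query)  # x_query: (M, 2) -> (M, d); NaN outside the convex hull

# Usage: evaluate the latent state at arbitrary spatial locations.
xs = np.random.rand(100, 2)                 # observation locations in the unit square
z = np.random.randn(100, 3)                 # latent node values with d = 3
z_of_x = make_latent_interpolant(xs, z)
values = z_of_x(np.array([[0.5, 0.5], [0.25, 0.75]]))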
Latent state dynamics.
[Figure: Latent state (t,) defined as an interpolant of (t) := (^1(t), ..., ^4(t)).]
Given a space-time continuous latent state, one can naturally define its dynamics in terms of a PDE:
∂(t, x)/∂ t = F_θ_dyn((t,), ∂_(t,), ∂^2_(t,),…),
where F_θ_dyn is a dynamics function with parameters θ_dyn. This is a viable approach known as the collocation method <cit.>, but it has several limitations. It requires us to decide which partial derivatives to include in the dynamics function, and also requires an interpolant which has all the selected partial derivatives (e.g., a linear interpolant has only first-order derivatives). To avoid these limitations, we combine the collocation method with another PDE solution technique known as the method of lines <cit.>, which approximates spatial derivatives ∂^∙_(t,) using only evaluations of (t,), and we let the dynamics function approximate all required derivatives in a data-driven manner. To do that, we define the spatial neighborhood of as 𝒩_S(), which is a set containing and its spatial neighbors, and also define (t, 𝒩_S()), which is a set of evaluations of the interpolant (t, ) at points in 𝒩_S():
𝒩_S() := {' ∈Ω : '= or ' is a spatial neighbor of },
(t, 𝒩_S()) := {(t, ') : ' ∈𝒩_S() },
and assume that this information is sufficient to approximate all required spatial derivatives at . This is a reasonable assumption since, e.g., finite differences can approximate derivatives using only function values and locations of the evaluation points. Hence, we define the dynamics of (t, ) as
∂(t, )/∂ t = F_θ_dyn(𝒩_S(), (t, 𝒩_S())),
which is defined only in terms of the values of the latent state, but not its spatial derivatives.
[Figure: Example of 𝒩_S(_i). Instead of using the observation locations (dots) to define spatial neighbors, we use spatial locations arranged in a fixed predefined pattern (crosses).]
One way to define the spatial neighbors for is in terms of the observation locations _1:N (e.g., use the nearest ones) as was done, for example, in <cit.>. Instead, we utilize continuity of the latent state (t, ), and define the spatial neighbors in a grid-independent manner as a fixed number of points arranged in a predefined pattern around (see Figure <ref>). This allows us to fix the shape and size of the spatial neighborhoods in advance, making them independent of the observation locations. In this work we use the spatial neighborhood consisting of two concentric circles of radius r and r/2, each circle containing 8 evaluation points as in Figure <ref>. In Appendix <ref> we compare neighborhoods of various shapes and sizes.
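A minimal sketch of this fixed neighborhood pattern (the point itself plus two concentric rings of radius r and r/2 with 8 points each); the function names are illustrative and not taken from the paper's code.

import numpy as np

def neighborhood_offsets(r: float = 0.1) -> np.ndarray:
    angles = np.deg2rad(np.arange(0, 360, 45))             # 8 directions, 45 degrees apart
    ring = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.concatenate([np.zeros((1, 2)), r * ring, (r / 2) * ring])  # (17, 2): center + 2 rings

def spatial_neighborhood(x: np.ndarray, r: float = 0.1) -> np.ndarray:
    """Evaluation points N_S(x) around a 2D location x."""
    return x[None, :] + neighborhood_offsets(r)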
Equation <ref> allows us to simulate the temporal evolution of (t, ) at any spatial location. However, since (t, ) is defined only in terms of a spatial interpolant of (t) (see Eq. <ref>), with ^i(t) = (t, _i), it is sufficient to simulate the latent state dynamics only at the observation locations _1:N. Hence, we can completely characterize the latent state dynamics in terms of a system of N ODEs:
d(t)/dt :=
[ d^1(t)/dt; ⋮; d^N(t)/dt ] =
[ ∂(t, _1)/∂ t; ⋮; ∂(t, _N)/∂ t ] =
[ F_θ_dyn(𝒩_S(_1), (t, 𝒩_S(_1))); ⋮; F_θ_dyn(𝒩_S(_N), (t, 𝒩_S(_N))) ].
For convenience, we define (t; t_1, _1, θ_dyn) := ODESolve(t;t_1,_1,θ_dyn) as the solution of the ODE system in Equation <ref> at time t with initial state (t_1)=_1 and parameters θ_dyn. We also define (t, ; t_1, _1, θ_dyn) as the spatial interpolant of (t; t_1, _1, θ_dyn) as in Equation <ref>. We solve the ODEs using off the shelf differentiable ODE solvers from torchdiffeq package <cit.>. Note that we solve for the state (t) only at the observation locations _1:N, so to get the neighborhood values (t, 𝒩_S(_i)) we perform interpolation at every step of the ODE solver.
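The following sketch shows how such a system can be integrated with torchdiffeq; it is an illustration under simplifying assumptions (the linear interpolation from the N node values to the K neighborhood points of every node is precomputed as a fixed matrix W of shape (N*K, N), and the fixed relative neighborhood locations are absorbed into the network), not the authors' implementation.

import torch
import torch.nn as nn
from torchdiffeq import odeint

class LatentDynamics(nn.Module):
    def __init__(self, W: torch.Tensor, d: int, K: int, hidden: int = 512):
        super().__init__()
        self.W, self.K = W, K                                  # (N*K, N) node-to-neighborhood weights
        self.f = nn.Sequential(nn.Linear(K * d, hidden), nn.ReLU(),
                               nn.Linear(hidden, d))           # F_dyn acting on neighborhood values

    def forward(self, t, z):                                   # z: (N, d) latent node values
        nbr = (self.W @ z).reshape(-1, self.K * z.shape[-1])   # (N, K*d), rows grouped per node
        return self.f(nbr)                                     # dz/dt at every node

# z1: initial latent state of shape (N, d); ts: 1D tensor of time points.
# z_traj = odeint(LatentDynamics(W, d=3, K=17), z1, ts, method='dopri5', rtol=1e-3, atol=1e-4)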
Observation function.
We define the mapping from the latent space to the observation space as a parametric function g_θ_dec with parameters θ_dec:
(t,) ∼𝒩(g_θ_dec((t, )), σ_u^2I_D),
where 𝒩 is the Gaussian distribution, σ_u^2 is noise variance, and I_D is D-by-D identity matrix.
§.§ Generative model
[Figure: Multiple shooting splits a trajectory with one initial state (top) into two sub-trajectories with two initial states (bottom) and tries to minimize the gap between sub-trajectories (orange arrow).]
Training models of dynamic systems is often challenging due to long training times and training instabilities <cit.>. To alleviate these problems, various heuristics have been proposed, such as progressive lengthening and splitting of the training trajectories <cit.>. We use multiple shooting <cit.>, a simple and efficient technique which has demonstrated its effectiveness in ODE learning applications <cit.>. We extend the multiple shooting framework for latent ODE models presented in <cit.> to our PDE modeling setup by introducing spatial dimensions in the latent state and designing an encoder adapted specifically to the PDE setting (Section <ref>).
Multiple shooting splits a single trajectory {(t_i)}_i=1,...,M with one initial state _1 into B consecutive non-overlapping sub-trajectories {(t_i)}_i ∈ℐ_b, b=1,…,B with B initial states _1:B:=(_1,…,_B) while imposing a continuity penalty between the sub-trajectories (see Figure <ref>). The index set ℐ_b contains time point indices for the b'th sub-trajectory. We also denote the temporal position of _b as t_[b] and place _b at the first time point preceding the b'th sub-trajectory (except _1, which is placed at t_1). Note that the shooting states _b have the same dimension as the original latent state (t), i.e., _b ∈ℝ^N × d. Multiple shooting allows us to parallelize the simulation over the sub-trajectories and shortens the simulation intervals, thus improving the training speed and stability. In Appendix <ref> we demonstrate the effect of multiple shooting on the model training and prediction accuracy.
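A minimal sketch of this splitting step, purely for illustration (index conventions follow the description above; the function name is not from the paper's code):

def multiple_shooting_blocks(num_steps: int, block_len: int):
    """Split time indices 0..num_steps-1 into consecutive non-overlapping blocks I_b."""
    blocks = [list(range(s, min(s + block_len, num_steps)))
              for s in range(0, num_steps, block_len)]
    init_idx = [0] + [b[0] - 1 for b in blocks[1:]]   # time index t_[b] of each shooting state s_b
    return blocks, init_idx

# Example: 25 observed time points split into sub-trajectories of length 6.
blocks, init_idx = multiple_shooting_blocks(25, 6)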
We begin by defining the prior over the unknown model parameters and initial states:
p(_1:B, θ_dyn, θ_dec) = p(_1:B|θ_dyn)p(θ_dyn)p(θ_dec),
where p(θ_dyn) and p(θ_dec) are zero-mean diagonal Gaussians, and the continuity inducing prior p(_1:B|θ_dyn) is defined as in <cit.>
p(_1:B| θ_dyn)
= p(_1) ∏_b=2^Bp(_b|_b-1, θ_dyn).
Intuitively, the continuity prior p(_b|_b-1, θ_dyn) takes the initial latent state _b-1, simulates it forward from time t_[b-1] to t_[b] to get μ_[b] = ODESolve(t_[b] ; t_[b-1], _b-1, θ_dyn), and then forces μ_[b] to approximately match the initial state _b of the next sub-trajectory,
thus promoting continuity of the full trajectory.
We assume the continuity inducing prior factorizes across the grid points, i.e.,
p(_1:B| θ_dyn)
= [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N p(_b^j|_b-1, θ_dyn)],
= [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N𝒩( _b^j|(t_[b], _j; t_[b-1], _b-1, θ_dyn), σ_c^2I_d )],
where
p(_1^j) is a diagonal Gaussian,
and parameter σ_c^2 controls the strength of the prior. Note that the term (t_[b], _j; t_[b-1], _b-1, θ_dyn) in Equation <ref> equals the ODE forward solution ODESolve(t_[b] ; t_[b-1], _b-1, θ_dyn) at grid location _j.
Finally, we define our generative model in terms of the following sampling procedure:
θ_dyn, θ_dec, _1:B ∼ p(θ_dyn)p(θ_dec) p(_1:B | θ_dyn),
(t_i) = (t_i; t_[b], _b, θ_dyn), b ∈{1, ..., B}, i ∈ℐ_b,
_i^j ∼ p(_i^j | g_θ_dec((t_i, _j)), i = 1, …, M, j=1,…,N,
with the following joint distribution (see Appendix <ref> for details about the model specification.):
p(_1:M, _1:B, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N[ p(_i^j|_b, θ_dyn, θ_dec) ] p(_1:B | θ_dyn) p(θ_dyn) p(θ_dec).
§ PARAMETER INFERENCE
§.§ Amortized variational inference
We approximate the true posterior over the model parameters and initial states p(_1:B, θ_dyn, θ_dec | _1:M) using variational inference <cit.> with the following approximate posterior:
q(θ_dyn, θ_dec, _1:B) = q(θ_dyn) q(θ_dec) q(_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B∏_j=1^Nq_ψ_b^j(_b^j),
where q_ψ_dyn, q_ψ_dec and q_ψ_b^j are diagonal Gaussians, and ψ_dyn, ψ_dec and ψ_b^j are variational parameters. To avoid direct optimization over the local variational parameters ψ_b^j, we use amortized variational inference <cit.> and train an encoder h_θ_enc with parameters θ_enc which maps observations _1:M to ψ_b^j (see Section <ref>). For brevity, we sometimes omit the dependence of approximate posteriors on variational parameters and simply write e.g., q(_b^j).
In variational inference the best approximation of the posterior is obtained by minimizing the Kullback-Leibler divergence:
KL[q(θ_dyn, θ_dec, _1:B) ‖ p(θ_dyn, θ_dec, _1:B|_1:N)],
which is equivalent to maximizing the evidence lower bound (ELBO), defined for our model as:
ℒ = ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N𝔼_q(_b, θ_dyn, θ_dec)[ log p (_i^j | _b, θ_dyn, θ_dec) ] _(i) observation model
-∑_j=1^NKL[ q(_1^j) ‖ p(_1^j) ]_(ii) initial state prior
- ∑_b=2^B∑_j=1^N𝔼_q(θ_dyn, _b-1)[ KL[ q(_b^j) ‖ p(_b^j|_b-1, θ_dyn) ] ]_(iii) continuity prior
-KL[q(θ_dyn) ‖ p(θ_dyn)]_(iv) dynamics prior
-KL[q(θ_dec) ‖ p(θ_dec)]_(v) decoder prior.
The terms (ii), (iv), and (v) are computed analytically, while terms (i) and (iii) are approximated using Monte Carlo integration for expectations, and numerical ODE solvers for initial value problems.
See Appendices <ref> and <ref> for details of the approximate posterior and for the derivation and computation of the ELBO.
§.§ Encoder
Here we describe our encoder which maps observations _1:M to local variational parameters ψ_b^j required to sample the initial latent state of the sub-trajectory b at time point t_[b] and observation location _j. Similarly to our model, the encoder should be data-efficient and grid-independent.
Similarly to our model (Section <ref>), we enable grid-independence by making the encoder operate on spatial interpolants of the observations _1:M (even if they are noisy):
_i() := Interpolate(_i)(), i=1,…,M,
where spatial interpolation is done separately for each time point i. We then use the interpolants _i() to define the spatial neighborhoods 𝒩_S() in a grid-independent manner.
To improve data-efficiency, we assume ψ_b^j does not depend on the whole observed sequence _1:M, but only on some local information in a spatiotemporal neighborhood of t_[b] and _j. We define the temporal neighborhood of t_[b] as
𝒩_T(t_[b]) {k : |t_k - t_[b]| ≤δ_T, k=1,…,M},
where δ_T is a hyperparameter controlling the neighborhood size, and then define the spatiotemporal neighborhood of t_[b] and _j as
[t_[b], _j] := {_k() : k ∈𝒩_T(t_[b]), ∈𝒩_S(_j) }.
Our encoder operates on such spatiotemporal neighborhoods [t_[b], _j] and works in three steps (see Figure <ref>). First, for each time index k ∈𝒩_T(t_[b]) it aggregates the spatial information {_k()}_∈𝒩(_j) into a vector α_k^S. Then, it aggregates the spatial representations α_k^S across time into another vector α_[b]^T which is finally mapped to the variational parameters ψ_b^j as follows:
ψ_b^j = h_θ_enc([t_[b], _j]) = h_read(h_temporal(h_spatial([t_[b], _j]))).
Spatial aggregation. Since the spatial neighborhoods are fixed and remain identical for all spatial locations (see Figure <ref>), we implement the spatial aggregation function h_spatial as an MLP which takes elements of the set {_k()}_∈𝒩_S(_j) stacked in a fixed order as the input.
Temporal aggregation. We implement h_temporal as a stack of transformer layers <cit.> which allows it to operate on input sets of arbitrary size. We use time-aware attention and continuous relative positional encodings <cit.> which were shown to be effective on data from dynamical systems observed at irregular time intervals. Each transformer layer takes a layer-specific input set {ξ_k^in}_k ∈𝒩_T(t_[b]), where ξ_k^in is located at t_k, and maps it to an output set {ξ_k^out}_k ∈𝒩_T(t_[b]), where each ξ_k^out is computed using only the input elements within distance δ_T from t_k, thus promoting temporal locality. Furthermore, instead of using absolute positional encodings the model assumes the behavior of the system does not depend on time and uses relative temporal distances to inject positional information. The first layer takes {α_k^S}_k ∈𝒩_T(t_[b]) as the input, while the last layer returns a single element at time point t_[b], which represents the temporal aggregation α_[b]^T.
Variational parameter readout. Since α_i^T is a fixed-length vector, we implement h_read as an MLP.
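A simplified sketch of this three-stage encoder (spatial aggregation, temporal aggregation, readout). The paper uses MLPs for the spatial aggregation and readout and time-aware attention with continuous relative positional encodings for the temporal aggregation; this sketch collapses the MLPs to single linear layers and substitutes a plain TransformerEncoder, so it only illustrates the data flow, not the exact architecture.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, K: int, D: int, d: int, width: int = 128):
        super().__init__()
        self.h_spatial = nn.Linear(K * D, width)                       # aggregates one spatial neighborhood
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
        self.h_temporal = nn.TransformerEncoder(layer, num_layers=6)   # aggregates over the temporal neighborhood
        self.h_read = nn.Linear(width, 2 * d)                          # -> (gamma, log tau)

    def forward(self, u_nbr):                            # u_nbr: (T_nbr, K, D) observations around (t_[b], x_j)
        a_s = self.h_spatial(u_nbr.flatten(1))           # (T_nbr, width) spatial representations
        a_t = self.h_temporal(a_s.unsqueeze(0))[0, -1]   # representation at t_[b] (here simply the last element)
        gamma, log_tau = self.h_read(a_t).chunk(2, dim=-1)
        return gamma, log_tau.exp()                      # variational mean and std for s_b^j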
§ EXPERIMENTS
We use three challenging datasets: Shallow Water, Navier-Stokes, and Scalar Flow, which contain observations of a spatiotemporal system at N ≈ 1100 grid points evolving over time (see Figure <ref>). The first two datasets are synthetic and generated using numeric PDE solvers (we use scikit-fdiff <cit.> for Shallow Water, and PhiFlow <cit.> for Navier-Stokes), while the third dataset contains real-world observations (camera images) of smoke plumes rising in warm air <cit.>. In all cases the observations are made at irregular spatiotemporal grids and contain only partial information about the true system state. All datasets contain 60/20/20 training/validation/testing trajectories. See Appendix <ref> for details.
We train our model for 20k iterations with constant learning rate of 3e-4 and linear warmup. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). Training is done on a single NVIDIA Tesla V100 GPU, with a single run taking 3-4 hours. We use the mean absolute error (MAE) on the test set as the performance measure. Error bars are standard errors over 4 random seeds. For forecasting we use the expected value of the posterior predictive distribution. See Appendix <ref> for all details about the training, validation, and testing setup.
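A minimal sketch of this optimization setup (Adam with a constant learning rate of 3e-4 and a linear warmup, here over 200 iterations as in Appendix C); the model and loss below are placeholders, not the actual architecture or ELBO.

import torch

model = torch.nn.Linear(8, 8)                         # placeholder for the full latent PDE model
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda it: min(1.0, (it + 1) / 200))

for it in range(20000):
    optimizer.zero_grad()
    loss = model(torch.randn(1, 8)).pow(2).mean()     # placeholder for the negative ELBO
    loss.backward()
    optimizer.step()
    scheduler.step()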
Latent state dimension. Here we show the advantage of using latent-space models on partially observed data. We change the latent state dimension d from 1 to 5 and measure the test MAE. Note that for d=1 we effectively have a data-space model which models the observations without trying to reconstruct the missing states. Figure <ref> shows that in all cases there is an improvement in performance as the latent dimension grows. For Shallow Water and Navier-Stokes the true latent dimension is 3. Since Scalar Flow is a real-world process, there is no true latent dimension. As a benchmark, we provide the performance of our model trained on fully-observed versions of the synthetic datasets (we use the same architecture and hyperparameters, but fix d to 3). Figure <ref> also shows examples of model predictions (at the final time point) for different values of d. We see a huge difference between d=1 and d=3,5. Note how an apparently small difference in MAE between d=1 and d=5 for Scalar Flow corresponds to a dramatic improvement in the prediction quality.
Grid independence. Here we show the grid-independence property of our model by training it on grids with ≈ 1100 observation locations, and then testing on coarser, original, and finer grids. For Shallow Water and Navier-Stokes the coarser/finer grids contain 290/4200 nodes, while for Scalar Flow we have 560/6420 nodes, respectively. Figure <ref> shows the model's performance on different spatial grids. A performance drop on coarse grids is expected, since we get less accurate information about the system's initial state and simulate the dynamics on a coarser grid. Figure <ref> also shows examples of model predictions (at the final time point) for different grid sizes.
Comparison to other models.
Here we compare our model with two recent models from the literature: MAgNet <cit.> and DINo <cit.>. Similarly to our model, these models also produce space-time continuous predictions: MAgNet uses neural network-based interpolation and Euler time discretization, while DINo uses an implicit neural representation-based decoder and continuous-time dynamics. These two methods also use an encoder that takes a history of observations and maps them to an initial state in the latent space, where the latent dynamics are learned and the latent state is mapped to the observation space via a decoder (we use the non-Markovian version of DINo). We use the official implementations of both models and tune the hyperparameters for the best performance. For Shallow Water and Navier-Stokes we use a history size of 5 and predict the next 20 steps, while for Scalar Flow the history size is 10 and we predict the next 10 steps. See Appendix <ref> for hyperparameter details. The results are shown in Table <ref>, and the model predictions are shown in Figure <ref>. Our model shows the best performance, achieving very accurate predictions on the synthetic data, and also shows the capacity for modeling real-world data, managing to predict the smoke speed, direction, and even the smoke separation. In Figure <ref> we also test the data efficiency of the models and show that our model requires much less data to converge to its lowest error. In Appendix <ref> we further demonstrate our model's capability to learn dynamics from noisy data.

Table: Test MAE for different models.

         Shallow Water    Navier-Stokes    Scalar Flow
MAgNet   0.061 ± 0.001    0.103 ± 0.003    0.056 ± 0.003
DINo     0.063 ± 0.003    0.113 ± 0.002    0.059 ± 0.001
Ours     0.016 ± 0.002    0.041 ± 0.003    0.042 ± 0.001
§ RELATED WORK
Closest to our work is <cit.>, where they considered the problem of learning PDEs from partial observations and proposed a discrete and grid-dependent model that is restricted to regular spatiotemporal grids. Another related work is that of <cit.>, where they proposed a variational inference framework for learning ODEs from noisy and partially-observed data. However, they consider only low-dimensional ODEs and are restricted to regular grids.
Other works considered learning the latent space PDE dynamics using the “encode-process-decode” approach. <cit.> use GNN-based encoder and dynamics function and map the observations to the same spatial grid in the latent space and learn the latent space dynamics. <cit.> use a similar approach but with CNNs and map the observations to a coarser latent grid and learn the coarse-scale dynamics. <cit.> use CNNs to map observations to a low-dimensional latent vector and learn the latent dynamics. However, all these approaches are grid-dependent, limited to regular spatial/temporal grids, and require fully-observed data.
Interpolation has been used in numerous studies for various applications. Works such as <cit.> use interpolation to map latent states on coarse grids to observations on finer grids. <cit.> used interpolation as a post-processing step to obtain continuous predictions, while <cit.> used it to recover observations at missing nodes.
§ CONCLUSION
We proposed a novel space-time continuous, grid-independent model for learning PDE dynamics from noisy and partial observations on irregular spatiotemporal grids. Our contributions include an efficient generative modeling framework, a novel latent PDE model merging collocation and method of lines, and a data-efficient, grid-independent encoder design. The model demonstrates state-of-the-art performance on complex datasets, highlighting its potential for advancing data-driven PDE modeling and enabling accurate predictions of spatiotemporal phenomena in diverse fields. However, our model and encoder operate on every spatial and temporal location, which might not be the most efficient approach and hinders scaling to extremely large grids; hence, research into more efficient latent state extraction and dynamics modeling methods is needed.
§ APPENDIX A
§.§ Model specification.
Here we provide all details about our model specification. The joint distribution for our model is
p(_1:M, _1:B, θ_dyn, θ_dec) = p(_1:N|_1:B, θ_dyn, θ_dec) p(_1:B | θ_dyn) p(θ_dyn) p(θ_dec).
Next, we specify each component in detail.
Parameter priors. The parameter priors are isotropic zero-mean multivariate normal distributions:
p(θ_dyn) = 𝒩(θ_dyn | 0, I),
p(θ_dec) = 𝒩(θ_dec | 0, I),
where 𝒩 is the normal distribution, 0 is a zero vector, and I is the identity matrix, both have an appropriate dimensionality dependent on the number of encoder and dynamics parameters.
Continuity prior. We define the continuity prior as
p(_1:B| θ_dyn)
= p(_1) ∏_b=2^Bp(_b|_b-1, θ_dyn),
= [ ∏_j=1^N p(_1^j) ] [ ∏_b=2^B∏_j=1^N p(_b^j|_b-1, θ_dyn)],
= [ ∏_j=1^N𝒩(_1^j | 0, I) ] [ ∏_b=2^B∏_j=1^N𝒩( _b^j|(t_[b], _j; t_[b-1], _b-1, θ_dyn), σ_c^2I ).],
where 𝒩 is the normal distribution, 0∈ℝ^d is a zero vector, I ∈ℝ^d × d is the identity matrix, and σ_c ∈ℝ is the parameter controlling the strength of the prior. Smaller values of σ_c tend to produce smaller gaps between the sub-trajectories.
Observation model
p(_1:N|_1:B, θ_dyn, θ_dec) = ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N p(_i^j|_b, θ_dyn, θ_dec)
= ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^Np(_i^j | g_θ_dec((t_i, _j; t_[b], _b, θ_dyn)))
= ∏_b=1^B∏_i ∈ℐ_b^∏_j=1^N𝒩(_i^j | g_θ_dec((t_i, _j; t_[b], _b, θ_dyn)), σ_u^2 I),
where 𝒩 is the normal distribution, σ_u^2 is the observation noise variance, and I ∈ℝ^D × D is the identity matrix. Note again that (t_i, _j; t_[b], _b, θ_dyn) above equals the ODE forward solution ODESolve(t_i ; t_[b], _b, θ_dyn) at grid location _j.
§.§ Approximate posterior specification.
Here we provide all details about the approximate posterior. We define the approximate posterior as
q(θ_dyn, θ_dec, _1:B) = q(θ_dyn) q(θ_dec) q(_1:B) = q_ψ_dyn(θ_dyn) q_ψ_dec(θ_dec) ∏_b=1^B∏_j=1^Nq_ψ_b^j(_b^j).
Next, we specify each component in detail.
Dynamics parameters posterior. We define q_ψ_dyn(θ_dyn) as
q_ψ_dyn(θ_dyn) = 𝒩(θ_dyn | γ_dyn, diag (τ_dyn^2)),
where γ_dyn and τ_dyn^2 are vectors with an appropriate dimension (dependent on the number of dynamics parameters), and diag (τ_dyn^2) is a matrix with τ_dyn^2 on the diagonal. We define the vector of variational parameters as ψ_dyn = (γ_dyn, τ_dyn^2). We optimize directly over ψ_dyn and initialize γ_dyn using Xavier <cit.> initialization, while τ_dyn is initialized with each element equal to 9 · 10^-4.
Decoder parameters posterior. We define q_ψ_dec(θ_dec) as
q_ψ_dec(θ_dec) = 𝒩(θ_dec | γ_dec, diag (τ_dec^2)),
where γ_dec and τ_dec^2 are vectors with an appropriate dimension (dependent on the number of decoder parameters), and diag (τ_dec^2) is a matrix with τ_dec^2 on the diagonal. We define the vector of variational parameters as ψ_dec = (γ_dec, τ_dec^2). We optimize directly over ψ_dec and initialize γ_dec using Xavier <cit.> initialization, while τ_dec is initialized with each element equal to 9 · 10^-4.
Shooting variables posterior. We define q_ψ_b^j(_b^j) as
q_ψ_b^j(_b^j) = 𝒩(_b^j | γ_b^j, diag ([τ_b^j]^2))),
where the vectors γ_b^j, τ_b^j ∈ℝ^d are returned by the encoder h_θ_enc, and diag ([τ_b^j]^2) is a matrix with [τ_b^j]^2 on the diagonal. We define the vector of variational parameters as ψ_b^j = (γ_b^j, [τ_b^j]). Because the variational inference for the shooting variables is amortized, our model is trained w.r.t. the parameters of the encoder network, θ_enc.
§ APPENDIX B
§.§ Derivation of ELBO.
For our model and the choice of the approximate posterior the ELBO can be written as
ℒ = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M, _1:B, θ_dyn, θ_dec)/q(θ_dyn, θ_dec, _1:B)dθ_dyn dθ_dec d_1:B
= ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M|_1:B, θ_dyn, θ_dec)p(_1:B|θ_dyn)p(θ_dyn)p(θ_dec)/q(_1:B)q(θ_dyn)q(θ_dec)dθ_dyn dθ_dec d_1:B
= ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M | _1:B, θ_dyn, θ_dec)dθ_dyn dθ_dec d_1:B
- ∫q(θ_dyn, θ_dec, _1:B) lnq(_1:B)/p(_1:B | θ_dyn)dθ_dyn dθ_dec d_1:B
- ∫q(θ_dyn, θ_dec, _1:B) lnq(θ_dyn)/p(θ_dyn)dθ_dyn dθ_dec d_1:B
- ∫q(θ_dec, θ_dec, _1:B) lnq(θ_dec)/p(θ_dec)dθ_dyn dθ_dec d_1:B
= ℒ_1 - ℒ_2 - ℒ_3 - ℒ_4.
Next, we will look at each term ℒ_i separately.
ℒ_1 = ∫q(θ_dyn, θ_dec, _1:B) lnp(_1:M | _1:B, θ_dyn, θ_dec)dθ_dyn dθ_dec d_1:B
= ∫q(θ_dyn, θ_dec, _1:B) ln[∏_b=1^B∏_i ∈ℐ_b∏_j=1^Np(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_1:B
= ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N∫q(θ_dyn, θ_dec, _1:B) ln[p(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_1:B
= ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N∫q(θ_dyn, θ_dec, _b) ln[p(_i^j | _b, θ_dyn, θ_dec)]dθ_dyn dθ_dec d_b
= ∑_b=1^B∑_i ∈ℐ_b∑_j=1^N𝔼_q(θ_dyn, θ_dec, _b)ln[p(_i^j | _b, θ_dyn, θ_dec)].
ℒ_2 = ∫q(θ_dyn, θ_dec, _1:B) lnq(_1:B)/p(_1:B | θ_dyn)dθ_dyndθ_dec d_1:B
= ∫q(θ_dyn, θ_dec, _1:B) ln[q(_1)/p(_1)∏_b=2^Bq(_b)/p(_b|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B
= ∫q(θ_dyn, θ_dec, _1:B) ln[∏_j=1^Nq(_1^j)/p(_1^j)]dθ_dyndθ_dec d_1:B
+ ∫q(θ_dyn, θ_dec, _1:B) ln[∏_b=2^B∏_j=1^Nq(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B
= ∑_j=1^N∫q(θ_dyn, θ_dec, _1:B) ln[q(_1^j)/p(_1^j)]dθ_dyndθ_dec d_1:B
+ ∑_b=2^B∫q(θ_dyn, θ_dec, _1:B) ∑_j=1^Nln[q(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyndθ_dec d_1:B
= ∑_j=1^N∫q(_1^j) ln[q(_1^j)/p(_1^j)]d_1^j
+ ∑_b=2^B∫q(θ_dyn, _b-1, _b) ∑_j=1^Nln[q(_b^j)/p(_b^j|_b-1, θ_dyn)]dθ_dyn d_b-1 d_b
= ∑_j=1^N∫q(_1^j) ln[q(_1^j)/p(_1^j)]d_1^j
+ ∑_b=2^B∫q(θ_dyn, _b-1) ∑_j=1^N[ ∫ q(_b^j) lnq(_b^j)/p(_b^j|_b-1, θ_dyn)d_b^j]dθ_dyn d_b-1
= ∑_j=1^NKL( q(_1^j) ‖ p(_1^j) ) + ∑_b=2^B𝔼_q(θ_dyn, _b-1)[ ∑_j=1^NKL( q(_b^j) ‖ p(_b^j|_b-1, θ_dyn) ) ],
where KL is Kullback–Leibler (KL) divergence. Both of the KL divergences above have a closed form but the expectation w.r.t. q(θ_dyn, _b-1) does not.
ℒ_3 = KL(q(θ_dyn) ‖ p(θ_dyn)), ℒ_4 = KL(q(θ_dec) ‖ p(θ_dec)).
§.§ Computation of ELBO.
We compute the ELBO using the following algorithm:
* Sample θ_dyn, θ_dec from q_ψ_dyn(θ_dyn), q_ψ_dec(θ_dec).
* Sample _1:B by sampling each _b^j from q_ψ_b^j(_b^j) with ψ_b^j = h_θ_enc([t_[b], _j]).
* Compute _1:M from _1:B as in Equations <ref>-<ref>.
* Compute ELBO ℒ (KL terms are computed in closed form, for expectations we use Monte Carlo integration with one sample).
Sampling is done using reparametrization to allow unbiased gradients w.r.t. the model parameters.
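A structural sketch of this estimator with one reparametrized Monte Carlo sample; simulate and decode stand in for the ODE solver and the observation function and are placeholders rather than the authors' API.

import torch
from torch.distributions import Normal, kl_divergence

def elbo(q_dyn, q_dec, q_s, p_dyn, p_dec, simulate, decode, y, sigma_u, sigma_c):
    theta_dyn, theta_dec = q_dyn.rsample(), q_dec.rsample()     # step 1: sample parameters
    s = q_s.rsample()                                           # step 2: shooting states, (B, N, d)
    z, z_end = simulate(s, theta_dyn)                           # step 3: latent states and block endpoints
    log_lik = Normal(decode(z, theta_dec), sigma_u).log_prob(y).sum()           # term (i)
    kl_s = kl_divergence(Normal(q_s.mean[0], q_s.stddev[0]),
                         Normal(torch.zeros_like(s[0]), 1.0)).sum()             # term (ii)
    kl_s += kl_divergence(Normal(q_s.mean[1:], q_s.stddev[1:]),
                          Normal(z_end, sigma_c)).sum()                         # term (iii), MC estimate
    kl_par = kl_divergence(q_dyn, p_dyn).sum() + kl_divergence(q_dec, p_dec).sum()  # terms (iv) and (v)
    return log_lik - kl_s - kl_par                              # step 4: the ELBO estimate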
§ APPENDIX C
§.§ Datasets.
Shallow Water. The shallow water equations are a system of partial differential equations (PDEs) that simulate the behavior of water in a shallow basin. These equations are effectively a depth-integrated version of the Navier-Stokes equations, assuming the horizontal length scale is significantly larger than the vertical length scale. Given these assumptions, they provide a model for water dynamics in a basin or similar environment, and are commonly utilized in predicting the propagation of water waves, tides, tsunamis, and coastal currents. The state of the system modeled by these equations consists of the wave height h(t, x, y), velocity in the x-direction u(t, x, y) and velocity in the y-direction v(t, x, y). Given an initial state (h_0, u_0, v_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The shallow water equations are defined as:
∂ h/∂ t + ∂ (hu)/∂ x + ∂ (hv)/∂ y = 0,
∂ u/∂ t + u∂ u/∂ x + v∂ u/∂ y + g∂ h/∂ x = 0,
∂ v/∂ t + u∂ v/∂ x + v∂ v/∂ y + g∂ h/∂ y = 0,
where g is the gravitational constant.
We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=0.1. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial and temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the wave height h(t,x,y).
For each trajectory, we start with zero initial velocities and the initial height h_0(x,y) generated as:
h̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)),
h_0(x, y) = 1 + h̃_0(x, y) - min(h̃_0)/max(h̃_0) - min(h̃_0),
where N = 3 and λ_kl, γ_kl∼𝒩(0, 1).
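A minimal numpy sketch of this initial-condition generator (N = 3 and λ_kl, γ_kl ~ N(0, 1)); variable names are illustrative.

import numpy as np

def initial_height(grid_x, grid_y, N=3, rng=np.random.default_rng()):
    h = np.zeros_like(grid_x)
    for k in range(-N, N + 1):
        for l in range(-N, N + 1):
            lam, gam = rng.standard_normal(2)
            phase = 2 * np.pi * (k * grid_x + l * grid_y)
            h += lam * np.cos(phase) + gam * np.sin(phase)
    return 1.0 + (h - h.min()) / (h.max() - h.min())   # rescale to the interval [1, 2]

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
h0 = initial_height(x, y)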
The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively.
We use scikit-fdiff <cit.> to solve the PDEs.
Navier-Stokes. For this dataset we model the propagation of a scalar field (e.g., smoke concentration) in a fluid (e.g., air). The modeling is done by coupling the Navier-Stokes equations with the Boussinesq buoyancy term and the transport equation to model the propagation of the scalar field. The state of the system modeled by these equations consists of the scalar field c(t,x,y), velocity in x-direction u(t,x,y), velocity in y-direction v(t,x,y), and pressure p(t,x,y). Given an initial state (c_0, u_0, v_0, p_0), we solve the PDEs on a spatial domain Ω over time interval [0, T]. The Navier-Stokes equations with the transport equation are defined as:
∂ u/∂ x + ∂ v/∂ y = 0,
∂ u/∂ t + u ∂ u/∂ x + v ∂ u/∂ y = - ∂ p/∂ x + ν( ∂^2 u/∂ x^2 + ∂^2 u/∂ y^2),
∂ v/∂ t + u ∂ v/∂ x + v ∂ v/∂ y = - ∂ p/∂ y + ν( ∂^2 v/∂ x^2 + ∂^2 v/∂ y^2) + c,
∂ c/∂ t = - u ∂ c/∂ x - v ∂ c/∂ y + ν( ∂^2 c/∂ x^2 + ∂^2 c/∂ y^2),
where ν = 0.002.
We set the spatial domain Ω to be a unit square and use periodic boundary conditions. We set T=2.0, but drop the first 0.5 seconds due to slow dynamics during this time period. The solution is evaluated at randomly selected spatial locations and time points. We use 1089 spatial locations and 25 time points. The spatial and temporal grids are the same for all trajectories. Since we are dealing with partially-observed cases, we assume that we observe only the scalar field c(t,x,y).
For each trajectory, we start with zero initial velocities and pressure, and the initial scalar field c_0(x,y) is generated as:
c̃_0(x, y) = ∑_k,l = -N^Nλ_klcos(2π (kx+ly)) + γ_klsin(2π (kx+ly)),
c_0(x, y) = c̃_0(x, y) - min(c̃_0)/max(c̃_0) - min(c̃_0),
where N = 2 and λ_kl, γ_kl∼𝒩(0, 1).
The datasets used for training, validation, and testing contain 60, 20, and 20 trajectories, respectively.
We use PhiFlow <cit.> to solve the PDEs.
Scalar Flow.
[Figure: Spatial grid used for the Scalar Flow dataset.]
This dataset, proposed by <cit.>, consists of observations of smoke plumes rising in hot air. The observations are post-processed camera images of the smoke plumes taken from multiple views. For simplicity, we use only the front view. The dataset contains 104 trajectories, where each trajectory has 150 time points and each image has the resolution 1080 × 1920.
To reduce dimensionality of the observations we sub-sample the original spatial and temporal grids. For the temporal grid, we remove the first 50 time points, which leaves 100 time points, and then take every 4th time point, thus leaving 20 time points in total. The original 1080 × 1920 spatial grid is first down-sampled by a factor of 9 giving a new grid with resolution 120 × 213, and then the new grid is further sub-sampled based on the smoke density at each node. In particular, we compute the average smoke density at each node (averaged over time), and then sample the nodes without replacement with the probability proportional to the average smoke density (thus, nodes that have zero density most of the time are not selected). See example of a final grid in Figure <ref>. This gives a new grid with 1089 nodes.
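A sketch of this density-based sub-sampling step (nodes drawn without replacement with probability proportional to the time-averaged smoke density); names are illustrative, not from the paper's code.

import numpy as np

def subsample_grid(density, n_nodes=1089, rng=np.random.default_rng()):
    """density: (T, H, W) smoke observations; returns flat indices of the kept grid nodes."""
    avg = density.mean(axis=0).ravel()          # time-averaged density per node
    probs = avg / avg.sum()
    return rng.choice(avg.size, size=n_nodes, replace=False, p=probs)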
We further smooth the observations by applying Gaussian smoothing with the standard deviation of 1.5 (assuming domain size 120 × 213).
We use the first 60 trajectories for training, next 20 for validation and next 20 for testing.
§.§ Model architecture and hyper-parameters.
Dynamics function. For all datasets we define F_θ_dyn as an MLP. For Shallow Water/Navier-Stokes/Scalar Flow we use 1/3/3 hidden layers with the size of 1024/512/512, respectively. We use ReLU nonlinearities.
Observation function. For all datasets we define g_θ_dec as a selector function which takes the latent state (t, x) ∈ℝ^d and returns its first component.
Encoder. Our encoder h_θ_enc consists of three function: h_θ_spatial, h_θ_temporal, and h_θ_read. The spatial aggregation function h_θ_spatial is a linear mapping to ℝ^128. The temporal aggregation function h_θ_temporal is a stack of transformer layers with temporal attention and continuous relative positional encodings <cit.>. For all datasets, we set the number of transformer layers to 6. Finally, the variational parameter readout function h_θ_read is a mapping defined as
ψ_b^j = h_θ_read(α_[b]^T) =
[ γ_b^j; τ_b^j ]=
[ Linear(α_[b]^T); exp(Linear(α_[b]^T)) ],
where Linear is a linear layer (different for each line), and γ_b^j and τ_b^j are the variational parameters discussed in Appendix A.
Spatial and temporal neighborhoods. We use the same spatial neighborhoods 𝒩_S() for both the encoder and the dynamics function. We define 𝒩_S() as the set of points consisting of the point and points on two concentric circles centered at , with radii r and r/2, respectively. Each circle contains 8 points spaced 45 degrees apart (see Figure <ref> (right)). The radius r is set to 0.1. For Shallow Water/Navier-Stokes/Scalar Flow the size of temporal neighborhood (δ_T) is set to 0.1/0.1/0.2, respectively.
Multiple Shooting. For Shallow Water/Navier-Stokes/Scalar Flow we split the full training trajectories into 4/4/19 sub-trajectories, or, equivalently, have the sub-trajectory length of 6/6/2.
§.§ Training, validation, and testing setup.
Data preprocessing.
We scale the temporal grids, spatial grids, and observations to be within the interval [0, 1].
Training. We train our model for 20000 iterations using Adam <cit.> optimizer with constant learning rate 3e-4 and linear warmup for 200 iterations. The latent spatiotemporal dynamics are simulated using differentiable ODE solvers from the torchdiffeq package <cit.> (we use dopri5 with rtol=1e-3, atol=1e-4, no adjoint). The batch size is 1.
Validation. We use the validation set to track the performance of our model during training and save the parameters that produce the best validation performance. As a performance measure we use the mean absolute error of predicting the full validation trajectories given some number of initial observations. For Shallow Water/Navier-Stokes/Scalar Flow we use the first 5/5/10 observations. The predictions are made by taking one sample from the posterior predictive distribution (see Appendix C.4 for details).
Testing. Testing is done similarly to validation, except that as the prediction we use an estimate of the expected value of the posterior predictive distribution (see Appendix C.4 for details).
§.§ Forecasting.
Given initial observations _1:m at time points t_1:m, we predict the future observation _n at a time point t_n > t_m as the expected value of the approximate posterior predictive distribution:
p(_n | _1:m, _1:M) ≈∫ p(_n | _m, θ_dyn, θ_dec) q(_m) q(θ_dyn) q(θ_dec) d_m dθ_dyn dθ_dec.
The expected value is estimated via Monte Carlo integration, so the algorithm for predicting _n is:
* Sample θ_dyn, θ_dec from q(θ_dyn), q(θ_dec).
* Sample _m from q(_m) = ∏_j=1^Nq_ψ_m^j(_m^j), where the variational parameters ψ_m^j are given by the encoder h_θ_enc operating on the initial observations _1:m as ψ_m^j = h_θ_enc([t_m, _j]).
* Compute the latent state (t_n) = (t_n; t_m, _m, θ_dyn).
* Sample _n by sampling each _n^j from 𝒩(_n^j | g_θ_dec((t_n, _j))), σ_u^2 I).
* Repeat steps 1-4 n times and average the predictions (we use n=10).
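A minimal sketch of steps 1-5 above; encoder, odesolve, and decode are placeholders for the trained model components rather than the actual API.

import torch

def forecast(encoder, odesolve, decode, u_init, t_m, t_n, n_samples=10):
    preds = []
    for _ in range(n_samples):
        gamma, tau = encoder(u_init)                   # variational parameters of q(s_m)
        s_m = gamma + tau * torch.randn_like(tau)      # reparametrized sample of the latent state at t_m
        z_n = odesolve(s_m, t_m, t_n)                  # simulate the latent dynamics to t_n
        preds.append(decode(z_n))                      # map to the observation space
    return torch.stack(preds).mean(0)                  # estimate of the posterior predictive mean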
§.§ Model comparison setup.
DINo. We use the official implementation of DINo <cit.>. The encoder is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The code dimension is 100. The dynamics function is an MLP with 3 hidden layers, 512 neurons each, and Swish non-linearities. The decoder has 3 layers and 64 channels.
MAgNet. We use the official implementation of MAgNet <cit.>. We use the graph neural network variant of the model. The number of message-passing steps is 5. All MLPs have 4 layers with 128 neurons each in each layer. The latent state dimension is 128.
§ APPENDIX D
§.§ Spatiotemporal neighborhood shapes and sizes.
Here we investigate the effect of changing the shape and size of spatial and temporal neighborhoods used by the encoder and dynamics functions. We use the default hyperparameters discussed in Appendix C and change only the neighborhood shape or size. A neighborhood size of zero implies no spatial/temporal aggregation.
Initially, we use the original circular neighborhood displayed in Figure <ref> for both encoder and dynamics function and change only its size (radius). The results are presented in Figures <ref> and <ref>. In Figure <ref>, it is surprising to see very little effect from changing the encoder's spatial neighborhood size. A potential explanation is that the dynamics function shares the spatial aggregation task with the encoder. However, the results in Figure <ref> are more intuitive, displaying a U-shaped curve for the test MAE, indicating the importance of using spatial neighborhoods of appropriate size. Interestingly, the best results tend to be achieved with relatively large neighborhood sizes. Similarly, Figure <ref> shows U-shaped curves for the encoder's temporal neighborhood size, suggesting that latent state inference benefits from utilizing local temporal information.
We then examine the effect of changing the shape of the dynamics function's spatial neighborhood. We use n-circle neighborhoods, which consist of n equidistant concentric circular neighborhoods (see examples in Figure <ref>). Effectively, we maintain a fixed neighborhood size while altering its density. The results can be seen in Figure <ref>. We find that performance does not significantly improve when using denser (and presumably more informative) spatial neighborhoods, indicating that accurate predictions only require a relatively sparse neighborhood of appropriate size.
§.§ Multiple shooting.
Here we demonstrate the effect of using multiple shooting for model training. In Figure <ref> (left), we vary the sub-trajectory length (longer sub-trajectories imply more difficult training) and plot the test errors for each sub-trajectory length. We observe that in all cases, the best results are achieved when the sub-trajectory length is considerably smaller than the full trajectory length. In Figure <ref> (right) we further show the training times; as can be seen, multiple shooting noticeably reduces the training time.
§ APPENDIX E
Noisy Data. Here we show the effect of observation noise on our model and compare the results against other models. We train all models with data noise of various strengths, and then compute test MAE on noiseless data (we still use noisy data to infer the initial state at test time). Figure <ref> shows that our model can manage noise strength up to 0.1 without significant drops in performance. Note that all observations are in the range [0, 1].
|
http://arxiv.org/abs/2307.07450v1 | 20230714161221 | Control landscape of measurement-assisted transition probability for a three-level quantum system with dynamical symmetry | [
"Maria Elovenkova",
"Alexander Pechen"
] | quant-ph | [
"quant-ph",
"81Q93"
] | |
http://arxiv.org/abs/2307.06181v1 | 20230712141219 | B-CLEAN-SC: CLEAN-SC for broadband sources | [
"Armin Goudarzi"
] | cs.SD | [
"cs.SD",
"eess.AS",
"physics.flu-dyn"
] |
B-CLEAN-SC: CLEAN-SC for broadband sources
[email protected]
German Aerospace Center (DLR), 37073 Göttingen, Germany
This paper presents B-CLEAN-SC, a variation of CLEAN-SC for broadband sources. As opposed to CLEAN-SC, which “deconvolves” the beamforming map for each frequency individually, B-CLEAN-SC processes frequency intervals. Instead of performing a deconvolution iteration at the location of the maximum level, B-CLEAN-SC performs it at the location of the over-frequency-averaged maximum to improve the location estimation. The method is validated and compared to standard CLEAN-SC on synthetic cases and real-world experiments, for broad- and narrowband sources. It improves the source reconstruction at low and high frequencies and suppresses noise, while it only increases the memory requirements, not the computational effort.
Armin Goudarzi
August 12, 2023
§ INTRODUCTION
Conventional beamforming is a well-established tool to identify and quantify sound sources on complex objects, such as cars, trains, and planes <cit.>. Naive methods estimate the sound power by virtually steering the Cross Spectral Matrix (CSM) to different focus points to obtain an independent estimation for each focus point. The resulting beamforming map is convolved with the array's Point Spread Function (PSF), which limits the resolution at low frequencies by the array's aperture and at high frequencies by aliasing that results from the discrete microphone spacing. More advanced methods exist, such as gridless methods <cit.>. However, they are computationally expensive and often only proven to work on academic examples.
There exist a variety of “deconvolution” methods that aim to reconstruct the true source distribution from the so-called dirty beamforming maps. While advanced source reconstruction methods such as DAMAS <cit.> exist, CLEAN-SC <cit.> is the gold standard in industrial environments <cit.>, because it is extremely fast and robust. CLEAN-SC solves the deconvolution iteratively at each individual frequency. It assumes a dominant source so that the dirty map is dominated by its PSF. It then estimates that the source is located at the location of maximum Power Spectral Density (PSD) in the map and measures the coherence between this location and all other locations. It then subtracts the source from the CSM and the dirty map. It then repeats the process to find the second source, and so on, until a stopping criterion is met. This process works extremely well at medium frequencies, where the PSF shows pronounced main-lobes and low side-lobes. At very low frequencies (compared to the array's aperture) the PSF of two adjacent sources will overlap and form a single blob in the dirty map. Thus, the maximum of the dirty map is no longer located at a true source position, but between multiple source positions. At these low frequencies, CLEAN-SC fails to identify the true sources and reconstructs the PSD incorrectly. At very high frequencies the focus grid can often no longer resolve the main-lobe. Additionally, grating-lobes are present in the dirty map which are of the same magnitude as the main-lobe. Thus, the maximum is often positioned at a grating-lobe, which results in very noisy CLEAN-SC maps at these high frequencies. The improved algorithm HR-CLEAN-SC <cit.> aims to solve the low-frequency issues of CLEAN-SC; it requires an initial CLEAN-SC solution and an additional iteration to obtain a solution. The spatial resolution of HR-CLEAN-SC is approximately doubled compared to CLEAN-SC, but less so if diagonal removal is applied.
Recently, a variation of the gridless CSM-fitting method Global Optimization (GO) was introduced for broadband sources <cit.>, based on the observation that sources typically have a constant location over frequency <cit.>. Broadband GO showed that introducing the condition of a shared location over frequency smoothes out local minima in the optimization cost function, which are caused by the side- and grating lobes of the array's PSF. While the results were superior compared to CLEAN-SC and standard GO, the computational effort makes the method currently not suitable for industry applications <cit.>.
This paper introduces Broadband-CLEAN-SC (B-CLEAN-SC), which aims to relax the problems of CLEAN-SC at high and low frequencies by adapting the idea of broadband GO: the processing of multiple frequencies at once, so that the side-lobes cancel out and true source positions can be identified. This is done by introducing a simple change to the CLEAN-SC algorithm: instead of processing each frequency individually, B-CLEAN-SC processes frequency intervals at once (but still obtains smallband solutions). Here, the only difference lies in the determination of the location from which the source power is sampled. B-CLEAN-SC averages the dirty maps over the frequency interval and uses the location of the maximum averaged source power. It then performs a standard CLEAN-SC iteration for each of the frequencies in the interval with individual source powers per frequency but at the shared location. Thus, the reconstruction at lower frequencies benefits from the resolution at higher frequencies, and the averaging of side- and grating lobes stabilizes the process at very high frequencies.
§ METHODOLOGY
This Section presents the standard CLEAN-SC algorithm, and the proposed B-CLEAN-SC algorithm.
§.§ Standard CLEAN-SC
CLEAN-SC is based on the idea that the coherence Γ_jk^2 between an arbitrary focus point 𝐱_k and all other focus points 𝐱_j can be estimated by steering the CSM to the focus points with
Γ_jk^2 = |𝐰^*_j 𝐂𝐰_k|^2/(𝐰^*_j 𝐂𝐰_j)(𝐰^*_k 𝐂𝐰_k) = |A_jk|^2/A_jjA_kk ,
where 𝐰 is an arbitrary steering vector <cit.>. Removing the coherent parts of a source removes the PSF (but also distributed sources) from the map. This is performed iteratively with the Algorithm <ref>, where n is the current iteration, for a maximum number of N iterations, or until a stopping criterion is met <cit.>. f∈𝐟 is the current frequency, A is the conventional beamforming result for the steering vector w, and 𝐱 is a list of all focus points. 𝐂 is the dirty CSM, 𝐆 is the CSM of the iteratively identified source, and 𝐐 is the final CLEAN-SC estimation of the “deconvolved” map. For stability, a loop gain 0< α≤1 is used. For convenience the algorithm is described without Diagonal Removal (DR) <cit.>, since it only adds an identical step to both CLEAN-SC and B-CLEAN-SC.
§.§ B-CLEAN-SC
The B-CLEAN-SC algorithm is nearly identical to the CLEAN-SC algorithm when CLEAN-SC is performed for all frequencies in parallel, with the exception that B-CLEAN-SC performs each iteration n at a shared location 𝐱_k for all frequencies (within the processed interval 𝐟). To determine the location, instead of using the maximum of the dirty map 𝐀_jj(f) separately for each frequency, the maximum of the frequency-averaged dirty map is used
k = argmax_j(⟨𝐀_ijj/max_j(𝐀_ijj^0)⟩_i) .
Here, 𝐀_ijj^0 denotes the original dirty map prior to subtractions, i denotes the index of the frequency f_i∈𝐟, and j denotes the index of the focus point 𝐱_j. The subscript of the average operator ⟨…⟩ or the maximum argument operator indicates the dimension over which they are applied. 𝐀_ijj^0 is an estimation of the frequency-dependent amplitude of the overall source power (which typically decreases over frequency for aeroacoustic sources). The normalization by its maximum compensates for this behavior. Eq. <ref> is the only addition to the CLEAN-SC algorithm to obtain B-CLEAN-SC, see Algorithm <ref>. The algorithm is given for a frequency interval 𝐟; if the frequency interval does not cover the full frequency range, B-CLEAN-SC can be performed sequentially for multiple intervals.
Note that the position 𝐱_k is not necessarily located on the main lobe of a dominant source for all frequencies if the sources have a strong frequency-dependent power. Especially at low frequencies, where the PSF of a dominant source may cover all other sources and dominate the estimated power at their true positions, this would lead to an overestimation of their power, and a subtraction of the main-lobe, when subtracting coherent portions of the map <cit.>. To relax this issue, a low gain factor α is needed, so that the number of necessary B-CLEAN-SC iterations increases. Since only the initial calculation of the dirty map is computationally expensive, the extra iterations are not performance relevant.
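A simplified numpy sketch of the resulting loop (without diagonal removal and with an illustrative fixed iteration count); it is a reading aid under simplifying assumptions about the steering-vector convention, not the reference implementation. C holds the cross-spectral matrices per frequency and w the steering vectors for all focus points.

import numpy as np

def b_clean_sc(C, w, n_iter=30, alpha=0.1):
    """C: (F, M, M) CSMs; w: (F, J, M) steering vectors; returns deconvolved maps Q of shape (F, J)."""
    C = C.copy()
    F, J, M = w.shape
    Q = np.zeros((F, J))
    A = np.einsum('fjm,fmn,fjn->fj', w.conj(), C, w).real    # dirty maps A_ijj
    A0_max = A.max(axis=1, keepdims=True)                    # per-frequency normalization max_j(A_ijj^0)
    for _ in range(n_iter):
        k = np.argmax((A / A0_max).mean(axis=0))             # shared location: argmax of the normalized, frequency-averaged map
        for f in range(F):
            a_kk = A[f, k]
            if a_kk <= 0:
                continue
            h = C[f] @ w[f, k] / a_kk                        # source component (coherence) vector
            Q[f, k] += alpha * a_kk                          # accumulate deconvolved power per frequency
            C[f] = C[f] - alpha * a_kk * np.outer(h, h.conj())   # subtract the coherent source contribution
        A = np.einsum('fjm,fmn,fjn->fj', w.conj(), C, w).real    # update the dirty maps
    return Q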
§ RESULTS
This section presents the results of four different cases. Section <ref> presents two synthetic examples that aim to clarify the behavior of CLEAN-SC and B-CLEAN-SC. Section <ref> presents a real experiment with ground truth, so that the methods can be evaluated quantitatively. Last, Section <ref> presents a real-world wind tunnel experiment without ground truth, based on which the methods are evaluated qualitatively. Throughout this section, CLEAN-SC will be performed with diagonal removal, a maximum of 3N_S iterations per frequency where N_S is the number of true sources, and a gain factor of α=0.9 per iteration. B-CLEAN-SC will be performed with diagonal removal, a maximum of 10N_S iterations, and α=0.1 per iteration. To reduce the visual complexity of the results, beamforming maps are obtained only in 1D for case 1, and 2D for cases 2 and 3 with steering vector formulation III <cit.>, with DR.
§.§ Synthetic results
We propose case 1, a synthetic 1D example that highlights the differences between standard CLEAN-SC and B-CLEAN-SC. The array is located at -0.5 ≤ x ≤ 0.5, y=0. There are three sources S_i at x_1=0, x_2=0.1, x_3=0.5, y=0.5. The CSM is calculated at 256 frequencies f_max=8192, Δ f = 32. The focus grid is located at -1 ≤ x ≤ 1, y=0.5, Δ x=0.004. The PSD of S_1 linearly increases over frequency from PSD_1(f_0)=-10 to PSD_1(f_256)=0. The PSD of S_2 linearly decreases in the same way so that S_2 dominates at low frequencies and S_1 dominates at high frequencies. Additionally, S_3 is a smallband source that is only present at 3616≤ f ≤3840 at -10. For B-CLEAN-SC, the frequencies are processed in intervals of Δ f=2048.
Figure <ref> shows the results of case 1. Figure <ref> (a) and (c) show the CLEAN-SC results, Figure <ref> (b) and (d) show the B-CLEAN-SC results. (a) and (b) show the source reconstruction over space and frequency, and the color depicts the PSD. The underlying color-map depicts the conventional result for reference. The vertical colored lines represent Regions Of Interest (ROI) around the true source locations. (c) and (d) show the corresponding PSD, integrated from the same colored ROI. The black lines indicate noise, integrated from the area that does not correspond to any ROI. Additionally, a red line shows the integration of all sources within the map, as an estimation of the overall sound power. The ground truth is depicted with dotted lines for reference.
CLEAN-SC reconstructs the dominant source S_2 well down to f≥200, below which the maximum within the dirty map is assumed with a wrong level at the edges of the focal range. For S_1 the PSD reconstruction works well down to f≥1, below which CLEAN-SC gradually underestimates its power and gradually misses the correct location. The smallband source S_3 is reconstructed perfectly. B-CLEAN-SC perfectly estimates the sources' locations. The PSDs are reconstructed well throughout the frequency range, except for an underestimation of S_1 at f≈1. For B-CLEAN-SC, there is no noise.
§.§ Experiment with ground truth
Case 2 features a generic wind tunnel experiment with a streamlined monopole speaker that is moved to three different locations <cit.>. The individual CSMs are used to calculate the ground truth at Mach M=0. Then, a measurement at M=0.06 is performed for all three source positions and their CSMs are added to obtain a problem with three sources. The sources are located at x_1=-0.05, x_2=0.1, x_3=0.25, y_1,2,3=0.1, z_1,2,3=0. The array consists of 7x7 equidistantly spaced microphones with Δ x = Δ y = 0.09, and is located at z=-0.65. The equidistant 2D focus grid Δ x = Δ y = 0.005 covers -0.3≤ x,y ≤ 0.3 at z=0. The sampling rate is f_s=65536, and Δ f=512.
Figure <ref> shows the results for case 2. Figure <ref> (a) shows CLEAN-SC's spatial source estimation, integrated over the y-dimension (to yield a 2D representation). Source S_1 (blue), S_2 (orange), and S_3 (green) are spatially well estimated for f≥3. However, the result is very noisy. Figure <ref> (b) shows the integrated ROI from the same colored areas in (a). The ROI are circles with a radius of r=0.03 around the true source locations. CLEAN-SC is generally able to estimate the source PSD well. Source S_1 is estimated well at high frequencies f≥5. Source S_2 is estimated well down to f ≥1.5, below which it can no longer be separated from S_3. S_3, which dominates at low frequencies, is estimated well down to f ≥2. Below this frequency, the overall power was estimated well, but could not be attributed to a true source position. Both S_1 and S_3 are reconstructed down to a Signal-to-Signal Ratio (SSR) of around SSR = 30, which was used as a stopping criterion for CLEAN-SC. Throughout the frequency range, the result is very noisy and the Signal-to-Noise Ratio (SNR) compared to the dominating ROI spectrum is SNR = 11.5 (averaged over all frequencies where a ROI spectrum exists, i.e., f≥1.5).
Figure <ref> (c) and (d) show the corresponding B-CLEAN-SC results. For the spatial estimation, the processed frequency intervals with shared source positions are well visible. They result in a sparse positional estimation with less noise. Strong side lobes are reconstructed as “ghost sources” that move closer to the true source position with increasing frequency. Sources S_2 and S_3 are reconstructed well throughout the whole frequency range. Sources S_1 and S_2 are both reconstructed down to the SSR = 30 stopping criterion. The SNR (compared to the noise outside of ROI) is SNR=17.8.
§.§ Wind tunnel experiment
Case 3 is a wind tunnel measurement of a Dornier 728 at M=0.125 <cit.>. The 2D focus grid Δ x = Δ y = 0.01 is rotated so that it covers and follows the wing. The spiral array consists of 149 microphones, has a diameter of approx. d=1, and is located approx. Δ z=1 from the wing. The signal is sampled at f_s=120 and the CSM is sampled for 128 frequencies at Δ f≈479. Since there exists no ground truth, the results will be only discussed qualitatively.
Figure <ref> (a) and (b) show the estimated source distribution over the y-dimension, integrated over x. Thus, the only sources that can be confused in this depiction are an outboard slat track and the flap side edge at y≈0.3. The color-map shows the estimated PSD, normalized per frequency, within a range of 15. The model is depicted for reference. Note that the x-component of the model is plotted, but the color-map does not include any x-information. Figure <ref> (a) shows the CLEAN-SC result and (b) shows the B-CLEAN-SC result. For the CLEAN-SC result one can clearly identify the slat tracks in a frequency range of 5≤ f ≤15. Above this range, the result mostly shows the inboard Krüger slat, the nacelle area, and the noise for f≥10. For f≤5 the source separation fails. The B-CLEAN-SC result shows the same slat tracks as dominant sources. However, they are also reconstructed at low frequencies f≤5 and at high frequencies f≥40. Additionally, there is nearly no noise at high frequencies. Additional sources are located between the sources, which are typically connected to slat cove tones <cit.>. Overall, the location of the estimated sources strongly correlates with the geometrical features of the model and is consistent over the whole frequency range.
Based on the extensive analysis of this data <cit.>, ROI are defined that cover the inner (Krüger) slat and the slat tracks (blue), the outer slat (orange), and the flap side edge (green). The ROI are chosen so that the integrated source types are similar <cit.>. Figure <ref> (c) shows the ROI, and Figure <ref> (d) shows the corresponding CLEAN-SC results (dotted) and B-CLEAN-SC results (dashed). Below f≤5, CLEAN-SC fails to reconstruct individual sources, as shown in Figure <ref>, which results in strong noise. B-CLEAN-SC estimates the dominant source to be the slat tracks (which coincides with the overall CLEAN-SC solution), followed by the outer slat and flap side edge. Between 5≤ f≤40 the ROI results of both methods are nearly identical. For f≥15 the CLEAN-SC result is contaminated with noise that, based on its spectral shape, originates from the outer slat. For the B-CLEAN-SC result, there is noise throughout the frequency range; however, the SNR is much larger compared to the CLEAN-SC result.
§ DISCUSSION
Case 1 showed that CLEAN-SC can predict arbitrary results at low frequencies. B-CLEAN-SC fixes this by averaging frequency intervals of dirty maps to determine the source locations. This works because the locations of side and grating lobes change with frequency, so that they cancel out during the averaging. Additionally, the source location at low frequencies below the Rayleigh resolution is determined based on higher frequencies, where the source positions can still be resolved. The case showed that B-CLEAN-SC also works for sources with a frequency-dependent spectrum and for smallband sources. Here, the initial source marker is not guaranteed to be located on the dominant source for all frequencies. Thus, B-CLEAN-SC is prone to “confuse” the power contribution of these sources. To relax this problem, a low iteration gain factor of α=0.1 was used. Additionally, using frequency intervals instead of the whole spectrum further relaxes this issue.
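As a rough illustration of this mechanism (a minimal sketch, not the implementation used in this paper; the array shapes and variable names are assumptions), the interval-wise marker selection can be written as:

import numpy as np

def bclean_sc_marker(dirty_maps):
    # dirty_maps: (n_frequencies_in_interval, n_grid_points) array of dirty maps.
    # Averaging over the interval lets side and grating lobes, whose positions
    # vary with frequency, cancel out before the marker (peak) is picked.
    averaged = np.asarray(dirty_maps).mean(axis=0)
    return int(np.argmax(averaged))  # grid index shared by all frequencies in the interval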
Case 2 showed how CLEAN-SC and B-CLEAN-SC perform on a generic wind tunnel measurement, featuring a monopole speaker with a ground truth. Overall, both methods performed similarly with two main differences. First, B-CLEAN-SC was able to correctly determine the location and power of the sources at low frequencies. Second, its overall noise level was 6 lower compared to CLEAN-SC.
Case 3 showed the performance of both methods on a real-world wind tunnel measurement of a Do728. Again, B-CLEAN-SC was able to reconstruct sources throughout the frequency range, compared to CLEAN-SC, which identified sources mainly at 5≤ f ≤45. Since their location is roughly constant over frequency and corresponds to the geometric features (slat track, flap side edge, etc.), we assume these identified locations to be correct. The B-CLEAN-SC result is less noisy compared to the CLEAN-SC result. The source-type-dependent ROI integration showed nearly identical results for both methods in the frequency region where CLEAN-SC correctly identified sources.
For B-CLEAN-SC, the frequency interval has an impact on the results (not shown within the scope of this paper). With an increasing frequency interval, the spatial source estimation is refined, but the spectral estimation gets worse if the dominance of sources strongly varies within the interval. One can possibly account for this behavior by defining frequency-dependent intervals so that the intervals are large at very low and high frequencies and small at medium frequencies where CLEAN-SC works well. A low gain factor relaxes this issue but increases the number of iterations.
§ CONCLUSION
This paper presented Broadband-CLEAN-SC (B-CLEAN-SC), a variation of CLEAN-SC for broadband sources. B-CLEAN-SC assumes that the location of broadband sources is constant over frequency intervals. For synthetic and experimental wind tunnel data B-CLEAN-SC outperformed CLEAN-SC at low frequencies. For experimental real data, B-CLEAN-SC also resulted in 6 less noise throughout the frequency range. On wind tunnel data of a Dornier 728 both methods showed that the source location assumption is valid, improves the spatial estimation of sources, and reduces noise.
The algorithmic difference between CLEAN-SC and B-CLEAN-SC is small. B-CLEAN-SC processes multiple frequencies at once and uses one additional operation per iteration compared to CLEAN-SC. As it requires a lower gain factor, more iterations are necessary to meet a convergence criterion, which is, however, not performance relevant. However, it requires the storage of the CSM, steering vectors, and beamforming maps for multiple frequencies in memory. In terms of today's computational capacities, this should not be an issue, which makes B-CLEAN-SC a viable method that, for little additional computational effort, yields improved results at low and high frequencies.
|
http://arxiv.org/abs/2307.15758v2 | 20230710132241 | Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor | [
"Rui Li",
"Shaochun Lin",
"Liang Zhang",
"Changkui Duan",
"Pu Huang",
"Jiangfeng Du"
] | astro-ph.CO | [
"astro-ph.CO",
"physics.ins-det",
"quant-ph"
] |
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
[email protected]
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing, 210093, China
CAS Key Laboratory of Microscale Magnetic Resonance and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Among several dark matter candidates, bosonic ultra-light (sub-meV) dark matter is well motivated because it could couple to the Standard Model (SM) and induce new forces. Previous MICROSCOPE and Eöt-Wash torsion experiments have achieved high accuracy in the sub-1 Hz region, but at higher frequencies there is still a lack of relevant experimental research. We propose an experimental scheme based on the diamagnetic levitated micromechanical oscillator, one of the most sensitive sensors for acceleration sensitivity below the kilohertz scale. In order to improve the measurement range, we used the sensor whose resonance frequency ω_0 could be adjusted from 0.1Hz to 100Hz. The limits of the coupling constant g_ B-L are improved by more than 10 times compared to previous reports, and it may be possible to achieve higher accuracy by using the array of sensors in the future.
Search for ultralight dark matter with a frequency adjustable diamagnetic levitated sensor
Jiangfeng Du
August 12, 2023
==========================================================================================
§ INTRODUCTION
There are many astronomical <cit.> and cosmological observations <cit.> that prove the existence of dark matter particles<cit.>, but the specific parameters of dark matter, especially the mass, are still highly uncertain <cit.>. Many direct detection studies have assumed that dark matter is composed of supersymmetric fermions, but so far there has not been enough evidence. The focus of research is now gradually shifting to ultralight bosons, whose mass range is approximately 10^-22eV≲ m_ϕ≲ 0.1eV <cit.>. Ultralight bosons with a mass less than 1eV behave like a classical field due to their high particle number density. Due to the virial theorem, if the DM has virialized in the Galaxy, it will be moving with a typical speed v_DM≈ 10^5m/s <cit.>. This corresponds to the Compton frequency ω_s=m_ϕ/ ħ and de Broglie wavelength λ_DM=hc^2/(m_ϕ v_DM).
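As a rough numerical illustration (the mass value here is chosen only for illustration), for m_ϕ = 10^-14eV one finds ω_s = m_ϕ/ħ ≈ 15 rad/s, i.e. a signal frequency of about 2.4Hz, and λ_DM = hc^2/(m_ϕ v_DM) ≈ 4×10^11m, which is far larger than any laboratory-scale sensor.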
According to previous reports, experiments such as ADMX <cit.> can search for the Peccei-Quinn axion in the mass range 10^-6eV≲ m_ϕ≲ 10^-3eV <cit.>. Searches for pseudoscalar axion-like ULMBs with masses between 10^-23eV and 10^-18eV <cit.>, and for scalar dilaton ULMBs with masses between 10^-21eV and 10^-5eV using ultrastable clocks <cit.> and gravitational wave detectors <cit.>, have recently been reported.
When DM is a vector field couples to a conserved current, corresponding to the baryon number minus lepton number (B-L charge) in the SM. The Lagrangian in this case can be written as <cit.>:
ℒ=-1/4 F_μν F^μν -1/2 m_ϕ^2 A^2 +i g_ B-L A_μnγ^μ n
where n is the neutron field and the DM field couples directly to the number of neutrons, g_ B-L is the coupling strength.
Using the Lorentz gauge and the plane wave approximation, the dark electric field can be written as: E≈√(ρ_DM)sin (ω_s t-k⃗·x⃗), where ρ_DM≈ 0.3GeV/cm^3 <cit.> is the local DM density.
In ground-based experiments, assuming that a magnetic-gravity mechanical oscillator is used to measure the ultralight DM field along the Earth's axis, we can parameterize the force exerted on the sensor as:
F_sig(t)=α g_ B-L N_g F_0 sin(ω_s t)
because the de Broglie wavelength of the DM is much larger than the size of the sensor, we drop the x dependence. In this equation, α=sinθ_N denotes the component along the direction of gravity and θ_N is the latitude of the location of the ground experiment. In order to avoid the effects of the Earth's rotation during long measurements and to increase the force, the experiment is best carried out at high latitudes, such as in the Arctic, where α=1. Here F_0=√(ρ_DM)≈ 10^-15N and N_g is the total number of neutrons in the sensor. We can approximately write it as N_g≈1/2 m/m_neu for a sensor with mass m, where m_neu is the neutron mass. The force F_sig(t) is proportional to the mass of the sensor, so the main criterion for the sensor is its acceleration sensitivity.
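As a rough illustration (the PMMA density of about 1.2g/cm^3 and the example value of g_ B-L are assumptions used only for this estimate), a PMMA sphere of radius 0.5mm has mass m ≈ 6×10^-7kg and N_g ≈ m/(2m_neu) ≈ 2×10^20, so a coupling g_ B-L = 10^-25 with α=1 would produce a force amplitude of roughly 2×10^-20N, corresponding to an acceleration amplitude of about 3×10^-14m/s^2.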
Here we propose an experimental scheme to detect DM using a frequency adjustable diamagnetic levitated sensor. The resonance frequency can be changed by adjusting the magnetic field gradient at a paramagnetic part of the oscillator, over a frequency range from 0.1Hz to 100Hz.
This means that we have high detection accuracy for DM with mass in the range from 10^-16eV to 10^-13eV.
Compared to previously reported experiments, our scheme can achieve more than one order of magnitude improvement in the measurement of the coupling strength g_ B-L, based on the results of our theoretical calculation.
§ THEORETICAL CALCULATION
Under the effect of the ultralight DM field, and considering thermal noise and measurement noise, the equation of motion of a mechanical oscillator with resonance frequency ω_0 can be written as:
mẍ+ mγẋ + mω_0^2 x
=F_sig(t)+F_th+F_mea
where γ is the damping coefficient; F_sig(t) is the DM field drive from equation (<ref>); F_th is the environmental thermal noise; and F_mea represents the measurement noise, which is mainly composed of the detector imprecision noise and the backaction of radiation pressure fluctuations.
The total acceleration noise of the system is given by:
S_aa^tot= S_aa^th+ (S_xx^imp/|χ_ m(ω,ω_0)|^2+ S_ff^ba/m^2 )
where χ_ m(ω,ω_0) is the mechanical susceptibility given by |χ_ m(ω,ω_0)|^2=1/[(ω^2-ω_0^2)^2+γ^2 ω^2],
and S_aa^th =4 γ k_B T/m is the thermal noise where k_B is Boltzmann constant and T indicates environment temperature.
The detector imprecision noise S_xx^imp and the backaction noise S_ff^ba
make up the total measurement noise
S_aa^mea=S_xx^imp /|χ_ m(ω,ω_0)|^2 +S_ff^ba / m^2,
and S_xx^imp· S_ff^ba=(1/η) ħ^2 meanwhile.
Here η⩽ 1 is the measurement efficiency, and η= 1 corresponding to standard quantum limit (SQL).
The total measurement noise S_aa^mea for a sensor operating at the SQL condition at resonance frequency ω_0 is given by the simple formula <cit.>:
S_aa^mea,SQL=2 ħ√((ω_0^2-ω^2)^2+γ^2ω^2)/m
Achieving the SQL over a frequency range requires optimizing the measurement parameters frequency by frequency as the range is scanned.
We use the total acceleration noise S_aa^tot as the acceleration measurement sensitivity of the system. From equations (<ref>)-(<ref>), considering the optimal case α=1, we obtain the relationship between the coupling strength g_ B-L and the acceleration measurement sensitivity S_aa^tot as:
g_ B-L= 2 m_neu/F_0√(S_aa^tot/T_tot)
where T_tot denotes the effective total integration time. The DM signal is essentially a coherent force with coherence timescale T_coh≈ 10^6/ω_s.
When the DM frequency ω_s is low enough that T_coh> T_mea, all of the measurement time T_mea contributes to the coherent DM signal. As the DM frequency ω_s increases, so that T_coh< T_mea, only the fraction T_coh/T_mea of the measurement time contributes to the coherent signal. So we define the effective integration time:
T_tot={[ T_mea if T_coh> T_mea; √(T_mea· T_coh) if T_coh< T_mea ].
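The following short sketch (our own illustrative code, not part of the original analysis; the function and variable names are assumptions) evaluates the coupling reach implied by equations (<ref>) and (<ref>) for a given acceleration sensitivity:

import numpy as np

m_neu = 1.675e-27   # neutron mass in kg
F0 = 1e-15          # N, sqrt(rho_DM) as quoted above

def g_BL_reach(S_aa_tot, omega_s, T_mea):
    # Effective integration time: the full T_mea while the signal stays coherent,
    # otherwise the usual sqrt(T_mea * T_coh) scaling.
    T_coh = 1e6 / omega_s
    T_tot = T_mea if T_coh > T_mea else np.sqrt(T_mea * T_coh)
    return 2.0 * m_neu / F0 * np.sqrt(S_aa_tot / T_tot)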
§ EXPERIMENTAL SCHEME
Levitated micromechanical and nanomechanical oscillators have been demonstrated to be among the most sensitive acceleration sensors due to their ultralow dissipation <cit.>.
We propose a scheme based on our calculations, as shown in Fig.<ref>(a). A diamagnetic sphere made of PMMA with radius r_1=0.5mm (corresponding volume V_1), density ρ_1 and magnetic susceptibility χ_ 1 is levitated in the center region of the upper magnet (named Magnet-A), and the oscillator signal is detected through the fibres on both sides.
A paramagnetic microsphere made of Tb_2 O_3 with radius r_2=11 μm (corresponding volume V_2), density ρ_2 and magnetic susceptibility χ_ 2 is connected to the upper diamagnetic sphere through a thin glass rod. Another combined magnet (named Magnet-B) is placed under the paramagnetic microsphere. The whole magnet assembly is placed in a multi-stage suspension system, and active vibration isolation devices are used to further improve the isolation effect<cit.>.
Magnet-A is constructed in a similar way to our previous articles<cit.>, and it needs to use high-remanence magnetic material with two different magnetization directions to generate enough magnetic force. The red color indicates magnetization pointing toward the center, and the blue color indicates magnetization pointing away from the center. In addition, a lower-remanence magnetic material is used to build the upper layer of Magnet-B and a high-remanence magnetic material to build the lower layer. The combination of two different remanence magnetic materials allows Magnet-B to have a higher magnetic field gradient while reducing the magnetic field strength. The direction of magnetization is also indicated by the red and blue colors.
The magnetic field energy of the upper paramagnetic sphere can be written as:
U_1=-∫_V_1χ_ 1/2μ_0 B_A ^2 dV
where B_A represents the magnetic field created by
Magnet-A.
Assuming that Magnet-B is far away at the beginning, the z-direction equilibrium position z_0 of the oscillator in the magnetic-gravity trap satisfies:
∂ U_1/∂ z |_z=z_0=(ρ_1 V_1+ρ_2 V_2 )g.
And the resonance frequency in z direction is:
ω_0=√(1/ρ_1 V_1+ρ_2 V_2·∂^2 U_1/∂ z^2)|_z=z_0
Then, as Magnet-B rises, the magnetic field B_ B from Magnet-B at the lower paramagnetic microsphere becomes larger. Because V_2≪ V_1, we can simplify the magnetic field energy of the paramagnetic microsphere as U_2=-χ_ 2 B_B^2 V_2/2μ_0.
Now the resonance frequency along the z direction of the oscillator changes to:
ω_0^'=√(ω_0^2-χ_ 2V_2/μ_0(ρ_1 V_1+ρ_2V_2)( ∂ B_ B/∂ z)^2)|_z=z_0
where χ_ 2>0 and ω_0^'<ω_0.
We ignore the second-order gradient term because (∂ B_B/∂ z)^2≫ B_B (∂^2 B_ B / ∂ z^2). Since B_B and V_2 are very small, the magnetic force from Magnet-B on the paramagnetic microsphere is much smaller than the total gravity of the oscillator, and therefore the equilibrium position z_0 is not changed.
We use the finite element method to simulate how the magnetic field gradient ∂ B_B/∂ z changes with the distance between the paramagnetic microsphere and Magnet-B, denoted by d and ranging from 50μm to 100 μm, and then use equation (<ref>) to calculate the corresponding resonance frequency ω_0^', as shown in Fig.<ref>(b). It is theoretically possible to bring the resonance frequency ω_0^' close to zero by reducing the distance d. However, in order to improve the stability of the oscillator and reduce the requirements on the isolation system, we select a resonance frequency ω_0^' variation range from 0.1Hz to 100Hz.
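A minimal sketch of the tuning relation (<ref>) is given below (illustrative only; the material parameters and the simulated gradient ∂ B_B/∂ z must be supplied, e.g., from the finite element results):

import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability

def tuned_resonance(omega0, chi2, V2, rho1, V1, rho2, dBdz):
    # chi2 > 0 for the paramagnetic Tb2O3 microsphere, so a larger field
    # gradient from Magnet-B softens the trap and lowers the resonance.
    shift = chi2 * V2 / (mu0 * (rho1 * V1 + rho2 * V2)) * dBdz**2
    return np.sqrt(omega0**2 - shift)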
§ EXPERIMENTAL RESULT ESTIMATE
Now we calculate the acceleration measurement sensitivity of this system. In order to improve the acceleration sensitivity, the whole system is placed in a low-temperature environment with T=30mK, and we estimate the damping coefficient γ=10^-4Hz <cit.>. In the Supplementary material, we calculate the dependence of the total measurement noise S_aa^mea on the laser input power P_in and obtain the optimized laser input power P_opt(ω,ω_0) that minimizes the total measurement noise.
For the cases of oscillator resonance frequency ω_0 equal to 10Hz and 100Hz, we calculate the corresponding acceleration noise; the results are shown in Fig.<ref>(a) and Fig.<ref>(b). When the resonance frequency is ω_0=10Hz, assuming measurement efficiency η=1 and setting the laser input power to the optimal power P_opt(ω,ω_0) at each point, the measurement noise S_aa^mea can almost reach the SQL. With the measurement efficiency η reduced to 0.1, the measurement noise is slightly increased. In practice, however, to simplify the experiment the laser input power needs to be chosen near the resonance frequency ω_0 as P_opt(ω_0,ω_0), which makes the measurement noise S_aa^mea increase rapidly away from resonance.
In Fig.<ref>(a), in the frequency range from 9Hz to 11Hz, the measurement noise S_aa^mea is always below the thermal noise S_aa^th with η=0.1. When the resonance frequency ω_0 is adjusted to 100Hz, the range of measurement noise S_aa^mea below thermal noise S_aa^th is reduced to 99.6Hz to 100.4Hz in Fig.<ref>(b). We choose the appropriate oscillator resonance frequency scan step Δω_0 from this.
According to the calculation results in Fig.<ref>(a) and Fig.<ref>(b), we choose the scan step Δω_0=1Hz in the region where the resonance frequency ω_0 ranges from 0.1Hz to 100Hz; each scan covers the frequency range from ω_0-Δω_0/2 to ω_0+Δω_0/2, and the laser input power is fixed at P_in=P_opt(ω_0,ω_0 ) in each scan. We calculate the acceleration measurement noise S_aa^mea with η=0.1 in each scan, and compute the envelope of this series of S_aa^mea, written as S_aa^mea^'. The acceleration measurement sensitivity is S_aa^tot=S_aa^th+S_aa^mea^', and these results are presented in Fig.<ref>(c).
According to the previous discussion of the effective integration time T_tot, we fix the measurement time of each scan as T_mea=10^5s. When the DM frequency ω_s<10Hz, T_tot=T_mea; and when ω_s>10Hz, T_tot=√(T_mea· 10^6/ω_s). Combining the previous discussion of the scan step, we estimate that about one hundred adjustments and measurements will be required in total, corresponding to a total time of 1 × 10^7 seconds.
The final result for the coupling strength g_ B-L from equation (<ref>) is shown in Fig.<ref>. In the region ω_s≲ 100Hz, this system always has high acceleration sensitivity, obtained by adjusting the resonance frequency of the mechanical oscillator, and we achieve more than an order of magnitude improvement in the measurement of g_ B-L compared to the MICROSCOPE and the Eöt-Wash torsion experiments.
In the region ω_s≳ 100Hz, the measurement accuracy of g_ B-L decreases rapidly, due to the increase in the measurement noise S_aa^mea.
Finally, we estimate the minimum g_ B-L that this system can detect, assuming that the DM frequency ω_s is 1Hz, 10Hz and 100Hz, respectively.
From equation (<ref>), with the measurement time T_mea ranging from 10^3s to 10^7s, the results are shown in Fig.<ref>.
When T_mea is less than the coherence time T_coh, g_ B-L decreases rapidly as T_mea increases; when T_mea is greater than T_coh, g_ B-L decreases more slowly. If the final measurement time is about 10^7 s, the minimum g_ B-L that can be measured is on the scale of about 10^-26.
§ CONCLUSION
We propose an experimental scheme to detect ultralight dark matter using a frequency adjustable diamagnetic levitated microsphere sensor, which can theoretically approach the standard quantum limit.
We change the resonance frequency by adjusting the distance between the paramagnetic microsphere and the lower combined magnets, obtaining a larger range that maintains high acceleration measurement sensitivity.
Compared to existing systems, our method can achieve at least one order of magnitude improvement in the coupling constant g_ B-L, especially at frequencies from 0.1Hz to 100Hz. It may be possible to achieve higher accuracy by using an array of sensors in the future.
In this article, we consider only the effects of thermal noise and quantum measurement noise on the acceleration measurement sensitivity of the system.
In fact, there are many low-frequency noises, such as seismic waves and Earth tidal forces, which also have a great impact on the accuracy of the experiment and cannot be shielded by the suspension system. This poses a great challenge to the actual measurement. Reducing the frequency scan step according to the accuracy of the active vibration isolation device may make the effect of other noise lower than the thermal noise, and this needs to be verified by further experiments.
In general, current ground-based precision measurement systems may have broader prospects for dark matter measurement compared to previous astronomical observation methods. In the future, with the development of the measurement sensitivity and measurement range of mechanical sensors, and especially with the improvement of quantum sensing technology, the measurement sensitivity may break through the standard quantum limit. This will open up more possibilities for dark matter measurement.
This work was supported by the National Natural Science Foundation of China (Grants No.12205291, No. 12075115, No. 12075116, No. 11890702 and No. 12150011), the Fundamental Research Funds for the Central Universities, and Anhui Provincial Natural Science Foundation (Grant No. 2208085QA16).
apsrev4-1
§ APPENDIX: LIGHT FIELD CALCULATION AND MEASUREMENT NOISE OPTIMIZATION
Optical Calculation. The light emitted from the incident fiber is assumed to be Gaussian, taking the light propagation direction as the z-axis, the incident Gaussian light intensity distribution at waist can be written as <cit.>:
I_1 (r)=I_0 exp(-2r^2/ω_01^2)
And the waist radius of incident Gaussian beam is ω_01, which satisfies relation:
ω_01=√(a_0^2 λ^2/λ^2+π^2 a_0^2 tan^2 α)
where a_0 is the radius of the fiber core and sinα = N.A., with N.A. the numerical aperture of the fiber. Here a_0=5μm and N.A.=0.13 for the single-mode fiber. The incident optical power is:
P_in=∫_0^∞ I_1 (r) 2 π rdr=π/2ω_01^2 I_0
The response of the light to the micro-sphere is calculated using the standard optical ABCD ray matrix <cit.>. Under the par-axial approximation, the transmission matrix 𝐓 is:
𝐓=[ A B; C D ]
which has the equation:
[ r_f; θ_f ] = 𝐓[ r_i; θ_i ]
In calculating the transmission matrix 𝐓, we neglected the reflection of light at the interface and the absorption in the micro-sphere. Here A, B, C, D are
A=2/n-1, B=2R/n, C=2(1-n)/(nR), D=2/n-1, β_0=λ/(πω_01^2)
with the parameters λ=1550 nm and n=1.45; we then get that d_2 and ω_02 satisfy
d_2=(AC/β_0^2+ACd_1^2+ADd_1+BCd_1+BD)/(C^2 /β_0^2+C^2 d_1^2+2CDd_1+D^2)
ω_02=ω_01√((A+Cd_2 )^2+β_0^2(Ad_1+B+Cd_1 d_2+Dd_2 )^2)
d_2 and ω_02 are functions of d_1; we choose a suitable d_1 so that ω_02≈ a_0.
The coupling efficiency Γ of the laser beam into the single-mode optical fiber can be written as:
Γ=Γ_0 exp(-Γ_0·x_fib^2/2 (1/ω_02^2 +1/a_0^2)),
Γ_0=4ω_02^2a_0^2/(ω_02^2+a_0^2 )^2
x_fib indicates the fiber shift in the x direction; when x_fib=0, Γ=Γ_max=Γ_0.
In the experiment, x_fib is fixed at the place where ∂Γ/∂ x_fib is largest, i.e., x_fib=2.51μ m and Γ(x_fib)=0.604 in Fig.<ref>(b).
δ x is the displacement of the micro-sphere vertically to the optical axis (similar result for y direction), while δ x' is the projection on the incident fiber surface. Under par-axial approximation, δ x=ζ·δ x^' for small displacement δ x of the micro-sphere, with the displacement magnification factor:
ζ=d_1+d_2+2R/d_1+R,
ς=∂Γ/∂ x=∂Γ/∂ x'·∂ x'/∂ x=ζ·∂Γ/∂ x'
Measurement Noise. The relationship between the average power P and the photon number N is:
N_in=P_inT_mea/ħω_op,
N_dec=P_decT_mea/ħω_op
where ω_op is the light frequency. The photons satisfy the Poisson distribution and the corresponding photon number fluctuations are δ N_in=√(N_in) and δ N_dec=√(N_dec). Such fluctuations give rise to an imprecision noise in the displacement detection, δ x_imp:
δ x_imp =∂ x/∂Γ√((∂Γ/∂ N_inδ N_in)^2+
(∂Γ/∂ N_decδ N_dec)^2)
=1/ς√(Γ+Γ^2/N_in)
Thus the power density of displacement noise is:
S_xx^imp=1/ς^2(Γ+Γ^2)ħω_op/P_in
On the other hand, a photon passing through the micro-sphere changes its direction and therefore generates a back-action force δ f_ba whose strength is also proportional to the fluctuation of the incident photon number δ N_in. The back-action force δ f_ba can be written as:
δ f_ba=√(N_in)ħΔ k /T_mea
where Δ k is the change of the wave vector.
Here we suppose that the direction of light wave vector is along the direction of the Gaussian light wavefront, and the probability of photon appearing is proportional to the intensity of Gaussian light. Δ k is the average change of light wave vector pass through the micro-sphere. It is calculated by √((Δ k_in)^2+(Δ k_out)^2), where Δ k_in is the average light wave vector go to the micro-sphere, Δ k_out is the average light wave vector go out of the micro-sphere. We obtain
(Δ k)^2= k^2 β
= k^2 ∫_0^∞k^2 r^3/k^2 r^2 +((1-z_r^2/z_l^2)kR^2/2ρ(z_l)+z_r/z-kρ(z_l))^2·
1/ω_1^2(z_l)exp(-2r^2/ω_1^2(z_l))dr
where k=ω_op/c, z_l=d_1+R-√(R^2-r^2), ω_1(z_l)=ω_01√(1+(z_l /z_r)^2), z_r=2 πω_01^2 / λ and ρ(z_l)=z_r (z_l/z_r +z_r/z_l).
The power density of back-action noise is thus:
S_ff^ba=P_inħω_opβ/c^2
and the product of imprecision noise and back-action noise is:
S_xx^imp· S_ff^ba=1/ς^2 (Γ+Γ^2 )
(ω_op /c)^2 β^2 ħ^2
The quantum efficiency of the measurement is defined as:
η=ς/4(Γ+Γ^2)β k^2
where η = 1 corresponds to the standard quantum limit (SQL). The total measurement noise is
S_aa^mea (ω)=S_xx^imp/|χ_ m(ω,ω_0)|^2
+S_ff^ba/m^2
S_aa^mea is minimized by tuning the incident laser power P_in under the product constraint of the imprecision noise and backaction noise. The optimized power is:
P_opt (ω,ω_0 )=√(Γ+Γ^2/β)m c/ς|χ_ m(ω,ω_0)|
with the minimised total acceleration measurement noise as:
S_aa,min^mea=2ħω_op/mς c |χ_ m(ω,ω_0)|√(β(Γ+Γ^2 ))
In order to simplify the experimental process, we choose P_in =P_opt (ω_0,ω_0), with the corresponding optimized acceleration measurement noise:
S_aa,opt^mea=ħω_op√(β(Γ+Γ^2 ))/mς c γω_0·(1/ |χ_ m(ω,ω_0)|^2+γ^2 ω_0^2 )
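For reference, a short sketch of the optimized power and the corresponding minimized measurement noise given above is shown below (our own illustrative code; the symbol ς is written as varsigma, and all parameter values must be supplied by the user):

import numpy as np

hbar = 1.054571817e-34
c = 2.99792458e8

def optimized_power_and_noise(omega, omega0, gamma, m, Gamma, beta, varsigma, omega_op):
    # |chi_m| as defined in the main text (mechanical susceptibility without mass)
    chi = 1.0 / np.sqrt((omega**2 - omega0**2)**2 + gamma**2 * omega**2)
    P_opt = np.sqrt((Gamma + Gamma**2) / beta) * m * c / (varsigma * chi)
    S_min = 2 * hbar * omega_op / (m * varsigma * c * chi) * np.sqrt(beta * (Gamma + Gamma**2))
    return P_opt, S_min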
|
http://arxiv.org/abs/2307.04319v1 | 20230710032047 | New Variants of Frank-Wolfe Algorithm for Video Co-localization Problem | [
"Hamid Nazari"
] | cs.CV | [
"cs.CV",
"math.OC"
] |
FW Variants in Video Co-Localization
Clemson University, Clemson, SC
New Variants of Frank-Wolfe Algorithm for Video Co-localization Problem
Hamid Nazari
=======================================================================
The co-localization problem is a model that simultaneously localizes objects of the same class within a series of images or videos. In <cit.>, the authors present new variants of the Frank-Wolfe algorithm (aka conditional gradient) that increase the efficiency of solving the image and video co-localization problems. The authors demonstrate the efficiency of their methods via the rate of decrease of a value called the Wolfe gap over the iterations of the algorithm. In this project, inspired by the conditional gradient sliding algorithm (CGS) <cit.>, we propose algorithms for solving such problems and demonstrate their efficiency through numerical experiments. The efficiency of these methods with respect to the Wolfe gap is compared by implementing them on the YouTube-Objects dataset for videos.
§ IMAGE AND VIDEO CO-LOCALIZATION PROBLEMS
Problems in recognizing and localizing particular objects in images and videos have received much attention recently as internet photo and video sharing have become increasingly popular.
Co-localization involves localizing the common object with bounding boxes in a set of images or in videos, where a video is a sequence of images (frames).
§ MODEL SETUP FOR IMAGES
Our ultimate goal is to localize the common object in a set of images or in a series of frames of a video. Here we first give a brief review of the image and video models based on the formulation in <cit.>. To this end, we review the required background at each step, to the extent needed to make the features and variables of the mathematical programming model understandable. Note that this formulation is based on the formulation introduced in <cit.> for image co-localization. The quadratic formulation that we review in this section localizes any set of images and videos simultaneously. Similar discrete optimization approaches can also be found in <cit.> for various computer vision applications.
§.§ Objectness for Images
Suppose that we have a set ℐ = {I_1, I_2, …, I_n} of n given images, and our goal is to localize the common object in each image. One approach is to find candidate boxes in each image that potentially contain an object using objectness <cit.>.
While object detectors for images are usually specialized for one object class such as cars, airplanes, cats, or dogs, objectness quantifies how likely it is for an image window to cover an object of any class. In an image, objects such as cats, dogs, and chairs have a well-defined boundary and center, as opposed to indefinite background such as walls, sky, grass, and road. Figure <ref> illustrates the desired behavior of an objectness measure. Green windows, which fit an object tightly, should score highest; blue windows, which cover an object only partly together with some background, should score lower; and red windows, which contain only background, should score lowest. This way of scoring windows is designed in <cit.> and explicitly trained to distinguish windows containing an object from background windows.
Using objectness, we generate m candidate boxes (e.g., the green boxes in Figure <ref>) for each image that could potentially contain an object. In other words, for j∈{1,2,…,n} we define ℬ_j to be the set of all boxes in image I_j∈ℐ. Then the goal is to jointly select, from each image, the box that contains the object. Also, for simplicity, let ℬ = ℬ_1 ∪ℬ_2 ∪⋯∪ℬ_n and n_b = nm be the total number of boxes in all images.
§.§ Feature representation
Assume that we have determined m candidate boxes in each of two different images I_i and I_j for any i,j∈{1,2,…, n}. A common object in I_i and I_j might differ in shape, scale, color, brightness, angle and many other features. Therefore, it is critical to extract distinctive invariant features from images that can be used to perform reliable matching between different views of an object. David G. Lowe in <cit.> introduces a method that finds features that are invariant to image scaling and rotation, and partially invariant to changes in illumination and 3D camera viewpoint. Using his method, a large number of features can be extracted from typical images with efficient algorithms, while the cost of extracting these features is kept small. The major stages of computation used to generate the set of image features are as follows.
* Scale-space extrema detection: The first stage of computation searches over all scales and image locations. It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and orientation.
* Keypoint localization:
At each candidate location, a detailed model is fit to determine location and scale. Keypoints are selected based on measures of their stability.
* Orientation assignment:
One or more orientations are assigned to each keypoint location based on local image gradient directions. All future operations are performed on image data that has been transformed relative to the assigned orientation, scale, and location for each feature, thereby providing invariance to these transformations.
* Keypoint descriptor:
The local image gradients are measured at the selected scale in the region around each keypoint. These are transformed into a representation that allows for significant levels of local shape distortion and change in illumination.
This process is called the Scale Invariant Feature Transform (SIFT). SIFT transforms image data into scale-invariant coordinates relative to local features. Using SIFT we can generate large numbers of features that densely cover the image over the full range of scales and locations.
Let b_k be a box in ℬ. Then we denote the SIFT feature representation of b_k by x_k∈^d, where d = 10,000 is the dimension of the feature descriptor for each box in ℬ. Finally, we stack the feature vectors to form a feature matrix X∈^n_b× d.
§.§ Prior, Similarity, and Discriminability of boxes
Let us denote the boxes that contain an instance of the common object as positive boxes, and the ones that don't as negative boxes. Then a prior is introduced for each box that represents a score that the box is positive. This happens using a saliency map <cit.> for each box and the prior is in fact the average saliency within the box, weighted by the size of the box. Finally we stack these values into the n_b dimensional vector m⃗ as the prior vector.
In addition, boxes that have the similar appearance should be labeled the same. This happens through a matrix called similarity matrix denoted by S. Similarity matrix of boxes in ℬ is based on the box feature matrix X described above. Let b_i and b_j be any two boxes in ℬ where i,j∈{1,2,…,n_b}. Then similarity matrix S∈^n_b× n_b is computed based on the χ^2-distance as
S_ij = exp-γ∑_k=1^d(x_ik - x_jk)^2/x_ik + x_jk,
where γ = (10d)^-1/2. For i and j where boxes b_i and b_j belong to the same image we set S_ij=0. Then the normalized Laplacian matrix <cit.> is computed as
ℒ = I_n_b - D^-1/2SD^-1/2,
where D is the diagonal matrix composed of row sums of S.
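For concreteness, a small sketch of how S and ℒ can be computed from the feature matrix is given below (our own illustrative code, not the implementation of the cited work; the small constants added to avoid division by zero are implementation details):

import numpy as np

def box_laplacian(X, same_image, gamma):
    # X: n_b x d feature matrix; same_image[i, j] is True when boxes i and j
    # come from the same image (their similarity is forced to zero).
    n_b = X.shape[0]
    S = np.zeros((n_b, n_b))
    for i in range(n_b):
        for j in range(n_b):
            d2 = np.sum((X[i] - X[j]) ** 2 / (X[i] + X[j] + 1e-12))
            S[i, j] = np.exp(-gamma * d2)  # chi-squared similarity
    S[same_image] = 0.0
    d_inv_sqrt = 1.0 / np.sqrt(S.sum(axis=1) + 1e-12)
    return np.eye(n_b) - (d_inv_sqrt[:, None] * S) * d_inv_sqrt[None, :]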
§.§ Model Formulation
Associated with each box b_j,k∈ℬ_j we define a binary variable z_j,k where z_j,k=1 when b_j,k is a positive box (contains an instance of the common object) and 0 otherwise. Then we define the integer vector variable
z⃗ = (z_1,1,…,z_1,m, …, z_n,1,…, z_n,m)^T∈{0,1}^n_b.
Making the assumption that in each image exactly one box is positive, our set of constraints is defined by
∑_k = 1^m z_j,k = 1, ∀ j ∈{1,…, n}.
As we introduced a prior for each box and defined the n_b dimensional vector of average saliency within the boxes, we obtain a linear term that penalizes less salient boxes as part of the objective function:
f_p(z⃗) := -z⃗^Tlog(m⃗).
Similarly, our choice of normalized Laplacian matrix ℒ defined in (<ref>) results in a quadratic term that handles the selection of similar boxes:
f_L(z⃗) := z⃗^Tℒz⃗.
This is motivated by the work of Shi and Malik <cit.> in which they have taken advantage of eigenvalues of the Laplacian for clustering z⃗ by the similarity matrix. In fact, they have shown that with the eigenvector corresponding to the second smallest eigenvalue of a normalized Laplacian matrix we can cluster z⃗ along the graph defined by the similarity matrix, leading to normalized cuts when used for image segmentation. Also, Belkin and Niyogi <cit.> showed that this problem is equivalent to minimizing (<ref>) under linear constraints. In fact, the similarity term works as a generative term which selects boxes that cluster well together <cit.>.
Although discriminative learning techniques such as support vector machines and ridge regression have been widely used on supervised problems in which the labels are known, they can also be used in this unsupervised case where the labels of the boxes are unknown <cit.>. Motivated by <cit.>, we consider the ridge regression objective function for the boxes:
min_w∈^d, c∈ 1/n_b∑_j=1^n∑_k=1^m( z_j,k-w^T x_j,k - c )^2 + κ/dw_2^2,
where w is the d-dimensional weight vector of the classifier and c is the bias. This cost function is chosen among discriminative cost functions because the ridge regression problem has an explicit (closed-form) solution for the weights w and bias c, which yields a quadratic function in the box labels <cit.>:
f_D(z⃗):=z⃗^T𝒜z⃗,
where
𝒜= 1/n_bΠ_n_b( I_n_b-X(X^TΠ_n_bX+n_bκ I_d)^-1X^T)Π_n_b,
is the discriminative clustering term, and Π_n_b = I_n_b - 1/n_b1⃗_n_b1⃗_n_b^T in (<ref>) is the centering projection matrix. Note that this quadratic term allows us to utilize a discriminative objective function to penalize the selection of boxes whose features are not easily linearly separable from the other boxes.
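A compact sketch of this closed-form computation is given below (illustrative only; it assumes the n_b × d feature matrix X fits in memory and uses a linear solve instead of an explicit matrix inverse):

import numpy as np

def discriminative_term(X, kappa):
    # Closed-form ridge-regression (discriminative clustering) matrix A.
    n_b, d = X.shape
    Pi = np.eye(n_b) - np.ones((n_b, n_b)) / n_b        # centering projection
    inner = X.T @ Pi @ X + n_b * kappa * np.eye(d)       # d x d, invertible
    return Pi @ (np.eye(n_b) - X @ np.linalg.solve(inner, X.T)) @ Pi / n_b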
Summing up our results in (<ref>), (<ref>), (<ref>), and (<ref>), the optimization problem to select the best box in each image is given by
min_z⃗ z⃗^T(ℒ+μ𝒜)z⃗ - λ z⃗^Tlog(m⃗)
s.t ∑_k = 1^m z_j,k = 1, j=1,…, n
z⃗ = (z_1,1,…,z_1,m, …, z_n,1,…, z_n,m)^T∈{0,1}^n_b,
where the parameter μ regularizes the trade-off between the quadratic terms (<ref>) and (<ref>), and the parameter λ handles the trade-off between the linear term (<ref>) and the quadratic terms (<ref>) and (<ref>). Recall that the linear constraints ensure that one box from each image is selected in the optimal solution. Note that Hastie, Tibshirani, and Friedman in <cit.> showed that 𝒜 is a positive semi-definite matrix. Also, since the matrix ℒ is positive semi-definite as well, the objective function of (<ref>) is convex.
§ MODEL SETUP FOR VIDEOS
Co-localization in a video is very similar to the image case, as a video is a sequence of images called frames. Since an object typically does not change drastically in size, shape, or color between two consecutive frames, co-localization in a video can in some respects be an easier task. In this section we describe the localization of a common object in a set of videos. In fact, if 𝒱 = {V_1, V_2, …, V_n} is a set of n given videos, we explore an approach to localize a common object in each frame of each video. More precisely, we consider ℐ_i = {I_i1, I_i2, …, I_il_i} to be the temporally ordered set of frames of video V_i. Here I_ij is the j-th frame of the i-th video and l_i is the total number of frames, i.e., the length of V_i, for i=1,…,n and j=1,…, l_i. Similar to what we did in the image case, we set ℬ_i,j to be the set of m candidate boxes generated using objectness <cit.> for the j-th frame of the i-th video. Then, considering l_i frames in video i and m boxes in each frame, we set n_b^v = ∑_i=1^n l_im to be the total number of boxes in 𝒱, the set of all videos.
Note that, if we set ℐ = {ℐ_1, ℐ_2,…, ℐ_n} to be the ordered set of all frames in 𝒱, model (<ref>) returns a single box in each frame (image) as an optimal solution. Although the objective function of this model captures the box prior, similarity, and discriminability across different videos, we can define a more effective similarity measure between boxes in the sequence of frames within a video.
§.§ Temporal Consistency In Frames of a Video
As discussed earlier in this section, objects in consecutive frames of video data are unlikely to change drastically in appearance, position, and size. This motivates the use of a separate prior for the frames (images) in the video case. Temporal consistency <cit.> is a powerful prior that is often leveraged in video tasks such as tracking <cit.>. In this approach, boxes from consecutive frames with a large difference in size and position should be unlikely to be selected together. To this end, a simple temporal similarity measure is defined between two boxes b_i and b_j from consecutive frames as:
s_temporal(b_i, b_j) := exp(-‖b_i^center - b_j^center‖_2 - |b_i^area - b_j^area|/max(b_i^area , b_j^area)).
A few comments are in order about the prior defined in (<ref>). First, b_i^area is the pixel area of box b_i and b_i^center is the vector of the center coordinates of box b_i, normalized by the width and height of the frame. Second, the metric defined in (<ref>) is a similarity metric defined between all pairs of boxes in adjacent frames. From this metric we can define a weighted graph 𝒢_i for video 𝒱_i, for i = 1,2, …, n, with nodes being the boxes in each frame, edges connecting boxes in consecutive frames, and edge weights given by the temporal similarity in (<ref>). Figure <ref> is a graphical representation of the graph 𝒢_i. For values of the similarity measure below some threshold we disconnect the nodes and remove the edge. Finally, as long as we can create a weighted graph over the boxes, any similarity measure other than the temporal consistency in (<ref>) can be used to weight the edges between two boxes, which makes the temporal framework quite flexible.
Let us define
S_t(i,j) = {[ s_temporal(b_i, b_j) if b_i and b_j lie in adjacent frames; 0 otherwise ].
to be the similarity matrix defined by the temporal similarity measure, where b_i and b_j are any two boxes in the set of all boxes in 𝒱. Similar to our approach to obtain (<ref>), with S_t we can compute the normalized Laplacian
U = I_n_b^v - D^-1/2S_tD^-1/2,
where D is the diagonal matrix composed of the row sums of S_t. This matrix encourages us to select boxes that are similar based on the temporal similarity metric (<ref>).
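A minimal sketch of the temporal weight between two boxes in adjacent frames is given below (illustrative only; it treats the area term as a scalar relative difference, and the variable names are assumptions):

import numpy as np

def temporal_similarity(center_i, center_j, area_i, area_j):
    # centers: normalized (x, y) pairs; areas: pixel counts of the two boxes.
    pos = np.linalg.norm(np.asarray(center_i) - np.asarray(center_j))
    size = abs(area_i - area_j) / max(area_i, area_j)
    return np.exp(-pos - size)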
§.§ Video Model Formulation
As discussed above, the temporal similarity suggests a weighted graph 𝒢_i for video 𝒱_i for i=1,2,…,n. In fact, a valid path in 𝒢_i from the first to the last frame of 𝒱_i corresponds to a feasible choice of boxes, one in each frame of 𝒱_i. This motivates us to define a binary variable that is on when the edge between two nodes in 𝒢_i is used and off otherwise. In other words, we define the binary variable y_i,j,k for video i and boxes b_j and b_k in 𝒱_i as
y_i,j,k = {[ 1 if boxes b_j and b_k contain the common object; 0 otherwise. ].
In fact, the variable y_i,j,k corresponds to the existence of an edge between boxes b_j and b_k in 𝒱_i. Also, we define the binary variable z_i,j,k to be 1 if box b_k in frame j of video i contains the common object, and 0 otherwise. A constraint that we need to consider here is that an edge may exist between boxes b_j and b_k only if they are boxes in two consecutive frames. Then, for a typical box b_k in frame j of video 𝒱_i, we define index sets p(k_j) and c(k_j) to be the sets of indices of the parent and child boxes in frames j+1 and j-1, respectively, that are connected to b_k in frame j in the graph 𝒢_i. Therefore, a required set of constraints for localization in the video case is defined by:
z_i,j,k = ∑_l∈ p(k_j) y_i,l,k_j = ∑_l∈ c(k_j)y_i,k_j,l, i = 1,…, n, j=1,…,l_i, k=1,…,m.
The other set of constraints, quite similar to the image co-localization case, restricts each frame of each video to have exactly one box that contains the common object. These constraints are defined by:
∑_k = 1^m z_i,j,k = 1, i=1,2,…,n, j = 1,2,…, l_i.
Finally, we define the vectors of variables
z⃗ = (z_1,1,1,z_1,1,2, …, z_i,j,k, …, z_n,l_n,m)^T∈{0,1}^n_b^v
where n_b^v = m∑_i=1^nl_i. Then, if we combine the temporal term defined by (<ref>) with the terms in the objective function of the original image model (<ref>), together with the constraints defined in (<ref>) and (<ref>), we obtain the following optimization formulation to select the box containing the common object in each frame of each video:
min_z⃗, y z⃗^T(L+μ A + μ_t U)z⃗ - λ z⃗^Tlog(m⃗)
s.t. ∑_k = 1^m z_i,j,k = 1, i=1,2,…,n, j = 1,2,…, l_i,
z_i,j,k = ∑_l∈ p(k_j) y_i,l,k_j = ∑_l∈ c(k_j)y_i,k_j,l
i = 1,…, n, j=1,…,l_i, k_j=1,…,m,
y_i,s,t∈{0,1}, i = 1,…,n, s,t = 1,…,m
z⃗=(z_1,1,1,z_1,1,2, …, z_i,j,k, …, z_n,l_n,m)^T ∈{0,1}^n_b^v,
where μ_t is the trade-off weight for the temporal Laplacian matrix. Note that with the new objective function in problem (<ref>), the extra constraint (<ref>) in the video case is necessary; without it, the temporal Laplacian matrix could lead the solution to an invalid path. This formulation allows us to incorporate temporal consistency into the image model.
§ OPTIMIZATION
The formulation (<ref>) obtained to find the best box in each image of the given set of images is a standard binary-constrained quadratic problem. The only feature that makes this problem non-convex is the binary constraints. Relaxing these constraints to continuous linear constraints turns the problem into a convex optimization problem that can be solved efficiently using standard methods. In fact, first-order methods such as the Frank-Wolfe method that we discussed in previous chapters can handle the relaxed problem efficiently, as they linearize the quadratic objective function and use a linear optimization oracle in each iteration.
Denoting the feasible region of problem (<ref>) by 𝒫, we can follow a similar approach for this problem as we did for (<ref>): we relax the discrete non-convex set 𝒫 to its convex hull (the integer hull in this specific case) conv(𝒫). Although standard algorithms such as interior point methods can be applied to solve this problem, as the number of videos increases to hundreds the dimension of the problem grows rapidly, and such methods, whose complexity is 𝒪(N^3) in the number N of boxes, perform very poorly. For the relaxation of the video problem, we will show in the implementation section that the suggested first-order methods perform efficiently. We will also propose a first-order method later in this chapter and will show that it performs better than the other first-order methods that have been applied to this problem.
Note that the constraints defining the set 𝒫 are separable across videos. In fact, for each video, these constraints are equivalent to the constraints of a shortest-path problem. This implies that the linear optimization step appearing in each iteration of a first-order method is actually a collection of shortest-path problems that can be solved efficiently using dynamic programming.
Recall that the Frank-Wolfe algorithm is a first-order method that in each of its iterations updates the current point along a direction obtained by calling a linear optimization oracle. The objective function of this linear optimization is in fact a linear approximation of the objective function of (<ref>) and (<ref>). The Frank-Wolfe algorithm therefore yields simple linearized subproblems with integer solutions for the image and video co-localization optimization problems. For the image model, the linearized cost function is separable for each image, and we can efficiently find the best integer solution for this problem. For the video model, the cost function and the constraints are also separable for each video, and optimizing the linearized function over the feasible region results in a shortest-path problem for each video.
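The two linear minimization oracles can be sketched as follows (our own illustrative code, not the implementation of the cited works; it assumes the linearized costs have already been computed and, for the video case, that every box in the last frame is reachable from the first frame):

import numpy as np

def image_lmo(linear_cost, n_images, m):
    # The linearized cost is separable per image: pick the cheapest box in each image.
    z = np.zeros(n_images * m)
    for j in range(n_images):
        block = linear_cost[j * m:(j + 1) * m]
        z[j * m + int(np.argmin(block))] = 1.0
    return z

def video_lmo(costs, edges):
    # costs[j][k]: linearized cost of box k in frame j of one video.
    # edges[j]: list of allowed (k_prev, k_next) transitions from frame j to j+1.
    n_frames = len(costs)
    dp = [np.full(len(c), np.inf) for c in costs]
    back = [dict() for _ in range(n_frames)]
    dp[0] = np.asarray(costs[0], dtype=float)
    for j in range(n_frames - 1):
        for (kp, kn) in edges[j]:
            cand = dp[j][kp] + costs[j + 1][kn]
            if cand < dp[j + 1][kn]:
                dp[j + 1][kn] = cand
                back[j + 1][kn] = kp
    # recover the cheapest path, i.e., one selected box per frame
    path = [int(np.argmin(dp[-1]))]
    for j in range(n_frames - 1, 0, -1):
        path.append(back[j][path[-1]])
    return path[::-1]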
In the following section we propose algorithms that can be applied efficiently to image and video co-localization problems, and we compare the performance of the proposed algorithms to the algorithms previously applied to these problems.
§ PROPOSED ALGORITHMS
The Conditional Gradient Sliding (CGS) algorithm <cit.> is a first-order projection-free method for solving convex optimization problems in which the feasible region is a convex and compact set. The major advantage of the CGS algorithm is that it skips gradient evaluations from time to time and reuses the same information within some inner iterations. This property of the CGS algorithm becomes helpful when the dimension of the problem, i.e., the size of the variable, is relatively large and computations become more and more expensive.
As shown in previous chapters, the CGS algorithm and its proposed variant, Conditional Gradient Sliding with Line search (CGS-ls), perform very well in many practical instances. Although the CGS and CGS-ls algorithms outperform the Frank-Wolfe (FW) algorithm in many cases, the variants of FW, such as Away-steps FW or Pairwise FW <cit.>, converge faster to the optimal value than CGS for the image and video co-localization problem, as we will show in numerical experiments later in this chapter.
Motivated by the CGS algorithm and also by the Away-steps and Pairwise FW methods, we propose algorithms called Away-Steps Conditional Gradient Sliding (ACGS) and Pairwise Conditional Gradient Sliding (PCGS) that perform very well for image and video co-localization problems. The ACGS and PCGS methods follow the iterations of the CGS method, but the direction used to update the new point in each iteration is motivated by the away steps and pairwise steps of the Away-steps and Pairwise FW. We will also show that ACGS and PCGS outperform all of the variants of FW applied to the image and video co-localization problems.
§.§ Away-Steps and Pairwise Conditional Gradient Sliding
The basic scheme of the ACGS and PCGS methods is obtained by using a new search direction in the CGS method whenever this direction leads the algorithm to a smaller Wolfe gap. Also, similar to the CGS algorithm, the classical FW method (as the ℱ𝒲 procedure) is incorporated in these algorithms to approximately solve the projection subproblems of the accelerated gradient (AG) method. The ACGS and PCGS algorithms are described in Algorithms <ref> and <ref>.
Note that the proposed algorithms are intended to be applied to the image and video co-localization problems (<ref>) and (<ref>). The objective functions in both problems, as discussed before, are convex, and the feasible region is a finite set of binary vectors, called atoms, in ^d for some d. We denote this set by 𝒜 and its convex hull conv(𝒜) by ℳ. As 𝒜 is finite, ℳ is a polytope.
The first difference between the ACGS (PCGS) and the CGS method is that we incorporate the set 𝒮^(k) of active atoms in the ACGS (PCGS) algorithm. This set keeps a record of the atoms (integer points) in 𝒜 that are being used for the away direction d_k^away at each iteration, such that the point y_k at the current iteration is the sum of the corners in 𝒮^(k) reweighted by α^(k). This direction, given in (<ref>), is defined by finding the atom v_k in 𝒮^(k) that maximizes the potential of descent given by ⟨ -f'(y_k), y_k- v⟩. Note that obtaining v_k in (<ref>) is fundamentally easier, as the linear optimization is over 𝒮^(k), the active set, which is a possibly small finite set of points.
The second difference is in the way we choose the step-size used to update the new iteration point. As we observe in (<ref>), we incorporate a line-search method to obtain the step-size with maximum reduction of the objective along a prespecified direction from the current point. With γ_max defined in (<ref>) and (<ref>) as the maximum step-size for the line-search step, the algorithm guarantees that the new iterate y_k+1 = y_k + γ_k d_k^away, with γ_k≤γ_max, stays feasible in each iteration. Note that the parameter γ_k in the CGS algorithm is required to be set up in an appropriate way to maintain feasibility in each iteration. Such setups are presented in <cit.> as γ_k = 3/(k+2) and γ_k = 2/(k+1), and in fact we can use these setups for the CGS steps in step (<ref>) as the upper bound for γ_k instead of 1 in the line-search step (<ref>). Also, it is easy to check that for the special case of the image and video co-localization problems, in which the objective is a convex quadratic function, γ_k in step (<ref>) has the closed form
γ_k = -d^T ∇ f(x)/(d^T Q d),
if Q≽0 is the quadratic term (i.e., the Hessian) of the objective, where x is the current point and d is the search direction. This value is projected onto 0 or γ_max if it falls outside of the range [0, γ_max] in the case of (<ref>).
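A minimal sketch of this line-search step is given below (illustrative only; Q is taken to be the Hessian of the quadratic objective, and the step is clipped to the feasible range):

import numpy as np

def exact_step(grad, d, Q, gamma_max):
    # Minimize f(x + gamma*d) over gamma in [0, gamma_max] for a convex quadratic f.
    denom = d @ Q @ d
    if denom <= 0:                 # flat (or numerically zero) curvature
        return gamma_max
    gamma = -(d @ grad) / denom
    return float(np.clip(gamma, 0.0, gamma_max))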
Finally, we incorporate the Wolfe gap as a stopping criterion in the ACGS and PCGS algorithms. In fact, at steps (<ref>) and (<ref>), the algorithms check whether they have reached the given threshold in order to stop before the preset maximum number of iterations N. As in classical FW, the Wolfe gap is an upper bound on the unknown suboptimality, and from the convexity of the objective f we have
f(x_k) - f(x^⋆) ≤⟨ -f'(x_k), x^⋆-x_k⟩≤max_v∈ℳ⟨ -f'(x_k), v-x_k⟩≤ϵ.
Note that for the image and video co-localization problems with binary decision variables, in a CGS step we have
𝒮^(k+1) = {[ {x_k} if γ_k= 1; 𝒮^(k)∪{x_k} otherwise. ].
Also, for v∈𝒮^(k)∖{x_k} we have
α_x_k^(k+1):=(1-γ_k)α_x_k^(k) +γ_k and α_v^(k+1):= (1-γ_k)α_v^(k).
On the other hand, for an away step we have
𝒮^(k+1) = {[ 𝒮^(k)∖{v_k} if γ_k=γ_max; 𝒮^(k) otherwise. ].
The case γ_k=γ_max, in which the atom v_k is removed, is called a drop step. Also, for v∈𝒮^(k)∖{v_k} we have
α_v_k^(k+1):=(1+γ_k)α_v_k^(k) -γ_k and α_v^(k+1):= (1+γ_k)α_v^(k).
ACGS and PCGS algorithms are slightly different in the direction that they use to update the new point at each iteration. More precisely, steps (<ref>) to (<ref>) in Algorithm <ref> are replaced with steps (<ref>) and (<ref>) in Algorithm <ref>. Similar to the Paiwise FW, the idea here is to only move weight from the away atomv$̨ to the CGS atom x$̨ and keep all otherαweight unchanged. In other words
α_v_k^(k+1):=α_v_k^(k) - γ_k and α_x_k^(k+1):= α_x_k^(k) + γ_k,
for some γ_k ≤γ_max:=α_v_k^(k).
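A small sketch of this pairwise transfer (keys of `alpha` are atom identifiers such as tuples; this bookkeeping choice is ours, not from the released code) is:

```python
def pairwise_transfer(alpha, v_away, x_cgs, gamma):
    """Move weight gamma from the away atom to the CGS atom; all other weights unchanged."""
    gamma = min(gamma, alpha[v_away])          # gamma_max = current weight of the away atom
    alpha[v_away] -= gamma
    alpha[x_cgs] = alpha.get(x_cgs, 0.0) + gamma
    if alpha[v_away] <= 0.0:                   # drop step: remove the exhausted away atom
        del alpha[v_away]
    return alpha
```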
An important property of the formulations (<ref>) and (<ref>) is that their constraints are separable for each image and video. This makes the computation more efficient when parallel computing is used. Moreover, as with any first-order method, the approach is very memory efficient in practice. In addition, since a solution of the convex relaxation is not necessarily an integer solution that is optimal or feasible for the original problem, we need to produce a solution as close as possible to the obtained relaxed optimum. In the image and video co-localization case, the most natural way of finding such a solution is to solve
min_p∈𝒫 ‖ p - y ‖_2^2,
where 𝒫 is the feasible region of the original problem and y is the solution of the relaxed problem. It is easy to check that the projection problem (<ref>) is equivalent to
max_p∈𝒫 ⟨ p, y ⟩,
which for the video model is just a shortest path problem that can be solved efficiently using dynamic programming.
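A hedged sketch of this rounding step for the video model is given below; it assumes the relaxed solution y has been reshaped into per-frame box scores and that permitted box-to-box transitions are supplied as a Boolean matrix (both are assumptions of the sketch, not details of the released code).

```python
import numpy as np

def round_to_path(scores, allowed=None):
    """Viterbi-style rounding: pick one box per frame maximizing <p, y> along a path.

    scores  : (num_frames, num_boxes) array, the relaxed solution y per frame and box.
    allowed : optional (num_boxes, num_boxes) Boolean matrix of permitted transitions;
              if None, any box may follow any box.
    """
    T, B = scores.shape
    if allowed is None:
        allowed = np.ones((B, B), dtype=bool)
    best = scores[0].copy()
    back = np.zeros((T, B), dtype=int)
    for t in range(1, T):
        cand = np.where(allowed, best[:, None], -np.inf)   # cand[i, j]: arrive at box j from box i
        back[t] = cand.argmax(axis=0)
        best = cand.max(axis=0) + scores[t]
    path = [int(best.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```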
§ EXPERIMENTAL RESULTS
In this section we apply the proposed Algorithm <ref> to the problems introduced in (<ref>) and (<ref>) for the image and video co-localization task. Recall that these are quadratic problems over the convex hull of paths in a network, so the linear minimization oracle in first-order methods amounts to finding a shortest path in the network. We compare the performance of the proposed algorithms with the works in <cit.> and <cit.> on the FW algorithm and its variants for the same problem. For this comparison we reuse the code shared for <cit.>, and the included aeroplane dataset consists of 660 variables.
We begin this section by reviewing the performance of the Away-steps Frank-Wolfe (AFW) algorithm and its comparison with solvers such as Gurobi and Mosek. These results were derived and reported in <cit.>, and the goal here is to show how AFW outperforms the other methods for our problem of interest. In <cit.>, however, Joulin, Tang, and Fei-Fei showed that their proposed Pairwise Frank-Wolfe (PairFW) algorithm outperforms the other variants of FW in solving this problem. We end this section by showing that our proposed ACGS algorithm performs better than any of the first-order methods that have been utilized to solve the video co-localization problem.
§.§ FW vs. Mosek and Gurobi
Algorithm <ref> is a variant of the FW algorithm proposed in <cit.>, in which the authors examined it on two datasets, the PASCAL VOC 2007 dataset <cit.> and the YouTube-Objects dataset <cit.>. This algorithm is in fact the AFW Algorithm introduced in <cit.> with some slight changes and some extra rounding steps. Also, the set 𝒟 in this algorithm is conv(𝒫), the convex hull of the feasible region of problems (<ref>) or (<ref>). Their implementation of Algorithm <ref> was coded in MATLAB, and they compare it to two standard Quadratic Programming (QP) solvers, Mosek and Gurobi, on a single-core 2.66 GHz Intel CPU with 6 GB of RAM. In addition, they set μ=0.4 for the image model, μ=0.6 for the video model, and μ_t=1.8 and λ= 0.1 for both the image and video models. They extract 20 objectness boxes from each image and sample each video every 10 frames, as there is little change between frames over a short amount of time.
The stopping criterion of Algorithm <ref> is based on the relative duality gap. This criterion, given by the function duality-gap(z) in the algorithm, is defined as d = (f-g)/g, where f is the objective function value and g is its dual. In their implementation of this algorithm, the authors consider two values, 10^-2 and 10^-3, for the stopping threshold ϵ.
Figure <ref> presents comparisons of Algorithm <ref>, as a variant of the FW algorithm, with the QP solvers Mosek and Gurobi on a logarithmic scale. This comparison is based on the CPU time of the algorithms as a function of the number of images and videos, or, in other words, the dimension of the decision variables. The reported time is the time it takes each algorithm to reach a duality gap below the threshold ϵ. As we can observe from these plots, the variant of the FW algorithm with away steps outperforms the standard QP solvers Mosek and Gurobi.
The reason we review and reproduce these comparisons directly from <cit.> is that in our implementations in the next section we only compare our proposed algorithms to other first-order methods. These first-order methods include the AFW algorithm, which, as seen in this section, outperforms the standard QP solvers.
The PASCAL Visual Object Classes 2007 dataset <cit.> provides standardized image data for 20 object classes for object-class recognition, along with image annotations and a bounding box and object-class label for each object. The associated challenges and competitions have been used to benchmark the recognition of objects from a number of visual object classes in realistic scenes. The YouTube-Objects dataset <cit.> consists of YouTube videos collected for 10 classes from PASCAL <cit.>: "aeroplane", "bird", "boat", "car", "cat", "cow", "dog", "horse", "motorbike", and "train". Although the authors in <cit.> studied multiple object classes of this dataset, in our implementations we focus on the "aeroplane" object class.
§.§ Implementations
Knowing from <cit.> that the AFW Algorithm <ref> outperforms the standard QP solvers Mosek and Gurobi, in this section we compare our proposed variants of the CGS algorithm, the ACGS Algorithm <ref> and the PCGS Algorithm <ref>, to other first-order methods, including the AFW method. More precisely, we compare the performance of our algorithms to all of the variants of FW, namely the FW algorithm, the FW algorithm with away steps (AFW), and the pairwise FW algorithm, as discussed in <cit.>. We also compare our algorithms to the original CGS algorithm <cit.>. These comparisons include the duality gap, CPU time, and objective function value versus the number of iterations.
The implementations use the YouTube-Objects dataset <cit.> described in the previous section, and specifically its "aeroplane" class. We obtained the dataset for this class as well as the code for the AFW and Pairwise FW algorithms from the repositories for <cit.>. We only consider the task of video co-localization with the problem formulation defined in (<ref>) for this implementation. All algorithms are coded in MATLAB and run on a computer with an Intel Core i5-6500 3.2 GHz processor and 16 GB of RAM.
In our implementations, we set all algorithms to stop either after the maximum number of iterations or after reaching the Wolfe duality-gap threshold. We set the threshold to ϵ=10^-5 and the maximum number of iterations to 2000. All parameters appearing in (<ref>) are set as in <cit.> for consistency of the comparison.
Note that neither the original FW algorithm nor the original CGS algorithm reaches the desired duality gap within the preset maximum of 2000 iterations. The AFW algorithm takes 628 iterations, the Pairwise FW takes 436 iterations, the ACGS takes 84 iterations, and the PCGS takes 82 iterations to reach the duality-gap threshold.
As we observe in Figure <ref>, both proposed variants of the CGS algorithm, ACGS and PCGS, outperform the FW algorithm and its variants as well as the original CGS algorithm. The performance of the algorithms in terms of CPU time versus iteration count is shown in Figure <ref>. As we observe in this figure, the CPU time per iteration of AFW, ACGS, and PCGS is quite similar, although the ACGS and PCGS algorithms reach the gap much earlier than the AFW algorithm.
In addition, although the FW algorithm requires only one linear optimization oracle call per iteration, its CPU time per iteration is not significantly better than that of the other algorithms. Also, note that out of the 84 iterations of the ACGS algorithm, the away direction is chosen in 34 iterations, which significantly improves the performance over CGS (which needs more than 2000 iterations) for this problem.
Finally, the authors in <cit.> proved, for the first time, the global linear convergence of the FW variants AFW and Pairwise FW under strong convexity of the objective. One potential direction for future work related to the current chapter is to establish the convergence rates of the proposed Algorithms <ref> and <ref>.
CGS:Lan
Lan, Guanghui, and Yi Zhou. "Conditional gradient sliding for convex optimization." SIAM Journal on Optimization 26.2 (2016): 1379-1409
Nesterov
Nesterov, Y.: Introductory lectures on convex optimization: A basic course, vol. 87. Springer Science & Business Media (2013)
joulin2014efficient
Joulin, A., Tang, K., Fei-Fei, L.: Efficient image and video co-localization with frank-wolfe algorithm. In: European Conference on Computer Vision, pp. 253–268. Springer (2014)
tang2014co
Tang, K., Joulin, A., Li, L.J., Fei-Fei, L.: Co-localization in real-world images. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1464–1471 (2014)
alexe2012measuring
Alexe, B., Deselaers, T., Ferrari, V.: Measuring the objectness of image windows. IEEE Transactions on Pattern Analysis and Machine Intelligence 34(11), 2189–2202 (2012)
boykov2001fast
Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222–1239 (2001)
delong2012minimizing
Delong, A., Gorelick, L., Veksler, O., Boykov, Y.: Minimizing energies with hierarchical costs. International journal of computer vision 100(1), 38–58 (2012)
delong2012fast
Delong, A., Osokin, A., Isack, H.N., Boykov, Y.: Fast approximate energy minimization with label costs. International journal of computer vision 96(1), 1–27 (2012)
lowe2004distinctive
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International journal of computer vision 60(2), 91–110 (2004)
perazzi2012saliency
Perazzi, F., Krähenbühl, P., Pritch, Y., Hornung, A.: Saliency filters: Contrast based filtering for salient region detection. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 733–740. IEEE (2012)
shi2000normalized
Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence 22(8), 888–905 (2000)
belkin2003laplacian
Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation 15(6), 1373–1396 (2003)
bach2007diffrac
Bach, F., Harchaoui, Z.: Diffrac: a discriminative and flexible framework for clustering. Advances in Neural Information Processing Systems 20 (2007)
xu2004maximum
Xu, L., Neufeld, J., Larson, B., Schuurmans, D.: Maximum margin clustering. Advances in neural information processing systems 17 (2004)
joulin2010discriminative
Joulin, A., Bach, F., Ponce, J.: Discriminative clustering for image co-segmentation. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1943–1950. IEEE (2010)
hastie2009elements
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition. Springer Series in Statistics. Springer (2009)
babenko2010robust
Babenko, B., Yang, M.H., Belongie, S.: Robust object tracking with online multiple instance learning. IEEE transactions on pattern analysis and machine intelligence 33(8), 1619–1632 (2010)
berclaz2011multiple
Berclaz, J., Fleuret, F., Turetken, E., Fua, P.: Multiple object tracking using k-shortest paths optimization. IEEE transactions on pattern analysis and machine intelligence 33(9), 1806–1819 (2011)
yilmaz2006object
Yilmaz, A., Javed, O., Shah, M.: Object tracking: A survey. Acm computing surveys (CSUR) 38(4), 13–es (2006)
tang2012shifting
Tang, K., Ramanathan, V., Fei-Fei, L., Koller, D.: Shifting weights: Adapting object detectors from image to video. Advances in Neural Information Processing Systems 25 (2012)
perez2002color
Perez, P., Hue, C., Vermaak, J., Gangnet, M.: Color-based probabilistic tracking. In: European Conference on Computer Vision, pp. 661–675. Springer (2002)
pang2013finding
Pang, Y., Ling, H.: Finding the best from the second bests - inhibiting subjective bias in evaluation of visual tracking algorithms. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2784–2791 (2013)
harestructured
Hare, S., Saffari, A., Torr, P.H.S.: Struck: Structured output tracking with kernels. In: IEEE International Conference on Computer Vision, pp. 263–270. IEEE (2011)
lacoste2015global
Lacoste-Julien, S., Jaggi, M.: On the global linear convergence of frank-wolfe optimization variants. Advances in neural information processing systems 28 (2015)
everingham2010pascal
Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision 88(2), 303–338 (2010)
prest2012learning
Prest, A., Leistner, C., Civera, J., Schmid, C., Ferrari, V.: Learning object class detectors from weakly annotated video. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3282–3289. IEEE (2012) |
http://arxiv.org/abs/2307.07236v1 | 20230714092030 | On Orbits and Bi-invariant Subsets of Binary $G$-Spaces | [
"Pavel S. Gevorgyan",
"A. A. Nazaryan"
] | math.GN | [
"math.GN",
"54H15, 57S99"
] |
Moscow Pedagogical State University
[email protected]
Yerevan State University
[email protected]
Orbits and bi-invariant subsets of binary G-spaces are studied. The problem of the distributivity of a binary action of a group G on a space X, which was posed in 2016 by one of the authors, is solved.
54H15; 57S99
On Orbits and Bi-invariant Subsets of Binary G-Spaces
A. A. Nazaryan
August 12, 2023
=====================================================
§ INTRODUCTION
The notions of a binary action of a group G on a topological space X and of a binary G-space were introduced in <cit.>. The group H_2(X) of all invertible continuous binary operations on a space X acts binarily on the space X. Moreover, if a group G acts binarily and effectively on X, then G is a subgroup of H_2(X) <cit.>. The category G-Top^2 of binary G-spaces and bi-equivariant mappings is a natural extension of the category G-Top of all G-spaces and equivariant mappings, which, in turn, is an extension of the category Top; i.e.,
Top⊂ G-Top⊂ G-Top^2.
In <cit.>, the notion of a distributive binary G-space X was introduced. One of the reasons why this notion is important is the special role played by distributive subgroups of the group H_2(X) of all invertible continuous binary operations on X. For example, any topological group is a distributive subgroup of the group of invertible binary operations of some space <cit.>. This statement is the binary topological counterpart of Cayley’s classical theorem on the representation of any finite group by unary operations (permutations).
This paper is concerned with orbits and bi-invariant sets in binary G-spaces. We emphasize that transferring the basic notions of the theory of G-spaces to the theory of binary G-spaces and studying them is not always easy. Substantial differences arise already in considering bi-invariant sets. For example, the union of bi-invariant subsets of a binary G-space is not necessarily a bi-invariant subset. Difficulties arise also in describing orbits of a binary G-space. Given a binary G-space X, the set G(x, x) is not generally the orbit of x ∈ X, and orbits of points may intersect.
In binary G-spaces, orbits are described recursively. As a result, we obtain finitely or infinitely generated orbits. In a distributive binary G-space X, all orbits are finitely generated. Moreover, for any x ∈ X, the set G(x, x) is bi-invariant; therefore, this set is the orbit of x, because it is the minimal bi-invariant subset containing x <cit.>. This gives rise to the following natural question: Is it true that if all orbits of a binary G-space X are of the form G(x, x), then X is a distributive binary G-space? This problem was posed in <cit.>. The present paper contains, in particular, a solution of this problem.
§ BASIC NOTIONS AND NOTATION
Let G be any topological group, and let X be any topological space.
The space X is called a G-space if it is equipped with a continuous action α of the group G, i.e., a continuous mapping α :G× X→ X satisfying the conditions
α(gh, x)=α(g, α(h,x)) and α(e,x)=x
or, in the notation α(g,x)=gx,
(gh)x=g(hx) and ex=x,
where e is the identity element of G, for any g, h ∈ G and x ∈ X.
The set
Ker α={g∈ G; gx=x}
is called the kernel of the action α. If Ker α=e, then α is called an effective action and X, an effective
G-space.
A binary action of a topological group G on a space X is a continuous mapping α :G× X^2→ X such that, for any g,h∈ G and x_1,x_2 ∈ X,
gh(x_1,x_2)=g(x_1, h(x_1,x_2)),
e(x_1,x_2)=x_2,
where g(x_1,x_2) = α (g, x_1,x_2).
A space X equipped with a binary action α of a group G, that is, a triple (G, X, α), is called a binary G-space.
Given A⊂ X and g∈ G, we set
g(A,A)={g(a_1,a_2); a_1,a_2∈ A}.
Similarly, for K⊂ G, we set
K(A,A)={g(a_1,a_2); g∈ K, a_1,a_2∈ A}.
For brevity, we sometimes write g(A) and K(A) instead of g(A,A) and K(A,A), respectively.
A subset A⊂ X is said to be bi-invariant if G(A,A)=A. A bi-invariant subset A of X is itself a binary G-space; it is called a binary G-subspace.
The orbit of an element x of a binary G-space X is the minimal bi-invariant set [x]⊂ X containing x.
Obviously, the set G(x,x) is a subset of the orbit [x]:
G(x,x)⊂ [x].
A binary G-space X is said to be distributive if
g(h(x,x'), h(x,x”))=h(x,g(x', x”)).
for any x,x',x”∈ X and g,h ∈ G.
The class of distributive binary G-spaces plays an important role in the theory of binary G-spaces.
Let H be a subgroup of G. The set
N_G(H)={g∈ G; g^-1Hg=H}
is called the normalizer of the subgroup H in the group G. The normalizer N_G(H) is the maximal subgroup of G containing H as a normal subgroup. Obviously, if H is a normal subgroup G, then N_G(H) = G.
The commutator of elements g and h of G is the element
[g,h]=g^-1h^-1gh∈ G.
The subgroup G'=[G,G] generated by all commutators of G is called the commutator subgroup of the group G. The commutator subgroup G' is a normal subgroup of G. The commutator subgroup G' is trivial if and only if the group G is commutative.
These definitions, as well as all other definitions, notions and results of group theory and the theory of continuous transformation groups used in the paper without reference, can be found in <cit.> and <cit.>.
§ BI-INVARIANT SUBSETS OF A BINARY G-SPACE
Any intersection of bi-invariant subsets of a binary G-space is a bi-invariant subset.
Suppose given bi-invariant subsets A and B of a binary G-space X. Let us prove that G(A∩ B)=A∩ B. Since A∩ B⊂ G(A∩ B), it suffices to show that G(A∩ B)⊂ A∩ B. Indeed, the inclusions A∩ B ⊂ A and A∩ B ⊂ B imply
G(A∩ B)⊂ G(A)=A, G(A∩ B)⊂ G(B)=B.
Therefore, G(A∩ B)⊂ A∩ B.
A union of bi-invariant subsets in a binary G-space, in contrast to that in a G-space, is not generally bi-invariant (see <cit.>).
Let (G, X, α) be a binary G-space. The G-space (G, X× X, α), on which the action of G is defined by
α(g, x_1,x_2)=(x_1, α(g,x_1,x_2)),
is called the natural G-square induced by the binary action α.
Setting α(g, x_1,x_2)=g· (x_1,x_2) and α(g,x_1,x_2)=g(x_1,x_2), we rewrite the last formula as
g· (x_1,x_2)=(x_1, g(x_1,x_2)).
Note that, for any g∈ G and a,a'∈ A,
g· (a,a')=(a,g(a,a'))∈ A× A
if and only if g(a,a')∈ A. Thus, the following proposition holds.
A subset A of a binary G-space X is bi-invariant if and only if the set A× A is invariant in the natural G-square X× X.
Let (X,G,α) be a G-space. The unary action α generates a binary action α of the group G on X by the rule
α(g,x_1,x_2)=α(g, x_2), or g(x_1,x_2)=gx_2,
for all g ∈ G and x_1, x_2∈ X. The action α is called the induced binary action, and the G-space (X,G, α) is called the induced binary G-space.
The following simple proposition is valid.
Let (X,G, α) be any G-space. A set A⊂ X is bi-invariant with respect to the induced binary action α if and only if it is invariant with respect to the action α.
In a binary G-space X, any bi-invariant subset containing a point x∈ X contains the whole set G(x,x). There arises the natural question of whether the set G(x,x) is bi-invariant. Example <ref> constructed at the end of this section shows that, in the general case, the answer in negative. However, there exists a large class of binary G-spaces in which the sets G(x,x) are bi-invariant.
Let X be a distributive binary G-space. Then, for any elements x, x' ∈ X,
G(G(x,x),G(x,x'))=G(x,x').
Indeed, in view of the distributivity of the binary action, we have
g(h(x,x),k(x,x'))=g(h(x,x),h(x, h^-1k(x,x')))=
=h(x, g(x,h^-1k(x,x')))=h(x, g h^-1k(x,x'))=
=hg h^-1k(x, x') ∈ G(x,x')
for any g,h,k ∈ G.
A direct consequence of this proposition is the following theorem.
Let X be a distributive binary G-space. Then the set G(x,x) is bi-invariant for any x∈ X.
There arises the natural question of whether the converse of this theorem is true. This problem was posed in <cit.>.
<cit.>
Let X be a binary G-space such that the set G(x,x)⊂ X is bi-invariant for any x∈ X. Is it true that X is a distributive binary G-space?
The rest of this section is devoted to the solution of this problem.
Let G be a topological group, and let H be its subgroup. The continuous mapping α : H× G^2 → G defined by
α (h,x_1,x_2) = x_1^-1hx_1x_2, or h(x_1,x_2) = x_1^-1hx_1x_2,
for any h∈ H and x_1, x_2 ∈ G determines a binary action of the subgroup H on the group G. Indeed, we have e(x_1,x_2)=x_2 and
hh'(x_1,x_2) = x_1^-1hh'x_1x_2 = x_1^-1hx_1x_1^-1h'x_1x_2 =
=h(x_1, x_1^-1h'x_1x_2) = h(x_1, h'(x_1,x_2)).
We denote the binary H-space thus obtained by (H,G,α).
Suppose given a binary H-space (H,G,α). A subset H(x,x)⊂ G, x∈ G, is bi-invariant if and only if x^-1Hx⊂ N_G(H), where N_G(H) is the normalizer of the subgroup H in the group G.
The bi-invariance of H(x,x), x∈ G, means that, for any h,h_1, h_2∈ H, there exists an element h̃∈ H such that
h(h_1(x,x), h_2(x,x))=h̃(x,x).
By virtue of (<ref>), this implies
h(x^-1h_1xx, x^-1h_2xx)=x^-1h̃xx,
x^-1x^-1h_1^-1xhx^-1h_1xx x^-1h_2xx=x^-1h̃xx,
x^-1h_1^-1xhx^-1h_1xh_2=h̃,
(x^-1h_1x)^-1h(x^-1h_1x)=h̃h_2^-1∈ H,
which is equivalent to x^-1Hx⊂ N_G(H).
If H is a normal subgroup of a group G, then H(x,x) is bi-invariant for any x∈ G.
Let (X,G, α) be a G-space. The induced binary G-space (X,G, α) is distributive if and only if the commutator subgroup G' of the group G is a subgroup of the kernel Ker α of the action α.
Suppose that the induced binary G-space X is distributive. Then, for any g,h∈ G and x∈ X, we have
g(h(x,x), h(x,x)) = h(x, g(x,x)) ⟹
⟹ (gh)x = (hg)x ⟹ (g^-1h^-1gh) x =x,
i.e., g^-1h^-1gh = [g,h]∈Ker α.
Now suppose that g^-1h^-1gh = [g,h]∈Ker α for any g,h∈ G. Let us verify that the induced binary
action is distributive:
g(h(x,x'), h(x,x”)) = (gh)x” = (hg)x” = h(x, g(x',x”)).
If (X,G, α) is an effective G-space, then the induced binary G-space (X,G, α) is distributive if and only if G is an Abelian group.
Solution of Problem <ref>. Let G be any non-Abelian group, and let X be an effective G-space. Consider the induced binary G-space (X,G, α). According to Proposition <ref>, the sets G(x,x)⊂ X are bi-invariant for all x∈ X. However, the binary G-space (X,G,α) is not distributive by virtue of Corollary <ref>. Thus, Problem <ref> has a negative solution.
Now we construct the promised example of a non-bi-invariant set of the form G(x,x).
Let G=GL(2,𝐑). Consider the set
H={[ 1 h; 0 1 ], h∈𝐑}.
This is a subgroup of the group GL(2,𝐑) and, therefore, formula (<ref>) defines a binary action of H on GL(2,𝐑).
Let us show that, for
x=[ 0 1; 1 0 ],
the subset H(x,x) of GL(2,𝐑) is not bi-invariant. Consider the matrix
h=
[ 1 1; 0 1 ]∈ H.
Note that the matrix
x^-1hx=
[ 0 1; 1 0 ][ 1 1; 0 1 ][ 0 1; 1 0 ] =
[ 1 0; 1 1 ]
does not belong to the normalizer N_G(H). Indeed, we have
(x^-1hx)^-1h(x^-1hx)=
[ 1 0; -1 1 ][ 1 1; 0 1 ][ 1 0; 1 1 ] =
[ 2 1; -1 0 ]∉ H.
Therefore, by virtue of Proposition <ref>, H(x,x) is not a bi-invariant subset of the group GL(2,𝐑).
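This computation can also be checked numerically; the following short NumPy sketch (illustrative only, not part of the original text) reproduces the matrices above:

```python
import numpy as np

h = np.array([[1., 1.], [0., 1.]])      # generator of the one-parameter subgroup H
x = np.array([[0., 1.], [1., 0.]])
xi = np.linalg.inv(x)

g = xi @ h @ x                          # x^{-1} h x
c = np.linalg.inv(g) @ h @ g            # (x^{-1}hx)^{-1} h (x^{-1}hx)
print(c)                                # [[ 2.  1.] [-1.  0.]] -- not upper unitriangular, so c is not in H
```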
§ ORBITS OF A BINARY G-SPACE
In binary G-spaces, unlike in G-spaces, orbits may intersect.
Let GL(2,𝐑) be the topological group of nonsingular square matrices of order 2. The
set H={e,h}, where
e=
[ 1 0; 0 1 ],
h=[ 0 1; 1 0 ],
is a subgroup of GL(2,𝐑), because h^2=e. Therefore, formula (<ref>) defines a binary action of H on GL(2,𝐑) (see Example <ref>). Obviously, the orbit of the element h coincides with the subgroup H: [h]={e,h}. Now consider the matrix
x=
[ 0 -1; 1 -1 ]∉ [h].
Let us prove that the orbit [x] intersects the orbit [h]. Indeed,
h(x,x)=x^-1hxx=[ -1 1; -1 0 ][ 0 1; 1 0 ][ 0 -1; 1 -1 ][ 0 -1; 1 -1 ]=
[ 0 1; 1 0 ]=
h.
This example shows that the orbit [x] of a point x in a binary G-space X does not generally coincide with the orbits of all points of [x]. It may contain smaller orbits. However, there exists a large class of binary G-spaces with disjoint orbits.
Two orbits of a distributive binary G-space X either are disjoint or coincide.
Proof. Let x be any element of a distributive binary G-space X. By Theorem <ref>, the set G(x,x), x∈ X,
is bi-invariant; therefore, it is the orbit of the point x: [x] = G(x, x).
First, we prove that [x] is the orbit of each of its points, i.e., contains no proper bi-invariant subsets. Take any element g_0(x,x)∈ [x]. Let us show that [g_0(x,x)]=[x]. Obviously, [g_0(x,x)]⊂ [x]. Therefore, it suffices to check that [x] ⊂ [g_0(x,x)], i.e., any element g(x,x) of the orbit [x] is also an element of the orbit [g_0(x,x)]:
g(x,x) = g'(g_0(x,x), g_0(x,x))
for some g'∈ G. Indeed, taking into account the distributivity of the binary action, we obtain
g(x,x)= g_0g_0^-1g(x,x) = g_0(x, g_0^-1g(x,x)) =
=g_0^-1g(g_0(x,x), g_0(x,x)) = g'(g_0(x,x), g_0(x,x)),
where g'=g_0^-1g.
Now let [x] and [x'] be any orbits of a distributive binary G-space X. Suppose that these orbits intersect, i.e., there exists a point x∈ [x]∩ [x']. According to what was proved above, we have [x]=[x] and [x']=[x]. Therefore, the orbits [x] and [x'] coincide.
Let G be a topological group, and let H be a subgroup of G. The continuous mapping α : H× G^2→ G defined by
α(h,x,y) = xhx^-1y or h(x,y) = xhx^-1y,
where h∈ H and x,y∈ G are arbitrary elements, is a binary action of H on G. Indeed,
h̃h(x,y)=xh̃hx^-1y=xh̃x^-1xhx^-1y=h̃(x, xhx^-1y)=h̃(x, h(x,y)),
e(x,y)=xex^-1y=y
for all h, h̃∈ H and x,y ∈ G.
In contrast to (<ref>), the binary action (<ref>) is distributive:
h̃(h(x,y), h(x,z))=h̃(xhx^-1y, xhx^-1z)=
=xhx^-1yh̃y^-1xh^-1x^-1xhx^-1z=xhx^-1yh̃y^-1z=
=h(x,yh̃y^-1z)=h(x,h̃(y, z))
for all h,h̃∈ H and x,y,z∈ G.
Thus, the orbits of the binary H-space under consideration are the set of the form H(x,x). Since H(x,x) = xHx^-1x = xH, it follows that the orbits are the left cosets of H in G.
To describe the orbit of a point x of any binary G-space X, we recursively define sets G^n(x,x), n∈ N,
by
G^1(x,x) = G(x,x), … , G^n(x,x) = G(G^n-1(x,x), G^n-1(x,x)).
It is easy to see that
(1) x∈ G^1(x,x)⊂ G^2(x,x)⊂…⊂ G^n(x,x) ⊂ ...;
(2) if G^n(x,x) is bi-invariant for some n∈ N, then G^k(x,x)=G^n(x,x) for all positive integers k>n and, therefore, ⋃_i=1^∞ G^i(x,x) = G^n(x,x);
(3) the set ⋃_i=1^∞ G^i(x,x) is bi-invariant in the binary G-space X.
These assertions directly imply the following proposition.
For any element x of a binary G-space X,
[x]= ⋃_i=1^∞ G^i(x,x),
where [x] is the orbit of x.
The orbit of a point x of a binary G-space X is said to be finitely generated if [x]=G^n(x,x) for some n∈ N. Otherwise, the orbit is said to be infinitely generated.
In the examples of binary G-spaces constructed previously, including those of distributive binary G-spaces, all orbits are finitely generated. However, the following theorem is valid.
There exist binary G-spaces with infinitely generated orbits.
Let G be an infinite topological group. Suppose that there exist elements h,x∈ G satisfying the following conditions:
(1) the elements h and x are of order 2: h^2=x^2=e, where e is the identity element of G;
(2) the element xh is of infinite order.
For example, let G=GL(2,𝐑) be the topological group of nonsingular square matrices of order 2. Consider
h=
[ 1 0; 0 -1 ],
x=[ -1 0; 1 1 ].
It is easy to verify that the matrices h and x satisfy conditions (1) and (2).
Consider the subgroup H={e,h} of G. It acts binarily on G by the rule (<ref>) (see Example <ref>). Let us prove that, in this binary H-space, the orbit of the element x is infinitely generated. To this end, it suffices to show that H^n+1(x,x)≠ H^n(x,x) for all positive integers n.
It immediately follows from the definition of the binary action (<ref>) and condition (1) that any element of H^n(x,x) different from x has one of the following forms:
(i) (xh)^i, (ii) (xh)^ix, (iii) h(xh)^i, (iv) h(xh)^ix,
where i is a positive integer. For example,
e(x,x)=x,
h(x,x)=x^-1hxx=xh,
h(h(x,x),x)=(xh)^-1h(xh)x=hxhxhx=h(xh)^2x,
h(x,h(x,x))=x^-1hxh(x,x)=xhxxh=x,
h(h(x,x),h(x,x))=h(xh,xh)=(xh)^-1h(xh)xh=hxhxhxh=h(xh)^3.
Therefore,
H^1(x,x)={x,xh}, H^2(x,x)={x,xh, h(xh)^2x, h(xh)^3}.
Since H^n(x,x) is finite, there exists an element y∈ H^n(x,x) which contains xh to the highest power k. Let us prove that at least one of the elements h(x,y) and h(xh,y) belongs to H^n+1(x,x) and does not belong to H^n(x,x). Consider the possible cases.
Case 1. Suppose that y has the form (i): y=(xh)^k. Then
h(xh,y)=h(xh,(xh)^k)= (xh)^-1h(xh)(xh)^k=
=hxhxh(xh)^k=h(xh)^k+2.
Case 2. If y has the form y=(xh)^kx, then
h(xh,y)=h(xh,(xh)^kx)= (xh)^-1h(xh)(xh)^kx=
= hxhxh(xh)^kx=h(xh)^k+2x.
Case 3. If y has the form y=h(xh)^k, then
h(x,y)=h(x,h(xh)^k)= x^-1hxh(xh)^k= xhxh(xh)^k=(xh)^k+2.
Case 4. If y has the form y=h(xh)^kx, then
h(x,y)=h(x,h(xh)^kx)= x^-1hxh(xh)^kx= xhxh(xh)^kx=(xh)^k+2x.
In all cases, the element specified above belongs to the set H^n+1(x,x) but does not belong to H^n(x,x), because xh has infinite order and its power is higher than k. Therefore,
H^n+1(x,x)≠ H^n(x,x).
This completes the proof of the theorem.
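The growth established in this proof can also be observed numerically. The following sketch (purely illustrative; matrix entries are exact integers here and are rounded only to build hashable keys) iterates the recursion H^n(x,x)=H(H^n-1(x,x),H^n-1(x,x)) for the matrices h and x above and prints the sizes of the resulting sets:

```python
import numpy as np
from itertools import product

e = np.eye(2)
h = np.array([[1., 0.], [0., -1.]])
x = np.array([[-1., 0.], [1., 1.]])
H = [e, h]

def act(g, a, b):
    """The binary action g(a, b) = a^{-1} g a b used in this example."""
    return np.linalg.inv(a) @ g @ a @ b

def key(m):
    """Hashable key for a matrix."""
    return tuple(np.round(m, 6).ravel())

level = {key(x): x}                        # start from the single point {x}
for n in range(1, 6):
    new = dict(level)
    for g in H:
        for a, b in product(level.values(), repeat=2):
            m = act(g, a, b)
            new[key(m)] = m
    level = new
    print(f"|H^{n}(x,x)| = {len(level)}")  # strictly growing, so the orbit is infinitely generated
```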
|
http://arxiv.org/abs/2307.04054v1 | 20230708222123 | Deep Unsupervised Learning Using Spike-Timing-Dependent Plasticity | [
"Sen Lu",
"Abhronil Sengupta"
] | cs.CV | [
"cs.CV"
] |
Deep Unsupervised Learning Using Spike-Timing-Dependent Plasticity
Sen Lu, Abhronil Sengupta
School of Electrical Engineering and Computer Science
The Pennsylvania State University
University Park, PA 16802, USA
Email: {senlu, sengupta}@psu.edu
============================================================================================================================================================================================
Spike-Timing-Dependent Plasticity (STDP) is an unsupervised learning mechanism for Spiking Neural Networks (SNNs) that has received significant attention from the neuromorphic hardware community. However, scaling such local learning techniques to deeper networks and large-scale tasks has remained elusive. In this work, we investigate a Deep-STDP framework where a convolutional network is trained in tandem with pseudo-labels generated by the STDP clustering process on the network outputs. We achieve 24.56% higher accuracy and 3.5× faster convergence speed at iso-accuracy on a 10-class subset of the Tiny ImageNet dataset in contrast to a k-means clustering approach.
Unsupervised Learning, Spiking Neural Networks, Spike-Timing-Dependent Plasticity
§ INTRODUCTION
With high-quality AI applications permeating our society and daily lives, unsupervised learning is gaining increased attention as the cost of procuring labeled data has been skyrocketing concurrently. The ever-more data-hungry machine learning models usually require a humongous amount of labeled data, sometimes requiring expert knowledge, to achieve state-of-the-art performance today. Since manual annotation requires a huge investment of resources, unsupervised learning is naturally emerging as the best alternative.
One of the most prominent unsupervised learning methods is clustering. The main concept of clustering is to compress the input data (like images in the case of computer vision problems) into lower dimensions such that the low-dimensional features can be clustered into separable groups. The efficiency of the sample clustering process improves with better representations of the compressed features. Since the quality of features depends only on the dimension reduction algorithm, the design and choice of the clustering method are critical to the success of unsupervised learning. However, most real-world tasks are not easily represented as separable low-dimensional points. Earlier attempts include classical PCA reduction before clustering <cit.>, while others attempt to augment more features with “bags of features" <cit.>; but mostly constrained to smaller tasks. Recent works like DeepCluster have explored scaling of unsupervised learning approaches by incorporating the k-means clustering algorithm with a standard Convolutional Neural Network (CNN) architecture that can learn complex datasets such as ImageNet without any labels <cit.>. Some works have also proven that pre-training the network, even unsupervised, is beneficial to building the final model in terms of accuracy and convergence speed <cit.>.
The focus of this article, however, is on scaling unsupervised learning approaches in a relatively nascent, bio-plausible category of neural architectures - Spiking Neural Networks (SNNs). SNNs have been gaining momentum for empowering the next generation of edge intelligence platforms due to their significant power, energy, and latency advantages over conventional machine learning models <cit.>. One of the traditional mechanisms of training SNNs is through Spike-Timing-Dependent Plasticity (STDP) where the model weights are updated locally based on firing patterns of connecting neurons inspired by biological measurements <cit.>. STDP based learning rules have been lucrative for the neuromorphic hardware community where various emerging nanoelectronic devices have been demonstrated to mimic STDP based learning rules through their intrinsic physics, thereby leading to compact and resource-efficient on-chip learning platforms <cit.>. Recent works have also demonstrated that unsupervised STDP can serve as an energy-efficient hardware alternative to conventional clustering algorithms <cit.>.
However, scaling STDP trained SNNs to deeper networks and complex tasks has remained a daunting task. Leveraging insights from hybrid approaches to unsupervised deep learning like DeepCluster <cit.>, we aim to address this missing gap to enable deep unsupervised learning for SNNs. Further, while techniques like DeepCluster have shown promise to enable unsupervised learning at scale, the impact of the choice of the clustering method on the learning capability and computational requirements remains unexplored.
The main contributions of the paper can therefore be summarized as follows:
(i) We propose a hybrid SNN-compatible unsupervised training approach for deep convolutional networks and demonstrate its performance on complex recognition tasks going beyond toy datasets like MNIST.
(ii) We demonstrate the efficacy of STDP enabled deep clustering of visual features over state-of-the-art k-means clustering approach and provide justification through empirical analysis by using statistical tools, namely Fisher Information Matrix Trace, to prove that STDP learns faster and more accurately.
(iii) We also provide preliminary computational cost estimate comparisons of the STDP enabled Deep Clustering framework against conventional clustering methods and demonstrate the potential of significant energy savings.
§ RELATED WORKS
Deep Learning: Unsupervised learning of deep neural networks is a widely studied area in the machine learning community <cit.>. It can be roughly categorized into two main methods, namely clustering and association. Among many clustering algorithms, k-means <cit.>, or any variant of it <cit.>, is the most well-known and widely used method that groups features according to its similarities. Its applications can be found in practice across different domains <cit.>. Other approaches focus on associations to learn data representations which are described by a set of parameters using architectures such as autoencoders <cit.> (where the data distribution is learnt by encoding features in latent space).
In more recent works, such unsupervised learning methods have been applied to larger and more complex datasets <cit.>, making them applicable to more difficult problems. Further, recent advances in generative models have also provided opportunities at mapping unlabeled data to its underlying distribution, especially in the domain of image generation using Generative Adversarial Network (GAN) <cit.> with reconstruction loss directly <cit.> or using the auto-encoded latent space <cit.>. Dumoulin et al.'s recent effort at combining GAN and auto-encoder has demonstrated even better performance <cit.>.
Bio-Plausible Learning: Visual pattern recognition is also of great interest in the neuromorphic community <cit.>. In addition to standard supervised vision tasks, SNNs offer a unique solution to unsupervised learning - the STDP learning method <cit.>. In this scheme, the neural weight updates depend only on the temporal correlation between spikes without any guiding signals, which makes it essentially unsupervised. While it offers a bio-plausible solution, it is rarely used beyond MNIST-level tasks<cit.> and primarily used for single-layered networks. Going beyond conventional STDP based learning, Lee et al. <cit.> proposed an STDP-based pre-training scheme for deep networks that greedily trained the convolutional layers' weights, locally using STDP, one layer at a time but limited only to MNIST. Similarly, in Ferre et al.'s work <cit.>, the convolutional layers were trained on CIFAR10 and STL-10 with simplified STDP, but the layers were also trained individually with complex mechanisms. Further, their works are also limited to shallow convolutional architectures.
Our work explores a hybrid algorithm design based on a merger of the above two approaches. Our proposed framework provides a global training signal for the CNN using a straightforward and end-to-end STDP-based SNN implementation. We demonstrate significant accuracy improvement and computation savings for VGG-15 architecture on the Tiny ImageNet dataset in contrast to state-of-the-art deep clustering approaches.
§ PRELIMINARIES
§.§ Deep Clustering with k-means Algorithm
Deep Clustering <cit.> enabled unsupervised training of visual features primarily relies on the ability of clustering algorithms like the k-means to group together similar data points. k-means is a popular unsupervised algorithm for separating data points into distinct clusters. Given a user-specified value of k, the algorithm will find k clusters such that each data point is assigned to its nearest cluster. The vanilla implementation of the k-means algorithm iteratively calculates the Euclidean distance between points for comparison and updates the cluster centroids to fit the given distribution.
Deep Clustering utilizes the traditional CNN architecture to obtain the features to be used for clustering. The reason behind this feature reduction choice hinges upon the fact that a randomly initialized and untrained CNN outperforms a simple multilayer perceptron network by a considerable margin <cit.>. Driven by this observation, the main idea behind this framework is to bootstrap the better-than-chance signal to teach the network and learn the features. This teaching signal is transformed into a `pseudo-label' so that the network can learn from it. The `pseudo-labels' which may or may not be the same as the ground truth labels reflect the direction that the network weights should be updated. By doing so, the feature extraction layers may become slightly better at recognizing certain features and thereby producing more representative features. The improved features can ideally be more separable, thereby generating higher quality `pseudo-labels'. By repeating this process iteratively, the CNN should ideally converge by learning the `pseudo-labels' <cit.>.
Note that the CNN layers used for feature-reduction purposes can be converted into SNN layers with various methods as shown in many recent studies <cit.>, or trained from scratch using backpropagation through time (BPTT) <cit.> which opens up the potential for adopting the entire feature-reduction in a low-power neuromorphic setting. In this work, we therefore do not focus on the CNN-SNN conversion and train it by backpropagation without unrolling through time.
§.§ STDP Enabled Neuromorphic Clustering
STDP is an unsupervised learning mechanism that learns or unlearns neurons' synaptic connections based on spike timings <cit.>. In particular, the synaptic connection is strengthened when the post-synaptic neuron fires after the pre-synaptic neuron, and the connection is weakened if the post-synaptic neuron fires before the pre-synaptic neuron. The intuition behind STDP follows Hebbian learning philosophy where neurons that are activated together and sequentially are more spatio-temporally correlated and thus form a pattern, and vice versa. This learning rule enables the encoding of complex input distributions temporally without the need for guiding signals such as the label. The weights of the neuronal synapses are updated based on spike timings <cit.> as follows:
Δ w =
A_+e^-Δ t/β_+, if Δ t > 0
-A_-e^Δ t/β_-, if Δ t < 0
where, w is the weight, A_+/- are the learning rates, Δ t is the exact time difference between post-neuron and pre-neuron firing and β_+/- are the time-constants for the learning windows. In practical implementations, the exact spike timing is usually replaced with a spike trace (see Section IV-B) that decays over time to reduce memory storage for STDP implementation <cit.>.
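As a minimal single-synapse sketch of such a trace-based update (hyper-parameter values are illustrative, not those used later in this work):

```python
def stdp_step(w, pre_spike, post_spike, pre_trace, post_trace,
              a_plus=0.01, a_minus=0.012, trace_decay=0.95):
    """One timestep of trace-based STDP for a single synapse (spikes are 0/1)."""
    pre_trace = trace_decay * pre_trace + pre_spike      # traces decay, then register new spikes
    post_trace = trace_decay * post_trace + post_spike
    w += a_plus * post_spike * pre_trace                 # post fires after recent pre activity -> potentiate
    w -= a_minus * pre_spike * post_trace                # pre fires after recent post activity -> depress
    return w, pre_trace, post_trace
```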
STDP training is predominantly explored in Winner-Take-All networks in literature which consists of an excitatory layer of neurons with recurrent inhibitory connections <cit.> (see “STDP Enabled SNN for Clustering" sub-panel in Fig. <ref>). Such connections create a mechanism called `lateral inhibition' where activated neurons inhibit other neurons' activities and therefore assist the activated neurons to accentuate the learning process of its weights. To prevent any neuron from dominating the firing pattern, the second key mechanism is `homeostasis' which balances the overall activities of the neurons. Homeostasis prevents neurons from runaway excitation or total quiescence. One popular way to achieve this is through adaptive and decaying thresholding in which after every firing event, the firing threshold increases such that the firing neuron requires higher membrane potential to fire again in the future. Consequently, this will provide opportunities for other neurons in the network to fire and learn the synaptic weights. The critical balance of these two mechanisms ensures stable learning of the SNN. Fig. <ref> shows an example of STDP-trained weights of the excitatory neuron layer of an SNN where representative digit shapes are learnt without any label information for the MNIST dataset <cit.>. Each neuron in the network represents a cluster. By running inferences on the STDP network, we can cluster the inputs according to their corresponding most activated neuron. The learnt weights of each neuron is equivalent to the centroid of the cluster represented by that neuron.
§ METHODS
§.§ Proposed Deep-STDP Framework
As mentioned previously, the convolutional layers of the network compress the input images to a lower dimensional feature space as a one-dimensional vector. In abstract terms, the framework solves the following optimization problem <cit.>:
min _w ∈ℝ^d × k1/N∑_n=1^Nmin _y_n ∈{0,1}^k||f_θ (img_n) - w_y_n ||_1
such that y_n^⊺ 1_k = 1
where, N is the total number of training samples, y_n is the n-th optimal neuron assignment encoded as a one-hot vector, f_θ is the ConvNet forward pass output parameterized by its weights θ, img_n is the n-th input sample, w_y_n is the STDP-learnt synaptic weight map of the most activated neuron, d is the feature dimension of the ConvNet output and k is the number of neurons/clusters in the network. By minimizing the difference between the weights of the neurons and the patterns of the features, we can obtain an SNN that generates optimal assignments of y_n parameterized by weights w, which act as the pseudo-labels for our algorithm.
With the pseudo-labels, the network training can be accomplished through the standard minimization problem of network loss which can be described by:
min _ρ, θ1/N∑_n=1^Nℒ (g_ρ (f_θ (img_n)), y^*_n )
where, θ, ρ are parameters of the ConvNet f_θ (·) and classifier g_ρ (·) respectively, ℒ(·) is the loss function, img_n again is the n-th image input, y^*_n is the n-th optimal pseudo-label for this iteration.
However, SNNs only accept discrete spikes as input and therefore the ConvNet feature outputs in floating-point representation (after appropriate pre-processing like PCA reduction and l_2-normalization <cit.>) are subsequently rate encoded by a Poisson spike train generator, where the feature values are used as the Poisson distribution rate and sampled from the respective distribution.
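A hedged sketch of this rate encoding for signed features (function and argument names are ours, not from the toolbox) could be:

```python
import numpy as np

def poisson_encode(features, timesteps, rng=None):
    """Encode a signed feature vector into +1/-1 spike trains over `timesteps` steps."""
    rng = rng if rng is not None else np.random.default_rng()
    rates = np.clip(np.abs(features), 0.0, 1.0)                        # per-step firing probability
    spikes = (rng.random((timesteps, features.size)) < rates).astype(np.float32)
    return spikes * np.sign(features)                                  # negative features emit negative spikes
```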
At the end of the pseudo-label assignment, the STDP enabled SNN resets for the next iteration. This is intuitive since after the ConvNet weight update process, the feature distribution gets shifted and hence a new set of neuron/cluster weights should be learnt by the STDP framework. Algorithms <ref>-<ref> describe the overall structure of the proposed Deep-STDP framework shown in Fig. <ref>.
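For orientation, one epoch of the framework can be summarized by the following high-level sketch; `extract_features`, `pca_l2_normalize`, `stdp_cluster`, and `train_convnet` are placeholder names for the components described above, not functions from the released code.

```python
def deep_stdp_epoch(convnet, classifier, snn, dataset):
    """One epoch: cluster ConvNet features with the STDP SNN, then fit on the pseudo-labels."""
    feats = extract_features(convnet, dataset)          # forward pass over all samples
    feats = pca_l2_normalize(feats)                     # pre-processing before rate encoding
    snn.reset()                                         # cluster weights are re-learnt every epoch
    pseudo_labels = stdp_cluster(snn, feats)            # most-activated excitatory neuron per sample
    train_convnet(convnet, classifier, dataset, pseudo_labels)
    return pseudo_labels
```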
§.§ STDP Enabled SNN for Clustering
Clustering in the SNN is mediated through the temporal dynamics of Leaky-Integrate-Fire neurons in the excitatory layer. In the absence of any spiking inputs, the membrane potential of neurons in the excitatory layer is represented by V_exc at timestep t, or simply V_exc^t. It initializes with V_exc^t=0 = V_rest and decays as,
V_exc^t = V_rest + exp(1/V_decay) (V_exc^t-1-V_rest)
where, V_rest is the resting potential and V_decay is the potential decay constant.
Prior works <cit.> on using SNNs for clustering have mainly dealt with simple datasets without negative-valued features. This is in compliance with the nature of STDP learning for positive valued spikes. However, in our scenario, we consider negative valued spiking inputs as well in order to rate encode the negative features provided as output of the ConvNet. In order to enable STDP learning for negative inputs, we decompose the weight map into positive and negative components to learn positive and negative spike patterns respectively. Therefore, in presence of spikes, the excitatory layer's neuron membrane potential dynamics is updated as,
V_exc^t ← V_exc^t + s^pre_+·w_+ + s^pre_-·w_-
where, the membrane potential is denoted by V^t_exc at timestep t, and the input spikes and pre-synaptic weights are represented by s^pre and w respectively (with their positive and negative counterparts). It is worth mentioning here that pre-neurons refer to the input neurons and post-neurons refer to the excitatory layer neurons since the synapses joining them are learnt by STDP.
Further, there is a refractory period L parameter for every neuron which will only allow execution of Eq. <ref> and <ref> if the refractory counter, l, equals `0'. A spike will be generated when the membrane potential at the current timestep is greater than the membrane threshold:
s=
1 if (V^t_exc > V_thr + ϵ) and (l=0)
0 otherwise
where, V_thr is the membrane threshold to fire a spike, ϵ is the adaptive threshold parameter, l is the refractory period counter which is reset to L upon a firing event and decays by 1 otherwise (thereby preventing neurons from firing for L timesteps after a spike). V^t_exc resets to V_reset after firing a spike. The adaptive threshold parameter acts as a balancer to prevent any neuron from being over-active (homeostasis) and is incremented by parameter α upon a firing event and otherwise decays exponentially at every timestep similar to Eq. <ref>: exp(1/ϵ_decay) ϵ. Every spike generated by a post-neuron triggers a membrane potential decrement by an amount w_inh for all the other neurons except itself.
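A condensed, vectorized sketch of one timestep of these excitatory-layer dynamics is given below; the decay factors are assumed to be constants in (0, 1), `p` is an illustrative container for the hyper-parameters, and all names are ours rather than the simulator's.

```python
import numpy as np

def excitatory_step(v, theta, refrac, s_in_pos, s_in_neg, w_pos, w_neg, p):
    """One timestep: leak, integrate, fire, adapt threshold, apply lateral inhibition."""
    active = refrac == 0
    v = p.v_rest + p.v_decay_factor * (v - p.v_rest)                  # exponential leak toward V_rest
    v = np.where(active, v + s_in_pos @ w_pos + s_in_neg @ w_neg, v)  # integrate signed input spikes
    spikes = active & (v > p.v_thr + theta)                           # fire above the adaptive threshold
    v = np.where(spikes, p.v_reset, v)
    theta = np.where(spikes, theta + p.alpha, p.theta_decay_factor * theta)
    refrac = np.where(spikes, p.refrac_len, np.maximum(refrac - 1, 0))
    others = spikes.sum() - spikes.astype(int)                        # how many *other* neurons fired
    v = v - p.w_inh * others                                          # lateral inhibition
    return v, theta, refrac, spikes
```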
In the context of our implementation, we used the spike trace τ to represent the temporal distance between two spikes. The spike trace value peaks at its firing to τ_o and exponentially decay as time lapses: exp(1/τ_decay) τ. The weight updates are similarly separated into positive and negative parts.
Pre-synaptic update:
[ Δ w_+ = -η^pre (s^pre_+ * τ^post); Δ w_- = η^pre (s^pre_- * τ^post) ]
Post-synaptic update:
[ Δ w_+ = η^post (τ^pre_+ * s^post); Δ w_- = η^post (τ^pre_- * s^post) ]
where, Δ w are the weight updates, η^pre, η^post are the learning rates for pre- and post-synaptic updates respectively, τ is the spike trace, and s is the spiking pattern. Superscript (^pre), (^post) indicates whether the trace or spike is from pre- or post-synaptic neuron respectively, and the subscript (_+), (_-) indicates whether the operation is for positive or negative input spikes. Note that the negative s^pre_- can be flipped easily by the distributive property of matrix multiplication.
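Interpreting the products above as outer products between the pre-synaptic and post-synaptic vectors (an assumption of this sketch, as are the variable names), the two updates can be written compactly as:

```python
import numpy as np

def stdp_update(w_pos, w_neg, s_pre_pos, s_pre_neg, s_post,
                tr_pre_pos, tr_pre_neg, tr_post, eta_pre, eta_post):
    """Pre- and post-synaptic STDP updates with split positive/negative weight maps."""
    # pre-synaptic spike events
    w_pos += -eta_pre * np.outer(s_pre_pos, tr_post)
    w_neg += eta_pre * np.outer(s_pre_neg, tr_post)
    # post-synaptic spike events
    w_pos += eta_post * np.outer(tr_pre_pos, s_post)
    w_neg += eta_post * np.outer(tr_pre_neg, s_post)
    return w_pos, w_neg
```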
§ EXPERIMENTS AND RESULTS
§.§ Datasets and Implementation
The proposed method was evaluated on the Tiny ImageNet dataset, which is a center-cropped subset of the large-scale ImageNet dataset <cit.>. Unlike the ImageNet 2012 dataset, which contains 1000 object categories, the Tiny ImageNet dataset comprises of only 200 categories. Due to computation constraints, we selected the first 10 classes from the Tiny ImageNet dataset by the naming order and considered both the training and testing sets for those corresponding classes in this work. All images were normalized to zero mean and unit variance and shuffled to avoid any bias. We chose VGG15 as the baseline network architecture with randomly initialized weights. Simulations were conducted using the PyTorch machine learning library and a modified version of the BindsNet toolbox <cit.> as the base platform for the experiments. The results reported for the DeepCluster framework <cit.> were obtained without any modification to the open-source codebase associated with the work, and its hyperparameters were unchanged unless mentioned in this work. The ConvNet learning rate was set to 1e-2 and the number of clusters was set to 10 times the number of classes (recommended as optimal in Ref. <cit.> and also found optimal in the Deep-STDP framework). The training was performed for 200 epochs. All results obtained were run on 2 GTX 2080Ti GPUs and the associated hyper-parameters used for the Deep-STDP framework can be found in Table <ref>.
Numerous cluster re-assignment frequencies were explored and `1' (`2') was found to be optimal for Deep-STDP (DeepCluster), i.e., the pseudo-labels were generated by passing the entire dataset once (twice) every epoch. Note that this frequency represents the number of dataset iterations per epoch. Following the evaluation method proposed by Zhang et al. <cit.>, we froze all network parameters and trained a linear layer at the output to evaluate the efficiency of the model in capturing the distribution of images in the training set as well as its usage as a pre-trained model for general use cases. We fixed the random seeds in each experiment such that the clustering process is deterministic for a particular run. To prevent loss of generality, all accuracy results reported here represent the average value over 5 independent runs with different sets of random seeds.
§.§ Evaluation Metrics
§.§.§ Fisher Information
The Fisher information (FI) quantitatively measures the amount of information retained in a statistical model after being trained on a given data distribution <cit.>. Many prior works have used this metric to measure different aspects of deep learning models including SNN models <cit.>. Unlike prior works, we use pseudo-labels to generate FI instead of ground-truth labels. FI reflects the impact of weight changes on the ConvNet output. If the FI of model parameters is small, we can conclude that the model's learning efficiency is poor since the weights can be pruned without affecting the output, and vice versa. Therefore, this metric implicitly measures the quality of the pseudo-labels.
Let us consider that the network tries to learn y from a distribution p parametrized by a set of weights θ. Given samples x, the posterior distribution is p_θ(y|x).
The Fisher information matrix (FIM) is defined as:
F=𝔼_x∼ X𝔼_y∼ p_θ(y|x) [∇_θlog p_θ(y|x) ∇_θlog p_θ(y|x)^T]
where, X is the empirical distribution of the actual dataset. However, the exact FIM is usually too large to be computed directly and therefore the value is usually approximated by its trace, which is given by:
Tr(F)=𝔼_x∼ X𝔼_y∼ p_θ(y|x) [||∇_θlog p_θ(y|x)||^2]
in which the expectations can be replaced by the averaged observation from the dataset of N samples:
Tr(F) = 1/N∑_k=1^N ||∇_θlog p_θ(y|x)||^2_2
where, Tr(F) is the trace of FIM, ∇ is the partial derivative operator. We follow the same implementation as the algorithm specified in Ref. <cit.>.
§.§.§ Normalized Mutual Information
Further, following the Deep Clustering work <cit.>, we also measured the Normalized Mutual Information (NMI) metrics to evaluate mutual information between two consecutive assignments of the STDP-enabled SNN, given by Eq. <ref>.
NMI(y^p,y^p-1) = I(y^p;y^p-1)/√([H(y^p)H(y^p-1)])
where, y^p,y^p-1 are the label assignments for epochs p and p-1, respectively, I(·) is the mutual information function, and H(·) is the entropy function.
Since the assignments y^p, y^p-1 are consecutive and are generated from the same inputs, a high NMI value indicates a high correlation between the two sets of assignments as well as stable assignments of the pseudo-labels.
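In practice this quantity can be computed directly, for example with scikit-learn's implementation using the geometric-mean normalization shown above (the label arrays below are placeholders for the two consecutive assignments):

```python
from sklearn.metrics import normalized_mutual_info_score

# integer cluster assignments of the same samples from two consecutive epochs
nmi = normalized_mutual_info_score(labels_prev_epoch, labels_curr_epoch,
                                   average_method='geometric')
```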
§.§ Performance Evaluation
Fig. <ref> demonstrates that Deep-STDP based unsupervised feature learning significantly outperforms DeepCluster approach based on k-means clustering. The superior quality of pseudo-labels generated by Deep-STDP is also explained empirically by the FIM trace variation over the learning epochs (see Fig. <ref>). While both algorithms perform similarly during the initial stages, the accuracy and FIM trace start improving significantly for the Deep-STDP approach over subsequent epochs. Performance evaluation metrics (NMI, FIM and Accuracy) for the two approaches at the end of the training process are tabulated in Table <ref>.
In addition to training an additional linear layer for numerical performance analysis, we also visualized the convolutional filter activations of the CNN trained using our proposed framework. We can observe from Fig. <ref> that the network forms distinct filters specialized for completely different visual patterns in different layers without using any ground truth label information. On the other hand, similar visualization performed on the DeepCluster trained network yielded similar simple patterns in the shallow layers without any complex patterns represented in the deeper layers, further substantiating the efficacy of the Deep-STDP approach.
§.§ Computational Cost Estimation
While a detailed system level hardware analysis for the two approaches is outside the scope of this work, we provide motivation for neuromorphic deep clustering by performing a comparative analysis of the computational cost of the two approaches.
§.§.§ Cost of k-means Clustering
To find the new centroid of a particular cluster, the algorithm calculates the averaged center of all the data points assigned to that cluster using the following equation:
c_j = 1/|C_j|∑x_i ∈ C_j
where, c_j is the averaged coordinates of the j-th centroid, |C_j| is the number of data points assigned to that corresponding cluster, and x_i is the i-th data point. Subsequently, the algorithm calculates the Euclidean distance between every data point and every centroid and assigns each data point to the cluster with the shortest distance to its centroid. The goal is to solve the optimization problem:
*argmin_C∑_j=1^k∑_i=1^|C_j| ||x_i - c_j||^2_2
where, *argmin_C solves for the optimal centroids and k is the total number of clusters.
The above two calculations will be repeated until convergence is achieved or until a maximum number of iterations is reached. Hence, the number of mathematical operations can be summarized as follows:
* Clustering Step: Compute the distance ||x_i - c_j||^2_2 from every point to every centroid and assign to k clusters
* Update Step: Re-center the centroids in new clusters by averaging over |C_j| for all clusters
* Repeat it times
To calculate the distance of a point x_i from c_j:
||x_i - c_j||_2 = √(∑_m=1^d=256(x_im - c_jm)^2)
where, d is the number of dimensions in the feature.
Hence, the number of multiplications (the number of squaring operations) in order to calculate the Euclidean distance is:
[k· d] · it · N
and the number of addition operations involved is:
[k· (2d-1) + d] · it · N
where k is the number of clusters, N is the number of training samples, and it is the maximum number of iterations in the k-means algorithm. In Eq. <ref>, the k·(d-1) component arises from the summation of the individual distance contributions along each dimension, while another k·d component arises from the subtraction operation for the distance calculation along each dimension. The last d component arises from updating the new cluster coordinates (which in the worst case will iterate through all data points, see Eq. <ref>). Given that a float ADD operation costs 0.9pJ and a float MULT operation costs 3.7pJ in a 45nm CMOS process <cit.>, we estimated the total computational cost of the clustering process for every training epoch to be 14.1mJ (considering it=20). Considering 175 epochs of DeepCluster training to reach peak accuracy, the total computational cost is 2467.5mJ.
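The operation counts above translate into an energy estimate as in the following sketch (our own illustration, not the authors' script). Here d = 256, it = 20, and the per-operation energies follow the text, whereas the cluster count k and sample count N are placeholders, so the printed value reproduces the quoted 14.1 mJ only if the actual experimental values are supplied.

E_ADD, E_MULT = 0.9e-12, 3.7e-12          # J per float ADD / MULT (45 nm CMOS)

def kmeans_clustering_energy(k, d, N, it):
    mults = k * d * it * N                 # squaring terms in the distance computations
    adds = (k * (2 * d - 1) + d) * it * N  # subtractions, per-dimension sums, centroid update
    return mults * E_MULT + adds * E_ADD   # Joules per training epoch

# k and N below are illustrative placeholders, not the values used in the text.
energy_per_epoch = kmeans_clustering_energy(k=10, d=256, N=60000, it=20)
print(f"{energy_per_epoch * 1e3:.1f} mJ per epoch, "
      f"{energy_per_epoch * 175 * 1e3:.1f} mJ over 175 epochs")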
§.§.§ Cost of STDP Clustering
In the STDP based clustering approach, the computations can be summarized into the following parts:
* Feedforward Step: Integrate input Poisson spike train through the synapses connecting input and excitatory layer
* Learning Step: Updating the excitatory layer weights based on pre- and post-synaptic spiking activities
* Inhibition Step: Updating the neuron membrane potential based on lateral inhibitory connections
* Repeat T times
Although multiplication symbols were used in Algo. <ref>, computation with spike signals can always be reduced to summation operations since the spike magnitude is always `0' or `1' <cit.>. Further, the addition operation is conditional upon the receipt of spikes, thereby reducing the computation cost by a significant margin for a highly sparse spike train. For instance, the average spiking probability per neuron per timestep in the excitatory layer of the network is only 0.19%. Hence, the total number of addition operations can be summarized as:
[p_input· |w^exc| + (p_input + p_exc)· |w^exc| + p_exc· |w^inh|] · T · N
where p_input and p_exc are the average (per neuron per timestep, averaged over the entire training process) spiking probabilities of the input and excitatory neuronal layers respectively, |w^exc| is the number of synaptic connections between the input and excitatory layer (either |w_+| or |w_-|, since the input can be either positive or negative), |w^inh| is the total number of inhibitory connections in the network, T is the number of timesteps used for the STDP training process, and N is the number of training samples.
It is worth mentioning here that we primarily focus on the computationally expensive portions of both algorithms for these calculations. In Eq. <ref>, the p_input·|w^exc| component arises from the feedforward propagation of input spikes, the (p_input + p_exc)·|w^exc| component arises from the learning step, and p_exc·|w^inh| arises from the inhibition step. Therefore, the total computational cost for Deep-STDP per epoch is 55.34mJ and, considering 50 epochs of training (iso-accuracy comparison as shown in Fig. <ref>), the total energy consumption is estimated to be 2767.2mJ, comparable to the DeepCluster framework.
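A corresponding sketch for the STDP side of the comparison is given below (again our own illustration). Only p_exc = 0.0019 (the 0.19% quoted above) and the ADD energy are taken from the text; the remaining spiking probability, connection counts, timestep count, and sample count are hypothetical placeholders.

E_ADD = 0.9e-12                            # J per float ADD (45 nm CMOS)

def stdp_clustering_energy(p_input, p_exc, n_exc_syn, n_inh_syn, T, N):
    adds_per_step = (p_input * n_exc_syn              # feedforward propagation of input spikes
                     + (p_input + p_exc) * n_exc_syn  # STDP weight updates (pre- and post-synaptic)
                     + p_exc * n_inh_syn)             # lateral inhibition updates
    return adds_per_step * T * N * E_ADD              # Joules per training epoch

# All arguments except p_exc are placeholders for illustration only.
energy_per_epoch = stdp_clustering_energy(p_input=0.02, p_exc=0.0019,
                                          n_exc_syn=256 * 100, n_inh_syn=100 * 99,
                                          T=350, N=60000)
print(f"{energy_per_epoch * 1e3:.2f} mJ per epoch, "
      f"{energy_per_epoch * 50 * 1e3:.1f} mJ over 50 epochs")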
§.§.§ System Level Cost Comparison
We note that the STDP based framework does not change the computational load of the clustering step significantly. However, the computational load at the system level will also depend on the cost of feature extraction in the ConvNet. For instance, Ref. <cit.> mentions that a third of the time during a forward pass is attributed to the clustering algorithm while the remainder is attributed to the deep ConvNet feature extraction. Therefore, we expect the Deep-STDP based framework to be significantly more resource efficient than the DeepCluster based approach due to the 3.5× reduction in the number of training epochs, which equivalently reduces the ConvNet feature extraction cost.
§ CONCLUSIONS
In conclusion, we proposed an end-to-end hybrid unsupervised framework for training deep CNNs that can be potentially implemented in a neuromorphic setting. We demonstrated significant benefits in terms of accuracy and computational cost by leveraging bio-plausible clustering techniques for deep unsupervised learning of visual features and substantiated our claims by empirical analysis through statistical tools like Fisher Information and Normalized Mutual Information. Our work significantly outperforms prior attempts at scaling bio-inspired learning rules like STDP to deeper networks and complex datasets. Future work can focus on further scaling of the approach and delving deeper into the mathematical underpinnings of the superior performance of STDP as a deep clustering mechanism.
§ ACKNOWLEDGMENTS
This material is based upon work supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number #DE-SC0021562 and the National Science Foundation grant CCF #1955815 and by Oracle Cloud credits and related resources provided by the Oracle for Research program.
ding2004k
C. Ding and X. He, “K-means clustering via principal component analysis,” in
Proceedings of the twenty-first international conference on Machine
learning, 2004, p. 29.
csurka2004visual
G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray, “Visual
categorization with bags of keypoints,” in Workshop on statistical
learning in computer vision, ECCV, vol. 1, no. 1-22. Prague, 2004, pp. 1–2.
caron2018deep
M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering for
unsupervised learning of visual features,” in Proceedings of the
European conference on computer vision (ECCV), 2018, pp. 132–149.
radford2015unsupervised
A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning
with deep convolutional generative adversarial networks,” arXiv
preprint arXiv:1511.06434, 2015.
oord2018representation
A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with
contrastive predictive coding,” arXiv preprint arXiv:1807.03748,
2018.
radford2019language
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al.,
“Language models are unsupervised multitask learners,” OpenAI blog,
vol. 1, no. 8, p. 9, 2019.
sengupta2019going
A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy, “Going deeper in spiking
neural networks: Vgg and residual architectures,” Frontiers in
neuroscience, vol. 13, p. 95, 2019.
davies2021advancing
M. Davies, A. Wild, G. Orchard, Y. Sandamirskaya, G. A. F. Guerra, P. Joshi,
P. Plank, and S. R. Risbud, “Advancing neuromorphic computing with loihi: A
survey of results and outlook,” Proceedings of the IEEE, vol. 109,
no. 5, pp. 911–934, 2021.
diehl2015unsupervised
P. Diehl and M. Cook, “Unsupervised learning of digit recognition using
spike-timing-dependent plasticity,” Frontiers in Computational
Neuroscience, vol. 9, p. 99, 2015.
saha2021intrinsic
A. Saha, A. Islam, Z. Zhao, S. Deng, K. Ni, and A. Sengupta, “Intrinsic
synaptic plasticity of ferroelectric field effect transistors for online
learning,” Applied Physics Letters, vol. 119, no. 13, 2021.
frady2020neuromorphic
E. P. Frady, G. Orchard, D. Florey, N. Imam, R. Liu, J. Mishra, J. Tse,
A. Wild, F. T. Sommer, and M. Davies, “Neuromorphic nearest neighbor search
using intel's pohoiki springs,” in Proceedings of the neuro-inspired
computational elements workshop, 2020, pp. 1–10.
bengio2012unsupervised
Y. Bengio, A. C. Courville, and P. Vincent, “Unsupervised feature learning and
deep learning: A review and new perspectives,” CoRR, abs/1206.5538,
vol. 1, no. 2665, p. 2012, 2012.
dike2018unsupervised
H. U. Dike, Y. Zhou, K. K. Deveerasetty, and Q. Wu, “Unsupervised learning
based on artificial neural network: A review,” in 2018 IEEE
International Conference on Cyborg and Bionic Systems (CBS). IEEE, 2018, pp. 322–327.
lloyd1982least
S. Lloyd, “Least squares quantization in pcm,” IEEE transactions on
information theory, vol. 28, no. 2, pp. 129–137, 1982.
krishna1999genetic
K. Krishna and M. N. Murty, “Genetic k-means algorithm,” IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
vol. 29, no. 3, pp. 433–439, 1999.
arthur2007k
D. Arthur and S. Vassilvitskii, “K-means++ the advantages of careful
seeding,” in Proceedings of the eighteenth annual ACM-SIAM symposium
on Discrete algorithms, 2007, pp. 1027–1035.
ng2006medical
H. Ng, S. Ong, K. Foong, P.-S. Goh, and W. Nowinski, “Medical image
segmentation using k-means clustering and improved watershed algorithm,” in
2006 IEEE southwest symposium on image analysis and
interpretation. IEEE, 2006, pp. 61–65.
kim2008recommender
K.-j. Kim and H. Ahn, “A recommender system using ga k-means clustering in an
online shopping market,” Expert systems with applications, vol. 34,
no. 2, pp. 1200–1209, 2008.
rumelhart1986learning
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations
by back-propagating errors,” nature, vol. 323, no. 6088, pp.
533–536, 1986.
hinton2006reducing
G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data
with neural networks,” science, vol. 313, no. 5786, pp. 504–507,
2006.
rombach2022high
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution
image synthesis with latent diffusion models,” in Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp.
10 684–10 695.
bojanowski2017optimizing
P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam, “Optimizing the latent
space of generative networks,” arXiv preprint arXiv:1707.05776, 2017.
kingma2013auto
D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv
preprint arXiv:1312.6114, 2013.
masci2011stacked
J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked
convolutional auto-encoders for hierarchical feature extraction,” in
Artificial Neural Networks and Machine Learning–ICANN 2011: 21st
International Conference on Artificial Neural Networks, Espoo, Finland, June
14-17, 2011, Proceedings, Part I 21. Springer, 2011, pp. 52–59.
diehl2015fast
P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer,
“Fast-classifying, high-accuracy spiking deep networks through weight and
threshold balancing,” in 2015 International joint conference on neural
networks (IJCNN). IEEE, 2015, pp. 1–8.
neftci2014event
E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs,
“Event-driven contrastive divergence for spiking neuromorphic systems,”
Frontiers in neuroscience, vol. 7, p. 272, 2014.
lee2018pretrain
C. Lee, P. Panda, G. Srinivasan, and K. Roy, “Training deep spiking
convolutional neural networks with stdp-based unsupervised pre-training
followed by supervised fine-tuning,” Frontiers in Neuroscience,
vol. 12, 2018.
liu2019stdpLearning
D. Liu and S. Yue, “Event-driven continuous stdp learning with deep structure
for visual pattern recognition,” IEEE Transactions on Cybernetics,
vol. 49, no. 4, pp. 1377–1390, 2019.
ferre2018unsupervised
P. Ferré, F. Mamalet, and S. J. Thorpe, “Unsupervised feature learning
with winner-takes-all based stdp,” Frontiers in computational
neuroscience, vol. 12, p. 24, 2018.
noroozi2016unsupervised
M. Noroozi and P. Favaro, “Unsupervised learning of visual representations by
solving jigsaw puzzles,” in Computer Vision–ECCV 2016: 14th European
Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings,
Part VI. Springer, 2016, pp. 69–84.
midya2019artificial
R. Midya, Z. Wang, S. Asapu, S. Joshi, Y. Li, Y. Zhuo, W. Song, H. Jiang,
N. Upadhay, M. Rao et al., “Artificial neural network (ann) to
spiking neural network (snn) converters based on diffusive memristors,”
Advanced Electronic Materials, vol. 5, no. 9, p. 1900060, 2019.
lu2020exploring
S. Lu and A. Sengupta, “Exploring the connection between binary and spiking
neural networks,” Frontiers in neuroscience, vol. 14, 2020.
lu2022neuroevolution
——, “Neuroevolution guided hybrid spiking neural network training,”
Frontiers in neuroscience, vol. 16, 2022.
gao2023high
H. Gao, J. He, H. Wang, T. Wang, Z. Zhong, J. Yu, Y. Wang, M. Tian, and C. Shi,
“High-accuracy deep ann-to-snn conversion using quantization-aware training
framework and calcium-gated bipolar leaky integrate and fire neuron,”
Frontiers in Neuroscience, vol. 17, p. 1141701, 2023.
bellec2018long
G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long
short-term memory and learning-to-learn in networks of spiking neurons,”
Advances in neural information processing systems, vol. 31, 2018.
Rathi2020DIETSNNDI
N. Rathi and K. Roy, “DIET-SNN: Direct input encoding with leakage and
threshold optimization in deep spiking neural networks,” ArXiv, vol.
abs/2008.03658, 2020.
caporale2008spike
N. Caporale and Y. Dan, “Spike timing–dependent plasticity: a hebbian
learning rule,” Annu. Rev. Neurosci., vol. 31, pp. 25–46, 2008.
Hazan_2018
H. Hazan, D. J. Saunders, H. Khan, D. Patel, D. T. Sanghavi, H. T. Siegelmann,
and R. Kozma, “Bindsnet: A machine learning-oriented spiking neural networks
library in python,” Frontiers in Neuroinformatics, vol. 12, p. 89,
2018.
deng2012mnist
L. Deng, “The mnist database of handwritten digit images for machine learning
research,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp.
141–142, 2012.
deng2009imagenet
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A
large-scale hierarchical image database,” in Computer Vision and
Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
zhang2016colorful
R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in
Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The
Netherlands, October 11-14, 2016, Proceedings, Part III 14. Springer, 2016, pp. 649–666.
amari2000methods
S.-i. Amari and H. Nagaoka, Methods of information geometry. American Mathematical Soc., 2000, vol. 191.
karakida2019universal
R. Karakida, S. Akaho, and S.-i. Amari, “Universal statistics of fisher
information in deep neural networks: Mean field approach,” in The 22nd
International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 1032–1041.
kim2022exploring
Y. Kim, Y. Li, H. Park, Y. Venkatesha, A. Hambitzer, and P. Panda, “Exploring
temporal information dynamics in spiking neural networks,” arXiv
preprint arXiv:2211.14406, 2022.
erhan2009visualizing
D. Erhan, Y. Bengio, A. Courville, and P. Vincent, “Visualizing higher-layer
features of a deep network,” University of Montreal, vol. 1341,
no. 3, p. 1, 2009.
han2015learning
S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections
for efficient neural network,” Advances in neural information
processing systems, vol. 28, 2015.
|
http://arxiv.org/abs/2307.05018v1 | 20230711053559 | Effects of neutron-rich nuclei masses on symmetry energy | [
"Seonghyun Kim",
"Dukjae Jang",
"Soonchul Choi",
"Tsuyoshi Miyatsu",
"Myung-Ki Cheoun"
] | nucl-th | [
"nucl-th"
] |
Preprint
Department of Physics and OMEG Institute, Soongsil University, Seoul 156-743, Republic of Korea
Corresponding Author: [email protected]
Center for Relativistic Laser Science, Institute for Basic Science (IBS), Gwangju 61005, Republic of Korea
Corresponding Author: [email protected]
Center for Exotic Nuclear Studies, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
Department of Physics and OMEG Institute, Soongsil University, Seoul 156-743, Republic of Korea
Department of Physics and OMEG Institute, Soongsil University, Seoul 156-743, Republic of Korea
We explore the impact of neutron-rich nuclei masses on the symmetry energy properties using the mass table evaluated by the deformed relativistic Hartree-Bogoliubov theory in continuum (DRHBc) model. First, using the semi-empirical mass formula with the DRHBc mass table, we investigate the symmetry energy at saturation density ρ_0, denoted as S_0, and the ratio of surface to volume contributions to the symmetry energy, κ. As a result, we obtain S_0=27.85 MeV (κ=1.38) for a_ sym(A) =S_0 (1 - κ A^-1/3) (Type I) and S_0=32.66 MeV (κ=3.15) for a_ sym(A) = S_0 (1 + κ A^-1/3 )^-1 (Type II), which are lower than those obtained using the AME2020 mass table, S_0=28.54 MeV (κ=1.29) for Type I and S_0=33.81 MeV (κ=3.04) for Type II. Second, we further investigate the effect of these changes in a_ sym(A) on the density-dependent symmetry energy by employing the empirical model of S(ρ) = C_k(ρ/ρ_0)^2/3 + C_1(ρ/ρ_0) + C_2(ρ/ρ_0)^γ and universal relation of a_ sym(A=208) = S(ρ=0.1 fm^-3). Compared to the experimental constraints, we find that S_0 and slope parameter L, determined by the DRHBc mass table with Type II, are more suitable to explain the constraints by heavy ion collisions and isobaric analog states than AME2020. We also discuss the neutron skin thickness derived from the L, comparing it with experimental measurements.
Effects of neutron-rich nuclei masses on symmetry energy
Myung-Ki Cheoun
August 12, 2023
========================================================
§ INTRODUCTION
The nuclear symmetry energy plays a crucial role in understanding experimental data on finite nuclei and many properties of isospin-asymmetric nuclear matter <cit.>.
Around the nuclear saturation density, ρ_0, the density-dependent symmetry energy is generally expanded as S(ρ) ≃ S_0+Lχ + 𝒪(χ^2) with χ = (ρ-ρ_0)/(3ρ_0), where S_0 and L denote the symmetry energy and the slope parameter at ρ_0, respectively <cit.>.
The properties of the symmetry energy, including S_0 and L, can be determined from various measurements such as heavy-ion collisions (HICs) <cit.>, neutron skin thickness measurements via parity-violating elastic electron scattering <cit.>, and astrophysical observations of neutron stars <cit.>. However, current determinations based on various experimental measurements still span a broad range of values, with 24≤ S_0 (MeV)≤36 and -10≤ L (MeV)≤130 <cit.>, making it challenging to determine the precise values of S_0 and L <cit.>.
The symmetry energy coefficient of finite nuclei, a_ sym(A), is also a key quantity for studying their characteristics because it can be extracted directly from nuclear masses, which are among the most precisely measured quantities in nuclear physics.
Using the semi-empirical mass formula, known as Bethe–Weizsäcker mass formula <cit.>, a_ sym(A) is extracted from the mass differences of isotope or isobaric nuclei <cit.>, the measured α-decay energies of heavy nuclei <cit.>, and the double differences of “experimental” symmetry energies <cit.>.
In particular, it has been proposed that a universal relation exists between a_ sym(A) and S(ρ) in mean-field theories, a_ sym(A=208) ≃ S(ρ=0.1 fm^-3) <cit.>. This relation enables us to evaluate nuclear matter properties using information derived from finite nuclei, such as the neutron skin thickness and electric dipole polarizability (EDP) of ^208 Pb <cit.>, which might be a key relation for further discussion.
The extraction of the a_sym(A) from the mass formula relies on the determination of nuclear binding energies. In the past decade, significant advancements have been made in the development of several nuclear mass tables. Notably, the KTUY05 model has introduced a mass formula incorporating shell energy corrections <cit.>, and a comprehensive evaluation of nuclear masses for 9318 nuclei has been constructed by using the finite-range droplet macroscopic (FRDM) and the folded-Yukawa single-particle microscopic mass models (FRDM2012) <cit.>. Moreover, the atomic mass evaluations, AME2020, have provided nuclear mass data for 2550 stable nuclei in their ground states, based on experimentally measured nuclear masses <cit.>. Recent efforts have been directed towards expanding the nuclear mass table to include the neutron drip line, employing the deformed relativistic Hartree-Bogoliubov theory in continuum (DRHBc) model. This extended mass table encompasses 2583 even-even nuclei, spanning from the proton drip line to the neutron drip line <cit.>. Fig. <ref> depicts the coverage of each mass table.
In this study, we explore the impact of neutron-rich nuclei on a_ sym(A) by adopting the DRHBc and AME2020 mass tables.
As shown in Fig. <ref>, the DRHBc mass table provides a broader coverage of nuclear masses, extending to neutron-rich nuclei, compared to the AME2020 mass table, which is limited to the region of experimental data. We show how the nuclear masses of neutron-rich nuclei in the DRHBc mass table affect a_ sym(A). Furthermore, we present implications of the change in a_ sym(A) for S(ρ) by employing the universal relation S(ρ=0.1 fm^-3) = a_ sym(A=208) <cit.> and compare the results with experimental constraints from heavy-ion collisions, measurements in finite nuclei, and observations of neutron stars.
This paper is organized as follows.
In Sec. <ref>, we present a_ sym(A) with the DRHBc and AME2020 mass tables.
In Sec. <ref>, we discuss the effects of change in a_ sym(A) on S(ρ).
Lastly, a summary is included in Sec. <ref>.
§ THE SYMMETRY ENERGY COEFFICIENT WITH NEUTRON-RICH NUCLEI
In the Bethe–Weizsäcker mass formula, the binding energy of a nucleus with mass number A (=N+Z) is given by
B(A,Z) = a_v A - a_ surf A^2/3 - a_ sym(Z-N)^2/A - E_ Coul(A,Z) + a_ pair A^-1/2,
where a_v(surf)[pair] stands for the coefficient of volume (surface) [pairing] term and E_ Coul is the Coulomb energy.
Taking into account the difference of binding energies between isobaric nuclei, the symmetry energy coefficient of finite nuclei, a_ sym(A,Z,n), is written as
a_ sym(A,Z,n) = A/[8(A-2Z)] [(B(A,Z+n)-B(A,Z-n))/n - (E_ Coul(A,Z+n)-E_ Coul(A,Z-n))/n],
with n being a positive integer that determines the binding energy difference of isobaric nuclei.
Although the general form of a_ sym is expressed as a function of A, we explicitly present Z as well as A in Eq. (<ref>) to examine the isospin dependence of a_ sym using the mass table with neutron-rich nuclei.
To avoid the choice of a reference nucleus used in Ref. <cit.>, we simply consider the mean value of Eq. (<ref>):
ã_ sym(A,Z) = (1/m) ∑_n=1^m a_ sym(A,Z,n),
with m being the number of pairs of isobaric nuclei.
In addition, we take the average of Eq. (<ref>) to compare it with the conventional symmetry energy coefficient of finite nuclei, which depends only on A:
a̅_ sym(A) = (1/k) ∑_Z=Z_ min^Z_ max ã_ sym(A,Z),
where Z_ max (Z_ min) denotes the maximum (minimum) proton number and k is the number of ã_ sym(A,Z) values in a given isobaric chain. Hereafter, we use a_ sym(A) for a̅_ sym(A).
In this study, we employ two phenomenological functions a_ sym(A) to fit the data obtained by Eq. (<ref>) from the given mass tables.
One is a_ sym(A)=S_0(1-κ A^-1/3) (Type I) and the other is given by a_ sym(A)=S_0(1+κ A^-1/3)^-1 (Type II) with the parameters S_0 and κ, where κ indicates the ratio of surface to volume contributions of the a_ sym(A), i.e., κ = a_ sym^S(A)/a_ sym^V(A) <cit.>.
We can see that Type I corresponds to the first-order expansion of Type II in powers of A^-1/3.
In both forms, S_0 is dominant at large A, while κ becomes effective at small A.
To precisely evaluate the a_ sym(A), it is necessary to remove the microscopic shell corrections from their binding energies because those corrections are not considered in the Bethe–Weizsäcker mass formula.
This is the same as in the case of Wigner correction.
The binding energy of a nucleus in Eq. (<ref>) is hence given by B(A,Z) = B_ Data(A,Z)-E_ sh(A,Z)-E_W(A,Z), where B_ Data(A,Z) is the experimental data taken from the AME2020 or DRHBc mass tables, E_ sh(A,Z) is the shell correction energy, and E_W(A,Z) is the Wigner correction <cit.>.
We here adopt E_ sh(A,Z) from the KTUY05 mass formula <cit.> since the shell corrections in the DRHBc mass table have not been studied yet.
We also use the form of E_W(A,Z) = 10 exp (-4.2 |I|) with isospin asymmetry, I=(N-Z)/A <cit.>.
As for the Coulomb energy in Eq. (<ref>), we exploit the same expression in Ref. <cit.>, deduced from the 88 pairs of mirror nuclei in the region of 11 ≤ A ≤ 75: E_ Coul(A,Z) = a_ Coul Z(Z-1)(1-bZ^-2/3)/A^1/3 with a_ Coul=0.704 MeV and b=0.985.
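As an illustration of this procedure (a sketch of ours, not the actual analysis code), the following Python fragment evaluates the isobaric-difference formula and the averages defined above for a mass table stored as a dictionary B of binding energies from which the shell and Wigner corrections have already been subtracted, and then indicates how the Type I and Type II forms can be fitted; the dictionary contents and the fit starting values are placeholders.

import numpy as np
from scipy.optimize import curve_fit

A_COUL, B_COUL = 0.704, 0.985                 # MeV; Coulomb parameters quoted above

def e_coul(A, Z):
    return A_COUL * Z * (Z - 1) * (1 - B_COUL * Z**(-2.0 / 3.0)) / A**(1.0 / 3.0)

def a_sym_AZn(B, A, Z, n):
    # Symmetry coefficient from the isobaric pair (A, Z+n) and (A, Z-n).
    dB = (B[(A, Z + n)] - B[(A, Z - n)]) / n
    dC = (e_coul(A, Z + n) - e_coul(A, Z - n)) / n
    return A / (8.0 * (A - 2 * Z)) * (dB - dC)

def a_sym_A(B, A, n_max=2):
    # Average over n and over Z within the isobaric chain of mass number A.
    chain = []
    for (A_i, Z) in B:
        if A_i != A or A == 2 * Z:
            continue
        pairs = [a_sym_AZn(B, A, Z, n) for n in range(1, n_max + 1)
                 if (A, Z + n) in B and (A, Z - n) in B]
        if pairs:
            chain.append(np.mean(pairs))
    return np.mean(chain) if chain else np.nan

type_I  = lambda A, S0, kappa: S0 * (1.0 - kappa * A**(-1.0 / 3.0))
type_II = lambda A, S0, kappa: S0 / (1.0 + kappa * A**(-1.0 / 3.0))
# B = {(A, Z): corrected binding energy in MeV}   # placeholder for the DRHBc or AME2020 table
# A_vals = sorted({A for (A, Z) in B}); asym = [a_sym_A(B, A) for A in A_vals]
# (S0, kappa), _ = curve_fit(type_II, A_vals, asym, p0=(30.0, 3.0))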
Fig. <ref> shows the symmetry energy coefficients of finite nuclei, ã_ sym(A,Z) in Eq. (<ref>) and a̅_ sym(A) in Eq. (<ref>), with the mass tables of DRHBc (upper panel) and AME2020 (lower panel). The ã_ sym(A,Z) extracted from the DRHBc mass table covers the neutron-rich region much more extensively than that from the AME2020 mass table. In particular, in Fig. <ref>, the ã_ sym(A,Z) values for the neutron-rich nuclei (yellow points) suppress a̅_ sym(A), resulting in a reduction of S_0. As a result, we obtain S_0=27.85 MeV (Type I) and S_0=32.66 MeV (Type II) with the DRHBc mass table. On the other hand, for the AME2020 mass table, we obtain S_0=28.54 MeV (Type I) and S_0=33.81 MeV (Type II). This implies that the binding energies of neutron-rich nuclei contribute to a reduction in S_0. Furthermore, it is noteworthy that a more substantial decrease in S_0 would occur if a broader range of neutron-rich nuclides could be considered. However, the inclusion of such nuclides in the DRHBc mass table was limited by the availability of the shell correction data adopted from KTUY05.
§ EFFECTS OF CHANGES IN A_ SYM(A) ON NUCLEAR MATTER PROPERTIES
We evaluate the effects of the change in a_ sym(A) due to the neutron-rich nuclei on S(ρ) by employing the following empirical density-dependent symmetry energy model <cit.>:
S(ρ) = C_k (ρ/ρ_0)^2/3 + C_1 (ρ/ρ_0) + C_2 (ρ/ρ_0)^γ.
We take the previous determinations of C_k and γ from the correlations in symmetry energy parameters, C_k=17.47 MeV and γ=1.52 <cit.>. In addition, to determine the remaining coefficients C_1 and C_2, we adopt the two relations S(ρ =ρ_0) = S_0 and S(ρ=0.1 fm^-3) ≃ a_ sym(A=208) <cit.>. We note that the DRHBc (AME2020) mass table results in a_ sym(A=208) = 21.36 MeV (a_ sym(A=208) = 22.31 MeV) for Type I and a_ sym(A=208) = 21.32 MeV (a_ sym(A=208) = 22.33 MeV) for Type II. For ρ_0, we adopt ρ_0 = 0.15 ± 0.01 fm^-3 <cit.>. Taking into account the two conditions, we determine C_1 and C_2 for each result from the DRHBc and AME2020 mass tables. Moreover, using the relation L = 2C_k + 3C_1 + 3C_2γ, we evaluate L. We tabulate the determinations of C_1, C_2, S_0, and L in Tab. <ref>, in which the upper and lower limits for each case stem from the uncertainty in ρ_0.
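The two conditions above reduce to a 2×2 linear system for C_1 and C_2; a minimal sketch (ours, for illustration only) is given below, fed with the DRHBc Type II numbers quoted above and the central value ρ_0 = 0.15 fm^-3.

import numpy as np

C_K, GAMMA = 17.47, 1.52                       # MeV and exponent quoted above

def symmetry_energy_parameters(S0, asym208, rho0):
    # Impose S(rho0) = S0 and S(0.1 fm^-3) = a_sym(A=208) on
    # S(rho) = C_k u^(2/3) + C_1 u + C_2 u^gamma, with u = rho/rho0.
    u = 0.1 / rho0
    M = np.array([[1.0, 1.0],
                  [u,   u**GAMMA]])
    rhs = np.array([S0 - C_K,
                    asym208 - C_K * u**(2.0 / 3.0)])
    C1, C2 = np.linalg.solve(M, rhs)
    L = 2.0 * C_K + 3.0 * C1 + 3.0 * GAMMA * C2  # slope parameter at rho0
    return C1, C2, L

# DRHBc Type II inputs quoted in the text; rho0 at its central value.
print(symmetry_energy_parameters(S0=32.66, asym208=21.32, rho0=0.15))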
Fig. <ref> shows the evaluated S(ρ) from the determinations in Tab. <ref> as a function of ρ with experimental constraints from analyses of EDP in ^208 Pb <cit.>, HICs <cit.>, and the isobaric analog states with neutron skin (IAS+Nskin) data <cit.>. The EDP measurement provides constraints on S(ρ) at ρ≲ 2ρ_0/3, which are consistent with our determinations of S(ρ=0.1 fm^-3) for both types regardless of the mass table. On the other hand, the constraints of S(ρ) around ρ_0 are provided from the analyses of HIC and IAS+Nskin, in which the allowed range of the S(ρ) depends on the uncertainty of evaluated S(ρ).
The behavior of S(ρ) depends on three conditions. First, the condition S_0 = S(ρ = ρ_0) determines the behavior of S(ρ) in the vicinity of ρ≈ρ_0. Since an increase in S_0 leads to a higher value of S(ρ=ρ_0), S(ρ) becomes stiffer as S_0 increases for fixed S(ρ≃ 0.1 fm^-3) <cit.>. Therefore, the S(ρ) for Type II (S_0 ≃ 33 MeV) is stiffer than that for Type I (S_0≃ 28 MeV), which results in a higher L for Type II than for Type I. (See Tab. <ref>.)
Second, the stiffness of S(ρ) depends not only on S_0 but also on S(ρ = 0.1 fm^-3). In Fig. <ref>, there are two intersecting points at ρ=0.1 fm^-3. Each point stems from the condition S(ρ = 0.1 fm^-3) = a_ sym(A=208) for the corresponding mass table. Since the fitted line for the DRHBc mass table in Fig. <ref> leads to a lower value of a_sym(A=208) compared to the AME2020 mass table, the intersecting point for the DRHBc mass table in Fig. <ref> is lower than that for the AME2020 mass table. This lower value of S(ρ=0.1 fm^-3) contributes to making S(ρ) stiffer. As a result, when we compare the S(ρ) for each mass table in Type I, the S(ρ) with the DRHBc mass table is stiffer than that of the AME2020 case, despite its lower S_0. However, in the case of Type II, the difference in S_0 between DRHBc and AME2020 is greater than that of Type I, so that the S(ρ) for AME2020 is slightly stiffer than the S(ρ) for the DRHBc mass table. Consequently, in Tab. <ref>, the L for DRHBc in Type I (Type II) is higher (lower) than the L for AME2020.
Third, the L depends on ρ_0. For the given two conditions of S(ρ=0.1 fm^-3)=a_ sym(A=208) and S(ρ=ρ_0) = S_0, S(ρ) in Eq. (<ref>) decreases, as ρ_0 increases. In this case, the S(ρ) becomes softer, which in turn reduces L, and vice versa. This is shown in Tab. <ref>, where the upper (lower) limits of L correspond to the results with lower (upper) limit of ρ_0, respectively.
We compare our determinations of S_0 and L with experimental and observational constraints in Fig. <ref>. The orange-colored lines represent the constrained region of S_0 and L from the HIC experiments involving collisions between ^112 Sn and ^124 Sn <cit.>. Here, the region with hatched diagonal lines includes constraints from the pygmy dipole resonance data, yielding 30.2 ≤ S_0 ( MeV) ≤ 33.8 <cit.>. Consequently, out of the four cases considered in our determinations, only the S_0 value for Type II with the DRHBc mass table is allowed by this constraint. Furthermore, this region constrains L as L ≤ 96.7 MeV for the DRHBc Type II, which in turn constrains ρ_0 as ρ_0 ≥ 0.156 fm^-3.
Measurements from finite nuclei also provide constraints on S(ρ). The FRDM is advantageous for extracting the symmetry energy from measured binding energies because it can precisely evaluate the contribution of each term in the empirical mass formula. We show the constraints from the FRDM by using the green dashed box in Fig. <ref>, which provides S_0 = 32.5 ± 0.5 MeV and L = 70 ± 15 MeV <cit.>. The constraint on S_0 only allows the case of Type II with the DRHBc mass table. However, the constraint on L excludes our determinations of L for Type II. We also show constraints from the analysis of IAS <cit.> by using the gray solid box in Fig. <ref>. Notably, this constraint only allows the determinations of S_0 and L for the DRHBc Type II. In this case, ρ_0 is constrained as ρ_0 ≥ 0.148 fm^-3 by the given L.
Astrophysical observations also provide important constraints on S(ρ). For instance, the Quantum Monte Carlo (QMC) technique, an effective approach to solve the many-body problem, has been combined with constraints on the mass and radius of neutron stars, which provides 31.2 < S_0 ( MeV) < 34.3 and 36 < L ( MeV) < 55 <cit.>. We represent this constraint by using the magenta dotted line in Fig. <ref>. Our determinations of S_0 for Type II with both mass tables are allowed within this constraint, but the L is excluded by it. On the other hand, the values of L for Type I are allowed, but the S_0 is excluded. We note that such astrophysical constraints also depend on uncertainties related to the description of X-ray burst dynamics and the emissivity of the stellar surface <cit.>. Therefore, there could exist a discrepancy between astronomical and experimental constraints, which is expected to be reduced with greater precision in the future.
Lastly, we discuss the effects of the change in a_ sym(A) on the neutron skin thickness, Δ R_ np. Over the past decades, various methods have been employed to measure Δ R_ np, including coherent π^0γ production <cit.>, pionic atoms <cit.>, π scattering <cit.>, p̅ annihilation <cit.>, and elastic (polarized) proton scattering <cit.>. Recently, the PREX-2 collaboration reported a new measurement of Δ R_ np = 0.283 ± 0.071 fm, using parity-violating electron scattering <cit.>. To compare our determinations with those experimental measurements, we employ the relation Δ R_ np = 0.101 + 0.00147 L (with Δ R_ np in fm and L in MeV) <cit.>. As a result, for Type I, we obtain Δ R_ np = 0.178^+0.019_-0.016 fm (0.172^+0.020_-0.015 fm) from the DRHBc (AME2020) mass table. For Type II, we obtain Δ R_ np = 0.258^+0.031_-0.023 fm (0.259^+0.032_-0.023 fm) from the DRHBc (AME2020) mass table. These results are presented in Fig. <ref> together with other experimental determinations, in which the Δ R_np for the Type II case is in agreement with the recent measurement from PREX-2. We also note that the Δ R_np for Type II is consistent with the previous microscopic calculation based on the same DRHBc model, Δ R_ np = 0.257 fm <cit.>. Such self-consistency in R_np between microscopic and macroscopic results could be seen as supporting the present approach.
§ SUMMARY
In summary, we investigate the impact of neutron-rich nuclei masses on the properties of the symmetry energy using the DRHBc mass table. We find that the binding energies of neutron-rich nuclei can suppress a̅_ sym(A), resulting in a decreased S_0. Specifically, we obtain S_0=27.85 MeV (κ=1.38) for Type I and S_0=32.66 MeV (κ=3.15) for Type II. These values of S_0 are lower than the determinations from the AME2020 mass table, S_0=28.54 MeV (κ=1.29) for Type I and S_0=33.81 MeV (κ=3.04) for Type II. Furthermore, based on these results with the empirical form S(ρ) = C_k (ρ/ρ_0)^2/3 + C_1 (ρ/ρ_0) + C_2(ρ/ρ_0)^γ and the two presumed conditions, a_ sym(A=208) = S(ρ=0.1 fm^-3) and S_0 = S(ρ=ρ_0), we study the properties of S(ρ), L, and Δ R_np using the mass table results. We present a summary of all of the determinations in Tab. <ref>.
Our findings reveal that changes in a̅_ sym(A) and ρ_0 affect the behavior of S(ρ) under the assumption of the universal relation. Specifically, the results from the DRHBc (AME2020) mass table lead to a stiffer S(ρ) for Type I (II), compared to the case of the AME2020 (DRHBc) mass table. Interestingly, in the case of Type II, the decrease in S_0 due to the DRHBc mass table enables the determinations of S_0 to be allowed within the constraints from HICs and the IAS. In addition, the L for this case is simultaneously allowed by these constraints, depending on ρ_0. For each constraint on L, we provide new constraints on ρ_0: ρ_0 ≥ 0.156 fm^-3 for HICs and ρ_0 ≥ 0.148 fm^-3 for IAS. Furthermore, we discuss the effects of the change in a_ sym(A) on Δ R_np. Notably, our evaluation of Δ R_np in Type II is consistent with the previous microscopic calculation based on the DRHBc model as well as with the PREX-2 measurement.
The results presented in this study may change when more neutron-rich nuclei are considered. Therefore, it is desirable to investigate the effects of contributions from additional neutron-rich nuclei near the neutron drip line on a_ sym(A). Such a study should involve a wider range of shell and Wigner corrections for the neutron-rich nuclei, which are not included in the current work, and will provide a more comprehensive understanding of how neutron-rich nuclei impact the properties of the symmetry energy.
S.K., T.M. and M.K.C. are supported by the National Research Foundation of Korea (Grant Nos. NRF-2020R1A2C3006177 and NRF-2021M7A1A1075764).
D.J. and S.C. are supported by the Institute for Basic Science under IBS-R012-D1 and IBS-R031-D1, respectively.
|
http://arxiv.org/abs/2307.05611v1 | 20230710225132 | Against the "nightmare of a mechanically determined universe": Why Bohm was never a Bohmian | [
"Flavio Del Santo",
"Gerd Christian Krizek"
] | physics.hist-ph | [
"physics.hist-ph",
"quant-ph"
] |
Turán number for bushes
Zoltán Füredi
Alfréd Rényi Institute of Mathematics, Budapest, Hungary.
E-mail: .
Research partially supported by National Research, Development and Innovation Office NKFIH grants 132696 and 133819.
Alexandr Kostochka
University of Illinois at Urbana–Champaign, Urbana, IL 61801
and Sobolev Institute of Mathematics, Novosibirsk 630090, Russia. E-mail: .
Research supported in part by NSF
grant DMS-2153507 and NSF RTG grant DMS-1937241.
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================
David Bohm has put forward the first deterministic interpretation of quantum physics, and for this he seems to be regarded as a champion of determinism by physicists (both his contemporaries and the supporters of his interpretation, the so-called “Bohmians") as well as by historians of physics. The standard narrative is that he underwent a “conversion" from being a supporter of Bohr to being a staunch determinist, due to his interaction with Einstein and his commitment to Marxism. Here we show that Bohm actually upheld with continuity throughout his career some philosophical tenets that included a strong rejection of mechanistic determinism. As such, we conclude that Bohm was never a Bohmian and that his philosophical views have been largely misinterpreted.
“Why on earth are they calling it Bohmian mechanics? Haven't they read a word I have written?!" (David Bohm, reported by Basil Hiley)
§ INTRODUCTION
David Bohm (1917-1992) went down in history as the physicist who achieved the impossible by providing an alternative deterministic interpretation of quantum mechanics <cit.>.[
Bohm himself referred to his interpretation as “alternative interpretation"<cit.>, as “causal interpretation"<cit.>, and as “quantum potential interpretation". In the literature it is referred to as “Ontological interpretation" <cit.>, “De Broglie-Bohm causal interpretation"<cit.>, or “De Broglie-Bohm Pilot-Wave Theory", “Bohmian Mechanics" <cit.>, or “Bohm theory" <cit.>. The variety of terminologies reflects different stances and views of Bohm's collaborators and successors which deviate in some cases substantially from Bohm's own ideas and whose discussion would go beyond the scope of this work.] Acclaimed or blamed therefore as a champion of determinism, he was (and still is) regarded by many as a cure against the claims of the Copenhagen school that quantum mechanics necessarily requires a completely novel way of looking at the world. According to this narrative, Bohm restored the seemingly lost comfort of mechanistic determinism, which had characterized physics for centuries, and his work seems therefore animated by a certain intellectual conservatism (see, e.g., <cit.>).
Here, we show that it was far from his intention to try to go back to an old pre-quantum paradigm. Bohm's views on philosophy of physics have instead been explicitly aimed, with continuity throughout his whole career, at demolishing certain established views that he perceived as limiting and dogmatic. As we shall see, one of these was the concept of mechanism, a form of reductionism which Bohm regarded as the
assumption that the great diversity of things that appear in all of our experience, every day as well as scientific, can all be reduced completely and perfectly to nothing more than consequences of the operation of an absolute and final set of purely quantitative laws determining the behaviour of a few kinds of basic entities or variables. (<cit.>, p. 37).
In this effort, Laplacian determinism was regarded by Bohm as the first and foremost expression of mechanism, and he thus searched for alternatives throughout his whole life.
As noted by Nobel laureate Roger Penrose, “there can be few physicists who have delved into the
philosophical implications of their subject as has David Bohm” <cit.>. It is indeed possible to identify at least three fundamental tenets in David Bohm's philosophy of physics, namely: (i) realism, (ii) causality, and (iii) anti-mechanism. Here we will not deal with Bohm's realism which has already been the subject of numerous studies, and it is undisputed that Bohm was committed to (some form of) realism (see, e.g., <cit.>, and references therein). On the other hand, we will focus on the latter two tenets, which have been astonishingly misunderstood in most of the vast literature devoted to Bohm's thought and his intellectual legacy. In particular, the term causality
has been commonly assumed to be a synonym of determinism; a mistake unfortunately still present in the literature in both physics and philosophy to date. Furthermore, Bohm always opposed mechanism, which, we stress again, has its most striking example (but not the only one) in determinism.
It is the main aim of this paper to clarify some of Bohm's original philosophical stances by demolishing certain established misconceptions around his commitment to determinism, which, we cannot emphasize enough, was never present in his thought. It is a peculiar case that a scholar to whom so many historical and philosophical studies have been devoted has been so misrepresented. Bohm's sustained rejection of determinism was only partly acknowledged in <cit.>, and new important evidence was made available thanks to the publication of a collection of letters in <cit.>. Moreover, one of us (F.D.S.) already pointed out in <cit.> that Bohm's commitment to determinism was secondary to his commitment to realism. The same thesis was then put forward in <cit.>. Here, we show that Bohm's position was more radical than this: not only was determinism not his philosophical priority, but he actually always opposed it.
In section <ref>, we will recollect the standard narrative about Bohm's ideas.
Albeit with some variations, indeed, there seems to be a consensus about the fact that Bohm's main philosophical concern was to retrieve determinism in modern physics (at least at a certain stage of his working life).
We will strongly counter, in section <ref>, this standard narrative with a more accurate account of the actual philosophical views of David Bohm, focusing on his take on causality and (non)determinism. We will show that one of Bohm's main commitments was always anti-mechanism, a position that he had understood very early to be incompatible with determinism. This is what actually led him to initially (partly) support the indeterministic doctrine of Copenhagen, which, however, he abandoned when he realized that randomness is another, for him unacceptable, form of mechanism. Hence, his commitment to determinism—stemming from his celebrated alternative interpretation—is only ostensible. Bohm's anti-mechanistic position led him to develop a dialectic philosophical view of an unlimited number of levels of description of reality that can be neither deterministic nor fully random, but still allow either of these descriptions to exist at different levels.
We will here mainly focus on the period of the 1950s, because it is in that decade that Bohm allegedly underwent a change from being a supporter of Bohr to becoming a determinist and then supposedly abandoned this debate altogether as his commitment to Marxism faded away. To avoid further misinterpretations on our part, we will favor quoting as much as possible from Bohm's original writings rather than presenting our own summaries and analyses. Moreover, in the interest of conciseness, but without the risk of decontextualizing the quotations, we will provide more extended excerpts in the form of appendices, where the interested reader can find further evidence in support of the thesis put forward in the main text. We hope that letting Bohm speak for himself would finally bring justice to some aspects of his complex and original way of conceiving physics.
§ THE STANDARD NARRATIVE: BOHM'S ALLEGED COMMITMENT TO DETERMINISM
After World War II, the practices of physics underwent a drastic change. The foundational debate that had characterized the early days of quantum physics gave way to a pragmatic approach, the so-called “shut up and calculate", oriented towards applications often of a military nature <cit.>; the debate over the interpretation of the quantum formalism seemed to be settled for good. It was only a handful of physicists (and a few philosophers) scattered all over the world who started reviving the uneasiness towards the orthodox interpretation proposed by the school of Copenhagen (see Refs. <cit.>). Among them, David Bohm was a link between the old generation of critics—such as Albert Einstein, who played an active role in his intellectual life, Erwin Schrödinger, or (the early) Louis de Broglie—and the new underground culture concerned with quantum foundations to come.
After completing his PhD with Robert Oppenheimer at Berkeley in the 1940s and a post at the Institute for Advanced Study in Princeton, in 1951, Bohm fell victim to the witch-hunt of McCarthyism because of his adherence to Marxism; this led him to a life of exile: firstly to Brazil, then to Israel, and finally to the UK, where he spent the rest of his life (see <cit.> for biographies of Bohm).
Although his research in the group of Oppenheimer was mainly about plasma physics, it is there that Bohm started getting interested in foundational problems of quantum theory, as he later recalled: “When I went to work with J. Robert Oppenheimer, I found a more congenial spirit in his group. For example, I was introduced to the work of Niels Bohr and this stimulated my interest, especially in the whole question of the oneness of the observer and the observed." (cited in <cit.>, p. 1. See also <cit.>, Ch. 4).
Bohr, together with Werner Heisenberg and others, was not only among the founding fathers of quantum theory but also the initiator of the so-called Copenhagen interpretation thereof. The latter maintains that quantum mechanics necessarily leads to abandoning certain fundamental precepts of classical physics, among them determinism, and to embracing instead the genuinely probabilistic nature of quantum phenomena.
Bohm went so deep in his reflections about quantum theory and its foundations that, in 1951, he published the textbook
Quantum Theory <cit.>, fully in the spirit of the Copenhagen interpretation. Shortly after the publication, indeed, Bohm himself stated about his book: “a clear presentation of Bohr’s point of view (the first clear, if I may boast a little)."(Letter from Bohm to Miriam Yevick; Letter 66, Folder C117, January 23, 1952. In <cit.>, p. 235.)
However, in the very same year, Bohm submitted, on July 5th, a seminal work (published in two parts <cit.>) wherein he presented the first consistent alternative interpretation of the quantum formalism. He introduced the initial position of quantum particles as a “hidden variable" that, if known, would lead to deterministic trajectories similar to the familiar ones of classical mechanics (but guided by a genuinely additional quantum part in the potential).
So far, these are mere historical facts. Based on these, however, a standard narrative about David Bohm has crystallized, which can be summarized as follows: In the span of around a year, Bohm had a dramatic shift in his philosophical agenda, moving one of his tenets from indeterminism to determinism. This narrative is not only popularized among physicists in the sort of working history that hovers in the community, but has been advocated by most historians, too. This is however not surprising, since admittedly it prima facie seems a rational account of the facts. A more thorough historical reconstruction, proposed among other works in the recent comprehensive biography of Bohm by Olival Freire Jr. <cit.>, tells a more nuanced story. First of all, it points out that already in his 1951 book <cit.>, Bohm had placed some hints of his uneasiness with Copenhagen, such as endorsing ontological realistic assumptions (see <cit.>, pp. 48-51). Moreover, historians tend to add a third phase in which Bohm supposedly distanced himself again from determinism at the end of the 1950s, concurrently with his dropping of Marxism. This double shift, also in relation to Marxism, was strongly emphasized already by Pylkkänen <cit.>, and also Freire, although more cautiously, endorses a similar position: “Indeed, the connection between the break with Marxism and abandonment of determinism in science, particularly in physics, and not only in society, in Bohm’s thoughts is just a guess, albeit a plausible one." (<cit.>, p. 123). At any rate, the main point of the standard narrative is essentially present also in these more informed accounts.
The historical question that naturally arises then is: why did Bohm go through such a drastic and abrupt change from an adherent of the school of Copenhagen, i.e. a doctrine explicitly advocating the failure of determinism, to a novel deterministic interpretation? (And, possibly, why did he give up determinism again a few years later?). That is, what caused the sudden “conversion" of Bohm from an open supporter of indeterminism to a staunch determinist (and perhaps back)?
Numerous studies have tried to answer this question (<cit.>), apparently quite successfully despite a few minor details that are still the subject of historical debate.
But what if the question was the wrong one in the first place? What if determinism has never been a desideratum for Bohm, rather, this change was not about his worldview, but simply it was reflecting different phases of Bohm's experimentation in his attempt to achieve a physical theory that would satisfy his main philosophical tenets? In section <ref>, we will, in fact, defend this thesis. That is, that Bohm always upheld an anti-mechanistic view that was clearly incompatible with determinism alone.
Before doing that, in the remainder of this section, we will continue summarizing the standard narrative, or rather, its reply to the main question it poses.
There is an almost absolute consensus on the fact that the two elements that played the major role in Bohm's turn towards determinism have been, on the one hand, his encounter with Einstein, and, on the other, his Marxist views. This twofold explanation is by now well-established among historians, who mostly debate about the extent of one or the other influences (possibly, concurrently with Bohm's political prosecution; see <cit.>). This reconstruction was already put forward by the illustrious historian and philosopher of physics Max Jammer, according to a late recollection of Bohm himself:
Stimulated by his discussion with Einstein and influenced by an essay which, as he told the present author, was “written in English” and “probably by Blokhintsev or some other
Russian theorist like Terletzkii,” and which criticized Bohr’s approach, Bohm began to study
the possibility of introducing hidden variables. (<cit.> p. 279)[Note however, that there is a controversy about the value of this statement because there were no English translations available of either Blokhintsev's or some other Terletzkii's works at the time of Bohm's “conversion". See <cit.>, Section 3.4.2.]
It is indeed well-known that Einstein had opposed Bohr's views since the early days of quantum theory and his attempt to maintain determinism, summarized by the motto “God does not play dice", has entered the popular culture. However, while Einstein was invariably troubled by the abandonment of realism (and possibly of locality and localizability) implied by Bohr and his school, there are quite incontrovertible evidences that determinism was not Einstein's main philosophical concern <cit.>, and even less so in his late years. Actually, in 1953, in a letter to his friend Max Born, he stated: “I have written a
little nursery song about physics, which has startled Bohm and de Broglie a little. It is meant to demonstrate the indispensability of your statistical interpretation of quantum mechanics […] This may well have been so contrived by that same ‘non-dice-playing God’ who has caused so much bitter resentment against me, not only amongst the quantum theoreticians but also among the faithful of the Church of the Atheists” (Einstein, A. to Born, M, 12 Oct 1953 <cit.>). In the light of this, we can conjecture that the impact that Einstein had on Bohm at the time of their encounter at Princeton in the early 1950s, was probably that of casting doubt on the Copenhagen interpretation, and suggesting that one could search for an alternative. However, it does not seem likely that he directly pushed Bohm towards determinism, let alone hidden variable that he never supported (see <cit.>).
As for whether and to what extent Marxism has been a guiding principle for Bohm in developing his deterministic hidden variable interpretation, the question is subtler. This has been considered in detail by Forstner <cit.>, and partly by Peat <cit.>, Freire <cit.>, and Talbot <cit.>. Bohm surely agreed with the ontology supported by Marx and Engels, namely, a materialistic philosophy (or naturalism) which “says that the sole reality is the natural world, and this world is made up solely of matter" and “material things are not dependent for their existence or nature on any mind or minds", thus implying realism (from A. W. Wood, cited in <cit.>, p. 24). Moreover Marx and Engels put together this materialistic view and the dialectic of Hegel, which turned into the main guiding philosophy of Marxism, i.e., dialectical materialism. While dialectical materialism applied in a scientific context deals primarily with the nature of the world, it is in the Marxist analysis of the progress of history and society, historical materialism, that one finds determinism as a main characteristic. In fact, for Marx it is the mode of production and the struggle between social classes that necessarily determines historical change.
As explained by Freire <cit.>, it is objectively difficult to know to which Marxist writings Bohm had access to and therefore which parts of that philosophy had a concrete impact on his scientific and philosophical views. However, we will see in section <ref> that it is the dialectic aspect (and partly the materialist one, for what concerns realism) of Marxism that seems to have played the major role in the views about philosophy of science that guided Bohm, rather than the deterministic character of historical materialism.
As a matter of fact, Bohm was already a Marxist when he published his book <cit.> in which he endorsed the view of Bohr, so it does not seem to make sense to attribute his alleged conversion towards determinism to his adherence to Marxism. We will show, on the contrary, that his interest in Bohr actually stemmed, at least partly, from Marxism. This should be regarded as Bohm's first attempt to get away from a mechanistic philosophy in a dialectic (i.e. Marxist) spirit.
Historians are not the only ones who have misconceived Bohm's point of view. The idea that Bohm's first and foremost concern was that of restoring determinism at any cost was surely always widespread among physicists too. Starting with the contemporaries who were supportive of him—like Einstein, Luis de Broglie, and several Marxist physicists, in particular Jean-Pierre Vigier—and closely followed by his critics, they all emphasized Bohm's commitment to determinism: the former as a merit and the latter as a untenable conservative attitude (see <cit.>, Chapters 4.2-4.5, for the early reactions on Bohm's hidden variable model).[Incidentally, it should be recalled that Bohm's interpretation did not receive the praise that he expected and that he might have deserved. Even Einstein, who supported Bohm in his career and considered him a very talented physicist, stated that the way Bohm's way of restoring determinism “seems too cheap" (see <cit.>). There are several hypotheses about why this has been the case, related to the Zeitgeist of post-war physics, Bohm's political views, the authority of the Copenhagen school, etc. (See <cit.>). It was only in more recent years that the so-called Bohmian mechanics found new momentum in a sub-community of scholars interested in foundations of quantum physics (see <cit.>). Also Bohm's close collaborators rediscovered Bohm's original interpretation and encouraged further works closer to Bohm's non-mechanistic ideas (see <cit.>, <cit.>, <cit.>). ] As a matter of fact, due to his hidden variable model, Bohm started being regarded as a staunch determinist.
§ AN ALTERNATIVE NARRATIVE: BOHM AGAINST MECHANISTIC DETERMINISM
§.§ Indeterminism in Bohm's book Quantum Theory (1951) and beyond
As we have previously recalled, the first work of Bohm in which he manifestly deals with foundational questions is his 1951 book on quantum theory <cit.>. It is generally known, as we have discussed, that this book takes an approach close to the orthodox view of Copenhagen. Note that in doing so, Bohm was not blindly following the mainstream; rather, he was actively looking for ways to provide quantum mechanics with solid and understandable physical foundations, against the widespread pragmatic acceptance of an uninterpreted abstract formalism.
He therefore saw in the thought of Bohr an attractive philosophy because it offered two main features: the principle of complementarity, and irreducible probability (i.e. nondeterminism). In the former he saw elements of dialectics, which we claim was Bohm's main influence from Marxism. In fact, this is a first attempt, which Bohm was to develop in greater detail in the following years (see below), to apply the ideas of Engels who, in his Dialectics of Nature, “is especially opposed to attempts at mechanical reductionism" <cit.>. In the context of quantum physics, this means that it is the interaction between two qualitatively different descriptions (the classical and the quantum ones) that determines reality, forming something qualitatively new not according to necessity. This also satisfied Bohm's antireductionist convictions, because the classical world ought to lie outside of the quantum domain as a primitive and cannot in general be fully reduced to a quantum description. As for the acceptance of objective chance (i.e., potentialities), he saw in it the most natural way of abandoning the view of mechanistic determinism. Later Bohm abandoned this approach, but he remained sympathetic to potentialities (see section <ref>). In a letter to his then girlfriend Hanna Loewy, presumably in 1950, Bohm explicitly clarified his motivations for having taken a Bohrian approach in his book:
I just got another idea on the quantum theory also. It is based on the fact that at the microscopic level, the quantum theory deals only with potentialities. For example, the quantum theory describes the probability that an electron can realise its potentiality for a given position. But to realise this potentiality, it must interact with some large scale (classical) system, such as an apparatus which measures position. It is only at the large scale that definite and well-defined events can exist. [...] Thus, the quantum theory presupposes the validity of classical concepts at the classical level. This means that one does not deduce the classical theory from the quantum theory, but that the two work together to describe the whole system. This is in contrast to most theories in physics, in which we analyse all large scale phenomena in terms of the small scale components. Here, we see that at the large scale level, new (classical)
phenomena appear, which are not contained logically in the small scale phenomena
alone. In other words, the behaviour of the whole system cannot be reduced to a
description of the relationship of all its parts, since, new properties appear in a large aggregate, not contained at all in the behaviour of the microscopic systems. (Letter from Bohm to Hanna Loewy; Letter 1. Folder C37, not dated. [February-May, 1950?], <cit.>, p. 99).
Moreover, soon after the publication of the book, he explained to his friend, the mathematician Miriam Yevick, why he got interested in Bohr:
All I knew was that there was one school, which utterly repelled me, in which one was supposed to introduce abstract mathematical postulates, and be satisfied if the calculations agreed with experiment. Against this, Bohr’s school seemed to be a big improvement, because at least he tried to explain the physical meaning of the theory. Moreover, there was an element of dialectics in Bohr’s point of view which attracted me. It seemed progressive because it broke the old mechanist materialist determinism, which left no room for growth and development of something new. (Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227); extended quotation in Appendix <ref>).
Note that at the time when he wrote this letter, Bohm was a staunch Marxist and most remarkably had already completed his work on deterministic hidden variables, and yet he was evidently criticizing mechanistic materialist determinism.
As far as its content is concerned, Bohm's book is an excellent technical manual of quantum mechanics and, although it endorses the view of the Copenhagen school, it is already possible to pin down where the main philosophical concerns of its author lie: causality is already his main focus, together with his rejection of mechanism. However, at this stage, he explicitly endorses indeterminism as a way out of mechanism, a view that was soon to change when he realised that indeterminism, too, can be mechanistic.
We have recalled in the previous section that Freire <cit.> already noticed that a first element that distances Bohm from the Copenhagen school is that in his 1951 book he looks for a realist account of nature. Another main difference from Copenhagen concerns causality. While for Heisenberg “quantum mechanics proves the invalidity of the law of causality,"[The original German phrase reads: “so wird durch die Quantenmechanik die Ungültigkeit des Kausalgesetzes".] <cit.> for Bohm causality was an absolutely indispensable tenet. However, he makes it very clear in his book that while maintaining causality he wants to escape determinism. Hence, a first major distinction, surely not well understood at that time (and alas not even today in most physics circles), is the conceptual difference between causality and determinism. This is also at the center of misunderstandings in the historical literature when referring to Bohm's later views, for instance in Freire's words:
“Soon both David Bohm and his critics were using “causal interpretation” to label his approach to quantum theory, clarifying Bohm’s ambition to restore a kind of determinism analogous to that of classical mechanics." (<cit.>, p. 63). In his 1951 book, Bohm actually advocates a causal yet non-deterministic nature of physical laws, in terms of tendencies (as we will see later, this is closely related to Popper's view in terms of propensities; see section <ref>):
we wish to call attention to the fact that, even in very early times, two alternative general types of causal laws appeared. One of these involved the notion of complete determinism; the other involved the notion of causes as determining general tendencies but not determining the behavior of a system completely. (<cit.>, Ch. 8, Sect. “Completely Deterministic vs. Causal Laws as Tendencies.")
Bohm goes as far as to brilliantly show that actually the determinism of classical physics makes the concept of causality redundant:
It is a curiously ironical development of history that, at the moment causal laws obtained
an exact expression in the form of Newton's equations of motion, the idea
of forces as causes of events became unnecessary and almost meaningless.
The latter idea lost so much of its significance because both the past and
the future of the entire system are determined completely by the equations
of motion of all the particles, coupled with their positions and
velocities at any one instant of time. Thus, we can no more say that
the future is caused by the past than we can say that the past is caused
by the future. [...]
Thus, classical theory leads to a point of view that is prescriptive and not causal.
(<cit.>, Ch. 8, Sect. “Classical Theory Prescriptive and not Causal".)
Hence, he saw a way out of the effective lack of causality in a completely deterministic theory in terms of the tendencies or potentialities entailed by (the Copenhagen interpretation of) quantum physics:
With the advent of quantum theory, the idea of complete determinism was shown to be wrong and was replaced by the idea that causes determine only a statistical trend, so that a given cause must be thought of as producing only a tendency toward an effect. [...] (<cit.>, Ch. 8, Sect. “New Properties of Quantum Concepts : Approximate and Statistical Causality".)
Thus, in terms of our new concept, matter should be regarded as having potentialities for developing either comparatively well-defined causal relationships between
comparatively poorly defined events or comparatively poorly defined
causal relationships between comparatively well-defined events, but not both together. (<cit.>, Ch. 8, Sect. “Relation between Space Time and Causal Aspects of Matter".)
We have thus seen why Bohm became aligned with Bohr in the first place, namely, to find a suitable alternative to mechanistic determinism, which precluded a sensible concept of causality, a crucial assumption for Bohm in any physical theory. However, he soon realized that Bohr's philosophy was not as satisfactory as he had previously sensed, because it indeed contained a dialectical approach but not as much materialism as he would have wanted:
After I had written the book, I finally began to grasp the full
meaning of the theory, and could see that it leads inevitably to a form of (dialectical)
idealism. But this was not so clear when I started, because of the general confusion
in the literature.
(Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227); extended quotation in Appendix <ref>).
And again:
I notice that you call me “a disciple of Einstein". This is not very accurate. Actually I was a strong “Bohrian" and wrote my book under the assumption (later proved wrong) that the principle of Complementarity was a materialist point of view. It certainly is very dialectical, but I did not see at that time that it is not materialist.
After writing my book, I sent a copy to Einstein. He called me up asking to discuss the book, especially the Section on the paradox of EPR, which he liked very much. He thought I gave Bohr's point of view the most convincingly possible presentation, but he still refused to accept it. He then argued for some time, and he ended up convincing me that his objections were not answered. I thought about it for a while, becoming more convinced all the time that he was right. Finally I decided to look for a causal interpretation within few weeks, I hit upon the idea which I published, not knowing about de Broglie's work until later. It took me 10 hours of work, distributed over 2 months to convince Einstein that it made sense, but he actually never liked it. He only thought it was good to propose it to break out the present stagnant situation in physics.
(Bohm to Schatzman; Letter A1.15. September 7, 1952, <cit.>, p.335)
§.§ Against determinism, despite hidden variables (1952)
Exactly in the same period when his book <cit.> was appearing, Bohm was formulating his alternative, deterministic interpretation in terms of hidden variables.
Given his clear motivation recalled in the previous section, why did he do that? Bohm must have found himself in a strange position when he managed to conceive a consistent model based on hidden variables that restored determinism. He clearly wanted to prove something that was considered impossible by the founding fathers of the theory, in particular John von Neumann, who had allegedly proven that a hidden variable completion of quantum mechanics was in principle impossible.[On the history of von Neumann's impossibility proof see <cit.>.] Moreover, Bohm wanted to prove that Bohr and Heisenberg's view was not necessarily the ultimate description of reality.
It should be stressed that at that time, no other interpretation of quantum physics was known besides (slightly different understandings of) the Copenhagen one, so, probably stimulated by his novel awareness of the limits of Bohr's interpretation and by the discussions with Einstein, he explicitly looked for an alternative interpretation.
According to Hiley, indeed, Bohm
“was not a deterministic man, he used causality. [...] He was not bound to it [determinism]. David Bohm always used to say to me: `I am making a proposal'. So, all this people think he had rigid views. He didn't have rigid views. He was always making proposals, because he thought he never fully got to the bottom of quantum mechanics." <cit.>.
In fact, although Bohm stresses in his papers that the “`hidden" variables determine the precise results of each individual measurement process" <cit.>, repeatedly acknowledging very clearly the deterministic character of his model, he certainly never adopted a fundamental ontology merely made of particles plus their deterministic dynamics guided by the wave function. This is something that his followers, the so-called Bohmians (see footnote 1), have instead assumed, namely, considering Bohm's proposal as the ultimate description of reality, much against the view of Bohm himself. In fact, the germ of Bohm's way out of mechanical determinism (see further) as entailed by his proposal is already expressed, although quite subtly, in the conclusion of his second paper on hidden variables <cit.>, when he states:
This hypothesis is based on
the simple assumption that the world as a whole is
objectively real and that, as far as we now know, it can
correctly be regarded as having a precisely describable
and analyzable structure of unlimited complexity. The
pattern of this structure seems to be reflected completely
but indirectly at every level [...].
We should never expect to obtain a complete
theory of this structure, because there are almost
certainly more elements in existence than we possibly
can be aware of at any particular stage of scientific
development. Any specified element, however, can in
principle ultimately be discovered, but never all of
them.
Indeed, at least since 1951, most likely when he was still in Princeton (see <cit.>, footnote 48, p. 31), Bohm started developing a new philosophy based on the concept of having different levels of description, each of which can be either deterministic or indeterministic, but each of them giving only a partial account of reality. His ontology was thus made of the wholeness of the different levels of qualitatively different entities. However, he postulated the number of levels to be infinite, thereby making it fundamentally impossible to have mechanism, and in particular determinism:
Because of the existence of an infinite number of levels, the deterministic laws
of order at each level probably follow only as a result of conditions of chaos existing
at lower levels. If the lower-level conditions of chaos could be altered, then the very
framework of description of the higher level laws would also have to be altered.
Thus, we are led to a more dynamic concept of the laws of nature; for because
of their infinite complexity, richness, and depth, the applicability even of certain
very general forms of laws at a particular level may depend on conditions at other
levels, which are in principle subject to our prediction and control. This experience
should ultimately be repeated at any given level, however deep, as our knowledge is
extended. (Bohm to Miriam Yevick; Letter 58. Folder C116, dated: Nov 23 [1951], <cit.>, p. 205)
Note that this idea, while being continually refined, remained essentially unchanged throughout Bohm's transition from the period of his 1951 book to his hidden variable proposal, and reached its main expression in the book Causality and Chance <cit.> published in 1957 (see section <ref>).
The “things” at each level, are made up of
smaller “elements” at a more fundamental level, and it is the motion of these more fundamental
elements (not usually directly visible to us, except with the aid of elaborate
scientific research) which causes the appearance and disappearance of the “things”
existing at a higher level. These more fundamental “elements” however, cannot be
permanent, but must be made up of still more fundamental “elements” and so on ad
infinitum.
(Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227; extended quotation in Appendix <ref>)
Bohm also spelled out his position on the need for infinite levels to his collaborator Schatzman in a letter from 1952:
It is most likely that not even the substratum particles could be indestructible and unanalysable. Instead, there is probably another
substratum below this (of a qualitatively different kind most probably) and so on ad
infinitum. Thus, we should have an infinite series of qualitatively different levels of
laws. Any finite number of levels can always be understood by humanity, but never
all of them. (<cit.>, p. 351; extended quotation in Appendix <ref>)
And soon after his letter to Miriam Yevick in January, he wrote what is one of the most important quotations from the whole collection of known writings of David Bohm, because it unambiguously states that he could not accept mechanistic determinism, even in the period when he was promoting his hidden variable model:
Most of the errors of both
the positivist and the 19th century “mechanical” materialists spring from an implicit
assumption that the laws of nature will some day finally be understood in terms
of a limited number of hypotheses. From this comes the nightmare of a mechanically
determined universe that follows an inevitable course. To avoid this nightmare,
positivists and idealists have given up causality and assumed a “spontaneous” (i.e.,
uncaused) element in physical processes.
The concept of a limitless number of levels [...] provides a motive
power for continual development & growth. Moreover, the nightmare of complete
determinism is avoided. Although each level is causal, the totality of levels cannot
ever be taken into account. Thus, as a matter of principle, we say that complete determinism
could not even be conceived of, yet, each level can be determined. Here, we
part company with the believers in “spontaneity” for we say that what appears to be
spontaneous is caused by factors, in principle, knowable, but now hidden to us. But
to be able to say this without implying complete determinism, we must assume an
unlimited number of levels.
(Bohm to Miriam Yevick; Letter 73. Folder C118, dated: Rec Mar 31 [1952], <cit.>, pp. 254-55; extended quotation in Appendix <ref>)
It is now clear that Bohm did not undergo a conversion from indeterminism (à la Copenhagen) to determinism (with hidden variables), as the standard narrative implies. He actually stayed faithful to his tenets of realism and causality, and his shift was merely that of realising that Bohr's approach was not enough to achieve what he had in mind. So it seems that his philosophical theory of the infinite levels was conceived to “cure" his own model of the “nightmare" of determinism. One should also remark that this idea of unlimited levels is very much in the spirit of dialectics, and indeed this is the most Marxist trait in Bohm's work. As pointed out by Talbot, such a connection is perhaps less abstract than one could think, drawing directly from the work of Engels: “especially in the Dialectics of Nature, Engels introduces the idea of levels, or what he calls `forms of motion'. [...] Engels is especially opposed to attempts at mechanical reductionism, which `blots out the specific character' and `qualitative difference' of non-mechanistic forms of motion." (<cit.>, p. 25).
For Bohm this dialectic view of nature is a way to maintain a nontrivial form of causality, intended as the possibility of creating new, non-necessary things, contrary to the mechanistic view. In a letter to his friend, the American physicist Melba Phillips, Bohm spelled out this connection in detail:
Also an important additional aspect of causality needs to be discussed in more detail —namely— causality as a means of determining the mode of being of qualitatively
new things, which grow out of the old things. The basic aspect of mechanism is that
(as in an idealized machine) the universe is conceived of as made of basic elements
(particles, fields, or what have you) which simply interact according to fixed rules, and
which themselves never change as a result of the processes in which they take part. [...] However, the concept of the infinity of levels shows
that there need exist in nature no such thing as a basic element which never changes.
Thus, causal laws not only determine the future in a mechanical sense; i.e., in the
sense of determining quantitative changes in the arrangements of entities whose
intrinsic character is fixed. The causal laws also tell when qualitative changes will
occur and may define the characteristics of the new entities that can come into being. Thus, causality is a broader concept than that of mechanical determinism. [...]
A “mechanistic” attitude toward science however, tends
to limit the growth of our concepts in an arbitrary and dogmatically conceived way.
Such a mechanistic attitude refers not only, however, to the mechanistic determinists,
but also to the “mechanistic indeterminists”, who insist that in the quantum of action, we have reached an ultimate, indivisible, and unanalyzable entity, which will never be found to have a structure understandable in terms of a deeper level.
(Bohm to Melba Phillips. Letter 43. Folder C48, dated: Oct 13, 1953, <cit.>, p. 164; extended quotation in Appendix <ref>).
In the following years, Bohm kept developing his philosophy of the infinite levels, sharpening the distinction between causality and deterministic mechanism, advocating the former and strongly opposing the latter. Causality is for Bohm the possibility of creating new qualitative entities in a nontrivial sense, i.e. without being able to reduce everything to a finite collection of basic elements that cannot change and that are subject to fixed laws:
Now, at first sight, it
may seem that we could eliminate the large-scale level by analyzing it in terms of its
basic molecular motions. And if there were a finite number of levels, this would be
true. But if there are an infinite number, then each level stands on a footing that is, in the long run, as basic as that of any other. For every level has below it a deeper one. Indeed, matter can be regarded as made up of the totality of all levels. Each level
makes its own specific contribution to the totality. (Bohm to Melba Phillips. Letter 46. Folder C48, dated: March 15, 1954, <cit.>, p. 170; extended quotation in Appendix<ref>).
Let us now stop for a moment and go back to the standard narrative. Freire makes a case that
in the 1950s Bohm did indeed promote the recovery of determinism. In 1951, before the term `causal interpretation' had gained currency in the debates on Bohm’s proposal, he himself emphasized it in his first letter to the
French astrophysicist and Marxist Évry Schatzman, while looking for allies, such
as Jean-Pierre Vigier and Louis de Broglie, to get support for his proposal: “My
position in these physical questions is that the world along with all observers who
are part of it is objectively real and in principle precisely definable (with arbitrarily
high accuracy), and subject to precise causal laws that apply in each individual case
and not only statistically.” (<cit.>, p. 65).
There seems to be a tension between Bohm's statements here. However, one can hypothesize that his actual point of view on determinism is the one that emerges from the letters to his intimate friends, i.e., a staunch anti-mechanistic position. Thus, these letters seem to be a more trustworthy source than a first contact with somebody whose support Bohm was seeking. He probably tamed his more complex philosophical positions and tailored his letters to his interlocutor by highlighting the deterministic aspect in the interactions with Schatzman and later with Vigier, in order to find common ground with these more “traditional" Marxists who definitely prized determinism (see Appendix <ref>). Moreover, note that in the quoted letter to Schatzman, Bohm stresses the causal aspect of his proposal, which, as clarified above, does not necessarily mean determinism.
§.§ An indeterministic causal model by Bohm and Vigier (1954)
So far, the evidence that Bohm was against determinism even during the years in which he devised and promoted his hidden variable model is limited to private correspondence. However, in 1954, Bohm published a paper with Vigier, Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations <cit.>, that is a first attempt to put into practice the ideas of a model of causal interpretation which is, however, fundamentally non-deterministic, due to different levels of description. In fact, therein Bohm and Vigier postulate a field that is described by a fluid of density |ψ|^2, which is then able to recover the standard quantum mechanics
by introducing the hypothesis of a very irregular and effectively random fluctuation in the motions of the fluid. [...] Such random fluctuations are evidently consistent within the framework of the
causal interpretation of the quantum theory. Thus,
there are always random perturbations of any quantum
mechanical system which arise outside that system. <cit.>
They indeed clarify that “the causal interpretation of the quantum theory permits an unlimited number of new physical models" and that their proposed “model is an extension of the causal interpretation of the quantum theory already proposed, which provides a more concrete physical image of the meaning of our postulates than has been available before, and which suggests new properties of matter that may exist at deeper levels." <cit.>. Here causal means the possibility of explaining the theory in terms of a sub-quantum level (the fluid) that accounts for the higher quantum level. Note that, contrary to the first hidden variable model <cit.>, this model is based on fundamental random fluctuations, thereby further dispelling the idea that Bohm was a committed determinist: “In the model that we have proposed here, however, the statistical fluctuation in the results of such [quantum] measurements are shown to be ascribable consistently to an assumed deeper level of irregular motion”. It is interesting to notice that while the postulated fluctuations of the fluid are considered to be (at this level of description) genuinely indeterministic, Bohm and Vigier think of these fluctuations as having a certain structure in terms of potentialities: “The fact that the mean density remains equal to |ψ|^2, despite the effects of the random fluctuations, implies then that a systematic tendency must exist for fluid elements to move toward regions of high mean fluid density.”
The ontological basis of this new indeterministic model and how it relates to Bohm’s philosophy of the infinite levels is explained by Bohm in correspondence with Einstein:
“The general idea is that at a level more fundamental than that of quantum mechanics, there is a field which satisfies causal laws. This field is, however, in a state of statistical fluctuations. These fluctuations are somehow described by the Ψ field.” (Bohm to Einstein ; Letter 16. page 5 Folder C14, February 3, 1954, <cit.>, p. 5).
My own point of view is that below the quantum theory there exists a sub quantum-mechanical level of continuous and causally determined motion, and that the quantum theory is related to the sub-quantum mechanical level, more or less as ordinary Brownian motion is related to the atomic level.
In other words, events at the atomic level are contingent on the (in general irregular) motions of some as yet unknown but qualitatively new kind of entity, existing below the atomic level.
As a result, the relationships between things, that can be defined at the atomic level will be characterized by the laws of chance, since they will be determined only in terms of some quasi-ergodic type of motion of new kinds of entities existing at the lower level. (Bohm to Einstein; Letter 21. Folder C15, dated: November 14, 1954, <cit.>)
Einstein’s replies may seem surprising to those who still believe that he was also a committed determinist at any cost, because they show once more that he was dissatisfied with Bohm’s first (deterministic) hidden variable model: “I am glad that you are deeply immersed seeking an objective description of the phenomena and that you feel that the task is much more difficult as you felt hitherto.” (Einstein to Bohm ; Letter 17. Folder C14, February 10, 1954, <cit.>). And again: “In the last years several attempts have been made to complete quantum theory as you have also attempted. But it seems to me we are still quite remote from a satisfactory solution of the problem.” (Einstein to Bohm ; Letter 20. Folder C15, dated: October 28, 1954, <cit.>)
Bohm did not develop this approach further, although he most likely perceived it, too, as a proposed first step towards his philosophy of levels of description; he did, however, come back to a stochastic causal interpretation, also with Hiley, in the 1980s <cit.>.
§.§ Causality and Chance in Modern Physics (1957)
It was around the same period that Bohm started thinking not only that either a deterministic or an indeterministic description was possible at every level of an infinite series, but also that both individual laws and statistical laws are necessary for a causal interpretation:
The picture which I propose is this: The totality of causal laws includes both statistical and individual laws. We start with this totality as our basic reality. [...] The fundamental reality is that of matter in being and in process of change, or of becoming, as it may more accurately be called. (Bohm to Miriam Yevick. Letter 121. Folder C124, dated: Sept 10 1954, <cit.>, p. 419-22).
These dialectic ideas grew into a book, Causality and Chance, which Bohm published in 1957 <cit.>. Therein, Bohm identifies two types of causal laws (both considered fundamental): simple causal laws that connect past and future one-to-one (i.e. deterministic), and more general ones that are one-to-many (i.e. that do not lead to a unique evolution but only to an array of possibilities):
[L]et us note that the one-to-many character of a causal law has no essential relationship
to a lack of knowledge on our part concerning the additional causal factors to which the more precise details
of the effect can be traced. [...] In other words, a one-to-many law
represents an objectively necessary causal connection, but in this case, what is necessary is that the effect
remain within certain bounds; and not, as in simpler types of causal laws, that the effect be determined
uniquely. (<cit.>, p. 17).
And again, Bohm clarifies, as he always maintained (cf. <ref>), that causality is a more general concept than that of necessity (i.e., determinism):
We see, then, that it is appropriate to speak about objectively valid laws of chance, which tell us about a side of nature that is not treated completely by the causal laws alone. Indeed, the laws of chance are just as
necessary as the causal laws themselves. [Footnote:] Thus necessity is not to be identified with causality, but is instead a wider category. (<cit.>, p. 23).
Furthermore, Bohm here again stresses the fact that objective chance should be interpreted as a potentiality, i.e., a property of the system and its causal conditions:
On the basis of the above considerations, we are then led to interpret the probability of, for example, a
given result in the game of dice as an objective property associated with the dice that are being used and
with the process by which they are thrown (<cit.>, p. 27; extended quotation in Appendix <ref>)
Note that this example is exactly the same as that used by Karl Popper <cit.> when he introduced the propensity interpretation (see section <ref>), again showing the compatibility between Bohm and a worldview based both on causality and on indeterminism.
Beyond causality, a large part of Bohm's 1957 book <cit.> is devoted to defending another of his main tenets, namely, anti-mechanism. However, while he was still convinced that determinism is an unacceptable form of mechanism, there is a fundamental difference with respect to his book on quantum theory <cit.>. Here, in fact, Bohm does not consider randomness alone as a way out of mechanism:
The point of view described above evidently renounces an important aspect of the various forms of the
mechanistic philosophy that appeared from the sixteenth through the nineteenth centuries; namely, their
determinism. But in doing this, it has conserved and in fact enhanced the central and most essential
characteristic of this philosophy; namely, the assumption that everything in the whole universe can be
reduced completely and perfectly to nothing more than the effects of a set of mechanical parameters
undergoing purely quantitative changes. [...]
The question of what constitutes a mechanistic philosophy, therefore, cuts across the
problems of determinism and indeterminism. For this reason, we shall call the philosophy described in this
section by the name of “indeterministic mechanism” (<cit.>, pp.62-63).
Bohm's criticism of mechanism (and thereby of determinism) does not spare his own hidden variable interpretation, which he again considers an unsatisfactory physical model, whose main feature, he stresses, is consistency:
While our theory can be extended formally in a logically consistent way by introducing the concept of a wave in a 3N-dimensional space, it is evident that this procedure is not really acceptable in a physical theory, and should at least be regarded as an artifice that one uses provisionally until one obtains a better theory in which everything is expressed once more in ordinary three-dimensional space. (<cit.>, p. 117)
Finally, in his Causality and Chance, Bohm for the first time publicly defends his philosophical view of the infinite levels of description as the main alternative to mechanism, be it deterministic or indeterministic (see Appendix <ref> for relevant quotations). As noted already by Freire <cit.>, this marks Bohm's entry into the philosophical debate and would allow him to engage with prominent philosophers of science, the likes of Paul Feyerabend and Karl Popper (see further). However, these ideas of infinite levels were not appreciated by his more traditional Marxist followers, who saw in them the undermining of determinism: a positive feature for Bohm and an unacceptable price for them. This is the case of Évry Schatzman and Vigier, who wrote to Bohm: “We may be wrong, but we do not agree at all with your ideas about the different levels of reality. It seems to us that it is a formal interpretation of the famous sentence of Lenin, in Materialism and Empiriocriticism, about the different levels of reality” (quoted in <cit.>, p. 108).
To conclude, in Causality and Chance Bohm synthesizes his main philosophical tenets, which had been present in his writing since the beginning, but in a quite scattered way. Therein, Bohm defends, for the first time systematically, causality in its broadest sense, advocating the fundamental necessity of both individual laws and statistical laws, depending on the context. Moreover, he firmly rejects mechanism, not only in the form of determinism (as he had done for many years already), but also in its indeterministic form. Finally, Bohm opposes mechanism with a dialectic philosophy of infinite levels of description that he had developed throughout the 1950s.
As far as physics proper is concerned, in 1957 Bohm published with his student Yakir Aharonov a paper where he rejects his own 1952 model, not on the basis of determinism but of nonlocality: “It must be admitted, however, that this quantum potential seems rather artificial in form [...] that it implies instantaneous interactions between distant particles, so that it is not consistent with the theory of relativity.” <cit.>. Bohm thus kept proposing his dialectical views of different levels, similar to the paper with Vigier <cit.>, looking for a “deeper subquantum-mechanical level” <cit.>.
It is interesting to notice that, still at this stage, Bohm's views were completely misunderstood. Louis de Broglie, who wrote the foreword of his Causality and Chance, for instance, kept attributing to Bohm the great merit of giving hope to those who look for a deterministic hidden variable explanation of quantum theory: “It is possible that looking into the future to a deeper level of physical reality we will be able to interpret the laws of probability and quantum physics as being the statistical results of the development of completely determined values of variables which are at present hidden from us. It may be that the powerful means we are beginning to use to break up the structure of the nucleus and to make new particles appear will give us one day a direct knowledge which we do not now have of this deeper level." (<cit.>, p. x). This goes completely against what Bohm conveys in his book, making one wonder whether people like de Broglie were actually reading Bohm's works or just imposing on him what they wished to hear.
Towards the end of the 1950s Bohm abandoned Communism, following the revelations of Stalin's crimes by Nikita Khrushchev in 1956 (see <cit.>). As already recalled, this has been identified in the literature as the main motivation for abandoning his commitment to determinism. But as we have shown, such an alleged commitment to determinism was never present in the first place, and his dialectic attitude remained an important factor in his philosophy. However, probably due to the frustration of being continuously misunderstood, Bohm's engagement with different models of the causal interpretation became sparser. Actually, after his move to the UK, first to Bristol and then to London, he engaged more and more in the philosophical debate, becoming friends with Paul Feyerabend, Karl Popper and Stephen Körner, and he kept his interpretational considerations away from his physics colleagues.
Hiley joined Bohm at Birkbeck College in London in 1961 and, as a matter of fact, they passed “ten years without actually talking about the causal interpretation" <cit.>. As recalled by Hiley <cit.>, it was only in the 1970s that two of Bohm's students, Chris Dewdney and Chris Philippidis, “rediscovered" the hidden variable papers <cit.> and went to Hiley to ask why Bohm and he were not discussing these important results. Hiley replied “because it is all wrong", but when pressed further, he realized that he did not actually know why; he had only picked up what everybody was saying. And when he finally read Bohm's original papers thoroughly, he understood that nothing was wrong and motivated the students to use the computer to calculate the trajectories of particles using Bohm's model. This marks the revival of Bohm's hidden variables (see also <cit.> Ch. 6.1), a revival in which Bohm, however, obviously did not participate. Actually, when approached by Dewdney and Philippidis, “Bohm himself [...] admitted that he had made a tactical error in his original presentation of the theory. The term hidden variables, he said, created the wrong impression, and the papers themselves were too rigid and deterministic." (<cit.>, p. 266).
In the following decades Bohm dedicated his work to a holistic approach that continued his ideas from the work on the causal interpretation of quantum theory. The purpose of Bohm's original proposal in the light of his new ideas was later explained by him in the following way:
To show that it was wrong to throw out hidden variables
because they could not be imagined, it was therefore sufficient
to propose any logically consistent theory that explained the
quantum mechanics, through hidden variables, no matter how
abstract and hypothetical it might be. Thus, the existence of even
a single consistent theory of this kind showed that whatever
arguments one might continue to use against hidden variables,
one could no longer use the argument that they are inconceivable.
Of course, the specific theory that was proposed was not
satisfactory for general physical reasons, but if one such theory is
possible, then other and better theories may also be possible, and
the natural implication of this argument is ‘Why not try to find
them?’ (<cit.>, p. 104)
His scientific program was based on quantum field theory, with which he approached the concept of the infinite levels that he had already pointed out in his early works. His philosophical ideas remained consistent with his early works in their rejection of mechanistic ideas:
As we have seen, relativity theory requires continuity, strict causality (or determinism) and
locality. On the other hand, quantum theory requires noncontinuity,
non-causality and non-locality. So the basic concepts
of relativity and quantum theory directly contradict each other.
[...]
What is very probably needed instead is a qualitatively new theory, from
which both relativity and quantum theory are to be derived as
abstractions, approximations and limiting cases.
The basic notions of this new theory evidently cannot be
found by beginning with those features in which relativity and
quantum theory stand in direct contradiction. The best place to
begin is with what they have basically in common. This is
undivided wholeness. Though each comes to such wholeness in
a different way, it is clear that it is this to which they are both
fundamentally pointing.
To begin with undivided wholeness means, however, that we must drop the mechanistic order. (<cit.>, p. 223)
§.§ Propensities and the causal interpretation
Bohm had been in touch with Popper since at least 1959 (for the relationship between them, see <cit.> and references therein). It was exactly in that period that Popper, who was advocating fundamental indeterminism in physics even at the classical level, developed a new interpretation of probability in which probabilities are understood as objective physical properties, i.e., propensities or tendencies for a system to produce an outcome <cit.>.
Here we would like to stress that although Bohm never actually pursued a program based on potentialities, he hinted at it on several occasions (see above). As we have seen, he endorsed that view in his Quantum Theory <cit.> and hinted that the statistical behaviors of quantum mechanics constrain the tendency of the sub-quantum fluid in his paper with Vigier <cit.>. Looking at Bohm's correspondence with Popper, we find explicit support of this view: “I feel that what you have to say about propensities make a genuine contribution to clarifying the issue that you discuss" (Bohm to K. Popper on March 15th 1967. PA, Popper's Archives, Box/Folder: 84/19. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California) <cit.>).
This was not appreciated by Popper himself, who should be listed among the many who misinterpreted Bohm, attributing to him a strong commitment to determinism. In fact, when Popper published his book on the foundations of quantum theory in 1982 <cit.>, although praising Bohm for striving for realism, he harshly criticized him for being a determinist. Bohm replied to him, emphasizing once again that he was not committed to determinism and explicitly acknowledging for the first time, to our knowledge, that his view on the causal interpretation can be regarded in terms of potentialities:
“I certainly think that a realistic interpretation of physics is essential. I think also that I understand your propensity interpretation of probability and I have no objections against it. […]. However, I feel that you have not properly understood my own point of view, which is much less different from yours than is implied in your book. Firstly I am not wedded to determinism. It is true that I first used a deterministic version of […] quantum theory. But later, with Vigier, a paper was written, in which we assumed that the movement of the particle was a stochastic process. Clearly that is not determinism. Indeed, we can regard the stochastic movement of the particle as affected by a field of propensities, in accordance with your ideas […] The key question at issue is therefore not that of determinism vs. indeterminism. I personally do not feel addicted to determinism [...].
[W]hat is real has a being independent of the consciousness of the observer. John Bell has used the term “beable" to describe such an independent reality. From the point of view of realism, the main criticism of the orthodox interpretation of the quantum theory is that it has no room in it for “beables". [...] I introduced the notion that the “beables" of the quantum theory are the particles and the wavefunction (which contains information about the propensities). Along with Vigier, I can say that the “beables" are themselves conditioned by such propensities. What are called the observables of quantum theory are then potentialities of the “beables", realized according to a context, which in current physics, is determined by the experimental arrangement (though in nature, similar contexts will still exist without the intervention of human being). [...] My proposal has been that the “beables" are particles (moving stochastically), along with the wave function. (Bohm to K. Popper 13.07.1984. Box/Folder: 278/2. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California) <cit.>)
§ DISCUSSION AND CONCLUSION
In this paper, we have shown that Bohm was always against mechanism and therefore against determinism. We have rebutted the historical narrative according to which one can identify an early period when Bohm was a supporter of Bohr, a later period when he was a committed determinist (influenced by Einstein and by Marxism), and finally a period, after his break with Marxism, in which determinism ceased to be a main concern of his. On the contrary, Bohm's philosophical tenets never changed throughout his whole life: he was always committed to developing a realistic, causal, non-mechanistic view of physics. This led him to develop a new dialectical philosophy composed of infinite levels of description that guided him in his work for the following decades. As such, Bohm would never have accepted determinism, at any stage of his life. In a slogan, Bohm was never a Bohmian.
Although the content of this paper has mostly a historical scope, it may also concern the physicists and philosophers who have proclaimed themselves Bohmians. It is undeniably true that Bohm provided the first deterministic hidden variable model of quantum theory. And yet, we want to stress that for him this was nothing more than a model, a proof of principle that it was possible to do what was considered fundamentally unattainable.
However, at the same time, this was for him most unsatisfactory, for it betrayed one of his deepest convictions about nature, namely, that a basic ontology of particles moved around by deterministic laws cannot be the end of the story. Therefore, the many scholars who today support Bohmian mechanics at face value, giving it an ontological role, should be aware that they are advocating a worldview that stems from what its original proposer considered a mere model, one which could not satisfy the basic standards of acceptability for a physical theory (except internal consistency). Now, while this is obviously a logically acceptable position, they should be aware that they are going directly against the fundamental views of Bohm, and therefore cannot appeal to his authority. This separation between the original thought of Bohm and those who adopted his model was so striking that, shortly before his death, when he became aware of Sheldon Goldstein and Detlev Dürr's work on his ideas, Bohm bitterly confessed to his main collaborator Basil Hiley: “why on earth are they calling it Bohmian mechanics? Haven't they read a word I have written?" <cit.>. So, concerning determinism, Bohm finds himself in a position comparable (fortunately with fewer ethical implications) to that of Einstein with respect to the atomic bomb: it is a historical fact that it was Einstein who suggested to US President Franklin Roosevelt that research on nuclear weapons be undertaken to preempt Nazi Germany from achieving the same threat. However, for his whole life, before and after, Einstein was a committed pacifist. Similarly, it is a historical fact that Bohm developed a deterministic interpretation of quantum theory. However, for his whole life, before and after, he was a committed anti-determinist. Invoking Bohm to defend deterministic views of physics is like invoking Einstein to promote nuclear weapons.
§.§ Acknowledgements
The authors would like to thank Basil Hiley for taking time for an interview and for valuable discussions. We would also like to express our thanks to Emma Illingworth from the David Bohm Archive at Birkbeck Library for her support during our research.
§ APPENDIX A – EXCERPTS FROM THE CORRESPONDENCE OF D. BOHM
§.§ Excerpt of a letter from Bohm to Miriam Yevick (January 7, 1952)
Letter 65. Folder C117, dated: Jan 7, 1952, <cit.>, p. 227.
Now, to retain the concept of matter, we must above all retain the idea that
in some aspects at least, matter is indestructible and uncreatable. How then do we
explain the prevalence of change and the transiency of material things? This is done
by the notion of endless transformation. The “things” at each level, are made up of
smaller “elements” at a more fundamental level, and it is the motion of these more fundamental
elements (not usually directly visible to us, except with the aid of elaborate
scientific research) which causes the appearance and disappearance of the “things”
existing at a higher level. These more fundamental “elements” however, cannot be
permanent, but must be made up of still more fundamental “elements” and so on ad
infinitum. Thus, we can see that every “thing” that exists may at some time come into
existence and later go out of existence, but there is always a deeper level, in terms of
which this change can be viewed rationally as a transformation of a more elementary
form of matter, which is not itself basically altered in this particular transformation.
Nevertheless, no single “thing” is uncreatable or indestructible. Only matter as a
whole in its infinity of properties and potentialities is eternal.
§.§ Excerpt of a letter from Bohm to Schatzman (not dated, 1952)
Letter A1.20, not dated, 1952. <cit.>, p. 351.
For quantum mechanics has shown that "empty" space [contains] strongly fluctuating electromagnetic fields and, more important still, a very high density (infinite according to the present inadequate theories) of negative energy electrons, protons and neutrons. If one adopts the new interpretation of the quantum mechanics, there is no choice but to suppose that these particles are really in existence. One therefore has been back to the old notion of a material substratum filling all space. As I have said, this substratum is very dense, much denser than any other form of matter. In fact, matter as it is usually called, would be only a disturbance in the uniform background of substratum. Light waves, etc. would also be disturbances of the substratum. The mysterious "annihilation" and "creation" of material particles could now be understood naturally; for with the [ ?] of energy, the substratum could be made non-uniform as a spreading wave. These two forms of energy could be transformed into each other. When we look out at the sky, space appears to be almost empty, because light waves are scattered only by inhomogeneities in space. Similarly material particles are likewise inhomogeneities propagated freely in a uniform background. Thus, to a naive way of looking, space appears empty; a similar phenomenon appears in connection with the theory of metals. As you know, an electron will go through a very dense metal without being scattered as long as the crystal lattice is perfectly regular. Only non-uniformities in the lattice will scatter the electron. A naive observer (for example a positivist) would conclude from this evidence that a metal consists of empty space, with a very thin haze of "matter".
I would like to add one point here. It is most likely that not even the substratum particles could be indestructible and unanalysable. Instead, there is probably another substratum below this (of a qualitatively different kind most probably) and so on ad infinitum. Thus, we should have an infinite series of qualitatively different levels of laws. Any finite number of levels can always be understood by humanity, but never all of them. Thus, we can understand more vividly a number of dialectical principles; for example, many people are puzzled by the dialectical assertion that matter must be eternal (i.e. no creation). The answer is that at any particular level, the forms of matter as a whole, in its infinite number of properties and inter-connections, is eternal. Secondly, consider the statement of dialectics that "a thing is not equal to itself". This we understand by the [ ? ] that a material "thing" contains an infinity of properties whereas the concepts usually defining what the thing "is" cover only a finite number of these properties. Thus, a thing is not only "what it is" but also a large number of other things, which will manifest themselves later; or in other words in "what is coming to be". Moreover, the levels not taken into account in the usual definition of the "theory" will generally produce effects that are in contradiction with the permanent existence of this "thing".
§.§ Excerpt of a letter from Bohm to Miriam Yevick (January 23, 1952)
Letter 66. Folder C117, dated: Jan 23, 1952, <cit.>, p. 235:
[I]t is essential to think that things are not only “what they are known to be”, but also a
whole list of different things connected with the infinite number of levels not known
to us. These other things may be thought of roughly as “what is coming into being”
since it is in the future form of the thing that the underlying factors will ultimately
manifest themselves. [...]
As in the structure of “elementary” forms of matter human beings contain an infinite number of at present unknown (or poorly known) levels of complexity of behavior.
This fact has two important implications: (1) The most obvious, that by scientific
study, we may ultimately learn to control some of the factors at any particular level,
and thus to produce startling changes in human nature (including even ourselves) (2)
Before this can be done, the different levels will manifest themselves in that people
cannot correctly be regarded as “being only what they are”, but that they can also
undergo fundamental transformations of character with changing conditions. [...]
As for the book [<cit.>], you must try to imagine the situation when I wrote it. You
suggest that I may have had some dishonesty, perhaps some desire to please the
“big shots” in writing it, and that this led me to back up the usual interpretation of
the quantum theory. You must remember several things however: (1) When I wrote
this book, there did not exist anywhere a clear statement of the basis of the theory.
There existed some books which made ridiculous abstract mathematical postulates that no one could possibly understand, and there were other discussions, such as those of Bohr, which aimed at discussing the physics, but in an incredibly vague way. A student at Princeton once told me that Bohr’s statements not only cancelled out with regard to their meaning in the first order, but also with regard to connotation in the second order. It was therefore necessary to go to the third order to find what Bohr meant. When I first started to study this subject 15 years ago, it fascinated me and puzzled me. I had no reason to suspect that the “big shots” had muddled up
the subject, since after all, had they not been astonishingly successful in predicting experiment after experiment? Above all, I never got over being puzzled by the theory.
When I started the book, I was in no position to see through the matter, because I still hadn’t made complete sense of it. All I knew was that there was one school, which utterly repelled me, in which one was supposed to introduce abstract mathematical postulates, and be satisfied if the calculations agreed with experiment. Against this,
Bohr’s school seemed to be a big improvement, because at least he tried to explain the physical meaning of the theory. Moreover, there was an element of dialectics in Bohr’s
point of view which attracted me. It seemed progressive because it broke the old
mechanist materialist determinism, which left no room for growth and development
of something new. After I had written the book, I finally began to grasp the full
meaning of the theory, and could see that it leads inevitably to a form of (dialectical)
idealism. But this was not so clear when I started, because of the general confusion
in the literature. If you tried to read other books, you wouldn’t be able to say that you
see through this stuff, just because the other books leave things just vague enough
so that you don’t know quite what you are seeing through. In writing this book,
I hope that I have not only clarified the issues for myself, but perhaps for other
people too. I suspect that a clear presentation of Bohr’s point of view (the first clear one, if I may boast a little) will do more to favor the causal interpretation than to favor Bohr’s interpretation. Now with my new point of view, I can see an infinitely
better way to get out of the trap of mechanistic determinism; namely through the
concept of an unlimited number of causal levels. I would call Bohr’s point of view
“static dialectics”. This is because it is a form of “slinging the lingo” in which the
dialectically opposing concepts are made just vague enough so that the contradictions
between them are avoided. Thus, one is not faced with the necessity of seeking new
concepts that synthesise the opposites, and the dynamic aspects of dialectics (i.e.
the contradictions leading to something new at another level) are lost. Finally, I
should say that I wrote the book in a spirit of struggle against the obscurantist notion
that nature can from now on be understood only in terms of abstract mathematical
postulates. The struggle was well worth while, since it led me to a new point of view.
§.§ Excerpt of a letter from Bohm to Miriam Yevick (March 31, 1952)
Letter 73. Folder C118, dated: Rec Mar 31 [1952], <cit.>, pp. 254-55:
I think that the explicit recognition of a limitless
number of levels would be a big step forward in science. Most of the errors of both
the positivist and the 19th century “mechanical” materialists spring from an implicit
assumption that the laws of nature will some day finally be understood in terms
of a limited number of hypotheses. From this comes the nightmare of a mechanically
determined universe that follows an inevitable course. To avoid this nightmare,
positivists and idealists have given up causality and assumed a “spontaneous” (i.e.,
uncaused) element in physical processes. [...]
The concept of a limitless number of levels suggests, however
that the work of science is never finished and leads one at each level to seek the
contradictions which can [unreadable] at the next level etc. Thus it provides a motive
power for continual development & growth. Moreover, the nightmare of complete
determinism is avoided. Although each level is causal, the totality of levels cannot
ever be taken into account. Thus, as a matter of principle, we say that complete determinism
could not even be conceived of, yet, each level can be determined. Here, we
part company with the believers in “spontaneity” for we say that what appears to be
spontaneous is caused by factors, in principle, knowable, but now hidden to us. But
to be able to say this without implying complete determinism, we must assume an
unlimited number of levels. It is the unlimited number of levels which give matter
its “non-mechanical” aspects, for if the analysis of physical laws could ever be completed,
the theory would either be deterministic + “mechanical”, or “indeterministic” and “spontaneous”. Another interesting point – if there are an infinite number of levels,
we can expect that all existing limitations (such as speed of light and uncertainty
principle) can be overcome with the aid of more fundamental levels. Thus, by the use
of causal laws, humanity can move toward freedom. Whereas, in the ignorance of
causal laws, humanity is enslaved either to determinism or to “spontaneity”, which,
being pure accident, is just as tyrannical.
One other point, a distinction between “determinism” and “causality”. Although
both words have roughly the same meaning, their implications are different. For
causality implies (a) that if you know the causes, you can predict the effects. (b)
That if you change the causes, you can change the effects in a predictable way.
But determinism implies only predictability. In fact, with complete determinism, it
would be impossible for us ever to change anything. Now, if there are a finite number
of levels, then complete causality obviously implies complete determinism. But if
there are an infinite number, then the two concepts part company. For we can have
complete causality at every level, in the sense that we can use this causality to change
the world in a predictable way, with the error in the predictions dependent only on our
level of knowledge; whereas we can in no sense conceive of the world as completely
determined. In this connection, note that the statement that new things can come
into existence is consistent with causality, only if what is already in existence has
an infinite number of levels. For if we have a finite number of causal levels, then
the future is already contained logically in the present, but not if we have an infinite
number. The appearance of qualitatively new things with time is possible with an
infinite number, because the effects of the limitless number of lower levels can always
surge up into a higher level (and vice versa) producing qualitative [missing words]
describable as a rearrangement of things already in existence.
§.§ Excerpt of a letter from Bohm to Melba Phillips (October 13, 1953)
Letter 43. Folder C48, dated: Oct 13, 1953, <cit.>, p. 164:
Also an important additional aspect of causality needs to be discussed in more detail –
namely – causality as a means of determining the mode of being of qualitatively
new things, which grow out of the old things. The basic aspect of mechanism is that
(as in an idealized machine) the universe is conceived of as made of basic elements
(particles, fields, or what have you) which simply interact according to fixed roles, and
which themselves never change as a result of the processes in which they take part.
Naturally, every physical theory has some non-mechanistic aspects. For example, in
the field theory, new entities (waves+particle — like singularities) can arise out of the
interconnections of the basic field elements through the field equations (especially
if the latter are non-linear). Also in a particle theory, new entities can arise out of
interactions. [...] Nevertheless, the basic elements in such theories are usually
conceived of as fixed and eternal. However, the concept of the infinity of levels shows
that there need exist in nature no such thing as a basic element which never changes.
Thus, causal laws not only determine the future in a mechanical sense; i.e., in the
sense of determining quantitative changes in the arrangements of entities whose
intrinsic character is fixed. The causal laws also tell when qualitative changes will
occur and may define the characteristics of the new entities that can come into being.
Thus, causality is a broader concept than that of mechanical determinism. It contains
limited mechanical determinism as a special case. Indeed, the concept of causality
is continually evolving with the development of science and other aspects of human
activity, so that the potential richness of this concept has no limit. In other words, we
may expect future generations to discover more and more aspects of the concept of
causality, thus transforming this concept in a way that we have at present no inkling
of. Yet these changes will not be arbitrary, but will instead grow in a definite way out
of the efforts to solve real problems presented by the successive levels of reality that
we shall be able to reach. A “mechanistic” attitude toward science however, tends
to limit the growth of our concepts in an arbitrary and dogmatically conceived way.
Such a mechanistic attitude refers not only, however, to the mechanistic determinists,
but also to the “mechanistic indeterminists”, who insist that in the quantum of action,
we have reached an ultimate, indivisible, and unanalyzable entity, which will never
be found to have a structure understandable in terms of a deeper level. In fact, the
quantum of action presents many aspects of the ultimate particles of the atomists,
so that the insistence that the quantum will never be analyzed is as mechanistic as a
theory of point particles following determined orbits. Similarly, the insistence that
chance+probability are not subject to a causal analysis at a deeper level constitutes a
mechanistic attitude toward these things, since chance+probability are conceived of
as existing in themselves and functioning under all possible circumstances according
to fixed rules. [...]
According to the mechanistic
indeterminists, it is fixed by an equally mechanical “chance” which is conceived
of as absolute and not itself capable of change or development. We may make an
analogy of a man who is offered the possibility of 100 different ways of being
executed. The deterministic school of executioners would choose the way according
to certain definite factors, e.g., the chemical concentration of the blood, the wave
- length of the light emitted from his skin, etc. The indeterministic school would
choose the way by spinning a roulette wheel. The non-mechanistic school would seek a qualitative change - i.e., to find a way to escape execution, taking advantage of all
factors, both “determinate” and “chance”. So the essential point is that because of
the infinite complexity and depth of the laws governing the nature of matter, no preassigned
scheme of things can remain adequate forever, not even if it is restricted
to being a general framework or outline. But this is just what most people find
it difficult to accept – perhaps because our society requires us to accept the idea
that a certain general form of social organization is inevitable, although within this
general framework, we may make various quantitative changes, either by chance, or
by determinate rule, as we please, as long as nothing essential is ever changed. [...]
My own opinion is that the
synthesis will eventually have to be on a still deeper level and will have to introduce
new kinds of entities that are neither particles nor fields, of which we have only a
vague idea at present.
§.§ Excerpt of a letter from Bohm to Melba Phillips (March 15, 1954)
Letter 46. Folder C48, dated: March 15, 1954, <cit.>, p. 170:
First of all, it is necessary to
sharpen the distinction between causality and mechanism (or deterministic mechanism).
Mechanism is characterized by two fundamental aspects:
(1) Everything is made of certain basic elements which themselves never change
in essence (i.e., qualitatively).
(2)All that these elements can do is to undergo some quantitative change according
to some fixed laws of change. For example, if they are bodies, they can move in space.
If they are fields, they can change their numerical values, etc. But the basic elements
themselves never undergo qualitative change.
If we postulate an infinity of levels, then we make a step beyond mechanism. For
the elements existing at each level are made of still smaller elements in motion (i.e.,
changing quantitatively), and the mode of being of the higher level elements arises
out of the motions of the lower level elements. Thus, there are no elements that can
never change.
Indeed, even if we have a finite number of levels, some qualitative change is
possible within a mechanistic theory. For example, with atoms in chaotic motion, we
obtain new large scale properties, such as pressure, temperature, etc., new entities,
such as gas, liquid, solid, and qualitative changes between them. Now, at first sight, it
may seem that we could eliminate the large-scale level by analyzing it in terms of its
basic molecular motions. And if there were a finite number of levels, this would be
true. But if there are an infinite number, then each level stands on a footing that is, in
the long run, as basic as that of any other. For every level has below it a deeper one.
Indeed, matter can be regarded as made up of the totality of all levels. Each level
makes its own specific contribution to the totality. Of course, each level finds an
image in others, so that one can deduce many properties of a given level by studying
other levels. Yet, there may be properties that cannot so be deduced. Not only may
these properties be peculiar to a given level, but they may involve “crossing” of levels. [...]
Now, a mechanical law is characterized by the fact that it specifies a rule governing
quantitative changes of elements that are fixed in nature. A more general causal
law may express the conditions governing qualitative change. But if it does this, it
must do something else that a mechanical law is never called upon to do. It must not
only determine the mode of change, but also the mode of being of the elements when
they are not changing. A mechanical law simply postulates a certain fixed and eternal
mode of being of the elements, so that there is a sharp separation between the laws of
change and the mode of being of the elements. A more general causal law does not
make such a sharp separation. Thus, in the theory of evolution, the principle of natural
selection enables us to say something about the mode of being of the various forms of
life, in terms of their past history of evolution, struggle for survival, etc. Similarly, in
embryology, one can in part, understand the characteristic properties of an animal at
a given stage of development in terms of its past history which helped make it what it
now is. Thus, a more general causal law may be historical in form. By this, I mean that
the very mode of being of the elements which enter into the laws is
a necessary consequence of the causal laws governing the whole chain of development.[...]
A causal law may express the necessity of a fundamental qualitative change, so
that what develops may have something new in it. This something new arise[s] as
a necessary consequence of what is old, and yet it is not just a rearrangement or
a quantitative change of the old elements.
§.§ Excerpt of a letter from Bohm to Miriam Yevick (September 10, 1954)
Letter 121. Folder C124, dated: Sept 10 1954, <cit.>, p. 419-22:
The picture which I propose is this: The totality of causal laws includes both
statistical and individual laws. We start with this totality as our basic reality. Then,
we may take various views of this totality, some of which stress the individual aspect
of the laws, and some of which stress the statistical aspect. But there is no such thing
as a perfect individual law, because there are always fluctuations and errors coming
from what has been left out. [...]
We start with the idea of a real world, which
is in a continual process of change and development. We must now find means of
analyzing this change and development. To begin, we seek those aspects that have a
relative permanence. Over a short period of time, these aspects may be idealized and
abstracted as having a being, conceived of as static. But like the mathematical point,
the notion of a property or an aspect of things as having such a static and complete
being is only a simplifying abstraction. In reality it does not have such static being,
as is shown by the fact that it changes after some time. The fundamental reality is that of matter in being and in process of change, or of becoming, as it may more
accurately be called. [...]
We note that causal laws are relationships
between various aspects of reality at different times. Depending on which aspects that
we find are necessary, possible, or convenient to relate, we will have different kinds
of causal laws, some more nearly statistical and some more nearly individual. But the
essential point is that one and the same system simultaneously obeys individual and
statistical laws. [...] Thus, we do not regard the world as made of certain fixed eternal basic elements,
satisfying corresponding laws. [...]
[S]tatistical laws are not purely a matter of convenience and practicability. Moreover
every level of individual law ultimately has some deeper statistical basis. A more
accurate statement of the problem is thus:
Both for reasons of practical convenience and for reasons of principle, we study
statistical aggregates in their own right. [...]
What must be stressed however is that
individual and statistical laws are abstractions as limiting cases of laws in general, and that there remains before us the problem of formulating more general types
of laws that could connect these two limiting cases in a continuous and rationally
understandable way.
§ APPENDIX B – EXCERPTS FROM THE WRITINGS OF D. BOHM
§.§ Excerpts from Causality and Chance (1957)
Evidently, then, the applicability of the theory of probability to scientific and other statistical problems
has no essential relationship either to our knowledge or to our ignorance. Rather, it depends only on the
objective existence of certain regularities that are characteristic of the systems and processes under
discussion, regularities which imply that the long run or average behaviour in a large aggregate of objects or
events is approximately independent of the precise details that determine exactly what will happen in each
individual case.
On the basis of the above considerations, we are then led to interpret the probability of, for example, a
given result in the game of dice as an objective property associated with the dice that are being used and
with the process by which they are thrown, a property that can be defined independently of the question of
whether or not we know enough to predict what will happen in each individual throw. (p. 27)
When we study any particular set of processes within one of its relatively autonomous contexts, we
discover that certain relationships remain constant under a wide range of changes of the detailed behaviour
of the things that enter into this context. Such constancy is interpreted not as a coincidence, but rather as an
objective necessity inherent in the nature of the things we are studying. These necessary relationships are
then manifestations of the causal laws applying in the context in question. These laws do not have to determine
a given effect uniquely. Instead, they may (in the case of one-to-many relationships) determine only that the
effect must remain within a certain range of possibilities. (p. 29)
Now, as we shall see in this chapter and in other parts of the book, the mechanistic philosophy has taken
many specific forms throughout the development of science. The most essential aspects of this philosophy
seem to the author, however, to be its assumption that the great diversity of things that appear in all of our
experience, every day as well as scientific, can all be reduced completely and perfectly to nothing more than
consequences of the operation of an absolute and final set of purely quantitative laws determining the
behaviour of a few kinds of basic entities or variables. (p. 37)
The essential change brought in by this new point of view was the introduction of an element of
arbitrariness into the theory. One still thought of the universe as a gigantic mechanical system with the
property that everything in it can in principle be reduced completely and perfectly to nothing more than the
results of purely quantitative changes taking place in suitable mechanical parameters. But instead of having
its behaviour determined completely in terms of definite laws governing these parameters, this universal
system could continually be subject to irregular alterations in the course of its motion. [...]
For we now see that there is a whole level in which
chance fluctuations are an inseparable part of the mode of being of things, so that they must be interwoven
into the fabric of the theory of this level in a fundamental way. Thus, we have been led to take an important
step beyond the classical notion of chance as nothing more than the effects of contingencies that modify the
boundary conditions or introduce randomly fluctuating external forces in a way that is not predictable within the context of interest, but which play no essential part in the formulation of the basic laws that apply within such a context.
If we stopped at this point, however, we should, as we have seen in the previous chapter, merely have
switched from deterministic to indeterministic mechanism. To avoid indeterministic mechanism, we must
suppose that, in their turn, the chance fluctuations come from something else. Since, as Heisenberg and Bohr
have shown so well, there is no room in the quantum domain for anything to exist in which these
fluctuations might originate, it is clear that to find their origin we must go to some new domain. [...]
Of course, if
one were now to make the assumption that these new laws would surely be nothing more than purely causal
laws, one would then fall back into deterministic mechanism, while the similar assumption that they were
surely nothing more than laws of probability would throw one back into indeterministic mechanism. On the
other hand, we have in the proposals made in this chapter avoided both these dogmatic and arbitrary
extremes, since we have considered, as the situation demanded, the possibility that there are new features to
the causal laws (a “quantum force” not appearing at higher levels) as well as to the laws of chance (random
fluctuations originating in the sub-quantum mechanical level).
Of course, as we have indicated in Section 5, we do not regard our earlier proposals as providing a
completely satisfactory and definitive interpretation of the laws of the quantum domain. The basic reason is,
in a sense, that the fundamental concepts considered in the theory (waves and particles in interaction) are
still very probably too close to those applying in the classical domain to be appropriate to a completely new
domain such as that treated in the quantum theory. (pp. 126-127)
Actually, however, neither causal laws nor laws of chance
can ever be perfectly correct, because each inevitably leaves out some aspect of what is happening in
broader contexts. [...] Thus, we are led to regard these two kinds of laws as effectively furnishing different views of any
given natural process, such that at times we may need one view or the other to catch what is essential, while
at still other times, we may have to combine both views in an appropriate way. But we do not assume, as is
generally done in a mechanistic philosophy, that the whole of nature can eventually be treated completely
perfectly and unconditionally in terms of just one of these sides, so that the other will be seen to be
inessential, a mere shadow, that makes no fundamental contribution to our representation of nature as a whole. (p. 143)
§ APPENDIX C – EXCERPTS FROM THE SECONDARY LITERATURE ABOUT D. BOHM
§.§ Excerpt from Freire, O. Jr, David Bohm: A life dedicated to understanding the quantum world
Évry Schatzman, who was the intermediary for Bohm to contact Vigier, wrote to Bohm: “Any physical theory should be completely deterministic, because an affirmation of the dialectical materialism is that there is an objective reality and that this reality is cognizable, that we can built an image of that reality in our mind”. Schatzman was far from modest about the work which was being done by Bohm and Vigier, comparing it to Marx’s works: “We should be grateful to people like Vigier, like you, who have with tenacity devoted their efforts to the rebuilding of the quantum theory on its feet, just like the dialectic
of Hegel, which had to be put back on its feet!” However, if the Marxist background
was the cement, the collaboration between Bohm and Vigier blossomed in a fruitful
scientific collaboration. (<cit.>, p. 91)
|
http://arxiv.org/abs/2307.03923v1 | 20230708073717 | New Methods for MLE of Toeplitz Structured Covariance Matrices with Applications to RADAR Problems | [
"Augusto Aubry",
"Prabhu Babu",
"Antonio De Maio",
"Massimo Rosamilia"
] | eess.SP | [
"eess.SP"
] |
Submitted to IEEE Trans. on Signal Processing...
New Methods for MLE of Toeplitz Structured Covariance Matrices with Applications to RADAR Problems
Augusto Aubry, Senior Member, IEEE, Prabhu Babu, Antonio De Maio, Fellow, IEEE, and Massimo Rosamilia, Member, IEEE
A. Aubry and A. De Maio are with the Department of Electrical Engineering and Information Technology, Universita degli Studi di Napoli “Federico II”, DIETI, Via Claudio 21, I-80125 Napoli, Italy (E-mail: [email protected], [email protected]).
P. Babu is with CARE, IIT Delhi, New Delhi, 110016, India (E-mail: [email protected])
M. Rosamilia is with the National Inter-University Consortium for Telecommunications, 43124 Parma, Italy (e-mail: [email protected]).
August 12, 2023
This work considers Maximum Likelihood Estimation (MLE) of a Toeplitz structured covariance matrix. In this regard, an equivalent reformulation of the MLE problem is introduced and two iterative algorithms are proposed for the optimization of the equivalent statistical learning framework. Both the strategies are based on the Majorization Minimization (MM) paradigm and hence enjoy nice properties such as monotonicity and ensured convergence to a stationary point of the equivalent MLE problem. The proposed framework is also extended to deal with MLE of other practically relevant covariance structures, namely, the banded Toeplitz, block Toeplitz, and Toeplitz-block-Toeplitz. Through numerical simulations, it is shown that the new methods provide excellent performance levels in terms of both mean square estimation error (which is very close to the benchmark Cramér-Rao Bound (CRB)) and signal-to-interference-plus-noise ratio, especially in comparison with state of the art strategies.
§ INTRODUCTION
Estimation of the data covariance matrix has diverse applications in radar signal processing, such as direction
of arrival estimation, target detection, adaptive beamforming, and sidelobe canceller design <cit.>. In these situations, the interference covariance matrix is estimated from the secondary/training data, which are assumed target-free and collected from spatial and/or temporal returns corresponding to range cells close to the one of interest. When the data follows a complex, zero-mean, circular Gaussian distribution, it is well known that the Sample Covariance Matrix (SCM) is the unstructured Maximum Likelihood (ML) estimate of the covariance matrix. However, in the presence of a small number of training data and/or when mismatches in training data spectral properties occur, it does not always represent a reliable choice for the covariance inference <cit.>. A well-known strategy, often discussed in the open literature to improve the performance of a covariance estimator, relies on the incorporation of some a priori knowledge about its underlying structure. For instance, in some radar applications, it is customary to suppose that data come from a stationary Gaussian random process, leading to a Hermitian symmetric Toeplitz Structured Covariance (TSC) matrix. Leveraging this information, one can obtain (under the design conditions) a more reliable estimator than the SCM <cit.>. Aside radar applications, the estimation of a TSC matrix is encountered in speech recognition <cit.>, spectral estimation <cit.>, gridless compressive sensing <cit.>, and hyperspectral imaging <cit.>.
So far, several algorithms have been proposed for estimating a TSC matrix. Let us first discuss those for ML Estimation (MLE). According to the Caratheodory parametrization <cit.>, a Toeplitz covariance matrix R ∈ℍ^m × m can always be decomposed as[Notice that the parametrization is unique provided that the rank of R is smaller than m <cit.>.]
R = A P A^H, [P]_k,k≥ 0,
where
A =
[ 1 ⋯ 1; e^jω_1 ⋯ e^jω_r; ⋮ ⋱ ⋮; e^j(m-1)ω_1 ⋯ e^j(m-1)ω_r ],
P =
[ p̃_1 … 0; ⋮ ⋱ ⋮; 0 … p̃_r ],
ω_i and p̃_i, i=1,2,⋯,r ≤ m, denote some angular frequencies and their corresponding powers, while r indicates the rank of R. Capitalizing on this parametrization, Circulant Embedding (CE) of a Toeplitz matrix (<cit.>) can be used to compute approximately the ML estimate of R. According to CE, a Positive SemiDefinite (PSD) m × m Toeplitz matrix is modeled as
R = F D F^H, D = diag([p_1,p_2,⋯,p_L]), p_k≥ 0,
where F = [ I_m × m 0_m × (L-m) ] W, I_m × m is the identity matrix of size m × m, 0_m × (L-m) is the zero matrix of size m × (L-m), W is the normalized Discrete Fourier Transform (DFT) matrix of size L ≥ 2m-1, and D is a diagonal matrix of size L × L with diagonal elements p_k ≥ 0. Therefore, the matrix R is completely parameterized by the diagonal matrix D. Although estimating the Toeplitz covariance matrix using CE seems attractive, the representation in (<ref>) is valid only for a subset of Toeplitz covariance matrices. This can be intuitively justified because the Caratheodory parametrization in (<ref>) does not restrict the frequency spacing, while the CE in (<ref>) strictly requires the frequencies to lie on the Fourier grid. Hence, for some Toeplitz matrices, the parametrization in (<ref>) is only approximate. Based on CE, <cit.> and <cit.> have proposed an iterative algorithm based on Expectation-Maximization (EM) for MLE of R. By modifying the M step in the EM procedure, in <cit.> the technique has been extended to deal with the banded Toeplitz covariance case. In <cit.>, still leveraging the CE framework, a Majorization Minimization (MM) based optimization, with faster convergence than the EM of <cit.> and <cit.>, has been introduced. In <cit.> a closed-form estimator has been designed by invoking the extended invariance principle to deal with the Toeplitz constraint. Finally, in <cit.>, an efficient approximation of a Toeplitz covariance matrix under a rank constraint has been handled by forcing the eigenvectors to be the same as those of the SCM, whereas the Toeplitz constraint has been explicitly imposed while estimating the eigenvalues. Other than the MLE, several other alternative paradigms have been considered for the problem at hand. Recently, in <cit.> the Toeplitz structure is forced together with a condition number constraint via SCM projection onto a suitable constraint set. Other geometric approaches for TSC estimation have also been proposed in <cit.>.
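For concreteness, a minimal Python/NumPy sketch of the CE parametrization is given below; the symbols and function names are ours (not the authors') and the snippet is purely illustrative. It builds an m × m Toeplitz covariance from nonnegative powers placed on a Fourier grid of size L ≥ 2m-1; note that only frequencies lying exactly on the grid are representable, which is why the CE model covers just a subset of Toeplitz covariances.

```python
import numpy as np

def ce_toeplitz(p, m):
    """Circulant-embedding model of a PSD Toeplitz covariance:
    R = F diag(p) F^H with F = [I_m 0] W, W the normalized DFT matrix."""
    p = np.asarray(p, dtype=float)
    L = p.size
    assert L >= 2 * m - 1 and np.all(p >= 0)
    W = np.fft.fft(np.eye(L)) / np.sqrt(L)   # normalized DFT matrix of size L
    F = W[:m, :]                             # [I_m 0] W keeps the first m rows of W
    return F @ np.diag(p) @ F.conj().T

# sanity check: the result is Hermitian, PSD, and constant along each diagonal
rng = np.random.default_rng(0)
m = 5
R = ce_toeplitz(rng.uniform(0.0, 1.0, 2 * m - 1), m)
print(np.allclose(R, R.conj().T),
      np.linalg.eigvalsh(R).min() > -1e-12,
      np.allclose(np.diag(R, 1), np.diag(R, 1)[0]))
```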
In this manuscript, two iterative algorithms referred to as Alternating Projection Based TOeplitz Covariance Matrix Estimation 1 (ATOM1) and ATOM2 are devised leveraging a suitable reformulation of the MLE problem and the MM framework. Both ATOM1 and ATOM2 involve the construction of a bespoke surrogate function (s.f.) along with its optimization. Specifically, the two procedures construct distinct s.f. and therefore solve different surrogate minimization problems. While ATOM1 addresses the surrogate minimization problem using the Alternating Direction Method of Multipliers (ADMM), ATOM2 handles it either via alternating projection or Dykstra's algorithm. However, both the procedures directly estimate the Toeplitz covariance matrix without forcing a reparametrization via the CE. ATOM2 is also extended to include other constraints, such as banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz structures. The major contributions of this paper can be summarized as follows:
* Two iterative algorithms ATOM1 and ATOM2 are proposed based on the MM framework to address MLE of a Toeplitz covariance matrix. Their computational complexities are thoroughly discussed. Also, the convergence of the procedures to a stationary point of the equivalent MLE problem is established.
* The extensions of ATOM2 to handle additional covariance structures, such as banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz.
* The derivation of the Cramér-Rao Bound (CRB) for the estimation of Toeplitz, banded Toeplitz, and Toeplitz-block-Toeplitz covariance matrices are provided.
* Performance comparisons of the proposed algorithms (included their extensions) with some state-of-the-art procedures via numerical simulations are illustrated, using the Mean Square Error (MSE) and the Signal-to-Interference-plus-Noise Ratio (SINR) (for case studies related to radar applications) as performance metrics.
The organization of the paper is as follows. The MLE problem of Toeplitz covariance matrix for complex, zero-mean, circular Gaussian observations is formulated in Section <ref>. In Section <ref>, ATOM1 and ATOM2 algorithms are proposed, along with a discussion on their computational complexity and implementation aspects. Also, their convergence properties are studied. At the end of this section, the extension of ATOM2 to handle additional constraints along with the Toeplitz requirement is discussed too. In Section <ref>, the CRB for the estimation of Toeplitz, banded Toeplitz, and Toeplitz-block-Toeplitz covariance matrices is computed. In Section <ref>, the proposed algorithms are compared with some state-of-the-art techniques, and finally, concluding remarks are given in Section <ref>.
§.§ Notation
Throughout the paper, bold capital and bold small letter denote matrix and vector, respectively. A scalar is represented by a small letter. The value taken by an optimization vector at the t^th iteration is denoted by _t.
Furthermore, ℝ is used to denote the set of real numbers, ℝ^m and ℂ^m are used to represent the sets of m dimensional vectors of real and complex numbers, respectively, whereas ℝ^m × m, ℂ^m × m, and
ℍ^m × m are used to represent the sets of m × m matrices of real numbers, m × m matrices of complex numbers, and m × m Hermitian matrices, respectively. Superscripts (·)^T, (·)^*, (·)^H, and (·)^-1 indicate the transpose, complex conjugate, complex conjugate transpose, and inverse, respectively. For any x ∈ℝ, ⌈ x ⌉ returns the least integer greater than or equal to x. The trace and the determinant of a matrix are denoted by Tr() and ||, respectively. The notation []_i is used to represent the i^th column of the matrix . The symbol ⊗ indicates the Kronecker product while the gradient of a function f is denoted by ∇ f. The symbol ≽ (and its strict form ≻) is used to denote the generalized matrix inequality: for any ∈ℍ^m × m, ≽ 0 means that is a PSD matrix (≻ 0 for positive definiteness). Besides, for any ∈ℍ^m × m, eig() is the vector collecting the eigenvalues of (sorted in increasing order). The Euclidean norm of the vector is denoted by _2, || indicates the element wise modulus of the vector . The notation E[·] stands for statistical expectation. Finally, for any ,∈ℝ^m× m, max(,) refers to the matrix containing the element wise maximum between and .
§ PROBLEM FORMULATION
Let us assume the availability of n independent and identically distributed vectors {x_1, x_2, ⋯, x_n}, where each x_i is of size m and follows an m-variate complex, zero-mean, circular Gaussian distribution with covariance matrix R ≻ 0. The maximum likelihood covariance estimation problem can be formulated as
R ≻ 0 minimize f̅(R) = 1/n∑_i=1^n x_i^H R^-1 x_i + log|R|.
If n ≥ m, Problem (<ref>) has a unique minimizer with probability one, which is given by the SCM, i.e., R_SCM = 1/n∑_i=1^n x_i x_i^H. However, if the random process from which the observations are drawn is stationary (at least in the wide sense), then the covariance matrix also exhibits a Toeplitz structure which can be capitalized on in the estimation process. By doing so, Problem (<ref>) becomes
MLE: R ∈ Toep, R ≻ 0 minimize f̅(R),
where Toep is used to denote the set of Hermitian Toeplitz matrices of size m × m. The above problem has two constraints: a structural constraint and a positive definite constraint. Even though the structural constraint is convex, the non-convexity of the objective function makes Problem (<ref>) challenging to solve and no analytical solution seems to be available. In the following, two iterative solution procedures for (<ref>) are designed exploiting the MM principle. Briefly, the MM technique mainly consists of two steps
* constructing a s.f. g(|_t) (where _t is the estimate of at the t^th iteration) for the objective function in (<ref>);
* minimizing the resulting surrogate problem at each iteration.
For more details, <cit.> provide an in-depth discussion on MM based algorithms.
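For reference, a minimal sketch (Python/NumPy, with our own notation; this is not the authors' code) of the objective f̅ and of the unstructured ML estimate follows.

```python
import numpy as np

def neg_loglik(R, X):
    """Objective (1/n) sum_i x_i^H R^{-1} x_i + log|R|,
    with the i-th row of X holding the sample x_i (X has shape n x m)."""
    n = X.shape[0]
    Rinv = np.linalg.inv(R)
    quad = np.real(np.einsum('ia,ab,ib->', X.conj(), Rinv, X)) / n
    sign, logdet = np.linalg.slogdet(R)
    return quad + logdet

def scm(X):
    """Unstructured MLE of the covariance (sample covariance matrix)."""
    return sum(np.outer(x, x.conj()) for x in X) / X.shape[0]

# the SCM minimizes the objective among unconstrained covariances when n >= m
rng = np.random.default_rng(1)
n, m = 50, 4
X = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
print(neg_loglik(scm(X), X) <= neg_loglik(np.eye(m), X))
```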
§ ALGORITHMS FOR TOEPLITZ COVARIANCE MATRIX ESTIMATION
In this section, ATOM1 and ATOM2 are proposed to tackle the MLE problem of TSC matrix. Both exploit the MM principle (applied to an equivalent reformulation of the MLE problem) and differ in the way they construct and handle the surrogate minimization problem. ATOM1 solves the surrogate optimization using ADMM while ATOM2 tackles it using either alternating projection or Dykstra's algorithm. Subsequently, the computational complexity and proof of convergence of the procedures are established. Finally, the extension of ATOM2 to deal with additional covariance constraints along with the Toeplitz structure is provided.
Before proceeding further, let us observe that Hermitian Toeplitz matrices intrinsically possess the centro-Hermitian symmetry structure <cit.>, i.e.,
R = J R^* J,
with the m × m permutation matrix J given by
J = [ 0 0 ⋯ 0 1; 0 0 ⋯ 1 0; ⋮ ⋮ ⋱ ⋮ ⋮; 1 0 ⋯ 0 0 ].
As a consequence, Problem (<ref>) is tantamount to
R ∈ Toep, R ≻ 0 minimize f(R),
where
f(R) = Tr(R_FB R^-1) + log|R|
refers to the restriction of f̅(·) to the centro-Hermitian covariance matrices, with R_FB the forward-backward (FB) averaged sample covariance matrix[Hereafter, Problem (<ref>) (and thus (<ref>)) is assumed solvable, i.e., there exists a global optimizer R^* ≻ 0, and any limit point of a feasible sequence of matrices whose corresponding objective values converge to the optimal value is feasible for the optimization problem. As a consequence, without loss of generality, the constraint R ≻ 0 can be relaxed into R ≽ 0. Notably, a sufficient condition to ensure the aforementioned properties is provided by n ≥⌈ m/2 ⌉, corresponding to R_FB ≻ 0 with probability one.] given by R_FB = 1/2 (R_SCM + J R_SCM^* J) <cit.>.
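A small sketch of the forward-backward averaging step (our notation; J denotes the exchange matrix) is reported below.

```python
import numpy as np

def fb_average(R_scm):
    """Forward-backward averaged SCM: R_FB = (R_SCM + J R_SCM^* J) / 2,
    where J is the exchange (anti-identity) matrix."""
    m = R_scm.shape[0]
    J = np.eye(m)[::-1]
    return 0.5 * (R_scm + J @ R_scm.conj() @ J)

# R_FB is centro-Hermitian by construction: J R_FB^* J == R_FB
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R_scm = A @ A.conj().T / 4
R_fb = fb_average(R_scm)
J = np.eye(4)[::-1]
print(np.allclose(J @ R_fb.conj() @ J, R_fb))
```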
Now, decomposing R_FB = V V^H, e.g., via the LDL factorization, with V ∈ℂ^m × r, where r = rank(R_FB) ≤ m, Problem (<ref>) can be equivalently cast as[A similar constraint reformulation is used in some studies involving atomic norm for sparse reconstruction <cit.>.] (the interested reader may refer to Appendix A of the supplementary material to this paper)
min_R ∈ Toep, Z ∈ℍ^r× r Tr(Z) + log|R|
s.t. ([ Z V^H; V R ]) ≽ 0,
where the objective is a concave differentiable function of R and Z.
Before proceeding with the next important lemma, it is worth pointing out that Problem (<ref>) holds true even if the Toeplitz structural constraint in Problem (<ref>) and (<ref>) is replaced by any set of positive definite matrices, provided that the estimation problem is solvable.
Given a concave differentiable[For a non-differentiable function, the inequality in (<ref>) can be cast as h(X) ≤ h(X_t) + Tr(G(X_t)^H (X - X_t)), where G(X_t) is the subgradient of the concave function h(X) at X_t <cit.>.] function h(X): ℍ^r × r→ℝ, it can be majorized as
h(X) ≤ h(X_t) + Tr(∇h(X_t)^H (X - X_t)),
where X_t ∈ℍ^r × r. The upper bound to h(X) is linear and differentiable with respect to (w.r.t.) X.
Since h(X) is a concave function w.r.t. X, (<ref>) stems from linearizing h(X) via its first order Taylor expansion <cit.>.
In order to tackle the challenging optimization problem (<ref>), MM-based methods <cit.>, denoted ATOM1 and ATOM2, are now developed.
To this end, let us observe that the term log|| in (<ref>) is a concave function w.r.t. <cit.>. Hence, it can be majorized using Lemma <ref> to get the following s.f.
g(,|_t) =() + (_t^-1) + c_1
=(_t) + c_1,
where the constant c_1 = log|_t| - m, _t = diag(,_t^-1), whereas = diag(,) is the block-diagonal matrix with blocks and along the main diagonal. Given _t, which in our case is the value assumed by the variable _t at the t-th iteration of the algorithm, the MM method demands for the solution of the following surrogate minimization task
_t+1 = ∈ Toep, ∈ℍ^r× r arg min g(,|_t)
s.t. ([ ^H; ]) ≽,
which is a SDP problem. Unfortunately, the computational complexity necessary to handle SDP using interior point methods is 𝒪((r+m)^6.5) <cit.>. In order to alleviate the computational issue, two different approaches are pursued. The former directly handles Problem (<ref>) via the iterative ADMM algorithm. The latter, by means of a suitable manipulation of (<ref>), constructs a different s.f. for the objective function in Problem (<ref>). By doing so, as clearly explained in the following, a computationally efficient and flexible estimation procedure capable of including additional constraints can be developed. To this end, let us observe that, adding and subtracting γTr(^2), (<ref>) is equivalent to
(_t) + γTr(^2)-γTr(^2)
with γ > 0∈ℝ a parameter of the surrogate construction stage (for γ↓ 0, the function in (<ref>) reduces to (<ref>)).
Now, being -Tr(^2) a concave function of and invoking Lemma <ref> applied to the feasible solution _t=diag(_t;_t) with _t = ^H_t^-1 and _t provided by the t-th iteration step of the estimation process, it is possible to construct the s.f. for (<ref>)
g̃(,|_t) = Tr(_t)+γTr(^2)-2γTr(_t)
- γTr(_t^2).
It is worth pointing out that g̃(,|_t) represents a surrogate to a s.f.. Nonetheless, since g̃(,|_t) is a tight approximation of g(,|_t), it is straightforward to show that (<ref>) provides a direct surrogate for the objective function in Problem (<ref>). Hence, given _t and after some algebraic manipulations, the resulting surrogate minimization problem at the t-th iteration can be cast as
_t+1= ∈ Toep, arg min - _t_F^2
subject to +≽0,
where _t = _t - γ'_t, with γ' = 0.5/γ and =[,^H;,].
In the following subsections <ref> and <ref> two iterative methods, i.e., ATOM1 and ATOM2, are proposed to solve the surrogate minimization problems in (<ref>) and (<ref>), respectively.
§.§ ATOM1
The surrogate minimization problem in (<ref>) is solved using ADMM <cit.>. To this end, an auxiliary variable ∈ℍ^r+m × r+m is introduced in (<ref>) and the problem is framed in the equivalent form
min_∈ Toep,≽,∈ℍ^r× r () + ((_t)^-1)
s.t. ([ ^H; ])- =0.
The augmented Lagrangian associated with (<ref>) is
ℒ_ρ(,,,)=() + ((_t)^-1)
+ [^H(([ ^H; ])-)]
+ ρ/2‖([ ^H; ])-‖_F^2,
where ρ >0 is the penalty parameter and is the Lagrange multiplier of size (r+m)× (r+m). Problem (<ref>) can be further rewritten as
ℒ_ρ(, , ) = (_t ) + (^H ( + - ))
+ ρ/2‖ + - ‖_F^2.
The (inner) iterative steps of ADMM algorithm <cit.> are
_k+1^t = ≽min ((_k^t)^T (_k^t + - ))
+ ρ/2‖_k^t + - ‖_F^2
_k+1^t = ∈ Toep,min () + ((_k^t)^T ( + - _k+1^t))
+ ρ/2‖ + - _k+1^t‖_F^2
_k+1^t =_k^t + ρ(_k+1^t + - _k+1^t),
where (·)^t_k is used to denote the k-th inner-iteration of the ADMM algorithm in correspondence of the t-th MM outer-loop. Problems (<ref>) and (<ref>) have closed-form solutions which can be computed via the projection of appropriate matrices onto the respective feasible sets. Indeed, Problem (<ref>) can be equivalently cast as
[ ^t_k+1= ≽ 0 arg min - ^t_k_F^2; ]
where ^t_k = _k^t + + 1/ρ_k^t. Hence, solving (<ref>) is tantamount to performing the orthogonal projection of the matrix ^t_k onto the set of the PSD matrices which can be computed as ^t_k+1=^t_kmax(diag(^t_k),)^t H_k, where diag(^t_k) and ^t_k are the matrices containing the eigenvalues and the corresponding orthonormal eigenvectors of ^t_k, respectively. Similarly, the update step of in (<ref>) can be rewritten as
_k+1^t = ∈ Toep,min ‖ - _k^t‖_F^2,
where _k^t = 𝒫_D–Toep( _k+1^t- - 1/ρ (_k^t +_t)), with 𝒫_D–Toep() computed as follows: Partitioning the matrix as =([ _11 _12; ^H_12 _22 ]) with _12 of size r× m, the orthogonal projection of interest amounts to set the upper diagonal block to _11 whereas the second diagonal block is obtained by averaging the elements along each diagonal of _22 and constructing the corresponding Toeplitz matrix.
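The two closed-form projections used above can be sketched as follows (Python/NumPy; variable and function names are ours, and the snippet is a generic illustration of the operations rather than the authors' implementation).

```python
import numpy as np

def proj_psd(A):
    """Orthogonal projection onto the PSD cone: clip negative eigenvalues."""
    A = 0.5 * (A + A.conj().T)            # enforce Hermitian symmetry
    w, U = np.linalg.eigh(A)
    return (U * np.maximum(w, 0.0)) @ U.conj().T

def proj_toeplitz(B):
    """Projection of a Hermitian matrix onto Hermitian Toeplitz matrices:
    average the entries along each diagonal."""
    m = B.shape[0]
    T = np.zeros((m, m), dtype=complex)
    for k in range(m):
        r_k = np.mean(np.diag(B, k))      # average of the k-th upper diagonal
        T += np.diag(np.full(m - k, r_k), k)
        if k > 0:
            T += np.diag(np.full(m - k, np.conj(r_k)), -k)
    return T

def proj_block_dtoep(M, r):
    """Projection onto block-diagonal matrices: the upper r x r block is kept
    as is, the lower block is made Toeplitz by diagonal averaging."""
    out = np.zeros(M.shape, dtype=complex)
    out[:r, :r] = M[:r, :r]
    out[r:, r:] = proj_toeplitz(M[r:, r:])
    return out

# idempotence / feasibility checks on a random Hermitian matrix
rng = np.random.default_rng(3)
H = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = 0.5 * (H + H.conj().T)
print(np.linalg.eigvalsh(proj_psd(H)).min() > -1e-10,
      np.allclose(proj_toeplitz(proj_toeplitz(H)), proj_toeplitz(H)))
```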
Now, partitioning _k^t as _k^t = ([ ^t_11,k ^t_12,k; ^tH_12,k ^t_22,k ])
with ^t_11,k and ^t_22,k being r × r and m × m matrices, respectively, it follows that _k+1^t=^t_11,k and _k+1^t = ^t_22,k.
Before concluding, it is worth pointing out that since the surrogate minimization problem in (<ref>) is convex and only an equality constraint is enforced, ADMM is guaranteed to converge to the unique optimal solution to (<ref>), assumed to exist[A sufficient condition for the existence of the optimal solution to Problem (<ref>) is provided by the solvability of (<ref>).] (see Section 3.2 in <cit.>, <cit.>). The pseudocode of the proposed algorithm is shown in Algorithm 1.
From Algorithm 1 it can be seen that ATOM1 requires initialization of the matrices _0, ^t_0 and ^t_0. _0 can be set using the initialization scheme discussed in <cit.> and, as t=0, ^t_0 can be set equal to ^H_0^-1 while ^t_0 can be constructed as ^t_0 =^H, where the elements of are drawn randomly from a uniform distribution over [0,1]. For t≥1, the matrices ^t_0 and ^t_0 can be initialized with their last value after convergence at the previous ADMM iteration, respectively. Another input parameter required by ATOM1 is the penalty weight ρ, introduced during the construction of the Augmented Lagrangian of the ADMM framework. It is shown in <cit.>, that the ADMM algorithm converges for any value of ρ>0. However, the numerical stability and the convergence rate depends on the choice of ρ. Simulation results have highlighted that for ρ = 1, the ADMM algorithm is stable for different values of n and m. Hence, unless otherwise stated, in all the numerical analysis ρ = 1 is used.
§.§.§ Computational complexity and discussion about ATOM1
ATOM1 is iterative in nature with two loops - the outer-loop updates the Toeplitz matrix _t while the inner-loop solves the surrogate minimization problem using ADMM. Note that in the inner-loop, it is required to construct the data-based matrix = ([ 0 ^H; 0 ]) - which is iteration independent and hence can be pre-computed and stored.
Let us now discuss the complexity related to the outer and inner-loops of ATOM1. The inner-loop of ATOM1 requires the computation of the matrix _t - which is outer-loop iteration dependent. Therefore, this matrix can be evaluated once in each outer-loop. Consequently, apart from the computations involved in the inner-loop, an outer-loop cycle just involves the evaluation of the matrix _t^-1. Since _t is Toeplitz, its inverse can be efficiently computed with a complexity 𝒪(m logm) <cit.>. The computational complexity of an inner-loop cycle is related to the projection of _k^t onto the set of PSD matrices and projection of ^t_k onto the set of block diagonal matrices where the upper part (of size r × r) is unconstrained, whereas the lower block (of size m × m) is Toeplitz structured.
The cost of this latter operation mainly involves the projection of ^t_22,k onto the set of Toeplitz matrices; thus, it is substantially dictated by the computation of average of the elements along the diagonals of ^t_22,k. Hence, the cost of the inner-step 4) is 𝒪(m^2). Next, the projection of onto the set of PSD matrices mainly involves the computation of the eigenvalues and eigenvectors of the matrix _k^t - whose corresponding complexity is 𝒪((r+m)^3) <cit.>. Therefore, the per-outer-iteration computational complexity of ATOM1 is 𝒪(η(r+m)^3) where η is the total number of inner-loop iterations required by the algorithm to converge.
A drawback of ATOM1 is the lack of a theoretical quality guarantee when it has to handle additional constraints on the covariance matrix. This is because ATOM1 implements ADMM algorithm at each inner-iteration which requires (to endow convergence guarantees to the process) the optimization problem to exhibit the standard form <cit.>
[ , minimize h_1(_1) + h_2(_2); subject to _1_1+_2_2 = ]
where h_1(_1), h_2(_2) are convex functions and _1, _2, are matrices of appropriate dimensions, respectively. Therefore, to incorporate additional inequality constraints (such as those resulting from upper bound on the condition number of the matrix _1 or a lower bound to the strength of diagonal elements, or more in general an intersection of closed convex sets that can be described by additional auxiliary variables), one needs to replace each inequality constraint with an appropriate equality constraint. This can be done by introducing a slack variable for each inequality constraint to the existing optimization variables _1 and _2. However, there is no convergence guarantee of ADMM when there are more than two optimization variables <cit.>. This issue can be addressed by the low complexity algorithm, referred to as ATOM2, proposed to solve Problem (<ref>).
§.§ ATOM2
Problem (<ref>) is tantamount to seeking the block diagonal matrix belonging to the intersection of the two sets - the former defined by block diagonal matrices with the lower diagonal block of size m × m fulfilling a Toeplitz structure and the latter given by the Linear Matrix Inequality (LMI) <cit.> + ≽ 0 - with minimum distance from . Since the feasible set of (<ref>) is the intersection of convex sets, a viable, even though heuristic, means to tackle Problem (<ref>) is provided by the alternating projection or Projection Onto the Convex Sets (POCS) technique <cit.>, which has already been successfully applied in the signal processing context, e.g., <cit.>.
Let us denote by 𝒫_LMI() the orthogonal projection of an arbitrary matrix onto the set defined by +≽0. Now, to proceed further and employ the POCS framework, 𝒫_D–Toep() and 𝒫_LMI() projections must be employed. Remarkably, both can be obtained in closed-form: the former is computed as described in subsection <ref>; as to the latter, the orthogonal projection onto the set defined by LMI +≽0 is computed by first evaluating the EigenValue Decomposition (EVD) of the matrix +, i.e., obtaining [, ] = eig( +), where and are matrices containing the eigenvalues and eigenvectors of the spectral decomposition, respectively. Then, the orthogonal projection 𝒫_LMI() is given by max(,)^H -.
According to POCS method, given an initial value _0^t = _t, at the k-th inner-iteration first compute ^t_k+1 =𝒫_D–Toep(^t_k) and then, using ^t_k+1, determine ^t_k+1=𝒫_LMI(^t_k+1) which represents the starting point ^t_k+1 of the next inner-iteration. Hence, the POCS-based solution approach finds a sequence of iterates {^t_k} by alternatingly projecting between the two convex sets. Nevertheless, as reported in <cit.>, POCS may suffer from slow convergence. Even more crucial, the convergence to the global optimal solution to (<ref>) is, in general, not ensured <cit.>. A possible solution to the aforementioned shortcoming is provided by Dykstra's projection <cit.> which is a refinement of POCS capable of finding a point closest to _t by adding correction matrices _k and _k before each projection is performed, which in-turn ensures convergence of sequence {_k+1} to the optimal solution ^*=^* <cit.>. The pseudocode of Dykstra's algorithm is shown in Algorithm 2.
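A generic sketch of Dykstra's projection between two convex sets is reported below (Python/NumPy; names are ours). In ATOM2 the two projections would be the block-diagonal/Toeplitz and LMI projections described above; the toy usage at the end employs a different and simpler pair of sets (the nearest correlation matrix problem) purely to exercise the routine.

```python
import numpy as np

def dykstra(M0, proj_c1, proj_c2, n_iter=1000, tol=1e-10):
    """Find the point of C1 ∩ C2 closest to M0 (Frobenius norm) by Dykstra's
    alternating projections with correction matrices P and Q."""
    X, P, Q = M0.copy(), np.zeros_like(M0), np.zeros_like(M0)
    for _ in range(n_iter):
        Y = proj_c1(X + P)
        P = X + P - Y
        X_new = proj_c2(Y + Q)
        Q = Y + Q - X_new
        if np.linalg.norm(X_new - X) <= tol * max(1.0, np.linalg.norm(X)):
            return X_new
        X = X_new
    return X

# toy usage: nearest correlation matrix (PSD with unit diagonal)
def proj_psd(A):
    w, U = np.linalg.eigh(0.5 * (A + A.T))
    return (U * np.maximum(w, 0.0)) @ U.T

def proj_unit_diag(A):
    B = A.copy()
    np.fill_diagonal(B, 1.0)
    return B

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
A = 0.5 * (A + A.T)
C = dykstra(A, proj_unit_diag, proj_psd)
print(np.linalg.eigvalsh(C).min(), np.abs(np.diag(C) - 1.0).max())  # both close to 0
```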
Once the optimal solution ^* is obtained via Dykstra's projection, the matrix _t+1 can be constructed from its lower diagonal block of size m × m. This process is repeated until the whole MM-procedure, i.e., including the outer-loop, converges.
The complete ATOM2 is summarized in Algorithm 3.
It requires the initialization of the matrix . In this respect, a similar scheme as in ATOM1 is followed, i.e., at each outer-iteration, the initial guess required to determine _t+1 in the inner-loop is obtained starting from _t.
§.§ Computational complexity of ATOM2
Like ATOM1, ATOM2 is an iterative algorithm with outer- and inner-loops. The outer-loop updates the Toeplitz matrix _t and the inner-loop implements Dykstra's algorithm - which requires the computation of the matrices and _t^-1. The former is an iteration-independent data matrix and therefore can be pre-constructed. The latter is outer-loop iteration dependent and therefore can be computed once in each outer-loop. Consequently, apart from the inner-loop computations, the outer-loop demands only the
computation of _t^-1 - which can be computed efficiently with complexity 𝒪(m logm). Meanwhile, the computational load of the inner-loop stems from the evaluation of EVD of the matrix (_k +_k) plus a data matrix - which has a complexity of about 𝒪((r+m)^3).
In Table <ref>, the computational complexity of ATOM1 and ATOM2 is compared with that of the state-of-the-art iterative algorithms <cit.>. Unlike the proposed algorithms, the state-of-the art methods are single loop iteration algorithms. Therefore, in the case of <cit.> η is used to represent the number of iterations required by the algorithm to converge. Inspection of Table <ref> shows that ATOM1 and ATOM2 have the highest complexity when compared to MELT and EM. Nevertheless, it is worth anticipating that this complexity increase is complemented by a superior performance in terms of generality of the problem solved (ATOM1 and ATOM2 do not exploit the CE, ATOM2 permits to handle additional structural constraints with quality guarantee, as shown in subsection <ref>), covariance matrix MSE, and achieved SINR.
§.§ Proof of convergence
In this subsection, the proof of convergence of ATOM1 and ATOM2 is established. In this regard, it is worth pointing out that both the algorithms differ in the way they construct and optimize the s.f. for the Problem (<ref>). Nonetheless, since ATOM1 and ATOM2 are based on the MM framework, the proof of convergence based on the following Theorem will hold for both algorithms.
Before stating the Theorem, let us first introduce the first-order optimality condition for minimizing a function over a convex constraint set. A point is a stationary point of f(·) if f'(;) ≥ 0 for all such that +∈𝒞, where 𝒞 is the convex constraint set and f'(;) is the directional derivative of f(·) at point in direction and is defined as <cit.>
[ f'(;) =λ↓ 0lim inf f(+λ) - f()λ ].
Based on the following Theorem, both ATOM1 and ATOM2 are guaranteed to converge to a stationary point of Problem (<ref>).
Denoting by {_t} the sequence of matrices generated by either ATOM1 or ATOM2, then the objective function of Problem (<ref>) monotonically decreases along the iterations. Besides, any positive definite cluster point[Under the assumption m≥ n/2, all the cluster points are demanded to be positive definite.] to _t is a stationary point to Problem (<ref>).
See Appendix B of the supplementary material for details.
§.§ Extensions of ATOM2
The augmentation of ATOM2 to handle additional constraints other than the Toeplitz structure in the covariance estimation process is now addressed. In particular, it is shown that ATOM2 can be generalized to account for the following scenarios: Banded Toeplitz, block-Toeplitz, and Toeplitz-block-Toeplitz matrices. On the other side, as already mentioned in subsection <ref>, ATOM1 cannot be directly extended to tackle the general constraints as for instance an upper bound requirement to the condition number.
§.§.§ MLE of banded Toeplitz covariance matrix
The covariance matrix is constrained to exhibit a banded Toeplitz structure of bandwidth b (see <cit.> for relevant applications). For instance, assuming a bandwidth b=2 and dimension m=5 the covariance matrix enjoys the following structure
R =
[ r_1 r_2 r_3 0 0; r^*_2 r_1 r_2 r_3 0; r^*_3 r^*_2 r_1 r_2 r_3; 0 r^*_3 r^*_2 r_1 r_2; 0 0 r^*_3 r^*_2 r_1 ].
Then, the MLE problem for a banded Toeplitz covariance matrix can be formulated as
R ∈ Band-Toep, R ≻ 0 minimize 1/n∑_i=1^n x_i^H R^-1 x_i + log|R|,
where Band-Toep is used to denote the set of banded Toeplitz matrices. As in (<ref>), the above problem can be cast in the following equivalent form
R ∈ Band-Toep, Z ∈ℍ^r× r minimize Tr(Z) + log|R|; subject to ([ Z V^H; V R ]) ≽ 0.
Hence, (<ref>) is handled via MM framework solving the following surrogate minimization problem
[ minimize - _F^2; subject to + ≽0; = diag(,) with being a; banded Toeplitz matrix ]
The above problem involves two convex sets: the set defined by the LMI +≽0 and the set of block diagonal matrices where the second block has a banded Toeplitz structure with bandwidth b. Consequently, Dykstra's projection algorithm or POCS can be used to solve Problem (<ref>). The projection of a matrix onto the LMI set can be calculated as discussed earlier in Subsection <ref>. The projection of a matrix = ([ _11 _12; ^H_12 _22 ]) onto the set of block diagonal matrices with the second banded Toeplitz block can be obtained as follows. The first diagonal block is the same as _11 and the second diagonal block is constructed by averaging the entries of the main and the first b upper-diagonals of the matrix _22 and computing the corresponding Toeplitz matrix <cit.>.
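A rough sketch of the banded-Toeplitz projection just described follows (our code; the bandwidth b counts the retained off-diagonals, as in the m = 5, b = 2 example above).

```python
import numpy as np

def proj_banded_toeplitz(B, b):
    """Projection onto Hermitian banded Toeplitz matrices of bandwidth b:
    average the main diagonal and the first b upper diagonals of B,
    and set the remaining diagonals to zero."""
    m = B.shape[0]
    T = np.zeros((m, m), dtype=complex)
    for k in range(min(b, m - 1) + 1):
        r_k = np.mean(np.diag(B, k))
        T += np.diag(np.full(m - k, r_k), k)
        if k > 0:
            T += np.diag(np.full(m - k, np.conj(r_k)), -k)
    return T

# example: with m = 5 and b = 2 only r_1, r_2, r_3 survive, as in the matrix above
rng = np.random.default_rng(5)
H = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
H = 0.5 * (H + H.conj().T)
print(np.round(proj_banded_toeplitz(H, 2), 2))
```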
§.§.§ MLE of block-Toeplitz or Toeplitz-block-Toeplitz covariance matrix
In space-time adaptive processing radar applications, the covariance matrix exhibits a block-Toeplitz (BT) or a Toeplitz-block-Toeplitz (TBT) structure. An example of a BT-structured covariance matrix with p blocks is shown below
R =
[ R_0 R_1 … R_p-1; R^H_1 R_0 … R_p-2; ⋮ ⋱ ⋱ ⋮; R^H_p-1 … R^H_1 R_0 ].
When each block R_i exhibits a Toeplitz structure, R is TBT <cit.>.
The MLE problem of a BT or a TBT covariance matrix is formulated as
minimize_R ∈ BT (TBT), R ≻ 0 1/n∑_i=1^n x_i^H R^-1 x_i + log|R|,
where the notation BT (TBT) is used to indicate the set of BT (TBT) matrices. A feasible solution to Problem (<ref>) can be obtained by solving at any given step the following surrogate optimization problem
[ minimize - _F^2; subject to +≽0; is a block diagonal matrix with; the second diagonal BT (TBT) block ].
Problem (<ref>) exhibits two constraints - 1) a LMI constraint and 2) a structural constraint - where the optimization variable is confined to be a block diagonal matrix with the second block having a BT (TBT) structure. Since both the constraints are convex, Dykstra's projection or POCS can be applied to solve Problem (<ref>). The projection of a matrix onto the LMI set can be calculated as discussed earlier in Section <ref> B. The projection of a given matrix onto the set of matrices whose second diagonal block has the BT (TBT) constraint can be obtained as follows. For the first diagonal block, the submatrix _11 is directly used. Then, the second diagonal block is obtained following two (three) steps. First, p matrices are obtained by averaging the (upper-right) diagonal blocks of the matrix _22. Then, only for TBT, each of the p matrices are projected onto the Toeplitz set as described in subsection <ref>. Finally, the resulting matrix is constructed according to (<ref>).
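For illustration, the block-averaging step can be sketched as follows (the helper name is ours); in the TBT case, each averaged block would additionally be projected onto the Toeplitz set through the diagonal averaging described in the previous subsection.

```python
import numpy as np

def project_block_toeplitz(A, p, l):
    """Average the l x l blocks along each block diagonal of a
    (p*l x p*l) Hermitian matrix and rebuild the block-Toeplitz matrix."""
    T = np.zeros_like(A, dtype=complex)
    for w in range(p):
        blocks = [A[i * l:(i + 1) * l, (i + w) * l:(i + w + 1) * l]
                  for i in range(p - w)]
        B = sum(blocks) / (p - w)          # averaged block on offset w
        for i in range(p - w):
            T[i * l:(i + 1) * l, (i + w) * l:(i + w + 1) * l] = B
            if w > 0:
                T[(i + w) * l:(i + w + 1) * l, i * l:(i + 1) * l] = B.conj().T
    return T
```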
§ CRB CALCULATION
In this section, the CRB is derived for the estimation of a Toeplitz structured covariance matrix (the interested reader may refer to Appendix C of the supplementary material with reference to the CRBs of the banded Toeplitz, BT, and TBT covariance models). The CRB provides a lower bound on the variance of any unbiased estimator <cit.>. To proceed further, let θ represent the real-valued vector parametrizing a given covariance matrix structure of interest.
Then, the CRB is the inverse of the Fisher information matrix (FIM) F, whose (i,k)^th element is
[ [F]_i,k = -E[∂^2logf̅(θ)/∂θ_i∂θ_k] ],
where ∂logf̅(θ)/∂θ_i denotes the partial derivative of logf̅(θ) w.r.t. θ_i, with θ_i the i-th element of θ.
Due to the Gaussian assumption, the (i,k)^th element of the FIM can be computed using the Slepian–Bangs
formula <cit.>
[F]_i,k = nTr(R^-1∂R/∂θ_i R^-1∂R/∂θ_k).
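The formula above translates directly into code; the sketch below (our own helper, not code from the paper) evaluates the FIM for an arbitrary parameterization given the covariance matrix and the list of derivative matrices, after which the CRB follows by matrix inversion.

```python
import numpy as np

def fim_slepian_bangs(R, dR_list, n):
    """Slepian-Bangs formula: F[i, k] = n * Tr(R^{-1} dR_i R^{-1} dR_k)."""
    R_inv = np.linalg.inv(R)
    W = [R_inv @ dR for dR in dR_list]     # precompute R^{-1} dR_i
    d = len(W)
    F = np.empty((d, d))
    for i in range(d):
        for k in range(i, d):
            F[i, k] = F[k, i] = n * np.real(np.trace(W[i] @ W[k]))
    return F

# CRB benchmark used in the simulations: crb = np.trace(np.linalg.inv(F))
```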
In the following subsection, the FIM is derived for the Toeplitz covariance structure.
§.§ Toeplitz matrix
As the entries of the TSC matrix are completely characterized by its first row, i.e., [r_1, r_2,⋯, r_m]^T, the covariance matrix R ∈ℍ^m × m can be parameterized by θ = [r_1, Re(r_2),⋯,Re(r_m), Im(r_2),...,Im(r_m)]^T∈ℝ^2m-1, where Re(r_i) and Im(r_i) denote the real and imaginary parts of r_i, respectively. Then, the covariance matrix can be expressed in terms of θ and basis matrices Θ^Toep_g (defined as in (<ref>)), g=1,2,⋯,m <cit.>
[ R = ∑_g=1^mθ_g Re(Θ^Toep_g) + j ∑_g=m+1^2m-1θ_g Im(Θ^Toep_g-m+1) ].
The (i,k)^th element of the matrix Θ^Toep_g is given as
[ [Θ^Toep_g]_i,k =
1+j if i-k=g-1=0
1+j if k-i=g-1≠ 0
1-j if i-k=g-1≠ 0
0 otherwise ].
Using (<ref>), ∂R/∂θ_i can be obtained as
∂R/∂θ_i =
Re(Θ^Toep_i) if 1≤ i ≤ m
j Im(Θ^Toep_i-m+1) if m+1 ≤ i ≤ 2m-1.
Substituting ∂R/∂θ_i in (<ref>) yields the FIM for the Toeplitz covariance matrix.
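Putting the pieces together, a possible implementation of the Toeplitz CRB is sketched below (helper names are ours, and the routine reuses the fim_slepian_bangs function sketched earlier): the basis matrices follow the case definition above, and the derivative matrices are their real parts for the first m parameters and j times their imaginary parts for the remaining m-1 parameters.

```python
import numpy as np

def toeplitz_basis(m, g):
    """Basis matrix with 1+j on the (g-1)-th upper diagonal and 1-j
    on the mirrored lower diagonal (g = 1 gives the main diagonal)."""
    B = np.zeros((m, m), dtype=complex)
    if g == 1:
        np.fill_diagonal(B, 1 + 1j)
    else:
        B += (1 + 1j) * np.eye(m, k=g - 1)
        B += (1 - 1j) * np.eye(m, k=-(g - 1))
    return B

def toeplitz_derivatives(m):
    """dR/dtheta_i: real-part parameters first, then imaginary parts."""
    dR = [np.real(toeplitz_basis(m, g)) for g in range(1, m + 1)]
    dR += [1j * np.imag(toeplitz_basis(m, g)) for g in range(2, m + 1)]
    return dR

# Example: CRB for an m x m Toeplitz covariance R estimated from n snapshots
# F = fim_slepian_bangs(R, toeplitz_derivatives(m), n)
# crb = np.trace(np.linalg.inv(F))
```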
§ NUMERICAL SIMULATIONS
In this section, the performance of the proposed covariance matrix estimators ATOM1 and ATOM2 is numerically analyzed in comparison with the following state-of-the-art algorithms: EM-based <cit.>, MELT <cit.>, the SCM, and the FB estimators <cit.>. First, a convergence analysis of the derived methods is provided, also in comparison with the aforementioned counterparts. Then, the estimation capabilities are analyzed in three different scenarios, using the MSE as performance metric, defined as[In the following, (<ref>) is computed via Monte Carlo techniques.]
MSE = E[‖R̂ - R‖^2],
where R̂ indicates the estimate of the unknown covariance matrix R, obtained according to one of the aforementioned strategies.
First of all, the covariance matrix is assumed to have a Toeplitz structure. Then, the banded Toeplitz, the BT, and the TBT constraints are considered. The CRB-based benchmark, computed as CRB = Tr(F^-1), is reported too, whereby, for each case study, the FIM is appropriately derived, see Section <ref>.
Furthermore, assuming a typical radar signal processing scenario, the performance is also evaluated in terms of average achievable SINR by an adaptive spatial filter.
It is also worth reporting that, in the aforementioned scenarios, ATOM1 and ATOM2 procedures are initialized using the FB estimate _FB, projected onto the set of Toeplitz matrices. Moreover, for the execution of ATOM2, the parameter γ is updated adaptively in each outer-loop iteration according to the following law[As to the adaptive ATOM2 surrogate construction stage, it has been empirically shown that the updating rule (<ref>), with γ_0= 10^-4 and k_1 = 5, provides satisfactory performance in all the scenarios; therefore, unless otherwise stated, ATOM2 s.f. (and the subsequent processing) is constructed using (<ref>) with the aforementioned values.]
γ = γ_0 (t logt+k_1)^2.
To illustrate the role of γ in the optimization process performed by ATOM2, a notional representation of the objective function (conceptually depicted as a one-dimensional curve and corresponding to a specific portion of a restriction of the multivariate objective) and the s.f. of ATOM1 and ATOM2, is reported in Fig. <ref>.
Remarkably, the value of γ affects the trade-off between performance and convergence speed of ATOM2. Indeed, while a smaller γ leads to a better performance (ATOM2 s.f. approaches the ATOM1 one as γ→ 0), it demands more inner-loop iterations to achieve convergence, due to the almost singular resulting metric. On the other hand, a larger γ reduces the overall computational cost, but introduces a growth in the approximation error. However, as the outer-loop iterations increase, the approximation error of the ATOM2 s.f. w.r.t. the objective function decreases as the updated point becomes closer and closer to a local minimum at which the sequence is “converging”. That said, slowly increasing γ with the number of iterations allows the algorithm to reduce its computational burden without degrading its performance.
§.§ Assessment of iterative algorithms convergence for on-grid and off-grid frequencies
In this simulation, the convergence of ATOM1 and ATOM2 (whose inner-loop was implemented via Dykstra's algorithm) is assessed in comparison with MELT and EM algorithms. To this end, each data snapshot x_k∈ℂ^m is modeled as
x_k = R^1/2 n_k, k=1,2, ⋯, n
where n_k∈ℂ^m, k=1,…, n are independent and identically distributed zero-mean circularly symmetric Gaussian random vectors with unit mean square value.
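A snapshot generator consistent with this model can be sketched as follows (the helper name is ours); the 1/√2 factor ensures each complex entry of the noise vector has unit mean square value.

```python
import numpy as np

def generate_snapshots(R, n, rng=None):
    """Columns are x_k = R^{1/2} n_k, with n_k circularly symmetric
    white Gaussian noise of unit mean square value."""
    rng = np.random.default_rng() if rng is None else rng
    m = R.shape[0]
    N = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    w, V = np.linalg.eigh(R)
    R_half = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T
    return R_half @ N
```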
Two different experimental setups are considered, assuming m=6 and n=20. In the former, the true underlying Toeplitz covariance matrix is constructed by choosing the 2-nd, 3-rd, 5-th, 7-th, 8-th and the 11-th column of the DFT matrix with L=2m-1 in (<ref>), corresponding to the frequencies [0.5712, 1.1424, 2.2848, 3.4272, 3.9984, 5.7120] rad, with powers [p_1, …, p_6]^T = [3, 6, 4, 1, 7, 5]^T, respectively. Figs. *fig:negLL_obj_ON_GRID_a and *fig:negLL_obj_ON_GRID_b show the negative log-likelihood (<ref>) and the objective function of problem (<ref>) versus the number of iterations, respectively. It can be seen that all the algorithms numerically improve the negative log-likelihood as the number of iterations increases and almost converge to the same value, with negligible differences. Moreover, Fig. *fig:negLL_obj_ON_GRID_b indicates that the proposed algorithms monotonically decrease the problem objective function, which is expected since they optimize (<ref>) using the MM framework.
In the other experimental setup, the true underlying Toeplitz covariance matrix is constructed such that two of the frequencies are not on the Fourier grid. Therefore, the same parameters used in case study 1 are considered, with the exception that the Fourier frequencies 0.5712 rad and 3.9984 rad are replaced with 0.5 rad and 5.3 rad, respectively. For the case study at hand, the negative log-likelihood (<ref>) and the objective function of (<ref>) are reported in Figs. *fig:negLL_obj_OFF_GRID_a and *fig:negLL_obj_OFF_GRID_b versus the number of iterations, respectively. Inspection of Fig. *fig:negLL_obj_OFF_GRID_a reveals that while MELT and EM converge to a value of ≈ 22.4, ATOM1 and ATOM2 converge to 22. Therefore, when two of the frequencies do not lie on the Fourier grid, the state-of-the-art iterative algorithms converge to a larger value of the negative log-likelihood than the proposed methods. This is due to the fact that unlike the counterparts, the proposed algorithms estimate the Toeplitz covariance matrix without reparametrizing it via the CE technique and thus they are able to cover the whole set of Toeplitz covariance matrices. Furthermore, remarks similar to those made for the on-grid case hold true with reference to the results depicted in Fig. *fig:negLL_obj_OFF_GRID_b.
In the following, the mean computational time[The simulation has been executed using MATLAB R2020b on a desktop computer equipped with an Intel i5 processor and 16 GB of RAM.] (averaged over 1000 Monte Carlo trials) of the proposed techniques and the counterparts is examined. As case studies, four different values of m are considered, i.e., m ∈{4, 8, 16, 32}. Moreover, the data samples _k are generated as (<ref>) using n=4m samples, with R = T + I. The Toeplitz covariance matrix is generated assuming 3 equal power sources, i.e., with p = [5, 5, 5], whose frequencies are randomly selected (at each trial) such that two of them lie on the Fourier grid of the DFT matrix, with L=2m-1, whereas the third one is drawn from a uniform distribution over [0, 2π]. The iterative algorithms have been run until the following condition is met[For the execution of EM and MELT procedures, the exit condition is set as f(_t-1)-f(_t) ≤ 10^-4.]
p(_t-1, _t-1)-p(_t, _t) ≤ 10^-4
with p(, ) = () + log|| the objective function of problem (<ref>),
or until the maximum number of iterations (set equal to 1000) is reached.
The average computational time of the different algorithms (possibly with different values of the hyperparameters) are reported in Table <ref>.
The results show that ATOM2 has, in general, a longer execution time than ATOM1. This is because the inner-loop of ATOM2 (based on Dykstra's algorithm) requires a higher number of iterations, and hence a longer run time to converge, than the ATOM1 inner-loop (implemented via ADMM); this effect becomes more pronounced as γ_0 gets smaller, since the metric in which the distance is minimized becomes increasingly close to singular. However, when γ_0 = 10^-1, the run times of ATOM1 and ATOM2 are comparable and similar to those of MELT and EM. Interestingly, Table <ref> pinpoints that, for γ_0 sufficiently small, i.e., 10^-4, ATOM2 is generally able to reach MSE values smaller than ATOM1, reasonably due to its adaptive step-size strategy (<ref>), which allows it to provide better quality estimates than ATOM1 as the outer-loop iterations increase. It can also be seen that EM has the least computational time (at large values of m). Nevertheless, as shown in Table <ref>, although the proposed algorithms have a slightly longer computational time, the obtained estimates are superior, in terms of MSE, to those provided by MELT and EM.
Interestingly, as the data dimension increases, the resulting average MSE values reached by ATOM2 using different γ_0 parameters become closer and closer. Therefore, for a sufficiently large data size, i.e., m≥32, γ_0 = 10^-1 represents an appropriate choice for the ATOM2 implementation, as it offers a good performance with a reduced computational burden.
§.§ MSE vs n for Toeplitz covariance matrix
For this case study, it is assumed m= 15 and the number of samples n ranges between 50 and 500 in steps of 50. The data x_k∈ℂ^15 are again simulated according to (<ref>).
Precisely, two different experiments are considered whereby the true Toeplitz covariance matrix is generated using on-grid[The frequencies used in the first experiment are: [0.2167, 0.6500, 1.0833, 1.3, 1.5166, 1.9500, 2.3833, 2.8166, 3.2499, 3.6832 4.1166, 4.5499, 4.9832, 5.4165, 5.8499] rad. Their corresponding powers increase linearly from 1 to 15 with a unit step.] and off-grid frequencies[For the off-grid simulation, the frequencies [1.3, 2.8166, 4.9832,5.8499] rad are replaced with [1.25, 3.01, 5.20, 5.8] rad, respectively.], respectively.
The resulting MSE, computed over 1000 Monte Carlo trials, are illustrated in Fig. <ref>.
Inspection of the curves depicted in Fig. *fig:MSE_a shows that, regardless of the number of samples n, in the first experiment ATOM1 and ATOM2 almost reach the CRB, whereas EM and MELT yield a slightly better performance, resulting in a deviation from the CRB. This can be explained observing that the derived CRB does not exploit the information that the frequencies lie on-grid. Fig. *fig:MSE_b highlights that in the second experiment ATOM1 attains the best performance, with results quite close to the CRB and slightly better than ATOM2, with a limited gap between the corresponding curves. Furthermore, MELT and EM exhibit similar MSE values which seem to saturate as n increases. The performance behavior of Fig. *fig:MSE_b stems from the observation that, unlike MELT and EM, ATOM1 and ATOM2 are gridless methods, delivering the same performance regardless of the sources' frequencies.
§.§ MSE vs n for banded Toeplitz covariance matrix
This subsection analyzes the performance in the case of a covariance matrix belonging to the set of banded Toeplitz matrices. In particular, the same simulation setup as in Section <ref> is considered, but enforcing the underlying covariance matrix to have a bandwidth b=6. To this end, the covariance matrix is constructed by alternately projecting a random Hermitian matrix onto the set of banded Toeplitz matrices and the set of PSD matrices.
Moreover, for this study case, ATOM2 is implemented according to the procedure described in Section <ref>, namely explicitly including the banded Toeplitz structure in the constraint set.
Fig. <ref> highlights that the bespoke implementation of ATOM2 delivers the best performance, with MSE values really close to the CRB. Furthermore, MELT and EM share the same performance with a noticeable gap w.r.t. ATOM2, which is expected since the aforementioned algorithms do not leverage the banded structure of the covariance matrix.
§.§ MSE vs n for BT (TBT) covariance matrix
Here, the capabilities of ATOM2 are analyzed in the context of a covariance matrix with TBT structure. To this end, assuming m=16 and p=4 blocks (each having block-size l=4), the covariance matrix is modeled as R = R_1 ⊗ R_1, where R_1 ∈ℂ^l × l is a Toeplitz matrix constructed as in subsection <ref>, with frequencies [0.6, 1.4, 3.2, 5.1] rad and powers [3,6,4,1]. Thus, each data snapshot x_k is drawn according to (<ref>).
The resulting MSE values (averaged over 1000 Monte Carlo trials) are displayed in Figure <ref> versus the number of snapshots. Specifically, the performance of both the BT and the TBT extension of ATOM2 (described in Section <ref>) are reported and compared with the CRB (see Appendix C reported in the supplementary material to this paper) as well as with two EM-based estimators, tailored respectively for BT/TBT covariance matrix <cit.>.
Inspection of the results reveals that ATOM2 TBT uniformly achieves the least MSE, with ATOM2 BT ranking second. As previously highlighted, the superior performance of the proposed method stems from the design criterion which does not require reparametrizing the covariance matrix using the CE.
§.§ Radar Application
In this subsection, the performance of the covariance estimation algorithms is evaluated with reference to the average achievable SINR in adaptive radar spatial processing context. To this end, let us consider a radar system equipped with a uniform linear array with m=6 sensors, pointing toward the boresight direction. The inter-element distance between each sensor is set equal to d=λ/2, where λ is the radar operating wavelength.
For this simulation scenario, the interference covariance matrix is modeled as R = R_s + σ_a^2 I, where σ_a^2 is the power level of the white disturbance noise (assumed without loss of generality equal to 0 dB) and R_s is given by R_s = ∑_l=1^Jσ_l^2 a(ϕ_l) a(ϕ_l)^H, where J is the number of uncorrelated narrow-band jammers and, for the l-th jammer,
a(ϕ_l) = 1/√(m)[1, e^j 2π/λ d sin(ϕ_l), …, e^j (m-1) 2π/λ d sin(ϕ_l)]^T
is the steering vector in its direction-of-arrival ϕ_l, and σ^2_l the corresponding interferer power.
The capabilities of the estimation methods are analyzed by means of the average SINR, computed as
SINR_avg = 1/K∑_i=1^K|ŵ_i^H a(θ)|^2/ŵ_i^H R ŵ_i,
where K=500 is the number of Monte-Carlo trials and ŵ_i = R̂_i^-1 a(θ) is the estimate of the optimal weight vector for adaptive spatial processing, with R̂_i the estimate of the interference-plus-noise covariance matrix for the i-th trial, computed either via the sample covariance matrix or enforcing the Toeplitz structure in the covariance matrix and employing the estimators ATOM1, ATOM2, EM, and MELT.
More precisely, J=2 jammers, with powers σ_1^2= 30 dB and σ_2^2= 20 dB, respectively, impinging on the array from θ_1=9.8^∘ and θ_2=-8.8^∘, are considered. As comparison terms, the optimum SINR, i.e., SINR_OPT = a(θ)^H R^-1 a(θ), and the performance of the Sample Matrix Inversion (SMI) beamformer are included too.
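The scenario can be reproduced with a short numpy sketch (all function names are ours, and the half-wavelength spacing is hard-coded): it builds the steering vectors, assembles the interference-plus-noise covariance of the two jammers, and evaluates the average SINR of an adaptive filter given a set of covariance estimates.

```python
import numpy as np

def steering(m, phi):
    """Unit-norm ULA steering vector for d = lambda/2 spacing."""
    return np.exp(1j * np.pi * np.arange(m) * np.sin(phi)) / np.sqrt(m)

def interference_covariance(m, angles_deg, powers_db, noise_db=0.0):
    """R = sum_l sigma_l^2 a(phi_l) a(phi_l)^H + sigma_a^2 I."""
    R = 10.0 ** (noise_db / 10.0) * np.eye(m, dtype=complex)
    for ang, p_db in zip(angles_deg, powers_db):
        a = steering(m, np.deg2rad(ang))
        R += 10.0 ** (p_db / 10.0) * np.outer(a, a.conj())
    return R

def average_sinr(R_true, R_hats, theta):
    """Average over trials of |w^H a|^2 / (w^H R w) with w = R_hat^{-1} a."""
    a = steering(R_true.shape[0], theta)
    vals = []
    for R_hat in R_hats:
        w = np.linalg.solve(R_hat, a)
        vals.append(np.abs(w.conj() @ a) ** 2 / np.real(w.conj() @ R_true @ w))
    return float(np.mean(vals))

# Scenario of this subsection: m = 6, jammers at 9.8 and -8.8 degrees
# with 30 dB and 20 dB power over 0 dB white noise.
R = interference_covariance(6, [9.8, -8.8], [30.0, 20.0])
```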
The average SINR versus θ∈𝒯, with 𝒯 = [-π/2, π/2] discretized with 500 equally-spaced points, is shown in Fig. <ref> for n∈{m, 2m, 3m}. Inspection of the plots highlights that, as the number of samples n increases, the results achieved by ATOM1 and ATOM2 get closer and closer to the optimum, yielding superior performance w.r.t. the counterparts.
§ CONCLUSION
In this paper, the MLE problem for TSC matrices has been addressed. Precisely, by reformulating appropriately the MLE optimization problem and leveraging the MM framework, two iterative algorithms ATOM1 and ATOM2 have been developed. Both inherit the key properties of MM i.e., they monotonically decrease the underlying cost function with guaranteed convergence to a stationary point of the equivalent MLE problem. Subsequently, ATOM2 has been extended to handle covariance matrix MLE forcing other Toeplitz-related structures, such as banded Toeplitz, BT, and TBT. Simulation results have indicated that the proposed algorithms can perform better than some state-of-the-art techniques in terms of MSE and the SINR metrics.
Some of the possible future research directions are now outlined. In particular, ATOM2 could be further extended to include the cases of low rank TSC, with the rank assumed either known or unknown at the design stage, as well as covariance matrix with an upper bound to the condition number.
Another possible extension of the proposed technique could be the MLE of a Toeplitz covariance matrix assuming a compound Gaussian distribution for the underlying data, which has a significant application in low-grazing angle target detection <cit.>. Moreover, acceleration methods inspired for instance by the SQUAREd iterative Methods (SQUAREM) <cit.> could be investigated. Finally, the design of sub-optimal optimization strategies (e.g., based on the gradient projection method) with an improved computational burden (a valuable feature for real-time applications) is definitely worth pursuing.
§ APPENDIX A
§ PROOF OF EQUIVALENCE BETWEEN (8) AND (10)
Let ^⋆ be an optimal solution to (8), then (^⋆, ^⋆), with ^⋆= ^H ^⋆-1, is feasible for (10) and the two problems have the same objective values. This means that
v(8) ≥ v(10),
where v(·) indicates the optimal value of the corresponding optimization problem.
Moreover, for any fixed _1 ≻ 0, concentrating the objective function of (10) with respect to (which is tantamount to placing = ^H _1^-1), it follows that the concentrated optimization problem is
_1 ≽ 0 minimize (_FB_1^-1) + log|_1|,
due to Schur complement Theorem and the monotonicity of the trace operator with respect to generalized matrix inequality “≽”.
Finally, being by assumption (8) solvable, any minimizer of (<ref>) satisfies _1^⋆≻ 0 with a corresponding optimal solution to (10) given by (_1^⋆, ^H _1^⋆-1). This implies that
v(8)≤ v(10).
Capitalizing on (<ref>) and (<ref>) as well as the above considerations, it follows that v(8)=v(10) and given an optimal solution (_1^⋆,_1^⋆) to (10), _1^⋆ is also optimal to (8) and viceversa, given an optimal solution ^⋆ to (8) (^⋆, ^⋆) is an optimal point to (10).
§ APPENDIX B
§ PROOF OF THEOREM 3.2
To begin with, let us denote by h(|_t) either the objective function involved in the surrogate optimization problem of ATOM1 (12) or ATOM2 (15), where = diag(, ). This function, regardless of the method, satisfies the following two inequalities
h(_t|_t) = l(_t)
h(_t+1|_t) ≥l(_t+1)
where l()= Tr() + log||. Leveraging the above inequalities, it follows that
l(_t+1) (a)≤h(_t+1|_t) (b)≤h(_t|_t) (c)= l(_t)
In (<ref>), the inequality (a) and equality (c) stem from (<ref>) and (<ref>), respectively; besides, the inequality (b) is obtained by exploiting the fact that ATOM1 and ATOM2 globally solve the corresponding convex surrogate optimization problem. Therefore, (<ref>) implies that the sequence of objective values of Problem (16) generated by the proposed algorithms is monotonically decreasing, i.e.,
l(_0) ≥l(_1) ≥l(_2) ≥⋯
Next, let us denote by a cluster point to {_t} and let {_r_t} be a subsequence of {_t} converging to . Then, from (<ref>), (<ref>), and (<ref>)
[ h(_r_t+1|_r_t+1)= l(_t_j+1) ≤l(_r_t+1); ≤h(_r_t+1|_r_t)≤h(|_r_t), ∀ . ]
Thus, letting t →∞
h(|) ≤h(|),
which implies that the directional derivative of the surrogate function at the cluster point is non-negative along every feasible direction. Finally, by Proposition 1 in <cit.>, the surrogate function and the objective function l(·) have the same first-order behavior at the cluster point. Therefore, the non-negativity of the directional derivative of the surrogate implies that l'(·; ·) ≥ 0 along every feasible direction. Hence, the cluster point is a stationary point of the objective function l(·).
§ APPENDIX C
§ CRB OF BANDED TOEPLITZ, BT, AND TBT COVARIANCE MODEL
Herein, the CRB of Banded Toeplitz, BT, and TBT covariance model are provided.
§.§ Banded Toeplitz matrix
In the case of a banded Toeplitz matrix with bandwidth b, the first row of the covariance matrix R ∈ℍ^m × m has only b+1 non-zero terms. Therefore, R can be parameterized via θ = [r_1, Re(r_2),⋯,Re(r_b+1), Im(r_2),...,Im(r_b+1)]^T∈ℝ^2b+1. Besides, R can be expressed in terms of the basis matrices Θ^Toep_g and real coefficients as
[ R = ∑_g=1^b+1θ_g Re(Θ^Toep_g) + j ∑_g=b+2^2b+1θ_g Im(Θ^Toep_g-b) ]
and consequently
∂R/∂θ_i =
Re(Θ^Toep_i) if 1≤ i ≤ b+1
j Im(Θ^Toep_i-b) if b+2≤ i ≤ 2b+1.
Substituting ∂R/∂θ_i in (34) yields the FIM for the banded Toeplitz covariance matrix.
§.§ Toeplitz-block-Toeplitz matrix
Before proceeding further, it is worth noting that a TBT matrix composed of p blocks of size l can be parameterized by the vector = [_0^T, _1^T, …, _P-1^T]^T∈ℝ^2 l -1 + (p-1)(4l-2) whereby _0 = [r_0,1, (r_0,2), …, (r_0,l), (r_0,2), …, (r_0,l)]^T∈ℝ^2l-1 and _p = [(r_p,1), …, (r_p,l), (r_p,1), …, (r_p,l),
(c_p,2), …, (r_p,l), (r_p,2), …, (r_p,l)]^T∈ℝ^4l-2, p=1,…, P-1, with r_p,n and c_p,n the n-th row and n-th column of _p, respectively.
Indeed, the TBT covariance matrix can be expressed as
^TBT = _0⊗_0 +∑_w=1^p-1((_w⊗^H_w) +(_w^T⊗_w)),
where
_0 =
∑_g=1^lθ_0,g(^Toep_g) + j ∑_g=l+1^2l-1θ_0,g(^Toep_g-l+1)
and, for w=1,…, p-1,
_w = ∑_g=1^l[θ_w,g + jθ_w,g+l ](_g)
+ ∑_g=2l+1^3l-1[θ_w,g + jθ_w,g+l-1 ](_g-2l+1)
with θ_w,g the g-th element of _w, _g = ^Toep_g as long as g = 1 and 1/2 ((^Toep_g)^T + j (^Toep_g)^T) elsewhere, whereas the (i,k)^th element of the matrix _w∈ℝ^l× l is given by
[_w]_i,k=
1 i-k=w
0 otherwise.
That said,
∂^TBT/∂θ_w,g is given by
∂^TBT/∂θ_w,g = _0⊗(^Toep_g) 1≤ g≤l, w=0
_0⊗ j(^Toep_g-l+1) l+1≤ g ≤ 2l-1, w=0
_w⊗(_g)^T
+ _w^T⊗(_g) 1≤ g ≤ l, w > 0
_w⊗(-j)(_g-l)^T
+ _w^T⊗j(_g-l) l+1≤ g ≤ 2l, w > 0
_w⊗(_g-2l+1)^T
+ _w^T⊗(_g-2l+1) 2l+1≤ g ≤ 3l-1, w > 0
_w⊗(-j)(_g-3l+2)^T
+ _w^T⊗j(_g-3l+2) 3l≤ g ≤ 4l-2, w > 0
which, employed in (34), yields the FIM for TBT covariance matrix.
IEEEtran
|
http://arxiv.org/abs/2307.05587v1 | 20230710154713 | Active Learning for Video Classification with Frame Level Queries | [
"Debanjan Goswami",
"Shayok Chakraborty"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Active Learning for Video Classification with Frame Level Queries
This research was supported in part by the National Science Foundation under Grant Number: 2143424
Debanjan Goswami
Department of Computer Science
Florida State University
Shayok Chakraborty
Department of Computer Science
Florida State University
August 12, 2023
=========================================================================================================================================================================
Deep learning algorithms have pushed the boundaries of computer vision research and have depicted commendable performance in a variety of applications. However, training a robust deep neural network necessitates a large amount of labeled training data, acquiring which involves significant time and human effort. This problem is even more serious for an application like video classification, where a human annotator has to watch an entire video end-to-end to furnish a label. Active learning algorithms automatically identify the most informative samples from large amounts of unlabeled data; this tremendously reduces the human annotation effort in inducing a machine learning model, as only the few samples that are identified by the algorithm, need to be labeled manually. In this paper, we propose a novel active learning framework for video classification, with the goal of further reducing the labeling onus on the human annotators. Our framework identifies a batch of exemplar videos, together with a set of informative frames for each video; the human annotator needs to merely review the frames and provide a label for each video. This involves much less manual work than watching the complete video to come up with a label. We formulate a criterion based on uncertainty and diversity to identify the informative videos and exploit representative sampling techniques to extract a set of exemplar frames from each video. To the best of our knowledge, this is the first research effort to develop an active learning framework for video classification, where the annotators need to inspect only a few frames to produce a label, rather than watching the end-to-end video. Our extensive empirical analyses corroborate the potential of our method to substantially reduce human annotation effort in applications like video classification, where annotating a single data instance can be extremely tedious.
active learning, video classification, deep learning
§ INTRODUCTION
With the widespread deployment of modern sensors and cameras, images and videos have become ubiquitous. This has encouraged the development of video classification algorithms to analyze their semantic content for various applications, such as search, summarization, security and surveillance among others. Deep neural networks (CNN and LSTM architectures) have depicted commendable performance in this field <cit.>. Common methods include obtaining global video-level descriptors using CNN architectures <cit.>, processing videos at two spatial resolutions: a low-resolution context stream and a high-resolution fovea stream <cit.>, fusion technique to integrate data representations at the frame level and video level <cit.> among others. However, for all these models to work reliably, a large amount of labeled training data is essential, gathering which is an expensive process in terms of time, labor and human expertise. Thus, an algorithm to reduce the human labeling effort is of immense importance in video classification applications.
Active Learning (AL) is a machine learning paradigm, where the goal is to automatically identify the salient and exemplar samples from large amounts of redundant data <cit.>. This tremendously reduces the human annotation effort in inducing a machine learning model, since the human expert only has to label the samples queried by the algorithm. Further, since the model gets trained on the exemplar samples from the data population, it typically depicts better generalization performance than a model where the training data is selected at random. This is an extremely relevant paradigm in today's world, where an enormous amount of digital data is being generated, but there is a serious dearth of human labor to annotate the data to induce learning models. AL has been successfully used in a variety of applications, including computer vision <cit.>, text analysis <cit.>, computational biology <cit.> and medical diagnosis <cit.> among others. Active learning is particularly relevant in the context of deep learning, in order to reduce human annotation effort in training the data-hungry deep neural networks <cit.>.
Designing an AL algorithm for a video classification application requires the human annotator to meticulously watch each queried video end-to-end in order to furnish a label [we use the terms annotators, oracles, labelers and users synonymously in this paper]. This is an extremely time-consuming and laborious process; the annotators may get bored and fatigued quickly and lose interest in the task. This necessitates specialized and more user-friendly query and annotation mechanisms, to utilize the available human labor more efficiently. In this paper, we propose a novel active learning algorithm to address this challenging and practical problem. Our algorithm identifies a batch of informative videos, together with a set of exemplar frames from each; the human annotator merely has to review the queried frames and furnish a label for each video. This is illustrated in Figure <ref>. Providing such feedback is significantly less time-consuming and burdensome than watching an end-to-end video. We formulate an optimization problem based on an uncertainty and diversity based criterion to identify a batch of informative videos, and exploit representative sampling techniques to select a subset of exemplar frames from each. To our knowledge, this is the first active learning framework for video classification which poses label queries based on a set of exemplar frames, rather than the complete video. We hope this research will motivate the development of AL algorithms with other novel annotation mechanisms, with the goal of further reducing the labeling burden on human oracles in a video classification application.
The rest of the paper is organized as follows: we present a survey of related research in Section <ref>, our active sampling framework is detailed in Section <ref>, the results of our empirical studies are presented in Section <ref>, and we conclude with discussions in Section <ref>.
§ RELATED WORK
In this section, we present an overview of active learning in general, followed by a survey of AL for video classification.
Active Learning: AL has received significant research attention in the machine learning community. Uncertainty sampling is by far the most common strategy for active learning, where unlabeled samples with highest classification uncertainties are queried for their labels. The uncertainty of an unlabeled sample can be computed by its entropy <cit.>, its distance from the separating hyperplane in the feature space for SVM classifiers <cit.>, the disagreement among a committee of classifiers regarding the label of the sample <cit.>, the expected error reduction of the future learner <cit.> and so on. Submodular optimization techniques have also been exploited for active data sampling <cit.>.
The growing success and popularity of deep learning has motivated research in the field of deep active learning (DAL), where the goal is to select informative unlabeled samples to efficiently train a deep neural network <cit.>. A task agnostic AL framework was proposed by Yoo and Kweon <cit.> that incorporated a loss prediction module in the network architecture, to predict the loss value of an unlabeled sample and query samples accordingly. A DAL framework based on core-set selection was proposed by Sener and Savarese <cit.>, which selected a batch of samples, such that the deep model trained on the selected samples depicts similar performance to that trained on the whole dataset. DAL has also been studied in conjunction with neural architecture search <cit.>, which queries samples for labeling and simultaneously searches for the best neural architectures on-the-fly. A novel training loss function for DAL was proposed by Shui et al., where active sample selection and traning the network parameters were achieved through alternating optimization <cit.>. Deep active learning techniques based on adversarial training have depicted particularly impressive performance <cit.>. Active learning has also been studied in conjunction with other learning paradigms such as transfer learning <cit.>, reinforcement learning <cit.> etc. Moreover, the idea of identifying an informative set of samples for human inspection has been extended to other problem domains, such as matrix completion <cit.>, video summarization <cit.> and feature selection <cit.> among others.
Recently, there have been efforts to design AL systems with novel query and annotation mechanisms, with the goal of further reducing the labeling burden on human annotators. Joshi et al. <cit.> proposed a binary query mechanism which queried an unlabeled sample together with a potential class label and the user had to provide the binary answer as to whether the queried unlabeled sample belonged to the selected class or not. Along similar lines, Biswas and Jacobs proposed an AL algorithm for clustering, which queried a pair of samples and the oracles needed to specify whether or not the samples in a pair correspond to the same cluster <cit.>. Xiong et al. <cit.> proposed a triplet query framework to learn approximate distance metrics for a nearest neighbor classifier; the algorithm queried unlabeled data triplets (x_i, x_j, x_k) and posed the question whether instance x_i was more similar to x_j than to x_k. Qian et al. <cit.> proposed an active learning algorithm where the query strategy was to place an ordering (or partial ordering) on the similarity of the neighbors of the selected unlabeled sample, rather than querying its actual label.
Active Learning for Video Classification: While AL has been extensively studied for image recognition <cit.>, it is much less explored for video classification. Similar to image classification, uncertainty sampling (using metrics like entropy, error reduction) is a popular AL query strategy for video recognition <cit.>. Yan et al. <cit.> proposed a multi-class AL framework for video classification using expected error reduction. Since the estimation of the posterior probability distribution P(y|x) may be unreliable due to the lack of sufficient training data, simple heuristics were also proposed to simplify the sample selection strategies. Another approach was developed in the context of SVMs, which queried a set of samples which can produce the maximum expected reduction in the SVM objective <cit.>. Bandla and Grauman <cit.> used AL to train an action detector for videos which selected the video which was expected to maximally reduce the entropy among all unlabeled videos. The core idea was to use the current trained detector to extract relevant portions in the video where the action of interest occurs, so that the video segment outside the interval does not introduce noise in the entropy computation. However, this method is specifically designed to actively learn an action detector from videos. Active contrastive learning has also been explored for learning audio-visual representations from unlabeled videos <cit.>.
All these methods require the human annotator to watch an unlabeled video end-to-end in order to provide a label, which may be extremely time-consuming and arduous. In contrast, our framework identifies a subset of exemplar frames, and the human labeler has to label a video by merely reviewing the frames, which is a much more efficient annotation strategy. Our method is applicable to any type of videos and does not make any assumptions about the contents of the video. Other related efforts include AL for video tracking <cit.>, video description <cit.>, video recommendation <cit.> and video segmentation <cit.>. However, these methods attempt to solve a different problem than video classification, which is the focus of this paper. We now describe our framework.
§ PROPOSED FRAMEWORK
Consider an active learning problem for video classification, where we are given a labeled training set L and an unlabeled set U, with |L| ≪ |U|. Each data sample x in L and U is a video. Let w be the deep neural network trained on L and C be the number of classes in the data. Our objective is two-fold: (i) select a batch B containing b unlabeled videos so that the model trained on L ∪ B has maximum generalization capability; (ii) however, we are not allowed to show an entire video to a human annotator and ask for its label; we are required to select a subset of k exemplar frames from each queried video, so that only those can be shown to an annotator for labeling the video.
Both these objectives are critical in improving the generalization capability of the deep model. The first objective ensures that the salient videos are selected from the unlabeled set for active query. The second objective ensures that the most representative frames are selected from each video for query. This is important, as otherwise, the annotator may not be confident enough to provide a label or may provide an incorrect label, both of which will result in a wastage of query budget and degrade the performance of the model. In the following sections, we discuss our active sampling strategies for sampling videos and frames.
§.§ Active Video Sampling
We quantified the utility of a batch of b videos and selected a batch furnishing the maximal utility. The informativeness and diversity metrics were used to compute the utility of a batch of videos in this research. An active learning framework driven by these conditions ensures that the video samples in the batch augment useful knowledge to the underlying deep neural network, and there is high diversity (minimum redundancy) of information among the samples in the batch. These conditions have been used in previous active learning research <cit.>.
Computing informativeness: The informativeness of an unlabeled video sample x_i was computed as the uncertainty of the deep model w in predicting a label for x_i. The Shannon's entropy was used to compute the prediction uncertainty:
e(x_i) = -∑_y=1^C P(y|x_i, w) log P(y|x_i, w)
Computing diversity: We computed a diversity matrix R ∈ℝ^|U| × |U| where R(i,j) denotes the diversity between videos x_i and x_j in the unlabeled set. We used the kernelized distance on the deep feature representations to compute the diversity between a pair of videos in this research:
R(i,j) = K(x_i, x_j)
where K(·, ·) denotes the distance in the Reproducing Kernel Hilbert Space (RKHS) <cit.>.
§.§.§ Active Video Selection
By definition, all the entries in e and R are non-negative, that is, e_i≥ 0 and R(i,j) ≥ 0, ∀ i,j. Given e and R, our objective is to select a batch of videos with high uncertainties (given by the entries in e) and high mutual diversity (given by the entries in R). We define a binary selection vector z ∈^|U| × 1 where z_i denotes whether the unlabeled video x_i will be selected in the batch (z_i = 1) or not (z_i = 0). Our batch selection task (with batch size b) can thus be posed as the following NP-hard integer quadratic programming (IQP) problem:
max_z e^T z + μ z^T R z
s.t. z_i∈{0,1}, ∀ i and∑_i=1^|U| z_i = b
where μ is a weight parameter governing the relative importance of the two terms. The binary integer constraints on z allow us to combine e and R into a single matrix Q ∈^|U| × |U| and express the optimization problem as follows:
max_z z^T Q z
s.t. z_i∈{0,1}, ∀ i and∑_i=1^|U| z_i = b
where the matrix Q is constructed as follows:
Q(i,j) =
μ R(i,j), if i≠ j
e(i), if i = j
The binary integer constraints on the variable z make the IQP in Equation (<ref>) NP-hard. We used the Iterative Truncated Power algorithm <cit.> to solve this optimization problem.
§.§.§ The Iterative Truncated Power Algorithm
This algorithm was originally proposed in the context of the sparse eigenvalue and the densest k-subgraph problems. It attempts to solve an optimization problem similar to that in Equation (<ref>). The algorithm starts with an initial solution z_0 and then generates a sequence of solutions z_1, z_2, …. The solution z_t at iteration t is obtained by multiplying the solution z_t-1 at iteration (t-1) by the matrix Q and then truncating all the entries to 0, except the b largest entries. The process is repeated until convergence. The algorithm is guaranteed to converge monotonically for a positive semi-definite (psd) matrix Q. When the matrix Q is not psd, the algorithm can be run on the shifted quadratic function (with a positive scalar added to the diagonal elements) to guarantee a monotonic convergence <cit.>. The algorithm is computationally efficient and converges fast. It benefits from a good starting point. In our empirical studies, the initial solution z_0 was taken as the indicator vector corresponding to the b largest column sums of the matrix Q, as it produced competitive results in our preliminary experiments. The pseudo-code for our active video sampling algorithm is presented in Algorithm <ref>.
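A compact sketch of this selection step is given below (helper names are ours, and the routine is a simplified binary variant of the truncated power method described above): the matrix Q is assembled from the entropies and the diversity matrix as in (<ref>), the diagonal is shifted to make Q positive semi-definite, and the iteration alternates between multiplying by Q and truncating to the b largest entries.

```python
import numpy as np

def build_Q(entropies, diversity, mu=0.01):
    """Entropies on the diagonal, scaled pairwise diversities elsewhere."""
    Q = mu * diversity.astype(float)
    np.fill_diagonal(Q, entropies)
    return Q

def truncated_power(Q, b, n_iter=100):
    """Binary variant of the iterative truncated power method:
    multiply by Q, keep the b largest entries, repeat."""
    n = Q.shape[0]
    lam_min = np.linalg.eigvalsh(Q).min()
    if lam_min < 0:                              # shift so Q is PSD
        Q = Q - lam_min * np.eye(n)
    z = np.zeros(n)
    z[np.argsort(Q.sum(axis=0))[-b:]] = 1.0      # indicator of b largest column sums
    for _ in range(n_iter):
        z_new = np.zeros(n)
        z_new[np.argsort(Q @ z)[-b:]] = 1.0
        if np.array_equal(z_new, z):
            break
        z = z_new
    return np.flatnonzero(z)                     # indices of the queried videos
```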
§.§.§ Computational Considerations
Computing the diversity matrix R involves quadratic complexity. We first note that R needs to be computed only once in our framework, before the start of the AL iterations. As the unlabeled videos get queried through AL, we can keep deleting the corresponding rows and columns in R to derive the new diversity matrix. Moreover, random projection algorithms can be used to speed up computations. The theory of random projections states that, if we have a point cloud in a high dimensional space, they may be projected into a suitable lower-dimensional space such that the distances between the points are approximately preserved <cit.>. A data matrix A ∈ℝ^N × D in the D dimensional space is multiplied by a random projection matrix X ∈ℝ^D × d (d ≪ D) to obtain a projected matrix B ∈ℝ^N × d in the lower dimensional space d: B = AX <cit.>. This can be used to substantially reduce the computational overhead, as distance computations are more efficient in the low dimensional space. We will explore this as part of future research.
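A minimal sketch of this speed-up (our own helper) is given below; computing the diversity matrix on the projected features then costs O(N^2 d) instead of O(N^2 D).

```python
import numpy as np

def random_projection(A, d, rng=None):
    """Map an N x D matrix to N x d with a Gaussian random matrix;
    pairwise distances are approximately preserved for d ~ O(log N)."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((A.shape[1], d)) / np.sqrt(d)
    return A @ X
```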
§.§ Active Frame Sampling
Once we select b videos from the unlabeled set, our next task is to identify a subset of k frames from each of these videos; we exploited representative sampling techniques for this purpose. These techniques identify the exemplar data points which well-represent a given dataset. In particular, the coreset algorithm selects a subset of points such that a model trained over the selected subset is maximally similar to that trained on the whole dataset. For the sake of completeness, we discuss the main ideas here and request interested readers to refer to <cit.> for further details. Coreset poses the subset selection problem as:
min_s: |s|=k | 1/n∑_i ∈ [n] l(x_i, y_i, A_i) - 1/|s|∑_j ∈ s l(x_j, y_j, A_j) |
where (x_i, y_i) denotes a training sample and its label, A_i denotes a learning algorithm which outputs a set of parameters by minimizing a loss function l(. , . , .) on a given labeled set i. Informally, given a budget k, the goal is to select a set of samples s, such that the model trained on s depicts similar performance as the model trained on the whole dataset with n samples.
This function cannot be directly optimized, as the labels of the samples in the unlabeled set are unknown. An upper bound of this function was derived and the problem of active sampling was shown to be equivalent to the k-center problem (also called min-max facility location problem) <cit.>. The objective of this problem is to select k center points from n samples, such that the largest distance between a data point and its nearest center is minimized. Formally, this can be posed as follows:
min_s: |s| = kmax_imin_j ∈ sΔ(x_i, x_j)
This problem is NP-Hard <cit.>. However, a greedy algorithm, as detailed in Algorithm <ref>, is guaranteed to produce a solution s such that: max_imin_j ∈ sΔ(x_i, x_j) ≤ 2 × OPT, where OPT is the optimal solution. We used this algorithm to select a subset of k frames from each of the queried videos. As evident from the formulation, our method does not make any assumptions about the contents of the video, and is applicable to any type of video.
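The greedy procedure admits a very short implementation; the sketch below (our own helper) operates on per-frame feature vectors, e.g., the CNN descriptors used by the base model, and returns the indices of the k selected frames.

```python
import numpy as np

def greedy_k_center(features, k):
    """2-approximate k-center: repeatedly add the frame farthest
    from the frames already selected."""
    centers = [0]                                    # arbitrary first center
    dist = np.linalg.norm(features - features[0], axis=1)
    while len(centers) < k:
        nxt = int(np.argmax(dist))
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return centers                                   # indices of selected frames
```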
§ EXPERIMENTS AND RESULTS
§.§ Datasets
We used the UCF-101 <cit.> and the Kinetics datasets <cit.> to study the performance of our algorithm. Both these datasets contain videos of humans performing a variety of actions, captured under unconstrained, real-world conditions, and are extensively used to study the performance of video classification algorithms. We used data from 5 classes at random from each dataset for our experiments.
§.§ Oracle Simulation
All the publicly available video datasets contain annotations for the complete videos; we did not find any datasets which contain annotations based on a subset of frames. Also, different active sampling algorithms will select different subsets of frames, and it is challenging to obtain annotations from a human labeler for every possible subset of frames for a given video, to conduct experiments. We therefore used a deep neural network to simulate the human labeling oracle in our empirical studies. The oracle model was trained on a completely different subset of the data. No information about the oracle model was used in the design and development of our active learning algorithm. During AL, when a video sample was selected for query, the selected frames were passed as an input to the trained oracle model and its prediction entropy on the sample was computed. If the entropy exceeded a particular threshold τ_oracle, the oracle was assumed to be not confident enough to produce a label, and no label was returned; otherwise, the oracle returned the predicted label (which may be correct or incorrect). These were done to appropriately mimic a real-world data annotation setup with a human annotator.
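For concreteness, the oracle's decision rule can be sketched as follows (assuming a Keras-style oracle model; the helper name and the small constant added inside the logarithm are ours).

```python
import numpy as np

def simulated_oracle(oracle_model, frames, tau_oracle):
    """Return a label for the queried frames, or None if the oracle's
    prediction entropy exceeds the abstention threshold."""
    probs = oracle_model.predict(frames[np.newaxis, ...], verbose=0)[0]
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    if entropy > tau_oracle:
        return None                  # oracle abstains from labeling
    return int(np.argmax(probs))     # may still be an incorrect label
```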
§.§ Implementation Details
Base Model: We used a CNN-RNN architecture in our experiments where InceptionV3 pretrained on the ImageNet-1k dataset was used as the feature extractor and a GRU network as the decoder [<https://keras.io/examples/vision/video_classification/>]. The input frames were scaled and normalized to a fixed input size of 224 × 224 pixels and fed into the Convolutional Neural Network (CNN). The extracted features were fed into a 5-layer GRU network which consists of 2 GRU layers, one dropout layer, and 2 fully connected layers. The 2 GRU layers had 20 and 12 neurons, while the first fully connected layer had 8 neurons with the ReLU activation function.
We used the adam optimizer with a learning rate of 0.001, momentum of 0.99, batch size of 32, and the network was trained for 20 epochs in each active learning iteration.
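A sketch of this architecture in Keras is given below (the number of frames per video, the dropout rate and the use of sparse categorical cross-entropy are our assumptions, and the reported momentum setting is omitted): a frozen InceptionV3 backbone produces per-frame descriptors, which are fed to the GRU decoder.

```python
from tensorflow import keras
from tensorflow.keras import layers

MAX_FRAMES, NUM_FEATURES, NUM_CLASSES = 20, 2048, 5   # MAX_FRAMES is our assumption

def build_feature_extractor():
    """Frozen InceptionV3 (ImageNet weights) producing one 2048-d
    descriptor per 224 x 224 frame."""
    return keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(224, 224, 3))

def build_base_model():
    """GRU decoder over per-frame features:
    GRU(20) -> GRU(12) -> Dropout -> Dense(8, relu) -> softmax."""
    inputs = keras.Input(shape=(MAX_FRAMES, NUM_FEATURES))
    x = layers.GRU(20, return_sequences=True)(inputs)
    x = layers.GRU(12)(x)
    x = layers.Dropout(0.4)(x)                     # dropout rate is our assumption
    x = layers.Dense(8, activation="relu")(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```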
Oracle Model: We used a similar CNN-RNN architecture as the oracle model. However, for the oracle model, the 2 GRU layers of the GRU network had 40 and 16 neurons. We used the adam optimizer with a learning rate of 0.001 for the UCF dataset and 0.01 for the Kinetics dataset, momentum of 0.99, batch size of 64, and the network was trained for 30 epochs. As part of future research, we plan to study the performance of our framework with other architectures for the oracle model, and also conduct experiments with real people as annotators.
§.§ Experimental Setup
Each dataset was split into 5 parts: (i) an initial training set L; (ii) unlabeled set U; (iii) test set T; (iv) training set to train the oracle model L_oracle; and (v) test set T_oracle to test the oracle model and compute the entropy threshold τ_oracle. The number of samples (videos) in each of these sets, together with the accuracy of the oracle model (A_oracle) for each dataset are depicted in Table <ref>. We note that a better trained oracle could have potentially improved the performance of our algorithm; however, we wanted to validate our algorithm in a challenging real-world setup, where the annotators can abstain from labeling samples and can also provide incorrect labels. We therefore used an oracle model with moderate accuracy (≈ 70 - 75%) in our empirical studies.
The oracle model was trained on L_oracle; each sample in T_oracle was then passed as an input to the trained oracle and the prediction entropy was noted. The 50^th percentile of the prediction entropy distribution was taken as the entropy threshold τ_oracle; during the AL iterations, if the entropy of any queried video exceeded this threshold, the oracle was assumed to abstain from labeling.
The base model was first trained on the set L. In each AL iteration, each algorithm queried b videos from the set U, and k frames from each of the b videos. The k frames of each video were then passed as an input to the oracle model. Based on its prediction entropy on the sample, the oracle may or may not furnish a label for a given unlabeled video sample. If the oracle does not furnish a label for a given video, it was discarded. The other unlabeled videos (which were labeled by the oracle), together with the returned labels were then appended to the training set, the base model was updated, and its accuracy was computed on the test set. The process was repeated for 10 iterations, which was taken as the stopping criterion in this work. All the results were averaged over 3 runs (with different training, unlabeled and test sets) to rule out the effects of randomness. The video budget b was taken as 25 and the frame budget k as 100 in each AL iteration. The weight parameter μ in Equation (<ref>) was taken as 0.01 and a Gaussian kernel was used to compute the diversity matrix in Equation (<ref>).
§.§ Comparison Baselines
As mentioned in Section <ref>, existing AL techniques for video classification query the complete videos for annotation and the labels obtained are assumed to be always correct. In our framework, the labeling oracle may refuse to provide a label to a queried video and may also provide an incorrect label. This is a more challenging and real-world scenario. It will thus not be fair to compare our method against the existing techniques. We used three comparison baselines in this work: (i) Random-Random (RR), where we selected a batch of b videos at random and a subset of k frames from each video at random; (ii) Entropy-Random (ER), where the b videos with the highest classification entropies were queried and k frames were queried from each at random; and (iii) Entropy-kmeans (EK), where b videos were first selected using entropy sampling; k-means clustering was then performed and the k frames corresponding to the k cluster centroids were selected for query from each video.
§.§ Active Learning Performance
The AL performance results are depicted in Figure <ref>. In each figure, the x-axis represents the iteration number, and the y-axis denotes the accuracy on the test set. The proposed method comprehensively outperforms the RR method on both datasets. The ER method depicts random fluctuations in the test accuracy over the AL iterations; our method, on the other hand, depicts a more steady growth in the test accuracy. The EK method depicts the best performance among the baselines, but is not as good as our method. Our method outperforms EK in most of the AL iterations across both the datasets. It also attains the highest accuracy after 10 AL iterations for both the datasets. We can conclude the following: (i) our video selection criterion based on uncertainty and diversity identifies the most informative videos in the unlabeled set; and (ii) our frame selection criterion based on representative sampling selects a subset of exemplar frames from each queried video, so that a large percentage of them can be correctly annotated by the oracle, which enriches the quality of our training data. As a result, our method augments maximal useful information to the deep neural network, which boosts its generalization capability. These results unanimously corroborate the potential of our framework in substantially reducing the human annotation effort in real-world video classification applications, where labeling a single sample involves significant time and human labor.
The performance of the oracle model is reported in Tables <ref> and <ref> for the UCF and Kinetics datasets, respectively. A total of 250 videos were queried from these datasets (25 videos in each of the 10 AL iterations). The tables show the percentage of these videos that were correctly annotated, incorrectly annotated and discarded by the labeling oracle. For the UCF dataset, and for the proposed method, the oracle correctly annotated 58.66% of the queried videos (the highest among all the methods). This shows that representative sampling through coreset is an effective strategy to identify the exemplar frames from a queried video, which have a high probability of receiving the correct label from the oracle, and augmenting useful information to the training set. For the Kinetics dataset, 66% of the videos queried by our method were correctly annotated by the oracle, whereas 67.33% of the videos queried by Random Sampling were annotated correctly by the oracle. However, we note that having a high percentage of unlabeled videos correctly labeled by the oracle does not necessarily mean that the useful samples are being queried. For instance, it is easy to select a batch of videos which do not have much useful content and are easy to label, and get a high percentage of them correctly labeled by the oracle. However, these videos, even if correctly labeled, will not augment much useful information to the training set, as they are devoid of any useful content. Even though RR depicts a slightly higher percentage of correctly labeled samples than our method in Table <ref>, its generalization accuracy is much worse than our method, as evident from Figure <ref>. The key challenge is to query a set of informative videos and get a high percentage of them correctly labeled by the oracle; both of these are crucial in improving the generalization capability of the model over time. The results in Figure <ref> jointly capture both these aspects, and show that our method outperforms the baselines.
§.§ Effect of the Number of Queried Frames per Video
In this experiment, we studied the effect of the frame budget k (number of frames allowed to be selected from a queried video) on the AL performance. The results on the UCF dataset, with frame budgets 10, 20, 50 and 100 are presented in Figure <ref>. Our method depicts impressive performance across different frame budgets. For frame budgets 20, 50 and 100, our framework attains the highest test accuracy after 10 AL iterations. Note that querying a smaller number of frames from a video lessens the labeling burden on the oracle, as the oracle has to review an even smaller number of frames to furnish a label. These results show the promise and potential of our technique to further reduce human annotation effort in a video classification application.
§.§ Effect of the Number of Queried Videos
The goal of this experiment was to study the effect of the video budget b (number of videos queried in each AL iteration) on the AL performance. The results on the UCF dataset with b = 15, 20, 25 and 30 are shown in Figure <ref>. Our framework once again surpasses the baselines across the different budgets. These results are important from the standpoint of a real-world application, where the batch size is governed by the time, man-power and other available resources in a given application, and is different for different applications.
§ CONCLUSION AND FUTURE WORK
The goal of this research was to devise an efficient annotation mechanism to reduce human annotation effort in video classification applications, where annotating a single data instance is extremely tedious. Our framework identifies a batch of informative videos, together with a set of exemplar frames from each; the human annotator has to produce a label for each video just by reviewing the subset of frames, instead of watching the complete video end-to-end. To the best of our knowledge, this is the first research effort to develop an AL technique for video classification, with this kind of a query and annotation mechanism. Our empirical results validated the promise and potential of our framework to drastically reduce human annotation effort in training a deep neural network for video classification. We hope this research will motivate the development of AL algorithms with other annotation mechanisms, with the goal of further reducing the human annotation effort in video classification.
As part of future work, we plan to validate the performance of our algorithm on other applications where the data has a temporal nature. For instance, the proposed query mechanism will also be very relevant in a text classification application to identify informative text snippets, so that a human annotator can furnish a label by reviewing only the snippets, rather than reading the document end-to-end. We will also study the performance of our framework with different size of the data splits, as outlined in Table <ref>.
|
http://arxiv.org/abs/2307.05467v1 | 20230711175302 | Is Kaniadakis $κ$-generalized statistical mechanics general? | [
"T. F. A. Alves",
"J. F. da Silva Neto",
"F. W. S. Lima",
"G. A. Alves",
"P. R. S. Carvalho"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech",
"math-ph",
"math.MP"
] |
[email protected]
Departamento de Física, Universidade Federal do Piauí, 64049-550, Teresina, PI, Brazil
Departamento de Física, Universidade Estadual do Piauí, 64002-150, Teresina - PI, Brazil
In this Letter we introduce a field-theoretic approach for computing the critical properties of systems undergoing continuous phase transitions governed by the κ-generalized statistics, namely κ-generalized statistical field theory. In particular, we show, through both analytic computations and simulations, that κ-generalized Ising-like systems are not capable of describing the nonconventional critical properties of real imperfect crystals, e.g. of manganites, as an alternative generalized theory is, namely nonextensive statistical field theory, as shown recently in the literature. Although κ-Ising-like systems do not depend on κ, we show that a few distinct systems do. Thus the κ-generalized statistical field theory is not general, i.e. it fails to generalize Ising-like systems for describing the critical behavior of imperfect crystals, and must be discarded as a candidate generalization of statistical mechanics. For the latter systems we present the physical interpretation of the theory by furnishing the general physical interpretation of the deformation κ-parameter.
Is Kaniadakis κ-generalized statistical mechanics general?
G. A. Alves
==========================================================
§ INTRODUCTION
Motivated by the incapacity of describing some physical phenomena <cit.> through Boltzmann-Gibbs (BG) statistics, some attempts at generalizing that statistics were made. In fact, several generalized statistics were proposed <cit.>. In the generalization process, only a statistics satisfying a set of consistency conditions will survive. Among these conditions, the consistent statistics have to be obtained from a maximum principle and a trace-form entropy. Other consistency requirements are positivity, continuity, symmetry, expansibility, decisivity, maximality, concavity and Lesche stability. Another reasonable condition is its applicability to all problems to be generalized. Suppose that there is one and only one experimental situation that requires a generalized statistics for its description, and this statistics is not capable of describing it. Then that statistics is not general and must be discarded as one trying to generalize statistical mechanics. In this direction, we can desire to generalize one of the most fundamental applications of BG statistical mechanics, that of the computation of the critical properties of continuous phase transitions <cit.>. For obtaining some of these properties, e.g. critical exponents, Kenneth Wilson developed the field-theoretic renormalization group <cit.>. Such a mathematical tool was successfully applied and furnished precise values for the critical indices that showed a satisfactory agreement with experimental results <cit.>. So, with the intention of obtaining a generalized theory of phase transitions valid in a generalized realm, a generalized field-theoretic approach was recently designed <cit.>, namely nonextensive statistical field theory (NSFT). In this generalized approach, a new q-parameter is introduced in the generalization process and the resulting theory is valid for |1 - q| < 1. It was physically interpreted as one encoding some effective interaction, which can be turned off in the limit q→ 1, thus recovering the BG results <cit.>. Then a new generalized universality class arose, the O(N)_q one, from which emerged nonextensive Ising-like models such as q-Ising (N = 1), q-XY (N = 2), q-Heisenberg (N = 3), q-self-avoiding random walk (N = 0) and q-spherical models (N →∞). Some other q-generalized models are the q-percolation and q-Yang-Lee edge singularity, q-ϕ^6 theory, q-long-range, q-Gross-Neveu, q-uniaxial systems with strong dipolar forces, q-Lifshitz, q-long-range ϕ^3 theory, q-ϕ^2k multicritical points of order k, q-Gross-Neveu-Yukawa, and q-short- and q-long-range directed and dynamic isotropic percolations. Now, the nonconventional critical indices depend on the dimension d, on the number of components N and symmetry of the order parameter, on whether the interactions are of short- or long-range type, and on q. It was shown that nonextensivity was associated only with small-length-scale fluctuations and its effects emerged from radiative loop corrections. It does not manifest at large length scales, since such large scales would be probed by a supposed nonextensive thermodynamics. But a nonextensive thermodynamics does not exist, as it is shown in Ref. <cit.>. However, there is a full set of thermodynamical properties valid for q≠ 1: see, for instance <cit.>. In fact, through a transformation of variables <cit.>, the supposed nonextensive thermodynamics can be mapped into its extensive counterpart.
Also, the predictions of NSFT, through its Escort distribution version <cit.>, presented an excellent agreement with those obtained from computer simulations, within the margin of error, for the static and dynamic critical indices of the nonextensive version of the two-dimensional Ising model <cit.>. The aim of this Letter is: 1) Introducing the κ-generalized version of Kenneth Wilson's field-theoretic renormalization group in momentum space <cit.>, namely κ-Statistical Field Theory (κ-SFT), and 2) Investigating which BG systems present a genuine κ-generalization, i.e., κ-generalized systems whose critical exponents depend on κ.
§ Κ-SFT
We introduce the κ-SFT by defining its Euclidean generating functional as
Z[J] = 𝒩^-1exp_κ[-∫ d^dxℒ_int(δ/δ J(x))]∫exp[1/2∫ d^dxd^dx^'J(x)G_0(x-x^')J(x^')],
where
exp_κ(-x) = (√(1 + κ^2x^2) - κ x)^1/κ
is the κ-generalized exponential function <cit.>, with κ∈(-1, 1), and G_0(x-x^') is the free propagator of the theory. Analogously to Ref. [21] (where, in the corresponding second term, one sets q = 1 and obtains the conventional exponential, since that term is associated with the free propagator, which can be defined only in the extensive scenario), we set κ = 0 in the second term of Eq. (<ref>) and obtain the conventional exponential, since it is associated with the free propagator, which can be defined only in the nongeneralized realm. The constant 𝒩 is determined from Z[J=0] = 1. Now we study some κ-generalized models.
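For later reference, the κ-exponential in Eq. (<ref>) can also be written as exp_κ(u) = exp[arcsinh(κ u)/κ], which makes both its κ→0 limit and its evenness in κ explicit. A minimal numerical sketch (in Python, with illustrative arguments of our own choosing) is:

import numpy as np

def exp_kappa(u, kappa):
    # kappa-generalized exponential: (sqrt(1 + kappa^2 u^2) + kappa*u)^(1/kappa)
    # = exp(arcsinh(kappa*u)/kappa); reduces to exp(u) as kappa -> 0
    if kappa == 0.0:
        return np.exp(u)
    return np.exp(np.arcsinh(kappa * u) / kappa)

u = -2.0
print(exp_kappa(u, 1e-6), np.exp(u))          # kappa -> 0 limit
print(exp_kappa(u, 0.4), exp_kappa(u, -0.4))  # even function of kappa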
§ Κ-ISING-LIKE SYSTEMS
By applying perturbation theory for some O(N)-symmetric N-component self-interacting λϕ^4 scalar field theory, we obtain the κ-generalized static, through six distinct and independent methods in dimensions d = 4 - ϵ, and dynamic critical exponents as
η_κ = η , ν_κ = ν , z_κ = z ,
where η, ν and z are the corresponding nongeneralized critical exponents, valid to all loop orders. We observe that the κ-generalized critical indices are the same as their nongeneralized counterparts <cit.>. This shows that for the corresponding Ising-like systems, such as the κ-generalized Ising, XY, Heisenberg, self-avoiding random walk and spherical models, the associated critical exponents do not depend on κ. The same occurs for the κ-generalized ϕ^6 <cit.>, long-range <cit.>, Gross-Neveu <cit.>, uniaxial strong dipolar forces <cit.>, spherical <cit.>, Lifshitz <cit.> and multicritical points of order k <cit.> models. Then, the κ-SFT is not suitable for describing the nonconventional critical properties of real imperfect crystals, for example of manganites <cit.> presenting defects, impurities, inhomogeneities, finite cluster sizes, random magnetic dilution, magnetocrystalline anisotropies etc. and the competition among them, as the nonextensive statistical field theory is, as shown recently in the literature <cit.>.
Now, we compare the field-theoretic results of this section with the ones obtained by the Monte-Carlo simulation of the 2D Ising Model. We consider a square lattice with L^2 nodes and periodic boundary conditions, where we can assign a system state
σ = ( σ_1, σ_2, ..., σ_N ),
where N=L^2 and each stochastic spin variable can have the values σ_i = ± 1. We can start the dynamics with a random configuration, and at each step, we randomly choose one spin to be updated. Then, we try a spin flip with the κ-generalized Metropolis rate
w_i(σ) = Minimum[ 1, exp_κ(-e^(2)_i/T)/exp_κ(-e^(1)_i/T)].
where T is the temperature and e^(1)_i and e^(2)_i are the energies of the spin i before and after the spin-flip, respectively. The local spin energies e^(1)_i and e^(2)_i are given by the Ising Hamiltonian
H = ∑_<i,j>^L^2σ_iσ_j
where the index j runs on the first neighbors of node i. The following Master equation describes the Ising dynamics<cit.>
d/dt𝒫_σ = ∑_i^N w_i(σ^i) 𝒫_σ^i - w_i(σ) 𝒫_σ,
where 𝒫_σ is the occupation probability of one system state and
σ^i = ( σ_1, σ_2, ...,-σ_i, ..., σ_N),
is the state after a spin-flip. The occupation probabilities should have a stationary solution where the local spin energies obey the Kaniadakis distribution if one chooses the rates w_i in Eq. (<ref>).
We define a Monte Carlo step as the sequential update of L^2 spins. We wait for the system to reach the stationary state from an initial random state by updating the system N_term thermalization steps. When the system is in the stationary state, we begin to collect a time series with N_t elements of the thermodynamic parameters, for example, the mean magnetization
m_ℓ = | 1/L^2∑_i σ_i(t_ℓ) |,
and the mean internal energy
e_ℓ = 1/L^2<H>(t_ℓ),
where t_ℓ (ℓ = 1, 2, ..., N_t) is the simulation time after the thermalization, which is a multiple of the Monte Carlo step time because we also discard Monte-Carlo steps between two elements of the time series to avoid data correlation and critical slowing down effects <cit.>. The observable moments are then given by
<m^n> = 1/N_t∑_ℓ^N_t m^n_ℓ
<e^n> = 1/N_t∑_ℓ^N_t e^n_ℓ
And the following averages on the ensembles of m_ℓ and e_ℓ time series yield
U(T,L) = 1 - <m^4>/(3<m^2>^2),
M(T,L) = < m >,
χ(T,L) = L^2/T( <m^2> - <m>^2 ),
c(T,L) = L^2/T^2( <e^2> - <e>^2 ),
which are the Binder cumulant, the magnetization, the magnetic susceptibility, and the specific heat, respectively. The Binder cumulant should not depend on the system size in the critical temperature allowing for an estimate of the critical temperature where the curves for different lattice sizes approximately cross <cit.>. In addition, from resampling the time series, one can estimate error bars <cit.>. The above thermodynamic quantities in Eq. (<ref>) should scale as the finite system size in two dimensions as
U(T,L) ∝ F_U [ L^-1/ν_κ(T - T_c) ]
M(T,L) ∝ L^-β_κ/ν_κ F_M [ L^-1/ν_κ(T - T_c) ]
χ(T,L) ∝ L^γ_κ/ν_κ F_χ[ L^-1/ν_κ(T - T_c) ]
c(T,L) ∝ L^α_κ/ν_κ F_c [ L^-1/ν_κ(T - T_c) ]
close to the critical temperature T_c in a continuous phase transition, respectively, where F_U, F_M, F_χ, and F_c are scaling functions.
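As a concrete illustration, the estimators in Eq. (<ref>) can be computed from the stationary time series as in the following sketch (Python; written with the standard Binder-cumulant definition, in which ⟨m²⟩ appears squared in the denominator):

import numpy as np

def thermo_observables(m_series, e_series, T, L):
    """Binder cumulant, magnetization, susceptibility and specific heat from
    time series of |m| and e collected in the stationary state."""
    m1, m2, m4 = np.mean(m_series), np.mean(m_series**2), np.mean(m_series**4)
    e1, e2 = np.mean(e_series), np.mean(e_series**2)
    U = 1.0 - m4 / (3.0 * m2**2)
    M = m1
    chi = L**2 / T * (m2 - m1**2)
    c = L**2 / T**2 * (e2 - e1**2)
    return U, M, chi, c

Error bars can then be obtained by resampling (bootstrapping) the time series before averaging, as mentioned above.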
We show simulation results of the 2D Ising model with the κ-generalized Metropolis rate in Eq. <ref> in Figs. <ref>, and <ref> for κ=1. We also simulated κ=0.2, κ=0.4, κ=0.6, and κ=0.8 (figures not shown), and resume the critical temperatures in Tab. <ref>. Negative values of κ reproduce the curves for respective positive values, the values of the κ-generalized critical exponents are some even function of κ. This result is in agreement with that obtained though κ-SFT, within the margin of error displayed in Table <ref>. We note T_c eventually vanishes by increasing the κ parameter. In addition, in the limit κ→ 0, we obtain the exact T_c of the Ising model on the square lattice.
We can estimate the critical exponent ratio 1/ν_κ from the dependence of dln M(T,L)/dT on the system size at T_c. In addition, we can estimate β_κ/ν_κ, γ_κ/ν_κ, and α_κ/ν_κ from the dependence of the magnetization, susceptibility and specific heat on the system size in the critical temperature T_c. In Fig. <ref>, we show results of dln M/dT, M, χ and c as functions of the system size in panels (a), (b), (c), and (d), respectively.
The linear regressions of the data at the critical temperature T_c as functions of the system size furnish estimates of the exponent ratios, which we summarize in Tab. <ref>. In all results shown in Tab. <ref>, we used N_term = 10^6 and N_t=10^7, where we discarded 10^2 Monte-Carlo steps between two successive elements of the time series. All simulation results are close to the exact 2D Ising model exponents, and we do not observe a strong dependence on κ.
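For reference, each exponent ratio is obtained as the slope of a log-log fit of the corresponding observable at T_c against the system size; a minimal sketch (with hypothetical susceptibility data chosen by us) is:

import numpy as np

def exponent_ratio(L_values, Q_at_Tc):
    """Slope of log Q(T_c, L) versus log L, e.g. gamma/nu from chi(T_c, L) ~ L^{gamma/nu}
    or -beta/nu from M(T_c, L) ~ L^{-beta/nu}; also returns the fit uncertainty."""
    coeffs, cov = np.polyfit(np.log(L_values), np.log(Q_at_Tc), 1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# example with hypothetical susceptibility values at T_c
L_values = np.array([16, 32, 64, 128])
chi_Tc = np.array([12.1, 40.5, 135.0, 452.0])
print(exponent_ratio(L_values, chi_Tc))   # slope approximates gamma/nu (= 1.75 for the 2D Ising model)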
We also show data collapses to confirm the scaling dependence of the thermodynamic properties given in Eq. <ref>, with the exact critical exponents of the 2D Ising model. In all results shown in Fig. <ref>, we used N_term = 10^5, and N_t=10^7, where we discarded 10 Monte-Carlo steps between two successive elements of the time series. We note that the data collapses are not compatible with a strong dependence of the critical exponents on κ. This deficiency of Kaniadakis statistics could be associated to the fact that it is trace-form but is not composable. For some discussion on the relevance of an entropic functional being simultaneously trace-form and composable, see Ref. <cit.>.
Even if we were to consider the weak dependence on κ just mentioned, we also note that the correspondence between the κ-generalized critical exponents and the value of κ is not one-to-one. The values of the critical exponents are not uniquely associated with a given value of κ, i.e., two values of κ identify the same values of the critical indices. This is a consequence of the fact that the κ-generalized distribution is an even function of κ, namely Eq. (<ref>). In fact, the set of values of the nonconventional critical exponents for real imperfect crystals, e.g. of manganites, can be described by a generalized statistical field theory only if this theory is general and thus furnishes both higher and lower values of the critical exponents, when compared to those for perfect crystals <cit.>. This task is attained only if the distribution is an injective function, which is the case of Ref. <cit.> but not the present one, since an even function is not injective. Then the κ-generalized statistical field theory is not general and must be discarded as a candidate generalization of statistical mechanics.
§ SOME Κ-GENERALIZED MODELS
Although there are no κ-dependent versions of the nongeneralized Ising-like systems, some other κ-generalized systems do exist. We present their critical exponents below.
§.§ κ-percolation and κ-Yang-Lee edge singularity
The κ-generalized versions, namely both κ-generalized percolation <cit.> (α = -1 and β = -2) and Yang-Lee edge singularity <cit.> (α = -1 and β = -1), now for dimensions d = 6 - ϵ, present the following κ-generalized critical exponents
η_κ = η - 4αβκ^2/3(α - 4β)[α - 4β(1 - κ^2)]ϵ ,
ν_κ^-1 = ν^-1 - 20αβκ^2/3(α - 4β)[α - 4β(1 - κ^2)]ϵ ,
ω_κ = ω ,
where η and ν are the corresponding nongeneralized critical exponents up to all-loop order, and -1 < κ < 1. For the Yang-Lee edge singularity, η_κ and ν_κ are not independent <cit.>; they are related by ν_κ^-1 = (d - 2 + η_κ)/2 <cit.>. From η_κ with α = -1 and β = -1 we compute ν_κ. We can evaluate the remaining κ-generalized critical indices from the scaling relations among them <cit.>.
§.§ κ-long-range λϕ^3 theory
For the long-range λϕ^3 theory <cit.> in d = 3σ - ε the corresponding κ-generalized critical exponents can be written as
η_σ , κ = η_σ, ν_σ, κ^-1 = ν_σ^-1 - κ^2/1 - κ^2α/2βϵ ,
where α and β assume the values -1, -1 and -1, -2 for the Yang-Lee edge singularity problem and percolation cases <cit.>, respectively. The nongeneralized value of η_σ = 2 - σ is exact <cit.> and η_σ ,κ is exact within the approximation of this work. The nongeneralized exponents values were obtained up to two-loop level in the earlier work <cit.>.
§.§ κ-Gross-Neveu-Yukawa model
The κ-generalized Gross-Neveu-Yukawa model <cit.> expresses interacting scalar field ϕ and N massless Dirac fermions ψ and ψ̅ in d = 4 - ϵ dimensions. The corresponding κ-generalized critical indices are given by <cit.>
η_ψ ,κ = η_ψ + κ^2/(2N + 3)(2N + 3 - 2κ^2)ϵ ,
η_ϕ ,κ = η_ϕ + 4Nκ^2/(2N + 3)(2N + 3 - 2κ^2)ϵ ,
ν_κ^-1 = ν^-1 - A_N,κ/(2N + 3)(2N + 3 - 2κ^2)ϵ ,
where
A_N,κ = (2N + 3)(R_N,κ/6 + 2N) - (2N + 3 - 2κ^2)(R_N/6 + 2N),
R_N,κ = -(2N - 3 + 2κ^2) + √((2N - 3 + 2κ^2)^2 + 144N(1 - 4κ^2)),
R_N = lim_κ→ 0 R_N,κ.
The nongeneralized critical exponents η_ψ, η_ϕ and ν were evaluated up to four-loop level in Ref. <cit.>.
§.§ κ-short- and κ-long-range directed percolation
For κ-generalized short- and long-range directed percolation <cit.> in d = 4 - ϵ and d = 2σ - ε, respectively, we obtain
η_κ = η - κ^2/1 - κ^2ϵ/6, η_σ ,κ = η_σ - κ^2/1 - κ^2ε/7,
ν_κ = ν + κ^2/1 - κ^2ϵ/16, ν_σ ,κ = ν_σ + κ^2/1 - κ^22ε/7σ^2,
z_κ = z - κ^2/1 - κ^2ϵ/12, z_σ ,κ = z_σ - κ^2/1 - κ^2ε/7.
The nongeneralized indices were computed up to two-loop level in <cit.>.
§.§ κ-short- and κ-short-range dynamic isotropic percolation
In the case of κ-generalized short- and long-range dynamic isotropic percolation <cit.> at d = 6 - ϵ and d = 3σ - ε, respectively, we have
η_κ = η - κ^2/1 - κ^2ϵ/21, η_σ ,κ = η_σ - κ^2/1 - κ^23ε/8,
ν_κ = ν + κ^2/1 - κ^25ϵ/84, ν_σ ,κ = ν_σ + κ^2/1 - κ^2ε/4σ^2,
z_κ = z - κ^2/1 - κ^2ϵ/6, z_σ ,κ = z_σ - κ^2/1 - κ^23ε/16.
The nongeneralized critical indices were computed up to two-loop level in Ref. <cit.>.
§ PHYSICAL INTERPRETATION OF THE RESULTS
The physical interpretation of the theory can be seen, e.g., from the results for the critical indices for both κ-percolation and the κ-Yang-Lee edge singularity shown in Tables <ref>-<ref> just below.
We observe that the numerical values of the κ-generalized critical indices turn out to be higher than the nongeneralized ones when κ moves away from the nongeneralized value κ = 0 (both for κ > 0 and κ < 0, due to the fact that the κ-generalized exponential function is an even function of κ). We now present the physical interpretation of these results: from their definitions, the critical exponents furnish a measure of how strongly a given physical quantity diverges near the system critical point. In the case, for example, of the susceptibility of a given material, we obtain information about how susceptible the system is to changes in the magnetic field. The susceptibility diverges more strongly (weakly) than in the nongeneralized case when the corresponding critical index, namely γ, displays higher (lower) numerical values. Then higher (lower) numerical values of the critical exponents mean systems more (less) susceptible to magnetic field changes and thus systems interacting more weakly (strongly) than in the nongeneralized situation. Hence the κ-parameter can be physically interpreted as one encoding an effectively weaker interaction than in the nongeneralized case. Alternatively, we can predict the behavior of the system (with the energy E < 0 in units of k_BT) from
e_κ^-E≈ e^-E(1 + 1/6κ^2E^3) ≈ e^-(E - 1/6κ^2E^3).
In the aforementioned approximation, we have the effective energy (E < 0) E - 1/6κ^2E^3. It increases, i.e. gets weaker (it never decreases or gets stronger, since the κ-generalized exponential function is an even function of κ), for all values of κ throughout its range. As the effective energy or interaction always turns out to be weaker, the system must have κ-generalized critical indices higher than those in the nongeneralized situation, as can be seen in Tables <ref>-<ref>. Furthermore, as the effective κ-generalized energy never decreases or presents stronger values, we can never obtain κ-generalized critical exponents with smaller numerical values than the nongeneralized ones. As there are many real materials for which the corresponding critical exponent values are smaller than the nongeneralized ones, the critical behavior of these materials cannot be explained by applying the κ-generalized distribution, thus characterizing such a distribution as an incomplete one, in contrast to what was achieved using the nonextensive one <cit.>.
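The expansion above is easy to verify numerically; the following short sketch (with an illustrative value E = -2 in units of k_BT chosen by us) checks both the quadratic correction in κ and the evenness in κ:

import numpy as np

def exp_kappa(u, kappa):
    return np.exp(np.arcsinh(kappa * u) / kappa) if kappa else np.exp(u)

E, kappa = -2.0, 0.2                     # E < 0 in units of k_B T (illustrative)
lhs = exp_kappa(-E, kappa)
rhs = np.exp(-E) * (1.0 + kappa**2 * E**3 / 6.0)
print(lhs, rhs)                          # agree up to higher-order corrections in kappa
print(np.isclose(exp_kappa(-E, kappa), exp_kappa(-E, -kappa)))  # even in kappa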
§ CONCLUSIONS
We have introduced a general field-theoretic approach for studying the critical properties of systems undergoing continuous phase transitions in the κ-generalized statistics framework, namely κ-generalized statistical field theory. We have shown that the supposedly κ-generalized systems, e.g. the κ-Ising, Heisenberg, ϕ^6, long-range, Gross-Neveu, uniaxial strong dipolar forces, spherical, Lifshitz and multicritical models, present the same critical behavior as their nongeneralized counterparts, i.e., their critical exponents do not depend on κ. The theory is thus not suitable for describing the nonconventional critical properties of real imperfect crystals, e.g. of manganites, as an alternative generalized theory is, namely nonextensive statistical field theory, as shown recently in the literature. This implies that the κ-generalized statistical field theory is not general and must be discarded as a candidate generalization of statistical mechanics. Although genuinely κ-dependent versions of the aforementioned systems do not exist, we have displayed a few systems that do depend on κ, for which we have presented the corresponding physical interpretation through the general physical interpretation of the κ-parameter.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENTS
PRSC would like to thank the Brazilian funding agencies CAPES and CNPq (grants: Universal-431727/2018 and Produtividade 307982/2019-0) for financial support.
|
http://arxiv.org/abs/2307.06274v1 | 20230712161830 | First Hitting Time of a One-Dimensional Levy Flight to Small Targets | [
"Daniel Gomez",
"Sean D Lawley"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"math.AP",
"math.PR",
"35B25, 60K50, 35R11"
] |
First Hitting Time of a One-Dimensional Lévy Flight to Small Targets
Daniel Gomez, Sean D. Lawley
====================================================================
First hitting times (FHTs) describe the time it takes a random “searcher” to find a “target” and are used to study timescales in many applications. FHTs have been well-studied for diffusive search, especially for small targets, which is called the narrow capture or narrow escape problem. In this paper, we study the first hitting time to small targets for a one-dimensional superdiffusive search described by a Lévy flight. By applying the method of matched asymptotic expansions to a fractional differential equation we obtain an explicit asymptotic expansion for the mean FHT (MFHT). For fractional order s∈(0,1) (describing a (2s)-stable Lévy flight whose squared displacement scales as t^1/s in time t) and targets of radius ε≪1, we show that the MFHT is order one for s∈(1/2,1) and diverges as log(1/ε) for s=1/2 and ε^2s-1 for s∈(0,1/2). We then use our asymptotic results to identify the value of s∈(0,1] which minimizes the average MFHT and find that (a) this optimal value of s vanishes for sparse targets and (b) the value s=1/2 (corresponding to an inverse square Lévy search) is optimal in only very specific circumstances. We confirm our results by comparison to both deterministic numerical solutions of the associated fractional differential equation and stochastic simulations.
§ INTRODUCTION
The timescales of many physical, chemical, and biological processes are characterized by first hitting times (FHTs) <cit.>. Generically, a FHT is the time it takes a “searcher” to find a “target.” Applications include animal foraging <cit.>, transcription factor search for DNA binding sites <cit.>, synaptic transmission in neuroscience <cit.>, menopause timing <cit.>, financial income dynamics <cit.>, and computer search algorithms <cit.> among many other applications <cit.>.
FHTs are often called first passage times, first arrival times, exit times, escape times, or capture times.
Mathematical models of such processes often assume that the searcher randomly explores a given spatial domain, and a great deal of mathematical and computational methods have been developed to study the statistics and probability distribution of the FHT to the target(s) <cit.>. More precisely, let X={X(t)}_t≥0 denote the stochastic path of a searcher in a d-dimensional spatial domain Ω⊆^d. The FHT to a target set Ω_target⊂Ω (where Ω_target is possibly a union of multiple disjoint sets) is then
τ
:=inf{t≥0:X(t)∈Ω_target}.
Naturally, the statistics and distribution of the FHT τ depend on the stochastic dynamics of the searcher X, the space dimension d≥1, and the size and geometry of the target set Ω_target and spatial domain Ω.
A common framework for studying FHTs is to assume that the searcher X is a pure diffusion process (i.e. a Brownian motion) and the targets are much smaller than their confining spatial domain, which is called the narrow capture problem (or narrow escape problem if the target is embedded in the otherwise reflecting boundary) <cit.>. For bounded domains in dimension d=1, the MFHT of such a diffusive searcher is always finite even if the targets are single points. In contrast, the MFHT of diffusion in any dimension d≥2 diverges as the target size vanishes. In particular, if ε>0 compares the lengthscale of the target to the lengthscale of the confining domain, then it is well-known that as ε vanishes,
𝔼[τ]
=
O(1) if d=1,
O(log (1/ε)) if d=2,
O(ε^2-d) if d≥3.
The stark contrast in (<ref>) between dimensions d=1, d=2, and d≥3 stems from the fact that Brownian motion is recurrent if d=1, neighborhood recurrent in d=2, and transient in d≥3 <cit.>.
FHTs have also been studied for superdiffusive processes, which are characterized by squared displacements that grow superlinearly in time <cit.>. A common mathematical model of superdiffusion is a Lévy flight <cit.>, which is often derived from the continuous time random walk model <cit.>. In this model, a searcher waits at its current location for a random time and then jumps a random distance chosen from some jump length probability density f(y) in a uniform random direction. The searcher repeats these two steps indefinitely or until it reaches the target. For a finite mean waiting time t_0∈(0,∞) and a jump length density with the following slow power law decay,
f(y)
∼(l_0)^2s/y^1+2s as y→∞ for some s∈(0,1) and lengthscale l_0>0,
the probability density p(x,t) for the searcher position satisfies the following space-fractional Fokker-Planck equation in a certain scaling limit <cit.>,
∂/∂ tp
=-D_s(-Δ)^sp,
where
D_s=(l_0)^2s/t_0>0
is the generalized diffusivity
and (-Δ)^s denotes the fractional Laplacian of order s∈(0,1), defined by <cit.>
(-Δ)^s φ(x) = C_s P.V.∫_-∞^∞ (φ(x)-φ(y))/|x-y|^2s+d dy, C_s := 4^sΓ(s+d/2)/(π^d/2|Γ(-s)|),
where P.V. indicates the principal value and Γ(·) denotes the gamma function.
Note that Lévy flights are often parameterized by their stability index α∈(0,2) <cit.>, which is simply twice the fractional order s∈(0,1),
α=2s∈(0,2).
Observe that (<ref>) is the diffusion equation describing Brownian motion if s=1.
Lévy flights are perhaps the most mathematically tractable model of superdiffusion, though analytical results for Lévy flights are scarce compared to their Brownian counterpart.
The mathematical analysis of hitting times of superdiffusive search processes has also been controversial. Indeed, the influential Lévy flight foraging hypothesis was based on the claimed theoretical optimality of a certain superdiffusive process involving heavy-tailed jumps as in (<ref>) with the “inverse square" value s=1/2 <cit.>, but this decades-old claim was recently shown to be false <cit.>.
In this paper, we study FHTs of Lévy flights to small targets in one space dimension. Assuming the targets are much smaller than the typical distance between them, we apply the method of matched asymptotic expansions to the fractional differential equation describing the MFHT. The resulting asymptotic formulas reveal how FHTs depend on the fractional order s∈(0,1), target size, target arrangement, and initial searcher location (or distribution of locations). We further determine the full probability distribution of the FHT for fractional orders s∈(0,1/2] in the small target limit. We validate our results by comparison to both deterministic numerical solutions of the associated fractional differential equation and stochastic simulations.
To describe our results more precisely, let X={X(t)}_t≥0 be a one-dimensional, (2s)-stable Lévy flight for s∈(0,1) with generalized diffusivity D_s>0 (i.e. the probability density that X(t)=x satisfies (<ref>)) and periodic boundary conditions at x=± l. Since we can always re-scale space and time according to
x→ x/l, t→ D_st/l^2s,
we set D_s=l=1 without loss of generality. Suppose that the target set Ω_target consists of N≥1 targets in the interval Ω=(-1,1)⊂ℝ centered at points {x_1,…,x_N}∈(-1,1) with radii {ε l_1,…,ε l_N}, i.e.
Ω_target
=∪_i=1^N(x_i-ε l_i,x_i+ε l_i).
Here, l_1,…,l_N>0 are O(1) constants which allow the targets to differ in size. When the context is clear we denote by |·| the 2-periodic extension of the absolute value on (-1,1) so that |a-b| denotes the minimum distance between a and b in the periodic domain (-1,1). Assume that 0<ε≪1 and the targets are well-separated in the sense that |x_i-x_j|≫ε for all i,j∈{1,…,N} with i≠ j. Let v(x) denote the MFHT to any of the N targets starting from x∈(-1,1), i.e.
v(x)
:=𝔼[τ | X(0)=x],
where τ is the FHT in (<ref>). The function v(x) satisfies
(-Δ)^s v(x) = 1, x∈Ω∖Ω_target,
v(x) = 0, x∈Ω_target,
v(x) is 2-periodic.
We obtain our results on the FHT by analyzing (<ref>) in the limit ε→0.
We now state our results on the MFHT for the case of a single target of radius ε>0 centered at x_1=0 (i.e. N=l_1=1). Note that our assumption of periodic boundary conditions means that this scenario is equivalent to a Lévy flight on all of ℝ with a periodic array of targets separated by distance 2. For any fractional order s≠1/2, the MFHT of a Lévy flight conditioned on starting at x∈(-1,1)∖{0} is given by the following asymptotic formula for 0<ε≪1
v(x)
∼ε^2s-1 2𝔞_s/𝔟_s
-2𝔞_sR_s(0)
+2𝔞_s(-|x|^2s-1+R_s(x)),
where
𝔞_s := -2π^-1sΓ(-2s)sin(π s), 𝔟_s:=Γ(1/2)/Γ(3/2-s)Γ(s),
and R_s is the regular part of the Green's function given explicitly in Proposition <ref>.
If s=1/2, then this MFHT is
v(x)∼(2/π)log(2/ε)
-(2/π)R_1/2(0)
+(2/π)(log|x|+R_1/2(x)).
If the Lévy flight searcher starts from a uniformly distributed position in the interval (-1,1), then the average MFHT is
1/2∫_-1^1 v(x) dx
∼ε^2s-1 2𝔞_s/𝔟_s
-2𝔞_s R_s(0) if s≠1/2,
(2/π)log(2/ε)
-(2/π)R_1/2(0) if s=1/2.
These results show an analog between Brownian search in dimensions d≥1 and Lévy search in dimension d=1 with fractional order s∈(0,1).
Specifically, (<ref>)-(<ref>) imply
𝔼[τ]
=
O(1) if s∈(1/2,1],
O(log (1/ε)) if s=1/2,
O(ε^2s-1) if s∈(0,1/2).
Comparing (<ref>) to (<ref>) shows that FHTs of Brownian motion in different dimensions diverge similarly to FHTs of Lévy flights in one dimension with different fractional orders. As in the case of Brownian motion in (<ref>), the different regimes in (<ref>) stem from differences in recurrence versus transience, which manifests in our analysis as different far-field behavior of the inner solutions used in our matched asymptotics. FHTs of Lévy flights in one dimension can diverge because the stochastic paths of Lévy flights are discontinuous. Hence, in contrast to Brownian motion, Lévy flights may jump across a target without actually hitting it in a phenomenon termed a “leapover” <cit.> (see Figure <ref> for an illustration).
Our analysis allows us to identify the value of s∈(0,1] which minimizes the MFHT. We find that this optimal value (denoted s_opt) grows continuously from s_opt≈0 up to s_opt≈1 (i.e. Brownian search) as the target density grows relative to the lengthscale l_0 in (<ref>)-(<ref>). In particular, we show that the value s=1/2 (corresponding to stability index α=2s=1, i.e. inverse square Lévy search) is optimal in only very specific circumstances.
The rest of the paper is organized as follows.
In Section <ref>, we analyze the mean and full probability distribution of the FHT. In Section <ref>, we compare our asymptotic results to numerical solutions of the associated fractional equations and stochastic simulations. In Section <ref>, we address the question of the fractional order s∈(0,1] that minimizes the MFHT. We conclude by summarizing our results and discussing related work. An appendix collects some more technical aspects of the numerical implementation in Section <ref>.
§ ASYMPTOTIC ANALYSIS OF THE MFHT
The method of matched asymptotic expansions (MMAE) has been an invaluable tool in the analysis of narrow capture and escape problems for pure diffusion processes since its introduction in <cit.>. Broadly speaking, the MMAE proceeds by formulating inner- and outer-problems whose solutions can be expressed in terms of a canonical “electrified disk” solution and an appropriately weighted sum of Green's functions respectively. Combining a solvability condition for the outer-problem together with matching conditions between the inner- and outer-solutions yields a linear system with which all remaining unknowns arising in the asymptotic analysis can be determined. In this section we adapt the MMAE to derive an asymptotic expansion for the MFHT satisfying the fractional differential equation (<ref>). We show how the MMAE in this fractional setting synthesizes the analysis of the standard narrow escape problem in dimensions d=2 and d=3. In addition we introduce a fractional counterpart to the classical electrified disk problem, as well as a 2-periodic fractional Green's function.
We begin our asymptotic analysis of the MFHT by seeking an outer asymptotic expansion of the form
v(x) ∼ε^2s-1v_0^ε(x),
valid for values of x that are sufficiently far from all targets in the sense that |x-x_i|≫ε for all i=1,...,N. In addition for each i=1,...,N we seek an inner asymptotic expansion of the form
v(x_i+ε X) ∼ V_i^ε(X),
valid for values of x=x_i+ε X sufficiently close to the i^th target in the sense that X=O(1).
It is here convenient to recall two equivalent definitions of the fractional Laplacian given by (<ref>) when restricted to 2-periodic functions. Specifically, if we let φ(x) be an arbitrary 2-periodic function then
(-Δ)^s φ(x) = C_s P.V.∫_-1^1 K_s(x-y)(φ(x)-φ(y))dy,
where
K_s(z):= ∑_n∈ℤ1/|z+2n|^2s+1,
and where ℤ denotes the set of all integers. This expression is conveniently chosen to determine appropriate inner problems. Moreover, it can be shown (see, for example, Eq. (2.53) in <cit.>) that the restriction of the fractional Laplacian defined by (<ref>) to 2-periodic functions coincides with the spectral fractional Laplacian defined by
(-Δ)^s φ(x) = ∑_n=ℤ∖{0} |nπ|^2sφ_n e^inπ x, φ_n := 1/2∫_-1^1 e^-inπ xφ(x)dx.
This formulation proves to be useful when considering global quantities, such as the relevant periodic fractional Green's function.
In order to state our main result for this section we first define the scalars
ν_i^ε := -1/log(ε l_i / 2), ν̅^ε := 1/N∑_i=1^N ν_i^ε, l̅_s := 1/N∑_i=1^N l_i^1-2s,
as well as the N-dimensional vectors
l_s := [ l_1^1-2s; ⋮; l_N^1-2s ], ν^ε := [ ν_1^ε; ⋮; ν_N^ε ], e_N := [ 1; ⋮; 1 ].
In addition, we define the N× N diagonal matrices
ℒ_s := (l_1^1-2s,...,l_N^1-2s), 𝒩^ε := (ν_1^ε,...,ν_N^ε),
as well as the N× N Green's matrix 𝒢_s whose entries are given by
(𝒢_s)_ij = R_s(0), i=j,
R_s(x_i-x_j) + H_s(x_i-x_j), i≠ j,
where R_s is the regular part of the Green's function defined in Proposition <ref>, and H_s(z) is the singular part with H_s(z):=-|z|^2s-1 for s≠1/2 and H_s(z):=log|z| for s=1/2. Our main asymptotic result for the hitting time is given below.
Let ε≪ 1, let l_1,...,l_N=O(1), and suppose that -1≤ x_1<...<x_N<1 are well separated in the sense that |x_i-x_j|≫ O(ε) for all i≠ j. For any 0<s<1 define
χ^ε := 1/(Nl̅_s)(2𝔞_s/𝔟_s - ε^1-2s𝔟_s l_s^T𝒢_sℒ_sB^ε), s≠ 1/2,
2/(π N ν̅^ε)( 1 - (π/2)(ν^ε)^T𝒢_1/2B^ε) , s=1/2,
where 𝔞_s and 𝔟_s are given by (<ref>), and where the N-dimensional vector B^ε=(B_1^ε,...,B_N^ε)^T is found by solving the linear system
( ℐ_N - ε^1-2s𝔟_s( ℐ_N - 1/(Nl̅_s)e_Nl_s^T )𝒢_sℒ_s ) B^ε = 2𝔞_s/(Nl̅_s 𝔟_s) e_N, s≠ 1/2,
(ℐ_N - 𝒩^ε( ℐ_N - 1/(Nν̅^ε)e_N(ν^ε)^T )𝒢_1/2)B^ε = 2/(π N ν̅^ε) ν^ε, s=1/2,
where ℐ_N is the N× N identity matrix. Then, an asymptotic expression for the MFHT satisfying (<ref>) for |x-x_i|≫ε for all i=1,...,N is given by
v(x) ∼ε^2s-1χ^ε + 𝔟_s∑_j=1^N l_j^1-2sB_j^ε (-|x-x_j|^2s-1 + R_s(x-x_j)), s≠ 1/2,
χ^ε + ∑_j=1^N B_j^ε(log|x-x_j| + R_1/2(x-x_j) ), s=1/2,
where R_s(x) is the regular part of the Green's function found in Proposition <ref>.
The remainder of this section is organized as follows. In Section <ref> we first establish some key properties of a fractional counterpart to the classical electrified disk problem. This is followed by a discussion of a certain 2-periodic fractional Green's function in Section <ref>. In Section <ref> we then proceed with applying the MMAE to derive Principal Result <ref>. Finally, in Section <ref> we show that, to leading order, the FHT τ is exponentially distributed for s∈(0,1/2].
§.§ The Fractional Electrified Disk Problem
The fractional counterpart to the electrified disk problem in standard narrow capture problems is the problem
(-Δ)^s W_s(X) = 0, |X|>1,
W_s(X) = 1, |X|<1.
The function W_s(X) is the probability that a Lévy flight starting at X∈ℝ eventually hits the ball (-1,1). With this probabilistic interpretation, one readily obtains the following formula for W_s(X) when s<1/2 (see Corollary 2 in <cit.>)
W_s(X) = √(π)/(Γ(1/2 - s)Γ(s)) ∫_X^2-1^∞ u^s-1/√(u+1) du.
We proceed to derive an explicit expression for W_s(X) valid for all s∈(0,1). Specifically, we deploy a Kelvin transform and fractional Poisson formula for s≠ 1/2, and standard complex analysis tools for s=1/2. The main result is summarized in the following proposition.
The fractional electrified disk problem (<ref>) admits the following non-constant solution
W_s(X) = √(π)/(Γ(s)Γ(3/2 - s)) |X|^2s-1(1 - 1/X^2)^s _2F_1(1,1/2;3/2 -s; 1/X^2), s≠ 1/2,
1-log(X+√(X^2-1)), s=1/2,
|X|>1,
with W_s(X)=1 for |X|≤ 1. Moreover, this solution has the far-field behaviour
W_s(X)∼𝔟_s|X|^2s-1 + O(|X|^2s-3), s≠ 1/2,
-log(2|X|) + 1 + O(X^-2), s=1/2,
as |X|→∞,
where 𝔟_s is given by (<ref>).
Starting with the s≠ 1/2 case, we first transform (<ref>) into the more commonly considered fractional problem with extended Dirichlet boundary conditions posed outside of (-1,1). Specifically, we first use the Kelvin transform
X = 1/X, W_s(X) = |X|^2s-1W_s(1/X),
in terms of which we readily calculate (see, for example, Proposition A.1 in <cit.>)
(-Δ)^sW_s(X) = |X|^2s+1(-Δ)^sW_c(X).
In particular we find that W_s(X) solves
(-Δ)^s W_s(X) = 0, |X|<1,
W_s(X) = |X|^2s-1, |X|>1.
Notice that the inhomogeneous term g(X) = |X|^2s-1 for |X|>1 in (<ref>) can be extended to ℝ in such a way that g∈ L^1_loc(ℝ)∩ C(ℝ) and
∫_ℝ|g(X)|/1+|X|^1+2s dX <∞ .
It then follows that the unique continuous solution to (<ref>) is given by (see Theorem 2.10 in <cit.>)
W_s(X) = ∫_|Y|>1P_s(Y,X)|Y|^2s-1 dY, |X|<1,
|X|^2s-1, |X| > 1,
where P_s(y,x) is the fractional Poisson kernel given by
P_s(y,x) := p_s ((1-x^2)/(y^2-1))^s 1/|x-y|, p_s := π^-1sin(π s) = 1/(Γ(s)Γ(1-s)).
Reverting to the original variables we therefore obtain the integral representation
W_s(X) = p_s ∫_1^∞( (X^2-1)/(1-1/Y^2) )^s 2|X|/((XY)^2-1) dY
= p_s |X|^2s-1(1-1/X^2)^s ∫_0^∞ (z+1)^s-1/2/(z^s (z+1-1/X^2)) dz,
where the first equality follows by combining the Y∈(-∞,-1) and Y∈(1,∞) contributions, and the second from the change of variables Y = √(z+1). Using the integral representation of the Gaussian Hypergeometric function (see Eq. 15.6.1 in <cit.>) we immediately obtain (<ref>). The far-field behaviour (<ref>) of W_s(X) is likewise immediately obtained by noting that (see Eq. 15.2.1 in <cit.>)
_2F_1(1,1/2;3/2-s;z) = 1 + z/(3-2s) + 3z^2/(4s^2-16s+15) + O(z^3), |z|≪ 1.
The equivalence of (<ref>) and (<ref>) is readily verified using properties of the Gaussian Hypergeometric function. Specifically, we first recast the integral in (<ref>) in terms of the Gaussian Hypergeometric function using the change of variables u= X^2 (z+1)-1. Equivalence with (<ref>) is then verified by first using Euler's transformation _2F_1(a,b;c;z) = (1-z)^c-a-b _2F_1(c-a,c-b;c;z) and then using the symmetry property _2F_1(a,b;c;z)= _2F_1(b,a;c;z).
We consider next the case s=1/2 for which the previous calculations yield W_s(X)≡ 1. Indeed, it is easy to see that W_s(X) ≡ 1 is the unique continuous solution to (<ref>) when s=1/2. To find a non-constant solution to (<ref>) we instead consider the extended problem in the two-dimensional upper half-space. Specifically, we seek a non-constant solution W(X,Y) to
∂^2 W/∂ X^2 + ∂^2 W/∂ Y^2 = 0, -∞<X<∞, Y>0,
W = 1, |X| < 1, Y=0,
∂W/∂ Y = 0, |X|>1, Y=0,
in terms of which W_s=1/2(X) = W(X,0) (see <cit.> for additional details on the extension property of the fractional Laplacian). Such a non-constant solution must have logarithmic growth as X^2+Y^2→∞ and is given by
W(X,Y) = 1 + Im{arcsin(X+iY)},
where Im(z) denotes the imaginary part of z∈ℂ. Setting Y=0 and considering only values of |X|>1 we readily obtain (<ref>) from which the far-field behaviour (<ref>) immediately follows.
§.§ The Periodic Fractional Green's Function
The second quantity we need to apply the MMAE is the periodic fractional Green's function G_s(x) satisfying
(-Δ)^s G_s(x) = 1/2 - δ(x), -1<x<1,
G_s(x+2) = G_s(x), -∞<x<∞,
∫_-1^1 G_s(x)dx = 0.
Using the spectral definition of the fractional Laplacian (<ref>) it is straightforward to see that
G_s(x) = - ∑_n=1^∞cos nπ x/(nπ)^2s.
We readily see that G_s(x) diverges as x→ 0 for s≤ 1/2. The following proposition extracts this singular behaviour and decomposes G_s(x) into a singular part and a regular part.
The periodic fractional Green's function G_s(x) satisfying (<ref>) is given by
G_s(x) = -𝔞_s|x|^2s-1 + 𝔞_sR_s(x) , s≠ 1/2,
π^-1log|x| + π^-1R_1/2(x), s=1/2,
where 𝔞_s is given by (<ref>). When s≠ 1/2 the regular part R_s(x) admits the following rapidly converging series
R_s(x) = 1/(2s) - (2s-1)/6 + (7/15)·(2s-1)(2s-2)(2s-3)/24 + ( (2s-1)/2 - (2s-1)(2s-2)(2s-3)/12 ) |x|^2
+ (2s-1)(2s-2)(2s-3)/24 |x|^4 + 2(2s-1)⋯(2s-5)∑_n=1^∞ a_2s,n/(π n)^2s cos(π n x),
where a_2s,n = ∫_π n^∞ x^2s-6 sin x dx. On the other hand, when s=1/2 the regular part has the series expansion
where (z) = ∫_0^z t^-1sin(t)dt denotes the usual sine integral.
The calculation of G_s(x) in the case s≠ 1/2 follows from computing Fourier series of |x|^2, |x|^4, and |x|^2s-1 and can be found in Appendix A of <cit.>. The case s=1/2 follows similarly, but this time only the Fourier series of log|x| is needed.
For the subsequent asymptotic analysis the most important part of G_s(x) in (<ref>) is the singular behaviour which takes the form of an algebraic singularity for s<1/2, a logarithmic singularity for s=1/2, and a bounded fractional cusp for s>1/2. The series expansions for the regular part appearing in (<ref>) and (<ref>) on the other hand are computationally useful due to their fast convergence.
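As a quick consistency check of the decomposition in Proposition <ref>, the sketch below (Python; the truncation levels and sample point are our own choices) evaluates the regular part through the rapidly converging series above and compares 𝔞_s(-|x|^2s-1+R_s(x)) with a direct, slowly converging truncation of the Fourier series for G_s(x):

import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def a_s(s):
    # a_s = -2 pi^{-1} s Gamma(-2s) sin(pi s), for s != 1/2
    return -2.0 / np.pi * s * gamma(-2.0 * s) * np.sin(np.pi * s)

def R_reg(x, s, nterms=50):
    """Regular part R_s(x) of the periodic fractional Green's function (s != 1/2)."""
    p3 = (2*s - 1) * (2*s - 2) * (2*s - 3)
    p5 = p3 * (2*s - 4) * (2*s - 5)
    tail = 0.0
    for n in range(1, nterms + 1):
        a2sn, _ = quad(lambda t: t**(2*s - 6), n * np.pi, np.inf,
                       weight='sin', wvar=1.0)            # a_{2s,n}
        tail += a2sn / (n * np.pi)**(2*s) * np.cos(n * np.pi * x)
    return (1/(2*s) - (2*s - 1)/6 + 7*p3/360
            + ((2*s - 1)/2 - p3/12) * x**2 + p3/24 * x**4 + 2*p5*tail)

def G_direct(x, s, N=200000):
    n = np.arange(1, N + 1)
    return -np.sum(np.cos(n * np.pi * x) / (n * np.pi)**(2*s))

s, x = 0.75, 0.4
print(G_direct(x, s), a_s(s) * (-abs(x)**(2*s - 1) + R_reg(x, s)))   # should agree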
§.§ Matched Asymptotic Expansions
Let x=x_i+ε X and substitute the inner expansion (<ref>) into (<ref>) so that using (<ref>) for the fractional Laplacian we obtain
ε C_s P.V.∫_-1/ε^1/ε∑_n∈ℤ (V_i^ε(X)-V_i^ε(Y))/|2n+ε(X-Y)|^2s+1 dY + ⋯ = 1,
where ⋯ denotes higher-order terms. The n=0 term dominates all other terms in the left-hand-side, and moreover we will also assume that it dominates the right-hand-side by assuming that V_i^ε≫ε^2s for all X=O(1). Further approximating the integral on the left-hand-side by replacing ±1/ε with ±∞ we thus obtain the inner problem
(-Δ)^s V_i^ (X) = 0, |X|>l_i,
V_i^(X) = 0, |X|≤ l_i,
where the limiting behaviour of V_i^(X) as |X|→∞ will be found by matching with the limiting behaviour of the outer solution as x→ x_i for each i=1,...,N.
In light of Proposition <ref> we seek, for each i=1,...,N, a non-constant inner solution of the form
V_i^(X) = ^2s-1B_i^(1 - W_s(X/l_i)),
where B_i^ is some -dependent constant that remains to be determined. From Proposition <ref> we then have the far-field behaviour
V_i^(X) ∼^2s-1B_i^(1 - 𝔟_sl_i^1-2s|X|^2s-1 + O(|X|^2s-3) ), s≠ 1/2,
B_i^(log(2|X/l_i|) + O(|X|^-2) ), s=1/2,
as |X|→∞.
The far-field behaviour of V_i^(X) must coincide with the limiting behaviour of the outer solution v^_0(x) as x→ x_i. Specifically, writing X=^-1(x-x_i) we obtain the matching condition as |x-x_i|→ 0,
v^_0(x) ∼ B_i^(1 - 𝔟_s^1-2sl_i^1-2s|x-x_i|^2s-1 + O(^3-2s)), s≠ 1/2,
B_i^( log|x-x_i| +1/ν_i^ + O(^2)), s=1/2.
Given the singular term |x-x_i|^2s-1 in the limiting behaviour (<ref>) we find that v^_0(x) is the 2-periodic function satisfying
(-Δ)^s v_0^(x) = ^1-2s - ^1-2s𝔞_s^-1𝔟_s∑_j=1^N l_j^1-2sB_j^δ(x-x_j), s≠ 1/2,
1 - π∑_j=1^N B_j^δ(x-x_j), s=1/2,
Since this problem is posed on the whole (periodic) interval -1<x<1, we can now use the spectral definition (<ref>) for the fractional Laplacian so that by integrating (<ref>) over the domain we obtain the solvability conditions
𝔞_s^-1𝔟_s∑_j=1^N l_j^1-2sB_j^ = 2, ∑_j=1^N B_j^ = 2/π,
for s≠1/2 and s=1/2 respectively. Provided this condition is satisfied, we can then write v^_0(x) in terms of the periodic fractional Green's function found in Proposition <ref> as
v^_0(x) = χ^ + ^1-2s𝔟_s∑_j=1^N l_j^1-2sB_j^ (-|x-x_j|^2s-1 + R_s(x-x_j)), s≠ 1/2,
∑_j=1^N B_j^( log|x-x_j| + R_1/2(x-x_j)), s=1/2,
where χ^ is an undetermined constant.
The asymptotic analysis has thus far yielded an expression for the outer solution in terms of the N+1 unknown quantities B_1^,...,B_N^ and χ^. The solvability condition (<ref>) yields one equation in these N+1 unknowns. By revisiting the matching condition (<ref>) we obtain the remaining N equations with which all N+1 unknowns can be uniquely determined. Specifically, substituting the asymptotic expansion of (<ref>) as x→ x_i into the left-hand-side of (<ref>) gives the matching condition
^1-2s𝔟_s l_i^1-2sB_i^ R_s(0) + ^1-2s𝔟_s∑_j≠ i l_j^1-2sB_j^ (-|x_i-x_j|^2s-1 + R_s(x_i-x_j) )+ χ^ = B_i^,
when s≠ 1/2, and
B_i^ R_1/2(0) + ∑_j≠ iB_j^(log|x_i-x_j| + R_1/2(x_i-x_j)) + χ^ = B_i^/ν_i^,
when s=1/2 for each i=1,...N. In light of the definitions (<ref>) we can rewrite the solvability and matching conditions in vector notation as
l_s^TB^ = 2𝔞_s/𝔟_s, B^ - ^1-2s𝔟_s 𝒢_sℒ_sB^ = χ^e_N, s≠ 1/2,
e_N^TB^ = 2/π, B^ - 𝒩^𝒢_1/2B^ = χ^ν^, s=1/2.
Left-multiplying the matching condition in the s≠ 1/2 (respectively s=1/2) case by l_s^T (respectively e_N^T) and using the solvability condition yields the expression for χ^ found in (<ref>). Substituting this expression for χ^ back into the matching condition then gives the linear system (<ref>).
We claim that the solution B^ to (<ref>) is O(1) for all s∈(0,1). Indeed, when s<1/2 we readily obtain the expansion
B^ = 2𝔞_s/Nl̅_s𝔟_s∑_q=0^∞^q(1-2s)𝒥_s^q e_N, 𝒥_s := 𝔟_s( ℐ_N - 1/Nl̅_se_Nl_s^T )𝒢_sℒ_s.
Similarly, when s=1/2 we obtain an expansion in powers of ν_1^,...,ν_N^ starting with an O(1) term since ν^ / ν̅^ = O(1). When s>1/2 we must proceed by imposing a solvability condition. Specifically, assuming that 𝒢_s is invertible we find that the kernel of 𝒥_s is one-dimensional and spanned by ξ_s = ℒ_s^-1𝒢_s^-1e_N. Seeking an expansion of the form B^ = B_0 + ^2s-1B_1 + ⋯ and imposing a solvability condition for the B_1 equation yields
B^ = γ_0 ξ_s + O(^2s-1), γ_0 = 2𝔞_s/Nl̅_s 𝔟_sl_s^Te_N/l_s^Tξ_s.
The preceding discussion implies that our asymptotic expansion is consistent with the assumption V_i^(X)≫^2s that we made to neglect the inhomogeneous term on the right-hand-side of (<ref>).
Since B^=O(1) for all 0<s<1, we deduce from (<ref>) that χ^ = O(1) for s≤ 1/2 whereas χ^=O(^1-2s) for 1/2<s<1. Hence (<ref>) implies that to leading order the MFHT in the outer region is spatially constant for s≤ 1/2 whereas it is spatially variable for 1/2<s<1.
If the target configuration is symmetric, in the sense that l_1=...=l_N=l and adjacent targets are equidistant, then ν_1=...=ν_N=ν, the Green's matrix 𝒢_s is circulant, ℒ_s=lℐ_N, and 𝒩^=νℐ_N. The solution to (<ref>) is then explicitly given by B^ = 2𝔞_sNl𝔟_se_N and B^ = 2π N e_N for s≠ 1/2 and s=1/2 respectively. Moreover, it suffices to consider symmetric configurations for only N=1 since the case N>1 can be obtained by a simple spatial rescaling.
§.§ Probability distribution for 0<s<1/2
We now extend the preceding analysis of the MFHT to obtain the full probability distribution of the FHT in the limit →0 for s∈(0,1/2]. The mth moment of the FHT,
v_m(x)
:=𝔼[τ^m | X(0)=x], m∈{1,2,…},
satisfies the following fractional equation which couples to the (m-1) moment,
(-Δ)^s v_m = mv_m-1,
with identical boundary conditions to the first moment and v_1=v. For the m=2 moment, this becomes
(-Δ)^s v_2 = 2v_1.
For s∈(0,1/2], we have shown that v_1(x) is constant in space to leading order, v_1(x)∼μ_s,ε. Dividing (<ref>) by twice this constant implies that w_2:=v_2/(2μ_s,ε) satisfies the same fractional equation as the first moment v_1 to leading order. Hence, w_2∼ v_1 and thus v_2∼ 2(v_1)^2. Continuing this argument yields the leading order behavior of the mth moment,
v_m∼ m!(v_1)^m, m∈{1,2,…},
which implies that τ/μ_s,ε is exponentially distributed with unit mean in the limit ε→0 (since exponential random variables are determined by their moments <cit.>).
§ NUMERICAL SIMULATIONS
In this section we numerically calculate the FHT by solving the fractional differential equation (<ref>) directly, as well as by using Monte-Carlo methods. These numerical calculations will serve the purpose of validating the formal asymptotic calculations of the previous section, with the Monte Carlo simulations also allowing us to investigate the full probability distribution of the FHT. We proceed by first outlining the numerical methods used to solve (<ref>) in Section <ref>. In Section <ref> we outline the methods used in the Monte-Carlo simulations. Finally, in Section <ref> we showcase the results from our numerical computations.
§.§ Solving the MFHT Fractional Differential Equation
To numerically solve (<ref>) we require only a numerical discretization of the periodic fractional Laplacian (-Δ)^s. Our numerical discretization of the periodic fractional Laplacian is based on the finite difference-quadrature approach of Huang and Oberman <cit.>. Fix an integer M>0, let h=2/M, and let
z_n = -1 + hn, n∈ℳ:={0,...,M-1},
be a uniform discretization of the interval -1<x<1. Denote by (-Δ_h)^s the numerical discretization of the periodic fractional Laplacian on -1<x<1. The discrete operator (-Δ_h)^s acts on an arbitrary vector φ = (φ_0,...,φ_M-1)^T according to (see equation (FL_h) in <cit.>)
((-Δ_h)^sφ)_n = ∑_m∈ℳ(φ_n-φ_m)W_n-m, W_σ := w_σ + ∑_k=1^∞(w_σ-kM+w_σ+kM).
where we have used periodicity to simplify the expression, and where each w_m (m∈ℤ) is an appropriately chosen weight. See Appendix <ref> for additional details on our choice of weights, as well as some practical considerations for their computation. Define the set ℐ:={n∈ℳ | z_n∈∪_i=1^N(x_i-ε l_i,x_i+ε l_i)}. The numerical solution to the hitting-time problem (<ref>) is then obtained by finding v = (v_0,...,v_M-1)^T satisfying the linear system
∑_m∈ℳ∖ℐ (v_n-v_m)W_n-m = 1, n∈ℳ∖ℐ,
v_n = 0, n∈ℐ.
In Section <ref> we use M=50,000 points and K=10,000 terms in the evaluation of the weights W_σ (see (<ref>) in Appendix <ref>).
§.§ Monte Carlo
We now describe the stochastic simulation algorithm used to generate FHTs of Lévy flights. Our stochastic simulation algorithm relies on constructing a Lévy fight by subordinating a Brownian motion <cit.>. Specifically, let B={B(u)}_u≥0 be a one-dimensional Brownian motion with unit diffusivity (i.e. scaled so that [(B(u))^2]=2u for all u≥0), and let U={U(t)}_t≥0 be an independent s-stable Lévy subordinator (i.e. it has Laplace exponent Φ(β)=β^s). Then the following random time change of B,
X(t)
:=D_s^1/(2s)B(U(t)) t≥0,
is a Lévy flight with generalized diffusivity D_s>0.
Given a discrete time step Δ t>0, we construct a statistically exact path of the s-stable subordinator {U(t)}_t≥0 on the discrete time grid {t_k}_k∈ℕ with t_k=kΔ t via
U(t_k+1)
=U(t_k)+(Δ t)^1/sΘ_k, k≥0,
where U(t_0)=U(0)=0 and {Θ_k}_k∈ℕ is an iid sequence of realizations of <cit.>
Θ
= sin(s(V+π/2))/(cos(V))^1/s (cos(V-s(V+π/2))/E)^(1-s)/s,
where V is uniformly distributed on (-π/2,π/2) and E is an independent exponential random variable with 𝔼[E]=1.
B(U(t_k+1))
=B(U(t_k))+√(2(Δ t)^1/sΘ_k)ξ_k, k≥0,
where {ξ_k}_k∈ℤ is an iid sequence of standard Gaussian random variables and we impose periodic boundary conditions. Finally, we obtain a statistically exact path of the Lévy flight X={X(t)}_t≥0 in (<ref>) on the discrete time grid {t_k}_k∈ℕ via X(t_k)
=D_s^1/(2s)B(U(t_k)) for k≥0. The FHT τ to the target set Ω_target is then approximated by τ≈ k^*Δ t where k^*
:=min{k≥0:X(t_k)∈Ω_target}.
The Monte Carlo data in the results below is computed from 10^3 independent trials with Δ t=10^-5 and D_s=1.
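A self-contained sketch of this simulation for a single sample path is given below (Python). The single-target configuration, the coarser time step, the finite time horizon cap, and the parameter values are illustrative choices of ours, made to keep the sketch fast:

import numpy as np

def levy_fht(x0, s, eps, D=1.0, dt=1e-4, t_max=50.0, rng=None):
    """One sample of the FHT of a (2s)-stable Levy flight on the periodic interval
    (-1,1) to a single target (-eps, eps), built by subordinating a Brownian motion."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = x0, 0.0
    while t < t_max:
        # increment of the s-stable subordinator (Chambers-Mallows-Stuck sampling)
        V = rng.uniform(-np.pi / 2, np.pi / 2)
        E = rng.exponential(1.0)
        theta = (np.sin(s * (V + np.pi / 2)) / np.cos(V)**(1.0 / s)
                 * (np.cos(V - s * (V + np.pi / 2)) / E)**((1.0 - s) / s))
        dU = dt**(1.0 / s) * theta
        # Brownian increment over the random time dU, then wrap periodically
        x += D**(1.0 / (2.0 * s)) * np.sqrt(2.0 * dU) * rng.standard_normal()
        x = (x + 1.0) % 2.0 - 1.0
        t += dt
        if abs(x) < eps:
            return t
    return np.nan            # no hit within t_max (kept finite for this sketch)

rng = np.random.default_rng(1)
samples = [levy_fht(0.5, s=0.75, eps=0.05, rng=rng) for _ in range(200)]
print(np.nanmean(samples))   # crude Monte Carlo estimate of the MFHT from x = 0.5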
§.§ Results
To validate our asymptotic analysis we compare our asymptotic approximations for the MFHT with full numerical simulations using the methods outlined in Sections <ref> and <ref>. We present this comparison for two types of configurations. The first, which we refer to as the symmetric one-target configuration, consists of a single target with x_1=0 and l_1=1. The second, which we refer to as the asymmetric three-target configuration, consists of N=3 targets centred at x_1=-0.6, x_2=0.4, and x_3=0.75 with l_1=1, l_2=1.25, and l_3=1.5.
In Figures <ref> and <ref> we plot the MFHT for the symmetric one-target and asymmetric three-target configurations respectively. Specifically, each figure compares the solution obtained by solving (<ref>) numerically (solid curves), the solution obtained using the asymptotic approximation (<ref>) (dashed curves), as well as the values of the MFHT starting from specific values of x∈(-1,1) obtained from Monte Carlo simulations (hollow squares). In each case we observe excellent agreement between the asymptotic and numerical solutions even for moderately sized values of ε>0. In addition to validating our asymptotic approximations, the plots in Figures <ref> and <ref> also showcase the qualitative properties of the MFHT predicted by our asymptotic analysis. Specifically, they illustrate a strong ε-dependence when s<1/2 in contrast to when s>1/2 which supports the scaling v=O(ε^2s-1) for s<1/2 and v=O(1) for s>1/2. Moreover, we observe that for sufficiently small values of ε>0, the MFHT in the outer region is approximately spatially constant when s<1/2 whereas it is spatially variable when s>1/2. Although the leading order asymptotics predict a spatially constant solution for s=1/2, this is difficult to see numerically since the first order correction is O(1/log).
An additional quantity of interest is the MFHT averaged over uniformly distributed initial points x∈(-1,1), i.e.
v := 1/2∫_-1^1v(x)dx.
In Figure <ref> we plot this averaged MFHT versus ε>0 for different values of 0<s<1 for both the symmetric one-target and asymmetric three-target configurations. In each plot the solid curve corresponds to the asymptotically computed solution which, in light of the vanishing integral constraint in (<ref>), is equal to χ^ε given by (<ref>). The solid dots correspond to values obtained by numerically integrating the numerical solution to (<ref>), whereas the hollow squares are results from Monte Carlo simulations. These plots shows good agreement between the asymptotic approximation and numerical simulations.
Finally, in Figure <ref>, we compare (i) the full probability distribution of the FHT τ computed from stochastic simulations to (ii) the exponential distribution implied by the analysis in section <ref>. This plot is for the symmetric one-target configuration in Figure <ref> with s=0.3. The convergence to an exponential distribution is apparent as ε decreases from ε=0.05 in the left panel down to ε=0.005 in the right panel.
§ OPTIMAL RANDOM SEARCH
We now investigate the value of the fractional order s∈(0,1] which minimizes the averaged MFHT. By averaging over a uniformly distributed initial position, considering the case N=1, neglecting the highest order terms from our asymptotic expansion, and reversing the nondimensionalization in (<ref>), we arrive at the following dimensional measure of the search time,
T_s :=
(l^2s/D_s)(ε^(2s-1)·2𝔞_s/𝔟_s - 2𝔞_s R_s(0)) if s≠1/2,
(l^2s/D_s)((2/π)log(2/ε) - 2R_1/2(0)/π) if s=1/2,
for s∈(0,1).
That is, T_s is the averaged MFHT over uniformly distributed initial positions of a one-dimensional, (2s)-stable Lévy flight with generalized diffusivity D_s>0, and an infinite periodic array of targets with separation distance 2l>0, where each target has radius ε l with 0<ε≪1.
To study how T_s depends on s∈(0,1], we must choose how the generalized diffusivity D_s depends on s (since it has dimension [D_s]=(length)^2s/(time)). We follow <cit.> and introduce a lengthscale l_0>0 (independent of s) and suppose
D_s=(l_0)^2s/t_0
for some timescale t_0. Such a lengthscale l_0>0 arises naturally in the continuous-time random walk derivation of a Lévy flight (see (<ref>)-(<ref>) in Section <ref> and <cit.> for more details). Normalizing T_s by the Brownian search time T_1:=(l^2/D_1)(1-ε)^2/3 yields the following ratio for s∈(0,1),
ρ(s) := T_s/T_1
= (l_0/l)^2(1-s)/((1-ε)^2/3) × (ε^(2s-1)·2𝔞_s/𝔟_s - 2𝔞_s R_s(0)) if s≠1/2,
= (l_0/l)^2(1-s)/((1-ε)^2/3) × ((2/π)log(2/ε) - 2R_1/2(0)/π) if s=1/2.
Hence, ρ(s)<1 (respectively ρ(s)>1) means that the Lévy search is faster (respectively slower) than Brownian search.
In the left panel of Figure <ref>, we plot ρ(s) as a function of s∈(0,1) for different values of l_0/l. Notice that l_0/l≪1 describes sparse targets and l_0/l≪̸1 describes dense targets (where “sparse” and “dense” are relative to the lengthscale l_0). This plot shows that Lévy search is faster than Brownian search for sparse targets, whereas Brownian search is faster than Lévy search for dense targets.
In the right panel of Figure <ref>, we plot the “optimal” value of s∈(0,1] which minimizes the search time,
s_opt := arg min_s∈(0,1] ρ(s),
as a function of the target density l_0/l for fixed values of ε. This plot shows that s_opt varies continuously from s_opt≈0 for sparse targets up to s_opt≈1 (i.e. Brownian search) as the target density increases. Hence, the value s=1/2 (which corresponds to stability index α=2s=1, i.e. so-called inverse square Lévy search) is not distinguished from other values of s∈(0,1] in the sense that s_opt=1/2 for only a single value of the target density l_0/l for each ε>0. On the other hand, we do find that s_opt→1/2 if we first take the limit ε→0 and then take l_0/l→0. To see this, note first that we must have lim_ε→0s_opt>1/2 since (<ref>) implies lim_ε→0ρ(s)=+∞ if s≤1/2. Next, (<ref>) implies
lim_ε→0ρ(s)=-((l_0/l)^2(1-s)/3)(2𝔞_sR_s(0))>0 if s>1/2,
and therefore lim_l_0/l→0lim_ε→0s_opt=1/2.
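In practice, s_opt can be located by a simple grid scan of ρ(s); a short sketch is given below. The constants 𝔞_s, 𝔟_s and the regular part R_s(0) of the Green's function are defined earlier in the paper and are passed in here as user-supplied callables, which is an assumption of this sketch; by definition ρ(1)=1 corresponds to Brownian search.

import numpy as np

def rho(s, eps, l0_over_l, a_s, b_s, R0):
    """Ratio rho(s) = T_s/T_1 with a_s(s), b_s(s), R0(s) = R_s(0) supplied
    by the caller."""
    pref = l0_over_l ** (2.0 * (1.0 - s)) / ((1.0 - eps) ** 2 / 3.0)
    if abs(s - 0.5) < 1e-12:
        bracket = np.log(2.0 / eps) * 2.0 / np.pi - 2.0 * R0(s) / np.pi
    else:
        bracket = eps ** (2.0 * s - 1.0) * 2.0 * a_s(s) / b_s(s) - 2.0 * a_s(s) * R0(s)
    return pref * bracket

def s_opt(eps, l0_over_l, a_s, b_s, R0, grid=np.linspace(0.01, 0.999, 500)):
    """Grid-scan approximation of the minimizer of rho over s in (0, 1]."""
    values = np.array([rho(s, eps, l0_over_l, a_s, b_s, R0) for s in grid])
    k = int(np.argmin(values))
    return (grid[k], values[k]) if values[k] < 1.0 else (1.0, 1.0)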
§ DISCUSSION
In this paper we calculated an asymptotic approximation for the MFHT to a small target in a periodic one-dimensional domain. Our asymptotic approximation is summarized in Principal Result 1 and reduces the calculation of the MFHT to that of solving the linear system (<ref>), thereby providing a fast method for approximating the MFHT when the target size is small. In the special case of a symmetric configuration it suffices to consider the case of a single target, for which the system (<ref>) can be solved explicitly (see (<ref>)–(<ref>) in Section <ref>). Furthermore, we validated our asymptotics by comparing them to numerical computations of the MFHT obtained by solving the fractional differential equation (<ref>) directly and by using stochastic simulations.
The asymptotic analysis leading to Principal Result 1 is analogous to that used in two- and three-dimensional narrow capture/escape problems involving pure diffusion <cit.>. This analogy was previously identified in <cit.> and is a result of the singular behaviour of the fractional free-space Green's function which is logarithmic when s=1/2 and algebraic when s<1/2, mirroring that of the classical free-space Green's function in two- and three-dimensions respectively. A novel aspect of the asymptotic analysis presented in this paper is the recognition of a fractional counterpart to the classical electrified disk problem. This fractional differential equation was solved by using a fractional Kelvin transform and fractional Poisson kernel for s≠ 1/2, and by considering a two-dimensional extended problem solvable by complex analysis methods for s=1/2. In addition, we determined that when s≤ 1/2 the MFHT is spatially constant to leading order, with this observation further allowing us to conclude that the FHT is exponentially distributed when s≤ 1/2.
The present study joins many prior works which use Lévy flights as simple theoretical models to investigate optimal search strategies.
Prior works often choose one-dimensional spatial domains due to their analytical tractability and as models for search in effectively one-dimensional domains such as streams, along coastlines, at forest-meadows, and other borders <cit.>. The very interesting work of Palyulin, Chechkin, and Metzler <cit.> is perhaps most closely related to our present study. In <cit.>, the authors consider a one-dimensional, possibly biased Lévy flight on the entire real line with a single point-like target. A major result of <cit.> is that despite the frequent claim that Lévy flights with s=1/2 are most efficient for sparse targets, the optimal value of s may range the entire interval between s=1/2 and s=1 and thus include Brownian search (the assumption of a point-like target in <cit.> meant that these authors did not consider s<1/2). Indeed, as the authors of <cit.> state, “the main message from this study is that Lévy flight search and its optimization is sensitive to the exact conditions” and “our results show clear limitations for the universality of Lévy flight foraging” <cit.>. Our results agree with these main points, as the optimal value of s in our study spans the entire interval (0,1] as the target density l_0/l increases from l_0/l≤ up to l_0/l≈1 (see Figure <ref>).
§ ADDITIONAL CONSIDERATIONS FOR THE NUMERICAL DISCRETIZATION OF THE PERIODIC FRACTIONAL LAPLACIAN
To numerically implement (<ref>) we choose weights w_n (n∈ℤ) that are based on linear interpolants. Specifically, we define (see Section 3.1 of <cit.>)
F(t):= C_s/(2s(2s-1)) |t|^(1-2s), s≠ 1/2,
F(t):= -C_s log|t|, s=1/2,
where C_s is given by (<ref>) and in terms of which the weights are given by
w_n := 1/h^2s ×(C_s/(2-2s)-F'(1)+F(2)-F(1)), |n|=1,
w_n := 1/h^2s ×(F(n+1)-2F(n)+F(n-1)), |n|≥ 2.
We use the explicit form of the weights to numerically speed up the evaluation of the infinite sums appearing in the definition of W_σ in (<ref>). For sufficiently large n∈ℤ we have
w_n = C_s/h^2s|n|^1+2s(1 + O(1/n^4) ),
so that for any fixed σ∈ℤ and any sufficiently large integer k≥ 1 we have
w_σ-kM + w_σ+kM = C_s/(2^2s-1Mk^1+2s)(1 + O((σ/(kM))^2)).
Choosing a sufficiently large integer K≥ 1 we obtain
W_σ = w_σ + ∑_k=1^K (w_σ-kM+w_σ+kM) + C_s/(2^2s-1M) ζ(1+2s,K+1) + O(σ^2/(M^3K^2+2s)),
where ζ(z,q) := ∑_n=0^∞ (n+q)^-z is the Hurwitz zeta function which can be quickly computed by standard numerical libraries. This formula for the weights W_σ (σ∈ℤ) provides a good approximation for W_σ for moderately sized K thereby reducing computational costs.
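The formulas above translate directly into a short routine; a sketch is given below. The constant C_s (given earlier in the paper) and the mesh width h are taken as inputs, the overall factor h^-2s is read as applying to both cases of w_n, the weight w_0 is assumed to be fixed elsewhere in the scheme, and scipy.special.zeta(x, q) supplies the Hurwitz zeta function.

import numpy as np
from scipy.special import zeta

def F(t, s, C_s):
    """Kernel F(t) from which the weights are built."""
    if s == 0.5:
        return -C_s * np.log(abs(t))
    return C_s / (2.0 * s * (2.0 * s - 1.0)) * abs(t) ** (1.0 - 2.0 * s)

def Fprime1(s, C_s):
    """F'(1): equals -C_s/(2s) for s != 1/2 and -C_s for s = 1/2."""
    return -C_s if s == 0.5 else -C_s / (2.0 * s)

def w(n, s, C_s, h):
    """Weights w_n for |n| >= 1."""
    n = abs(n)
    if n == 1:
        inner = C_s / (2.0 - 2.0 * s) - Fprime1(s, C_s) + F(2, s, C_s) - F(1, s, C_s)
    else:
        inner = F(n + 1, s, C_s) - 2.0 * F(n, s, C_s) + F(n - 1, s, C_s)
    return inner / h ** (2.0 * s)

def W(sigma, s, C_s, h, M, K=50):
    """Periodized weight W_sigma for sigma != 0, with the k > K tail summed
    through the Hurwitz zeta function zeta(1 + 2s, K + 1)."""
    head = w(sigma, s, C_s, h)
    head += sum(w(sigma - k * M, s, C_s, h) + w(sigma + k * M, s, C_s, h)
                for k in range(1, K + 1))
    tail = C_s / (2.0 ** (2.0 * s - 1.0) * M) * zeta(1.0 + 2.0 * s, K + 1)
    return head + tail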
|
http://arxiv.org/abs/2307.04382v1 | 20230710072704 | Experimental verification of bound and multiparticle entanglement with the randomized measurement toolbox | [
"Chao Zhang",
"Yuan-Yuan Zhao",
"Nikolai Wyderka",
"Satoya Imai",
"Andreas Ketterer",
"Ning-Ning Wang",
"Kai Xu",
"Keren Li",
"Bi-Heng Liu",
"Yun-Feng Huang",
"Chuan-Feng Li",
"Guang-Can Guo",
"Otfried Gühne"
] | quant-ph | [
"quant-ph"
] |
These authors contributed equally to this paper.
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
These authors contributed equally to this paper.
Peng Cheng Laboratory, Shenzhen 518055, China
Institut für Theoretische Physik III, Heinrich-Heine-Universität Düsseldorf, Universitätsstr. 1, 40225 Düsseldorf, Germany
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany
Fraunhofer Institute for Applied Solid State Physics IAF, Tullastr. 72, 79108 Freiburg, Germany
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
Peng Cheng Laboratory, Shenzhen 518055, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China
CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
[email protected]
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str. 3, 57068 Siegen, Germany
In recent years, analysis methods for quantum states based on randomized measurements have been investigated extensively. Still, in the experimental implementations these methods were typically used for characterizing strongly entangled states and not to analyze the different families of multiparticle or weakly entangled states. In this work, we experimentally prepare various entangled states with path-polarization hyper-entangled photon pairs, and study their entanglement properties using the full toolbox of randomized measurements. First, we successfully characterize the correlations of a series of GHZ-W mixed states using the second moments of the random outcomes, and demonstrate the advantages of this method by comparing it with the well-known three-tangle and squared concurrence. Second, we generate bound entangled chessboard states of two three-dimensional systems and verify their weak entanglement with a criterion derived from moments of randomized measurements.
Experimental verification of bound and multiparticle entanglement with the randomized measurement toolbox
Otfried Gühne
August 12, 2023
==========================================================================================================
§ INTRODUCTION
Quantum entanglement is one of the most prominent non-classical features of
quantum mechanics and is often viewed as a resource in quantum information processing <cit.>. Its generation and characterization are of growing interest from both practical and fundamental perspectives. While deciding whether a given quantum state is entangled or not is in general a hard task <cit.>, many experimentally feasible schemes exist that verify entanglement in some states.
A prominent example of such schemes are entanglement witnesses, which allow for rather simple detection of entanglement using few measurements, whereas other schemes detect non-locality by evaluating some Bell-type inequalities <cit.>. On the experimental side, numerous entangled states have been generated, and multi-qubit entanglement <cit.>, high-dimensional entanglement of two particles <cit.>, and also bound entanglement <cit.> have been characterized.
When applying the standard criteria in a practical experiment, however, one always needs to align the local measurement settings strictly or to make some assumptions
on the target state to prepare, e.g., by tailoring a witness specifically for
states close to some fixed target state. To remedy this, several schemes based
on the moments of randomized correlations have been proposed
<cit.>. They provide an efficient way to characterize multi-particle correlations in states without prior knowledge about the state, nor any alignment of measurement directions. Recently, it has been shown that this approach also allows for the detection of bound entanglement <cit.>.
In this paper, we implement in a photonic setup the randomized measurement scheme to detect entanglement in mixtures of three-qubit GHZ and W-states using second moments of the random outcomes. Furthermore, we prepare bound entangled chessboard states of two qutrits and show their entanglement by evaluating an
entanglement criterion which is based on the second and fourth
moment of a randomized measurement outcome, without implementing the
random unitaries explicitly. This demonstrates that the criterion
from Ref. <cit.> is indeed strong enough to capture this weak form of entanglement, even in the presence of noise and experimental imperfections. Our implementation combines the photon's polarization and path degrees of freedom to generate precisely controlled high-dimensional states and demonstrates the versatility and efficiency of the randomized measurement approach.
§ THEORY
In the randomized measurement scheme <cit.>, a subset S⊂{1,…,n} of the parties of an n-partite quantum state ρ of fixed local dimension d is measuring some fixed, local observables in random directions. The moments of the distribution of measurement results can be written as
ℛ_S^(t) = ∫dU_1 …dU_n ⟨ U_1 τ_1 U_1^†⊗…⊗ U_n τ_n U_n^†⟩_ρ^t,
where the τ_i denote the local observables, and τ_i = 𝟙 whenever i∉ S.
The integrals are evaluated over the Haar measure of the unitary group 𝒰(d).
In case of qubit systems, one usually chooses τ_i = σ_z for i∈ S, in which case the second moments (t=2) are related to the purities of the reduced states of ρ. The sum of second moments for all subsets S of size | S| = k is proportional to what is known as the k-sector length of the state <cit.>. In particular, for three qubits the sector lengths A_k are given by
A_1 =3(ℛ_A^(2)+ℛ_B^(2)+ℛ_C^(2)),
A_2 =9(ℛ_AB^(2)+ℛ_AC^(2)+ℛ_BC^(2)),
A_3 =27ℛ_ABC^(2).
Decomposing ρ in terms of the local Pauli basis {σ_0 = , σ_1 = σ_x, σ_2 = σ_y, σ_3 = σ_z}, yields
ρ_ABC=1/8∑_i,j,k=0^3 α_ijkσ_i⊗σ_j⊗σ_k
and allows to express the sector lengths in terms of the coefficients α_ijk as follows:
A_1 = ∑_i=1^3 (α_i00^2 + perm.),
A_2 = ∑_i,j=1^3 (α_ij0^2 + perm.), and
A_3 = ∑_i,j,k=1^3 α_ijk^2.
In terms of the sector lengths, several entanglement criteria exist that detect certain entangled states. To proceed, let us recall that a three-particle state ρ_ABC is called biseparable for a partition A|BC if
ρ_A|BC
= ∑_k q_k^A ρ_k^A ⊗ρ_k^BC,
where the positive coefficients q_k^A form a probability distribution.
Similarly, the biseparable states ρ_B|CA and ρ_C|AB can be defined.
Moreover, we can consider the mixture of biseparable states for all partitions as
ρ_bisep
= p_A ρ_A|BC + p_B ρ_B|CA + p_C ρ_C|AB,
where p_A, p_B, p_C are probabilities.
A quantum state is called genuinely multipartite entangled (GME) if it cannot be written in the form of ρ_bisep.
For three-qubit states, if A_3>3, the state must be GME (the maximal value being A_3=4 for the GHZ state |GHZ⟩=1/√(2)(|000⟩+|111⟩)).
A stronger version exists, which states that if
A_2 + A_3 > 3(1+A_1),
the state cannot be biseparable w.r.t. any fixed partition, and strong numerical evidence exists that in that case, even GME states must be present <cit.>.
In this paper, we aim to detect entanglement in a mixture of a GHZ and a W state, given by
ρ(g) = g|GHZ⟩⟨GHZ|+(1-g)|W⟩⟨W|,
where g∈[0,1] denotes the amount of mixing and |W⟩ = 1/√(3)(|001⟩ + |010⟩ + |100⟩).
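As a purely numerical illustration of the criterion above, the following sketch builds ρ(g), computes the Pauli coefficients α_ijk, and evaluates the sector lengths A_1, A_2, A_3; it is a plain density-matrix calculation with NumPy and does not model the randomized-measurement estimation itself.

import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = [I2, X, Y, Z]

def sector_lengths(rho):
    """Sector lengths A_1, A_2, A_3 of a three-qubit state, computed from the
    Pauli coefficients alpha_ijk = Tr(rho sigma_i x sigma_j x sigma_k)."""
    A = [0.0, 0.0, 0.0]
    for i, j, k in product(range(4), repeat=3):
        weight = sum(idx > 0 for idx in (i, j, k))  # number of non-identity factors
        if weight == 0:
            continue
        op = np.kron(np.kron(PAULI[i], PAULI[j]), PAULI[k])
        alpha = np.real(np.trace(rho @ op))
        A[weight - 1] += alpha ** 2
    return A

def rho_ghz_w(g):
    """Mixture g |GHZ><GHZ| + (1 - g) |W><W| as an 8 x 8 density matrix."""
    ghz = np.zeros(8, dtype=complex)
    ghz[0] = ghz[7] = 1 / np.sqrt(2)
    w = np.zeros(8, dtype=complex)
    w[[1, 2, 4]] = 1 / np.sqrt(3)                # |001>, |010>, |100>
    return g * np.outer(ghz, ghz.conj()) + (1 - g) * np.outer(w, w.conj())

A1, A2, A3 = sector_lengths(rho_ghz_w(0.8))
detected = A2 + A3 > 3 * (1 + A1)                # criterion discussed above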
The family of states ρ(g) exhibits some interesting properties. First, it is supported in the symmetric subspace. This implies that
F_XYρ(g) = ρ(g)F_XY = ρ(g),
where
F_XY = ∑_i,j|ij⟩⟨ji|_XY
is the flip (swap) operator acting on the subsystems XY∈{AB, BC, CA}.
It is known that if a state lives in
the symmetric subspace, it is either fully separable or GME
<cit.>.
However, the experimentally generated version of the state ρ(g) cannot be assumed to have this symmetry due to experimental imperfections. Accordingly, the generated state can become biseparable; thus, we employ the criterion in Eq. (<ref>) to detect its entanglement.
We stress again that the criterion in Eq. (<ref>) has been conjectured to imply the presence of GME from numerical evidence, but its analytical proof has not yet been provided <cit.>.
That is, even if the criterion Eq. (<ref>) is verified experimentally, the state may be entangled for any fixed partition, but it can be a mixture of at least three biseparable states for different bipartitions.
Second, when the parameter g is outside the region of 0.297 ≤ g ≤ 0.612, the criterion in Eq. (<ref>) is satisfied.
This parameter region is very close to other well-known regions using two other entanglement measures <cit.>.
On the one hand, the three-tangle τ vanishes for 0≤ g≤ g_τ≈ 0.627, where τ measures residual (three-partite) entanglement that cannot be expressed as two-body entanglement <cit.>.
Note that the GHZ state maximizes the three-tangle, while it vanishes for the W state.
On the other hand, the sum of squared concurrences C_A|B^2 + C_A|C^2 vanishes for g_C ≈ 0.292 …≤ g≤ 1,
where the concurrence C_X|Y measures bipartite entanglement in the reduced state between the parties X and Y <cit.>.
Hence, we can conclude that the criterion in Eq. (<ref>) can detect the multi-partite entanglement of ρ(g) even in regions where the three-tangle and the concurrence vanish, if the parameter g satisfies
g_C ≤ g < 0.297 or
0.612 < g ≤ g_τ.
In contrast to qubit systems, the second moments of higher-dimensional states are not automatically related to sector lengths. In fact, the choice of the local observables influences which local unitary invariants can be extracted from the moments <cit.>. Let us expand a bipartite quantum state of dimension d in terms of some local, hermitian operator basis {λ_i}_i=0^d^2-1 with λ_0 = 𝟙, tr(λ_iλ_j) = dδ_ij, such as the Gell-Mann basis <cit.>. Then
ρ = 1/d^2[𝟙⊗𝟙 + ∑_i=1^d^2-1 (α_i λ_i ⊗𝟙 + β_i 𝟙⊗λ_i) + ∑_i,j=1^d^2-1T_ijλ_i ⊗λ_j]
is called the generalized Bloch decomposition of ρ, where the matrix T is known as the correlation matrix of ρ. For this matrix, many entanglement criteria exist, most notably the de Vicente criterion <cit.>, stating that for separable states, (| T|) ≤ d-1. While the left-hand side is not directly accessible from the moments of randomized measurements, it is possible to obtain related quantities by carefully choosing the observables τ_i as detailed in Ref. <cit.>, such that
ℛ^(2)_AB=tr(TT^†)/(d-1)^2
ℛ^(4)_AB=[1/3tr(TT^†)/(d-1)^2+2/3tr(TT^† TT^†)]/(d-1)^4.
For example, for d=3, τ_i = diag(√(3/2), 0, -√(3/2)).
The combined knowledge of these two quantities allows to detect entanglement, whenever it is incompatible with the de Vicente criterion, i.e., if the measured value of ℛ^(4)_AB is below the minimum given by
min ℛ^(4)_AB
s.t. ℛ^(2)_AB = measured, tr(|T|)≤ d-1.
Note that this lower bound can also be calculated analytically <cit.>.
Interestingly, there exist states which have a positive partial transpose, but can be detected as entangled by these two moments, implying bound entanglement. A 3× 3-dimensional state from the chessboard family of bound entangled states described in Ref. <cit.> (also see Appendix C2 in <cit.>) has been identified as violating this bound particularly strongly, which makes it a good candidate for preparing the state and detecting its entanglement experimentally. It is given by
ρ_ch=N∑_i=1^4|V_i⟩⟨V_i|,
where N=1/∑_i⟨V_i|V_i⟩^2=1/4
is a normalization factor and
|V_1⟩=1/√(6)(|0⟩+2|2⟩)|0⟩+1/√(6)|11⟩,
|V_2⟩=1/√(6)(-|0⟩+2|2⟩)|1⟩+1/√(6)|10⟩,
|V_3⟩=1/√(6)|0⟩(-|0⟩+2|2⟩)+1/√(6)|11⟩,
|V_4⟩=1/√(6)|1⟩(|0⟩+2|2⟩)+1/√(6)|01⟩.
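For reference, a short numerical sketch of this construction is given below: it assembles ρ_ch from the four (unit-norm) vectors, checks that the partial transpose has no negative eigenvalues, and evaluates ℛ^(2)_AB = tr(TT^†)/(d-1)^2 from the correlation matrix in a Gell-Mann-type basis normalized such that tr(λ_iλ_j)=dδ_ij. The particular basis construction is a standard choice rather than taken from the paper, and the fourth moment is omitted here.

import numpy as np

def ket(a, b):
    """Computational basis vector |a b> of the two-qutrit (9-dimensional) space."""
    v = np.zeros(9)
    v[3 * a + b] = 1.0
    return v

def chessboard_state():
    """rho_ch = (1/4) sum_i |V_i><V_i| with the vectors V_i defined above."""
    V1 = (ket(0, 0) + 2 * ket(2, 0) + ket(1, 1)) / np.sqrt(6)
    V2 = (-ket(0, 1) + 2 * ket(2, 1) + ket(1, 0)) / np.sqrt(6)
    V3 = (-ket(0, 0) + 2 * ket(0, 2) + ket(1, 1)) / np.sqrt(6)
    V4 = (ket(1, 0) + 2 * ket(1, 2) + ket(0, 1)) / np.sqrt(6)
    return sum(np.outer(v, v) for v in (V1, V2, V3, V4)) / 4.0

def min_pt_eigenvalue(rho, d=3):
    """Smallest eigenvalue of the partial transpose over the second subsystem."""
    pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
    return np.linalg.eigvalsh(pt).min()

def gellmann(d=3):
    """Traceless Hermitian basis with tr(g_i g_j) = d * delta_ij."""
    mats = []
    for a in range(d):
        for b in range(a + 1, d):
            sym = np.zeros((d, d), complex); sym[a, b] = sym[b, a] = 1.0
            asym = np.zeros((d, d), complex); asym[a, b] = -1j; asym[b, a] = 1j
            mats += [sym, asym]
    for l in range(1, d):
        diag = np.zeros(d); diag[:l] = 1.0; diag[l] = -l
        mats.append(np.diag(diag) * np.sqrt(2.0 / (l * (l + 1))))
    return [m * np.sqrt(d / 2.0) for m in mats]   # rescale tr(g^2) from 2 to d

def second_moment(rho, d=3):
    """R^(2)_AB = tr(T T^dagger)/(d-1)^2 with T_ij = tr(rho g_i x g_j)."""
    basis = gellmann(d)
    T = np.array([[np.real(np.trace(rho @ np.kron(gi, gj))) for gj in basis]
                  for gi in basis])
    return np.trace(T @ T.T) / (d - 1) ** 2

rho = chessboard_state()
print(min_pt_eigenvalue(rho))   # non-negative up to rounding: rho_ch is PPT
print(second_moment(rho))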
§ EXPERIMENTAL SETUP
We proceed with a description of the experimental implementation. The GHZ-W mixed states are prepared by resorting to the states entangled in polarization degree of freedom (d.o.f.) and path d.o.f. of the photon (that is, hyper-entangled) and with methods similar to the ones in Refs. <cit.>. More detailed information about the state preparation of this family of states is given in Appendix A.
When preparing the bound entangled chessboard state, it is important to ensure that all its eigenvalues remain non-negative under partial transposition. However, the chessboard state is not of full rank. Affected by the imperfections of
the experiment, slightly negative eigenvalues of the partial transposition are likely to appear. A more robust way is to prepare
the state with a level of white noise <cit.>,
ρ_ch(p)=(1-p)ρ_ch+p 𝕀/9.
First, let us briefly review the state preparation procedure.
As depicted in Fig. <ref>, we generate polarization
entangled (2×2 entangled) photon pairs through a spontaneous
parametric down-conversion (SPDC) process. Subsequently, we expand the dimensionality of the system by introducing the path modes u and l. This results in three modes: H_u, V_u, and V_l, where H_u represents a horizontally polarized photon occupying path u, and so on. Finally, specific operations are applied to the system to steer the state to the target ones.
Specifically, a Half-Wave Plate (HWP) H1 with the optic axis placed at 12.05^∘ is used to rotate a 390 nm horizontally polarized pump laser (with an 80 MHz repetition rate and a 140-fs pulse duration) to state |ψ_p⟩=√(5/6)|H⟩+√(1/6)|V⟩, where H and V represent the horizontal and the vertical polarization, respectively. The pump photon is then split into two photons after pumping two crossed-axis type-I β-Barium Borate (BBO) crystals in the SPDC process, transforming the state into |ψ_p⟩→√(5/6)|HH⟩+√(1/6)|VV⟩. By passing through the Beam Displacers (BDs) BD1 and BD2, the down-converted photons' H-(V-) components are directed to path u (l). And for path mode u, we have the mode labeled as H_u and V_u. By re-encoding |H⟩_u→|0⟩, |V⟩_l→|1⟩, and |V⟩_u→|2⟩, we obtain the hyper-entangled state |ψ_s⟩=√(5/6)|H_uH_u⟩+√(1/6)|V_lV_l⟩→√(5/6)|00⟩+√(1/6)|11⟩.
It is worth noting that all the four states |V_i⟩ in Eq. (<ref>) can be generated by performing local
operations on the state |ψ_s⟩,
|V_1⟩=U_2⊗𝕀|ψ_s⟩,
|V_2⟩=U_3⊗ U_1|ψ_s⟩,
|V_3⟩=𝕀⊗ U_3|ψ_s⟩, |V_4⟩=U_1⊗ U_2|ψ_s⟩,
where
U_1= (
0 1 0
1 0 0
0 0 1
),
U_2= (√(1/5) 0 √(4/5)
0 1 0
√(4/5) 0 -√(1/5) ),
U_3= (
-√(1/5) 0 √(4/5)
0 1 0
√(4/5) 0 √(1/5) ).
For the states |V_3⟩ and |V_4⟩, it also works by applying the unitary U_3⊗𝕀, and U_2⊗ U_1, respectively, and then exchanging the labels for the two detectors D1 and D2. Therefore, through performing the operator U_3 or U_2 on one photon of a pair and the operator U_1 or 𝕀 on the other photon simultaneously, the state |ψ_s⟩ will be transformed to each of the four states |V_i⟩. The switches between these operators are implemented by the motorized rotating HWPs and Quarter-Wave Plates (QWPs), which are controlled by the pseudo-random numbers generated from a classical computer. Two adjustable LED lights are placed before the detectors to introduce the different levels of white noise into the system.
In the measurement part, a QWP and an HWP located at path u are used to analyze the correlations between basis elements |0⟩ and |2⟩, and now the afterward BD works as a Polarization Beam Splitter (PBS). When measuring the superposition of basis elements |0⟩ and |1⟩, as well as |2⟩ and |1⟩, we first convert the path d.o.f. to the polarization d.o.f. via the wave plates and BDs, and then analyze with the combination of the QWP and the HWP. Detailed settings of the wave plates for standard quantum state tomography are given in Tab. <ref> of Appendix B. For each measurement basis, we randomly change the photon states to every one of the four states |V_i⟩. The two-photon coincidence counts are recorded per 10 s.
When it comes to measuring the randomized correlations, as elaborated in the theoretical framework, two distinct approaches are considered. The first one involves conducting local randomized measurements, while the second entails the direct application of Pauli operators or Gell-Mann matrices. In this study, we thoroughly examine and contrast these two methodologies for three-qubit states, utilizing a LabVIEW program to facilitate the automation of numerous measurements. Further details regarding the randomized measurement techniques can be found in the Appendix C. For the bound entangled states, we opt to directly measure the 81 combinations of Gell-Mann matrices to avoid the systematic errors that may emerge from the construction of 3× 3 random unitaries.
§ RESULTS
§.§ Results for the GHZ-W mixed states
In our experiment, a set of GHZ-W mixed states ρ(g) with step size 0.05 is prepared. For each state, 4000 measurements in randomized directions are performed, and for each measurement, about 5300 copies of the state are detected.
The entanglement criterion of Eq. (<ref>) is calculated from the randomized measurement data with the error bars obtained by repeating the whole process ten times. From the results in Fig. <ref>(a), we see that for 0≤ g≤ 0.2 and 0.7≤ g≤ 1, the criterion in Eq. (<ref>) is violated, while the criterion A_3-3≤ 0 is not. Clearly, Eq. (<ref>) improves on the previous one.
Note that the sector length A_k can also be expressed in terms
of the coefficients α_ijk, and then compared with
the randomized measurements. Resorting to the standard
quantum state tomography process, we obtain the density matrix of the GHZ state ρ_GHZ^exp and W state ρ_W^exp, respectively.
The values of the criterion of Eq. (<ref>) are calculated from the state ρ(g)=gρ_GHZ^exp+(1-g)ρ_W^exp and plotted as the dashed red lines in Fig. <ref>(a) and (b).
In contrast, for the ideal states, we have (A_1, A_2, A_3)=((1-g)^2/3, 8g^2-8g+3, 4g^2+11(1-g)^2/3), and the theoretical values of the criteria are shown as the solid red lines in Fig. <ref>.
We see that the results deduced from randomized measurements and from the coefficients α_ijk are approximately identical, providing evidence for the correct implementation of the randomized measurements. In the region 0.08≤ g≤ 0.24 and 0.67≤ g≤ 0.88, where the criterion A_3-3≤ 0 fails, we detect genuinely multi-partite entanglement. Furthermore, from
Fig. <ref>(b), we see that our criterion still works for g≤ 0.24 in the violet color region where the states have no three-tangle and also for g≥0.67 in the light salmon region where they exhibit no squared concurrence.
§.§ Results for the chessboard state
The experimentally prepared chessboard state ρ_ch^exp is reconstructed using the maximum-likelihood algorithm. Due to imperfections, when no white noise is added, the minimal eigenvalue of the partially transposed (PT) density matrix is -0.0133, such that the state is not PPT and probably not bound entangled. To remove these negative eigenvalues, we introduce different levels of white noise between p=0 and p=0.22 in the experiment, and plot the minimum PT eigenvalue and the violation of the entanglement criterion in Eq. (<ref>) in Fig. <ref>. In particular, for the state with noise level p=0.1291, the minimum PT eigenvalue equals 0.0026±0.0009 and the fidelity between the experimentally prepared state ρ_ch^exp and the noisy chessboard state ρ_ch(p=0.1291) is given by F(ρ_ch,ρ_ch^exp)=tr(√(√(ρ_ch)ρ_ch^exp√(ρ_ch)))=0.9893± 0.0012.
Next, we show that the state is entangled by using the tool of the second and fourth moments. For the state under consideration at p=0.1291, the second moment is given by ℛ^(2)_AB=0.2355±0.0015, and the fourth moment by ℛ^(4)_AB=0.0259±0.0003, while for separable states, the lower bound on the fourth moment is given by 0.0277 for ℛ^(2)_AB=0.2355 when performing the optimization program in Eq. (<ref>). We see that the experimental value 0.0259 is smaller than the lower bound 0.0277 and violates it with 6 standard deviations. Therefore, we experimentally prepared a 3×3 bound entangled state with the photonic platform and analyzed its entanglement property via the second and fourth moments
successfully.
§ CONCLUSION
We experimentally produced a variety of genuinely entangled photonic states consisting of entangled photon pairs amended with path degrees of freedom and characterized them using methods based on locally randomized measurements. First, we showed how to generate genuinely entangled states of three parties and verified them using entanglement criteria based only on the second moments of the randomized measurements. The latter enabled the verification of multipartite entanglement in regimes where well-known measures of multipartite entanglement, i.e., the three-tangle or the squared concurrence, are zero. Further on, we demonstrated the production of weakly bound entangled chessboard states of two qutrits and used entanglement criteria based on the second and fourth moments of the taken randomized measurements to analyze the produced states. As a result, bound entangled states with mixed-state fidelities beyond 98% were successfully produced and verified.
Our work demonstrates the outstanding control of quantum states
in photonic setups and presents an efficient way for preparing a
low-rank bound entangled state. By incorporating appropriate white
noise, the setup demonstrates increased robustness against
transitioning into the free entangled region. Compared with several previous experiments, the precise control allowed us to directly verify
bipartite bound entanglement in the minimal case of a 3×3 system,
without resorting to the various forms of bound entanglement in higher dimensions or in multiparticle systems. This will facilitate further exploration of interesting entanglement effects in
experiments.
§ ACKNOWLEDGEMENTS
We thank Xiao-Dong Yu for discussions. The work in USTC is supported by the National Natural Science Foundation of China (Nos. 11821404, 11734015, 62075208), the Fundamental Research Funds for the Central Universities (Nos. WK2030000061, YD2030002015), and the Innovation Program for Quantum Science and Technology (No. 2021ZD0301604). Y.Z. is support by the Major Key Project of PCL. S.I. and O.G. are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No.
16KIS1618K). S.I. acknowledges the support from the DAAD. N.W. acknowledges support by the QuantERA project QuICHE via the German Ministry of Education and Research (BMBF
Grant No. 16KIS1119K).
§ APPENDIX A: EXPERIMENTAL DETAILS ON THE PREPARATION OF THE GHZ-W MIXED STATES
In our experiment, the GHZ-W mixed states are prepared using the setup shown in Fig. <ref>, and the switch between the GHZ state and W state is realized by engineering the polarization-entangled photon source (EPS), and the subsequent unitary transformations constituted by Beam Displacers (BDs) and the Half-Wave Plates (HWPs). First, for the GHZ state, a polarization-entangled state |ψ_s⟩=1/√(2)(|HH⟩+|VV⟩)|l⟩ is generated through the type-I Spontaneous Parametric Down-Conversion (SPDC) process, and |u⟩ (|l⟩) in Fig. <ref> represents the path u (path l). Then, BD1 makes the vertically polarized part of the light passes through directly to path l, while the horizontal component passes with a 4 mm deviation to path u. That is to say, the BD1 performs as a CNOT gate with the polarizations as the controlled qubit and the path as the target qubit. When we set the angles of the half-wave plates H4∼H5 as 0^∘ and H6∼H7 as 45^∘, we get |ψ_s⟩→1/√(2)(|HH⟩|u⟩+|VV⟩|l⟩). By encoding the H (u) and V (l) to the logic qubit 0 and 1, we prepare the system into the three qubit GHZ state |GHZ⟩=1/√(2)(|000⟩+|111⟩).
When it comes to the W state, the EPS is tuned to the state |ψ_s⟩=1/√(3)|VH⟩|l⟩+√(2/3)|HV⟩|l⟩ by rotating the polarization directions of the pump beam to |ψ_p⟩=1/√(3)|H⟩+√(2/3)|V⟩ and performs a bit flip operation on one of each paired photon generated in the SPDC process. Now the angle of H4 is placed at -67.4^∘ and the one of H5 at 45^∘ to transform the state |V⟩|l⟩ to 1/√(2)(|V⟩|u⟩+|H⟩|l⟩), and |ψ_s⟩→1/√(3)|VH⟩|u⟩+1/√(3)|H⟩|V⟩|u⟩+1/√(3)|H⟩|H⟩|l⟩. With re-encoding, the W state |W⟩=1/√(3)(|100⟩+|010⟩+|001⟩) is generated.
At last, various states ρ(g)=g|GHZ⟩⟨GHZ|+(1-g)|W⟩⟨W| are generated by randomly switching the settings of the setup to produce state |GHZ⟩ or |W⟩, with probabilities g and 1-g, respectively.
In the measurement stage, the combination of a Quarter-Wave Plate (QWP), an HWP, and a Polarization Beam Splitter (PBS) enables the polarization state measurement in an arbitrary basis. Thus, the two polarization encoded qubits are analyzed with the devices boxed as parts (a) and (b), respectively. Here BD3 combined with H8 performs as a PBS with only one output port, so we must rotate Q2 and H2 twice to realize the projective measurements {U|0⟩⟨0|U^†, U|1⟩⟨1|U^†}. The third qubit, i.e., the path qubit, is transformed to the polarization degree of freedom, and then analyzed by wave plates Q3, H3, and PBS2 in the boxed part (c).
To facilitate the massive randomized measurements, i.e., 40,000 sets for each state ρ(g) in our experiment, the QWPs Q1∼Q3 and HWPs H1∼H3 are all mounted in Motorized Rotation Mounts (Newport, CONEX-PR50CC). For each local measurement setting drawn uniformly at random, a classical computer inputs the corresponding settings of the QWP and HWP and controls the wave plates automatically rotated to the target angles to perform the measurement. This entire process is executed via a LabVIEW program.
Here the quality of the state ρ(g) depends heavily on the GHZ state and the W state, so we give the benchmarks of these two states through quantum state tomography. We estimate that the fidelities between the experimentally prepared states and the ideal states, F(ρ^ideal,ρ^exp)=tr√(√(ρ^ideal)ρ^exp√(ρ^ideal)), are 0.9919 and 0.9890 for the GHZ state and the W state, respectively. The real parts of the experimentally prepared states are shown in Fig. <ref>. All fidelities of the GHZ-W mixed states shown as the dots in Fig. <ref> are above 0.9836, which shows the good performance of the setup. The error bars are of the size of about 0.0001 and are obtained with Monte Carlo simulations by sampling the experimentally collected data.
§ APPENDIX B: QUANTUM STATE TOMOGRAPHY FOR THE CHESSBOARD STATE
As the red points in Fig. <ref> show, various noisy chessboard states ρ_ch(p) are prepared to study their entanglement properties. Here, the level of white noise p is estimated by comparing the total coincidence counts with the counts recorded when no white noise source is added, i.e., when the LED lights in Fig. <ref> are turned off. For instance, if we record a total of photonic counts N_p for state ρ_ch(p) and N_0 for state with no added white noise, then p is set to the value of 1-N_0/N_p.
To characterize the chessboard state that we prepared experimentally, we perform a standard quantum state tomography process, where the 81 vectors
|u_i⟩⊗|u_j⟩ (i, j=0,1,...8) are measured. The detailed forms of the kets |u_i⟩ are given by
|u_0⟩=|0⟩;
|u_1⟩=|1⟩;
|u_2⟩=|2⟩;
|u_3⟩=(|0⟩+|1⟩)/√(2);
|u_4⟩=(|0⟩+i|1⟩)/√(2);
|u_5⟩=(|1⟩+|2⟩)/√(2);
|u_6⟩=(|1⟩+i|2⟩)/√(2);
|u_7⟩=(|0⟩+|2⟩)/√(2);
|u_8⟩=(|0⟩+i|2⟩)/√(2).
Each basis is realized with the settings in Tab. <ref>.
We get the fidelities 0.9835±0.0005, 0.9838±0.0006, 0.9853±0.0005, 0.9893±0.0012, 0.9911±0.0005, 0.9930±0.0003 for states of p=0,0.052,0.0991,0.1291,0.1573,0.2158, respectively. The error bars are estimated with Monte Carlo simulations by sampling the experimental data 100 times.
§ APPENDIX C: ENTANGLEMENT DETECTION FOR THREE-QUBIT STATES WITH RANDOMIZED MEASUREMENTS
In our work, we use the criterion based on the second moment,
ℛ_S^(2) = ∫dU_1 …dU_n ⟨ U_1 τ_1 U_1^†⊗…⊗ U_n τ_n U_n^†⟩_ρ^2,
to study the entanglement property of the three-qubit state ρ(g), where τ_i=σ_z for i∈ S and τ_i=𝟙 for i∉ S.
As each observable τ_i is measured in the standard basis |0⟩ and |1⟩, we will sort the detection outcomes into eight categories corresponding to the eight basis states M_ABC={|000⟩⟨000|, |001⟩⟨001|, |010⟩⟨010|, |011⟩⟨011|,
|100⟩⟨100|, |101⟩⟨101|, |110⟩⟨110|, |111⟩⟨111|}, respectively. In every single trial, instead of preparing the state ρ_U=Uρ(g) U^† and then making measurements in the standard basis, we directly perform the measurements U^† M_ABCU on the state ρ(g) in our experiment, where U=U_A⊗ U_B⊗ U_C. These two ways are equivalent to each other.
For each choice of local unitaries, we prepare N copies of the state to estimate the probability distributions of the outcomes, and a total of M random unitaries are applied to form the average over local unitaries.
We note that given the observables τ_i we choose, there are only two possible outcomes X_i∈{1, -1} for τ_ABC=τ_1⊗τ_2⊗τ_3. We define the probability for each outcome as p_i, which can be obtained by summing up the probabilities that correspond to the same measurement outcomes. As an example, consider the moment ℛ_A^(2); then τ_1=σ_z, τ_2=𝟙, and τ_3=𝟙, and the outcomes assigned to the eight basis states M_ABC are 1,1,1,1,-1,-1,-1,-1, respectively. We get the probabilities p_1=
p_000+p_001+p_010+p_011 and p_2=p_100+p_101+p_110+p_111, where {p_1, p_2} represents the probability distribution for outcomes {1, -1}, and p_000=⟨ 000|ρ_U|000⟩ etc.
Next, we need to construct an unbiased estimator for Tr(ρ Uτ_ABCU^†)^2. For N independent trials, the empirical frequency p̂_i=N_i/N is an unbiased estimator of p_i, i.e., 𝔼[p̂_i]=p_i, where N_i is the number of events with measurement outcome X_i. Moreover, unbiased estimators of p_i^2 and p_ip_j are given by
(N(p̂_i)^2-p̂_i)/(N-1) and N/(N-1) p̂_ip̂_j,
respectively. We thus obtain an unbiased estimator Ê^2 for E^2=Tr(ρ_Uτ_ABC)^2 via
Ê^2=∑_i X_i^2 (N(p̂_i)^2-p̂_i)/(N-1)+2∑_i<jX_iX_j N/(N-1) p̂_ip̂_j.
For each of the M local unitaries and the observable τ_ABC, this reduces to
Ê^2=(N(p̂_1)^2-p̂_1)/(N-1)+(N(p̂_2)^2-p̂_2)/(N-1)-2N/(N-1) p̂_1p̂_2.
After averaging over all the randomly chosen local unitaries, we get the estimate of the moment ℛ_S^(2) as
R_S^(2)=1/M∑_i=1^M Ê_i^2,
where Ê_i^2 denotes the estimate obtained from the i-th random unitary.
Finally, we combine the second-moment estimates for all subsets of the same size |S|=k to get the k-sector length of the state and plug it into the criterion to perform the entanglement analysis.
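A compact sketch of this estimation pipeline is given below: for each random setting it converts the recorded counts N_i into the unbiased estimate Ê^2 and then averages over the M settings. The input format, one array of counts per setting ordered like the outcomes X_i, is an assumption of this sketch.

import numpy as np

def unbiased_E2(counts, outcomes):
    """Unbiased estimate of E^2 = Tr(rho_U tau_ABC)^2 from the counts N_i
    recorded for the outcomes X_i in a single random setting."""
    counts = np.asarray(counts, dtype=float)
    X = np.asarray(outcomes, dtype=float)
    N = counts.sum()
    p = counts / N
    est = 0.0
    for i in range(len(p)):
        est += X[i] ** 2 * (N * p[i] ** 2 - p[i]) / (N - 1)        # unbiased p_i^2
        for j in range(i + 1, len(p)):
            est += 2 * X[i] * X[j] * N / (N - 1) * p[i] * p[j]      # unbiased p_i p_j
    return est

def second_moment_estimate(counts_per_setting, outcomes=(+1.0, -1.0)):
    """Estimate of R_S^(2): the average of the unbiased E^2 over M settings."""
    return float(np.mean([unbiased_E2(c, outcomes) for c in counts_per_setting]))

# e.g. for R_A^(2) with N_1 counts of outcome +1 and N_2 counts of outcome -1
# per random setting: r2_A = second_moment_estimate([(2710, 2590), (2650, 2650)])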
|
http://arxiv.org/abs/2307.06027v1 | 20230712091633 | Semantic Communications System with Model Division Multiple Access and Controllable Coding Rate for Point Cloud | [
"Xiaoyi Liu",
"Haotai Liang",
"Zhicheng Bao",
"Chen Dong",
"Xiaodong Xu"
] | cs.MM | [
"cs.MM"
] |
Semantic Communications System with Model Division Multiple Access and Controllable Coding Rate for Point Cloud
Xiaoyi Liu,
Haotai Liang,
Zhicheng Bao,
Chen Dong*,
Xiaodong Xu, Senior Member, IEEE.
Xiaoyi Liu, Haotai Liang, and Zhicheng Bao are with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China (e-mail: [email protected]; [email protected]; [email protected]).
*Chen Dong is the corresponding author and with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China (e-mail: [email protected]).
Xiaodong Xu is with the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China, and also with the Department of Broadband Communication, Peng Cheng Laboratory, Shenzhen, Guangdong, China (e-mail: [email protected]).
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The point cloud, as a 3D representation, is widely used in autonomous driving, virtual reality (VR), and augmented reality (AR). However, traditional communication systems treat the point cloud's semantic information as irrelevant to communication, which hinders the efficient transmission of point clouds in the era of artificial intelligence (AI). This paper proposes a point-cloud-based semantic communication system (PCSC), which uses AI-based encoding techniques to extract the semantic information of the point cloud and joint source-channel coding (JSCC) technology to overcome the distortion caused by noisy channels and solve the “cliff effect" in traditional communication. In addition, the system realizes a controllable coding rate without fine-tuning the network. The method analyzes the coded semantic vector's importance and discards semantically-unimportant information, thereby improving the transmission efficiency. Besides, PCSC and the recently proposed non-orthogonal model division multiple access (MDMA) technology are combined to design a point cloud MDMA transmission system (M-PCSC) for multi-user transmission. Relevant experimental results show that the proposed method outperforms the traditional method by 10 dB at the same channel bandwidth ratio under the PSNR D1 and PSNR D2 metrics. In terms of transmission, the proposed method can effectively solve the “cliff effect" present in the traditional methods.
Semantic communications, point cloud transmission, controllable coding rate, model division multiple access.
§ INTRODUCTION
The point cloud is one of the representations of 3D data, which uses geometric coordinates and other attributes (e.g., reflectance) to characterize points <cit.>. Point clouds have been widely used in autonomous driving, medical image processing, virtual reality (VR), and augmented reality (AR). Usually, the data volume of a point cloud is large, and its transmission burdens the traditional communication system, which does not pay attention to the meaning of the information to be transmitted and only encodes the information into strings of 0s and 1s. For this reason, efficient transmission of point clouds is highly desired.
According to Shannon's theory <cit.>, communication is divided into three levels: symbol transmission, semantic exchange of transmitted symbols, and semantic information exchange effect. As a new communication paradigm, semantic communication system extracts the semantics of the information to be transmitted, encodes, and transmits it, improving communication efficiency.
Semantic communication processes data in the semantic domain by extracting the meaning of the data and filtering out unimportant information, compressing the data while preserving the meaning. In semantic communication, a new joint source-channel coding (JSCC) scheme based on deep neural network (DNN) is presented <cit.><cit.><cit.><cit.><cit.>. This scheme is robust to the harsh channel environment, i.e., the low signal-to-noise ratio (SNR) region, and can solve the “cliff effect" well. There are already some semantic communication systems, as shown in Table <ref>. For text communication system, DeepSC <cit.> maximizes system capacity and minimizes semantic errors by restoring the meaning of a sentence rather than using bit or symbol errors in conventional communication. For image communication systems, the NTSCC <cit.> uses JSCC based on nonlinear transformation to realize image-based semantic feature extraction and transmission. LSCI <cit.> believes that the semantic transmission is essentially the flow of the AI model, so the semantic slice model (SeSM) is designed to realize this idea. DVST <cit.> also uses a nonlinear transformation to achieve semantic video transmission. The semantic communication systems mentioned above are all based on the physical layer of the open system interconnect reference model (OSI). There are also studies on network layer semantic communication systems based on point cloud video <cit.><cit.>. In AITransfer <cit.>, the dynamic network condition is incorporated into the end-to-end point cloud compression architecture. It employs a deep reinforcement learning-based adaptive control scheme to provide robust transmission. ISCom <cit.> consists of a region-of-interest (ROI) selection module, a lightweight point cloud video encoder-decoder network, and a deep reinforcement learning (DRL)-based scheduler to adapt an optimal encoder-decoder network.
However, there is still room for improvement. First, as mentioned above, the current point cloud semantic communication system does not involve the research of the channel in the physical layer. Therefore, it is necessary to design a physical-layer-based point cloud semantic communication system. In this paper, a point cloud semantic communication system, named PCSC, is proposed.
Second, in semantic communication systems, many methods cannot manually and accurately control code lengths in a simple way. For example, in <cit.>, the code length is learned and controlled implicitly, and it is not easy to give an accurate code length at a fixed channel bandwidth ratio. In <cit.>, the specific network parameters need to be changed and retrained to get different coding rates, which consumes lots of computing power and storage space. A rate allocation network is introduced in <cit.> to estimate the code length; although it does not need to be retrained to get different coding rates, the trained rate allocation network takes up storage space. Therefore, we propose three methods to analyze the importance of the encoded semantic vector and discard the unimportant components according to the specified bandwidth ratio to achieve efficient transmission. In this process, the network does not need to be fine-tuned for a particular coding rate.
Third, according to the recently proposed non-orthogonal model division multiple access (MDMA) <cit.>, semantic features extracted from the same artificial intelligence (AI) model have some shared information and some personalized information. In this paper, the non-orthogonal MDMA and point cloud semantic transmission are combined to construct a point-cloud-based MDMA transmission system, named M-PCSC, validating that non-orthogonal MDMA is also applicable to sources in other modalities.
The contribution of this paper can be summarized as the following:
(1) PCSC framework: A novel end-to-end learnable framework for point cloud transmission, i.e., PCSC, is proposed, which can extract the point cloud semantic information effectively. A joint source-channel coding (JSCC) is exploited to cope with channel noise and semantic distortion, which leads to a more robust point cloud transmission than traditional transmission schemes. To the best of the authors' knowledge, this is the first paper to propose a point cloud semantic communication system in the physical layer.
(2) Rate controllable semantic feature transmission: This paper uses a simple method to achieve different coding rates without any other rate estimate models or complicated frameworks. The PCSC, which is trained only once, can generate a wide range of variable coding rates. This mechanism is achieved by the value and the grad value of the encoded semantic vector in the latent representation.
(3) MDMA transmission: This paper presents a point cloud semantic communication system based on non-orthogonal MDMA (M-PCSC) to realize multi-user transmission, which can save channel bandwidth. We also found that compression is mainly aimed at shared information. As the compression rate decreases, the amount of shared information will reduce, and only the personalized information is left. Besides, the semantic spectral efficiency (S-SE) of downlink M-PCSC is deduced, and the S-SE is optimized to obtain the maximum spectral efficiency.
(4) Performance validation: We verify the performance of the PCSC across standard datasets. At the same channel bandwidth ratio (CBR), the PCSC achieves much better performance on various established metrics, such as PSNR D1 and PSNR D2, compared to traditional G-PCC/Point Cloud Library (PCL) coding combined with LDPC and digital modulation schemes. Experiments show that the PCSC outperforms the traditional methods by 10 dB on average at the same channel bandwidth ratio under the PSNR D1 and PSNR D2 metrics.
The rest of this paper is arranged as follows: In Section 2, the overall system model of PCSC and M-PCSC is introduced. Then, introduce the network architecture and the algorithms in detail in Section 3. Section 4 shows the experiment setup and results. Finally, the paper is concluded in Section 5.
§ SYSTEM MODEL
This section describes the proposed point cloud semantic communication system. First, an end-to-end point cloud semantic communication system (PCSC) with a stochastic physical channel is introduced. Then, a point cloud semantic communication system with non-orthogonal model division multiple access (M-PCSC) is presented.
§.§ End-to-End Point Cloud Semantic Communication System
The overall architecture of the proposed PCSC is presented in Fig. <ref>. After preprocessing the point cloud to get x, the joint source-channel encoder map x into a semantic vector y. The rate allocation module adopts a particular approach to rank the semantic vector y based on its level of importance, discarding non-essential information within y according to the restrictions, which can be set manually and explicitly. This selective process enables variable length coding to be executed on y, converting y into z. Term z is passed through the physical channel with transmission impairments, such as noise and distortion. The received, ẑ, is restored to ŷ in the signal recovery module by zero-padding. The ŷ is decoded into x̂ at the joint channel-source decoder and recovered to the point cloud after the post-processing. Both the encoder and decoder are designed with DNNs and trained together. The various PCSC modules are described in detail as follows:
§.§.§ Pre-processing
A point cloud is composed of a large volume of voxels when represented using (i, j, k)-based Manhattan space. The computational coding complexity grows significantly if processing an entire point cloud at a time, especially for a point cloud with high precision. For example, a point cloud with 10-bit precision allows 0≤ i, j, k ≤ 2^10-1. Thus, the point cloud is partitioned into non-overlap cubes, as shown in Fig. <ref>, to reduce computational complexity. The coordinates of each cube are described using the octree decomposition method <cit.>. Assuming that the size of the cube is W×W×W, and the position of a particular cube is (i_c,j_c,k_c), the local coordinates of a voxel (i_l,j_l,k_l) can be represented using global coordinate (i_g,j_g,k_g):
(i_l,j_l,k_l) = (i_g,j_g,k_g)-W×(i_c,j_c,k_c).
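A small NumPy sketch of this partitioning step is given below; it takes the integer voxel coordinates of a point cloud and returns, for each occupied cube, its index (i_c,j_c,k_c) together with the local coordinates of the points it contains. The data layout (an N×3 integer array) and the default cube size are assumptions of this sketch.

import numpy as np

def partition_into_cubes(points, W=64):
    """Split global voxel coordinates into non-overlapping W x W x W cubes.

    points: (N, 3) integer array of global coordinates (i_g, j_g, k_g).
    Returns a dict mapping each cube index (i_c, j_c, k_c) to the array of
    local coordinates (i_l, j_l, k_l) = global - W * cube_index.
    """
    points = np.asarray(points, dtype=np.int64)
    cube_idx = points // W
    cubes = {}
    for idx in np.unique(cube_idx, axis=0):
        mask = np.all(cube_idx == idx, axis=1)
        cubes[tuple(idx)] = points[mask] - W * idx
    return cubes

def merge_cubes(cubes, W=64):
    """Inverse mapping used in the post-processing: local to global coordinates."""
    return np.concatenate([local + W * np.asarray(idx)
                           for idx, local in cubes.items()], axis=0)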
§.§.§ Base model
As shown in Fig. <ref>, the transmitter consists of a joint source-channel encoder and a rate allocation module. The joint source-channel encoder extracts the semantic features from partitioned point cloud x and ensures the effective transmission of semantic information over the physical channel simultaneously. The encoded symbol stream can be represented as follows:
y=SC_α(x),
where SC_α(·) is the joint source-channel encoder with parameter α.
The rate allocation module discards some points according to the importance degree of the semantic information to achieve variable rate coding and generates the semantic vector z:
z=R(y),
where R(·) is the rate allocation function, and the method for ranking the importance of the semantic vector is given in Section 3.
After encoding, the shortened vector z will be transmitted through the wireless channel. The received ẑ at the receiver can be expressed as:
ẑ=h× z+n,
where h corresponds to the Rayleigh fading channel coefficient with 𝒞𝒩(0,1), and n is the additive white Gaussian noise with 𝒞𝒩(0,σ^2). For the additive white Gaussian noise (AWGN) channel, h=1. For the end-to-end training of the encoder and the decoder, the channel must allow for backpropagation. In this paper, for the sake of simplicity, the AWGN and the Rayleigh fading channels are mainly considered.
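A minimal differentiable channel layer consistent with the model above can be written as follows (PyTorch, with the transmitted symbols treated as complex tensors); how the noise standard deviation σ is chosen from a target SNR is not specified here and is left to the caller.

import torch

def channel(z, sigma, rayleigh=False):
    """y = h * z + n with n ~ CN(0, sigma^2); h ~ CN(0, 1) for the Rayleigh
    fading channel and h = 1 for AWGN. Differentiable with respect to z."""
    noise = sigma * torch.complex(torch.randn_like(z.real),
                                  torch.randn_like(z.real)) / 2 ** 0.5
    if rayleigh:
        h = torch.complex(torch.randn_like(z.real),
                          torch.randn_like(z.real)) / 2 ** 0.5
        return h * z + noise
    return z + noise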
At the receiver, the signal recovery module performs zero padding on the received ẑ to ensure that the lengths of the padded semantic vector ŷ and y are equal. The padded semantic vector ŷ can be expressed as:
ŷ=Re(ẑ),
where Re(·) is the signal recovery function. The recovered ŷ is sent to the joint channel-source decoder for decoding, which can be represented as:
x̂=SC_β^-1(ŷ),
where SC_β^-1(·) is the joint channel-source decoder with parameter β.
The PCSC aims to decrease semantic errors and minimize the number of transmitted symbols. Despite the ability of current communication systems to accomplish transmission with a low bit error rate, the presence of even a few bit errors may result in significant distortion of the reconstructed point cloud due to channel noise. This occurs due to the absence of certain point cloud information. To recover the point cloud successfully at the semantic level, this paper employs joint source-channel coding to keep the meaning between x and x̂ unchanged. In the pre-processing, 1 indicates that the voxel is occupied, and 0 indicates that the voxel is not occupied. Each decoded voxel is a floating-point number in the range 0 to 1, so it needs to be classified as 1 or 0 accordingly. Inspired by <cit.>, we use weighted binary cross entropy (WBCE) as the distortion in training, and the formula is as follows:
l_WBCE=1/N_o∑^N_o-log p_x_o+ζ1/N_n∑^N_n-log (1-p_x_n),
where p_x=sigmoid(x), which is used to estimate the probability of a voxel being occupied, x_o denotes occupied voxels, x_n denotes unoccupied voxels, N_o and N_n denotes the number of occupied and unoccupied voxels, respectively. The value of the voxel x is a floating number between 0 and 1, which guarantees the differentiability of the backpropagation. At the same time, ζ is used to calculate the average loss of positive and negative samples to balance the loss penalty. In the experiment, ζ=3.
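A direct PyTorch transcription of this loss is shown below; the small clamping constant is added purely for numerical stability and is not part of the formula above.

import torch

def wbce_loss(logits, occupancy, zeta=3.0, eps=1e-7):
    """Weighted binary cross entropy: separate averages over occupied and
    unoccupied voxels, with the unoccupied term weighted by zeta.

    logits    : decoder output x (any shape), with p_x = sigmoid(x)
    occupancy : ground-truth voxel grid of 0s and 1s (same shape)
    """
    p = torch.sigmoid(logits).clamp(eps, 1.0 - eps)
    occ = occupancy > 0.5
    loss_occupied = -torch.log(p[occ]).mean()
    loss_empty = -torch.log(1.0 - p[~occ]).mean()
    return loss_occupied + zeta * loss_empty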
§.§.§ Post-processing
In the post-processing, the voxels decoded by the joint channel-source decoder are floating-point numbers between 0 and 1. They need to be binarized so that only 0 and 1 are used to represent the voxels. This paper uses an adaptive threshold for binarization <cit.>. Since p_x can also be considered as the probability of a voxel being occupied, p_x is sorted to extract the first k voxels that are most likely to be occupied. The binarized points are converted into local coordinates (i_l,j_l,k_l), then converted into global coordinates (i_g,j_g,k_g) and finally merged into a complete point cloud:
(i_g,j_g,k_g)=(i_l,j_l,k_l)+W× (i_c,j_c,k_c),
where (i_c,j_c,k_c) is the position of a particular cube.
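The adaptive-threshold step amounts to keeping the k most probable voxels of each decoded cube; a sketch is given below, together with the local-to-global conversion of the equation above. How k is chosen (for instance, from the number of occupied voxels of the corresponding input cube) is specified in the cited work and is left as an input here.

import numpy as np

def binarize_top_k(p_cube, k):
    """Keep the k voxels of a decoded W x W x W probability cube that are most
    likely to be occupied, and return their local coordinates as a (k, 3) array."""
    flat = p_cube.ravel()
    keep = np.argsort(flat)[::-1][:k]             # indices of the k largest p_x
    return np.stack(np.unravel_index(keep, p_cube.shape), axis=1)

def cube_to_global(local_coords, cube_index, W=64):
    """(i_g, j_g, k_g) = (i_l, j_l, k_l) + W * (i_c, j_c, k_c)."""
    return local_coords + W * np.asarray(cube_index)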
§.§ Non-orthogonal Model Division Multiple Access for Point Cloud Communication System
A new type of non-orthogonal multiple access technology based on semantic domain resources, named model division multiple access (MDMA), has been proposed recently <cit.>. When multi-user transmission is performed, to save transmission bandwidth, the shared information is transmitted only once, and the personalized information of each user is transmitted separately. This paper designs a non-orthogonal MDMA transmission system based on point clouds named M-PCSC. The overall architecture is shown in Fig. <ref>.
§.§.§ Uplink System
The overall framework of the uplink transmission system is shown in Fig. <ref>(a). First, user 1 and user 2 extract the semantic information S_1 and S_2 by using the joint source-channel encoder:
S_1=SC_α(x_1), S_2=SC_α(x_2),
where SC_α(·) is the joint source-channel encoder in the PCSC with parameter α. The shared information S_1s, S_2s and the personalized semantic information S_1p, S_2p can be obtained according to the specific agreements. At time slot 1 (frequency 1), the shared semantic information S_1s and S_2s is merged at the air interface as S_s and sent through the wireless channel. At time slot 2 (frequency 2), the personalized semantic information S_1p and S_2p are sent to the base station. The received Ŝ_s, Ŝ_1p, and Ŝ_2p can be represented as follows:
Ŝ_s = S_s + n_0,   Ŝ_1p = S_1p + n_1,   Ŝ_2p = S_2p + n_2,
where n_0, n_1, n_2 are the additive white Gaussian noise terms with 𝒞𝒩(0, σ^2) when transmitting the shared information S_s and the personalized information S_1p, S_2p, respectively. After receiving Ŝ_s, Ŝ_1p, and Ŝ_2p, the base station first extracts Ŝ_1s and Ŝ_2s from Ŝ_s using the function f(·):
Ŝ_1s=f(Ŝ_s), Ŝ_2s=f(Ŝ_s),
Then the base station can directly store the semantic information of x_1 and x_2, or recover the original point cloud x_1, x_2 by using the joint channel-source decoder:
x̂_1=SC_β^-1(Ŝ_1s+Ŝ_1p), x̂_2=SC_β^-1(Ŝ_2s+Ŝ_2p),
where SC_β^-1(·) is the joint channel-source decoder in the PCSC with parameter β.
§.§.§ Downlink System
The overall framework of the downlink system is shown in Fig. <ref>(b). First, the base station extracts the semantic information S'_1 and S'_2 of the point clouds x'_1 and x'_2 using the joint source-channel encoder SC_α(·):
S'_1=SC_α(x'_1), S'_2=SC_α(x'_2).
The shared information S'_1s, S'_2s and the personalized information S'_1p, S'_2p can be extracted by comparing the absolute difference between S'_1 and S'_2. The shared information S'_s, obtained by superimposing S'_1s and S'_2s, is sent only once, and the personalized information S'_1p and S'_2p are transmitted separately to the users through the wireless channel:
Ŝ'_s = S'_s + n'_0,   Ŝ'_1p = S'_1p + n'_1,   Ŝ'_2p = S'_2p + n'_2,
where n'_0, n'_1, n'_2 are the additive white Gaussian noise terms with 𝒞𝒩(0,σ^2). Ŝ'_1s and Ŝ'_2s can be obtained using the f(·) defined in Eq. (<ref>). Then user 1 and user 2 utilize the joint channel-source decoder to reconstruct the point clouds:
x̂^'_1=SC_β^-1(Ŝ^'_1s+Ŝ^'_1p), x̂_2'=SC_β^-1(Ŝ^'_2s+Ŝ^'_2p).
§ SYSTEM DESIGN
This section mainly introduces the specific design of the PCSC and the M-PCSC. For the PCSC, as shown in Fig. <ref>, the transmitter includes a joint source-channel encoder and a rate allocation module. It extracts semantic information from the point cloud to be transmitted and generates symbols to facilitate subsequent transmission. The receiver mainly includes a signal recovery module and a joint channel-source decoder. This section first introduces the design of the PCSC and then introduces the controllable rate coding method. Finally, as shown in Fig. <ref>, the design of the M-PCSC is introduced.
§.§ Base Model
The base model is divided into three parts, as shown in Fig. <ref>. Each part will be described in detail as follows:
§.§.§ Encode
The pre-processed point cloud is fed into the joint source-channel encoder, which is based on 3D convolution. Stacking 3D convolutional neural networks makes sparsely distributed occupied voxels more compact. Voxception-Resnet (VRN) <cit.> is the basic unit of the joint source-channel encoder and the joint channel-source decoder. The VRN structure is based on the residual network structure <cit.>, but the bottleneck block in the residual structure and the basic residual network are connected into a single Inception-style block. Relevant research <cit.> has used VRN in the point cloud hyperprior coding structure. The encoder and decoder architecture used in this paper refers to the network architecture proposed in <cit.> and <cit.>. The joint source-channel encoder consists of an initial convolution layer, three main units, and two convolutions at the end. Each unit contains three stacked VRN blocks and a downsample convolutional layer. The encoder takes x as the input and reshapes the output w as a one-dimensional vector y.
§.§.§ Controllable rate transmission
The rate allocation module generates a 0-1 mask, M(y), according to the importance ranking of y and the rate restrictions, which can be set manually and explicitly. The importance ranking is detailed later. M(y) is multiplied element-wise with y, and the entries with value 0 are discarded to obtain the transmitted symbols z. The shortened symbol vector z is transmitted through the wireless channel, and M(y) is transmitted to the receiver error-free. After receiving ẑ, the signal recovery module applies zero padding to ẑ to ensure the length of ŷ equals that of y. A sketch of this masking step is given below.
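The masking, dropping, and receiver-side zero padding described above can be sketched as follows; the importance array stands for any of the ranking criteria introduced later, and keep_ratio is set by the target CBR.

```python
import numpy as np

def apply_rate_mask(y, importance, keep_ratio):
    """Build the 0-1 mask M(y) from an importance ranking, drop masked-out
    entries to form the transmitted symbols z, and restore the original
    length at the receiver by zero padding."""
    L = y.shape[0]
    k = int(np.ceil(keep_ratio * L))
    order = np.argsort(importance)[::-1]       # most important first
    mask = np.zeros(L, dtype=bool)
    mask[order[:k]] = True                     # M(y)
    z = y[mask]                                # shortened symbol vector (transmitted)
    y_hat = np.zeros_like(y)
    y_hat[mask] = z                            # zero padding at the receiver
    return mask, z, y_hat
```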
§.§.§ Decode
Term ŷ is reshaped into ŵ in the same form as w. After this, ŵ will be fed into the joint channel-source decoder, which has a symmetric architecture with the joint source-channel encoder.
Finally, the reconstructed point cloud can be obtained by post-processing the decoded result x̂.
§.§ Controllable Coding Rate
A rate allocation module is introduced in this paper to encode the point cloud at different rates. The proposed methods neither require training a dedicated rate allocation network nor retraining the original network parameters, and they can generate arbitrary coding rates (below the maximum coding rate of the system). The module can also be used in other semantic communication systems to achieve variable-rate coding without changing the original structure. This paper uses the channel bandwidth ratio (CBR) <cit.> to measure the coding rate. CBR=k/e (k<e), where e is the source bandwidth given by the product of the pre-processed point cloud cube's length, width, height, and number of channels. Term k is defined as the channel bandwidth, and the value of k is the total length of the semantic vector for transmission. This subsection mainly describes how to analyze the importance of the semantic vector after coding by the encoder.
§.§.§ Value of the semantic vector
The semantic vector y can be expressed by [1,L], where L represents the length of the semantic vector. For each element y_i (i ≤ L), | y_i |^2 represents the signal power, so a larger | y_i | represents higher power. During the training of the PCSC, the network allocates high power to vectors of great importance, and this point is verified in subsequent experiments. For this reason, a larger | y_i | can be considered to have a higher degree of importance and a more robust noise tolerance, and the network relies more on it for subsequent decoding. Smaller | y_i | correspond to values that the network considers unimportant after learning, and the network can still decode well even if these insignificant points are not transmitted to the receiver. Therefore, the absolute values of the encoded semantic vectors can be ordered, and signals with small absolute values, i.e., signals with small power, can be discarded during transmission to save channel bandwidth.
§.§.§ Gradient of the semantic vector
For the PCSC, during training, according to Eq. (<ref>), both the x̂ recovered by the decoder and the input x are used to calculate the loss, which is then back-propagated to update the network weights.
The gradient of the encoded semantic vector w can be obtained in the backpropagation of the loss function. Assuming that the dimension of the encoded semantic vector is [A,C,E,F,G], the gradient δ^p of the p^th channel in the semantic vector is as follows:
δ ^p=1/E× F × G∑_i∑_j∑_k∂ l/∂ w_ijk^p,
where l is the loss function and w_ijk^p represents the value of w in channel p at the coordinates (i,j,k). The gradient value δ^p of each channel in w is calculated using Eq. (<ref>). Finally, all δ^p, p ∈ {1, …, C}, are concatenated in the channel direction to form the gradient δ of w. Inspired by <cit.>, for trained networks, the gradient of w represents the contribution degree of each value in w to the decoding result; that is, the gradient of an unimportant value in w is low. In this paper, the encoded semantic vector w is first decoded at the transmitter, the loss is calculated from the decoded x̂ and the input x, and the loss is then back-propagated to obtain the gradient δ of w. After this, δ is reshaped into one dimension, and y can be ordered according to the absolute value of its gradient.
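A possible PyTorch sketch of this per-channel gradient computation is shown below; decoder and loss_fn are placeholders for the joint channel-source decoder and the WBCE distortion, not the exact implementation.

```python
import torch

def channel_gradients(w, decoder, x, loss_fn):
    """Per-channel gradient magnitude of the encoded tensor w (shape [A, C, E, F, G]),
    obtained by decoding at the transmitter and back-propagating the loss."""
    w = w.detach().requires_grad_(True)
    loss = loss_fn(decoder(w), x)          # distortion between decoded x_hat and input x
    loss.backward()
    # delta^p: mean of dl/dw over the spatial dimensions of each channel p
    delta = w.grad.mean(dim=(2, 3, 4))     # shape [A, C]
    return delta.abs()
```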
§.§.§ Value and gradient of the semantic vector
For the PCSC, assume that l is the loss function of the network, x is the input data, and x̂ is the recovered data. For the trained model, l(x,x̂,γ) is locally optimal and close to 0, where γ denotes the network weights. If a value w_ijk^p in w is set to zero, the loss becomes l(x,x̂,γ|w_ijk^p=0), which is larger when w_ijk^p is more critical. Large loss values degrade the point cloud reconstruction. The squared difference D between l(x,x̂,γ) and l(x,x̂,γ|w_ijk^p=0) can be described as follows:
D=[l(x,x̂,γ)-l(x,x̂,γ|w_ijk^p=0) ]^2.
The first-order Taylor expansion is used in Eq. (<ref>), and the result is as follows:
D= [∂ l(x,x̂,γ)/∂ w_ijk^pw_ijk^p ]^2.
It can be seen that D is related to both the value and the gradient of w_ijk^p. To ensure good performance of the reconstructed point cloud, the discarded vector in y should have smaller D, i.e., the absolute product of the value and gradient should be small.
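The corresponding ranking rule can be sketched as follows, assuming the gradient of w has already been obtained as above.

```python
import numpy as np

def rank_by_value_times_gradient(w, grad_w):
    """First-order estimate D = (dl/dw * w)^2 of the loss change caused by
    zeroing each entry of w; entries with the smallest D are dropped first."""
    D = (grad_w * w) ** 2
    order = np.argsort(D.ravel())          # ascending: smallest contribution first
    return D, order
```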
§.§.§ Realization of the controllable coding rate
The process of controllable rate coding is shown in Algorithm <ref>. One of the above three methods can be selected in the rate allocation module to rank the importance of y. Then a certain number of vectors can be discarded according to the required coding rate to achieve the specified CBR.
§.§ Non-orthogonal Model Division Multiple Access for PCSC
§.§.§ M-PCSC Uplink and Downlink Design
The uplink design of the M-PCSC is shown in Fig. <ref>(a). Considering a two-user access system, first, the base station matches two users who initiated point cloud uplink transmission instructions. Each user uses the joint source-channel encoder to encode the point cloud and transmits the shared and personalized information, respectively. The base station finally reconstructs the users' transmitted point cloud semantic information.
The downlink design of the M-PCSC is shown in Fig. <ref>(b). First, the base station searches for target users. Then the base station uses the joint source-channel encoder to encode the point cloud and sends the shared information and corresponding personalized information to the users. Finally, the users restore the point cloud.
In this paper, the semantic overlap rate (Sor) defined in <cit.> represents the resource reuse rate in the time and frequency domains. The definition of Sor is as follows:
Sor= | T_1⋂ T_2 |/ | T_1 |+ | T_2|,
or:
Sor= | B_1⋂ B_2 |/ | B_1 |+ | B_2|.
Assuming that T_i or B_i is the time or bandwidth resource occupied by user i (1≤ i≤ 2), | T_i | and | B_i | represent the corresponding time or bandwidth overhead.
In MDMA, the user's shared information is superimposed and transmitted only once. Therefore, users' information can be transmitted with smaller bandwidth. Literature <cit.> presents a new metric, feasibility, denoted as F, to represent the service capability that the channel can provide for multiple access systems:
F=R_c/R_s,
where R_c is the channel transmission rate, and R_s is the source coding rate. According to the description in <cit.>, the feasible area of non-orthogonal MDMA uplink and downlink are greater than that of NOMA.
§.§.§ M-PCSC Performance Analysis
Take the downlink transmission as an example. When the base station sends the point clouds x'_1 and x'_2 to different users, it uses Eq. (<ref>) to encode and extract the shared information S'_1s and S'_2s. The similarity between S'_1s and S'_2s can be measured by the absolute difference σ:
σ=|S'_1s-S'_2s|.
A smaller σ indicates a higher similarity between S'_1s and S'_2s. S'_1s and S'_2s are superimposed into S'_s and sent to the corresponding users. In the AWGN channel, for the received Ŝ'_s, Eq. (<ref>) is used to obtain Ŝ'_1s and Ŝ'_2s. In this paper, f(·) is defined as follows:
f(Ŝ'_s)=1/2Ŝ'_s.
Ŝ'_1s and Ŝ'_2s can be rewritten using Eq. (<ref>):
Ŝ'_1s=Ŝ'_2s=f(Ŝ'_s)=1/2(S'_1s+S'_2s+n'_0).
The differences between S'_1s and Ŝ'_1s, and between S'_2s and Ŝ'_2s, are as follows:
ΔŜ'_1s = | Ŝ'_1s - S'_1s | = 1/2 | S'_1s - S'_2s - n'_0 | = 1/2σ + n'_0,
ΔŜ'_2s = | Ŝ'_2s - S'_2s | = 1/2 | S'_2s - S'_1s - n'_0 | = 1/2σ + n'_0.
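A toy numerical sketch of this shared-information superposition and recovery (with illustrative sizes and noise levels, not the parameters used in the experiments) is given below.

```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma_n = 1024, 0.1
S1s = rng.normal(size=L)                       # shared part selected for user 1
S2s = S1s + 0.01 * rng.normal(size=L)          # nearly identical shared part for user 2
Ss = S1s + S2s                                 # superimposed at the air interface
Ss_hat = Ss + sigma_n * rng.normal(size=L)     # AWGN channel
S1s_hat = 0.5 * Ss_hat                         # f(.) = Ss_hat / 2
err = np.abs(S1s_hat - S1s)                    # roughly sigma/2 plus noise, as in the text
print(err.mean())
```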
Then the base station sends personalized information S'_1p and S'_2p to the corresponding users. The users merge shared information and personalized information and decode using Eq. (<ref>):
x̂^'_i=SC_β^-1(Ŝ^'_is+1/2σ+n'_0+n'_i+Ŝ^'_ip), i ∈{1,2}.
For the convenience of discussion, Eq. (<ref>) is simplified as follows:
x̂^'_i=SC_β^-1(Ŝ^'_is+1/2σ+n+Ŝ^'_ip), i ∈{1,2}.
PSNR D1 and PSNR D2 can be used as indicators to evaluate the point cloud reconstruction performance. The specific calculation formulas of PSNR D1 and PSNR D2 will be given in Section 4. Here, the PSNR D1 and PSNR D2 can be represented using abstract functions:
η_D1(x'_i, x̂'_i) = η_D1(x'_i, SC_β^-1(Ŝ'_is + 1/2σ + n + Ŝ'_ip)),
η_D2(x'_i, x̂'_i) = η_D2(x'_i, SC_β^-1(Ŝ'_is + 1/2σ + n + Ŝ'_ip)),
where i ∈{1, 2}. The results of η_D1 and η_D2 are related to σ, n, and the proportions of S'_is and S'_ip. The proportion of S'_is and S'_ip can be measured by the semantic overlap rate (Sor), and σ is also related to Sor. When the base station chooses the shared information, the absolute difference σ between the encoded semantic vectors of the two point clouds is calculated, and the σ values are then arranged from small to large. According to the specified Sor, the first L× Sor encoded vectors are selected as the shared information (L is the coding length after reshaping the encoded vector into one dimension). Fig. <ref> in the appendix describes the relationship between σ and Sor for different datasets, where σ is the value whose index is L× Sor in the sorted σ sequence. It can be seen that σ increases correspondingly as Sor increases. For this reason, Eq. (<ref>) can be rewritten as follows:
η_D1(x'_i, x̂'_i) = g(Sor, SNR),   η_D2(x'_i, x̂'_i) = h(Sor, SNR).
Both η_D1 and η_D2 are functions of Sor and SNR, and the better the reconstruction performance, the higher η_D1 and η_D2 are. Here, η_D1 is taken as an example; the discussion for η_D2 is analogous. Literature <cit.> defines the semantic transmission rate (S-Rate) for text semantic transmission. The S-Rate for point cloud semantic transmission is as follows:
Γ_i=WI/(2-Sor)Lg(Sor, SNR), i∈{1,2},
where W is the bandwidth of the transmission channel, I is the average semantic information in a point cloud (measured in suts), (2-Sor)× L represents the coding length. The semantic spectral efficiency of the point cloud is further defined as follows:
ϕ_i=Γ_i/W=I/(2-Sor)Lg(Sor, SNR), i∈{1,2}.
In this part, a semantic-aware resource allocation model based on M-PCSC is proposed to maximize ϕ_i:
max ϕ_i
s.t. C_1 : i ∈{1,2},
C_2 : 0 ≤Sor ≤0.8,
C_3 : g ≥g_th,
C_4 : ϕ_i ≥ϕ_th.
According to the experiments in Section 4, M-PCSC can keep stable performance in Sor ≤ 0.8. C_2 specifies the permitted range of Sor, C_3 restricts the minimum PSNR D1 by g_th, and C_4 reflects the minimum required ϕ_th. According to <cit.>, term I/L depends on the source type, which is a constant for a particular source type. For this reason, Eq. (<ref>) can be rewritten as:
max g(Sor, SNR)/(2-Sor)
s.t. C_1 : i ∈{1,2},
C_2 : 0 ≤Sor ≤0.8,
C_3 : g ≥g_th,
C_4 : ϕ_i ≥ϕ_th.
Eq. (<ref>) depends on Sor and the channel conditions, so M-PCSC is run on the AWGN channel to obtain the mapping between g(Sor, SNR)/(2-Sor) and (Sor, SNR), as shown in Fig. <ref>. Because C_2, C_3, and C_4 can only be evaluated with a look-up table, an exhaustive search is used to solve Eq. (<ref>).
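A possible sketch of this exhaustive search over the tabulated mapping is given below; g_table, sors, and I_over_L are assumed inputs obtained from the look-up table and the source statistics.

```python
def search_sor(g_table, sors, snr_idx, g_th, phi_th, I_over_L):
    """Exhaustive search over the tabulated mapping g(Sor, SNR) to maximize
    g/(2 - Sor) subject to C_2-C_4; g_table[i][j] stores the PSNR D1 value
    for sors[i] at the j-th SNR point."""
    best_obj, best_sor = float("-inf"), None
    for i, sor in enumerate(sors):
        if not (0.0 <= sor <= 0.8):          # C_2: permitted range of Sor
            continue
        g = g_table[i][snr_idx]
        if g < g_th:                         # C_3: minimum PSNR D1
            continue
        obj = g / (2.0 - sor)
        if I_over_L * obj < phi_th:          # C_4: minimum semantic spectral efficiency
            continue
        if obj > best_obj:
            best_obj, best_sor = obj, sor
    return best_sor, best_obj
```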
§ EXPERIMENTS
In this section, the experiment of PCSC and M-PCSC will be introduced in detail. First, the experiment settings, including the datasets, the relevant training settings, the structure of the network, and the baseline, are presented. Then, the controllable rate performance, the compression performance, the transmission performance, and the M-PCSC performance will be described.
§.§ Experiments Setup
§.§.§ Training datasets
About 8500 3D models from ShapeNet <cit.> are randomly selected for training, including 55 kinds of common objects, such as tables, chairs, cars, lamps, and so on. The point clouds from the mesh models are generated by randomly sampling points on the surface of the mesh models. Then the point clouds are voxelized into an occupied space of 256×256×256. The voxelized point clouds are separated into non-overlapping cubes with the size of 64×64×64. Non-overlapping cubes are randomly collected from each voxelized point cloud. A total of 180,000 cubes are used in the training. For the setting of the loss function, ζ=3, and the learning rate is 10^-5. The Adam optimizer <cit.> is used for iterative optimization.
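A simplified sketch of this voxelization and cube-splitting step is given below; it assumes the point coordinates are already normalized to [0, 1), which is an added assumption.

```python
import numpy as np

def voxelize_and_split(points, grid=256, cube=64):
    """Voxelize a point cloud into a grid^3 occupancy volume and cut it into
    non-overlapping cube^3 blocks, keeping only non-empty blocks."""
    q = np.clip((points * grid).astype(int), 0, grid - 1)   # assumes points in [0, 1)
    vol = np.zeros((grid,) * 3, dtype=np.uint8)
    vol[q[:, 0], q[:, 1], q[:, 2]] = 1
    blocks = []
    for i in range(0, grid, cube):
        for j in range(0, grid, cube):
            for k in range(0, grid, cube):
                b = vol[i:i + cube, j:j + cube, k:k + cube]
                if b.any():
                    blocks.append(((i // cube, j // cube, k // cube), b))
    return blocks
```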
§.§.§ Testing datasets
Two entire bodies, longdress and loot, with smooth surfaces and complete object shapes, are selected from 8i Voxelized Full Bodies (8iVFB) <cit.>. Another two upper bodies, Andrew and Sarah, with noisy and incomplete surfaces, are chosen from Microsoft Voxelized Upper Bodies (MVUB) <cit.>.
§.§.§ Baseline
To compare the performance of PCSC under different channel bandwidth ratios (CBRs), this paper compares the proposed controllable coding rate methods with the entropy method proposed in <cit.>. To compare the performance of different point cloud communication systems under different CBRs in the AWGN channel, we compare G-PCC <cit.> and PCL <cit.> using 1/2 LDPC with BPSK, and the PCSC. The SNR is set to 10 dB. There are two methods for G-PCC: G-PCC (octree) and G-PCC (trisoup). Regarding transmission, we compare PCSC with joint source-channel coding (JSCC) against 1/2 LDPC channel coding with 16QAM and QPSK under different SNRs.
§.§.§ Evaluating indicator
In this paper, PSNR D1 and PSNR D2 <cit.> are used as evaluating indicators. PSNR D1 is the mean square error of point-to-point (c2c) distances in the original and reconstructed point clouds. PSNR D2 is the mean square error of point-to-plane (c2p) in the original and reconstructed point clouds.
To obtain the point-to-point error, for each point a_j in the original point cloud, the nearest neighbor method is used to locate its corresponding point b_i in the reconstructed point cloud. Connecting a_j and b_i forms an error vector E(i,j), whose length gives the point-to-point error. The calculation formula is as follows:
e_c2c^A,B=1/N_A∑ E(i,j)^2,
where A and B mean the original and reconstructed point cloud, respectively. N_A is the number of points in the original point cloud.
To obtain the error from point to surface, the error vector E(i,j) is projected along the normal vector direction N_j of the original point cloud a_j, and a new error vector E(i, j) is obtained. The error calculation formula of point to surface (c2p) is as follows:
e_c2p^A,B=1/N_A∑( E(i,j)· N_j)^2,
where A and B mean the original and the reconstructed point cloud, respectively. N_A is the number of points in the original point cloud.
Eq. (<ref>) and Eq. (<ref>) are measured by the mean square error (MSE). However, the MSE is difficult to interpret across different point clouds. To facilitate comparison, the MSE is converted to a PSNR using the following equation:
PSNR_A,B=10log_10p^2/e_A,B,
where p is the peak value of the signal. In this paper, p=√(3)× (2^b-1), and b is the precision of the point cloud. The precision of longdress and loot is 10, and that of andrew and sarah is 9.
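For small point clouds, the one-sided D1/D2 PSNR defined above can be sketched with brute-force nearest-neighbour search as follows (practical tools use more efficient and symmetrized variants).

```python
import numpy as np

def psnr_d1_d2(A, B, normals_A, precision=10):
    """Point-to-point (D1) and point-to-plane (D2) PSNR of reconstruction B
    against original A; brute-force nearest neighbours, small clouds only."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    nn = d2.argmin(axis=1)                                 # nearest point b_i for each a_j
    err = B[nn] - A                                        # error vectors E(i, j)
    e_c2c = (err ** 2).sum(axis=1).mean()
    e_c2p = ((err * normals_A).sum(axis=1) ** 2).mean()    # projection on normals N_j
    p = np.sqrt(3) * (2 ** precision - 1)                  # signal peak
    to_psnr = lambda e: 10 * np.log10(p ** 2 / e)
    return to_psnr(e_c2c), to_psnr(e_c2p)
```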
§.§.§ Model structure
The structure of the Voxception-ResNet used in this paper is shown in Fig. <ref>, and the basic structures of the encoder and the decoder are shown in Table <ref>, where k, s, p, f mean the kernel, stride, padding, feature, respectively. The joint source-channel encoder extracts semantic features, compresses redundant information, and codes channels. The decoder structure, which is symmetrical to the encoder, is used for recovering the received semantic features into a point cloud.
§.§ Result Analysis
§.§.§ PCSC performance
Fig. <ref> shows the PSNR D1 and PSNR D2 at different coding rates over an AWGN channel with SNR=10dB. The channel bandwidth ratio CBR=0.0625 without discarding any points, and the CBR after dropping points are less than 0.0625. In this experiment, the ratio of discarding points was set to 0–0.9, with a step space of 0.1. The value, the gradient, and the product of the gradient and value are used as the basis for ranking semantic vectors. As mentioned above, the network learns the degree of importance and the adaptation of power allocation, so the performance is still reliable after discarding some points with low power (legend is small value) but worse after discarding some points with high power (legend is large value). The performance achieved by the methods proposed in this paper, especially at low CBR, is about 25dB higher than the performance of random dropping points. However, due to the way the three methods rank the semantic vector differently, the performance of the three methods is different at low CBR. In addition, the performance of the method proposed in this paper is stable where the CBR is 0.03125 (dropping rate is 50%) on longdress and loot, and stable where the CBR is 0.01875 (dropping rate is 70%) on andrew and sarah. This paper's method can effectively reduce the transmission bandwidth and still perform well under the condition of reduced CBR. Besides, the performance of the method proposed in this paper (value, gradient, value × gradient) is equivalent to that of the entropy method when the CBR is larger than 0.0125 (dropping rate is 80%) and is slightly lower than that of the entropy method when the CBR is less than 0.0125. The methods used in this paper do not need to add additional networks for training, only analyze the importance of the encoded semantic vectors and discard some semantically-unimportant vectors. The entropy method needs to modify the original network and retrain the network before ranking the importance of the vectors. Therefore, when the required CBR is not very small, the method proposed in this paper can be adopted, and when the required CBR is very small, the entropy method can be adopted.
Fig. <ref> shows the performance curves of the PSNR D1 and the PSNR D2 achieved by different communication systems. The SNR is set to 10dB. For the G-PCC (octree), the G-PCC (trisoup), and the PCL, the noise-robust combination BPSK + 1/2 LDPC is used. It can be seen that PCSC has relatively stable performance under different CBRs, which the other methods cannot achieve. Although the lowest CBR achieved by PCSC is slightly higher than that of G-PCC (trisoup), it is enough to show that PCSC can match the existing compression performance. PCSC treats the symbols representing information as unequal in importance. For PCSC, when transmitting the point cloud under different CBRs, the importance of the coded semantic vectors is analyzed, and the semantically unimportant vectors are then discarded. The other methods consider the symbols representing information to be equally important: when transmitting the point cloud at low CBR, the points in the input point cloud are directly reduced and then coded, likely losing some points of high importance. With the increase of CBR, more points are used for coding and transmission, and the reconstruction performance improves. Therefore, the point cloud reconstruction performance of the PCSC is better than that of the other methods at low CBR, and the performance of the PCSC is close to that of the other methods at high CBR. Besides, the decoded point clouds and the ground truth are shown in Fig. <ref>. The error map based on the point-to-point distance between the decoded point clouds and the ground truth is also plotted. Compared with the other methods, the method used in this paper has smaller errors.
Fig. <ref> shows the transmission performance under different SNRs. The PCSC is trained once under the SNR=10dB and tested under various SNRs. The CBR is set to 0.0625. This paper compares the differences between the PCSC used joint source-channel coding (JSCC) in the AWGN and Rayleigh channels and the PCSC with 1/2 LDPC + 16QAM and 1/2 LDPC + BPSK. Under the high SNR, the performance of both JSCC and LDPC coding is stable, and the performance of LDPC is slightly higher than that of JSCC. With the decrease of SNR, the performance of the PCSC using LDPC coding drops sharply, appearing as a “cliff effect”. In the AWGN channel, the “cliff effect" of 1/2LDPC+16QAM and 1/2LDPC+QPSK occurs at about 4.5dB and -1dB, respectively. In the Rayleigh channel, the “cliff effect" of 1/2LDPC+16QAM and 1/2LDPC+QPSK occurs at approximately 9dB and 4.5dB, respectively. The JSCC effectively alleviates the “cliff effect". With the reduction of SNR, the system performance does not decline sharply but slowly, which indicates that the anti-noise ability of JSCC is relatively strong. In PCSC+1/2 LDPC+16QAM and PCSC+ 1/2 LDPC+BPSK, it is considered that the bit error location is random and not related to the semantic features of the point cloud behind the bit stream. In low SNR, the reconstruction performance drops sharply once an error occurs. In PCSC+JSCC, the whole system optimizes the end-to-end transmission distortion and considers the semantic features of the point cloud. Therefore, the model can learn how to preserve semantic features from noise.
§.§ M-PCSC performance
First, the original point cloud is compressed using the encoder of PCSC with different compression rates. The compression ratio is defined as the file size after compression divided by the original file size. When the absolute difference between two semantic vectors is less than 0.001, the two vectors are considered to be similar. As shown in Fig. <ref>(a), the amount of shared information between the two point clouds gradually decreases as the compression rate decreases. When the compression ratio is extremely low, the amount of shared information between the two point clouds is almost 0, which indicates that the compression mainly removes shared information. Because M-PCSC transmits shared information only once and transmits personalized information separately, relatively more bandwidth is needed when the compression ratio is low, since the proportion of shared information between the two point clouds is small. For this reason, the bandwidth occupation rate increases as the compression rate decreases when using M-PCSC.
Fig. <ref>(b) shows the performance of M-PCSC with increasing semantic overlap rate (Sor) over a Gaussian channel with SNR=10dB. When Sor is set to 0 and 1, the bandwidth occupancy rate is 1 and 0.5, respectively. When Sor is less than 0.8, the recovery performance of the two users is almost stable. When the Sor reaches 0.8, the performance is close to the original recovery, which means it can save 40% bandwidth. When the Sor is larger than 0.8, the performance degrades gradually.
To further demonstrate the performance of M-PCSC, it is compared with point cloud transmission using NOMA on top of PCSC. For a fair comparison, the Sor of the M-PCSC is set to 0.8, and the CBR is also adjusted to ensure the bandwidth occupied by the M-PCSC is consistent with that occupied by NOMA. The experimental results are shown in Fig. <ref>(c). It can be seen that under a high SNR, the performance of NOMA is slightly better than that of non-orthogonal MDMA. However, under a low SNR, the advantages of non-orthogonal MDMA are much greater than those of NOMA. When the SNR is 0dB, the performance of non-orthogonal MDMA is better than that of NOMA by about 30 dB.
§ CONCLUSION
This paper proposes a new point cloud semantic communication system (PCSC) and a simple but efficient method to control the coding rate. The value and the gradient of the encoded semantic vector are taken as the basis to analyze the importance degree of the semantic vector, and a certain proportion of semantically unimportant data is discarded to save bandwidth. In addition, a system combining PCSC and the newly proposed non-orthogonal model division multiple access (MDMA) technology is proposed, named M-PCSC, which can effectively reduce the transmission bandwidth when two point clouds are transmitted simultaneously. The entire system is described as an optimization problem to minimize end-to-end transmission distortion. Relevant experimental results show that the proposed communication system can generally outperform the traditional methods, with a large increase in the PSNR D1 and PSNR D2 indicators.
§ ACKNOWLEDGMENT
This work is supported in part by the National Key R&D Program of China under Grant 2022YFB2902102.
§ APPENDIX: THE RELATIONSHIP BETWEEN SOR AND σ
| http://arxiv.org/abs/2307.04578v1 | 20230710141120 | Exceptional points and phase transitions in non-Hermitian binary systems | ["Amir Rahmani", "Andrzej Opala", "Michał Matuszewski"] | quant-ph | ["quant-ph", "cond-mat.quant-gas"] |
Institute of Physics Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland
Institute of Physics Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland
Institute of Experimental Physics, Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL-02-093 Warsaw, Poland
Institute of Physics Polish Academy of Sciences, Al. Lotników 32/46, 02-668 Warsaw, Poland
Recent study demonstrated that steady states of a polariton system may demonstrate a first-order dissipative phase transition with an exceptional point that appears as an endpoint of the phase boundary [R. Hanai et al., Phys. Rev. Lett. 122, 185301 (2019)]. Here, we show that this phase transition is strictly related to the stability of solutions. In general, the exceptional point does not correspond to the endpoint of a phase transition, but rather it is the point where stable and unstable solutions coalesce. Moreover, we show that the transition may occur also in the weak coupling regime, which was excluded previously. In a certain range of parameters, we demonstrate permanent Rabi-like oscillations between light and matter fields. Our results contribute to the understanding of nonequilibrium light-matter systems, but can be generalized to any two-component oscillatory systems with gain and loss.
Exceptional points and phase transitions in non-Hermitian binary systems
Michał Matuszewski
August 12, 2023
========================================================================
Phase transitions correspond to significant alterations of the properties of a system caused by the modification of physical parameters.
Examples include the ferromagnetic-paramagnetic phase transition <cit.>, gas-liquid-solid transition <cit.>, Bose-Einstein condensation in bosonic and fermionic systems <cit.>, metal–insulator transition in solid state <cit.>, and topological phase transitions <cit.>. Phase transitions may also occur in non-Hermitian systems, which are systems that do not satisfy the condition of Hermiticity, which is embedded in quantum mechanics <cit.>. Here the non-Hermitian contributions may stem from dissipation <cit.> or asymmetric coupling <cit.> and lead to a number of unique properties such as non-reciprocity <cit.>, mutually interlinked non-Hermitian phase transitions <cit.> and the non-Hermitian skin effect <cit.>.
A striking example of non-Hermitian physics that deviates significantly from the Hermitian case is the coalescence of eigenstates and energy eigenvalues at so-called exceptional points (EPs). These spectral singularities may be accompanied by a non-Hermitian phase transition <cit.>. Standard procedure to investigate these phase transitions is through the study of the spectrum of the system as some controllable parameters are changed <cit.>. Typically, the process involves meticulous adjustment of loss and gain in order to achieve the desired outcome. In general, in a linear system the presence of EPs is independent of the stability of the stationary state that the system evolves to <cit.>. However, in a nonlinear system, more than one solution may be stable, which gives rise to the phenomena of bistability and multistability <cit.>. The existence of nonlinear features may affect the non-Hermitian effects realized in linear cases or give rise to entirely new phenomena <cit.>.
In order to examine the relationship between nonlinearity and non-Hermitian physics, it is necessary to study systems that possess variable nonlinearity and controllable gain and loss.
Particularly suitable systems for this study are those where matter couples with light, as they allow to take advantage of the difference in physical properties of these components. For example, it was demonstrated that exceptional points appear naturally in light-matter systems of exciton-polaritons and subtreshold Fabry-Perot lasers <cit.>. Moreover, it is possible to induce exceptional points by manipulating spatial and spin degrees of freedom of exciton-polaritons in various configurations <cit.>. In the case of bosonic condensates of exciton-polaritons, it was predicted that a dissipative first-order phase transition line exists in the phase diagram <cit.>, similar to a critical point in a liquid-gas phase transition. According to this study, this phase transition line exists in the regime of strong light-matter coupling and has an endpoint which corresponds to an exceptional point <cit.>.
In this letter, we investigate a non-Hermitian model describing interaction between two oscillating modes. We use it to examine the significance of nonlinearity in a non-Hermitian phase transition. This model can describe light and matter modes in exciton-polariton condensation and lasing, as investigated in Ref. <cit.>. We find that the model is incomplete unless nonlinear saturation of gain is taken into account. Importantly, saturation increases the complexity of the phase diagram and leads to the appearance of bistability. It has also profound consequences on the physics of the system. We find that while the first-order phase transition line with an endpoint is present, the equivalence of the endpoint to an exceptional point as found in <cit.> is no longer valid in the general case. The phase diagram of Ref. <cit.> can be restored in the limit of strong saturation. In contrast to the results of Ref. <cit.>, the transition between solutions can occur also in the weak coupling regime. This suggests that the second threshold from polariton to photon lasing, observed in experiments <cit.>, may be related to a dissipative phase transition in the weak coupling regime. Moreover, we find a regime of permanent Rabi-like oscillations between two stable solutions. This regime corresponds to a line in the phase diagram that ends with an exceptional point.
Model and Analytical Solutions. We consider a system of two coupled oscillators described by a non-Hermitian Hamiltonian with gain and loss. The imbalance between gain and loss in a linear system leads in general to solutions exponentially growing or decaying in time. To obtain non-trivial stationary solutions it is necessary to include nonlinearity. Here we adopt cubic nonlinearity that appears naturally in symmetric systems with no dependence on the complex phase. Such a model can be realized, among many other physical systems, in the case of cavity photons coupled to excitons, where the nonlinearity occurs only in the matter (exciton) component <cit.>. The system is described by complex functions ψ_C=n_Ce^iφ_C and ψ_X=n_Xe^iφ_X, corresponding to amplitudes of cavity photons and excitons, respectively.
The dynamics is governed by the equation iħ∂_t|Ψ⟩=H|Ψ⟩ with |Ψ⟩=(ψ_C,ψ_X)^T, where the non-Hermitian Hamiltonian H is given by <cit.>
H=([ E_C-iħγ_C ħΩ_R; ħΩ_R E_X+g|ψ_X|^2+ip ]) .
Here ħΩ_R is the coupling strength, γ_C is the decay rate of the photon field, and p represents the gain to the exciton field. This gain can be realized in practice by nonresonant optical or electrical pumping. We define the complex nonlinear coefficient as g=g_1-ig_2, where g_1 is the strength of two body interactions (Kerr-like nonlinearity) and g_2|ψ_X|^2 is the saturation term that allows to avoid instability. Spectrum of Hamiltonian (<ref>) can be found analytically
E= 1/2[E_c+ℰ+i(𝒫-ħγ_c)
±√(4ħ^2Ω_R^2+[ℰ-E_c+i(𝒫+ħγ_c)]^2)] ,
where 𝒫=p-g_2(n_X^SS)^2 and ℰ=E_x+g_1 (n_X^SS)^2. For convenience, we denote the solution associated with plus (minus) by U(L). The respective steady state analytical solutions |Ψ⟩=|Ψ_0⟩ e^-i E t
can be found from the condition Im[E]=0, that is, the imaginary part of the eigenvalue of (<ref>) must be zero. In <cit.>, it was argued that one or two real energy solutions exist in certain regions in parameter space. However, it can be seen from (<ref>) that except from special values of parameters, real energy solutions can exist only when saturation represented by g_2 is taken into account.
We will show below that accounting for the nonlinear g_2 term does in fact lead to the appearance of up to three real-energy solutions, each of them of the form (<ref>).
The condition Im[E]=0 allows one to find analytical expression for n_X^SS
(n_X^SS)^2=1/g(Re[E]-E_X-iP-(ħΩ_R)^2/Re[E]-E_C+iħγ_C).
The resulting explicit formula for n_X^SS is tedious, but for a given n_X^SS, one can find closed forms of steady state n_C^SS and φ_CX=φ_C-φ_X
n^SS_C= n^SS_X√(p/ħγ_C-(n_X^SS)^2g_2/ħγ_C) ,
φ_CX^SS= (δ-g_1(n_X^SS)^2/ħΩ_R(n^SS_C/n_X^SS-n_X^SS/n^SS_C)-iγ_C n^SS_C/Ω_R n_X^SS) ,
where we introduced photon-exciton energy detuning δ=E_C-E_X.
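As an illustration, the eigenvalues of the Hamiltonian above and their coalescence at the exceptional point can be checked numerically with the following sketch, using dimensionless toy parameters (ħ = 1) rather than physical values.

```python
import numpy as np

def spectrum(nX2, p, E_C=0.1, E_X=0.0, Om=1.0, gC=1.0, g1=0.2, g2=0.05):
    """Eigenvalues of the 2x2 non-Hermitian Hamiltonian for a given
    exciton density |psi_X|^2 = nX2 (dimensionless toy parameters, hbar = 1)."""
    g = g1 - 1j * g2
    H = np.array([[E_C - 1j * gC, Om],
                  [Om,            E_X + g * nX2 + 1j * p]])
    return np.linalg.eigvals(H)

delta, g1, g2, Om = 0.1, 0.2, 0.05, 1.0
nX2_EP = delta / g1                 # exciton density at the exceptional point
p_EP = Om + g2 * delta / g1         # pump at the EP, with gamma_C = Omega_R
print(spectrum(nX2_EP, p_EP))       # both eigenvalues coalesce at E = E_C
```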
Non-Hermitian Phase Transitions.
We use the analytical solutions from the previous section to determine the phase diagram of the system, looking at it from two perspectives. We analyze the steady state solutions and their multiplicity, as in Fig. <ref>(a). On the other hand, we consider the lowest-energy state among the dynamically stable ones and investigate its properties and possible transitions, see Fig. <ref>(b). The latter approach is equivalent to analyzing a system that is weakly coupled to an energy sink, which does not perturb the spectrum, but picks the lowest-energy stable solution after a sufficiently long evolution due to its energetic stability.
In the case when the conservative nonlinearity g_1 is stronger than the dissipative nonlinearity g_2, representative phase diagrams are shown in Fig. <ref>. We focus on the blue-detuned case (δ>0), which is much richer than the red-detuned case. In Fig. <ref>(a) the number of steady state solutions is shown. Up to three non-zero solutions, corresponding to both upper and lower branches of Eq. (<ref>), can exist, which results from the nonlinearity of the system. The region of zero solutions corresponds to the situation where pumping cannot overcome losses and no lasing nor polariton condensation occurs. For given Ω and γ_C, increasing pumping p can lead to one or several thresholds, as indicated with horizontal lines.
Special points in the phase diagram (marked by stars in Fig. <ref>) include the exceptional point (EP) and the endpoint of the first-order phase transition (ET). In contrast to <cit.>, we find that in general they do not coincide. To determine the position of the EP, one can find the following conditions for which the real and imaginary parts of eigenvalues are zero in Eq. (<ref>)
p^EP=ħΩ_R+g_2δ/g_1 , γ_C=Ω_R .
This can occur when n_X^SS=δ/g_1, that is, whenever the system is blue-detuned (δ>0).
On the other hand, the ET point is clearly visualised in the phase diagram that takes into account the energetic instability in panel Fig. <ref>(b). The first-order phase transition line begins at the ET point in the weak coupling regime (γ_C>Ω_R) and follows the arc represented by the ET-EP line towards the EP point. Below the EP, the phase transition line follows into the strong coupling regime. We conclude that, contrary to the results of <cit.>, the first-order phase transition can occur also in the weak coupling regime. This can be explained by a simple physical argument. Since the pumping influences the effective photon-exciton detuning δ̃=E_C-(E_X+g (n^SS_X)^2), the increase of pumping can change of the sign of δ̃, leading to an abrupt change of the lowest-energy state in the weak-coupling regime.
Figure <ref>(d) shows the dependence of the real part of the energy of solutions shown in Figs. <ref>(a,b), in the vicinity of the ET-EP line. As can be seen, the ET point is the point of the transition to bistability. On the other hand, the EP point corresponds to a turning point in the bistability curve. The cross-section including the EP point (γ_C=Ω) is depicted in more detail in Figure <ref>(c), which shows the occurrence of two stable branches from the upper and lower branches of Eq. (<ref>) and one unstable branch. At the EP, the unstable upper branch coalesces with the lower stable branch, leading to the first-order phase transition. The cross-section with the ET point (γ_C>Ω_R) is shown in Fig. <ref>(e), where the bistability curve closes, and the transition from the upper to lower branch becomes smooth. This leads to the possibility to encircle the exceptional point as indicated with arrows in Fig. <ref>(d).
Interestingly, additional features that have an influence on the physics of the system can occur in the strong coupling case (γ_C<Ω_R), see Fig. <ref>(f). These include the disappearance of one of the solutions in a certain parameter range and the dynamical instability of the lowest-energy branch (marked with orange line). Consequently, the upper, higher-energy solution may become the only viable solution despite the existence of lower-energy solutions.
In the opposite case when the dissipative nonlinearity dominates over the conservative one, we find that the phase diagram of energetically stable solutions recovers the results of <cit.>, see Fig. <ref>. As the dissipative nonlinearity is increased, the length of the ET-EP arc decreases, and finally the two points coalesce. In this specific case, the exceptional point is characterized by a jagged crest in the phase diagram, embodying a third-order exceptional point (see supplementary materials). This phenomenon arises from the coalescence of two stable solutions and a single unstable solution.
Permanent Rabi-Like Oscillations: R-Line.
Our analysis allows to predict that a peculiar oscillating state may form, as indicated in Fig. <ref>(a) by R-Line. In this case, long evolution leads to permanent oscillations, resembling Rabi oscillations in a two-level system, instead of stationary solutions. To explain this phenomenon, we examine imaginary and real parts of eigenvalues given in Eq. (<ref>). An example is shown in Figs. <ref>(a) and <ref>(b).
In general, two kinds of stationary solutions corresponding to Im[E(n_X)]=0 may exist. As shown in Fig. <ref>(a), in this particular case there are two solutions from the upper branch and one solution from the lower branch (the black dashed vertical lines denote
the emergent solutions). Our interest is in solutions from upper and lower branches that occur at the same n_X, while there is a gap in respective real parts, see Fig. <ref>(b). Such solutions occur when p=(g_2/g_1)δ+ħγ_C, which corresponds to a straight line (marked by R-line) in the phase diagram of Fig. <ref>(c).
An example of such permanent oscillations is shown in Fig. <ref>(c). After an initial transient time, the oscillations stabilize at a certain amplitude. When different initial conditions are used, the system may end up in one of the steady state solutions, as shown in Fig. <ref>(d). The frequency of oscillations is given by the gap, Ω=2√(Ω_R^2-γ_C^2). When the parameters of the system approach the exceptional point along the R-line, the gap decreases and the period of oscillations increases. At the exceptional point (Ω_R=γ_C), the solutions coalesce and the period becomes infinite. Therefore, the exceptional point is the endpoint of the R-line.
Discussion. We showed that, contrary to previous understanding, non-Hermitian polariton systems exhibit first-order phase transition with an endpoint that in general does not coincide with the exceptional point. Explanation of this phenomenon requires taking into account the nonlinear gain saturation and the consideration of the bistability curve. While the endpoint of the phase transition is where the bistability appears, the exceptional point is where the stable and unstable solutions coalesce. In addition, we demonstrated that first-order phase transition may occur in the weak coupling regime, and that for certain values of parameters one can predict permanent oscillations, whose frequency vanishes at the exceptional point.
The predicted results contribute to the ongoing debate surrounding polariton/photon lasing. The presence of an exceptional point has been identified as the possible underlying factor for the observed second threshold <cit.>. Here, we provide further insights by identifying several other thresholds in phase diagrams and pointing out that multiplicity and stability of solutions are also crucial factors, so far overlooked.
The presented results may be applied to a much broader class of systems. The non-Hermitian Hamiltonian represented by the 2×2 matrix in Eq. (<ref>) describes in general an arbitrary two-mode oscillatory system with gain and loss in the two modes, and the cubic nonlinearity in one of them. This term appears naturally in any oscillatory system in the first order as long as the nonlinearity respects the global U(1) symmetry of the oscillations. Examples include not only all quantum mechanical systems such as Bose-Einstein condensates, but also high-frequency coupled classical oscillators, where the phase of oscillations is irrelevant on the time scale of a slowly varying envelope. The results presented here should be applicable to any such system that exhibits exceptional points and nonlinearity.
A.R. and M.M. acknowledge support from National Science Center, Poland (PL), Grant No. 2016/22/E/ST3/00045. A.O. acknowledges support from Grant No. 2019/35/N/ST3/01379.
| http://arxiv.org/abs/2307.04241v1 | 20230709181416 | Light trapping using dimer of spherical nanoparticles based on titanium nitride for plasmonic solar cells | ["Nowshin Akhtary", "Ahmed Zubair"] | physics.optics | ["physics.optics", "physics.app-ph"] |
1 Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
*[email protected]
Light-trapping mechanisms with plasmonics are an excellent way to increase the efficiency of photovoltaics. Plasmonic dimer-shaped nanoparticles are effective in light absorption and scattering, and there is hardly any research on dimer TiN nanoparticle-based PV. This paper demonstrated that titanium nitride could be a suitable substitute for other plasmonic materials in the visible and near-infrared spectrum. We designed a TiN-based spherical dimer plasmonic nanoparticle for photovoltaic applications. We conducted comparison analyses with the metals Ag, Au, and Al to ascertain the performance of TiN as a plasmonic material. Silicon had an average absorption power of ∼19%, and after incorporating TiN nanoparticles, the average absorbed power increased significantly to ∼75% over the whole spectral range. The TiN dimer nanoparticle had the highest absorption cross-section, with a Q_ab value of ∼6.2 W/m^2, greater than those of Ag, Au, and Al, and its fraction of light scattered into the substrate was greater than those of Au and Al and comparable to that of Ag. The TiN dimer exhibited better absorption enhancement, g, over the whole spectral range than the Ag, Au, and Al dimers for a radius of 15 nm, with a peak value greater than 1. The maximum optical absorption efficiency of the plasmonic TiN nanostructures was ∼35.46%.
§ INTRODUCTION
Coal, natural gas, oil, biomass, and nuclear energy are non-renewable energy sources that are becoming scarcer and more depleted every day. Therefore, the abundance, affordability, and low environmental impact of renewable energy sources are all very advantageous. The efficiency of photovoltaic (PV) cells powered by renewable energy sources has been a great focus of research<cit.>. Light absorption and the frequency of electron-hole pair formation have a significant role in how well PV cells perform. The most common PV cells in CMOS technology are based on a silicon absorber layer with the limitation of high production costs and thinner absorption volume. There are many ways of increasing efficiency, such as surface texturing<cit.>, metal nanograting <cit.>, tandem structure <cit.>, optical absorption enhancement by increasing the effective optical path length or trapping light in the cell by introducing light scatters in the solar cell <cit.>. A suitable choice of materials for active layers can ensure better photon absorption, generating electron-hole pairs. However, only efficient absorption cannot generate efficient electron-hole pairs and, consequently, photo-voltage. The recombination process creates a loss of charge carriers; therefore, an optically thick semiconductor is unsuitable for better charge carrier separation. Additionally, more materials are needed for thicker semiconductors, which is cost-ineffective and wasteful. Thus, a thinner semiconductor layer is preferred.
Introducing metallic nanoparticles (NPs) has created an alternative approach to improving absorption efficiency. Surface plasmon resonance (SPR) can significantly enhance EM waves by placing plasmonic structures in the active layer. This phenomenon ensures enhanced light absorption providing strong scattering between the intense plasmon field and the active layer <cit.>. Localized surface plasmon resonances (LSPRs) generate light scattering by NPs. The LSPR occurs when the frequency of the optical photon coincides with the natural frequency of the collective vibration of conduction electrons in NPs, leading to strong near-field electromagnetic enhancement, acute spectral absorption, and scattering peaks <cit.>. The enhancement of optical absorption of NPs is the foremost attribute of LSPR <cit.>. Light absorption and photo-current have been improved by using the LSPR phenomena <cit.>. Much recent research has centered on the trade-off between optimal thickness and maximum field enhancement. The most significant improvement variables ordinarily happen when the junction between the absorber and NPs is illuminated by polarized light. Complex structures can achieve a sensitivity that leads to near-infrared (NIR) sensing and plasmon hybridization. The NPs structure can get over this restriction since it extends the inside field to the outside environment, which results in a considerable boost in detection sensitivity.
Plasmonic materials can support electrons or plasmons across a broad spectrum from infrared to ultraviolet solar radiation. Until recently, researchers were confined to noble metals like Ag and Au as plasmonic materials. Ag and Au are frequently used plasmonic metals and optical metamaterials because of their strong DC conductivity and low resistive losses. When an electron in a metal's valence band absorbs a photon and jumps to the Fermi surface, or when an electron close to the Fermi surface absorbs a photon and fills an unoccupied conduction-band state, interband absorption occurs, causing excessive loss in conventional plasmonic materials. Ordinary metals have several other drawbacks, including the large magnitude of the real part of the permittivity, the difficulty of tuning or balancing their optical properties, and their high cost.
Due to the high optical loss of metals, alternative materials with minimal ohmic loss may be preferred for plasmonic devices. To reduce the interband transition loss, many reports utilized alternative plasmonic NPs <cit.>. Conventional plasmonic materials have many shortcomings, leading researchers to seek better alternatives. Alternative plasmonic materials have a real permittivity of the same order as the noble metals; hence, their optical response can be readily tuned to match design requirements. Conventional plasmonic metals degrade when exposed to air/oxygen or moisture, causing further problems in device fabrication and integration. These factors directly affect the optical properties and increase the optical loss, resulting in larger values of the imaginary part of the dielectric function and rendering such metals incompatible with conventional silicon fabrication methods. Metal nitrides are a better alternative to overcome these shortcomings. Among them, titanium nitride (TiN) is a non-stoichiometric, interstitial compound with a high concentration of free carriers. It is refractory and stable, and its optical properties can be tuned by changing its geometric structure <cit.>. Moreover, it is compatible with silicon CMOS technology <cit.> and offers manufacturing and integration advantages that can help overcome the challenges. There are several reports on monomer spherical and hemispherical TiN NPs <cit.>. However, no dimer spherical TiN NP-based plasmonic solar cells have been reported.
This paper employed the finite-difference time-domain (FDTD) method to systematically investigate the scattering cross-section and absorption enhancement by spherical dimer TiN NPs for photovoltaic application. In order to ascertain the total scattering cross-section, the percentage of light scattered into the substrate, the absorption cross-section, and the spatial mapping of the electric field in this plasmonic nanosystem, we first built and optimized the dimer of the spherical NPs. We further investigated the sensitivity of the response to the polarization of the source. We gained insight into how the shape of the NPs enhanced the functionality of solar cells when the NPs were embedded into them. We investigated the plasmonic core-shell configuration and analyzed the effect of dielectric coatings on the NPs. Our work provided insights into using TiN in photovoltaic cells.
§ METHODOLOGY
§.§ Structural Design
We developed an alternative plasmonic material, TiN-based spherical dimer NP on a semi-infinite crystalline silicon substrate, as can be seen in Fig. <ref> (see Fig. S1 of Supplement 1). In the visible and longer wavelengths, TiN displays localized surface plasmon phenomena and metallic characteristics <cit.>. The plasmonic particles were separated from the semi-infinite silicon absorption layer by a thin Si_3N_4 layer as surface passivation. We compared cross-sections of NPs based on conventional noble plasmonic metals with TiN alternative plasmonic NPs. The size of the particles was varied, and their properties were analyzed. Here, t_1 and t_2 represent the thin-film and substrate thicknesses, respectively. The source's polarization angle is represented by θ, r represents the radius of the sphere, and d represents the distance between the nanospheres of a dimer.
§.§ Simulation Methods
We applied the FDTD method, where Maxwell's equations were solved numerically, to study the mentioned nanosystems. The simulation dimensions of the FDTD were 1.2 μm × 1.2 μm × 1.25 μm. A mesh size of 0.4 nm was applied around the NPs. The source was adjusted for polarization perpendicular to the surface normal of the particles from the air side. The particles were incident to the total-field scattered-field (TFSF) plane wave along the negative z-axis. The incident source was a uniform wave with a prominent wavelength range of 550–1100 nm, which comprised the solar spectrum's highest feasible irradiance (AM 1.5). A plane wave with a TFSF was utilized to separate the incident field from the scattered field to examine the optical characteristics of NPs. The scattering characteristics were investigated using an external monitor. The spatial electric field mapping was performed by adjusting a frequency-domain power monitor. We used light scatterers to increase the light trapping efficiency, improving the absorber layer's absorption. We estimated the scattering and absorption cross-sections, and optimal values were obtained for better PV application by adjusting various factors. The electric and magnetic fields around the particle were calculated by converting the time domain into the frequency domain using a Fourier transform. The radial Poynting vector, S(ω), was calculated from the electric field, E(ω), and magnetic field H(ω) as a function of angular frequency, ω. In the scattered field region, the total of power P_s was determined along the +x, +y, +z, –x, –y, and –z directions. The ratio of the power in the scattered field region inside the substrate of the absorber layer to the power in the scattered field region in the air and the absorber layer is known as the percentage of light scattered into the substrate, f_sub. The total scattering cross-section, Q_T(ω) is defined as the sum of the power per unit area scattered in all directions divided by the power per unit area of the incident beam.
Q_T(ω)= P_s(ω)/I(ω).
Here, I(ω) is the incident power intensity as a function of ω. The absorption cross-section is a measure of the probability of an absorption process. The absorption cross-section, Q_ab, was defined as the total absorbed power divided by the power per unit area of the incident light. It is related to the attenuation of photons with depth through
dN/dz=-NnQ_ab
Here, dN is the number of photons absorbed between the depths z and z+dz, N is the number of photons penetrating to depth z, and n is the number of absorbing molecules per unit volume; integrating this relation gives the Beer-Lambert attenuation N(z)=N(0)e^-n Q_ab z. The monitors placed outside the TFSF source determined the scattering cross-section <cit.>.
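As a concrete illustration of this post-processing step, the following minimal Python sketch shows how Q_T, Q_ab and f_sub can be assembled from the net powers recorded by the scattered-field box monitors; the array names and placeholder values are ours and merely stand in for the exported FDTD monitor data.

import numpy as np

# Placeholder spectra standing in for exported FDTD monitor data (one value per wavelength).
wavelengths = np.linspace(550e-9, 1100e-9, 200)      # 550-1100 nm band used in the simulations
P_scat_air = 1.0e-14 * np.ones_like(wavelengths)     # power scattered through the box faces in air [W]
P_scat_sub = 3.0e-14 * np.ones_like(wavelengths)     # power scattered through the face inside the Si substrate [W]
P_abs = 0.5e-14 * np.ones_like(wavelengths)          # net power absorbed inside the nanoparticles [W]
I_inc = 1.0e3 * np.ones_like(wavelengths)            # incident source intensity [W/m^2]

P_s = P_scat_air + P_scat_sub    # total scattered power, summed over the +/-x, +/-y, +/-z monitors
Q_T = P_s / I_inc                # total scattering cross-section
Q_ab = P_abs / I_inc             # absorption cross-section
f_sub = P_scat_sub / P_s         # fraction of the scattered light that enters the substrate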
We methodically considered the impact of the NPs' structural characteristics on Q_T, on the light scattered into the substrate Q_sc, on f_sub, and on Q_ab. We compared the effects of alternative plasmonic NPs to those of noble plasmonic metals and comprehensively assessed the capacity of the added NPs to enhance absorption within the absorber layer. Moreover, to demonstrate the effectiveness of the dimer NPs in PV cells, we calculated the proposed structure's absorption enhancement and light absorption efficiency.
§ RESULTS AND DISCUSSION
§.§ Effect of different material-based plasmonic spherical dimer nanoparticle
We simulated spherical dimer NPs made of different materials and tracked their scattering and absorption behavior. Here, t_1, t_2, and r were taken as 30 nm, 250 nm, and 100 nm, respectively. We evaluated the performance of NPs made of different materials, including Au, Ag, Al, and TiN. Moreover, we explored a core-shell configuration consisting of a TiN NP with a Si_3N_4 coating to maximize the performance. The foremost critical factor representing the path-length enhancement of a scattering light-trapping structure is f_sub <cit.>. As can be seen in Fig. <ref>(a), the overall f_sub followed the order Ag > Au > TiN > TiN with Si_3N_4 coating > Al. For the 800 nm to 1100 nm wavelength range, the f_sub of the TiN NP was greater than those of the Ag-, Au-, and Al-based NPs. When we varied the dimer materials, the peak value of Q_T was 19 W/m^2 for the Au NP, and the values for Ag and Al were comparable to Au. The TiN NP had comparable values of Q_T and Q_sc from 650 to 1000 nm. After adding the Si_3N_4 coating to the TiN NP, the Q_T and Q_sc performed better for 850 to 1000 nm, as can be seen in Figs. <ref>(b)-(d). For scattering applications, a Si_3N_4 coating can therefore be used on TiN NPs. Q_ab increased for the TiN and the Si_3N_4-coated TiN NPs, while it was negligible for the Ag, Au, and Al NPs.
When a plane wave collides with an object or scatterer, its energy is diverted in all directions, so it is crucial to analyze the optical properties of the NPs, including the scattering cross-section and the electric field distribution. Electric field maps in the xy plane for the different spherical dimer NP materials are shown in Fig. <ref>. LSPR modes can be produced by these structures. Plasmonic NP dimers are the analog of two atoms sharing electrons through bonding molecular orbitals: the excited dipoles on the two spheres of a dimer may couple along the dimer axis, which is analogous to a σ-type orbital, or perpendicular to it, which is analogous to a π-type orbital. Four plasmon modes thus emerge for dimers, in analogy with homonuclear diatomic molecules such as hydrogen (H_2), nitrogen (N_2), oxygen (O_2), or a halogen (X_2) <cit.>. As shown in Fig. <ref>(f), when the charges of the two spheres oscillate in the same direction, charge accumulates and electric field enhancement is observed. This occurs for both the in-phase antibonding mode, which has the highest energy, and the in-phase bonding dipolar plasmon mode, which has the lowest energy (longest wavelength) (see Fig. <ref>(f)). Conversely, when the charges oscillate in different directions there is no field enhancement, which corresponds to the out-of-phase bonding and antibonding modes <cit.>. As can be seen in Figs. <ref>(d)-(e), the scattering spectra of TiN and of TiN with a dielectric Si_3N_4 coating exhibited an unprecedented homogeneity for the two spheres. The in-phase bonding plasmon mode was observed for x-polarized light.
The induced dipole moments resulted in two bright modes. Therefore, the accumulation of a high free-charge density around the surface and center of the dimer resulted in the enhancement of the electric field.
The quasistatic dipole approximation was used to compute the electric field (E) enhancement in the yz and xy planes around the surface of the Ag, Au, Al, and TiN dimers presented in Fig. <ref>. Due to a lower real permittivity than Ag and Au, the magnitude of the field enhancement in the TiN nanospheres was slightly smaller than those of Ag and Au. The E-field intensity on the yz plane at x = 0, which passes through the center of the dimer, can be seen in Fig. <ref> for the different material-based dimer nanospheres. For Ag, Au, and TiN with a Si_3N_4 coating, the charge distribution was high at the center of the dimer, as presented in Figs. <ref>(a), (b) and (e). For Al and TiN, the charge distribution at the center was lower than for Ag and Au, as can be seen in Figs. <ref>(c) and (d). Here, the charges oscillate in the same direction, so charge accumulated and field enhancement occurred. The resulting LSPR modes can be utilized in numerous detection applications <cit.>. The LSPR mode of TiN with the Si_3N_4 coating in the core-shell configuration was blue-shifted, and the peak value of the electric field increased compared to the bare TiN dimer.
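For a sphere that is small compared with the wavelength, the quasistatic dipole picture invoked above can be summarized by the Clausius-Mossotti factor, from which the near-field enhancement at the pole of the sphere along the polarization axis follows directly. The sketch below is illustrative only; the permittivity values are placeholders rather than fitted optical data for TiN or the noble metals.

import numpy as np

def pole_field_enhancement(eps_particle, eps_medium=1.0):
    # Quasistatic dipole approximation: the field just outside the sphere at the pole
    # along the polarization axis is E/E0 = 1 + 2*(eps_p - eps_m)/(eps_p + 2*eps_m).
    cm = (eps_particle - eps_medium) / (eps_particle + 2.0 * eps_medium)
    return np.abs(1.0 + 2.0 * cm)

# Placeholder permittivities at a single wavelength (not measured material data):
print(pole_field_enhancement(-2.5 + 0.8j))   # lossy, "TiN-like" value -> moderate enhancement
print(pole_field_enhancement(-2.05 + 0.3j))  # closer to the eps_p = -2*eps_m resonance -> larger enhancement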
To determine the effectiveness of the NPs in photovoltaic cells, we calculated the absorbed power in each layer of the nanostructure, which consists of dimer spherical NPs on a 30 nm thin Si_3N_4 underlayer on a Si substrate. The NPs were composed of Ag, Au, Al, or TiN. The divergence of the Poynting vector was used to compute the absorption per unit volume, given by
p_abs=-1/2 real (∇⃗·S⃗).
However, divergence calculations are frequently quite susceptible to numerical errors. Consequently, the simplest method for calculating absorbed power is,
p_abs=-1/2 real (iω E⃗·D⃗^*).
It can be modified as
p_abs=-1/2 ω |E|^2 imag(ϵ).
Here, D is the electric displacement field, and ϵ is the permittivity. The standard AM 1.5 solar spectrum and the absorption of the TiN NP-based solar cell are presented in Fig. <ref>(f). Solar light absorption is very efficient for wavelengths from 300 nm to 500 nm; for wavelengths longer than 500 nm, the absorption decreased gradually. Silicon had an average absorbed power of ∼51% over the 400-500 nm range and ∼19% over the whole spectral range, as presented in Fig. <ref>. The Si_3N_4 layer enhanced the absorption in the 400-500 nm spectral range. After the incorporation of TiN NPs, the absorbed power increased significantly to ∼75% over the whole spectral range, as presented in Fig. <ref>(d). In comparison to Ag, Au, and Al NPs, TiN NP integration yielded a substantially higher absorbed power.
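A minimal sketch of how the absorbed power of one material region is accumulated from the gridded field data is given below; the array shapes, grid spacing and Im(ϵ) value are placeholders of our own, and the relative-permittivity convention makes the vacuum permittivity ε_0 appear explicitly (sign conventions for the time dependence aside).

import numpy as np

def absorbed_power(E2, eps_imag_rel, omega, dV):
    # Absorbed power of one material region: sum over grid cells of
    # 0.5 * omega * eps0 * Im(eps_rel) * |E|^2 * dV.
    eps0 = 8.854e-12
    return 0.5 * omega * eps0 * eps_imag_rel * np.sum(E2) * dV

omega = 2.0 * np.pi * 3.0e8 / 600e-9      # angular frequency at 600 nm
dV = (5e-9) ** 3                          # cell volume of a uniform 5 nm grid (illustrative)
E2_si = np.random.rand(120, 120, 50)      # placeholder |E|^2 samples in the silicon region
P_si = absorbed_power(E2_si, eps_imag_rel=0.1, omega=omega, dV=dV)
# Dividing P_si by the source power injected into the simulation volume gives the
# percentage of absorbed power quoted for each layer in the text.
print(P_si)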
§.§ Absorption enhancement by TiN-based dimer NP
The absorption enhancement quantifies the increase in absorption due to the addition of NPs to the solar absorber layer. The absorption enhancement spectrum, g, is given by
g(λ)=EQE_np(λ)/EQE_bs(λ).
Here, EQE_np is the device's external quantum efficiency when plasmonic nanoparticles were incorporated on top of the substrate and EQE_bs is the external quantum efficiency of the bare substrate. In this section, t_1 and t_2 were taken to be 30 nm and 250 nm, respectively. We simulated the spectra of g for Ag, Au, Al, and TiN plasmonic dimers on a silicon substrate for r = 15 nm, 25 nm, and 30 nm, presented in Figs. <ref>(a)-(c). TiN exhibited better absorption than the Ag, Au, and Al dimers for r = 15 nm. For r = 25 nm, the g of TiN was between 0.9 and 1 (∼1) over the whole spectral range, which was better than Au and Al. For r = 30 nm, the g of TiN was better than Al and comparable to Ag and Au for 400 to 800 nm, and greater than Ag, Au, and Al for the range 800 nm to 1100 nm. For TiN, Al, Au, and Ag, the average enhancement, G, was found to be 0.997, 0.991, 0.995, and 0.995, respectively.
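The band-averaged enhancement G reported above is simply the mean of g(λ) over the simulated spectral window; a minimal sketch with placeholder EQE spectra (the functional forms below are invented purely for illustration) is:

import numpy as np

wl = np.linspace(400e-9, 1100e-9, 300)
EQE_np = 0.70 + 0.05 * np.sin(wl * 1.0e7)   # placeholder: substrate with nanoparticles
EQE_bs = 0.70 * np.ones_like(wl)            # placeholder: bare substrate

g = EQE_np / EQE_bs                          # enhancement spectrum g(lambda)
G = np.trapz(g, wl) / (wl[-1] - wl[0])       # average enhancement over the band
print(G)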
The light absorption efficiency (LAE) was calculated by,
LAE = ∫_400^1100 I(λ) A(λ) dλ/∫_400^1100 I(λ) dλ.
Here, I(λ) is the incident light intensity, and the absorbance, A(λ), was calculated by
A (λ) = 1 - R (λ) - T (λ).
Here, T(λ) and R(λ) are the transmittance and reflectance of the structure. For the TiN plasmonic nanosphere on a kesterite substrate, we determined the LAE. The values of LAE for r = 15 nm were found to be 35.46% and 33.78% for TiN and Ag, respectively.
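Assuming the tabulated AM 1.5 intensity and the simulated reflectance and transmittance spectra have been interpolated onto a common wavelength grid, the LAE defined above reduces to two numerical integrals; the arrays below are placeholders rather than the actual spectra.

import numpy as np

def light_absorption_efficiency(wl, I_am15, R, T):
    # LAE = int I(lambda) A(lambda) dlambda / int I(lambda) dlambda with A = 1 - R - T,
    # evaluated over the 400-1100 nm window.
    A = 1.0 - R - T
    mask = (wl >= 400e-9) & (wl <= 1100e-9)
    return np.trapz(I_am15[mask] * A[mask], wl[mask]) / np.trapz(I_am15[mask], wl[mask])

wl = np.linspace(300e-9, 1200e-9, 400)
I_am15 = np.ones_like(wl)        # placeholder for the tabulated AM 1.5 spectrum
R = 0.10 * np.ones_like(wl)      # placeholder reflectance
T = 0.05 * np.ones_like(wl)      # placeholder transmittance
print(light_absorption_efficiency(wl, I_am15, R, T))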
We calculated g for different radii of the TiN plasmonic dimer on a silicon substrate, as can be seen from Fig. <ref>(d). The g decreased as r increased across the wavelength range, which agrees well with a previous study <cit.>. This happened as a result of the plasmonic NPs' strong forward scattering and weak absorption of light at the various radii. While backward scattering prevented absorption, forward scattering promoted it. Spherical dimer plasmonic NPs with larger radii often have larger scattering cross-sectional areas; this increase in metal NP size and the excitation of higher-order modes can control the light scattering, which either improves or diminishes the efficiency with which light is absorbed into the substrate <cit.>.
§.§ Impact of light polarization
As the polarization of light influences light scattering and absorption, we varied the polarization angle of the light source to compute the optical characteristics of the NPs <cit.>. For the structure design with optimized scattering and absorption, we changed the source's polarization angle for the dimer spherical TiN NPs and found the optimal polarization angle, θ. Here, t_1 and r were taken as 30 nm and 100 nm, respectively. The angle θ was varied from 0^∘ to 90^∘. For wavelengths ranging from 550 to 760 nm, the value of f_sub decreased with increasing θ, which is apparent from Fig. <ref>(a). With a reduction in θ, f_sub dropped remarkably for the wavelength range of 760 nm to 1100 nm. As θ was increased, it is evident from Fig. <ref>(b) that Q_T increased significantly, and the peak wavelength of Q_T red-shifted. As can be seen from Fig. <ref>(c), Q_sc increased as θ was increased, and from Fig. <ref>(d), Q_ab increased as the angle decreased from 90^∘ to 15^∘.
Figs. <ref>(a)–(f) show the E-field intensity for the dimer NP as θ is varied. The polarization angle defines the direction of the electric and magnetic fields. When the light was polarized in the x-direction (i.e., θ=0), field enhancement was observed along the x-axis. As the angle changed, the induced dipole changed with θ and, consequently, in-phase bonding and antibonding plasmon modes occurred accordingly <cit.>. At θ=15^∘, the charges of the dimer oscillated in the same direction, so charge accumulated and electric field enhancement was observed. This phenomenon occurred in both the bonding and anti-bonding modes when they are in-phase plasmon modes. The direction of charge oscillation changed with θ, producing an induced dipole moment <cit.>. Figs. <ref>(a)-(c) illustrate the strong electric field enhancement observed in the x-direction when the charge oscillation was increasingly aligned with the x-direction. As the polarization angle rose from 45^∘ to 90^∘, strong electric field enhancement was observed in the y-direction as the charge oscillation alignment changed to the y-direction, and the in-phase antibonding mode was observed, as illustrated in Figs. <ref>(d)-(f).
The E-field intensity on the yz plane, which passes through the center of the dimer, can be seen in Figs. <ref>(a)-(f) as θ is varied. When θ increased from 15^∘ to 45^∘, the effective induced dipole moment decreased, giving rise to the in-phase antibonding mode as the E-field polarization alignment changed toward the y direction, as presented in Figs. <ref>(a)-(c). When θ increased from 60^∘ to 90^∘, the charge was distributed along the y-direction, i.e., along the polarization direction of the E-field. Consequently, the out-of-phase antibonding mode was apparent when θ was near 90^∘, as can be seen in Figs. <ref>(d)-(f).
§.§ Impact of the distance between the spheres of dimer
We considered the impact of changing the distance between the spheres of the TiN dimer NPs for light source polarization angles of 0^∘ and 30^∘. We simulated dimer spherical NPs with various distances between the spheres for θ=0^∘, as can be seen from Fig. <ref>. We varied the distance, d, from 0 nm to 50 nm to determine the structure with the optimized scattering cross-section. Here, t_1 and t_2 were taken as 30 nm and 250 nm, respectively. The f_sub decreased as d increased from 0 nm to 20 nm for the 550 nm to 850 nm range. When d = 50 nm, the separation was large enough that the spheres started to behave like independent single spheres; as a result, f_sub increased. When d = 0 nm, the Q_T spectrum was the lowest, and it increased as d increased. The Q_sc decreased as d increased from 0 to 20 nm for the 550 nm to 820 nm range; for wavelengths longer than 820 nm, Q_sc increased with increasing d. Q_ab was highest when d = 10 nm, and for the wavelength range 550 to 750 nm, Q_ab decreased with increasing d. For the dimers with d = 50 nm, the interparticle coupling was essentially lost and the structure behaved like two independent monomer nanospheres.
We simulated a spherical dimer NP for a source polarization angle of 30^∘, varying the distance between the spheres, as can be seen in Fig. <ref>. Here, t_1 and t_2 were taken as 30 nm and 250 nm, respectively. The f_sub and Q_sc were comparatively higher over the whole spectral range for d = 0 nm and lowest for d = 20 nm. When d was smaller than 50 nm, f_sub and Q_sc increased as d decreased, and Q_T increased as d increased.
The Q_T was highest for d = 10 nm and performed best in the wavelength range 740 to 880 nm. The dimers performed better when there was no gap between the spheres. As the polarization angle changed from 0^∘ to 30^∘, the scattering spectra red-shifted.
§.§ Dependency on the radius of the dimer spherical NP
We simulated spherical dimer NPs with various radii and observed their optical characteristics, as seen in Fig. <ref>. To find the best structure in terms of scattering cross-section, we varied r from 50 nm to 120 nm. When the radius was 50 nm, Q_T was highest, with a peak of almost 50 W/m^2.
The most important element to consider when modeling the path-length enhancement of a light-trapping structure is f_sub <cit.>. As the radius increased, the value of f_sub decreased significantly <cit.>. Over the whole spectral range, f_sub did not vary appreciably between the 50 nm and 70 nm radii. This made it possible to efficiently couple the part of the scattered light with a high in-plane wave vector that is evanescent in air but propagating in silicon. As can be seen in Figs. <ref>(b)-(c), Q_T and Q_sc decreased as r increased from 50 nm to 120 nm. Q_ab exhibited the highest value over the whole spectral range for r = 100 nm, with a peak at 5.5 W/m^2, as can be seen from Fig. <ref>(d). Therefore, a radius of 100 nm is optimal for dimer applications.
§ CONCLUSIONS
A dimer of spherical TiN NPs appeared as an efficient alternative plasmonic structure for plasmonic and metamaterial applications. The results of our investigation into the effect of TiN dimer spherical NPs on the enhancement of the thin solar cell's light absorption were promising. After the incorporation of TiN NPs on a silicon substrate, the average absorbed power increased significantly from ∼19% to ∼75% over the whole spectral range. TiN exhibited better absorption enhancement g and percentage of absorbed power than the Ag, Au, and Al dimers for r = 15 nm. The average enhancement, G, for TiN, Au, and Ag was found to be 0.9972, 0.9953, and 0.9954, respectively, for r = 15 nm. The TiN dimer NP had the highest Q_ab value of ∼6.2 W/m^2, which was greater than those of Ag, Au, and Al. By changing the size of the TiN dimer NPs, the absorption enhancement peak may be tailored to the required part of the solar spectrum. TiN dimer NPs were demonstrated to be beneficial when inserted in tandem solar cells because of their cost-effectiveness, abundance, and ease of manufacture.
Funding
A.Z acknowledges the Basic Research Grant (Sonstha/R-60/Ref-4747) provided by the Bangladesh University of Engineering and Technology.
Acknowledgments
N.A. and A.Z. acknowledge the technical support of the Department of Electrical and Electronic Engineering at Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh, for the completion of the work.
Disclosures
The authors declare no conflict of interest.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Supplemental document
See Supplement 1 for supporting content.
|
http://arxiv.org/abs/2307.04198v2 | 20230709150720 | Compact monotone tall complexity one $T$-spaces | [
"Isabelle Charton",
"Silvia Sabatini",
"Daniele Sepe"
] | math.SG | [
"math.SG"
] |
In this paper we study
compact monotone tall complexity one T-spaces. We use the
classification of Karshon and Tolman, and the
monotone condition, to prove that any two such
spaces are isomorphic if and only if they have equal
Duistermaat-Heckman measures. Moreover, we show that the moment
polytope is Delzant and reflexive, and provide a complete description of
the possible Duistermaat-Heckman measures. Whence we obtain a finiteness result that is analogous to that
for compact monotone symplectic toric manifolds. Furthermore, we show
that any such T-action can be extended to a toric (T ×
S^1)-action. Motivated by a conjecture of Fine and Panov, we prove that any
compact monotone tall complexity one T-space is equivariantly symplectomorphic to a Fano manifold
endowed with a suitable symplectic form and a complexity one T-action.
Compact monotone tall complexity one T-spaces
Isabelle Charton, Silvia Sabatini, Daniele Sepe
August 12, 2023
========================================================================================================
§ INTRODUCTION
Fano manifolds play an important role in complex algebraic geometry
and beyond. A compact complex manifold is Fano if its anticanonical
bundle is ample. Any such manifold is simply connected (see
<cit.> and <cit.>),
and its Todd genus equals one (see <cit.> for a definition).
Moreover, in any complex dimension there are finitely many
topological types of Fano manifolds (see <cit.>). The Fano
condition can be reformulated in Kähler terms: A compact complex manifold (Y,J) is Fano if and only if
there exists a Kähler form ω∈Ω^1,1(Y) such that c_1 = [ω], where c_1 is the first Chern class of (Y,J) – see, for instance, <cit.>. This motivates the following definition[In the literature
there are slight variations on this definition and sometimes these
manifolds are also called
symplectic Fano (see <cit.>).]: A
symplectic manifold (M,ω) is (positive) monotone if there exists
(a positive) λ∈ℝ such that
c_1 = λ [ω],
where c_1 is the first Chern class of (M,J) and J is any
almost complex structure that is compatible with ω. If
(M,ω) is positive monotone, then ω
can be rescaled so as to be equal to c_1 in cohomology.
A driving question in symplectic topology is to determine whether
every compact positive monotone symplectic manifold is
diffeomorphic to a Fano manifold. The answer is affirmative in
(real) dimension up to four by work of McDuff (see
<cit.>), and is negative starting from dimension twelve by work of Fine and Panov (see
<cit.>). To the best of our knowledge, this is an open
problem in the remaining dimensions.
Motivated by a conjecture due to
Fine and Panov (see <cit.>), which is supported by recent
results (see <cit.>), we study the above question in
the presence of a Hamiltonian torus action: An action of a
compact torus T on a symplectic
manifold (M,ω) by symplectomorphisms that is codified by a smooth T-invariant map Φ
: M →^*, called moment map (see Section
<ref> for details). If the action is effective
and M is connected, the triple
(M,ω, Φ) is called a Hamiltonian T-space. We remark
that a monotone Hamiltonian T-space is necessarily positive
monotone (see Proposition <ref>). The
following is the long-term question behind this paper.
Find necessary and sufficient conditions for a compact monotone
Hamiltonian T-space to be diffeomorphic to a Fano
variety.
A starting point to attack Problem <ref> is to consider `large'
torus symmetries. This is codified precisely by the complexity
of a Hamiltonian T-space (M,ω,Φ), which is the
non-negative integer
k:=1/2 dim M - dim T.
Intuitively, the lower the complexity, the larger the
symmetry. A Hamiltonian T-space of complexity k is called a
complexity k T-space. Problem <ref> has been already solved in complexity zero, i.e.,
for compact monotone symplectic toric manifolds. To recall the solution, we
say that two Hamiltonian T-spaces are isomorphic if they
are symplectomorphic so that the moment maps are intertwined (see
Definition <ref> for a precise statement). Let (M,ω,
Φ) be a compact monotone symplectic toric manifold. By Delzant's classification (see <cit.>), the
isomorphism class of (M,ω,
Φ) is determined by
the moment polytope Φ(M) ⊂^*, which is Delzant (see Section <ref>). Moreover, if
without loss of generality we
assume that c_1 = [ω], then, up to translation,
Φ(M) is also reflexive (see Definition
<ref>). Reflexive polytopes were introduced by Batyrev
in <cit.> in the study of toric Fano varieties and, like Fano
manifolds, enjoy special properties. For
instance, if ℓ⊂ is the standard lattice,
then there are finitely many reflexive polytopes of full dimension in ^* up to the
standard action of GL(ℓ^*) – see Corollary
<ref>.
The combination of Delzant's classification and this
result above yields finiteness for compact monotone
symplectic toric manifolds up to the following notion of equivalence: Two Hamiltonian T-spaces (M_1,ω_1,Φ_1) and
(M_2,ω_2,Φ_2) are equivalent if there exists a
symplectomorphism Ψ : (M_1,ω_1) → (M_2,ω_2) and an affine transformation a ∈GL(ℓ^*) ⋉𝔱^* of 𝔱^* such that Φ_2
∘Ψ = a ∘Φ_1. More precisely, the following holds.
For each n∈_≥ 0, there are finitely many equivalence
classes of compact symplectic toric manifolds of dimension 2n with first Chern class equal to the class
of the symplectic form.
Moreover, by Delzant's classification and the Kähler description of
the Fano condition, the following result solves Problem <ref> in complexity zero.
If is a compact monotone symplectic toric manifold, then there exists an integrable
complex structure J on M that is compatible with ω and
invariant under the torus action such that the Kähler manifold (M,J) is Fano.
In fact, Theorem <ref> proves the stronger result that any
compact monotone symplectic toric manifold is
symplectomorphic to a Fano manifold (endowed with a suitable
symplectic form).
§.§ The results
In this paper we solve Problem <ref> for tall
complexity one T-spaces, i.e., those for which no reduced space is a point (see
Definition <ref>). Such spaces have been
classified by Karshon and Tolman in a series of papers (see
<cit.>). This classification is
more involved than that of compact symplectic
toric manifolds: For instance, there are several invariants, namely
the moment polytope, the genus, the painting and the
Duistermaat-Heckman measure (see Section <ref> for
more details),
and these invariants satisfy some compatibility conditions (see
<cit.>).
In order to attack Problem <ref> in the above setting, first we
study the isomorphism classes of compact monotone tall
complexity one T-spaces. Our first main result states that, for
these spaces, the Duistermaat-Heckman measure determines all other
invariants. For our purposes, we codify this measure by the unique
continuous function that represents its Radon-Nikodym derivative with
respect to the Lebesgue measure on ^*,
which we call the Duistermaat-Heckman function (see Theorem <ref> and Definition
<ref>).
Two compact monotone tall complexity one T-spaces are isomorphic if and only if their
Duistermaat-Heckman functions are equal.
Our second main result is finiteness of compact monotone tall complexity one
T-spaces up to equivalence, which is the analog of
Theorem <ref> . To this end, we observe that the moment polytope of
a space with c_1 = λ [ω] is a reflexive Delzant polytope if and only if c_1 =
[ω] and the moment map Φ satisfies the so-called
weight sum formula (see Proposition
<ref> and Lemma
<ref>). If is
a compact monotone tall complexity one
T-space, then we may rescale ω and translate Φ
so as to satisfy the above conditions (see Corollary
<ref> and Proposition <ref>). The next
result is a crucial step towards establishing finiteness.
Given a reflexive Delzant polytope Δ, there exist finitely
many isomorphism classes of compact monotone tall
complexity one T-spaces with Φ(M) = Δ.
The following result
is a simple consequence of
Theorem <ref> and answers a question originally posed to us by Yael Karshon.
For each n∈_≥ 0, there are finitely many equivalence classes of compact tall
complexity one T-spaces of dimension 2n with first Chern class equal to the class
of the symplectic form.
Our third main result concerns the extendability of a tall complexity one
T-action on a compact monotone symplectic manifold (M,ω) to a
toric (T × S^1)-action. To the best of our
knowledge, there is no criterion to ensure such extendability for
compact tall complexity one T-spaces of dimension at least six.
If is a compact monotone tall
complexity one T-space , then the Hamiltonian T-action
extends to a symplectic toric (T × S^1)-action.
Finally, our last main result is a solution to Problem <ref> for tall complexity one T-spaces. We recall that, given a compact torus T, there exists a unique
complex Lie group T_ such that the Lie algebra of T_ is
the complexification of and T is a maximal compact subgroup of
T_ (see <cit.>). For instance, if T = (S^1)^d,
then T_ = (^*)^d. The following result
is concerned with the existence of an integrable complex structure.
If is a compact monotone tall complexity one
T-space, then there exists a T-invariant integrable
complex structure J on M that is compatible with ω such that the complex manifold (M,J) is Fano and the
T-action extends to an effective holomorphic T_-action.
As an immediate consequence of Theorem <ref>, the
following result solves a stronger version of Problem <ref>
for compact tall complexity one T-spaces.
Any compact monotone tall complexity one
T-space is equivariantly symplectomorphic to a Fano manifold
endowed with a suitable symplectic form and a complexity one T-action.
§.§ Structure of the paper
In Section
<ref> we recall fundamental properties of (compact)
Hamiltonian T-spaces. While many of the notions and results
presented therein are well-known to experts, we also set the
terminology and notation used throughout. Some basic properties of
Hamiltonian T-spaces are considered in Section
<ref>, which describes in detail the local models
near any orbit, and introduces the notion of exceptional and regular
points and orbits. These are a generalization of the corresponding
concepts introduced by Karshon and Tolman in complexity one, which
play a key role in their classification. Moreover, we also discuss the notion of
exceptional and regular sheets, which are closely related to the
notion of x-ray (see <cit.>). In Section
<ref> we take a closer look at the invariants of
compact Hamiltonian T-spaces, starting with the so-called
Convexity Package and its consequences. A large part of Section <ref>
is taken up by the existence of the Duistermaat-Heckman
function of a compact Hamiltonian T-space (see Theorem <ref> and Definition
<ref>). We could not find an appropriate
reference for this result and, hence, included the material for completeness. We
focus on the complexity one case, showing that in this case there is a
polytope in ^* × that encodes the Duistermaat-Heckman
function (see Corollary <ref>). In Section
<ref> we introduce compact complexity preserving
Hamiltonian T-spaces, a class of spaces that generalizes simultaneously compact symplectic toric
manifolds and compact tall complexity one T-spaces. For instance, we
prove that their moment polytopes are Delzant polytopes (see
Proposition <ref>), and their Duistermaat-Heckman
functions enjoy natural properties (see Corollary <ref>). These spaces may be of
independent interest. Finally, we review the classification
of compact tall complexity one T-spaces due to Karshon and Tolman in
Section <ref>.
In Section <ref> we prove some properties of Hamiltonian
T-actions on compact monotone symplectic manifolds. We show that the presence of a Hamiltonian T-action forces
monotonicity to be positive (see Proposition <ref>). Hence, the symplectic form of any compact monotone
Hamiltonian T-space can be rescaled so that it is equal to
c_1 in cohomology (see Corollary
<ref>). Moreover, we show that the moment map of any compact
Hamiltonian T-space with c_1 = [ω] can be translated to satisfy the so-called
weight sum formula (see Proposition <ref>). Hence, in
order to study compact monotone
Hamiltonian T-spaces, it suffices to consider those that satisfy both aforementioned
conditions, which we call
normalized monotone. That is the content of Section
<ref>. In Section <ref>,
we characterize normalized monotone complexity preserving Hamiltonian
T-spaces as being precisely those that are compact
monotone and have reflexive Delzant polytopes as moment map images
(see Proposition
<ref> and Lemma
<ref>).
Section <ref> is the technical heart of the paper
and is where we prove our first main result, Theorem
<ref>. In Section <ref>, we
recall that the genus of a compact monotone tall complexity one
T-space is zero, a result proved in <cit.>. Moreover, we show
that there is a facet of the moment polytope along
which the Duistermaat-Heckman function is constant and equal to the
minimal value (see Proposition <ref>). Such a
facet, which we call minimal, plays an important role in the arguments
of Sections <ref> and <ref>. In
Section <ref>, we characterize isolated fixed points
in normalized monotone tall complexity one
T-spaces (see Proposition <ref>). This is
another fundamental result that we use extensively in Sections
<ref> and <ref>. The space of
exceptional orbits and the painting of a normalized monotone tall complexity one
T-space are studied in Section <ref>. We
show that the isotropy data associated to the former can be
reconstructed by `looking at the moment polytope' (see Remark
<ref> for a precise statement). Moreover,
we prove that the painting of a normalized monotone tall complexity one
T-space is trivial (see Definition
<ref> and Theorem
<ref>). In Section
<ref>, we provide an explicit formula for the
Duistermaat-Heckman function of a normalized monotone tall complexity one
T-space (see Theorem <ref>). It is given in terms of the number of connected components of the space
of exceptional orbits and an integer that can be associated to the
preimage of any vertex that lies on a given
minimal facet (see Lemma <ref>). Finally, in Section
<ref> we prove
Theorem <ref> by bringing the above results together.
In Section <ref> we prove all our remaining main
results. Section <ref> addresses the question of
finding necessary conditions for a function to be the
Duistermaat-Heckman function of a normalized monotone tall complexity one
T-space (see Proposition <ref>). This allows us to
prove the finiteness result, namely Theorem <ref> and Corollary
<ref>. In Section <ref>, we
prove that the aforementioned necessary conditions are, in fact,
sufficient (see Theorem <ref> and Corollary
<ref>). Our method to prove this result leads us
naturally to obtain the extendability result, namely Theorem
<ref>. Finally, in Section <ref>, we
prove Theorem <ref>.
§.§ Acknowledgments
We would like to thank Yael
Karshon for posing the question of finiteness.
The authors were partially supported by SFB-TRR 191 grant Symplectic
Structures in Geometry, Algebra and Dynamics funded by the Deutsche
Forschungsgemeinschaft. D.S. was partially supported by FAPERJ grant JCNE E-26/202.913/2019
and by a CAPES/Alexander von Humboldt Fellowship for Experienced
Researchers 88881.512955/2020-01. D. S. would like to thank
Universität zu Köln for the kind hospitality during a long stay. This study was financed in
part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil
(CAPES) – Finance code 001. I.C. would like to thank Instituto de Matemática Pura e Aplicada (IMPA), Rio de Janeiro,
for the support of a stay in Brazil.
§.§ Conventions
§.§.§ Tori
Throughout the paper, we identify the integral lattice of S^1 ⊂ℂ with ℤ. This means that the exponential map exp :
Lie(S^1) = iℝ→ S^1 is given by exp(ix) = e^2π i
x, where z ↦ e^z is the standard complex exponential
function. Moreover, we often identify Lie(S^1)
and its dual with ℝ tacitly; we trust that this does not cause
confusion.
In this paper, T is a compact torus of
dimension d with Lie algebra 𝔱. We
denote its integral lattice by ℓ, namely ℓ=ker(exp𝔱→ T), and the dual of ℓ by ℓ^*. Moreover, we denote by ⟨·, ·⟩ the standard pairing between 𝔱^* and
𝔱. Finally, we fix an inner product on 𝔱 once and for all.
§.§.§ Convex polytopes
Let 𝔱 be a real vector
space of dimension d with full-rank
lattice ℓ⊂𝔱. A (convex) polytope Δ in
^* is a subset that satisfies either of the following two
equivalent conditions:
* Δ is the convex hull of a finite set of points, or
* Δ is the bounded intersection of a finite set of (closed)
half-spaces of ^*,
(see <cit.>). Throughout the paper, we assume
that a polytope Δ has dimension equal to that of ^*. We often
write Δ in its minimal representation, i.e.,
Δ=⋂_i=1^l {w∈^* |⟨ w,ν_i ⟩≥
c_i}
where ν_i ∈𝔱 is the inward normal, c_i ∈ℝ, and the
affine hyperplane {w∈^* |⟨ w,ν_i ⟩ =
c_i} supports a facet of Δ for i=1,…, l. A finite non-empty intersection
of facets of Δ is a face of Δ. For convenience, we
think of Δ also as a face. The dimension of a face ℱ of
Δ is the dimension of the affine span of ℱ in
^*. Faces that are 1-dimensional are called edges, while
0-dimensional faces are vertices.
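A standard example illustrating these conventions (included here for orientation): the square Δ = [-1,1]^2 ⊂ (ℝ^2)^* with ℓ = ℤ^2 has the minimal representation
Δ=⋂_i=1^4 {w∈ (ℝ^2)^* |⟨ w,ν_i ⟩≥ -1},
with inward normals ν_1=(1,0), ν_2=(-1,0), ν_3=(0,1), ν_4=(0,-1), so that every c_i equals -1 and every inward normal is a primitive lattice vector. The inward normals of the two facets meeting at each vertex form a basis of ℤ^2, and the polytope dual to Δ is again a lattice polytope; thus Δ is a simple instance of the reflexive Delzant polytopes discussed in the introduction (up to normalization it is the moment polytope of the monotone S^2 × S^2).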
§ (COMPACT) HAMILTONIAN T-SPACES AND THEIR INVARIANTS
§.§ Definition and first properties
Let (M,ω) be a symplectic manifold of dimension 2n.
A smooth T-action ψ T × M → M is
Hamiltonian if it admits a
moment map, i.e., a smooth T-invariant map Φ M →^* that satisfies
d ⟨Φ, ξ⟩ = -ι_ξ^#ω for all ξ∈𝔱,
where ξ^#∈𝔛(M) denotes the vector field associated
to ξ. In this case, the diffeomorphism ψ(t,
·) M → M is a symplectomorphism
for each t ∈ T,
i.e., it preserves ω.
For brevity we denote ψ(t, p) by t· p.
* A (compact) Hamiltonian T-space is a (compact)
connected symplectic manifold (M,ω) endowed with an effective Hamiltonian
T-action and a moment map Φ M →^*. We denote such a
space by (M,ω,Φ).
* Two Hamiltonian T-spaces (M_1,ω_1,Φ_1) and
(M_2,ω_2,Φ_2) are isomorphic if there exists a symplectomorphism Ψ: (M_1,ω_1) →
(M_2,ω_2) such that Φ_2 ∘Ψ = Φ_1.
Since T is connected, an isomorphism of
Hamiltonian T-spaces is necessarily a T-equivariant diffeomorphism.
Definition <ref> includes dim T = 0 (this is used, for
instance, in Theorem <ref>). In this case, a Hamiltonian T-space is simply a symplectic
manifold and an isomorphism is simply a symplectomorphism.
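A standard example illustrating Definition <ref>: let M = S^2 = {(x,y,z) ∈ℝ^3 | x^2+y^2+z^2 = 1} with the area form ω = dθ∧ dz written in cylindrical coordinates (θ, z), and let T = S^1 act by rotation about the z-axis, so that exp(ix)· (θ,z) = (θ + 2π x, z) with the conventions fixed above. The vector field generated by ξ = 1 is 2π∂_θ, and the condition d ⟨Φ, ξ⟩ = -ι_ξ^#ω forces Φ(θ,z) = -2π z up to an additive constant. The image of Φ is the interval [-2π, 2π], whose endpoints are the images of the two fixed points (the poles); this is the simplest example of a compact symplectic toric manifold.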
§.§.§ Orbital moment map and reduced spaces
We endow the quotient space M/T with the quotient
topology. Since Φ is T-invariant, it descends to a continuous
map Φ̅ : M/T →^*
that is called the orbital moment
map.
If Ψ is an isomorphism between (M_1,ω_1,Φ_1) and
(M_2,ω_2,Φ_2) , then there is a homeomorphism Ψ̅
: M_1/T → M_2/T such that Φ̅_1 = Φ̅_2 ∘Ψ̅.
The fibers of Φ̅ can be canonically identified with the
quotient of the fibers of Φ by the T-action. These are known as
reduced spaces. If α∈Φ(M) is a regular value of
Φ, then the reduced space at α, Φ^-1(α)/T, is
an orbifold that inherits a symplectic form ω_red
(see <cit.> and <cit.>).
§.§.§ Complexity of a Hamiltonian
T-space
Since the T-action on
M is effective and since orbits are
isotropic submanifolds of (M,ω), we have that d≤ n. The
difference n-d is a simple, but important invariant of .
The complexity of a Hamiltonian T-space is
k:=1/2 dim M - dim T.
Complexity zero Hamiltonian T-spaces are symplectic toric
manifolds. Throughout the paper, we refer to torus actions of
complexity zero as toric.
Intuitively, the complexity of a Hamiltonian T-space is half of the
dimension of a reduced space at a regular value.
§.§.§ Local model and local normal form
Given p ∈ M, its stabilizer is the closed
subgroup H:={t ∈ T | t · p = p}. We set h:= dim H. Since T is abelian, any two points on the same orbit have equal
stabilizers. Hence, the stabilizer of an orbit is well-defined. If 𝒪 denotes the T-orbit containing p, the
infinitesimal symplectic linear action of H on (T_pM,ω_p) fixes T_p
𝒪. Thus there is a symplectic linear action of
H on the quotient vector space (T_p
𝒪)^ω/T_p𝒪 endowed with the quotient linear
symplectic structure. We call the underlying Lie group homomorphism
the symplectic slice representation of p.
The symplectic slice representations of two points lying on the same
orbit are naturally isomorphic. Hence, the symplectic slice
representation of an orbit is well-defined. This allows us to `decorate' the
quotient space M/T by attaching the symplectic slice
representation to every orbit (this data includes the
stabilizer of the orbit).
Let Ψ be an isomorphism between (M_1,ω_1,Φ_1) and
(M_2,ω_2,Φ_2) and let Ψ̅ : M_1/T → M_2/T be
the homeomorphism given by Remark <ref>. For
any p ∈ M_1, Ψ̅([p]) and [p] have equal stabilizers
and symplectic slice representations.
Fix a T-invariant
almost complex structure on (M,ω); the existence of such a
structure is proved in <cit.>. We observe that (T_p
𝒪)^ω/T_p𝒪 has real dimension
2(h+n-d)=2(h+k), where k is the complexity of . Hence, we
use the above almost complex structure to
identify (T_p 𝒪)^ω/T_p𝒪 with ^h+k
endowed with the standard symplectic form
i/2 ∑_j=1^h+k dz_j ∧ dz_j.
Moreover, under this identification, the linear H-action is by
unitary transformations.
Let ρ : H → U(^h+k) be the
associated homomorphism of Lie groups. Since H is abelian,
ρ(H) is contained in a maximal torus of U(^h+k). We denote
the maximal torus of U(^h+k) consisting of diagonal
transformations by (S^1)^h+k. Hence,
we may assume that ρ factors through
a Lie group homomorphism H → (S^1)^h+k that we also denote by
ρ by a slight abuse of
notation. We
write ρ_j for the jth component of ρ, where j = 1,…,
h+k. Let d_eρ_j denote the derivative at the identity of
ρ_j and set
d_eρ_1:=2π i α_1,…, d_eρ_h+k:=2π i
α_h+k. Hence, α_j ∈ℓ^*_𝔥 for all j=1,…, h+k, where 𝔥
is the Lie algebra of H and ℓ_𝔥 denotes
the integral lattice in 𝔥. We call
α_1,…, α_h+k the isotropy weights of p (for the
H-action). The multiset of isotropy weights of p does not depend on the
choice of T-invariant almost complex structure on
(M,ω). Moreover, this multiset encodes the action of the
identity component of H on ^h+k. Explicitly,
exp(ξ)· (z_1,…,z_h+k)=(e^ 2π i ⟨α_1,ξ⟩z_1,…,
e^ 2π i ⟨α_h+k,ξ⟩z_h+k) for
every ξ∈𝔥 .
In particular, if H is connected, then the multiset of isotropy
weights of p determine the symplectic slice representation up to
unitary isomorphisms. Finally, by Remark <ref>, the multisets of
isotropy weights of two points lying on the same orbit are equal.
From the stabilizer H ≤ T of p and the Lie group homomorphism ρ :
H → (S^1)^h+k, we construct a symplectic manifold together
with a Hamiltonian T-action and a moment map. This is the local model for a T-invariant neighborhood of
𝒪 in . We do this in two equivalent ways, seeing as
one is more convenient for proofs and the other is more convenient for
calculations.
§.§.§ The abstract construction
Let Ω denote the
symplectic form on T^*T ×^h+k given
by taking the sum of the pullbacks of the canonical symplectic form on
T^*T and the standard symplectic form on ^h+k (see equation
(<ref>)). Let H act (on the right) on T^*T ×^h+k
as follows: On T^*T it acts by
the cotangent lift of (right) multiplication, while on ^h+k it
acts by z · h := ρ(h^-1)(z), for h ∈ H and z ∈^h+k. By construction,
the H-action on (T^*T ×^h+k,Ω) is
Hamiltonian. Let Φ̂ : T^*T ×^h+k→𝔥^* be the moment map that sends (0,0) ∈ T^*T ×^h+k to the origin in 𝔥^*. Since this H-action is
free and proper, the quotient
(T^*T ×^h+k) / /H:= Φ̂^-1(0)/H
is a smooth manifold that inherits a symplectic form
ω_red (see <cit.>).
Let T act (on the left) on T^*T ×^h+k as follows: On
T^*T it acts by the
cotangent lift of (left) multiplication, while on ^h+k it acts
trivially. This T-action is Hamiltonian and commutes with the above
H-action. Hence, it induces a Hamiltonian T-action on ((T^*T
×^h+k) / /H, ω_red). As a moment map for
this T-action we take the one that sends [0,0] ∈ (T^*T ×^h+k) / /H to the origin in ^* and we denote it by
Φ_red. The desired local model is the triple ((T^*T
×^h+k) / /H, ω_red, Φ_red).
§.§.§ The explicit construction
The choice of inner product on 𝔱 induces an
inner product on 𝔱^* which, in turn, determines an
isomorphism ^* ≃Ann(𝔥) ⊕𝔥^*. Moreover, we choose a trivialization T^*T ≅ T
×^*. With these choices, we fix an identification T^*T
×^h+k with T ×Ann(𝔥) ×𝔥^* ×^h+k. Under this identification, the above
(right) H-action is given by
(t,α,β,z) · h = (th,α,β,ρ(h^-1)z),
while the above moment map
Φ̂ is given by
(t,α,β,z) ↦β - Φ_H(z),
where Φ_H : ^h+k→𝔥^* is the homogeneous
moment map for the (left) H-action on ^h+k given by h · z
= ρ(h)z. The map
T ×Ann(𝔥) ×^h+k →Φ̂^-1(0)
(t,α,z) ↦ (t,α,Φ_H(z),z)
is an H-equivariant diffeomorphism, where the left hand side is
equipped with the (right) H-action on T ×Ann(𝔥) ×^h+k given by
(t,α,z) · h = (th, α, ρ(h^-1)z).
Hence the quotient
Y:=T ×_H
Ann(𝔥) ×^h+k
is diffeomorphic to (T^*T ×^h+k) / /H. We denote by ω_Y (respectively Φ_Y) the pullback of
ω_red (respectively Φ_red) under the above diffeomorphism. The (left)
T-action on Y is given by
s · [t,α,z] = [st,α,z],
while the moment map Φ_Y takes the form
Φ_Y([t,α,z]) = α + Φ_H(z).
The stabilizer of [1,0,0] ∈ Y is H and the symplectic slice
representation of [1,0,0] is ρ : H → (S^1)^h+k. Moreover, if the T-action is effective, the
complexity of (Y,ω_Y,Φ_Y) is equal to k. We refer to
(Y,ω_Y,Φ_Y) as the local model of
p. By
Remark <ref>, the local models at two points lying
on the same orbit are equal.
By a slight abuse of terminology, we also refer to ((T^*T
×^h+k) / /H, ω_red,
Φ_red) as the local model of p. Moreover,
throughout the paper we sometimes state results in terms of Y but use (T^*T
×^h+k) / /H in the proof. We trust that this does not
cause confusion.
As a consequence of the local normal form
theorem for Hamiltonian actions of compact Lie groups due to Guillemin-Sternberg <cit.> and Marle
<cit.>, any Hamiltonian
T-space is isomorphic to a local model near an orbit. More
precisely, the following holds.
Let be a Hamiltonian T-space. Given p ∈ M, let
(Y,ω_Y,Φ_Y) be the local model of p. There exist
T-invariant open neighborhoods U⊂
M of p and V ⊂ Y of [1,0,0] and an isomorphism between
(U,ω,Φ) and (V,ω_Y,Φ_Y + Φ(p)) that maps p to [1,0,0].
Let be a Hamiltonian T-space. For any p ∈ M with
stabilizer H, the
homomorphism ρ : H → (S^1)^h+k is injective.
Fix p ∈ M and notation as above. Let (Y,ω_Y,Φ_Y) be
the local model of p. Since the T-action on M is assumed
to be effective, so is the T-action on Y by Theorem <ref> (see Remark <ref>). By definition of
Y and of the T-action on Y, the T-action on
Y = T ×_H Ann(𝔥) ×^h+k is
effective if and only if the Lie group homomorphism ρ : H →
(S^1)^h+k is injective, as desired.
Given p ∈ M, let H be its stabilizer and let
{α_j} be the multiset of isotropy weights of p. By
Corollary <ref>, the H-action on (T_p 𝒪)^ω/T_p𝒪 is effective. In particular, so is the action
by its identity component. Using equation (<ref>), it follows that the -span of {α_j} equals
ℓ^*_𝔥.
The above discussion simplifies significantly if p is a fixed
point, i.e., if H=T. In this case, h =d so that
h+k = n, and the isotropy
weights α_1,… , α_n lie in ℓ^*. Moreover,
Y = ^n, the symplectic form ω_Y is the standard
symplectic form on ^n, the T-action on Y is given by
exp(ξ)· (z_1,…,z_n)=(e^ 2π i ⟨α_1 , ξ⟩z_1,…,
e^ 2π i ⟨α_n , ξ⟩z_n) for
every ξ∈ ,
and the moment map Φ_Y : Y →^* is given by
Φ_Y(z)=π∑_j=1^n α_j |z_j|^2,
where z = (z_1,…,z_n) ∈^n.
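For instance, if T = (S^1)^2 and p is a fixed point with isotropy weights α_1 = (1,0) and α_2 = (0,1) in ℓ^* ≅ℤ^2, then the local model is Y = ^2 with the standard toric action exp(ξ)· (z_1,z_2) = (e^2π i ξ_1 z_1, e^2π i ξ_2 z_2) and moment map Φ_Y(z_1,z_2) = π(|z_1|^2, |z_2|^2). Its image is the cone spanned by the isotropy weights, namely the closed positive quadrant; this is the local picture near any fixed point of a compact symplectic toric manifold of dimension four, corresponding to a vertex of the associated Delzant polygon.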
§.§.§ Regular and exceptional local models
Theorem <ref> allows us to understand a Hamiltonian T-space in a T-invariant open neighborhood of a
point p by studying the local model of p. For this reason, in this subsection we
take a closer look at local models.
In what follows, we fix a non-negative integer k and a closed subgroup H ≤ T. As above, we set d:= dim T and h:= dim H. We also fix
an injective Lie group homomorphism ρ : H
↪ (S^1)^h+k. We denote the subspace
of fixed points of the H-action induced by ρ by (^h+k)^H. As in Section
<ref>, we use k, H and ρ to construct
Hamiltonian T-spaces
((T^*T
×^h+k) / /H, ω_red, Φ_red)
≅ (Y,ω_Y,Φ_Y)
that we refer to as the local model determined by k, H and
ρ (see equations (<ref>), (<ref>) and
(<ref>) for the definition of Y, the T-action and Φ_Y
respectively). We remark that k is the complexity of (Y,ω_Y,Φ_Y), that H is the stabilizer of p:=[1,0,0] and
that ρ is the symplectic slice representation of p.
Let k be a non-negative integer, let H ≤ T
be a closed subgroup and let ρ : H ↪ (S^1)^h+k
be an injective Lie group homomorphism, where h = H. Set s:=
_ (^h+k)^H. There exists an isomorphism of
Hermitian vector spaces ^h+k≃^h+k-s⊕^s
such that
ρ = (ρ',1) : H ↪
(S^1)^h+k-s× 1 ↪ (S^1)^h+k-s× (S^1)^s ≃
(S^1)^h+k
and (^h+k-s)^H = {0}. Moreover, s ≤ k and, if s =
k, then ρ is an isomorphism between H and (S^1)^h and
the H-action on ^h is toric.
To simplify notation, set V := (^h+k)^H. The standard
Hermitian product on ^h+k induces an isomorphism of Hermitian
vector spaces ^h+k≃ V^⊥⊕ V, and both V and
V^⊥ are endowed with the restriction of the standard
Hermitian product. By definition ρ(H) fixes V pointwise and
is a subgroup of U(h+k). Hence, ρ splits as the
direct sum of two H-representations that we denote by
ρ_V : H → U(V) and ρ': H
→ U(V^⊥). By construction,
ρ_V is the trivial representation. Moreover,
ρ'(H) is contained in a maximal torus of U(V^⊥) (see
Section <ref>). Since the Hermitian vector
spaces V and V^⊥ are isomorphic to ^s and to
^h+k-s respectively and since {0}= (^h+k-s)^H, this proves the first statement. To prove
the second, we observe that ρ': H → U(V^⊥) is injective, since ρ is injective. Since maximal tori in U(V^⊥)
have dimension equal to h+k-s and
since h = H, it follows at once that s≤ k. Finally, if s = k,
then, since ρ' is injective, it follows that the map H
↪ (S^1)^h is an isomorphism of Lie groups.
Lemma <ref> allows us to `decompose' local models as
follows (see <cit.> for a proof in the case k=s=1).
Let k be a non-negative integer, let H ≤ T
be a closed subgroup and let ρ : H ↪ (S^1)^h+k
be an injective Lie group homomorphism, where h = H. Let s
= _(^h+k)^H and ρ' : H ↪ (S^1)^h+k-s be as
in the statement of Lemma <ref>, so that
(^h+k-s)^H={0}. The local
model determined by k, H and ρ is isomorphic to
(Y' ×^s,pr^*_1ω_Y' + pr^*_2ω_0, pr_1^*Φ_Y'),
where (Y',ω_Y',Φ_Y') is the local model determined by
k-s, H and ρ', ω_0 is the standard symplectic form
^s, and pr_j is the projection from Y' ×^s
to its jth component, for j=1,2.
In this proof we use the abstract construction of local models (see
Section <ref> and Remark
<ref>). Fix an isomorphism
^h+k≃^s ⊕^h+k-s as in the statement of
Lemma <ref>. This induces a symplectomorphism between (T^*T ×^h+k, Ω)
and ((T^*T ×^h+k-s) ×^s,pr^*_1Ω' +
pr^*ω_0), where Ω' denotes the symplectic form
on T^*T ×^h+k-s and, as above, pr_j is the projection from (T^*T ×^h+k-s) ×^s
to its jth component, for j=1,2. We endow (T^*T ×^h+k-s) ×^s with the following (right) H-action and
(left) T-action:
* (H-action): On T^*T ×^h+k-s we consider the (right) H-action used
to construct the local model determined by k-s, H and ρ',
while H acts trivially on ^s, and
* (T-action): On T^*T ×^h+k-s we consider the (left) T-action used
to construct the local model determined by k-s, H and ρ',
while T acts trivially on ^s.
The above symplectomorphism is both H- and T-equivariant. Hence,
there is a T-equivariant symplectomorphism between the symplectic
quotients with respect to the H-actions, i.e., an isomorphism
between the resulting Hamiltonian T-spaces. Once the abstract
constructions are identified with the explicit ones as in Section
<ref>, this yields the desired isomorphism of
Hamiltonian T-spaces.
Proposition <ref> is particularly simple if
s = k, i.e., if _ (^h+k)^H is maximal. In this case, (Y', ω', Φ_Y') is
toric.
Motivated by Theorem <ref> and by Remark <ref>, we introduce the
following dichotomy.
Let be a complexity k T-space and let p ∈ M be a
point with stabilizer H. Let 𝒪 be the orbit containing
p. We say that p is regular if
dim_ℂ(^h+k)^H = k and exceptional otherwise, where
H acts on (T_p𝒪)^ω/T_p 𝒪≃^h+k via the symplectic slice representation at p.
We observe that Definition <ref> extends
also to orbits (see Remark <ref>); hence,
throughout the paper, we also refer to regular and exceptional orbits.
Let be a Hamiltonian T-space. We define the set of exceptional orbits M_exc as the subset of M/T
consisting of exceptional orbits.
A fixed point p in a complexity k T-space is regular if
and only if it lies on a fixed submanifold of dimension k. We observe that the latter condition is
equivalent to the T-action on the normal bundle to the fixed
submanifold that contains p being toric.
The techniques of Example <ref> and Lemma
<ref>, together with Theorem <ref>, can be used to prove the following result.
* Any point with
trivial stabilizer, as well as any point in a complexity zero
T-space, is regular.
* If a Hamiltonian T-space has positive complexity, then any isolated fixed
point is exceptional.
* The subset of regular points in a Hamiltonian T-space is open.
* If a point p is regular, then the stabilizer of p is connected.
We conclude this section with two results in complexity one.
Let be a complexity one T-space. A point p ∈ M^T is
isolated if and only if it is exceptional.
By Lemma <ref>, if p ∈ M^T is
isolated, then it is exceptional. Conversely, if p ∈ M^T
is regular, then, by Theorem <ref>
and Example <ref>, p is not isolated.
Finally, as an immediate consequence of Theorem <ref> and of Proposition <ref>, the following
characterization of exceptional points holds (cf. <cit.>).
Let be a complexity one T-space. A point p ∈ M is
exceptional if and only if every nearby orbit in the
same moment fiber has a strictly smaller stabilizer.
In particular, Definition <ref> agrees with
the definition of exceptional orbits of <cit.> in complexity one,
and is the appropriate notion for our purposes.
§.§.§ Regular and exceptional sheets
Let be a Hamiltonian T-space. For any closed subgroup H ≤ T, the action of H on M is also
Hamiltonian: Let
𝔥 be the Lie algebra of H and let i: 𝔥↪ denote the
inclusion. Then a moment
map for the H-action is given by the composition
M Φ⟶^* i^*⟶𝔥^*,
where i^*^* →𝔥^* is the dual of i. We
denote the set of fixed points of the H-action by
M^H = { p ∈ M | h · p = p for all h ∈
H}.
The following result is used throughout and a proof can be found in <cit.>.
Let be a Hamiltonian T-space and let H ≤ T be
closed. Then any connected component of M^H is an embedded
symplectic submanifold of (M,ω).
We refer to the connected components of M^T as the fixed submanifolds of .
If N⊂ M^T is a fixed submanifold, then for any p,p'∈ N,
the isotropy weights of p are equal to those at p'. Hence, the
isotropy weights of the fixed submanifold N
are well-defined. Moreover, they can be used to determine N: By Theorem <ref>,
N equals twice the number of isotropy weights of N that are
equal to 0.
Let H ≤ T be a closed subgroup that is the stabilizer of a point
p ∈ M and let N ⊂ M^H denote the connected component that
contains p. We set ω_N:= ω|_N. By Lemma
<ref>, (N,ω_N) is an embedded
symplectic submanifold of (M,ω). Moreover, since T is
abelian and connected, and since N is connected, N is
T-invariant. The T-action on N induces a T':=T/H-action on N
that is effective because the stabilizer of p for the T-action
equals H. This T'-action on N is Hamiltonian. To see this, we
construct a moment map starting from Φ and the point p. To this
end, let pr : T → T' be the quotient map. We denote its
derivative at the identity by pr_*: →' and the dual
of pr_* by pr^* : (')^* →^*. By
construction, pr^* is an isomorphism between (')^* and
Ann(𝔥). Since N ⊂ M^H is connected and
since Φ is a moment map, ⟨Φ|_N,ξ⟩ = ⟨Φ(p), ξ⟩ for all ξ∈𝔥. Consequently, Φ(N) ⊂Φ(p) +
Ann(𝔥). Hence, there exists a unique smooth map
Φ_N : N → (')^* such that
pr^* ∘Φ_N = Φ|_N - Φ(p).
By (<ref>), since Φ : M →^* is
a moment map for the T-action on M and since
(N,ω_N) is a symplectic submanifold of (M,ω), it follows that Φ_N is a
moment map for the T'-action on N. Since the T'-action on N is
effective, this shows that
(N,ω_N,Φ_N) is a Hamiltonian T'-space.
Let be a Hamiltonian T-space and let H ≤ T be
a subgroup that occurs as a stabilizer of some point p ∈ M. The
Hamiltonian T'-space (N,ω_N,Φ_N) constructed above is
called a sheet stabilized by H. Whenever we wish to emphasize the role of p,
we say that (N,ω_N,Φ_N)
is the sheet through p.
Any fixed submanifold N ⊂ M^T gives rise to a sheet that we
denote simply by (N,ω_N).
For our purposes, it is useful to allow some freedom in the choice
of moment map associated to a sheet. Modulo pr^*, Φ_N and
Φ|_N only differ by a constant. Depending on the context, we
use either moment map. We trust that this does
not cause confusion.
Given a sheet (N,ω_N,Φ_N) of , we are interested in understanding
the complexity of (N,ω_N,Φ_N) in relation to that of
. By Definition <ref>, the complexity of
(N,ω_N,Φ_N) is 1/2 N - (d-h).
Let be a Hamiltonian T-space and let (N,ω_N,Φ_N)
be a sheet stabilized by H ≤ T. The complexity of (N,ω_N,Φ_N) is at most
that of . Moreover, the following
are equivalent:
* The complexity of (N,ω_N,Φ_N) is less than
that of .
* Any p ∈ N with stabilizer H is exceptional.
* The fiberwise H-action on the symplectic normal bundle to N has
positive complexity.
Let p ∈ N be a point that is stabilized by H and let
(Y,ω_Y,Φ_Y) be the local model of p. Since the above
conditions can be checked locally, by Theorem
<ref>, it suffices to prove the result in the case
= (Y,ω_Y,Φ_Y) and p = [1,0,0]. Recall that Y = T ×_H
Ann(𝔥) ×^h+k, where k is the
complexity of (Y,ω_Y,Φ_Y), and that the T-action on Y is given by
equation (<ref>). The submanifold Y^H is given by
T ×_H Ann(𝔥) × (^h+k)^H ≃
T/H ×Ann(𝔥) × (^h+k)^H,
where (^h+k)^H is the fixed point set for the H-action on
^h+k. Since (^h+k)^H is a subspace of ^h+k, it
follows that Y^H is connected. Since N ⊆ Y^H is a
connected component of Y^H, N = Y^H. Moreover, by
(<ref>), the dimension of N equals 2(d-h) +
2_(^h+k)^H. By Lemma <ref>,
_(^h+k)^H ≤ k. Hence, the complexity of
(N,ω_N,Φ_N) is at most k. Moreover, since
(^h+k)^H is a complex subspace of ^h+k, the H-action
on the symplectic
normal vector space to N at p can be identified with the linear
H-action on ^h+k/(^h+k)^H.
Suppose that the complexity of (N,ω_N,Φ_N) is less than that of (Y,ω_Y,Φ_Y). Since N
= 2(d-h) +
2_(^h+k)^H, it follows that _(^h+k)^H<k, so that
p is exceptional. Therefore, <ref> implies <ref>. If p is exceptional, so that _(^h+k)^H<k, then _^h+k/(^h+k)^H > h. Hence, the linear H-action on
^h+k/(^h+k)^H has positive
complexity. Therefore, <ref> implies <ref>. Finally,
if the H-action on the symplectic normal vector space to N at
p has positive complexity, then _^h+k/(^h+k)^H > h. The latter is equivalent to _(^h+k)^H<k, so that N
< 2(d-h) + 2k. Hence, the complexity of (N,ω_N,Φ_N) is less than
that of (Y,ω_Y,Φ_Y). Therefore,
<ref> implies <ref>.
The proof of Proposition <ref> can be
adapted to prove equivalence of the following conditions:
* The complexity of (N,ω_N,Φ_N) is
equal to that of .
* Any p ∈ N with stabilizer H is regular.
* The fiberwise H-action on the symplectic normal bundle to N has
complexity zero.
We observe that, by Proposition <ref>,
either <ref> or <ref> needs to hold for any sheet.
If (N,ω_N) is a fixed submanifold in , properties <ref> and
<ref> of Proposition <ref>
(respectively properties <ref> and <ref> of Remark <ref>) simplify as follows: The
dimension of N is less than (respectively equal to) twice the
complexity of , and all points in N are exceptional
(respectively regular). Moreover, N ≤ 2k,
where k is the complexity of .
Motivated by Proposition <ref> and Remark <ref>, we introduce
the following terminology.
A sheet (N,ω_N,Φ_N) in is exceptional
if it satisfies any of the conditions
of Proposition <ref>, and regular otherwise.
Exceptional sheets enjoy the following stronger characterization.
A sheet (N,ω_N,Φ_N) in is exceptional if and only if
every point in N is exceptional.
If every point in N is exceptional, then
(N,ω_N,Φ_N) is exceptional by Proposition
<ref>. Conversely, suppose that
(N,ω_N,Φ_N) is exceptional. By contradiction, suppose that p ∈ N is regular. By Lemma <ref>, every point in
N that is sufficiently close to p is also regular. By the
principal orbit theorem (see <cit.>), the set
of points in N that are stabilized by H is dense. Hence, there exist
points in N that are stabilized by H that are arbitrarily close
to p. However, by Proposition <ref>, any
such point is exceptional, a contradiction.
It is not necessarily true that, if (N,ω_N,Φ_N) is a
regular sheet, then all points in N are regular. A counterexample is as follows: Let be a Hamiltonian
T-space of positive complexity that contains an isolated fixed
point p. Since T is compact and
abelian, and
since M is connected, the
principal orbit theorem (see <cit.>) implies that
there exists an open and dense subset of M whose points have
trivial stabilizer. Hence, taking H = {e}, it follows that
is a regular sheet. However, by Lemma <ref>, p is exceptional.
§.§ Invariants of compact Hamiltonian T-spaces
In this section we recall some fundamental results about compact[In fact, many of the theorems presented in this
section hold under the weaker assumption that the moment map be
proper as a map to a convex open subset of ^*. However, this
degree of generality goes beyond the scope of this paper.]
Hamiltonian T-spaces.
§.§.§ Convexity package and its consequences
We start with the following foundational result (see
<cit.>).
Let be a compact Hamiltonian T-space.
* (Connectedness) The fibers of the moment map are connected.
* (Stability) The moment map is open as a map onto its
image.
* (Convexity) The moment map image is the convex hull of the images of
the fixed submanifolds.
We remark that, since the action is effective, the moment map
image of a compact Hamiltonian T-space is a polytope that has
dimension equal to ^*.
Let be a compact Hamiltonian T-space. The image Φ(M)
is called the moment polytope.
By Theorem <ref>, the moment polytope of a compact
Hamiltonian T-space is convex. In fact, more is true and, in order
to prove this, we need to recall a few notions. We say that a polytope
Δ⊂^*
is rational if any edge e ⊂Δ is of the form
e = {v + t α| t ∈ [0,l]} for some v ∈^*, α∈ℓ^* and l ∈_>0.
Moreover, a
subset C ⊆^* is a cone if, for all v ∈ C
and all λ∈_≥ 0, λ v ∈ C. A cone in ^* is proper if it does not contain any subspace of ^* of
positive dimension. The following result provides a local description
of the moment polytope of a compact Hamiltonian T-space near the
image of a fixed submanifold.
Let  be a compact Hamiltonian T-space of dimension 2n. Let N be a fixed submanifold and let α_1,…,α_n be the isotropy weights of N (see Remark <ref>). Consider the cone
𝒞_N = _≥ 0-span{α_1,…,α_n}⊆^*,
and let ℋ_N ⊆^* be the maximal subspace that is contained in 𝒞_N.
* There exist an open neighborhood V of Φ(N) in Φ(M)
and an open neighborhood W of 0 in 𝒞_N such that V
= W + Φ(N). In particular, Φ(M) is rational.
* The intersection
( ℋ_N + Φ(N)) ∩Φ(M)
is a face of Φ(M) and the dimension of this
face equals the dimension of ℋ_N.
In particular, Φ(N) is a vertex of Φ(M) if and only if the cone 𝒞_N is proper.
For any p ∈ N, by Theorem <ref>, there exist a T-invariant open
neighborhood U_p of p and an open neighborhood W_p of 0 in
𝒞_N such that Φ(U_p) = W_p + Φ(N). Moreover, by
Theorem <ref>, Φ(U_p) is an open neighborhood of
Φ(p) in Φ(M). Since N is compact, there exist finitely
many p_1,…, p_r ∈ N such that N is contained in
⋃_j=1^r U_p_j. Set W:= ⋂_j=1^r W_p_j. By
construction, W + Φ(N) = ⋂_j=1^r Φ(U_p_j) is an open neighborhood of Φ(N)
in Φ(M). Since the cone 𝒞_N is convex and rational, and
since vertices of the moment polytope are the image of fixed submanifolds
by Theorem <ref>, the moment polytope is rational. This proves part <ref>.
To prove part <ref>, observe that any cone is the product of
the maximal subspace that it contains with a proper cone. Thus we
can write 𝒞_N = ℋ_N ×𝒞'_N for some proper cone 𝒞'_N. Hence, without loss of generality, we may take an
open neighborhood W in 𝒞_N
as in the statement of part <ref> to be of the form
W_ℋ× W', where W_ℋ (respectively W') is an open
neighborhood of 0 in ℋ_N (respectively
𝒞'_N). Therefore, the desired result holds `locally' by
part <ref>; convexity of
Φ(M) (see Theorem <ref>) implies that it is true `globally'.
Until the end of the section, we deduce some consequences of Theorem
<ref> that we use throughout the paper. We start with the following sufficient condition
for a sheet to be exceptional (see Definition <ref>).
Let be a compact Hamiltonian T-space and let
(N,ω_N,Φ_N) be a sheet stabilized by a non-trivial
subgroup H. If Φ(N) is not contained in
the boundary of Φ(M), then (N,ω_N,Φ_N) is exceptional.
We prove the contrapositive. Let (N,ω_N,Φ_N) be a regular sheet. It suffices
to show that the set of regular values of Φ|_N that are
contained in Φ(N) is contained in the boundary of
Φ(M). Let x ∈Φ(N) be a regular value for Φ|_N. By
Theorem <ref>,
there exists p ∈Φ|_N^-1(x) that has trivial stabilizer
for the T/H-action, i.e., the stabilizer of p for the
T-action is H. By Remark <ref>, p is
regular and, hence, the T/H-action on the symplectic normal bundle
to N is toric. By the local normal form (Theorem <ref>), and by openness of the moment map (Theorem
<ref>), it follows that x = Φ(p) lies in the boundary
of Φ(M).
For compact Hamiltonian T-spaces, the existence of exceptional sheets
is intimately connected to the existence of exceptional fixed points. More precisely, the following holds.
A compact Hamiltonian T-space contains an exceptional sheet if and
only if it contains an exceptional fixed point.
Suppose first that (N,ω_N,Φ_N) is an exceptional sheet of
. Since N is compact, it contains a fixed point p ∈
M^T. By Lemma <ref>, p is exceptional. Conversely, suppose that p ∈
M^T is exceptional. By definition, the sheet
(N,ω_N,Φ_N) through p is exceptional.
Next we deduce some general results about isotropy weights of isolated
fixed points that are used in one of the key results of our paper,
Proposition <ref>.
Let be a compact complexity k T-space and let p ∈
M^T be isolated. For any isotropy weight α of p,
there exists a sheet (N_α,ω_α,Φ_α) with the following
properties:
* the point p lies in N_α,
* the sheet (N_α,ω_α,Φ_α) is
stabilized by the codimension 1 subgroup
H_α :=exp({ξ∈𝔱|⟨α,ξ⟩∈ℤ}),
* the dimension of N_α is at most 2(k+1),
* the moment map image Φ_α(N_α) is contained
in the affine line Φ(p) + ⟨α⟩ and intersects the open
half-ray Φ(p) + _>0⟨α⟩, and
* there exists q_α∈ M^T ∩ N_α such that Φ(q_α) =
Φ_α(q_α) is a global extremum of Φ_α(N_α) with Φ(q_α)
∈Φ(p) + _>0⟨α⟩, and -α is
an isotropy weight of q_α.
Let α_1,…, α_n
∈ℓ^* be the isotropy weights of p. Without loss of generality, we may assume that α=α_n. By the local normal form
of Theorem <ref>, we may identify a T-invariant
open neighborhood
of p in M with a T-invariant open neighborhood of 0 ∈^n
so that the action becomes that of (<ref>) and
the moment map is given by (<ref>). Since p ∈ M^T is
isolated, α_j ≠ 0 for all j. Hence, since ⟨α_1,…, α_n ⟩ = ℓ^*, besides α_n itself there can be at most
k isotropy weights that are multiples of
α_n. Therefore the subgroup H_α of (<ref>)
stabilizes a subspace of ^n that is of real dimension at most
2(k+1). Moreover, by (<ref>), the subgroup H_α
is the stabilizer of some point p'∈ M. The
sheet (N,ω_N,Φ_N) through p' in the sense of Definition
<ref> satisfies the desired conditions.
For our purposes, it is useful to introduce the following
terminology.
Let be a compact Hamiltonian T-space and let p ∈ M^T be
isolated. Given an isotropy weight α of p, we
say that the sheet (N_α,ω_α,Φ_α)
of
Lemma <ref> is along α.
Let be a compact Hamiltonian T-space. Let ℱ be
the facet of Φ(M) supported on {w ∈^* |⟨ w, ν⟩ = c}. If p ∈ M^T satisfies ⟨Φ(p), ν⟩
> c, then there exists an isotropy weight α of p
with ⟨α, ν⟩ < 0.
Let α_1,…,α_n be the isotropy weights of p and
suppose that ⟨α_j, ν⟩≥ 0 for all j=1,…, n. By part
<ref> of Corollary <ref>, an open neighborhood
of Φ(p) in Φ(M) equals an open neighborhood of Φ(p)
in Φ(p) + _≥ 0⟨α_1,…,α_n⟩. Hence, since ⟨α_j,ν⟩≥ 0 for all j, an open neighborhood V
of Φ(p) in Φ(M) is contained in the half-space {w ∈𝔱^* |⟨ w, ν⟩≥⟨Φ(p), ν⟩}. Since ⟨Φ(p), ν⟩
> c and since Φ(M) has a facet supported on {w ∈^* |⟨ w, ν⟩ = c}, this is a contradiction.
Motivated by Lemma <ref>, we introduce the following
terminology.
Let be a compact Hamiltonian T-space and let ℱ be
the facet of Φ(M) supported on {x ∈^* |⟨ x, ν⟩ = c}. For any p ∈ M^T with ⟨Φ(p), ν⟩
> c, we say that the isotropy weight α of p of
Lemma <ref> is (ℱ-)downward pointing.
Combining Lemmas <ref> and
<ref>, we obtain the following result.
Let be a compact Hamiltonian T-space and let ℱ be
the facet of Φ(M) supported on {x ∈^* |⟨ x, ν⟩ = c}. Let p ∈ M^T be isolated with ⟨Φ(p), ν⟩
> c and let α be an isotropy weight of p that
is ℱ-downward pointing. Let
(N_α,ω_α,Φ_α) be the sheet along
α. There exists q_α∈ M^T ∩ N_α
such that
* Φ(q_α) = Φ_α(q_α) is a global extremum of
Φ_α,
* -α is an isotropy weight of q_α, and
* ⟨Φ(q_α), ν⟩ <
⟨Φ(p),ν⟩.
Taking q_α as in Lemma <ref>, we need to
prove only the last property. This follows immediately by observing that Φ(q_α) ∈Φ(p) + _>0⟨α⟩ and that α is ℱ-downward pointing.
To conclude this section, we look at the preimage of faces of the
moment polytope. Given a face ℱ of Φ(M), we set
𝔥_ℱ:= {ξ∈𝔱|⟨ x-y, ξ⟩=0 for all
x,y∈ℱ}.
By part <ref> of Corollary <ref>,
𝔥_ℱ is a rational subspace of of
dimension equal to the codimension of ℱ in Φ(M). Hence
exp(𝔥_ℱ) is subtorus of T. The subset
M_ℱ:=Φ^-1(ℱ) ⊂ M is T-invariant; we set
H_ℱ:= { t ∈ T | t · p = p
for all p
∈ M_ℱ}.
Let be a compact Hamiltonian T-space and let ℱ be a face of Φ(M). Then
M_ℱ=Φ^-1(ℱ) is a connected component of
M^H_ℱ and the Lie algebra of H_ℱ equals 𝔥_ℱ.
Connectedness of M_ℱ can also be proved using the fact
that
the Convexity Package implies that the preimage of any convex set is
connected (see <cit.>).
Let be a compact Hamiltonian T-space and let
ℱ be a face of Φ(M). There exists
a connected, open and dense subset of M_ℱ whose points
have stabilizer equal to H_ℱ.
By the principal orbit theorem (see <cit.>) and since T is abelian, there exists a subgroup H
of T and a connected, open and dense subset of M_ℱ
such that H is the stabilizer of any point in this subset. Hence,
H_ℱ⊆ H. However, since H is the stabilizer
of orbits of principal type and since T is abelian, H is
contained in the stabilizer of any point in
M_ℱ. Therefore, H ⊆ H_ℱ.
By Corollary <ref>, the preimage of
a face ℱ of Φ(M) gives rise to a sheet in the sense
of Definition <ref> that is stabilized by
H_ℱ. We denote it by
(M_ℱ,ω_ℱ,Φ_ℱ).
In particular, since the complexity of (M_ℱ,ω_ℱ,Φ_ℱ) is at most that of by Proposition <ref>, if
the codimension of ℱ is r, then M_ℱ≤ 2n - 2r.
§.§.§ The Duistermaat-Heckman measure and its density
function
In this section we take a close look at an invariant of compact
Hamiltonian T-spaces that is central to this paper. We start by
recalling the following notion.
Let be a compact Hamiltonian T-space of dimension 2n. The Duistermaat-Heckman measure of
is the pushforward of the (normalized)
Liouville measure, i.e., for any Borel set U⊂𝔱^*,
m_DH(U)=1/(2π)^n∫_Φ^-1(U)ω^n/n!.
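For instance, let T = S^1 act on M = S^2 ⊂ℝ^3 by rotations about the x_3-axis, with ω the Euclidean area form and Φ(x_1,x_2,x_3) = x_3. In this case n = 1 and, by Archimedes' theorem, the pushforward Φ_*(ω) equals 2π times the Lebesgue measure of the interval [-1,1], so that m_DH is the restriction of the Lebesgue measure on 𝔱^* ≃ℝ to [-1,1].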
The Duistermaat-Heckman measure is absolutely continuous with respect to the Lebesgue
measure on 𝔱^* (see <cit.>). Therefore its Radon-Nikodym derivative with respect
to the Lebesgue measure is a Lebesgue integrable function f_DH : 𝔱^* → that is uniquely defined up to a set of measure zero. Without loss
of generality, we henceforth assume that f_DH vanishes identically
on ^* ∖Φ(M).
In <cit.>, Duistermaat and Heckman give an explicit representative of the restriction of
f_DH to the intersection of the moment polytope with the
set of regular values of Φ. In order to
state this result, we denote the set of regular
values of Φ contained in Φ(M) by
Φ(M)_reg. Moreover, we recall that for any x∈Φ(M)_reg, the reduced space M_x is an orbifold that
inherits a symplectic form that we denote by ω_x (see Section
<ref>).
Let be a compact complexity k T-space. The restriction of the
Radon-Nikodym derivative of the Duistermaat-Heckman measure of
to Φ(M)_reg can be chosen to be
equal to the function Φ(M)_reg→
x ↦1/(2π)^k∫_M_xω_x^k/k! =:
1/(2π)^kVol(M_x), x ∈Φ(M)_reg.
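For instance, if k = 0, then the reduced space at any regular value is a single point and the above function is identically equal to 1, whereas if k = 1 it assigns to x the normalized symplectic area 1/(2π)∫_M_xω_x of the reduced surface (M_x,ω_x).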
By <cit.>, the restriction of the
function (<ref>) to each connected component of Φ(M)_reg
is a polynomial of degree at most k. Moreover, if this
polynomial has positive degree, the coefficients
of the monomials of top degree are integral. This is because the cohomology classes
[ω_x] vary linearly with x on such a connected component
and the variation is controlled by a cohomology class
with integral coefficients (see <cit.>).
Theorem <ref> is sufficient to calculate the
Duistermaat-Heckman measure, as the set of singular values has
measure zero by Sard's theorem.
By Remark <ref>, the function of
(<ref>) is continuous. The aim of this section is to prove Theorem <ref>
below, which is probably well-known to experts (cf. <cit.> for linear symplectic actions on vector spaces and
<cit.>). However,
since we use it extensively throughout the paper, we include a proof for completeness.
Given a compact Hamiltonian T-space , there exists a unique
continuous function DH : Φ(M) → that extends the
function of (<ref>).
By Remark <ref>, if the interior of the moment polytope consists solely of regular
values, then Theorem <ref>
is trivial. In this case, DH is the restriction
of a polynomial of degree at most the complexity of
. For instance, the desired function in the
case of compact symplectic toric
manifolds is the indicator function of the moment polytope, since
reduced spaces are connected by Theorem <ref>, and since k = M_x
= 0 for all x ∈Φ(M)_reg.
Theorem <ref> allows us to introduce the following notion.
Let be a compact Hamiltonian T-space. We call the continuous
map DH : Φ (M) → given by Theorem
<ref> the Duistermaat-Heckman
function of .
Let be a compact Hamiltonian T-space and let H ⊂ T
be a subtorus. Choose a complementary subtorus K of T so that
T = H × K. This induces an identification ^* ≃𝔥^* ⊕𝔨^*. We write the Lebesgue measure on ^* as dxdy,
where dx (respectively dy) is the Lebesgue measure on
𝔥^* (respectively 𝔨^*). Let π : ^*
→𝔥^* be the projection induced by the inclusion H
⊂ T. The H-action is Hamiltonian with moment map Φ':= π∘Φ. Since DH is continuous, by Fubini's theorem, the Duistermaat-Heckman function of
(M,ω, Φ') is given by
DH (M,ω, Φ') (x) = ∫_Δ_xDH(x,y) dy
for all x ∈Φ'(M),
where Δ_x = π^-1(x) ∩Φ(M).
Suppose further that is a compact symplectic toric manifold and that
H has codimension 1. For any
x ∈Φ'(M), Δ_x is an interval in {x}×𝔨^* ≃{x}× that we write as {(x,y) ∈𝔥^*⊕| y ∈
[p_min(x), p_max(x)] }. By Remark <ref>,
DH (M,ω, Φ') (x) = p_max(x) - p_min(x) for all x ∈Φ'(M).
We observe that, since Φ(M) is a convex polytope, the
difference p_max - p_min is concave (cf. Proposition
<ref> below).
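For instance, let (M,ω,Φ) be a compact symplectic toric manifold whose moment polytope is the simplex with vertices (0,0), (a,0) and (0,a) for some a > 0 (e.g., a suitably scaled projective plane), and let H = S^1 ×{1}≤ T = T^2, so that π(x,y) = x. Then Φ'(M) = [0,a], p_min(x) = 0 and p_max(x) = a - x, so that DH (M,ω, Φ')(x) = a - x for all x ∈ [0,a], which is indeed concave.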
Our proof of Theorem <ref> uses extensively ideas from <cit.>. Before proceeding with the proof, we need to recall some
facts about singular values of the moment map.
§.§.§ Intermezzo 1: chambers of the moment map
We begin by relating singular points of the moment map of a
Hamiltonian T-space (that is not necessarily compact), to sheets
arising from one dimensional stabilizers (see Definition
<ref>). If K ≤ T is a closed one-dimensional
subgroup that occurs as the stabilizer of some point in M, then
every p ∈ M^K is a singular point of Φ. In fact, the converse
also holds.
Let be a Hamiltonian T-space. If K ≤ T is a
one-dimensional closed
subgroup that occurs as the stabilizer of some point
in M then Φ(M^K) is contained in the set of singular values
of Φ. Conversely, if x ∈^* is a
singular value of Φ, then there exists a one-dimensional closed
subgroup K ≤ T that occurs as the stabilizer of some point
in M such that x ∈Φ(M^K).
Since the action is Hamiltonian, the first statement
holds. Conversely, let p ∈ M be a singular point with Φ(p)
=x. Let H be the stabilizer of p, let h = dim H and let k
be the complexity of . Since the T-action is
Hamiltonian and since p is a singular point of Φ, dim H ≥ 1. If dim H = 1, there is nothing to prove. Hence, suppose that
dim H ≥ 2. By Theorem <ref>, it suffices to prove the result for the Hamiltonian T-action
on the local model of p. Hence, we may assume that is the
local model (Y,ω_Y,Φ_Y) at p and that p =
[1,0,0]. Moreover, by Corollary <ref>, the symplectic
slice representation ρ : H →
(S^1)^h+k of p is injective. The
stabilizer of any point in Y = T
×_H Ann(𝔥) ×^h+k is a subgroup of
H. In fact, if H̃≤ H is the stabilizer of some
point in Y, we have that
Y^H̃ = T ×_H Ann(𝔥) ×
(^h+k)^H̃,
where H acts on ^h+k via
the symplectic slice representation of p. Since the H-action on ^h+k is linear,
(^h+k)^H̃ is a subspace for any subgroup H̃≤ H. Hence, to prove the result, it suffices to show that there exists a
one-dimensional closed subgroup K ≤ H that occurs as a
stabilizer of a point in ^h+k.
Let η_1,…, η_h+k∈𝔥^* be the isotropy
weights of p. Since the symplectic slice representation of p is
injective, the H-action on ^h+k is effective. This implies that η_1,…, η_h+k
span 𝔥^*. Hence we may assume that there exists s ≥
1 such that the span of η_s+1,…, η_h+k has codimension
1. Since the H-action on ^h+k is Hamiltonian with moment map Φ_H(z) = 1/2∑_j=1^h+k
|z_j|^2 η_j, it can be checked directly that all
points in
{z = (z_1,…,z_h+k) ∈^h+k| z_j = 0 if
j=1,…,s, z_j ≠ 0 if j ≥ s+1 },
have stabilizers of dimension one, as desired.
In other words, the set of singular values of the moment map equals
the union of the moment map image of all sheets that are stabilized by
some one dimensional closed subgroup
of T (see Definition <ref>). Each such image is contained
in some affine hyperplane (see Remark
<ref> and the discussion preceding
it). If, in addition, M is compact there are two important
consequences. First, there are only finitely many subgroups of T that occur
as the stabilizer of some point in M (see <cit.>). Second, since the action is
Hamiltonian, Φ has some singular
point; hence, by Lemma <ref>, there exists a
one-dimensional closed subgroup of T that occurs
as the stabilizer of some point in M. Let K_1,…, K_r ≤ T be the
collection of such one dimensional closed subgroups. Since M is
compact, by Lemma <ref>, M^K_i is a compact submanifold of M
for each i=1,…,
r and, therefore, has finitely
many connected components N_i1,…, N_is_i. For each i,j,
we denote the corresponding sheet by (N_ij,ω_ij,Φ_ij)
and we set Δ_ij:= Φ_ij(N_ij).
We observe that the union of the Δ_ij's includes the union of all
facets. More precisely, the following holds.
Let be a compact Hamiltonian T-space. For any facet
ℱ of Φ(M), there exist indices i,j as above so
that ℱ = Δ_ij.
Let (M_ℱ,ω_ℱ,Φ_ℱ) be the
sheet corresponding to ℱ (see Corollary <ref> and
(<ref>)), and let H_ℱ be its stabilizer. By Lemma
<ref>, since the codimension of ℱ
in ^* is one, the dimension of H_ℱ is also one. By
Corollary <ref>, it follows that
(M_ℱ,ω_ℱ,Φ_ℱ) is one
of the sheets constructed above.
The complement of the union of the Δ_ij's in the moment polytope is precisely Φ(M)_reg. We call the
closure of a connected component of this complement a chamber of
Φ(M). These chambers partition the moment polytope into
subpolytopes, i.e., the following properties hold (see Figure <ref>):
* Each chamber is a polytope in ^* of full dimension and any two chambers
intersect in a common face.
* Let F be a facet of a chamber. The set of
points x∈ F that are regular values for all the moment maps
Φ_ij of the sheets
(N_ij,ω_ij,Φ_ij) constructed above is dense in
F. Moreover this set is contained in the interior of
F. Conversely, if x is a singular value that is a regular value
for all the moment maps
Φ_ij of the sheets
(N_ij,ω_ij,Φ_ij) constructed above, then x lies in
the interior of a facet of a chamber.
* If two chambers 𝔠 and
𝔠' intersect in a face F, then there exists a
sequence of chambers 𝔠_0 = 𝔠,
𝔠_1,…, 𝔠_s = 𝔠' with the
property that the intersection of 𝔠_l and
𝔠_l+1 is a facet that contains F for all
l=0,…, s-1.
Properties <ref> – <ref> follow from the above
definition of the N_ij's and from Theorems <ref>
and <ref> (see also <cit.>).
By Remark <ref>, Theorem
<ref> follows at once if the interior of Φ(M)
consists entirely of regular values. The following result describes
precisely when this happens in terms of exceptional sheets.
Let be a compact Hamiltonian T-space. There is
precisely one chamber of Φ(M) if and only if there are no
exceptional sheets.
If Φ(M) has no exceptional sheets then all sheets are contained
in the boundary of Φ(M) by Lemma
<ref>. This implies that the union of
the Δ_ij's constructed above is contained in the boundary of
Φ(M), which equals the union of the facets of Φ(M) since
Φ(M) is a convex polytope. By Lemma
<ref>, the union of all the facets of
Φ(M) is contained in the union of the Δ_ij's. Hence,
the union of the Δ_ij's equals the boundary of
Φ(M). Since Φ(M) is a convex polytope of full dimension, the complement of the boundary of
Φ(M) in Φ(M) equals the interior of Φ(M), which is
connected. Hence, there is precisely one chamber of Φ(M).
Conversely, if Φ(M) has at least two chambers, then
at least one of the Δ_ij's constructed above cannot be
contained in the boundary of Φ(M). By Lemma
<ref>, the corresponding sheet is exceptional.
Seeing as the case of only one chamber has already been proved, in
what follows (namely, in Intermezzo 2 and in the proof of Theorem
<ref> below), we assume that there are at least
two chambers.
§.§.§ Intermezzo 2: the wall-crossing formula
The main tool that we use
in the proof of Theorem <ref> is the so-called wall-crossing
formula for compact Hamiltonian T-spaces (see <cit.>, <cit.> and <cit.>). We recall it here for completeness and
we draw on the above Intermezzo for notation. Moreover, in this
subsection we consider the closure of the complement of the moment map
as a chamber.
Let
𝔠_± be two chambers in Φ(M) that intersect in a
facet F. Let ξ∈ℓ be the primitive
element that is normal to the hyperplane supporting F and that points out
of 𝔠_- (see Figure <ref>). We fix a point x ∈ F that
has the property that it is a regular value for all the moment maps
Φ_ij of the sheets
(N_ij,ω_ij,Φ_ij) constructed in the above Intermezzo;
such a point exists by property <ref>.
We use the fixed inner
product on to choose a complementary subspace 𝔨 to
the span of ξ, i.e., = ⟨ξ⟩⊕𝔨. Hence, viewing as the space of homogeneous polynomials of
degree one on ^*, we can view polynomials on ^* as being
generated by ξ and polynomials on 𝔨^*. Moreover,
since ξ∈ℓ, we have that exp (⟨ξ⟩) is a
circle in T that we denote by S^1. The subspace 𝔨 is
isomorphic to the Lie algebra of the quotient T/S^1. In what follows, we use this identification tacitly since
it is compatible with the identification of Remark <ref>.
Let f_±: ^* → be the polynomial that,
when restricted to the interior of 𝔠_±, equals
(<ref>). Since x is a regular value of
Φ_ij, it lies in the interior of a
chamber 𝔠_ij of Φ_ij(N_ij) for all i,j. Hence, there is a corresponding polynomial
f_ij: 𝔨^* → that, when restricted to the interior of the
𝔠_ij, equals (<ref>). (If x ∉Φ_ij(N_ij), this polynomial is identically zero.)
Finally, for each i,j, we let κ_ij be half the codimension
of N_ij in M and, if i,j are such that x ∈Φ_ij(N_ij), we let α_ij1,…,
α_ijκ_ij∈ be the isotropy weights for the S^1-action on the normal bundle to N_ij.
Set the notation as above. For all y ∈^* we have that
f_+(y)- f_-(y) = ∑_{i,j | x ∈Φ_ij(N_ij)}ξ^κ_ij-1(y-x)(∏_s=1^κ_ijα_ijs)^-1[f_ij(y-x)/(κ_ij-1)!
+ P_ij(y-x)],
where P_ij is a polynomial depending on i,j that is divisible
by ξ.
* We stress that Theorem <ref>
also holds if, say, 𝔠_- is the closure of the complement of
the moment map image.
* The polynomials P_ij have been computed explicitly (see
<cit.>). They depend on the symplectic
reduction of N_ij at x by the T/S^1-action.
§.§.§ Back to the Duistermaat-Heckman function of
With the above Intermezzos, we have all the ingredients to proceed with the proof of Theorem <ref>.
First, we prove existence of a continuous extension. Let 𝔠_1,…, 𝔠_l be the
chambers of Φ(M) and, for each i = 1,…, l, let f_i:^*
→ be the polynomial that equals
(<ref>) when restricted to the interior of
𝔠_i. The result is proved if
we show that the map given by
x ↦ f_i(x) if x ∈𝔠_i
is well-defined, i.e., given two chambers 𝔠_i and
𝔠_j such that 𝔠_i∩𝔠_j ≠∅, the following holds:
f_i(x) = f_j(x) for all x ∈𝔠_i ∩𝔠_j.
Clearly equation (<ref>) holds if i=j, so suppose that i
≠ j. Set F := 𝔠_i ∩𝔠_j; this is
a face of both 𝔠_i and 𝔠_j (see property <ref>).
We consider
first the special case that F is a facet. Since the set of points in F that
are regular values for all the sheets
(N,ω_N,Φ_N) constructed in Intermezzo 1 is
dense in F (see property <ref>), and since both f_i and
f_j are continuous, it suffices to check that equation (<ref>) holds
for those points. Let x ∈ F be such a point. If we show
that the codimension of each N such
that x ∈Φ(N) is at least four, then we are done by
Theorem <ref> (cf. <cit.>). Since F is a
facet of two distinct chambers, it is not contained in the boundary
of Φ(M). In particular, since x lies in the interior of F,
it does not lie in the boundary of Φ(M). Therefore, if N
is such that x ∈Φ(N), then Φ(N) is
not contained in the boundary of Φ(M). By Lemma
<ref>, (N,ω,Φ)
is an exceptional sheet. Thus the complexity of
(N,ω,Φ) is strictly less than that of
by Proposition <ref>. Since the dimension of the
torus that acts effectively on N is one less than that of
T, it follows that the codimension of N in M is at least
four, as desired.
If F is not a facet, then by property <ref> above we can find chambers 𝔠_0 =
𝔠_i, 𝔠_1,…, 𝔠_s =
𝔠_j such that 𝔠_l and 𝔠_l+1
intersect in a facet that contains F for all l=0,…, s-1 (see property <ref>). The result then follows
immediately by the above special case.
Uniqueness of the continuous extension follows immediately by
observing that Φ(M)_reg is a dense subset
of Φ(M) and that the function of (<ref>) is continuous.
The Duistermaat-Heckman function is an invariant of the isomorphism
class of a compact Hamiltonian T-space that plays an important role in this
paper. As an illustration, the following result describes the restriction of the
Duistermaat-Heckman function of to any facet of the moment
polytope.
Let be a compact Hamiltonian T-space. If
ℱ is a facet of Φ(M), then
DH |_ℱ =
DH(M_ℱ,ω_ℱ,Φ_ℱ) if dim M_ℱ = dim M - 2,
0 otherwise,
where (M_ℱ,ω_ℱ,Φ_ℱ)
is as in (<ref>).
In this proof we denote the closure
of the complement of the moment map image by
𝔠_- (cf. the paragraph
preceding Theorem <ref> and Remark
<ref>). By Lemma <ref>,
(M_ℱ,ω_ℱ,Φ_ℱ) is one
of the sheets constructed in Intermezzo 1; in particular, the
quotient T/H_ℱ is isomorphic to S^1, where
H_ℱ is the stabilizer of (M_ℱ,ω_ℱ,Φ_ℱ). Let x ∈ℱ
be a regular value of Φ_ℱ. Since M_ℱ =
Φ^-1(ℱ) (see Lemma <ref>), it
follows that
(M_ℱ,ω_ℱ,Φ_ℱ) is the
only sheet constructed in Intermezzo 1 that contains x in its
moment map image. In particular, x is a regular value for all the
moment maps of all the sheets constructed in Intermezzo 1. Hence,
there exists a chamber 𝔠_+ and a facet F of
𝔠_+ such that x lies in the interior of F.
Let ξ∈ℓ be the primitive normal to the hyperplane
supporting F that points out of 𝔠_-. Let f_+ : ^* → denote the polynomial that, when restricted
to 𝔠_+, equals (<ref>). Since f_- ≡ 0, by Theorem <ref>,
f_+(y)= ξ^κ_ℱ-1(y-x) (∏_s=1^κ_ℱα_s)^-1[f_ℱ(y-x)/(κ_ℱ-1)!
+ P_ℱ(y-x)] for all y ∈^*,
where κ_ℱ is half of the codimension of
M_ℱ in M, α_1,…,
α_κ_ℱ∈ are the isotropy weights of
the S^1-action on the normal bundle to M_ℱ,
f_ℱ is the polynomial that equals (<ref>) when restricted to the
chamber of Φ_ℱ containing x, and P_ℱ is the polynomial
associated to (M_ℱ,ω_ℱ,Φ_ℱ) in
(<ref>). By (<ref>), if κ_ℱ≥ 2, then f_+(x) = 0. On the
other hand, if κ_ℱ = 1, since P_ℱ
is divisible by ξ (see Theorem <ref>), then the right hand side
of (<ref>) evaluated at x equals f_ℱ(0)/α_1.
Since the S^1-action is effective and ξ is chosen to point out
of 𝔠_-, it follows that α_1 = 1 and, hence,
f_+(x) = f_ℱ(0). Since f_+ and f_ℱ are
restrictions of DH and DH
(M_ℱ,ω_ℱ,Φ_ℱ)
respectively, we have shown that, if x is a regular value of
Φ_ℱ, then
DH(x) =
DH (M_ℱ,ω_ℱ,Φ_ℱ)(x) if dim M_ℱ = dim M - 2,
0 otherwise.
Since Φ_ℱ(M_ℱ)_reg is
dense in Φ_ℱ(M_ℱ), and since DH
and DH (M_ℱ,ω_ℱ,Φ_ℱ)
are continuous and defined on ℱ, equation (<ref>)
implies the desired result.
§.§.§ The Duistermaat-Heckman function of a complexity one T-space
In this section we prove some properties of the Duistermaat-Heckman
function of a compact complexity one T-space . We start with the
following special property that fails to hold in higher complexity
(see <cit.>).
The Duistermaat-Heckman function DH : Φ(M) → of a compact complexity one
T-space is concave.
By Theorem <ref>, DH is
continuous. Hence, it suffices to check that the restriction of DH to the interior of Φ(M) is concave. Since the complexity
of is one, by Remark <ref>, DH is piecewise
linear. Thus the restriction of DH to the interior of Φ(M) is concave if and only if it is
log-concave. The result then follows immediately from <cit.>.
As observed in <cit.>, continuity and concavity of
DH (see Theorem <ref>
and Proposition <ref>), together with convexity of Φ(M) (see Theorem <ref>), immediately
imply the following result.
The minimum of the Duistermaat-Heckman function of a compact
complexity one T-space is attained at a vertex.
Next we prove two results relating DH with exceptional orbits
and singular values of Φ.
Let be a compact complexity one T-space. If
there are no isolated fixed points, then M_exc =
∅ and the Duistermaat-Heckman
function DH : Φ(M) → is the restriction of an affine
function.
Since there are no isolated fixed points, Lemma
<ref> implies that there are no
exceptional fixed
points. Hence, by Lemma
<ref>, there are no
exceptional sheets. By Lemma <ref>,
M_exc = ∅. Moreover, by Lemma <ref>, there
is only one chamber of Φ(M). The result then follows by Remark <ref>.
Let be a compact complexity one T-space. If the
Duistermaat-Heckman function DH is constant, then there are
no singular values in the interior of Φ(M).
We prove the contrapositive. Suppose that there is a singular value
in the interior of Φ(M). Hence there exist two chambers
𝔠_- and 𝔠_+ of Φ(M) that intersect in
a facet F that is not contained in the boundary of Φ(M). As
in Intermezzo 2, we let ξ∈ℓ be a primitive element that
is normal to the hyperplane supporting F and points out of 𝔠_-. Moreover, we fix x ∈ F that is a regular value for
all the sheets (N_ij,ω_ij,Φ_ij) constructed in
Intermezzo 1; we observe that x lies in the interior of F (see
property <ref>).
Since F is not contained in the boundary
of Φ(M), neither is x. In particular, if x ∈Φ_ij(N_ij), then Φ_ij(N_ij) is not contained in the
boundary of Φ(M). Hence, (N_ij,ω_ij,Φ_ij) is
exceptional by Lemma <ref>. By
Proposition <ref>, the complexity of
(N_ij,ω_ij,Φ_ij) is strictly less than that of
. Since the complexity of is one, the complexity of
(N_ij,ω_ij,Φ_ij) is zero. Hence, the codimension of
N_ij in M equals 4. Therefore, the
lowest order term in ξ in the right hand side of equation
(<ref>) is
ξ(∑_{i,j | x ∈Φ_ij(N_ij)} (α_ij1α_ij2)^-1f_ij),
where, for each i,j, the polynomial f_ij and the integers
α_ij1,α_ij2 are as in the
discussion leading up to Theorem <ref>. Since x lies in the interior of
Φ(M), it follows that α_ij1α_ij2 < 0 and
f_ij(x) > 0 for each i,j. In particular, the polynomial in
equation (<ref>) is not identically zero and, therefore,
neither is the right hand side of equation (<ref>).
Let f_± : ^* → be the polynomial that, when restricted
to 𝔠_±, equals (<ref>). By (<ref>) and Theorem <ref>, f_+ and f_- are not
equal. Hence, since f_± is the restriction of the
Duistermaat-Heckman function DH to 𝔠_±,
DH is not constant, as desired.
The next result describes DH near a vertex of Φ(M) that
corresponds to a fixed surface. To this end, let v ∈Φ(M) be a vertex and let Σ = Φ^-1(v) be a fixed
surface. Let α_1,…,α_n be the isotropy weights of
Σ (see Remark <ref>), labeled so that
α_n = 0. Since v is a vertex, by part <ref> of
Corollary <ref>, a sufficiently small neighborhood of v
in Φ(M) is of the form
{ v + ∑_i=1^n-1t_i α_i | t_i ≥ 0
sufficiently small}.
Moreover, by part <ref> of
Corollary <ref>, for each i=1,…, n-1, the edge e_i of
Φ(M) that is incident to v is contained in the half-line
{v+t_iα_i | t_i ≥ 0}. Let (M_i,ω_i,Φ_i) be
the sheet corresponding to the edge e_i as in (<ref>) with
stabilizer H_i. By
Proposition <ref>, dim M_i = 4 and (M_i,ω_i,Φ_i) is a
compact complexity one Hamiltonian T/H_i-space. In what follows, we
identify T/H_i ≃ S^1. Let N be the normal bundle of Σ in M. Then N splits
T-equivariantly as a direct sum
N = L_1 ⊕…⊕ L_n-1,
where L_i denotes the normal bundle of Σ in M_i.
Let be a compact complexity one T-space of dimension
2n and let v ∈Φ(M) be a vertex such that Σ =
Φ^-1(v) is a fixed surface. Let α_1, …,
α_n-1 be the non-zero isotropy
weights of Σ. For all
t_1,…, t_n-1≥ 0 sufficiently small,
DH (v+ ∑_i=1^n-1t_iα_i)= ∫_Σω-∑_i=1^n-1t_i c_1(L_i) [
Σ],
where L_1,…, L_n-1 are as in (<ref>) and c_1(L_i)
is the first Chern class of L_i for i=1,…, n-1. In
particular, if DH attains its minimum at v, then
c_1(L_i)[Σ] ≤ 0 for all i=1,…, n-1.
Since v is a vertex and the complexity of is one, the restriction of DH to a
sufficiently small neighborhood of v is an affine function. Thus
there exist real numbers β_0,β_1,…, β_n-1 such
that, for all
t_1,…, t_n-1≥ 0 sufficiently small,
DH (v+ ∑_i=1^n-1t_iα_i) =
β_0 + ∑_i=1^n-1 t_iβ_i.
In order to determine the constants β_0,β_1,…,
β_n-1, it suffices to understand the restriction of DH to elements of the form v + t_jα_j for j=1,…,
n-1. Fix i=1,…, n-1. By Corollary
<ref>,
DH (v+t_iα_i) = DH(M_i,ω_i,Φ_i)(v +
t_iα_i).
By <cit.>, for
all t_i ≥ 0 sufficiently small,
DH(M_i,ω_i,Φ_i)(v +
t_iα_i) = ∫_Σω_i - t_i
c_1(L_i)[Σ].
The result follows immediately by comparing equations (<ref>),
(<ref>) and (<ref>).
The following result is an immediate consequence of piecewise
linearity of the Duistermaat-Heckman function of a compact complexity
one T- space and of Proposition <ref>.
Let be a compact complexity one T-space. The subset
{(x,t) ∈𝔱^* ×| x ∈Φ(M) , t
∈ [0,DH(x)]}
is a convex polytope in ^* ×.
The convex polytope of Corollary <ref> and the
Duistermaat-Heckman function of a compact complexity one T-space
are equivalent in the sense that knowing one allows one to reconstruct the
other. To emphasize the combinatorial nature of the problem we study,
we introduce the following notion.
The convex polytope (<ref>) of a compact complexity one
T-space is called the Duistermaat-Heckman polytope of .
Suppose that is a compact complexity one T-space obtained
by restricting a complexity zero T × S^1-action on
(M,ω) to the subtorus T ×{1}. The moment
polytope of the complexity zero action need not agree with the
Duistermaat-Heckman polytope of . However, the two are
related as follows. If Δ denotes the moment polytope of the
complexity zero T × S^1-action, we have that
Δ = { (x,y)∈Φ'(M) ×| y ∈
[p_min(x), p_max(x)] },
whereas, combining Example <ref> and
(<ref>), the Duistermaat-Heckman polytope of equals
{(x,t) ∈𝔱^* ×| x ∈Φ(M) , t
∈ [0,p_max(x) - p_min(x)]}.
Finally, we observe that the latter can be obtained from the former
by applying a piecewise integral affine transformation of ^*
⊕, where the lattice is ℓ^* ⊕ (see Figure <ref>).
§.§ Compact complexity preserving Hamiltonian T-spaces
In this section, we introduce a class of Hamiltonian T-spaces that
enjoy special properties, which are also enjoyed by compact
symplectic toric manifolds (see Corollary <ref>,
Proposition <ref> and Corollary
<ref>). We begin
with the following result, which extends <cit.>.
Let be a compact complexity k T-space. If N ⊂
M^T is a fixed submanifold with N = 2k, then Φ(N) is a
vertex of Φ(M). Moreover, for every face ℱ of
Φ(M) that contains Φ(N), the sheet
(M_ℱ,ω_ℱ,Φ_ℱ) is
stabilized by a connected subgroup and has complexity equal to k.
Since N= 2k, by Remark <ref>,
(N,ω_N) is regular and every point in N is regular. Hence,
by Remark <ref>, the
T-action on the normal bundle to N is toric. By Corollary
<ref>, Φ(N) is a vertex of Φ(M).
Let ℱ be a face of Φ(M) that contains Φ(N) and
let p ∈ N ∩ M_ℱ. By Corollary
<ref>, there exist points arbitrarily close to p
that have stabilizer equal to H_ℱ, the stabilizer of
(M_ℱ,ω_ℱ,Φ_ℱ). Since
p is regular and all stabilizers in a regular local model
are connected, H_ℱ is connected by Theorem <ref>. Finally, we observe that, since (N,ω_N)
is a fixed submanifold and N ⊆
M_ℱ, (N,ω_N) is a
sheet in (M_ℱ,ω_ℱ,Φ_ℱ). Since
dim N = 2k and N is fixed by the T/H_ℱ-action on M_ℱ,
by Proposition <ref>, the complexity of
(M_ℱ,ω_ℱ,Φ_ℱ) is at
least k. On the other hand,
(M_ℱ,ω_ℱ,Φ_ℱ) is a
sheet in and the complexity of the latter is k. Hence, by
Proposition <ref>, the complexity of
(M_ℱ,ω_ℱ,Φ_ℱ) is at
most k.
Let be a compact complexity k T-space. The following are equivalent:
* for each face ℱ of Φ(M), the
complexity of the sheet (M_ℱ,ω_ℱ,Φ_ℱ) equals k;
* for each face ℱ of Φ(M) of codimension r, M_ℱ has maximal dimension,
i.e., M_ℱ = 2n-2r;
* for each vertex v of Φ(M), Φ^-1(v) has maximal
dimension, i.e., Φ^-1(v) = 2k.
For any face ℱ of
Φ(M) of codimension r, the complexity of
(M_ℱ,ω_ℱ,Φ_ℱ) equals
that of if and only if M_ℱ = M -
2r. This shows that <ref> and <ref> are equivalent. Since vertices are faces of maximal codimension, <ref> implies
<ref>. The converse follows from Proposition <ref>.
Motivated by Corollary <ref>, we introduce the
following terminology.
A compact Hamiltonian T-space is said to be complexity
preserving if it satisfies any (and hence all) of the conditions
<ref> – <ref> in Corollary <ref>.
Compact complexity preserving Hamiltonian T-spaces generalize compact
symplectic toric manifolds.
By Corollary <ref>, if is a compact complexity preserving Hamiltonian T-space,
then, for every face ℱ of Φ(M), so is (M_ℱ,ω_ℱ,Φ_ℱ).
The following result describes a property of the Duistermaat-Heckman
function of compact complexity preserving Hamiltonian T-spaces and is an immediate consequence of Propositions <ref>,
<ref> and Remark
<ref>.
Let be a compact complexity k T-space and suppose that N
⊂ M^T is a fixed submanifold with N = 2k. For
any face ℱ of Φ(M) containing Φ(N),
DH |_ℱ =
DH(M_ℱ,ω_ℱ,Φ_ℱ),
where (M_ℱ,ω_ℱ,Φ_ℱ)
is the sheet given in (<ref>). In particular, if is complexity preserving, then
(<ref>) holds for all faces of Φ(M).
To conclude this section, we prove the following result, which we need
in Section <ref>.
Let be a compact complexity preserving Hamiltonian T-space
of positive complexity. If there are no singular values of Φ contained in
the interior of Φ(M),
then the action has no isolated fixed points.
Suppose that p ∈ M^T is isolated. By condition <ref> in Corollary <ref>,
Φ(p) is not a vertex. Hence, if
ℱ is the face of smallest dimension in which Φ(p)
lies, then ℱ≥ 1. By part <ref>
of Corollary <ref>, an open neighborhood of
Φ(p) in Φ(M) can be identified with an open neighborhood of (0,0) ∈^ℱ×^d-ℱ in ^ℱ×𝒞'_p, where 𝒞'_p is the
proper cone in the proof of part <ref> of Corollary
<ref>. We observe that, since ℱ≥
1, the subset {0}×𝒞'_p intersects the interior
of ^ℱ×𝒞'_p. Choose d - ℱ linearly independent isotropy weights of p that span
𝒞'_p and, if needed, complete this set with ℱ
- 1 linearly
independent isotropy weights of p whose span is contained in ^ℱ. The span of these isotropy weights
α_1,…, α_d-1 satisfies
(Φ(p) + _≥ 0⟨α_1,…, α_d-1⟩) ∩Int(Φ(M)) ≠∅,
where Int(Φ(M)) denotes the interior of
Φ(M).
By Theorem <ref>, we may identify a T-invariant
neighborhood of p with a T-invariant neighborhood of 0 ∈^n so that Φ becomes the map
(z_1,…, z_n) ↦π∑_i=1^n α_i |z_i|^2
+Φ(p).
Moreover, by part <ref> of Corollary <ref>, an
open neighborhood of Φ(p) in Φ(M) can be identified with
an open neighborhood of Φ(p) in the image of the map of
equation (<ref>).
By (<ref>), the affine hyperplane Φ(p) + ⟨α_1,…, α_d-1⟩ intersects Int(Φ(M)). All values in this intersection
have a one-dimensional stabilizer: this is because
dim (Ann(⟨α_1 ⟩) ∩…∩Ann(⟨α_d-1⟩)) = 1,
since α_1,…, α_d-1 are linearly
independent. Hence, there is a singular value of Φ in
Int(Φ(M)), a contradiction.
§.§.§ Moment polytopes for compact complexity preserving
Hamiltonian T-spaces
In this subsection, we characterize the moment map image of complexity
preserving compact Hamiltonian T-spaces (see Proposition
<ref> below). To this end, given a polytope Δ⊂^* and a vertex v ∈Δ, we say that Δ is
smooth at v if
* there are exactly d edges e_1,…, e_d that are incident
to v, and
* there exists a basis α_1,…, α_d of ℓ^*
such that α_i is a tangent vector to the edge e_i for all
i=1,…, d.
A polytope Δ⊂^* is smooth at v if and only if the collection
of inward (or outward) normals to the facets of Δ that
contain v can be chosen to be a basis of ℓ. We say that a polytope Δ is
Delzant if it is smooth at every vertex.
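For instance, take d = 2 and identify ℓ^* with ℤ^2. The simplex with vertices (0,0), (1,0) and (0,1) is Delzant, whereas the triangle with vertices (0,0), (1,0) and (1,2) is not smooth at the vertex (0,0): the primitive directions (1,0) and (1,2) of the two edges incident to (0,0) span a sublattice of index 2 in ℤ^2 and, therefore, do not form a basis of ℓ^*.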
The moment map image of a compact symplectic toric manifold is
a Delzant polytope and, conversely, every Delzant polytope
arises as such an image (see <cit.>). In general, this fails
to be true in higher complexity. However, under the additional
hypothesis of complexity preserving, the following result holds.
The moment map image of a compact complexity preserving Hamiltonian
T-space is a Delzant polytope in ^*.
Conversely, for every
Delzant polytope Δ in ^* and for every
integer k ≥ 0, there exists a compact complexity preserving Hamiltonian
T-space of complexity k such that Φ(M) = Δ.
Let be a compact complexity preserving Hamiltonian
T-space of complexity k. Let v ∈Φ(M) be a vertex. By Corollary
<ref>, Φ^-1(v) = 2k. Let α_1,…,α_n
be the isotropy weights of Φ^-1(v) (see Remark <ref>). Since
Φ^-1(v) =2k, precisely k weights are zero. Without loss of generality, we may assume
that α_n-k+1,…, α_n = 0. By part <ref> of Corollary <ref>
an open neighborhood of v in Φ(M) looks like
an open neighborhood of 0 in
_≥ 0-span{α_1,…,α_n}=
_≥ 0-span{α_1,…α_n-k}.
Since the complexity of is k, d = T =
n-k. Hence, by equation (<ref>), there are exactly
d edges that are incident to v. Moreover, by Remark <ref>,
the -span of α_1,…,α_d equals ℓ^*. Hence,
Φ(M) is smooth at v and the first statement follows.
Conversely, fix an integer k ≥ 0 and suppose that Δ is a
Delzant polytope in ^*. By the classification of compact symplectic toric manifolds
(see <cit.>), there exists a compact
complexity zero T-space (M',ω',Φ') such that Φ'(M') =
Δ. Let (M”,ω”) be a closed symplectic
manifold of dimension 2k. Consider the T-action on M:= M'× M” given by
taking the product of the above T-action on M' with the
trivial T-action on M”. This action is Hamiltonian for the
symplectic form ω obtained by summing the pullbacks to M of ω' and ω”
along the projections. A moment map for this T-action is given by the pullback to M of
Φ' along the projection M → M'; we denote this moment map
by Φ. Then (M,ω, Φ) is a compact complexity preserving
complexity k T-space with moment map image given by
Δ, as desired.
§.§ Compact tall complexity one T-spaces
In this section we introduce an important class of compact complexity one T-spaces.
A compact complexity one T-space is called tall if no reduced space is a point.
To shed light on Definition <ref> we observe that, if is a compact
complexity one T-space, then
the reduced space M_x is homeomorphic to a closed, connected orientable
surface for any x ∈Φ(M)_reg (see Section <ref>). If is tall, then this holds for all x ∈Φ(M) (see <cit.>). Moreover, the following
result holds.
A compact complexity one T-space is tall if and only if it is
complexity preserving.
Let be a compact tall complexity one T-space and let v
∈Φ(M) be a vertex. By Remark
<ref>, Φ^-1(v) is either a fixed point or a
fixed surface. Since the reduced space at v can be identified with
Φ^-1(v) and since is tall, Φ^-1(v) has
dimension two. Hence, since has complexity one, it
satisfies property <ref> in Corollary
<ref>; therefore, it is complexity preserving. Conversely, if is complexity preserving, then it satisfies
property <ref> in Corollary
<ref>. Hence, by <cit.>, no
reduced space is a point and is tall.
In <cit.> the authors classify tall
complexity one T-spaces[In loc. cit. the authors consider a more general class of tall complexity one spaces, namely those for which M is
connected but not necessarily compact
and such that there exists an open convex set 𝒯⊆^* containing the image of the moment map with the property that Φ M
→𝒯 is proper. However, we state all results in loc. cit. only in the compact case.]. Below we recall this
classification. Henceforth, we fix a
compact complexity one T-space . As a consequence of <cit.> or <cit.>, any two reduced spaces of
are homeomorphic. This motivates introducing the following notion.
The genus of a compact tall complexity one T-space
is the genus of the reduced space M_x for any x ∈Φ(M).
The following result is a stepping stone for the classification of
compact tall complexity one T-spaces (see Proposition 2.2 in <cit.>, Proposition 1.2 and
Remark 1.9 in
<cit.>)
If is a compact tall complexity one T-space, then there exist a closed oriented surface Σ and a map
f : M/T →Σ such that
(Φ,f) : M/T ⟶Φ(M)×Σ
is a homeomorphism and the restriction f : Φ^-1(x)/T →Σ is orientation-preserving for any x ∈Φ(M). Given two such maps f and f', there exists an orientation-preserving homeomorphism
ξ : Σ' →Σ such that f is homotopic to ξ∘ f' through maps that induce homeomorphisms
M/T→Φ(M)×Σ.
By Proposition <ref>, the genus of
is the genus of Σ.
The next invariant of tall complexity one T-spaces is related to the
exceptional orbits (see Remark <ref>), and is introduced below. To this end, we observe
that given a closed surface Σ and a map f : M/T →Σ as in Proposition <ref>, its restriction to
M_exc makes (Φ,f) : M_exc→Φ(M) ×Σ injective.
Let (M,ω, Φ), (M',ω',Φ') be compact tall complexity one
T-spaces and let Σ, Σ' be closed oriented surfaces.
* A painting of (M,ω,Φ) is a map f :
M_exc→Σ such that
(Φ,f) : M_exc→Φ(M) ×Σ
is injective.
* An isomorphism of exceptional orbits is a homeomorphism
i : M_exc→ M'_exc satisfying Φ = Φ' ∘ i that sends each orbit
to an orbit with the same symplectic slice representation.
* A painting f : M_exc→Σ of (M,ω,
Φ) is equivalent to a painting f' : M'_exc→Σ' of (M',ω',
Φ') if there exists an isomorphism of exceptional orbits i :
M_exc→ M_exc' and an
orientation-preserving homeomorphism ξ : Σ→Σ'
such that f' ∘ i and ξ∘ f are homotopic through paintings.
By Proposition <ref>, we can associate an equivalence class
of paintings to a compact tall complexity one T-space (see <cit.>). For our purposes, it is useful to introduce the following terminology.
Let be a tall, compact complexity one T-space. The equivalence class of paintings [f] associated to is
trivial if there exists a painting f: M_exc→Σ representing [f] that is constant on each connected
component of M_exc.
The classification of compact tall complexity one T-spaces is as
follows.
(Karshon–Tolman, Theorem 1 in <cit.>, and Theorem 1.8 and
Remark 1.9 in <cit.>)
Two compact tall complexity one T-spaces are isomorphic if and only
if they have equal genera, equal
Duistermaat-Heckman measures, and equivalent paintings.
The invariants of a compact tall complexity one T-space
determine the moment map image, as it is the
support of the Duistermaat-Heckman measure (cf. <cit.>).
§ COMPACT MONOTONE HAMILTONIAN T-SPACES
In this section we use ideas and techniques from equivariant
cohomology, referring the reader to <cit.> for
details and background.
§.§ The weight sum formula
In this paper we are mostly concerned with compact Hamiltonian
T-spaces satisfying the following condition.
A symplectic manifold (M,ω) is monotone if there exists λ∈ such that
c_1=λ[ω], where c_1 is the first Chern class of (M,ω).
It is positive monotone if λ>0.
* If
(M,ω) is compact and monotone, since [ω] ≠ 0,
then λ in Definition <ref> is unique.
* Let (M,ω) be a monotone
symplectic manifold and let Ψ : (M',ω') → (M,ω)
be a symplectomorphism. Since Ψ pulls back almost complex
structures that are compatible with ω to almost complex
structures that are compatible with ω', (M',ω')
is monotone. Moreover, if (M,ω) is compact and if λ,
λ' ∈ are such that c_1 =
λ[ω] and c_1' = λ'[ω'], then λ = λ'.
If (M,ω) is such that H^2(M;)=, then it is monotone (e.g. P^n). In general, (positive) monotonicity is very restrictive. In the
presence of a Hamiltonian torus action,
the following result holds.
If (M,ω) is compact and monotone, and admits
an effective Hamiltonian T-action, then (M,ω) is positive monotone.
The proof follows mutatis mutandis that of <cit.>, in which it is assumed that M^S^1 is
discrete (see <cit.>). Let H ≤ T be a one
dimensional subtorus and let ϕ : M →𝔥^* be the
induced moment map. We identify H ≃ S^1 and consider
(M,ω, ϕ) as a Hamiltonian S^1-space. We observe that
ϕ M → (Lie(S^1))^* is a Morse-Bott function;
moreover, by
(<ref>), the isotropy weights in the positive normal bundle to a fixed point are
positive (cf. <cit.>). Therefore, if F_min (respectively F_max) denotes a fixed component on which ϕ attains its minimum
(respectively maximum), all the isotropy weights in the normal bundle to F_min (respectively F_max) are
positive (respectively negative). Moreover, even if some of the isotropy weights of p_min∈ F_min (respectively at
p_max∈ F_max) are zero, by the effectiveness of the action
some of them must be different from zero.
Hence, the sum of the isotropy weights of p_min (respectively p_max) is strictly positive (respectively strictly
negative).
To complete the proof, it is enough to consider the equivariant extensions of [ω] and
c_1 in the equivariant cohomology ring of M, which are respectively
[ω-ϕ] and c_1^S^1,
and to compare them
at p_min and p_max to deduce that λ must be positive
(see equation (5.1) in <cit.>).
Throughout this paper, a Hamiltonian T-space is monotone if (M,ω) is. The following result is an immediate
consequence of Remark <ref> and Proposition
<ref>.
If (M,ω, Φ) is a compact monotone Hamiltonian T-space, then there exists a unique λ >0 such that
c_1 = [λω].
The next proposition extends
<cit.>.
If is a compact Hamiltonian T-space with c_1 =
[ω], then there exists a unique w ∈^* such that the translated moment map Φ_w:=Φ + w
satisfies the weight sum formula, i.e.,
Φ_w(p) =-∑_j=1^n α_j, for all p∈ M^T ,
where α_1,…,α_n ∈ℓ^*
are the isotropy weights of p.
Since c_1=[ω] and since the action is Hamiltonian, there
exists a unique w ∈^* such that
c_1^T+w=[ω-Φ].
Thus the moment map Φ_w = Φ+w satisfies c_1^T=[ω-Φ_w].
Since M is compact and the action is Hamiltonian, there exists p
∈ M^T. The equality in (<ref>) is obtained by
comparing these two equivariant cohomology classes at p ∈ M^T
and observing that c_1^T(p)=∑_j=1^nα_j.
Let (M,ω,Φ) be a compact Hamiltonian T-space with c_1 =
[ω]. If (M',ω', Φ') is isomorphic to
(M,ω,Φ), then c_1' = [ω'] by Remark
<ref>. Let Ψ : (M,ω,Φ) →
(M',ω',Φ') be an isomorphism. By Remark <ref>,
Ψ is equivariant. Hence, by (<ref>), if w, w' ∈𝔱^* are as in Proposition <ref> for Φ and Φ'
respectively, then w = w'.
A compact monotone Hamiltonian T-space is normalized if
* c_1=[ω], and
* the moment map Φ satisfies the weight sum
formula (<ref>).
In this case we call a normalized monotone Hamiltonian T-space.
Since the isotropy weights of a fixed point lie in ℓ^*, Proposition <ref> has the following
immediate consequence.
If is a normalized monotone Hamiltonian T-space, then
[ω] ∈ H^2(M;) and, for any p
∈ M^T, Φ(p) ∈ℓ^*.
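For instance, let M = S^2 with the rotation action of T = S^1 and with ω scaled so that ∫_S^2ω = 2; since c_1(S^2)[S^2] = 2, this space is normalized monotone after translating the moment map as in Proposition <ref>. The two fixed points have isotropy weights +1 and -1, so the weight sum formula sends them to -1 and +1 respectively; hence Φ(M) = [-1,1] and the images of the fixed points indeed lie in ℓ^* ≅ℤ.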
The following result is an immediate
consequence of Corollary
<ref> and Proposition <ref>.
If is a compact monotone Hamiltonian T-space, then there exist unique
λ >0 and w ∈𝔱^* such that (M,λω, λΦ + w) is normalized monotone.
Classifying compact monotone Hamiltonian T-spaces is almost equivalent to classifying
normalized monotone Hamiltonian T-spaces. More precisely, the
following holds.
Let (M,ω,Φ), (M',ω',Φ') be compact monotone Hamiltonian
T-spaces. Let λ, λ' > 0 and v, v' ∈𝔱^*
be as in Corollary <ref>. Then (M,ω,Φ)
and (M',ω',Φ') are isomorphic if and only if λ =
λ', v = v' and (M,λω, λΦ + v) is isomorphic to (M',λ'
ω', λ'Φ' + v').
Suppose that (M,ω,Φ)
and (M',ω',Φ') are isomorphic. Let Ψ : (M,ω) →
(M',ω') be a symplectomorphism such that Φ' ∘Ψ =
Φ. By part <ref> of Remark
<ref> and by Remark
<ref>, λ = λ' and v =
v'. Hence, Ψ : (M,λω) → (M',λ'ω') is
a symplectomorphism and (λ'Φ' + v') ∘Ψ = λΦ + v, i.e.,
Ψ is an isomorphism between (M,λω, λΦ + v) and (M',λ'
ω', λ'Φ' + v'). The converse is entirely analogous and is
left to the reader.
§.§ Moment polytopes of monotone complexity preserving Hamiltonian T-spaces
We recall that a polytope Δ in ^* can be described by its minimal representation (see Section <ref>):
Δ=⋂_i=1^l {w∈^* |⟨ w,ν_i ⟩≥
c_i}
for some inward normals ν_1,…, ν_l ∈ and
c_1,…, c_l ∈. Such a polytope Δ is integral if its
vertices belong to ℓ^*.
If Δ is integral, then it is possible to choose the inward
normal ν_i so that it is a primitive element of ℓ, for every i=1,…,l.
The corresponding constants c_i's are therefore uniquely determined
by this choice of ν_i's.
A polytope Δ⊂^* is
reflexive if it is integral and ν_i ∈ℓ
in its minimal representation is primitive with corresponding c_i=-1, for all i=1,…, l.
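For instance, with ℓ^* ≅ℤ^2 and {e_1,e_2} the standard basis of ℓ≅ℤ^2, the triangle
Δ = {w∈^* |⟨ w,e_1 ⟩≥ -1, ⟨ w,e_2 ⟩≥ -1, ⟨ w,-e_1-e_2 ⟩≥ -1},
with vertices (-1,-1), (2,-1) and (-1,2), is reflexive: the inward normals e_1, e_2 and -e_1-e_2 are primitive and all the constants in the minimal representation equal -1. This triangle is moreover Delzant and, up to the action of GL(ℓ^*), it is the moment polytope of  P^2 endowed with its standard toric structure and a suitably scaled Fubini-Study form.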
The following result is an immediate consequence of Definition <ref> and is stated below without proof (see <cit.>).
For any reflexive polytope the origin is the only interior lattice point.
Lemma <ref> and a result of Lagarias and
Ziegler <cit.>
imply the following result.
Up to the action of GL(ℓ^*), there are only finitely many reflexive
polytopes in ^*.
For instance, there are sixteen two-dimensional reflexive polytopes (see <cit.>), five of which are also
Delzant (see Figure <ref>).
For a rational polytope Δ⊂^*, given a vertex v of Δ
one can choose the vectors α_i's in (<ref>), which support the edges coming out of v, to be
primitive elements of ℓ^*; these vectors are uniquely determined and referred to as the weights of the vertex v.
Reflexive Delzant polytopes are in particular rational and they can be
characterized in terms of the weights of their vertices. More
precisely, the following result, proved in various contexts by various
authors, holds (see, in particular, <cit.>, <cit.> and
<cit.>).
Let Δ⊂^* be a d-dimensional Delzant polytope. The following conditions are equivalent:
* Δ is a reflexive polytope.
* Δ satisfies the weight sum formula, i.e., for
each vertex v ∈Δ,
v = - ∑_j=1^d α_j ,
where α_1,…,α_d are the weights of v.
In <cit.> it is assumed that the origin is an
interior point of Δ to prove that <ref> implies <ref>.
However this follows by (<ref>). Indeed, consider the multiset 𝒲 of all the primitive
vectors appearing as weights of vertices of Δ.
Note that, if α∈𝒲 has multiplicity r, then so
does -α.
Hence the sum of all the weights in 𝒲 is 0 ∈^*.
Therefore, if Δ satisfies (<ref>), then
∑_v∈𝒱 v = 0 ,
where 𝒱 is the set of vertices of Δ. Since Δ is the
convex hull of its vertices, the interior points of Δ are
precisely those that can be written as follows:
∑_v∈𝒱λ_v v with λ_v>0 for all v∈𝒱 and ∑_v∈𝒱λ_v=1 .
Let λ_v=1/|𝒱| for all
v∈𝒱. Then by (<ref>), we have 0 =∑_v∈𝒱λ_v
v. Hence, (<ref>) yields that 0 belongs to the interior of Δ.
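For instance, for the reflexive Delzant triangle with vertices (-1,-1), (2,-1) and (-1,2) considered above, the weights at the vertex (-1,-1) are (1,0) and (0,1), and indeed (-1,-1) = -((1,0)+(0,1)); similarly, the weights at (2,-1) are (-1,0) and (-1,1), whose negative sum is again the vertex itself.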
The following technical lemma regarding reflexive Delzant polytopes is
used extensively in Sections <ref> and
<ref> below.
Let Δ be a reflexive Delzant polytope in ^*, let
ℱ be a facet of Δ supported on the affine
hyperplane {w ∈^* |⟨ w, ν⟩ = -1}, let v be a vertex of
Δ in ℱ, and let α_1,…, α_d ∈ℓ^* be the weights of v ordered so that
ℱ⊂ v + _≥ 0⟨α_1,…,α_d-1⟩.
Then ⟨α_d, ν⟩ = 1. Moreover, if e is the
edge that is incident to v and comes out of ℱ, then
there exists t_max∈_> 0 such that e = {v + t α_d
| 0 ≤ t ≤ t_max}.
By (<ref>), ⟨α_i, ν⟩ = 0 for all
i=1,…, d-1. By Proposition <ref>, Δ
satisfies the weight sum formula at v. Since v ∈ℱ,
⟨α_d, ν⟩ = 1.
If e is an edge of Δ as in the statement, then there exists
t_max > 0 such that e = {v + t α_d
| 0 ≤ t ≤ t_max}. It remains to show that t_max is a
positive integer. To this end, we observe that v':= v + t_maxα_d is a vertex of Δ. Since Δ is reflexive, v'
∈ℓ^*. By definition of weight, α_d is
primitive in ℓ^*. Hence, t_max∈ℤ_>0.
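As an illustration, consider again the reflexive Delzant triangle with vertices (-1,-1), (2,-1) and (-1,2), let ℱ be the facet supported on {w ∈^* |⟨ w, e_1 ⟩ = -1} and let v = (-1,-1). Then α_1 = (0,1) points along ℱ, α_2 = (1,0) satisfies ⟨α_2, e_1 ⟩ = 1, and the edge coming out of ℱ at v is {(-1,-1) + t(1,0) | 0 ≤ t ≤ 3}, so that t_max = 3 ∈ℤ_>0.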
To conclude this section, we look at the relation between normalized
monotone complexity preserving T-spaces and reflexive Delzant
polytopes in ^*. We start with the following strengthening of Proposition <ref> under the
additional assumption of monotonicity.
The moment map image of a normalized monotone complexity
preserving T-space is a reflexive Delzant polytope in ^*.
Conversely, for every reflexive Delzant polytope Δ in ^* and for
every integer k ≥ 0, there exists a normalized monotone complexity
preserving Hamiltonian T-space of complexity k such that
Φ(M) = Δ.
Before proving Proposition <ref>, we recall the following result,
stated below without proof (see <cit.>).
Let be a compact symplectic toric manifold. If Φ(M) is
a reflexive Delzant polytope, then is normalized monotone.
Let be a normalized monotone complexity
preserving T-space. By Proposition <ref>,
Φ(M) is a Delzant polytope in ^*. It remains to show
that Φ(M) is reflexive. Since is complexity preserving, by part
<ref> of Corollary <ref>, the weights of Φ(M)
at a vertex v are equal to the non-zero weights of any p ∈Φ^-1(v). Since is normalized monotone, Φ satisfies the weight sum
formula (<ref>). Hence, Φ(M) also satisfies the
weight sum formula (<ref>) and so, by Proposition <ref>, it is reflexive.
Conversely, let Δ be a reflexive Delzant polytope in ^* and
let k ≥ 0 be an integer. We adapt the second half of the proof of Proposition <ref> (and fix the notation therein), to show that we can make appropriate choices so that the
resulting complexity preserving T-space of complexity k is normalized monotone. Since
Δ is reflexive, by Proposition <ref>, (M',
ω') is normalized monotone. Choose M” =
P^k and ω” to be a monotone symplectic form on P^k such that c_1( P^k) = [ω”]. The complexity
preserving T-space (M,ω,Φ) constructed in the second
half of the proof of Proposition <ref> has
complexity k and is normalized monotone, as desired.
We finish with the following simple, useful result.
Let be a compact monotone complexity
preserving T-space. If Φ(M) is reflexive Delzant,
then is normalized monotone.
If dim M = 0, there is nothing to prove, so we may assume dim M
> 0. Let k be its complexity. Since (M,ω) is
monotone, by the proof of Proposition <ref>, there exists a
unique constant w ∈^*
such that
c^T_1 + w= λ [ω - Φ].
We fix a
vertex v ∈Φ(M) and a fixed point p ∈Φ^-1(v). Evaluating both sides of the above
displayed equality at p,
∑_i=1^n-kα_i + w = - λ v,
where α_1,…, α_n-k are the non-zero
weights of p. Since is complexity preserving, the non-zero
isotropy weights of p in M are precisely the
weights of v in Φ(M). Hence, by Proposition <ref>,
(<ref>) gives that -v +w = - λ v. Since this equality
holds for any vertex v ∈Φ(M), we have w = 0 and
λ = 1, as desired.
§ COMPLETE INVARIANTS OF COMPACT MONOTONE TALL
COMPLEXITY ONE T-SPACES
§.§ The genus and a minimal facet
In this section we explore the first consequences of the combination
of tallness and monotonicity of compact complexity one T-spaces,
recovering and extending some of the results in <cit.>. To this
end, let be a monotone tall complexity one T-space of
dimension 2n and let v ∈Φ(M) be a vertex. Let N = L_1⊕…⊕
L_n-1 be the normal bundle to Σ:=Φ^-1(v) in M together with its
T-equivariant splitting into T-invariant complex line bundles as in
(<ref>).
Let be a compact monotone tall complexity one T-space of
dimension 2n and
let v ∈Φ(M) be a vertex that attains the minimum of
DH. If c_1(Σ), c_1(L_i) denote the first Chern
class of Σ and of the complex line bundle L_i for any
i=1,…, n-1 respectively, then
c_1(Σ)[Σ] > - ∑_i=1^n-1c_1(L_i)[Σ].
Moreover, the genus of in the sense of Definition
<ref> equals zero.
Since (M,ω) is monotone, by Proposition <ref>, it is positive monotone. Hence, since Σ is
a symplectic submanifold of (M,ω),
0 < c_1[Σ] = c_1(Σ)[Σ] + c_1(N)[Σ] =
c_1(Σ)[Σ] + ∑_i=1^n-1c_1(L_i)[Σ],
whence (<ref>) holds. By Lemma <ref>, the right hand side of (<ref>) is
non-negative, which implies that Σ is diffeomorphic to a
sphere, as desired.
Lemma <ref> is a stepping stone towards the following
important result.
Let be a compact monotone tall complexity one
T-space of dimension 2n. There exists a facet ℱ of Φ(M) such
that DH(M_ℱ,ω_ℱ,Φ_ℱ) is constant and equal to the minimum
of DH, where
(M_ℱ,ω_ℱ,Φ_ℱ) is
defined as in (<ref>). Moreover, for any vertex v ∈ℱ, there exist n-2
non-zero isotropy weights α_1,…,
α_n-2 of Φ^-1(v) such that
* ℱ⊂ v + _≥ 0⟨α_1, …,
α_n-2⟩, and
* the self-intersection of Φ^-1(v) in Φ^-1(e_j)
equals zero for all j=1,…, n-2, where e_j ⊂ℱ is the edge incident to v with tangent vector given by α_j.
By Proposition
<ref>, is complexity
preserving. Hence, by Corollary <ref>,
for any face ℱ̃ of Φ(M)
DH(M_ℱ̃,ω_ℱ̃,Φ_ℱ̃)
= DH |_ℱ̃.
Therefore, to prove the first statement, it suffices to prove that there exists a facet
ℱ of Φ(M) such that DH |_ℱ is
constant and equal to the minimum of DH – see Figure <ref>.
By Corollary <ref>, the minimum of DH is
attained at a vertex of Φ(M), say v_0. Let α_1,…,
α_n-1 be the non-zero isotropy weights of the fixed surface Σ:=Φ^-1(v_0). By
Lemma <ref>,
DH( v_0 + ∑_i=1^n-1t_i α_i) =
∫_Σω - ∑_i=1^n-1t_i
c_1(L_i)[Σ]
for all t_1,…, t_n-1≥ 0 sufficiently small. By Lemma
<ref>, 2 > - ∑_i=1^n-1c_1(L_i)[Σ];
moreover, by Lemma <ref>, c_1(L_i)[Σ]≤
0 for all i=1,…, n-1. Thus
at least n-2 of c_1(L_1)[Σ], …,
c_1(L_n-1)[Σ] vanish; there is no loss of generality in
assuming that c_1(L_i)[Σ]=0 for all i=1,…, n-2. Hence,
DH( v_0 + ∑_i=1^n-1t_i α_i) =
∫_Σω - t_n-1
c_1(L_n-1)[Σ]
for all t_1,…, t_n-1≥ 0 sufficiently small. In particular, the restriction of DH to a sufficiently small
neighborhood of v_0 in
(v_0 + _≥ 0⟨α_1, …, α_n-2⟩)
∩Φ(M)
is constant and equal to DH(v_0), which, by assumption, is the
minimum of DH. Let ℱ be the facet of Φ(M)
that is contained in v_0 + _≥ 0⟨α_1, …,
α_n-2⟩. Since DH is concave (see Proposition
<ref>), since DH attains its minimum at v_0,
and since v_0 ∈ℱ, DH|_ℱ
is constant and equal to the minimum of DH. Moreover, since
c_1(L_i)[Σ] = 0, the
self-intersection of Σ in Φ^-1(e_i) is zero for each
i= 1,…, n-2.
It remains to show that the bullet points in the statement hold for
all vertices of ℱ. However, since DH|_ℱ
is constant and equal to the minimum of DH, it follows that
any vertex of ℱ is a minimum of DH. Hence, the
above argument gives the desired result.
A facet as in Proposition
<ref> plays an important role throughout Section <ref>.
Let be a compact monotone tall complexity one
T-space. A facet of Φ(M) satisfying the conclusions of
Proposition <ref> is called a minimal
facet of Φ(M) and denoted by ℱ_min. Given a
minimal facet ℱ_min, the sheet
corresponding to ℱ_min is denoted by
(M_min,ω_min,Φ_min).
We observe that, in spite of the notation,
(M_min,ω_min,Φ_min) clearly depends on
ℱ_min. However, we trust that the notation does not
cause confusion.
Let be a compact monotone tall complexity one
T-space. If ℱ_min is a minimal facet of
Φ(M), then M_min contains no isolated fixed point of
.
By Remark <ref> and
Proposition <ref>,
(M_min,ω_min,Φ_min) is a
compact tall complexity one
T/H_ℱ_min-space. By Corollary
<ref> and Proposition
<ref>,
DH(M_min,ω_min,Φ_min) is
constant. Hence, by Lemma <ref>, there are no
singular values in the (relative) interior of Φ_min(M_min). Thus, by
Lemma <ref>,
(M_min,ω_min,Φ_min) has
no isolated fixed points for the
T/H_ℱ_min-action and, hence, for the T-action.
To conclude this section, we prove that certain self-intersections of
the pre-image of vertices in a minimal facet are independent of the
vertices. To this end, we say that an edge e of a polytope
Δ comes out of a facet ℱ if it is not
contained in ℱ but it is incident to
a vertex of Δ contained in ℱ.
Let be a compact monotone tall complexity one
T-space and let ℱ_min be a minimal facet of
Φ(M). There exists s ∈ℤ such that, for any vertex v ∈ℱ_min, the self-intersection of Φ^-1(v) in
Φ^-1(e) equals s, where e is the edge of Φ(M) that
comes out of ℱ_min and is incident to v.
Let v_1,v_2 ∈ℱ_min be vertices and let e_1,e_2
be the edges that come out of ℱ_min that are incident
to v_1 and to v_2 respectively. By Proposition
<ref>, DH(v_1) = DH (v_2). Let
Σ_i:= Φ^-1(v_i) for i=1,2. Thus, by Lemma
<ref>, [ω](Σ_1) =
[ω](Σ_2). Since (M,ω) is monotone, it is positive
monotone by Proposition <ref>. Hence, c_1(Σ_1)
= c_1(Σ_2). Moreover, by Proposition <ref>,
the only self-intersection of Σ_i that can possibly be
different from zero is that in Φ^-1(e_i), for i=1,2. Hence,
since Σ_1 ≃Σ_2, the result follows.
§.§ A characterization of isolated fixed points
Henceforth, we assume that is a normalized monotone tall complexity one T-space unless otherwise stated. Moreover, we fix a
minimal facet ℱ_min of Φ(M) (which exists by
Proposition <ref>). By Proposition
<ref>, Φ(M) is a reflexive Delzant
polytope. Hence, by Definition <ref>, there exists a unique
primitive ν_min∈ℓ such that
ℱ_min ⊂{w ∈𝔱^* |⟨ w,ν_min⟩ = -1},
Φ(M) ⊂{w ∈𝔱^* |⟨ w,ν_min⟩≥ -1 }.
Finally, if v ∈ℱ_min is any vertex and α_1,…,α_n-1
are the non-zero isotropy weights of Φ^-1(v), then, by
Proposition <ref>, we may assume that the weights are
ordered so that
ℱ_min⊂ v + _≥ 0⟨α_1,…, α_n-2⟩.
In particular, by Lemma
<ref>,
⟨α_n-1, ν_min⟩ = 1.
Let be a normalized monotone tall complexity one T-space of dimension 2n. If p ∈ M^T is
isolated, then there exists an edge e of Φ(M)
that comes out of ℱ_min such that
Φ(p) is (the only element) in the
intersection of e with the linear hyperplane {w ∈𝔱^* |⟨ w,ν_min⟩ = 0 }.
Moreover, if v ∈ℱ_min is the vertex
that is incident to e and if α_1,…,α_n-1
are the non-zero isotropy weights of Φ^-1(v)
ordered so that (<ref>) holds,
then the isotropy weights of p are
α_1,…,α_n-2, α_n-1,-α_n-1.
Since is normalized monotone, by Corollary
<ref>, Φ(p) ∈ℓ^*. Hence, since ν_min∈ℓ, ⟨Φ(p), ν_min⟩∈. Moreover, by Corollary <ref>,
Φ(p) ∉ℱ_min; hence, by (<ref>), ⟨Φ(p), ν_min⟩ is a non-negative integer. Let β_1,…, β_n ∈ℓ^* be the isotropy weights of
p. By Lemma
<ref>, there exists an isotropy weight β
of p such that ⟨β, ν_min⟩ < 0. Without
loss of generality, we may assume that β_n = β. Let
(N,ω_N,Φ_N) be the sheet along β_n given by Lemma
<ref>, let
H = exp({ξ∈𝔱|⟨β_n,ξ⟩∈ℤ})
be its stabilizer, and let q ∈ M^T ∩ N be a fixed
point satisfying the conclusions of Corollary
<ref>, i.e.,
* Φ(q) = Φ_N(q) is a global extremum of
Φ_N,
* -β_n is an isotropy weight of q, and
⟨Φ(q), ν_min⟩ <
⟨Φ(p),ν_min⟩.
We split the proof in two cases: first, we assume that ⟨Φ(p), ν_min⟩ is minimal among isolated fixed points and, second, we
deduce the general case from this special one.
Case 1: We suppose that ⟨Φ(p), ν_min⟩ is minimal among isolated fixed points. By
(<ref>), the fixed point q is not isolated. Hence, it lies
on a fixed surface. By Proposition <ref>, Φ(q) is a vertex of Φ(M). Let
α_1,…, α_n-1 be the non-zero isotropy weights of
Φ^-1(Φ(q)) ordered so that α_n-1 = - β_n. By
Proposition <ref>, α_1,…,
α_n-1 are a basis of ℓ^*. Hence, β_n is a
primitive element in ℓ^*. Moreover, since H must be one of the stabilizers of dimension n-2 for points sufficiently
close to q in M, it follows that dim N = 4 and that Φ(p)
+ ⟨β_n⟩ contains an edge of
Φ(M). Hence, since is tall, Φ(p)
lies in the (relative) interior of this edge.
Hence, by Corollary <ref>, there exists precisely one
i=1,…, n-1 such that β_i is a multiple of β_n;
without loss of generality, we assume that i=n-1. By Remark <ref>, the
ℤ-span of β_1,…, β_n-2,β_n equals
ℓ^*. Hence, by a dimension count, β_1,…,
β_n-2,β_n are linearly independent. We
claim that ⟨β_j, ν_min⟩≥ 0
for all j =
1,…, n-2. Suppose, on the contrary, that there exists an index
j=1,…, n-2 such that ⟨β_j, ν_min⟩ <
0. By Corollary <ref> applied to β_j and
by the above argument, Φ(p)
must lie in the (relative) interior of an edge of Φ(M) that is
contained in Φ(p)
+ ⟨β_j⟩. (Observe that this uses the fact that ⟨Φ(p), ν_min⟩ is minimal among all isolated fixed points.) Since any point in a convex polytope
is contained in the (relative) interior of at most one edge, it
follows that ⟨β_j⟩ = ⟨β_n⟩, which contradicts the linear independence of β_1,…,
β_n-2,β_n.
Since β_n-1 is a multiple of β_n, since β_n
is primitive, and since Φ(p) lies in the (relative) interior of
an edge of Φ(M), there exists a positive integer λ
such that β_n-1 = -λβ_n. Since is normalized, the weight
sum formula at p
Φ(p) = -∑_j=1^nβ_j
implies that
0 ≤⟨Φ(p), ν_min⟩ = - ∑_j=1^n-2⟨β_j, ν_min⟩ + (λ - 1)⟨β_n, ν_min⟩≤ 0,
where the last inequality holds because ⟨β_j, ν_min⟩≥ 0 for all j=1,…, n-2, λ - 1 ≥ 0 and ⟨β_n, ν_min⟩ < 0.
Therefore ⟨Φ(p), ν_min⟩= 0, λ = 1 and ⟨β_j,
ν_min⟩ = 0 for all j=1,…, n-2. Since ⟨Φ(q),
ν_min⟩ < ⟨Φ(p), ν_min⟩ by (<ref>), since
⟨Φ(q), ν_min⟩∈ by Corollary <ref>, and
by (<ref>), we have that ⟨Φ(q),
ν_min⟩ = -1, i.e., v:=Φ(q) is a vertex of
ℱ_min. Moreover, since α_n-1 = -β_n and
since ⟨β_n, ν_min⟩ < 0, the
edge incident to v contained in Φ(p) + ⟨β_n
⟩ comes out of ℱ_min. Since β_n-1 = -λβ_n, then
β_n-1 = α_n-1.
We observe that the set (!)
{β_1,…,β_n-2} is precisely the multiset of
isotropy weights for the T-action on the normal bundle to the
T-invariant submanifold N at the point p. Hence, this set
equals {α_1,…,α_n-2} modulo ⟨β_n
⟩ = ⟨α_n-1⟩, since the latter is the
multiset of isotropy weights for the T-action on the normal bundle
to the
T-invariant submanifold N at another point.
The affine hyperplane v + ⟨α_1,…,α_n-2⟩ contains
ℱ_min by (<ref>). Hence, ⟨α_j,
ν_min⟩ = 0 for all j =1,…, n-2. Since ⟨α_n-1,
ν_min⟩ > 0 and ⟨β_j,
ν_min⟩ = 0 for all j=1,…, n-2,
then {β_1,…,β_n-2} =
{α_1,…,α_n-2}. This completes the proof of the
result under the hypothesis that ⟨Φ(p), ν_min⟩ is minimal among all isolated fixed points.
Case 2: To conclude the proof, it suffices to show that, if p ∈ M^T is isolated,
then ⟨Φ(p), ν_min⟩ is minimal. Suppose not; then, by the above argument and
since ⟨Φ(p), ν_min⟩∈ℤ_≥ 0, there exists p' ∈ M^T such
that ⟨Φ(p'), ν_min⟩ >0 and minimal among fixed points with positive pairing. Let β_1',…, β_n' ∈ℓ^* be the
isotropy weights of p'. By Lemma
<ref>, we may assume that ⟨β_n',
ν_min⟩ < 0. We consider the sheet
(N',ω_N',Φ_N') along β_n' given by Lemma
<ref> and the point q' ∈ M^T ∩ N'
satisfying the conclusions of Corollary
<ref>. In analogy with (<ref>), ⟨Φ(q'), ν_min⟩ <
⟨Φ(p'), ν_min⟩. Hence, by minimality, q' is either
isolated and satisfies ⟨Φ(q'), ν_min⟩ = 0, or
is not isolated. In either
case, by the arguments used in the special case above, β'_n ∈ℓ^* is primitive, dim N' = 4 and Φ(p') lies
in the (relative) interior of an edge of Φ(M). In particular, there is precisely one other
isotropy weight of p' that is collinear with β'_n, say
β'_n-1. Moreover, as above, ⟨β'_j, ν_min⟩≥ 0 for all j =
1,…, n-2. Set β'_n-1 = - λ'
β'_n for some positive integer λ'; since
β'_n is primitive, λ'≥ 1. We use again the weight sum formula (<ref>)
and, in analogy with (<ref>), we obtain the following absurd string
of inequalities
0 < ⟨Φ(p'), ν_min⟩ = - ∑_j=1^n-2⟨β'_j, ν_min⟩ + (λ' - 1)⟨β'_n, ν_min⟩≤ 0,
where the last inequality holds because ⟨β'_j, ν_min⟩≥ 0 for all j=1,…, n-2, λ' - 1 ≥ 0 and ⟨β'_n, ν_min⟩ < 0.
§.§ The space of exceptional orbits and the
painting
Motivated by Proposition <ref>, our first
aim is to prove properties of an isolated fixed point
with isotropy weights α_1,…,
α_n-2,α_n-1,-α_n-1 in a complexity one
T-space (see Lemma <ref> below). By Remark <ref>, α_1,…, α_n-1 form a basis of
ℓ^*; therefore, by a dimension count, α_j is primitive for all
j=1,…, n-1. We define the following subgroups of T:
H:=exp( {ξ∈𝔱|⟨α_i , ξ⟩ = 0 for all i=1,…, n-2}),
which is of dimension 1, and
T':=exp( {ξ∈𝔱|⟨α_n-1 , ξ⟩ = 0}),
which is of codimension 1.
Observe that T ≃ T' × H; moreover, we use the given inner
product to identify the duals 𝔥^*, (')^* of the Lie
algebras of H and T' with ⟨α_n-1⟩ and ⟨α_1,…, α_n-2⟩ respectively.
We start by looking at the local model determined by the above
isotropy weights (see Section <ref>). We consider the
following T-action on ^n
exp(ξ)· (z_1,…,z_n-2,z_n-1,z_n)=
(e^ 2π i ⟨α_1 , ξ⟩z_1,…,
e^ 2π i ⟨α_n-2 , ξ⟩z_n-2,e^ 2π i ⟨α_n-1 , ξ⟩ z_n-1,
e^ 2π i ⟨ -α_n-1 , ξ⟩ z_n ) for
ξ∈𝔱,
with moment map Φ_0: ^n →^* given by
Φ_0(z_1,…, z_n) = π(∑_j=1^n-2α_j |z_j|^2 + α_n-1(|z_n-1|^2-|z_n|^2) ).
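For concreteness (the specific weights below are chosen only for illustration), suppose that n = 3 and take α_1 = (1,0) and α_2 = (0,1), so that α_n-1 = α_2 and ℓ^* ≃ℤ^2. Writing ξ = (ξ_1, ξ_2), the above action becomes
exp(ξ)· (z_1,z_2,z_3) = (e^2π i ξ_1 z_1, e^2π i ξ_2 z_2, e^-2π i ξ_2 z_3),
with moment map
Φ_0(z_1,z_2,z_3) = π(|z_1|^2, |z_2|^2 - |z_3|^2).
In this case H = {1}× S^1 acts trivially on ⟨ z_1 ⟩ and with weights ± 1 on ⟨ z_2, z_3 ⟩, while T' = S^1 ×{1} acts trivially on ⟨ z_2, z_3 ⟩ and with weight 1 on ⟨ z_1 ⟩.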
From (<ref>) it is clear that 0 ∈^n is a fixed
point, that the circle H acts trivially on
^n-2 =
⟨ z_1,…, z_n-2⟩, and that the (n-2)-dimensional
torus T' acts trivially on ^2=⟨ z_n-1, z_n⟩. Therefore
the linear T-action on
^n of (<ref>) splits as the product of a toric T'-action on ^n-2 =
⟨ z_1,…, z_n-2⟩, and a complexity one H-action on
^2 = ⟨ z_n-1, z_n⟩. Moreover,
*
the stabilizer in T of a point q := (z_1,…,z_n) ∈^n is the
product of the stabilizer in T' of q_1:= (z_1,…, z_n-2) ∈^n-2
and of the stabilizer in H of q_2:=(z_n-1,z_n) ∈^2,
* the symplectic slice representation of
q ∈^n for the action of T splits as the product of the
symplectic slice representations of q_1∈^n-2 for the action of T'
and of q_2 ∈^2 for the action of H, and
* a point q∈^n is exceptional with
respect to the action of T if and only if at least
one of q_1 ∈^n-2 and q_2∈^2 is exceptional with respect to the corresponding actions of T' and H.
We observe that property <ref> follows from properties
<ref> and <ref>. Hence, in order to understand properties of the product, we consider
each factor separately. This is the content of the following two
results.
Consider ^n-2 with the above linear toric T' action. Then
every
point in ^n-2 is regular for the action of T'. Moreover, for each q_1= (z_1,…, z_n-2) ∈^n-2,
the subset J:={ j ∈{1,…,n-2}| z_j≠ 0} is the unique subset
such that
* the moment map image π∑_j=1^n-2α_j |z_j|^2 ∈ (')^* lies in _> 0⟨{α_j
| j ∈ J }⟩ (if J = ∅, then q_1 = 0 and
the moment map image equals zero),
* the stabilizer of q_1 is
K_J:= exp( {ξ∈'|⟨α_j , ξ⟩ = 0 for all j∈ J}),
and
* the isotropy weights of q_1 are {α_j
| j ∉ J}⊂ (')^*, where we identify Lie(K_J)^* with ⟨{α_j| j∉ J ∪{n-1}}⟩⊆ (')^*⊂^*.
Conversely, given any subset J ⊆{1,…,n-2} and any w ∈_> 0⟨{α_j
| j ∈ J }⟩, there exists q_1 = (z_1,…, z_n-2) ∈^n-2 such
that π∑_j=1^n-2α_j |z_j|^2 =w, the stabilizer of q_1 is K_J, and the isotropy weights of q_1 are {α_j
| j ∉ J}.
Finally, the subset of points with trivial stabilizer
is path-connected and dense.
By Lemma <ref>, every point in a complexity zero Hamiltonian space is regular.
The linear toric T'-action on ^n-2 is given explicitly
by
exp(ξ')· (z_1,…,z_n-2)=(e^ 2π i ⟨α_1 , ξ'
⟩z_1,…,
e^ 2π i ⟨α_n-2 , ξ' ⟩z_n-2) for ξ'∈' ,
(cf. (<ref>)). By definition of J, the moment map
image π∑_j=1^n-2α_j |z_j|^2 lies in _> 0⟨{α_j
| j ∈ J }⟩. Since α_1,…,α_n-2 are linearly independent, J is the only such subset of {1,…,n-2}.
This proves the first bullet point.
Next we prove the second bullet point. Let K be the stabilizer
of q_1. By
(<ref>), if ξ' ∈' then exp(ξ') ∈ K if and only if ⟨α_j,ξ'⟩∈ℤ for all j ∈ J.
However, since each α_j is primitive,
K=exp( {ξ'∈'|⟨α_j, ξ'⟩∈ℤ for all j∈ J})=exp( {ξ'∈'|⟨α_j, ξ'⟩ =0 for all j∈ J}),
which, by definition, is exactly K_J.
We turn to the proof of the third bullet point. The symplectic slice
representation of q_1 is the
following representation of K_J: We set
^J: ={(w_1,…, w_n-2) ∈^n-2| w_j = 0
for all j ∈ J}.
This is a T'-invariant complex subspace of ^n-2 that can be
identified symplectically with the symplectic normal to the
T'-orbit of q_1, once the tangent space at q_1 is
identified with ^n-2. Under this identification, since the
T'-action on ^n-2 is linear, the K_J-action on ^J is
given by the restriction of the T'-action to K_J. The
isotropy weights of this K_J-action are given by the set
{α_j | j ∉ J}⊂ (')^*.
Conversely, given a subset J ⊆{1,…,n-2} and w ∈_> 0⟨{α_j
| j ∈ J }⟩, there exist positive constants
λ_j for j ∈ J such that w = ∑_j ∈ Jλ_j α_j. The point q_1 = (z_1,…, z_n-2) ∈^n-2 with coordinates given by
z_j =
√(λ_j/π) if j ∈ J
0 if j ∉ J
is such that π∑_j=1^n-2α_j |z_j|^2 =w, its stabilizer is K_J, and its isotropy weights are {α_j
| j ∉ J}.
Finally, q_1 = (z_1,…, z_n-2) ∈^n-2 has trivial stabilizer
if and only if z_j ≠ 0 for all j=1,…, n-2. The subset
{(z_1,…, z_n-2) ∈^n-2| z_j ≠ 0 for
all j =1,…, n-2 }
is clearly path-connected and dense.
Consider ^2 with the above linear complexity one H-action. A point q_2 ∈^2 is exceptional
if and only if q_2 = (0,0). Moreover, q_2 = (0,0) is the only point
stabilized by H. In this case, the isotropy weights of q_2 are
{± α_n-1}.
We fix an isomorphism between H and S^1 so that
α_n-1 corresponds to +1. Under this isomorphism, the above linear H-action on
^2 can be identified with the linear S^1-action on ^2
with weights equal to +1 and -1. The result then follows from a
simple computation
and from Lemma <ref>.
Theorem <ref> and Lemmas <ref>,
<ref> imply the following result that is central to this section.
Let be a complexity one T-space of dimension 2n and let p ∈ M^T
be isolated with isotropy weights
α_1,…,α_n-2,α_n-1,-α_n-1. There
exists an open neighborhood U of p such that the following are
equivalent:
* q ∈ U is exceptional, and
* Φ(q) ∈Φ(p) + _≥ 0⟨{α_j | j
= 1,…, n-2}⟩ and the stabilizer of q contains H.
Moreover, given q ∈ U exceptional, if J ⊆{1,…, n-2} is defined by
Φ(q) ∈Φ(p) + _> 0⟨{α_j | j ∈ J }⟩,
then
* the stabilizer of q is K_J × H, where
K_J is defined in (<ref>), and
* the isotropy weights of q are
{α_j | j ∉ J}∪{± α_n-1},
where we identify
Lie(K_J)^*⊆ (')^* ⊂^* with ⟨{α_j
| j∉ J ∪{n-1}}⟩.
Conversely, given any subset J ⊆{1,…, n-2} and any
w ∈Φ(p) + _>0⟨{α_j | j ∈ J }⟩,
there exists an exceptional point q ∈ U such that Φ(q) = w,
the stabilizer of q is K_J × H, and the isotropy weights of
q are {α_j | j ∉ J}∪{± α_n-1}.
Finally, the subset
{q ∈ U | q is exceptional and has
stabilizer H }
is path-connected and dense in {q ∈ U | q is exceptional}.
By Theorem <ref>, it suffices to consider the
local model determined by the isotropy weights, p = 0 ∈^n and Φ(p) = 0. By property <ref>, and Lemmas <ref> and
<ref>, a point q = (q_1,q_2) ∈^n is exceptional if and only if q_2 = (0,0), which is also equivalent to the
stabilizer of q_2 being H. Suppose that q = (q_1,q_2) is exceptional and let J ⊆{1,…,
n-2} be the subset given by Lemma <ref>. Since q_2 =
(0,0), by (<ref>), Φ(q) lies in Φ(p) +
_> 0⟨{α_j | j ∈ J }⟩ if and only if π∑_j=1^n-2α_j |z_j|^2 ∈ (')^* lies in _> 0⟨{α_j
| j ∈ J }⟩.
Hence, J is the unique subset of {1,…,
n-2} such that (<ref>)
holds. Properties <ref> and <ref> in the statement
follow immediately from <ref> and <ref> in the discussion preceding Lemma <ref>, and from
Lemmas <ref> and <ref>.
Conversely, let J ⊆{1,…,
n-2} be a subset and w ∈_> 0⟨{α_j | j ∈ J }⟩. Let q_1 ∈^n-2 be the point given by Lemma
<ref>. By property <ref> in the discussion preceding Lemma <ref> and Lemma
<ref>, the point q = (q_1,0,0) ∈^n is exceptional. Moreover, by
(<ref>),
Φ(q) = w. By Lemmas <ref> and <ref>, and by
properties <ref> and <ref>, the stabilizer of q is
K_J × H and the isotropy weights of q
are {α_j | j ∉ J}∪{± α_n-1}, as desired.
Finally, by Lemmas <ref> and <ref>,
{q =(q_1,q_2) ∈^n-2×^2 | q is
exceptional and has
stabilizer H }
equals
{q = (q_1,0,0) ∈^n-2×^2 | q_1 has
trivial stabilizer}.
By Lemma <ref>, {q_1 ∈^n-2| q_1 has
trivial stabilizer} is path-connected and dense in
^n-2, thus completing the proof.
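In the three-dimensional local model considered above (n = 3, α_1 = (1,0), α_2 = (0,1)), the lemma can be checked by hand: the exceptional points near p are exactly those of the form q = (z_1, 0, 0). For z_1 ≠ 0 one has J = {1}, the stabilizer of q is H (since K_J is trivial), the isotropy weights of q are ±α_2, and Φ(q) = π |z_1|^2 α_1 sweeps out the open ray Φ(p) + _>0⟨α_1 ⟩; for z_1 = 0 one recovers the fixed point p itself, with stabilizer T and isotropy weights α_1, α_2, -α_2.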
The subset J associated to an exceptional point near the isolated
fixed point of Lemma <ref> has the
following useful property.
Let be a complexity one T-space of dimension 2n and let p ∈ M^T
be isolated with isotropy weights
α_1,…,α_n-2,α_n-1,-α_n-1. Let U be the open neighborhood of p given by
Lemma <ref>. Given exceptional points q, q'
∈ U, let J, J' ⊆{1,…, n-2} be the subsets corresponding to q, q' as in
Lemma
<ref>. The symplectic slice
representations of q and q' are isomorphic if and only if J = J'.
If J = J', then by parts <ref> and <ref>
of Lemma <ref>, the points q and
q' have equal stabilizers and the same isotropy weights. Since
their common stabilizer is connected, it follows that they have
isomorphic symplectic slice representations. Conversely, suppose that q
and q' have isomorphic symplectic slice representations. Hence, by
Lemma <ref>, they have connected
stabilizers, so that K_J = K_J'.
Since the dual of the Lie algebra of K_J can be identified with
⟨{α_j | j∉ J ∪{n-1}}⟩, and since α_1,…, α_n-2 are linearly
independent, it follows that J = J'.
Throughout this section, we apply Lemma
<ref> and Corollary
<ref> to an isolated fixed point in a
normalized monotone tall complexity one T-space. In this case, H =
H_ℱ_min, the stabilizer of
(M_ℱ_min,ω_ℱ_min,Φ_ℱ_min),
see Definition <ref>. Intuitively speaking, the next result is the `global version' of Lemma
<ref> for normalized monotone tall
complexity one T-spaces.
Let be a normalized monotone
tall complexity one T-space of dimension 2n. Let q ∈ M
be exceptional. There exist p ∈ M^T isolated
and a unique subset J ⊆{1,…, n-2} such that, if
α_1,…,α_n-2,α_n-1,-α_n-1 are the
isotropy weights of p as in Proposition
<ref>, then
* the moment map image Φ(q) lies in Φ(p)
+ _>0⟨{α_j | j ∈ J}⟩,
* the stabilizer of q is K_J ×
H_ℱ_min, where K_J ≤ T' is as in
(<ref>), and
*
the isotropy weights of q are {α_j | j ∉ J}∪{± α_n-1} (see part <ref> of Lemma <ref>).
By Lemma
<ref>, the sheet (N,ω_N,Φ_N) through q is exceptional. Since
N is compact, it contains a fixed point p ∈ M^T that is exceptional and therefore isolated by
Lemma <ref>. Since N is connected,
by the principal orbit
theorem (see <cit.>),
there exists a relatively open, dense and connected
subset N' of N such that, if q' ∈
N', then q and q' have isomorphic symplectic
slice representations. In particular, if U is the open neighborhood of
p given by Lemma <ref>, then U ∩ N' is not empty; moreover, for all q' ∈ U ∩ N', the
symplectic slice representation of q' is isomorphic to that of q. By
Lemma <ref> and
Corollary <ref>, there exists a unique
subset J ⊆{1,…, n-2} such that, for all q' ∈ U
∩ N',
* the moment map image Φ(q') lies in Φ(p)
+ _>0⟨{α_j | j ∈ J}⟩,
* the stabilizer of q' is K_J ×
H_ℱ_min, where K_J ≤ T' is as in
(<ref>), and
* the isotropy weights of q' are {α_j | j ∉ J}∪{± α_n-1}.
Since U ∩ N ≠∅, the second and third
bullet points imply properties
<ref> and <ref>. To see that
property <ref> holds, we observe that, by the first bullet point, Φ(U ∩ N') is contained in Φ(p)
+ _>0⟨{α_j | j ∈ J}⟩. Since
N_reg is dense in N, we have that Φ(U ∩ N)
is contained in Φ(p)
+ _≥ 0⟨{α_j | j ∈ J}⟩. On the
other hand, since the stabilizer of q is K_J ×
H_ℱ_min, the sheet
(N,ω_N,Φ_N) is a compact Hamiltonian T”-space, where
T” = T/(K_J ×
H_ℱ_min) ≃ T' /K_J. By construction, we may identify the dual of the Lie
algebra of T” with Φ(p) + ⟨{α_j | j ∈ J}⟩. Hence, by the Convexity Package (Theorem
<ref>), the moment map image Φ_N(N) = Φ(N) is a
convex polytope in Φ(p)
+ ⟨{α_j | j ∈ J}⟩. Since Φ(p)
+ _≥ 0⟨{α_j | j ∈ J}⟩ is convex
in Φ(p)
+ ⟨{α_j | j ∈ J}⟩ and since Φ(U ∩ N)
is contained in Φ(p)
+ _≥ 0⟨{α_j | j ∈ J}⟩, Φ(N) is contained in Φ(p)
+ _≥ 0⟨{α_j | j ∈ J}⟩. In
particular, the interior of Φ(N) is contained in Φ(p)
+ _>0⟨{α_j | j ∈ J}⟩. Since q is
a regular point for the moment map Φ_N, the moment map image
Φ_N(q) = Φ(q) lies in the (relative) interior of Φ(N),
as desired.
By Lemma <ref>, if q is exceptional,
then the moment map Φ(q) can be used to reconstruct the symplectic slice
representation of q. To see this, we observe that, by property
<ref> and Proposition <ref>, Φ(q) lies in the affine hyperplane Φ(p)+{w ∈^*
|⟨ w, ν_min⟩ = 0}.
We recall that a basis
for the linear subspace {w ∈^*
|⟨ w, ν_min⟩ = 0} is given by α_1,…, α_n-2
(cf. (<ref>)). Hence, by Lemma
<ref>, Φ(q) determines the subset J
uniquely, and J determines the stabilizer and the isotropy weights of
q. Since the stabilizer of q is connected, the claim follows.
Our next aim is to prove Proposition
<ref>, which plays an
important role in several key results below (e.g., Theorems
<ref> and <ref>). We start
with the following result.
Let be a normalized monotone tall complexity one
T-space. The following are equivalent:
* there exists an isolated fixed point, and
* for each edge e that comes out of ℱ_min,
there exists an isolated fixed point p ∈ M^T such that
Φ(p) ∈ e.
Clearly <ref> implies <ref>. Conversely, suppose
that there exists an isolated fixed point p ∈ M^T. If dim M =
4, there is nothing to prove, so we may assume that dim M ≥
6. By
Proposition <ref>, there exists an edge
e that comes out of ℱ_min such that Φ(p) ∈
e. Let v ∈ℱ_min be the vertex to which e is
incident and let α_1,…, α_n-2,α_n-1 be the
non-zero isotropy weights of Φ^-1(v) ordered so that
(<ref>) holds.
For j=1,…, n-2, let v_j ∈ℱ_min be the vertex that lies on the edge supported on
v + _≥ 0⟨α_j ⟩ and is not v. Let e_j
be the edge that comes out of ℱ_min that is incident
to v_j. We claim that there exists an isolated fixed point p_j
such that Φ(p_j) ∈ e_j (see Figure <ref>). To this end, by Lemma
<ref>, there exists an
exceptional point q ∈ M
arbitrarily close to p such that Φ(q) ∈Φ(p) + _>0⟨α_j ⟩; moreover, the stabilizer of q has codimension
1. Let (N,ω_N,Φ_N) be the sheet through q. Since q is
exceptional, so is (N,ω_N,Φ_N); furthermore, p ∈ N by construction. Hence,
(N,ω_N,Φ_N) is a compact symplectic toric manifold with
moment map image contained in Φ(p) + _≥ 0⟨α_j ⟩. Let p_j ∈ M^T ∩ N be the unique fixed
point such that Φ(p_j) ∈Φ(p) + _>0⟨α_j
⟩. Since (N,ω_N,Φ_N) is exceptional, so is
p_j; moreover, by Lemma <ref>, p_j
is isolated. Hence, by Proposition
<ref>, the image Φ(p_j) lies on an
edge that comes out of ℱ_min. This edge is
necessarily e_j: To see this, we observe that the moment map image
Φ(N) is contained in the affine two-dimensional plane v + ⟨α_j,α_n-1⟩. This plane supports a
two-dimensional face of Φ(M) that contains e and the edge
of ℱ_min that is incident to both v and
v_j. Hence, there exists only one other edge that is incident to
v_j that is contained in this affine plane. Since this plane
intersects ℱ_min precisely in the edge that is
incident to both v and v_j, by (<ref>) applied to the
non-zero isotropy weights of Φ^-1(v_j), the other edge that
is incident to v_j and contained in the above affine plane must
come out of ℱ_min, i.e., it must be e_j.
By the last paragraph, <ref> holds for each edge that comes out of
ℱ_min that is incident to a vertex of
ℱ_min that is adjacent to v in ℱ_min
(i.e., there exists an edge of ℱ_min that is incident
to both vertices). We define the following relation on the set of
vertices of ℱ_min:
v_1 ∼ v_2 ⇔ either v_1=v_2 or
v_1 is adjacent to v_2.
Since the transitive closure of the above
relation has one equivalence class and since there is a one-to-one
correspondence between edges that come out of ℱ_min
and vertices of ℱ_min, <ref> holds.
As a consequence of Lemma <ref>,
we obtain the following sufficient
condition for a normalized monotone tall complexity one
T-space to be without isolated fixed points.
Let be a normalized monotone tall complexity one T-space. If
there is a vertex of Φ(M) on the linear hyperplane {w
∈^* |⟨ w, ν_min⟩ =
0}, then there are no isolated fixed points. Moreover,
M_exc = ∅.
Let v ∈Φ(M) be a vertex of Φ(M) such that ⟨ v, ν_min⟩ =
0. First we show that there is an edge e that comes out of ℱ_min that is
incident to v. Let α_1,…, α_n-1 be
the non-zero isotropy weights of Φ^-1(v). By
Lemma <ref>, we may assume that ⟨α_n-1, ν_min⟩ <
0. Let e be the edge that is contained in v + _≥
0⟨α_n-1⟩ and let v' ∈Φ(M) be the
other vertex to which e is incident. By construction, ⟨ v', ν_min⟩ <
0. Moreover, since is normalized monotone, Φ(M) is
integral. Therefore, by (<ref>), ⟨ v',
ν_min⟩ = -1, i.e., v' is a vertex of
ℱ_min. Since ⟨α_n-1, ν_min⟩ <
0, e is an edge of Φ(M) that comes out of
ℱ_min. Hence, v is the only element in the
intersection of e and {w
∈^* |⟨ w, ν_min⟩ =
0}. By Theorem <ref> and Proposition
<ref>, there is no isolated fixed point
that is mapped to e under Φ. Hence, by Lemmas
<ref>,
<ref> and
<ref>, the result follows.
The next result plays a key role throughout the paper.
Let be a normalized monotone tall complexity one T-space of dimension 2n. If
(N,ω_N,Φ_N) is an exceptional sheet that is stabilized by a
one-dimensional subgroup H, then H = H_ℱ_min and
Φ(N) = Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}.
Since (N,ω_N,Φ_N) is exceptional, every point in N is
exceptional. Moreover, since (N,ω_N,Φ_N) is stabilized by
H, there exists a point q ∈ N with stabilizer equal to
H. Hence, by part <ref> of Lemma <ref>, there exist p ∈ M^T
isolated and a unique subset J ⊆{1,…, n-2} such
that, if α_1,…,α_n-2,α_n-1,-α_n-1 are the
isotropy weights of p as in Proposition
<ref>, then the stabilizer of q' is K_J ×
H_ℱ_min, where K_J ≤ T' is as in
(<ref>). By definition, K_J is
connected. Hence, if the dimension of the stabilizer of q is one,
then it must be H_ℱ_min, thus proving the first
statement.
By Proposition <ref>, Φ(M) is a reflexive
Delzant polytope and therefore the origin lies in the interior of
Φ(M) (see Lemma
<ref>).
Hence, the interior of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} in {w ∈^* |⟨ w, ν_min⟩ = 0} is non-empty. Since both
M and N are compact, and since {w ∈^* |⟨ w, ν_min⟩ = 0} is a linear hyperplane in ^*, by the
Convexity Package (Theorem <ref>), both Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} and Φ_N(N) = Φ(N) are convex
polytopes. Therefore, in order to prove that (<ref>) holds, it
suffices to show that Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} and Φ(N) have the same vertices.
Since M_exc≠∅, by Corollary
<ref>, there is no vertex of Φ(M) lying
on {w ∈^* |⟨ w, ν_min⟩ = 0}. Hence, a point v̂∈Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} is a vertex if and only if there exists an edge e
of Φ(M)
that comes out of ℱ_min such that v̂ is the
intersection of e with {w ∈^* |⟨ w, ν_min⟩ = 0}. On the other hand, since (N,ω_N,Φ_N) is exceptional and since the
complexity of is one, by Proposition
<ref>, the complexity of
(N,ω_N,Φ_N) is zero, i.e., it is a compact symplectic
toric manifold. Therefore, v̂∈Φ(N) is a vertex if and only
if there exists an isolated fixed point p ∈ N such that
Φ(p) = v̂.
Let v̂∈Φ(N) be a vertex and let p ∈ N be as
above. By Proposition <ref>, there
exists an edge e of Φ(M)
that comes out of ℱ_min such that v̂ is the
intersection of e with {w ∈^* |⟨ w, ν_min⟩ = 0}. Hence, each vertex of Φ(N) is a vertex of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}. Moreover, by Lemma
<ref>, there exists an open
neighborhood U of p such that U ∩ N is precisely the subset
of exceptional points in U and
Φ(U ∩ N) = Φ(U) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}.
Set V:= Φ(U). By the Convexity Package (Theorem
<ref>), V is an open neighborhood of
v̂. Moreover, by (<ref>),
V ∩Φ(N) = V ∩{w ∈^* |⟨ w, ν_min⟩ = 0},
(see Figure <ref>).
Suppose that there exists a vertex of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} that is not a vertex of Φ(N). Since both Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} and Φ(N) are convex polytopes of full dimension in {w ∈^* |⟨ w, ν_min⟩ = 0} and since the vertices of the latter are a subset of
those of the former, there exists a vertex v̂ of Φ(N)
such that for any open neighborhood V of v̂
V ∩Φ(N) ⊊ V ∩{w ∈^* |⟨ w, ν_min⟩ = 0},
(see Figure <ref>). By (<ref>), this is a contradiction.
Our penultimate aim in this section is to prove Theorem
<ref> below. To this end, first we prove the
following result.
Let be a normalized monotone tall complexity one T-space of dimension 2n. If q ∈ M is
exceptional, then there exists a unique exceptional sheet
(N,ω_N,Φ_N) that is stabilized by H_ℱ_min such that q ∈ N.
First we prove uniqueness. Let (N_i, ω_i, Φ_i) be an exceptional sheet that is stabilized by H_ℱ_min such that q ∈ N_i for i=1,2. Hence, both N_1 and N_2 are connected components of
M^H_ℱ_min. Since q ∈ N_1 ∩ N_2, it follows
that N_1 = N_1 ∪ N_2 = N_2, as desired.
Next we prove existence. Let (N',ω',Φ') be the sheet
through q. Since q is exceptional, by Lemma
<ref>, (N',ω',Φ')
is exceptional. Since N' is compact, there exists p ∈
M^T ∩ N' that, by Lemma <ref>, is
isolated. We claim that there exists an exceptional sheet
(N,ω_N,Φ_N) stabilized by H_ℱ_min such
that p ∈ N. To this end, we use Proposition
<ref> and Lemma <ref>.
Let U be the open neighborhood of p given by Lemma
<ref> and let U_1 be the subset of U
consisting of exceptional points stabilized by
H_ℱ_min, which is path-connected and dense in the subset
of U consisting of exceptional points.
In particular,
U_1 ≠∅. Given any point q' ∈ U_1, we consider the
sheet (N,ω_N,Φ_N) through q'. By Lemma
<ref>, (N,ω_N,Φ_N) is
exceptional, while, by definition, it is stabilized by
H_ℱ_min. Since U_1 is dense in the subset
of U consisting of exceptional points and since p
∈ U is exceptional by Lemma <ref>, it
follows that p ∈ N, thus proving the claim.
Hence, N' ∩ N ≠∅. Moreover, since any point in N' is exceptional, by
Lemma <ref> the stabilizer of any point
in N' contains H_ℱ_min. Since both N' and N
are connected and since N is a connected component of
M^H_ℱ_min, it follows that N' ∪ N = N, so
that N' is contained in N. Hence, q ∈ N, as desired.
In fact, the proof of Lemma <ref>
yields a slightly stronger result, namely that, under the hypotheses
of the lemma, any exceptional sheet is contained in one that
is stabilized by H_ℱ_min.
Let be a normalized monotone tall complexity one T-space of dimension 2n.
Each connected component of M_exc is mapped
homeomorphically to Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0 }
by the orbital moment map. In particular, each connected component
of M_exc is contractible. Moreover,
* if m is the
number of vertices of the minimal facet, then the number of
connected components of M_exc is precisely the number
of isolated fixed points divided by m, and
* if e is an
edge of Φ(M) that comes out of
ℱ_min, then the number of isolated fixed
points lying on Φ^-1(e) equals the number of connected
components of M_exc.
First we show that the image of {p ∈ M^H_ℱ_min| p is exceptional} under the quotient map M → M/T
equals M_exc and that the number of connected components of
the latter equals the number of exceptional sheets that are
stabilized by H_ℱ_min. Clearly, the image of
{p ∈ M^H_ℱ_min| p is exceptional} under the quotient map
M → M/T is contained in M_exc. Conversely, given an
exceptional orbit 𝒪∈ M_exc, every point in
𝒪 is exceptional by Remarks <ref> and <ref>.
Fix a point p ∈𝒪. By Proposition
<ref> and Lemma <ref>, p ∈
M^H_ℱ_min. Since M^H_ℱ_min is
T-invariant, it follows that 𝒪 is contained in
M^H_ℱ_min. Hence, M_exc is contained
in the image of
{p ∈ M^H_ℱ_min| p is exceptional} under the quotient map
M → M/T and the first claim follows. To prove the second claim, we observe that the connected
components of {p ∈ M^H_ℱ_min| p is exceptional} are
exactly the exceptional sheets that are
stabilized by H_ℱ_min. Since sheets are
T-invariant and T is connected, the restriction of the quotient map
M → M/T to {p ∈ M^H_ℱ_min| p is exceptional} induces a
bijection between the connected components of
{p ∈ M^H_ℱ_min| p is exceptional} and those of
M_exc.
By Proposition <ref>, the image
of an exceptional sheet that is stabilized by
H_ℱ_min is precisely Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0 }. Moreover,
any such sheet is a compact symplectic toric manifold, so that the
corresponding orbital moment map image is a homeomorphism onto its
image. Hence, the first claim follows.
By the above argument, in order to prove the bulleted statements, it
suffices to show that the number of exceptional sheets that are
stabilized by H_ℱ_min is precisely the number of
isolated fixed points divided by m. We begin by observing that
M_exc = ∅ if and only if there are no isolated
fixed points. This is a consequence of Lemma
<ref> and the
complexity of being one. So we may assume that M_exc≠∅.
Let (N,ω_N,Φ_N) be such an exceptional
sheet that is stabilized by H_ℱ_min. Since (N,ω_N,Φ_N) is a compact symplectic toric manifold, the number of vertices
in Φ_N(N) = Φ(N) is precisely the number of fixed points. By Proposition
<ref>,
Φ(N) equals Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0 }; moreover,
there is a bijection between the vertices of Φ(N) and those of
ℱ_min. Hence, the cardinality of M^T ∩
N equals m.
Moreover, if (N',ω',Φ') is another sheet stabilized by
H_ℱ_min, then either N = N' or N ∩ N' =
∅. In particular, if N ≠ N', the subsets M^T ∩
N and M^T ∩
N' are disjoint. Finally, since the set of isolated fixed points is a
subset of the set of exceptional points by Lemma
<ref>, by Lemma
<ref>, M^T_isolated equals
the disjoint union over all exceptional sheets (N,ω_N,Φ_N) stabilized
by H_ℱ_min of the intersections
M_isolated^T ∩ N. Since any such sheet is exceptional,
by Lemma <ref>, M^T
∩ N equals the intersection of N with the set of isolated
fixed points for any exceptional sheet (N,ω_N,Φ_N) stabilized
by H_ℱ_min. Putting the above facts together, the
bulleted statements follow.
We conclude this section with the following important result.
The equivalence class of paintings of a compact normalized monotone tall complexity one T-space is trivial.
By Lemma <ref>, the genus of is zero. Let f :
M_exc→ S^2 be a painting of . We need to
construct a painting f' : M_exc→ S^2 that is
constant on each connected component of M_exc and that
is equivalent to f. If M_exc = ∅, there is
nothing to prove, so we may assume that M_exc≠∅.
Since Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩
= 0} is a convex polytope, it is contractible. We fix a point
w_0 in Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩
= 0} (for instance, the origin). Since Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩
= 0} is contractible, there exists a deformation retraction of Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩
= 0} onto
w_0 that we denote by H_t, where H_0 = id and H_1(w)
= w_0 for all w ∈Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩
= 0}.
Let k >0 be the number of connected components of
M_exc. We have that
M_exc = ∐_i=1^k M_i,
where M_i is a connected component of M_exc for
i=1,…, k. By Theorem <ref>, the
restriction of Φ : M/T →^* to M_i is a homeomorphism onto Φ(M) ∩{ w ∈^* |⟨ w, ν_min⟩
= 0}; let Φ^-1_i be the inverse to this homeomorphism. We denote the restriction of f to M_i by f_i
and, for any 0 ≤ t ≤ 1, we define f_i,t : M_i → S^2 as
the composite f_i ∘Φ_i^-1∘ H_t ∘Φ. Since M_exc is the disjoint union of
the M_i's, for any 0 ≤ t ≤ 1, there exists a unique
continuous map f_t : M_exc→ S^2 such that the
restriction of f_t to M_i equals f_i,t for all i=1,…,
k.
We claim that f_t : M_exc→ S^2 is a painting of
for all 0≤ t ≤ 1. To this end, we fix 0 ≤ t ≤
1, consider the map (Φ,f_t) and two distinct points
[q_i],[q_j] ∈ M_exc belonging to M_i and M_j
respectively. We aim to show that (Φ([q_i]),f_t ([q_i])) ≠
(Φ([q_j]),f_t ([q_j])). If i = j, since the restriction of Φ
to M_i = M_j is a homeomorphism and since [q_i] ≠ [q_j], it
follows that Φ([q_i]) ≠Φ([q_j]),
so that the result follows. Suppose next that i ≠
j. If Φ([q_i]) ≠Φ([q_j]), then
the result follows, so we may assume that Φ([q_i])
= Φ([q_j]) =: w. If f_t ([q_i]) =
f_t([q_j]), then the points Φ^-1_i(H_t(w)) ∈
M_i and Φ^-1_j(H_t(w)) ∈
M_j are distinct (since i ≠ j), but are such that
(Φ(Φ^-1_i(H_t(w))),f(Φ^-1_i(H_t(w))))
=
(Φ(Φ^-1_j(H_t(w))),f(Φ^-1_j(H_t(w)))).
This implies that f is not a painting, which is absurd. Hence, f_t ([q_i]) ≠
f_t([q_j]), which implies that (Φ([q_i]),f_t ([q_i])) ≠
(Φ([q_j]),f_t ([q_j])), as desired.
We set f':= f_1. Since f_t : M_exc→ S^2 is a painting of
for all 0≤ t ≤ 1, f = f_0 and f' are homotopic
through paintings. Moreover, the restriction of f' to M_i is
constant by construction for any i=1,…, k, thus completing
the proof.
§.§ The Duistermaat-Heckman function
In order to prove the main result of this section (see Theorem
<ref>), we prove the following intermediate
result. To this end, we
recall that (<ref>) and (<ref>)
hold, and that for any vertex v and any edge e of Φ(M), the
preimages Φ^-1(v) and Φ^-1(e) are a two-dimensional
sphere and a four-dimensional manifold respectively. Moreover,
by Lemma <ref>, there exists s ∈ℤ
such that, if v ∈ℱ_min and e is the edge that
comes out of ℱ_min that is incident to v, then the self-intersection
of Φ^-1(v) in Φ^-1(e) equals s (and does not depend on
v).
Let be a compact normalized monotone tall complexity one T-space of dimension 2n. Let v ∈ℱ_min be a vertex and let e be
the edge incident to v that comes out of ℱ_min. Let
s ∈ℤ be as in Lemma <ref> and let k ≥ 0 be the number of isolated fixed
points contained in Φ^-1(e). Then the restriction of the Duistermaat-Heckman
function DH to e is the function
e →ℝ
w ↦ 2 - s ⟨ w, ν_min⟩ - k ρ(w),
where ρ : ^* →ℝ is the function given by
w ↦
0 if ⟨ w, ν_min⟩≤ 0
⟨ w, ν_min⟩ if ⟨ w, ν_min⟩≥ 0.
Let α_1,…, α_n-1 be the
non-zero isotropy weights of Φ^-1(v) ordered so
that (<ref>) holds; in particular, ⟨α_n-1,
ν_min⟩ = 1 (see Lemma
<ref> and (<ref>)). Since e is the edge that comes out of
ℱ_min and is incident to v, by Lemma
<ref>, there exists t_max∈_>0
such that
e = { v + tα_n-1| 0 ≤ t ≤ t_max}.
Let (M_e, ω_e,
Φ_e) be the sheet corresponding to e as in
(<ref>). Since α_n-1∈ℓ^* is primitive, the
stabilizer H_e of (M_e, ω_e,
Φ_e) is precisely exp(Ann( ⟨α_n-1⟩) ). Since is tall and compact,
(M_e, ω_e,
Φ_e) is a compact tall complexity one Hamiltonian T/H_e ≃ S^1-space by
property <ref> of Corollary
<ref> and by Proposition <ref>.
In what follows, Φ_e is chosen so that Φ_e(M_e)
= [0,t_max]. Moreover, the isolated fixed points for the S^1-action
on M_e are precisely the isolated fixed points for the T-action contained in
M_e. By Corollary <ref>, the restriction
of the Duistermaat-Heckman function DH to e equals DH
(M_e,ω_e,Φ_e). Since ⟨ v +
tα_n-1,ν_min⟩ = t-1, in order to
prove that (<ref>) holds, it
suffices to check that the Duistermaat-Heckman function of
(M_e,ω_e,Φ_e) is the function [0,t_max] →ℝ given
by
t ↦ 2 - s(t-1) -k Θ(t-1),
where Θ: ℝ→ℝ is the function
t ↦
0 if t ≤ 0
t if t ≥ 0.
By Proposition
<ref> and the definition of Φ_e, any isolated fixed point p ∈ M_e for the
S^1-action satisfies Φ_e(p) = 1 and its isotropy weights are
+1,-1. In particular, if k > 1, then t_max > 1. Since (M_e,ω_e,Φ_e) is tall and since the domain
of the Duistermaat-Heckman function is Φ_e(M_e) = [0,t_max] (see Definition
<ref>), by <cit.>, we have that
DH (M_e,ω_e,Φ_e)(t) =
∫_Φ_e^-1(v)ω_e - t
c_1(L_e)[Φ_e^-1(v)] - k Θ(t-1) for all t
∈ [0,t_max],
where c_1(L_e) is the first Chern class of
the normal bundle L_e to Φ_e^-1(v) in M_e.
Since is normalized monotone, since Φ^-1_e(v) =
Φ^-1(v) is a sphere by Lemma <ref>, and
by Lemma <ref>, we have that
∫_Φ_e^-1(v)ω_e = c_1(M)[Φ^-1(v)] = 2 +
∑_j=1^n-1c_1(L_j)[Φ^-1(v)] = 2 + c_1(L_n-1)[Φ^-1(v)],
where L_1⊕…⊕ L_n-1 is the T-equivariant
splitting of the normal bundle to Φ^-1(v) in M, and the
last equality follows from Proposition <ref>.
Combining equations (<ref>) and (<ref>), and observing
that L_e = L_n-1, we have that
DH (M_e,ω_e,Φ_e)(t) = 2 - s(t-1) - k Θ(t-1),
as desired.
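For instance, if s = -1 and k = 2, the formula specializes to
DH(v + tα_n-1) = 1 + t for 0 ≤ t ≤ 1 and DH(v + tα_n-1) = 3 - t for 1 ≤ t ≤ t_max,
so DH equals 1 at t = 0 and reaches its maximum 2 at t = 1; since DH is positive on Φ(M) (the space being tall) and t_max is a positive integer, this forces t_max = 2 in this case.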
By Theorem <ref>, the number of isolated fixed points
that lie in the preimage of an edge that comes out of
ℱ_min equals the number of connected components of
M_exc. Hence, we can use Proposition
<ref> to prove the following result.
Let be a compact normalized monotone tall complexity one T-space of dimension 2n. Let s ∈ℤ be as in Lemma <ref> and let k ≥ 0
be the number of
connected components of M_exc. The Duistermaat-Heckman
function DH : Φ(M) →ℝ is given by
DH (w) = 2 - s ⟨ w, ν_min⟩ - k ρ(w),
where ρ : ^* →ℝ is the function
given by
w ↦
0 if ⟨ w, ν_min⟩≤ 0
⟨ w, ν_min⟩ if ⟨ w, ν_min⟩≥ 0.
First, we show that the interior of the
intersection
Φ(M) ∩{w ∈^* |±⟨ w, ν_min⟩ > 0}
consists entirely
of regular values of Φ. To this end, suppose that
w ∈^* ∖∂Φ(M) is a singular value of
Φ. Hence, there exists q ∈Φ^-1(w) whose stabilizer
has positive dimension. By <cit.>, q is
exceptional. Thus, by Proposition
<ref> and Lemma
<ref>, Φ(q) ∈{w ∈^* |⟨ w, ν_min⟩ = 0}, as
desired. Hence, by Remark <ref>, the restriction of DH to
Φ(M) ∩{w ∈^* |±⟨ w, ν_min⟩ > 0} is the restriction of an affine
function f^±: ^* → of the form f^±(w) =
c^± + ⟨ w,β^±⟩ for some c^±∈
and some β^±∈ℓ.
We fix a vertex v ∈ℱ_min, we let e be the edge
that comes out of v and α_1,…, α_n-1 be the non-zero
isotropy weights of Φ^-1(v) ordered so that (<ref>) and
(<ref>) hold. Since DH is continuous (by Theorem
<ref>) and
since the restriction of DH to ℱ_min is
constant by Proposition <ref>, the
restriction of the affine function f^- to the affine hyperplane
supporting ℱ_min is constant. Since
f^- is an affine function and since ℱ_min is
supported on the hyperplane v + ⟨α_1,…,
α_n-2⟩, the restriction of the linear part of
f^- to ⟨α_1,…, α_n-2⟩
is identically zero. In other words, β^- ∈Ann(⟨α_1,…, α_n-2⟩). Hence, since ν_min∈Ann(⟨α_1,…, α_n-2⟩), since β^-, ν_min∈ℓ, and since
ν_min is primitive, there exists λ^- ∈ such that
β^- = λ^- ν_min.
By Lemma <ref>, there exists t_max∈_>0
such that
e = { v + tα_n-1| 0 ≤ t ≤ t_max}.
By (<ref>) and since v ∈ℱ_min, ⟨ v + tα_n-1,ν_min⟩ =
-1+t for all 0
≤ t ≤ t_max. Therefore, by Proposition <ref>, the restriction
of DH to e ∩{w ∈^* |⟨ w, ν_min⟩ < 0} is given
by the function that sends t to 2 + s -st, where 0 ≤ t < 1. Hence,
c^- + λ^-(-1+t) = 2 + s -s t for all 0 < t < 1.
Equation (<ref>) readily implies that λ^- = - s
and c^- = 2. Hence, f^-(w) = 2 - s⟨ w, ν_min⟩.
We split the remainder of the proof in two cases, depending on
whether t_max = 1 or t_max≥ 2. In the former case, the other vertex v' of Φ(M) that is incident
to e lies on the linear hyperplane {w ∈^* |⟨
w,ν_min⟩ = 0}. Hence, by Corollary
<ref>, there are no isolated fixed
points. Therefore, by Lemma
<ref>, the function DH is
the restriction of an affine function. Since the interior of Φ(M) ∩{w ∈^* | - ⟨ w, ν_min⟩ > 0} is not empty and
since the restriction of DH to this subset equals the affine
function f^-(w) = 2 - s⟨ w, ν_min⟩, it
follows that
DH (w) = 2 - s⟨ w, ν_min⟩ = 2
- s⟨ w, ν_min⟩ - k ρ(w) for all
w ∈Φ(M),
where the last equality follows from the fact that k=0, since
there are no isolated fixed points (see Lemma
<ref> and Corollary <ref>).
It remains to consider the case t_max≥ 2. Since DH is continuous, the restrictions of f^- and
f^+ to Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} are equal. Hence, the
restriction of the affine function f^+ to Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0} equals 2. By Proposition <ref>, Φ(M) is a reflexive
Delzant polytope; thus, by Lemma
<ref>, the origin lies in the interior of
Φ(M). Therefore, the interior of Φ(M) ∩{w ∈^* |⟨ w, ν_min⟩ = 0}
in {w ∈^* |⟨ w, ν_min⟩ = 0} is non-empty. Hence, the restriction of the linear part of f^+ to the
hyperplane {w ∈^* |⟨ w, ν_min⟩ = 0} is identically zero. Arguing as above, this implies
that there exists λ^+ ∈ such that β^+ = λ^+ ν_min.
Since t_max≥ 2, the intersection e ∩{w ∈^* |⟨ w, ν_min⟩ > 0} is not empty. By Proposition <ref>, the restriction
of DH to e ∩{w ∈^* |⟨ w, ν_min⟩ > 0} is
the function that sends t to 2 + s - st -k(t-1) for 1 < t ≤
t_max. Hence, we have
that
λ^+(-1+t) + 2 = 2 + s - st - k(t-1) for all 1 < t < t_max.
Equation (<ref>) readily implies that λ^+ = - s -
k. Hence, f^+(w) = 2 -s⟨ w, ν_min⟩-k⟨ w, ν_min⟩.
Thus the restriction of DH to Φ(M) ∖{w ∈^* |⟨ w, ν_min⟩ = 0} is the function given by
w ↦
2 - s ⟨ w, ν_min⟩ if ⟨ w, ν_min⟩ < 0
2 - s ⟨ w, ν_min⟩ - k ⟨ w, ν_min⟩ if ⟨ w, ν_min⟩ >0.
This function equals the restriction of (<ref>) on the dense
subset Φ(M) ∖{w ∈^* |⟨ w, ν_min⟩ = 0} of Φ(M). Since DH is continuous,
equation (<ref>) holds.
§.§ The Duistermaat-Heckman function is a complete invariant
We can proceed with the proof of our first main result.
Let and (M',ω',Φ') be compact monotone tall complexity one T-spaces of
dimension 2n. If they are isomorphic, then they have equal
Duistermaat-Heckman measures. By Theorem
<ref>, they have equal
Duistermaat-Heckman functions.
Suppose conversely that they have equal Duistermaat-Heckman
functions. First, we show that it suffices to show the result under
the additional assumption that both spaces are normalized. To this
end, assume this special case. Since and (M',ω',Φ') are monotone, by Corollary
<ref>, there exist λ, λ' >0 and
v, v' ∈^* such that (M,λω, λΦ+v) and
(M',λ'ω', λ'Φ'+v') are normalized monotone. Moreover,
since and (M',ω',Φ') are tall and have equal
Duistermaat-Heckman functions, there exists a Delzant polytope
Δ such that Φ(M) = Δ = Φ'(M'). Since (M,λω, λΦ+v) and
(M',λ'ω', λ'Φ'+v') are normalized monotone, by
Proposition <ref>, the polytopes λΔ + v and λ' Δ + v' are
reflexive Delzant. We observe that there exists at most one
reflexive Delzant polytope obtained from Δ by rescaling and
translating because the vertices of a reflexive Delzant polytope
are determined by the tangent cones to its vertices (see
Proposition <ref>). Hence, λ = λ' and v = v'; in particular,
λΔ + v = λ' Δ + v'. On the other hand, we
observe that, for any w ∈Δ,
DH (M,λω, λΦ+v)(λ w + v) = λ DH(w)
DH (M',λ'ω', λ' Φ'+v')(λ' w + v') = λ' DH (M',ω',Φ')(w).
This is an immediate consequence of the definition of
the Duistermaat-Heckman function, of the fact that its restriction to
the set of regular values is given by (<ref>), and of the
complexity of both spaces being one. Hence, DH
(M,λω, λΦ+v) = DH (M',λ'ω', λ'
Φ'+v'). By assumption, (M,λω, λΦ+v) and (M',λ'ω', λ'
Φ'+v') are isomorphic. Therefore, by Lemma
<ref>, it follows that and
(M',ω',Φ') are isomorphic, as desired.
It remains to prove the result under the additional assumption that
the spaces are normalized. By Remark <ref> and Theorem
<ref>, the two spaces have equal Duistermaat-Heckman
measures. Moreover, both spaces
have genus zero by Lemma
<ref>, and equal moment map images, both of which are equal to a reflexive Delzant
polytope Δ by Proposition
<ref>. Since the Duistermaat-Heckman functions
are equal, there is a
facet ℱ_min of Δ that is a minimal facet for
both spaces. Let s, s' ∈ℤ be the integers as in Lemma
<ref> for (M,ω,Φ) and (M',ω',Φ') respectively.
Since the Duistermaat-Heckman functions of
and (M',ω',Φ') are equal, by Theorem
<ref>,
2 - s⟨ w ,ν_min⟩ - k ρ(w) = 2 - s'⟨ w ,ν_min⟩ - k' ρ(w) for
all w ∈Δ,
where ρ : ^* →ℝ is the function of equation
(<ref>). Since the interior of Δ is not empty and
by (<ref>), there exists a point w ∈Δ such
that -1 < ⟨ w, ν_min⟩ < 0. Evaluating both sides of (<ref>) at
this point, we obtain that s = s'. By the same argument and since Δ is reflexive so that the origin is an
interior point of Δ, there exists w' ∈Δ
such that
⟨ w', ν_min⟩ > 0. Evaluating both sides of (<ref>) at w', we
obtain that k = k'. Hence, either both M_exc and
M'_exc are empty or neither is.
We suppose first that neither M_exc nor M'_exc
is empty. As in the proof of Theorem
<ref>, we write
M_exc = ∐_j=1^k M_j and
M'_exc = ∐_j=1^k M'_j,
where M_j is a connected component of
M_exc for all
j=1,…, k, and analogously for M'_j and M_exc'.
By Theorem
<ref>, there exist trivial paintings f :
M_exc→ S^2 and f':
M'_exc→ S^2 of and (M',ω',Φ')
respectively (see Definition <ref>). Since
f, f' are paintings, if i ≠ j,
f(M_i) ≠ f(M_j) and f'(M'_i) ≠ f'(M'_j). In
particular, the images of both f and f' consist of k
distinct points in S^2. We claim that we may
assume that f(M_j) = f'(M'_j)
for all j = 1,…, k. To see this, we can
use the argument of <cit.> to construct an
orientation-preserving diffeomorphism ξ : S^2 → S^2 such that
ξ(f(M_j)) = f'(M'_j) for all j =1,…, k. The composite
ξ∘ f is a trivial painting of that is equivalent to f.
By Theorem <ref>, for all j=1,…, k,
the orbital moment maps Φ (respectively Φ') maps
each connected component of M_j (respectively
M'_j) homeomorphically onto
Δ∩{ w ∈^* |⟨ w, ν_min⟩ =
0},
where we use the fact that Φ(M) = Δ = Φ'(M'). If Φ_j (respectively
Φ'_j) denotes the restriction of the orbital moment
map to M_j (respectively M_j'), then the composite i_j:= (Φ'_j)^-1∘Φ_j : M_j → M'_j is
a homeomorphism that satisfies Φ =
Φ' ∘ i_j. Hence, since M_exc and
M'_exc have the same number of connected components,
there is a unique map i : M_exc→ M'_exc
that, when restricted to M_j, equals i_j for all
j=1,…,k. Thus i is a homeomorphism such that Φ =
Φ' ∘ i. Moreover, since the moment map images of
and (M',ω',Φ') are equal, by Remark
<ref>, the homeomorphism i maps each orbit to an orbit with
the same symplectic slice representation. Hence, i :
M_exc→ M'_exc is an isomorphism of
exceptional orbits such that f' = i ∘ f. Thus and
(M',ω',Φ') have equivalent paintings.
If M_exc = ∅ = M'_exc, then and
(M',ω',Φ') also have equivalent paintings
(trivially). Hence, in either case, the result follows by Theorem <ref>.
We observe that, by Corollary <ref>, Theorem
<ref> can be restated equivalently as saying that
two compact monotone tall complexity one T-spaces are isomorphic
if and only if they have equal Duistermaat-Heckman polytopes.
§ REALIZABILITY, EXTENSION TO TORIC AND FINITENESS RESULTS
§.§ Necessary conditions for the realization and a finiteness
result
To state the first result of this subsection, we recall that,
by Lemma <ref>, there exists s ∈ℤ such that, if v ∈ℱ_min is a
vertex and e is the edge of Φ(M) that comes out of
ℱ_min and is incident to v, then the
self-intersection of the sphere Φ^-1(v) in the four-dimensional submanifold Φ^-1(e) is
s.
Let be a normalized monotone tall complexity one Hamiltonian
T-space, let s ∈ℤ be as above and let k ∈ℤ_≥ 0 be the
number of connected components of M_exc. The pair
(s,k) ∈^2 belongs to the set
{(0,0), (-1,0), (-1,1), (-1,2) }.
We fix a vertex v ∈ℱ_min and we let e=e_n-1 be the edge
of Φ(M) that comes out of ℱ_min and is incident
to v. The normal bundle N to Σ:=Φ^-1(v) splits T-equivariantly as
L_1⊕⋯⊕ L_n-1. By Proposition
<ref>, the first Chern number c_1(L_j)[Σ],
which agrees with the self-intersection of Σ in Φ^-1(e_j), equals zero for all j=1,…,n-2.
Therefore, by Lemma <ref> and equation (<ref>) in its proof,
0<2+c_1(L_n-1)[Σ]=2+s .
Moreover, by (<ref>) in Lemma <ref>, s=c_1(L_n-1)[Σ]≤ 0. Hence,
we conclude that s∈{0,-1}.
Suppose first that s= 0. By Theorem <ref>,
the Duistermaat-Heckman function DH : Φ(M) → is given by w ↦
2 - kρ(w), where ρ : ^* → is the function of
(<ref>). Since k ≥ 0, 2 - kρ(w) ≤ 2
for all w ∈Φ(M). Since DH(v) = 2 and since v ∈ℱ_min, it follows that DH(w) = 2 for all w ∈Φ(M). By Proposition <ref>, Φ(M) is a
reflexive (Delzant) polytope. Hence, by Lemma
<ref>, Φ(M)
contains the origin in its interior. Thus, by definition of the
function ρ, k = 0.
Suppose that s = -1. We must show that k ≤ 2. To this end, we
may assume that k >
0. Let α∈ℓ^* be the isotropy weight of Φ^-1(v)
such that the edge e is contained in the half-ray v + _≥ 0⟨α⟩. By Lemma <ref>,
let t_max∈_>0 be such that e = {v
+ t α| 0 ≤ t ≤ t_max}. First, we prove that
t_max≥ 2. Let v' ∈Φ(M) be the other vertex to which e is incident. If
t_max = 1, then v' is a vertex of
Φ(M) that lies on the linear hyperplane {w ∈^* |⟨ w, ν_min⟩ = 0}. By the Convexity Package
(Theorem <ref>), and Proposition
<ref>, there are no isolated fixed
points in Φ^-1(e). Hence, by Theorem <ref>, M_exc = ∅,
a contradiction. Hence, t_max≥ 2, as desired. As a
consequence, v + 2α∈Φ(M). To conclude
the proof, we evaluate DH at v + 2α.
By (<ref>) and the fact that v∈ℱ_min,
we obtain that ⟨ v+2α, ν_min⟩ = 1. Moreover, since is tall,
DH(w) > 0 for all w ∈Φ(M). Therefore, by
(<ref>),
DH(v+2α)= 2 + 1- k > 0.
Since k∈, it follows that k ≤ 2.
We proceed with the proof of our second main result.
Suppose that Φ(M) = Δ is reflexive Delzant. Since
is monotone, by Lemma <ref>, is normalized
monotone. Since there are finitely many facets of Φ(M) and
since, by Proposition <ref>, there are finitely many
possibilities for (s,k), by Theorem
<ref>, there are finitely many
possibilities for DH. Hence, the result follows from Theorem <ref>.
By Theorem <ref> and Proposition
<ref>, and since there are
precisely as many edges that come out of ℱ_min as there
are vertices of ℱ_min, we obtain the following bound on
the number of isolated fixed points.
Let be a normalized monotone tall complexity one T-space of dimension 2n. If
m is the number of vertices of a minimal facet, then there are precisely
either zero, m or 2m isolated fixed points in M.
The next result gives a combinatorial property of the moment map image
of a normalized monotone tall complexity one T-space such that
M_exc has two connected components.
Let be a normalized monotone tall complexity one T-space of dimension 2n. Let
ℱ_min be a minimal facet of Φ(M) supported on
the affine hyperplane {w ∈^* |⟨ w, ν_min⟩ =
-1}. If M_exc has two connected components, then
there exists a minimal facet ℱ'_min of Φ(M) supported on the affine hyperplane {w ∈^* |⟨ w, -ν_min⟩ =
-1}. In particular, Φ(M) is contained in the strip {w ∈^* | -1 ≤⟨ w, ν_min⟩≤ 1}.
We fix a vertex v ∈ℱ_min. Since the number k of
connected components of M_exc is 2, by Proposition
<ref>, it follows that s=-1. In particular, by
Theorem <ref>, the Duistermaat-Heckman function
DH : Φ(M) → of is given by
DH (w) = 2 + ⟨ w, ν_min⟩ - 2 ρ(w),
where ρ : Φ(M) → is the non-negative function given by
(<ref>). Since v ∈ℱ_min, DH
(v) = 1. Hence, since ℱ_min is a minimal facet, the minimal value of DH equals 1.
Let e be the edge of Φ(M) that comes out of
ℱ_min and is incident to v, and let α∈ℓ^* be the isotropy weight of Φ^-1(v) so that e is
contained in the half-ray v + _≥ 0⟨α⟩. By (<ref>) and
Proposition <ref>,
⟨α, ν_min⟩ =1 and there exists t_max∈_>0 such that e = {v + tα| 0 ≤ t ≤ t_max}. We set v':= v + t_maxα; this is a vertex of
Φ(M). Moreover, we observe that ⟨ v', ν_min⟩
= t_max -1. Since s=-1 and k =2, arguing as in the last
paragraph of the proof of
Proposition <ref>, t_max≥
2. In particular, by (<ref>) and since the minimal
value of DH is 1,
DH (v') = 3 - t_max≥ 1,
whence t_max≤ 2. Hence, t_max = 2 and DH (v')
=1, so that DH attains its minimum at v'. By Proposition
<ref>, v' lies on a minimal facet
ℱ_min'. In fact, ℱ_min' is contained
in the connected component of the level set (DH )^-1(1)
that contains v'. By (<ref>), the latter is given by the
affine hyperplane {w ∈^* |⟨ w, -ν_min⟩ =
-1}. Since the affine span of a facet is an affine hyperplane, the
first statement follows. The second statement follows at once from
the first and the fact that Φ(M) is contained in the
intersection {w ∈^* |⟨ w, ν_min⟩≥
-1}∩{w ∈^* |⟨ w, -ν_min⟩≥
-1}.
The next result is the most important building block of the main finiteness result of this paper,
Corollary <ref>.
Given a reflexive Delzant polytope Δ in ^*, there are
finitely many isomorphism classes of normalized monotone tall complexity one T-spaces with moment map image equal to
Δ.
Let ℱ be a facet of Δ and let (s,k) ∈{(0,0), (-1,0), (-1,1), (-1,2) }. Since Δ has
finitely many facets and by Proposition <ref>, it suffices to show that there are finitely
many isomorphism classes of normalized monotone tall complexity one T-spaces with moment map image equal to
Δ such that
* ℱ is a minimal facet of Φ(M),
* given any vertex v ∈ℱ and the edge e of
Φ(M) that comes out of ℱ and is incident to v,
the self-intersection of Φ^-1(v) in Φ^-1(e) equals
s (cf. Lemma <ref>), and
* the set of exceptional orbits has precisely k connected
components.
If there is no compact normalized monotone tall complexity one T-space with the above properties, there is
nothing to prove, so we may assume that there exists such a space . By Theorem <ref>, the data Δ,
ℱ and (s,k) determine uniquely the Duistermaat-Heckman
function of . Hence, by Theorem <ref>,
there is exactly one isomorphism class of compact normalized monotone tall complexity one T-space with the above properties, as desired.
Theorem <ref> and Corollary
<ref> allow us to prove Corollary
<ref>, thus answering a question posed to us by
Yael Karshon. We recall that two Hamiltonian T-spaces (M_1,ω_1,Φ_1) and
(M_2,ω_2,Φ_2) are equivalent if there exists a
symplectomorphism Ψ : (M_1,ω_1) → (M_2,ω_2) and an affine transformation a ∈GL(ℓ^*) ⋉𝔱^* such that Φ_2
∘Ψ = a ∘Φ_1. In this case, we write (M_1,ω_1,Φ_1) ∼ (M_2,ω_2,Φ_2).
* In the above notion of equivalence, the reason why we restrict to elements in GL(ℓ^*) ⋉𝔱^* is the following: Given an effective Hamiltonian T-space
and an affine transformation a of 𝔱^*, the
triple (M,ω, a ∘Φ) is an effective Hamiltonian T-space if
and only if a ∈GL(ℓ^*) ⋉𝔱^*.
* Isomorphic Hamiltonian T-spaces
in the sense of Definition <ref>
are necessarily equivalent, but the converse need not hold.
We fix n and we denote the set of equivalence classes of compact tall
complexity one T-spaces of dimension 2n with first Chern class equal to the class
of the symplectic form by ℳ_n. By Definition <ref>, any normalized monotone tall
complexity one T-space of dimension 2n is such that its first
Chern class equals the class of the symplectic form. We define an
auxiliary equivalence relation ≈ on the set of normalized monotone tall
complexity one T-spaces of dimension 2n as follows: Given two
such spaces
(M_1,ω_1,Φ_1), (M_2,ω_2,Φ_2), we say that (M_1,ω_1,Φ_1) ≈ (M_2,ω_2,Φ_2) if there exists
a symplectomorphism Ψ : (M_1,ω_1) → (M_2,ω_2) and a linear transformation l ∈GL(ℓ^*) such
that Φ_2 ∘Ψ = l ∘Φ_1. We denote the set of
≈-equivalence classes of normalized monotone tall
complexity one T-spaces of dimension 2n by 𝒩ℳ_n.
We observe that there is a natural map 𝒩ℳ_n →ℳ_n sending the ≈-equivalence class of to
its ∼-equivalence class. Moreover, we claim that this map is a
bijection. First, we show that it is
injective. Suppose that (M_1,ω_1,Φ_1),
(M_2,ω_2,Φ_2) are normalized monotone tall
complexity one T-spaces of dimension 2n such that
(M_1,ω_1,Φ_1) ∼ (M_2,ω_2,Φ_2). Then there
exists a ∈GL(ℓ^*) ⋉𝔱^* such that a (Φ_1(M_1)) = Φ_2(M_2). We write
a = (l, v) for unique l ∈GL(ℓ^*) and v ∈𝔱^*. It suffices to show that v =
0. By Proposition <ref>, both Φ_1(M_1) and
Φ_2(M_2) are reflexive (Delzant) polytopes. In particular, all
vertices of Φ_1(M_1) and of
Φ_2(M_2) lie in ℓ^*. Since both polytopes have at least
one vertex and since l ∈GL(ℓ^*), v
∈ℓ^*. Moreover, by Lemma <ref>, the
origin is the only interior lattice point in both Φ_1(M_1) and
Φ_2(M_2). Hence, since a (Φ_1(M_1)) =
Φ_2(M_2), the lattice point v lies in the interior of
Φ_2(M_2). Thus v = 0, as desired. Next we prove
surjectivity. To this end, let be a compact tall
complexity one T-space of dimension 2n with c_1(M) = [ω]. By Proposition <ref>, there exists
(a unique) v ∈𝔱^* such that (M,ω,Φ +v) is
normalized. Hence,
∼ (M,ω,Φ +v), as desired.
Therefore it suffices to prove that 𝒩ℳ_n is
finite. To this end, we denote the orbit space of the standard
GL(ℓ^*)-action on the set of reflexive Delzant
polytopes in ^* by ℛ𝒟_n. By Proposition
<ref>, the map p: 𝒩ℳ_n →ℛ𝒟_n
that sends the ≈-equivalence class of to the
GL(ℓ^*)-orbit of Φ(M) is surjective. By Corollary
<ref>, ℛ𝒟_n is
finite. Hence, it suffices to prove that the fibers of the above map
are finite. We fix a reflexive Delzant polytope Δ and we
consider the map from the set of isomorphism classes of normalized monotone tall complexity one T-spaces with moment map image equal to
Δ to p^-1([Δ]) that sends the isomorphism class of
to its ≈-equivalence class. This map is surjective:
If is such that [Φ(M)] = [Δ], then
there exists l ∈GL(ℓ^*) such that Δ =
l(Φ(M)). The isomorphism class of (M, ω, l ∘Φ) is
then mapped to the ≈-equivalence class of . The
result now follows from Corollary <ref>.
§.§ Sufficient conditions for the realization and extension to
a toric action
Let be a normalized monotone tall complexity one T-space. By
the results of Section
<ref>, there exists a quadruple
(Δ,ℱ,s,k) determined by , where Δ = Φ(M), ℱ is
a facet of Δ that is a minimal facet of (see Definition
<ref>), s ∈ is the self-intersection of
the sphere Φ^-1(v) in Φ^-1(e), where v ∈ℱ
is any vertex and e is an edge of Δ that comes out of
ℱ and is incident to v, and k ∈ is the number of
connected components of M_exc. The overall aim of this section is
to determine which quadruples arise in this fashion (see Corollary <ref>). So far, we have
established the following necessary conditions:
(i) Δ is a full-dimensional reflexive Delzant polytope in ^* (Propositions <ref> and <ref>).
(ii) ℱ is a facet of Δ that is supported on the affine hyperplane {w ∈^* |⟨ w,ν⟩ =
-1}.
(iii) The pair (s,k) belongs to the set {(0,0), (-1,0), (-1,1), (-1,2) } (Proposition <ref>).
(iv) If there is a vertex of Φ(M) on the linear hyperplane {w
∈^* |⟨ w, ν⟩ =
0}, then k=0 (Corollary <ref>).
(v) If k=2 then there exists a facet ℱ' supported on the hyperplane
{w∈^* |⟨ w, -ν⟩ =
-1} (Corollary <ref>).
The first step towards proving which quadruples are associated to a normalized monotone tall complexity one T-space
and whether the T action extends to a toric action
(Corollary <ref>)
is
establishing its combinatorial analogue, namely Theorem
<ref>. To this end, we introduce the following
terminology.
We say that a quadruple (Δ, ℱ,s,k) consisting of a polytope Δ
in ^*, a facet ℱ⊂Δ, and integers s,k, is admissible if it satisfies conditions (i)–(v) above.
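For instance (a direct check of conditions (i)–(v), recorded here only as an illustration), let Δ = [-1,1]^2 ⊂^* be the reflexive square with inward normals (± 1,0) and (0,± 1), let ν = (0,1), and let ℱ be the facet supported on {w ∈^* |⟨ w, ν⟩ = -1}. Then (Δ, ℱ, 0,0) is admissible, and so is (Δ, ℱ, -1,2): the square has no vertex on the hyperplane {w ∈^* |⟨ w, ν⟩ = 0}, so condition (iv) is vacuous, and the facet opposite to ℱ plays the role of ℱ' in condition (v).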
If (Δ, ℱ, -1,2) is admissible, then, by
the proof of Corollary <ref>, the polytope Δ is
contained in the strip {w ∈^* | -1 ≤⟨ w, ν⟩≤ 1}.
Let k ∈{1,2}. If (Δ, ℱ, -1,k) is admissible,
then for any edge e of Δ that intersects the linear
hyperplane {w ∈^* |⟨ w,ν⟩ =
0}, there exists a vertex v ∈ℱ and a weight
α_v ∈ℓ^* at v such that
e ∩{w ∈^* |⟨ w,ν⟩ =
0} = {v + α_v}.
In particular, e ∩{w ∈^* |⟨ w,ν⟩ =
0} is contained in ℓ^*.
Fix such an edge e. Since (Δ, ℱ, -1,k) is
admissible and k >0, Δ has no vertex on the linear
hyperplane {w ∈^* |⟨ w,ν⟩ =
0}. Hence, e intersects {w ∈^* |⟨ w,ν⟩ =
0} in the relative interior of e, so that the intersection e ∩{w ∈^* |⟨ w,ν⟩ =
0} consists of one element, which is not a vertex since (Δ, ℱ, -1,k) is
admissible. Let v ∈Δ be the vertex that is incident
to e and satisfies ⟨ v, ν⟩ < 0. Since Δ is
integral and is contained in the upper-half plane {w ∈^* |⟨ w,ν⟩≥ -1}, and since ν∈ℓ^* is
primitive, ⟨ v, ν⟩ = -1, i.e., v
∈ℱ. Moreover, e is the edge that comes out of
ℱ that is incident to v. Let α_v ∈ℓ^* be
the weight of v such that e is contained in the half-ray v +
_≥ 0⟨α_v ⟩. By Lemma
<ref>, ⟨ v+ α_v, ν⟩
= 0, whence e ∩{w ∈^* |⟨ w,ν⟩ =
0} = {v + α_v}. Since v, α_v ∈ℓ^*, the result
follows.
Admissible quadruples encode the abstract analogs of the functions
given by (<ref>) in Theorem <ref>.
Let (Δ, ℱ, s,k) be an admissible quadruple. The
abstract Duistermaat-Heckman function determined
by (Δ, ℱ, s,k) is the map Δ→ that sends w ∈Δ to
DH (w) = 2 - s ⟨ w, ν⟩ - k ρ(w),
where ρ : ^* → is given by
w ↦
0 if ⟨ w, ν⟩≤ 0
⟨ w, ν⟩ if ⟨ w, ν⟩≥ 0.
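For later comparison, writing t = ⟨ w, ν⟩, a direct substitution into the formula above gives the following profiles for the four admissible pairs (s,k):
DH(w) = 2 for (s,k) = (0,0), DH(w) = 2 + t for (s,k) = (-1,0),
DH(w) = min(2, 2 + t) for (s,k) = (-1,1), DH(w) = min(2 - t, 2 + t) for (s,k) = (-1,2).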
Let (Δ, ℱ, s,k) and (Δ', ℱ',
s',k') be admissible quadruples. By the arguments in the proof of
Theorem <ref>, if the abstract Duistermaat-Heckman
functions determined by (Δ, ℱ, s,k) and (Δ', ℱ',
s',k') are equal, then (Δ, ℱ, s,k) = (Δ', ℱ',
s',k').
In what follows, we fix the map
pr : ^* ×→^* given by projection to the first
component and we denote the Lebesgue measure on by dy. Given a polytope Δ' in ^* ×, the projection
pr(Δ') is a polytope in ^*. On such a projection we
define the combinatorial analog of the function constructed in Example
<ref> (the terminology used below is not standard).
Let Δ' be a polytope in ^* ×. The height
function of Δ :=pr(Δ') is the map Δ→ that sends w ∈Δ to
Length(Δ'_w):=∫_Δ'_w dy,
where Δ'_w:= pr^-1(w) ∩Δ'.
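As a simple sanity check of this definition: if Δ' = Δ× [-1,1] is the prism over Δ, then Δ'_w = {w}× [-1,1] for every w ∈Δ, so the height function of Δ = pr(Δ') is the constant function 2, which is the abstract Duistermaat-Heckman function determined by an admissible quadruple with (s,k) = (0,0).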
The combinatorial realizability result is as follows.
For each admissible quadruple (Δ, ℱ, s,k), there exists a
reflexive Delzant polytope Δ' in ^* × such that
pr(Δ') = Δ and the height function
of Δ equals the abstract Duistermaat-Heckman function determined
by (Δ, ℱ, s,k).
In Figure <ref> we provide the
complete list of reflexive Delzant polytopes Δ' such that the
projection is the
reflexive square Δ of Figure <ref>. Before turning to the proof of Theorem <ref>, following
<cit.>, we introduce an important
construction on smooth polytopes.
Let Δ be a Delzant polytope in ^* given by Δ=⋂_i=1^l {w∈^* |⟨ w,ν_i ⟩≥
c_i}, let ℱ be a face of Δ of codimension at
least two, and let I ⊂{1,…, l} be the subset of those
indices corresponding to the facets containing ℱ. We set
ν_0:= ∑_i ∈ Iν_i and, given
ϵ > 0, we also set c_0:= ϵ + ∑_i ∈ I
c_i. For any ϵ >0 such that any vertex v of Δ not
lying on ℱ satisfies ⟨ v, ν_0 ⟩ > c_0,
we define the blow-up of Δ along ℱ of size
ϵ to be the polytope
Δ∩{w ∈^* |⟨ w, ν_0 ⟩≥ c_0}.
As remarked in <cit.>, any blow-up of a Delzant
polytope (along any face and of any size) is a Delzant polytope (so
long as the face has codimension at least two and the size satisfies
the condition stated in Definition <ref>).
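As an illustration of the blow-up just defined (the standard corner chop, included only as an example), let Δ = [0,2]^2 with inward normals ν_1 = (1,0), ν_2 = (0,1), ν_3 = (-1,0), ν_4 = (0,-1) and c_1 = c_2 = 0, c_3 = c_4 = -2, and let ℱ be the vertex (0,0), so that I = {1,2}, ν_0 = (1,1) and c_0 = ϵ. For ϵ = 1 every vertex not lying on ℱ satisfies ⟨ v, ν_0 ⟩≥ 2 > 1 = c_0, and the blow-up Δ∩{w |⟨ w, ν_0 ⟩≥ 1} is the Delzant pentagon obtained from the square by replacing the corner (0,0) with the edge joining (1,0) and (0,1).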
We fix an admissible quadruple (Δ, ℱ, s,k). We
split the proof into two cases, depending on whether k=0
or not.
Case 1: k=0. The abstract Duistermaat-Heckman function is DH(w) = 2 - s⟨ w, ν⟩ for all w ∈Δ. We set
Δ_(s,0)':={(w,y) ∈^* ×| w ∈Δ , -1
≤ y ≤ DH(w) -1},
where DH : Δ→ is the abstract Duistermaat-Heckman
function determined by (Δ, ℱ,
s,k) – see Figure <ref>. First, we claim that pr(Δ_(s,0)') = Δ. To
this end, it suffices to prove that
DH(w) ≥ 1 for all w ∈Δ. This follows immediately from the fact that,
by Definition <ref>, s
∈{0,-1} and Δ is contained in the upper half-space of ^*
given by {w ∈^* |⟨ w, ν⟩≥
-1}. By
construction, the height function of Δ_(s,0)' equals DH, so it
remains to show that Δ_(s,0)' is a reflexive Delzant polytope. To this
end, we write Δ in its minimal representation (see (<ref>))
Δ=⋂_i=1^l {w ∈^* |⟨ w, ν_i ⟩≥
-1},
where ν_i ∈ℓ^* is primitive for
all i=1,…, l and, without loss of generality, the hyperplane supporting
ℱ is {w∈^* |⟨ w,ν_1⟩ =
-1}, i.e., ν_1 = ν. By (<ref>),
Δ_(s,0)' ={(w,y) ∈^* ×|⟨ (w,y), (0,1)
⟩≥ -1}
∩{(w,y)
∈^* ×|⟨ (w,y),(-sν,-1) ⟩≥ -1}
∩⋂_i=1^l {(w,y) ∈^* ×|⟨ (w,y),(ν_i,0) ⟩≥
-1},
where, by a slight abuse of notation, we denote the natural pairing
between ^* × and × also by ⟨·,
·⟩. Therefore, Δ_(s,0)' is a polytope (see
Section <ref>). Moreover, the vertices of Δ_(s,0)' are precisely the elements of
the set
{(v, -1) ∈^* ×| v ∈Δ
vertex} ∪ {(v, 1 - s⟨ v, ν⟩) ∈^*
×| v ∈Δ
vertex}.
We observe that, since Δ is reflexive, any vertex of Δ
lies in ℓ^*; since ν∈ℓ, it follows that, if v ∈Δ is a vertex, then 1 - s⟨ v, ν⟩∈. Hence, by (<ref>), any vertex of Δ_(s,0)'
lies in ℓ^* ×, i.e., Δ_(s,0)' is integral. Since
ν_i ∈ℓ^* is primitive, (ν_i,0) ∈ℓ^* × is
primitive. Moreover, since ν
= ν_1 and since s ∈{0,-1}, (-sν,-1) ∈ℓ^* × is also primitive. Hence, by (<ref>), Δ_(s,0)' is
reflexive. Finally, to see that Δ_(s,0)' is Delzant, we fix a
vertex v ∈Δ. By (<ref>), the set of inward normals of the facets of
Δ_(s,0)' that contain (v,-1) (respectively (v, 1 - s⟨
v,ν⟩)) consists of (0,1)
(respectively (-sν, -1)), and of {(ν_v,
0)}, where {ν_v} is the set of inward normals of the facets of
Δ that contain v. Since Δ is Delzant, it follows
that Δ_(s,0)' is smooth at (v,-1) (respectively (v, 1 - s⟨
v,ν⟩)). Since any vertex of Δ_(s,0)' is equal to (v,-1)
or (v, 1 - s⟨
v,ν⟩) for some vertex v of Δ,
Δ_(s,0)' is Delzant, thus completing the proof in this case.
Case 2: k≠ 0. Since (Δ, ℱ, s,k) is
admissible, (s,k) ∈{(-1,1),(-1,2)}.
Moreover, by Definition <ref>, Δ has no vertices on the linear hyperplane {w ∈^* |⟨ w, ν⟩ = 0 } and the quadruple (Δ,
ℱ, 0,0) is also admissible. Hence, by Case 1, there exists a reflexive Delzant polytope Δ_(0,0)' satisfying the conclusions
of the statement for the admissible quadruple (Δ,
ℱ, 0,0). We deal with the cases (s,k) = (-1,1) and (s,k) = (-1,2)
separately.
∙ Suppose that (s,k) = (-1,1). By (<ref>), the reflexive
Delzant polytope Δ_(0,0)' has a codimension two face
ℱ̃ given by the intersection of the facets
supported by the affine hyperplanes
{(w,y) ∈^* ×|⟨ (w,y), (0,-1) ⟩ = -1 } and
{(w,y) ∈^* ×|⟨ (w,y), (ν,0) ⟩ = -1
}. This is a copy of ℱ on the affine hyperplane
{(w,y) ∈^* ×| y = 1}. We wish to perform the blow-up of Δ'_(0,0) along
ℱ̃ of size 1 (see Figure <ref>). To this end, with the
notation in Definition <ref>, ν_0 = (ν,-1)
and c_0 = -1. By (<ref>) and since s = -1, a vertex of Δ'_(0,0)
that does not lie on ℱ̃ is either of the form
(v,-1) for some vertex v of Δ or of the form (v,1) for
some vertex v of Δ that does not lie on
ℱ. Since ⟨ w, ν⟩≥ -1 for any w ∈Δ, if v is a vertex of Δ, then
⟨ (v,-1),(ν,-1) ⟩≥ -1 + 1 = 0 > -1 = c_0.
On the other hand, since there are no vertices of Δ lying on
the linear hyperplane {w ∈^* |⟨ w, ν⟩ = 0
} and since Δ is integral, if v is a vertex of Δ
that does not lie on ℱ, then ⟨ v, ν⟩≥ 1. Hence, in this case,
⟨ (v,1),(ν,-1) ⟩≥ 1 -1 = 0 > -1 = c_0.
By (<ref>) and (<ref>), we can perform the blow-up of Δ'_(0,0) along
ℱ̃ of size 1 that we denote by
Δ'_(-1,1), i.e.,
Δ'_(-1,1) = Δ'_(0,0)∩{(w,y) ∈^* ×|⟨ (w,y),(ν,-1) ⟩≥ -1}.
By (<ref>) and (<ref>), and since s =
-1,
Δ'_(-1,1) = {(w,y) ∈^* ×| w ∈Δ , -1
≤ y ≤min(1,1 + ⟨ w, ν⟩)}.
Since Δ'_(0,0) is Delzant, by Remark
<ref>, Δ'_(-1,1) is also Delzant. By
(<ref>), it can be checked directly that a vertex of
Δ'_(-1,1) is one of the following three types:
* (v, min(1,1 + ⟨ v, ν⟩)) for some vertex v
of Δ,
* (w,1), where w lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, or
* (v,-1) for some vertex v of Δ.
Since Δ is integral and since ν∈ℓ^*, if v is a
vertex of Δ, then (v, min(1,1 + ⟨ v, ν⟩)) and (v,-1) belong to ℓ^* ×. Moreover, by Lemma
<ref>, if w lies on an edge
of Δ and satisfies ⟨ w, ν⟩ = 0, then w ∈ℓ^*. Hence, (w,1) ∈ℓ^* ×, so that
Δ'_(-1,1) is integral. Moreover, by
(<ref>) and (<ref>), Δ'_(-1,1)
is reflexive. Since (s,k) = (-1,1), it follows that the map Δ→ that
sends w to min(1,1 + ⟨ w, ν⟩) equals DH - 1,
where DH : Δ→ is the abstract Duistermaat-Heckman
function determined by (Δ, ℱ,
-1,1). This implies both that pr(Δ'_(-1,1)) =
Δ (since the minimal value of the above map on Δ is
0), and that the height function of Δ equals the abstract Duistermaat-Heckman
function determined by (Δ, ℱ,
-1,1), as desired.
∙ Suppose that (s,k) = (-1,2): Since (Δ, ℱ,
-1,2) is admissible, there exists a facet ℱ' of
Δ supported on the affine hyperplane {w ∈^* |⟨ w,-ν⟩ =
-1} and the quadruple (Δ, ℱ,
-1,1) is also admissible. Let Δ'_(-1,1) be the reflexive
Delzant polytope constructed from (Δ, ℱ,
-1,1) as above. Hence, by (<ref>) and (<ref>), the reflexive
Delzant polytope Δ'_(-1,1) has a codimension two face
ℱ̃' given by the intersection of the facets
supported by the affine hyperplanes {(w,y) ∈^* ×|⟨ (w,y), (0,1) ⟩ = -1 } and
{(w,y) ∈^* ×|⟨ (w,y), (-ν,0) ⟩ =
-1}. This is a copy of ℱ' on the affine hyperplane
{(w,y) ∈^* ×| y = -1}. We wish to perform the blow-up of Δ'_(-1,1) along
ℱ̃' of size 1 (see Figure <ref>). To this end, with the
notation in Definition <ref>, ν_0 = (-ν,1)
and c_0 = -1. A vertex of Δ'_(-1,1)
that does not lie on ℱ̃' is of one of three types:
* (v, min(1,1 + ⟨ v, ν⟩)) for some vertex v
of Δ,
* (w,1), where w ∈Δ lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, or
* (v,-1) for some vertex v of Δ that does not lie on
ℱ',
(see the proof in the case (s,k) = (-1,1).) In the first case, we have that
⟨ (v, min(1,1 + ⟨ v, ν⟩)), (-ν,1) ⟩
= min(1- ⟨ v, ν⟩,1) > -1 = c_0,
where the inequality follows from the fact that Δ is
contained in the strip {w ∈^* | -1 ≤⟨ w, ν⟩≤ 1} (see Remark <ref>). In the second case, we
have that
⟨ (w,1), (-ν,1)⟩ = 1 > -1 = c_0.
As in the case (s,k)=(-1,1), if v is a vertex of
Δ,
then ⟨ v, -ν⟩≥ 1. Hence, in the third case, we have that
⟨ (v,-1), (-ν,1) ⟩≥ 0 > -1 =c_0.
By (<ref>), (<ref>) and (<ref>), we can perform the blow-up of Δ'_(-1,1) along
ℱ̃' of size 1 that we denote by
Δ'_(-1,2), i.e.,
Δ'_(-1,2) = Δ'_(-1,1)∩{(w,y) ∈^* ×|⟨ (w,y),(-ν,1) ⟩≥ -1}.
By (<ref>), we have that
Δ'_(-1,2) = {(w,y) ∈^* ×| w ∈Δ
, max(-1,-1 + ⟨ w, ν⟩)
≤ y ≤min(1,1 + ⟨ w, ν⟩)}.
Since Δ'_(-1,1) is smooth, by Remark
<ref>, Δ'_(-1,2) is also
smooth. Moreover, by
(<ref>), it can be checked directly that a vertex of
Δ'_(-1,2) is one of the following three types:
* (v, min(1,1 + ⟨ v, ν⟩)) for some vertex v
of Δ,
* (w, ± 1), where w lies on an edge of Δ and satisfies ⟨ w, ν⟩ = 0, or
* (v, max(-1,-1+⟨ v, ν⟩)) for some vertex v of Δ.
As in the case (s,k)=(-1,1), it follows that
Δ'_(-1,2) is integral. Moreover, since
Δ'_(-1,1) is reflexive, by (<ref>), Δ'_(-1,2)
is reflexive. Since Δ is contained in the strip {w ∈^* | -1 ≤⟨ w, ν⟩≤ 1}, the maximal (respectively
minimal) value of the map
Δ→ that takes w to max(-1,-1 + ⟨ w, ν⟩) (respectively min(1,1 + ⟨ w, ν⟩)) is
zero. Hence, pr(Δ'_(-1,2)) =
Δ. Moreover, the height function of Δ is the map
Δ→ that sends w ∈Δ to
min (1,1 + ⟨ w, ν⟩) - max(-1,-1 + ⟨ w, ν⟩) = min(2 - ⟨ w, ν⟩, 2 + ⟨ w, ν⟩).
Since (s,k) = (-1,2), the above map equals the abstract
Duistermaat-Heckman function determined by (Δ, ℱ,
-1,2), as desired.
Theorem <ref> and Delzant's classification of compact
symplectic toric manifolds <cit.> yield the following
geometric realizability and extension result.
If (Δ, ℱ, s,k) is an admissible quadruple, then
there exists a normalized monotone tall
complexity one T-space such that
* Φ(M) = Δ and the
Duistermaat-Heckman function of equals the abstract
Duistermaat-Heckman function determined by (Δ,
ℱ, s,k), and
* the Hamiltonian T-action extends to an effective Hamiltonian (T ×
S^1)-action.
By Theorem <ref>, there exists a reflexive Delzant polytope Δ' in ^* × such that
pr(Δ') = Δ and the height function
of Δ equals the abstract Duistermaat-Heckman function determined
by (Δ, ℱ, s,k). By <cit.>, there exists a compact
complexity zero (T × S^1)-space (M,ω, Φ̃ =
(Φ,Ψ)) such that the moment map image Φ̃(M) =
Δ', where we identify (×)^* with ^* ×. We claim that satisfies the desired properties. To see this, we observe that, by construction, it is
tall and has complexity one, and the T-action extends to an effective Hamiltonian (T ×
S^1)-action. Moreover, since Δ' is a reflexive Delzant
polytope, by Proposition <ref>, (M,ω,
Φ̃) is normalized monotone so that, in particular,
(M,ω) satisfies c_1 = [ω]. Since Δ is reflexive
Delzant and since pr(Δ') = Δ,
Φ(M) = Δ, so that Φ satisfies the weight sum
formula. Hence, is normalized monotone. Finally, by Example
<ref>, the Duistermaat-Heckman function
of equals the height function of Δ. Since the latter equals the abstract Duistermaat-Heckman function determined
by (Δ, ℱ, s,k), the result follows.
In fact, the constructions in the proof of Theorem
<ref> have geometric counterparts that allow us to give
an explicit geometric description of in Corollary
<ref>. For instance, the case k=0 is described
explicitly in <cit.>, while it is well-known that
the combinatorial blow-up of a polytope along a face corresponds to
an equivariant symplectic blow-up (see <cit.>).
We can prove another important result of this paper.
By Corollary <ref>, we may assume that is normalized monotone. Hence,
Δ := Φ(M) is a reflexive Delzant polytope by Proposition <ref>. Let
ℱ_min⊂Φ(M) be a minimal facet and let
(s,k) be as in the statement of
Proposition <ref>. By construction, the quadruple (Δ,
ℱ_min, s,k) is admissible. Hence, by Corollary
<ref>, there exists a normalized monotone tall
complexity one T-space (M',ω', Φ') such that
* its Duistermaat-Heckman function equals the abstract
Duistermaat-Heckman function associated to
(Δ, ℱ_min, s,k), and
* the Hamiltonian T-action extends to an effective Hamiltonian (T ×
S^1)-action.
By construction, and (M',ω', Φ') have equal
Duistermaat-Heckman functions. Hence, by Theorem
<ref>, they are isomorphic and the result follows.
§.§ Compact monotone tall complexity one spaces are
equivariantly Fano
In this section, we prove the last main result of our paper, Theorem
<ref>. To this end, we recall that a compact complex manifold
(Y,J) is Fano if and only if there exists a Kähler form σ∈Ω^1,1(Y) such that c_1(Y) = [σ].
By Corollary <ref>, there is no loss of
generality in assuming that is normalized monotone. By Theorem <ref>, the Hamiltonian T-action extends to an effective
Hamiltonian (T × S^1)-action. We denote the corresponding
normalized monotone symplectic toric manifold by (M,ω, Φ̃ =
(Φ,Ψ)). By the classification of compact
symplectic toric manifolds in <cit.>, there exists an
integrable almost complex structure J on M that is compatible with ω and
(T × S^1)-invariant; moreover, ω equals the Kähler
form of (M,J). By Proposition
<ref>, [ω] = c_1(M) > 0,
so that the Kähler
manifold (M,J) is Fano. Finally, by <cit.>, the T-action extends to an effective holomorphic T_-action,
as desired.
99
atiyah
M.F. Atiyah,
Convexity and commuting Hamiltonians,
Bull. London Math. Soc., 14, no. 1, (1982), 1 – 15.
ballmann
W. Ballmann,
Lectures on Kähler Manifolds,
ESI Lect. Math. Phys., European Mathematical Society
(EMS), Zürich, 2006.
batyrev
V. V. Batyrev,
Dual polyhedra and mirror symmetry for Calabi-Yau
hypersurfaces in toric varieties,
J. Algebraic Geom., 3, no. 3, (1994), 493 – 535.
bp
M. Brion, C. Procesi,
Action d'un tore dans une variété projective,
Operator algebras, unitary representations, enveloping
algebras, and invariant theory (Paris 1989), 509 – 539,
Progr. Math., 92, Birkhäuser Boston, Boston, MA, 1990.
ck
Y. Cho, M.K. Kim,
Log-concavity of complexity one Hamiltonian torus
actions,
C. R. Math. Acad. Sci. Paris, 350, no. 17-18,
(2012), 845 – 848.
cho
Y. Cho,
Classification of six dimensional monotone symplectic
manifolds admitting semifree circle actions I,
Internat. J. Math., 30, no.6, Paper No. 1950032,
(2018), 71 pp.
cho2
Y. Cho,
Classification of six dimensional monotone symplectic
manifolds admitting semifree circle actions II,
Internat. J. Math., 32, no. 2, Paper No. 2050120,
(2021), 47 pp.
cho3
Y. Cho,
Classification of six dimensional monotone symplectic
manifolds admitting semifree circle actions III,
preprint, (2019), arXiv:1905.07292v1.
delzant
T. Delzant,
Hamiltoniens périodiques et image convexes de l'application moment,
Bull. Soc. Math. France, 116, no. 3, (1988), 315 –
339.
DeVito
J. DeVito,
Homeomorphisms of the 2-sphere S^2 fixing a set of
points,
Mathematics Stack Exchange,
https://math.stackexchange.com/q/2947614https://math.stackexchange.com/q/2947614,
version: 2018-10-09.
dh
J.J. Duistermaat, G.J. Heckman,
On the variation in the cohomology of the symplectic form of
the reduced phase space,
Invent. Math., 69, no. 2, (1982), 259 – 268.
dk
J.J. Duistermaat, J.A.C. Kolk,
Lie Groups,
Universitext, Springer-Verlag, Berlin, 2000.
ep
M. Entov, L. Polterovich,
Rigid subsets of symplectic manifolds,
Compos. Math., 145, no. 3, (2009),
773 – 826.
fp_hyp
J. Fine, D. Panov,
Hyperbolic geometry and non-Kähler manifolds with
trivial canonical bundle,
Geom. Top., 14, no. 3, (2010), 1723 – 1763.
fp
J. Fine, D. Panov,
Circle invariant fat bundles and symplectic Fano
6-manifolds,
J. London Math. Soc., 91, no. 3, (2015), 709 – 730.
gvhs
L. Godinho, F. von Heymann, S. Sabatini,
12, 24 and beyond,
Adv. Math., 319, (2017), 472 – 521.
GLS
V. Guillemin, E. Lerman, S. Sternberg,
Symplectic Fibrations and Multiplicity Diagrams,
Cambridge University Press, Cambridge, 1996.
gs
V. Guillemin, S. Sternberg,
Convexity properties of the moment mapping,
Invent. Math., 67, no. 3, (1982), 491 – 513.
gs-kahler
V. Guillemin, S. Sternberg,
Geometric Quantization and Multiplicities of Group
Representations,
Invent. Math., 67, no. 3, (1982), 515 – 538.
gs-local
V. Guillemin, S. Sternberg,
A normal form for the moment map,
Differential geometric methods in mathematical physics
(Jerusalem, 1982), Math. Phys. Stud., 6, Reidel, Dordrecht,
1984, 161 – 175.
gs-inve
V. Guillemin, S. Sternberg,
Birational equivalence in the symplectic category,
Invent. Math., 97, no. 3, (1989), 485 – 522.
gs-supersymmetry
V. Guillemin, S. Sternberg,
Supersymmetry and Equivariant de Rham Theory,
Mathematics Past and Present, Springer-Verlag, Berlin, 1999.
hnp
C. Haase, B. Nill, A. Paffenholz,
Lecture Notes on Lattice Polytopes
preprint, available at
https://www2.mathematik.tu-darmstadt.de/ paffenholz/daten/preprints/20201007_
Lattice_Polytopes.pdfhttps://www2.mathematik.tu-darmstadt.de/∼paffenholz/daten/preprints/20201007_Lattice_Polytopes.pdf.
hirze
F. Hirzebruch, T. Berger, R. Jung,
Manifolds and modular forms,
Aspects of Mathematics, E20, With appendices by Nils-Peter
Skoruppa and by Paul Baum, Friedr. Vieweg & Sohn, Braunschweig,
1992.
isko_prok
V.A. Iskovskikh, Yu. G. Prokhorov,
Fano varieties,
in Algebraic Geometry, V, Encyclopaedia Math. Sci., 47, Springer, Berlin, 1999, 1 – 247.
kar_not_log
Y. Karshon,
Example of a non-log-concave Duistermaat-Heckman
measure,
Math. Res. Lett., 3, no. 4, (1996), 537 – 540.
karshon
Y. Karshon,
Periodic Hamiltonian flows on four dimensional
manifolds,
Mem. Amer. Math. Soc., 141, no. 672, 1999.
kt1
Y. Karshon, S. Tolman,
Centered complexity one Hamiltonian torus actions,
Trans. Amer. Math. Soc., 353, no. 12, (2001), 4831
– 4861.
kt2
Y. Karshon, S. Tolman,
Complete invariants for Hamiltonian torus actions with two
dimensional quotients,
J. Symplectic Geom., 2, no. 1, (2003), 25 – 82.
kt3
Y. Karshon, S. Tolman,
Classification of Hamiltonian torus actions with
two-dimensional quotients,
Geom. Topol., 18, no. 2, (2014), 669 – 716.
kirwan
F.C. Kirwan,
Cohomology of quotients in symplectic and algebraic
geometry,
Mathematical Notes, 31, Princeton University Press,
Princeton, NJ, 1984.
kollar
J. Kollár, Y. Miyaoka, S. Mori,
Rational connectedness and boundedness of Fano
manifolds,
J. Diff. Geom., 36, no. 3, (1992), 765 – 775.
lz
J.C. Lagarias, G.M. Ziegler,
Bounds for lattice polytopes containing a fixed number of interior
points in a sublattice,
Canad. J. Math., 43, no. 5, (1991), 1022 – 1035.
lerman_tolman
E. Lerman, S. Tolman,
Hamiltonian torus actions on symplectic orbifolds and
toric varieties,
Trans. Amer. Math. Soc., 349, no. 10, (1997), 4201
– 4230.
li
H. Li,
The fundamental group of symplectic manifolds with Hamiltonian Lie group actions,
J. Symplectic Geom., 4, no. 3, (2006), 345 – 372.
lp
N. Lindsay, D. Panov,
S^1-invariant symplectic hypersurfaces in dimension
6 and the Fano condition,
J. Top., 12, no. 1, (2019), 221 – 85.
lindsay
N. Lindsay,
Hamiltonian circle actions on symplectic Fano
manifolds,
Ph.D. thesis, King's College London, 2018.
marle
C.-M. Marle,
Modèle d'action hamiltonienne d'un groupe de Lie sur une
variété symplectique,
Rend. Sem. Mat. Univ. Politec. Torino, 43, no. 2,
(1985), 227 – 251.
marsden_weinstein
J. E. Marsden, A. Weinstein,
Reduction of symplectic manifolds with symmetry,
Rep. Math. Phys., 5, (1974), 121 – 130.
mcduff displacing
D. McDuff,
Displacing Lagrangian toric fibers via probes,
Low-Dimensional and Symplectic Topology, in: Proc. Sympos. Pure Math., vol. 82, Amer. Math. Soc.,
Providence, RI, (2011),
131 – 160.
mcduff_structure
D. McDuff,
The structure of rational and ruled symplectic 4-manifolds,
J. Amer. Math. Soc. 3, (1990), 679–712.
mcduff-salamon
D. McDuff, D. Salamon,
Introduction to symplectic topology,
Oxford Mathematical Monographs, Second Edition, The
Clarendon Press, Oxford University Press, New York, 1998.
mcduff_sal
D. McDuff, D. Salamon,
J-holomorphic curves and symplectic topology,
AMS Colloquium Publications, 52, American
Mathematical Society, Providence, RI, 2004.
mcduff_tolman
D. McDuff, S. Tolman,
Polytopes with Mass Linear Functions II: The
Four-Dimensional Case,
Int. Math. Res. Not. IMRN, no. 15, (2013), 3509 – 3599.
Nicolaescu
L. Nicolaescu,
An Invitation to Morse Theory,
Universitext, second edition, New York, (2011).
paradan
P.-E. Paradan,
Wall crossing formulaes in Hamiltonian geometry,
Progress in Mathematics, Geometric Aspects of Analysis and Mechanics. In Honor of the 65th Birthday of Hans Duistermaat.
(292), 2011, 295 – 343.
prv
B. Poonen, F. Rodriguez-Villegas,
Lattice Polygons and the Number 12,
The American Mathematical Monthly, 3, no. 3 (Mar.,
2000), 238 – 250.
rez
A.G. Reznikov,
Symplectic twistor spaces,
Ann. Global Ann. Geom., 11, no. 2, (1993), 109 – 118.
ss
S. Sabatini, D. Sepe,
On topological properties of positive complexity one spaces,
Transform. Groups, 27, no. 2, (2022), 723 – 735.
Sjamaar
R. Sjamaar,
Convexity properties of the moment mapping re-examined,
Adv. Math. 138, (1998), 46 – 91.
tolman_inven
S. Tolman,
Examples of non-Kähler Hamiltonian torus actions,
Invent. Math., 131, (1998), 299 – 310.
ziegler
G. M. Ziegler,
Lectures on Polytopes,
Graduate Texts in Mathematics, 152. Springer-Verlag, New
York, (1995).
|
http://arxiv.org/abs/2307.04023v1 | 20230708180031 | SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research | ["Zixuan Chen", "Zhigao Zhao", "Zijian Li", "Jiang Shao", "Sen Liu", "Yang Xu"] | cs.NI | ["cs.NI", "cs.PF"] |
SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research

Zixuan Chen†, Zhigao Zhao†, Zijian Li†, Jiang Shao†, Sen Liu†, Yang Xu†∗
{zxchen20, zgzhao20, lizj21, jshao20, senliu, xuy}@fudan.edu.cn
†School of Computer Science, Fudan University, Shanghai, China
Institute of Fintech, Fudan University, Shanghai, China
Peng Cheng Laboratory, Shenzhen, China

∗ Corresponding author: Yang Xu.
This paper will be published in IEEE CLUSTER 2023. Preview version only.
Network experiments are essential to network-related scientific research (e.g., congestion control, QoS, network topology design, and traffic engineering). However, (re)configuring various topologies on a real testbed is expensive, time-consuming, and error-prone. In this paper, we propose Software Defined Topology Testbed (SDT), a method for constructing a user-defined network topology using a few commodity switches. SDT is low-cost, deployment-friendly, and reconfigurable, and it can run multiple sets of experiments under different topologies by simply using different topology configuration files at the controller we designed. We implement a prototype of SDT and conduct numerous experiments. Evaluations show that SDT introduces at most 2% extra overhead on multi-hop latency compared with full testbeds and is far more efficient than software simulators (reducing the evaluation time by up to 2899x). SDT is more cost-effective and scalable than existing Topology Projection (TP) solutions. Further experiments show that SDT can support various network research experiments at a low cost on topics including but not limited to topology design, congestion control, and traffic engineering.
Testbed, reconfigurable topology, network evaluation
§ INTRODUCTION
As the main bottleneck of Data Centers (DCs), the Data Center Networks (DCNs) have attracted much research attention from both industry and academia <cit.>. There exist some commonly used DCN topologies that are scalable and cost-effective including Fat-Tree <cit.>, Dragonfly <cit.>, Torus <cit.>, BCube <cit.>, HyperBCube <cit.>, et al. Research on DCNs, including congestion control mechanisms, routing algorithms, deadlock avoidance functions, et al., should be applied to most of these topologies (or at least some) for better generality (e.g., <cit.>). There are also many pieces of state-of-the-art research on optimizing the physical topology to improve the application performance like Distributed Machine Learning (DML) <cit.>. All of these require a testbed that can support multiple topologies to verify the effects of each mechanism.
It is not easy to support multiple topologies at the same time and do reconfiguration among them. First, building a topology such as Fat-Tree can be complex. For example, it needs 20 4-port switches and 48 cables to deploy a standard Fat-Tree topology supporting only 16 nodes (Figure <ref>). In addition, it is more complicated to support different topologies and reconfigurations simultaneously. Connections are error-prone and difficult to check when reconfiguring. Although emulators (e.g., Mininet <cit.>, Open vSwitch <cit.>, OpenStack <cit.>) can simulate a variety of topologies, they still have some obvious drawbacks such as long simulation time and insufficient authenticity of results. Therefore, deploying a full testbed for evaluation is crucial and irreplaceable, even if it is hard to make.
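The 20-switch, 48-cable figure quoted above follows from the standard k-ary Fat-Tree formulas; the short Python snippet below is our own sanity check of that arithmetic (it is not part of the SDT toolchain), with k = 4 matching the 4-port switches in the example.

def fat_tree_counts(k):
    # Standard k-ary Fat-Tree (k even): k pods, each with k/2 edge and k/2
    # aggregation switches, plus (k/2)^2 core switches.
    core = (k // 2) ** 2
    agg = edge = k * (k // 2)
    switches = core + agg + edge
    hosts = k ** 3 // 4                    # end hosts supported
    edge_agg = k * (k // 2) * (k // 2)     # intra-pod switch-to-switch cables
    agg_core = core * k                    # each core switch links to every pod
    cables = hosts + edge_agg + agg_core   # host cables + switch cables
    return switches, hosts, cables

print(fat_tree_counts(4))  # -> (20, 16, 48), matching the numbers above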
As far as we know, a qualified real-world testbed requires several characteristics, including fast topology reconfiguration, cost-friendly deployment, and convenient maintenance. The challenges in designing such a testbed lie in how to support topology reconfiguration, preferably without manual switching of cables; how to reduce the cost of the test platform, including hardware and labor costs; and even how to support user-defined topologies, rather than being limited to the existing commonly used topologies.
Switch Projection (SP) is a solution to construct topologies for network experiments but needs heavy staffing. The good news is that the Micro Electro Mechanical System (MEMS) optical switches can be used to build reconfigurable network topologies <cit.>. Based on its reconfigurable and lossless bi-switching property, it can take the place of SP's manpower. We call the SP with MEMS optical switches the “Switch Projection-Optical Switch (SP-OS)”. SP-OS can construct user-defined topologies and support real-time reconfiguration without manual operations. However, it still has certain disadvantages, such as high cost and poor expandability. Considering the above characteristics and challenges, we propose a topology-reconfigurable testbed named Software Defined Topology Testbed (SDT) without costly optical switches to achieve lower cost and better scalability.
In short, the contributions of the paper are
* We summarize the methodology of Topology Projection (TP) and propose SDT, a testbed solution for building real topologies. SDT uses commodity OpenFlow switches to construct various topologies. Once the connection deployment is completed, the topology (re)configuration can be finished in a short time without manually changing the physical connections or using optical switches (Figure <ref>).
* We develop an easy-to-use SDT controller supporting user-defined topologies. Users can develop their routing strategy or other new technologies with the SDT controller. The transformation process from logical topology to physical topology is fully automated.
* We compare SDT with existing TP methods, and SDT shows better cost-effectiveness and scalability. We use real applications to evaluate 1) the latency and bandwidth differences compared with the full testbed and 2) the Application Completion Time (ACT) and time consumption compared with the simulator. Evaluations show that SDT has only a 0.03-2% deviation in latency compared to the full testbed and reduces the evaluation time by up to 2899x compared to the simulator in a 16-second HPC benchmark for communication efficiency with 32 nodes.
* We further implement some prevalent network functions on SDT, including routing strategy, deadlock avoidance, and congestion control. SDT shows substantial flexibility in network evaluations.
The rest of the paper is organized as follows. We introduce the related works in <ref>. We present the motivation and design of SDT in detail in Sections <ref> and <ref>. A prototype of SDT controller is introduced in <ref>. The accuracy and efficiency of SDT are evaluated in <ref>, with some state-of-the-art network functions implemented. We discuss SDT in <ref> and conclude the paper in <ref>.
§ RELATED WORKS
§.§ Reconfigurable Networks
To better allocate link bandwidth in response to the non-uniform traffic often present in DCNs, some researchers propose reconfigurable networks, which can dynamically adjust links based on real-time network traffic to better serve hot node pairs (nodes with heavy traffic). These reconfigurable networks are often implemented with optical devices, which can offer lossless bi-switching capabilities. The optical devices used in reconfigurable networks can mainly be categorized into MEMS-based optical switches and other specialized optical devices (e.g., free-space optics and optical devices that forward based on light wavelength).
§.§.§ Reconfigurable Networks based on MEMS Optical Switch
MEMS optical switches use several tiny mirrors on the silicon crystal to forward the light between different fiber interfaces. The tiny mirrors are called microarrays, working as a reconfigurable static crossbar by rotation.
MEMS optical switches have been put into practical usage very early, and the technology is relatively mature and less error-prone. Therefore, early reconfigurable networks, such as c-Through <cit.> and Helios <cit.>, use MEMS optical switches to build reconfigurable networks. However, MEMS optical switches still have drawbacks, such as their relatively large reconfiguration delays (about 100ms) and high hardware costs.
§.§.§ Reconfigurable Networks based on Customized Optics
To achieve faster reconfiguration, researchers have proposed other customized optical devices, such as Free Space Optics used in Firefly <cit.> and ProjecToR <cit.>, which reflect the laser propagating in the air with mirrors that can do faster angle adjustment to complete the reconfiguration. This kind of network can achieve reconfiguration as fast as 12μ s, but it is easily disturbed by the environment, which causes significant optical path shifts and makes the deployment impossible.
In addition, Sirius <cit.> uses Arrayed Waveguide Grating Router (AWGR) to forward the input light of different wavelengths to the corresponding output ports to complete the reconfiguration. However, this method needs to be used with a highly customized tunable laser that can quickly generate lasers of different wavelengths, which is also less practical.
Besides these, there are some other similar customized-optics-based fast reconfiguration works like <cit.>.
§.§ Network Evaluation Tools
Network researchers have developed and used many network evaluation tools in the past few decades. We roughly divide them into 1) simulator, 2) emulator, and 3) testbed. They have played a significant role in the progress of network technologies, but they also have certain disadvantages.
§.§.§ Simulator
Existing network simulation tools such as NS-2 <cit.>, NS-3 <cit.>, OPNET <cit.>, OMNET++ <cit.> and GloMoSim <cit.> offer efficient and cost-effective ways to evaluate the network performance under different conditions. However, compared with the testbed, they lack both scalability and reality. Simulators may take several days to complete one simulation, and they also suffer from the lack of ability to simulate various random situations that might occur in real networks.
§.§.§ Emulator
The primary goal of network emulators such as Mininet <cit.> with Open vSwitch (OVS) <cit.> and Netem <cit.> is to create an environment whereby users can flexibly combine the VMs, applications, products, and services to perform a relatively more authentic simulation. However, the performance of emulators is poor in the high bandwidth environment (10Gbps+) or medium-scale topologies (containing 20+ switches) due to the limitation of the system resources. Besides, emulators cannot do everything we want, e.g., Mininet has no official support for Priority-based Flow Control (PFC), even though PFC is already a standard feature.
As a widely used cloud computing infrastructure software, OpenStack <cit.> can be used to build a set of computing nodes with specific topologies using commodity servers and switches. However, the construction of topology on OpenStack is still virtualized by OVS. As a result, the network topology on OpenStack has scalability and reality problems and will be limited by the bandwidth.
§.§.§ Testbed
Existing testbed platforms available to researchers include Emulab <cit.>, CloudLab <cit.> and PlanetLab <cit.>, which have made considerable progress in making testbeds as easy to use and control as simulations. Nevertheless, their drawbacks are also obvious. Whether virtualization is used or not, reconfiguring the testbed requires heavy manual operations. Several testbeds dedicated to wireless environments have been proposed, such as TWIST <cit.> and DRIVE <cit.>. These works mainly target wireless environments and therefore do not apply to DCN-related experiments.
§ MOTIVATION AND BACKGROUND
This section firstly introduces our motivation for “Topology Projection (TP)”. Then, we summarize a straightforward solution named Switch Projection (SP). The SP can support TP easily but can not be reconfigured without manpower. MEMS optical switches can be introduced for topology reconfiguration, which is introduced at the end of this section with the name Switch Projection-Optical Switch (SP-OS).
§.§ Why Do We Need the SDT?
By comprehensively considering the pros and cons of three types of existing network evaluation tools (Table <ref>), we find that they are generally unable to achieve high-performance and low-cost evaluations for various network topologies. Although the simulation is easy to operate and the cost is relatively small, its scalability is limited by the high time cost. As the number of nodes increases and the network traffic grows, the simulation time can be thousands of times longer than the real-world ACT. Testbeds are needed to get better evaluation scalability and efficiency. However, the deployment expenses of testbeds are high and even unacceptable for researchers.
Therefore, we want to construct a system that performs almost the same as the full testbed with high efficiency and scalability. The system should support fast reconfiguration among various topologies without changing the physical connections, under an acceptable budget. That is why we present SDT. The efficiency of SDT is close to that of full testbeds, without any manual operation during reconfiguration and with lower hardware costs.
§.§ A Possible Solution: Switch Projection
Some works (e.g., <cit.>) use a switch to construct a simple topology for evaluation. We call this method of constructing a topology “TP”. SDT is also a TP method.
The main idea of traditional TP is to project the topologies by using the logical switch as a meta unit. The right side of Figure <ref> is the topology we want to construct, which is a part of a 2D-Torus. We call this “logical topology”. The radix of the switches in this logical topology is 4, i.e., every logical switch has 4 ports. The physical switch can be divided into sub-switches based on the radix. As a result, each sub-switch has 4 ports as well. After that, we can use these sub-switches for the topology projection.
We call this type of TP “SP” and summarize its general approach here. The first step of SP is dividing one physical switch into multiple sub-switches. Then we project the sub-switches to the logical switches in the topology, which is why this method is called SP. After the projection, we manually connect these sub-switches' corresponding ports to build the topology. We can use Software-Defined Networking (SDN) functions (e.g., flow tables in the OpenFlow switch) to divide the sub-switches.
Take Figure <ref> as an example of how SP works. We first divide and project the sub-switches. Ports 1-4 on the physical switch are considered as one sub-switch, so we project them to an arbitrary logical switch, e.g., switch 1. Ports in logical switch 1 are numbered based on the projected ports from the physical switch. The operations are the same for the other sub-switches.
We then connect the cables between specific sub-switch ports based on the logical topology. For example, in the logical topology, there is a link between ports 3 and 9 (i.e., Link (A)). We connect the corresponding ports on the physical switch. After all the links are made, it is time to deploy the flow table (we use OpenFlow in this paper) to restrict the packet forwarding domain on the physical switch based on the ports' labels. For instance, data packets entering port 1 can only be forwarded to ports 2-4. The restrictions are based on the partition of sub-switches.
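To make the forwarding-domain restriction concrete, the sketch below generates OpenFlow rules for the SP partition in the style of ovs-ofctl. The bridge name (br0), the 4-port radix, and the use of plain in_port matches are our own illustrative assumptions; a real deployment would also match on destinations rather than flooding within each group.

RADIX = 4         # ports per sub-switch, matching the 4-port logical switches
NUM_PORTS = 16    # physical ports used for projection (illustrative)

def sp_flow_rules(bridge="br0"):
    # Ports 1-4 form sub-switch 0, ports 5-8 sub-switch 1, and so on.
    # Each rule confines traffic entering a port to its own sub-switch.
    rules = []
    for first in range(1, NUM_PORTS + 1, RADIX):
        group = list(range(first, first + RADIX))
        for in_port in group:
            outs = ",".join("output:%d" % p for p in group if p != in_port)
            rules.append('ovs-ofctl add-flow %s "in_port=%d,actions=%s"'
                         % (bridge, in_port, outs))
    return rules

for rule in sp_flow_rules():
    print(rule)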
§.§ Make SP Topology-reconfigurable
The manual operations required for SP on topology reconfiguration are massive. We have to re-connect the cables manually on every topology reconfiguration, which is error-prone. As the topology size increases, the difficulty of deployment increases correspondingly. Therefore, we introduce MEMS optical switches into SP to reduce labor costs. The new design is called SP-OS.
The optical switch can replace manual operations on the reconfiguration. We connect all the ports on the physical switch to the optical switch (Figure <ref>). When the topology needs to be reconfigured, modifying the configuration of the optical switch based on the labels can replace the manual operations. The advantage of SP-OS is that once the testbed is deployed, all reconfigurations can be done remotely by software control.
The introduction of optical switches leads to increased hardware costs. Optical devices are generally costly. The price of a 320-port MEMS optical switch is more than $100k, and only 160 LC-LC[Lucent Connector (LC).] fibers can be connected. As the number of ports on the optical switch increases, the price increases significantly. SDT can work without optical switches, which provides significant savings.
TurboNet <cit.> is another topology-reconfigurable SP method for TP, which replaces manual reconnection with the Tofino switch's loopback ports. However, the use of loopback ports results in a reduction in the available bandwidth of the switches <cit.>. We compare the scalability between TurboNet and SDT in <ref>.
§ THE DESIGN OF SDT
In this section, we first introduce the fundamental design of SDT on a single switch. Then, we expand the SDT to multiple switches to support larger topologies. We also address the issue of topology partitioning in multi-switch deployments.
§.§ SDT on a Single Switch
Although SP-OS can support automated topology reconfiguration, its cost is relatively high due to the introduction of optical switches. Therefore, we design the SDT, which can provide the same functionality as SP-OS but without optical switches.
The main idea of SDT is to use Link Projection (LP) rather than SP to construct the logical topology on a physical switch. SDT first projects physical links[To construct a physical link, we connect two arbitrary ports on the switch. In the paper, the switch's upper and lower adjacent ports are connected for simplicity.] to logical ones on the topology, and then number the ports on the logical topology based on the projected ports from the physical switch. Taking Figure <ref> as an example, the physical links A and B are projected to the logical topology, and then the corresponding ports in the logical topology can be tagged with 1, 2, 3, and 4, respectively.
After the projection, we group ports on the physical switch into different sub-switches based on the relationship of their counterparts in the logical topology. For instance, in Figure <ref>, ports 1, 3, 5, and 7 in the topology form a logical switch, so the corresponding ports 1, 3, 5, 7 on the physical switch should be grouped in the same sub-switch. We use OpenFlow flow tables to ensure that packets entering this sub-switch are forwarded only within its forwarding domain. The other sub-switches are divided according to these steps as well.
Please note that no optical switch is needed when the topology is reconfigured in SDT.
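The following sketch illustrates the link-projection bookkeeping described above. The physical self-links and the logical topology are toy data chosen for the example (a 4-switch ring with radix-2 logical switches), and the data structures are our own; the SDT controller reads the real mapping from a topology configuration file.

# Physical self-links: port 2i-1 is cabled to port 2i and stays fixed.
physical_links = [(1, 2), (3, 4), (5, 6), (7, 8)]
# Logical links as ((switch, port), (switch, port)) pairs; one per physical link.
logical_links = [((0, "a"), (1, "a")),
                 ((0, "b"), (2, "a")),
                 ((1, "b"), (3, "a")),
                 ((2, "b"), (3, "b"))]

def project(physical_links, logical_links):
    # Assign each physical port to the logical switch its link end represents.
    port_owner = {}
    for (p, q), (u, v) in zip(physical_links, logical_links):
        port_owner[p], port_owner[q] = u[0], v[0]
    # A sub-switch is the set of physical ports owned by one logical switch.
    groups = {}
    for port, switch in port_owner.items():
        groups.setdefault(switch, []).append(port)
    return groups

print(project(physical_links, logical_links))
# {0: [1, 3], 1: [2, 5], 2: [4, 7], 3: [6, 8]}; flow rules are then emitted
# per group exactly as in the SP sketch above.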
Here we summarize the fundamental differences between SP-OS and SDT.
* In SP-OS, sub-switch partitions are determined arbitrarily (the only constraint is that the radix of sub-switches should match the radix of logical switches in the topology). MEMS optical switches are used to (re)connect links between these sub-switches based on the topology's logical switches (projected by SP).
* In SDT, physical links on the physical switch will remain fixed once constructed (which can be arbitrary). The sub-switches are (re)partitioned based on the result of LP. Rules in the flow tables of the OpenFlow switch can be used to realize the sub-switch partition, and no optical switch is needed during a topology reconfiguration.
The size of the logical topology supported by SDT is limited by the number of ports on the physical switch. A topology can be appropriately built if the total number of ports in the topology is less than or equal to the number of ports on the physical switch (excluding the ports connected to the end hosts). This constraint applies to all TP methods.
§.§ SDT on Multiple Switches
When one switch is insufficient to project the entire logical topology, multiple switches are needed to use. In SP-OS, it is not difficult to expand the supported logical topology by adding more switches and optical devices. The expansion of SDT is also relatively simple but requires additional discussions below.
To construct the multi-switch scenario, the logical topology needs to be cut into several sub-topologies, and each sub-topology is maintained independently by one physical switch.
There are two different types of links in multi-switch SDT. We call the links between the upper and lower adjacent ports of one switch self-links. For those links across the sub-topologies, we project them from the links across physical switches and call them inter-switch links. For instance, the topology has been cut into two sub-topologies on the right side of Figure <ref>. The links inside each sub-topology are self-links, and the links between the two sub-topologies are inter-switch links.
There is a requirement for the number of inter-switch links. Taking Figure <ref> as an example, the scale of the logical topology is larger than the previous one. As a result, one 64-port switch cannot build this topology, but two can make it. To build the topology, we divide the topology into two sub-topologies. How to divide the topologies is discussed in Sec. <ref>.
Here we use the formula to represent the inter-switch links. Define topology (graph) G(E, V) as the logical topology we hope to build, and the sub-topologies are G_A(E_A, V_A) and G_B(E_B, V_B). E_nA represents the links to nodes on the physical switch A, E_sA represents the self-links on the physical switch A, and E_aAB represents the inter-switch links between the physical switches A and B. In the logical topology, there is a relationship: E = E_n + E_s. For sub-topologies after being divided, they have
E_A = E_nA + E_sA
E_B = E_nB + E_sB
V = V_A + V_B
For inter-switch links, the following equation exists.
E_aAB = E_aBA = E - E_A - E_B
We can now determine the number of inter-switch links for the logical topology by Eq. <ref>. For the case in Figure <ref>, there are 8 inter-switch links between the two sub-topologies, which means at least 8 inter-switch links are required to construct this topology.
The reservation of inter-switch links is flexible, but it must fulfill the requirements of the desired topologies and the specifications of physical switches. Taking Figure <ref> as an example, we aim to construct a 4x4 2D-Torus topology (the connections to nodes are omitted for simplicity). When the number of ports on physical switches is greater than 64, only 1 switch is necessary. When the number of ports exceeds 32 but is less than 64, 2 switches are required to build the topology, as shown on the left side of Figure <ref>. Each switch is assigned 12 self-links and 8 inter-switch links in this scenario. When the number of ports is less than 32 but greater than 16, we can build it with 4 switches. Attention must be paid to determining the switches at both ends of the inter-switch links according to the partition results.
It is worth noting that even if the partitioning methods are different, the results of TP are almost the same. Nevertheless, a proper cutting method enables the testbed to support more topologies without manual modifications. In the implementation, if experiments on multiple topologies are needed, we generally divide the topologies in advance based on the specifications of the switches (port number, flow-table limits, etc.) to obtain a proper number of inter-switch links between different switch pairs, i.e., to keep the number of inter-switch links between the various switch pairs about the same. The number of reserved inter-switch links is usually taken from the maximum over all topologies.
§.§ Topology Partition for SDT on Multiple Switches
The partition of the logical topology needs to be discussed. We define the function “Cut(G(E, V), params...)” for dividing the topology. The input of the function is the logical topology G(E, V), switch parameters, and the number of switches. The output is the partitioning method that satisfies the requirements of all the topologies we aim to build and the number of links of each type to be assigned. The problem is represented with switches and nodes as vertices and logical links as edges. The logical topology can be described as an undirected graph. To achieve the partitioning, we apply a graph partitioning algorithm that splits the graph into sub-graphs.
The partition of the graph needs to meet certain requirements. The first is that the number of inter-switch links should be small, since inter-switch links are relatively more complicated than self-links. With this requirement, one initial idea is to use the “Min-cut” partitioning algorithm to divide the topology. The target is to minimize CutEdges(E_A, E_B) = ∑_u∈ V_A, v∈ V_Bw(u, v). Note that w(u, v)=1, i.e., the graph is unweighted.
Besides this, we also want to keep the number of used links (or ports) per physical switch as balanced as possible. Balancing the number of ports and links of each physical switch is beneficial in terms of resource usage and the complexity of ports to nodes. However, Min-cut partitioning cannot work well under this condition. Figure <ref> shows the differences between these partitioning methods. Another graph partitioning algorithm is needed, whose target is to minimize α× Cut(E_A, E_B) + β× (1/∑_e∈ E_A1 + 1/∑_e∈ E_B1).
To summarize the requirements for the SDT partitioning algorithm, the graph partitioning algorithm should 1) minimize the number of edges between sub-graphs and 2) balance the number of edges within each sub-graph. Meeting these requirements is a proven NP-hard problem, and algorithms such as RatioCut <cit.> or minimize normalized cut (NCut) <cit.> can be used to solve it. In practice, we use the widely-used METIS library <cit.> with these constraints to perform the partitioning of the topology, and the results are usually satisfactory. When multiple topologies need to be evaluated in real-world experiments, we perform graph partitioning for all topologies and then select the maximum number of inter-switch links as the reference for deployment on the physical topology.
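As a self-contained stand-in for the METIS call used in practice, the sketch below balances part sizes while keeping the cut small using networkx's Kernighan-Lin bisection; it only illustrates the two requirements stated above and is not the partitioner SDT actually ships with.

# Sketch: balanced two-way partition of a logical topology. Kernighan-Lin
# bisection keeps the two parts of (near-)equal size while heuristically
# minimizing the cut, i.e., the number of inter-switch links.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.grid_2d_graph(4, 4, periodic=True)              # illustrative topology
V_A, V_B = kernighan_lin_bisection(G, seed=0)
cut = sum(1 for u, v in G.edges if (u in V_A) != (v in V_A))
print(len(V_A), len(V_B), cut)                         # part sizes and cut size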
§ IMPLEMENTATION DETAILS: SDT CONTROLLER
We implement the SDT controller based on the Ryu library <cit.> (version 4.34) and the APIs of commodity OpenFlow switches. As shown in Figure <ref>, the SDT controller consists of 4 modules. Topology Customization and Routing Strategy are two basic modules of the controller. The remaining two modules, i.e., Deadlock Avoidance and Network Monitor, are dedicated modules for DCNs. The SDT controller supports fast (re)configuration of the network topology and the other modules through a simple configuration file, as shown in Figure <ref>.
§.§.§ Topology Customization
This module is essential for performing TP, consisting of 1) the checking function and 2) the deployment function. In the checking function, all user-defined topologies will be used as input to the module, along with how the testbed is connected (e.g., distribution of nodes and two types of links). The module first checks if these topologies meet the deployment conditions as addressed in <ref>. If not, the module will inform the user of the necessary link modification. Then, the checked user-defined topology is used as the input for the deployment function. The controller will maintain the logical topology as an undirected graph and run the TP process automatically in this function.
§.§.§ Routing Strategy
This module contains various routing strategies for different topologies. We implement several routing algorithms as shown in Table <ref>. Most of the user-defined routing strategies can be implemented by the SDT controller as a specific set of flow tables. For instance, when a new flow comes, the SDT controller calculates the paths on the logical topology according to the strategies and then delivers the corresponding flow tables to the proper OpenFlow switches to perform a specific routing for the flow.
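As an illustration of that step, the sketch below turns one shortest path on the logical topology into per-hop entries. The port_map that translates a logical hop into the owning physical switch and output port, and the entry format itself, are hypothetical stand-ins for the controller's internal bookkeeping rather than its actual API.

# Sketch: derive per-hop flow entries for one flow from a routing decision.
# port_map[(logical_switch, next_logical_switch)] is assumed to give the
# physical OpenFlow switch and output port realizing that logical hop.
import networkx as nx

def entries_for_flow(logical_topo, src, dst, flow_match, port_map):
    path = nx.shortest_path(logical_topo, src, dst)
    entries = []
    for here, nxt in zip(path, path[1:]):
        phys_switch, out_port = port_map[(here, nxt)]
        entries.append({"switch": phys_switch,
                        "match": flow_match,              # e.g. a 5-tuple
                        "actions": [("output", out_port)]})
    return entries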
§.§.§ Deadlock Avoidance and Network Monitor
These two modules are dedicated to DCNs. The former works in lossless networks, such as RDMA over Converged Ethernet (RoCE), together with the Routing Strategy module to avoid deadlocks. The latter is mainly used for network telemetry; for example, the SDT controller periodically collects per-port statistics from the OpenFlow switches through the provided API. The collected data can further be used to calculate the load of each logical switch in the case of adaptive routing.
We use the SDT controller to implement some prevalent network functions to evaluate SDT's capability. For details, please refer to <ref>.
§ EVALUATION
In this section, we conduct several experiments to answer the questions, including:
* Will SDT introduce additional overhead (e.g., latency) compared to a full testbed? ( <ref>)
* How many types of topologies can SDT project? ( <ref>)
* How cost-effective and scalable is SDT compared to previous TP methods? ( <ref>)
* How much speed-up can SDT bring to network experiments? ( <ref>)
* Can existing network functions be applied to SDT? ( <ref>)
It is worth mentioning that all topology reconfigurations of SDT in this section are done remotely without any manual rewiring.
§.§ Experiment Setup
§.§.§ SDT Cluster Setup
We use 3 H3C S6861-54QF OpenFlow switches (with 64 10Gbps SFP+ ports and 6 40Gbps QSFP+ ports, which can be split into 4 10Gbps SFP+ ports) for SDT. We use 16 HPE DL360 Gen9 servers with E5-2695v4 (18 cores and 36 threads) as host servers and virtualize them to 32 computing nodes (i.e., virtual machines). Each host server has one Mellanox ConnectX-4 10GbE dual-port NIC. Each computing node is allocated with 32GB RAM and 8 CPU cores. Moreover, each computing node is bound with a physical NIC port through SR-IOV to ensure that the virtualization will not become the performance bottleneck. All the network devices support the Priority Flow Control (PFC) for lossless ethernet.
§.§.§ Baselines
We use a full testbed to compare the accuracy of SDT in terms of latency and bandwidth. We compare the Application Completion Time (ACT) of SDT with a self-designed simulator running different HPC applications under different topologies. We also evaluate the cost-effectiveness and scalability compared to SP, SP-OS, and TurboNet <cit.>.
The network simulator we use is based on two popular simulators, BookSim <cit.> and SST/Macro <cit.>. The simulator supports a range of features needed by the evaluations (including PFC, cut-through, trace replaying, etc.) and is event-driven for efficiency. To run the same applications as the nodes on SDT, the simulator replays traces collected from running an HPC application on real computing nodes, which ensures the simulator's authenticity. We only compare SDT to TurboNet with Port Mapper (PM), because the number of queues per port in the topology projected by Queue Mapper (QM) is inadequate for experiments inside DCs.
§.§ TP Accuracy of SDT
§.§.§ Latency
We construct a multi-hop topology for latency and bandwidth tests as shown in Figure <ref>. The topology consists of 8 switches and computing nodes. There is one node connected to each switch. The switches and nodes are inter-connected with 10Gbps links. We build this topology on SDT and a full testbed and compare the latency between Node 1 to Node 8 by using the Pingpong application in Intel MPI Benchmark (IMB) <cit.>. The application is running on the RoCEv2 network with ECN-disabled.
We perform the latency test 10k times on incremental message lengths (param -msglen) and collect the latencies. Define the average latency of the full testbed as l_r and that of SDT as l_s. The overhead is calculated as (l_s - l_r)/l_r. Figure <ref> shows that SDT introduces an acceptable overhead to the RTT. It is worth noting that the latency is quite small in the RoCEv2 network, which means that introducing even a tiny delay can lead to large deviations in the results. For example, the 10-hop latency for message lengths below 256 bytes is under 10μ s. Although the latencies on RoCEv2 are sensitive to the hardware conditions, the overheads introduced by SDT are below 1.6% and can be ignored. With increasing message lengths, the overhead introduced by SDT becomes smaller.
§.§.§ Bandwidth
We use iperf3 to construct an incast scenario for the bandwidth test: all other nodes send 10Gbps TCP traffic to node 4. We compare the bandwidth on lossy and lossless networks (with PFC off and on, respectively).
The results (refer to Figure <ref>) demonstrate that with PFC enabled, the bandwidth allocation for each iperf3 flow aligns with the full testbed. For instance, nodes 3 and 5, which have 2 congestion points on their path to node 4, have comparable bandwidth when controlled by PFC in both the SDT and full testbed. Their bandwidth allocation is significantly distinct from that of other nodes with different hop counts. In the network without PFC, the bandwidth distribution between SDT and the full testbed has a nearly identical trend. Nodes that can allocate relatively high bandwidth (which may be influenced by RTT and other factors) behave similarly in both the actual topology and SDT. The trends are nearly alike for nodes with lower bandwidth. The only differences may be due to the additional overhead introduced by SDT, leading to slight differences in RTT and therefore different window growth rates.
To summarize, the way SDT builds the topology does introduce a bit of additional overhead, resulting in a deviation of 1.6% or less in the latencies compared to the full testbed in our environment. Our initial speculation is that these additional latency overheads arise because TP increases the load on the switch's crossbar, which causes a slight bias compared to the real environment. These deviations are reasonable and have a negligible impact on the bandwidths.
During the evaluation, we also evaluate the hardware isolation using the Wireshark network sniffer on the client side. We deploy two unconnected topologies in one SDT testbed and conduct the same Pingpong experiment separately. The evaluation results show that the client's port does not receive packets from nodes that are not connected in one topology.
§.§ Scalability, Convenience, and Cost of SDT
We use simulations to compare the scalability, convenience, and cost of SDT and other TP methods (SP, SP-OS, and TurboNet <cit.>) on the projection of multiple topologies, including widely used DC topologies (Fat-Tree, Dragonfly, and Torus) and 261 WAN topologies (from the Internet Topology Zoo <cit.>). The reconfiguration time metric is the total time from when the configuration is placed until the network is available. The hardware costs are extrapolated from the current market prices of the hardware.
Table <ref> presents the results of the evaluations and shows that SDT can project more topologies than TurboNet at the same hardware cost, making it more scalable and cost-efficient than SP and SP-OS. SP requires manual reconnection, making reconfiguration time-consuming and prone to errors, especially for large topologies. SP-OS incorporates optical switches (OS) to facilitate reconfiguration but suffers from expensive hardware costs. TurboNet employs the loopback port of P4 switches for reconfiguration, resulting in halved bandwidth on the ports and reduced scalability compared to SDT. Also, recompiling the P4 program is time-consuming. SDT is the best option among these solutions due to its excellent scalability and cost-effectiveness.
§.§ Comparison between SDT, Simulator, and Full Testbed
We run a batch of HPC applications and benchmarks, including HPCG, HPL, miniGhost, miniFE, and IMB, to verify the ACT differences among SDT, the simulator, and the full testbed.
The HPC applications can verify the universality of SDT in network experiments, while IMB Alltoall is a pure traffic benchmark without any computation, ideal for verifying the impact of SDT's overhead on network performance. We run the applications on specific topologies and construct the topologies on both SDT and the simulator. All parameters remain the same for the simulator and SDT, including PFC thresholds, congestion control, DCQCN enabled, cut-through enabled, etc. For details on network functions such as deadlock avoidance, please refer to <ref>.
We select the topologies 1) Dragonfly with a=4, g=9 <cit.>, and h=2, 2) Fat-Tree with k=4 <cit.>, 3) 5x5 2D-Torus, and 4) 4x4x4 3D-Torus <cit.> for evaluation. For the topologies with the number of nodes greater than 32, we randomly select the nodes but keep the same among all the evaluations.
Table <ref> shows the difference in real-application evaluation between SDT and the simulator. Ax (B%) in the table indicates that evaluation on SDT is A times faster than on the simulator, with a difference in ACT of B%. The results show that the ACT collected on SDT is almost identical to that of the simulator, with a maximum deviation of 3%. However, the time consumption of SDT is greatly reduced compared to the simulator, especially for applications with heavy traffic.
Further evaluations are conducted to assess the performance improvement brought by SDT as the number of nodes increases. Figure <ref> compares the time consumption of full testbed (real ACT), simulator, and SDT in evaluating IMB Alltoall benchmark on a Dragonfly topology (a=4, g=9, h=2) with 1, 2, 4, 8, 16, and 32 randomly selected nodes. Note that SDT's time consumption includes the deployment time of the topology. Results show that when the ACT is short, the topology deployment time may result in overhead in the evaluation time consumption, but it is still faster than the simulator. It's worth mentioning that the simulation time may be affected by the performance of the machine running the simulation, but this does not resolve the issue that the simulation is much slower than a real experiment on SDT.
To summarize, SDT can well construct actual network topologies. The experiments performed on SDT show almost the same ACT as the real environments and the simulations, while SDT has much lower costs than the full testbed and is much faster than the simulator. There are good reasons that SDT can be used for more authentic and efficient network evaluations than simulators and emulators.
§.§ Running Prevalent Network Functions on SDT
We also evaluate the feasibility of deploying prevalent network features on the SDT, with two specific modern network functions, RoCEv2, and a naive active routing.
RoCEv2 works over lossless ethernet with PFC enabled. Since SDT does not make any hardware modifications to the physical ports, building a lossless network environment is possible by simply enabling PFC on both the switches and the NIC ports. Moreover, DCQCN <cit.> is an end-to-end congestion control method that delays the generation of PFC messages. Like PFC, DCQCN can be enabled by directly turning it on as long as the switch and the NIC support it. We further deploy three types of deadlock avoidance methods alongside routing strategies on SDT (Table <ref>), which work properly in the evaluation of real applications (see <ref>).
We implement an active routing algorithm based on <cit.> for the Dragonfly topology (a=4, g=9, h=2, with 32 randomly selected nodes). This algorithm extends Dragonfly's minimal routing policy by estimating network congestion from the statistics collected by the Network Monitor module. We evaluate active routing using a prevalent pure-communication application, i.e., IMB Alltoall. Results show that active routing works well on SDT and reduces the ACT of IMB Alltoall.
In summary, SDT shows strong adaptability to existing network functions. Most existing ethernet features can be easily deployed in SDT. Researchers can use SDT to validate existing network functions in multiple-scale topologies or to develop and evaluate new network functions using SDT.
§ DISCUSSION AND FUTURE WORK
§.§ Flexibility Enhancement
In SDT, the inter-switch link reservation issue might occur ( <ref>). Manual operations may still be required if the reserved inter-switch links cannot accommodate a new user-defined topology. To handle this, SDT can leverage optical switches to dynamically turn a link into either a self-link or an inter-switch link according to the topology requirements, further enhancing the flexibility of SDT. We are designing the SDT controller with optical switches and investigating whether there are additional challenges.
§.§ Switch Selection
The SDT controller in this paper performs TP operations on commodity OpenFlow switches. Generally, other switches can also be used for TP if they meet the following conditions: 1) allowing loopback packets to pass through self-links (or the STP protocol can be disabled), and 2) supporting 5-tuple matching or others similar to determine the forwarding of packets. For instance, other types of switches, like switches supporting extended ACL tables, are also suitable for TP. The P4-based (Intel Tofino) SDT controller is under refinement.
§.§ Resource Limitation
In SDT, the most critical resource is the maximum number of flow table entries supported by each OpenFlow switch. When a switch runs out of flow table entries during the setup of a logical topology, the setup procedure may fail or other unknown failures could occur. The SDT controller leverages a built-in module that checks the number of available table entries to avoid such problems. If the demand for entries exceeds the available capacity, it can merge entries, split the topology, or inform operators to add more switches. In our evaluation, the problem of inadequate flow table capacity is rare. For instance, when we project a Fat-Tree with k=4 (containing 20 switches and 16 nodes) onto 2 OpenFlow switches, each switch requires only about 300 flow table entries, which is not difficult for modern commercial OpenFlow switches to deploy.
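A rough pre-deployment feasibility check of this kind takes only a few lines; the demand and capacity numbers below are placeholders rather than measured values.

# Sketch: verify that the per-switch flow-table demand of a projection fits
# the available capacity before deployment. Numbers are placeholders.
def table_shortfall(demand_per_switch, capacity):
    return {sw: need - capacity
            for sw, need in demand_per_switch.items() if need > capacity}

short = table_shortfall({"switch-1": 300, "switch-2": 290}, capacity=2000)
print("deployable" if not short else f"merge entries or add switches: {short}")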
§ CONCLUSION
We summarize the advantages and disadvantages of existing network evaluation tools and distill the methodology of an alternative method called “Topology Projection” (TP). Based on the idea of TP, we propose SDT, a deployment-friendly and automatically reconfigurable network topology testbed. SDT allows researchers to use several commodity OpenFlow switches to build network topologies based on user-defined topology configurations. SDT is fully transparent to other network components and can significantly reduce the deployment cost for network topology evaluations. We also develop the corresponding SDT controller for automatic topology reconfiguration. Through evaluations, we find that SDT can achieve almost the same physical properties as the full testbed and runs up to 2899x faster on network evaluations than the simulator does. SDT is more cost-effective and scalable than other TP solutions and can support a wide range of network research work.
§ ACKNOWLEDGEMENTS
This work is sponsored by the Key-Area Research and Development Program of Guangdong Province (2021B0101400001), National Natural Science Foundation of China (62150610497, 62172108, 62002066), Natural Science Foundation of Shanghai (23ZR1404900), the Major Key Project of PCL, and Open Research Projects of Zhejiang Lab (2022QA0AB07). We also sincerely appreciate the anonymous reviewers for their valuable and constructive feedback.
|
http://arxiv.org/abs/2307.05747v2 | 20230708141455 | Integrating Curricula with Replays: Its Effects on Continual Learning | [
"Ren Jie Tee",
"Mengmi Zhang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Integrating Curricula with Replays: Its Effects on Continual Learning
Ren Jie Tee, Mengmi Zhang
=========================================================================
Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge.
This human learning behavior has inspired the integration of curricula with replay methods in continual learning agents. The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer.
Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks, which has shown to be effective. However, limited research has explored the integration of different curricula with replay methods to enhance continual learning.
Our study takes initial steps in examining the impact of integrating curricula with replay methods on continual learning in three specific aspects: the interleaved frequency of replayed exemplars with training data, the sequence in which exemplars are replayed, and the strategy for selecting exemplars into the replay buffer. These aspects of curricula design align with cognitive psychology principles and leverage the benefits of interleaved practice during replays, easy-to-hard rehearsal, and exemplar selection strategy involving exemplars from a uniform distribution of difficulties.
Based on our results, these three curricula
effectively mitigated catastrophic forgetting and enhanced positive knowledge transfer, demonstrating the potential of curricula in advancing continual learning methodologies. Our code and data are available: <https://github.com/ZhangLab-DeepNeuroCogLab/Integrating-Curricula-with-Replays>
§ INTRODUCTION
Continual learning enables consecutive task acquisition without forgetting previously trained tasks <cit.>. This adaptability is vital for autonomous systems in dynamic environments, such as updating a grocery classification model with new products without retraining it on previous products. However, a significant challenge in continual learning is catastrophic forgetting, where knowledge from recent tasks interferes with earlier ones <cit.>, leading to performance degradation on earlier tasks after training on a task sequence.
To resolve this problem,
there are three primary types of continual learning methods commonly employed in the field:
regularization-based methods introduce regularization terms to mitigate catastrophic forgetting by preserving important parameters during training <cit.>; rehearsal-based methods store and replay a subset of previous data during training to maintain knowledge from previous tasks <cit.> and parameter isolation methods isolate specific parameters for each task to prevent interference between tasks <cit.>.
Rehearsal-based methods have proven highly effective in continual learning. However, existing approaches typically involve randomly selecting and rehearsing data from previous tasks. Limited research explores the incorporation of meaningful curricula into replay methods.
In parallel, in the curriculum learning literature, various approaches have focused on weakly supervised <cit.>, unsupervised <cit.>, and reinforcement learning tasks <cit.>. These studies demonstrate that curricula improve generalization abilities, task performances,
and convergence speed <cit.> during training. However, they primarily address intra-class difficulty and example scheduling within a single task, neglecting the impact of class presentation sequences across multiple tasks. Recent research has explored curricula in continual learning scenarios without data replays <cit.>. In complement to this work, our study investigates the role of curricula specifically during replay in continual learning, while keeping the curricula consistent for the feed-forward training process.
Exploring optimal curricula offers countless possibilities, and in our study, we take initial steps to investigate a limited set of potential curricula. We draw inspiration from two sources to guide the design of these curricula. Firstly, neuroscience research has revealed that neural activity patterns associated with past experiences are replayed in specific orders during rest or sleep, which is believed to contribute to memory consolidation and spatial navigation <cit.>. Secondly, pedagogy studies indicate that repetitive practice and revisiting previous knowledge with increasing difficulty enhance long-term memory integration in students <cit.>.
Specifically, we propose three types of curricula for replays and examine their impact on catastrophic forgetting and positive knowledge transfer: (1) the interleaved frequency of replayed exemplars with training data, (2) the replay sequence of exemplars, and (3) the strategy for selecting exemplars into the replay buffer. The experimental findings align with cognitive psychology principles, highlighting the advantages of frequently interleaving between training data and replayed exemplars, incorporating easy-to-hard rehearsals, and selecting exemplars from a uniform distribution of difficulties for replay. These observations present a promising avenue for advancing continual learning methods. It also provides insights into the underlying mechanisms of replay strategies in mitigating forgetting and facilitating knowledge transfer across tasks.
§ RELATED WORKS
§.§ Replay Methods in Continual Learning
Extensive research has focused on utilizing replay methods to address the issue of catastrophic forgetting. Conventional replay methods, such as iCaRL <cit.> and ER <cit.>, involve explicit training on previously saved data, while several variants, like DGR <cit.> and Pseudo-Recursal <cit.>, replay on artificially synthesized samples by generative models, resembling data from previous tasks.
Although these replay methods have made significant contributions in reducing catastrophic forgetting, they paid little attention to the incorporation of meaningful curricula into replay methods. Most methods randomly interleave the replay samples with the training data, without exploring the optimal mixing strategies <cit.>. In our work, we systematically studied the effect of interleaving curricula, which involves mixing training data and replay samples within a pre-defined interleave interval.
§.§ Curriculum Learning
Curriculum learning methods can be broadly categorized into two groups. The first group involves manual curriculum design by humans before training <cit.>, but these methods typically rely on human expertise and struggle to generalize to new domains. The second group consists of models that can autonomously design curricula without human intervention <cit.>. However, the application of these methods to enhance model performance has received limited attention in the continual learning setting.
Here, we highlight two factors to consider when applying curricula on the replay methods in continual learning. Firstly, while curriculum learning has demonstrated efficacy in enhancing generalization and training speed within a single task, the objective of curriculum learning in the context of continual learning is to retain knowledge from previous tasks while acquiring new knowledge from the current task. Secondly, unlike within-task curriculum learning, models in continual learning only have access to data from the current task, making it challenging to create a comprehensive between-task curriculum that encompasses the entire dataset.
Here, we took initial steps in this direction by exploring automated methods to determine the sequence of replay samples and introducing the sample selection strategy which finds the best replay samples for building a curriculum.
§ EXPERIMENTS
We investigated the effect of three types of replay curricula in the class incremental learning (CIL) setting. We first introduce CIL, and then elaborate on the three replay curricula individually.
Problem Setting. The objective of CIL is to teach a unified classification model Θ to recognize sets of object classes incrementally over time. Specifically, an image dataset D, consisting of N object classes, is split into subsets {D_1,...,D_t,...,D_T}
of
images
and presented over a sequence of T tasks. In each task t, the model only has access to training data in D_t, consisting of
samples from distinct classes C_t, and (x_i,t,y_i,t) is the i-th (image, label) pair in D_t. The model Θ can run multiple passes over D_t in task t. The model stops training on D_t
after its performance on the validation set saturates, considering the five most recent epochs.
We implemented the naive replay method where some raw images and their corresponding labels are selected from previous tasks and are stored in the replay buffer R_t. These data in R_t are inter-leaved with D_t for rehearsals. There are three types of replay curricula involved in this study: (1) the interleave frequency; (2) the rehearsal sequence of R_t in CIL; and (3) the image selection for R_t.
R_t is kept at a constant size of 1200 over all the tasks. See Appendix for more training details.
As an upper bound, we also include the offline method where the model Θ is trained on the entire dataset D from {D_1,...,D_T} over multiple epochs without any continual learning.
Datasets. We conducted experiments to investigate the use of these three types of curricula in replay methods on the two image datasets ciFAIR-10 and ciFAIR-100 <cit.>.
ciFAIR-10 dataset contains 10 object classes. The protocol asks the model Θ to incrementally learn 2 object classes in each task. There are a total of 5 tasks. ciFAIR-100 dataset contains 100 object classes. The CIL protocol asks the model Θ to incrementally learn 5 object classes in each task. There are a total of 20 tasks.
Both datasets have a total of 60,000 images, with 50,000 images used for training and 10,000 images used for testing.
The conclusions drawn from the experiments on both datasets are consistent. Without loss of generality, we focus on all the experiments and result analysis in ciFAIR-100 in the main text.
See Appendix for more implementation details and results on ciFAIR-10.
Evaluation Metrics. To assess the continual learning performance of the model Θ, we follow <cit.> and introduce two standard evaluation metrics. We define Forgetfulness (F) as the percentage decrease in classification accuracy on the test instances from C_1 between Θ_t (the model after being trained on D_t) and Θ_1. An ideal Θ_t would maintain the same classification accuracy on C_1 over tasks, i.e., ∀ t, F_t=0. The higher F is, the more Θ suffers from catastrophic forgetting. To assess the overall classification performance of Θ over tasks, we also report the continual average classification accuracy (Avg. Accu.). Avg. Accu. is computed as the average accuracy on all the test instances from C_i, where i∈{1, 2, ..., t}. For simplicity, we report the averaged F and Avg. Accu. over all the tasks.
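In code, the two metrics reduce to a few lines; in the sketch below, acc[i][j] denotes the test accuracy on the classes of task j+1 measured after training on task i+1, a matrix layout chosen here only for illustration.

# Sketch: Forgetfulness (F) and continual average accuracy (Avg. Accu.).
# acc[i][j] = accuracy on the test classes of task j+1 after training task i+1.
def forgetfulness(acc):
    # Percentage drop on the first task's classes relative to the model
    # right after task 1 (F_1 = 0 by construction).
    return [100.0 * (acc[0][0] - acc[i][0]) / acc[0][0] for i in range(len(acc))]

def avg_accuracy(acc):
    # After task i+1: mean accuracy over all classes seen so far.
    return [sum(acc[i][: i + 1]) / (i + 1) for i in range(len(acc))]

acc = [[0.90], [0.70, 0.88], [0.60, 0.75, 0.86]]   # toy 3-task example
print(forgetfulness(acc), avg_accuracy(acc))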
Experimental Controls.
Within each experiment, only one variable of interest changes while the rest of the experiment conditions are fixed as control variables. As the previous study has shown that the sequence of class presentations affects the continual learning performance <cit.>, we use the same class presentation sequence in all three experiments. The same MobileNetV3 (small) network architecture is used as the backbone for the model Θ for all experiments. In every experiment, the total number of training samples and the total number of replay samples exposed to Θ remain the same across all experiment variables. Each experiment is conducted with 4 runs initialized with 4 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 4 runs is reported.
§.§ Interleave Divisions during Rehearsals
The number of interleaving divisions refers to the number of splits of in D_t and R_t. It indicates how often the model Θ rehearses on R_t, while learning on a subset of D_t. For example, for interleaving division number 400, D_t is split into 400 groups where each group contains an equal number of (x_i,t,y_i,t) (image, label) pairs, and these (image, label) pairs are randomly selected from D_t without replacement. Correspondingly, R_t is also split into 400 groups with the same splitting criteria as D_t. At each training epoch, the model Θ_t at task t is repeatedly trained with one group of D_t followed by one group of R_t, until the entire D_t and R_t are exhaustively seen by Θ_t. We titrate the interleave division numbers with the range of 1, 8, 60, 120, and 300.
The training data is interleaved with replay data and then presented to the model in sequence. Different interleave division numbers result in different data presentation sequences; hence, different curricula.
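The sketch below shows one way to realize a given interleave division number within a training epoch; the split helper and the train_on update step are placeholders for the actual data pipeline and optimizer step.

# Sketch: one training epoch under a fixed interleave division number.
# Both the current-task data and the replay buffer are split into the same
# number of randomly drawn, near-equal groups, and the model alternates
# between one group of each.
import random

def split_into_groups(pairs, n):
    pairs = list(pairs)
    random.shuffle(pairs)
    return [pairs[i::n] for i in range(n)]

def interleaved_epoch(model, task_data, replay_buffer, divisions, train_on):
    for task_group, replay_group in zip(split_into_groups(task_data, divisions),
                                        split_into_groups(replay_buffer, divisions)):
        train_on(model, task_group)      # one group of D_t ...
        train_on(model, replay_group)    # ... followed by one group of R_t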
§.§ Rehearsal Sequence of Replay Samples
We use the interleave divisions 1 and 600 for all the experiments in this subsection and vary the rehearsal sequence of data samples in R_t by taking into account the two factors: the sample difficulty levels and the increasing or decreasing directions of sample difficulty levels.
To measure whether a sample is easy or hard to learn, we introduce two difficulty measures: (1) the confidence score difficulty metrics and (2) the distance vector difficulty metrics. The confidence score difficulty metrics were used to assess whether a teacher network with full knowledge of the entire dataset D predicted high or low confidence of the given sample belonging to its ground truth class label. Specifically, each image within R_t was input to a teacher network. The teacher network is based on a MobileNetV3 (small) architecture, pre-trained on the entire dataset D. After this, the confidence score for the ground truth class of each sample was extracted from the teacher network’s output. R_t was then sorted according to its individual sample’s confidence score, where a higher confidence score means that the sample is easier to learn for Θ.
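A minimal PyTorch sketch of this metric is given below; teacher stands for the pre-trained MobileNetV3 (small) network, and images and labels for a batch of replay samples already placed on the correct device.

# Sketch: confidence-score difficulty. A higher ground-truth confidence from
# the teacher network marks an easier replay sample.
import torch

@torch.no_grad()
def confidence_scores(teacher, images, labels):
    teacher.eval()
    probs = torch.softmax(teacher(images), dim=1)       # (N, num_classes)
    return probs[torch.arange(labels.numel()), labels]  # p(ground-truth class)

# Easy-to-hard order: torch.argsort(confidence_scores(...), descending=True)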
However, in the CIL setting, having a teacher network with full access to the whole dataset is impractical, as the data becomes available incrementally over tasks. Hence, we employed the distance vector difficulty metrics, widely used in the literature <cit.>. Intuitively, if a sample is closer to the other samples in the memory buffer, it is easier for Θ to learn and to generalize to other samples as well.
The 2nd last layer from a ResNet-50 model <cit.>, pretrained on the ImageNet dataset, was used to extract the feature vector of each sample in R_t. A Euclidean distance matrix was created, where the pairwise Euclidean distance for all the samples based on their feature vectors was computed. We then compute the sum of each row of the matrix and denote this column vector as a distance vector. Each element in this distance vector represents how much a particular sample differs from all other samples in the feature space. A smaller value in the distance vector means that the particular replay sample is easier to learn for Θ.
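The distance-vector metric admits an equally short sketch; features is assumed to hold the ResNet-50 penultimate-layer feature vectors of the samples in the replay buffer, one row per sample.

# Sketch: distance-vector difficulty. Each entry is the row sum of the
# pairwise Euclidean distance matrix; a smaller value marks an easier sample.
import torch

def distance_vector(features):                 # features: (N, d) tensor
    dists = torch.cdist(features, features)    # (N, N) pairwise distances
    return dists.sum(dim=1)

# Easy-to-hard order: torch.argsort(distance_vector(features))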
We introduce a series of rehearsal sequences in the orders of either easy-to-hard samples or hard-to-easy samples, where the difficulty levels of each sample are determined by either the confidence score difficulty metrics or the distance vector difficulty metrics.
As the previous study has shown that the class orders are also essential for continual learning <cit.>, here we also explore the effect of the class orders during replays. When we design the rehearsal sequence based on class difficulties in R_t, we adapt the two sample-level difficulty measures above to compute class-level difficulty measures by taking the average over all samples of the same class in R_t. We then sort all the samples in R_t by their class difficulty metrics, regardless of their individual sample difficulty scores.
Samples in R_t sorted by their class difficulties
were then presented to the model Θ in either the easy-to-hard or hard-to-easy
orders.
§.§ Selection of Samples for Replay Buffer
In common practice, selecting samples for R_t+1 from task t is often conducted in a random manner <cit.>. In contrast to the previous works, we vary the sample selection criteria for R_t+1 as follows: selecting only the easiest samples from task t for R_t+1, selecting the hardest samples from task t for R_t+1, and selecting samples that are uniformly distributed across difficulty levels from task t for R_t+1. The difficulty levels are judged based on the confidence scores and the distance vectors defined in the previous subsection. We use interleave division numbers 1 and 600 for all the experiments in this subsection.
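The three criteria can be expressed with one small helper; difficulties is assumed to be ordered so that smaller means easier (confidence scores would be negated first to follow this convention).

# Sketch: choose k samples for the replay buffer by difficulty.
# 'difficulties' follows the convention that smaller values are easier.
import torch

def select_for_buffer(difficulties, k, strategy):
    order = torch.argsort(difficulties)                 # easy -> hard
    if strategy == "easiest":
        return order[:k]
    if strategy == "hardest":
        return order[-k:]
    if strategy == "uniform":                           # spread over difficulties
        idx = torch.linspace(0, len(order) - 1, k).round().long()
        return order[idx]
    raise ValueError(f"unknown strategy: {strategy}")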
§ RESULTS
§.§ Frequent Replays Enhance Performances
We report F and Avg. Accu. as a function of interleave divisions in Table <ref>.
Notably, we observed that the interleave division is an important factor influencing the continual learning performance of the replay method, with larger interleave divisions leading to better performance, as indicated by the decreasing F and increasing Avg. Accu. over all the tasks. It is possible that the model parameters at large division numbers are updated more frequently for both the current task and all previous tasks, resulting in minimal forgetting. However, we also note that the continual learning performance saturates at interleave division number 120. This implies that increasing interleave divisions beyond optimal values brings no extra benefits in continual learning.
§.§ Easy-To-Hard Rehearsal Sequences are Beneficial
We studied the models trained with different rehearsal sequences sorted in easy-to-hard or hard-to-easy curricula based on sample-level or class-level difficulty measures computed from either the confidence scores or distance vectors. We reported the Avg. Accu. results in Figure <ref> and F scores in Appendix and made four key observations. First, aligning with the observations in Table <ref> and the discussion from the previous subsection, large interleave divisions benefit continual learning models with higher average accuracy and less forgetting. Second, rehearsal sequences sorted by instance-level difficulties lead to much better continual learning performances (compare red bars versus blue bars). Third, the confidence score is a better evaluation metric measuring instance-level difficulties, as shown by the bars with and without texture patterns. Finally, the models trained with the easy-to-hard rehearsal sequences outperform the ones with reversed rehearsal sequences (compare light versus dark grey bars). It is possible that easy-to-hard rehearsal sequences make the models converge faster on the previous tasks due to more stable gradient updates; hence, the sequences lead to minimal forgetting and higher classification accuracy. We also compared the continual learning performance for both the offline method and the continual learning method with various curricula and observed that there still exists a large performance gap between these two.
§.§ Replays with Only Hard Data Hurt Performances
Here, we explore the effect of different sample selection strategies for replay samples in terms of sample difficulty levels based on distance vectors or confidence scores. From Figure <ref>, our observations indicate that exclusively choosing the most challenging replay samples leads to inferior performance compared to selecting the easiest samples or incorporating samples with a balanced distribution of difficulty levels. Selecting samples with a uniform distribution of difficulty levels yields the best continual learning performance. This outcome may be attributed to the fact that difficult replay samples result in less flat loss landscapes, which in turn make the training process more challenging and slower to converge <cit.>. A curriculum for training the models to rehearse from the easiest to the hardest samples is the best, as it balances the greater precision in data fitting due to the hardest samples and the fast convergence speed during training due to the easier samples. Similar to the previous subsection, we also note that the confidence score is a better measure of sample difficulty levels than the distance vectors.
§ CONCLUSION
Our study
examines the role of curricula during replays in the class-incremental learning setting in continual learning. We designed and conducted a series of controlled experiments to study three key questions on replays: how often to replay, what data to replay, and in what sequence to replay.
Across the two common image datasets, our experimental results shed light on the underlying principles of replay methods in continual learning and reveal the good curricula design choices for replay methods.
These curricula designs not only facilitate positive knowledge transfers (which has been explored in existing curriculum learning literature), but also mitigate catastrophic forgetting (a significant problem we need to solve in continual learning). Specifically, we
found that (1) replays should happen frequently; (2) only rehearsing on the most difficult exemplars hurts continual learning performances; and (3) rehearsals on samples with increasing difficulty eliminate forgetting more than its reversed difficulty orders.
There are numerous other possible choices of curricula designs for replay methods, such as a unified difficulty metric considering both confidence scores and distance vectors or the use of a student feedback loop to update the difficulty scores. In the future, we will look into the role of curricula under
stringent continual learning conditions, such as learning with limited training time or noisy data. We will also conduct experiments on other large-scale datasets and apply our replay curriculum to existing replay-based continual learning algorithms.
§ ACKNOWLEDGEMENTS
This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025), its NRFF award NRF-NRFF15-2023-0001, Mengmi Zhang's Startup Grant from Agency for Science, Technology, and Research (A*STAR), and Early Career Investigatorship from Center for Frontier AI Research (CFAR), A*STAR. The authors declare that they have no competing interests.
§ APPENDIX
§.§ Experimental Details
For experiments on both ciFAIR-10 and ciFAIR-100, PyTorch’s default implementation of cross entropy loss was used for object classification tasks. The SGD algorithm was used as the optimizer. The learning rate was set at a constant of 0.001. Momentum was fixed at 0.9. A batch size of 32 is used.
For ciFAIR-10, we employ a 2-layer 2D-convolutional network with 6 and 16 channels in the successive layers, followed by 3 fully connected layers with 400, 120 and 84 hidden units respectively. ReLU was used as the activation function.
We follow the standard training and testing data splits from the original ciFAIR-10.
In every task, the model is trained for 250 epochs. Each experiment is conducted with 20 runs initialized with 20 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 20 runs is reported.
For ciFAIR-100, PyTorch's implementation of MobileNetV3 (small) was used, including the default layers and activation function. We used a custom training, validation, and test data splits with a ratio of 9:1:2, and a stopping criteria for training depending on the validation loss. The ciFAIR-100 images were upscaled to 72x72 using PyTorch's bicubic interpolation function before training.
§.§ More Results and Analysis
We reported the continual learning performance on ciFAIR-10 dataset of the models trained with the three types of curricula as elaborated in Experiments Section.
See Table <ref> for interleave divisions, Figure <ref> for rehearsal sequences, and Figure <ref> for sample selections.
All the tables and figures on ciFAIR-10 dataset follow the same design conventions as the corresponding tables and figures on ciFAIR-100 dataset in the main text. The conclusions from the results of ciFAIR-10 dataset are consistent with the ones on the ciFAIR-100 dataset.
|
http://arxiv.org/abs/2307.07311v1 | 20230714124420 | Existence of optimal flat ribbons | [
"Simon Blatt",
"Matteo Raffaelli"
] | math.DG | [
"math.DG",
"math.CA",
"49Q10, 53A05 (Primary) 49J45, 74B20, 74K20 (Secondary)"
] |
Existence of optimal flat ribbons
Simon Blatt
Department of Mathematics
University of Salzburg
Hellbrunnerstraße 34
5020 Salzburg
Austria
[email protected]
Matteo Raffaelli
Institute of Discrete Mathematics and Geometry
TU Wien
Wiedner Hauptstraße 8-10/104
1040 Vienna
Austria
[email protected]
The second-named author was supported by Austrian Science Fund (FWF) project F 77 (SFB “Advanced Computational Design”).
2020 Mathematics Subject Classification. Primary 49Q10, 53A05; Secondary 49J45, 74B20, 74K20
We apply the direct method of the calculus of variations to show that any nonplanar Frenet curve in ℝ^3 can be extended to an infinitely narrow flat ribbon having minimal bending energy. We also show that, in general, minimizers are not free of planar points, yet such points must be isolated under the mild condition that the torsion does not vanish.
July 14, 2023
=================
§ INTRODUCTION AND MAIN RESULT
In 1930 Sadowsky <cit.> announced without proof that the bending energy ∫_S H^2 dA of the envelope of the rectifying planes of a C^3 curve γ: [0, l] →ℝ^3, in the limit of infinitely small width w and under the assumption of nowhere vanishing curvature κ, is given by
w/2∫_0^l(κ^2 + τ^2)^2/κ^2 dt;
here τ is the torsion of γ. A proof of Sadowsky’s claim was given by Wunderlich in 1962 <cit.>.
The interest in this energy, originally motivated by the problem of finding the equilibrium shape of a free-standing developable Möbius band, has revived in the last twenty years; see, e.g., <cit.>.
Recently, the second-named author <cit.> extended Sadowsky's formula to any flat ribbon along γ. Indeed, he showed that the bending energy, in the limit of infinitely small width, is given by
w/2∫_J(κ_n^2 + τ_g^2)^2/κ_n^2 dt,
where κ_n and τ_g are the normal curvature and the geodesic torsion of γ, respectively, and where J = { t ∈ [0, l] |κ_n(t) ≠ 0 }; see also <cit.>.
In this context, a natural question arises: if γ is nonplanar, i.e., when the energy is bounded away from zero, does there exist an optimal flat ribbon along γ, that is, one having minimal bending energy? The purpose of this short note is to answer such question affirmatively when γ is a Frenet curve.
A preliminary step in our analysis consists in transforming (<ref>) into a proper functional. To do so, it is enough to observe that (the normals of) any two flat ribbons along the same curve are related by a rotation θ: [0, l] →ℝ about the common tangent.
In particular, when the principal normal P = γ” /κ is well-defined, the normal curvature and the geodesic torsion of γ with respect to the rotated normal P(θ) can be expressed by
κ_n = κcos(θ),
τ_g = θ' + τ.
Substituting these relations into (<ref>), we thus obtain the functional
E(θ)= ∫_0^l(κ^2cos^2θ + ( θ' + τ )^2)^2/κ^2cos^2θ dt.
Our main result pertains the functional E and is contained in the following theorem.
* There is a minimizer θ_min of E on W^1,4([0, l]), i.e., we have
E(θ_min) ≤ E(θ) for all θ∈ W^1,4([0, l]).
* For any a, b ∈ℝ, there is a minimizer of E on the subset
{θ∈ W^1,4([0, l]) |θ(0) = a and θ(l) = b }
of W^1,4([0, l]).
The theorem remains valid if one replaces κ∈ C^1([0,l]) in (<ref>) with any f ∈ C^0([0,l]).
The proof of Theorem <ref>, which will be finalized in section <ref>, is based on the direct method in the calculus of variation; see section <ref> for an alternative proof relying on Γ-convergence. It involves showing the coercivity and the weak sequential lower semicontinuity of E on W^1,4([0, l]) or a suitable closed subset therein. As we explain below, each of these tasks presents some challenge.
First of all, our functional is not coercive on W^1,4([0, l]), as it is 2π-periodic in θ. Consider, for example, the constant function θ_n = 2π n: it defines an unbounded sequence in W^1,4([0, l]), and yet E(θ_n) = ∫_0^l(κ^2 + τ^2)^2/κ^2 dt is (constant and) finite.
On the other hand, we can use this periodicity to our advantage: since W^1,4([0,l]) embeds into the Hölder space C^0, 3/4([0,l])—and hence also in C^0 ([0,l])—the fact that E is 2π-periodic allows us to only consider functions satisfying θ(0) ∈ [0, 2π]. In the next section we show that E is indeed coercive on the closed subset
V {θ∈ W^1,4([0,l]) |θ(0) ∈ [0, 2π]}
of W^1,4([0,l]).
As for the sequential lower semicontinuity, the main issue is that our integrand is not continuous. To deal with this problem we use an approximation argument. It turns out, as shown in section <ref>, that the sequential lower semicontinuity of E follows straightforwardly from that of the regular functional
E_ε(θ)= ∫_0^l(κ^2cos^2θ + ( θ' + τ )^2)^2/κ^2cos^2θ + ε^2 dt,
which for ε→ 0 approximates E monotonically from below.
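For intuition, E_ε can be approximated numerically; the sketch below uses a crude trapezoidal rule, and the particular κ, τ, θ, and grid size are arbitrary choices made only for illustration, not data from this paper.

# Sketch: numerical approximation of E_eps for a test function theta on [0, l].
# kappa, tau, theta and the grid are illustrative choices.
import numpy as np

def E_eps(theta, kappa, tau, l, eps, n=2000):
    t = np.linspace(0.0, l, n)
    th = theta(t)
    dth = np.gradient(th, t)                          # finite-difference theta'
    num = (kappa(t) ** 2 * np.cos(th) ** 2 + (dth + tau(t)) ** 2) ** 2
    den = kappa(t) ** 2 * np.cos(th) ** 2 + eps ** 2
    f = num / den
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))   # trapezoid rule

print(E_eps(theta=lambda t: 0.3 * t,
            kappa=lambda t: np.full_like(t, 1.0),
            tau=lambda t: np.full_like(t, 0.5),
            l=1.0, eps=1e-2))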
We emphasize that, precisely because of this discontinuity, the classical indirect method of the calculus of variations does not seem readily applicable in our case. Indeed, to use the Euler–Lagrange equation (in the standard way) one would need to assume that θ_min is free of singular points, i.e., that θ_min(t) ∉π/2+πℤ for all t ∈ [0,l]. Our next result confirms that such assumption is, in general, invalid.
Suppose that the torsion τ is a constant function satisfying
|τ| > n π/l + max|κ|.
Then the minimizer θ_min of E in W^1,4([0,l]) has at least n singular points.
Thus, according to Theorem <ref>, one can enforce the presence of singular points. On the other hand, it turns out that when the torsion does not vanish, the set of singular points is necessarily discrete.
Let t_0 be a singular point of θ_min with τ(t_0) ≠ 0. Then t_0 is an isolated singular point, i.e., there is an ε > 0 such that the ε-neighborhood I_ε (t_0 - ε, t_0 + ε) ∩ [0,l] around t_0 does not contain any other singular point.
It is somewhat surprising that both Theorems <ref> and <ref> can be obtained, as we do in section <ref>, on the basis of such elementary results as the fundamental theorem of calculus, the reverse triangle inequality, and Hölder's inequality.
§ COERCIVITY
Once and for all, let
Λ = max{‖τ‖_L^∞, ‖κ‖_L^∞}.
The purpose of this section is to prove the following lemma.
The functional E is coercive on the closed subset
V {θ∈ W^1,4([0,l]) |θ(0) ∈ [0, 2π]}
of W^1,4([0,l]), i.e., we have for θ∈ V that E(θ) →∞ if ‖θ‖_W^1,4→∞.
The strategy is to show that E →∞ if ‖θ' ‖_L^4→∞, and that ‖θ' ‖_L^4→∞ when ‖θ‖_W^1,4→∞.
First, note that
Λ^2 E(θ) ≥∫_0^l (θ'+τ)^4 /cos^2θ dt.
Since |θ' + τ|≥|θ' |/2 whenever |θ' |≥ 2 ‖τ‖_L^∞, and 2‖τ‖_L^∞≤ 2Λ, equation (<ref>) implies
Λ^2 E(θ) ≥1/16∫_|θ' |≥ 2 Λ (θ')^4 dt
≥1/16∫_0^l (θ')^4 dt - 1/16 (2Λ)^4 l = 1/16‖θ' ‖^4_L^4 - Λ^4 l,
and so if the homogeneous Sobolev norm ‖θ' ‖_L^4 goes to infinity, then so does E.
Next, let x ≠ y ∈ [0,l]. Applying Hölder's inequality, we obtain the following Morrey estimate:
|θ(x) - θ(y) | = |∫_[x,y]θ' dt |≤| x-y |^3/4‖θ' ‖_L^4([0,l]).
Together with θ(0) ∈ [0, 2π], this implies
‖θ‖_L^4≤‖θ(·) - θ (0)‖_L^4 + ‖θ(0)‖_L^4≤ l ‖θ'‖_L^4 + 2 π l^1/4,
from which we conclude that
‖θ‖_W^1,4≤ (1+l) ‖θ' ‖_L^4 + 2 π l^1/4,
as desired.
§ WEAK SEQUENTIAL LOWER SEMICONTINUITY
In this section we prove the sequential lower semicontinuity of E on W^1,4([0,l]).
The functional E is weakly sequentially lower semicontinuous, i.e., for any sequence of functions θ_n ∈ W^1,4([0,l]) with weak limit θ∈ W^1,4([0,l]) we have
E(θ) ≤lim inf_n →∞ E(θ_n).
As already mentioned in the introduction, the plan is to consider for ε >0 the regular functional
E_ε(θ)= ∫_0^l(κ^2cos^2θ + ( θ' + τ )^2)^2/κ^2cos^2θ + ε^2 dt.
Note that, for fixed θ, the integrand is monotonically decreasing in ε. Using Beppo Levi's monotone convergence theorem, we thus get that
E(θ) = lim_ε↓ 0 E_ε (θ) = sup_ε >0 E_ε (θ).
Now, if we knew that E_ε was weakly sequentially lower semicontinuous, then the following well-known lemma would imply the same for E. It simply states that the supremum of any collection of lower semicontinuous functions is again lower semicontinuous.
Let X be a topological space, and let A_i: X → [0, ∞], i ∈ I, be a family of lower semicontinuous functions. Then A: X → [0, ∞] defined by A(x) = sup_i ∈ I A_i (x) is lower semicontinuous.
We need to show that the set A^-1 ((a,∞]) is open for all a ∈ [0, ∞). First, using the fact that A= sup_i ∈ I A_i, we get
A^-1 ((a,∞]) = ⋃_i ∈ I A^-1_i ((a, ∞]).
Since A_i is lower semicontinuous for all i ∈ I, we observe that A^-1 ((a,∞]) is the union of open sets, and so open itself.
Now we finally have to show the lower semicontinuity of E_ε.
For all ε >0, the functional E_ε is weakly sequentially lower semicontinuous on W^1,4([0,l]).
As the integrand is convex in θ', to prove Lemma <ref> it would be enough to invoke <cit.>. We nevertheless include an independent proof—which combines Sobolev embeddings with Mazur's lemma—for the benefit of the reader.
Let θ_n converge weakly to θ in W^1,4([0,l]). Since W^1,4([0,l]) embeds compactly into C^0([0,l]) and the image of any weakly convergent sequence under a compact embedding converges strongly, we deduce that θ_n and (κ^2cos^2θ_n+ε^2)^-1 converge uniformly to θ and (κ^2cos^2θ+ε^2)^-1, respectively. Moreover, exchanging θ_n by a suitable subsequence, we may assume that E_ε(θ_n) converges to the limit inferior of the original sequence.
We now aim to get rid of θ_n in the expression of E_ε(θ_n) and only keep θ'_n. To this end, we first rewrite E_ε(θ_n)
as
E_ε(θ_n) = I_1(θ_n) + I_2(θ_n) + I_3(θ_n) + I_4(θ_n),
where
I_1(θ_n) = ∫_0^l (κ^2 cos^2θ_n + (θ'_n + τ)^2 )^2 (1/κ^2 cos^2θ_n + ε^2 - 1/κ^2 cos^2θ + ε^2) dt,
I_2(θ_n) = ∫_0^l κ^4 cos^4θ_n - κ^4cos^4θ/κ^2 cos^2θ + ε^2 dt,
I_3(θ_n) = ∫_0^l 2 (κ^2cos^2θ_n - κ^2 cos^2θ) (θ_n'+τ)^2 /κ^2 cos^2θ + ε^2 dt,
I_4(θ_n) = ∫_0^l (κ^2cos^2θ + (θ_n'+τ)^2)^2 /κ^2cos^2θ + ε^2 dt.
Since cosθ_n and (κ^2 cos^2θ_n + ε^2)^-1 converge uniformly to cosθ and (κ^2cos^2θ + ε^2)^-1, respectively, and
∫_0^l (κ^2cos^2θ_n + (θ_n'+τ)^2)^2 dt ≤ (Λ^2 + ε^2) sup_n E_ε( θ_n) < ∞,
Hölder's inequality implies that I_1(θ_n), I_2(θ_n), and I_3(θ_n) converge to 0 as n tends to ∞; hence that I_4(θ_n) converges to lim_n →∞ E_ε(θ_n).
To deal with the term I_4 we use that the integrand is convex in θ'_n. Let m ∈ℕ. Mazur's lemma <cit.> tells us that there are convex combinations ϕ_n = ∑_i=m^m+nλ_n,iθ_i of θ_m, …, θ_m+n that converge strongly to θ in W^1,4([0,l]). Using the convexity of the integrand, we obtain
sup_i ≥ m I_4(θ_i) ≥∑_i=m^m+nλ_n,i I_4(θ_i) ≥ I_4 (∑_i=m^m+nλ_n,iθ_i ) = I_4(ϕ_n).
As I_4 is continuous on W^1,4([0,l]) and the convex combinations ϕ_n converge to θ strongly, this yields
sup_i ≥ m I_4(θ_i) ≥ I_4 (θ).
Finally, taking the limit as m →∞, we deduce that
lim_m →∞ E_ε (θ_m) = lim_m →∞ I_4 (θ_m) ≥ I_4 (θ).
Hence E_4 is weakly sequentially lower semicontinuous on W^1,4([0,l]).
§ EXISTENCE OF MINIMIZERS
We are now ready to prove our main result, Theorem <ref> in the introduction.
To begin with, let
V' = {θ∈ W^1,4([0, l]) |θ(0) = a - 2π⌊ a/(2π) ⌋ and θ(l) = b - 2π⌊ a/(2π) ⌋},
where ⌊·⌋ denotes the floor function. We are going to show that E has a minimizer on both V and its subset V', which is also closed and convex in W^1,4([0,l]). This way, the statement will follow directly from the periodicity of E.
Let thus θ_n be a minimizing sequence for E on either V or V'. By Lemma <ref>, the sequence θ_n is bounded in W^1,4([0,l]), and so, according to <cit.>, we can assume after passing to a subsequence that it converges weakly to a function θ_0∈ W^1,4([0,l]); in particular, since any closed and convex subset of a Banach space is weakly closed, we deduce that the limit θ_0 is contained in either V or V'.
Now, as E is weakly sequentially lower semicontinuous by Lemma <ref>, we have
inf_θ E(θ) = lim_n →∞ E(θ_n) ≥ E(θ_0) ≥inf_θ E(θ),
where the infimum is taken over V or V'. Hence these inequalities must be equalities, and so θ_0 is a minimizer of E on V or V'.
§ GAMMA-CONVERGENCE
Here we give an alternative proof of Theorem <ref>, based on the fundamental theorem of Γ-convergence <cit.>.
To begin with, note that E_ε is coercive on V for every fixed ε > 0: since the denominator of the integrand is bounded from above, a bound on E_ε controls ∫_0^l (θ' + τ)^4 dt, so minimizing sequences are bounded in W^1,4([0,l]) just as in Lemma <ref>. Having already shown that E_ε is weakly sequentially lower semicontinuous, we obtain the existence of minimizers for any ε > 0.
* There is a minimizer of E_ε on W^1,4([0, l]).
* For any a, b ∈ℝ, there is a minimizer of E_ε on the subset
W^1,4_ab([0,l]) = {θ∈ W^1,4([0, l]) |θ(0) = a and θ(l) = b }
of W^1,4([0, l]).
To apply the fundamental theorem, we first show that E_ε Γ-converges to E with respect to the weak topology.
The functionals E_ε Γ-converge to E on W^1,4([0,l]) with the weak topology as ε goes to 0.
To show the Γ-convergence of E_ε, we have to prove a liminf and a limsup inequality.
For the liminf inequality, suppose that θ_ε converges weakly to θ in W^1,4([0,l]), and choose a null sequence ε_n such that
lim_n →∞ E_ε_n (θ_ε_n) = lim inf_ε↓ 0 E_ε (θ_ε).
Since E_ε is pointwise nonincreasing in ε (the denominator of the integrand increases with ε), we have E_ε_n (θ_ε_n) ≥ E_ε_m (θ_ε_n) for all n large enough that ε_n ≤ε_m. As the functionals E_ε_m are weakly sequentially lower semicontinuous, this gives
lim_n→∞ E_ε_n (θ_ε_n) ≥lim inf_n→∞ E_ε_m (θ_ε_n) ≥ E_ε_m (θ) for all m ∈ℕ.
Letting m go to infinity and observing that the right-hand side converges to E(θ), we thus obtain
lim_n→∞ E_ε_n (θ_ε_n) ≥ E (θ),
as desired.
As for the limsup inequality, for θ∈ W^1,4([0,l]) we simply take θ_ε = θ for all ε >0 as recovery sequence. This we can do, because the monotonicity of the integrand implies
lim_ε↓ 0 E_ε (θ) = E(θ)
via Beppo Levi's monotone convergence theorem.
Having shown that the approximating functionals E_ε have a minimizer and Γ-converge to E, the missing ingredient needed to deduce the existence of a minimizer of E is equicoercivity, which we discuss below.
A family of functionals F_α : X →ℝ, α∈ I, on a normed vector space X is said to be equicoercive if the set
⋃_α∈ I{F_α≤ t}
is bounded for all t ∈ℝ.
The family of functionals E_ε, 0 < ε≤ 1 is equicoercive on V.
As E_ε is pointwise nonincreasing in ε >0, we have
E_ε≥ E_1,
and so
⋃_ε∈ (0,1]{E_ε≤ t}⊂{E_1 ≤ t}.
But the set on the right-hand side is bounded, as we know that E_1 is coercive on W^1,4([0,l]).
Applying <cit.>, we finally get the existence of a minimizer of E.
The infimum of E is attained and we have
min_θ E(θ) = lim_ε↓ 0min_θ E_ε(θ),
where the minima are taken over W^1,4([0,l]) or W^1,4_ab([0,l]).
To keep the exposition as self-contained as possible, we close this section by giving an independent proof of Theorem <ref>.
From E_ε≤ E we immediately get the inequality
inf_θ E(θ) ≥lim_ε↓ 0min_θ E_ε(θ).
Moreover, as E(0) = ∫_0^l κ ^2 + τ^2 dt ≤Λ ^2 l + ‖τ‖^2_L^2 and E_ε is monotonically decreasing in ε > 0, we have the uniform bound
min_θ E_ε(θ) ≤Λ^2 l + ‖τ‖^2_L^2.
To show the existence of a minimizer of E, let ε_n > 0 be a null sequence, and choose minimizers θ_ε_n∈ V (resp., θ_ε_n∈ V') of E_ε_n. As E_ε_n (θ_ε_n) ≤Λ ^2 l+ ‖τ‖_L^2^2, Lemma <ref> ensures that the sequence θ_ε_n is bounded in W^1,4([0,l]). Hence, exactly as in the proof of Theorem <ref>, we can assume that it converges weakly to some θ_0 ∈ V (resp., θ_0 ∈ V') in W^1,4([0,l]).
Now, applying the liminf inequality, we obtain
inf_θE(θ) ≤ E(θ_0) ≤lim inf_n→∞ E_ε_n (θ_ε_n) = lim_ε↓ 0min_θ E_ε(θ),
where the infimum and minimum are taken over V (resp., V'). Together with (<ref>), this implies
lim_ε↓ 0min_θ E_ε(θ) ≤inf_θE(θ) ≤ E(θ_0) ≤lim inf_n→∞ E_ε_n (θ_ε_n)
= lim_ε↓ 0min_θ E_ε(θ),
and so this series of inequalities must hold with equality. Especially, we have
E(θ_0) = min_θ E (θ) = lim_ε↓ 0min_θ E_ε(θ),
and the theorem follows by periodicity.
§ NUMBER OF SINGULAR POINTS
Let t ∈ [0, l]. We say that t is a singular point of θ∈ W^1,4([0,l]) if θ(t) ∈π/2 + πℤ, i.e., if the denominator of our integrand vanishes. Clearly, when E(θ) <∞, singular points of θ correspond to planar points of the associated flat ribbon. The purpose of this section is to show that under certain assumptions a minimizer has “many” singular points; besides, we will see that if τ(t) ≠ 0, then t is at most an isolated singular point.
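For a sampled θ, for instance the discrete minimizer computed in the sketch above, a crude numerical diagnostic for singular points is to look for sign changes of cos θ between neighbouring grid points; this is only an illustration (it misses points where cos θ touches zero without changing sign) and is not part of the analysis below.

import numpy as np

def approximate_singular_points(theta, t):
    # Parameter values where cos(theta) changes sign, i.e. where the sampled
    # theta crosses the set pi/2 + pi*Z.
    c = np.cos(theta)
    crossings = np.where(c[:-1] * c[1:] < 0)[0]
    return t[crossings]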
We begin with a lemma. To state it, let us introduce for any compact interval I ⊂ [0, l] the energy
E_I(θ)= ∫_I(κ^2cos^2θ + ( θ' + τ )^2)^2/κ^2cos^2θ dt.
Let a, b be such that 0 ≤ a < b ≤ l, and suppose that the torsion τ never vanishes in [a,b]. Then
|θ(a) - θ(b)|≥ A (b-a) - B ^1/2 (b-a)^3/4E_I (θ) ^1/4,
where
A = min_t∈ [a,b]|τ (t)|
and
B= max_t ∈ [a,b]|κ(t) cosθ(t) |.
First, an application of the fundamental theorem of calculus and the reverse triangle inequality yields
|θ(a) - θ(b) | = |∫_a^b θ' dt | = |∫_a^b - τ + (θ' + τ) dt |≥|∫_a^b τ dt | - ∫_a^b |θ' + τ| dt.
Note that, as τ never vanishes in [a,b], it must have a sign there. Hence,
|∫_a^b τ dt|≥ (b-a) min_t∈ [a,b]|τ (t) | = (b-a) A .
Moreover, by Hölder's inequality,
∫_a^b |θ'+ τ| dt
≤ (b-a)^3/4 (∫_a^b |θ' + τ|^4 dt )^1/4
≤max_t ∈ [a,b]|κ(t) cosθ(t) |^1/2 (b-a)^3/4(∫_a^b |θ' + τ|^4/κ^2 cos^2θ dt )^1/4
≤max_t ∈ [a,b]|κ(t) cosθ(t) | ^1/2 (b-a)^3/4 E_I (θ) ^1/4
= B ^1/2 (b-a)^3/4 E_I (θ) ^1/4.
Summing up, we get
|θ(a) - θ (b)|≥ A (b-a) - B ^1/2 (b-a)^3/4 E_I (θ) ^1/4,
which is the desired conclusion.
Applying Lemma <ref> for a=0 and b=l, we can now deduce that minimizers of E are generally not free of singular points. In fact, by making sure that the right-hand side of (<ref>) is large enough, one can enforce the presence of any given number of singular points—as explained by Theorem <ref> in the introduction (reproduced below for the reader's convenience).
Suppose that the torsion τ is a constant function satisfying
|τ| > n π/l + max|κ|.
Then the minimizer θ_min of E in W^1,4([0,l]) has at least n singular points.
Let K = max_t∈ [0,l]|κ(t) |. Using the linear function θ_0(t) = - ∫_0^t τ (s) ds as competitor, we obtain
E(θ_min) ≤ E(θ_0) = ∫_0^l κ ^2 cos^2θ_0 dt ≤ K^2 l.
Now we apply Lemma <ref> to the complete interval, i.e., for a=0 and b=l. Since A= min_t∈ [0,l]|τ (t)| = |τ| and B= max_t∈ [0,l]|κ(t) cos (θ(t)) |≤ K, equation (<ref>) yields
|θ(0) - θ(l) |≥ l |τ| - K^1/2 l^3/4 K^1/2 l^1/4 = l ( |τ| - K),
which is larger than n π if |τ| > n π/l + K. Since θ_min is continuous, it attains every value between θ_min(0) and θ_min(l), and any interval of length greater than n π contains at least n points of π/2 + πℤ; hence θ_min has at least n singular points.
Although, as we just saw, singularities abound, the following generalized version of Theorem <ref> shows that under rather weak assumptions they form a discrete set. The proof is again based on Lemma <ref>.
Suppose that E(θ) < ∞, and let t_0 be a singular point of θ with τ(t_0) ≠ 0. Then t_0 is an isolated singular point, i.e., there is an ε > 0 such that the ε-neighborhood I_ε = (t_0 - ε, t_0 + ε) ∩ [0,l] around t_0 does not contain any other singular point.
The simple heuristic behind the proof is that, in the neighborhood of a singular point, we must have
θ' + τ≈ 0,
as otherwise the energy cannot be bounded. So when τ does not vanish, the function θ must be strictly monotone.
Let t_0 be a singular point of θ, and choose ε >0 such that |θ(t) - θ(t_0) |≤π for all t∈ I_ε. Noting that, by Hölder's inequality,
|θ(x) - θ(y)|≤| x-y |^3/4‖θ'‖_L^4([x,y])≤| x-y |^3/4Λ^1/2 E^1/4_[x,y] (θ),
we first obtain, using that cos is Lipschitz continuous with Lipschitz constant one,
|cos(θ(t))| = |cos(θ(t)) - cos(θ(t_0))|≤|θ(t) - θ(t_0)|≤| t-t_0 |^ 3/4Λ^1/2 E^1/4_[t_0,t](θ)
≤| t- t_0| ^3/4Λ^1/2 E^1/4_[t-ε, t+ε](θ).
Then, as
B= max_x ∈ [t_0,t]|κ(x) cos( θ(x)) |≤| t-t_0 |^3/4Λ ^1/2 E^1/4(θ),
an application of Lemma <ref> with [a,b] = [t_0,t] gives
|θ(t) - θ(t_0)| ≥| t-t_0 |min_x ∈ [t_0- ε, t_0 + ε] ∩ [0,l]|τ(x)| - Λ^1/4| t-t_0 |^3/4 + 3/8 E^1/4 + 1/8 (θ )
= | t-t_0 |(min_x ∈ [t_0- ε, t_0 + ε] ∩ [0,l]|τ(x)| - Λ^1/4| t-t_0 |^1/8 E^3/8 (θ ))
≥| t-t_0 |(min_x ∈ [t_0- ε, t_0 + ε] ∩ [0,l]|τ(x) | - Λ^1/4ε^1/8 E^3/8 (θ ) ).
Finally, since
min_x ∈ [t_0- ε, t_0 + ε] ∩ [0,l]|τ(x) | - Λ^1/4ε^1/8 E^3/8 (θ ) →|τ(t_0)| >0 as ε→ 0,
it follows that there is an ε >0 such that
|θ(t) - θ(t_0) | >0 for all t ∈ I_ε with t≠ t_0.
This shows that I_ε does not contain any other singular point.
|
http://arxiv.org/abs/2307.05610v1 | 20230710224010 | Substance or Style: What Does Your Image Embedding Know? | [
"Cyrus Rashtchian",
"Charles Herrmann",
"Chun-Sung Ferng",
"Ayan Chakrabarti",
"Dilip Krishnan",
"Deqing Sun",
"Da-Cheng Juan",
"Andrew Tomkins"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Substance or Style: What Does Your Image Embedding Know?
Vision foundation models based on masking or contrastive learning are heavily studied in terms of semantic signals. Less understood is what non-semantic information these embeddings contain. For example, can we detect a blurred, recolored, or brightened image using an embedding like MAE, SimCLR, or CLIP without accessing the pixels? To address this, we design a systematic transformation prediction task and measure the visual content of six models that use different training schemes. Surprisingly, all six embeddings (including SimCLR) capture enough information to identify dozens of transformations.
We further compare the sensitivities of each embedding. Masking-based models (CAN and MAE) perform best on fine-grained transformation prediction, while image-text models (CLIP and ALIGN) generalize better to unseen transformations. Finally, we demonstrate that representations can contain object-level content and low-level details without sacrificing either. Overall, modern embeddings encode a variety of visual aspects, despite being trained on large datasets in a self-supervised way.
§ INTRODUCTION
Machine learning systems often use embeddings from large pre-trained models as a way to standardize and improve data representations. Such embeddings, known as foundation models, provide a `general-purpose’ data encoding method <cit.>. The models perform very well on many downstream tasks. They can be used with or without fine-tuning and even in a zero- or few-shot way.
Despite the popularity of foundation models, it is unclear what qualities of these embeddings are responsible for their good performance. A reasonable hypothesis is that better embeddings have a higher capacity in the sense that they capture more information about the raw data. An opposing hypothesis is that these embeddings precompute important, high-level features while ignoring low-level attributes that are immaterial for downstream tasks. Considering vision foundation models, the embeddings may (a) capture all the information in the image and achieve compression because natural images lie on a low dimensional manifold; or (b) compute a lossy compression, where their pre-training objectives guide what information they keep or discard. It also may be that some models are closer to (a) while others resemble (b). Before we, as a community, adopt foundation models, we should understand their predispositions.
One challenge is that researchers evaluate embeddings on the same axes. Prior work shows that foundation models perform well on downstream tasks. However, these findings stem from benchmark tasks, such as ImageNet, VTAB <cit.>, or COCO <cit.>. Conclusions from these analyses focus on the semantic content of embeddings (e.g., object-level details). We can only speculate about how the pre-training algorithm impacts what other visual aspects the model captures. A masked autoencoder (MAE) <cit.> fills in portions of the image, so MAE-based embeddings may be more sensitive to style. Contrastive losses like SimCLR <cit.> could encourage invariance to the augmentations used to form image pairs during training. Newer models such as CAN <cit.> combine both masking and contrastive pretraining and add other elements such as predicting random noise. Image-text models, such as CLIP <cit.> and ALIGN <cit.>, may learn visual concepts beyond the object-level categories of image datasets. In this paper, we investigate these speculations and perform complementary experiments to understand what non-semantic information these embeddings contain.
§.§ Predicting Transformations
If we aim to go beyond a semantic evaluation of models, we need to measure whether other types of information appear in the embeddings. We also want an approach that applies to arbitrary vision models, regardless of their training methods, dataset, architecture, etc. One way to accomplish this is the following experiment. We can modify an image and then see if this change is detectable after computing the image’s embedding. For example, consider two images: one that is a sample from ImageNet, and another where the same image has been slightly blurred. Then, compute embeddings for both images and throw out the original pixels. Assume that in both cases a linear probe will predict the correct ImageNet class for the image. The next question is: does the embedding contain enough information to determine which image was blurred and which was unaltered?
If the embedding contains sufficient information to detect blurring, then it should be possible to train a network to perform well on a `blurry or not’ classification task. Specifically, we can apply Gaussian blur to all images in ImageNet, and we can train a network to predict whether the transformation has been applied given access only to the image embeddings. Foundation models that capture more of the transformation information will perform better on this task, whereas models that perform poorly must be insensitive to the transformation. Note that freezing the embedding model is crucial for this analysis. If we fine-tuned on the transformation prediction task, then we would not know whether the original model captured the transformation.
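As a concrete illustration, a minimal sketch of such a `blurry or not' probe on precomputed embeddings could look as follows. The callable embed (mapping a PIL image to a 1D feature vector from a frozen encoder) and the choice of scikit-learn's logistic regression as the probe are assumptions made for the example, not the exact pipeline used in our experiments.

import numpy as np
from PIL import Image, ImageFilter
from sklearn.linear_model import LogisticRegression

def make_pairs(paths, embed, radius=4):
    # Embed each image twice: unaltered (label 0) and Gaussian-blurred (label 1).
    feats, labels = [], []
    for p in paths:
        img = Image.open(p).convert("RGB")
        blurred = img.filter(ImageFilter.GaussianBlur(radius))
        feats += [embed(img), embed(blurred)]   # the pixels are discarded afterwards
        labels += [0, 1]
    return np.stack(feats), np.array(labels)

# X, y = make_pairs(train_paths, embed)         # `train_paths` and `embed` are placeholders
# probe = LogisticRegression(max_iter=1000).fit(X, y)
# print("blur-detection accuracy:", probe.score(*make_pairs(test_paths, embed)))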
The usefulness of being sensitive to blurring or other transformations depends on the downstream task. Some embeddings might completely ignore the blurring (leading to a blurring-invariant model) or encode the blurring in a consistent way (leading to a blurring-equivariant model). The former approach has advantages for stability, where the embedding should be unchanged. The latter equivariant approach is desirable for data cleaning or content filtering. We posit that if foundation models are going to be general-purpose, they should create nearly lossless embeddings, including low-level details. This is crucial for tasks such as determining if an image is a painting or photograph, if it has been taken during the day or night, if it is high-fidelity or grainy, or if it has been edited from the original.
§.§ Our Contributions
We propose a transformation prediction task to measure the sensitivity of embeddings to changes to an image.
Given the embedding of an image from a pre-trained model, the goal of this task is to predict how the image has been modified (e.g., blurred, brightened, darkened, noised, saturated, solarized, stylized, etc). We carefully design the set of transformations, ensuring enough variety to elicit whether embeddings capture different types of visual content. We also have two variations: a fine-grained version, where the train and test sets use the same 31 transformations, and a coarse-grained version, where we group together similar transformations into 10 classes and hold out some transformations.
All of the embeddings that we consider perform well at predicting transformations. This is surprising.
<ref>
shows that several transformations alter images in a subtle way. The frozen embedding networks must retain a lot of low-level image information, despite not being explicitly trained to do so. Our transformation prediction metric is orthogonal to, and hence complements, existing semantic accuracy measurements for image embeddings.
The transformation prediction tasks lead to new insights about the embeddings. CAN and MAE are more sensitive than SimCLR in some cases. Specifically, SimCLR is fairly invariant to hue, saturation, and brightness, but it is still quite sensitive to other transformations (including blurring, which is part of the contrastive training). We also evaluate the image embeddings of CLIP and ALIGN. These image-text models excel in recognizing the concept of style transfer. They can generalize to new styles after being only trained on a few. As a baseline, we test a supervised model, and we find that it performs comparably on the fine-grained task but significantly worse on the coarse-grained version.
A natural next question is whether post-processing the embedding to improve transformation prediction will affect the semantic accuracy (e.g., ImageNet top-1 accuracy). We actually find that it is possible to achieve good performance on both metrics when training a 2-layer MLP with two heads and optimizing a multi-task loss. This implies that the transformation information does not interfere with the object-level features. Both can coexist.
With transformed images there is a related question around robust accuracy. Common wisdom suggests that generalizing to OOD data can be facilitated by encouraging invariance to corruptions or style <cit.>. However, we find that increasing sensitivity to transformations (less invariance) does not significantly impact the semantic accuracy on transformed images.
In summary, our main findings are (see also <ref>):
* Foundation models capture information about dozens of transformations. Hence, we can use embeddings to detect a domain shift due to transformations.
* Vision models with masking (CAN, MAE) are more sensitive than those using only a contrastive loss (SimCLR)
to changes in hue, saturation, and brightness.
* Image-text models (CLIP and ALIGN) generalize better than image-only embeddings when classifying unseen transformations, such as new styles.
* Many errors come from mistaking images as normal (i.e., `Identity' transform) when they have been modified in unseen ways (e.g., background blur, grayscale, line shift).
* Sharing one hidden layer for semantic and transformation prediction does not harm the performance on either task.
Overall, our results support the hypothesis that foundation models provide a higher-capacity representation, rather than ignoring irrelevant features.
§ RELATED WORK
Foundation Models.
SimCLR <cit.> trains on pairs of transformed images, and the representation is penalized if the embeddings differ. The embedding should be less sensitive to these transformations (cropping, color distortion, and Gaussian blur). MAE <cit.> trains on images that have been subject to patch-wise masking and reconstructs the missing pixels.
CAN <cit.> combines contrastive learning, masked autoencoders, and noise prediction. Image embeddings also come from multi-modal models, such as CLIP <cit.> and ALIGN <cit.>. Both use a contrastive loss to form visual and language representations of image-text pairs.
Work has also investigated fine-tuning <cit.> and dataset quality <cit.>.
Compared to vision, much more work studies the information captured by language models <cit.>.
Invariance and Equivariance. The popularity of contrastive losses has led researchers to question whether embeddings should be encouraged to be insensitive (a.k.a., invariant) or sensitive (a.k.a., equivariant) to transformations <cit.>. This extends research that aims to understand rotation prediction <cit.>, a seminal task for unsupervised representation learning <cit.>. There have been efforts to measure CNN equivariance through individual features <cit.>, and to examine embeddings by reconstructing images <cit.>. Augmentation-aware learning has been proposed to improve semantic accuracy <cit.>. Another direction shows that
contrastive training learns domain-sensitive features, which helps OOD generalization <cit.>.
Transformation prediction. Work on visual chirality shows that, surprisingly, it is possible to train a model to detect whether an image has been horizontally flipped <cit.>.
A related effort considers predicting domains, such as painting, sketch, or cartoon <cit.>. Researchers have identified nuisance factors of X-ray images <cit.> even with a pre-trained chest radiography model <cit.>. Part of training diffusion models involves reversing the (artificial) Gaussian noise in an image, and part of the optimization involves a noise-prediction loss <cit.>. Recent work on cold diffusion considers reversing other transformations, including deblurring, inpainting, super-resolution, and snow removal <cit.>. Compared to prior work, we use transformation prediction to probe image embeddings, and we consider a much broader set of transformations.
§ PROBING EMBEDDINGS BY PREDICTING TRANSFORMATIONS
Evaluating only the typical semantic accuracy on class labels leaves open questions regarding what information from the raw data is retained or lost in the embedding. Therefore, we also measure the ability of a network to predict the type of transformation that has been applied to an image. To do so, we define a transformation prediction task along with new metrics. This task can be formulated for any dataset/task as long as there is a way to synthetically apply transformations.
§.§ Transformation Prediction Task
Assume we have T image transformations (<ref> shows examples). Here, for transformation, we take a broad definition. One option is a well-defined function, such as adding Gaussian noise with certain variance independently to each pixel. Another possibility is to have some random parameters, such as uniformly choosing a value in a range and increasing the image’s saturation by this much. Finally, we can have transformation families, containing several sub-transformations. For example, the family “color quantizing’’ could mean choosing a sub-transformation that modifies hue, inverts colors, or solarizes the image. Sub-transformations have their own (possibly random) parameters.
We apply each of the T transformations to all images in the training/test sets. This generates T+1 copies of the dataset, including the original images. Also, this process defines a (T+1)-way classification problem, labeling each image either with `Identity’ or one of the T transformations.
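Schematically, the (T+1)-way dataset can be assembled as in the sketch below; the transformation callables are placeholders (each may sample its own random parameters internally), and label 0 is reserved for `Identity'.

import random

def build_transform_dataset(images, transforms):
    # Return (image, transform_label) pairs; label 0 marks unaltered images and
    # label i marks the i-th transformation in `transforms`.
    labeled = [(img, 0) for img in images]
    for label, transform in enumerate(transforms, start=1):
        labeled += [(transform(img), label) for img in images]
    random.shuffle(labeled)
    return labeled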
Metrics. Our tasks involve both unaltered (clean) images and transformed ones, as well as a new label for the type of transformation. For a dataset such as ImageNet, which contains images x and semantic class labels y, we will use t to denote the transformation label of our augmented dataset, leading to a labeled triple (x,y,t). A network can predict the semantic label y, the transformation label t, or both in a multi-task scenario.
The transformation prediction accuracy is the fraction of images receiving the correct transformation label (the network does not see the class label). We use clean semantic accuracy to refer to the fraction of correctly predicted class labels on unaltered images (i.e., the transformation t is the identity). The obfuscated semantic accuracy is the fraction of correct class labels when the image has been transformed (i.e., t is not the identity).
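Given ground-truth and predicted semantic and transformation labels, the three metrics can be computed as in the following sketch; the array names are illustrative.

import numpy as np

def accuracy_metrics(y_true, y_pred, t_true, t_pred, identity_label=0):
    # Transformation prediction accuracy plus clean/obfuscated semantic accuracy.
    transform_acc = np.mean(t_pred == t_true)
    clean = t_true == identity_label                 # unaltered images
    clean_sem_acc = np.mean(y_pred[clean] == y_true[clean])
    obf_sem_acc = np.mean(y_pred[~clean] == y_true[~clean])
    return transform_acc, clean_sem_acc, obf_sem_acc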
§.§ Evaluating Frozen Image Embeddings
Consider an image x, let t be one of the T+1 transformations, and use t(x) to denote the transformed version of x.
For a frozen embedding model ϕ, we compute the embedding ϕ(t(x)).
We then train a network that takes ϕ(t(x)) as input and outputs a semantic label or a transformation label or both. In a multi-task setting with a two-headed network that outputs two labels, we independently measure the clean/obfuscated semantic and transformation accuracies.
Training a linear probe on top of the embedding ϕ(t(x)) is the simplest setting to predict transformation labels. The last-layer weights can be trained using the transformation labels (while the embedding model is fixed). We find that we can improve performance by using an MLP with a single hidden layer instead of a linear probe. In this case, training the hidden layer leads to a new representation that has been post-processed for transformation prediction. We can also do this in a multi-task way, incorporating the loss from both the semantic and transformation prediction tasks.
We do not fine-tune the embedding model itself. We expect that it would lead to improved transformation prediction accuracy. However, it would conflate the information in the original embedding with the new information learned from the fine-tuning. Freezing the model, on the other hand, allows us to draw conclusion about existing embeddings.
§.§ Fine-grained vs. coarse-grained
In our experiments, we will consider a fine-grained task (where the train and test sets use the same transformations) and a coarse-grained version (where the same label contains different sub-transformations). For both tasks, the post-processing network should learn which features of the embedding correspond to different transformation labels.
The fine-grained task has 31 labels, including `Identity' for unaltered images. In a few cases, we use the same transformation with disjoint parameter ranges as separate classes. Specifically, two categories come from each of (i) a low or medium amount of motion blur, (ii) a low or high amount of Gaussian blur, (iii) a low or medium amount of Gaussian noise, and (iv) increasing or decreasing the brightness. During test-time, the same transformation applied to an image will only differ in its randomized parameters that are restricted to different ranges.
In the coarse-grained task, the training set has 28 transformations, split across 9 categories, plus the Identity transformation. The test set has 43 sub-transformations, split across the same 9 categories, plus the Identity transformation. Hence, there are 15 held-out transformations that the network only sees during test time. We define the coarse categories so that the visual content should be similar in some way. For example, `Quantize' contains 7 recoloring options (4 for training and 3 held-out). The `Style Transfer' label has 13 style options (6 for training and 7 held-out). For some categories, there are no held-out sub-transformations (e.g., Icon Overlay, Image Overlay, Line Halftoning).
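The train/held-out structure of the coarse-grained task can be pictured with a configuration like the one below. Only categories explicitly named in the text are listed, the sub-transformation identifiers are placeholders, and the remaining categories are omitted; the counts, however, follow the description above.

COARSE_TASK = {
    "Identity":        {"train": ["identity"], "held_out": []},
    "Quantize":        {"train": [f"recolor_{i}" for i in range(1, 5)],      # 4 seen in training
                        "held_out": [f"recolor_{i}" for i in range(5, 8)]},  # 3 only at test time
    "Style Transfer":  {"train": [f"style_{i}" for i in range(1, 7)],        # 6 seen in training
                        "held_out": [f"style_{i}" for i in range(7, 14)]},   # 7 only at test time
    "Icon Overlay":    {"train": ["icon_overlay"], "held_out": []},
    "Image Overlay":   {"train": ["image_overlay"], "held_out": []},
    "Line Halftoning": {"train": ["line_halftoning"], "held_out": []},
}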
Justifying the transformations. When choosing the sets of transformations, we have tried to cover a range of visual effects. Noise affects individual pixels and blurring affects nearby regions. Overlays are independent of the image, while style transfer heavily depends on the content. The filtering and quantizing options focus on hue, saturation, or value separately. Some transformations are barely human-visible, and others are strikingly obvious. Of course, the space of all possible transformations is impossible to cover fully, but we aim to probe many aspects of embeddings.
§.§ Drawing conclusions about embeddings
We can use the transformation prediction task to measure if an embedding model captures certain visual content. Consider a transformation t, where t(x) denotes the transformed version of x. Assume we can train a post-processing network to predict that ϕ(t(x)) is transformed and ϕ(x) is not. Then, we can conclude that ϕ must preserve enough information about the image so t can be detected. That is, ϕ(x) ≠ϕ(t(x)). More interestingly, a network may succeed at predicting most transformations t from a set 𝒯 when they are applied to images in a dataset 𝒳. Hence, the sets A_t, ϕ = {ϕ(t(x)) | x ∈𝒳} for t ∈𝒯 are mostly disjoint. It is possible to use a sample from A_t, ϕ to determine t with high accuracy. We also believe the transformation prediction task is a direct measure of equivariance, as opposed to k-NN results <cit.>.
If the network cannot detect the transformation t, then we may conclude the opposite. The embedding ϕ does not preserve enough information. We can further qualify this based on the amount of post-processing required to extract this information. If t is detectable after zero or one layers, then the information must be readily accessible in ϕ(t(x)). Otherwise, if t can be detected but only after numerous layers, then the information is still present but can only be recovered after combining several sources of information from ϕ(t(x)). If no amount of post-processing suffices, then the embedding must truly be invariant, and ϕ(x) ≈ϕ(t(x)).
Given the above discussion, the fine-grained and coarse-grained tasks yield complementary insights. The benefit of the fine-grained task is that we can investigate the precision of the embedding's information. Distinguishing a blur of radius three vs. five should require more detailed information than distinguishing blurring vs. brightening. Also, using the same transformations for train and test simplifies the task.
In the coarse-grained task, the network does not see some sub-transformations during training, which enables us to measure a type of generalization. For example, consider transformations t and t' from the same class (e.g., two different styles). In the best case, we only use t during training, and the network can recognize that ϕ(t'(x)) is similar to ϕ(t(x)). It could be that the embeddings are close together or that ϕ encodes the style in some way. On the other hand, the network may fail to generalize, and predict ϕ(t'(x)) and ϕ(t(x)) differently. One conclusion is that ϕ may not be sensitive to t'. However, we will show later that prediction accuracy is quite high for the fine-grained task. The coarse-grained mistakes actually imply that ϕ captures both transformations but does so in a divergent way.
§ EXPERIMENTAL RESULTS
Datasets. We evaluate on transformed versions of ImageNet-1k <cit.>. In addition to the original image (Identity), we apply 30 transformations to each train/test image. This leads to 31 classes for the fine-grained transformation prediction task. We also construct a coarse-grained dataset with 10 categories, where each category contains one or more transformations along with a range of parameters (e.g., noise level or type of style transfer). The test set transformations form a superset of those applied to the training images. Full details in <ref>.
Metrics. We measure semantic and transformation prediction accuracies as defined in <ref>. In the fine-grained case, the model predicts one of 31 transformation classes; in the coarse-grained case, it predicts one of 10. For both cases, we average over a test set with size being the number of labels times the number of original images, i.e., (# classes) × 50k for ImageNet-1k. We measure semantic accuracy with ImageNet-1k class labels, separating the accuracy on clean and transformed (a.k.a., obfuscated) images.
Embedding Models. CAN, MAE, and SimCLR produce a 1024-dimensional embedding from a ViT L/16 trained on JFT-300M <cit.>. The SimCLR model also contains a projection to a 128-dimensional embedding that we use for one comparison. CLIP uses ViT L/14 for a 768-dimensional image embedding. ALIGN uses EfficientNet-L2 for the image encoder and outputs a 1376-dimensional embedding. Our baseline is a 1024-dimensional embedding from a supervised ViT L/16 trained on ImageNet-1k.
Post-processing. We pre-compute embeddings for all train and test images, and then we ignore the pixels. We then train a linear probe or a small MLP network on the frozen embeddings. Unless stated otherwise, the MLP has one hidden layer of width 2048, and we optimize it with ADAM and with a 0.2 dropout rate. We experimented with deeper/wider networks and with other dropout rates, but this did not lead to significantly different results in most cases. Note that while the embedding model is not trained on transformed images, the post-processing network can indirectly learn from them, depending on what information is in the embedding.
§.§ Do embeddings capture transformations?
We compare the transformation prediction performance of six embeddings in <ref>. All embeddings perform extremely well on the transformation detection task: over 93% accuracy for the fine-grained and over 79% accuracy for the coarse-grained. These embeddings preserve fairly detailed information about the input image that can be extracted with minimal post-processing (2-layer MLP).
§.§ What is the most equivariant embedding?
In the fine-grained task (a test of which embedding has the most detailed information about the image), the CAN embedding performs the best with MAE being a close second. Note that both CAN and MAE use masking as part of the self-supervised training. This suggests that filling in the image patches increases the transformation sensitivity. The SimCLR embedding performs fairly well, despite the expectation that a contrastive loss would lead to high levels of invariance (we discuss SimCLR more in <ref>). CLIP and ALIGN perform slightly worse than CAN on the fine-grained task but still quite well.
In the coarse-grained task (a test of how well an embedding's information about transformations can generalize), the two text-image embeddings (CLIP and ALIGN) perform better than all other methods. This suggests that training with text improves the generalization ability of the image embedding. We note that, for all methods, the decreased accuracy between fine-grained and coarse-grained occurs because the held-out sub-transformations present a challenging OOD task. Also, all self-supervised models perform significantly better than the supervised baseline, suggesting that optimizing an embedding directly for semantic information does not by default retain as much transformation information. In <ref> we analyze in detail the different kinds of mistakes that the embeddings make.
§.§ Isn't SimCLR supposed to be invariant?
<ref> shows transformation prediction results for SimCLR. We consider two layers of the embedding model. Specifically, `SimCLR embed' refers to the second-to-last layer and has 1024 dimensions (which is standard and used in <ref>). Then, the network projects this onto the last layer `SimCLR proj' to form a 128 dimensional vector. We see that `SimCLR embed' generally outperforms `SimCLR proj' on both fine- and coarse-grained datasets, and this holds regardless of the post-processing method. One implication is that the final projection layer of SimCLR is responsible for much of the invariance that we expect from a contrastive loss. On the other hand, the layer right before this retains more information about transformations.
We also control for the dimensionalities (1k vs. 128) by evaluating a network that has one hidden layer of width 128. With a small width, we still see a large improvement from using SimCLR embed vs. proj (+14.88% for fine-grained, +7.19% for coarse-grained). Finally, we can greatly improve the performance of SimCLR proj by post-processing with a width 16k network (+10.66% for fine-grained, +5.81% for coarse-grained). This means that after the projection, there are transformation details that are not available via a linear probe but can be extracted with a 2-layer network.
§.§ Do all embeddings make the same mistakes?
We dig into the confusion matrices and how trends in the mistakes further illuminate the information in embeddings. The fine-grained and coarse-grained datasets lead to slightly different insights, and so we discuss them separately.
Fine-grained errors. The most common mistakes for all embeddings come from (i) misclassifying medium Gaussian blur as low Gaussian blur, and (ii) underpredicting `Identity' for the unaltered images. Both mistakes are fairly expected. Comparing MAE to CAN, we find that MAE has worse performance for central cropping, which is likely due to its more aggressive masking during training (CAN uses 50% masking while MAE uses 75%). Considering SimCLR, the lower accuracy comes mostly from mispredicting hue shift, brighten, and saturate. For example, SimCLR labels 45% of images as `Identity' when their hue has been offset by 64. On the other hand, SimCLR performs comparably on the other transformations, including Gaussian blurring, despite this augmentation being part of the contrastive training.
Compared to CAN and MAE, both CLIP and ALIGN have trouble with motion blur, perhaps because this is not an effect that is easily tied to textual cues.
Coarse-grained errors.
We focus on style transfer results here. <ref> contains full confusion matrices, as well as <ref> and <ref>, which compare embeddings on held-out transformations.
CLIP performs quite well on the style transfer category, whereas this accounts for a sizeable fraction of errors for CAN, MAE, and Supervised.
For the held-out styles, CLIP correctly labels 86% of images. The best vision-only model is SimCLR, which has 54% accuracy. The errors for CAN/MAE come from the fact that they often predict restyled images as clean or filtered (e.g, blurred). CLIP and SimCLR achieve over 70% accuracy on the `Pasta' style, while CAN and MAE are below 4%.
§.§ Does transformation information interfere with semantic information?
We next explore the interplay between semantic and transformation accuracy by training two-head networks in a multi-task setting. The first head predicts the ImageNet-1k class. The second head predicts the transformation label. Both heads share the same 2048-dimensional hidden layer of the MLP that post-processes the embedding. As a baseline, we also train a one-head model that only predicts the semantic class (also using a 2-layer MLP with width 2048). We aim to determine how the multi-task setting affects the three metrics: semantic accuracy on clean images, obfuscated semantic accuracy on transformed images, and transformation prediction accuracy. <ref> reports these accuracies for both the fine-grained and coarse-grained versions. As before, we fix the embedding model and only train the MLP.
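A minimal numpy sketch of the shared-trunk, two-head probe and its summed cross-entropy loss is given below. The dimensions follow the text (1024-dimensional embedding, 2048-dimensional shared hidden layer, 1000 semantic classes, 31 transformation classes for the fine-grained task), while the initialization scale and the omission of dropout are simplifications for the example.

import numpy as np

rng = np.random.default_rng(0)
d_embed, d_hidden, n_sem, n_trans = 1024, 2048, 1000, 31

params = {
    "W": rng.normal(0, 0.02, (d_embed, d_hidden)),  "b": np.zeros(d_hidden),   # shared trunk
    "Wy": rng.normal(0, 0.02, (d_hidden, n_sem)),   "by": np.zeros(n_sem),     # semantic head
    "Wt": rng.normal(0, 0.02, (d_hidden, n_trans)), "bt": np.zeros(n_trans),   # transformation head
}

def cross_entropy(logits, labels):
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(labels.size), labels].mean()

def multitask_loss(params, emb, y, t):
    # Unweighted sum of the two cross-entropy losses on a batch of frozen embeddings.
    h = np.maximum(emb @ params["W"] + params["b"], 0.0)      # shared ReLU hidden layer
    loss_sem = cross_entropy(h @ params["Wy"] + params["by"], y)
    loss_trans = cross_entropy(h @ params["Wt"] + params["bt"], t)
    return loss_sem + loss_trans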
Clean semantic accuracy. Using the two-head network leads to semantic prediction comparable to a one-head network. Only in some cases do we see a decrease in accuracy. The post-processing, despite being only 2048 dimensional, is able to effectively combine both semantic and transformation information in the MLP's hidden layer. Comparing semantic accuracies, CLIP and ALIGN outperform the other methods by a large margin. This is expected since the linear probe accuracy of vision-only self-supervised methods (CAN, MAE, SimCLR) tends to be lower than the accuracies after fine-tuning <cit.>.
Obfuscated semantic accuracy. We move on to discuss the semantic accuracy on the transformed images (ObfSem). In essence this is a metric for the robustness of the models to a dataset shift. Moreover, many of the transformations were not seen during training, and hence, we can consider the images to be OOD. Across the embeddings, we observe a mix of increases and decreases to the ObfSem accuracy. In general, the deviations are small, and we conclude that transformations sensitivity does not impact the ability to succeed at object-level predictions.
§.§ What have we learned about embeddings?
Transformation prediction is surprisingly easy. While our main goal was to uncover new insights about foundation models, along the way we discovered that embeddings can be used to predict transformations. This ability is useful for OOD detection and content filtering. There is growing evidence that cloud-based classification systems are susceptible to transformation-based attacks, such as style transfer, Gaussian noise, or recoloring <cit.>. We believe this is an important direction, in addition to current OOD and anomaly detection efforts <cit.>. Fortunately, based on our results, modern embeddings suffice for both classification and for detecting many transformations.
Possible to have semantic & transformation accuracy. From <ref>, we see that the multi-task training leads to good performance on both semantic and transformation prediction. In some cases, the sensitivity to transformations even improves the obfuscated semantic accuracy. Another observation is that the post-processing requires only the low-cost training of a 2-layer MLP. It is possible to fine-tune representations while freezing the large embedding model.
Different embeddings capture different information. By analyzing transformation prediction, we have drawn conclusions about the sensitivity of several embeddings. <ref> has summarized these insights, which help inform a choice between competing models. All of the models capture a lot of transformation information, which is useful to know (and perhaps unexpected). We hope that transformation prediction becomes a standard evaluation metric.
§.§ Where could we have done more?
One deficiency of our work is that we have not proposed ways to translate our observations into improvements on benchmarks. Transformation awareness could improve performance on downstream tasks beyond ImageNet. It would be ideal to couple transformation prediction with a new architecture or algorithm and create a self-supervised method that outperforms CAN, MAE, SimCLR, CLIP, and ALIGN.
Another shortcoming is that we have not actually used our models to detect a dataset shift in real-world data. There are many settings where discovering image transformations is important, including content safety, detecting copyright evasion, and discovering manipulation. In this direction, we have shown that different types of semantically-trained embeddings can perform well on these detection tasks.
§ CONCLUSION
We constructed and investigated a new classification task that allowed us to shed new light on image embeddings. We showed that popular models capture enough information to distinguish dozens of transformations. Our experiments uncovered some ways in which SimCLR is more invariant than CAN and MAE, and the types of transformations that are captured by self-supervised vision models vs. image-text models, such as CLIP and ALIGN. We demonstrated that it is possible to post-process embeddings using a small network and extract more transformation information than a linear probe. The findings from the transformation prediction task provide new insights into the capacity of image embedding methods, which complements prior experiments on semantic accuracy. We discuss future work in <ref>.
§ WHAT CAN YOU DO NEXT?
Our work is motivated by improving foundation models for their uses beyond semantic classification. We list many open directions for future work inspired by our findings:
* A central question is to create a nearly lossless image embedding that is also easily adapted for many downstream tasks. Our work suggests that it should be possible to keep more low-level information in the representation without compromising semantic performance. We believe this is an important direction because some of the downstream tasks may require these low-level features.
* Our results also suggest that networks that can predict transformations do not perform any worse in terms of data shift robustness (obfuscated semantic accuracy). This suggests that robust training methods might benefit from incorporating equivariance strategically, instead of focusing on invariance. Or, in contrast, transformers may be inherently equivariant, and achieving invariance may require even more aggressive training methods.
* Text-to-image generative models depend heavily on their pre-trained image encoder <cit.>. Fine-tuning the image backbone with transformation prediction could help in synthesizing transformed images. On the other hand, the invariance of image embeddings could prohibit the ability to generate certain visual features.
* A different direction is extending the transformation prediction task to be more fine-grained. We could ask the network to predict the specific parameters or strength of one or more transformations. One option is to predict both the transformation and strength of ImageNet-C transformations <cit.>. This should make the task more challenging, and thus, reveal larger quantitative gaps in the performance of various embeddings.
* Another extension could be to identify which part of an image has been altered. This could uncover further differences between embedding methods. For example, masking-based embeddings might struggle with this given that they are trained on heavily obscured images. Image-text models might perform well because language cues can refer to parts of images and relative positioning of objects.
* An alternative way to probe the visual side of image-text models is through text prompts that describe visual aspects. This has been studied for some attributes like color, shape, and material <cit.>. For example, <cit.> uses questions like “what is the color of a penguin?” or “what is the size of an ant?”
to probe the image-text model. Interestingly, our results suggest that CLIP and ALIGN retain quite a bit of visual information in the embedding. Hence, errors for the text prompts may be due to image-text alignment or to the language side of the model itself. It would be interesting to compare transformation prediction performance to these prompts and see if there are trends in the performance of different models.
* Recent work considers probes to understand how transformers process information <cit.>. In our SimCLR experiments, we saw that the final projection layer is responsible for much of the invariance. This suggests that certain layers may have a larger impact than others, as transformation information flows through the transformer model. Future work could continue the study of this interesting phenomenon.
* Considering other modalities, our transformation prediction task can apply to text or audio. For example, words can be changed with synonyms, characters can be replaced with symbols or typos, or sentences can be reordered based on syntactic freedoms. Following our analysis, this approach could then draw conclusions about the predispositions of different language models. This would complement some of the existing language model probing work <cit.>.
* From an application point of view, it would be interesting to use transformation prediction for a data cleaning or filtering task. Another application is detecting (adversarial) image manipulations.
It is possible to use an MLP trained on top of an embedding to find anomalous images, such as those that have been stylized or heavily edited.
§.§ Where could we have done more?
One deficiency of our work is that we have not proposed ways to translate our observations into improvements on benchmarks. Transformation awareness could improve performance on downstream tasks beyond ImageNet. It would be ideal to couple transformation prediction with a new algorithm and create a self-supervised method that outperforms CAN, MAE, SimCLR, CLIP, and ALIGN.
Another shortcoming is that we have not actually used our models to detect a dataset shift in real-world data. There are many settings where discovering image transformations is important, including content safety, detecting copyright evasion, and discovering manipulation. In this direction, we have shown that different types of semantically-trained embeddings can perform well on detection tasks. Our generalization task also shows that training even a small MLP on top of an embedding can suffice to detect held-out transformations.
§ EXPERIMENTAL SET-UP
§.§ Drawing conclusions about embeddings
We can use the transformation prediction task to measure if an embedding model captures certain visual content. Consider a transformation t, where t(x) denotes the transformed version of x. Assume we can train a post-processing network to predict that ϕ(t(x)) is transformed and ϕ(x) is not. Then, we can conclude that ϕ must preserve enough information about the image so t can be detected. That is, ϕ(x) ≠ϕ(t(x)). More interestingly, a network may succeed at predicting most transformations t from a set 𝒯 when they are applied to images in a dataset 𝒳. Hence, the sets A_t, ϕ = {ϕ(t(x)) | x ∈𝒳} for t ∈𝒯 are mostly disjoint. It is possible to use a sample from A_t, ϕ to determine t with high accuracy. We also believe the transformation prediction task is a direct measure of equivariance, as opposed to k-NN results <cit.>.
If the network cannot detect the transformation t, then we may conclude the opposite. The embedding ϕ does not preserve enough information. We can further qualify this based on the amount of post-processing required to extract this information. If t is detectable after zero or one layers, then the information must be readily accessible in ϕ(t(x)). Otherwise, if t can be detected but only after numerous layers, then the information is still present but can only be recovered after combining several sources of information from ϕ(t(x)). If no amount of post-processing suffices, then the embedding must truly be invariant, and ϕ(x) ≈ϕ(t(x)).
Given the above discussion, the fine-grained and generalization tasks yield complementary insights. The benefit of the fine-grained task is that we can investigate the precision of the embedding's information. Distinguishing a blur of radius three vs. five should require more detailed information than distinguishing blurring vs. brightening.
In the generalization task, the network does not see some sub-transformations during training, which enables us to measure a type of generalization. For example, consider transformations t and t' from the same class (e.g., two different styles). In the best case, we only use t during training, and the network recognizes that ϕ(t'(x)) is similar to ϕ(t(x)). It could be that the embeddings are close together or that ϕ encodes the style in some way. On the other hand, the network may fail to generalize, and predict ϕ(t'(x)) and ϕ(t(x)) differently. One conclusion is that ϕ is insensitive to t'. However, we show later that prediction accuracy is quite high for the fine-grained task. The generalization mistakes actually imply that ϕ captures both transformations but does so in a divergent way.
Interestingly, in contrast to some work in NLP probing <cit.>, we observe the same trends using a linear probe and a 2-layer MLP. It would be worthwhile to also consider control metrics and random feature embeddings to further understand whether the transformation information is readily available or not, similar to <cit.>.
§.§ More experiment and training details
We use JAX to implement the models and run the experiments. The learning rate is 10^-3, and we use a per-device batch size of 1024 with roughly 41.2 epochs for the fine-grained dataset and 127.9 for the coarse-grained one. For both datasets, the model trains while seeing a total of roughly 1.6B examples. We do not use any warm-up steps.
All models are optimized with ADAM and the learning rate decreases linearly. Given that we are only training 2-layer MLPs, the wall clock time for training is under a few hours using TPUs.
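Assuming an optax-style setup (the models are implemented in JAX), an optimizer configuration matching these hyperparameters might look roughly as follows; the total step count ignores multi-device batching and is only an approximation.

import optax

batch_size = 1024
total_examples = 1_600_000_000                     # ~1.6B examples seen, per the text
total_steps = total_examples // batch_size         # ignores the number of devices (assumption)

schedule = optax.linear_schedule(init_value=1e-3, end_value=0.0,
                                 transition_steps=total_steps)   # no warm-up
optimizer = optax.adam(learning_rate=schedule)
# opt_state = optimizer.init(params)               # `params` would be the MLP parameters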
We use dropout with rate 0.2 for all experiments except when comparing SimCLR in <ref>, which has a dropout rate of 0.0 because we compare with a linear probe. We experimented with deeper/wider networks and other dropout rates, but this did not lead to much different results.
For 2-head models, we sum the losses for the semantic prediction and transformation prediction tasks. We use categorical cross entropy for both losses on the separate tasks. We do not weight the losses separately. We also randomly sample batches without controlling for the distribution of transformations in each batch. Hence, for the fine-grained task, we expect 1/31 of the images to be clean, and 1/10 to be clean for the coarse-grained task.
Embeddings are computed consistently with the external implementations of the various embedding models. We were given access to the ALIGN weights, and we use a standard CLIP checkpoint. We trained the supervised model from scratch, without optimizing the data augmentation strategy.
We use three pre-trained models (CAN, MAE, SimCLR) that were trained by the authors of the CAN paper <cit.> and shared with us (all trained on JFT-300M). Interestingly, we achieve slightly higher top-1 accuracy on ImageNet-1k with our 2-head multi-task MLPs compared to the linear probe results that they report. Specifically, our MLPs on top of CAN, MAE, SimCLR have 76.04, 70.18, 74.49 top-1 accuracy, respectively. Their linear probe results for CAN, MAE, SimCLR are 75.4, 64.1, 73.4, respectively. This improvement could either be due to (i) the extra layer in the MLP or (ii) training the MLP with both clean and transformed data. The MAE paper reports higher accuracy than ours, with 73.5 top-1 linear probe <cit.>.
§ DATASET AND TRANSFORMATION DETAILS
For the images, we use the ImageNet-1k dataset. We transform the images using standard methods in OpenCV <cit.> and Pillow <cit.>, and a few pre-trained models listed below.
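A few of the transformations listed in this appendix can be reproduced with Pillow and numpy roughly as in the sketch below; the parameter defaults mirror the listing, but these helpers are simplified stand-ins for the actual pipeline, which also relies on OpenCV and pre-trained style-transfer models.

import numpy as np
from PIL import Image, ImageOps

rng = np.random.default_rng(0)

def gaussian_noise(img, sigma=0.05):
    # Additive Gaussian noise on a [0, 1] pixel scale.
    arr = np.asarray(img, dtype=np.float32) / 255.0
    noisy = np.clip(arr + rng.normal(0.0, sigma, arr.shape), 0.0, 1.0)
    return Image.fromarray((noisy * 255).astype(np.uint8))

def pixelate(img, factor=0.15):
    # Downsample by `factor`, then upsample back to the original size.
    w, h = img.size
    small = img.resize((max(1, int(w * factor)), max(1, int(h * factor))))
    return small.resize((w, h), Image.NEAREST)

def solarize(img, threshold=192):
    return ImageOps.solarize(img, threshold=threshold)

def posterize(img, bits=2):
    return ImageOps.posterize(img, bits)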
§.§ High-level motivation for our choice of transformations.
We aim to determine whether embeddings from image foundation models can be used to train classifiers for non-semantic tasks. This is advantageous because in large ML systems, embeddings are often pre-computed and stored as a way to compress and pre-process image datasets. Hence, using a small MLP on top of an embedding offers a light-weight way to automatically compute predicted signals about the images.
There are many non-semantic tasks that would fit into this framework. For example, for data cleaning, it is important to recognize poor image quality (e.g., JPEG artifacts, motion blur, cropping, etc). For content filtering and policy enforcement, it may be crucial to detect image manipulations (e.g., style transfer, text/icon overlays). In general, non-semantic image information is crucial for a myriad of tasks, such as determining if an image is a painting or photograph, if it has been taken during the day or night, if it is high-fidelity or grainy, or if it has been edited from the original.
When choosing the sets of transformations, we have tried to cover a range of visual effects. Noise affects individual pixels and blurring affects nearby regions. Overlays are independent of the image, while style transfer heavily depends on the content. The filtering and quantizing options focus on hue, saturation, or value separately. Some transformations are barely human-visible, and others are strikingly obvious. Of course, the space of all possible transformations is impossible to cover fully, but we aim to probe many aspects of embeddings.
In the generalization task, we have also tried to set-up an experiment that reflects real-world usage. For example, with style transfer, we train with a subset of styles and ask the model to recognize examples of transformed images with unseen styles. For the other categories, we also believe that quantizing captures a variety of related recoloring effects, and filtering covers many types of blur and noise. There is certainly room to expand and refine the taxonomy of transformations, and this is a nice direction for future work.
§.§ Fine-grained transformations
Below is the list of transformations in the fine-grained transformation set. For transformations which are parameterized, multiple sets of parameters may be used. In this case, different parameter sets are considered as different "classes" in the transformation prediction problem. The parameters are the same for training and for testing.
* Identity
* No transformation, i.e. the original images.
* No parameter.
* Hue Scaling & Shift
* Scale and shift the hue channel in the hue-saturation-lightness (HSL) color space. hue_new = (hue × scale + 𝚘𝚏𝚏𝚜𝚎𝚝) mod 360.
* Parameter set 1: 𝚜𝚌𝚊𝚕𝚎 = -32, 𝚘𝚏𝚏𝚜𝚎𝚝 = -4
* Parameter set 2: 𝚜𝚌𝚊𝚕𝚎 = 1, 𝚘𝚏𝚏𝚜𝚎𝚝 = 64.
* Saturate & Desaturate
* Scale and shift the saturation channel in the HSL color space. saturation_new = clip(saturation × scale + 𝚘𝚏𝚏𝚜𝚎𝚝, 0, 255).
* Parameter set 1: 𝚜𝚌𝚊𝚕𝚎 = 5, 𝚘𝚏𝚏𝚜𝚎𝚝 = -4
* Parameter set 2: 𝚜𝚌𝚊𝚕𝚎 = 0.25, 𝚘𝚏𝚏𝚜𝚎𝚝 = 32
* Brighten & Darken
* Shift the lightness channel in the HSL color space. lightness_new = clip(lightness + 𝚘𝚏𝚏𝚜𝚎𝚝, 0, 255).
* Parameter set 1: 𝚘𝚏𝚏𝚜𝚎𝚝 = 96
* Parameter set 2: 𝚘𝚏𝚏𝚜𝚎𝚝 is uniformly sampled between -128 and -64.
* Gaussian Noise
* Add a random noise to each pixel. The noise distribution is a Gaussian distribution with mean 0 and standard deviation σ.
* Parameter set 1: σ = 0.05
* Parameter set 2: σ = 0.15
* Gaussian Blur
* Blur an image by a Gaussian function with a given radius.
* Parameter set 1: The radius is uniformly sampled between 3 and 5.
* Parameter set 2: The radius is uniformly sampled between 7 and 9.
* Motion Blur
* Simulate a motion of an image (as a 2D rectangle) along a random direction by a given length (in pixels).
* Parameter set 1: 𝚕𝚎𝚗𝚐𝚝𝚑 = 5
* Parameter set 2: 𝚕𝚎𝚗𝚐𝚝𝚑 = 10
* Corner Crop
* Keep only the bottom-right quadrant of an image.
* No parameter.
* Rotation
* Rotate an image counter-clockwise by a given degree.
* The degree is uniformly sampled between 90 and 270.
* JPEG Compression
* Re-compress an image with a given JPEG quality.
* The quality is uniformly sampled between 10 and 15.
* Floyd-Steinberg Dithering <cit.>
* Reduce the bit depth of an image by applying the Floyd-Steinberg dithering algorithm.
* The bit depth is set to 1.
* Posterize
* Reduce the bit depth of an image by quantizing each pixel value independently.
* The bit depth is set to 2.
* Pixelate
* Create a pixelation effect by downsampling an image with a factor and then upsampling to its original size.
* The downsampling factor is set to 0.15.
* Solarize
* Simulate photo solarization by inverting each pixel value above a threshold.
* The threshold is set to 192.
* Grayscale
* Change an image to a grayscale image.
* No parameter.
* Vertical Line Shift
* Rotate each column by a given distance (wrapping around), with even columns rotating down and odd columns rotating up.
* The distance is set to 3.
* Grid Overlay
* Change the pixels on even rows and on even columns to a fixed color RGB=(204, 255, 127).
* No parameters.
* Line Overlay
* Paint horizontal lines on an image.
* Each line is 4-pixel wide, and the distance between adjacent lines is 20 pixels. The lines are painted dark red RGB = (101, 0, 0).
* Icon Overlay
* Paint a wall of `grinning face' icons on an image.
* The opacity (alpha channel) of the icons is set to 32. The width ratio between image and icon is set to 10.
* Text Overlay
* Paint a wall of constant gibberish text on an image.
* The text is colored dark gray RGB=(25, 25, 25).
* Line Halftoning
* Apply a halftone process based on amplitude-modulated sinusoidal waves <cit.>.
* The waves are drawn with lines of 1-pixel width, and the maximum amplitude is 5 pixels.
* Style Transfer
* Apply the style transfer model <cit.> with a given style image.
* Parameter set 1: The style image is Vincent van Gogh's The Starry Night.
* Parameter set 2: The style image is Gyula Derkovits's Taligás.
* Parameter set 3: The style image is a photo of a bonfire.
* Parameter set 4: The style image is a photo of pasta.
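As referenced above, here is a small sketch of a few of the purely pixel-level transformations (hue scaling & shift, solarize, posterize) using NumPy and Pillow. It approximates the hue operation in HSV rather than HSL and is an illustrative re-implementation under those assumptions, not the code used to build the dataset.

import numpy as np
from PIL import Image

def hue_scale_shift(img, scale, offset):
    # Work in HSV (Pillow has no native HSL mode); Pillow stores hue in [0, 255].
    hsv = np.asarray(img.convert("HSV"), dtype=np.float32)
    hue_deg = hsv[..., 0] / 255.0 * 360.0
    hue_deg = (hue_deg * scale + offset) % 360.0          # hue_new = (hue * scale + offset) mod 360
    hsv[..., 0] = hue_deg / 360.0 * 255.0
    return Image.fromarray(hsv.astype(np.uint8), mode="HSV").convert("RGB")

def solarize(img, threshold=192):
    # Invert every pixel value strictly above the threshold.
    arr = np.asarray(img).astype(np.uint8)
    out = np.where(arr > threshold, 255 - arr, arr)
    return Image.fromarray(out.astype(np.uint8))

def posterize(img, bits=2):
    # Keep only the top `bits` bits of each channel (independent per-pixel quantization).
    arr = np.asarray(img).astype(np.uint8)
    mask = 256 - (1 << (8 - bits))
    return Image.fromarray((arr & mask).astype(np.uint8))

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), (120, 180, 90))      # stand-in for a dataset image
    hue_scale_shift(img, scale=1, offset=64).save("hue_shift.png")
    solarize(img, threshold=192).save("solarized.png")
    posterize(img, bits=2).save("posterized.png")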
§.§ Coarse-grained transformations
Below is the list of transformation categories and sub-transformations in the coarse-grained transformation set. The testing set of some categories may contain more sub-transformations or wider parameter ranges. For randomized parameters, we use U(a, b) to denote a uniform sample between a and b (inclusive). The parameters are independently sampled once for each image.
* Identity
* No transformation, i.e. the original images.
* No parameter.
* Icon Overlay
* Paint a wall of icons on an image.
* Training parameters: For each image an icon is randomly chosen from 5 candidate icons. The opacity (alpha channel) of the icons is U(64, 128). The width ratio between image and icon is U(8, 12).
* Testing parameters: Additional 5 candidate icons (10 in total). The opacity (alpha channel) of the icons is U(64, 144). The width ratio between image and icon is U(5, 15).
* Line Halftoning
* Apply a halftone process based on amplitude-modulated waves <cit.>.
* Training parameters: For each image a waveform is randomly chosen from 2 candidate waveforms. The waves are drawn with lines of U(1, 2)-pixel width, and the maximum amplitude is U(5, 7) pixels.
* Testing parameters: Additional 2 candidate waveforms (4 in total). The waves are drawn with lines of U(1, 2)-pixel width, and the maximum amplitude is U(4, 7) pixels.
* Filtering: Transformations making the image blurry or less clear.
* Gaussian Blur
* Blur an image by a Gaussian function with a given radius.
* Training parameter: The radius is U(3, 6).
* Testing parameter: The radius is U(2, 9).
* Motion Blur
* Simulate a motion of an image (as a 2D rectangle) along a random direction by a given length (in pixels).
* Training parameter: The length is U(18, 27).
* Testing parameter: The length is U(15, 35).
* Pixelate
* Create a pixelation effect by downsampling an image with a factor and then upsampling to its original size.
* Training parameter: The downsampling factor is U(0.25, 0.5).
* Testing parameter: The downsampling factor is U(0.125, 0.5).
* Blurry Background
* Change the aspect ratio and use a Gaussian blurred copy of the same image as background.
* Training parameters: Width and height scaling factors are U(1.0, 1.8). Gaussian blur radius is U(20, 40).
* Testing parameters: Width and height scaling factors are U(0.7, 2.0). Gaussian blur radius is U(20, 50).
* Line Shift
* Rotate each row or column by a given distance (wrapping around), with even rows/columns rotating in an opposite direction to odd rows/columns.
* This transformation is held out in training.
* Testing parameter: The distance is U(2, 8) pixels.
* Noise: Transformations adding high frequency artifacts.
* Gaussian Noise
* Add a random noise to each pixel. The noise distribution is a Gaussian distribution with mean 0 and standard deviation σ.
* Training parameter: σ∼ U(0.1, 0.5)
* Testing parameter: σ∼ U(0.1, 0.7)
* Impulse Noise
* Randomly sample a percentage of pixels and paint half of them white and half of them black.
* Training parameter: Noise percentage is U(10%, 30%).
* Testing parameter: Noise percentage is U(5%, 40%).
* Random Dithering
* Quantize each pixel to 0 or 255 using a per-pixel random threshold.
* No parameters.
* Ordered Dithering
* Quantize each pixel to 0 or 255 using a 2×2 Bayer threshold matrix <cit.>.
* No parameters.
* Floyd-Steinberg Dithering <cit.>
* Reduce the bit depth of an image by applying the Floyd-Steinberg dithering algorithm.
* This transformation is held out in training.
* Testing parameter: The bit depth is U(1, 2).
* Image Fusing: Transformations which fuse another image (as a distraction) in foreground or background.
* Image Overlay
* Add a small distraction image to foreground with partial opacity.
* Training parameters: 5 choices of distraction images. The distraction image's dimensions are U(0.5, 0.7) fraction in size of the content image. The opacity is U(64, 128).
* Testing parameters: Additional 6 choices of distraction images (11 in total). The distraction image's dimensions are U(0.4, 0.8) fraction in size of the content image. The opacity is U(64, 128).
* Fusing
* Add a distraction image as background.
* Training parameters: 5 choices of distraction images. The foreground image's dimensions are U(0.6, 0.8) fraction in size of the background image. The foreground's opacity is U(128, 196).
* Testing parameters: Additional 6 choices of distraction images (11 in total). The foreground image's dimensions are U(0.4, 0.9) fraction in size of the background image. The foreground's opacity is U(128, 196).
* Quantizing: Transformations dealing with colors.
* Quantize Colors
* Reduce the number of distinct colors in an image. The colors are clustered and then replaced by the cluster centroids.
* Training parameter: The number of distinct colors after quantization is U(16, 64).
* Testing parameters: The number of distinct colors after quantization is U(8, 128).
* Invert Colors
* Invert all pixel values.
* No parameter.
* Solarize
* Simulate photo solarization by inverting each pixel value above a threshold.
* Training parameter: The threshold is U(96, 192).
* Testing parameter: The threshold is U(64, 224).
* HSL To RGB
* Convert an image to the HSL color space, and then directly read the values as RGB.
* No parameter.
* Grayscale
* Change an image to a grayscale image.
* This transformation is held-out in training.
* No parameter.
* Hue Shift & Scaling
* Scale and shift the hue channel in the hue-saturation-lightness (HSL) color space. hue_new = (hue × scale + 𝚘𝚏𝚏𝚜𝚎𝚝) mod 360.
* This transformation is held-out in training.
* Testing parameters 1: 𝚜𝚌𝚊𝚕𝚎=1, 𝚘𝚏𝚏𝚜𝚎𝚝=U(60, 300).
* Testing parameters 2: 𝚜𝚌𝚊𝚕𝚎=± U(8, 32), 𝚘𝚏𝚏𝚜𝚎𝚝=U(0, 360).
* Static Overlay
* Line Overlay
* Paint a series of equidistant parallel lines on an image. The lines in one image are in a random direction and of the same random color.
* Training parameters: Each line is U(5, 7)-pixel wide, and the distance between adjacent lines is U(18, 24) pixels.
* Testing parameters: Each line is U(3, 10)-pixel wide, and the distance between adjacent lines is U(15, 30) pixels.
* Text Overlay
* Paint a wall of gibberish text on an image. The text in one image are of the same random color.
* Training parameter: 5 choices of gibberish text.
* Testing parameter: Additional 5 choices of gibberish text (10 in total).
* Grid Overlay
* Change the pixels on even rows and on even columns to a random color (sampled per image).
* No parameters.
* Style Transfer
* Arbitrary Neural Style Transfer
* Apply a style transfer model <cit.> with a given style image.
* Training parameter 1: The style image is Vincent van Gogh's The Starry Night.
* Training parameter 2: The style image is Gyula Derkovits's Taligás.
* Training parameter 3: The style image is Edvard Munch's The Scream.
* Training parameter 4: The style image is Katsushika Hokusai's The Great Wave off Kanagawa.
* Held-out parameter 1: The style image is Amadeo de Souza-Cardoso's Landscape with Black Figure.
* Held-out parameter 2: The style image is Pablo Picasso's Violon.
* Held-out parameter 3: The style image is a photo of a bonfire.
* Held-out parameter 4: The style image is a photo of pasta.
* Artistic Style Transfer
* Apply a style transfer model <cit.> with a set of pre-trained weights.
* Training parameter 1: The weights are pre-trained to mimic stained glass mosaics.
* Training parameter 2: The weights are pre-trained toward Francis Picabia's Udnie.
* Held-out parameter 1: The weights are pre-trained toward Vincent van Gogh's The Starry Night.
* Held-out parameter 2: The weights are pre-trained toward a painting of candies.
* Deep Dream
* Run a pre-trained DeepDream model <cit.> to enhance the patterns that the model recognizes.
* This transformation is held-out in training.
* Testing parameter: The DeepDream process is configured with U(7, 12) update iterations, learning rate U(0.05, 0.08), number of octaves U(6, 12), and octave scale U(1.5, 2.0).
* Warping: Transformations which rotate or transpose images.
* Rotation
* Rotate an image counter-clockwise by a given degree.
* Training parameter: The degree of rotation is 90.
* Held-out parameter 1: The degree of rotation is 180.
* Held-out parameter 2: The degree of rotation is 270.
* Vertical Flip
* Flip an image top to bottom.
* No parameter.
* Transpose
* Flip an image along one of its diagonals.
* Training parameter: Flipping along the minor diagonal.
* Held-out parameter: Flipping along the major diagonal.
§ FURTHER EXPERIMENTAL RESULTS
§.§ Smaller Representation (width 1k MLP)
<ref> shows the one-head and two-head accuracies for an MLP with one hidden layer of width 1k (the main paper table used width 2k). We use a one-headed model for each of transformation and semantic prediction, and a two-headed model trained to predict both labels. For the fine-grained dataset, performance is comparable between one- and two-head models, although the two-headed model underperforms slightly (by less than 1%). Interestingly, the coarse-grained dataset shows the opposite trend for CAN, MAE, and SimCLR on transformation accuracy: there, the two-headed model leads to a significant improvement in transformation prediction accuracy. The multi-task setup likely prevents overfitting, which is beneficial because the coarse-grained dataset has held-out transformations (whereas the fine-grained dataset uses the same transformations for training and testing).
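As a reference for how such probes can be set up, here is a minimal PyTorch sketch of a two-headed probe on top of a frozen, pre-computed embedding; the embedding dimension, class counts, and equal loss weighting are illustrative placeholders rather than the exact experimental configuration.

import torch
import torch.nn as nn

class TwoHeadProbe(nn.Module):
    # One shared hidden layer on top of a frozen embedding, followed by one head per task.
    def __init__(self, embed_dim=1024, hidden=1000, n_transforms=24, n_classes=1000):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU())
        self.transform_head = nn.Linear(hidden, n_transforms)
        self.semantic_head = nn.Linear(hidden, n_classes)

    def forward(self, emb):
        h = self.trunk(emb)
        return self.transform_head(h), self.semantic_head(h)

probe = TwoHeadProbe()
emb = torch.randn(32, 1024)                      # pre-computed, frozen image embeddings
t_labels = torch.randint(0, 24, (32,))           # transformation labels
s_labels = torch.randint(0, 1000, (32,))         # semantic labels
t_logits, s_logits = probe(emb)
loss = nn.functional.cross_entropy(t_logits, t_labels) + \
       nn.functional.cross_entropy(s_logits, s_labels)   # joint multi-task objective
loss.backward()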
§.§ Analyzing the Held-Out Transformations and Generalization Dataset
<ref> presents average accuracies for the held-out sub-transformations. <ref> zooms in on the style transfer accuracies, showing the fraction of correct predictions for each of the thirteen styles displayed in <ref>. We then present the 10 × 10 confusion matrices for the coarse-grained dataset, one for each embedding. These matrices show the common errors made by the models, which inform the ways the probe uncovers properties of the embeddings. Specifically, mispredicting certain transformations as `Identity' points either to invariance or to an inconsistent encoding of the transformation information.
|
http://arxiv.org/abs/2307.05222v1 | 20230711124539 | Generative Pretraining in Multimodality | [
"Quan Sun",
"Qiying Yu",
"Yufeng Cui",
"Fan Zhang",
"Xiaosong Zhang",
"Yueze Wang",
"Hongcheng Gao",
"Jingjing Liu",
"Tiejun Huang",
"Xinlong Wang"
] | cs.CV | [
"cs.CV"
] |
We present , a Transformer-based multimodal foundation model, which can seamlessly generate images and texts in multimodal context.
This omnivore model can take in any single-modality or multimodal data input indiscriminately (, interleaved image, text and video) through a one-model-for-all autoregressive training process.
First, visual signals are encoded into embeddings, and together with text tokens form an interleaved input sequence.
is then end-to-end trained with a unified objective of classifying the next text token or regressing the next visual embedding in the multimodal sequence.
This versatile multimodality empowers the exploration of diverse pretraining data sources at scale, such as videos with interleaved frames and text, webpages with interleaved images and text, as well as web-scale image-text pairs and video-text pairs.
can serve as a generalist multimodal interface for both image-to-text and text-to-image tasks, and supports in-context image and text generation.
Across a broad range of zero-shot/few-shot tasks including image captioning, visual question answering, video question answering and text-to-image generation, demonstrates superb performance compared to state-of-the-art large multimodal models.
Extended capabilities such as multimodal assistants via instruction tuning are also demonstrated with impressive performance.
§ INTRODUCTION
With text corpus at massive scale, Large Language Models (LLMs) <cit.> with straightforward training objectives such as next-word-prediction learn to understand, reason, and generate text with unprecedented accuracy and fluency, paving the way for diverse real-life applications <cit.> unthinkable a decade ago.
Recent studies <cit.> have investigated Large Multimodal Models (LMMs) beyond LLMs.
Flamingo <cit.>, which connects a powerful language model with a pretrained vision encoder and inserts learnable layers to capture cross-modality dependencies, demonstrates strong abilities in multimodal zero-shot and in-context learning.
Recent works <cit.> also adopt this framework and build LMMs by docking a vision encoder with an LLM.
Effective as they are, these LMMs are mostly trained on image-text pairs or documents, while overlooking video data as another scalable source of interleaved multimodal data.
Besides, the commonly used training objective in such LMMs is predicting the next text token <cit.>, typically with a frozen vision encoder and no supervision for the vision part, which highly restricts the model's capacity.
In this work, we introduce , a large multimodal model that learns from both video and image data interleaved with text, under a unified objective of predicting the next visual or text token in an autoregressive fashion.
Documents interleaved with images (e.g., textbooks, webpages) provide an intuitive representation of complex concepts, and have proved to be effective in empowering models with multimodal in-context learning ability <cit.>.
Videos, which usually contain interleaved image frames and subtitles (Figure <ref>), are an abundant source of multimodal data that has been largely overlooked. They naturally contain dense visual signals and encode stronger cross-modal correlations with text than regular multimedia documents.
Furthermore, public videos (especially user-generated clips) possess richer content diversity than Common Crawl[<https://commoncrawl.org/>], from which current training datasets mainly originate.
To take advantage of rich web-scale data with omnivore capacity, we formulate diverse sources of interleaved multimodal data (, videos with subtitles, webpages with images and text) into a unified format of interleaved image embeddings and text tokens (videos are converted into randomly-selected frames and subtitles interleaved into a sequence).
Specifically, visual signals are first encoded into embeddings via a visual representation model EVA-CLIP <cit.>, instead of being converted into discrete tokens. These visual embeddings together with text tokens constitute an interleaved multimodal input sequence.
We pretrain on these multimodal data sequences under a simple unified objective: predicting the next element in a multimodal sequence. Different from existing LMMs that compute the predict-the-next loss on text tokens only, in training , all input elements, including both discrete text tokens and continuous image embeddings, are accounted for in the loss computation. We adopt the cross-entropy classification loss for discrete text tokens, and the ℓ_2 regression loss for continuous visual embeddings. As raw images typically lack the left-to-right causal dependency found in language, does not perform image generative pretraining in the original pixel space.
Instead, visual embeddings are transformed into a causal latent space via Causal Transformer, which accepts the image encodings generated by EVA-CLIP as input, and outputs N tokens that capture the causal dependency of the given image (as illustrated in Figure <ref>).
Pretrained with the unified objective and diverse forms of data stated above,
can serve as a generalist interface for both image-to-text and text-to-image tasks by performing various types of completion in a multimodal sequence, , accepting multimodal prompts (, text, images, video, or their interleaved sequence) and outputting multimodal response (for image generation, visual embeddings are decoded by a fine-tuned diffusion model), as illustrated in Figure <ref>.
Further, demonstrates impressive abilities such as in-context text and image generation (the 2nd block of Figure <ref>), image blending (the 5th row of Figure <ref> that combines a cat and a tiger into a cute tiger-cat), video understanding (the last block of Figure <ref>), and real-world knowledge grounding (Section <ref>).
We evaluate on a broad range of zero-shot and few-shot tasks including image captioning, visual question answering, video question answering, and text-to-image generation.
For qualitative demonstration, we also build an effective multimodal assistant via instruction tuning on multimodal conversation data. The instruction-tuned assistant can effectively follow human instructions and interact with users via multimodal response.
§ : PREDICT THE NEXT IN MULTIMODALITY
§.§ Architecture
is a large-scale multimodal model that performs completion in multimodality, i.e., perceiving interleaved multimodal input and generating outputs varying in modalities. As illustrated in Figure <ref>,
consists of four parts: Visual Encoder, Causal Transformer, Multimodal Modeling, and Visual Decoder.
We leverage pretrained EVA-CLIP <cit.>, LLaMA <cit.> and Stable Diffusion <cit.> to initialize the Visual Encoder, the Multimodal Modeling LLM and the Visual Decoder, respectively.
Given any sequence with interleaved image, text and video,
we first encode the image into dense visual features via EVA-CLIP, then transform the encodings into a fixed number of N visual causal embeddings via the Causal Transformer. Similarly, we encode a video of T frames into T × N visual causal embeddings.
Two special image tokens and are prepended and appended for each image or frame, respectively, to represent the beginning and end of the encoded image/frame embeddings.
The visual causal embeddings are combined with text tokens to form multimodal sequences that are fed into the Multimodal Modeling LLM for unified autoregressive modeling. We append and tokens to the start and the end of each sequence.
In inference, we fine-tune the Visual Decoder to decode the visual embeddings into a realistic image.
Causal Image-text Transformer.
Auto-regressively modeling images in raster order is counter-intuitive and has not demonstrated satisfactory performance, which may be attributed to the fact that images naturally possess 2D structures and are not perceived as sequential signals like text.
To better capture the characteristics of images and achieve unified modeling of different modalities, we propose a Causal Transformer module to transform 2D spatial visual signals to 1D causal sequences in a latent space Z.
Specifically, given an image I with its encodings g(I) from EVA-CLIP, Causal Transformer accepts randomly initialized embeddings {e_1, e_2, …, e_N} as input, and outputs N embeddings {z_1, z_2, …, z_N} that capture the causal dependency of the given image:
z_1, z_2, …, z_N = CausalTransformer(g(I), {e_1, e_2, …, e_N})
The architecture of Causal Transformer is similar to the decoder of Transformer <cit.>, with each block consisting of a causal self-attention layer, a cross-attention layer, and a feed-forward layer. Different from Q-Former <cit.> that captures bi-directional relations of input tokens, we use a causal self-attention layer to capture the causal dependency among the input latent embeddings for further unified causal modeling of vision and language modalities.
The cross-attention layer aggregates visual information from the image embeddings extracted from EVA-CLIP, where the visual embeddings are treated as keys and values, and the outputs from the previous causal attention layer serve as queries.
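To make the block structure concrete, the following is a simplified PyTorch sketch of one such block; the hidden dimension, number of heads, pre-norm layout, and residual connections are assumptions for illustration and do not reproduce the exact architecture.

import torch
import torch.nn as nn

class CausalTransformerBlock(nn.Module):
    # One block: causal self-attention over the N latent queries,
    # cross-attention into the image features, then a feed-forward layer.
    def __init__(self, dim=1024, heads=16):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, z, img_feats):
        n = z.size(1)
        # Boolean mask: True above the diagonal = future positions are not attended to.
        causal = torch.triu(torch.ones(n, n, dtype=torch.bool, device=z.device), diagonal=1)
        z = z + self.self_attn(self.norm1(z), self.norm1(z), self.norm1(z), attn_mask=causal)[0]
        # Queries come from the latents; keys/values come from the image encodings g(I).
        z = z + self.cross_attn(self.norm2(z), img_feats, img_feats)[0]
        return z + self.ffn(self.norm3(z))

# N learnable latent queries transformed into causal visual embeddings.
N, dim = 32, 1024
queries = nn.Parameter(torch.randn(1, N, dim))
img_feats = torch.randn(2, 257, dim)              # stand-in for patch features from the visual encoder
block = CausalTransformerBlock(dim)
z = block(queries.expand(2, -1, -1), img_feats)   # -> (2, N, dim)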
Visual Decoder.
We use a latent diffusion model to decode visual embeddings into images, and adopt the weights of Stable Diffusion <cit.> as initialization.
Specifically, we feed N visual embeddings generated by into the diffusion model as conditions for image decoding.
We replace the linear projections of the cross-attention modules in Stable Diffusion with new linear layers that accommodate the dimension of and Stable Diffusion.
§.§ Training Objective
Consider an unlabeled web-scale corpus 𝒟 consisting of interleaved multimodal sequences x = (x_1, x_2, …, x_n), where x can be a vision-language sequence of various forms, such as an image-text pair, an image-text interleaved document, or a video with subtitles, and each x_i is a signal unit (text or image token) from an arbitrary modality.
We first convert all continuous 2D signals (images and video frames) into 1D causal latent embedding sequences using Causal Transformer, then insert them back into the corresponding places in the sequence x.
The resulting sequence is represented as u = (u_1, u_2, …, u_m), where u_i can be either a discrete text token, or a visual embedding that captures causal dependency with neighboring visual embeddings.
We approximate the likelihood of the web-scale corpus p(x) with p(u), and maximize the log-likelihood in a unified auto-regressive manner as follows:
max_θ∑_u∈𝒟∑_i=1^|u|log P(u_i|u_1, …, u_i-1; θ)
Two types of losses are adopted to optimize this objective. For discrete text tokens, cross-entropy loss is used to supervise classification in the predefined vocabulary with a language modeling head. For continuous visual embeddings, ℓ_2 regression loss is adopted with a separate regression head.
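A minimal sketch of how these two losses could be combined over one interleaved sequence is given below; the head definitions, dimensions, and equal weighting of the two terms are illustrative assumptions rather than the actual training configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

d, vocab, d_vis, L = 64, 100, 64, 10
lm_head = nn.Linear(d, vocab)     # classification head for discrete text tokens
reg_head = nn.Linear(d, d_vis)    # regression head for continuous visual embeddings

# Decoder outputs at positions predicting elements 1..L (already shifted by one).
hidden = torch.randn(L, d)
# Which next-elements are continuous visual embeddings vs. discrete text tokens.
is_visual = torch.tensor([False] * 4 + [True] * 3 + [False] * 3)
text_targets = torch.randint(0, vocab, (int((~is_visual).sum()),))
vis_targets = torch.randn(int(is_visual.sum()), d_vis)

ce = F.cross_entropy(lm_head(hidden[~is_visual]), text_targets)   # next-text-token loss
reg = F.mse_loss(reg_head(hidden[is_visual]), vis_targets)        # next-visual-embedding l2 loss
loss = ce + reg
loss.backward()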
§.§ Generalist Interface
The unified auto-regressive modeling of different modalities endows with a powerful ability to serve as a multimodal generalist that can perform many types of completion in a multimodal sequence, , accepting multimodal sequence as input, and outputting signals across vision and language modalities.
For example, when using two image-text pairs of the same task as the prompt, automatically infers and completes the corresponding task given a new input, as shown in the second block of Figure <ref>.
Specifically, given a multimodal context, if the expected output format is text, will use the language modeling head to generate discrete text tokens. If the desired output is image, we will append a token at the end of the input sequence, then will autoregressively generate N visual embeddings that will then be sent to the visual decoder for decoding into a real-world image.
§ TRAINING
We pretrain with web-scale data across modalities in various forms, including image-text pairs (LAION-2B <cit.>, LAION-COCO <cit.>), interleaved images-text data (MMC4 <cit.>), video-text pairs (WebVid-10M <cit.>), and our collected interleaved video-text data (YT-Storyboard-1B).
All these data are formulated as multimodal sequences, from which learns under the objective of predict-the-next-element in a unified auto-regressive manner. After pretraining, we finetune an Image Decoder to transform visual embeddings into realistic images.
§.§ Data
Image-text Pairs. We use the image-text pairs from LAION-2B <cit.> and LAION-COCO <cit.> for pretraining. LAION-2B<cit.> provides images paired with noisy alt-texts from the web, and LAION-COCO <cit.> is its 600M subset that is captioned by BLIP <cit.>.
Video-text Pairs. WebVid-10M <cit.> is an extensive dataset consisting of a large collection of short videos with textual descriptions. These videos are sourced from materials websites with diverse contents and a strong correlation between text and video.
We use heuristic rules to remove irrelevant metadata (resolution of the original video, camera parameters).
Interleaved Image and Text. Large-scale image-text interleaved data plays a crucial role in unlocking the in-context learning ability of multimodal models. We leverage the Multimodal-C4 (MMC4) dataset <cit.>, an expanded version of the text-only C4 <cit.>. Multimodal-C4 <cit.> comprises a collection of approximately 75 million image-text-interleaved documents, with 400 million images and 38 billion tokens in total. From each document, we sample a random subsequence of L = 1024 and take up to the first N = 5 images included in the sampled sequence. Additionally, we randomly sample N = 5 images along with their corresponding sentences to construct a subsequence of L = 512.
Interleaved Video and Text.
Videos with subtitles also present a promising and scalable source of interleaved multimodal data. We introduce the YT-Storyboard-1B dataset which collects 18 million videos and their corresponding subtitles from YouTube[<https://www.youtube.com>] using the video-ids provided by the YT-Temporal-1B dataset <cit.>. Instead of raw videos, we collect storyboard images (about 1.8 billion images in total), a set of thumbnails provided by the YouTube website for quick video viewing. The combination of storyboard thumbnails and subtitles creates a natural interleaved sequence of video and text ordered by timestamps. An example is provided in Figure <ref>.
More details about the pretraining datasets are deferred to Appendix <ref>.
§.§ Pretraining
We initialize 's Visual Encoder with the 1B version of EVA-02-CLIP <cit.>, and Multimodal Modeling LLM with the 13B version of LLaMA <cit.>.
LLaMA is a decoder-only Transformer <cit.> and EVA-02-CLIP is a 40-layer ViT <cit.>.
The Causal Transformer comprises 12 blocks, each of which consists of a causal self-attention layer, a cross-attention layer, and a feed-forward layer. Random initialization is used for Causal Transformer. The total number of parameters of is 14B and is trained end-to-end.
We use a batch size of 128 for image-text pair data, 64 for interleaved image-text data, 16 for video-text pair and interleaved video-text data. We adopt the AdamW optimizer <cit.> with β_1 = 0.9, β_2 = 0.98, and a weight decay of 0.05. We use a cosine learning rate decay with a peak learning rate of 1e-4 for the Causal Transformer, 3e-5 for LLaMA <cit.> and 5e-5 for EVA-02-CLIP <cit.>, and a linear warmup of 2k steps.
For each video, we randomly sample 8 frames for pretraining, and all images/frames are resized into 224×224 resolution. For image-text pair and interleaved data, we randomly put each image before or after its corresponding sentence. We train the model on 128 NVIDIA 80G-A100 GPUs for 10k steps with around 82M samples (150B tokens in total), and the pretraining takes approximately 2 days.
§.§ Visual Decoding
After pretraining, we tune the visual decoder with both LAION-COCO <cit.> and LAION-Aesthetics <cit.> (a high-aesthetics quality subset of LAION-5B <cit.>) image-text pair datasets under text-to-image task.
Specifically, we initialize the diffusion model with Stable Diffusion v1.5. We freeze the Visual Encoder, Multimodal Modeling LLM in , and the VAE in the diffusion model during training, with only the parameters of the U-Net updated. For each training sample, we append the token to the end of the input text and feed it into the Multimodal Modeling LLM, which will then generate N visual embeddings in an auto-regressive manner. These visual causal embeddings are fed into the Image Decoder as the condition for image generation training.
We follow the model setups of Stable Diffusion v1.5.
We employ AdamW optimizer <cit.> with β_1 = 0.9, β_2 = 0.999 and the weight decay of 1e-2. We train the diffusion model with 32 A100-40G GPUs for 15k iterations. The batch size is set to 50 per GPU, and the learning rate warms up to 1e-4 for the first 5k steps, then decreases to 5e-5 and 1e-5 at 10k and 14k steps respectively. To further improve sample quality, we randomly drop image embeddings condition by 10% of the time during training to enable classifier-free guidance <cit.>. Please refer to Appendix <ref> for more training details.
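The conditioning dropout and guidance step can be sketched as follows; this is a simplified illustration assuming a zero tensor as the null condition and the standard classifier-free guidance formula, not the exact training code.

import torch

def drop_image_condition(visual_embeds, null_embeds, p_drop=0.1):
    # visual_embeds: (B, N, d) causal visual embeddings used to condition the U-Net.
    # With probability p_drop per sample, replace the condition with a fixed "null"
    # condition so the same model also learns the unconditional noise prediction.
    drop = torch.rand(visual_embeds.size(0), device=visual_embeds.device) < p_drop
    out = visual_embeds.clone()
    out[drop] = null_embeds
    return out

def guided_eps(eps_cond, eps_uncond, scale=3.0):
    # Classifier-free guidance at sampling time:
    # eps = eps_uncond + scale * (eps_cond - eps_uncond)
    return eps_uncond + scale * (eps_cond - eps_uncond)

cond = torch.randn(4, 32, 768)             # batch of visual-embedding conditions
null = torch.zeros(32, 768)                # stand-in for the null/empty condition
train_cond = drop_image_condition(cond, null, p_drop=0.1)
eps = guided_eps(torch.randn(4, 4, 64, 64), torch.randn(4, 4, 64, 64), scale=3.0)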
§ INSTRUCTION TUNING
Language instruction tuning has helped pretrained language models to align with user intentions <cit.> and generalize to unseen tasks <cit.>.
We apply multimodal instruction tuning on to align it with human instructions through supervised finetuning on publicly available datasets,
including language instructions from ShareGPT <cit.> and Alpaca <cit.>, image-text instructions from LLaVA <cit.>, and video instructions from VideoChat <cit.> and Video-ChatGPT <cit.>. Dataset details can be found in Appendix <ref>.
In instruction tuning, we freeze all parameters of pretrained , and fine-tune a low-rank adaption (LoRA) module <cit.>. The main focus of instruction tuning is to align the model with natural language instructions, which are less relevant to vision features. Thus, we attach LoRA modules only to the self-attention layers of the Multimodal Modeling LLM, and add no adaptation to the Vision Encoder.
We use a batch size of 128 and train for 10k steps. The learning rate linearly warms up to 1e-5 in the first 500 steps, then decays to zero with a cosine schedule. The overall instruction tuning phase takes around 16 hours with 16 A100-80G GPUs.
All instruction-tuning data are packed with this template:
<System Message> [USER]: <Instruction> [ASSISTANT]: <Answer>,
where [USER] and [ASSISTANT] are special tokens initialized from the embeddings of the words `user' and `assistant', respectively. <System Message> varies depending on the specific task, and the detailed system messages used for different types of tasks can be found in Appendix <ref>. <Instruction> and <Answer> are the actual slots for human instructions and assistant answers, and only <Answer> is accounted for in the loss computation.
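A small sketch of how such samples might be packed and masked is shown below; the toy tokenizer and the use of -100 as the ignored label index are assumptions for illustration (this is the common convention for cross-entropy losses in PyTorch), not the actual preprocessing code.

def build_sample(encode, system, instruction, answer):
    # Pack one turn into the template and mask everything except the answer
    # from the loss; label -100 is typically ignored by cross-entropy.
    prompt_ids = encode(f"{system} [USER]: {instruction} [ASSISTANT]: ")
    answer_ids = encode(answer)
    input_ids = prompt_ids + answer_ids
    labels = [-100] * len(prompt_ids) + answer_ids
    return input_ids, labels

# Toy whitespace "tokenizer" only so the sketch runs end to end.
vocab = {}
def encode(text):
    return [vocab.setdefault(w, len(vocab)) for w in text.split()]

ids, labels = build_sample(encode, "You are a helpful multimodal assistant.",
                           "Describe the image.", "A dog running on a beach.")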
§ EVALUATION
We evaluate on a broad range of vision-language tasks including image captioning (MS-COCO <cit.>), image question answering (VQAv2 <cit.>, OKVQA <cit.>, VizWiz <cit.>), visual dialog (VisDial <cit.>), video question answering (MSRVTTQA <cit.>, MSVDQA <cit.>, NextQA <cit.>) and text2image generation(MS-COCO<cit.>). Details of these benchmarks are described in Appendix <ref>.
We evaluate our pretrained and instruction-tuned models in zero-shot and few-shot settings.
§.§ Zero-shot Evaluation
In the zero-shot setting, the model is tested on tasks and datasets it has never encountered during training. Task-specific prompts are used to indicate different tasks to perform, without any additional tuning for model parameters.
Multimodal Understanding.
Table <ref> presents the zero-shot multimodal understanding performance of and (the instruction-tuned model). We adopted the multimodal Chain-of-Thought prompting technique on the pretrained model following <cit.>.
This approach involves two steps: first asking the model to generate a descriptive caption for visual content, then providing the model with both the generated caption and a task-specific prompt to output the final result. Additionally, to ensure a fair comparison with Flamingo <cit.>, we also evaluate using the same prompting strategy of Flamingo. These results are obtained by using two text-only examples from the task as prompts. Results evaluated under this strategy are indicated by an asterisk (*). Note that these prompts do not include any images, simulating a few-shot text prompt approach. For more detailed information regarding the evaluation, please refer to Appendix <ref>.
On COCO captioning task, achieves impressive zero-shot CIDEr score <cit.> of 112.4, which outperforms other LMMs by a large margin. In a wide range of image and video question answering tasks, consistently surpasses LMMs like Kosmos-1 and Flamingo-9B. Notably, achieves an accuracy of 34.4% on the complex VizWiz VQA dataset, versus Kosmos-1's 29.2% and Flamingo-9B's 28.8%.
is the instruction-tuned model that achieves notable improvements. Remarkably, even with only 14B parameters, it can outperform the much larger Flamingo-80B model in several tasks such as VQAv2 (57.5% vs. 56.3%), VizWiz (38.1% vs. 31.6%), and MSVDQA (36.4% vs. 35.6%).
r0.46
Zero-shot text-to-image generation on MS-COCO <cit.> validation set. 30k samples are randomly sampled for evaluation.
Models FID(↓)
unimodal generation models
GLIDE <cit.> 12.24
Make-A-Scene <cit.> 11.84
DALL-E 2 <cit.> 10.39
SDv1.5 <cit.> 9.93
Imagen <cit.> 7.27
Parti <cit.> 7.23
multimodal generation models
GILL <cit.> 12.20
(ours) 11.66
Text2image Generation. We evaluate the zero-shot image generation ability on the validation set of MS-COCO <cit.>. Following <cit.>, we randomly sample 30k prompts from the validation set and calculate the zero-shot FID <cit.>. The results are shown in Table <ref>. For the generation of both and SDv1.5, we use PNDM <cit.> scheduler with 50 steps. We also adopt classifier-free guidance <cit.> for better generation quality. The scaling factor is set to 5.0 and 3.0 for and SDv1.5 respectively, as these settings yield the best performance for both models.
achieves better performance compared to a concurrent work GILL <cit.>, which also generates images with LLMs.
However, our model is inferior to SDv1.5 in terms of FID. This is probably because the condition space (image embeddings) of our visual decoder deviates a lot from the condition space (text embeddings) of the diffusion model used as initialization, and our model is trained for a relatively short 15k steps.
We believe there might be room to improve via fine-tuning with more steps, or using another visual decoder instead of adopting pretrained diffusion models that condition on text embeddings.
§.§ Few-shot Evaluation
In few-shot evaluation, the model is prompted with task-specific prompts and a small number of examples collected from the training data to evaluate its in-context learning ability.
Evaluation details can be found in Appendix <ref>. Table <ref> presents the performance of the pretraining model in image and video question answering tasks under the few-shot (k=2,4,8) evaluation setting.
We use the Retrieval In-Context Example Selection (RICES) <cit.> approach employed in Flamingo <cit.>.
With interleaved data incorporated in the pretraining phase, demonstrates superior performance to Flamingo-9B and Kosmos-1 under almost all scenarios. For example, achieves a VQAv2 accuracy of 58.4% and VizWiz 41.3% under the 4-shot setting, surpassing Flamingo-9B by +2.1% and +6.4%, respectively. For video-text tasks, demonstrates strong performance as well, such as 4-shot 21.8% v.s. Flamingo's 18.2% on the MSRVTTQA benchmark.
Additionally, we can observe a positive correlation between the number of shots k (k=0,2,4,8) and the performance of .
These results demonstrate 's remarkable in-context learning ability.
§.§ Qualitative Evaluation
Beyond quantitative benchmarks, we conduct adequate qualitative evaluation of . demonstrates impressive capabilities that cannot be evaluated on standard benchmarks, including real-world knowledge grounding (upper right of Figure <ref>),
interleaved multi-image understanding (left side of Figure <ref>), detailed video understanding (lower right of Figure <ref>), multimodal assistant (Figure <ref>), multi-turn dialogue (Figure <ref>), image blending (Figure <ref>), and (in-context) text-to-image generation.
For in-context text-to-image generation, can generate context-related images (in the first two rows of Figure <ref>, the generated images share the oil painting style in context, compared with the corresponding images generated without context in the first two rows of Figure <ref>), and follow context-related instructions, as shown in the 4th row of Figure <ref>.
The in-context ability of the multimodal modeling of (LLM as initialization) is responsible for this brand-new ability of image generation.
We also compare with other state-of-the-art multimodal assistants in terms of the ability to perform typical image captioning tasks (Figure <ref>) and follow human instructions (Figure <ref>).
In Figure <ref>, we test a slightly more difficult instruction, and only our instruction-tuned model responds properly, listing 8 books written by Agatha Christie and then recommending one.
§ RELATED WORK
Multimodal pretraining <cit.> learns cross-modal interactions from large-scale multimodal data.
BEiT series <cit.> convert visual signals into discrete tokens that can be pretrained same as language, and BEiT-3 <cit.> achieves exceptional fine-tuning performance with a unified BERT-style <cit.> masked signal modeling objective.
Flamingo <cit.> bridges powerful yet private pretrained vision and large language models and first demonstrates remarkable multimodal zero-shot and few-shot behaviors.
With the increasing impact <cit.> and accessibility <cit.> of LLMs, recent work has also considered building multimodal models based on LLMs <cit.>, such as the BLIP series <cit.> that connect frozen vision and language pretrained models with a Q-Former to bridge the modality gap.
These LMMs commonly use predicting the next text token as the training objective and exert no supervision for vision data <cit.>. Instead, unifies the modeling of vision and language with the objective of predicting the next visual or text token in an autoregressive manner, and further explores videos as a new source of interleaved image-text data. This unified modeling leads to a generalist interface for diverse multimodal tasks that output either image or text.
Emerging recent studies <cit.> attempt to build powerful visual multimodal assistants based on LMMs through constructed conversation data. We also instruction-tune using publicly available datasets and build a multimodal assistant that aligns well with human instructions on both images and videos.
§ CONCLUSION
In this work, we present , a Large Multimodal Model (LMM) trained with a unified autoregressive objective of predicting the next element, including both visual and textual tokens. Apart from commonly used image-text pairs and interleaved documents, we explore another scalable data source of image-text interleaved data, , video.
trained under such unified objective and diverse data can serve as a generalist interface that is capable of performing diverse multimodal tasks, such as image captioning, image/video question answering, and text-to-image generation, together with new abilities like in-context text and image generation, and image blending.
We also build a multimodal assistant instruction-tuned on , which exhibits excellent human-aligned abilities such as multi-turn dialogue.
We hope that our work will inspire the community to continue exploring the potential of diverse multimodal data at the web-scale and also the generative pretraining beyond vision and language.
§ ACKNOWLEDGEMENT
We would like to thank Hanxiao Qu, Quanyue Ma, Teng Dai, Yemin Shi, Wenhao Huang, Yue Cao, as well as other colleagues at BAAI for their support to this project.
§ EMU TRAINING
§.§ Pretraining
§.§.§ Dataset Details
Image-text Pairs. The LAION-2B dataset is the English subset of LAION-5B <cit.> and contains large-scale image-text pair data. LAION-COCO <cit.> consists of 600M images from LAION-2B captioned with an ensemble of BLIP <cit.> and CLIP <cit.> models. Whereas the text in LAION-COCO <cit.> exhibits enhanced fluency and relevance to the associated images, it has insufficient text diversity and potentially loses high-level semantic information, including the world-knowledge content present in the original LAION-2B dataset. Thus, we employ both the LAION-2B and LAION-COCO <cit.> datasets during pretraining.
Video-text Pairs. The WebVid-10M <cit.> dataset contains diverse content with a strong correlation between text and video. However, we found that a certain amount of the data contains irrelevant metadata (e.g., resolution of the original video, camera parameters). To prevent the model from being influenced by these irrelevant details, we use heuristic rules to remove such content. First, we build a word list of irrelevant information. This word list is then used to filter the raw video text descriptions from the original dataset, identifying approximately 1 million samples that require cleaning. Subsequently, specific rules based on this list are applied to remove the irrelevant words from the text. Finally, the cleaned text is rewritten using Vicuna-13B <cit.>, ensuring its fluency and enhancing the overall quality.
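The word-list filtering step (excluding the Vicuna rewriting) could look roughly like the following sketch; the listed metadata terms are invented examples, since the actual word list is not given here.

import re

# Illustrative metadata terms only; the real word list is not specified in the text.
METADATA_TERMS = ["4k", "1080p", "uhd", "footage", "slow motion", "stock video",
                  "dolly shot", "close up shot"]
pattern = re.compile(r"\b(" + "|".join(map(re.escape, METADATA_TERMS)) + r")\b",
                     flags=re.IGNORECASE)

def needs_cleaning(caption):
    # Flag captions that contain any of the irrelevant metadata terms.
    return pattern.search(caption) is not None

def strip_metadata(caption):
    cleaned = pattern.sub("", caption)
    return re.sub(r"\s{2,}", " ", cleaned).strip()   # collapse leftover whitespace

print(strip_metadata("Aerial 4K footage of a coastline at sunset"))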
Interleaved Image and Text. Multimodal-C4 <cit.> is used as interleaved image-text data in pretraining. Following OpenFlamingo<cit.>, we filter images based on CLIP similarity score to ensure the relevance of the images and text in each document. Specifically, any image with a CLIP similarity score below the threshold of 0.32 for all text in the same document is discarded. From each document, we sample a random subsequence of L = 1024 and take up to the first N = 5 images included in the sampled sequence. This process results in long text with the inclusion of multiple images. Additionally, we randomly sample N = 5 images along with their corresponding sentences to construct a subsequence of L = 512. This approach yields N = 5 image-text pairs.
Interleaved Video and Text. Videos with interleaved subtitle text represent a valuable and scalable source of multimodal data that has received limited attention thus far. In our study, we introduce the YT-Storyboard-1B dataset, which collects storyboard images from YouTube using the video-ids provided by the YT-Temporal-1B dataset, encompassing a vast collection of 18 million videos and a total of about 1.8 billion storyboard images. Specifically, for each video we crawl the storyboard images and subtitle files directly. Since the sampling interval between storyboard images is fixed, the start time of each image can be determined from its order. Subtitle files record the content of each subtitle, as well as its start and end times. Therefore, storyboard images and subtitles can be sorted according to their timestamps, and adjacent subtitles can be merged to form an interleaved video-text sequence. By opting to collect storyboard images instead of raw video data, we eliminate the need for video decoding. Moreover, this approach leads to a substantial 20-fold reduction in data storage costs, resulting in increased download efficiency.
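A minimal sketch of this timestamp-based interleaving is given below; the data structures and the simple adjacent-subtitle merging rule are assumptions for illustration.

def interleave(storyboards, subtitles, frame_interval):
    # storyboards:    list of image ids/urls in temporal order (fixed sampling interval)
    # subtitles:      list of (start_time, text) tuples
    # frame_interval: seconds between consecutive storyboard images
    # Returns a single timestamp-ordered list of ("image", x) / ("text", x) items.
    events = [(i * frame_interval, "image", img) for i, img in enumerate(storyboards)]
    events += [(t, "text", txt) for t, txt in subtitles]
    events.sort(key=lambda e: e[0])

    merged = []
    for _, kind, payload in events:
        if kind == "text" and merged and merged[-1][0] == "text":
            merged[-1] = ("text", merged[-1][1] + " " + payload)   # merge adjacent subtitles
        else:
            merged.append((kind, payload))
    return merged

seq = interleave(["frame0.jpg", "frame1.jpg", "frame2.jpg"],
                 [(0.5, "hello"), (1.2, "and welcome"), (2.3, "to the channel")],
                 frame_interval=1.0)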
§.§.§ Training Details
We report the detailed training hyperparameters settings of during the pretraining in Table <ref>.
§.§ Visual Decoding
§.§.§ Dataset Details
LAION-Aesthetics <cit.> is the subset of LAION-5B <cit.> which have relatively high aesthetics quality while LAION-COCO <cit.> has relatively high image-text correlation. To empower the visual decoder to possess the ability of decoding visual embeddings with both high quality and high relevance to text prompts, we use both LAION-COCO and LAION-Aesthetics for visual decoding training. More specifically, we filter all text prompts with length greater than 150 to preserve a large enough batch size and prevent the GPU memory overflow. This rule discards about 8% of LAION-Aesthetics and 0.01% of LAION-COCO data, which has little effect on data diversity.
§.§.§ Training Details
The detailed training setups are listed in Table <ref>.
§ INSTRUCTION TUNING
§.§ Dataset Details
We collect publicly available language, image and video instruction datasets for instruction tuning.
* Language instructions: ShareGPT contains about 70K user dialogues with ChatGPT or GPT-4, and Alpaca <cit.> dataset contains 52K instruction-following data generated using self-instruct <cit.> from OpenAI's .
* Image instructions: we use LLaVA <cit.> dataset consisting of three types of visual instructions, conversation, detailed description, and complex reasoning, with a total number of 158K image-text instruction-following samples. In our preliminary experiments, we found the instruction-tuned model often generates instruction-irrelevant detailed descriptions of the image. Thus, we remove the detailed description subset of LLaVA.
We also find a bad pattern 'on top of the back of' in the model's response, and we filter all data that contains this pattern. The resulting 130K LLaVA subset is used for instruction tuning.
* Video instructions: we use VideoChat-11K <cit.> and a subset of Video-ChatGPT-100k <cit.> as our video-instruction dataset. The VideoChat-11K dataset is built from WebVid-10M and consists of 7K detailed video descriptions and 4K video conversations. Video-ChatGPT-100k is built from ActivityNet, from which we sample a subset of around 30K examples that includes only videos under one minute.
We use a batch size of 128 and train for 10K steps, with 3 epoches for ShareGPT, Alpaca and LLaVA datasets, and 60K samples for video-instruction data. We attach LoRAs <cit.> on all linear projections of the self-attention layer, with the LoRA rank and alpha being 16.
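For reference, attaching such LoRA modules with the peft library might look like the sketch below; the target module names assume a LLaMA-style implementation of the attention projections, and the checkpoint identifier is a placeholder rather than the actual model used.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Rank and alpha of 16 as stated above; module names and checkpoint id are assumptions.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # self-attention projections only
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b")  # placeholder checkpoint
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # only the LoRA weights are trainable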
§.§ System Messages
We use different system messages for language-instruction, image-instruction and video-instruction datasets, as shown in Table <ref>.
§ EVALUATION
§.§ Benchmarks
excels at performing diverse types of completion in multimodal sequences by accepting multimodal prompts, including text, images, videos, or their combinations, and generating comprehensive multimodal responses. To evaluate the capabilities of , we conduct extensive benchmark tests covering various tasks, which are summarized in Table <ref>. Specifically, we meticulously select 9 benchmarks that encompass multimodal image/video and language tasks, including text-to-image generation, visual question answering for both images and videos, and image-based visual dialogue. When benchmarking OKVQA, we use VQAv2 evaluation code[<https://github.com/GT-Vision-Lab/VQA>] and stem the answers using Porter stemming to consolidate answers following <cit.>. For other tasks, we either submit our results for evaluation on the official website or use standard evaluation code.
§.§ Zero-shot Evaluation
Prompt Template.
To ensure that the model outputs answers in the required style for the benchmark tests, we prompt and with task-specific templates, as shown in Table <ref>. For each type of task, we have developed dedicated templates to structure the model's output. In these templates, "{question}" will be replaced with the question from the question-answering task, "{history question}" will be replaced with the historical question from the multi-turn visual dialogues, and similarly "{history answer}" will be replaced with the historical annotated answer from the multi-turn visual dialogues. Then, the image/video will be added before the text as input to the model. Additionally, we implement post-processing techniques to filter out commonly occurring redundant phrases such as "it is", "it's", "a", "an", and "the".
Furthermore, the model is required to output “unanswerable” for questions that cannot be answered in the VizWiz dataset. To achieve this, we augment the template by adding the phrase “is the answer known?” and prompt the model to respond with either “yes” or “no” by constraining the model generation. If the model responds with “no”, we immediately return the answer as “unanswerable”. On the other hand, if the model responds with “yes”, we proceed to prompt the model to provide a valid answer.
Multimodal Chain-of-Thought Prompting. To enhance the capabilities of the pretrained model, we utilize the Multimodal Chain-of-Thought prompting technique. Initially, when presented with an image or video, we employ a prompt to guide the model in generating a descriptive caption. Subsequently, the model is given both the caption and a task-specific prompt to generate the final result. The complete prompt template is shown in Table <ref>, where the “{caption}” tag in template will be replaced with the descriptive text generated by . The experimental results demonstrate that this test-time technique effectively improves the model's performance without any additional data, leveraging the inherent capability of the model itself.
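The two-step prompting logic can be sketched as follows; the generate function and the exact prompt wording are placeholders, not the actual prompts used.

def chain_of_thought_answer(generate, image, question):
    # Step 1: ask the model to describe the visual content.
    caption = generate(image=image, prompt="Describe the image in detail:")
    # Step 2: feed the generated caption back together with the task-specific prompt.
    final_prompt = f"{caption} Based on the description, answer the question. {question} Short answer:"
    return generate(image=image, prompt=final_prompt)

def dummy_generate(image, prompt):
    # Stand-in for the pretrained model's generation call, only to make the sketch run.
    if prompt.startswith("Describe"):
        return "a red double-decker bus on a city street"
    return "a bus"

print(chain_of_thought_answer(dummy_generate, image=None, question="What vehicle is shown?"))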
Text-only Examples Prompting. To ensure a fair comparison with Flamingo, we include results obtained through text-only examples prompting, denoted by an asterisk (*) in Table <ref>. We adopt the same approach as Flamingo in selecting examples (i.e., RICES). This involves utilizing two text-only examples from the task as prompts, without any accompanying images (similar to the few-shot text prompts). During the evaluation process, we observed that this approach effectively formats the model's output, regardless of the label format of the datasets and the evaluation metrics employed, enabling a more accurate reflection of its true performance.
§.§ Few-shot Evaluation
In the few-shot evaluation settings, we incorporate a few example samples as prefixes in the template and connected the few-shot examples using “. ”. Additionally, like Flamingo, we employ the Retrieval In-Context Example Selection (RICES) approach to select the few-shot examples.
To implement RICES, we begin by randomly selecting 5000 training set samples for each dataset. Then, using the pretrained EVA-CLIP model, we extract features from both the training set images/videos and the test set images/videos. For each test set sample, we select examples from the training set based on the highest cosine similarity using the extracted features, including them in the prompt. For the video-text task, we retrieve similar videos from the training set by comparing the mean of frame-level visual features extracted from our pretrained EVA-CLIP model.
Furthermore, we find that the support video examples do not require many frames; using too many frames could exceed the LLM's context length limit. Therefore, we sample 8 frames for the given video and only 2 frames for each support video example.
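A minimal sketch of the RICES selection step is shown below; the feature dimension and the number of retrieved examples are illustrative, and for videos each feature is assumed to be the mean of frame-level features as described above.

import torch
import torch.nn.functional as F

def rices_select(test_feats, train_feats, k=4):
    # test_feats:  (Q, d) one feature per query image/video
    # train_feats: (M, d) features of the candidate pool (e.g. 5000 training samples)
    # Returns the indices of the k most similar candidates (cosine similarity) per query.
    test_feats = F.normalize(test_feats, dim=-1)
    train_feats = F.normalize(train_feats, dim=-1)
    sim = test_feats @ train_feats.t()               # (Q, M) cosine similarities
    return sim.topk(k, dim=-1).indices

pool = torch.randn(5000, 768)                        # stand-in for EVA-CLIP features
queries = torch.randn(3, 768)
support_ids = rices_select(queries, pool, k=2)       # in-context examples for each query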
§ QUALITATIVE CASES
|
http://arxiv.org/abs/2307.04957v1 | 20230711012009 | Reinforcement Learning with Non-Cumulative Objective | [
"Wei Cui",
"Wei Yu"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.NI",
"math.OC",
"stat.ML"
] |
Reinforcement Learning with
Non-Cumulative Objective
Wei Cui, Student Member, IEEE, and Wei Yu, Fellow, IEEE
Manuscript submitted on November 10, 2022, revised on August 12, 2023. This work is supported by Natural Sciences and Engineering Research Council (NSERC) of Canada via the Canada Research Chairs Program.
The authors are with The
Edward S. Rogers Sr. Department of Electrical and Computer Engineering,
University of Toronto, Toronto, ON M5S 3G4, Canada
(e-mails: {cuiwei2, weiyu}@ece.utoronto.ca).
October 2023
In reinforcement learning, the objective is almost always defined as a cumulative function over the rewards along the process. However, there are many optimal control and reinforcement learning problems in various application fields, especially in communications and networking, where the objectives are not naturally expressed as summations of the rewards. In this paper, we recognize the prevalence of non-cumulative objectives in various problems, and propose a modification to existing algorithms for optimizing such objectives. Specifically, we dive into the fundamental building block for many optimal control and reinforcement learning algorithms: the Bellman optimality equation. To optimize a non-cumulative objective, we replace the original summation operation in the Bellman update rule with a generalized operation corresponding to the objective. Furthermore, we provide sufficient conditions on the form of the generalized operation as well as assumptions on the Markov decision process under which the globally optimal convergence of the generalized Bellman updates can be guaranteed. We demonstrate the idea experimentally with the bottleneck objective, i.e., the objectives determined by the minimum reward along the process, on classical optimal control and reinforcement learning tasks, as well as on two network routing problems on maximizing the flow rates.
Reinforcement Learning, Optimal Control, Markov Decision Process, Wireless Network, Routing.
§ INTRODUCTION
In reinforcement learning (RL), an agent performs a sequence of actions to optimize a certain objective, over an environment modeled as a Markov decision process (MDP) <cit.>. The objective value is determined by the collection of intermediate rewards the agent receives until the MDP is terminated (or an absorbing state is reached). In most of the literature, the objective is defined as the summation of these intermediate rewards, which corresponds to the summation operation in the Bellman optimality equation <cit.> when computing the value function. Such cumulative objectives indeed capture the ultimate goals for many problems, such as Atari games <cit.>, stock trading <cit.>, advertisement placements <cit.>, and so on. Nonetheless, there are many problems with objectives that do not translate to summations of rewards.
Specifically, in the field of wireless communications, there are many system optimization problems that can be formulated and decomposed into sequences of optimization decisions, whose global objectives cannot be readily expressed as summations of rewards from individual optimization decisions. Examples of such problems include, but are not limited to, max-min optimizations in routing and resource allocation <cit.>, harmonic mean maximization for traffic engineering <cit.> and for transmission system optimization <cit.>, and proportional fairness optimization for wireless communications <cit.>. In this paper, we recognize the prevalence of problems with non-cumulative objectives, and propose modifications to many existing optimal control and RL algorithms for optimizing such objectives (the code for this paper is available at: https://github.com/).
In the optimal control or reinforcement learning literature, one class of problems with non-cumulative objectives are the problems where only terminal states matter, such as the the game of Go <cit.> or Chess <cit.>. Researchers managed to cast the objectives into summations of rewards, by assigning every reward a zero value except for the terminal reward. Problems seeking fast task completions form another class of examples, such as maze navigation or the mountain-car control task <cit.>. Researchers cast the objectives as cumulative rewards by assigning a penalty for each action the agent takes before reaching the destination <cit.>. There are also researches on objectives that are not easily cast into summations, such as the objectives as the average reward <cit.>. To optimize the average reward, besides computing the summation of rewards, the number of steps is either tracked explicitly <cit.>, or taken to the limit at infinity (for cyclic non-terminating MDPs) <cit.>. Regardless, the summation operation in the Bellman optimality equation remains in these proposed algorithms. There have been two works <cit.> exploring maximum-reward objectives, with applications on financial derivatives and medicine design. These works recognize the possibility of modifying the Bellman optimality equation, however their scopes are restricted to the maximum-reward objective formulation, instead of generalizing to a larger class of objective functions or proposing universal conditions for convergence. Furthermore, for MDPs whose state transition is a stochastic function of the input, the convergence to the global optimal policy cannot be guaranteed for the approach in <cit.> and <cit.>.
In this paper, we generalize the optimal control and reinforcement learning objectives to a variety of functions over the intermediate rewards. To optimize the generalized objectives, we exploit the flexibility in the Bellman optimality equation and modify it according to the generalized objective function. Specifically, we replace the summation operation in the Bellman optimality equation with new operations catering to the non-cumulative objective functions. Through this approach, we can readily adapt existing optimal control or reinforcement learning algorithms to optimizing non-cumulative objectives, without needing to re-engineer a new set of artificial rewards just to cast the objective into a summation of rewards. Furthermore, we provide theoretical analysis of the generalized Bellman updates, and propose sufficient conditions on the form of the new operation, as well as assumptions on the MDP, under which the global optimality of the converged value function and the corresponding greedy policy can be guaranteed.
By expanding the possibilities of the objective functions, we are now able to solve problems with objectives that are intrinsically non-cumulative. For experiments, we focus on the bottleneck objective: the objective defined as the minimum of all intermediate rewards. To optimize bottleneck objectives, we replace the summation operation in the Bellman optimality equation with the minimization operation, and apply the generalized Bellman update rule to learn the value function. In numerical simulations, we first re-formulate two classical reinforcement learning problems, the CartPole problem <cit.> and the Atari game Breakout, with bottleneck objectives. By optimizing these problems with the proposed generalized Bellman updates, we obtain competitive performance from policies whose strategies differ from the classical solutions.
We further experiment on two network communication applications with bottleneck objectives: the problem of finding the single-path maximum flow on a directed graph as an optimal control task, as well as joint routing and spectrum access over a wireless ad hoc network as a reinforcement learning problem. The proposed approach achieves excellent performance on both problems, which are otherwise difficult to solve using the conventional formulation and learning algorithms. Specifically, for the wireless ad hoc network problem, a prior work <cit.> has explored the Monte-Carlo estimation approach for learning the value function. In contrast, the proposed generalized update rule allows the highly efficient temporal difference learning technique <cit.> to be adapted to the generalized objective formulation, which results in noticeably faster and more stable learning progress. Furthermore, as the wireless ad hoc network problem is essentially a multi-agent reinforcement learning (MARL) problem, the results obtained also suggest that the proposed approach is readily compatible and effective in the multi-agent reinforcement learning setting.
The rest of the paper is organized as follows. In Section <ref>, we introduce the general problem description on optimizing non-cumulative objectives, as well as several examples where non-cumulative objectives are applicable. In Section <ref>, we formally propose the method of the generalized Bellman update rules, and provide theoretical convergence and optimality analysis. We provide the detailed problem formulations on several example applications, and elaborate on how the proposed generalizations can be applied to optimizing such specific applications in Section <ref>, followed by the numerical simulations and analysis of the results in Section <ref>. Lastly, we draw conclusions in Section <ref>.
§ GENERALIZED OPTIMAL CONTROL & REINFORCEMENT LEARNING FORMULATION
§.§ Conventional Formulation
Let 𝒮 and 𝒜 denote the state space and the action space of an MDP. At time step t, the agent observes a state s_t∈𝒮, executes an action a_t∈𝒜, and receives a reward r_t∈ℛ while transiting to the next state s_t+1∈𝒮. We use {p_R_t|S_t,A_t(r_t|s_t,a_t)}_t=1,2… and {p_S_t+1|S_t,A_t(s_t+1|s_t, a_t)}_t=1,2… to denote the reward distribution and the state transition distribution of the MDP, but often omit the subscripts for notational simplicity, e.g., as in {p(r_t|s_t,a_t)}_t=1,2… and {p(s_t+1|s_t,a_t)}_t=1,2…. In most of the literature, the objective is defined as the summation of all intermediate rewards the agent received along the process:
u = r_1+γ r_2+γ^2 r_3+… ,
where γ∈(0,1) is the discount factor that encourages the agent to focus more on rewards closer in time. The study of optimal control (when both the reward distribution p(r_t|s_t,a_t) and the state transition distribution p(s_t+1|s_t,a_t) are known) or reinforcement learning (when neither p(r_t|s_t,a_t) nor p(s_t+1|s_t,a_t) is known) is to find a policy π for the agent to select actions based on states as a_t∼π(s_t),∀ t, such that u in <ref> is optimized.
Corresponding to <ref>, the value function is defined as the future cumulative reward the agent expects to receive under a specific policy. Let 𝒱={(s,a) | s∈𝒮, a∈𝒜} denote the set of all possible state-action pairs. The value function Q^π∈ℛ^|𝒱| is a vector containing the expected future cumulative reward starting from each (s,a) tuple, with:
Q^π(s_t, a_t) = 𝔼_{p(r_t'|s_t',a_t'), p(s_t'+1|s_t',a_t'), a_t'+1∼π(s_t'+1)}_t'=t,t+1,…[ r_t+γ r_t+1+γ^2 r_t+2+… | s_t,a_t ]
= 𝔼_p(r_t|s_t,a_t), p(s_t+1|s_t,a_t), a_t+1∼π(s_t+1)[ r_t+γ Q^π(s_t+1,a_t+1) | s_t,a_t ] .
As shown in <cit.>, for stationary single-agent fully-observable MDPs, there exists a deterministic globally optimal policy π^*, whose value function is denoted Q^*, satisfying the following relationship:
π^*(s_t)=argmax_a Q^*(s_t,a)
Essentially, π^*(s_t) is a deterministic distribution that places all of its probability mass on the single action maximizing Q^*(s_t,a). Therefore, π^* is commonly referred to as a greedy policy.
For optimal control, Q^* can be computed by <ref> with π being the global optimal greedy policy π^*, leading to the Bellman optimality equation:
Q^*(s_t, a_t) = 𝔼_p(r_t|s_t,a_t), p(s_t+1|s_t,a_t)[ r_t+γmax_a_t+1Q^*(s_t+1,a_t+1) | s_t,a_t ] .
Meanwhile, for reinforcement learning, Q^* is learned through iterative updates of sample-based approximations to <ref>, known as the Bellman update:
Q(s_t, a_t) ← r_t+γmax_a_t+1Q(s_t+1, a_t+1) ,
where the superscript on Q is dropped, since during these updates, Q does not necessarily correspond to the value function of any policy. We note that in Bellman updates, as shown by <ref>, the updated estimates of Q are obtained through bootstrapping from the current estimates. This learning technique is commonly known as temporal difference learning <cit.>, which enjoys low estimation variance and high learning efficiency. <ref> is used directly in value-based algorithms such as SARSA <cit.> and Q-learning <cit.>, where the process is commonly referred to as value iteration, as well as in policy-based algorithms such as the class of Actor-Critic methods <cit.>.
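As a concrete illustration of the update rule in <ref>, the following minimal sketch (our illustrative addition, not part of the cited algorithms) runs tabular Q-learning with the conventional cumulative update on a small hypothetical deterministic chain MDP; the environment, learning rate, and exploration schedule are assumptions made only for this example.

import random

random.seed(0)

# Hypothetical 3-state deterministic chain MDP: states 0, 1, 2 (state 2 is terminal).
# Action 0 moves right and yields reward 1.0; action 1 stays and yields reward 0.0.
def step(state, action):
    if action == 0:
        return state + 1, 1.0, state + 1 == 2     # next_state, reward, done
    return state, 0.0, False

gamma, alpha, eps = 0.9, 0.5, 0.1
Q = {(s, a): 0.0 for s in range(3) for a in range(2)}

for episode in range(200):
    s = 0
    for _ in range(20):
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a_: Q[(s, a_)])
        s_next, r, done = step(s, a)
        # conventional Bellman update: reward plus discounted bootstrap (a summation)
        target = r if done else r + gamma * max(Q[(s_next, a_)] for a_ in range(2))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next
        if done:
            break

print(Q[(0, 0)], Q[(0, 1)])   # moving right should receive the higher value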
§.§ Generalized Non-Cumulative Objectives
While it is proper to express the objective as <ref> in many scenarios, there exist applications where the objective u is intrinsically some other function over the intermediate rewards. In this paper, to generalize the class of objectives that can be optimized, we formulate the objectives as general functions over intermediate rewards:
u=f(r_1, r_2,r_3,…) .
Examples for such objectives can be seen from a wide variety of problems, which include, but are not limited to, the following classes of problems:
* The bottleneck of the intermediate rewards along the process, which fits into the large class of max-min optimization problems <cit.>. Among these max-min optimizations, the network routing problems are perhaps the most standout examples.
* The largest reward among the intermediate rewards along the process <cit.>.
* The harmonic mean of the intermediate rewards along the process, such as the average traveling velocity, the equivalent resistance of parallel resistors in a circuit, or the density of a mixture. It has also been used in wireless communications as a measure of fairness among users <cit.>.
Among various non-cumulative objectives, the objective of the bottleneck reward is particularly prevalent. An important class of problems with bottleneck objectives are the network routing problems. Consider a data flow in a communication network that traverses multiple links: the highest rate the flow can support is the rate of the bottleneck link (i.e., the link with the lowest rate). Correspondingly, network routing problems are best formulated with the bottleneck objective. We describe such problems in detail in Section <ref>.
§ LEARNING ALGORITHMS WITH GENERALIZED BELLMAN UPDATES
This section aims to generalize optimal control and reinforcement learning to MDPs with non-cumulative objectives as <ref> by modifying the operation within the Bellman updates in <ref>. We present sufficient conditions on the modified operation as well as assumptions on the underlying MDPs such that the Bellman updates still maintain the global optimal convergence property. Furthermore, we provide examples of frequently-encountered non-cumulative objectives with corresponding operations that satisfy the conditions for convergence.
§.§ Bellman Update with Generalized Operations
Observing <ref>, the update target of the new iteration consists of three fundamental elements:
* Intermediate reward r_t,
* Value function at next state-action pair Q(s_t+1, a_t+1),
* Summation operation to combine <ref> and <ref>.
In this paper, we explore substituting <ref> in the Bellman optimality equation and its update rule with an alternative computational operation, which we refer to as the generalized Bellman update operation, denoted by g(·,·). The operation takes <ref> and <ref> as its two arguments. As a result, we generalize <ref> to the following form:
Q(s_t, a_t) ← g(r_t, γmax_a_t+1Q(s_t+1, a_t+1)).
Through this generalized Bellman update operation, we are able to adapt the highly efficient temporal difference learning technique, as well as many popular reinforcement learning algorithms that are based on it (e.g., SARSA, Q-learning, Actor-Critic), to optimizing non-cumulative objectives, with minimal changes to these algorithms.
To determine which generalized objective functions f(⋯) as per <ref> can be optimized with such generalized Bellman updates, the first criterion is that the objective function needs to have optimal substructure <cit.>, as a fundamental requirement of dynamic programming. Furthermore, for learning based algorithms with value function approximators (such as neural networks), it is desirable to have the value function Q with fixed-dimension outputs from state to state (under most scenarios, the value function is a scalar function). This corresponds to the requirement that the objective function should be computable by iteratively updating a fixed number of statistics over its arguments (i.e. the intermediate rewards). When the two requirements are satisfied, we can deduce the proper operation g(·,·) from the objective function f(⋯) on a case-by-case basis.
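To make the substitution concrete, below is a minimal sketch of the generalized sample-based update <ref>, with the combining operation g(·,·) passed in as a function; the tabular representation and hyperparameters are illustrative assumptions, and the three operations shown correspond to the cumulative, bottleneck, and maximum-reward objectives discussed in this paper.

def generalized_bellman_update(Q, s, a, r, s_next, actions, g, gamma=1.0, alpha=0.5, done=False):
    """One sample-based update: Q(s,a) <- g(r, gamma * max_a' Q(s',a')); the target is just r at termination."""
    if done:
        target = r
    else:
        target = g(r, gamma * max(Q[(s_next, a_)] for a_ in actions))
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Operations g(., .) corresponding to the objectives discussed in this paper.
g_sum = lambda r, v: r + v      # cumulative objective (recovers the standard update)
g_min = lambda r, v: min(r, v)  # bottleneck objective
g_max = lambda r, v: max(r, v)  # maximum-reward objective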
§.§ Conditions for Convergence
To facilitate the theoretical analysis, we denote each step of the value iteration by the function mapping F^π:ℛ^|𝒱|→ℛ^|𝒱|. The superscript π indicates that the policy π is used for action selection in the one-step look ahead target computation. Correspondingly, we have:
(F^π Q)(s_t, a_t) = 𝔼_p(r_t|s_t,a_t), p(s_t+1|s_t,a_t), a_t+1∼π(s_t+1)[ g(r_t, γ Q(s_t+1, a_t+1)) ],
where the expectation is understood as conditioned under (s_t, a_t). When the deterministic greedy policy as derived from the current Q is used, the value iteration is denoted by F^* as follows:
(F^*Q)(s_t, a_t) = 𝔼_p(r_t|s_t,a_t), p(s_t+1|s_t,a_t)[ g(r_t, γmax_a_t+1Q(s_t+1, a_t+1)) ].
The original Bellman updates in <ref> enjoy convergence to the global optimal value function, as shown in <cit.>. To generalize this convergence property for the generalized updates as <ref>, we present a sufficient condition on g(·,·) for ensuring convergence to a unique value function in the following theorem.
On a single-agent fully observable MDP, the series of value functions obtained from iteratively applying the generalized Bellman update rule Q← F^*Q as in <ref> is guaranteed to converge to a unique convergence point in ℛ^|𝒱| from any arbitrary starting point, if g(·,·):ℛ×ℛ→ℛ satisfies the following condition:
|g(a,b)-g(a,c)|≤|b-c| ∀ a,b,c∈ℛ
The mathematical proof of this theorem is presented in Appendix <ref>.
Note that in <ref>, we do not claim that the greedy policy resulting from the converged value function is the globally optimal policy. Besides an additional condition needed on the operation g(·,·) (to be introduced in the next subsection), the main reason is that after generalizing the Bellman update operation to g(·,·), the value function learned through the value iteration process is no longer guaranteed to be the true expectation of the objective value as defined in <ref> when the state transition functions and reward functions are stochastic. We elaborate on this observation in the following subsection.
§.§ Suboptimality with Stochastic Transitions and Rewards
Using the generalized operation g(·,·), we express the objective in <ref> recursively as:
u=f(r_1, r_2, r_3, …) = g(r_1, γ g(r_2, γ g(r_3, …))) .
We have shown a condition on g(·,·) for convergence in <ref> for obtaining Q^*. Nonetheless, we observe that Q^* does not necessarily recover the true expectation of u when stochastic state transitions and rewards are considered. To illustrate this, consider an episode starting from the state s_1. Under the greedy policy π^* derived from Q^*, we take the expectation over p(r_t|s_t,a_t) and p(s_t+1|s_t,a_t) on <ref>, which leads to:
𝔼_{a_t=π^*(s_t)}, {p(r_t|s_t,a_t)}, {p(s_t+1|s_t,a_t)}, t=1,2,…[ u(r_1, r_2, r_3, …) ]
= 𝔼_{a_t=π^*(s_t)}, {p(r_t|s_t,a_t)}, {p(s_t+1|s_t,a_t)}, t=1,2,…[ g(r_1, γ g(r_2, γ g(r_3, …))) ] .
Meanwhile, starting from t=1, with the converged Q^* obtained from the generalized Bellman updates, we have:
𝔼_a_1=π^*(s_1)[ Q^*(s_1, a_1) ]
= 𝔼_a_1=π^*(s_1), p(r_1|s_1,a_1), p(s_2|s_1,a_1), a_2=π^*(s_2)[ g(r_1, γ Q^*(s_2, a_2)) ]
= 𝔼_a_1=π^*(s_1), p(r_1|s_1,a_1), p(s_2|s_1,a_1), a_2=π^*(s_2)[ g(r_1, γ 𝔼_p(r_2|s_2,a_2), p(s_3|s_2,a_2), a_3=π^*(s_3)[ g(r_2, γ Q^*(s_3, a_3)) ]) ] .
Comparing <ref> and <ref>, for 𝔼_a_1=π^*(s_1)[Q^*(s_1, a_1)] to be equal to the expectation of u under π^*, p(r_t|s_t,a_t), and p(s_t+1|s_t,a_t), we require g(·,·) to be exchangeable with 𝔼_π^*[·], 𝔼_p(r_t|s_t,a_t)[·], and 𝔼_p(s_t+1|s_t,a_t)[·]. With π^* being the deterministic greedy policy as in <ref>, the operation 𝔼_π^*[·] can always be exchanged with g(·,·). However, if p(r_t|s_t,a_t) or p(s_t+1|s_t,a_t) is stochastic, 𝔼_p(r_t|s_t,a_t)[·] or 𝔼_p(s_t+1|s_t,a_t)[·] is not necessarily exchangeable with g(·,·). In this case, <ref> and <ref> can potentially evaluate to different values, and therefore π^* derived from Q^* may be suboptimal.
Under this observation, in order to obtain a global optimality guarantee on the greedy policy π^*, we constrain the scope to deterministic MDPs. Furthermore, we introduce an additional condition on the generalized operation g(·,·) in order to establish global optimality, as formally stated in the following theorem:
Given a non-cumulative objective function u and its corresponding generalized Bellman update operation satisfying the condition <ref> from <ref>, let Q^* denote the convergence point of the value iteration (from iteratively applying the generalized Bellman update rule as in <ref>). For an MDP with deterministic p(r_t|s_t,a_t) and p(s_t+1|s_t,a_t), the greedy policy π^* derived from Q^* is guaranteed to be the global optimal policy, if g(·,·) satisfies the following additional condition:
b≥ c implies g(a,b)≥ g(a,c) ∀ a,b,c∈ℛ
The mathematical proof of this theorem is provided in Appendix <ref>.
We note that the assumptions on MDPs in <ref> are satisfied by a large class of optimal control and reinforcement learning problems: e.g., board games including Go and Chess, a subset of Atari games, the class of network routing problems (such as the problems to be studied in Section <ref>), and so on.
To summarize, given any general MDP, we may generalize its objective function and apply the generalized Bellman update as in <ref> to try to learn its value function. If the generalized update operation satisfies the condition <ref> in <ref>, the value iteration is guaranteed to converge to a unique convergence point. Furthermore, if the underlying MDP satisfies the assumptions in <ref>, and the update operation satisfies the condition <ref>, the convergence point is the optimal value function and the greedy policy π^* derived from the value function is guaranteed to be the globally optimal policy.
§.§ Examples of Generalized Objectives and Bellman Update Operations
We introduce several widely applicable objectives, and present the corresponding modified Bellman update operations. In the appendices, we provide the proofs that these operations satisfy the conditions in <ref> and <ref>.
§.§.§ Bottleneck Reward Objective
The objective u is the minimum (i.e. bottleneck) intermediate reward in the process:
u(r_1, r_2, r_3, …) = min(r_1,r_2,r_3,…).
The corresponding modified Bellman update operation is:
g(r_t, γmax_a_t+1Q(s_t+1, a_t+1)) = min(r_t, γmax_a_t+1Q(s_t+1, a_t+1)),
where the discount factor γ is useful for encouraging the agent to postpone the occurrences of negative rewards that often correspond to undesired or failure outcomes.
The proof that the bottleneck update operation satisfies both conditions in <ref> and <ref> is presented in Appendix <ref>.
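To see the role of the discount factor in <ref>, here is a quick numeric check (with illustrative values only, not taken from the paper's experiments) of the recursively composed bottleneck objective when all rewards are zero except a final failure reward of -1; postponing the failure strictly increases the objective.

def discounted_bottleneck(rewards, gamma=0.95):
    """Recursively composed bottleneck objective g(r_1, gamma*g(r_2, ...)) with g = min."""
    value = rewards[-1]
    for r in reversed(rewards[:-1]):
        value = min(r, gamma * value)
    return value

print(discounted_bottleneck([0, 0, -1]))         # -0.9025
print(discounted_bottleneck([0, 0, 0, 0, -1]))   # about -0.8145: later failure, higher objective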
§.§.§ Maximum Reward Objective
The objective u is the maximum intermediate rewards within the process:
u(r_1, r_2, r_3, …) = max(r_1,r_2,r_3,…).
The corresponding modified Bellman update operation is:
g(r_t, γmax_a_t+1Q(s_t+1, a_t+1)) = max(r_t, γmax_a_t+1Q(s_t+1, a_t+1)).
The proof that the maximum update operation <ref> satisfies both conditions in <ref> and <ref> follows the same logic as the proof for the bottleneck update operation shown in Appendix <ref>.
§.§.§ Harmonic Mean Reward Objective
Assuming positive rewards, and that the process is always terminated after a fixed number of steps T, the objective u is the harmonic mean of all intermediate rewards within the process:
u(r_1, r_2,…,r_T) = 1/( 1/r_1+1/r_2+1/r_3+…+1/r_T ) ,
where we omit the constant reward count. Examples of such applications with harmonic mean objectives include:
* Optimize average traveling speed over a trip consisting of a fixed number of intervals.
* Minimize resistance in a circuit with a fixed number of resistors in parallel connection.
* Optimize mixture density (e.g. alloys) with a fixed number of selections on equal-weight components.
Although maximizing <ref> is technically equivalent to minimizing the summation of the inverses of the rewards, we present it here as an example of a non-cumulative objective function whose modified Bellman update operation (shown below) satisfies the proposed convergence conditions.
The corresponding modified Bellman update operation is:
g(r_t, γmax_a_t+1Q(s_t+1,a_t+1)) = 1/( 1/r_t + 1/(γmax_a_t+1Q(s_t+1, a_t+1)) ) .
The proof that the harmonic mean update operation satisfies both conditions in <ref> and in <ref> is presented in Appendix <ref>.
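As a quick numeric sanity check (our illustrative addition, not part of the original text) that nesting the operation in <ref> with γ=1 recovers the harmonic-mean objective up to the omitted constant count:

def g_harmonic(r, v):
    return 1.0 / (1.0 / r + 1.0 / v)

rewards = [2.0, 4.0, 8.0]            # illustrative positive rewards, fixed episode length
value = rewards[-1]
for r in reversed(rewards[:-1]):
    value = g_harmonic(r, value)     # gamma = 1
print(value, 1.0 / sum(1.0 / r for r in rewards))   # both print 8/7, approximately 1.1429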
§ APPLICATIONS OF GENERALIZED REINFORCEMENT LEARNING
§.§ Classical Reinforcement Learning Problems with Bottleneck Objectives
We first re-examine classical reinforcement learning problems, formulated with the bottleneck objectives as introduced in <ref>. In many classical optimal control and reinforcement learning applications, the agent's success is largely based on its ability to avoid failure or defeat. This is particularly the case when the MDPs lack significant intermediate milestones or checkpoints, such as the CartPole problem and the Atari game Breakout. Instead of regarding such tasks as collecting as many rewards as possible, the agent can interpret the tasks with the equally valid strategy of avoiding the worst outcome (corresponding to the lowest reward) as much as possible.
Conventionally, both tasks are formulated with the cumulative objective, each with an incremental rewarding scheme. In the CartPole task, a positive reward is assigned to the agent for every timestep it maintains the pole in the upright position; while in Atari, a positive reward is assigned each time the agent breaks a brick with the bouncing ball.
To formulate the task with the bottleneck objective for such classical tasks, we assign a negative reward to the agent when an undesired or failure event occurs after executing a certain action. For the other actions that do not directly lead to the failure events, we simply assign a zero intermediate reward. In the CartPole task, the agent aims to control the cart to vertically balance the pole. When the pole falls outside a pre-defined angle range, a negative reward is assigned to the agent. Similarly, for the Atari game Breakout, the agent controls the movement of a paddle to catch and reflect a bouncing ball upwards to destroy layers of bricks located above. Each time the agent fails to catch the falling ball with the paddle, it is assigned a negative reward. With the discount factor γ applied on rewards over time steps, the later the negative rewards occur, the higher the bottleneck objective is.
By optimizing the bottleneck objective, the agent is able to learn alternative strategies to these classical problems: For CartPole, the strategy is to prevent the pole from falling for as long as possible. For Breakout, the strategy is to keep the ball in play for a maximized duration through controlling the paddle to constantly catch and reflect the ball, which translates to, although not always most efficiently, maximizing the bricks destroyed and thus achieving competitive game scores.
§.§ Single-Path Maximum-Flow Routing with Bottleneck Objective on a Graph
§.§.§ Problem Setup
Consider a communication network modeled as a directed graph G=(𝒩, ℰ), where the set of nodes 𝒩 corresponds to the transmission nodes, and the set of edges ℰ corresponds to the communication links between the nodes.
A single-path data flow is routed through the network, from a fixed source node n_s∈𝒩 towards a fixed destination node n_t∈𝒩. Each directed edge e^n_i→ n_j∈ℰ from n_i∈𝒩 to n_j∈𝒩 represents the transmission link from n_i to n_j, and is assigned a link rate capacity r(e^n_i→ n_j)=r_n_i→ n_j. We set r_n_i→ n_j=0 when there is no link from n_i to n_j in the network. The optimal routing problem is that of finding an ordered sequence of relay nodes as transmission hops, forming a route such that the bottleneck rate is maximized:
maximize_n_1,n_2,…,n_m min(r_n_s→ n_1, r_n_1→ n_2, …, r_n_m-1→ n_m, r_n_m→ n_t),
where {n_i}_i∈{1… m} denote the m relay nodes (with the number m adjustable) forming the route of the flow.
§.§.§ Generalized Optimal Control Solution
To find the single-path maximum flow within a given network represented by a directed graph as described above, we formulate the routing process as an MDP: the agent moves along the frontier node of the route, and makes sequential decisions on the selection of the node for each hop, until the destination node is reached. For the state space 𝒮, each state is uniquely identified by the frontier node the agent resides on. Specifically, we use s^n_i∈𝒮 to denote the state that the current frontier node of the partially established route is node n_i. For the action space 𝒜, we use a^n_i→n_j∈𝒜 to denote the action to move from node n_i to node n_j. Lastly, as specified in the problem setup, r_n_i→n_j corresponds to the reward for the action a^n_i→ n_j, which is the link rate capacity of the link from n_i to n_j.
To optimize the objective (<ref>), for each state and action pair (s^n_i, a^n_i→ n_j), the generalized update is as follows:
Q(s^n_i, a^n_i→ n_j) ← min(r_n_i→ n_j, γmax_n_kQ(s^n_j, a^n_j→ n_k)).
From the converged Q^*, we obtain the global optimal greedy policy π^* (guaranteed by the results in <ref> and <ref>), following which produces the flow route supporting the global maximal flow rate.
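Below is a minimal sketch of the synchronous value iteration implied by <ref> on a small hypothetical directed graph; the node names and link capacities are illustrative assumptions and are not the network used in the paper. With γ=1, the converged Q values equal the achievable future bottleneck rates, and the greedy route can be read off directly.

capacity = {           # hypothetical link rate capacities: capacity[u][v] is the rate of edge u -> v
    's': {'a': 3, 'b': 6},
    'b': {'a': 7, 't': 2},
    'a': {'t': 5},
    't': {},
}
destination = 't'

Q = {(u, v): 0.0 for u, nbrs in capacity.items() for v in nbrs}
for _ in range(len(Q)):                      # synchronous sweeps; enough to converge on this small graph
    Q = {(u, v): capacity[u][v] if v == destination
         else min(capacity[u][v], max(Q[(v, w)] for w in capacity[v]))
         for (u, v) in Q}

route, node = ['s'], 's'                     # greedy route extraction from the converged Q values
while node != destination:
    node = max(capacity[node], key=lambda w: Q[(node, w)])
    route.append(node)
print(route, min(capacity[u][v] for u, v in zip(route, route[1:])))
# expected: ['s', 'b', 'a', 't'] with bottleneck rate 5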
§.§ Wireless Ad hoc Network Routing and Spectrum Access with Bottleneck Objective
§.§.§ Problem Setup
Consider the physical-layer routing problem as discussed in <cit.>. In a wireless ad hoc network with a set of transmission nodes 𝒩, a set of data flows 𝒦 is to be established, each consisting of multiple hops with its own pair of source and destination nodes. A set of frequency bands ℬ is available for transmission, each with a bandwidth of W. We focus on two optimization tasks for these data flows: routing and spectrum access. The task of routing is to select an ordered list of intermediary relay nodes from 𝒩 to form the route for each data flow. The task of spectrum access is to select a frequency band from ℬ for the transmission of each hop in the route of each flow. We represent the route for flow k∈𝒦 as an ordered list denoted by 𝐧^(k):
𝐧^(k)=(n^(k)_0, n^(k)_1, n^(k)_2… n^(k)_m, n^(k)_m+1) ,
where n^(k)_0 and
n^(k)_m+1 represent the fixed source and destination node for flow k, and {n^(k)_i}_i∈{1… m} represent the m relay nodes (with the number m adjustable) forming the route of flow k. We represent the spectrum access solution for flow k as an ordered list denoted by 𝐛^(k), containing the selected frequency band of each hop:
𝐛^(k)=(b^(k)_1, b^(k)_2, b^(k)_3… b^(k)_m+1) ,
where b^(k)_i ∈ℬ denotes the frequency band selected for the i-th hop in the route of flow k, with i∈{1… m+1}. As the global topology of the ad hoc network is not available as inputs, the agents need to learn to infer the network topology during the routing process.
Consider a link from node n_i to node n_j over frequency band b. Let
h_(n_i→ n_j,b)∈𝒞 denote its channel coefficient. The maximum transmission rate of this link is based on
the signal to interference plus noise ratio (SINR) as follows:
SINR_(n_i→ n_j,b) = x_n_i,b|h_(n_i→ n_j,b)|^2 p / ( ∑_n_l∈𝒩, n_l≠ n_i,n_j x_n_l,b|h_(n_l→ n_j,b)|^2 p + σ^2 ) ,
r_(n_i→ n_j,b) = W log(1+SINR_(n_i→ n_j,b)) ,
where p and σ^2 denote the transmit power of each node and the background noise power on each frequency band, respectively. The binary control variable x_n_i,b indicates whether the node n_i is transmitting on the band b or is idle. The objective for each flow u^(k) is the transmission rate it supports, which is the bottleneck link rate:
u^(k)= r_min^(k) = min_i=0,1,2,…,mr_(n^(k)_i→ n^(k)_i+1, b^(k)_i+1) .
The global objective u over all data flows is then defined as the average of the bottleneck rates over all data flows:
u = ∑_k∈𝒦u^(k)/|𝒦|
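A small sketch of how the per-link rate in <ref> and the flow objectives in <ref>-<ref> could be computed; the channel gains, transmit power, noise level, and the use of a base-2 logarithm are illustrative assumptions rather than the paper's simulation values.

import numpy as np

def link_rate(h_desired, h_interferers, p=1.0, noise=1e-9, bandwidth=5e6):
    """Rate of one link on one band, from the desired and interfering channel gains."""
    sinr = (abs(h_desired) ** 2 * p) / (sum(abs(h) ** 2 * p for h in h_interferers) + noise)
    return bandwidth * np.log2(1.0 + sinr)

# One flow with three hops; each hop: (desired channel gain, gains of interfering transmitters).
hops = [(1e-3, [1e-5, 2e-5]), (5e-4, [1e-5]), (2e-3, [])]
rates = [link_rate(h, interf) for h, interf in hops]
print(rates, min(rates))   # the per-flow objective u^(k) is the bottleneck (minimum) link rate
# the global objective is then the mean of the bottleneck rates over all flows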
§.§.§ Generalized Reinforcement Learning Solution
For the physical layer routing and spectrum access problem, we assign one agent per data flow, with each agent moving along the frontier node of its flow and making hop-by-hop decisions. With multiple data flows to be jointly optimized, this problem is essentially a multi-agent reinforcement learning problem, with higher complexity than the maximum flow routing problem on a graph as in Section <ref>. By optimizing this problem with the bottleneck objective formulation and the generalized Bellman updates, we demonstrate that the proposed approach is competitive and highly effective in the setting of multi-agent reinforcement learning.
For better parameter efficiency, we only train one set of parameters shared among all agents. We assume the wireless network is only partially observable to each agent, meaning Q^* is no longer guaranteed to be globally optimal. Nonetheless, as shown in later simulations, the corresponding π^* is still competitive. We adopt the MDP formulation as in <cit.>. At each step, each agent gathers 4 pieces of information on frequency band b for each of its c closest neighboring nodes: the distance from the agent's frontier node to the neighbor; the distance from the neighbor to the flow destination; the angle between the neighbor direction and the destination direction; and the signal interference experienced by the neighbor on band b. With this information, the agent forms the state s on band b with s∈ℝ^4c,∀ s∈𝒮. For the action space 𝒜, the agent has c+1 actions on band b: one action for connecting with each of the c nodes via b, and one action for reprobing (if none of the c nodes is suitable). We use s^(n_i,b) to denote the state that the frontier node of the partially established flow is node n_i and that the transmission to the next hop uses the band b. We use a^(n_i→ n_j,b) to denote the agent's action to establish the link from node n_i to node n_j using band b, which is assigned a reward equal to the rate of this link, r_(n_i→ n_j,b). During training, these rewards (link rates) are computed after the routes are formed.
As the bottleneck rate is not expressible as a summation, <cit.> uses the Monte-Carlo method <cit.> for estimating the value function. The key improvement we propose over <cit.> is to utilize the modified Bellman update rule for training the agents in an off-policy fashion, providing higher data efficiency, faster convergence, and better performance. Using <ref>, the generalized updates for training each agent are:
Q(s^(n_i,b), a^(n_i→ n_j,b)) ← min(r_(n_i→ n_j,b), γmax_n_k,b'Q(s^(n_j,b'), a^(n_j→ n_k,b'))).
After predicting Q values for all frequency bands, the agent selects the action with the single highest Q value among all bands to establish the new link, which specifies not only the optimal node as the next hop, but also the optimal frequency band for transmission to that node.
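In a deep Q-learning implementation of this update, the targets can be computed batch-wise by replacing the usual r + γ max Q' target with a minimum; below is a minimal numpy sketch in which the array shapes and variable names are illustrative assumptions.

import numpy as np

def bottleneck_targets(rewards, q_next, gamma=1.0, done=None):
    """Batched targets min(r, gamma * max_a' Q(s', a')); rewards: (B,), q_next: (B, A)."""
    bootstrap = gamma * q_next.max(axis=1)
    if done is not None:
        bootstrap = np.where(done, rewards, bootstrap)   # terminal hops use the reward itself
    return np.minimum(rewards, bootstrap)

print(bottleneck_targets(np.array([3.1, 0.7]), np.array([[2.0, 4.5], [1.2, 0.4]])))
# -> [3.1 0.7]  (elementwise min of [3.1, 0.7] and [4.5, 1.2])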
§ SIMULATIONS
We experiment on the optimal control and reinforcement learning problems described above and compare solutions from the conventional Bellman updates and from our proposed generalized Bellman updates. We use the following terms to refer to each algorithm:
* Q-Min: Optimal control solution or RL policy based on the value function obtained from the generalized Bellman update rule as <ref> and <ref>.
* Q-Sum: Optimal control solution or RL policy based on the value function obtained from the conventional Bellman update rule as <ref>.
§.§ Classical Reinforcement Learning Problems
We use the double-DQN architecture <cit.> to model the agents. During training, an ϵ-greedy policy with decaying ϵ is used for collecting experiences, along with prioritized experience replay <cit.> for sampling training batches in each update step.
§.§.§ CartPole Task
To solve the CartPole task with the Q-Min algorithm, when the pole falls outside of the pre-defined angle range (±12^∘ from the up-right position), we assign a negative reward of -1 to the agent. To encourage the agent to postpone negative reward occurrence, we use a discount factor γ=0.95 in <ref>. For learning with the Q-Sum algorithm, we follow the conventional incremental rewarding scheme that has been long used in this task.
We illustrate the agent learning progress under both algorithms in <ref>, where we evaluate each agent's performance, averaged over 25 new episodes, after every 12500 update steps of training. We note that we stick with the conventional cumulative objective for CartPole as the performance metric when visualizing the learning progress of both algorithms in <ref>, which allows us to compare the two algorithms directly. A competitive performance by the Q-Min algorithm on the cumulative objective would indicate that our alternative bottleneck objective is also viable for formulating the task.
As shown by the numerical results, besides the oscillations in both learning curves (as DQN is known for unstable learning), the Q-Min agent and the Q-Sum agent learn to balance the pole at a similar pace throughout training. The close results between the two algorithms validate that the bottleneck objective is indeed a suitable alternative to the CartPole objective formulation.
§.§.§ Atari Breakout Game
To solve the Atari game Breakout with the proposed Q-Min algorithm, we utilize a simple reward scheme: we assign a negative reward of -1 to the agent each time it fails to catch the ball with the paddle, and set γ=0.98 in <ref> to encourage the agent to postpone such failure events. For learning with the Q-Sum algorithm, we follow the conventional incremental rewarding scheme originally built into the Atari game engine.
We present the learning progress of Q-Min and Q-Sum in <ref>, with each agent's performance evaluated and averaged over 5 new game runs, after every 50 thousand update steps of training. Similar to the learning progress visualization for CartPole, we also use the conventional cumulative objective of the original Breakout game as the performance metric when plotting the learning curves of both algorithms.
Unlike in CartPole, the Q-Min agent shows a slightly slower learning progress and lower performance for Breakout. This is likely due to the Q-Min agent not learning the strictly optimal trajectories of redirecting the ball to hit the most bricks, as its sole objective is to keep the ball in play. Nonetheless, even with simpler and sparser rewards than those used by Q-Sum, the Q-Min agent still manages to achieve performance relatively close to that of the Q-Sum agent, especially at the late training stage. The results illustrate the viability of interpreting Breakout with the bottleneck objective formulation.
We emphasize again that, specifically for these two classical problems, our goal is not to show that the proposed Q-Min algorithm is strictly superior to the conventional Q-Sum algorithm. After all, these two problems have long served as canonical examples of reinforcement learning problems formulated with cumulative objectives. Instead, we have shown that it is also valid to interpret and optimize these classical problems with the bottleneck objective formulation. Through learning with the proposed generalized Bellman update rule, the agent is capable of achieving performance comparable to the results from the conventional reinforcement learning approach as presented above. Essentially, when optimizing under the bottleneck objective formulation, the agent learns an alternative game-playing strategy for both CartPole and Breakout: to avoid or delay the failure event for as long as possible.
§.§ Single-Path Maximum-Flow Routing on Graph
We consider the directed graph network shown in <ref>, and perform the Q-Min algorithm with <ref> until convergence. Since the MDP in this problem is finite, we set γ=1, which simplifies the numerical results: the Q values are precisely equal to the future bottleneck rates.
In Table <ref>, we present the iterations of both the Q-Min algorithm and the Q-Sum algorithm. In the first row of the table, we adopt the simplified notation for Q values: we use Q_n_i→ n_j to uniquely denote the state-action value function Q(s^n_i, a^n_i→ n_j) in <ref>. We adopt synchronized iterations, where in each new iteration, the Q value on the right-hand side of <ref> comes from the previous iteration. All the iterations of value function updates are shown until convergence.
For the Q-Min algorithm, it takes 4 iterations of the generalized Bellman updates to converge. From the resulting Q^*, we deduce the optimal policy π^*, which produces the following optimal flow route:
s→ b→ a→ d→ t .
This route obtained supports a flow rate of 5, which is indeed the global optimal flow rate.
On the other hand, for the Q-Sum algorithm, the convergence speed is lower than Q-Min, as it takes 5 iterations of the regular Bellman updates to converge. Furthermore, the deduced optimal policy results in the following flow route:
s→ b→ a→ c→ d→ t .
This route supports a flow rate of 4, which is sub-optimal and inferior to the route obtained by the Q-Min algorithm.
§.§ Wireless Ad hoc Network Routing and Spectrum Access
§.§.§ Experiment Settings
We simulate wireless ad hoc networks in a 1000m×1000m region with |𝒦|=3 data flows and |ℬ|=8 frequency bands. We adopt the same specifications as in <cit.> as we aim to compare results and illustrate the effectiveness of the proposed Q-Min algorithm. Specifically, we consider the short-range outdoor model ITU-1411 with a distance-dependent path-loss to model all wireless channels, over all frequency bands at 2.4GHz carrier frequency. Shadowing and fast-fading are not considered in the simulation setting. This corresponds to an outdoor environment (e.g., a rural or remote area), where the strengths of the wireless links are mostly functions of the distances between the transmitters and the receivers. We assume each of the |ℬ|=8 frequency bands has a 5MHz bandwidth for signal transmission. All antennas equipped at the transmission nodes have a height of 1.5m and 2.5dBi antenna gain. We assume a transmit power of 30dBm for all nodes and background noise at -130dBm/Hz.
To generate realistic wireless network layouts, the node locations are randomly generated with varying node densities over the region. Specifically, we divide the 1000m×1000m network region into nine equal sub-regions, and randomly locate (6, 8, 7, 6, 5, 10, 8, 9, 6) nodes within each of the nine sub-regions correspondingly.
§.§.§ Training Convergence Speed Comparison
We train each set of |𝒦|=3 agents with three algorithms: Q-Min, Q-Sum, and the algorithm by <cit.>:
* Q-MC: RL policy based on the value function obtained from the Monte-Carlo episodic estimations of future bottleneck rewards, computed at the end of episodes.
We generate 380,000 wireless ad hoc network layouts for training the agents under each algorithm, under the following training schedule:
* Initial 30,000 layouts are used for random routing on collecting initial experience.
* The middle 300,000 layouts are used for ϵ-greedy-policy-based routing, with the ϵ value following linear annealing from 1.0 to 0.1 throughout the training over these layouts.
* The final 50,000 layouts are used with ϵ=0 for the final convergence stage.
We use the Dueling-DQN architecture <cit.> to model all the agents, with the neural network specifications listed in <ref>, same as in <cit.>. Since the rewards as link rates are dense throughout the MDP, uniform sampling is sufficient for experience replay. We use c=10 as the number of neighbors the agent explores each time. The state inputs to the DQNs are therefore 40-component vectors, i.e. s∈ℛ^40,∀ s∈𝒮.
For the same reasons as in <ref>, we set γ=1 in <ref>. During training, we track both the mean-squared-errors for predictions on Q, and the routing performances over 100 newly generated network layouts at each update step of training. The training curves for all three algorithms are displayed in <ref>.
As shown by the learning curves, the conventional Q-Sum agents collectively achieve the worst learning progress and simply fail to converge on the Q value estimations. While both the Q-Min agents and the Q-MC agents converge to comparable performances, the Q-Min agents enjoy a much faster convergence speed. This illustrates the advantage of temporal difference learning over the Monte-Carlo method, which is made possible for non-cumulative objectives by the proposed generalized update rules.
To better understand why Q-Min achieves noticeably faster convergence than Q-MC, we emphasize that the Monte-Carlo estimations used by Q-MC are highly affected by the random explorations, especially at the early stage of training. Certain random explorations might lead to an extremely low bottleneck rate for the newly established route. This bottleneck rate is then used as the Monte-Carlo estimation of the value function when training the Q-MC agent. Thus, the value estimations learned by the Q-MC agents suffer significantly from low quality at the beginning of training.
On the other hand, with the proposed generalized update operation, the bottleneck objective can be estimated by the temporal difference learning technique as in the Q-Min algorithm. As an off-policy[An off-policy algorithm separates the policy that the value estimation is based on from the sampling policy, which is desired when the sampling policy is highly noisy (e.g. with many random explorations).] learning algorithm, the temporal difference learning estimations are much more resilient to the random explorations, since the estimation target is obtained through one-step bootstrapping on the already learned value function. Therefore, it is not a surprise to see the significant improvements on the training efficiency and convergence speed by the Q-Min agents.
§.§.§ Performances on Bottleneck Rates
We present in <ref> test results on the number of links established on each flow, as well as the achieved bottleneck flow rates for these data flows, over 1000 newly generated testing wireless ad hoc networks. Furthermore, for each method, we collect the bottleneck rates of all data flows over all testing wireless networks, and present the cumulative distribution function (CDF) of these bottleneck rates in <ref>. As shown by both the statistics and the distributions of the flow rates, the Q-Min agents achieve the best routing results, whereas the Q-Sum agents perform the worst by a large margin, while having much higher numbers of links over the established data flows.
We visualize the optimized routes by each RL algorithm over a random wireless ad hoc network in <ref>. The Q-Min agents learn to establish links with medium lengths. This policy ensures a certain level of channel strength for the bottleneck links, without constructing too many links to avoid excessive interference which is detrimental to the bottleneck link rates. Furthermore, the Q-Min agents also learn to spatially spread out data flows as well as the frequency bands used among links for effective interference mitigation.
On the other hand, the Q-Sum agents learn the policy that connects unnecessarily many short links to form routes, neglecting the importance of the bottleneck link within each flow. Evidently, the conventional reinforcement learning formulation is unsuitable for solving such routing problems. For this reason, in application fields such as network communications, generalizing the objective function and its learning rule through our proposed approach is an essential optimization technique.
§ CONCLUSION
This paper recognizes the possibilities of formulating optimal control or reinforcement learning objectives as non-cumulative functions over rewards, and generalizes existing algorithms to optimizing such objectives. Specifically, we explore the generalized operations in the Bellman update rule, for which we provide the global convergence conditions with mathematical proofs. We also recognize the assumptions required on the MDP state transitions and reward functions for ensuring the global optimality on the obtained policies. With the generalized objectives and learning algorithms, we are able to unveil alternative strategies to classical optimal control or reinforcement learning problems, and more importantly, realize the possibilities for solving new problems with intrinsically non-cumulative objectives, which are frequently encountered in the fields such as network communications. This opens up directions for a broader range of applications for optimal control and reinforcement learning techniques.
§ PROOF OF <REF>
If g(·,·) satisfies the condition <ref> in <ref>, then the generalized value function update F^* as in <ref> is a contraction mapping.
In the following mathematical expressions, for simplicity of notation, we use p(r_t,s_t+1|s_t,a_t) as the shorthand notation for the joint distribution of p_R_t|S_t,A_t(r_t|s_t,a_t) and p_S_t+1|S_t,A_t(s_t+1|s_t,a_t). We also assume p(r_t,s_t+1|s_t,a_t) is a discrete distribution. For continuous distributions, the proof still holds with summations substituted by integrations when computing the expectations.
For any pair of value functions ∀ Q^1, Q^2 ∈ℛ^|𝒱|, we have:
‖ F^*Q^1-F^*Q^2‖_∞
= max_s_t,a_t| (F^*Q^1)(s_t,a_t)-(F^*Q^2)(s_t,a_t)|
= max_s_t,a_t|∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t) g(r_t, γmax_a_t+1Q^1(s_t+1, a_t+1)) - ∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t) g(r_t, γmax_a_t+1Q^2(s_t+1, a_t+1))|
= max_s_t,a_t|∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t)[ g(r_t, γmax_a_t+1Q^1(s_t+1, a_t+1)) - g(r_t, γmax_a_t+1Q^2(s_t+1, a_t+1)) ]|
≤ max_s_t,a_t∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t)| g(r_t, γmax_a_t+1Q^1(s_t+1, a_t+1)) - g(r_t, γmax_a_t+1Q^2(s_t+1, a_t+1))|
≤ max_s_t,a_t∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t)|γmax_a_t+1Q^1(s_t+1, a_t+1) - γmax_a_t+1Q^2(s_t+1, a_t+1)|
≤ γ max_s_t,a_t∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t) max_a_t+1| Q^1(s_t+1, a_t+1) - Q^2(s_t+1, a_t+1)|
≤ γ max_s_t,a_t∑_r_t,s_t+1 p(r_t,s_t+1|s_t,a_t)‖ Q^1-Q^2‖_∞
≤ γ‖ Q^1-Q^2‖_∞ ,
where (<ref>) follows from the condition <ref> in <ref>; (<ref>) follows from the fact that for any two functions f_1 and f_2, we have |sup_x f_1(x)-sup_x f_2(x)|≤sup_x|f_1(x)-f_2(x)|; and lastly, (<ref>) follows from the normalization of the probability distribution p(r_t,s_t+1|s_t,a_t).
With Lemma <ref> established, we can readily prove the main theorem of the global convergence of the value function updates.
Starting from any arbitrary value function initialization point Q^0∈ℛ^|𝒱|, consider the value iteration process of iteratively applying the mapping F^*. With ℛ^|𝒱| being a Banach space and F^* being a contraction mapping (by Lemma <ref>), according to the Banach's fixed-point theorem <cit.>, the process is guaranteed to converge to a unique convergence point Q^*.
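The contraction property established above can also be probed numerically; the following sketch (a randomly generated deterministic MDP of illustrative size, our addition rather than part of the proof) checks ‖F^*Q^1-F^*Q^2‖_∞ ≤ γ‖Q^1-Q^2‖_∞ for g = min on sampled pairs of value functions.

import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
next_state = rng.integers(0, S, size=(S, A))     # deterministic transition table
reward = rng.uniform(0.0, 1.0, size=(S, A))      # deterministic reward table

def F_star(Q):
    """One sweep of the generalized Bellman operator with g = min on this deterministic MDP."""
    bootstrap = gamma * Q.max(axis=1)[next_state]          # shape (S, A)
    return np.minimum(reward, bootstrap)

for _ in range(1000):
    Q1, Q2 = rng.uniform(-5.0, 5.0, size=(2, S, A))
    assert np.abs(F_star(Q1) - F_star(Q2)).max() <= gamma * np.abs(Q1 - Q2).max() + 1e-12
print("contraction inequality held on all sampled pairs")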
§ PROOF OF <REF>
If g(·,·) satisfies the condition <ref> in <ref>, then the generalized value function update F^* as in <ref> is monotonic, i.e., ∀ Q^1, Q^2∈ℛ^|𝒱|, if Q^1≥ Q^2, then F^*Q^1≥ F^*Q^2 always holds[The notation Q^1≥ Q^2 implies Q^1(s,a)≥ Q^2(s,a), ∀ s, a].
Q^1≥ Q^2
⇒ Q^1(s,a) ≥ Q^2(s,a), ∀ s,a
⇒ max_a Q^1(s,a) ≥ max_a Q^2(s,a), ∀ s
⇒ g(r, γmax_a Q^1(s,a)) ≥ g(r, γmax_a Q^2(s,a)), ∀ s,r
where <ref> follows from the condition <ref> in <ref>.
Now we introduce the time step into the equations, and consider any given state and action pair at time t: s_t and a_t. Let s=s_t+1 in <ref> be the state the agent is in after executing a_t on s_t; and let r=r_t be the reward from executing a_t on s_t. We then have:
Q^1≥ Q^2
⇒ g(r_t, γmax_a Q^1(s_t+1,a)) ≥ g(r_t, γmax_a Q^2(s_t+1,a)), ∀ r_t, s_t+1
⇒ (F^*Q^1)(s_t,a_t) ≥ (F^*Q^2)(s_t,a_t), ∀ s_t, a_t
⇒ F^*Q^1 ≥ F^*Q^2
To show that Q^* is the optimal point, let π_0 be an arbitrary initial policy and Q^0∈ℛ^|𝒱| be the corresponding value function (therefore Q^0=F^π_0Q^0). We have:
Q^0 = F^π_0Q^0 ≤ F^*Q^0
where the inequality follows from the maximization over the actions by the greedy policy mapping F^*. Applying F^* again on both sides of the inequality, and by the monotonicity from Lemma <ref> we have:
Q^0 ≤ F^*Q^0 ≤ (F^*)^2Q^0
where (F^*)^2 denotes iteratively applying the mapping F^* twice. After iteratively applying F^* until convergence, we arrive at a chain of inequalities ending with the unique convergence point:
Q^0 ≤ F^*Q^0 ≤ (F^*)^2Q^0 ≤ (F^*)^3Q^0 ≤ … ≤ lim_n→∞(F^*)^nQ^0 = Q^*
Since Q^0 is an arbitrary value function, we have shown that the unique point of convergence Q^* is indeed the global maximum value function.
Furthermore, given the assumptions that p(r_t|s_t,a_t) and p(s_t+1|s_t,a_t) are deterministic as required by <ref>, we have g(·,·) being exchangeable with 𝔼_p(r_t|s_t,a_t)[·] and 𝔼_p(s_t+1|s_t,a_t)[·]. By bringing the expectations inside the operation g(·,·), we can see that <ref> and <ref> are equivalent. Correspondingly, the converged point of the value iteration is truly the expectation value of the objective function as defined in <ref>. Therefore, the greedy policy π^* derived from the value function Q^* is truly the global optimal policy.
§ PROOF OF CONVERGENCE PROPERTIES ON THE BOTTLENECK UPDATE OPERATION
To show that the bottleneck update operation <ref> satisfies the condition by <ref> in <ref>:
Without loss of generality, we assume b≥ c, then we have:
* If c≤ b≤ a, then:
|g(a,b)-g(a,c)|=|min(a,b)-min(a,c)|=|b-c|.
* If c≤ a<b, then:
|g(a,b)-g(a,c)| = |min(a,b)-min(a,c)| = a-c < |b-c|.
* If a<c≤ b, then:
|g(a,b)-g(a,c)| = |min(a,b)-min(a,c)| = 0 ≤ |b-c|.
To show that the bottleneck operation <ref> satisfies the condition by <ref> in <ref>:
Given b≥ c, we have:
* If a≤ c, then:
g(a,b) = min(a,b)=a=min(a,c) = g(a,c).
* If c< a≤ b, then:
g(a,b) = min(a,b)=a> c=min(a,c) = g(a,c).
* If a>b, then:
g(a,b) = min(a,b)=b≥ c=min(a,c) = g(a,c).
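The same case analysis can be double-checked numerically; a quick sketch (our addition) sampling random triples and testing the non-expansiveness and monotonicity conditions for g = min, where the sampling range is an arbitrary choice:

import random

random.seed(0)
for _ in range(100000):
    a, b, c = (random.uniform(-10.0, 10.0) for _ in range(3))
    # non-expansiveness in the second argument
    assert abs(min(a, b) - min(a, c)) <= abs(b - c) + 1e-12
    # monotonicity in the second argument
    if b >= c:
        assert min(a, b) >= min(a, c)
print("both conditions held on all sampled triples")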
§ PROOF OF CONVERGENCE PROPERTIES ON THE HARMONIC MEAN UPDATE OPERATION
To show that the harmonic mean operation <ref> satisfies the condition by <ref> in <ref>:
With our assumption of positive rewards, we have:
|g(a,b)-g(a,c)| = | 1/(1/a+1/b) - 1/(1/a+1/c) |
= |b-c| / ( (1/a+1/b)(1/a+1/c)bc )
≤ |b-c| / ( (1/b)(1/c)bc )
= |b-c|
To show that the harmonic mean operation <ref> satisfies the condition by <ref> in <ref>:
Given b≥ c. With our assumption of positive rewards, we have:
g(a,b)-g(a,c) = 1/(1/a+1/b) - 1/(1/a+1/c)
= (b-c) / ( (1/a+1/b)(1/a+1/c)bc )
≥ 0
Wei Cui (S'17) received the B.A.Sc. degree in Engineering Science, the M.A.Sc. degree in Electrical and Computer Engineering, and the Ph.D. degree in Electrical and Computer Engineering from University of Toronto, Toronto, ON, Canada, in 2017, 2019, and 2023, respectively.
His research interests include machine learning, optimization, and wireless communication.
Wei Yu (Fellow, IEEE) received the B.A.Sc. degree in computer engineering and mathematics from the University of Waterloo, Waterloo, ON, Canada, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA. He is now a Professor in the Electrical and Computer Engineering Department at the University of Toronto, Toronto, ON, Canada, where he holds a Canada Research Chair (Tier 1) in Information Theory and Wireless Communications. He is a Fellow of the Canadian Academy of Engineering and a member of the College of New Scholars, Artists, and Scientists of the Royal Society of Canada. Prof. Wei Yu was the President of the IEEE Information Theory Society in 2021, and has served on its Board of Governors since 2015. He served as the Chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society from 2017 to 2018. He was an IEEE Communications Society Distinguished Lecturer from 2015 to 2016. He served as an Area Editor of the IEEE Transactions on Wireless Communications, as an Associate Editor for IEEE Transactions on Information Theory, and as an Editor for the IEEE Transactions on Communications and IEEE Transactions on Wireless Communications. Prof. Wei Yu received the Steacie Memorial Fellowship in 2015, the IEEE Marconi Prize Paper Award in Wireless Communications in 2019, the IEEE Communications Society Award for Advances in Communication in 2019, the IEEE Signal Processing Society Best Paper Award in 2008, 2017 and 2021, the Journal of Communications and Networks Best Paper Award in 2017, and the IEEE Communications Society Best Tutorial Paper Award in 2015.
|
http://arxiv.org/abs/2307.07636v1 | 20230714212700 | Dissenting Explanations: Leveraging Disagreement to Reduce Model Overreliance | ["Omer Reingold", "Judy Hanwen Shen", "Aditi Talati"] | cs.AI | ["cs.AI", "68", "I.2"] |
Dissenting Explanations: Leveraging Disagreement to Reduce Model Overreliance
Omer Reingold, Judy Hanwen Shen, Aditi Talati
Department of Computer Science, Stanford University, Palo Alto, USA
Correspondence to: Judy Hanwen Shen <[email protected]>
While explainability is a desirable characteristic of increasingly complex black-box models, modern explanation methods have been shown to be inconsistent and contradictory. The semantics of explanations is not always fully understood – to what extent do explanations “explain” a decision and to what extent do they merely advocate for a decision? Can we help humans gain insights from explanations accompanying correct predictions and not over-rely on incorrect predictions advocated for by explanations? With this perspective in mind, we introduce the notion of dissenting explanations: conflicting predictions with accompanying explanations. We first explore the advantage of dissenting explanations in the setting of model multiplicity, where multiple models with similar performance may have different predictions. In such cases, providing dissenting explanations could be done by invoking the explanations of disagreeing models. Through a pilot study, we demonstrate that dissenting explanations reduce overreliance on model predictions, without reducing overall accuracy. Motivated by the utility of dissenting explanations we present both global and local methods for their generation.
§ INTRODUCTION
The development of increasingly capable AI systems has motivated many fields to consider AI-assisted decision-making. In high-stakes settings such as loan approval and patient diagnosis, it is imperative for humans to understand how any given model came to its decision. However, with the success of deep learning, many large state-of-the-art models are not easily interpretable. Thus, explainability (XAI) methods are crucial for providing justification for the decisions of black-box models. Such explanations justify a model's prediction on a singular input example, and their goal is to provide accurate information while also being succinct and easy for humans to parse <cit.>. In fact, a recent study found that explanations can help improve human performance and even reduce overreliance on AI on specific tasks such as solving a difficult maze <cit.>.
While explanations can serve as verification for certain tasks like maze completion, many predictive tasks are not verifiable in nature (e.g. predicting the probability of a loan default). In these cases, many different explanations can be used to explain a decision. In fact, recent works have shown that explanations generated from different methods based on the same instance can conflict <cit.>. Furthermore, different AI models with similar performances may vastly differ in predictions as well as explanations <cit.>. Instead of rejecting explanations altogether, the existence of multiple plausible explanations motivates the perspective that explanations can be treated as arguments supporting a given model prediction, rather than a verifiable proof for a given prediction.
With the framework of explanations as arguments, we may naturally construct a courtroom analogy, in which human decision makers are the judges deciding whether the model prediction is trustworthy. When a singular explanation is provided, a decision-maker may be unduly influenced to trust the prediction. Indeed, <cit.> show that when explanations are provided, humans are more likely to follow a model decision regardless of whether the model is correct. Thus, while an explanation provides a supporting argument for a prediction, we must also provide alternative arguments, arguing against the model prediction, in order to accommodate meticulous human decision-making. In the context of a consequential legal decision, presenting both sides amounts to procedural due process.
In this paper, we introduce the notion of dissenting explanations: explanations for an opposing model prediction to some reference model. To illustrate the importance of these explanations, we study existing model disagreement on a deceptive hotel reviews classification task. We perform a pilot study to show that, on this difficult-to-verify task, dissenting explanations indeed reduce model overreliance without reducing the accuracy of the human predictions. Finally, since dissenting explanations are a useful tool for reducing overreliance, even outside the context of existing model multiplicity, we develop methods to induce predictive multiplicity and create dissenting explanations. We present techniques for generating global disagreement with respect to any black-box model, as well as local disagreement on any instance; these methods achieve disagreement without sacrificing model accuracy.
§ RELATED WORK
One model, multiple explanations
Post-hoc explanations can be elicited from black box models through a variety of techniques including perturbation-based methods (e.g. LIME and SHAP <cit.>) and gradient-based methods (e.g. GradCAM and SmoothGrad <cit.>).
However, when applying such techniques to the same example, inconsistent and conflicting explanations for feature importance may arise. Surveying data scientists, <cit.> found that disagreements in explanations occur when the top features or the ordering of features differ; they developed explanation disagreement metrics and found that more complex models exhibit higher disagreement.
Similar models, conflicting explanations
For models to give trustworthy predictions, humans may expect stable predictions with explanations across similarly accurate models. However, models may have similar accuracy but different predictions, different model internals, and different decision-making processes (i.e., predictive multiplicity). <cit.> show that models similar in performance can yield vastly different explanations. Seemingly trivial choices in model architectures, random seeds, and hyperparameters may lead to inconsistent and contradicting explanations.
Overreliance and human-AI collaboration
Among tasks where neither humans nor AI routinely achieves perfect performance, <cit.> use AI predictions and explanations to help human participants detect deceptive hotel reviews and find that human performance improved when AI predictions were accompanied by explanations. <cit.> also find that explanations actually reduce overreliance in their set of maze task experiments. In contrast, <cit.> study common sense tasks, including review sentiment classification and LSAT question answering, and find that explanations increased accuracy when the AI model was correct but decreased accuracy when the AI model was wrong. However, they do observe that highlighting the features for the top two classes when presenting a single model prediction reduced overreliance on the AI recommendation.
Leveraging model and explanation multiplicity, we investigate the effect of also showing the explanation of a dissenting model in reducing overreliance. Specifically, we are motivated by settings where AI surpasses human performance, but human decision-makers may need to make the final decision <cit.>. In these settings, the goal is to provide AI predictions with explanations to humans as a tool rather than removing humans from decision-making altogether. Crucially, our setting differs from <cit.> in that we examine differing independent predictions and accompanying explanations from different models in the Rashomon set <cit.> with the goal of improving human decision-making.
§ MODEL AND FRAMEWORK
We define dissenting explanations in the situation where we have model multiplicity. Let f, g : 𝒳→𝒴 be two different functions trained on the same data x, y∼𝒟; these functions do not have to belong to the same hypothesis class. We look at the specific case of binary classification (y ∈{0, 1}), but much of this work can also be extended to general classification tasks.
Then, let e(x, f) be an explanation for the model's prediction f(x). The shape of e depends on the type of explanation being used, and any of the standard explanation methods will produce a valid function e. Based on these definitions, we introduce the concept of a dissenting explanation as an explanation of the prediction of a disagreeing model:
Let f, g be any two different classifiers and let (x, y) ∼𝒟 be any example. Then, e(x, g) is a dissenting explanation for e(x, f) if f(x) ≠ g(x).
Dissenting explanations offer an argument for a contradictory prediction; each disagreeing model can produce its own dissenting explanation. Furthermore, dissenting explanations are explanation-method agnostic. In the more general setting of multi-class classification, the explanation e(x, g) is a dissenting explanation for e(x, f) as long as g predicts a label different from f(x).
Since disagreeing predictions are necessary for dissenting explanations, measuring how many predictions f and g disagree on gives an indication of how many dissenting explanations can be generated between two models.
Let f, g be any two different classifiers, the global disagreement between f and g on some set D is:
δ_D(f, g) = 1/|D|∑_x ∈ D1[f(x) ≠ g(x)]
Let err_D(f) = 1/|D|∑_(x, y) ∈ D1[f(x) ≠ y] be the empirical error of a classifier. For two classifiers f and g where err_D(f), err_D(g) ∈ [0, 1]:
δ_D(f, g) ≤ err_D(f) + err_D(g)
This can be seen by considering that disagreement is maximized when f and g make mistakes on disjoint sets.
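As a minimal illustration of these quantities, the following Python sketch computes the global disagreement and the empirical error from arrays of predictions and labels; the function and variable names are ours, not from the paper.

```python
import numpy as np

def global_disagreement(f_preds: np.ndarray, g_preds: np.ndarray) -> float:
    """Fraction of examples on which the two classifiers disagree (delta_D)."""
    return float(np.mean(f_preds != g_preds))

def empirical_error(preds: np.ndarray, labels: np.ndarray) -> float:
    """Empirical error err_D of a classifier on a labelled set D."""
    return float(np.mean(preds != labels))

# The bound delta_D(f, g) <= err_D(f) + err_D(g) follows because any disagreement
# requires at least one of the two models to be wrong on that example.
```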
While disagreement can be maximized by finding a model that predicts differently on every example, we focus on the setting where models are similarly accurate. This set of models that are similarly accurate on some dataset is also described as the Rashomon set <cit.>.
Following prior work studying the overreliance of humans on AI predictions <cit.>, we define overreliance as how much human decisions mirror AI suggestions when the AI is incorrect: 𝔼[h(x) = f(x) | f(x) ≠ y] where h represents the human decision.
For the purposes of our experiments, we let e(x, f) ∈ℝ^d be a feature attribution explanation of f on x. A feature attribution explainer generates a linear “surrogate model” that approximates f in a neighborhood of x. If the weights of the linear surrogate model are w_i, then e(x, f) returns the most important features x_i, corresponding to the d largest values of |w_ix_i|. In our experiments, these feature attribution explanations are generated by LIME TextExplainer <cit.>.
For a feature attribution explanation, we say that e(x, f)_+ ∈ℝ^p are the set of features supporting the prediction f(x)=1 while e(x, f)_- ∈ℝ^n are the set of features that support the prediction f(x)=0 where p+n=d.
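To make the feature-attribution notation concrete, the sketch below uses the LIME text explainer to obtain per-word weights and splits them into the positive and negative supporting sets e(x, f)_+ and e(x, f)_-; the predict_proba argument (a function mapping a list of texts to class probabilities) and the helper names are illustrative assumptions, not the paper's code.

```python
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["deceptive", "real"])

def feature_attribution(text, predict_proba, d=15):
    """Return the top-d (word, weight) pairs of the surrogate model, split into
    features supporting the positive prediction (weight > 0) and features
    supporting the negative prediction (weight < 0)."""
    exp = explainer.explain_instance(text, predict_proba, num_features=d)
    pairs = exp.as_list()                              # [(word, weight), ...]
    positive = [(w, v) for w, v in pairs if v > 0]     # e(x, f)_+
    negative = [(w, v) for w, v in pairs if v < 0]     # e(x, f)_-
    return positive, negative
```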
§ MOTIVATING STUDY: THE IMPORTANCE OF DISSENTING EXPLANATIONS
§.§ Hypothesis
Motivated by the potential of dissenting explanations to present an alternative argument against a model prediction, we seek to understand whether dissenting explanations can be helpful in reducing human overreliance on model predictions. To this end, we propose two hypotheses:
Hypothesis 1 (H1): Providing users with a singular explanation for an incorrect AI prediction increases human agreement with the incorrect prediction.
Hypothesis 2 (H2): Providing users with a dissenting explanation, arguing against the AI prediction, along with the explanation, will decrease human over-reliance without significantly decreasing human accuracy, as compared to providing a single AI prediction and explanation.
The purpose of the first hypothesis was to provide a baseline for how explanations affect human decisions, while the second hypothesis tests the value of dissenting explanations.
§.§ Study design
Task selection We focus on the setting of assistive AI: the setting where AI on average might perform better than humans but it is critical for humans to be the final decision maker. This is different from prior works, which focused on tasks either with verifiable answers given the explanation <cit.> or tasks where humans and AI perform approximately equally in order to measure collaboration potential <cit.>. Furthermore, we specifically consider explanations that are not verifiable proofs of the correct label but rather arguments for the model predictions, as these are the standard explanation forms available for complex model predictions <cit.>.
Thus, we had the following criteria in selecting a task: (1) The human accuracy for the task must be less than the model accuracy. (2) There must be room for model disagreement on the task; the AI model should not perform the task perfectly. (3) There must be an objective correct label for the examples. (4) AI explanations must be understandable for the participants, without providing complete proof of the correct answer.
Deceptive reviews task
Based on our requirements, we decided to use the Chicago Deceptive Reviews dataset <cit.>. This is a dataset of 1600 one-paragraph reviews of hotels in Chicago, where half the reviews are genuine reviews from TripAdvisor, and the other half were written by crowd workers that have only seen the name and website of the hotel. The goal of the task is to distinguish between real and deceptive reviews; a prior study found that humans on their own get at most 62% accuracy on this task, while a linear SVM achieved around 87% accuracy <cit.>. Furthermore, there exists a ground truth label: whether a review is deceptive or real. The explanations were in the form of highlighting the words selected by the feature attribution explainer; these words serve as an argument to the participant, convincing them to select a certain label without giving a complete proof of the correct answer.
To test our hypothesis, we design a study in which human participants attempt to categorize these hotel reviews. Participants are presented with 20 hotel reviews, each of which is real or deceptive, and are instructed to decide which reviews are real. They are assisted by AI predictions or explanations, where the existence or type of explanation varies based on the condition participants are assigned to. Participants are warned in the beginning that the AI predictions are not always correct, and they are also given a set of heuristics for identifying deceptive reviews, developed by prior work on this task <cit.>. We also survey users, post-task, about how difficult they found the task, how effective the AI suggestions were in helping categorize the reviews, and how much they trusted the AI suggestions. Finally, there is an optional open-ended question about how the AI suggestions helped them complete the task. These questions allow us more insight into whether people felt they trusted the model.
Generating explanations To properly benchmark against prior work <cit.>, we use the same linear SVM trained on TF-IDF of unigrams with English stop words removed as our reference model f with 87% accuracy. We also train an alternative 3-layer neural network model based on the exact same pre-processing, which achieves 79% accuracy, as the alternative model g. We use LIME <cit.> to generate local explanations for each model using the Top-15 features. To double-check the quality of the LIME features, we compared the features to the weights of the linear SVM model and found meaningful overlap between the top features. In order to find dissenting explanations, we used examples in the test set where the neural network model disagreed with the linear SVM model. The two models disagreed on about 10% of examples in the test set, which yielded 32 examples. We sub-sampled these examples for an even balance of examples that the linear SVM (the reference model) predicted correctly and incorrectly.
Conditions
Each participant was presented with the same 20 reviews, along with the same 20 model predictions. The reference model f predicted the incorrect label on 8 of the 20 reviews. We randomly assigned each participant to one of the following four conditions. Participants are not aware of the other possible conditions for the study.
* C_0: Participants were presented with the AI prediction for each review, without any explanation.
* C_1: Participants were presented with the AI prediction for each review f(x), along with a supporting explanation e(x, f)_f(x). This means either the positive LIME features were highlighted in orange, if the model predicted “real," or the negative LIME features were highlighted in blue, if the model predicted “deceptive".
* C_2: Participants were presented with both the explanation and the dissenting explanation. They received the same explanation as in C_1, followed by the line “Another model predicts that this review is [real/deceptive]” and the corresponding explanation for the dissenting model.
* C_3: Participants were presented with an explanation that more closely matched the original LIME output, which includes both positive and negative features. Each explanation started with the line “The model predicts that this review is [real/deceptive]. It thinks the words in orange are evidence the review is real, while the words in blue are evidence it is deceptive." This was followed with the corresponding highlighted text.
The four conditions can be seen in Figure <ref>. We provided participants with training before the task began that was specifically tailored to the condition that each participant was assigned to. All other aspects of the survey, such as the format, the reviews, the predictions themselves, and the post-survey questions, were kept constant across all four conditions. The main purpose of our study was to compare C_2 to C_1. C_0 was our control condition and C_3 was included so we could compare the effect of dissenting explanations to the pre-existing feature attribution, which contains both positive and negative features.
Participants
These surveys were posted on Prolific and made available to all fluent English speakers that have at least a 95% approval rate on Prolific and have not answered any previous surveys we have posted. Participants were given training examples at the beginning of the survey. Participants were compensated $3.50 USD for participating in the task and given an additional bonus of $1.00 USD if they answered more than half the questions correctly. For the average completion time of ∼ 15 minutes, this translates to a $18 USD hourly rate. Three attention check questions were included in the study where participants were told explicitly to select a certain answer. We excluded answers from participants who failed more than one attention check but still compensated these participants. After excluding the failed attention checks, there were N = 178 submissions in our analysis, with approximately 45 submissions per condition[Our study obtained IRB exemption approval (Stanford IRB-70387)]. Our sample size was calculated based on pilot studies[Pre-registration: <https://osf.io/hrv5m/>].
§.§ Results
We measure average accuracy on this task for each condition as an indicator for how well users learn from different types of explanations. Moreover, we measure overreliance on the model when the predictions were incorrect to understand how the different types of explanations affect overreliance on model predictions.
Quantitative findings
For each participant in the study, we measured their accuracy as the fraction of reviews they categorized correctly, out of the 20 total reviews. We measured overreliance as the fraction of reviews they agreed with the model prediction on, out of the 8 reviews the model predicted incorrectly. These results, averaged over each of the four conditions, are displayed in Figure <ref> and Figure <ref>. Since our task involves binary labels, we account for random agreement by also measuring Cohen's κ between a participant and the model's predictions <cit.>.
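The per-participant quantities reported here can be computed with a few lines of Python; the sketch below assumes 0/1 label arrays for the human answers, the model predictions, and the ground truth, and uses scikit-learn's Cohen's κ implementation (the variable names are ours).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def study_metrics(human, model, truth):
    """Accuracy, overreliance, and chance-corrected human-model agreement."""
    human, model, truth = map(np.asarray, (human, model, truth))
    accuracy = float(np.mean(human == truth))
    wrong = model != truth                            # the reviews the model got wrong
    overreliance = float(np.mean(human[wrong] == model[wrong]))
    kappa = cohen_kappa_score(human, model)           # Cohen's kappa
    return accuracy, overreliance, kappa
```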
Using a one-way ANOVA test, we find that accuracy does not differ across conditions (p = 0.850), but overreliance and Cohen's κ scores do differ across conditions (p = 0.007 and p = 0.0001, respectively). We then perform one-tailed t-tests between conditions to test our specific hypotheses.
To analyze H1, we compared Cohen's κ scores over the 8 questions that the model predicted incorrectly, between C_0 and C_1. Our 1-tailed t-test did not give statistically significant results (p > 0.05). We also did not find significantly larger overreliance (p>0.05).
We find that our results support our main hypothesis (H2): providing participants with both a supporting and dissenting explanation (C_2) significantly reduces overreliance as compared to just a single explanation (C_1) (Figure <ref>, p=0.001). Moreover, the dissenting explanation does not significantly reduce accuracy (p=0.210). With one explanation (in condition C_1), participants get an average accuracy score of 0.593 ± 0.014, but an overreliance of 0.606 ± 0.023. Meanwhile, when provided with both a supporting and dissenting explanation (condition C_2), participants get an average accuracy of 0.576 ± 0.016, with an overreliance of 0.491 ± 0.029. We also observe that for human-model agreement, as measured by Cohen's κ, dissenting explanations in condition C_2 give a significantly lower agreement with model predictions than just a single explanation C_1 (p = 7×10^-5 for all questions). Our results suggest providing dissenting explanations is a useful way to reduce overreliance in situations where it is unclear whether the model prediction is accurate.
Moreover, participants in C_3, who saw both the positive and negative features from a singular model explanation, had an average overreliance score of 0.595 ± 0.032. Thus, in our experiment, the method of dissenting explanations (C_2) produced lower overreliance as compared to C_3 (p=0.009). This shows that dissenting explanations provide a benefit beyond what is provided by existing explanation methods, and there is a significant difference in how humans react to positive evidence from one model and negative evidence from another, as opposed to positive and negative evidence from a singular model in this deception labeling task.
Qualitative analysis
Participants were asked to report their trust in the AI predictions, on a 5-point scale from “not at all” to “a great deal”. The reported trust matched the trend of the overreliance scores across the 4 conditions, where the average reported trust in the model predictions was lowest in the dissenting explanations condition, and higher in the other 3 conditions (Figure <ref>). This was reflected in participant comments; one comment for C_0 was “If i was on the fence on wheter it was fake or not i tried to listen to the AI suggestion", and many others had a similar sentiment. For condition C_2, there were many comments saying they distrusted the AI suggestions, and a few saying that they followed the suggestion with the more-highlighted paragraph. Similarly, in C_3, there were many comments such as “It helped me to easily identify the ammount of key words of each type.” Thus, the participants' beliefs about the study generally reflected the quantitative results we found for each of the explanation conditions.
§ FINDING DISSENTING EXPLANATIONS
Motivated by the potential of dissenting explanations for reducing overreliance on model predictions, we present methods for producing disagreement in models. While prior works have focused on predictive multiplicity, a clear mapping between predictive multiplicity and explanation multiplicity has not been presented. Furthermore, previous techniques for maximizing predictive multiplicity through mixed integer programming are limited to linear models <cit.>. In this section, we present and compare methods for increasing predictive multiplicity through the lens of explanations.
§.§ Global model disagreement: a model agnostic approach
We consider the setting where we have access to a reference model f and the training set. Our goal is to train a model g which will disagree with f as much as possible on a subsequent test set.
Given reference model f and training data D, find some g such that δ_D(f, g) (Definition <ref>) is maximized while err_D_test(f) ≈ err_D_test(g).
Regularization (Reg) First, we consider a regularization approach to penalize similarities between a fixed reference model f predictions and the current model g. Specifically, one empirical loss we can minimize is:
L(x, y, f) = 1/n∑_i=1^nl(g(x_i), y_i) + λ/n∑_i=1^n1[f(x_i) ≠ g(x_i)]
However, since the indicator function is not continuous and non-differentiable, we modify the objective to be:
L(x, y, f) = 1/n∑_i=1^nl(g(x_i), y_i) + λ/n∑_i=1^nl(g(x_i), f(x_i))
We consider the binary classification setting and set l to be the binary cross entropy loss and use the inverse predictions of f to maximize disagreement between f and g.
Reweighting (Weights)
Leveraging intuition from boosting, another approach to learning a maximally differing classifier is to upweight examples to our reference predictor gets wrong. Our approach differs from traditional boosting in that we are comparing explanations between resulting models instead of combining model outputs for a single prediction. Formally, the reweighting objective is as follows:
L(x, y, f) = 1/n∑_i=1^nw_il(g(x_i), y_i)
w_i = 1 + λ1[f(x_i) ≠ y_i]
When l(x, y) = 1[x ≠ y], in the binary setting this reweighting objective is equivalent to the above regularization objective.
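A minimal PyTorch sketch of both objectives is given below; following the text, l is the binary cross entropy, and the Reg disagreement term is implemented with the inverse of the reference predictions. We assume y and f_preds are float tensors of zeros and ones and that g_logits are the raw outputs of g; the exact handling in the paper may differ.

```python
import torch.nn.functional as F

def reg_objective(g_logits, y, f_preds, lam):
    """Fit to the labels plus a differentiable disagreement term that pushes g
    towards the inverse of the (fixed) reference predictions f."""
    fit = F.binary_cross_entropy_with_logits(g_logits, y)
    disagree = F.binary_cross_entropy_with_logits(g_logits, 1.0 - f_preds)
    return fit + lam * disagree

def weights_objective(g_logits, y, f_preds, lam):
    """Reweighting objective: examples the reference model got wrong are
    upweighted by a factor (1 + lam)."""
    w = 1.0 + lam * (f_preds != y).float()
    per_example = F.binary_cross_entropy_with_logits(g_logits, y, reduction="none")
    return (w * per_example).mean()
```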
Experiment results First, we compare predictive multiplicity induced by both methods on the deceptive reviews dataset and use the same reference model f, a linear-SVM, from our human-centered studies. For all experiments in this section, we train a neural network g with a single hidden layer with the same features as the reference model f. The results presented are averaged over 5 different random seeds. Table <ref> summarizes the overall model accuracy, the percentage of examples f and g disagreed on, and the percentage of examples that were incorrectly predicted by f but rectified by g. All of these metrics are computed over a held-out test set. As λ increases, the number of conflicting prediction examples also increases[Training with the Reg objective using larger λ (e.g. λ≥ 1) resulted in instabilities for a variety of hyperparameters.]. However, this effect might be due to g simply getting more examples wrong. Thus, it is important to measure the number of f's incorrect predictions that are corrected by g. Of the total 38 examples in the test set that f predicts incorrectly, the percentage of corrected examples reduces slightly as disagreement increases.
Table <ref> summarizes the effectiveness of using the Weights objective in creating model predictive disagreement. Disagreement is achieved without as much sacrifice in overall accuracy. Furthermore, both the percentage disagreement and corrected samples are high at larger λ values. To further compare the two approaches, Figure <ref> shows a Pareto plot of accuracy vs disagreement. We see that for both methods, as λ increases, accuracy and agreement decrease. Furthermore, comparing across 10 models trained to disagree with the reference model, there were about 28% of test set examples where at least 1 model disagreed with the reference model.
Explanation disagreement
For comparing dissenting explanations to the original explanation, we use a similar set of metrics as explanation agreement <cit.>. There are two main cases to consider: when the two models agree and when the two models disagree. When the models agree, <cit.> present several metrics to measure agreement between explanations including top k features and rank correlation. We consider three agreement metrics:
TopK = |top_k(e(x, f)) ∩ top_k(e(x, g))|/|top_k(e(x, f)) ∪ top_k(e(x, g))|
TopKPos = |top_k(e(x, f))_+ ∩ top_k(e(x, g))_+|/|top_k(e(x, f))_+ ∪ top_k(e(x, g))_+|
TopKNeg is just TopKPos with negative prediction features instead of positive. To measure explanation agreement, we evaluate models at different λ for both Reg (Figure <ref>) and Weights (Figure <ref>). For all three metrics, as λ increases, the explanation agreement also reduces. Although these results are unsurprising, they are compelling in illustrating that creating predictive multiplicity also in turn produces explanation multiplicity. Moreover, since a good portion of examples that the reference model classified incorrectly were rectified in our alternative models, this explanation multiplicity allows dissenting explanations to aid human judgment and reduce overreliance.
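The agreement metrics can be computed directly on sets of feature identifiers; the helpers below are a small Python sketch (our own naming) in which explanations are represented as dictionaries mapping features to attribution weights.

```python
def topk_agreement(feats_f, feats_g):
    """Overlap of two top-k feature sets (TopK)."""
    a, b = set(feats_f), set(feats_g)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def signed_topk_agreement(exp_f, exp_g, sign=+1):
    """TopKPos (sign=+1) or TopKNeg (sign=-1): overlap restricted to features
    whose weight supports the positive (negative) prediction.
    exp_f, exp_g: dicts mapping top-k features to attribution weights."""
    a = {k for k, v in exp_f.items() if sign * v > 0}
    b = {k for k, v in exp_g.items() if sign * v > 0}
    return topk_agreement(a, b)
```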
§.§ Local model disagreement: generating a dissenting explanation for any input
While the techniques presented above increase model disagreement on the test data using only the training data, the resulting coverage spans fewer than 30% of test points for our dataset. We now consider an alternative problem formulation where the test instance for which we want to achieve a different prediction is given. This allows us to produce a dissenting explanation for any input example in the reference model.
Given reference model f, training data D, and a test instance x, find some g such that f(x) ≠ g(x) while err_D_test(f) ≈ err_D_test(g).
Unlike global disagreement, we know the exact test instance for which we want a different (flipped in the binary case) prediction. For different model classes f, we present different methods for this problem. For models where parameters are directly solved for, we propose adding the test instance to a subset of the training data. Intuitively, as the training data size decreases, the influence of the test instance should increase, thus increasing the success rate of generating a different prediction for x. Table <ref> shows this intuition for a Linear SVM model (based on the same features and hotel reviews dataset). The success rate, calculated over all the examples in the test set, increases as the total size of the dataset decreases, showing that Problem <ref> can be solved with high probability (i.e. >90%) without sacrificing significant model accuracy.
For models that are fitted with stochastic optimization (e.g. neural networks), we can directly minimize over just the test instance and measure how many iterations are required to change the label. Table <ref> describes the distribution over iterations required to flip an example label. The difficulty varies depending on the example and accuracy on the other test examples declines with more iterations. However, this method is also effective in flipping the label for roughly 80% of the test set examples while still maintaining ∼ 88% accuracy.
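For the solved-parameter case, the following scikit-learn sketch refits a linear SVM on a random training subset augmented with the test instance; we assume the instance is added with the label opposite to the reference prediction (the text only states that the instance is added), and the names and dense-array handling are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def local_disagreement_svm(X_train, y_train, x_test, f_pred, subset_size, seed=0):
    """Refit g on a random subset plus the test instance labelled 1 - f_pred.
    Success means g.predict(x_test) differs from the reference prediction."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_train), size=subset_size, replace=False)
    X_sub = np.vstack([X_train[idx], x_test.reshape(1, -1)])
    y_sub = np.append(y_train[idx], 1 - f_pred)
    return LinearSVC().fit(X_sub, y_sub)
```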
§ DISCUSSION
In this work, we take a holistic approach by first motivating the need for dissenting explanations through a human study to measure overreliance. For our deceptive reviews task, a task with a ground truth label but no method for direct verification, we demonstrate the utility of dissenting explanations in reducing overreliance. Our results complement existing work on the benefits of explanations <cit.> by exploring more ambiguous tasks.
After finding that overreliance can be reduced by introducing dissenting explanations which argue against a model prediction and explanation, we then present simple but effective heuristics for eliciting more disagreement between models with only query access to the reference model. We show that generating disagreement in predictions is sufficient for generating different explanations. Our work serves as a first step in presenting the human interaction and computational challenges in treating explanations as arguments for predictions in AI decision-making.
A promising direction of future work is to explore what tasks dissenting explanations best aid and other types of dissenting explanations involving counterfactual explanations. Furthermore, while we presented a preliminary intuition on explanation disagreement from one vs two models, more studies are required to understand whether these results are due to a difference in the selected features between the two conditions, or simply a difference in framing.
§ ACKNOWLEDGMENTS
This research was supported by the Simons collaboration on the theory of algorithmic fairness and the Simons Foundation Investigators Award 689988. Also thanks to Lindsay Popowski and Aspen Hopkins for the insightful discussions on experimental design.
icml2023
|
http://arxiv.org/abs/2307.07289v1 | 20230714120226 | Real-time Graph Building on FPGAs for Machine Learning Trigger Applications in Particle Physics | [
"Marc Neu",
"Juergen Becker",
"Philipp Dorwarth",
"Torben Ferber",
"Lea Reuter",
"Slavomira Stefkova",
"Kai Unger"
] | hep-ex | [
"hep-ex",
"eess.SP"
] |
Article Title]Real-time Graph Building on FPGAs for Machine Learning Trigger Applications in Particle Physics
[1]Marc [email protected]
1]Jürgen [email protected]
2]Philipp [email protected]
2]Torben [email protected]
2]Lea [email protected]
2]Slavomira [email protected]
1]Kai [email protected]
[1]Institut fuer Technik der Informationsverarbeitung (ITIV), Karlsruhe Institute of Technology (KIT), Karlsruhe, 76131, Germany
[2]Institute of Experimental Particle Physics (ETP), Karlsruhe Institute of Technology (KIT), Karlsruhe, 76131, Germany
We present a design methodology that enables the semi-automatic generation of hardware-accelerated graph building architectures for locally constrained graphs based on formally described detector definitions.
In addition, we define a similarity measure in order to compare our locally constrained graph building approaches with commonly used k-nearest neighbour building approaches.
To demonstrate the feasibility of our solution for particle physics applications, we implemented a real-time graph building approach in a case study for the Belle II central drift chamber using Field-Programmable Gate Arrays (FPGAs).
Our presented solution adheres to all throughput and latency constraints currently present in the hardware-based trigger of the Belle II experiment.
We achieve constant time complexity at the expense of linear space complexity and thus prove that our automated methodology generates online graph building designs suitable for a wide range of particle physics applications.
By enabling hardware-accelerated pre-processing of graphs, we enable the deployment of novel Graph Neural Networks (GNNs) in first level triggers of particle physics experiments.
[
[
August 12, 2023
===================
§ INTRODUCTION
Machine Learning is widely used in particle physics for various reconstruction tasks and Graph Neural Networks (GNNs) are recognised as one possible solution for irregular geometries in high energy
physics.
GNNs have proven suitable for jet clustering <cit.>, calorimeter clustering <cit.>, particle track reconstruction <cit.>, particle tagging <cit.> and particle flow reconstruction <cit.>.
However, all applications described above are implemented in an offline environment, relying on high performance computing clusters utilising Central Processing Units (CPUs) and Graphics Processing Units (GPUs) to achieve the required throughput for the analysis of collision events.
Therefore, existing implementations are not suitable for real-time particle tracking and reconstruction in trigger systems of particle detectors.
The realisation of GNNs on FPGAs for particle tracking is an active area of research <cit.>.
Due to latency and throughput constraints, a suitable implementation meeting all requirements imposed by particle physics experiments is yet to be developed.
Especially the generation of input graphs under latency constraints is a challenge that has not received full attention so far in the evaluation of existing prototypes.
Current prototypes as described in <cit.> are trained on preprocessed graph datasets, taking into account geometric properties of detectors.
However, a holistic implementation of GNNs for triggers requires the consideration of the entire data flow chain.
This raises the question on how to build graphs under latency constraints in high-throughput particle physics applications.
In our work, we consider constraints from currently operating first level trigger systems <cit.>: event processing rates in the order of 10-100 MHz and latencies in the order of 1-10 µs render the utilisation of compound platforms based on CPUs and Field Programmable Gate Arrays (FPGAs) used in other research areas infeasible <cit.>.
To overcome the research gap, our work comprises the following contributions:
First, we outline existing nearest neighbour graph-building methods and evaluate their feasibility for trigger applications.
Second, we develop a methodology to transform formal graph-building approaches to hardware accelerated processing elements in an automated way.
Third, we evaluate our proposed toolchain on the Belle II central drift chamber (CDC), demonstrating the feasibility of our solution to build graphs under the constraints imposed by current trigger systems.
The paper is organised as follows:
In <ref>
we give an overview of related work on FPGA-accelerated graph building.
The CDC, the event simulation and details of the beam background simulation are described in <ref>.
The methodology for transforming discrete sensor signals into a graphical representation is discussed in <ref>.
The procedure for implementing real-time graph building in hardware is described in <ref>.
A concrete example of real-time graph building for the Belle II CDC is provided in <ref>.
We summarise our results in <ref>.
§ RELATED WORK
Previous work on FPGA-accelerated GNNs for particle physics utilise input graphs based on synchronous sampled collision events as input for training and inference of the respective networks <cit.>.
Early studies made use of fully connected graphs which lead to scalability challenges for detectors with more than 10 individual sensors <cit.>.
Typical particle physics trigger systems have a much higher number of sensors though (see <ref>).
Aiming to significantly reduce the maximum size of input graphs, the geometric arrangement of sensors in the detector has been considered recently <cit.>.
Nevertheless, input graphs are currently generated offline, stored in the FPGA memory and are accessed over AXI[AXI: Advanced eXtensible Interface, is an on-chip communication bus protocol.]-Mapped Memory interfaces in prototype implementations <cit.>.
However, as sensors in detectors are read out as individual channels without providing relational information, the processing of input graphs must be considered as part of the critical path in online track reconstruction and trigger algorithms.
While building suitable input graphs for neural networks is a rather recent application, general nearest neighbour (NN) graph building has been studied extensively in literature <cit.>.
In order to reduce the computational demand of NN graph-building algorithms, continuous efforts have been made towards building approximate graphs making use of local sensitive hashing <cit.>, backtracking <cit.>, or small world graphs <cit.>.
Performance improvement from these algorithms have been demonstrated for applications targeting high-dimensional graphs containing more than 10^6 vertices such as database queries <cit.>.
There are two key challenges that limit the generalisation of these techniques in the particle physics trigger context.
First, k-nearest neighbour (kNN) algorithms inherently rely on sequential processing and present challenges in efficient parallelisation.
Second, while there is a wide range of graph-processing frameworks available (see Ref. <cit.> for a survey on graph processing accelerators), none of them meet the stringent latency and throughput requirements of current particle physics trigger systems:
FFNG <cit.> focuses on the domain of high-performance computing and therefore does not impose hard real-time constraints.
GraphGen <cit.> relies on external memory controllers which introduce additional latency into the system.
GraphACT <cit.> utilise preprocessing techniques on CPU-FPGA compound structures in order to optimise throughput and energy efficiency which again introduces non determinism and additional latency.
And lastly, current GNN accelerators like HyGCN <cit.> or AWB-GCN <cit.> use the previously described techniques to reduce the required system bandwidth and improve the energy efficiency of the inference.
They are therefore not suitable for particle physics applications.
§ SIMULATION AND DATASET
In this work, we use simulated events to benchmark the graph-building algorithms.
The detector geometry and interactions of final state particles with the material are simulated using <cit.>, which is combined with the simulation of a detector response in the Belle II Analysis Software Framework <cit.>.
The detector consists of several subdetectors arranged around the beam pipe in a cylindrical structure that is described in detail in Ref. <cit.>.
The solenoid’s central axis is the z-axis of the laboratory frame.
The longitudinal direction, the transverse xy plane with azimuthal angle ϕ, and the polar angle θ are defined with respect to the detector’s solenoidal axis in the direction of the
electron beam.
The CDC consists of 14336 sense wires surrounded by field wires which are arranged in nine so-called superlayers of two types: axial and stereo superlayers.
The stereo superlayers are slightly angled, allowing for 3D reconstruction of the track.
In the simulated events, we only keep the detector response of the CDC.
We simulated two muons (μ^+,μ^-) per event with momentum 0.5 < p < 5 GeV/c, and direction 17^∘ < θ < 150^∘ and 0^∘ < ϕ < 360^∘ drawn randomly from independent uniform distributions in p, θ, and ϕ.
The generated polar angle range corresponds to the full CDC acceptance.
Each of the muons is displaced from the interaction point between 20 cm and 100 cm, where the displacement is drawn randomly from independent uniform distributions.
As part of the simulation, we overlay simulated beam background events corresponding to instantaneous luminosity of ℒ_beam=6.5×10^35 cm^-2s^-1 <cit.>.
The conditions we simulate are similar to the conditions that we expect to occur when the experiment reaches its ultimate design luminosity.
An example of an event display for a physical event e^+e^-→μ^+μ^-(γ) is shown in <ref>.
§ GRAPH BUILDING
This work proposes a methodology for transforming discrete sensor signals captured inside a particle detector into a graphical representation under real-time constraints.
Particular importance is given to the use-case of particle physics trigger algorithms, adhering to tight latency constraints in the sub-microsecond timescale.
Current large-scale particle detectors are composed of various discrete sensors and often, due to technical limitations, placed heterogeneously inside the system.
For this reason, signals from the sensors cannot be considered regularly distributed, as it is the case with, for example, monolithic image sensors.
In the following a detector D is defined as a set of N discrete sensors {s_1, ... , s_N }, where each individual sensor s_i is described by a feature vector of length f.
Examples of such features are the Euclidean location inside the detector, the timing information of the received signal, or a discrete hit identifier.
To map relational connections between individual sensors, a graph based on the detector description is generated which contains the respective sensor features.
Formally described, a graph building algorithm generates a non-directed graph G(D,E), where D is the set of vertices of the graph, and E ⊆ D × D is the set of edges.
The set of vertices is directly given by the previously described set of sensors in a detector.
Each edge e_ij = e(s_i,s_j) ∈ E with s_i, s_j ∈ D in the graph connects two sensors based on a building specification, that depends on sensor features.
In the following, we consider the case of building non-directed graphs.
We do not introduce any fundamental restrictions that limit the generalisation of our concept to directed graphs.
In general, graph building approaches are tailored to the specific detector and physics case.
We consider three approaches that can be classified into two classes of nearest-neighbour graph building: locally constrained graphs, and locally unconstrained graphs.
<Ref> depicts an exemplary cut-out of a detector, in which sensors are placed heterogeneously in two-dimensional space.
For simplicity, sensors are aligned in a grid-like structure without restricting the generality of our graph-building approach.
A graph is built for a query vertex which is depicted by a solid black circle.
We use the exemplary query vertex to illustrate NN-graph building on a single vertex for simplicity.
In the following, we compare the three building approaches and explain their differences.
§.§ kNN
kNN graph building is illustrated on a single query node in <ref>.
Repeating the kNN building algorithm sequentially leads to a worst-case execution time complexity of 𝒪(k | D |log(| D |)) <cit.>.
To reduce the execution time, parallelization of the algorithm has been studied in Ref. <cit.>, achieving a lower theoretical complexity.
Based on the optimization, a linear 𝒪(| D |) time complexity is achieved in experimental evaluation <cit.>.
Nevertheless, substantial processing overhead and limitations through exclusive-read, exclusive-write memory interfaces limit the usability for trigger applications.
To achieve a higher degree of parallelization, algorithms as described in Ref. <cit.> make use of locally constrained approximate graphs.
§.§ ϵNN
ϵNN graph building is illustrated on a single query node in <ref>.
The parameter ϵ defines an upper bound for the distance of a candidate vertex from the query vertex.
All vertices for which <ref> holds true are connected in a graph, yielding a locally constrained graph.
Figuratively, a uniform sphere is placed over a query point joining all edges which are inside the sphere into the graph:
d(x_i,x_j) = ‖x_i - x_j ‖_2 < ϵ
Since the ϵNN approach is controlled by only one parameter, it is a general approach to building locally constrained graphs.
However, variations between adjacent sensors in heterogeneous detectors are not well represented in the algorithm.
§.§ pNN
Pattern nearest-neighbour (pNN) graph building is illustrated on a single query node in <ref>.
For building the graph, every candidate sensor is checked and, if a predefined condition is fulfilled, the edge between candidate node and query node is included in the graph.
§.§ Comparison
When comparing the kNN, the ϵNN, and the pNN algorithms, it is obvious that in general all three approaches yield different graphs for the same input set of sensors.
While the ϵNN building and the pNN building can both be considered locally constrained algorithms, the kNN approach differs as outliers far away from a query point might be included.
Nevertheless, it is noted in Ref. <cit.> that on a uniformly distributed dataset a suitable upper bound ϵ* exists, for which the resulting ϵNN graph is a good approximation of the corresponding kNN graph.
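As an offline reference (not intended for the trigger latency budget), both locally unconstrained and locally constrained graphs can be built with scikit-learn; the sketch below uses illustrative hit coordinates and parameter values of our own choosing.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph, radius_neighbors_graph

# two-dimensional positions of the hit sensors of one event (illustrative)
hits = np.random.rand(200, 2)

# locally unconstrained kNN graph: each hit is joined to its k closest hits
A_knn = kneighbors_graph(hits, n_neighbors=6, mode="connectivity")

# locally constrained eps-NN graph: join all pairs closer than the upper bound
A_eps = radius_neighbors_graph(hits, radius=0.05, mode="connectivity")
```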
§ TOOLCHAIN
In the following, we leverage the described mathematical property to demonstrate the feasibility of building approximate graphs for trigger applications.
First, we provide a methodology to evaluate the approximate equivalence of kNN, ϵNN, and pNN graph building approaches, providing a measure of generality for parameters chosen in offline track reconstruction algorithms <cit.>.
Second, we semi-automatically generate a generic hardware implementation for the pNN graph building as an application-specific version of the ϵNN graph building, thus demonstrating the feasibility of graph-based signal processing in low-level trigger systems.
Third, we perform a case study on the Belle II trigger system demonstrating achievable throughput and latency measures in the environment of trigger applications.
§.§ Hardware Generator Methodology
Algorithms that generate graphs by relating multiple signal channels belong to the domain of digital signal processing.
As such they share characteristics of typical signal processing applications like digital filters or neural networks.
Both applications are data-flow dominated and require a large number of multiply-and-accumulate operators and optimizations for data throughput.
Thus, implementing these algorithms on FPGAs shows promising results in comparison to an implementation on general purpose processors <cit.>.
Developing custom digital logic for FPGAs is time-consuming and error-prone.
To increase productivity, various high-level synthesis (HLS) frameworks have been developed that transform digital signal processing applications from formal definitions into hardware implementations, reducing the required design effort.
For example, digital filters are automatically implemented by commercially available development tools like MATLAB and hardware-aware training and deployment of neural networks is addressed by open-source tool-chains like FINN <cit.> and HLS4ML <cit.>.
While these frameworks have lowered the entry barriers for FPGA-algorithm development, their off-the-shelf usability is limited to pre-defined neural network architectures.
In addition, adapting the frameworks to support custom architectures is often time-consuming and error-prone.
Therefore, we propose a generator-based methodology enabling to transform a graph building algorithm into an actual firmware implementation.
<Ref> illustrates our development flow for both the generation of an intermediate representation of the circuit and an algorithmic evaluation of the building approach.
As an input, a database containing the formal definition of a detector is expected, alongside hyperparameters describing the building approach.
Based on the selected approach, an intermediate-graph representation is generated, containing information on how the building approach is mapped onto the detector.
The intermediate-graph representation serves as an input for the hardware generation and the algorithmic evaluation.
On one side, an intermediate-circuit representation
is generated by combining the intermediate-graph representation and parameterised hardware modules from our hardware description language (HDL) template library.
We use Chisel3 <cit.> as hardware-design language providing an entry point to register transfer-level circuit designs in Scala.
On the other side, the intermediate-graph representation is evaluated on a user-defined dataset and compared to a generic graph-building approach.
To achieve a quantitative comparison we introduce similarity metrics for different operating conditions in the detector in <ref>.
This result can be used to iteratively adapt hyperparameters in the ϵNN or pNN approach, improving the similarity to kNN graphs that are often used in offline track reconstruction.
§.§ Intermediate-Graph Representation
The ϵ parameter in the ϵNN approach and the pattern function in the pNN approach limit the dimensionality of the graph under construction.
In comparison to fully-connected graphs, the maximum number of edges is lowered by imposing local constraints on the connectedness of sensors in the detector.
Local constraints are implemented by considering the influence of static sensor features, like euclidean distances between sensors, during design time of the FPGA firmware.
Leveraging the a-priori knowledge of the sensor position, the computational effort required during online inference of the algorithm is lowered.
<Ref> describes the procedure to derive the intermediate-graph representation of an arbitrary graph-building procedure.
As an input the formally described set of sensors D is given.
Iterating over every sensor in the detector, the locality of not yet visited sensors is checked by a user-defined metric describing the graph building approach.
If a sensor is considered to be in the neighbourhood of another sensor, the connection is added to the resulting set of edge candidates E.
All edges in E must be checked for their validity during the inference of the online graph building.
The combination of the formal detector description and the set of candidate edges is sufficient to describe an arbitrary building approach on non-directed graphs.
According to algorithm <ref>, the worst-case time complexity during design-time amounts to 𝒪(| D |^2), which is higher than the worst-case time-complexity of state-of-the-art building approaches.
However, the worst-case time-complexity during run-time is now only dependent on the number of identified edges during design-time.
Therefore, generating a graph of low dimensionality by choosing a suitable metric, considerably lowers the number of required comparisons at run-time.
Such an optimization would not be possible when using a kNN approach, as even for a low dimensionality all possible edges must be considered.
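The design-time enumeration of candidate edges can be sketched in a few lines of Python; the helper and the example ϵNN metric below are illustrative (including the numerical threshold), and only the resulting candidate list is carried over to the FPGA firmware.

```python
def candidate_edges(sensors, is_neighbour):
    """Design-time enumeration of candidate edges for a set of static sensor
    feature vectors, using a user-defined locality metric."""
    edges = []
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):      # visit each unordered pair once
            if is_neighbour(sensors[i], sensors[j]):
                edges.append((i, j))
    return edges                                   # candidates checked at run time

# example metric: the eps-NN condition on Euclidean sensor positions
def eps_metric(a, b, eps=22.0):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5 < eps
```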
§.§ Full Toolchain Integration
Our methodology covers the conversion of an arbitrary graph building algorithm into an intermediate-circuit representation.
The resulting intermediate-circuit representation, implemented on the FPGA as a hardware module, exposes multiple interfaces on the FPGA.
On the input side, heterogeneous sensor data is supplied through a parallel interface as defined in the detector description.
On the output side, graph features are accessible through a parallel register interface to provide edge features to successive processing modules.
Considering the application of our module in a latency-sensitive, high-throughput environment like particle experiments, direct access to graph data is required at the hardware level.
Therefore bus architectures employed in general-purpose processors, like AXI or AMBA, are not suitable for our use case.
To reduce communication overhead between registers, which store graph data, and algorithmic processing units, an analysis of data paths during the generation of the final FPGA firmware is required.
<Ref> depicts exemplary, how our graph building methodology is combined with state-of-the-art HLS tools enabling the generation
of hardware-accelerated neural networks.
The left side of the figure depicts a generic HLS flow converting, for example, a PyTorch <cit.> neural network model into hardware modules.
There are numerous HLS toolchains available for deploying neural networks on FPGAs, for example HLS4ML <cit.>
, FINN <cit.>
or ScaleHLS <cit.>.
The register transfer level description of hardware modules generated by HLS toolchains are
composed of discrete registers, wires, and synthesizable operations.
In a similar way, the right side of the figure depicts our proposed graph building procedure.
The formal detector description and the user-defined graph building metric are used as an input to generate a register-transfer level description of the hardware module.
As both toolchains generate hardware descriptions at the register-transfer abstraction level, merging the two modules is feasible.
Last, a top level design combining both modules in SystemVerilog <cit.> is generated for an FPGA-specific implementation using commercially available toolchains, for example Vivado ML <cit.>.
§.§ Module Architecture
Utilising the generated intermediate graph description, available generator templates, and user-defined hyperparameters, a hardware module is generated at the register-transfer level.
The system architecture of the module is depicted in <ref>.
The total number of graph edges is factorised into M edge processing elements and N graph edges per edge processing element.
Readings from the detector sensors are routed to an array of M edge processing elements via a static distribution network.
Every edge processing element builds N graph edges in a time-division multiplex.
For each edge, two adjacent vertices are required which are provided to the edge processing element in two arrays of length N.
Consequently, graph edges are built from candidates identified at design time yielding a sparse array of both active and inactive edges.
In the described architecture, all generated edges are accessible through parallel registers.
In case a serial interface is required for successive algorithms, an interface transformation is achieved by adding FIFO modules.
<Ref> illustrates the block level diagram of an edge processing element in detail.
During design-time, each hardware module is allocated N edges which are built sequentially.
Static allocation allows a-priori known sensor and edge features, like euclidean distances, to be stored in read-only registers.
During run-time, the described module loads static features from the registers, combines them with variable input features, like the deposited energy,
and classifies the edge as active or inactive.
The online graph building is carried out in three steps.
First, a pair of sensor readings is loaded from the shift registers, and static sensor and edge features are loaded from a static lookup table.
Second, a Boolean flag is generated based on a neighbourhood condition e.g., a user-specified metric is fulfilled for two adjacent sensors.
Third, the resulting feature vector of the edge is stored in the respective register.
Feature vectors of all edge processing elements are routed via a second static distribution network mapping each edge to a fixed position in the output register.
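A behavioural (non-synthesisable) Python model of a single edge processing element is sketched below; the choice of a simple hit-coincidence condition and the exact feature layout are our assumptions, as the neighbourhood condition is user-specified in the generator.

```python
def edge_processing_element(static_edges, hit, tdc, adc):
    """Process the N statically allocated candidate edges of one element.
    static_edges: list of (i, j, distance) tuples from the design-time lookup table;
    hit, tdc, adc: run-time sensor readings indexed by wire."""
    out = []
    for (i, j, distance) in static_edges:
        active = hit[i] and hit[j]            # example neighbourhood condition
        out.append((active, distance, tdc[i], tdc[j], adc[i], adc[j]))
    return out                                 # written to the output registers
```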
The proposed architecture takes advantage of distributed lookup tables and registers on the FPGA in two ways.
First, due to the independence of the edge processing elements space-domain multiplexing is feasible on the FPGA even for large graphs.
Second, static features of the graph edges and vertices are stored in distributed registers allowing logic minimisation algorithms to reduce the required memory <cit.>.
To conclude, we developed an architecture for online graph building which is well suited for the latency constrained environment of low level trigger systems in particle physics experiments.
The variable output interface allows for an easy integration of successive trigger algorithms and leaves ample room for application specific optimisation.
The number of output queues is controlled by the parameter N which yields a flexible and efficient design supporting variable degrees of time-domain multiplexing.
§ CASE STUDY: BELLE II TRIGGER
To demonstrate the working principle of our concept, we adapt our graph building methodology for the first level (L1) trigger of the experiment.
The implementation focuses on the CDC (see <ref>) that is responsible for all track-based triggers.
§.§ Environment
The aim of the trigger system is to preselect collision events based on their reconstructed event topologies.
In order to filter events, a multi-stage trigger system is employed.
As a result, the effective data rate and thus the processing load of the data acquisition systems is reduced.
To give an overview of the constraints and requirements imposed by the experiment, the existing system is briefly described in the following.
The L1 track triggers are shown schematically in <ref>.
They perform real-time filtering with a strict latency requirement of 5 µs <cit.>.
The sense wires inside the CDC are sampled with 32 MHz and wire hits are accumulated for approximately 500 ns.
In order to process all available input signals concurrently, a distributed FPGA-based platform is employed.
To obtain a trigger decision, track segments are generated from incoming events in parallel by performing space-division multiplexing.
Based on the output of the track segment finder (TSF), multiple algorithms including conventional 2D and 3D track finding algorithms as well as a Neural Network Trigger <cit.>
generate track objects of varying precision, efficiency, and purity for a Global Decision Logic <cit.>.
The integration of GNNs in the L1 trigger system requires an online-graph building approach that is optimised for both latency and throughput.
In this case study, we employ our proposed toolchain to generate an application-specific graph-building module as described in the previous section while adhering to constraints in the challenging environment of the experiment.
§.§ Graph Building
The wire configuration of the CDC is mapped onto the formal detector definition from <ref>, using wires as discrete sensors.
These sensors are called nodes or vertices in the following.
Inside the L1 trigger system, three signals are received per wire: a hit identifier, the TDC readout and the ADC readout, where TDC is the output of a time-to-digital converter measuring the drift time, and ADC is the output of an analogue-to-digital converter measuring the signal height that is proportional to the energy deposition in a drift cell.
Cartesian coordinates of the wires inside the detector are known during design time and used as static sensor features.
Additionally, the distance between two vertices, which is also known during design-time, is considered as an edge feature.
Illustrating the working principle of our graph building approaches, <ref> depicts four
cut-outs of the CDC in the x-y plane for z=0.
In sector A, hit identifier received by the detector for an exemplary event are indicated by black markers.
The other three sectors show one graph building approach each:
Sector B depicts a kNN graph for k=6, as there are up to six direct neighbours for each wire.
The kNN graph connects wires that are widely separated.
Sector C shows an ϵNN graph for ϵ = 22 mm.
The specific value for ϵ is chosen because 22 mm is in the range of one to two neighbour-wire distances inside the CDC. This graph building approach connects hits in close proximity only, yielding multiple separated graphs.
In addition, more edges are detected in the inner rings compared to the outer rings of the detector due to the higher wire density in this region.
Finally, sector D shows a pNN graph using the pattern described in <ref>.
The pattern extends the existing pattern <cit.> of the currently implemented TSF in the L1 trigger system by taking neighbours in the same superlayers into account.
When comparing the ϵNN graphs and the pNN graphs with each other, it is observed that the degrees[The degree of a vertex of a graph is the number of edges that are connected to the vertex.] of vertices are more evenly distributed (see inserts in <ref>).
§.§ Parameter Exploration
In general, the kNN, rNN, and pattern-based algorithms generate different graphs for an identical input event.
However, to replace kNN graph building with a locally constrained graph building approach, the graphs should ideally be identical.
As the generated graphs depend strongly on the chosen hyperparameters, on the geometry of the detector, and on the hit distribution of the events under observation, a quantitative measure of the similarity between kNN graphs and locally constrained graphs, such as rNN or pattern graphs, is necessary.
The optimal choice of the radius hyperparameter is the one that maximises this similarity for any given k.
For this optimisation we use simulated events as described in <ref>.
We generate both the kNN graphs and the locally constrained graphs on the dataset, considering the neighbourhood of wires inside the detector.
Edges of the kNN graphs are labelled E_k, whereas the edges of the locally constrained graphs are labelled E_l.
We measure the similarity between the two graphs using the the binary classifications metrics recall and precision
defined as
recall = | E_k ∩ E_l |/| E_k|,
precision = | E_k ∩ E_l |/| E_l|.
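Read as set operations on undirected edges, the two scores can be evaluated directly. The following sketch illustrates the metric on toy edge sets; the data and names are purely hypothetical.

def edge_similarity(knn_edges, local_edges):
    """Recall and precision of a locally constrained edge set against the kNN edges.

    Both inputs are sets of frozenset({i, j}) pairs, so edge direction is ignored.
    """
    overlap = knn_edges & local_edges
    recall = len(overlap) / len(knn_edges) if knn_edges else 0.0
    precision = len(overlap) / len(local_edges) if local_edges else 0.0
    return recall, precision

e_k = {frozenset(p) for p in [(0, 1), (0, 2), (1, 2), (2, 3)]}
e_l = {frozenset(p) for p in [(0, 1), (1, 2), (1, 3)]}
print(edge_similarity(e_k, e_l))  # recall 0.5, precision 0.67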
We vary k between 1 and 6 and the radius between 14 and 28 mm, as the minimal distance between two wires in the CDC is approximately 10 mm.
Precision and recall scores are calculated for every pair of k and radius parameters; <ref> shows their mean values over 2000 events.
As expected, the precision score increases monotonically when the parameter k is increased.
In addition, it increases if the radius parameter is reduced.
The recall score behaves in the opposite way: it decreases monotonically when the parameter k is increased.
In addition, it decreases if the radius parameter is decreased.
Similarity is defined as the ratio between recall and precision, where an optimal working point also maximizes recall and precision themselves.
We observe that we do not find high similarity for all values of k.
Maximal similarity is found for k=3 with a radius of 22 mm and for k=4 with a radius of 28 mm.
The corresponding precision and recall on the underlying data set are around 65-70%.
The similarity between kNN and rNN graphs can be interpreted in relation to the mathematical statement from Ref. <cit.> (compare <ref>).
Based on the background noise and the large number of hits per events, we assume that the hit identifiers in the dataset are approximately uniformly distributed.
Therefore, we expect that pairs of kNN and rNN graphs exist that exhibit a high degree of similarity, e.g. precision and recall scores close to one.
Our expectation is only partially met as the trade-off point reaches only about 65-70 %.
One possible reason for the remaining difference between the two graphs is the underlying background noise.
Although the events are clearly dominated by noise, the influence on the hit distribution is not strong enough for higher similarity scores.
We perform the same comparison between the kNN and the pattern-based graph building approach, as shown in <ref>.
We achieve results similar to the comparison between kNN and rNN graphs: the recall score decreases monotonically for larger k, and the precision score increases monotonically for larger k.
For k between three and four, precision and recall scores are approximately equal and around 70 %.
Again, our expectation of a high degree of similarity is only partially met.
This similarity is to be expected, as the chosen pattern is also locally constrained and approximately ellipsoid.
§.§ Prototype Setup
For the implementation of the proposed algorithm into a hardware prototype, the CDC is partitioned into 20 partially overlapping sectors in ϕ and radial distance r for the L1 trigger.
Each ϕ-r-sector is processed independently by one FPGA platform, the overlapping of the sectors ensures that no data is lost.
The overlapping sectors must be merged in subsequent reconstruction steps that are not part of the graph-building stage.
In the following, the graph-building module is implemented on the Belle II Universal Trigger Board 4 (UT4) featuring a Xilinx Ultrascale XCVU160WE-2E.
The UT4 board is currently used in the L1 trigger and therefore serves as a reference for future upgrades of the L1 trigger system.
To implement the online graph building module, we generate JSON databases for every ϕ-sector of the CDC.
Each database represents a formal detector containing the positions of the wires and information about sensor-features as described in section <ref>.
Sensor features are composed of 1bit for the binary hit identifier, 5bit for the TDC readout, 4bit for the ADC readout, and the Cartesian coordinates of the wires.
Additional edge features containing information about the wire distances of two adjacent vertices are included as well.
The resolution of the euclidean features can be arbitrarily chosen and is therefore considered a hyperparameter of the module implementation.
The sector database and a function describing the pattern as illustrated in <ref> is provided as an input to our proposed toolchain which is implemented in Python 3.10.
An intermediate graph representation is generated as a JSON database, containing type definitions of all vertices, edges, and their respective features.
In addition, features known at design-time, such as Cartesian coordinates, are rounded down, quantized equally spaced, and included in the intermediate graph representation.
By generating the databases for all 20 sectors, we identify the smallest and largest sector of the CDC to provide a lower and an upper bound for our problem size.
The maximum number of edges in each sector is determined by the pattern from <ref>.
The smallest sectors are located in superlayer two containing 498 vertices and 2305 edges, while the largest sectors are located in superlayer six containing 978 vertices and 4545 edges.
To demonstrate our graph building approach, we synthesise the previously generated intermediate graph representation into a hardware module targeting the architecture of the UT4.
We provide the JSON database as an input for the hardware generator, which is a set of custom modules implemented in Chisel 3.6.0.
In addition, we provide a Scala function that performs the online classification of edge candidates based on the hit identifier:
an edge candidate is considered valid if the hit identifiers of both adjacent vertices are set.
For the edge processing elements, we choose the number of edges per edge processing element to be N=8.
Therefore, eight edges are processed sequentially in every edge processing element as described in <ref>.
Based on the required throughput of 32 MHz, a system frequency of at least 256 MHz is required to achieve the desired throughput.
By starting the generator application, edges and features are extracted from the intermediate graph representation and scheduled on edge processing elements.
After completion, the hardware generator produces a SystemVerilog file containing the graph-building hardware module <cit.>.
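The relation between the 32 MHz input rate, the number of edges processed sequentially per processing element, and the required system frequency, together with the block-wise assignment of edge candidates to processing elements, can be summarised in plain Python. The sketch below mirrors our reading of the generator's scheduling step; the names and the exact assignment policy are assumptions, and the real implementation is the generated Chisel/SystemVerilog module.

def schedule_edges(edge_ids, edges_per_pe=8, sample_rate_mhz=32):
    """Assign edge candidates to sequential processing elements (PEs).

    With edges_per_pe edges handled sequentially per PE, the system clock must run
    at edges_per_pe * sample_rate_mhz to keep up with one event per input sample.
    """
    f_sys_mhz = edges_per_pe * sample_rate_mhz  # 8 * 32 MHz = 256 MHz
    pes = [edge_ids[i:i + edges_per_pe] for i in range(0, len(edge_ids), edges_per_pe)]
    return f_sys_mhz, pes

f_sys, pes = schedule_edges(list(range(2305)))
print(f_sys, len(pes))  # 256 (MHz) and 289 processing elements for the smallest sector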
§.§ Implementation Results
For further evaluation, the SystemVerilog module implementing the presented graph building is synthesised out-of-context for the UT4 board using Xilinx Vivado 2022.2.
During synthesis, the target frequency f_sys is set to 256 MHz, for which no timing violations are reported by the tool.
In addition, functional tests are performed to validate the algorithmic correctness of the module.
In the following we perform two series of measurements to validate the feasibility of the proposed implementation on the Xilinx Ultrascale XCVU160WE-2E FPGA.
<Ref> depicts the results of the two evaluation series, reporting the utilisation on the UT4 board for the respective resource types.
The first series of three synthesised versions is shown in <ref>, varying the input graph size in a suitable range between 2305 and 4545 edges.
The highest occupancy is reported for registers, amounting to up to 16.4% for the largest input graph, as opposed to 7.8% for the smallest graph.
For all other resource types, the utilisation is lower than 5%.
In general, it is observed that the resource utilisation scales linearly with the number of edges in the input graph.
For the second series, a variation in resolution of the underlying edge features is considered.
An overview of all utilised features is given in <ref>.
The widths of features that are received as inputs from the CDC, namely the hit identifier, the ADC readout, and the TDC readout, are chosen such that they are supported by the current readout system.
As an example, the TDC readout quantisation of 5 bit derives from the drift time resolution of one clock cycle at a trigger data input rate of 32 MHz.
The resolution of euclidean coordinates and distances can be optimised at design-time.
In the following, we choose a resolution between 4 and 16 bit, which results in a quantisation error for the Euclidean coordinates in the range of 34.4 mm down to 0.017 mm.
4bit per coordinate result in a total edge width of 40bit, whereas a resolution of 16bit per coordinate results in a total edge width of 100bit.
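The quoted edge widths can be reproduced by simple bookkeeping if one assumes that every edge carries the hit identifier, TDC, and ADC values of both adjacent wires, both pairs of x-y coordinates, and one wire distance at the same resolution as the coordinates. This field layout is our reading of <ref> and should be treated as an assumption.

def edge_width_bits(coord_bits, hit_bits=1, tdc_bits=5, adc_bits=4):
    """Width of one edge feature vector for a given coordinate/distance resolution."""
    per_node = hit_bits + tdc_bits + adc_bits  # readout features of one wire
    coords = 2 * 2 * coord_bits                # x and y for both adjacent wires
    distance = coord_bits                      # one wire-distance edge feature
    return 2 * per_node + coords + distance

print(edge_width_bits(4), edge_width_bits(16))  # 40 and 100 bit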
The implementation utilisation of all three synthesised modules is shown in <ref>, varying the resolution of euclidean coordinates and distances in the generated edges.
Similar to the previous measurement, the highest utilisation is reported for registers, taking up between 11.1% and 26.1% depending on the width of the edges.
However, it can be seen that the implementation size scales linearly with the width of the edges.
Based on the presented results, the implementation of the graph building module is considered feasible on the UT4 board.
By experimental evaluation we show that our hardware architecture can be implemented semi-automatically for the L1 trigger of the experiment, enabling the deployment of GNNs in the latency-constrained trigger chain.
The feature vectors of the edges are provided via a parallel output register, where the address of every edge is statically determined at design time.
Depending on successive filtering algorithms, any number of output queues can be provided.
To conclude, our toolchain allows for a flexible and resource efficient design of online graph building modules for trigger applications.
In the presented implementation, our module is able to achieve a throughput of 32 million samples per second at a total latency of 39.06 ns, corresponding to ten clock cycles at f_sys.
As the reported latency is well below the required 𝒪(1) μs, our graph building module leaves a large part of the latency and resource budget on FPGAs to the demanding GNN solutions.
§ CONCLUSION
In our work, we analysed three graph building approaches regarding their feasibility for the real-time environment of particle physics machine-learning applications.
As the kNN algorithm, which is favoured by state-of-the-art GNN tracking solutions, is unsuitable for the strict sub-microsecond latency constraints imposed by trigger systems, we identify two locally constrained approaches, rNN and pattern-based graph building, as possible alternatives.
In an effort to reduce the number of design-iterations and time-consuming hardware debugging, we develop a generator-based hardware design methodology tailored specifically to online graph-building algorithms.
Our approach generalises graph-building algorithms into an intermediate graph representation based on a formal detector description and user-specified metrics.
The semi-automated workflow enables the generation of FPGA-accelerated hardware implementation of locally constrained nearest neighbour algorithms.
To demonstrate the capabilities of our toolchain, we perform a case study on the trigger system of the Belle II detector.
We implement an online graph-building algorithm which adapts the pattern of the current track segment finder, demonstrating the feasibility of our approach in the environment of particle physics trigger applications.
The code used for this research is available open source under Ref. <cit.>.
Nearest neighbour algorithms presented in this work achieve a 𝒪(1) time complexity and a 𝒪(| E |) space complexity, compared to a 𝒪(| D |) time complexity in approximate algorithms or a 𝒪(k | D |log(| D |)) complexity in the sequential case <cit.>.
As a result, our semi-automated methodology may also be applied to other detectors with heterogeneous sensor arrays to build graphs under latency constraints, enabling the integration of GNN-tracking solutions in particle physics.
During the evaluation of our similarity metric, we found a non-negligible difference between kNN graphs and locally constrained NN graphs.
For the complete replacement of kNN graphs with our proposed rNN and pattern graphs, the differences must be taken into account to achieve optimal performance when designing successive trigger stages.
For this reason, we consider the future development of methods for algorithm co-design essential for integrating GNNs into real-world trigger applications.
Data Availability Statement
The datasets generated and analysed during the current study are the property of the Belle II collaboration and are not publicly available.
Code Availability Statement
The code used for this research is available open source under Ref. <cit.>.
Acknowledgements
The authors would like to thank the Belle II collaboration for useful discussions and suggestions on how to improve this work.
It is a great pleasure to thank (in alphabetical order) Greta Heine, Jan Kieseler, Christian Kiesling and Elia Schmidt for discussions, and Tanja Harbaum, Greta Heine, Taichiro Koga, Florian Schade, and Jing-Ge Shiu for feedback and comments on earlier versions of the manuscript.
§ COMPLIANCE WITH ETHICAL STANDARDS
§.§ Conflict of interest
The authors declare that they have no conflict of interest.
|
http://arxiv.org/abs/2307.03997v1 | 20230708154148 | Efficient Model-Free Exploration in Low-Rank MDPs | [
"Zakaria Mhammedi",
"Adam Block",
"Dylan J. Foster",
"Alexander Rakhlin"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
A major challenge in reinforcement learning is to develop practical, sample-efficient algorithms for exploration in high-dimensional domains where generalization and function approximation is required. Low-Rank Markov Decision Processes—where transition probabilities admit a low-rank factorization based on an unknown feature embedding—offer a simple, yet expressive framework for RL with function approximation, but existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions such as latent variable structure, access to model-based function approximation, or reachability. In this work, we propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs that is both computationally efficient and model-free, allowing for general function approximation and requiring no additional structural assumptions. Our algorithm, , uses the notion of a generalized optimal design for the feature embedding as an efficiently computable basis for exploration, performing efficient optimal design computation by interleaving representation learning and policy optimization. Our analysis—which is appealingly simple and modular—carefully combines several techniques, including a new reduction from optimal design computation to policy optimization based on the Frank-Wolfe method, and an improved analysis of a certain minimax representation learning objective found in prior work.
§ INTRODUCTION
In reinforcement learning and control, many of the most promising
application domains require the agent to navigate complex,
high-dimensional state and action spaces, where generalization and function approximation
is necessary. The last decade has
witnessed impressive empirical success in domains where
data are abundant <cit.>,
but when data are limited, ensuring efficient exploration in
large domains is a major research question. For
statistical efficiency, the foundations have recently begun to
take shape, with a line
of research providing structural conditions that facilitate
sample-efficient exploration, as well as fundamental limits
<cit.>. Computational
efficiency, however, remains a major challenge: outside of simple
settings <cit.>, existing algorithms
with provable sample complexity guarantees are computationally
inefficient, and typically require solving intractable non-convex
optimization problems
<cit.>. The
prospect of
developing practical algorithms for exploration in
high-dimensional state spaces that are both computationally and
statistically efficient raises three fundamental questions:
* What are the right computational primitives for exploration?
That is, how can one efficiently represent and compute exploratory policies that
allow the learner
to explore the state
space and gather useful data?
* How should one leverage function approximation—for
example, via
representation learning—to
discover such primitives in a computationally and statistically
efficient fashion?
* Given answers to the first two questions, how can one efficiently interleave function approximation and exploration to provide provably efficient algorithms?
In this paper, we investigate these questions through the
model <cit.>. In a , the state space is large
and potentially continuous, but the transition probabilities admit an
(unknown) low-rank factorization. Concretely, for a finite-horizon
with horizon H, the transition densities for layer
h∈H satisfy
T_h(x_h+1|x_h,a_h) = μ^⋆_h+1(x_h+1)^⊤ϕ^⋆_h(x_h,a_h),
where ϕ^⋆_h(·,·)∈^d and μ^⋆_h+1(·)∈^d are state-action and next-state
embeddings. The low-rank structure in (<ref>)
facilitates tractable exploration: if the embedding ϕ^⋆ is known
to the learner, one can efficiently learn a near-optimal policy with sample
complexity polynomial in the feature dimension d, and independent of
the size of the state space <cit.>; in this regard,
can be thought of as a low-dimensional representation that enables
sample-efficient RL. Following
<cit.>, we consider the challenging setting in
which both and are unknown to the
learner. This formulation generalizes well-known frameworks such as
the Block MDP (BMDP) model <cit.>,
and necessitates the use of representation
learning: the agent must learn an embedding that approximates
as it explores the environment, and must use this learned embedding
to drive subsequent exploration. This form of function approximation allows
for great flexibility, as can be an arbitrary, nonlinear
function of the state; in practice, it is common to model as a neural net <cit.>.
The is perhaps the simplest MDP structure that demands
systematic exploration and nonlinear function approximation while allowing for a continuum of states, yet understanding of
efficient algorithm design for this model is surprisingly
limited. Existing algorithms suffer from at least one of the following drawbacks:
* Computational intractability <cit.>.
* Strong modeling assumptions (e.g., ability to model
[h+1](·), which facilitates application of model-based
RL techniques)
<cit.>;
in this work, we aim for model-free methods that only require
learning .
* Restrictive structural assumptions (e.g.,
non-negativity or latent variable
structure for the embeddings in (<ref>)) <cit.>.
At the root of these limitations is the complex interplay between
exploration and representation learning:
the agent must learn a high-quality representation to guide
in exploring
the state space, but learning such a representation requires gathering
diverse and informative data, which is difficult to acquire without
having already explored the state space to begin with. Overcoming
this challenge—particularly where computational efficiency is
concerned—requires (1) representation learning procedures that lead to sufficiently expressive
representations for downstream applications, (2) efficient exploration procedures that are
robust to errors in learned representations, and 3) understanding the
interaction between these procedures, which must be interleaved. In
this work, we propose an algorithm that addresses each of these challenges, as detailed below.
Contributions
We provide the first provably computationally efficient and model-free
algorithm for general Low-Rank MDPs.
Our algorithm, (“Volumetric Exploration”), uses
the notion of a generalized optimal design for the
embedding as an efficiently computable
basis for exploration, and combines this with a minimax representation
learning objective <cit.>. interleaves exploration with representation learning in a layer-wise
fashion, learning a new representation at each layer h using exploratory
data gathered at previous layers, then uses this representation to
facilitate computation of a collection of exploratory policies (a
policy cover), which act as an approximate optimal design
for the features at layer h+1, ensuring good coverage for subsequent
iterations. is simple and modular, and its analysis is
surprisingly compact given the greater generality compared to prior
work
<cit.>.
accommodates general-purpose function approximation
to learn the representation (e.g., neural
nets or other flexible classes), and is efficient whenever a certain minimax
representation learning objective <cit.> can be solved efficiently for the
function class of interest. Compared to efficient algorithms from
prior work, : (1) is model-free (i.e., only requires access to a function class
Φ capable of modeling , and does not need to model
), and (2) applies to general Low-Rank MDPs, removing
the need for strong assumptions such as reachability or non-negativity of the feature embeddings
(so-called latent variable structure); see
<Ref>).
As a secondary benefit, the algorithm is reward-free.
Our analysis carefully combines several new techniques, including (1) a new reduction from optimal design
computation to policy optimization based on the Frank-Wolfe method, and (2) a new analysis of a minimax representation learning
objective introduced in <cit.>,
which leads to faster rates and shows for
the first time that this objective can lead to meaningful guarantees in general Low-Rank
MDPs without latent variable structure.
The algorithm follows a simple and modular template. To highlight this, we use the same template to give a
variant of the algorithm, (<ref>), which
leverages barycentric spanners <cit.> for
exploration, and obtains a tighter
sample complexity bound under an additional reachability assumption; see <ref>.
Organization
sec:setting formally introduces the model and the online
reinforcement learning framework we consider. In
<ref>, we highlight challenges faced
by previous approaches, introduce our main algorithm, , and
show how it overcomes these challenges, and then present its main
sample complexity guarantee. We conclude
with discussion in <ref>.
§ PROBLEM SETTING
§.§ Model
We work in an episodic, finite-horizon reinforcement learning framework, where H∈ denotes the horizon. A <cit.> is a tuple =(,, ()_h∈ [H],([h])_h∈[H],) consisting of a state space , action space with =A, distribution over initial states ∈Δ(), and mappings :→^d and : ×→^d.[We emphasize that neither [h] nor is known to the agent, in contrast to the linear MDP setting <cit.>.]
Beginning with _1∼, an episode proceeds in H steps, where for each step h∈H, the state _h evolves as a function of the agent's action _h via
_h+1∼T_h(·|_h,_h),
where T_h is a probability transition kernel, which is assumed to factorize based on and . In detail, we assume that there exists a σ-finite measure ν on such that for all 1 ≤ h ≤ H-1, and for all x ∈ and a ∈, the function x' ↦(x')^⊤(x, a) is a probability density with respect to ν (i.e. the function is everywhere non-negative and integrates to 1 under ν). For any '⊆, the probability that _h+1∈' under _h+1∼T_h(·|x_h,a_h) is then assumed to follow the law
T_h('|x_h,a_h) = ∫_'(x)^⊤(x_h, a_h) ν(x).
For notational compactness, we assume (following, e.g., <cit.>) that the MDP is layered so that = _1∪…∪_H for _i ∩_j=∅ for all i≠ j, where _h⊆ is the subset of states in that are reachable at layer h∈[H]. This can be seen to hold without loss of generality (modulo dependence on H), by augmenting the state space to include the layer index.
Our formulation, in which the transition dynamics (<ref>) are stated with respect to a base measure ν, are a rigorous generalization of formulations found in previous works <cit.>, which tend to implicitly assume the state space is countable and avoid rigorously defining integrals. We adopt this more general formulation to emphasize the applicability our results to continuous domains. However, in the special case where state space is countable, choosing ν as the counting measure yields T_h('|x_h,a_h) = ∑_x∈'(x)^⊤(x_h, a_h), which is consistent with prior work.
Policies and occupancy measures
We define =*π:→Δ() as the set of all randomized, Markovian policies. For a policy π∈, we let ^π denote the law of (_1,_1),…,(_H,_H) under _h∼π(_h), and let ^π denote the corresponding expectation. For any '⊆_h, we let _h^π[']^π[_h ∈'] denote the marginal law of _h under π. For x∈_h, we define the occupancy measure d^π(x) _h^π/ν(x) as the density of ^π_h with respect to ν.
§.§ Online Reinforcement Learning and Reward-Free Exploration
We consider a standard online reinforcement learning framework where the Low-Rank MDP is unknown, and the learning agent interacts with it in episodes, where at each episode the agent executes a policy of the form π:→Δ() and observes the resulting trajectory (_1,_1),…,(_H,_H).
While the ultimate goal of reinforcement learning is to optimize a policy with respect to a possibly unknown reward function, here we focus on the problem of
reward-free exploration, which entails learning a collection of policies that almost optimally “covers” the state space, and can be used to efficiently optimize any downstream reward function <cit.>. To wit, we aim to construct an policy cover, a collection of policies that can reach any state with near-optimal probability.
For α,∈(0,1], a subset Ψ⊆ is an (α,)-policy cover for layer h if
max_π∈Ψ d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
Informally, an (α,)-policy cover Ψ has the property that for every state x∈ that is reachable with probability at least ·[h](x), there exists a policy in Ψ that reaches it with probability at least α··[h](x). We show (<ref>) that given access to such a policy cover with α =(, d^-1 ,A^-1), it is possible to optimize any downstream reward function to () precision with polynomial sample complexity.
<ref> generalizes the notion of approximate policy cover used by <cit.> for the Block MDP setting; as in that work, the definition allows one to sacrifice states for which the maximum occupancy is small, which is necessary in the absence of reachability-style assumptions <cit.>.
Function approximation and desiderata
We do not assume that the true features ()_h∈[H] or the mappings ([h])_h∈[H] are known to the learner.
To provide sample-efficient learning guarantees we make use of function approximation as in prior work <cit.>, and assume access to a feature class Φ⊆{ϕ : ×→^d} that contains , for h∈[H-1].
[Realizability]
The feature class Φ⊆{ϕ : ×→^d} has ∈Φ for all h∈[H]. Moreover, for all ϕ∈Φ, x ∈, and a ∈, it holds that ϕ(x, a)≤ 1.
The class Φ may consist of linear functions, neural networks, or other standard models depending on the application, and reflects the learner's prior knowledge of the underlying MDP. We assume that Φ is finite to simplify presentation, but extension to infinite classes is straightforward, as our results only invoke finiteness through standard uniform convergence arguments.
Note that unlike model-based approaches <cit.>, we do not assume access to a class capable of realizing the features , and our algorithm does not attempt to learn these features; this is why we distinguish our results as model-free.
Beyond realizability, we assume (following <cit.>) for normalization that, for all h∈[H] and (x,a)∈_h×, *_h(x,a)≤1, and that for all g:_h→0,1,
*∫__h[h](x)g(x) ν(x)≤√(d).
For ∈(0,1), our goal is to learn an (α,)-policy cover with α= (,d^-1,A^-1) using
(d,A,H,logΦ,^-1)
episodes of interaction.
This guarantee scales with the dimension d of the feature map and the complexity logΦ of the feature class but, critically, does not depend on the size of the state space ; note that by <cit.>, dependence on both H and A= is necessary when is unknown. Given such a guarantee, we show in <Ref> that it is possible to optimize any downstream reward function to error with polynomial sample complexity.
Additional preliminaries
For any m,n ∈ℕ, we denote by [mn] the integer interval {m,…, n}. We also let [n] [1n]. For any sequence of objects o_1, o_2,…, we define o_m:n (o_i)_i∈[m n].
A partial policy is a policy defined over a contiguous subset of layers ℓr⊆H. We denote by ^ℓ:r{π⋃_h=ℓ^r _h →Δ()} the set of all partial policies over layers ℓ to r; note that ≡^1:H. For a policy π∈^ℓ:r and h∈ℓr, π(x_h) denotes the action distribution for the policy at layer h when x_h∈_h is the current state. For 1≤ t≤ h≤ H and any pair of partial policies π∈^1:t-1, π'∈^t:h, we define π∘_t π'∈^1:h as the partial policy given by (π∘_t π')(x_ℓ) = π(x_ℓ) for all ℓ<t and (π∘_t π')(x_ℓ) = π'(x_ℓ) for all ℓ∈ [t h]. We define π∘_t π' in the same fashion for π∈^1:ℓ for ℓ≥ t.
We use the _h∼π as shorthand to indicate that _h is drawn from the law ^π, and likewise for (_h,_h)∼π and so on. For a set of partial policies Ψ{π^(i) i ∈ [N]}, we define (Ψ) as the random partial policy obtained by sampling ∼([N]) and playing π^(). We define ∈ as the random policy that selects actions in uniformly at random at each layer.
We use *· to denote the Euclidean norm, *·_∞ to denote the supremum norm on functions, and let (r)⊆^d denote the Euclidean ball of radius r. We let _(r) be the Frobenius ball of radius r>0 in ^d× d. We denote by the set of positive semi-definite matrices in ^d× d, and by “≼” the corresponding partial order. For a vector v∈^d, we denote by v[i] its ith coordinate.
We refer to a scalar c>0 as an absolute constant to indicate that it is independent of all problem parameters and use (·) to denote a bound up to factors polylogarithmic in parameters appearing in the expression.
§ : ALGORITHM AND MAIN RESULTS
In this section, we present the algorithm. We begin by describing
challenges in deriving efficient, model-free algorithms using existing
approaches (<ref>). We then formally describe (<ref>) and build intuition as to how it is able to overcome these challenges, and finally state our main sample
complexity guarantee (<ref>).
§.§ Challenges and Related Work
Designing algorithms with provable guarantees in the Low-Rank MDP setting is challenging because of the complicated interplay between representation learning and exploration. Indeed, while there are many efficient algorithms for the so-called linear MDP setting where the feature maps ()_h∈[H] are known (removing the need for representation learning) <cit.>, these approaches do not readily generalize to accommodate unknown features. For Low-Rank MDPs, previous algorithms suffer from at least one of the following three drawbacks: (1) the algorithms are computationally inefficient; (2) the algorithms are model-based; or (3) the algorithms place strong assumptions on the MDP that are unlikely to hold in practice. To motivate the algorithm, we briefly survey these results, highlighting several key challenges in avoiding these pitfalls.
Let us first discuss the issue of computational efficiency. While there are a number of algorithms—all based on the principle of optimism in the face of uncertainty—that provide tight sample complexity guarantees for Low-Rank MDPs in reward-based <cit.> and reward-free <cit.> settings, these algorithms involve intractable optimization problems, and cannot be implemented efficiently even when the learner has access to an optimization oracle for the representation class Φ <cit.>. This intractability arises because these algorithms implement optimism via a “global” approach, in which the algorithm explores at each round by choosing the most optimistic value function in a certain version space of candidate value functions; optimizing over this version space is challenging, as it involves satisfying non-convex constraints with a complicated dependence on the learned representation that are coupled globally across layers h∈H.
To avoid the intractability of global optimism, several works have restricted attention to a simpler model-based setting. Here, in addition to assuming that the feature maps ()_h∈[H] are realizable with respect to Φ, one assumes access to a second feature class Υ capable of modeling the mappings ()_h∈[H]; this facilitates direct estimation of the transition probability kernel T_h(·|x,a). For the model-based setting, it is possible to efficiently implement certain “local” forms of optimism <cit.>, as well as certain non-optimistic exploration techniques based on policy covers <cit.>. For example, one can estimate features using maximum likelihood, and then apply efficient algorithms for the known-feature setting with the estimated features plugged-in <cit.>; here, a key insight is that model-based estimation leads to strong distribution transfer guarantees for the learned features. As a result, there are now a number of efficient model-based algorithms <cit.>, some of which have been practically implemented <cit.>. Unfortunately, model-based realizability is a restrictive assumption, and falls short of the model-free guarantees we aim for in this work; indeed, in general, one cannot hope to estimate the feature map without sample complexity scaling with the number of states.[For example, in the special case of the Block MDP setting <cit.>, model-based realizability entails modeling a certain emission process, which is not required by model-free approaches.]
When one moves from model-based learning to model-free learning, representation learning becomes substantially more challenging—both for optimistic and non-optimistic approaches. Here, a key challenge is to develop representation learning procedures that are (1) efficient, yet (2) provide meaningful guarantees when the learned features are used downstream for exploration.
To our knowledge, the only proposal for a representation learning procedure satisfying both desiderata comes from the work of <cit.>, who introduced a promising “minimax” representation learning objective (described in detail in the sequel; cf. <ref>), which <cit.> subsequently showed to have encouraging empirical performance. However, to provide guarantees for this objective, both works place substantial additional restrictions on the low-rank factorization. In particular, <cit.> make the so-called latent variable assumption <cit.>, which asserts that and are non-negative coordinate-wise, and <cit.> further restrict to the Block MDP model <cit.>.
Non-negativity is a substantial restriction, as the best non-negative factorization can have exponentially large dimension relative to the best unrestricted factorization <cit.>. Beyond non-negativity, many prior works <cit.> require reachability assumptions, the weakest of which asserts that there exists η>0 such that for all x∈_h,
max_π∈ d^π(x)≥η·[h](x).
These works give sample complexity bounds that scale polynomially in η^-1, and do not give any guarantee when η=0; see <ref> for further background.[When specialized to tabular MDPs, reachability asserts that for each state x∈, there exists a policy that reaches x with probability at least η.] The source of both restrictions is the problem of how to quantify how close a learned representation ϕ is to the ground truth , which depends strongly on the downstream exploration strategy. In what follows, we show that with the right exploration strategy, this challenge can be ameliorated, but prior to our work it was unclear whether the minimax objective could lead to meaningful guarantees in the absence of non-negativity.
§.§ The Algorithm
Our algorithm, , is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. To describe the algorithm in detail, we slightly generalize <ref>.
For α,∈(0,1], a distribution P∈Δ() is an (α,)-randomized policy cover for layer h if
_π∼ P*d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
If P is a randomized policy cover, then the set Ψ(P) is a policy
cover in the sense of <ref>, but is most
naturally described in terms of randomized policy covers, which allow
for non-uniform mixtures of policies. Critically, the randomized
policy covers used in have support size polynomial in d and H,
which allows them to be computed and represented efficiently.
For each layer h≥2, uses a randomized policy cover
Ph built at a previous iteration to perform K steps of
interleaved representation learning and exploration. Starting from
h,0Ph, for each step k∈K, first
invokes a subroutine,
(<ref>; deferred to <ref>) with the
randomized policy cover h,k-1 to produce a
feature map ϕh,k that approximates . Using
this feature map, the algorithm invokes a second subroutine,
(<ref> in <ref>) to produce a (sparsely
supported) policy distribution
Ph,k∈Δ() that acts as a generalized optimal design for the
estimated feature map ϕh,k, ensuring maximal coverage in
a certain sense; given this distribution, the algorithm defines
h,k=1/2k∑_ℓ=1^kPh,ℓ +
1/2Ph and proceeds to step k+1. Once this process
completes, a new randomized policy cover for layer h+2 is formed via Ph+2=1/K∑_k=1^K∑_π∈(Ph,k)Ph,k(π)·_π∘_h+1. To
invoke the
subroutine, makes use of additional subroutines for policy optimization
(; <ref> in
<ref>) and estimation of certain
matrix-valued functionals (; <ref>
in <ref>). The use of multiple
(K>1) inner loop iterations within this scheme is
necessary to handle certain distribution shift
issues, which we will elaborate on momentarily.
We now describe
each component of the algorithm in detail,
highlighting how they allow us to overcome the
challenges in the prequel.
Generalized optimal design
At the heart of is the notion of a generalized
optimal design as an efficient basis for exploration. We
begin by defining a generalized optimal design for an abstract of
positive-semidefinite matrices ⊆.
Given a set ⊂ and parameters γ∈(0,1/d),
C≥1, we say that a distribution P∈Δ() is a
(C,γ)-generalized optimal design for if the matrix
M_PγI_d+_W∼P*W satisfies
sup_W∈(M_P^-1W) ≤ (1+C)d.
This definition generalizes the classical notion of G-optimal
design <cit.>, which corresponds to the
special case in which each W∈ is a rank-one matrix, and where γ=C=0.
The utility of generalized optimal designs for reward-free exploration is
highlighted in the following lemma.
Let h∈[H]. If a distribution P∈Δ() over policies is a
(C,γ)-generalized optimal design for the set
_h{^π[
(_h, _h)(_h, _h) ^]|π∈},
then the distribution
P'=∑_π∈(P)P(π)·_π∘_h+1 is an
(α,η)-randomized policy cover for layer h+2 with αη/2 d A and η 4 d √((1+C)γ).
<Ref>, proven in <Ref>, shows that to compute a policy cover for layer h+2, it suffices to compute a distribution over policies that acts as a generalized optimal design for the set _h{^π[
(_h, _h) (_h, _h) ^]|π∈}⊆^d. Of course, even if is known, this observation is only useful if we
can compute a spanner without explicitly enumerating over the set
, since our goal is to develop an efficient
algorithm. In what follows, we will show:
* By applying the Frank-Wolfe method
<cit.> to a certain determinantal/volumetric objective,
it holds that for any ϕ∈Φ, a sparsely supported
generalized optimal design for the set {^π[
ϕ(_h, _h)ϕ(_h, _h) ^ ]|π∈} can be computed
efficiently whenever, for any M∈ with
*M_≤1, one can (approximately) solve policy optimization problems of the form
_π∈^π*ϕ(_h,_h)Mϕ(_h,_h)^.
* Given access to policy covers P1,…,Ph for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to the algorithm for policy
optimization (<ref>).
To handle the fact that is unknown, <ref>
uses the approach above to compute a generalized optimal design for the set {^π[
ϕh(_h, _h) ]|π∈}, where
ϕh∈Φ is a learned feature map. In what follows, we
first give a detailed overview of our optimal design computation approach, then show
how applies this approach to a feature map estimated via
representation learning.
Prior work <cit.> makes use
of elliptic planning objectives similar to the notion of optimal
design in
<ref>. An
important difference in our approach, which follows from the explicit
connection to optimal design, is that the right-hand side in
(<ref>) is bounded by an absolute (problem-dependent)
constant (d), and does not scale inversely proportional to the
target precision >0 or any sort of reachability parameter. This
property is essential to our reachability-free analysis.
Optimal design computation via approximate linear optimization
To describe generalized optimal design in , we take a brief detour
and consider an abstract approach to optimal design computation, which generalizes our problem. Suppose that we wish
to compute a spanner for an implicitly specified set of matrices
=*W^z_z∈⊆ indexed by an abstract set
. The set (which will be set to when we apply this
framework to RL) may be exponentially large, and cannot be efficiently enumerated. In addition, given z∈, we
cannot explicitly compute W^z, and have to settle for a noisy approximation.
To allow for optimal design computation, we assume access to two
oracles for the set , a linear optimization oracle :∩_(1)→ and
an index-to-matrix oracle :Δ()→. We assume
that for some _, _>0:
* For all M∈ with *M_=1, the output
ẑ_M(M) satisfies
(MW^ẑ_M) ≥sup_z∈(MW^z) - _.
* For all P∈Δ(), the output W_P(P)
satisfies
W_P - _z∼P*W^z_≤_.
Given access to oracles and with _=(γ) and _=(γ^2), the algorithm
(<ref>) computes a (C,γ)-approximate spanner for
using *γ^-2C^-2 d^-1ln (1 + 1/γ)
oracle calls. can be viewed as an application of the Frank-Wolfe
algorithm <cit.> for first-order optimization to
maximize the determinantal/volumetric objective
F(P) log(γ I_d + _z∼ P[W^z]),
which is inspired by the well-known duality of G-optimal and D-optimal
design <cit.>. Frank-Wolfe is well-suited to
our setting because it produces a sparsely supported
distribution P∈Δ(), with the sparsity bounded by the
number of iterations (d,γ^-1) and independent of
. This feature is critical for computational efficiency
when applied to RL, as the set = is too large for one to even
represent a general distribution P∈Δ() efficiently.
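For intuition, the scheme can be sketched in a few lines of Python for a small, explicitly enumerable matrix set, in which case the linear optimization oracle is exact; the step-size rule and the fixed iteration budget below are simplifications of <ref> and should be read as assumptions.

import numpy as np

def frank_wolfe_design(ws, gamma=1e-3, steps=200):
    """Sparse distribution approximately maximizing log det(gamma I + E_{W~P}[W])."""
    d = ws[0].shape[0]
    weights = {0: 1.0}                              # start with all mass on W_0
    for t in range(1, steps + 1):
        m_p = gamma * np.eye(d) + sum(p * ws[i] for i, p in weights.items())
        m_inv = np.linalg.inv(m_p)                  # gradient of the objective at P
        best = int(np.argmax([np.trace(m_inv @ w) for w in ws]))  # linear oracle
        eta = 2.0 / (t + 2)                         # standard Frank-Wolfe step size
        weights = {i: (1 - eta) * p for i, p in weights.items()}
        weights[best] = weights.get(best, 0.0) + eta
    return weights

ws = [np.outer(v, v) for v in np.eye(3)]            # three rank-one candidate matrices
print(frank_wolfe_design(ws))                       # mass spreads over all three indices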
Representation learning
Ideally, we would
like to use to construct a generalized optimal design for the set {^π[_h(_h, _h) _h(_h, _h)^]|π∈} with =.
Because we do not have access to _h, each inner loop iteration
k∈K in <ref> instead applies with {^π[ϕh,k(_h, _h)]|π∈},
where ϕh,k is a learned
representation. We now describe how the feature map
ϕh,k is learned, then show how to use these learned features to
efficiently implement the oracles (·) and (·).
To learn representations for layer h, we use the algorithm (<ref>),
which was originally introduced in
<cit.>. When invoked in each inner loop
iteration k∈K via ϕh,k = (h, ,Φ,
Ph,k-1,n_) (<ref>), the
algorithm gathers a
collection of triples (_h, _h, _h+1) by rolling in to
_h with a policy sampled from the randomized policy cover h,k-1 and selecting _h
uniformly at random, then observing the resulting state _h+1. Using this dataset, the algorithm
solves a sequence of adversarial training sub-problems
(<ref> of <ref>) which involve
the feature class Φ and an auxiliary discriminator class :
→. As we discuss in detail in the sequel, these
sub-problems, described in (<ref>),
are amenable to standard gradient-based training methods. The
sub-problems are designed to approximate the following “idealized”
min-max-min representation learning objective:
ϕh,k∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k-1^π∘_h[(
ϕ(_h, _h)w - *f(_h+1)|_h,_h)^2
].
The intuition for
this objective lies in the fact that in a Low-Rank MDP, for any function f:→, the mapping (x,a)↦[ f(_h+1)
|_h=x, _h=a ] is linear in
_h(x, a). Thus, if is sufficiently expressive, we may
hope that any ϕh,k which solves (<ref>) will approximate
well. We adopt the simple discriminator class
= { x ↦max_a∈θ^⊤ϕ(x, a) | θ∈(1), ϕ∈Φ}.
We show that solving
(<ref>) with this choice for , which is slightly
simpler than those considered in <cit.>, yields an approximation
guarantee for ϕh,k that is suitable for downstream use in
optimal design computation.
To facilitate an analysis of that does not require reachability assumptions, we use
slightly different parameter values for than in
<cit.>, and provide a tighter sample
complexity bound (<ref>) which may be of independent interest.
In more detail, prior work shows that the algorithm solves
a variant of (<ref>) with
w∈(d^1/2·(^-1)), where >0 is the desired
bound on mean-squared error. Due to the polynomial dependence on
^-1, such a guarantee would lead to vacuous
guarantees when invoked within our analysis of . Our improved
analysis of , which is based on a determinantal potential
argument, shows that w∈((d)) suffices. A secondary benefit of our improved bound is a faster rate with
respect to the number of trajectories.
Putting everything together Having learned ϕh,k
using , each inner loop iteration k∈K of applies with {^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^]|π∈},
=, C = 2, and γ chosen as a function of the
target accuracy; that is, we use the learned
representation ϕh,k as a plug-in estimate for the true representation
.[Though the policies produced by the
algorithm may not necessarily induce an optimal design for _h= {^π[
(_h, _h) ]|π∈} (this would
require a stronger coordinate-wise approximation guarantee, does not
necessarily follow from <ref>), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ ϕh,k(_h, _h)^M ϕh,k(_h, _h)]
for a given matrix M∈∩_(1), and implementing entails estimating
the second moment matrix
^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^]
for a given policy π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>), which simply samples trajectories according to π and returns the sample average of ϕh,k(_h, _h) ϕh,k(_h, _h)^.
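In code, this estimator is just an average of feature outer products over sampled trajectories. The sketch below assumes a hypothetical sampling interface and is not the actual subroutine.

import numpy as np

def estimate_second_moment(sample_state_action, phi, n_episodes):
    """Monte Carlo estimate of E^pi[phi(x_h, a_h) phi(x_h, a_h)^T].

    sample_state_action : callable returning one (x_h, a_h) pair obtained by rolling in with pi
    phi                 : callable returning the d-dimensional feature vector phi(x, a)
    """
    feats = np.stack([phi(*sample_state_action()) for _ in range(n_episodes)])
    return feats.T @ feats / n_episodes

# Toy usage with a two-dimensional feature map and a dummy sampler.
sampler = lambda: (np.array([0.3, -0.1]), 0)
feature = lambda x, a: x
print(estimate_second_moment(sampler, feature, n_episodes=10))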
To
implement (θ), we appeal to (<ref>). , given an arbitrary reward function r_1:h:×→ and a function class ⊆{g:
×→} capable of realizing all possible value
functions induced by these rewards, can use the policy covers
P1,…,Ph to efficiently compute a policy = (h,r_1:h, ,
P1:h, n) that approximately solves neurips_π∈^π[∑_t=1^h r_t(_t,_t)],
_π∈^π[∑_t=1^h r_t(_t,_t)],
and does so using polynomially many episodes; see <ref> for
details and formal guarantees.[This is the main
place where the analysis uses the inductive hypothesis
that P1:h are policy covers.] Thus, implementing (M)
for M∈∩_(1) is as
simple as invoking with the rewards neurips
r_h(x, a; M) = ϕh,k(x,a)^⊤ Mϕh,k(x,a), and r_t(x,a; θ) = 0, for t ≠ h.
r_t(x,a;M){[ ϕh,k(x,a)^⊤ Mϕh,k(x,a), for
t=h,; 0, otherwise. ].
Addressing distribution shift
With this, we have all the
ingredients needed for optimal design computation, and can prove that
Ph,k is an approximate optimal design with respect to
ϕh,k. However, we are not quite done, due to the issue of
distribution shift, which motivates the use of multiple (K>1)
inner loop iterations within . In particular, while the
objective in (<ref>) ensures that ϕh,k approximates
well under Ph,k-1, the representations may be far
away from one another under the new distribution Ph,k produced
when we invoke with ϕh,k.[If Ph were
an exact (i.e., (α,0)-) policy cover, this would be a
non-issue. However with an approximate policy cover, which is all that
one can hope for in the absence of reachability, distribution shift must
be addressed.] To address this issue, we use a potential argument <cit.>
to show that as long as K is chosen to be sufficiently large, there exists
k^⋆∈*K such that ϕh,k^⋆
(approximately) enjoys a stronger on-policy approximation guarantee:
ϕh,k^⋆∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k^⋆^π∘_h[(
ϕ(_h, _h)w - *f(_h+1)|_h,_h)^2
].
This suffices to prove that the distribution Ph+2 constructed
in is an approximate policy cover
for layer h+2.
§.§ Main Guarantee for
The following result is the main sample complexity guarantee for (<ref>).
Let δ, η∈(0,1), and suppose that realizability holds (<ref>). If = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are a
(η^3/· d^6 A^2,)-randomized policy cover with probability at least
1-δ, where 4 H d^3/2η.
The total number of episodes used by is at most:
(A^4 d^20 H^17 (d + ln (|Φ|/δ))· 1/^14).
The next corollary follows immediately from the definition of a policy cover (<ref>).
Consider the setting of <ref> and let P1:H be the distributions
produced by . Then, under the same success event as in <ref>, the collection of policies Ψ1,…, ΨH, where Ψh Ph for each h∈[H], are a (η^3/· d^6 A^2,)-policy cover in the sense of <ref>, where η/(4 H d^3/2).
<ref> is the first provable, model-free sample complexity
guarantee for general Low-Rank MDPs that is attained by an
efficient algorithm. Prior to our work, all efficient model-free algorithms required non-negative features (latent
variable structure), reachability, or stronger assumptions
<cit.>; see <ref>.
While our guarantee is polynomial in
all relevant problem parameters, improving the dependence further
(e.g., to match that of the best known inefficient algorithms) is
an interesting direction for future research.
Application to reward-based RL
By using the policy cover produced by within (<ref>),
we can optimize any downstream reward function to error using
(d,A,H,logΦ,^-1) episodes. See
<ref> for details. A technical novelty here compared to, e.g. <cit.> (who also used and policy covers to optimize downstream reward functions), is in proving that our notion of approximate policy cover (<ref>) is sufficient for downstream reward optimization in s.
Efficiency and practicality is simple and practical. Defining _(ϕ, w, f) ∑_(x, a,
x')∈ (ϕ(x,a)^⊤ w - f(x'))^2, where
is a dataset consisting of (_h,_h,_h,_h+1)
tuples, the algorithm is provably efficient whenever the adversarial
objective
ft∈_f∈max_ϕ̃∈Φ{min_w∈(3d^3/2)_(ϕt, w, f) - min_w̃∈(2d^1/2)_(ϕ̃, w̃, f) },
in <ref> of (<ref>),
can be implemented efficiently. This objective was also assumed to be efficiently solvable in
<cit.> and was empirically shown to
be practical in <cit.>.[In
addition to <ref>, also solves the
objective
ϕt+1∈_ϕ∈Φmin_(w_1,…,w_t)∈(2√(d))^t∑_ℓ=1^t _(ϕ,w_ℓ,fℓ)
in <ref> of <ref>. Compared to the
adversarial objective in (<ref>), this objective is
simpler, and only
requires minimization.] Note that both objectives
are amenable to standard gradient-based optimization techniques, and allow
the class to be over-parameterized. While a detailed
experimental evaluation is outside of the scope of this paper, we are
optimistic about the empirical performance of the algorithm in light
of the encouraging results based on the same objective in
<cit.>.
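For intuition, the two inner minimizations are ordinary least-squares problems, so the quantity being maximized over f and ϕ̃ is a gap between regression losses. The following sketch evaluates this gap on synthetic data for one fixed discriminator, ignoring the norm constraints on the weights; all names and data are illustrative, and in practice the maximization over f and ϕ̃ is handled with gradient-based training.

import numpy as np

def least_squares_loss(features, targets):
    """min_w sum_i (features_i . w - targets_i)^2, solved via the pseudo-inverse."""
    w = np.linalg.pinv(features) @ targets
    residual = features @ w - targets
    return float(residual @ residual)

def discriminator_gap(phi_t_feats, phi_tilde_feats, f_values):
    """Inner value of the adversarial objective for one candidate (phi_tilde, f).

    phi_t_feats, phi_tilde_feats : (n, d) arrays of features phi(x_h, a_h) on the dataset
    f_values                     : (n,) array of discriminator values f(x_{h+1})
    """
    return least_squares_loss(phi_t_feats, f_values) - least_squares_loss(phi_tilde_feats, f_values)

rng = np.random.default_rng(0)
phi_good = rng.normal(size=(100, 4))
f_values = phi_good @ np.array([0.5, -0.2, 0.1, 0.0]) + 0.01 * rng.normal(size=100)
phi_bad = rng.normal(size=(100, 4))                 # uninformative features
print(discriminator_gap(phi_bad, phi_good, f_values) > 0)  # True: f exposes the bad features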
Outside of representation learning, the only computational overhead in is
in the subroutine, which has runtime polynomial in all parameters. Indeed,
requires only polynomially many calls to the linear optimization oracle, instantiated as , which is
efficient whenever standard least-squares regression problems based on
the class Φ can be solved efficiently, analogous to
<cit.>. The
distributions Ph,k returned by each invocation of have
support size (d,^-1), and hence can be represented with
polynomial space memory; it follows that all of policy
distributions maintained throughout the execution of <ref> have
polynomial support size as well.
Under the setting of <ref>, if = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are such that max_h∈[H]| Ph| ≤· d^7/η^4.
Analysis and proof techniques
A significant challenge overcome by the proof of <ref> (given
in <ref>) is to show that
—despite being non-optimistic—succeeds in the absence of
reachability-type assumptions. To achieve this, we use a novel
adaptation of the extended
MDP technique introduced in the recent work
<cit.> in the context of Block MDPs. This
technique allows us to analyze in a modified version of the
true MDP which emulates certain properties of reachability; see
<ref> for details. Within the extended MDP, the crux of
the proof is to show that the
representation learning guarantee in (<ref>) is strong
enough to ensure that the downstream optimal design computation in
succeeds. It is straightforward to show that optimal design
computation would succeeds if we have access to an estimated
representation that ϕh,k that approximates
point-wise (i.e., uniformly for all (x,a) pairs), but the key challenge is that the guarantee in
(<ref>) only holds on average under the roll-in
distribution h,k-1. Prior works that make use of the same representation
learning objective ( <cit.> and
<cit.>) make use of additional structural assumptions
(non-negativity of the factorization for , and Block MDP
structure for ) to facilitate change-of-measure arguments
that address this issue. We avoid such assumptions by inductively appealing to
the optimal design objective in (<ref>), which provides a
stronger coverage guarantee compared to elliptic objectives from prior
work; see <ref>. While the high-level schema for the
proof is quite simple, there are
several subtle technical challenges that arise in analyzing in the
extended MDP, including:
* Showing that succeeds when invoked within , despite
the lack of uniform coverage.
* Proving that gives a sufficiently strong
approximation guarantee even when the weights used by the algorithm
are kept uniformly bounded throughout training; see <ref>.
* Addressing the distribution shift that occurs when the algorithm updates its policies using the
representations produced by the representation learning step.
See <ref> for
details.
§.§ Stronger Guarantees under Reachability:
The algorithm is appealing in its simplicity and
modularity. To highlight this, we use the same template to give a variant of the
algorithm, (<ref>), which obtains a tighter
sample complexity bound whenever a reachability assumption is satisfied.
Concretely, we make the following assumption.
[η-reachability]
For any h∈[H] and x∈_h,
max_π∈ d^π(x)≥η·[h](x).
<ref> generalizes and subsumes all
previous reachability-like conditions of which we are aware
<cit.>. Notably,
reachability is implied by the notion of feature
coverage <cit.> (used in the context of
transfer learning in Low-Rank MDPs), which asserts that
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤])
≥η, for some η>0. It is also implied by
explorability <cit.>, which is
similar to feature coverage, but involves the first moments of
. Our reachability assumption is also weaker than
the notion used in <cit.>
under the latent variable model, and generalizes the
notions of reachability for BMDPs <cit.>. See <ref> for details, as well as an exponential separation between <ref> and analogous assumptions in <cit.>.
follows the same template as , with two
differences. First, we remove the inner loop (which corresponds to
setting K=1 in ). Second, and more importantly, the subroutine is replaced
with a new subroutine, . Instead of computing an optimal
design, computes an alternative basis for exploration known as
a barycentric spanner <cit.>. is
an error-tolerant variant of a classical spanner computation
algorithm of <cit.>, and may be of independent
interest; we use the algorithm to compute a spanner for learned feature maps via reduction to policy
optimization. The sample complexity of improves upon ,
but its analysis leverages reachability. See <ref> for a detailed overview.
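To give a sense of the spanner computation, the following sketch (in Python) implements the classical determinant-swapping procedure for a finite, explicitly enumerated set of vectors. It is only a simplified illustration: the subroutine used here must instead work with implicitly represented sets of feature expectations, tolerate oracle error, and reduce the maximization step to policy optimization.

import numpy as np

def barycentric_spanner(vectors, C=2.0):
    """C-approximate barycentric spanner of a finite set of d-dimensional vectors,
    via the classical determinant-swapping procedure.  `vectors` is an (n, d)
    array whose rows are assumed to span R^d.  Illustrative sketch only."""
    V = np.asarray(vectors, dtype=float)
    n, d = V.shape
    B = np.eye(d)              # placeholder basis rows, replaced in phase 1
    idx = [-1] * d

    def best_replacement(j):
        # Linear-optimization step: maximize |det| over replacements of row j.
        dets = [abs(np.linalg.det(np.vstack([B[:j], v[None, :], B[j + 1:]])))
                for v in V]
        k = int(np.argmax(dets))
        return k, dets[k]

    for j in range(d):                       # phase 1: build an initial basis
        k, _ = best_replacement(j)
        B[j], idx[j] = V[k], k
    improved = True
    while improved:                          # phase 2: C-approximate swaps
        improved = False
        for j in range(d):
            k, val = best_replacement(j)
            if val > C * abs(np.linalg.det(B)):
                B[j], idx[j] = V[k], k
                improved = True
    return idx                               # indices of the spanner elements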
The main sample complexity guarantee for is as follows.
Let δ∈(0,1) be given, and suppose that realizability holds (<ref>) and that reachability (<ref>) is satisfied with parameter η>0. If = η/36 d^5/2 and = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the policies Ψ1:H
produced by (Φ, , , δ) are a
(1/4 Ad,0)-policy cover with probability at least
1-δ.
The total number of episodes used by is at most:
( A^4 d^9 H^4 (d + ln (|Φ|/δ))· 1/η^2).
The sample complexity bound in <ref> scales
with the reachability parameter η as η^-2, which
significantly improves upon the dependence on the accuracy parameter
in <ref>. The dependence on the
dimension d is also tighter. We
find this result to be notable in its own right, as even in the
presence of similar reachability assumptions, all efficient model-free
algorithms in prior work required non-negative features (latent
variable structure) or stronger assumptions
<cit.>.
A secondary benefit of lies in memory: The algorithm
maintains policy covers with support size (d,^-1), while
the policy covers used in have support size (d),
which is independent of the target accuracy.
The proof of <ref> is similar to that of
<ref>, but is somewhat simpler, and does not require
appealing to the extended MDP analysis of
<cit.>. A useful feature of our proof is to show that the notion of
reachability in <ref>, which generalizes and
extends all previous reachability conditions in the and Block
MDP literature <cit.>,
is sufficient to build an exact (i.e., (α,0)-) policy cover. We
anticipate that this observation will find broader use.
§ DISCUSSION
Our work shows for the first time how to achieve efficient, model-free
exploration in general Low-Rank MDPs. On the technical side, our
results leave open a number of interesting technical questions,
including (1) regret (as opposed to PAC) guarantees, and (2) matching the minimax rate achieved by
inefficient algorithms using an efficient
algorithm.
More broadly, our work highlights the power of non-optimistic
algorithms that explore by building policy covers. In light of this, perhaps the most interesting question
is how to extend our techniques to more general function approximation
settings beyond the Low-Rank MDP model; this will likely entail
replacing the notion of optimal design with a more general form of
exploration basis.
§.§ Acknowledgements
We thank Noah Golowich, Dhruv Rohatgi, and Ayush Sekhari for
several helpful discussions. ZM and AR acknowledge support from the ONR through awards N00014-20-1-2336 and N00014-20-1-2394, and ARO through award W911NF-21-1-0328. AB acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
§ ADDITIONAL RELATED WORK
In this section, we discuss relevant related work not already covered.
Block MDPs
A particularly well-studied special case of low-rank MDPs with the latent variable model assumed in <cit.> (defined in <Ref>) is the Block MDP (BMDP) model <cit.>. For this setting, <cit.> provide algorithms that conduct exploration in a provably oracle-efficient manner under a reachability assumption. This reachability assumption was removed by subsequent work of <cit.> (with a suboptimal rate) and <cit.> (with optimal error dependence). These works are tailored to the BMDP model, and it is unclear whether it is possible to extend them to general low-rank MDPs.
Barycentric spanners
<cit.> consider a variant of the framework in which we are given a class Υ that realizes the
next-state feature map , but do not have access to a class
Φ for the feature map , which is unknown. Their
algorithm, like , is based on barycentric spanners, though the algorithm
design considerations and analysis are significantly
different. Notably, their algorithm is not computationally efficient,
and their analysis takes advantage of the fact that realizability of
facilitates estimation of the occupancies d^π(·)_π∈ in ℓ_1-error. Barycentric spanners were also used in the work of <cit.> for reinforcement learning in Partially Observable MDPs (POMDPs). Their analysis is substantially different from ours, and their algorithm appeals to the barycentric spanner computation approach in <cit.> in an off-the-shelf fashion.
Frank-Wolfe method in RL
Similar to our work, <cit.> make use of the Frank-Wolfe method for policy cover computation, but their algorithm is tailored to the known-feature (linear MDP) framework, and the design and analysis are quite different.
PART:
Analysis of
§ ORGANIZATION OF PART:ANALYSISVOX
<ref> of the appendix contains the proof of our main
result, <ref>, as well as other proofs. This
section is organized as follows:
* <ref> contains the analysis of <ref>.
* <ref>, <ref>, and <ref> contain results we rely on in the proof of <ref>. In particular, <ref>, <ref>, and <ref> provide generic guarantees for the subroutines (<ref>), (<ref>), and (<ref>) of (<ref>), respectively.
* In <ref>, we show how an approximate policy cover can be used to optimize downstream reward functions.
* In <ref>, we present some useful structural results concerning the extended MDP introduced in <ref>.
* Finally, <ref> contains a set of helper
results used throughout the analysis.
§ ANALYSIS: PROOF OF THM:VOXMAIN
In this section, we present the full proof of the main guarantee for (<ref>). In <ref>, we define key concepts needed for the analysis. <ref>, <ref>, and <ref> give guarantees for (<ref>), (<ref>), and (<ref>) as instantiated within . <ref> gives guarantees for the subroutine within . We then combine these results in <ref> to prove <ref>.
§.§ Extended Low-Rank MDP and Truncated Policies
In this section, we present two tools, the extended MDP and a truncated policy class, that will be used throughout the analysis of , and facilitate an analysis that does not require reachability assumptions. The definitions we give generalize analogous definitions given in <cit.> for the special case of Block MDPs, though the generalization to the low-rank MDP setting is non-trivial.
Extended MDP As in <cit.>, we define the extended MDP to be the result of augmenting the true MDP by adding a set of H terminal states _1:H, and a terminal action with the property that taking from any state at layer h∈ [H-1] leads to _h+1 deterministically, and any action in ∪{} at terminal state _h transitions to _h+1 deterministically. To express as a low-rank MDP, we increase the feature dimension by 1. First, for any ϕ∈Φ, we define the extension
ϕ̅(x,a) = {[ [ϕ(x,a)^⊤, 0]^⊤∈^d+1, ∀ a∈, ∀ x∈,; e_d+1∈^d+1, a = , ∀ x∈,; e_d+1∈^d+1, ∀ a∈, x ∈{_1,…, _H}, ]. with ϕ̅^⋆ denoting the extension of ϕ^⋆. We similarly define[h](x) = {[ [[h](x)^⊤, 0]^⊤∈^d+1, ∀ x∈,; e_d+1∈^d+1, x=_h, ].
for h∈[H]. With these definitions, we formally define =(∪{_1,⋯, _H}, ∪{}, ρ, ([h])_h∈[H], (ϕ̅_h^⋆)_h∈[H]) as the extended MDP, which one can verify is indeed a low-rank MDP in d+1 dimensions.
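For concreteness, the extension above can be realized programmatically as in the following minimal sketch, in which the terminal states and the terminal action are exposed through caller-supplied handles (the names is_terminal_state and terminal_action are placeholders for this illustration, not part of the formal construction).

import numpy as np

def extend_feature_map(phi, d, is_terminal_state, terminal_action):
    """Given phi(x, a) -> R^d, return the extended map taking values in R^{d+1}:
    ordinary pairs map to [phi(x, a)^T, 0]^T, while any pair involving a terminal
    state or the terminal action maps to the extra basis vector e_{d+1}."""
    e_last = np.zeros(d + 1)
    e_last[d] = 1.0
    def phi_bar(x, a):
        if is_terminal_state(x) or a == terminal_action:
            return e_last.copy()
        return np.concatenate([phi(x, a), [0.0]])
    return phi_bar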
We let be the set of all randomized Markov policies in , with the convention that π(_h)= for all π∈ and h∈ [H]. For any policy π→, we extend it to ∪{_1, …, _H} by taking π(_h)= for all h∈[H]. Moving forward, for any h∈[H], we let _h _h ∪{_h}, and define =∪.
We denote expectations and probability laws for trajectories in by and , respectively, and for any '⊆_h, we let _h^π[']^π[_h ∈'] denote the induced law of _h under a policy π in . Furthermore, for any x∈_h, we define the occupancy measure ^π(x) _h^π/ν̅(x) as the density of ^π_h with respect to ν̅= ν +∑_h∈[H]𝕀__h.
We define Φ to be the set of all extended feature maps (as in (<ref>)) for ϕ∈Φ. In some proofs, it will be convenient to work with the restriction of the extended feature maps to their first d coordinates; for any ϕ∈Φ, we define
ϕ̃(·,·) (ϕ̅(·,·)[1], …, ϕ̅(·,·)[d])^⊤.
Finally, we extend the notion of a policy cover to the extended MDP as follows.
For α∈(0,1], η≥ 0, a distribution P∈Δ() is a (α, η)-randomized policy cover relative to Π⊆ for layer h in if
_π∼ P [^π(x)] ≥α·max_π'∈Π^π'(x), for all x∈_h such that max_π'∈Π^π'(x)≥η·[h](x).
Truncated policy class
Next, we introduce the notion of the truncated policy class, generalizing <cit.>. We begin with some preliminary definitions.
For any h ∈ [H], given a collection of policies Π'⊆, we let
_h(Π') {ϕ̃^⋆,π_h|π∈Π'}, where ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)].
Using this, we define the notion of η-reachable states relative to Π'.
For h∈[H] and a policy class Π'⊆, we define the set of η-reachable states at layer h relative to the set Π' as:
_h, η(Π') {x∈_h |∃ u ∈_h-1(Π') : [h](x)^⊤ u ≥[h](x)·η}.
Given a parameter η>0, we now define the truncated policy class _η inductively as follows: Let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t=h and x ∈_h,η(_h-1,η),; , otherwise. ].
Finally, we define _η_H,η.
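In code, a policy truncated in this way can be viewed as a simple wrapper. The sketch below captures only the spirit of the construction (divert to the terminal action outside the η-reachable set); it assumes a membership oracle is_reachable(h, x) for the reachable sets, which in the analysis are defined through the occupancy measures and never computed explicitly.

def truncate_policy(pi, is_reachable, terminal_action):
    """Wrap a policy pi(h, x) so that it follows pi on eta-reachable states and
    takes the terminal action elsewhere.  `is_reachable` is an assumed oracle."""
    def pi_truncated(h, x):
        return pi(h, x) if is_reachable(h, x) else terminal_action
    return pi_truncated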
As in <cit.>, the utility behind the extended MDP and truncated policy class is as follows:
* While the extended MDP does not necessarily enjoy the reachability property (<ref>), it emulates certain properties of reachable MDPs, but only if we compare performance to policies in _η.
* For all reward functions of interest, the best reward that can be achieved by a policy in _η is close to what can be achieved using arbitrary policies in .
§.§ Proof Overview
The proof of <ref> is inductive. For fixed h, the inductive hypothesis is that the distributions over policies P1:h+1 produced by satisfy the property:[By extending policies in to in the fashion described in <ref>, the distributions P1:h can be viewed as distribution over policies in .]
P1,… Ph+1 are (η32 dK A, η)-randomized policy covers relative to _η for layers 1 through h+1 in ,
where K is defined as in <ref>. Assuming the inductive hypothesis holds, we prove that with high probability, the distribution Ph+2 is a (η/32 dK A, η)-randomized policy cover relative to _η in for layer h+2. This inductive hypothesis is primarily used to show that , as invoked in <ref> is a valid choice for the oracle required by (that is, implements approximate linear optimization over = {^π[ ϕ(_h, _h)ϕ(_h, _h)^⊤] |π∈}, for any choice of ϕ∈Φ), which is proven in <Ref>. With this established, we instantiate the guarantee for from <ref> with and set to the instances of (<ref>) and (<ref>) in , respectively. To conclude the proof of the inductive step, we combine the guarantee for and the guarantee for in <Ref> with a change of measure argument, also enabled by the inductive hypothesis that P1:h are approximate policy covers (i.e. (<ref>)). As in <cit.>, a key feature of the analysis is that we work with the extended MDP and truncated policy class throughout the proof, only passing back to the true MDP once the induction is complete and <ref> has been proven to hold for all layers H. To pass back to the true MDP, we use the following (proven in <ref>).
Let h∈ [H], α∈ (0,1), and η >0 be given.
If P∈Δ() is an (α,η)-randomized policy cover relative to _η for layer h in , then P is an (α/2,)-randomized policy cover relative to for layer h in the true MDP , where 4 H d^3/2η.
In <ref> [resp. <ref>] we show that [resp. ], as invoked in <ref>, instantiates the approximate linear optimization oracle [resp. index-to-matrix oracle ] required by . In <ref> and <ref>, we prove guarantees for the instantiations of and within , respectively. In <ref>, we conclude the proof of <ref>.
§.§ Guarantee for as a Subroutine for
We begin by showing that , as invoked in <ref>, instantiates the approximate linear optimization oracle required by . In particular, we fix a layer h∈[H], assume that P1:h+1 satisfy (<ref>), and apply the generic guarantees for given in <Ref>.
For M ∈∩_(1) and ϕ∈Φ, define function classes '_1:h(M,ϕ) as follows:
'_t(M,ϕ) {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(√(d))}, ∀ t ∈[h-1] and '_h(M,ϕ) {r'_h(·,·; M,ϕ)} ,
where we define reward functions r'_1:h(·,·;M, ϕ) by:
∀ (x,a)∈×, r'_t(x,a;M,ϕ){[ ϕ(x,a)^⊤ M ϕ(x,a), for
t=h,; 0, otherwise. ].
With these rewards and function classes, we will show that for any M ∈∩_(1) and ϕ∈Φ, the output
= (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n)
satisfies the property that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤ M ϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + ,
with high probability once n≥ 1 is sufficiently large; recall that ϕ̃ is the restriction of to its first d coordinates, with defined as in <ref>.
Note that we can equivalently formulate (<ref>) as, for fixed M ∈∩_(1) and ϕ∈Φ, maximizing the sum of the reward functions r'_1:h(·,·;M, ϕ) in (<ref>).
Note that this matches the choice of reward functions in (<ref>) at iteration h, with ϕ = ϕh,k, the feature map returned by in <ref>.
We first verify that the function classes '_1:h(M,ϕ) realize the reward functions specified in (<ref>) in the sense of <Ref>.
For any ϕ∈Φ and M∈∩_F(1), under <ref>, the function classes '_1:h(M,ϕ) in (<ref>) realize the reward functions in (<ref>) in the sense of <ref> (in the true MDP). Furthermore:
* All functions in '_1:h(M,ϕ) take values in [-√(d), √(d)].
* max_t∈[h]ln_'_t(M,ϕ)()≤ln |Φ|+ d ln (√(d) /), where we recall that _() denotes the -covering number for a function class in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and M∈∩_(1), and let r'_t(·,·)≡ r'_t(·,·; M, ϕ) and _t'_t'(M,ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈'_h. For t<h and any π∈^t+1:h, we have from the low-rank structure that for any (x,a)∈_t×, the Q-function Q^π_t satisfies
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, note that for all y∈_t+1,
0≤^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ≤r'_t(·, ·)_∞,
≤M_·sup_x∈_t,a∈ϕ(x,a)^2, (by Cauchy-Schwarz)
≤ 1,
where the last inequality follows by the fact that ϕ(·,·)≤ 1 for all ϕ∈Φ, and that M_≤M_≤ 1. Combining (<ref>) with the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(√(d)).
Thus, by (<ref>) we have
Q_t^π(·,·) ≡ϕ^⋆_t(·,·)^⊤ w_t, with w_t ∈(√(d)).
This, together with the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈'_t, which establishes that '_1:h realize the rewards r'_1:h. The bound on the covering number _'_t() follows from a standard bound on the covering number of the ball (√(d)) <cit.>.
Combining <Ref> with <Ref> gives the following bound on the quality of as an approximate linear optimization oracle over the space of policies.
Fix δ∈(0,1) and h∈[H]. Let M∈∩_(1), ϕ∈Φ, and be the output of when given input (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n), where
* The reward functions r'_1:h(·, ·;M,ϕ) are as in (<ref>).
* The function classes '_1:h(M,ϕ) are as in (<ref>).
* The distributions P1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + _(n,δ),
where _(n,δ) cH d A√(K η^-1 n^-1 (d ln (n d^1/2)+ln (|Φ|/δ))) and c>0 is a sufficiently large absolute constant.
§.§ Guarantee for as a Subroutine for
We now state a performance guarantee for the subroutine (<Ref>), which simply estimates the second moment of the feature embedding of (_h, _h) under policy π by sampling sufficiently many trajectories and taking the empirical second moment. The following result shows that is a valid choice for the subroutine passed to within .
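As a minimal sketch of this estimator (assuming a helper rollout(pi, h) that executes one episode with policy pi and returns the layer-h state-action pair; this helper is illustrative and not part of the formal algorithm statement):

import numpy as np

def estimate_second_moment(phi, pi, h, n, rollout, d):
    """Empirical estimate of E^pi[ phi(x_h, a_h) phi(x_h, a_h)^T ] from n episodes."""
    M = np.zeros((d, d))
    for _ in range(n):
        x, a = rollout(pi, h)        # assumed helper: one episode, return (x_h, a_h)
        v = phi(x, a)
        M += np.outer(v, v)
    return M / n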
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h= (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and, with probability at least 1-δ,
M_h - ^π[ϕ(_h,_h)ϕ(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and c>0 is a sufficiently large absolute constant.
Let ϕ∈Φ and π∈. The claim that M_h ∈ follows by the fact that M_h is an empirical average of rank-1 matrices in .
Now, we show (<ref>). By a standard matrix concentration inequality (see for example <cit.>) and the fact that ϕ(x, a)ϕ(x, a)^⊤_≤ 1 for all x ∈ and a ∈ (following from ϕ(·,·)≤ 1), there exists an absolute constant c>0 such that with probability at least 1 - δ,
M_h - ^π[ ϕ(_h, _h) ϕ(_h, _h)^⊤]_≤ c ·√(log(1/δ)/n) .
Since policies in never take the terminal action, the guarantee in <ref> can also be expressed in the extended MDP as we do in the next corollary.
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h of (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and, for a sufficiently large absolute constant c>0, with probability at least 1-δ,
M_h - ^π[ϕ̃(_h,_h)ϕ̃(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and ϕ̃ is the restriction of to the first d coordinates; see <ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the instantiation of (<ref>) within .
For the rest of this section, we recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h∈[H-2], and that (Ph,k)_k∈[K] denote the distributions returned by within <ref> at iteration h∈[H-2]. We define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤].
In , we instantiate passing as and as . Combining <Ref> with the general guarantee of in <Ref>, we have the following result.
Let δ,γ∈(0,1) and K≥ 1 be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>, and that P1:h in <ref> satisfy (<ref>). Then, with probability at least 1-δ/3H:
* The number of iterations used by (<ref>) when invoked in <Ref> of <Ref> is at most T ⌈4/γ^2dlog( 1+1/γ)⌉.
* The distribution Ph,k output by is such that | Ph,k|≤ T and for Mh,k as in (<ref>), we have
sup_π∈_η^π[ ϕ̃h,k(_h,_h) ^2_( Mh,k)^-1] ≤ 3 d,
where we recall that ϕ̃h,k is the restriction of h,k to its first d coordinates, and h,k is the extension of ϕh,k to ; see <ref>.
By <Ref>, on the event that the instances of (resp. ) used by satisfy <Ref> with _=2γ/5 (resp. _ = 2 γ^2/10), the two desiderata of the lemma hold; here, we instantiate the guarantee in <ref> with C=2, which is what it is set to in <ref>. We claim that, with probability at least 1- δ/6 T H, each call to and to satisfies <Ref> with
=, _ref=_η, _=, and = {^π[ϕ̃h,k(_h,_h)ϕ̃h,k(_h,_h)^⊤] |π∈}.
Since and are called at most two times per iteration of , a union bound (see <ref>) concludes the proof contingent on the above claim.
We now prove the claim. First, note that the instance of that (<ref>) uses within <ref> is always of the form (see <ref> of <ref>):
(h, r_1:h(·, ·, M/M_), _1:h(M/M_), P1:h, n_)
with r_1:h and _1:h as in <Ref> and M ∈∖{0}; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh,k, which implies that with probability at least 1- δ/6 T K, the output of _M of the instance in (<ref>) satisfies:
max_π∈_η^π[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]- ^_M[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]
≤ cM_· H d A√(K (d ln (n_ d^1/2)+ln (6 TK|Φ|/δ))/η n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ =·η^-1γ^-2 H^2 d^2K A^2· (d + ln (|Φ|/δ)),
for = (A,d,H,log(|Φ|/δ)) sufficiently large, the of (<ref>) is bounded by 2M_γ/5, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within , by <Ref>. The result follows.
§.§ Guarantee for as a Subroutine for
In this subsection, we prove a guarantee for the instantiation of within . Recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h, and let (Ph,k)_k∈[0 K-1] and ( Ph,k)_k∈[K] be as in <ref>.
Recall that Ph,k-1∈Δ() is the distribution over policies that passes to at outer iteration h∈[H-2] and inner iteration k∈[K] to compute ϕh,k. Thus, by invoking <ref> in <ref> and using that
n_ = ·η^-5 A^2 d^10log (|Φ|/δ)
in <ref> for = (A,d,H,log(|Φ|/δ)) sufficiently large, we immediately obtain the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the class Φ satisfies <ref>. Then, with probability at least 1-δ/3HK, the instance of in <ref> of <ref> runs for t≤'· d iterations for ' = (A,d,H,log(|Φ|/δ)) sufficiently large, and outputs ϕh,k such that for all f∈, there exists w_fh,k∈(3d^3/2) satisfying
_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤' · d^4 n^-1_log
(|Φ|/δ) ≤αη^2/32,
where w_f ∫__h+1 f(y) (y) ν(y) and αη/32 d K A.
We note that by the definition of Ph,k-1 in <ref> of <ref>, <ref> implies that, with probability at least 1-δ/3HK, for all k∈[2 K], f∈ and w_f,w_fh,k∈^d as in <ref>,
1/k-1∑_ℓ=1^k-1_π∼Ph,ℓ^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤2 ' · d^4 n^-1_log
(|Φ|/δ),
We now instantiate <ref> with B=3d^3/2A^1/2, ^2 =2 ' · d^4 n^-1_log
(|Φ|/δ), πℓ = _π∼ Ph,ℓ [π] ∈, for each ℓ∈[k], and
δk=√(∑_a∈(ϕh,k(·,a)^⊤wh,k_f - ϕ_h^⋆(·,a)^⊤w_f)^2),
and make use of the following facts:
* δk_∞≤ 3d^3/2 A^1/2 (since w_f∨w_fh,k≤3 d^3/2 and ϕ_h^⋆(·,·)∨ϕh,k(·,·)≤ 1).
* <ref> sets K = · d^5A/η^2 and n_≥·η^-4A d^10log (|Φ|/δ) with = (A,d,H,log(|Φ|/δ)) sufficiently large.
This leads to the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/3H, the outputs (ϕh,k)_k∈[K] of in <ref> at iteration h of <ref> are such that for all f∈, with w_f, w_fh,k∈^d defined as in <ref>,
min_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/128 d.
§.§ Concluding the Proof of thm:voxmain
In this section, we conclude the proof of <ref>. We prove the result as a direct consequence of the following inductive statement.
Consider iteration h∈[H] of (Φ, η, ,δ) (<ref>) with parameters >0,δ, η∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The distributions P1:h+1 at the start of the hth iteration of satisfy (<ref>).
* P1:h+1 are supported on policies that never take the terminal action .
* The input parameter = (A,d,H,log(|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the distribution Ph+2 produced by (Φ,η,,δ) at the end of the hth iteration is an ( η/32 dK A,η)-randomized policy cover relative to _η in for layer h+2, where K is as in <ref>. In addition, Ph+2⊆, and | Ph+2|≤576 d^7/η^4log (1+576 d^4/η^2).
This immediately implies <ref>, which bounds the cardinality of the supports of the distributions returned by <ref>
Follows immediately from <ref>.
In a first step we prove that with probability at least 1-δ, P1,… PH are (η32 dK A, η)-randomized policy covers relative to _η for layers 1 through H in ; that is, we need to show that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>). Now, <ref> implies that P1,… PH are (η64 dK A, )-randomized policy covers relative to for layers 1 through H in the real MDP M, where 4 H d^3/2η. Plugging in the choice of K in <ref> implies the claim on P1,…, PH.
We now bound the number of trajectories <ref> requires. The total number of trajectories is equal to the sum of the number of trajectories , , and require. We know that and are called T = O(γ^-2 d) times by (<ref>) at each inner iteration k∈[K] of <ref> (γ is defined in <ref>), and is called once. Furthermore, each call to requires H · n_ trajectories, and and require n_ and n_ trajectories, respectively. Thus, the total number of trajectories is equal to
n_· H^2 K T+ n_· H K T + n_· H K
≤O(η^-13 d^27 H^4 A^4 (d + ln (|Φ|/δ))) +O(η^-14 d^28 H A ln (1/δ)) +O(η^-7 d^15 A^3 H ln (|Φ|/δ)),
where the inequality follows by the choice of parameters in <ref>.
This implies the desired bound on the number of trajectories.
Let _h, _h', and _h” denote the success events in <ref>, <ref>, and <ref>, respectively, and note that by the union bound, we have [_h ∩_h'∩”_h]≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'∩”_h.
Using <ref>, the assumption that P1:h+1 satisfy (<ref>) implies that the distributions P1, …, Ph+1 have the property that for all ℓ∈[h+1] and all x∈_ℓ,η(_η),
_π∼ Pℓ*[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π≥α·sup_π∈_η[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π, for αη/32 dK A.
We will show that with probability at least 1-δ/H, the policy distribution Ph+2 satisfies the same property:
∀ x∈_h+2,η(_η), _π∼ Ph+2*[h+2](x)^⊤ϕ̅_h+1^⋆,π≥α·sup_π∈_η[h+2](x)^⊤ϕ̅_h+1^⋆,π.
By <ref> this is equivalent to the statement that Ph+2 is an ( η/32 dK A,η)-randomized policy cover relative to _η for layer h+2 in .
Throughout the proof, for any ℓ∈[2 H] and z∈_ℓ, we define
π_z ∈_π∈_η^π(z),
and note that by <ref>, we have
π_z ∈_π∈_η[ℓ](z)^⊤ϕ̅_ℓ-1^⋆,π, where ϕ̅_ℓ-1^⋆,π^π[^⋆_ℓ-1(_ℓ-1, _ℓ-1)].
Fix x∈_h+2,η(_η).
In the remainder of the proof, we will argue that Ph+2 satisfies the coverage property <ref> for x.
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)_x^⊤ϕ̅^⋆_h+1(y,π_x(y)), where _x [θ_x^⊤, 0]^⊤ and θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(_η). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y), and w̅_x [w_x^⊤, 0]^⊤∈^d+1.
By definition of π_x, we have that for all y∈_h+1,
_x^⊤ϕ̅^⋆_h+1(y,π_x(y)) = max_a∈_x^⊤ϕ̅^⋆_h+1(y,a),
≤max_a∈_x^⊤ϕ̅^⋆_h+1(y,a), (justified below)
= max_a∈θ_x^⊤ϕ^⋆_h+1(y,a), (since y≠_h+1 and [θ̅_x]_d+1=0)
where (<ref>) follows by the facts that _x^⊤ϕ̅^⋆_h+1(y,)=0 (since ϕ̅^⋆_h+1(·,)≡ e_d+1 and [_x]_d+1=0) and that
∀ a∈, _x^⊤ϕ̅^⋆_h+1(y,a) y≠_h+1=θ_x^⊤ϕ^⋆_h+1(y,a) = [h+2](x)^⊤ϕ_h+1^⋆(y,a)/[h+2](x),
≥ 0. ([h+2](·)^⊤ϕ_h+1^⋆(y,a) is a conditional law)
(<ref>) and the fact that θ_x=1 imply that
f_x|__h+1∈,
where f_x|__h+1 denotes the restriction of f_x to _h+1. We also note that since x∈_h+2,η(_η), we have
_x^⊤ϕ̅_h^⋆, π_x = [ ∫__h+1 f_x(y) (y)^⊤ν(y), 0] ϕ̅_h^⋆, π_x, (by definition of w̅_x in (<ref>))
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_xν(y), (since (y)=[(y)^⊤, 0], for all y≠_h+1)
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_x(y), (since f_x(_h+1)=0)
=_x^⊤ϕ̅_h+1^⋆,π_x, (by definition of f_x in (<ref>))
= 1/*[h+2](x)max_π∈_η[h+2](x)^⊤ϕ̃_h+1^⋆,π, (by definition of θ̅_x in (<ref>))
≥η>0,
where (<ref>) uses the definition of reachable states _h+2,η(_η) (see <ref>); we recall (see <ref>) that ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)] and ϕ̃^⋆_h represents the restriction of ϕ̅^⋆_h to its first d coordinates.
Applying the guarantee for
Moving forward, we let (ϕh,k)_k∈[K] be the feature maps returned by within (<ref>) at iteration h, and define ϕ̅^k,π_h^π[h,k(_h,_h)], for any π∈, where we recall that h,k is the extension of ϕh,k to ; see <ref>. Further, for k∈[K], let wh,k_x be the vector wh,k_f in <ref> with f=f_x|__h+1, and note that
w_xh,k≤3d^3/2.
We will use the extended vector w̅_xh,k [(w_xh,k)^⊤,0]^⊤∈^d+1. By Jensen's inequality, we have for all k∈[K],
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤^π_x[(h,k(_h,_h)^⊤h,k_x - ϕ̅_h^⋆(_h,_h)^⊤_x)^2],
= ^π_x[(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
= ^π_x[𝕀{_h ∈_h,η(_η)}·(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
≤^π_x[𝕀{_h ∈_h,η(_η)}·∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
where the last inequality follows by the fact that h,k(·,)≡ϕ̅^⋆_h(·,) ≡ e_d+1 and [w̅_xh,k]_d+1=[w̅_x]_d+1=0 (by definition). Thus, for g(y) 𝕀{y∈_h,η(_η)}·∑_a∈(ϕ̅h,k(y,a)^⊤_xh,k - ϕ̅_h^⋆(y,a)^⊤_x )^2, (<ref>) implies that
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_x_h-1(y),
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_y_h-1(y), (by definition of π_y ((<ref>)) and (<ref>)))
≤α^-1_π∼ Ph[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (by (<ref>) with ℓ=h, and g(y)=0 for all y∉_h,η(_η))
≤ 2 α^-1_π∼Ph,k-1[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (Ph,k-1 as in <ref> of <ref>)
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2],
where (<ref>) follows by the fact that the policies in the support of Ph,k-1 never take the terminal action (by assumption) and that h,k(x,a)^⊤h,k_x - ϕ̅_h^⋆(x,a)^⊤_x=ϕh,k(x,a)^⊤wh,k_x - ϕ_h^⋆(x,a)^⊤w_x for all a∈ whenever x≠_h. We note that Ph,k-1 is the distribution over policies that passes to to compute ϕh,k. Thus, since w_x = ∫__h+1 f_x(y) (y) ν(y) (see (<ref>)) and f_x|__h+1∈ (see (<ref>)), the guarantee for in <ref> together with (<ref>), implies that (recall that we condition on the event )
∀ k∈[K], | h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x| ≤η/4.
Since _xϕ̅_h^⋆, π_x≥η (see (<ref>)), (<ref>) implies that under , we have
∀ k∈[K], _xϕ̅_h^⋆, π_x≤4/3h,k_x_h^k,π_x.
Applying the guarantee for
To proceed, define
ℓ∈_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2].
Note that by <ref>, we have,
_π∼ Ph,ℓ^π[∑_a∈(ϕh,ℓ(_h,a)^⊤wh,ℓ_x - ϕ_h^⋆(_h,a)^⊤w_x)^2] ≤η^2/128 d.
Let γ be as in <ref>, and for each k∈[K] define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤], and Mh,k[ Mh,k 0_d × 1; 0_1 × d 0 ]∈^(d+1)× (d+1).
From (<ref>), Hölder's inequality, and AM-GM, we have
_xϕ̅_h^⋆, π_x ≤4/3*w̅h,ℓ_x _ Mh,ℓ·^ℓ, π_x_h_( Mh,ℓ)^, (( Mh,k)^ denotes the pseudo-inverse of Mh,k)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^ℓ, π_x_h^2_( Mh,ℓ)^,
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[h,k(_h,_h)^2_( Mh,ℓ)^], (Jensen's inequality)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[ϕ̃h,k(_h,_h)^2_( Mh,ℓ)^-1].
By <ref> (in particular (<ref>)), we have that under the event _h”,
^π_x[ϕ̃h,k(_h,_h)^2_( Mh,ℓ)^-1] ≤ 3 d.
Combining this with (<ref>), it follows that
_xϕ̅_h^⋆, π_x ≤η/4 + 8d/η*w̅h,ℓ_x ^2_ Mh,ℓ ,
= η/4 + 8d/η·*wh,ℓ_x^2_ Mh,ℓ,
=η/4+ 8dγ/η·*wh,ℓ_x^2 + 8d/η·_π∼ Ph,ℓ^π[ ( ϕh,ℓ(_h,_h)^⊤wh,ℓ_x)^2 ],
≤η/4+ 72 d^4γ/η + 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ]+ η/8, (see below)
≤η/2+ 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ],
where (<ref>) follows by (<ref>), (<ref>), and that (a+b)^2 ≤ 2a^2 +2b^2. The last inequality follows by the parameter choice γ = η^2/576 d^4 (see <ref>).
Concluding
By the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1.
Plugging this into (<ref>), we have
_xϕ̅_h^⋆, π_x - η/2
≤16 d/η·_π∼ Ph,ℓ^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] ,
≤16 d A/η·_π∼ Ph,ℓ^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] , (see below)
= 16 d A/η·_π∼ Ph,ℓ^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
≤16 d A K/η·_π∼ Ph+2^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= 16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and (<ref>) follows by definition of Ph+2 in <ref>.
Combining (<ref>) with the fact that _xϕ̅_h^⋆, π_x≥η (see (<ref>)) yields
1/2·μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_x_h+1 ≤16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
= 16 d A K/η·_π∼ Ph+2[ μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_h+1],
where the last equality follows by the fact that policies in the support of Ph+2 never take the terminal action. This establishes (<ref>). Since this argument holds uniformly for all x∈_h+2,η(_η), the proof is completed. The bound on | Ph+2| follows immediately from <ref> and the choice of γ in <ref>.
§.§ Proof of <ref>
Let h∈ [H] and P∈Δ() be a (C,γ)-generalized optimal design (see <ref>) for the set
_h{^π[
(_h, _h)(_h, _h) ^]|π∈}.
Further, define P'=∑_π∈(P)P(π)·_π∘_h+1 and
M_PγI_d+_π∼P^π*(_h, _h)(_h, _h) ^.
We will show that P' is a (α,η)-randomized policy cover for layer h+2 with αη/2 d A and η 4 d √((1+C)γ).
Let x∈_h+2,η() and π_x ∈_π∈ d^π(x).
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y) ∈^d.
Since f_x takes values in [-1,1] (because ϕ_h+1^⋆(· , ·)≤ 1 and θ_x≤ 1), the normalizing assumption on μ^⋆_h+1 in (<ref>) implies that
w_x ∈(2√(d)).
We also note that the definitions of f_x and w_x imply that
w_x^⊤ϕ_h^⋆, π_x = θ_x^⊤ϕ_h+1^⋆,π_x = sup_π∈θ_x^⊤ϕ_h+1^⋆,π, (by definition of π_x)
= 1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π, (by definition of θ_x in (<ref>))
≥η>0,
where the penultimate inequality follows by the fact that x∈_h+2,η().
Using the generalized optimal design property
By Hölder's inequality, we have for any ν>0,
w_x^⊤ϕ_h^⋆,π_x ≤w_x_M_P·ϕ^⋆, π_x_h_M_P^-1,
≤1/2νw_x^2_M_P + ν/2ϕ^⋆, π_x_h^2_M_P^-1, (AM-GM)
≤1/2νw_x^2_M_P + ν/2^π_x[ ϕ^⋆_h(_h, _h)^2_M_P^-1], (Jensen's inequality)
= 1/2νw_x^2_M_P + ν/2(M_P^-1^π_x[ ϕ^⋆_h(_h, _h) ϕ^⋆_h(_h, _h)^⊤] ),
≤1/2νw_x^2_M_P + ν· d(1+C)/2, (P is a (C,γ)-generalized optimal design)
= γ/2νw_x^2 + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2, (by definition of M_P)
≤2γ d/ν + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2,
where the last inequality follows by the bound on w_x in (<ref>). Now, by the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1. Plugging (<ref>) into (<ref>) and rearranging, we obtain: for all ν>0,
w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2
≤1/2ν_π∼ P^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)],
≤A/2ν_π∼ P^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)], (see below)
= A/2ν_π∼ P^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and the last inequality follows by definition of P'. Now, using (<ref>), we get: for ν2 √(γ (1+C)^-1),
1/2w_x^⊤ϕ_h^⋆,π_x ≤w_x^⊤ϕ_h^⋆,π_x - η/2,
≤w_x^⊤ϕ_h^⋆,π_x -2 d√((1+C)γ), (using that γ = η^2 d^-2 (1+C)^-1/16)
≤w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2, (by the choice of ν)
≤A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ], (by (<ref>))
= A/4 √(γ (1+C)^-1)_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= Ad/η_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where the last equality uses that γ = η^2 d^-2 (1+C)^-1/16.
Rearranging implies that P' is an (η/2d A,η)-randomized policy cover for layer h+2.
§ GENERIC GUARANTEE FOR
In this section we give a generic guarantee for the (<ref>). We consider the abstract framework introduced in <ref>, in which the aim is to compute a generalized optimal design for an implicitly specified set of matrices
=*W^z_z∈⊆ indexed by an abstract set . We assume that subroutines and used by satisfy the following assumption.
[Approximation guarantee for and ]
Consider an abstract set and a collection of PSD matrices {W^z∈^d× d| z∈} indexed by elements in . There exist _,_>0 and reference subsets _ref, _⊆ such that for any M ∈ and P∈Δ(_), the outputs ẑ_M (M/M_) and W_P (P)∈ satisfy ẑ_M∈_ and
sup_z∈_ref(M W^z) ≤(M W^ẑ_M)+_·M_ , and W_P - _z∼ P[W^z]_≤_.
For our application to RL, the sets _ref and _ are useful to accommodate algorithms that optimize relative to restricted policy sets.
Given such subroutines and , and γ>0, ((·),(·), ·,γ) applies the Frank-Wolfe (conditional gradient method) to approximately solve the optimization problem
_P ∈Δ() F(P), where F(P)-log(γ I_d + _z∼ P[W^z]).
Letting {W^z | z∈} and assuming that ⊆_(1), the main result for this subsection (<ref>) bounds the number of iterations used by ((·),(·), ·,γ) under <ref> and gives a guarantee for the output.
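To illustrate the iteration, the following sketch applies the same Frank-Wolfe update to the objective F(P) = -log det(γ I_d + E_{z∼ P}[W^z]) for a finite, explicitly given family of PSD matrices, with exact linear-optimization and estimation oracles; the algorithm analyzed below instead accesses the family only through the approximate oracles of <ref>. The step size μ = Cγ^2 d/8 and the stopping rule mirror the choices used in the proof.

import numpy as np

def frank_wolfe_design(Ws, gamma, C=2.0, max_iter=1000):
    """Frank-Wolfe iteration for min_P -log det(gamma*I + E_{z~P}[W_z]) over a
    finite family Ws of PSD matrices (list of (d, d) arrays).  Exact-oracle sketch;
    returns a sparse distribution P as a dict {index: probability}."""
    d = Ws[0].shape[0]
    mu = min(1.0, C * gamma ** 2 * d / 8)            # step size used in the analysis
    P = {0: 1.0}                                     # initial point mass
    for _ in range(max_iter):
        W_P = sum(p * Ws[z] for z, p in P.items())
        M_inv = np.linalg.inv(gamma * np.eye(d) + W_P)
        # Linear-optimization step: the gradient of -log det at M_P is -M_P^{-1},
        # so the Frank-Wolfe vertex maximizes tr(M_P^{-1} W_z).
        scores = [float(np.trace(M_inv @ W)) for W in Ws]
        z_best = int(np.argmax(scores))
        if scores[z_best] <= (1 + C) * d:            # termination test
            break
        P = {z: (1 - mu) * p for z, p in P.items()}  # mix in a point mass on z_best
        P[z_best] = P.get(z_best, 0.0) + mu
    return P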
Let C∈(1,2] and γ∈(0,1) be such that γ C<5/2, and suppose that the collection {W^z | z ∈} consists of PSD matrices of Frobenius norm bounded by 1. If (<Ref>) is run with parameters C, γ and , satisfying <ref> with _=Cγ/5 and _=Cγ^2 /10, then the algorithm terminates after t ≤16 γ^-2C^-2 d^-1ln (1 + 1/γ) iterations,[While it may seem odd at first glance that the iteration complexity for scales with d^-1, we note that the non-trivial regime in <ref> is when γ≤ 1/d. This is because for γ≥ 1/d, we have (M_P^-1 W^z)≤ d for any P∈Δ() and z∈, since M_P≽ I_d/d and W^z∈∩_(1). Whenever γ≤1/d, the iteration complexity for increases with d, as expected.] and requires at most twice that many calls to each of and . Furthermore, the output P_t of is such that P_t∈Δ(_),
|supp P_t|≤ t, and
sup_z∈_ref(M_P_t^-1 W^z) ≤ (1+3C/2) · d, where M_P_tγ I_d +_z∼ P_t[ W^z].
Let F be as in (<ref>). For z∈ and P∈Δ(), define M^zγ I_d
+ W^z, W_P_z∼ P[W^z], and M_P γ I_d + W_P. Throughout the proof, we will use that the function f: M ↦ -log M defined over has the following gradient and Hessian expressions:
∇ f(M)[H] = - (M^-1 H) and ∇^2 f(M)[H,H] = (M^-1 H M^-1 H),
for all H∈^d× d.
To begin, by Taylor's theorem and the fact that the set of PSD matrices is convex, there exists λ∈[0,1] such that for any P,P'∈, defining M_λλ M_P + (1-λ) M_P'∈,
F(P') - F(P) = f(M_P') -f(M_P),
= ∇ f(M_P)[M_P'-M_P] + 1/2∇^2 f(M_λ)[M_P'-M_P, M_P'-M_P] ,
= - (M_P^-1 (W_P'- W_P)) + 1/2(M^-1_λ (W_P'-W_P) M^-1_λ(W_P'- W_P)),
≤- (M_P^-1 (W_P'- W_P)) + 1/2γ^2W_P' - W_P^2_,
where the last inequality follows because for all z∈, M^z = γ I_d + W^z≽γ I_d, since W^z∈. We also note that by definition of F in (<ref>) and the fact that ⊂∩_(1), we have
sup_P,P'∈Δ() F(P') - F(P) ≤ dln (1 + 1/γ),
since the determinant of a matrix is bounded by the product of the norms of its columns.
Bounding the number of iterations
If <ref> has not terminated at iteration ℓ≥ 1, then
(M_ℓ^-1W_ℓ)>(1+C)d,
where M_ℓ = γ I_d + (P_ℓ), W_ℓ =
(𝕀_z̃_ℓ), and z̃_ℓ =
(M_ℓ^-1/M_ℓ^-1_F). Since satisfies <ref> with _=
γ^2 C/10, we have that
M_P_ℓ - M_ℓ_∨W^z̃_ℓ - W_ℓ_≤γ^2 C/10.
Furthermore, since M_P_ℓ≽γ I_d (because ⊆), we have using Cauchy-Schwarz
rM_P_ℓ^-1· (M_ℓ - M_P_ℓ)_≤M_P_ℓ^-1_·M_P_ℓ - M_ℓ_≤γ C/10<1/4,
where the last inequality follows by the fact that γ C<5/2.
On the other hand, by <ref>, instantiated with A = M_P_ℓ and E = M_ℓ -M_P_ℓ, we have that
M_P_ℓ^-1 - M_ℓ^-1_≤M_ℓ -M_P_ℓ_/1-r·M_P_ℓ^-1_^2 ≤4/3 γ^2γ^2 C/10 , (by (<ref>), (<ref>), and M_P_ℓ≽γ I_d)
= 2C/15≤C/5.
Note also that since only returns matrices in (see <ref>), we have M_ℓ≽γ I_d, and so
M_ℓ^-1_≤1/γ.
Using (<ref>)-(<ref>) and the triangle inequality, we obtain
(M_P_ℓ^-1 W^z̃_ℓ) = ((M_P_ℓ^-1 -M_ℓ^-1) W^z̃_ℓ) + (M_ℓ^-1 (W^z̃_ℓ-W_ℓ)) + (M_ℓ^-1 W_ℓ),
> - M_P_ℓ^-1 -M_ℓ^-1_·W^z̃_ℓ_ -M_ℓ^-1_·W^z̃_ℓ-W_ℓ_ + (1+C)d, (by (<ref>))
≥ - C/5 - 1/γ·γ C/5+ (1+C)d, (by ⊆_(1) and (<ref>)-(<ref>))
≥ - C/2 + (1+C)d.
Now, recall that μ = Cγ^2 d/8. Instantiating (<ref>) with P'=P_ℓ+1 and P=P_ℓ and using (<ref>), we have
F(P_ℓ+1) ≤ F(P_ℓ) + (M_P_ℓ^-1 (W_P_ℓ- W_P_ℓ+1)) + 2/γ^2W_P_ℓ+1- W_P_ℓ^2_,
= F(P_ℓ) + μ·(M_P_ℓ^-1 (W_P_ℓ- W^z̃_ℓ)) + μ^2/2γ^2W^z̃_ℓ- W_P_ℓ^2_,
< F(P_ℓ) + μ·(C/2 - (1+C)d + (M_P_ℓ^-1 W_P_ℓ) ) + 2 μ^2/γ^2, (by ⊆_(1) and (<ref>))
≤ F(P_ℓ) - μ Cd/2 + 2μ^2/γ^2, (see below)
≤ F(P_ℓ) - γ^2 C^2 d^2/16 ,
where (<ref>) follows by the fact that (M_P_ℓ^-1 W_P_ℓ) ≤ d, and the last inequality follows by the choice of μ in <ref>. If the algorithm runs for t≥ 1 iterations, then summing (<ref>) and telescoping, we have
- (t-1) γ^2 C^2 d^2/16 > F(P_t)- F(P_1) ≥inf_P,P'∈Δ() F(P)-F(P') ≥ -d ln (1+1/γ),
where the last inequality follows by (<ref>). By rearranging, we conclude that
t < 1 + 16 γ^-2C^-2 d^-1ln (1 + 1/γ),
giving the claimed bound on the number of iterations.
Guarantee for the last iterate
Suppose the algorithm terminates at step t. Since and satisfy <ref> with _= C
γ/5, the iterates at step t satisfy (<ref>) in addition to
sup_z∈_(M_t^-1 W^z) ≤(M_t^-1 W^z̃_t) + C γM_t^-1_/5,
≤(M_t^-1 W^z̃_t) + C d^1/2M_t^-1_ /5,
≤(M_t^-1 W^z̃_t) + Cd^1/2 /5,
where the last inequality follows by (<ref>).
Combining this with the termination condition (M_t^-1W_t) ≤
(1+C)d, we have that
sup_z ∈_(M_P_t^-1 W^z)
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z)+ sup_z ∈_(M_t^-1 W^z),
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W^z̃_t) +C d^1/2/5, (by (<ref>))
= sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W_t)+ (M_t^-1 (W^z̃_t -W_t)) +C d^1/2/5,
≤sup_z ∈_M_P_t^-1 -M_t^-1_·W^z_ + (1+C)d+M_t^-1_·W^z̃_t- W_t_ + C d^1/2/5, (see below)
≤2C/15+ (1+C)d+1/γ·C γ^2/10 + C d^1/2/5, (by (<ref>)-(<ref>) and ⊆_(1))
≤ (1+3C/2)· d,
where (<ref>) follows by Cauchy-Schwarz and (M_t^-1W_t) ≤
(1+C)d. This completes the proof.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for (<ref>). Compared to previous guarantees in <cit.>, we prove a fast 1/n-type rate of convergence for , and show that the algorithm succeeds even when the norm of the weight w in <ref> does not grow with the number of iterations. We also use the slightly simpler discriminator class:
{ f: x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ}.
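To make the role of this class concrete, the sketch below carries out the kind of joint fit performed by the representation-learning step: given a finite list of already-selected discriminators, it searches over a finite feature class for the map (together with bounded weight vectors) minimizing the summed square loss against the discriminator values at the next state. The square-loss form of the per-discriminator objective and the projection step are assumptions of this illustration rather than a verbatim transcription of the algorithm.

import numpy as np

def fit_representation(Phi, discriminators, data, w_radius):
    """Search a finite feature class Phi for the map minimizing the summed square
    loss of predicting each discriminator's next-state value from phi(x, a).
    `data` is a list of (x, a, x_next) transition triples; weights are projected
    onto the ball of radius w_radius.  Illustrative sketch only."""
    best_phi, best_ws, best_loss = None, None, np.inf
    for phi in Phi:
        X = np.array([phi(x, a) for (x, a, _) in data])           # (n, d) design matrix
        total, ws = 0.0, []
        for f in discriminators:
            y = np.array([f(x_next) for (_, _, x_next) in data])
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            norm = np.linalg.norm(w)
            if norm > w_radius:                                   # crude projection onto the ball
                w = w * (w_radius / norm)
            total += float(np.mean((X @ w - y) ** 2))
            ws.append(w)
        if total < best_loss:
            best_phi, best_ws, best_loss = phi, ws, total
    return best_phi, best_ws, best_loss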
The main guarantee for is as follows.
Let h∈ [H], δ∈(0,e^-1), and n∈ℕ be given, and suppose that satisfies the normalization assumption in <ref>.
For any function f ∈, define
w_f = ∫__h+1 f(x) _h+1(x) ν(x).
Let P∈Δ() be a distribution over policies, be as in (<ref>), and
Φ be a feature class satisfying <ref>. With probability at least 1 - δ, with input (h, , Φ, P, n) terminates after t≤ T*d log_3/2 (2n d^-1/2) iterations, and its output ϕt satisfies
sup_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤_^2(n,δ),
where _^2(n,δ) c T d^3 n^-1log
(|Φ|/δ), for some sufficiently large absolute constant c>0.
To prove the theorem, we need a technical lemma, which follows from <cit.>.
Consider a call to (h, , Φ, P, n) (<ref>) in the setting of <ref>. Further, let _ be as in <ref> and define
(ϕt, wt_1,…, wt_t-1)∈_ϕ∈Φ,(w_1,…,w_t-1)∈(2√(d))^t-1∑_ℓ=1^t-1_(ϕ,w_ℓ,fℓ).
For any δ∈(0,1), there is an event t(δ) of probability at least 1-δ such that under t(δ), if <ref> does not terminate at iteration t≥ 1, then for wℓ w_fℓ:
∑_ℓ =1^t-1_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ - ϕ_h^⋆(_h,_h)^⊤ wℓ)^2] ≤ t _^2(n,δ),
inf_w ∈3/2(d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2] > 8 d t_^2(n,δ),
where ^2_(n,δ) c d^2 n^-1ln
(|Φ|/δ) and c≥1 is a sufficiently large absolute constant.
With this, we prove <ref>.
Let us abbreviate _(n,δ),
with _(n,δ) defined as in <ref>. Further, let N 1+ *d log_3/2 (2d^3/2/), δ' δ/2N, and define
__(n,δ').
Note that ≤_ and N -1 ≤ T, where T is the number of iterations in the theorem statement; the latter inequality follows by the facts that the absolute constant c in <ref> is at least 1 and ln (|Φ|/δ)≥1. We define an event 1(δ')∩…∩N(δ'), where (^t(·))_t are the success events in <ref>. Note that []≥ 1 - δ/2 by the union bound. Throughout this proof, we condition on the event .
To begin the proof, we define a sequence of vectors (v_1:dℓ)_ℓ≥ 0 in an inductive
fashion, with v_iℓ∈^d for all
i∈[d] and ℓ≥0. For ℓ=0, we let
v_i0 = e_i/d, for all i∈[d]. For
ℓ≥ 1, we consider two cases:
* Case I: If
ℓ{j ∈[d] | |(V_-jℓ-1, wℓ)|>(1+C)· |(Vℓ-1)| . }≠∅,
where
Vℓ-1 (v_1ℓ-1,…,
v_dℓ-1)∈^d× d and
wℓw_fℓ, then we let
j_j'∈ℓj' and define
v_iℓ{[ wℓ , if i=j,; v_iℓ-1, otherwise. ].
* Case II: If ℓ=∅, we let
v_iℓ = v_iℓ-1, for all i∈[d].
We first show that t≠∅ at any iteration t∈[N] where does not terminate. Let t∈[N] be an iteration where the algorithm does not terminate, and suppose that t=∅. This means that
∀ j∈[d] , |(V_-jt-1, wt)|≤ (1+C)· |(Vt-1)|.
Now, since (Vt-1)≠ 0 (note that
*(Vt) is non-decreasing with t), we have
that span( Vt-1)= ^d. Thus, there exist
β_1,…, β_d∈ such that wt=
∑_i=1^d β_i vt-1_i. By the linearity of the
determinant and (<ref>), we have
∀ j ∈[d], (1+C)|·(Vt-1)| ≥ |(V_-jt-1, wt)|,
= |(V_-jt-1, ∑_i=1^d β_i vt-1_i )|,
= *∑_i∈[d]β_i·(V_-jt-1, v_it-1),
= |β_j| · |(Vt-1)|.
This implies that |β_j|≤ (1+C) for all
j∈[d]. Now, note that by the definition of (v_it-1), we have that for any i∈[d] such that v_it-1≠ e_i/d, there exists ℓ∈ [t-1] such that wℓ= v_it-1. Let
t{i∈[d]| v_it-1≠ e_i/d},
and for any i∈t, let ℓ_i∈[t-1] be such that wℓ_i= v_it-1. Further, define
wt∑_i∈tβ_i wℓ_i= ∑_i∈tβ_i v_it-1,
and note that by the triangle inequality and the fact that wt=∑_i=1^d β_i v_it-1, we have
wt- wt≤ (1+C)_.
Finally, with the notation in (<ref>), define
wt_t ∑_i∈tβ_i wt_ℓ_i, and note that wt_t ∈ (1+C) (2d^3/2),
since |β_i| ≤ (1+C) for all i∈[d], |t|≤ d, and wt_ℓ∈(2√(d)), for all ℓ∈[t-1]. Now, by <ref>, in particular (<ref>), we have
∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ≤ t _^2,
where _ is as in (<ref>). Using the
expressions in <ref> with (<ref>) and Jensen's inequality, we have that under t,
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2]
≤(∑_j∈t |β_j|) ·∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ,
≤ (1+C) d t _^2.
Now, using (<ref>) and the facts that (a+b)^2 ≤ 2a^2 + 2 b^2 and ϕ^⋆_h_2≤ 1, we have that
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2] ≤ 2(1+C)^2 ^2 + 2(1+C)dt _^2,
≤ 2(1+C)^2 ^2_ + 2(1+C)dt _^2.
Using that C=1/2, we conclude that the right-hand side of this inequality is bounded by 8 d t_^2 which is a contradiction, since wt_t ∈ (1+C)(2d^3/2) = (3d^3/2) and by <ref>, we must have
inf_w∈(3d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2]> 8 t _
if does not terminate at round t.
Therefore, we have that t≠∅, for any
iteration t∈[2 N] where does not
terminate.
We now bound the iteration count and prove that the guarantee in
<ref> holds at termination. Note that whenever ℓ≠∅ for ℓ>1, we have by construction:
|(Vℓ)| > 3/2 · |(Vℓ-1)|.
Thus, if runs for t∈[2 N] iterations, then
|(Vt)| > (3/2)^t-1· |(V1)|.
On the other hand, since the determinant of a matrix is bounded by the product of the norms of its columns and v_1:dt∈(2√(d)), we have
|(Vt)| ≤ 2^d d^d/2.
Note also that |(V0)| = (/d)^d. Plugging this
into (<ref>), we conclude that
(3/2)^t-1 < (2d^3/2/)^d.
Taking the logarithm on both sides and rearranging yields
t < 1+ d log_3/2 (2d^3/2/)≤ N.
Thus, the algorithm must terminate after at most N-1 iterations. Furthermore, by <cit.>, we have that with probability at least 1-δ/2N, if the algorithm terminates at iteration t, then
max_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤ 32 t _^2,
≤ 32 (N-1)_^2,
≤ 32 T _^2.
Applying a
union bound completes the proof.
§ GENERIC GUARANTEES FOR
In this section, we present self-contained guarantees for (<ref>). We show that, given any reward functions r_1:h:×→_≥ 0 and function classes _1:h, where _t⊆{g: _t×→} for t∈[h], that “realize” these reward functions (we formalize this in the next definition), if P1:h are (approximate) policy covers for layers 1 through h, then for sufficiently large n≥ 1 and with high probability, the output = (h,r_1:h, _1:h, P1:h, n) is an approximate maximizer of the objective
max_π∈^π[∑_t=1^h r_t(_t,_t)].
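The dynamic-programming structure behind this guarantee can be sketched as follows: working backwards from layer h, roll in with a policy drawn from the corresponding cover, play a uniformly random action at the current layer, follow the already-constructed partial policy thereafter, regress the realized reward-to-go onto the layer-t state-action pair over the class _t, and act greedily with respect to the fit. The helper sample_trajectory and the representation of policies as Python callables are assumptions of this sketch, not part of the formal algorithm.

import numpy as np

def psdp(h, classes, covers, n, sample_trajectory, actions, rng=np.random.default_rng(0)):
    """Backward least-squares sketch.  `sample_trajectory(cover, t, a, partial_policy)`
    is an assumed helper: it rolls in to layer t with a policy drawn from `cover`,
    plays action a at layer t, follows `partial_policy` afterwards, and returns
    (x_t, reward_to_go), the layer-t state and the realized sum r_t + ... + r_h.
    `classes[t]` is a finite list of candidate functions g(x, a)."""
    partial_policy = {}                          # layer t -> greedy policy at layer t
    for t in range(h, 0, -1):
        data = []
        for _ in range(n):
            a = actions[rng.integers(len(actions))]          # uniform action at layer t
            x, reward_to_go = sample_trajectory(covers[t], t, a, partial_policy)
            data.append((x, a, reward_to_go))
        # Least-squares fit of the reward-to-go over the class G_t.
        g_t = min(classes[t],
                  key=lambda g: np.mean([(g(x, a) - y) ** 2 for (x, a, y) in data]))
        partial_policy[t] = lambda x, g=g_t: max(actions, key=lambda a: g(x, a))
    return partial_policy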
To formalize this result, we define the notion of realizability we require for the function classes _1:h.
We say that function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize reward functions r_1:h:×→ if for all t∈[h] and all π∈^t+1:h,
Q_t^π∈_t, where Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that Q^π_t in (<ref>) represents the state-action value function (Q-function) at layer t∈[h] with respect to the rewards r_1:h and partial policy π.
In what follows, given a function class ⊆{g: ×→}, we use _() to denote the -covering number of in ℓ_∞ distance.
A set of functions {g_1, …, g_N}⊂{g: ×→} is an -cover of ⊆{g:×→} in ℓ_∞-distance if for all g∈, there exists i ∈ [N] such that
g - g_i_∞≤.
The -covering number _() is the size N of the smallest -cover of .
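As a simple illustration (not used in the analysis), the greedy procedure below produces a valid, though not necessarily minimal, cover in ℓ_∞-distance for a function class represented by its values on a fixed finite grid of inputs; the size of its output therefore upper-bounds the covering number of that finite surrogate.

import numpy as np

def greedy_linf_cover(values, eps):
    """Greedy eps-cover in l_infinity distance.  `values` is an (N, m) array whose
    i-th row holds the function g_i evaluated on a fixed grid of m inputs.
    Returns indices of the chosen cover elements."""
    V = np.asarray(values, dtype=float)
    cover_idx = []
    remaining = list(range(V.shape[0]))
    while remaining:
        i = remaining[0]
        cover_idx.append(i)
        # Keep only the functions not yet covered within eps of the new center.
        remaining = [j for j in remaining if np.max(np.abs(V[j] - V[i])) > eps]
    return cover_idx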
§.§ Intermediate Results for
To prove our main guarantees for (stated in the next subsection), we first state two intermediate lemmas. The first shows that for any policy π, the corresponding Q-function is the Bayes-optimal predictor for the regression problem solved in when π is executed.
Let reward functions r_1:h:×→, P∈Δ(), and ∈^t+1:h be given. Fix t∈[h], and let g^P,_ denote the Bayes-optimal predictor[Observe that because this loss is strongly convex with respect to the prediction, the Bayes-optimal predictor is unique up to sets of measure zero.] for the sum of rewards under a policy π sampled from P and composed with via π∘_t∘_t+1; that is,
g^P,_∈_ g : _t ×→_π∼ P^π∘_t π_∘_t+1[( g(_t, _t) - ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) )^2].
Then, g^P,_(·,·)≡ Q^_t(·,·), where Q^_t is the Q-function defined in (<ref>) for the partial policy ∈^t+1,h and rewards r_1:h.
The least-squares solution g^P,_ of the problem in (<ref>) satisfies, for all a∈ and x∈_t,
g^P,_ (x,a) = _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) | _t =x ,_t =a ],
= [ r_t(_t,_t)|_t = x,_t = a]+ _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a],
= r_t(x,a) +^[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a], (see below)
= Q_t^(x,a),
where (<ref>) follows by the fact that conditioned on (_t,_t)=(x,a), the sum of rewards ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) depend only on and not on the policy used to roll-in to layer t.
The next lemma shows that the solution t to the least-squares problem in (<ref>) of <ref> is close to the Q-function in the appropriate sense.
Let δ∈(0,1), B>0, n≥ 1, and h ∈[H] be fixed. Further, let (_, r_1:h, _1:h, P1:h) be such that
* _(n,δ)^2 = cB^2A/n (max_t∈[h]ln__t(1/n)+ln (n/δ)), where c>0 is a sufficiently large absolute constant.
* The function classes _1:h realize the reward functions r_1:h: ×→ (in the sense of <Ref>).
* The functions in _1:h are bounded in absolute value by B uniformly.
* P1,…,Ph∈Δ().
Then, for t∈[h], the solution t to the least-squares problem in (<ref>) in <ref> when invoked as (h, r_1:h, _1:h, P1:h, n) satisfies with probability at least 1-δ,
_π∼ Pt^π[ max_a∈( t(_t,a) - Q_t^t+1(_t, a) )^2 ]≤^2_(n,δ),
where t+1∈^t+1:h is defined as in <ref>.
Fix t∈[h] and abbreviate
gt_ g^Pt,t+1_,
where g^Pt,t+1 is defined as in <ref> (with P= Pt, = t+1, and reward functions r_1:h as in the lemma statement). By <ref>, gt_ is the Bayes-optimal solution to the least-squares problem in (<ref>) of <ref>. Thus, since _1:h realize the reward functions r_1:h, a standard uniform-convergence guarantee for least-squares regression (see e.g. <cit.> with = 0 almost surely) implies that there exists an absolute constant c>0 (independent of t,h, and any other problem parameters) such that with probability at least 1-δ,
_π∼ Pt^π∘_tπ_∘_t+1t+1[ ( t(_t,_t) - gt_(_t,_t) )^2 ]≤ c· B^2 ·ln__t(1/n)+ln (n/δ)/n.
Since actions at layer t are taken uniformly at random, (<ref>) implies that
_π∼ Pt^π∘_tπ_∘_t+1t+1[ max_a∈( t(_t,a) - gt_(_t,a) )^2 ]≤ c· B^2A ·ln__t(1/n)+ln (n/δ)/n.
The desired result follows by observing that:
* For all (x,a)∈_t×, gt_(x,a)=Q^t+1_t(x,a), by <ref>.
* The term max_a∈( t(_t,a) - gt_(_t,a) )^2 in (<ref>) does not depend on the actions _t:h, and so the expectation _π∼ Pt^π∘_tπ_∘_t+1t+1· can be simplified to _π∼ Pt^π·.
§.§ Main Guarantee for With Non-Negative Rewards
We now state and prove the main guarantee for used within <ref>, which is stated with respect to the extended MDP defined in <ref>. This result requires non-negative rewards. For the rest of this section, we make use of the extended MDP notation and definitions introduced in <ref>. In addition, given non-negative reward functions r_1:h×→_≥ 0, we define their extensions r̅_1:h in as
r̅_t(x,a){[ r_t(x,a), (x,a)∈_t×; 0, if x= or a=. ].
With this, we now state the guarantee of .
Let α, δ,η∈(0,1), B>0, and h∈[H] be given. Consider reward functions r_1:h: ×→_≥ 0, function classes _1:h, policy distribution P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref> with respect to the true MDP), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,η)-randomized policy cover relative to _η for layer t in (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> (when applied to the true MDP), satisfies the following guarantee for r̅_1:h as in (<ref>):
max_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] ≤^[∑_t=1^hr̅_t(_t,_t)] + _(n,δ),
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define extensions of Q-functions to the extended MDP using the extended rewards r̅_1:h in (<ref>); for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t in the extended MDP with respect to the extended rewards r̅_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r̅_t(x,a)+^π[.∑_ℓ=t+1^hr̅_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that for any partial policy π∈^t+1:h that never takes the terminal action, we have
Q^π_t(x,a)= {[ Q^π_t(x,a)≥ 0, if (x,a)∈_t ×,; 0 , if x = or a = , ].
where the fact that Q^π_t(·,·)≥ 0 follows because the rewards are non-negative. Further, for the function ĝt in <ref>, we define its (clipped) extension
g̅t(x,a){[ max(0,ĝt(x,a)), if (x,a)∈_t ×,; 0 , if x = or a = . ].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH),
where π_⋆∈_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] is the optimal policy with respect to the truncated policy set _η (definition in <ref>) and Q^π_t is the Q-function defined in (<ref>). Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref>) and the union bound to obtain the desired result.
Let π_⋆∈_π∈_η^π[∑_ℓ=1^h r̅_ℓ(_ℓ,_ℓ)]. Observe that the following properties hold:
* For all x∉_t,η(_η), π_⋆(x)= (by definition of _η); and
* For all policies π∈^t+1:h that never take the terminal action, Q^π_t(·,)≡ 0 ≤min_a∈, y∈_tQ^π_t(y,a) (see (<ref>)),
As a result, we have that for any t∈[h] and _t,η_t,η(_η),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤^π_⋆[ 𝕀{_t ∈_t,η}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
=
^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the facts that:
* t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>).
* g̅t(·, )≡ 0 ≤g̅t(·, a), for all a∈, by definition of g̅t in (<ref>).
Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤ 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ],
= 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ], (since Q^t+1_t(·,)≡g̅t(·,)≡ 0)
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,η}·max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^(x) ν̅(x)),
≤ 2 √(α^-1∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 _π∼ Pt[^π(x)] ν̅(x)), (justified below)
≤ 2 √(α^-1_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^π(x) ν̅(x)]), (Fubini's theorem)
= 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]),
= 2√(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-max(0,t(_t,a)))^2 ]),
≤ 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,η)-cover relative to _η for layer t in and π_⋆∈_η, and (<ref>) follows because:
* The policies in the support of Pt never take the terminal action; and
* | Q^t+1_t(x',a')-t(x',a')| = | Q^t+1_t(x',a')-max(0,g̅t(x',a'))|, ∀ (x',a')∈_t× (see (<ref>) and (<ref>)).
Finally, (<ref>) follows by the fact that the Q-functions are non-negative (since the rewards are non-negative), and so replacing max(0,ĝt(_t,a)) by ĝt(_t,a) on the right-hand side of (<ref>) only increases the value of the latter.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋂_t=1^h_t, we have that
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)] ≤ 2H α^-1/2_(n,δH).
The desired result follows from the union bound, which gives []≥ 1-δ.
Let π,∈ be policies, and assume that π never takes the terminal action. Let Q_t^π be defined as in (<ref>). Then for any h≥ 1,
^[ ∑_t = 1^h r̅_t(_t, _t) ] - ^π[ ∑_t = 1^h r̅_t(_t, _t) ] = ∑_t= 1^h ^[Q_t^π(_t, (_t)) - Q_t^π(_t, π(_t)) ].
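For intuition, this identity can be verified exactly on a small synthetic tabular MDP (ignoring the extended-MDP bookkeeping with terminal states and actions used above); the Python sketch below computes both sides by backward induction and forward occupancy propagation. All names and the random instance are illustrative only, not objects from the paper.

import numpy as np

rng = np.random.default_rng(0)
H, S, A = 4, 5, 3
P = rng.dirichlet(np.ones(S), size=(H, S, A))      # P[t, x, a, :] = transition distribution
r = rng.uniform(0.0, 1.0, size=(H, S, A))          # layer-wise rewards
rho = rng.dirichlet(np.ones(S))                     # initial state distribution
pi = rng.integers(A, size=(H, S))                   # comparator policy (deterministic)
pi_hat = rng.integers(A, size=(H, S))               # evaluated policy (deterministic)

# Q^pi_t by backward induction: Q^pi_t(x, a) = r_t(x, a) + E^pi[ sum_{l > t} r_l | x_t = x, a_t = a ].
Q = np.zeros((H, S, A))
for t in reversed(range(H)):
    V_next = Q[t + 1, np.arange(S), pi[t + 1]] if t + 1 < H else np.zeros(S)
    Q[t] = r[t] + P[t] @ V_next

def total_return(policy):
    d, total = rho.copy(), 0.0
    for t in range(H):
        total += np.sum(d * r[t, np.arange(S), policy[t]])
        d = d @ P[t, np.arange(S), policy[t]]        # push the state occupancy forward
    return total

# Right-hand side: sum_t E^{pi_hat}[ Q^pi_t(x_t, pi_hat(x_t)) - Q^pi_t(x_t, pi(x_t)) ].
d, rhs = rho.copy(), 0.0
for t in range(H):
    rhs += np.sum(d * (Q[t, np.arange(S), pi_hat[t]] - Q[t, np.arange(S), pi[t]]))
    d = d @ P[t, np.arange(S), pi_hat[t]]

print(total_return(pi_hat) - total_return(pi), rhs)  # the two quantities coincide up to float error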
§.§ Main Guarantee for With Signed Rewards
We now state and prove a guarantee for in the true MDP , when invoked with signed rewards. We make use of the following lemma, which bounds the total probability mass for the set of states that are not reachable with sufficiently high probability.
For any t∈[H], it holds that
sup_π∈^π[_t ∈_t ∖_t,η()] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(), we have that
∀ x∈_t ∖_t,η(), sup_π∈ d^π(x) ≤η·μ^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(), we obtain
sup_π∈^π[_t ∈_t ∖_t,η()] = sup_π∈∫__t ∖_t,η() d^π(x) ν(x),
= η·∫__t ∖_t,η()μ^⋆_t(x)ν(x), (by (<ref>))
≤η·∫__tμ^⋆_t(x)ν(x),
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
With this, we now state the guarantee of .
Let α, δ,∈(0,1), B,B_1:h>0, and h∈[H] be given. Consider reward functions r_1: _1×→ [-B_1,B_1],…,r_h: _h×→ [-B_h,B_h], function classes _1:h, distributions over policies P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref>), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,)-randomized policy cover for layer t (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> satisfies the following guarantee:
max_π∈^π[∑_t=1^hr_t(_t,_t)] ≤^[∑_t=1^hr_t(_t,_t)] + _(n,δ) + 2 h d^3/2·∑_t=1^h B_t,
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define the Q-functions for the reward r_1:h; for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t with respect to the rewards r_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^hr_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH) + 2 d^3/2·∑_ℓ=1^h B_ℓ,
where π_⋆∈_π∈^π[∑_t=1^h r_t(_t,_t)] is the optimal policy. Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref> instantiated in the true MDP) and the union bound to obtain the desired result.
Let π_⋆∈_π∈^π[∑_ℓ=1^h r_ℓ(_ℓ,_ℓ)]. We have that for any t∈[h] and _t,_t,(),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
= ^π_⋆[𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ]
+ ^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ].
We now bound the last term in (<ref>). Note that by the range assumption on the rewards r_1:h and the definition of the Q-function, we have Q^π_t(x,a)∈ [-∑_ℓ=t^h B_ℓ, ∑_ℓ=t^h B_ℓ], for all π∈^t+1:h. Thus, we have
^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ] ≤ 2^π_⋆[_t ∈_t ∖_t,] ·∑_ℓ=t^h B_ℓ,
≤2 · d^3/2·∑_ℓ=1^h B_ℓ,
where the last inequality follows by <ref>.
Plugging (<ref>) into (<ref>) and using that B_1:h≥ 0 implies that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤^π_⋆[ 𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
=
^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the fact that t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>). Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤ 2 ·^π_⋆[𝕀{_t ∈_t,}·max_a∈| Q^t+1_t(_t,a)-ĝt(_t,a)| ],
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,}·max_a∈( Q^t+1_t(_t,a)-ĝt(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^(x) ν(x)),
≤ 2 √(1/α∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 _π∼ Pt[d^π(x)] ν(x)), (justified below)
≤ 2 √(1/α_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^π(x) ν(x)]), (Fubini's theorem)
= 2√(1/α·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,)-randomized policy cover for layer t.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋂_t=1^h_t, we have that
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)] ≤ 2 H α^-1/2_(n,δH) +2 hd^3/2·∑_t=1^h B_t.
The desired result follows from the union bound, which gives []≥ 1-δ.
§ APPLICATION TO REWARD-BASED RL
In this section, we show how the output P1:H of (<ref>), which is a (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (see <ref>), can be used to optimize downstream reward functions r_1:H; our treatment also applies to (for Ph(Ψh) for all h∈[H]). Since the output of is a randomized policy cover, one way to optimize the sum of rewards S_H ∑_h=1^H r_h is by first generating trajectories using policies in P1:H, then applying an offline RL algorithm, e.g. Fitted Q-Iteration () <cit.>, to optimize S_H. It is also possible to use with the randomized policy cover P1:H to achieve the same goal. We will showcase the latter approach, since we can make use of the guarantees for given in <ref>.
As in <ref>, we assume access to a function class _1:H, where _h ⊆{g: _h×→} for each h∈[H], that realize the rewards r_1:H in the following sense: for all h∈[H] and all π∈^h+1:H,
Q_h^π∈_h, where Q^π_h(x,a) r_h(x,a)+^π[.∑_t=h+1^H r_t(_t,_t) | _h=x,_h=a].
Note that when the reward functions r_1:H are linear in the feature map ; that is, when for all h∈[H] and (x,a)∈_h×,
r_h(x,a)=θ_h^⊤(x,a)
for some θ_h∈(1) (this is a common assumption in the context of RL in Low-Rank MDPs <cit.>), then the function classes _1:H, where
∀ h∈[H], _h = {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(2H√(d))},
realize r_1:H. We show this claim next.
Under <ref>, the function classes _1:H in (<ref>) realize the reward functions in (<ref>). Furthermore, the functions in _1:H are uniformly bounded by 2√(d)H, and ln__h()≤ln |Φ|+ d ln (2√(d)H /), for all h∈[H], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
For h=H, we clearly have that for any π∈^H:H, Q^π_H(·,·)=r_H(·,·)∈_H. For h<H and π∈^h+1:H, we have, by the low-rank MDP structure and the expression of the rewards in (<ref>), that
Q^π_h(x,a) =r_h(_h,_h)+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·ϕ^⋆_h(x,a)^⊤μ_h+1^⋆(y) ν (y),
= ϕ^⋆_h(x,a)^⊤( θ_h + ∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y)).
Now, by the fact that ^π[∑_t=h+1^H r_t(_t,_t)|_h+1=y,_h+1=π(y)] ∈ [-H-h,H-h], for all y∈_h+1 (since the rewards take values between -1 and 1 thanks to ϕ(·,·),θ_h∈(1), for all h∈[H]), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_h+1→0,1, *∫__h+1[h+1](y)g(y) ν(y)≤√(d)), we have that
w_h θ_h+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y) ∈(2H√(d)).
This, together with (<ref>) and the fact that [h]∈Φ (by <ref>), implies that Q_h^π∈_h. The bound on the covering number __h() follows from a standard bound on the covering number of the ball (2H√(d)) <cit.>.
Combining <Ref> with <Ref> results in the following guarantee for .
Let α,,δ∈(0,1) be given and fix h∈[H]. Let be the output of when given input (H, r_1:H, _1:H, P1:H, n), where
* The reward functions r_1:H are as in (<ref>), with θ_1:H∈(1)
* The function classes _1:H are as in (<ref>).
* For each 1≤ h≤ H, it holds that Ph is a (α,)-randomized policy cover for layer h (see <ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈^π[∑_h=1^H r_h(_h,_h)]≤^[∑_h=1^H r_h(_h,_h)] + c H^2 √(d A · (d log(2n √(d)H) +ln (n|Φ|/δ)) /α n ) + 2 H^2 d^3/2,
for a sufficiently large absolute constant c>0.
By using that the distributions returned by are an (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (<ref>), we obtain the claimed sample complexity for <ref> in <ref>.
§ STRUCTURAL RESULTS FOR EXTENDED LOW-RANK MDP
In this section, we present some structural results involving the extented MDP and truncated policy class defined in <ref>. First, we recall the definition of the truncated policy class. Given a parameter η>0, let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t=h and x ∈_h,η(_h-1,η),; , otherwise, ].
where for a set of policies Π'⊆, we let
_h, η(Π') {x∈_h | max_π∈Π'^π(x) ≥μ̅_h^⋆(x)·η. }.
Note that this matches the definition in (<ref>) because [μ̅^⋆_h(x)]_d+1=0, for all x≠_h. Finally, we let _η_H,η.
The next lemma bounds the probability of the set of states that are not reachable with sufficiently high probability.
Under the normalization assumption (<ref>), we have that for any t∈[H],
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(_η), we have that
∀ x∈_t ∖_t,η(_η), sup_π∈_η^π(x) ≤η·^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(_η), we obtain
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] = sup_π∈_η∫__t ∖_t,η(_η)^π(x) (x),
= η·∫__t ∖_t,η(_η)μ̅^⋆_t(x)(x), (by (<ref>))
≤η·∫__tμ̅^⋆_t(x)ν̅(x),
= η·∫__tμ^⋆_t(x)ν(x), (since [_t(x)]_d+1=0, ∀ x ≠_t)
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
The next lemma generalizes <cit.> to s.
For all h ∈[H], x∈_h, and ℓ∈[h H], we have max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x). Further,
∀ x∈_h, max_π∈_h-1, η^π(x) = max_π∈_η^π(x) .
We will show that for all ℓ∈[hH],
∀ x∈_h, max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x).
This implies (<ref>) by summing both sides of (<ref>) over ℓ=h,…, H, telescoping, and using that _η=_H, η. To prove the result, let ℓ∈[hH], x∈_h, and π̃∈_π'∈_ℓ-1,η^π'(x). Further, let π∈_ℓ, η be as in (<ref>) with π'=π̃. In this case, by (<ref>), we have π̃(x')=π(x'), for all x'∈_τ, and τ≤ [ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ-1,η^π̆(x) =^π̃(x)= ^π(x) ≤max_π̆∈_ℓ, η^π̆(x).
We now show the inequality in the other direction. Let ℓ∈[hH], x∈_h, and π̃∈_π̆∈_ℓ,η^π̆(x). Further, let π'∈_ℓ-1, η be as in (<ref>) for π = π̃. In this case, by (<ref>), we have π̃(x')=π'(x') for all x'∈_τ and τ∈ [ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ,η^π̆(x) =^π̃(x)= ^π'(x) ≤max_π̆∈_ℓ-1, η^π̆(x).
This shows (<ref>) and completes the proof.
Using <ref> and the definition of _h,η(·) in (<ref>), we obtain the following corollary.
For all h∈[H], it holds that
_h,η(_h-1,η) = _h,η(_η).
The next lemma quantifies the “cost of truncation” incurred by optimizing reward functions using policies in the truncated class _η instead of the full policy class.
Let η∈(0,1), and B_1:H>0, and consider reward functions r_1: _1×→ [-B_1,B_1],…,r_H: _H×→ [-B_H,B_H]. We have
sup_π∈_η^π[ ∑_h=1^H r̅_h(_h,_h) ] ≥sup_π∈^π[ ∑_h=1^H r̅_h(_h,_h) ] - 2 H d^3/2η∑_h=1^H B_h,
where, for each h∈[H], r̅_h(x,a)=r_h(x,a) for all (x,a)∈_h×, and r̅_h(x,a)=0 when x=_h or a=.
Let r̅_1:H be the “extended” reward functions as in the lemma's statement. Let h∈[H] and π_h-1∈_π∈_h-1,η^π[∑_h=1^H r̅_h(_h,_h)]. Further, define π_h as π∈_h,η in (<ref>) with π'=π_h-1. Note that since for all t∈[h-1] and x∈_t, π_h(x)=π_h-1(x) (by (<ref>)), we have
^π_h-1[∑_t=1^h-1r̅_t(_t,_t)] = ^π_h[∑_t=1^h-1r̅_t(_t,_t)].
On the other hand, for _h,η_h,η(_h-1,η) we have
^π_h-1[∑_t=h^H r̅_t(_t,_t)]
= ^π_h-1[∑_t=h^H r̅_t(_t,_t)],
= ^π_h-1[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)]+ ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] ,
= ^π_h[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] , (by definition of _h,η and π_h)
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)],
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)],
where the last equality follows by the fact that I) if _h =_h, then _t=_t for all t∈ [h H], and II) r̅_t(,·)≡ 0, for all t∈ [h … H]. Now, using the range assumption on the rewards, we get
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)] +(^π_h[_h ∈_h ∖_h,η] + ^π_h-1[_h ∈_h ∖_h,η]) ∑_t=h^H B_t.
On the other hand, by <ref> and the fact that π_h-1∈_h-1,η and π_h∈_h,η, we have that
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η].
Furthermore, by <ref>, we have _h,η = _h,η(_η). Combining this with (<ref>) and <ref>, we get
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η(_η)] ≤η d^3/2.
Plugging this into (<ref>) and using (<ref>) implies that
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)]+ 2 η d^3/2∑_h=1^H B_h.
Summing both sides of (<ref>) for h=1,…, H, telescoping, and using that _0,η= and _H,η= _η, we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2∑_h=1^H B_h.
Using this, we now prove <ref>, which allows us to transfer any guarantees in the extended MDP and truncated policies _η back to the original MDP with the unrestricted policy class .
Fix h∈[H], and let y∈_h be such that μ_h^⋆(y)>0. To prove <ref>, we will instantiate <ref> with rewards (r_t) given by
r_t(x,a) = {[ μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ^⋆_h-1(x,a), if t=h and (x,a)∈_h×,; 0, otherwise. ].
We define the extended rewards (r̅_t) such that for all t∈[H], r̅_t(x,a)=r_t(x,a) for all (x,a)∈_t×, and r̅_t(x,a)=0 when x=_t or a=. By applying <ref> (with B_h =1 and B_t=0 for all t≠ h) and using that |r_h(·,·)|≤ 1 (since ϕ^⋆_h-1(·, ·)≤ 1), we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2.
On the other hand, the definition of (r_t) implies that for any π∈,
^π[∑_t=1^Hr̅_t(_t,_t)] = μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ̃^⋆,π_h-1,
where ϕ̃^⋆,π_h-1^π[ϕ̃^⋆_h-1(_h-1,_h-1)] and ϕ̃^⋆_h-1 is the restriction of ^⋆_h-1 to its first d coordinates (^⋆_h-1 is defined in <ref>). Now, since y≠_h, we have [μ̅_h^⋆(y)]_d+1=0, and so μ^⋆_h(y)^⊤ϕ̃^⋆,π_h-1= ^⋆_h(y)^⊤ϕ̅^⋆, π_h-1. Thus, plugging this into (<ref>) and using <ref>, we get
∀π∈, ^π[∑_t=1^Hr̅_t(_t,_t)] = _h^⋆(y)^⊤/μ_h^⋆(y)ϕ̅^⋆,π_h-1= ^π(y)/μ^⋆_h(y).
Plugging this into (<ref>) and using that ⊆, we have
max_π∈d^π(y)/μ^⋆_h(y) =max_π∈^π(y)/μ^⋆_h(y)≤max_π∈^π(y)/μ^⋆_h(y)≤max_π∈_η^π(y)/μ^⋆_h(y) + 2Hη d^3/2.
Now, suppose that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. By (<ref>), this implies that
max_π∈_η^π(y)/μ^⋆_h(y)≥ 2H η d^3/2≥η,
and so since P is a (α,η)-randomized policy cover relative to _η for layer t in , we have that
max_π∈_η^π(y)/μ^⋆_h(y)≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)].
Combining this with (<ref>) implies that
max_π∈d^π(y)/μ^⋆_h(y) ≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] + 2Hη d^3/2,
≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] +1/2max_π∈d^π(y)/μ^⋆_h(y),
where the last inequality follows by the fact that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. Rearranging the previous display and using that ^π(·)≡ d^π(·) for all policies π that never take the terminal action, we get:
α/2max_π∈d^π(y)/μ^⋆_h(y)≤_π∼ P^π[d^π(y)/μ^⋆_h(y)].
This shows that P is a (α/2, 4 Hη d^3/2)-randomized policy cover.
§ HELPER LEMMAS
For any h∈[2 H], x∈_h, and π∈, we have
d^π(x) = [h](x)^⊤ϕ^⋆, π_h-1, where ϕ^⋆, π_h-1^π[ϕ^⋆_h-1(_h-1,_h-1)],
Let δ∈(0,1) and H≥ 1 be given. If a sequence of events _1,…,_H satisfies [_h|_1,…,_h-1]≥1-δ/H for all h∈[H], then
[_1:H]≥1-δ.
By the chain rule, we have
[_1:H] = ∏_h∈[H][_h|_1,…,_h-1] ≥∏_h∈[H] (1-δ/H) =(1-δ/H)^H ≥ 1-δ.
The normalization assumption in (<ref>) has the following useful implication.
For any h∈[H], if the normalization condition (<ref>) holds, then
∫__hμ^⋆_h(x)ν(x) ≤ d^3/2.
For each i∈[d], if we define g(x)sgn([μ^⋆_h(x)]_i), we have
∫__h |[μ^⋆_h(x)]_i| ν (x) = ∫__h g(x) · [μ^⋆_h(x)]_i ν (x),
= √((∫__h g(x) · [μ^⋆_h(x)]_i ν (x))^2),
≤√(∑_j∈[d](∫__h g(x) · [μ^⋆_h(x)]_j ν (x))^2),
= ∫__h g(x) ·μ^⋆_h(x)ν(x) ,
≤√(d).
Therefore, we have
∫__hμ^⋆_h(x)ν (x)≤∑_i∈[d]∫__h |[μ^⋆_h(x)]_i| ν (x)≤ d^3/2.
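This bound can be sanity-checked numerically on a finite state space, where the supremum over test functions g: _h→[-1,1] is attained at sign vectors. The snippet below is illustrative only: μ, d, and the state space are synthetic stand-ins, and the random instance is normalized so that the assumed condition holds before the d^3/2 bound is verified.

import itertools
import numpy as np

rng = np.random.default_rng(1)
d, n_states = 3, 8
mu = rng.standard_normal((n_states, d))                    # rows play the role of mu*_h(x)

# Enforce the normalization: sup over sign vectors g of || sum_x g(x) mu(x) || = sqrt(d).
signs = np.array(list(itertools.product([-1.0, 1.0], repeat=n_states)))
sup_norm = max(np.linalg.norm(s @ mu) for s in signs)
mu *= np.sqrt(d) / sup_norm

total = np.linalg.norm(mu, axis=1).sum()                    # plays the role of the integral of ||mu(x)||
print(total, "<=", d ** 1.5, total <= d ** 1.5 + 1e-9)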
Next, we show that the coverability constant <cit.> for s is bounded by d.
For all h∈[H], there exists a measure ρ_h on _h × such that
sup_(x,a)∈_h×sup_π∈d^π(x,a)/ρ_h(x,a)≤ d.
Consider layer h+1. By definition, for x ∈_h+1 we have that for any π, d^π(x) = ^π[μ_h+1^⋆(x)^⊤ϕ_h^⋆(_h, _h)] = μ_h+1^⋆(x)^⊤ϕ_h^⋆, π. Let Ψ{π_1, …, π_d} be a barycentric spanner for the set {ϕ^⋆, π_h |π∈} (see <ref>). Let π_x denote the policy maximizing d^π(x) (if no such maximizer exists, we may pass to a maximizing sequence). By definition of a barycentric spanner, there exist β_1, …, β_d ∈ [-1, 1] such that ϕ_h^⋆, π_x = ∑_i=1^d β_i ϕ_h^⋆, π_i, and so
d^π_x(x) = ∑_i = 1^d β_i μ_h+1^⋆(x)^⊤ϕ_h^⋆, π_i≤∑_i = 1^d *β_iμ_h+1^⋆(x)^⊤ϕ_h^⋆, π_i≤ d ·∑_i = 1^d 1/dμ_h+1^⋆(x)^⊤ϕ_h^⋆, π_i = d ·∑_i = 1^d 1/d d^π_i(x),
where we have used that μ_h+1^⋆(x)^⊤ϕ_h^⋆, π_i is non-negative.
Thus, by defining ρ_h+11/d∑_i=1^d d^π_i, we obtain the desired result.
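The role of the spanner mixture in this argument can also be illustrated numerically: with a nonnegative factorization of the occupancies and an exact (C=1) barycentric spanner obtained by brute force (the maximum-volume subset of feature-expectation vectors), the uniform mixture over the spanner policies covers every policy up to a factor d. The instance below is synthetic, and the brute-force subset search is only for illustration.

import itertools
import numpy as np

rng = np.random.default_rng(2)
d, n_policies, n_states = 3, 25, 40
phis = rng.uniform(0.0, 1.0, size=(n_policies, d))     # feature expectations phi^{*,pi} (nonnegative)
mu = rng.uniform(0.0, 1.0, size=(n_states, d))          # mu*_{h+1}(x) (nonnegative)
occ = mu @ phis.T                                        # occ[x, k] = d^{pi_k}(x) >= 0

# Exact (C=1) barycentric spanner of {phi^{*,pi}}: the size-d subset with maximal |det|.
best = max(itertools.combinations(range(n_policies), d),
           key=lambda idx: abs(np.linalg.det(phis[list(idx)])))
rho = occ[:, list(best)].mean(axis=1)                    # rho(x) = (1/d) * sum_i d^{pi_i}(x)

ratio = np.max(occ / rho[:, None])                       # sup_{x, pi} d^pi(x) / rho(x)
print(ratio, "<= d =", d, ratio <= d + 1e-9)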
Let >0, and B>0 be given. Fix h∈[H] and consider
a sequence of policies π1:K∈ and functions δ1:K:_h×→ [-B,B] such that for all k∈ [2 K],
^k-1[ δk(_h,_h)^2 ] ≤^2, where k-11/k-1∑_ℓ=1^k-1πℓ. Then min_k∈[K]^πk[ δk(_h,_h) ] ≤√(2 d ln K) + 2 d B K^-1.
Define k-1(·,·) ^k-1[d^π(·,·)], if k∈[2 K],
and k-1(·,·)≡ 0 if k=1. Further, let
ρ̃k(·,·) d/kρ_h(·,·), where
ρ_h(x,a) is as in <ref>. Finally, for any (x,a)∈_h ×, we define the “burn-in” index
τ_h(x,a) min{ k ∈[K] |d̅k-1(x,a) > (k-1) · d ·ρ_h(x,a) },
and note that τ_h(·,·)>1. Since the coverability constant is bounded by d in s (see <ref>), we have the following facts which follow from the derivations in <cit.>:
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] ≤ 2d B,
∀ (x,a)∈_h ×,∀ k≥τ_h(x,a), d̅k-1(x,a) + ρ̃k(x,a) ≤ 2d̅k-1(x,a).
With this, we have
∑_k=1^K ^πk[ δk(_h,_h) ]
= ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
≤ 2 d B + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
where the last inequality uses (<ref>). We now bound the second term on the right-hand side of (<ref>). We have
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)]
=∑_k=1^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k } ,
=∑_k=2^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k }, (since τ_h(·,·)>1)
= ∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)(k-1(x,a)/k-1(x,a))^1/2δk(x,a)·𝕀{τ_h(x,a) ≤ k } ,
≤√(∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2 ·𝕀{τ_h(x,a) ≤ k }/k-1(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2), (Cauchy Schwarz)
≤√(∑_k=2^K ∑_(x,a)∈_h×2d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2),
where the last step follows by (<ref>). For the second term in (<ref>), we have
∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) δk(x,a)^2 ≤ K ^2,
by (<ref>).
On the other hand, for the first term on the right-hand side of (<ref>), we have
∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a) ≤∑_k=2^K ∑_(x,a)∈_h×max_ℓ∈ [K] d^πℓ(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a) ,
≤∑_k=2^K ∑_(x,a)∈_h× d ρ_h(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a),
≤∑_k=1^K ∑_(x,a)∈_h×d ρ_h(x,a)k · d^πk(x,a)/∑_ℓ∈[k-1] d^πℓ(x,a) + dρ_h(x,a),
≤ K d∑_(x,a)∈_h×ρ_h(x,a) ln K,
=K dln K,
where (<ref>) follows by <ref>
and <cit.>. Plugging (<ref>)
and (<ref>) into (<ref>), we get that
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ≤ K √(2 d ln K).
Combining this with (<ref>), we get
K ·min_k∈[K]^πk[ δk(_h,_h) ] ≤∑_k=1^K ^πk[ δk(_h,_h) ] ≤ K √(2 d ln K) + 2 d B.
This implies the desired result.
The following is a restatement of Theorem 2.2 in <cit.>.
Let A, E∈^d× d. If A is non-singular and r ≔‖A^-1E‖< 1, then A+E is non-singular and ‖(A+E)^-1- A^-1‖≤‖E‖ ‖A^-1‖^2/(1-r).
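A quick numerical illustration of this perturbation bound, using the spectral norm as the operator norm and an arbitrary random instance (not from the paper):

import numpy as np

rng = np.random.default_rng(3)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = 0.05 * rng.standard_normal((n, n))

op = lambda M: np.linalg.norm(M, 2)                      # spectral (operator) norm
A_inv = np.linalg.inv(A)
r = op(A_inv @ E)
assert r < 1.0, "the bound only applies when ||A^{-1} E|| < 1"

lhs = op(np.linalg.inv(A + E) - A_inv)
rhs = op(E) * op(A_inv) ** 2 / (1.0 - r)
print(lhs, "<=", rhs, lhs <= rhs + 1e-12)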
PART: Analysis of
§ ORGANIZATION OF THIS PART
<ref> of the appendix contains the proof
of <ref>, the guarantee for <ref>. This
section is organized as follows:
* In <ref>, we give an overview of (<ref>) and highlight its key differences to (<ref>).
* <ref> contains the proof of <ref>.
* <ref>, provides generic guarantees
for the subroutine of ,
which are used within the proof of <ref>.
* Finally, <ref> compares the
reachability assumption used in the analysis of
to other notions used throughout the literature on RL in Low-Rank MDPs.
We note that the analysis of <ref> in <ref> also makes use of the guarantee of from <ref> in <ref>.
§ : ALGORITHM OVERVIEW
The algorithm is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. The structure of the algorithm is similar to that
of , with the main difference being that instead of computing
an optimal design, the algorithm computes a barycentric spanner
for the feature map.
In more detail, for each layer h≥2, uses a policy cover
Ψh built at a previous iteration within the
(<ref>) subroutine to produce a
feature map h that approximates . Using this feature map, the algorithm invokes a second subroutine, (<ref>) to produce a collection of policies π_1,…,π_d that act as a barycentric spanner for the
feature map, ensuring maximal coverage; given these policies, a new policy cover for layer h+2 is formed via Ψh+2={π_i∘_h+1π_ : i∈[d] }. To invoke the
subroutine, makes use of for policy optimization and
(<ref>) for estimation of vector-valued
functionals. Compared to , there is no inner loop (i.e.,
K=1); this is facilitated by the reachability assumption.
In what follows, we expand on the main differences between and , focusing on the role of barycentric spanners.
Barycentric spanners
uses the notion of a barycentric spanner
<cit.> as an efficient basis for exploration. We
define a barycentric spanner for an abstract set as follows.
Given a set ⊂^d such that () = ^d, we say that a set { w_1, …, w_d }⊆ is a (C, )-approximate barycentric spanner for if for every w ∈, there exist β_1, …, β_d ∈ [-C, C] such that w - ∑_i = 1^d β_i w_i≤.[Note that our definition is a slight generalization of <cit.>; the latter is recovered with = 0.]
The following result shows that for Low-Rank MDPs, barycentric spanners
offer a compact representation for policy covers.
Suppose <ref> holds with η>0. If Ψ⊆ is a collection of policies such that {^π[
(_h, _h) ]|π∈Ψ}⊆^d is a (C,
)-approximate barycentric spanner for _h{^π[
(_h, _h) ]|π∈} with ≤η/2, then Ψ is an (α,0)-policy cover for layer h+1 with α = (2dC)^-1.
<Ref>, proven in <Ref>, shows that to compute a policy
cover for layer h+1, it suffices to find a barycentric spanner for the
set _h{^π[
(_h, _h) ]|π∈}⊆^d. Similar to the approach to optimal design computation in
, we show that barycentric spanner computation can be
efficiently reduced
to policy optimization:
* Using, , a novel adaptation of the classical algorithm of
<cit.>, it holds that for any ϕ∈Φ,
spanner computation for the set {^π[
ϕ(_h, _h) ]|π∈} can be performed efficiently whenever, for any θ∈(1), one can (approximately) solve linear optimization problems of the form
_π∈^π*θ^ϕ(_h,_h).
* Given access to policy covers Ψ1:h for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to (<ref>).
To handle the fact that is unknown, <ref> computes policies π_1:d that induce a barycentric spanner for the set {^π[
h(_h, _h) ]|π∈}, where
h∈Φ is an estimated feature map produced by
. In what follows, we give a detailed overview of how the
subroutine achieves efficient spanner computation.
Barycentric spanner computation via approximate linear optimization
We consider an abstract framework for
barycentric spanner computation, which generalizes the problem faced
within . Suppose that we wish
to compute a spanner for an implicitly specified set
=*w^z_z∈⊆^d indexed by an abstract set
.
To allow for efficient spanner computation without resorting to
enumeration over the set , we assume access to two
oracles for the set , a linear optimization oracle :(1)→ and
an index-to-vector oracle :→^d. We assume that for some >0:
* For all θ∈^d with *θ=1, the output
ẑ_θ(θ) satisfies
θ^⊤w^ẑ_θ≥sup_z∈θ^⊤ w^z -.
* For all z∈, the output ŵ_z(z)
satisfies
ŵ_z - w^z≤.
The algorithm
(<ref>) computes a (C,)-approximate spanner for
using
(dlog(d/)) total calls to and . is an error-tolerant variant of the classical spanner computation algorithm of
<cit.>, which was originally introduced and
analyzed for
spanner computation with an exact linear optimization
oracle. Tolerance to approximation errors in the linear optimization oracle
is critical for our application to RL, where additive
errors will arise in sampling trajectories, as well as estimating
the feature maps ()_h∈[H]. achieves error tolerance by
perturbing the vectors returned by (θ) in the direction of
θ, which amounts to running the classical algorithm on an -fattening of , and is necessary in order to ensure that the approximation error of does not swamp the signal in directions θ in which is too “skinny.” This technique may be of independent interest; see <ref>
for additional details and formal guarantees.
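To make the determinant-based updates and the ε-perturbation concrete, the following self-contained Python sketch runs the routine on a finite set of vectors, with brute-force oracles standing in for the linear optimization and index-to-vector oracles. In the RL application those oracles are replaced by policy optimization and Monte Carlo estimation; everything below (names, the random instance, the final check) is illustrative rather than the paper's exact pseudocode.

import numpy as np

rng = np.random.default_rng(4)
d, n_points, eps, C = 4, 60, 1e-3, 2.0
W = rng.standard_normal((n_points, d))
W /= np.maximum(1.0, np.linalg.norm(W, axis=1, keepdims=True))   # points inside the unit ball

def lin_opt(theta):                      # approximate linear optimization oracle (here: exact, brute force)
    return int(np.argmax(W @ theta))

def est_vec(z):                           # index-to-vector oracle (here: exact lookup)
    return W[z].copy()

def robust_spanner(lin_opt, est_vec, d, C, eps):
    M = np.eye(d)                         # columns w_1, ..., w_d, initialized to basis vectors
    idx = [None] * d

    def best_perturbed(i):
        # theta[j] = det(M with column i replaced by e_j), so det(M with column i = w) = theta @ w.
        theta = np.array([np.linalg.det(np.column_stack([M[:, :i], e, M[:, i + 1:]]))
                          for e in np.eye(d)])
        theta_hat = theta / np.linalg.norm(theta)
        z_plus, z_minus = lin_opt(theta_hat), lin_opt(-theta_hat)
        w_plus, w_minus = est_vec(z_plus), est_vec(z_minus)
        if theta @ w_plus >= -(theta @ w_minus):          # pick the better direction, then fatten by eps
            return z_plus, w_plus + eps * theta_hat, theta
        return z_minus, w_minus - eps * theta_hat, theta

    for i in range(d):                    # first loop: fill in all d columns once
        idx[i], M[:, i], _ = best_perturbed(i)

    changed = True
    while changed:                        # second loop: swap while |det| improves by a factor > C
        changed = False
        for i in range(d):
            z, w, theta = best_perturbed(i)
            if abs(theta @ w) > C * abs(theta @ M[:, i]):
                idx[i], M[:, i] = z, w
                changed = True
    return idx, M

idx, M = robust_spanner(lin_opt, est_vec, d, C, eps)
betas = np.linalg.solve(M, W.T)           # coefficients of every point in the (perturbed) spanner basis
print(np.max(np.abs(betas)))              # at most C at termination (the oracles here are exact)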
Putting everything together Equipped with an estimated
feature map h from , applies
to the set {^π[h(_h,
_h)]|π∈} with
= and C = 2; that is, we plug in the learned
representation h for the true representation
.[Though the policies produced by the algorithm may not necessarily induce a spanner for _h= {^π[
(_h, _h) ]|π∈} (this would
require “point-wise” representation learning guarantees, which we do
not have), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ θ^⊤h(_h, _h)]
for a given θ∈(1), and implementing the oracle
entails estimating
^π[h(_h, _h)]
for a given π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>). To
implement (θ), we invoke with the rewards
r_t(x,a;θ){[ h(x,a)^⊤θ, for t=h,; 0, otherwise. ].
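In code, this reward construction is a thin wrapper around the current feature estimate; a minimal sketch follows, where the feature map is a dummy stand-in for the learned representation and all names are illustrative.

import numpy as np

d = 4

def phi_hat(x, a):                                   # dummy stand-in for the learned feature map
    rng = np.random.default_rng(abs(hash((x, a))) % (2 ** 32))
    v = rng.standard_normal(d)
    return v / max(1.0, np.linalg.norm(v))           # keep ||phi(x, a)|| <= 1

def make_rewards(h, theta, H):
    """Rewards r_t(., .; theta): <phi_hat, theta> at layer h, zero at every other layer."""
    def layer_reward(t):
        if t == h:
            return lambda x, a: float(phi_hat(x, a) @ theta)
        return lambda x, a: 0.0
    return {t: layer_reward(t) for t in range(1, H + 1)}

theta = np.ones(d) / np.sqrt(d)
rewards = make_rewards(h=3, theta=theta, H=5)
print(rewards[3]("x0", 1), rewards[1]("x0", 1))      # only layer h carries a nonzero reward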
§ ANALYSIS: PROOF OF THM:SPANRLMAIN
In this section, we prove the main guarantee for (<ref>). First, we outline our proof strategy in <ref>. Then, in <ref> and <ref>, we present guarantees for the instances of (<ref>) and (<ref>) used within . We then combine these results in <ref> to complete the proof of <ref>. A self-contained guarantee for (<Ref>) is given in <Ref>.
§.§ Proof Strategy
Like the proof of <ref> for , the proof of <ref> is inductive. However, due to the assumption of reachability, the proof does not make use of the extended MDP analysis used in the proof of <ref>, making it somewhat simpler.
For fixed h, we assume that the policy set Ψ1:h+1 produced by satisfies the property:
Ψ1,…Ψh+1 are (1 Ad,0)-policy covers for layers 1 through h+1, and max_t∈[h+1]|Ψt|≤ d.
Conditioned on this claim, we show that with high probability, the set Ψh+2 is a (1/4 A d,0)-policy cover for layer h+2. To prove this, we use the inductive assumption to show that acts as an approximate linear optimization oracle over = {^π[ h(_h, _h) ] |π∈} (<Ref>). Using this, we then instantiate the guarantee of from <ref> with and instantiated with and . To conclude the proof of the inductive step, we combine the main guarantee for with the main guarantee for (<Ref>), along with a change of measure argument enabled by the assumption that Ψ1:h are policy covers (i.e. (<ref>)).
§.§ Guarantee for as a Subroutine for
We begin by showing that , as configured within , acts as an approximate linear optimization oracle as required by . In particular, we fix a layer h, assume that Ψ1:h+1 satisfy (<ref>), apply the generic guarantees for in <Ref>.
Define function classes _1:h such that for each t∈[h],
_t {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ, w ∈(2√(d))}.
Given θ∈(1) and ϕ∈Φ, consider the reward functions r'_1:h(·,·;θ, ϕ) given by:
∀ (x,a)∈×, r'_t(x,a;θ,ϕ){[ ϕ(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
With these rewards and function classes, we show that the output
= (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n),
where Pt(Ψt), for each t∈[h], approximately solves
max_π∈θ^π[ ϕ(_h, _h) ]
with high probability if n≥ 1 is sufficiently large. Note that this matches the choice of reward functions in (<ref>) at iteration h with ϕ = ϕh, the feature map returned by in <ref>.
We first verify that the classes _1:h realize the reward functions specified in (<ref>) in the sense of <Ref>.
Under <ref>, the function classes _1:h in (<ref>) realize (in the sense of <ref>) the reward functions in (<ref>) for any ϕ∈Φ and θ∈(1). Furthermore, the functions in _1:h are uniformly bounded by 2√(d), and ln__t()≤ln |Φ|+ d ln (2√(d) /), for all t∈[h], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and θ∈(1), and let r'_t(·,·)≡ r'_t(·,·; θ, ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈_h. For t<h and π∈^t+1:h, we have by the low-rank structure that
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, by the fact that ^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ∈ [-1,1], for all y∈_t+1 (since ϕ(·,·)∈(1), for all ϕ∈Φ), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(2√(d)).
This, together with (<ref>) and the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈_t. The bound on the covering number __t() follows from a standard bound on the covering number of the ball (2√(d)) <cit.>.
Combining <Ref> with <Ref> (with =0) results in the following bound on the quality of as an approximate linear optimization oracle.
Let ,δ∈(0,1) be given and fix h∈[H]. Given θ∈(1) and ϕ∈Φ, let be the output of when given input (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n), where
* The reward functions r'_1:h(·, ·;θ,ϕ) are as in (<ref>).
* The function classes _1:h are as in (<ref>).
* Pt(Ψt), for each t∈[h], and the collection of policies Ψ1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + _(n,δ),
where _(n,δ) c H A d √(d n^-1 (d ln (2n d^1/2)+ln (|Φ|/δ))) for a sufficiently large absolute constant c>0.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within . We first show that (<Ref>) is a valid choice for the subroutine passed to .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output _h= (h,ϕ,π, n) (<ref>) satisfies, with probability at least 1-δ,
_h - ^π[ϕ(_h,_h)] ≤_(n,δ),
where _ c ·√(n^-1·log (1/δ)) and c>0 is a sufficiently large absolute constant.
By a standard vector-valued concentration bound in euclidean space (see for example <cit.>) and the fact that ϕ(x, a)≤ 1 for all x ∈ and a ∈, there exists an absolute constant c>0 such that with probability at least 1 - δ,
_h - ^π[ ϕ(_h, _h) ]≤ c ·√(log(1/δ)/n).
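Concretely, the subroutine analyzed above is plain Monte Carlo averaging of the feature vector observed at layer h; a small self-contained sketch with a synthetic tabular simulator is given below (all objects are stand-ins, not the paper's environment or policies).

import numpy as np

rng = np.random.default_rng(5)
H, S, A, d = 5, 6, 3, 4
P = rng.dirichlet(np.ones(S), size=(H, S, A))          # synthetic tabular dynamics
phi = rng.standard_normal((S, A, d))
phi /= np.maximum(1.0, np.linalg.norm(phi, axis=-1, keepdims=True))   # ||phi(x, a)|| <= 1
policy = rng.integers(A, size=(H, S))                    # a fixed deterministic policy pi

def estimate_feature(h, n):
    """Monte Carlo estimate of E^pi[ phi(x_h, a_h) ] from n rollouts of `policy`."""
    total = np.zeros(d)
    for _ in range(n):
        x = 0                                             # fixed initial state
        for t in range(h + 1):
            a = policy[t, x]
            if t == h:
                total += phi[x, a]
            else:
                x = rng.choice(S, p=P[t, x, a])
    return total / n

print(estimate_feature(h=3, n=2000))                      # concentrates at rate ~ sqrt(log(1/delta) / n)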
Recall that in , we instantiate passing as and as . Combining <Ref> with the general guarantee for in <Ref>, we have the following result.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with ,>0, δ∈(0,1), and feature class Φ satisfying <ref>. Further, let h denote the feature map returned by in <Ref> at iteration h. If Ψ1:h in <ref> satisfy (<ref>) and =(A,d,H,ln(|Φ|/δ)) is sufficiently large, then with probability at least 1 - δ/2H, we have that
* The number of iterations of in <Ref> of <Ref> is at most N ⌈d/2log_2( 100d/)⌉.
* The output (π_1, …, π_d) of has the property that for all π∈, there exist β_1,…,β_d∈[-2,2] such that
*^(h),π - ∑_i=1^d β_i ^(h),π_i≤ 3 d , where ^(h),π'^π'[h(_h,_h)].
By <Ref>, on the event that the instances of and used by satisfy <Ref> with ' = /2, the two prerequisite assumptions of the lemma hold; we instantiate the guarantee in <ref> with C=2, as used by <ref>. We claim that each call to and to satisfies <Ref> with probability at least 1- δ/8 d N H. Because each of and gets called at most 4 d N times per iteration of , a union bound concludes the proof contingent on this claim.
We now prove the claim. First, note that the instance of that uses within <ref> is of the form:
(h, r_1:h(·, ·, θ), _1:h, P1:h, n_)
with r_1:h and _1:h as in <Ref>, and Pt(Ψt) for each t∈[h]; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh, which implies that with probability at least 1- δ/8 d N H, the output of of the instance in (<ref>) satisfies:
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + c θ H A d √(d · (d ln (2n_ d^1/2)+ln (8 dNH|Φ|/δ))/n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ = ·^-2 A^2 d^3 H^2 · (d +ln (|Φ|/δ)),
for =(A,d,H,ln(|Φ|/δ)) sufficiently large, the right-hand side of (<ref>) is bounded by θ/2, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within by <Ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within .
Recall that Ph= (Ψh) is the distribution over policies that passes to at iteration h∈[H-2] to compute feature map ϕh. Thus, by invoking <ref> in <ref> and using the choice of n_ in <ref>, we immediately obtain the following corollary.
Let δ,∈(0,1), and be as in <ref>, and fix h∈[H-2]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/2H, the instance of in <ref> of <ref> runs for t≤· d iterations for = (A,d,H,log(|Φ|/δ)) sufficiently large, and returns output ϕh such that for all f∈, there exists w_fh∈(3d^3/2) satisfying
^(Ψh)[∑_a∈(ϕh(_h,a)^⊤wh_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/64 A^2 d^2,
where w_f ∫__h+1 f(y) (y) ν(y).
§.§ Concluding the Proof of thm:spanrlmain
In this section, we conclude the proof of the main guarantee (<ref>). We derive the guarantee from the following inductive claim.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with parameters ,>0, δ∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The collection of policies Ψ1:h+1 at the start of the hth iteration of satisfy (<ref>).
* <ref> (reachability) holds with η>0.
* The input parameter to is set to =η/36 d^5/2.
* The input parameter =(A,d,H,ln (|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the set of policies Ψh+2 produced by (Φ,,,δ) at the end of iteration h is an (1/ Ad,0)-policy cover for layer h+2.
With this, we can now prove <ref>.
Note that it suffices to prove that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>).
The number of trajectories used by is dominated by calls to . Since is called O(dln (d/)) times at each iteration of (<ref>), and each call to requires at most H n_ trajectories, the total number of trajectories after H iterations of is bounded by O(H^2 d n_). By plugging the choices for n_ and from the theorem statement, we obtain the claimed sample complexity.
Before proving <ref>, we make the following simple observation.
For any π∈, h∈ [H-1], any x∈_h+1, we have
(x)^⊤^π[ϕ_h^⋆(_h,_h)]=d^π(x)≥ 0.
The equality follows by construction. The non-negativity of d^π(x) follows by definition of a probability density.
We now prove <ref>.
Let _h and _h' denote the success events in <ref> and <ref>, respectively, and note that by the union bound, we have [_h ∩_h']≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'.
Throughout, we denote
ϕ_h^⋆,π^π[ϕ_h^⋆(_h,_h)], ∀ h∈[H], ∀π∈.
Because Ψ1:h+1 satisfy (<ref>) (i.e., are a policy cover) it holds by <Ref> that for all x∈_h,
max_π∈Ψh[h](x)^⊤ϕ_h-1^⋆,π≥α·sup_π∈[h](x)^⊤ϕ_h-1^⋆,π, for α1/ A d.
We will show that with probability at least 1-δ/H, the policy set Ψh+2 has the same property for layer h+2; that is, for all x∈_h+1,
max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆,π≥α·sup_π∈[h+2](x)^⊤ϕ_h+1^⋆,π.
Again, by <ref> this is equivalent to the statement that Ψh+2 is an (1/ Ad,0)-policy cover for layer h+2.
For the remainder of the proof, we will fix x∈_h+2 and let π_x ∈_π∈[h+2](x)^⊤ϕ_h+1^⋆,π. Our goal is to show that the inequality <ref> holds for x.
Preliminaries Note that since x∈_h+2, we have [h+2](x)>0. It will be convenient to introduce a function f: _h+1→ defined by
f(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Further, we define
w_x ∫__h+1 f(y) (y) ν(y).
By definition of π_x, we have that for all y∈_h+1,
θ_x^⊤ϕ^⋆_h+1(y,π_x(y)) = max_a∈θ_x^⊤ϕ^⋆_h+1(y,a).
This together with the fact that θ_x=1 implies that
f ∈ = { x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ};
the discriminator class in <ref> of .
Note also that since x∈_h+2, we have by reachability that
w_x^⊤ϕ_h^⋆, π_x= θ_x^⊤ϕ_h+1^⋆,π_x=1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π≥η>0.
Applying the guarantee for
Moving forward, let h be the feature map returned by at the hth iteration of <ref>, and define ϕ^(h),π^π[ϕh(_h,_h)], for any π∈. Further, let w_xh be the vector w_fh in <ref> with f=f_x, and note that
w_xh≤ 3 d^3/2.
By Jensen's inequality, we compute
( wh_xϕ^(h),π_x- w_xϕ_h^⋆, π_x)^2
≤^π_x[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2], (Jensen's inequality)
= ∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π_x_h-1ν(y), (Low-Rank MDP)
≤α^-1max_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x -ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by (<ref>))
≤α^-1∑_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by <ref>)
≤α^-1∑_π̃∈Ψh∑_a∈∫__h( h(y,a)^⊤ wh_x - ϕ_h^⋆(y,a)^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y),
=A α^-1 d·^(Ψh)[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2],
where the last step follows by the definition of Ψh in <ref> and that Ψh = d. Now, since w_x = ∫__h+1 f(y) (y) ν(y) (see (<ref>)) and f∈ (see (<ref>)); the guarantee for in <ref> together with (<ref>) implies that (conditioned on the event )
| wh_x^(h),π_x- w_xϕ_h^⋆, π_x| ≤√(A dη^2/64 α A^2 d^2)≤η/4.
Applying the guarantee for
Letting π_1,…,π_d be the policies returned by at iteration h of , the guarantee of in <ref> implies that there exist β_1, …, β_d∈[-2,2] such that
*ϕ^(h),π_x-∑_i=1^d β _iϕ^(h),π_i≤ 3 d ≤η/12 d^3/2,
where the last inequality follows by the fact that = η/36 d^5/2. Combining (<ref>) with (<ref>) and using the triangle inequality, we get that
w_x^⊤ϕ_h^⋆, π_x ≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + wh_x·η/12 d^3/2 +η/4,
≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + η/4+η/4, (by (<ref>))
≤ 2d max_i∈[d] w_x^⊤ϕ_h^⋆, π_i + η/2.
Combining this with (<ref>) and rearranging implies
w_x^⊤ϕ_h^⋆, π_x≤ 4d·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i.
On the other hand, by definition of w_x, we have
max_i∈[d] w_x^⊤ϕ_h^⋆, π_i = max_i∈[d]θ_x^⊤ϕ_h+1^⋆, π_i∘_h+1π_x,
= 1/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_x[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)],
≤A/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)], (see below)
= A/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π,
where the inequality follows from the non-negativity of _h+1(·)_h+1(x,a), for all (x,a)∈_h× (due to <Ref>), and (<ref>) follows from the definition of Ψh+2 in <Ref> of <Ref>. Combining (<ref>) and (<ref>) then implies that
1/*[h+2](x)[h+2](x)^⊤ϕ_h+1^⋆, π_x =θ_x^⊤ϕ_h+1^⋆,π_x= w_x^⊤ϕ_h^⋆, π_x ≤ 4d ·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i,
≤4 A d/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π.
This, together with <ref>, implies that (<ref>) holds. Since this argument holds uniformly for all x∈_h+2, this completes the proof.
§.§ Proof of lem:barycentricspannerknownphi
By definition for x ∈_h+1, we have d^π(x) = ^π[ (x)^⊤[h](_h, _h)]. Let π_x denote the policy maximizing d^π(x) (if no such maximizer exists, we may pass to a maximizing sequence) and let Ψ = {π_1, …, π_d }. Then, we have for some β_1, …, β_d ∈ [-C, C],
d^π_x(x) = (x)^⊤(∑_i = 1^d β_i [π_i]) + (x)^⊤( [π_x] - ∑_i = 1^d β_i[π_i]),
≤ C d ·max_i ∈[d](x)^⊤[π_i] + ·(x), (Cauchy-Schwarz)
≤ C d ·max_i ∈[d](x)^⊤[π_i] + 1/2d^π_x(x),
where the inequality follows by the fact that <ref> holds with ≤η/2. The result now
follows by rearranging.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for the algorithm when invoked with oracles and satisfying the following assumption.
[ and as approximate Linear Optimization Oracles]
For some abstract set and a collection of vectors {w^z∈^d | z∈} indexed by elements in , there exists '>0 such that for any θ∈^d∖{0} and z∈, the outputs ẑ_θ(θ/θ) and ŵ_z (z) satisfy
sup_z∈θ^⊤ w^z≤θ^⊤ w^ẑ_θ +' ·θ, and ŵ_z - w^z≤' .
Letting {w^z | z∈} and assuming that ⊆(1), the next theorem bounds the number of iterations of ((·),(·), ·,·) under <ref>, and shows that the output is an approximate barycentric spanner for (<ref>). Our result extends those of <cit.>, in that it only requires an approximate linear optimization oracle, which is potentially of independent interest.
Fix C>1 and ∈(0,1) and suppose that {w^z | z ∈}⊆(1). If (<Ref>) is run with parameters C, >0 and oracles , satisfying <ref> with '=/2, then it terminates after d + d/2log_C100 d/^2 iterations, and requires at most twice that many calls to each of and . Furthermore, the output z_1:d has the property that for all z∈, there exist β_1,…,β_d∈[-C,C], such that
*w^z - ∑_i=1^dβ_i w^z_i≤3Cd ·/2.
The proof follows similar steps to those in <cit.>, with modifications to account for the fact that linear optimization over the set {w^z | z∈} is only performed approximately.
Part I: Bounding the number of iterations
In <Ref>, there are two loops, both of which require two calls to and per iteration. As the first loop has exactly d iterations, it suffices to bound the number of iterations in the second loop.
Let Mi (w_1,…, w_i, e_i+1, …, e_d) be the matrix whose columns are the vectors at the end of the ith iteration of the first loop (<ref>) of <ref>; note that columns i+1 through d are unchanged at this point in the algorithm. For i∈[d], we define ℓ_i(w) (w,Mi_-i) and θ_i((e_j, Mi_-i))_j∈ [d]∈^d, where we recall that for any matrix A, the matrix A_-i is defined as the result of removing the ith column from A. Note that ℓ_i is linear in w, and in particular
ℓ_i(w) w^⊤θ_i.
Let W0 Md = (w_1, …, w_d), and let Wj denote the resulting matrix after j iterations of the second loop (<Ref>) of <ref>. We will show that for any J≥ 1,
(WJ) ≤(W0) ·( 100 d/^2)^d/2.
By construction of the loop, we have (Wj) ≥ C ·(Wj-1) for each j ∈[J], and thus (WJ) ≥(W0) · C^J. Combining these two facts will establish the bound on the iteration complexity. We now prove (<ref>).
Let u_i = e^⊤_i(Mi)^-1 (note that u_i is a row vector) and let U denote the matrix whose ith row is u_i. We observe that for all w ∈^d,
u_iw = ℓ_i(w)/ℓ_i(w_i),
where we note that ℓ_i(w_i) ≠ 0 by construction; indeed, the columns of Mi are a basis for ^d because (Mi) ≠ 0, and the equality holds on the columns, so the two linear functions must be equal. Now, since <ref> holds with '=/2, we have
θ^⊤_iw_i^+≥sup_z ∈θ^⊤_iw^z - /2θ_i, and θ^⊤_iw_i^-≤inf_z ∈θ^⊤_iw^z + /2θ_i,
where w_i^± = (z_i^±). We will now show that
ℓ_i(w_i) ≥/2·θ_i.
There are two cases. First, suppose that θ^⊤_iw_i^+≥ - θ^⊤_iw_i^-, corresponding to the conditional in <Ref> of <ref> being satisfied. Combining this with (<ref>), we have
θ_i^⊤ w_i^+ ≥( sup_z∈θ_i^⊤ w^z -/2θ_i) ∨ (-θ_i^⊤ w_i^-),
≥( sup_z∈θ_i^⊤ w^z -/2θ_i)∨( sup_z∈ -θ_i^⊤ w^z -/2θ_i), (by (<ref>))
= ( sup_z∈θ_i^⊤ w^z )∨( sup_z∈ -θ_i^⊤ w^z ) - /2θ_i,
≥ - /2θ_i.
Because the conditional is satisfied, w_i = w_i^+ + ·θ_i/θ_i, and so by plugging this into (<ref>), we have
ℓ_i(w_i) = θ^⊤_iw_i≥/2·θ_i.
The case that θ^⊤_iw_i^+≤ - θ^⊤_iw_i^- is essentially identical, establishing (<ref>). Now, recall that { w^z | z ∈} and let ⊕( 3/2) { w + b | w ∈ and b ∈( 3/2) } denote the Minkowski sum with ( 3/2). By Cauchy-Schwarz, it holds that for all w' w + b ∈⊕( 3/2),
ℓ_i(w') = θ^⊤_iw' = θ^⊤_iw + θ^⊤_ib≤( 1 + 3 /2) ·θ_i,
where we used that ⊆(1) (by assumption). Thus, for any w' ∈⊕( 3/2), we have
u_iw' = ℓ_i(w')/ℓ_i(w_i)≤ 1+3 /2 .
We now observe that by construction and the fact that <ref> holds with '=/2, the kth column w_k' of WJ belongs to ⊕( 3 /2), for any k∈[d]. Thus, the (i,k) entry u_iw_k' of U WJ satisfies u_iw_k'∈[-1 - 3 /2, 1+ 3 /2], and so the columns of U WJ have Euclidean norm at most 10 √(d)/. Since the magnitude of the determinant of a matrix is upper bounded by the product of the Euclidean norms of its columns, it holds that (U WJ)≤( 100 d/^2)^d/2.
On the other hand, again by construction, we see that the columns w_1,…, w_d of W0 satisfy u_iw_j=0, for j<i, and u_iw_i=1. Thus, U W0 is an upper-triangular matrix with 1s on the diagonal, and hence has determinant 1. Because determinants are multiplicative, this implies that (U) ≠ 0. We now compute:
(WJ) = (U WJ)/(U) = (U WJ)/(U W0)≤( 100 d/^2)^d/2.
Thus, the upper bound on (WJ) holds and the claim is proven. Therefore, we have
C^J ≤( 100 d/^2)^d/2,
and so J ≤⌈d/2log_C( 100 d/^2)⌉.
Part II: Spanner property for the output Having shown that the algorithm terminates, we now show that the result is an approximate barycentric spanner for . Let W (w_1, …, w_d) be the matrix at termination of the algorithm. By definition, if the second loop (<Ref>) has terminated, then for all i∈[d],
max(θ_i^⊤ w_i^+, - θ_i^⊤ w_i^-) +·θ_i≤ C · |(w_i,W_-i)|,
where θ_i = ((e_j, W_-i))_j∈[d]∈^d. On the other hand, by <ref>, (<ref>) holds, and so
∀ z∈, ∀ i ∈ [d], |(w^z,W_-i)| = |θ_i^⊤ w^z| ≤max(θ_i^⊤ w_i^+, - θ_i^⊤ w_i^-) +·θ_i,
≤ C· |(w_i,W_-i)|.
Now, fix z∈. Since (W) ≠ 0, there exist β_1:d∈ such that w^z= ∑_i=1^d β_i w_i. By plugging this into (<ref>) and using the linearity of the determinant, we have
∀ i∈[d], C· |(w_i,W_-i)| ≥ |(w^z,W_-i)| = |∑_j=1^d β_i (w_j,W_-i)| = |β_i| · |(w_i,W_-i)|.
Therefore, |β_i|≤ C, for all i∈[d]. Now, by definition of w_1:d and w_1:d, for all i∈[d], we have that w_i - w_i≤. Furthermore, by <ref>, we also have that w_i -w^z_i≤/2. Therefore, by the triangle inequality, we have
w^z- ∑_i=1^d β_i w^z_i≤w^z- ∑_i=1^d β_i w_i + ∑_i=1^d|β_i| w_i - w^z_i + ∑_i=1^d|β_i| w_i - w_i ≤ 3d C /2.
This completes the proof.
§ PROPERTIES OF REACHABILITY ASSUMPTION
In this section, we compare the η-reachability assumption used by
(<ref>) to different reachability
assumptions used throughout the literature on RL in Low-Rank MDPs. In
<ref>, we demonstrate an exponential separation
between our notion of reachability and notions considered in the so-called latent variable model <cit.>. In <ref>, we consider a number of other reachability assumptions and show that they imply <Ref>.
§.§ Comparison to Latent Variable Model
In this subsection, we show that our reachability assumption is
implied by a reachability assumption used by
<cit.> in the latent
variable/non-negative feature model, and show that our reachability
assumption can hold even when the best possible latent variable
embedding dimension is exponential in the dimension d. We begin by
defining the latent variable model.
Given a transition operator T:
×→Δ(), a latent variable representation consists of a countable latent space and functions ψ:×→Δ() and
q:→Δ(), such that T(·| x,a) = ∑_z∈
q(·| z) ψ(z | x,a). The latent variable
dimension of T, denoted , is the cardinality of the smallest
latent space for which T admits a latent variable
representation.
The interpretation for the latent variable model is as follows:
* Each (x,a) pair
induces a distribution ψ(x,a) ∈Δ()
over z∈.
* The latent variable is sampled as ∼ψ(x,a).
* The next state is sampled as '
∼ q(·|).
Note that in discrete state spaces, all transition operators admit a trivial latent variable
representation, as we may take ψ(x,a) = T(·| x,a), but
the dimension of such a representation is potentially infinite. A latent
variable representation certifies that there exists a factorization T(x' | x,a) =
ψ(x,a)^⊤ q(x') with embedding dimension ||, and so
, and hence gives an upper bound on the rank of the
transition operator. On the other hand, compared with the general Low-Rank factorization,
the latent variable factorization additionally requires that ψ(x,a)
and q(·| z) are probability distributions, and thus
non-negative, for all z∈ and (x,a)∈×,
implying that is equivalent to the non-negative rank <cit.> of the transition operator.
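As a generative process, a latent variable representation is straightforward to simulate: draw the latent z from ψ(x,a) and then the next state from q(·|z); marginalizing out z recovers T(·|x,a). A small synthetic sketch (all quantities are illustrative stand-ins):

import numpy as np

rng = np.random.default_rng(6)
S, A, Z = 8, 3, 4
psi = rng.dirichlet(np.ones(Z), size=(S, A))     # psi(z | x, a)
q = rng.dirichlet(np.ones(S), size=Z)             # q(x' | z)

def sample_next_state(x, a):
    z = rng.choice(Z, p=psi[x, a])                # latent variable z ~ psi(x, a)
    return rng.choice(S, p=q[z])                   # next state x' ~ q(. | z)

T = psi @ q                                        # marginalized kernel; non-negative rank <= Z
print(T[0, 1], T[0, 1].sum())                      # T(. | x=0, a=1) is a distribution over S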
Assuming that a latent variable representation exists, <cit.> consider the following notion of reachability.
There exists η>0 such that
∀ h∈[H-1], ∀ z∈_h+1, sup_π∈^π[_h+1=z]≥η.
We first show the latent variable reachability condition above implies our more general assumption.
Consider a Low-Rank MDP with rank d≥ 1. Under the
latent variable model in <ref>, if the latent
variable reachability condition in (<ref>) is satisfied for some η>0, then, for all h∈[H], the transition kernel T_h in admits a factorization T_h(·| x,a)=(·)^⊤(x,a), where (·)∈^ and (·,·)∈^, such that ≤ d A^2/η^2 and η^2/A √(d)-reachability (in the sense of <ref>) is satisfied.
Suppose that <ref> (η-reachability) holds. By <cit.>, the non-negative rank of is bounded as ≤ d A^2/η^2.
Letting q and ψ be as in the definition of the latent variable representation in <ref>, we define and as: for all h∈[H-1],
(·) (q(·| z))_z∈∈^, and (·,·) (ψ(z|· , ·))_z∈∈^.
Now, fix h∈[H-1] and x∈_h+1. For z_0∈_z∈_h+1q(x| z), we have
sup_π∈ d^π(x)= ^π[_h+1 = x] = sup_π∈∑_z∈_h+1
q(x | z) ·^π[ψ(z |_h,_h)],
=sup_π∈
q(x | z_0) ·^π[ψ(z_0 |_h,_h)],
= (x)_∞·sup_π∈^π[_h+1=z_0],
≥η·(x)_∞ , (using reachability)
≥η/√()·(x).
We now complement the result above by showing that there
exists low-rank MDPs for which our notion of reachability
(<ref>) is satisfied with η
polynomially small, yet the best possible latent variable
embedding has dimension =2^Ω(d). This contrasts
the results in <cit.>, which
show that latent variable reachability implies a polynomial
bound on the latent variable dimension.
There exists a one-step Low-Rank-MDP of rank d≥1, where η-reachability (<ref>) is satisfied with η=1/2√(d), but where the non-negative rank satisfies =2^Ω(d).
Let n ∈ℕ and d n 2 +1. As shown
in the proof of <cit.>, there exists
a horizon-two MDP with the following properties:
* The state spaces _1 and _2 at layers 1 and 2, respectively, are finite.
* The cardinality of is d; i.e. = {a_1,…, a_d}.[Technically, the example in the proof of <cit.> does not explicitly specify the number of actions. Instead, the example assigns a number of state-action pairs to vectors in ^d, without specifying the number of actions. The number of actions in their example is a degree of freedom, which we set to d here without loss of generality.]
* The transition kernel T_1 admits the factorization:
T_1(·| x,a) = [2](·)^⊤ϕ_1^⋆(x,a)∈Δ(_2), ∀ (x,a)∈_1×,
where for all x'∈_2, [2](x')∈_≥ 0^d, and for all (x,a)∈_1 ×, ϕ_1^⋆(x,a)∈_≥0^d.
* The non-negative rank of is =2^Ω(d).
We augment this MDP by adding an extra state , and let
_1_1∪{}. We define
_1^⋆:_1×→_≥0^d to be the
extension of ϕ_1^⋆ given by
∀ i∈[d], _1^⋆(, a_i)= e_i, and ∀ x ∈_1, _1^⋆(x, a_i)= ϕ_1^⋆(x,a_i),
where e_i is the ith basis element in ^d. We define the
initial state distribution to have ρ()=1/2 and
ρ(x)=1/(2 |_1|), for all x∈_1.[We note
that <cit.> did not specify the initial
distribution, which is not needed for the conclusion of their
result.] We let =(_1∪_2,,
_1^⋆,([h])_h∈[2],) denote the resulting
MDP. Note that adding an extra state at layer 1 in this fashion only adds d additional rows to the transition matrix T (viewed as a (|_1×|)× |_2| matrix). Therefore, the non-negative rank of is at least that of .
We now show that reachability is satisfied in . Let π_i the policy that always plays action a_i. With this, we have that for any x'∈_2,
sup_π∈ d^π(x') ≥max_i∈[d] d^π_i(x'),
= max_i∈[d][2](x')^⊤[_1^⋆(_1,a_i)] ,
= max_i∈[d]{[𝕀{_1=}·[2](x')^⊤_1^⋆(_1,a_i)] +[𝕀{_1≠}·[2](x')^⊤_1^⋆(_1,a_i)] },
≥max_i∈[d]ρ() [2](x')^⊤_1^⋆(,a_i).
where the last inequality follows by the fact that, for all (x,a)∈_1×, [2](·)^⊤_1^⋆(x,a)=[2](x')^⊤ϕ_1^⋆(x,a) ≥ 0
(since [2](x')^⊤ϕ_1^⋆(x,a) is a conditional
density). On the other hand, from the construction of _1^⋆ and the fact that [2](x')∈^d_≥ 0, we have
max_i∈[d][2](x')^⊤_1^⋆(,a_i)=[2](x')_∞≥[2](x')/√(d).
Combining this with (<ref>) and using that ρ(x_0)=1/2
implies that 1/(2√(d))-reachability is satisfied in .
§.§ Relation to Other Reachability Assumptions
In this subsection, we show that <ref> is implied
by a notion of feature coverage used in the context of transfer
learning in Low-Rank MDPs <cit.>, as well as a notion of
explorability used in the context of reward-free RL in linear
MDPs <cit.>.
§.§.§ Feature Coverage
We first consider coverage condition used by <cit.>, which involves the second moments of the feature map .
We say that the linear MDP with featurization _h satisfies η-feature coverage if for all h ∈ [H],
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤]) ≥η.
We show that η-feature coverage implies
(η/2)^3/2-reachability. Thus, up to polynomial dependence,
η-feature coverage is a special case of <ref>.
Suppose that an MDP satisfies η-feature coverage as in <ref> for some η>0. If (x,a)∈(1) for all x,a, then the MDP satisfies (η/2)^3/2-reachability in the sense of <Ref>.
Let h∈ [H] and x∈_h+1 be given and define
θ(x)/(x).
To keep notation compact, we define _h ϕ_h^⋆(_h,_h). By η-feature coverage, there exists π∈ such that
η≤^π [(θ^⊤_h)^2] = ^π [𝕀{(θ^⊤_h)^2 < η/2}· (θ^⊤_h)^2] + ^π [𝕀{(θ^⊤_h)^2 ≥η/2}· (θ^⊤_h)^2] ,
≤η/2 + ^π [(θ^⊤_h)^2 ≥η/2],
where we have used that θ=1 and ϕ_h^⋆(x,a)≤ 1 for all (x,a)∈_h×. Rearranging (<ref>) and using that θ^⊤_h≥ 0 (it is a scaled conditional density), we have
^π [θ^⊤_h ≥√(η/2)] = ^π [(θ^⊤_h)^2 ≥η/2] ≥η/2.
Now, by Markov's inequality, we have that
θ^⊤ϕ_h^⋆,π= ^π[θ^⊤_h] ≥√(η/2)·^π [θ^⊤_h ≥√(η/2)] ≥ (η/2)^3/2,
where we have once more used that θ^⊤_h≥ 0 almost surely.
§.§.§ Explorability
We now consider the explorability assumption of <cit.>, which involves the first moment of the feature map . This notion is defined as follows.
We say that a linear MDP satisfies η-explorability if for any h∈[H] and any θ∈^d∖{0} it holds that
sup_π∈ |θ^⊤^π[(_h,_h)]| ≥η·θ.
We now show that η-explorability is a special case of η-reachability:
Suppose that the explorability condition in <ref> is satisfied with η>0. Then, η-reachability is satisfied.
Let x∈_h+1 and define θ(x). By explorability, we have that
sup_π∈ d^π(x) = sup_π∈^π[(x)^⊤(_h,_h)],
= sup_π∈ |^π[(x)^⊤(_h,_h)]|, ((·)^⊤(x,a) is a conditional law)
= sup_π∈ |θ^⊤^π[(_h,_h)]|,
≥η·θ , (by explorability)
= η·(x).
This shows that <ref> is satisfied with
parameter η.
|
http://arxiv.org/abs/2307.05076v1 | 20230711071924 | Incentive Engineering for Concurrent Games | ["David Hyland", "Julian Gutierrez", "Michael Wooldridge"] | cs.GT | ["cs.GT", "cs.LO", "cs.MA"] |
Incentive Engineering for Concurrent Games
David Hyland, Julian Gutierrez, Michael Wooldridge
==========================================
We consider the problem of incentivising desirable behaviours in multi-agent systems by way of taxation schemes. Our study employs the concurrent games model: in this model, each agent is primarily motivated to seek the satisfaction of a goal, expressed as a Linear Temporal Logic (LTL) formula; secondarily, agents seek to minimise costs, where costs are imposed based on the actions taken by agents in different states of the game. In this setting, we consider an external principal who can influence agents' preferences by imposing taxes (additional costs) on the actions chosen by agents in different states. The principal imposes taxation schemes to motivate agents to choose a course of action that will lead to the satisfaction of their goal, also expressed as an LTL formula. However, taxation schemes are limited in their ability to influence agents' preferences: an agent will always prefer to satisfy its goal rather than otherwise, no matter what the costs. The fundamental question that we study is whether the principal can impose a taxation scheme such that, in the resulting game, the principal's goal is satisfied in at least one or all runs of the game that could arise by agents choosing to follow game-theoretic equilibrium strategies. We consider two different types of taxation schemes: in a static scheme, the same tax is imposed on a state-action profile pair in all circumstances, while in a dynamic scheme, the principal can choose to vary taxes depending on the circumstances. We investigate the main game-theoretic properties of this model as well as the computational complexity of the relevant decision problems.
§ INTRODUCTION
Rational verification is the problem of establishing which temporal logic properties will be satisfied by a multi-agent system, under the assumption that agents in the system choose strategies that form a game-theoretic equilibrium <cit.>. Thus, rational verification enables us to verify which desirable and undesirable behaviours could arise in a system through individually rational choices. This article, however, expands beyond verification and studies methods for incentivising outcomes with favourable properties while mitigating undesirable consequences. One prominent example is the implementation of Pigovian taxes, which effectively discourage agents from engaging in activities that generate negative externalities. These taxes have been extensively explored in various domains, including sustainability and AI for social good, with applications such as reducing carbon emissions, road congestion, and river pollution <cit.>.
We take as our starting point the work of <cit.>, who considered the possibility of influencing one-shot Boolean games by introducing taxation schemes, which impose additional costs onto a game at the level of individual actions. In the model of preferences considered in <cit.>, agents are primarily motivated to achieve a goal expressed as a (propositional) logical formula, and only secondarily motivated to minimise costs. This logical component limits the possibility to influence agent preferences: an agent can never be motivated by a taxation scheme away from achieving its goal. In related work, Wooldridge et al. defined the following implementation problem: given a game G and an objective Υ, expressed as a propositional logic formula, does there exists a taxation scheme τ that could be imposed upon G such that, in the resulting game G^τ, the objective Υ will be satisfied in at least one Nash equilibrium <cit.>.
We develop these ideas by applying models of finite-state automata to introduce and motivate the use of history-dependent incentives in the context of concurrent games <cit.>. In a concurrent game, play continues for an infinite number of rounds, where at each round, each agent simultaneously chooses an action to perform. Preferences in such a multiplayer game are defined by associating with each agent i a Linear Temporal Logic (LTL) goal γ_i, which agent i desires to see satisfied. In this work, we also assume that actions incur costs, and that agents seek to minimise their limit-average costs.
Since, in contrast to the model of <cit.>, play in our games continues for an infinite number of rounds, we find there are two natural variations of taxation schemes for concurrent games. In a static taxation scheme, we impose a fixed cost on state-action profiles so that the same state-action profile will always incur the same tax, no matter when it is performed. In a dynamic taxation scheme, the same state-action profile may incur different taxes in different circumstances: it is history-dependent. We first show that dynamic taxation schemes are strictly more powerful than static taxation schemes, making them a more appropriate model of incentives in the context of concurrent games, characterise the conditions under which an LTL objective Υ can be implemented in a game using dynamic taxation schemes, and begin to investigate the computational complexity of the corresponding decision problems.
§ PRELIMINARIES
Where S is a set, we denote the powerset of S by 2^S.
We use various propositional languages to express properties of the
systems we consider. In these languages, we will let Φ be a
finite and non-empty vocabulary of Boolean variables, with typical
elements p, q, …. Where a is a finite word and b is also a word (either
finite or infinite), we denote the word obtained by concatenating
a and b by a b. Where a is a finite word, we denote by a^ω the infinite repetition of a. Finally, we use ℝ^n_+ for the set of n-tuples of non-negative real numbers.
Concurrent Game Arenas:
We work with concurrent game structures, which in this work we will refer to as arenas (to distinguish them from the game structures that we introduce later in this section) <cit.>.
Formally a concurrent game arena is given by a structure
= (,,_1, …,_n,,,,s_0),
where: is a finite and non-empty set of arena states; = 1, …, n is the set of agents – for any i ∈, we let -i = ∖i denote the set of all agents excluding i; for each i ∈, _i is the finite and non-empty set of unique actions available to agent i – we let = _i ∈_i denote the set of all actions available to all players in the game and = _1 ⋯_n denote the set of all action profiles; : _1 ⋯_n → is the state transformer function which prescribes how the state of the arena is updated for each possible action profile – we refer to a pair (s,), consisting of a state s ∈ and an action profile ∈ as a state-action profile; : _1 ⋯_n →_+^n is the cost function – given a state-action profile (s,) and an agent i ∈, we write _i(s,) for the i-th component of (s,), which corresponds to the cost that agent i incurs when is executed at s; : →2^Φ is a labelling function that specifies which propositional variables are true in each state s ∈; and s_0 ∈ is the initial state of the arena. In what follows, it is useful to define for every agent i ∈ the value c_i^* to be the maximum cost that i could incur through the execution of a state-action profile: c_i^* = max_i(s,) | s ∈, ∈.
Runs:
Games are played in an arena as follows. The arena begins in its initial state s_0, and each agent i selects an action α_i ∈ Ac_i to perform; the actions so selected define an action profile, α⃗ ∈ Ac_1 × ⋯ × Ac_n. The arena then transitions to a new state s_1 = (s_0,α_1, …,α_n). Each agent then selects another action α'_i ∈ Ac_i, and the arena again transitions to a new state s_2 = (s_1,α⃗'). In this way, we trace out an infinite interleaved sequence of states and action profiles, referred to as a run,
ρ : s_0 –(α⃗_0)⟶ s_1 –(α⃗_1)⟶ s_2 –(α⃗_2)⟶ ⋯.
Where ρ is a run and k ∈ ℕ, we write s(ρ,k) to denote the state indexed by k in ρ, so s(ρ,0) is the first state in ρ, s(ρ,1) is the second, and so on. In the same way, we denote the k-th action profile played in a run ρ by α⃗(ρ,k-1) and, to single out an individual agent i's k-th action, we write α_i(ρ,k-1).
Above, we defined the cost function with respect to individual state-action pairs. In what follows, we find it useful to lift the cost function from individual state-action pairs to sequences of state-action pairs and runs. Since runs are infinite, simply taking the sum of costs is not appropriate: instead, we consider the cost of a run to be the average cost incurred by an agent i over the run; more precisely, we define the average cost incurred by agent i over the first t steps of the run ρ as C_i(ρ,0:t) = 1/(t+1) ∑_j=0^t c_i(ρ,j) for t ≥ 1, where by c_i(ρ,j) we mean c_i(s(ρ,j),α⃗(ρ,j)).
Then, we define the cost incurred by an agent i over the run ρ, denoted C_i(ρ), as the inferior limit of means: C_i(ρ) = lim inf_t→∞ C_i(ρ,0:t). The value C_i(ρ) is always well defined because the sequence of averages C_i(ρ,0:t) is bounded.
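To make the limit-average cost concrete, the following sketch (ours; names such as step_costs are illustrative) computes the running averages C_i(ρ,0:t) over a finite prefix and takes the smallest average after a burn-in as a finite-horizon proxy for the lim inf; for the ultimately periodic runs generated by finite-state strategies, this proxy approaches the average cost over the loop.

from typing import List

def running_averages(step_costs: List[float]) -> List[float]:
    """C_i(rho, 0:t) = (1/(t+1)) * sum_{j<=t} c_i(rho, j) for each prefix length t."""
    avgs, total = [], 0.0
    for t, c in enumerate(step_costs):
        total += c
        avgs.append(total / (t + 1))
    return avgs

def approx_liminf_cost(step_costs: List[float], burn_in: int = 100) -> float:
    """Finite-horizon proxy for the inferior limit of the running averages."""
    avgs = running_averages(step_costs)
    return min(avgs[burn_in:]) if len(avgs) > burn_in else avgs[-1]

# Example: a run that alternates per-step costs 0 and 1 has limit-average cost 1/2.
costs = [t % 2 for t in range(10_000)]
print(round(approx_liminf_cost(costs), 3))  # ~0.5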
Linear Temporal Logic:
We use the language of Linear Temporal Logic
(LTL) to express properties of runs <cit.>. Formally, the syntax of LTL is defined wrt. a set Φ
of Boolean variables by the following grammar:
φ ::= ⊤ | p | ¬φ | φ ∨ φ | Xφ | φ U φ
where p ∈Φ.
Other usual logic connectives (“⊥”, “∧”, “→”, “↔”) are defined in terms of ¬ and ∨ in the conventional way. Given a set of
variables Φ, let LTL(Φ) be the set of LTL formulae
over Φ; where the variable set Φ is clear from the context, we simply write LTL. We interpret formulae of LTL with respect to pairs (ρ,t), where ρ is a run, and t ∈ ℕ is a temporal index into ρ. Any given LTL formula may be true at none or multiple time points on a run; for example, a formula Xq will be true at a time point t ∈ ℕ on a run ρ if q is true on the run ρ at time t+1. We will write (ρ,t) ⊨ ϕ to mean that ϕ ∈ LTL is true at time t ∈ ℕ on run ρ. The rules
defining when formulae are true (i.e., the semantics of LTL) are
defined as follows:
(ρ,t) ⊨ ⊤;
(ρ,t) ⊨ p iff p ∈ λ(s(ρ,t)), where λ is the arena's labelling function;
(ρ,t) ⊨ ¬φ iff it is not the case that (ρ,t) ⊨ φ;
(ρ,t) ⊨ φ∨ψ iff (ρ,t) ⊨ φ or (ρ,t) ⊨ ψ;
(ρ,t) ⊨ Xφ iff (ρ,t+1) ⊨ φ;
(ρ,t) ⊨ φ U ψ iff there exists t' ≥ t such that (ρ,t') ⊨ ψ and (ρ,t'') ⊨ φ for all t ≤ t'' < t'.
We write ρ ⊨ ϕ as a shorthand for (ρ,0) ⊨ ϕ, in which case we say that ρ satisfies φ. A formula φ is satisfiable if there is some run satisfying φ. Checking satisfiability for LTL formulae is known to be
PSpace-complete <cit.>, while the synthesis problem for LTL is 2ExpTime-complete <cit.>.
In addition to the LTL tense operators X (“in the next state…”) and U (“…until …”), we make use of the two derived operators F (“eventually…”) and G (“always…”), which are defined as follows <cit.>:
Fφ = ⊤ U φ and Gφ = ¬F¬φ.
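Because finite-state strategies (and the taxation machines introduced later) generate ultimately periodic, lasso-shaped runs, LTL satisfaction can be checked directly on such runs. The sketch below (ours; the run is assumed to be given as a prefix plus a loop of label sets) evaluates ¬, ∨, X and U, with F and G derived exactly as above, by scanning the finitely many distinct suffix positions of the lasso.

# A lasso run is given by the labels of its prefix states followed by the labels of its
# loop states (the loop repeats forever). Formulas are nested tuples:
# ('true',), ('ap', p), ('not', f), ('or', f, g), ('next', f), ('until', f, g).
def F(f): return ('until', ('true',), f)          # eventually
def G(f): return ('not', F(('not', f)))           # always

def holds(formula, prefix, loop, t=0):
    n = len(prefix) + len(loop)
    def canon(i):
        # Fold any time point onto one of the finitely many distinct suffix positions.
        return i if i < len(prefix) else len(prefix) + (i - len(prefix)) % len(loop)
    t = canon(t)
    op = formula[0]
    if op == 'true':
        return True
    if op == 'ap':
        return formula[1] in (prefix + loop)[t]
    if op == 'not':
        return not holds(formula[1], prefix, loop, t)
    if op == 'or':
        return holds(formula[1], prefix, loop, t) or holds(formula[2], prefix, loop, t)
    if op == 'next':
        return holds(formula[1], prefix, loop, t + 1)
    if op == 'until':
        # On a lasso, a witness for the until (if any) occurs within the next n positions.
        for k in range(n + 1):
            if holds(formula[2], prefix, loop, t + k):
                return True
            if not holds(formula[1], prefix, loop, t + k):
                return False
        return False
    raise ValueError(op)

# G F p holds on the run (s0 s1)^omega in which p labels only s1.
print(holds(G(F(('ap', 'p'))), prefix=[], loop=[set(), {'p'}]))   # True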
Strategies:
We model strategies for agents as finite-state machines with output. Formally, strategy σ_i for agent i ∈ is given by a structure σ_i = (Q_i,next_i,do_i,q_i^0), where Q_i is a finite set of machine states, next_i : Q_i _1 ⋯_n → Q_i is the machine's state transformer function, do_i : Q_i →_i is the machine's action selection function, and q^0_i ∈ Q_i is the machine's initial state. A collection of strategies, one for each agent i∈, is a strategy profile: = (σ_1, …, σ_n). A strategy profile enacted in an arena will generate a unique run, which we denote by ρ(,); the formal definition is standard, and we will omit it here <cit.>. Where is clear from the context, we will simply write ρ(). For each agent i∈, we write Σ_i for the set of all possible strategies for the agent and Σ = Σ_1 ⋯Σ_n for the set of all possible strategy profiles for all players.
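The way a profile of finite-state strategies induces a run can be made explicit. The sketch below (ours; all identifiers and the toy arena are illustrative) represents each strategy as a machine with output and unfolds the induced run for a bounded number of steps — enough to expose the lasso, since the joint machine/arena state space is finite.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class FSMStrategy:
    states: set
    next_state: Callable[[object, Tuple], object]   # next_i : Q_i x action-profile -> Q_i
    do: Callable[[object], object]                   # do_i : Q_i -> Ac_i
    init: object

def unfold_run(arena_tr, s0, strategies: List[FSMStrategy], steps: int = 20):
    """Generate the first `steps` states and action profiles of the induced run."""
    s, qs = s0, [sig.init for sig in strategies]
    run = []
    for _ in range(steps):
        profile = tuple(sig.do(q) for sig, q in zip(strategies, qs))
        run.append((s, profile))
        s = arena_tr(s, profile)
        qs = [sig.next_state(q, profile) for sig, q in zip(strategies, qs)]
    return run

# Toy arena with states 'a'/'b': the state flips whenever both agents play 1.
tr = lambda s, prof: ('b' if s == 'a' else 'a') if all(prof) else s
# Memoryless strategy that always plays 1, written as a one-state machine.
always1 = FSMStrategy(states={'q0'}, next_state=lambda q, prof: 'q0', do=lambda q: 1, init='q0')
print(unfold_run(tr, 'a', [always1, always1], steps=4))
# [('a', (1, 1)), ('b', (1, 1)), ('a', (1, 1)), ('b', (1, 1))]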
For a set of distinct agents A ⊆, we write Σ_A = ∏_i ∈ AΣ_i for the set of partial strategy profiles available to the group A and Σ_-A = ∏_j ∈∖ AΣ_j for the set of partial strategy profiles available to the set of all agents excluding those in A. Where = (σ_1, …, σ_i, …, σ_n) is a strategy profile and σ_i' is a strategy for agent i, we denote the strategy profile obtained by replacing the i-th component of with σ_i' by (_-i,σ_i'). Similarly, given a strategy profile and a set of agents A ⊆, we write _A = (σ_i)_i ∈ A to denote a partial strategy profile for the agents in A and if _A' ∈Σ_A is another partial strategy profile for A, we write (_-A,_A') for the strategy profile obtained by replacing _A in with _A'.
Games, Utilities, and Preferences:
We obtain a concurrent game from an arena by associating with each agent i a goal γ_i, represented as an LTL formula. Formally, a concurrent game is given by a structure
= (,,_1, …,_n,,,,s_0, γ_1, …, γ_n),
where (,,_1, …,_n,,,,s_0) is a concurrent game arena, and γ_i is the LTL goal of agent i, for each i ∈. Runs in a concurrent game are defined over the game's arena , and hence we use the notations ρ(,) and ρ(,) interchangeably. When the game or arena is clear from the context, we omit the and simply write ρ(). Given a strategy profile , the generated run ρ() will satisfy the goals of some agents and not satisfy the goals of others, that is, there will be a set W() = i ∈ : ρ()γ_i of winners and a set L() = ∖ W() of losers.
We are now ready to define preferences for agents. Our basic idea is that, as in <cit.>, agents' preferences are structured: they first desire to accomplish their goal, and secondarily desire to minimise their costs. To capture this idea, it is convenient to define preferences via utility functions u_i over runs, where i's utility for a run ρ is
u_i(ρ) =
{ 1 + c_i^* - C_i(ρ)   if ρ ⊨ γ_i,
  -C_i(ρ)               otherwise.
Defined in this way, if an agent i gets their goal achieved, their utility will lie in the range [1, c_i^*+1] (depending on the cost they incur), whereas if they do not achieve their goal, their utility will lie within [-c_i^*, 0].
Preference relations _i over runs are then defined in the obvious way: ρ_1 _i ρ_2 if and only if u_i(ρ_1) ≥ u_i(ρ_2), with indifference relations ∼_i and strict preference relations _i defined as usual.
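A small sketch (ours) of the lexicographic structure of this utility: because winners' utilities lie in [1, c_i^*+1] and losers' in [-c_i^*, 0], goal satisfaction always dominates, and costs only break ties within each band.

def utility(goal_satisfied: bool, avg_cost: float, c_star: float) -> float:
    """u_i(rho): winners land in [1, c*+1], losers in [-c*, 0], so any winner beats any loser."""
    return (1 + c_star - avg_cost) if goal_satisfied else -avg_cost

# With c* = 2: a winner paying the maximum cost still beats a loser paying nothing.
print(utility(True, 2.0, 2.0), ">", utility(False, 0.0, 2.0))   # 1.0 > 0.0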
Nash equilibrium:
A strategy profile is a (pure strategy) Nash equilibrium if there is no agent i and strategy σ_i' such that ρ(_-i,σ_i') _i ρ(). If such a strategy σ_i' exists for a given agent i, we say that σ_i' is a beneficial deviation for i from .
Given a game , let () denote its set of Nash equilibria. In general, Nash equilibria in this model of concurrent games may require agents to play infinite memory strategies <cit.>, but we do not consider these in this study [Even in the purely quantitative setting where all agents' goals are ⊤, it is still possible that some Nash equilibria require infinite memory <cit.>.].
Where ϕ is an LTL formula, we find it useful to define _ϕ() to be the set of Nash equilibrium strategy profiles that result in ϕ being satisfied: _ϕ()= ∈() |ρ()ϕ.
It is sometimes useful to consider a concurrent game that is modified so that no costs are incurred in it. We call such a game a cost-free game. Where is a game, let ^0 denote the game that is the same as except that the cost function ^0 of ^0 is such that _i^0(s,) = 0 for all i ∈, s ∈, and ∈.
Given this, the following is readily established (cf., <cit.>):
Given a game , the problem of checking
whether (^0) ≠∅ is 2ExpTime-complete.
The notion of Nash equilibrium is closely related to the concept of beneficial deviations. Given how preferences are defined in this study, it will be useful to introduce terminology that captures the potential deviations that agents may have <cit.>. Firstly, given a game , we say that a strategy profile ^1 ∈Σ is distinguishable from another strategy profile ^2 ∈Σ if ρ(^1,) ≠ρ(^2,). Then, for an agent i, a strategy profile , and an alternative strategy σ_i' ≠σ_i, we say that σ_i' is an initial deviation for agent i from strategy profile , written →_i (_-i,σ_i'), if we have i ∈ W() ⇒ i ∈ W(_-i,σ_i') and strategy profile is distinguishable from (_-i,σ_i').
§ TAXATION SCHEMES
We now introduce a model of incentives for concurrent games. For incentives to work, they clearly must appeal to an agent's preferences _i. As we saw above, incentives for our games are defined with respect to both goals and costs: an agent's primary desire is to see their goal achieved – the desire to minimise costs is strictly secondary to this. We will assume that we cannot change agents' goals: they are assumed to be fixed and immutable. It follows that any incentives we offer an agent to alter their behaviour must appeal to the costs incurred by that agent. Our basic model of incentives assumes that we can alter the cost structure of a game by imposing taxes, which depend on the collective actions that agents choose in different states. Taxes may increase an agent's costs, influencing their preferences and rational choices.
Formally, we model static taxation schemes as functions τ : →_+^n. A static taxation scheme τ imposed on a game = (,,_1, …,_n,,,,s_0, γ_1, …, γ_n) will result in a new game, which we denote by
^τ = (,,_1, …,_n,,^τ,,s_0, γ_1, …, γ_n),
which is the same as except that the cost function ^τ of ^τ is defined as ^τ(s,) = (s,) + τ(s,). Similarly, we write ^τ to denote the arena with modified cost function ^τ associated with ^τ and u_i^τ(ρ) to denote the utility function of agent i over run ρ with the modified cost function ^τ. Given and a taxation scheme τ, we write ρ_1 _i^τρ_2 iff u_i^τ(ρ_1) ≥ u_i^τ(ρ_2). The indifference relations ∼_i^τ and strict preference relations _i^τ are defined analogously.
The model of static taxation schemes has the advantage of simplicity, but it is naturally limited in the range of behaviours it can incentivise—particularly with respect to behaviours Υ expressed as LTL formulae. To overcome this limitation, we therefore introduce a dynamic model of taxation schemes. This model essentially allows a designer to impose taxation schemes that can choose to tax the same action in different amounts, depending on the history of the run to date. A very natural model for dynamic taxation schemes is to describe them using a finite state machine with output—the same approach that we used to model strategies for individual agents. Formally, a dynamic taxation scheme T is defined by a tuple T = (Q_T,next_T,do_T,q_T^0) where Q_T is a finite set of taxation machine states, next_T : Q_T _1 ⋯_n → Q_T is the transition function of the machine, q_T^0 ∈ Q_T is the initial state, and do_T : Q_T → (→_+^n) is the output function of the machine.
With this, let 𝒯 be the set of all dynamic taxation schemes for a game . As a run unfolds, we think of the taxation machine being executed alongside the strategies. At each time step, the machine outputs a static taxation scheme, which is applied at that time step only, with do_T(q_T^0) being the initial taxation scheme imposed.
When we impose dynamic taxation schemes, we no longer have a simple transformation ^τ on games as we did with static taxation schemes τ. Instead, we define the effect of a taxation scheme with respect to a run ρ. Formally, given a run ρ of a game , a dynamic taxation scheme T induces an infinite sequence of static taxation schemes, which we denote by t(ρ,T). We can think of this sequence as a function t(ρ,T) : → (→_+^n).
We denote the cost of the run ρ in the presence of a dynamic taxation scheme T by ^T(ρ):
^T(ρ) =
lim inf_u→∞1/u∑_v=0^u(ρ,v) +
t(ρ,T)(v)(s(ρ,v),(ρ,v))_(*)
The expression (*) denotes the vector of taxes incurred by the agents as a consequence of performing the action profile which they chose at time step v on the run ρ. The cost _i^T(ρ) to agent i of the run ρ under T is then given by the i-th component of ^T(ρ).
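As a concrete sketch (ours, with illustrative names), a dynamic taxation scheme can be run alongside a run prefix: at each step the machine outputs a static scheme, the resulting tax is added to the base cost, and the averages are formed as above. The toy machine below taxes an action profile only when it immediately follows a particular profile — something no static scheme can express.

from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class TaxationMachine:
    next_state: Callable[[object, Tuple], object]   # next_T : Q_T x action-profile -> Q_T
    do: Callable[[object], Callable]                 # do_T : Q_T -> static scheme (s, profile) -> tax vector
    init: object

def taxed_average_cost(run, base_cost, T, agent):
    """Average of c_i + tax_i over a finite run prefix (a finite-horizon proxy for the liminf)."""
    q, total = T.init, 0.0
    for (s, profile) in run:
        tax = T.do(q)(s, profile)
        total += base_cost(s, profile)[agent] + tax[agent]
        q = T.next_state(q, profile)
    return total / len(run)

# Toy example with two agents and zero base cost: the machine taxes agent 0 by 1 at any
# step that immediately follows a step where the joint action (1, 1) was played.
base = lambda s, prof: (0.0, 0.0)
T = TaxationMachine(
    next_state=lambda q, prof: 'after11' if prof == (1, 1) else 'free',
    do=lambda q: (lambda s, prof: (1.0, 0.0)) if q == 'after11' else (lambda s, prof: (0.0, 0.0)),
    init='free',
)
run = [('a', (1, 1)), ('a', (1, 1)), ('a', (0, 1)), ('a', (1, 1))]
print(taxed_average_cost(run, base, T, agent=0))   # (0 + 1 + 1 + 0) / 4 = 0.5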
Two robots are situated in a grid world (Figure <ref>), where atomic propositions represent events where a robot picks up an apple (label a_ij represents agent i picking up apple j), has delivered an apple to the basket (label b_i represents agent i delivering an apple to the basket), or where the robots have crashed into each other (label c).
Additionally, suppose that both robots are programmed with LTL goals γ_1 = γ_2 = c. In this way, the robots are not pre-programmed to perform specific tasks, and it is therefore the duty of the principal to design taxes that motivate the robots to perform a desired function, e.g., pick apples and deliver them to the basket quickly. Because the game is initially costless, there is an infinite number of Nash equilibria that could arise from this scenario and it is by no means obvious that the robots will choose one in which they perform the desired function. Hence, the principal may attempt to design a taxation scheme to eliminate those that do not achieve their objective, thus motivating the robots to collect apples and deliver them to the basket. Clearly, using dynamic taxation schemes affords the principal more control over how the robots should accomplish this than static taxation schemes.
§ NASH IMPLEMENTATION
We consider the scenario in which a principal, who is external to the game, has a particular goal that they wish to see satisfied within the game; in a general economic setting, the goal might be intended to capture some principle of social welfare, for example. In our setting, the goal is specified as an LTL formula Υ, and will typically represent a desirable system/global behaviour. The principal has the power to influence the game by choosing a taxation scheme and imposing it upon the game. Then, given a game and a goal Υ, our primary question is whether it is possible to design a taxation scheme T such that, assuming the agents, individually and independently, act rationally (by choosing strategies that collectively form a Nash equilibrium in the modified game), the goal Υ will be satisfied in the run ρ() that results from executing the strategies . In this section, we will explore two ways of interpreting this problem.
E-Nash Implementation: A goal Υ is E-Nash implemented by a taxation scheme T in if there is a Nash equilibrium strategy profile of the game ^T such that ρ() Υ. The notion of E-Nash implementation is thus analogous to the E-Nash concept in rational verification <cit.>. Observe that, if the answer to this question is “yes” then this implies that the game ^T has at least one Nash equilibrium. Let us define the set to be the set of taxation schemes T that E-Nash implements Υ in :
=
T ∈𝒯|_Υ(^T) ≠∅ .
The obvious decision problem is then as follows:
E-Nash Implementation:
Given: Game , LTL goal Υ.
Question: Is it the case that ≠∅?
This decision problem proves to be closely related to the E-Nash problem <cit.>, and the following result establishes its complexity:
E-Nash Implementation is 2ExpTime-complete, even when 𝒯 is restricted to static taxation schemes.
For membership, we can check whether Υ is satisfied on any Nash equilibrium of the cost-free concurrent game ^0 obtained from by effectively removing its cost function using a static taxation scheme which makes all costs uniform for all agents. This then becomes the E-Nash problem, known to be 2ExpTime-complete. The answer will be “yes” iff Υ is satisfied on some Nash equilibrium of ^0; and if the answer is “yes”, then observing that (^T) ⊆(^0) for all taxation schemes T ∈𝒯 <cit.>, the given LTL goal Υ can be E-Nash implemented in . For hardness, we can reduce the problem of checking whether a cost-free concurrent game G has a Nash equilibrium (Theorem <ref>). Simply ask whether Υ = ⊤ can be E-Nash implemented in ^0.
For the second part of the result, observe that the reduction above only involves removing the costs from the game and checking the answer to E-Nash, which can be done using a simple static taxation scheme. Hardness follows in a similar manner.
A-Nash Implementation: The universal counterpart of E-Nash implementation is A-Nash Implementation.
We say that Υ is A-Nash implemented by T in if we have both 1) Υ is E-Nash implemented by T in game ; and 2) (^T) = _Υ(^T).
We thus define as follows:
=
T ∈𝒯|(^T) = _Υ(G^T) ≠∅
The decision problem is then:
A-Nash Implementation:
Given: Game , LTL goal Υ.
Question: Is it the case that ≠∅?
The following result shows that, unlike the case of E-Nash implementation, dynamic taxation schemes are strictly more powerful than static taxation schemes for A-Nash implementation. It can be verified that the game in Figure <ref>, the taxation scheme in Figure <ref>, and the principal's goal being Υ = G(p ↔ q) are witnesses to this result (see Appendix for the full proof):
There exists a game and an LTL goal Υ such that ≠∅, but not if 𝒯 is restricted to static taxation schemes.
Before proceeding with the A-Nash Implementation problem, we will need to introduce some additional terminology and concepts, beginning first with deviation graphs, paths, and cycles. A deviation graph is a directed graph Γ = (, E), where ⊆Σ is a set of nodes which represent strategy profiles in Σ and E ⊆(,') ∈×|→_i ' i ∈ is a set of directed edges between strategy profiles that represent initial deviations. Additionally, we say that a dynamic taxation scheme T induces a deviation graph Γ = (,E) if for every (,') ∈×, it holds that ' _i^T for some i ∈ if and only if (,') ∈ E. In other words, if the edges in a deviation graph precisely capture all of the beneficial deviations between its nodes under T, then the deviation graph is said to be induced by T.[This definition implies that a taxation scheme may induce many possible deviation graphs in general, depending on the nodes selected to be part of the graph.] Then, a deviation path is simply any path P = (^1, …,^m) within a deviation graph Γ where (^j,^j+1) ∈ E for all j ∈1,…,m-1.
Because the principal is only able to observe the actions taken by the agents and not their strategies directly, any taxation scheme that changes the cost of some strategy profile will also change the cost of all strategy profiles that are indistinguishable from by the same amount.
This naturally suggests that we modify the concept of a deviation path to take indistinguishability into account. To this end, we say that a sequence of runs P_o = (ρ^1,ρ^2,…,ρ^m) is an observed deviation path in a deviation graph Γ = (,E) if there exists an underlying tuple (^1, ^2, …, ^m) such that for all j ∈1,…,m, it holds that 1) ρ^j = ρ(^j), and 2) if j < m, then (^j,^j+1') ∈ E for some ^j+1' such that ρ(^j+1') = ρ(^j+1).
Then, a deviation cycle is a deviation path (^1,…,^m) where ρ(^1) = ρ(^m).
A deviation path P = (^1, ^2, …, ^m) is said to involve an agent i if ^j →_i ^j+1 for some j ∈1,…,m-1 and similarly, an observed deviation path P_o in a deviation graph involves agent i if the analogous property holds for all of its underlying sets. Given a game 𝒢 and a set of strategy profiles X, a taxation scheme T eliminates X if NE(𝒢^T) ∩ X = ∅. Finally, a set of strategy profiles X is said to be eliminable if there exists a taxation scheme that eliminates it. With this, we can characterise the conditions under which a finite set of strategy profiles is eliminable:
Let be a game and X ⊂Σ be a finite set of strategy profiles in . Then, X is eliminable if and only if there exists a finite deviation graph Γ = (, E) that satisfies the following properties: 1) For every ∈ X, there is some ' ∈ such that (,') ∈ E; and 2) Every deviation cycle in Γ involves at least two agents.
The forward direction follows by observing that if all deviation graphs fail to satisfy at least one of the two properties, then every deviation graph will either fail to eliminate some ∈ X if induced, or will not be inducible by any dynamic taxation scheme. The backward direction can be established by constructing a dynamic taxation scheme T^Γ that induces a deviation graph Γ satisfying the two properties. Using these properties, it follows that T^Γ eliminates X.
To conclude our study of dynamic taxation schemes, we present a characterisation of the A-Nash implementation problem.[Note that, in general, Proposition <ref> cannot be directly applied to Theorem <ref>, because it is assumed that the set to be eliminated is finite, whereas _Υ(^0) is generally infinite. However, this can be reconciled if some restriction is placed on the agents' strategies so that Σ is finite, which is the case in many game-theoretic situations of interest, e.g., in games with memoryless, or even bounded memory, strategies – both used to model bounded rationality.]
Let be a game and Υ be an LTL formula. Then ≠∅ if and only if the following conditions hold:
* ≠∅;
* _Υ(^0) is eliminable.
For the forward direction, it follows from the definition of the problem that if = ∅, then = ∅. Moreover, it is also clear that if _Υ(^0) is not eliminable, then it is impossible to design a (dynamic) taxation scheme such that only good equilibria remain in the game and hence, = ∅.
For the backward direction, suppose that the two conditions hold and let T be a taxation scheme that only affects the limiting-average costs incurred by agents under strategy profiles in _Υ(^0), and eliminates this set. Such a taxation scheme is guaranteed to exist by the assumption that condition (2) holds and because it is known that no good equilibrium is indistinguishable from a bad one. Now consider a static taxation scheme τ such that c_i(s,) + τ_i(s,) = ĉ for all i ∈, (s,) ∈×, and some ĉ≥max_i ∈ c_i^*. Combining τ with T gives us a taxation scheme T^* such that for each state q ∈ Q_T^* = Q_T and (s,) ∈×, we have do_T^*(q)(s,) = do_T(q)(s,) + τ(s,). Now, because T eliminates _Υ(^0), and (^τ) = (^0), it follows that T^* eliminates _Υ(^0). Finally, note that because the satisfaction of an LTL formula on a given run is solely dependent on the run's trace, it follows that all good equilibria, i.e., strategy profiles in _Υ(^0), are distinguishable from all bad equilibria, so we have _Υ(^0) ∩(^T^*) ≠∅.
It is straightforward to see that A-Nash Implementation is 2EXPTIME-hard via a simple reduction from the problem of checking whether a Nash equilibrium exists in a concurrent game – simply ask if the formula ⊤ can be A-Nash implemented in ^0. However, it is an open question whether a matching upper bound exists and we conjecture that it does not. This problem is difficult primarily for two reasons. Firstly, it is well documented that Nash equilibria may require infinite memory in games with lexicographic ω-regular and mean-payoff objectives <cit.>, and the complexity of deciding whether a Nash equilibrium even exists in games with our model of preferences has yet to be settled <cit.>. Secondly, Theorem <ref> and Proposition <ref> suggest that unless the strategy space is restricted to a finite set, a taxation scheme that A-Nash implements a formula may require reasoning over an infinite deviation graph, and hence require infinite memory. Nevertheless, our characterisation under such restrictions provides the first step towards understanding this problem in the more general setting.
§ RELATED WORK AND CONCLUSIONS
This work was motivated by <cit.>, and based on that work, presents four main contributions: the introduction of static and dynamic taxation schemes as an extension to concurrent games expanding the model in (one-shot) Boolean games <cit.>; a study of the complexity of some of the most relevant computational decision problems building on previous work in rational verification <cit.>; evidence (formal proof) of the strict advantage of dynamic taxation schemes over static ones, which illustrates the role of not just observability but also memory to a principal's ability to (dis)incentivise certain outcomes <cit.>; and a full characterisation of the eliminability of sets of strategy profiles under dynamic taxation schemes and the A-Nash implementation problem.
The incentive design problem has been studied in many different settings, and <cit.> group existing approaches broadly into those from the economics, control theory, and machine learning communities. However, more recent works in this area adopt multi-disciplinary methods such as automated mechanism design <cit.>, which typically focus on the problem of constructing incentive-compatible mechanisms to optimise a particular objective such as social welfare. Other approaches in this area reduce mechanism design to a program synthesis problem <cit.> or a satisfiability problem for quantitative strategy logic formulae <cit.>. The notion of dynamic incentives has also been investigated in (multi-agent) learning settings <cit.>. These works focus solely on adaptively modifying the rewards for quantitative reward-maximising agents. In contrast, our model of agent utilities more naturally captures fundamental constraints on the principal's ability to (dis)incentivise certain outcomes due to the lexicographic nature of agents' preferences <cit.>.
Another area closely related to incentives is that of norm design <cit.>. Norms are often modelled as the encouragement or prohibition of actions that agents may choose to take by a regulatory agent.
The most closely related works in this area are those of <cit.>, who study the problem of synthesising dynamic norms in different classes of concurrent games to satisfy temporal logic specifications. Whereas norms in these frameworks have the ability to disable actions at runtime, our model confers only the power to incentivise behaviours upon the principal. Finally, other studies model norms with violation penalties, but differ from our work in how incentives, preferences, and strategies are modelled <cit.>.
In summary, a principal's ability to align self-interested decision-makers' interests with higher-order goals presents an important research challenge for promoting cooperation in multi-agent systems. The present study highlights the challenges associated with incentive design in the presence of constraints on the kinds of behaviours that can be elicited, makes progress on the theoretical aspects of this endeavour through an analysis of taxation schemes, and suggests several avenues for further work. Promising directions include extensions of the game model to probabilistic/stochastic or learning settings, finding optimal complexity upper bounds for the A-Nash implementation problems, and consideration of different formal models of incentives. We expect that this and such further investigations will positively contribute to our ability to develop game-theoretically aware incentives in multi-agent systems.
eptcs
§ SUPPLEMENTARY MATERIAL
There exists a game and an LTL goal Υ such that ≠∅, but not if 𝒯 is restricted to static taxation schemes.
Consider the concurrent game in Figure <ref>. Intuitively, both agents desire to always eventually visit either s_1 or s_2. Suppose that the principal's objective is Υ = G(p ↔ q), i.e., they would like the agents to never visit s_2 or s_3.
Firstly, observe that there is no static taxation scheme which can A-Nash implement Υ, as any modification to the costs of the game will not eliminate any Nash equilibria where the agents visit s_2 or s_3 a finite number of times. This is due to the prefix-independence of costs in infinite games with limiting-average payoffs <cit.>.
However, the dynamic taxation scheme depicted in Figure <ref> A-Nash implements Υ. To see this, observe that for any strategy profile that visits s_2 or s_3 a finite number of times, there exists a deviation for some agent to ensure that s_2 and s_3 are never visited. Such a deviation will result in all agents i ∈1,2 satisfying their goals γ_i and strictly reducing their average costs from at least c_i^*+1 to some value strictly below this. This constitutes a beneficial deviation and hence, there is no Nash equilibrium under T that does not satisfy Υ. Moreover, any strategy profile that leads to the sequence of states s(ρ(),0:) = (s_0 s_1)^ω is a Nash equilibrium of ^T and hence goal Υ is A-Nash implemented by T in this game.
Let be a game and X ⊂Σ be a finite set of strategy profiles in . Then, X is eliminable if and only if there exists a finite deviation graph Γ = (, E) that satisfies the following properties: 1) For every ∈ X, there is some ' ∈ such that (,') ∈ E; and 2) Every deviation cycle in Γ involves at least two agents.
For the forward direction, suppose that there is no deviation graph Γ satisfying both properties (1) and (2) in the statement. Then, for all deviation graphs Γ, either for some ∈ X, there is no ' ∈ such that (,') ∈, or there is some deviation cycle in Γ involving only one agent. Now consider any deviation graph Γ = (,E), where = X ∪' |→_i ' ∈ X i ∈. In the first case, it is clear that any taxation scheme that induces Γ does not eliminate and hence X. In the second case, no taxation scheme can induce the deviation graph Γ. To see why, suppose for contradiction that some taxation scheme T induces Γ and let i be the agent for which there is a deviation cycle C = ^1,…,^m in Γ involving only agent i. Then, we have ^1 _i^T ^2 _i^T …_i^T ^m and by transitivity of the preference relation _i^T, we can conclude that ^1 _i^T ^m. However, by definition of a deviation cycle, ^1 and ^m are indistinguishable, so agent i will always receive the same utility under both ^1 and ^m, no matter what taxation scheme is imposed on them and hence, we have a contradiction. From this, we can conclude that every deviation graph that can be induced by a taxation scheme does not eliminate X and hence, X is not eliminable, proving this part of the statement.
For the backward direction, assume that there is a deviation graph Γ that satisfies both properties. Under this assumption, we will construct a dynamic taxation scheme T that eliminates X. To assign the appropriate costs to different strategy profiles, we will make use of the lengths of deviation paths within Γ. For every i ∈, let ℓ_i denote the length of the longest observed deviation path in Γ that involves only agent i. Additionally, for all ∈, let d_i(ρ()) denote the length of the longest observed deviation path in Γ that starts from ρ() and involves only i. The difference between these two quantities will serve as the basis for how much taxation an agent i will incur for any given strategy profile in . Observe that because it is assumed that no deviation cycle involves only one agent, both quantities are well-defined and finite for all agents and strategy profiles. Then, for a deviation graph Γ and a run ρ, let ρ be the set of agents i ∈ for which there is some pair of strategy profiles , ' ∈ such that we have both (,') ∈ E_D and ρ = ρ('). In other words, ρ represents the set of agents who have an initial deviation from some other strategy profile in to one that generates the run ρ. With this, we would like to construct a dynamic taxation scheme such that for any strategy profile , the following criteria are satisfied:
* C_i^T(ρ()) ≥ (ℓ_i - d_i(ρ()))· (c_i^*+1) if i ∈ρ;
* C_i^T(ρ()) = C_i(ρ()) otherwise.
Intuitively, the idea is to ensure that for every edge (,') ∈ E, the agent i ∈ for whom →_i ' gets taxed by a significantly higher amount for choosing compared to when they choose '. To see why it is possible to construct such a taxation scheme, first observe that if ρ≠ρ' for any two runs ρ,ρ', then there is some dynamic taxation scheme that can distinguish between the two by simply tracing out the two runs up to the first point in which they differ and then branching accordingly. From this point onwards, the dynamic taxation scheme can then output static taxation schemes, which assign different limiting average costs to the agents according to the above criteria. Extending this approach to a taxation scheme that distinguishes between all unique runs generated by elements of , it follows that there is a dynamic taxation scheme T that satisfies the two criteria. Consequently, for all (,') ∈ E, it follows that ' _i^T because ρ() ≠ρ(') by definition of the initial deviation relation →_i. Moreover, because it is assumed that no deviation cycle involves only one agent, T gives rise to a strict total ordering _i^T on the elements of for each i ∈. Finally, by property (1), it holds that for every ∈ X, some agent has a beneficial deviation from to another ' ∈ under T and hence, T eliminates X.
|
http://arxiv.org/abs/2307.04981v1 | 20230711023946 | A Multi-view Impartial Decision Network for Frontotemporal Dementia Diagnosis | [
"Guoyao Deng",
"Ke Zou",
"Meng Wang",
"Xuedong Yuan",
"Sancong Ying",
"Huazhu Fu"
] | cs.CV | [
"cs.CV"
] |
G.Deng et al.
National Key Laboratory of Fundamental Science on Synthetic Vision,
Sichuan
University, Sichuan, China College of Computer Science, Sichuan University, Sichuan, China Institute of High Performance Computing, A*STAR, Singapore
A Multi-view Impartial Decision Network for Frontotemporal Dementia Diagnosis
Guoyao Deng1, Ke Zou1,3, Meng Wang3, Xuedong Yuan2, Sancong Ying2 and Huazhu Fu3
August 12, 2023
====================================================================================
Frontotemporal dementia (FTD) diagnosis has progressed successfully using deep learning techniques. However, current FTD identification methods suffer from two limitations. Firstly, they do not exploit the potential of multi-view functional magnetic resonance imaging (fMRI) for classifying FTD. Secondly, they do not consider the reliability of the multi-view FTD diagnosis.
To address these limitations, we propose a reliable multi-view impartial decision network (MID-Net) for FTD diagnosis in fMRI. Our MID-Net provides confidence for each view and generates a reliable prediction without any conflict. To achieve this, we employ multiple expert models to extract evidence from the abundant neural network information contained in fMRI images. We then introduce the Dirichlet Distribution to characterize the expert class probability distribution from an evidence level. Additionally, a novel Impartial Decision Maker (IDer) is proposed to combine the different opinions inductively to arrive at an unbiased prediction without additional computation cost.
Overall, our MID-Net dynamically integrates the decisions of different experts on FTD disease, especially when dealing with multi-view high-conflict cases. Extensive experiments on a high-quality FTD fMRI dataset demonstrate that our model outperforms previous methods and provides high uncertainty for hard-to-classify examples. We believe that our approach represents a significant step toward the deployment of reliable FTD decision-making under multi-expert conditions. We will release the codes for reproduction after acceptance.
§ INTRODUCTION
Frontotemporal Dementia (FTD) has become the second most common type of presenile dementia, which causes personality changes, progressive behavioral problems, and cognitive impairment in tens of millions of the elderly. FTD includes a range of subtypes, and this heterogeneity hampers the diagnosis process. Thus early and accurate diagnoses are of vital importance for comprehending the disease process and developing disease-modifying treatments <cit.>. For deep learning computer-aided diagnosis based on fMRI images, since fMRI images inherently provide multiple perspectives on brain neural network activity, multi-view input can enable the network to learn abundant brain activity information. However, two major problems exist in current deep learning FTD classification: single-view classification fails to extract sufficient brain activity information, and fusion strategies in trusted multi-view classification do not properly balance conflicts.
There are a variety of CAD methods currently used in fMRI image diagnosis <cit.>. Traditional machine learning CAD methods require hand-crafted features and feature selection, which ignore abundant original information hidden in fMRI images <cit.>. Deep learning CAD methods using networks like CNNs mostly focus on directly extracting information from fMRI images without feature engineering <cit.>. Sarraf et al. <cit.> used LeNet-5 to classify Alzheimer's Disease (AD) from healthy controls using single-view 2D images converted from fMRI data. Ramzan et al. <cit.> applied ResNet to classify AD with the same input data form. These approaches cannot correctly estimate the confidence of their predictions because of the drawback of softmax. Moreover, the promise of multi-view classification of fMRI images has not been investigated. To this end, a reliable model should both recognize sufficient brain neural network information and provide accurate confidence.
Multi-view classification is mainly divided into early fusion, late fusion, and score fusion according to fusion strategy<cit.>.
Confidence-based fusion proposed by Han et al. <cit.> is a late fusion strategy that combines decisions at the evidence level. It combines multi-view information with Dempster-Shafer theory, aiming to provide trusted predictions with confidence. Although this method promotes both classification reliability and robustness, the fatal "Zadeh's problem" remains unsolved <cit.>. When trying to combine two opinions that conflict with each other using DS-Combine <cit.>, the combined result becomes counter-intuitive and baseless. This reveals that DS-Combine is inapplicable to FTD diagnosis, as such results may affect the doctor's judgment, and even delay the diagnosis and effective treatment of FTD disease <cit.>. A balanced and risk-aware method is needed.
Based on the above analysis, we propose a credible Frontotemporal Dementia (FTD) multi-view classification method in this paper. Our model infers confidence for each view's prediction and properly handles the complementary and conflicting information among them, providing a more accurate and reliable diagnosis of FTD. To estimate the confidence of each view, we adopted evidential deep learning. Furthermore, we proposed the Impartial Decision Maker (IDer) to solve "Zadeh's problem" in current trusted multi-view classification, avoiding high-risk and counter-intuitive prediction. This approach is a crucial step towards safe and trusted FTD diagnosis and deployment.
To the best of our knowledge, we are the first to address conflict in FTD multi-view classification. Our contributions can be summarized as follows:
(1) We propose a novel multi-view impartial decision method (MID-Net) for Frontotemporal Dementia diagnosis and classification based on rs-fMRI images.
(2) We introduce the Impartial Decision Maker (IDer) for sample-adaptive multi-view integration, which combines multi-view information at the evidence level and forms a unified opinion without any conflict.
(3) We conduct sufficient experiments on the FTD dataset[The dataset has IRB certification.] to verify the performance of our MID-Net and the effectiveness of uncertainty estimation.
§ METHOD
§.§ Overall framework & Uncertainty quantification
The overall framework is shown in Fig. <ref>. In order to make full use of brain activity information in rs-fMRI images, we first deploy backbone networks for three sectional views. After the feature extraction, models form the opinions with collected evidence through a Dirichlet distribution. At the final decision level, we utilize IDer to complete the trusted fusion. We now delve into the details.
Evidential Classifier:
The drawback of the maximum-likelihood classifier has been discussed: the point estimate of softmax produces over-confident erroneous predictions when facing out-of-distribution samples. When deployed in CAD, this can cause ineffective treatment and even irreparable consequences. In order to quantify the confidence behind every choice our model makes for a single view, we utilize evidential classifiers for more reliable predictions, which infer the strength of evidence to quantify belief masses and uncertainty under a Dirichlet distribution <cit.>.
Dirichlet Distribution:
In the theory of subjective logic <cit.>, the Dirichlet distribution formalizes the model's opinion by assigning belief masses to the singletons of the frame of discernment and treating the uncertainty mass as an explicit parameter. This type of uncertainty mass expresses "I do not know" for all possible states. More specifically, in a frame of K mutually exclusive singletons (e.g., class labels), the K belief masses and the uncertainty mass of each view are all non-negative and sum to one, as:
u_^v + ∑_k=1^Kb_k^v = 1,
where u^v ≥ 0 and b_k^v ≥ 0 for k = 1,2,⋯,K denote the overall uncertainty and the belief of the k-th class, respectively.
The evidence e^v = [e_1^v,⋯,e_K^v] induces the parameters α_k^v of the Dirichlet distribution in subjective logic, i.e., α_k^v = e_k^v + 1. The belief mass b_k^v and uncertainty mass u^v are computed as:
b_k^v = e_k^v/S^v = (α_k^v - 1)/S^v and u^v = K/S^v,
where S^v = ∑_k=1^K α_k^v = ∑_k=1^K (e_k^v+1) is referred to as the Dirichlet strength. For each view, the more evidence observed for a possible class from a sample, the greater the corresponding belief. When little evidence is gained, greater uncertainty is assigned to this view. Composing opinions via the Dirichlet distribution avoids overconfident false predictions, and the estimated uncertainty makes the decision risk of our model visible.
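A minimal sketch (ours) of this evidence-to-opinion mapping: given non-negative evidence per class, form the Dirichlet parameters, the belief masses, and the uncertainty mass, which by construction sum to one.

import numpy as np

def opinion_from_evidence(evidence):
    """Map evidence e_k >= 0 to (belief masses b_k, uncertainty u) for K classes."""
    e = np.asarray(evidence, dtype=float)
    K = e.size
    alpha = e + 1.0                 # Dirichlet parameters
    S = alpha.sum()                 # Dirichlet strength
    b = e / S                       # belief masses
    u = K / S                       # uncertainty mass
    return b, u

b, u = opinion_from_evidence([9.0, 1.0, 0.0, 0.0])     # strong evidence for class 0
print(b.round(3), round(u, 3), round(b.sum() + u, 3))  # [0.643 0.071 0. 0.] 0.286 1.0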
§.§ Impartial Decision Maker
"Zadeh's problem" in FTD: As shown in Fig. <ref> (a), due to the characteristics of complementary brain activity information contained in different brain regions<cit.>, the conclusions drawn from different perspectives may be divergent. The Current fusion strategy DS-Combine cannot fuse two divided opinions from two different perspectives, even arrive at a counterintuitive, baseless opinion. We found that this problem of DS-combine is caused by using Dempster's rule to fuse opinions. This drawback of Dempster's rule has been pointed out by Zadeh et al.<cit.>. Therefore, in the multi-view classification of fMRI images, a fusion strategy that can properly handle conflicts of opinion is critical.
To better fuse opinions with low risk and resolve highly conflicting situations, we propose the Impartial Decision Maker based on the weighted operator theory <cit.>, aiming to achieve a more balanced information fusion. For any two views, the models' opinions towards K classes, O^1 = [b_1^1,b_2^1,...,b_K^1,u^1] and O^2 = [b_1^2,b_2^2,...,b_K^2,u^2], are
combined in the following manner:
O^IDer = O_^1 O_^2,
where is the combination operator. The specific formulation in our IDer is:
b_k^IDer = b_k^1 b_k^2 + b_k^1 u^2 + b_k^2 u^1 + w_b_k^CRF,   u^IDer = u^1 u^2 + w_u^CRF,
where w_b_k^CRF and w_u^CRF are the conflict resolution factors (CRF). b_k^IDer is the combined belief of class k and u^IDer is the combined uncertainty of the two views. The CRF measures conflict among beliefs and redistributes belief and uncertainty. w_b_k^CRF and w_u^CRF are calculated as:
w_b_k^CRF = 1/2(b_k^1+b_k^2),
w_u^CRF = 1/2(u_^1+u_^2).
Based on the above formulations, we obtain the unified opinion O^IDer for two views.
Thus given v views of data, we can first collect evidence from each view and then combine the opinions from different views by the following rule:
O^IDer = O_^1 O_^2 ⋯ O_^v.
After obtaining the final opinion O^IDer =[{b_k^IDer}_k=1^K,u^IDer] for all views, the combined evidence for all possible classes e^IDer is calculated according to Eq. <ref>. Finally, the probability of all categories and the uncertainty of the overall decision are inferred through the above parameters.
On account of the method above, IDer combines evidence from different sources elegantly and in a balanced way, as shown in Fig. <ref>. Complementary information from highly conflicting opinions is reasonably taken into account. Compared to the DS-Combine evidential fusion rule <cit.>, one can see that our IDer is more suitable for multi-view fMRI image information fusion, since different view sections of the neural network can contain very complementary information. With integrated opinions inferring the output in a human-understandable fashion, IDer guarantees impartiality in decision-making and a truthful causal chain behind our model's diagnosis, instead of leaving us questioning why the result is what it is.
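The combination rule can be sketched as follows (our illustrative code, not the authors' released implementation). It fuses two opinions term by term, including the conflict-resolution factors; because the raw fused masses need not sum exactly to one, the sketch renormalises the result at the end — that normalisation is our assumption and is not stated in the text.

import numpy as np

def ider_combine(b1, u1, b2, u2):
    """Fuse two opinions (b^1, u^1) and (b^2, u^2) following the IDer equations,
    with an added renormalisation so that the result is again a valid opinion."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    w_b = 0.5 * (b1 + b2)                      # conflict-resolution factor for beliefs
    w_u = 0.5 * (u1 + u2)                      # conflict-resolution factor for uncertainty
    b = b1 * b2 + b1 * u2 + b2 * u1 + w_b
    u = u1 * u2 + w_u
    total = b.sum() + u                        # renormalise (our assumption)
    return b / total, u / total

# "Zadeh-style" conflict: view 1 strongly supports class 0, view 2 strongly supports class 1.
b_fused, u_fused = ider_combine([0.9, 0.0, 0.05], 0.05, [0.0, 0.85, 0.05], 0.1)
print(b_fused.round(3), round(u_fused, 3))     # roughly balanced support for classes 0 and 1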
§.§ Learning process for FTD
For a given sample i, our model outputs the evidence for each class, represented as e_i. Furthermore, the corresponding parameter of the Dirichlet distribution, α_i, is equal to e_i + 1. From this parameter, we can obtain the final estimate α_i/S_i of the class probabilities. As our downstream task is classification, we first need a cross-entropy loss to supervise the training process over the multinomial opinion D(p_i|α_i), where p_i is the class probability vector. Therefore, we adopt the adjusted CE loss, which can be further expressed as follows:
ℒ_a = ∫[ ∑_k=1^K - y_ik log(p_ik) ] 1/B(α_i) ∏_k=1^K p_ik^(α_ik - 1) dp_i = ∑_k=1^K y_ik ( ψ(S_i) - ψ(α_ik) ),
where ψ(·) denotes the digamma function and p_i is the vector of class assignment probabilities on the simplex. To guarantee that incorrect labels will yield less evidence, even shrinking to 0, the KL divergence loss function is introduced as below:
ℒ_KL = log( Γ(∑_k=1^K α̃_ik) / ( Γ(K) ∏_k=1^K Γ(α̃_ik) ) ) + ∑_k=1^K (α̃_ik - 1)[ ψ(α̃_ik) - ψ(∑_k'=1^K α̃_ik') ],
where Γ(·) is the gamma function and α̃_i = y_i + (1 - y_i) ⊙ α_i denotes the adjusted parameters of the Dirichlet distribution, which ensure that the evidence of the ground-truth class is not penalized towards 0. Hence, the overall loss function of our proposed network can be defined as follows:
ℒ = ℒ_a + λℒ_KL,
where λ>0 is an annealing factor used to prevent the training process from collapsing in the early stage and ending up with a flat, uniform distribution output.
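The two loss terms can be written compactly. The sketch below (ours, assuming a standard PyTorch setup) follows the digamma form of the adjusted cross-entropy and the KL term to the uniform Dirichlet, with the annealing factor λ ramped over epochs.

import torch

def edl_loss(evidence, y_onehot, lam):
    """Adjusted CE (digamma form) + lambda * KL(Dir(alpha_tilde) || Dir(1)).
    evidence: (B, K) non-negative; y_onehot: (B, K) one-hot labels; lam: annealing factor."""
    alpha = evidence + 1.0
    S = alpha.sum(dim=1, keepdim=True)
    loss_a = (y_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1)

    # Remove the evidence of the ground-truth class before the KL term.
    alpha_tilde = y_onehot + (1.0 - y_onehot) * alpha
    S_tilde = alpha_tilde.sum(dim=1, keepdim=True)
    K = alpha.shape[1]
    kl = (torch.lgamma(S_tilde.squeeze(1))
          - torch.lgamma(torch.tensor(float(K)))
          - torch.lgamma(alpha_tilde).sum(dim=1)
          + ((alpha_tilde - 1.0) * (torch.digamma(alpha_tilde) - torch.digamma(S_tilde))).sum(dim=1))
    return (loss_a + lam * kl).mean()

# Example: 4 FTD classes, a batch of 2; lambda annealed as min(1, epoch / 10).
evidence = torch.relu(torch.randn(2, 4)) * 5
y = torch.eye(4)[[0, 2]]
print(edl_loss(evidence, y, lam=min(1.0, 3 / 10)).item())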
§ EXPERIMENTS
In order to evaluate the proposed method, we compare it with the following methods: single-view softmax classifier (S-S), single-view evidential deep learning (S-E), multi-view softmax classifiers with score fusion (M-S+SF), and multi-view evidential classifiers with DS-combine (M-E+DS). The MLE methods are supervised by cross-entropy loss. Two backbone networks, ResNet-18 and Vision Transformer, are chosen.
Data & Implementation Details: We validate our method on the FTD test set. All the data are pre-processed by SPM12 [https://www.fil.ion.ucl.ac.uk/spm/] and DPABI <cit.> and calculated in MNI space. 164 patient cases with ground truth are stratified and divided into train, validation, and test sets. The test set size is 20% of the total dataset size. The 4D fMRI image data in NIfTI format were first converted along the time axis into a stack of 3D volumes. Then we extract horizontal, lateral, and frontal view sections from the center of each volume. The shape of the input slice is resized to 3×224×224. The data contain 4 classes, labeled as bvFTD (label 0), svPPA (label 1), healthy control (label 2), and nfvPPA (label 3). Our proposed network is implemented in PyTorch and trained on an NVIDIA GeForce RTX 3090. We adopt the Adam optimizer to optimize the overall parameters with an initial learning rate of 0.001. The maximum number of epochs is set to 10. Random hue, saturation, and value changes of the image, Gaussian blur, motion blur, and median blur are utilized as data augmentation. All the following experiments adopt a five-fold cross-validation strategy to prevent performance improvements caused by accidental factors.
Uncertainty Estimation. To further illustrate the role of uncertainty estimation in our model, we visualize the classification results of several out-of-distribution samples from the public ADHD fMRI image dataset ADHD-200[<http://fcon_1000.projects.nitrc.org/indi/adhd200/>]. As reported in Fig. <ref> (a), the baseline model tends to predict very high confidence for the most likely class even if the prediction is completely erroneous. In contrast, benefiting from the ability to say "I'm not sure.", our MID-Net can detect the decision risk and thus output high uncertainty, as shown in Fig. <ref> (b). As shown in Fig. <ref> (c) and (d), higher uncertainty is generated for the low-quality view 2. Meanwhile, the uncertainty of the combined opinion is higher. These results suggest our model's predictions remain credible even when encountering low-quality data.
Comparison with baseline methods.
As shown in Tab. <ref>, our method achieves satisfactory results. Single-view methods fail to be competitive because they lack sufficient brain activity information. With the ViT backbone, thanks to the superiority of IDer, our model exceeds the accuracy of the other models by more than 3%.
Comparison of decision-making capabilities on low-quality fMRI images. In fMRI pre-processing, manual calibration is often difficult and time-consuming, but the image quality without manual calibration may be poor. To test the model's classification ability when the image de-noising effect is poor or a view is polluted, we report results in Tab. <ref> and Tab. <ref>: with both backbones, the performance of our method decreases only slightly, by less than half the drop of the other methods. IDer redistributes beliefs and uncertainty and combines opinions well, eliminating most of the distractions.
§ CONCLUSION
In this paper, we present MID-Net, a multi-view impartial decision network for FTD diagnosis with uncertainty estimation. Our approach offers a means of estimating the uncertainty of each prediction, which is crucial for providing confidence measurements in FTD diagnosis. To accomplish this, we propose the use of an Impartial Decision Maker (IDer) that can combine opinions impartially and make inferences without incurring computational costs or necessitating changes to the backbone network. As a result, our model can prevent overconfident predictions and accurately estimate the risks associated with its decisions. Our extensive experiments demonstrate that our approach provides reliable and robust uncertainty estimates, which can quantify the decision-making risk of the model. Furthermore, we show that our method can also identify poor quality FTD pre-processing. Moreover, in the diagnosis of FTD, IDer conducts fair unification and reasoning on the evidence of brain activity information collected from different perspectives.
In summary, our MID-Net competes effectively with previous approaches in terms of classification robustness and the reliability of uncertainty estimation. It provides a valuable contribution to the field of FTD diagnosis by offering a reliable and impartial means of decision-making that can accommodate evidence from multiple perspectives.
splncs04
§ SUPPLEMENTARY MATERIALS
|
http://arxiv.org/abs/2307.06279v1 | 20230709050025 | SpreadNUTS -- Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions | [
"Fareed Sheriff"
] | stat.CO | [
"stat.CO",
"cs.LG"
] |
SpreadNUTS — Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions
Fareed Sheriff
May 17, 2023
============================================================================================
§ INTRODUCTION & PRIOR WORK
Markov chain Monte Carlo (MCMC) methods have existed for a long time and the field is well-explored. The purpose of MCMC methods is to approximate a distribution through repeated sampling; most MCMC algorithms exhibit asymptotically optimal behavior in that they converge to the true distribution in the limit. However, what differentiates these algorithms is their practical convergence guarantees and efficiency. While a sampler may eventually approximate a distribution well, because it is used in the real world it is necessary that the point at which the sampler yields a good estimate of the distribution is reachable in a reasonable amount of time. Similarly, if it is computationally difficult or intractable to produce good samples from a distribution for use in estimation, then there is no real-world utility afforded by the sampler. Thus, most MCMC methods these days focus on improving efficiency and speeding up convergence.
We present a cursory overview of popular MCMC techniques. Random-walk Metropolis-Hastings is a rudimentary algorithm for sampling from a distribution by inducing a Markov chain on repeated samples: the next sample is chosen through a draw from a proposal distribution that takes the current sample as a parameter. However, as the name suggests, this exhibits strong random walk behavior, making it undesirable in practice due to the possibly long burn-in period and the large number of samples needed to thoroughly explore the distribution space. In fact, many MCMC algorithms suffer from random walk behavior and often only mitigate such behavior, as outright erasing random walks is difficult. Hamiltonian Monte Carlo (HMC) is a class of MCMC methods that theoretically exhibit no random walk behavior because of properties related to Hamiltonian dynamics. This paper introduces modifications to a specific HMC algorithm known as the no-U-turn sampler (NUTS) that aim to explore the sample space faster than NUTS, yielding a sampler that converges to the true distribution faster than NUTS.
§.§ Hamiltonian/Hybrid Monte Carlo
[This subsection summarizes relevant parts of <cit.>]
Hamiltonian dynamics work on a system of position-momentum pairs (p,q) subject to Hamilton's equations
dq_i/dt = ∂ H/∂ p_i, dp_i/dt = -∂ H/∂ q_i
where p,q are vector-valued functions of time over a d-dimensional space and H(q,p) is the Hamiltonian, which represents the system's total energy. We assume for HMC that the Hamiltonian expresses the system's potential and kinetic energies H(q,p) = U(q)+K(p). We also define for HMC U(q) to be the negative of the log density of q up to a constant and K(p) = (1/2) p^T M^-1 p to be the negative of the log density of the Gaussian with zero mean and covariance matrix M (often, the Gaussians will be uncorrelated, so M will be diagonal), also up to a constant. We thus rewrite Hamilton's equations to be
dq_i/dt = (M^-1p)_i, dp_i/dt = - ∂ U/∂ q_i
As with MCMC methods as a whole, the Hamiltonian is (time-)reversible and is invariant under Hamilton's equations, meaning the acceptance probability is 1. In practice, it is close to 1 because we cannot practically make the Hamiltonian invariant when solving Hamilton's equations due to error accumulated when solving the PDEs numerically.
To numerically solve the PDEs, we use a symplectic integrator, which preserves the Hamiltonian's invariance under integration of Hamilton's equations. A commonly-used symplectic integrator is the leapfrog integrator, which makes use of a "half-step" in the integration process to better inform the estimate of the Hamiltonian in the next timestep. The equations that govern the leapfrog integrator are as follows with stepsize ε:
p_i(t+ε/2) = p_i(t) - (ε/2) ∂ U/∂ q_i(q(t))
q_i(t+ε) = q_i(t) + ε p_i(t+ε/2)/m_i
p_i(t+ε) = p_i(t+ε/2) - (ε/2) ∂ U/∂ q_i(q(t+ε))
In effect, we compute an estimate of p at t+ε/2, estimate q at t+ε using this estimate of p, then again estimate p at t+ε using the estimate of q, thus taking into account the estimate of p at t+ε/2 and p's relationship with q.
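For concreteness, a single leapfrog update can be written in a few lines of NumPy. This is only an illustrative sketch: the function name, the diagonal mass vector m, and the gradient callback grad_U are our own choices rather than anything prescribed by the text.

import numpy as np

def leapfrog(q, p, grad_U, eps, m):
    """One leapfrog step of size eps with diagonal mass vector m."""
    p = p - 0.5 * eps * grad_U(q)   # half-step in momentum
    q = q + eps * p / m             # full step in position
    p = p - 0.5 * eps * grad_U(q)   # second half-step in momentum
    return q, p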
HMC samples from continuous distributions on ^d with well-defined densities and partials of the log densities. We define the joint distribution P of (p,q) on the Hamiltonian H to be
P(q,p) = (1/Z) exp(-H(q,p)/T)
for positive constants Z and T. Then,
H(q,p) = U(q)+K(p) → P(q,p) = (1/Z) exp(-U(q)/T) exp(-K(p)/T)
We choose U(q) to be -logπ(q) for the distribution π from which we are trying to sample. The distribution of K(p) is independent of q, but it is common to use a quadratic like K(p) = p^TM^-1p/2. For diagonal M, this yields K(p) = ∑_ip^2_i/2m_i.
HMC works in two steps. The first step draws a value for momentum p using the zero-centered Gaussian with covariance matrix M. The second step conducts a Metropolis update using the Hamiltonian. Using a stepsize of for L steps, a trajectory of samples is calculated, which is accepted with probability
min(1, exp(U(q) - U(q^*) + K(p) - K(p^*))) = min(1, exp(H(q,p) - H(q^*,p^*)))
which works exactly because the Hamiltonian is time-reversible.
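Putting the two steps together, a minimal HMC transition might look as follows. The interface (a potential U, its gradient grad_U, an identity mass matrix, and an explicit random generator) is an assumption made for illustration, and the leapfrog update is repeated inline so the sketch stands on its own.

import numpy as np

def hmc_step(q0, U, grad_U, eps, L, rng):
    """One HMC transition with an identity mass matrix, i.e. K(p) = 0.5 * p @ p."""
    p0 = rng.standard_normal(q0.shape)      # step 1: draw a momentum
    q, p = q0.copy(), p0.copy()
    for _ in range(L):                      # leapfrog trajectory of L steps
        p = p - 0.5 * eps * grad_U(q)
        q = q + eps * p
        p = p - 0.5 * eps * grad_U(q)
    # step 2: Metropolis acceptance with probability min(1, exp(H(q0,p0) - H(q,p)))
    log_accept = (U(q0) + 0.5 * p0 @ p0) - (U(q) + 0.5 * p @ p)
    return q if np.log(rng.random()) < log_accept else q0

A call such as hmc_step(q, U, grad_U, 0.05, 20, np.random.default_rng(0)) then returns the next state of the chain.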
Practical considerations to take into account when implementing HMC include varying ε and L. Note, however, that HMC requires adjustment/setting of the parameters ε and L.
§ NO-U-TURN SAMPLING
One of the biggest problems with HMC<cit.> is the necessity to tune ε and L — without proper tuning, we lose many of the efficiency guarantees of HMC. No-U-turn sampling (NUTS)<cit.> aims to alleviate some of these problems. NUTS is a type of HMC algorithm that does not calculate the trajectory for a constant number of steps L and instead stops the trajectory when sufficient error or explored space has been accumulated. Furthermore, it tunes ε dynamically, making NUTS an effectively parameterless version of HMC.
NUTS replaces the constant L by stopping the trajectory once some condition has been triggered. This condition checks whether the distance between the proposal q^* and the initial q will continue to increase; once it will not, the trajectory is stopped. We can check this by taking the product of the momentum and the difference between the sampled proposal and the initial proposal, (q^*-q)· p^* (the U-turn condition), noting that if it is negative, then the direction of our next step will be toward already-sampled points. Because this alone does not maintain time-reversibility, NUTS runs the Hamiltonian both forward and backward with equal probability and calculates the U-turn condition between the endpoints of the extension of the trajectory generated in the current iteration, checking that it is nonnegative. NUTS generates the trajectory through a doubling scheme that randomly chooses a direction (forward or backward in time), then on the ith iteration of generating this trajectory takes 2^i timesteps in the chosen direction, adding the calculated points to the current trajectory. A sample is then chosen from this trajectory as follows: a rejection energy threshold u is first sampled uniformly from [0,P(q,p)] = [0,e^-H(q,p)]; the trajectory is extended forward and backward in time repeatedly as described above; and a point is then selected uniformly at random from this "tree" of points (the trajectory).
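The U-turn criterion itself reduces to two inner products between the outermost states of the current subtrajectory; a sketch (all names are ours):

import numpy as np

def still_expanding(q_minus, p_minus, q_plus, p_plus):
    """True while the subtrajectory spanned by (q_minus, q_plus) keeps moving apart.

    q_minus/p_minus are the backward-most position and momentum, q_plus/p_plus the
    forward-most; doubling stops once either inner product becomes negative."""
    dq = q_plus - q_minus
    return (dq @ p_minus >= 0.0) and (dq @ p_plus >= 0.0)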
§ MODERATE DYNAMIC EXTENSION OF PATHS
We consider two additions to the NUTS scheme: relaxing the U-turn condition checks on the induced binary tree of the generated trajectory, and increasing the size of the trajectory by more than double every iteration. Our reasoning behind both of these ideas is that the number of U-turn condition checks on the subtrees of the subtrajectory created by the doubling process in NUTS adds excessive (and potentially underjustified) overhead when checking that the U-turn condition is not violated between the two leaves on the edge of each subtree. This overhead is linear in the number of generated points. While it is stated that "except for very simple models with very little data, the costs of these inner products should be negligible compared to the cost of computing gradients" <cit.> (in reference to the inner products calculated when evaluating the U-turn condition), such a rigorous check can in and of itself be counterproductive and could risk cutting off the trajectory being generated before it has sufficiently explored the space around it. This is because while the U-turn condition checks whether the trajectory turns back on itself, if we check for violation between many pairs of points, adjacent or not, this degenerates into a check that the trajectory is always pointing in the direction of unexplored space.
However, this is not a very useful condition to force because we could have a trajectory that moves backward a tiny bit but later continues to move away from previously-explored points, thus exhibiting a general trend toward unexplored space. While we agree that the U-turn condition should be checked between the first few points on the path, we note that as long as the general trend of the path does not violate the U-turn condition, the path contributes to exploring space. We thus strike a compromise: we relax the U-turn condition checks on the balanced tree built on each iteration's points by continuing to check that the U-turn condition is not violated between the leaves on the edge of each subtree, but now build a k-ary tree on the calculated points instead of a binary tree, where k is the iteration number. This both decreases the number of U-turn condition checks and iteratively relaxes the strictness of the U-turn violation penalty as more points are generated.
Specifically, instead of doubling the tree by adding 2^k points to the end of our path in direction d ∼ {-1,1}, we add k^k points and check the U-turn condition fewer times on these points: where we would previously check the U-turn condition around 2^(k log_2 k) times on these k^k points, we now check it (k^k - 1)/(k - 1) ≈ k^(k-1) = 2^((k-1) log_2 k) times, which is less than 2^(k log_2 k) by a multiplicative factor of k (which grows asymptotically).
§ PARTITIONING VISITED REGIONS
To prevent ourselves from exploring parts of the distribution that we have already explored, when sampling from the generated trajectory we bias our selection toward points whose surrounding space we have not yet explored. This still satisfies detailed balance because the probability of having already chosen a point from some subspace of the distribution is uniform across all subspaces. Thus, we still have the same convergence guarantees as NUTS. However, we attempt to sample the distribution in a more "spread out" manner by exploring unexplored parts of the trajectory (which itself maintains the invariant of a fixed density), so in the end we still sample in accordance with the distribution's density but with regularization that encourages exploring unexplored parts of the space.
We can keep track of how much space we have explored close to a datapoint using any type of querying data structure that allows us to calculate some measure of how explored the space around a given point is (for example, a multidimensional Gaussian convolved with all previously-sampled points). For the sake of example and efficiency, we consider a k-dimensional binary search tree (k-d tree) T on all sampled points that allows us to find the closest point in average-case O(log n) time, with insertion also taking O(log n).
Our metric d_p for how much space has been explored near a given point p will be the squared L_2 distance between p and its closest neighbor in T (the sum of squares of the differences of coordinates). We then define the probability of choosing p to be proportional to d_p relative to the same metric evaluated at all other points of the trajectory, so that the probability we select p from trajectory t = (p_0,⋯, p_k) equals
d_p/∑_p_i∈ t d_p_i
We can then choose a point by allocating a proportion of a uniform r.v. to each point and sampling from this uniform to select the point. This is efficient, and so the entire procedure allows us to regularize toward sampling the distribution thoroughly while maintaining sampling by density, at the cost of a multiplicative O(log n) factor in the sampling process.
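A sketch of this selection step, using SciPy's k-d tree for the nearest-neighbour query, is given below. The function and variable names are ours, and the tree is rebuilt on every draw purely for clarity; an incrementally maintained structure would be used in practice.

import numpy as np
from scipy.spatial import cKDTree

def pick_spread_point(trajectory, past_samples, rng):
    """Pick a row of `trajectory` (shape (n, d)) with probability proportional to the
    squared distance to its nearest neighbour among `past_samples` (shape (m, d))."""
    dists, _ = cKDTree(past_samples).query(trajectory)
    weights = dists ** 2
    if weights.sum() == 0.0:                # degenerate case: fall back to a uniform choice
        weights = np.ones(len(trajectory))
    idx = rng.choice(len(trajectory), p=weights / weights.sum())
    return trajectory[idx]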
§ RESULTS
We discuss our testing regime in more detail: we randomly generate mixtures of multivariate Gaussians, which we use to compare how well regular NUTS samples compared to the modified NUTS algorithm presented in this paper by comparing the empirical distributions of each algorithm with the true distribution of the mixtures using a sort of discretized total variation metric. We refer to our algorithm as "SpreadNUTS" because it attempts to spread NUTS trajectories over the sample space to better leave less of the sample space unexplored.[Our code for SpreadNUTS is based on the code at <cit.>, and we test SpreadNUTS against this implementation of NUTS]
§.§ Testing Regime
We randomly select k Gaussian distributions where k is distributed over a discrete uniform that takes values from 1 to 4 (this upper bound is arbitrary). We choose the means of the distributions uniformly randomly from the interval [-20, 20] in each coordinate (this choice is also arbitrary); we choose the covariance matrix by generating a matrix whose entries are uniformly random over [0,1], multiplying it by its transpose (generating a valid correlation matrix), then multiplying by a draw from a uniform over the interval [0,4] (also arbitrary). This ensures the covariance matrix is positive semidefinite (and is also diagonally dominant). We also uniformly randomly choose a dimension for the Gaussians from 1 to 3. Finally, we generate mixture probabilities p⃗ such that the elementwise sum is 1 and each value is nonnegative by generating [0,1] entries, then dividing by the sum of these entries. While this does not yield a uniform distribution (the distribution is biased toward the vector with entries 1/D, where D is the dimension, chosen uniformly from 1 to 3 — the low upper bound on dimension is because for dimensions 4 or higher, regular NUTS tends to perform very slowly and it takes too much time to generate samples), this is okay for our purposes because we desire mixtures biased toward uniformly sampling from each component so there is sufficient density for sampling algorithms to actually sample from the Gaussians. This randomly generates Gaussian mixtures. Our choice of using Gaussian mixtures was arbitrary and based primarily on convenience of sampling through methods other than Monte Carlo.
We define our discretized total variation metric by randomly sampling from the Gaussian mixture (which we do by randomly sampling from each Gaussian, then choosing a number of samples from each collection of samples proportional to the probability of the Gaussian relative to the rest of the mixture). We then generate a relative empirical pdf by discretizing the interval from -20 to 20 in each coordinate into 0.1-unit squares, calculating the proportion of samples in each square. Our discretized total variation metric m_TV is calculated by taking the absolute difference between the relative empirical pdfs of the samples generated from each algorithm and the relative empirical pdf generated by sampling directly from the Gaussians, weighted by the relative empirical pdf of the Gaussians. Our comparison between the two algorithms is done by looking at both the ratio and the actual values of m_TV between the algorithms and the mixture samples over choice of dimension. We also compare this with the m_TV between the Gaussian mixtures resampled again in order to obtain a means of roughly evaluating how well our algorithm performs both relative to NUTS and relative to a true sampler.
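The score itself can be sketched as follows; the grid bounds, the 0.1 cell width, and the names are our own illustrative choices, and the full grid of cells is materialized, so memory grows quickly with dimension.

import numpy as np

def m_tv(samples_a, samples_b, lo=-20.0, hi=20.0, width=0.1):
    """Sum of absolute differences between the relative empirical pdfs of two
    sample sets (arrays of shape (n, d)) on a grid of width-sized cells over [lo, hi]^d."""
    d = samples_a.shape[1]
    edges = [np.arange(lo, hi + width, width)] * d
    pdf_a, _ = np.histogramdd(samples_a, bins=edges)
    pdf_b, _ = np.histogramdd(samples_b, bins=edges)
    return np.abs(pdf_a / pdf_a.sum() - pdf_b / pdf_b.sum()).sum()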
§.§ Results & Conclusion
We compare the m_TV metric between NUTS and SpreadNUTS by plotting them against each other and against samples resampled from the mixture, as well as by plotting the log of the m_TV ratio between NUTS and SpreadNUTS and between each algorithm and samples resampled from the mixture. In the first plot, the lower the m_TV, the better. In the second plot, the closer to 0 the score, the better; specifically, the log of the ratio between the algorithm and the resampled mixture should ideally be close to 0, because this indicates the algorithm performs as well as samples drawn from the mixture. We then discuss trends we noticed and provide examples of plots to compare NUTS to SpreadNUTS visually.
The following is a plot of m_TV vs. dimension for NUTS, our algorithm, and samples from a Gaussian mixture, all compared against samples from a Gaussian mixture. Note that we compare two distinct draws from a Gaussian mixture with each other when calculating the m_TV to estimate how much of the m_TV of the algorithms is due to randomness attributable to the relatively small sample size (we sample 10000 points per mixture and discard the first 500 as burn-in). Alongside it is a comparison of the ratios of NUTS m_TV and our algorithm's m_TV with the mixture m_TV vs. dimension, to see how close the two algorithms get to the m_TV of a true random sample.
The following are plots of m_TV ratio with the mixture m_TV for varying values of k (the number of Gaussians in the mixture) after fixing dimension.
The above shows that for dimension 1, NUTS performs better than SpreadNUTS; however, for higher dimensions, SpreadNUTS gets closer and closer to Gaussian sampling, suggesting that it handles density islands better than NUTS.
We note some interesting idiosyncrasies of SpreadNUTS: in spite of the fact that it tends to perform better than NUTS in higher dimensions, what might actually be going on is that when the distance between "islands" of density in a distribution is sufficiently small for classical NUTS to feasibly leap across islands, SpreadNUTS simply makes it more likely that we will actually leap across islands. However, when the distance between these islands is too large for classical NUTS to reasonably travel between islands, SpreadNUTS cannot increase a low probability of traversing these islands enough for it to happen often. Thus, we conclude that while SpreadNUTS may increase the probability of traversing relatively high-density portions of the distribution relative to classical NUTS, it only attempts to "smooth" sampling across parts of the sample space that classical NUTS explores — it cannot explore parts of the sample space that classical NUTS does not explore. We examine two examples that showcase this trend: a 2d Gaussian mixture consisting of two distributions N(μ, I_2) and N(-μ, I_2) with equal weight on both. In the first figure, μ = ⟨2.5,2.5⟩; in the second figure μ = ⟨5,5⟩. We compare SpreadNUTS to NUTS and see that while SpreadNUTS increases the probability of traversing these islands relative to classical NUTS, SpreadNUTS does not traverse the islands when classical NUTS does not. Furthermore, looking at the above figures, we can see that on the whole, SpreadNUTS m_TV gets closer to Gaussian sampling as dimension increases, while NUTS m_TV first increases at dimension 2, then decreases at dimension 3 but still with significantly greater m_TV than either Gaussian sampling or SpreadNUTS sampling. We note that the number of dimensions used was small (3) and the number of Gaussians in the mixture was from 1 to 4; furthermore, the number of samples was 9.5K for each sampling method. Some error may have been introduced by the relatively small number of samples. A bigger point of contention is that the number of dimensions was too small to make any concrete claims about the efficacy of NUTS vs. SpreadNUTS, and the use of Gaussian mixtures as our sample distribution may have introduced some bias that helps SpreadNUTS sample better than NUTS. There is more testing to be done, but we tentatively conclude that SpreadNUTS alleviates to some degree the lack of sample space exploration present in NUTS.
unsrt
§ APPENDIX
We derive the gradient and log-likelihood of the Gaussian mixture M ∼ ∑_i=1^N π_i N(μ_i, Σ_i). The likelihood (for a single datapoint x) is
p_M(x|π,μ⃗,Σ⃗) = ∑_i=1^N π_i N(x|μ_i,Σ_i)
and the log-likelihood is
lnp_M(x|π, μ⃗,Σ⃗) = ln(∑_i=1^N π_i N(x|μ_i,Σ_i))
For a single Gaussian, this devolves to c - 0.5 (μ- x)^TΣ^-1(μ-x) for extra constant c = -0.5 ln(|Σ|(2π)^k).
Then, the gradient of the log-likelihood w.r.t. μ⃗ is
∂ln(p_M(x|π, μ⃗, Σ⃗))/∂μ⃗ = 1/∑_i π_i N(x|μ_i,Σ_i) · ∂ p_M(x|π,μ⃗,Σ⃗)/∂μ⃗
∂ p_M(x|π,μ⃗,Σ⃗)/∂μ⃗ = ∑_i ∂ π_i N(x|μ_i,Σ_i)/∂μ_i
∂ π_i N(x|μ_i,Σ_i)/∂μ_i = ∂/∂μ_i(π_i √(|Σ^-1_i|(2π)^-k) exp(-1/2(μ_i-x)^TΣ^-1_i(μ_i-x))) = Σ_i^-1(x-μ_i) π_i N(x|μ_i,Σ_i)
∂ln(p_M(x|π, μ⃗, Σ⃗))/∂μ⃗ = ∑_i Σ_i^-1(x-μ_i) π_i N(x|μ_i,Σ_i)/∑_i π_i N(x|μ_i,Σ_i)
For a single Gaussian, this simplifies to Σ^-1(x-μ).
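The sampler itself needs the log density and its gradient with respect to the point x, which has the same form as the expression above with the sign of (x - μ_i) flipped. A direct, unstabilized sketch is below (names are ours); summing the component pdfs in linear space can underflow, which is exactly the issue taken up next.

import numpy as np
from scipy.stats import multivariate_normal

def log_p_and_grad(x, pis, mus, sigmas):
    """Log density of a Gaussian mixture at x and its gradient with respect to x:
    grad_x log p = sum_i w_i * Sigma_i^{-1} (mu_i - x), with w_i the responsibilities."""
    comps = [pi * multivariate_normal.pdf(x, mean=mu, cov=s)
             for pi, mu, s in zip(pis, mus, sigmas)]
    total = sum(comps)                       # can underflow for points far from all modes
    grad = sum((w / total) * np.linalg.solve(s, mu - x)
               for w, mu, s in zip(comps, mus, sigmas))
    return np.log(total), grad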
As an aside, our testing regime experiences compounding rounding errors when exponentiating and taking logs, specifically when we take the log of the exponential of a number close to 0, which rounds to 0. We attempt to alleviate this problem by expressing the proportions of the normal likelihoods π_i N(x|μ_i,Σ_i) relative to the sum of the normal likelihoods as the exponential of the difference of the log likelihood and the log of the sum of likelihoods, where we calculate the log of the sum of likelihoods by summing logs as below:
log(x+y) = log(x(1+y/x)) = logx + log(1+y/x) = logx + log(1+e^logy-logx)
log∑_ix_i = log(x_1(1+1/x_1∑_i=2^kx_i)) = logx_1 + log(1+e^log∑_i>1x_i-logx_1)
log∑_i>1x_i = logx_2 + log(1+e^log∑_i>2x_i-logx_2)
x_i/∑x_i = exp(logx_i-log∑x_i)
Thus, we can recursively express the log of sums as the sum of log sums (in practice, we sort the Gaussian pdfs when evaluating logs to minimize error at each step, yielding a technique known as LogSumExp or LSE). This helps decrease error accumulated when summing likelihoods because of the error introduced when summing exponentials.
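Equivalently, the whole sum can be stabilized in one shot by shifting by the largest log term, which is what scipy.special.logsumexp does; the sorted pairwise recursion above and the max-shift form below give the same result up to floating-point error (names are ours).

import numpy as np

def log_sum_exp(log_xs):
    """Stable log(sum_i exp(log_xs[i])) via shifting by the maximum term."""
    m = np.max(log_xs)
    return m + np.log(np.sum(np.exp(log_xs - m)))

def proportions(log_xs):
    """x_i / sum_j x_j computed entirely in log space."""
    return np.exp(log_xs - log_sum_exp(log_xs))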
|
http://arxiv.org/abs/2307.04785v1 | 20230710180000 | Empirically Constraining the Spectra of a Stars Heterogeneities From Its Rotation Lightcurve | [
"David Berardo",
"Julien de Wit",
"Benjamin V. Rackham"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.IM",
"astro-ph.SR"
] |
David Berardo (ORCID: 0000-0001-6298-412X)
Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Julien de Wit (ORCID: 0000-0003-2415-2191)
Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Benjamin V. Rackham (ORCID: 0000-0002-3627-1676)
51 Pegasi b Fellow
Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
David Berardo
[email protected]
Transmission spectroscopy is currently the most powerful technique to study a wide range of planetary atmospheres, leveraging the filtering of a star's light by a planet's atmosphere rather than its own emission. However, both a planet and its star contribute to the information encoded in a transmission spectrum, and a particular challenge relates to disentangling their contributions. As measurements improve, the lack of fidelity of stellar spectra models presents a bottleneck for accurate disentanglement. Considering JWST and future high-precision spectroscopy missions, we investigate the ability to derive empirical constraints on the emission spectra of stellar surface heterogeneities (i.e., spots and faculae) using the same facility as used to acquire the transmission spectra intended to characterize a given atmosphere.
Using TRAPPIST-1 as a test case, we demonstrate that it is possible to constrain the photospheric spectrum to ≥0.2% and the spectra of stellar heterogeneities to within 1-5%, which will be valuable benchmarks to inform the new generation of theoretical stellar models. Long baselines of observations (≥90% of the stellar rotation period) are necessary to ensure the photon-limited (i.e., instrument-limited) exploration of exoplanetary atmospheres via transmission spectroscopy.
§ INTRODUCTION
Transmission spectroscopy was the first technique introduced to study the atmospheres of worlds beyond the solar system <cit.>. Today, it is still one of the most powerful techniques in this context, as it leverages the light coming from a host star rather than the light directly emitted by the planet itself, which is orders of magnitude fainter. As the field of exoplanetary science transitions in the coming decade towards the spectroscopic characterization of directly imaged exoplanets, emission spectroscopy will become the primary avenue to study planetary atmospheres. Until then, perfecting the art of transmission spectroscopy studies is a must.
Currently, the dominant bottlenecks for transmission spectroscopy are associated with imperfections in our opacity models <cit.> and stellar models <cit.>. The current limitations in opacity models have been shown to result in an accuracy wall preventing most atmospheric properties beyond ∼0.5 dex for all planets but large, hot, and highly-metallic ones <cit.>. Future efforts supporting the standardization of existing databases, and the improvement of treatments of broadening and far-wing behaviors, should mitigate the current bottleneck.
Regarding stellar models, <cit.> showed that not accounting for stellar contamination will yield biased inferences of atmospheric properties. However, correcting for stellar contamination is challenging, as the model limitations (i.e., lack of fidelity) can yield a biased correction of the contamination via an inadequate fit of the out-of-transit spectrum. The lack of fidelity can also result in challenges in inferring the number of components present on the stellar disk <cit.>. Fortunately, when stellar models with a sufficient fidelity are accessible, the degeneracy between the number of components and their covering fractions can be lifted, leading to an optimal correction of the stellar contamination <cit.>. Sufficient fidelity is defined here as follows: with a precision superior or equal to the expected uncertainty associated with the out-of-transit spectra obtained for transit observations in the targeted system. This definition therefore supports returning to a regime of photon-limited studies–where instruments are used at their maximum potential. While a new generation of stellar models are being computed following the guidance of the report from NASA's Exoplanet Exploration Program Study Analysis Group 21 <cit.>, we investigate a possible avenue to empirically derive the emission spectra of a star's heterogeneities. Doing so would provide the community with a data-driven solution to the stellar-model challenge, i.e., benchmarks for ongoing theoretical simulations.
In this paper, we present a framework leveraging a multi-wavelength stellar spectroscopic rotation curve to constrain empirically the emission spectra of its different heterogeneities. We focus our injection–retrieval test on M-dwarf stars with properties similar to those of TRAPPIST-1 (T_eff = 2566 K), for which stellar contamination is expected to be the most pronounced <cit.> and the most challenging to correct <cit.>. We present in <ref> the forward model developed to generate the synthetic, multi-wavelength observations of an heterogeneous stellar surface. In <ref>, we present the retrieval framework used to assess the extent to which the properties of individual heterogeneities (size, positions, and emission spectra) can be constrained based on a synthetic rotation light-curve. In <ref>, we present the injection–retrieval tests performed and their results, including testing the effect of varying the duration and sampling of an observation relative to the stellar rotation period. In <ref>, we describe the results of these preliminary tests, as well as highlight future steps to improve and expand upon this initial framework.
§ FORWARD MODEL FOR GENERATING SYNTHETIC DATA
In this section we present the forward model used to generate synthetic time- and wavelength-dependent observations of an heterogeneous stellar surface. These synthetic observations are generated using a grid-based stellar surface model, which consists of a star (described by its rotation period and rotation axis orientation) as well as a list of heterogeneities, which are each described by a latitude, longitude, radius, and temperature.
§.§ Spectral Model
For this analysis, we use the PHOENIX stellar spectral model grid[<https://phoenix.astro.physik.uni-goettingen.de/>] to simulate the emission of an individual surface feature <cit.>. These grids provide adequate coverage to describe the photospheric background of an M dwarf, as well as heterogeneities which vary by several hundred degrees in either direction relative to the photosphere.
For the stellar photosphere we use a spectral model with a temperature of 2500 K, a log g of 5.0, and an [Fe/H] metallicity of 0 (similar to TRAPPIST-1, which has a surface temperature of 2566 ±26 K, a log g of 5.2396 ± 0.006 <cit.> and an [Fe/H] metallicity of 0.05 ± 0.08 <cit.>). For heterogeneities, we alter only the temperature of the model spectrum used, since the surface gravity and metallicity are typically expected to remain constant across a stellar surface <cit.>. In this way, we make the common assumption that the emission from heterogeneities resembles that of a stellar photosphere with a different effective temperature. For our analysis, we used spectral models corresponding to 2300 K and 2700 K (varying ± 200 K relative to the photosphere).
For this analysis we use the PHOENIX grids of specific intensity spectra, which provide spectral information as a function of viewing angle μ, as opposed to disk-averaged intensities. When sampling from these specific intensity spectra, we take the value corresponding to μ = 0 (i.e., the center of the star, normal to the observer's line of sight). We then calculate a quadratic limb-darkening profile for the stellar surface, and scale this intensity across the stellar surface, allowing us to have control over the limb darkening of the signal.
We emphasize that although we use simulated models to generate the synthetic data, this does not invalidate the premise of this study to empirically retrieve stellar spectra. This is because when fitting for these spectra later on, we use no information about the input spectra whatsoever, and thus the retrieval is not biased based on prior knowledge.
§.§ Instrumental Model
We consider observations made with the NIRISS Single Object Slitless Spectroscopy (SOSS) instrument <cit.> on JWST <cit.>, which has a spectral resolution of R≈700 at 0.6–2.8 μm[https://jwst-docs.stsci.edu/jwst-near-infrared-imager-and-slitless-spectrograph/niriss-observing-strategies/niriss-soss-recommended-strategies], providing an adequate compromise between resolving power and spectral coverage for such work considering the spectral energy distribution (SED) of stars, including M dwarfs <cit.>. The spectral resolution of the PHOENIX spectra is much higher than can be observed with JWST, and so they must first be down-sampled to a resolution of R = 700 using a Gaussian convolution filter to match the expected signal from NIRISS. After adjusting the resolution, we also bin the spectra down to a wavelength spacing of 8.8 μm. These are appropriate transformations in this case given that the forward model is linear, and thus high resolution is not needed (see <cit.> for further discussion on binning and down-sampling spectra).
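A schematic version of this pre-processing step is sketched below. The Gaussian-kernel width formula, the bin size, and the function name are our own simplifications for illustration; the actual pipeline would rely on the instrument's measured line-spread function.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def downsample_spectrum(wave, flux, r_native, r_target=700, bin_size=4):
    """Smooth a high-resolution spectrum to roughly R = r_target, then bin it."""
    sigma_pix = max(r_native / r_target / 2.355, 1e-3)    # crude FWHM -> sigma, in native pixels
    smoothed = gaussian_filter1d(flux, sigma_pix)
    n = (len(wave) // bin_size) * bin_size                # trim to a multiple of bin_size
    wave_b = wave[:n].reshape(-1, bin_size).mean(axis=1)
    flux_b = smoothed[:n].reshape(-1, bin_size).mean(axis=1)
    return wave_b, flux_b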
§.§ Spatial Model
The stellar surface is treated as a grid in the longitudinal and latitudinal directions. Once the stellar spectra are calculated, we must then determine where on the surface each heterogeneity lies. This is done using a flood fill technique, where we begin at the cell of the stellar surface corresponding to the heterogeneity center, and spread out from this point until we reach a cell which is too far from the central cell to be a part of a given heterogeneity. As this is done, each cell is marked as being a part of the heterogeneity and assigned the flux corresponding to its temperature as well as the relevant wavelength. While the model has been optimized for a circular feature, in principle any shape can be `painted' on the stellar surface grid, accounting for projection effects. This model is based off of a similar one used in <cit.>, which was used to model the interactions of an heterogeneous star with a debris disk.
In addition to this flux map, we also calculate maps which correspond to the projected area of a given cell, taking into account the shape of the cell as well as its normal vector relative to the observer. We also calculate a limb darkening map. These three maps can then be multiplied together to produce a final observation map, which can be rapidly summed to measure the observed flux at a given time. In order to calculate the flux at a different time, the flux map is simply `rolled' along the longitudinal axis, since the projected area and limb darkening effects are constant in time.
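Schematically, the observed spectrum at each phase is then the elementwise product of these three maps summed over the visible disk, with the flux map rolled in longitude to advance the rotation. The sketch below illustrates this; the array shapes and names are our own assumptions, and the projected-area and limb-darkening maps are treated as precomputed inputs.

import numpy as np

def rotation_lightcurve(flux_map, area_map, limb_map, n_phases):
    """Disk-integrated spectra at n_phases rotation phases.

    flux_map : (n_lat, n_lon, n_wave) emitted intensity of each surface cell
    area_map : (n_lat, n_lon) projected area of each cell (zero on the far side)
    limb_map : (n_lat, n_lon) limb-darkening weight of each cell
    Returns an array of shape (n_phases, n_wave)."""
    n_lon = flux_map.shape[1]
    weights = (area_map * limb_map)[..., None]          # static geometric weights
    spectra = []
    for i in range(n_phases):
        shift = int(round(i * n_lon / n_phases))
        rolled = np.roll(flux_map, shift, axis=1)       # advance the rotation phase
        spectra.append((rolled * weights).sum(axis=(0, 1)))
    return np.array(spectra)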
§ RETRIEVAL FRAMEWORK
The goal of this initial study is to demonstrate the capability to characterize arbitrary heterogeneities of a stellar surface and their contribution to the overall stellar spectrum without relying on physical models, which currently cannot provide a sufficient level of accuracy. In this work we focus in particular on heterogeneities which can be described by their size, location, and temperature. The effects of the position and size of a heterogeneity are highly non-linear, due to both their projection onto the observing plane as well as limb-darkening effects. Thus when retrieving these parameters we will employ standard Markov chain Monte Carlo (MCMC) methods in order to sample the full range of parameter space. For a given distribution of heterogeneities, however, the total spectral signal can be described as a linear combination of the stellar photosphere and the heterogeneity spectra (scaled by their relative surface area), and thus can be solved for as a linear matrix problem, which we outline in this section. Once we have re-formulated the spectral retrieval as a linear algebra problem, we utilize singular value decomposition (SVD)[https://en.wikipedia.org/wiki/Singular_value_decomposition] in order to estimate the spectral signal of each component (including the photosphere). Thus the problem can be separated into a non-linear MCMC retrieval (the geometric properties of the heterogeneity) and a linear retrieval (the spectral signal of the photosphere and individual heterogeneities).
§.§ Linear component of retrieval model
Given a set of synthetic observations, we now describe the framework used to constrain the properties of individual components (size, positions, and spectra). The total flux observed, Flux(λ,t), at a given wavelength λ and time t is a linear combination of the geometric signals of all the components modulated by the spectral signal of each component and can thus be written as:
Flux(λ,t) = Λ_phot(λ) + ∑_i[Λ_i(λ)-Λ_phot(λ)] × S_i(t)
where Λ_phot(λ) is the (constant in time) spectral signal of the photosphere, Λ_i(λ) is the spectrum of the i^th heterogeneity, and S_i(t) is the time-varying geometric projection of a heterogeneity, which is a function of its size and position on the stellar surface, as well as any limb-darkening effects. The sum runs over the number of individual heterogeneity features. A graphical depiction of this decomposition is shown in <ref>.
Within an MCMC framework, the linear component of the model can be estimated using SVD, allowing us to leverage rapid and robust libraries available in Python to retrieve the spectral signal of each feature in just a few milliseconds on a modern laptop computer. The benefit of this separation is that the geometric signal of any surface features can often be estimated from a white light curve, as well as with more sophisticated techniques to analyze the frequency components of the light curves. Thus strong priors can be placed on the position and sizes of heterogeneities, which reduces the overall time needed to run such a retrieval.
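Because the spectral unknowns enter linearly once the geometric signals S_i(t) are fixed, every wavelength column can be solved with a single SVD-based least-squares call. The sketch below illustrates this step; the rearrangement (photosphere multiplying the column 1 - Σ_i S_i) and all names are our own illustrative choices.

import numpy as np

def retrieve_spectra(flux, S):
    """Solve for component spectra given observations and geometric signals.

    flux : (n_time, n_wave) observed spectra
    S    : (n_time, n_feat) geometric signal S_i(t) of each heterogeneity
    Model per wavelength: flux = lam_phot * (1 - sum_i S_i) + sum_i lam_i * S_i,
    i.e. the decomposition above rearranged so each unknown multiplies one column.
    Returns lam_phot with shape (n_wave,) and lam_feat with shape (n_feat, n_wave)."""
    A = np.hstack([1.0 - S.sum(axis=1, keepdims=True), S])   # (n_time, 1 + n_feat)
    X, *_ = np.linalg.lstsq(A, flux, rcond=None)             # SVD-based least squares
    return X[0], X[1:]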
§.§ A Note on Limb Darkening
The geometric signal of the heterogeneity in the previous equations (i.e., the quantity S_i(t)) requires a choice of limb darkening coefficients for the stellar surface, since it is calculated as the combination of the size of a cell and its projected area, multiplied by a limb darkening factor. However, in general, limb darkening is an effect which depends on the temperature of the stellar surface, which is the quantity we are attempting to fit. Thus we find ourselves in a loop where the stellar spectrum is required in order to know the appropriate value of the limb darkening coefficients, which is required in order to fit for the stellar spectrum. As a result, the current fitting routine assumes that limb darkening is independent of temperature, at least within the range considered in this work (± 200 K). In general, limb darkening is expected to vary with temperature <cit.>. However, since the models are generated under the same assumption, we may still assess the ability of the our framework to recover injected signals. In <ref> we briefly highlight how this may be addressed in the future and the additional prospects for characterization it will allow for.
§ INJECTION–RETRIEVAL TESTS
Given the forward model used to simulate observations described in <ref>, and the retrieval mechanism described in <ref>, we now describe a series of injection–retrieval tests we use to test the ability of the model to recover stellar surface heterogeneities.
§.§ Fitting for Spectral Components
In order to test the effectiveness of the model in retrieving spectral features of a star, we first perform a series of injection–retrieval tests in an idealized scenario in which we assume knowledge of the number of heterogeneities, as well as their positions and sizes. Thus in this first stage we are attempting to retrieve only the spectral features of the heterogeneities and photosphere (the linear part of the retrieval), which represents a best-case scenario and effectively acts as an upper limit on the strength of the current framework. In this idealized scenario, we have removed the complex, non-linear component of fitting for the feature positions, and the problem is reduced to a linear one of disentangling the spectral contribution of each component. By employing SVD, this can be solved in just milliseconds (including the full range of time and wavelength observations), allowing rapid testing of a variety of scenarios. This can similarly represent a scenario where strong priors have been obtained for the geometric properties of the heterogeneities, based on an analysis of a white lightcurve or a pre-fitting routine which places constraints on the possible heterogeneity configurations.
We tested the model on a suite of stellar surfaces, including ones with heterogeneities hotter than the photosphere, colder than the photosphere, both, as well as anywhere from one to four individual heterogeneities. Additionally, we tested a series of single-heterogeneity models with all but one parameter being held constant, varying either the size of a heterogeneity or its latitudinal position. The full sample of surfaces considered is described in <ref>, along with the deviation from the true spectra used to simulate the observations. The results of these tests reveal that the model is able to recover the spectra of heterogeneities to sufficient precisions (i.e., better than the out-of-transit spectrum–see <ref>). For example, the precision achieved on the photospheric spectrum is ≤ 0.1% vs ∼0.5% for the out-of-transit spectrum associated with transit observations in the TRAPPIST-1 system–typically based on a ∼ 2 hr integration. The spectra of heterogeneities are constrained at the level of 1 to 5% depending notably on their sizes and latitudinal position.
The spectra of heterogeneities are less constrained due to their smaller covering fraction resulting in fewer photons from them. Their small covering fraction also means that while the uncertainties associated with their spectra are larger, they contribute to the total uncertainty budget for the stellar model at a similar level to the photosphere. For this reason, we will assess sufficient model fidelity based on the ratio of the uncertainty associated with the retrieved photospheric spectrum and the one associated with the out-of-transit spectrum.
§.§ Retrieving Full Heterogeneities
In order to fully test the ability of the model to characterise a heterogeneous stellar surface, we also run a set of retrievals where we attempt to estimate not only the spectral signature of each component, but also their sizes and positions on the stellar surface. For a fit with N heterogeneities, we thus have 3N + 2 parameters: a size, latitude and longitude for each heterogeneity, as well as two limb darkening parameters for a quadratic limb darkening law. As described in <ref>, we run an MCMC retrieval within which we linearly retrieve the spectral signals of each component using SVD.
The results of this fitting process highlight the inherent difficulty in constraining the position and size of a heterogeneity, which outlines clear areas for future improvement. The longitude of a spot is typically reliably constrained to within a few degrees of the true value, due to the high time-sampling resolution. The latitude, however, is often much less constrained, with the model being able to differentiate only between equatorial and polar spots. Additionally, the size of a spot is typically only constrained to within 50% of its true value, although the model is capable of excluding extremely large or small/non-existent spots. In section <ref> we outline how additional prior information may be used to help further constrain the size of a feature, based on global physical constraints on the overall scaling of its spectrum (leveraging the trade-off between feature size and spectral amplitude).
A subset of the models from the previous section were tested, where we fixed the number of heterogeneities to the true value. As an aside, we ran fits on the white lightcurve for each model, where we sequentially added in additional features. In most cases, the true number of components was found to best describe the data, while adding additional components did not improve the fit and resulted in a worse BIC (Bayesian Information Criterion) value.
In this first run, heterogeneities were allowed to occur anywhere on the stellar surface, and in some cases this led to degeneracies where two heterogeneities would overlap and contribute to the overall spectrum jointly. Additionally, we found that without additional information, the latitudinal position of a heterogeneity was difficult to constrain. These issues highlight clear areas for improvement for future work, which we discuss further in <ref>.
Despite issues with constraining the geometric properties of spot features, in most cases the model was still able to recover the photospheric signal to within 1%. We show the results of an example fit in <ref>, comparing the individual retrieved component spectra to the spectra used to generate the synthetic observations.
§.§ Varying Observation Baseline
In the previous sections, retrieval was performed using simulated observations covering an entire rotation period of the host star. However, in most cases a strong argument must be made to justify the use of high-demand facilities to continuously stare at a single target. In this section we investigate the effect of observing only a portion of the full rotation lightcurve on the ability of the framework to accurately measure the photospheric spectrum of a star. Given the time-variability of a heterogeneity signal, there exists a strong correlation between the duration of an observation, the phase offset relative to a heterogeneity's longitude, and the retrieved uncertainty on the stellar photosphere.
To this end, we first simulate a heterogeneous stellar surface as in the previous section, with anywhere from 1–4 heterogeneities which may be colder or hotter than the background photosphere. From this model, we then generate a set of synthetic observations again as described in the previous sections.
For each observation, we chose two parameters: (1) an offset for the longitudinal rotation of the star relative to the observer, and (2) a viewing window, defined as a fraction from 0–1 of the stellar rotation period. Selecting a value of one represents the analysis done in the previous section, for which the entire stellar rotation was supplied to the fitting routine. These two values define a time series, for which we generate the base-vector signals attributed to each heterogeneity on the stellar surface. We then use SVD decomposition to rapidly fit the linear component of the model. As in the previous section, we can then compare the retrieved spectrum to the injected spectrum for each component, the results of which are shown in <ref>.
The various curves represent different observation durations. For a given observation duration, the residual signal can vary strongly as a function of stellar rotation phase. This is more pronounced for the shorter durations. For example, the residual for an observation covering 0.1 of the stellar rotation can vary from approximately 1% to over 100%. We attribute this variation to the unequal ability of each phase to contribute a set of component spectra descriptive of the entire photosphere. In other words, when fewer or no heterogeneities are present, one cannot extract the necessary information to model the photosphere at a phase showing many heterogeneities. Thus, the shorter-duration observations show both larger residuals overall and larger variability in residuals with rotation phase. For this reason, we find that only a covering fraction of ≥90% can reliably constrain the stellar spectra to within the OOT uncertainty (0.5%). Indeed, while the targeted precision of 0.5% may be achieved for some configurations with only a 40% phase coverage, it is not achieved for all (average precision ∼1%).
§ DISCUSSION & FUTURE STEPS
This work represents the first steps towards building a library of empirical emission spectra for stellar surface heterogeneities. While similar in scope to the work of <cit.> that compiled a library of empirical spectra for various stellar types, an important distinction resides in that the spectra being measured are not for disk-integrated features, but rather for `pure' basis components which may be combined with rotational geometry in order to produce accurate spectra for stars with arbitrarily complex surface features. Such a library will not only enable the robust correction of the TLS effect based on out-of-transit measurements, it will also provide important benchmarks for the next-generation of theoretical stellar models <cit.>, and further inform key relationships between the properties of stars and those of heterogeneities such as between heterogeneities temperature and size, photospheric temperatures, and atomic line-depth ratios.
Indeed, we are able to constrain photospheric spectra at the 0.1% level and the spectra of heterogeneities typically at the 1–5% level, while
spectra with precisions of ∼ 1% (S/N∼ 100) are used commonly to constrain the fundamental physical parameters of exoplanet host stars <cit.>.
In terms of absolute flux calibrations, for example, the goal for the X-SHOOTER instrument is ≤ 10% <cit.>, while the eventual goal of the JWST calibration program is 1% accuracy for each observing mode <cit.>.
Thus, constraints on component spectra from this technique are on par with current precisions available for integrated disk spectra and will be limited ultimately by the overall precision and accuracy limitations of JWST observations themselves providing valuable data-driven benchmarks to inform the next generation of models.
Our framework enables retrieving both the geometric features of heterogeneities as well as their individual spectral contributions, without relying on any prior information from spectra generated by physical models.
In the rest of this discussion, we highlight a series of possible improvements to the framework introduced here.
§.§ Series of Snapshots for Slow Rotators
Covering 90% of a stellar rotation of TRAPPIST-1 would correspond to a ∼72-hr stare at the system, which is both feasible and reasonable for such a high-priority target. Doing so for slow rotators that may have periods up to 30 times that of TRAPPIST-1's, however, would be impractical. For such hosts, we show that a continuous series of small stares (“snapshots”) could be used instead (see Figure <ref>). In order to reach the targeted precision, we find that snapshots need a minimum duration equal to the intended OOT integration and sufficient occurrences to sample the time-varying contribution of the heterogeneities.
As seen in the bottom panels of <ref>, the duration and number of snapshots required to achieve a given SNR are related, offering multiple observational options. For a 30-day rotation period, a sufficient precision is achieved for, e.g., 40 2-hr snapshots, 20 4-hr snapshots, 10 8-hr snapshots, or 5 16-hr snapshots. These options correspond to a 10× lower observation requirement than when considering a long continuous stare. Of the four options highlighted above, we expect that the latter will be favored when accounting for practical considerations (e.g., overheads and slew time).
§.§ Wavelength-dependent Limb Darkening
The models described in this work used limb darkening laws which did not change as a function of temperature or wavelength. While this represents an important first step in estimating the capability of this framework, future developments should account for such dependencies, which could notably be used to break the currently observed degeneracies between the latitude and size of a heterogeneity and thus better constrain the latitudinal distribution of heterogeneities.
§.§ Including Prior Knowledge From Model Spectra
The present proof-of-concept is performed without any prior knowledge regarding stellar physics. Future works could explore how relevant priors could be added to the framework without introducing biases from complete stellar models. An example of such priors would be a parametrization of the relative flux expected between wavelength bins associated with the features of the same molecule. While absolute flux values may be biased, relationships between wavelengths may be robust enough to provide additional constraints. This information could be extracted using Gaussian processes in order to measure correlations between different wavelengths <cit.>. Constraining the spectra in this way would enable tighter constraints on the size and latitude of a given feature, which are currently degenerate with the overall amplitude of its spectrum. Additionally, activity indicators provided by high-precision spectroscopy could be used to help solve the inverse problem of reconstructing active regions on the stellar surface <cit.>.
§.§ Correcting for Stellar Contamination at Different Epochs
The ultimate goal of this work is to generate a library of empirically retrieved spectra for the heterogeneities of a given star in order to support the robust correction of in-transit stellar contamination at any past and future epochs. The feasibility of this approach is supported by the following. First, heterogeneities of a given star have been shown to have consistent properties. For example, molecular-band modeling of echelle spectra of DM UMa suggests a spot temperature of 3570 ± 100 K during an observing campaign in 1995, with filling factors ranging from 0.25 ± 0.08 to 0.30 ± 0.10 <cit.>. Returning to the same star during six nights in 1998, a later analysis found a spot temperature of 3450 ± 120 K and filling factors ranging from 0.28 ± 0.06 to 0.42 ± 0.05 <cit.>. Second, properties of heterogeneities appear to be correlated, making them easier to pin down. Starspot temperatures show a clear dependence on photospheric temperature, based on Doppler imaging, modeling of molecular bands, and atomic line-depth ratios <cit.>. Therefore, while a heterogeneity's filling factor surely evolves over a stellar activity cycle, its temperature and thus spectrum is a static characteristic of a given star, supporting our proposition of their relevance across epochs.
In other words, while a series of improvements to this framework can (and should) be made in the future, the present theoretical proof-of-concept suffices to move towards a practical application with JWST data as a next step. Such data would also inform in a relevant manner the aforementioned series of improvements (e.g., empirical wavelength- and temperature-dependencies of the limb-darkening). We thus look forward to an on-sky validation and further development of this framework in the near future to enable the robust atmospheric characterization of planets whose spectra would otherwise stay contaminated.
§ ACKNOWLEDGEMENTS
We thank Elsa Ducrot and the Pandora Team for helpful discussions regarding this project.
B.V.R. thanks the Heising-Simons Foundation for support.
This material is based upon work supported by the National Aeronautics and Space Administration under Agreement No. 80NSSC21K0593 for the program “Alien Earths”.
The results reported herein benefited from collaborations and/or information exchange within NASA’s Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA’s Science Mission Directorate.
§ TEST MODEL PARAMETERS
|
http://arxiv.org/abs/2307.04047v1 | 20230708211641 | Calibration-Aware Margin Loss: Pushing the Accuracy-Calibration Consistency Pareto Frontier for Deep Metric Learning | [
"Qin Zhang",
"Linghan Xu",
"Qingming Tang",
"Jun Fang",
"Ying Nian Wu",
"Joe Tighe",
"Yifan Xing"
] | cs.CV | [
"cs.CV"
] |
Calibration-Aware Margin Loss: Pushing the Accuracy-Calibration Consistency Pareto Frontier for Deep Metric Learning
Qin Zhang^*,1, Linghan Xu^*,1, Qingming Tang^2, Jun Fang^1, Ying Nian Wu^1, Joe Tighe^1, Yifan Xing^1
^1 AWS AI Labs ^2 Alexa AI
{qzaamz, linghax, qmtang, junfa, wunyin, tighej, yifax}@amazon.com
=================================================================================================================================================================================================================
*Equal contribution.
The ability to use the same distance threshold across different test classes / distributions is highly desired for a frictionless deployment of commercial image retrieval systems. However, state-of-the-art deep metric learning losses often result in highly varied intra-class and inter-class embedding structures, making threshold calibration a non-trivial process in practice. In this paper, we propose a novel metric named Operating-Point-Inconsistency-Score (OPIS) that measures the variance in the operating characteristics across different classes in a target calibration range, and demonstrate that high accuracy of a metric learning embedding model does not guarantee calibration consistency for both seen and unseen classes. We find that, in the high-accuracy regime, there exists a Pareto frontier where accuracy improvement comes at the cost of calibration consistency. To address this, we develop a novel regularization, named Calibration-Aware Margin
(CAM) loss, to encourage uniformity in the representation structures across classes during training. Extensive experiments demonstrate CAM's effectiveness in improving calibration-consistency while retaining or even enhancing accuracy, outperforming state-of-the-art deep metric learning methods.
§ INTRODUCTION
Deep metric learning (DML) learns a discriminative representation via a deep neural network to align the distances between embeddings to semantic similarities such that visually similar samples are close to each other and dis-similar samples are far apart. Given the massive success of DML on visual recognition tasks <cit.>, a natural challenge arises in making the algorithms more robust in their performance against different seen and unseen test classes such that a single distance threshold can be used for any test dataset without sophisticated post-training calibration. Common DML losses such as contrastive loss <cit.>, triplet loss <cit.> and proxy-based losses <cit.> suffer from the problem of threshold inconsistency across different classes, as they implicitly optimize the distance threshold based on the “semantic" similarities, whose definition may vary from class to class. Consequently, even if an embedding model has strong separability, different classes may still require different distance thresholds to maintain a consistent operating point in false reject rate (FRR) and false acceptance rate (FAR). Such a problem is more pronounced in real-world testing environments where both the test classes and test distributions are unknown.
There are two main causes for this threshold inconsistency problem.
First, the model is usually estimated over a training population, and may not properly characterize the testing population in the presence of domain mismatch, co-variate and diversity shift <cit.>, as well as extension to the open set and open world <cit.>. Second, there can be high variance in intra-class compactness and inter-class separation across both training and testing populations, as observed in <cit.>, even when the training distribution accurately characterizes the test distribution.
We refer to this phenomenon in DML, in which different classes require different distance thresholds to achieve a similar retrieval or recognition accuracy, as calibration inconsistency.
Unlike calibration for closed-set classification which focuses on making the predicted confidence probability match the empirical correctness <cit.>, the calibration in DML refers to finding a transformation of the embedding distance to achieve target operating points in FAR and FRR. As DML aims at fine-grained recognition
with the requirement of generalization to open-world unseen test-time classes, the calibration inconsistency problem becomes increasingly relevant for model evaluation, threshold selection, and broader concerns about robustness, fairness and bias. Traditional calibration methods such as Platt calibrations <cit.> or isotonic regression <cit.> use a calibration dataset to calibrate the distance measurements to achieve target operating points for a trained embedding model. However, such methods are unscalable as the hand-crafting of calibration sets <cit.> is highly costly and requires knowledge of the test distribution. To mitigate, we wish to learn a calibration-consistent metric space during the embedding model training. Note that in this paper, our goal is not to unify calibration-aware training and post-hoc model calibration, since the two are complimentary to each other and cannot replace one another. Instead, we focus on calibration-aware training as it has the potential to improve both accuracy and calibration consistency concurrently.
In this work, we introduce the following key insights. First, we quantify the notion of calibration inconsistency in DML by proposing a novel metric, called Operating-Point-Inconsistency-Score (OPIS), which measures the variance in the operating characteristics across different classes in the target calibration range. In addition, we find that the calibration inconsistency problem cannot be resolved with higher model accuracy. As illustrated in <ref>, there exists an accuracy-calibration consistency Pareto frontier in the high accuracy regime where the calibration consistency starts to deteriorate with increased accuracy. To address this, we propose a novel hinge-loss-based regularization named Calibration-Aware Margin loss (CAM). CAM introduces two margin-based constraints, one each for a regularization over the positive and negative sample pairs respectively, as well as a simple “attention" mechanism to focus on the hard pairs only. These mechanisms effectively prevent excessive class compactness and over-separation between classes.
Therefore, the intra-class and inter-class embedding structures become less dependent on the label, leading to more consistent thresholds across classes.
We evaluate the proposed OPIS calibration inconsistency metric and CAM regularization over three image retrieval tasks, covering data domains of nature species, birds and cars. We find the phenomenon of accuracy-calibration consistency trade-off to be a common issue across all three domains. With CAM, we outperform state-of-the-art (SoTA) DML methods in both calibration consistency and retrieval accuracy. In particular, on iNaturalist <cit.>, the largest image retrieval benchmark, we reduce the OPIS calibration inconsistency score from 3.7e-4 to 1.8e-4 while improving retrieval Recall@1 from 84.0% to 85.1%.
To summarize, we make the following contributions: (i) We formalize the notion of calibration inconsistency in DML, and develop a novel OPIS metric to quantify this property; (ii) We evaluate the OPIS metric over various DML losses, and identify for the first time, an accuracy-calibration consistency Pareto frontier; (iii) To improve calibration consistency with training, we propose a novel CAM regularization which boosts the performance of SoTA methods on a variety of image retrieval tasks in both calibration consistency and accuracy; (iv) We find that we can further improve accuracy by combining CAM with class-adaptive weights approximated by the vMF concentration <cit.>.
§ RELATED WORKS
Calibration Inconsistency in DML
The advancement in DML has been focused on accuracy, generalization and scalability. The Smooth-AP loss <cit.> is a ground-breaking work that optimizes a smoothed approximation for the non-differentiable average precision. Similar to Smooth-AP, the Recall@k Surrogate loss <cit.> (L_RS@k) approximates recall@k – the standard metrics for evaluating image retrieval methods. Using vision-transformer architectures
and a very large batch size (=4000), L_RS@k achieves SoTA performance in several large-scale image retrieval benchmarks <cit.>. However, when the number of classes is very large (e.g. face recognition), these pairwise methods become prohibitively inefficient. To reduce the computational complexity associated with large class numbers, proxy-based approaches such as <cit.> are commonly employed where sample representations are compared against class prototypes. During inference, it is a common practice to normalize the backbone embeddings to lie on the unit hypersphere <cit.> so that its metric space can be directly analyzed by measurements such as the cosine similarity, although earlier works in DML also used other metrics such as the Mahalanobis distance <cit.> or distance metrics learned from data <cit.>. While these methods have achieved good accuracy, they are prone to bias <cit.> and poor calibration consistency in production settings. To illustrate this, we give a qualitative example of the non-uniformity in embedding structures across classes, which is the root cause of calibration inconsistency. We train a shallow CNN on a random subset of the MNIST dataset <cit.> using the Arcface <cit.> loss with a feature dimension of three, and use the rest of the dataset for testing. As is shown in <ref>, the class centroid distribution is far from uniform with varying representation compactness across classes. For example, digits 4, 8, 9 are very close to each other, while digit 1 is far from the rest. Meanwhile, the embedding space is not fully utilized – nearly half of the space appears to be wasted.
In <ref>, we further show that high accuracy does not guarantee calibration consistency by visualizing the utility to distance curves for test classes in the CUB-200 dataset <cit.>. The utility score is defined in <ref> as the F_1 score based on specificity and sensitivity. As illustrated, higher accuracy does not guarantee better calibration consistency (e.g., ProxyNCA <cit.> has better retrieval accuracy in recall@1 than Smooth-AP <cit.>, yet the consistency in the operating characteristics across classes appears to be worse). This indicates that high accuracy does not
guarantee good calibration consistency in DML.
Nevertheless, there have been few works in literature that study this problem.
Calibration-Aware Training. Though calibration-aware training is underexplored in DML, it has been widely studied in classification and regression tasks. Common approaches use regularization to push the model update toward calibrated results like the confidence penalty <cit.>, the DCA term penalty <cit.> and the Multi-Class Difference in Confidence and Accuracy
loss <cit.>. A recent work <cit.> revisits the focal loss by introducing adaptiveness into the γ parameter to prevent over-confident predictions and improve the overall calibration. In the DML domain, a recent study <cit.> proposes the Cross-Example Negative Mining loss (L_CENM) to improve global score calibration for the learnt embedding by combining threshold relevancy and top-k relevancy, with an application to document-retrieval systems. To our knowledge, it is the first loss function tailored to improving threshold calibration consistency
in DML. However, the CENM loss is prone to sub-optimality and convergence issues if k is not properly selected.
Additionally, in face recognition applications, <cit.> proposes a false positive rate penalty loss to mitigate bias across different demographic groups. <cit.> also proposes the Threshold Consistency Penalty
to improve the consistency in the thresholds across different domains of face images, which is shown to improve the model performance under the single-threshold evaluation protocol. Nonetheless, <cit.> requires the construction of a large feature queue to ensure sufficient negative pairs for different domains, which can be impractical for fine-grained visual recognition where the number of “domains" can be very large. Meanwhile, as they are intended for face recognition, both <cit.> and <cit.> focus on controlling only FAR, which limits their applicability to other areas where recall may be important.
Metrics for Calibration
Calibration measures how much one can trust a model’s predictions. Since <cit.>, many quantitative metrics have been proposed for confidence calibration of classification models. Expected Calibration Error
<cit.> is one of the most popular metrics. It indicates the level of miscalibration by taking the average L1 distance between the DNN output maximum prediction and the actual accuracy over a validation set. Maximum Calibration Error <cit.> measures the maximum discrepancy instead of the expectation, and is preferred for safety-critical applications. However, both metrics suffer from issues such as failing to condition on the class or assess all the predictions a model makes, which in practice may lead to conflicting conclusions. Nixon et al <cit.> conducted a comprehensive study and proposed several solutions to address these flaws. Their recommended approach combines the L_2 norm with class conditioning and adaptive binning to tackle the non-uniform data dispersion across probability ranges, which is shown to have more consistent metric rank ordering across various datasets. However, metrics for calibration threshold inconsistency in DML is still largely underexplored.
§ QUANTIFY CALIBRATION CONSISTENCY IN DML
Operating-Point-Inconsistency Score. Despite commonalities in thresholding, class conditionality and variance-bias trade-off <cit.>,
metrics defined for confidence calibration in classification <cit.> cannot be directly applied to measure calibration in DML. The reason is that the former produces a probability that can be compared to the empirical frequency of correctness while the latter outputs a distance for semantic similarity that is intrinsically non-probabilistic, due to the ambiguity in semantic similarity across classes. <cit.> introduced the calibration threshold for face recognition systems, which corresponds to the distance threshold at a given overall FAR for a calibration dataset. While this notion links the calibration threshold with the overall FAR, it fails to measure the consistency in the operating characteristics across different classes that cover both sensitivity (FRR) and specificity (FAR).
To address this, we formally define a utility measure for accuracy as a function of the distance threshold d. Let ϕ be one side of the accuracy metric (e.g. precision or specificity), and ψ be the other side (e.g. recall or sensitivity). Analogous to the commonly-used F_β metric <cit.>, assuming one is willing to trade 1 unit of ϕ for c unit of ψ (c=1 if not specified), we can summarize the two metrics into one utility score U by the harmonic mean, as defined below:
U(d) = (1+c^2)·ϕ(d)·ψ(d) / (c^2·ϕ(d)+ψ(d))
This utility score is a concave function whose value ranges from 0 (perfectly wrong) to 1 (perfectly accurate). We consider the L_2 distance on a unit hypersphere as the distance metric, which gives [0,2] as the global calibration range. On a unit hypersphere, the pair-wise L_2 distance and cosine similarity are one-to-one bijective. Without loss of generality, we let ϕ be specificity and ψ be sensitivity as they are not only more relevant for visual recognition systems but also less sensitive to test data composition.
Per use case, there can be lower / upper bound requirement on the recognition accuracy that determines the calibration range, denoted as [d^min, d^max].
Note that when the requirement is measured over a calibration set at one specific target FAR, this calibration range is equivalent to the calibration threshold defined in <cit.>. Equipped with these definitions, we propose the Operating-Point-Inconsistency Score (OPIS) to quantify the variance in the utility curves across test classes in the calibration range as follows:
OPIS = (∑_i=1^T∫_d^min^d^max w_i·||U_i(d)-U̅(d)||^2 dd) / (∑_i=1^T w_i·(d^max-d^min))
where i=1,2,...,T is the index for the test classes, w_i is the class weight (we let w_i=1 for simplicity), and U̅(d) is the average utility score for the entire test dataset.
We highlight the importance of the OPIS metric by comparing it to the commonly-used accuracy metric in image retrieval tasks, recall@k. While recall@k focuses on top-k relevancy, OPIS emphasizes threshold-relevancy, which is often preferred in commercial image retrieval systems for its robustness against unknown test distributions. In addition, OPIS is defined over both FAR and FRR, while recall@k fails to capture FAR, making it less desirable for safety-critical applications (e.g., where top-k retrieved samples may contain offensive or illegal contents). As quality assessment needs to be multi-dimensional, OPIS should be used orthogonally to recall@k as an additional guard rail for model evaluation, as illustrated in <ref>. For example, when comparing two models A and B, if B's recall@k is higher and OPIS is lower, then B is better than A in both accuracy and calibration consistency. However, if B's recall@k and OPIS are both higher than A, then B has worse calibration consistency than A, despite its higher accuracy.
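To make the metric concrete, the following sketch shows one way OPIS could be approximated from per-class specificity and sensitivity curves sampled on a grid of distance thresholds. The function and argument names are ours, the dataset-level utility U̅ is approximated by the class mean, and the integral is replaced by a Riemann sum; this is an illustration of the definition above, not the authors' implementation.

```python
import numpy as np

def utility(phi, psi, c=1.0):
    # Harmonic-mean utility U(d) from specificity phi(d) and sensitivity psi(d).
    return (1 + c**2) * phi * psi / (c**2 * phi + psi + 1e-12)

def opis(phi_per_class, psi_per_class, d_grid, d_min, d_max, weights=None):
    # phi_per_class, psi_per_class: arrays of shape (T, len(d_grid)), one row per test class.
    mask = (d_grid >= d_min) & (d_grid <= d_max)
    U = utility(phi_per_class[:, mask], psi_per_class[:, mask])   # (T, M) utility curves
    U_bar = U.mean(axis=0, keepdims=True)                         # average utility curve
    w = np.ones(U.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    dd = np.gradient(d_grid[mask])                                # threshold step sizes
    numerator = (w[:, None] * (U - U_bar) ** 2 * dd).sum()
    denominator = w.sum() * (d_max - d_min)
    return numerator / denominator
```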
ϵ-OPIS for Utility Divide in a Dataset The overall OPIS metric does not emphasize on the outlier classes. For applications where outlier threshold calibration consistency is essential, we provide a more fine-grained metric in extension to overall OPIS that focuses on the utility inequality between the best and worst sub-groups at a given distance threshold. We define the expected utility of the ε percentile of best-performing classes as follows:
U_ε_best(d) = ϕ_ε_best(d)·ψ_ε_best(d) / (ϕ_ε_best(d)+ψ_ε_best(d))
where ϕ_ε_best(d) and ψ_ε_best(d) are the accuracy metrics calculated for the entirety of the ε percentile of the best-performing classes. By replacing ε_best in <ref> with ε_worst, the same can be defined for U_ε_worst(d) which accounts for the ε percentile of the worst-performing classes. Then, we define the ε-OPIS metric as the following:
ε-OPIS = (∫_d^min^d^max ||U_ε_worst(d)- U_ε_best(d)||^2 dd) / (d^max-d^min)
By definition, the ε-OPIS metric is maximized at ε→ 0, and eventually becomes zero when ε→ 100% as the best-performing set and worst-performing set become identical.
§ TOWARDS CALIBRATION-CONSISTENT DML
We propose our calibration-aware training framework using a Calibration-Aware Margin (CAM) regularization to improve calibration consistency across different classes during training, as illustrated in <ref>. CAM can be combined with any commonly-used base loss to reduce the trade-off between accuracy and calibration. In the following, we discuss the details of CAM loss as well as its adaptive variant.
§.§ Calibration-Aware Margin Loss
To disambiguate the distance thresholds across different classes, we propose the CAM regularization, which explicitly penalizes hard positive sample pairs (whose cosine similarity is less than a certain positive margin) and hard negative sample pairs (whose cosine similarity is greater than a certain negative margin). Let S^+ and S^- be the sets of cosine similarity scores for positive pairs and negative pairs in a mini-batch, and |S^m^+| and |S^m^-| be the number of positive and negative pairs selected given m^+ and m^-, respectively. The CAM regularizer can then be written as:
L_CAM = λ^+·(∑_s∈ S^+1_s ≤ m^+·(m^+-s)) / |S^m^+| +
λ^-·(∑_s∈ S^-1_s ≥ m^-·(s-m^-)) / |S^m^-|
where 1_ condition =1 if condition is true, and 0 otherwise, λ^+ and λ^- are the weights of positive and negative regularization, and m^+, m^- are cosine margins for positive and negative pairs, respectively. This regularizer can be combined with any base loss L_base, yielding the final objective:
L_final = L_base + L_CAM
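A minimal PyTorch-style sketch of the regularizer is given below. It assumes the cosine similarities of the positive and negative pairs in the mini-batch have already been gathered into two 1-D tensors; the function name, tensor names and the example margins are ours, and λ^+ = λ^- = 1 follows the simplification used in our experiments.

```python
import torch

def cam_regularizer(sim_pos, sim_neg, m_pos, m_neg, lam_pos=1.0, lam_neg=1.0):
    # Hard positives: similarity at or below the positive margin m^+.
    hard_pos = sim_pos[sim_pos <= m_pos]
    # Hard negatives: similarity at or above the negative margin m^-.
    hard_neg = sim_neg[sim_neg >= m_neg]
    pos_term = (m_pos - hard_pos).sum() / max(hard_pos.numel(), 1)
    neg_term = (hard_neg - m_neg).sum() / max(hard_neg.numel(), 1)
    return lam_pos * pos_term + lam_neg * neg_term

# loss = base_loss + cam_regularizer(sim_pos, sim_neg, m_pos=0.8, m_neg=0.4)
```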
Analysis.
Our CAM loss is different from contrastive loss as it does not aim to bring all similar samples closer and dissimilar samples far apart. Instead, it penalizes positive pairs that are too dissimilar and negative pairs that are too similar.
CAM is also different from the margin-based softmax losses such as <cit.> in several ways, as illustrated in <ref>. First, designed as a regularization that functions on top of a base loss, CAM only applies to the hard sample pairs (positive or negative) near the margin boundaries, defined by m^+ and m^-, via the indicator functions which act as a simple “attention" mechanism. This sampling mechanism differs from the semi-hard negative mining strategy <cit.> as well as its variants <cit.> because the sampling strategy in CAM is defined based on the absolute values of L_2 distance of the positive and negative pairs, respectively, instead of their relative differences.
Second, CAM uses two margin parameters to regularize both the intra-class and inter-class distance distributions, which captures both hard positive and negative examples and therefore generates more hard pairs within a mini-batch.
Finally, CAM is a pair-wise loss, which is better at capturing sample-to-sample relationship compared to proxy-based methods. Thus, the resulting metric space has a more equidistant class centroid distribution with improved uniformity in the representation compactness across different classes. Together, these factors create more consistent distance thresholds across different classes by actively preventing the formation of excessively compact classes and over-separation between classes.
Complexity. In a mini-batch with size n, the complexity of the CAM loss is 𝕆(n^2) as it compares every sample with all samples in the mini-batch. For large-scale image benchmarks where the number of training classes (K) is significantly greater than the batch size (K ≫ n), this complexity is comparable to or even less than most proxy-based (𝕆(nK)) or pair-based losses. For instance, the largest batch size used in literature is 4000 as in <cit.>, which is still less than the number of classes in iNaturalist <cit.> (=5690).
§.§ Class-Adaptive Margin
Many studies have introduced adaptiveness in the training objective using a variety of indicators <cit.>. From a slightly different angle, we argue that class-level representation compactness should be another important factor for adaptiveness. Motivated by this, we introduce the class-adaptive CAM regularization (L_AdaCAM) based on the class compactness approximated by a von Mises-Fisher (vMF) <cit.> distribution characterized by a concentration parameter, κ. The higher the concentration, the more compact a class is. A widely-used approximation of κ is Sra's method <cit.> which takes the following form:
κ_j =R̅(M-R̅^2)/(1-R̅^2)
where R̅ = ||∑_i=1^n_j f_i/n_j|| is the norm of the average embedding (f) for class j containing n_j samples. The estimated κ is transformed into a class compactness score z_j = (2κ_j-κ_min-κ_max)/(κ_max-κ_min), where κ_min, κ_max are pre-defined normalization constants for κ. Then, the adaptive CAM (AdaCAM) loss can be derived by replacing
m^+ in <ref> with a class-adaptive m_j^+ while keeping the negative regularization fixed across all classes, expressed as follows:
m_j^+=m^+· w_j^vMF/𝔼_j [w_j^vMF]
where w_j^vMF = 1/(1+e^z_j) is the class-adaptive scale that gives smaller positive margins for classes with higher κ.
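As an illustration of how the vMF-based adaptiveness could be computed, the sketch below implements Sra's approximation and the rescaled positive margin; the helper names are ours, the embeddings are assumed to be L2-normalized, and k_min, k_max stand for the percentile-based normalization constants described in the implementation details.

```python
import numpy as np

def sra_kappa(class_embeddings):
    # Sra's approximation: kappa = R_bar (M - R_bar^2) / (1 - R_bar^2),
    # with R_bar the norm of the average (unit-normalized) embedding of the class.
    mean = class_embeddings.mean(axis=0)
    r_bar = np.linalg.norm(mean)
    dim = class_embeddings.shape[1]
    return r_bar * (dim - r_bar**2) / (1.0 - r_bar**2 + 1e-12)

def adaptive_positive_margins(kappas, m_pos, k_min, k_max):
    # Map each concentration to a compactness score z_j, then rescale the global
    # positive margin so that more compact classes receive a smaller margin.
    z = (2 * kappas - k_min - k_max) / (k_max - k_min)
    w = 1.0 / (1.0 + np.exp(z))
    return m_pos * w / w.mean()
```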
Analysis. AdaCAM further exploits the potential for accuracy improvement by relaxing the positive margin class-adaptively according to the vMF model. With this relaxed constraint in m^+, we expect a minor trade-off between calibration consistency and accuracy, as shown in <ref>.
Complexity. We do not train with AdaCAM from scratch as the vMF model requires high embedding quality to yield a meaningful approximation. Instead, after a model is trained with L_base+L_CAM, we fine-tune it with L_base+L_AdaCAM for 30 epochs at a small learning rate of 1e-6. For memory efficiency, given R̅'s additive nature in <ref>, we progressively update a dictionary for average representation per class after each forward pass, which takes an additional memory of 𝕆(KM)
where M is the embedding dimension. At the end of every epoch, we compute κ for each class all at once, leading to a negligible overhead in overall memory.
§ EXPERIMENTS
We benchmark our methodology over a variety of large-scale image retrieval benchmarks including cars, birds, and nature species, using different base losses and DNN backbones. First, we
give detailed ablation studies
to justify our design choices. We then demonstrate the advantages of our CAM and AdaCAM regularizations in concurrently boosting calibration consistency and accuracy through large-scale image retrieval experiments.
§.§ Dataset and Implementation Details
Datasets. We use commonly-used image retrieval benchmarks including iNaturalist-2018 (iNat) <cit.>, CUB-200-2011 (CUB) <cit.> and Cars-196 (Cars) <cit.>. In particular, the iNaturalist dataset follows the open-set train-test-split where the training classes are disjoint to the test classes. The details of the datasets are listed in <ref>. For evaluation, we report recall@k for accuracy, and use OPIS and ϵ-OPIS defined in <ref> for calibration consistency.
In line with <cit.>, we estimate calibration consistency using normalized features of image pairs in 1:1 comparisons. Due to the large number of classes in iNaturalist, instead of exhaustive sampling of all pairs, we only sample positive pairs exhaustively and sample negative pairs randomly with a fixed negative-to-positive ratio of 10-to-1 for each class. All pairs in CUB and Cars are exhaustively sampled.
Implementation details. We consider both ResNet50<cit.> and the Vision Transformer <cit.> backbones.
Following <cit.>, the ResNet50 is pretrained on ImageNet <cit.>. For the Vision Transformers (ViT), we follow <cit.> and use ImageNet-21k initialization from the timm <cit.> library. Since the original papers do not report the OPIS metric, we train both baseline models (without CAM) and CAM-regularized models using the same set-up.
All of the hyper-parameters for each base loss are taken from the original papers. For CAM, we set λ^+ = λ^-= 1 for simplicity. The margin parameters (m^+ , m^-) are tuned using grid search on 10% of the training data for each benchmark.
For AdaCAM
, we let κ_min and κ_max be the 5^th and 95^th percentiles of vMF concentrations for all classes in every epoch to reduce the impact of outliers, respectively. The other parameters remain the same as the non-adaptive CAM.
We also use the same optimization algorithms including the learning rate as each base loss. During training, mini-batches are generated following <cit.> by randomly sampling 4 images per class. The calibration range is based on the FAR range for the end-user application, e.g., a low FAR range is more relevant for safety critical ones. This is similar to the choice of k in recall@k where a smaller k entails a higher requirement in precision. For consistency, we use the same calibration range of 1e-2≤FAR≤1e-1 in all three benchmarks.
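Because the calibration range is specified through FAR bounds, the corresponding distance thresholds d^min and d^max have to be read off from the negative-pair distance distribution of the evaluation pairs. A possible reading of this step, with our own helper name, is:

```python
import numpy as np

def far_to_threshold(negative_pair_distances, far):
    # FAR at threshold d is the fraction of negative pairs accepted (distance <= d),
    # so the threshold achieving a target FAR is the corresponding quantile.
    return np.quantile(np.asarray(negative_pair_distances), far)

# d_min = far_to_threshold(neg_dists, 1e-2); d_max = far_to_threshold(neg_dists, 1e-1)
```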
§.§ Ablation and Complexity Analysis
Pareto Frontier for Accuracy and Calibration Consistency.
In <ref> we visualize different dynamics between calibration consistency and accuracy in different accuracy regimes for models trained on iNaturalist with various losses, backbones and batch sizes. In the low-accuracy regime (right along the x-axis), the accuracy tends to improve concurrently with calibration consistency. This is aligned with the conventional belief that stronger discriminability can improve calibration consistency by encouraging stronger affinity of samples towards the class centroids. However, with increasing accuracy, a Pareto frontier <cit.> starts to form between recognition accuracy and calibration consistency in the high-accuracy regime (recall@1 approaching 100%), where accuracy improvement leads to degradation in calibration consistency.
The same trade-off is observed in other benchmarks including CUB and Cars. While it might be counter-intuitive, this finding is not surprising: as calibration consistency measures the statistical uniformity in inter-class and intra-class embedding structures, it is implicitly identifying sources of bias which often comes at the cost of accuracy.
Effect of CAM Margin Hyper-parameter. We ablate over the margin hyper-parameters m^+ and m^- in the CAM regularization. As shown in <ref>, adding CAM effectively improves the calibration consistency compared to the baseline Smooth-AP (SAP) loss across all combinations of margin hyper-parameters. For accuracy, it is observed that the negative margin m^- contributes more to the performance than the positive margin m^+. When it is too stringent, e.g., m^-=0.25, the accuracy drops below the baseline. We conjecture that an overly-tight requirement on the negative margin may overshadow the baseline loss as well as the positive term in CAM, leading to degraded accuracy.
Comparison with Other Regularizations. In <ref> we show that CAM outperforms the other regularizers including the CENM loss <cit.> which is designed for improving calibration consistency in DML. We ascribe this improvement to CAM's effectiveness in encouraging uniformity in inter- and intra-class distances, as mentioned in <ref>. The other losses, however, tend to interfere with the base loss (L_SAP), resulting in lower retrieval accuracy. Note that although adding contrastive loss as the regularizer leads to the best calibration consistency, it also causes degradation in accuracy. However, our CAM regularization improves both accuracy and calibration consistency at the same time.
Effect of CAM over different base DML losses. We add CAM regularization to a variety of SoTA DML losses including Smooth-AP <cit.> and Recall@k Surrogate <cit.>. As is shown in <ref>
, adding CAM regularization consistently improves accuracy and calibration consistency at the same time across top-performing base losses.
Effect of Different Architectures on CAM.
In <ref>, we show that the accuracy and calibration consistency improvement induced by adding the CAM regularization is universal across different backbone architectures. In general, we find that there is more improvement in accuracy for ResNets models than for ViTs after adding CAM.
CAM Time Complexity. In <ref>, we compare CAM to Recall@k Surrogate, the SoTA loss for image retrieval, to show that the slightly increased time complexity of CAM and its adaptive variant, AdaCAM, leads to a negligible increase (<3.6%) in the overall training time per epoch.
§.§ CAM Large-Scale Image Retrieval Experiment
The results for models trained with and without the CAM regularizer over large-scale benchmarks
are summarized in <ref>.
For the Recall@k Surrogate loss <cit.>, we use their official codebase on top of our CAM implementation.
It is clear that our CAM loss is effective in improving calibration consistency (measured by OPIS and ϵ-OPIS), by up to 77.3%,
compared to the different baseline losses considered. Meanwhile, adding CAM regularization is shown to consistently improve accuracy across almost all benchmarks, base losses and backbone architectures. Specifically, on iNaturalist, the largest image retrieval benchmark, adding our CAM regularization outperforms the SoTA DML method L_RS@k, reducing the OPIS calibration inconsistency score from 0.37e-3 to 0.17e-3, while improving the recall@1 accuracy from 84.0% to 84.8%.
Adaptive CAM.
<ref> gives the results for fine-tuning a CAM-regularized model (trained with L_base+L_CAM) with AdaCAM (L_base+L_AdaCAM). For ViT-B/16
architecture, introducing class-adaptiveness in the positive margin during the fine-tuning stage increases the overall recall@1 accuracy from 84.8% to 85.1% for iNaturalist, 87.6% to 88.4% for CUB, and 87.7% to 89.7% for Cars. As fine-tuning with AdaCAM exploits the potential for accuracy improvement by relaxing the positive margin class-adaptively, it tends to cause a minor degradation in OPIS compared to the CAM-regularized baseline, as shown in the table, although it is still significantly better than training without the CAM regularization (trained with L_base only).
§ CONCLUSION
This work has formalized the notion of calibration inconsistency in DML. We developed an original metric, named Operating-Point-Inconsistency-Score (OPIS), to quantify the calibration inconsistency across different test classes, which can be used orthogonally to existing accuracy metrics as an additional guard rail for model evaluation in DML. With OPIS, we found that the calibration inconsistency problem could not be fully resolved with higher model accuracy. To address this, we proposed a novel hinge-loss-based regularization, called Calibration-Aware Margin loss (CAM), which simultaneously encourages uniformity in intra-class compactness and inter-class separateness across different classes. With CAM, we demonstrated SoTA performance in both accuracy and calibration consistency on a variety of large-scale image retrieval benchmarks.
Limitations. As with other inductive learning methods, CAM is subject to failure with a large distribution shift between the training set and the test set. Additionally, CAM is pair-based so applying it to million-scale class sizes such as face recognition remains an open question.
|
http://arxiv.org/abs/2307.04387v1 | 20230710074833 | Classification of metric fibrations | [
"Yasuhiko Asao"
] | math.AT | [
"math.AT",
"math.CT",
"math.MG"
] |
Classification of metric fibrations
Yasuhiko Asao
August 12, 2023
=========================================================================================
In this paper, we study `a fibration of metric spaces' that was originally introduced by Leinster (<cit.>) in the study of the magnitude and called metric fibrations. He showed that the magnitude of a metric fibration splits into the product of those of the fiber and the base, which is analogous to the Euler characteristic and topological fiber bundles. His idea and our approach are based on Lawvere's suggestion of viewing a metric space as an enriched category (<cit.>). Actually, the metric fibration turns out to be the restriction of the enriched Grothendieck fibrations (<cit.>) to metric spaces (<cit.>). We give a complete classification of metric fibrations by several means, which is parallel to that of topological fiber bundles. That is, the classification of metric fibrations is reduced to that of `principal fibrations', which is done by the `1-Čech cohomology' in an appropriate sense. Here we introduce the notion of torsors in the category of metric spaces, and the discussions are analogous to sheaf theory. Further, we can define the `fundamental group π^m_1(X)' of a metric space X, which is a group object in metric spaces, such that the conjugation classes of homomorphisms π^m_1(X) →𝒢 correspond to the isomorphism classes of `principal 𝒢-fibrations' over X for any metric group 𝒢. Namely, it is classified like topological covering spaces.
§ INTRODUCTION
The idea of metric fibration is first introduced by Leinster in the study of magnitude (<cit.>). The magnitude theory that he coined can be considered as a promotion of Lawvere's suggestion of viewing a metric space as a [0, ∞]-enriched category. The magnitude of a metric space is defined as the `Euler characteristic of enriched categories'. In fact, he showed that the magnitude of a metric fibration splits into the product of those of the fiber and the base (Theorem 2.3.11 of <cit.>), which is analogous to the case of topological fiber bundles. Later, the author (<cit.>) pointed out that it is actually a restriction of the enriched Grothendieck fibration (<cit.>) to metric spaces, by dealing with small categories and metric spaces from a unified view point, namely as filtered set enriched categories. By this approach, we can expect to obtain novel ideas to the one side that is well-studied on the other side.
As an example, the following Figure 1 is one of the simplest non-trivial metric fibrations. Note that we consider connected graphs as metric spaces by taking the shortest path metric (see also Proposition <ref>). Both graphs are metric fibrations over the complete graph K_3 with the fiber K_2 as shown in Example 5.29 of <cit.>. Further, they have the same magnitude as pointed out in Example 3.7 of <cit.>. In Proposition 5.30 of <cit.>, it is shown that the right one is the only non-trivial metric fibration over K_3 with the fiber K_2. Here, `trivial' means that it is the cartesian product of graphs. On the other hand, any metric fibration over a four cycle graph C_4, or more generally an even cycle graph, is shown to be trivial in the same proposition.
In this paper, we give a complete classification of metric fibrations by several means, which is parallel to that of topological fiber bundles. Namely, we define `principal fibrations', `fundamental groups' and `a 1-Čech cohomology' for metric spaces, and obtain the equivalence between categories of these objects. Roughly speaking, we obtain an analogy of the following correspondence in the case of topological fiber bundles with a discrete structure group.
Fiber bundles over X with structure group G
⟷ Principal G-bundles over X (G-torsors)
⟷ [X, BG] ≅ Hom(π_1(X), G)/conjugation
⟷ H^1(X, G)
We explain more in detail in the following. First recall that any usual Grothendieck fibration over a small category C can be obtained from a lax functor C, which is called the Grothendieck construction (<cit.>). In <cit.>, it is shown that any metric fibration over a metric space X can be obtained from a `lax functor' X that is called metric action (Definition <ref>). Here is the category of metric spaces and Lipschitz maps. We can consider the Grothendieck and the metric fibration as the definition of fibrations via `the lifting property', while the lax functor and the metric action is the one via `the transformation functions'. More precisely, we have the following.
The Grothendieck construction gives a category equivalence
_X ≃_X,
where we denote the category of metric actions X by _X and the category of metric fibrations over X by _X (Definitions <ref>, <ref>).
We can define a subcategory _X^𝒢 of the category of metric fibrations _X that consists of `principal 𝒢-fibrations' (Definition <ref>). We call it a category of 𝒢-torsors. On the other hand, we can also define a subcategory _X^𝒢 of the category of metric actions _X that is the counterpart of the category of 𝒢-torsors (Definition <ref>). The category _X^𝒢 consists of metric actions X that take a metric group 𝒢, not just a metric space, as the value.
The Grothendieck construction gives a category equivalence
_X^≃_X^.
Here, a group 𝒢 is not just a group but a group object of the category of metric spaces, which we call a metric group (Definition <ref>). As an example of a metric group, we construct the fundamental group π_1^m(X) of a metric space X (Definition <ref>). We also define a category (π_1^m(X), 𝒢) of homomorphisms π_1^m(X) →𝒢, where a morphism between homomorphisms is defined as a conjugation relation (Definition <ref>). Then we have the following.
We have a category equivalence
(π^m_1(X, x_0), 𝒢) ≃^_X.
As a corollary, we reprove Proposition 5.30 of <cit.> in the following form. We note that the notion of a metric group is equivalent to that of a `normed group' (Proposition <ref>). For a metric group 𝒢, we denote the corresponding norm of an element g ∈𝒢 by |g| ∈ℝ_≥ 0.
Let C_n be an undirected n-cycle graph. Then we have
π^m_1(C_n) ≅ℤ with |1| = 1 if n is odd, and π^m_1(C_n) ≅ 0 if n is even.
Hence we have that _C_n^𝒢≃ (ℤ, 𝒢) if n is odd and _C_n^𝒢≃ 0 if n is even, for any metric group 𝒢, which implies that there is only a trivial metric fibration over C_2n and that there is at most one non-trivial metric fibration over C_2n+1.
Now, similarly to the topological case, we can define an `associated bundle construction' from a torsor and a metric space Y (Corollary <ref>). This construction gives the following.
Suppose that Y is a bounded metric space. Then we have a category equivalence
_X^ Y≃ core_X^Y,
where _X^Y is the full subcategory of _X that consists of metric fibrations with the fiber Y (Definition <ref>), and we denote the core of a category by core (Definition <ref> (4)).
Here, we equip the group Y of isometries on Y with a metric group structure by d_ Y(f, g) = sup_y ∈ Yd_Y(fy, gy) (Example <ref>). However, we should suppose that Y is a bounded metric space so that d_ Y is indeed a distance function. For the case of general metric fibrations, we should extend our arguments concerning extended metric group that allows ∞ as values of a distance function (Definition <ref>), and we obtain an essentially same but extended result (Proposition <ref>).
Finally, we define a `1-Čech cohomology' Ȟ^1(X, 𝒢), which is a category, of a 𝒢-torsor over X (Definition <ref>). This is an analogy of the Čech cohomology constructed from the local sections of a principal bundle. Similarly to the topological case, we can construct a cocycle from a family of local sections (Proposition <ref>), and conversely we can construct a 𝒢-torsor by pasting copies of 𝒢 along a cocycle (Proposition <ref>). Then we have the following from these correspondences.
We have a category equivalence
Ȟ^1(X; 𝒢) ≃^_X.
§.§.§ Acknowledgements
The author is grateful to Luigi Caputi for fruitful and helpful comments and feedbacks on the first draft of the paper. He also would like to thank Masahiko Yoshinaga for valuable discussions and comments.
§ CONVENTIONS
In this section, we prepare terms for categories, graphs, weighted graphs and metric spaces that are well-known but may not be commonly used.
§.§ Categories
In this article, we suppose that categories are locally small. We denote the object class of a category C by C, and the set of all morphisms from a to b by C(a, b) for any objects a, b ∈ C. We also denote the class of all morphisms in C by C.
Let C and D be categories, and F : C D be a functor.
* We say that F is faithful if the map F : C(a, b) D(Fa, Fb) is injective for any objects a, b ∈ C. We say that F is full if the map F : C(a, b) D(Fa, Fb) is surjective for any objects a, b ∈ C. We also say that F is fully faithful if it is faithful and full.
* We say that F is split essentially surjective if there is a family of isomorphisms {Fc ≅ d | c ∈ C}_d ∈ D.
* We say that F is a category equivalence if there exists a functor G : D C and natural isomorphisms GF ≅ id_C and FG ≅ id_D. When there exists a category equivalence C D, we say that C and D are equivalent.
* We define a groupoid C by C = C and C(a, b) = {f ∈ C(a, b) |f is an isomorphism} for any a, b ∈ C.
The following are standard.
If a functor F : C D is fully faithful and split essentially surjective, then it is a category equivalence.
A category equivalence F : C D induces a category equivalence F : C D.
For a classification of objects of a category, we often want to consider `isomorphism classes of objects' and compare it with another category. However, in general, we can't do that since the class of objects is not necessarily a set. Instead, we consider a category equivalence C D that implies a bijection between isomorphism classes of objects if they are small.
§.§ Metric spaces
* A quasi metric space (X, d) is a set X equipped with a function d : X _≥ 0 satisfying that
* d(x, x) = 0,
* d(x, x') = d(x', x),
* d(x, x') + d(x', x”) ≥ d(x, x”),
for any x, x', x”∈ X.
* A Lipschitz map f : X Y between quasi metric spaces X and Y is a map satisfying that d_Y(fx, fx') ≤ d_X(x, x') for any x, x' ∈ X. We denote the category of quasi metric spaces and Lipschitz maps by . We call an isomorphism in an isometry.
* A metric space (X, d) is a quasi metric space satisfying that
* d(x, x') = 0 if and only if x = x'.
We denote the full subcategory of that consists of metric spaces by .
* A graph G is a pair of sets (V(G), E(G)) such that E(G) ⊂{e ∈ 2^V(G)|# e = 2}, where we denote the cardinality of a set by #. We call an element of V(G) a vertex, and an element of E(G) an edge. A graph homomorphism f : G H between graphs G and H is a map f : V(G) V(H) such that fe ∈ E(H) or # fe = 1 for any e ∈ E(G). We denote the category of graphs and graph homomorphisms by .
* A path on a graph G is a tuple (x_0, …, x_n) ∈ V(G)^n+1 for some n≥ 0 such that {x_i, x_i+1}∈ E(G) for any 0≤ i ≤ n-1. A connected graph G is a graph such that there exists a path (x_0, …, x_n) with x_0 = x and x_n = x' for any x, x' ∈ V(G). We denote the full subcategory of that consists of connected graphs by _ conn.
* A weighted graph (G, w_G) is a graph G equipped with a function w_G : E(G) _≥ 0. A weighted graph homomorphism f : G H between weighted graphs G and H is a graph homomorphism such that w_H(fe) ≤ w_G(e) for any e ∈ E(G), where we abuse that w_H(fe) = 0 if # fe = 1. We denote the category of weighted graphs and weighted graph homomorphisms by . We also denote the full subcategory of that consists of weighted graphs (G, w_G) such that the graph G is connected by _ conn.
We define functors and _ conn_ conn by forgetting additional structures. We also define the functor _ conn that sends a quasi metric space (X, d) to a weighted graph (X, w_X) defined by V(X) = X, E(X) = {e ∈ 2^X|# e = 2} and w_X {x, x'} = d(x, x').
The above functors have left adjoints.
We describe each functor F in the following, and they are the left adjoint functors of each functor G of the above since the unit and the counit give that FGF = F and GFG = G.
* We define a functor _ conn_ conn by sending a connected graph to a weighted graph with w = 0.
* We define a functor _ conn by sending a weighted graph (G, w_G) to a quasi metric space (V(G), d_G) defined by d_G(x, x') = inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1}| (x = x_0, …, x_n = x') is a path on G}.
* We define a functor by sending a quasi metric space (X, d) to a metric space ( KQX, d) defined as follows. We define an equivalence relation ∼ on X by x ∼ x' if and only if d(x, x') = 0. We also define a function KQX := X/∼_≥ 0 by d([x], [x']) = d(x, x').
For a quasi metric space X, we call the metric space KQX the Kolmogorov quotient of X.
* For quasi metric spaces (X, d_X) and (Y, d_Y), we define a metric space called the L^1-product (X× Y, d_X× Y) by d_X× Y((x, y), (x', y')) = d_X(x, x') + d_Y(y, y') for any x, x' ∈ X and y, y' ∈ Y.
* For graphs G and H, we define a graph called the cartesian product G× H by V(G× H) = V(G)× V(H), and {(x, y), (x', y')}∈ E(G× H) if and only if one of the following holds :
* x = x' and {y, y'}∈ E(H),
* {x, x'}∈ E(G) and y = y',
for any x, x' ∈ V(G) and y, y' ∈ V(H).
* For weighted graphs (G, w_G) and (H, w_H), we define a weighted graph (G× H, w_G× H) by w_G× H{(x, y), (x', y')} = w_G{x, x'} + w_H{y, y'} for any {(x, y), (x', y')}∈ E(G× H), where G× H is the cartesian product of graphs and we abuse that w_G{x, x} = w_H{y, y} = 0.
These products make each category a symmetric monoidal category.
The functors _ conn_ conn and their left adjoints are strong monoidal except for the functor _ conn that is lax monoidal.
For the functors and _ conn_ conn, it is obvious since they are inclusions. It is also obvious for the functor _ conn_ conn by the definition. For the functor , we define a map KQ(X× Y) KQX× KQY by [(x, y)] ↦ ([x], [y]). Then it is obviously natural and is an isometry since we have that [(x, y)]∼ [(x', y')] if and only if [x]∼ [x'] and [y]∼ [y']. For the functor F : _ conn, the identity on the set F(G× H) = F(G)× F(H) is an isometry since
d_w_G× H((x, y), (x', y'))
= inf∪_n≥ 0{∑_i=0^n-1w_G× H{(x_i, y_i), (x_i+1, y_i+1)}|
((x, y) = (x_0, y_0), …, (x_n, y_n) = (x', y')) is a path on G× H}
= inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1} + w_H{y_i, y_i+1}|
((x, y) = (x_0, y_0), …, (x_n, y_n) = (x', y')) is a path on G× H}
= inf∪_n≥ 0{∑_i=0^n-1w_G{x_i, x_i+1}| (x = x_0, …, x_n = x')}
+ inf∪_m≥ 0{∑_i=0^m-1 w_H{y_i, y_i+1}| (y = y_0, …, y_m = y')}
= d_w_G(x, x') + d_w_H(y, y')
= d_F(G)× F(H)((x, y), (x', y')),
for any x, x' ∈ V(G) and y, y' ∈ V(H). It is obviously natural. Finally, for the functor G : _ conn, the identity on the set G(X)× G(Y) = G(X× Y) is a weighted graph homomorphism since it is an inclusion of graphs and preserves weightings. It is obviously natural. This completes the proof.
* An extended quasi metric space is a set X equipped with a function d : X [0, ∞] that satisfies the same conditions for quasi metric spaces. Namely, it is a quasi metric space admitting ∞ as a value of distance. A Lipschitz map between extended quasi metric spaces is a distance non-increasing map. We denote the category of extended quasi metric spaces and Lipschitz maps by . We similarly define extended metric spaces and we denote the full subcategory of that consists of them by .
* For extended quasi metric spaces X and Y, we define the L^1-product of them similarly to that of quasi metric spaces. It makes the category a symmetric monoidal category.
* We define functors and by forgetting additional structures. We also define the functor similarly to the functor _ conn except that {x, x'} does not span an edge for x, x' ∈ X with d(x, x') = ∞.
The following is immediate.
* The functors have left adjoints. Further, all of these functors are commutative with the inclusions , , _ conn and _ conn.
* The functors of (1) are strong monoidal except for the functor that is lax monoidal.
§ _X ≃_X
In this section, we introduce two notions, the metric action and the metric fibration, and show the equivalence between them. The notion of metric fibration was originally introduced by Leinster (<cit.>) in the study of magnitude. The other was introduced by the author in <cit.>, which is the counterpart of lax functors in category theory, while the metric fibration is a generalization of the Grothendieck fibration. As written in the introduction, we can consider the Grothendieck (or metric) fibration as the definition of fibrations via `the lifting property', while the lax functor is the one via `the transformation functions'.
Let X be a metric space.
* A metric action F : X consists of metric spaces Fx ∈ for any x ∈ X and isometries F_xx' : Fx Fx' for any x, x' ∈ X satisfying the following for any x, x', x”∈ X :
* F_xx = id_Fx and F_x'x = F_xx'^-1,
* d_Fx”(F_x'x”F_xx'a, F_xx”a) ≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”) for any a ∈ Fx.
* A metric transformation θ : F ⟹ G consists of Lipschitz maps θ_x : Fx Gx for any x ∈ X satisfying that G_xx'θ_x = θ_x'F_xx' for any x, x' ∈ X. We can define the composition of metric transformations θ and θ' by (θ'θ)_x = θ'_xθ_x. We denote the category of metric actions X and metric transformations by _X.
* Let π : E X be a Lipschitz map between metric spaces. We say that π is a metric fibration over X if it satisfies the following : For any ∈ E and x ∈ X, there uniquely exists _x ∈π^-1x such that
* d_E(, _x) = d_X(π, x),
* d_E(, ') = d_E(, _x) + d_E(_x, ') for any ' ∈π^-1x.
We call the point _x the lift of x along .
* For metric fibrations π : E X and π' : E' X, a morphism φ : ππ' is a Lipschitz map φ : E E' such that π'φ = π. We denote the category of metric fibrations over X and morphisms by _X.
For a product of metric spaces E = X× Y, the projection X× Y X is a metric fibration. We call it a trivial metric fibration.
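For finite metric spaces, e.g. connected graphs with the shortest path metric as in Figure 1, the lifting condition of the definition can be checked by brute force. The sketch below is only an illustration of the definition (the function and argument names are ours): it takes distance matrices for E and X and the index map realizing π.

```python
import itertools

def is_metric_fibration(dE, dX, proj):
    # dE, dX: symmetric distance matrices; proj[e] is the index of pi(e) in X.
    E, X = range(len(dE)), range(len(dX))
    for e, x in itertools.product(E, X):
        fiber_x = [f for f in E if proj[f] == x]
        lifts = [f for f in fiber_x
                 if dE[e][f] == dX[proj[e]][x]                 # d_E(e, e_x) = d_X(pi(e), x)
                 and all(dE[e][g] == dE[e][f] + dE[f][g]       # the lift splits distances to the fiber
                         for g in fiber_x)]
        if len(lifts) != 1:                                    # existence and uniqueness of the lift
            return False
    return True
```

For the graphs of Figure 1 one would first compute the shortest path distance matrices (for instance by Floyd–Warshall) together with the evident projections onto K_3.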
Let π : E X be a metric fibration, and x, x' ∈ X. Then the correspondence π^-1x ∋ a ↦ a_x'∈π^-1x' is an isometry, where we equip the sets π^-1x and π^-1x' with the induced metric from E.
Note that the statement is obviously true if E = ∅. We suppose that E ≠∅ in the following, and then any fiber π^-1x is non-empty. For a ∈π^-1x, we have d_E(a_x', a) = d_E(a_x', (a_x')_x) + d_E((a_x')_x, a) = d_X(x', x) + d_E((a_x')_x, a). We also have d_E(a, a_x') = d_X(x, x'). Hence we obtain that d_E((a_x')_x, a) = 0, hence (a_x')_x = a for any x, x' ∈ X. This implies that the correspondence is a bijection. Further, we have
d_E(a, b_x') = d_E(a, a_x') + d_E(a_x', b_x') = d_X(x, x') + d_E(a_x', b_x')
and
d_E(b_x', a) = d_E(b_x', b) + d_E(b, a) = d_X(x', x) + d_E(b, a)
for any a, b ∈π^-1x. Hence we obtain that d_E(a, b) = d_E(a_x', b_x') for any x, x' ∈ X and a, b ∈π^-1x, which implies that the correspondence is an isometry. This completes the proof.
Let φ : ππ' be a morphism of metric fibrations. For any x, x' ∈ X and a ∈π^-1x, we have (φ a)_x' = φ a_x'.
We have
d_E'((φ a)_x', φ a_x') = d_E'(φ a, φ a_x') - d_X(x, x')
≤ d_E(a, a_x') - d_X(x, x')
= 0,
hence we obtain that (φ a)_x' = φ a_x'. This completes the proof.
Let F : X be a metric action. We define a metric fibration π_F : E(F) X as follows :
* E(F) = {(x, a) | a ∈ Fx, x ∈ X},
* d_E(F)((x, a), (x', b)) = d_X(x, x') + d_Fx'(F_xx'a, b),
* π_F(x, a) = x.
We call the above construction the Grothendieck construction.
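For a finite base the construction is entirely explicit; the following sketch (the data structures and names are our own choice) just evaluates the distance formula of the definition above.

```python
def grothendieck_distance(dX, fibers, F):
    # dX[x][xp]      : base distance d_X(x, x').
    # fibers[x]      : {"points": [...], "dist": {a: {b: d_Fx(a, b)}}} for each base point x.
    # F[(x, xp)][a]  : the image F_{x x'}(a) in Fx' of a point a of Fx.
    d = {}
    for x in range(len(dX)):
        for xp in range(len(dX)):
            for a in fibers[x]["points"]:
                for b in fibers[xp]["points"]:
                    fa = F[(x, xp)][a]  # transport a into the fiber over x'
                    d[((x, a), (xp, b))] = dX[x][xp] + fibers[xp]["dist"][fa][b]
    return d
```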
The Grothendieck construction gives a functor E : _X _X.
Let θ : F ⟹ G be a metric transformation. Then we construct Lipschitz maps φ_θ : E(F) E(G) by φ_θ (x, a) = (x, θ_x a) for any x ∈ X and a ∈ Fx. It is checked that φ_θ is a Lipschitz map as follows :
d_E(G)(φ_θ (x, a), φ_θ (x', b)) =
d_E(G)((x, θ_x a), (x', θ_x' b))
= d_X(x, x') + d_Gx'(G_xx'θ_x a, θ_x'b)
= d_X(x, x') + d_Gx'(θ_x' F_xx' a, θ_x'b)
≤ d_X(x, x') + d_Fx'( F_xx' a, b)
= d_E(F)((x, a), (x', b)).
Next we show that the correspondence θ↦φ_θ is functorial, that is, we have φ_ id_F = id_E(F) and φ_θ'θ = φ_θ'φ_θ for any metric transformations θ : F ⟹ G and θ' : G ⟹ H. The former is obvious and the latter is checked as follows :
φ_θ'θ(x, a) = (x, (θ'θ)_x a)
= (x, θ'_xθ_x a)
= φ_θ'φ_θ(x, a).
Finally, φ_θ is obviously a morphism of the metric fibration. This completes the proof.
We have a functor F : _X _X.
Let π : E X be a metric fibration. We define a metric action F_π : X by F_π x = π^-1x and (F_π)_xx'a = a_x' for any x, x' ∈ X and a∈π^-1x, where we equip the set π^-1x with the induced metric from E. It follows that (F_π)_xx = id_F_π x by the uniqueness of the lifts, and that (F_π)_xx' defines an isometry F_π x F_π x' with (F_π)_xx'^-1 = (F_π)_x'x by Lemma <ref>. Further, we have that
d_F_π x”((F_π)_x'x”(F_π)_xx'a, (F_π)_xx”a) = d_F_π x”((a_x')_x”, a_x”)
= d_E(a, (a_x')_x”) - d_X(x, x”)
≤ d_E(a, a_x') + d_E(a_x', (a_x')_x”) - d_X(x, x”)
= d_X(x, x') + d_X(x', x”) - d_X(x, x”),
for any x, x', x”∈ X and a ∈ F_π x. Hence F_π certainly defines a metric action X. Next, let φ : ππ' be a morphism of metric fibrations. We define a metric transformation θ_φ : F_π⟹ F_π' by (θ_φ)_x a = φ a for any x ∈ X and a ∈ F_πx. Then it satisfies that
(F_π')_xx'(θ_φ)_x a = (F_π')_xx'φ a
= (φ a)_x'
= φ a_x'
= (θ_φ)_x'(F_π)_xx',
where the third line follows from Lemma <ref>, hence θ_φ certainly defines a metric transformation F_π⟹ F_π'. Note that we have θ_ id_π = id_F_π and (θ_ψφ)_xa = ψφ a = (θ_ψ)_x(θ_φ)_xa for morphisms φ and ψ, which implies the functoriality of F. This completes the proof.
The following is the counterpart of the correspondence between lax functors and the Grothendieck fibrations (B1 <cit.>), and enhances Corollary 5.26 of <cit.>.
The Grothendieck construction functor E : _X _X is a category equivalence.
We show that FE ≅ id__X and EF ≅ id__X. It is immediate to verify FE ≅ id__X by the definition. We show that EF_π≅π for a metric fibration π : E X. Note that EF_π is a metric space consists of points (x, a) with x ∈ X and a ∈π^-1x, and we have d_EF_π((x, a), (x', a')) = d_X(x, x') + d_π^-1x'(a_x', a'). We define a map f : EF_π E by f(x, a) = a for any x ∈ X and a ∈π^-1x. Then it is obviously an isometry and preserves fibers, hence an isomorphism of metric fibrations. The naturality of this isomorphism is obvious. This completes the proof.
Note that the trivial metric fibration corresponds to the constant metric action, that is F_xx'= id for any x, x' ∈ X.
§ THE FUNDAMENTAL METRIC GROUP OF A METRIC SPACE
In this section, we give a concise introduction to metric groups. We also give a definition of metric fundamental group, which plays a role of π_1 for metric space in the classification of metric fibrations.
§.§ Metric groups
* A metric group 𝒢 is a group object in the category of metric spaces. That is, a metric space 𝒢 equipped with Lipschitz maps · : 𝒢×𝒢→𝒢, (-)^-1 : 𝒢→𝒢 and a point e ∈𝒢 satisfying the suitable conditions of groups.
* For metric groups 𝒢 and ℋ, a homomorphism from 𝒢 to ℋ is a Lipschitz map 𝒢→ℋ that commutes with the group structure.
* We denote the category of metric groups and homomorphisms by .
Let (𝒢, d) be a metric group. Then
* we have d(kg, kh) = d(g, h) = d(gk, hk) for any g, h, k ∈𝒢.
* we have d(g, h) = d(g^-1, h^-1) for any g, h ∈𝒢.
* Since the map 𝒢→𝒢 : g ↦ kg is a Lipschitz map for any k ∈𝒢, we have d(kg, kh) ≤ d(g, h) and d(k^-1(kg), k^-1(kh)) ≤ d(kg, kh). Hence we obtain that d(kg, kh) = d(g, h). The other can be proved similarly.
* By (1), we have d(g^-1, h^-1) = d(e, gh^-1) = d(h, g) = d(g, h).
This completes the proof.
Let (X, d) be a metric space, and let ^u X be the set of isometries f on X such that sup_x∈ Xd_X(x, fx)< ∞. We equip ^u X with a group structure by compositions. We also define a distance function on ^u X by d_^u X(f, g) = sup_x∈ X d_X(fx, gx). Then it is immediate to verify the conditions that (^u X, d_^u X) is a metric group. Note that, if the metric space X is bounded, namely we have
sup_x,x'∈ X d_X(x, x')< ∞,
then the group ^u X consists of all isometries on X, by which we denote X.
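For a finite (hence bounded) metric space the example can be made completely explicit: the isometry group is a set of permutations and the sup defining the distance is a maximum. A small sketch with our own helper names follows.

```python
import itertools

def isometry_group(d):
    # All isometries of a finite metric space given by a distance matrix d.
    n = len(d)
    return [p for p in itertools.permutations(range(n))
            if all(d[p[i]][p[j]] == d[i][j] for i in range(n) for j in range(n))]

def sup_distance(d, f, g):
    # d(f, g) = sup_x d_X(f x, g x), which is a maximum for finite X.
    return max(d[f[x]][g[x]] for x in range(len(d)))
```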
* A normed group is a group G equipped with a map |-| : G _≥ 0 satisfying that
* |g| = 0 if and only if g = e,
* |gh| ≤ |g| + |h| for any g, h ∈ G.
Here we denote the unit of G by e.
* A normed group G is called conjugation invariant if it satisfies that |h^-1gh| = |g| for any g, h ∈ G.
* A normed group G is called inverse invariant if it satisfies that |g^-1| = |g| for any g ∈ G.
* For normed groups G and H, a normed homomorphism from G to H is a group homomorphism φ : G H satisfying that |φ g|≤ |g|.
* We denote the category of conjugation and inverse invariant normed groups and normed homomorphisms by _ conj^-1.
The categories and _ conj^-1 are equivalent.
For a metric group , we define a conjugation and inverse invariant normed group N by
* N = as a group,
* |g| = d_(e, g) for any g ∈ N.
Note that this construction is functorial. Conversely, we define a metric group MG from a conjugation and inverse invariant normed group G by
* MG = G as a group,
* d_ MG(g, h) = |h^-1g|.
This construction is also functorial. It is straightforward to verify that the compositions of these functors are naturally isomorphic to the identities. This completes the proof.
§.§ The fundamental metric group
Let X be a metric space and x ∈ X.
* For each n ≥ 0, we define a set P_n(X, x) by
P_n(X, x) := {(x, x_1, …, x_n, x) ∈ X^n+2}.
We also define that P(X, x) := ⋃_nP_n(X, x).
* We define a connected graph G(X, x) with the vertex set P(X, x) as follows. For u, v ∈ P(X, x), an unordered pair {u, v} spans an edge if and only if it satisfies both of the following :
* There is an n ≥ 0 such that u ∈ P_n(X, x) and v ∈ P_n+1(X, x).
* There is a 0 ≤ j ≤ n such that u_i = v_i for 1 ≤ i ≤ j and u_i = v_i+1 for j+1 ≤ i ≤ n, where we have u = (x, u_1, …, u_n, x) and v = (x, v_1, …, v_n+1, x).
* We equip the graph G(X, x) with a weighted graph structure by defining a function w_G(X, x) on edges by
w_G(X, x){u, v} = d_X(v_j, v_j+1) + d_X(v_j+1, v_j+2) - d_X(v_j, v_j+2) v_j≠ v_j+2,
0 v_j = v_j+2,
where we use the notations in (2).
* We denote the quasi-metric space obtained from the weighted graph G(X, x) by Q(X, x). We also denote the Kolmogorov quotient of Q(X, x) by π_1^m(X, x).
Let X be a metric space and x ∈ X.
* The metric space π^m_1(X, x) has a metric group structure given by the concatenation defined as
[(x, u_1, …, u_n, x)]∙ [(x, v_1, …, v_k, x)] = [(x, u_1, …, u_n, v_1, …, v_k, x)].
The unit is given by [(x, x)] ∈π^m_1(X, x).
* For any x' ∈ X, we have an isomorphism π^m_1(X, x) ≅π^m_1(X, x') given by
[(x, u_1, …, u_n, x)] ↦ [(x', x, u_1, …, u_n, x, x')].
* We first show that the weighted graph G(X, x) is a monoid object in _ conn by the concatenation. Let (u, v), (u', v') ∈ G(X, x)× G(X, x), and suppose that {(u, v), (u', v')} spans an edge. Then we have that u = u' and v ∈ P_n(X, x), v' ∈ P_n+1(X, x), or v = v' and u ∈ P_n(X, x), u' ∈ P_n+1(X, x) for some n. We also have that w_G(X, x)× G(X, x){(u, v), (u', v')} = w_G(X, x){u, u'} + w_G(X, x){v, v'}. Note that {u∙ v, u'∙ v'} spans an edge in G(X, x). Further, we have w_G(X, x){u∙ v, u'∙ v'} = w_G(X, x){u, u'} + w_G(X, x){v, v'}. Hence the concatenation map ∙ : G(X, x)× G(X, x) G(X, x) is a weighted graph homomorphism. It is immediate to verify that the identity is the element (x, x) and that the product is associative. Thus the weighted graph G(X, x) is a monoid object in _ conn, and by Proposition <ref>, π^m_1(X, x) is a monoid object in . Now we show that it is a group object, namely, any element [(x, x_0, …, x_n, x)] has the inverse [(x, x_n, …, x_0, x)]. It reduces to show that d_Q(X, x)((x, x_n, …, x_0, x_0, …, x_n x), (x, x)) = 0. However, it is obvious that the elements (x, x_n, …, x_0, x_0, …, x_n x) and (x, x) can be connected by a path that consists of edges with weight 0 in G(X, x), that implies the desired equality. This completes the proof.
* It is straightforward.
Let X be a metric space and x ∈ X. We call the metric group π_1^m(X, x) the fundamental metric group of X with the base point x. We sometimes omit the base point and denote it by π_1^m(X).
As just a group, π_1^m(X) is obtained as the fundamental group of a simplicial complex S_X whose n-simplices are the subsets {x_0, …, x_n}⊂ X such that any three distinct points x_i, x_j, x_k satisfy |Δ(x_i, x_j, x_k)| = 0 (see Definition <ref>).
Note that our fundamental metric group π_1^m(X) is not functorial with respect to arbitrary Lipschitz maps. However, it is functorial with respect to Lipschitz maps that preserve colinearity (i.e., the condition |Δ(x_i, x_j, x_k)| = 0), in particular with respect to embeddings of metric spaces.
§ 𝒢-METRIC ACTIONS ≃ 𝒢-TORSORS ≃ Hom(π^m_1(X, x_0), 𝒢)
In this section, we introduce the notion of `principal 𝒢-bundles' for metric spaces, where 𝒢 is a metric group. We define it from two different viewpoints, namely as a metric action and as a metric fibration, and show that the two notions are equivalent. From the metric action viewpoint we call it a 𝒢-metric action, and from the metric fibration viewpoint a 𝒢-torsor. Then we show that they are classified by the conjugation classes of homomorphisms π^m_1(X, x_0) → 𝒢.
§.§ 𝒢-metric actions ≃ 𝒢-torsors
Let X be a metric space and 𝒢 be a metric group.
* A 𝒢-metric action F on X is a metric action satisfying the following :
* F_x = 𝒢 for any x ∈ X.
* F_xx' is a left multiplication by some f_xx'∈ 𝒢 for any x, x' ∈ X.
* Let F, G be 𝒢-metric actions on X. A 𝒢-metric transformation θ : F ⟹ G is a metric transformation such that each component θ_x : Fx → Gx is a left multiplication by an element θ_x ∈ 𝒢. We denote the category of 𝒢-metric actions on X and 𝒢-metric transformations by _X^𝒢.
Evidently, _X^𝒢 is a subcategory of _X, and it is also a groupoid.
Let G be a group and X be a metric space. We say that X is a right G-torsor if G acts on X from the right and satisfies the following :
* It is free and transitive.
* g : X X is an isometry for any g ∈ G.
* we have d_X(x, xg) = d_X(x', x'g) for any x, x' ∈ X and g ∈ G.
Let (X, d_X) be a metric space and G be a group. Suppose that X is a right G-torsor. Then there exist a distance function d_G on G and a metric group structure ·_x on X for each x ∈ X such that the map
G X ; g ↦ xg
gives an isomorphism of metric groups (G, d_G) ≅ (X, ·_x). Furthermore, the unit of the metric group (X, ·_x) is x.
Fix a point x ∈ X. We define a map d_G : G × G _≥ 0 by d_G(f, g) = d_X(xf, xg), which is independent from the choice of x ∈ X. It is immediate to check that (G, d_G) is a metric space. Further, we have
d_G(ff', gg') = d_X(xff', xgg')
≤ d_X(xff', xgf') + d_X(xgf', xgg')
≤ d_X(xf, xg) + d_X(xf', xg')
= d_G(f, g) + d_G(f', g'),
and
d_G(f^-1, g^-1) = d_X(xf^-1, xg^-1)
= d_X(x, xg^-1f)
= d_X(xg, (xg)g^-1f)
= d_X(xg, xf)
= d_X(xf, xg)
= d_G(f, g),
for any f, f', g, g' ∈ G. Hence (G, d_G) is a metric group. Now we define a map G X by g ↦ xg. Then this map is an isometry by the definition. Hence we can transfer the metric group structure on G to X by this map. With respect to this group structure ·_x on X, we have x·_x x' = eg' = x' and x'·_x x = g'e = x', where we put x' = xg'. Hence x ∈ X is the unit of the group (X, ·_x). This completes the proof.
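As a concrete instance of this lemma, take X = C_4 with its graph metric and G = ℤ/4 acting by rotation; the following Python sketch (a toy example of ours) computes d_G(f, g) = d_X(xf, xg) and checks that it is independent of the base point x.

n = 4
dX = lambda i, j: min((i - j) % n, (j - i) % n)  # graph metric on C_4
act = lambda x, g: (x + g) % n                   # right rotation action of Z/4

def dG(f, g, x):
    return dX(act(x, f), act(x, g))

assert all(dG(f, g, 0) == dG(f, g, x) for f in range(n) for g in range(n) for x in range(n))
print([dG(0, g, 0) for g in range(n)])  # the norm |g| = d_G(e, g): [0, 1, 2, 1]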
Let G be a group. A metric fibration π : E X is a G-torsor over X if it satisfies the following :
* G acts isometrically on E from the right, and preserves each fiber of π.
* each fiber of π is a right G-torsor with respect to the above action.
Let π : E X be a G-torsor, and x, x' ∈ X. Then the metric group structures on G induced from the fibers π^-1x and π^-1x' are identical.
Note that, for any ∈π^-1x and f ∈, we have
d_E(( f)_x', _x'f) = d_E( f, _x'f) - d_E( f, ( f)_x')
= d_E(, _x') - d_E( f, ( f)_x')
= d_X(x, x') - d_X(x, x')
= 0,
hence we obtain that ( f)_x' = _x'f. Let d_x and d_x' be the distance function on G induced from the fibers π^-1x and π^-1x' respectively. Namely, for ∈π^-1x and f, g ∈ G, we have d_x(f, g) = d_E( f, g) and d_x'(f, g) = d_E(_x'f, _x'g). Therefore we obtain that d_x'(f, g) = d_E(_x'f, _x'g) = d_E(( f)_x', ( g)_x') = d_E( f, g) = d_x(f, g) by Lemma <ref>. This completes the proof.
For a G-torsor π : E → X, we may regard the group G as a metric group isometric to a fiber of π by Lemma <ref>. Further, this metric structure is independent of the choice of the fiber by Lemma <ref>. Hence, in the following, we refer to `G-torsors' as `𝒢-torsors', where 𝒢 denotes the metric group obtained by equipping the group G with the above metric structure.
Let π : E X and π' : E' X be -torsors. A -morphism φ : ππ' is a G-equivariant map E E' that is also a morphism of metric fibrations. We denote the category of -torsors over X and -morphisms by _X^.
Note that the category _X^ is a subcategory of _X. Further, we can show that any -morphism is an isomorphism as follows : Note that for any ∈ E, x ∈ X and g ∈, we have d_E(, _xg) = d_X(π, x) + |g| by the definitions. Then the -equivariance of φ and Lemma <ref> implies that
d_E'(φ, φ (_xg)) = d_E'(φ, (φ)_xg)
= d_X(π' φ, x) + |g|
= d_X(π, x) + |g|
= d_E(, _xg),
which implies that φ preserves distances. The invertibility of φ is immediate from the G-equivariance.
Now we show the equivalence of -metric actions and -torsors in the following.
The Grothendieck construction functor E : _X _X of Proposition <ref> restricts to a functor _X^_X^.
Let F : X be a -metric action. Let E(F) be the metric fibration given by the Grothendieck construction. Note that we have d_E(F)((x, g), (x', g')) = d_X(x, x') + d_(g_xx'g, g'). We define a action on E(F) by (x, g)h = (x, gh) for any g, h ∈ and x ∈ X. Then it is obviously compatible with the projection, and also free and transitive on each fiber. We also have that
d_E(F)((x, g)h, (x', g')h) = d_E(F)((x, gh), (x', g'h))
= d_X(x, x') + d_(g_xx'gh, g'h)
= d_X(x, x') + d_(g_xx'g, g')
= d_E(F)((x, g), (x', g')),
hence it acts isometrically. Further, we have that
d_E(F)((x, g), (x, g)h) = d_E(F)((x, g), (x, gh))
= d_(g, gh)
= d_(e, h),
hence each fiber is a right -torsor. Therefore, we obtain that E(F) is a -torsor. Let θ : F ⟹ F' be a -metric transformation. The Grothendieck construction gives a map φ_θ : E(F) E(F') by φ_θ (x, g) = (x, θ_x g), which is a morphism of metric fibrations. It is checked that φ_θ is -equivariant as follows :
(φ_θ (x, g))h = (x, θ_x gh) = φ_θ (x, gh).
Hence it is a -morphism. This completes the proof.
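To see the Grothendieck construction in coordinates, here is a small Python sketch (entirely a toy example of ours: base C_3, fiber group ℤ/2, and one particular choice of transition elements) that builds the distance d_E(F)((x, g), (x', g')) = d_X(x, x') + d_𝒢(f_xx'g, g') and checks that it is a metric.

import itertools

# Base: the 3-cycle C_3 = {0, 1, 2} with unit edge lengths; fiber group Z/2 with |1| = 1.
dX = lambda x, y: 0 if x == y else 1
dG = lambda g, h: (g - h) % 2

# A twisted choice of transition elements: f_{xx'} = 1 for x != x', f_{xx} = 0.
# One checks dG(f_{x'x''} + f_{xx'}, f_{xx''}) <= dX(x, x') + dX(x', x'') - dX(x, x'') for all triples.
f = {(x, y): 0 if x == y else 1 for x in range(3) for y in range(3)}

def dE(p, q):
    (x, g), (x2, g2) = p, q
    return dX(x, x2) + dG((f[(x, x2)] + g) % 2, g2)

pts = list(itertools.product(range(3), range(2)))
assert all(dE(p, q) == dE(q, p) for p, q in itertools.combinations(pts, 2))
assert all(dE(p, r) <= dE(p, q) + dE(q, r) for p, q, r in itertools.permutations(pts, 3))

One can check that this twisted choice yields the complete bipartite graph K_3,3 with its graph metric as the total space, while the trivial choice f ≡ 0 yields the prism C_3 × K_2.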
The functor F : _X _X of Proposition <ref> restricts to a functor _X^_X^.
Let π : E X be a -torsor. We fix points x_0 ∈ X and ∈π^-1x_0. For any x ∈ X, we equip each set π^-1x with a metric group structure isomorphic to with the unit _x by Lemma <ref>. Hence we can identify each fiber with by the map g ↦_xg for any x ∈ X. Now we put (_x)_x' = _x'g_xx'∈π^-1x' for x, x' ∈ X and g_xx'∈. Then, for any h ∈, we have
d_X(x, x') = d_E(_xh, (_xh)_x')
= d_E(_x, (_xh)_x'h^-1)
= d_E(_x, _x'g_xx') + d_E(_x'g_xx', (_xh)_x'h^-1)
= d_X(x, x') + d_E(_x'g_xx', (_xh)_x'h^-1),
hence we obtain that (_xh)_x' = _x'g_xx'h. This implies that the map π^-1x π^-1x' given by lifts _xh ↦ (_xh)_x' is the left multiplication by g_xx' when we identify each fiber with as above. Hence the functor F gives a -metric action. Next, let φ : ππ' be a -morphism between -torsors π : E X and π' : E' X. It induces a Lipschitz map φ_x : π^-1x π'^-1x. Since fibers π^-1x and π'^-1x are idetified with and φ_x is -equivariant, we can identify φ_x with the left multiplication by φ_x_x. This implies that the functor F sends the -morphism φ to a -metric transformation between F_π and F_π'. This completes the proof.
The Grothendieck construction functor _X^_X^ is a category equivalence.
By Proposition <ref>, we have natural isomorphisms EF ≅ id__X and FE ≅ id__X. We should show that these isomorphisms are obtained in _X^ and _X^ when restricted to them, which is immediate. This completes the proof.
§.§ 𝒢-torsors ≃ Hom(π^m_1(X, x_0), 𝒢)
First we define the category of homomorphisms of metric groups 𝒢 → 𝒢'.
Let 𝒢 and 𝒢' be metric groups, and let Hom(𝒢, 𝒢') be the set of all homomorphisms 𝒢 → 𝒢'. We equip Hom(𝒢, 𝒢') with a groupoid structure by defining Hom(𝒢, 𝒢')(φ, ψ) = {h ∈ 𝒢' | φ = h^-1ψ h} for any homomorphisms φ, ψ : 𝒢 → 𝒢'. The identity on φ∈ Hom(𝒢, 𝒢') is the unit e ∈ 𝒢', and the composition of morphisms h ∈ Hom(𝒢, 𝒢')(φ, ψ) and h' ∈ Hom(𝒢, 𝒢')(ψ, ξ) is defined by h'∘ h = h'h.
Let X be a metric space and 𝒢 be a metric group. For each x_0 ∈ X, we have a functor A : Hom(π^m_1(X, x_0), 𝒢) → ^𝒢_X.
Let φ : π^m_1(X, x_0) be a homomorphism. We define a -metric action F_φ : X by F_φ x = and (F_φ)_xx' = φ[(x_0, x', x, x_0)]· :, where we denote the left multiplication by (-)·. It is verified that this certainly defines a -metric action as follows. For any x, x' ∈ X, we have (F_φ)_xx = φ[(x_0, x, x, x_0)]· = e· = id_, and (F_φ)_x'x = φ[(x_0, x, x', x_0)]· = (φ[(x_0, x', x, x_0)])^-1· = (F_φ)_xx'^-1. Further, we have
d_((F_φ)_x'x”(F_φ)_xx'g, (F_φ)_xx”g) = d_(φ[(x_0, x”, x', x_0)]φ[(x_0, x', x, x_0)], φ[(x_0, x”, x, x_0)])
= d_(φ[(x_0, x”, x', x, x_0)], φ[(x_0, x”, x, x_0)])
= d_((φ[(x_0, x”, x, x_0)])^-1φ[(x_0, x”, x', x, x_0)], e)
= d_(φ[(x_0, x, x”, x', x, x_0)], e)
≤ d_π^m_1(X, x_0)([(x_0, x, x”, x', x, x_0)], [x_0, x_0])
≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”),
for any x, x', x”∈ X and g∈. Let h : φψ be a morphism in (π^m_1(X, x_0), ), namely we have φ = h^-1ψ h with h ∈. Then we can construct a -metric transformation θ : F_φ⟹ F_ψ by θ_x = h· :. It satisfies that (F_ψ)_xx'θ_x = θ_x'(F_φ)_xx' since we have ψ[(x_0, x', x, x_0)]h = hφ[(x_0, x', x, x_0)]. This completes the proof.
Let X be a metric space and 𝒢 be a metric group. For each x_0 ∈ X, we have a functor B : ^𝒢_X → Hom(π^m_1(X, x_0), 𝒢).
Let F : X be a -metric action. Then we can define a homomorphism φ_F : π^m_1(X, x_0) by
φ_F [(x_0, x_1, …, x_n, x_0)] = F_x_1x_0F_x_2x_1… F_x_nx_n-1F_x_0x_n,
for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). It is immediate to check well-definedness. Let F, F' : X be 𝒢-metric actions and θ : F ⟹ F' be a 𝒢-metric transformation. Then we have
θ_x_0^-1φ_F'[(x_0, x_1, …, x_n, x_0)]θ_x_0 = θ_x_0^-1F'_x_1x_0F'_x_2x_1… F'_x_nx_n-1F'_x_0x_nθ_x_0
= F_x_1x_0F_x_2x_1… F_x_nx_n-1F_x_0x_n
= φ_F[(x_0, x_1, …, x_n, x_0)],
for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). Hence θ_x_0∈ gives a morphism θ_x_0 : φ_F φ_F'. This correspondence is obviously functorial. This completes the proof.
The functor A : Hom(π^m_1(X, x_0), 𝒢) → ^𝒢_X of Lemma <ref> is a category equivalence.
We show the natural isomorphisms BA ≅ id_ (π^m_1(X, x_0), ) and AB ≅ id_^_X. For a homomorphism φ : π^m_1(X, x_0), we have
φ_F_φ[(x_0, x_1, …, x_n, x_0)] = (F_φ)_x_1x_0(F_φ)_x_2x_1… (F_φ)_x_0x_n
= φ[(x_0, x_0, x_1, x_0)]φ[(x_0, x_1, x_2, x_0)]…φ[(x_0, x_n, x_0, x_0)]
= φ[(x_0, x_0, x_1, x_1, … x_n, x_n, x_0, x_0)]
= φ[(x_0, x_1, … x_n, x_0)],
for any [(x_0, x_1, …, x_n, x_0)] ∈π^m_1(X, x_0). Hence we obtain an isomorphism BA φ = φ that is obviously natural. Conversely, let F : X be a -metric action. Then we have (F_φ_F)_x = and (F_φ_F)_xx' = φ_F[(x_0, x', x, x_0)] = F_x'x_0F_xx'F_x_0x. Now we define a -metric transformation θ : F_φ_F⟹ F by θ_x = F_x_0x. It is obvious that we have F_xx'θ_x=θ_x'(F_φ_F)_xx', hence it is well-defined and obviously an isomorphism. For a -metric transformation τ : F ⟹ F', we have (ABτ)_x = τ_x_0· : (F_φ_F)_x (F'_φ_F')_x by the construction. Hence the condition τ_xF_x_0x = F'_x_0xτ_x_0 of the -metric transformation implies the naturality of this isomorphism. This completes the proof.
§.§ Example
We give the following example of fundamental metric group.
Let C_n be an undirected n-cycle graph. Then we have
π^m_1(C_n) ≅ ℤ with |1| = 1 if n is odd, and π^m_1(C_n) ≅ 0 if n is even.
Hence, for any metric group 𝒢, the category of 𝒢-torsors over C_n is equivalent to Hom(ℤ, 𝒢) if n is odd and is trivial if n is even, which implies that there is only a trivial metric fibration over C_2n and that there is at most one non-trivial metric fibration over C_2n+1.
Let V(C_n) = {v_1, …, v_n} be the vertex set whose numbering is anti-clockwise. For C_2n, it reduces to show that [(v_1, v_2, …, v_2n, v_1)] = [(v_1, v_1)]. Since we have d_C_2n(v_i, v_j) = d_C_2n(v_i, v_k) + d_C_2n(v_k, v_j) for any i≤ k ≤ j with j-i≤ n, we obtain that
[(v_1, v_2, …, v_2n, v_1)] = [(v_1, …, v_n+1, …, v_2n, v_1)]
= [(v_1, v_n+1, v_1)]
= [(v_1, v_1)].
For C_2n+1, every possible non-trivial element of π^m_1(C_2n+1) is a concatenation power of the element [(v_1, …, v_2n+1, v_1)] or of its inverse. Now we have [(v_1, …, v_2n+1, v_1)] = [(v_1, v_n+1, v_n+2, v_1)], by the same argument as above, and
d_Q(C_2n+1, v_1)((v_1, v_n+1, v_n+2, v_1), (v_1, v_n+1, v_1))
= d_C_2n+1(v_n+1, v_n+2) + d_C_2n+1(v_n+2, v_1) - d_C_2n+1(v_n+1, v_1)
= d_C_2n+1(v_n+1, v_n+2)
= 1.
Hence we obtain that |[(v_1, …, v_2n+1, v_1)]| = 1. This completes the proof.
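The collapsing argument above can be checked numerically. The following Python sketch (ours, for illustration) evaluates the weights appearing in the proof for C_4 and C_5, with vertices renumbered 0, …, n-1.

def d_cycle(n):
    return lambda i, j: min((i - j) % n, (j - i) % n)

def weight(d, a, b, c):
    # weight of inserting/removing b between a and c, as in the definition of G(X, x)
    return 0 if a == c else d(a, b) + d(b, c) - d(a, c)

# Even cycle C_4: the full loop collapses step by step at zero cost.
d4 = d_cycle(4)
print(weight(d4, 0, 1, 2), weight(d4, 2, 3, 0), weight(d4, 0, 2, 0))  # 0 0 0

# Odd cycle C_5 (n = 2 in the proof): removing v_{n+2} from (v_1, v_{n+1}, v_{n+2}, v_1) costs 1.
d5 = d_cycle(5)
print(weight(d5, 2, 3, 0))  # d(v_3, v_4) + d(v_4, v_1) - d(v_3, v_1) = 1 + 2 - 2 = 1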
Note that the cycle graph C_n is the metric group ℤ/n with |1| = 1. Hence the examples in Figure 1 are ℤ/2-torsors, which are classified by Hom(ℤ, ℤ/2) ≅ ℤ/2.
§ CLASSIFICATION OF METRIC FIBRATIONS
In this section, we classify general metric fibrations with a fixed base and fiber. The classification is analogous to that of topological fiber bundles; namely, it reduces to classifying principal bundles whose fiber is the structure group of the fibration in question. We divide the argument into two cases according to whether the fiber is bounded or not, since the unbounded case requires extended metric spaces, although the two cases are essentially the same.
§.§ The functor (-)^x_0
Before we show the classification, we introduce a technical functor that will be used later.
For any metric action F : X and a point x_0 ∈ X, we define a metric action ^x_0 : X as follows. We define that ^x_0 x = Fx_0 and ^x_0_xx' = F_x'x_0F_xx'F_x_0x : Fx_0 Fx_0 for any x, x' ∈ X. Then it is verified that this defines a metric action as follows : We have ^x_0_xx = F_xx_0F_xxF_x_0x = id_Fx_0 = id_^x_0 x. We also have (^x_0_x'x)^-1 = (F_xx_0F_x'xF_x_0x')^-1 = F_x'x_0F_xx'F_x_0x = ^x_0_xx' and
d_^x_0 x”(^x_0_x'x”^x_0_xx'a, ^x_0_xx”a) = d_Fx_0(F_x”x_0F_x'x”F_x_0x'F_x'x_0F_xx'F_x_0xa, F_x”x_0F_xx”F_x_0xa)
= d_Fx”(F_x'x”F_xx'F_x_0xa, F_xx”F_x_0xa)
≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”),
for any x, x', x”∈ X and a ∈^x_0 x.
The correspondence F ↦^x_0 defines a fully faithful functor (-)^x_0: _X _X. Further, it is restricted to a fully faithful functor _X^_X^ for any metric group .
Let θ : F ⟹ G be a metric transformation. We define a metric transformation θ^x_0 : ^x_0⟹G^x_0 by θ^x_0_x = θ_x_0 : ^x_0x G^x_0x ; a ↦θ_x_0a. Then we have
G^x_0_xx'θ^x_0_x = G_x'x_0G_xx'G_x_0xθ_x_0
= G_x'x_0G_xx'θ_xF_x_0x
= G_x'x_0θ_x'F_xx'F_x_0x
= θ_x_0F_x'x_0F_xx'F_x_0x
= θ^x_0_xF^x_0_xx',
hence this certainly defines a metric transformation. It is obvious that id_F^x_0 = id_^x_0 and (θ' θ)^x_0 = θ'^x_0θ^x_0. It is a faithful functor because G_xx_0θ_x = θ_x_0F_xx_0 implies that θ_x = θ'_x for any x ∈ X if two metric transformation θ, θ' satisfies θ_x_0 = θ'_x_0. By the definition, it is restricted to a faithful functor _X^_X^ for any metric group . Next we show the fullness. Let η : ^x_0⟹G^x_0 be a metric transformation. Then we have G^x_0_x_0xη_x_0 = η_xF^x_0_x_0x and F^x_0_x_0x = id_F_x_0, G^x_0_x_0x = id_G_x_0. Hence we obtain that η_x_0 = η_x for any x ∈ X. Now we define a metric transformation η : F ⟹ G by η_x = G_x_0xη_x_0F_xx_0 : Fx Gx. Then we have
G_xx'η_x = G_xx'G_x_0xη_x_0F_xx_0
= G_x_0x'G^x_0_xx'η_xF_xx_0
= G_x_0x'η_x'F^x_0_xx'F_xx_0
= G_x_0x'η_x'F_x'x_0F_xx'F_x_0xF_xx_0
= G_x_0x'η_x_0F_x'x_0F_xx'
= η_x'F_xx',
hence this certainly defines a metric transformation. We obviously have (η)^x_0 = η, which implies that the functor (-)^x_0 is full. The restriction to _X^_X^ is immediate. This completes the proof.
The functor (-)^x_0: _X _X is split essentially surjective. Its restriction _X^_X^ is also split essentially surjective for any metric group .
Let F : X be a metric action. We define a metric transformation θ : ^x_0⟹ F by θ_x = F_x_0x : F^x_0x Fx ; a ↦ F_x_0xa. It certainly satisfies that
F_xx'θ_x = F_xx'F_x_0x
= F_x_0x'F_x'x_0F_xx'F_x_0x
= θ_x'F^x_0_xx'.
Further, we define a metric transformation θ^-1 : F ⟹^x_0 by θ^-1_x = F_xx_0 : Fx ^x_0x for any x ∈ X. Then we have _xx'^x_0θ^-1_x = θ^-1_xF_xx' similarly to the above, hence it certainly defines a metric transformation. It is obviously an isomorphism. The restriction to _X^_X^ is immediate.
This completes the proof.
The functor (-)^x_0: _X _X and its restriction _X^_X^ for any metric group are category equivalences.
* We denote the image of the functor (-)^x_0: _X _X by _X^x_0.
* We denote the full subcategory of _X that consists of metric actions F : X such that Fx ≅ Y for any x ∈ X and a metric space Y by _X^Y.
* We denote the image of (-)^x_0 restricted to _X^Y and _X^ by _X^Y, x_0 and _X^, x_0 respectively.
* We denote the full subcategory of _X that consists of metric fibrations π : E X such that π^-1x ≅ Y for any x ∈ X and a metric space Y by _X^Y.
* We have category equivalences _X^Y _X^Y, x_0 and _X^_X^, x_0.
* The Grothendieck construction functor E : _X _X is restricted to the category equivalence _X^Y _X^Y.
(1) follows from Corollary <ref>, and (2) follows from the proof of Proposition <ref>.
§.§ Classification for the case of bounded fibers
In this subsection, we suppose that X and Y are metric spaces and that Y is bounded. Note that the isometry group of Y is then a metric group (Example <ref>).
We have a faithful functor
-↷ Y : _X^ Y_X^ Y.
Let F ∈_X^ Y. We define a metric action F ↷ Y : X by (F ↷ Y)x = Y and (F ↷ Y)_xx' = F_xx' : Y Y. It is immediate to verify that this certainly defines a metric action. For an Y-metric transformation θ : F ⟹ G, we define a metric transformation θ↷ Y : F↷ Y ⟹ G↷ Y by (θ↷ Y)_x = θ_x : Y Y ; y ↦θ_xy. Then it is also immediate to verify that it is a metric transformation. Further, this obviously defines a faithful functor. This completes the proof.
The functor -↷ Y : _X^ Y_X^Y is split essentially surjective.
Let F ∈_X^Y and fix isometries φ_x : Y Fx by the axiom of choice. We define an Y-metric action F by ( F)x = Y and ( F)_xx' = φ_x'^-1F_xx'φ_x· that is a left multiplication. Then we can verify that it is an Y-metric action as follows. Note that we have ( F)_xx = φ_x^-1F_xxφ_x· = id_ Y and ( F)_xx'^-1 = φ_x^-1F_x'xφ_x'· = ( F)_x'x. We also have that
d_ Y(( F)_x'x”( F)_xx', ( F)_xx”)
= d_ Y(φ_x”^-1F_x'x”φ_x'φ_x'^-1F_xx'φ_x, φ_x”^-1F_xx”φ_x)
= d_ Y(φ_x”^-1F_x'x”F_xx'φ_x, φ_x”^-1F_xx”φ_x)
= sup_a ∈ Yd_Y(φ_x”^-1F_x'x”F_xx'φ_xa, φ_x”^-1F_xx”φ_xa)
= sup_a ∈ Fxd_Fx”(F_x'x”F_xx'a, F_xx”a)
≤ d_X(x, x') + d_X(x', x”) - d_X(x, x”).
Now we define a metric transformation φ : F↷ Y ⟹ F by φ_x : ( F↷ Y)x = Y Fx. Then it certainly satisfies that F_xx'φ_x = φ_x'( F↷ Y)_xx' and is an isomorphism by the definition. This completes the proof.
Since the category _X^ Y is a groupoid, the image of the functor -↷ Y is in core_X^Y, where we denote the subcategory that consists of all isomorphisms by core (Definition <ref> (4)).
The functor -↷ Y^x_0 : _X^ Y core_X^Y, x_0 is full.
Note that we have -↷ Y^x_0 = (-)^x_0↷ Y by the definitions. Since the functor (-)^x_0 : _X^ Y_X^ Y is full by Lemma <ref>, we show that the restriction -↷ Y : _X^ Y, x_0 core_X^Y, x_0 is full. Let θ : F^x_0↷ Y ⟹G^x_0↷ Y be an isomorphism in _X^Y, x_0, where F, G ∈_X^ Y. Then we have an isometry θ_x : Y Y such that G_x'x_0G_xx'G_x_0xθ_x = θ_x'F_x'x_0F_xx'F_x_0x for any x, x' ∈ X. Since we have θ_x ∈ Y, we obtain a morphism θ' : F^x_0⟹G^x_0∈_X^ Y, x_0 defined by θ'_x = θ_x. It is obvious that we have θ' ↷ Y = θ. This completes the proof.
The functor -↷ Y^x_0 : _X^ Y core_X^Y, x_0 is a category equivalence.
The categories _X^ Y and core_X^Y are equivalent.
It follows from Corollary <ref> with core_X^Y ≃ core_X^Y ≃ core_X^Y, x_0 by Lemma <ref>.
§.§ Classification for the case of unbounded fibers
To classify general metric fibrations, we generalize the discussions so far to extended metric groups.
* An extended metric group is a group object in the category of extended metric spaces.
* For extended metric groups 𝒢 and ℋ, a homomorphism from 𝒢 to ℋ is a Lipschitz map 𝒢 → ℋ that commutes with the group structure.
* We denote the category of extended metric groups and homomorphisms by . Note that the category of metric groups is a full subcategory of the category of extended metric groups.
Let (X, d) be a metric space, and consider the group of isometries of X equipped with the distance function d(f, g) = sup_x∈ X d_X(fx, gx). It is immediate to verify that this makes the isometry group of X an extended metric group. We note that its `unit component', that is, the set of isometries f such that d( id_X, f)< ∞, is exactly ^u X (Example <ref>). Note that, if the metric space X has finite diameter, then the isometry group of X coincides with ^u X, which is a metric group.
Let 𝒢 and 𝒢' be extended metric groups, and let Hom(𝒢, 𝒢') be the set of homomorphisms. We equip Hom(𝒢, 𝒢') with a groupoid structure similarly to the metric group case by defining Hom(𝒢, 𝒢')(φ, ψ) = {h ∈ 𝒢' | φ = h^-1ψ h} for any homomorphisms φ, ψ : 𝒢 → 𝒢'.
We note that the same statement as Lemma <ref> holds for extended metric groups. Further, the relationship between extended metric groups and normed groups analogous to Proposition <ref> holds if we replace the codomain of norms by [0, ∞].
Let 𝒢 be an extended metric group and X be a metric space. An extended 𝒢-metric action F is a correspondence X ∋ x ↦ Fx = 𝒢 together with elements F_xx'∈ 𝒢 such that
* F_xx = e, F_xx' = F_x'x^-1,
* d_𝒢(F_x'x”F_xx', F_xx”) ≤ d_X(x, x')+d_X(x', x”) - d_X(x, x”).
For extended 𝒢-metric actions F and G, an extended 𝒢-metric transformation θ : F⟹ G is a family of elements {θ_x ∈ 𝒢}_x∈ X such that G_xx'θ_x = θ_x'F_xx'. We denote the category of extended 𝒢-metric actions and extended 𝒢-metric transformations by _X^𝒢.
The following is obtained from the same arguments in subsection <ref> by replacing the `metric group' by `extended metric group'.
For an extended metric group 𝒢 and a metric space X, the categories ^𝒢_X and Hom(π^m_1(X, x_0), 𝒢) are equivalent.
Further, the arguments in subsection <ref> can be applied to the extended case, and we obtain the following.
For any metric spaces X and Y, the categories _X^ Y and core_X^Y are equivalent. Hence metric fibrations over X with fiber Y are classified by Hom(π^m_1(X, x_0), 𝒢), where 𝒢 denotes the isometry group of Y regarded as an extended metric group.
§ COHOMOLOGICAL INTERPRETATION
In this section, we give a cohomological classification of 𝒢-torsors. It is an analogue of the first Čech cohomology. Before giving the definition, we introduce the following technical term.
Let X be a metric space, and x_1, x_2, x_3 ∈ X. We denote the subset {x_1, x_2, x_3}⊂ X by Δ(x_1, x_2, x_3) and call it a triangle. We define the degeneracy degree of the triangle Δ(x_1, x_2, x_3) by
|Δ(x_1, x_2, x_3)| := min{d_X(x_i, x_j) + d_X(x_j, x_k) - d_X(x_i, x_k) |{i, j, k} = {1, 2, 3}}.
Note that it is enough to consider i, j, k's running in the cyclic order to obtain the above minimum.
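A direct Python transcription of this definition (our own helper; the sample points are made up) reads as follows.

from itertools import permutations
from math import dist

def degeneracy(d, x1, x2, x3):
    # |Delta(x1, x2, x3)|: minimal triangle-inequality defect over all orderings
    return min(d(a, b) + d(b, c) - d(a, c) for a, b, c in permutations((x1, x2, x3)))

print(degeneracy(dist, (0, 0), (1, 0), (3, 0)))  # 0.0 for colinear points
print(degeneracy(dist, (0, 0), (1, 1), (2, 0)))  # about 0.83 for a genuine triangle

By the remark above, running over the three cyclic orderings only would already give the same minimum.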
The following is the definition of our `1-Čech chomology'.
Let X be a metric space and suppose that the points of X are indexed as X = {x_i}_i ∈ I. For a metric group 𝒢, we define the 1-cohomology of X with coefficients in 𝒢 as the category ^1(X; 𝒢) given by
^1(X; 𝒢) = {(a_ijk) ∈ 𝒢^I^3| a_ijka_kjℓ = a_ijℓ, |a_ijka_jkia_kij| ≤ |Δ(x_i, x_j, x_k)|},
and
^1(X; 𝒢)((a_ijk), (b_ijk)) = {(f_ij) ∈ 𝒢^I^2| a_ijkf_jk = f_ijb_ijk},
where |-| denotes the conjugation invariant norm on 𝒢. We call an object of ^1(X; 𝒢) a cocycle. Clearly, the above constructions are independent of the choice of the index set I.
Note that, for a cocycle (a_ijk) ∈^1(X; ), the condition a_ijka_kjℓ = a_ijℓ implies that a_iji = e and a_ijk = a_kji^-1 for any i, j, k ∈ I. Further, for a morphism (f_ij), we have f_ij = f_ji from the condition a_ijkf_jk = f_ijb_ijk and a_iji = b_iji = e.
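As a sanity check of the two cocycle conditions, the following Python sketch (an illustrative helper of ours; the three planar points and the choice of coefficient group ℤ/2 are made up) verifies that the trivial family a_ijk = e is a cocycle; non-trivial cocycles arise from torsors with local sections, as in the construction given later in this section.

from itertools import product, permutations
from math import dist

def degeneracy(d, p, q, r):
    return min(d(a, b) + d(b, c) - d(a, c) for a, b, c in permutations((p, q, r)))

def is_cocycle(points, d, a, op, norm):
    # a is a dict indexed by triples (i, j, k); op and norm describe the normed coefficient group
    I = range(len(points))
    cond1 = all(op(a[i, j, k], a[k, j, l]) == a[i, j, l] for i, j, k, l in product(I, repeat=4))
    cond2 = all(norm(op(op(a[i, j, k], a[j, k, i]), a[k, i, j]))
                <= degeneracy(d, points[i], points[j], points[k])
                for i, j, k in product(I, repeat=3))
    return cond1 and cond2

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
trivial = {idx: 0 for idx in product(range(3), repeat=3)}
print(is_cocycle(pts, dist, trivial, op=lambda x, y: (x + y) % 2, norm=float))  # True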
The 1-cohomology of X with the coefficient in is well-defined, that is, ^1(X; ) is indeed a category, in particular a groupoid.
Let (a_ijk), (b_ijk), (c_ijk) ∈^1(X; ), and (f_ij) : (a_ijk) (b_ijk) and (f'_ij) : (b_ijk) (c_ijk) be morphisms. Then (f'∘ f)_ij := f_ijf'_ij defines a morphism ((f'∘ f)_ij) : (a_ijk) (c_ijk) since we have
a_ijkf_jkf'_jk = f_ijb_ijkf'_jk = f_ijf'_ijc_ijk.
It obviously satisfies the associativity. The identity on a_ijk is apparently defined by e_ij = e, where e denotes the unit of . Further, (f^-1_ij) defines a morphism (f^-1_ij) : b_ijk a_ijk that is the inverse of (f_ij). This completes the proof.
We have a faithful functor β : ^1(X; 𝒢) → ^𝒢_X.
For (a_ijk) ∈^1(X; 𝒢), we define a 𝒢-torsor β (a_ijk) as follows. Let 𝒰 = ∐_(i, j) ∈ I^2 𝒢_ij, where 𝒢_ij = 𝒢^ij_i ∐ 𝒢^ij_j = 𝒢 ∐ 𝒢. We write an element of 𝒢^ij_∙ as g^ij_∙ and we denote the identification 𝒢 = 𝒢^ij_∙ by the map 𝒢 → 𝒢^ij_∙ ; g ↦ g^ij_∙, where ∙∈{i, j}, for any i ≠ j ∈ I. We define an equivalence relation ∼ on 𝒰 generated by
g^ij_j ∼ (ga_ijk)^jk_j.
Note that we have g^ij_j∼ g^ji_j for any i, j ∈ I. We denote the quotient set 𝒰/∼ by β (a_ijk) in the following. Then we have a surjective map π : β (a_ijk) X defined by π [g^ij_j] = x_j. For this map π, we have the following.
For any i, j ∈ I, the map 𝒢 → π^-1x_j ; g ↦ [g^ij_j] is a bijection.
The surjectivity is clear. We show the injectivity. Suppose that we have [g^ij_j] = [h^ij_j] for g, h ∈. That is, we have elements a_k_0jk_1, a_k_1jk_2, …, a_k_N-1jk_N∈ such that ga_k_0jk_1… a_k_N-1jk_N = h and k_0 = k_N = i. Then the condition a_ijka_kjℓ = a_ijℓ implies that ga_iji=h, hence g = h. This completes the proof.
Note that Lemma <ref> implies that [g^ij_j] = [h^jk_j] implies that h = ga_ijk. Now we can define a distance function d_β (a_ijk) on β (a_ijk) as follows. Let ε_i ∈π^-1x_i and ε_j ∈π^-1x_j. Then there uniquely exist g, h ∈ such that [g^ij_i] = ε_i and [h^ij_j] = ε_j by Lemma <ref>. Then we define that
d_β (a_ijk)(ε_i, _j) = d_X(x_i, x_j) + d_(g, h).
The non-degeneracy is clear. The symmetry follows from that [g^ij_i] = [g^ji_i]. The triangle inequality is verified as follows. Let ε_i ∈π^-1x_i, ε_j ∈π^-1x_j and ε_k ∈π^-1x_k. Suppose that we have [g^ij_i] = ε_i = [g'^ik_i], [h^ij_j] = ε_j = [h'^jk_j], and [m^jk_k] = ε_k = [m'^ik_k]. Then we have g = g'a_kij, h' = ha_ijk and m = m'a_ikj, hence we obtain that
d_β (a_ijk)(ε_i, ε_j) + d_β (a_ijk)(ε_j, ε_k)
= d_X(x_i, x_j) + d_(g, h) + d_X(x_j, x_k) + d_(h', m)
= d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, h) + d_(ha_ijk, m'a_ikj)
= d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, h) + d_(ha_ijka_jkia_kij, m'a_kij)
+ d_(h, ha_ijka_jkia_kij) - d_(h, ha_ijka_jkia_kij)
≥ d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g'a_kij, m'a_kij) - |a_ijka_jkia_kij|
≥ d_X(x_i, x_j) + d_X(x_j, x_k) + d_(g', m') - |Δ(x_i, x_j, x_k)|
≥ d_X(x_i, x_k) + d_(g', m')
= d_β (a_ijk)(ε_i, ε_k).
Now a map π : β (a_ijk) X is obviously a 1-Lipschitz map. Further, we verify that it is a metric fibration as follows. Let x_i, x_j ∈ X and ε_i ∈π^-1x_i. Suppose that we have ε_i = [g^ij_i] for g ∈. Then ε_j := [g^ij_j] ∈π^-1x_j is the unique element in π^-1x_j such that d_β (a_ijk)(ε_i, ε_j) = d_X(x_i, x_j). Also, for ε'_j := [h^ij_j] ∈π^-1x_j, we have d_β (a_ijk)(ε_i, ε'_j) = d_X(x_i, x_j) + d_(g, h) = d_β (a_ijk)(ε_i, ε_j) + d_β (a_ijk)(ε_j, ε'_j). Finally, we equip the metric fibration π : β (a_ijk) X with a right action by as [g^ij_∙]h = [(h^-1g)^ij_∙] for any i, j ∈ I and ∙∈{i, j}. This is well-defined since we have that
[(ga_ijk)^jk_j]h = [(h^-1ga_ijk)^jk_j] = [(h^-1g)^ij_j] = [g^ij_j]h.
It is straightforward to verify that this is a -torsor.
Next we show the functoriality. Let (f_ij) : (a_ijk) (b_ijk) ∈^1(X; ). We construct a map f_∗ : β(a_ijk) β(b_ijk) by [g^ij_∙] ↦ [(gf_ij)^ij_∙] for any i, j ∈ I and ∙∈{i, j}. It is well-defined since we have that
[(ga_ijk)^jk_j] ↦ [(ga_ijkf_jk)^jk_j]= [(gf_ijb_ijk)^jk_j] = [(gf_ij)^ij_j].
The map f_∗ obviously preserves fibers, and is an isometry since we have that
d_β(b_ijk)(f_∗ [g^ij_i], f_∗ [h^ij_j]) = d_β(b_ijk)( [(gf_ij)^ij_i], [(hf_ij)^ij_j])
= d_X(x_i, x_j) + d_(gf_ij, hf_ij)
= d_X(x_i, x_j) + d_(g, h)
= d_β(a_ijk)([g^ij_i], [h^ij_j]).
Further, it is -equivariant since we have that
(f_∗[g^ij_j])m = [(gf_ij)^ij_j]m = [(m^-1gf_ij)^ij_j] = f_∗([g^ij_j]m).
The faithfullness is obvious from the construction. This completes the proof.
The functor β : ^1(X; 𝒢) → ^𝒢_X is full.
Let (a_ijk), (b_ijk) ∈^1(X; ) be cocycles, and suppose that we have a morphism φ : β(a_ijk) β(b_ijk) in ^_X. We denote the projections β(a_ijk) X and β(b_ijk) X by π_a and π_b respectively in the following. For any i, j ∈ I, we have bijections A_ij : π_a^-1x_j and B_ij : π_b^-1x_j given by g ↦ [g^ij_j] by Lemma <ref>. Then we define a map φ_ij = B_ij^-1φ A_ij :, namely we have φ[g^ij_j] = [(φ_ijg)^ij_j]. Now the -equivariance of φ implies that
φ[g^ij_j] = φ[(ge)^ij_j] = (φ[e^ij_j])g^-1 = [(φ_ije)^ij_j]g^-1 = [(gφ_ije)^ij_j],
which implies that φ_ijg = gφ_ije by Lemma <ref>. From this, we obtain that
φ[(ga_ijk)^jk_j] = φ[(ga_ijk)^kj_j] = [(φ_kj(ga_ijk))^kj_j] = [(ga_ijkφ_kje)^kj_j].
Since we have [g^ij_j] = [(ga_ijk)^jk_j], we obtain that a_ijkφ_kje = (φ_ije)b_ijk by Lemma <ref>. Further, since the lift of x_j along [g^ij_i] is [g^ij_j] and φ preserves the lift, the conditions φ[g^ij_j] = [(φ_ijg)^ij_j] and φ[g^ji_i] = [(φ_jig)^ji_i] implies that φ_ij = φ_ji. Hence we obtain a morphism (φ_ije) : (a_ijk) (b_ijk) in ^1(X; ), which satisfies that β (φ_ije) = φ by the construction. This completes the proof.
Let π : E X be a -torsor. For x_i, x_j ∈ X, we define a local section of π over a pair (x_i, x_j) as a pair of points (ε_i, ε_j) ∈ E^2 such that π_i = x_i, π_j = x_j and ε_j is the lift of x_j along ε_i. We say that ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 is a local section of π if each (ε^ij_i, ε^ij_j) is a local section of π over a pair (x_i, x_j) and satisfies that ε^ij_i = ε^ji_i.
Let π : E X be a -torsor. For a local section s =((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 of π, we can construct a cocycle α_s π∈^1(X;). Further, for any two local sections s, s' of π, the corresponding cocycles α_s π and α_s'π are isomorphic.
We define a_ijk∈ as the unique element such that ε^ij_ja_ijk = ε^jk_j. Then (a_ijk) satisfies that a_ijka_kjℓ = a_ijℓ since we have
ε^ij_ja_ijka_kjℓ = ε^jk_ja_kjℓ = ε^kj_ja_kjℓ = ε^jℓ_j.
Now note that we have ε_xg = (ε g)_x for any ε∈ E, x ∈ X and g ∈. Hence we have that
ε^ij_ja_ijka_jkia_kij = ε^jk_ja_jkia_kij
= (ε^jk_k)_x_ja_jkia_kij
= (ε^jk_ka_jki)_x_ja_kij
= (ε^ki_k)_x_ja_kij
= ((ε^ki_i)_x_ka_kij)_x_j
= ((ε^ki_ia_kij)_x_k)_x_j
= ((ε^ij_i)_x_k)_x_j.
Hence we obtain that
|a_ijka_jkia_kij| = d_E(ε^ij_j, ε^ij_ja_ijka_jkia_kij)
= d_E(ε^ij_j, ((ε^ij_i)_x_k)_x_j)
= -d_E(ε^ij_j, ε^ij_i) + d_E(ε^ij_i, ((ε^ij_i)_x_k)_x_j)
≤ -d_E(ε^ij_j, ε^ij_i) + d_E(ε^ij_i, (ε^ij_i)_x_k) + d_E((ε^ij_i)_x_k, ((ε^ij_i)_x_k)_x_j)
= -d_X(x_j, x_i) + d_X(x_i, x_k) + d_X(x_k, x_j).
Since the norm |-| on is conjugation invariant, the value |a_ijka_jkia_kij| is invariant under the cyclic permutation on {i, j, k}, hence we obtain that |a_ijka_jkia_kij| ≤ |Δ(x_i, x_j, x_k)|. Thus we obtain a cocycle α_s π := (a_ijk) ∈^1(X ; ). Suppose that we have local sections s = ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 and s' = ((μ^ij_i,μ^ij_j))_(i, j)∈ I^2. Then there exists an element (f_ij) ∈^I^2 such that (ε^ij_if_ij,ε^ij_jf_ij) = (μ^ij_i,μ^ij_j). Let α_s π = (a_ijk) and α_s'π = (b_ijk). Then we obtain that
ε^ij_ja_ijkf_jkb^-1_ijk = ε^jk_jf_jkb^-1_ijk = μ^jk_jb^-1_ijk = μ^ij_j,
which implies that f_ij = a_ijkf_jkb^-1_ijk. Hence (f_ij) defines a morphism (f_ij) : (a_ijk) (b_ijk) in ^1(X; ). Since ^1(X; ) is a groupoid, this is an isomorphism. This completes the proof.
The functor β : ^1(X; 𝒢) → ^𝒢_X is split essentially surjective.
Let π : E X be a -torsor. Fix a local section s = ((ε^ij_i,ε^ij_j))_(i, j)∈ I^2 of π. Let α_sπ = (a_ijk) be the cocycle constructed in Proposition <ref>. We show that the -torsors β(a_ijk) and π are isomorphic. We define a map φ : β(a_ijk) E by [g^ij_∙] ↦ε^ij_∙ g^-1. It is well-defined since we have that
[(ga_ijk)^jk_j] ↦ε^jk_ja^-1_ijkg^-1 = ε^ij_jg^-1.
It obviously preserves fibers and is a bijection. Also, it is an isometry since we have that
d_E(φ[g^ij_i], φ[h^ij_j]) = d_E(ε^ij_ig^-1, ε^ij_jh^-1)
= d_E(ε^ij_i, ε^ij_jh^-1g)
= d_E(ε^ij_i, ε^ij_j) + d_E(ε^ij_j, ε^ij_jh^-1g)
= d_X(x_i, x_j) + d_(g^-1, h^-1)
= d_β(a_ijk)([g^ij_i], [h^ij_j]).
Further, it is immediately verified that φ is -equivariant. Hence the map φ gives an isomorphsim in ^_X. This completes the proof.
The functor β : ^1(X; 𝒢) → ^𝒢_X is a category equivalence.