Ian Gleason, Alexander B. Ivanov
Mathematisches Institut der Universität Bonn, Endenicher Allee 60, Bonn, Germany
[email protected]
Fakultät für Mathematik, Ruhr-Universität Bochum, Universitätsstrasse 150, D-44780 Bochum, Germany
[email protected]
We introduce and study the stack of meromorphic G-bundles on the Fargues–Fontaine curve.
This object defines a correspondence between the Kottwitz stack and _G. We expect it to play a crucial role in comparing the schematic and analytic versions of the geometric local Langlands categories.
Our first main result is the identification of the generic Newton strata of _G^ with the Fargues–Scholze charts .
Our second main result is a generalization of Fargues' theorem in families.
We call this the meromorphic comparison theorem.
It plays a key role in proving that the analytification functor is fully faithful.
Along the way, we give new proofs to what we call the topological and schematic comparison theorems.
These say that the topologies of _G and are reversed and that the two stacks take the same values when evaluated on schemes.
Meromorphic vector bundles on the Fargues–Fontaine curve
August 1, 2023
§ INTRODUCTION
Let p be a prime number, let E/_p be a finite field extension, let ℓ be a prime with ℓ≠ p and Λ=_ℓ.
Let G be a connected reductive group over E.
Let W_E be the Weil group and let ^LG=Ĝ⋊ W_E be the L-group.
§.§ Motivation and context
Let Π_G be the set of isomorphism classes of smooth irreducible representations of the locally profinite group G(E) with values in Λ and let Φ_G be the set of Ĝ-conjugacy classes of L-parameters.
The basic form of the local Langlands correspondence gives a map
LLC_G:Π_G→Φ_G
satisfying some properties <cit.>, <cit.>.
For GL_n the map LLC_GL_n is bijective <cit.>, but this does not hold more generally.
Nevertheless, LLC_G has finite fibers that are called L-packets and understanding them is the subject of the refined local Langlands correspondence.
For quasi-split groups, one can fix a Whittaker datum to put the elements of an L-packet in canonical bijection with the set of isomorphism classes of irreducible representations of a certain finite group constructed in terms of the L-parameter <cit.>.
When G is not quasi-split, Whittaker data do not exist.
Vogan realized that to work with general G it is advantageous to consider its quasi-split inner form G^∗ and parametrize simultaneously the representations of all the pure inner twists of G^∗ <cit.>.
Motivated by the study of special fibers of Shimura varieties, Kottwitz introduced the set B(G) of isocrystals with G-structure <cit.>.
The set of basic elements B(G)_bas gives rise to the so-called extended pure inner forms G_b of G.
Kottwitz formulated a refined version of the local Langlands correspondence for non-Archimedean local fields using the inner forms that arise from B(G)_bas <cit.> <cit.>.
The set B(G) can be realized as the underlying topological space of two geometric objects.
One object is of analytic nature, _G (the stack of G-bundles on the Fargues–Fontaine curve) and a second object is of schematic nature, (the Kottwitz stack parametrizing isocrystals with G-structure).
For every element b∈ B(G) one can define locally closed strata i_b:_b→ and j_b:_G^b→_G.
Interestingly, whenever b∈ B(G)_bas both _b and _G^b agree with the classifying stack [∗/G_b(E)] for the extended pure inner form of G defined by b.[Even when b is not basic, the category of étale sheaves on _G^b and _b can be understood in terms of the representation category of a pure inner form of a Levi subgroup.]
This leads to the hope that the refined local Langlands correspondence of Kottwitz has a categorical refinement that one can access by studying the geometry of the stacks _G and/or .
Recent breakthroughs in p-adic and perfect geometry <cit.> together with the introduction and study of the stack of L-parameters <cit.>, have led experts to formulate precise conjectures that capture this hope.
These efforts promote, in a precise way, the refined local Langlands correspondence mentioned above to a categorical statement <cit.>.
There is widespread agreement on what to consider on the Galois side, namely a version of the derived category of coherent sheaves ^b,qc_coh(_Ĝ,Λ) of the stack _Ĝ,Λ parametrizing L-parameters over Λ (see <cit.>, <cit.>).
On the automorphic side, there are at least two reasonable constructions of the local Langlands category.
The essential difference between them arises from the fact that B(G) has two geometric incarnations.
Let G be quasi-split and let W_ be the Whittaker representation associated to .
On the analytic side, Fargues–Scholze construct the category of lisse sheaves D_lis(_G,Λ) <cit.> and prove it is compactly generated.
Moreover, they endow this category with the so-called spectral action by the category of perfect complexes Perf(_Ĝ,Λ).
They conjecture that there is a unique Perf(_Ĝ,Λ)-linear equivalence of ∞-categories
^an_G:_lis(_G,Λ)^ω≅^b,qc_coh(_Ĝ,Λ)
that sends the analytic Whittaker sheaf ^an_=j_1,!W_ to the structure sheaf __Ĝ,Λ where both objects are treated as elements of their ind-completions.
On the schematic side, Xiao–Zhu consider the moduli stack of local shtukas Sht^loc_k in the context of characteristic p perfect geometry.
They attach their own candidate for the local Langlands category: namely, they construct a triangulated category of cohomological correspondences P^Corr(Sht^loc_k) <cit.> <cit.>.
This approach is pushed further in the forthcoming work of Hemo–Zhu <cit.>, where they construct an ∞-category Shv(,Λ) whose homotopy category agrees with P^Corr(Sht^loc_k).
Zhu conjectures that there is an equivalence
^sch_G: Shv(,Λ) ≅Ind(^b,qc_coh(_Ĝ,Λ))
sending __Ĝ,Λ to the schematic Whittaker sheaf ^sch_=i_1,*W_ <cit.>.
Moreover, Hemo–Zhu have announced a proof of the unipotent part of the categorical local Langlands correspondence <cit.>.
Let us clarify.
When Λ is of characteristic 0, the stack of L-parameters has an open and closed substack ^unip_Ĝ,Λ⊆_Ĝ,Λ defining a full subcategory
Ind(^b,qc_coh(^unip_Ĝ,Λ))⊆Ind(^b,qc_coh(_Ĝ,Λ)).
One can also define a full subcategory Shv^unip(,Λ)⊆Shv(,Λ) defined by the property that for all b∈ B(G) the restriction to _b is given by a complex of G_b-representations that are unipotent in the sense of Lusztig <cit.>.
Using Bezrukavnikov's equivalence <cit.>, Hemo and Zhu prove that there is an equivalence of ∞-categories
^sch_G: Shv^unip(,Λ) ≅Ind(^b,qc_coh(^unip_Ĝ,Λ)).
It is natural to expect that there exists an equivalence
Ψ:Shv(,Λ)→_lis(_G,Λ),
satisfying Ψ(^sch_)=^an_.
Indeed, the two local Langlands categories are conjectured to be equivalent to Ind(^b,qc_coh(_Ĝ,Λ)) and if the two conjectures are true one can simply define Ψ=^an,-1_G∘^sch_G.
A reasonable question the reader can ask is: why do we need two local Langlands categories?
We believe that it is profitable to construct Ψ directly in order to better understand ^sch_G and ^an_G.
At a technical level, a direct construction of Ψ allows one to transfer Hemo–Zhu's results on unipotent categorical local Langlands correspondence to the Fargues–Scholze setup and conversely, endow Shv(,Λ) with a spectral action.
It would also allow us to formulate rigorously the eigensheaf property for the Deligne–Lusztig sheaves considered in <cit.>.
More philosophically, the schematic perspective and the analytic perspective understand different phenomena.
For example, the schematic perspective cannot witness the spectral action because "the paw" is fixed.
On the other hand, Shv(,Λ) is directly related to Bezrukavnikov's equivalence and its Frobenius-twisted categorical trace <cit.> since, in contrast with _G, both and the Hecke stack are constructed in terms of Witt vector loop groups.
At the heart of the equivalence Ψ, there should be a geometric explanation.
Namely, that the stacks (G) and _G are incarnations of the same geometric object.
In this paper, we reveal these geometric relations which we formulate in terms of three comparison theorems (see 10).
One of the achievements of this article is the construction of a third incarnation _G^ that mediates between (G) and _G. Roughly speaking, _G^ is given by the same moduli problem as _G, but we require a meromorphicity condition on the action of Frobenius (see <Ref>, <Ref>).
This object defines a correspondence
_G^ ──σ──→ _G
  │ γ
  ↓
 ^
We call the map γ the generic polygon map and σ the special polygon map inspired by <cit.>.
Morally, Ψ should be given by σ_!∘γ^*∘ c^* where
c^*:Shv(,Λ)→(^,Λ)
is an analytification functor <cit.>.
Unfortunately, one can not define σ_! naively since _G^ is not an Artin v-stack (see 9.1).[One can work with σ_♮ (i.e. the left adjoint of σ^∗) which always exists, but this does not avoid the problem. Indeed, σ_♮∘γ^* does not necessarily land in _lis(_G).] In particular, the usual 6-functor formalisms considered in the analytic perspective <cit.> do not suffice to construct σ_!.
For this reason, although we are convinced that our geometric considerations are the key to the construction and study of Ψ, we do not try to compare the local Langlands categories themselves in this work and we leave this comparison for a second article in which we justify the existence of σ_! on an appropriate cohomological theory.
Nevertheless, to orient the reader, we still provide some informal indication of the cohomological relevance that our main theorems have.
§.§ Main results
For b∈ B(G) we let _b⊆ denote the locally closed substack determined by b.
Then _b^⊆^ is also a locally closed substack and we have an identification
_b^=[∗/J_b(E)].
Recall the moduli stack of Fargues–Scholze <cit.> that is used to define the smooth charts of _G.
It comes endowed with a map
q:→∐_b∈ B(G) [∗/J_b(E)]≅∐_b∈ B(G)_b^
The following <Ref> is a relative and Tannakian version of Kedlaya's work on the slope filtration <cit.>, and our first main result.
We have a commutative diagram with Cartesian square
(Fargues–Scholze moduli stack) ──→ _G^ ──σ──→ _G
          │ q                        │ γ
          ↓                          ↓
 ∐_b∈B(G) _b^        ──→          ^
(the map π from the Fargues–Scholze moduli stack to _G is the composite along the top row)
In other words, the restriction of σ:_G^→_G to γ^-1([∗/J_b(E)]) coincides with the Fargues–Scholze chart π_b:_b→_G <cit.>.
To apply Tannakian formalism one has to take subtle care of the exact structure.
We do this by justifying that a sequence is exact if and only if it is exact at every geometric point (see <Ref>). <Ref> holds for an arbitrary non-Archimedean local field E.
This theorem has a cohomological consequence that we now discuss.
Recall from <cit.> that we have identifications
(Rep J_b(E),Λ)≅_lis([∗/J_b(E)],Λ) ≅_lis(_b^,Λ) ≅_lis(_b,Λ).
Let i_b:_b→ denote the inclusion maps of the Newton strata.
The !-pushforward functors define full subcategories
Shv_b,!(,Λ)⊆Shv(,Λ).
On the analytic side we can consider a full subcategory
_lis(_G,Λ)__b⊆_lis(_G,Λ)
obtained as the essential image of the fully faithful functor
π_b,!∘ q_b^*:_lis([∗/J_b(E)],Λ)→_lis(_G,Λ).
The categories Shv(,Λ)^ω and _lis(_G,Λ)^ω have semi-orthogonal decompositions by the subcategories Shv_b,!(,Λ)^ω and _lis(_G,Λ)^ω__b respectively.
One can deduce from <Ref> that if
σ_!:(^_G,Λ)^ω→_lis(_G,Λ)^ω
exists and satisfies proper base change, then Ψ=σ_!∘γ^*∘ c^* restricts to an equivalence
Ψ:Shv_b,!(,Λ)^ω→_lis(_G,Λ)^ω__b
such that Ψ(_^sch)=_^an as ind-objects.
In particular, if σ_!∘γ^*∘ c^* is fully faithful it is also essentially surjective since every object in _lis(_G,Λ)^ω is a finite colimit of objects in _lis(_G,Λ)__b^ω as b varies.
In summary, our first main geometric result would have as a cohomological consequence the essential surjectivity of Ψ.
Z. Wu made similar considerations using σ_♮ instead of σ_! (see <Ref>).
Our second main result is related to full faithfulness. Recall the analytification functor X↦ X^† obtained from sheafifying the formula
(R,R^+)↦ X( R^∘).
For any small v-stack X we have a fully faithful map <cit.>
_ét(X,_ℓ)→_ét(X^, _ℓ)→_ét(X^†, _ℓ).
We have the following identification of small v-stacks
_G^≅^†
and an identification of maps b^*_=γ^*.
A similar statement holds for the stack of -shtukas.
This is our deepest result and it can be regarded as a version of Fargues' theorem in families (see <Ref>).
The proof of <Ref> does not generalize naively to local fields in equal characteristic.
We have a fully-faithful comparison map
γ^*∘ c^*:_ét(,_ℓ)→_ét(_G^,_ℓ).
<Ref> provides an approach to prove that Ψ is fully-faithful.
Indeed, it suffices to prove that σ_! is fully-faithful when restricted to those objects in the essential image of γ^*∘ c^*.
The advantage is that the geometry of _G^ is much closer to that of _G than that of .
We warn the reader that _ét(,_ℓ) does not agree with Shv(,_ℓ).
There is a fully faithful version of γ^*∘ c^* for Shv(,_ℓ), but its target category is not _ét(_G^,_ℓ).
This, among other cohomological subtleties, will be addressed in the follow up project.
Our third main theorem is of technical nature, but it has already found applications outside of the scope of this article.
For example, it is a key technical ingredient in Zhang's proof of the integral version of Scholze's fiber product conjecture <cit.>.
Every vector bundle on the Fargues–Fontaine curve extends v-locally at ∞.
As a direct consequence we obtain the classification of <Ref> below.
We fix some notation.
Let S= R be a product of points with R^∘=∏_i∈ I O_C_i and a family of pseudo-uniformizers ϖ_∞=(ϖ_i)_i∈ I such that ϖ_∞ defines the topology on R^∘.
Fix an untilt S^♯ given by a non-zero divisor ξ_∞=(ξ_i)_i∈ I; this induces an untilt C^♯_i for every i∈ I.
The following categories are equivalent:
* The category of shtukas over S with paw at S^♯.
* The category of Breuil–Kisin–Fargues modules over _inf(R^∘,♯).
* The category of I-indexed families {(M_i,Φ_i)}_i∈ I of Breuil–Kisin–Fargues modules over _inf(O_C^♯_i) with uniformly bounded poles and zeroes at ξ_∞.
§.§ New proofs of two established results
As a consequence of our considerations we found new approaches to previously proven theorems relating the geometry of and _G.
§.§.§ The schematic comparison
Recall the reduction functor introduced by the first author in <cit.> (see also 2).
The following theorem is a reformulation of a result of Anschütz <cit.>, generalized by Pappas–Rapoport <cit.>. We clarify and strengthen their approach.
We have an identification of scheme theoretic v-stacks
(_G)^≅.
A similar statement holds for the stack of -shtukas.
We regard <Ref> as a classicality statement.
Anschütz proves the 0-dimensional case using the classification of vector bundles over the Fargues–Fontaine curve.
Pappas–Rapoport prove this more generally using the 0-dimensional statement and in particular rely on the φ-structure.
We give a uniform proof and work directly with the category of v-vector bundles over Y_(0,∞) showing that classicality is unrelated to the φ-structure.
Güthge also realized this independently (see <Ref>).
§.§.§ The topological comparison
Recall that B(G) comes endowed with a topology induced by its partial order.
We can also consider B(G)^op endowed with the opposite topology.
Viehmann <cit.> proves |_G|^op≅ B(G).
Rapoport–Richartz <cit.> and He <cit.> prove ||≅ B(G).
We give a completely new proof of the following theorem.
The natural maps are homeomorphisms:
|_G|^op≅ ||≅ |^|
§.§ Acknowledgements
We would like to thank Peter Scholze for many illuminating conversations.
We also thank Alexander Bertoloni-Meli and Linus Hamann for explaining several things about the refined local Langlands correspondence.
We also thank David Hansen, Georgios Pappas, Jared Weinstein for helpful comments on a preliminary version of the article.
We also thank Johannes Anschütz, Anton Güthge, Ben Heuer, João Lourenço, Sug Woo Shin, Eva Viehmann, Mingjia Zhang and Xinwen Zhu for stimulating conversations.
This paper was written during stays at the Max-Planck-Institut für Mathematik and the Universität Bonn; we are thankful for the hospitality of these institutions. The first author has received funding from the DFG via the Leibniz-Preis of Peter Scholze. The second author was supported by a Heisenberg grant (grant nr. IV 177/3-1) of the DFG, based at the Universities of Bonn and Bochum. He was also supported by the Leibniz Universität Hannover.
§ NOTATION, TERMINOLOGY AND GENERALITIES
Let _q be the field of q=p^n elements.
We let denote the category of characteristic p perfectoid spaces over _q endowed with the v-topology, and we let denote the category of characteristic p perfect schemes over _q.
We will consider several topologies on , mainly the scheme theoretic v-topology, the arc-topology and the proétale topology.
We let denote the category of sets, we let denote the (2,1)-category of groupoids, we let denote the (2,1)-category of closed symmetric monoidal exact categories, and we let denote the (2,1)-category of closed symmetric monoidal categories.
Let and be the categories of small v-sheaves and small scheme-theoretic v-sheaves respectively.
There are several interesting constructions passing from one category to the other, which we will use below.
* ♢: →: given by sheafifying the rule ^♢_pre(R,R^+)=( R^+).
* : →: given by sheafifying the rule ^_pre(R,R^+)=( R).
* †: →: given by sheafifying the rule ^†_pre(R,R^+)=( R^∘).
* red:→: given by ^red( A)=((A,A)), where (A,A) is given the discrete topology.
* mer:→ given by sheafifying the rule ^mer(R,R^+)=((R^dis,R^dis,+)) where R^dis is R with its discrete topology.
We say that an affine scheme = A is a comb if for all x∈π_0() the closed subscheme attached to x is of the form V_x for V_x a valuation ring.
We say that is a strict comb if the fraction fields of all such V_x are algebraically closed.
We say that a strict comb is an extremally disconnected comb if π_0() is an extremally disconnected Hausdorff space.
If A=∏_i∈ I V_i where V_i is a valuation ring, then we say that is a product comb.
Observe that strict product combs are extremally disconnected combs.
Suppose that ∈ is a product comb = A and ϖ∈ A is a non-zero divisor. Let R^+=A_ϖ be the ϖ-adic completion of A and let R=R^+[1/ϖ]. Then R is a totally disconnected space and we call any space obtained this way a product of points. If is in addition strict, then R is strictly totally disconnected and we call it a strict product of points.
Let = A be a product comb with A=∏_i∈ IV_i and ϖ∈ A a non-zero divisor. Let R^+=A_ϖ be the ϖ-adic completion. Let ϖ_i be the image of ϖ in V_i which is also a non-zero divisor. Let K_i^+=V_i,ϖ_i be the ϖ_i-adic completion. Then, the family of projection maps R^+→ K_i^+ induces a ring isomorphism R^+=∏_i∈ I K_i^+.
Let I×ℕ be the partial order with (i_1,n_1)≤ (i_2,n_2) if i_1=i_2 and n_1≤ n_2.
We have a functor from I× to the category of rings sending (i,n) to V_i/ϖ_i^n.
The constructions of R^+ and ∏_i∈ I K_i^+ correspond to two different ways of computing the limit of this diagram.
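To spell out the two computations (this just restates the discussion above): on one hand
R^+ = lim_n A/ϖ^n A = lim_n ∏_i∈ I V_i/ϖ_i^n,
and on the other hand
∏_i∈ I K_i^+ = ∏_i∈ I lim_n V_i/ϖ_i^n.
Since products commute with inverse limits, the two agree, recovering the isomorphism R^+≅∏_i∈ I K_i^+ asserted above.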
If R is a (strict) product of points, then R is a (strict) comb.
By <Ref>, R^+=∏_i∈ I C_i^+ where C^+_i are valuation rings. Since ultraproducts of valuation rings (with algebraically closed fraction field) are again valuation rings (with algebraically closed fraction field) R^+ is a (strict) comb. Now, since Zariski localizations of (strict) combs are (strict) combs again, R is a (strict) comb.
Let G be a locally profinite group then:
[∗/G]^=[∗/G]^♢=[∗/G].
Let R∈.
Observe that G^♢=G^=G.
Since (respectively ♢) commutes with limits, it suffices to prove that the map ∗→ [∗/G]^ (respectively ∗→ [∗/G]^♢) is surjective.
This amounts to showing that if is a G-torsor for the schematic v-topology over R (respectively R^+), then there is an analytic v-cover R'→ R such that restricted to R' is trivial.
We can take R' to be a strict product of points.
Indeed, it follows from a theorem of Gabber <cit.> that every G-torsor is pro-étale locally trivial. Since R' (respectively R'^+) are extremally disconnected combs, every pro-étale cover over them splits.
We fix the following notation throughout the text.
We let E be a mixed characteristic non-Archimedean local field, we let O_E⊆ E denote the ring of integers, we let π∈ O_E denote a choice of uniformizer, we assume that _q=O_E/π, we denote by _p a fixed completed algebraic closure of E.
Let ∈. If = A for a perfect _q-algebra A, we let A denote the topological ring of O_E-Witt vectors.
More precisely, A:=(A)⊗__pO_E, where (A) denotes the p-typical Witt vectors.
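For orientation, write W_O_E(A) for the ring of O_E-Witt vectors just described. Two standard special cases (not taken from this article, but useful to keep in mind) are
W_O_E(_q)=O_E    and    W_O_E(k̄)=O_Ĕ,
where k̄ is an algebraic closure of _q and O_Ĕ is the ring of integers of the completed maximal unramified extension Ĕ of E.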
We let _ denote A.
For general , we construct _ by glueing on affine charts.
We denote by φ:_→_ the canonical lift of absolute Frobenius on .
We let Y_:= A[1/π], this is an analytic sous-perfectoid adic space (indeed, after inverting π, A A ⊗_O_E O_E[π^1/p^∞]^∧_π becomes a perfectoid cover that splits by <cit.>).
In the category of v-sheaves we have the identities _^=^♢× O_E and Y_^=^♢× E.
Similarly, let S∈.
Recall that there is a unique sous-perfectoid adic space _S (respectively Y_S) over O_E (respectively E), such that _S^ = S × O_E (respectively Y_S^ = S × E), see for example <cit.>.
If S= R we let _inf(R^+) denote (R^+) endowed with the (π,[ϖ])-adic topology.
Then _S is the locus in _inf(R^+) where [ϖ]≠ 0 for some pseudo-uniformizer ϖ∈ R^+, and Y_S is the locus in _inf(R^+) where π· [ϖ]≠ 0.
§ FAMILIES OF UNTILTED VECTOR BUNDLES
If is a small v-stack endowed with a map f:→ O_E, we can consider the ringed site (_v,^♯) whose objects are maps m: R→ where R∈ and where R^♯ is the untilt defined by the composition R→ O_E→_p <cit.>.
By <cit.>, ^♯ is a v-sheaf of rings.
We let () denote the category of vector bundles in this site (i.e. sheaves of ^♯-modules that are v-locally isomorphic to a finite direct sum of ^♯).
Recall that if R^♯ is a perfectoid space over O_E inducing a map R→ O_E, by v-descent of vector bundles <cit.>, we get an identity ( R)=Vect( R^♯).
Moreover, we have a sheaf valued in closed exact symmetric monoidal categories
:{small v-stacks/_p}→.
Recall from <cit.> that an analytic Huber pair (A,A^+) over _p is said to be v-complete if H^0(( A)_v,^♯)=A.
If (A,A^+) is sheafy and v-complete then the pullback functor Vect( A)→( A) is fully-faithful. In particular, if (A,A^+) is sous-perfectoid then Vect( A)→( A) is fully-faithful.
Since A is sheafy, by <cit.>, any object in Vect( A) is given by M for a finite projective A-module M.
Since we have internal Hom-objects it suffices to prove H^0(( A)_v,M_v)=M.
Taking a two-step pro-étale hypercover R_2 ⇉ R_1→ A we see that:
H^0(( A)_v,M_v)=eq.(M⊗_A R_1⇉ M⊗_A R_2)=M⊗_A A=M,
since by hypothesis H^0(( A)_v,^♯)=A.
The second claim follows from <cit.>.
We let Vect^v_:→ be given by the rule:
S ↦(_S^),
where the map _S^→ O_E→_p comes from the formula _S^= R× O_E → O_E.
Similarly, we define Vect^v_Y:→ by the rule:
S ↦(Y_S^).
The functors Vect^v_Y and Vect^v_ are v-sheaves.
Let S_1→ S be a v-cover with Čech nerve S_∙→ S. Then
Vect^_Y(S)=(Y_S^)=lim_[n](Y_S_n^)= lim_[n]Vect^_Y(S_n).
Here the second equality follows from the fact that Y_S_1^→ Y_S^ is a v-cover whose Čech nerve is Y_S_∙^.
Recall that by <cit.> for any perfectoid S the space _S has a basis of open neighborhoods A⊆_S that are v-complete and sheafy. By <Ref> we obtain fully faithful functors:
Vect(Y_S)→(Y_S^), Vect(_S)→(_S^).
We can define functors Vect^cl_Y,Vect^cl_:→, given by the rule:
S↦Vect(Y_S), S↦Vect(_S).
The natural maps Vect^cl_Y→Vect^_Y and Vect^cl_→Vect^_ define subsheaves.
It suffices to prove that if ∈Vect^_Y(S) and there is a cover S_1→ S such that _S_1∈Vect^cl_Y(S_1) then ∈Vect^cl_Y(S). But this follows from the proof of <cit.>.
Since Vect_Y^cl is a v-sheaf we can evaluate it in any small v-stack , which gives a fully faithful embedding:
Vect_Y^cl()↪Vect_Y^()≅(× E).
As it turns out, testing classicality can be done at geometric points.
Let be a small v-sheaf and let ∈Vect_Y^().
Suppose that for every geometric point c: C→ the object c^* lies in Vect^cl_Y( C).
Then ∈Vect_Y^cl().
Since Vect^cl_Y is a subsheaf, we may work v-locally and assume that the v-sheaf is representable by an affinoid perfectoid S:= R.
In this case _S∈(Y_S^), and we wish to show _S∈Vect(Y_S).
This follows from <Ref>, applied to = R.
Let be a small v-sheaf.
Suppose that the second projection map × E→ E is represented by a map of adic spaces Y_→ E such that Y_ is sous-perfectoid and Y_×_E_p is perfectoid.
Suppose that ∈(× E) and that for every geometric point c: C→ the pullback c^* lies in Vect(Y_C); then ∈Vect(Y_).
The category of sous-perfectoid spaces is stable under rational localization and has a well-behaved theory of vector bundles.
Since vector bundles satisfy étale descent it suffices to construct an étale cover f:U→ Y_ for which f^*∈Vect(U).
Indeed, such a cover will again be sous-perfectoid by <cit.>, so that Vect(U) is a full subcategory of (U^) and one can transfer all of the descent data.
In particular, we can find an open cover ∐_i∈ I U_i→ Y_ by affinoid analytic adic spaces.
Consider the universally open pro-étale Galois cover π:Y_:=×_p→ Y_^, with Galois group Γ:=Gal(E).
For every U_i, consider the restriction of to (U_i^) and let U_i:=π^-1(U_i).
Since U_i is by hypothesis perfectoid, π^*∈Vect(U_i).
Fix y∈U_i and take an affinoid open neighborhood y∈U_y,i⊆U_i, such that π^* is free when restricted to U_y,i.
By shrinking U_y,i and choosing an open subgroup Γ_y⊆Γ we may always assume that the action of Γ_y on U_i stabilizes U_y,i.
Let U_y,i:=U_y,i/Γ_y, in this way f_y:U_y,i→ U_i is an étale neighborhood, π_y:U_y,i→ U_y,i is a proétale Galois cover with Galois group Γ_y and the family of f_y form an étale cover of U_i.
Let K=_p^Γ_y, let B̃_y,i be the global sections of U_y,i and let B_y,i=B̃_y,i^Γ_y which are also the global sections of U_y,i.
Let n be the rank of and let _y,i:=M_n× n(B̃_y,i) which we treat as a p-adic Banach _p-algebra, by choosing a norm that induces its natural topology.
Observe that _y,i=M_n× n(B_y,i)⊗_K _p.
After fixing a basis of π_y^*f_y^*_S we may identify End(π_y^*f_y^*_S) with _y,i, and by transfer of structure the descent datum along π_y translates into a continuous semi-linear representation ρ_y,i:Γ_y→ (_y,i)^× with ρ_y,i(γ_1·γ_2)=ρ_y,i(γ_1)· [ρ(γ_2)^γ_1] as in <cit.>.
Moreover, for every geometric point c: C→ we can basechange all of our constructions to obtain a map _y,i→_y,i,c producing a semi-linear continuous representation ρ_y,i,c:Γ_y→_y,i,c that encodes the descent datum of _c:=c^* along the map U_y,i,c→ U_y,i,c (assuming of course that U_y,i,c is non-empty, meaning the projection map U_y,i^→ has non empty fibers along c).
In <cit.>, Sen attaches to any semi-linear continuous representation ρ:Γ_y→^× with values in a _p-Banach algebra an element φ(ρ)∈.
This element has the following properties:
* φ(ρ) is functorial in . More precisely, given a map of _p-Banach algebras f:_1→_2 and a representation ρ_1:Γ_y→_1, if we let ρ_2=f∘ρ_1 then φ(ρ_2)=f(φ(ρ_1)).
* φ(ρ) detects "locally isomorphic classes" of continuous semi-linear representations.
More precisely, if there exists an element x∈^× such that xφ(ρ_1)x^-1=φ(ρ_2), then there exists an open subgroup Γ'⊆Γ_y such that (ρ_1)_|Γ' is isomorphic to (ρ_2)_|Γ'.
* φ(ρ) is a topological invariant, i.e. it does not depend on the norm of , only on the topology induced by the norm.
* φ(ρ)=0 if and only if ρ is trivial when restricted to an open subgroup.
We consider φ(ρ_y,i)∈_y,i. By hypothesis φ(ρ_y,i,c)=0 for all c: C→, since _c∈Vect(Y_C) and consequently the restriction to U_y,i,c is also classical. This implies that φ(ρ_y,i)=0.
Indeed, it suffices to justify that if a∈_y,i and c(a)∈_y,i,c is equal to 0 for every geometric point c: C→ S then a=0.
But _y,i and each of the _y,i,c are uniform (even perfectoid) so it suffices to prove that |a|_x=0 for all points x∈(_y,i)=Ũ_y,i.
Now, the family of maps |(_y,i,c)|→ |(_y,i)| is surjective which proves the claim.
This implies that ρ_y,i is locally isomorphic to the trivial semi-linear representation.
In other words, π_y^*f_y^* descends to the trivial bundle over the étale neighborhood of U_y,S determined by some open subgroup Γ_y'⊆Γ_y.
Let U'_y,i=U_y,i/Γ'_y. The family of maps ∐_i∈ I,yU'_y,i→ Y_ is an étale cover over which is classical, as we needed to construct.
Let ∈ be a perfect scheme and let ⊆^♢ be an open sub-v-sheaf.
Then × E=Y_^ for a unique sous-perfectoid space Y_ and the natural functor Vect(Y_)→Vect^cl_Y() is an equivalence in .
Recall that ^♢× E is represented by Y_, which is sous-perfectoid by <cit.>.
Since × E is an open subsheaf of Y_E^♢, by <cit.> the former one is also represented by a corresponding sous-perfectoid open subspace of Y_.
Since Y_ is sous-perfectoid, we have a fully faithful embedding Vect(Y_)→(Y_^)≅Vect_Y^().
Moreover, this map is exact and reflects exactness since exactness can be tested on geometric points of both sides and |Y_|=|Y_^|.
Essential surjectivity follows from <Ref>.
We now wish to describe Vect_Y^cl() for three specific types of v-sheaves corresponding to one of the following three setups:
* The schematic setup: =(A,A) for A a perfect ring in characteristic p endowed with the discrete topology.
In this case, Vect_Y^cl() is the category of projective modules over A[1/π].
* The meromorphic setup: =(R,R^+) where R^+ is a perfect ring in characteristic p endowed with the discrete topology and R=R^+[1/ϖ] for ϖ∈ R^+ a non zero-divisor.
Typically such a setup arises from considering the Huber pair obtained from a perfectoid Huber pair by replacing the usual topology by the discrete one.
In this case, Vect_Y^cl() agrees with vector bundles over the adic space ((R^+)[1/π])_[ϖ]≠ 0.
This adic space is not quasi-compact.
* The formal setup: =(R^+,R^+) where (R,R^+) is perfectoid and R^+ is endowed with the ϖ-adic topology for some pseudo-uniformizer ϖ∈ R^+.
In this case, Vect_Y^cl() agrees with vector bundles over Y^R_(0,∞].
In his work comparing prismatic F-crystals to families of shtukas, Güthge also considered Vect_Y^cl and Vect_^cl independently <cit.>.
In contrast with our work, he only considers the schematic and formal setups, but he proves classicality results for both Vect_Y^cl and Vect_^cl.
§ DIEUDONNÉ MODULES AND ISOCRYSTALS
Let ∈ be a qcqs perfect scheme over _q.
A Dieudonné module over is a pair (,Φ_) where is a vector bundle over _ and Φ_ is an isomorphism:
Φ_:φ^*_Y_→_Y_.
An isocrystal over is a pair (,Φ_) where is a vector bundle over Y_ and Φ_ is an isomorphism:
Φ_:φ^*→.
The category we consider in <Ref> is canonically equivalent (by the evident functor) to the one considered in <cit.>.
Nevertheless, we prefer to work with the adic space Y_ because it is only in this space that one can apply the geometric reasoning used to prove <Ref>.
A morphism of Dieudonné modules (respectively of isocrystals) over is a φ-equivariant map.
A sequence
Σ:=[(_1,φ_1)→(_2,φ_2)→ (_3,φ_3)]
of maps of Dieudonné modules (respectively of isocrystals) over is exact if it is exact at the level of underlying vector bundles.
We denote by :→, respectively :→, the presheaf that attaches to any scheme the closed exact symmetric monoidal category of Dieudonné modules over , respectively isocrystals over .
A map f:(_1,φ_1)→ (_2,φ_2) of Dieudonné modules is called an isogeny if there exists a map g:(_2,φ_2)→ (_1,φ_1) and a locally constant function N:||→ such that f∘ g=π^N and g∘ f=π^N (the multiplication by π^N(s) map).
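For instance (immediate from the definition), for any Dieudonné module (,φ_) the map
f=π·Id:(,φ_)→(,φ_)
is an isogeny: one can take g=Id and N≡ 1, so that f∘ g=g∘ f=π.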
We denote by ()[1/π] the category obtained from () by formally inverting isogenies.
The natural map ()→() factors canonically through a fully faithful embedding ()[1/π]↪() and ()[1/π] inherits the structure of an exact closed symmetric monoidal category from that of ().
We denote by [1/π]:→ the presheaf with values in obtained by the rule ↦()[1/π].
and are scheme theoretic -valued arc-sheaves (in particular, v-sheaves).
Moreover, the v-sheafification of [1/π] is .
See <cit.>. To be more precise, the first claim follows from <cit.> for and from <cit.> for .
For the second claim, it suffices to show that [1/π](A)=(A) for A ranging over a basis of the v-topology.
But this holds when A is a comb.
Indeed, isocrystals over combs have a free underlying vector bundle by <cit.>.
Let Σ:=[_1→_2→_3] be a sequence in ().
Suppose that for every geometric point x̅→ the sequence Σ_x̅ is exact, then Σ is exact.
Moreover, if Σ is assumed to be a complex then it suffices to check exactness of Σ_x̅ for geometric points whose image in is closed.
Exactness can be verified on an open cover, so we may assume = A.
A sequence in () is exact if and only if its underlying vector bundle over ( A) is exact.
By <cit.> and <cit.> (see <cit.>) it is equivalent to ask that the sequence is exact over ( A).
Moreover any basis defined over A deforms to a basis over ( A), so that shrinking A we may assume all of the bundles are free.
The map A→∏_x̅→ C_x is injective, so we can check on geometric points that the sequence is a complex.
Once we know the sequence is a complex, whether it is exact or not can be checked on geometric points of ( A) whose image is closed in ( A).
But every geometric point of that form can be lifted to a geometric point of ( C) where C ranges over geometric points x̅: C→ whose image is closed in .
By hypothesis, over ( C) the sequence is exact.
To prove an analogue of <Ref> for , we first give a reinterpretation.
Let Bun_FF:→ denote the stack of vector bundles on the relative Fargues–Fontaine curve. We can reformulate it as the following Cartesian square of sheaves with values in :
Bun_FF ──→ Vect^cl_Y
  │            │ Δ
  ↓            ↓
Vect^cl_Y ──(id,Frob)──→ Vect^cl_Y × Vect^cl_Y
For all ∈, Bun_FF(^♢)≅() in .
By <Ref>, Vect^cl_Y(^♢)≅Vect(Y_) in , and by the definition of () the following diagram is Cartesian in :
() ──→ Vect(Y_)
  │         │ Δ
  ↓         ↓
Vect(Y_) ──(id,Frob)──→ Vect(Y_)×Vect(Y_)
The result of <Ref> is implicitly proved during the proof of <cit.>.
The key idea to approach the problem using Sen theory is already present in that work.
Let Σ:=[_1→_2→_3] ∈() be a sequence with constant rank rk.(_i)=r_i and r_1+r_3=r_2. The following hold:
* The sequence is exact if and only if for every geometric point x→ the sequence Σ_x̅∈(x) is exact.
* Moreover, if the sequence is already assumed to be a complex, then exactness can be checked on geometric points x→ whose image is a closed point.
The forward implication is evident.
Assume that for every geometric point of , the sequence is exact.
Since a scheme-theoretic cover '→ induces a v-cover Y_'→ Y_, we may test exactness locally.
Thus, we may assume that = R is a comb and by <cit.> that all the underlying vector bundles are free.
We write M_1∈M_r_2× r_1((R)[1/π]) and M_2∈M_r_3× r_2((R)[1/π]) for the matrices representing the maps _1→_2 and _2→_3 respectively.
The induced map _1→_3 is the 0 map if and only if the matrix M_2· M_1=0.
This can be tested on geometric points since R is perfect and in particular reduced.
Exactness can now be expressed in terms of the rank of M_1 and M_2 at the different points of (R)[1/π].
The locus where M_1 has rank strictly smaller than r_1 is a Zariski closed subset Z⊆(R) (cut out by the minors of M_1).
Moreover, since the map _1→_2 is φ-equivariant we have φ(Z)=Z. Indeed, the rank of M_1 equals the rank of φ(M_1).
Suppose Z≠∅ and let z∈ Z.
Endow R with the discrete topology and consider the projection map f:(R)[1/π]→ (R,R).
Suppose that f(z) is a d-analytic point <cit.>.
Then f^-1(f(z)) is of the form Y_T where T=(K,K^+) is a perfectoid field.
By φ-equivariance and because Z is closed we can conclude that Z contains the point at infinity in _(0,∞]= E×(K^+,K^+).
If k=O_K/K^∘∘ this shows that (k)[1/π]⊆ Z.
Since (k)[1/π] is a field, this shows that every r_1-minor in M_1 thought of as an element in R^=(R) vanishes identically when restricted to k^.
The same must be true for every point in the closure of (k)⊆(R).
In particular, we found a closed point x→(R) for which _1,x→_2,x is not injective.
This contradicts our assumption, so Z=∅.
A similar argument proves that _2→_3 is surjective and by rank considerations the sequence is also exact in the middle.
We now consider analytic versions of the category of Dieudonné modules and the category of isocrystals.
If S= R we let ^_pre(S)=( R) and we call elements of this category analytic Dieudonné modules over S.
Let S= R, then ^_pre(S)≅^(S), i.e. ^_pre is already a (-valued) v-sheaf. In particular, the equivalence is exact, reflects exactness and is compatible with the monoidal structure.
This follows from the fact that the rule S↦Vect((R)) is a v-stack <cit.>.
Exactness can be checked on maximal ideals of (R).
Moreover, the image of the map R→(R) contains every closed point, and if R'→ R is a v-cover, the image of the map R'→ R also contains every closed point.
We let [1/π]^_pre(S)=[1/π]( R) and ^_pre(S)=( R) and consider them as functors from to . We let ^ be the v-sheafification of ^_pre.
We call objects of ^(S) analytic isocrystals.
Let S= R, the following hold:
* The rule S↦[1/π]^_pre(S) is a v-separated -valued presheaf.
* The v-sheafification of [1/π]^_pre is ^.
* We have a ⊗-exact fully-faithful embedding that reflects exactness:
[1/π]^_pre(S)⊆^(S).
Given ∈[1/π]( R) we define a functor Γ_:→ given by the rule:
Γ_: R' ↦H^0((R')[1/π],)^φ=Id
where we range over maps R'→ R.
To prove the first claim, since all categories have internal Hom-objects, it suffices to prove that Γ_ is a v-sheaf.
Consider now the functor _:→ given by the rule:
_: R' ↦H^0((R'),)
By <cit.>, _ is a v-sheaf.
In particular, the filtered colimit
_ ──·π──→ _ ──·π──→ ⋯
is also a v-sheaf, which we denote by _[1/π].
Finally, we get a Cartesian diagram of presheaves:
Γ_ ──→ _[1/π]
  │        │ Δ
  ↓        ↓
_[1/π] ──(id,Φ)──→ _[1/π]×_[1/π]
and we can conclude since the limit of sheaves is a sheaf.
The second claim follows from <Ref> and the proof of <Ref>.
The third claim is almost a reinterpretation of the first two claims; it remains to prove that the functor reflects exactness.
Let Σ:=[_1→_2→_3]∈[1/π]^_pre(S) be a sequence in [1/π]( R).
Assume that Σ is exact in ^(S).
By definition, this means that Σ is exact in [1/π]( R') for a v-cover R'→ R.
Since the functor is fully-faithful, we deduce that the sequence is a complex.
By the second part of <Ref>, we can check exactness on closed points of R.
Since R'→ R is a v-cover, every closed point of R is in the image of R'→ R.
In particular, if a sequence becomes exact over [1/π]( R') then it was already exact over [1/π]( R) as we needed to show.
Let S= R.
Let _1→_2→_3 ∈^(S) be a sequence with constant rank rk.(_i)=r_i and r_1+r_3=r_2.
The sequence is exact if and only if for every geometric point x→ S the sequence _1,x→_2,x→_3,x∈^(x̅) is exact.
The forward implication is evident.
Assume that for every geometric point of S the sequence is exact.
By the definition of the exact structure on ^(S) via sheafification, we may test exactness locally.
We can find a v-cover S':= R'→ S such that each _i∈[1/π]^_pre(S').
Since the map R'→∏_x∈ R' C_x is injective, we can test on geometric points whether the sequence is a complex.
Once we know it is a complex, by <Ref> we can test exactness on closed points of R'.
But every closed point of R' supports a geometric point of R'.
§ SHTUKAS, ISOSHTUKAS AND MEROMORPHIC VECTOR BUNDLES
Let S= R∈.
A crystalline shtuka over S is a pair (,Φ_) where is a vector bundle over _S and Φ_ is an isomorphism
Φ_:(φ^*)_Y_S→_Y_S
that is meromorphic (cf. <cit.>) along π=0.
A morphism of crystalline shtukas is a φ-equivariant map. We say that a sequence of maps Σ:=[(_1,φ_1)→(_2,φ_2)→ (_3,φ_3)] is exact if it is exact at the level of underlying vector bundles over _S.
We let :→ denote the presheaf that attaches to any perfectoid space S the closed exact symmetric monoidal category of crystalline shtukas over S.
is a -valued v-sheaf.
This follows from the proof of <cit.>.
Indeed, we have a Cartesian diagram in :
(S) ──→ Vect(_S)[1/π]
  │            │ Δ
  ↓            ↓
Vect(_S) ──(id,φ^*)──→ Vect(_S)[1/π]×Vect(_S)[1/π]
Although the rule S↦Vect(_S)[1/π] is not a v-sheaf, it is a v-separated presheaf. This already implies that is a v-sheaf.
A map f:(_1,φ_1)→ (_2,φ_2) of crystalline shtukas is called an isogeny if there exists a map g:(_2,φ_2)→ (_1,φ_1) and a locally constant function N:|S|→ such that f∘ g=π^N and g∘ f=π^N.
We denote by [1/π](S) the category obtained from (S) by formally inverting isogenies.
We will call objects in this category isoshtukas.
For ∞> r_2≥ r_1 ≥ 0, let B_[r_1,r_2]: = H^0(_[r_1,r_2]^R^+,) be the global sections of an affinoid open of the relative Fargues–Fontaine curve (cf. <cit.>).
We have the Frobenius map φ: B_[0,r]→ B_[0,r/q]. Then (S) fits into the Cartesian square:
(S) ──→ Vect(B_[0,1/q][1/π])
  │               │ Δ
  ↓               ↓
Vect(B_[0,1]) ──(res^∗,φ^∗)──→ Vect(B_[0,1/q][1/π]) × Vect(B_[0,1/q][1/π]),
where res: B_[0,1]→ B_[0,1/q] is the natural restriction map, and Vect(B_[0,r]) is the category of finite projective B_[0,r]-modules.
Analogously, we get a Cartesian diagram of categories
(S)[1/π] ──→ Vect(B_[0,1/q][1/π])
  │                    │ Δ
  ↓                    ↓
( Vect(B_[0,1]))[1/π] ──(res^∗,φ^∗)──→ Vect(B_[0,1/q][1/π]) × Vect(B_[0,1/q][1/π]).
Since π is not a zero-divisor, we have a fully-faithful embedding of categories:
( Vect(B_[0,1]))[1/π]⊆ Vect(B_[0,1][1/π])
and we can endow ( Vect(B_[0,1]))[1/π] with the exact structure it inherits from Vect(B_[0,1][1/π]).
Moreover, we can endow (S)[1/π] with the exact structure that makes this diagram a Cartesian square in .
The following diagram of categories is Cartesian in :
(S) ──→ [1/π](S)
  │          │
  ↓          ↓
^(S) ──→ [1/π]^_pre(S)
The argument is a standard application of Beauville–Laszlo descent <cit.>.
We provide the details for the convenience of the reader.
Recall the Cartesian diagram
(S) ──→ Vect(B_[0,1/q][1/π])
  │               │ Δ
  ↓               ↓
Vect(B_[0,1]) ──(res^∗,φ^∗)──→ Vect(B_[0,1/q][1/π]) × Vect(B_[0,1/q][1/π]).
Replacing the role of B_[0,1] and B_[0,1/q] by their π-completions, we obtain the Cartesian diagram:
^(S) ──→ Vect((R)[1/π])
  │               │ Δ
  ↓               ↓
Vect((R)) ──(res^∗,φ^∗)──→ Vect((R)[1/π]) × Vect((R)[1/π]).
Similarly, we obtain diagrams:
(S)[1/π] ──→ Vect(B_[0,1/q][1/π])
  │                    │ Δ
  ↓                    ↓
( Vect(B_[0,1]))[1/π] ──(res^∗,φ^∗)──→ Vect(B_[0,1/q][1/π]) × Vect(B_[0,1/q][1/π]),

[1/π]^_pre(S) ──→ Vect((R)[1/π])
  │                       │ Δ
  ↓                       ↓
( Vect((R)))[1/π] ──(res^∗,φ^∗)──→ Vect((R)[1/π]) × Vect((R)[1/π]).
Moreover, these four Cartesian diagrams can be organized in a commutative square of Cartesian diagrams.
For any fixed i ∈{left,right} and j ∈{upper, lower}, their (i,j)th corners form a commutative diagram, which we denote C_i,j.
For example, C_left,upper is the diagram that we wish to prove is Cartesian.
Moreover, C_left,lower is the left square of (<ref>), where the horizontal arrows in the right square consist of fully-faithful embeddings.
Vect(B_[0,1]) ──→ Vect(B_[0,1])[1/π] ──→ Vect(B_[0,1][1/π])
   │                    │                      │
   ↓                    ↓                      ↓
Vect((R)) ──→ Vect((R))[1/π] ──→ Vect((R)[1/π])
As π∈ B_[0,1] is not a zero-divisor and the π-adic completion of B_[0,1] is 𝕎(R), the Beauville–Laszlo lemma <cit.> implies that this diagram is Cartesian. For any j, C_right,j is automatically Cartesian, as the horizontal maps in it are isomorphisms. From this it formally follows that C_left,upper is also Cartesian.
We now give a different presentation of [1/π].
Let S=(R,R^+)∈, let S^+=(R^+,R^+), let T=(R^dis,R^dis,+) where (R^dis,R^dis,+) is (R,R^+) with its discrete topology and let T^+=(R^dis,+,R^dis,+).
We have an equivalence _FF(T) ∼→[1/π](S) of categories in .
Pick ϖ∈ R^+ a pseudo-uniformizer.
We consider two different topologies on the ring (R^+).
On one hand we can endow it with the (π,[ϖ])-adic topology in which case we write _inf(R^+) for this topological ring.
We can also endow it with its π-adic topology, in this case we simply write (R^+).
Now, (_inf(R^+))= O_E × S^+, whereas (R^+)= O_E × T^+.
Moreover, we have open immersions of v-sheaves S⊆ T⊆(R^+)^♢.
By <Ref>, we have
Vect_Y^cl(T)=Vect((R^+)_π· [ϖ]≠ 0)=Vect( ((R^+)[1/π])_[ϖ]≠ 0)
We can cover ((R^+)[1/π])_[ϖ]≠ 0 by sets of the form {π≤ [ϖ^1/q^n]≠ 0| n∈} and by φ-equivariance the value of ∈_FF(T) is determined by its value on {π≤ [ϖ]≠ 0}.
More precisely, if B_[0,q^n]^disc is the ring of global sections of the locus {π≤ [ϖ^1/q^n]≠ 0}⊆ ((R^+))_[ϖ]≠ 0, then B_[0,q^n]^disc[1/π] is the ring of global sections of {π≤ [ϖ^1/q^n]≠ 0, π≠ 0} and we have the following Cartesian diagram in :
_FF(T) ──→ Vect(B^disc_[0,1/q][1/π])
  │                   │ Δ
  ↓                   ↓
Vect(B^disc_[0,1][1/π]) ──(id,Frob)──→ Vect(B^disc_[0,1/q][1/π])×Vect(B^disc_[0,1/q][1/π])
Let B_[0,q^n] denote the ring of global sections of the locus {π≤ [ϖ^1/q^n]≠ 0}⊆ (_inf(R^+))_[ϖ]≠ 0.
Then the natural map B^disc_[0,q^n]→ B_[0,q^n] is a continuous isomorphism of rings that is not a homeomorphism! <cit.>.
In particular, Vect(B^disc_[0,q^n][1/π])∼→Vect(B_[0,q^n][1/π]) in , by a theorem of Kedlaya–Liu <cit.>.
We define the stack of meromorphic vector bundles on the relative Fargues–Fontaine curve, which we denote by , as the v-stackification of [1/π], when the latter is treated as a presheaf valued in .
There is a restriction map (S)→_FF(S) which factors through the π-localization (S)[1/π]→_FF(S).
Since _FF is a v-sheaf this further extends to a ⊗-exact map (S)→_FF(S).
Analogously, we get a ⊗-exact map (S)→^(S).
* We let σ:→_FF denote the map constructed above, we call this map the special polygon map.
* We let γ:→^ denote the map constructed above, we call this map the generic polygon map.
We now study basic properties of . Let S = (R,R^+) ∈.
The map [1/π](S)→(S) is ⊗-exact fully-faithful and reflects exactness.
In other words, [1/π] is a v-separated prestack in .
Moreover, exactness in (S) can be verified on geometric points of S.
Since both categories have internal Hom-objects it suffices to prove that for ∈(S) the rule
T ↦H^0(_T,_|_T)^φ=Id[1/π]
is a v-sheaf, where T is affinoid perfectoid over S.
Since this is a filtered colimit of v-sheaves (by <Ref>) full-faithfulness follows.
We now show that it reflects exactness.
Let Σ:=[_1→_2→_3] be a sequence in [1/π](S), which becomes exact over S'= R' for a v-cover f:S'→ S.
By full-faithfulness above, we can already deduce that Σ is a complex.
Let T' and T denote (R'^dis,R'^dis,+) and (R^dis,R^dis,+).
By <Ref>, we may interpret Σ as a sequence in _FF(T) that becomes exact in _FF(T').
We can verify exactness of Σ on geometric points of T.
We warn the reader that although the map S'→ S is a v-cover the map T'→ T might no longer be surjective even at the level of topological spaces.
Nevertheless, it is surjective on the locus where ϖ is topologically nilpotent for a pseudo-uniformizer ϖ∈ R^+.
Indeed this locus agrees with S' and S respectively.
So it suffices to prove exactness of Σ on the complement of S in T.
Let U= (R,R). The complement of S in T is the closure U of U in T.
Moreover, U∖ U consists of vertical specializations of elements in U, and the same can be said of U× E and U× E.
In particular, Σ is exact over U if and only if it is exact over U.
We know that Σ is exact when restricted to (R',R').
By <Ref>, we may interpret Σ restricted to U as a sequence in ( R) that becomes exact over ( R').
By <Ref>, we can check exactness on closed points.
Fortunately, the map R'→ R covers all closed points.
Indeed, every maximal ideal of R supports a valuation that is continuous for the ϖ-adic topology.
The kernel of any lift of such valuation to R' maps to this maximal ideal.
Finally, we wish to prove that a sequence Σ:=[_1→_2→_3] is exact in (S) if and only if for every geometric point x→ S the sequence Σ_x is exact.
By definition, exactness can be verified v-locally so we may assume that S= R is a strict product of points with R^+=∏_i∈ IC_i^+ and that each _j∈[1/π] for j∈{1,2,3}.
The map R→∏_i∈ I C_i is injective, so we deduce that Σ is a complex.
We can now argue as above.
Namely, we consider Σ as a sequence in _FF((R^dis,R^dis,+)), and we show that Σ is exact on all points of (R^dis,R^dis,+).
This is clear on the locus where ϖ is topologically nilpotent by our assumption.
To verify exactness on (R^dis,R^dis) we interpret this as an object in ( R) and we may check exactness on closed points. For any closed point, the residue field map C→ R can be promoted to a geometric point C→ R and the induced sequence in ( C) is induced from the corresponding one in [1/π](C,O_C), which is exact by assumption.
All of the squares of the commutative diagram below are Cartesian in .
(S) ──→ [1/π](S) ──→ (S)
  │           │            │
  ↓           ↓            ↓
^(S) ──→ [1/π]^_pre(S) ──→ ^(S)
That the left square is Cartesian is <Ref>.
By <Ref> and <Ref>, and ^ are already v-sheaves with values in . From this and <Ref> it follows that the outer square is Cartesian by taking sheafification.
Let S= R.
We wish to show that the map
(S)[1/π]→(S)×_^(S)[1/π]^_pre(S)
is an exact equivalence that reflects exactness.
By <Ref>, the map is already fully-faithful and we must show it is essentially surjective.
Suppose we are given objects ∈(S) and ∈[1/π]^_pre(S) together with an isomorphism α:→ on ^(S).
We can lift ' to an object in ^(S) and since the outer square is Cartesian this defines an object ∈(S) inducing the triple (,',α).
The image of in [1/π](S) induces the triple (,,α) as we needed to show.
This shows that the right square is Cartesian in .
That it is even Cartesian in follows from part (3) of <Ref> and from <Ref>.
A sequence Σ:[_1→_2→_3] in (S) is exact if and only if its image in _FF(S) is exact.
Since both can be checked at the level of geometric points we may assume S= (C,O_C). In this case, B^C_[0,1] is a principal ideal domain and the closed ideals correspond to untilts of C.
The map of ringed topological spaces f:_(0,1]→ B^C_[0,1][1/π] covers every maximal ideal of the target and B^C_[0,1][1/π]→H^0(_(0,1],) is injective. Consequently f^* reflects exactness.
§ SEMI-STABLE FILTRATIONS
Given λ∈ℚ with λ=m/n and (m,n)=1 we let (λ)∈(_q) be the simple standard isocrystal of slope λ given by the pair ((_q)[1/π]^n,M) where M is the matrix operator with M· e_i=e_i+1 for 1≤ i≤ n-1 and M· e_n=π^-me_1.
We say that an isocrystal is standard if it has the form:
⊕_λ∈ℚ(λ)^m_λ
where m:ℚ→ℤ_≥ 0 is a multiplicity function with finite support.
Notice that our convention for standard isocrystals reverses the signs in comparison to most classical conventions.
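For instance, unwinding the definition in the case λ=1/2 (so m=1, n=2): the isocrystal (1/2) has an underlying free module of rank 2 with basis e_1,e_2, and the operator M is determined by
M· e_1=e_2,    M· e_2=π^-1e_1,    so that    M^2=π^-1·Id.
With the classical sign convention this object would instead be labeled by the slope -1/2, in accordance with the remark above.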
For us a Newton polygon is a function f:ℚ→ℤ_≥ 0 with f^-1(ℤ_> 0) finite. Its slopes are the values x∈ℚ with f(x)≠ 0 and the multiplicity of the slope x is f(x). We denote by the set of all Newton polygons. Then is endowed with the partial order f ≤ g if and only if ∑_x∈ℚ f(x)x = ∑_x∈ℚ g(x)x and for all x∈ℚ one has ∑_y≥ x f(y)y ≤∑_y≥ x g(y)y.
We say a Newton polygon is semi-stable if it has a single slope.
We let ^ss⊆ denote the subset of semi-stable polygons; these are the minimal elements in this set.
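As a small illustration of this partial order, consider the following example: let f be the polygon with the single slope 1/2 of multiplicity 2, and let g be the polygon with slopes 0 and 1, each of multiplicity 1. Then
∑_x∈ℚ f(x)x = 1 = ∑_x∈ℚ g(x)x,    and    ∑_y≥ x f(y)y ≤∑_y≥ x g(y)y for every x∈ℚ
(the only non-trivial case is 1/2< x≤ 1, where the left-hand side is 0 and the right-hand side is 1), so f≤ g; here f is semi-stable while g is not.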
Recall that on a geometric point, isomorphism classes of objects in _FF( C) and ^( C) are both classified by elements in .
If ∈ we define two functions γ_,σ_:|S|→ which we call the generic polygon and special polygon respectively.
Using a different language, Kedlaya proves that for any ∈ we have γ_≥σ_ <cit.>.
This is a key step in Kedlaya–Liu's proof of the semicontinuity theorem <cit.>.
Let ∈(S) with constant rank and image ∈^(S) under the map γ:(S)→^(S).
* We say that is locally standard if its Newton polygon is locally constant.
* We say that it is semi-stable if it is locally standard and each of its Newton polygons has only one slope.
* We say is generically locally standard if is locally standard, equivalently if γ_ is locally constant.
* We say is semi-stable if is semi-stable.
We denote by ()^loc and ()^ the stacks of generically locally standard meromorphic vector bundles and semi-stable meromorphic vector bundles respectively.
Let S= R.
We say that an object (,Φ)∈^(S) is anti-effective if Φ^-1:→φ^* extends to a map Ψ:→φ^* defined over (R). An object in ∈(S) is anti-effective if its image in ^(S) is anti-effective.
Let ∈(S) such that γ_ is constant of smallest slope 0, then it lifts v-locally to an anti-effective crystalline shtuka.
By <Ref> it suffices to prove that locally standard analytic isocrystals of smallest slope 0 lift v-locally to an anti-effective Dieudonné module.
Working v-locally we may assume γ()∈[1/π]^_pre(S), and since γ() is locally standard, by <cit.> we may even assume γ()≅⊕_i=1^n (λ_i)^m_i and by assumption λ_i≥ 0 for all i.
The standard models of (λ_i) already define an anti-effective crystalline shtuka, by our sign convention <Ref>.
Suppose that S= R is a product of points. Let (,Φ)∈(S) be anti-effective, then
Hom_(,)=Hom_^(,γ()).
Moreover, if f∈Hom_^(,γ()) defines a sub-isocrystal ⊆, then the corresponding lift also defines a sub-bundle ⊆ in .
Since B^R_[0,r]⊆ R the map
Hom_(,)→Hom_^(,γ()).
is injective.
To prove surjectivity we fix a basis of β:^n→ over _[0,q/N] for some N∈; this induces a basis φ^*β:^n→φ^* over _[0,1/N]. Let r=1/N.
Since (,Φ) is anti-effective we can think of (,Φ) through β and φ^*β as a matrix M∈GL_n(B^R_[0,r]) such that
M^-1∈GL_n(B^R_[0,r][1/π])∩ M_n× n( R).
A map f∈Hom_^(,γ()) can then be thought of as a vector v∈(R)[1/π]^n satisfying the equation
Mφ v=v.
On the other hand v∈Hom_(,) if and only if v∈ B_[0,s][1/π] for some s>0. Indeed, we can use φ-equivariance to extend this map along _[0,∞). Replacing v by π^N· v we may assume v∈(R)^n.
We fix a norm |·|:R→ inducing the topology of R with |ϖ|=1/q and define a function |·|_k: R→ by the formula:
∑_i=0^∞ [a_i]π^i↦sup_0≤ i≤ k |a_i|.
This definition extends to M_n× n( R) and ( R)^n by taking supremum over the entries.
By the strong triangle inequality, and because M^-1∈ M_n× n( R), we have that for every k∈ the inequality |M^-1· v|_k≤ |M^-1|_k· |v|_k holds and by inspection |φ v|_k=|v|_k^q.
From this we deduce that |v|^q-1_k≤ |M^-1|_k.
Let m_ij∈ B^R_[0,r] denote the (i,j) entry of M^-1 and write m_ij=∑_l=0^∞ [m_ijl] π^l.
The sequences m_ijl all satisfy that lim_l↦∞ |m_ijl|· (1/q)^N· l=0.
Now, <Ref> shows that lim_l↦∞ |M^-1|_l· (1/q)^N· l=0.
In particular, lim_l↦∞ |v|_l· (1/q)^N·(q-1)· l=0, which implies that v∈ (B^R_[0,1/N·(q-1)])^n as we needed to show.
The last claim can be verified at the level of geometric points. Consider the ideal I in B^C_[0,1] generated by the entries of v.
Since B^C_[0,1] is a principal ideal domain, the zero locus of I consists of finitely many closed points in B^C_[0,1].
Moreover, the zero locus is φ-equivariant so it is at worst the vanishing locus of π, but then it avoids B^C_[0,1][1/π].
Let I be a finite set and ρ a number with 0<ρ<1. For each i ∈ I, let (b_i,j)_j≥ 0 be a sequence in ℝ_≥ 0 such that lim_j↦∞b_i,j·ρ^j=0. For each j≥ 0 let B_j = max_i ∈ I, j'≤ j{b_i,j'}. Then lim_j↦∞B_j·ρ^j=0.
This reduces easily to the case I={1}. Fix ε>0. By assumption, there is some j_ε,0>0 such that for all j ≥ j_ε,0, b_j ρ^j < ε. Put
λ = max_j'<j_ε,0 b_j'ρ^j'.
Pick now j_ε big enough, such that ρ^j_ε - j_ε,0λ < ε. Then for any j≥ j_ε we have
B_j ρ^j = max_j'≤ j{b_j'ρ^j}
= max{ max_j'<j_ε,0{b_j'ρ^j'ρ^j-j'}, max_j_ε,0≤ j' ≤ j{b_j'ρ^j'ρ^j-j'}} < ε
Indeed, if j'<j_ε,0, then b_j'ρ^j'ρ^j-j'≤λρ^j-j'≤λρ^j_ε - j_ε,0 < ε (as ρ < 1 and j-j' ≥ j_ε - j_ε,0); and if j' ≥ j_ε,0, then b_j'ρ^j' < ε and ρ^j-j'≤ 1.
The maps γ:()^→(^)^ and σ:()^→(_FF)^ are ⊗-exact equivalences.
To prove that γ is an equivalence it suffices to prove that it is fully-faithful.
Indeed, essential surjectivity can then be verified locally and by <cit.> (which is a special case of <cit.>) every object of ^ is pro-étale locally isomorphic to (λ)^m which is already in (_q).
Moreover, we may instead prove full-faithfulness of the maps [1/π](S)→[1/π]^_pre(S) when restricted to the semi-stable locus, since this will pass to the sheafification.
Let (_i,Φ_i)∈[1/π](S) with i∈{1,2}.
We consider the internal Hom-object ^:=Hom(_1,_2) and :=γ(^), and consider the functors:
^:T↦Hom_mer(,^_|_T).
:T↦Hom(,_|_T).
We have an injective map of sheaves ^→.
Indeed, this can be checked on points where it follows from injectivity of B^C_[0,∞)⊆ C.
It suffices to prove ^→ is surjective.
We may assume γ__1= γ__2, since otherwise =0.
Then ^∈[1/π](S)^ and ∈[1/π](S)^ are semi-stable of slope 0.
This case follows from <Ref> and <Ref>.
Once we know that (^)^≅ ()^ the equivalence ()^≅ (_FF)^ follows from <cit.>.
Exactness of the equivalences can be checked on geometric points, but over points all categories are the category of finite modules over a central simple algebra over E.
We extend the definition of semi-stable vector bundles to flags.
For this we consider -filtered meromorphic vector bundles (respectively vector bundles, respectively analytic isocrystals).
Let S= R.
We consider sequences of the form {_r}_r∈∈(S) (respectively {_r}_r∈∈_FF(S), respectively {_r}_r∈∈^(S)) with _r⊆_s when r<s and such that _r/_<r=0 for all but finitely many r∈.
By hypothesis, there is N>>0 such that _s=_N for every s>N, we call _N the underlying vector bundle of {_r}_r∈.
We say that a -filtered meromorphic vector bundle (respectively a vector bundle, respectively analytic isocrystal) is a semi-stable filtration if _r/_<r is semi-stable of slope r. We let _ss^(S) (respectively ^σ_ss(S), respectively _ss^γ(S)) denote the categories whose objects are semi-stable filtrations and whose morphisms are maps in (S) (respectively _FF(S), respectively ^(S)) that respect the filtration.
The natural map _ss^→^σ_ss is a ⊗-exact equivalence of v-stacks.
Full-faithfulness: Let {_r}_r∈ and {_r}_r∈ be in _ss^(S),
with underlying meromorphic vector bundles and .
The internal Hom-bundle :=Hom(,) is naturally endowed with a -filtration {_r}_r∈.
Now, it is not hard to verify that {_r}_r∈ is a semi-stable filtration.
Moreover, we have an identification:
Hom__ss^({_r}_r∈,{_r}_r∈)=Hom_(,_≤ 0).
Analogously,
Hom__ss({_r}_r∈,{_r}_r∈)=Hom__FF(,_≤ 0).
Since {_r}_r∈ is semistable, one can prove inductively on the support of {_r}_r∈ that Hom_(,_≤ r)=0=Hom__FF(,_≤ r) for all r<0.
To prove full-faithfulness it suffices to show:
Hom_(,_≤ 0/_<0)≅Hom__FF(,_≤ 0/_<0)
but _≤ 0/_<0 is semi-stable of slope 0, so the result follows directly from <Ref>.
Essential surjectivity:
Let {_r}∈_ss^σ with underlying vector bundle of rank n.
If E_s is the degree s unramified extension of E then objects in _FF can be constructed by descent from objects in _FF,E_s, and by full-faithfulness a descent datum in _ss^σ agrees with descent datum in ^_ss.
This allows us to assume that the support of the filtration is contained in .
Since essential surjectivity can now be proved v-locally we may think of every bundle _r as a free module M_r over B^R_[1,q] with φ-glueing data over B^R_[1,1].
We may even assume that the graded pieces _N/_<N are isomorphic to (N)^m_N.
We may choose bases for the M_r over B^R_[1,q] compatible with the filtration and in such a way that, after transferring the Frobenius structure to ^n, the induced N-graded pieces are given by diagonal matrices of the form π^-N.
This gives a block upper-triangular matrix A ∈ M_n× n(B^R_[1,1]), with diagonal blocks of the form π^-N·Id_m_N,m_N.
To finish the argument, it suffices to show that there is a matrix A_∞∈ P(B^R_[0,1][1/π]) and a matrix U∈ P(B^R_[1,q]) with
U^-1A_∞φ(U)=A.
This follows from Lemma <ref> below.
Before proving the remaining Lemma <ref>, we need some preparations.
We have B_[1,1]^R = B_[0,1]^R[1/π] + [ϖ]B_[1,∞]^R.
Let A_1 = W(R^+)[π/[ϖ]], A_2 = W(R^+)[[ϖ]/π] and A_12 = W(R^+)[π/[ϖ],[ϖ]/π].
We have B_[1,1]^R = (A_12)_π^∧[1/π], B_[0,1]^R = (A_1)_[ϖ]^∧[1/[ϖ]] and B_[1,∞]^R = (A_2)_π^∧[1/π].
After multiplication with a big enough power of π, it suffices to show that any element of (A_12)_π^∧ can be written as a sum of an element of (A_1)^∧_[ϖ] and an element of [ϖ]/π· (A_2)_π^∧.
For any n ≥ 1, let I_n = {(i,j) ∈^2 : 0≤ i < n } and let
S_n ⊆∏_(i,j) ∈ I_n R^+
be the subset of all sequences a = (a_ij)_ij for which a_ij = 0 except for finitely many (i,j) ∈ I_n.
Let also S_n^+ ⊆ S_n (resp. S_n^- ⊆ S_n) be the subset of all sequences for which a_ij = 0 unless j≥ 0 (resp. a_ij = 0 unless j < 0). There is a commutative diagram, D_n, of sets
S_n^+ → S_n ← S_n^-
↓ ↓ ↓
A_1/[ϖ]^n A_1 → A_12/π^n A_12 ← [ϖ]/π·(A_2/π^n A_2)
(note that A_12/π^n A_12 = A_12/[ϖ]^nA_12), where the upper horizontal maps are the defining inclusions, the lower horizontal maps are induced by the natural ring maps A_1 → A_12← A_2 (and the inclusion of the ideal [ϖ]/πA_2 ⊆ A_2) and the vertical maps are given by sending (a_ij)_ij to ∑_ij [a_ij]π^i· (π/[ϖ])^j.
We make three observations, which immediately follow from the explicit definition of the vertical maps: first, the middle vertical map is surjective. Second, there is an obvious map D_n+1→ D_n of commutative diagrams and the resulting diagram is commutative. Third, when we define the map + S_n^+ × S_n^- → S_n by (a+b)_ij = a_ij if j≥ 0 and (a+b)_ij = b_ij if j<0, then the resulting diagram
S_n^+ × S_n^- →^+ S_n
↓ ↓
A_1/[ϖ]^n A_1 × [ϖ]/π·(A_2/π^n A_2) →^+ A_12/π^n A_12
is commutative.
Let now S = lim_n S_n and S^± = lim_n S_n^±. Explicitly, S ⊆∏_(i,j) ∈_≥ 0× R^+ is the subset of all sequences (a_ij)_ij satisfying the condition that for each i there is some j(i)≥ 0 such that a_ij = 0 unless |j|<j(i) and S^+ and S^- are corresponding subsets of S. Passing to the limit over all n > 0, we obtain a commutative diagram
S^+ → S ← S^-
↓ ↓ ↓
(A_1)_[ϖ]^∧ → (A_12)^∧_π ← [ϖ]/π·(A_2)^∧_π
where the middle vertical arrow is still surjective. Moreover, we also get the commutative diagram
S^+ × S^- →^+ S
↓ ↓
(A_1)_[ϖ]^∧ × [ϖ]/π·(A_2)_π^∧ →^+ (A_12)_π^∧,
where the lower horizontal map is the restriction of the addition map B_[0,1]×[ϖ]/π· B_[1,∞]→ B_[1,1] and the upper horizontal map is defined in the same way as S_n^+ × S_n^- → S_n.
Now, S^+ × S^- → S and S → (A_12)_π^∧ are surjective, and hence also the lower horizontal map in the diagram is surjective, which is precisely what we had to show.
Recall that restriction of functions defines an inclusion B_[1/q,∞]^R ⊆ B_[1,∞]^R and Frobenius induces an isomorphism φ B_[1,∞]^R ∼→ B_[1/q,∞]^R ⊆ B_[1,∞]^R.
Let k ∈_≥ 0. The image of the map
ψ_k B_[1,∞]^R → B_[1,∞]^R, a ↦π^-ka - φ(a)
contains [ϖ]B_[1,∞]^R. If k>0, it contains B_[1,∞]^R.
Let A = W(R^+)[[ϖ]/π].
Recall that B_[1,∞]^R = A^∧_π[1/π].
Thus, as ψ_k(π^nx) =π^nψ_k(x), it suffices to show that the image contains [ϖ]A^∧_π (resp. A^∧_π if k>0).
Let x ∈ [ϖ]A_π^∧ if k=0 (resp. x ∈ A^∧_π if k>0).
Note that the sequence (π^i· kφ^(i-1)(x))_i≥ 1 in A^∧_π converges π-adically to 0. (Use that φ(A_π^∧) ⊆ A_π^∧ and φ([ϖ]) = [ϖ]^q.)
Thus y = ∑_i=1^∞π^ikφ^(i-1)(x) exists in A_π^∧.
By π-adic continuity of Frobenius and hence of ψ_k, it is immediate that ψ_k(y) = x.
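For the reader's convenience, the last step is the following telescoping computation (using that φ(π)=π and that φ is π-adically continuous):
ψ_k(y) = π^-k∑_i≥ 1π^ikφ^(i-1)(x) - φ(∑_i≥ 1π^ikφ^(i-1)(x)) = ∑_i≥ 1π^(i-1)kφ^(i-1)(x) - ∑_i≥ 1π^ikφ^(i)(x) = x,
since after re-indexing the two sums agree except for the i=1 term of the first one, which equals x.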
Let n≥ 1 and let A ∈_n(B_[1,1]^R) be upper triangular with ith diagonal entry π^s_i for some s_i ∈ (1≤ i≤ n).
Assume that s_1 ≥ s_2 ≥…≥ s_n holds.
Then there exists a unipotent upper triangular matrix U ∈_n(B_[1,∞]^R) such that U^-1Aφ(U) is upper triangular with entries in B_[0,1]^R[1/π].
We argue by induction on n. If n=1, there is nothing to show. Assume n is fixed and we know the claim for all matrices of size (n-1) × (n-1).
Let a_ij denote the (i,j)th entry of A.
Exploiting the induction hypothesis for the lower right (n-1)×(n-1)-minor of A, we may assume that a_ij∈ B_[0,1]^R[1/π] for all i>1.
Let now 1< j≤ n.
Suppose, by induction, that for all 1<j'<j, one has a_1j'∈ B_[0,1][1/π].
It suffices to find, in this situation, a unipotent upper triangular matrix U ∈_n(B_[1,∞]^R) such that U^-1Aφ(U) has all the above properties of A and additionally its (1,j)th entry lies in B_[0,1]^R[1/π].
Therefore, write a_1j = a_1j^ mer + a_1j' with some a_1j^ mer∈ B_[0,1]^R[1/π] and a_1j' ∈ [ϖ]B_[1,∞]^R, according to <Ref>.
By <Ref>, there exists some y ∈ B_[1,∞]^R with ψ_s_1 - s_j(y) = a_1j' (we use s_j ≤ s_1).
Let U = (U_ℓ m)_ℓ m∈_n(B_[1,∞]^R) be such that U_ℓ m = δ_ℓ m (Kronecker delta), unless (ℓ,m)=(1,j), in which case U_1 j = y.
Then it is immediate to compute that U^-1Aφ(U) satisfies all the claimed conditions.
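To spell out this computation (a sketch; here E_1j denotes the elementary matrix with a single entry 1 in position (1,j), so that U=1+yE_1j, U^-1=1-yE_1j and E_1j^2=0):
U^-1Aφ(U) = (1-yE_1j) A (1+φ(y)E_1j) = A + (π^s_1φ(y) - π^s_jy) E_1j - y∑_m>j a_jmE_1m.
Only the entries (1,m) with m≥ j are affected, so the hypotheses made so far are preserved, and the new (1,j)-entry equals a_1j + π^s_1φ(y) - π^s_jy = a_1j^ mer + a_1j' - π^s_1ψ_s_1-s_j(y). Choosing y so that π^s_1ψ_s_1-s_j(y)=a_1j' (up to the harmless unit π^s_1 this is the choice made above; <Ref> applies equally to π^-s_1a_1j', which still lies in [ϖ]B_[1,∞]^R since π is invertible there) puts this entry in B_[0,1]^R[1/π], as required.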
The forgetful functor ^γ_→^ factors through (^)^loc and defines a ⊗-exact equivalence:
^γ_→ (^)^loc.
On points, any filtration splits since the category of isocrystals is semi-simple. In particular, the Newton polygon can be computed on the graded pieces.
By definition of semi-stable filtrations the Newton polygon is constant on the graded isocrystal.
We prove full-faithfulness. Let {_r}_r∈ and {_r}_r∈ be two semi-stable filtrations with underlying analytic isocrystals and .
Let denote the Hom-bundle endowed with its induced semi-stable filtration {_r}_r∈.
We need to show that:
Hom_^(, )=Hom_^(, _≤ 0).
But, we can prove inductively on the support of {_r}_r∈ that
Hom_^(, _≤ r/_≤ 0)=0,
for all r>0 since the graded pieces all have slope larger than 0.
Since essential surjectivity can be proved v-locally it suffices to show that the standard objects can be endowed with a semi-stable filtration, but this is clear.
The forgetful functor ^_→ factors through ()^loc and defines a ⊗-exact equivalence:
^_→ ()^loc.
That the map respects the monoidal structure and exactness is automatic, since it is defined in terms of those of .
That the map factors through ()^loc follows from <Ref>.
To show full-faithfulness we may pass again to a Hom-bundle with semi-stable filtration {_r}_r∈ as in the proof of <Ref>.
We need to show:
Hom_(, )=Hom_(, _≤ 0).
But as in the proof of <Ref>, Hom_(, _≤ r/_≤ 0)=0 for all r>0.
Essential surjectivity can now be proved v-locally.
So it suffices to show that every isoshtuka ∈ ([1/π])^loc(S) can be endowed with a semi-stable filtration.
In other words, we must show that the unique semi-stable filtration of γ() lifts to a filtration in .
Replacing E by its degree s unramified extension E_s, and since we have already proved full-faithfulness, we may assume that the generic Newton polygon only takes values in .
Twisting by a line bundle we may even assume that the smallest slope is 0.
We can now apply <Ref> and <Ref> to find a sub-bundle ^k⊆, where k is the rank of γ()_0 and such that γ()/γ(^k) has all slopes greater than 0.
By induction on the rank, /^k can be endowed with a semi-stable filtration {(/^k)_r}_r∈, and we can lift this filtration to .
§ G-BUNDLES WITH MEROMORPHIC STRUCTURE
§.§ -structure
Let be a smooth affine group scheme over O_E, and denote by G its generic fiber over E.
Later on we will assume that is parahoric and that G is reductive.
We let _, respectively _G, denote the Tannakian category of algebraic representations of over O_E, respectively of G over E.
We let :→ denote the presheaf in groupoids with
S↦Fun_ex^⊗(_,(S)),
where Fun_ex^⊗ denotes the ⊗-compatible O_E-linear exact functors.
Analogously, we let :→ denote the presheaf in groupoids with
S↦Fun_ex^⊗(_G,(S)).
Recall the loop group and positive loop group functors LG,L^+:→ given on affine schemes S= A by the formulas
LG(S):=G((A)[1/π])
and
L^+(S):=((A)).
We let LG and L^+ act on LG by φ-conjugation.
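Explicitly, and consistently with the φ-conjugation identities appearing later (such as N^-1· M_1φ(N)=M_2), the action on S-points reads
g· b = g^-1· b·φ(g), for g∈ L^+(S) (respectively g∈ LG(S)) and b∈ LG(S).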
LG and L^+ are arc-sheaves.
As both are ind-schemes and the arc-topology is subcanonical (in fact, canonical) on perfect _p-schemes by <cit.>, the claim follows.
The following statements hold:
* and are scheme theoretic small v-stacks.
* The natural maps LG → and LG → are v-covers.
* We have identities =[LG//_φ L^+] and =[LG//_φ LG].
The first claim follows by Tannakian formalism from <Ref>.
The second claim holds as v-locally on R any -torsor resp. G-torsor is free.
Indeed, this happens when R is a strict comb.
For -torsors this is easy to see since étale locally on R any -torsor is trivial, but if R is a strict comb (even if it is only a w-contractible affine scheme <cit.>) any étale cover of R has a section.
Now, G-torsors are free on combs by <cit.>[The running assumption on loc. cit. is that G is reductive, but the proof of Theorem 11.4 does not use this hypothesis.] (see <cit.> for the vector bundle case).
The third claim follows directly from the second one by computing the fiber products LG×_LG and LG×_LG.
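Concretely, the fiber product computation alluded to here is the standard one for a quotient groupoid; a sketch, with the φ-conjugation action normalized as above:
LG×_[LG//_φ L^+] LG ≅ LG× L^+, (b,g) ↦ (b, g^-1· b·φ(g)),
the two projections to LG being (b,g)↦ b and (b,g)↦ g^-1 bφ(g); the same computation with L^+ replaced by LG identifies LG×_[LG//_φ LG] LG with LG× LG.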
We define the following 4 presheaves over with values in groupoids:
* _ with: S↦Fun_ex^⊗(_,(S)).
* _G with: S↦Fun_ex^⊗(_,^(S)).
* ^_G with: S↦Fun_ex^⊗(_,(S)).
* _ with: S↦Fun_ex^⊗(_,^(S)).
The following statements hold:
* _, _, _G and ^_G are small v-stacks.
* We have a Cartesian diagram:
_r d ^_G d
_r _G
* We have identifications
_=()^=[LG^//_φ L^+^]
and
_G=()^=[LG^//_φ LG^].
* The maps _→_G and _→^_G are v-covers.
Since the functor Fun_ex^⊗(_,-) commutes with 2-limits, and since , ^, ^ and are all v-stacks, all of the presheaves of <Ref> are v-sheaves.
For the same reason, the second claim follows directly from <Ref>.
Furthermore, Fun_ex^⊗(_,-) commutes with sheafification which implies directly _=()^ and _G=^.
Since the functor (-)^ commutes with finite limits it suffices to prove that the maps LG^→_ and LG^→_G are surjective to deduce the formulas from the third assertion.
Let ∈_G(S), the argument for _ being analogous.
Surjectivity can be shown v-locally so we may assume S= R is a strict product of points and that for all objects V∈_ the object (V)∈^(S) is isomorphic to one in [1/π]^_pre.
We obtain a ⊗-exact functor from _ to the category of projective (R)[1/π]-modules, which we interpret as a G-torsor over (R)[1/π].
By <cit.> such torsors are trivial over combs, and by <Ref> R is a comb.
After choosing a trivialization of , the φ-structure corresponds to an element LG( R) which gives precisely a point LG^(S) lifting our original point.
The final claim follows by base change from the second and third claims.
§.§ Newton strata on _G
We now wish to study the geometry of _G and _G^. Recall the Kottwitz set B(G), which classifies isocrystals with G-structure over algebraically closed fields. The Newton point defines a map B(G) →(G), where the Newton cone (G) of G is a partially ordered set (for G = _n, (G) = with from Section <ref>). In particular, B(G) inherits the partial order from . For more details on B(G) see, for example, <cit.>.
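As a point of reference (a standard recollection, not the notation of Section <ref>): for G=GL_n the Kottwitz set and its order admit the explicit description
B(GL_n) ↪ {(λ_1 ≥λ_2 ≥…≥λ_n) ∈ℚ^n}, b≤ b' ⟺ ∑_i≤ kλ_i ≤∑_i≤ kλ'_i for all k, with equality for k=n,
i.e. the dominance order on Newton polygons with a common endpoint.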
Let = A∈, and let b∈ B(G). We let _≤ b()⊆() denote the full subcategory of objects ∈() whose Newton polygon is bounded by b at geometric points of .
We let _b()⊆_≤ b() denote the full subcategory of objects ∈_≤ b() whose Newton polygon is exactly b at geometric points of .
The following theorem due to work of various authors summarizes what we will need about the geometry of .
For any b∈ B(G) the map _≤ b→ is a perfectly finitely presented closed immersion. Moreover, _b=[∗/J_b(_p)] as scheme-theoretic v-stacks.
The first statement follows from <cit.>. The last statement follows from <cit.>.
The elements of |_G| are in bijection with B(G).
Points in |_G| are in bijection with equivalence classes of C-valued points of _G, where two such points are identified if they agree after passing to a common v-cover.
After replacing C by a v-cover, every such point comes from ( C), whose isomorphism classes are given by B(G).
It follows a posteriori from <Ref>, that for C an algebraically closed non-Archimedean field the natural map is an equivalence of categories ( C)≅_G( C).
Let S= R ∈. We let _G^≤ b(S)⊆_G(S) denote the full subcategory of objects ∈_G(S) whose Newton polygon is bounded by b at geometric points of S.
We let _G^b(S)⊆_G^≤ b(S) denote the full subcategory of objects ∈_G^≤ b() whose Newton polygon is exactly b at geometric points of S.
For any b∈ B(G) the map _G^≤ b→_G is a closed immersion and agrees with _≤ b^. The map _G^b→_G^≤ b is an open immersion. Moreover, _G^b=_b^=[∗/J_b(_p)] as v-stacks.
Since preserves open and closed immersions, it suffices to identify _G^≤ b and _G^b with _≤ b^ and _b^ respectively.
Let S= R; by definition _G^≤ b(S) is the subcategory of objects ∈_G(S) whose Newton polygon is bounded by b at every geometric point of S.
Whereas, _≤ b^_pre(S) correspond to isocrystals over R whose polygon is bounded by b at every geometric point of R.
To prove _≤ b^=_G^≤ b it suffices to show that, v-locally, the two conditions of having Newton polygon bounded by b (on R and on R, respectively) agree.
Of course, the schematic condition is stronger than the analytic one, since on the analytic side a condition is imposed only on those ideals of R that support a continuous valuation.
Now, over product of points the two conditions agree.
Indeed, principal connected components of a product of points support a continuous valuation.
Moreover, these components are dense in R.
A similar argument shows _b^=_G^b. Indeed, if S is a product of points all of the maximal ideals of R support a continuous valuation and the map _b→_≤ b is open.
The last claim follows directly from <Ref>.
§.§ Newton strata on _G^
Recall the moduli stack of Fargues–Scholze <cit.>. The connected components of are indexed by b∈ B(G) and the maps _b→_G are the smooth charts.
The v-stack is the moduli stack given by the formula
: S↦Fun^⊗_ex(Rep_G,^σ_(S)).
It follows directly from the definition.
The moduli stack fits in the following Cartesian diagram of small v-stacks:
r d _G^dγ
∐_b∈B(G) ^b_G r _G
While this article was in preparation we learned from a private communication with Z. Wu that he had proven independently a version of <Ref> in the language of relative Robba rings.
Observe that we have the following identification:
∐_b∈ B(G)^b_G(S) =Fun^⊗_ex(Rep_G,(^)^loc(S)).
Since Fun^⊗_ex(Rep_G,-) commutes with limits, it suffices to show that ^σ_(S) fits in the following Cartesian diagram:
^σ_(S) r d (S) d
(^)^loc(S) r ^(S)
By definition, (_FF^)^loc fits as the upper-left entry of the above Cartesian diagram. But by <Ref> and <Ref>
(_FF^)^loc(S)≅_ss^≅^σ_(S)
Let S= R and let ∈_FF(S). The following hold:
* After replacing S by a v-cover, can be lifted to (S).
* After replacing S by a v-cover, can be lifted to (S).
* The map of small v-stacks _G^→_G is surjective.
* The map of small v-stacks _^→_G is surjective.
The first and second claims are particular instances of the third and fourth claim in the case where G=GL_n.
For the third claim, the map →_G is formally and ℓ-cohomologically smooth and surjects onto its image.
In particular, it is a surjection of small v-stacks.
The result follows since this map factors through _G^→_G.
The fourth claim follows from <Ref> and from the third claim.
Given two subsets U_1,U_2⊆ B(G), we let ^σ∈ U_2 _γ∈ U_1 denote γ^-1(_G^U_1)∩σ^-1(_G^U_2).
Whenever U_i=B(G), we omit the subscript or superscript as an abbreviation.
We will mostly use <Ref> when U_1 or U_2 are given by Newton polygon inequalities. In this case, we use more intuitive notation for example ^σ=b means σ^-1(_G^b) and _γ=b=γ^-1(_G^b)=_b.
§ EXTENDING VECTOR BUNDLES AT ∞
Let C be a non-Archimedean algebraically closed field.
One interesting consequence of the classification theorem of vector bundles on the Fargues–Fontaine curve is that every such vector bundle extends at ∞ i.e. it is isomorphic to one obtained from a φ-module over Y^C_(0,∞].
The purpose of this section is to prove that this statement holds in families when one is allowed to work v-locally.
Let S=(R,R^+)∈ and T=(R,R^∘).
* We let ^+_FF:→ denote the presheaf given by the rule that attaches to S the category of pairs (,Φ) where is a vector bundle over Y^R^∘_(0,∞] and Φ:φ^*→ is an isomorphism.
* We say that ∈_FF(S) extends at ∞ if it is in the essential image of the map ^+_FF(S)→_FF(T)≅_FF(S).
* We denote by ^†_pre:→ the presheaf given by the rule:
(R,R^+)↦( R^∘)
* We say that ∈(S) is a BKF-shtuka if it is in the essential image of the map ^†_pre(S)→(T)≅(S).
* We denote by [1/π]^†_pre:→ the presheaf given by the rule:
(R,R^+)↦[1/π]( R^∘)
* We denote by ^†_pre:→ the presheaf given by the rule:
(R,R^+)↦( R^∘)
Let S= R∈. The following hold:
* The map ^+_FF(S)→_FF(S) is exact and fully-faithful.
* If S is a product of points then we have the following sequence of Cartesian diagrams in :
^†_pre(S) r d [1/π]^†_pre(S) r d ^+_FF(S) d
(S) r [1/π](S) r _FF(S)
* If S is a product of points then [1/π]^†_pre(S)≅^†_pre(S).
* The sheafification of [1/π]^†_pre is ^†.
The first claim is <cit.>. For the second claim, note that by Kedlaya's GAGA <cit.> we can identify the category (S) ×__FF(S)^+_FF(S) with the category of vector bundles over (R^∘)∖ ({π =0}∩{[ϖ] = 0}) together with φ-action defined over (R^∘)[1/π]. But as S is a product of points, by <cit.> (or <cit.>) and <cit.>, any such vector bundle extends uniquely to a vector bundle over (R^∘). This proves that the outer diagram is Cartesian. Moreover, the same argument also applies to the isogeny categories, proving that the right square is Cartesian. It then follows that the left square is Cartesian.
For the third claim, write S = (R,R^+). We need to show that any isocrystal over (R^∘)[1/π] contains a (R^∘)-lattice. But as S is a product of points, Proposition <ref> and <cit.> imply that is free as a (R^∘)[1/π]-module. But then an (R^∘)-lattice obviously exists.
The fourth claim follows from the third.
We warn the reader that the maps ^†_pre(S) →(S) and ^+_FF(S) →_FF(S) do not reflect exactness.
The advantage of working with ^†_pre is that its values on product of points are easy to describe.
Let S= R be a product of points with R^+=R^∘=∏_i∈ IO_C_i, then the restriction functor
^†_pre(S)→∏_i∈ I( O_C_i)
is fully-faithful, and its essential image is the collection of families of {(_i,Φ_i)}_i∈ I with uniformly bounded zeros and poles on π.
The fully-faithful functor is induced by the isomorphism (∏ O_C_i) = ∏(O_C_i). The pole (resp. zero) at each i ∈ I of any object in the essential image is bounded by the pole (resp. zero) of its preimage. Conversely, if we have a uniform bound, then the Frobenius is represented by a matrix with entries in (R^∘)[1/π] = (∏(O_C_i))[1/π] ⊆∏ ((O_C_i)[1/π]), whose inverse also has entries in this subring.
Moreover, at the level of geometric points it is also easy to describe: this is the π=ξ version of Fargues' theorem <cit.>.
Let C be a non-Archimedean field, then the following categories are equivalent:
* BKF-modules with ξ=π. In other words, the category of pairs (M,Φ) where M is a free (O_C)-module and Φ:M[1/π]→(O_C) _φ⊗_(O_C) M [1/π] is an isomorphism.
* ^†_pre(C,C^+)
* (C,C^+).
By definition ^†_pre(C,C^+)=(O_C), which is precisely a BKF-module with ξ=π, so the first two categories are the same category.
The equivalence with the third category is given in <cit.> when ξ≠π.
The same proof strategy applies.
In <Ref> we will extend <Ref> to the case of product of points.
Let S= R∈. The following hold:
* Given ∈_FF(S) there is a v-cover S'→ S and a unique (up to isomorphism) ∈^+_FF(S) with ≅ in _FF(S').
* Given ∈(S) there is a v-cover S'→ S and a unique (up to isomorphism) ∈^†_pre(S') with ≅ in (S').
* The map _n^♢→_n is a v-cover.
We reduce the first and second claims to the third as follows.
Let ∈_FF(S) be of rank n. Since _n^♢→_n is surjective, there is a cover (R')=S'→ S and a map ∈_n^♢(S').
Refining S' further, we may assume that is given by an object ∈_n( R'^+), which we may think of as a vector bundle over _E×(R'^+)^♢ with φ-action defined over E×(R'^+)^♢.
We can consider the inclusion of v-sheaves
(R',R'^∘)⊆(R'^∘,R'^∘)⊆(R'^∘)^♢⊆(R'^+)^♢
where the pair (R'^∘,R'^∘) is given the ϖ-adic topology for some pseudo-uniformizer.
The map _n^♢→_n is then obtained by restricting to the open locus (R',R'^∘)⊆(R'^∘,R'^∘).
The first claim then follows from <Ref> and the identity (Y_(0,∞]^R'^∘)^=(R'^∘,R'^∘)× E.
The second claim follows from the first claim, from the second part of <Ref> and from the fact that product of points are basis for the v-topology.
We move on to prove the third claim.
Let T⊆GL_n be the diagonal torus.
Let B(T)_sr denote the set of strongly regular elements.
This set classifies isomorphism classes of sums of n line bundles all of which have different slope.
Observe that the map ∐_b∈ B(T)_sr_b →_n is surjective.
We will construct a perfect scheme Y_b together with a map f_b:Y_b→_n in such a way that Y_b^♢ contains an open subsheaf S_b⊆ Y_b^♢ with the property that the map S_b→^_n factors through ^∘_b and surjects onto it.
Recall that _n^b≅ [∗/T(E)]≅_T^b, whenever b is strongly regular.
Moreover, _T^b=[∗/T(O_E)] and by <Ref> we get a closed immersion _T^b→^b_n.
We let ^T,b_n=_T^b×__n_n⊆_n.
We have an identification ^T,b_n≅ [_b/T(O_E)].
Indeed, this follows from <Ref> and the following sequence of Cartesian diagrams:
^T,b_n r d _n^b r d _b r d d _n^
_T^b r _n^b r _n^br _n
For every point x∈ |^T,b_n| we can find a non-Archimedean field C_x, an open bounded valuation ring C^+_x⊆ C_x and x̃∈_n(C^+_x) inducing x.
More precisely, x is the underlying point obtained from the composition of maps:
(C_x,C^+_x)⊆ (C^+_x)^♢→_n^♢→_n.
The product ∏x̃ produces a map ∏x̃:Y_b=∏C^+_x→^≤ b_≤μ where μ is the only cocharacter of T with b∈ B(T,μ).
In particular, it produces maps:
Y_b^♢→ (^≤ b_≤μ)^♢→^≤ b_n→_γ≤ b⊆_n^.
We let S_b⊆ Y_b^♢ be the locus that factors through _b^∘=^σ≠ b_γ=b⊆_≤ b.
By <Ref>, S_b is a product of points and in particular qcqs.
Moreover, by construction the map S_b→_n factors through ^b_n. Also, on principal components S_b→^b_n factors through _n^T,b and since _n^T,b⊆^b_n is a closed immersion all of S_b factors through ^T,b_n.
Recall from <cit.> that _b^∘ is a spatial diamond, this implies that the map S_b→ [_b^∘/T(O_E)] is qcqs. But by construction |S_b|→ |[_b^∘/T(O_E)]| is surjective so this map is a v-cover.
With notation as in the proof of <Ref>, the map ^b_T→_n^b is a closed immersion.
We have maps ^b_T→_n^b→_n^b≅ [∗/T(E)].
It suffices to prove this is a closed immersion after basechange by the v-cover ∗→ [∗/T(E)].
The resulting map is the inclusion of affine Grassmannians Gr_T→Gr_GL_n.
We let the notation be as in the proof of <Ref>.
That is Y_b=∏C^+_x, where the C_x are algebraically closed non-Archimedean fields and C_x^+⊆ C_x are open and bounded valuation subrings.
We are given a map Y_b→_n^≤ b, which induces a map Y_b^♢→_γ≤ b→_n^.
We let S_b⊆ Y^♢_b be the preimage of _b^∘ in Y^♢_b.
Then there exists a family of pseudo-uniformizers f_x∈ C^+_x defining an element f∈∏ C^+_x such that S_b= R where R^+=∏ C^+_x endowed with the f-adic topology and R= ∏ C^+_x[1/f].
Recall that by <Ref>, _b⊆_n^ is γ^-1(_n^b). Moreover,
_b^∘ =γ^-1(_n^b)∩σ^-1(_n^< b).
For all b'<b in B(GL_n) we get a perfectly finitely presented closed immersion Z_b'⊆ Y_b with open complement U_b'⊆ Y_b by <Ref>.
By finite presentation, and since all of C^+_x are valuation rings, there is an element f_b'∈∏ C_x^+ such that Z_b' is the perfection of (∏ C_x^+/f_b') and U_b'= (∏ C_x^+)[1/f_b'].
We get Cartesian diagrams:
Z_b'^r d Y_b^d
(_n^≤b')^r (_n^≤b)^
Moreover, if we let f=∏_b'<bf_b', then U_b:=Y_b×__n^≤ b_n^b can be obtained as (∏ C_x^+)[1/f], and we get a Cartesian diagram:
Y_b^♢∩U_b^r d U_b^r d _n^bd
Y_b^♢r Y_b^r _n^≤b
On the other hand, we claim that the locus in Y_b^♢ that factors through _n^≤ b' is the locus where f_b' is topologically nilpotent.
Indeed, since _n^≤ b⊆_n is open and both are partially proper it suffices to verify this on rank 1 points.
We take a map x:(C,O_C)→ Y_b^♢, which we can always promote to a map (O_C,O_C)→ Y_b^♢, and if k is the residue field of O_C we get a map k→ Y_b^♢; we denote the induced point by sp(x).
By construction, the composition Z_b'^♢⊆ Y_b^♢→_n^≤ b factors through _n^≤ b', and the locus where f_b' is topologically nilpotent coincides with those points for which sp(x)∈ Z_b'^♢.
On the other hand, for any map (O_C,O_C)→_n such that k→_n factors through ^≤ b'_n the whole map factors through ^≤ b'_n.
Ranging over b'<b we see that the locus in Y_b^♢ that factors through ∪_b'<b_n^≤ b' is the locus where at least one of the f_b' is topologically nilpotent.
Since all of the f_b'∈ C_x^+, this is equivalent to the locus where f is topologically nilpotent.
In this way, the locus in Y^♢_b that factors through _b^∘ is the locus where f is both topologically nilpotent and invertible.
The description of S_b now follows.
§ MEROMORPHIC BANACH–COLMEZ SPACES
Recall that given a small v-stack S and an object ∈_FF(S) we can construct a Banach–Colmez space ():/S→ by the formula:
[f:T→ S] ↦Hom__FF(T)(,f^*)
The map ()→ S is partially proper and representable in locally spatial diamonds.
Let ∈(S), and let ∈(S).
* We define the meromorphic Banach–Colmez space of , that we denote by ^():/S→, as given by the formula:
[f:T→ S] ↦Hom_(T)(,f^*).
* We can treat as an object in (S) and write ^(). Then, we can consider the canonical lattice, that we denote by ^sht()⊆^(), as given by the formula:
[f:T→ S] ↦Hom_(T)(,f^*).
Whenever ∈(S), to ease the notation, we denote by () what strictly speaking should be written as (σ()).
Let S be a small v-stack, let ∈(S), and let ∈(S). The following hold:
* The map ^()→ S is representable in diamonds.
* The map ^sht()→ S is proper, representable in spatial diamonds and quasi-pro-étale.
* The map ^sht()→() is a closed immersion.
Recall that ^()⊆() and ^sht()⊆(); this implies that ^() and ^sht() are separated over S.
By pro-étale descent, we may assume that S is a strictly totally disconnected perfectoid space.
In this case, () and () are locally spatial diamonds and by <cit.> any subsheaf of them is again a diamond. This proves the first claim, and by <cit.> together with <cit.> we may work v-locally in S to prove the second claim.
Thus, we may assume that S=(R,R^+) is a strict product of points and that ∈^†_pre(S).
After choosing a basis for , we get a matrix M_∈GL_n((R^∘)[1/π]) and we obtain the following Cartesian diagram for any T∈/S.
^sht()[T] dr (_T)^n dΔ
(_T)^n r(id,M^T_·φ) (_T)^n ×(_T)^n
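Unwinding the diagram (a sketch): on T-points, and writing M_ also for its pullback to T, the canonical lattice is the fixed-point subsheaf
^sht()(T) = { v∈ (_T)^n : M_·φ(v) = v },
i.e. the equalizer of the identity and M_·φ acting on (_T)^n.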
The functor T↦(_T)^n is isomorphic to an infinite dimensional compact unit ball of radius 1, which is a spatial diamond proper over S.
In particular, ^sht() is a spatial diamond proper over S, and the map ^sht()→() is a closed immersion since it is injective and proper.
Finally, to prove that the map is quasi-pro-étale we may by <cit.> assume that S=(C,O_C) is a geometric point.
In this case ^() is a closed subsheaf of Hom_^(,)≅O_E^r where r is the number of summands of in when we treat as an analytic isocrystal (C,O_C).
This is clearly quasi-pro-étale over S.
Let S= R be a strict product of points, then the map ^†_pre(S)→(S) is an equivalence in .
By the first and second parts of <Ref>, the map ^†_pre(S)→(S) is fully-faithful, and we wish to show essential surjectivity.
Write R^+=∏_i∈ IC^+_i, let ∈(S), let S_i= C_i and let _i denote the restriction of to S_i.
By <Ref>, over points we have an equivalence ^†_pre(S_i)≅(S_i), we let ∈^†_pre(S)=∏_i∈ I_i.
Let be the sheaf of isomorphisms between and .
We may regard as closed subsheaf of ^(Hom(,)⊕Hom(,)).
In particular, is a spatial diamond and the map → S is proper and quasi-pro-étale.
Moreover, the map _i→ S_i has sections by the definition of , which implies that π_0()→π_0(S) is surjective since principal components are dense and both spaces are compact Hausdorff.
This says that → S is a pro-étale cover, and since S is extremally disconnected this map has a section.
This proves that ≅.
Combining <Ref> with <Ref> we get the following concrete description of (S).
Let R^∘=∏_i∈ IO_C_i for C_i a family of non-Archimedean fields, and let ∈(S) be of constant rank n.
Then there is a family of matrices M_i∈GL_n((O_C_i)[1/π]) and a number N∈ with M_i and M_i^-1 in 1/π^N· M_n× n((O_C_i)), such that is isomorphic to ((R^∘)^n,M_) where M_=∏_i∈ IM_i.
If S= R is a strict product of points, then the map _^+(S)→_(S) is an equivalence in .
It suffices to show essential surjectivity.
Let ∈_(S), let T⊆GL_n be the diagonal torus and take b∈ B(GL_n) in the image of B(T).
Since _b→_n is formally smooth, by <cit.>, lifts to ∈_γ =b(S).
As in the proof of <Ref> we may interpret the map _b→ [∗/T(_p)] as a map to _T^b, and the map [∗/T(_p)]→ [∗/T(_p)] as DM^b_T→_T.
In particular, étale locally we may lift and since S splits all étale covers we can choose an object ∈(S) lifting the original one.
By <Ref> we may further find an object ∈^†_pre(S).
This defines an object in _^+(S) lifting .
§.§ On the diagonal of _G^
Unfortunately, Δ__G^ is not representable in locally spatial diamonds.
As a consequence, _G^ cannot be, by definition, an Artin v-stack.
In this subsection we provide an indication on how to prove that this diagonal map fails to be representable in locally spatial diamonds.
The material in this subsection is irrelevant for the rest of the article and the reader can safely ignore it.
Let T={1/n | n>0}∪{0} be the space of convergent sequences, and consider S=T× C.
This is a strictly totally disconnected perfectoid space whose ring of global sections R is the ring of continuous functions f:T→ C, which we may think of as convergent sequences. Fix a pseudo-uniformizer ϖ∈ C and let r∈ R denote the element with r(1/n)=ϖ^n and r(0)=0.
We consider ∈(S) given by the matrix
M_:=( [ [r]/π 1; 1/π 0 ])∈ M_2× 2((R^∘)[1/π]).
In this case, ^() is not a locally spatial diamond as we discuss below.
Let the notation be as in <Ref>. Then ^() is a diamond that is not a locally spatial diamond.
Suppose that ^() is a locally spatial diamond. Since ^()⊆(), ^() is quasiseparated and if U⊆^() is a quasicompact open subset then U is a spatial diamond.
Fix U quasicompact containing the zero section 0_T∈ U.
The map U→() is quasicompact; this implies that it is a point-wise subsheaf, i.e. if f: R→() is a map such that each of its geometric points factors through U, then f factors through U.
Since ^()=⋃_n 1/π^n· U, it follows that ^()⊆() is also a point-wise subsheaf.
This contradicts the next paragraph.
We give some indication for why ^()⊆() is not a point-wise subsheaf.
The claim is that there are functions a,b∈ H^0(Y^S_(0,∞],) such that
( [ [r]/π 1; 1/π 0 ]) ( [ φ(a); φ(b) ]) = ( [ a; b ]),
such that a(0)=b(0), and such that for all n∈ the function a(1/n) lies in (O_C)[1/π] and has a pole of order ⌊log(n) ⌋.
In particular, this gives a map S→() that point by point lies in ^(), but does not factor through ^().
The elements a and b are roughly constructed as follows.
The meromorphic bundle is obtained by base change from a meromorphic bundle _t living over _qt, via the map t↦ r.
One finds algebraic expressions in terms of t to construct elements s_a(t),s_b(t)∈(_q t) with Teichmüller expansion s_a(t)=[s_0(t)]+π[s_1(t)]+⋯+π^i[s_i(t)]+⋯, satisfying
( [ [t]/π 1; 1/π 0 ]) ( [ φ(s_a(t)); φ(s_b(t)) ]) = ( [ s_a(t); s_b(t) ]).
For example, s_0(t)=-t^1/q^2-q.
Then a is constructed as the sum a=Σ_k=1^∞ a_k where a_k is the function on Y^S_(0,∞] with a_k(1/k)=π^-⌊log(k)⌋· s_a(ϖ^k) and 0 in every other connected component of T.
After a long computation one can show that the limit of the partial sums Σ_k=1^n a_k exists in H^0(Y^S_(0,∞],).
We explain explicitly the case Δ_^_GL_3. Let the notation be as in <Ref>, and consider the meromorphic vector bundle over S given by =⊕. The group of meromorphic automorphisms Aut_() arises as the basechange of the diagonal Δ_^_GL_3 by the map S→ (_GL_3)^2 given by (,).
Moreover, ^()⊆Aut_() is a closed immersion corresponding to the unipotent radical of the Levi defined by the direct sum decomposition ⊕. In particular, if Aut_() was a locally spatial diamond then ^() would also be, which contradicts <Ref>.
§ THREE COMPARISON THEOREMS
§.§ The meromorphic comparison
The following statement shows that extending at ∞ also holds for G-bundles.
Let G be a reductive group, a parahoric model and S∈ of the form S=(R,R^+). The following statements hold:
* We have fully-faithful embedding of groupoids ^+_G(S)→_G(S).
* Given ∈_G(S) there is a v-cover S'→ S and a unique up to isomorphism ∈^+_G(S') with ≅ in _G(S').
* The v-sheafification of ^+_G is _G.
* If S is a strict product of points, we have a Cartesian diagram of groupoids:
_^†_pre(S) r d _(S) d
^+_G(S)r _G(S).
* If S is a strict product of points, then _^†_pre(S) →_(S) and ^+_G(S)→_G(S) are equivalences.
The first claim follows from <Ref>.
Indeed, we can identify ^+_G(S) with the category Fun_ex^⊗(_G,^+_FF(S)).
The second claim follows from the first part of <Ref>.
Indeed, we regard ∈_G(S) as an object in Fun_ex^⊗(_G,_FF(S)).
We know that for each V∈_G there is a v-cover S_V→ S and a unique (up to isomorphism) object _V∈^+_FF(S_V) lifting _V.
Taking the limit of v-covers S'=_V∈_G S_V→ S we may promote to an object ∈Fun^⊗(_G,^+_FF(S')).
Now, since we assumed that G is reductive, the category _G is semi-simple.
Moreover, since the map ^+_FF(S')→_FF(S') is fully-faithful, it reflects split-exact sequences.
Since is ⊗-exact, must also be ⊗-exact.
The third claim follows directly from the first and second claims.
The fourth claim follows by applying the same arguments (GAGA and extending -torsors) as Proposition <ref>(2).
For the fifth claim, it suffices to prove that ∈_(S) is in the essential image of _^†_pre(S), and by the fourth claim, it suffices to show that the induced object ∈_G(S) lifts to an object in ^+_G(S), but for every V∈_G the corresponding bundle _V∈_(S) lifts uniquely to an object ^+_V∈^+_(S).
By the argument given in the proof of the second claim the functor
V↦^+_V
is exact and defines a lift ^+∈_G^+(S).
The following statements hold:
* We have an isomorphism of small v-stacks _^†≅_.
* We have an isomorphism of small v-stacks ^†≅_G^.
* The maps _^♢→_ and ^♢→_G^ are v-surjective.
This result can be regarded as a version of Fargues' theorem <cit.> in families. Recall that Fargues' theorem states that the category of shtukas over (C,O_C) is equivalent to the category of BKF-modules over (O_C).
Although this statement is not true for general families, the theorem above shows that the statement is v-locally true.
Indeed, _(R,R^+) parametrizes -shtukas over (R,R^+) while _^† is the sheafification of the functor attaching to (R,R^+) a BKF-module over (R^∘) with -structure.
<Ref> shows that _^†_pre(S) →_(S) is fully-faithful and v-locally surjective; this proves the first claim.
For the second claim, consider the fully-faithful map
Fun_ex^⊗(_G,^†_pre[1/π])(S)→Fun_ex^⊗(_G,[1/π])(S).
From the second part of <Ref> and the second claim of <Ref> above, this map is v-locally essentially surjective.
In particular, after sheafification the map above becomes an isomorphism of sheaves of groupoids.
The left hand side identifies with ^† while the right hand side is _G^.
For the third claim, it suffices to prove _^♢→_ is surjective since _→_G^ is surjective and the map _^♢→_G^ factors through ^♢.
By the identity _≅^†_, it suffices to prove that ∈^†_pre_(S) lifts to an object in ^♢_pre_(S') for some v-cover S'→ S.
We can reduce this to the case where S= R is a strict product of points with R^+=∏_i∈ IC_i^+, and is given by a matrix M∈((∏_i∈ I O_C_i)[1/π]).
Any φ-conjugation by a matrix N∈((∏_i∈ I O_C_i))=∏_i∈ I((O_C_i)) defines an isomorphic object in ^†_pre_(S).
This allows us to reduce to the case where the set I is a singleton, and we must show that M is φ-conjugate to a matrix M'∈((C^+)).
We may do this at the level of residue rings k=O_C/C^∘∘ and k^+=C^+/C^∘∘ where it follows from the ind-properness of affine Deligne–Lusztig varieties.
§.§ The schematic comparison
Let G be a reductive group and be a parahoric model.
* The natural map (_G)^ is an isomorphism of scheme-theoretic v-sheaves valued in groupoids.
* The natural map (_)^ is an isomorphism of scheme-theoretic v-sheaves valued in groupoids.
Let X∈, for the first claim we write:
(X) ≅Fun_ex^⊗(_G,(X))
≅Fun_ex^⊗(_G,_(X^♢))
≅_G(X^♢)
≅ (_G)^(X).
Here, the second isomorphism is <Ref>.
For the second claim, since (_)^ and _ are v-sheaves (the latter by Proposition <ref>(1)) it suffices to prove (X)≅_(X^♢) when X= A is a comb.
In this case, (X) is equivalent to the category where the objects are elements M_∈((A)[1/π]), and morphisms between M__1 and M__2 are elements N∈((A)) with N^-1· M__1φ(N)=M__2.
On the other hand by <Ref>.(5), an isomorphism between M__1 and M__2 in _(X^♢) corresponds to a functorial choice of elements N_R∈((R^∘)) with N_R^-1· M__1φ(N_R)=M__2 ranging over maps R→ X^♢, with R a product of points.
Since H^0(X^♢,^∘)=A, such a collection of N_R comes uniquely from an element N∈((A)), which shows that (X)→_(X^♢) is fully faithful.
To prove essential surjectivity, fix ∈_(X^♢); this induces elements _∈_G(X^♢) and _∈(X), unique up to isomorphism.
Objects in _(X) lifting _ correspond to sections of ×_ X→ X, whereas objects in _(X^♢) lifting _ correspond to sections of _×__G X^♢→ X^♢.
The result follows from <Ref> below.
Let X∈ and X→ be a map, then
(_×__GX^♢)^=×_ X.
The argument given in <cit.> works in this generality.
§.§ The topological comparison
Recall that by results of Rapoport–Richartz <cit.> and He <cit.>,
||≅ B(G).
Here the latter is given the topology induced by the partial order defined by Kottwitz.
Alternatively, by the results of Viehmann <cit.> we also have
|_G|^op≅ B(G).
where |_G|^ is the topological space where a subset is open in |_G|^ if and only if it is closed in |_G|.
Combining these two references we obtain that
||≅ |_G|^op
In this section we give a direct and new proof of the identity (<ref>).
As a consequence we prove that the identities (<ref>) and (<ref>) are equivalent statements.
We set some notation.
Let b_1,b_2∈ B(G).
* We say that b_1 ≼_ b_2 if b_1∈{b_2} in .
* We say that b_1 ≼__G b_2 if b_1∈{b_2} in _G.
* We say that b_1 ≼_^op_G b_2 if b_2∈{b_1} in _G.
Moreover, we write b_1 _ b_2, b_1 __G b_2 or b_1 _^_G b_2 whenever b_2 covers b_1 in the respective order.
Let U⊆ B(G). For b∈ B(G) we let U_≤ b:=U∩ B(G)_≤ b.
* U is closed in if and only if U_≤ b is closed in for all b∈ B(G).
* U is closed in _G if and only if U_≤ b is closed in _G for all b∈ B(G).
* U is open in _G if and only if U_≤ b is open in _G for all b∈ B(G).
* |_G|^ is a topological space.
* The topology on , _G and _G is determined by their closure partial orders: ≼_, ≼__G, and ≼__G^.
We prove the first claim, the second and third claim being analogous.
The forward implication is evident since _≤ b⊆ is a closed immersion.
For any map f: R→, there are a finite number of elements b^f_i∈ B(G) such that f factors through ⋃_i=1^n _≤ b^f_i.
By assumption U∩⋃_i=1^n _≤ b^f_i is closed in .
Since f factors through the set above, the base change of U along f defines a closed immersion.
The fourth claim follows from the third.
Indeed, the only part that needs justification is that arbitrary union of open subsets in |_G|^ is open.
This is equivalent to the preservation of open subsets of |_G| under arbitrary intersections.
But arbitrary intersections can be expressed as finite intersections when we restrict them to _G^≤ b.
The last claim follows from the first three.
Indeed, , _G and _G have the strong topology along the inclusion maps from ∐_b∈ B(G)_≤ b, ∐_b∈ B(G)_G^≤ b and ∐_b∈ B(G)_G^≤ b. Moreover, since these latter ones are finite topological spaces they are determined by their closure relations.
Now, because _≤ b⊆ and _G^≤ b⊆_G are closed immersions and _G^≤ b⊆_G is an open immersion we know that:
* b_1 ≼_ b_2 ⟹ b_1 ≤_B(G) b_2
* b_1 ≼__G b_2 ⟹ b_1 ≤_B(G) b_2
* b_1 ≼__G^ b_2 ⟹ b_1 ≤_B(G) b_2
Let the notation be as above. The partial orders ≼_, ≼__G, ≼__G^ agree. In particular,
|| ≅ |_G| ≅|_G|^.
For the rest of the proof we fix b_1,b_2∈ B(G) with b_1≤_B(G) b_2.
We first prove ||≅ |_G|. Recall that preserves closed immersion, consequently:
b_1≼__G b_2 ⟹ b_1≼_ b_2.
Now, suppose that b_1_ b_2.
We claim that there is a perfect rank 1 valuation ring V and a map V→ such that the induced maps on k_V (the residue field) and K_V (the fraction field) factor through _b_1 and _b_2 respectively.
Indeed, we may find a map f: R→ with the property that for all x∈ R the induced map k_x→ factors through either _b_1 or _b_2 and with the property that f^-1(_b_2)∩ f^-1(_b_1)≠∅.
We may replace R by a v-cover, so we may assume that R=∏_i∈ I V_i is a product of valuation rings.
Since the inclusion _≤ b_1→ is perfectly finitely presented there is r∈ R such that R/(r)^perf⊆ R is f^-1(_b_1).
We may write R=R_1× R_2 where R_1=∏_{i∈ I| r_i=0}V_i and R_2=∏_{i∈ I| r_i≠ 0}V_i and replace R by R_2.
Let K_V_i denote the fraction field of V_i.
Now, ∏_i∈ I K_V_i⊆ R is a pro-open subset lying in f^-1(_b_2).
Since f^-1(_b_1) is non-empty, there is a connected component x∈β I with associated valuation ring V_x such that the image of r in V_x, which we denote r_x, is not identically 0, but is also not a unit.
The largest prime ideal contained in ⟨ r_x⟩ and the smallest prime ideal containing ⟨ r_x⟩ define a rank 1 valuation ring with the desired properties.
The map V→ induces a map (V,V)→^♢→_G such that the corresponding map on (k_V,k_V) and (K_V,K_V) factor through _G^b_1 and _G^b_2 respectively.
This implies that (K_V,V)→_G factors through _G^b_2, but (K_V,V)⊆(V,V) is dense.
This proves:
b_1≼_ b_2 ⟹ b_1≼__G b_2.
In the same fashion, the map V→ induces a map (V,V)→^♢→_G that restricted to (k_V,k_V) and (K_V,K_V) factors through _G^b_1 and _G^b_2 respectively.
Let π∈ V be a pseudo-uniformizer, let V̂_π be the π-adic completion of V and let K=V̂_π[1/π]; then (K,V̂_π) is a perfectoid field.
Also, (V̂_π,V̂_π) has two points, one corresponding to (K,V̂_π) and one corresponding to (k_V,k_V).
By <Ref>, the map (V̂_π,V̂_π)→_G corresponds to a ⊗-exact functor from _G to the category of φ-equivariant objects in Vect(Y_(0,∞]^K).
Using <cit.>, we conclude that the map (V̂_π,V̂_π)→_G factors through _G^b_1 as (k_V,k_V)→_G does.
Moreover, (V̂_π,V̂_π)⊆(V,V) is an open subsheaf whose v-sheaf theoretic closure is (V,V).
This allows us to conclude:
b_1≼_ b_2 ⟹ b_1≼_^_G b_2.
Finally, suppose that b_1_^_G b_2.
Using these assumptions we may find a map R→_G with the property that for all x∈ R the induced map C_x→_G factors through either _G^b_1 or _G^b_2 and with the property that f^-1(_G^b_1)∩ f^-1(_G^b_2)≠∅.
Replacing R by a v-cover we may assume that it is a product of points, with R^+=∏_i∈ IC^+_i.
By shrinking R and ignoring some factors if necessary we may assume that the principal components of R all factor through _G^b_1 without changing the condition that f^-1(_G^b_1)∩ f^-1(_G^b_2)≠∅.
This forces at least one non-principal component to factor through _G^b_2.
Moreover, we may assume C^+_i=O_C_i for all i so that R^+=R^∘.
By <Ref> we may assume that our map R→_G is induced from a map R^+→.
Let k_i denote the residue field of O_C_i.
By assumption, the map (C_i,O_C_i)→_G factors through _G^b_1.
In particular, k_i→ factors through _b_1.
Which implies that ∏_i∈ I k_i→ also factors through _b_1.
Indeed, it certainly factors through _≤ b_1, and the locus where it factors through _b' with b'<b_1 is finitely presented and contains no principal component of ∏_i∈ I k_i which implies that it is empty.
We see that the closed point of every connected component of R^+ factors through _b_1.
Furthermore, there is at least one point x∈ R^+ mapping to _b_2.
The connected component containing x defines a valuation ring V_x and a map V_x→ such that the closed point factors through _b_1 and at least one point of V_x factors through _b_2. This allows us to conclude that:
b_1≼_^_G b_2 ⟹ b_1≼_ b_2.
§ INTRODUCTION
The severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) and the associated COVID-19 pandemic have created a substantial and unforeseen burden on the global healthcare system <cit.>. With a global mortality of over 6.8 million (as of Feb 2023), there is considerable focus on therapeutic solutions for patients with most severe manifestations of the disease. In many cases, scarce treatment options need to be considered and evaluated to support patients' lives. In particular, World Health Organization recommends extracorporeal membrane oxygenation (ECMO) for patients who are refractory to conventional therapies, a support modality only available in expert centres with sufficient experience <cit.>. As such treatments are technically complex and resource-intensive with difficulty in predicting outcomes, proper treatment evaluation and assignment for ECMO has been the subject of significant debate since the start of the pandemic <cit.>.
Recent reports point to the vitalness of ECMO support over 14,000 reported COVID-19 patients with an overall hospital mortality of approximately 47% <cit.>. In comparison, nearly 90% who couldn't find a spot at an ECMO center died, and these patients were young and previously healthy, with a median age of 40 <cit.>. In addition to the vitalness, demand for ECMO far exceeded its availability leading to numerous patients waiting for ECMO support. To date, patient triage and ECMO resource allocation have been limited to the use of Intensive Care Unit (ICU) illness markers and markers of severe medically refractory respiratory failure, neither of which has been validated to predict patients who would ultimately benefit from this resource-intensive, high-risk therapy <cit.>.
These gaps in knowledge highlight the need to develop clinically applicable predictive models to assist clinicians in identifying patients most likely to benefit from ECMO support and evaluating the treatment effect thus aiding in patient triage and the necessary resource allocation <cit.>.
From the perspective of treatment effect analysis, treatment assignment indicates whether a patient received the ECMO treatment, “factual outcomes” corresponds to patient’s discharge status (i.e. whether the patient survived/died), and patient’s features (or coviatates) are the electronic health records (EHR) before ECMO initiation. To support ECMO treatment decisions, we need to make two-side estimations. First, we need to model the probability of getting treatment for each patient (propensity scoring), to reflect the underlying treatment assignment policy <cit.> as well as the associated risk consideration, as ECMO itself can lead patients to death. Second, we need to estimate the impact of each treatment decision, calculated by the survival/death difference with and without treatment.
Developing an ECMO decision-assistant model differs from typical supervised machine learning problems or standard treatment effect problems in healthcare. Compared with a supervised clinical problem <cit.>, the entire vector of treatment effects can never be obtained, but only the factual outcomes aligned with the individualized treatment assignments. Compared with a typical treatment effect problem <cit.>, it faces the challenges of strong selection bias, scarcity of treatment cases and curse of dimensionality.
First, unlike randomized controlled trials or many common datasets, ECMO prediction is prone to strong selection bias. ECMO is only applied to high-risk patients with severe symptoms, when few life-supporting alternatives are available. As a result, the features characterizing severity are significantly different from those among non-ECMO-treated patients (referred hereafter as control patients). For treatment effect models that apply a supervised learning framework for each treatment option separately, the learned models would not generalize well to the entire population. Second, the technical complexity and resource intensiveness of such treatment limit the number of ECMO assignments, resulting in a much smaller cohort size when compared to control patients. In fact, a retrospective study shows that fewer than 0.7% of critically-illed COVID-19 patients received ECMO treatment<cit.>. Moreover, the EHR dataset contains hundreds types of measurements/lab tests, and only a small subset of these measurements/tests would be conducted to each patient. Due to the limited cohort size in ECMO patients, it is challenging for most machine learning models to overcome the curse of dimensionality without falling into over-fitting. Instead of directly capturing the relationship between the prediction tasks and these partially-observed high-dimensional input features, a lower-dimensional representation of inputs is desired <cit.>.
In this paper, we tackle these challenges and propose Treatment Variational AutoEncoder (TVAE), a novel approach that uses a disentangled and balanced latent representation to infer a subject’s potential (factual and counterfactual) outcomes and treatment assignment. It leverages the recent advances in representation learning, and extends the capability of deep generative models in the following aspects:
* Treatment Joint Inference: The lower-dimensional latent representation of TVAE is semi-supervised by treatment assignment and factual response. This architecture eliminates the need of auxiliary networks for prediction, and regulates the counterfactual prediction.
* Distribution Balancing: To generate an accurate latent representation, the selection bias is delicately disentangled from other latent dimensions to facilitate treatment assignment prediction and maximize the information sharing between two groups.
* Label Balancing: Utilizing the generative function of TVAE, we create fake ECMO cases from the posterior distribution of real ECMO cases, hence addressing the data imbalance and over-fitting without perturbing the patient distribution.
Our proposed TVAE outperforms state-of-the-art treatment effect models in predicting ECMO treatment assignment as well as factual responses (with or without ECMO), validated by two large real-world COVID-19 datasets consisting of an international dataset including 118,801 Intensive Care Units (ICU) patients from 1651 hospitals and a institutional dataset including 6,016 ICU patients from 15 hospitals. It also achieves the best performance in estimating individual treatment effects using the public IHDP dataset. While this work is motivated by and evaluated in the context of ECMO treatment, the proposed approach may be generalized for other treatment estimation tasks facing selection bias, label imbalance, and curse of dimensionality.
§ RELATED WORK
§.§ ECMO Treatment Prediction
To date, there exist a substantial gap between existing studies and ECMO treatment analysis. Most studies are limited to ECMO mortality scoring systems <cit.> that rely on identifying pre-ECMO variables, which are available and validated only from patients already supported on ECMO. As such, none of these scores have utilized an appropriate matched non-ECMO cohort, rendering them incapable of identifying the patients who should receive ECMO support <cit.>. A recent study uses gradient boosting trees to predict ECMO treatment assignment <cit.>, but it is not designed to predict treatment effect in terms of factual and counterfactual outcomes, which is an important contribution of this work.
§.§ Individual Treatment Effect
To predict each individual's treatment assignment and treatment effects, the most intuitive approach is to build separate single-task predictors for propensity (probability of receiving ECMO treatment), treatment outcome and control outcome (i.e., survival or death with/without ECMO treatment), respectively <cit.>. Such models are generalized as meta-learners in recent literature <cit.>. These models are prone to the strong selection bias and label imbalance in ECMO treatment assignment, as each treatment-outcome predictor is only exposed to a specific patient group, but not the whole population.
Another popular approach reported in the literature is by adapting non-parametric models for individual level treatment effect. The simplest version is the k nearest neighbors model, and more advanced models are adapted from tree-based ensemble models. Causal Forest has been developed from random forest to obtain a consistent estimator with semi-parametric asymptotic convergence rate <cit.>. Bayesian Additive Regression Trees (BART)-based methods have been proposed to build the trees with the regularization prior and the backfitting Markov chain Monte Carlo (MCMC) algorithm <cit.>. Compared to neural network-based solutions, it is hard for such models to build and regularize the patients representations, hence the curse of dimensionality in characterizing scarce treatment group remains a challenge.
§.§.§ Representation Learning
Our proposed TVAE is related to earlier works using deep representation learning for treatment effect estimation. To build a balanced representation between treatment and control groups, various strategies are adopted to force information sharing between the groups. An popular strategy is to remove the selection bias from the representations of treatment and control groups, hence the representations of both groups are similar. Such representation can be from shared layers (such as SNet <cit.>, DCN-PD <cit.>, Dragonnet <cit.>, TARNET <cit.>, BNN <cit.>) or separate networks (e.g., TNet <cit.>). In contrast, TVAE acknowledges the strong selection bias in ECMO data hence does not force the representations of two groups to be the same. Instead, TVAE disentangles the representation of patients into different aspects (or latent dimensions), and extracts the biased aspect to a designated latent dimension. By doing so, the remaining latent dimensions are naturally "balanced" as both groups share the similar information in these aspects. Considering the rareness of ECMO treatment events, TVAE further avoids the potential over-fitting by maximizing the cross-group similarities in these remaining latent dimensions.
§.§.§ Deep Generative Models
Another direction to estimate treatment effects is through deep generative models. CEVAE <cit.> and IntactVAE <cit.> adapt the Variational Autoencoder (VAE) to transform the representation of all patients into a common latent space, and build auxiliary networks to predict factual and counterfactual outcomes. GANITE, on the other hand, learns the counterfactual distributions instead of conditional expected values <cit.>.
As a VAE-based framework, TVAE possesses the salient properties of the abovementioned works, but in a different fashion. TVAE has a re-organized latent space that encodes patients' characteristics by explicitly expressing the predicted distributions of treatment assignment as well as the factual and counterfactual outcomes. This eliminates the extra complexity of adding auxiliary networks and their potentially insufficient training (as each auxiliary network for treatment outcome is only trained on a subset of the data). Such a re-organized latent space further regularizes the counterfactual outcomes through the clustering effect as well as input reconstruction. Moreover, TVAE leverages its generative nature to tackle the data imbalance in ECMO prediction: it augments ECMO cases by upsampling from their latent distributions, hence achieving label balance in training iterations.
§ PROBLEM FORMULATION
Throughout this paper, we adopt Rubin’s potential outcomes model <cit.> and consider the population of COVID-19 subjects where each subject i is associated with a p-dimensional feature X_i ∈ X ⊆ℝ^p, a binary treatment assignment indicator W_i ∈{0,1}, and two potential outcomes Y_i(1), Y_i(0)∈{0,1} drawn from a Bernoulli distribution (Y_i(1), Y_i(0))|X_i ∼ P(.|X_i). For an observational dataset D comprising n independent samples of the tuple {X_i, W_i, Y_i(W_i)}, where Y_i(W_i) and Y_i(1-W_i) are the factual and the counterfactual outcomes, respectively, we are interested in the probability of treatment assignment (propensity score) p(X_i) = P(W_i = 1|X_i), the potential outcome with treatment 𝔼 [Y_i(1) |X_i ] and the potential outcome without treatment 𝔼 [Y_i(0) |X_i ]. As the treatment outcomes are binary (survival or death), the treatment is "impactful" only if it leads to a change from death to survival. In a more general setting, a proxy of treatment impact is commonly used, namely the reduction in mortality risk (the individualized treatment effect, or ITE) T(X_i) = 𝔼 [Y_i(1) - Y_i(0) |X_i ] <cit.>.
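To make these quantities concrete, the following minimal sketch (with illustrative numbers and hypothetical model outputs mu1_hat and mu0_hat standing in for predicted potential outcomes) shows how an ITE estimate is formed once a model produces the two potential-outcome predictions:

```python
import numpy as np

# Hypothetical predicted mortality probabilities for three patients:
# with ECMO (mu1_hat ~ E[Y(1)|X_i]) and without ECMO (mu0_hat ~ E[Y(0)|X_i]).
mu1_hat = np.array([0.35, 0.60, 0.10])
mu0_hat = np.array([0.65, 0.70, 0.05])

# Individualized treatment effect estimate: change in predicted risk.
# Negative values mean ECMO is predicted to lower the death probability.
ite_hat = mu1_hat - mu0_hat
print(ite_hat)  # [-0.30 -0.10  0.05]
```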
As the counterfactual outcome, Y_i(1-W_i), can never be observed in practice, direct test-set evaluation of the treatment effect is impossible. Existing counterfactual estimation methods usually make the following important assumptions:
[No Unmeasured Confounding]
Given X, the outcome variables Y_0 and Y_1 are independent of the treatment assignment, i.e., (Y_0, Y_1) ⊥ W | X.
[Positivity]
For any covariates X_i, the probability to receive/not receive treatment is positive, i.e., 0 < P(W = w|X = X_i) < 1, ∀ w and i.
The first assumption comes from the fact that every candidate patient is continuously measured in various aspects that might be relevant to treatment assignment and potential treatment outcomes, and clinicians rely on these measures to make reasonable treatment decisions. As all such measurements/tests are captured in the EHR dataset, we believe the dataset includes all confounding variables. The second assumption holds because only potential treatment candidates are included in this study, and no patient is required to receive ECMO treatment in a real clinical scenario.
More discussions on two assumptions are attached in Appendix A.
§ TVAE
Our proposed TVAE framework builds upon a VAE architecture to transform the high-dimensional inputs into a lower-dimensional latent representation. A VAE jointly trains a decoder network (parameterized with θ) with an encoder (parameterized with ϕ) to recover the original inputs X from the latent encoding Z while regularizing the learned latent space to be close to the prior distribution. For a vanilla VAE, the loss function of training the encoder-decoder network can be written as:
l_VAE(ϕ,θ) = ∑_X_i ∈𝒳 -𝔼_Z_i∼ q_ϕ(Z_i|X_i) [ log p_θ (X_i|Z_i)]
+ KL(q_ϕ(Z|X) || p(Z))
where q_ϕ(Z|X) and p_θ(X|Z) are the learned approximations of the posterior and likelihood distributions, and p(Z) is the prior assumption. This loss function consists of two parts: a reconstruction term and a Kullback–Leibler (KL) divergence regularizer. The former loss maximizes the recovery of the inputs, hence the latent encoding must be a truthful representation of patients. The latter helps learn an approximation to the true underlying characteristics of the patient data and produce a compact, smooth and meaningful latent space, which can make the learned latent representations easier to use for downstream tasks such as clustering and data generation <cit.>.
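For illustration, a minimal sketch of this loss is given below, assuming a diagonal Gaussian posterior q_ϕ(Z|X) (the encoder returns its mean and log-variance) and a Bernoulli decoder; the function and variable names are placeholders rather than the exact TVAE implementation:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon_logits, mu, logvar):
    """Sketch of the VAE objective: reconstruction term + KL regularizer.

    Assumes the encoder outputs the mean and log-variance of a diagonal
    Gaussian posterior q_phi(Z|X) and the decoder outputs Bernoulli logits.
    """
    # -E_q[log p_theta(X|Z)], approximated with a single Monte Carlo sample.
    recon = F.binary_cross_entropy_with_logits(x_recon_logits, x, reduction="sum")
    # KL(q_phi(Z|X) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```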
In our proposed TVAE, we design a customized encoder, Treatment Joint Inference (Section <ref>), to simultaneously make inferences on both treatment assignment decision and treatment effect (by predicting both factual and counterfactual outcomes).
We further propose two other schemes for TVAE, Distribution Balancing (Section <ref>) and Label Balancing (Section <ref>), to tackle the challenges of selection bias in treatment effect estimation and label imbalance in ECMO assignment prediction, respectively.
Figure 1 shows an overview of our proposed TVAE framework.
§.§ Treatment Joint Inference
Unlike the previous ITE approaches <cit.>, we aim to simultaneously make inferences on treatment assignment and treatment effect (by predicting both factual and counterfactual outcomes) based on a compact neural structure without any auxiliary predictor network. In TVAE, this is achieved by assigning specific latent dimensions in the latent representation Z as the estimate of the treatment assignment W, and the observed treatment outcome Y(1) or control outcome Y(0).
Intuitively, regardless of the actual assignment (Assumption 1), both the factual and counterfactual treatment outcomes are true reflections of the patient's physical status, and therefore can be naturally considered as part of the patient representation Z.
Given the latent representation produced by the variational encoder, Z=(Z^1, Z^2, ..., Z^d)∈ℝ^d, let the first three dimensions, Z^1, Z^2 and Z^3, be encoded to estimate the treatment assignment (i.e., Z^1=Ŵ), treatment outcome (i.e., Z^2=Ŷ(1)), and control outcome (i.e., Z^3=Ŷ(0)), respectively. Then the estimated assignment Ŵ=Z^1 and the factual outcome Ŷ(W)=Z^3-W (where W∈{0,1}) are supervised by minimizing the following loss:
l_TI(ϕ,θ) = ∑_X_i ∈𝒳 -𝔼_Z_i^1, Z_i^3-W∼ q_ϕ[
log p(W_i,Y_i(W_i)|Z_i^1, Z_i^3-W)]
where W_i and Y(W_i) are the true treatment assignment and factual outcome.
For binary treatment outcomes, such as survival/death for ECMO data, cross entropy is used to implement Eq. (<ref>).
For continuous treatment outcomes, such as the semi-simulated IHDP dataset (Section 5.4), mean square error is used.
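A minimal sketch of this joint-inference loss for binary outcomes is shown below; it assumes the first three latent dimensions are treated as logits for Ŵ, Ŷ(1) and Ŷ(0), which is one possible parameterization rather than the exact one used in TVAE:

```python
import torch
import torch.nn.functional as F

def treatment_joint_inference_loss(z, w, y_factual):
    """Sketch of the joint-inference loss for binary outcomes.

    z          : (batch, d) latent sample, with z[:, 0] = W_hat,
                 z[:, 1] = Y_hat(1), z[:, 2] = Y_hat(0) (treated as logits here)
    w          : (batch,) observed treatment assignment in {0, 1}
    y_factual  : (batch,) observed factual outcome in {0, 1}
    """
    w = w.long()
    # Pick the factual-outcome dimension: index 1 if treated, index 2 if control
    # (this is Z^{3-W} in the 0-based indexing of the latent vector).
    idx = 2 - w
    y_hat = z[torch.arange(z.size(0)), idx]
    w_hat = z[:, 0]
    loss_w = F.binary_cross_entropy_with_logits(w_hat, w.float())
    loss_y = F.binary_cross_entropy_with_logits(y_hat, y_factual.float())
    return loss_w + loss_y
```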
The joint inference encoding allows us to make use of the label information (factual outcomes and treatment assignment) to optimize the encoder.
A similar idea of incorporating supervised information into a VAE can be found in prior work such as the conditional VAE (CVAE) <cit.> and CEVAE <cit.>.
However, they all rely on an auxiliary network for label prediction. Moreover, since in TVAE the label inference is part of the latent representation, it is also directly regularized by the unsupervised data reconstruction.
Due to the absence of the counterfactual ground truth Y(1-W), it is impossible to directly learn the encoded dimension Z^3-(1-W) in a supervised fashion as in Eq. (<ref>) above. However, with our TVAE,
the estimate of counterfactual outcomes can be optimized through a semi-supervised process by jointly optimizing l_VAE(ϕ,θ) in Eq. (<ref>) and l_TI(ϕ,θ) in Eq. (<ref>).
On one hand, the learning of Ŷ(1-W) is regularized by the patient reconstruction process in l_VAE(ϕ,θ). To minimize reconstruction error, Z^3-(1-W) must be a truthful representation of X.
On the other hand, the semi-supervised setting of TVAE helps the latent encoding "share" outcome information across similar patients.
More concretely, similar patients are clustered close to each other in a well-learned smooth and compact latent space of the VAE, so the counterfactual outcome of a patient (e.g., the treatment outcome of a control patient) can be inferred using the factual outcomes of similar patients with a different treatment assignment in the latent neighborhood.
§.§ Distribution Balancing
§.§.§ Disentangled Latent Representation
Previous studies maximize the distribution similarity in control and treatment groups, therefore selection bias is removed and the representation of the scarce treatment group can be regularized <cit.> by the control group. However,
instead of removing the selection bias from patients' representations, we argue that it should be utilized to enhance propensity scoring. This can be achieved by disentanglement <cit.> together with the Treatment Joint Inference. By enforcing disentanglement in the latent space while jointly encoding the predicted outcomes (W, Y(W)) in the designated latent dimensions, the selection bias flows into Z^1 for treatment assignment prediction, and naturally the remaining latent dimensions are balanced between the treatment and control groups. To show this, consider the disentanglement through minimizing the Total Correlation (TC) in the d-dimensional latent space <cit.>,
where the TC of the set of variables, Z^1:d={Z^1, Z^2, ..., Z^d}, is defined as the KL divergence between the joint distribution and the product of the marginals, i.e.,
TC(Z^1:d|X) := KL(q_ϕ(Z|X) || ∏_j=1^d q_ϕ(Z^j|X)).
Given Z^1 = Ŵ, conditioned on the data X, the total correlation of Z^1:d equals the sum of the total correlation of Z^2:d and the mutual information between the first dimension Ŵ and all the other dimensions Z^2:d, i.e.,
TC(Z^1:d|X) = TC(Z^2:d|X)
+ I(Ŵ|X;Z^2:d|X)
where I(A;B) is the mutual information between variable A and B.
The proof can be found in Appendix C. As the Joint Encoding forces Z^1 to approximate W, clearly TC is minimized when I(Ŵ;Z^2:d|X) is minimized. Alternatively, we can see this from the fact that all terms on both sides are nonnegative, hence the minimization is reached only if the mutual information between Z^1 and other latent dimensions is 0.
Hence, the remaining latent dimensions neither contain any treatment assignment information, nor are they affected by the selection bias. Due to the curse of dimensionality, the latent representation of the treatment group learned by a deep encoder easily overfits and becomes poorly generalizable. Note, however, that we can use the learned representation in the control group to guide the learned representation in the treatment group. The idea is as follows: when a latent dimension only preserves non-confounding information, its distribution is irrelevant to the treatment assignment. For example, as gender is not considered in treatment assignment, the latent encoding for gender should be distributed indifferently in the treatment and no-treatment groups. This can be expressed as
For any dimension Z^j in the latent representation Z:=[Z^1,Z^2,...,Z^d], Dist(Z^j(0))=Dist(Z^j(1)) if and only if Z^j ⊥ (Y(0),Y(1),W)|X, where Dist(.) denotes the distribution.
§.§.§ Distribution Matching
This proposition provides the ground to further regulate the remaining latent dimensions by minimizing the distribution difference.
An intuitive way is to calculate the maximum mean discrepancy (MMD) for the distance on the space of probability measures, as it has an unbiased U-statistic estimator, which can be used
in conjunction with gradient descent-based methods <cit.>. Considering the complexity of potential distributions, we apply the kernel trick so that the MMD is zero if and only if the distributions are identical in the projected Hilbert space. Denote q_ϕ(Z^j|X(0)) by P^j_- and q_ϕ(Z^j|X(1)) by P^j_+, the kernelized MMD metric can be expressed as:
MMD(q_ϕ(Z^j|X(0)), q_ϕ(Z^j|X(1))) = MMD(P^j_-, P^j_+) = ‖∫_Z k_Z(z,·) dP^j_-(z) - ∫_Z k_Z(z,·) dP^j_+(z) ‖_ℋ_k
where k is an infinite-dimensional radial basis function (RBF) kernel and ℋ_k is the corresponding reproducing kernel Hilbert space. Alternative distribution matching methods, such as linear MMD without kernelization, Wasserstein Distance and KL divergence are evaluated in treatment effect estimation in Sec. 5.5.
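The sketch below illustrates a (biased) RBF-kernel MMD^2 estimate between the values of one latent dimension in the control and treatment groups; the fixed bandwidth and the biased estimator are simplifying assumptions made for illustration:

```python
import torch

def rbf_mmd2(a, b, sigma=1.0):
    """Sketch of a (biased) kernel MMD^2 estimate between two 1-D samples.

    a, b  : tensors of shape (n,) and (m,) holding the values of one latent
            dimension Z^j for control (X(0)) and treatment (X(1)) patients.
    sigma : RBF bandwidth (a hyperparameter; a median heuristic is also common).
    """
    a, b = a.view(-1, 1), b.view(-1, 1)

    def k(x, y):
        # Pairwise RBF kernel matrix between the two 1-D samples.
        return torch.exp(-(x - y.t()) ** 2 / (2 * sigma ** 2))

    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()
```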
With balanced representation, two factors are inserted into the loss function for joint-optimization: the TC loss and the MMD loss:
l_DB(ϕ,θ) = TC(Z^1:d|X)
+ γ∑_j=4^dMMD(q_ϕ(Z^j|X(0)),q_ϕ(Z^j|X(1)))
where hyperparameter γ is used to adjust the scales of the MMD loss to be similar to the TC loss.
§.§ Label Balancing
The scarcity of ECMO treatment resources and the clinical considerations in treatment assignment induce the selection bias in COVID-19 patients. The limited number of treatment assignments leads to significant data imbalance (less than 3% in the ECMO datasets), thus an encoder network easily ignores the minority group, overfits the minority group or underfits the majority group in the latent representation.
To address this issue, we enrich the training data by augmenting the under-represented patients while maintaining their intrinsic characteristics. Instead of using extra data augmentation models <cit.> to generate fake patients, we utilize the generative power in our established latent space to sample more ECMO cases, as the latent representation of TVAE (or any VAE-based model) is a posterior distribution (as shown in Figure 1). Since the patients are sampled from the latent distribution of real ECMO cases, the generated "fake" patients (constructed by passing the upsampled latent representations through the decoder) do not change the distribution and characteristics of the treatment group. Subsequently, the upsampled "fake" data is concatenated with the original training data to form a more balanced input for model learning.
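The following sketch illustrates this upsampling step, assuming hypothetical encoder and decoder callables that expose the posterior parameters and the reconstruction, respectively:

```python
import torch

def upsample_minority(encoder, decoder, x_treated, n_extra):
    """Sketch of the Label Balancing step: draw extra synthetic ECMO cases
    from the learned latent posteriors of real ECMO patients.

    encoder(x) is assumed to return (mu, logvar) of q_phi(Z|X); decoder(z)
    is assumed to return reconstructed inputs. Both names are placeholders.
    """
    with torch.no_grad():
        mu, logvar = encoder(x_treated)
        # Reuse real ECMO cases (with replacement) as posterior anchors.
        idx = torch.randint(0, x_treated.size(0), (n_extra,))
        std = torch.exp(0.5 * logvar[idx])
        z_new = mu[idx] + std * torch.randn_like(std)  # reparameterized draw
        x_fake = decoder(z_new)                        # synthetic ECMO patients
    return x_fake
```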
For a well-balanced dataset, the loss function of TVAE is simply l_total(ϕ,θ) = l_VAE(ϕ,θ) + α· l_TI(ϕ,θ) + β· l_DB(ϕ,θ), where the hyperparameters α and β are used to adjust each loss term to be on a similar scale in the current dataset. For the public IHDP dataset, for example, we set α and β to 1 and 0.1, respectively.
In the ECMO case where treatment cases are rare, the label balancing module kicks in during the training iteration, hence the model input is the augmented dataset X' ∼ p_θ(X'|Z') where Z' is the up-sampled latent representations. The loss function can be expressed as:
l_total(ϕ,θ) = ∑_X_i ∈𝒳 -𝔼_Z_i∼ q_ϕ(Z_i|X_i) [log p_θ (X_i|Z_i) + αlog p(W_i,Y_i|Z_i) ]
+ KL(q_ϕ(Z|X') || p(Z))
+ β· KL(q_ϕ(Z|X') || ∏_j=1^d q_ϕ(Z^j|X'))
+ βγ·∑_j=4^d MMD(q_ϕ(Z^j|X'(0)), q_ϕ(Z^j|X'(1)))
When the loss function converges, the latent encoding will not change by the added fake patients, hence q_ϕ(Z|X') = q_ϕ(Z|X).
§ EXPERIMENTS
The first part of the experiments examine how TVAE performs under real ECMO settings. For each individual case, we look at the predicted treatment assignment and the factual response (mortality or survival) with the assigned treatment option. Since the factual response may come from either treatment group or control group, the performance in response prediction suggests the overall performance in predicting both treatment and control (no-treatment) outcomes.
To evaluate TVAE's performance in estimating treatment effect, we rely on the public synthesized datasets (where counterfactual outcomes are also available). Our experiments are designed to answer the following questions:
* Can TVAE predict the treatment assignment and factual responses?
* How do the tailored components (DB and LB) contribute to the TVAE model?
* How does TVAE perform in individual treatment effect estimation on synthesized datasets?
§.§ Data
In this study, we constructed two real COVID-19 datasets from different continents. Data access agreement and IRB approval were acquired prior to the study, as shown in Appendix E, together with data processing pipeline, feature extraction methods, and characteristics of the cohort. Evaluating the individual treatment effect (ITE) estimation on the ISARIC and BJC ECMO data is impossible, since the ground truth of the counterfactual outcomes is not available in reality. Therefore, we also implement TVAE using the synthesized Infant Health and Development Program (IHDP) dataset, described in previous studies <cit.>. It consists of 1000 replications. In each replication, the dataset contains 747 subjects (139 treated and
608 control), represented by 25 covariates.
ISARIC Dataset
The first ECMO dataset includes COVID-19 individuals from the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC)–World Health Organization (WHO) Clinical Characterisation Protocol (CCP), referred hereafter as ISARIC Data. Through international collaborative efforts, it covers 1651 hospitals across 63 countries from 26 January 2020 to 20 September 2021. We include a total of 118,801 patients who were admitted to an Intensive Care Unit (ICU) for at least 24 hours so that ECMO treatment is a feasible treatment option (hence the assumption of a positive probability for treatment assignment holds). Among these patients, 1,451 (1.22%) received ECMO treatment. As the patients are from different hospitals with different treatment decision criteria, the characteristics in both treatment and control groups are heterogenous. The mortality ratio is 40.00% in treatment group and 50.56% in control group.
BJC Dataset
The second ECMO dataset is a single institutional dataset, containing electronic health records (EHR) spanning 15 hospitals in Barnes Jewish HealthCare system. This dataset is referred hereafter as BJC Dataset. It contains COVID-19 patients admitted to ICU during 19 months (March 3rd 2020 - October 1st 2021). Among the total of 6,016 included patients, 134 (2.23%) received ECMO treatment. As the patients are from the same healthcare system, the treatment assignment is made by a panel of clinical experts with consistent decision criteria. The mortality ratio in the treatment group is 47.01% and in the control group is 18.99%. Detailed data processing and feature extraction are provided in Appendix E.
§.§ Baseline Settings and Evaluation Metrics
We implemented the state-of-the-art treatment effect algorithms that were discussed in the related works. This includes a linear ordinary least squares model (OLS), a nonparametric k-Nearest Neighbor model (kNN), tree-based causal inference models (BART <cit.> and Causal Forest (CF) <cit.>), deep generative causal inference models (CEVAE and GANITE <cit.>), and other deep representation causal inference models (DCN-PD, BNN, TNet, SNet, TARNET, and Dragonnet) <cit.>.
Among all the models, we performed the grid-search of hyper-parameters. The final hyper-parameter settings are described in Appendix G.
An ablation study is conducted to investigate if the tailored components (Distribution Balancing and Label Balancing) help with ECMO prediction. This involves both quantitative evaluations by prediction performance and qualitative evaluation by visualizing the latent distributions. TVAE-DB represents the model when distribution balancing is removed from the loss function, and TVAE-LB represents the model when the minority cases are not up-sampled from the learned latent distribution.
In ISARIC Dataset and BJC Dataset, both treatment assignment and factual outcomes are binary, therefore the area under the Receiver-Operating Characteristic curves (AUROC) is used to evaluate the predictive power of each model. Due to the scarcity of the treatment resources, we are interested in the trade-off of precision and recall, hence we also calculate the area under the Precision Recall curve (AUPRC). Quantitative results are reported with mean and standard error after 5-fold cross validation, where the stratification is random while preserving the treatment assignment ratio in each fold of patients.
For IHDP dataset, we followed the train/test split strategy with 1000 replications provided in the previous study <cit.>, and calculate the square root of the Precision in Estimation of Heterogeneous Effect (rPEHE), defined as:
ϵ_rPEHE = √(1/N∑_i=1^N ( (Ŷ_i(1)-Ŷ_i(0)) - (Y_i(1) - Y_i(0)) )^2)
where Ŷ_i(1) and Ŷ_i(0) are the estimated treatment and no-treatment outcomes, respectively.
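For reference, a direct implementation sketch of this metric (with NumPy arrays of estimated and true potential outcomes) is:

```python
import numpy as np

def rpehe(y1_hat, y0_hat, y1_true, y0_true):
    """Square root of PEHE; all inputs are arrays of length N."""
    ite_hat = y1_hat - y0_hat
    ite_true = y1_true - y0_true
    return np.sqrt(np.mean((ite_hat - ite_true) ** 2))
```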
§.§ Quantitative and Qualitative Results in ECMO Prediction
The quantitative results of prediction metrics are reported in Table 1 for ISARIC Dataset, and Table 2 for BJC Dataset. For models that do not (explicitly) predict propensity score, the corresponding fields are labeled as "N/A".
§.§.§ Overall performance
Our proposed TVAE outperforms all the baseline methods on the ISARIC and BJC datasets by all metrics. This observation is statistically significant, as measured by 90% confidence intervals and a paired one-tailed t-test.
§.§.§ Model complexity v.s. over-fitting
On the BJC Dataset, where the cohort size of ECMO patients is very small (134 cases) and the input feature dimension is relatively large (178 features), simple linear models such as OLS perform better than complex tree-based models and most of the deep-learning models. This is likely the result of over-fitting when complex models try to characterize scarce treatment cases. On the other hand, more complex models (such as BART, CF, GANITE, CEVAE and Dragonnet) outperform OLS on the ISARIC Dataset, as it is significantly larger and more heterogeneous. Armed with multiple novel regularization schemes (semi-supervised encoding, disentanglement and distribution matching), TVAE demonstrates its robustness on the BJC cohort, while its deep learning architecture is capable of discovering the underlying nonlinear patterns in the ISARIC cohort.
§.§.§ Tree-based learning v.s. deep representation learning
As observed in a previous study <cit.>, tree-based models have high prediction performance on tabular data, but their performance deteriorates when facing label imbalance. In the label-imbalanced treatment assignment prediction, CEVAE and DCN-PD achieve similar or better AUROC/AUPRC than tree-based models. In our TVAE, the carefully designed architecture improves representation learning, achieving an even more significant improvement over tree-based models in imbalanced problems (treatment assignment prediction) while matching their performance in factual prediction.
§.§.§ Ablation analysis.
We are interested in how the dedicated components affect the model performance. To quantitatively evaluate the contribution of Distribution Balancing, we remove the MMD loss and disentanglement, and reduce the weight of KL divergence (KL divergence helps disentanglement <cit.>, but it must be kept for clustering effects). After the removal of the Distribution Balancing, all metrics dropped, and the decrease is more significant in the BJC Dataset, resulting in a 6.6% reduction in the AUPRC of treatment assignment prediction. This is consistent with the fact that the BJC Dataset has limited treatment samples, hence it is more prone to overfitting. To qualitatively visualize how Distribution Balancing affects the latent representations, in Fig. 2 we plot the direct visualizations of projected inputs in the latent space (as well as probability density distribution) of treatment and control groups in unsupervised dimensions, with and without Distribution Balancing. Each scatter point represents an ECMO/control case, and the transparency is proportional to the empirical probability density at this location. The empirical probability density is calculated through kernel density estimation (KDE). We used the 4th and 5th latent dimensions, as the first three dimensions are for treatment assignment, factual and counterfactual estimation, respectively. A clear divergence between treatment and control groups is observed after the removal of Distribution Balancing, suggesting the potential existence of selection bias in these dimensions or overfitting.
To quantitatively evaluate the contribution of Label Balancing, we remove the upsampling from the training process. Without balancing the treatment/control groups, we observed 25.1% and 7.9% drops in the AUPRC of treatment assignment on the ISARIC Dataset and the BJC Dataset, respectively. To investigate the optimal upsampling ratio, we vary the relative size of the upsampled ECMO cases; the optimal upsampled size is found to be 80% (relative to the number of control cases) for the ISARIC Dataset and 60% for the BJC Dataset, as can be seen in Appendix H.
Note that the ablation of Treatment Joint Inference is not within the scope of this study, since removing it will totally change the architecture of the proposed work.
§.§.§ Risk stratification.
Given TVAE as a decision assistance tool, we want to visualize the consistency between risk predictions and observed outcomes. In Fig. 3, we plot the predicted propensity score, and the treatment effects (measured by the mortality risk with ECMO minus the mortality risk without ECMO). Via propensity scoring, TVAE separates cases that are more likely to be assigned treatment from cases that are not, and the stratification matches with the actual clinical decisions. By predicting the potential risk reduction by ECMO treatment, our model divides the treatment group into those who benefit more from ECMO and those who benefit less. The division is consistent with the actual death and survival outcomes.
§.§ Individual Treatment Effect Estimation
We first compare the treatment effect estimation between TVAE and the state-of-the-art algorithms. Recent literature has noted the inconsistency of results reported in existing literature, where the calculated metrics might be different, or sometimes from different replication strategies <cit.>. To generate a fair comparison between all algorithms, we followed the train/test split strategy with 1000 replications provided in the original study <cit.>, and calculate the square root of the Precision in Estimation of Heterogeneous Effect (rPEHE) for all included state-of-the-art algorithms.
For reproducibility, the results of TVAE as well as existing algorithms are provided using Jupyter Notebook on Github: <https://github.com/xuebing1234/tvae>. As tabulated in Table 3, TVAE has significant improvement in estimating treatment effects over the state-of-the-art models.
Since both the factual and counterfactual outcomes are available, we further evaluate how different Distribution Matching strategies (Kernelized MMD, MMD, Wasserstein Distance, and KL divergence) affect the treatment effect estimation. Among them, Kernelized MMD and KL divergence lead to the lowest estimation errors, indicating that they might be more suitable for distribution matching in batch learning.
It is noteworthy that IHDP (and potentially any other synthesized dataset) does not possess the complexity of the real ECMO scenario. For instance, its treatment assignment is far less scarce (a pos/neg ratio of 23%, more than 10 times that of the ECMO datasets), and it has a much smaller selection bias (derandomization simply by removing non-white mothers) and a simpler input space.
§ CONCLUSION AND BROADER IMPACT
We aim to support the challenging treatment decisions on ECMO treatment, a scarcely available life-supporting modality for COVID-19 patients. We proposed a disentangled representation model that delivers the propensity score and potential outcomes through semi-supervised variational autoencoding. Several innovative components are proposed and integrated to address the strong selection bias, the scarcity of treatment cases, and the curse of dimensionality in patient characterization. The Treatment Joint Inference eliminates the auxiliary networks while regulating factual and counterfactual predictions through input reconstruction, clustering and semi-supervision. The Distribution Balancing disentangles the latent dimensions into different aspects of information and extracts the selection bias to aid propensity scoring. The remaining non-confounding dimensions are regulated by matching their distributions between treatment and control. The Label Balancing component mitigates data imbalance with the generative latent representation while preserving the patient distributions. The experiments on two real-world COVID-19 datasets and the public synthetic dataset show that the model is robust in capturing the underlying decision-making process as well as individual treatment effects, hence outperforming other state-of-the-art algorithms.
Potential Impact:
This work fills the gap between treatment effect models and the critical need for ECMO clinical decision support (or other application domains that face strong selection bias, scarcity of treatment, and overfitting). To our best knowledge, there is no machine learning tool that can aid clinicians in identifying ECMO candidates from high-dimensional EHR data while weighing the risks of disease progression versus the scarcity of ECMO support.
In fact, the decision is arguably the most complex decision made in the ICU setting.
The improvement in predictions over state-of-art models leads to the better identification of treatment needs and resource allocation, hence saving lives. When using a machine learning assistive tool in ICU setting, we usually want the model to maximize sensitivity (true positive rate) while fixing a high specificity (true negative rate). Comparing TVAE with existing algorithms (for example, BART), when fixing a high specificity of 0.95, TVAE has a sensitivity of 0.84 (while BART has a sensitivity of 0.71) in the BJC Dataset, and 0.64 (while BART has 0.33) in ISARIC Dataset. Considering the capacity of ECMO resources, such performance improvement leads to 374 more patients (18 in BJC Dataset, and 356 in ISARIC) being correctly identified for ECMO treatment (see more details in Appendix F). Although it is impossible to compare treatment effect estimation without counterfactual outcomes, the improved AUROC and AUPRC in factual outcome prediction suggests potential improvement in characterizing the survival/death impact for each treatment decision.
Limitation:
This work is not without limitations. In clinical practice, when treatment decisions are not EHR-related or tracked by the EHR system, the assumption of No Unmeasured Confounding might be violated. Meanwhile, the evaluation of model performance on counterfactual outcomes remains a challenge except when using synthetic datasets. Due to concerns about potential algorithmic bias and robustness arising from the data collection, sample size, and underestimation of certain groups, the machine learning predictions should only serve as assistance to clinical decision making. In health care settings, the treatment effect estimated by our model should be used as only one of the inputs to the clinicians alongside other clinical and ethical considerations. How to incorporate our treatment effect model in clinical decisions should be investigated in future studies.
The completion of this research project would not have been possible without the contributions and support of the ISARIC Clinical Characterisation Group (ORCID ID: 0009-0004-5601-9672), Pandemic Sciences Institute, University of Oxford. We are deeply grateful to all contributors who played a role in the success of this project. The full author list and funding information is provided at the permanent link: https://docs.google.com/document/d/17pWtnRI251dDsaQPAb5SMFjHQgEkJ-Hi/edit?usp=sharing&ouid=114308118763539217087&rtpof=true&sd=true.
This work was supported by the Fullgraf Foundation, and the Big Ideas 2020 COVID Grant through the Healthcare Innovations Lab at the BJC Healthcare and Washington University in St. Louis School of Medicine. ASS has received research support from the Children’s Discovery Institute Faculty Development Award at Washington University in St. Louis.
§ DISCUSSION ON ASSUMPTIONS
The decision to initiate ECMO is almost always the most complex decision to make. ECMO is the most resource-intensive therapy provided in the ICU and is associated with significant morbidities; in addition, there are no universally accepted tools to identify patients at highest risk of receiving ECMO or universally accepted criteria for ECMO initiation. It was thus important to first identify patients not eligible for ECMO (by either BMI or age). From an inclusion perspective it was then important to include all the clinical, laboratory and therapeutic variables that influence ECMO decision making, as no single variable can solely identify patients who might or might not receive ECMO. For example, a patient’s respiratory rate, if high, could represent severe respiratory distress and failure that could contribute to the decision to provide ECMO. A normal respiratory rate for a patient who is endotracheally intubated, on mechanical ventilatory support and under neuromuscular blockade but without satisfactory evidence of adequate gas exchange could also be considered a reason to consider ECMO support. Additionally, a patient in severe respiratory failure with subsequent hypercarbia who becomes hypopneic with a low respiratory rate could be at risk of impending cardio-respiratory arrest and thus could be a candidate for urgent ECMO support.
§.§ Discussion on Assumption 1
The no unmeasured confounding assumption comes from the fact that an ICU patient is continuously measured in various aspects that might be relevant to treatment assignment and potential treatment outcomes, and clinicians rely on these measures to make reasonable treatment decisions. We assume that clinicians have taken the necessary measurements that reflect all confounding information. As all such measurements/tests are captured in the Electronic Health Records, we believe the dataset has included all confounding variables. Taking the Institutional Data for example, we collect 52,216 measurements from the flowsheets table alone, in addition to other tables such as demographics, comorbidities, lab tests, ventilation settings, etc. However, in the actual implementation, highly missing measures/tests are excluded from the model inputs, and it remains to be investigated whether this incurs any unmeasured confounding. This has been included in the discussion of limitations in Sec. 6.
§.§ Discussion on Assumption 2
For a rigorous analysis, we pre-exclude definite cases with zero probability (for example, any ICU patient with age > 80 is considered unsuitable for ECMO treatment). Since there are no clinical criteria that an ICU patient ‘must’ be on ECMO, the probability of each individual treatment assignment is less than 1. In the resulting dataset, the probability of treatment decision is always in the range (0,1).
§ IDENTIFIABILITY OF TREATMENT OUTCOMES
To predict the factual and counterfactual outcomes (hence the individual treatment effect), we need to show that p(Y|X,do(W=1)) in TVAE is identifiable. From the Identification Theorem <cit.>, we can see that:
p(Y|X,do(W=1)) = ∫_Z p(Y|X,do(W=1),Z) p(Z|X,do(W=1)) dZ
(i)= ∫_Z p(Y|X,W=1,Z) p(Z|X,W=1) dZ
(j)= ∫_Z^1 p(Y|X,W=1,Z^1) p(Z^1|X,W=1) dZ^1
(k)= ∫_Z^1 p(Y|W=1,Z^1) p(Z^1|X,W=1) dZ^1
where equality (i) is by the rules of do-calculus, equality (j) is by the designated latent dimension in TVAE, and equality (k) comes from the property of VAE that Y is independent of X given Z. To ensure that Y is only expressed in the designated latent dimension, disentanglement is further added to TVAE (see Sec 4.3). The case of p(Y|X,do(W=0)) is identical, hence the defined individual treatment effect can be recovered.
§ PROOF OF PROPOSITION 1
Given Z^1:d:={Z^1, Z^2, ..., Z^d} and Z^1 = Ŵ, conditioned on the data X, the total correlation of Z^1:d equals the sum of the total correlation of Z^2:d and the mutual information between the first dimension Ŵ and all the other dimensions Z^2:d, i.e.,
TC(Z^1:d|X) = TC(Z^2:d|X)
+ I(Ŵ|X;Z^2:d|X)
where I(A;B) is the mutual information between variable A and B.
LHS =𝔼_q_ϕ(Z^1:d|X)[ logq_ϕ(Z^1:d|X)/q_ϕ(Z^1|X)q_ϕ(Z^2|X)...q_ϕ(Z^d|X)]
= 𝔼_q_ϕ(Z^1:d|X)[ log(q_ϕ(Z^2:d|X)/q_ϕ(Z^1|X)q_ϕ(Z^2|X)...q_ϕ(Z^d|X))
+ log(q_ϕ(Z^1:d|X)/q_ϕ(Z^2:d|X)q_ϕ(Z^1|X))]
= TC(Z^2:d|X)
+ I(Z^1|X;Z^2:d|X)
= RHS
§ PREDICTED RISKS OF CONTROL AND TREATMENT CASES
The predicted distribution of ECMO cases and control cases in terms of treatment effect and control effect are plotted in Figure S1. Patients positioned to the right have higher mortality risk even without ECMO treatment, and patients in the upper regions have higher mortality risks after ECMO treatment. From the figure we can see that 1): Death ECMO cases have higher predicted treatment and control mortality risks than survival cases, and 2): actual treatment cases do not have the highest predicted mortality risks (either treatment or no-treatment).
For 1), this is consistent with our intuition. First, the deaths after treatment demonstrate a high mortality risk even after ECMO. Second, the deaths imply more severe symptoms before the treatment decision, hence they are associated with higher mortality risks without ECMO treatment. For 2), the actual clinical decision is based not on the reduction in mortality risks, but on the change in outcome. For instance, a reduction of death probability from 30% to 0 will not justify the ECMO treatment when compared to a reduction of death probability from 65% to 35%, since the latter is more likely to result in a change of outcome (changing the patient from death to survival). As a result, the actual ECMO assignments are not picking the cases with the highest mortality risk (more likely to die regardless of treatment or not), but rather the cases that are more likely to be overturned.
§ DATA ACCESS AGREEMENT, PREPROCESSING AND COHORT CHARACTERISTICS
The data access agreement with the International Severe Acute Respiratory and emerging Infections Consortium (ISARIC) was signed on Dec 18, 2020. The purpose of access is to execute an analysis on the "Development of predictive analytics model for need of extracorporeal support in COVID-19".
For the BJC dataset, the IRB (#202011004) titled "Identifying predictors for ECMO need in COVID 19 patients" was approved on Nov 2, 2020.
The detailed data description, processing pipeline and cohort characteristics are summarized in the following documents. For ISARIC, the summary is provided here: https://tinyurl.com/mrv8rmp3. For the institutional data, the summary is provided here: https://tinyurl.com/yxersh7d.
§ MORE METRICS BETWEEN TVAE AND BART
§ PARAMETERS OF BASELINES
The summary of hyper-parameters used in the ECMO datasets is listed here: https://tinyurl.com/433yhjzh. The hyper-parameters are tuned through grid search. For IHDP, we use the default hyper-parameters in the associated GitHub code repository, and uploaded the experiment results here: https://github.com/xuebing1234/tvae
§ OPTIMAL SAMPLING STRATEGY
Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection
Delyan Boychev
High School of Mathematics and Natural Sciences Veliko Tarnovo, Bulgaria
August 1, 2023
With the perpetual increase in the complexity of state-of-the-art deep neural networks, it becomes increasingly challenging to maintain their interpretability. Our work aims to evaluate the effects of adversarial training, which is utilized to produce robust models - ones less vulnerable to adversarial attacks. Adversarial training has been shown to make computer vision models more interpretable. Interpretability is as essential as robustness when we deploy models in the real world. To demonstrate the correlation between these two problems, we extensively examine the models using local feature-importance methods (SHAP, Integrated Gradients) and feature visualization techniques (Representation Inversion, Class Specific Image Generation). Standard models, compared to robust ones, are more susceptible to adversarial attacks, and their learned representations are less meaningful to humans. Conversely, robust models focus on distinctive regions of the images that support their predictions. Moreover, the features learned by the robust model are closer to the real ones.
§ INTRODUCTION
Deep convolutional neural networks are used widely in Computer Vision. They achieve high accuracy on computer vision problems, such as image classification <cit.>, object detection <cit.>, etc. Because of their superhuman performance on such tasks, they are continuously integrated into high-risk areas such as self-driving cars. Due to such applications, it becomes increasingly important for them to be interpretable and reliable. Interpretability is the ability of humans to understand the decision-making process of the model - which makes it very useful in detecting dataset biases and prediction flaws.
Furthermore, adversarial robustness is also essential for the models. It has been shown that models are susceptible to adversarial attacks <cit.>. If we change the input of the model slightly, we can mislead it into making wrong predictions, even though the perturbations applied to the input are often imperceptible to the human eye. These types of input alterations are called adversarial attacks. They can be used, for example, to penetrate facial recognition systems <cit.> or make self-driving vehicles crash <cit.>. One way to make models more robust against such attacks is through an approach called adversarial training <cit.>, which relies on the fact that we can train deep neural networks on adversarial examples instead of standard data, and teach them to classify the examples correctly.
Robustness and interpretability are both extremely important qualities of Computer Vision models. To safely integrate computer vision models into our lives, we have to comprehend the decision-making process and be sure that they are robust against potential adversaries.
Some researchers have noticed a correlation between robustness and interpretability <cit.> <cit.>. In our work, we aim to investigate this correlation through the lens of modern interpretability methods such as Integrated Gradients attributions <cit.>, SHAP values <cit.> and Feature Visualization.
Firstly, we train a standard model and a robust model on both the CIFAR-10 dataset <cit.> and a subset of the ImageNet dataset <cit.>; the models are trained under the same conditions because we want to make valid comparisons. These models use the ResNet architecture <cit.>. The difference between the CIFAR-10 and Small ImageNet models is that the Small ImageNet model uses a deeper ResNet to achieve high performance because of the higher-resolution images. After that, we analyze the interpretability of the models through different techniques. Some of them are local, which means that we explain only one specific example, and others are global - explanations of the whole behavior of the model. One of them is SHAP (SHapley Additive exPlanations) - a game-theoretic approach that makes local explanations using the classical Shapley values from game theory. It gives us information about which regions of the image are most important for the decision. The other one we utilize is called Integrated Gradients attributions <cit.>. It computes which features the model relies on by computing their average contribution. It is another reasonable way to analyze models' interpretability. The last aspect of interpretability we study is the learned features. There are different ways to visualize neural network features - Direct Feature Visualization, Class Specific Image Generation, and Representation Inversion. These methods present the main learned characteristics that are meaningful to humans, and they let us see how the model interprets specific classes. In our work, we apply qualitative analysis of these features and compare the results of the robust model to those of the standard model.
§ METHODS
§.§ Setup
§.§.§ CIFAR-10
The first dataset we work with is the CIFAR-10. It consists of 60000 32x32 RGB images spread out in 10 separate classes. We divided the dataset into a training set and a test set. The training set size is 50000, and the test set size is 10000. The training set includes 5000 images from every class. We chose the CIFAR-10 dataset because it is standardized and widely applied for benchmarking.
§.§.§ Small ImageNet 150
We consider training models on the ILSVRC 2017 dataset (ImageNet-1k) <cit.>, which contains over 1 million training images. Hence, we decided to take 150 classes from ImageNet because we can still obtain high performance and reasonable interpretability plots, but with a reduced computational expense. Each class consists of 600 images for training and 50 images for validation. The training set size is 90000 images and the validation set is 7500 images. For testing, we use the validation set and the TopImages test set from ImageNetV2 <cit.>. The total size of the dataset is 99000 128x128 RGB images. These images are not as small as the CIFAR-10 images, so we can analyze the models' interpretability in much more depth while also achieving high performance. This subset, which we named Small ImageNet 150, is generated by randomly picking classes and images.
§.§.§ Model Architecture
The model architecture is also essential for interoperability analysis. Residual networks are often used to solve many image classification problems <cit.>. Residual Networks are convolutional neural networks and they consist of residual blocks (Fig. <ref>). The main difference from the simple convolutional neural networks is the skip connection. It is just adding the previous layer output to the layer ahead. Sometimes the dimensions of x and the dimensions of the block's output are different. In this situation, we should use the projection method to match the dimensions, which is done by adding 1×1 convolutional layers to the input. Another difference from the plain convolutional neural network is the batch normalization layer added after every convolutional layer. There are two types of blocks. In Fig. <ref> is presented the Basic Residual Block. It is applied in smaller networks like ResNet18 and ResNet34 because this block is computationally expensive and slow in deeper networks. The Bottleneck Residual Block (Fig. <ref>) consists of three convolutional layers - 1 x 1, 3 x 3, and 1 x 1. The 1 x 1 layer decreases and then increases the input and output dimensions. It reduces the execution time because the 3 x 3 convolution remains with low input and output dimensions. Therefore we can build deeper ResNets that are more efficient and faster for training than the ResNets with Basic Blocks. For instance, ResNet50 is constructed by replacing the Basic Blocks in ResNet34 with Bottleneck Blocks.
Residual Networks are less likely to overfit and to suffer from vanishing or exploding gradients. CIFAR-10 is a tiny dataset - consequently, our model needs fewer layers. ResNet18 performs well enough for our task: the models reach high accuracy in fewer training epochs, after which we can analyze them.
To train a model on the Small ImageNet 150 dataset, we use a deeper network to achieve high performance. ResNet50 is large enough to perform well on this dataset and allows us to examine the interpretability of the models.
We do not fine-tune pre-trained models because we do not know the conditions under which they were trained. Model training conditions are crucial for interpretability and robustness comparisons.
§.§ Model Robustness
§.§.§ Adversarial Attacks
White-box adversarial attacks are perturbations added to the input image that are invisible to the human eye. We know the weights of the model when we craft such attacks. They lead the model to make wrong decisions.
First, we denote our classifier as F() and its weights as W. x is the natural input with labels y. C is the number of classes. We use Cross-Entropy Loss <cit.>, widely applied in neural networks:
l(t, y)= -logexp(t_y)/∑_j=1^C exp(t_j)
where t represents the output of F(x, W).
In order to produce the attack, we maximize the loss with respect to the perturbation which we denote as δ.
max_δ∈Δ l(F(x+δ, W), y)
where
Δ = {δ: δ_p≤ε}
The most popular perturbation sets are the l_2 and the l_∞ balls, due to the simplicity of projecting onto them. We denote the perturbation set and the maximum perturbation size respectively with p and ε.
We will consider Projected Gradient Descent as a way of tackling the optimization problem in Equation <ref>. If we refer to the gradient of the loss function with respect to a given image as ∇_x l, then the adversarial perturbation δ can be iteratively updated with step size σ as follows:
δ←𝒫_‖δ‖_p ≤ε(δ + σ·∇_δ l(F(x+δ, W), y))
where 𝒫 is the projection function.
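A minimal PyTorch sketch of this l_2 PGD attack is given below; following common practice, the gradient is normalized per example before the ascent step, which is a practical refinement of the plain update above, and all names are illustrative:

```python
import torch

def pgd_l2(model, x, y, eps=0.5, sigma=0.1, steps=20):
    """Illustrative untargeted l2 PGD attack; model, x, y are a classifier,
    an image batch (N, C, H, W) and integer labels, respectively."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascent step on the loss, with a per-example normalized gradient.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta.detach() + sigma * grad / g_norm
        # Project back onto the l2 ball of radius eps around the clean input.
        d_norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).requires_grad_(True)
    return (x + delta).detach()
```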
§.§.§ Adversarial Training
The given model architecture can increase its robustness by replacing the standard training objective min_W l(x, y) with its adversarial training counterpart, viz.
min_Wmax_δ∈Δ l(F(x+δ, W), y).
Note that the robustness of a given model is relative to a chosen l_p ball with a small radius ε, because a large radius would mean that the image may be perturbed to the extent that it is either no longer recognizable even to humans or it portrays an entirely different concept. The pseudo-code of adversarial training with PGD is presented in Algorithm <ref>.
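Since the training loop itself is standard, a minimal sketch of one adversarial training epoch (reusing the pgd_l2 sketch from the previous subsection; all other names are illustrative) could look as follows:

```python
import torch

def adversarial_training_epoch(model, loader, optimizer,
                               eps=0.5, sigma=0.1, steps=20):
    """Sketch of one epoch of adversarial training (the min-max objective above):
    craft PGD examples on each batch, then descend on the loss they induce."""
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x_adv = pgd_l2(model, x, y, eps=eps, sigma=sigma, steps=steps)  # inner max
        optimizer.zero_grad()
        loss = loss_fn(model(x_adv), y)                                 # outer min
        loss.backward()
        optimizer.step()
```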
§.§ Model Interpretability
§.§.§ Integrated Gradients
The Integrated Gradient method - a local attribution technique, was introduced at ICML <cit.>. It is applied to compute which features impact the model output score (Softmax probability) negatively or positively for a given input.
First, we denote the d-th input dimension as x_d, the baseline for it as x'_d. δ^IG_d is the difference between them:
δ^IG_d = x_d - x'_d
δ^IG = x - x'
The gradients of the model score with respect to the input features indicate which features have the steepest slope. By integrating the gradients along the straight path from the baseline to the original image, we achieve the expected contribution of each feature d to the prediction. The baseline x' represents the absence of some input features. The straight path is obtained by monotonical linear interpolation between the baseline and the original image with a hyperparameter denoted as α. This is the integrated gradient where F is the predict function:
ϕ^IG_d(F, x, x') = δ^IG_d×∫^1_α=0∂F(x' + αδ^IG)/∂x_d dα
The integral is approximated by the left Riemann sum in the original paper <cit.>. However, <cit.> conclude that the trapezoidal rule is a faster method than the left Riemann sum. First, we need Δα, the difference between every step in the integration, where m is the number of steps:
Δα = (α_m - α_0)/(m+1) = (1-0)/(m+1) = 1/(m+1)
where α_0 and α_m equal 0 and 1, respectively, because we integrate over the interval from 0 to 1. We add one to the number of steps because both the zeroth and the last element are included. The gradients are denoted as g_0, g_1, ..., g_m:
∂F(x' + 0/mδ^IG)/∂x_d, ∂F(x' + 1/mδ^IG)/∂x_d, ..., ∂F(x' + δ^IG)/∂x_d
Therefore the integrated gradients are approximated as follows:
IG = (Δα/2)∑^m_i=1(g_i+g_i-1) = 1/(2(m+1))∑^m_i=1(g_i+g_i-1)
Multiplying the difference δ^IG_d by the integrated gradients IG, we are scaling the integrated gradients by the size of the change in the input features. It allows us to see how much the model's output changes as a result of the specific change in input features that we are interested in.
IntegratedGrads^approx_d = δ^IG_d× IG
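A minimal sketch of this approximation for a single image is shown below; it assumes model(batch) returns class scores and follows the 1/(2(m+1)) weighting written above, with all names being illustrative:

```python
import torch

def integrated_gradients(model, x, baseline, target, m=50):
    """Illustrative IG attribution for a single image x (shape C x H x W).
    `target` is the index of the class whose score is attributed."""
    alphas = torch.linspace(0.0, 1.0, m + 1)
    grads = []
    for alpha in alphas:
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target]
        grads.append(torch.autograd.grad(score, point)[0])
    grads = torch.stack(grads)                       # (m + 1, C, H, W)
    # Trapezoidal sum with the 1/(2(m+1)) weighting used in the text.
    trapezoid = (grads[1:] + grads[:-1]).sum(dim=0) / (2 * (m + 1))
    return (x - baseline) * trapezoid
```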
§.§.§ Feature representations visualization
We continue with different types of methods that visualize the features learned during training. Some of them were originally proposed by <cit.>. We denote the representation function as R(), which maps an input x to a representation vector R(x) ∈ℝ^k - the penultimate layer of the network. The standard model's representations are called "standard representations"; analogously, the robust model's representations are called "robust representations".
Feature Visualization
Feature visualization <cit.> visualizes features specific to different classes that the model has learned. We choose one or more activations from the representation vector and maximize them with respect to the noise δ added to the input. This is a gradient-based optimization whose aim is to visualize human-meaningful representations learned through the training procedure.
max_δ R(x_rand + δ)_t
where t∈ [k] is the index of the activation which we maximize. x_rand can be an image from the dataset or random noise. If we maximize more than one activation, we apply this formula, where z is the set of the activations:
max_δ1/z∑_i=1^z R(x_rand + δ)_z_i
After that, we get the images from the test set that maximally and minimally activate the neurons to see if these images have features similar to the ones in the visualization.
Representation Inversion
This technique <cit.> aims to match one image's representation vector to another image's representation vector, to see whether the images themselves become similar as well. The procedure is conducted in the l_2 space. Our target image is x_targ from the test set, and the starting point (source image) is x_src, which can be noise or an image from the test set belonging to a different class. We normalize the distance by dividing it by the norm of the representation vector of the target image.
min_δ ‖R(x_src + δ) - R(x_targ)‖_2/‖R(x_targ)‖_2
Utilizing this method, we obtain images similar to the original ones. However, this does not prove they are close in the feature space. Hence, we involve a distance measure between the feature vectors of the original and inverted images. We select a pre-trained InceptionV3 because it is applied in many metrics in which feature extraction is needed, such as the Fréchet Inception Distance <cit.> and the Inception Score <cit.>. To complete the task, we take the middle feature vector, containing 192 features. After that, we measure the l_2 distance between these two feature vectors (computed on the original and inverted image) and determine which model's inversion is closer to the original.
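A minimal sketch of the inversion procedure is given below; for simplicity it uses an unconstrained Adam optimizer on δ rather than a projected update, and repr_fn is a placeholder for the map onto the penultimate-layer representation R(·):

```python
import torch

def invert_representation(repr_fn, x_src, x_targ, lr=0.01, steps=200):
    """Illustrative Representation Inversion: perturb x_src until its
    representation matches that of x_targ (normalized l2 objective above)."""
    with torch.no_grad():
        r_targ = repr_fn(x_targ)
        targ_norm = r_targ.norm(p=2)
    delta = torch.zeros_like(x_src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (repr_fn(x_src + delta) - r_targ).norm(p=2) / targ_norm
        loss.backward()
        opt.step()
    return (x_src + delta).detach()
```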
Class Specific Image Generation
In contrast to the other methods, Class Specific Image Generation operates without accessing representation vectors. It was previously utilized by <cit.>, but we replace the Stochastic Gradient Descent with Projected Gradient Descent. The procedure consists of maximizing a specific output logit (the raw score before the Softmax function) - one of the classes (its index denoted as i) - with respect to the noise added to the input. The starting point image is called the source. To optimize the generation process, we sample random noise from the multivariate normal distribution of the specific class's images (computed on the test set images). The concept for choosing the starting point is inspired by <cit.>. Class Specific Image Generation visualizes what the model has learned about a specific class instead of visualizing single features that refer to one of the activations in the representation vector. F() is the model prediction function that gives us the output logits.
max_δ F(x_src + δ)_i
As in the Representation Inversion method, feature similarity is a fundamental concern. Here, however, we measure the quality of the generated images rather than the similarity between two specific images. Utilizing the FID solves this task: it measures the distance between the feature distributions of the natural images and the model-generated ones.
§.§.§ SHAP
SHapley Additive exPlanations (SHAP) <cit.> is a game-theoretic approach in which the game is the model's prediction and the players are the input features, viz.
ϕ_j(F)=∑_S⊆{x_1,…,x_n}∖{x_j}|S|!(n-|S|-1)!/n!(F(S∪{x_j})-F(S))
With F() we denote our classifier, x_j represents one feature from the set of features S, n is the number of features and ϕ_j is the Shapley value for feature x_j.
SHAP provides global and local interpretability by showing how much each feature (in our context image pixels) affects the prediction, either positively or negatively. We investigate the model's predictions and compare robust to standard ones.
In our case, we apply a local method for explanation, SHAP gradient explainer, which works similarly to the Integrated Gradient method. It is computing the Expected Gradients <cit.>, similarly to the Integrated Gradients.
ϕ^EG_c(x, D_data) = 𝔼_x' ∼ D, α∼ U(0,1)[ (x_c - x'_c) ×∂F(x' + α(x - x'))/∂x_c]
The difference from the Integrated Gradient method is that we use a randomly chosen baseline from a subset of the dataset and a linear interpolation hyperparameter α. The expectation is the average over all such cases. It approximates the Shapley values.
§.§ Multivariate Normal Distribution
The Multivariate Normal Distribution, or joint normal distribution, is a multidimensional generalization of the one-dimensional normal distribution. It is indicative of the correlation between multiple variables. Such a distribution is characterized by its mean and covariance matrix. We have a set of values X (X_i is a column of the matrix X), and to compute the mean and covariance matrix, we apply the following formulas:
mean(x) = 1/n∑_i=1^n x_i
cov(x, y) = 1/n∑_i=1^n (x_i - mean(x))(y_i - mean(y))
μ = [ mean(X_1); ⋮; mean(X_n) ]
Σ = [ cov(X_1, X_1) ⋯ cov(X_1, X_n); ⋮ ⋱ ⋮; cov(X_n, X_1) ⋯ cov(X_n, X_n) ]
where n is the length of each feature-vector column, x_i is its i-th value, and μ and Σ are the mean vector and covariance matrix of X, respectively.
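In code, these estimates reduce to a few NumPy calls. The sketch below assumes the feature vectors are stacked as rows of X (n_samples × n_features); the function name is chosen for illustration.

```python
import numpy as np

def fit_multivariate_normal(X):
    """Mean vector and covariance matrix of feature vectors stored as rows of X."""
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)   # features are the columns of X
    return mu, sigma
```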
§.§ Fréchet Inception Distance
The Fréchet Inception Distance (FID) is a metric for evaluating the quality of generated images. Introduced by <cit.>, the FID has since become a standard for comparing generative models. In our case, it is applied to rate the quality of the class visualizations. The metric relies on the Fréchet distance, which measures the distance between two Multivariate Normal Distributions. To compute the score, we first calculate the mean and covariance of the feature vector sets generated by real and generated images, which are obtained by passing images through a pre-trained Deep Convolutional Neural Network, usually the InceptionNet <cit.>.
We compute the Fréchet Inception Distance between two multivariate normal distributions of feature vectors X and X_1; to do so, we first calculate their means, μ and μ_1, and covariances, Σ and Σ_1. The FID formula is structured as follows:
FID(μ, μ_1, Σ, Σ_1) = ||μ - μ_1||_2^2 + Tr(Σ + Σ_1 - 2(ΣΣ_1)^1/2)
Here, Tr is the trace operator of a given matrix. The FID score measures the distance between the two distributions of feature vectors (real and fake images), with lower values indicating greater similarity between the distributions. The perfect FID score is 0, meaning the fake images are identical to the real ones.
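Putting the two previous ingredients together, a compact NumPy/SciPy sketch of the FID is given below. The function name is ours, and in practice the two feature sets come from the pre-trained InceptionNet mentioned above.

```python
import numpy as np
from scipy import linalg

def fid(real_features, fake_features):
    """FID between Gaussians fitted to two sets of Inception feature vectors (rows)."""
    mu, sigma = real_features.mean(axis=0), np.cov(real_features, rowvar=False)
    mu1, sigma1 = fake_features.mean(axis=0), np.cov(fake_features, rowvar=False)
    covmean = linalg.sqrtm(sigma @ sigma1)
    if np.iscomplexobj(covmean):      # drop tiny imaginary parts introduced by sqrtm
        covmean = covmean.real
    diff = mu - mu1
    return float(diff @ diff + np.trace(sigma + sigma1 - 2.0 * covmean))
```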
§ RESULTS
§.§ CIFAR-10
First, we train a ResNet18 model on the non-robust CIFAR-10 dataset - the standard model. It achieves a maximum accuracy of 92.7% after 100 epochs of training, which is a reasonable performance for interpretability analysis.
The second model, called the robust model, is trained on the robust CIFAR-10 dataset, generated on each batch of the training procedure by applying PGD for 20 iterations with projection onto the l_2 ball under the constraint ε=0.5 and step size σ=0.1. The best-performing model reaches 85% accuracy on natural examples and 64.6% accuracy on adversarial examples; the metric for choosing the best model is the average of the two accuracies.
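A minimal PyTorch-style sketch of this training loop is shown below. It is a standard l_2 PGD adversarial-training step using the hyperparameters quoted above; the function names, the optimizer handling, and the choice to train only on the perturbed batch are our simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def robust_training_epoch(model, loader, optimizer, eps=0.5, step=0.1, iters=20):
    """One epoch of adversarial training: each batch is replaced by its l2-PGD
    perturbation (20 iterations, eps=0.5, step 0.1) before the usual update."""
    for x, y in loader:
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(iters):                       # build the adversarial batch
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            g = grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12
            delta = delta + step * grad / g          # ascend the loss
            d = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
            delta = (delta * torch.clamp(eps / (d + 1e-12), max=1.0)).detach().requires_grad_(True)
        optimizer.zero_grad()                        # standard update on the perturbed batch
        F.cross_entropy(model(x + delta.detach()), y).backward()
        optimizer.step()
```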
Accuracies of the models are summarized in Table <ref>. The standard model has the highest accuracy on natural examples but the lowest on adversarial examples. On the other hand, the robust model has balanced accuracies on standard inputs as well as on adversaries. Our next task is to analyze the correlation between the models' robustness and interpretability using the methods from Section <ref>.
Integrated Gradients
Utilizing the Integrated Gradients method, we produce the attributions in Fig. <ref>. The robust model's explanations are smoother than the standard model's. We note that the robust model focuses on specific parts of the object, and the regions contributing positively to the prediction are not scattered - the body of the horse, the branch of the tree, and the body of the bird. The analysis of the CIFAR-10 models is challenging because of the small size of the input images. Despite that, we report a significant difference between the explanations of the two types of models. More examples are presented in Fig. <ref>, where the distinctive regions in most of them have a positive impact on the prediction of the robust model.
SHAP
The SHAP technique produces plots similar to those of the prior method; however, the feature-importance heat maps are smoother than those of the Integrated Gradients method. The robust model's explanations are based on distinctive regions - the body and the tire of the car; the head, the tail, and the legs of the horse; the outlines of the frog. By contrast, the standard model's decisions are hard to explain, because we observe a spread of pixels with high and low values. Moreover, the robust model's wrong predictions can be explained - considering examples B2 and C3 in Fig. <ref>, we notice that the model concentrates on regions that are not part of the object of attention, which is the reason for the wrong prediction.
Class Specific Image Generation
The images generated by the models are shown in Fig. <ref>. The robust model generates almost complete objects; their colors are natural and the images resemble real objects. On the other hand, the standard model fails to paint features of the specific class in the image. After extensive examination of the examples, we note that the robust model includes distinctive features unique to the class. To show that these visualizations are close to real images in feature space, we apply the FID score (Table <ref>), which confirms that the set of images generated by the robust model is closer to the set of natural images than those generated by the standard model.
Representations Inversion
By approximating the representation vectors of two sets of images, we obtain the plots in Fig. <ref>. The robust model paints features that the original image contains. On the contrary, the standard model produces noise that is not meaningful to humans. This leads us to the conclusion that the standard model can produce many examples with almost identical representation vectors in its feature space; that is not true for the robust model, which completes the task of inverting the source image. Other situations must also be considered, for instance, random-noise source images.
These are not an issue for the robust model. Moreover, we confirm that the images are similar in feature space: Table <ref> lists the l_2 distances between the feature vectors of the original images and the inverted ones.
Direct Feature Visualization
We can likewise visualize single features by maximizing a randomly chosen activation from the representation vector. Fig. <ref> shows the result of maximizing activation 130; we note features specific to the class frog. To verify this, we retrieve the images from the test set that maximally activate it - all of them belong to the class frog. Another example of Direct Feature Visualization is shown in Fig. <ref>.
§.§ Small ImageNet 150
Our first task is to train a ResNet50 model on the non-robust Small ImageNet 150 dataset - the standard model. It converges at 72.12% accuracy on the validation set, which is a fair performance for measuring its robustness and interpretability.
The second model is trained on the robust Small ImageNet 150 dataset generated by 20 iterations of PGD and projection on the l_2 ball with a constraint ε=1.5 and step size σ=2.5*1.5/20. The model converges at 58.87% accuracy on validation natural examples and 37.54% accuracy on adversarial examples generated under l_2. The metric to select the best model is the average of the two accuracies.
Accuracies of the models on the test set are summarized in Table <ref>. The standard model has the highest accuracy on natural examples but the lowest on adversarial examples, whereas the robust model has balanced accuracies on standard inputs as well as on adversaries. Our next task is to analyze the correlation between the models' robustness and interpretability, comparing the two models with the techniques described in Section <ref>.
Integrated Gradients
Utilizing the first technique, Integrated Gradients, we produce the plots in Fig. <ref>. The robust model's heatmaps are more logical and understandable: the model focuses on distinctive parts of the object, for instance the stripes of the tiger, whereas the standard model concentrates on the whole body. The first image is an exception - the circles of the ringlet contribute positively to the prediction of the standard model - but even there the robust model's heatmap is clear, with values concentrated in the distinctive regions. There are some examples in which the robust model fails to recognize the image correctly; even so, the heatmaps are intuitive enough to explain why the model makes a mistake, which is essential in real-world situations. Furthermore, there are samples where the standard model fails but the robust one does not; such examples are presented in Fig. <ref> - D2, E2, H2.
SHAP
We continue the examination of the models with the next local method, SHAP. The SHAP value plots are shown in Fig. <ref>. The robust model's attention is focused on the whole structure of the object, for instance in the example with the moped; the values are concentrated and not spread out like the standard model's. The standard model's decisions are hard to explain - there are no regions with a clearly positive impact on the prediction. Furthermore, there are many similar examples in Fig. <ref>. Sometimes the robust model makes errors whose cause can be identified; for instance, in image D2 the robust model concentrates on the background and not on the padlock, which is the reason for the wrong prediction.
Class Specific Image Generation
We now come to the visualization methods for the learned representations. The model-generated images are presented in Fig. <ref>. The images generated by the robust model are meaningful and resemble natural images, whereas the standard model cannot perform this task well - its visualizations are meaningless to the human eye. Visual inspection alone is not enough to establish the quality of the robust model's images, so we apply feature analysis using the FID score. The score suggests that the robust model's images contain features close to those of natural images in feature space; conversely, the features reproduced by the standard model are not comparable to the features of real objects. The method is inspired by <cit.>, but we apply different loss functions, datasets, and metrics.
Representations Inversion
Applying the Representation Inversion method, the robust model approximates the images while approximating the representation vectors: the robust inversions in Fig. <ref> are visually nearly identical to the originals. On the contrary, the standard model's inversions are not close to the original image, so we conclude that the standard model can produce many examples whose representation vectors are close to the original image's vector. The robust model's images contain features that are part of the original image. To confirm that the images are similar in feature space, we apply the feature extractor and measure the l_2 distance between the feature vectors; the computed distances in Table <ref> confirm the similarity.
Direct Feature Visualization
By maximizing a randomly chosen feature from the representation vector, we obtain the plot in Fig. <ref> (maximized activation 492). The robust model produces a texture specific to the class starfish. Retrieving the test-set images that maximally activate this feature, we find that all of them belong to the starfish class. The standard model, by contrast, fails to perform well in this task. Another example of Feature Visualization can be found in Fig. <ref>.
§ DISCUSSION
The results demonstrate that robust models are more interpretable and more adversarially robust than standard models, despite achieving lower accuracy on natural examples. These models focus on distinctive regions that contribute positively to the prediction, even when the prediction is wrong. Moreover, they produce indicative visualizations and inversions that resemble natural features. Based on the FID score and the l_2 distances between the feature vectors of the inverted and original images, we are confident that they are close in feature space, and we determine that the robust models achieve reasonable results in contrast with the standard ones.
§ FUTURE WORK
The architectures we apply are deep and computationally expensive to operate on mobile devices. Channel pruning <cit.> has been shown to be an effective method for reducing model complexity and enhancing the model's inference time, but it is an open question whether the robust model will stay interpretable after applying this technique.
§ CONCLUSION
The results from our study suggest that the decisions of the robust models are more explainable and meaningful to humans than the predictions of the standard models. Furthermore, the features produced by those models are closer to the natural features of the objects, not only in the visual space but in the feature space too. After applying all proposed techniques, we conclude that decisions about the models cannot be based on just one of the methods: in combination with quality- and similarity-analysis methods, feature visualization techniques provide more generalized information about model interpretability than local methods.
§ ACKNOWLEDGEMENTS
I want to thank Kristian Georgiev and Hristo Todorov for their help as scientific advisors. I would also like to thank Radostin Cholakov for the provided computational resources, Nikola Gyulev for his advice, and the High School Student Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences for supporting the project.
§ CIFAR-10 EXAMPLES
§ SMALL IMAGENET 150 EXAMPLES
|
http://arxiv.org/abs/2307.01041v1
|
20230703142015
|
Practical Non-Invasive Probing Attacks Against Novel Carbon-Nanotube-Based Physical Unclonable Functions
|
[
"Nikolaos Athanasios Anagnostopoulos",
"Alexander Braml",
"Nico Mexis",
"Florian Frank",
"Simon Böttger",
"Martin Hartmann",
"Sascha Hermann",
"Elif Bilge Kavun",
"Stefan Katzenbeisser",
"Tolga Arul"
] |
cs.CR
|
[
"cs.CR",
"cs.ET"
] |
Practical Non-Invasive Probing Attacks Against Novel Carbon-Nanotube-Based Physical Unclonable Functions
Nikolaos Athanasios Anagnostopoulos1, Alexander Braml1, Nico Mexis1, Florian Frank1, Simon Böttger2, Martin Hartmann2,
Sascha Hermann23, Elif Bilge Kavun1, Stefan Katzenbeisser1, and Tolga Arul14
1Faculty of Computer Science and Mathematics, University of Passau, 94032 Passau, Germany
Emails: {anagno02, braml11, mexis01, frank55, kavun01, katzen07, arul01}@ads.uni-passau.de
2Research Center for Materials, Architectures and Integration of Nanomembranes,
and Center for Microtechnologies, Chemnitz University of Technology, 09126 Chemnitz, Germany
Emails: {simon.boettger, martin.hartmann, sascha.hermann}@zfm.tu-chemnitz.de
3Fraunhofer Institute for Electronic Nano Systems (ENAS), 09126 Chemnitz, Germany
Email: [email protected]
4Department of Computer Science, Technical University of Darmstadt, 64289 Darmstadt, Germany
Email: [email protected]
August 1, 2023
========================================================
As the number of devices being interconnected increases, so does also the demand for (lightweight) security. To this end, Physical Unclonable Functions (PUFs) have been proposed as hardware primitives that can act as roots of trust and security. Recently, a new type of PUF based on Carbon NanoTubes (CNTs) has been proposed. At the same time, attacks and testing based on direct electrical probing appear to be moving towards non-invasive techniques. In this context, this work attempts to examine the potential for practical non-invasive probing attacks against the CNT-PUF, a novel PUF based on CNTs.
Our results indicate that direct probing might potentially compromise the security of this PUF. Nevertheless, we note that this holds true only in the case that the attacker can directly probe the wire corresponding to the secret value of each CNT-PUF cell. Thus, we can conclude that the examined CNT-PUFs are rather resilient to direct probing attacks, that non-invasive probing methods appear to be promising for testing such PUFs, and that, in order for the attacker to gain the full-length value of the secret, all the relevant channels would need to be probed. Nevertheless, as our work proves, practical non-invasive attacks against the CNT-PUF are feasible and adequate countermeasures need to be employed in order to address this issue.
Probing; Carbon NanoTubes (CNTs); Physical Unclonable Function (PUF); non-invasive; practical attack;
§ INTRODUCTION
As devices become more and more interconnected, especially in the context of the Internet of Things (IoT), the need for security keeps constantly increasing. To this end, hardware-based security primitives, such as Physical Unclonable Functions (PUFs [The term “Physical Unique Function” can be used to more accurately describe a PUF.]) and True Random Number Generators (TRNGs), have been proposed as potential solutions that can act as security and trust anchors for the devices into which they are incorporated.
In particular, PUFs have been proposed as a hardware-based security solution that allows for the secure generation and storage of cryptographic tokens, such as encryption keys, and thus for the implementation of a wide range of cryptographic protocols. PUFs are physical objects, such as hardware, that possess unique characteristics that are most often induced in the physical object by minor variations during its manufacturing process. Under specific conditions, e.g., when particular input is provided to a hardware device, to which we refer as the PUF challenge, the aforementioned PUF characteristics can be evaluated to acquire what is known as the PUF response, which in the case of hardware-based PUFs is most often binary logical values. Then, due to the inherent randomness, robustness, and uniqueness of each PUF instance, each of the corresponding PUF responses can be utilised for security applications.
Moreover, novel nanomaterials, such as Carbon NanoTubes (CNTs), graphene, and memristors, have recently started being utilised in the production of Integrated Circuits (ICs), not only in experimental chips intended only for research purposes, but even in commercially available products. At the same time, CNTs have recently been proposed for the implementation of PUFs <cit.>, because such PUFs may be more robust and tamper-resistant in comparison to ones based on silicon <cit.>, while additionally being compatible with the complementary metal-oxide-semiconductor (CMOS) fabrication process of electronic devices <cit.>. Such PUFs and their quality characteristics have already been examined in a number of recent works <cit.>.
Nevertheless, apart from the level of security that such PUFs can provide, i.e., the quality characteristics of their responses, also their resilience to attacks needs to be examined. To this end, in this work, we will examine whether non-invasive channel probing attacks can be performed against the CNT-PUFs proposed in <cit.>.
We need to note that, in general, probing is often considered as an invasive attack. However, recent research has shown that probing attacks can be performed in a non-invasive manner, most often utilising laser-based probing techniques <cit.>. Hence, in this work, we examine the potential for an attacker to acquire useful information on the CNT-PUF based on the leakage currents of the Carbon-NanoTube Field Effect Transistors (CNT-FETs) that form this PUF. For this reason, one might potentially also view our leakage-current probing as an attempt for a side-channel attack, to the extent that the leakage current is considered as a side channel.
In any case, however, our probing method is based on direct electrical probing of the leakage currents of the CNT-PUF, which would constitute a potential attack technique that would be extremely easy to perform. Additionally, to the extent that the relevant CNT-FETs used for the realisation of the CNT-PUF, are incorporated into a system as an IC component of its own, e.g., a dedicated CNT-PUF or a combination of a CNT-based sensor and a CNT-PUF, our probing method can actually be considered as rather practical, as long as it can be performed in a non-invasive manner, for example, by probing the relevant pins of the corresponding chip. In particular, only relatively low expertise and low to medium cost, related to the one-off purchase of electrical-measurement equipment, such as a Source-Measurement Unit (SMU), are required.
In this way, our work investigates the ability of an attacker to perform relatively low-cost and low-expertise non-invasive probing attacks in a rather practical manner. In particular, our work makes the following contributions:
* It measures the gate leakage current, I_G-leakage, of cells that correspond to the logical value 1 and of cells that correspond to the logical value 0, and examines whether an attacker can utilise this leakage current to gain information on the nature of a particular cell. Since a global gate is used in the CNT-PUF, i.e., the gates of all the cells are connected using the same wire, this current measurement does not allow the attacker to gain an advantage.
* It measures the combined leakage current at the drains of all cells found in columns other than the selected column and at the sources of all cells found in rows other than the selected row (I_leakage-non_measured_cells), which also, as expected, does not provide the attacker with an advantage, since the measured drain leakage currents tend to even out, as they are not dependent on the nature of the selected cell itself, and thus also not on the logical value to which it corresponds.
* It measures the drain leakage current, I_D-leakage, of a particular cell, which is rather strongly correlated with the nature of each cell, and thus also with the logical value to which each cell corresponds. Thus, in this case, it is shown that an attacker can gain a significant advantage and practically guess the logical value of the corresponding cell with extremely high probability.
* It discusses the presented results and their significance for the security of the CNT-PUF, especially in the context of the concatenation of the drain currents, I_D, of all CNT-PUF cells forming the secret of this PUF, i.e., its response.
The rest of this work is organised as follows. <Ref> presents background information relevant to PUFs in general, and the CNT-PUF in particular. <Ref> briefly explores works related to the two so-far distinct topics of IC probing and CNT-based PUFs. Then, <Ref> examines the potential for attacks against the CNT-PUFs based on non-invasive probing. Finally, <Ref> summarises our results and propose potential future research topics, in order to conclude this work.
§ BACKGROUND INFORMATION ON THE CNT-PUF
As already mentioned, the specific conditions as well as any input or stimulus used in order to measure a PUF are referred to as the PUF challenge, while the output of the PUF, e.g., the concatenation of the values of its cells, is referred to as the PUF response.
In general, the quality of a PUF as a security mechanism is highly dependent on the existence of Challenge-Response Pairs (CRPs) that are unique per device and rather unpredictable. Thus, in order to evaluate the quality of the PUF responses produced, the well-known metrics of the Shannon entropy, the Hamming weight, and the Hamming distance are often employed.
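For reference, these response-quality metrics reduce to a few lines of code. The sketch below assumes the responses are supplied as equal-length 0/1 arrays, and the function names are our own.

```python
import numpy as np

def fractional_hamming_distance(r1, r2):
    """Fraction of differing bits between two equal-length binary PUF responses."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    return np.count_nonzero(r1 != r2) / r1.size

def fractional_hamming_weight(r):
    """Fraction of 1-bits in a binary PUF response (ideally close to 0.5)."""
    r = np.asarray(r)
    return np.count_nonzero(r) / r.size

def shannon_entropy(r):
    """Shannon entropy (bits per bit) of a binary response, from its 1-bit frequency."""
    p = fractional_hamming_weight(r)
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))
```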
The CNT-PUF response is based on the characterisation, through a threshold I_D value, of the cells of a monolithic 12×12 crossbar array of CNT-FETs, as either conductive (acting either as true conductors or as semiconductors) or non-conductive (acting as insulators). The cells belonging to the former category are assigned the logical value of 1, and the cells belonging to the latter category, the logical value of 0, leading to a 144-bit PUF response that has proven to be extremely stable <cit.>.
In particular, the drain current I_D of each CNT-FET is measured under the influence of a gate-source voltage V_GS equal to -2.5V and a drain-source voltage V_DS equal to -1V. Nevertheless, it has been observed that the measurements of I_D inherently incorporate some noise, leading to slightly different values for the same cell, even for the exact same value of V_GS. However, as such measurement values tend to concentrate within a limited region, i.e., to form a cluster, each cell is assigned a logical value based on whether the measured I_D value is above or below a threshold value that is common for all cell measurements. Essentially, cells providing a high I_D, whose CNTs act as true conductors or semiconductors, are assigned the logical value of 1, and cells providing a low I_D, whose CNTs act as insulators, are assigned the logical value of 0.
The concatenation of the logical values assigned to all the cells forms the response of this PUF, with the provided V_GS and V_DS, and the order in which the CNT-FET cells are measured, as well as other relevant conditions, such as the ambient temperature, forming the relevant PUF challenge. As it is evident, this method allows only for the production of a single 144-bit binary response for each crossbar array.
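To make the response derivation concrete, a small sketch is given below. The array layout, the use of the absolute drain current, and the row-by-row concatenation order are our assumptions for illustration, not the exact read-out procedure of the device.

```python
import numpy as np

def cnt_puf_response(drain_currents, threshold):
    """Classify each of the 12x12 cells as conductive (1) or non-conductive (0)
    by comparing |I_D| with a common threshold, then concatenate to 144 bits."""
    bits = (np.abs(np.asarray(drain_currents)) > threshold).astype(np.uint8)
    return "".join(str(b) for b in bits.flatten())       # 144-character bit string
```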
§ RELATED WORK
In general, there appear to be no works relevant to testing in depth the security of PUFs based on CNTs by physically probing them so far, while it has also been claimed that micro-probing could easily break down a PUF based on CNTs and, thus, destroy its information <cit.>. Thus, we will briefly examine here the two so-far distinct topics of CNT-based PUFs and physical probing.
As already mentioned, CNT-based PUFs and their quality characteristics have been examined in a number of recent works. In particular, rather robust CNT-based PUFs have been proposed in the literature, but their security has only minimally been examined.
In 2016, Hu et al. <cit.> investigated the ability of self-assembled CNTs arranged in a 64×40 crossbar structure to serve as either a binary or a ternary PUF. For both cases, a maximum fractional intra-device Hamming distance of ≈10% is reported. Additionally, the work of Hu et al. examines the stability of the relevant CNTs through measurements at 25C and at 85C. Thus, the security that such PUFs provide is rather well-tested, but their own security is only briefly examined in this work.
In 2017, Moradi et al. <cit.> proposed novel CNT-based PUF types utilising either the voltage or the current output of CNT-FETs. This work provided simulation results for the temperature range between 0C and 100C, demonstrating reliability values that would correspond to fractional intra-device Hamming distances of at least 3.33% for the voltage-based PUF and of at least 12.72% for the current-based PUF. Thus, the security that these PUFs provide is rather proven, but their resilience to attacks is not considered.
In 2018, Liu et al. <cit.> discussed the combination of a CNT-FET crossbar structure with the Lorenz chaotic system, in order to provide a PUF that would be resistant to machine-learning attacks. Only simulation results are provided and the work focuses on the uniqueness, randomness, and unpredictability of the PUF responses, with no results being reported regarding the stability of these responses. The authors consider machine-learning attacks, namely attacks based on the employment of Support Vector Machines (SVMs), Deep Belief Networks (DBNs), Linear Regression (LR), and Evolution Strategies (ES), performed both against the CNT-based PUF on its own and its combination with the Lorenz system of equations. In general, the authors rather prove that the Lorenz chaotic system cannot be machine-learned using the relevant methods tested, and thus its combination with the CNT-based PUF also cannot be machine-learned.
However, it is worth noting that the CNT-based PUF tested is a weak PUF, i.e., it has ideally only one CRP, and thus should always be machine-learned with 100% accuracy. Nevertheless, the authors of <cit.> report that this is not the case for all the machine-learning methods tested. This could be attributed to the simulated CNT-based PUF responses being noisy, as would be the case in reality, but the authors do not clarify if that is indeed the case.
It is also rather notable that it is claimed in <cit.> that micro-probing could easily destroy a CNT-based PUF; a claim that is not truly explored and which would only be relevant to invasive probing. Another bold claim of this work is that CNT-based PUFs are difficult to clone and, therefore, resistant to physical attacks. Since the response of CNT-based PUFs is dependent only on whether each relevant PUF cell is conductive or insulative, this claim is rather easy to disprove. In this work, we prove that a CNT-PUF cell's state can be measured through the drain leakage current, i.e., even when the cell is not triggered with a gate-source voltage difference, and thus can potentially be easily cloned, as long as an attacker has physical access to the relevant wire(s) of the device.
A work by Moon et al. <cit.> regarding PUFs produced by all-printed CNT networks, which was published in 2019, demonstrated no significant changes regarding the reliability of the fabricated CNTs over 10,000 measurement cycles and 14 days. However, the relevant resistance characteristic of individual CNTs was reported to have changed up to 16.7%, and results for temperature variations between 25C and 80C exhibited an average difference of up to 30%. Moreover, the authors proved that this PUF is robust and stable against tampering attacks such as high-temperature baking, light illumination, and radiation exposure. Nevertheless, the authors do not examine direct electrical probing of the PUF, and note that a local physical attack would destroy the device without gaining access to the secret, rather referring to an invasive attack. On the contrary, our work shows that a non-invasive probing attack can potentially access the device's secret, as long as the relevant wires and signals are not adequately protected and/or monitored.
Also in 2019, Burzurí et al. <cit.> examined the ability of single-walled carbon nanotubes to be used for the creation of PUFs. This work demonstrated good results regarding the uniqueness and the robustness of the fabricated PUFs, reporting fractional intra-device Hamming distances of 6.3% after two weeks and 8.3% after two months. However, this work also does not consider the resilience of the relevant PUF to attacks.
Finally, most recently, another work by Böttger et al. <cit.> proposed the CNT-PUF, a highly robust PUF, whose response is, however, not fully stable. The authors propose identifying the few unstable bits during enrollment and excluding them from the production of future responses, which almost always results in robust responses thereafter. An average fractional intra-device Hamming distance of only 1% is reported for the I_D threshold that provides the highest robustness, while an average fractional intra-device Hamming distance of 3% is reported for other less optimal values of such a threshold value. Again, however, no attacks against this PUF are considered.
When probing methods are considered, we can distinguish between invasive and non-invasive methods. Regarding the former type of probing, the relevant literature, e.g., <cit.>, appears to suggest that its employment in the context of CNT-PUFs would not be effective and/or the relevant probing attempt would be detectable. Hence, non-invasive probing would be the preferable way of probing these PUFs, both in the context of an attack as well as for testing purposes.
To this end, we have already noted that non-invasive probing is most often realised using laser-based probing techniques <cit.>. Nevertheless, depending on the case, other non-invasive probing techniques may be possible. For example, in 2010, Fuhrmann et al. <cit.> utilised surface acoustic waves as probing means to measure in a non-invasive manner the persistent photoconductivity of a ZnCdSe/ZnSe quantum-well heterostructure on a LiNbO_3 substrate.
Moreover, in 2017, Sugawara et al. <cit.> proposed utilising a detection technique based on non-invasive laser probing as a side channel both to the advantage of a malicious attacker as well as for testing and fault analysis. More information on laser fault injection and countermeasures can be found in <cit.> and other works.
More recently, in a work from 2019, Rahman and Asadizanjani <cit.> assessed back-side attacks against instances of System-on-a-Chip (SoC) technology, including electro-optical probing, laser fault injection, and other similar techniques, as well as relevant back-side testing methods. The authors noted that such semi-invasive and non-invasive probing techniques allow for run-time testing from the back side of a chip, which is most often left unprotected. The authors also classify the different adversaries into a number of categories, based on their having access to the chip design itself or not, and on their expertise.
Finally, in 2022, Sakamaki et al. <cit.> demonstrated the non-invasive probing measurement of transmission lines on CMOS chips from 100 MHz to 500 GHz. The authors managed to reduce probe skating down to 10μ m using a precision-controlled probe station, which allowed for the use of extremely small contact pads, and helped preserve the characteristics of the contact material even after repeated probe touchdowns that normally would have worn out the pads. The authors suggest that non-invasive probing can be used to characterise Complementary Metal–Oxide–Semiconductor (CMOS) passive devices, which do not require Direct Current (DC) biasing.
Generally, different invasive probing techniques have been examined rather thoroughly in a number of works. For example, in 1999, Kuhn and Kömmerling <cit.> examined the physical security of smartcards, including tampering methods, such as invasive micro-probing. It is important to note that this work examined also the exact costs of such attacks; an aspect of (physical and/or other) attacks that is most often neglected. This work was an extension of the one by Kömmerling and Kuhn <cit.>, where the relevant topics had been first discussed.
In 2011, Skorobogatov <cit.> broadly examined physical attacks and tamper resistance, focusing, among others, on invasive micro-probing, while also discussing optical probing and laser-based techniques.
Moreover, in 2013, Helfmeier et al. <cit.> examined back-side invasive micro-probing attacks for invasive IC analysis, editing, and other similar applications.
In 2017, Shi et al. <cit.> reviewed invasive probing attacks and studied potential countermeasures and designs against them.
Finally, in 2018, Rahman et al. <cit.> had a work published on physical inspection and attacks, which, among others, discusses both electrical and optical probing, considering the former as an invasive method and the latter as a non-invasive technique.
In any case, we note that none of these works breached, let alone addressed, the subject of probing CNT-based PUFs, especially in a non-invasive manner.
§ PRACTICAL NON-INVASIVE PROBING ATTACKS AGAINST CNT-PUFS
In this section, we first consider an attacker model relevant to non-invasive probing attacks in the context of CNT-PUFs, then, discuss the regular way in which such PUFs are measured, and, subsequently proceed to present potential non-invasive attacks against such PUFs, discuss the relevant results and their significance, as well as possible countermeasures against such attacks.
§.§ Attacker Model
In this work, we consider an attacker with physical access to the device containing the CNT-PUF. The attacker can probe wires relevant to the CNT-PUF in a non-invasive way, in order to provide DC voltage and measure the relevant electrical currents. We assume that the CNT-PUF may be a standalone chip, which may also potentially be incorporated into a larger system. Then, the attacker may directly probe the pins of the CNT-PUF chip, or any wires connected to them, in a non-invasive way, as long as these are kept accessible or can be accessed in such a way, e.g., through laser-based probing.
The attacker would need to have access to relevant electrical-measurement equipment, including an SMU, as the current levels are extremely small, on the order of 10^-12 A, i.e., a few picoamperes, in the worst case. Additionally, a DC power supply of regular (not necessarily high) precision may also be required. The cost for the one-off purchase of such equipment may be a few thousand euros or, at most, tens of thousands of euros. Thus, our attack may be considered a low-to-medium-cost attack, which can nevertheless be performed by an individual. Such an individual needs to possess some information regarding the way in which the CNT-PUF works, which, however, is publicly available and does not really require the expertise of a specialist. This means that a person of low expertise, with basic knowledge in the field of electricity, can perform the attacks described in this work. In particular, the most challenging issue the attacker needs to address is deciding, for every type of leakage-current measurement, on an adequate threshold to separate currents corresponding to conductive cells from those of non-conductive cells. This decision may require some experience, which the attacker is bound to acquire quickly (even in the field) after a number of cells have been measured. Experience may also be gained by measuring the attacker's own CNT-PUF device before attacking the target device; nevertheless, this is not truly necessary for the attack to be successful.
Thus, the attacker may be a person of low expertise with access to medium-cost electrical-measurement equipment, which he/she does not need to own, and with physical access to the CNT-PUF device.
In general, as our probing attacks are based on direct electrical probing of the leakage currents of the CNT-PUF, we believe that they constitute practical attacks that are extremely easy to perform and that also require only a rather small amount of time, on the order of minutes.
§.§ Regular Measurement Procedure
It is important to first examine the regular way in which the CNT-PUF structure may be measured, in order to provide the reader with some insights into the way the proposed attacks may work out and why we consider them as practical.
In order to measure each cell of a CNT-PUF, the relevant row and column are selected, and the source of the cell is connected to the ground, while a low level of voltage (typically 0.5V or 1V) is provided to the drain. Without loss of generality, also -0.5V or -1V may be provided to the drain. The sources and the drains of all the cells found in rows and columns that have both not been selected are provided with the same level of voltage as the drain of the selected cell, in order to exactly avoid current leakage and other parasitic phenomena among the different rows and columns. Then, a certain voltage level is provided to the global gate, typically of 2V or 2.5V, which would lead to all the cells being turned on [This is exactly the reason why the sources and the drains of the cells that are in rows and columns that have both not been selected should be provided with the same voltage, in order to avoid that these cells conduct; in this way, we ensure that these cells have the least possible effect on the measurement. Unavoidably, however, while all the cells in the selected row other than the selected cell will also not conduct, as both their sources and drains are at the same voltage level, all the cells in the relevant selected column will conduct, as their sources are grounded and their drains are at the voltage level of the drain of the cell to be measured.]. Again, without loss of generality, also -2.5V or -2V may be provided to the gate. At this point, the drain current of the selected cell is measured, and based on its value the cell can be classified as conducting or non-conducting, and thus be assigned the logical value of 1 or 0, respectively.
The measurement setup can be seen in <Ref>. There, an Arbitrary Waveform Generator (AWG) provides the gate voltage, for faster testing. The layout of a CNT-FET, which forms a CNT-PUF cell is shown in <Ref>. Hence, the equivalent cell layout of the employed measurement setup is illustrated in <Ref>.
§.§ Examined Non-Invasive Probing Attacks
As already described, we test three different probing measurements:
§.§.§ Measuring the Gate Leakage Current of the Cells Selected
We measure the gate leakage current, I_G-leakage, of cells selected for measurement, by each time setting the relevant cell's V_DS=0, by grounding both its source and its drain. We also set V_GS=2, while the sources and the drains of the cells in rows and columns not selected are provided with 0.5V, leading these cells to also exhibit V_DS=0. Then, we measure the overall I_G for the global gate, which, however, corresponds to the gate leakage of not only that cell, but also of all the cells in the rows and columns that have both not been selected. The latter cells also have some voltage provided to their sources and drains, while the drain and the source of the selected cell are grounded. At the same time, the cells in the same row as the selected cell but on non-selected columns are turned on, with V_DS=0.5, while the cells in the same column as the selected cell but on non-selected rows are turned on, with V_DS=-0.5. Due to the crossbar structure of the CNT-PUF, and the use of a global gate, it is impossible to measure the gate leakage of only a single cell. However, in the future, the sources of the cells in non-selected rows could also be grounded, in order to have most of the cells turned on, and only the cells in the selected column not turned on. The equivalent cell layout for this probing measurement is shown in <Ref>. <Ref> illustrates the results relevant to such probing measurements for two semiconducting and two non-conducting cells [For reasons of brevity and clarity, but without loss of generality, we present results for only four cells, two semi-conducting ones and two non-conducting. Moreover, we chose to present results for only semi-conducting and non-conducting cells, as the characteristics of semi-conducting cells are rather closer to those of non-conducting cells than the characteristics of fully conducting cells would have been.]. As it is clear, there is no way to distinguish between semi-conducting and non-conducting cells using this method.
§.§.§ Measuring the Leakage Current of the Cells Found in Rows and Columns that Have Not Been Selected
In the second probing measurement method, we utilise a probe to measure the combined leakage current at the drains of all cells found in columns other than the selected column and at the sources of all cells found in rows other than the selected row (I_leakage-non_measured_cells). To this end, we ground the gate, and provide 0.5V to the sources and the drains of the cells in rows and columns not selected, leading these cells to exhibit V_DS=0. At the same time, we ground the source of the selected cell, which would normally be measured, and provide 0.5V to its drain. This means that all the cells in the selected row have V_DS=0.5, while both the sources and the drains of the rest of the cells are provided with 0.5V, so that they exhibit V_DS=0. As the global gate is grounded, none of the cells is expected to be turned on. The equivalent cell layout for this probing measurement is shown in <Ref>. <Ref> illustrates the results relevant to such probing measurements for two semiconducting and two non-conducting cells <ref>. It is clear that, again, there is no way to distinguish between semi-conducting and non-conducting cells using this method.
Additionally, similar results occur when either of the following changes is made: the drain of the selected cell is also grounded, or the drains of all cells found in columns other than the selected column and the sources of all cells found in rows other than the selected row are grounded, so that cells found in rows and columns that have both not been selected exhibit V_DS=0. For the former case, the relevant equivalent cell layout is shown in <Ref> and the relevant measurement results in <Ref>, and for the latter case, the relevant equivalent cell layout is shown in <Ref> and the relevant measurement results in <Ref>.
§.§.§ Measuring the Drain Leakage current of the Cells Selected
We measure the drain leakage current, I_D-leakage, of cells selected for measurement, by grounding the global gate. Each time, we also ground the selected cell's source and provide 0.5V to its drain. The sources and the drains of the cells in rows and columns not selected are also provided with 0.5V, leading these cells to exhibit V_DS=0. Then, we measure the drain current of the selected cell. As the global gate is grounded, none of the cells is expected to be turned on. The equivalent cell layout for this probing measurement is shown in <Ref>. <Ref> illustrates the results relevant to such probing measurements for two semiconducting and two non-conducting cells <ref>. It is evident that, in this case, the nature of the cells can clearly be distinguished using these measurements.
§.§ Discussion on the Results Presented and Potential Countermeasures
Our results indicate that non-invasive direct electrical probing can potentially compromise the security of this PUF. At the same time, however, as our results clearly show, the only way for an attacker to gain meaningful information on the nature of a CNT-PUF cell through non-invasive probing is to measure its drain leakage current. Nevertheless, as the drain current of a cell essentially forms its secret, measuring the drain leakage current is rather equivalent to being granted access to the secret itself.
To this end, it is clear that any potential countermeasure should aim to disallow access to the drains of the cells, as these carry the secret. Thus, potential countermeasures may include burying the relevant wire under other wires and metal layers in the Printed Circuit Board (PCB) of a larger system, if the CNT-PUF is directly incorporated into the PCB, or including the relevant characterisation circuits on the CNT-PUF chip, so that there is no need for the drain current to be transmitted over one of the chip's pins, if the CNT-PUF is incorporated into a chip that is soldered to the relevant PCB.
In any case, we do note that the attacker needs to decide on an adequate threshold in order to distinguish between the drain leakage currents from conductive cells and those from non-conductive cells. This may require some experience, which, we believe, however, that it is easy to gain even during an attack itself.
Moreover, we believe that the proposed attack technique is rather easy to perform, does not incur extremely high costs, and requires only a relatively small amount of time, in the order of minutes. Thus, the presented attack method can be considered as rather practical. Even if the presented method can be mitigated as an attack, it can, at the same time, serve as an adequate testing method for detecting defective non-conducting CNT-FETs and/or potentially short-circuited cells.
Thus, we can easily conclude that the examined CNT-PUFs are rather resilient to direct probing attacks, and that, in order for the attacker to gain the full-length value of the PUF response, all the relevant drain wires would need to be probed, which could potentially be prevented. At the same time, however, the described non-invasive probing methods appear to be promising for testing such CNT-based structures for the detection of faults.
§ CONCLUSION
In this work, we have presented a number of practical non-invasive attack methods against the CNT-PUF, a novel carbon-nanotube-based physical unclonable function. We have examined these attack techniques and proven that most of them do not provide the attacker with any advantage. At the same time, however, we have shown that by measuring the drain leakage current of each CNT-PUF, the attacker can, with high probability, fully predict the CNT-PUF's response, i.e., its secret.
To this end, and as non-invasive probing attacks are becoming more and more available, we suggest adequately protecting the relevant wires and/or pins that may provide access to a CNT-PUF cell's drain.
In general, we can easily conclude that the examined non-invasive direct probing methods have a higher potential as testing techniques rather than as truly efficient attacks. Nevertheless, even though the examined CNT-PUFs have proven to be rather resilient to direct probing attacks, such attacks are feasible and adequate countermeasures need to be employed in order to address this issue.
As part of future work, it could be examined for the first probing method proposed, if grounding the sources of the cells in non-selected rows, in order to have most of the cells turned on, and only the cells in the selected column, including the cell selected for measurement, turned off, would have any effect on the attacker's ability to distinguish between conducting and non-conducting cells through the gate leakage current.
§ ACKNOWLEDGMENT
This work has been partially funded by the Interreg VI-A Programme Germany/Bavaria–Austria 2021–2027 – Programm INTERREG VI-A Bayern–Österreich 2021–2027, as part of Project BA0100016: “CySeReS-KMU: Cyber Security and Resilience in Supply Chains with focus on SMEs”, co-funded by the European Union, and by the German Research Foundation – Deutsche Forschungsgemeinschaft (DFG), under Projects 440182124: “PUFMem: Intrinsic Physical Unclonable Functions from Emerging Non-Volatile Memories”, and 439892735: “NANOSEC: Tamper-Evident PUFs Based on Nanostructures for Secure and Robust Hardware Security Primitives” of the Priority Program – SchwerPunktProgramme (SPP) 2253: “Nano Security: From Nano-Electronics to Secure Systems”.
|
http://arxiv.org/abs/2307.02859v1
|
20230706085112
|
Emergence of half-metallic ferromagnetism in transition metal substituted Na$_{0.5}$Bi$_{0.5}$TiO$_3$
|
[
"Chandan Kumar Vishwakarma",
"B. K. Mani"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
Department of Physics, Indian Institute of Technology,
Hauz Khas, New Delhi 110016, India
[email protected]
Department of Physics, Indian Institute of Technology,
Hauz Khas, New Delhi 110016, India
The multifunctional materials with prominent properties such as electrical, ferroelectric, magnetic, optical and magneto-optical ones are of keen interest for several practical applications. In the roadmap of designing such materials, in the present work, using density functional theory based first-principles calculations, we have investigated the functional properties of transition-metal-substituted NBT. Our calculations predict the emergence of half-metallic ferromagnetism in the system. A nonzero magnetic moment of 1.49 μ_B/f.u. is obtained for a 25% concentration of Ni. Our data on optical properties for pure NBT are in excellent agreement with available theory and experiments. For Ni-NBT, we observe a diverging nature of the static dielectric constant, which could be attributed to the induced metallic character of the material. Our simulations of MOKE predict a significant Kerr signal of 0.7^∘ for a 6.25% Ni concentration.
Emergence of half-metallic ferromagnetism in transition metal
substituted Na_0.5Bi_0.5TiO_3
B. K. Mani
August 1, 2023
============================================================================================
§ INTRODUCTION
The development of multifunctional materials with two or more properties, such
as magnetic, ferroelectric, piezoelectric, and optical, has received a lot of
interest in recent years <cit.>. These materials
have the potential to revolutionize various industry applications, including
healthcare, energy and electronics <cit.>.
In the search for such materials, sodium bismuth titanate,
Na_0.5Bi_0.5TiO_3 (NBT), has received
more attention than any other lead-free ferroelectric due to its
tendency to show multifunctionality through various
mechanisms <cit.>.
NBT is a complex perovskite oxide with two cations (Na^+ and Bi^3+) on the A-site
and one cation (Ti^4+) on the B-site, with rhombohedral symmetry at room
temperature <cit.>.
It exhibits various anomalous properties associated with site-specific substitutions,
including improved ferroelectricity and piezoelectricity, magnetism and optoelectronic
properties <cit.>.
The presence of Ti at the B site provides a strategy to introduce ferromagnetism
by substituting a transition metal (TM) at the B-site. In experimental
studies, Refs. <cit.> and <cit.>, ferromagnetism at room
temperature was observed for Fe and Co-doped NBT, respectively.
In a similar experimental work by Dung et al., a room-temperature
ferromagnetism was reported for Ni-doped NBT <cit.>.
The maximum magnetization value reported was around 0.91 μ_ B/Ni
for 9% of Ni concentration at 5 K. Moreover, it was also observed in the
same study that the optical bandgap decreases with Ni-concentration.
However, in a different experimental study, by Pradhan et al.,
the optical band gap was observed to increase with Ni
concentrations <cit.>.
The contradictory trends in the experimental data suggest a lack of understanding
of the optical behavior of TM-doped NBT. In addition, to the best of our knowledge,
there are no data from theory simulations on probing magnetism in TM-doped NBT.
It can thus be surmised that there is a need for a systematic theoretical
study to understand the underlying mechanism behind the multifunctional
properties in TM-doped NBT.
The present study aims to probe, with the help of state-of-the-art
first-principles calculations, the electronic, magnetic, optical and
magneto-optical properties of NBT and TM-substituted NBT. More precisely,
we aim to address the following questions:
i) What is the impact of Ni substitution on the electrical and optical properties of NBT?
ii) What mechanism lies behind the advent of magnetic degrees of freedom in Ni-substituted NBT?
iii) How does this introduced ferromagnetism couple with the dielectric properties of NBT?
To assess the coupling between magnetic and optical degrees of freedom, we have
examined the linear magneto-optic Kerr effect in the polar geometry, in which the
spin and incident photons are perpendicular to the sample surface.
This configuration of the Kerr effect is the most favorable way to trace
the magneto-optical properties experimentally <cit.>.
The text of the paper is organized into four sections. In Section II, we provide
a brief description of the computational methods used in our calculations.
In Section III, we present and analyze our results on electronic structure, magnetic,
optical, and magneto-optical properties for NBT and Ni-substituted NBT.
The summary of our findings is presented in the last Section.
§ COMPUTATIONAL METHODOLOGY
Probing the structure and emergent properties of transition-metal-substituted NBT
requires an accurate treatment of interstitial effects
in the material at the atomic scale.
For this, we have performed ab-initio spin-polarized calculations
using density functional theory (DFT) as implemented in the Vienna ab-initio
simulation package (VASP) <cit.>. To account for the exchange
correlation among electrons, we used the Perdew-Burke-Ernzerhof (PBE) <cit.>
variant of the generalized-gradient-approximation pseudopotential.
To account for the strongly correlated 3d electrons of Ni, we have
incorporated the Hubbard U correction <cit.> in our calculations.
The value of U, 11.57 eV, is computed self-consistently using density
functional perturbation theory (DFPT), employing
the approach of Cococcioni et al. <cit.>.
A rhombohedral supercell of size 2×2×2 with 80 atoms is
used to incorporate various concentrations of Ni.
All the structures were optimized using full relaxation calculations
up to a 10^-4 eV Å^-1 force tolerance. For this, we used the conjugate-gradient
algorithm with a Monkhorst-Pack <cit.> k-mesh
of 5×5×5. For the self-consistent-field (SCF) calculations,
the Brillouin zone was sampled with a 9×9×9 k-mesh.
The energy convergence criterion is maintained at 0.001 meV, and
the plane-wave energy cutoff used was 600 eV. The real and imaginary
parts of the dielectric function are calculated using DFPT as
implemented in VASP.
§ RESULTS AND DISCUSSIONS
§.§ Crystal Structure
The structural parameters for pure NBT were taken from the experimental crystal
structure (space group R3c) data <cit.> and optimized further through
the full relaxation calculations to achieve a minimum energy configuration. Our
computed lattice parameters and Wyckoff positions are given in Table <ref>,
along with data from the literature for comparison <cit.>.
As observed from the table, our computed lattice parameter of 5.65 Å is
in good agreement with the experimental value of 5.51 Å <cit.>.
The slightly larger value can be attributed to the use of the
GGA functional in our calculation <cit.>.
To incorporate various Ni concentrations in Na_0.5Bi_0.5[Ti_1-xNi_x]O_3
(Ni-NBT), we used the optimized NBT structure and created a 2×2×2 supercell. We
investigated the properties for x = 0.0625, 0.125,
0.1875, and 0.25 concentrations of Ni.
The Ni-NBT structures were fully optimized again using a force tolerance
of 10^-5 eV Å^-1.
From our simulations we find that all Ni-NBT structures crystallize in
rhombohedral (R3m) phase. In Fig. <ref>, we have shown the crystal
structures for NBT (panel(a)) and 0.25Ni-NBT (panel(b)).
The optimized lattice parameters for the chosen concentrations of Ni are
given in Table <ref>. To the best of our knowledge, there are no
experimental or other theory data for lattice parameters for Ni-NBT available
in the literature for comparison.
§.§ Electronic Structure and Ferroelectric Properties
In Fig. <ref> we have shown the spin-polarized electronic band
structures of NBT (panels (a) and (b)) and 25Ni-NBT crystals (panels (c)
and (d)). We chose to report the data for the highest concentration of Ni as
it has the largest effect on the computed properties. The corresponding data
for other concentrations are, however, provided in the supplementary material.
As observed from panels (a) and (b) of the figure, NBT exhibits a
direct band gap at the Γ point for both spin channels.
The calculated band gap of 2.57 eV is in good agreement with the
theoretical value of 2.82 eV reported in <cit.>.
The wide band gaps observed for both spin channels suggest the semiconducting
nature of the NBT crystal and are consistent with the data reported in the
literature <cit.>.
For 25Ni-NBT, however, we observe an asymmetry between the majority and
minority spin channels (panels (c) and (d)). For the majority spin, the Fermi level
lies in the valence band, indicating a metallic nature, whereas for the minority
spin sub-band a large band gap of ∼ 2.56 eV is obtained, which resembles
the electronic structure of NBT shown in panel (b). This mixed nature of the
electronic structure indicates a half-metallic character of 25Ni-NBT. Similar
electronic structures are also obtained for the other concentrations of Ni.
To get further insight into the half-metallicity in Ni-NBT, we examined
the atom-projected and orbital-projected electronic structures of NBT and
25Ni-NBT. The resulting band structures and densities of states (DoS) are shown
in Figs. <ref> and <ref>, respectively.
For NBT, as discernible from the panels (a) and (b) of Fig. <ref>, the
valence band for both the spin channels have dominant contributions from O,
where 2p-electrons contribute the most. For the conduction band, however,
the most significant contribution comes from the 3d-electrons of Ti.
This observed nature of the electronic structure of NBT is consistent
with the reported trend in Refs <cit.>.
For 25Ni-NBT, in the majority spin channel, the bands around the Fermi
energy are of mixed O and Ni character, with the O contribution more prominent
than that of Ni at the Fermi energy (panels (a), (c) in Fig. <ref>). This is
also consistent with the atom-projected DoS shown in Fig <ref>(b).
At the Fermi energy, O contributes ≈ 70%, which mainly comes from
the 2p-electrons, whereas the contribution from Ni (mostly from the 3d-electrons)
is about 20% of the total value. As in NBT, the
conduction band is dominated by the 3d-electrons of Ti.
For the minority spin sub-band, a significant contribution of O to the valence
band below the Fermi level is observed, which comes from the 2p-electrons.
However, unlike the majority spin sub-band, there is a negligible contribution
from Ni in the bands below the Fermi level. Using the number of electronic
states at the Fermi level for both spin
channels, we calculated the spin polarization, which comes out to be 100%.
The non-zero electronic states at the Fermi level for the majority spin and the
wide band gap for the minority spin confirm the half-metallicity of
Ni-substituted NBT. A similar trend of half-metallicity and spin polarization
is also observed for Fe-NBT.
Next, we examine the ferroelectric properties in NBT and Ni-NBT. For this, we
calculated the remanent polarization for NBT and Ni-NBT. The electronic contribution
to polarization was computed using the Berry phase
approach <cit.>. NBT is a well-known lead-free ferroelectric
material with experimentally reported remanent polarizations of
38 μ C/ cm^2 <cit.>,
32 μ C/ cm^2 <cit.> and
42.4 μ C/ cm^2 <cit.> along [001] pseudo-cubic
direction. From theory calculations, the reported value of spontaneous
polarization, P_ S, is 26 μ C/ cm^2 <cit.>.
Our computed value of P_ S is 49 μ C/ cm^2. The reason for the
difference from experiment could be attributed to the fact that the
reported experimental polarizations are at room temperature, whereas the
computed values are at 0 K.
The spontaneous polarization decreases after Ni substitution. We obtained
P_ S values of 33.7 and 29.7 μ C/ cm^2 for 12.5 and 25% of Ni, respectively.
This decrease could be attributed to the increasing metallic
character introduced by the Ni substitution.
§.§ Optical Properties
Next, we investigate the optical properties of NBT and Ni-NBT. For this, we
calculated the frequency-dependent complex dielectric function using Ehrenreich
and Cohen's equation <cit.>, ϵ(ω) = ϵ_1(ω)
+ iϵ_2(ω), where ϵ_1 and ϵ_2 are the real
and imaginary parts, respectively.
The complex dielectric function of a material is a key quantity and can be
used to probe several of its fundamental properties. The imaginary
part of the dielectric function can be calculated within linear
response theory <cit.> as
ϵ_2(ω) = (2π e^2)/(ϵ_0 Ω) ∑_k, v, c δ(E_k^c - E_k^v - ħω) |⟨Ψ_k^c|(𝐧̂·𝐫)|Ψ_k^v⟩|^2.
Here, the indices k, v and c represent the wave vector, valence
and conduction bands, respectively. The states |Ψ_k^v ⟩ and
|Ψ_k^c⟩ are the wavefunctions associated with valence
and conduction bands, respectively, and, E_k^v and E_k^c are the
corresponding energies. The constants, e, Ω and ϵ_0 are
the charge of the electron, volume of the cell and the permittivity of the
free space, respectively. The operator 𝐧̂ represents the
direction of the applied electric field. The real component of the
dielectric constant can be derived from the imaginary component using
the Kramers-Kronig relation <cit.>
ϵ_1(ω) = 1 + (2/π) P ∫_0^∞ [ ϵ_2(ω') ω' / (ω'^2 - ω^2 - iη) ] dω',
where P is the principal value and η is an infinitesimal broadening
associated with the adiabatic switching of the dielectric perturbation.
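As a concrete illustration, the principal-value integral above can be evaluated numerically on a discrete frequency grid. The following Python sketch is only illustrative and is not the DFPT routine implemented in VASP; the grid, the broadening η, and the toy ϵ_2 spectrum are our own assumptions.

import numpy as np

def kramers_kronig_eps1(omega, eps2, eta=1e-3):
    # eps1(w) from eps2(w) via a discretized principal-value integral;
    # eta regularizes the singular point w' = w (adiabatic broadening).
    domega = omega[1] - omega[0]
    eps1 = np.ones_like(omega)
    for i, w in enumerate(omega):
        denom = (omega**2 - w**2) + 1j * eta
        eps1[i] += (2.0 / np.pi) * np.sum(np.real(eps2 * omega / denom)) * domega
    return eps1

# toy spectrum: a single absorption peak near 4.2 eV (illustrative only)
w = np.linspace(0.01, 20.0, 2000)
e2 = 5.0 * 0.3 / ((w - 4.2)**2 + 0.3**2)
e1 = kramers_kronig_eps1(w, e2)
print("eps1(0) ~", e1[0])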
The ϵ_1 (ω) and ϵ_2 (ω) for NBT and Ni-NBT from our calculations
along with the available experimental data are shown in Fig. <ref>.
For NBT, as discernible from panels (a) and (b), our calculated
real and imaginary components of dielectric function are in good agreement
with the experiment <cit.>. The slight deviation could be attributed
to the temperature effects in experiment, as the reported experimental data are
at room temperature.
Inspecting the real component more closely, the value of the static dielectric
constant, ϵ(0), is 6.5. This is consistent with the values reported
in a previous theoretical calculation <cit.> and in experiment <cit.>.
This relatively high value of ϵ(0) suggests NBT as a potential
candidate for light-harvesting applications <cit.>.
Another important characteristic of the real spectrum is the negative values at
higher energies. This trend is consistent with previously reported
theoretical data <cit.> and suggests NBT as a potential candidate
for plasmonic applications <cit.>.
Examining the imaginary component more closely, we observe one preeminent
peak at ∼ 4.2 eV and four low-intensity secondary peaks at ∼ 5.7,
∼ 6.4, ∼ 7.7, and ∼ 9.0 eV energies. The primary peak originates
from the interband transitions from O-2p to Ti-3d and Bi-6p states. The
secondary peaks, however, embed major contributions from the O-2p to Na-2s/2p
transitions. Our calculated ϵ_2 (ω) spectrum is in qualitative
agreement with the reported theoretical data <cit.>. The onset of
ϵ_2 (ω) spectrum suggests the optical band gap of NBT as
≈2.57 eV, which is consistent with the direct electronic bandgap
discussed in previous section.
Considering the case of Ni-NBT, as discernible from the panels (c) and (d),
ϵ_1 (ω) and ϵ_2 (ω) show a similar trend as NBT at
higher energies (above 2.5 eV). And, as can be inferred from Fig. <ref>(d),
the reason for the observed peaks is attributed to the same interband
transitions, O-2p to Ti-3d and Bi-6p. In the low energy regions (below 0.5 eV),
however, we observe a diverging nature of ϵ_1 (ω). The reason for this
could be attributed to the half-metallic nature of Ni-NBT. The value of static
dielectric constant is observed to increase with Ni concentration, and the highest
value of ≈ 61 is obtained for 25% concentration. Consistent with the
trend of ϵ_1 (ω), ϵ_2 (ω) shows sharp peaks below
0.5 eV, with increasing amplitudes with Ni-concentration.
To get more insight and compare with experimental observations, we have examined
the absorption coefficient, α, for NBT and Ni-NBT.
In addition, we have extracted the optical bandgap, E_g, for the minority spin
channel of Ni-NBT at different concentrations using Tauc plots <cit.>.
The resulting data are shown in Fig. <ref>.
As discernible from the panel (a) of the figure, for NBT, our simulation is in
good agreement with the experiment, with a slight shift in the onset of the peak.
The reason for this shift could be attributed to the temperature effects in
experimental data. For Ni-NBT, on the contrary, we observe nonzero peaks in
the IR region, which is consistent with the electronic structure data suggesting
the half-metallic nature of Ni-NBT. Panel (c) shows the optical bandgap for
different Ni concentrations. Consistent with the trend in experiment,
our computed E_g decreases with Ni concentration. This could be
explained in terms of the inverse relationship between the static dielectric constant
and E_g given by the Penn model <cit.>. The increased E_g for the 25%
concentration could be attributed to the decrease in ϵ_1 (0) to 61 from 81 for 18.75%.
Our computed E_g for NBT is close to previous calculations <cit.>.
The experimental values, Refs. <cit.>,
are, however, on the higher side as they correspond to finite temperatures.
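For completeness, the Tauc extraction used above can be reproduced with a simple linear fit of (αhν)^2 versus hν for a direct-gap material. The sketch below is a generic illustration; the fitting window and the synthetic absorption data are assumptions, not the spectra computed in this work.

import numpy as np

def tauc_direct_gap(hv, alpha, fit_window):
    # fit the linear region of (alpha*hv)**2 and extrapolate to zero
    y = (alpha * hv) ** 2
    mask = (hv >= fit_window[0]) & (hv <= fit_window[1])
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope  # x-intercept gives the optical gap

# synthetic direct-gap spectrum with Eg = 2.6 eV (illustrative only)
hv = np.linspace(2.0, 4.0, 200)
alpha = np.where(hv > 2.6, 1e4 * np.sqrt(np.clip(hv - 2.6, 0.0, None)) / hv, 0.0)
print("estimated gap:", tauc_direct_gap(hv, alpha, (2.8, 3.6)), "eV")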
§.§ Magnetic Properties
Next, as a probe of the magnetic degrees of freedom introduced in the system, we
examined the magnetic moments of NBT and Ni-NBT. To find the actual ground-state
magnetic configuration of the system, we probed both ferromagnetic (FM)
and antiferromagnetic (AFM) orientations of the magnetic moments. From our calculations,
we find the FM phase to be the ground state for all concentrations of Ni. This is
evident from the relative energies of the FM and AFM phases given in Table <ref>
for 25% of Ni, where the AFM energy is observed to be
higher by ≈ 25 meV.
In Fig. <ref>, we show the total magnetic moment as a function of
Ni concentration. Consistent with the literature, and as expected for a purely
ferroelectric material, we obtained a zero magnetic moment for NBT.
For Ni-NBT, however, we observed increasing (nonzero) magnetic moments
as a function of Ni concentration. The maximum magnetic moment observed
is 1.48 μ_B/f.u., for the highest concentration of 25%.
The increase in the magnetic moment with concentration could be attributed
to the increasing ferromagnetic exchange between neighboring Ni ions at
higher concentrations. Our calculated magnetic moment of 0.76 μ_B/f.u.
for a concentration of 12.5% is in good agreement with the experimental value
of 0.91 μ_B/Ni reported for 9% of Ni <cit.>.
To get more insight into the origin of nonzero magnetic moments in Ni-NBT, we
examined the separate contributions from each ion. These contributions are
tabulated in Table <ref> for the highest concentration of 25%.
The values listed in parentheses are the contributions from the orbital
magnetic moment.
As expected, Ni
contributes dominantly, with ≈ 105% of the total magnetic moment.
The spin magnetic moment originates from the unpaired 3d-electrons in
the e_g states (panel (b)). The obtained value of the spin magnetic moment, 1.54 μ_B/atom,
is, however, smaller than the expected theoretical value of 2.83 μ_B/atom.
This could be attributed to the strong hybridization
between the O 2p and Ni 3d orbitals.
Ni is also observed to display a small orbital magnetic moment of
0.02 μ_B/atom, parallel to the spin contribution, arising through spin-orbit coupling (SOC).
The second dominant contribution is from the O ions, which contribute
≈ -9% of the total value. This opposite contribution from O
leads to a decrease in the total magnetic moment. Like Ni, O also has
a small parallel contribution from the orbital magnetic moment.
Among the other ions, Bi contributes about 2% of the total value, whereas
Na and Ti each contribute less than 1%.
§.§ Magneto-optical Properties
The presence of magnetic degrees of freedom in Ni-NBT leads to an anisotropy
in the dielectric tensor due to the breaking of time-reversal symmetry.
The dielectric tensor for a magnetized material could be written as
ϵ_ij = ϵ^(0)_ij + ϵ^(1)_ij,
where ϵ^(0)_ij is the dielectric tensor in the absence of
magnetization and ϵ^(1)_ij represents the contribution due to
nonzero magnetization. To linear order in the magnetization M, ϵ^(1)_ij
can be expressed as ϵ^(1)_ij = K_ijk M_k, where K is the
magneto-optical coefficient.
To examine the magneto-optical properties of Ni-NBT, we computed MOKE
spectra in the polar configuration (Fig. <ref>(a)), where both
the incident linearly-polarized wave and magnetization are considered
perpendicular to the surface. The polar configuration is one of the most
common setups used to trace the magneto-optical properties experimentally.
The Kerr rotation angle, θ_k, and Kerr ellipticity, η_k, can be
extracted from the diagonal and off-diagonal dielectric response
as <cit.>
θ_k + i η_k = - K / ( √(ϵ^(0)) (1 - ϵ^(0)) ),
where ϵ^(0) is the diagonal component of the dielectric tensor
in the absence of magnetization. Separating real and imaginary components
in Eq. (<ref>), we can derive
θ_k = - [K_1^2 + K_2^2]^1/2 / ( [ϵ_1^2 + ϵ_2^2]^1/4 [(1-ϵ_1)^2 + ϵ_2^2]^1/2 ) cosΘ and
η_k = - [K_1^2 + K_2^2]^1/2 / ( [ϵ_1^2 + ϵ_2^2]^1/4 [(1-ϵ_1)^2 + ϵ_2^2]^1/2 ) sinΘ,
where Θ = tan^-1(K_2/K_1) - (1/2) tan^-1(ϵ_2/ϵ_1) - tan^-1( -ϵ_2/(1 - ϵ_1) ).
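Given the complex diagonal dielectric function ϵ^(0) = ϵ_1 + iϵ_2 and the complex magneto-optical coefficient K = K_1 + iK_2 on a common energy grid, the polar Kerr rotation and ellipticity follow directly from the relation above by complex arithmetic. The Python sketch below uses toy input spectra; the numbers are placeholders, not our computed data.

import numpy as np

def polar_kerr(eps_diag, K):
    # theta_k + i*eta_k = -K / (sqrt(eps_diag) * (1 - eps_diag))
    phi = -K / (np.sqrt(eps_diag) * (1.0 - eps_diag))
    return np.degrees(phi.real), np.degrees(phi.imag)  # rotation, ellipticity

# toy spectra on a 0.1-10 eV grid (illustrative values only)
E = np.linspace(0.1, 10.0, 100)
eps0 = (6.0 - 0.4 * E) + 1j * np.exp(-(E - 4.2) ** 2)
K = 0.05 * (1.0 + 0.5j) * np.exp(-(E - 3.3) ** 2)
theta_k, eta_k = polar_kerr(eps0, K)
print("max |theta_k| =", np.abs(theta_k).max(), "deg")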
The calculated real and imaginary parts of K and complex Kerr rotation
angles for different concentrations of Ni-NBT are shown in Fig <ref>.
As discernible from the panel (d), Kerr rotation shows the same qualitative
behavior for all concentrations at higher energies, above ∼ 2 eV. There are
substantial peaks in both negative and positive y-axes, which are signatures
of clockwise and anticlockwise polarizations, respectively, in the material.
In the low energy range, below ∼ 2 eV, however, we observed a mixed trend
for θ_k at different concentrations.
On the negative y-axis, the most significant peak, of amplitude 0.58^∘, is
observed at 3.3 eV for the 6.25% concentration. The amplitude of the peaks
decreases with Ni concentration. On the positive y-axis, however,
a θ_k reaching up to 0.7^∘ around 10.8 eV is observed for the 25%
concentration. Unlike the trend for negative θ_k, the peak amplitude
for positive θ_k increases with Ni concentration.
Fig. <ref>(e) shows the Kerr ellipticity data as function of energy.
Like the trend of θ_k, all concentrations show a similar qualitative
behavior at higher energies, whereas a mixed trend in amplitude is observed in
the low energy range. We observed a significant peak of amplitude
0.72^∘ around 10 eV on the negative y-axis for the 6.25% concentration.
Unlike θ_k, there is not much variation in the amplitude of this peak
with Ni concentration. Consistent with the trend of θ_k, apart from this
primary peak, we also observe a few secondary peaks of amplitudes 0.33^∘,
0.56^∘, and 0.54^∘ at 4.0, 5.25, and 6.7 eV, respectively.
The significant Kerr signals obtained from our simulations for Ni-NBT suggest
it as a potential candidate for magneto-optical applications.
§ CONCLUSIONS
In conclusion, with the help of density functional theory based first-principles
calculations, we examined the effect of transition metal substitution on
the electronic, ferroelectric, magnetic, optical, and magneto-optical
properties of NBT. In agreement with the literature, our simulations of the electronic
properties show NBT to be a direct-bandgap semiconductor. Our computed bandgap of
2.57 eV is within the range of previous theoretical calculations and experiments.
For transition-metal-substituted NBT, we observed the emergence of half-metallic
ferromagnetism in the system. Our simulations show that, while the minority spin channel exhibits
a wide bandgap, there are nonzero states at the Fermi energy for the majority spin channel.
This could be attributed to the shift in the energy levels
of the majority spin states due to hybridization between the O 2p and Ni 3d
states. This asymmetry between the two spin channels leads to the emergence of a
nonzero permanent magnetic moment in the material. We obtained a magnetic
moment of 1.48 μ_B/f.u. for a Ni concentration of 25%.
For optical properties of NBT, our simulation results are consistent
with the available experimental and other theory results.
For Ni-NBT, however, we observed a diverging static dielectric
constant in the infrared region, which could be attributed to the half-metallic
nature of the material. Our MOKE data show significant
Kerr angles for Ni-NBT, which suggests transition-metal-substituted NBT
as a potential candidate for magneto-optical applications.
§ ACKNOWLEDGMENTS
The authors wish to thank Ravi Kumar, Mohd Zeeshan and Indranil Mal
for useful discussions. C. K. V. acknowledges the funding support from
Council of Scientific & Industrial
Research, India (Grant No. 09/086(1297)/2017-EMR-I).
B. K. M. acknowledges the funding support from SERB, DST (CRG/2022/003845).
The results presented in the paper are based on the computations using the
High Performance Computing cluster, Padum, at the Indian Institute of
Technology Delhi, New Delhi
|
http://arxiv.org/abs/2307.00677v2
|
20230702223008
|
SDC-HSDD-NDSA: Structure Detecting Cluster by Hierarchical Secondary Directed Differential with Normalized Density and Self-Adaption
|
[
"Hao Shu"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
SDC-HSDD-NDSA: Structure Detecting Cluster by Hierarchical Secondary Directed Differential with Normalized Density and Self-Adaption
Hao Shu
Hao Shu is with College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China
August 1, 2023
====================================================================================================================================
Density-based clustering is among the most popular clustering approaches since it can identify clusters of arbitrary shape as long as different (high-density) clusters are separated by low-density regions. However, the requirement that clusters be separated by low-density regions is not trivial: a high-density region might contain different structures that should be clustered into different groups. This situation exposes the main flaw of all previous density-based clustering algorithms we are aware of: structures inside a high-density cluster cannot be detected. Therefore, this paper provides a density-based clustering scheme that not only has the abilities of previous ones but can also detect structures in a high-density region that is not separated by low-density ones. The algorithm employs secondary directed differentials, hierarchy, normalized density, and a self-adapting coefficient, and is thus called Structure Detecting Cluster by Hierarchical Secondary Directed Differential with Normalized Density and Self-Adaption, dubbed SDC-HSDD-NDSA for short. To illustrate its effectiveness, we run the algorithm on several data sets. The results verify its validity in structure detection, its robustness to noise, and its independence of granularity, and demonstrate that it can outperform previous algorithms. The Python code of the paper can be found at https://github.com/Hao-B-Shu/SDC-HSDD-NDSAhttps://github.com/Hao-B-Shu/SDC-HSDD-NDSA.
Density-based Clustering, Hierarchical Clustering, Structure Detection, Clustering with Noise, Granularity Independence.
§ INTRODUCTION
Clustering, one of the most classical and foundational tasks in unsupervised machine learning, has been studied for as long as the subject has existed, with extensive applications including data mining, sample partitioning, and astronomical classification. A satisfactory clustering algorithm should group data without violating intuition, be robust to noise, and be applicable across a wide range of granularities, while also having low complexity. In the past few decades, many algorithms have been proposed, falling into several categories such as partition-based<cit.>, model-based<cit.>, density-based<cit.>, grid-based<cit.>, border-based<cit.>, and hierarchical<cit.>. Similar ideas can also be employed in detecting isolated points<cit.>, while schemes have also been proposed to accelerate these algorithms<cit.>.
Among all these algorithms, density-based clustering might be the most outstanding for its ability to identify clusters of different shapes and its robustness to noise. The most famous density-based clustering algorithm, which might also be the first, was proposed in 1996 and is known as DBSCAN<cit.>; most later works can somehow be traced back to it. There are many varieties of clustering via density, such as DENCLUE<cit.>, which first calculates the influence of each point, OPTICS<cit.>, which first orders points, LOF<cit.>, which employs the k-nearest neighbors to detect outliers, methods employing shared nearest neighbors<cit.> or reversed nearest neighbors<cit.> to define density, HDBSCAN<cit.> (Hierarchical DBSCAN), DP<cit.>, which employs density peaks, ADBSCAN<cit.>, which adapts coefficients, and others.
However, all these algorithms are built on the same precondition that any single cluster should have a high density while different clusters are separated by low-density regions, which is not trivial. In practice, a high-density region might have inner structures that are not separated by low-density regions. Please see Figure<ref>, Figure<ref>, and Figure<ref>, which obviously contain 3, 3, and 1 clusters, respectively, without separating low-density regions. As far as we know, all previous algorithms fail to address this issue.
Therefore, in this paper, we present a novel clustering scheme to address this problem, which not only can cluster the data that previous works can but is also valid in detecting different structures in high-density regions. It is robust to noise and independent of the granularity of the data. The scheme employs secondary directed differentials, a hierarchical method, normalized density, and a self-adapting coefficient, and is thus named Structure Detecting Cluster by Hierarchical Secondary Directed Differential with Normalized Density and Self-Adaption, dubbed SDC-HSDD-NDSA for short. To support its effectiveness, we provide experiments on several data sets with different granularities, with and without noise as well as structures. The results demonstrate the validity and robustness of our scheme, which can outperform previous works.
The organization of the paper is as follows. In Section II, DBSCAN and the k-NN version of DBSCAN are reviewed. Then, Section III is dedicated to the main scheme, followed by the experimental results in Section IV. Some discussions, including the study of complexity, are provided in Section V. Finally, Section VI concludes the paper.
The main contributions of the work include:
(1) Illustrate the restriction of previous clustering algorithms, which all fail to detect structures in high-density regions.
(2) Provide a novel clustering scheme, dubbed SDC-HSDD-NDSA, which not only enjoys the abilities that previous algorithms have but is also valid in detecting structures in high-density regions.
(3) Demonstrate the effectiveness, robustness, and granularity independence of the new method through experimental results.
The main insights of our scheme include the following:
(1) To detect structures in a high-density region that is not separated by low-density ones, employing density as the criterion is insufficient. In fact, even employing the differential of density can fail. Please see the types in Figure<ref>, in which the differential of density may not be large but there should be more than one cluster, and Figure<ref>, in which the differential of density can be significant but there should be only one cluster.
(2) When processing data of dimension greater than 1, the differentials are essentially directed, and thus employing non-directed properties such as the gradient might not be enough. Please see Figure<ref>, in which there should be three clusters, with the two columns in the middle belonging to the same cluster as the rows at the top and bottom, but the gradients (which might be defined as the maximal differential between a point and the points in its neighborhood) of points in the middle columns could be very different from those of the top and bottom rows.
(3) If we employ the k nearest neighbors, choosing different k for calculating densities and for searching close points might improve the algorithm.
(4) To avoid the influence of granularity, employing normalized densities is beneficial. For example, it might allow coefficients that are independent of the data set. On the other hand, different parts of an algorithm might need different normalization schemes, since different normalization schemes might be suitable for different tasks.
(5) Self-adaption of the coefficient can be obtained by repeating the algorithm several times.
(6) Various strategies might need to be employed in different parts of an algorithm.
§ RELATIVED WORKS
This section reviews the DBSCAN algorithm and its k-NN version, and provides the background needed to understand the scheme in the next section. We explain the definitions and algorithms as simply as possible. Since they are textbook-level, readers familiar with this subject may skip this section.
[ϵ-neighbor]
For a non-negative number ϵ, the ϵ-neighbor N_ϵ(x) of a point x in a data set X is a set that consists of all points whose distance to x is not larger than ϵ, namely, N_ϵ(x):={y∈ X| D(x,y)≤ϵ}, where D is a fixed measure of distance.
In the DBSCAN algorithm<cit.>, the density of a point x is defined as the number of points in its ϵ-neighbor, namely |N_ϵ(x)|. To run the algorithm, one must choose an ϵ and a minimal density Minst. Points whose density is larger than Minst are marked as core points, and the algorithm proceeds as follows. Initially, all points are marked as unclassified, and an unclassified core point is picked to found a cluster. The cluster is then expanded iteratively by absorbing the unclassified points in the ϵ-neighbors of the core points in the cluster: when a new core point is absorbed, its ϵ-neighbor is also absorbed. The absorbed points are marked as classified, and the cluster is expanded until no new points can be absorbed, at which point a final cluster is defined. After that, a new unclassified core point is picked to found a new cluster, which is expanded as above. The procedure is repeated until no core points remain unclassified. Finally, all remaining unclassified points are marked as noise, and the clusters found above constitute the final clustering result.
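The expansion procedure just described can be written compactly. The following Python sketch is a minimal, unoptimized illustration of DBSCAN as summarized above (pairwise distances, no spatial index); min_pts plays the role of Minst, and unabsorbed points keep the label -1 (noise).

import numpy as np

def dbscan(X, eps, min_pts):
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    is_core = np.array([len(nb) >= min_pts for nb in neighbors])
    labels = np.full(n, -1)          # -1: unclassified, later noise
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not is_core[i]:
            continue
        labels[i] = cluster          # found a cluster from a core point
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if is_core[j]:       # only core points keep expanding
                    queue.extend(neighbors[j])
        cluster += 1
    return labels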
Although the DBSCAN algorithm has attracted substantial attention since its publication, one of its main flaws is the need to choose ϵ and Minst, which might be difficult since it requires prior knowledge such as the granularity of the data set. When the granularity of a data set is enlarged, for example by changing the coordinates of all data x=(x_1,x_2) in a two-dimensional space into nx=(nx_1,nx_2), the clustering result should not change. Hence, ϵ should be substituted with another value, namely nϵ. Even if this issue can be solved by data preprocessing, the difficulty of choosing ϵ and Minst may still remain, since different clusters might have different local granularities.
To address the issue in choosing ϵ, a suggestion might be employing the k-nearest neighbor instead of the ϵ-neighbor.
[k-Nearest Neighbors (kNN)]
A k-distance point p_x of a point x in a data set X associated with the measure of distance D is a point satisfying that there are at most k-1 points whose distances to x are smaller than D(x,p_x) while there are at least k points with distance smaller or equal to D(x,p_x). The D(x,p_x)-neighbor of x is defined to be the k-nearest neighbor (kNN) of x.
Simply speaking, a k-distance point of x is a k-th closest point of x (there might be several points with the same distance to x), while the kNN of x consists of the points no farther away than a k-distance point.
The k-nearest neighbor was first employed to detect isolated points<cit.>, but it can easily be employed in clustering tasks. It can be viewed as a method for choosing ϵ in the DBSCAN algorithm–a different ϵ can be chosen for each point, namely its k-distance. The k-NN version of the DBSCAN algorithm is obtained simply by replacing the ϵ-neighbor with the kNN of points, for a fixed k chosen in advance, after marking all isolated points and viewing all remaining points as core points. [The above algorithm will be called the k-NN clustering algorithm in the remainder of the paper, while the algorithm that assigns each new point to the cluster containing the closest known point will be called the k-NN classifier.]
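A sketch of the k-NN version described in this paragraph: the ϵ-neighbor is replaced by each point's kNN, points are first marked isolated via a k-distance threshold, and every remaining point is treated as a core point. The threshold name max_k_distance and the use of scikit-learn are our own choices.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_dbscan(X, k, max_k_distance):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)      # +1: the point itself
    dists, idx = nn.kneighbors(X)
    isolated = dists[:, -1] > max_k_distance              # k-distance too large
    labels = np.full(len(X), -1)
    cluster = 0
    for i in range(len(X)):
        if labels[i] != -1 or isolated[i]:
            continue
        labels[i] = cluster
        queue = list(idx[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1 and not isolated[j]:
                labels[j] = cluster
                queue.extend(idx[j])    # every non-isolated point expands
        cluster += 1
    return labels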
Nevertheless, to detect isolated points, criteria like Minst in the DBSCAN algorithm, which depend on the granularity, must be chosen – for example, the maximal k-distance for a point not to be marked as an isolated point.
§ METHODOLOGY
Since previous density-based algorithms cannot solve the structure-detection problem in high-density regions, novel algorithms, illustrations, and analyses are presented in this section. The pseudocode of the core of the SDC-HSDD-NDSA algorithm is provided in subsection A, followed by illustrations in subsections B to E. Some drawbacks of the core algorithm are investigated in subsection F, and to overcome them, the SDC-HSDD-ND algorithm is proposed in subsection G. Subsection H is dedicated to a suggestion for choosing coefficients, and the final algorithm with the self-adapting coefficient, namely the SDC-HSDD-NDSA algorithm, is presented in subsection I.
§.§ The core algorithm: SDC-SDD-ND
The pseudocode of the core of the SDC-HSDD-NDSA algorithm, dubbed SDC-SDD-ND, is as follows; only the data set is required, while the other inputs are optional, with defaults discussed in the following subsections.
The core algorithm: SDC-SDD-ND
§.§ The calculation of the density
The first task in any density-based algorithm is to calculate the densities. Some papers employ 1/ϵ, where ϵ is the k-distance of the point, as the density of a point. However, we find it more suitable to employ the average distance, and we thus use 1/r^d as the density of a point in the algorithm (we employ 1/r^d rather than 1/r because it more closely resembles the natural density), where r is the average distance of the closest RhoCalculateK points of the point and d is the dimension of the data.
As for the choice of RhoCalculateK, it is expected to be small. For instance, in Figure<ref>, the points in the corner of a square should be considered to have the same density as points in the interior of the square. If RhoCalculateK is chosen to be 2 or 3, this is certainly
the case, while if RhoCalculateK is chosen to be larger than 3, the density of the corner points could be lower than the density of the interior points. On the other hand, the smaller RhoCalculateK is, the less robust the algorithm becomes, since the density of a point can be influenced more easily by a single noisy point. Therefore, considering this trade-off, the default of RhoCalculateK is set to 4 in the algorithm.
On the other hand, to avoid the influence of granularity, it is essential to employ normalized densities instead of raw densities. Furthermore, it is better to normalize the densities employed in calculating secondary differentials and in detecting isolated points by different methods. In the algorithm, the densities used in calculating secondary differentials are normalized by dividing by the maximal density, while the densities used in detecting isolated points are normalized by dividing by the average density.
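A short sketch of the density definition used here, ρ(x) = 1/r^d with r the average distance to the RhoCalculateK closest points, followed by the two normalizations just mentioned (division by the maximum for the differential step, division by the mean for isolated-point detection). The use of scikit-learn for the neighbor search is our own choice.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def normalized_densities(X, rho_calculate_k=4):
    d = X.shape[1]                                     # data dimension
    nn = NearestNeighbors(n_neighbors=rho_calculate_k + 1).fit(X)
    dists, _ = nn.kneighbors(X)                        # column 0 is the point itself
    r = dists[:, 1:].mean(axis=1)                      # average distance to the k closest points
    rho = 1.0 / np.maximum(r, 1e-12) ** d
    rho_for_differential = rho / rho.max()             # normalized by the maximal density
    rho_for_isolation = rho / rho.mean()               # normalized by the average density
    return rho_for_differential, rho_for_isolation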
§.§ The searching neighbour
The condition that the secondary differential be lower than eps is very tight, which could result in the over-refinement of clusters. Although this tightness is somewhat necessary for detecting structures, the other conditions should be loose to reduce its influence. Therefore, in the algorithm, we suggest enlarging the number of searched neighbors. However, employing a larger number of neighbors in calculating densities might lead to inaccuracy, as explained in the subsection above. Therefore, different numbers of neighbors for neighbor searching and for density calculation are advocated. A similar idea can be employed for the densities used in detecting isolated points.
Hence, the presented algorithm takes as defaults the number of searched neighbors SearchNeiborK=7, the number of neighbors for calculating densities RhoCalculateK=4, and the number of neighbors for calculating the densities used in detecting isolated points IsoNeiborK=4.
§.§ The choice of MaxIsoPointRho
The choice of MaxIsoPointRho depends on how strictly isolated points should be detected. It should be chosen larger if stricter isolated-point detection is required, and lower if only extreme points need to be marked as isolated. MaxIsoPointRho can also be set to 0 if no isolated-point detection is needed. In the above algorithm, its default is 0.07, chosen by tests.
§.§ The merging of the clusters
As illustrated in subsection C above, the tightness of the secondary differential condition might lead to the over-refinement of the clusters. Consequently, some clusters that should not be separated might be separated during clustering. Fortunately, for the same reason that the condition is tight, the over-refined clusters are tiny in most cases. Therefore, the problem can be managed by merging small clusters.
As for the merging scheme, the algorithm simply redistributes, pointwise, the points in clusters whose size is lower than MinClusterPoint to the closest cluster whose size reaches MinClusterPoint. This might not be the best choice, but it could be the simplest one sufficient for most cases. Certainly, other merging methods could be applied instead, such as redistributing points in small clusters to the closest cluster without requiring a minimum size, merging cluster-wise rather than pointwise, or merging via the k-NN classifier with k≥ 2 instead of simply by the distance to a cluster.
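Since the merging step is essentially a 1-NN classifier (see also the complexity discussion in Section V), the pointwise redistribution can be sketched as follows; names are ours and the snippet is illustrative rather than the exact implementation.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def merge_small_clusters(X, labels, min_cluster_point=35):
    labels = labels.copy()
    ids, counts = np.unique(labels[labels >= 0], return_counts=True)
    large = ids[counts >= min_cluster_point]            # clusters that are kept
    keep = np.isin(labels, large)
    small = (labels >= 0) & ~keep                       # points to redistribute
    if small.any() and keep.any():
        clf = KNeighborsClassifier(n_neighbors=1).fit(X[keep], labels[keep])
        labels[small] = clf.predict(X[small])           # assign to the closest kept cluster
    return labels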
§.§ Problems and solutions
There are two main problems in employing the core algorithm to cluster.
Firstly, the core algorithm might fail to cluster data sets in which high-density clusters are separated by low-density regions with cluster-level structures but without coherent refined structures inside a cluster, such as the top one in Figure<ref>. The clustering result obtained by simply employing the core algorithm is shown in the middle of Figure<ref>, which fails to cluster the two circles, as the K-means algorithm does (though it might be better). This is because there are small clusters of sufficient size in the inner circle as well as in the outer circle, respectively, but most points lie in clusters without sufficient points and are thus merged into the closest one, as in the K-means algorithm. Fortunately, the issue can be solved by traditional density-based algorithms such as the k-NN version of DBSCAN stated in section II, which is a particular case of the presented core algorithm. Therefore, an immediate solution is to apply the core algorithm with mode=kNN, namely clustering via k-NN, and then employ the core algorithm with mode=SD, namely implementing the algorithm normally. The clustering result is provided at the bottom of Figure<ref>.
Another problem is that high-density points whose densities are low compared with the highest-density one might need to be separated into different clusters but could fail to be; for example, please refer to the top and the middle of Figure<ref>. This might be caused by the density in the upper-right of the lower-left square being much higher than in most regions, so that the normalized densities in those regions are very low, resulting in small secondary differentials and thus a failure to classify. This problem indeed affects most non-hierarchical density-based clustering algorithms. The solution is thus to apply hierarchical clustering by repeating the core algorithm, and the result is provided at the bottom of Figure<ref>.
§.§ The SDC-HSDD-ND algorithm
Following the investigations in the above subsection, the hierarchical algorithm, dubbed SDC-HSDD-ND, can now be given. The pseudocode is as follows.
The hierarchical algorithm: SDC-HSDD-ND
§.§ The choices of eps and the minimal length on merging
The last things left undiscussed in the above subsections are the choices of eps and MinClusterPoint.
The choice of MinClusterPoint depends on how small a cluster must be before it is discarded. Since the over-refinement problem is one of the main concerns in the algorithm, one should expect the threshold for a cluster to be high. Here, the default of MinClusterPoint is set to 35 after several tests. On the other hand, a coefficient related to the number of data points might be a better choice for the cluster threshold, since a large data set seems to warrant a higher threshold. Therefore, the final choice of the minimal size of a cluster could be max(MinClusterPoint,(1-f)× N), where 0≤ 1-f<1 is a fixed fraction and N is the number of data points.
However, a large MinClusterPoint might cause another problem: small clusters could easily be absorbed, which could sometimes merge distant clusters. For instance, see Figure<ref>, where the two clusters in the corner are too small, with 15 and 20 members, respectively, and are thus merged into the central one if the minimal cluster size is simply set to 35. The issue might not be serious in mode=SD, since the refinements are implemented after clustering by neighbors, but it can matter in mode=KNN. A suggestion to reduce the problem is to relax the condition on the minimal cluster, namely choosing another bound MinKNNClusterPoint≤ MinClusterPoint as the minimal number of points required in a cluster in mode=KNN. In this mode, all clusters whose size is smaller than MinKNNClusterPoint are merged into the cluster of isolated points, and the default of MinKNNClusterPoint is chosen to be 7.
On the other hand, the choice of eps is essential. In principle, it depends on how accurately one wants to detect structures. If eps is set equal to or larger than 4, then the core algorithm reduces to clustering simply by SearchNeiborK-NN without detecting structures. This demonstrates that clustering by k-NN is a special case of the above algorithm. Generally, decreasing eps represents a tighter restriction in clustering, in which more accurate structures can be detected. However, the tighter condition also means that less accurate structures might be omitted, which can increase the risk of over-refinement, especially in clusters without regular structures. The default of eps in the algorithm is set to 0.075 after several tests.
However, the default setting of eps might not be sufficient for certain tasks. A better way might be to let eps self-adjust. Therefore, we suggest that the algorithm begin with a small eps and then enlarge it until a suitable one is found. Nevertheless, the initialization of eps might still be a problem: it can be neither so small that it leads to serious over-refinement nor so large that structures are missed. Furthermore, the exploration of eps consumes extra computation, which should be taken into account.
Based on the discussions above, we suggest employing the clustering algorithm with a self-adjusting eps chosen as follows. Whenever a data set is clustered with mode=SD, a suitable eps is chosen by running the clustering algorithm with eps initialized to a small Mineps, recording the number of clusters whose size reaches MinClusterPoint, and then running again with eps=eps+adjust, where adjust is the chosen step for enlarging eps. If the number of clusters is reduced, then eps-adjust can be considered the suitable one; otherwise, the procedure is repeated until a suitable eps is found or eps≥ Maxeps, where Maxeps is the maximal choice of eps. In the default settings, Mineps is set to 0.045, Maxeps is set to 0.075, and adjust is set to 0.005.
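The self-adjustment loop just described can be sketched as below, where cluster_SD stands for the (non-hierarchical) core algorithm in mode=SD and is assumed rather than defined here.

import numpy as np

def self_adapting_eps(X, cluster_SD, min_cluster_point=35,
                      min_eps=0.045, max_eps=0.075, adjust=0.005):
    def n_large(labels):
        labels = np.asarray(labels)
        ids, counts = np.unique(labels[labels >= 0], return_counts=True)
        return int(np.sum(counts >= min_cluster_point))
    eps = min_eps
    prev = n_large(cluster_SD(X, eps))
    while eps + adjust <= max_eps:
        cur = n_large(cluster_SD(X, eps + adjust))
        if cur < prev:                 # cluster count dropped: current eps is suitable
            return eps
        eps, prev = eps + adjust, cur
    return eps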
§.§ The final algorithm: SDC-HSDD-NDSA
Finally, the algorithm integrating all of the above considerations is ready to be presented; please see Figure<ref>, and the pseudocode is given below, where the coefficient IOC indicates whether isolated points are required to be merged into a single cluster. If it is True, all isolated points are merged into a single cluster; otherwise, the different local isolated points are displayed as different clusters.
The final algorithm with self-adaption: SDC-HSDD-NDSA
§ EXPERIMENT RESULT
In this section, experimental results are provided. The algorithm is tested on several data sets with or without structures, in noisy as well as noiseless settings. More experimental results, including data sets that combine different clusters with different granularities and with random isolated points, are provided in the supplementary materials.
The benchmark of the experiments is set to the k-NN clustering algorithm introduced in section II, since our algorithm employs kNN as neighbors and extends the k-NN clustering algorithm, as stated in section III H.
Throughout this section and the remainder of the paper, Noise=x means that points are perturbed by Gaussian noise with standard deviation σ=x× d, where d is the granularity, namely d=max(d_x, d_y).
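For reproducibility, the noise model can be sketched as follows; the definitions of d_x and d_y are not spelled out above, and the sketch assumes they are the coordinate ranges of the data along the two axes.

import numpy as np

def add_noise(X, x, seed=None):
    # Gaussian perturbation with standard deviation x * d, d = max(d_x, d_y),
    # where d_x and d_y are assumed to be the coordinate ranges (an assumption).
    rng = np.random.default_rng(seed)
    d = max(np.ptp(X[:, 0]), np.ptp(X[:, 1]))
    return X + rng.normal(scale=x * d, size=X.shape)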
§.§ Effectiveness and robustness
The following experiments demonstrate the effectiveness and robustness of the SDC-HSDD-NDSA algorithm; here and in the remainder of the paper, all coefficients except the data set are kept at the default settings summarized in section III unless pointed out otherwise. Moreover, for convenience, we only display the most representative data sets here, while others can be found in the supplementary materials.
The results of SDC-HSDD-NDSA run on the data sets from the introduction are provided in Figure<ref>, Figure<ref>, and Figure<ref>. The results demonstrate that the algorithm is effective on these data sets and robust to noise.
§.§ Competitiveness
As illustrated in the introduction, the data sets in Figure<ref> cannot be clustered successfully by previous density-based clustering algorithms, since the clusters are not separated by low-density regions. This indicates that our algorithm can outperform previous ones on several data sets, especially those with apparent structures in high-density regions.
In the following, we compare the clustering results of SDC-HSDD-NDSA and k-NN with different k. Instead of employing the TripleSquare data set itself, the data set consists of two Gaussian data sets at the top-left as well as the TripleSquare at the bottom-right, to avoid extreme choices of coefficients that are only valid on the TripleSquare; please see Figure<ref>. The figures show that the k-NN clustering algorithm introduced in section II fails for k=7 and already becomes invalid on the Gaussian samples for k=4 while still failing to cluster the TripleSquare. On the other hand, SDC-HSDD-NDSA not only succeeds in clustering all clusters but can also detect isolated points.
§ DISCUSSION
§.§ Asymmetry
The SDC-HSDD-NDSA algorithm is asymmetric; that is, choosing different start points in clustering might produce different results. To ensure a unique clustering result, the algorithm is set to choose the unclassified point with the highest density as the start point when founding a new cluster.
§.§ Detecting isolated points
As shown in the above sections, the SDC-HSDD-NDSA algorithm can detect isolated points. The isolated points consist of points that are not assigned to a cluster of size larger than MinKNNClusterPoint in mode=KNN, as well as those with normalized density lower than MaxIsoPointRho in mode=SD. They are either merged into a single cluster for IOC=True, or displayed separately, locally within each cluster of mode=KNN and mode=SD, for IOC=False.
§.§ Complexity
The time complexity of the SDC-HSDD-NDSA algorithm can be analyzed as follows, assuming the worst case unless pointed out otherwise, with d the dimension of the data.
(1) Calculating the kNN: O(d× N^2) in the worst case and O(d× N× logN) on average with the KD-tree method.
(2) Calculating densities and normalization: O(N).
(3) Calculating the density differentials of points in the kNN of each point: O(N).
(4) Determining isolated points IP: O(N).
(5) Clustering: O(N^2) when starting from the highest-density points, caused by determining the starting point with the maximal density among all unclustered points for each new cluster; the worst case, in which every cluster consists of exactly one data point and the data set is ordered in increasing order, gives O(N^2). This is followed by absorbing points, whose time complexity is O(|C_i|) for the cluster C_i and O(N) in total, since the cluster sizes sum to at most N. However, the procedure can be accelerated to O(N) if new clusters are started from an arbitrarily chosen point instead of the one with the highest density.
(6) Merging: O(d× N^2) in the worst case and O(d× N× logN) on average with the KD-tree method. The merge procedure is, essentially, a k-NN classifier with k=1, where data in a cluster whose size reaches the minimal requirement are labeled by their cluster, and the redistributed data are predicted by the k-NN classifier.
Therefore, the core algorithm runs with time complexity O(d× N^2) in the worst case and O(d× N× logN) on average with the KD-tree method, if the clusters are not required to start from the unclustered points with the highest density. The self-adjustment procedure can be regarded as repeating the core algorithm several (but not many) times and thus does not increase the time complexity. Finally, in the worst case, in which the hierarchical cluster tree is as unbalanced as possible, there can be t+1 hierarchies with t≈ log_f(MinClusterPoint/N), resulting in the total time complexity ∑_i=0^tg(f^iN), where N is the number of data points, 1-f is the fixed fraction stated in section III H, and g(x) denotes the time complexity of the (non-hierarchical) self-adapting algorithm on x data points. Hence, the time complexity of the SDC-HSDD-NDSA algorithm is O(d× N^2) in the worst case, in which g(x)=O(d× x^2), and ∑_i=0^tO(d× f^iN× logf^iN)=O(d× N × logN) in the average case, in which g(x)=d× x× logx, by choosing f strictly smaller than 1, by equation (1).
d× N× logN ≤ ∑_i=0^t d× f^iN× log(f^iN) ≤ ∑_i=0^t d× f^iN × logN ≤ d× N × logN × (1-f^(t+1))/(1-f) ≤ d× N × logN × 1/(1-f).     (1)
A similar argument can show that the space complexity of the SDC-HSDD-NDSA algorithm could be the same as a k-NN searching algorithm, since the space complexity of other procedures is not larger than O(N) except the searching of k-NN, which is not less than O(N).
§.§ Extensibility
Differentials of higher order, as well as combinations of differentials of different orders, might also be employed in clustering. The normalization scheme could also be applied to the differentials of densities, not only to the densities themselves. However, in which cases these are valid might require further study.
§ CONCLUSION
In conclusion, we presented a novel density-based clustering algorithm dubbed SDC-HSDD-NDSA, together with various discussions. It not only has the abilities that previous algorithms such as DBSCAN have but can also detect structures in regions that are not separated by low-density areas, which, as far as we know, cannot be handled by any previous density-based algorithm, even in theory. The minimal required input is the data set only, which makes the algorithm convenient to employ. The complexity can be the same as that of a k-NN search algorithm. Experimental results are also provided to demonstrate the effectiveness, robustness, and completeness of the SDC-HSDD-NDSA algorithm.
§ SUPPLEMENTARY MATERIAL
Supplementary experimental results, including noisy and noiseless cases, are provided in this section.
Figure<ref> and Figure<ref> display the clustering results on Gaussian samples and three lines, respectively. Figure<ref> displays the result on two squares with gradually increasing densities. Figure<ref> demonstrates that the algorithm is valid on a data set with a single bridge between two clusters. Figure<ref>, Figure<ref>, and Figure<ref> exhibit the results on a data set combining several data sets with different granularities and varied densities, for the noiseless case as well as noisy cases with additional random points.
|
http://arxiv.org/abs/2307.00837v1
|
20230703082019
|
Surgical fine-tuning for Grape Bunch Segmentation under Visual Domain Shifts
|
[
"Agnese Chiatti",
"Riccardo Bertoglio",
"Nico Catalano",
"Matteo Gatti",
"Matteo Matteucci"
] |
cs.RO
|
[
"cs.RO",
"cs.CV",
"cs.LG"
] |
Mobile robots will play a crucial role in the transition towards sustainable agriculture. To autonomously and effectively monitor the state of plants, robots ought to be equipped with visual perception capabilities that are robust to the rapid changes that characterise agricultural settings. In this paper, we focus on the challenging task of segmenting grape bunches from images collected by mobile robots in vineyards. In this context, we present the first study that applies surgical fine-tuning to instance segmentation tasks. We show how selectively tuning only specific model layers can support the adaptation of pre-trained Deep Learning models to newly-collected grape images that introduce visual domain shifts, while also substantially reducing the number of tuned parameters.
§ INTRODUCTION AND BACKGROUND
The climate change crisis has highlighted the importance of increasing the sustainability of food production, as prescribed in the European Commission's "Farm to Fork" strategy[<https://food.ec.europa.eu/horizontal-topics/farm-fork-strategy_en>]. In this regard, digital technologies are playing a crucial role in reducing the amount of water and chemicals used in agriculture <cit.>. One of the key applications of digital technologies is the deployment of mobile robots, which can perform a range of tasks such as plant spraying <cit.>, weeding <cit.>, and harvesting <cit.>. To carry out these tasks effectively, robots need the ability to autonomously monitor plant traits and status, a task also known as plant phenotyping. For example, in vineyards, a robot must be capable of detecting plant organs for posing the appropriate cuts during winter pruning operations <cit.>. They also ought to accurately identify the presence of grape bunches, their level of ripeness, and promptly detect the emergence of any diseases that may compromise the fruit quality.
Robot perception systems deployed in agricultural settings face particular challenges due to the significant weather and seasonal variations that characterise these environments. Thus, ensuring the effective reuse of visual patterns and features learned under specific environmental conditions (e.g., in terms of weather, lighting, and plant diversity) becomes crucial. This requirement stems from the need to guarantee accurate plant monitoring, even when the underlying conditions change. For instance, viewpoint changes caused by different sensor positions and occlusions caused by leaves are prominent factors that can hinder the accurate monitoring of fruit <cit.>.
The widespread application of Deep Learning (DL) methods has considerably accelerated the progress in various visual perception tasks, including plant phenotyping <cit.>. However, supervised DL methods typically require abundant training data and are susceptible to changes in the data distribution. Moreover, training all model parameters on new data is a costly process in terms of computational power and memory footprint, especially when working on edge devices and mobile platforms. To address these issues, one possible approach is to pre-train the model on a large-scale source domain and fine-tune the parameters on a few examples from the target domain. The aim of fine-tuning is to adapt the model to the target domain while retaining the information learned during pre-training, particularly in cases where the source and target distributions significantly overlap despite the shift. This process is commonly known as transfer learning. A traditional transfer learning practice known as linear probing involves fine-tuning only the last few layers of a Deep Neural Network (DNN) while reusing features from earlier layers. This approach was based on initial evidence suggesting that representations in earlier layers may be more transferable to new data and tasks than the specialised features learned in higher layers <cit.>.
Recent research <cit.> has explored effective alternatives to this consolidated fine-tuning practice. Indeed, Lee et al. <cit.> discovered that selectively tuning only the earlier, intermediate, or last layers of a DNN can counteract different types of distribution shifts and often even outperform cases where all model parameters are tuned. They have named this approach surgical fine-tuning (SFT). Their study concerned transfer learning across different image classification benchmarks, such as CIFAR and ImageNet. However, the authors' conclusions have yet to be validated on image segmentation tasks and data gathered in real-world application scenarios, e.g., from mobile robots.
This paper focuses on the task of grape bunch segmentation, which is a critical prerequisite for autonomous plant phenotyping and yield forecast in vineyards <cit.>. Our research investigates whether surgical fine-tuning can support grape bunch segmentation under visual domain shifts. To address this research question, we extend the study of surgical fine-tuning from image classification models to instance segmentation architectures in the specific case of viticulture. The work in <cit.> is most closely related to this study, because it evaluates the utility of linear probing for grape segmentation. However, the experiments in <cit.> did not examine the option of fine-tuning layers other than the classification head.
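To make the notion of fine-tuning layers other than the classification head concrete, the sketch below freezes all parameters of a torchvision Mask R-CNN (the architecture adopted in this work) except one selected block; the block names and the COCO-pre-trained weights are illustrative assumptions, not the exact training configuration of our experiments.

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

def surgical_freeze(model, tuned_prefix):
    # freeze everything, then unfreeze only the parameters whose name
    # starts with tuned_prefix (e.g. an early, middle, or late block)
    for p in model.parameters():
        p.requires_grad = False
    for name, p in model.named_parameters():
        if name.startswith(tuned_prefix):
            p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
# 'backbone.body.layer1' ~ early layers, 'backbone.body.layer3' ~ intermediate,
# 'roi_heads' ~ last layers (closest to linear probing)
params = surgical_freeze(model, "backbone.body.layer1")
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9)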
To facilitate the analysis of different types of visual domain shifts that characterise vineyards, we introduce the VINEyard Piacenza Image Collections (VINEPICs) <cit.>, a comprehensive and novel grape image archive. In <cit.>, Santos et al. presented the Embrapa Wine Grape Instance Segmentation Dataset (WGISD), which is a large-scale collection of vineyard images displaying high-resolution instances of grape bunches across five different grapevine varieties. Our dataset was gathered in a distinct geographic area and it encompasses different grapevine varieties from those in the WGISD dataset, including wine and table grapes. Crucially, the proposed VINEPICs dataset contains additional variations in terms of camera viewpoint, scene occlusion, and time of data collection. Moreover, we captured images using a consumer-grade camera mounted on a mobile robot, which presents additional challenges due to possible motion blur from the robot's movement. As such, the contributed dataset more closely resembles realistic setups in autonomous vineyard phenotyping compared to the WGISD benchmark.
Our results from applying the widely-adopted Mask R-CNN model <cit.> to challenging robot-collected images indicate that adopting a surgical fine-tuning strategy can significantly outperform both linear probing and full parameter tuning when novel samples that introduce distribution shifts are considered. The paper is structured as follows. In Section <ref>, we present the reference datasets, ablation study, technical implementation, and evaluation metrics used in our experiments. We then discuss the experimental results in Section <ref>. Concluding remarks and future extensions of this work are left to Section <ref>.
§ MATERIALS AND METHODS
To test the performance of applying surgical fine-tuning to instance segmentation models, we ran a set of layered experiments. Consistently with <cit.>, we set up the training in two stages. First, we pre-trained on the largest available set of examples for the grape segmentation task: namely WGISD in this case <cit.>. Then, we considered different target sets that introduce a distribution shift from the source set. The goal was evaluating the extent to which transfer learning can be achieved from source to target, with minimal adjustments, thanks to surgical fine-tuning.
Differently from <cit.>, where the evaluation set was held out from the same data used for fine-tuning, we ran inferences on a different dataset, collected one year after the fine-tuning set. This setup resembles the real-world challenges of viticulture applications. Indeed, grape images can be collected only at specific times of the year and adapting learning models from past years to newly-collected data becomes essential.
§.§ Datasets
Embrapa WGISD. The Embrapa Wine Grape Instance Segmentation Dataset (WGISD) <cit.> comprises 300 high-resolution images depicting 2,020 grape bunches from five Vitis vinifera L. grapevine varieties: Chardonnay, Cabernet Franc, Cabernet Sauvignon, Sauvignon Blanc, and Syrah. The images were captured at the Guaspari Winery (Espírito Santo do Pinhal, São Paulo, Brazil) in April 2018, with the exception of images of the Syrah dataset that was collected in April 2017. Grape bunches were photographed while keeping the camera principal axis approximately perpendicular to the vineyard row, using both a Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone and were resized and stored at a resolution of 2048x1365. At the time of data collection, no defoliation treatments were applied except for the routine canopy management for wine production adopted in the region. In the original data split used in <cit.>, 110 images (accounting for 1612 grape instances) were jointly devoted to training and validation, whereas 27 images (i.e., 408 grape instances) were held out for testing. However, the actual split between training and validation was not provided. Therefore, we decided to use a 20% validation split stratified across grape varieties from the original training subset.
VINEPICs. The VINEyard Piacenza Image Collections (VINEPICs) dataset consists of grape images collected at the vineyard facility of Università Cattolica del Sacro Cuore in Piacenza, Italy. The VINEPICs dataset is publicly available under CC BY 4.0 (Attribution 4.0 International) license and accessible at this link <https://doi.org/10.5281/zenodo.7866442>. The acronym VINEPICs21 refers to the first collection of images gathered in the summer of 2021 on Red Globe vines (Vitis vinifera L.) grafted on Selection Oppenheim 4 (SO4), i.e., the vine rootstock, growing outdoors in 25 L pots. This set includes 73 RGB images captured on three different dates: 26 images of resolution 480x848 were collected at beginning of grape ripening on July 27th, 23 images of resolution 720x1280 on August 23rd when berries were fully coloured, and 24 images of resolution 720x1080 at harvest on September 9th. An Intel D435i RGB-D camera was used to capture the data, which was mounted on a SCOUT 2.0 AgileX robotic platform, a four-wheeled differential steering mobile robot[The analyses presented in this paper only concern RGB images, but we also collected depth data to support a wider range of applications, such as, e.g., estimating the volume of grape bunches.]. The plants were arranged along two, vertically shoot-positioned, North-South oriented rows and hedgerow-trained for a canopy wall extending about 1.3 m above the main wire. Each vine had a ∼1 m cane bearing 10-11 nodes that was raised 80 cm from the ground. Between fruit-set (BBCH 71) and berry touch (BBCH 79) <cit.>, the leaves around bunches were gradually removed for a resulting fully defoliated fruit zone with reduced incidence of berry sunburns <cit.>. Before veraison, eight vines were subjected to crop thinning to control for fruit occlusions caused by excessive fruit density. Accordingly, a basal bunch was kept every second shoot for about six retained bunches/vine; the remaining unthinned vines were clustered into two groups with about 10 and 4 bunches/vine. During data collection, the camera principal axis was rotated to form an angle of approximately 45 with the scanned plant row. The grape bunch regions were annotated using polygonal masks through the Computer Vision Annotation Tool (CVAT)[<https://github.com/opencv/cvat>], and the annotations followed the COCO annotation format[<https://cocodataset.org/>].
A second and more extensive dataset, named VINEPICs22, was collected at the same vineyard facility of Università Cattolica del Sacro Cuore in Piacenza, Italy, on two separate dates in August and September 2022, approximately one year after the previous set. This dataset comprises 165 annotated images, representative of different types of domain shifts, including 1464 grape bunch instances. From this dataset, we extracted subsets of data to control for the incremental changes we expect from the fine-tuning domain (VINEPICs21) to the target domain, as detailed in Table <ref>. Specifically, the VINEPICs22R set includes new images collected from the same grape variety (Red Globe), maintaining the same camera viewpoint and level of defoliation as VINEPICs21. VINEPICs22RV introduces a change in the camera viewpoint (i.e., the camera principal axis is perpendicular to the plant rows), while set VINEPICs22RF was captured on non-defoliated canopies, i.e., before defoliation. Furthermore, sets VINEPICs22C and VINEPICs22O maintain the same camera viewpoint and defoliation level as VINEPICs21 but represent different grape varieties, namely Cabernet Sauvignon (red grape) and Ortrugo (white grape), growing in an experimental vineyard. Table <ref> maps the changes introduced for each fine-tuning and target set to the taxonomy of shift types adopted in <cit.>. The selected target sets cover three shift types: i) input-level shifts, which occur due to variations in the visual appearance of the same environment (e.g., observing the same vineyard on different days introduces lighting variations); ii) feature-level shifts, where the source-target shift is caused by different populations of the same class, in our case, different grape varieties; and iii) natural shifts, which are due to collecting the source and target data in different environments, in our case, different growing conditions (potted vines vs. experimental vineyard). Output-level shifts do not concern our use-case, since the target class (grape bunches) remains unchanged throughout the experiments detailed in this paper.
§.§ Surgical fine-tuning for instance segmentation
Given the focus on image classification tasks, the experiments described in <cit.> consider ResNet architectures <cit.> as a reference and utilize surgical fine-tuning to manipulate the different residual blocks. However, in the context of instance segmentation tasks, supplementary modules are introduced for detecting and segmenting object regions. Region-based segmentation architectures such as the widely utilized Mask R-CNN model <cit.> merge CNN feature extraction layers with a Region Proposal Network (RPN) that extracts Regions of Interest (ROI) from input images. Predicted object regions are then fed to three network heads that operate in parallel, generating predictions for the object class, bounding box, and polygonal mask (Figure <ref>). A popular implementation of this generalized architecture uses a combination of ResNets and Feature Pyramid Networks (FPN) as a backbone for the feature extraction step <cit.>.
To assess the efficacy of surgical fine-tuning in the context of region-based segmentation models, we also ought to examine the impact of selectively fine-tuning the FPN and RPN components, along with the residual blocks and classification heads. Hence, we conduct experiments that compare the following model ablations:
* Tune All: This configuration fine-tunes all model parameters.
* Linear Probing: In this classic configuration, only parameters in the three ROI heads are updated, while earlier layer parameters remain fixed at values learned during pre-training.
* Res n: This setup involves fine-tuning only the ResNet layers, specifically the residual block identified by the number n. We use the keyword "stem" to refer to the first residual block, and the notation "res n" for blocks numbered 2 and higher. This setup follows the rationale applied in <cit.>.
* Joint SFT: Res Block n + FPN at n: This configuration is a variation of the previous setup, where the selected residual blocks are fine-tuned simultaneously with the related Feature Pyramid Network (FPN) operations.
* RPN: In this setup, we only apply surgical fine-tuning to the Region Proposal Network (RPN) in the Mask R-CNN model.
To the best of our knowledge, this is the first study on the application of surgical fine-tuning to instance segmentation tasks.
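To make the ablation procedure concrete, the following Python sketch shows how parameters can be selectively unfrozen by name prefix in a PyTorch model; the prefixes reflect Detectron2's usual ResNet-FPN Mask R-CNN module naming (e.g., backbone.bottom_up.res3, proposal_generator, roi_heads) and should be read as an assumption about the naming scheme rather than a verbatim excerpt of our training code:

import torch

# Hypothetical mapping from ablation name to parameter-name prefixes
# (Detectron2 ResNet-FPN Mask R-CNN naming assumed).
ABLATIONS = {
    "tune_all":       [""],                            # every prefix matches
    "linear_probing": ["roi_heads."],                  # box/class/mask heads only
    "res3":           ["backbone.bottom_up.res3."],
    "res3_fpn":       ["backbone.bottom_up.res3.",
                       "backbone.fpn_lateral3.", "backbone.fpn_output3."],
    "rpn":            ["proposal_generator."],
}

def apply_surgical_finetuning(model: torch.nn.Module, ablation: str) -> int:
    """Freeze all parameters except those whose name starts with one of the
    prefixes selected by `ablation`. Returns the number of trainable parameters."""
    prefixes = ABLATIONS[ablation]
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in prefixes)
        if param.requires_grad:
            n_trainable += param.numel()
    return n_trainable

The returned parameter count corresponds to the figures reported in Table <ref> for each configuration.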
§.§ Implementation details
To apply surgical fine-tuning as described in the previous section, we customised the Detectron2[<https://github.com/facebookresearch/detectron2>] implementation of the Mask R-CNN architecture. The code for reproducing these trials is available at <https://github.com/AIRLab-POLIMI/SFT_grape_segmentation>.
We augmented our training examples by applying various transformations such as Gaussian blur, additive Gaussian noise, random brightness, contrast, and saturation, pixel dropout, and random flipping transformations. During pre-training on the source domain, we utilized ResNet50 and ResNet101 backbones employing Group Normalization (GN). We experimented with different weight initializations following the Detectron2 Mask R-CNN baselines for the COCO instance segmentation task. In the first configuration, we used the weights obtained from the method introduced in <cit.>, where the model was trained from scratch on COCO with an extended training schedule and an augmented jittering scale. In the second configuration, we initialized the model with the weights from the method presented in <cit.>, where Mask R-CNN was trained on COCO instances from scratch, i.e., with random weight initialization, rather than reusing initialization values derived from ImageNet. All models were trained with a batch size of 2 images, and we used an early stopping criterion if the validation loss did not improve for 30 consecutive evaluation checks, with one evaluation check every 220 minibatch iterations. We optimized model parameters using stochastic gradient descent, with a constant learning rate set to 0.01.
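As a sketch of the training-loop logic described above (SGD with a constant learning rate over the trainable parameters only, and early stopping on the validation loss), assuming a generic PyTorch setup rather than the exact Detectron2 hooks we used:

import torch

def build_optimizer(model, lr=0.01):
    # Only parameters left trainable by the surgical fine-tuning step are updated.
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.SGD(trainable, lr=lr)

class EarlyStopper:
    """Stop when the validation loss has not improved for `patience` evaluation checks."""
    def __init__(self, patience=30):
        self.patience, self.best, self.bad_checks = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_checks = val_loss, 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience  # True -> stop training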
§.§ Evaluation metrics
We evaluate the instance segmentation performance by measuring the Average Precision (AP) of predicted object regions, as well as the standard Precision (P), Recall (R), and F1 score of predicted object instances. The metrics were averaged over Intersection over Union (IoU) values ranging from 0.3 to 0.9, to allow for comparison with the results presented in <cit.>. Consistently with <cit.>, only predictions with confidence greater than 0.9 for the grape class are considered in the evaluation.
We prioritize improvements in terms of F1 over individual P and R scores, as detecting all true positives is as important as minimizing the false positives in the target use-case.
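The instance-level scores can be computed by greedily matching predicted and ground-truth masks at each IoU threshold and then averaging over the thresholds, as in the hypothetical sketch below (mask IoU on boolean arrays; the exact matching rules may differ from the evaluation code we used):

import numpy as np

def mask_iou(a, b):
    # a, b: boolean numpy arrays of the same shape
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def prf1(pred_masks, gt_masks, thresholds=np.arange(0.3, 0.95, 0.1)):
    precisions, recalls, f1s = [], [], []
    for thr in thresholds:
        matched_gt, tp = set(), 0
        for p in pred_masks:                      # predictions with confidence > 0.9
            ious = [mask_iou(p, g) for g in gt_masks]
            best = int(np.argmax(ious)) if ious else -1
            if best >= 0 and ious[best] >= thr and best not in matched_gt:
                matched_gt.add(best)
                tp += 1
        fp, fn = len(pred_masks) - tp, len(gt_masks) - tp
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return np.mean(precisions), np.mean(recalls), np.mean(f1s)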
§ RESULTS AND DISCUSSION
Before conducting the ablation study, we pre-trained three Mask R-CNN models on the WGISD dataset. Table <ref> demonstrates that on our task, ResNet50 backbones generally delivered better results than ResNet101 backbones. Furthermore, initializing the model with weights obtained after training from scratch on the COCO dataset <cit.> yielded the best combination of segmented object region quality (in terms of AP) and grape class prediction quality (in terms of F1), compared to using weights from longer training schedules and large-scale jittering <cit.>. Therefore, we have chosen the "Mask R-CNN ResNet50 <cit.>" model as the baseline for fine-tuning on VINEPICs21.
During the fine-tuning stage, we applied the different ablations presented in Section <ref> and evaluated the results on the five target sets selected from VINEPICs22. The top-performing methods in each set of trials, together with the "linear probing" and "tune all" alternatives, are summarised in Table <ref>. The complete evaluation results can be found in the appendix of this paper (Table <ref>). We also report the number of parameters tuned in each configuration in Table <ref>.
Results on the VINEPICs22R set approximate scenarios where the only change introduced is the date and time of data collection, while considering the same grape variety (Red Globe), camera viewpoint, and defoliation level as the fine-tuning set. In this case, fine-tuning the first four CNN layers individually, excluding the stem, ensured a higher AP than tuning all model parameters. In particular, tuning the third ResNet block led to the highest AP and F1 scores, outperforming linear probing.
Changing camera viewpoint, in VINEPICs22RV, led to generally higher scores than the previous set of trials. Notably, the AP scores are even higher than the AP achieved on the VINEPICs21 test set, for the majority of tested ablations. This result may be due to the fact that a perpendicular camera viewpoint is more similar to the setup adopted in the WGISD set, i.e., the source set. Moreover, it is worth noting that the VINEPICs21 test split comprises nearly twice as many grape instances as the VINEPICs22RV set. As a result, the average scores in the VINEPICs21 case provide more conservative performance figures than VINEPICs22, which accounts for approximately 100 instances for each subset (Table <ref>). In this case, tuning the third and fourth ResNet blocks led to the most marked improvement over the "tune all" and "linear probing" performance. In particular, tuning the fourth ResNet block in combination with its FPN layers led to the highest results with respect to the AP of region predictions and the Recall and F1 of instance predictions. Interestingly, the top precision was achieved when tuning the Region Proposal Network in isolation, albeit at the cost of missing more grape instances (false negatives), as indicated by the lower recall scores.
We then considered grape images captured in the presence of occluding foliage (VINEPICs22RF), under temporal and viewpoint conditions that are comparable to the tuning set. Similarly to the case of the temporal shifts introduced in VINEPICs22R, the top performance was achieved by tuning the third ResNet block. However, in this case, while the highest AP score was achieved in the "res3" configuration, the highest F1 was reached by jointly tuning res3 with FPN.
When we shift the target domain towards different grape varieties, the drop in performance from the fine-tuning set to the target sets is significant. Indeed, although the source set (WGISD) already included examples of both red and white grape bunches, the VINEPICs22C and VINEPICs22O sets are drastically more challenging than previously examined sets. First, the number of instances to be detected in each frame is significantly higher in this case, as exemplified in Figure <ref>. Moreover, images in these sets were captured at a lower resolution than WGISD and in lower lighting conditions than both the WGISD and the VINEPICs21 sets. Thus, this setup complicates not only the learning but also the manual annotation of grape instances. Under these challenging conditions, selectively tuning the stem and RPN was ineffective and prevented the model from providing any grape predictions (Table <ref>).
Conversely, applying surgical fine-tuning to intermediate layers resulted in a significant improvement over the near-zero baseline performance. In the case of the Cabernet Sauvignon variety (VINEPICs22C) tuning only the parameters in the fourth ResNet block improved the AP by 10% and the F1 by 12%, compared to "linear probing". In the case of the Ortrugo variety (VINEPICs22O), jointly tuning res4 with FPN outperformed "linear probing" by 8%, in terms of AP, and by 14%, in terms of F1.
Overall, results from these experiments support the view that selecting intermediate network layers can outperform the common practice of only re-training the classification head of the model, when visual domain shifts are introduced. In particular, we found that selecting the third block for fine-tuning best supported temporal changes, as well as changes in the level of plant defoliation. Selecting the fourth ResNet block, instead, contributed to mitigating the impact of viewpoint and grape variety shifts.
Importantly, adopting a surgical fine-tuning approach allowed us to substantially reduce the number of parameter updates, compared to the costly alternative of re-training the complete model from scratch: from over 45M total parameters to nearly 1M and 7M in the res3 and res4 cases (Table <ref>).
§ CONCLUSIONS
To effectively deploy mobile robots for agricultural applications, improving the adaptability of visual perception methods based on Deep Learning to rapidly-changing environments is essential. In particular, we have considered the task of autonomously segmenting grape instances from images collected in real vineyards. In this context, we showed that pre-training on large-scale, high-resolution training examples and fine-tuning only selected layers on more challenging robot-collected data can support knowledge transfer to newly-collected grape images that introduce changes in the camera viewpoint, foliage occlusion level, and grape variety.
Notably, tuning intermediate network layers improves the robustness of the model to input-level and feature-level shifts. These findings complement the evidence gathered in <cit.> on image classification benchmarks, where input-level shifts were best supported by tuning the initial network layers. These results also challenge the popular practice of only tuning the last layers on a new target domain. Even in challenging scenarios where images of novel grape varieties are introduced at test time, surgical fine-tuning on intermediate network blocks allowed us to bootstrap the grape segmentation performance, while drastically reducing the number of parameters required for fine-tuning.
Our evaluation of the utility of surgical fine-tuning to support grape segmentation has been limited to methods derived from the widely-applied Mask R-CNN architecture. Thus, future research directions include the study of instance segmentation models that are based on Transformers, such as <cit.>, for instance. Another transfer learning approach that we have not yet explored concerns the combination of linear probing with the selection of useful features from different layers, as proposed in <cit.>.
The availability of the VINEPICs resource can facilitate the progress in tackling these unexplored research directions.
§ APPENDIX
Table <ref> reports the complete evaluation results for the VINEPICs22 target sets.
§ ACKNOWLEDGMENTS
This paper is supported by the Italian L’Oreal-UNESCO program “For Women in Science”, the European Union's
Digital Europe Programme under grant agreement Nº 101100622 (AgrifoodTEF). The study was conducted within the Agritech National Research Center and received funding from the European Union Next-GenerationEU (PIANO NAZIONALE DI RIPRESA E RESILIENZA (PNRR) – MISSIONE 4 COMPONENTE 2, INVESTIMENTO 1.4 – D.D. 1032 17/06/2022, CN00000022).
IEEEtran
|
http://arxiv.org/abs/2307.01917v1
|
20230704210116
|
Stranding Risk for Underactuated Vessels in Complex Ocean Currents: Analysis and Controllers
|
[
"Andreas Doering",
"Marius Wiggert",
"Hanna Krasowski",
"Manan Doshi",
"Pierre F. J. Lermusiaux",
"Claire J. Tomlin"
] |
eess.SY
|
[
"eess.SY",
"cs.AI",
"cs.RO",
"cs.SY"
] |
List of abbreviations and controller short names used in this paper:
HJ: Hamilton-Jacobi; HJI: Hamilton-Jacobi-Isaacs; ODE: Ordinary Differential Equation; MPC: Model Predictive Control; MDP: Markov Decision Process; RMSE: Root Mean Squared Error; MSE: mean-squared error; RL: Reinforcement Learning; PDE: Partial Differential Equation; ASV: Autonomous Surface Vehicle; BRS: backward reachable set; BRT: backward reachable tube; HJ-MTR: Hamilton-Jacobi Multi-Time Reachability; GPGP: Great Pacific Garbage Patch; CBF: Control Barrier Function; COLREGS: Convention on the International Regulations for Preventing Collisions at Sea; TTR: Time-to-Reach.
Controller short names: ctrl1 (Floating): passive floating controller; ctrl2 (MTR-no-Obs): MTR controller without obstacles; ctrl3 (Switch-MTR-no-Obs): switching controller; ctrl4 (MTR): multi-time HJ reachability closed-loop controller with obstacles; ctrl5 (Switch-MTR): switching controller with obstacles; ctrl6 (SmallDist-MTR): MTR with small disturbance.
Stranding Risk for Underactuated Vessels in Complex Ocean Currents:
Analysis and Controllers
Andreas Doering^1,2,*, Marius Wiggert^1,*, Hanna Krasowski^2, Manan Doshi^3
Pierre F.J. Lermusiaux^3 and Claire J. Tomlin^1
^* A.D. and M.W. have contributed equally to this work.
^1 A.D., M.W., and C.J.T. are with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA. For inquiries contact: [email protected]
^2 A.D. and H.K. are with the School of Computation, Information and Technology of the Technical University of Munich, Germany
^3 M.D. and P.F.J.L. are with the Department of Mechanical Engineering at the Massachusetts Institute of Technology, USA.
The authors gratefully acknowledge the support of the C3.ai Digital Transformation Institute, the IFI fellowship
of the German Academic Exchange Service (DAAD) funded by the Federal Ministry of Education and Research (BMBF)
, the research training group ConVeY funded by the German Research Foundation under grant GRK 2428, the DARPA Assured Autonomy Program, and the ONR BRC program.
August 1, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Low-propulsion vessels can take advantage of powerful ocean currents to navigate towards a destination. Recent results demonstrated that vessels can reach their destination with high probability despite forecast errors.
However, these results do not consider the critical aspect of safety of such vessels: because of their low propulsion which is much smaller than the magnitude of currents, they might end up in currents that inevitably push them into unsafe areas such as shallow areas, garbage patches, and shipping lanes. In this work, we first investigate the risk of stranding for free-floating vessels in the Northeast Pacific. We find that at least 5.04% would strand within 90 days. Next, we encode the unsafe sets as hard constraints into Hamilton-Jacobi Multi-Time Reachability (HJ-MTR) to synthesize a feedback policy that is equivalent to re-planning at each time step at low computational cost. While applying this policy closed-loop guarantees safe operation when the currents are known, in realistic situations only imperfect forecasts are available.
We demonstrate the safety of our approach in such realistic situations empirically with large-scale simulations of a vessel navigating in high-risk regions in the Northeast Pacific. We find that applying our policy closed-loop with daily re-planning on new forecasts can ensure safety with high probability even under forecast errors that exceed the maximal propulsion. Our method significantly improves safety over the baselines and still achieves a timely arrival of the vessel at the destination.
§ INTRODUCTION
Autonomous systems are increasingly deployed for long-term tasks and need to operate energy-efficiently. For systems operating in the oceans or in the air, this leads to a growing interest in utilizing the dynamics of the surrounding flows as a means of propulsion. Stratospheric balloons and airships utilize wind fields <cit.>, while ocean gliders and active drifters exploit ocean currents <cit.>.
Our recent work <cit.> has demonstrated that a vessel with just 0.1 m/s propulsion can navigate reliably to a target region by hitchhiking on ocean currents of up to 2 m/s. This work has further been extended to the application of floating farms which maximize the growth of seaweed over long time horizons <cit.> and to multiple agents that want to stay in proximity to each other to stay connected in local communication networks <cit.>.
However, these approaches do not factor in safety aspects, although the use of ASVs in unmanned and long-term operations may pose crucial safety risks. In the event of significant damage, the ASV may become inoperable and may be abandoned or sunk, resulting in financial losses and potential environmental impacts.
One important safety hazard is shallow waters, especially near strong currents, as the ASV can easily strand. Another significant safety hazard is entering a garbage patch that has a high concentration of marine debris, which can cover an area of up to 1.6 million km² <cit.>, as in the case of the GPGP. The garbage can get entangled in the ASV rotors or damage other components, resulting in loss of control.
Furthermore, collisions with other vessels may cause damage to the ASV and potentially endanger the crew of the other vessel. Shipping lanes are another area of increased risk to the ASV as they are used by large, fast-moving vessels.
Next, we present related work into safe motion planning for autonomous vessels in maritime environments which can mitigate many of these problems.
Related Work
The current research focuses on collision avoidance <cit.> and compliance with the COLREGS <cit.> as safety aspects for motion planning of autonomous vessels.
For example, Zhao et al. <cit.> use reinforcement learning to achieve COLREGS-compliant motion planning for encounters with multiple vessels. The studies <cit.> only look at collision avoidance and achieve this by employing velocity obstacles. In general, this research on safe motion planning for autonomous vessels considers vessels that are fully actuated. Since our vessel has restricted maneuverability due to its underactuation, it is unnecessary to comply with rules for power-driven vessels from the COLREGS. Thus, we consider the safety specification of collision avoidance with largely static obstacles such as shallow areas, shipping lanes, or garbage patches.
Agents operating in three-dimensional flows can evade obstacles or strong currents by utilizing the third dimension
which has been demonstrated for stratospheric balloons by <cit.>. They ensure safe paths by formulating the problem as a discretized Markov Decision Process and a heuristic cost function. This only ensures heuristic safety and relies on a realistic uncertainty distribution of stratospheric winds, which are not available for ocean currents.
While many papers on maritime safety consider underactuated vessels, most consider underactuation due to non-holonomic actuation of vessels such as <cit.>. In this paper, we define underactuated as having maximum propulsion that is less than the magnitude of error of the forecasted flows, posing severe challenges for the safety of ASVs.
The avoidance of dynamic obstacles and forbidden regions including the coordinations of vehicles has been treated using Hamilton-Jacobi Reachability <cit.> and applied in real-time with underwater vehicles to avoid too shallow areas <cit.>.
Robust MPC approaches can guarantee safety under disturbances by ensuring that the system is always in a state from which it can reach a robust control invariant set within a finite time horizon <cit.>. In this robust control invariant set, there always exists a control input that ensures that the system can stay in this set indefinitely. However, in our problem setting with underactuated vessels and imperfect, deterministic ocean current forecasts, no such control invariant set exists, hence robust control with realistic bounds is infeasible.
Our paper makes two main contributions:
First, we perform an empirical evaluation of stranding risk for free-floating vessels in the Northeast Pacific.
Second, we present our methods of HJ-MTR with re-planning on two timescales for safe motion planning of underactuated ASV in a setting with realistic ocean currents and daily forecasts.
Furthermore, we evaluate our controller with several baseline controllers over a large set of simulated missions.
We structure the paper as follows: we define our problem statement in Sec. <ref> and motivate the need for a safety controller with a stranding study in Sec. <ref>. In Sec. <ref> we introduce our method and summarize HJ-MTR. We present experiments we conducted in Sec. <ref> and discuss them in Sec. <ref>. We conclude and present future work in <ref>.
§ PROBLEM STATEMENT
We now define the problem of collision avoidance for underactuated vessels by introducing the flow model, vessel model, and representation of obstacles. We then introduce the notion of stranding as our key performance measure.
§.§ System Dynamics, Obstacles and Target
We consider moving in a general time-varying non-linear flow field v(x, t): ℝ^n × [0,T] → ℝ^n, with x ∈ ℝ^n representing the spatial state, t ∈ [0,T] the time, and n the dimension of the spatial domain. In our case, n=2 as we regard an ASV operating on the ocean surface.
We denote the actuation signal by u(t), taking values in a bounded set 𝕌 ⊆ ℝ^{n_u} with n_u the dimension of the control.
Let x(t) ∈ ℝ^n denote the position of our ASV at time t. Our model for the dynamics of the ASV is given by

ẋ(t) = f(x(t), u(t), t) = v(x, t) + g(u, x, t)   ∀ t ∈ [0, T].

Here, the actuation corresponds to the relative velocity g(u, x, t) of the ASV with respect to the ocean. Hence the absolute velocity of the vehicle is given by the vector sum of the ocean currents at the location of the vehicle and the relative velocity of the vehicle with respect to the currents. The maximum actuation of the vessel is constrained by ||g(u, x, t)||_2 ≤ u_max.
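As an illustrative Python sketch (not the exact simulator used in the experiments), the dynamics and the heading-angle control parameterization used later in the paper can be written as:

import numpy as np

U_MAX = 0.1  # maximum relative speed of the vessel [m/s]

def control_from_heading(theta, u_max=U_MAX):
    # Holonomic actuation with fixed magnitude, steered by the heading angle theta.
    return u_max * np.array([np.cos(theta), np.sin(theta)])

def dynamics(x, theta, t, ocean_current):
    # x: 2D position; ocean_current: callable returning v(x, t) in R^2 [m/s]
    return ocean_current(x, t) + control_from_heading(theta)

def simulate_step(x, theta, t, dt, ocean_current):
    # Simple forward-Euler integration of the ODE dx/dt = v(x,t) + g(u,x,t).
    return x + dt * dynamics(x, theta, t, ocean_current)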
We define the target and obstacle as sets 𝒯 ⊆ ℝ^n and 𝒪 ⊆ ℝ^n, respectively.
We assume that these sets are not time-dependent. However, note that the methods presented can be extended to time-dependent cases by the algorithm described in <cit.>.
§.§ Problem Setting
The agent's goal is to navigate safely and reliably from a start state x_0 at start time t_0 to a target region 𝒯 ⊆ ℝ^n.
We employ the same empirical definition of reliability as <cit.>, defining it as the success rate of a controller navigating from x_0 at t_0 to 𝒯 within a maximum allowed time T_max, over a representative set of start-target missions {x_0, t_0, 𝒯} ∈ 𝕄.
We define stranding as an agent entering the obstacle set before T_max. We then quantify safety as the stranding rate of a controller over the same set of missions.
Here, we define stranding as entering waters with a depth of less than 150 m. This is application specific and can also include entering obstacles such as garbage patches or areas with high traffic density.
We use the oceanographic systems of HYCOM <cit.> and Copernicus <cit.>, similar to prior reasearch <cit.>. The systems each offer a 5-10 day ocean current forecast based on their models with daily updates. They also offer a so-called hindcast with higher accuracy, which is assimilated from further data and published several days later.
For realistic simulation of real operations, the forecasted currents v̂ received by the vessel need to differ from the true currents v that are used by the simulation by the forecast error δ, which needs to be comparable to empirical forecast errors of the oceanographic system.
If the true currents are known a priori and there exists a trajectory that prevents stranding, our method guarantees safety. However, we are interested in realistic settings with a complex empirical distribution of forecast errors δ(x, t) and severe underactuation, e.g. in our experiments
||g(u, x, t)||_2 = 0.1 m/s ≪ RMSE(δ) ≈ 0.2 m/s and currents with max ||v||_2 ≈ 1.4 m/s, where safety despite worst-case forecast errors is impossible.
Hence, in Sec. <ref> we evaluate the performance of our method empirically over a large set of missions 𝕄 in realistic settings, with true and forecasted currents (v, v̂) ∼ 𝕍 that resemble real currents and forecasts. We evaluate the performance based on the indicator functions 𝕀_Suc, 𝕀_Obs that evaluate to 1 if the vessel reaches the target set 𝒯 or enters the obstacle set 𝒪, respectively, and to 0 otherwise:

𝔼_{(x_0, t_0, 𝒯) ∼ 𝕄, (v, v̂) ∼ 𝕍} [ 𝕀_Suc, 𝕀_Obs ],

where the expectation is taken over the initial conditions and over the real and forecasted ocean currents, and the two indicators measure success and stranding.
§ STRANDING STUDY
To illustrate the need for our safety controller, we analyze the rate of stranding for free-floating vessels off the coast of California and Mexico between 15°N and 40°N and between 105°W and 160°W.
We define entering an area with a depth of less than 150 m as stranding, due to the operational needs of some ASVs.
The stranding study can be conducted either analytically or experimentally.
Here, we perform it empirically by sampling 10,000 missions. Each mission consists of a uniformly sampled starting location in the region investigated, outside of the stranding area, and a uniformly sampled starting time. We simulate the trajectories over a time horizon of 10 and 90 days, using Copernicus data for the year 2022. We observe that 1.67% of missions strand within 10 days and 5.04% strand within 90 days. In Figure <ref> we show the spatial distribution of the stranding rate over 90 days. We can only count strandings that occur in the defined region, as we stop the simulation for platforms leaving the regarded region, which occurred for 1.78% of missions within 10 days and 18.75% within 90 days.
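A minimal sketch of this sampling procedure, assuming a hindcast current field and a bathymetry lookup are available as callables (the function names and the Euler integration are illustrative):

import numpy as np

def run_stranding_study(n_missions, horizon_days, sample_start, currents, depth_m,
                        dt_hours=1.0, region_contains=lambda x: True):
    strandings, left_region = 0, 0
    for _ in range(n_missions):
        x, t = sample_start()                            # uniform start position and time
        for _ in range(int(horizon_days * 24 / dt_hours)):
            x = x + dt_hours * 3600.0 * currents(x, t)   # free floating: u = 0
            t = t + dt_hours * 3600.0
            if not region_contains(x):
                left_region += 1                         # stop once the platform leaves the region
                break
            if depth_m(x) < 150.0:                       # stranding threshold
                strandings += 1
                break
    return strandings / n_missions, left_region / n_missions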
§ SAFE HJ CONTROLLER
The forecasts provided by the available ocean forecasting systems are deterministic. This prohibits us from applying probabilistic methods that would require a realistic distribution of currents <cit.>.
Ocean current forecasts exhibit a distribution shift to real currents, e.g., for HYCOM data <cit.> the global forecast error for speed is RMSE(δ) = 0.2 m/s, with a vector correlation <cit.> decreasing over the 5-day forecast horizon <cit.>. For an underactuated vessel with max ||g(u, x, t)||_2 = 0.1 m/s, safe navigation cannot be robust against a disturbance of d = 0.2 m/s.
Hence we choose to not use robust control
but to ensure safety despite forecast errors by re-planning on two timescales. First, we compute the value function daily for every forecast we receive using HJ-MTR <cit.>. Second, for every time step, e.g. 10, we re-plan by taking the spatial gradient of the value function at x to obtain a time-optimal control. This is necessary because the real currents differ from the forecasted currents v_t ≠v̂_̂t̂, thus, we will be in a different spatial state x than predicted.
§.§ Multi-Time HJ Reachability for Closed-Loop Control
We develop a controller from the theoretic approach of HJ-MTR, which has been derived in <cit.>. For completeness, we summarize the technique here.
We first define a modified dynamical system f_a such that the state of the vessel becomes frozen when it hits either the target or an obstacle.
ẋ(t) = f_a(x, u, t) =
  0,                      if x ∈ 𝒯 ∪ 𝒪,
  v(x, t) + g(u, x, t),   otherwise.
We define the modified loss function ℓ(x, t) such that we earn a reward based on how early we reach the target:

ℓ(x, t) =
  -α,   if x ∈ 𝒯 and x ∉ 𝒪,
  0,    otherwise.
Finally, we define the terminal cost function φ(x) to be infinitely high if the ASV terminates in an obstacle and equal to the distance from the target set otherwise:

φ(x) =
  ∞,          if x ∈ 𝒪,
  d(x, 𝒯),   otherwise.
Finally, we obtain the Hamilton-Jacobi PDE which lets us solve for the value function J:

∂J/∂t(x, t) =
  α,                                                                   if x(t) ∈ 𝒯 ∩ (𝒪)^c,
  0,                                                                   if x(t) ∈ 𝒪,
  -∇_x J(x, t) · v(x, t) - min_{‖u‖_2 ≤ u_max} ∇_x J(x, t) · u,        otherwise,

with terminal condition

J(x, T) =
  ∞,          if x ∈ 𝒪,
  d(x, 𝒯),   otherwise.
This value function J^* subsequently allows us to compute a feedback policy for this system given by
π_v̂(x, t) = argmin_{u ∈ 𝕌} [ ∇_x J^*(x, t) · f(x, u, t) ]   ∀ x ∉ (𝒪 ∪ 𝒯),
           = -u_max ∇_x J^*(x, t) / ‖∇_x J^*(x, t)‖_2.
This policy guarantees safety when the value function was computed with the true currents v. However, in realistic settings with only forecasts v̂ available, we apply π_v̂ closed-loop which is equivalent to replanning at every time step. Applying this policy in closed-loop (see Fig. <ref>) means we take the time-optimal control at each state, which is equivalent to full time horizon replanning at each time step.
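In practice, the value function is available on a space-time grid and the feedback control is obtained by interpolating its spatial gradient; a hedged sketch (the interpolation scheme and grid layout are assumptions):

import numpy as np

def optimal_control(x, t, grad_J_interp, u_max=0.1):
    """grad_J_interp(x, t) returns the spatial gradient of the value function J*
    at (x, t), e.g. via linear interpolation of finite differences on the grid."""
    g = grad_J_interp(x, t)
    norm = np.linalg.norm(g)
    if norm < 1e-9:
        return np.zeros_like(g)          # no descent direction; do not actuate
    return -u_max * g / norm             # steer against the gradient of J*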
We introduce the safe TTR map D^*, which is easier to interpret and can be easily computed from the value function.
D^*(x, t) = T + J^*(x, t) -t, ∀(x, t) s.t., J^*(x,t) ≤ 0
We illustrate the interpretability of the safe TTR in Fig. <ref>. If D^*(x,t) is e.g. 3, it means that a vessel starting at x at time t can reach the target in 3 time units when following the optimal control (Eq. <ref>).
We solve the HJ-MTR in periodic intervals to update the safe TTR value function D^*. In our work, we solve it once per day upon receiving new forecasts similar to <cit.>.
This re-planning is necessary because the actual next state typically differs from the predicted one, which is likely due to the forecast error δ.
In summary, there are three core advantages of our method compared to classical MPC with non-linear programming. We can guarantee time-optimality in non-linear dynamics over the full time horizon. We require very low online computation to extract the gradient at each step. In case the vessel cannot reach the destination, the optimal control will attempt to minimize the terminal distance to the target, while non-linear programming would not provide us with a trajectory in such a case.
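The two-timescale scheme can be summarized by the following hypothetical control loop, reusing optimal_control from the sketch above; the HJ-MTR solver and forecast interface are placeholders for components that are not specified here:

def navigate(x0, t0, t_max, dt, get_forecast, solve_hj_mtr, grad_interp_from, step):
    x, t = x0, t0
    value_fn = solve_hj_mtr(get_forecast(t))          # slow timescale: plan on current forecast
    last_plan = t
    while t < t_max:
        if t - last_plan >= 24 * 3600:                # re-plan once per day on the new forecast
            value_fn = solve_hj_mtr(get_forecast(t))
            last_plan = t
        grad_J = grad_interp_from(value_fn)
        u = optimal_control(x, t, grad_J)             # fast timescale: gradient feedback each step
        x, t = step(x, u, t, dt), t + dt
    return x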
§ EXPERIMENTS
We conduct experiments to ascertain that our method of using HJ-MTR with obstacles is able to reach the target without colliding with obstacles despite forecast errors.
We simulate a large number of missions on realistic ocean currents and compare the performance of our control schema to baseline methods.
§.§ Experimental Set-Up
Our experiments investigate the stranding rate and reliability of several controllers for navigating a two-dimensional ASV with fixed-magnitude, holonomic actuation of ||g(u, x, t)||_2 = ||u||_2 = 0.1 m/s. The control input is the angle θ for steering the ASV in ocean currents with magnitudes ||v(x,t)||_2 ∈ [0, 1.4] m/s, which the vessel utilizes to reach its target region. Additionally, we describe how we ascertain the realism of our ocean forecast simulation, the creation of our obstacle sets, and the generation of a representative set of missions. Subsequently, we explain our baseline methods and evaluation metrics.
Realistic Ocean Forecast Simulation
In a real-world setting, a vessel can receive the most recent forecast in regular intervals, e.g. daily, and provide it to the control methods to perform replanning.
We employ ocean current hindcast data from Copernicus <cit.> and HYCOM <cit.> for the region off the coast of California and Mexico between 15°N and 40°N and between 105°W and 160°W.
We simulate the system dynamics based on hindcast as the true flow v(x,t) using Copernicus hindcasts and use a series of 5 days of HYCOM hindcasts as forecasts for planning. It should be noted that, unlike HYCOM, Copernicus incorporates tidal currents into its forecasts.
We want to ensure a realistic simulation of the forecast error δ. This forecast error can be measured with various metrics on the currents such as RMSE, vector correlation, and separation distance <cit.>. In our simulations, these are on average 0.18 m/s RMSE, which is close to the validation RMSE of 0.19 m/s of the HYCOM forecast error <cit.>. We measured a vector correlation of 0.63 compared with 0.64 for HYCOM <cit.> and 0.62 for Copernicus <cit.>, each measured at t=71, with a value of 2 representing perfect correlation and 0 no correlation. Thus, our simulation set-up represents realistic situations well.
Obstacles derived from Bathymetry
The bathymetry data we employ is the GEBCO 2022 grid <cit.>. It is a global, continuous terrain grid with a resolution of 15 arc-seconds. We coarsen it to the same resolution as the current data (5 arc-minutes) by taking the maximum in each grid cell, thereby overestimating the elevation. We further precompute a distance map containing, for each grid cell, the minimal distance to obstacle areas with a depth under 150 m. This is done by a Breadth-First-Search that starts in the obstacle area with a distance of zero and explores outwards. We use this distance as the switching condition for ctrl3.
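The distance map can be precomputed with a multi-source breadth-first search over the grid, for example:

from collections import deque
import numpy as np

def obstacle_distance_map(is_obstacle):
    """is_obstacle: 2D boolean array (True where depth < 150 m).
    Returns the grid distance (in cells) of every cell to the nearest obstacle cell."""
    h, w = is_obstacle.shape
    dist = np.full((h, w), np.inf)
    queue = deque((i, j) for i in range(h) for j in range(w) if is_obstacle[i, j])
    for i, j in queue:
        dist[i, j] = 0                               # obstacle cells seed the search
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni, nj] == np.inf:
                dist[ni, nj] = dist[i, j] + 1
                queue.append((ni, nj))
    return dist

Converting the cell distance to a metric distance only requires multiplying by the grid spacing.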
Large Representative Set of Missions
We generate a representative set of 1146 start-target missions 𝕄 with the following procedure. We uniformly sample target center points x_𝒯,center spatially in the region introduced in <ref>.
We reject points with a distance below 0.5 to the boundary of the region or with a distance to obstacles below 0.025. We want to generate missions with a higher risk of stranding, hence we limited the maximum distance to an obstacle to 3.
We then uniformly sample final times t_T for a time horizon of 10 days while using data from 2022.
To validate that each mission is feasible for ctrl2, introduced in <cit.>, given the true currents v(x,t), we calculate the BRT using HJ-MTR from x_𝒯,center at t_𝒯. We sample a start position from the BRT so that the ASV can reach the target within 5-9 days. Finally, we define the target region 𝒯 to be a circular region with radius r_𝒯 = 0.1 around x_𝒯,center. Note that the introduction of obstacles increases the difficulty of the mission and renders some of these missions infeasible, as they can block a desired path.
Controllers
It has been shown that navigating from start to target with an underactuated ASV can be successfully done by hitchhiking ocean currents <cit.>. Yet their controller ctrl2 does not incorporate safety aspects into its value function. Our work seeks to extend its capabilities by incorporating collision avoidance into planning <cit.>. We hence compare the performance of several controllers to the baseline presented in <cit.>.
The first controller is ctrl1, with an actuation of u = 0. It is the same controller used in <ref>; it serves as a lower bound on performance and illustrates how difficult it is to complete the missions safely.
The second controller, ctrl3, is a reactive safety controller, meaning it does not reason about currents for safety. Instead, it is a switching controller whose switching condition is the distance to the closest obstacle. If the distance is below a threshold, the safety controller takes over and actuates with full actuation in the direction of the largest distance to obstacles. We set the threshold to 20. Once the distance to obstacles is larger than the threshold, the navigation controller ctrl2 takes over again. We use it to compare against a simple solution that adds safety functionality to the baseline controller ctrl2.
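A sketch of this reactive switching logic (the threshold is in the same distance units as the precomputed map; the helper function names are illustrative):

import numpy as np

def switching_control(x, t, distance_to_obstacle, direction_away_from_obstacles,
                      navigation_control, threshold=20.0, u_max=0.1):
    if distance_to_obstacle(x) < threshold:
        d = direction_away_from_obstacles(x)   # vector towards larger obstacle clearance
        return u_max * d / np.linalg.norm(d)   # full actuation away from obstacles
    return navigation_control(x, t)            # otherwise defer to the navigation controller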
Our ctrl4 is the controller we presented in detail in <ref>. It is the primary contribution of our paper. There are two ablations of ctrl4. The ctrl5 controller is a switching controller that uses ctrl4 for navigation. The switching condition is the same as for ctrl3.
The last controller we examine is ctrl6, which is the ctrl4 controller assuming an unrealistically low disturbance of d=0.05 m/s. As explained in <ref>, in a realistic setting with d=0.2 m/s we cannot be robust with an actuation of only u_max=0.1 m/s.
Evaluation Metrics
We define our key metric stranding rate as the rate of a controller entering the obstacle set over the set of missions 𝕄. We further evaluate the reliability, defined as the success rate of a controller over the set of missions 𝕄 <cit.>.
§.§ Experimental Results
We evaluate the controllers' performances over |𝕄| = 1146 start-target missions and run the simulation for T_max=240 h. If the ASV collides with an obstacle, we terminate the mission and count it as stranded; if it reaches the target region within T_max, we count it as a success; if it does neither, we count it as a timeout.
In complex flows with forecast errors and in close proximity to obstacles, our controller ctrl4 has a stranding rate of only 0.96%, compared to 4.71% of the baseline ctrl2 (Table <ref>).
We evaluate if the stranding rate of our controllers is lower than the baseline of ctrl2 in a statistically significant manner by performing a one-sided two-sample z proportion test for the other controllers.
Let Γ be the stranding rate of a controller and our null hypothesis be:
H_0: Γ_ctrl2 = Γ_controller.
With the alternate hypothesis:
H_A: Γ_ctrl2 > Γ_controller.
The stranding rate of each of these controllers is lower than that of ctrl2 in a statistically significant way (p-values: ctrl3 p=2.6×10^-3, ctrl4 p=3.1×10^-8, ctrl5 p=9.3×10^-7, ctrl6 p=9.5×10^-5). Additionally, the success rate of ctrl4 is not reduced by the safety constraints; it is in fact the highest among all controllers.
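The test statistic is the standard one-sided two-sample z-test for proportions with a pooled variance estimate; a sketch:

from math import sqrt
from scipy.stats import norm

def one_sided_proportion_ztest(strand_base, strand_ctrl, n):
    """H0: equal stranding rates; HA: baseline rate > controller rate.
    strand_base / strand_ctrl: number of stranded missions, n: missions per controller."""
    p1, p2 = strand_base / n, strand_ctrl / n
    p_pool = (strand_base + strand_ctrl) / (2 * n)
    se = sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (p1 - p2) / se
    return 1 - norm.cdf(z)   # one-sided p-value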
§ DISCUSSION
We note that our controllers exhibit a lower success rate than in <cit.>. We believe this is due to three differences in the set-up. First, the time-to-target for each mission in <cit.> is between 20-120 h with T_max=150 h, while our sampled time-to-target is 120-216 h with T_max=240 h.
Hence our missions are longer and have smaller time buffers to reach the target. In extreme cases, their missions are expected to finish in 20 h with 130 additional hours to reach the target before T_max, while in the worst case, our missions can have a 216 h mission with a buffer of only 24 h.
Second, mission feasibility is only verified for ctrl2 without considering obstacles (Sec. <ref>). Hence, missions may be infeasible for ctrl2 because it strands on obstacles, and infeasible for the other controllers because circumventing obstacles in the path costs additional time.
Third, we sample missions with a maximum distance to shore of 3, exposing the vessels more to tidal currents near shore.
§ CONCLUSION AND FUTURE WORK
In this work, we have demonstrated that HJ-MTR with obstacles can be used to reduce the rate of stranding even in complex flows using daily forecasts with large errors. We evaluated our method over a large set of 5-9 day start-to-target missions distributed spatially near the Coast of California, Hawaii, and the Baja California area and temporally across the year 2022 using realistic ocean currents. In our experiments, our method has achieved a stranding rate of 0.96% which is significantly lower than the baseline controllers and also has a slightly higher success rate.
While we have demonstrated the ability of our method with two-dimensional ocean currents, we emphasize that it is also applicable in a three-dimensional setting such as underwater or in the air. Furthermore, HJ-MTR is able to handle dynamic constraints <cit.>. However, including dynamic obstacles such as ships that move fast and change their course would require a higher frequency of re-planning to account for those changes, resulting in higher computational costs.
In the future, we plan to model zones of a potential hazard, e.g. shipping lanes and garbage patches, as soft constraints, where instead of preventing entering altogether it would be beneficial to e.g. minimize the time spent therein.
By reducing time in shipping lanes an ASV could avoid many vessels. As of now, it is also uncertain how underactuated ASVs would be classified under the COLREGS and if evasion is necessary or if they should stop their propulsion to be floated along a vessel <cit.>.
Getting the rotors of an ASV entangled in the garbage can render it inoperable, hence it is beneficial to avoid areas with a larger density of garbage such as the center of the GPGP, while not making the whole 1.6 million km² <cit.> of the GPGP an obstacle to be avoided. We can investigate using a risk-based extension of a soft-edge and dynamic forbidden region <cit.>.
IEEEtran
|
http://arxiv.org/abs/2307.02241v1
|
20230705123229
|
Approximate Turing kernelization and lower bounds for domination problems
|
[
"Stefan Kratsch",
"Pascal Kunz"
] |
cs.DS
|
[
"cs.DS"
] |
An α-approximate polynomial Turing kernelization is a polynomial-time algorithm that computes an (α c)-approximate solution for a parameterized optimization problem when given access to an oracle that can compute c-approximate solutions to instances with size bounded by a polynomial in the parameter.
Hols et al. [ESA 2020] showed that a wide array of graph problems admit a (1+ε)-approximate polynomial Turing kernelization when parameterized by the treewidth of the graph and left open whether Dominating Set also admits such a kernelization.
We show that Dominating Set and several related problems parameterized by treewidth do not admit constant-factor approximate polynomial Turing kernelizations, even with respect to the much larger parameter vertex cover number, under certain reasonable complexity assumptions.
On the positive side, we show that all of them do have a (1+ε)-approximate polynomial Turing kernelization for every ε>0 for the joint parameterization by treewidth and maximum degree, a parameter which generalizes cutwidth, for example.
§ INTRODUCTION
The gold standard in kernelization is a polynomial (exact) kernelization, i.e. a compression of input instances to a parameterized problem to a size that is polynomial in the parameter such that an exact solution for the original instance can be recovered from the compressed instance.
Several weaker notions of kernelization have been developed for problems that do not admit polynomial kernelizations.
Turing kernelization <cit.> does away with the restriction that the solution must be recovered from a single compressed instance and instead allow several small instances to be created and the solution to be extracted from solutions to all of these instances.
Lossy kernelizations <cit.>, in turn, do away with the requirement that the solution that can be recovered from the compressed instance be an optimum solution, allowing the solution to the original instance to be worse than optimal by a constant factor.
Hols et al. <cit.> introduced lossy Turing kernelizations, which allow both multiple compressed instances and approximate solutions, and showed that several graph problems parameterized by treewidth admit (1+ε)-approximate Turing kernelizations for every ε>0.
They left as an open question whether or not the problem Dominating Set parameterized by treewidth also admits a constant-factor approximate Turing kernelization.
Our contribution.
We answer this question in the negative and show that a 2^{log^c vc}-approximate polynomial Turing kernelization for Dominating Set[vc] (we use FOO[X] to refer to the problem FOO parameterized by X), where vc refers to the vertex cover number, would contradict the Exponential Time Hypothesis.
We prove analogous lower bounds for Capacitated Dominating Set[vc], Connected Dominating Set[vc] for Hitting Set[U], where U is the universe, and for Node Steiner Tree[V∖ T] where V∖ T is the set of non-terminal vertices.
Of course, the lower bounds for the vertex cover number also imply lower bounds for the smaller parameter treewidth.
Using a second approach for obtaining lower bounds for approximate Turing kernelizations, essentially a gap-introducing polynomial parameter transformation (PPT), we show that Independent Dominating Set[] does not have an α-approximate polynomial Turing kernelization for any constant α, unless every problem in the complexity class MK[2] has a polynomial (exact) Turing kernelization, which would contradict a conjecture by Hermelin et al. <cit.>.
We then show that for the joint parameterization by treewidth and the maximum degree, each of the aforementioned domination problems does have a (1+ε)-approximate polynomial Turing kernelization for every ε > 0.
This generalizes, for instance, the parameterizations by cutwidth or bandwidth.
Related work.
For an introduction to kernelization, including brief overviews on lossy and Turing kernelizations, we refer to the standard textbook <cit.>.
Binkele-Raible et al. <cit.> introduced the first Turing kernelization for a problem that does not admit a polynomial kernelization.
Since then numerous Turing kernelizations have been published for such problems.
Hermelin et al. <cit.> introduced a framework, which we will make use of in <ref>, for ruling out (exact) polynomial Turing kernelizations.
Fellows et al. <cit.> were the first to combine the fields of kernelization and approximation.
A later study by Lokshtanov et al. <cit.> introduced the framework of lossy kernelization that has become more established.
Finally, Hols et al. <cit.> gave the first approximate Turing kernelizations.
Approximation algorithms and lower bounds for domination problems have received considerable attention.
They are closely related to the problems Hitting Set and Set Cover.
A classical result by Chvátal <cit.> implies a polynomial-time (log n)-factor approximation for Dominating Set.
There are also (log n)-factor approximations for Connected Dominating Set <cit.> and Capacitated Dominating Set <cit.>, but Independent Dominating Set does not have an n^{1-ε}-approximation for any ε > 0, unless P = NP <cit.>.
Chlebík and Chlebíková <cit.> showed that
Dominating Set, Connected Dominating Set, and Capacitated Dominating Set do not have constant-factor approximations even on graphs with maximum degree bounded by a constant Δ and that Independent Dominating Set does not have better than a Δ-factor approximation.
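For concreteness, the classical (log n)-factor approximation for Dominating Set mentioned above is the greedy Set Cover algorithm applied to the closed neighborhoods; a sketch, not taken from the works cited here:

def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to its set of neighbors.
    Greedily picks the vertex that dominates the most not-yet-dominated vertices;
    by Chvatal's analysis of greedy Set Cover this yields an O(log n) approximation."""
    closed = {v: {v} | set(adj[v]) for v in adj}   # closed neighborhoods N[v]
    undominated = set(adj)
    solution = set()
    while undominated:
        v = max(adj, key=lambda u: len(closed[u] & undominated))
        solution.add(v)
        undominated -= closed[v]
    return solution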
§ PRELIMINARIES
Graphs.
We use standard graph terminology and all graphs are undirected, simple, and finite.
For a graph G=(V,E) and X⊆ V, we use N[X] ≔ X ∪ {v ∈ V | ∃ u ∈ X: {u,v} ∈ E} to denote the closed neighborhood of X and, if v ∈ V, then we let N[v] ≔ N[{v}].
We will use Δ and vc to refer to the maximum degree and vertex cover number of a graph, respectively.
Let G=(V,E) be a graph X⊆ V a vertex set.
The set X ⊆ V is a dominating set if N[X] = V.
It is an independent set if there is no edge {u,v}∈ E with u,v ∈ X.
It is an independent dominating set if it is both an independent and a dominating set.
It is a connected dominating set if it is a dominating set and the graph G[X] is connected.
A capacitated graph G=(V,E,) consists of a graph (V,E) and a capacity function V →.
A capacitated dominating set in a capacitated graph G=(V,E,) is a pair (X,f) where X ⊆ V and f V∖ X → X such that (i) v and f(v) are adjacent for all v∈ V ∖ X and (ii) f^-1(v)≤(v) for all v∈ X.
The size of (X,f) is X.
A tree decomposition of a graph G=(V,E) is a pair =(T=(W,F),{X_t}_t∈ W) where T is a tree, X_t ⊆ V for all t∈ W, ⋃_t ∈ W X_t = V, for each e ∈ E there is a t∈ W such that e⊆ X_t, and for each v∈ V the node set {t ∈ W | v ∈ X_t} induces a connected subgraph of T.
The width of is max_t ∈ WX_t -1.
The treewidth tw(G) of G is the minimum width of any tree decomposition of G.
A rooted tree decomposition consists of a tree decomposition =(T=(W,F),{X_t}_t∈ W) along with a designated root r ∈ W.
Given this rooted decomposition and a node t ∈ W, we will use V_t⊆ V to denote the set of vertices v such that v ∈ X_t' for some node t' that is a descendant (possibly t itself) of t in the rooted tree (T,r).
The rooted tree decomposition is nice if X_r=∅ and X_t =∅ for every leaf of T and every other node t of T is of one of three types: (i) a forget node, in which case t has a single child t' and there is a vertex v∈ V such that X_t' = X_t ∪{v}, (ii) an introduce node, in which case t has a single child t' and there is a vertex v∈ V such that X_t = X_t'∪{v}, or (iii) a join node, in which case t has exactly two children t_1 and t_2 and X_t = X_t_1 = X_t_2.
Turing kernelization.
A parameterized decision problem is a set L⊆Σ^* ×ℕ.
A Turing kernelization of size f:ℕ→ℕ for a parameterized decision problem L is a polynomial-time algorithm that receives as input an instance (x,k) ∈Σ^* ×ℕ and access to an oracle that, for any instance (x',k') ∈Σ^* ×ℕ with |x'| + k' ≤ f(k), outputs whether (x',k') ∈ L in a single step, and decides whether (x,k) ∈ L.
It is a polynomial Turing kernel if f is polynomially bounded.
A polynomial parameter transformation (PPT) from one parameterized decision problem L to a second such problem L' is a polynomial-time computable function f:Σ^* ×ℕ→Σ^* ×ℕ such that (x,k) ∈ L if and only if (x',k') ∈ L' and there is a polynomially bounded function p such that k' ≤ p(k) for all (x,k),(x',k') ∈Σ^*×ℕ with f(x,k) = (x',k').
A parameterized minimization problem is defined by a computable function Σ^* ×ℕ×Σ^* →ℝ∪{∞, -∞}.
The optimum value for an instance (I,k) ∈Σ^* ×ℕ is _(I,k) min_x ∈Σ^*(I,k,x).
We will say that a solution x ∈Σ^* is α-approximate if (I,k,x) ≤α·_(I,k).
In order to simplify notation, we will allow ourselves to write (I,x) instead of (I,k,x) and _(I) instead of _(I,k) if those values do not depend on k.
The problems (Capacitated/Connected/Independent) Dominating Set are defined by (G,X) X if X is a (capacitated/connected/independent) dominating set in G and (G,X) ∞, otherwise.
The problem Node Steiner Tree is defined by NST((G,T),X) X if G[X∪ T] is connected and NST((G,T),X) ∞, otherwise.
Hitting Set is a problem whose input (U,𝒮) consists of a set U and a family 𝒮⊆ 2^U of nonempty sets, a solution X is a subset of U, and HS(X) ∞ if there is an S ∈𝒮 such that X ∩ S = ∅ and HS(X) |X|, otherwise.
Let α∈ℝ with α≥ 1 and let be a parameterized minimization problem.
An α-approximate Turing kernelization of size f:ℕ→ℕ for is a polynomial-time algorithm that given an instance (I,k) computes a (cα)-approximate solution when given access to an oracle for P which outputs a c-approximate solution to any instance (I',k') with |I'| + k' ≤ f(k) in a single step.
It is an α-approximate polynomial Turing kernelization if f is polynomially bounded.
Note that the algorithm is not given access to c, the approximation factor of the oracle, and is not allowed to depend on c.
In practice, it can also be helpful for the approximate Turing kernelization algorithm to receive a witness for the parameter value k as input.
Similarly to Hols et al. <cit.>, we will assume that our approximate Turing kernelizations for the parameterization treewidth plus maximum degree are given as input a graph G and a nice tree decomposition of width tw(G).
Alternatively, one could also use the polynomial-time algorithm due to Feige et al. <cit.> to compute a tree decomposition of width (√(logtw(G))·tw(G)) and then use this tree decomposition.
We will also assume that the given tree decomposition is nice, which is not really a restriction, because there is a polynomial-time algorithm that converts any tree decomposition into a nice tree decomposition without changing the width <cit.>.
§ LOWER BOUNDS
§.§ Exponential-time hypothesis
In the following, we will show that several problems do not have an approximate Turing kernelization assuming the exponential-time hypothesis (ETH).
The proof builds on a proof due to Lokshtanov et al. <cit.> showing that Hitting Set parameterized by the size of the universe does not admit a lossy (Karp) kernelization unless the ETH fails.
Let 3-CNF-SAT denote the satisfiability problem for Boolean formulas in conjunctive normal form with at most three literals in each clause.
The exponential-time hypothesis (ETH) <cit.> states that there is a fixed c > 0 such that 3-CNF-SAT is not solvable in time 2^cn· (n + m)^O(1), where n and m are the numbers of variables and clauses, respectively.
Let and ' be parameterized minimization problems and fΣ^* ×ℕ→_+ a real-valued function that takes instances of as input.
An f-approximation-preserving polynomial parameter transformation (f-APPT) from to ' consists of two algorithms:
* a polynomial-time algorithm (the reduction algorithm) that receives as input an instance (I,k) for and outputs an instance (I',k') for ' with k' ≤ p(k) for some polynomially bounded function p and
* a polynomial-time algorithm (the lifting algorithm) that receives as input the instances (I,k), (I',k'), where the latter is the output of when given the former, as well as a solution x for (I',k') and outputs a solution y for (I,k) with
(I,k,y) ≤f( I,k ) ·_(I,k) ·'(I',k,x)/_'(I',k).
We will use the following lemma, which is a weaker version of a result by Nelson <cit.>.
If there are a constant c<1 and a polynomial-time algorithm that computes an (2^log^c U)-factor approximation for Hitting Set, then the ETH fails.
Let be a parameterized minimization problem that satisfies the following two conditions:
* There is an f-APPT from Hitting Set[U] to with f(U,) ∈(2^log^c_1U) for some constant c_1 <1.
* There is a constant c_2<1 and a polynomial-time algorithm that computes a (2^log ^c_2I)-factor approximation for .
Then, there is no (2^log ^c_3 k)-approximate polynomial Turing kernelization for for any c_3<1, unless the ETH fails.
Suppose that satisfies conditions (a) and (b) and admits a (2^log ^c_3 k)-approximate polynomial Turing kernelization of size (k^d).
Furthermore, assume that p(n) ≤(n^d') where p is the polynomial parameter bound for the reduction algorithm of the f-APPT.
Choose any constant c with max{c_1,c_2,c_3} < c < 1 and observe that for any constant α and i ∈{1,2,3} we have that 2^αlog^c_i n ≤(2^log^c n).
We will give a (2^log^c U)-approximation algorithm for Hitting Set.
By <ref>, this proves the claim.
The algorithm proceeds as follows.
Given an instance I=(U,) of Hitting Set as input, it first applies the reduction algorithm of the APPT to obtain an instance (I',k) of P.
Then, it runs the given approximate Turing kernelization on (I',k).
Whenever this Turing kernelization queries the oracle, this query is answered by running the approximation algorithm given by condition (b).
Once the Turing kernelization outputs a solution X, the algorithm calls the lifting algorithm of the APPT on X, (I,U), and (I',k).
The algorithm outputs the solution Y given by the lifting algorithm.
It remains to show that Y≤ (2^log^c_3U·(I)).
First, observe that the (2^log ^c_2I)-factor approximation algorithm is only run on instances (J,ℓ) with J≤(k^d), so it always outputs a solution Z with (J,ℓ,Z) ≤(2^log ^c_2 k^d·_(J,ℓ)).
Hence, in the algorithm described above the Turing kernelization is given a (2^dlog ^c_2 k)-approximate oracle, so it follows that (I',k,X) ≤(2^log ^c_2 k· 2^d log ^c_3 k·_(I',k)).
Since k ≤(U^d'), it follows that (I',k,X) ≤(2^log ^c_3U^d'· 2^d log ^c_2U^d'·_(I',k)).
Therefore:
Y ≤f( I,k ) ·(I) ·(I',k,X)/_(I',k)≤( 2^log ^c_3U^d'· 2^d log ^c_2U^d'· f(I,k) ·(I) )
≤( 2^log ^c_3U^d'· 2^d log ^c_2U^d'· f(I) ·(I) )
≤( 2^log ^c_3U^d'· 2^d log ^c_2U^d'· 2^log^c_1U·(I) )
≤(2^log^cU·(I)).
With <ref>, we can prove approximate Turing kernelization lower bounds for several parameterized minimization problems.
Unless the ETH fails, there are no (2^log ^c k)-approximate polynomial Turing kernels, for any c<1 and where k denotes the respective parameter, for the following parameterized minimization problems:
* Hitting Set[U],
* Dominating Set[vc],
* Capacitated Dominating Set[vc],
* Connected Dominating Set[vc], and
* Node Steiner Tree[V∖ T].
For each problem, we will prove conditions (a) and (b) from <ref>.
*
* Immediate.
* Chvátal <cit.> gives a (log|U|)-factor approximation algorithm.
*
* The following folklore reduction is a 1-APPT.
Let (U,) be an instance of hitting set.
The algorithm creates a graph G as follows.
For every x ∈ U and for every S ∈, G contains vertices v_x and w_S, respectively, and G also contains an additional vertex u.
The vertices {v_x | x ∈ U}∪{u} form a clique and there is an edge between v_x and w_S if and only if x∈ S.
Observe that the vertices {v_x | x ∈ U} form a vertex cover in G, so clearly vc(G) ≤|U|, and that (U,𝒮) = (G).
Let X be a dominating set in G.
Let X' be obtained from X by removing u and replacing any w_S by an arbitrary v_x with x∈ S (such an element x must exist, as we assume that all S ∈𝒮 are non-empty); a code sketch of this reduction and the corresponding lifting is given after this proof.
The algorithm outputs Y {x ∈ U | v_x ∈ X'}.
This set is a hitting set, because for any S ∈𝒮 one of the following cases applies: (i) w_S ∉ X, meaning that X contains a neighbor v_x of w_S. Then, also x∈ X' and, hence x ∈ Y and x∈ S. (ii) w_S ∈ X, meaning that w_S is replaced by v_x with x∈ S when creating X'. Then x∈ Y and x∈ S.
Finally,
Y = X = (U,) ·X/(G),
since (U,) = (G).
* The (log n)-factor approximation for Hitting Set <cit.> can also be used in a straightforward manner to approximate Dominating Set.
*
* Any instance of Dominating Set can be transformed into an equivalent instance of Capacitated Dominating Set by giving every vertex a capacity equal to its degree.
The claim then follows by the same argument as for Dominating Set.
* Wolsey <cit.> gives a (log) factor approximation for Capacitated Hitting Set which can be adapted to approximate Capacitated Dominating Set.
*
* The APPT given in (ii) for Dominating Set also works for Connected Dominating Set, because, in the graphs produced by , (G) = (G) and the solution output by is always a clique and, therefore, connected.
* Guha and Khuller <cit.> give a (logΔ) ≤(log n)-factor approximation for Connected Dominating Set.
*
* The following reduction is essentially the same as the one given by Dom et al. <cit.>
Let (U,) be an instance of Hitting Set.
The algorithm creates the graph G as in the reduction for Dominating Set in (ii) and sets T {w_S | S ∈𝒮}∪{u}.
Clearly, |V ∖ T| = |U| and it is easy to show that (U,) = (G,T). By a similar argument as in (ii), the algorithm can output {x ∈ U | v_x ∈ X} where X is a given solution for the Node Steiner Tree instance (G,T).
* Klein and Ravi <cit.> give a (log n)-factor approximation for this problem.
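To make the folklore reduction from case (ii) and its solution lifting concrete, the following Python sketch spells them out; the graph representation and all function names are our own choices and only illustrate the argument above.

# Illustrative sketch (Python) of the 1-APPT from Hitting Set to Dominating Set
# described in case (ii); representation and names are our own choices.

def reduce_hs_to_ds(universe, family):
    """Build G: a clique on {v_x : x in universe} plus u, and one vertex w_S
    per set S in the family, adjacent to v_x for every x in S."""
    clique = [("v", x) for x in universe] + [("u", None)]
    adj = {a: {b for b in clique if b != a} for a in clique}
    for i, S in enumerate(family):
        w = ("w", i)
        adj[w] = {("v", x) for x in S}
        for x in S:
            adj[("v", x)].add(w)
    return adj

def lift_ds_to_hs(family, dom_set):
    """Turn a dominating set of G into a hitting set of at most the same size,
    exactly as in the proof: drop u, keep x for v_x, and replace w_S by an
    arbitrary element of S (family is a list of nonempty sets)."""
    hitting = set()
    for kind, data in dom_set:
        if kind == "v":
            hitting.add(data)
        elif kind == "w":
            hitting.add(next(iter(family[data])))
    return hitting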
If is a hereditary class of graphs, then we may define the Restricted -Deletion problem as follows:
We are given a graph G=(V,E) and X⊆ V such that G- X ∈ and are asked to find a minimum Y ⊆ X such that G- Y ∈.
For Restricted Perfect Deletion[X], Restricted Weakly Chordal Deletion[X], and Restricted Wheel-Free Deletion[X], reductions given by Heggernes et al. <cit.> and Lokshtanov <cit.> can be shown to be 1-APPTs.
However, it is open whether they have a (2^log ^c I)-factor approximation with c<1, so we cannot rule out an approximate polynomial Turing kernelization.
However, we can observe that, under ETH, they cannot have both an approximation algorithm with the aforementioned guarantee and an approximate polynomial Turing kernelization.
More generally, we can deduce from the proof of <ref> the following about any parameterized minimization problem that only satisfies the first condition in <ref>:
If we define an approximate polynomial Turing compression of a problem into a problem ' to be essentially an approximate polynomial Turing kernelization for , except that it is given access to an approximate oracle for ' rather than , then we can rule out (under ETH) an approximate polynomial Turing compression of any problem that satisfies the first condition into any problem ' that satisfies the second condition in the same lemma.
§.§ MK[2]-hardness
The approach described in <ref> is unlikely to work for the problem Independent Dominating Set.
That approach requires a (2^log^c n)-factor approximation algorithm with c<1 to answer the queries of the Turing kernelization.
However, there is no (n^1-ε)-factor approximation for this problem for any ε>0, unless P=NP <cit.>.
In the following, we will prove that there is no constant-factor approximate polynomial Turing kernelization for Independent Dominating Set[vc], assuming a conjecture by Hermelin et al. <cit.> stating that parameterized decision problems that are hard for the complexity class MK[2] do not admit polynomial (exact) Turing kernelizations.
Let CNF-SAT denote the satisfiability problem for Boolean formulas in conjunctive normal form.
The class MK[2] may be defined as the set of all parameterized problems that can be reduced with a PPT to CNF-SAT[n] where n denotes the number of variables.[This is not directly the definition given by Hermelin et al. <cit.>, but an equivalent characterization.]
We will prove that an α-approximate polynomial Turing kernelization for Independent Dominating Set[vc] implies the existence of a polynomial Turing kernelization for CNF-SAT[n].
For this, we will need the following lemma allowing us to translate queries between oracles for Independent Dominating Set and CNF-SAT using a standard self-reduction:
There is a polynomial-time algorithm that, given as input a graph G and access to an oracle that decides in a single step instances of CNF-SAT whose size is polynomially bounded in the size of G, outputs a minimum independent dominating set of G.
The decision version of Independent Dominating Set, in which one is given a graph H and an integer k and is asked to decide whether H contains an independent dominating set of size at most k, is in and CNF-SAT is -hard, so there is a polynomial-time many-one reduction from Independent Dominating Set to CNF-SAT.
For any graph H and integer k let R(H,k) denote the instance of CNF-SAT obtained by applying this reduction to (H,k).
Let n be the number of vertices in G.
We first determine the size of a minimum independent dominating set in G by querying the oracle for CNF-SAT on the instance R(G,k) for each k ∈ [n].
Observe that the size of R(G,k) is polynomially bounded in the size of G.
Hence, we may input this instance to the oracle.
Let k_0 be the smallest value of k for which this query returns yes.
We must then construct an independent dominating set of size k_0 in G.
We initially set ℓ k_0, H G, and S ∅ and perform the following operation as long as ℓ > 0.
For each vertex u in H, we query the oracle on the instance R(H-N[u],ℓ-1).
If this query returns yes, then we add u to S and set H H - N[u] and ℓℓ -1.
Once ℓ = 0, we output S.
We claim that this procedure returns an independent dominating set of size k_0 if k_0 is the size of a minimum independent dominating set in G.
Let X be a minimum independent dominating set in G and u ∈ X.
Then, X ∖{u} is a minimum independent dominating set of size k_0-1 in G - N[u] and the claim follows inductively.
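The self-reduction in the proof above is easy to phrase as code. In the following Python sketch, reduce_ids_to_cnfsat stands for some fixed NP-completeness reduction R(H,k) and cnf_oracle for the CNF-SAT oracle; both are assumed black boxes and are not specified by the lemma.

# Illustrative sketch (Python) of the search-to-decision self-reduction above.
# `G` maps each vertex to its set of neighbors; `reduce_ids_to_cnfsat(H, k)`
# and `cnf_oracle(formula)` are assumed black boxes (see the lead-in).

def remove_closed_neighborhood(G, u):
    removed = set(G[u]) | {u}
    return {v: nbrs - removed for v, nbrs in G.items() if v not in removed}

def minimum_independent_dominating_set(G, reduce_ids_to_cnfsat, cnf_oracle):
    n = len(G)
    # find k_0, the size of a minimum independent dominating set
    k0 = next(k for k in range(1, n + 1)
              if cnf_oracle(reduce_ids_to_cnfsat(G, k)))
    H, solution, budget = G, [], k0
    while budget > 0:
        for u in list(H):
            H_minus = remove_closed_neighborhood(H, u)
            # u extends to a solution iff the remaining graph has an independent
            # dominating set of size budget - 1 (the claim in the proof)
            if cnf_oracle(reduce_ids_to_cnfsat(H_minus, budget - 1)):
                solution.append(u)
                H, budget = H_minus, budget - 1
                break
    return solution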
If, for any α≥ 1, there is an α-approximate polynomial Turing kernelization for Independent Dominating Set[vc], then there is a polynomial Turing kernelization for CNF-SAT[n].
Assume that there is an α-approximate Turing kernelization for Independent Dominating Set[vc] whose size is bounded by the polynomial p.
We will give a polynomial Turing kernelization for CNF-SAT[n].
Let the input be a formula F in conjunctive normal form over the variables x_1,…,x_n consisting of the clauses C_1,…,C_m.
First, we compute a graph G on which we then run the approximate Turing kernelization for Independent Dominating Set.
The construction of the graph G in the following is due to Irving <cit.>.
Let s ⌈α· n ⌉+1.
The graph G=(V,E) contains vertices v_1,…,v_n and v_1,…,v_n, representing the literals that may occur in F.
Additionally, for each j ∈ [m], there are s vertices w_j^1,…,w_j^s representing the clause C_j.
For each i ∈ [n], there is an edge between v_i and v_i.
There is also an edge between v_i and w_j^ℓ for all ℓ∈ [s] if x_i ∈ C_j and an edge between v_i and w_j^ℓ for all ℓ∈ [s] if x_i ∈ C_j.
The intuition behind this construction is as follows:
In G any independent dominating set may contain at most one of v_i and v_i for each i ∈ [n], so any such set represents a partial truth assignment of the variables x_1,…,x_n.
If F is satisfiable, then G contains an independent dominating set of size n.
Conversely, if F is not satisfiable, then any independent dominating set must contain w^1_j,…,w^s_j for some j∈ [m], so it must have size at least s > α· n, thus creating a gap of size greater than α between yes and no instances.
Moreover, {v_i,v_i| i ∈ [n]} is a vertex cover in G, so the vertex cover number of G is polynomially bounded in n.
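Irving's construction is also straightforward to implement; the following Python sketch (with our own encoding of literals and vertices) builds the graph G from a CNF formula and the constant α.

# Illustrative sketch (Python) of Irving's gap construction described above.
# A clause is a set of pairs (i, positive); (i, True) stands for x_i and
# (i, False) for its negation.  Vertex names are our own encoding.
from math import ceil

def build_gap_graph(num_vars, clauses, alpha):
    s = ceil(alpha * num_vars) + 1
    adj = {}
    for i in range(1, num_vars + 1):          # literal vertices v_i and its negation
        adj[("pos", i)] = {("neg", i)}
        adj[("neg", i)] = {("pos", i)}
    for j, clause in enumerate(clauses):      # s copies of each clause vertex
        for copy in range(s):
            w = ("clause", j, copy)
            adj[w] = set()
            for (i, positive) in clause:
                lit = ("pos", i) if positive else ("neg", i)
                adj[w].add(lit)
                adj[lit].add(w)
    return adj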
The Turing kernelization for CNF-SAT proceeds in the following manner.
Given the formula F, it first computes the graph G and runs the α-approximate Turing kernelization for Independent Dominating Set on G.
Whenever the α-approximate Turing kernelization queries the oracle on an instance G', this query is answered using the algorithm given by <ref>.
Observe that the size of G' is polynomially bounded in the vertex cover number of G, which in turn is polynomially bounded in n, so the oracle queries made by this algorithm are possible.
Let X be the independent dominating set for G output by the approximate Turing kernelization.
Since this Turing kernelization is given access to a 1-approximate oracle, X≤α·(G).
We claim that F is satisfiable if and only if X≤α· n.
With this claim, the Turing kernelization can return yes if and only if this condition is met.
If F is satisfiable and φ{x_1,…,x_n}→{0,1} is a satisfying truth assignment, then Y {v_i |φ(x_i) = 1 }∪{v_i|φ(x_i) = 0} is an independent dominating set in G.
Hence, X≤α·(G) ≤α·Y = α· n.
Conversely, suppose that F is not satisfiable.
For each i ∈ [n], the set X may contain at most one of the vertices v_i and v_i.
Consider the partial truth assignment with φ (x_i) 1 if v_i ∈ X and φ(x_i) 0 if v_i∈ X.
Because F is unsatisfiable there is a clause C_j that is not satisfied by φ.
Hence, X must contain the vertices w_j^1,…,w_j^s.
Therefore, X≥ s > α· n.
§ TURING KERNELIZATIONS FOR PARAMETER TREEWIDTH PLUS MAXIMUM DEGREE
In this section, we will prove that the domination problems for which we proved lower bounds when parameterized by the vertex cover number do have (1+ε)-approximate polynomial Turing kernels when parameterized by treewidth plus maximum degree.
The following lemma is a generalization of <cit.> and can be proved in basically the same way.
Let G be a graph with n vertices, let a nice tree decomposition of G of width w be given, and let s ≤ n.
Then, there is a node t of the decomposition such that s ≤|V_t|≤ 2s.
Moreover, such a node t can be found in polynomial time.
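The lemma can be implemented by a single bottom-up pass followed by a walk down from the root; the Python sketch below (with an ad-hoc representation of the rooted decomposition, not fixed by the lemma) illustrates one way to do this.

# Illustrative sketch (Python) of the lemma: find a node t with s <= |V_t| <= 2s.
# `children[t]` lists the children of node t (root `root`), `bag[t]` is X_t;
# the representation is our own choice.

def find_balanced_node(root, children, bag, s):
    subtree = {}                      # t -> V_t (vertices in bags below t)

    def collect(t):
        V_t = set(bag[t])
        for c in children[t]:
            V_t |= collect(c)
        subtree[t] = V_t
        return V_t

    collect(root)
    t = root
    while True:
        # descend as long as some child c still has |V_c| >= s; the proof of
        # the lemma guarantees that the node where we stop has |V_t| <= 2s
        big = [c for c in children[t] if len(subtree[c]) >= s]
        if not big:
            return t, subtree[t]
        t = big[0]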
§.§ Dominating Set
We start with Dominating Set.
Let G=(V,E) be a graph.
* If Δ is the maximum degree of G, then (G) ≥|V|/(Δ +1).
* If A,B,C ⊆ V with A∪ C = V, A∩ C = B, and there are no edges between A∖ B and C ∖ B, then
(G) ≥(G[A]) + (G[C]) - 2|B|.
* Every vertex can only dominate its at most Δ neighbors and itself.
* Let X be a dominating set in G of size (G).
Then Y := (X∩ A) ∪ B and Z := (X∩ C) ∪ B are dominating sets in G[A] and G[C], respectively.
Hence,
(G) = |X| = |X∩ A| + |X∩ C| - |X∩ B|
≥|Y| - |B∖ X| + |Z| - |B|
≥(G[A]) + (G[C]) - 2|B|.
The kernelization algorithm for Dominating Set uses <ref>, but is simpler and otherwise similar to the one for Capacitated Dominating Set and is deferred to the appendix; an illustrative sketch of its main loop is given below.
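Although the exact algorithm is deferred to the appendix, its main loop can be reconstructed from the proof of the theorem below. The following Python sketch is our reading of that loop, not the authors' version: the threshold constant, the recursion on G - V_t, and the two callbacks (the approximate oracle and a routine extracting V_t from the tree decomposition as in the lemma above) are assumptions.

# Hedged reconstruction (Python) of the kernelization loop for Dominating Set,
# read off the proof below; the constant in `s`, the recursion on G - V_t and
# the callback interfaces are our assumptions, not the appendix version.

def approx_turing_kernel_ds(G, eps, tw, max_deg, oracle, find_piece):
    """G: dict vertex -> set of neighbors.
    oracle(H): returns a dominating set of H; it is only ever called on graphs
    with O((1+eps)/eps * tw * max_deg) vertices.
    find_piece(H, s): a set V_t with s <= |V_t| <= 2s from a tree decomposition of H."""
    s = 2 * (1 + eps) / eps * (tw + 1) * (max_deg + 1)     # assumed threshold
    if len(G) <= 2 * s:
        return set(oracle(G))                              # one oracle query
    V_t = find_piece(G, s)
    S_t = set(oracle({v: G[v] & V_t for v in V_t}))        # dominates G[V_t]
    rest = {v: G[v] - V_t for v in G if v not in V_t}      # the graph G - V_t
    return S_t | approx_turing_kernel_ds(rest, eps, tw, max_deg, oracle, find_piece)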
For every ε > 0, there is a (1+ε)-approximate Turing kernelization for Dominating Set with O((1+ε)/ε·tw·Δ) vertices.
Consider <ref>.
This algorithm always returns a dominating set of G.
If the algorithm terminates in line <ref>, then this is true because the oracle always outputs a dominating set.
If it terminates in line <ref>, then let v ∈ V be an arbitrary vertex.
If v ∈ V_t, then v is dominated by a vertex in S_t, because S_t is a dominating set in G[V_t].
If v ∈ V∖ V_t, then v is dominated by a vertex in S'.
The algorithm runs in polynomial time.
Finally, we must show that the solution output by the algorithm contains at most c· (1+)·(G) vertices.
We prove the claim by induction on the number of recursive calls.
If there is no recursive call, the algorithm terminates in line <ref> and the solution contains at most c·(G) vertices.
Otherwise, by induction:
|S'_t ∪ S'| ≤|S'_t| + |S'|≤ c·(G[V_t]) + |S'|
= c·(1+ε)·(G[V_t]) - c·ε·(G[V_t]) + |S'|
†≤ c·(1+ε)·(G[V_t]) - c·ε·|V_t|/(Δ + 1) + |S'|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·(tw+1) + |S'|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·|X_t| + |S'|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·|X_t| + c·(1+ε)·(G-V_t)
= c·(1+ε)·((G[V_t]) - 2|X_t| + (G-V_t))
≤ c·(1 + ε) ·(G).
Here, the inequality marked † follows from lemma:dslemma:dsI and the final inequality follows from lemma:dslemma:dsII with A=(V∖ V_t)∪ X_t, B=X_t, and C=V_t ∖ X_t.
§.§ Capacitated Dominating Set
Next, we consider Capacitated Dominating Set.
Let G = (V,E,) be a capacitated graph with maximum degree Δ and A,B,C ⊆ V such that A∪ C = V, A∩ C = B, and there are no edges from A∖ B to C ∖ B.
* (G) ≥(G[A]) + (G[C]) - 2|B|.
* Given capacitated dominating sets (X,f) and (Y,g) in G[A] and G[C], respectively, one can construct in polynomial time a capacitated dominating set for G of size at most |X| + |Y| + (Δ+1) ·|B|.
* Let (X,f) with X⊆ V and f V∖ X → X be a capacitated dominating set of size (G).
Then, (Y,g) with Y (X ∩ A)∪ B and g(v) f(v) for all v∈ A ∖ Y is a capacitated dominating set in G[A] and (Z,h) with Z (X ∩ C)∪ B and h(v) f(v) for all v∈ C ∖ Z is a capacitated dominating set in G[C].
Hence,
(G) = |X| = |X∩ A| + |X∩ C| - |X∩ B|
= |Y| - |B∖ X| + |Z| - |B| - |X ∩ B|
≥|Y| + |Z| - 2|B|
≥(G[A]) + (G[C]) - 2|B|.
* We construct the capacitated dominating set (Z,h) for G as follows.
Let Z X ∪ Y ∪ N[B].
Observe that N[B]≤ (Δ + 1)B.
Define h by setting h(v) f(v) for all v ∈ A ∖ Z and h(v) g(v) for all v ∈ C ∖ Z.
One can easily verify that this is a capacitated dominating set.
For every ε > 0, there is a (1+ε)-approximate Turing kernelization for Capacitated Dominating Set with O((1+ε)/ε·tw·Δ^2) vertices.
Consider <ref>.
This algorithm always returns a capacitated dominating set of G.
If the algorithm terminates in line <ref>, then this is true because the oracle always outputs a capacitated dominating set.
If it terminates in line <ref>, then (S_t,f_t) and (S',f') are capacitated dominating sets for G[V_t] and G-(V_t∖ X_t), respectively. It follows by lemma:capdslemma:capdsII, that (S,f) is a capacitated dominating set for G.
The algorithm runs in polynomial time.
Finally, we must show that the solution output by the algorithm contains at most c· (1+)·(G) vertices.
We prove the claim by induction on the number of recursive calls.
If there is no recursive call, the algorithm terminates in line <ref> and the solution contains at most c·(G) vertices.
Otherwise, by induction:
|S| ≤|S'_t| + |S'| + (Δ +1)·|X_t|≤ c·(G[V_t]) + |S'| + (Δ +1)·|X_t|
= c·(1+ε)·(G[V_t]) - c·ε·(G[V_t]) + |S'| + (Δ +1)·|X_t|
†≤ c·(1+ε)·(G[V_t]) - c·ε·|V_t|/(Δ + 1) + |S'| + (Δ +1)·|X_t|
≤ c·(1+ε)·(G[V_t]) - 3· c·(1+ε)·(tw+1)·(Δ+1) + |S'| + (Δ +1)·|X_t|
≤ c·(1+ε)·(G[V_t]) - 3· c·(1+ε)·|X_t|·(Δ+1) + |S'|
+ c· (1+ε)· (Δ +1)·|X_t|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·|X_t|·(Δ+1) + |S'|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·|X_t| + c·(1+ε)·(G-V_t)
= c·(1+ε)·((G[V_t]) - 2|X_t| + (G-V_t))
≤ c·(1 + ε) ·(G)
The inequality marked † follows from lemma:dslemma:dsI and the fact that (G) ≥(G); the remaining steps use the fact that c· (1+ε) ≥ 1 and lemma:capdslemma:capdsI.
§.§ Independent Dominating Set
The next problem we consider is Independent Dominating Set.
Let G = (V,E) be a graph with maximum degree Δ and A,B,C ⊆ V such that A∪ C = V, A∩ C = B, and there are no edges from A∖ B to C ∖ B.
* If X is an independent set in G, then there is an independent dominating set X' that contains X such that |X'∖ X| is at most the number of vertices not dominated by X, and such a set X' can be computed in polynomial time.
* (G) ≥(G[A]) + (G[C]) - 2|B|.
* Given independent dominating sets X and Y in G[A] and G[C], respectively, one can construct in polynomial time an independent dominating set for G of size at most |X| + |Y| + (Δ+1) ·|B|.
* If X is a dominating set, then X' X.
Otherwise, there is a vertex v ∈ V ∖ N[X].
We add v to X and continue.
Observe that when v is added to X, the latter remains an independent set.
* Let X be an independent dominating set of size (G) in G.
Let Y X ∩ A.
Since Y ⊆ X, it follows that Y is an independent set.
Moreover, Y dominates all vertices in (A∖ B) ∪ (X ∩ B), leaving at most B ∖ X vertices undominated.
We apply (<ref>) to Y and obtain Y', an independent dominating set in G[A] of size at most X∩ A + B - X ∩ B.
We apply the same argument to G[C] to obtain an independent dominating set Z of size at most X∩ C + B - X ∩ B.
It follows that:
(G) = |X| = |X∩ A| + |X∩ C| - |X∩ B|
= |Y| - |B∖ X| + |Z| - |B| - |X ∩ B|
≥|Y| + |Z| - 2|B|
≥(G[A]) + (G[C]) - 2|B|.
* Z (X ∪ Y) ∖ B is an independent set in G.
Since X ∪ Y is a dominating set and at most (Δ+1)·B vertices can be dominated by vertices in B, it follows that Z leaves at most that many vertices in G undominated.
Applying (<ref>) to Z yields an independent dominating set of size at most X + Y + (Δ+1)·B.
The kernelization algorithm for Independent Dominating Set uses <ref>, but is otherwise similar to the one for Capacitated Dominating Set and is deferred to the appendix.
For every ε > 0, there is a (1+ε)-approximate Turing kernelization for Independent Dominating Set with O((1+ε)/ε·tw·Δ^2) vertices.
Consider <ref>.
This algorithm always returns an independent dominating set of G.
If the algorithm terminates in line <ref>, then this is true because the oracle always outputs an independent dominating set.
If it terminates in line <ref>, then S_t and S' are independent dominating sets for G[V_t] and G-(V_t∖ X_t), respectively. It follows by lemma:inddslemma:inddsIII, that S is an independent dominating set for G.
The algorithm runs in polynomial time.
Finally, we must show that the solution output by the algorithm contains at most c· (1+)·(G) vertices.
We prove the claim by induction on the number of recursive calls.
If there is no recursive call, the algorithm terminates in line <ref> and the solution contains at most c·(G) vertices.
Otherwise, by induction:
|S| ≤|S'_t| + |S'| + (Δ +1)·|X_t|≤ c·(G[V_t]) + |S'| + (Δ +1)·|X_t|
= c·(1+ε)·(G[V_t]) - c·ε·(G[V_t]) + |S'| + (Δ +1)·|X_t|
†≤ c·(1+ε)·(G[V_t]) - c·ε·|V_t|/(Δ + 1) + |S'| + (Δ +1)·|X_t|
≤ c·(1+ε)·(G[V_t]) - 3· c·(1+ε)·(tw+1)·(Δ+1) + |S'| + (Δ +1)·|X_t|
≤ c·(1+ε)·(G[V_t]) - 3· c·(1+ε)·|X_t|·(Δ+1)
+ |S'| + c· (1+ε)· (Δ +1)·|X_t|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·|X_t|·(Δ+1) + |S'|
≤ c·(1+ε)·(G[V_t]) - 2· c·(1+ε)·|X_t| + c·(1+ε)·(G-V_t)
= c·(1+ε)·((G[V_t]) - 2|X_t| + (G-V_t))
≤ c·(1 + ε) ·(G)
The inequality marked with † follows from lemma:dslemma:dsI and the fact that (G) ≥(G); the remaining steps use the fact that c· (1+ε) ≥ 1 and lemma:inddslemma:inddsII.
§.§ Connected Dominating Set
Finally, we consider the problem Connected Dominating Set.
If S⊆ V is a vertex set in a graph G=(V,E), then let R(G,S) denote the graph obtained by deleting S, introducing a new vertex z, and connecting z to any vertex in V∖ S that has a neighbor in S.
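As an aside, the operation R(G,S) is easy to implement; the short Python sketch below (with our own graph representation) makes the definition concrete.

# Illustrative sketch (Python) of R(G,S): delete S, add a new vertex z adjacent
# to every remaining vertex that had a neighbor in S.  Representation is ours;
# we assume z is not already a vertex of G.

def R(G, S, z="z"):
    S = set(S)
    H = {v: nbrs - S for v, nbrs in G.items() if v not in S}
    touched = {v for v in H if G[v] & S}   # vertices that had a neighbor in S
    H[z] = set(touched)
    for v in touched:
        H[v].add(z)
    return H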
Let G = (V,E) be a connected graph and A,B,C ⊆ V such that A∪ C = V, A∩ C = B, there are no edges from A∖ B to C ∖ B, and A∖ B and C∖ B are both non-empty.
* (G) ≥(R(G[A],B)) + (R(G[C],B)) - 2.
* Given connected dominating sets X and Y in R(G[A],B) and R(G[C],B), respectively, one can construct in polynomial time a connected dominating set for G of size at most |X| + |Y| + 3|B|.
* Let X be a connected dominating set in G of size (G).
We claim that Y (X∩ (A ∖ B)) ∪{z} and Z (X∩ (C∖ B)) ∪{z} are connected dominating sets in R(G[A],B) and R(G[C],B), respectively.
We only prove this for Y and R(G[A],B)), as the case of Z and R(G[C],B) is analogous.
First, we show that Y is a dominating set.
Let v be a vertex in R(G[A],B).
If v = z, then v is dominated by z in Y.
Otherwise, v ∈ A ∖ B and there is a vertex w ∈ X that dominates v in G.
If w ∈ B, then z is adjacent to v in R(G[A],B) and v is dominated by z in that graph.
If w ∈ A ∖ B, then w ∈ Y and v is dominated by w in R(G[A],B).
We must also show that the subgraph of R(G[A],B) induced by Y is connected.
Let v,v' ∈ Y.
We must show that the subgraph of R(G[A],B) induced by Y contains a path from v to v'.
First we assume that v,v' z,z', implying that v,v' ∈ X.
Hence, there is a path P from v to v' in G[X].
If P ⊆ A ∖ B, then P ⊆ Y and we are done.
Otherwise, P must pass through B.
Let w be the first vertex in B on P and w' the final one.
Obtain P' by replacing the subpath of P between and including w and w' with z.
Then, P' is a path from v to v' in the subgraph of R(G[A],B) induced by Y.
Finally, suppose that v' = z.
If v= z, there is nothing to show, so we assume that v z and, therefore, v∈ X∩ (A∖ B).
Because C∖ B is non-empty, X must contain a vertex w∈ B.
Because G[X] is connected, G[X] must also contain a path P from v to w.
Let w' be the first vertex in B on the path P (possibly, w'=w).
We obtain P', a path from v to z in the subgraph of R(G[A],B) induced by Y, by taking the subpath of P from v to w' and replacing w' with z.
This proves that Y is a connected dominating set in R(G[A],B).
Then,
(G) = |X|≥|X∩ (A∖ B)| + |X∩ (C∖ B)|
≥|Y| - 1 + |Z| - 1
≥(R(G[A],B)) + (R(G[C],B)) - 2.
* Let Z' X ∪ Y ∪ B.
Every connected component of G[Z'] contains a vertex in B, so this graph has at most |B| connected components.
We obtain a connected dominating set Z in G as follows.
We start with Z Z'.
Choose two connected components C_1,C_2 in G[Z].
Because G is connected, it contains a path P starting in v_1 ∈ C_1 and ending in v_2 ∈ C_2.
This path must contain a vertex that is not adjacent to any vertex in C_1, because if every vertex in P∖ C_1 were adjacent to a vertex in C_1, then v_2 is adjacent to a vertex in C_1, implying that C_1 and C_2 are not distinct connected components in G[Z]
Let w be the first vertex on P that is not adjacent to a vertex in C_1.
Because Z is a dominating set in G, there must be a vertex x ∈ Z ∖ C_1 such that w ∈ N[x] (note that possibly w=x).
Adding w and x merges C_1 with the connected component of G[Z] containing x.
This process must be repeated at most |B| times to obtain a connected dominating set.
In each iteration at most two vertices are added to Z.
Since Z initially contains |X| + |Y| + |B| vertices, we obtain a connected dominating set containing at most |X| + |Y| + 3|B| vertices.
The kernelization algorithm for Connected Dominating Set is similar to the one for Capacitated Dominating Set and is deferred to the appendix.
For every ε > 0, there is a (1+ε)-approximate Turing kernelization for Connected Dominating Set with O((1+ε)/ε·tw·Δ) vertices.
Consider <ref>. This algorithm always returns a connected dominating set of G.
If the algorithm terminates in line <ref>, then this is true because the oracle always outputs a connected dominating set.
If it terminates in line <ref>, then S_t and S' are connected dominating sets for R(G[V_t],X_t) and R(G - (V_t∖ X_t)), respectively. It follows by lemma:condslemma:condsII, that S is a connected dominating set for G.
The algorithm runs in polynomial time.
Finally, we must show that the solution output by the algorithm contains at most c· (1+)·(G) vertices.
We prove the claim by induction on the number of recursive calls.
If there is no recursive call, the algorithm terminates in line <ref> and the solution contains at most c·(G) vertices.
Otherwise, by induction:
|S| ≤|S'_t| + |S'| + 3|X_t|≤ c·(R(G[V_t],X_t)) + |S'| + 3|X_t|
= c·(1+ε)·(R(G[V_t],X_t)) - c·ε·(R(G[V_t],X_t)) + |S'| + 3|X_t|
†≤ c·(1+ε)·(R(G[V_t],X_t)) - c·ε· (|V_t|-|X_t|+1)/(Δ + 1) + |S'| + 3|X_t|
= c·(1+ε)·(R(G[V_t],X_t)) - c·ε·|V_t|/(Δ + 1) + |S'| + c·ε·(|X_t|+1)/(Δ+1) + 3|X_t|
≤ c·(1+ε)·(G[V_t]) - 4· c·(1+ε)·(tw+2) - 2c(1+ε) + |S'|
+ c·ε·(|X_t|+1)/(Δ+1) + 3|X_t|
≤ c·(1+ε)·(G[V_t]) - 4· c·(1+ε)·(|X_t|+1) - 2c(1+ε) + |S'|
+ c· (1+ε)·(4|X_t| + 1)
≤ c·(1+ε)·(G[V_t]) - 2· c(1+ε) + |S'|
≤ c·(1+ε)·(G[V_t]) - 2c(1+ε) + c·(1+ε)·(G-V_t)
= c·(1+ε)·((G[V_t]) - 2 + (G-V_t))
≤ c·(1 + ε) ·(G)
The inequality marked with † follows from lemma:dslemma:dsII and the fact that (G) ≥(G); the remaining steps use the fact that c· (1+ε) ≥ 1 and lemma:condslemma:condsI.
§ CONCLUSION
We conclude by pointing out two open problems concerning approximate Turing kernelization:
* Does Connected Feedback Vertex Set parameterized by treewidth admit an approximate polynomial Turing kernelization?
The approach employed by Hols et al. <cit.> for Connected Vertex Cover and here for Connected Dominating Set cannot be used for Connected Feedback Vertex Set, because the ratio between the size of a minimum connected feedback vertex and the size of a minimum feedback vertex set is unbounded.
* The biggest open question in Turing kernelization is whether or not there are polynomial Turing kernelizations for the problems Longest Path and Longest Cycle parameterized by the solution size <cit.>.
There has been some progress on this problem by considering the restriction to certain graph classes <cit.>.
Developing an approximate Turing kernelization may be another way of achieving progress in this regard.
|
http://arxiv.org/abs/2307.02706v1
|
20230706005205
|
Super Riemann surfaces and fatgraphs
|
[
"Albert S. Schwarz",
"Anton M. Zeitlin"
] |
math.DG
|
[
"math.DG",
"hep-th",
"math-ph",
"math.AG",
"math.GT",
"math.MP"
] |
A.S. Schwarz]Albert S. Schwarz
[Albert S. Schwarz]
Department of Mathematics,
University of California at Davis,
Davis, CA, USA. Email: [email protected]@math.ucdavis.edu
A.M. Zeitlin]Anton M. Zeitlin
[Anton M. Zeitlin]
Department of Mathematics,
Louisiana State University,
Baton Rouge, LA, USA. Email: [email protected]@lsu.edu, http://math.lsu.edu/∼zeitlin
Our goal is to describe superconformal structures on super Riemann
surfaces (SRS), based on data assigned to a fatgraph.
We start from the complex structures on punctured (1|1)-supermanifolds, characterizing the corresponding moduli and the deformations using Strebel differentials and certain Čech cocycles for a specific covering, which we reproduce from fatgraph data, consisting of a U(1)-graph connection and odd parameters at the vertices. Then we consider dual (1|1)-supermanifolds and related superconformal structures for N=2 super Riemann surfaces. The superconformal structures of N=1 SRS are computed as the fixed points of an involution on the supermoduli space of N=2 SRS.
Super Riemann Surfaces and Fatgraphs
[
August 1, 2023
====================================
§ INTRODUCTION
§.§ Some history and earlier results
The geometry of moduli spaces of (punctured) Riemann surfaces has been a central topic in modern mathematics for many years. Since the 1980s, string theory served as a significant source of ideas in studying moduli spaces. For a proper description of string theory, one has to consider certain generalizations of moduli spaces related to the fact that strings, while propagating, should carry extra anticommutative parameters, thus generating what is known as superconformal manifold as introduced by M.A. Baranov and A.S. Schwarz <cit.> or super Riemann surface (SRS) as independently introduced by D. Friedan <cit.> (see also <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> for a review). It turned out that such spaces' geometry is quite involved, e.g., <cit.>.
An important task is, of course, related to the parametrization of such supermoduli.
There are several ways of looking at the parametrization problem. For example, one could deal with supermoduli spaces of punctured Riemann surfaces with the negative Euler characteristic from the point of view of higher Teichmüller theory as a subset in the character variety for the corresponding supergroup. In the case of original moduli spaces using methods of hyperbolic geometry R. Penner described coordinates in the universal cover of moduli space, the Teichmüller space, as the subspace of the character variety of PSL(2,ℝ),
so that the corresponding Riemann surfaces appear here from the uniformization point of view as a factor of the upper half-plane by the element of the related character variety, i.e., the Fuchsian subgroup <cit.>.
The action of the mapping class group in these coordinates is rational. It could be described combinatorially using decorated triangulations or dual objects, known as metric fatgraphs or ribbon graphs for the corresponding Riemann surfaces.
Thus constructed coordinates were generalized to the case of reductive groups <cit.>. The supergroup case yet remained a mystery until recently. In <cit.>, <cit.>, <cit.>, such coordinates were constructed in the framework of the higher Teichmüller spaces associated to supergroups OSp(1|2) and OSp(2|2), which correspond to the Teichmüller spaces N=1 and N=2 SRS. The desired N=1 and N=2 SRS could be reconstructed using the elements of character variety via the appropriately modified uniformization approach <cit.>, <cit.>.
There is a different, more “hands-on" approach to the moduli spaces of punctured Riemann surfaces, where one can see directly the transition functions for the corresponding complex structures, which we discuss in more detail below. One can start from the parameterization of moduli spaces by the so-called Strebel differentials, which again can be described using metric fatgraphs <cit.>. That approach allowed (see <cit.>) to “glue" the Riemann surface explicitly by constructing transition functions.
In this paper, we want to generalize this construction in the case of super Riemann surfaces. We start by describing the moduli space of (1|1)-supermanifolds. This result also describes the moduli space of
N=2 super Riemann surfaces. Finally, we study the moduli space of N=1 super Riemann surfaces using the fact that this space can be obtained as a set of fixed points of the involution of the space of (1|1)-supermanifolds
constructed in <cit.>.
§.§ The structure of the paper and main results.
In Section 2 we review basic notions related to (1|1)-supermanifolds, N=1 and N=2 Super-Riemann surfaces (SRS). We devote special attention to the punctured N=1 SRS with two puncture classes corresponding to various spin structure choices: Ramond (R) and Neveu-Schwarz (NS).
In Section 3, we define two instrumental objects which come from geometric topology.
The first object is a fatgraph (or ribbon graph). This graph is homotopically equivalent to the punctured surface with the cyclic ordering of half-edges at every vertex,
which comes from the orientation of the surface so that each puncture is associated with
a particular cycle on the graph.
The second object is a spin structure on the fatgraph, making it a spin fatgraph. We describe spin structures as the classes of orientations on fatgraphs based on the works <cit.>, <cit.>, <cit.>, where N=1, N=2 SRS were studied from a uniformization perspective. This construction allows distinguishing boundary components of such spin fatgraph, separating them into two sets based on comparing their orientation and the orientation induced by the surface. Those two sets correspond to NS and R punctures in the uniformization picture.
Section 4 is devoted to an important construction allowing us to relate the data assigned to the fatgraphs to the theory of moduli of Riemann Surfaces, following <cit.>, <cit.>.
Namely, we explicitly describe the moduli spaces of surfaces
F^c with marked points, using special covering {U_v, V_p}, with one neighborhood U_v for every vertex v and V_p for every puncture p. The set of {U_v} has only double overlaps U_v∩ U_v', corresponding to edges {v,v'}, so that ∪_vU_v=F is a punctured surface.
V_p overlaps with U_v for all the vertices surrounding the puncture.
To construct the corresponding transition functions w'=f_v',v(w), y=f_p,v(w) on overlaps, we consider the fatgraph with one positive number per every edge, producing the metric fatgraph. Then we attach the infinite stripe to the edge, with the width being the corresponding parameter. The transition functions rise from gluing stripes corresponding to edges into neighborhoods U_v, with the width being a positive parameter assigned to the edge. The key ideas of this description, which is due to Kontsevich <cit.> and further elaborated by Mulase and Penkava <cit.>, lies within the theory of Strebel differentials. These are holomorphic quadratic differentials on a punctured surface with certain extra conditions. One can reconstruct the metric fatgraph and the corresponding complex structure for every Strebel differential so that their zeroes define the vertices of the fatgraphs, and the order of zero determines the valence of the corresponding vertex. At the same time, the punctures correspond to their double poles. All this can be summarized in the fact that Strebel differentials parametrize the trivial
ℝ_+^s-bundle over the moduli space of Riemann surfaces with s punctures.
In Section 5, we use this fatgraph description to characterize the moduli space of (1|1)-supermanifolds with punctures: we use the term “puncture" for marked points or (0|1)-divisors assigned to marked points on the underlying Riemann surface. At first, we consider the split (1|1)-supermanifolds, which can be viewed as Riemann surfaces with line bundle ℒ over it. The corresponding moduli space can be then described by the flat U(1) connections on the corresponding metric fatgraphs with zero monodromies around the punctures, accompanied by a fixed divisor at punctures, one for every degree.
Next, we describe this construction's deformation by expressing the tangent bundle's odd parts to the corresponding moduli space as Čech cocycles on the Riemann surface F. These cocycles lead to the infinitesimal deformations of the transition functions, which could be continued beyond the infinitesimal level.
Parametrizing such Čech cocycles is a nontrivial problem, which, however, can be solved in the case when deg(ℒ)=1-g-n-r/2, where n is the number of point punctures and r is the (even) number of (0|1)-divisor punctures. In this case, the corresponding cocycles can be characterized by the ordered sets of complex odd parameters for every vertex, where the number of parameters in each set depends on the valence of the vertex. This is roughly twice as many parameters as needed, so there are equivalences between complex structures constructed in such a way. We characterize those equivalences explicitly using sections of the appropriate line bundles.
Thus the fatgraph description of the split case, together with the parametrization of cocycles, immediately leads to complete parametrization of the complex structures of (1|1)-supermanifolds with such degree.
We note that on the level of uniformization, this is an important subclass of supermanifolds obtained in <cit.>, corresponding to flat connections with zero monodromies around punctures.
In Section 6, we use the results of Dolgikh, Rosly, and Schwarz <cit.>, who explicitly described the equivalence between N=2 super Riemann surfaces and (1|1)-supermanifolds, expressing the transition functions of N=2 SRS using the transition functions for (1|1)- supermanifolds obtained in Section 5.
In Section 7, we first discuss the involution on the moduli space of N=2 SRS, such that the fixed points of this involution are N=1 SRS. We then describe the split case, characterizing various
choices of the corresponding line bundle using spin structures on the fatgraph, thus looking at the corresponding supermoduli space with the given assignment of R and NS punctures as a 2^2g covering space over moduli space of punctured Riemann surfaces.
Then we apply the involution to the deformations, first on the infinitesimal level and then continuing beyond, using the superconformal condition.
This eventually leads to the following omnibus Theorem, our main result.
* Consider the following data on a fatgraph τ:
* Metric structure.
* Spin structure, as an equivalence class of orientations on the fatgraph. The cycles on the fatgraph encircling the punctures are divided into two subsets, NS and R, depending on whether an odd or an even number of edges is oriented opposite to the surface-induced orientation of the corresponding boundary piece of the fatgraph. We denote the numbers of the corresponding boundary pieces by n_R and n_NS.
* Ordered set {σ_v^k}_k=0,…, m_v-3 of odd complex parameters for each vertex v, where m_v is the valence of the vertex v.
Then the following is true:
* Data from (1) and (2) determine uniquely the split super Riemann surface with n_R Ramond and n_NS Neveu-Schwarz punctures, with the transition functions given by
w'=f_v',v(w), ξ'=±√(∂_w f_v',v(w))ξ
for each overlap U_v∩ U_v'. The sign of the square root is given by the spin structure on the fatgraph, making the odd coordinate a section of a line bundle ℒ on the corresponding closed Riemann surface F^c, such that ℒ^2=T⊗𝒪(-D_R), where D_R is the divisor given by the sum of the points corresponding to the Ramond punctures.
* Part (3) of the above data allows to construct Čech cocycles on a Riemann surface F, which are the representatives of ΠȞ^1(F^c, ℒ⊗𝒪(-D_NS)), where D_NS is a divisor, corresponding to the sum of the points corresponding to NS punctures:
ρ_v,v'|_U_v∩ U_v'=ρ_v-ρ_v', so that ρ_v|_U_v=σ_v(w)/w^m_v-2, ρ_v'|_U_v'=σ_v'(w')/w'^ m_v'-2,
σ_v(w)=∑^m_v-3_i=0σ^i_v w^i, σ_v'(w')=∑^m_v'-3_i=0σ^i_v'w'^i,
where ρ_v, ρ_v' are the meromorphic sections of
ℒ⊗𝒪(-D_NS) on U_v, U_v' respectively, where m_v is the valence of the given vertex v.
The cocycles defined by configurations described by {σ_v} and {σ̃_v} are equivalent to each other if and only if
σ_v(w)-σ̃_v(w)=γ^(m-3)(w),
for every vertex v,
γ∈ΠH^0(F^c, ℒ⊗ K^2⊗𝒪(D_NS+2D_R)),
γ|_U_v=γ(w) so that γ^(m-3)(w) is the Taylor expansion of γ(w) up to order m-3.
We call two sets of data associated to the fatgraph τ equivalent, if they are related as in (<ref>).
* There exists a superconformal structure for the N=1 super Riemann surface SF with n_R Ramond punctures and n_NS Neveu-Schwarz punctures, so that the superconformal transition functions on each overlap U_v∩ U_v' are:
w'=f^(σ)_v'v(w+ξλ^(σ)_v',v(w))
ξ'=±√(∂_w f^(σ)_v',v(w))(1+1/2λ^(σ)_v',v(w)∂_wλ^(σ)_v',v(w))(ξ+λ^(σ)_v',v(w)),
where the deformed functions f^(σ)_v',v, λ^(σ)_v',v depend on the odd parameters {σ^k_v} characterizing the Čech cocycle {ρ_v',v}, with f^(0)_v',v=f_v',v and, to first order in the {σ^k_v} variables, λ^(σ)_v',v=ρ_v',v.
* To describe the non-split SRS, we fix the choice of transition functions in (c) for every metric spin fatgraph τ with the odd data from (3). We consider the set of superconformal structures constructed by picking one superconformal structure per equivalence class of data for every fatgraph τ. The points in this set represent inequivalent superconformal structures, and together they form a dense subspace of odd complex dimension 2g-2+n_NS+n_R/2 in the space of all superconformal structures with n_NS Neveu-Schwarz and n_R Ramond punctures associated to F.
§.§ Acknowledgements
A.M.Z. is partially supported by Simons
Collaboration Grant 578501 and NSF grant
DMS-2203823.
§ (1|1)-SUPERMANIFOLDS, N=1 AND N=2 SUPER-RIEMANN SURFACES AND SUPERCONFORMAL TRANSFORMATIONS
§.§ Super Riemann surfaces and superconformal transformations.
We remind that a complex supermanifold of dimension (1|1) (see, e.g., <cit.>) over some Grassmann algebra S is a pair (X,𝒪_X), where X is a topological space and 𝒪_X is a sheaf of supercommutative S-algebras over X such that (X,𝒪^red_X) can be identified with a Riemann surface (where 𝒪^red_X is obtained from 𝒪_X by quotienting out nilpotents) and for some open sets U_α⊂ X and some linearly independent elements {θ_α} we have 𝒪_U_α=𝒪^red_U_α⊗ S[θ_α].
We will also refer to (X,𝒪^red_X) as a base manifold. These open sets U_α serve as coordinate neighborhoods for supermanifolds with coordinates (z_α, θ_α). The coordinate transformations on the overlaps U_α∩ U_β
are given by the following formulas z_α=f_αβ(z_β, θ_β), θ_α=Ψ_αβ(z_β, θ_β), where f_αβ, Ψ_αβ are even and odd functions, respectively.
A super Riemann surface (SRS) <cit.>,<cit.> over some Grassmann algebra S is a complex supermanifold of dimension 1|1 over S, with one more extra structure: there is
an odd subbundle 𝒟 of TΣ of dimension 0|1, such that for any nonzero section D of 𝒟 on an
open subset U of Σ, D^2 is nowhere proportional to D, i.e. we have the exact sequence:
0→𝒟→ TΣ→𝒟^2→ 0.
One can pick the holomorphic local coordinates in such a way that this odd vector field
will have the form f(z,θ)D_θ, where f(z,θ) is a non vanishing function and:
D_θ=∂_θ+θ∂_z, D_θ^2=∂_z.
Such coordinates are called superconformal. The transformation between two superconformal coordinate systems
(z, θ), (z', θ') is determined by the condition that 𝒟 should be preserved, namely:
D_θ=(D_θθ') D_θ',
Locally one obtains:
z'=u(z)+θη(z)√(u'(z)), θ'=η(z)+θ√(u'(z)+η(z)η'(z)),
so that the constraint on the transformation emerging from the local change of coordinates is D_θ z'-θ'D_θθ'=0.
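For completeness, let us check directly (a short verification of ours, not a new statement) that the transformation above satisfies this constraint. Using θ^2=η^2=0 and ηθ=-θη one finds
D_θ z'=η(z)√(u'(z))+θ u'(z), D_θθ'=√(u'(z)+η(z)η'(z))+θη'(z),
so that
θ'D_θθ'=η√(u'+ηη')+ηθη'+θ(u'+ηη')=η√(u')+θ u'=D_θ z',
since all the nilpotent corrections cancel, and hence D_θ z'-θ'D_θθ'=0 indeed holds.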
§.§ N=2 super Riemann surfaces.
N=2 super Riemann surfaces (N=2 SRS) are a generalization of super Riemann surfaces, being supermanifolds of dimension (1|2) with extra structure. The tangent bundle has two subbundles 𝒟_+ and 𝒟_-, so that each of them is integrable, meaning that if D_± are nonvanishing sections of 𝒟_±, we have
D_+^2=aD_+, D_-^2=bD_-
for some functions a and b.
0→𝒟_+ ⊕𝒟_-→ TX→𝒟_+ ⊗𝒟_-→ 0.
As in the case of super Riemann surfaces one can show that there exist superconformal coordinates in which locally 𝒟_+ and 𝒟_- are generated by:
D_+=∂_θ_++1/2θ_-∂_z, D_-=∂_θ_-+1/2θ_+∂_z,
so that D_±^2=0, [D_+, D_-]=∂_z.
It turns out, there is an equivalence between (1|1) supermanifolds and N=2 SRS as it was estabished by Dolgikh, Rosly and Schwarz <cit.>.
One can notice that there is an involution θ_+↔θ_-.
The corresponding complex (1|1) supermanifold constructed from the N=2 SRS after the involution is of course generally a different one and it is called dual. In fact, such a dual supermanifold turns out to be a supermanifold of (0|1) divisors of original one. The self-dual (1|1) supermanifolds are of course N=1 super Riemann surfaces.
We will discuss these questions in more detail later in the text.
§.§ Punctures: Ramond and Neveu-Schwarz
Let us now discuss the types of punctures one can have on N=1 super Riemann surface.
The NS puncture is a natural generalization of the puncture of ordinary Riemann surface, and can be considered as any point (z_0,θ_0) on the super Riemann surface. Locally one can associate to it a (0|1)-dimensional divisor of the form z=z_0-θ_0θ, which is the orbit with respect to the action of the group generated by D, and this divisor uniquely determine the point (z_0,θ_0) due to the superconformal structure.
Let us consider the case when the puncture is at (0,0) locally. In its neighborhood let us pick a coordinate transformation
z=e^w, θ= e^w/2η, such that the neighborhood (without the puncture) is mapped to a supertube with w sitting on a cylinder w ∼ w+2π i, and D_θ becomes
D_θ=e^-w/2(∂_η+η∂_w).
Hence (w,η) are superconformal coordinates, and we have the full equivalence relation given by w ∼w+2πi, η→-η.
The case of Ramond puncture is a whole different story. On the level of super Riemann surfaces, the associated divisor is determined as follows. In this case, we are looking at the
case when the condition that D^2 is linearly independent of D is violated along some (0|1) divisor. Namely, in some local coordinates (z,θ) near the Ramond puncture with coordinates (0,0), 𝒟 has a section of
the form
D^*_θ=∂_θ+zθ∂_z.
We see that its square vanishes along the Ramond divisor z=0. One can map the neighborhood patch to the supertube using a different coordinate transformation
z=e^w,θ=η, those coordinates on the supertube will be superconformal, since
D_η=∂_η+η∂_w. Notice that the identifications we have to impose on (w, η) now become: w ∼w+2πi,η→+η.
To describe Ramond punctures globally, consider the subbundle 𝒟 generated by such operators D^*_η, for the Ramond punctures p_1, p_2,…, p_n_R. We have the exact sequence:
0→𝒟→ TΣ→𝒟^2⊗𝒪(𝒫)→ 0,
where 𝒫=∑^n_R_i=1𝒫_i is a divisor where 𝒟^2=0 mod 𝒟.
In the split case TΣ|_X=TX⊕𝒢, dividing the tangent space TΣ into even and odd parts, which one can identify with 𝒟^2⊗𝒪(𝒫) and 𝒟, respectively. Also, notice that after reducing to the base manifold 𝒪(𝒫)=𝒪(∑^n_R_i=1p_i).
Therefore,
𝒢^2=TX⊗𝒪(-∑^n_R_i=1p_i).
That automatically implies that deg(𝒢)=1-g-n_R/2 (indeed, 2 deg(𝒢)=deg(TX)-n_R=2-2g-n_R), leading to the fact that there should be an even number of such punctures, known as R punctures.
§ FATGRAPHS AND SPIN STRUCTURES
From now on we will consider Riemann surfaces of genus g with s punctures (s>0) and negative Euler characteristic, which we will denote as F^s_g or simply F. The corresponding closed version will be denoted as F^c.
Consider the fatgraph τ, corresponding to an s-punctured surface F.
This is a graph, which is homotopically equivalent to F, with cyclic orderings on half-edges for every vertex <cit.> induced by the orientation of the surface.
Let τ_0, τ_1 denote the sets of vertices and edges of τ, respectively. Let ω be an orientation on the edges τ_1 of τ. As in <cit.>, we define a fatgraph reflection at a vertex v of (τ,ω) to reverse the orientations of ω on every edge of τ incident to v.
We define 𝒪(τ) to be the set of equivalence classes of orientations on a trivalent fatgraph spine τ of F, where the equivalence relation is given by ω_1∼ω_2 iff ω_1 and ω_2 differ by a finite number of fatgraph reflections. It is an affine H^1-space, where the cohomology group H^1:=H^1(F;ℤ_2) acts on 𝒪(τ) by changing the orientation of the edges along cycles.
In <cit.> various realizations of the spin structures on the surface F, characterized by a trivalent fatgraph τ are described. Those results can be easily generalized to a fatgraph with vertices of any valence.
In fact, following <cit.>, a spin structure can be characterized by a quadratic form q:H_1(F;ℤ_2)→ℤ_2
so that for any cycles a,b one has q(a+b)=q(a)+q(b)+a· b where a· b denotes the intersection form.
The space of all orientation classes is an affine H^1-space. Indeed, fix a fatgraph τ; we denote by o_ω(e) the orientation of the edge e∈τ_1 in the orientation ω. We define δ_ω_1,ω_2:τ_1→ℤ_2 by
δ_ω_1,ω_2(e):= +1 if o_ω_1(e)=o_ω_2(e),
-1 if o_ω_1(e)≠ o_ω_2(e),
which defines an element in H^1(F;ℤ_2).
<cit.> The set of spin structures is isomorphic to the space of quadratic forms on H_1(F;ℤ_2), and also isomorphic to 𝒪(τ), as affine H^1-spaces.
This leads to the following important consequence.
<cit.> Given an oriented simple cycle γ∈π_1(F) homotopic to a path on the fatgraph with orientation class [ω], the corresponding quadratic form is given by
q([γ])=(-1)^L_γ(-1)^N_γ=(-1)^R_γ(-1)^[N]_γ,
where L_γ (resp. R_γ) is the number of left (resp. right) turns of γ on the fatgraph τ, and N_γ (resp. [N]_γ) is the number of edges of τ such that γ and ω have the same (resp. opposite) orientation.
If we talk about the paths corresponding to the boundary cycles on the fatgraph, q([γ])=(-1)^k, where k is the number of edges with orientation opposite to the canonical orientation of γ.
In fact there is another way of thinking about the spin structures, using graph connections <cit.>,<cit.>,<cit.>.
<cit.> Let G be a group. A G-graph connection on τ is the assignment g_e∈ G to each oriented edge e of τ so that g_[e]=g_e if [e] is the opposite orientation to e. Two assignments {g_e},{g_e'} are equivalent iff there are t_v∈ G for each vertex v of τ such that g_e'=t_v g_e t_w for each oriented edge e∈τ_1 with initial point v and terminal point w.
Therefore, we obtain the following description of spin structures.
<cit.>
The space of spin structures on F is identified with the space of ℤ_2-graph connections on a given fatgraph τ of F.
§ COMPLEX STRUCTURES AND STREBEL DIFFERENTIALS
§.§ Gluing of Riemann surfaces.
Consider the fatgraphτcorresponding to ans-punctured surfaceFof genusg.
Let us assign a positive real parameter L_j to every edge j. We will call the resulting object a metric fatgraph.
It is known from the works of Penner (the so-called convex hull construction) that fatgraphs with the valence of every vertex greater than or equal to 3, or dual ideal cell decompositions of Riemann surfaces, describe the mapping class group-invariant cell decomposition of the decorated Teichmüller space (see e.g. <cit.>, <cit.>), a universal cover of ℝ^s_+×ℳ_{g,s}, where ℳ_{g,s} is the moduli space of Riemann surfaces of genus g with s marked points. The trivalent fatgraphs then correspond to the top-dimensional cells, of dimension 6g-6+3s.
An important problem is how to reproduce inequivalent complex structures based on the data of metric fatgraphs. An important work of Mulase and Penkava <cit.>, based on earlier ideas of Kontsevich <cit.>, allows one to
construct the appropriate covering of a Riemann surface and the transition functions associated with a given fatgraph, thus exhausting all possible complex structures.
Let us have a look at those in detail.
Fixing an orientation onτ, we consider a neighborhoodU_vwith coordinatewcorresponding to the fixedm-valent vertexv, so that the vertex is placed at the pointw=0.
One can describe that neighborhood by considering stripes
{z_j∈ℂ, 0< Re(z_j)<L_j}, j=1, … , m
glued together via formula
w=e^2π i(j-1)/mz_j^2/m, j=1, … , m
if all m edges are pointing out from vertex v. If one or more of them points towards vertex v, we substitute the above formulas by
w=e^2π i(j-1)/m(L_j-z_j)^2/m, j=1, … , m.
One can construct such coordinate patches around every such vertex. The overlaps U_v∩U_v' are described by the corresponding stripes associated to the edge j of the fatgraph running between v and v'. Note that there are no triple intersections on such a punctured surface and that the vertices of the fatgraph belong to the boundary of the intersections.
Let us look at the transition functions on the overlaps between two such coordinate neighborhoodsU_v,U_v'around neighboring verticesvandv', assuming the edge is pointing fromvtov'.
We note that both coordinates w and w' are expressed in terms of z_j in the following way:
w=c_jz_j^{2/m}, w'=c'_j(L_j-z_j)^{2/m'},
where c_j, c'_j are mth and m'th roots of unity.
The resulting overlap coordinate transformation f_v'v between patches is given by the following formula:
w'=f_v',v(w)=c'_j(L_j-w^{m/2})^{2/m'},
where -π/2<arg(w^{m/2})<π/2.
That completely describes the transition functions between charts for the punctured Riemann surfaceF.
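The gluing just described is simple enough to prototype numerically. The following Python sketch is our own illustration: it takes c_j = e^{2πi(j-1)/m} as in the text, sets c_j = 1 for the reference half-edge when inverting the vertex chart, and uses the principal branch for the fractional powers.

import cmath

def to_vertex(z, j, m, L=None, incoming=False):
    # w = e^{2 pi i (j-1)/m} z^{2/m} for an outgoing edge, with z -> L_j - z for an incoming one
    c = cmath.exp(2 * cmath.pi * 1j * (j - 1) / m)
    return c * ((L - z) if incoming else z) ** (2.0 / m)

def overlap(w, jp, m, mp, L):
    # transition w -> w' across an edge of length L, taking the edge to be the first
    # half-edge at the m-valent vertex v and the jp-th half-edge at the m'-valent vertex v'
    z = w ** (m / 2.0)                       # principal branch: -pi/2 < arg(w^{m/2}) < pi/2
    cp = cmath.exp(2 * cmath.pi * 1j * (jp - 1) / mp)
    return cp * (L - z) ** (2.0 / mp)

# consistency check on an edge of length L = 1.0 joining two trivalent vertices
L, m, mp = 1.0, 3, 3
z = 0.3 + 0.1j                               # a point inside the strip 0 < Re z < L
w = to_vertex(z, 1, m)                       # chart at v
wp = to_vertex(z, 2, mp, L=L, incoming=True) # chart at v'
print(abs(overlap(w, 2, m, mp, L) - wp))     # ~ 0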
Note that if the consecutive edges L_1, L_2, …, L_n correspond to the boundary piece of the fatgraph associated with puncture p, one can glue the following coordinate neighborhood V_p with coordinate y covering the puncture:
y= e^x=exp(2π i/a_B(L_1+…+L_{k-1}+z_k)), where a_B=L_1+… +L_n,
so that one glues together the top or bottom parts of the stripes based on orientation, and the x-variable lives on a cylinder x∼x+2π i. Suppose the k-th strip is glued to the vertex v_k with coordinate w_k as above; then
the transition function f_p,v is given by:
y=f_p,v(w)=exp(2π i/a_B(L_1+…+L_{k-1}+w_k^{m/2})).
We adopt the following notation for the L-parameters: if an edge connects two vertices v, v', we denote the corresponding parameter by L_v,v'.
§.§ Strebel differentials.
An important object in the constructions of <cit.>,<cit.> are the Strebel differentials, quadratic meromorphic differentials with special properties. A nonzero quadratic differential, i.e. a holomorphic section μ of K^⊗2 on F, defines a flat metric on the complement of the set of its zeroes, written in local coordinates as |μ(z)||dz|^2, where μ=μ(z)dz^2.
A horizontal trajectory of a quadratic differential is a curve along which μ(z)dz^2 is real and positive. A Strebel differential is one for which the set of non-closed trajectories has measure zero.
Non-closed trajectories of a given Strebel differential decompose the surface into the
maximal ring domains swept out by closed trajectories. These ring domains
can be annuli or punctured disks. All trajectories from any fixed maximal
ring domain have the same length, the circumference of domain.
The following Theorem is due to Strebel:
<cit.>
For any connected closed Riemann surface F^c with s distinct points p_1,…, p_s, s>0, s>χ(F^c)
and s positive real numbers a_1, …, a_s there
exists a unique Strebel differential on F=F^c\{ p_1,…, p_s}, whose maximal ring
domains are s punctured disks surrounding p_i's with circumferences a_i's.
The union of non-closed trajectories of Strebel differentials together with their zeroes define a graph, embedded into a Riemann surface, thus giving to it a fatgraph structure. Every vertex of a fatgraph corresponds to the zero of the Strebel differential of degreem-2, wherem≥3is the valence
of the vertex. The length of each edge gives the graph a metric structure.
For every such Strebel differential one can construct the covering associated to the corresponding fatgraph, described in the previous section and vice versa, so that in the chartsU_v, Strebel differentialμhas the following explicit form:
μ |_U_v=w^m-2dw^2.
It also has a pole of order 2 at the punctures, so that in y-coordinates for each neighborhood V_p the differential looks as follows: μ|_V_p=-a_B^2/(4π^2)y^{-2}dy^2.
One can formulate then the following Theorem.
<cit.>
Let ℳ^ comb_{g,s} denote the set of equivalence classes of connected
ribbon graphs with metric and with valency of each vertex greater than or
equal to 3, such that the corresponding noncompact surface has genus g and s
punctures numbered by 1,…, s. The map ℳ_{g,s}×ℝ^s_+→ℳ^ comb_{g,s},
which associates to the surface F^c and numbers a_1, …, a_s
the critical graph of the canonical Strebel differential from Theorem <ref>, is one-to-one.
In this paper we do not need more properties of Strebel differentials; we refer the reader to <cit.>, as well as the original source <cit.>, for more information.
§ COMPLEX STRUCTURES ON (1|1) SUPERMANIFOLDS
§.§ Split case
Let us consider the punctured Riemann surface glued as in the previous section
using a metric fatgraph and the overlapping neighborhoods U_v corresponding to vertices.
To construct the coordinate transformations for a split (1|1)-supermanifold SF with such a base complex manifold, one has to consider a line bundle ℒ over F^c. Then the coordinate transformations for the coordinates (w', ξ'), (w, ξ), (y,η) corresponding to
the neighborhoods U_v, U_v', V_p of vertices v, v' and puncture p are given by the following formulas:
ξ'=g_v',v(w)ξ, w'=f_v',v(w),
η=g_p,v(w)ξ, y=f_p,v(w),
where g_v',v, g_p,v are the holomorphic functions serving as transition functions
of the bundle ℒ.
The collection {g_v',v, g_v,p} generates a Čech cocycle
g_v,v'|_U_v∩ U_v'∈ H^0(U_v∩ U_v', 𝒪^*),
g_v,p|_U_v∩ V_p∈ H^0(U_v∩ V_p, 𝒪^*),
representing the Picard group of F^c, i.e. Ȟ^1(F^c, 𝒪^*), if the following constraint on g_v',v and {g_v,p} is imposed around a given puncture p:
g_v,v'|_U_v'∩ U_v∩ V_p=g_v,pg_p,v'.
Then the following Proposition holds.
When ℒ has degree 0 over F^c, the fatgraph data
describing it is a U(1) graph connection with trivial monodromy around every boundary piece.
Proof.
Notice that one can choose g_v',v to be constant functions with values on the unit circle, which on the level of the fatgraph is described by a U(1)-graph connection, so that g_v,v'=e^{ih_v,v'}, where h_v,v'∈ℝ. Indeed, the corresponding holomorphic equivalences for the corresponding Čech cocycle reduce to constant U(1) gauge transformations at the vertices.
However, according to the condition (<ref>) that we imposed, we have to have g_v_1,v_2g_v_2,v_3…g_v_{n-1},v_ng_v_n,v_1=1, which is exactly the trivial monodromy condition. ▪
In order to describe a line bundle of arbitrary degree d, one has to do the following.
First, choose a fixed divisor of degree d, say a linear combination of puncture points. Then, tensoring with an appropriate bundle of degree 0, one can reproduce the original bundle.
Since we described the moduli space of degree 0 bundles in Proposition <ref> above, we can now characterize split punctured supermanifolds.
Consider the following data on a fatgraph τ:
* Metric structure.
* Flat U(1)-connection with zero monodromies around punctures.
* Fixed divisor M of degree d, which is a linear combination of puncture points.
The data above determines the complex split (1|1)-supermanifold corresponding to the line bundle of degree d on F. For a fixed divisor M, metric fatgraphs with U(1) connections describe the moduli space of split (1|1)-supermanifolds.
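For concreteness, the even part of this data is easy to encode and test. The following Python sketch (the dictionaries and the toy numbers are our own illustration, not structures taken from the text) stores edge lengths and a U(1) graph connection and verifies the trivial-monodromy condition of Proposition <ref> around one boundary cycle.

import cmath

# an edge {v,v'} carries a length L[v,v'] and a phase h[v,v'], with the conventions
# L[w,v] = L[v,w] and h[w,v] = -h[v,w] (so that g_w,v = g_v,w^{-1})
L = {('a', 'b'): 1.0, ('b', 'c'): 0.7, ('c', 'a'): 1.3}
h = {('a', 'b'): 0.4, ('b', 'c'): -1.1, ('c', 'a'): 0.7}

def phase(h, v, w):
    return h[(v, w)] if (v, w) in h else -h[(w, v)]

def monodromy(h, cycle):
    # product of g_v,v' = exp(i h_v,v') along an oriented boundary cycle
    total = sum(phase(h, v, w) for v, w in zip(cycle, cycle[1:] + cycle[:1]))
    return cmath.exp(1j * total)

cycle = ['a', 'b', 'c']                          # a boundary cycle visiting a -> b -> c -> a
print(abs(monodromy(h, cycle) - 1.0) < 1e-12)    # trivial monodromy, i.e. degree-0 data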
§.§ Infinitesimal Deformations and various types of punctures
As usual, the infinitesimal deformations of the above formulas leading to a generic non-split structure are described by H^1(SF^c, ST), where ST is the tangent bundle of SF^c, the split (1|1)-supermanifold which we discussed in the previous section.
Since we are deforming the split case one can describe infinitesimal deformationsρ∈H^1(SF^c, ST)using Čech cocycles,
i.e. in coordinates(ξ,w)ofU_vonU_v∩U_v'andU_v∩V_p:
ρ_v ,v'= v_v,v'(w)∂_w+ξα_v,v'(w)∂_w+
β_v,v'(w)∂_ξ+ u_v,v'(w)ξ∂_ξ,
ρ_p ,v= v_p,v(w)∂_w+ξα_p,v(w)∂_w+
β_p,v(w)∂_ξ+ u_p,v(w)ξ∂_ξ.
where the indicesv,v'andp,vhere mean that the corresponding elements are the corresponding Čech cocycles considered on the intersectionsU_v∩U_v', andV_p∩U_v.
Now we need to specify the behavior at the punctures to describe the cocyclesρleading to deformations ofSFin terms of cocyles onF^c.
There are two types of punctures we want to consider:
* puncture as a (0|1)-dimensional divisor on SF^c. We denote the number of such punctures as r.
* puncture as a (0|0)-dimensional divisor, or in other words just a point on SF^c.
We denote the number of such punctures as n.
Let T be the tangent bundle of F^c, let D_{n+r} be the divisor corresponding to the sum of all points on F^c corresponding to punctures, and let D_n be the sum of those corresponding to point-punctures on SF. Let us look in detail at the components of (<ref>):
v∈Ž^1( F^c, T⊗𝒪(-D_n+r)), u∈Ž^1(F^c, 𝒪),
β∈ΠŽ^1(F^c, ℒ⊗𝒪(-D_n)), α∈ΠŽ^1(F^c,T⊗ℒ^-1⊗𝒪(-D_n+r)),
whereŽ^1is the notation for Čech cocycles of degree 1. Note, that
we need to impose the constraints on cocycles onV_p∩U_v∩U_v':
s_v,v'|_U_v'∩ U_v∩ V_p=s_v,p+s_p,v',
wheres=v,u, α, β.
Here the u- and v-terms correspond to the deformations of the original manifold F;
notice that we have already incorporated the moduli for the base manifold F and the line bundle ℒ in the formulas (<ref>). The odd deformations, provided by the cocycles α, β, give the following deformations of the upper line of (<ref>):
ξ'=g_v'v(w)(ξ+β_v',v(w)), w'=f_v'v(w+ξα_v',v(w)),
which describes (to first order in the parameters) all possible complex structures on the punctured supermanifold.
If we remove the infinitesimality condition, the formulas above get deformed. Let us formulate this precisely.
* Consider the following data:
* A metric fatgraph with a U(1)-connection with trivial monodromy around boundary pieces, together with a fixed divisor of degree d which is a linear combination of puncture points; this defines a split punctured (1|1) supermanifold determined by the base Riemann surface F and the line bundle ℒ.
* Čech cocycles β̃=∑_kσ_k^βb_k, α̃=∑_k
σ_k^αa_k, so that
{σ_k^β}, {σ_k^α} are two sets of odd parameters,
b_k∈Ž^1(F^c, ℒ⊗𝒪(-D_n)),
a_k∈Ž^1(F^c,T⊗ℒ^-1⊗𝒪(-D_n+r)),
where D_{n+r} is the divisor corresponding to the sum of all s=n+r punctures on the closed surface F^c, D_n is the sum of a certain subset of the set of punctures, and the cohomology classes of {b_k}, {a_k} form bases in the corresponding cohomology spaces.
This data gives rise to a family of complex structures on SF, the (1|1)-supermanifold with n point punctures and r (0|1)-divisor punctures, so that the transition functions on
SF are given by the following formulas on the overlaps {U_v∩ U_v'}:
ξ'=g^(α, β)_v'v(w)(ξ+β_v',v(w)), w'=f^(α, β)_v'v(w+ξα_v',v(w)),
where g^(α, β)_v',v, f^(α, β)_v',v, are holomorphic functions on the overlaps, depending on the parameters σ_k^α and σ_k^β such that:
g^(0,0)_v',v=g_v',v, f^(0,0)_v',v=f_v',v, where {f_v'v}, {g_v',v} define the split supermanifold with the line bundle ℒ and s punctures so that in the first order in {σ_k^α} and {σ_k^β} we have
β̃_v',v(w)=β_v',v(w), α̃_v',v(w)=α_v',v(w).
* Let us fix the choice of transition functions in (<ref>), for every metric fatgraph τ with the U(1)-connection, divisor of degree d, and the odd data given by the cocycles β̃, α̃ on F^c.
The complex structures constructed in such a way are inequivalent to each other, and the set of such complex structures obtained by varying τ and the data on it forms a dense subset of maximal dimension in the moduli space of punctured (1|1) supermanifolds with underlying line bundles of degree d.
Proof.
Let us look at the formulas (<ref>) as generic ones, for arbitrary holomorphic functions {α_v'v}, {β_v'v} on overlaps. There is a finite number of odd parameters which parametrize all {α_v',v}, {β_v',v} corresponding to inequivalent complex structures. Expanding the formulas (<ref>) in terms of these parameters, we obtain that to linear
order β∈ΠŽ^1(F^c, ℒ⊗𝒪(-D_n)) and α∈ΠŽ^1(F^c,T⊗ℒ^-1⊗𝒪(-D_{n+r})), as in the infinitesimal case.
Conversely, sinceα,βrepresent the tangent space to the moduli space
of complex structures, parametersσ^α,σ^βserve as
coordinates there. Considering the corresponding 1-parametric subgroups generated byα̃,β̃we obtain formulas from (<ref>).
The fact that cohomologically equivalent cocycles lead to equivalent complex structures is justified by dimensional reasons. ▪
It is, however, nontrivial to explicitly parametrize those cocycles α, β. In the next subsection we will analyze the special case of supermanifolds with the line bundle ℒ of negative degree.
§.§ (1|1)-supermanifolds with deg (ℒ)=1-g-r/2
It is not easy to explicitly parametrize the cocycles α, β from fatgraph data if one does not fix a degree. From now on, we will be interested in the case when deg(ℒ)=1-g-k, where s≥k≥0, on F^c. Assuming that the number of divisor punctures is even and setting k=r/2, both bundles ℒ⊗𝒪(-D_n) and ℒ^-1⊗T⊗𝒪(-D_{n+r}) have equal degree 1-g-r/2-n on F^c.
Let us first be generic and characterize the cycles in ΠŽ^1(F^c, ℒ⊗𝒪(-D_n)), where s≥k=g-1-deg(ℒ)≥0, using the data from the fatgraph. To do that, we define a cocycle ρ, a representative of ΠȞ^1(F^c, ℒ⊗𝒪(-D_n)), as follows:
ρ_v,v'|_U_v∩ U_v'=ρ_v-ρ_v', so that ρ_v|_U_v=σ_v(w)/w^m_v-2, ρ_v'|_U_v'=σ_v'(w')/w'^ m_v'-2,
ρ_v,p|_U_v∩ V_p=ρ_v ,
where ρ_v, ρ_v' are meromorphic sections of ℒ⊗𝒪(-D_n) on U_v, U_v' correspondingly, m_v is the valence of the given vertex v, and σ_v(w)=∑^{m_v-3}_{i=0}σ^i_v w^i are polynomials of degree at most m_v-3 with odd coefficients, assigned to each fatgraph vertex v.
Let us denote for simplicityℒ̃=ℒ⊗𝒪(-D_n).
Then the following proposition holds.
* The cycles (<ref>) are uniquely defined by the polynomials σ_v at the fatgraph vertices, thus forming a complex vector space of dimension 4g-4+2s.
* Cycle ρ is cohomologous to cycle ρ̃ in ΠȞ^1(F^c, ℒ̃) if and only if
σ_v(w)-σ̃_v(w)=γ^(m-3)(w),
for every vertex v, where γ∈ΠH^0(F^c, ℒ̃⊗ K^2⊗𝒪(2D_{n+r})), so that γ|_U_v=γ(w) and γ^(m-3)(w) is the Taylor expansion of γ(w) up to order m-3.
* The cohomology classes of cycles ρ span ΠȞ^1(F^c, ℒ̃).
Proof. To prove part (1) one just has to count the number of vertices and the parameters at the vertices. An elementary Euler characteristic computation shows that
2g-2+s=∑_{j≥ 3}(j/2-1)𝒱_j(τ),
where 𝒱_j(τ) is the number of j-valent vertices in τ. Notice that for a j-valent vertex v we have exactly j-2 odd parameters from the expansion of σ_v(w), which immediately leads to the necessary parameter count, giving 4g-4+2s.
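This parameter count is easy to sanity-check for arbitrary valence profiles. The following Python sketch is only an illustration of the arithmetic, with a made-up valence profile: it recovers g from the Euler-characteristic relation and compares the two counts.

# check that sum_v (m_v - 2) = 4g - 4 + 2s whenever the valences satisfy
# the relation 2g - 2 + s = sum_j (j/2 - 1) V_j
def odd_parameter_count(valences):
    return sum(m - 2 for m in valences)

def genus_from_valences(valences, s):
    return (sum(m / 2 - 1 for m in valences) + 2 - s) / 2

valences, s = [3, 3, 3, 3, 4], 3     # a profile with four trivalent and one 4-valent vertex
g = genus_from_valences(valences, s)
print(g, odd_parameter_count(valences), 4 * g - 4 + 2 * s)   # -> 1.0 6 6.0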
To prove (2), note that on each coordinate neighborhood U_v the Strebel differential μ has the form μ|_U_v=w^{m_v-2}dw^2, and μ|_V_p=-a_B^2/(4π^2)dy^2/y^2, which means that one can rewrite the formula for the cocycle as
ρ_v,v'=(γ_v-γ_v')/μ, ρ_v,p=(γ_v-γ_p)/μ,
where γ_v|_U_v=σ_v(w), γ_p|_V_p=0, so that γ_v∈ΠȞ^0(U_v, ℒ̃⊗K^2) and γ_p=0∈ΠȞ^0(V_p, ℒ̃⊗K^2).
Suppose that such cocycle is exact, namely:
γ_v/μ-γ_v'/μ=(a_v-a_v')|_U_v∩ U_v', γ_v/μ-γ_p/μ=(a_v-a_p)|_U_v∩ V_p
for all v and v', so that a_v∈ΠȞ^0(U_v, ℒ̃), a_p∈ΠȞ^0(V_p, ℒ̃).
This is equivalent to (γ_v-a_vμ)=(γ_v'-a_v'μ)|_U_v∩U_v', (γ_v-a_vμ)=(γ_p-a_pμ)|_U_v∩V_p, i.e. the formulas γ_v-a_vμ=γ |_U_v, γ_p-a_pμ=γ |_V_p define γ as a holomorphic section on F, i.e. γ∈ΠȞ^0(F, ℒ̃⊗K^2). Assuming γ|_U_v=γ(w) and a_v|_U_v=a_v(w), the identity γ_v=a_vμ+γ|_U_v is only possible if a_v(w)=(γ(w)-γ^(m-3)(w))/w^{m-2} and σ_v(w)=γ^(m-3)(w), where γ^(m-3)(w) is the Taylor expansion of γ(w) up to order m-3.
Also, the identity γ_p=a_pμ+γ|_V_p, i.e. a_pμ+γ|_V_p=0, is possible only
if γ|_V_p has poles of order at most 2 at y=0, or, more precisely, γ∈H^0(V_p,ℒ̃⊗K^2⊗𝒪(2D_{n+r})).
Therefore, the cycles ρ and ρ̃ are cohomologous to each other iff the relation between the parameters {σ_v} and {σ̃_v} on the fatgraph parametrizing them is as follows:
σ̃_v(w)-σ_v(w)=γ^(m-3)(w),
where γ∈H^0(F^c,ℒ̃⊗K^2⊗𝒪(2D_{n+r})) on F^c, such that γ|_U_v=γ(w) with poles at the punctures of F of order less than or equal to 2, and γ^(m-3)(w) is the Taylor expansion of γ(w) up to order m-3.
Now, to prove part (3), we need to show that such classes of cocycles form a 2g-2+k+n-dimensional complex space as elements of ΠȞ^1(F^c, ℒ̃). For a given section γ of ℒ̃⊗K^2⊗𝒪(2D_{n+r}), the collection of the coefficients in γ^(m-3), for each vertex v, forms a vector in our 4g-4+2s-dimensional space of σ-parameters. The space spanned by all such vectors is a complex 2g-2+2s-k-n-dimensional space. Indeed, it cannot be of smaller dimension, since we know that dim_ℂȞ^1(F^c, ℒ⊗𝒪(-D_n))=2g-2+k+n; at the same time it cannot be of greater dimension, since the dimension of the space of such meromorphic global sections of ℒ̃⊗K^2⊗𝒪(2D_{n+r}) is 2g-2+2s-k-n by the Riemann-Roch theorem. ▪
Now we are ready to formulate a Theorem regarding the parametrization of complex structures via fatgraph data.
* Consider the following data associated to the fatgraph τ:
* Metric structure and a U(1)-connection on τ with zero monodromy around punctures and a fixed divisor of degree d=1-g-r/2 at the punctures.
* Two complex odd parameter sets {σ^α_v,k}, {σ^β_v,k} at each vertex v, so that k=0,…, m_v-3.
We will call two sets of data from (1) associated to fatgraph τ equivalent if the odd data are related as in Theorem <ref>.
Constructing transition functions f_v',v and g_v',v from the even fatgraph data and cocycles
α̃, β̃, corresponding to r(0|1)-divisor punctures and n point punctures from the odd data, one obtains a family of complex structures on (1|1)-supermanifold in the framework of Theorem <ref>.
* Fixing the transition functions in (<ref>) and considering one such complex structure per equivalence class of data for every fatgraph τ, we obtain a set of inequivalent complex structures, which is a dense subspace of odd complex dimension 4g-4+2n+r in the space of all complex structures on (1|1)-supermanifolds with base line bundle of degree
d=1-g-r/2 and s=n+r punctures, where n is the number of point punctures and r is the number of (0|1)-divisor punctures.
Proof. The first part of the data allows one to construct the split (1|1)-supermanifold, as we know from the previous sections; the odd data from the second part allows one to construct the corresponding cocycles β̃∈ΠŽ^1(F^c,ℒ⊗𝒪(-D_n)) and α̃∈ΠŽ^1(F^c, T⊗ℒ^-1⊗𝒪(-D_{n+r})). If we choose an orientation on the fatgraph, the formulas (see Theorem <ref>):
ξ'=g^(α, β)_v',v(w)(ξ+β_v',v(w)), w'=f^(α, β)_v',v(w+ξα_v',v(w))
produce the transition functions on U_v∩U_v' for the edge oriented from v to v'. ▪
Remark. Note that the gauge equivalence for the U(1) connection produces the following identification. If the real numbers h_v,v' parametrize the U(1) connection, then the transformations:
h_v,v'→ h_v,v'+t_v-t_v',
σ^α_v, σ^β_v→ e^it_vσ^α_v, e^-it_vσ^β_v
produce equivalent configurations of the infinitesimal parameters σ.
In the paper <cit.> the uniformization version of the N=2 Teichmüller space was constructed (see also <cit.>, <cit.>), which corresponds exactly to (1|1)-supermanifolds and serves as a universal cover for the moduli space we use here in the case of deg(ℒ)=1-g-r/2. The above identifications played an instrumental role in that construction. ▪
In the next two sections we will use the obtained results to describe transition functions for the corresponding dual supermanifold and N=2 super Riemann surface following <cit.>.
§.§ Dual (1|1) supermanifold.
Finally, we give a description of the dual (1|1) supermanifold, which is the supermanifold of (0|1) divisors of SF. To describe the explicit coordinates and coordinate transformations on such an object one can use the very simple equation (see, e.g., <cit.>):
w=a+ζξ,
where a, ζ are the coordinates parametrizing such a (0|1) divisor. Let us derive the formulas for the transformations of the a, ζ variables, i.e. for the transformation between the charts with coordinates (a, ζ) and (a',ζ'), so that w'=a'+ζ'ξ'.
ξ'=g^(α, β)_v'v(a+ζξ)(ξ+β_v',v(a+ζξ)),
a'+ζ'ξ'=f^(α, β)_v',v(a+ζξ+ξα_v',v(a+ζξ)).
We will now substitute first equation in the second and obtain:
a'+ζ'g^(α, β)_v'v(a+ζξ)(ξ+β_v',v(a+ζξ))=
f_v'v(a+ζξ+ξα_v',v(a)).
which leads to two equations:
a'+ζ' g^(α, β)_v'v(a)β_v',v(a)=f^(α, β)_v'v(a)
ζ'g^(α, β)_v'v(a)+ζ'ζ∂_a (g^(α, β)_v'v(a)β_v',v(a))=ζ∂_a f^(α, β)_v'v(a)-∂_a f^(α, β)_v'v(a)α_v',v(a)
The latter equation immediately gives the transformation forζ:
ζ'=g^(α, β)_vv'(a)(1+ζ g^(α, β)_vv'(a)∂_a (g^(α, β)_v'v(a)β_v',v(a)))(∂_af_v'v(a)ζ- ∂_af_v'v(a)α_v',v(a))
which could be simplified as follows:
ζ'=g^(α, β)_v,v'(a)∂_a f^(α, β)_v'v(a)(s_v,v'(a)ζ- α_v',v(a))
where s_v,v'=(1-g^(α, β)_v,v'(a)∂_a (g^(α, β)_v',v(a)β_v',v(a))α_v',v(a)).
Now, substituting that into the equation (<ref>) fora'we obtain:
a'+∂_a f^(α, β)_v',v(a)(s_v,v'(a)ζ-α_v',v(a))β_v',v(a)=f^(α, β)_v'v(a),
which is equivalent to
a'=f^(α, β)_v'v(a)- ∂_af^(α, β)_v'v(a)((1-∂_aβ_v',v(a)α_v',v(a))ζ- α_v',v(a))β_v',v(a),
and simpler:
a'=f^(α, β)_v'v(a-(1-∂_aβ_v',v(a)α_v',v(a))ζβ_v',v(a)+β_v',v(a)α_v',v(a)).
One can see from the transformations we obtained that the self-dual(1|1)supermanifolds are indeedN=1SRS.
Let us combine all that in the following theorem.
Given the coordinate transformations (<ref>) for SF, the coordinate transformations for the dual supermanifold of (0|1) divisors of SF are given by the formulas:
ζ'=g^(α, β)_v,v'(a)∂_a f^(α, β)_v'v(a)(s_v,v'(a)ζ- α_v',v(a)), where
s_v,v'=
(1-g^(α, β)_v,v'(a)∂_a (g^(α, β)_v',v(a)β_v',v(a))α_v',v(a)),
a'=f^(α, β)_v'v(a-(1-∂_aβ_v',v(a)α_v',v(a))ζβ_v',v(a)+β_v',v(a)α_v',v(a)).
Remark. Note that in the case of the dual manifold, ℒ is replaced by ℒ^-1⊗T.
§ N=2 SUPER RIEMANN SURFACES
In this section we write down the coordinate transformations
for puncturedN=2supermanifoldSF_N=2, corresponding toSF, based on the equivalence between complex structures on(1|1)-supermanifolds and superconformal structures onN=2supermanifolds discovered in <cit.>.
Let us write down the transition functions between the chart with coordinates (z,θ) and the chart with coordinates (u,η) on a (1|1) supermanifold in the following way:
u=S(z)+θ V(z)φ (z), η=ψ(z)+θ V(z),
where S(z), V(z) and φ(z), ψ(z) are respectively even and odd analytic functions.
On the other hand, the superconformal coordinate transformations for an N=2 SRS between the charts with coordinates (z, θ_+, θ_-) and (z', θ'_+, θ'_-) are:
z'=q(z)+1/2θ_-ϵ_+(z)q_-(z)+1/2θ_+ϵ_-(z)q_+(z)+1/4θ_+θ_-∂_z(ϵ_+(z)ϵ_-(z))
θ'_+=ϵ_+(z)+θ_+q_+(z)+1/2θ_+θ_-∂_zϵ_+(z)
θ'_-=ϵ_-(z)+θ_-q_-(z)+1/2θ_-θ_+∂_zϵ_-(z)
q_+(z)q_-(z)=∂_z q(z)+1/2(ϵ_+(z)∂_zϵ_-(z)+ϵ_-(z)∂_zϵ_+(z)).
The following Theorem matches these transformations.
<cit.>
There is a one-to-one correspondence between N=2 SRS and (1|1)-supermanifolds. The explicit correspondence between transition functions is given by the following formulas:
ϵ_+(z)=ψ(z), q_+(z)=V(z)
ϵ_-(z)=φ(z), q_-(z)=(∂_z S(z)-∂_zψ(z)φ(z)) V^-1(z),
q(z)=S(z)+1/2φ(z)ψ(z).
Let us now describe how it works for the transition functions we introduced in the previous section.
In our case:
V(w)=g^(α, β)_v',v(w), ψ(w)=g^(α, β)_v',v(w)β_v',v(w),
S(w)= f^(α, β)_v',v(w) , φ(w)=∂_wf^(α, β)_v',v(w)α_v',v(w)g^(α, β)_v,v'(w).
Therefore, we can write for the transition functions ofSF_N=2:
ϵ_+(w)=g^(α, β)_v',v(w)β_v',v(w),
ϵ_-(w)=∂_wf^(α, β)_v',v(w)α_v',v(w)g^(α, β)_v,v'(w),
q_+(w)=g^(α, β)_v'v(w),
q_-(w)=(∂_w f^(α, β)_v'v(w)-
∂_w(g^(α, β)_v',v(w)β_v',v(w))∂_wf^(α, β)_v',v(w)α_v',v(w)g^(α, β)_v,v'(w)) g^(α, β)_v,v'(w),
q(w)=f^(α, β)_v',v(w) +1/2∂_wf^(α, β)_v',v(w)α_v',v(w)β_v',v(w).
This can be rewritten in a simpler way:
ϵ_+(w)=g^(α, β)_v',v(w)β_v',v(w),
ϵ_-(w)=∂_wf^(α, β)_v',v(w)α_v',v(w)g^(α, β)_v,v'(w),
q_+(w)=g^(α, β)_v',v(w)
q_-(w)=∂_w f^(α, β)_v',v(w)g^(α, β)_v,v'(w)(1+α_v',v(w)∂_wβ_v',v(w))+
∂_wg^(α, β)_v,v'(w)∂_wf^(α, β)_v',v(w)β_v',v(w)α_v',v(w),
q(w)=f^(α, β)_v',v(w)+1/2∂_wf^(α, β)_v',v(w)α_v',v(w)β_v',v(w).
Hence we obtain the following theorem.
Formulas (<ref>) produce the transition functions describing the superconformal structure on an N=2 SRS with punctures, corresponding to (1|1)-supermanifolds with transition functions (<ref>). Namely, the transition function corresponding to an oriented edge v,v' of the fatgraph, i.e. the overlap U_v∩ U_v', is described by the functions ϵ_±(w), q_±(w) from (<ref>).
§ INVOLUTION AND N=1 SUPER-RIEMANN SURFACES WITH NS AND R PUNCTURES.
§.§ Involution: R vs NS punctures
There is an involutionIon the moduli space of super-Riemann surfaces, such that
I: D_±→ D'_∓,
where D'_∓ is the corresponding operator after an N=2 superconformal transformation.
Such an involution takesN=2super Riemann surface to the dual, which on the level of(1|1)- supermanifolds produces a manifold of(0|1)-divisors, which we discussed earlier. The self-dual supermanifolds are known to beN=1super-Riemann surfaces.
Let us describe how this works on an N=2 supertube (or N=2 punctured disk) with coordinates (x, η_+, η_-), where x∼x+2π i.
Let us consider an obvious choice of how involution could act in these coordinates:
D_+→ D_-, D_-→ D_+.
For self-duality one has to
identify η_+ and η_-, i.e. (x, η_+, η_-)∼(x +2π i, η_-, η_+). The operator D=D_++D_- gives a standard superconformal structure on the supertube. We see that in this case the puncture is a Ramond puncture.
Let us perform an elementaryN=2superconformal transformation, amounting to reflection, so that involution is
D_+→ -D_ -, D_-→ -D_+,
i.e. η_±→-η_∓. Invariance under this involution gives the identification (x, η_+, η_-)∼(x +2π i, -η_-, -η_+), so that the operator D=D_++D_- gives a superconformal structure around an NS puncture.
Note, that the two examples of the action of involution which we considered in this section are the only ones, which preserve the base manifold.
§.§ Split N=1 SRS
Let us now discuss the split N=2 SRS, which means that we set the cocycles α,β=0.
The involution
I: D_±→ D_∓
acts on the level of transition functions as follows: q_±(z)→ q_∓(z). Therefore, for fixed points of the involution we have
g_v',v^2(w)=∂_w f_v',v(w).
This means thatg^(α, β)_v',v(w)=sign(v',v) √(∂_w f_v',v(w))wheresign(v',v)is the notation for the sign of the square root, so that on a resultingN=1SRS
we have:
ξ'= sign(v',v)√(∂_w f_v',v(w))ξ.
Choice of signs for such square roots is the same as the choice of spin structure on the punctured surface. However, we already discussed that problem on the level of fatgraphs (Section 3), which leads to the following Theorem.
Consider a metric fatgraph τ with a spin structure ω provided by the orientation as discussed in Section 3. This data defines the superconformal structure on the split N=1 SRS.
For every boundary cycle on the fatgraph, corresponding to puncture p, let m_p be the number of oriented edges, which are opposite to the orientation induced by the one on the surface. The corresponding puncture is Ramond or Neveu-Schwarz, depending on whether m_p is even or odd.
Proof.
So, let us consider the metric graph τ with orientations on edges. Our task is to use the orientations to define the signs of the square roots.
To do that, for each overlap we will look at the z coordinates on the stripes discussed in Section 4. For given vertices v and v', the transformation between the z and z' coordinates is given by z'=f̃_v',v(z)=L_v,v'-z. We will define the value of √(∂_z f̃_v',v(z))=±i in the following way: if the orientation is from vertex v to v' we choose the positive sign (sign(v,v')=1) and the negative one otherwise (sign(v,v')=-1). One can prove that such a choice does not depend on the choice of orientation for a given spin structure; namely, a different choice, corresponding to a fatgraph reflection, will just result in a reflection of the odd coordinate at a given vertex.
Regarding the R and NS punctures, one can deduce immediately that the statement is correct from the fact that there is a natural combinatorial constraint on the number of Ramond-type punctures on a fatgraph (see Section 4), matching the one for Ramond punctures on a surface.
Nevertheless, let us prove that directly.
For a given choice of spin structure, let us superconformally continue the g_v'v cocycles by constructing g̃_p,v(z)=sign(p,v)√(∂_zf̃_p,v(z)) on U_v∩U_v'∩V_p,
where we remind that x=f̃_p,v(z)=2π i/a_B(L_{v_1,v_2}+...+ L_{v_{k-1},v_k}+z), where z is the coordinate on the strip between v_k and v_{k+1}, and v_1,…, v_m are the consecutive vertices around the puncture.
Now we obtainRandNSpunctures by gluing the supertube with a proper twist of the odd variable. That will of course depend on the numberm_pof the edges{v_i, v_i+1}, which have opposite orientation with respect to orientation induced on the cycle by the one on the surface.
Note that in terms of the z-variables √(∂_zf̃_p,v(z)) is a constant, so one can again make the choice of signs explicitly. We have the following identity
sign(v_1,v_2) sign(v_2,v_3)… sign(v_{n-1},v_n) sign(v_n,v_1)=± 1,
where the positive sign occurs for even m_p and the negative one otherwise. In the case of m_p even, we can choose {sign(p,v)} such that sign(v,v')=sign(p,v)sign(p,v') for all neighboring vertices v, v', thus gluing the stripes into a supertube.
However, this is not possible in the case
of odd m_p. In this case we have to assume that sign(v_n,v_1)=-sign(p,v_1)sign(p,v_n), thus gluing the stripes into a twisted
supertube corresponding to an NS puncture. ▪
Remark. One can of course superconformally transform the twisted supertube in the NS puncture case into the disk, the same way we did in the introduction, thus making
the corresponding cocycle {g_v,p(y)}={±√(∂_y f_v,p(y))}. In the Ramond case this is of course impossible. We see that if p is an R puncture, g_v,p^2(y)=y∂_y f_v,p(y). Therefore, for the bundle ℒ we have the condition:
ℒ^2=T⊗𝒪(-∑^n_R_i=1p_i),
which is possible only whenn_Ris divisible by 2.
§.§ N=1 SRS: non-split case.
In order to construct non-split N=1 SRS, we will first do it at the infinitesimal level near the split N=1 SRS. So, let us look at the formulas (<ref>) when α_v',v, β_v',v are infinitesimal:
ϵ_+(w)=g_v',v(w)β_v',v(w),
ϵ_-(w)=∂_wf_v',v(w)α_v',v(w)g_v,v'(w),
q_+(w)=g_v',v(w),
q_-(w)=∂_w f_v',v(w)g_v,v'(w),
q(w)=f_v',v(w).
Invariance under the simple involution D_±→D_∓ allows one to identify α_v',v and β_v',v, and as before g^2_v',v(w)=∂_w f_v',v(w); thus, infinitesimally, the transformations for the resulting N=1 SRS on the overlap U_v∩U_v' are given by:
w'=f_v'v(w+ξρ_v',v(w))
ξ'=±√(∂_wf_v',v(w))(ξ+ρ_v',v(w)),
so that the signs of the square roots are prescribed as in Theorem <ref>,
where ρ∈ΠŽ^1(F^c, ℒ⊗ O(-D_NS)) and ℒ^2=T⊗ O(-D_R), so that D_R and D_NS are the divisors corresponding to the sums of all R and NS punctures respectively. We described such cocycles using odd parameters at the vertices of the fatgraph in Theorem <ref>.
The formulas (<ref>) are not hard to continue to the full superconformal transformations for the transition functions (one can obtain them by applying invariance under the involution D_±→D_∓ to the formulas (<ref>) as well):
w'=f^(ρ)_v'v(w+ξλ^(ρ)_v',v(w))
ξ'=±√(∂_w f^(ρ)_v',v(w))(1+1/2λ^(ρ)_v',v(w)∂_wλ^(ρ)_v',v(w))(ξ+λ^(ρ)_v',v(w)).
Combining the parametrization data for the cocycles ρ from Theorem <ref> with the results of this section, we obtain the following omnibus Theorem, describing a dense set of superconformal structures inside the moduli space of N=1 SRS.
* Consider the following data on a fatgraph τ:
* Metric structure.
* Spin structure, as an equivalence class of orientations on the fatgraph. The cycles on the fatgraph encircling the punctures are divided into two subsets, NS and R, depending on whether the number of edges oriented opposite to the surface-induced orientation of the corresponding boundary piece of the fatgraph is odd or even, respectively. We denote the numbers of the corresponding boundary pieces by n_NS and n_R.
* Ordered set {σ_v^k}_k=0,…, m_v-3 of odd complex parameters for each vertex v, where m_v is the valence of the vertex v.
Then the following is true:
* Data from (1) and (2) determine uniquely the split super Riemann surface with n_R Ramond and n_NS Neveu-Schwarz punctures, with the transition functions given by
w'=f_v',v(w), ξ'=±√(∂_w f_v',v(w))ξ
for each overlap U_v∩ U_v'. The sign of the square root is given by the spin structure on the fatgraph, making the odd coordinate a section of a line bundle ℒ on the corresponding closed Riemann surface F^c, such that ℒ^2=T⊗𝒪(-D_R), where D_R is the divisor which is the sum of the points corresponding to the Ramond punctures.
* Part (3) of the above data allows to construct Čech cocycles on a Riemann surface F, which are the representatives of ΠȞ^1(F^c, ℒ⊗𝒪(-D_NS)), where D_NS is a divisor, corresponding to the sum of the points corresponding to NS punctures:
ρ_v,v'|_U_v∩ U_v'=ρ_v-ρ_v', so that ρ_v|_U_v=σ_v(w)/w^m_v-2, ρ_v'|_U_v'=σ_v'(w')/w'^ m_v'-2,
σ_v(w)=∑^m_v-3_i=0σ^i_v w^i, σ_v'(w')=∑^m_v'-3_i=0σ^i_v'w'^i,
where ρ_v, ρ_v' are the meromorphic sections of
ℒ⊗𝒪(-D_NS) on U_v, U_v' correspondingly, so that m_v is valence of the given vertex v.
The cocycles defined by configurations described by {σ_v} and {σ̃_v} are equivalent to each other if and only if
σ_v(w)-σ̃_v(w)=γ^(m-3)(w),
for every vertex v,
γ∈ΠH^0(F^c, ℒ⊗ K^2⊗𝒪(D_NS+2D_R)),
γ|_U_v=γ(w) so that γ^(m-3)(w) is the Taylor expansion of γ(w) up to order m-3.
We call two sets of data associated to the fatgraph τ equivalent, if they are related as in (<ref>).
* There exists a superconformal structure for the N=1 super Riemann surface SF with n_R Ramond punctures and n_NS Neveu-Schwarz punctures such that the superconformal transition functions for each overlap U_v∩ U_v' are:
w'=f^(σ)_v'v(w+ξλ^(σ)_v',v(w))
ξ'=±√(∂_w f^(σ)_v',v(w))(1+1/2λ^(σ)_v',v(w)∂_wλ^(σ)_v',v(w))(ξ+λ^(σ)_v',v(w)),
where the deformed functions f^(σ)_v',v, λ^(σ)_v',v depend on the odd parameters {σ^k_v} characterizing the Čech cocycle {ρ_v',v}, with f^(0)_v',v=f_v',v, and to first order in the {σ^k_v} variables λ^(σ)_v',v=ρ_v',v.
*
To describe the non-split SRS, we fix the choice of transition functions in (c) for every metric spin fatgraph τ with the odd data from (3). We consider the set of superconformal structures constructed by picking one superconformal structure per equivalence class of data for every fatgraph τ. The points in this set represent inequivalent superconformal structures, and together they form a dense subspace of odd complex dimension 2g-2+n_NS+n_R/2 in the space of all superconformal structures with n_NS Neveu-Schwarz and n_R Ramond punctures associated to F.
entry_id: http://arxiv.org/abs/2307.01331v1
published: 20230703201008
title: PM 1-322: new variable planetary nebula
authors: E. Paunzen, K. Bernhard, J. Budaj, F.-J. Hambsch, S. Hümmerich, D. Jones, J. Krticka
primary_category: astro-ph.SR
categories: astro-ph.SR, astro-ph.GA
§ INTRODUCTION
Most planetary nebulae (PNe) are far from homogeneous spherical objects. On the contrary, a plethora of structural differences in PNe suggests that binary stars and their evolution
may play a significant role in PNe formation.
In a binary system, an asymptotic giant branch (AGB) star overflows the Roche lobe creating a common envelope (CE) engulfing the binary. The outcome is a short-period binary or a merger, and an ejected CE that forms the PN.
Mass transfer in wide binary systems may also lead to the PN phase. AGB stars suffer huge mass loss through stellar wind, the velocity of which is comparable
to the orbital velocity. This results in an accumulation of material in the orbital plane. Long-period binaries were indeed confirmed in the cases of PN G052.7+50.7, PN LoTr5, and NGC 1514 <cit.>.
A few central stars of PNe also show dust obscuration events, such as V651 Mon
<cit.>, CPD-56^∘8032 <cit.>, or M 2-29 <cit.>.
A recent review of PNe and the role of binaries can be found in <cit.>.
The spectra of PNe are characterised by strong emission lines, most of which are forbidden lines originating from meta-stable levels of ionised species <cit.>. Such lines typically occur in vast rarefied gas regions irradiated by strong ultraviolet (UV) light.
However, similar conditions and lines can be found in post-AGB stars, B[e] stars, or symbiotic stars.
Many post-AGB stars are long-period binaries and have dusty disks - the so-called Van Winckel objects <cit.>. However, most of these objects completely avoid the PN phase; they thus seem to follow a completely different evolutionary pathway, although there are many similarities with the wider binary central stars of PNe <cit.>.
Symbiotic stars (SySts) are long-period binaries by definition. There are three distinct subclasses of these objects. One of them, D'-type, shows a very pronounced infrared (IR) excess peaking at 20-30 micron <cit.>. It was suggested that they may in fact be compact PNe <cit.>. Some SySts feature eclipses and strong forbidden emission lines <cit.>.
B[e] stars are also often characterised by strong double-peaked Hα emission lines, narrow forbidden low-excitation emission lines ([Fe ii], [OI]), and a strong IR excess <cit.>. Some of them may also be unresolved compact PNe <cit.>.
White dwarfs (WDs) are direct descendants of PNe. Some of them feature an IR excess and, recently, a few WDs were discovered that are eclipsed by dust clouds associated with exoasteroids <cit.>.
This paper presents a study of the nebula PM 1-322, which bears many similarities to the above-mentioned objects. Its special properties were discovered by one of us (K. Bernhard) as a byproduct of the search for anti-phase variability at different wavelengths.
Anti-phase variations in different photometric passbands are rarely observed and were previously considered an almost unique feature of a subgroup of α^2 Canum Venaticorum (ACV) stars (see e.g. ). The search was carried out by a visual inspection of the r and g band light curves of suspected variable stars from the Zwicky Transient Facility (ZTF) Catalog of Periodic Variable Stars <cit.>. In addition to several unpublished strictly periodic anti-phase ACV stars, there was also a single object (ZTFJ201451.59+120353.4) that showed anti-phase variability between the r and g band light curves but apparently no strict periodicity, which, however, is a characteristic of ACV variables. To the best of our knowledge, a similar behaviour has not yet been described in the literature, which is what initiated this study.
ZTFJ201451.59+120353.4 identifies with PM 1-322.
§ AN OVERVIEW OF PM 1-322 FROM THE LITERATURE
<cit.> were the first to report the discovery of PM 1-322 as a young PN on the basis of spectroscopic observations. Because of the high electron density and the
position in the line strengths 4363/Hγ versus 5007/Hβ diagram (see Fig. 3 therein), the authors already discussed the possibility of the object being a symbiotic star. Their main argument against the symbiotic star hypothesis was the absence of any absorption line or TiO band in the spectra as well as no indication for an increase of the continuum towards the red, which would indicate the presence of a companion. Therefore, they favoured the interpretation that PM 1-322 is a young high-density PN.
High-resolution long-slit spectra were presented by <cit.>. The nebula was
unresolved in their direct images. From the [N ii] and [O iii] single peak profiles, they derived
a systemic velocity of the nebula of +27.2 km/s. The Hα emission line showed a double-peaked profile with a centroid radial velocity in excellent agreement with the value listed above. The two emission peaks of the Hα profile
were observed at -17.5 and +9 km/s with respect to the
system velocity. The red peak was stronger than the blue one. Because of the large differences between the forbidden lines and
the Hα profiles, <cit.> suggested a different origin for these spectral features. In particular, the relatively broad wings of the Hα emission line were thought to possibly be caused by Rayleigh-Raman scattering in a very dense region close to the central star. The authors concluded that
the spectral properties of PM 1-322 are remarkably similar to the properties of other PNe suspected to host a symbiotic central star <cit.>.
<cit.> classified PM 1-322 as a D'-type SySt. In these stars, both components, the giant cool star and the dust shells, contribute to the total spectral energy distribution (SED), which results in
a nearly flat profile. The above-mentioned authors derived an effective temperature of the cool companion of 3811 K
and a primary dust shell temperature of 722 K. No Ovi 6830 line was detected. This emission feature was detected in about 50% to 60% of all Galactic SySts and is used as a diagnostic tool for their characterisation <cit.>. It was identified as Raman scattering of the Ovi resonance doublet at 1032 and 1038 Å by neutral hydrogen.
§ RESULTS FROM GAIA
Because PM 1-322 is an extended source, one has to assess
the astrometric data of the Gaia data releases carefully. The object Gaia DR3 1803129048700838016 is about 42 from the position of PM 1-322 (Fig. <ref>). We searched the Gaia data from DR2 <cit.> and (E)DR3 <cit.> for the corresponding entries of both objects.
In Table <ref>, we summarise all relevant information from the two recent data releases. The data are all intrinsically consistent, with the exception of the parallax (π) for PM 1-322. Transforming the values into distances using the Bayesian approach by <cit.>, we get a distance range from 1234 to 1521 pc for the DR2 and from 1730 to 2299 pc (geometric approach) as well as from 2095 to 2668 pc (photogeometric approach) for the DR3 data. We therefore constructed a colour-magnitude diagram using the distance values of all three different approaches.
As next step, the interstellar reddening (absorption) was taken into account. For this, we relied on the three-dimensional reddening map of <cit.>. The extinction in this direction is quite low and fairly constant over the listed distance range. The adopted absorption in V is 0.356 mag, which we transformed to the Gaia DR2 photometric system using the coefficients listed in <cit.>.
In Fig. <ref>, we present the M(G)_0 versus (BP-RP)_0 diagram together with the main-sequence PARSEC isochrones <cit.> for a solar metallicity of [Z] = 0.0152. The dereddened colour suggests an effective temperature of about 6600 K. The distance from the Gaia DR2 data would place the object significantly below the zero-age main sequence while the values from DR3 would place it at
or close to it.
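The conversion behind this placement is elementary; the following Python sketch uses round placeholder numbers rather than the catalogue values, and takes the extinction in the Gaia bands as an input instead of rederiving it from the coefficients of the cited work.

import math

def absolute_dereddened(G, bp_rp, d_pc, A_G, E_bp_rp):
    # M(G)_0 = G - 5 log10(d / 10 pc) - A_G ;  (BP-RP)_0 = (BP-RP) - E(BP-RP)
    MG0 = G - 5 * math.log10(d_pc / 10.0) - A_G
    return MG0, bp_rp - E_bp_rp

# illustrative placeholder inputs only (not the measured values):
print(absolute_dereddened(G=15.0, bp_rp=1.0, d_pc=2083.0, A_G=0.30, E_bp_rp=0.16))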
Also included in Table <ref> are the duplicity flag `Dup' and the Renormalised Unit Weight Error (RUWE). RUWE is expected to be around 1.0 for sources where the single-star model provides a good fit
to the astrometric observations. Values significantly larger than 1.0 could indicate that the
source is non-single or otherwise problematic for the astrometric
solution[<https://www.cosmos.esa.int/web/gaia/public-dpac-documents>].
The large RUWE value for PM 1-322 could be caused by its extended nature.
§ SPECTRAL ENERGY DISTRIBUTION
Although the Gaia filters are broad, it is possible that the measured fluxes are affected by variability or strong emission lines. In particular, the fluxes in the r filter might be enhanced by strong Hα emission, which would result in making PM 1-322 appear redder and cooler. The overall SED contains more information on the nature of the star.
The SED of our target object was constructed using the VOSA tool <cit.> and is shown in the top panel of Fig. <ref>. Only broad-band photometry was considered. The figure also includes our flux-calibrated spectrum, which is described in more detail in Section <ref>. This high-resolution spectrum was smoothed with a running window that was 2 Å wide to enhance its signal-to-noise ratio and decrease the number of points. The spectrum contains strong emission lines but its continuum agrees well with the photometry. This indicates that the continuum is a major contributor to the flux at these broad-band photometric filters. A more detailed assessment of the contribution of the strongest emission lines to the flux at different filters is presented in Section <ref>. Typically, these lines contribute less than 30% to the flux in these broad-band filters. However, variability of the source can affect the SED as well. As we see in Section <ref>, the star may experience brightness fluctuations by about 1 magnitude. However, in the optical region, for most of the time, the amplitude of the variability is less than 0.3 mag.
The observations indicate strong flux in the UV and optical regions and a significant IR excess. However, the SED is not that of a typical symbiotic star in quiescence
<cit.>, which is dominated by the red giant star. The scatter is larger than the error bars, which is due to the variability of the source and its emission lines. For this reason, it is difficult to estimate the interstellar extinction and justify approximating the SED with stellar atmosphere models that feature absorption lines.
That is why, as a first approximation, we ignore the extinction[We tried to optimise and determine the reddening parameters but this did not lead to any improvement in the fit of the scattered SED values.] and fit the observed F_λ fluxes using a simple model with two black bodies. The flux calibrated spectrum was not used in this fitting procedure.
Assuming a distance of 2083 pc, the temperatures and radii of the two black bodies are
T_1=9400 K, R_1=0.56 R_⊙ and T_2=400 K, R_2=600 R_⊙. The fit is also shown in Fig. <ref>.
We caution that although we use non-transparent spherical black-bodies, the real objects may be quite different. They do not have to be stars and they do not even have to be optically thick. For example, the first component could easily be an accretion disk hiding a much hotter central star (c.f. also Sections <ref> and <ref>).
If there is a significant extinction, the temperature of the first (hotter) component will be higher.
Consequently, if the temperature is higher, the radius of the first component will be smaller. In either case, the radius of this component is much smaller than the radius of a star on the main sequence at such temperatures. It would rather correspond to a subdwarf B-type (sdB) star or the pseudo-photosphere of a white dwarf surrounded by circumstellar material.
If the secondary component is a star, its radius implies a supergiant. However, the temperature of this component is so low that it is obvious that the excess is caused by a dust cloud. Nevertheless, it is still conceivable that a star is embedded in the dust cloud, similar to what is observed in D'-type SySts. A similar infrared excess is also often observed in compact PNe.
The log-log representation of the SED shown in the top panel of Fig.<ref> can be very misleading, which is why we also present the λ F_λ versus logλ
representation in the bottom panel. This kind of plot is better suited for assessing the actual SED. It demonstrates that most of the energy is actually radiated at IR wavelengths within the 7-40 micron interval. Assuming that a single hot star is responsible for heating the dust, its light emitted towards the observer has to be significantly attenuated, absorbed (e.g. by a dust shell or an edge-on disk), and reradiated in the IR region. This rules out a single star with an inclined dusty disk or ring configuration. This scenario also indicates that a simple two component black-body model completely fails to account for most of the energy coming from the star.
The situation improves if we add a third black body to the solution, which leads to the derivation of the following parameters:
T_1=9400 K, R_1=0.56 R_⊙, T_2=800 K, R_2=100 R_⊙, and T_3=180 K, R_3=4600 R_⊙, respectively. The fit is also shown in Figure <ref>. Since the third body radiates a lot of energy in the above-mentioned 7-40 micron interval, it could be observed with the James Webb Space Telescope <cit.>. The angular radius of the third body on the sky would be about 10 mas, which indicates that such an optically-thick dust cloud will not be resolved. In case the cloud is optically thin, its radius might be much larger. However, optically-thin clouds could also radiate a lot of energy at these wavelengths due to the strong opacity of silicates, which would shrink the radius necessary to produce the observed amount of energy. Clearly, a more sophisticated model has to be used to account for the observed IR excess.
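A minimal Python sketch of the multi-black-body model discussed above (Planck functions diluted by (R/d)^2; the actual fit to the photometry is not reproduced, the component parameters are simply those quoted in the text):

import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23            # SI constants
R_SUN, PC = 6.957e8, 3.086e16                       # metres

def planck_flam(lam, T):
    # Planck function B_lambda(T) in W m^-3, written to avoid overflow for large hc/(lam k T)
    x = H * C / (lam * KB * T)
    return 2 * H * C**2 / lam**5 * np.exp(-x) / (1.0 - np.exp(-x))

def model_flux(lam, components, d_pc):
    # observed F_lambda from spherical black bodies (T in K, R in R_sun) at distance d_pc
    d = d_pc * PC
    return sum(np.pi * planck_flam(lam, T) * (R * R_SUN / d) ** 2 for T, R in components)

lam = np.logspace(-7, -4, 400)                      # 0.1 to 100 micron, in metres
three_bb = [(9400, 0.56), (800, 100), (180, 4600)]  # (T, R) of the three-component fit above
flux = model_flux(lam, three_bb, d_pc=2083)
print(lam[np.argmax(lam * flux)] * 1e6, 'micron')   # wavelength where lambda*F_lambda peaks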
§ LONG-TERM VARIABILITY
Several archives were searched for historic photomeric data on our target object. The ZTF <cit.> has observed this star in two filters (ZTF g and ZTF r) for more than 4 years. The WISE <cit.> and NEOWISE <cit.> databases contain 12 yr of observations in the IR region
(W1 and W2 filters). Similarly, the All-Sky Automated Survey for Supernovae (ASAS-SN) contains 9 years of observations in the V and g filters <cit.>. For our analysis, we removed low-quality observations that only represent upper limits. The resulting collection of measurements is shown in Figure <ref>.
Two episodes of brightening by about one magnitude are observed in the W1 region, which are separated by 4500 days (12 yr). These are also seen in the W2 filter, albeit with reduced amplitudes. The most recent brightening was also captured in the ZTF r filter. In between these two major events, there is a smaller brightening event at around MJD 57200 that is noticeable in both IR filters.
This kind of behaviour, however, is not observed in the optical region in the V and g filters, where there is a reverse trend with dimming events instead of brightenings. This is best illustrated in Figure <ref>, which zooms in on the light curve evolution during the last four years of the ZTF coverage. A relatively quiet period during MJD 58200-59200 indicates little dimming events in ZTF g that are accompanied by small brightenings in ZTF r. A sudden (week-long) eruption-like event is observed in ZTF r at MJD 59270, which is then followed by additional activity and brightening until MJD 59530, when another, even stronger (month-long) eruption-like event occurred. After this event, the brightness dropped by about one magnitude and we refer to this drop as `an eclipse'. Its duration is about half a year. The behaviour in the ZTF g filter also shows enhanced activity in between the eruption-like events but there is no brightening. During the eclipse, however, the brightness also decreases in ZTF g and the drop is significantly more pronounced than in the ZTF r filter.
To explore the nature of this unusual variability, we calculated the synthetic photometry in ZTF r and ZTF g filters using our flux calibrated spectrum described in more detail in Section <ref>. Filter transmission curves and zero points were taken from the SVO filter profile service <cit.>. We obtained the following magnitudes: ZTFg=15.43, ZTFr=14.53. Then we removed the three strongest emission lines: Hβ, [OIII]5007, and Hα, recalculated the fluxes and magnitudes, and obtained ZTFg=15.65, ZTFr=14.93.
It shows that the emission lines contribute about 19% and 31% of the flux in those filters, respectively. It also shows that any variability with an amplitude of less than 0.2 mag in the g filter and 0.4 mag in the r filter, such as the eclipse, is unlikely to be caused by variability in emission lines or by the eclipse of a source of `pure' emission lines.
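The synthetic-photometry step can be sketched in a few lines of Python. This is a generic AB-magnitude integral for a photon-counting passband; the real ZTF transmission curves and zero points from the SVO service used above are not reproduced here, so the filter arrays and the 48.6 zero point merely stand in for them.

import numpy as np

def synthetic_ab_mag(wave_aa, flam_cgs, filt_wave_aa, filt_trans):
    # mean f_nu through a photon-counting passband, then m_AB = -2.5 log10(<f_nu>) - 48.6
    # wavelengths in Angstrom, f_lambda in erg s^-1 cm^-2 A^-1
    T = np.interp(wave_aa, filt_wave_aa, filt_trans, left=0.0, right=0.0)
    c_aa = 2.998e18                                  # speed of light in Angstrom s^-1
    fnu_mean = np.trapz(flam_cgs * T * wave_aa, wave_aa) / np.trapz(T * c_aa / wave_aa, wave_aa)
    return -2.5 * np.log10(fnu_mean) - 48.6

def without_line(wave_aa, flam, centre_aa, half_width_aa):
    # replace a line region by the neighbouring continuum before re-measuring the magnitude
    mask = np.abs(wave_aa - centre_aa) < half_width_aa
    out = flam.copy()
    out[mask] = np.interp(wave_aa[mask], wave_aa[~mask], flam[~mask])
    return out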
§ OUR FOLLOW-UP OBSERVATIONS
§.§ Spectroscopy
PM 1-322 was observed with the FIES instrument <cit.> on the 2.56-m Nordic Optical Telescope <cit.> on the night of November 14, 2022. FIES was employed in low-resolution mode (fibre diameter of 2.5 arcsec) providing a reciprocal resolution of approximately 25000 from 3700–9000 Å. Three consecutive exposures of 1200 s were obtained, followed by a single 300 s exposure of the standard star BD+28 4211 <cit.>. The airmass during the observations was approximately 1.4; as such, the atmospheric dispersion corrector (ADC) was not used (based on commissioning tests, the ADC is recommended only for observations at airmasses ≳ 1.5).
The data were reduced using the FIEStool pipeline[http://www.not.iac.es/instruments/fies/fiestool/FIEStool.html]. The flux calibration is only approximate given the fiber-fed nature of the observations.
We note that our observations sample a different part of the sky compared to the previous observations of <cit.> or <cit.>, which used slits with a width of 4" and 1.6", respectively. However, since the nebula was spatially unresolved in Hα, [NII], and [OIII] images with a resolution of about 1.8" <cit.>, it is unlikely that the slit width of 1.6", 4", or our aperture of 2.5" affects the spectral line profiles significantly.
The whole spectrum is shown in the top-left panel of Figure <ref>. It consists of numerous strong (mostly forbidden) emission lines and a very weak continuum, which is not strong enough to permit equivalent width measurements or detect any absorption lines. Its typical S/N is about 5.
Most of the emission lines have nice symmetric profiles. Some lines, such as [O iii] 4959, 5007, have curious flat peaks (see the top-right panel of Figure <ref> for a comparison with [Ne iii] 3869), which is not caused by saturation of the CCD detector. These lines are fairly narrow with a full width at half maximum of about 30 km/s, which is still above the limit dictated by the spectral resolution. Radial velocities derived from individual lines give a consistent value of about
22.1± 0.5 km/s. This is significantly less than the value of 27.2 km/s estimated by <cit.>. However, we note that one has to keep in mind that our aperture was slightly different than that used by the aforementioned authors.
The Balmer series shows a very interesting behaviour. Lines originating from higher levels are narrower with a single peak. Lines from progressively lower levels are broader and develop a symmetric double-peaked profile (see the bottom-left panel of Figure <ref>). Hα shows a central depression deeper than the half of the maximum and wings that can be traced up to 400 km/s. Radial velocities of the blue emission, central absorption, and red emission are -2.3, 22.3, 46.3 km/s, respectively. This Hα profile is very different from that observed in 2007 by <cit.>, in which the red emission was much stronger than the blue emission. Again, we note that our aperture was slightly different from the one used by these authors. The same pattern is observed in the nitrogen lines [N ii] 5755, 6583 (Figure <ref>, bottom-right panel). The first line originates from a higher level and is narrower while the second line is broader and double-peaked. The trend of narrower line widths for lines originating from higher Balmer lines is opposite compared to the dependence of peak separation on the principal quantum number of upper level observed in classical Be stars <cit.>.
A possible interpretation of these observations is that the central star is embedded in a nebula and an almost edge-on disk. Assuming this configuration, one could speculate and argue along the following lines. Lines from the lower levels originate from the inner disk, that is a denser material in the orbital plane which is subject to Keplerian motion. The half-separation of the emission peaks in the Hα profile is mainly sensitive to the radius of the disk and to its inclination. Assuming that the inclination is close to edge-on (see Section <ref>) and the half-separation of the emission peaks is about 24 km/s, the radius where Hα is produced would be about 1 au (assuming a central star slightly less massive than our Sun). The reason why low excitation Balmer lines originate from the inner disk could be that they are more opaque. There is no mutual radial velocity component between the disk material orbiting on circular orbits and the nucleus. Consequently, in the line core, the disk becomes opaque in the radial direction easily. The outer disk suffers from lower irradiation and, hence, less radiative excitation. An observer looking at the edge-on disk will not see very deep into the disk in the core of Hα. However, the blue and red wings of the line may be desaturated by the velocity gradient along the line-of-sight in the inner disk. Hence, in the line wings, the observer sees deeper and hotter regions of the inner disk with higher velocities. The denser material in the inner disk might also imply a stronger collisional excitation of lower levels.
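The quoted radius follows from a one-line Keplerian estimate; in the Python sketch below the 0.7 M_⊙ value is merely an assumption standing in for 'slightly less massive than our Sun'.

G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11   # SI units
v = 24e3                                       # half-separation of the Halpha peaks, m/s
r = G * 0.7 * M_SUN / v ** 2                   # circular Keplerian orbit: v^2 = GM/r
print(r / AU)                                  # -> about 1 au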
Lines from the higher levels and forbidden lines would originate from either larger distances from the star, or from well above the disk plane, or from distant polar regions of the nebula where the densities are much lower. Assuming there is a hot star at the centre, these low density regions would be subject to considerably stronger UV irradiation since they would not be shielded by the disk. Such radiative excitation and ionisation would give rise to single-peaked forbidden emission lines from highly-ionised species.
In the spectrum, we identified lines from H i, He i, He ii (25 eV), [N ii], N iii (30 eV), [O i, ii, iii] (35 eV), O ii, [Ne iii] (41 eV), [S ii, iii] (23 eV), [Cl iv] (40 eV), [Ar iii, iv] (41 eV), and [K iv] (46 eV). The numbers in parentheses indicate the ionisation potential in eV required to reach that particular ionisation state. This clearly supports the idea of a hot central star. Only the presence of [O iii] requires temperatures above 25 000 K <cit.>. Following this scenario, the temperature of our hottest black body from Figure <ref>
would be slightly lower but an additional black body much hotter than 25 000 K (possibly shielded by the disk) would need to be introduced. This might also explain why the data point at the shortest wavelength in Figure <ref> exhibits a significantly increased flux compared to the models presented.
The explanation offered above may not be the only possible model. The double-peaked and flat-topped profiles might also originate from bipolar outflows or radially expanding latitude-dependent winds with dusty disks as seen in B[e]-type stars <cit.>. Some planetary nebulae show a so-called Wilson effect <cit.>, in which the separation between the blue and red peaks of lines of highly ionised species is smaller than that of less ionised ones. It is attributed to radially expanding envelopes.
In the flux-calibrated spectra, we measured the fluxes of some of the most important spectral lines. They are given in Table <ref> and compared with those from <cit.>. These are observed values, not corrected for dust extinction or reddening. The most apparent change is that the forbidden oxygen lines, mainly [O iii] 5007, are now significantly weaker than before. Hα became slightly stronger and is now the strongest line in the optical spectrum. Although its central intensity is smaller than that of
[O iii] 5007 when plotted per unit of wavelength (see Fig. <ref>), it carries most of the energy. He i lines are also stronger now, except He i 5047, but this is likely due to a misprint in the previous value. This indicates that densities and collisions in the environment have increased.
It is very convenient to use the 4363/Hγ and 5007/Hβ line ratios, which are not sensitive to dust reddening. Both line ratios have decreased significantly, which places PM 1-322 deeper into the realm of the SySts (see Fig. 3 of <cit.>).
§.§ Photometry
Follow-up observations were performed at the Remote Observatory Atacama Desert <cit.>. The observations were acquired through Astrodon Photometric filters with an Orion Optics, UK Optimised Dall Kirkham 40 cm f/6.8 telescope and a FLI 16803 CCD camera. The field of view of the camera is 47 × 47 arcmin^2. Each data set consists of pairs of exposures with 90 s (B) or 60 s (V, R_C, and I_C). Twilight sky-flat images were used for flat-field corrections. The observations covered the time span from August 9, 2022 to September 12, 2022.
On September 7 to 9, time series in B and R_C were taken over a period of about 3.6 hours each, which are depicted in Figure <ref>.
These data indicate variability with an amplitude of about 0.1 mag on timescales as short as one hour. The finite speed of light therefore puts an important constraint on the scale of the objects involved in this kind of variability, which have to be smaller than a few au. We cannot, however, distinguish whether this variability originates from the variability in the continuum or from the emission lines.
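Spelling the causality argument out with a quick estimate (our own sketch; the one-hour timescale is the one quoted above):

```python
# Causality bound: a source varying coherently on a timescale dt cannot be
# (much) larger than c * dt.
C = 2.998e8        # m/s
AU = 1.496e11      # m

dt = 3600.0        # s, ~1 hour variability timescale seen in the time series
size_limit = C * dt / AU
print(f"maximum coherent source size ~ {size_limit:.1f} au")  # ~7.2 au, i.e. a few au
```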
To examine the ROAD observations for periodic variability, the Generalised Lomb Scargle algorithm from the programme package PERANSO <cit.> was used. After removing a slight linear trend, the B, V, R_C, and I_C data were examined for signals with a false alarm probability (FAP) less than 1 % (FAP < 0.01) in the period range of 0.05-2 d. No significant periodic signal was found in the examined data sets (Figure <ref>). While the number and pattern of peaks below the significance threshold suggest that there is some form of short-time variability in the period range of several tenths of a day, it is irregular or semi-regular at best.
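The period search itself was carried out in PERANSO; an equivalent, scriptable check could be done with astropy's LombScargle along the following lines (a sketch only; the file name and column layout are placeholders, not the actual ROAD data format).

```python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder input: MJD, magnitude, magnitude error for one filter (e.g. B).
t, mag, err = np.loadtxt("road_B.dat", unpack=True)  # hypothetical file name

# Remove a slight linear trend before the period search, as done in the text.
mag_detrended = mag - np.polyval(np.polyfit(t, mag, 1), t)

ls = LombScargle(t, mag_detrended, err)
# Search the 0.05-2 d period range used in the text (frequencies in 1/day).
freq, power = ls.autopower(minimum_frequency=1 / 2.0, maximum_frequency=1 / 0.05)

best = np.argmax(power)
fap = ls.false_alarm_probability(power[best])
print(f"best period: {1 / freq[best]:.4f} d, FAP: {fap:.3g}")
# A signal would be considered significant only if FAP < 0.01.
```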
§ INTERPRETATION AND DISCUSSION
In this section, we discuss several possible scenarios that might qualitatively explain the behaviour of our target object. They may or may not be correct; only future observations will be able to shed more light on this interesting system.
§.§ Binary scenarios
The two major and the one minor brightening events seen at roughly regular intervals in the IR light curves are suggestive of a periodic behaviour which could be associated with a binary nature of PM 1-322. In the following, we assume that the IR excess is caused by dust and not by the emission lines. Although we regularly refer to the hot and cold components, we caution that they do not necessarily have to be stars. For example, a hot component might be a pseudo-photosphere
or a hot disk surrounding an even hotter star. The cold component might be a dust cloud or a disk harbouring a cooler star. Bearing this in mind, one could speculate along the following lines.
Scenario (A): Forward and backward scattering on dust.
This scenario assumes a binary with a hot and a cold component. The cold component has, or is associated with, a dust cloud, which emits thermal radiation and also scatters the radiation from the star. The orbital inclination is close to an edge-on configuration.
Supposing the orbital period is 12 yr, one would observe major brightening episodes every 12 yr due to forward scattering on optically thin dust, which happens when the dust cloud is in front of the hot component <cit.>. The eclipse might occur at this moment. Apart from that, a minor brightening due to backward scattering would be observed when the cloud is behind the hot component. These events would be more pronounced at shorter wavelengths, and this is exactly what is observed in the W2, W1, and ZTF r filters.
Unfortunately, this model fails to explain the opposite behaviour in the visible region. Furthermore, the observed amplitude
of about one magnitude in the W1 filter seems too high to be explained by scattering. Consequently, we think this scenario is likely not a satisfactory explanation.
Scenario (B): Reflection effect on dust.
Similar to model A, this scenario also assumes a hot and a cold component.
The cold component and/or the associated dust cloud is at least partially opaque and heated by the hot component on its `day' side. Assuming an orbital period of 6 yr, we would observe brightenings due to the thermal and scattered radiation from the cloud. This would be analogous to the reflection effect in binary systems and a brightening would be observed when the dust cloud is behind the hot component.
Again, this is neither suited to explain the anti-phase behaviour in the optical region nor the eclipse event which should occur at minimum IR light. What we see at shorter wavelengths looks like eclipses correlated with the brightenings in the IR.
This leads us to the following scenario.
Scenario (C): Dusty tails.
This scenario assumes a hot component and a cold dusty component, with the dust cloud eclipsing the hot component every 12 yr. The minor dimming event in the V filter at MJD 57200 is then a secondary eclipse. However, this requires the assumption that the hot component has a non-negligible size (or a disk) to be able to cause secondary eclipses.
We still need to explain the brightenings at IR wavelengths. Assuming the dust cloud has the shape of a tail trailing behind the cold component, its cross-section (projection) would be largest
near the primary eclipse causing the IR brightening. This would be analogous to the ellipsoidal variability of eclipsing binaries but with eclipses occurring at maxima rather than minima.
Six years later, when the tail is behind the hot component, we would observe another maximum in the IR, which is approximately what is observed. However, based on this model, one would expect that the minor and major IR brightenings are of similar strength or that the minor brightening is stronger due to the above-mentioned reflection effect. This, however, is not the case. It also remains hard to understand the brightening in ZTF r before the eclipse and the little dips in the ZTF g filter during the quiescent phase (MJD 58200-59200). Consequently, it gets increasingly complicated and this scenario is likely not the correct explanation either.
Scenario (D): Dust clouds on an elliptical orbit.
In this scenario, we envisage a situation similar to scenario B (reflection effect), with the difference that the secondary component and its dust cloud are on an elliptical orbit with a period of about 6 yr.
When the cloud is at periastron, it becomes heated, which produces a brightening in the IR that is stronger in W1 than in W2. The probability that the dust cloud eclipses the hot component is highest at periastron, which could explain the observed dimmings at shorter wavelengths during the times of maxima in the IR. For this scenario to work, the dust cloud needs to be at least partially transparent. To explain the anti-phase behaviour and activity during the quiescent phase at MJD 58200-59200, one could envisage multiple dust clouds on similar elliptical orbits. This would be similar to the model of disintegrating asteroids <cit.> proposed to explain the behaviour of Boyajian's star <cit.>. In this object, minor dimming events are also observed before a major one. Dust embedded asteroids were also found orbiting white dwarfs <cit.>.
The problem with this scenario is the brightening before the eclipse in the ZTF r filter at about 6300 Å, which would require temperatures well above 2000 K, at which the dust would not survive.
One might speculate that if the secondary component is a giant star embedded in a dusty shell, the shell would sublimate at periastron, unveiling the spectrum of a giant star followed by brightenings in the ZTF r filter.
§.§ Puffed-up dusty disk scenario
Scenario (E): There may be another more simple explanation of the observed behaviour.
Let us assume a central hot star surrounded by an inner hotter gaseous disk and an outer cooler dusty disk. We assume that both disks are nearly edge-on. If the disks were to expand and get hotter for some reason, an increase of brightness in the IR and ZTF r filters would be observed. At the same time, the puffed-up edges of the outer disk might eclipse the central star or the hotter inner disk, which leads to the dimming events in the optical region. One would expect that the dimming is stronger at shorter wavelengths due to dust extinction being stronger here – and this is exactly what is observed.
This model also agrees with the spectroscopic data and the double-peaked line profiles. The major eclipse is still likely due to a dust enshrouded secondary star but, theoretically, even this could be understood by puffing-up the disk.
However, the origin of the disk remains to be explained, as well as the reason why it would expand and get hotter. It is interesting to note that, before doing so, there is a relatively short eruption-like event, which was observed in the ZTF data (MJD 59270) and might trigger these changes. It may be a sort of eruption throwing out material that subsequently cools down and forms new dust. It is also conceivable that the hot star has a cool companion and the eruption actually takes place on the companion star. The material from this cool star is then transferred to the accretion disk, increasing the mass accretion rate and thereby heating and puffing up the disk. This would be analogous to what is observed in a symbiotic star when an expanded and flared disk builds up during active phases <cit.>.
The inner disk may completely hide a central hot white dwarf from our sight. However, in the perpendicular direction,
the hot star is unobscured, ionising the circumstellar medium and giving rise to the strong emission lines.
Small variability in the disk may also explain the little dimming events in the ZTF g filter during the quiescent period (MJD 58200-59200). The observed shift in the radial velocities of 5 km/s might somehow be associated with the companion.
Alternatively, there might be smaller bodies, their debris, inhomogeneities, or dust clouds in the disk that cause the major eclipse. Smaller variability in ZTF r and g before the eclipse or during the quiescent period may also easily be due to variability in the emission lines. The eruption-like events that preceded the eclipse may conceivably be due to small-body collision cascades or `a nebular reflection' of an eruption that took place at the hot star, which radiatively excited the material in the nebula that subsequently `shone' in the emission lines. In summary, we consider scenarios D & E, or a combination thereof, as the most likely explanation for the observed kind of variability.
§ CONCLUSION
During the search for anti-phase variability at different wavelengths, we discovered that ZTFJ201451.59+120353.4 is an object with very peculiar variability properties. The ZTF r and g data show a one-magnitude deep, eclipse-like event with a duration of about half a year that occurred in 2022. Furthermore, the variability is characterised by dimming events in the optical region that are accompanied by brightening events in the red and IR regions. Apart from that, two fast eruption-like events were recorded in the ZTF r data. Archival data from the WISE mission indicate long-term variability with a possible period of about 12 or 6 yr. Our follow-up photometric observations revealed a stochastic short-term variability with an amplitude of about 0.1 mag on the timescale of about one hour.
Most intriguingly, ZTFJ201451.59+120353.4 is identified with the planetary nebula candidate PM 1-322. Its spectral energy distribution peaks in the mid-infrared region. Our high-resolution spectroscopic observations show strong, narrow, and mostly symmetric forbidden emission lines from highly ionised species and broader symmetric, double-peaked emission in Hα, which is very different from what is seen in earlier spectra obtained in 2007. Radial velocities derived from these lines are also different and shifted by about 5 km/s with respect to the values measured in 2007.
We speculate about the nature of the observed variability pattern. Many possible scenarios seem suitable to at least partly explain the observations and cannot be excluded. The forbidden emission lines from highly ionised species indicate the presence of a hot compact star embedded in an extended nebula, while the double-peaked emission in Hα suggests a gaseous disk. The significant IR excess in the SED indicates the presence of dust. The observed eclipses and variability dictate that the gaseous disk, the dust, and a possible companion are in a nearly edge-on configuration.
While we prefer a scenario involving a puffed-up dusty disk (Section <ref>), which readily agrees with the spectroscopic properties and is also able to explain the observed anti-phase variability, we stress that these are only speculations. Many open questions remain and it is clear that this most interesting system requires further long-term monitoring as well as UV and IR spectroscopic observations, which we herewith encourage.
The authors would like to thank Dr. Augustin Skopal for his comments on the manuscript and Andrii Maliuk for help with the archival data.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
This publication also makes use of data products from NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope
at the Palomar Observatory as part of the Zwicky Transient Facility project.
ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute for Science,
the Oskar Klein Center at Stockholm University, the University of Maryland,
Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan,
the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories,
IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners
the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories.
Operations are conducted by COO, IPAC, and UW.
Based on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofísica de Canarias.
This publication makes use of VOSA,
developed under the Spanish Virtual Observatory project supported
by the Spanish MICINN through grant AyA2008-02156.
This research has made use of the Spanish Virtual Observatory (https://svo.cab.inta-csic.es) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00.
JB was supported by the VEGA 2/0031/22 and APVV-20-148 grants. This work was supported by the Erasmus+ programme of the European Union under
grant number 2020-1-CZ01-KA203-078200.
Identifying Professional Photographers Through Image Quality and Aesthetics in Flickr
Sofia Strukova, Rubén Gaspar Marco, José A. Ruipérez-Valiente, Félix Gómez Mármol
======================================================================================
Recent years have seen an undeniable rise in the use of social media, and of photo and video sharing platforms in particular. These sites have proved their ability to yield rich data sets through user interaction, which can be used to perform a data-driven evaluation of capabilities. Nevertheless, this study reveals the lack of suitable data sets from photo and video sharing platforms and of evaluation processes built on them. Our first contribution is therefore the creation of one of the largest labelled data sets on Flickr, containing multimodal data, which has been open sourced as part of this work. Based on these data, we explored machine learning models and concluded that it is feasible to predict whether a user is a professional photographer from self-reported occupation labels and several feature representations drawn from the user, photo and crowdsourced sets. We also examined the relationship between the aesthetics and technical quality of a picture and the social activity around that picture. Finally, we described which characteristics differentiate professional photographers from non-professionals. To the best of our knowledge, the results presented in this work represent an important novelty for user expertise identification, which researchers from various domains can use for different applications.
§ INTRODUCTION
Nowadays, we observe the emergence of a wide range of online technology-mediated portals. They have proved their ability to generate rich data sets through the users' interaction, which can be used to perform a data-driven evaluation of competencies and capabilities <cit.>. Across them, there is a wide group of photo and video sharing platforms which are gaining indispensable popularity at the present time. This is explained by the fact that, in accordance with a comprehensive survey, users have five primary social and psychological motives to use one of the rising photo-sharing social networking services, which are social interaction, archiving, self-expression, escapism, and peeking <cit.>. More than that, images and photos are powerful tools based on their potential impact on people’s knowledge, attitudes, and perceptions regarding diverse topics <cit.>. In this way, the mobile applications of most photo and video sharing platforms came onto the market at the right moment in the history of technology and made them the dominant image-sharing social media in the second decade of the 21st century <cit.>.
Despite the high acceptance of photo and video sharing platforms throughout all segments of the world's population and their escalating use, there is no publicly available data set containing multiple data types and covering a considerable fraction of users from these platforms. Besides, although there is ample evidence of validated methods used to measure the expertise of users across the group of sites that can be called content sharing and consumption <cit.>, not much attention has been given to the in-depth exploration of photo and video sharing platforms, which hold much potential to infer not only common metrics like popularity <cit.> but also a range of users' competencies or capabilities. One of the most valuable skills to detect and explore in this context is photography capability. As could be expected, photography skills are subjective and people often disagree with each other on matters of taste, since it is hard to decide which photo is the best in terms of aesthetic and technical qualities. Since it is already a non-trivial task for a person to identify technically sound and aesthetically attractive pictures, it is even more complicated for a machine to evaluate the quality of a picture, as machines have to cope with noise in the picture in the form of intensity levels, colour saturation, lighting, compression, artefacts, etc. <cit.>. Also, machines do not have prior knowledge and struggle to understand some aspects of our world. As a solution to the challenge of image preprocessing, Convolutional Neural Networks (CNNs) trained with human-labelled data hold the potential to fill this gap <cit.>.
Data generated on the photo and video sharing platforms hold the potential to be used in various contexts. Across them, we can highlight the possibility of the creation of pathways for learning about the user's behaviour, general traits of Web navigation and the ability to perform data-driven content analysis. More than that, this knowledge could be valuable for informal learning focused on acquiring new attainments or competencies <cit.>. From another perspective, online content can yield violence in the user community, which is considered one of the most important problems of the 21st century <cit.>. In this way, data generated online hold the potential not only to infer valuable information about the users but also about vulnerabilities surrounding virtual life. Besides, the data set from any photo and video sharing platform with multimodal data would be able to infer the photography capabilities of users. This will open an opportunity to automatically detect good photographers on the Web and offer personalised aesthetic-based photo recommendations.
In this work, we examine several photo and video sharing platforms and the existing studies focusing on analysing data available across them. Based on these grounds and the encountered gaps, our first step was to create one of the largest data sets available from the Flickr platform <cit.>. We collected data from 27,538 users who uploaded photos to Flickr in December 2021, specifically those who specified their occupation. Additionally, we enriched the data set with features resulting from the automated analysis of the photos and their comments including three Image Quality Assessment scores representing aesthetic and technical aspects of the photos. Also, we labelled the data to indicate whether the user is a professional photographer. We are releasing the data set as part of this papers' contribution. Thus, it is open sourced and is available in the following URL: <cit.>. Next, we propose our method to infer if a user is a professional photographer or not based on self-reported occupation labels, which is a novel contribution to the literature. Finally, to the best of our knowledge, this is the first time that characterisation of professional and non-professional users is presented in any photo and video platform.
Accordingly, the first objective of the paper at hand was to create a data set focused on the Flickr photo and video sharing platform with multimodal data including crowdsourced, user and photo features that would allow to answer the following Research Questions (RQs) that we state next:
* RQ1. Which model is best at inferring whether a user is a professional photographer: one based on photo features, including aesthetics and technical quality scores; one based on the social network activity of the photographer; or one based on crowdsourced features that represent the interaction of other users with the photo?
* RQ2. What is the relationship between the aesthetics, the technical quality and the social activity of a given picture?
* RQ3. What characteristics differentiate professional photographers from non-professionals?
The remainder of this paper is structured as follows. In Section <ref>, we focus on the background of our study uncovering the subject of photo and video sharing platforms. In Section <ref>, we present our research methodology. We expand this section by selecting the photo and video sharing platform and explaining the data collection process. Next, we depict the final data collection and describe machine learning (ML) algorithms to identify professional photographers. Our findings are outlined in Section <ref>, while we extend the results in Section <ref>. Finally, we draw our conclusions and future research directions in Section <ref>.
§ BACKGROUND
§.§ Photo & Video Sharing Platforms
The main goal of photo and video sharing platforms is to allow their users to share various multimedia content, including photos and videos. Some of the platforms have built-in editing filters and organisation by hashtags and geographical tagging. Most of these sites also include a social networking service permitting users to connect with each other through comments or messages, browse other users' content, share and receive feedback. In this way, some material can be shared publicly or with pre-approved followers. In Table <ref>, we present a comparison of the leading photo and video sharing platforms, namely, Flickr, 500px[<https://500px.com/>], Instagram[<https://www.instagram.com/>], 1x.com[<https://1x.com/>], SmugMug[<https://www.smugmug.com/>] and Pinterest[<https://www.pinterest.com>], across several characteristics.
There were three fundamental points of comparison for our research: the number of monthly users, the access to an Application Programming Interface (API), and the ability to write comments. We could not find the up-to-date number of active users per month for 500px, 1x.com and SmugMug; among the others, the most visited portal is Instagram with its 2,000 million users per month, followed by Pinterest and Flickr with their 430 and 90 million users, respectively. Finally, most of the portals that we explored offer an API, with the exception of 1x.com and of 500px, which shut down its API access in 2018.
Flickr was a pioneer in online photo sharing and nowadays is one of the leading photo-sharing platforms worldwide which attracts extensive research attention <cit.>. Its users include diverse profiles of both professional and amateur photographers who want to share their portfolios. In 2018, it was acquired by SmugMug, a paid photo-sharing service. Similarly, SmugMug is characterised as a premium online photo and video sharing service business which currently has material uploaded by amateur and professional photographers around the world <cit.>. 500px and 1x.com are also more suitable for serious cameramen and they offer an image-focused design. On the contrary, Instagram is a social photo-sharing service launched in 2010 as an iPhone application fitting for non-professional users. Its users can take and manipulate photographs by adding filters and frames that enhanced the users’ experience. They can also share them online where other users can react by means of comments and “likes". Instagram is bringing an opportunity to communicate experiences through both choice of photo subject and ways to manipulate and present them <cit.>. Lastly, Pinterest was launched in 2010 as a Web site where users can save an image (known as a “pin") that they upload or find on a Web page onto a collection of these pins. A more detailed description of these platforms can be found in <cit.>.
§.§ Related work
There exist many studies disclosing the potential of Web portals to yield a significant amount of data, which can allow the detection of potential experts. On the whole, expertise finding is focused on detecting topical authority in a selected topic in forums and question and answer websites (e.g., Reddit <cit.> or Quora <cit.>). In contrast, most of the research in this domain is centred on proficiency in different programming languages, libraries or tools across portals highly related to the field of computer science, such as GitHub <cit.> and StackOverflow <cit.>. However, not much work has been done on discovering artistic skills, which are crucial to look at things from different perspectives and to remain globally competitive <cit.>. We also did not find any study aiming to identify professional photographers through image quality, aesthetics or any other photo-related features; thus, both our novel data set <cit.> and the research in this study contribute significantly to the literature.
From another perspective, the vast majority of the studies are making use of single-mode data sources. For example, Kantharaju et al. utilised clickstream data to trace player knowledge in educational games <cit.> and Pal et al. extracted textual data represented in questions and answers of users of a question and answer portal <cit.>. On the contrary, very few researchers decided to employ multimodal data sources. One of the examples of such an approach is <cit.> demonstrating the use of textual, behavioural and time-aware features in StackOverflow. The results of this work proved the utility of adding behavioural and time-aware features to the baseline method with an accuracy improvement for early detection of expertise. Even though there is a clear trend in using multimodal data, we did not find previous studies that operated various types of data in photo and video sharing platforms.
Also, we saw a heightened interest towards photo and video sharing platforms which could be able to reflect important information about users. A few studies are revealing that rich data sets from these portals could be used to explicitly or implicitly perform a data-driven evaluation of diverse capabilities. For example, Pal et. al. presented a novel approach to finding topical authorities in Instagram <cit.>. Their method is based on the self-described interests of the follower base of popular accounts. Similarly, Purba et al. carried out an analysis of popularity trends and predictions on Instagram, using a set of features acquired from users’ metadata, posts, hashtags, image assessment, and history of actions <cit.>. In the analysis of popularity trends, engagement grade is used in comparison to respect the lower engagement rate of users with a higher number of followers. It was found that image quality, posting time, and type of image highly impact engagement rate. However, neither of these studies of Instagram focused on photography capabilities as we do.
Finally, despite the enhancing relevance of photo and video sharing platforms and research across them, there are no publicly available data sets that could be used for the exploration of users' personality traits or capabilities. This is an important gap existing in the current domain.
§ METHODOLOGY
In this section, we describe the methodology process of building a supervised learning model for the final goal of professional photographers' identification. First, we explain the photo and video sharing platform selection followed by the description of its API service. Next, we give details on the feature engineering process. Then, we describe the final data collection and the ground truth. Finally, we explain the ML models that we chose for the stated goal and evaluation metrics to estimate their performance.
§.§ Methodology overview
To answer the RQs stated at the beginning of our study, we pursued the methodology process presented in Figure <ref>.
In the first step, we selected the photo and video sharing platform based on various metrics presented in Table <ref>. In the second step, we downloaded the photos from the selected site. In the third, fourth and fifth steps, we obtained the user, crowdsourced and photography features and ground truth in order to build the ML model in the sixth step. In the eighth step, we chose the best model to infer if a user is a professional photographer or not based on self-reported occupation labels. Next, in the ninth step, we explored the relationship between the aesthetics and technical quality of a picture and its social activity. Finally, in the tenth step, we found the common characteristics of professional photographers and non-professionals.
§.§ Photo and video sharing platform selection
Based on Table <ref> presented in the previous section, we can conclude that the most active photo and video sharing platforms are Instagram, Pinterest and Flickr, being visited by 2,000, 430 and 60 million active users every month, respectively. Flickr has a PRO service where a user can get unlimited storage, making it one of the cheapest hosting sites around. To keep Pinterest running smoothly, the users can create up to 200,000 pins and 2,000 boards which is a collection where users save specific pins. In contrast, Instagram allows its users to upload an unlimited quantity of photos. Moreover, as discussed earlier, these three platforms offer API services that can be helpful while acquiring the needed data.
Although Instagram has the huge privilege of having many registered users and the facility of uploading an unlimited number of photos, for our study we see it as a disadvantage because it could be hard to find those users whose behaviour on the website would correspond with the profile of professional photographers. Moreover, it does not allow uploading original-sized photos. On the other hand, Pinterest is not focused on pictures taken by users themselves but rather on drawings, paintings or artworks created on a computer. Therefore, we will focus on a photo and video sharing platform Flickr as a proxy for the photography skills of users.
Flickr also differs from Instagram by providing online communities and other groups on numerous social media and other platforms to improve customer relationships. Groups are a place to share ideas and photos with other like-minded members. Some group administrators first have to approve the users’ request to join. Flickr offers its users to create profiles with personal information, albums/photosets which are helpful to organise their photos and galleries/collections to which they can add other users’ media. Flickr is also geared toward beginners and enables them to edit the photos directly on the platform, such as adjusting brightness and contrast and applying various filters. There is also a concept of photostream which is a collection of media files that solely belongs to a user (public – others can visit the profile and see what the user uploaded, private – only the user and the list of permitted users will be able to view the content). All users have a list of their favourite photos. There is also an ability to connect with other users. As a social photo-sharing site, Flickr allows users to maintain a list of contacts. From the perspective of a registered user of Flickr, there are five categories of people on Flickr: the user, the user’s family, the user’s friends, the user’s contacts who are neither family nor friends, and everyone else <cit.>. Statistics for a free account show the total number of views, favourites, and comments it has. From another point of view, users can use tags to categorise and search for photos. There are several ways to tag pictures, either one at a time or in batches. Flickr lets users add up to 75 tags to each picture including the geotagging feature. Finally, every user can set a license representing the copyright permission for a given picture.
§.§ Flickr API – data collection
Flickr provides an API service which significantly facilitates the process of data collection. First, we decided to download the data of only those users who had filled in the occupation field in their profile; this choice allowed us to avoid the bias that would arise from guessing their profession. Intending to obtain a representative and comprehensive sample of the platform's active users, we singled out those users who were sufficiently active during the month of December 2021. We searched for all the photos of that month, discarding screenshots and videos. There were 225,590 users who uploaded photos in December 2021.
For the user selection, we discarded those users whose number of photos uploaded in December 2021 was equal to or greater than 20% of their total activity, in order to filter out those users without a minimum activity on the platform. We also filtered out the 5% of users from both ends of the distribution of total photos uploaded to avoid outliers. As a result, the final number of users we selected is 151,468.
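As an illustration, the two filtering rules above can be expressed in a few lines of pandas. This is a sketch only; the file and column names are our own placeholders, not the actual pipeline.

```python
import pandas as pd

# users: one row per user with the total number of uploaded photos and the
# number of photos uploaded in December 2021 (placeholder column names).
users = pd.read_csv("flickr_december_users.csv")  # hypothetical file

# Rule 1: discard users whose December uploads are >= 20% of their total activity.
users = users[users["photos_dec2021"] < 0.2 * users["photos_total"]]

# Rule 2: drop the 5% of users at both ends of the total-photos distribution.
low, high = users["photos_total"].quantile([0.05, 0.95])
users = users[users["photos_total"].between(low, high)]

print(len(users))  # 151,468 users remained in the original study
```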
Owing to time constraints, it was impractical to extract the data of all photos from all the users. Finally, we downloaded all pictures of a selection of 27,538 users. The complete process of downloading data with the Flickr API is thoroughly explained in <cit.>, which also describes the full data collection process.
§.§ Feature engineering for the ML model
§.§.§ Deep learning models
Child states that the photographer has to pre-visualise, pre-produce and create an environment using not only selected equipment, subject matter and props, but, far more importantly, light <cit.>. Image quality can be affected by noise, blur, and the technical requirements and equipment used. From another perspective, the aesthetics of a photo depend on colour balance (the compatibility of the colours and the feelings they evoke), contrast (variance between light and dark), lighting in general, camera-to-subject distance, camera angle and height, meter readings of light ratios, composition, subject choice and symmetry.
Despite the fact that evaluating these points might be hard for an ordinary user, some models perform well in this regard. After exploring several surveys, including a comprehensive performance evaluation of image quality assessment algorithms <cit.>, we selected two algorithms based on their high performance and on the possibility of re-implementing them. Firstly, Neural Image Assessment (NIMA) – a deep CNN that is trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically) <cit.>. Other models classify images into low/high scores, while the NIMA model produces a distribution of ratings for any given image – on a scale of 1 to 10, NIMA assigns likelihoods to each of the possible scores. Various functions of the NIMA score vector (such as the mean) can then be used to rank photos aesthetically. The authors replaced the last layer of the baseline CNN with a fully-connected layer with 10 neurons followed by soft-max activations. The baseline CNN weights are first initialised by pre-training, and an end-to-end training on quality assessment is then performed.
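For concreteness, a minimal sketch of such a NIMA-style head is shown below. It is our own illustrative reconstruction in PyTorch, with a generic MobileNetV2 backbone standing in for the baseline CNN; it is not the authors' code, and the dropout value is an assumption.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class NIMAHead(nn.Module):
    """Generic CNN backbone with a 10-bucket score distribution on top."""
    def __init__(self):
        super().__init__()
        backbone = models.mobilenet_v2(weights="DEFAULT")
        self.features = backbone.features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(0.75),
            nn.Linear(1280, 10), nn.Softmax(dim=1),  # likelihoods of scores 1..10
        )

    def forward(self, x):
        return self.head(self.features(x))

def mean_score(dist: torch.Tensor) -> torch.Tensor:
    """Reduce the predicted distribution to a single 1-10 score."""
    scores = torch.arange(1, 11, dtype=dist.dtype, device=dist.device)
    return (dist * scores).sum(dim=1)
```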
Secondly, Photo Aesthetics Ranking Network with Attributes and Content Adaptation proposes to train a deep convolutional neural network to rank photo aesthetics in which the relative ranking of photo aesthetics is directly modelled in the loss function <cit.>. This model incorporates joint learning of meaningful photographic attributes and image content information which can help regularise the complicated photo aesthetics rating problem. This model returns ratings for any given image – on a scale of 0 to 1.
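The key idea of modelling the relative ranking of photo aesthetics directly in the loss can be illustrated by a generic pairwise margin loss; this is a sketch of the principle, not the exact loss of the cited work.

```python
import torch

def pairwise_ranking_loss(score_hi: torch.Tensor,
                          score_lo: torch.Tensor,
                          margin: float = 0.1) -> torch.Tensor:
    """Penalise pairs in which the photo rated higher by annotators (score_hi)
    is not predicted to beat the lower-rated one (score_lo) by at least `margin`."""
    return torch.clamp(margin - (score_hi - score_lo), min=0).mean()
```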
§.§.§ Comments preprocessing
To analyse the comments, we followed several data preprocessing steps for comments. First, we changed all the comments to lowercase. Next, we replaced emojis in comments with their description codes with the use of the Python Demoji library [<https://pypi.org/project/demoji/>]. Moreover, we cleaned the comments from hyperlinks and non-alphanumeric text. Finally, we removed empty comments (including those that consisted only of stop words). For every comment, we computed features further explained in Section <ref>.
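These steps translate directly into a short routine such as the one below. This is a sketch only; the exact regular expressions and the stop-word handling of the original pipeline may differ.

```python
import re
import demoji

URL_RE = re.compile(r"https?://\S+|www\.\S+")
NON_ALNUM_RE = re.compile(r"[^a-z0-9\s]")

def preprocess_comment(text: str) -> str:
    """Lowercase, replace emojis with their description codes, strip links
    and non-alphanumeric characters."""
    text = text.lower()
    text = demoji.replace_with_desc(text, sep=" ")  # e.g. an emoji becomes its description
    text = URL_RE.sub(" ", text)
    text = NON_ALNUM_RE.sub(" ", text)
    return re.sub(r"\s+", " ", text).strip()

raw_comments = ["Great shot! 😍 see https://example.com", "   "]  # toy examples
comments = [preprocess_comment(c) for c in raw_comments]
comments = [c for c in comments if c]  # drop comments that became empty
```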
§.§.§ Description of the final data collection
The final data set used for further investigation consists of 2,647,927 pictures of 27,538 users. Each picture was downloaded in a size such that the smallest side of the image measures more than 230 pixels because the NIMA model takes images of 224x224 pixels size as input and Photo Aesthetics Ranking Network with Attributes and Content Adaptation works with 227x227 pixels images.
We have grouped the features that we obtained into three families – photography, crowdsourced and user features. The photography feature set includes the following features:
* Publication date – Number of days since the photo uploaded to Flickr.
* Update date – Number of days since the last update of the photo metadata (visits, favourites, comments, etc.).
* Groups number – Number of groups in which the photo has been posted.
* NIMA technical score – Technical score from NIMA model implementation.
* NIMA aesthetic score – Aesthetic score from NIMA model implementation.
* Kong score – Aesthetic score from Photo Aesthetics Ranking Network with Attributes and Content Adaptation implementation.
Crowdsourced features. These involve information or opinions from a group of people who submit their views via the Flickr site.
* Comments number – Number of comments written on the photo page.
* Views number – Number of views the photo got.
* Favourites number – Number of users who added the photo to the list of their favourites.
* Average polarity of the comments – computed with TextBlob, a Python library for processing textual data <cit.> (see the sketch after this list).
* Average subjectivity of the comments – Average number of subjective words in the posted comments, computed with TextBlob.
* Average readability of the comments, including two metrics indicating how difficult a passage in English is to understand, such as the number of difficult words and reading time.
* Average entropy of the comment – A statistical parameter that measures how much information is produced on average by each letter of a text in a language.
* Average comment length – Character count of the comment.
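A minimal sketch of how the per-comment text features above can be computed is shown next (TextBlob for polarity and subjectivity, a character-level Shannon entropy, and the comment length; the readability metrics are omitted here, and the example comment is our own).

```python
import math
from collections import Counter
from textblob import TextBlob

def comment_features(text: str) -> dict:
    blob = TextBlob(text)
    # Character-level Shannon entropy: average information per letter.
    counts = Counter(text)
    total = len(text)
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
    return {
        "polarity": blob.sentiment.polarity,          # in [-1, 1]
        "subjectivity": blob.sentiment.subjectivity,  # in [0, 1]
        "entropy": entropy,
        "length": total,
    }

print(comment_features("what a wonderful composition and light"))
```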
The user feature set includes the following features:
* Photos number – Total number of photos uploaded by the user to the platform.
* Join date – Number of days since the user became a member of the platform.
* Following number – Number of the users followed by the user.
* Groups number – Number of groups to which the user belongs.
* Flickr PRO – An indication of whether the user has the paid Flickr PRO membership[<https://www.flickr.com/account/upgrade/pro/>]. Flickr PRO provides advanced statistics on the photos and videos of the user. Also, it allows ad-free browsing on Flickr for the PRO user and their visitors. Moreover, it permits unlimited uploads at full resolution and easy backup. Finally, the user can establish detailed privacy settings for every photo.
Next, we aggregated crowdsourced and photo features by every user. As a result, for every user, there is a representation of every feature in terms of a minimum, a maximum and an average value.
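The per-user aggregation can then be expressed as a single pandas `groupby` (a sketch with placeholder file and column names):

```python
import pandas as pd

# photos: one row per photo with its owner id and the photo/crowdsourced features.
photos = pd.read_csv("flickr_photo_features.csv")  # hypothetical file

feature_cols = ["nima_technical", "nima_aesthetic", "kong_score",
                "comments_number", "views_number", "favourites_number"]

per_user = (photos.groupby("user_id")[feature_cols]
                  .agg(["min", "max", "mean"]))
# Flatten the (feature, statistic) column index, e.g. "nima_aesthetic_mean".
per_user.columns = ["_".join(col) for col in per_user.columns]
```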
Finally, for a better understanding, as the social activity features used to answer RQ2 we consider the minimum, maximum and average values of the following variables – the number of comments and the number of favourites (how many people added a picture to their list of favourites).
§.§.§ Ground truth
To collect the ground truth values, we first obtained the occupation self-indicated by the user. Based on it, we detected whether the occupation is related to photography. This was computed with regular expressions covering several languages that use the Latin alphabet. The regular expression includes the following terms – “fot", “phot", “valokuv", “zdjȩcie", “dealbh", “bild", “grianghraf", “nuotrauk", “pictur", “myndin", “billed", “ljósmyndari", “ritratt". Accordingly, as the ground truth label we consider whether a user has a photography-related occupation. This being the case, there are 4,108 users (≈15%) fulfilling this criterion.
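For reference, the occupation matching can be reproduced with a single case-insensitive regular expression built from the terms listed above (an illustrative sketch):

```python
import re

PHOTO_TERMS = ["fot", "phot", "valokuv", "zdjȩcie", "dealbh", "bild", "grianghraf",
               "nuotrauk", "pictur", "myndin", "billed", "ljósmyndari", "ritratt"]
PHOTO_RE = re.compile("|".join(map(re.escape, PHOTO_TERMS)), re.IGNORECASE)

def is_photography_occupation(occupation: str) -> bool:
    return bool(PHOTO_RE.search(occupation))

print(is_photography_occupation("Freelance photographer"))  # True
print(is_photography_occupation("Software engineer"))       # False
```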
§.§ ML model to identify professional photographers
Following the scope of our study, we compare the performance of both interpretable and non-interpretable classification techniques over the features mentioned in the previous section to fulfil the goal stated in Section <ref>. We selected two of the most interpretable classification techniques, a probabilistic and a logit model – Gaussian Naïve Bayes and Logistic Regression (LR) – and pitted them against non-interpretable techniques, a bagging and a boosting model – Random Forest (RF) and Gradient Boosting Classifier. We believe that our choice of algorithms covers a wide spectrum of attribute-based learning approaches. Hence, we restrict our case study to these algorithms, with the final goal of selecting the best model out of a set of classifiers with various feature representations.
We trained our models using 10-fold cross-validation. Given the significant class imbalance, we optimised for the AUC metric, which takes this imbalance into account. In addition, we also report the F1-score and the accuracy of the models.
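In scikit-learn terms, this evaluation protocol corresponds roughly to the following. This is a sketch only; the hyper-parameters and the use of stratified folds are our assumptions, and X and y stand for the chosen feature matrix and the professional-photographer labels.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# X: feature matrix (e.g. user + photo features), y: binary professional label.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

scores = cross_validate(clf, X, y, cv=cv,
                        scoring=["roc_auc", "f1", "accuracy"])
for metric in ("roc_auc", "f1", "accuracy"):
    print(metric, scores[f"test_{metric}"].mean())
```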
§ RESULTS
§.§ RQ1. Professional and non-professional photographers
From Table <ref>, we can observe that all the models – the interpretable ones, Gaussian Naïve Bayes and LR, and the non-interpretable ones, RF and Gradient Boosting Classifier – perform in a similar way. The results show that, in general, most competing algorithms were fairly accurate for our data set. A perhaps surprising observation is that the accuracy score ranges from 0.85 to 0.92 in most combinations of models and feature sets. This can be explained by the fact that accuracy is a simple evaluation measure for binary classification which is better suited to cases where the data are balanced. This is not the case in our study, as explained previously in Section <ref>. Consequently, we computed AUC and F1 scores, which reflect precision and recall and provide more insight into the differences between classifiers.
The comprehensive comparison revealed that RF demonstrates the best performance when using the user and photo features, reaching an accuracy of 0.92, an AUC score of 0.73 and an F1 score of 0.89. The Gradient Boosting Classifier was almost as successful as RF, the best-performing approach. It showed only slightly worse results when comparing the AUC and F1 measures of each model on every set of features. Its prediction with the set of user features nevertheless showed sufficiently good results, with an accuracy of 0.92, an AUC score of 0.72 and an F1 score of 0.89. In this way, the non-interpretable algorithms exhibited superior performance compared with the rest of the contenders.
Regarding the comparison of interpretable models, between Gaussian Naïve Bayes and LR, the best-performing model is LR with the photo features, showing an accuracy of 0.92, an AUC score of 0.68 and an F1 score of 0.88. From Table <ref>, we note that the performance of Gaussian Naïve Bayes is less competitive compared to LR. However, even though these algorithms achieve the lowest AUC scores, they reach comparably high accuracy and F1 scores. We can also observe that combining sets of features does not always yield a clear increase in predictive ability compared to using each set of features alone.
§.§ RQ2. The aesthetics and technical quality and the social activity of photos
To answer the question regarding the relationship between the aesthetics and technical quality of a picture and the social activity of that picture, we first explored the correlation matrix of the social activity features and NIMA technical score, NIMA aesthetic score and Kong score represented in Figure <ref>. As it is plain to observe, the technical and aesthetic scores are highly correlated between themselves. As a case in point, the correlation between the average of the Kong scores and the average of the NIMA aesthetic scores is 0.45 which indicates that they are strongly positively correlated. As depicted in the figure, there are many variables that are relatively highly correlated with their different representations (a minimum, a maximum and an average). Variables that represent different concepts are not positively correlated with each other, with the exception of aesthetic and technical scores.
Then, we examined the performance of the best-performing model selected in the previous section – RF – separately on the social activity feature set and on the features related to the aesthetics and technical quality of pictures. The results reported in Table <ref> indicate that the algorithm has better predictive power with the photo features, which include the NIMA technical score, NIMA aesthetic score and Kong score, reaching an accuracy of 0.92, an AUC score of 0.67 and an F1 score of 0.88. On the other hand, with the social activity set of features, the Gradient Boosting Classifier shows a lower AUC score of 0.6 but the same F1 score of 0.88. This suggests that, despite the subjectivity of art, the aesthetic and technical scores computed by CNN models are reliable.
§.§ RQ3. Common characteristics of professional and non-professionals
To answer this RQ, we aggregated the predictions of the best-performing model, RF with user and photo features. We computed average metrics for professional photographers and non-professionals for all user and photo features explained in Section <ref>. The aggregation of these two types of users predicted by RF, with their most common characteristics and differences, is shown in Figure <ref>. The model identified 974 users as professional photographers (≈12%) and 7,279 users as non-professionals. It is noteworthy that there are many more non-professional users than professional ones, which is consistent with the distribution of the ground truth presented in Section <ref>, where we explained the class imbalance.
In Figure <ref>, we depict these two types of users and the metrics which differentiate them. We performed a multivariate analysis of variance (MANOVA) to ascertain whether the differences between these types of users are statistically significant. This was confirmed by obtaining an F-value = 3,268 and a p-value ≈ 0. Thus, we can confirm that the two types of users have statistically significantly different characteristics. We also conducted an analysis of variance (ANOVA) for each individual feature to see which of them are statistically different; these are represented in Figure <ref>. The average photography technical and aesthetic scores show that photos of professional photographers obtained a higher NIMA aesthetic score (4.86 versus 4.54), NIMA technical score (5.16 versus 4.78) and Kong score (0.55 versus 0.51). Moreover, photos of professional photographers tend to be visited more often, with an average number of views of 3,602, while photos of non-professional users get an average of 236 views. Besides, pictures of professional users differ from those of the other type of users by the average number of groups where they are published – 14 versus 4. Finally, the number of users followed by professional photographers is somewhat higher – 1,547 versus 1,340 in the case of non-professional users.
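The statistical tests above can be reproduced with statsmodels and scipy along the following lines. This is a sketch; the data frame and its column names are placeholders for the per-user features and the predicted class label.

```python
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

features = ["nima_aesthetic_mean", "nima_technical_mean", "kong_score_mean",
            "groups_number_mean", "views_number_mean", "following_number"]

# MANOVA: do the two predicted classes differ jointly on all features?
formula = " + ".join(features) + " ~ predicted_professional"
print(MANOVA.from_formula(formula, data=df).mv_test())

# Per-feature ANOVA (equivalent to a two-group F-test here).
for f in features:
    groups = [g[f].values for _, g in df.groupby("predicted_professional")]
    print(f, stats.f_oneway(*groups))
```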
§ DISCUSSION
In this section, we first discuss the obtained results. Then, we talk about the potential application of our study in real scenarios. Finally, we raise the limitations of our work.
§.§ Obtained results
To summarise, the experiments conducted on the data set extracted from Flickr suggest that all competing interpretable and non-interpretable algorithms (Gaussian Naïve Bayes and LR, RF and Gradient Boosting Classifier) provide meaningful results. It is worth mentioning that the studies examined in Section <ref> applied models to the task of expert finding in technical fields or narrowly defined areas and reported better results. This can be explained by the fact that artistic skills, and specifically photography skills, are ill-defined, and there are no established methods and features to determine and measure them. Also, based on the results presented in Table <ref>, we can notice that some feature sets are more informative than others. For example, the results obtained using photo and user features show that these identify professional users more accurately. This indicates that the Flickr data of users and photos can be used to identify professional photographers and non-professional users based on self-reported occupation labels. Other researchers can use our findings as a basis for finding more powerful models in order to strengthen the detection of experts in the photography field.
After recognising the satisfactory performance of the above-mentioned algorithms, we focused on RQ2, which concerns the relationship between the aesthetics and technical quality of a picture and the social activity around that picture. The fact that we did not see much correlation between social activity features and the technical and aesthetic scores of the photos was not unexpected. It can be explained by the fact that many existing studies in the literature on image aesthetic assessment are based on data sets like A Large-Scale Database for Aesthetic Visual Analysis <cit.> or the Tampere Image Database <cit.>. These data sets were annotated with semantic and aesthetic labels and rated by users unidentifiable to researchers; it is therefore not clear that these annotators align with the photography enthusiast community. Besides, aesthetic beauty is subjective, as different viewers may perceive the same picture differently. Moreover, due to the social network features of the photo and video sharing portal, the behaviour of users might prevail over aesthetics.
Ultimately, to answer RQ3, in relation to the characteristics that differentiate professional photographers from non-professionals, we aggregated the RF predictions per user. Based on the statistically significant features, we noticed that the two types of users differ by the average NIMA aesthetic score, the average NIMA technical score, the average Kong score, the average number of groups, the average number of views and the number of followed users. All these features are noticeably higher for professional photographers. The fact that photos from non-professional users are visited less can be explained by two other variables – the average number of followers and groups. The number of followers a user has largely determines the number of clicks that their pictures can get. On the other hand, for a picture to be published in a group, the user has to actively submit it there, and some groups require administration approval for the picture to be included. We can conclude that this is clearly correlated with the technical and aesthetic scores, which are higher for professional photographers.
§.§ Application in real scenarios
The results presented in this paper can be applied to the task of assessing the photography quality of users. The automatic detection of professional photographers can be used to build more reliable photo and video sharing platforms by establishing high skill standards for users.
Moreover, our findings can be applied in contexts beyond the stated identification of professional photographers. Through computer vision, we can detect inappropriate content. This is a relevant issue nowadays, since the proliferation of social media enables people to express their opinions widely online, leading to the emergence of conflict and hate. The lack of a universal hate classifier generalising across training sets and contexts was addressed by <cit.>. The authors developed a cross-platform online hate classifier which performs well for detecting hateful comments across multiple social media platforms, including YouTube, Reddit, Wikipedia and Twitter. However, the data sets used for this study mainly include manually labelled comments from these sites. We believe that this work can contribute significantly to improving the coverage of the existing platform. Furthermore, even a seemingly harmless meme can nowadays become a multimodal type of hate speech, considered a direct attack on people based on ethnicity, religious affiliation, gender, etc. <cit.>.
Our results can also serve as a basis for creating a platform offering personalised aesthetics-based photo recommendations. Such a tool is already implemented in several portals, such as the Netflix recommender system <cit.>, and there is a need to extend it to other platforms. It can help photography websites better serve the needs of non-professionals and professional photographers <cit.>. Content-based image search does not fully satisfy the needs of such users, since they are usually not interested in content alone. Instead, they are often looking for photos with certain photographic aesthetics, which may include monochromaticity, light contrast, and style.
Another important topic that can be addressed with the help of this study is privacy. We can detect sensitive places and photographs violating community terms and conditions. Most websites nowadays take measures against spam messages and inappropriate content; however, malefactors keep inventing new ways of overcoming them. Also, not all systems can detect photographs whose visible locations could make their subjects vulnerable.
§.§ Limitations
We now discuss the research gaps that remain within the topic of identifying professional photographers on Flickr.
It is important to mention that our results demonstrate the potential of ML models to be used in several domains. However, the expected link between image quality and aesthetics and professional status was not confirmed. We noticed a certain level of correlation in the MANOVA test, but not enough to assert that pictures of professionals have higher scores according to the models described in Section <ref>.
Moreover, the ground truth for this study is based on the self-reported occupation of the users. We believe that there may be further characteristics of professional users that could serve as the basis of the ML model.
Finally, it would be useful to repeat the study with the other portals explored in Section <ref>. Flickr, being a large photo and video sharing platform, may not be representative enough in terms of the proportion of non-professional users.
§ CONCLUSIONS AND FUTURE WORK
This work aimed to fill the gap created by the lack of open data sets on photo and video sharing platforms. We provided a significant contribution to the literature by collecting one of the largest labelled data sets on Flickr, with multimodal data including crowdsourced, user and photo features. From 225,590 users who uploaded photos in December 2021, we filtered out users without a minimum level of activity on the platform and the 5% of users at both ends of the distribution of total photos uploaded, to avoid outliers. As a result, we selected 151,468 users. Due to time constraints, we downloaded all pictures for a selection of 27,538 of these users. Based on these data, we addressed the task of distinguishing professional photographers from non-professional users on Flickr. We used several feature sets and tested four models on them. Among the interpretable classification techniques (Gaussian Naïve Bayes and LR) and the non-interpretable techniques (RF and Gradient Boosting Classifier), RF showed the best performance using user and photo features. Our results demonstrated that it is feasible to predict whether a user is a professional photographer based on self-reported occupation labels. We also found that the technical and aesthetic scores of a picture are not highly correlated with the social activity around that picture. Finally, based on the statistically significant features, we infer that professional photographers can be distinguished from non-professional users by a higher average NIMA aesthetic score, average NIMA technical score, average Kong score, average number of groups, average number of views, and number of followers.
We will devote our future work to generalising the models to detect professional photographers on other photo and video sharing platforms that we identified in this study as promising for this type of task. Moreover, we will expand and replicate the study in other environments. There is also a clear need to validate our findings, e.g., through manual labelling or through other platforms such as LinkedIn. Finally, following the presented results, we would like to explore additional potential applications of this work, e.g., the automatic detection of good photographers on the Web.
|
http://arxiv.org/abs/2307.00194v1
|
20230701020349
|
A Requirements-Driven Platform for Validating Field Operations of Small Uncrewed Aerial Vehicles
|
[
"Ankit Agrawal",
"Bohan Zhang",
"Yashaswini Shivalingaiah",
"Michael Vierhauser",
"Jane Cleland-Huang"
] |
cs.SE
|
[
"cs.SE"
] |
A Requirements-Driven Platform for Validating Field Operations of Small Uncrewed Aerial Vehicles
Ankit Agrawal
Bohan Zhang
Yashaswini Shivalingaiah
Department of Computer Science
Saint Louis University
St Louis, MO, USA
[email protected]
Michael Vierhauser
LIT Secure and Correct Systems Lab
Johannes Kepler University Linz
Linz, Austria
[email protected]
Jane Cleland-Huang
Computer Science And Engineering
University Of Notre Dame
Notre Dame, IN, USA
[email protected]
===============================================================================================================================================================================================================================================================================================================================================================================================================================
Flight-time failures of small Uncrewed Aerial Systems (sUAS) can have a severe impact on people or the environment. Therefore, sUAS applications must be thoroughly evaluated and tested to ensure their adherence to specified requirements, and safe behavior under real-world conditions, such as poor weather, wireless interference, and satellite failure. However, current simulation environments for autonomous vehicles, including sUAS, provide limited support for validating their behavior in diverse environmental contexts and, moreover, lack a test harness to facilitate structured testing based on system-level requirements. We address these shortcomings by eliciting and specifying requirements for an sUAS testing and simulation platform, and developing and deploying it. The constructed platform allows sUAS developers to define the operating context, configure multi-sUAS mission requirements, specify safety properties, and deploy their own custom sUAS applications in a high-fidelity 3D environment. The monitoring system collects runtime data from sUAS and the environment, analyzes compliance with safety properties, and captures violations. We report on two case studies in which we used our platform prior to real-world sUAS deployments, in order to evaluate sUAS mission behavior in various environmental contexts. Furthermore, we conducted a study with developers and found that the platform simplifies the process of specifying requirements-driven test scenarios and analyzing acceptance test results.
Safety Assurance, Requirements Specification, Small Uncrewed Aerial Systems, Digital Shadow, Cyber-Physical Systems
§ INTRODUCTION
With the rise of artificial intelligence, small Uncrewed Aerial Systems (sUAS) are imbued with increasingly complex decision-making capabilities, in order to perform missions autonomously in diverse environmental conditions <cit.>.
As failures during operation can lead to severe accidents that are harmful to people, physical structures, or the environment, it is essential to specify safety requirements, design effective solutions, and establish a robust testing process, infrastructure, and corresponding monitoring tools for validating that the system satisfies its requirements prior to deployment <cit.>. Environmental conditions, and their diverse combinations, especially those at the boundaries of an sUAS' operating capacity, can impact the behavior of an sUAS in unpredictable ways, and therefore, many accounts of sUAS flight failures due to problems such as radio interference <cit.>, or high winds <cit.> have occurred. This, in turn, means that functional tests must be executed under diverse conditions. For example, the requirement that “An sUAS shall complete a flight composed of multiple waypoints in wind gusts of 23mph without colliding with stationary objects, the terrain, or other aircraft” needs to be operationalized within diverse test scenarios that specify the specific flight details, as well as additional environmental attributes such as wind direction, temperature, precipitation, visibility, and geographical information.
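As an illustration, the sketch below shows how such a requirement might be operationalized as a machine-readable test scenario; all field names and values are assumptions made for the example, not the schema of any particular tool.

```python
# Illustrative test scenario for the wind-gust requirement quoted above.
# Every field name and value is an assumption for the sake of the example.
scenario = {
    "requirement": "Complete a multi-waypoint flight in 23 mph wind gusts "
                   "without colliding with objects, terrain, or other aircraft",
    "environment": {
        "region": {"lat": 41.700, "lon": -86.240, "radius_m": 1500},
        "wind": {"speed_mph": 23, "gusts_mph": 23, "direction_deg": 270},
        "precipitation": "none",
        "visibility_m": 10000,
        "time_of_day": "14:00",
    },
    "suas": [
        {"id": "uav1",
         "home": {"lat": 41.701, "lon": -86.241, "alt_m": 0},
         "waypoints": [(41.702, -86.243, 30), (41.704, -86.245, 30)]},
    ],
    "safety_properties": [
        {"type": "min_separation", "horizontal_m": 10, "vertical_m": 5},
        {"type": "geofence", "max_alt_m": 120},
    ],
}
```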
Performing rigorous software verification and validation (V&V) on Cyber-Physical Systems (CPS) in general, and sUAS in particular, is a time-consuming process that typically involves a combination of simulations and real-world testing to validate the correctness of system behavior under a range of conditions <cit.>. Furthermore, many tests cannot easily be conducted on physical sUAS, especially those that target, or even exceed operational boundaries, such as flying in extreme weather conditions or in (too) close proximity to objects or humans. However, critical differences between the simulation and the real-world environment can result in substantial back-and-forth testing between physical testing sites and developers, extending project development times and increasing costs. This problem is primarily attributable to (a) lack of tool support for developing realistic scenario simulations, (b) difficulties in identifying and/or modeling edge-case scenarios in the real-world environment, (c) isolated simulation environments that fail to consider interactions with sensors and physical devices used by humans to interact with the system, and (d) the lack of a structured process and platform for specifying, executing, analyzing, and testing diverse system requirements.
In practice, for the domain of sUAS, developers currently rely on simulations using 2D maps <cit.> or 3D simulation environments, such as Gazebo <cit.> or AirSim <cit.>.
Gazebo <cit.>, for example, facilitates sUAS simulations with limited automated support for incorporating realistic landscapes <cit.> and weather conditions, while AirSim provides high-fidelity weather simulations, but it lacks realistic flight conditions such as simulating real-world airspace restrictions, and mission-specific environmental elements, such as simulating a drowning person in a river to support search-and-rescue test scenarios <cit.>. These existing simulation environments rely more upon an ad-hoc, trial-and-error testing approach with limited support for specifying real-world test scenarios <cit.>, and provide even less support for a requirements-driven test environment in which diverse scenarios are generated and executed for given requirements.
In this paper, we address this challenge by presenting a new platform for supporting the creation of requirements-driven test scenarios. We employed design science <cit.> to collect and specify clear design and development objectives for the platform (discussed in Section <ref>), based on key challenges identified during simulation testing, in particular those related to simulating volatile, high-fidelity environmental conditions that affect sUAS behavior (discussed in Section <ref>). Our platform allows developers or testers to specify environmental conditions, configure sUAS sensor capabilities, and specify test properties to validate system-level requirements. Based on these specifications, the platform generates the simulation environment and deploys sUAS with the configured sensor information. The platform automatically monitors the specified test properties for violations and generates an acceptance test report containing detailed simulation analytics. By using the platform, developers can investigate the capabilities and limitations of their sUAS applications against system-level requirements prior to field deployment.
The contributions of this paper are therefore as follows:
(1) We analyze real-world sUAS incidents to identify common points of failure and subsequently specify requirements for the validation platform. Our aim is to enable sUAS developers to effectively validate critical mission behaviors within realistic contexts.
(2) We describe the set of safety-related, first-order runtime properties used by to validate requirements for correct flight operations of sUAS.
(3) We derive a structured end-to-end process for specifying the various elements of test scenarios, performing multi-sUAS fuzz testing activities, analyzing simulation results, and evaluating the platform's fidelity with respect to physical sUAS systems under test. This process advances the state-of-the-art in requirements-driven simulation platforms for autonomous vehicles.
To evaluate our platform, we conducted a case study with two real-world cases in which the platform was used to validate requirements for our own drone system, composed of multiple autonomous and collaborating sUAS. DroneResponse <cit.> assumed the role of the Drone System under Test (DSuT). In addition, we performed a preliminary study to evaluate sUAS developers' perception of the platform and its suitability for designing requirements-based tests.
The remainder of this paper is organized as follows. Section <ref> provides an overview of our platform, and Section <ref> further describes the set of features vital for an sUAS testing platform, including the platform's support for configuring the environment and runtime properties.
Section <ref> comprehensively describes the platform architecture and the process by which high-fidelity, realistic environmental conditions and the respective mission-specific tests are specified and executed.
We outline our evaluation setup in Section <ref> and present our findings in Sections <ref> and <ref>. Finally, we discuss threats to validity and related work in Sections <ref> and <ref>.
§ OVERVIEW OF
Testing and validating a CPS against its requirements requires more than “just” simulating its behavior in a virtual space, and involves systematic requirements elicitation, hardware and software testing including the definition and analysis of safety properties, and integration of the user and human interactions <cit.>. Therefore, our primary objective in developing the platform is twofold.
* DO1: Develop a simulation approach that goes beyond providing simplistic pass/fail test results.
* DO2: Automatically execute diverse test scenarios under realistic 3D environments to effectively detect safety-related issues in sUAS applications.
To accomplish these primary development objectives, we identified several features to incorporate in . First, to validate a set of requirements, developers need the ability to specify, execute, and validate complex test scenarios that include (1) realistic scenes with buildings, trees, and other landmarks, (2) interactive environments that include, for example, roads, people, fires, and traffic accidents, and (3) environmental factors such as weather, wireless interference, and satellite availability.
Second, the environment needs a diverse selection of high-fidelity sUAS with sensors and actuators such as controllable cameras and other sensors; all of which provide APIs for interfacing with the test applications. For the simulation to support diverse sUAS applications, it needs to allow a tester to deploy their own sUAS applications within the test environment with minimal effort. Third, the test platform needs to facilitate runtime monitoring<cit.>, by providing monitors that enable users to specify properties and constraints and to subsequently monitor the runtime behavior of individual sUAS, their interactions with each other, with human actors involved in a mission, and their environment. Finally, bringing all of this together, the environment must provide users with the means to specify test scenarios for specific requirements, and enact those tests in the defined environment whilst simultaneously selecting relevant constraints to be checked during the running simulation.
We have designed the platform to address these requirements, and provide a high-level overview in Figure <ref>.
In Step 1, users develop their own single- or multi-sUAS applications using supported flight controllers (with current support for PX4 <cit.> and Ardupilot <cit.>), and then specify test cases for validating requirements, based on real-world scenarios such as multi-sUAS area search.
In Step 2, the users configure the environmental conditions (e.g., weather, signal, geographical regions) in which they need to execute their tests as per system-level requirements as well as safety properties (e.g., “minimum horizontal and lateral separation distances between sUAS”) that sUAS are expected to maintain during the mission. The combination of the sUAS application, prescribed mission, and environment and safety properties specifications constitute a Test Scenario.
In Step 3, the simulation engine creates realistic environmental conditions including weather conditions, and realistic terrains and landscapes, deploys the desired number of sUAS in the environment, and configures sensor models of each sUAS as per requirements. When activated, the fuzzy test generator component of the platform generates multiple test scenarios by fuzzing the user-provided environmental configuration within the given range of values. The objective of this component is to examine the robustness of the sUAS application by analyzing the extent to which the system is able to perform as expected in adverse environmental conditions. Finally, after generating test cases, the platform simulates both the primary configured test and its fuzzy versions.
In Step 4, data is collected from the sUAS and the environment throughout the mission, and the monitoring system continually analyzes the data for violations. Finally, in Step 5, produces an analytics report that contains analysis of simulation results from each test case, comparisons across all fuzzy test cases, and a list of detected violations.
The key novelty of the platform is that it allows developers to easily specify test scenarios in combination with environmental conditions, sUAS capabilities, and a set of monitorable properties indicative of system-level requirements satisfaction conditions. Developers can deploy and validate their own sUAS applications under the specified test scenario prior to deployment in the physical world. Furthermore, the platform supports fuzzy testing <cit.> of system-level requirements, allowing developers to identify the operating boundaries of their application under adverse environmental conditions. Additionally, it consolidates and analyzes runtime information from diverse sources in order to provide insights into passed and failed test properties for further analysis and error diagnosis (Step 6).
In the following sections, we describe the configuration properties, platform, and architecture in more detail.
§ ENVIRONMENTAL CONFIGURATION
As input to the overall design process, we sought to identify a common set of configurable environmental factors relevant for validating sUAS applications. We used a deductive (top-down) approach based on a set of well-known sUAS problems related to environmental terrain <cit.>, GPS denial <cit.> signal interference <cit.>, weather and lighting <cit.>, human-operations <cit.>, and sUAS physical failures <cit.>. To identify detailed information about each of these categories, including specific types of failures and potential ways of monitoring them, we further conducted a search for sUAS incident reports published by news services and regulatory bodies and reviewed scientific publications related to sUAS safety requirements. As part of our incident investigation, we searched for scientific publications on Google Scholar and Web of Science for “sUAS/UAV safety, accidents, incidents” and examined sUAS incident reports as depicted in Table <ref>.
§.§ Configuration Requirements
The literature and incident analysis identified numerous ways in which environmental factors played a key role in sUAS incidents. We categorize them into six groups, provide examples for each, and summarize the key factors.
Geographical Locations and Terrain: sUAS are deployed on diverse missions across vastly different types of terrain including open farmland, forests, urban areas with tall buildings, and mountainous terrain. Other areas include protected airspace (e.g., in close proximity to airports or over national parks, and prisons). Reports document diverse incidents including an sUAS collision with a Hot Air Balloon over Boise, Idaho whilst flying without authorization in controlled airspace <cit.>, a goose hit in Sweden <cit.>, a construction crane in Kent, UK <cit.>, and a mountain in Colorado <cit.>. The simulation environment, therefore, needs to support diverse scenarios including stationary and moving objects, diverse types of terrain, and restricted airspace to allow test cases that validate whether an sUAS can complete its mission successfully whilst complying with all legal airspace regulations. Configurable properties, therefore, include the operating terrain and scene; airspace restrictions (regulated airspace and no-fly zones automatically retrieved from service providers, with the ability to upload additional regions of prohibited or constrained airspace); and obstacles (stationary objects, e.g., cranes, and movable objects such as vehicles).
Signal Loss and sUAS Communication: Loss of communication between the sUAS and the remote pilot (e.g., <cit.>) can occur if the sUAS exceeds its range capabilities, is obstructed by an obstacle, or when electromagnetic interference otherwise disrupts the data link <cit.>. It can impact telemetry between the sUAS and a handheld radio controller, or software-based Ground Control System (GCS) (e.g., MissionPlanner, QGroundControl <cit.>), or other forms of communication (e.g., WiFi or Mesh Radio) between ground-based software systems and onboard software applications which are wired to the onboard flight controller. Loss of signal alone does not cause a crash, as most sUAS automatically enter failsafe modes, such as return-to-launch (RTL), when communication is disrupted; however, as airspace becomes increasingly crowded, simple RTL commands could themselves cause incidents <cit.>. Configurable communication-loss models, therefore, need to be integrated into the platform to allow users to test sUAS responses to unexpected loss of the data link.
GPS Deprivation: Most sUAS rely upon GPS for geolocation purposes, and as a result, loss of reliable GPS leads to sUAS crashes <cit.>. Geolocation accuracy typically increases with the number of satellites, returning up to 2 meters of accuracy with 15 or more satellite connections, but decreasing rapidly as the number of connections decreases. sUAS systems can compensate for geolocation uncertainty, for example, by maintaining greater distances from terrain, buildings, or other sUAS, or by using sensors to help prevent collisions. The environment, therefore, needs configurable GPS models that are able to predict GPS accuracy and/or inject loss-of-satellite faults into the test environment.
Weather and Lighting Factors: Weather conditions greatly impact the functioning of both the sUAS' perception of the environment and its control algorithms <cit.>. Wind speed is often cited as a cause of sUAS crashes, because high wind speed in a particular direction can negatively affect the ability of the sUAS to maintain its desired flight path <cit.>. Furthermore, rain, fog, and snow, as well as low lighting conditions in the environment, impair the ability of computer vision models to interpret the environment correctly <cit.>. Turbulent weather has resulted in several incidents, including failure to hold position <cit.> and dislodged payloads <cit.>. Therefore, the platform must support configuration and simulation of various weather and lighting factors, including wind direction, speed, and gusts at different altitudes, precipitation types and levels, visibility, and lighting conditions.
Human Interactions: Numerous sUAS accidents can be attributed to human-related errors, caused by recklessness, lack of training, or poor user interface design <cit.>. For example, in the collision with the Hot Air Balloon <cit.>, the pilot recklessly overrode warnings that he was entering prohibited airspace. In one accident, we experienced sudden and erratic altitude swings during takeoff, which required the operator to perform an impossibly complex series of actions in order to gain manual control before the sUAS plunged to the ground. The platform must support human-in-the-loop testing by allowing users to connect their interactive devices, such as Radio Controllers, to the simulation environment <cit.>.
Sensor and Hardware Issues: While the primary aim of the platform is to test the safe operation of sUAS software applications and deployments, hardware failures, sometimes confounded by environmental factors, are often the primary cause of, or a clear contributor to, an accident <cit.>. The most common faults include loss of signal, excessive vibration, compass interference, battery problems <cit.>, and motor failures <cit.>. Dramatic hardware failures that cause complete and sudden loss of flightworthiness are out of the scope of our current work. However, the platform can test software solutions for detecting, recovering from, and/or preventing hardware faults. For example, given an event that triggers an RTL failsafe mechanism, can the sUAS return home without colliding with other sUAS? To create this type of test environment, the platform must support capabilities that generate sUAS failures at runtime. Test scenarios should define the types of failures (e.g., vibration, loss of signal) and their frequencies. While this is out of scope for the current paper, initial work in this area shows that it is feasible to accomplish <cit.>.
These categories and their associated incidents provide an initial set of guidelines for specifying the platform's configuration requirements, designing a template for test specifications, and identifying a set of monitorable properties.
§.§ Configuration Properties
By analyzing the reported environmentally-related sUAS accidents and incidents, we identified an initial set of relevant contributing factors, as depicted in Table <ref>. These attributes need to be configurable in order to support meaningful test scenarios. Each parameter is labeled as fully implemented, partially implemented, or planned for future releases. The Scene category allows developers to define specific regions of operation that match their actual deployments. Users can also specify various parameters related to sensors and weather conditions. Sensor and hardware faults can be represented by various forms of fault models, and human interactions can be configured by mapping sUAS controls (e.g., the radio controller) to an API associated with each sUAS.
§ ARCHITECTURE AND TESTING PROCESS
In this section, we provide a comprehensive overview of the platform and its components, as depicted in Figure <ref>, and the process for executing a test scenario.
The platform comprises three main components: (A) a Test Scenario Configurator and Generator that accepts user-defined specifications to configure the environment and test properties and generate variations for fuzzy testing, (B) a Simulation Environment with sUAS physics engines and a high-fidelity simulated world, and (C) a Runtime Monitoring Environment that collects and analyzes data during test execution to determine the success or failure of acceptance tests and provide mission analytics visualization for sUAS application diagnosis.
§.§ Test Scenario Configurator
Developers configure system-level tests by specifying environmental conditions, sUAS sensor configuration, and test properties based on specified system-level requirements.
§.§.§ Configuring the Simulation Environment
Simulation of realistic environmental conditions is an important part of CPS testing. This includes wind conditions (velocity, direction), geographical regions (urban, densely populated, rural), and time of day (bright sunlight, dark), all of which impact sUAS flight trajectories and mission execution. Additionally, the test scenario scope can be configured according to environmental requirements for a particular mission, such as a drowning victim in a particular river at a specific geographical location or a burning building in a particular area. Based on these inputs, the platform establishes the simulation environment. In the following step, the user configures the characteristics of the sUAS with which they wish to carry out missions.
§.§.§ Configuring and Deploying multiple sUAS
Multi-sUAS systems consist of heterogeneous sUAS; therefore, developers must be able to provide specifications for all sUAS participating in a test scenario.
This includes testing sUAS with specific hardware setups, e.g., sensor configurations and flight missions. As part of this, the platform allows configuring the number of sUAS that are part of a mission, the sensor specifications of each sUAS, and their home geolocations in the simulation environment. Based on this configuration, the platform deploys multiple sUAS with the defined sensor configurations at the specified locations in the simulated environment; a minimal sketch of such a deployment loop is shown below. Since users are able to “bring their own sUAS application”, the mission to be executed could simply be planned using an off-the-shelf application, such as QGroundControl <cit.>, or could be developed in a customized sUAS application <cit.>.
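The following sketch illustrates how such a multi-sUAS deployment loop could look when driven through AirSim's Python client; the fleet description is hypothetical, and it assumes the vehicles have already been declared in the simulator's settings.

```python
import airsim

# Hypothetical fleet description; in AirSim, the vehicles themselves are
# typically declared in settings.json, so this only drives already-known vehicles.
fleet = {
    "uav1": {"camera_pitch_deg": -45},
    "uav2": {"camera_pitch_deg": -30},
    "uav3": {"camera_pitch_deg": -90},
}

client = airsim.MultirotorClient()
client.confirmConnection()

for name in fleet:
    client.enableApiControl(True, vehicle_name=name)  # hand control to the test harness
    client.armDisarm(True, vehicle_name=name)          # arm the motors

# Take off all vehicles concurrently, then wait for every takeoff to finish.
futures = [client.takeoffAsync(vehicle_name=name) for name in fleet]
for future in futures:
    future.join()
```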
§.§.§ Specifying Test Properties
Determining mission success is one major aspect in which the platform significantly differs from current sUAS simulation environments and testing processes, in which tests are deemed to have passed if the user observes correct behavior during simulation. In our platform, we replace this ad-hoc process with a structured and well-defined way of automatically determining whether a test (or set of tests) has passed or failed.
Success criteria are defined via a set of test properties that must hold true throughout the entire mission in order for the test to be considered to have passed. The test properties are directly derived from the requirements. Therefore, the third and last step in the configuration process involves configuring the test properties based on the safety requirements of the system, for example, specifying the safe distance that two sUAS shall always maintain, or specifying the safe landing spots for each sUAS during the mission.
The platform leverages its runtime monitoring environment, which collects sUAS sensor data at runtime, to evaluate whether the safety properties specified by the user hold true during the simulation. The monitors are further discussed in Section <ref>.
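A minimal sketch of how such a separation property could be encoded and checked is shown below; the threshold and position format are illustrative assumptions rather than the platform's actual property language.

```python
import math

# Illustrative safety property derived from requirements; the threshold is an assumption.
MIN_SEPARATION_M = 10.0

def horizontal_distance(p1, p2):
    """Horizontal Euclidean distance (metres) between two (x, y, z) positions in a local frame."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def check_separation(positions):
    """Return all violating pairs given a {vehicle_name: (x, y, z)} snapshot."""
    violations = []
    names = list(positions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            distance = horizontal_distance(positions[a], positions[b])
            if distance < MIN_SEPARATION_M:
                violations.append((a, b, distance))
    return violations

# Example snapshot: uav1 and uav2 are too close, uav3 is safe.
print(check_separation({"uav1": (0, 0, -30), "uav2": (4, 3, -30), "uav3": (50, 0, -30)}))
```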
§.§.§ Test Scenario Execution & Fuzzy Test Generation
Before the test scenario is deployed within the simulation engine for execution, the platform provides support for test case fuzzing <cit.>. This step facilitates testing sUAS applications in uncertain or extreme environmental conditions, without the need to manually create hundreds of test scenarios with slightly different value combinations. Fuzzy testing is intended to determine under which realistic conditions (e.g., maximum wind velocity) the system-level requirements are satisfied. The test fuzzer component generates multiple copies of user-specified test scenarios and manipulates parameter values to explore sUAS reliability in uncertain environments. Users can specify which environmental configuration parameters to fuzz during test execution. For instance, if a user selects wind velocity for fuzzing, the platform will increase the wind velocity exponentially in each fuzzed scenario to determine the wind velocity at which the sUAS becomes unsafe to operate in the real world. Additionally, the user must specify the maximum value of the parameter in order to create a termination condition for fuzzy testing. Thus, the architecture and the testing process facilitate the testing of sUAS applications under uncertain conditions without requiring developers to anticipate and specify complex, unexpected real-world uncertainties.
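The sketch below illustrates the exponential fuzzing policy described above, applied to an arbitrary numeric parameter of a scenario dictionary; the function name and growth parameters are assumptions for the example.

```python
import copy

def fuzz_parameter(base_scenario, path, start, factor, maximum):
    """Yield scenario copies in which one numeric parameter grows exponentially.

    `base_scenario` is a nested dict (as in the earlier scenario sketch), `path`
    is a tuple of keys locating the parameter, and the growth policy (multiply
    by `factor` until `maximum`) mirrors the exponential fuzzing described above.
    """
    value = start
    while value <= maximum:
        scenario = copy.deepcopy(base_scenario)
        node = scenario
        for key in path[:-1]:
            node = node[key]
        node[path[-1]] = value
        yield scenario
        value *= factor

# Example: fuzz the wind speed of the earlier scenario sketch from 2 mph up to 36 mph.
# fuzzed = list(fuzz_parameter(scenario, ("environment", "wind", "speed_mph"), 2, 2, 36))
```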
§.§ The Simulated Environment
§.§.§ Environment Simulator and Executor
The environment is structured around accurate 3D geospatial data containing real-world landmarks, such as streets, buildings, bridges, power lines, and trees. Further, it needs to be augmented by mission-specific objects and phenomena which constitute environments for specific sUAS deployments, such as people, vehicles, fires, floods, and avalanches. Therefore, in addition to simulating real-world locations, the platform provides a dedicated Scene Animator that serves as a collection of 3D animations representing typical sUAS mission deployments, such as a timed simulation of a drowning person in a river or a structural fire.
The Simulation Initialization Manager retrieves user-configured and fuzzed test scenarios from a Scenario Database to initialize the simulation environment's initial conditions, including number of sUAS, and weather conditions.
§.§ Runtime Monitors and Reports for Analysis
Runtime monitoring is the process of observing and analyzing the behavior of a software system during its execution <cit.>. In order to support critical analysis of simulation results, the platform includes its own runtime monitoring environment to collect data during simulation and check for safety requirement violations. The monitoring component collects data from three sources: sUAS sensors, the environment (e.g., wind speed), and human interactions.
In order to provide “meaningful” analysis and acceptance test results, it is necessary to supply additional information that goes beyond a simple pass or fail for a specific test. In case of undesired behavior, sUAS developers must understand how, why, and where a particular test scenario failed to satisfy the requirement. For this purpose, the platform takes runtime information and consolidates it for each sUAS in the environment, including any changes to the 3D environment, for software analysis purposes.
§ EMPIRICAL EVALUATION
To evaluate whether the design of the platform accomplishes the first development objective, DO1, as discussed in Section <ref>, the applicability of the platform to real-world sUAS applications, and its support for testing system-level requirements, we (i) performed a case study following the guidelines described by Runeson and Hoest <cit.> for two sUAS use cases for which we needed assurances that our DSuT would perform safely in the real world; and (ii) conducted a perception evaluation study <cit.> with sUAS developers, asking them to configure a multi-sUAS test scenario based on given requirements. We explore RQ1, defined as follows:
∙ RQ1:
How effective is the design of the platform for defining and executing realistic test scenarios for system-level requirements, and how do sUAS developers perceive the overall simulation testing process when using it?
We addressed this RQ in two ways. First, to evaluate the expressivity and the design of the platform, we applied it to two real-world sUAS application scenarios, creating a series of acceptance tests for each of them, and specifying environmental features and respective monitors relevant to the tests to determine the feasibility of our approach.
Second, we asked software engineers and sUAS developers to configure their own test scenarios based on given real-world requirements using the platform. After finishing the scenario configuration and the analysis of the acceptance test report, we asked the developers a series of follow-up questions to understand how the platform is perceived by end-users and to assess the quality and usefulness of the generated test reports.
To evaluate whether the design of the platform is capable of detecting safety-related concerns in sUAS applications and whether it accomplishes the second development objective (DO2), we investigate RQ2 as follows:
∙ RQ2:
To what extent can the platform detect and report safety-related issues that occur during tests?
We addressed this RQ by running a series of simulations in our platform, based on the aforementioned tests, and by seeding faults causing failures in the system to assess whether the platform is capable of detecting and documenting them during the simulation.
§.§ Prototype Implementation
We implemented an initial prototype to support Pixhawk/PX4 application tests for a number of environmental configuration options and monitorable properties. The configurable properties and features currently supported are marked as implemented, or partially implemented, in Table <ref>.
We developed a web application using the React web framework that allows users to interact with the platform. The web application has a wizard that guides users through the process of creating a test scenario to validate a specific requirement, supported by a dashboard that displays the consolidated simulation results for users to analyze.
Through the web application, users can configure their sUAS, environment and test properties, which are then sent to the backend server written in Python using Flask over HTTP. Using the configuration provided by the user, the back-end server generates the fuzzy test scenario and instantiates the simulation environment.
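A minimal sketch of such a scenario-submission endpoint is shown below; the route name and payload handling are assumptions for illustration, not the actual backend API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
SCENARIOS = []  # stands in for the scenario database used by the simulation engine


@app.route("/scenarios", methods=["POST"])
def submit_scenario():
    """Accept a JSON test scenario from the web front end and queue it for simulation."""
    scenario = request.get_json(force=True)
    SCENARIOS.append(scenario)
    # In the real system this would also trigger fuzzing and simulation initialization.
    return jsonify({"scenario_id": len(SCENARIOS) - 1, "status": "queued"}), 201


if __name__ == "__main__":
    app.run(port=5000)
```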
Our simulation environment was implemented using the Unreal Engine <cit.>, integrating open-source digital shadow models of the real world by using Cesium for Unreal <cit.>. We simulated sUAS using AirSim <cit.>, an open-source, cross-platform simulator, and used AirSim's APIs to simulate weather conditions in the environment. Furthermore, we implemented runtime monitors as Python modules that collect (using AirSim's APIs) and validate data against the test properties specified by the user. Additionally, interaction data is collected through input devices such as handheld controllers, keyboards, and game controllers.
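To make the weather configuration and data collection concrete, the sketch below uses AirSim's Python client; it assumes a recent AirSim build that exposes simSetWind and two vehicles named uav1 and uav2, all of which are assumptions made for the example.

```python
import time

import airsim

client = airsim.MultirotorClient()
client.confirmConnection()

# Apply the (assumed) scenario weather: rain intensity in [0, 1] and a constant wind vector.
client.simEnableWeather(True)
client.simSetWeatherParameter(airsim.WeatherParameter.Rain, 0.25)
client.simSetWind(airsim.Vector3r(8.0, 0.0, 0.0))  # roughly 8 m/s wind along +X

# Minimal monitoring loop: sample vehicle positions for later property checking.
log = []
for _ in range(100):
    positions = {}
    for name in ("uav1", "uav2"):
        state = client.getMultirotorState(vehicle_name=name)
        pos = state.kinematics_estimated.position
        positions[name] = (pos.x_val, pos.y_val, pos.z_val)
    log.append((time.time(), positions))
    time.sleep(0.5)
```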
§.§ Drone System Under Test
Both use cases were enacted using our own semi-autonomous, multi-sUAS system, which includes a Ground Control Station (GCS) supported by a suite of microservices, onboard autonomous processing capabilities, and diverse GUIs to support human interactions. The Onboard Pilot acts as an application layer for the PX4 autopilot stack which includes flight control software and hardware for executing plans. Its internal State Machine receives mission specifications and instantiates itself dynamically for the current mission.
§.§ Use Case Driven Test Scenarios
We conducted experiments on two real-world use cases. The first use case represents the deployment of multiple sUAS at an active airbase to collect long-distance imagery of people at diverse pitches and altitudes, and the second use case involved a live search-and-rescue demonstration that we conducted in August 2022 in conjunction with a local Fire Department's water-rescue team. Each use case is described in detail, including its environmental configurations, test properties, and unique requirements.
∙ UC1 – Video Collection at Pitch and Range:
For the first use case, computer vision researchers provided requirements for collecting aerial images at specific camera pitches and custom flying patterns. The engineering team developed an sUAS application to deploy three sUAS at the test location of an active airbase, tasked with collecting aerial imagery of people from diverse distances, pitches, and angles, while maintaining minimum separation distance between all sUAS.
∙ UC2 – Search-And-Rescue:
For our second case, we deployed four Hexacopter sUAS in support of a live search-and-rescue demonstration in collaboration with emergency responders. The public nature of the demonstration, and the fact that it was conducted during the summertime at a crowded beach area, required rigorous testing before deployment. The sUAS were assigned a search area and dispatched to search, utilizing onboard computer vision, and streaming video when a potential sighting was made. In particular, we wanted to simulate the exercise in advance, to evaluate the impact of windy conditions, and to ensure that the critical safety properties held throughout the mission.
§ RQ1 – TEST SCENARIO DEFINITION
§.§ Ability to configure real-world requirements
We applied the platform to the planning and validation of both use cases, using our own drone system as the DSuT. We used the platform's interface to configure test scenarios that included digital shadow models of the designated area, no-fly zones, wind, and lighting conditions. Tables <ref> and <ref> summarize the detailed test configurations for UC1 and UC2. In addition, we identified critical safety requirements for each deployment and translated them into test properties supported by the platform.
The missions flown in simulation were specified entirely using our own DSuT. Tests were then executed successfully and the acceptance test reports were generated for analysis. However, as depicted in Table <ref>, we have only integrated a subset of the potential configuration parameters in the current prototype. For example, we have not yet integrated any form of sUAS fault injection, GNSS (satellite) failures, or wireless network failures. We therefore deliberately did not attempt to configure the environment with these properties and leave their inclusion to the next phase of our work. The configuration of real-world scenarios showed that the platform allows configuring sUAS, environmental settings, and tests based on system-level requirements, simulating sUAS under realistic conditions, and analyzing acceptance test results.
§.§ Perception-based Evaluation with Practitioners
We conducted a preliminary evaluation, under an approved study protocol, to assess the usability and end-user perception of the platform for creating test scenarios, configuring test properties, and interpreting simulation results.
Study Setup & Execution: We leveraged our professional networks to recruit five software engineers with experience in requirements engineering and testing. The study was divided into three phases: learning, task performance, and interviewing. In the learning phase, we provided an overview of the objectives of the platform and demonstrated the web interface. In the task performance phase, we assigned a requirement from Table <ref> and asked participants to configure test scenarios and analyze the acceptance test report. We used a think-aloud protocol throughout this phase to obtain insights into user opinion and interaction. In the final phase, we asked participants to reflect on the usability of our solution using a questionnaire. Table <ref> shows the industrial software engineering experience of the study participants and their five-point Likert scale ratings on the ease of test scenario configuration, analyzing acceptance test reports, and examining mission analytics information provided through graphical plots. Each study session took approximately 30 minutes.
Results: The user interface for configuring the multi-sUAS test scenario was intuitive and easy to use for all participants. Participant P3 gave very positive feedback, suggesting that the platform is a valuable tool for simulation testing. P4, who has been actively working on sUAS applications for three years, emphasized the uniqueness of its capabilities, stating that “I have not seen any other sUAS simulation and testing platform that encourages developers to conduct simulation testing based on system-level requirements”. During the study, participants also identified several issues. P1 reported difficulty specifying the home geolocation of each sUAS, which was time-consuming as they had to copy and paste geocoordinates from Google Maps. P1, P2, and P3 also recommended visualizing the location of multiple sUAS on a 2D map to understand their relative proximity.
During the simulation, participants were interested in observing the flight path of all sUAS in the 3D environment, but found it challenging when the sUAS were far apart. This was due to the view port's inability to cover the entire area occupied by all the sUAS simultaneously, even when adjusted and zoomed in/out. This highlights the need for a solution that supports multi-viewports based on sUAS location, allowing users to observe multiple sUAS even when they are far apart.
After observing the simulation, participants analyzed the generated acceptance test report. P5, with more than 6 years of experience in CPS development, found the auto-generated acceptance test report and accompanying plots depicting deviations in the flight path under varying wind velocities to be a “game changer”.
P5 found the visualization tools to be extremely helpful in interpreting simulation results and recommended that the feature be integrated into the Continuous Integration (CI) Pipeline of their sUAS development environment.
P3 recommended that future improvements should include explanations for deviations from flight paths to help developers understand the factors affecting flight paths.
In response to RQ1, we were able to apply the platform to our two use cases and demonstrate its practical applicability in real-world testing scenarios and its integration into a thorough software testing process. We were able to specify the safety requirements, connect our own DSuT with minimal effort, execute test cases, and analyze the results provided by the acceptance tests to further improve our application. We also found that developers were able to configure complex test scenarios easily and acknowledged the usefulness of the acceptance test reports. These results demonstrate the practical applicability of our platform in real-world testing and suggest that we achieved our first objective (DO1) of developing the platform, as described in Section <ref>.
§ RQ2 – TEST SCENARIO EXECUTION
To address RQ2, we executed a series of tests for each of the use cases. We deliberately included test cases that we expected to fail, in order to evaluate the platform's ability to raise errors appropriately. Here, we report one example from each of the use cases. In Figure <ref> we plot flight logs from simulations in winds of 23mph and 30mph. The first test passed, whilst the second one (at 30mph) failed when the sUAS was blown into the lake. Second, in Figure <ref> we show several no-fly zones at the airport. In this example, a flight triggered a failed test because the flight path crossed a temporary no-fly zone, violating C1.2. These and many other examples demonstrated that the platform was able to accurately differentiate between passed and failed test cases.
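A simple way to check such a geofence violation offline is sketched below using shapely; the polygon coordinates and the track format are fabricated for illustration.

```python
from shapely.geometry import Point, Polygon

# Hypothetical temporary no-fly zone as a lon/lat polygon (coordinates are made up).
no_fly_zone = Polygon([
    (-86.2450, 41.7010), (-86.2430, 41.7010),
    (-86.2430, 41.7030), (-86.2450, 41.7030),
])

def first_no_fly_violation(track):
    """Return the first (lon, lat) sample of a flight track inside the zone, or None."""
    for lon, lat in track:
        if no_fly_zone.contains(Point(lon, lat)):
            return (lon, lat)
    return None

# Example track: the second sample falls inside the zone and would be flagged.
print(first_no_fly_violation([(-86.2500, 41.7020), (-86.2440, 41.7020), (-86.2400, 41.7020)]))
```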
Identification of Operating Boundaries: Incorporating fuzzy testing in the framework enables the platform to automatically compare simulation results across test scenarios, thereby testing an entire range of environmental conditions for a single requirement. To evaluate the practical application of this, we used AirSim's default algorithm to fly a UAV in a circular trajectory at different velocities, created test scenarios, and configured the platform to fuzz the wind velocity, with a maximum wind velocity of 18 m/s. The sUAS flew at velocities of 6, 9, and 13 m/s under no wind, with additional simulations conducted at 10 and 18 m/s wind velocity. Results were analyzed to determine the sUAS's ability to withstand varying wind conditions. Figure <ref> shows a series of plots generated by the monitoring environment to visualize how increasing wind velocity impacts the sUAS's flight path. These auto-generated plots provided insight into the ability of the sUAS to fly circular missions under varying wind conditions.
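The kind of comparison behind these plots can be sketched as follows; the flight tracks here are fabricated noisy circles purely to keep the example self-contained, and the deviation metric is an assumption rather than the platform's actual analytic.

```python
import matplotlib.pyplot as plt
import numpy as np

def radial_deviation(track_xy, center, radius):
    """Mean absolute deviation (m) of a flown track from the commanded circle."""
    xy = np.asarray(track_xy) - np.asarray(center)
    return float(np.mean(np.abs(np.hypot(xy[:, 0], xy[:, 1]) - radius)))

# tracks_by_wind would map wind velocity (m/s) to the logged (x, y) samples of a run.
# Fabricated data: a 20 m circle whose noise grows with wind velocity.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 200)
tracks_by_wind = {
    w: np.c_[20 * np.cos(theta), 20 * np.sin(theta)]
       + rng.normal(0, 0.2 * (1 + w), (200, 2))
    for w in (0, 10, 18)
}

winds = sorted(tracks_by_wind)
deviations = [radial_deviation(tracks_by_wind[w], (0, 0), 20.0) for w in winds]
plt.plot(winds, deviations, marker="o")
plt.xlabel("Wind velocity (m/s)")
plt.ylabel("Mean deviation from circular path (m)")
plt.title("Fuzzed wind velocity vs. path deviation (illustrative data)")
plt.show()
```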
In response to RQ2, we found that the monitoring environment is capable of reporting violations when an sUAS breaches any of the user-configured test properties, as demonstrated in our two use case executions. Additionally, our findings reveal that fuzzing the environmental configuration is effective in identifying the operational constraints of an sUAS application. Comparing simulation results from the fuzzed test scenarios to the original test scenario provided valuable insights into the safe behavior of sUAS under varying environmental conditions. These findings also suggest that we achieved our second objective (DO2) of developing the platform.
§ THREATS TO VALIDITY
In this section, we discuss threats to validity of our study and execution of the test scenarios.
∙ Construct Validity refers to how accurately a test measures the concept it was designed to evaluate. In the case of the perception study, participants came from our own network, and their evaluation of the platform's usability could have been biased towards positive feedback; however, their responses were supported by qualitative answers which provided clear rationales for their positions.
∙ Internal Validity describes threats that could potentially cause the observed effects besides the independent variable. In the case of RQ2, we evaluated the extent to which the platform could detect safety-related issues. However, here we assume that the underlying physics engine provides a high-fidelity simulation and an accurate proxy of the actual physical sUAS. If the physics of the engine and that of the drone differ considerably, then certain types of faults that occur in the real world will not be detected in simulation. Future versions of the platform will address this by supporting fidelity tests and providing an interface to configure the actual physics engine.
∙ External Validity describes threats to the generalizability of results.
First, while the two use cases used for the evaluation represent complex and real-world applications, exhibiting safety-critical aspects, our evaluation is based on applying the platform to only one DSuT and one type of flight controller. Therefore, additional case studies are required to evaluate the generalizability and broader applicability of the platform. As part of this effort, we specifically provide APIs to facilitate tests with diverse sUAS and additional types of missions. Second, while the feedback we received from the five highly experienced software engineers is valuable, they may not necessarily represent the opinions and preferences of a broader range of the development community.
Beyond these threats, we note two design limitations. First, the current design utilizes a client-server application architecture, requiring developers to manually trigger simulations after configuring test scenarios. However, system requirements often change throughout the development process. For large-scale projects, this manual approach could result in considerable effort for developers to configure and execute test scenarios for new or modified requirements. In our design, test scenarios are captured as JSON messages, so the execution of predefined test scenarios against new or modified requirements can be automated through integration with Continuous Integration tools, further improving its practical utility. Therefore, future studies should examine how the platform impacts large-scale sUAS development teams by supporting automated simulation testing of new or modified system requirements.
Second, we observed that the computational resource requirements of the platform increase with the number of sUAS and the quality of the simulation environment. The platform started experiencing performance issues when we deployed 20 aircraft in a simulation scenario. As a result, to facilitate large-scale simulations, future efforts should conduct a systematic performance evaluation and utilize the findings to improve the design.
Finally, simulation results should be interpreted with caution, considering the fidelity of the underlying physics engine with respect to the physical sUAS that will be deployed in the field.
We reported our results using AirSim's default engine, which behaves similarly to the physical hexacopters that we fly under normal operating conditions; however, we plan to develop APIs that allow the integration of custom and more advanced physics engines with the platform in the future.
§ RELATED WORK
sUAS Simulation:
Simulation is widely used in AV research, with many open-source simulators for self-driving cars, such as TORCS <cit.> and CARLA <cit.>. However, in the sUAS domain, rich simulation environments are far more limited, particularly with regard to the application-specific scenarios that can be defined and the properties that can be monitored during a simulation run. Dronology <cit.> is a centralized, multi-sUAS platform with built-in run-time monitors. It has been replaced by DroneResponse, which uses a distributed model that supports greater autonomy <cit.>. Neither of these platforms provides a structured testing harness to specify and validate specific sUAS behavior, nor do they consider the complexities of the real world during simulation. With regard to “pure” simulation tools, Gazebo <cit.>, a 3D robotic simulation platform, and AirSim <cit.> facilitate basic sUAS testing. However, these tools alone are limited in terms of providing a structured and well-defined testing environment. Afzal <cit.> released Gzscenic, which lets developers specify various environment elements using a domain language and automatically arranges them in Gazebo for simulation. However, the domain language is limited in terms of specifying more complex and realistic 3D landscapes.
In contrast, our current approach facilitates the testing of multi-sUAS applications in complex scenarios by simulating realistic 3D environmental conditions, automating fuzzy testing, and generating acceptance test results to facilitate debugging.
sUAS V&V Activities:
To validate the impact of adverse weather conditions on flight dynamics, researchers have built climate-controlled facilities <cit.>. However, testing in climate-controlled facilities is expensive and difficult. Grigoropoulos and Lalis <cit.> describe a simulation environment that allows testing and execution of sUAS applications using a digital twin representation of sUAS to detect changes and deviations from requirements. Similar to our work on sUAS testing, Schmittle <cit.> proposed OpenUAV, a testbed for UAV testing, focusing on UAV education and research activities. OpenUAV provides a dedicated frontend interface and containerized architecture to facilitate UAV testing. Similarly, StellaUAV <cit.> lets developers build test scenarios using simplistic terrain types, obstacle types, and weather conditions. Khatiri <cit.> created SURREALIST, a tool that uses real UAV flight data to generate test cases in a simulated environment. However, while all these approaches do provide testing capabilities for sUAS applications, with our platform we focus specifically on enabling mission-specific tests, with a particular emphasis on sUAS interaction with digital shadows of the real world in realistic scenarios.
AI capabilities have enabled sUAS systems to make autonomous decisions. However, human interventions in sUAS autonomy remain necessary to ensure system safety <cit.>. Recently, Cleland-Huang proposed the MAPE-K_HMT framework <cit.>, which augments the traditional MAPE-K loop with Human-Machine Teaming (HMT) requirements <cit.>. Current sUAS simulation tools lack emphasis on human-sUAS interaction testing. In contrast, our proposed architecture monitors human inputs and facilitates testing of human-sUAS control switching during simulation.
§ CONCLUSION AND FUTURE WORK
In this paper, we have presented , a platform that software engineers developing novel sUAS applications can use to specify diverse requirements for multi-sUAS missions, define acceptance tests, and deploy their own sUAS missions into a realistic simulation environment. We have validated the feasibility and usefulness of our platform by applying it to two different sUAS missions and by evaluating sUAS developers' perception of . Our platform supports requirements validation and black-box testing of entire missions, and provides critical support for deploying validated sUAS applications into the physical world. This paper contributes to the field of requirements engineering by offering a novel approach to specifying and testing complex sUAS missions in a realistic simulation environment. In future work, we plan to extend our platform to support additional mission-specific scenarios, such as delivery or surveillance, and to integrate more advanced features such as fault injection and recovery mechanisms.
§ DATA AVAILABILITY
We provide a list of sUAS incidents, evaluation data, web app for designing test scenarios, simulation engine package, and user study materials in our public Github repository[<https://github.com/UAVLab-SLU/RE-23-Supp-Materials>].
§ ACKNOWLEDGMENT
The work described in this paper was primarily funded under NSF Grant 1931962 and partially funded by the Linz Institute of Technology (LIT-2019-7-INC-316).
|
http://arxiv.org/abs/2307.02879v1
|
20230706093336
|
Algorithms for computing norms and characteristic polynomials on general Drinfeld modules
|
[
"Xavier Caruso",
"Antoine Leudière"
] |
cs.SC
|
[
"cs.SC",
"math.NT"
] |
Algorithms for computing norms and
characteristic polynomials on general Drinfeld modules
Xavier Caruso[Université de Bordeaux, CNRS, INRIA, 351, cours de la Libération, 33405 Talence, France],
Antoine Leudière[Université de Lorraine, INRIA, CNRS, 615 rue du Jardin Botanique, 54600 Villers-lès-Nancy, France]
August 1, 2023
===========================================================================================================================================================================================================================================
We provide two families of algorithms to compute characteristic
polynomials of endomorphisms and norms of isogenies of Drinfeld
modules. Our algorithms work for Drinfeld modules of any rank,
defined over any base curve.
When the base curve is ℙ^1_, we do a thorough
study of the complexity, demonstrating that our algorithms are,
in many cases, the most asymptotically performant.
The first family of algorithms relies on the correspondence
between Drinfeld modules and Anderson motives, reducing the computation to
linear algebra over a polynomial ring. The second family, available only for the Frobenius endomorphism,
is based on a new formula expressing
the characteristic polynomial of the Frobenius as a
reduced norm in a central simple algebra.
§ INTRODUCTION
tocsectionIntroduction
Drinfeld modules were introduced in 1974 to serve as the foundations of the
class field theory of function fields <cit.>. Although they were
initially considered as abstract mathematical objects, recent papers
have highlighted a growing interest in their computational aspects:
in recent years, a PhD thesis <cit.> and at least three
papers have focused on the algorithmics of Drinfeld modules <cit.>.
Due to their striking similarities with elliptic curves, Drinfeld modules
were considered several times for their applications in cryptography <cit.>. Other
applications saw them being used to efficiently factor polynomials in [T]
<cit.>.
The present paper is a contribution to the algorithmic toolbox of
Drinfeld modules. More precisely, we focus on the effective and
efficient computation of characteristic polynomials of endomorphisms
of Drinfeld modules, as well as norms of general isogenies.
Context.
Before going deeper into our results, we recall briefly the purpose and the
most significant achievements of the theory of Drinfeld modules. Classical
class field theory aims at describing abelian extensions of local and global
fields, using information available solely at the field's level
<cit.>. Premises of the theory go
back to Gauß' Disquisitiones Arithmeticae, and in 1853, Kronecker
stated the famous Kronecker-Weber theorem: every abelian number field lies
within a cyclotomic field <cit.>. Another
crucial theorem from class field theory is the Kronecker Jugendtraum,
relating maximal abelian unramified extensions of quadratic imaginary number
fields and the theory of complex multiplication of elliptic curves. More
generally, a result conjectured by Hilbert, and proved by Takagi in 1920
<cit.>, asserts that every number field K admits a maximal abelian
unramified extension H whose Galois group (H/K) is isomorphic to the class group
of K. The field H is called the Hilbert class field of K and,
apart from abelian number fields and imaginary quadratic number fields, it is
generally hard to describe, let alone to compute.
The goal of Drinfeld modules
is to set up an analogue of these results for function fields.
A Drinfeld module is an algebraic object which is defined within the following
setting: a base curve C over which is projective, smooth and
geometrically connected (e.g. C = ℙ^1_); a fixed point
∞ of C; the ring A of rational functions on C regular outside
∞ (e.g. A = [T]); a base field K with a structure of
A-algebra. We then talk
about Drinfeld A-modules. An important feature of Drinfeld modules is that
they endow the algebraic closure of K with a structure of A-module. When A =
[T], this structure surprisingly ressembles to the -module
structure on the points of an elliptic curve. Important references on Drinfeld
modules include <cit.>.
The simplest Drinfeld modules are the rank 1 Drinfeld modules over the curve
ℙ^1_, where K is the function field (T), i.e. the
Drinfeld [T]-modules of rank 1 over (T). They were studied by
Carlitz <cit.>, and provide function field analogues of
roots of unity, and consequently, of cyclotomic fields; the analogue of the
Kronecker-Weber theorem was subsequently proved by Hayes
<cit.>. Coming to the Jugendtraum, we need to go to
Drinfeld modules of rank 1 over general curves and Drinfeld [T]-modules
of rank 2 over finite fields. The latter have a theory of complex
multiplication which shares many similarities with that of elliptic curves over
finite fields. As an illustration, we mention that the endomorphism ring of
such a Drinfeld module is either an order in a quadratic imaginary function field
or a maximal order in a quaternion algebra.
Algorithmic results.
Like in the classical setting, the theory of complex multiplication of Drinfeld
modules depends heavily on the notion of characteristic polynomial of the
Frobenius endomorphism, which we compute in this paper. This polynomial lies in
A[X] and is an invariant of primary importance: it determines the
isogeny class of the underlying Drinfeld module, it controls the theory
of complex multiplication and it is the main building block in the
construction of the attached L-function <cit.>. Moreover, in the case of rank 2 Drinfeld modules over
[T], being ordinary is equivalent to the middle coefficient of this polynomial not being divisible by
the function field characteristic. The characteristic polynomial of the Frobenius also defines curves and
extensions that naturally arise in the class field theory of function fields
<cit.>. More generally, characteristic polynomials can
be defined for any endomorphism in any rank and over any base.
In the present paper, we design algorithms for computing the characteristic
polynomial of any endomorphism of a Drinfeld module on the one hand,
and for computing the norm of any isogeny between Drinfeld modules on the other
hand. When A = [T], we moreover do a thorough analysis of their
complexity.
To state our complexity results, it is convenient to use Landau's
O-notation and some of its variants. Precisely, if f and g
are two positive quantities depending on parameters, we write
* g ∈ O(f) if there exists an absolute positive constant C
such that g ≤ C · f,
* g ∈(f) if there exist absolute positive constant C and
k such that g ≤ C · f log^k f,
* g ∈(f) if, for all ε > 0, there exists
a positive constant C_ε
such that g ≤ C_ε· f^1 + ε,
where all inequalities are required to hold true for all choices
of parameters.
Let also ω∈ [2,3] denote a feasible exponent for matrix
multiplication; by this, we mean that we are given an algorithm which is
able to compute the product of two n × n matrices over a ring R
for a cost of O(n^ω) operations in R. The naive algorithm leads
to ω = 3; however, better algorithms do exist and the best known
value for ω, nowadays, is less than 2.37188 <cit.>.
Similarly, let Ω be a feasible exponent for the computation of the
characteristic polynomial of a matrix over polynomials rings over a field.
Using Kaltofen and Villard's algorithm, it is known that one can reach
Ω < 2.69497 <cit.>.
If K is a finite extension of of degree d, we also denote by
(n,d) a log-concave function with respect to the variable n
having the following property:
the number of operations in [Here,
we assume that applying the Frobenius of K counts for (d)
operations
in , see <ref> for more details.] needed
for multiplying two Ore polynomials in K{τ} of degree n is
in ((n,d)).
Our first result is about the computation of the characteristic
polynomial of an endomorphism of a Drinfeld module.
[see Theorems <ref> and <ref>]
Let ϕ be a Drinfeld [T]-module of rank r over a field K, and
let u be an endomorphism of ϕ of degree n.
The characteristic polynomial of u can be computed for a cost of
(n^2 + (n+r)r^{Ω-1})
operations in K and O(n^2 + r^2)
applications of the Frobenius.
Moreover, when K is a finite extension of of degree d,
the characteristic polynomial of u can be computed for a cost of
(d log^2 q) +
( ((n, d) + ndr + (n + d)r^ω)·log q)
bit operations.
We then study more particularly the special case of the Frobenius
endomorphism (which is only defined when K is a finite field),
for which we provide three different algorithms that we call ,
and respectively.
Let ϕ be a Drinfeld [T]-module of rank r over a finite
extension K of of degree d. The characteristic polynomial of the Frobenius
endomorphism of ϕ can be computed for a cost of either
* [algorithm, see <ref>]
(d log^2 q) + (((d, d) + d^2r + dr^ω)·log q), or
* [algorithm, see <ref>]
(d log^2 q) + ((d^2 r^{ω-1} + dr^ω)·log q), or
* [algorithm, see <ref>]
(d log^2 q) + (r d^ωlog q)
bit operations.
We finally come to general isogenies between different Drinfeld modules.
In this case, the characteristic polynomial is not well-defined, but the
norm is.
[see Theorems <ref> and <ref>]
Let ϕ and ψ be two Drinfeld [T]-modules of rank r over a
field K, and let u : ϕ→ψ be an isogeny of degree n.
The norm of u can be computed for a cost of
(n^2 + nr^{ω-1} + r^ω)
operations in K and O(n^2 + r^2) applications of the Frobenius.
Moreover, when K is a finite extension of of degree d,
the norm of u can be computed for a cost of
(d log^2 q) +
(((n, d) + ndr + n·min(d,r)·r^{ω-1} + dr^ω)
·log q )
bit operations.
Moreover, we propose extensions of all our algorithms to Drinfeld
modules defined over a general curve C (and not just ℙ^1_).
However, we do not carry out, in the present paper, a thorough study of
the complexity in this general setting.
Finally, we mention that, in the case of ℙ^1_, our algorithms
have been implemented in SageMath <cit.> and will be
hopefully publicly available soon in the standard distribution.
Meanwhile, the interested user may read tutorials and try out our
software package online on the platform plm-binder at:
<https://xavier.caruso.ovh/notebook/drinfeld-modules>
Comparison with previous results.
To the authors' knowledge, it is the first time that algorithms are
presented for Drinfeld modules defined over a general curve; so far,
only the case of ℙ^1_ was addressed.
Also, we are not aware of previous works on the explicit computations
of norms of general isogenies between different Drinfeld modules.
In contrast, the question of the explicit computation of the
characteristic polynomial of the Frobenius endomorphism, especially
in the case of rank 2, was already considered by many
authors <cit.>.
Our algorithms for this task are however new and they turn out to be
competitive for a large range of parameters.
More precisely, prior to our work, the most efficient algorithm was due
to Musleh and Schost <cit.>.
Depending on the relative values of r, d = [K: ] and m =
(), all four algorithms (, , and Musleh-Schost's
algorithm) achieve the best asymptotic complexity in at least one regime,
as shown in Figure <ref>.
As a rule of thumb, the reader can memorize that our algorithms are
better when r ≫√(d) (or even r ≫
d^0.431 if one takes into account fast algorithms for matrix
multiplication); on the contrary, when r ≪√(d), our algorithms
may still be competitive, depending on the relative values of log(m) /
log(d) and log(r)/log(d).
For a more complete review on existing algorithms and comparison
between complexities, we refer to the tables of Appendix <ref>
(page appendix:review).
[Figure <ref>: regimes, in terms of log(r)/log(d) and log(m)/log(d), in which each of the four algorithms achieves the best asymptotic complexity.]
Anderson motives.
The main theoretical input upon which all our algorithms are based is the
motive attached to a Drinfeld module, introduced by Anderson in
1986 <cit.> (see also <cit.>). In the classical setting of algebraic geometry,
Grothendieck describes the motive (X) of an algebraic variety X as the
ultimate object able to encode all the “linear” properties of X. Since
characteristic polynomials and norms are obviously constructions of linear
nature, we expect to be able to recover them at the level of motives. However,
in the classical setting, motives are usually quite complicated objects, often
defined by accumulating subtle categorical constructions. More or less, this
totally prevents using them for algorithmic applications.
It is striking that the situation for Drinfeld modules is much more tractable:
the Anderson motive (ϕ) of a Drinfeld module ϕ is a very
explicit object—concretely, it is just K{τ} equipped with extra
structures—which is very well-adapted to algorithmic manipulations.
However, (ϕ) exhibits all the theoretical features one expects; in
particular, it retains all the information we need on characteristic
polynomials of endomorphisms and norms of isogenies.
In the present paper, we make an intensive use of this yoga. In
particular, we highlight that our methods are not an adaptation of
existing methods from elliptic curves.
More precisely, an endomorphism u of a Drinfeld module corresponds to
a linear endomorphism (u) at the level of Anderson motives. It is
moreover a well-known fact that the characteristic polynomial of (u)
agrees with that of u (see <cit.>
for the case of ℙ^1_). In the present paper, we give a new proof
of this theorem, and extend it to general isogenies, establishing that
the norm of an isogeny u is the ideal generated by the determinant
of (u) (see Theorem <ref>).
We then use this result to reduce the computations we are interested in
to the computation of the determinant or the characteristic polynomial
of an actual matrix. In the case of ℙ^1_, this is immediate
since Anderson motives are free over K[T], with an explicit
canonical basis.
For a general curve, Anderson motives are not always free but only
projective, which induces technical difficulties for algorithmics.
Although it should be doable to tackle these issues head-on, we
choose to work around them by reducing the problem to the case of
ℙ^1_ treated previously.
The previous discussion applies to all our algorithms, except the
algorithm which is different in nature: it is based on a new
formula interpreting the characteristic polynomial of the Frobenius
endomorphism as a reduced norm in some well-suited central simple
algebra.
For the sake of simplicity, we only state our result for the case
of ℙ^1_ in this introduction.
[see Theorem <ref>]
Let ϕ be a Drinfeld [T]-module over a finite field K
and let π(T,U) ∈[T][U] be
the characteristic polynomial of the Frobenius endomorphism of ϕ.
Let also χ(τ^d, V) ∈[τ^d][V] be the reduced
characteristic polynomial of ϕ_T ∈ K{τ}. Then
π(T, U) = χ(U, T).
The above theorem reduces the computation of the characteristic
polynomial of the Frobenius endomorphism to the computation of a
reduced characteristic polynomial which, using classical techniques,
further reduces to the computation of the characteristic polynomial
of an actual d × d matrix (with d = [K:] as above) over [T].
To conclude, we would like to mention that, on the theoretical side,
Anderson motives are not only a powerful tool for studying Drinfeld
modules; they are nowadays considered as a vast generalization of
Drinfeld modules, providing more flexibility in the constructions
and having their own interest. The methods presented in this article strongly
suggest that designing algorithms in the framework of general Anderson
motives is completely within our reach (and maybe easier!). We then do
believe that time is ripe to go beyond Drinfeld modules and start
working with Anderson motives at the algorithmic level.
Acknowledgements.
We thank Pierre-Jean Spaenlehauer and Emmanuel Thomé for their guidance.
We thank Cécile Armana, Alain Couvreur, Quentin Gazda, Federico Pellarin
and Floric Tavarès-Ribeiro for helpful discussions.
This work benefited from the financial support of the ANR projects
CLap-CLap (ANR-18-CE40-0026-01), Barracuda (ANR-21-CE39-0009) and
PadLEfAn (ANR-22-CE40-0013).
§ BACKGROUND
This section serves as a gentle preliminary part in which we
introduce the setup of this article. On the theoretical side, we
recall basic definitions and constructions on Drinfeld modules while,
on the computational side, we specify our complexity model and discuss
several algorithmic primitives we shall constantly use throughout
this article.
§.§ Drinfeld modules
Throughout this paper, we fix a finite field of cardinality q. Let C
be a smooth, projective, geometrically connected curve over . Let ∞
be a distinguished closed point on C and let A denote the ring of rational
functions on C that are regular outside ∞. If F is an extension of
, we write A_F = F ⊗_ A. Thanks to our assumptions on C,
the ring A_F is a Dedekind domain. We recall that the degree of an
ideal of A_F, denoted by (), is defined as the F-dimension
of A_F/.
For a ∈ A_F, we will often write (a) for (a A_F).
We consider an extension K of and fix an algebraic
closure of K. We fix in addition a homomorphism of
-algebras
γ: A → K.
The kernel of γ, a prime ideal of A, is denoted by and referred to as the
characteristic.
An ideal of A is said to be away from the characteristic if it is
coprime to . Finally, we let be
the algebra of Ore polynomials over K in τ, in which
the multiplication is twisted according to the rule τ a = a^q
τ for all a ∈ K.
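To make the twisted multiplication rule concrete, here is a minimal, self-contained Python sketch of naive arithmetic in K{τ} for the toy field K = F_9 = F_3(i) with i^2 = -1; the helper names (f9_add, f9_mul, f9_frob, ore_mul) are ad hoc and purely illustrative, independent of the SageMath implementation mentioned in the introduction.

# Toy field F_9 = F_3(i), i^2 = -1; the element a + b*i is stored as the pair (a, b).
def f9_add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def f9_mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def f9_frob(x, s=1):
    # the q-power Frobenius (q = 3) is the conjugation a + b*i -> a - b*i
    a, b = x
    return (a, (-b) % 3) if s % 2 else (a, b)

def ore_mul(f, g):
    # f, g are lists of F_9 coefficients, f = sum_i f[i] * tau^i; the rule
    # tau*a = a^q*tau gives (a tau^i)(b tau^j) = a * b^(q^i) * tau^(i+j)
    h = [(0, 0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = f9_add(h[i + j], f9_mul(fi, f9_frob(gj, i)))
    return h

# sanity check: tau * i = i^q * tau = -i * tau
assert ore_mul([(0, 0), (1, 0)], [(0, 1)]) == [(0, 0), (0, 2)]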
§.§.§ Drinfeld modules and isogenies
We define Drinfeld modules and their morphisms.
[Drinfeld modules]
A Drinfeld A-module (or a Drinfeld module for short)
over K is a ring homomorphism
ϕ: A →
whose constant coefficient agrees with γ and whose image is not
contained in K.
For a ∈ A, we write ϕ_a for ϕ(a).
By definition, the rank of ϕ is the unique positive integer
r such that (ϕ_a) = r (a) for all a ∈ A (see
<cit.>).
The simplest Drinfeld modules are those for which C = ℙ^1_ and
∞ is the point at infinity, i.e. A = [T].
In this case, a Drinfeld module ϕ of rank r is defined by
the datum of an Ore polynomial
ϕ_T = γ(T) + g_1 τ + ⋯ + g_r τ^r,
with g_1, …,
g_r ∈ K and g_r ≠ 0. Carlitz modules are those Drinfeld
[T]-modules for which K = (T) and
r = 1.
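Continuing the toy Python sketch above (same ad hoc helpers), the homomorphism ϕ is entirely determined by the single Ore polynomial ϕ_T: the image ϕ_a of any a ∈ [T] (with coefficients embedded in K) is obtained by evaluating a at ϕ_T inside K{τ}, for instance by Horner's scheme.

def ore_add(f, g):
    n = max(len(f), len(g))
    f = f + [(0, 0)] * (n - len(f))
    g = g + [(0, 0)] * (n - len(g))
    return [f9_add(x, y) for x, y in zip(f, g)]

def phi(a, phi_T):
    # a is a nonzero polynomial in T, given by its list of coefficients
    # [a_0, a_1, ...]; returns phi_a = a(phi_T), computed by Horner's scheme
    res = [a[-1]]
    for c in reversed(a[:-1]):
        res = ore_add(ore_mul(res, phi_T), [c])
    return res

# a rank 1 example with gamma(T) = i: phi_T = i + tau
phi_T1 = [(0, 1), (1, 0)]
assert phi([(0, 0), (1, 0)], phi_T1) == phi_T1           # a = T
assert len(phi([(0, 0), (0, 0), (1, 0)], phi_T1)) == 3   # phi_{T^2} has tau-degree 2 = r*deg(T^2)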
[Morphisms]
Let ϕ, ψ be two Drinfeld modules.
A morphism
u: ϕ→ψ is, by definition, an Ore polynomial u such that
u ϕ_a = ψ_a u
for every a ∈ A. An isogeny is a nonzero morphism.
This definition equips the class of Drinfeld
modules with a structure of category, in which the composition is given by
the product in the ring of Ore polynomials.
We say that ϕ and ψ are isogenous if there exists
an isogeny between ϕ and ψ. One checks that two isogenous Drinfeld
modules have the same rank.
For any a ∈ A, ϕ_a defines an endomorphism of ϕ.
If K is a finite field of degree d over , then τ^d defines an
endomorphism called the Frobenius endomorphism of ϕ;
it is denoted by F_ϕ.
Let u : ϕ→ψ be an isogeny defined by the degree n Ore polynomial
u = u_0 + u_1 τ + ⋯ + u_n τ^n.
We say that n is the τ-degree of u.
By definition, the height of u is the smallest integer
h for which u_h ≠ 0. In what follows, we denote it by h(u).
When h(u) = 0, we say that u is separable.
When the characteristic is zero, any
isogeny is separable. On the contrary, when does not vanish,
h(u) is necessarily a multiple of (), and u decomposes as
u = u_s ∘τ^h(u),
where τ^h(u) defines an isogeny from ϕ to a second
Drinfeld module ϕ' and u_s : ϕ' →ψ is a separable
isogeny.
§.§.§ Torsion points, Tate module, and Anderson motives
Let ϕ and ψ be two rank r Drinfeld modules. We define the most
important algebraic structures attached to a Drinfeld module.
[A-module]
* The A-module of ϕ, denoted (ϕ), is the A-module
equipped with the structure given by
a · z = ϕ_a(z)
for a ∈ A and z ∈(ϕ).
* Given an additional ideal of A,
we define the -torsion _(ϕ) of ϕ as the
-torsion of the module (ϕ), that
is the subset of consisting of elements z for which
ϕ_a(z) = 0 for all a ∈.
For an element a ∈ A, we write _a(ϕ)
for _aA(ϕ).
Any morphism of Drinfeld modules u : ϕ→ψ induces
A-linear morphisms
(u) : (ϕ) → (ψ), z ↦ u(z),
and
_(u): _(ϕ) →_(ψ).
For any nonzero ideal
⊂ A away from the characteristic, the module _(ϕ) is free of rank r over A/, i.e.
_(ϕ) ≃ (A/)^r <cit.>. This classical fact
highlights one of the first similarities with elliptic curves, of which
rank two Drinfeld modules are said to be function field analogues.
[Tate module]
Let be a maximal ideal of A, away from the characteristic.
We define the -adic Tate module of ϕ as the inverse limit
_(ϕ) = _^n(ϕ).
The Tate module _(ϕ) is a module over the completion
A_ of A with respect to the place .
It is free of rank r, and morphisms u: ϕ→ψ give rise
to A_-linear maps _(u): _(ϕ) →_(ψ).
[Anderson motive]
* The A-motive of ϕ, denoted by (ϕ), is the A_K-module
equipped with the structure given by
(λ⊗ a) · f = λ f ϕ_a
where λ∈ K, a ∈ A, f ∈(ϕ) and the
multiplication in the right hand side is computed in .
* Given in addition an ideal of A, we define
_(ϕ) = A/⊗_A (ϕ) = (ϕ)/(ϕ).
For an element a ∈ A, we write _a(ϕ)
for _aA(ϕ).
In classical references (e.g. <cit.>), the
A-motive (ϕ) carries more structure: it is a module over
the noncommutative ring K{τ}⊗_ A = A_K{τ}.
This additional τ-action is important, but never
used in this article. Therefore, for simplicity, we only retain the structure of A_K-module.
It is well known that (ϕ) is projective of rank r over
A_K (see <cit.>).
When A = [T], we have A_K ≃ K[T] and (ϕ) is
free with basis (1, τ, …, τ^r-1) <cit.>.
We stress that this has significant importance for our algorithmic purpose.
In general, a morphism of Drinfeld modules u : ϕ→ψ induces
a morphisms of A_K-modules
(u)
(ψ)
(ϕ)
f
fu
and
_(u): _(ψ) →_(ϕ).
We refer to <cit.> or <cit.> for
more details and generalizations. The degree of the Ore polynomial defining an
element f ∈(ϕ) (resp. (u)) is called the τ-degree of f
(resp. (u)).
Let and be ideals of A, with maximal.
The constructions , _, _, and _ define
functors from the category of Drinfeld modules:
* (resp. _) is a covariant functor to the category of
A-modules (resp. A/-modules);
* _ is a covariant functor to the category of A_-modules;
* (resp. _) is a contravariant functor to the category of
A_K-modules[More precisely, is a functor to the category of
Anderson motives.] (resp. A_K/ A_K-modules).
In standard references, the -torsion is denoted by ϕ[].
In this article, we prefer the notation _(ϕ) because it
better underlines
the functorial properties of the construction, which will later play a
leading role.
§.§.§ Norms and characteristic polynomials
The norm of an isogeny is defined in <cit.>, in terms of Euler-Poincaré
characteristic.
Let us take a step back, and fix a Dedekind domain .
The
Euler-Poincaré characteristic, denoted by χ_, is a function defined on
the class of finitely generated -modules and assuming
values in the set of ideals of . It is uniquely determined
by the following conditions:
* χ_(/) = for every ideal of ;
* χ_(M_2) = χ_(M_1) ·χ_(M_3)
for every exact sequence 0 → M_1 → M_2 → M_3 → 0 of finitely
generated -modules.
The formation of Euler-Poincaré characteristic commutes with flat scalar extension.
In particular, given a finitely generated -module M and a maximal
ideal ⊂, we have
χ_(M) ⊗_ = χ_(M ⊗_).
Similarly, if ' is another Dedekind domain lying above , we have
χ_(M) ⊗_' = χ_'(M ⊗_').
If M is torsion, Noether's theorem on the structure of finitely generated
modules over Dedekind domains <cit.> implies that M
decomposes as M ≃/_1 ×⋯×/_ℓ, where
_1, …, _ℓ are ideals of . In that case, χ_(M) =
_1⋯_ℓ.
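For instance, over [T], the torsion module [T]/(T) × [T]/(T^2-1) has Euler-Poincaré characteristic (T)·(T^2-1) = (T^3-T), as follows either from the two defining conditions or from the decomposition just mentioned.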
[Norm]
Let u: ϕ→ψ be an isogeny. The norm of u, denoted by
(u), is defined as
(u) = ^h(u)/()·χ_A((u)).
We recall that h(u) denotes the height of u.
This definition takes into account that an isogeny and its
separable part have the same kernel: the correction by the factor
^h(u)/() corresponds to the purely inseparable part.
Let r be the rank of ϕ.
For a ∈ A, we have (ϕ_a) = a^r A.
If ≠ 0 then (τ^ℓ()) = ^ℓ for
all ℓ∈. In particular, when K is a
finite extension of degree d of , the norm of the
Frobenius endomorphism F_ϕ is explicitly given by
(F_ϕ) = ^d/().
One proves <cit.> that the norm is multiplicative:
if u and v are composable isogenies, we have
(v∘ u) = (v) ·(u).
When u is an endomorphism, its action on the Tate module _(u)
is a linear endomorphism, whose determinant lies in A and generates
(u) <cit.>:
(u) = (_(u)) · A.
[Characteristic polynomial]
Let u: ϕ→ϕ be an endomorphism.
We define the characteristic polynomial of u as the
characteristic polynomial of _q(u).
Since _q(ϕ)
has rank r over A_, the characteristic polynomial of u has degree r.
It is also proven that it has coefficients in A <cit.>.
In this example, we assume that A = [T], that K is finite of degree d over
, and that ϕ is a rank two Drinfeld module defined by
ϕ_T = γ(T) + g τ + Δτ^2.
The characteristic polynomial of the Frobenius endomorphism of ϕ
takes the form <cit.>
X^2 - tX + (-1)^n N_K/(Δ)^-1^n/()
where N_K/ is the norm from K to and, in a slight
abuse of notation, the notation is used to denote the monic generator of
the characteristic.
The coefficient t ∈[T] is called the Frobenius
trace of ϕ and we have _T(t) ≤ d/2. We refer to
Remark <ref> for more
information about the Frobenius norm.
The endeavour of computing this polynomial has been the object of many
research articles, leading to a variety of algorithms. We refer to Appendix <ref> for a review of
their respective complexities.
§.§.§ Restriction of Drinfeld modules
We consider γ' : A' → K, a second
base for Drinfeld modules satisfying the assumptions
of <ref>, and we assume that we are given in
addition an injective homomorphism of rings f : A' → A such
that γ' = γ∘ f.
Thanks to our assumptions on A and A', we find that f endows
A with a structure of finite A'-algebra.
If ϕ : A → K{τ} is a Drinfeld module, the composite
ϕ∘ f : A' → A → K{τ}
defines a Drinfeld module over A', denoted by f^* ϕ and referred to as
the restriction of ϕ along f.
Considering two Drinfeld A-modules as well as a morphism u : ϕ→ψ,
one checks that the Ore polynomial defining u also defines
an isogeny f^* ϕ→ f^* ψ, which we denote by f^* u.
The construction f^* defines a functor from the category
of Drinfeld modules over A to the category of Drinfeld modules
over A'.
The action of f^* on the motives is easy to describe: the motive
(f^* ϕ) is simply (ϕ) with the restricted action of
A and, for any morphism u: ϕ→ψ, the maps (f^*u)
and (u) are the same (up to the above identification).
§.§ Algorithmics
We now move to algorithmics and discuss the complexity of
performing basic operations on matrices on the one hand, and on
Ore polynomials on the other hand.
§.§.§ Complexity model
We recall Landau's notations O, and
from the introduction: if f and g
are two positive quantities depending on parameters, we write
* g ∈ O(f) if there exists an absolute positive constant C
such that g ≤ C · f for all choices of parameters,
* g ∈(f) if there exist absolute positive constants C
and k such that g ≤ C · f log^k f for all choices of
parameters,
* g ∈(f) if, for all ε > 0, there exists
a positive constant C_ε such that g ≤ C_ε· f^1 + ε for all choices of parameters.
We notice that O(f) ⊂(f) ⊂(f) for
all f as above. Moreover, if f_1 and f_2 are two quantities
as above, one checks that
O(f_1) + O(f_2) ⊂ O(f_1 + f_2),
(f_1) + (f_2) ⊂(f_1 + f_2) and,
similarly, (f_1) + (f_2) ⊂(f_1 + f_2).
In this article, we measure complexity in two different ways.
When K is an arbitrary field, we use arithmetic
complexity, meaning that we count separately arithmetic
operations (addition, subtraction, multiplication and division)
in K on the one hand, and applications of Frobenius (that is the
computation of x^q for a given x ∈ K) on the
other hand.
On the contrary, when K is a finite field, we rather use
bit complexity, meaning that we count operations on
bits.
When K is a finite extension of of degree d presented as
a quotient K = [X]/Q(X) (for some irreducible polynomial Q(X)
∈[X] of degree d) and when is itself presented as a
quotient of [X], classical algorithms based on Fast Fourier
Transform allows for performing all arithmetic operations in K
for a cost of (d log q) bit operations (see for instance
<cit.>).
Estimating the cost of applying the Frobenius endomorphism of K
is more challenging, even though partial results are
available in the literature. First of all, Kedlaya and Umans'
algorithm <cit.> for fast modular composition is
theoretically capable to compute an image by Frobenius for a cost of
(d log q) bit operations. However, if α denotes the
image of X in K, one needs nevertheless to precompute α^q,
i.e. to write α^q on the
canonical monomial basis (1, α, …, α^d-1). Using a fast
exponentiation algorithm, this can be done for an initial cost of
(d log^2 q) bit operations. Another
flaw with this approach is that, as far as we know, one still
lacks an efficient implementation of Kedlaya and Umans' algorithm.
Another option, which achieves quasi-optimal complexity, is to use
the elliptic normal bases of Couveignes and Lercier <cit.>
instead of the classical monomial basis.
Indeed, in those bases, all arithmetic operations and applications
of Frobenius can be computed for a cost of (d) operations
in , corresponding to (d log q) bit operations.
The drawback of this solution is that constructing an elliptic
normal basis can be costly. Nevertheless this needs to be done only once, at the
instantiation of K.
Taking all of this into account, we choose to follow the convention of <cit.>
and opt for the first option: we make the
assumption that all arithmetic operations and applications of
Frobenius in K cost (d log q) bit operations, plus
a unique initial cost of (d log^2 q) operations
for the precomputation of α^q.
§.§.§ Polynomial matrices
We give a rough review of the literature on the computation of determinants and
characteristic polynomials of polynomial matrices. We recall from the
introduction that the notation ω∈ [2,3] refers to feasible exponent
for matrix multiplication. When matrices have coefficients in a field L, both
computing determinants and characteristic polynomials reduce to matrix
multiplication <cit.>.
Computing the determinant of a polynomial matrix also reduces to matrix multiplication <cit.>. However, the situation of the characteristic
polynomial is more delicate. Consider a s-by-s matrix with entries in
L[T]. Computing its characteristic polynomial can be done for a cost of
(s^Ω n) operations in L with Ω < 2.69497
<cit.>.
When M is an s-by-s matrix, we use the notation π(M) for
its monic characteristic polynomial, that is π(M) = (X·I_s - M)
where I_s is the identity matrix of size s. In the next
two lemmas, we derive two useful algorithms, for two specific situations.
We assume that L is a finite field of degree d over .
Let M be a s-by-s
matrix with coefficients in L[T].
Let n be a uniform upper bound on the degree of the coefficients of
π(M).
There exists a Las Vegas algorithm that computes
π(M) for a cost of
(n/d) + ((n + d)s^ω) operations in .
Let L' be an extension of L of degree ⌈ n / d ⌉; such
an extension, together with a generator α of L' over , can be found using Couveignes and Lercier's Las Vegas
algorithm, whose complexity is in (n/d) operations
in <cit.>.
The degree of the extension L'/ is then in the range [n, n+d].
Let M(α) denote the evaluation of M at T = α, and
write its characteristic polynomial as follows:
π(M(α)) = ∑_i=0^s ∑_j=0^n a_i, j α^j X^i,
where the coefficients a_i,j are in . Then
π(M) = ∑_i=0^s ∑_j=0^n a_i, j T^j X^i.
The generator α being known, computing π(M(α)) costs (s^ω)
operations in L', which corresponds to ((n+d) s^ω)
operations in .
Let M be an
s-by-s matrix with coefficients in [T] and let n be
a uniform upper bound on the degrees of the entries of M.
We assume that the coefficients of π(M) fall in [T^s].
There exists a Las Vegas algorithm that computes π(M) with
probability at least 1/2 for a cost of
(n s^ω) operations in .
Let α_1, …, α_n ∈ be such that α_i^s ≠α_j^s whenever i ≠ j.
We compute the matrices M(α_1), …, M(α_n)
and compute their characteristic polynomials
π(M(α_1)), …, π(M(α_n)), for a total cost of
(ns^ω) operations in . Thanks to our
assumption, π(M) can be seen as having s polynomial coefficients
of degree at most n. Using fast interpolation algorithms <cit.>,
π(M) can therefore
be recovered from the π(M(α_i))'s for a cost of (ns)
operations in . We end up with a total of (n s^ω)
operations in .
This procedure only works if is large enough to
pick a valid set {α_1, …, α_n}.
Let ρ = (q-1, s)/(q-1) be the proportion of
elements in _q^× that are s-th roots of unity. A family
(α_1, …, α_n) ∈ (_q^×)^n has probability
p_n = (1 - ρ)(1 - 2ρ)⋯ (1 - n ρ) to form a valid set. As
p_n ≥ 1 - n(n+1)/2ρ, the process has a chance of
success greater than 1/2 as soon as q > 1 + s n (n+1).
If is not large enough, we do all computations in a finite
extension of . With these estimations, we conclude that it is enough to work in an
extension whose degree has order of magnitude log_q(s n^2). Building
this extension, as well as computing in it, does not affect the
announced complexity.
§.§.§ Ore polynomials
In full generality,
multiplications and Euclidean divisions of Ore polynomials in
K{τ} of degree at most n can be achieved with the naive
algorithm for a cost of O(n^2) operations in K and O(n^2) extra
applications of the Frobenius endomorphism.
However, when K is a finite field, we can take advantage of fast Ore polynomial
multiplication <cit.>. As before,
we use the letter d to denote the degree of the extension K/.
Let (n,d) denote a function having the following property: the
number of bit operations needed for multiplying two
Ore polynomials in of degree less than n is within
((n,d)log q).
At the time of writing this article, the best known value of is
given in <cit.>[In <cit.>,
the complexity is given in number of
operations in the ground field , with the assumption that
applying the Frobenius endomorphism of K requires at most
(d) operations in . Consequently one operation in
in the setting of <cit.> corresponds to
(log q) bit operations in the complexity model of this
article (see <ref>).],
[Note that there is a typo in <cit.>: the
critical exponent is not 5-ω/2 but 2/5-ω.]:
(n, d) = n^{(ω+1)/2} d
for n ≤ d^{2/(5 - ω)},
= n^{ω-2} d^2
for d^{2/(5 - ω)} ≤ n ≤ d,
= n d^{ω - 1}
for d ≤ n.
Let also be the function defined by
(n, d) = sup_0 < m ⩽ n(m, d) n/m.
The function is the smallest log-concave function above .
It is proved in <cit.> that computing the
right-Euclidean division of Ore polynomials in of degree less
than n requires at most ((n, d)log q) bit operations.
With the above values for (n,d), we have
(n, d) = n^{(ω+1)/2} d
for n ≤ d^{2/(5 - ω)},
= n d^{4/(5 - ω)}
for d^{2/(5 - ω)} ≤ n.
§ CHARACTERISTIC POLYNOMIALS OF ENDOMORPHISMS
In this section, we recall that characteristic polynomials
of endomorphisms of Drinfeld modules can be read off at the
level of Anderson motives. We then take advantage of this motivic
interpretation to design fast algorithms (including the algorithms
and mentioned in the introduction) for computing Drinfeld module
endomorphism characteristic polynomials.
§.§ Duality between torsion points and A-motives
It is a standard result in the theory of Drinfeld modules that
A-motives are duals to the so-called A-modules which, in some sense,
correspond to torsion points (see for instance <cit.> or <cit.>). We hereby propose
a concrete incarnation of this yoga, establishing a duality between the
functors _ and _. The material presented in this subsection
is somehow classical. However, we believe that our presentation is more elementary than
those from aforementioned references: for
instance, we do not need the introduction of (abelian) A-modules. As such, we
include all proofs, hoping they will be of interest for some readers.
Let be an ideal of A away from the characteristic. We consider the
evaluation map
: (ϕ) ×(ϕ) → , (z, f) ↦ f(z).
It is easily checked that is -linear with respect to
the variable z and K-linear with respect to the variable
f. Moreover, it follows from the definitions that vanishes
on the subset _(ϕ) ×(ϕ) and therefore
induces a bilinear mapping
_ : _(ϕ) ×_(ϕ) → .
We consider the scalar extensions _(ϕ)_ = ⊗__(ϕ) and _(ϕ)_ = ⊗_K _(ϕ). The map
_ induces a -bilinear form
_, : _(ϕ)_ ×_(ϕ)_ → .
The bilinear form _, is a perfect pairing.
Recall that, since is away from the characteristic,
_(ϕ) is free with rank r over A/. Therefore,
__(ϕ) = r ·() = _K _(ϕ),
and _(ϕ)_ and _(ϕ)_ have the same
dimension over .
It is then enough to prove that _, is nondegenerate on the left,
meaning that if x ∈_(ϕ)_ satisfies _, (x,y) =
0 for all y ∈_(ϕ)_, then x must vanish. More
generally, we are going to prove that there is no nonzero x ∈_(ϕ)_ having the following property:
_,(x, 1 ⊗τ^j) = 0 for all j large enough.
We argue by contradiction and
consider an element x ∈_(ϕ)_ satisfying the above
property. We write
x = λ_1 ⊗ z_1 + ⋯ + λ_n ⊗ z_n.
with λ_i ∈ and z_i ∈_(ϕ). Moreover, we assume
that x is chosen in such a way that the number of terms n is minimal.
This ensures in particular that the z_i's are linearly independent over
. Writing that _,(x, 1 ⊗τ^j) vanishes, we obtain
the relation
(E_j) : λ_1 z_1^q^j + ⋯ + λ_n z_n^q^j = 0,
which, in turn, implies
(E'_j) : λ_1^q z_1^q^j+1 + ⋯ + λ_n^q z_n^q^j+1 = 0.
Combining the relations (E_j+1) and (E'_j), we find
(λ_1^q - λ_n^q-1λ_1) · z_1^q^j+1 + ⋯ +
(λ_n-1^q - λ_n^q-1λ_n-1) · z_n-1^q^j+1 = 0.
In other words, the vector
y = (λ_1^q - λ_n^q-1λ_1) ⊗ z_1^q^j+1 + ⋯ +
(λ_n-1^q - λ_n^q-1λ_n-1) ⊗ z_n-1^q^j+1∈_(ϕ)_
is a new solution to our problem.
This will contradict the minimality condition in the choice of x if we can
prove that y does not vanish. To do this, we again argue by contradiction.
Given that the z_i's are linearly independent over , the vanishing of
y would imply λ_i^q - λ_n^q-1λ_i = 0 for all
i, from which we would deduce that all the quotients
λ_i/λ_n lie in . Thanks to the relations (E_j),
this again contradicts the linear independence of the z_i's over .
Proposition <ref> can be seen as a Drinfeld analogue of the
classical pairing between the singular homology and the de Rham cohomology of
a complex abelian variety: the space _(ϕ) plays the role of the
singular homology (via the étale viewpoint), while the space
_(ϕ) can be thought of as the incarnation of the de Rham cohomology
(see <cit.>).
Proposition <ref> gives a natural identification
α_ϕ :
_(ϕ)_≃_(_(ϕ)_, )
≃_K(_(ϕ), ),
where _ (resp. _K) refers to the space of -linear
(resp. K-linear) morphisms. A priori, the isomorphism α_ϕ is
only -linear; we upgrade it and make it A_-linear.
Let M be a module over A_K. We set M^∗ = _K(M, K) and equip it
with the structure of A_K-module given by
a ·ξ = (m ↦ξ(am)),
where a ∈ A_K and ξ∈ M^∗.
One checks that the construction M ↦ M^∗ is functorial, in the sense
that if g : M_1 → M_2 is a morphism of A_K-modules, then the dual map
g^∗ : M_2^∗→ M_1^∗ is A_K-linear as well. We define
_(ϕ)^∗_ = ⊗_K _(ϕ)^∗; it is a module
over A_.
A direct adaptation of <cit.>
using Noether's structure theorem for finitely generated modules over a
Dedekind domain <cit.> gives the following lemma.
Any torsion finitely generated A_K-module M is
(noncanonically) isomorphic to its dual M^∗.
The perfect pairing _, induces an A_-linear
isomorphism:
α_ϕ :
_(ϕ)_ ∼⟶ _(ϕ)^∗_.
Moreover, given a Drinfeld module morphism u : ϕ→ψ,
the following square is commutative, in the sense that
α_ψ ∘ (𝕀⊗_(u)) = (𝕀⊗_(u)^∗) ∘ α_ϕ
as maps from _(ϕ)_ to _(ψ)^∗_.
For the first assertion, we already know that α_ϕ is a
-linear isomorphism. It then only remains to verify that it is
A-linear. Let a ∈ A and z ∈_(ϕ). By definition a·z =
ϕ_a(z) and a·f = f ϕ_a for f ∈(ϕ). Hence
α_ϕ(a·z) is the function f ↦
f(ϕ_a(z)) = (f ϕ_a) (z) = (a·f)(z),
which means that α_ϕ(a·z) = a ·α_ϕ(z)
as desired.
The second assertion is easily checked.
Theorem <ref> shows that _(ϕ)_ determines
_(ϕ)_ and vice versa. One can actually do much better
and obtain a direct correspondence between _(ϕ) and
_(ϕ) without extending scalars to
(see, for instance, <cit.>).
For this, we need to add
more structures. On the one hand, on _(ϕ), we retain the
τ-action as discussed in Remark <ref>. On the other hand,
on _(ϕ), we have a Galois action. Precisely let denote the
separable closure of K inside . From the fact that is away from
the characteristic, we deduce that _(ϕ) lies in , and endow
with an action of the Galois group G_K = (/K). We now have the following
identifications refining those of Theorem <ref>:
_(ϕ) ≃_(_(ϕ), )
_(ϕ) ≃_[G_K](_(ϕ), )
where, in the first (resp. second) line, we consider
K-linear morphisms commuting with the τ-action (resp. -linear
morphisms commutating with the Galois action). In other words, the Galois representation
_(ϕ) and the τ-module _(ϕ)
correspond one to the other under Katz' anti-equivalence of
categories <cit.>.
In <cit.>, van der Heiden proposes another approach,
proving that there is a canonical A-linear isomorphism:
_(ϕ) ≃_A/(_(ϕ)^τ, Ω_A / Ω_A)
where _(ϕ)^τ denotes the subset of fixed points of
_(ϕ) by the τ-action and
Ω_A is the module of Kähler differential forms
of A over (see Proposition 4.3 of loc. cit.).
However, the formulation of Theorem <ref> is better
suited for the applications we shall develop in this article.
If M is a finitely generated projective A_K-module of rank n,
we let
M = ⋀^n M
denote the maximal exterior power of M.
Any A_K-linear endomorphism f : M → M induces a linear map f :
M → M. The latter is
the multiplication by some element of A_K, that we call the
determinant of f and denote by
f in a slight abuse of notation.
Similarly, we define the characteristic polynomial of f as the
determinant of the A_K[X]-linear map X-f acting on A_K[X]
⊗_A_K M.
A classical consequence of Theorem <ref> is the following.
Let ϕ be a Drinfeld module and
let u : ϕ→ϕ be an endomorphism.
Let ⊂ A be
a maximal ideal away from the characteristic. Then
the characteristic polynomials of _(u) and (u) are equal.
In particular, (u) is the principal ideal generated by ((u)).
Let n ∈. Applying Theorem <ref> with
= ^n, we find
π(_^n(u))
= π(_^n(u)_)
= π(_^n(u)_)
= π(_^n(u)),
the second equality being a consequence of Theorem <ref> and
the fact that two dual morphisms have the same determinant (in
suitable bases, their matrices are transposed one to the other).
Thus we obtain
π (_(u)) ≡π ((u)) ^n.
Since this holds for all positive integer n, we conclude that
π(_(u)) = π((u)).
The last statement now
follows from <cit.>.
§.§ Algorithms: the case of ℙ^1
In this subsection, we assume that A = [T], and we
let ϕ be a Drinfeld module of rank r. We fix an endomorphism u: ϕ→ϕ
and aim at designing an algorithm that computes
the characteristic polynomial (resp. norm) of u.
Under the assumption that A = [T], the ring A_K ≃ K[T] is a principal ideal
domain and (ϕ)
is free of rank r. Moreover, a canonical basis is
given by (1, τ, …, τ^r-1). Our strategy is then clear:
we compute the matrix representing the K[T]-linear map (u) in
the aforementioned canonical basis and then return its characteristic
polynomial (resp. determinant); Theorem <ref> ensures that
it is
the characteristic polynomial (resp. norm) of u.
§.§.§ Generic algorithm
Our first need is to design an algorithm for computing the coordinates of an element
f ∈(ϕ), represented as an Ore polynomial, in the canonical
basis of (ϕ).
This is achieved by Algorithm <ref>, whose
correctness is immediately proved by induction on the τ-degree
of f.
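Since the algorithm itself is not reproduced in this text, the following naive Python sketch (quadratic, without the divide-and-conquer step analysed below, and still in the toy F_9 setting of the background section; all helper names are ad hoc) may serve as a stand-in: it computes the coordinates by a base-ϕ_T expansion, peeling off one T-digit of every coordinate per right Euclidean division by ϕ_T.

def f9_inv(x):
    a, b = x
    n = (a * a + b * b) % 3          # the norm of x, a nonzero element of F_3
    ninv = 1 if n == 1 else 2
    return f9_mul((a, (-b) % 3), (ninv, 0))

def ore_rdivmod(f, g):
    # division in K{tau}: f = quo*g + rem with deg_tau(rem) < deg_tau(g)
    f = list(f)
    quo = [(0, 0)] * max(len(f) - len(g) + 1, 1)
    while f and f[-1] == (0, 0):
        f.pop()
    while len(f) >= len(g):
        k = len(f) - len(g)
        c = f9_mul(f[-1], f9_inv(f9_frob(g[-1], k)))
        quo[k] = c
        prod = ore_mul([(0, 0)] * k + [c], g)
        f = [f9_add(x, f9_mul((2, 0), y)) for x, y in zip(f, prod)]   # f - c*tau^k*g
        while f and f[-1] == (0, 0):
            f.pop()
    return quo, f

def coordinates(f, phi_T):
    # writes f = sum_{0 <= i < r} f_i(phi_T) * tau^i and returns [f_0, ..., f_{r-1}],
    # each f_i being given by the list of its coefficients (a polynomial in T)
    r = len(phi_T) - 1
    coords = [[] for _ in range(r)]
    while any(c != (0, 0) for c in f):
        f, rem = ore_rdivmod(f, phi_T)
        for i in range(r):
            coords[i].append(rem[i] if i < len(rem) else (0, 0))
    return coords

# a rank 2 example: phi_T = i + tau^2, so that f = phi_T has coordinates (T, 0)
phi_T2 = [(0, 1), (0, 0), (1, 0)]
assert coordinates(phi_T2, phi_T2) == [[(0, 0), (1, 0)], [(0, 0), (0, 0)]]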
For an input f ∈(ϕ) of τ-degree n, Algorithm <ref> requires O(n^2) applications
of the Frobenius endomorphism and O (n^2) operations in K.
The first step of the algorithm consists in computing ϕ_X^m.
Using fast exponentiation, this costs O (n^2) applications of the
Frobenius endomorphism and O (n^2) operations in K. The Euclidean
division requires O (n^2) applications of the Frobenius endomorphism
and O (n^2) operations in K as well.
Let C(s) be the cost of running the algorithm on an entry with
degree s. By what precedes, C(s) is less than
C(⌈s/2⌉), plus O (s^2) operations in K and
O (s^2) applications of the Frobenius endomorphism.
We conclude using the Master
Theorem <cit.>.
From Algorithm <ref>, we also derive the
following bounds on the size of the coefficients.
Let f ∈(ϕ) and let f_0, …, f_r-1∈ K[T] be the
coordinates of f in the canonical basis. Then for 0 ≤ i < r we have
_T(f_i) ≤ (_τ(f) - i)/r.
Let (P_i, j)_0 ≤ i,j < r be the matrix of (u) in the canonical
bases. Then for every 0 ≤ i, j ≤ r-1 we have
(P_i, j) ≤ ((u) + j - i)/r.
By definition, P_i,j is the coefficient in front of τ^i
in the decomposition of τ^j u in the canonical basis.
The corollary then follows from Lemma <ref>.
As a consequence of the previous statements, we obtain an alternative
proof of the following
classical result <cit.>.
Let π = π_0(T) + ⋯ + π_r(T) X^r be the characteristic polynomial
of the Frobenius endomorphism of ϕ. Then for every 0 ⩽ i
⩽ r we have
(π_i) ⩽ (r - i) d / r.
Using Theorem <ref>, we know that π is the
characteristic polynomial of the matrix P of (τ^d) in the canonical
bases. Therefore, for every 0 ≤ i ≤ r, the coefficient π_i is, up to a sign, the trace of
⋀^{r-i} (τ^d), which is a sum of principal minors of P of
size r - i. We conclude using Corollary <ref>.
Instead of independently computing all columns using
Algorithm <ref>, a more intelligent approach can be
employed to calculate the matrix of (u):
in order to speed up the computation of a column, we may reuse those that are already
computed.
For this, we write
ϕ_T = g_0 + g_1 τ + ⋯ + g_r τ^r
with g_i ∈ K, g_r ≠ 0. For a polynomial h ∈ K[T],
we let h^τ denote the polynomial
deduced from h by raising all its coefficients to the q-th power.
An easy computation then shows that if
(f_0, …, f_r-1) are the coordinates of some f ∈(ϕ)
in the canonical basis, then the coordinates (f'_0, …, f'_r-1) of τ f are
defined by the following matrix equality:
[ f'_0; f'_1; ⋮; f'_r-1 ]
=
[ 0 0 … 0 (T - g_0)/g_r; 1 0 … 0 - g_1/g_r; ⋱ ; 0 0 … 1 - g_r-1/g_r ]·[ f_0^τ; f_1^τ; ⋮; f_r-1^τ ].
This readily yields Algorithm <ref>.
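The relation above can be applied directly to coordinate vectors; the following illustrative Python helper (same toy setting and ad hoc names as in the previous sketches) is a naive stand-in for the procedure just described, a coordinate vector being represented as a list of r polynomials in T.

def poly_frob(h):
    # h |-> h^tau: raise every coefficient of h to the q-th power
    return [f9_frob(c) for c in h]

def poly_add(h1, h2):
    n = max(len(h1), len(h2))
    h1 = h1 + [(0, 0)] * (n - len(h1))
    h2 = h2 + [(0, 0)] * (n - len(h2))
    return [f9_add(x, y) for x, y in zip(h1, h2)]

def poly_scal(c, h):
    return [f9_mul(c, x) for x in h]

def tau_action(coords, phi_T):
    # coords = (f_0, ..., f_{r-1}); returns the coordinates (f'_0, ..., f'_{r-1}) of tau*f
    r = len(phi_T) - 1
    ginv = f9_inv(phi_T[r])
    last = poly_frob(coords[r - 1])
    # f'_0 = (T - g_0)/g_r * f_{r-1}^tau   (multiplication by T is a coefficient shift)
    new = [poly_scal(ginv, poly_add([(0, 0)] + last, poly_scal(f9_mul((2, 0), phi_T[0]), last)))]
    for i in range(1, r):
        # f'_i = f_{i-1}^tau - (g_i/g_r) * f_{r-1}^tau
        new.append(poly_add(poly_frob(coords[i - 1]),
                            poly_scal(f9_mul((2, 0), f9_mul(phi_T[i], ginv)), last)))
    return new

# with phi_T2 = i + tau^2 as above: tau * 1 = tau, whose coordinates are (0, 1)
assert tau_action([[(1, 0)], []], phi_T2) == [[(0, 0)], [(1, 0)]]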
For an input f ∈(ϕ) of τ-degree n,
Algorithm <ref> requires at most O(n)
applications of the Frobenius endomorphism and O(n) operations in K.
By Lemma <ref>, the polynomial f_i ∈ K[T] has degree at
most (n-i)/r. As a consequence, computing f_i^τ requires at
most ⌊(n-i)/r⌋ + 1 applications of the
Frobenius endomorphism, and the pre-computation on line 1 costs
∑_i=0^r-1(⌊(n-i)/r⌋ + 1) = n + 1
such applications. The
remaining steps can be done in O (n) arithmetic operations in K.
Computing the matrix of (u) is now just a matter of computing the
coordinates of u and iteratively applying r times the τ-action.
The precise procedure is presented in Algorithm <ref>.
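Gluing the two previous sketches together gives a naive illustrative stand-in for this procedure in the toy setting: the j-th column of the matrix of (u) is the coordinate vector of τ^j u, obtained from that of u by j applications of the τ-action.

def motive_matrix(u, phi_T):
    # columns j = 0, ..., r-1 contain the coordinates of tau^j * u in the canonical basis
    r = len(phi_T) - 1
    col = coordinates(u, phi_T)
    cols = [col]
    for _ in range(r - 1):
        col = tau_action(col, phi_T)
        cols.append(col)
    return cols   # cols[j][i] is the entry P_{i,j}, a polynomial in T

def strip(h):
    # drop trailing zero coefficients, for readability of the output
    while h and h[-1] == (0, 0):
        h = h[:-1]
    return h

# for u = phi_T itself, M(u) is multiplication by T, i.e. T times the identity matrix
mat = motive_matrix(phi_T2, phi_T2)
assert [[strip(e) for e in col] for col in mat] == [[[(0, 0), (1, 0)], []], [[], [(0, 0), (1, 0)]]]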
For an input u of τ-degree n,
Algorithm <ref> requires
at most O(n^2 + r^2) applications of the Frobenius endomorphism,
and O(n^2 + r^2) operations in K.
Computing U_0 requires O(n^2) applications of the Frobenius endomorphism
and O(n^2) operations in K (Lemma <ref>).
Then, knowing U_i for some 1 ≤ i ≤ r-1, the computation of U_i+1 requires at most
O(n+i) applications of the Frobenius and O(n+i) operations in K
by Lemma <ref>.
Summing all the contributions, we end up with the announced complexity.
We now have all the ingredients to write down
Algorithm <ref>, which is the
main algorithm of this section.
For a morphism of Drinfeld modules u: ϕ→ϕ of
τ-degree n,
Algorithm <ref> computes the
characteristic polynomial of u for a cost of
O(n^2 + r^2) applications of the Frobenius and
(n^2 + (n+r)r^{Ω-1}) operations in K.
The cost of computing the matrix of (u) is O(n^2 + r^2)
applications of the Frobenius endomorphism, and O(n^2 + r^2)
operations in K. The matrix has size r and, thanks to Corollary
<ref>, we know that all its entries have
degree less than 1 + n/r.
Its characteristic polynomial can then be computed within
((n+r) r^{Ω-1}) operations in K
(see <ref>).
The theorem follows.
§.§.§ The case of finite fields
If K is a finite field, we can speed up the
computation by using specific algorithmic primitives to compute
characteristic polynomial of polynomial matrices (see <ref>) on
the one hand, and to compute Ore Euclidean divisions (see <ref>) on the other hand.
If K is a finite extension of of degree d and
u is an endomorphism of τ-degree n of a Drinfeld
module ϕ of rank r, then
Algorithm <ref> computes the
characteristic polynomial of u for a cost of
(dlog^2 q) +
(((n, d) + ndr + nr^ω + dr^ω)·log q)
bit operations.
The complexity analysis is similar to that of
Theorem <ref>, except that the Ore Euclidean division
of Algorithm <ref> now costs
(d log^2 q) + ((n, d)log q)
bit operations. The computation of the matrix of (u) therefore
requires
(dlog^2 q) +
(((n, d) + dr(n + r))log q)
bit operations.
Finally, it remains to compute the characteristic polynomial of the matrix.
For this, we first notice that all its coefficients have degree at most n
(Corollary <ref>).
Therefore, using Lemma <ref>, the computation of the
characteristic polynomial costs
((n+d)r^ω) operations in .
The theorem follows.
Comparing with the algorithms of <cit.>, we find that
Algorithm <ref> exhibits a better
theoretical complexity, except when the degree of γ(T) is close
to d and the rank r is very small compared to d and n; in this
case, the algorithm of <cit.> has quadratic
complexity in max(n,d), beating the term (n, d).
When u is the Frobenius endomorphism,
Algorithm <ref> leads to the algorithm
discussed in the introduction, whose complexity is given by
Corollary <ref>.
If K is a finite field of degree d over ,
Algorithm <ref> computes the
characteristic polynomial of the Frobenius endomorphism of ϕ
for a cost of
(dlog^2 q) +
(((d, d) + d^2r + dr^ω) ·log q)
bit operations.
This is a direct application of Theorem <ref> with
n = d.
§.§.§ The case of the Frobenius endomorphism: another approach
Below, we present yet another method to compute the characteristic polynomial
of the Frobenius endomorphism F_ϕ.
This leads to the algorithm
, as mentioned in the introduction, which performs better for some
ranges of parameters (at least theoretically).
It is based on the two following remarks:
* As the Ore polynomial τ^d is central in , its action on the
motive can unambiguously be defined as a left or right multiplication.
* The left multiplication by τ on (ϕ) is a semi-linear
application, whose matrix is the companion matrix appearing
in Equation (<ref>), which is easy to compute.
More precisely, for a nonnegative integer s, let μ_s be the K[T]-semi-linear
endomorphism of (ϕ) defined by f ↦τ^s f. We denote its
matrix by M_s. In other words, M_s is the matrix whose j-th column contains
the coefficients of τ^j+s∈(ϕ) in the canonical basis.
The matrix M_1 is the companion matrix of Equation (<ref>)
and, by definition, the matrix of (u) is M_d.
For a polynomial P ∈[T], we define P^τ^s as the
polynomial obtained by raising each coefficient of P to power q^s.
Similarly, given a matrix M with entries in [T], we write
M^τ^s for the matrix obtained from M by applying P ↦
P^τ^s to each of its entry. A calculation shows that
M_s = M_1 · M_1^τ⋯ M_1^τ^s-1.
This equation leads to the following square and multiply-like formulas:
M_2s = M_s · M_s^τ^s,
M_2s + 1 = M_1 · M_s^τ· M_s^τ^s+1.
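Written out, the recursion takes only a few lines; in the illustrative Python sketch below, mat_mul stands for the multiplication of matrices over K[T] and mat_frob(M, s) for the entrywise application of P ↦ P^τ^s, and both are assumed to be provided.

def frobenius_matrix(M1, d, mat_mul, mat_frob):
    # returns M_d from M_1 using the square-and-multiply formulas above
    if d == 1:
        return M1
    s, odd = divmod(d, 2)
    Ms = frobenius_matrix(M1, s, mat_mul, mat_frob)
    M2s = mat_mul(Ms, mat_frob(Ms, s))         # M_{2s} = M_s * M_s^{tau^s}
    if odd:
        M2s = mat_mul(M1, mat_frob(M2s, 1))    # M_{2s+1} = M_1 * M_{2s}^{tau} = M_1 * M_s^{tau} * M_s^{tau^{s+1}}
    return M2s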
Let α be a generator of K over . Elements of K are
classically represented as polynomials in α with coefficients in
and degree d-1. Applying τ^s to an element
∑_i=0^d-1 a_i α^i ∈ K amounts to applying the
substitution α↦τ^s(α). Thus, this can be
efficiently computed using Kedlaya-Umans' algorithm for modular
composition <cit.> for a cost of
(d log q) bit operations. As mentioned in
<ref>, an initial precomputation of α^q must be
performed once and for all, for a cost of (d log^2 q) bit operations.
[Variant ]
If K is a finite extension of of degree d, the characteristic
polynomial of the Frobenius endomorphism of a Drinfeld module ϕ
of rank r can be computed for a cost of
(dlog^2 q) +
((d^2 r^{ω-1} + d r^ω)·log q)
bit operations.
Let C(s) be the cost, counted in bit operations, of computing the pair
𝒫_s = (M_s, τ^s(α)).
To compute 𝒫_2s and 𝒫_2s+1, one uses the
recurrence relations (<ref>) and (<ref>).
As M_s has r^2 polynomial coefficients of degree at most
s/r (Lemma <ref>), computing τ^s(M) requires O(sr) modular compositions of degree
d. As previously mentioned, we use Kedlaya-Umans' algorithm <cit.> for this task, leading to a total cost of
(srd·log q) bit operations.
Similarly τ^2s(α) can be computed by composing τ^s(α)
with itself; using again Kedlaya-Umans' algorithm, this can be done within
(d·log q) bit operations. Moreover, the matrix product
M_s ·τ^s(M) requires (d s r^{ω-1}) extra operations in .
Given that one operation in corresponds to (log q) ⊂(log q) bit operations,
we conclude that
C(2s) ⩽ C(s) + (d s r^{ω-1}log q).
A similar analysis provides a similar bound for C(2s+1).
Solving the recurrence, we obtain
C(s) ∈(d s r^{ω-1}log q).
Therefore, the computation of (u) can be done
within (d^2 r^{ω-1}log q) bit operations.
Finally, the characteristic polynomial of the matrix of (u) is computed as
previously, using Lemma <ref>, for a cost of (d r^ω) operations in ,
which is no more than (d r^ω·log q) bit
operations. Adding both contributions and taking into account the
precomputation of α^q, we obtain the corollary.
§.§ Algorithms: the case of a general curve
We now drop the assumption that A = [T].
In full generality, it is not true that the motive (ϕ)
is free over A_K, and the matrix of (u) is not defined. One can
nevertheless easily work around this difficulty, by extending scalars
to the fraction field of A_K, denoted by (A_K). Indeed,
(A_K) ⊗_A_K(ϕ) is obviously free over (A_K) given that the latter is a field. It is also clear that
the determinants of (u) and (A_K) ⊗_A_K(u) are
equal.
Our first need is to design an algorithm for computing a basis
of (A_K) ⊗_A_K(ϕ). For this, we will rely on the
case of [T], previously treated.
We consider an element T ∈ A, T ∉. Since the underlying
curve C is absolutely irreducible, T must be transcendental
over . This gives an embedding [T] ↪ A, which
extends to an inclusion of fields K(T) ↪(A_K). The
resulting extension is finite of degree t = (T).
Let
(b_1, …, b_t) be a basis of (A_K) over K(T).
In what follows, T and (b_1,
…, b_t) are assumed to be known. Finding them depends on the way
C is given, but we believe that our hypothesis is reasonable. For instance, if C is presented as a plane smooth
curve, i.e. if A is given as
A = [X,Y] / P(X,Y)
with
P ∈[X,Y]
one may choose T = X, t = _Y P and b_i = Y^{i-1} for 1
≤ i ≤ t.
Let g be the genus of C.
The Riemann-Roch theorem indicates that the Riemann-Roch space ℒ((g+1)·[∞]) has dimension at least 2. Hence it
must contain a transcendental function, which shows that there always
exists T for which t ≤ g+1.
In practice, T can be computed through various different algorithms
(see <cit.> and the
references therein).
Now given a Drinfeld module ϕ : A → K{τ} over A, we
restrict it to [T] via the
embedding [T] → A,
obtaining a second Drinfeld module
ϕ' : [T] → K{τ} (see <ref>). Then
(ϕ') = (ϕ), with the same structure of K[T]-modules.
Moreover, if ϕ has rank r, we have
ϕ'_T = ϕ_T = r ·(T) = rt
showing that ϕ' has rank rt. The family (1, τ,
…, τ^rt-1) is a basis of (ϕ) over K[T], and we can
use Algorithm <ref> to compute the coordinates
of any element of (ϕ) with respect to this basis.
Let : (ϕ) → K[T]^rt be the map taking an element
of (ϕ) to the column vector representing its coordinate in
the above basis. Both and ^-1 are efficiently
computable.
Let e_1 be an arbitrary nonzero element of (ϕ), e.g.
e_1 = 1. A K(T)-basis of the (A_K)-line generated by e_1
is explicitly given by the family e_1 ϕ_b_1, …, e_1
ϕ_b_t. For 1 ≤ j ≤ t, we set C_1,j = (e_1
ϕ_b_j) and we form the following matrix, with rt rows and t
columns:
M_1 = (
C_1,1 ⋯ C_1,t).
We now consider a column vector E_2 outside the image of M_1
and define e_2 = ^-1(E_2); e_2 is not
(A_K)-collinear to e_1, and we have constructed a free
family of cardinality 2. We then continue the same process, by
setting C_2,j = (e_2 ϕ_b_j) and considering the rt ×
2t matrix
M_2 = (
C_1,1 ⋯ C_1,t
C_2,1 ⋯ C_2,t).
We pick a column vector E_3 outside the image of M_2 and
define e_3 = ^-1(E_3), as well as M_3. We repeat this construction until
we reach e_r. The vectors e_1, …, e_r being linearly
independent over (A_K), they form a (A_K)-basis
of (A_K) ⊗_A_K(ϕ). The matrix M_r is nothing but the
change-of-basis matrix from the canonical K[T]-basis of (ϕ) to the
newly computed basis
= (e_1 ϕ_b_1, …, e_1 ϕ_b_t, …,
e_r ϕ_b_1, …, e_r ϕ_b_t). If f ∈(ϕ), the product
M_r^-1·^-1(f) gives the coordinates of f in .
From this, we eventually read the coordinates of f in the
(A_K)-basis (e_1, …, e_r).
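Each step of this construction requires a column vector lying outside the image of the current matrix M_i. A minimal sketch of this search, written in Python on top of sympy's exact linear algebra (the helper name vector_outside_image is ours), is given below; in the algorithm above the entries are polynomials in K[T], which sympy handles symbolically as well.

```python
from sympy import Matrix, eye, symbols

def vector_outside_image(M):
    """Return a standard basis vector of the target space that does not lie in
    the column span of M; one exists as soon as rank(M) < number of rows."""
    rows, r = M.shape[0], M.rank()
    if r == rows:
        raise ValueError("M already has full row rank")
    for k in range(rows):
        e = eye(rows)[:, k]
        if M.row_join(e).rank() > r:   # appending e increases the rank, so e is outside Im(M)
            return e

T = symbols('T')
M1 = Matrix([[1, T], [T, T**2], [0, 1]])   # toy 3 x 2 matrix with entries in Q[T]
print(vector_outside_image(M1).T)          # a vector not in the column span of M1
```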
To summarize, we have constructed a (A_K)-basis of (A_K)
⊗_A_K(ϕ)
and designed an algorithm to compute coordinates in this
basis. Using these inputs as primitives, it is now straightforward
to extend the results of <ref> to the case of a general
curve.
§ NORMS OF ISOGENIES
In Section <ref>, we have only covered the case of
endomorphisms between Drinfeld modules. We now consider
general morphisms and isogenies. Let ϕ, ψ be two rank r Drinfeld
A-modules, and let u : ϕ→ψ be an isogeny.
In this setting, the characteristic polynomial is no longer defined, but
the norm of u continues to make sense (see
<ref>); we recall that it is an ideal of A,
denoted by (u).
The purpose of this section is twofold: first, to establish explicit formulas
that recover (u) at the motive level, and secondly, to offer efficient
algorithms for the computation of (u) using those formulas.
§.§ Reading norms on the motive
In our general context, the determinant of u can no longer be defined as
previously. In
§<ref>, we set up important definitions and
statements about determinants in projective modules. Our main results are
stated in <ref>.
§.§.§ Determinants on projective modules
Let be a Dedekind domain. Let M, M' be two finitely generated
projective -modules of rank n. Let f: M → M' be
an -linear mapping. The morphism f gives rise to the -linear map f :
M → M'. However, when f has different domain and codomain,
i.e. M ≠ M', it no
longer makes sense to interpret f as the multiplication by some
scalar.
Instead, we define the “determinant” of f, denoted by f,
as the ideal quotient
( M' : ( f)), that is
f
= ( M' : ( f))
= {a ∈: a M' ⊂( f)}.
Equivalently f is the annihilator ideal of the cokernel
of f.
Since is a Dedekind domain, f can be decomposed as a
product
f = ∏_^v_( f),
where the product runs over all maximal ideals of and the exponent
v_( f) is a nonnegative integer referred to as the -adic
valuation of f.
For the purpose of this article, it is fundamental to notice that
v_( f) can be found out by computing the classical determinant of an
actual matrix. Indeed, letting as before denote the
completion[When studying projective modules, it is more common to
consider the
localization _() instead of the completion _. Although the
first setting is simpler, the second better suits our needs.] of
at , we define M_ = ⊗_ M and
M'_ = ⊗_ M'. The map f induces a -linear morphism f_ : M_→
M'_. We deduce from the flatness of over that
f_ = ⊗_ f
= (·)^v_( f),
where f_ is defined, similarly to f, as the
annihilator ideal of the cokernel of f_.
On the other hand, we know that is a principal domain. Hence both
M_ and M'_ are free of rank n over . We choose bases
ℬ_() and ℬ'_() of M_ and M'_
respectively, and let F_ denote the matrix of f_ in these bases.
It follows from the definition
that f_ = (F_). Comparing with
Equation (<ref>), we finally conclude that
v_( f) = v_( F_).
We notice in particular that, although the determinant itself depends on the
choices of ℬ_() and ℬ'_(), its -adic
valuation does not. Indeed, changing ℬ_() (resp. ℬ'_()) boils down to multiplying F_ by an invertible matrix on the
left (resp. on the right), which only multiplies the determinant by a unit, and as
such, does not affect its -adic valuation.
In a similar fashion, one can relate f to the Euler-Poincaré
characteristic of the cokernel of f, which is essential to establish
our main theorem.
We have
f = χ_( f).
As we have seen, the Euler-Poincaré characteristic commutes with
localization. Therefore, it is enough to prove that f_ =
χ_( f_) for each maximal ideal of .
Let then be a maximal ideal of . It follows from the structure
theorem of finitely generated modules over principal domains that there exist
bases ℬ_ and ℬ'_ in which the matrix F_ of
f_ is diagonal. If δ_1, …, δ_r denote its diagonal
coefficients, we have
f_≃( / δ_1 ) ×⋯×( / δ_r ).
Hence
χ_( f_)
= δ_1 ⋯δ_r ·
= ( F_) · = f_
which is what we wanted to prove.
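For intuition, the equality det f = χ(coker f) can be tested over the Dedekind domain ℤ, where the elementary divisors of an integer matrix are exposed by its Smith normal form. The sketch below assumes sympy's smith_normal_form helper; up to sign, the product of the diagonal entries generates the same ideal as the determinant.

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# f : Z^3 -> Z^3 given by an injective matrix, so coker(f) is a finite abelian group
F = Matrix([[2, 1, 0],
            [0, 3, 1],
            [0, 0, 4]])

snf = smith_normal_form(F, domain=ZZ)        # diagonal matrix of elementary divisors
divisors = [snf[i, i] for i in range(3)]     # coker(f) = Z/d1 x Z/d2 x Z/d3
print(divisors)

# chi_Z(coker f) is generated by d1*d2*d3 and det(f) by det(F); they agree up to sign
assert abs(F.det()) == abs(divisors[0] * divisors[1] * divisors[2])
```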
§.§.§ Main results
We may now state and prove the main theoretical results of this subsection.
Let ϕ and ψ be two Drinfeld modules, and let u: ϕ→ψ be
an isogeny. We have
(u) = (u).
Writing u as the product of a purely inseparable isogeny
with a separable isogeny, and noticing that (1) is
multiplicative and (2) is functorial, we are reduced to prove the theorem
when u = τ^() on the one hand and when u is separable
on the other hand.
Purely inseparable case.
We assume that u = τ^().
We follow Gekeler's idea for proving
<cit.>.
Let ⊂ be a maximal ideal away from the characteristic.
Note that the map _(u): _(ϕ) →_(ψ) is an
isomorphism because τ is coprime with the right gcd of
ϕ_q for q varying in .
By Theorem <ref>, we conclude that
_(u) : _(ψ) →_(ϕ) is an isomorphism as
well, showing that is coprime with χ_A_K((u)).
Consequently, (u) is a power of .
On the other hand, observe that, by definition,
(u) = _K((u)) = (χ_A_K((u))).
Proposition <ref> then implies that
((u)) = (u) = (). Putting all together,
we conclude that (u) = = (u).
Separable case.
Given that u is nonzero, the kernel of the A-linear map (u) is a torsion
A-module. Let a ∈ A such that a ·(u) = 0.
For all elements z ∈, we then have the following
implication: if u(z) = 0, then ϕ_a(z) = 0. Since u is
separable, this implies that u right-divides ϕ_a, from
which we deduce that a annihilates (u) as well.
Applying successively the right exact functor
- ⊗_A A/aA and the left exact functor _K(-, )
to the exact sequence of A_K-modules
0 → (ψ) → (ϕ) → (u) → 0,
we get the following exact sequence of A_-modules
0
→ ((u))^∗⊗_K → _a(ϕ)^∗⊗_K → _a(ψ)^∗⊗_K .
This shows that
((u))^∗⊗_K ≃(_a(u)^∗) ⊗_K ≃(_a(u)^∗⊗_K ).
From Theorem <ref>, we then derive the following isomorphisms of
A_-modules:
((u))^∗⊗_K ≃(_a(u) ⊗_)
= ((u) ⊗_)
≃(u) ⊗_.
Consequently, u being separable, we find that
(u) = χ_A((u)) = χ_A_K(((u))^∗).
Using finally Lemma <ref>, we end up with
(u) = χ_A_K((u)) = (u), proving the theorem.
An interesting consequence of Theorem <ref> is
a compatibility result between norms of isogenies and restrictions
of Drinfeld modules (see <ref>), which will be
particularly useful to us when Drinfeld A-modules are restricted
to A' = [T].
Let γ' : A' → K be a second base for Drinfeld modules
satisfying the assumptions of <ref>,
coming together with an injective homomorphism of rings f : A' → A
such that γ = γ' ∘ f.
Let ϕ, ψ : A → K{τ} be two Drinfeld A-modules and
let u : ϕ→ψ be a morphism. Then
(f^* u) = N_A/A'((u))
where N_A/A' : A → A' is the norm map from A to A'
via f.
Let be a prime ideal of A'_K, and let A'_K, be the
completion of A'_K at .
Write A_K, = A'_K,⊗_A'_K A_K,
(ϕ)_ = A'_K,⊗_A'_K(ϕ), and
(ψ)_ = A'_K,⊗_A'_K(ψ).
Since A_K, is a product
of local rings, the module (ϕ)_ is free over A_K,.
We pick a basis
_ϕ = (e_ϕ,i)_1 ≤ i ≤ r of it,
together with a basis = (a_m)_1 ≤ m ≤ n of A_K,
over A'_K,.
Note that the family '_ϕ = (a_m·e_ϕ,i)_1 ≤ i
≤ r, 1 ≤ m ≤ n is a A'_K,-basis of (ϕ)_
= (f^* ϕ)_.
We define similarly _ψ and '_ψ.
Let C = (c_ij)_1 ≤ i,j ≤ r be the matrix of (u)
with respect to the bases _ψ and _ϕ and, for a
∈ A'_K,, let M(a) ∈ (A'_K,)^n × n be the matrix of
the multiplication by a over A_K,. The matrix of f^* u in the
bases '_ψ and '_ϕ is the block matrix
D =
(
M(c_1,1) ⋯ M(c_1,r)
⋮ ⋮
M(c_r,1) ⋯ M(c_r,r)
)
The main result of <cit.> implies that det D =
N_A_K,/A'_K,(det C).
The proposition then follows from Theorem <ref>.
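The mechanism behind this proof, namely that the determinant of the block matrix D equals the norm of the determinant of C, can be checked on a toy example in which the base extension is ℚ ⊂ ℚ(i) instead of A'_K,𝔭 ⊂ A_K,𝔭: every entry of C is replaced by its 2×2 multiplication matrix in the basis (1, i). The sympy sketch below uses our own helper names and only illustrates the identity.

```python
from sympy import Matrix, I, re, im, simplify

def mult_matrix(z):
    """Matrix of multiplication by z = a + b*i on Q(i), in the basis (1, i)."""
    a, b = re(z), im(z)
    return Matrix([[a, -b], [b, a]])

C = Matrix([[1 + 2*I, 3], [I, 2 - I]])       # a 2 x 2 matrix with entries in Q(i)
# block matrix over Q obtained by replacing each entry of C by its multiplication matrix
D = Matrix(4, 4, lambda r, c: mult_matrix(C[r // 2, c // 2])[r % 2, c % 2])

norm_det_C = simplify(C.det() * C.det().conjugate())    # N_{Q(i)/Q}(det C)
assert simplify(D.det() - norm_det_C) == 0              # det D = N(det C)
```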
§.§ Algorithms: the case of ℙ^1
Let A = [T] as in <ref>.
Theorem <ref> readily translates to an algorithm
for computing the norm of an isogeny between Drinfeld modules;
this is Algorithm <ref>.
Let ϕ and ψ be two Drinfeld [T]-modules of rank r
and let u: ϕ→ψ be an isogeny of τ-degree n.
Algorithm <ref> computes the norm of u for
a cost of O (n^2 + r^2) applications of the Frobenius endomorphism of K
and (n^2 + nr^ω-1 + r^ω) operations in K.
Per Lemma <ref>, the cost of computing the
matrix of (u) is O(n^2+r^2) applications of the Frobenius
endomorphism, and O(n^2+r^2) operations in K. Besides,
this matrix has size r and its entries have degrees all less than
1 + n/r (Lemma <ref>, which is also
valid for isogenies).
Therefore, using the algorithmic primitives of
<ref>, computing its determinant requires ((n+r) r^ω-1)
operations in K.
When K is a finite field, one can speed up
Algorithm <ref> using the optimized primitives
of <ref> for manipulating Ore polynomials, as for
the endomorphism case.
Precisely, we have the following.
If K is a finite field of degree d over ,
Algorithm <ref> computes the norm of the isogeny u for
a cost of
(dlog^2 q) +
(((n, d) + ndr + nmin(d,r)r^ω-1 + dr^ω)
·log q)
bit operations.
Per the first part of the proof of Theorem <ref>,
the computation of (u) requires
(d log^2 q) + (((n, d) + dr(n+r))log q)
bit operations.
Then, for the computation of the determinant, we distinguish
between two cases. If d ≤ r, we keep on using the algorithms
of <cit.>, for a
cost of (nr^ω-1 + r^ω) operations in K, that is
(ndr^ω-1 + dr^ω) operations in . On the contrary,
when d ≥ r, we use Lemma <ref>, performing then
(nr^ω + dr^ω) operations in . Putting all together,
and remembering that an operation in corresponds to (log q)
bit operations, we get the theorem.
When u is an endomorphism, the norm can be computed as the constant
coefficient of the characteristic polynomial of u, up to a sign. We notice
that the algorithms of the present subsection in some cases run faster than
those of <ref>. This is because we compute the
determinant of the matrix of (u) instead of its whole characteristic
polynomial. However, we stress that the asymptotic costs of computing the
characteristic polynomial and the norm of an endomorphism may be equal. This
owes to the fact that in some cases, computing the characteristic polynomial
of a matrix, or computing its determinant, both reduces to matrix
multiplication.
In the special case where u = F_ϕ is the Frobenius endomorphism, the norm is given
by a simple closed-formula (see <cit.> and
<cit.>), namely
(F_ϕ) = (-1)^rd - r - d N_K/𝔽_q(Δ)^-1·𝔭^d/deg(𝔭),
where Δ is the leading coefficient of ϕ_T and 𝔭 is the function field characteristic of K. Computing
the Frobenius norm using Equation (<ref>) costs
(d log^2 q) + (d log q) bit
operations <cit.>. Noticing that the
Frobenius norm is a degree d polynomial in [T], this complexity
is essentially optimal with respect to d, and asymptotically better
than other algorithms mentioned in this paper (see also
Appendix <ref>).
§.§ Algorithms: the case of a general curve
When A is arbitrary, determining the norm of an isogeny u: ϕ→ψ
becomes more complex due to the nonfreeness of the motives (ϕ) and
(ψ) in general. This necessitates working with arbitrary torsion-free
modules over Dedekind rings. While this approach appears viable, we will
follow an alternative strategy that simplifies the general scenario by
reducing the computation to the previously addressed case of [T].
From now on, we assume for simplicity that A is presented as
A = [X,Y] / P(X,Y)
and that (x) > (y), where x and y denote the images
in A of X and Y respectively.
Let ϕ, ψ : A → K{τ} be two Drinfeld modules of rank r,
and let u : ϕ→ψ be an isogeny between them. We consider a
new variable Λ and form the polynomial rings K[Λ] and
A_K[Λ]. We set
(ϕ)[Λ] = A_K[Λ] ⊗_A_K(ϕ)
and endow it with the structure of K[T,Λ]-module
inherited from its
structure of A_K[Λ]-module through the ring homomorphism
f : K[T, Λ] → A_K[Λ],
T ↦ x + Λ·y, Λ↦Λ.
Similarly, we define (ψ)[Λ] and endow it with a structure
of K[T,Λ]-module.
The assumption (x) > (y) ensures that ϕ_x + Λ·ϕ_y is an Ore polynomial of degree r ·(x)
with leading coefficient lying in K. Writing s = r ·(x),
we deduce that the family (1, τ, …, τ^s-1) is a
K[T,Λ]-basis of both (ϕ)[Λ] and (ψ)
[Λ].
On the other hand, we observe that, after extending scalars to
A_K[Λ], the morphism (u) : (ψ) →(ϕ) induces a
K[T,Λ]-linear map (u)[Λ] :
(ψ)[Λ] →(ϕ)[Λ].
Its determinant in the aforementioned distinguished bases is a
bivariate polynomial, that we call δ(T,Λ). Evaluating it at
T = x + Λ y, we obtain a univariate polynomial in Λ
with coefficients in A_K.
With the above notation and hypothesis, the leading coefficient of
δ(T,Λ) with respect to T is a nonzero constant c ∈
K^×.
Moreover, if we write
δ(x+Λ y, Λ) =
δ_0 + δ_1·Λ + ⋯ + δ_n·Λ^n
(n ∈, δ_i ∈ A_K),
then c^-1δ_0, …, c^-1δ_n all lie in A and generate
(u).
For any fixed element λ∈, notice that the degree of
the univariate polynomial δ(T, λ) is equal to the τ-degree
of u. Since the
latter remains constant when λ varies in , so does the
former. The first assertion of the theorem follows.
Set I = ⊗_Fq(u), which is an ideal of A_.
Recall that the maximal ideals of A_ are all of the form
_(x_0, y_0) = (x-x_0) A_ + (y-y_0) A_
with x_0, y_0 ∈. We write the decomposition of I into a
product of prime ideals:
I = _(x_1, y_1)·_(x_2, y_2)⋯_(x_ℓ,y_ℓ)
where ℓ is a nonnegative integer and x_i, y_i ∈ for
all i between 1 and ℓ.
We fix an element λ∈ and consider the ring
homomorphism f_λ : [T] → A_ defined by T ↦ x +
λ y. The map f_λ is the specialization of f at λ, and a finite
morphism whose degree does not depend on λ.
Let N_λ : A_→[T] denote the norm map with respect to
f_λ. It follows from the decomposition (<ref>)
that N_λ(I)
is the ideal of [T] generated by the polynomial
P_λ(T) =
(T - x_1 - λ y_1) ⋯ (T - x_ℓ - λ y_ℓ).
On the other hand, repeating the proof of
Corollary <ref>, we find that N_λ(I)
is also the ideal
generated by δ(T, λ). Therefore δ(T, λ) =
c · P_λ(T). Since this equality holds for any λ∈, it is safe to replace λ by the formal variable
Λ. Specializing at T = x + Λ y, we obtain
δ(x + Λ y, Λ) =
c ·∏_i=1^ℓ((x - x_i) + Λ·(y - y_i))
Expanding the latter product and comparing with the definition
of I, we find that I is the ideal of A_ generated by
δ_0, …, δ_n.
Finally, the fact that I is defined over A implies that the
pairs (x_i, y_i) are conjugated under the Galois action, which
eventually shows that the c^-1·δ_i's are in A.
The theorem follows.
Theorem <ref> readily translates to an algorithm for
computing the norm (u), namely:
* we compute the matrix of (u)[Λ] using
Algorithm <ref>
(treating Λ as a formal parameter),
* we compute the determinant δ(T,Λ) of this matrix
and let c ∈ K^× be its leading coefficient with respect to
T,
* we write
c^-1·δ(x+Λ y, Λ) =
δ'_0 + δ'_1·Λ + ⋯ + δ'_n·Λ^n
(δ'_i ∈ A_K).
* we return the ideal of A generated by δ'_0, …, δ'_n.
It follows from the proof of Theorem <ref> that the degree
n of δ(x+Λ y, Λ) is equal to ℓ, on the one
hand, and to the τ-degree of the isogeny u, on the other hand.
Unfortunately, this quantity may be large, especially when we compare
it with the minimal number of generators of (u), which is at most
2 because A is a Dedekind domain.
To overcome this issue, an option could be to compute the
δ'_i's one by one by using relaxed arithmetics <cit.>: each
time a new δ'_i is computed, we form the ideal I_i generated
by δ'_0, …, δ'_i and stop the process when I_i
has degree n; we then have the guarantee that (u) = I_i and that
we have computed the ideal we were looking for.
When x_1, …, x_ℓ are pairwise distinct (which is the
most favorable case), we already have (u) = I_1, so that the
above procedure stops very rapidly.
Another option consists in picking random elements λ∈ K and
computing the evaluations δ(T, λ) and c^-1·δ(x+λ y, λ). Doing so, we obtain elements in (u) and
we can hope, as above, that only a few number of them will generate the
ideal. Again, this can be checked by looking at the degree of the
candidate ideals.
§ THE CENTRAL SIMPLE ALGEBRA METHOD
Throughout this section, we assume that K is a finite extension
of 𝔽_q and we let d denote the degree of K/𝔽_q.
Our aim is to design an alternative algorithm (namely the algorithm
referred to as in the introduction) for computing the
characteristic polynomial of the Frobenius endomorphism F_ϕ
of a rank r Drinfeld A-module ϕ.
We recall that, by definition, F_ϕ is the endomorphism
corresponding to the Ore polynomial τ^d ∈.
Our algorithm is based on Theorem <ref>, which
provides a new formula for the characteristic polynomial of
F_ϕ by means of reduced norms in a certain central simple
algebra.
§.§ The characteristic polynomial of the Frobenius as a reduced norm
Theorem <ref>, the main result of this section and stated in
<ref>, requires a preliminary introduction on general
Ore polynomials and reduced norms. This is the goal of
<ref>.
§.§.§ General Ore polynomials and reduced norms
We first recall some standard facts about Ore
polynomials[For a more detailed survey on this topic,
we refer to <cit.>.].
Given a ring L equipped with a ring endomorphism θ :
L → L, we form the ring L[t;θ] whose elements are formal
expressions of the form
a_0 + a_1 t + ⋯ + a_n t^n
(n ∈, a_0, …, a_n ∈ L)
subject to the usual addition and multiplication driven by the rule
t b = θ(b) t for b ∈ L. The ring L[t;θ] is the
so-called ring of Ore polynomials over L twisted by θ;
it is noncommutative unless θ is the identity morphism.
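A minimal Python illustration of the twisted multiplication rule t·b = θ(b)·t is given below; the helper ore_mul is our own name, coefficients may live in any ring supporting + and *, and θ is passed as a callable. The toy check takes L = ℚ(i), modelled with Python complex numbers, and θ = complex conjugation, an automorphism of order 2.

```python
def ore_mul(P, Q, theta):
    """Product of the Ore polynomials P and Q in L[t; theta], both given as lists
    of coefficients in increasing degree, using a t^i * b t^j = a theta^i(b) t^(i+j)."""
    res = [0] * (len(P) + len(Q) - 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            tb = b
            for _ in range(i):
                tb = theta(tb)            # push b through t^i
            res[i + j] = res[i + j] + a * tb
    return res

conj = lambda z: z.conjugate()            # theta of order 2 on L = Q(i)
# t * i = theta(i) t = -i t, whereas i * t = i t: the ring is noncommutative
assert ore_mul([0, 1], [1j], conj) == [0, -1j]
assert ore_mul([1j], [0, 1], conj) == [0, 1j]
```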
From this point onward, we focus on the case where L is a field, as it holds
significant importance for this section.
The ring L[t;θ] then shares many properties with classical
polynomial rings over a field. Notably, it is equipped with a
notion of degree and with an Euclidean division on the right: given
two Ore polynomial A, B ∈ L[t;θ] with B ≠ 0, there exist
uniquely determined Q, R ∈ L[t;θ] such that A = QB + R and
R < B. As in the classical commutative case, this implies
that L[t;θ] is left Euclidean, i.e. all left ideals of
L[t;θ] are generated by one element. From this property, we
derive the existence of right gcd: given P, Q ∈ L[t;θ],
the right gcd of P and Q, denoted by (P,Q), is the unique
monic polynomial satisfying the relation
L[t;θ]·P + L[t;θ]·Q =
L[t;θ]·(P,Q).
From now on, we assume further that θ has finite order d.
This hypothesis ensures in particular that the center of L[t;θ]
is large; precisely, it is the subring F[t^d] where F denotes the
subfield of L fixed by θ. By standard Galois theory, the
extension L/F has degree d and it is Galois with cyclic Galois
group generated by θ.
In this situation, the field of fractions of L[t;θ] can be
obtained by inverting the elements in the center, i.e. we
have
(L[t;θ]) = F(t^d) ⊗_F[t^d] L[t;θ].
Besides, the latter is a central simple algebra over
F(t^d) <cit.>.
This provides us with a reduced norm map
: (L[t;θ]) → F(t^d)
which is multiplicative and acts as the d-th power on F(t^d).
Let P ∈ L[t;θ], P ≠ 0.
We form the quotient D_P = L[t;θ] / L[t;θ] P, which is
an L-vector space of dimension deg(P) with basis (1, t, …,
t^deg(P) - 1). Since t^d is a central element in L[t;θ],
the multiplication by t^d defines an L-linear endomorphism of
D_P, which we denote by γ_P. Its characteristic polynomial
π(γ_P) is then a monic polynomial of degree (P).
For all P ∈ L[t;θ], P ≠ 0, we have
(P) = N_L/F((P)) ·π(γ_P)(t^d)
where (P) is the leading coefficient of P and N_L/F
is the norm map from L to F, i.e. N_L/F(x) =
x ·θ(x) ⋯θ^d-1(x).
See <cit.>.
Proposition <ref> implies in particular that
(P) is a polynomial whenever P ∈ L[t;θ] and that
π(γ_P) has coefficients in F. Neither fact is
immediate from the definition.
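As a toy illustration of the proposition (outside the Drinfeld module setting), take L = ℚ(i) and θ = complex conjugation, so that F = ℚ and d = 2, and let P = t - c with c ∈ L. Then D_P is one-dimensional with basis (1); modulo the left ideal generated by P one has t ≡ c, hence t^2 ≡ θ(c)·c, so γ_P is the multiplication by c·θ(c) and π(γ_P)(X) = X - c·θ(c). Since the leading coefficient of P is 1, the proposition gives for the reduced norm of P the value t^2 - c·θ(c) = t^2 - N_L/F(c), a polynomial in the central variable t^2, as expected.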
§.§.§ Main results
We come back to our setting: we assume that K is a finite
extension of of degree d and consider a Drinfeld module
ϕ : A → K{τ} of rank r.
We notice that K{τ} can be alternatively depicted as the
ring of Ore polynomials K[t; ] where : K → K is
the Frobenius endomorphism taking x to x^q.
Recall that we have set A_K = K ⊗_ A, and define
θ = ⊗ 𝕀_A, which is a ring endomorphism of
A_K of order d with fixed subring A.
We form the Ore algebra A_K[t; θ]; it contains K[t;θ]
≃ K{τ} as a subring. In particular, the elements ϕ_a
(a ∈ A) naturally sit in A_K[t;θ].
We define the ideal
I(ϕ) = ∑_a ∈ A A_K[t; θ] · (ϕ_a - a).
In other words, I(ϕ) is the left ideal of A_K[t; θ]
generated by the elements (ϕ_a - a) for a running over A.
We assume that A is generated as a -algebra by the elements
a_1, …, a_n. Then I(ϕ) is generated as a left ideal
of A_K[t; θ] by ϕ_a_1-a_1, …, ϕ_a_n-a_n.
Let I' be the left ideal of A_K[t; θ] generated by ϕ_a_1-a_1,
…, ϕ_a_n-a_n. We need to prove that I' = I(ϕ).
The inclusion I' ⊂ I(ϕ) is obvious.
For the reverse inclusion, consider λ∈ and a,b ∈ A
such that ϕ_a-a, ϕ_b-b ∈ I'. The equalities
ϕ_λ a - λ a
= λ· (ϕ_a - a)
ϕ_a+b - (a+b)
= (ϕ_a - a) + (ϕ_b - b)
ϕ_ab - ab
= ϕ_a · (ϕ_b - b) + b · (ϕ_a - a)
(recall that b is central, so it commutes with ϕ_a)
show that the three elements on the left hand side belong to I'
as well. This stability property eventually ensures that I'
contains all elements of the form ϕ_a - a. Hence I(ϕ)
⊂ I' as desired.
We recall from <ref> that the A-motive of ϕ,
denoted by (ϕ), is isomorphic to K{τ} as a K-vector
space. This gives
a K-linear inclusion (ϕ) ↪ A_K[t; θ]
(mapping τ to t). We consider the composite
α_ϕ : (ϕ) ↪ A_K[t; θ]
→ A_K[t; θ] / I(ϕ).
The map α_ϕ is a A_K-linear isomorphism.
We first check linearity.
Let λ∈ K, a ∈ A and f ∈(ϕ). By definition,
we have (λ⊗ a) · f = λ f ϕ_a. Hence
α_ϕ((λ⊗ a) · f)
= λ f ϕ_a ≡λ f a I(ϕ).
Moreover a is a central element in A_K[t; θ].
We conclude that
α_ϕ((λ⊗ a) · f)
= λ a f and linearity follows.
In order to prove that α_ϕ is an isomorphism, we
observe that A_K[t; θ] ≃ K{τ}⊗_ A
and we define the K-linear map
β_ϕ : A_K[t; θ] → K{τ} (which, as a set, coincides with (ϕ))
that takes f ⊗ a to f ϕ_a (for f ∈ K{τ}
and a ∈ A).
We claim that β_ϕ vanishes on I(ϕ). Indeed,
for a, b ∈ A and g ∈ K{τ}, we have
β_ϕ((g ⊗ b)·(ϕ_a ⊗ 1 - 1 ⊗ a))
= β_ϕ(gϕ_a ⊗ b - g ⊗ ab)
= gϕ_a ϕ_b - g ϕ_ab = 0.
Consequently, β_ϕ induces a mapping
β̅_ϕ : A_K[t; θ]/I(ϕ) →(ϕ).
It is now formal to check that β̅_ϕ is a left and
right inverse of α_ϕ, showing that α_ϕ is an
isomorphism.
We write (A_K) for the field of fractions of A_K.
The morphism θ extends to a ring endomorphism of (A_K)
that, in a slight abuse of notation, we continue to denote by
θ. On (A_K), θ has order d and its fixed
subfield is (A).
We consider the Ore polynomial ring (A_K)[t; θ]. By what we
have seen previously, its center is (A)[t^d]
and there is a reduced norm map
(A_K)[t; θ]
(A)[t^d].
We define I_0(ϕ) =
(A_K) ⊗_A_K I(ϕ); it is a left ideal
of (A_K)[t; θ]. Since the latter is a principal ideal domain, I_0(ϕ)
is generated by a unique element g(ϕ), which we assume to be
monic.
Concretely g(ϕ) is the right gcd of the elements (ϕ_a - a)
when a varies in A. After Lemma <ref>, we even have
g(ϕ) =
(ϕ_a_1-a_1, …, ϕ_a_n-a_n)
as soon as a_1, …, a_n generate A as an -algebra.
We keep the previous notation and assumptions.
Let F_ϕ be the Frobenius endomorphism of ϕ and
let π(F_ϕ) be its monic characteristic polynomial. Then
π(F_ϕ)(t^d) = (g(ϕ)).
Write _0(ϕ) = (A_K) ⊗_A_K(ϕ).
On the one hand, it follows from Proposition <ref>
that α_ϕ induces an isomorphism
_0(ϕ) ≃(A_K)[t; θ] / (A_K)[t; θ]·g(ϕ).
With Proposition <ref>, we realize that
(g(ϕ)) is equal to the characteristic polynomial of
the right multiplication by t^d on _0(ϕ), that is
(g(ϕ))
= π((A_K) ⊗_A_K(F_ϕ))
= π((F_ϕ)).
We conclude by invoking Theorem <ref>.
When A = [T], it follows from Lemma <ref> that g(ϕ)
is K(T)-collinear to ϕ_T - T. Therefore, its reduced norm is
the reduced characteristic polynomial of ϕ_T and we recover
Theorem <ref>, stated in the introduction.
§.§ Algorithms: the case of ℙ^1
We move to algorithmical purpose.
By Theorem <ref>, the computation of the characteristic
polynomial of F_ϕ reduces to the computation of a reduced norm.
On the other hand, it is a classical fact that the
reduced norm of a polynomial P ∈ A_K[t;θ] can be computed as
a usual norm. Precisely, we consider the subalgebra A[t] of
A_K[t;θ]; it is commutative and étale over the center A[t^d].
Moreover A_K[t;θ] appears as a free left module of rank d over
A[t]. Thus, there exists a norm map
N_A_K[t;θ]/A[t] which takes a polynomial P to the
determinant of the A[t]-linear endomorphism
μ_P : A_K[t;θ] → A_K[t;θ], Q ↦ Q·P.
With this notation, we have
(P) = N_A_K[t;θ]/A[t](P) ∈ A[t].
We now assume that A = [T] and
fix a Drinfeld module ϕ : [T] → K{τ}. It follows
from Lemma <ref> that g(ϕ) is K(T)-collinear
to ϕ_T - T.
Fix a basis ℬ = (e_1, …, e_d) of K over
and observe that ℬ is an A[t]-basis of
A_K[t;θ] as well.
Let M be the matrix of μ_ϕ_T in ℬ. Its entries all
lie in [t] given that ϕ_T has coefficients in K. Observing
moreover that μ_g(ϕ) = μ_ϕ_T - μ_T = μ_ϕ_T - T,
we conclude that
π(F_ϕ)(t^d) = π(M)(T)
where π(M) is the characteristic polynomial of M.
We emphasize that the two variables t and T play different
roles in the two sides of the Equality (<ref>):
in the left hand side, t appears in the variable at which the
characteristic polynomial is evaluated whereas, in the right hand
side, it is an internal variable appearing in the matrix M; and
conversely for T.
In order to explicitly compute the matrix of μ_P for a
given Ore polynomial P ∈ K[t;θ], we can proceed as
follows. We write
P = g_0 + g_1 t + ⋯ + g_n t^n (g_i ∈ K) and
notice that
μ_P =
μ_g_0 + μ_t ∘μ_g_1 + ⋯ + μ_t^n ∘μ_g_n.
Moreover the set of equalities e_i t = t e_i^1/q for 1 ≤ i
≤ d shows that the matrix
of μ_t is t·F^-1 where F is the matrix of the Frobenius
endomorphism acting on K (which is -linear).
These observations readily lead to Algorithm <ref>.
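A minimal sketch of this matrix construction over the toy field K = 𝔽_4 = 𝔽_2[x]/(x^2+x+1), with ℬ = (1, x): the Frobenius matrix F and the multiplication-by-x matrix are hard-coded, the helper names are ours, and the matrix of μ_P is returned as a dictionary mapping each power of t to its d×d coefficient matrix, following Mat(μ_P) = ∑_i t^i F^{-i} G_i with G_i the matrix of multiplication by g_i.

```python
import numpy as np

p = 2   # K = F_4 = F_2[x]/(x^2 + x + 1), with F_2-basis B = (1, x)

F  = np.array([[1, 1], [0, 1]])   # Frobenius y -> y^2 :  1 -> 1,  x -> x^2 = x + 1
Mx = np.array([[0, 1], [1, 1]])   # multiplication by x:  1 -> x,  x -> x^2 = x + 1
I2 = np.eye(2, dtype=int)

def mul_matrix(coords):
    """Matrix of multiplication by g = a*1 + b*x on K, for coords = (a, b) over F_2."""
    a, b = coords
    return (a * I2 + b * Mx) % p

def matrix_of_mu_P(coeffs):
    """Coefficient matrices of mu_P : Q -> Q*P, one d x d block per power of t."""
    Finv = F % p                    # here F has order 2, so F^{-1} = F
    out, Fpow = {}, I2
    for i, g in enumerate(coeffs):  # P = g_0 + g_1 t + ..., each g_i given by its coordinates
        out[i] = (Fpow @ mul_matrix(g)) % p    # t^i-part:  F^{-i} G_i
        Fpow = (Fpow @ Finv) % p
    return out

# example: P = x + (1 + x) t
print(matrix_of_mu_P([(0, 1), (1, 1)]))
```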
If ℬ is the working basis of K/,
Algorithm <ref> requires d
applications of the Frobenius endomorphism and (n d^ω)
operations in .
Since ℬ is the working basis, writing the coordinates of
an element of K in ℬ costs nothing. Therefore,
computing the matrix F amounts to computing each e_i^q for 1 ≤ i ≤
d. This then requires d applications of the
Frobenius endomorphism. Similarly computing each G_j requires d
multiplications in K, corresponding to (d^2) operations
in .
Finally, the computation on line 4 requires one inversion and O(n)
multiplications of d × d matrices over 𝔽_q. The cost of this
computation is then (n d^ω) operations in .
We now have everything we need to compute the characteristic polynomial
of the Frobenius endomorphism: see Algorithm <ref>.
[Variant ]
Algorithm <ref> computes
the characteristic polynomial of the Frobenius endomorphism of ϕ for a
cost of (dlog^2 q) + (r d^ωlog q) bit operations.
Per Lemma <ref>, computing the matrix of
μ_ϕ_T requires O(d) applications of the Frobenius and
(rd^ω) operations in .
Using Lemma <ref>, computing its characteristic
polynomial can be achieved for an extra cost of (rd^ω)
operations in .
All of this correspond to (dlog^2 q) + (r d^ωlog q)
bit operations in our complexity model (see <ref>).
§.§ Algorithms: the case of a general curve
When A is a general curve, it is possible to follow the same strategy as
before. However several simplifications that were previously applicable cannot
be implemented in this case.
First of all, finding g(ϕ) requires some computation.
By Lemma <ref>, however, g(ϕ) can be obtained as the right
gcd of a finite number of Ore polynomials, as soon as we have a
finite presentation of the ring A.
Fortunately, such a right gcd can be computed using a noncommutative
variant of the Euclidean algorithm.
Once g(ϕ) is known, one can compute its reduced norm using the
method of <ref>: we form the matrix of the
(A)[t]-linear map μ_g(ϕ) : (A_K)[t;θ] →(A_K)[t;θ], defined by Q ↦ Q · g(ϕ), and view
(g(ϕ)) as the determinant of μ_g(ϕ).
This approach yields a working algorithm for computing π(F_ϕ).
It has nevertheless two drawbacks.
First, the computation of the right gcd may be costly and have an
impact on the size of the coefficients in the base ring (A_K),
which is not finite. One may gain a certain level of control
by using the theory of noncommutative subresultants introduced
by Li in <cit.>, but this requires additional caution.
The second disadvantage is that the Ore polynomial g(ϕ) is
in general not of the form ϕ_a - a, implying that the
computation of its reduced norm no longer boils down to finding
the characteristic polynomial of a matrix with entries in .
Instead, we need to compute the determinant of a general matrix
over (A)[t], which can be a more costly operation.
It turns out that we can overcome these two issues by following
the same strategy as in <ref> and reducing the
problem to the case of [T].
For simplicity, we assume again that A is presented as
A = [X,Y] / P(X,Y)
with
P ∈[X,Y]
and that (x) > (y) where x and y denote the images
in A of the variables X and Y. We introduce a new variable
Λ and the Ore polynomial ring K[T,Λ][t; θ]
where θ acts on K via the Frobenius map x ↦ x^q
and acts trivially on T and Λ. In this setting, we have
a reduced norm map
: K[T,Λ][t; θ] →[T,Λ][t^d].
We consider the trivariate polynomial
ϖ(T, Λ, t^d) = (ϕ_x + Λ·ϕ_y
- T) and write
ϖ(x+Λ y, Λ, t^d) =
ϖ_0(t^d) + ϖ_1(t^d) ·Λ + ⋯ +
ϖ_n(t^d) ·Λ^n
where the ϖ_i's are univariate polynomials over (A).
This gives the following theorem, which is an analogue of
Theorem <ref> and whose proof is similar.
We keep the previous notation and assumptions.
Let F_ϕ be the Frobenius endomorphism of ϕ and
let π(F_ϕ) be its monic characteristic polynomial. Then
π(F_ϕ) = (ϖ_0, ϖ_1, …, ϖ_n).
The formula of Theorem <ref> readily provides an algorithm
for computing π(F_ϕ). This strategy is not hindered by the two
aforementioned disadvantages. Moreover, as mentioned in <ref>, it may occur that π(F_ϕ) is already the gcd of the
first polynomials ϖ_0, …, ϖ_i, for some i < n. Therefore, it
can be beneficial to compute the ϖ_i's one by one (using relaxed
arithmetics), determining the corresponding gcd at each step, and stopping the
computation as soon as the resulting polynomial reaches degree d. As also
discussed in <ref>, another option is to work with
evaluations at random values λ∈ instead of working with the
formal variable Λ.
tocsection
alpha
§ REVIEW OF EXISTING ALGORITHMS
In all this Section, ϕ is a rank r Drinfeld [T]-module over a field
K. The field K may not be finite, but when it is, its degree over is
denoted by d. The function field characteristic of K is an ideal of
[T] whose degree is denoted by m. We consider an endomorphism or an
isogeny u whose degree as an Ore polynomial is n.
We let ω be a feasible exponent for matrix multiplication and
Ω be a feasible exponent for matrix characteristic polynomial
computation.
We underline that any algorithm that computes the characteristic
polynomial of an endomorphism computes its norm as a byproduct.
Furthermore, the Frobenius norm can be computed in
(d log^2 q) + (d log q) bit operations (see
Remark <ref>), which is strictly better than
any other algorithm mentioned in this paper.
In all the tables below, the term (d log^2 q) which appears
on many lines always corresponds to the precomputation of the image
of a generator of K/ by the Frobenius endomorphism (see <ref>).
Algorithms for the characteristic polynomial of the Frobenius endomorphism in rank two:

Algorithm | Bit complexity | Constraints
<cit.> (note 1) | (d^3 log q) | —
<cit.> (note 2) | (d^1.885 log q) | m = d
<cit.> (note 3) | (d^2 log^2 q) | —
<cit.> (note 4) | (d^2 log q) | —
<cit.> | (d^3 log q) | m = d
<cit.> (note 5) | (d^1.5 log q) | m = d
<cit.> | (d^1.5 log q) | m = d
<cit.> | (d^2/√(m) log q) | m < d
<cit.> | (d^2 (d + m)/m log q) | —
<cit.> | ((d, d) log q) | —
Cor. <ref> | ((d, d) log q) | —
Th. <ref> | (d^2 log q) | —
Th. <ref> | (d^ω log q) | —
1 Deterministic algorithm by Gekeler. The Frobenius norm is
directly computed, and the Frobenius trace is computed as the solution of
a linear system. See also <cit.>.
2 Monte-Carlo algorithm by Musleh and Schost. The algorithm is
inspired by ideas of Narayanan in <cit.>, as
well as Coppersmith's block Wiedemann algorithm.
3 Monte-Carlo algorithm by Musleh and Schost. The algorithm
computes the Frobenius norm, and the minimal polynomial of ϕ_T using
a Monte-Carlo algorithm. After, it recovers F_ϕ by solving a Hankel
system.
4 Deterministic algorithm by Musleh and Schost. Drinfeld analogue
of Schoof's algorithm for elliptic curves.
5 Deterministic Algorithm by Doliskani,
Narayanan and Schost, introduced to factorize polynomials in [T].
The algorithm actually computes the Hasse invariant of the
Drinfeld module, from which the Frobenius trace is recovered thanks to
the assumption that m = d. The algorithm gets inspiration from
elliptic curve algorithms and computes the Hasse invariant as an element
in a recursive sequence discovered by Gekeler. See
<cit.>.
Algorithm described in Table <ref>.
Algorithm described in Table <ref>.
Algorithms for the characteristic polynomial of the Frobenius endomorphism in any rank r:

Algorithm | Bit complexity | Constraints
<cit.> (note 1) | (r^2 d^3 log q) | m = d
<cit.> (note 2) | (r^ω d^3/2 log q) | m = d
<cit.> (note 2) | ((r^Ω/m + r^ω/√(m)) d^2 log q) | m < d
<cit.> | ((r^Ω + min(dr^2, (d+r)r^ω-1)) d(d + m)/m log q) | —
<cit.> | ((r^Ω d(d + m)/m + r ·(d + r, d)) log q) | —
Cor. <ref> (note 3) | (((d, d) + rd^2 + dr^ω) log q) | —
Th. <ref> (note 4) | ((d^2 r^ω-1 + dr^ω) log q) | —
Th. <ref> (note 5) | (rd^ω log q) | —
1 Deterministic algorithm by Garai and Papikian. With
Proposition <ref> and the hypothesis m = d, the
coefficients of F_ϕ are uniquely determined by their images under
γ: [T] → K. The Frobenius norm is computed using
Equation (<ref>) and the other coefficients are
recursively computed.
2 Two deterministic algorithms by Musleh and Schost. The
characteristic polynomial of any endomorphism is the characteristic
polynomial of its action on the crystalline cohomology. In the case of the
Frobenius endomorphism, algorithmic speed-ups are possible using a
baby step-giant step method.
3 Probabilistic algorithm. The characteristic polynomial of the
Frobenius endomorphism is the characteristic polynomial of its action on
the motive.
4 Probabilistic algorithm. The characteristic polynomial of the
Frobenius endomorphism is the characteristic polynomial of its action on
the motive. The corresponding matrix is recursively
computed using a square and multiply-like procedure.
5 Probabilistic algorithm. The characteristic polynomial of the
Frobenius endomorphism is interpreted as the reduced characteristic
polynomial of ϕ_T in the central simple [τ^d]-algebra .
Algorithm described in Table <ref>.
Algorithms for characteristic polynomials of degree n endomorphisms, in any rank r, over a finite field of degree d over 𝔽_q:

Algorithm | Bit complexity | Constraints
<cit.> (note 1) | ((r^Ω + min(nr^2, (n+r)r^ω-1)) d(n + m)/m log q) | —
<cit.> (note 1) | ((r^Ω d(n + m)/m + r (n + r, d)) log q) | —
Th. <ref> (note 2) | (((n, d) + ndr + nr^ω + dr^ω) log q) | —
1 Two deterministic algorithms by Musleh and Schost. The
characteristic polynomial of any endomorphism is the characteristic
polynomial of its action on the crystalline cohomology of the Drinfeld
module.
2 Probabilistic algorithm. The characteristic polynomial of any
endomorphism is the characteristic polynomial of its action on the motive
of the Drinfeld module.
Algorithms for characteristic polynomials of degree n endomorphisms, in any rank r, over a generic field:

Algorithm | Operations in the base field & Frobenius applications | Constraints
Th. <ref> (note 1) | (n^2 + (n + r)r^Ω - 1) operations & O(n^2 + r^2) Frobenius applications | —
1 Probabilistic algorithm. The characteristic polynomial of any
endomorphism is the characteristic polynomial of its action on the motive
of the Drinfeld module.
Algorithms for computing norms of degree n isogenies, in any rank r, over a finite field of degree d over 𝔽_q:

Algorithm | Bit complexity | Constraints
Th. <ref> (note 1) | (((n, d) + ndr + n min(d,r) r^ω-1 + dr^ω) log q) | —
See also Table <ref>.
1 Probabilistic algorithm. The norm of any isogeny is the
determinant of the motivic application associated to the isogeny.
Algorithms for computing norms of degree n isogenies, in any rank r, over a generic field:

Algorithm | Operations in the base field & Frobenius applications | Constraints
Th. <ref> (note 1) | (n^2 + (n + r)r^ω-1) operations & O(n^2 + r^2) Frobenius applications | —
1 Probabilistic algorithm. The norm of any isogeny is the
determinant of the motivic application associated to the isogeny.
A Survey on Graph Classification and Link Prediction based on GNN
Xingyu Liu Juan Chen Quan Wen
School of Computer Science and Engineering
University of Electronic Science and Technology of China
Chengdu, Sichuan, 611730, P.R. China
Abstract: Traditional convolutional neural networks are limited to handling Euclidean space data, overlooking the vast realm of real-life scenarios represented as graph data, including transportation networks, social networks, and reference networks. The pivotal step in transferring convolutional neural networks to graph data analysis and processing lies in the construction of graph convolutional operators and graph pooling operators. This comprehensive review article delves into the world of graph convolutional neural networks. Firstly, it elaborates on the fundamentals of graph convolutional neural networks. Subsequently, it elucidates the graph neural network models based on attention mechanisms and autoencoders, summarizing their application in node classification, graph classification, and link prediction along with the associated datasets.
Keywords: Graph convolutional neural network, Node classification, Link prediction.
§ INTRODUCTION
The characteristic of deep learning is the accumulation of multiple layers of neural networks, resulting in better learning representation ability. The rapid development of convolutional neural networks (CNN) has taken deep learning to a new level<cit.>. The translation invariance, locality, and combinatorial properties of CNN make it naturally suitable for tasks such as processing Euclidean structured data such as images<cit.>, At the same time, it can also be applied to various other fields of machine learning<cit.>. The success of deep learning partly stems from the ability to extract effective data representations from Euclidean data for efficient processing. Another reason is that thanks to the rapid development of GPUs, computers have powerful computing and storage capabilities, It can train and learn deep learning models in large-scale data sets, which makes deep learning perform well in natural language processing<cit.>, machine vision<cit.>, recommendation systems<cit.> and other fields
However, existing neural networks can only process conventional Euclidean structured data. As shown in Figure. 1(a), Euclidean data structures are characterized by fixed arrangement rules and orders of nodes, such as 2D grids and 1D sequences. Currently, more and more practical application problems must consider non Euclidean data, such as Figure. 1(b), where nodes in non Euclidean data structures do not have fixed arrangement rules and orders, This makes it difficult to directly transfer traditional deep learning models to tasks dealing with non Euclidean structured data. If CNN is directly applied to it, it is difficult to define convolutional kernels in non Euclidean data due to the unfixed number and arrangement order of neighboring nodes in the non Euclidean data center, which does not meet translation invariance. Research work on graph neural networks (GNNs), At the beginning, it was about how to fix the number of neighboring nodes and how to sort and expand them, such as the PATCHY-SAN<cit.>, LGCN<cit.>, DCNN<cit.> methods. After completing the above two tasks, non Euclidean structured data is transformed into Euclidean structured data, which can then be processed using CNN. A graph is a typical non Euclidean data with points and edges, In practice, various non Euclidean data problems can be abstracted into graph structures. For example, in transportation systems, graph based learning models can effectively predict road condition information<cit.>. In computer vision, the interaction between humans and objects can be viewed as a graph structure, which can be effectively recognized<cit.>.
Recently, some scholars have reviewed graph neural networks and their branches of graph convolutional neural networks<cit.>. The difference in this article is that it focuses on introducing the methods and models of graph neural networks in node classification and link prediction of citation networks. In citation networks, a typical classification task is to provide the content information and citation relationships between each article, and to classify each article into the corresponding domain. For example, in a semi supervised classification scenario of nodes, the attribute information of nodes includes the title or abstract information of the article, as well as the relationships referenced between nodes to form network information. Given a small amount of virtual data tables, the domain to which each node belongs in the network is divided through deep learning. In this task, GCN effectively modeled the node text attributes and reference network structure, achieving great success. Compared to directly using content information (such as MLP), using only structural information (such as DeepWalk<cit.>) and traditional semi supervised node classification methods on graphs, such as Planetoid <cit.>, traditional methods have much lower classification accuracy than graph convolutional neural network algorithms represented by GCN. Among them, the Graph Attention Network (GAT)<cit.> performs better than the Planetoid model in classic citation network datasets. Therefore, this task is often seen as a benchmark task to measure the effectiveness of a graph convolutional neural network model. GCN<cit.>, GAT<cit.>, and GWNN<cit.> all used citation network classification tasks to verify the effectiveness of the model.
§ GRAPH NEURAL NETWORK
§.§ Graph Structure Class
§.§.§ Edge Information Graph
In recent years, the concept of edge information graph has gained considerable attention in the field of graph theory. An edge information graph is defined as a graph structure in which different edges possess distinct structural characteristics. These characteristics may include the weight, direction, and heterogeneous relationships between nodes.
For example, consider the complex structure of a social network graph. The relationships between nodes within this graph may take on a variety of forms, ranging from unidirectional "follow" relationships to bidirectional "friendship" relationships. Due to the complexity of such relationships, they can not be adequately represented by simple weight constraints. This highlights the importance of considering the full range of structural edge information in the analysis of graphs with complex relationships.
§.§.§ Spatio-Temporal Graph
A Spatio-Temporal graph is a type of property graph. Its characteristic is that the feature matrix X in the high-dimensional feature space f^* changes with time. This structure is represented as G^* = (V, E, A, X), where V, E, and A denote the vertices, edges, and adjacency matrix, respectively. With the introduction of time series, the graph structure can effectively handle tasks involving dynamic and temporal relationships. Yan et al.<cit.> presented a method for skeleton motion detection based on Spatio-Temporal graph convolutional neural networks.
§.§ Convolution Graph Neural Network
Graph convolutional neural networks can be divided into two categories in terms of feature space: the frequency (spectral) domain and the spatial domain. A graph convolutional neural network maps the data G = (V, E) of the original graph structure to a new feature space f^G → f^*. Taking a single-layer forward-propagation graph convolutional neural network as an example, the trainable weights of layer l are denoted by W^l. For each node v_i in the graph, the output H^l+1 of each layer can be expressed by a nonlinear function f(·,·), where A is the adjacency matrix. The graph convolutional neural network is implemented with the nonlinear activation function σ(·) = ReLU and the following layer-wise propagation rule: f(H^l,A) = σ(D̂^-1/2ÂD̂^-1/2H^lW^l), where Â = A+I denotes the adjacency matrix of G = (V,E) with added self-loops, I denotes the identity matrix, D̂ with D̂_ii = ∑_j Â_ij denotes the diagonal degree matrix, and W^l denotes the weight matrix of layer l. Through this layer-wise propagation rule, the graph convolutional network carries the local parameter-sharing property of convolutional networks over to graph structures, so that the receptive field of each node grows as the number of propagation layers increases, allowing more information to be gathered from neighboring nodes. Based on existing GNN structures, a general GNN flowchart can be summarized, as shown in Figure. 2.
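A minimal numpy sketch of this propagation rule (random weights, a single layer, no training loop; the array names are ours):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: sigma(D^-1/2 (A+I) D^-1/2 H W), with ReLU as sigma."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d = A_hat.sum(axis=1)                             # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy graph: 4 nodes, 3-dimensional input features, 2 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H0 = np.random.randn(4, 3)
W0 = np.random.randn(3, 2)
print(gcn_layer(A, H0, W0).shape)      # (4, 2)
```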
§.§ Spatio-Temporal Graph Neural Network
As an attribute graph network, The Spatio-Temporal graph neural network introduces the characteristics of time series. It can simultaneously obtain the characteristic information of time and space domains in the graph structure, and the characteristics of each node will change with time. We mainly discusses the Spatio-Temporal graph neural network structure that uses graph convolution to extract spatial feature dependence in the spatial domain. It is mainly divided into three time-domain feature acquisition methods: traditional convolution network, gated loop network and graph convolution network. Figure. 3 shows the network structure comparison between graph convolution neural network and Spatio-Temporal graph neural network (taking 1D-CNN+GCN structure as an example). The two network structures are constructed on the basis of graph convolution computing unit, where φ Is the element distance between matrix Z and Z^T, and MLP full connection represents multilayer perceptron full connection neural network.
§ GRAPH NEURAL NETWORK BASED ON ATTENTION IMPLEMENTATION
The attention mechanism has shown strong capabilities in processing sequential tasks<cit.>, such as in machine reading and learning sentence representation tasks. Its powerful advantage lies in allowing variable input sizes, and then utilizing the attention mechanism to only focus on the most important parts before making decisions. Some studies have found that the attention mechanism can improve convolutional methods, allowing for the construction of a powerful model, In dealing with some tasks, better performance can be achieved. Therefore, reference<cit.> introduced attention mechanism into the process of neighbor node aggregation in graph neural networks and proposed graph attention networks (GAT). In the traditional GNN framework, attention layers were added to learn the different weights of each neighbor node, Treat them differently. In the process of aggregating neighboring nodes, only focus on the nodes with larger effects, while ignoring some nodes with smaller effects. The core idea of GAT is to use neural networks to learn the weights of each neighboring node, and then use neighboring nodes with different weights to update the representation of the central node. Figure. 4 is a schematic diagram of the GAT layer structure. Figure. 4(a) shows the calculation of weights between node i and node j, Figure. 4(b) shows a node using a multi head attention mechanism in its neighborhood to update its own representation. The attention factor of node j relative to node i is solved as:
a_ij=exp(LeakyReLU(α^T[W h_i ‖ W h_j]))/∑_k ∈ N_iexp(LeakyReLU(α^T[W h_i ‖ W h_k]))
where a_ij represents the attention factor of node j relative to node i, W is an affine transformation for dimension reduction, α^T represents the learnable weight vector, ‖ represents vector concatenation, and LeakyReLU(x') = x' for x' > 0 and λ x' for x' ≤ 0 is the leaky rectified linear unit. Then, with the nonlinear activation function δ, the learned attention factor a_ij can be used to update the central node i:
h_i^'=δ(∑_j ∈ N_i a_i j^k W^k h_j)
In order to make the model more stable, the authors also applied a multi-head attention mechanism. Instead of using only one function to calculate attention factors, K different functions are used jointly. Each attention function yields its own set of attention coefficients and provides a set of parameters for the weighted sum of the next layer. Within a convolutional layer, the K attention mechanisms do not affect each other and work independently. Finally, the results obtained from the individual attention mechanisms are concatenated or averaged to obtain the final result. If K different attention mechanisms are computed simultaneously, we obtain:
h_i^'=_k=1^K δ(∑_j ∈ N_i a_i j^k W^k h_j)
|| represents the concatenation operation, and a_ij^k is the attention factor obtained by the k-th attention parameter function. For the last convolutional layer, if the multi head attention mechanism is used for solving, the average method should be used to solve:
h_i^'=δ(1/K∑_k=1^K ∑_j ∈ N_i a_i j^k W^k h_j)
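The single-head attention coefficients and the resulting node update can be sketched in numpy as follows (masked softmax over each neighbourhood, LeakyReLU slope 0.2; the names are ours and the weights are random, so this is an illustration rather than a trained model):

```python
import numpy as np

def gat_layer(A, H, W, a):
    """Single-head GAT layer: a_ij = softmax_j(LeakyReLU(a^T [W h_i || W h_j]))
    over j in N_i, followed by h_i' = sigma(sum_j a_ij W h_j)."""
    Wh = H @ W                                        # (n, f')
    # e_ij = LeakyReLU(a[:f'] . Wh_i + a[f':] . Wh_j), computed for all pairs
    e = (Wh @ a[: Wh.shape[1]])[:, None] + (Wh @ a[Wh.shape[1]:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)                   # LeakyReLU
    e = np.where(A > 0, e, -np.inf)                   # keep only neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # row-wise softmax over N_i
    return np.maximum(alpha @ Wh, 0.0)                # sigma = ReLU

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
W = np.random.randn(4, 2)
a = np.random.randn(4)                                # length 2 * f' with f' = 2
print(gat_layer(A, H, W, a).shape)                    # (3, 2)
```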
Reference <cit.> also introduced the multi head attention mechanism into the aggregation process of neighboring nodes, proposing gated attention networks (GAAN). However, unlike GAT, which uses averaging or concatenation to determine the final attention factor, GAAN believes that although using the multi head attention mechanism can gather information from multiple neighboring nodes of the central node, not every head of attention mechanism has the same contribution, A certain head of attention may capture useless information. Therefore, GAAN assigns different weights to each attention mechanism in multi head attention to aggregate neighboring node information and complete the update of the central node. Therefore, GAAN first calculates an additional soft gate between 0 (low importance) and 1 (high importance), assigning different weights to each head of attention. Then, combined with the multi head attention aggregator, You can obtain a gated attention aggregator:
y_i=F C_θ_0(x̃_i ⊕∏_k=1^K(g_i^(k)∑_j ∈ N w_i, j^(k) F C_θ_v^(k)^h(z_j)))
g_i=(g_i^(1), g_i^(2), ⋯, g_i^(K))=ψ_g(x̃_i, z_N_i)
where FC_θ_0(·) means that the activation function is not applied after the linear transformation, ⊕ is the connection operation, K is the number of attention mechanisms, and w_i,j^(k) is the k-th attention weight between node i and j, θ_v^(k) is the parameter of the k-th header used to query the vector. g_i^(k) is the threshold value of the k-th header of node i, Apply convolutional network Ψ_g and take the center node feature x_i and neighbor node feature z_N_i to calculate the g_i. Convolution network Ψ_g. The convolutional network Ψ_g can be designed according to its actual needs, and literature<cit.> adopts average pooling and maximum pooling for construction:
g_i=FC_θ_g^δ(x̃_i ⊕max_j ∈ N_i(FC_θ_m(z_j)) ⊕1/|N_i|∑_j ∈ N_i z_j),
where θ_m represents mapping the feature vectors of neighboring nodes to dimension d_m, and θ_g represents mapping the concatenated feature vectors to the K gates. Finally, the authors of reference<cit.> constructed a gated recurrent unit using GAAN and successfully applied it to traffic speed prediction problems.
In reference<cit.>, it was proposed that although GAT has achieved good results in multiple tasks, there is still a lack of clear understanding of its discriminative ability. Therefore, the author of this paper conducted a theoretical analysis of the representation characteristics of graph neural networks using attention mechanisms as aggregators, and analyzed that such graph neural networks are always unable to distinguish all situations with different structures. The results show that, The existing attention based aggregators cannot preserve the cardinality of multiple sets of node feature vectors during aggregation, which limits their discriminative ability. The proposed method modifies the cardinality and can be applied to any type of attention mechanism.Zhang et al<cit.> developed a self attention graph neural network (SAGNN) based on attention mechanism for hypergraphs. SAGNN can handle different types of hypergraphs and is suitable for various learning tasks and isomorphic and heterogeneous hypergraphs with variables. This method can improve or match the latest performance of hypergraph learning, solving the shortcomings of previous methods, For example, it is impossible to predict the hyperedges of non-uniform heterogeneous hypergraphs. U2GNN<cit.> proposed a novel graph embedding model by introducing a universal self attention network, which can learn low dimensional embedding vectors that can be used for graph classification. In implementation, U2GNN first uses attention layers for calculation, Then, a recursive transformation is performed to iteratively remember the weight size of the vector representation of each node and its neighboring nodes in each iteration, and the final output sum is the final embedded representation of the entire graph. This method can solve the weaknesses in existing models, To generate reasonable node embedding vectors, the above models apply attention mechanism to spatial domain graph neural networks. In order to better utilize the local and global structural information of the graph, reference<cit.>first attempted to transfer attention mechanism from spatial domain to spectral domain, proposing spectral graph attention network (SpGAT). In SpGAT, graph wavelets are selected as spectral bases, And decompose it into low-frequency and high-frequency components based on indicators. Then, construct two different convolutional kernels based on low-frequency and high-frequency components, and apply attention mechanisms to these two kernels to capture their importance. By introducing different trainable attention weights to low-frequency and high-frequency components, local and global information in the graph can be effectively captured, And compared to the spatial domain, the attention spGAT greatly reduces learning parameters, thereby improving the performance of GNN. In order to better understand the application of attention mechanisms in graph neural networks and identify the factors that affect attention mechanisms, a series of experiments and models were designed in reference <cit.> to conduct in-depth research and analysis. Firstly, the graph isomorphism network (GIN) model<cit.> was used to conduct experiments on the dataset, but it was found that its performance was very poor, And it is difficult to learn attention subgraph networks. Therefore, the author combined GIN and ChebyNet networks to propose a ChebyGIN network model, and added attention factors to form an attention model. 
A weakly supervised training method was adopted to improve the performance of the model, Experiments were conducted on the models in color counting and triangle counting tasks, and four conclusions were drawn:
(1) In graph neural networks, the main benefit of attention over the nodes is that it extends to more complex or noisy graphs, turning a model that does not generalize into a very robust one;
(2) the factors that influence the performance of attention in a GNN include the initialization of the attention model, the choice of GNN architecture, the attention mechanism itself and the hyperparameters of the GNN model;
(3) weakly supervised training improves the performance of attention in GNN models;
(4) attention makes GNNs more robust on larger and noisier graphs.
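To make the aggregation rule concrete, the following is a minimal numpy sketch of a single-head GAT-style attention layer. It uses dense matrices, our own variable names and a plain ReLU output; it illustrates the general mechanism only and is not the reference implementation of any of the cited models.

import numpy as np

def gat_layer(X, A, W, a_src, a_dst, slope=0.2):
    # X: (N, F) node features; A: (N, N) 0/1 adjacency with self-loops
    # W: (F, Fp) weights; a_src, a_dst: (Fp,) attention parameters
    H = X @ W                                           # transformed features
    e = (H @ a_src)[:, None] + (H @ a_dst)[None, :]     # pairwise attention logits
    e = np.where(e > 0, e, slope * e)                   # LeakyReLU
    e = np.where(A > 0, e, -1e9)                        # keep edges only
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)           # softmax over each node's neighbors
    return np.maximum(alpha @ H, 0.0)                   # attention-weighted aggregation + ReLU

# toy usage on a 4-node graph
rng = np.random.default_rng(0)
A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(4, 3))
print(gat_layer(X, A, rng.normal(size=(3, 2)), rng.normal(size=2), rng.normal(size=2)).shape)  # (4, 2)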
We summarize the attention-based graph neural network models mentioned above in Table 1:
§ GRAPH NEURAL NETWORKS BASED ON AUTOENCODERS
Autoencoders (AE) and their variants play an important role in unsupervised learning. An AE performs implicit representation learning with a neural network and has a strong capacity for feature extraction: the encoder and decoder learn an effective representation of the input whose dimension can be far smaller than that of the input, which achieves dimensionality reduction. AE is currently the preferred deep learning technique for implicit representation learning; feeding correlated raw data (x_1,x_2,⋯,x_n) into an AE and training it to reconstruct the input accomplishes feature extraction. Autoencoders are used in a wide range of tasks such as data denoising, image reconstruction and anomaly detection, and when an AE is used to generate data similar to the training data it acts as a generative model. Because of these advantages, several authors have applied AEs and their variants to graph neural networks. Reference <cit.> first proposed the variational graph autoencoder (VGAE), which adapts the variational autoencoder (VAE) to graph-structured data. VGAE uses latent variables to learn interpretable latent representations of undirected graphs and is realized with a graph convolutional network encoder and a simple inner-product decoder. In this model the encoder is implemented with a two-layer GCN:
q(H|I, A)=∏_i=1^N q(h_i |I, A)
where q(h_i |I, A)=N(h_i |μ_i, diag(σ_i^2)), the matrix of node means is μ=GCN_μ(I,A), and the node variances are given by logσ=GCN_σ(I,A). The two-layer GCN is:
GCN(I,A)=ÃReLU(ÃIW_0)W_1
where Ã=D^-1/2AD^-1/2 is the symmetrically normalized adjacency matrix. The generative model reconstructs the graph from the inner products of the latent variables: p(A|H)=∏_i=1^N ∏_j=1^N p(A_i j|h_i, h_j), where p(A_ij=1|h_i,h_j)=σ(h_i^T h_j) and A_ij are the entries of A. Finally, the loss function is defined as:
L=E_q(H|I, A)[log p(A|H)]- KL[q(H|I, A) || p(H)]
where the first term on the right-hand side is the expected reconstruction log-likelihood (a cross-entropy over the edges) and the second term is the KL divergence between the approximate posterior q(H|I,A) and the prior p(H).
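A small numpy sketch of the VGAE forward pass and loss described above may be helpful; it uses dense matrices and weight shapes of our own choosing and is a simplification of the original VGAE implementation, not a reproduction of it.

import numpy as np

def normalize_adj(A):
    # symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def vgae_forward(I_feat, A, W0, W_mu, W_sigma, rng):
    # two-layer GCN encoder: a shared first layer, separate heads for mu and log sigma
    A_tilde = normalize_adj(A)
    H0 = np.maximum(A_tilde @ I_feat @ W0, 0.0)               # ReLU
    mu = A_tilde @ H0 @ W_mu
    log_sigma = A_tilde @ H0 @ W_sigma
    H = mu + np.exp(log_sigma) * rng.normal(size=mu.shape)    # reparameterization trick
    A_rec = 1.0 / (1.0 + np.exp(-(H @ H.T)))                  # inner-product decoder
    return mu, log_sigma, A_rec

def vgae_loss(A, mu, log_sigma, A_rec, eps=1e-9):
    # edge reconstruction (binary cross-entropy) + KL to the standard normal prior
    bce = -np.mean(A * np.log(A_rec + eps) + (1 - A) * np.log(1 - A_rec + eps))
    kl = -0.5 * np.mean(1 + 2 * log_sigma - mu**2 - np.exp(2 * log_sigma))
    return bce + kl

# toy usage: featureless 4-node graph, identity matrix as input features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
mu, ls, A_rec = vgae_forward(np.eye(4), A, rng.normal(size=(4, 8)),
                             rng.normal(size=(8, 2)), rng.normal(size=(8, 2)), rng)
print(vgae_loss(A, mu, ls, A_rec))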
Most existing network embedding methods represent each node by a single point in a low-dimensional vector space, so the resulting picture of the network is deterministic. In reality, however, the formation and evolution of a network are full of uncertainty, which these methods cannot express. To address this, reference <cit.> proposed deep variational network embedding in Wasserstein space (DVNE). Since a Gaussian distribution naturally encodes uncertainty, DVNE uses a deep variational model to learn a Gaussian embedding for each node in Wasserstein space instead of a point vector; this preserves the network structure while modeling the uncertainty of each node. DVNE measures the similarity between distributions with the second Wasserstein distance (W_2) and minimizes the Wasserstein distance between the model distribution and the data distribution, thereby capturing the relation between the mean vectors and the variance terms.
In DVNE, the W_2 distance between two Gaussian distributions is defined as:
W_2(N(μ_1, Σ_1) ; N(μ_2, Σ_2))^2= ||μ_1-μ_2||_2^2+||Σ_1^1/2-Σ_2^1/2||_F^2
where N denotes the Gaussian distribution. The loss function L of DVNE consists of two parts: a ranking loss L_1 that preserves first-order proximity, and a reconstruction loss L_2 that preserves second-order proximity.
L=L_1+α L_2,
L_1=∑_(i, j, k) ∈ D(E_i j^2+exp(-E_i k)),
L_2=inf _Q(Z | C) ∈ Q E_P_C E_Q(Z | C)[||C ⊙(C-G(Z))||_2^2]
where D={(i, j, k) | j ∈ N(i), k ∉ N(i)} is the set of triples, E_ij is the W_2 distance between nodes i and j, C is the input feature matrix, Q is the encoder, ⊙ is the Hadamard product, G is the decoder and Z is the latent random variable. Finally, the parameters of the model are learned by minimizing this loss function.
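As an illustration, the following sketch computes the W_2 distance between diagonal Gaussian embeddings and the ranking part L_1 of the DVNE loss. The variable names are ours and s stands for the diagonal of Σ^{1/2}; this is a sketch of the loss structure, not the original implementation.

import numpy as np

def w2_squared(mu1, s1, mu2, s2):
    # squared 2-Wasserstein distance between Gaussians with diagonal covariance:
    # ||mu1 - mu2||_2^2 + ||Sigma1^{1/2} - Sigma2^{1/2}||_F^2, with s = diag(Sigma^{1/2})
    return np.sum((mu1 - mu2) ** 2) + np.sum((s1 - s2) ** 2)

def ranking_loss(triples, mu, s):
    # L_1 = sum over (i, j, k), with j a neighbor of i and k not: E_ij^2 + exp(-E_ik)
    total = 0.0
    for i, j, k in triples:
        e_ij = np.sqrt(w2_squared(mu[i], s[i], mu[j], s[j]))
        e_ik = np.sqrt(w2_squared(mu[i], s[i], mu[k], s[k]))
        total += e_ij ** 2 + np.exp(-e_ik)
    return total

rng = np.random.default_rng(0)
mu, s = rng.normal(size=(3, 4)), np.abs(rng.normal(size=(3, 4)))
print(ranking_loss([(0, 1, 2)], mu, s))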
The method of <cit.> introduced AEs into the representation learning of vertices and proposed structural deep network embedding (SDNE). Most existing network embedding methods use shallow models, which cannot capture the highly nonlinear structure of a network and therefore perform poorly. SDNE uses second-order proximity to capture the global network structure and first-order proximity to preserve the local structure; combining the two in a semi-supervised deep model allows it to capture highly nonlinear structure while preserving both the global and the local structure. The loss function of the model is:
L_min=L_2nd+α L_1st+ ν L_reg
where L_2nd is the second-order proximity loss, L_1st is the first-order proximity loss, and L_reg is a regularization term that prevents overfitting. The first two terms are defined as:
L_2nd=∑_i=1^n||(x̂_i-x_i) ⊙b_i||_2^2= ||(X̂-X) ⊙B||_F^2,
L_1st=∑_i, j=1^n a_i, j||h_i-h_j||_2^2
where b_i={b_i, j}_j=1^n with b_i,j=1 if a_i,j=0 and b_i,j=β>1 otherwise, β being a hyperparameter, and a_i,j are the entries of the adjacency matrix A.
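A compact numpy sketch of the two proximity terms is given below; following SDNE, the adjacency rows are used as the autoencoder input, so A itself is the reconstruction target. Variable names and the toy values are ours.

import numpy as np

def sdne_losses(A, A_hat, H, beta=5.0):
    # the autoencoder input of node i is its adjacency row, so A itself is the target
    B = np.where(A > 0, beta, 1.0)                  # b_ij = beta if a_ij != 0, else 1
    l_2nd = np.sum(((A_hat - A) * B) ** 2)          # second-order proximity (reconstruction)
    diff = H[:, None, :] - H[None, :, :]            # pairwise embedding differences
    l_1st = np.sum(A * np.sum(diff ** 2, axis=-1))  # first-order proximity
    return l_2nd, l_1st

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(sdne_losses(A, A + 0.1 * rng.normal(size=A.shape), rng.normal(size=(3, 2))))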
Regular equivalence means that vertices in different parts of a network may play similar roles or occupy similar positions, a property easily overlooked in network embedding. Reference <cit.> proposes deep recursive network embedding (DRNE) to learn embeddings that preserve regular equivalence. The neighborhood of each node is turned into an ordered sequence, and each node is represented through a layer-normalized LSTM that recursively aggregates the features of its neighbors; the loss function of DRNE is:
L = ∑_v ∈ V || X_v-Agg ({ X_u|u ∈ N(v) }) ||_F^2
where X_v and X_u are the embedding vectors of nodes v and u, and Agg is an aggregation function implemented by an LSTM. In one recursive step, the embedding of a node preserves the local structure of its neighborhood; by iteratively updating the learned representations, the node embeddings integrate structural information in a global sense, which yields regular equivalence. When the neighborhood of a node is serialized, its neighbors are ordered by degree, the most effective neighborhood ranking measure. Finally, a regularization term is added to the loss of the whole model before the parameters are updated.
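The following sketch illustrates the structure of the DRNE loss only; the layer-normalized LSTM aggregator of the original method is replaced here by an arbitrary callable (a mean in the toy usage), an assumption made purely for illustration.

import numpy as np

def drne_loss(X, A, aggregate):
    # X: (N, d) node embeddings; A: 0/1 adjacency; aggregate: callable on the
    # degree-ordered sequence of neighbor embeddings (an LSTM in the original method)
    deg = A.sum(axis=1)
    loss = 0.0
    for v in range(A.shape[0]):
        nbrs = np.nonzero(A[v])[0]
        if len(nbrs) == 0:
            continue
        order = nbrs[np.argsort(-deg[nbrs])]        # neighbors sorted by degree
        loss += np.sum((X[v] - aggregate(X[order])) ** 2)
    return loss

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
print(drne_loss(np.eye(3), A, lambda seq: seq.mean(axis=0)))   # mean as a stand-in aggregator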
The method of <cit.> applied AEs to matrix completion in recommender systems and proposed graph convolutional matrix completion (GC-MC). GC-MC views matrix completion as a link prediction problem on a bipartite interaction graph and designs a graph autoencoder framework with differentiable message passing on that graph; the encoder is a graph convolutional network and the decoder is a bilinear function. Reference <cit.> proposes a new framework for adversarial graph embedding: to learn robust embeddings, two adversarially regularized models are introduced, the adversarially regularized graph autoencoder (ARGA) and the adversarially regularized variational graph autoencoder (ARVGA). Besides the above methods, autoencoder-based graph neural networks also include Graph2Gauss<cit.>, which can efficiently learn node embeddings on large-scale graphs. Table 2 summarizes the graph neural network models based on autoencoders.
§ EXPERIMENTS ON GRAPH CLASSIFICATION AND LINK PREDICTION
§.§ GNN Classifier
Let H̃^1 be the augmented node representation set obtained by concatenating H^1 with the embeddings of the synthetic nodes, and let Ṽ_L be the augmented labeled set obtained by incorporating the synthetic nodes into V_L. We then have an augmented graph G̃ = {Ã,H̃} with labeled node set Ṽ_L. The data sizes of the different classes in G̃ become balanced, so an unbiased GNN classifier can be trained on it. Specifically, we adopt another GraphSage block, followed by a linear layer, for node classification on G̃:
𝐡_v^2=σ(𝐖^2 ·CONCAT(𝐡_v^1, 𝐇̃^1 ·𝐀̃[:, v]))
𝐏_v=softmax(σ(𝐖^c ·CONCAT(𝐡_v^2, 𝐇^2 ·𝐀̃[:, v])))
where H^2 is the node representation matrix of the second GraphSage block and W refers to the weight parameters. P_v is the probability distribution over class labels for node v. The classifier module is optimized with a cross-entropy loss on the labeled nodes, and the predicted label of node v is
𝐘_v^'=argmax_c 𝐏_v, c
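A dense numpy sketch of this classifier head is given below; the weights are applied on the right (the transpose of the convention in the displayed equations), the aggregation Σ_u Ã[u,v] h_u is written as A.T @ H, and all names and shapes are our own.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sage_block(H, A, W):
    # CONCAT(h_v, sum_u A[u, v] h_u) followed by a linear map and ReLU, for all nodes at once
    return np.maximum(np.concatenate([H, A.T @ H], axis=1) @ W, 0.0)

def classify(H1, A, W2, Wc):
    H2 = sage_block(H1, A, W2)                       # second GraphSage block
    P = softmax(np.concatenate([H2, A.T @ H2], axis=1) @ Wc)
    return P, P.argmax(axis=1)                       # P_v and the predicted labels Y'_v

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
H1 = rng.normal(size=(4, 5))
P, y = classify(H1, A, rng.normal(size=(10, 6)), rng.normal(size=(12, 3)))
print(P.shape, y)                                    # (4, 3) and one predicted label per node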
We compare the performance of several GNN-based models. Table 3 shows the corresponding F_1 and MCC values on the two real-world datasets.
§.§ Link Prediction
Human trajectory prediction is one application of link prediction. Two metrics are used to evaluate model performance: the Average Displacement Error (ADE) <cit.> defined in equation <ref> and the Final Displacement Error (FDE) <cit.> defined in equation <ref>. Intuitively, ADE measures the average prediction error along the trajectory, while FDE considers only the prediction error at the end point. Since Social-STGCNN outputs a bivariate Gaussian distribution as its prediction, we follow the evaluation protocol of Social-LSTM <cit.> to compare a distribution with a single target value: 20 samples are drawn from the predicted distribution, and ADE and FDE are computed with the sample closest to the ground truth. This evaluation protocol was adopted by several later works such as Social-GAN <cit.>.
The performance of Social-STGCNN is compared with that of other models on the ADE/FDE metrics in Table 4 <cit.>.
ADE=∑_n ∈ N∑_t ∈ T_p||p̂_t^n-p_t^n||_2/N × T_p
FDE=∑_n ∈ N||p̂_t^n-p_t^n||_2/N , t = T_p
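A short numpy sketch of these two metrics and of the best-of-K evaluation protocol described above; the array shapes (pedestrians, time steps, 2D positions) are our own convention.

import numpy as np

def ade_fde(pred, gt):
    # pred, gt: (N, T_p, 2) predicted and ground-truth positions
    d = np.linalg.norm(pred - gt, axis=-1)    # (N, T_p) per-step displacement errors
    return d.mean(), d[:, -1].mean()          # ADE over all steps, FDE at the final step

def best_of_k(samples, gt):
    # samples: (K, N, T_p, 2); keep the sample closest to the ground truth (best-of-20 protocol)
    scores = [ade_fde(s, gt) for s in samples]
    return min(scores, key=lambda pair: pair[0])

rng = np.random.default_rng(0)
gt = rng.normal(size=(5, 12, 2))
samples = gt[None] + 0.1 * rng.normal(size=(20, 5, 12, 2))
print(best_of_k(samples, gt))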
99
l1
LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
l2
Gu J, Wang Z, Kuen J, et al. Recent advances in convolutional neural networks[J]. Pattern recognition, 2018, 77: 354-377.
l3
Lawrence S, Giles C L, Tsoi A C, et al. Face recognition: A convolutional neural-network approach[J]. IEEE transactions on neural networks, 1997, 8(1): 98-113.
l4
Ciresan D C, Meier U, Masci J, et al. Flexible, high performance convolutional neural networks for image classification[C]//Twenty-second international joint conference on artificial intelligence. 2011.
l5
Dalal N, Triggs B. Histograms of oriented gradients for human detection[C]//2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05). Ieee, 2005, 1: 886-893.
l6
Fan J, Xu W, Wu Y, et al. Human tracking using convolutional neural networks[J]. IEEE transactions on Neural Networks, 2010, 21(10): 1610-1623.
l7
Huang W, Qiao Y, Tang X. Robust scene text detection with convolution neural network induced mser trees[C]//Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part IV 13. Springer International Publishing, 2014: 497-511.
l8
Chen Y. Convolutional neural network for sentence classification[D]. University of Waterloo, 2015.
l9
Donahue J, Jia Y, Vinyals O, et al. Decaf: A deep convolutional activation feature for generic visual recognition[C]//International conference on machine learning. PMLR, 2014: 647-655.
l10
Zhu H, Li X, Zhang P, et al. Learning tree-based deep model for recommender systems[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 1079-1088.
l11
Niepert M, Ahmed M, Kutzkov K. Learning convolutional neural networks for graphs[C]//International conference on machine learning. PMLR, 2016: 2014-2023.
l12
Gao H, Wang Z, Ji S. Large-scale learnable graph convolutional networks[C]//Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018: 1416-1424.
l13
Atwood J, Towsley D. Diffusion-convolutional neural networks[J]. Advances in neural information processing systems, 2016, 29.
l14
Diao Z, Wang X, Zhang D, et al. Dynamic spatial-temporal graph convolutional neural networks for traffic forecasting[C]//Proceedings of the AAAI conference on artificial intelligence. 2019, 33(01): 890-897.
l15
Qi S, Wang W, Jia B, et al. Learning human-object interactions by graph parsing neural networks[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 401-417.
l16
Xu B, Cen K, Huang J. A Survey on Graph Convolutional Neural Network[J]. Chinese Journal of Computers,2020,43(5):755-780. DOI:10.11897/SP.J.1016.2020.00755.
l17
Wang J, Kong L, Huang Z, et al. Survey of Graph Neural Network[J]. Computer Engineering,2021,47(4):1-12. DOI:10.19678/j.issn.1000-3428.0058382.
l18
Ma S, Liu J, Zuo X. Survey on Graph Neural Networ[J]. Journal of Computer Research and Development,2022,59(1):47-80. DOI:10.7544/issn1000-1239.20201055.
l19
Perozzi B, Al-Rfou R, Skiena S. Deepwalk: Online learning of social representations[C]//Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. 2014: 701-710.
l20
Yang Z, Cohen W, Salakhudinov R. Revisiting semi-supervised learning with graph embeddings[C]//International conference on machine learning. PMLR, 2016: 40-48.
l21
Veličković P, Cucurull G, Casanova A, et al. Graph attention networks[J]. arXiv preprint arXiv:1710.10903, 2017.
l22
Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks[J]. arXiv preprint arXiv:1609.02907, 2016.
l23
Xu B, Shen H, Cao Q, et al. Graph wavelet neural network[J]. arXiv preprint arXiv:1904.07785, 2019.
liu4
Yan S, Xiong Y, Lin D. Spatial temporal graph convolutional networks for skeleton-based action recognition[C]//Thirty-second AAAI conference on artificial intelligence. 2018.
l24
Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.
l25
Zhang S, Xie L. Improving attention mechanism in graph neural networks via cardinality preservation[C]//IJCAI: Proceedings of the Conference. NIH Public Access, 2020, 2020: 1395.
l26
Zhang J, Shi X, Xie J, et al. GaAn: Gated attention networks for learning on large and spatiotemporal graphs[J]. arXiv preprint arXiv:1803.07294, 2018.
l27
Zhang R, Zou Y, Ma J. Hyper-SAGNN: a self-attention based graph neural network for hypergraphs[J]. arXiv preprint arXiv:1911.02613, 2019.
l28
Nguyen D Q, Nguyen T D, Phung D. Universal graph transformer self-attention networks[C]//Companion Proceedings of the Web Conference 2022. 2022: 193-196.
l29
Chang H, Rong Y, Xu T, et al. Spectral graph attention network with fast eigen-approximation[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2021: 2905-2909.
l30
Knyazev B, Taylor G W, Amer M. Understanding attention and generalization in graph neural networks[J]. Advances in neural information processing systems, 2019, 32.
l31
Xu K, Hu W, Leskovec J, et al. How powerful are graph neural networks?[J]. arXiv preprint arXiv:1810.00826, 2018.
l32
Kipf T N, Welling M. Variational graph auto-encoders[J]. arXiv preprint arXiv:1611.07308, 2016.
l33
Zhu D, Cui P, Wang D, et al. Deep variational network embedding in wasserstein space[C]//Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018: 2827-2836.
l34
Wang D, Cui P, Zhu W. Structural deep network embedding[C]//Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining. 2016: 1225-1234.
l35
Tu K, Cui P, Wang X, et al. Deep recursive network embedding with regular equivalence[C]//Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018: 2357-2366.
l36
Berg R, Kipf T N, Welling M. Graph convolutional matrix completion[J]. arXiv preprint arXiv:1706.02263, 2017.
l37
Pan S, Hu R, Long G, et al. Adversarially regularized graph autoencoder for graph embedding[J]. arXiv preprint arXiv:1802.04407, 2018.
l38
Bojchevski A, Günnemann S. Deep gaussian embedding of graphs: Unsupervised inductive learning via ranking[J]. arXiv preprint arXiv:1707.03815, 2017.
liu8
Zhang M, Cui Z, Neumann M, et al. An end-to-end deep learning architecture for graph classification[C]//Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1).
liu9
Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks[J]. arXiv preprint arXiv:1609.02907, 2016.
liu10
Alahi A, Goel K, Ramanathan V, et al. Social lstm: Human trajectory prediction in crowded spaces[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 961-971.
liu11
Pellegrini S, Ess A, Schindler K, et al. You'll never walk alone: Modeling social behavior for multi-target tracking[C]//2009 IEEE 12th international conference on computer vision. IEEE, 2009: 261-268.
liu12
Gupta A, Johnson J, Fei-Fei L, et al. Social gan: Socially acceptable trajectories with generative adversarial networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2255-2264.
liu13
Li J, Ma H, Tomizuka M. Conditional generative neural system for probabilistic trajectory prediction[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 6150-6156.
liu14
Liang J, Jiang L, Niebles J C, et al. Peeking into the future: Predicting future person activities and locations in videos[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 5725-5734.
st
Mohamed A, Qian K, Elhoseiny M, et al. Social-stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 14424-14432.
liu7
Mohamed A, Qian K, Elhoseiny M, et al. Social-stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 14424-14432.
GCN
Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks[J]. arXiv preprint arXiv:1609.02907, 2016.
DR
Shi M, Tang Y, Zhu X, et al. Multi-class imbalanced graph convolutional network learning[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20). 2020.
GraphSMOTE
Zhao T, Zhang X, Wang S. Graphsmote: Imbalanced node classification on graphs with graph neural networks[C]//Proceedings of the 14th ACM international conference on web search and data mining. 2021: 833-841.
GNN-INCM
Huang Z, Tang Y, Chen Y. A graph neural network-based node classification model on class-imbalanced graph data[J]. Knowledge-Based Systems, 2022, 244: 108538.
|
http://arxiv.org/abs/2307.02258v1
|
20230705125614
|
On the Futaki invariant of Fano threefolds
|
[
"Lars Martin Sektnan",
"Carl Tipler"
] |
math.AG
|
[
"math.AG",
"math.DG",
"14J45 (Primary), 53C55 (Secondary)"
] |
We study the zero locus of the Futaki invariant on K-polystable Fano threefolds, seen as a map from the Kähler cone to the dual of the Lie algebra of the reduced automorphism group. We show that, apart from families 3.9, 3.13, 3.19, 3.20, 4.2, 4.4, 4.7 and 5.3 of the Iskovskikh–Mori–Mukai classification of Fano threefolds, the Futaki invariant of such manifolds vanishes identically on their Kähler cone. In all cases, when the Picard rank is greater or equal to two, we exhibit explicit 2-dimensional differentiable families of Kähler classes containing the anti-canonical class and on which the Futaki invariant is identically zero. As a corollary, we deduce the existence of non Kähler–Einstein cscK metrics on all such Fano threefolds.
On the Futaki invariant of Fano threefolds
Lars Martin Sektnan, Carl Tipler
August 1, 2023
==========================================
§ INTRODUCTION
The Futaki invariant was introduced by Akito Futaki (<cit.>) as an obstruction to the existence of Kähler–Einstein metrics on Fano manifolds. Its definition extends to any compact polarised Kähler manifold, and its vanishing is a necessary condition for the existence of a constant scalar curvature Kähler metric (cscK for short) in a given Kähler class.
In this note, we study the zero locus of the Futaki invariant, seen as a map from the Kähler cone to the dual of the Lie algebra of the reduced automorphism group (see Section <ref> for the definitions). This locus is fully understood for Fano surfaces from the works <cit.>, which we recall in Section <ref>. Here we will focus on K-polystable Fano threefolds. The description of this class of manifolds has seen recently great progress, in particular with <cit.> (see also references therein).
Relying on a case by case analysis, our little contribution to Fanography is the following :
Let (X,-K_X) be a K-polystable Fano threefold that belongs to family N°, with
∉{ 3.9, 3.13, 3.19, 3.20, 4.2, 4.4, 4.7, 5.3 }.
Then, the Futaki invariant of X vanishes identically on its Kähler cone.
Note that when (X) is finite or when the Picard rank ρ(X)=1, the Futaki invariant vanishes identically on the Kähler cone, as soon as X is K-polystable in the second case. From the classification in <cit.>, there exists 33 families of Fano threefolds with ρ(X)≥ 2 that admit members which are K-polystable with respect to the anti-canonical polarisation and which have infinite automorphism group. We verify that of these, only 8 families might have members with classes on which the Futaki invariant does not vanish. Further, for these 8 families, we provide explicit 2-dimensional families of Kähler classes that contain c_1(X) and on which the Futaki invariant vanishes.
Let (X,-K_X) be a K-polystable Fano threefold that belongs to family N°, with
∈{ 3.9, 3.13, 3.19, 3.20, 4.2, 4.4, 4.7, 5.3 }.
Then, there is at least a 2-dimensional family of Kähler classes on X, containing c_1(X), where the Futaki invariant vanishes.
From the LeBrun–Simanca openness theorem (<cit.>), we deduce the following corollary.
Let X be a K-polystable Fano threefold with Picard rank ρ(X)≥ 2. Then X admits a 2-dimensional family of cscK metrics parametrised by a 2-dimensional family of Kähler classes containing c_1(X).
For K-polystable members of the families ∈{ 4.2, 4.4, 4.7 } with infinite automorphism group, we actually show that there is a 3-dimensional family of Kähler classes near -K_X with vanishing Futaki invariant.
Our results should be compared with the recent <cit.>, where the Futaki invariant of Bott manifolds is studied. In contrast to our results, which guarantee the vanishing of the Futaki invariant in many cases, it is shown in <cit.> that the only Bott manifolds for which the Futaki invariant vanishes on the whole Kähler cone are isomorphic to products of projective lines. The key observation to prove our results is the existence of enough discrete symmetries that preserve every Kähler class on Fano threefolds, which in the majority of the cases considered here will be responsible for the vanishing of the Futaki invariant.
§.§ Acknowledgments
The authors would like to thank Hendrik Süß for kindly answering our questions on complexity one Fano threefolds.
CT is partially supported by the grants MARGE ANR-21-CE40-0011 and BRIDGES ANR–FAPESP ANR-21-CE40-0017. LMS is funded by a Marie Skłodowska-Curie Individual Fellowship, funded from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101028041.
§.§ Notations and conventions
Throughout the paper, for a compact Kähler manifold X, we will denote by (X) (respectively _0(X)) its automorphism group (respectively the connected component of the identity of the reduced automorphism group of X), and by (X) the Lie algebra of (X). If Z⊂ X is a subvariety (not necessarily connected), (X,Z) stands for elements in (X) that leave Z globally invariant. We denote by _X the Kähler cone of X. We will identify a divisor D with (D), and use the notation c_1(D) for its first Chern class.
§ PRELIMINARIES
Let X be a compact Kähler manifold, and Ω∈_X a Kähler class on X. We denote the Futaki invariant of (X,Ω) by
_(X,Ω) : _0(X) →ℂ, v ↦ -∫_X f_v,g scal_g dμ_g,
where _0(X) is the Lie algebra of the reduced automorphism group of X, g denotes a Kähler metric with Kähler form in Ω and volume form d μ_g, f_v,g is the normalised holomorphy potential of v with respect to g, and scal_g denotes the scalar curvature of g (see e.g. <cit.>, <cit.> or <cit.> for this formulation of the Futaki invariant, initially introduced in <cit.>).
We will be interested in K-polystable Fano manifolds, or equivalently Fano manifolds admitting a Kähler–Einstein metric of positive curvature by the resolution of the Yau–Tian–Donaldson conjecture (<cit.>). For such manifolds, by Matsushima's result (<cit.>), and from Bochner's formula (see <cit.>), we have _0(X)=(X). We will therefore consider the Futaki invariant as a map
_X : _X →(X)^*.
By construction, _X vanishes on any class that admits a cscK metric, and it is then straightforward that _X≡ 0 whenever X is a K-polystable Fano manifold with Picard rank 1, or when the automorphism group of X is finite.
§.§ The case of smooth Del Pezzo surfaces
We refer here the reader to <cit.> and <cit.>.
If X is a smooth Del Pezzo surface with infinite automorphism group, then K_X^2∈{ 6, 7 , 8 , 9 }. Moreover, it is K-polystable and of Picard rank ρ(X)≥ 2 if and only if X=^1×^1 or K_X^2=6, i.e. when X is a blow-up of ^2 along three non-collinear points (<cit.>). In the first case, X admits a product cscK metric in each class, and _X≡ 0, while in the latter case, the vanishing locus of _X is described in <cit.> (see Section <ref> for the exact description).
§.§ Further properties of the Futaki invariant
The key property that we will use is the invariance of _X under the (X)-action. This was already used in <cit.> to show the vanishing of _X on specific examples.
We will use the following proposition repeatedly.
Let (X,Ω) be a polarised Fano manifold. Assume that there is τ∈(X) and v∈(X) such that
(i) τ^*Ω=Ω,
(ii) there is c∈^*∖{ 1} with _τ(v)=c· v.
Then Fut_(X,Ω)(v)=0.
This follows from the Ad-invariance of the Futaki invariant, which implies that
_(X,(τ^-1)^*Ω)(_τ(v))=_(X,Ω)(v),
see <cit.> or <cit.>. Since τ^*Ω=Ω, the left-hand side equals _(X,Ω)(_τ(v))=c·_(X,Ω)(v), so that (1-c)_(X,Ω)(v)=0, and hence _(X,Ω)(v)=0 as c≠ 1.
The anti-canonical class c_1(X) is always (X)-invariant.
As an application, we have the following useful corollary :
Let π : X → Y be the blow-up of a smooth Fano manifold Y along smooth and disjoint subvarieties Z_i⊂ Y. Assume that there is a finite group G⊂(Y) such that :
(i) Each Z_i is G-invariant;
(ii) Each class Ω∈ H^1,1(Y,) is G-invariant;
(iii) For any v∈(Y) that lifts to X, there is τ∈ G and c∈^*∖{ 1 } such that _τ(v)=c· v.
Then _X≡ 0.
From hypothesis (i), the G-action on Y lifts to a G-action on X. The vector space H^1,1(X,) is spanned by the pullback of the classes in H^1,1(Y,) and the exceptional divisors of π. By hypothesis (i) and (ii), any class in H^1,1(X,) is then G-invariant. The Lie algebra (X) is spanned by lifts of elements in (Y) that preserve the Z_i's. For any such element, the identity _τ(v)=c· v holds on X∖⋃_i π^-1(Z_i), hence on X, by continuity. The result follows from Proposition <ref>.
In practice, we will mainly use Corollary <ref> with
G≃/2, (X)≃, c=-1.
To prove item (i) of Proposition <ref> or item (ii) of Corollary <ref>, we will use the fact that in homogeneous coordinates, the Fubini–Study metric
_FS=i/2∂∂̄log(| z |^2),
and hence its class [_FS]∈ H^1,1(^n,), is invariant under the 𝔖_n+1-action on ^n by permutation of the homogeneous coordinates.
§.§ The list to check
From the discussion in the beginning of this section, to prove Theorem <ref>, it is enough to consider K-polystable Fano threefolds with infinite automorphism group and Picard rank ρ(X)≥ 2. From <cit.>, this reduces to Fano threefolds in family N°, for
∈{[ 2.20, 2.21, 2.22,2.24, 2.27, 2.29, 2.32 ,2.34 ,3.5 , 3.8, 3.9,; 3.10, 3.12 ,3.13, 3.15, 3.17 , 3.19, 3.20, 3.25 , 3.27, 4.2, 4.3,; 4.4,
4.6 ,4.7, 4.13 , 5.1 ,5.3, 6.1, 7.1, 8.1, 9.1, 10.1 ]}.
The strategy of the proof is then direct – we will use the invariance of _X to show its vanishing on _X using a case by case study. For X belonging to family N° with
∈{[ 2.20, 2.21, 2.22,2.24, 2.27, 2.29, 2.32 ,2.34 ,; 3.5 , 3.8,3.10, 3.12 , 3.15, 3.17 , 3.25 , 3.27,; 4.3,4.6 , 4.13 , 5.1 , 6.1, 7.1, 8.1, 9.1, 10.1 ]},
we will see that _X≡ 0.
For the remaining 8 families, we will obtain explicit Kähler classes of the form
c_1(X)+ε c_1(D)∈_X
with D⊂ X a divisor and ε∈ℝ a parameter such that _(X,c_1(X)+ε c_1(D))=0.
§ FAMILIES WITH (X)≃𝔰𝔩_N()
Here we will consider families N°, with
∈{ 2.27, 2.32, 3.17, 4.6, 6.1, 7.1, 8.1, 9.1, 10.1 }.
The Lie algebra 𝔰𝔩_n() is simple, hence equal to its derived ideal [𝔰𝔩_n(),𝔰𝔩_n()]. As the Futaki invariant is a character from (X) to (see e.g. <cit.> or <cit.>), it vanishes on the derived ideal [(X),(X)]. Hence, if (X)≃𝔰𝔩_n(), [(X),(X)]=(X) and the Futaki invariant vanishes identically on the whole Kähler cone of X. From
PGL_n()≃SL_n()/μ_n,
the Lie algebra of PGL_n() is 𝔰𝔩_n(). From <cit.>, this settles the case of all the K-polystable Fano threefolds in families N°, with
∈{ 2.27, 2.32, 3.17, 4.6, 6.1, 7.1, 8.1, 9.1, 10.1},
and also some cases in families { 2.21, 3.13 }.
§ PRODUCTS
Next, we consider families N°2.34, N°3.27 and N°5.3, which are products of lower dimensional Fano manifolds.
§.§ Families 2.34 and 3.27
The unique members in these two families are ^1×^2 and ^1 ×^1 ×^1, which both carry a product of cscK metrics in any class, and thus has vanishing Futaki character for any Kähler class.
§.§ Family 5.3
The unique Fano threefold in family 5.3 is ^1× S_6, where S_6 is the Del Pezzo surface with K_S_6^2=6. It is K-polystable as a product of Kähler–Einstein manifolds from <cit.>. The surface S_6 is the unique (up to isomorphism) toric surface obtained by blowing-up ^2 in the three fixed points under the torus action. We denote by H (the strict transform of) a generic hyperplane and D_1, D_2 and D_3 the three exceptional divisors in S_6. From <cit.>, the Futaki invariant of S_6 vanishes exactly in the following families of Kähler classes
3c_1(H)-ac_1(D_1)-bc_1(D_2)-(3-a-b)c_1(D_3)
and
3c_1(H)-c(c_1(D_1)+c_1(D_2)+c_1(D_3)),
where a,b,c are positive constants satisfying a+b<3 and c<3/2. As the Futaki invariant vanishes on ^1, we easily deduce the vanishing locus of the Futaki invariant on X=^1× S_6. In particular, as c_1(X)=c_1(^1)+c_1(S_6), and as c_1(S_6)=3c_1(H)-(c_1(D_1)+c_1(D_2)+c_1(D_3)), we deduce the existence of differentiable families of Kähler classes on X containing c_1(X) for which the Futaki invariant vanishes identically.
§ BLOW-UPS OF PROJECTIVE SPACE
In this section we address families N°, with
∈{ 2.22, 3.12, 3.25}.
All the members of these families are obtained by blowing up certain curves in projective space ^3.
§.§ Family 2.22
Members of the family 2.22 of Fano threefolds are obtained as blowups of certain curves in ^3. More precisely, let Φ : ^1 ×^1 →^3 be the Segre embedding
([x:y],[u:v]) ↦ [xu : xv : yu : yv].
The image of Φ is the surface
S={ z_0 z_3 - z_1 z_2=0 }.
A Fano threefold X is in the family 2.22 if it is the blowup of the image via Φ of a curve Č with 𝒪(Č) = 𝒪(3,1). Such X have Picard rank 2, with Picard group generated by the line bundle associated to the proper transform of a hyperplane and by that associated to the exceptional divisor E of the blowup.
Up to biholomorphism, there is a unique member X_0 of this family with infinite automorphism group.
It is K-polystable, and can be obtained by picking the curve Č to be Č_0 = { ux^3-vy^3=0}, so that
X_0 = _C_0^3,
where C_0 = Φ(Č_0). The ^*-action
λ·([z_0 : z_1 : z_2 : z_3]) = [λ z_0 : λ^4 z_1 : z_2 : λ^3 z_3]
preserves C_0 and so lifts to X_0. This generates _0 (X_0) (see <cit.>).
The curve C_0 is a rational curve, which can e.g. be seen by applying the Riemann–Hurwitz formula to the restriction to Č_0 ⊂^1 ×^1 of the projection to the second factor. An explicit parametrisation is given by
[τ_0 : τ_1] ↦ [τ_0τ_1^3 : τ_0^4 : τ_1^4 : τ_1 τ_0^3].
Note that the action of the involution τ given by
τ·([z_0 : z_1 : z_2 : z_3]) = [z_3 : z_2 : z_1 : z_0]
on ^3 preserves C_0 and so lifts to X_0. We can then apply Corollary <ref> to the blow-up X_0→^3 with the group G=⟨τ⟩≃/2, which implies the vanishing of the Futaki invariant on the Kähler cone of X_0.
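As a quick illustration (not part of the argument, which is immediate by hand), the following sympy sketch checks that τ maps the point of C_0 with parameter [τ_0:τ_1] to the point with parameter [τ_1:τ_0], and that τ∘λ∘τ agrees with the λ^{-1}-action up to the overall scale λ^4; all variable names are ours.

import sympy as sp

z0, z1, z2, z3, lam, s, t = sp.symbols('z0 z1 z2 z3 lam s t')

lam_act = lambda p: (lam * p[0], lam**4 * p[1], p[2], lam**3 * p[3])   # the C^*-action
tau = lambda p: (p[3], p[2], p[1], p[0])                               # the involution

# tau sends the point of C_0 with parameter [s:t] to the point with parameter [t:s]
c0 = (s * t**3, s**4, t**4, t * s**3)
print(all(sp.simplify(a - b.xreplace({s: t, t: s})) == 0 for a, b in zip(tau(c0), c0)))

# tau∘lam∘tau agrees with the lam^{-1}-action up to the overall scale lam^4
p = (z0, z1, z2, z3)
conj = tau(lam_act(tau(p)))
inv = tuple(e.xreplace({lam: 1 / lam}) for e in lam_act(p))
print({sp.simplify(a / b) for a, b in zip(conj, inv)})                  # {lam**4}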
§.§ Family 3.12
From <cit.>, the only element in Family 3.12 with infinite automorphism group is given, up to isomorphism, by X=Bl_L∪ C(^3) the blow up of ^3 along the disjoint curves
L={ x_0=x_3=0 }⊂^3
and
C={ [s^3 : s^2 t : st^2 : t^3 ], [s : t]∈^1 }⊂^3.
The reduced automorphism group of X is isomorphic to ^*, and its action is given by the lift of the ^*-action on ^3 described by
λ· ([x_0:x_1:x_2:x_3] ) =[x_0 : λ x_1 : λ^2 x_2 : λ^3 x_3] .
Then, we can consider the /2-action given by
τ([x_0:x_1 : x_2 : x_3]) = [x_3:x_2:x_1:x_0].
The group generated by τ in (^3) satisfies hypothesis (i)-(iii) from Corollary <ref>, and we deduce that the Futaki invariant of X vanishes on the whole Kähler cone.
§.§ Family 3.25
The Fano threefold X in family 3.25 is the blow-up of ^3 in two disjoint lines. It is K-polystable from <cit.>. We can assume the two blown-up lines are { x_1=x_2=0}⊂^3 and { x_3=x_4=0 }⊂^3. One has
_0(X)≃_(2,2)()≃_2()×_2()/^*,
where the first (resp. second) _2() factor acts linearly on the coordinates (x_1,x_2) (resp. on (x_3,x_4)) while the ^*-action corresponds to homotheties on ^4 (see <cit.>). The Lie algebra (X) of _0(X) fits in an exact sequence
0 →→_2()⊕_2() →(X) → 0.
We also have the sequence induced by the trace map _2() → :
0→_2() →_2() →→ 0,
from which we deduce the sequence of vector spaces
0 →→ (_2()⊕)⊕(_2()⊕) →(X) → 0.
From the discussion in Section <ref>, the Futaki invariant of X will vanish on the _2()-factors that project to (X). Hence, it is enough to test the vanishing of the Futaki invariant on the generators of the remaining two ^*-actions modulo homotheties, which are induced by:
(λ,μ)· ([x_0:x_1:x_2:x_3])=[λ x_0: x_1 : μ x_2 : x_3],
where (λ,μ)∈ (^*)^2. We can consider the finite group G generated by the reflections
τ( [x_0:x_1:x_2:x_3])= [x_1:x_0:x_2:x_3]
and
σ( [x_0:x_1:x_2:x_3])= [x_0:x_1:x_3:x_2].
This group preserves the two blown-up lines, while the adjoint action of τ (resp. σ) sends the generator of the λ-action (resp. the μ-action) to its inverse. Hence, from Corollary <ref>, we see that the Futaki invariant of X vanishes on its whole Kähler cone.
§ BLOW-UPS OF PRODUCTS OF PROJECTIVE SPACES
In this section, we will consider families N°, with
∈{ 3.5, 4.3, 4.13 }.
These are obtained as blowups of products of projective spaces.
§.§ Family 3.5
From <cit.>, the only element in Family 3.5 with infinite automorphism group is given, up to isomorphism, by X=Bl_C(^1×^2) the blow up of ^1×^2 along the curve C=ψ(Č) given by the image of
Č={ ux^5+vy^5=0 }⊂^1×^1
via the map
[ ψ : ^1×^1 → ^1×^2; ([u:v],[x:y]) ↦ ([u:v],[x^2:xy:y^2]). ]
Then, _0(X)≃^*, where the ^*-action is generated by the lift to X of the action
λ·([u:v],[x_0:x_1:x_2])=([λ^5 u : v],[x_0:λ x_1:λ^2 x_2]).
We also have a /2-action induced by
τ([u:v],[x_0:x_1:x_2])=([v:u],[x_2:x_1:x_0]).
Those actions come respectively from the actions
λ·([u:v],[x:y])=([λ^5 u : v],[x:λ y])
and
τ([u:v],[x:y])=([v:u],[y:x])
on ^1×^1, with respect to which ψ is equivariant. Then, we see that C is τ-invariant, as well as the classes π_i^*[_FS^i], where π_1 : ^1×^2 →^1 and π_2 : ^1×^2 →^2 denote the projections and _FS^i stands for the Fubini–Study metric on ^i. Finally, identifying λ∈^* with its action, we have τ∘λ∘τ^-1=λ^-1. Hence, hypothesis (i)-(iii) from Corollary <ref> are satisfied, and the Futaki invariant of X vanishes for any Kähler class.
§.§ Family 4.3
Following <cit.>, up to isomorphism, the unique Fano threefold in Family 4.3 is the blow-up of ^1×^1×^1 along
C={ x_0y_1 - x_1 y_0= x_0z_1^2+x_1z_0^2=0 }
where [x_0:x_1], [y_0:y_1] and [z_0:z_1] denote the homogeneous coordinates on the first, second and last factor respectively. We have _0(X)≃^* where the action is given by the lift of the ^*-action on ^1×^1×^1 given by
λ·([x_0:x_1],[y_0:y_1],[z_0: z_1])=([x_0:λ^2x_1],[y_0:λ^2y_1],[z_0:λ z_1]).
The involution
τ ([x_0:x_1],[y_0:y_1],[z_0: z_1])=([x_1:x_0],[y_1:y_0],[z_1: z_0])
preserves C and the (1,1)-classes on C given by ι^*_j[_FS], for ι_j the composition of the inclusion C⊂^1×^1×^1 and the projection on the j-th factor. The adjoint action of τ maps the generator of the ^*-action to its inverse, so Proposition <ref> applies and the Futaki invariant of X vanishes identically.
§.§ Family 4.13
From <cit.>, the only element in Family 4.13 with infinite automorphism group is given, up to isomorphism, by X=Bl_C(^1×^1×^1) the blow up of ^1×^1×^1 along the curve
C={ x_0y_1-x_1y_0=x_0^3z_0+x_1^3z_1=0}.
The reduced automorphism group of X is isomorphic to ^*, and its action is given by the lift of the ^*-action on ^1×^1×^1 described by
λ· ([x_0:x_1],[y_0:y_1],[z_0:z_1] ) =([λ x_0:x_1],[λ y_0:y_1],[λ^-3 z_0:z_1] ).
Then, we can consider the /2-action given by
τ([x_0:x_1],[y_0:y_1],[z_0:z_1]) = ([x_1:x_0],[y_1:y_0],[z_1:z_0]).
Clearly, this action satisfies hypothesis (i)-(iii) from Corollary <ref> (notice that τ∘λ∘τ^-1=λ^-1, identifying λ with the induced action), from which we deduce the vanishing of the Futaki invariant of X for any Kähler class.
§ BLOW-UPS OF A SMOOTH QUADRIC
In this section, we consider families N°, with
∈{ 2.21, 2.29, 3.10, 3.15, 3.19, 3.20, 4.4, 5.1 }.
§.§ Family 2.21
This family is somewhat similar to the Mukai–Umemura family 1.10. In addition to members of the family with discrete automorphism group, there is a one-dimensional family with automorphism group containing a semi-direct product of ^* and /2, one member which admits an effective PGL_2-action and one member which has a reduced automorphism group 𝔾_a. The first two of these are K-polystable for the anti-canonical polarisation, whereas the last does not have a reductive automorphism group and is therefore not K-polystable.
The members that admit an effective 𝔾_m-action can be described as follows (see <cit.>). Let C be the quartic rational curve in ^4 given as the image of the map ^1 →^4 given by
[p:q] ↦ [p^4: p^3 q : p^2q^2 : pq^3 : q^4].
For t ∉{0, ± 1}, let Q_t be the smooth hypersurface
Q_t = V(z_1z_3 -t^2z_0z_4 + (t^2-1)z_2^2).
Note that C ⊂ Q_t for any t. Let X_t = _C( Q_t). Then X_t is one of the members that admit an effective ^*-action (including the member with an effective PGL_2-action, which corresponds to t=±1/2). Note that X_t has Picard rank 2, generated by a hyperplane H and the exceptional divisor E of the blowup.
The ^*-action given by
λ·([z_0:z_1:z_2:z_3:z_4]) = [z_0: λ z_1: λ^2 z_2: λ^3 z_3: λ^4 z_4]
preserves C and Q_t, as does the involution
τ([z_0:z_1 : z_2 : z_3 : z_4] )= [z_4 : z_3 : z_2 : z_1 : z_0].
The lifts of these generate the effective actions of ^*⋊/2 on X_t. As τ preserves C, the class [_FS]_| X_t, and sends a generator of the ^*-action to its inverse by conjugation, Proposition <ref> shows that the Futaki invariant of X_t vanishes on its whole Kähler cone (note that the case t=±1/2, with (X_t)≃_2(), was dealt with in Section <ref>).
§.§ Family 2.29
There is a unique smooth Fano threefold X in family 2.29. It is isomorphic to the blow-up of
Q={ x_0^2+x_1x_2 +x_3x_4=0 }⊂^4.
along the smooth conic
C={ x_0^2+x_1x_2=x_3=x_4=0 }⊂ Q.
It is K-polystable (see <cit.>) and the group _0(X) is isomorphic to ^*×_2() (see <cit.>). We then have (X)≃⊕_2(). From the discussion in Section <ref>, the Futaki invariant of (X, []) vanishes on the _2()-component of (X)
for any Kähler class []. Thus, to check the vanishing of the Futaki invariant, it remains to check the vanishing on the -component of (X). From <cit.>, the ^*-component of _0(X) can be identified with the pointwise stabiliser of C in _0(Q). This is then the ^*-action induced by
λ· ([x_0:x_1:x_2:x_3:x_4])=[x_0:x_1:x_2:λ x_3:λ^-1x_4].
We then introduce the involution
τ([x_0:x_1:x_2:x_3:x_4])=[x_0:x_2:x_1:x_4:x_3].
This automorphism of ^4 preserves Q and C and lifts to an automorphism of X.
Its adjoint action maps a generator of the ^*-action of interest to its inverse, and by Corollary <ref>, we deduce the vanishing of the Futaki invariant of X for any Kähler class.
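Again purely as an illustration (ours, not part of the proof), a short sympy check that τ preserves Q, permutes the defining equations of C, and conjugates the ^*-action to its inverse:

import sympy as sp

x0, x1, x2, x3, x4, lam = sp.symbols('x0 x1 x2 x3 x4 lam')

Q = x0**2 + x1 * x2 + x3 * x4
conic = (x0**2 + x1 * x2, x3, x4)                        # equations cutting out C

lam_act = lambda p: (p[0], p[1], p[2], lam * p[3], p[4] / lam)
tau = lambda p: (p[0], p[2], p[1], p[4], p[3])

P = (x0, x1, x2, x3, x4)
sub = dict(zip(P, tau(P)))

print(sp.simplify(Q.xreplace(sub) - Q) == 0)             # tau preserves Q
print({g.xreplace(sub) for g in conic} == set(conic))    # tau permutes the equations of C

conj = tau(lam_act(tau(P)))
inv = tuple(e.xreplace({lam: 1 / lam}) for e in lam_act(P))
print(all(sp.simplify(a - b) == 0 for a, b in zip(conj, inv)))   # tau∘lam∘tau = lam^{-1}-action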
§.§ Family 3.10
Let X be a K-polystable element in the family 3.10 such that (X) is infinite. Then, from <cit.>, up to isomorphism, we may assume that X=_C_1∪ C_2(Q_a) is the blow-up of the quadric
Q_a={ w^2+xy+zt+a(xt+yz)=0 }⊂^4
along the two disjoint smooth irreducible conics C_1⊂ Q_a and C_2⊂ Q_a given by
C_1={ w^2+zt=x=y=0 }
and
C_2={ w^2+xy=z=t=0 }
where [x,y,z,t,w] stand for the homogeneous coordinates on ^4 and where a∈∖{ -1, +1 } is a complex parameter. Moreover, for a=0, _0(X)≃ (^*)^2 and for a≠ 0, _0(X)≃^*.
§.§.§ Case a=0
In this situation, the (^*)^2-action on X is the lift of the action on Q_0 induced by the following formula, for (α,β)∈(^*)^2 :
(α,β)· ([x:y:z:t:w])=[α x : α^-1y : β z : β^-1 t : w].
Consider the group G=/2×/2 generated by (σ,τ) defined by
σ([x:y:z:t:w])=[y:x:z:t:w]
and
τ([x:y:z:t:w])=[x:y:t:z:w].
Then G ⊂(Q_0), and G preserves C_1 and C_2. It also leaves invariant the class ι^*[_FS] on Q_0, where ι : Q_0 →^4 denotes the inclusion and _FS the Fubini–Study metric. Hence, hypothesis (i) and (ii) of Corollary <ref> are satisfied. Finally, _σ(v_1)=-v_1 and _τ(v_2)=-v_2, where v_1 generates the ^*-action α↦ [α x : α^-1y : z : t : w] while v_2 generates the ^*-action β↦ [x : y : β z : β^-1 t : w]. Then, Corollary <ref> implies the vanishing of the Futaki invariant on X for any class.
§.§.§ Case a≠ 0
The same argument as in the previous case applies, where this time the ^*-action of _0(X) is induced by the diagonal of the above, given by
α· ([x:y:z:t:w])=([α x : α^-1y : α z : α^-1 t : w]).
and the group G≃/2 is generated by
ς([x:y:z:t:w])=([y:x:t:z:w]).
§.§ Family 3.15
From <cit.>, the only smooth K-polystable Fano threefold in family 3.15 is given by the blow-up X=_L∪ C(Q)→ Q of the quadric
Q={ x_0^2+2x_1x_2+2x_1x_4+2x_2x_3 =0}⊂^4
along the line
L={ x_0=x_1=x_2=0 }
and the smooth conic (disjoint from L)
C={ x_0^2+2x_1x_2=x_3=x_4=0 }.
The automorphism group of X satisfies _0(X)≃^* with ^*-action given, for λ∈^*, by (the lift of)
λ·([x_0:x_1:x_2:x_3:x_4])=[λ x_0 : λ^2 x_1 : x_2 : λ^2 x_3 : x_4 ].
The involution
τ([x_0:x_1:x_2:x_3:x_4])=[x_0:x_2:x_1:x_4:x_3]
preserves Q, L and C. It also leaves the class ι^*[_FS] invariant, where ι : Q →^4 is the inclusion. Then, Corollary <ref> applies to X→ Q and G=⟨τ⟩≃/2, so that the Futaki invariant of X identically vanishes on _X.
§.§ Families 3.19 and 3.20
Consider the smooth quadric Fano threefold
Q={ x_0^2+x_1x_2 +x_3x_4=0 }⊂^4.
The family 3.19 (resp. 3.20) is obtained by blowing-up Q in two points (respectively two disjoint lines). More precisely, we can obtain the unique Fano threefold in family 3.19 by considering X_1 to be the blow-up of Q along the points
P_1=[0:0:0:1:0]
and
P_2=[0:0:0:0:1].
The unique Fano threefold X_2 in family 3.20 is the blow-up of Q along the two disjoint lines
L_1={ x_0=x_1=x_3=0 }
and
L_2={ x_0=x_2=x_4=0 }.
In both cases, the Fano threefold X_i is K-polystable (see <cit.>) and the group _0(X_i) is isomorphic to ^*×_2() (see <cit.>). We then have (X_i) = ⊕_2(). From the discussion in Section <ref>, the Futaki invariant of (X_i, [_i]) vanishes on the _2()-component of (X_i)
for any Kähler class [_i]. Therefore, to check the vanishing of the Futaki invariant on (X_i,[_i]), it remains to check the vanishing on the -component of (X_i).
To this aim we introduce the involution
τ([x_0:x_1:x_2:x_3:x_4])=[x_0:x_2:x_1:x_4:x_3].
This automorphism of ^4 preserves Q, and swaps the two connected components of the blown-up locus in both cases. Therefore, τ lifts to an automorphism of X_i, still denoted τ, for i∈{ 1,2}. Note that on X_i, any Kähler class of the form
[ω_ε]:=c_1(X_i)+ε (c_1((E_1^i))+c_1((E_2^i)))
is τ-invariant, where ε∈ℝ is chosen so that
c_1(X_i)+ε (c_1((E_1^i))+c_1((E_2^i)))>0
and the E_j^i's denote the exceptional divisors of the blow-up X_i → Q.
Next, we investigate how this action interacts with the generator of the ^*-component in _0 (X_i), to verify that we can apply Proposition <ref> to deduce the vanishing of the Futaki invariant. We do this for the two families separately.
§.§.§ Family 3.19
We follow the discussion in <cit.>. An automorphism of X_1 comes from an automorphism of ^4 that leaves Q and { P_1}∪{ P_2 } invariant. By linearity, such an automorphism preserves the line spanned by the two points, and thus its orthogonal complement Π={ x_3=x_4=0 }. It then leaves the conic C=Q∩Π invariant. From <cit.>, the ^*-component of _0(X_1) can be identified with the pointwise stabiliser of C in _0(Q). This is then the ^*-action given by
λ· ([x_0:x_1:x_2:x_3:x_4])=[x_0:x_1:x_2:λ x_3:λ^-1x_4].
The adjoint action of τ maps a generator of this action to its inverse, and by Proposition <ref>, we deduce the vanishing of the Futaki invariant of (X_1,[ω_ε]).
§.§.§ Family 3.20
Following the discussion in <cit.>, the ^*-component of _0(X_2) is obtained as follows. An element in _0(Q,L_1∪ L_2) must preserve the linear span of L_1 and L_2, that is Q∩{ x_0=0 }. It then leaves invariant
Q'={ x_0 = x_1 x_2 + x_3 x_4 =0 }.
The group _0(Q, L_1∪ L_2) acts on the family of lines (ℓ_t)_t∈^1 in Q' given by
[x_1:x_3] ↦ [0 : x_1 : t x_3 : x_3 : -tx_1] ⊂ Q'.
The ^*-component of _0(X) then corresponds to the stabiliser of the lines L_1=ℓ_∞ and L_2=ℓ_0 under this action. In coordinates, the action is given by
λ· ([x_0:x_1:x_2:x_3:x_4 ])=[λ x_0:x_1:λ^2 x_2: x_3:λ^2 x_4 ].
As with family 3.19, using the τ-action and the Ad-invariance of the Futaki invariant, we can conclude that the Futaki invariant of (X_2,[ω_ε]) vanishes.
§.§ Family 4.4
Up to isomorphism, there is a unique smooth Fano threefold X in family 4.4. Its automorphism group satisfies _0(X)≃ (^*)^2, and it is K-polystable from <cit.>. Recall that the smooth Fano threefold X_1 in family 3.19 can be obtained as a blow-up along two points of a smooth quadric Q⊂^4. We can then realise the manifold X as the blow-up of X_1 along the proper transform of the conic that passes through the blown-up points in Q. Coming back to our parametrisation in Section <ref>, we can take Q⊂^4 to be
Q={ x_0^2+x_1x_2 +x_3x_4=0 }⊂^4
and the blown-up points to be
P_1=[0:0:0:1:0]
and
P_2=[0:0:0:0:1].
Then, the conic in Q joining P_1 and P_2 is
_1={ x_1=x_2=x_0^2+x_3x_4 =0 }⊂ Q.
The (^*)^2-action on Q that lifts to X through the two blow-up maps X→ X_1 → Q is given in coordinates by
(λ,μ)·([x_0:x_1:x_2:x_3:x_4])=[x_0:λ x_1:λ^-1x_2:μ x_3:μ^-1x_4].
Again, the involution
τ([x_0:x_1:x_2:x_3:x_4])=[x_0:x_2:x_1:x_4:x_3]
preserves C_1 and swaps the blown-up points. Arguing as before, we see that the Futaki invariant of X will vanish in classes of the form
c_1(X)+ε c_1(F) + δ (c_1(E_1)+c_1(E_2))
for (ε,δ)∈ℝ^2 small enough, where F denotes the exceptional divisor of X→ X_1, while E_1 and E_2 are the strict transforms of the exceptional divisors of X_1→ Q. Note that after scaling, this gives a 3-dimensional family in the Kähler cone of X.
§.§ Family 5.1
From <cit.>, the unique smooth Fano threefold X in family 5.1 is K-polystable. It can be described as follows. Consider first the smooth quadric in ^4
Q={ x_1x_2+x_2x_3+x_3x_1+x_4x_5 = 0 }⊂^4
where we denote by [x_1:x_2:x_3:x_4:x_5] the homogeneous coordinates on ^4. We then fix a smooth conic C=Q∩{ x_4=x_5=0 }⊂ Q and points P_1=[1:0:0:0:0], P_2=[0:1:0:0:0] and P_3=[0:0:1:0:0] in Q. Let Y → Q the blow-up of Q in the three points (P_i)_1≤ i≤ 3 and Č the strict transform of C in Y. Then, X is obtained as the blow-up of Y along Č. Its automorphism group satisfies _0(X)≃^*, where the ^*-action is the lift of the action defined on Q by
λ· ([x_1:x_2:x_3:x_4:x_5]) =[λ x_1:λ x_2:λ x_3:λ^2 x_4:x_5].
The manifold X also admits an involution which is the lift of the involution τ defined on Q by
τ ([x_1:x_2:x_3:x_4:x_5]) = [x_1:x_2:x_3:x_5:x_4].
We observe that τ preserves the Kähler class associated to the hyperplane section H∩ Q and fixes C, as well as the points P_1, P_2 and P_3. Hence, all the (1,1)-classes on X are invariant under the (lifted) involution. As the adjoint action of τ maps the generator of the ^*-action to its inverse, we conclude as in <ref> the vanishing of the Futaki invariant of X for all its Kähler classes.
§ HYPERSURFACES IN ^2×^2 AND THEIR BLOW-UPS
In this section, we consider families N°, with
∈{ 2.24, 3.8, 4.7 }.
§.§ Family 2.24
From <cit.> (see also <cit.>), the only K-polystable element in Family 2.24 with infinite automorphism group is given, up to isomorphism, by
X={ xu^2+yv^2+zw^2 }⊂^2×^2.
It has _0(X)≃ (^*)^2, where the action of (α,β)∈ (^*)^2 is given by
(α,β)·([x:y:z],[u:v:w])=([α^2 x:β^2 y:z],[α^-1u:β^-1v:w]).
The group G = /2×/2 acts on ^2 ×^2, with the action of (σ,τ) ∈ G generated by
σ([x:y:z],[u:v:w])=([z:y:x],[w:v:u])
and
τ([x:y:z],[u:v:w])=([x:z:y],[u:w:v]).
Note that G⊂(X) and that the inclusion ι : X →^2×^2 is G-equivariant. Hence we deduce that ι^*[_FS^i] is G-invariant, where _FS^i denote the Fubini–Study metric on the i-th factor. Then, any Kähler class on X is G-invariant. Denote by v_1 (resp. v_2) the generator of the ^*-action α·([x:y:z],[u:v:w])=([α^2 x:y:z],[α^-1u:v:w]) (resp. β·([x:y:z],[u:v:w])=([x:β^2 y:z],[u:β^-1v:w])) on X. A direct computation shows
{[ _σ(v_1) = -(v_1+v_2); _τ(v_2) = -(v_1+v_2). ].
Using the Ad-invariance of the Futaki invariant, as discussed in Proposition <ref>, we deduce that for any Kähler class Ω on X:
{[ _(X,Ω)(v_1) = -_(X,Ω)(v_1)-_(X,Ω)(v_2); _(X,Ω)(v_2) = -_(X,Ω)(v_1)-_(X,Ω)(v_2), ].
Subtracting the two equations gives _(X,Ω)(v_1)=_(X,Ω)(v_2), and then either equation yields 3_(X,Ω)(v_1)=0; hence _X is identically zero.
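For the reader's convenience, a short sympy computation (ours, not part of the original argument) confirming the first adjoint identity; the case of τ and v_2 is identical after exchanging the roles of the two factors, and the symbol a stands for α.

import sympy as sp

x, y, z, u, v, w, a = sp.symbols('x y z u v w a')

def act(al, be, p):
    # the (C^*)^2-action on P^2 x P^2 in homogeneous coordinates
    return (al**2 * p[0], be**2 * p[1], p[2], p[3] / al, p[4] / be, p[5])

sigma = lambda p: (p[2], p[1], p[0], p[5], p[4], p[3])

P = (x, y, z, u, v, w)
lhs = sigma(act(a, 1, sigma(P)))     # sigma ∘ (action of (a, 1)) ∘ sigma
rhs = act(1 / a, 1 / a, P)           # the action of (a^{-1}, a^{-1})

# equality in P^2 x P^2: each factor agrees up to a single overall scalar
r1 = {sp.simplify(l / r) for l, r in zip(lhs[:3], rhs[:3])}
r2 = {sp.simplify(l / r) for l, r in zip(lhs[3:], rhs[3:])}
print(len(r1) == 1 and len(r2) == 1)                     # True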
§.§ Family 3.8
From <cit.>, the only element in Family 3.8 with infinite automorphism group is given, up to isomorphism, by X=Bl_C(Y) the blow up of Y along the curve C, where
Y={ (vw + u^2 )x + v^2 y + w^2 z = 0 }⊂^2×^2
is a smooth divisor of degree (1,2) and where C=π_1^-1([1:0:0]), with π_1 the projection onto the first factor of ^2×^2. The variety Y is the only element in Family 2.24 with infinite automorphism group, and
(X)≃(Y)≃^*⋊/2.
More explicitly, the ^*-action is for λ∈^* given by
λ·([x:y:z],[u:v:w])=([x:λ^-2y:λ^2z],[λ u: λ^2 v : w]),
while the /2-action is generated by τ:
τ([x:y:z],[u:v:w])=([x:z:y],[u:w:v]).
Identifying λ with the corresponding element in (Y), we have τ∘λ∘τ^-1=λ^-1, so that item (iii) in Corollary <ref> is satisfied. The inclusion ι:Y→^2×^2 is τ-equivariant, and then the classes ι^*[_FS^i] are τ-invariant, for _FS^i the Fubini–Study metric on each factor of ^2×^2. This shows that hypothesis (ii) from Corollary <ref> holds as well. Finally, the curve C is τ-invariant, and by Corollary <ref>, the Futaki character of X is identically zero on its Kähler cone.
§.§ Family 4.7
Let X be a smooth Fano threefold in family 4.7. Then it is a blow-up of a smooth divisor W of bidegree (1,1) on ^2×^2 along two disjoints curves of bidegrees (1,0) and (0,1), and it is K-polystable <cit.>. To perform computations, we will assume that
W={ xu+yv+zw = 0 }⊂^2×^2,
where [x,y,z] and [u,v,w] stand for homogeneous coordinates on the first and second factors respectively. We will denote by π_i : W →^2 the natural projection on the i-th factor. We then let C_i=π_i^-1([0:0:1])⊂ W. Then, X=_C_1∪ C_2(W) and from <cit.>, we have
_0(X)≃_2().
The isomorphism is defined as follows. First, automorphisms of X are induced by automorphisms of W that leave C_1∪ C_2 invariant. Arguing as in <cit.>, they correspond to lift of isomorphisms of ^2 that leave the set
π_1(C_1∪ C_2)={ [0:0:1]}∪{ [x: y : 0], (x,y)∈^2∖{ 0}}
invariant. Those elements are easily identified to elements in _2(). From Section <ref>, the Futaki invariant vanishes on the _2()-component in (X). We can identify a supplementary subspace of _2() in (X) by considering the lift to X of the generators of the ^*-action on ^2 given by
λ· ([x:y:z])=([λ x : y: z]).
The lift of this action to W is given by
λ· ([x:y:z],[u:v:w])=([λ x:y:z],[λ^-1u:v:w]).
We introduce the involution
τ([x:y:z],[u:v:w])=([u:v:w],[x:y:z]).
This preserves W, and swaps the curves C_1 and C_2. It also swaps the (1,1)-classes π_1^*[_FS] and
π_2^*[_FS]. Finally, its adjoint actions maps a generator of the ^*-action (<ref>) to its inverse. Then, following Section <ref>, we deduce the vanishing of the Futaki invariant on X for any Kähler class of the form
c_1(X)+ε π^*(π_1^*[_FS]+π_2^*[_FS])+η (c_1((E_1))+c_1((E_2))),
where π : X → W denotes the blow-down map, E_1 and E_2 the exceptional divisors, and (ε,η)∈ℝ^2 are chosen so that the class is positive.
§ REMAINING CASES
We finish with families N°, with
∈{ 2.20, 3.9, 3.13, 4.2 }.
§.§ Family 2.20
Consider the Plücker embedding of (2,5) in ^9. Any smooth intersection of this embedded sixfold with a linear subspace of codimension 3 is a Fano manifold. We call this Fano threefold V_5 and it is the unique member of family 1.15 of Fano threefolds.
Now, let C be a twisted cubic in V_5 and let X=_C( V_5). Then X is a member of the family 2.20 of Fano threefolds. Up to isomorphism, there is a unique choice of curve such that X has infinite automorphism group <cit.>. In this case, (X) is a semidirect product ^*⋊/2.
In <cit.>, it is shown that the unique element in family 2.20 with infinite automorphism group is K-polystable. Moreover, the following explicit description of X is given. First, V_5 can be realised as the subvariety of ^6 cut out by the equations
{[ x_4x_5-x_0x_2+x_1^2 = 0; x_4x_6-x_1x_3+x_2^2=0; x_4^2 - x_0x_3+x_1x_2 = 0; x_1x_4 -x_0x_6 -x_2x_5 = 0; x_2x_4 -x_3x_5 -x_1x_6 = 0. ].
We will then identify V_5 with this variety. Then, we can chose C to be the twisted cubic parametrised by
([r:s]) ↦ ([r^3 : r^2s:rs^2 : s^3 : 0 : 0 : 0])∈ V_5.
We consider X=_C(V_5) with this parametrisation. The ^*⋊/2-action on ^6 generated by
λ·([x_0:x_1:x_2:x_3:x_4:x_5:x_6])=
[λ^3x_0:λ^5x_1:λ^7x_2:λ^9x_3:λ^6x_4:λ^4x_5:λ^8 x_6]
for λ∈^* and the involution
τ([x_0:x_1:x_2:x_3:x_4:x_5:x_6])=[x_3:x_2:x_1:x_0:x_4:x_6:x_5]
preserves V_5 and C, hence lifts to X. This provides the isomorphism
(X)≃^*⋊/2.
Note that conjugation by τ sends a generator of the ^*-action to its inverse. As H^1,1(V_5,) is generated by the class of a hyperplane section in ^6, and as the class of the Fubini–Study metric on ^6 is τ-invariant, we can apply Corollary <ref> to X, and we deduce the vanishing of the Futaki invariant on the Kähler cone of X.
§.§ Families 3.9 and 4.2
We now consider families 3.9 and 4.2. Any member of one of these families have _0(X)≃^* and is K-polystable by <cit.>, which we will follow closely.
Let
be either ^2 or ^1×^1 and let ⊂ be a smooth irreducible curve given by a quartic if =^2 and a (2,2)-curve in the other case. Denote by pr_i the projection of ^1× onto the i-th factor. We then set =pr_2^*()≃^1 ×, =pr_1^*([1:0]) and '=pr_1^*([0:1]). We consider
G=^*⋊/2
acting on ^1 by
λ·[u:v]=[u:λ v]
and
τ([u:v])=[v:u].
The G-action lifts to ^1×, with the involution τ swapping and '. We then introduce η : W →^1× a double cover branched over +'+, and E, E' and B the preimages on W of the surfaces , ' and respectively. Then, set X̂→ W the blow-up of W along the curves E∩B and E'∩B with exceptional surfaces Ŝ and Ŝ'. We denote the proper transforms of E, E' and B by Ê, Ê', B̂ respectively. Finally, X is obtained as the image of a contraction X̂→ X of B̂ to a curve isomorphic to . We set E, E', S and S' the proper transforms on X of Ê, Ê', Ŝ and Ŝ' respectively.
One can check that all the birational maps involved in producing X are G-equivariant, and we obtain _0(X)≃^*. Moreover, the involution on X induced by τ (which we will still denote τ) swaps E and E', and also swaps S and S'. Hence, the classes c_1(E)+c_1(E') and c_1(S)+c_1(S') are both τ-invariant. Clearly, on ^1×, the adjoint action of the involution τ maps a generator of the ^*-action to its inverse. This remains true on W by equivariance, and thus on X, which is birationally equivalent to W, by continuity of holomorphic vector fields away from the exceptional loci. Then, Proposition <ref> applies to show that the Futaki invariant of X vanishes in any Kähler class of the form
c_1(X)+(c_1(E)+c_1(E'))+δ(c_1(S)+c_1(S')),
where (,δ)∈^2 is chosen so that the class is positive.
To understand the subset of the Kähler cone these classes generate, we use the following alternative description of X, still following <cit.>.
§.§.§ Family 3.9
This is the case when =^2. X can then also be obtained as the blow-up ϕ : X → V of V along a curve C⊂ V where
π : V=(⊕(2))→^2=
is a ^1-bundle, and C=π^*∩ E_V, where E_V is the zero section of π. We also have that the strict transform of E_V (resp. of the infinity section E'_V, and of π^*) on X is E (resp. E' and S'), while the exceptional divisor of ϕ is S. Hence we get the relation c_1(E)+c_1(E')=0 in this case (but c_1(S)+c_1(S')≠ 0), and we obtain a 2-dimensional family of classes that admit cscK metrics given by
(δ,r) → r(c_1(X)+δ(c_1(S)+c_1(S'))).
§.§.§ Family 4.2
This is the case when =^1×^1. Again, we can recover X from the maps
π : X → V
and
ϕ : V →,
with π the contraction of S to a curve isomorphic to and ϕ a ^1-bundle over ^1×^1. According to <cit.>, we have
Pic(X)=[H_1]⊕[H_2]⊕[E]⊕[E']
where H_i=(π∘ϕ)^*(ℓ_i) and ℓ_1, ℓ_2 denote two different rulings of ^1×^1. There are relations -K_X∼ 2(H_1+H_2)+E+E', S∼ H_1+H_2 -E+E' and S'∼ H_1+H_2 +E-E', so that the Kähler classes described above can be written
2(1+δ)(c_1(H_1)+c_1(H_2))+(1+)(c_1(E)+c_1(E')).
Together with scaling we therefore obtain a 3-dimensional family of classes with vanishing Futaki invariant.
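As a quick symbolic check of this last rewriting (illustrative only; we write eps and delta for the two coefficients multiplying c_1(E)+c_1(E') and c_1(S)+c_1(S') in the classes above), one can expand the class using the stated relations in Pic(X):

import sympy as sp

H1, H2, E, Ep, eps, delta = sp.symbols('H1 H2 E Ep eps delta')

# Relations stated above: -K_X ~ 2(H_1+H_2)+E+E', S ~ H_1+H_2-E+E', S' ~ H_1+H_2+E-E'.
minus_K = 2*(H1 + H2) + E + Ep
S = H1 + H2 - E + Ep
Sp = H1 + H2 + E - Ep

kahler_class = minus_K + eps*(E + Ep) + delta*(S + Sp)
target = 2*(1 + delta)*(H1 + H2) + (1 + eps)*(E + Ep)
assert sp.expand(kahler_class - target) == 0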
§.§ Family 3.13
Let X be a smooth K-polystable Fano threefold in family 3.13. From <cit.>,
either _0(X)≃PGL_2(), and so from Section <ref> the Futaki invariant vanishes identically, or _0(X)≃^*. In the latter case, denoting by [x_0:x_1:x_2], [y_0:y_1:y_2] and [z_0:z_1:z_2] the homogeneous coordinates on the first, second and third factors of ^2×^2×^2, X is given by the equations
{[ x_0y_0+x_1y_1+x_2y_2 = 0; y_0z_0+y_1z_1+y_2z_2 = 0; (1 + s)x_0 z_1 + (1 -s)x_1 z_0 -2x_2 z_2 = 0 ].
in ^2×^2×^2, for s∉{ -1,0,1 }, and
(X)≃^*⋊𝔖_3.
The ^*-action for λ∈^* is given on a point P with homogeneous coordinates ([x_0:x_1:x_2], [y_0:y_1:y_2],[z_0:z_1:z_2]) by
λ·(P)=
([λ x_0:λ^-1x_1:x_2], [λ^-1y_0:λ y_1:y_2],[λ z_0:λ^-1z_1:z_2]).
Further, there are two involutions τ_x,z and τ_y,z in (X), whose actions are given by
τ_x,z(P)=([z_1 : z_0 : z_2 ], [y_1 : y_0 : y_2 ], [x_1 : x_0 : x_2 ])
and
τ_y,z(P)=([x_1 : x_0 : -x_2],[(1-s)z_0:(1+s)z_1:2z_2],[y_0/(1-s) : y_1/(1+s) : y_2/2]).
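That the ^*-action preserves X and that the two involutions map the defining system to itself (τ_x,z exchanges the first two equations, τ_y,z exchanges the first and third) can be checked by direct substitution; the sympy sketch below is our own illustrative verification for generic s.

import sympy as sp

x0, x1, x2, y0, y1, y2, z0, z1, z2, lam, s = sp.symbols('x0 x1 x2 y0 y1 y2 z0 z1 z2 lam s')

eqs = [x0*y0 + x1*y1 + x2*y2,
       y0*z0 + y1*z1 + y2*z2,
       (1 + s)*x0*z1 + (1 - s)*x1*z0 - 2*x2*z2]

# The C^*-action leaves each equation invariant.
act = {x0: lam*x0, x1: x1/lam, y0: y0/lam, y1: lam*y1, z0: lam*z0, z1: z1/lam}
assert all(sp.simplify(f.xreplace(act) - f) == 0 for f in eqs)

# tau_{x,z} exchanges the first two equations and fixes the third, so it preserves X.
t_xz = {x0: z1, x1: z0, x2: z2, y0: y1, y1: y0, z0: x1, z1: x0, z2: x2}
img = [sp.expand(f.xreplace(t_xz)) for f in eqs]
assert img == [sp.expand(eqs[1]), sp.expand(eqs[0]), sp.expand(eqs[2])]

# tau_{y,z} exchanges the first and third equations and fixes the second.
t_yz = {x0: x1, x1: x0, x2: -x2,
        y0: (1 - s)*z0, y1: (1 + s)*z1, y2: 2*z2,
        z0: y0/(1 - s), z1: y1/(1 + s), z2: y2/2}
img = [sp.simplify(f.xreplace(t_yz)) for f in eqs]
assert all(sp.simplify(g - h) == 0 for g, h in zip(img, [eqs[2], eqs[1], eqs[0]]))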
Note that τ_x,z∘λ∘τ_x,z^-1=λ^-1 and τ_y,z∘λ∘τ_y,z^-1=λ^-1 (where we identified λ with the corresponding element in (X)). From <cit.>, the projection maps η_x,η_y,η_z : ^2×^2×^2 →^2 induce holomorphic maps, still denoted η_x, η_y and η_z, from X to ^2.
If we denote α_i:= η_i^*[_FS]∈ H^1,1(X,) the pullback of the class of the Fubini–Study form, for i∈{ x,y,z}, by equivariance of the projections, we see that α_y is τ_x,z-invariant while α_x is τ_y,z-invariant. Hence, for any >0 small enough, the class c_1(X)+α_x is τ_y,z-invariant and the class c_1(X)+α_y is τ_x,z-invariant. From Proposition <ref>, the Futaki invariants of (X,c_1(X)+α_x) and (X,c_1(X)+α_y) vanish. Hence, X will carry cscK deformations of its Kähler–Einstein metrics in the classes c_1(X)+α_y and c_1(X)+α_x for small enough by LeBrun–Simanca's openness theorem.
We have used two different involutions τ_x,z and τ_y,z in the above to deduce the vanishing of the Futaki invariant in the classes c_1(X)+α_y and c_1(X)+α_x. We are therefore not able from these arguments to deduce that the Futaki invariant vanishes on the sums of these classes. Hence we still only get a 2-dimensional family of classes with vanishing Futaki invariant in this case.
|
http://arxiv.org/abs/2307.00930v1
|
20230703110937
|
Accelerated binary black holes in globular clusters: forecasts and detectability in the era of space-based gravitational-wave detectors
|
[
"Avinash Tiwari",
"Aditya Vijaykumar",
"Shasvath J. Kapadia",
"Giacomo Fragione",
"Sourav Chatterjee"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"gr-qc"
] |
The motion of the center of mass of a coalescing binary black hole (BBH) in a gravitational potential imprints a line-of-sight acceleration (LOSA) onto the emitted gravitational wave (GW) signal. The acceleration could be sufficiently large in dense stellar environments, such as globular clusters (GCs), to be detectable with next-generation space-based detectors. In this work, we use outputs of the cluster monte carlo (cmc) simulations of dense star clusters to forecast the distribution of detectable LOSAs in DECIGO and LISA eras.
We study the effect of cluster properties—metallicity, virial and galactocentric radii—on the distribution of detectable accelerations, account for cosmologically-motivated distributions of cluster formation times, masses, and metallicities, and also incorporate the delay time between the formation of BBHs and their merger in our analysis.
We find that larger metallicities provide a larger fraction of detectable accelerations by virtue of a greater abundance of relatively lighter BBHs, which allow a higher number of GW cycles in the detectable frequency band. Conversely, smaller metallicities result in fewer detections, most of which come from relatively more massive BBHs with fewer cycles but larger LOSAs. We similarly find correlations between the virial radii of the clusters and the fractions of detectable accelerations. Our work, therefore, provides an important science case for space-based GW detectors in the context of probing GC properties via the detection of LOSAs of merging BBHs.
Gravitational-Waves – Binary Black Holes – Globular Clusters
§ INTRODUCTION
The formation, evolution, and merger environments of binary black holes (BBH) are subjects of many active research efforts (see eg. for a review).
The prevalent expectation is that the majority of the BBHs detected by the LIGO-Virgo-KAGRA network <cit.> likely formed either through isolated evolution in the galactic field or through many-body interactions in dense dynamical environments. Isolated evolution could proceed mainly via a common envelope phase <cit.>[Note however that some works <cit.> are finding that a majority of binaries do not require a common envelope phase and could form and evolve just via stable mass transfer.], or via chemically homogeneous evolution <cit.>. Dynamical environments could include globular clusters <cit.>, nuclear star clusters <cit.>, and disks of active galactic nuclei <cit.>, among others.
The ∼ 90 BBH detections reported by the LIGO-Virgo-KAGRA collaboration <cit.> have started to shed some light on their origin <cit.>.
However, making precise inferences on formation channels from data needs to take into consideration two factors. The first is that detected binaries could be coming from a combination of the aforementioned formation channels.
Indeed, the data suggest that multiple formation sub-channels even within isolated evolution contribute to this spectrum, although the extent of these contributions from different channels is unknown and not straightforward to constrain, in part because of the systematics associated with the population synthesis simulations <cit.>.
The second is that in general, the shape of the GW waveform itself contains no definite signatures that can conclusively ascertain the provenance of the binary on a single-event basis[A notable exception to this is the presence of eccentricity in the binary orbit, which could indicate formation and evolution of binaries in dynamical environments.].
In the case of BBH mergers assembled dynamically, the binaries move on orbits determined by the star cluster gravitational potential. As this motion could leave an imprint on the GW signal in the form of a Doppler shift, its detection would contribute to our ability to identify the binary formation channel. However, a binary orbit at constant velocity would produce a constant Doppler shift in the GW waveform, degenerate with the mass of the binary. On the other hand, accelerated motion (with a non-zero component of the acceleration along the observer's line-of-sight) could modulate the signal and, therefore, be detectable <cit.>. Constraints on this line-of-sight acceleration inferred directly from the GW signal could hence carry information on the environment in which the binary merged <cit.>.
GCs are among the dense stellar environments expected to efficiently assemble BBH mergers. They are stable, spherically symmetric, gravitationally bound collections of ∼ 10^4 - 10^6 stars with typical sizes of ∼ 1-10 pc <cit.>. BBHs merging in GCs are expected to present an acceleration reminiscent of the environment in which they formed. Thus, detecting signatures of (time-varying) Doppler shift could not only point towards identifying different formation environments, but could also provide crucial information about masses, density profiles, metallicities, and ages of GCs.
In this work, we calculate accelerations of BBHs in GCs, as a function of the cluster properties, using the catalogue pertaining to the large-scale cluster monte carlo (cmc) <cit.> simulation. We extract and determine the accelerations of all the BBH binaries that merge within a Hubble time from the cmc catalogue, and employ a GW Fisher analysis <cit.> to estimate whether such accelerations can be sufficiently well constrained with the proposed DECIGO <cit.> and LISA <cit.> space-based detectors. We construct distributions of accelerations as a function of GC properties, with appropriately chosen detectability, metallicity, and cluster-mass weights. We then study the imprint of GC properties on these distributions.
The rest of the paper is organised as follows. Section <ref> describes the cmc catalog models and outlines the prescription we use to construct distributions of BBH accelerations in GCs. Section <ref> presents the results and, in particular, the imprint of GC properties on the distribution of accelerated BBHs. Section <ref> summarizes the paper, discusses this work in the context of other GW probes of GCs, and suggests the scope for future work. In the entirety of the paper, we assume the standard cosmological model with parameters fixed to the Planck 2018 values <cit.>.
§ METHOD
§.§ The cmc models
The cmc catalogue comprises 144 simulations of GCs. It uses a Hénon type Monte Carlo algorithm which enables a long-term evolution of the GC <cit.>, assuming a set of initial conditions. Details of the cmc simulation can be found in . Here, we briefly summarize some of the most important features of the models.
Four different initial cluster properties describe the cmc catalogue grid. These properties are: the total number of single stars and binaries in the cluster (N = 2 × 10^5, 4 × 10^5, 8 × 10^5, 1.6 × 10^6), the initial virial radius of the cluster (r_v/pc = 0.5, 1, 2, 4), the galactocentric radius of the cluster (r_g/kpc = 2, 8, 20), and initial metallicity of the cluster (Z = 2 × 10^-2, 2 × 10^-3, 2 × 10^-4). Each combination of these parameters corresponds to one cmc simulation and the outputs of all the 144 simulations are catalogued in <cit.>.
A number of fixed initial conditions are assumed for the whole set of simulations. The initial cluster potential is assumed to follow a King profile <cit.>, with concentration parameter W_0 = 5. The stellar masses are drawn from a Kroupa initial mass function <cit.>, assuming a mass range of 0.08 M_⊙- 150 M_⊙ and the stellar binary fraction is set to f_b = 5%.
For binaries, the primary component is drawn from a Kroupa IMF, while the secondary component is chosen by drawing from a uniform distribution of mass ratios q ∈ [0.1, 1]. The initial orbital period of binaries is drawn from a log-uniform distribution, with a lower limit on the separation set such that this separation (d) does not fall below five times the sum of the stellar radii of the binary (d ≥ 5(R_1 + R_2)), and an upper limit set by the hard/soft boundary. Each simulation is evolved across 14 Gyr or until the GC undergoes tidal disruption (see eg. ) or collisional runaway (see eg. ).
A number of physical processes have been incorporated into the cmc simulations. These include stellar and binary evolution, neutron star formation, black hole (BH) formation, modeling of strong encounters, two-body relaxation, three-body binary formation, implementation of galactic tides, and stellar collisions. We refer the reader to for details on all these processes; we briefly summarize the prescriptions used for BH formation below.
BHs are modeled to form via standard iron core-collapse supernovae (CCSNe) using the “rapid model” for stellar remnants <cit.>. The CCSNe impart natal kicks to the BH, with mass fallback decreasing the magnitude of the kick. The kick velocities of neutron stars, V_NS, are assumed to be described by a Maxwellian distribution with a dispersion set to σ = 265 km/s. For BHs, the kick magnitudes are then modulated as a function of the fallback mass fraction f_b such that V_BH = (1 - f_b)V_NS, where this fraction pertains to the percentage of stellar envelope mass that falls back onto the collapsed core. Additionally, pulsational pair-instability <cit.> is implemented, which results in the mapping of stars with helium core masses in the range 45 - 65 M_⊙ to BHs of masses in the vicinity of 40 M_⊙ <cit.>, producing an excess in that region of the BH mass spectrum. Stars with helium cores in excess of 65 M_⊙ are modeled to produce no remnants at all <cit.>.
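For illustration, the natal-kick prescription just described can be mimicked in a few lines (a rough sketch, not cmc code; the fallback fractions below are arbitrary placeholders, and the quoted dispersion is taken as the scale parameter of the Maxwellian speed distribution):

import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(0)
sigma = 265.0                      # km/s, dispersion of the NS natal-kick Maxwellian

n_bh = 5
f_b = rng.uniform(0.0, 1.0, n_bh)  # placeholder fallback fractions (cmc derives these from the stellar model)
v_ns = maxwell.rvs(scale=sigma, size=n_bh, random_state=rng)
v_bh = (1.0 - f_b) * v_ns          # V_BH = (1 - f_b) V_NS
print(np.c_[f_b, v_ns, v_bh])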
§.§ Extracting accelerations from the cmc catalog
We describe below the prescription used to evaluate the accelerations of merging BBHs in GCs. A flowchart summary of the prescription we use below is provided in Figure <ref>.
* For each merger in the cmc catalog, we determine the mass of the cluster M_enc enclosed within a radius R, where R is the distance of the BBH from the centre of the cluster when it merges.
* The acceleration of the center of mass of the BBH, divided by the speed of light, a/c, is then evaluated as <cit.> (see also the code sketch following this list):
a/c = GM_enc/cR^2
* For each BBH and corresponding acceleration, n = 5000 redshift samples are drawn following the cosmic star-formation rate density (SFRD) as given in the Madau-Fragos prescription <cit.>:
p(z) ∝(1 + z)^2.6/(1.0 + [(1.0 + z)/3.2]^6.2)
These samples correspond to cluster-formation redshifts. In essence, we assume that the history of cluster formation follows that of stars.
* To evaluate the merger epochs, we first convert the cluster-formation redshifts to lookback time t_lb, cl. Then, using the time-delay values from the simulation, t_bbh, delay, the lookback time of the BBH at merger is calculated as t_lb, bbh = t_lb, cl - t_bbh, delay. If positive, this lookback time is now converted back to a redshift at the merger. Otherwise, the sample is rejected, since it implies that the BBH will not merge within the age of the universe.
* Converting the redshift at merger to a luminosity distance at merger, and using the intrinsic parameters of the BBH provided from the simulation, the signal-to-noise ratio (SNR) ρ in DECIGO and LISA are calculated as:
ρ^2 = 4Re∫_f_min^f_max|h (f)|^2/S_n(f)df
where h is the Fourier transform of the GW waveform as seen by (projected onto) the detector and modulated by a/c, S_n(f) is the detector's sky-averaged noise power spectral density (PSD), and f_min, f_max are frequency limits set by the detector bandwidth
[The detector antenna pattern will change appreciably over the inspiral timescale. However, while calculating the SNR, we do not account for the effects of this time-varying detector antenna pattern for computational ease. We do not expect a significant change in the obtained SNR due to this effect.].
To choose the frequency limits for the mergers, assuming an observation time of 4 years, we follow <cit.> with {f_min, f_max} being {10^-2, 10}Hz for DECIGO and {10^-4, 1}Hz for LISA. To model h, we use the TaylorF2 prescription given by:
h(f) = 𝒜 f^-7/6 e^i (Ψ(f) + ΔΨ(f))
where 𝒜∝ℳ^5/6/D_L with ℳ, D_L as the chirp mass and luminosity distance of binary respectively, Ψ (f) is given by Eq. (3.18) of , and ΔΨ (f) is given by Eq. (4) of . The BBHs are assumed to have face-on (inner) orbits.
* If ρ≥ 10 (8) for DECIGO (LISA), the BBH is considered to be detectable, and the sample is kept. Otherwise, it is rejected.
* Each detected BBH sample is assigned a set of weights: a cluster-mass weight, W_cl, and a metallicity weight, W_Z, to account for the relative cosmological abundance of clusters with different properties (see also the sketch following this list). We also assign a detectability weight W_det. These weights are computed following . The cluster-mass weight is assigned following the cluster initial mass function as <cit.>:
W_cl∝1/M_cl^2
where M_cl is the mass of the cluster at formation. The metallicity weight W_Z is assigned using lognormal distribution with a 0.5 dex standard deviation, and redshift-dependent mean given by <cit.>:
log(Z/Z_⊙) = 0.153 - 0.074z^1.34
The detectability weight is given by:
W_det = p_det(m_1, m_2, z) · 1/(1 + z) · dV_c/dz
where the detection probability p_det is accounted for by setting an SNR threshold and rejecting samples that do not exceed that threshold. As mentioned before, the threshold is 8 for LISA and 10 for DECIGO. To the surviving samples, we then assign a weight determined by the product of the cosmological time dilation piece 1/1+z and the differential comoving volume dV_c/dz.
* To determine if the acceleration of a sample BBH is constrainable, we resort to a Fisher Matrix Analysis (FMA) which approximates the shape of the GW parameter estimation likelihood to be Gaussian in the source parameters <cit.>. From the corresponding covariance matrix, a statistical r.m.s. error Δ (a/c) is calculated, assuming the Gaussian is centered on true a/c. If a/c < Δ (a/c), a/c = 0 is contained within the 68% errorbar, and the BBH's acceleration is said to be “missed” ie. the event cannot be confidently identified as accelerating. If a/c > Δ (a/c), a/c = 0 lies outside the 68% errorbar and the BBH's acceleration is said to be “found”. In other words, the event can be identified as accelerating at 68% CL. We briefly describe the application of the FMA to the identification of found-missed accelerations in Section <ref>.
* We construct various histograms, including histograms of found and missed accelerations, weighted by W_t, the product of the mass, metallicity, and detectability weights <cit.>:
W_t = W_cl× W_Z× W_det
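The prescription listed above can be mocked up end-to-end in a few lines of Python. The sketch below is our own illustration rather than the actual analysis pipeline: it evaluates the line-of-sight acceleration from an enclosed mass, draws cluster-formation redshifts from the Madau-Fragos shape by rejection sampling, converts them to merger redshifts with a single delay time, and assembles the weights. All cluster and binary numbers are made-up placeholders, Z_⊙≈ 0.02 is assumed for the metallicity weight, and dV_c/dz is taken per steradian.

import numpy as np
from scipy.stats import norm
from astropy import units as u, constants as const
from astropy.cosmology import Planck18 as cosmo, z_at_value

rng = np.random.default_rng(1)

# Line-of-sight acceleration a/c = G M_enc / (c R^2) for a BBH at cluster-centric radius R.
M_enc = 1.0e5 * u.Msun                       # placeholder enclosed cluster mass
R = 0.5 * u.pc                               # placeholder cluster-centric radius
a_over_c = (const.G * M_enc / (const.c * R**2)).to(1 / u.s)

# Cluster-formation redshifts following the Madau-Fragos shape, via rejection sampling
# (the analysis draws 5000 per BBH; 500 are used here to keep the sketch fast).
def sfrd_shape(z):
    return (1 + z)**2.6 / (1 + ((1 + z) / 3.2)**6.2)

def sample_formation_z(n, z_max=10.0):
    env = sfrd_shape(np.linspace(0.0, z_max, 1000)).max()
    out = np.empty(0)
    while out.size < n:
        z_try = rng.uniform(0.0, z_max, 4 * n)
        accept = rng.uniform(0.0, env, 4 * n) < sfrd_shape(z_try)
        out = np.concatenate([out, z_try[accept]])
    return out[:n]

z_form = sample_formation_z(500)

# Merger redshift from the delay time; samples that have not merged by today are dropped.
t_delay = 3.0 * u.Gyr                        # placeholder delay time taken from the catalogue
t_lb_merge = cosmo.lookback_time(z_form) - t_delay
merged = t_lb_merge > 0 * u.Gyr
z_merge = np.array([float(z_at_value(cosmo.lookback_time, t)) for t in t_lb_merge[merged]])

# Weights: W_cl ~ 1/M_cl^2, W_Z from a 0.5 dex lognormal around the redshift-dependent mean,
# W_det = p_det * 1/(1+z) * dV_c/dz (p_det set to 1 here, i.e. samples passing the SNR cut).
M_cl = 2.0e5                                 # placeholder initial cluster mass in Msun
W_cl = 1.0 / M_cl**2
Z_cluster, Z_sun = 0.002, 0.02               # assumed solar normalisation for the metallicity
mean_logZ = 0.153 - 0.074 * z_merge**1.34
W_Z = norm.pdf(np.log10(Z_cluster / Z_sun), loc=mean_logZ, scale=0.5)
W_det = 1.0 / (1.0 + z_merge) * cosmo.differential_comoving_volume(z_merge).value
W_t = W_cl * W_Z * W_det
print(a_over_c, z_merge.size, W_t[:3])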
We point out here that the BBH is assumed to be optimally oriented in a way that maximizes the SNR and the magnitude of the LOSA. The latter is also assumed to be unchanging. The inner orbit is assumed to be face-on, while the outer orbit (i.e., the orbit of the BBH's center of mass in the potential of the globular cluster) is assumed to be edge-on. The fractions of found accelerations in this work should therefore be considered as upper limits.
§.§ Identifying found-missed BBH accelerations
A constant line-of-sight velocity component of the center of mass of a BBH will produce a constant Doppler shift that is degenerate with the mass of the BBH. On the other hand, a BBH with a LOSA will result in a time-varying Doppler shift, which in turn will modulate the GW waveform with respect to one that is not accelerated. At leading order, a deviation ΔΨ(f) in the GW phase Ψ(f) is incurred at -4 Post Newtonian (PN) order, and is given by <cit.>:
ΔΨ(f) = 25/(65536 η^2) (GM/c^3)(a/c)v_f^-13
where v_f = (π GMf/c^3)^1/3. calculated 3.5 PN corrections beyond the leading order to ΔΨ(f), and also showed that including these higher-order corrections is necessary for unbiased source property inference. We hence use the full expression of ΔΨ (f) from to construct our waveform approximant h(f).
To calculate the r.m.s error Δ (a/c), the Fisher matrix Γ is first constructed as <cit.>:
Γ_ij = (∂ h/∂θ_i | ∂ h/∂θ_j)
where θ_i,j are the binary's intrinsic and extrinsic parameters that determine the shape of the GW, and (|) represents a noise-weighted inner product between two GW waveforms a(f), b(f):
(a | b) = 2 ∫_f_min^f_max [a(f) b^*(f) + a^*(f) b(f)] / S_n(f) df
The covariance matrix is then evaluated as C = Γ^-1, and Δ(a/c) is read-off (and square-rooted) from the corresponding diagonal element in C.
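To make the Fisher-matrix step concrete, the following self-contained numpy sketch builds Γ by finite differences for a deliberately simplified setup: leading-order phase only, a flat toy PSD in place of the DECIGO/LISA curves, and a reduced parameter set (so correlations with, e.g., coalescence time and phase are ignored). It illustrates the procedure rather than reproducing the analysis code, and all numbers are placeholders.

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
ETA = 0.24                                   # symmetric mass ratio, held fixed in this sketch

def waveform(f, lnA, Mc, a_c):
    # Newtonian phase plus the -4PN line-of-sight-acceleration term
    # DeltaPsi = 25/(65536 eta^2) (GM/c^3)(a/c) v_f^-13, v_f = (pi G M f / c^3)^(1/3).
    Mc_s = Mc * Msun * G / c**3              # chirp mass in seconds
    M_s = Mc_s * ETA**(-0.6)                 # total mass in seconds
    v = (np.pi * M_s * f)**(1.0 / 3.0)
    psi = 3.0 / 128.0 * (np.pi * Mc_s * f)**(-5.0 / 3.0)
    dpsi = 25.0 / (65536.0 * ETA**2) * M_s * a_c * v**(-13)
    return np.exp(lnA) * f**(-7.0 / 6.0) * np.exp(1j * (psi + dpsi))

def inner(a, b, f, Sn):
    return 2.0 * np.sum(((a * np.conj(b) + np.conj(a) * b) / Sn).real) * (f[1] - f[0])

f = np.linspace(1e-2, 10.0, 100000)          # DECIGO-like band
Sn = np.full_like(f, 1e-48)                  # toy flat PSD (placeholder)
theta = np.array([0.0, 25.0, 1e-15])         # lnA, Mc [Msun], a/c [1/s]: placeholders
steps = np.array([1e-6, 1e-9, 1e-19])        # finite-difference steps per parameter

def dh(i):
    tp, tm = theta.copy(), theta.copy()
    tp[i] += steps[i]; tm[i] -= steps[i]
    return (waveform(f, *tp) - waveform(f, *tm)) / (2.0 * steps[i])

derivs = [dh(i) for i in range(theta.size)]
Gamma = np.array([[inner(derivs[i], derivs[j], f, Sn) for j in range(theta.size)]
                  for i in range(theta.size)])

# Invert via the normalised (correlation-like) matrix for numerical stability, then rescale.
d = np.sqrt(np.diag(Gamma))
Cov = np.linalg.inv(Gamma / np.outer(d, d)) / np.outer(d, d)
print("Delta(a/c) ~", np.sqrt(Cov[-1, -1]), "1/s")   # the acceleration is 'found' if a/c exceeds this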
§ RESULTS
In this section, we provide weighted distributions of found and missed accelerations, as well as the corresponding fractions with respect to the total number of detected BBHs. We also evaluate the fractions and the distributions as a function of the cluster properties (metallicity, galactocentric radius, and virial radius) in the cmc models. We restrict our attention to DECIGO and LISA detectors, which, by virtue of their sensitivity in the low-frequency regime, are especially suited to detect LOSAs from GCs.
§.§ Aggregate distributions of found and missed accelerations in DECIGO
There are two competing effects that determine if a BBH is detectable and its acceleration is found. BBHs with heavier masses produce GW signals with larger amplitudes and are therefore relatively easier to detect, although increasing the mass eventually reduces the detectability due to a smaller number of GW cycles in the detector frequency band. On the other hand, BBHs with lighter masses produce GWs with a smaller amplitude and are therefore relatively more difficult to detect out to large distances.
Given that LOSA modulations to the GW are a low-frequency effect, incurring corrections in the GW phase at -4PN, longer durations of the in-band inspirals enable stronger constraints on the acceleration, or, equivalently, allow probes of smaller accelerations. BBHs with lighter masses spend a longer time in-band relative to BBHs with heavier masses, and thus contribute more significantly to the distribution of found accelerations. Moreover, the metallicity and cluster-mass weights also contribute to the fraction of found accelerations, as well as the shape of their distributions (see Appendix <ref>).
We show, in Figure <ref>, the distribution of detectable (total), found, and missed accelerations. Of the total detectable BBHs in DECIGO, 12% are found. The detectable accelerations follow a distribution that peaks between 10^-17s^-1 and 10^-16s^-1 with the median value being 1.7 × 10^-16s^-1 and 90% CI being [4.7 × 10^-18, 4.8 × 10^-15]s^-1.
The missed acceleration distribution peaks roughly at a similar value, having a median value: 8.5 × 10^-17s^-1 and 90% CI: [2.5 × 10^-18, 1.8 × 10^-15]s^-1 with relatively smaller support between 10^-15s^-1 and 10^-14s^-1.
Conversely, the found acceleration distribution peaks at ∼ 10^-15s^-1, having a median value: 6.3 × 10^-16s^-1 and 90% CI: [3.5 × 10^-17, 1.3 × 10^-14]s^-1.
We depict, in Figure <ref>, the distributions of found and missed detector-frame total masses M_det = M_source(1 + z), where z is the cosmological redshift. The distribution of found M_det is shifted towards smaller values relative to the corresponding missed distribution. This can be explained as follows. Lower-redshift mergers are lower-mass because of the mass-dependence of delay times and since mass segregation in GCs favors higher-mass mergers at early times <cit.>. Smaller masses then enable better constraints on a/c by virtue of spending more cycles in the detector band.
Similarly, Figure <ref> gives the distributions of found and missed BBH redshifts. Again, as with M_det, the distribution of found z is shifted towards smaller values relative to the corresponding missed distribution. Smaller z correspond to larger SNRs, which reduces the r.m.s error approximately as Δ(a/c) ∝ 1/ρ. This allows relatively smaller accelerations to also be identified within 68% confidence.
§.§ Effect of GC properties on the distributions of found and missed acceleration
Properties of the GC determine the population of BBHs in the GC and its spatial distribution, and thus, by extension, the distribution of BBH accelerations. Here, we break down the effect of metallicity, virial radius, and galactocentric radius on the distribution of found and missed accelerations, and corresponding distributions of M_det and R (outer orbital radius/cluster-centric radius).
The cmc catalog encompasses 3 distinct GC metallicities: Z = 2 × 10^-4, 2 × 10^-3, 2 × 10^-2. We extract BBH accelerations and construct distributions (weighted by the metallicity, cluster mass, and detectability weights) pertaining to found accelerations for each of these metallicities. We find that the majority of found accelerations, 93%, come from relatively higher metallicity GCs, Z = 0.02 (29%), 0.002 (64%), with the fraction dropping to 7% for Z = 0.0002.
The found distributions of accelerations, detector-frame masses, and orbital radii, are shown in Figure <ref>. The left panel shows a systematic preference for higher accelerations with decreasing metallicity. This can be understood from the fact that GCs with a larger metallicity prefer forming at low redshift and have a relatively larger fraction of low-mass BBHs. This decreases the detector-frame mass and enables measurements of smaller accelerations.
The larger detector-frame mass BBHs in low-metallicity (and high-redshift) clusters need larger accelerations to be confidently identified (found) as accelerating.
This explanation is further corroborated by the corresponding distributions in the center panel, which show a systematic shift of M_det distributions to larger values with decreasing metallicity—the medians being ∼ 34 M_⊙, ∼ 99 M_⊙, and ∼ 125 M_⊙ in descending order of the metallicity. The right panel is also consistent with this picture since the distribution of outer orbital radii shifts to decreasing values with decreasing metallicity[See Appendix <ref> for more details.]. Smaller radii yield larger accelerations, which are required by heavier masses to be identified confidently (found) as accelerating.
We study the effect of changing the virial radius on the distributions of found accelerations and corresponding M_det and R. The cmc catalog provides 4 discrete values: r_v/pc = 0.5, 1.0, 2.0, 4.0. The effect of changing r_v on the distributions is less pronounced than the effect of changing Z. This can be explained as follows. For a given mass distribution and location of BBH mergers in a GC, a smaller r_v leads to more compact GCs, which in turn leads to larger accelerations. However, while there is a direct correlation between r_v and acceleration, there is no such correlation between r_v and BBH mass. Thus, while smaller r_v yield larger accelerations in general, they do not necessarily yield smaller M_det, which enables stronger constraints on a/c. Nevertheless, we do find that the fraction of found accelerations varies markedly with decreasing r_v: r_v = 0.5 pc (70%), r_v = 1.0 pc (23%), r_v = 2.0 pc (6%), r_v = 4.0 pc (1%).
We additionally study the effect of varying Z and r_v on the z distribution pertaining to found accelerations. This is shown in Figure <ref>. The left panel shows the z distribution getting progressively larger support at larger z values with decreasing metallicity Z. This can be readily explained in terms of the age of clusters. Lower metallicity GCs are older and thus reside at larger z values. Conversely, higher metallicity GCs are younger and contain a larger fraction of lower-mass BBHs. This results in fewer samples of found accelerations at larger z – both due to reduced SNR as well as poorer acceleration constraints from higher-mass BBHs. The effect of r_v on z distributions is less pronounced, although larger accelerations from smaller values of r_v imply increasing support at larger redshifts.
We do not find any significant effect of changing r_g on the distributions or the fraction of found accelerations. This is due to the fact that the accelerations extracted from the cmc simulations consider only the potential of the GC and not the potential of the galaxy in which the GC is hosted. The center of mass of the GC itself will have an acceleration, which depends on r_g but has not been considered in this work. Adding this effect will likely cause a systematic shift in the acceleration distributions; however, we do not expect the distributions to be impacted significantly if the GC is situated at typical locations (r_g ∼ kpc) in the galaxy[See Figure 5 of for an estimate of acceleration due to the gravitational potential of a Milky Way-like galaxy.].
We refer the reader to Appendix <ref> for a more detailed explanation of how the application of metallicity and cluster-mass weights to the intrinsic distribution of found accelerations impact the variation of the fraction of BBHs with these accelerations as a function of cluster properties.
§.§ Distributions of found and missed accelerations in LISA
LISA's sensitivity band covers a frequency range that is lower than DECIGO's: f ∈ [10^-4, 1] Hz. The BBHs, therefore, spend a significantly longer time within the LISA band than the DECIGO band, which should enable stronger constraints on acceleration. However, LISA's sensitivity to stellar mass BBHs is much lower than DECIGO's. As a result, the majority of the lighter BBHs are not detectable (ρ < 8) in LISA, given that the Madau-Fragos SFRD peaks at z∼ 2. Nevertheless, among those BBHs that are detectable, ∼ 14% are found, in part because the lower frequency reach of LISA enables binaries to spend longer times in-band.
In Figure <ref>, we provide the distribution of found and missed accelerations (left panel), and the variation of found acceleration distributions with metallicity (right panel), whose imprint was found to be the most pronounced in the DECIGO analysis. We once again see that the found acceleration's distribution peaks between 10^-15s^-1 and 10^-14s^-1 with the median value being 1.2 × 10^-15s^-1 and 90% CI being [3.6 × 10^-17, 1.7 × 10^-14]s^-1. However, unlike DECIGO, we find that the majority of these accelerations (93%) come from Z = 0.002 clusters. This can be explained as the consequence of competing effects. GCs with higher metallicities have a larger fraction of lighter BBHs, many of which are undetectable with LISA. On the other hand, BBH with lighter masses enable more precise acceleration measurements. Among the discrete metallicities considered, the metallicity value closest to the “sweet spot” that has both detectable and measurable accelerations is Z = 0.002. It should be noted that the metallicity weight (and to a lesser extent the cluster-mass weight) also contributes to enhancing the fraction of found accelerations for Z = 0.002 (see Appendix <ref>). Correlations of metallicity with M_ det and R are similar though less pronounced than what was found for DECIGO, and are therefore not plotted.
§ SUMMARY AND OUTLOOK
GCs are one class of dense stellar environments expected to host BBH mergers. The ∼ 90 events detected by the LIGO-Virgo-KAGRA network cannot conclusively determine if a given BBH was hosted by a GC, although the merger rate in GCs can be estimated by comparing against GC simulations <cit.>, or by calculating the fraction of the BBH population that is consistent with having isotropic spin directions <cit.>.
Such rates, however, are limited by the sample size of the detected BBHs, as well as uncertainties in the models of GCs and their initial properties (size, metallicity, etc).
On the other hand, LOSAs of BBHs leave an imprint on their GW waveform at -4PN, and can therefore be potentially constrained by detectors sensitive at low frequencies (e.g: decihertz, millihertz bands) such as DECIGO and LISA. BBHs in GCs are expected to contain finite LOSAs, and their distribution could contain imprints of the properties of the GCs. LOSAs could therefore assist in identifying the provenance of BBHs.
In this work, we forecast the distribution of detectable BBHs in GCs in DECIGO and LISA eras, that also produce accelerations that are identifiable (found) at ≥ 68% confidence. To do so, we use the outputs of the cmc catalogue to extract distributions of BBH accelerations, following the scheme presented in Figure <ref>. We summarize our main results below.
* We find that ∼ 12% (∼ 14%) of detectable BBHs in the DECIGO (LISA) era have accelerations that are well-constrained away from zero. We also find that the distribution of measurable (found) accelerations peaks at 10^-15s^-1 in DECIGO and between 10^-15s^-1 and 10^-14s^-1 in LISA.
* Among found accelerations, the majority (∼ 93% in DECIGO and LISA) come from relatively higher metallicity (Z = 2 × 10^-2, 2 × 10^-3) clusters. This is clearly reflected in the mass spectrum of BBHs with found accelerations. Higher metallicity clusters form at low redshift and have a larger fraction of relatively low-mass BBHs, thus enabling better measurements of acceleration. Conversely, low metallicity (Z = 2 × 10^-4) results in a larger fraction of high (detector frame) mass BBHs, and their accelerations need to be 1-2 orders of magnitude larger to be found. In LISA, Z = 0.002 dominates the fraction of measurable accelerations due to competing effects of lighter masses being more difficult to detect while also enabling more precise acceleration measurements.
* We observe correlations between the virial radius r_v of the cluster and the shape of the distributions, although these are less pronounced compared to the correlations with metallicity. Nevertheless, the majority of the found accelerations come from small r_v (e.g. 70% of found accelerations come from r_v = 0.5 pc). We find no appreciable dependence of the fraction of identifiable accelerations on the galactocentric radius r_g, likely because the accelerations extracted from the cmc simulations do not account for the galactic potential that hosts the GC.
* Converting the percentage of found accelerations to a rate of found accelerations in the DECIGO/LISA eras requires estimates of BBH merger rates out to redshifts z > 1, which to date are poorly constrained. We instead plot the fraction of found accelerations in DECIGO[Given that the intrinsic rate of detectable stellar mass BBHs is expected to be small in the LISA era, we do not plot the corresponding evolution of the fraction of found accelerations with z. All events with found acceleration in LISA lie at z ≲ 0.2.] as a function of redshift in Figure <ref>. This fraction initially decreases, reaches its minimum value in the redshift bin [5.5, 6], and starts rising slightly again. This rise coincides with the redshift (z∼ 6) at which Z=0.0002 clusters overtake Z=0.002 clusters in their contribution to the total number of detected events (in part due to high W_Z; see Figure <ref>). Since the source-frame masses in Z=0.0002 clusters are slightly higher than those in Z=0.002 clusters, and events in low-metallicity clusters have higher acceleration owing to their relative closeness to the center[This is due to lower natal kicks in low-metallicity environments <cit.>. See also Appendix <ref> for a related discussion.], the number of found events increases slightly in comparison to the number of missed events above z∼ 6[ The slight rise around z∼ 1.5 can be similarly attributed to the redshift beyond which binaries in Z=0.002 clusters dominate over those in Z=0.02 clusters. ].
We note that the results mentioned above and in the rest of the work are contingent on our modelling assumptions for W_Z, W_ cl, and W_ det. For instance, our understanding of cosmic GC formation history is incomplete and the assumption that GC formation follows star formation might not be a good one. Semi-analytic models of GC formation built using dark matter halo merger trees <cit.> show that the cluster formation rate density peaks at a higher redshift (z ∼ 4) and does not track the SFRD. However, these estimates are themselves model-dependent, and we prefer to use an observation-oriented (ie. the Madau-Fragos SFRD) prescription in our work. While we only focus on model-dependent forecasts of LOSAs, measurements of LOSAs can also be used to constrain host GC properties of BBHs independently or in tandem with methods in .
Other dense stellar environments that could host BBHs include nuclear star clusters <cit.> and AGNs <cit.>. As follow-up work, we plan to study the distributions of accelerations of BBHs in these environments, and the imprints of their properties on said distributions. We also plan to compare distributions of accelerations coming from these different dense stellar environments, which, in principle, could help in determining the provenance of the BBHs.
The accelerations of BBHs extracted from the cmc simulations consider only the effect of the GC gravitational potential. However, encounters of BBHs with a third body, when they lie within the band of the detectors, could impart an acceleration that is significantly larger than those provided by the GC potential. Accelerations of such in-encounter mergers could therefore be detectable even by future ground-based detectors, such as the XG network. We plan to investigate this as well in future work.
§ ACKNOWLEDGEMENTS
We thank Michael Zevin for comments on a draft version of this work. We also thank Zoheyr Doctor, Nathan Johnson-McDaniel, Parameswaran Ajith, and K. G. Arun for useful discussions. This work has made use of <cit.>, <cit.>, <cit.>, <cit.>,
<cit.>, <cit.>, and <cit.> software. A.V. is supported by the Department of Atomic Energy, Government of India, under Project No. RTI4001. G.F. acknowledges support by NASA Grant 80NSSC21K1722 and NSF Grant AST-2108624 at Northwestern University. SC acknowledges support from the Department of Atomic Energy, Government of India, under project no. 12-R&D-TFR-5.02-0200 and RTI 4002. All computations were performed on the SARATHI computing cluster at IUCAA.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
§ EFFECT OF WEIGHTS ON THE DISTRIBUTION OF MEASURABLE ACCELERATIONS
§.§ Initial Cluster Mass Weight
To construct the distribution and determine the fraction of found (measurable) accelerations, we apply a weight W_cl that is inversely proportional to the square of the initial cluster mass (cf. Eq. <ref>). The effect of applying this weight is to enhance the fraction of found accelerations pertaining to low metallicity mergers. This can be explained as follows:
* High metallicity environments form low mass pre-supernova cores due to higher line-driven winds <cit.>.
* Low mass cores get a larger supernova natal kick owing to lesser mass fallback <cit.>. This high kick displaces them from the center of the cluster, i.e. to higher R, possibly also ejecting them from the cluster in the process.
* The only way to then have appreciable acceleration for high metallicity mergers is by having a very dense environment, i.e. clusters with a higher mass.
* Since massive clusters are down-weighted by W_ cl∼ 1/M^2_ cl, the total number of high-metallicity mergers is also down-weighted.
This is illustrated in the scatter plots of found BBHs in DECIGO. Figure <ref> shows scatter plots of found accelerations vs corresponding radii for different metallicities, with and without accounting for W_ cl.
§.§ Metallicity Weight
Another weight that is applied to the distribution of found accelerations is the metallicity weight. The weight is evaluated using a log-normal distribution in the metallicity whose mean is redshift dependent <cit.>. Since the BBH redshifts are drawn following the Madau-Fragos SFRD, the metallicity weight is (broadly) a result of convolving this distribution with the log-normal distribution.
We plot metallicity weights for found samples as a function of redshift. We see that Z = 0.002 has the largest weights between z = 1 - 4, in comparison to the other metallicities Z = 0.02 and 0.0002. Since the Madau-Fragos SFRD has the largest support between z = 1 - 4, metallicity weights tend to enhance the fraction of found accelerations for low metallicities (say Z = 0.002), relative to the other metallicities. This in part explains the fractions displayed in Figure <ref>. Furthermore, Z = 0.0002 has the largest weights only at z ≳ 7.5, where the Madau-Fragos SFRD has negligible support. On the other hand, where the SFRD has the largest support (z = 1 - 4), this metallicity value has the smallest weight. This explains, in part, the small fraction of found accelerations assigned to Z = 0.0002.
|
http://arxiv.org/abs/2307.02460v1
|
20230705173341
|
Performance Scaling via Optimal Transport: Enabling Data Selection from Partially Revealed Sources
|
[
"Feiyang Kang",
"Hoang Anh Just",
"Anit Kumar Sahu",
"Ruoxi Jia"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.CE",
"cs.CV"
] |
Traditionally, data selection has been studied in settings where all samples from prospective sources are fully revealed to a machine learning developer. However, in practical data exchange scenarios, data providers often reveal only a limited subset of samples before an acquisition decision is made. Recently, there have been efforts to fit scaling laws that predict model performance at any size and data source composition using the limited available samples. However, these scaling functions are black-box, computationally expensive to fit, highly susceptible to overfitting, or/and difficult to optimize for data selection. This paper proposes a framework called , which predicts model performance and supports data selection decisions based on partial samples of prospective data sources. Our approach distinguishes itself from existing work by introducing a novel two-stage performance inference process. In the first stage, we leverage the Optimal Transport distance to predict the model's performance for any data mixture ratio within the range of disclosed data sizes. In the second stage, we extrapolate the performance to larger undisclosed data sizes based on a novel parameter-free mapping technique inspired by neural scaling laws. We further derive an efficient gradient-based method to select data sources based on the projected model performance. Evaluation over a diverse range of applications demonstrates that significantly improves existing performance scaling approaches in terms of both the accuracy of performance inference and the computation costs associated with constructing the performance predictor. Also, outperforms by a wide margin in data selection effectiveness compared to a range of other off-the-shelf solutions.
§ INTRODUCTION
The choice of training data is one of the most crucial components when it comes to extracting the best performance out of a model. Since data is typically acquired from various sources, such as different organizations or vendors, machine learning practitioners often encounter a central question: how to select and combine samples from these data sources?
Although data selection has been extensively studied in the literature related to active learning <cit.>, coreset selection <cit.>, and data valuation <cit.>, most techniques are designed for a fully-observable setting where all data sources are fully revealed to the model developer. The core ideas behind these techniques are to compare the relative importance of different data points or enumerate possible combinations of data points, all of which require complete knowledge of the entire collection of data points. While these methods have shown promising results, their practical applications in real-world scenarios are limited due to a significant gap: the acquisition decision-making processes require knowledge of the entire data sets, while data owners may only reveal limited samples before an acquisition decision is made (e.g., <cit.> provide the examples in real-world data markets).
To bridge the gap, this paper explores strategic data selection in partially observable settings, where only limited samples of data sources (referred to as pilot datasets) are accessible. The goal is to determine an optimal allocation of the selection budget to each source, only based on the pilot datasets, such that the model trained on the mixture of collected data achieves the best performance at some given objectives.
Technical challenges. In the fully-observable setting, the evaluation and eventually ranking the candidate data selection decisions, including the number of samples to be selected and the ratio of samples from each source ("mixing ratios"), can be determined directly on the available datasets <cit.>.
However, the partially observable setting presents considerable challenges for evaluating a selection decision as one can no longer directly evaluate model performance on the entire data. With limited samples from each data source, the best possible evaluation is the resulting model performance for any combination of the pilot datasets. Then, to make an informed selection decision, it is necessary to understand the model's performance when trained on potentially larger datasets (target scales) at various mixing ratios. In other words, there is a need for prediction and projection of model performance onto larger data scales at different mixing ratios.
A recent study <cit.> proposes a performance scaling law that takes into account the data size and mixing ratio to predict model performance. Although it provides a preliminary exploration of this problem, this approach faces two major limitations: (1) The numerical instability of its high-order form for the scaling functions makes its parameters difficult to fit, rendering the fitted function susceptible to overfitting and often unable to extrapolate model performance to unseen data mixtures. (2) It hypothesizes the separability of model performance scaling with data composition and with data size, which is generally untrue, as evidenced by the latest research <cit.>, and leads to unsatisfactory performance prediction in empirical observations. Besides, this method requires a number of parameters that grows quadratically with the number of data sources, demanding many (mixing ratio, resulting performance) pairs to fit the function and resulting in substantial computational overhead. Thus, there remains a considerable lack of effective and practical approaches to this problem.
Contributions. The paper investigates two fundamental building blocks for strategic data selection in the partially observable setting: (Q1) How to provide an accurate projection of model performance trained on any combination of data sources based on limited samples? And (Q2) How to determine the optimal selection strategy? Towards that end, the paper makes the following contributions.
∙ Parameter-Efficient Performance Prediction based on Optimal Transport (Addressing Q1). In contrast to existing model performance scaling methods that feature a one-shot fitting of a non-informative parametric model ("surrogate"), our approach is a novel two-stage performance inference process, where the first stage addresses the dependence of the model performance on the mixing ratio by fitting a parameter-efficient model between model performance and Optimal Transport <cit.> distance between the mixtures of training data and the validation data (Section <ref>).
Then, for stage two,
we propose a parameter-free mapping that directly projects model performance onto larger data scales, achieving remarkable accuracy as it fully preserves the dependency of model performance scaling on data sizes and data distributions (Section <ref>).
∙ Determining optimal data selection strategies (Addressing Q2).
We consider the typical data selection goal: maximizing the resulting model performance with a fixed data acquisition budget (data quantity). With model performance predicted by the proposed tools, these problems translate into convex losses that are optimized effectively via gradient-based methods (Section <ref>). We also show in Appendix <ref> how the approach similarly applies to alternative objectives such as minimizing data acquisition costs for the resulting model performance to reach a given level.
∙ Experiments. We experiment on a variety of applications (vision, natural language processing (NLP) etc., with simple to complex models) with a rich diversity of tasks and scenarios. The proposed approach is highly effective in performance prediction, demonstrating superior prediction precision to many baselines while being much more efficient to be constructed (Section <ref>).
We test the performance of data selection by optimizing the performance predictor and show that it improves over existing methods by 3% on ImageNet-100.
§ RELATED WORK
The recent line of research on Data valuation aims to assess the relative importance of each data source ("value'") to machine learning applications <cit.>. While originally designed for data pricing, these values are frequently used to inform data selection <cit.>: in more detail, one can rank the data sources based on their values and select the data points with the highest values. While value-based selection shows some promising results, data values are not directly related to model performance and hence cannot inform the prediction of model performance resulting from the selected data. Besides, values for different data sources typically cannot be combined to measure the value of their compositions <cit.>.
Notably, distributional distances including Optimal Transport have seen a major presence in data valuation as an implicit proxy for model performance <cit.>, but no connection has been made to directly relate data distance to model performance. Our work bridges this gap and directly addresses this long-standing problem. On another line, Coreset selection attempts to find a representative subset of data points to speed up learning <cit.>. Coreset selection methods have been studied for different learning algorithms <cit.>. For example, a popular coreset selection approach for neural networks is to cast data selection as a bilevel optimization problem that selects the optimal subset to maximize the performance evaluated on a validation dataset <cit.>. However, coreset selection techniques rely on access to all the data samples to be chosen from, which limits their use in the partially observable setting.
Besides, Predicting the resulting model performance associated with a dataset without performing actual training on it has attracted a lot of attention in different use cases, such as interpreting the model learning process <cit.>–which leverages surrogate functions to model the black-box relationships between model performance and training data,
or predicting performance under the distributional shift <cit.>.
Our work resembles the idea of predicting model performance from data but differs in the technique of leveraging the data distance in the performance predictor. Scaling laws, predicting how the model performance changes with the scale of training data, model parameters, and computation budget <cit.>, have seen increasing success in a variety of tasks pertaining to vision and text processing <cit.>. The performance of machine-learning models generally adheres to a power law relationship with the scale of these variables, which allows for predicting the model performance on larger scales with high precision <cit.> and provides a viable approach to predicting the potential usefulness of target data from only a small proportion of the set. <cit.> shows that data from different distributions generally scale at different rates. Our work provides a novel approach that materializes this dependency of scaling relationships with data distributions and achieves remarkable empirical results.
§ PROBLEM FORMULATION
Data provider. Suppose that there are m prospective data providers. Datasets (data sources) held by these providers are denoted by D^all_1,…,D^all_m, respectively. We focus on the case that only partial data (samples) from these sources are made available to the public, replicating practical data exchange scenarios <cit.>. We refer to the public subset of each data source as a pilot dataset and denote it by D^pi_i, where D^pi_i ⊆ D^all_i and |D^pi_i|=n_i ≪N̅_i=|D^all_i| for all i. Each provider i, upon accepting the purchasing order for acquiring n_i samples (n_i≤N̅_i), will randomly sample a subset S_i of size n_i from D^all_i and return the subset to the requester.[This paper will assume each provider honestly provides requested samples and leave an in-depth study of potential security risks, such as malicious data manipulation <cit.> to future work.]
Data collector (or requester, machine learning practitioner). Now, consider a data collector who would like to acquire samples from the providers to train a model. Notably, the collector's acquisition decisions must be made based only on the pilot datasets. We assume the collector has a validation set D^val, representing the desired target data distribution. For ease of exposition, we assume the collector has a target learning algorithm 𝒜 [The proposed data selection approach can support multiple learning algorithms by simply applying it to different choices of algorithms/metrics and picking the best one.] that is going to be applied to the collected data as well as a target performance metric ℒ which takes the input of a trained model and a validation set and returns a performance score. The model performance resulting from training on any dataset S can be thus expressed as ℒ(𝒜(S),D^val).
Given a selection budget of N samples, a mixing ratio of data sources 𝐩= {p_1,…,p_m} such that ∀_i, 0 ≤ p_i ≤ 1 and ∑_i=1^m p_i = 1, and m datasets D_1, …, D_m to be mixed, we denote the selected dataset by 𝒟(N,𝐩) = S_1 ∪⋯∪ S_m, where each S_i is a random subset of D^all_i and |S_i| = p_i N. Using these notations, we now describe the typical acquisition goals that can be accommodated by our approach:
* (Primary) Fixed-budget selection for maximal performance: The collector seeks to maximize the resulting model performance by strategically choosing the mixing ratio 𝐩 of m data sources at a pre-specified selection budget N_s≤∑_i=1^m N̅_i. The objective can be formalized as max_𝐩ℒ(𝒜(𝒟(N_s,𝐩)),D^val).
* (Alternative) Flexible-budget selection for reaching performance threshold with minimal costs: The collector seeks to attain a target model performance u^tar by choosing both the mixing ratio 𝐩 as well as the selection budget N. More formally, the objective can be expressed as min_N,𝐩 N s.t. max_𝐩ℒ(𝒜(𝒟(N,𝐩),D^val) ≥ u^tar.
The alternative objective can be treated as a direct extension of the primary, where one solves the "performance maximization" problem for different data quantities N and performs a line search for minimal data quantity N that meets the performance requirement. We defer to Appendix.<ref> for its complete solution procedure due to the similarity.
Design challenge and key idea. The primary challenge is that the collector cannot access D^all_1, …, D^all_m for decision making and hence cannot directly evaluate the two optimization objectives for every N. Yet, as the pilot datasets are public, the collector can evaluate and observe the model performance associated with various mixtures of the pilot datasets for N∈{N: Np_i≤ |D^pi_i|, i=1,…,m}, and then project the evaluations onto larger data scales. Our high-level idea to tackle the challenge is to first predict the model performance associated with any mixture of prospective unrevealed data sources based on observations on pilot datasets and project the predictions onto different data scales using scaling laws, then determine the data selection strategy by optimizing the predicted performance at the target scales.
§ METHODOLOGY OF : PREDICTION, PROJECTION, AND SELECTION
§.§ Preliminaries on Optimal Transport
Optimal Transport (OT) is a metric for measuring the discrepancy between probability distributions <cit.>. Compared to other measures such as the Kullback-Leibler Divergence <cit.> or Maximum Mean Discrepancies <cit.>, OT enjoys advantageous analytical properties (is a valid metric; compatible with sparse-support distributions; stable with respect to deformations of the distributions’ supports <cit.>).
Given probability measures μ_t, μ_v over the space 𝒵, the OT distance is defined as <cit.>
OT(μ_t, μ_v) := min_π∈Π(μ_t, μ_v)∫_𝒵^2𝒞(z, z') d π(z, z'),
where Π(μ_t, μ_v) :={π∈𝒫(𝒵×𝒵)|∫_𝒵π (z, z') dz=μ_t, .
. ∫_𝒵π(z, z')dz'=μ_v} denotes a collection of couplings between two distributions μ_t and μ_v, and 𝒞: 𝒵×𝒵→ℝ^+ is a symmetric positive-definite cost function (with 𝒞(z, z)=0). A popular choice of 𝒞 is given in <cit.> by considering z as the feature-label pair (x,y). The computation of the OT distance usually relies on the Sinkhorn algorithm <cit.>, which attains almost linear time complexity and memory overhead with state-of-the-art implementations and applies to large scales with parallel computing <cit.>.
Given D_t={(x_i, y_i)}_i=1^N of size N, and D_v={(x_i', y_i')}_i=1^T of size T, one can construct discrete probability measures μ_t(x,y) := 1/N∑_i=1^N δ_(x_i,y_i) and μ_v(x,y) := 1/T∑_i=1^T δ_(x_i',y_i'), where δ is the Dirac delta function. With slight abuse of notation, we use OT(D_t,D_v) to denote the OT distance between their corresponding discrete measures OT(μ_t(x,y),μ_v(x,y)).
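To make the distance computation concrete, the following is a minimal sketch (not the authors' exact implementation) using the POT library; the feature-label cost, which adds a fixed label-mismatch penalty to the squared Euclidean feature distance, is an illustrative stand-in for the cost of the cited work, and all function and variable names are ours.

```python
import numpy as np
import ot  # Python Optimal Transport (POT)

def ot_distance(X_t, y_t, X_v, y_v, label_penalty=10.0, reg=0.1):
    """Entropic OT distance between labeled datasets (X_t, y_t) and (X_v, y_v)."""
    # Pairwise cost on the feature-label space Z = X x Y (illustrative choice).
    C = ot.dist(X_t, X_v, metric="sqeuclidean")
    C = C + label_penalty * (y_t[:, None] != y_v[None, :])
    # Empirical measures place mass 1/N on each sample.
    a, b = ot.unif(len(X_t)), ot.unif(len(X_v))
    # Sinkhorn gives an entropy-regularised approximation; ot.emd2 would be exact.
    return ot.sinkhorn2(a, b, C, reg)
```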
Extensive theoretical studies show that the OT distance between two distributions provides an upper bound on the difference of a model's performance when evaluated on the two distributions <cit.>. Largely built upon Kantorovich-Rubinstein duality <cit.>, existing theoretical results require assumptions on the Lipschitz constant of the model with respect to the input space. However, this constant is rarely known in practice, nor can it be bounded tightly for complex models such as deep neural networks. As a result, despite the widespread popularity of the OT distance as a performance proxy <cit.>, one cannot directly apply the existing theoretical results to estimate model performance from it, leaving an important gap.
§.§ Aligning data distance with performance predictions
Inspired by the theoretical result that the difference between training and validation loss can be tightly bounded by an affine transformation of the OT distance <cit.>, our first proposed approach directly estimates this transformation by empirically fitting data distances to model performance; the estimated transformation can then be used to predict the model performance of different data mixtures. Formally, we consider the following performance estimator:
ℒ̂(𝒜(𝒟(N, 𝐩)),D^val)=a_1·OT(𝒟(N, 𝐩), D^val)+a_0,
where scaling parameter a_1 and centering parameter a_0 define the affine transformation. These two parameters can be estimated through least-square fitting. In particular, consider collecting the "training data" by forming the set of tuples {(N_j,𝐩_j, ℒ(𝒜(𝒟(N_j,𝐩_j)),D^val))}_j=1^ℓ, where N_j is randomly sampled from {1,…,∑_i=1^m |D^pi_i|} and 𝐩_j is sampled from a probability simplex. Then, these parameters can be estimated as
(â_̂1̂, â_̂0̂) =min_a_1, a_0∑_j=1^ℓ(ℒ̂(𝒜(𝒟(N_j, 𝐩_j)),D^val)-ℒ(𝒜(𝒟(N_j, 𝐩_j)),D^val))^2.
We refer to this method as center-scaling (). The parameter a_1 can be considered an empirical estimate of the Lipschitz constant, and this treatment has been informally adopted in various works under different names <cit.>. has only two parameters to estimate, which brings an important efficiency benefit: only a few model trainings are needed to generate the "training data" for the above least-squares fit.
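A minimal sketch of this fitting step, assuming the OT distances and the corresponding measured accuracies have already been collected; names are illustrative.

```python
import numpy as np

def fit_cs(ot_distances, accuracies):
    """Fit (a1, a0) so that  acc ~ a1 * OT + a0  in the least-squares sense."""
    A = np.stack([ot_distances, np.ones_like(ot_distances)], axis=1)
    (a1, a0), *_ = np.linalg.lstsq(A, accuracies, rcond=None)
    return a1, a0

def predict_cs(a1, a0, ot_distance):
    # Predicted validation performance for a mixture with the given OT distance.
    return a1 * ot_distance + a0
```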
While the proposed is sufficient to provide reliable performance predictions in most circumstances, we find it possible to further improve prediction accuracy by making the scaling and centering parameters functions of the mixing ratio. The intuition is that for samples from different data sources (i.e., data lying in different manifolds of the input space), the Lipschitz constant of the model along the combined manifold may vary with the mixing ratio.
Hence, we supplement with simple nonlinear terms to characterize the dependence on each data source, leading to the pseudo-quadratic () method, which is given as
ℒ̂(𝒜(𝒟(N, 𝐩)),D^val)=∑_i=1^m (b_2^i· p_i^2+b_1^i· p_i+b_0)·OT(𝒟(N, 𝐩), D^val)+∑_i=1^m (c_2^i· p_i^2+c_1^i· p_i+c_0),
where (𝐛_2,𝐛_1, 𝐛_0, 𝐜_2, 𝐜_1, 𝐜_0) are the pseudo-quadratic parameters, fitted analogously to Eq. (<ref>).
has 𝒪(m) parameters (m is the number of data sources).
Its fitting process therefore requires more re-training than . However, as we will show in Section <ref>, it still significantly improves efficiency over the existing baselines <cit.>. This pseudo-quadratic form is chosen because it contains the simplest nonlinear terms while preserving convexity for numerical stability. We omit the cross terms of the full quadratic function, as they often contribute little; as a result, the number of parameters grows linearly with the number of data sources, rather than quadratically as in <cit.>, greatly easing the computational burden of parameter fitting.
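The pseudo-quadratic predictor remains linear in its parameters, so it can be fitted with the same least-squares machinery; the sketch below (with hypothetical helper names) makes the design matrix explicit.

```python
import numpy as np

def pq_design_matrix(P, d):
    """P: (n, m) mixing ratios, d: (n,) OT distances -> (n, 4m + 2) design matrix."""
    cols = [P**2 * d[:, None], P * d[:, None], d[:, None],   # scaling terms
            P**2, P, np.ones((len(d), 1))]                   # centering terms
    return np.hstack(cols)

def fit_pq(P, d, acc):
    theta, *_ = np.linalg.lstsq(pq_design_matrix(P, d), acc, rcond=None)
    return theta  # stacked (b2, b1, b0, c2, c1, c0)

def predict_pq(theta, p, d):
    return (pq_design_matrix(p[None, :], np.array([d])) @ theta)[0]
```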
§.§ Parameter-free performance projection onto larger data scales
Once the parameters are learned, the performance predictors (<ref>) and (<ref>) can be used to predict the validation performance associated with a training set by calculating the OT distance between the training and the validation set and plugging it as input to the predictors. Then, we need to project these predictions onto the target data scales. Neural scaling laws showcase the predictability of empirical performance with respect to the size of the training dataset, where it typically follows a log-linear scaling relationship as
𝔼_V [ℒ(𝒜(𝒟(N,𝐩)); D^val)]≈ -αlog(N)+C
where α and C are constants <cit.>. Recent work <cit.> shows that when data from different sources differ in "quality" (e.g., noise level), which is the most likely scenario in practice, the scaling parameters are often vastly different: α and C become functions of the data composition 𝐩, and model performance for different data mixtures scales at different rates. <cit.> assumes the same constant α for all data mixtures, leading to unsatisfactory scaling results. The difficulty underlying this dependence is that the scaling parameters differ for every data mixture and no closed-form expression is available for this functional relationship. With the performance prediction tools proposed above, it is possible to directly predict the model performance of any data mixture at the scale at which the tool has been fitted. That is, for any given data scale N_0, once the one-off fitting of the performance predictor is complete, the model performance for any data composition at data size N_0 can be inferred directly using Eq. (<ref>) or Eq. (<ref>). Thus, by performing the fitting process once at two small scales, we can fit the neural scaling law for any desired data mixture and project it onto larger data scales, without training any additional model or making any further approximations. We formalize this in the following theorem.
Consider log-linear performance scaling relationship depending on both data size N and data composition 𝐩 given as
𝔼_V [ℒ(𝒜(𝒟(N,𝐩)); D^val)]= -α(𝐩) log(N)+C(𝐩). Assume one has completed the fitting of the performance predictor on two different scales N_0 < N_1, which gives ℒ̂(𝒜(𝒟(N_0,𝐩)); D^val)
and ℒ̂(𝒜(𝒟(N_1,𝐩)); D^val) for all data mixtures 𝐩. Then, the model performance ℒ̂(𝒜(𝒟(N,𝐩)); D^val) for any data mixture 𝐩 at any data scale N can be predicted as
ℒ̂(𝒜(𝒟(N,𝐩); D^val) = (logN_1/N_0)^-1[ log N/N_0ℒ̂(𝒜(𝒟(N_1,𝐩)); D^val)-log N/N_1ℒ̂(𝒜(𝒟(N_0,𝐩); D^val)]
without fitting any additional parameters. The proof and derivations are given in Appendix <ref>. We refer to this method as parameter-free projection for model performance. As this procedure relies neither on additional assumptions nor on a parameterized surrogate, it requires minimal computational overhead while achieving considerably higher prediction accuracy than existing approaches such as <cit.>. Moreover, the method is not exclusive to the performance predictor proposed in this work: it can be plugged into other predictors seamlessly, marking a novel contribution to performance projection in data acquisition problems.
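The projection itself is a two-line computation; the following sketch transcribes Eq. (<ref>) directly, with pred_N0 and pred_N1 denoting the fitted predictions at the two pilot scales.

```python
import numpy as np

def project_performance(pred_N0, pred_N1, N0, N1, N):
    """Project predictions fitted at scales N0 < N1 to any target data size N."""
    w = 1.0 / np.log(N1 / N0)
    # Reduces to pred_N0 at N=N0 and pred_N1 at N=N1 (log-linear interpolation).
    return w * (np.log(N / N0) * pred_N1 - np.log(N / N1) * pred_N0)
```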
§.§ Performance-guided data selection
The intention of the proposed tools is not limited to predicting model performance; rather, we expect the predictions to support determining the optimal data acquisition strategy. We show that these problems are convex and differentiable (Appendix <ref>) and thus can be solved effectively via gradient-based methods. Specifically, for our primary objective, fixed-budget selection for maximal performance, with the proposed performance predictors and projection, we solve for
𝐩^*=max_𝐩ℒ̂(𝒜(𝒟(N_s,𝐩)),D^val).
We solve it iteratively with the following procedure. First, we initialize the algorithm with 𝐩=𝐩^0 where 𝐩^0 can be chosen arbitrarily provided that ∑_i p_i=1. Then, at each step, we perform the gradient update as
𝐩^𝐭+1←𝐩^𝐭 + d^t·.∂ℒ̂(𝒜(𝒟(N_s,𝐩)),D^val)/∂𝐩|_𝐩=𝐩^t,
where d^t is the step size at iteration t and we obtain 𝐩^*=𝐩^𝐓 at convergence as the desired solution. Optimal Transport naturally provides its gradient w.r.t. the probability mass of data points
in its dual solutions <cit.>, which directly gives the gradients w.r.t. data mixtures 𝐩. This easy availability of gradients renders the optimization highly efficient in computation, resulting in remarkably fast solutions. We use the calibrated gradient of OT from <cit.> which ensures the updated mixture 𝐩 remains within the simplex ∑_i p_i=1 at each step. We provide technical details of the gradient computation in Appendix.<ref>. The alternative objective can be treated as a direct extension of the primary and we defer to Appendix.<ref> for its solution procedure. The pseudo-code for is provided in Appendix <ref>.
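For concreteness, the sketch below shows one simple instantiation of this procedure using an explicit Euclidean projection onto the simplex; the actual implementation instead relies on the calibrated OT gradient described in Appendix <ref>, which keeps 𝐩 feasible without a separate projection step, and grad_fn is a placeholder for the gradient of the projected performance predictor.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto {p >= 0, sum(p) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def optimize_mixture(grad_fn, m, steps=200, lr0=0.1):
    p = np.full(m, 1.0 / m)                                   # uniform initialisation
    for t in range(1, steps + 1):
        p = project_to_simplex(p + (lr0 / t) * grad_fn(p))    # diminishing step sizes
    return p
```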
§ EVALUATION
In this section, we cover two main applications for . 1) Performance projection: for any mixing ratio of data sources and any data scale, we predict the performance of a model trained on the composed dataset. We also demonstrate the efficiency and efficacy of our method for different data source scenarios, such as mislabeled or unlabeled data.
2) Optimal data source acquisition strategy: for a given data budget, we find a mixing ratio of data sources that maximizes model performance. We present a solution to select the optimal data source composition for two learning paradigms: training from scratch and model fine-tuning.
We compare with six existing baseline methods. The first four are performance predictors: (1) Linear <cit.>, which assumes a linear relationship with the data compositions; (2) Pseudo-Quadratic, which assumes a simple non-linear relationship; (3) Quadratic <cit.>, which assumes a fully quadratic relationship; (4) Rational <cit.>, which models the relationship through the sum of a set of rational functions (for m data sources, each function has m parameters and there are m such functions, totaling m^2 parameters). The remaining two inform source selection only: (5) LOO <cit.> measures the importance of a data source by computing the performance difference after removing that source; (6) Shapley <cit.> is a game-theoretic method that computes the average marginal contribution of a data source to different subsets of other sources. Baselines (5) and (6) are unable to predict model performance, so we only include them in the data source selection experiments. Details on the implemented baselines are described in Appendix <ref> and further explained in <cit.>. For all experiments, we set up the problem with three data sources, each consisting of different classes; we refer the reader to Appendix <ref> for additional information on the experimental setup, algorithm, datasets, models, implementations, code repository, and ablation studies on the number of data sources. There, we also showcase the runtime vs. performance prediction trade-off and the comparison with baseline methods.
Evaluation Metrics. We use mean absolute error (MAE) to measure the performance of our method by calculating the absolute difference between the predicted accuracy and the actual accuracy. For the object detection task, we adopt a commonly used metric, mean average precision (mAP), which measures the average precision of a model across multiple classes or categories, providing a single value that represents the overall accuracy. The average precision represents the area under the precision-recall curve for a single class.
Hyperparameters. For practical reasons, we set the data scale N_1 in Eq. <ref> to be the size of the smallest pilot dataset, i.e. N_1 = min_i |D_i^pi |. Upon selecting N_1, we empirically choose N_0 to be 2/3 N_1. For further investigations on selecting N_0, we provide sensitivity analysis on N_0 in Appendix <ref>.
§.§ Performance Prediction
§.§.§ Performance prediction for unseen data mixtures 𝐩
In this experiment, we fit the parameters on limited compositions and extrapolate the prediction to unseen compositions. Specifically, we choose one data source and limit its maximum contribution to <55% in the training set of the performance predictor; we then predict accuracy on compositions with ≥55% contribution from that source. As we observe in Figure <ref>, the Linear and Pseudo-Quadratic methods cannot fit the training data well, which indicates that these methods lack representational power. While the Quadratic and Rational baselines can fit the training data, they suffer from overfitting and do not generalize to unseen compositions. On the other hand, as seen in Table <ref>, our method achieves the best training and extrapolation performance. achieves the second-best extrapolation performance. Furthermore, we analyze the efficiency of our method compared to the other baselines. As observed in Figure <ref>, not only achieves the lowest MAE score but also converges with around 15 training points for and 25 for , demonstrating the low computational requirements of our method. Having shown the strong predictive power of , we now proceed to practical applications: projecting performance onto larger data scales.
§.§.§ Performance Projection to Larger Data Scales
[Figure: Performance projections from 1K CIFAR-10 samples across various mixing ratios and larger data scales (2K, 5K, 7K, 10K); comparison between and baselines.]
Mislabeled Data Sources. In this experiment, we project performance onto larger data scales and also assume a more practical setting where data sources might not be of high quality and contain noisy labels <cit.>. It is then critical to factor such irregularities into our performance predictions.
Given three mislabeled data sources formed by sampling CIFAR-10, each of which releases a pilot dataset of size 1K, we project performance for various mixing ratios onto larger data sizes, i.e. 2K, 5K, 7K, and 10K. We then measure the MAE value across all data scales. We observe in Fig. <ref> that achieves the best projection performance compared to all baseline methods. achieves the lowest MAE score below 2% and is slightly above 2%. The improved performance of our method can be attributed to the incorporation of actual data distance computation. This inclusion allows for a more accurate representation of mislabeling information in performance projection, unlike baseline methods that neglect this crucial information.
The promising results demonstrate the potential of our method to project performance of any composition to any data scale, which is important in the case of mislabeled data sources in the partially-revealed setting.
[Figure: Visualization of the performance projection error, calculated as the difference between predicted and actual performance. Projections onto 2K, 5K, 7K, and 10K data scales from 1K samples. Predictions remain effective even at significantly larger scales.]
Unlabeled Data Sources.
As mentioned earlier, data sources often contain noisy labels, and the process of labeling data can be costly. On the other hand, it is not uncommon to encounter unlabeled data sources. Therefore, we would like to extend our method to accommodate the setting of data sources without labeled data, and we aim to project performance of unlabeled data compositions from pilot data source mixtures of 1K samples from CIFAR-10. The three data sources contain unlabeled data from different classes of CIFAR-10.
In this case, we compute the optimal transport distance on the feature space only, and we assume access to the labels of the pilot datasets, which enables us to train a model and obtain performance values. Consequently, we project performance across various mixing ratios onto larger data sizes (2K, 5K, 7K, and 10K). To visualize the performance of our method, we plot the difference between the projected performance and the actual performance. The closer the value approaches zero, the more optimal the projection becomes. Surprisingly, as we illustrate in Fig. <ref>, can consistently maintain good projection performance for larger data sizes. Even at 10K, the largest error is below 10%. These outcomes show that can also be extended to unlabeled data sources, which demonstrates the flexibility and practicality of our method.
§.§ Optimal Data Source Selection
Training a model requires tremendous resources for hyperparameter tuning to achieve the highest performance. We demonstrate that by choosing data strategically, we can also improve model performance. We consider a setting where we face the problem of choosing only 50K samples to train ResNet-50 on ImageNet-100 and would like to maximize the model's performance.
However, we are provided with only a pilot dataset of size 10K from each data source.
As we observe in Figure <ref>, our optimized mixing ratio based on (<ref>) achieves the highest model performance compared to all baselines. Further, using the functions from the previous step, we project the performance of our selected mixing ratio and observe in Figure <ref> that not only predicts the accuracy most closely to the actual accuracy but also attains the highest actual accuracy out of all methods.
The improved selection of mixture ratios in our method can be attributed to our proposed selection approach (<ref>). Unlike baseline methods that assume the same optimal composition for all data scales, our method finds optimal compositions specific to each data scale. For more experiments on CIFAR-10, we refer the reader to Appendix <ref>.
§.§ Application to Fine-Tuning
As powerful architectures have been introduced and computation power has improved, larger models and datasets have become increasingly prevalent in visual and natural language tasks. Retraining these large pre-trained foundation models is cost-prohibitive, so fine-tuning them for more customized tasks has become the widespread practice. In our case, we adopt a Faster R-CNN model pre-trained on the COCO dataset. Our task is to fine-tune the model on the autonomous driving object detection dataset BDD100K <cit.>. We assume each data source specializes in taking pictures at a specific time of day, i.e., daytime, night, or dawn/dusk. Similarly to the previous task, we select the optimal data source composition and project the mean average precision (mAP) onto larger data scales. In Figure <ref>(b), we show fine-tuning accuracy projections onto eight larger data scales from 1000 samples and observe that our predictions do not deviate from the actual accuracy by more than 0.4, which indicates 's extended capability of performance projection for fine-tuning.
§ DISCUSSION AND OUTLOOK
This paper presents a novel framework for data selection from partially revealed sources, addressing a practical challenge in emerging data market scenarios. Existing work tries to directly fit non-informative parametric surrogates on the limited available samples to predict model performance at different data sizes and compositions of data sources, which suffers from pronounced computational burdens and often unsatisfactory results; in contrast, our key technical contribution is an OT-based performance scaling method.
The take-away from our empirical study is that, despite being extensively adopted in the past, fitting non-informative parametric surrogates for predicting performance scaling is suboptimal: it is computationally inefficient, often impractical, and less accurate. Utilizing data distance in the performance prediction provides immediate benefits and presents a better pathway for constructing the predictors.
Contributing a new perspective with performance and efficiency improvements, this work still has some limitations and opens up many new avenues for investigation, such as lifting the requirement on validation data, accounting for malicious data owners, and extending to data sources that are misaligned in feature space. Additional discussion and broader impacts are provided in Appendix <ref>.
RJ and the ReDS lab would like to thank the support from NSF via the Grant OAC-2239622.
§ : FRAMEWORK AND ALGORITHMS
§.§ Pipeline
I. Training Data Preparation. For data distance-based performance prediction, the first step of the pipeline is to prepare "training data" (introduced in Section <ref>) to fit parameters of Equations <ref> and <ref>. The data consists of different data source compositions for some given data scales N_0, N_1, where for each composition p, we compute the OT distance of the composed dataset 𝒟(N_1,p) to the validation data and then train a model on 𝒟(N_1,p) to get the actual model performance. For simplicity, we select compositions through grid search. This step is represented in Algorithm <ref> from lines 4-9 and is the first (I) step in the pipeline Figure <ref>.
II. Fitting Predictor Function. Once the training data is prepared, we proceed to fit our function in Eqs. <ref> and <ref>. This step is shown in lines 10-11 in Algorithm <ref> and is the second (II) step of the pipeline Figure <ref>. Then, with the performance predictors fitted for data scales N_0 and N_1, we move on to the inference stage, where we can perform 2 tasks: performance projection and data source selection.
III. Two-Stage Performance Projection. For performance projection, we project the performance to any data size N for any mixing ratio p in two stages. (1) We predict the performance given the mixing ratio p at data scales N_0 and N_1. (2) We use Eq. <ref> to project performance prediction to any data size N. This process is represented in line 12 of Algorithm <ref> and is the third (III) step of the pipeline Figure <ref>.
IV. Optimal Data Source Selection. For optimal data source selection, we solve an optimization problem through gradient descent, which is provided in Eq. <ref>. The gradient computation uses parameters of the fitted functions from step II and the process terminates when the mixing ratio converges. This process is represented in line 12 of Algorithm <ref> and is the fourth (IV) step of the pipeline Figure <ref>.
§ PROOFS AND OPTIMIZATION DETAILS
§.§ Proof for Theorem <ref>
Theorem 1 (Data Composition Dependent Performance Projection, restated). Consider the log-linear performance scaling relationship depending on both data size N and data composition 𝐩 given as
𝔼_V [ℒ(𝒜(𝒟(N,𝐩)); D^val)]= -α(𝐩) log(N)+C(𝐩)
Assume one has completed the fitting of the performance predictor on two different scales N_0 < N_1, which gives ℒ̂(𝒜(𝒟(N_0,𝐩)); D^val)
and ℒ̂(𝒜(𝒟(N_1,𝐩)); D^val) for all data mixtures 𝐩. Then, the model performance ℒ̂(𝒜(𝒟(N,𝐩)); D^val) for any data mixture 𝐩 at any data scale N can be predicted as
ℒ̂(𝒜(𝒟(N,𝐩); D^val) = (logN_1/N_0)^-1[ log N/N_0ℒ̂(𝒜(𝒟(N_1,𝐩)); D^val)-log N/N_1ℒ̂(𝒜(𝒟(N_0,𝐩); D^val)]
From Eq. (<ref>), for any data mixture 𝐩_𝐳, we have
𝔼_V [ℒ(𝒜(𝒟(N,𝐩_𝐳)); D^val)]= -α(𝐩_𝐳) log(N)+C(𝐩_𝐳)
Then, for data scales N_0 and N_1 where one has completed fitting the performance predictors, we have
𝔼_V [ℒ(𝒜(𝒟(N_0,𝐩_𝐳)); D^val)]= -α(𝐩_𝐳) log(N_0)+C(𝐩_𝐳)
𝔼_V [ℒ(𝒜(𝒟(N_1,𝐩_𝐳)); D^val)]= -α(𝐩_𝐳) log(N_1)+C(𝐩_𝐳)
which gives
α̂(𝐩_𝐳) = [ℒ̂(𝒜(𝒟(N_0,𝐩_𝐳)); D^val) - ℒ̂(𝒜(𝒟(N_1,𝐩_𝐳)); D^val)] / [log(N_1) - log(N_0)]
where ℒ̂(𝒜(𝒟(N_0,𝐩_𝐳)); D^val) and ℒ̂(𝒜(𝒟(N_1,𝐩_𝐳));D^val) are given by the fitted predictors.
For any data scale N_z, we have
𝔼_V [ℒ(𝒜(𝒟(N_z,𝐩_𝐳)); D^val)]= -α(𝐩_𝐳) log(N_z)+C(𝐩_𝐳)
Plugging in the above equations, the performance prediction can be given by
ℒ̂(𝒜(𝒟(N_z,𝐩_𝐳)); D^val)= -α̂(𝐩_𝐳)·[log(N_z) - log(N_1)]+ℒ̂(𝒜(𝒟(N_1,𝐩)); D^val)
= -[ℒ̂(𝒜(𝒟(N_0,𝐩_𝐳)); D^val) - ℒ̂(𝒜(𝒟(N_1,𝐩_𝐳)); D^val)]·(log(N_z) - log(N_1))/(log(N_1) - log(N_0)) + ℒ̂(𝒜(𝒟(N_1,𝐩_𝐳)); D^val)
= (logN_1/N_0)^-1[ log N_z/N_0·ℒ̂(𝒜(𝒟(N_1,𝐩_𝐳)); D^val) - log N_z/N_1·ℒ̂(𝒜(𝒟(N_0,𝐩_𝐳)); D^val)]
which completes the proof.
Q.E.D.
§.§ Objectives for Data Selection
Problem formulation is provided in Section <ref>, where the following two objectives are introduced. Given a selection budget of N samples, a mixing ratio of data sources 𝐩= {p_1,…,p_m} such that ∀_i, 0 ≤ p_i ≤ 1 and ∑_i=1^m p_i = 1, and m datasets D_1, …, D_m to be mixed, we denote the selected dataset by 𝒟(N,𝐩) = S_1 ∪⋯∪ S_m, where each S_i is a random subset of D^all_i and |S_i| = p_i N. Using these notations, we now describe the typical acquisition goals that can be accommodated by our approach:
* (Primary) Fixed-budget selection for maximal performance: The collector seeks to maximize the resulting model performance by strategically choosing the mixing ratio 𝐩 of m data sources at a pre-specified selection budget N_s≤∑_i=1^m N̅_i. The objective can be formalized as max_𝐩ℒ(𝒜(𝒟(N_s,𝐩)),D^val).
* (Alternative) Flexible-budget selection for reaching performance threshold with minimal costs: The collector seeks to attain a target model performance u^tar by choosing both the mixing ratio 𝐩 as well as the selection budget N. More formally, the objective can be expressed as min_N,𝐩 N s.t. max_𝐩ℒ(𝒜(𝒟(N,𝐩),D^val) ≥ u^tar.
The primary objective, "fixed-budget selection for maximal performance", is formulated as a convex optimization and we solve it via gradient-based methods. The alternative objective can be treated as a direct extension of the primary, where one solves the "fixed-budget selection for maximal performance" problem for different data quantities N and performs a line search for minimal data quantity N that meets the performance requirement.
§.§ Optimization and Convexity
§.§.§ Primary
For our primary objective fixed-budget selection for maximal performance, with the proposed performance predictors with projection, we solve for
𝐩^*=max_𝐩ℒ̂(𝒜(𝒟(N_s,𝐩)),D^val)
We show this objective function is convex in data composition 𝐩, where the proposed gradient-based method will guarantee to find its optimal solution 𝐩^* efficiently.
Empirically, model performance ℒ always appears convex in data composition 𝐩, which is also reported in <cit.>. This means the model trained on data combined from multiple sources always achieves no worse performance than the average performance of models trained separately on data from each source. Theoretically, as given in <cit.>, the gap between training and validation performance can be tightly bounded by the OT distance between training and validation data, whereas the OT distance is always convex in data composition 𝐩. We now show our proposed performance predictors as well as the optimization problem based on them are also convex in data composition 𝐩.
For in Eq. (<ref>), consider data compositions 𝐩_0≠𝐩_1 and a convex combination 𝐩_2 = α𝐩_0+(1-α)𝐩_1 for some constant α∈(0,1). Then, we have
ℒ̂(𝒜(𝒟(N, 𝐩_2)),D^val) - αℒ̂(𝒜(𝒟(N, 𝐩_0)),D^val) - (1-α)ℒ̂(𝒜(𝒟(N, 𝐩_1)),D^val)
= a_1·[OT(𝒟(N, 𝐩_2), D^val)-αOT(𝒟(N, 𝐩_0), D^val)-(1-α)OT(𝒟(N, 𝐩_1), D^val)]
≥ 0
where the inequality holds because a_1>0 always holds and OT distance is always convex in data composition 𝐩 by definition <cit.>. Thus, is convex in data composition 𝐩. is constructed as with pseudo quadratic terms and its convexity in 𝐩 can be shown similarly.
We solve the above optimization problem iteratively with the following procedure. First, we initialize the algorithm with 𝐩=𝐩^0 where 𝐩^0 can be chosen arbitrarily provided that ∑_i p_i=1. Then, at each step, we perform the gradient update as
𝐩^𝐭+1←𝐩^𝐭 + d^t·.∂ℒ̂(𝒜(𝒟(N_s,𝐩)),D^val)/∂𝐩|_𝐩=𝐩^t
where d^t is the step size at iteration t and we obtain 𝐩^*=𝐩^𝐓 at convergence as the desired optimal solution.
§.§.§ Alternative
Then, for flexible-budget selection for reaching performance threshold with minimal budget:
min_N,𝐩 N s.t. max_𝐩ℒ(𝒜(𝒟(N,𝐩),D^val) ≥ u^tar
We solve it through a bi-level optimization, where the lower level is the same as the primary objective and the upper level is a line search for optimal data quantity N^*. Note that the model performance ℒ(𝒜(𝒟(N,𝐩),D^val) is monotonically non-decreasing and concave in N. Thus, the optimal data quantity N^* can be found via a straightforward line search. Initialize at N_0=0 and 𝐩=𝐩^0 where 𝐩^0 can be chosen arbitrarily provided that ∑_i p_i=1. Then, at each step, we perform the gradient update as
N^t+1←N^t + d^t·.∂ℒ̂(𝒜(𝒟(N,𝐩^𝐭*)),D^val)/∂ N|_N=N^t
with
𝐩^𝐭*= max_𝐩ℒ̂(𝒜(𝒟(N_t,𝐩)),D^val)
where d^t is the step size at iteration t and 𝐩^t* is the optimal data mixture at N_t, respectively. Continue until ℒ̂(𝒜(𝒟(N_t,𝐩^𝐭*)),D^val) ≥ u^tar is achieved and then the data acquisition strategy 𝒟(N_t,𝐩^𝐭*) is accepted. Note that at each step, 𝐩^t is initialized from 𝐩^(t-1)* and the optimization for 𝐩^t* is completed fairly easily within a few steps.
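Because the predicted performance is monotonically non-decreasing in N, the outer search can also be carried out as a simple sweep over increasing candidate budgets; the sketch below shows this simplified variant, where predict_at and optimize_at are hypothetical wrappers around the fitted predictor (with projection to scale N) and the inner mixture optimization.

```python
def min_budget(predict_at, optimize_at, u_target, budget_grid):
    """Return the smallest budget N whose optimized mixture meets u_target."""
    for N in sorted(budget_grid):
        p_star = optimize_at(N)                 # inner problem at budget N
        if predict_at(N, p_star) >= u_target:
            return N, p_star                    # smallest budget reaching the target
    return None, None                           # target unreachable on the grid
```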
§.§ Gradients Calculation, Stepsize Selection, and Convergence
For the primary problem optimizing over 𝐩(N) where N is the target data quantity, the gradients can be calculated as
∂ℒ̂(𝒜(𝒟(N,𝐩); D^val)/∂𝐩=
(logN_1/N_0)^-1[ log N/N_0∂ℒ̂(𝒜(𝒟(N_1,𝐩)); D^val)/∂𝐩-log N/N_1∂ℒ̂(𝒜(𝒟(N_0,𝐩); D^val)/∂𝐩]
Optimal Transport naturally provides the gradient information of the OT distance w.r.t. the probability mass of datapoints on which it is computed in its dual solutions. <cit.> provides an approach that directly constructs the calibrated gradient from the output of the OT solver that informs how the OT distance changes as the probability mass of the datapoints changes, while ensuring the updated mixture 𝐩 remains within the simplex ∑_i p_i=1 at each step.
Specifically, for , recalling Eq. (<ref>), we have
∂ℒ̂(𝒜(𝒟(N,𝐩)); D^val)/∂𝐩 = a_1·∂OT(𝒟(N,𝐩), D^val)/∂𝐩 = [∂OT(𝒟(N,𝐩), D^val)/∂ p_1, …, ∂OT(𝒟(N,𝐩), D^val)/∂ p_m],
where p_1+p_2+...+p_m=1. Let {r_1^1, r_1^2, ..., r_1^n_1, ..., r_i^n_i, ..., r_m^n_m} be the samples constituting 𝒟(N, 𝐩), where r_i^j denotes the j-th sample from data source i. Then, the calibrated gradient is given as
∂OT(𝒟(N,𝐩), D^val)/∂ p_i = 1/n_i(∑_j=1^n_i f_i^j - n_i/(N-n_i)∑_x∈{1...m}∖ i∑_y=1^n_x f_x^y),
where f_i^j is the dual solution of OT that corresponds to r_i^j. The calibrated gradient ensures the updated mixture 𝐩 remains within the simplex ∑_i p_i=1 at each step. Similarly, for in Eq. (<ref>), the calibrated gradient is given as
∂ℒ̂(𝒜(𝒟(N,𝐩)); D^val)/∂ p_i = (b_2^i· p_i^2+b_1^i· p_i+b_0)·∂OT(𝒟(N,𝐩), D^val)/∂ p_i + [b_2^i· (2p_i)+b_1^i]·OT(𝒟(N,𝐩), D^val) + [c_2^i· (2p_i)+c_1^i].
Then, to perform gradient-based optimization, we first initialize the algorithm with 𝐩=𝐩^0, where 𝐩^0 can be chosen arbitrarily provided that ∑_i p_i=1. Then, at each step, we perform the gradient update as
𝐩^𝐭+1←𝐩^𝐭 + d^t·.∂ℒ̂(𝒜(𝒟(N_s,𝐩)),D^val)/∂𝐩|_𝐩=𝐩^t
where d^t > 0 is the step size at iteration t. In practice, we choose diminishing step sizes that satisfy the Robbins–Monro conditions d^t+1≤ d^t, ∑_t d^t = ∞, and ∑_t (d^t)^2 < ∞. The sequence 𝐩^𝐭 is then guaranteed to converge to the optimal solution 𝐩^* <cit.>, given that the objective function is convex and bounded. Gradients for can be obtained similarly and the solution procedure is the same. The proposed method achieves fast convergence and yields satisfactory results.
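As an illustration of how the dual solution yields this gradient, the following hedged sketch solves the exact OT problem with the POT library and assembles the calibrated per-source gradient of Eq. (<ref>); the cost matrix, source labels, and function name are placeholders.

```python
import numpy as np
import ot

def calibrated_mixture_gradient(C, source_ids, m):
    """C: (N, T) cost matrix between training and validation samples;
    source_ids: (N,) integers in {0, ..., m-1} marking each sample's source."""
    N, T = C.shape
    a, b = ot.unif(N), ot.unif(T)
    _, log = ot.emd(a, b, C, log=True)           # exact OT; duals in log['u'], log['v']
    f = np.asarray(log["u"])                      # dual potentials of training samples
    grad = np.zeros(m)
    for i in range(m):
        mask = source_ids == i
        n_i = mask.sum()
        # Eq. (<ref>): per-source combination of the dual potentials.
        grad[i] = (f[mask].sum() - n_i / (N - n_i) * f[~mask].sum()) / n_i
    return grad  # chain rule with a_1 (CS) or the PQ terms gives dL_hat/dp
```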
§ SAMPLING STOCHASTICITY AND MARKET PRACTICES
In the data selection problem formulated in this work, we aim to optimize predicted performance with objectives given in terms of finite samples from the pilot datasets. We note that each pilot dataset is a random sample from the whole dataset of the corresponding data provider and is inevitably affected by the stochasticity of the sampling process. Our performance predictions ℒ̂ are empirical estimates based on these samples, so the sampling process must be well behaved for the estimates to be unbiased and precise. Data providers should adhere to certain guidelines when selecting the pilot datasets. The sampling process should be unbiased, with each sample selected with an equal chance. Examples include sampling with a Bernoulli process, where each sample is selected with a fixed probability p, or permutation sampling, where one selects the first N samples from a random permutation.
There might be strategic providers that do not adhere to the guidelines. Multiple mechanisms are available to incentivize providers to provide faithful samples. Since the full dataset is revealed after purchase, the data buyer can examine the posterior probability that the pilot dataset was sampled from the whole dataset according to the prescribed sampling protocol: the chance that the distribution of the pilot dataset deviates strongly from that of the whole dataset should be small. A threshold for hypothesis testing can be set to determine whether to accept or reject the pilot dataset as an unbiased sample from the whole dataset. If there is external supervision (e.g., a market regulator), it can conduct sequential hypothesis testing to check whether the samples provided by each seller converge to the whole dataset or comply with the prescribed sampling procedure.
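As a purely illustrative example of such a check (an assumption on our part rather than a prescribed market mechanism), a buyer could compare a one-dimensional summary of the pilot and full datasets with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

def pilot_looks_unbiased(pilot_feats, full_feats, alpha=0.01):
    """Heuristic check: project both sets onto the top principal direction of the
    full data and run a two-sample KS test on the projections."""
    mu = full_feats.mean(axis=0)
    _, _, Vt = np.linalg.svd(full_feats - mu, full_matrices=False)
    proj = lambda X: (X - mu) @ Vt[0]
    stat, p_value = ks_2samp(proj(pilot_feats), proj(full_feats))
    return p_value >= alpha   # reject the pilot set if the p-value is tiny
```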
§ EXPERIMENT DETAILS AND ADDITIONAL RESULTS
§.§ Datasets and Models
For our experiments, we use the following vision and language datasets: IMDB, MNIST, CIFAR-10, ImageNet-100, and BDD100K.
For the IMDB dataset, we trained a Long Short-Term Memory (LSTM) network <cit.> for 20 epochs; for MNIST, we use Support Vector Machines (SVMs) with an RBF kernel. For CIFAR-10, we trained a pre-activation ResNet with identity mappings (PreActResNet-18 <cit.>) for 100 epochs. For ImageNet-100, we trained a ResNet-50 <cit.> backbone for 200 epochs with cosine annealing as the learning rate scheduler. For the autonomous driving object detection dataset, BDD100K, we fine-tune a Faster R-CNN ResNet-50 network <cit.>, pre-trained on the COCO dataset <cit.>, for 30 epochs.
§.§ Details on Baseline Methods
For N samples (data quantity) from m data sources with a mixing ratio 𝐩= {p_1,…,p_m}, we consider the following baselines
Linear:
ℒ̂(𝒜(𝒟(N,𝐩); D^val):=𝐚'𝐩+blog(N)+c, where 𝐚 = {a_0, a_1, ...,a_m}, b, and c are coefficients to be fitted.
Leave-one-out (LOO) and Shapley can be considered special cases of Linear, where the coefficients are calculated as the marginal contribution of each data source (LOO) or its average contribution across different combinations of other data sources (Shapley) <cit.>, as opposed to the least-squares fitting used in Linear.
Pseudo-quadratic:
ℒ̂(𝒜(𝒟(N,𝐩); D^val):=∑_i=1^m (c_2^i· p_i^2+c_1^i· p_i+c_0)+blog(N)
Quadratic:
ℒ̂(𝒜(𝒟(N,𝐩); D^val):=∑_i=1^m (c_2^i· p_i^2+c_1^i· p_i+c_0)+∑_i=1^m∑_j=1^i (c_3^ij· p_ip_j)+blog(N)
Rational:
ℒ̂(𝒜(𝒟(N,𝐩); D^val):=∑_i=1^m(∑_j=1^m c^ij· p_j )^-1+blog(N)
We fit the Rational baseline according to the setup detailed in <cit.> and to our best effort. Originally, the method is intended for predicting log loss, whereas in our case, we aim to predict model accuracy. Thus, we replaced the log loss with log(1-accuracy) for the prediction target.
§.§ Performance Prediction for Unseen Data Mixtures p: Ablation Study
In the previous experiments, we presented results for a single class distribution of the data sources. Here, we present a more comprehensive view of 's performance by running it multiple times over random class distributions of the data sources, according to Table <ref>. As shown in Table <ref>, in all cases either or achieves the highest performance, while the baseline methods struggle to come close. Although in many cases (1, 2, 3, 5) the Quadratic or Rational baseline obtains the lowest training MAE, these methods generalize poorly at test time, indicating heavy overfitting to the training data. These results indicate that predicting performance from the data source composition alone is insufficient, and that the OT distance plays an important role in better aligning data sources with model accuracy.
§.§ Extended Application to Multiple Number of Data Sources
So far, our experiments have focused solely on cases with three data sources. To show the practicality of our method, we extend to more challenging cases with more than three data sources. Specifically, we explore settings with 4, 5, and 6 data sources and report the results with baseline comparisons in Table <ref>. As we observe, our methods, both and , achieve the best testing MAE scores as well as among the lowest training MAE scores, while both the Linear and Pseudo-Quadratic baselines underperform and receive poor testing MAE values. As observed in previous experiments, the Quadratic baseline fits the training data well but gives weak testing predictions. For the Rational baseline, we tried our best to fit this method, but with a larger number of data sources it becomes even harder to train properly, resulting in increasing errors. This experiment demonstrates the capability of our method to extend to more practical settings with multiple data sources.
§.§ Optimal Data Source Composition
Here, we showcase optimal data source selection on the CIFAR-10 dataset. Similarly to Section <ref>, given a pilot dataset of size 1.5K from each of the three data sources, we would like to find the mixing ratio that maximizes model performance when training on a 10K dataset. As illustrated in Fig. <ref>, and select mixing ratios that achieve the highest model performance, gaining 4% and 2% in performance improvement over the best baseline method. Our methods select nontrivial mixing ratios which also outperform the uniform mixing ratio by 5% and 3%, respectively. Furthermore, we observe in Fig. <ref> that our methods also perform well when projecting performance to larger data scales. Specifically, and predictions are within 2.2% of the actual model performance for the 10K dataset, while the best baseline prediction has over 5% error.
To further demonstrate capability, we additionally present results for the case where data sources contain some mislabeled data.
In this case, we assume the data sources have noisy labels; in particular, the data sources have 20%, 15%, and 25% mislabeled data, respectively. In Fig. <ref>, surprisingly, we observe that even though the first data source has 20% mislabeled data, choosing more of that source improves model performance. A possible explanation is that the first source contains classes that are important for learning, while the second source contains classes that are less important for model performance or require less data to be learned (especially since it has a lower mislabeling rate). Moreover, we notice that 's mixing ratio can improve model performance by over 3% relative to the best baseline. Results from Fig. <ref> indicate that 's performance projection onto the 10K dataset has a prediction error within 1.7%, while the best baseline has an error of over 2.2%. To sum up, we have shown the capabilities of in mixing ratio selection and performance prediction. While baseline methods select the same mixing ratio for all data scales, our method chooses different optimal mixing ratios for different data scales, which helps improve model performance.
§.§ Code Repository
For the purposes of the double-blind review, the code repository is accessible via the anonymous link https://anonymous.4open.science/r/projektor-D21A. The code will be published after the review process is finished.
§ ADDITIONAL DISCUSSION AND BROADER IMPACTS
Limitations. Despite contributing a new perspective with performance and efficiency improvements, this work still has some limitations and opens up many new avenues for investigation:
(1) How can we quantify the influence of, or further lift the dependence on, validation data? While a validation set representative of the downstream learning task is a common assumption in the ML literature, it may or may not be available during data exchange, and its quality may vary. (2) Our design could be vulnerable to estimation errors in the scaling law for data sizes, which could lead to magnified prediction errors at larger scales and affect data acquisition decisions <cit.>. In particular, noise is inevitable due to the performance stochasticity of ML models as well as the sampling process used to generate the pilot datasets.
(3) Our current framework does not take into consideration broader tasks that aim for goals beyond accuracy, e.g., fairness or variable data costs, nor broader acquisition scenarios where data sources have misaligned feature spaces. Incorporating other objectives and extending to heterogeneous data sources is an exciting direction. (4) Our setup considers honest data providers whose requested samples are faithfully sampled from the actual data sources, leaving an in-depth study of potential security risks, such as malicious data manipulation <cit.>, to future work.
Broader Impacts. This work will have significant impacts beyond advancing the research on data selection and data markets. The techniques developed in this work can be applied to a variety of other subfields of ML related to data acquisition, data valuation, interpretability, robustness, etc. The results of this paper will facilitate the automation of data selection and quality management in machine learning, which in turn, accelerates research and improves services based on ML. Data exchanges and data markets are also at the heart of the global data economy. The advancements in practical data exchanges in this work will substantially benefit the development of data markets and promote data sharing, contributing to the business and economy as well as society as a whole.
|
http://arxiv.org/abs/2307.01252v1
|
20230703180001
|
ReveaLLAGN 0: First Look at JWST MIRI data of Sombrero and NGC 1052
|
[
"K. Goold",
"A. Seth",
"M. Molina",
"D. Ohlson",
"J. C. Runnoe",
"T. Boeker",
"T. A. Davis",
"A. Dumont",
"M. Eracleous",
"J. A. Fernández-Ontiveros",
"E. Gallo",
"A. D. Goulding",
"J. E. Greene",
"L. C. Ho",
"S. B. Markoff",
"N. Neumayer",
"R. Plotkin",
"A. Prieto",
"S. Satyapal",
"G. Van De Ven",
"J. L. Walsh",
"F. Yuan",
"A. Feldmeier-Krause",
"K. Gültekin",
"S. Hoenig",
"A. Kirkpatrick",
"N. Lützgendorf",
"A. E. Reines",
"J. Strader",
"J. R. Trump",
"K. T. Voggel"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
0000-0002-7743-9906]Kameron Goold
Department of Physics & Astronomy, University of Utah, James Fletcher Building, 115 1400 E, Salt Lake City, UT 84112, USA
0000-0003-0248-5470]Anil Seth
Department of Physics & Astronomy, University of Utah, James Fletcher Building, 115 1400 E, Salt Lake City, UT 84112, USA
0000-0001-8440-3613]Mallory Molina
Department of Physics & Astronomy, University of Utah, James Fletcher Building, 115 1400 E, Salt Lake City, UT 84112, USA
Department of Physics & Astronomy, Vanderbilt University, Nashville, TN 37235, USA
0009-0004-9457-2495]David Ohlson
Department of Physics & Astronomy, University of Utah, James Fletcher Building, 115 1400 E, Salt Lake City, UT 84112, USA
0000-0001-8557-2822]Jessie C. Runnoe
Department of Physics & Astronomy, Vanderbilt University, Nashville, TN 37235, USA
0000-0002-5666-7782]Torsten Böker
European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0003-4932-9379]Timothy A. Davis
Cardiff Hub for Astrophysics Research & Technology, School of Physics & Astronomy, Cardiff University, Queens Buildings, Cardiff, CF24 3AA, UK
0000-0003-0234-3376]Antoine Dumont
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg, Germany
0000-0002-3719-940X]Michael Eracleous
Department of Astronomy & Astrophysics and Institute for Gravitation and the Cosmos, The
Pennsylvania State University, 525 Davey Lab, University Park, PA 16802, USA
0000-0001-9490-899X]Juan Antonio Fernández-Ontiveros
Istituto di Astrofisica e Planetologia Spaziali (INAF–IAPS), Via Fosso del Cavaliere 100, I–00133 Roma, Italy
Centro de Estudios de Física del Cosmos de Aragón (CEFCA), Plaza San Juan 1, E–44001, Teruel, Spain
0000-0001-5802-6041]Elena Gallo
Department of Astronomy, University of Michigan, 1085 S. University Ave., Ann Arbor, MI 48109, USA
0000-0003-4700-663X]Andy D. Goulding
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
0000-0002-5612-3427]Jenny E. Greene
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
0000-0001-6947-5846]Luis C. Ho
Kavil Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China
Department of Astronomy, School of Physics, Peking University, Beijing 100871, China
0000-0001-9564-0876]Sera B. Markoff
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
0000-0002-6922-2598]Nadine Neumayer
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg, Germany
0000-0002-7092-0326]Richard M. Plotkin
Department of Physics, University of Nevada, Reno, NV 89557, USA
Nevada Center for Astrophysics, University of Nevada, Las Vegas, NV 89154, USA
0000-0002-3585-2639]Almudena Prieto
Universidad de La Laguna (ULL), Dpto. Astrofísica, Avd. Astrofísico Fco. Sánchez s/n, 38206 La Laguna, Tenerife, Spain
Instituto de Astrofísica de Canarias (IAC), C/Vía Láctea s/n, 38205 La Laguna, Tenerife, Spain
Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, 81679 München, Germany
0000-0003-2277-2354]Shobita Satyapal
George Mason University, Department of Physics and Astronomy, MS3F3, 4400 University Drive, Fairfax, VA 22030, USA
0000-0003-4546-7731]Glenn van de Ven
Department of Astrophysics, University of Vienna, Türkenschanzstraße 17, 1180 Vienna, Austria
0000-0002-1881-5908]Jonelle L. Walsh
George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics & Astronomy, Texas A&M University, 4242 TAMU, College Station, TX 77843, USA
0000-0003-3564-6437]Feng Yuan
Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, People’s Republic of China
0000-0002-0160-7221]Anja Feldmeier-Krause
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg, Germany
0000-0002-1146-0198]Kayhan Gültekin
Department of Astronomy, University of Michigan, 1085 S. University Ave., Ann Arbor, MI 48109, USA
0000-0002-6353-1111]Sebastian Hönig
Department of Physics & Astronomy, University of Southampton, Hampshire SO17 1BJ Southampton, UK
0000-0002-5537-8110]Allison Kirkpatrick
Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045, USA
0000-0002-4034-0080]Nora Lützgendorf
European Space Agency, c/o STScI, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0001-7158-614X]Amy E. Reines
eXtreme Gravity Institute, Department of Physics, Montana State University, Bozeman, MT 59717, USA
0000-0002-1468-9668]Jay Strader
Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
0000-0002-1410-0470]Jonathan R. Trump
Department of Physics, 196 Auditorium Road, Unit 3046, University of Connecticut, Storrs, CT 06269, USA
0000-0001-6215-0950]Karina T. Voggel
Universite de Strasbourg, CNRS, Observatoire astronomique de Strasbourg, UMR 7550, 67000 Strasbourg, France
We present the first results from the Revealing Low-Luminosity Active Galactic Nuclei (ReveaLLAGN) survey, a JWST survey of seven nearby LLAGN. We focus on two observations with the Mid-Infrared Instrument's (MIRI) Medium Resolution Spectrograph (MRS) of the nuclei of NGC 1052 and Sombrero (NGC 4594 / M104). We also compare these data to public JWST data of a higher-luminosity AGN, NGC 7319. JWST clearly resolves the AGN component even in Sombrero, the faintest target in our survey; the AGN components have very red spectra. We find that the emission-line widths in both NGC 1052 and Sombrero increase with increasing ionization potential, with FWHM>1000 kms^-1 for lines with ionization potential ≳ 50 eV. These lines are also significantly blue-shifted in both LLAGN. The high ionization potential lines in NGC 7319 show neither broad widths nor significant blue shifts. Many of the lower ionization potential emission lines in Sombrero show significant blue wings extending >1000 kms^-1. These features and the emission-line maps in both galaxies are consistent with outflows along the jet direction. Sombrero has the lowest-luminosity high-ionization potential lines ([Ne5] and [O4]) ever measured in the mid-IR, but the relative strengths of these lines are consistent with higher luminosity AGN. On the other hand, the [Ne5] emission is much weaker relative to the [Ne3] and [Ne2] lines of higher-luminosity AGN. These initial results show the great promise that JWST holds for identifying and studying the physical nature of LLAGN.
§ INTRODUCTION
As material falls onto a black hole, that material heats up and emits light, creating an active galactic nucleus (AGN). While the most rapidly accreting objects are seen to the edges of our Universe as luminous quasars, the vast majority of central supermassive black holes in nearby galaxies are low-luminosity AGN (LLAGN), accreting at rates far below their Eddington limits <cit.>.
The observational signatures of LLAGN differ from those of their higher-luminosity counterparts <cit.>. In particular, an increasing fraction of the LLAGN energy is channeled into a compact jet that can impact the gas in their host galaxies, keeping massive early-type galaxies quiescent <cit.>. The inner part of the accretion disk is believed to transition to a radiatively inefficient accretion flow <cit.>, but the structure of LLAGN is still not well understood.
Infrared (IR) wavelengths are particularly valuable for studying AGN <cit.>, as the dust that hides many AGN at optical and UV wavelengths strongly emits in the IR. In fact, the energy output for many AGN is highest at X-ray and mid-IR wavelengths <cit.>. Furthermore, the emission from AGN at 12 μm has been found to be tightly correlated with the 2-10 keV X-ray emission, with similar luminosities in both bands <cit.>.
In addition to continuum emission from dust or jets <cit.>, strong emission lines are seen at infrared wavelengths, including high ionization potential (IP) “coronal” emission lines that track the ionization field of the AGN <cit.>. At lower accretion rates, the infrared signatures may change <cit.>; moreover, because the AGN is less luminous, it becomes increasingly difficult to separate the nuclear emission of LLAGN from the surrounding emission from star formation. This is especially true for low ionization nuclear emission regions (LINERs), which are characterized by strong low-ionization emission lines, a weakly-accreting black hole, and shocked emission from strong outflows <cit.>.
The advent of JWST brings new opportunities in the study of AGN. It is the most sensitive instrument ever for detecting AGN, roughly matching 2 Ms in Chandra Deep Field North in just 10 ks <cit.>. However, given their low accretion rates, emission from the central engine in LLAGNs is often mixed with the emission from the surrounding host galaxy. Thus, we will need empirical templates or models to cleanly separate LLAGN emission from that of the galaxy.
Fortunately, JWST also has remarkable spatial resolution, which allows us to isolate the LLAGN emission from that of the host galaxy in nearby objects. This is the goal of the Revealing LLAGN (ReveaLLAGN) project, which is obtaining integral field spectroscopic (IFS) observations of seven nearby, well-known LLAGN spanning a wide range of both black hole mass (10^5.5–9.8 M_⊙) and Eddington ratio (log(L_bol/L_edd) of -6.2 to -2.7). In addition to providing templates of LLAGN that can be applied to more distant observations, the continuum and coronal line emission can provide valuable constraints for understanding the internal structure of LLAGN.
In this paper, we report the first results from the ReveaLLAGN project based on the Mid-Infrared Instrument (MIRI) medium-resolution spectrometer (MRS) data from our first two targets, Sombrero (also known as M104 and NGC 4594) and NGC 1052. The overall properties of these galaxies are listed in Table <ref>. Both galaxies are classified as LINERs based on their optical emission lines <cit.>, and their low AGN luminosities are due to their low Eddington ratios and accretion rates. In the context of the full ReveaLLAGN sample these two galaxies bracket the survey in terms of their mid-IR fluxes, and thus represent the expected highest (NGC 1052) and lowest (Sombrero) signal-to-noise ratios (S/N). In this paper, we contrast these two LLAGN with the higher luminosity and Eddington ratio Seyfert 2 AGN in NGC 7319 <cit.>, which is also at a much larger distance. All three galaxies have similar BH masses, thus the primary differences between these AGN are that they have Eddington ratios spanning ∼4 dex. Both Sombrero and NGC 1052 have well-studied AGN with small-scale radio jets that can create shocked emission which could contribute to the observed nuclear emission <cit.>. Additionally, both galaxies' spectral energy distributions (SEDs) show a lack of emission in the UV relative to higher luminosity AGN <cit.>, consistent with other LLAGN <cit.>. We review previous observations of both galaxies' AGN in more depth in Section <ref>.
In Section <ref> we describe the data acquisition and reduction processes. We present our spectral extraction process and emission-line measurements for both the nuclear spectra and the emission-line maps in Section <ref>. We present our analysis of the data in Section <ref>, and discuss them in context of previous work in Section <ref>. We conclude in Section <ref>. We note that all JWST data is barycenter corrected, and thus velocities are given in the barycentric frame.
lcccccccc
0pt
Galaxy Properties
Galaxy Name Distance V_sys Galaxy Mass Morph. AGN Type BH Mass AGN X-ray Lum. Eddington Ratio
Mpc km s^-1 log(M_⋆/M_⊙) log(M_∙/M_⊙) log(L_X/erg s^-1) log(L_bol/L_edd)
NGC 1052 19.4±0.2 1487.9±5.1 10.71 E4 L1.9 8.82 41.46 -3.97
Sombrero/M104/NGC 4594^1 19.6±0.3 1090.9±5.1 11.18 Sa L2 8.83 40.04 -5.66
NGC 7319 99.8±7.0 6747.4±3.6 11.07 SBbc Sy2 8.10 42.17 -1.67
Distances: NGC 1052 – <cit.>, Sombrero – <cit.>; for NGC 7319, the distance is a flow-corrected redshift based distance assuming H_0 = 67.8 from the NASA Extragalactic Database. Systemic Velocities V_sys: are NASA Extragalactic database heliocentric velocities taken from <cit.> for NGC 1052, <cit.> for Sombrero, and <cit.> for NGC 7319. Galaxy Mass: NGC 1052 & Sombrero from S4G <cit.> with Sombrero corrected to the distance used here; for NGC 7319, we use <cit.> and assume M/L_K = 0.6. Morphological Type: from <cit.>, AGN Type: NGC 1052 and Sombrero from <cit.>, NGC 7319 from <cit.>. BH Mass: NGC 1052 & NGC 7319 based on velocity dispersion <cit.>, Sombrero from <cit.>. AGN X-ray Luminosity: 2-10 keV luminosities for NGC 1052 & NGC 7319 from <cit.>, Sombrero from <cit.> using updated distance. Eddington Ratio: NGC 1052 and Sombrero from <cit.> using listed distances and BH masses, NGC 7319 from <cit.>.
1We adopt “Sombrero” for the galaxy's name in this work.
§ DATA REDUCTION AND METHODS
§.§ Targets and Data Acquisition
We use JWST MIRI/MRS <cit.> to collect IFS data for our ReveaLLAGN targets in the mid-IR (4.9–27.9 μm). The full mid-IR wavelength range for MIRI/MRS is covered by 4 different channels (ch1–4):
ch1 (4.9–7.65 μm) and ch2 (7.51–11.71 μm) use the MIRIFU_SHORT Detector, while ch3 (11.55–17.98 μm) and ch4 (17.71–27.9 μm) use the MIRIFU_LONG Detector. Each channel has an increasing field of view (FoV): ch1 (3.2″ × 3.7″), ch2 (4.0″ × 4.8″), ch3 (5.2″ × 6.2″), and ch4 (6.6″ × 7.7″), and pixel size: ch1 (0.196″), ch2 (0.196″), ch3 (0.245″), ch4 (0.273″). All observations were taken using all three MIRI/MRS sub-channels.
We describe the observational details for our two ReveaLLAGN targets; details on the NGC 7319 observation are discussed in <cit.>. Our Sombrero observations are centered at RA: 12:39:59.430 DEC: -11:37:22.99; this is taken from Gaia EDR3 <cit.>. Our NGC 1052 observations are centered at RA: 02:41:04.798, DEC: -08:15:20.75 taken from very-long-baseline interferometry measurements of the AGN <cit.>.
Background exposures were taken using offset blank fields selected based on WISE 12 μm imaging: for Sombrero this field was at RA: 12:39:55.9810, DEC: -11:32:11.44 and for NGC 1052 at RA: 02:41:5.1200, DEC: -08:12:37.70.
Our MIRI/MRS measurements were taken using the 4-Point, Extended Source optimized ALL-channel dither pattern using the inverted, or negative, dither orientation[<https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-operations/miri-dithering/miri-mrs-dithering>]. This ensures improved sampling of the point spread function (PSF) at all wavelengths and allows the correction of hot detector pixels. The exposure time for both Sombrero and NGC 1052 was 921.313 seconds split over four dithers for each sub-channel setting. Background exposures used a single dither position with an exposure length of 230.328 seconds for each sub-channel setting. The Sombrero data were among the first science data taken with JWST on July 4th, 2022, while the NGC1052 data were taken on August 11th, 2022.
§.§ Data Reduction
We process the raw observations for Sombrero, NGC 1052, and NGC 7319 through version 1.8.2 of the JWST pipeline using jwst_0989.pmap, which is a versioned reference file that gives overall context for the pipeline. Calibration of our data is divided into three main stages of processing; the , , and pipelines.
The pipeline takes the raw counts from the detector, applies basic detector-level corrections to all exposures, and creates uncalibrated countrate images, or lvl2a data products[ See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview/stages-of-jwst-data-processing/calwebb_detector1calwebb_detector1 documentation for more information. ]. The pipeline takes the lvl2a products and applies additional instrumental corrections and calibrations to produce a fully calibrated individual exposure, or lvl2b data products. For MIRI/MRS observations, this stage includes adding WCS information, flat field corrections, and stray light subtraction. We include an optional fringing removal[See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview/stages-of-jwst-data-processing/calwebb_spec2calwebb_spec2 documentation for more information ] step during this stage to address the significant fringes found in the MIRI/IFU data. The pipeline processes lvl2b spectroscopic observations into lvl3 data by combining calibrated lvl2b data from associated dithered exposures into a 3-D spectral cube or 2-D extracted spectra. For MIRI/MRS data the master background subtraction and outlier detection occurs in this stage as well. We choose a final product of 4 data cubes, one for each channel[See https://jwst-docs.stsci.edu/jwst-science-calibration-pipeline-overview/stages-of-jwst-data-processing/calwebb_spec3calwebb_spec3 documentation for more information. ]. The wavelength solution, FLT-4, associated with our pipeline version has a 1σ wavelength calibration error of 10-30 km s^-1 <cit.> through the MRS wavelength range.
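A minimal sketch of this three-stage reduction using the jwst package (v1.8.2); the CRDS context is the one quoted above, but the file names, association file, and output directories are placeholders rather than the actual observation products.

import os
from jwst.pipeline import Detector1Pipeline, Spec2Pipeline, Spec3Pipeline

os.environ["CRDS_CONTEXT"] = "jwst_0989.pmap"

# Stage 1: raw ramps -> uncalibrated countrate (lvl2a) images
Detector1Pipeline.call("obs_mirifushort_uncal.fits",
                       save_results=True, output_dir="stage1")

# Stage 2: lvl2a -> calibrated exposures (lvl2b), with the optional
# residual-fringe correction switched on for MIRI/MRS
Spec2Pipeline.call("stage1/obs_mirifushort_rate.fits",
                   steps={"residual_fringe": {"skip": False}},
                   save_results=True, output_dir="stage2")

# Stage 3: combine the dithered lvl2b exposures (listed in an association
# file) into spectral cubes, one cube per channel, including master
# background subtraction and outlier detection
Spec3Pipeline.call("asn_ch1.json",
                   steps={"cube_build": {"output_type": "channel"}},
                   save_results=True, output_dir="stage3")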
§ SPECTRAL EXTRACTION AND METHODS
§.§ Nuclear Spectra Extraction
Nuclear spectra were extracted using the python package's aperture photometry code. At each wavelength, we used a photometric aperture centroided on the median flux image of each channel. The width of this aperture depended on wavelength to account for the changing PSF, with an angular radius of 1.22λ/(6.5 meters) – roughly 1 spatial FWHM (FWHM_ Rayleigh); this aperture radius ranges from 019 at 5 μm to 097 at 25 μm. At the shortest wavelengths, the 019 aperture radius corresponds to 8.8, 17.9, and 92 pc in Sombrero, NGC 1052, and NGC 7319, respectively. Background subtraction was done using an annulus with radii between 2 and 2.5× this value.
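A minimal sketch of this wavelength-dependent extraction, assuming photutils; the cube array, wavelength grid, centroid position, and pixel scale are placeholders standing in for one MIRI/MRS channel cube and its WCS.

import numpy as np
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                ApertureStats, aperture_photometry)

def extract_nuclear_spectrum(cube, waves_um, center_xy, pixscale_arcsec):
    """Aperture photometry with r = 1.22*lambda/D (D = 6.5 m) at each slice."""
    spectrum = np.zeros(len(waves_um))
    for i, lam in enumerate(waves_um):
        r_arcsec = np.degrees(1.22 * lam * 1e-6 / 6.5) * 3600.0  # ~1 FWHM_Rayleigh
        r_pix = r_arcsec / pixscale_arcsec
        aper = CircularAperture(center_xy, r=r_pix)
        annulus = CircularAnnulus(center_xy, r_in=2.0 * r_pix, r_out=2.5 * r_pix)
        bkg = ApertureStats(cube[i], annulus).median       # local background level
        phot = aperture_photometry(cube[i], aper)
        spectrum[i] = phot["aperture_sum"][0] - bkg * aper.area
    return spectrum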
We created a wavelength-dependent aperture correction based on the MIRI data cube of 10 Lac (obtained from Argyriou, I., private communication). This aperture correction (total/aperture flux) was derived using the same aperture and background annulus as for our galaxy nuclei, with the total flux obtained by integrating the flux of the full data cube. Due to residual sky background issues, we took the median flux of pixels with a radius greater than 6× FWHM_ Rayleigh as a background subtraction in each spaxel before calculating the total flux of the data cube at each wavelength. To create a smooth relation, we smoothed the derived aperture correction at each wavelength with a moving median. We compared this smoothed aperture correction to several other point source observations (HD192163 and HD76534) as well as NGC 1052, which is nearly point-like at longer wavelengths, and found generally good agreement (to within ∼10%) in the aperture corrections between sources for channels 1-3, with much poorer agreement due to noisier measurements in channel 4. The aperture correction declines from values of ∼2.1 at 5 μm to values similar to the prediction (1.4). We therefore fit a 5th order polynomial to our smoothed correction in channels 1-3, and set the ch4 correction to a constant 1.4 value. This aperture correction has been applied throughout this paper.
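A small sketch of the smoothing-and-fit step just described; the wavelength grid, raw correction curve, and window size are placeholders for the calibrator-star measurement.

import numpy as np
from scipy.ndimage import median_filter

def aperture_correction(waves_um, apcorr_raw, ch4_start=17.71, window=51):
    """Moving-median smooth, 5th-order polynomial in ch1-3, constant 1.4 in ch4."""
    smooth = median_filter(apcorr_raw, size=window)
    in_ch123 = waves_um < ch4_start
    coeffs = np.polyfit(waves_um[in_ch123], smooth[in_ch123], deg=5)
    return np.where(in_ch123, np.polyval(coeffs, waves_um), 1.4)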
§.§ Measuring Emission Features
§.§.§ Multi-Gaussian Fitting of the Nuclear Spectrum
Our nuclear spectra are very high S/N with clear evidence of many emission lines. These lines often show complex profiles – to extract both flux and velocity information from these lines, we perform multi-Gaussian fits. We first define continuum and fitting windows for each line based on visual inspection – our default fitting window is based on a velocity width of 5000 kms^-1. We fit a linear function to the continuum on either side of the emission feature and subtract the result from the data. Next, we utilize the lmfit python package to fit both a single Gaussian and multi-Gaussian model to the continuum-subtracted emission line. We allow the multi-Gaussian model to consist of up to five components, where each Gaussian component is constrained by the width of the wavelength-dependent MIRI instrument LSF and the results of the initial single-Gaussian fits. We select the model with the lowest Bayesian information criterion (BIC) as the best-fit model.
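A minimal sketch of this multi-Gaussian fitting with BIC model selection, assuming lmfit; the function name, initial guesses, and the simple center offsets between components are illustrative choices rather than the exact setup used here.

import numpy as np
from lmfit.models import GaussianModel

def fit_line(wave, flux, line_center, lsf_sigma, max_components=5):
    """Fit 1..max_components Gaussians; keep the model with the lowest BIC."""
    best = None
    for ncomp in range(1, max_components + 1):
        model, params = None, None
        for k in range(ncomp):
            g = GaussianModel(prefix=f"g{k}_")
            p = g.make_params(center=line_center + (k - ncomp / 2.0) * lsf_sigma,
                              sigma=2.0 * lsf_sigma,
                              amplitude=np.max(flux))
            p[f"g{k}_sigma"].set(min=lsf_sigma)   # no component narrower than the LSF
            if model is None:
                model, params = g, p
            else:
                model = model + g
                params.update(p)
        result = model.fit(flux, params, x=wave)
        if best is None or result.bic < best.bic:
            best = result
    return best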
An example fit to is shown in the left panel of Figure <ref>. We do not ascribe any physical interpretation to the individual Gaussian components, instead, we use them to accurately describe the emission-line profile from which we measure the flux, peak velocity, and FWHM_ model. The FWHM_ line of each emission-line is corrected for the width of the MIRI/MRS line spread function (LSF) at the corresponding wavelength, given by
FWHM_ line = √( FWHM_ model^2 - FWHM_ LSF^2)
We use the MIRI MRS LSF width given by <cit.>: FWHM_ LSF = c/R, where c is the speed of light, and R = 4603 - 128λ.
Errors on derived quantities are determined from a Monte Carlo (MC) simulation with Gaussian noise added to each pixel based on the standard deviation of the pixels in the continuum windows. The median standard deviation in the continuum pixels is ∼4× the formal flux errors provided by the pipeline. Emission-line detections are determined if the integrated flux of the best single-Gaussian emission-line model is above a 5σ threshold. 5σ upper limits are provided for lines without clear detections. We adopt a lower limit on errors for any wavelength dependent measurement equal to the wavelength calibration error of 30 km s^-1 provided in <ref>. The derived line properties and their associated errors are given in Table <ref>.
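A short sketch of the LSF correction and the Monte Carlo flux-error estimate described above; fit_func stands for any line-fitting routine (for example the multi-Gaussian fit sketched earlier) that returns the best-fit model evaluated on the wavelength grid, and the noise level would be the standard deviation of the continuum windows.

import numpy as np

C_KMS = 2.99792458e5

def lsf_fwhm_kms(lam_um):
    """MRS resolving power R = 4603 - 128*lambda, so FWHM_LSF = c/R."""
    return C_KMS / (4603.0 - 128.0 * lam_um)

def corrected_fwhm_kms(fwhm_model_kms, lam_um):
    return np.sqrt(fwhm_model_kms**2 - lsf_fwhm_kms(lam_um)**2)

def mc_flux_error(wave, flux, sigma_cont, fit_func, ntrial=200, seed=0):
    """Refit noise-perturbed spectra; fit_func(wave, flux) -> model flux array."""
    rng = np.random.default_rng(seed)
    fluxes = []
    for _ in range(ntrial):
        noisy = flux + rng.normal(0.0, sigma_cont, size=flux.size)
        fluxes.append(np.trapz(fit_func(wave, noisy), wave))
    return np.std(fluxes)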
Two key lines of interest for tracing AGN activity are the high-IP lines (IP > 50 eV) [Ne V]λ14.32μm and [O IV]λ25.89μm.
However in both our ReveaLLAGN targets, these lines are each blended with a neighboring low-IP line (IP < 20 eV). Specifically, [Ne V] is blended with the [Cl2]λ14.36 μm emission line, while [O IV] is blended with the [Fe2]λ25.98 μm emission line. We deblend the features using a constrained multi-Gaussian model; the low-IP component is fixed to be a scaled version of the [Fe2]λ5.34 μm line (Figure <ref>), an isolated low-IP line with high signal-to-noise. We then allow lmfit to fit the [Ne V] and [O IV] emission with a single Gaussian component. To capture the full uncertainty of this measurement we fit the [Fe2]λ5.34 μm line in each iteration of the MC process before constraining the [Ne V] and [O IV] models.
§.§.§ Constructing Emission Line Maps
Outside the nucleus, many lines have low signal-to-noise ratios, making the multi-Gaussian method we use for the nuclear spectrum less robust. We therefore simplify the Gaussian fitting process used for the nuclear spectra described above by limiting the Gaussian model to a single Gaussian component. The emission-line flux is calculated by measuring the area under the best-fit Gaussian model, while velocity is determined by calculating the displacement between the centroid of the best-fit Gaussian model and the rest wavelength of the emission line. For the blended high-IP features (e.g. Fig. <ref>, right), we attempted to deblend them pixel-by-pixel using two-Gaussian fits, but found no significant detection of the [Ne V] and [O IV] emission beyond the central few spaxels due to a combination of low S/N and perhaps the nuclear concentration of these lines. We calculate errors on the flux and velocity using a Monte Carlo simulation as above, and use a 5σ detection threshold, below which we find our Gaussian fits don't characterize the data well. We discuss the resulting line maps in Section <ref>.
To investigate the ionizing mechanisms of our emission lines, we quantify the spatial extent of the emission region in our line maps by measuring the spatial FWHM (FWHM_ spat) of prominent emission lines. We do this by creating a contour at 50% of the peak flux and calculating 2× the median radius from the peak flux to the contour line.
We correct the measured FWHM_ spat for the MIRI/MRS PSF, which varies by a factor of five over the MIRI wavelength range. Using the FWHM of the MIRI/MRS PSF (FWHM_ MRS) taken from <cit.> we get:
FWHM_ spat,corr = √( FWHM_ spat^2 - FWHM_ MRS^2)
The results for this measurement are listed in Table <ref> and presented in Section <ref>, with discussion in <ref>.
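A minimal sketch of this spatial-FWHM measurement and PSF correction; the 2-D line-flux map, spaxel scale, and the use of a binary erosion to isolate the half-maximum boundary are assumptions of this sketch rather than the exact implementation.

import numpy as np
from scipy.ndimage import binary_erosion

def spatial_fwhm(linemap, pixscale_arcsec, fwhm_mrs_arcsec):
    """Return FWHM_spat and the PSF-corrected FWHM_spat,corr (arcsec)."""
    iy, ix = np.unravel_index(np.nanargmax(linemap), linemap.shape)
    inside = linemap >= 0.5 * linemap[iy, ix]          # half-maximum region
    edge = inside & ~binary_erosion(inside)            # boundary of that region
    yy, xx = np.indices(linemap.shape)
    r_edge = np.hypot(yy[edge] - iy, xx[edge] - ix)
    fwhm_spat = 2.0 * np.median(r_edge) * pixscale_arcsec
    quad = fwhm_spat**2 - fwhm_mrs_arcsec**2
    corrected = np.sqrt(quad) if quad > 0 else np.nan  # NaN if unresolved
    return fwhm_spat, corrected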
§ RESULTS
§.§ Nuclear Region Emission Line Analysis
§.§.§ Variations with Ionization Potential
In Figure <ref> we show the nuclear emission-line properties in our two ReveaLLAGN targets, as well as NGC 7319, ordered by their IP to search for systematic trends. The top panel shows the line luminosity and we find the most luminous detected lines in Sombrero and NGC 1052 are [Ne II]λ12.81μm followed by [Ne III]λ15.56μm, which have IPs of 21.56 and 40.96 eV respectively, while in NGC 7319 the [O IV]λ25.89μm line (IP=54.94 eV) is the most luminous line. More generally, NGC 7319 shows overall higher luminosity in all lines compared to Sombrero and NGC 1052, with the relative luminosity increasing for the higher IP lines.
The middle panel of Figure <ref> shows the FWHM_ line (see equation <ref>) of each line as a function of IP. These FWHM_ line values are derived from the best-fit multi-Gaussian model to the nuclear emission lines (see Section <ref>). The red and blue dashed lines represent arcsecond-level central velocity dispersions for Sombrero and NGC 1052 <cit.> translated to a FWHM. The emission lines in Sombrero and NGC 1052 are visibly broader than those in NGC 7319 (as can be seen in Figure <ref>). Specifically, in NGC 7319 the lines have FWHM_ line∼200 kms^-1 regardless of IP. Meanwhile in Sombrero and NGC 1052, all detected lines are significantly wider, where the broadest lines have FWHM_ line≳1000 kms^-1. A clear trend is also seen with IP in Sombrero with the higher IP lines having significantly larger FWHM_ line values. A similar trend is seen in NGC 1052 though with the [Ne6]λ7.65μm emission feature being notably narrower than other high-IP lines.
Nuclear Spectra Measurements
Galaxy Line Wavelengtha IPb Transition Flux Flux Err Peak Vel Peak Vel Err FWHM_ line FWHM_ line Err S/N Warning
μ m eV 10^-14erg s^-1 cm^-2 10^-14erg s^-1 cm^-2 km s^-1 km s^-1 km s^-1 km s^-1
Sombrero [Fe II] 5.340 7.90 ^4F_9/2-a ^6D_9/2 0.488 0.004 50 30 540 30 94.5 0
Sombrero H2 5.448 15.37 (12-10)O(9) 0.009 – – – – – 0.0 0
Sombrero [Mg VII] 5.504 186.76 ^3P_2-^3P_1 0.020 – – – – – 1.7 0
Sombrero H2 5.511 15.37 (0-0)S(7) 0.083 0.003 110 30 310 30 25.8 0
Sombrero [Mg V] 5.608 109.27 ^3P_1-^3P_2 0.038 0.004 -310 100 1580 110 7.9 0
Sombrero H2 6.109 15.37 (0-0)S(6) 0.053 0.005 20 30 320 160 12.5 0
Sombrero [Ni II] 6.636 7.64 ^2 D_3/2-^2 D_5/2 0.150 0.004 60 30 900 80 35.2 0
Sombrero [Fe II] 6.721 7.90 ^4F_9/2-a ^6D_7/2 0.033 0.003 60 30 620 50 9.5 0
Sombrero H2 6.909 15.37 (0-0)S(5) 0.159 0.003 100 30 400 30 49.9 0
Sombrero [Ar II] 6.985 15.76 ^2 P1/2-^2 P3/2 2.320 0.007 30 30 800 30 370.5 0
Sombrero [Na III] 7.318 47.29 ^2 P_1/2-^2 P_3/2 0.064 0.003 -20 30 1020 40 20.8 0
Sombrero H 7.458 13.60 Pfund-alpha 0.063 0.003 130 30 1130 30 18.5 0
Sombrero [Ne VI] 7.652 126.25 ^2P_3/2-^2P_1/2 0.037 0.010 -590 170 2140 550 6.7 0
Sombrero H2 8.026 15.37 (0-0)S(4) 0.053 0.001 50 30 440 30 32.7 0
Sombrero [Ar III] 8.991 27.63 ^3P_1-^3P_2 0.403 0.007 20 30 730 50 89.3 0
Sombrero [Fe VII] 9.527 124.98 ^3F_3-^3F_2 2.195 – – – – – 0.1 0
Sombrero H2 9.665 15.37 (0-0)S(3) 0.140 0.002 90 30 400 30 58.0 0
Sombrero [S IV] 10.510 34.86 ^2P_3/2-^2P_1/2 0.222 0.006 0 40 870 100 31.8 0
Sombrero H2 12.278 15.37 (0-0)S(2) 0.042 0.002 60 30 530 40 16.6 1
Sombrero H 12.367 13.60 Humph-alpha 0.050 0.004 -130 80 1670 190 15.7 1
Sombrero [Ne II] 12.814 21.56 ^2P_1/2-^2P_3/2 6.317 0.020 50 30 600 30 757.2 0
Sombrero [Ar V] 13.102 59.58 ^3P_1-^3P_0 0.004 – – – – – 4.0 0
Sombrero [Ne V] 14.322 97.19 ^3P_-^3P_1 0.080 0.004 -290 40 1690 140 32.7 0
Sombrero [Cl II] 14.368 12.97 ^3P_1-^3P_2 – – – – – – 13.3 3
Sombrero [Ne III] 15.555 40.96 ^3P_1-^3P_2 4.101 0.015 40 30 540 30 556.2 0
The complete table is presented in the online version of the Astrophysical Journal. Here we present the first few rows to show its form and content. The measured quantities provided here are derived from the multi-component Gaussian fits described in Section <ref>. We define the line as detected if the integrated flux of a best-fit single-Gaussian model has a S/N≥ 5; upper limits are provided for undetected emission lines. The “Warning” column identifies issues with the spectra (blended feature, bad pixel, etc). 0 - good fit; measurements reported. 1 - blended/possibly blended features based on visual inspection; measurements reported. 2 - unacceptable spectra quality; no measurements to report. 3 - no measurements to report due to deblending procedure (Section <ref>, Figure <ref>).
aRest wavelengths from https://physics.nist.gov/PhysRefData/ASD/lines_form.htmlNIST.
bIonization potential energy from https://physics.nist.gov/PhysRefData/ASD/ionEnergy.htmlNIST.
Finally, the bottom panel of Figure <ref> shows the peak velocity of the emission lines as a function of IP. The peak velocity is measured from our best-fit multi-Gaussian models and we see distinct differences between the galaxies here. For NGC 7319, the peak velocities are quite close to zero at all IPs, with some slightly blue-shifted lines (∼50 kms^-1) at intermediate IPs. The exception is the [O4] line, which shows a significant blue-shift. We caution that this line is one of the longest wavelength lines we have; the wavelength calibration is less accurate at long wavelengths, but is still estimated to be <30 kms^-1 by <cit.>; this line is also among the most blue-shifted lines in Sombrero and NGC 1052.
For Sombrero, the high-IP lines are almost all significantly blueshifted (greater than 3σ from zero), while the lower IP lines and H_2 lines show a slight redshift. The redshift of the H_2 lines in Sombrero (median Peak Velocity of 56 km s^-1) may indicate that our systemic velocity taken from HI measurements <cit.> is offset; if this were the case most of the low- and mid-IP lines would show a modest blue-shift with a general trend of larger blue-shift with higher IP. In NGC 1052, the blueshift in the highest IP lines is weaker, but there is also a sign of blue-shifted emission even at lower IP. The blue-shifted emission could be due to outflows, which we discuss in detail in Section <ref>.
§.§.§ Detailed Nuclear Line Profiles
The high spectral resolution of JWST lets us resolve line widths and look at the detailed shapes of emission lines. Above we found that the high-IP lines show broad, often blue-shifted emission lines, and here we look in more detail at the shapes of the lines with the highest signal-to-noise ratios (S/N > 50). Figure <ref> shows these lines in each galaxy centered on their expected velocity.
Looking at each galaxy, these strong lines show remarkably consistent line profiles between different lines suggesting a common physical origin. However, significant differences are seen between galaxies, with Sombrero having a notably asymmetric line profile with blue wings reaching >1000 kms^-1, while NGC 1052 and NGC 7319 show more symmetric lines. The strong asymmetry in Sombrero likely indicates the presence of an outflow, which we will discuss in more detail in Section <ref>. The narrower lines in NGC 7319 relative to the other two galaxies are clearly visible as well. We note that the highest IP lines in NGC 1052 and Sombrero do not have high enough S/N to examine their line profiles in detail (and a couple of lines also suffer from blending issues).
§.§ 2-D Emission Line Information: Line Maps & FWHM
§.§.§ Flux and Velocity Maps
Figure <ref> shows flux and velocity maps for three lines in both Sombrero and NGC 1052.
These are created using the single Gaussian fitting method described in Section <ref>. Three lines are shown for each galaxy; a relatively strong molecular hydrogen line at 9.66 μm, the [Ar2] line at 6.98 μm (IP: 15.76 eV), and the [Ne3] line at 15.56 μm (IP: 40.96 eV). These three lines span a wide range of IP and critical densities and thus likely trace very different density gas <cit.>. The highest IP lines are unresolved, and therefore compact, showing detectable emission only in the central few pixels. Although we don't show velocity dispersion maps here, we discuss them below.
In the Sombrero galaxy, all three lines have similar morphologies, extended East-to-West with blue-shifted emission towards the West. The molecular hydrogen emission has no clear point-like emission and is red-shifted relative to the systemic velocity in the nuclear region; this redshift is also seen in several other H_2 and low IP lines in Sombrero (Figure <ref>). As discussed in the previous subsection, this may be due to the adopted systemic velocity for Sombrero. The velocity dispersion seen in molecular hydrogen emission maps is quite homogeneous with values up to 240 km s^-1, comparable to the measured nuclear stellar velocity dispersion <cit.>. Clear point-like emission is seen in both [Ar2]λ6.98μm and [Ne3]λ15.56μm; this emission appears to be more concentrated in [Ar2]λ6.98μm than [Ne3], however this may be due simply to the lower resolution at these wavelengths; we examine this in more detail below in Section <ref>. Filaments can be seen extending out to the North/West from the nuclear region in the flux map. The velocity maps of both ions shown are similar to the H_2, but show complex velocity fields e.g. a patch of blue-shifted emission ∼2 East of the nucleus. The velocity dispersion in [Ar2]λ6.98μm and [Ne3] both peak in the nuclear region with a maximum velocity of about 500 km s^-1.
In NGC 1052, the H_2 emission-line map differs significantly from the [Ar2]λ6.98μm and [Ne3]λ15.56μm emission. The H_2 emission-line flux maps have a weak peak in the nuclear region and extend north-east to south-west, similar in morphology to the CO gas seen with ALMA in <cit.>, which they interpret as a circumnuclear disk. The velocity maps of H_2 are blue-shifted in the north-east and red-shifted to the south and west. Velocity dispersion is patchy and peaks at ∼275 km s^-1, a bit higher than the <cit.> central stellar velocity dispersion of 215 km s^-1. The [Ar2]λ6.98μm and [Ne3] emission-line flux maps are strongly peaked in the nucleus and share a roughly concentric radial profile. Their velocity maps exhibit a heavily blue-shifted region directly east of the nuclear region and a heavily red-shifted region directly west with |ΔV| ∼ 300 kms^-1 – these regions are roughly aligned with the orientation of the compact radio jet <cit.>. The velocity in these heavily red- and blue-shifted regions increases up to 590 km s^-1.
§.§.§ Spatial FWHM Measurements
Following the methodology outlined in Section <ref>, we determine FWHM_ spat,corr, characterizing the PSF-corrected spatial extent, for six emission lines in Sombrero and four emission lines in NGC 1052. These lines are at low- and mid- IP and have sufficient signal-to-noise to enable the measurement. The FWHM_ MRS, FWHM_ spat and FWHM_ spat,corr measurements are provided in Table <ref>.
Overall, we find that the lines in NGC 1052 are either unresolved or just barely spatially resolved, with the [Ne3] line having the largest spatial extent (FWHM_ spat,corr = 030 or 28.2 pc). On the other hand, all the emission lines in Sombrero are spatially resolved, with FWHM_ spat,corr > 017 or 8 pc, and no clear trend with IP. We note that while FWHM_ spat,corr estimates were not possible for the high-IP coronal lines ([Ne V] and [O IV]), these lines do appear to be quite compact in both galaxies. In both galaxies, the [Ne3] emission is more extended than the [Ne2] emission, a somewhat surprising result that we discuss further in Section <ref>.
Spatial FWHM Measurements of the Resolved Emission Lines
Feature Rest Wavelength IP FWHM_MRS Sombrero NGC 1052
    FWHM_spat FWHM_spat,corr FWHM_spat FWHM_spat,corr
(μm) (eV) (arcsec) (arcsec) (arcsec) (pc) (arcsec) (arcsec) (pc)
[Fe II] 5.34 7.9 0.27 0.49 0.42 19.45 0.36 0.24 22.34
[Ar II] 6.99 15.76 0.31 0.35 0.17 7.87 0.33 0.12 10.35
[Ar III] 8.99 27.63 0.42 0.46 0.20 9.26 0.41 –^ –
[Ne II] 12.81 21.56 0.57 0.62 0.24 11.11 0.58 0.09 8.46
[Ne III] 15.56 40.96 0.63 0.70 0.31 14.35 0.69 0.30 28.22
[S III] 18.71 23.34 0.86 0.99 0.49 22.69 0.86 –^ –
The FWHM of the MRS PSF (FWHM_ MRS) is taken from <cit.>. We combine this with the measured spatial FWHM (FWHM_ spat) via Equation <ref> to calculate the corrected FWHM (FWHM_ spat,corr). We only report the lines that we were able to spatially resolve in at least one galaxy. See Section <ref> for details.
^* FWHM_ spat measurement unavailable.
^ Line is unresolved, FWHM_ spat < FWHM_ MRS.
§ DISCUSSION
In this section we present our results in the context of previous work. First, in section <ref>, we discuss the power of JWST in separating LLAGN from their host galaxies. Then in section <ref>, we compare the nuclear emission features from our LLAGN to AGNs of varying types, and end with section <ref> by discussing evidence for outflows seen in the LLAGN spectra.
§.§ The Promise of JWST for Revealing LLAGN
In Figure <ref> we show a comparison of the extracted nuclear spectrum (see Section <ref>) in Sombrero to both the integrated flux in the JWST data cube, and the Spitzer LR spectrum from the SINGS survey <cit.>. The integrated flux was calculated by summing all spaxels in each MIRI data cube. Since the FoV varies between each channel, we normalized the integrated spectrum to channel 4. In this channel the FoV measures 6.6×7.7 corresponding to a physical scale of 306×357 pc^2 at the distance of Sombrero. Note that the integrated spectrum is not shown at the longest wavelengths due to sky subtraction issues as discussed in <cit.>.
The nuclear emission clearly shows a SED that increases with wavelength, while the integrated data cube has a very different SED. Just ∼1% of the flux in the JWST integrated cube is coming from the nuclear component at 5 μm, while the nuclear component is >10% of the flux by 20 μm. This rising nuclear SED is consistent with two previous photometric measurements of Sombrero at high resolution (black points/line in Figure <ref>) and within the expectations of LLAGN spectra <cit.>. However, the information available in the nuclear spectrum is clearly far richer than was available with previous ground-based photometric measurements.
The two larger scale spectra from both Spitzer and our integrated JWST data in Figure <ref> show very different spectral shapes that are dominated by galaxy emission. The shape of these two spectra are in good agreement despite the different apertures suggesting a roughly constant SED for the galaxy component. Overall, the data show that even in Sombrero, the faintest target in the ReveaLLAGN survey, we can cleanly extract the LLAGN emission and separate it from its surrounding galaxy.
Although the primary goal of this paper is analysis of the emission lines in our ReveaLLAGN MIRI spectra, the continuum shape also encodes information on the emission mechanisms of these LLAGN. High angular resolution work on LLAGN has consistently shown jet dominated emission to follow a broken power-law continuum <cit.> which is consistent with self-absorbed synchrotron emission characteristic of compact jet emission <cit.>.
While Figure <ref> shows broad agreement with a single power-law fit from <cit.> over the MIRI wavelength range, there is also considerable complexity seen in the SEDs (Figure <ref>), with a clear inflection point in the Sombrero nuclear spectrum at 9 μm. We also see a gradual flattening of the spectrum at long wavelengths in NGC 1052, which is consistent with the turnover of the broken power law below 20 μm and the nuclear fluxes at lower frequencies <cit.>. The complexity of the continuum shapes we see in the MIRI spectra suggests additional information may be available from detailed fitting of the continuum that includes the contributions of broad silicate features (Fernández-Ontiveros et al., in prep).
§.§ The Emission Lines of LLAGN: Comparison to Previous Work
In this subsection, we focus on comparing the nuclear emission-line luminosities and ratios to previous measurements of typically much higher luminosity AGN.
Figure <ref> compares the luminosities of the two high-IP lines detected in all three galaxies, [Ne V]λ14.32μm and [O IV]λ25.89μm, to literature measurements primarily from Spitzer <cit.>.
We note that these data have much lower physical resolution than our nuclear JWST data, and thus contamination of the AGN spectra by galaxy light is likely significant in some cases, especially for lower-IP lines discussed below that are excited by sources other than the AGN. NGC 7319, as expected, has luminosities in both lines very typical of previously measured AGN, while Sombrero has the lowest luminosities of both lines compared to any previous measurements. While Sombrero and NGC 1052 stand out as being very low luminosity detections, they both follow the tight, nearly linear correlation between these two coronal lines that is seen across a wide range of AGNs <cit.>.
Comparing ionized states of a particular atom enables us to study the ionization structure within an AGN more clearly. In this regard, the mid-IR is particularly valuable as it contains multiple neon emission lines at different ionizations. In Figure <ref> we compare the flux values of [Ne II]λ12.81μm, [Ne III]λ15.56μm, and [Ne V]λ14.32μm from our sample to previous surveys. Comparing line fluxes (rather than luminosities) ensures that correlations seen are the result of excitation differences, and not caused by observing sources at a range of distances (which can create false correlations between line luminosities).
The left panel comparing [Ne5][For the rest of the discussion, we will refer to [Ne II], [Ne III] and [Ne V] as [Ne2], [Ne3] and [Ne5], respectively.] and [Ne3] shows a roughly linear correlation that gets tighter with increasing [Ne5] flux. Sombrero has significantly weaker [Ne5] than other sources with similar [Ne3] flux, and many of the lower luminosity sources including NGC 1052 also scatter towards fainter [Ne5] flux relative to the relation seen at higher line fluxes. Thus Sombrero is an outlier, but follows the qualitative trend of lower [Ne5] luminosity that is seen in other lower luminosity AGN. The middle panel comparing the flux of [Ne2] to [Ne5] shows similar results to the left panel, but with a much looser relation seen between the lines at high line fluxes. Finally, the right panel shows that the relative [Ne2] and [Ne3] fluxes fall within the range of previous measurements in all three galaxies. This suggests that these lower IP lines have values typical of higher luminosity AGN, and it is the [Ne5] line that is weaker than in other sources.
We combine the information on all three neon lines in Figure <ref>, which compares the ratios of [Ne5]/[Ne2] and [Ne3]/[Ne2]. The ratio of [Ne5] to [Ne2] has been employed as a diagnostic tool in IR spectra to assess the contribution of AGN activity <cit.>. Since [Ne5] can only be formed through AGN processes, while [Ne2] can arise from both AGN and non-AGN mechanisms, this ratio helps determine the presence and influence of AGN. We emphasize again, that the literature data here have low spatial resolution, and therefore any line emission in the central kiloparsecs of the galaxies contain significant contamination from the host galaxy.
NGC 1052 and especially Sombrero fall well below the main trend line found in Figure <ref> and into a region only populated with upper limits of [Ne5] from other surveys.
We can get a sense of the level of galaxy contamination in our own JWST spectra by comparing the extent of emission features with different IP, and in Section <ref> we find that the FWHM_ spat,corr of the [Ne2] and [Ne3] emission lines are quite compact. We would expect [Ne2] to be more spatially extended than higher IP lines, including [Ne3], since [Ne2] lines come predominantly from star formation. This is not what we find in either source; in fact [Ne2] is found to be more compact than [Ne3] in both NGC 1052 and Sombrero. The fact that [Ne2] emission is compact doesn't strictly mean that it comes from the AGN; it could simply mean that any star formation is also compact/unresolved. While <cit.> reports the presence of extended Hα emission perpendicular to the jet in Sombrero, which may be associated with star formation, they find no conclusive evidence of star formation, from UV to IR, within parsecs of the center of Sombrero, nor in NGC 1052 <cit.>. A lack of excitation from star formation is consistent with the absence of any PAH emission in the nuclear spectra of NGC 1052 and only a weak PAH signature at 11.3 μm in Sombrero (Fig. <ref>). This lack of evidence for star formation suggests that the nuclear line ratios from our targets (Figure <ref>) are not significantly contaminated by emission from star formation, and that the outlier status of our two galaxies is the result of very low luminosity detections of [Ne5] made possible by the spatial and spectral resolution of JWST. The differences we see then in Figure <ref> are due to excitation differences from the AGN accretion structure. This difference can be explained by either a change in SED or very low ionization parameters that result in a deficiency of the high energy photons (≳100 eV) needed to excite the line. This conclusion is consistent with previous work on LLAGN <cit.> including photoionisation models for compact jet synchrotron emission <cit.>, shock excitation models <cit.>, and the expectations of a central engine with advection dominated accretion flows <cit.>. We will be able to test this result and compare this to models for AGN ionization once the full ReveaLLAGN sample is available (Fernández-Ontiveros et al., in prep).
§.§ Outflows in NGC 1052 and Sombrero
In Section <ref>, we identify the following emission-line features in NGC 1052 and Sombrero:
* an increase in line widths with IP
* an increase in blue-shifted emission with IP
* broad emission in the weakly-detected high-IP and coronal lines, and
* prominent blue wings in the high signal-to-noise lines of Sombrero.
The trend of increasing line width with IP was originally attributed to cloud stratification–the coronal lines are emitted from denser clouds closer to the central engine which are subject to more intense ionizing flux <cit.>. Recent work has confirmed that many Seyfert galaxies, regardless of brightness or AGN type, show an increase in both line FWHM and line blue-shifting with increasing IP <cit.>. Furthermore, there are known correlations between blue-shifted emission and both increasing IP in coronal lines and increasing line width in the [O3] line in narrow-line Seyfert 1 galaxies <cit.>. While there is clear evidence that coronal-line emission and their profiles are driven mainly by photoionization from the AGN <cit.>, other work has demonstrated that outflows are needed to fully explain the observed emission <cit.>. In fact, the blue-shifted emission even at mid-IPs could trace out-flowing material closer to the AGN than the narrower emission, with the line asymmetry being caused by red-shifted emission being absorbed along the line-of-sight <cit.>.
Given the known importance of outflows and shocked emission in LINERs <cit.>, we conclude that the emission-line features identified above are indicators of outflows for both Sombrero and NGC 1052. We discuss other evidence and the possible origins of the outflows in NGC 1052 and Sombrero below.
§.§.§ Previous Evidence of Outflows in NGC 1052
Previous work has demonstrated the presence of AGN-related outflows in NGC 1052 on multiple spatial scales. Optical IFS studies of NGC 1052 show evidence for an outflow from the AGN on larger scales <cit.>. The outflow is roughly aligned with the radio jet <cit.>, with a PA of ∼70^∘ and is generally in good agreement with the velocity structures seen in Figure <ref>. These studies also find a broad Hα and Hβ component with a width of ∼3000 kms^-1; this is significantly broader than the widths of the mid and high-IP lines we see here.
Similarly, on much smaller spatial scales, <cit.>, <cit.> and <cit.> found evidence for outflows in HST data. Both <cit.> and <cit.> found evidence for strong outflows as well as ionized regions associated with jet-like features. Meanwhile, <cit.> demonstrated that shocked emission likely originating from these outflows is the dominant power source at just ∼20 pc outside of the galaxy center. Similar to <cit.>, <cit.> and this work, <cit.> found that the shock-dominated, off-nuclear emission lines had widths consistent with v≲500 km s^-1. They also found broad Hα and Hβ emission in the unresolved AGN spectrum, with FWHM∼10^3 km s^-1. We note that a majority of the emission seen in <cit.> lies within the JWST nuclear aperture used in this work.
§.§.§ Previous Evidence of Outflows in Sombrero
Given the low accretion rate and the presence of a small-scale radio jet, Sombrero likely has strong radio outflows <cit.>. In fact, <cit.> determined that while Sombrero has organized motion within the central 05 consistent with an overall rotation pattern, there are significant irregularities that could be caused by outflows. <cit.> also found evidence of turbulent motion via spiral-like wisps in the narrow-band Hα+[N2] imaging. <cit.> further identified a strong velocity gradient near the galaxy center, and noted that the kinematics of the gas within the central 1 was decoupled from the gas in the spiral wisps. These East-West oriented wisps are not well-aligned with the inner radio jet described by <cit.> and <cit.>, which runs along the North-South axis and is oriented towards our line of sight. We note that the presence of broad Hα is unclear, with two analyses of the same HST spectra coming to different conclusions <cit.>. <cit.> found that the near-infrared SED appears to be similar to that of other type 2 LINERs, and <cit.> and <cit.> also found evidence for larger-scale outflows in Sombrero using radio and X-ray data, respectively.
§.§.§ Origins of Outflows
Here we consider two possible models for the outflows seen in NGC 1052 and Sombrero. We note that radiation pressure-driven outflows do not significantly contribute to the outflows seen in LLAGN <cit.>, and therefore we do not discuss them below. As a reminder, we note that both of these objects are classified as LINERs with low Eddington ratios given in Table <ref>. They also both have detected small-scale radio jets <cit.>.
Winds Launched from the RIAFs:
Unlike traditional cold, thin-disk models, RIAFs occur when the accretion rate is sufficiently low that the inner disk puffs up and becomes a hot, advection-dominated accretion flow <cit.>. Previous empirical studies showed that radio outflows from AGNs, including those with thin-disk accretion flows and RIAFs, increase in strength as the accretion rate decreases <cit.>. RIAFs extending to large scales can eliminate broad line emission <cit.> and the “big blue bump” associated with thin-disk accretion <cit.>; the corresponding lack of UV emission and broad line features in most LINER AGN <cit.> suggests they may be powered by RIAFs.
The strong wind along the polar or jet direction in RIAFs that was predicted by magnetohydrodynamical numerical simulations <cit.> has been observationally confirmed in recent years <cit.>. These energetic winds originate in the coronal region of the accretion flow, implying that higher-IP lines would experience more intense outflows, and thus likely have larger widths, consistent with the findings presented in Section <ref>. Given their low accretion rates (see Table <ref>), the absence of the “big blue bump” in both of their SEDs <cit.>, and the lack of clear broad Hα emission in Sombrero <cit.>, it is likely that both NGC 1052 and Sombrero are powered by a RIAF. Therefore, we conclude that the energetic winds driven by the hot accretion flows in both LLAGNs likely contribute to the observed emission. However, we note that by their nature RIAFs do drive radio jets, and as such these winds may not be the sole explanation for the observed outflows.
Jet-Driven Outflows:
Jets associated with AGN accretion are known to drive outflows that create shocked emission and can regulate the star-formation rate in the galaxy <cit.>. In fact, while we did not find any trends with IP in the nuclear spectra of NGC 7319, <cit.> found that high-IP coronal-line emission is detected close to the hot spots of the known radio jet, which they conclude indicates the presence of a jet-driven outflow.
Due to their less luminous, lower-accretion rate engines, the shocked emission driven by jets or outflows can often dominate over photoionization at small distances from the nuclei in LLAGNs <cit.>. Furthermore subparsec-scale radio jets occur more frequently in LINERs <cit.>, which could further indicate the presence of jet-driven outflows.
Recent work by <cit.> demonstrated that small-scale jets can produce large widths even in mid-IP lines like [O3] λ5007, similar to the widths seen in our mid-IP lines studied here. They also conclude that similar widths can be seen in the different gas phases of the ISM, which appears to be somewhat qualitatively true for NGC 1052–the observed positive correlation between IP and FWHM in NGC 1052 in Figure <ref> is much less pronounced than that in Sombrero. Furthermore, both <cit.> and <cit.> found evidence that the jet in NGC 1052 was interacting with the circumnuclear gas.
In both the RIAF- and jet-driven wind scenario, the orientation of the jet should impact the observable signatures. In Sombrero, modeling of VLBI data suggests the inner jet is oriented close to our line-of-sight <cit.>, while in NGC 1052, the jet is oriented more in the plane of the sky <cit.>. This difference in jet orientation may be the reason that only Sombrero shows the blue-shifted emission in its nuclear spectrum, while the ionized emission-line maps in NGC 1052 show strong blue- and red-shifts oriented close to the jet axis (Figure <ref>). However, since both RIAF- and jet-driven winds will result in an outflow in the jet direction, a combination of SED modeling on the smallest scales with emission-line analysis like that presented here is likely required to resolve what drives the outflows in LLAGN.
§ CONCLUSIONS
This paper features the first observations of the ReveaLLAGN survey, a JWST project to characterize seven nearby LLAGN. We present MIRI/MRS data of the least and most luminous targets in our sample, Sombrero and NGC 1052. We compare these data to data of NGC 7319, a higher luminosity AGN. We characterize the numerous emission lines seen in the nuclear spectrum
and create line maps across the MRS field of view for stronger lines.
We find the following results:
* The resolution and sensitivity of JWST allows us to cleanly separate the AGN continuum and emission lines from the surrounding galaxy even in our least luminous target, Sombrero.
* The ionized emission lines in both Sombrero and NGC 1052 are broad, and have widths that increase with increasing IP reaching FWHM>1000kms^-1. The highest IP lines (IP > 50 eV) show blue-shifted peak velocities with a median velocity of -423 km s^-1 seen in Sombrero and -186 km s^-1 in NGC 1052.
* The highest signal-to-noise ionic lines in Sombrero show a clear blue wing extending >1000kms^-1 from the peak emission.
* Sombrero has the lowest luminosity high-IP lines ([O4] and [Ne5]) yet detected in any source. NGC 1052 also shows low luminosity in both these lines, and the relative luminosity of these lines follows the relation seen in more luminous AGN.
* The [Ne5] emission is weak relative to the [Ne2] and [Ne3] emission as compared to previously measured AGN. This does not appear to be due to galaxy contamination, and thus likely indicates a deficiency of high energy ionizing photons in these LLAGN.
Our full ReveaLLAGN dataset will include observations of seven nearby LLAGN with both the NIRSpec IFU and MIRI/MRS. We will present the nuclear spectra of these in an upcoming paper (Seth et al., in prep), as well as an analysis of their emission lines (Goold et al. in prep). We will also be modeling the continuum emission and emission lines from the ReveaLLAGN sample (Fernández-Ontiveros et al. in prep). The ReveaLLAGN spectra will be valuable in both identifying the unique features of LLAGN, and revealing the nature of the central engine in LLAGN.
We thank Ioannis Argyriou for his helpful suggestions and willingness to share data. KG, AS, and DO acknowledge support from JWST Cycle 1 grant GO-2016. We acknowledge the ERO team for developing their observing program with a zero-exclusive-access period. The work of MM is supported in part through a fellowship sponsored by the Willard L. Eccles Foundation. LCH was supported by the National Science Foundation of China (11721303, 11991052, 12011540375, 12233001), the National Key R&D Program of China (2022YFF0503401), and the China Manned Space Project (CMS-CSST-2021-A04, CMS-CSST-2021-A06).
JWST (MIRI/MRS)
astropy <cit.>, lmfit (<https://github.com/lmfit/lmfit-py>), jwst calibration pipeline v1.8.2 (<https://github.com/spacetelescope/jwst>)
aasjournal
|
http://arxiv.org/abs/2307.01811v1
|
20230704163337
|
Cold atom-ion systems in radiofrequency multipole traps: event-drive molecular dynamics and stochastic simulations
|
[
"Mateo Londoño",
"Javier Madroñero",
"Jesús Pérez-Ríos"
] |
physics.atom-ph
|
[
"physics.atom-ph"
] |
APS/123-QED
Centre for Bioinformatics and Photonics (CIBioFi), Universidad del Valle, Edificio E20 No. 1069, 760032 Cali, Colombia
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York 11794, USA
We have studied the general aspects of the dynamics of an ion trapped in an ideal multipolar radiofrequency trap while interacting with a dense cold atomic gas. In particular, we have explored the dynamical stability, the energy relaxation and the characteristic harmonic motion exhibited by a trapped Yb^+ ion in different multipolar potentials and immersed in various cold atomic samples (Li, Na, Rb, Yb). For this purpose, we used two different simulation approaches: one based on an event-driven molecular dynamics algorithm and the other based on the stochastic Langevin equation. Relevant values for experimental realizations, such as the associated ion's lifetimes and observable distributions, are presented along with some analytical expressions which relate the ion's dynamical properties with the trap parameters.
Cold atom-ion systems in radiofrequency multipole traps: event-drive molecular dynamics and stochastic simulations
Jesús Pérez-Ríos
August 1, 2023
===================================================================================================================
§ INTRODUCTION
The realization of cold hybrid atom-ion systems has revolutionized the field of atomic, molecular, and optical physics, leading to a new venue for studying impurity physics <cit.>, atom-ion collisions <cit.> and quantum information sciences <cit.>. However, most ion-atom systems require a time-dependent trap to hold the ion. At the same time, the ion is brought in contact with an atomic gas. As the atom approaches the ion, it is pulled away from the center of the trap, leading to the well-known micromotion heating <cit.>. This effect represents a problem for most applications in the cold and ultracold regime. For example, in quantum information sciences, micromotion heating can reduce the efficiency of sympathetic ion cooling, enhancing atom losses due to laser heating caused by successive gates <cit.>. Similarly, in cold chemistry, the time-dependent trap induces long-lived ion-atom complexes that could potentially affect the stability of the ion <cit.>.
One solution to curb micromotion heating is implementing radiofrequency higher-order multipole traps for ion confinement <cit.>. These traps create an almost boxlike trapping potential with a sizable flat potential or field-free region in the center, reducing the heating effects <cit.>. However, despite the advantage of lowering micromotion effects, the multipole trap has some weaknesses. For example, the trapping properties depend on the ion's average distance to the trap's center, leading to a stochastic stability parameter. Additionally, no analytical solution can be found to the equations of motion, and the numerical study of the collisional dynamics is very cumbersome <cit.>. Generally, the trapping stability is characterized by a molecular dynamics approach, leading to a partial understanding of thermalized ions' energy distribution and position as a function of the trap's nature <cit.>. On the other hand, a stochastic approach based on the Langevin equation has recently been developed for ions in a quadrupole trap <cit.>. The time-continuous nature of the stochastic approach allows for describing the relaxation process of the ion or spectral composition of the motion and the time-dependence of the resulting ion's distributions, not considered in molecular dynamics simulation <cit.>. In addition, the stochastic formulation typically results in shorter simulation times than elaborate molecular dynamics simulations. For example, as the trap order increases, the free field region of potential reduces micromotion heating, and the stochastic Langevin simulation approach will become highly efficient for the ion dynamics in a buffer gas for any mass ratio.
This work theoretically explores the dynamics of a single ion in a multipole trap in contact with an atomic gas. To this end, we use two different simulation methods. One is based on a novel event-driven molecular dynamics simulation method via sampling collisional times from realistic atom-ion collisions. The other relies on the Markovian Langevin equation. The first is highly efficient for studying the stability of atomic mixtures of interest at different temperatures and initial conditions. In contrast, the second, computationally cheaper, gives a precise understanding of the thermalization process. As a result, we can describe the ion stability in the discrete and continuous time domain, leading to new insights into the thermalization of the ion. The paper is divided as follows. Sections <ref> and <ref> focus on the dynamical stability and energy distributions of the trapped ions using the event-driven molecular dynamics. Section <ref> is devoted to the ion's dynamics using the Langevin equation, which is valid for multipolar traps where thermal behavior dominates the final distributions or, equivalently, where micromotion heating effects are reduced. Finally, Section <ref> summarizes the main results of the work and presents some perspectives.
§ TRAP DEPTH AND DYNAMIC STABILITY FOR A SINGLE ION
The dynamics of an ion in a radiofrequency (RF) multipole trap can be described using the adiabatic approximation (see appendix <ref>). Within this approximation, the slow secular motion is decoupled from the fast micromotion, and the stability of the ion is described with a single dynamic stability parameter [Note that there is an additional n in the definition compared to the parameter η in previous works <cit.>, this is a consequence of our definition of the parameter q as an independent value of n.]:
η = qn(n-1)( r/r_0)^n-2 ,
where n is the trap order, r_0 is the radius of the trap and q = 2eU_AC/m_ionr_0^2Ω_RF^2. In this equation, U_AC stands for the voltage on the electrodes, e is the atomic ion charge, m_ion represents the ion's mass and Ω_RF is the trap frequency. Therefore, the dynamic stability for a single ion in a trap with n>2 depends on the distance of the ion to the center of the trap, becoming a stochastic variable due to collisions with the gas. As a result, for a given maximum value of the dynamic stability parameter, η_max, there is always a distance of the ion at which the dynamics become unstable, given by
r_cri = r_0( η_max/qn(n-1))^1/(n-2),
where generally η_max=0.33 <cit.>.
Next, following Refs. <cit.>, we introduce the trapping volume as the region where no effective energy transfer occurs between the ion and the field, also known as the field-free region. This region is bounded by r_cri when r_cri < r_0, or by r_0 on the contrary, as illustrated in panel (a) Fig.<ref>. Hence, as the figure suggests, the effective trapping depth is a function of the RF frequency and the q-parameter, given by
V_depth(Ω;q) = m_ion(qnΩ)^2/16r_tr^2n-2/r_0^2n-4,
where r_tr = min (r_cri, r_0).
Due to the lack of stability diagrams in multipole traps, it is preferable to take the effective depth of the trapping potential as the reference to define stable trap parameters. Panel (b) of Fig. <ref> displays the effective depth of the trapping potential as a function of q for Ω_RF = 2π MHz, showing a maximum depth at r_cri = r_0, as expected by virtue of Eq. (<ref>). Similarly, we notice that low-order traps show a larger trap depth than higher-order traps, indicative of the stronger stability of low-order traps versus high-order ones. As a result, for a given Ω_RF, we choose the initial dynamical stability η based on the q-value where the maximum trap depth is observed. However, this choice will not guarantee the stability of the ion due to the inherent position-dependent stability. Hence, an approach based on the dynamics of the ion is required.
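A minimal sketch of the critical radius and effective trap depth defined above, in SI units; the trap order, q value, trap radius, and drive frequency in the example call are illustrative placeholders, not the parameters used in the figures.

import numpy as np

def r_critical(n, q, r0, eta_max=0.33):
    # critical distance at which the adiabatic stability parameter reaches eta_max
    # (valid for trap orders n > 2)
    return r0 * (eta_max / (q * n * (n - 1)))**(1.0 / (n - 2))

def trap_depth(n, q, r0, omega_rf, m_ion, eta_max=0.33):
    # effective depth with r_tr = min(r_cri, r0)
    r_tr = min(r_critical(n, q, r0, eta_max), r0)
    return m_ion * (q * n * omega_rf)**2 / 16.0 * r_tr**(2 * n - 2) / r0**(2 * n - 4)

# illustrative numbers only: a dodecapole (n = 6) trap of radius 3 mm driven at
# Omega_RF = 2*pi x 1 MHz, loaded with a 171Yb+ ion
m_yb = 171 * 1.66053906660e-27    # kg
kb = 1.380649e-23                 # J/K
depth = trap_depth(6, 0.05, 3e-3, 2 * np.pi * 1e6, m_yb)
print(f"effective trap depth ~ {depth / kb:.0f} K")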
§ EVENT-DRIVEN MOLECULAR DYNAMICS SIMULATIONS
Event-driven molecular dynamics simulations generalize the simulation approach of Zipkes et al. <cit.> to the case of multipole traps. In our approach, we assume instantaneous hard-sphere collisions with an energy-independent scattering rate proper of the Langevin model for ion-atom collisions. However, because of the lack of analytical expressions for the ion's motion in a multipolar potential, we propagate the equation of motion from one collision event to the next one. Additionally, we consider a homogeneous atomic density. The event-drive algorithm consists of the following steps:
* Initialization: We initialize the trap parameters (Ω_RF, q, r_0, n), the temperature and density of the atomic cloud (T, ρ), and the ion's initial conditions. Typically, we place the ion at a distance of 0.01 r_0 at rest, v_ion,0=0.
* Event time: Once the initial conditions for the experiment are set up, we compute the event-time (t_c) associated with an atom-ion collision. Since the collisions form a Poisson process, this time is sampled from an exponential distribution with mean value τ = 1/Γ_Lang., where the Langevin collision rate Γ_Lang. depends on the atomic density ρ, the atom-ion long-range coefficient C_4 and the atom-ion reduced mass μ as (in atomic units)
Γ_Lang. = 2πρ√(C_4/μ).
* Preparing the collision: With the event-time computed, we propagate the ion in the trap from the initial condition to the collision time. The propagation is carried out using a fourth order Runge-Kutta method to integrate the ion's equation of motion
m_ion d^2r(t)/dt^2 = F(r) = (eU_ACn/r_0^n)cos(Ω_RFt) r^n-1 e_RF,
where e_RF represents the unitary vector of the trapping force, which depends on the azimuthal angle ϕ. At the same time, we pick up an atom from the ensemble. The velocity of the atom is sampled from a Maxwell-Boltzmann distribution associated with the temperature T of the gas. If the final position of the ion satisfies r_ion < r_0, the collision takes place; if not, the propagation is stopped and the ion is counted as lost.
* Hard-sphere collision: If the collision takes place, the ion's velocity changes following a hard-sphere collision with the atom, going from the initial value v_ion,0 to
v_ion,f = (1-β)v_ion,0 + βℛ(θ, ϕ)v_ion,0,
with β = 1/(1+ζ), ζ = m_atom/m_ion, and ℛ(θ, ϕ) represents the rotation matrix depending on the collision angles θ and ϕ, which are sampled isotropically (uniform in cos θ and uniform in ϕ over [0,2π]), as dictated by the Langevin cross section. The position of the ion remains unchanged.
* Saving the observables: For the energy of the ion we compute the secular velocity given by
v_sec,f = (1-β)v_sec,0 + βℛv_sec,0 + β(ℛ-1)v_mm.
The micromotion component v_mm is computed using
v_mm ≈ -(qnΩ_RF/2)sin(Ω_RF t)(r_sec^n-1/r_0^n-2)e_RF,
and using r_sec = r_ion, the ion's position at the end of the propagation. Then, the new velocity and position are set as the initial conditions and we loop back to compute the next event-time.
The algorithm will finish when reaching the total number of collisions or when the ion gets lost from the trap.
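A compact, self-contained numerical sketch of this event-driven loop, restricted to the 2-D radial plane and written in SI units. The multipole field is implemented through the common complex form Φ ∝ Re[((x+iy)/r_0)^n], the collision update is the standard elastic hard-sphere expression through the center of mass (as in Zipkes et al.), and all numerical values (trap order, q, trap radius, temperature, collision rate) are illustrative placeholders rather than the parameters used in the text.

import numpy as np

AMU = 1.66053906660e-27   # kg
KB = 1.380649e-23         # J/K
QE = 1.602176634e-19      # C

def rf_force(pos, t, n, u_ac, r0, omega):
    # Force from the ideal 2n-pole potential Phi = U_AC cos(omega t) Re[((x+iy)/r0)^n]
    z = complex(pos[0], pos[1]) ** (n - 1)
    pref = QE * u_ac * n * np.cos(omega * t) / r0**n
    return pref * np.array([-z.real, z.imag])      # F = -e grad(Phi)

def rk4_step(pos, vel, t, dt, m_ion, trap):
    def acc(p, tt):
        return rf_force(p, tt, *trap) / m_ion
    k1x, k1v = vel, acc(pos, t)
    k2x, k2v = vel + 0.5 * dt * k1v, acc(pos + 0.5 * dt * k1x, t + 0.5 * dt)
    k3x, k3v = vel + 0.5 * dt * k2v, acc(pos + 0.5 * dt * k2x, t + 0.5 * dt)
    k4x, k4v = vel + dt * k3v, acc(pos + dt * k3x, t + dt)
    pos = pos + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    vel = vel + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return pos, vel

def simulate(n=6, q=0.05, r0=3e-3, omega=2 * np.pi * 1e6, m_ion=171 * AMU,
             m_atom=7 * AMU, temp=1e-3, gamma=2e3, n_coll=500, dt=1e-9, seed=0):
    """Return (survived, elapsed time); gamma is the collision rate in 1/s."""
    rng = np.random.default_rng(seed)
    u_ac = q * m_ion * r0**2 * omega**2 / (2.0 * QE)   # voltage implied by q
    trap = (n, u_ac, r0, omega)
    pos, vel, t = np.array([0.01 * r0, 0.0]), np.zeros(2), 0.0
    beta = m_atom / (m_ion + m_atom)
    for _ in range(n_coll):
        t_next = t + rng.exponential(1.0 / gamma)      # next collision event
        while t < t_next:                              # propagate to the event
            pos, vel = rk4_step(pos, vel, t, dt, m_ion, trap)
            t += dt
        if np.hypot(pos[0], pos[1]) >= r0:
            return False, t                            # ion lost from the trap
        v_atom = rng.normal(0.0, np.sqrt(KB * temp / m_atom), size=2)
        th = rng.uniform(0.0, 2.0 * np.pi)             # isotropic in the plane
        rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        # standard elastic hard-sphere update through the center of mass
        vel = (1 - beta) * vel + beta * v_atom + beta * rot @ (vel - v_atom)
    return True, t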
§ STABILITY OF A SINGLE ION IN A MULTIPOLE TRAP
We explore the stability of a Yb^+ ion immersed in different cold atomic baths using the event-driven molecular dynamics simulation approach. In particular, we are interested in some atomic species previously studied in the context of cold collisions <cit.>. In this study, stability is given as the survival probability of the ion once it has thermalized with the bath. Specifically, for each atom-ion mixture, we simulate 1000 sympathetic cooling experiments. Once the ion thermalizes, we let the ion evolve for 500 extra collisions with the bath atoms. Once the simulations are finished, stability is defined as the number of surviving trapped ions (N_s) over the number of experiments (N_Tot).
The results for our simulations are shown in Fig. <ref>, where the survival probability of the ion as a function of the trap order and initial η value is displayed. All the simulations are carried out at T=1×10^-3K and a sample density of
ρ=1×10^18m^-3. As the Figure shows, low-order traps offer high stability but are prone to RF heating. Therefore, a dodecapolar trap is the best choice based on stability and RF heating. Furthermore, we have computed the average number of collisions before the ion abandons the trap and its lifetime, as displayed in Table <ref>. This Table presents the lifetimes and the mean number of collisions for some of the previously explored unstable configurations. There is a remarkable variance associated with the lifetimes, showing the high sensitivity of the system to initial conditions as well as initial collisional events. The thermalization rate, reported for each mixture at different temperatures, is not affected by the trap properties. In general, from the data, it can be seen that most of the systems reach thermalization before the ion leaves the trap.
Based on our study, ion losses usually take place after thermalization. Hence, the time-average probability of losses after thermalization should satisfy
P̅_loss∝P̅(r|r_cri)P̅(v|v_cri),
where P̅(r|r_cri) is the time-average probability of finding the ion at the critical position given by Eq. (<ref>) and P̅(v|v_cri) is the probability of having the ion with a velocity larger than a critical value v_cri. In general, the form of the distributions depends on the multipolar order: for low-order traps the micromotion heating leads to long-tail behavior, which becomes more pronounced as the atom-to-ion mass ratio increases. Tsallis-type distributions properly capture this behavior <cit.>. However, for trap orders n>4, which we are more interested in here, the distributions tend to a thermal behavior as shown in Fig. <ref>. Then, the average position distribution (P̅(r)) satisfies
P̅(r) ∝exp (-V_eff(r)/k_BT),
going from a Gaussian-type distribution for n = 2 to a box-like homogeneous distribution as the multipolar order tends to infinity. This box-like tendency, displayed in panel (b) of Fig. <ref>, gives rise to a field-free region where the ion can move almost freely. As a result, in higher-order multipole traps the ion is less localized and reaches larger distances from the trap's center, eventually leading to ion losses.
However, the instability also depends on the velocity of the ion at the critical distance.
P̅(v|v_cri) stands for the probability that the ion has a velocity larger than the critical value v_cri, i.e., the minimum velocity required for ion losses at the critical position, which depends on the trap parameters. Assuming n>4, we can approximate the distribution by a thermal form as
P̅(v|v_cri) ∝Θ(v-v_cri)v e^-mv^2/k_BT,
where Θ(x) is the Heaviside function of argument x, and v_cri depends on the trap properties. In general, v_cri is influenced by micromotion effects, as well as by possible collisions at the boundary where the field rises to its highest value. However, an estimate can be obtained by equating the ion's kinetic energy at the boundary to the effective potential depth,
V_depth (Ω_RF; q) = m_ion(qnΩ_RF)^2/16r_tr^2n-2/r_0^2n-4 = 1/2 m_ion v_cri^2,
which leads to the following relations
v_cri(n;q) = nΩ_RFr_0 q/√(8), if r_0 ≤ r_cri,
v_cri(n;q) = (nΩ_RFr_0/√(8)) (η_max/(n(n-1)))^(2n-2)/(2n-4) q^1/(2-n), if r_cri ≤ r_0.
Fig. <ref> shows v_cri for different multipole traps, comparing the full numerical approach based on event-driven molecular dynamics against the analytical result of Eq. (<ref>). For increasing values of q, Eq. (<ref>) overestimates v_cri because it neglects micromotion and collision effects that can increase the secular velocity at the limiting distance, producing losses at lower velocities than predicted. However, the model gives good agreement for low values of q and an adequate qualitative description of the loss dynamics. Hence, for a fixed trap configuration, the stability increases as the temperature decreases, following the velocity distribution. Once the temperature is such that the critical velocity is not reached, the ion can only be lost through collision events at the boundary.
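A small helper implementing the piecewise estimate of Eq. (<ref>) could look as follows; the function name, the default η_max, and the assumption n > 2 are ours.

```python
import numpy as np

def v_critical(n, q, omega_rf, r0, eta_max=0.3):
    """Adiabatic estimate of the critical secular speed for ion loss (n > 2 assumed)."""
    # critical radius where the stability parameter eta reaches eta_max
    r_cri = r0 * (eta_max / (q * n * (n - 1))) ** (1.0 / (n - 2))
    prefactor = n * omega_rf * r0 / np.sqrt(8.0)
    if r0 <= r_cri:                               # losses occur at the trap radius r0
        return prefactor * q
    # losses occur at r_cri, inside the trap
    return (prefactor * (eta_max / (n * (n - 1))) ** ((2 * n - 2) / (2 * n - 4))
            * q ** (1.0 / (2.0 - n)))
```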
§ MEAN KINETIC ENERGY OF AN ION
In the adiabatic approximation, the virial theorem
2/3⟨ E_k⟩ = (n-1)⟨ V_eff⟩,
holds, resulting in the ratio ⟨ V_eff⟩/⟨ E_k⟩ = 2/[3(n-1)] and the ion's mean kinetic energy as
⟨ E ⟩ = [3/2 + 1/(n-1)]k_BT.
However, it is possible to include mass effects by assuming that the micromotion degrees of freedom are assigned to the atom dynamics <cit.>. We then arrive at the expected value of the atomic energy
⟨ E_a⟩ = 3/2 k_BT + ζ V_eff(r),
where ζ = m_atom/m_ion is the mass ratio. Combining Eqs. (<ref>) and (<ref>) we propose that the mean kinetic energy of the ion can be described as
⟨ E ⟩ = 3/2k_BT + k_BT/n-1 + α(n,ζ)ζ/n-1 k_BT,
where α(n,ζ) is a free parameter to determine, depending on the trap order and mass ratio.
Fig. <ref> displays the results for the mean energy of a Yb^+ ion confined in a multipole trap in the presence of different atomic baths. For low mass-ratio values, the required fitting parameter of Eq. (<ref>) is independent of the trap order. However, when the mass ratio approaches one, the trap order has a strong effect on the mean kinetic energy of the ion. In that case, Eq. (<ref>) is still applicable, but with a separate fitting parameter for each trap order.
From Eq. (<ref>), it is possible to identify the non-thermal component of the mean kinetic energy of the ion as
Δ E = k_BT/n-1 + α(n,ζ)ζ/n-1 k_BT.
When this term is small compared to the thermal component of the ion's kinetic energy, the ion can be described by a temperature rather than by its mean kinetic energy. Furthermore, note that Eq. (<ref>) expresses the contributions of the effective potential in Eq. (<ref>) as thermal contributions, with the trap-parameter dependence incorporated only through the fitting parameter α, which can be easily computed.
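As an illustration of how α(n, ζ) can be extracted in practice, the following sketch performs a one-parameter least-squares fit of the mean-energy formula to simulated mean energies; it assumes α is approximately ζ-independent over the fitted range, and all names are hypothetical.

```python
import numpy as np

def fit_alpha(zeta, mean_E_sim, n, kB_T):
    """One-parameter least-squares estimate of alpha(n, zeta) in the mean-energy formula."""
    zeta = np.asarray(zeta, dtype=float)
    base = 1.5 * kB_T + kB_T / (n - 1)            # thermal plus micromotion contribution
    design = zeta * kB_T / (n - 1)                # coefficient multiplying alpha
    residual = np.asarray(mean_E_sim, dtype=float) - base
    return float(np.dot(design, residual) / np.dot(design, design))
```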
§ LANGEVIN EQUATION MODEL
The dynamics of a trapped ion in a neutral sea can be described by solving the Langevin stochastic equation of motion <cit.>. In this approach, all degrees of freedom of the bath are substituted by an effective stochastic force modeled by a Gaussian white noise ζ(t), whose components satisfy
⟨ζ_j(t) ⟩ = 0 and ⟨ζ_j(t)ζ_i(s) ⟩ = Dδ_ijδ(t-s).
Here, D is the diffusion coefficient, related to the friction coefficient γ by the fluctuation-dissipation theorem, γ = D/(2k_BT), so that the diffusion and friction coefficients are linked at a given temperature T. The friction coefficient γ encapsulates the details of the atom-ion scattering through the thermally averaged diffusion cross-section, following the Chapman-Enskog approximation <cit.>.
Here, the stochastic equations of motion are formulated in Cartesian coordinates, using a multipolar expansion based on the analytic function z = (x + i y)^n to obtain the RF field for a given trap order, n. The explicit derivation is shown in Appendix <ref>. The equations of motion for each component of the ion's radial position, r_j, are given by
d^2r_j/dt^2 + γ/m_iondr_j/dt+ Ω^2_RFq_j/2cos(Ω_RFt)∂/∂ r_jU_n(x,y) = ζ_j(t)/m_ion,
where U_n(x,y) represents the spatial dependence of the multipolar field, which can be written as
U_n(x,y) = ∑_{k=0}^{m} \binom{2m}{2k} x^{2(m-k)} (-1)^k y^{2k},
if n is even (n = 2m with m ∈ ℕ^+), or
U_n(x,y) = ∑_{k=0}^{m} \binom{2m+1}{2k} x^{2(m-k)+1} (-1)^k y^{2k},
if n is odd (n = 2m + 1 with m ∈ ℕ^+).
Eq. (<ref>) represents a set of coupled stochastic differential equations for any n>2, in contrast to the case of a Paul trap <cit.>. Furthermore, Eq. (<ref>) contains explicitly time-dependent terms, and, as a result, there is no stationary solution of the associated Fokker-Planck equation. However, thanks to the Gaussian noise term (stochastic force) ζ_j, the variables r_j and v_j follow a time-averaged thermal distribution, as shown in Fig. <ref>, where the Langevin approach is compared with the thermal distribution. The energy distribution of the ion is shown in panel (a), where excellent agreement between our Langevin simulation and the thermal distribution is observed. Similarly, for the spatial distribution of the ion, panel (b), the Langevin formulation describes the ion's position in a thermal bath, given by Eq. (<ref>), extremely well.
The stochastic formulation of the trapped-ion dynamics in a neutral bath allows us to explore the continuous-time evolution of the physical quantities and, consequently, of their distributions, which is the primary advantage of a stochastic approach over a molecular dynamics one. Here, we solve Eq. (<ref>) using the leap-frog Verlet algorithm <cit.> and average over 10^5 realizations of the ensemble to report the mean time evolution of different quantities. We use brackets to denote the ensemble average and the overbar for the time average.
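A velocity-Verlet-type sketch of one integration step of Eq. (<ref>) is given below as a simplified stand-in for the leap-frog Verlet scheme used in our calculations; grad_Un(r) is assumed to return (∂U_n/∂x, ∂U_n/∂y) for the chosen trap order (see Appendix <ref>), and the noise amplitude follows the fluctuation-dissipation relation D = 2γk_BT. All names are illustrative.

```python
import numpy as np

def langevin_step(r, v, t, dt, q, omega_rf, gamma, m_ion, kB_T, rng, grad_Un):
    """One integration step of the stochastic equations of motion (2D radial plane)."""
    D = 2.0 * gamma * kB_T                              # fluctuation-dissipation theorem
    xi = rng.normal(0.0, np.sqrt(D / dt), size=2)       # discretized Gaussian white noise

    def accel(r_, v_, t_):
        gx, gy = grad_Un(r_)
        drive = -0.5 * omega_rf**2 * q * np.cos(omega_rf * t_) * np.array([gx, gy])
        return drive - (gamma / m_ion) * v_ + xi / m_ion

    a0 = accel(r, v, t)
    r_new = r + v * dt + 0.5 * a0 * dt**2
    a1 = accel(r_new, v + a0 * dt, t + dt)              # predictor for the velocity-dependent force
    v_new = v + 0.5 * (a0 + a1) * dt
    return r_new, v_new
```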
Fig. <ref> shows the evolution of the mean radial kinetic energy, ⟨ E_k⟩ = ⟨ E_k,x⟩ + ⟨ E_k,y⟩, for a single Yb^+ ion trapped in a dodecapolar trap (n=6) in the presence of a Rb cloud at T = 1× 10^-3 K. In this Figure, it is noticed that the energy undergoes a double thermalization process. First, the ion energy thermalizes to a field-free value of k_BT according to the equipartition theorem. Then, once the potential starts to act significantly on the ion, energy is no longer a conserved quantity: the kinetic energy becomes a function of time and rapidly reaches a second time-ensemble average value, approximately equal to 2k_BT, associated with the two micromotion degrees of freedom. The double thermalization process occurs if the relaxation time τ_R = m/γ is lower than the time it takes for the ion to leave the field-free region of the trap. Therefore, the thermalization process is drastically affected by the density of the neutral bath, as one can see by comparing panels (a) and (b). Panel (a) corresponds to a higher bath density than panel (b), and as a consequence it shows more abrupt thermalization dynamics. Additionally, in panel (a), one also notices that the event-driven molecular dynamics simulation tends toward the same mean energy value as the Langevin equation model; however, the double thermalization is not observed there due to the discrete steps in the simulation. Note that this observation validates the field-free approximation for the ion around the central region of the trap, which is absent in low-order traps, where such a second thermalization does not occur <cit.>.
The time to reach the second thermalization value depends on the trap order and on q, as illustrated in Fig. <ref>. Increasing the q-parameter for the same trap order results in a lower critical radius; the effect of the field is then felt by the ion at shorter distances, leading to shorter second-thermalization times, as shown in panel (a) of Fig. <ref>. In the same panel we notice that, for the same q-parameter, a higher-order trap leads to a shorter thermalization time than a low-order one, which is a consequence of the n^2 dependence of the effective potential amplitude. Finally, Fig. <ref>(b) confirms that the field-free dynamics is the same in every case, since all of them share the same thermal and atomic properties of the bath.
Another signature of the field-free evolution is diffusive behavior at short times, characterized by a quadratic-to-linear evolution of the mean square radial displacement ⟨ r^2⟩ = ⟨ x^2⟩ + ⟨ y^2⟩, as displayed in Fig. <ref>. On the contrary, at sufficiently long times ⟨ r^2⟩ saturates, as is characteristic of bounded stochastic motion. These aspects of the time-continuous evolution of the ion's distributions provide a better understanding of the sympathetic cooling process and a helpful criterion for choosing traps that enhance the stability and localization of the ion or control its relaxation processes.
§ CONCLUSIONS
In this work, we have introduced two novel methodologies to simulate the dynamics of a single trapped ion in a multipolar trap immersed in a cold gas. First, we introduce the event-driven molecular dynamics simulation to explore the dynamic stability of the ion in this system. In addition, we develop a stochastic approach based on the Langevin equation. The first technique provides an analysis of the ion-bath dynamics in a multipole trap from a discrete-time perspective, whereas the stochastic approach leads to a continuous-time description of the dynamics.
Event-driven molecular dynamics is an ideal tool for trap stability studies. For example, the dodecapolar trap represents the optimal choice to reduce micromotion heating while the ion shows stable dynamics. Similarly, ion losses usually occur after thermalization for the range of considered temperatures, around T = 1×10^-3 K. In addition, thanks to the event-driven molecular dynamics simulations, we have derived an expression for the mean kinetic energy of the ion that generalizes previous attempts in the literature <cit.>. On the other hand, the stochastic approach is well suited for thermalization studies, since it is a continuous-time approach and computationally cheap. Using this methodology, we predict a two-step thermalization mechanism of the ion: first, the ion thermalizes to the expected field-free value of k_BT, while in the second step it reaches the expected 2k_BT.
Finally, the methods presented here are readily extensible to more involved experimental scenarios, including excess micromotion or imperfections in the electrodes. Therefore, these techniques could potentially impact the field of hybrid ion-atom systems.
§ ACKNOWLEDGMENTS
J.P.-R. thanks the Simons Foundation for the support.
§ DYNAMIC STABILITY PARAMETER
In the case of multipole traps, for η < η_max the adiabatic approximation is valid <cit.>, and in the absence of a DC voltage, the multipole trap potential is given by
V_RF,n = U_AC/r_0^ncos(Ω_RFt)r^n.
Therefore, the ion's equation of motion reads as
m_id^2r(t)/dt^2 = F(r) = eU_ACn/r_0^ncos(Ω_RFt)r^n-1e_RF,
where e_RF represents the unit vector of the trapping force, which depends on the azimuthal angle ϕ.
Let us assume that within the adiabatic approximation the position of the ion can be written as
r(t) = r_sec(t) + r_mm(t), where r_sec and r_mm correspond to the secular and micromotion components of the ion's motion, respectively, satisfying |r_sec| ≫ |r_mm|
and |r̈_sec| ≪ |r̈_mm|. Then, Eq.(<ref>) reads as
m_id^2/dt^2(r_sec(t) + r_mm(t) ) ≈F(r_sec) + (r_mm∇)F(r_sec),
where a Taylor expansion has been performed on the force term up to first order in r_mm.
Equating the dominant terms on both sides of the equation, we find the differential equation for the micromotion,
d^2/dt^2 r_mm = F(r_sec)/m_i = -eU_ACn/(m_i r_0^n) cos(Ω_RF t) r_sec^n-1 e_RF,
which is easily solved by noting that the micromotion and secular motions are decoupled, leading to
r_mm≈eU_ACn/m_ir_0^nΩ_RF^2cos(Ω_RF t)r^n-1_sece_RF.
Using this approximate solution we find the equation for the secular motion as
d^2/dt^2r_sec = 1/m_i(r_mm∇)F(r_sec)
≈e^2U_AC^2n^2/m_i^2r_0^2nΩ^2(n-1)cos^2(Ω_RF t)r_sec^2n-3e_r,
where e_r is the radial unit vector. Averaging over the fast RF oscillations, we see that the secular component follows the equation
d^2/dt^2r_sec = q^2n^2Ω^2/4(n-1)r_sec^2n-3/r_0^2n-4
with q = 2eU_AC/r_0^2m_iΩ_RF^2.
Equation <ref> represents the periodic motion of the ion in the pseudopotential
V_eff(r)= m_ionq^2n^2Ω^2/16r_sec^2n-2/r_0^2n-4,
generated from averaging the energy associated with the micromotion.
This adiabatic decomposition of the ion's motion is valid as long as the conditions |r_sec| ≫ |r_mm|
and |r̈_sec| ≪ |r̈_mm| mentioned previously hold. Their validity is evaluated through the quotient of the two terms on the right-hand side of Eq. <ref>, which leads us to define the stability parameter
<cit.>
η = qn(n-1)( r/r_0)^n-2.
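In numerical form, the stability parameter and the pseudopotential above amount to the following short helpers (a sketch; the function names are ours):

```python
def eta_stability(r, r0, n, q):
    """Adiabaticity (stability) parameter eta of Eq. (<ref>)."""
    return q * n * (n - 1) * (r / r0) ** (n - 2)

def v_effective(r, r0, n, q, omega_rf, m_ion):
    """Pseudopotential of Eq. (<ref>), valid while eta stays below eta_max."""
    return m_ion * (q * n * omega_rf) ** 2 / 16.0 * r ** (2 * n - 2) / r0 ** (2 * n - 4)
```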
§ MULTIPOLAR POTENTIAL IN CARTESIAN COORDINATES
The previous formulation can also be expressed in Cartesian coordinates. This version can be more convenient for some numerical applications, as in the case of the Langevin equation described in Sec. <ref>. It is therefore necessary to write the spatial part of the multipolar potential in Cartesian coordinates, as shown here.
The radial part of this potential can be written as
V_RF,n(x,y,t) = (U_DC+ U_ACcos(Ω_RF t))U_n(x,y),
where U_DC and U_AC refer to the DC and AC potential amplitudes on the electrodes
and U_n(x,y) is the spatial dependence of the potential.
U_n(x,y) is a harmonic function and it can be built up, in the ideal electrode case, from the real part of the analytical complex function <cit.>
z^n = (x+iy)^n = U_n(x,y) + iV_n(x,y).
If n is even (n = 2m with m ∈ ℕ^+), we can write the potential as
U_n(x,y) = ∑_{k=0}^{m} \binom{2m}{2k} x^{2(m-k)} (-1)^k y^{2k},
so that x and y have the same even exponents between 0 and n. From this potential, we derive the spatial parts of the force components
∂ U_n(x,y)/∂ x = ∑_{k=0}^{m-1} \binom{2m}{2k} 2(m-k) x^{2(m-k)-1} (-1)^k y^{2k}
= 2m ∑_{k=0}^{m-1} \binom{2m-1}{2k} x^{2(m-k)-1} (-1)^k y^{2k},
and
∂ U_n(x,y)/∂ y = ∑_{k=1}^{m} \binom{2m}{2k} 2k x^{2(m-k)} (-1)^k y^{2k-1}
= 2m ∑_{k=1}^{m} \binom{2m-1}{2k-1} x^{2(m-k)} (-1)^k y^{2k-1}.
Thus both components of the force have the same number of terms and the same binomial coefficients.
For an odd-n trap (n = 2m+1 with m ∈ ℕ^+), the spatial dependence of the force takes the form
∂ U_n(x,y)/∂ x = (2m+1) ∑_{k=0}^{m} \binom{2m}{2k} x^{2(m-k)} (-1)^k y^{2k},
and
∂ U_n(x,y)/∂ y = (2m+1) ∑_{k=1}^{m} \binom{2m}{2k-1} x^{2(m-k)+1} (-1)^k y^{2k-1},
so, in contrast to the even case, the components have neither the same number of terms nor the same binomial coefficients. This results in a remarkable difference between the x and y dynamics.
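The two cases can be collected into a single routine returning the spatial part of the force; the sketch below follows the sums above, with math.comb providing the binomial coefficients (the function name is ours).

```python
from math import comb

def grad_Un(x, y, n):
    """Spatial part of the multipole force, (dU_n/dx, dU_n/dy), for trap order n >= 2."""
    if n % 2 == 0:                                   # even case, n = 2m
        m = n // 2
        dUx = 2 * m * sum(comb(2 * m - 1, 2 * k) * x ** (2 * (m - k) - 1)
                          * (-1) ** k * y ** (2 * k) for k in range(m))
        dUy = 2 * m * sum(comb(2 * m - 1, 2 * k - 1) * x ** (2 * (m - k))
                          * (-1) ** k * y ** (2 * k - 1) for k in range(1, m + 1))
    else:                                            # odd case, n = 2m + 1
        m = (n - 1) // 2
        dUx = (2 * m + 1) * sum(comb(2 * m, 2 * k) * x ** (2 * (m - k))
                                * (-1) ** k * y ** (2 * k) for k in range(m + 1))
        dUy = (2 * m + 1) * sum(comb(2 * m, 2 * k - 1) * x ** (2 * (m - k) + 1)
                                * (-1) ** k * y ** (2 * k - 1) for k in range(1, m + 1))
    return dUx, dUy
```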
§ HARMONIC CONTRIBUTION TO ⟨ R^2⟩ AND ⟨ V^2⟩
Additional dynamical aspects can be studied with the Langevin methodology, which sheds light on questions such as the ion's localization. Here we address the harmonic behavior of the mean square velocity and position of the ion. Fig. <ref> shows the power spectrum of the evolution of ⟨ r^2⟩ and ⟨ v^2⟩. Two main aspects of this plot can be highlighted. First, there is a remarkable difference between the amplitudes of the radio-frequency oscillations in the evolution of ⟨ v^2⟩ and ⟨ r^2⟩: the amplitude of the oscillations in ⟨ r^2⟩ is so small compared to its mean value that it can be approximated as time independent. Second, the evolution of ⟨ v^2⟩ contains more harmonic contributions of the fundamental trap frequency Ω_RF than ⟨ r^2⟩, and only even harmonics appear. These two properties are characteristic only of traps with n = 2m and m odd, as is the case of the dodecapolar trap (n = 6, m = 3).
To understand why this happens, we start by noticing that, following Eq. <ref>, the spatial parts of the trapping force along the x and y components are
∂/∂ x U_n(x,y) = ∑_{k=0}^{m-1} 2(m-k) \binom{2m}{2k} x^{2(m-k)-1} (-1)^k y^{2k},
∂/∂ y U_n(x,y) = ∑_{k=1}^{m} 2k \binom{2m}{2k} x^{2(m-k)} (-1)^k y^{2k-1},
respectively. Further manipulation of the y-component leads us to the expression
∂/∂ y U_n(x,y) = (-1)^m ∑_{k=0}^{m-1} 2(m-k) \binom{2m}{2k} y^{2(m-k)-1} (-1)^k x^{2k},
which is exactly the x-component under the exchange x → y, up to the overall sign (-1)^m.
Now, for long times (t ≫τ_c ) we can express the solution for the mean square value of each position component as a Fourier series <cit.>
⟨ r_j^2⟩ = ∑_nr_j,ne^-i nΩ_RFt,
where the Fourier coefficients r_{j,n} depend, among other things, on the n-th power of the q-parameter <cit.>. Then, if m is odd, the spatial part of the trapping force is the same for the x and y components but with opposite sign, because of the (-1)^m term (see Eqs. <ref> and <ref>). As usual for the linear Paul trap, we can assign this sign difference to the q factor, such that q_x = -q_y. Doing so, all the Fourier coefficients of ⟨ x^2⟩ and ⟨ y^2⟩ become identical up to a sign that appears only in the odd powers of the q-factor; that is, a negative sign accompanies the odd harmonic contributions of the y component. As a consequence, for long times ⟨ r^2⟩ reads
⟨ r^2⟩ = ⟨ x^2⟩ + ⟨ y^2⟩
= (x_0 + x_1e^-i Ω_RFt +...) + (x_0 - x_1e^-i Ω_RFt +...)
= 2x_0 + 2x_2e^{-2i Ω_RF t} + ... = 2∑_n x_{2n}e^{-2inΩ_RF t}.
So, the first time-dependent contribution is second order in q, which is small for most of the stable configurations found in Sec. <ref>.
In general, this near time-independence of the mean square radial displacement results in better localization properties of the ion inside the trap. We also notice that the high-order harmonic contributions to the mean square velocity are larger than those to the mean square position, which explains the strong time dependence of ⟨ v^2⟩. By the same arguments, Eq. <ref> also holds for ⟨ v^2⟩. In Fig. <ref> we show the power spectrum of ⟨ v^2⟩ for the n=6 and n=4 traps. The octupolar trap shows three additional peaks in the spectrum, one at the fundamental RF frequency and the other two at the third and fifth harmonics, verifying the previous analysis.
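The power spectra discussed in this appendix can be obtained with a standard discrete Fourier transform of the ensemble-averaged time series; a minimal sketch (names ours) reads:

```python
import numpy as np

def power_spectrum(signal, dt):
    """One-sided power spectrum of an ensemble-averaged observable such as <v^2>(t)."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)   # remove the DC component
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs, spec          # peaks at multiples of Omega_RF/(2*pi) reveal the harmonics
```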
|
http://arxiv.org/abs/2307.03062v1
|
20230706152923
|
Quantum criticality on a compressible lattice
|
[
"Saheli Sarkar",
"Lars Franke",
"Nikolas Grivas",
"Markus Garst"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.stat-mech"
] | |
http://arxiv.org/abs/2307.00353v1
|
20230701144242
|
Bulk-Boundary Correspondence in Two-Dimensional Non-Hermitian Systems: Topological Winding Tuple Characterizes Boundary Accumulation of Magnons
|
[
"Chengyuan Cai",
"Dante M. Kennes",
"Michael A. Sentef",
"Tao Yu"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
Institut für Theorie der Statistischen Physik, RWTH Aachen University and JARA-Fundamentals of Future Information Technology, 52056 Aachen, Germany
Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany
Institute for Theoretical Physics and Bremen Center for Computational Materials Science,
University of Bremen, 28359 Bremen, Germany
Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany
[email protected]
School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
We propose a topological winding tuple ( W_1, W_2) to fully characterize the accumulation of eigenmodes at boundaries in two-dimensional non-Hermitian systems. Specifically, we investigate a long-ranged coupled and non-Hermitian two-dimensional array of nanomagnets, fabricated on a thin magnetic substrate and subjected to an in-plane magnetic field. We predict topology-driven edge and corner accumulation of magnetic eigenmodes. By varying the direction of the in-plane field, all magnon states accumulate either at different edges of the array with ( W_1=± 1, W_2=0) or ( W_1=0, W_2=± 1), or at different corners characterized by ( W_1=± 1, W_2=± 1). The uncovered winding tuple establishes bulk-boundary correspondence for two-dimensional non-Hermitian systems.
Bulk-Boundary Correspondence in Two-Dimensional Non-Hermitian Systems: Topological Winding Tuple Characterizes Boundary Accumulation of Magnons
Tao Yu
August 1, 2023
===============================================================================================================================================
Introduction.—The discovery of the one-dimensional non-Hermitian skin effect, yielding an accumulation of a macroscopic number of bulk eigenstates at the edge <cit.>, stimulated the recent explorations of open systems, achieving useful functionalities such as funneling of light <cit.>, unidirectional amplification <cit.>, non-local response <cit.>, and enhanced device sensitivity <cit.>. The frequency spectrum ω(κ), defined for periodic boundary conditions, was found to form a loop in the complex plane in the presence of a skin effect when the wave number κ evolves by one period, which defines a winding number. In one dimension, this winding number characterizes the skin effect's topological origin and precisely determines on which edge the eigenstates localize <cit.>. Extending the non-Hermitian skin effect from one to higher dimensions yields rich and diverse manifestations of skin modes including edge, corner, surface, or hinge accumulations <cit.>, which have been experimentally observed in acoustics <cit.> and topoelectrical circuits <cit.>, but not yet in magnonics <cit.>. Magnonic systems exploit magnetic excitations, i.e., magnons, as potential low-energy-consumption information carriers <cit.>.
The rapid progress in the field also raised theoretical challenges and urgent issues in the topological characterization of the different skin modes <cit.>. Kawabata et al. showed that the non-zero Wess-Zumino term leads to the presence of (higher-order) corner skin modes in non-Hermitian systems <cit.>. Zhang et al. proposed a general theorem to characterize the existence of a non-Hermitian skin effect in higher dimensions in terms of spectra area in the complex plane <cit.>, viz. the non-Hermitian skin effect appears when the spectra under periodic boundary conditions cover a finite area. However, a convenient approach to precisely distinguish different edge and corner accumulations, i.e., a precise prediction of the edge or corner on which the modes localize, appears to be missing so far.
In this Letter, we remedy this by predicting different edge or corner
accumulations of magnons in ferromagnetic heterostructures composed of a regularly shaped two-dimensional (2D) array of nanomagnets that are fabricated on a thin magnetic substrate and biased by an in-plane magnetic field. The system is illustrated in Fig. <ref>. Mediated by the propagating magnons in the substrate, the indirect interaction between Kittel magnons <cit.> in the nanomagnet is long-range and chiral <cit.>, driving the accumulation. Here the frequency spectrum ω(κ_1,κ_2) under periodic boundary conditions is a function of two real wave numbers κ_1 and κ_2, which allows us to define a winding tuple ( W_1, W_2) by fixing one of the wave numbers. We use such winding tuples to fully characterize different edge and corner aggregations of bulk eigenstates that precisely predict which edge or corner the modes localize on, which can be varied by varying the direction of the in-plane field in our model.
The winding tuple shows that all of the magnonic bulk eigenstates accumulate either at different edges of the array with ( W_1=± 1, W_2=0) or ( W_1=0, W_2=± 1), or at different corners characterized by ( W_1=± 1, W_2=± 1). These predictions can be tested experimentally with conventional metallic nanomagnets on a high-quality thin
magnetic substrate such as yttrium iron garnet (YIG).
Non-Hermitian magnonic edge and corner eigenstates.—We consider a finite-sized 2D square array of regular shape composed of N_y× N_z nanomagnets, e.g. CoFeB, Py, Ni, or Co, of width w∼ O(100) nm, length l∼ O(100) nm, and thickness d∼ O(10) nm, fabricated on the finite area of a magnetic substrate such as YIG thin film of thickness s∼ O(10) nm, as illustrated in Fig. <ref>. The distance between neighboring nanomagnet is Λ_y and Λ_z, respectively, in the ŷ- and ẑ-directions, and (a,b) indicates the nanomagnet in the a-th column and b-th row.
An in-plane magnetic field H_0 with an angle θ with respect to the ẑ-direction biases the saturated magnetization M_s and M̃_s of the substrate and nanomagnets. For soft YIG magnetic substrates, M_s is parallel to H_0. M̃_s is larger than M_s and, due to the shape anisotropy, it makes an angle θ̃ (in general different from θ) with respect to the ẑ-direction. We refer to the Supplemental Material (SM) <cit.> for the calculation of θ̃.
When Λ_y,z≫{w,l,d} is of micrometer size, the direct dipolar interaction between the nanomagnets is suppressed to be negligibly small. The nanomagnet then couples dominantly with the magnetic substrate via the dipolar interaction, assuming that the interlayer exchange interaction is suppressed by inserting a thin insulator layer <cit.>. So the ferromagnetic resonance (FMR) modes or Kittel magnons β̂_a,b <cit.> of frequency Ω in the (a,b)-th nanomagnets couple indirectly via the dipolar interaction with the traveling magnons m̂_ k of wave vector k=(k_y,k_z) in the substrate <cit.> with the coupling constant given by
g_ k^(a,b)=g_𝐤e^i (ak_y Λ_y+bk_z Λ_z), where g_𝐤 is real (refer to the SM <cit.> for detailed derivations). The total Hamiltonian
Ĥ/ħ =∑_a,b(Ω-iδ_β) β̂^†_a,bβ̂_a,b+∑_ k(ω_k-iδ_m)m̂^†_ km̂_ k
+(∑_a,b∑_ kg^(a,b)_ km̂_ kβ̂^†_a,b+ H.c.)
describes coupled harmonic oscillators,
where δ_β=α̃_GΩ and δ_m=α_Gω_k with the damping constants α̃_G and α_G for the magnons in the nanomagnet and substrate, and ω_k=μ_0γ(H_0+α_ exM_sk^2) is the dispersion of the exchange magnons in the substrate with the vacuum permeability μ_0, the modulus of electron gyromagnetic ratio γ, and the exchange stiffness α_ ex.
The Kittel magnons in the nanomagnets
couple effectively via virtually exchanging magnons in the substrate <cit.>.
The effective coupling between magnons in the (a,b)-th and (a',b')-th nanomagnet is Γ( r_a-a',b-b')=i∑_ kg_ k^2e^i[(a-a')k_y Λ_y+(b-b')k_z Λ_z]/(ω-ω_k+i δ_m).
In polar coordinates k=(k,φ) and r_a-a',b-b'=(r_a-a',b-b',ϕ_a-a',b-b'), performing the contour integral over k with the on-shell approximation ω→Ω yields <cit.>
Γ( r_a-a',b-b'=0)=L_y L_z/4π∫_0^2π d φk_Ω/v_k_Ωg^2(k_Ω,φ),
Γ( r_a-a',b-b'≠ 0)=L_y L_z/2 π∫_ϕ_a-a',b-b'-π/2^ϕ_a-a',b-b'+π/2 d φk_Ω/v_k_Ωg^2(k_Ω,φ)
×exp[i q_Ω r_a-a',b-b'cos (φ-ϕ_a-a',b-b')],
where the lengths of substrate L_y and L_z are along the ŷ- and ẑ-directions, k_Ω=√((Ω-μ_0γ H_0)/(μ_0γα_ exM_s)) is the wave number of the resonant magnon to the FMR frequency Ω that propagates with group velocity v_k_Ω=(∂ω_k/∂ k)|_k_Ω=2μ_0γα_ exM_sk_Ω, and q_Ω=k_Ω(1+iα_G/2).
Therefore, the elements of the effective Hamiltonian matrix of nanomagnet magnons read
H_ eff|_a=a',b=b'=Ω-i δ_β-i Γ( r_a-a',b-b'=0),
H_ eff|_a≠ a' or b≠ b'=-iΓ( r_a-a',b-b'≠ 0).
The substrate, on the one hand, adds an extra dissipation Γ( r_a-a',b-b'=0) to the Kittel magnons δ_β, and, on the other hand, mediates an effective coupling Γ( r_a-a',b-b' 0) between different nanomagnets. The matrix (<ref>) is non-Hermitian such that its diagonalization requires, in general, different left η_ξ and right ψ_ξ eigenvectors, where the state index ξ={1,2,⋯,N_yN_z}. The left and right eigenvectors obey the biorthonormal condition η_ξ^†ψ_ξ'=δ_ξξ' <cit.>.
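For orientation, a minimal numerical sketch of how the eigenmode accumulation W(r_{a,b}) is obtained from the effective Hamiltonian of Eqs. (<ref>) might look as follows; the coupling is passed as a user-supplied function Gamma(Δa, Δb) of the index offsets, and all names are illustrative.

```python
import numpy as np

def accumulation_map(Gamma, Ny, Nz, Omega, delta_beta):
    """Build H_eff for an Ny x Nz array and return W(r_ab) averaged over all eigenstates."""
    N = Ny * Nz
    H = np.zeros((N, N), dtype=complex)
    for a in range(Ny):
        for b in range(Nz):
            i = a * Nz + b
            for ap in range(Ny):
                for bp in range(Nz):
                    j = ap * Nz + bp
                    if i == j:
                        H[i, j] = Omega - 1j * delta_beta - 1j * Gamma(0, 0)
                    else:
                        H[i, j] = -1j * Gamma(a - ap, b - bp)
    _, psi = np.linalg.eig(H)                 # columns are the right eigenvectors
    W = np.mean(np.abs(psi) ** 2, axis=1)     # average weight over all eigenstates
    return W.reshape(Ny, Nz)
```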
Here we illustrate the results with dimensions used in
experiments Refs. <cit.> by considering an array of 30×30 CoFeB nanomagnets of thickness d=30 nm, width w=100 nm, and length l=200 nm with the neighboring distance Λ_y=Λ_z=2.2 μm that are fabricated on the thin YIG film of thickness s=10 nm, biased by the in-plane magnetic field μ_0H_0=0.05 T. The saturated magnetization of CoFeB
μ_0M̃_s=1.6 T <cit.> is much larger than that of YIG μ_0M_s=0.177 T <cit.>.
For the ultrathin YIG substrate, the Gilbert damping coefficient α_G∼ 10^-3 <cit.> and exchange stiffness α_ ex=3×10^-16 m^2 <cit.>. Besides, μ_0=4π×10^-7 H/m and γ=1.82×10^11 s^-1· T^-1.
The modulus of effective coupling |Γ( r)| under different magnetic configurations θ={0,π,-π/4,π/4} and θ̃={0, 0, -0.056π,0.056π} are plotted in Table <ref>(a)-(d), which show tunable chiralities or non-reciprocities. In the parallel configuration θ=0, |Γ( r)| is symmetric in the ẑ-direction, but is stronger when y>0, implying that the Kittel magnon tends to interact with the substrate magnons propagating to the right. The chirality becomes opposite in the antiparallel configuration θ=π. The chirality is altered strongly when θ=±π/4 as shown in Table <ref>(c) and (d), where |Γ( r)| is asymmetric in both the ŷ and ẑ-direction. These chiralities drive different accumulations of magnonic eigenstates. To show such accumulations, we plot in Table. <ref>(e)-(h) the spatial distributions of all eigenstates W( r_a,b)=[1/(N_yN_z)]∑_ξ|ψ_ξ( r_a,b)|^2. In the collinear parallel and antiparallel configurations, the chirality only drives the accumulation at one edge: as in (e) with θ=0, all the eigenstates accumulate at the right edge, but in (f) with θ=π, they accumulate at the left edge. While in the non-collinear configuration with θ=±π/4 as in (g) and (h), all the magnonic eigenstates
become skewed to the lower-right and upper-right corners, respectively, showing two kinds of non-Hermitian accumulation. These non-Hermitian skin modes are of first order since all the modes accumulate <cit.>, different from the higher-order corner skin modes <cit.>.
Full Topological characterization.—As addressed, it is still a theoretical challenge to topologically distinguish the edge and corner accumulations in the 2D non-Hermitian skin effect <cit.>. To this end, we address a full topological characterization of the accumulation of magnon eigenstates in terms of the winding tuple of the complex frequency spectra under periodic boundary condition. However, before we can turn to this winding tuple, we need to deal with the long-range coupled system rendering the construction of periodic boundary conditions non-trivial since every two magnets couple, differently from the short-range coupled system <cit.>. To solve this issue we propose to map the system with a finite array on the substrate under open boundary conditions to the periodic system by repeating the finite array on the substrate an infinite number of times and requesting the magnon operator in the a-th column and b-th row to satisfy periodic condition β̂_(a,b)=β̂_(a+N_y,b)=β̂_(a,b+N_z), as addressed in Fig. <ref> for the one-dimensional situation. Good agreement is obtained in the one-dimensional system, which allows an analytical treatment, where our numerical results agree with the analytical one <cit.>. We refer to the SM <cit.> for a detailed comparison.
The translational symmetry is recovered when we repeat the block of the nanomagnet array along the ŷ- and ẑ-directions indefinitely. We label every block by {n_y,n_z}∈ (-∞,∞) and every nanomagnet in the block by {a,b}. The magnons in the substrate then interact with the Kittel magnons in all nanomagnets, leading to the Hamiltonian
Ĥ_p/ħ=∑_n_y,n_z∑_a=1^N_y∑_b=1^N_z(Ω-iδ_β) β̂^(n_y,n_z)†_a,bβ̂^(n_y,n_z)_a,b
+∑_ k(ω_k-iδ_m)m̂^†_ km̂_ k+(∑_ k∑_n_y,n_z∑_a=1^N_y∑_b=1^N_z g_ km̂_ k
×β̂^(n_y,n_z)†_a,be^i((a+n_yN_y)k_yΛ_y+(b+n_zN_z)k_zΛ_z)+ H.c.),
where the phase in the coupling term records the position of the nanomagnet.
Due to the periodicity, we only need to focus on one block such as the {n_y=0,n_z=0} block. Below we denote β̂_a,b^(0,0) by β̂_a,b for short notation.
By Langevin's equation <cit.> and using the effective coupling (<ref>), we find
(ω-Ω+i δ_β) β̂_a,b
=-i∑_a',b'∑_n_y,n_zΓ( r_a,b- r_a'+n_yN_y,b'+n_zN_z)β̂_a',b'
=-i∑_a',b'Γ^p( r_a,b- r_a',b)β̂_a',b',
where r_a,b=a Λ_yŷ+b Λ_zẑ is the position of the (a,b)-th nanomagnet and in the second line we impose the periodic condition β_a,b^(n_y,n_z)=β_a,b^(0,0).
Γ^p( r_a,b- r_a',b')=∑_n_y,n_zΓ( r_a,b- r_a'+n_yN_y,b'+n_zN_z)
is periodic in both the ŷ- and ẑ-directions since Γ^p( r)=Γ^p( r+N_yΛ_yŷ)=Γ^p( r+N_zΛ_zẑ).
We then find from Eq. (<ref>) the elements of the Hamiltonian matrix of the periodic system, which under the on-shell approximation ω→Ω read
H_ eff^p|_a=a',b=b'=Ω-i δ_β-iΓ^p( r=0),
H_ eff^p|_a≠ a' or b≠ b'=-iΓ^p( r_a,b- r_a',b').
Due to the periodicity of Γ^p( r) the eigenfunctions of matrix H^p_ eff are the plane waves
ψ^p_κ_y,κ_z=1/√(N_yN_z)(e^i(κ_yΛ_y+κ_zΛ_z),e^i(κ_yΛ_y+2κ_zΛ_z),⋯,.
.e^i(κ_yΛ_y+N_zκ_zΛ_z),e^i(2κ_yΛ_y+κ_zΛ_z),⋯,e^i(N_yκ_yΛ_y+N_zκ_zΛ_z))^T,
where κ_y≡ 2π l_y/(N_yΛ_y) and κ_z≡ 2π l_z/(N_zΛ_z) are real with integers l_y={1,2,...,N_y} and l_z={1,2,...,N_z}.
It obeys H^p_ effψ^p_κ_y,κ_z=ω^p(κ_y,κ_z)ψ^p_κ_y,κ_z, where the eigenfrequency
ω^p(κ_y,κ_z) =Ω-iδ_β-i∑_a=0^N_y-1∑_b=0^N_z-1Γ^p(-a Λ_yŷ-b Λ_zẑ)
× e^i(aκ_yΛ_y+bκ_zΛ_z).
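Numerically, the periodic spectrum amounts to a two-dimensional discrete Fourier transform of the periodized coupling, which can be evaluated as in the following sketch (array conventions and names are ours):

```python
import numpy as np

def periodic_spectrum(Gamma_p, Omega, delta_beta):
    """omega^p(kappa_y, kappa_z) from Gamma_p[a, b] ~ Gamma^p(-a*Lambda_y*e_y - b*Lambda_z*e_z).

    Entry [l_y, l_z] of the returned array corresponds to kappa_i = 2*pi*l_i/(N_i*Lambda_i)."""
    Ny, Nz = Gamma_p.shape
    # sum_{a,b} Gamma_p[a,b] exp(+2j*pi*(a*l_y/Ny + b*l_z/Nz)) = Ny*Nz*ifft2(Gamma_p)
    return Omega - 1j * delta_beta - 1j * Ny * Nz * np.fft.ifft2(Gamma_p)
```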
Since the complex spectra ω^p(κ_y,κ_z) are functions of two real wave numbers κ_y and κ_z, they have a complicated distribution on the complex plane. The conventional spectral topology based on the winding number of the one-dimensional system is nevertheless still convenient for characterizing the topological origin of the skin effect. Here we use it to characterize the 2D non-Hermitian skin effect by fixing one component of (κ_yΛ_y,κ_zΛ_z) at any (convenient) value and monitoring the evolution of ω^p(κ_y,κ_z) on the complex plane as the other wave number evolves by one period. Accordingly, we define the topological winding tuple ( W_y, W_z) by fixing, respectively, κ_zΛ_z and κ_yΛ_y for the entries of the tuple:
W_{i={y,z}} =
\begin{cases}
0, & \text{if } Q_i=0 \text{ for all } ω_0,\\
-Q_i/|Q_i|, & \text{if } Q_i ≠ 0 \text{ for some } ω_0,
\end{cases}
where with respect to the reference frequency ω_0
Q_i={y,z}=∫_0^2πd/d(κ_iΛ_i)[ω^p(κ_y,κ_z)-ω_0]d(κ_iΛ_i).
When the spectra do not form a loop, W_i=0; otherwise W_i=1 (-1) for the clockwise (anticlockwise) evolution of the frequency spectra, which can be computed by properly choosing ω_0 on the complex plane.
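In practice, the winding entries can be extracted from the sampled spectra by accumulating the phase of ω^p−ω_0 over one period; a sketch is given below, where the overall sign relating the accumulated phase to the convention W_i = −Q_i/|Q_i| is an assumption that may need to be flipped depending on the orientation convention, and all names are ours.

```python
import numpy as np

def loop_winding(curve, omega0):
    """Winding of a finely sampled closed spectral loop around the reference omega0."""
    z = np.asarray(curve, dtype=complex) - omega0
    dphi = np.angle(np.roll(z, -1) / z)          # branch-safe phase increments
    return int(np.rint(np.sum(dphi) / (2.0 * np.pi)))

def winding_tuple(omega_p, omega0, fix_y=0, fix_z=0):
    """(W_y, W_z): fix one wave-number index and wind the other around omega0."""
    Wy = -loop_winding(omega_p[:, fix_z], omega0)   # overall sign assumed from W = -Q/|Q|
    Wz = -loop_winding(omega_p[fix_y, :], omega0)
    return Wy, Wz
```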
The winding tuple ( W_y, W_z) precisely characterizes different edge or corner accumulations in the 2D non-Hermitian skin effect. When both two indexes vanish, no 2D non-Hermitian skin effect occurs; when only one of them is nonzero, the magnons accumulate at one of the edges, i.e. upper, lower, left, and right skin modes that are characterized, respectively, by { W_y, W_z}={0,1},{0,-1},{-1,0}, and {1,0}; when both exist, the skin modes accumulate at one of the corners, with the upper-left, lower-left, upper-right, lower-right corner modes characterized, respectively, by { W_y, W_z}={-1,1},{-1,-1},{1,1}, and {1,-1}.
This is confirmed by the numerical calculation in Table <ref>(i)-(p) with N_y=N_z=250 for the spectral winding when one of κ_y and κ_z is fixed. For the edge accumulation at θ={0,π}, one component of the winding tuple vanishes, while for the corner accumulation at θ=±π/4 both winding numbers are nonzero and govern the position at which the magnonic eigenstates localize.
Discussion.—In conclusion, we predict the edge or corner accumulations of magnons in the nanomagnetic array that act as magnetic dipoles on a high-quality magnetic insulating substrate and fully characterize their topological origin in terms of winding tuples. Such an approach can be extended to the three-dimensional case with a winding three-tuple and so on for a long-range coupled system of regular shape. The insights obtained in magnonics, where magnetic dipoles are exploited, should straightforwardly apply to analogous electric dipoles that are coupled in a long-range way, for instance in chiral photonics <cit.> or plasmonics <cit.>.
This work is financially supported by the National Natural Science Foundation of China, and the startup grant of Huazhong University of Science and Technology (Grants No. 3004012185 and 3004012198). DMK acknowledges funding by the DFG under RTG 1995, within the Priority Program SPP 2244 “2DMP” — 443273985 and under Germany's Excellence Strategy - Cluster of
Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 -
390534769.
99
Bergholtz E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021).
XZhang_review X. Zhang, T. Zhang, M.-H. Lu, and Y.-F. Chen, A review on non-Hermitian skin effect, Adv. Phys.-X 7, 2109431 (2022).
KDing K. Ding, C. Fang, and G. Ma, Non-Hermitian topology and exceptional-point geometries, Nat. Rev. Phys. 4, 745 (2022).
Okuma_review N. Okuma and M. Sato, Non-Hermitian Topological Phenomena: A Review, Annu. Rev. Condens. Matter Phys. 14, 83 (2023).
RLin R. Lin, T. Tai, L. Li, and C. H. Lee, Topological Non-Hermitian skin effect, arXiv:2302.03057.
Yu_review T. Yu, J. Zou, B. Zeng, J. W. Rao, and K. Xia, Non-Hermitian Topological Magnonics, arXiv:2306.04348.
Weidemann S. Weidemann, M. Kremer, T. Helbig, T. Hofmann, A. Stegmaier, M. Greiter, R. Thomale, and A. Szameit, Topological funneling of light,
Science 368, 311 (2020).
McDonald A. McDonald and A. A. Clerk, Exponentially-enhanced quantum sensing with
non-Hermitian lattice dynamics, Nat. Commun. 11, 5382 (2020).
XWen X. Wen, X. Zhu, A. Fan, W. Y. Tam, J. Zhu, H. W. Wu, F. Lemoult, M. Fink, and J. Li, Unidirectional amplification with acoustic non-Hermitian space-time varying metamaterial, Commun. Phys. 5, 18 (2022).
Helbig T. Helbig, T. Hofmann, S. Imhof, M. Abdelghany, T. Kiessling, L. W. Molenkamp, C. H. Lee, A. Szameit, M. Greiter, and R. Thomale, Generalized bulk-boundary correspondence in non-Hermitian topolectrical circuits, Nat. Phys. 16, 747 (2020).
Ghatak A. Ghatak, M. Brandenbourger, J. Van Wezel, and C. Coulais, Observation of non-Hermitian topology and its bulk-edge correspondence in an active mechanical metamaterial, Proc. Natl. Acad. Sci. 117, 29561 (2020).
Budich J. C. Budich and E.J. Bergholtz, Non-Hermitian Topological Sensors, Phys. Rev. Lett. 125, 180403 (2020).
Yu_Zeng T. Yu and B. W. Zeng, Giant microwave sensitivity of a magnetic array by long-range chiral interaction driven skin effect, Phys. Rev. B 105, L180401 (2022).
HYuan H. Yuan, W. Zhang, Z. Zhou, W. Wang, N. Pan, Y. Feng, H. Sun, and X. Zhang, Non-Hermitian Topolectrical Circuit Sensor with High Sensitivity, Adv. Sci. 2023, 2301128 (2023).
FKK F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Biorthogonal Bulk-Boundary Correspondence in Non-Hermitian Systems, Phys. Rev. Lett. 121, 026808 (2018).
Shunyu S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018).
YXiong Y. Xiong, Why does bulk boundary correspondence fail in some non-hermitian topological models, J. Phys. Commun. 2, 035043 (2018).
CYin C. Yin, H. Jiang, L. Li, R. Lü, and S. Chen, Geometrical meaning of winding number and its characterization of topological phases in
one-dimensional chiral non-Hermitian systems, Phys. Rev. A 97, 052115 (2018).
HShen H. Shen, B. Zhen, and L. Fu, Topological Band Theory for Non-Hermitian Hamiltonians, Phys. Rev. Lett. 120, 146402 (2018).
ZGong Z. Gong, Y. Ashida, K. Kawabata, K. Takasan, S. Higashikawa, and M. Ueda, Topological Phases of Non-Hermitian Systems, Phys. Rev. X 8, 031079 (2018).
Yokomizo K. Yokomizo and S. Murakami, Non-Bloch Band Theory of Non-Hermitian Systems, Phys. Rev. Lett. 123, 066404 (2019).
Kawabata_2 K. Kawabata, K. Shiozaki, M. Ueda, and M. Sato, Symmetry and Topology in Non-Hermitian Physics, Phys. Rev. X 9, 041015 (2019).
KZhang1 K. Zhang, Z. Yang, and C. Fang, Correspondence between Winding Numbers and Skin Modes in Non-Hermitian Systems, Phys. Rev. Lett. 125, 126402 (2020).
Okuma N. Okuma, K. Kawabata, K. Shiozaki and M. Sato, Topological Origin of Non-Hermitian Skin Effects, Phys. Rev. Lett. 124, 086801 (2020).
HHu H. Hu and E. Zhao, Knots and Non-Hermitian Bloch Bands, Phys. Rev. Lett. 126, 010401 (2021).
Schindler F. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. Parkin, B. A. Bernevig, and T. Neupert, Higher-order topological insulators,
Sci. Adv. 4, eaat0346 (2018).
CHLee1 C. H. Lee, L. Li, and J. Gong, Hybrid Higher-Order Skin-Topological Modes in Nonreciprocal Systems, Phys. Rev. Lett. 123, 016805 (2019).
Kawabata K. Kawabata, M. Sato, and K. Shiozaki, Higher-order non-Hermitian skin effect, Phys. Rev. B 102, 205118 (2020).
Okugawa R. Okugawa, R. Takahashi, and K. Yokomizo, Second-order topological non-Hermitian skin effects, Phys. Rev. B 102, 241202 (2020).
BXie B. Xie, H.-X. Wang, X. Zhang, P. Zhan, J.-H. Jiang, M. Lu, and Y. Chen, Higher-order band topology, Nat. Rev. Phys. 3, 520 (2021).
YFu Y. Fu, J. Hu, and S. Wan, Non-Hermitian second-order skin and topological modes, Phys. Rev. B 103, 045420 (2021).
TLi T. Li, Y.-S. Zhang, and W. Yi, Two-Dimensional Quantum Walk with Non-Hermitian Skin Effects, Chin. Phys. Lett. 38, 030301 (2021).
KZhang2 K. Zhang, Z. Yang, and C. Fang, Universal non-Hermitian skin effect in two and higher dimensions, Nat. Commun. 13, 2496 (2022).
WZhu W. Zhu and J. Gong, Hybrid skin-topological modes without asymmetric couplings, Phys. Rev. B 106, 035425 (2022).
YLi Y. Li, C. Liang, C. Wang, C. Lu, and Y.-C. Liu, Gain-Loss-Induced Hybrid Skin-Topological Effect, Phys. Rev. Lett. 128, 223903 (2022).
Flebus_van_der_Waals K. Deng and B. Flebus, Non-Hermitian skin effect in magnetic systems, Phys. Rev. B 105, L180406 (2022).
XZhang X. Zhang, Y. Tian, J.-H. Jiang, M.-H. Lu, and Y.-F. Chen, Observation of higher-order non-Hermitian skin
effect, Nat. Commun. 12, 5377 (2021).
DZou D. Zou, T. Chen, W. He, J. Bao, C. H. Lee, H. Sun, and X. Zhang, Observation of hybrid higher-order skin-topological
effect in non-Hermitian topolectrical circuits, Nat. Commun. 12, 7201 (2021).
CShang C. Shang, S. Liu, R. Shao, P. Han, X. Zang, X. Zhang, K. N. Salama, W. Gao, C. H. Lee, R. Thomale, A. Manchon, S. Zhang, T. J. Cui, and U. Schwingenschlögl, Experimental Identification of the Second-Order
Non-Hermitian Skin Effect with Physics-Graph-Informed
Machine Learning, Adv. Sci. 9, 2202922 (2022).
Flebus_review H. M. Hurst and B. Flebus, Non-Hermitian physics in magnetic systems, J. Appl. Phys. 132, 220902 (2022).
Lenk B. Lenk, H. Ulrichs, F. Garbs, and M. Münzenberg, The building blocks of magnonics, Phys. Rep. 507, 107 (2011).
Chumak A. V. Chumak, V.I. Vasyuchka, A.A. Serga, and B. Hillebrands, Magnon spintronics, Nat. Phys. 11, 453 (2015).
Grundler D. Grundler, Nanomagnonics around the corner, Nat. Nanotechnol. 11, 407 (2016).
Demidov V.E. Demidov, S. Urazhdin, G. de Loubens, O. Klein, V. Cros, A. Anane, and S.O. Demokritov, Magnetization oscillations and waves driven by pure spin currents, Phys. Rep. 673, 1 (2017).
Brataas A. Brataas, B. van Wees, O. Klein, G. de Loubens, and M. Viret, Spin Insulatronics, Phys. Rep. 885, 1 (2020).
Barman Barman et al., The 2021 Magnonics Roadmap, J. Phys. Condens. Matter 33, 413001 (2021).
HYWang H.-Y. Wang, F. Song, Z. Wang, Amoeba formulation of the non-Hermitian skin effect in higher dimensions, arXiv:2212.11743.
Haiping H. Hu, Non-Hermitian band theory in all dimensions: uniform spectra and skin effect, arXiv:2306.12022.
Kittel C. Kittel, On the Theory of Ferromagnetic Resonance Absorption, Phys. Rev. 73, 155 (1948).
CPSW T. Yu, Y. M. Blanter, and G. E. W. Bauer, Chiral Pumping of Spin Waves, Phys. Rev. Lett. 123, 247202 (2019).
chiral T. Yu, Z. C. Luo, and G. E. W. Bauer, Chirality as Generalized Spin-Orbit Interaction in Spintronics, Phys. Rep. 1009, 1 (2023).
supplement Supplemental Material [...] for the calculation of the magnetic configuration, chiral coupling between magnons, effective magnon Hamiltonian of the nanomagnet subsystem, and precise topological characterization for the one-dimensional case.
JChen J. Chen, T. Yu, C. Liu, T. Liu, M. Madami, K. Shen, J. Zhang, S. Tu, M. S. Alam, K. Xia, M. Wu, G. Gubbiotti, Y. M. Blanter, G. E. W. Bauer, and H. Yu, Excitation of unidirectional exchange spin waves by a nanoscale magnetic grating, Phys. Rev. B 100, 104427 (2019).
dipolar H. Wang, J. Chen, T. Yu, C. Liu, C. Guo, S. Liu, K. Shen, H. Jia, T. Liu, J. Zhang, M. A. Cabero, Q. Song, S. Tu, M. Wu, X. Han, K. Xia, D. Yu, G. E. W. Bauer, and H. Yu, Nonreciprocal coherent coupling of nanomagnets by exchange
spin waves, Nano Res. 14, 2133 (2021).
Moiseyev N. Moiseyev, Non-Hermitian Quantum Mechanics (Cambridge University Press, Cambridge, England, 2011).
nonHermrev1 V. Meden, L. Grunwald, and D. M. Kennes, PT-symmetric, non-Hermitian quantum many-body physics—a methodological perspective, arXiv:2303.05956.
HWang H. Wang, J. Chen, T. Liu, J. Zhang, K. Baumgaertl, C. Guo, Y. Li, C. Liu, P. Che, S. Tu, S. Liu, P. Gao, X. Han, D. Yu, M. Wu, D. Grundler, and H. Yu, Chiral Spin-Wave Velocities Induced by All-Garnet Interfacial Dzyaloshinskii-Moriya Interaction in Ultrathin Yttrium Iron Garnet Films, Phys. Rev. Lett. 124, 027203 (2020).
XYWei X.-Y. Wei, O. A. Santos, C. H. S. Lusero, G. E. W. Bauer, J. B. Youssef, and B. J. v. Wees, Giant magnon spin conductivity in ultrathin
yttrium iron garnet films, Nat. Mater. 21, 1352 (2022).
CoFeP_Ms M. Küß, M. Heigl, L. Flacke, A. Hörner, M. Weiler, M. Albrecht, and A. Wixforth, Nonreciprocal Dzyaloshinskii-Moriya Magnetoacoustic Waves, Phys. Rev. Lett. 125, 217203 (2020).
NHatano N. Hatano and D. R. Nelson, Localization Transitions in Non-Hermitian Quantum Mechanics, Phys. Rev. Lett. 77, 570 (1996).
Gardiner C. W. Gardiner and M. J. Collett, Input and output in damped quantum systems: Quantum stochastic differential equations and the master equation, Phys. Rev. A 31, 3761 (1985).
Clerk A. A. Clerk, M. H. Devoret, S. M. Girvin, F. Marquardt, and R. J. Schoelkopf, Introduction to quantum noise, measurement, and amplification, Rev. Mod. Phys. 82, 1155 (2010).
LingLu L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photon. 8, 821 (2014).
Lodahl P.Lodahl, S. Mahmoodian, S. Stobbe, A. Rauschenbeutel, P. Schneeweiss, J. Volz, H. Pichler, and P. Zoller, Chiral quantum optics, Nature 541, 473 (2017).
Ozawa T.Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and L. Carusotto, Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).
ZLan Z. Lan, M. L.N. Chen, F. Gao, S. Zhang, and W. E.I. Sha, A brief review of topological photonics in one, two, and three dimensions, Rev. Phys. 9, 100076 (2022).
Rodriguez F.J. Rodríguez-Fortuño, G. Marino, P. Ginzburg, D. O'Connor, A. Martínez, G. A. Wurtz, and A. V. Zayats, Near-field interference for the unidirectional excitation of electromagnetic guided modes, Science 340, 328 (2013).
Petersen J. Petersen, J. Volz, and A. Rauschenbeutel, Chiral nanophotonic waveguide interface based on spin-orbit interaction of light, Science 346, 67 (2014).
|
http://arxiv.org/abs/2307.02065v1
|
20230705070858
|
Line Graphics Digitization: A Step Towards Full Automation
|
[
"Omar Moured",
"Jiaming Zhang",
"Alina Roitberg",
"Thorsten Schwarz",
"Rainer Stiefelhagen"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] |
Omar Moured et al.
^1CV:HCI lab, Karlsruhe Institute of Technology, Germany.
^2ACCESS@KIT, Karlsruhe Institute of Technology, Germany.
{firstname.lastname}@kit.edu
<https://github.com/moured/Document-Graphics-Digitization.git>
Line Graphics Digitization: A Step Towards Full Automation
Omar Moured1,2 Jiaming Zhang1Alina Roitberg1Thorsten Schwarz 2Rainer Stiefelhagen1,2
========================================================================================
The digitization of documents allows for wider accessibility and reproducibility.
While automatic digitization of document layout and text content has been a long-standing focus of research, this problem in regard to graphical elements, such as statistical plots, has been under-explored.
In this paper, we introduce the task of fine-grained visual understanding of mathematical graphics and present the Line Graphics (LG) dataset, which includes pixel-wise annotations of 5 coarse and 10 fine-grained categories. Our dataset covers 520 images of mathematical graphics collected from 450 documents from different disciplines. Our proposed dataset can support two different computer vision tasks, i.e., semantic segmentation and object detection. To benchmark our LG dataset, we explore 7 state-of-the-art models.
To foster further research on the digitization of statistical graphs, we will make the dataset, code and models publicly available to the community.
§ INTRODUCTION
With the rapid growth of information available online[<https://www.statista.com/statistics/871513/worldwide-data-created/>], access to knowledge has never been easier.
However, as the volume of information continues to grow, there is a need for more efficient ways to extract useful information from documents such as papers and presentation slides.
This is particularly important for individuals with special needs, such as visually impaired individuals <cit.>, for whom traditional methods of accessing information may not be feasible.
During courses, graphs are a vital supplement to lecturers' speech as they effectively summarize complex data or visualize mathematical functions.
However, one downside of this medium is the difficulty of automatic information extraction, as graphs contain very fine-grained elements, such as fine lines, small numbers or axes descriptions, while the traditional document analysis frameworks focus on coarse structures within complete pages <cit.> or slides <cit.>.
The process of separating distinct regions of a plot and assigning them a semantic meaning at a pixel-level, known as graph segmentation, is an important prerequisite step for graph understanding. One application of using pixel-level data to fully automate the process is to generate an imposed document or 2D refreshable tactile display that can be easily interpreted through touch for people with blindness or visual impairment. Hence, end-to-end full automation of plot digitization could be achieved.
Presumably, due to the lack of annotated datasets for fine-grained analysis of plots, the utilization of modern deep semantic segmentation architectures has been rather overlooked in the context of mathematical graphs.
In this paper, we introduce the task of fine-grained visual understanding of mathematical graphics and present the Line Graphics (LG) dataset, which includes pixel-wise annotations of 10 different categories.
Our dataset covers 520 images of mathematical graphics collected from 450 documents from different disciplines, such as physics, economics, and engineering.
Figure <ref> provides several examples of statistical plots collected in our dataset.
By providing pixel-wise and bounding box annotations, we enable our dataset to support two different computer vision tasks: instance, semantic segmentation and object detection.
To benchmark our LG dataset, we explore 7 state-of-the-art models, including efficiency- and accuracy-driven frameworks (e.g., MobileNetV3 <cit.> and SegNeXt <cit.>), with SegNeXt yielding the best results with 67.56% mIoU.
Our results show that while we have achieved high overall accuracy in our models, the accuracy varies depending on the type of object.
Specifically, we found that spine-related categories and plot title were the hardest to recognize accurately.
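For reference, the per-class evaluation metric quoted above can be computed with a few lines of code; the sketch below is our own illustration and not part of the released evaluation scripts.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union between integer label maps pred and gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```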
To foster further research on the digitization of statistical graphs, we will make the dataset, code, and models publicly available to the community.
The key findings and contributions of this paper can be summarized as follows:
* We introduce the task of fine-grained visual understanding of mathematical graphics, aimed at reducing manual user input when digitalizing documents.
* We collect and publicly release the Line Graphics (LG) dataset as a benchmark for semantic segmentation and object detection in line graphics. The dataset includes plots from papers and slides from various fields and is annotated with 10 fine-grained classes at both pixel and bounding box levels.
* We perform extensive evaluations on 7 state-of-the-art semantic segmentation models, analyzing the impact of factors such as image resolution and category types on the performance. Our findings demonstrate the feasibility of the proposed task, with the top model achieving a mean Intersection over Union
of 67.56%. However, further advancement is needed in certain categories, such as plot title or spines, as well as for low-resolution data.
§ RELATED WORK
§.§ Document Graphics Analysis
Visual document analysis is a well-studied research area, mostly focusing on text <cit.> and layout analysis <cit.> of complete pages originating from scientific papers <cit.>, presentation slides <cit.>, magazines <cit.>, historical handwritten documents <cit.> or receipts <cit.>.
In comparison, the research of chart analysis is more limited, with an
overview of the existing approaches provided by <cit.>.
In particular, several learning-based methods have been used for (1) localizing and extracting charts from pages <cit.>, or (2) harvesting text and tabular data from charts <cit.>.
Seweryn et al. <cit.> propose a framework covering chart classification, detection of essential elements, and generation of textual descriptions of four chart types (lines, dot lines, vertical bar plots and horizontal bar plots).
<cit.> focuses on exploring the semantic content of alt text in accessibility publications, revealing that the quality of author-written alt text is mixed. The authors also provide a dataset of such alt text to aid the development of tools for better authoring and give recommendations for publishers.
<cit.> developed a program for fully automatic conversions of line plots (png files) into numerical data (CSV files) by using several Deep-NNs.
<cit.> introduce a fully automated chart data extraction algorithm for circular-shaped and grid-like chart types.
Semantic segmentation of documents <cit.> has close ties to computer vision, where segmentation model performance has greatly improved with deep learning advancements <cit.>. Despite this, research in fine-grained semantic segmentation of mathematical graphs lags due to a lack of annotated examples for training. To address this, our dataset seeks to close the gap and provide a public benchmark for data-driven graph segmentation methods.
§.§ Document Graphics Datasets
Table <ref> provides an overview of the five published datasets most related to our benchmark.
The PDFFigures 2.0 dataset is a random sample of 346 papers from over 200 venues with at least 9 citations collected from Semantic Scholar, covering bounding box annotations for captions, figures, and tables <cit.>.
The dataset of Poco et al. <cit.> comprises automatically generated and manually annotated charts from Quartz news and academic papers.
The data for each image includes the bounding boxes and transcribed content of all text elements.
The authors further investigate automatic recovery of visual encodings from chart images using text elements and OCR. They present an end-to-end pipeline that detects text elements, classifies their role and content, and infers encoding specification using a CNN for mark type classification.
Dai et al. <cit.> collect a benchmark that covers bar charts collected from the web as well as synthetic charts randomly generated through a script. The authors present Chart Decoder – a deep learning-based system which automatically extracts textual and numeric information from such charts.
DocFigure <cit.> is a scientific figure classification dataset consisting of 33,000 annotated figures from 28 categories found in scientific articles. The authors also designed a web-based annotation tool to efficiently categorize a large number of figures.
The ICDAR 2019 CHART-Infographics competition <cit.> aimed to explore automatic chart recognition, which was divided into multiple tasks, including image classification, text detection, text role classification, axis analysis and plot element detection. A large synthetic training set was provided and systems were evaluated on synthetic charts and real charts from scientific literature.
In comparison to these datasets, LG targets semantic segmentation of line graphs with >500 examples collected from documents of 18 different disciplines, with manual pixel-level annotations at two levels of granularity and 15 labels in total (5 coarse and 10 fine-grained categories). Our LG dataset aims to establish a public benchmark for data-driven graph segmentation methods and will be made publicly accessible upon publication.
§ LG DATASET
In this paper, we present the first segmentation dataset for analyzing line charts, keeping pace with the advancements in the AI community. Our dataset contains 520 mathematical graphics extracted manually from 450 documents, with 7,238 human-annotated instances among them. The goal is to facilitate automatic visual understanding of mathematical charts by offering a suitable and challenging benchmark. Next, we provide a comprehensive description of the data collection and annotation process, followed by a thorough analysis of the dataset's features and characteristics.
§.§ Data Collection and Annotation
§.§.§ Classes.
To ensure a comprehensive and robust labelling process, we set out to categorize line chart pixels into 5 coarse and 10 fine-grained classes. The primary focus was on creating fine-grained categories that offer a wide range of variations and challenges for further analysis. This was achieved through a thorough review of charts by three annotators with research experience, who identified the most frequent and critical object types encountered in such charts. Based on this review, as well as an inspection of related work, we arrived at 10 relevant categories. Some of these can be further categorized into three coarse categories, namely, Title class (e.g. plot title), Spine class (e.g. "spine" with no label data), and the Label class (e.g. x-axis labels). As detailed in Table <ref>, in this work, we conduct experiments with the 10 classes, which are p-title, x-title, y-title, x-spine, y-spine, spine, x-label, y-label, legend and line.
§.§.§ Collection.
In addition to ensuring that the source documents in the LG dataset are free from intellectual property constraints, we have imposed certain requirements for the documents to adhere to. First, all collected documents should contain at least one complex line chart, regardless of the document type (scanned, digital, slide, etc.) or field. Second, to represent different time periods, both old and new charts were collected. Third, the similarity between cropped images was kept as low as possible to ensure that each image presents a unique and challenging case for analysis. To achieve broad coverage of all fields, documents in the LG dataset were collected from 5 different disciplines and their top published subcategories, as shown in Figure <ref>. The collection process involved a manual search using scientific keywords and careful inspection of each document downloaded from sources such as arXiv and Google Scholar. This approach helped ensure a consistent and uniform distribution of documents across all categories.
§.§.§ Annotation.
Fine-grained pixel-level annotations were provided for the 10 chart classes as depicted in Figure <ref>. This level of detail was necessary due to the presence of fine structures in the charts, such as lines. Using bounding boxes alone would not be sufficient, as it would result in background pixels being incorrectly annotated as foreground and in difficulty distinguishing between different lines in the plot. Bounding boxes were nevertheless provided in addition to the pixel-level masks, as we believe they may be useful for certain classes such as text content; the exception is lines, for which a plotting-area bounding box category was labelled instead. The annotation process was initiated with the provision of 20 pages of guidelines and 100 sample images to each of the three annotators, resulting in a mean pairwise label agreement of 80%. Further, annotators were given batches of images to annotate, and each annotation was reviewed by the other two annotators. To facilitate instance-level segmentation, we provide annotations for each instance separately in COCO JSON format; for example, each line has a separate ID in the line mask.
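For concreteness, a minimal sketch of what one exported per-instance record might look like is given below; the field names follow the standard COCO layout, while the specific values and the exact set of keys kept in the LG export are illustrative assumptions rather than an excerpt from the dataset.

import json

# Hypothetical per-instance record in standard COCO layout (values are made up).
annotation = {
    "id": 421,                    # unique instance id, e.g. one line in the line mask
    "image_id": 37,
    "category_id": 10,            # fine-grained class index, e.g. "line"
    "segmentation": [[12.0, 310.5, 15.0, 308.0, 19.5, 305.0]],  # polygon (x1, y1, x2, y2, ...)
    "bbox": [12.0, 305.0, 7.5, 5.5],   # [x, y, width, height]
    "area": 41.2,
    "iscrowd": 0,
}
print(json.dumps(annotation, indent=2))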
§.§ Dataset Properties
Fully automating the mathematical graphics digitization process involves retrieving the metadata and converting it into a machine-readable format, such as a spreadsheet. This requires a thorough understanding of fine-grained elements, such as axes to project the lines and obtain pixel (x,y) entries, axes labels to calibrate retrieved line values from the pixels domain, and legend to match and describe the lines respectively. In our review of existing datasets, we found a mix of both private and public datasets focused on graphics digitization. However, despite their similar goal, these datasets often lack diversity in terms of plot variations, richness, and classes.
§.§.§ Split
The LG dataset consists of three subsets - training, validation, and test - each of which is split with a reasonable proportion of the total instances. As depicted in Table <ref>, some classes exhibit limited numbers, however, this accurately reflects their low-frequency occurrence, such as plot titles that are typically found in figure captions. Despite this, our experiments have demonstrated that the richness of the data was crucial in overcoming this challenge.
§.§.§ Variations
Table <ref> below demonstrates the diversity and inclusiveness of our dataset, as it includes a wide range of instance counts, styles, and locations, without any of the aforementioned limitations. Our dataset includes a comprehensive range of variations for all classes, as summarized in Table <ref>. We have covered a wide range of plot types, including those that feature multiple chart types like bar, scatter, and line charts as in Figure <ref> (a), (c) and (d), as well as plots with repeated classes like multiple y or x-axes and ticks. The text content in our dataset is annotated with variations in integer, decimal, and DateTime formats, as well as tilt. Furthermore, we have taken into account different markers, patterns, and sizes for the line and spine classes, and added the class "other" to represent annotated plot-area explanatory text, focus points, and arrows. The background variations in our dataset include colour (single or multiple), gradient, and RGB images.
§.§.§ Spatial Distribution Visualization
We have additionally analyzed the statistical localization information of all ground-truth instances. As shown in Figure <ref>, the frequency of occurrence is visualized in heatmaps. We can see that the Title class has a strong prior position, as titles are typically standalone text at the edges of the chart. Spines on the other side reveal that the majority of charts are box format with two axes (left and bottom edge). Cartesian-type with intersecting x and y axes are observed less frequently. Our heatmap evaluation shows that spines have an average width of 3 pixels, with a minimum and maximum of 1 and 7 pixels, respectively, making them one of the challenging classes to segment. Interestingly, as we see in the legend heat map, they are predominantly positioned at the top and on either side of the plotting area, but they can also appear in other locations. According to statistics, 44% of the legends are located at the top.
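Such frequency heatmaps can be reproduced by accumulating, per class, the binary instance masks of all ground-truth annotations on a common canvas; the short sketch below illustrates this, where the canvas size and the normalization are assumptions made for exposition, not the exact script used to produce the figure.

import numpy as np
from PIL import Image

def class_heatmap(instance_masks, canvas=(256, 256)):
    # instance_masks: list of binary numpy arrays, one per ground-truth instance of a class
    heat = np.zeros(canvas, dtype=np.float64)
    for m in instance_masks:
        resized = Image.fromarray((m > 0).astype(np.uint8) * 255).resize(canvas[::-1])
        heat += np.asarray(resized, dtype=np.float64) / 255.0
    return heat / max(len(instance_masks), 1)   # per-pixel frequency of occurrence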
§ EXPERIMENTS
§.§ Implementation Details
We perform experiments utilizing both Jittor and Pytorch. Our implementation is based on the MMsegmentation[<https://github.com/open-mmlab/mmsegmentation>] library and the models were trained on an A40 GPU with an input resolution of (2048, 1024). Our evaluation metric is Mean Intersection over Union (mIoU). During training, we applied common data augmentation techniques such as random flipping, scaling (ranging from 0.5 to 2), and cropping. The batch size was set to 8 with an initial learning rate of 6e-5, using a poly-learning rate decay policy. The models were trained for 50K iterations. For testing, we employed a single-scale
flip strategy to ensure fairness in comparison.
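For reference, the evaluation metric can be computed from a confusion matrix as in the minimal sketch below; this is a generic re-implementation for illustration, not the MMsegmentation code path used in our experiments.

import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    # pred, gt: integer label maps of identical shape
    mask = gt != ignore_index
    pred = pred[mask].astype(np.int64)
    gt = gt[mask].astype(np.int64)
    conf = np.bincount(num_classes * gt + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf).astype(np.float64)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    return iou, iou[union > 0].mean()   # per-class IoU and mIoU over present classes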
To understand the choice of models, we further analyze the properties of the selected models in conjunction with our proposed line graphic segmentation task.
§.§ Baselines
We consider 7 state-of-the-art semantic segmentation models for this task:
MobileNetV3 <cit.> is designed for image segmentation in both high and low-resource environments. It incorporates the depthwise separable convolution from MobileNetV1 and the Inverted Residual with Linear Disability from MobileNetV2 to balance accuracy and computational cost. Additionally, the model introduces the V3 lightweight attention mechanism, which enhances its ability to selectively focus on important features. These improvements make MobileNetV3 a good choice for resource-constrained applications of line graphic segmentation.
HRNet <cit.> leverages multi-scale feature representations to effectively handle high-resolution image understanding tasks such as human pose estimation and semantic segmentation. Throughout the network, HRNet utilizes repeated information exchange across multiple scales and a multi-scale feature output that is fed into a task-specific head. This innovative architecture enables easy feature reuse and efficient computation, making HRNet a top choice for high-resolution graphic segmentation task.
DeepLabv3+ <cit.> is a CNN semantic segmentation model that utilizes a decoder module to obtain sharper object boundaries and a more fine-grained segmentation, which is crucial for the proposed line graphic segmentation. In addition, an ASPP module captures multi-scale context information, while a lightweight Xception architecture provides efficient computation.
PSPNet <cit.> proposed the pyramid pooling module, which is able to extract an effective global contextual prior. Extracting pyramidal features since the model can perceive more context information of the input line graphic.
Swin <cit.> introduced the Transformer model with a shifted window operation and a hierarchical design. The shifted window operation includes non-overlapping local windows and overlapping cross-windows. It provides the locality of convolution to the graphic segmentation task, and on the other hand, it saves computation as compared to the original transformer models.
SegFormer <cit.> leverages the Transformer architecture and self-attention mechanisms. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decoder head, making it both effective and computationally efficient. The design enables SegFormer to capture long-range dependencies within an image, i.e., a line graphic in this work.
SegNeXt <cit.> proposed a new design of convolutional attention by rethinking the self-attention in Transformer models. In this work, the resource-costly self-attention module is replaced by using depth-wise convolution with large sizes. As a result, the multi-scale convolutional attention with a large kernel can encode context information more effectively and efficiently, which is crucial for the line graphic segmentation task.
§ EVALUATION
§.§ Quantitative Results
In Table <ref>, the efficiency-oriented CNN model MobileNetV3 with only 1.14M parameters obtains a 56.22% mIoU score on the proposed LG dataset. The high-resolution model HRNet reaches 57.60% mIoU and the DeepLabv3+ model 61.64%, but both have >60M parameters. We found that PSPNet, with the pyramid pooling module in its architectural design, achieves a better result of 62.04%. In Table <ref>, the recent Transformer-based models achieve relatively better results than the CNN-based models. For example, the SegFormer model with a pyramid architecture and 81.97M parameters obtains 65.59% mIoU, a +3.55% gain compared to PSPNet. The Swin Transformer with its hierarchical design and shifted windows reaches 66.61% mIoU, but it has the highest number of parameters. However, the state-of-the-art CNN-based SegNeXt utilizes multi-scale convolutional attention to evoke spatial attention, leading to the highest mIoU score of 67.56% on our LG dataset. Furthermore, SegNeXt achieves the top score on 4 of the 5 coarse classes, namely Title, Spine, Label and Legend. Besides, it obtains the top score on 6 of the 10 fine classes, namely xtitle, ytitle, xspine, yspine, xlabel, and legend. The results show that a stronger architecture for the semantic segmentation task can achieve better results on the proposed LG benchmark, yielding more reliable and accessible mathematical graphics.
§.§ Ablation Study
Apart from the qualitative analysis of state-of-the-art models, we further perform an ablation study on the aforementioned CNN- and Transformer-based models. As shown in Table <ref>, the ablation study is two-fold. First, to understand the impact of model scale on segmentation performance, different model scales are ablated; for example, the tiny (T) and small (S) versions of the Swin Transformer are evaluated on our LG dataset. Second, to analyze the effect of the image resolution, two image sizes (i.e., 512×512 and 2048×1024) are involved in the ablation study. According to the results shown in Table <ref>, we gain the insight that a higher resolution of the input images yields larger gains than using models with higher complexity. For example, SegFormer-B0 at a resolution of 2048×1024 outperforms SegFormer-B5 at 512×512 with a 21.82% gain in mIoU. Based on the ablation study, it is therefore recommended to use a larger input image size rather than a model with greater complexity. Another benefit of this setting is that it maintains the high efficiency of the trained model, which is more practical and promising for graphic segmentation applications.
§.§ Qualitative Results
To further understand the line graph semantic segmentation task, we visualize some examples in Fig. <ref>. From left to right are the input RGB line graphs, the ground truth labels, and the segmentation results generated by SegNeXt-L model. Although the backgrounds of these input images have different colors and textures, the model can accurately segment them (in purple). We found that SegNeXt with >67% mIoU can output surprisingly good segmentation results, including precise masks for thin objects, such as xspine and yspine. Besides, in the bottom row, the intersecting lines can be segmented accurately. Apart from the positive results, the xspine and yspine in the bottom row cannot be recognized well, which means that there is still room for improvement in the LG benchmark. Nonetheless, the other classes, such as labels, titles and legend, can be segmented correctly.
§ CONCLUSION
In conclusion, this paper presents the first line plot dataset for multi-task deep learning, providing support for object detection, semantic segmentation, and instance-level segmentation. Our comprehensive evaluations of state-of-the-art segmentation models demonstrate the potential for an end-to-end solution.
Moreover, this work has the potential to greatly improve accessibility for visually impaired and blind individuals. The ability to accurately detect and recognize mathematical graphics could lead to more accessible educational materials and support the digitization of mathematical information.
We are actively working to expand the scope of the dataset by including more types of mathematical graphics and incorporating instance relationships into the metadata. This will continue to drive advancements in this field and enable further research into the digitization of mathematical graphics.
Acknowledgments.
This work was supported in part by the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant No.861166, in part by the Ministry of Science, Research and the Arts of Baden-Württemberg (MWK) through the Cooperative Graduate School Accessibility through AI-based Assistive Technology (KATE) under Grant BW6-03, and in part by the Federal Ministry of Education and Research (BMBF) through a fellowship within the IFI programme of the German Academic Exchange Service (DAAD). This work was partially performed on the HoreKa supercomputer funded by the MWK and by the Federal Ministry of Education and Research.
|
http://arxiv.org/abs/2307.00525v1
|
20230702091713
|
Some preconditioning techniques for a class of double saddle point problems
|
[
"Fariba Balani Bakrani",
"Luca Bergamaschi",
"Angeles Martinez",
"Masoud Hajarian"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"65F10, 65F50"
] |
Some preconditioning techniques for a class of double saddle point problems
Fariba Bakrani Balani[Department of Applied Mathematics, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran. : [email protected]], Luca Bergamaschi[Department of Civil Environmental and Architectural Engineering, University of Padua, Italy.
: [email protected]], Ángeles Martínez[Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy.
: [email protected]], and
Masoud Hajarian[Department of Applied Mathematics, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran. : [email protected] ,
]
In this paper, we describe and analyze the spectral properties of a number of exact block preconditioners for a class of double saddle point problems. Among all these, we consider an inexact version of a block triangular preconditioner providing extremely
fast convergence of the FGMRES method. We develop a spectral analysis of the preconditioned matrix
showing
that the complex eigenvalues lie in a circle of center (1,0)
and radius 1, while the real eigenvalues are described in terms of the roots of a third order polynomial with real coefficients.
Numerical examples are reported to illustrate the efficiency of inexact versions of the proposed preconditioners, and to verify
the theoretical bounds.
AMS classification: 65F10, 65F50, 56F08.
Keywords: Double saddle point problems. Preconditioning. Krylov subspace methods.
§ INTRODUCTION
This paper is concerned with a number of block preconditioners for the numerical solution of large and sparse linear system of equations of double saddle-point type of the form
𝒜w≡[ A B^T 0; B 0 C^T; 0 C 0 ][ x; y; z ]=[ f; g; h ]≡ b,
where A∈ℝ^n × n is a symmetric positive definite matrix, B ∈ℝ^m × n and C ∈ℝ^l × m have full row rank, f ∈ℝ^n, g ∈ℝ^m and h ∈ℝ^l are given vectors. Such linear systems arise in a number of scientific applications including constrained least squares problems <cit.>, constrained quadratic programming <cit.>, magma-mantle dynamics <cit.>, to mention a few; see, e.g. <cit.>. Similar block structures
arise e.g. in liquid crystal director modeling or in the coupled Stokes-Darcy problem, and the preconditioning of such linear systems has been considered in <cit.>.
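For readers who wish to experiment with such systems, the block matrix 𝒜 can be assembled directly from the blocks A, B and C; the short sketch below (using SciPy, purely for illustration and not tied to any of the cited applications) shows one way to do it.

import scipy.sparse as sp

def assemble_double_saddle(A, B, C):
    # A: n x n SPD, B: m x n with full row rank, C: l x m with full row rank
    return sp.bmat([[A,    B.T,  None],
                    [B,    None, C.T ],
                    [None, C,    None]], format="csr")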
Obviously, the matrix of system (<ref>) is symmetric and can be considered as a 2 × 2 block matrix <cit.>.
Due to the fact that these saddle point matrices are typically large and sparse, their iterative solution is recommended e.g. by Krylov subspace iterative methods <cit.>. In order to improve the efficiency of iterative methods, some preconditioning techniques are employed.
To solve iteratively the linear system (<ref>), a number of preconditioning methods have been investigated and studied in the literature. In <cit.>, Huang developed the block diagonal preconditioner 𝒫_D and its inexact version 𝒫_D which are of the forms
𝒫_D=[ A 0 0; 0 S 0; 0 0 X ], 𝒫_D=[ A 0 0; 0 S 0; 0 0 X ],
where S = BA^-1B^T, X = C S^-1 C^T and A, S and X
are symmetric positive definite approximations of A, S, and X, respectively. Exact and inexact versions of the block diagonal preconditioner 𝒫_D have been analyzed in <cit.>.
Cao <cit.> considered the equivalent linear system
𝒜w≡[ A B^T 0; -B 0 -C^T; 0 C 0 ][ x; y; z ]=[ f; -g; h ]≡ b,
and proposed the shift-splitting iteration of the form
1/2(α I+𝒜)w^(k+1)=1/2(α I-𝒜)w^(k)+b,
which leads to the preconditioner
𝒫_SS=1/2[ α I+A B^T 0; -B α I -C^T; 0 C α I ],
where α is a positive constant and I is the identity matrix of appropriate size.
In addition, a relaxed version of the shift-splitting preconditioner has been considered by dropping the shift parameter in
the (1,1) block of 𝒫_SS. In <cit.> two block preconditioners are proposed, and the spectral distributions of their inexact versions are described.
In <cit.>, three exact block preconditioners for solving (<ref>) have been introduced and analyzed which are defined as
𝒫_1=[ A 0 0; B -S C^T; 0 0 -X ], 𝒫_2=[ A 0 0; B -S C^T; 0 0 X ], 𝒫_3=[ A B^T 0; B -S 0; 0 0 - X ].
Moreover, it is shown that the preconditioned matrices corresponding to the above preconditioners only have at most three distinct eigenvalues.
More recently, Wang and Li <cit.> have proposed an exact and inexact parameterized block symmetric positive definite preconditioner for solving the double saddle point problem (<ref>).
Inspired by the type 𝒫 preconditioners in (<ref>), we describe several other block preconditioning approaches to be employed within Krylov subspace methods for the solution of the linear system of equations (<ref>),
𝒬_1=[ A B^T 0; 0 -S 0; 0 0 X ], 𝒬_2=[ A B^T 0; 0 S C^T; 0 0 - X ], 𝒬_3=[ A B^T 0; 0 -S C^T; 0 0 ± X ],
and the block preconditioners of the forms
𝒬_4=[ A B^T 0; B 0 0; 0 C ± X ], 𝒬_5=[ A B^T 0; B 0 0; 0 0 X ].
We analyze the spectral
distribution of the corresponding preconditioned matrices which in all cases have at most three distinct eigenvalues thus
guaranteeing the finite termination of e.g. the GMRES iterative method. In realistic problems the proposed
preconditioners can not be used exactly since they require (for their application)
* Solution of a system with A,
* Explicit computation of S = B A^-1 B^T,
* Solution of a system with S,
* Explicit computation of X = C S^-1 C^T,
* Solution of a system with X.
In particular, steps 2. and 4. require inversion of the (possibly sparse) matrices A and S. Practical application of the described preconditioners therefore relies on approximating the matrices A, S and X with Â, Ŝ and X̂, respectively.
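In exact arithmetic, applying a preconditioner such as 𝒬_3 (with the plus sign in the (3,3) block) reduces to the block back-substitution implied by the steps listed above; a minimal dense sketch is given below, where the function name and the use of dense solves are illustrative assumptions (in practice the approximations discussed next replace the exact solves).

import numpy as np

def apply_Q3_plus(A, B, C, r1, r2, r3):
    # Exact application of Q3 (with + sign): w = Q3^{-1} r by block back-substitution.
    S = B @ np.linalg.solve(A, B.T)         # S = B A^{-1} B^T
    X = C @ np.linalg.solve(S, C.T)         # X = C S^{-1} C^T
    z = np.linalg.solve(X, r3)              # X z = r3
    y = np.linalg.solve(S, C.T @ z - r2)    # -S y + C^T z = r2
    x = np.linalg.solve(A, r1 - B.T @ y)    # A x + B^T y = r1
    return x, y, z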
To measure the effects of such approximation on the spectral properties of the preconditioned matrix, we consider the block
triangular preconditioner 𝒬_3 (with the plus sign in the (3,3) block), and give bounds on the complex and real eigenvalues
in terms of the (real and positive) eigenvalues of Â^-1 A, Ŝ^-1S̃, X̂^-1X̃.
The outline of this work is described as follows. In section <ref>, we derive and analyze some exact
block preconditioners for solving double saddle point problem (<ref>).
In section <ref>, we test the inexact versions of the proposed preconditioners on a test case in combination
with the FGMRES iterative method. We then focus in section <ref> on the spectral analysis of the block preconditioner
which revealed the most efficient. Bound on truly complex as well as on real eigenvalues are developed for this
preconditioner and compared with the actual spectral distribution.
Some further results about the real eigenvalues of a simplified version of the triangular preconditioner are developed and
tested in section <ref>.
Finally, we state some conclusions in section <ref>.
§ BLOCK PRECONDITIONERS AND EIGENVALUE ANALYSIS
Let us consider the preconditioners in (<ref>) and (<ref>).
We observe that these preconditioners are nonsingular since A is symmetric positive definite and both B and C have full row rank. We know that the eigenvalues of 𝒜𝒬^-1 and
𝒬^-1𝒜 (for general preconditioner 𝒬) are equal, so that spectral results can be given in terms of any of the two matrices.
In the following the eigenvalues of the preconditioned matrices corresponding to the proposed preconditioners are determined. The notations σ(.) and ||.|| denote the set of all eigenvalues of a matrix and the Euclidean norm of a vector, respectively. We use
Re(λ) and Im(λ) to denote the real and imaginary parts of a complex eigenvalue λ.
The preconditioners 𝒬_3 and 𝒬_4 with positive (3,3)-block are denoted by and .
Suppose that A∈ℝ^n × n is symmetric positive definite and B∈ℝ^m × n and C ∈ℝ^l × m are matrices with full row rank. Then the preconditioner 𝒬_1 for 𝒜 satisfies
σ(𝒜𝒬_1^-1) ∈{1,1/2(1±√(3)i)}.
First we must compute 𝒬_1^-1. An easy calculation yields that
𝒬_1^-1=[ A^-1 A^-1B^TS^-1 0; 0 -S^-1 0; 0 0 X^-1 ],
It follows directly from (<ref>) that
𝒜𝒬_1^-1=[ I 0 0; BA^-1 I C^TX^-1; 0 -CS^-1 0 ].
We now determine the eigenvalues of the preconditioned matrix 𝒜𝒬_1^-1 by using the Laplace expansion. Therefore, the characteristic polynomial of 𝒜𝒬_1^-1 is given by
q(λ)=det(λ I- 𝒜𝒬_1^-1)=(λ-1)^n det[ (λ-1)I -C^TX^-1; CS^-1 λ I ].
Clearly, λ =1 is an eigenvalue of 𝒜𝒬_1^-1 with algebraic multiplicity at least n.
To determine the rest of eigenvalues, we seek λ≠ 1, x_2 and x_3 satisfying
(λ-1)x_2-C^TX^-1x_3 =0,
CS^-1x_2+λ x_3 =0.
Computing x_2=1/(λ-1) C^TX^-1x_3 from equation (<ref>) and
substituting this value into equation (<ref>), we get
(λ^2I-λ I+I)x_3=0
Note that the vector x_3 must be nonzero, otherwise if x_3 = 0, then x_2=0, and we saw that x_1=0 if λ≠ 1. Without loss of generality, we can assume
that x_3^*x_3=1. Multiplying equation (<ref>) on the left by x_3^*, we obtain
λ^2-λ+1=0.
The roots of (<ref>) are equal to
λ=1/2(1±√(3)i),
which completes the proof.
From the foregoing theorem it is evident that the preconditioned matrix 𝒜𝒬_1^-1 has eigenvalues clustered around three values 1, 1/2(1+ √(3)i), and 1/2(1- √(3)i), therefore one can expect rapid convergence for the preconditioned GMRES method.
It is easy to verify that the preconditioned matrix 𝒯=𝒜𝒬_1^-1 satisfies the following polynomial
(𝒯-ℐ)(𝒯^2-𝒯+ℐ)=0.
Since the above relation can be factorized into distinct linear factors (over ℝ), we conclude that 𝒯 is diagonalizable and has at most three distinct eigenvalues 1, 1/2(1±√(3)i).
Suppose that A ∈ℝ^n × n is symmetric positive definite and B ∈ℝ^m × n and C ∈ℝ^l × m are matrices with full row rank. Then the preconditioner 𝒬_2 for 𝒜 satisfies
σ(𝒜𝒬_2^-1) ∈{± 1,± i}.
Straightforward calculations reveal that
𝒬_2^-1=[ A^-1 -A^-1B^TS^-1 -A^-1B^TS^-1C^TX^-1; 0 S^-1 S^-1C^TX^-1; 0 0 -X^-1 ].
It follows from (<ref>) that
𝒜𝒬_2^-1=[ I 0 0; BA^-1 -I -2C^TX^-1; 0 CS^-1 I ].
To proceed we use the Laplace expansion to determine the eigenvalues of 𝒜𝒬_2^-1. Therefore, the characteristic polynomial of 𝒜𝒬_2^-1 is given by
q(λ)=det(λ I- 𝒜𝒬_2^-1)=(λ-1)^n det[ (λ+1)I 2C^TX^-1; -CS^-1 (λ-1) I ].
It is clear that λ=1 is an eigenvalue of 𝒜𝒬_2^-1 with algebraic multiplicity at least n.
To find the remaining eigenvalues, we seek λ≠ 1, x_2 and x_3 satisfying
(λ+1)x_2+2C^TX^-1x_3 =0,
-CS^-1x_2+(λ-1) x_3 =0.
Notice that x_2≠ 0, otherwise if x_2 = 0, then x_3=0 from equation (<ref>), and we saw that x_1=0 if λ≠ 1.
From equation (<ref>), we derive x_3=1/λ-1CS^-1x_2. If x_2∈ker(CS^-1), then λ=-1 is an eigenvalue of 𝒜𝒬_2^-1. Thus, it can be deduced that (0;x_2;0) is an eigenvector associated with λ = -1, where x_2 is an arbitrary vector. In the sequel we assume that λ≠ -1.
Substituting the value x_3 into equation (<ref>), we get
((λ^2-1)I+2C^TX^-1CS^-1)x_2=0.
We normalize x_2 such that x_2^*x_2=1, and multiply equation (<ref>) by x_2 on the left
to obtain
λ^2-1+2x_2^*C^TX^-1CS^-1x_2=0.
From the above relation, the eigenvalue λ can be expressed
λ=±√(1-2x_2^*C^TX^-1CS^-1x_2).
It is easy to verify that C^TX^-1CS^-1 is a projector onto ℛ(C^TX^-1)=ℛ(C^T), where ℛ denotes the range of a matrix. We rewrite the relation (<ref>) as
x_2=-2/(λ+1) C^TX^-1x_3,
and hence we observe x_2∈ℛ(C^TX^-1). Consequently, we have
x_2^*C^TX^-1CS^-1x_2=1.
It follows that 𝒜𝒬_2^-1 has eigenvalues λ =± i, and we have proved the theorem.
From equations (<ref>), (<ref>) and the proof of Theorem <ref>, it can be seen that λ=-1 may or may not be an eigenvalue of 𝒜𝒬_2^-1. We observed that λ=-1 is an eigenvalue if x_2 ∈ker(CS^-1). Conversely, suppose that λ=-1 is an eigenvalue of 𝒜𝒬_2^-1. From Eqs. (<ref>) and (<ref>), we have x_3=0 and then CS^-1x_2=0 which means that x_2 ∈ker(CS^-1).
This condition is necessary and sufficient for λ=-1 to be an eigenvalue of 𝒜𝒬_2^-1 with associated eigenvector of the form (0;x_2;0).
It is easy to check that the preconditioned matrix ℱ=𝒜𝒬_2^-1
satisfies
(ℱ-ℐ)(ℱ+ℐ)(ℱ^2+ℐ)=0.
From the relation (<ref>), we can conclude that ℱ is diagonalizable and has at most four distinct eigenvalues ±1, ± i.
According to the following partitioning
(
[ * * 0; * *; 0 0 * ]),
it is clear that the preconditioners 𝒫_1, 𝒫_2, 𝒬_3 and
have the same structure and all are in the class of block triangular preconditioners. The eigenvalues of the preconditioned matrices 𝒜𝒬_3^-1 and 𝒜 are also provided here for completeness.
Suppose that A ∈ℝ^n × n is symmetric positive definite and B ∈ℝ^m × n and C ∈ℝ^l × m have full row rank. Then the eigenvalues of the preconditioned matrices 𝒜𝒬_3^-1 and 𝒜𝒬_4^-1 are 1 and -1.
After straightforward computations, we can obtain
𝒜𝒬_3^-1 =
[ A B^T 0; B 0 C^T; 0 C 0 ][ A^-1 A^-1B^TS^-1 A^-1B^TS^-1C^TX^-1; 0 -S^-1 -S^-1C^TX^-1; 0 0 -X^-1 ]
=[ I 0 0; BA^-1 I 0; 0 -CS^-1 -I ],
and
𝒬_4^-1𝒜 =
[ A^-1-A^-1B^TS^-1BA^-1 A^-1B^TS^-1 0; S^-1BA^-1 -S^-1 0; X^-1CS^-1BA^-1 -X^-1CS^-1 -X^-1 ][ A B^T 0; B 0 C^T; 0 C 0 ]
=[ I 0 A^-1B^TS^-1C^T; 0 I -S^-1C^T; 0 0 -I ],
which implies that preconditioned matrices 𝒜𝒬_3^-1 and 𝒬_4^-1𝒜 (or 𝒜𝒬_4^-1) have eigenvalues 1 and -1.
It is easy to check that the minimum polynomials of the preconditioned matrices 𝒜𝒬_3^-1 and 𝒬_4^-1𝒜 have degree 4.
One may therefore expect a favorable convergence rate for Krylov subspace methods.
It is obvious that the preconditioned matrices 𝒜 and 𝒜 satisfy
𝒜=[ I 0 0; BA^-1 I 0; 0 -CS^-1 I ], and 𝒜=[ I 0 A^-1B^TS^-1C^T; 0 I -S^-1C^T; 0 0 I ].
Hence, we conclude that the eigenvalues of 𝒜 and
𝒜 are all one. Moreover, the minimum polynomial of the preconditioned matrix 𝒜 has order 3, while
𝒜 has minimum polynomial of order 2. Therefore, the GMRES method will reach the exact solution in at most three and two steps, respectively.
Suppose that A∈ℝ^n × n is symmetric positive definite and B∈ℝ^m × n and C ∈ℝ^l × m are matrices with full row rank. Then the preconditioner 𝒬_5 for 𝒜 satisfies
σ(𝒜𝒬_5^-1) ∈{1,1/2(1±√(3)i)}.
By simple calculations, we can obtain
𝒬_5^-1=[ A^-1-A^-1B^TS^-1BA^-1 A^-1B^TS^-1 0; S^-1BA^-1 -S^-1 0; 0 0 X^-1 ].
We readily verify that
𝒜𝒬_5^-1=[ I 0 0; 0 I C^TX^-1; CS^-1BA^-1 -CS^-1 0 ].
The rest of the proof is similar to that of Theorem <ref>; we omit the details here.
It can also be shown that the preconditioned matrix 𝒥=𝒜𝒬_5^-1 satisfies
(𝒥-ℐ)(𝒥^2-𝒥+ℐ)=0,
and it follows that 𝒥 is diagonalizable and has at most three distinct eigenvalues 1, 1/2(1±√(3)i).
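The spectral statements above are easy to verify numerically on a small random instance; the following sketch (dense arithmetic, exact preconditioner 𝒬_1, illustrative only) prints the distinct eigenvalues of 𝒜𝒬_1^-1, which should collapse to 1 and (1±√3 i)/2 up to roundoff.

import numpy as np

rng = np.random.default_rng(0)
n, m, l = 20, 12, 8
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + n * np.eye(n)               # symmetric positive definite
B = rng.standard_normal((m, n))             # full row rank (almost surely)
C = rng.standard_normal((l, m))
S = B @ np.linalg.solve(A, B.T)
X = C @ np.linalg.solve(S, C.T)
Z = np.zeros
Amat = np.block([[A, B.T, Z((n, l))], [B, Z((m, m)), C.T], [Z((l, n)), C, Z((l, l))]])
Q1 = np.block([[A, B.T, Z((n, l))], [Z((m, n)), -S, Z((m, l))], [Z((l, n)), Z((l, m)), X]])
eigs = np.linalg.eigvals(Amat @ np.linalg.inv(Q1))
print(np.unique(np.round(eigs, 6)))         # expected: 1 and 0.5 ± 0.866i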
Finally, we mention that although the type 𝒫 and type 𝒬 preconditioners are theoretically powerful, they are not practical for real-world problems, since solving linear systems involving S and X can be prohibitive.
However, we may be able to devise effective inexact versions of these preconditioners for 𝒜 by approximating
the Schur complement matrices.
To apply the proposed type 𝒬 preconditioners within a Krylov subspace method,
we need to solve linear systems with the coefficient matrices A,
S=BA^-1B^T and X=CS^-1C^T.
Instead of performing exact solves with S and X, we approximate them by matrices
Ŝ and X̂, respectively, and the linear systems involving these matrices can then be solved iteratively.
§ EXPERIMENTAL COMPARISONS OF THE PRECONDITIONERS
In this section we present a numerical test, which will be solved by
Flexible GMRES (FGMRES) with no restart, using the previously described preconditioners. The comparisons will be carried on in terms of
number of FGMRES iterations (represented by ITS) and elapsed CPU time in seconds (represented by CPU).
We also provide the norm of the error vector, denoted as ERR=‖w^(k)-w^*‖_2/‖w^*‖_2.
The initial guess is set to be the zero vector and the iterations are stopped whenever the relative residual
‖b-𝒜w^(k)‖_2/‖b‖_2
drops below a prescribed tolerance, selected as explained below.
In all cases, the right-hand side vector b is computed after selecting the exact solution of (<ref>) as
* w= e ≡ (1,1,…,1)^T∈ℝ^n+m+l.
* w a random vector of the appropriate dimension.
The numerical experiments presented in this work have been carried out on a computer with an
Intel Core i7-1185G7 CPU @ 3.00GHz processor and 16 GB RAM using Matlab 2022a.
(<cit.>)
We consider the linear system of equations (<ref>) for which
A=diag[ 2W^TW+D_1,D_2,D_3 ]∈ℝ^n × n,
is a block diagonal matrix;
B=[E,-I_2p_1,I_2p_2]∈ℝ^m × n, and C = E^T∈ℝ^l × m,
are both full row rank matrices where p_1 = p^2, p_2 = p(p + 1); W = (w_ij) ∈ℝ^p_2× p_2 with
w_ij = e^-2((i/3)^2+(j/3)^2); D_1= I_p_2 is the identity matrix;
D_i= diag(d^(i)_j) ∈ℝ^2p_1× 2p_1, ( i=2,3)
are diagonal matrices with
d^(2)_j=
1, for 1≤ j ≤ p_1,
10^-5(j-p_1)^2, for p_1+1≤ j ≤ 2p_1,
d^(3)_j=10^-5(j+p_1)^2, for 1≤ j ≤ 2p_1;
and
E=[ E_1⊗ I_p; I_p⊗ E_1 ], with
E_1=[ 2 -1; 2 -1; ⋱ ⋱; 2 -1 ]∈ℝ^p × (p+1).
To summarize we have n = 5p^2 + p, m = 2p^2 and l = p^2 + p and the size of the double saddle point matrix is n+m+l = 8p^2 + 2p.
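For reproducibility, the matrix E of this example can be built directly from Kronecker products; a short SciPy sketch transcribing the formulas above (it is not the original code used for the experiments) is:

import numpy as np
import scipy.sparse as sp

def build_E(p):
    # E1 is p x (p+1): 2 on the diagonal, -1 on the superdiagonal
    E1 = sp.diags([2 * np.ones(p), -np.ones(p)], offsets=[0, 1], shape=(p, p + 1))
    Ip = sp.identity(p)
    return sp.vstack([sp.kron(E1, Ip), sp.kron(Ip, E1)]).tocsr()   # size 2p^2 x (p^2 + p)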
As the block approximations, we take Ŝ=tridiag(BÂ^-1B^T), the tridiagonal part of BÂ^-1B^T,
where Â=diag(A) and
X̂=C L_Ŝ^-T L_Ŝ^-1 C^T with L_Ŝ the exact bidiagonal factor of Ŝ.
In addition to the previously analyzed preconditioners we also present the results of an inexact variant of
the block preconditioner considered in <cit.> (here denoted as 𝒫_ASB),
where the Schur complement matrix is simply approximated with the identity matrix (S≡ I),
showing outstanding results in the solution of this Example.
However, these results (convergence in 2 iterations) can be obtained only by the choice of the right-hand-side (b = A e)
and of the initial (zero) vector. In this case the initial residual r_0 ≡ b satisfies
𝒜 r_0 = 𝒫_ASB r_0 which makes the Krylov subspace of degree 1 invariant for this particular
right-hand-side and therefore ensures convergence in at most two iterations. Clearly, changing the right-hand-side,
this property does no longer hold.
Figure <ref> plots the eigenvalue distribution of the preconditioned matrices with the approximation matrices S and X. We observe from this figure that the
preconditioned matrix with has more clustered eigenvalues than the other ones, which can considerably improve the convergence rate of the Krylov subspace iterative methods.
Preconditioners 𝒬_1 and 𝒬_5 display quite similar eigenvalue distributions
(the ideal versions have the same eigenvalues). We then remove 𝒬_1 from the numerical results
in view of its slightly higher application cost in comparison with 𝒬_5.
All other proposed preconditioners display also negative eigenvalues, predicting slow GMRES convergence.
Finally, the spectral distribution with the 𝒫_ASB preconditioner, does not seem to be favorable, as it spreads
over a wide (real) interval.
The linear system with Ŝ is solved exactly by solving two bidiagonal systems with L_Ŝ and L_Ŝ^T,
while the system with X̂ is solved, without explicitly forming the matrix X̂, by the PCG method accelerated
by the incomplete Cholesky factorization of C diag(Ŝ)^-1 C^T with a
drop tolerance τ = 10^-4. The work to be done before the beginning of the FGMRES process is described in
Algorithm <ref> while the application of the preconditioner at each FGMRES iteration
is sketched in Algorithm <ref>, for the
preconditioner .
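A compact way to organize the application of the inexact block triangular preconditioner in code is as a linear operator performing the block back-substitution with the approximate blocks; the sketch below is illustrative only (the callables S_hat_solve and X_hat_solve stand for the bidiagonal solves and the inner PCG described above, and the (1,1) block uses diag(A) as in this example). Note that SciPy ships a preconditioned GMRES but not FGMRES, which is why a flexible method is needed when the inner solves are inexact.

import numpy as np
import scipy.sparse.linalg as spla

def q3_hat_operator(A, B, C, S_hat_solve, X_hat_solve):
    # Inexact block triangular preconditioner applied by back-substitution
    # (cf. the application algorithm sketched above).
    n, m, l = A.shape[0], B.shape[0], C.shape[0]
    dA = A.diagonal()                          # here the (1,1) approximation is diag(A)
    def apply(r):
        r1, r2, r3 = r[:n], r[n:n + m], r[n + m:]
        z = X_hat_solve(r3)                    # approximate solve with X_hat (inner PCG)
        y = S_hat_solve(C.T @ z - r2)          # solve with S_hat (two bidiagonal solves)
        x = (r1 - B.T @ y) / dA                # solve with the diagonal (1,1) block
        return np.concatenate([x, y, z])
    return spla.LinearOperator((n + m + l, n + m + l), matvec=apply)

# usage sketch:  M = q3_hat_operator(A, B, C, S_hat_solve, X_hat_solve)
#                w, info = spla.gmres(Amat, b, M=M)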
The numerical results corresponding to the block preconditioned FGMRES for Example <ref> are given in Tables
<ref> (using the right hand side as b = 𝒜 e) and <ref> where an exact random solution is employed.
We run FGMRES with all preconditioners for problems with p = {16, 32, 64, 128, 256, 512, 1024} ending up with
a problem with more than 8 million unknowns. To obtain a relative error of (roughly) the same order of magnitude we adjusted
the tolerance as
= 10N^2, N = n+m+l.
From these tables, we see that the preconditioners 𝒬_1,
𝒬_5 and, particularly, outperform the
other ones in terms of iteration number and CPU time, being all the proposed preconditioners more convenient
than 𝒫_D
and obtaining FGMRES convergence to the solution of (<ref>) in a reasonable number
of iterations and CPU time. For the largest problem and random right-hand-side, only preconditioners
and 𝒬_5 could solve the given linear system within the memory of our laptop, due to the small
number of iterations they required.
Regarding the 𝒫_ASB preconditioner we observe that it is the most performing one when w ≡ e, yet
not providing convergence in two iterations since its exact version is employed, whereas it reveals not competitive
with for a random exact solution.
In the next section, we will discuss in detail the eigenvalue distribution of the preconditioner matrix using the inexact version of and we leave other inexact version of the
proposed preconditioners as a topic for further research. Regarding the complex
eigenvalues, we perform an analysis similar to the one in <cit.>, but generalized here for the 3× 3 block triangular preconditioner.
§ EIGENVALUE ANALYSIS OF THE INEXACT VARIANTS OF
We analyze in this section the eigenvalue distribution of the preconditioned matrix
𝒜𝒬̅^-1, where, in the sequel,
𝒬̅≡[ Â B^T 0; 0 -Ŝ C^T; 0 0 X̂ ],
with Â, Ŝ and X̂ proper SPD approximations (preconditioners) of A, S and X, respectively.
The relevant spectral properties of the preconditioned matrix 𝒜𝒬̅^-1 will be given in terms of the eigenvalues of
Â^-1 A, Ŝ^-1S̃ and X̂^-1X̃, where S̃ = B Â^-1 B^T and X̃ = C Ŝ^-1 C^T. To this aim, we define
γ_min^A ≡λ_min (Â^-1 A), γ_max^A ≡λ_max (Â^-1 A), γ_A ∈ [ γ_min^A, γ_max^A ],
γ_min^S≡λ_min (Ŝ^-1S̃), γ_max^S≡λ_max (Ŝ^-1S̃), γ_S∈ [ γ_min^S, γ_max^S ],
γ_min^X≡λ_min (X̂^-1X̃), γ_max^X≡λ_max (X̂^-1X̃), γ_X∈ [ γ_min^X, γ_max^X ].
We will finally make the assumption that 1∈ [γ_min^A, γ_max^A]. This assumption, very commonly satisfied in
practice, will simplify some of the bounds mostly regarding real eigenvalues.
Let
𝒟̅=[ Â 0 0; 0 Ŝ 0; 0 0 X̂ ].
Then finding the eigenvalues of 𝒜𝒬̅^-1 is equivalent to solving
𝒟̅^-1/2𝒜𝒟̅^-1/2𝐰=λ𝒟̅^-1/2𝒬̅𝒟̅^-1/2w,
or
[ Ã R^T 0; R 0 K^T; 0 K 0 ][ x; y; z ]=λ[ I R^T 0; 0 -I K^T; 0 0 I ][ x; y; z ],
where Ã=Â^-1/2AÂ^-1/2, R =Ŝ^-1/2BÂ^-1/2
and K = X̂^-1/2CŜ^-1/2.
Suppose that A ∈ℝ^n × n is symmetric positive definite and B ∈ℝ^m × n and C ∈ℝ^l × m are matrices with full row rank. Let Â, Ŝ and X̂ be symmetric positive definite approximations of A, S and X, respectively.
Assume that λ is an eigenvalue of the preconditioned matrix 𝒜𝒬̅^-1 and (x; y; z) is the corresponding eigenvector.
If Im(λ) ≠ 0, then λ satisfies
|λ-1| < √(1-γ^A_min), if Ky = 0,
|λ-1| ≤ √( 1 -γ^A_min x^2/y^2), if Ky ≠ 0.
Let λ be an eigenvalue of matrix 𝒜𝒬̅^-1 and (x; y; z) be the corresponding eigenvector such that x^2+y^2+z^2=1.
It follows from (<ref>) that
Ãx-λ x=(λ -1) R^Ty,
Rx-(λ -1)K^Tz=-λ y,
Ky=λ z.
Since Â, Ŝ and X̂ are SPD, and B and C are matrices with full row rank, Ã is SPD and K and R are matrices with full row rank.
If y=0, from (<ref>) we derive Ãx=λ x. Thus, it can be deduced that λ∈ [γ^A_min,γ^A_max] and the eigenvalue λ is real. The associated eigenvector in this case is of the form (x; 0; 0) where x ≠ 0.
Assume now that λ ≠ 1 and y ≠ 0. The rest of the proof is divided into two cases:
Case I. Ky=0. From (<ref>), we obtain z=0.
Multiplying (<ref>) by x^* on the left and the transposed conjugate of (<ref>) by y on the right, we get
x^*Ãx-λx^2 =(λ -1)x^* R^Ty,
x^*R^Ty=-λ̅y^2.
Inserting (<ref>) into equation (<ref>), we have
x^*Ãx-λ+(λ-λ̅)y^2+|λ|^2y^2 =0.
Let λ = a + ib, then taking the real and imaginary parts of (<ref>) apart, we obtain
x^*Ãx-a+(a^2+b^2)y^2=0,
b(2y^2-1)=0.
From (<ref>), we have b=0 or y^2=1/2. We assume that b≠ 0. From (<ref>) and after some simple calculations, we have
2x^*Ãx-λ -λ̅+|λ|^2=0.
Using identity |λ|^2-λ-λ̅=|λ-1|^2-1, we obtain |λ-1|^2=1-2x^*Ãx. If 1-γ^A_min≥ 0, we deduce that
|λ-1|^2 ≤ 1-γ^A_min,
which implies that
1-√(1-γ^A_min)≤ Re(λ) ≤ 1+√(1-γ^A_min).
If 1-γ^A_min < 0, therefore there exists no λ with nonzero imaginary part satisfying the equality in (<ref>).
Case II. Ky ≠ 0. Multiplying (<ref>) by x^* on the left, the transposed conjugate of (<ref>) by y on the right and (<ref>) by z^* on the left, we derive
x^*Ãx-λx^2=(λ -1)x^* R^Ty,
x^*R^Ty-(λ̅-1)z^*Ky=-λ̅y^2,
z^*Ky=λz^2.
Inserting (<ref>) and (<ref>) into equation (<ref>) and easy manipulations, we get
x^*Ãx-λ(x^2+z^2)+(λ^2+|λ|^2-λ |λ|^2) z^2+λ̅(λ-1)y^2=0.
Using identity x^2+z^2=1-y^2, the above expression becomes
x^*Ãx-λ+(λ-λ̅+|λ|^2)y^2+(λ^2-λ |λ|^2+|λ|^2)z^2=0,
and can be equivalently written as
x^* Ã x - λ + (λ - λ̅+ |λ|^2) y^2 -
λ(|λ|^2 - (λ + λ̅)) z^2 = 0.
In case of complex eigenvalues, we will show that the real quantity
ρ = |λ|^2 - (λ + λ̅) = |λ - 1|^2 -1 = |λ|^2 - 2a,
is always negative, showing that the complex eigenvalues lie in an open circle with center (1,0) and prescribed radius.
Let us write (<ref>), exploiting the real and imaginary part,
x^* Ã x -a + |λ|^2 y^2 - a ρz^2 =0,
b(-1 + 2 y^2 - ρz^2) =0.
If λ is complex, then b ≠ 0 and from (<ref>) we obtain
z^2 = 2y^2 -1/ρ,
and substituting it in (<ref>) we have
0 = x^* Ã x - a + |λ|^2 y^2 - a (2y^2 -1) = x^* Ã x + y^2(|λ|^2 - 2a) = x^* Ã x + y^2 ρ,
from which
ρ = -x^* Ã x/y^2.
We can rewrite (<ref>) as
ρ = - γ_A x^2/y^2≤ - γ^A_minx^2/y^2, γ_A=x^* Ã x/x^2.
which together with (<ref>) completes the proof of the theorem.
In the following, our aim is to characterize the real eigenvalues of the preconditioned matrix
not lying in [γ^A_min,γ^A_max]. To this end, we premise two technical lemmas which will be useful for our analysis.
(<cit.>)
Let λ∉ [γ^A_min,γ^A_max]. Then for arbitrary z ≠ 0, there exists a vector s ≠ 0 such that
z^T(Ã-λ I)^-1z/z^Tz=(s^TÃs/s^Ts-λ)^-1=(γ_A-λ)^-1,
where γ_A=s^TÃs/s^Ts.
Let p(x) be the polynomial defined as
p(x)=x^3-a_1x^2+a_2x-a_3, a_j >0, j=1, 2, 3,
and let a = min{a_1, a_3/a_2} and b = max{a_1, a_3/a_2}. Then p(x) < 0, ∀ x ∈ (0,a), and p(x) > 0, ∀ x > b.
The statement of the lemma comes from observing that p(x) is the sum of the term x^3 - a_1x^2
which is negative in (0, a_1) and positive for x > a_1 and of the term a_2x - a_3 which is increasing and changes sign once for x = a_3/a_2.
Let, as in the previous lemma, γ_A=s^TÃs/s^Ts and define
γ_S=y^TRR^Ty/(y^Ty) = y^TŜ^-1/2S̃Ŝ^-1/2y/(y^Ty), hence γ_S∈ [γ^S_min, γ^S_max],
and γ_X=z^TKK^Tz/(z^Tz)=z^TX̂^-1/2X̃X̂^-1/2z/(z^Tz)∈ [γ^X_min, γ^X_max].
We are now able to bound the real eigenvalues of the preconditioned matrix 𝒜𝒬̅^-1. We split the main
theorem considering the two cases Ky = 0 and Ky ≠ 0.
If Ky = 0, then the
real eigenvalues of the preconditioned matrix not lying in
[γ^A_min,γ^A_max] satisfy
λ^2-(γ_A+γ_S)λ+γ_S = 0.
Moreover the following synthetic bound holds:
min{γ^A_min, γ^S_min/(γ^A_max+γ^S_min)}≤λ≤γ^A_max+γ^S_max.
From (<ref>), we have
x =(Ã-λ I)^-1(λ -1)R^Ty.
Inserting x into the equation (<ref>) yields
R(Ã-λ I)^-1(1-λ)R^Ty=λ y.
Multiplying the above equation by y^Ty^Ty and using Lemma <ref>, we derive
λ^2-(γ_A+γ_S)λ+γ_S=0,
The solutions of equation (<ref>) are
λ_1,2=(γ_A+γ_S±√((γ_A+γ_S)^2-4γ_S))/2.
It is easy to see that
λ_1=(γ_A+γ_S+ √((γ_A+γ_S)^2-4γ_S))/2≤γ_A+γ_S≤γ_max^A+γ^S_max.
It is not hard to see that the smallest eigenvalue λ_2 is a decreasing function with respect to γ_A and an increasing function with respect to γ_S if γ_A ≥ 1. Therefore, we have
λ_2 =(γ_A+γ_S- √((γ_A+γ_S)^2-4γ_S))/2
=2γ_S/(γ_A+γ_S+ √((γ_A+γ_S)^2-4γ_S))≥γ^S_min/(γ^A_max+γ^S_min).
From the above discussion, we have proved that the real eigenvalues satisfy
γ^S_min/(γ^A_max+γ^S_min)≤λ≤γ^A_max+γ^S_max.
Before developing bound on the real eigenvalues of the preconditioned matrix in the general case we state the following Lemma.
Let ζ∈ℝ be either 0 < ζ < min{γ_min^A, γ_min^S/(γ_max^A+γ_min^S)} or ζ≥γ_max^A+γ_max^S.
Then the symmetric matrix
Z(ζ) = (1-ζ ) R (ζ I - Ã)^-1 R^T + ζ I,
has either all positive or all negative eigenvalues.
Let w be a nonzero vector. Multiplying (<ref>) by
w^Tw^Tw on the left and by
w on the right and applying Lemma <ref>, since
ζ I - Ã has all positive or all negative eigenvalues, yields
w^T Z w/(w^T w) = (1-ζ) w^T R (ζ I - Ã)^-1 R^T w/(w^T w) + ζ =
(setting z = R^T w)
= (1-ζ) [z^T (ζ I - Ã)^-1 z/(z^T z)] [w^T R R^T w/(w^T w)] + ζ
= (1-ζ)/(ζ - γ_A) γ_S + ζ.
The Rayleigh quotient associated to Z(ζ), namely the function h(ζ) = (1-ζ)/(ζ - γ_A) γ_S + ζ, cannot be zero under the hypotheses on ζ.
In fact,
h(ζ) = 0 ⟹ζ^2 - (γ_A+γ_S) ζ +γ_S = 0,
and applying (<ref>) we obtain the desired result.
The next theorem provides bounds on the real eigenvalues of the preconditioned matrix 𝒜𝒬̅^-1 in the general case.
Let λ∈ℝ and λ∉[min{γ_min^A, γ_min^S/(γ_max^A+γ_min^S)}, γ_max^A+γ_max^S ].
Then the remaining real
eigenvalues of the preconditioned matrix 𝒜𝒬̅^-1
satisfy
λ^3-(γ_A+γ_S+γ_X)λ^2+(γ_S+γ_X+
γ_Aγ_X)λ-γ_Aγ_X= 0.
Moreover the following synthetic bound holds:
min{γ_min^S/(γ_max^A+γ_min^S),
γ_min^A γ_min^X/(γ_min^X+ γ^S_max + γ_min^Aγ_min^X)}≤λ≤γ_max^A+γ_max^S+γ_max^X.
The equation (<ref>) can be written as
x = (1-λ) (λ I -Ã)^-1 R^T y.
When we insert this into the second equation in (<ref>), we obtain
Z(λ)y = (λ-1) K^T z,
where Z(λ)=(1-λ) R(λ I -Ã)^-1 R^T +λ I.
The hypotheses on λ allow to use
Lemma <ref> which guarantee the matrix Z(λ) is either SPD or symmetric negative definite.
Hence, obtaining
y = (λ - 1)Z(λ)^-1 K^T z from the previous equation and substituting in (<ref>) yields
[K(λ-1)Z(λ)^-1 K ^T - λ I ]z = 0.
Premultiplying by z^Ton the left and dividing by z^Tz yields
z^TK (λ-1) Z(λ)^-1 K ^T z/z^Tz-λ=0.
Setting w=K^Tz, we can obtain
(λ-1) [w^TZ(λ)^-1w/(w^Tw)][z^T K K^T z/(z^T z)]-λ =0.
Denoting u = Z(λ)^-1/2w, equation (<ref>) becomes
(λ-1) [u^T u/(u^T Z(λ) u)][z^T K K^T z/(z^T z)]-λ =0.
Using now the relation (<ref>) in Lemma <ref>, we get
(λ-1)/[(1-λ)/(λ - γ_A) γ_S + λ] γ_X - λ = 0.
After simple algebra we are left with the following polynomial cubic equation
q(λ) ≡λ^3-(γ_A+γ_S+γ_X)λ^2+(γ_S+γ_X+
γ_Aγ_X)λ-γ_Aγ_X=0.
Applying Lemma <ref> to this cubic polynomial we have
a = γ_A γ_X/(γ_X+ γ_S + γ_A γ_X),
b = γ_A+γ_S+γ_X.
In this case it is easily verified that a < b, from which we have that
a < λ < b, and the statement of the theorem
results by observing that the lower bound is an increasing function of both γ_A and γ_X
and a decreasing function of γ_S.
Check of the bounds in Theorems <ref> and <ref>.
Figure <ref> displays in depth the eigenvalue distribution of preconditioned matrix 𝒬̅^-1𝒜.
* The complex eigenvalues of the preconditioned matrix 𝒬̅^-1𝒜 fall in the open circle with center (1,0) and radius 1;
* Regarding the real eigenvalues, the results are summarized in the following table:
min{λ, λ∈ℝ}   max{λ, λ∈ℝ}   Lower bound (<ref>)   Upper bound (<ref>)
0.1982   3.0019   0.1342   6.2110
In the next section, we will perform a more accurate eigenvalues analysis of the preconditioned matrix with the 𝒬̅ preconditioner, under additional hypotheses.
§ FURTHER CHARACTERIZATION OF REAL EIGENVALUES
We will now consider a simplified preconditioner in which the only approximation is provided by Â ≈ A, whereas
Ŝ=BÂ^-1B^T ≡S̃ and X̂=CŜ^-1C^T ≡X̃.
Note that RR^T=I_m and KK^T=I_l.
Let Ŝ≡S̃ and X̂≡X̃. Then
any real eigenvalue λ of 𝒜𝒬̅^-1 is bounded by
min{λ^+(γ^A_min), γ^A_min, 1/(γ^A_max+1)}≤λ≤max{λ^+ (γ^A_max), γ^A_max +1},
where λ^+(.) is the (unique) positive root of the equation
λ^3 - (2 + γ_A) λ^2 + (2 + γ_A) λ - γ_A = 0.
Moreover, the following more synthetic bound holds:
γ^A_min/2≤λ≤γ^A_max+1.
For this simplified preconditioner we have γ_S ≡ 1 and γ_X ≡ 1. In this case the equation (<ref>) becomes
p(λ; γ_A) ≡λ^3 - (2 + γ_A) λ^2 + (2 + γ_A) λ - γ_A = 0,
for all real
λ∉[min{γ_min^A, 1/(1 + γ_max^A)}, 1+γ_max^A ].
The cubic polynomial equation (<ref>) can be written as
p(λ; γ_A) ≡λ((λ-1)^2+1) - γ_A (λ^2 - λ + 1) = 0,
showing that the function g(x) ≡ p(λ; x) is decreasing for each x ≥ 0 and therefore
the position of the largest positive root of (<ref>) is increasing.
Moreover it is easy to show that
for every γ_A > 0, there is a unique positive root to the equation p(λ; γ_A) = 0. In fact
p(0; γ_A) = -γ_A < 0, p'(0; γ_A) = 2 + γ_A, p'(λ̃; γ_A) = 0,
λ̃= (γ_A+2 - √(γ_A^2+γ_A - 2))/3,
so that if γ_A < 1, the polynomial p is increasing for λ > 0 and it takes a local maximum in λ̃ if γ_A > 1
in which, however, p(λ̃; γ_A) < p(λ̃; 1) = 0.
Combining all these facts we finally have
λ^+(γ^A_min) ≤λ≤λ^+ (γ^A_max),
where λ^+(γ_A) refers to the unique positive solution of p(λ; γ_A) = 0, and the thesis holds
by observing that p(γ^A_min; γ^A_min) > 0 ⟹λ^+(γ^A_min) < γ^A_min and
p(γ^A_max; γ^A_max) < 0 ⟹λ^+(γ^A_max) > γ^A_max.
Also the second part of the theorem holds
since p(γ^A_min/2; γ^A_min) = -(γ^A_min)^3/8 < 0, then
λ^+(γ^A_min) > γ^A_min/2. Moreover, from
p(γ^A_max+1; γ^A_max) = 1 > 0, we have
that λ^+(γ^A_max) < γ^A_max+1.
Combining this with (<ref>) and observing that γ^A_min/2 < 1/(γ^A_max + 1),
we conclude the proof.
Note that we could have applied directly Lemma <ref> to equation (<ref>),
obtaining the following bounds
γ_min^A/(2 + γ_min^A)≤λ≤γ_max^A + 2,
which are looser than those proved in Theorem <ref>.
Check of the bounds in Theorem <ref>.
The following example is given to assess the theoretical results developed in Theorem <ref>.
Consider the linear system (<ref>) with the block matrices are randomly generated by the following MATLAB code:
n = 100; m = 80; l = 60;
z = 1+10*rand(1); w = z*rand(n,1); w = 0.1+sort(w); w(1:10) = w(1);
A = diag(w); B = rand(m,n); C = rand(l,m);
In this example, matrix A is diagonal with a random eigenvalue distribution in [0.1, 11] and Â = I.
In Figure <ref> (left) we show the whole spectrum of the preconditioned matrix 𝒬̅^-1𝒜 together with the bounds for the real eigenvalues. In Figure <ref> (right)
a zoom of the smallest (real) eigenvalues is provided showing that both the lower bounds, namely
γ_min^A/2 (red box) and λ^+ (black plus), are smaller than, yet very close to, the smallest real eigenvalue
of the preconditioned matrix. The results of this experiment as well as the observation of the figures
point out that:
* The complex eigenvalues of the related preconditioned matrix are located in a circle centered at (1, 0) with radius 1;
* The real eigenvalues lie in the real interval [0.1024, 3.373];
* Here γ^A_min/2 = 0.1003, λ^+ = 0.1008 and γ^A_max+1 = 3.552
We can appreciate the closeness of the bounds to the endpoints of the real eigenvalue interval.
§ CONCLUSIONS
In this work, we have considered a number of exact block preconditioners, developing
the spectral distribution of the corresponding preconditioned matrices, for a class of double saddle point problems.
Some numerical experiments are performed, which show the good behavior of the preconditioned FGMRES method using inexact counterparts of these
preconditioners, in comparison with other preconditioners from the literature.
We have then concentrated on the inexact variants of a specific block triangular preconditioner, performing a complete
spectral analysis and relating the eigenvalue distribution of the preconditioned matrix with the extremal eigenvalues
of the (symmetric and positive definite) preconditioned (1,1) block and the Schur complement matrices.
Numerical tests are reported which confirm the validity of the developed theoretical bounds.
Future work is aimed at generalizing this work to provide the eigenvalue distribution of
more general double saddle-point matrices, in particular those with nonzero
(2,2) and (3,3) blocks, and to test them on a wide number of realistic applications, such as, e.g., coupled poromechanical models <cit.>,
and the coupled Stokes-Darcy equation <cit.>.
10
Yuan
J.-Y. Yuan, Numerical methods for generalized least squares problems, J. Comput. Appl. Math., 66 (1996), pp. 571–584.
Han
D.-R. Han, X.-M. Yuan, Local linear convergence of the alternating direction method of multipliers for quadratic programs, SIAM J. Numer. Anal., 51 (2013), pp. 3446–3457.
Rhebergen
S. Rhebergen, G.N. Wells, A.J. Wathen, R.F. Katz, Three-field block preconditioners for models of coupled magma/mantle dynamics, SIAM J. Sci. Comput., 37 (2015), pp. A2270–A2294.
Chen
Z.-M. Chen, Q. Du, J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients, SIAM J. Numer. Anal., 37 (2000), pp. 154–1570.
Monk
P. Monk, Analysis of a finite element method for Maxwell's equations, SIAM J. Numer. Anal., 29 (1992), pp. 714–729.
Cai
M. Cai, M. Mu, J. Xu, Preconditioning techniques for a mixed Stokes/Darcy model in porous media applications, Comput. Appl. Math., 233 (2009), pp. 346–355.
ChenRen
F. Chen, B. Ren, On preconditioning of double saddle point linear systems arising from liquid crystal director modeling, Appl. Math. Lett, 136 (2023), 108445.
Szyld
P. Chidyagwai, S. Ladenheim, D. B. Szyld,
Constraint preconditioning for the coupled Stokes-Darcy
system,
SIAM J. Sci. Comput., 38, (2016),
pp. A668—A690.
Benzi2018 F.P.A. Beik, M. Benzi,
Iterative methods for double saddle point systems,
SIAM J. Matrix Anal. Appl., 39 (2018), pp.
902–921.
BeikBenzi2022 F.P.A. Beik, M. Benzi, Preconditioning techniques for the coupled Stokes–Darcy problem: spectral and field-of-values analysis. Numer. Math. 150, 257–298 (2022).
Cao
Y. Cao, Shift-splitting preconditioners for a class of block three-by-three saddle point problems, Appl. Math. Lett., 96 (2019), pp. 40–46.
Bradley
S. Bradley, C. Greif,
Eigenvalue bounds for double saddle-point systems.
IMA Journal of Numerical Analysis, (2023). Published online on 23 December 2022.
Simoncini1
V. Simoncini, D. Szyld, Recent computational developments in Krylov subspace methods for linear systems, Numer. Linear Algebra Appl., 14 (2007), pp. 1–59.
Huang1
N. Huang, C.-F. Ma, Spectral analysis of the preconditioned system for the 3 × 3 block saddle point problem, Numer. Algor., 81 (2019), pp. 421–444.
Balani-et-al
F. Balani Bakrani, M. Hajarian, L. Bergamaschi, Two block preconditioners for a class of double saddle point linear systems,
Applied Numerical Mathematics, 190 (2023), pp. 155–167.
Xie
X. Xie, H.B. Li, A note on preconditioning for the 3 × 3 block saddle point problem, Comput. Math. Appl., 79 (2020), pp. 3289–3296.
Wang
N. N. Wang, J.-C. Li, On parameterized block symmetric positive definite preconditioners for a class of block three-by-three saddle point problems, Comput. Appl. Math., 405 (2022), 113959.
Simoncini
V. Simoncini, Block triangular preconditioners for symmetric saddle-point problems, Appl. Numer. Math., 49 (2004), pp. 63–80.
AslSalBei
H. Aslani, D.K. Salkuyeh, F.P.A. Beik,
On the Preconditioning of Three-by-Three Block Saddle Point Problems,
Filomat, 35 (2021),
pp. 5181–5194.
Bergamaschi
L. Bergamaschi, On eigenvalue distribution of constraint-preconditioned symmetric saddle point matrices, Numer. Linear Algebra Appl., 19 (2012), pp. 754–772.
Frigo-et-al
M. Frigo, N. Castelletto, M. Ferronato,
Enhanced relaxed physical factorization preconditioner for coupled poromechanics,
Comput. Math. Appl.,
106 (2022), pp. 27–39.
|
http://arxiv.org/abs/2307.02698v2
|
20230706000732
|
Applying a Color Palette with Local Control using Diffusion Models
|
[
"Vaibhav Vavilala",
"David Forsyth"
] |
cs.CV
|
[
"cs.CV"
] |
Applying a Color Palette with Local Control using Diffusion Models
Vaibhav Vavilala
University of Illinois at Urbana-Champaign
[email protected]
David Forsyth
University of Illinois at Urbana-Champaign
[email protected]
August 1, 2023
[Figure: Sample editing workflow enabled by our method. First row: The user selects an existing image or generates one. Then, a user can extract patches and move them around, optionally conditioning regions to inpaint with a color for the network to target. Here, we move the dragon's head up, and the network inpainted the missing region with a bottom lip, effectively opening the dragon's mouth. From there, we selected a reference image and our palette transfer network applied the palette to the image, changing its style significantly. Second row: Another example, where we condition patches on a color.]
We demonstrate two novel editing procedures in the context of fantasy card art. Palette transfer applies a specified reference palette to a given card. For fantasy art, the desired change in palette can be very large, leading to huge changes in the “look” of the art. We demonstrate that a pipeline of vector quantization; matching; and “vector dequantization” (using a diffusion model) produces successful extreme palette transfers. Segment control allows an artist to move one or more image segments, and to optionally specify the desired color of the result. The combination of these two types of edit yields valuable workflows, including: move a segment, then recolor; recolor, then force some segments to take a prescribed color. We demonstrate our methods on the challenging Yu-Gi-Oh card art dataset.
§ INTRODUCTION
We describe image editing procedures for creating and manipulating
images of fantasy art. Our images are targeted at card game
characters, and so illustrate creatures, spells and the like. Our
editing procedures allow the artist to apply natural vector art edits to
pixel art. So, for example, an artist can move sections of an image
(equivalent to translating or editing layers of vector art); or can
wholly change the image's color palette (equivalent to recoloring
polygons and gradients in vector art). We use diffusion models to
“snap” the artist's changes to realistic images.
Diffusion models <cit.> have proven to be state
of the art in image synthesis. In particular, techniques have been
developed to exert control over the output via text prompts
<cit.>, edges, segmentation maps, or depth maps
<cit.>. Some of these procedures do not apply here – for example, there
is no method for editing an imaginary monster's depth map – but
important and natural controls have not yet been demonstrated.
Fine-grained edits to an image involve selecting some portion of the
image, changing it, then obtaining a natural result. Diffusion models
have been shown to accept segmentations as conditioning
(<cit.>), but fine-grained edits have not yet been
shown. Our method obtains a segmentation from
<cit.>. A user can then move or remove one or
more patches, and the diffusion model will inpaint the result. Further,
the user can specify a color to be used for patches, and the diffusion
model will respect that control. This color conditioning is novel,
and is useful in artist workflows. We show that our color-conditioned
inpainting method works well in practice.
Palette control involves applying a specified palette to a given
image. The palette may come from a set of example colors, or more
commonly, an example image. Palette control may involve very large
changes to the color palette, changing the overall feel of the image
(for example, Fig. <ref>) without disrupting gloss effects, color
gradients, and so on. Current palette control methods
(reviewed below) do not apply, because they are oriented to small or
moderate palette variations for natural images.
For a palette control method to be useful in our application, it
should be able to make very large changes to the palette of the art;
it should be easy to obtain multiple distinct transfers;
and it should be possible to apply detailed edits to the palette
mapping. Our palette control method is built using three tools: vector
quantization of an image's palette; correspondence between color
palettes; and “vector dequantization”, where a diffusion model
constructs one or more natural images conditioned on a vector quantized input.
Assume we wish to show image A in B's palette. We vector quantize
each image's color gamut. We then build a correspondence between the
centers, then map the colors using the correspondence to
obtain a vector quantized version of the result. Finally, we vector
dequantize to obtain a natural image. Choices that affect the result
are: the number of centers chosen in vector quantization (see Fig. <ref>);
the particular correspondence process (Fig. <ref>); and the randomness
inherent in the diffusion model (Fig. <ref>).
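To make the pipeline concrete, the sketch below illustrates the quantize-and-match half of the procedure in Python; the final "vector dequantization" step is performed by our trained diffusion model and is only hinted at here. The use of PIL's median-cut quantizer and Hungarian matching on RGB distances is one plausible instantiation, not necessarily the exact choice used in our experiments.

```python
import numpy as np
from PIL import Image
from scipy.optimize import linear_sum_assignment

def quantize(img, k):
    # PIL median-cut quantization; returns the paletted image and its (k, 3) palette
    q = img.convert("RGB").quantize(colors=k)
    palette = np.array(q.getpalette()[: 3 * k]).reshape(k, 3)
    return q, palette

def match_palette(source, reference, k=16):
    src_q, src_pal = quantize(source, k)
    _, ref_pal = quantize(reference, k)
    # one-to-one correspondence between centers, minimizing total RGB distance
    cost = np.linalg.norm(src_pal[:, None, :] - ref_pal[None, :, :], axis=-1)
    _, cols = linear_sum_assignment(cost)
    # recolor the quantized source with the matched reference centers
    src_q.putpalette(ref_pal[cols].astype(np.uint8).flatten().tolist())
    return src_q.convert("RGB")   # this image conditions the diffusion "dequantizer"
```

The recolored quantized image returned here is what the palette-transfer network consumes as conditioning, together with the source edges.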
§ RELATED WORK
Image Synthesis: There is a rich history of neural image
synthesis with GANs
<cit.> and
diffusion models
<cit.>. The quality of
the generated distributions has reached a level nearly
indistinguishable from the ground truth, with problems like aliasing
<cit.> and sparse datasets <cit.>
well-explored. Techniques have been developed to condition the
generation on additional inputs like depth or normals
<cit.>.
Image inpainting is a process by which a patch of pixels that have
been removed in an image (say for object removal) must be filled
in (see <cit.> for a review). A number of methods
have been developed to solve this problem with CNNs
<cit.> and diffusion models
<cit.>, generating sensible pixels consistent with
the surroundings while preserving texture outside the target region.
We are not aware of an inpainting method that allows the user to
specify the approximate color of the inpainted region - we argue this
is an important image editing activity and useful to have in
practice.
Palette mapping can be seen as a variant of style transfer,
though we deprecate this view. Style transfer methods
(eg <cit.>)
do change palettes, but change the spatial layout of the image as
well; in contrast we want to change only the colors. There is
a small literature of palette transfer methods (with a
review, <cit.>). Chang et al. demonstrate the
value of quantizing a palette in a wholly interactive method.
A number of automatic methods are framed as a warp that matches color
spaces. Reinhard et al. match moments in lαβ color
space <cit.>; they obtain improved transfer by segmenting the image
and the reference and matching segments. Wu et al. use an explicit
semantic representation to match (so flowers to flowers and grass to
grass, say) <cit.>. Pitié et al. warp the color space
using repeated 1D projections and a form of histogram
matching <cit.>; Hwang et al. use moving least
squares <cit.>. Cho et al. train a network to apply a
specified palette to a natural image using color
augmentations <cit.>. While palette mapping
could be viewed as a colorization problem (decolor, then colorize with
the specified palette), we are not aware of colorization algorithms
that can be forced to use a restricted palette.
In contrast to the palette mapping literature, our method must apply
to card art rather than natural images. This creates some important
constraints. Gradients are frequent (and cause problems to existing
methods; see, for example, figure 4c of <cit.>). These
problems occur because warping the color space may speed up the color
change in the gradient and lead to a gradient becoming a set of
stripes. Gloss effects are quite widely represented, and mismatching
colors can cause very odd looking specularities. The
set of colors used is often small, and has strong effect on the
overall “look” of the art.
§ METHOD
Dataset:
Our dataset is an enlarged version of the Yu-Gi-Oh! dataset in <cit.> and we process it in a similar way. However, we found better upscaling results with Real-ESRGAN <cit.>. We also include spells and traps, bringing the total dataset size to 11.3K. We still train at 512 resolution.
Training Details:
We use ControlNet for training <cit.> (starting from a pretrained Stable Diffusion 2.1 model) and train for 200 epochs on an NVIDIA A40 GPU, requiring approx. 7 days. We didn't notice an appreciable improvement in quality (and observed slower training) by allowing some of the diffusion model U-Net weights to be trainable. We trained two ControlNets: one for color transfer, and the other for inpainting. We describe them below.
Palette Transfer: Our Palette Transfer ControlNet accepts a 4 channel input. The first channel corresponds with the image edges (which is toggled on/off with 50% probability at training time so that the network can learn to generate images without this conditioning). We use the Canny edge detector from OpenCV. The remaining three channels correspond with the quantized image (also toggled with 50% probability at training time), for which we use the Python Imaging Library (PIL). We vary the number of colors at random between 5 and 64. As we show in our results section, when quantizing with fewer colors, we obtain fairly diverse images because the image representation is far more crude. With lots of colors in the quantized representation, samples with the same conditioning but different seeds look much more similar since the target image is further constrained.
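A minimal sketch of how this 4-channel conditioning could be assembled is given below; the Canny thresholds and function names are illustrative assumptions rather than the released training code.

```python
import random
import numpy as np
import cv2
from PIL import Image

def palette_transfer_conditioning(img, train=True):
    rgb = np.array(img.convert("RGB"))
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0      # channel 0: edges
    n_colors = random.randint(5, 64)                                   # random palette size
    quant = np.asarray(img.convert("RGB").quantize(colors=n_colors).convert("RGB"),
                       dtype=np.float32) / 255.0                       # channels 1-3: quantized image
    if train and random.random() < 0.5:
        edges[:] = 0.0                                                 # drop edge conditioning
    if train and random.random() < 0.5:
        quant[:] = 0.0                                                 # drop palette conditioning
    return np.concatenate([edges[..., None], quant], axis=-1)          # H x W x 4
```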
Inpainting:
Our Inpainting ControlNet accepts a 6 channel input. The first 3 channels correspond with texture that we do not want the network to change. This is simply an RGB image with values between 0-1. We rescale the pixels to lie between 0.5-1 such that we can disambiguate no color in a channel (which becomes 0.5) and a pixel we want the network to inpaint (which becomes 0). The remaining 3 channels correspond with color hints for the pixels we want the network to inpaint. For such patches, we condition the network to attempt to fill in that patch with pixels whose mean color corresponds with what the user asked for. As before, we re-scale the target colors to lie between 0.5-1. Thus an input of 0 for a region we want to inpaint implies the diffusion model can figure out for itself what to inpaint with no color conditioning in that patch. Pixels we are not inpainting are also set to 0 for these last three channels. To train the inpainting network, we first need to extract patches for each training image. We use Segment Anything <cit.> to extract patches from an image, and at training time, randomly select some patches from an image to remove from the texture conditioning (i.e. set those values to 0). Segment Anything found an average of 88 patches per image (note that all the pixels are distributed among the patches). For each patch that was removed, with 50% probability we set the corresponding patch of pixels in the color hint channels to be the mean color within the support of that patch. Thus, the network learns to associate piecewise-constant color with patches of real image pixels - a powerful user control that works quite well as we show in our results section.
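The following sketch shows one way the 6-channel inpainting conditioning described above could be built from an image and a set of Segment Anything masks; the variable names and the boolean-mask patch format are our assumptions.

```python
import numpy as np

def inpainting_conditioning(rgb, removed_masks, hint_prob=0.5):
    """rgb: H x W x 3 float array in [0, 1]; removed_masks: list of H x W boolean patch masks."""
    texture = 0.5 + 0.5 * rgb                  # kept pixels live in [0.5, 1]
    hints = np.zeros_like(rgb)                 # 0 means "no color hint"
    for mask in removed_masks:
        mean_color = rgb[mask].mean(axis=0)    # mean color of the original patch
        texture[mask] = 0.0                    # 0 marks pixels the network must inpaint
        if np.random.rand() < hint_prob:
            hints[mask] = 0.5 + 0.5 * mean_color   # per-patch color hint in [0.5, 1]
    return np.concatenate([texture, hints], axis=-1)   # H x W x 6
```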
§ RESULTS
§.§ Qualitative Evaluation
We present extensive qualitative evaluation of our methods. First, our method can generate good samples unconditionally,
with only text prompt input (see Fig. <ref>). Second, our method can transfer color palettes, with several
variations (Fig. <ref>), different numbers of colors in the palette (Fig. <ref>), and different methods
to form a correspondence between palettes (Fig. <ref>). Additionally, instead of extracting a reference palette
from a real image, we can use pre-built color palettes, for example colormaps from matplotlib, with good palette
transfer results (Fig. <ref>). We observed that smaller color palettes force the network to hallucinate more, resulting in greater variations between samples. This feature can be used creatively by an artist (Fig. <ref>).
When transferring a color palette, we found cases where the diffusion model failed to match the target color palette
(oftentimes producing oversaturated patches that still match the edges). This is likely due to sparsity in the dataset -
the network may not have seen certain colors paired with certain segments and struggles to hallucinate a sensible
image. Remedying this is future work and could perhaps involve some form of data augmentation.
We tie up our contributions in an example artist editing workflow in the teaser figure (<ref>), showing how to edit a real image by moving segments and transferring a palette.
§.§ Quantitative Evaluation
We present a quantitative evaluation of our method in Table <ref>, where we aim to show three features of our
method.
Baseline Our method produces reasonable images without conditioning, obtaining an FID of 35.6 (see
Fig. <ref> for qualitative examples). We used our 4 channel Palette Transfer network but with no edge or
palette conditioning and per-sample prompts selected at random from the training set to generate 50K samples. This model
clearly favors conditioning; fine-tuning a diffusion model without conditioning on this dataset is likely to yield much
stronger FID.
Inpainting For 50k samples, we randomly select 10% of the masks associated with a random
training sample, and for each patch we randomly select a color to condition that patch (or with 50% probability, not
condition the color for that patch). Through this process we obtain an FID of 3.6. Thus, after inpainting approx. 10%
of the pixels, the generated distribution does not drift too far from the training set.
Palette Transfer For
each of the 50k samples generated, we pick two random training samples - one for the source edges/color, and another for
the target palette that must be transferred. While it is plausible that color transfer (a significant global edit) could
create low-quality images that drift far from the training set, this does not happen in practice, obtaining an FID of
18.0 (about half the FID of the unconditioned baseline).
§ CONCLUSION
Our quantitative and qualitative results show we can edit global and local characteristics of fantasy art, allowing
artists to apply vector art techniques to pixel art. Our pipeline of (vector quantize; match; vector dequantize)
likely applies to problems such as texture transfer, where one wants to preserve the spatial structure of the image but obtain
a different texture “feel” (so make a grassy field snowy, for example).
|
http://arxiv.org/abs/2307.02244v1
|
20230705123639
|
Self-supervised learning with diffusion-based multichannel speech enhancement for speaker verification under noisy conditions
|
[
"Sandipana Dowerah",
"Ajinkya Kulkarni",
"Romain Serizel",
"Denis Jouvet"
] |
cs.SD
|
[
"cs.SD",
"eess.AS"
] |
Self-supervised learning with diffusion-based multichannel speech enhancement for speaker verification under noisy conditions
Sandipana Dowerah, Ajinkya Kulkarni, Romain Serizel, Denis Jouvet
August 1, 2023
===================================================================================
The paper introduces Diff-Filter, a multichannel speech enhancement approach based on the diffusion probabilistic model, for improving speaker verification performance under noisy and reverberant conditions. It also presents a new two-step training procedure that takes advantage of self-supervised learning. In the first stage, the Diff-Filter is trained by conducting time-domain speech filtering using a scoring-based diffusion model. In the second stage, the Diff-Filter is jointly optimized with a pre-trained ECAPA-TDNN speaker verification model under a self-supervised learning framework. We present a novel loss based on the equal error rate. This loss is used to conduct self-supervised learning on a dataset without speaker labels. The proposed approach is evaluated on MultiSV, a multichannel speaker verification dataset, and shows significant improvements in performance under noisy multichannel conditions.
Index Terms: multichannel speech enhancement, diffusion probabilistic models, speaker verification, self-supervised learning
§ INTRODUCTION
Speaker verification (SV) aims to confirm the identity of a person based on his/her voice characteristics. SV has achieved significant performance gains in controlled or close-talk scenarios. However, it suffers from unsatisfactory performance in multichannel far-field scenarios. This is due to complex environmental settings: speech signals propagating over long ranges are subject to fading, absorption, room reverberation and complex environmental noises, which change the pressure level at different frequencies and degrade the signal quality. Speech enhancement (SE) can be used as a pre-processing step for SV in noisy reverberant scenarios. Speech enhancement aims to improve the quality and intelligibility of speech signals that are corrupted by noise and/or reverberation by estimating the original clean speech signal using various signal processing techniques. Multichannel speech enhancement aims to enhance distorted speech using multiple microphones and improves performance by taking advantage of the additional spatial information provided by these microphones compared to a single channel.
Generative models aim to learn the fundamental characteristics of speech, such as its spectral and temporal structure and can use this prior knowledge to identify clean speech from noisy or reverberant input signals that fall outside the learned distribution. <cit.> used the raw waveform, or magnitude spectrum, as input for generative model-based speech enhancement. Generative adversarial networks (GAN) <cit.>, variational autoencoders (VAE) <cit.>, and flow-based models <cit.> have been used to estimate the distribution of clean speech signals. Recently, diffusion-based models have also been studied for speech enhancement <cit.>. All these approaches share the concept of gradually converting input data into noise and training a neural network to invert this process for various noise scales based on the Markov chain.
DiffuSE <cit.> was proposed to recover the clean speech signal from the noisy signal based on Markov chains; it provides a framework for denoising diffusion probabilistic models. Lu et al. formulated the CDiffSE model using a generalized conditional diffusion probabilistic model that incorporates the observed noisy data into the model <cit.>. While CDiffSE and DiffSE employ U-net as their diffusion decoder network, our proposed work takes a different approach and uses Conv-TasNet as the diffusion decoder instead. Specifically, our method conducts speech enhancement on the time-domain representation of the signal. Zhang et al. extend the Diff-Wave vocoder <cit.> using a convolutional conditioner for denoising, and it is trained separately using an L1 loss for matching latent representations <cit.>. Our proposed approach incorporates a conditioning network based on Conv-TasNet in addition to the diffusion decoder. This conditioning network provides estimates of the clean speech and noise signals, which are combined with the multichannel noisy signal and fed into the diffusion decoder. By doing so, the diffusion process is made easier as it can learn to remove the noise while taking into account the clean speech estimate provided by the conditioning network. Recently, some studies <cit.> have explored scoring-based diffusion models with stochastic differential equations (SDE) instead of Markov chains. SDE enables controlling the selection of the reverse diffusion steps for enhancement <cit.>. The aforementioned works use only a single channel and have not been studied for SV.
Self-supervised learning is a powerful machine learning technique that enables models to learn from unlabeled data by leveraging the inherent structure or patterns in the data itself, without the need for explicit supervision from labelled data. In the context of speaker verification tasks, a few approaches have used contrastive learning for self-supervised learning <cit.>. The loss function design for SV mainly focuses on speaker classification and verification losses <cit.>. Furthermore, the contrastive learning framework enables the online creation of verification labels. In order to exploit multichannel speech data without explicit speaker labels, we propose to use the equal error rate (EER) evaluation metric as a loss function to optimize the speaker embedding representation on the verification task.
In this paper, we present a diffusion probabilistic model (DPM)-based two-stage multichannel speech enhancement approach as a pre-processing to SV. We named our approach Diff-Filter as it mimics the behaviour of Rank-1 multichannel Wiener filter (MWF). In the first stage, we train the Diff-Filter by conducting time-domain speech filtering using a scoring-based diffusion model. In the second stage of training, we jointly optimize the Diff-Filter with a pre-trained ECAPA-TDNN SV model under a self-supervised learning framework. We evaluate our results on MultiSV, a multichannel SV dataset, and show that our proposed approach significantly improves SV performance under multichannel noisy conditions.
§ PROPOSED APPROACH
In this section, we present the proposed approach for developing a robust multichannel SV system in a noisy environment. In the first phase, we trained the ECAPA-TDNN <cit.> based SV system and a multichannel speech enhancement system separately. We used this pre-training of speech enhancement and SV for training the jointly optimized system using self-supervised learning. We jointly optimized Diff-Filter and ECAPA-TDNN with the proposed EER loss as a verification loss to optimize the binary classification with speaker embedding representations. Diff-Filter is a scoring-based diffusion probabilistic model where Conv-TasNet architecture is utilized for conducting the diffusion process. Diff-Filter is trained to provide Rank-1 MWF clean speech signal for a given multichannel noisy input signal. We used a conditioning network to provide the estimates of clean and noise signals as additional input to the diffusion decoder, thus conditioning the sampling process from terminal distribution aware of noise to be removed from the noisy multichannel signal.
§.§ Diff-Filter
This section presents a novel way to train a multichannel speech enhancement system as a DPM-based filtering method named Diff-Filter. We termed the proposed system Diff-Filter, as it replicates the functionality of the Rank-1 MWF filter to provide a clean speech signal. The proposed Diff-Filter comprises a diffusion-based decoder network and a conditioning network, as shown in Figure <ref>. We used Conv-TasNet <cit.> as an external conditioning network. The conditioning network is used to compute the estimates of the clean speech signal, s, and noise in time-domain representation, N. We provide conditioning network output estimates along with the multichannel noisy speech signal as input to the Diff-Filter system. In the forward and reverse diffusion process, terminal noise distribution is defined as 𝒩(μ, I), where the mean is μ, and I is unit variance. We parameterized the mean μ of terminal noise distribution of the diffusion process using noisy multichannel input, y.
Similar to <cit.>, we incorporated a scoring-based diffusion probabilistic model, in which the diffusion decoder learns the trajectories of forward diffusion in reverse time order. In the training phase, the forward diffusion process is conducted by iteratively deconstructing the Rank-1 MWF clean speech estimate into the terminal distribution defined by the noisy multichannel signal. Furthermore, the terminal distribution is also conditioned on the estimates of the clean speech signal and noise signal provided by the conditioning network. The usage of the clean speech estimate and noise estimate in the diffusion process assists in conducting noise-aware speech enhancement. We used stochastic differential equations (SDE) to learn the gradients of the forward diffusion process as shown in Figure 1, where s_θ(X_t,μ,s,N,t) denotes the diffusion decoder network with t as the diffusion time step and β as the noise scheduler. During the inference phase, we solve the SDE describing the dynamics of the reverse diffusion with a simple first-order Euler-Maruyama scheme <cit.>.
We trained the Diff-Filter using a two-stage training process. First, we conditioned the diffusion encoder with a target clean speech signal, a target noise signal, and a noisy multichannel signal. The main purpose of pre-training the diffusion decoder is to ensure that diffusion model parameters converge in optimal minima direction using target clean speech and noise. In the second stage of training, we used the clean speech and noise estimated by the conditioning network.
In the inference phase, multichannel noisy signals and estimates of speech and noise signals obtained from the conditioning network are provided to the diffusion decoder to estimate the reverse trajectories of forward diffusion. The reverse diffusion process iteratively reconstructs the Rank-1 MWF filter output of clean speech by sampling latent variables from conditional terminal distribution.
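For concreteness, a schematic first-order Euler-Maruyama solver for the reverse SDE is sketched below; score_fn stands for the conditioned diffusion decoder s_θ(X_t, μ, s, N, t), and drift/diffusion for the chosen SDE coefficients. These are generic placeholders for illustration, not the exact solver configuration used in our experiments.

```python
import torch

def reverse_euler_maruyama(x_T, score_fn, drift, diffusion, n_steps=30, T=1.0):
    dt = T / n_steps
    x = x_T                                   # sample from the conditional terminal distribution
    for i in range(n_steps):
        t = T - i * dt
        g = diffusion(t)
        # reverse-time update: x <- x - [f(x, t) - g(t)^2 * score] * dt + g * sqrt(dt) * z
        x_mean = x - (drift(x, t) - g ** 2 * score_fn(x, t)) * dt
        x = x_mean + g * (dt ** 0.5) * torch.randn_like(x)
    return x_mean                             # final filtered speech estimate
```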
§.§ Self-supervised learning for multichannel SV
We jointly optimized the Diff-Filter and ECAPA-TDNN as a multichannel SV. In joint optimization, the multichannel noisy signal is first given to Diff-Filter, which provides a single-channel Rank-1-MWF filtered clean speech signal. The error gradient passes through ECAPA-TDNN and Diff-Filter as a single unit because we back-propagate through both models that we are jointly training. To conduct the self-supervised learning with the unlabelled dataset, we created an RIR simulated dataset and applied it to clean speech from the LibriSpeech dataset <cit.> detailed in section 3.
As shown in Figure <ref>, utterances 1 and 2 are given to the jointly optimized network composed of Diff-Filter and ECAPA-TDNN. Utterances 1 and 2 are multichannel noisy signals, where utterance 2 is either a data-augmented multichannel noisy signal derived from utterance 1 or a randomly selected multichannel noisy signal from a different speaker. During training, we used the self-supervised contrastive learning framework, where both utterances 1 and 2 are given to the same jointly optimized network. Verification labels are generated as 1 if utterance 2 is data-augmented from utterance 1 and 0 otherwise, so that 1 indicates that utterances 1 and 2 are from the same speaker. For data augmentation in self-supervised learning, we used speed perturbation with factors 0.9 and 1.1 only, and masking of a 1-second part of the noisy multichannel signal.
We propose to use the EER as a loss function to train the jointly optimized network. The EER is the location on a receiver operating characteristic curve where the false acceptance rate and the false rejection rate are equal. First, we computed the cosine similarity between embedding_1 and embedding_2 for a given batch. Then, the false acceptance rate (FAR) and false rejection rate (FRR) are estimated from the cosine scores and verification labels using torchmetrics[https://torchmetrics.readthedocs.io]. We estimated the EER for the given batch from FAR and FRR as stated in Equation 1, where ℒ_EER ranges from 0 to 1.
ℒ_EER = FAR [ argmin | FRR- FAR | ]
We also estimated cosine similarity loss between embeddings: embedding_1 and embedding_2 <cit.> as shown below in Equation 2.
ℒ_cosine =
    1 - cos(emb_1, emb_2),            if label = 1
    max(0, cos(emb_1, emb_2) - M),    otherwise
where emb_1 and emb_2 refer to the embeddings extracted from utt_1 and utt_2, respectively, M refers to the regularizer (margin) with value 0.2, and cos denotes the cosine similarity between emb_1 and emb_2.
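A hedged re-implementation of the two losses in Equations 1 and 2 is sketched below, with a simple threshold sweep in place of the exact torchmetrics call (which is not specified above); it assumes each batch contains both positive and negative trials.

```python
import torch
import torch.nn.functional as F

def eer_loss(emb1, emb2, labels):
    scores = F.cosine_similarity(emb1, emb2)                  # one score per trial
    thresholds = torch.sort(scores).values
    far = torch.stack([(scores[labels == 0] >= t).float().mean() for t in thresholds])
    frr = torch.stack([(scores[labels == 1] < t).float().mean() for t in thresholds])
    idx = torch.argmin(torch.abs(frr - far))                  # Eq. (1): FAR where |FRR - FAR| is minimal
    return far[idx]

def cosine_loss(emb1, emb2, labels, margin=0.2):
    cos = F.cosine_similarity(emb1, emb2)                     # Eq. (2)
    positive = 1.0 - cos
    negative = torch.clamp(cos - margin, min=0.0)
    return torch.where(labels == 1, positive, negative).mean()
```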
§ DATASET PREPARATION
We used various datasets at different stages while developing the proposed approach for multichannel SV in noisy conditions. We used the MultiSV dataset <cit.> for training the Diff-Filter, which consists of 4 channel speech utterances room simulated impulse response with background noises from Music, MUSAN, and freesound.org[https://freesound.org/]. The training dataset of MultiSV is simulated using the VoxCeleb2 dataset <cit.>. Consistent with the Diff-Filter training data source, we utilized the VoxCeleb2 dataset with standard Kaldi-based data augmentation techniques for training ECAPA-TDNN single-channel SV. We opted for the VoxCeleb2 dataset for joint training as MultiSV is a labelled dataset, and the core of self-supervised learning is to explore the unlabelled dataset.
To jointly optimize the network, we first simulated a room impulse response dataset and applied it to clean speech from the LibriSpeech dataset without taking into account the speaker information, thus creating an unlabelled multichannel SV dataset. The pyroomacoustics toolbox[https://github.com/LCAV/pyroomacoustics] is used for room simulation with 4 channels. The room length was drawn randomly between [3,8] m, the width was chosen between [3,5] m, and the height was chosen between [2,3] m. The absorption coefficient was drawn randomly such that the room's RT60 was between [200,600] ms. The minimum distance between a source and the wall is 1.5 m, and 1 m between the wall and the microphones. We generated a total of 50000 training samples for self-supervised learning.
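A sketch of this simulation with pyroomacoustics is given below; the microphone geometry, source height, and the use of inverse_sabine to map a target RT60 to an absorption coefficient are our own illustrative choices within the ranges stated above.

```python
import numpy as np
import pyroomacoustics as pra

def simulate_4ch(speech, fs=16000):
    room_dim = [np.random.uniform(3, 8), np.random.uniform(3, 5), np.random.uniform(2, 3)]
    rt60 = np.random.uniform(0.2, 0.6)                             # target RT60 in seconds
    absorption, max_order = pra.inverse_sabine(rt60, room_dim)
    room = pra.ShoeBox(room_dim, fs=fs, materials=pra.Material(absorption), max_order=max_order)
    # source at least 1.5 m from the walls (in x, y), microphones at least 1 m
    src = [np.random.uniform(1.5, room_dim[0] - 1.5), np.random.uniform(1.5, room_dim[1] - 1.5), 1.5]
    mic_center = np.array([np.random.uniform(1.0, room_dim[0] - 1.0),
                           np.random.uniform(1.0, room_dim[1] - 1.0), 1.0])
    offsets = np.array([[-0.15, -0.05, 0.05, 0.15], [0, 0, 0, 0], [0, 0, 0, 0]])  # simple 4-mic line array
    room.add_source(src, signal=speech)
    room.add_microphone_array(pra.MicrophoneArray(mic_center[:, None] + offsets, fs))
    room.simulate()
    return room.mic_array.signals                                  # (4, n_samples) multichannel mixture
```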
To evaluate the proposed work, we used two multichannel trial protocols from the MultiSV dataset, namely the MRE and MRE hard trial protocols. The evaluation set of MultiSV is a retransmitted development set derived from the VOiCES dataset. In addition to the MultiSV evaluation data, we also created an internal evaluation set using the Fabiole corpus <cit.>, a French speech corpus consisting of around 6882 audio files from 130 native French speakers. The speech data of Fabiole has been collected from different French radio and TV shows. For each evaluation set, we used 1200 speech files from Fabiole, representing 2 hrs of evaluation material. We used the same configuration for room impulse response simulation as used for creating the training dataset for the self-supervised learning phase. We designed the evaluation set with various RIR scenarios to be used for both speech enhancement and SV.
§ EXPERIMENTATION SET-UP
§.§ Multichannel speech enhancement
The model is trained using two loss functions, a diffusion loss and a scale-invariant signal-to-distortion ratio (SI-SDR) loss <cit.>. The diffusion loss is defined by the Fisher divergence as a way to compute the scoring function, which is the gradient of the log probability density at each diffusion step <cit.>. The second loss function, the SI-SDR loss, is applied to the output of the conditioning network to ensure that the diffusion model ingrains the intrinsic information about the clean speech estimate and noise estimate in the time-domain representation. In training, we provided speech segments of a fixed length of 4 seconds.
We set an initial weight of 0.001 on the SI-SDR loss. Then, we increased this weight by 0.0001 after every 5 epochs until it reached 1. For the two-stage training approach, first, we trained the network for 100 epochs with a learning rate of 1e-2 and reduced the learning rate over the epochs by a factor of 0.85 after every 5 epochs. We used the Adam optimizer for two-stage training with a batch size of 2. In the second stage of training, the system is trained with a learning rate of 1e-4 for 500 epochs.
We used the Conv-TasNet architecture to develop both the diffusion decoder and the conditioning network, with the modification of replacing the PReLU activation function with GeLU <cit.>. The implementation of the networks using Conv-TasNet includes 512 filters in the convolutional and transpose convolutional blocks (N), a filter length of 20 (L), 256 channels in the bottleneck, and 1×1 convolutional blocks in the residual paths. The kernel size (P) of each convolutional block is set to 3, and the number of convolutional blocks in each repeat is 8. Also, we adopted global layer normalization with a non-causal strategy for the Diff-Filter implementation. To ensure a stable learning process, we used gradient clipping with a maximum L2-norm of 5.
We conducted self-supervised training on the proposed approach in a contrastive learning framework for 50k iterations with a batch size of 4. In each batch of self-supervised training, we kept equal distribution of verification labels as 0 and 1. We used Adam optimizer with a learning rate of 1e-3 with weight decay of 1e-4 for every 1000 iteration.
§.§ Speaker verification
We used ECAPA-TDNN as a single-channel SV system from <cit.>. We used the VoxCeleb2 dev dataset for training ECAPA-TDNN. As SV systems often benefit from data augmentation, we used a combination of different data-augmentation techniques, such as Kaldi recipes of data-augmentation (using MUSAN <cit.> and room impulse response dataset[https://www.openslr.org/28/]) and speed perturbation by changing the tempo of speech.
Besides the squeeze-and-excitation blocks, the attention module dimension of ECAPA-TDNN is set to 128. The scale dimension in the Res2Block is set to 8. We extracted 256-dimensional speaker embeddings from the ECAPA-TDNN network. Initially, we trained the ECAPA-TDNN network with a cyclic learning rate varying between 1e-8 and 1e-3 using the triangular policy with the Adam optimizer. The ECAPA-TDNN network is trained with angular margin softmax with a margin of 0.3 and a softmax pre-scaling of 30, for 100k iterations. We provided Mel spectrograms as input to ECAPA-TDNN. We extracted 40-dimensional Mel spectrogram features using the torchaudio library with a window length of 400 samples, a hop size of 160, and an FFT length of 512; these 40-dimensional Mel spectrogram features are the input to the ECAPA-TDNN network. We used a cosine scoring system for verification purposes from the extracted embeddings.
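The feature-extraction settings above correspond to the following torchaudio configuration (assuming 16 kHz audio, implied by the 400-sample window).

```python
import torchaudio

mel_extractor = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000,   # assumed sampling rate
    n_fft=512,
    win_length=400,
    hop_length=160,
    n_mels=40,
)
# waveform, sr = torchaudio.load("utterance.wav")
# features = mel_extractor(waveform)   # (channels, 40, frames), the input to ECAPA-TDNN
```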
§ RESULTS AND DISCUSSION
We compared the performance of the proposed approach with Conv-TasNet as a baseline multichannel speech enhancement front end to the ECAPA-TDNN system. To establish the Conv-TasNet baseline, we trained it on the same training data used by the Diff-Filter system. Also, we used the same network configuration for Conv-TasNet as for the conditioning network of Diff-Filter. In addition to this, we also computed the performance with the oracle Rank-1 MWF in order to analyze the filtering approach based on the diffusion probabilistic model. We used EER as an evaluation metric to evaluate the multichannel SV systems on the MRE and MRE hard trials from the MultiSV dataset. We compute the signal-to-interference ratio (SIR), signal-to-distortion ratio (SDR), and EER on the Fabiole-based multichannel evaluation set. We used the MIR eval tool[https://craffel.github.io/mir_eval/] to compute the SIR and SDR metrics. The SIR and SDR metrics provide insight into the performance of the multichannel speech enhancement system as a front end to the SV system.
In Table <ref>, the Diff-Filter front-end outperforms Conv-TasNet without additional post-training using joint optimization or self-supervised learning. We observed that the proposed approach showed better results on both the MRE and MRE hard trials compared to the baseline results presented in <cit.>, where a ResNet-based SV system was used. We obtained the best results with the proposed approach trained under a self-supervised learning framework, which shows an efficient generalization of speaker representation under noisy conditions using an unlabelled speaker dataset. In the case of the MRE hard protocol, its performance is close to that of the multichannel speech enhancement baseline using the oracle Rank-1 MWF. On the other hand, the proposed approach still showed a significant performance difference with respect to the oracle Rank-1 MWF. Table <ref> illustrates consistent performance improvement by the proposed approach on both trial sets on the Fabiole-based evaluation set. SDR and SIR appear to be closely correlated with EER. With an SIR of 24.37, the proposed jointly optimized approach with self-supervised learning achieves the best performance among all the speech enhancement systems. Similarly, with an SDR of 7.02, the proposed joint optimization approach with self-supervised learning achieves the best performance among all the systems evaluated.
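One plausible way to obtain SDR and SIR with the mir_eval toolkit referenced above is sketched below, stacking the clean speech and noise references against the enhanced speech and its residual; the exact source pairing used in our evaluation may differ.

```python
import numpy as np
import mir_eval

def sdr_sir(clean, noise, enhanced, residual):
    reference = np.stack([clean, noise])            # (2, n_samples) reference sources
    estimate = np.stack([enhanced, residual])       # matching estimated sources
    sdr, sir, sar, _ = mir_eval.separation.bss_eval_sources(reference, estimate)
    return sdr[0], sir[0]                           # metrics for the speech source
```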
The proposed approach shows consistent performance on both the SV and multichannel speech enhancement tasks. The usage of self-supervised learning eases network optimization for generalization from the unlabelled distribution. As EER is one of the primary evaluation metrics for the SV task, adopting the EER loss without speaker labels in self-supervised training strengthens the speaker representation by reducing intra-class variation while increasing inter-class separation. The usage of a conditioning network allowed the diffusion process to perform noise-aware reverse diffusion. The usage of Conv-TasNet as the diffusion decoder enabled step-wise noise removal on the time-domain signal representation, thus inherently taking phase information into account.
§ CONCLUSION
In this work, we proposed Diff-Filter, a multichannel speech enhancement approach used as a front end to SV. We improved the performance of the proposed Diff-Filter by jointly optimizing it with ECAPA-TDNN-based SV and further training under self-supervised contrastive learning. We introduced an EER loss for self-supervised learning to exploit an unlabelled speaker dataset. The obtained results show a significant improvement in performance on the MultiSV dataset compared to state-of-the-art systems. In order to measure speech enhancement performance, we used the SIR and SDR evaluation metrics. The results computed on the simulated evaluation set (derived from Fabiole) are in line with the performance on the MultiSV evaluation set. In future work, we will conduct further experiments with Diff-Filter to assess its effectiveness on other tasks such as source separation and speaker diarization.
|
http://arxiv.org/abs/2307.02730v1
|
20230706023056
|
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating
|
[
"Sheng-Lan Liu",
"Yu-Ning Ding",
"Si-Fan Zhang",
"Wen-Yue Chen",
"Ning Zhou",
"Hao Liu",
"Gui-Hong Lao"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating
Sheng-Lan Liu, Yu-Ning Ding, Si-Fan Zhang, Wen-Yue Chen, Ning Zhou, Hao Liu, Gui-Hong Lao
Sheng-Lan Liu, Yu-Ning Ding, Si-Fan Zhang, Wen-Yue Chen, Ning Zhou, Hao Liu, Gui-Hong Lao are with the Computer Science and Technology, Dalian University of Technology, Dalian 116024, China.
(Corresponding author: Sheng-Lan Liu, E-mail: [email protected])
===============================================================================================================================================================================================================================================================================================================================================================
The fine-grained action analysis of existing action datasets is challenged by insufficient action categories, low fine granularity, limited modalities, and limited tasks. In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS), collected from the World Figure Skating Championships. MMFS, which supports action recognition and action quality assessment, provides RGB and skeleton modalities together with action scores for 11671 clips in 256 categories, including spatial and temporal labels. The key contributions of our dataset fall into three aspects as follows. (1) Independent spatial and temporal categories are proposed for the first time to further explore fine-grained action recognition and quality assessment. (2) MMFS first introduces the skeleton modality for complex fine-grained action quality assessment. (3) Our multi-modality and multi-task dataset encourages more action analysis models. To benchmark our dataset, we adopt RGB-based and skeleton-based baseline methods for action recognition and action quality assessment.
multi-modality and multi-task dataset, fine-grained action recognition, fine-grained action quality assessment.
§ INTRODUCTION
With the deeper exploration of action recognition, fine-grained human action recognition has long been a question of great interest in a wide range of fields<cit.><cit.>. The content of videos with fine-grained human action is composed of different combinations of scenes, tools (fixed or non-fixed), objects (dynamic or static), and persons. In recent years, motion-centered fine-grained action recognition datasets such as <cit.><cit.><cit.> pay more attention to creating new action categories from combinations of tools and human actions<cit.>. Recent developments in fine-grained human action recognition have heightened the need for professional sports data. Compared with the existing datasets covering different scenes, professional sport is challenging because human action plays the dominant role within a single scene<cit.><cit.>. Meanwhile, the combination of human action and non-fixed tools yields a dataset size and a number of action categories that are hard to reach otherwise (more details are elaborated in Sec. 2). Therefore, it is easier to show more details of fine-grained actions with non-fixed tools in a single scene. The challenges of fine-grained human action datasets are mainly derived from 1) annotation quality and 2) the impact of pv (pose variation) and tv (temporal action variation) on cl (change of label). It is worth noting that tv is influenced by the number of repeated action units and the speed variation among actions (one or both will be present in an action sequence). Such impact can be denoted as P(cl|pv) (or P(cl|tv)), in which P indicates the probability of the label changing under the condition of pv or tv. The reader should bear in mind that fine-grained action is based on small inter-class variance. We can divide fine-grained action into fine-grained semantics and fine-grained complexity. Given the above, the disadvantages of the existing datasets can be listed as follows:
Fine-grained semantics. The fine-grained semantics, which can be simply described as P(cl|pv)→1 and P(cl|tv)→1, will lead to small intra-class variance. The fine-grained motion-centered action datasets place more emphasis on the quality of action annotation (which requires professionalism and expert participation), the number of categories, and temporal fine-grained semantics <cit.>. Owing to the lack of an official document[https://] or real-time labeling by experts, most datasets (e.g. dance<cit.>, Taichi<cit.>, etc.) are weakly labeled, and the accuracy and professionalism of their labels are limited <cit.>. Moreover, restricted by fixed tools (e.g. the pommel horse in FineGym<cit.>) or strategic objects (e.g. basketball<cit.>), the number of fine-grained categories in the existing human action datasets is insufficient (see Tab. <ref>). In fact, the relationship between pv and cl tends to be formulated by P(cl|pv)→1, which means the larger pv is, the more categories there will be. And this is what most of the existing datasets adopt to increase the number of fine-grained categories. Yet tv (temporal action variation), which also contributes to creating categories, is largely overlooked. That is, the condition P(cl|tv)→1 is rarely met, so the fine granularity does not increase at the temporal level.
Fine-grained complexity. The fine-grained complexity is mainly reflected in two aspects: 1) the large duration and speed variance, and 2) P(cl|pv)→0 and P(cl|tv)→0. Action categories that only contain fine-grained complexity without fine-grained semantics will lead to large intra-class variance. Up to now, most studies in the field of human action datasets<cit.> have only focused on fine-grained semantics and limited spatial fine-grained complexity (see Fig. <ref>). There has been no detailed investigation of fine-grained complexity at the temporal<cit.> and spatio-temporal levels. For the existing recognition models, it is less challenging to obtain well-trained models from the existing fine-grained human action datasets when the complex spatio-temporal features of fine-grained complexity are inadequately represented.
Modality. Most existing fine-grained human action datasets only provide RGB and flow features. Unfortunately, the skeleton features in the FineGym dataset <cit.>, which contains RGB, flow, and skeleton features simultaneously, are extracted incompletely. Accordingly, the development of fine-grained skeleton-based models is limited in the field of human action recognition.
Taken together, reliable action labels are expected to ensure that the change of label (cl) impacted by tv and pv is accurate. The number of fine-grained actions and the intra-class variance are limited. A small action dataset FSD-10 <cit.> is proposed for fine-grained action analysis with the above characteristics but without independent spatial/temporal fine-grained semantics and large-scale samples. We thus propose a new figure skating dataset named MMFS (Multi-modality Multi-task dataset of Figure Skating), collected from videos with high definition (720P) in the World Figure Skating Championships. Compared with the existing human action datasets, the advantages of MMFS can be summarized as follows:
Strong annotation. Weak annotation is labeled by trained people. Medium annotation is indexed by trained people and official documents. Strong annotation is produced by experts and an official document; MMFS is jointly annotated by real-time expert determination and proficient annotators with the help of an official document, which guarantees that the labels are accurate and professional.
Independently Spatial Label (SL) and Temporal Label (TL). The MMFS dataset has spatio-temporal fine-grained semantics: skates, as wearable and non-fixed tools, assist body movements and add richer pose details to actions <cit.>, introducing more complex spatial fine-grained actions. The number of fine-grained actions is increased by tv and pv as part of the action units change for one given action (please see Fig. <ref> for details). To further research action recognition at both spatial and temporal levels, we propose independent spatial and temporal labels in MMFS. Note that the prediction of temporal labels is more difficult than that of spatial ones. Temporal semantics imposes more rigorous requirements than spatial semantics because the large duration and speed variance leads to large intra-class variance. A hierarchical label structure including temporal and spatial labels is built to compare the fine-grained spatial and temporal semantics.
High complexity of spatio-temporal fine-grained action categories. 1) In comparison with other datasets, the large duration and speed variance of actions allows the temporal granularity to be adequately demonstrated. For instance, a Jump can be completed within 2s, while a StepSequence lasts from 12s to 68s. The longer average duration of MMFS indicates that more action units can be included in an action (see Fig. <ref>). 2) There are sufficient cases of P(cl|pv)→0 and P(cl|tv)→0 in our dataset. More action units and complex spatio-temporal features can maintain the large intra-class variance of fine-grained actions, even with an increasing number of fine-grained action categories (see Section <ref> for details).
Multi-modality. In addition to the RGB feature, the MMFS dataset has the full-body skeleton feature, which poses a great challenge for designing strong multi-modality models.
Multi-task. MMFS, which includes action recognition and action quality assessment tasks, is now the largest multi-modality action quality assessment dataset. The score of skating is determined by the quality of the movement and the rules of the International Skating Union (ISU). To be specific, the score of each movement is composed of a basic value (BV) and a grade of execution (GOE). Therefore, the scoring system is relatively complex, which brings greater challenges to the scoring model.
According to the characteristics and challenges of MMFS, extensive experiments are conducted, including state-of-the-art RGB-based and skeleton-based action recognition models with different input modalities (RGB, flow, and skeleton features). The experiments indicate that: 1) The duration and speed variance of the dataset is large, which makes it difficult to recognize tv-dominated actions; 2) The accuracy of semantic fine-grained actions could be more easily enhanced than that of fine-grained complex (P(cl|pv)→0 or P(cl|tv)→0) actions by increasing the number of input frames.
Overall, this work contributes to the fine-grained action field in two aspects:
(1) To our best knowledge, MMFS is the first fine-grained action dataset with strong annotation, high fine-grained spatio-temporal complexity, multi-modality, and multi-task characteristics.
(2) MMFS is challenging to the existing state-of-the-art action recognition models. MMFS, which can be utilized to develop better models for action-related tasks, provides inspiration for future exploration in this field.
MMFS involves fine-grained action recognition and action quality assessment tasks. According to the characteristics of MMFS, extensive experiments are conducted, including mainstream RGB-based and skeleton-based action recognition models with different input modalities (RGB and skeleton features). The experiments indicate the challenges of our benchmark, which highlights the need for further research on fine-grained action analysis.
§ RELATED WORK
Coarse-grained Action Recognition Dataset. Coarse-grained datasets always focus on the combination of multiple content elements of videos, such as HMDB51 <cit.>, UCF101 <cit.> and ActivityNet <cit.> (and also includes large scale datasets something-something <cit.>, Kinetics <cit.>, Moments <cit.> and AViD <cit.>). The discrimination of these datasets relies on elements (scenes, objects, or tools) rather than the person <cit.>. In order to focus on the motion of video datasets, motion-centered research began to attract more attention. KTH <cit.> and Weizmann <cit.> are early coarse-grained motion recognition datasets without background interference. To enhance the quality of the motion in the dataset, professional sports datasets are involved for high-level human motion expression, such as UCF sport <cit.> and Sport-1M <cit.>, which enhances the number of categories and the variance of action. However, the coarse-grained datasets can not be used to develop fine-grained action analysis models of sports.
Fine-grained Video Dataset. To weaken the category discriminability of scenes and objects <cit.> and to deepen the understanding of videos, researchers focus more on fine-grained action recognition (AR) datasets. Many simple sports based on balls (like football <cit.>, basketball <cit.>) or the body (such as Tai Chi <cit.> and Karate <cit.>), without complex rules, are presented to facilitate fine-grained action datasets. Then, more complex sports datasets like MIT-skating <cit.>, diving48 <cit.>, FSD-10 <cit.> and FineGym <cit.> are proposed to further explore video understanding. However, the fine-grained datasets mentioned above cannot be employed to promote multi-modality and multi-task models.
Multi-modality, Multi-task Dataset. Some fine-grained datasets are presented to support multi-task models, like MultiSports <cit.> (spatio-temporal action detection). Moreover, many datasets (such as AQA <cit.>, AQA-7 <cit.> and FineDiving <cit.>) have been proposed for action quality assessment, where MTL-AQA<cit.> proposes a multi-task model to handle action quality assessment (AQA) and action recognition. MTL-AQA is a diving dataset, but it provides limited fine-grained types (all action types are combinations of a small number of actions). Besides, the pose of an action is of great concern in the AQA task, as it can reveal the key changes of the action. Yet the skeleton modality has only been applied to action recognition, as in NTU <cit.>. In comparison, the size of MMFS is larger than MTL-AQA and an extra modality can be utilized for action quality assessment. Besides, data and experiments on temporal labels are rarely mentioned in previous research work. The specific comparison of related datasets is listed in Tab. <ref>.
§ DATASET
MMFS, a multi-task and multi-modality dataset, is challenging for fine-grained action analysis. In this section, the construction of the MMFS dataset is introduced in detail, including data preparation, data annotation, and quality control. Then, we demonstrate the statistical properties and challenges of MMFS.
§.§ Dataset Construction
Data Preparation. We collect 107 competition videos of the World Figure Skating Championships from 2017 to 2019 as original videos which are standardized to 30fps with high resolutions on Youtube (720p). Then, the videos are segmented according to 439 figure skaters of two individual items (men, ladies). Each segmented pre-cut video is a complete performance of one skater for checking fine-grained action annotation results and training annotators.
Data Annotation. We annotate two semantic levels for the MMFS dataset, including 3 sets and 256 fine-grained categories (more details of the 256 categories of MMFS can be found on our project page). Before annotating the original videos, all the annotators had been trained by professional annotators with figure skating knowledge, combining experts' annotation information for all sampled actions in the pre-cut videos. The pipeline from experts' annotation to proficient annotators' parsing, combined with the ISU technical documents, forms a strong annotation structure that assures the annotation quality of MMFS. The official document is referenced by (proficient) annotators during all annotation steps. The main steps of annotation can be summarized as follows (see Fig. <ref>). First, the start and end frames of one action (as a clip) in the original videos are determined according to the experts' ground truth provided in the original videos (see Fig. <ref>). Then, the incomplete and redundant clips of the original videos are removed before annotation. At last, all the clips are annotated manually.
Quality Control. In order to ensure the quality of MMFS, we adopt the following control methods. 1) Before the formal annotation task, the annotators' annotation ability is evaluated to confirm that they are competent for this work. 2) The information board in the upper left corner of the videos is key to ensuring annotation quality; it not only assists in editing the videos but also provides the ground truth for clips. 3) Professional annotators check and revise all the action annotations by leveraging the pre-cut videos and all the clips of the original videos.
§.§ Dataset Statistics
MMFS contains 11671 clips captured from 107 competition videos, totaling 35.38 hours. To balance the sample distribution of MMFS, we select 63 categories out of 256 by filtering out categories with insufficient data. Finally, 5104 samples are selected to construct MMFS-63. The samples of the training set and the test set show a heavy-tailed distribution in MMFS-63 (see Fig. <ref>). The average duration of each category is shown in Fig. <ref>(b). Specifically, the total video duration of the selected samples reaches 16.35h and the average duration is 11.54s. The duration of actions ranges from 0.83s to 84.53s with a standard deviation of 10.11s. Compared with the existing datasets <cit.><cit.>, the average duration in MMFS-63 is longer and the variance of duration is larger, so more fine-grained properties can be obtained, bringing more challenges.
§.§ Dataset Characteristics
High Quality. (1) High Video Quality. All the RGB videos in MMFS are 1080p, which helps describe the subtle differences between clips. High video quality and non-fixed tools are two prerequisites for extracting reliable skeleton features from the videos. (2) Strong annotation. Unlike the weak annotation in <cit.>, MMFS is strongly annotated on two levels: first, joint annotations are achieved to ensure label reliability by professional annotators combining the ISU technical document and the provided experts' real-time ground truth of the original videos (see Fig. <ref>). Second, the footage always follows the skater to avoid misclassification due to irrelevant frames.
Multi-task. Generally speaking, action datasets are used for two tasks: action recognition and segmentation. However, Action Quality Assessment <cit.> (AQA) emerges as an imperative and challenging task in MMFS, which can be used to evaluate the action performance of skaters based on BV and GOE scores. As shown in Fig. <ref>(b), BV and GOE, which depend on action categories and action performance, respectively, are included in our dataset. BV depends on the action type and the degree of action difficulty. Besides, a 10% bonus on the BV score is appended in the latter half of a program.
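As a toy illustration of this decomposition, a clip-level target score can be thought of as the base value (with the second-half bonus) plus the grade of execution; the base values in the lookup table below are made-up placeholders, not the official ISU scale of values.

```python
# illustrative base values only; the real table comes from the ISU documents
BASE_VALUES = {"2Axel": 3.3, "3Axel": 8.0, "3Lutz": 5.9}

def clip_score(action, goe, in_second_half):
    bv = BASE_VALUES[action]
    if in_second_half:
        bv *= 1.10          # 10% bonus on BV in the latter half of a program
    return bv + goe
```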
Multi-modality. We extract the RGB, flow, and skeleton features from the videos in MMFS. Specifically, the skeleton features are obtained using HRNet <cit.>(see Fig. <ref>(b) and more details in supplementary materials). Furthermore, the audio features, which may play important roles in AQA tasks, can also be extracted from videos. Actions matched to musical structure tend to obtain higher GOE scores in the official documentation.
Hierarchical Multi-label. All actions are labeled manually on three levels, coined as set, sub-set, and element. And the sub-set can be divided into the spatial label (SL) and temporal label(TL) as shown in Fig. <ref>.
§.§ Dataset Challenge
For most action recognition datasets, scenes, objects, tools, and persons are essential elements. Many fine-grained actions are generated based on the combination of the person and other elements. MMFS pays more attention to fine-grained action by non-fixed tools (skates). We analyze the fine-grained semantics and the fine-grained complexity in the MMFS, to propose new challenges for the existing models. Figure 3 describes the differences between semantics and complexity. The specific challenges of MMFS are as follows:
Fine-grained semantics The challenges in Fine-grained semantics can be described as the change of labels from the subtle spatio-temporal variation of action units. (1) Temporal variation (P(cl|tv)→1). It is a problem to determine the number of rotations from a few frames. For example, it is hard to distinguish 2Axel jump and 3Axel jump through limited frames. (2) Spatial variation (P(cl|pv)→1). It would be difficult to recognize an action by subtle spatial variation of action units. Fig. <ref>(b) shows the subtle variation between the Flip jump and the Lutz jump. The subtle variation is that the edge of the ice blade is outside on Lutz and inside on Flip. (3) Spatio-temporal variation<cit.> (P(cl|pv,tv)→1). In Fig. <ref>(a), the classification will be confused by the similarity features in the partial spatio-temporal variation among classes.
Fine-grained complexity. The challenges in fine-grained complexity are reflected in the larger intra-class variance and in the large variance of action duration and speed; see Fig. <ref> for details. (1) Temporal variation (P(cl|tv)→0). The temporal intra-class variance is illustrated by the samples in Fig. <ref>: although the top two actions belong to the same category, they clearly differ in both speed and number of rotations, while the two bottom samples are very similar in speed yet belong to different actions (P(tv|cl)→0). (2) Spatial variation (P(cl|pv)→0). The increased intra-class variance of action features is mainly reflected in the GOE of actions: an insufficient number of turns causes a GOE deduction whereas raising the hands earns a bonus (Fig. <ref>), and further deductions are caused by hand support, falling over, landing on parallel feet, or stumbling during the landing. Beyond GOE, some skaters prefer clockwise rotation while others prefer the opposite direction. (3) Spatio-temporal variation (P(cl|pv,tv)→0). This challenge is illustrated by the comparison of StepSequences: according to the official document, StepSequence1 requires at least five difficult sub-actions while StepSequence2 requires at least seven, and the sub-actions of a StepSequence of the same grade can be combined differently by each skater.
§ EXPERIMENT
§.§ Experimental Preparation
In MMFS-63, the samples are split into 4113 clips for training and 991 clips for testing. Annotations are provided at three semantic levels: the set level of MMFS, the sub-set level (Temporal Label 22 (TL22) and Spatial Label 25 (SL25)), and the fine-grained element level (MMFS-63). We use RGB videos at 30 fps and extract skeleton features with 17 joints per frame by leveraging HRNet.
To better understand the performance of prominent action recognition models on this proposed dataset, we benchmark a variety of models on MMFS and group the models into two categories: RGB-based models and skeleton-based models.
RGB-based Models. RGB-based action recognition models process very high dimensional input and are therefore more sensitive to the size of the training data. We select several prominent action recognition models as test methods: the RGB-based recognition experiments are conducted with I3D <cit.>, TSN <cit.>, TSM <cit.>, and PAN <cit.>, while C3D-LSTM <cit.>, C3D-AVG-MTL <cit.>, CoRe <cit.>, and DAE-MLP <cit.> are used as baselines for action quality assessment.
Skeleton-based Models. We adopt several skeleton-based models on this dataset, including ST-GCN <cit.>, 2S-AGCN <cit.>, CTR-GCN <cit.>, EfficientGCN-B4 <cit.>, and PoseC3D <cit.>. For these methods, the large variance in clip duration (clip lengths range between 25 and 2536 frames) motivates us to use the average frame count over all clips (320 frames) to construct the input, as sketched below[The 320 frames are sampled at equal intervals from each clip. Clips with fewer than 320 frames are padded with zeros rather than with skeleton features].
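To make this length normalization concrete, the following minimal numpy sketch (our own illustration, not the released preprocessing code; the array shapes and function name are illustrative) samples 320 equally spaced frames from a longer clip and zero-pads a shorter one:

```python
import numpy as np

def normalize_clip_length(skeleton, num_frames=320):
    """Resample a skeleton clip to a fixed temporal length.

    skeleton: array of shape (T, 17, 2) -- per-frame joint coordinates.
    Clips longer than `num_frames` are sampled at equal intervals;
    shorter clips are padded with zeros (not with skeleton features).
    """
    T = skeleton.shape[0]
    if T >= num_frames:
        # indices of `num_frames` equally spaced frames
        idx = np.linspace(0, T - 1, num_frames).round().astype(int)
        return skeleton[idx]
    out = np.zeros((num_frames,) + skeleton.shape[1:], dtype=skeleton.dtype)
    out[:T] = skeleton          # keep the original frames
    return out                  # trailing frames stay zero

# example: a 25-frame clip and a 2536-frame clip both become (320, 17, 2)
short_clip = np.random.rand(25, 17, 2)
long_clip = np.random.rand(2536, 17, 2)
print(normalize_clip_length(short_clip).shape, normalize_clip_length(long_clip).shape)
```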
In the benchmark, we focus on fine-grained action recognition with multi-modality, spatial and temporal semantics comparison, and the performance of mainstream methods in action quality assessment. The parameterization of all models can be found in the supplemental material.
§.§ Fine-grained Action Recognition and Quality Assessment
Multi-modality Action Recognition. For image-based videos, the RGB modality captures the spatial content of frames, while the skeleton modality extracts full-body motion features from which most spatial appearance content has been removed. On MMFS, the accuracies of the skeleton modality in Tab. <ref> are substantially higher than those of the RGB modality in Tab. <ref>. Together, the results of Tab. <ref> and Tab. <ref> illustrate that MMFS is discriminated mainly by variations in body-pose motion and is not sensitive to the visual scene.
The Comparison of the Action Quality Assessment Task. For action quality assessment, we adopt the Spearman correlation coefficient (SC) between predicted and ground-truth scores as the evaluation metric; a minimal computation is sketched below. As shown in Tab. <ref>, the mainstream methods achieve reasonable but far from excellent correlations on our dataset, which shows that it brings new challenges to this task.
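For reference, the metric can be computed for instance with scipy; the score arrays below are placeholders, not results from our experiments:

```python
from scipy.stats import spearmanr
import numpy as np

# hypothetical predicted and ground-truth quality scores on a small test split
predicted = np.array([31.2, 45.0, 27.8, 52.3, 40.1])
ground_truth = np.array([30.5, 47.2, 25.9, 50.0, 42.8])

rho, _ = spearmanr(predicted, ground_truth)   # rank correlation in [-1, 1]
print(f"Spearman correlation: {rho:.4f}")
```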
§.§ The Comparison of Spatial and Temporal Semantics
Hierarchical Label. Unlike coarse-grained datasets, the 3 sets of MMFS are divided into 63 action categories, yielding a fine-grained benchmark. As shown in Tab. <ref> and Tab. <ref>, the performance of all compared models drops considerably once this fine granularity is taken into account. TSN <cit.> does not achieve outstanding performance even on the three sets, while ST-GCN <cit.> obtains better results from the 320-frame skeleton features; its performance, however, remains limited on the Spin and Sequence sets. We list the most confused actions in the supplemental material: they concentrate in the Spin set, whose longer durations require more fine-grained temporal semantics.
The Comparison over SL and TL. To determine whether spatial or temporal semantics has the greater influence on fine-grained recognition, we introduce TL22 and SL25 at the sub-set level. As shown in Tab. <ref>, recognition accuracy under the temporal label division (TL22) is worse than under the spatial division (SL25), which indicates that temporal recognition is the more challenging task. The similarity of the results on TL22 and MMFS-63 further shows that most of the difficulty is concentrated in temporal recognition. Together, these results demonstrate that existing action recognition models fail to extract temporally discriminative features in both the skeleton and RGB modalities.
The Key Challenge in Temporal Semantics. As shown in Fig. <ref>, as the number of sampled frames increases, CTR-GCN improves significantly on our dataset while achieving only a small gain on FineGym99. This shows that, although fine-grained datasets are generally more sensitive to temporal variance, temporal features are particularly difficult to extract on MMFS.
§ CONCLUSION
In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS) to further research on fine-grained action analysis. In contrast to existing fine-grained action datasets, MMFS contains richer fine-grained semantics, both spatial and temporal. All 11671 clips are annotated with a hierarchical multi-label structure, and fine-grained analysis can be conducted across multiple modalities. We benchmark mainstream RGB-based and skeleton-based models, and our experiments highlight that, for existing models, temporal semantics is more difficult than spatial semantics and that the skeleton modality performs better for fine-grained analysis. We hope this new, unbalanced dataset will foster further research on fine-grained analysis.
|
http://arxiv.org/abs/2307.01765v1
|
20230704150840
|
Wasserstein medians: robustness, PDE characterization and numerics
|
[
"Guillaume Carlier",
"Enis Chenchene",
"Katharina Eichinger"
] |
math.OC
|
[
"math.OC",
"math.AP",
"stat.AP"
] |
Wasserstein medians: robustness, PDE characterization and numerics
Guillaume Carlier, Enis Chenchene, Katharina Eichinger
August 1, 2023
===================================================================
We investigate the notion of Wasserstein median as an alternative to the Wasserstein barycenter, which has become popular but may be sensitive to outliers. In terms of robustness to corrupted data, we indeed show that Wasserstein medians have a breakdown point of approximately 1/2. We give explicit constructions of Wasserstein medians in dimension one which enable us to obtain L^p estimates (which do not hold in higher dimensions). We also address dual and multimarginal reformulations. In convex subsets of ℝ^d, we connect Wasserstein medians to a minimal (multi) flow problem à la Beckmann and a system of PDEs of Monge–Kantorovich-type, for which we propose a p-Laplacian approximation. Our analysis eventually leads to a new numerical method to compute Wasserstein medians, which is based on a Douglas–Rachford scheme applied to the minimal flow formulation of the problem.
Keywords: Wasserstein medians, optimal transport, duality, Beckmann's problem, p-Laplace system approximation, Douglas–Rachford splitting method.
§ INTRODUCTION
The notions of mean and median are well-known to be of variational nature. For instance, the arithmetic mean of a sample composed by N points in ℝ^d is the minimizer of the sum of the squared Euclidean distances to the sample points. Minimizing a weighted sum of distances to the sample points, one gets a notion of weighted medians, which in the literature is commonly referred to as Torricelli–Fermat–Weber points or geometric medians. As pointed out by Maurice Fréchet in his seminal work <cit.>, these definitions can be generalized to any metric space (,d), yielding the notion of Fréchet mean and Fréchet median (or in general typical element).
The concept of Wasserstein barycenter, which corresponds to Fréchet means over the Wasserstein space of probability measures with finite second moments and equipped with the quadratic Wasserstein distance, was introduced and extensively studied in <cit.>. Since then, research on Wasserstein barycenters has expanded in various directions. For instance, investigations have been conducted on Riemannian manifolds <cit.>, population barycenters involving possibly infinitely many measures <cit.>, and Radon spaces <cit.>. The concept has gained popularity as a valuable tool for meaningful geometric interpolation between probability measures, finding applications in diverse fields such as image synthesis <cit.>, template estimation <cit.>, bayesian learning <cit.>, and statistics <cit.>. Despite the inherent complexity of computing Wasserstein barycenters <cit.>, numerical methods based on entropic regularization and the Sinkhorn algorithm have demonstrated their efficiency in calculating these interpolations <cit.>.
In the present paper, we investigate a slightly different problem, namely that of Wasserstein medians, which, given weights λ_1, …, λ_N and probability measures ν_1, …, ν_N with finite first moments over a metric space 𝒳, consists in finding a probability measure ν minimizing the dispersion criterion ∑_i=1^N λ_i W_1(ν_i, ν). Following Fréchet's metric viewpoint, this amounts to looking for Torricelli–Fermat–Weber or equiprobable points in the Wasserstein space of order one. Our primary motivation for studying these objects comes from the following question: does the well-known robustness of geometric medians extend to Wasserstein medians? Consider for instance the problem of averaging the daily attendance frequency of some London underground stations[Tfl open data <https://tfl.gov.uk/info-for/open-data-users>, accessed on June 29, 2023] as in Figure <ref> or the five pictures on the left of Figure <ref>. It is pretty clear in these examples that Wasserstein medians show some sort of robustness, and that in general they should behave quite differently from the barycenter.
Our objective is to further explore the notion of Wasserstein median with a first focus on stability and robustness. We also investigate in depth the one-dimensional case where special constructions (which we call vertical and horizontal selections) select medians which inherit properties of the sample measures, in particular, we show that if all the sample measures ν_i are absolutely continuous with densities bounded by some M_i, then there exists a Wasserstein median with a density bounded by max_i M_i, which, as we will show later (Example <ref>), cannot be true in higher dimensions. For more general situations, we present some general tools to study Wasserstein medians, such as multi-marginal and dual formulations for the initial convex minimization problem. To the best of our knowldege, Wasserstein medians have not been very much investigated even in the Euclidean setting with more than two sample measures, however related optimal matching problems (with two sample measures and additional constraints) have been studied in <cit.>, <cit.> and <cit.>. In the Euclidean setting in several dimensions, we also characterize medians by a minimal flow problem à la Beckmann <cit.> and a system of PDEs of Monge–Kantorovich type. This analysis leads to a new numerical method to compute Wasserstein medians, which is based on a Douglas–Rachford scheme applied to the minimal flow formulation of the problem.
The paper is organized as follows: in Section <ref>, we introduce the problem, show existence of Wasserstein medians and consider some basic examples. In Section <ref>, we discuss the stability of the notion subject to perturbations of the sample measures and prove that the break-down point of the Wasserstein median problem with uniform weights is at least 1/2, i.e. to drastically corrupt the estimation of the Wasserstein median one has to modify at least half of the sample measures. In Section <ref>, we focus on the one-dimensional case and emphasize the properties of medians which we call vertical and horizontal median selections. In Section <ref>, we present dual and multi-marginal formulations of the problem. In Section <ref>, we use a minimal flow formulation of the Wasserstein median problem to derive a system of Monge–Kantorovich type PDEs that characterizes medians. We also describe an approximation by a system of p-Laplace equations. We conclude in Section <ref> with a brief description of the numerical methods we implemented to obtain the various figures in this paper and present a new one based on a Douglas–Rachford scheme on the flow formulation.
§ DEFINITION, EXISTENCE AND BASIC EXAMPLES
Setting. Let (𝒳,d) be a proper metric space, i.e. a metric space in which closed balls are compact. This implies in particular that (, d) is Polish, i.e. separable and complete. Note that (𝒳,d) being proper is a natural assumption to define medians by minimization of weighted sums of distances; indeed this implies that for every integer N≥ 1, every (x_1, …, x_N)∈^N and every λ:=(λ_1,…,λ_N) in the simplex Δ_N:
Δ_N:= { (λ_1,…,λ_N) ∈_+^N : ∑_i=1^N λ_i =1},
the set of medians of (x_1, …, x_N) with weights λ, defined by
med_λ(x_1, …, x_N):= argmin_x ∈𝒳∑_i=1^N λ_i d(x_i, x)
is a nonempty (and compact) subset of 𝒳.
[Medians on the real line]
For 𝒳=ℝ equipped with the distance associated with the absolute value, N≥ 1, λ=(λ_1, …, λ_N)∈Δ_N and 𝐱:=(x_1, …, x_N)∈ℝ^N, the median set med_λ(𝐱) is the set of minimizers of the convex, piecewise affine function x↦ f(x):=∑_i=1^N λ_i | x -x_i|, this function being right and left differentiable at each point with corresponding one-sided derivatives given by
f'(x^-)=∑_i : x_i <xλ_i- ∑_i : x_i ≥ xλ_i=2 ∑_i : x_i <xλ_i-1, f'(x^+) =2 ∑_i : x_i ≤ xλ_i-1.
We see that x belongs to the median interval med_λ(𝐱) if and only if f'(x^-) ≤ 0 ≤ f'(x^+), i.e.
∑_i : x_i <xλ_i ≤1/2≤∑_i : x_i ≤ xλ_i,
that is, med_λ(𝐱)= [m^-(𝐱), m^+(𝐱)], where m^-(𝐱) and m^+(𝐱) stand for the lower and upper medians respectively, which are given by:
m^-(𝐱):=inf{ y ∈ℝ : ∑_i : x_i ≤ yλ_i ≥1/2}, m^+(𝐱):=sup{ y ∈ℝ : ∑_i : x_i < yλ_i ≤1/2}.
We shall use extensively properties of lower and upper medians when studying Wasserstein medians on ℝ in Section <ref>. Obviously, since f is affine in the neighbourhood of each point of ℝ∖{x_1, …, x_N}, both m^-(𝐱) and m^+(𝐱) belong to the sample {x_1, …, x_N}:
I_±(𝐱):={i =1, …, N : m^±(𝐱) =x_i }≠∅.
Note also that both m^- and m^+ are positively homogeneous and that, setting 𝟏:=(1, …, 1),
m^±(𝟏)=1, m^±(α𝐱)=α m^±(𝐱), for all α∈ℝ_+.
Of course, in general, medians are highly non-unique. For instance if N=2k is even, λ_i=1/N and x_1<…< x_N, the median interval is [x_k, x_k+1]. A mild condition guaranteeing uniqueness, i.e. m^-(𝐱)= m^+(𝐱) for every 𝐱, is:
There is no subset I⊂{1,…,N} such that: ∑_i∈ Iλ_i=1/2.
Despite non-uniqueness, both selections m^+ and m^- enjoy nice properties: obviously they are monotone in each of their arguments and invariant by translation, that is, for all 𝐲≥𝐱 (i.e. 𝐲-𝐱∈ℝ_+^N) we have m^±(𝐲)≥ m^±(𝐱), and for α∈ℝ it holds m^±(𝐱 + α𝟏) = m^±(𝐱)+α. This implies in particular that for every 𝐱 and 𝐲 one has
m^±(𝐱) + min_i=1, …, N (y_i-x_i) ≤ m^±(𝐲) ≤ m^±(𝐱) + max_i=1, …, N (y_i-x_i),
so that m^± are Lipschitz continuous. Inequality (<ref>) will be very useful for studying horizontal and vertical Wasserstein median selections on the real line in Section <ref>. In fact, we will also need to use a slightly refined form of (<ref>), namely: for all 𝐱 there exists ε>0 such that for all 𝐲 with ‖𝐲-𝐱‖_∞≤ε it holds
m^±(𝐱) + min_i∈ I_±(𝐱) (y_i-x_i) ≤ m^±(𝐲) ≤ m^±(𝐱) + max_i∈ I_±(𝐱) (y_i-x_i),
where we recall that I_±(𝐱) are given by (<ref>). The proof of (<ref>) is postponed to the appendix.
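As an illustration of the formulas above, the following numpy sketch (the function name and implementation choices are ours) computes the lower and upper weighted medians m^-(𝐱) and m^+(𝐱):

```python
import numpy as np

def weighted_medians(x, lam):
    """Lower and upper weighted medians m^-(x), m^+(x) of points x with weights lam.

    m^- = inf{ y : sum_{x_i <= y} lam_i >= 1/2 },
    m^+ = sup{ y : sum_{x_i <  y} lam_i <= 1/2 };
    both belong to the sample {x_1, ..., x_N}.
    """
    x, lam = np.asarray(x, float), np.asarray(lam, float)
    order = np.argsort(x)
    xs, ls = x[order], lam[order]
    cum = np.cumsum(ls)                      # sum_{x_i <= xs[k]} lam_i
    m_minus = xs[np.searchsorted(cum, 0.5)]  # first point where the weight reaches 1/2
    cum_strict = cum - ls                    # sum_{x_i < xs[k]} lam_i
    m_plus = xs[np.where(cum_strict <= 0.5)[0][-1]]  # last point still admissible
    return m_minus, m_plus

# even sample with uniform weights: the median interval is [x_k, x_{k+1}]
print(weighted_medians([0.0, 1.0, 2.0, 3.0], [0.25, 0.25, 0.25, 0.25]))  # (1.0, 2.0)
```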
[Torricelli–Fermat–Weber points] Consider now 𝒳=ℝ^d equipped with the distance associated with the Euclidean norm |·|, λ∈Δ_N and (x_1, …, x_N)∈(ℝ^d)^N; by definition, x∈ med_λ(x_1, …, x_N) if and only if x minimizes the convex function ∑_i=1^N λ_i |· -x_i|, i.e. satisfies the optimality condition
0 ∈∑_i=1^N λ_i ∂|·| (x-x_i),
where ∂|·| (x-x_i) is the subdifferential of the Euclidean norm at x-x_i:
∂|·| (x-x_i)={p∈ℝ^d : | p |≤ 1, ⟨ p, x-x_i⟩= | x-x_i|}= B(0, 1) if x=x_i, and ={ (x-x_i)/| x-x_i|} otherwise,
where B(0,1) stands for the closed unit Euclidean ball. Therefore x∈ med_λ(x_1, …, x_N) if and only if there exist p_1, …, p_N such that
| p_i |≤ 1, ⟨ p_i, x-x_i⟩= | x-x_i|, i=1, …, N, ∑_i=1^N λ_i p_i=0.
Note that for x∈ med_λ(x_1, …, x_N) either x=x_i for some i or
∑_i=1^N λ_i (x-x_i)/| x-x_i|=0
so that in any case x is a convex combination of x_1, …, x_N, we thus have
med_λ(x_1, …, x_N) ⊂co{ x_1, …, x_N}.
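For numerical purposes, weighted geometric medians satisfying this optimality condition are classically approximated by the Weiszfeld fixed-point iteration; the sketch below is a standard implementation of that scheme (not taken from the paper), with a small epsilon-regularization of the distances as a simple way to handle iterates that hit a sample point:

```python
import numpy as np

def geometric_median(points, weights, n_iter=200, eps=1e-9):
    """Weiszfeld-type fixed-point iteration for the weighted geometric median.

    points: (N, d) array, weights: (N,) array summing to 1.
    Distances are floored at eps to avoid division by zero when the iterate
    coincides with a sample point (a regularization, not the exact handling
    prescribed by the subdifferential condition above).
    """
    points = np.asarray(points, float)
    w = np.asarray(weights, float)
    x = np.average(points, axis=0, weights=w)          # start at the weighted mean
    for _ in range(n_iter):
        dist = np.maximum(np.linalg.norm(points - x, axis=1), eps)
        coef = w / dist
        x_new = (coef[:, None] * points).sum(axis=0) / coef.sum()
        if np.linalg.norm(x_new - x) < 1e-12:
            break
        x = x_new
    return x

# four points forming a convex quadrilateral: the median is the diagonal intersection
pts = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
print(geometric_median(pts, np.full(4, 0.25)))          # close to (0, 0)
```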
Denote by 𝒫(𝒳) the set of Borel probability measures on 𝒳. Recall that on 𝒫(𝒳) the narrow topology is the coarsest topology making μ∈𝒫(𝒳) ↦∫_𝒳 f dμ continuous
for every continuous and bounded function f on 𝒳 and that this topology is metrizable on 𝒫(𝒳) (so that there is no need to distinguish between narrow compactness and narrow sequential compactness). We denote by 𝒫_1(𝒳) the set of Borel probability measures with finite first moment, i.e. the set of μ∈𝒫(𝒳) for which for some (equivalently for all) x_0 ∈𝒳, d(x_0, ·)∈ L^1(𝒳, μ). We endow 𝒫_1(𝒳) with the Wasserstein distance of order one:
W_1(μ,ν):=inf_γ∈Π(μ,ν) ∫_^2d(x,y) γ̣(x,y),
where Π(μ,ν) is the set of transport plans between μ and ν i.e. the set of Borel probability measures on ^2 with marginals μ and ν.
With this choice, the metric space (_1(), W_1) is a Polish (but not necessarily proper) space. Let us recall the Kantorovich–Rubinstein duality formula which expresses W_1(μ,ν) as
W_1(μ,ν):=sup{∫_𝒳 u dμ- ∫_𝒳 u dν : u:𝒳→ℝ 1-Lipschitz },
in particular, W_1(μ, ν) is the dual Lipschitz semi-norm of μ-ν and the linear interpolation μ_t:= (1-t)μ+ tν for t∈ [0,1], is obviously a geodesic between μ and ν i.e.:
W_1(μ_t, μ_s)=| t-s | W_1(μ, ν), for all (t,s)∈ [0,1]^2.
Note that convergence in _1() for W_1 implies convergence for the narrow topology but is stronger unless is compact.
For proofs of these classical facts and more on Wasserstein distances, we refer to the textbooks <cit.>.
Wasserstein medians. As mentioned in the introduction, on (𝒫_1(𝒳), W_1), one can naturally define medians in the Fréchet sense as follows. Given N≥ 1, 𝛎=(ν_1, …, ν_N) ∈𝒫_1(𝒳)^N and λ:=(λ_1,…,λ_N)∈Δ_N, consider the weighted dispersion functional
ℱ_λ, 𝛎(μ):= ∑_i=1^N λ_i W_1(ν_i,μ), for all μ∈𝒫_1(𝒳)
then Wasserstein medians are defined as minimizers of this dispersion functional:
[Wasserstein medians] For N≥ 1, let 𝛎=(ν_1,…,ν_N) ∈𝒫_1(𝒳)^N and λ=(λ_1,…,λ_N)∈Δ_N, defining ℱ_λ, 𝛎(ν) by (<ref>),
we call Wasserstein median of (ν_1,…,ν_N) with weights λ any solution of the following (convex) problem
v(λ, 𝛎):=min_μ∈𝒫_1(𝒳) ℱ_λ, 𝛎(μ)
We denote by 𝖬𝖾𝖽_λ(ν_1,…,ν_N) the set of all Wasserstein medians of 𝛎 with weights λ.
The existence of a solution of (<ref>) follows from the direct method:
Let N≥1, =(ν_1,…,ν_N) ∈𝒫_1()^N and ∈Δ_N, then there exists a minimizer of (<ref>) and the set 𝖬𝖾𝖽_λ(ν_1,…,ν_N) is a convex and narrowly compact subset of 𝒫_1().
The functional ℱ_, is l.s.c. for the narrow topology (this follows at once from (<ref>)) and by the triangle inequality for every x_0∈ and every μ∈𝒫_1() one has
W_1(δ_x_0, μ)=∫_ d(x_0, x) μ̣(x) ≤ℱ_, (μ) + ℱ_, (δ_x_0),
which implies that the first moment is uniformly bounded on sublevel sets of ℱ_,. Since (, d) is proper, this implies that sublevel sets of ℱ_, are tight hence narrowly relatively compact by Prokhorov's theorem. This implies nonemptiness and narrow compactness of 𝖬𝖾𝖽_λ(ν_1,…,ν_N), convexity follows from the convexity of F_,.
Let us end this section with some simple explicit examples.
[Medians of Dirac masses] If ν_i=δ_x_i is a Dirac mass for all i =1,…, N, then 𝖬𝖾𝖽_λ(ν_1,…,ν_N) is nothing but the set of probability measures supported on (x_1, …, x_N). In particular if =, and x_1 ≤ x_2 and =(λ_1, 1-λ_1) with λ_1 ∈ (0,1), (x_1, x_2)=[x_1, x_2] so that (δ_x_1, δ_x_2) is the set of all probability measures supported on [x_1, x_2].
[Threshold effect] Suppose that there is J ⊂{1,…,N} with ∑_j ∈ Jλ_j ≥1/2 and ν:=(ν_1,…,ν_N) with ν_j = ρ for j ∈ J for some ρ∈_1(). Then a Wasserstein median of ν is given by ρ since for any ρ̃∈_1(), denoting J^c := {1,…,N }∖ J
∑_i=1^N λ_i W_1(ν_i, ρ) = ∑_i ∈ J^cλ_i W_1(ν_i, ρ)
≤∑_i=1^N λ_i W_1(ν_i, ρ̃) + ∑_i ∈ J^cλ_i W_1(ρ̃, ρ)- ∑_i ∈ Jλ_i W_1(ν_i, ρ̃)
= ∑_i=1^N λ_i W_1(ν_i, ρ̃) + ( ∑_i ∈ J^cλ_i - ∑_i ∈ Jλ_i )_≤ 0 W_1(ρ̃, ρ).
Note that if ∑_j ∈ Jλ_j > 1/2, this also proves that the Wasserstein median is unique and equal to ρ. Note that this threshold effect is not specific to Wasserstein medians but holds for Fréchet medians in any metric space.
[Medians of two measures]
If N=2, ν_1 ≠ν_2, it follows from the previous example that when λ_1 ∈ (1/2,1) (respectively λ_1 ∈ (0,1/2)) the median of (ν_1, ν_2) with weights (λ_1, 1-λ_1) is ν_1 (respectively ν_2), when λ_1=λ_2=1/2, by the triangle inequality any interpolate (1-t) ν_1+t ν_2, t∈ [0,1] belongs to 𝖬𝖾𝖽_1/2, 1/2(ν_1,ν_2).
[Medians of translated measures]
Consider =^d endowed with the Euclidean distance, μ∈_1(), (x_1, …, x_N)∈^N and let ν_i:=τ_v_i#μ be the translation of μ by v_i (i.e. τ_v_i_#μ(A)=μ(A-v_i), for every Borel subset A of ^d). We claim that whenever x ∈(x_1, …, x_N) one has τ_x_#μ∈(τ_x_1_#μ, …, τ_x_N_#μ). To see this, let (p_1, …, p_N) satisfy the optimality condition (<ref>), then we first have
∑_i=1^N λ_i W_1( τ_x_i_#, τ_x_#μ) ≤∑_i=1^N λ_i | x-x_i|=∑_i=1^N λ_i ⟨ p_i, x-x_i⟩ =-∑_i=1^N λ_i ⟨ p_i , x_i⟩.
Let now ν∈_1(), since p_i ∈ B(0,1) the affine function u_i(y):= ⟨ p_i, y+x-x_i⟩ is 1-Lipschitz so that by the Kantorovich–Rubinstein formula
W_1(τ_x_i_#μ, ν) ≥⟨ p_i, ∫_^d (y-x_i+x) ν̣(y)⟩- ⟨ p_i, ∫_^d (y+x) μ̣(y) ⟩
= ⟨ p_i, ∫_^d (y-x_i)ν̣(y)⟩ - ⟨ p_i , ∫_^d y μ̣(y) ⟩.
Multiplying by λ_i, summing and using (<ref>), we obtain
∑_i=1^N λ_i W_1(τ_x_i_#μ, ν) ≥ -∑_i=1^N λ_i ⟨ p_i, x_i⟩≥∑_i=1^N λ_i W_1( τ_x_i_#, τ_x_#μ)
which shows that τ_x_#μ∈(τ_x_1_#μ, …, τ_x_N_#μ).
§ STABILITY AND ROBUSTNESS
The stability with respect to perturbations of the sample measures is a crucial property for any location estimator especially when the underlying space is unbounded. This is why, in this section, we will first investigate some stability properties of Wasserstein medians (improving the easy narrow stability to the stability in W_1-distance), note that Theorem 5.5 in <cit.> establishes strong consistency results in a much more general framework. We will then show robustness to outliers by showing that the breakdown point of Wasserstein medians on an unbounded is at least 1/2, the proof will be an easy adaptation of <cit.> revealing that the argument is in fact quite general and actually carries over to Fréchet medians on geodesic metric spaces.
§.§ Compactness in W_1 distance and stability with respect to data
Let N≥ 1, (, )=(λ_1, …, λ_N, ν_1, …, ν_N) and (', ')=(λ'_1, …, λ'_N, ν'_1, …, ν'_N) in Δ_N×_1()^N, an obvious consequence of the triangle inequality is the fact that for any μ∈_1(), one has
ℱ_, (μ) ≤ℱ_', '(μ) + max_i=1, …, N W_1(ν_i, , ν'_i)+ ∑_i=1^N |λ_i-λ'_i|max_i=1,…, N W_1(ν'_i, μ).
This pointwise inequality for the dispersions corresponding to (, ) and (', '), implies in particular that ℱ_', ' converges to ℱ_, uniformly on W_1 balls as
∑_i=1^N |λ_i-λ'_i| + max_i=1, …, N W_1(ν_i, , ν'_i) → 0.
Let us also observe that for every x_0∈, again by the triangle inequality, one also has the moment bound
sup_μ∈()∫_ d(x_0, x) μ̣(x) ≤ 2 max_i=1, …, N∫_ d(x_0, x_i) ν̣_i(x).
Recalling the definition of v(, ) from (<ref>), (<ref>) and (<ref>) show that v is locally Lipschitz continuous for W_1. Combining the previous pointwise convergence with the narrow lower semicontinuity of W_1, (<ref>) and the narrow compactness of measures with bounded first moments, we straightforwardly get:
Let N≥ 1, (, )=(λ_1, …, λ_N, ν_1, …, ν_N) ∈Δ_N ×_1()^N and (^n, ^n)_n ∈ℕ= (λ_1^n, …, λ^n_N, ν_1^n, …, ν^n_N)_n ∈ℕ be a sequence in Δ_N ×_1()^N such that
∑_i=1^N |λ_i^n-λ_i| + max_i=1, …, N W_1(ν_i^n, ν_i) → 0
Then, ℱ_^n, ^n Γ-converges to ℱ_, for the narrow topology, in particular if μ^n ∈(^n) for all n ∈ℕ, narrow cluster points of (μ^n)_n ∈ℕ belong to ().
One can improve the previous (elementary and expected) result by stability in W_1 distance as follows (for more general results of this type, we refer the reader to <cit.>):
Let N≥ 1, (, ) ∈Δ_N ×_1()^N, (^n, ^n)_n ∈ℕ be a sequence in Δ_N ×_1()^N such that (<ref>) holds and let μ^n ∈(^n) for all n ∈ℕ, then (μ^n)_n ∈ℕ admits a subsequence that converges for W_1 to some μ∈(). In particular () is compact and the set-valued map (, ) ∈Δ_N ×_1()^N ↦() ⊂_1() has a closed graph for the W_1 distance.
We already know from Lemma <ref> that (μ^n)_n∈ℕ has a (not relabeled) subsequence that converges narrowly to some μ which belongs to (). To improve narrow to W_1 convergence, it follows from Proposition 7.1.5 of <cit.>, that it is enough to show that (some subsequence of) (μ^n)_n∈ℕ has uniformly integrable moments. More precisely, fixing x_0 ∈ and for R>0 denoting by B(x_0, R) the open ball of radius R, we have to show that (passing to a subsequence if necessary)
lim_R → +∞sup_n ∫_∖ B(x_0, R) d(x_0, x) μ̣^n(x) =0.
Let γ_i^n ∈Π(ν_i^n, μ^n) such that ∫_^2 d(x_i, x) γ̣_i^n(x_i, x)=W_1(ν_i^n, μ^n), since both sequences (ν_i^n)_n∈ℕ and (μ^n)_n∈ℕ are tight so is (γ_i^n)_n∈ℕ, passing to subsequences if necessary, we may thus assume that (γ_i^n)_n∈ℕ converges narrowly to some γ_i∈(×). Of course γ_i ∈Π(ν_i, μ) and then
W_1(ν_i, μ) ≤∫_^2 d(x_i, x) γ̣_i(x_i, x) ≤lim inf_n ∫_^2 d(x_i, x) γ̣_i^n(x_i, x)= lim inf_n W_1(ν_i^n, μ^n).
We deduce from Lemma <ref> and the fact that (ν_i^n)_n∈ℕ and (μ^n)_n∈ℕ have uniformly bounded moments
∑_i=1^N λ_i W_1(ν_i, μ)=lim_n ∑_i=1^N λ_i^n W_1(ν_i^n, μ^n) ≥∑_i=1^N lim inf_n λ_i^n W_1(ν_i^n, μ^n)=∑_i=1^N λ_i lim inf_n W_1(ν_i^n, μ^n).
Hence, for every i for which λ_i>0, one has W_1(ν_i, μ)=lim inf_n W_1(ν_i^n, μ^n). Assuming without loss of generality that λ_1>0, we thus have
W_1(ν_1, μ)=∫_^2 d(x_1, x) γ̣_1(x_1, x) = lim inf_n ∫_^2 d(x_1, x) γ̣_1^n(x_1, x).
Passing to a subsequence if necessary, we may assume that the liminf of the right hand side above is a true limit and then, using Lemma 5.1.7 of <cit.>, we deduce that
lim_R → +∞sup_n ∫_{(x_1, x) ∈^2 : d(x_1, x) ≥ R} d(x_1, x) γ̣_1^n(x_1,x) =0.
Note also that since (ν_1^n)_n∈ℕ converges in W_1 we also have
lim_R → +∞sup_n ∫_∖ B(x_0, R) d(x_0, x_1) ν̣_1^n(x_1) =0.
Defining for R>0 and t≥ 0,
Φ_R(t):= t
0
note that Φ_R is non decreasing and
Φ_R(t+s) ≤ 2 (Φ_R/2 (t)+ Φ_R/2 (s) ),
so by the triangle inequality for every (x, x_1) ∈^2, we have
Φ_R( d(x_0, x)) ≤ 2 (Φ_R/2 (d(x_0, x_1))+ Φ_R/2 (d(x_1, x)) ).
Integrating with respect to γ_1^n which has marginals ν_1^n and μ^n yields
∫_∖ B(x_0, R) d(x_0, x) μ̣^n(x) = ∫_Φ_R( d(x_0, x)) μ̣^n(x)= ∫_Φ_R( d(x_0, x)) γ̣_1^n(x_1, x)
≤ 2 ∫_Φ_R/2( d(x_0, x_1)) ν̣_1^n(x_1) + 2 ∫_^2Φ_R/2( d(x_1, x)) γ̣_1^n(x_1,x)
= 2 ∫_∖ B(x_0, R/2) d(x_0, x_1) ν̣_1^n(x_1)
+ 2 ∫_{(x_1, x) ∈^2 : d(x_1, x) ≥R/2} d(x_1, x) γ̣_1^n(x_1,x).
Then, (<ref>) readily follows from (<ref>) and (<ref>).
§.§ Robustness of Wasserstein medians
In statistics, a popular robustness index is the so-called break-down point. Roughly speaking, it is the largest fraction of the input data which could be corrupted (i.e. changed arbitrarily) without moving the estimation too far from the original estimation for the non-corrupted data. It is well known that the break-down point of geometric medians with uniform weights is approximately 12, see, e.g. Theorem 2.1 and 2.2 <cit.>, so that even corrupting about half of the data, we can stay rather confident on the output. In this section, we prove a similar result for Wasserstein medians. To do so, we first recall some basic facts about break-down points, starting with a definition of the break-down point adapted to the case of a non-unique estimator.
[Break-down point]
Let (,d) be a metric space. Let N≥ 2 and λ=(λ_1,…,λ_N) ∈Δ_N. For a set-valued map t_λ: ^N → 2^ with nonempty values, we define its break-down point associated to the weights λ at =(x_1,…,x_N)∈𝒳^N by
b(t_λ()) := min{∑_i ∈ Iλ_i: I ⊂{1,…,N}, sup_^I ∈^N
^I_j = _j ∀ j ∉ I{ d(y,x) : y ∈ t_λ(^I), x ∈ t_λ() }=+∞}.
We now state the main theorem for Wasserstein medians, where the reference metric space is 𝒫_1() equipped with the W_1 distance. The proof is a slight generalization of Theorem 2.2. in <cit.>.
Suppose the metric space (, d) is proper and unbounded. Let N≥ 2, ν:=(ν_1,…,ν_N)∈𝒫_1()^N and λ := (λ_1,…,λ_N) ∈Δ_N.
Then the break-down point of 𝖬𝖾𝖽_λ(ν) is given by
b(𝖬𝖾𝖽_λ(ν))= min{∑_j ∈ Jλ_j : J ⊂{1,…,N }, ∑_j ∈ Jλ_j ≥1/2}.
For future reference, let us denote by B the right hand-side of (<ref>). Let us take ν∈𝖬𝖾𝖽_λ(ν_1,…,ν_N) and I ⊂{1,…,N} such that ∑_i ∈ Iλ_i < 1/2. Denote by μ:=(μ_1,…,μ_N) ∈_1()^N a corrupted collection of ν:=(ν_1,…,ν_N), i.e. such that μ_j = ν_j for all j ∉ I. Let
C:=max_ρ∈𝖬𝖾𝖽_λ(ν_1,…,ν_N)max_1 ≤ i ≤ N W_1(ρ,ν_i), δ := max{∑_j ∈ Jλ_j : J ⊂{1,…,N }, ∑_j ∈ Jλ_j < 1/2}.
Let μ∈𝖬𝖾𝖽_λ(μ), let us first prove by contradiction that
W_1(ν,μ) ≤2Cδ/1-2δ + 2C.
In order to do so, let ℬ=B_2C(ν) be the ball with center ν and radius 2C with respect to the W_1 distance. Further, let
ξ := 𝖣𝗂𝗌𝗍(μ,ℬ) := inf_ρ∈ℬ W_1(μ,ρ).
Then by the triangle inequality W_1(μ,ν) ≤ξ + 2C, so that for all j=1,…,N
W_1(μ_j,μ) ≥ W_1(μ_j,ν) - W_1(ν,μ) ≥ W_1(μ_j,ν) - (ξ + 2C).
Now suppose by contradiction that ξ > 2Cδ/(1-2δ), which in particular implies that W_1(μ, ν) > 2C.
Using the fact that in (_1(), W_1), line segments are geodesics (recall (<ref>)), defining for j=1,…,N the interpolation ν_j^t := (1-t) ν_j + t μ, t ∈ [0,1], we have
W_1(ν_j,μ) = W_1(ν_j, ν_j^t) + W_1(ν_j^t,μ) for t ∈ [0,1].
Since W_1(ν,ν_j^0)= W_1(ν,ν_j) ≤ C and W_1(ν,ν_j^1)=W_1(ν, μ) > 2 C, there exists t̅∈ [0,1] such that W_1(ν,ν_j^t̅) = 2C. In particular W_1(ν_j^t̅,μ)≥ξ
and W_1(ν_j,ν_j^t̅) ≥ W_1(ν,ν_j^t̅)-W_1(ν_j, ν)= 2C - W_1(ν_j, ν) ≥ W_1(ν_j, ν) so that
W_1(ν_j,μ) = W_1(ν_j,ν_j^t̅) + W_1(ν_j^t̅,μ) ≥ W_1(ν_j,ν) + ξ.
Putting together (<ref>) and (<ref>) yields
∑_j=1^N λ_j W_1(μ_j,μ) ≥∑_j ∈ Iλ_j(W_1(μ_j,ν) - (ξ + 2C)) + ∑_j ∉ Iλ_j(W_1(μ_j,ν)+ ξ)
= ∑_j =1^N λ_j W_1(μ_j,ν) + ξ (∑_j ∉ Iλ_j - ∑_j ∈ Iλ_j) - 2C ∑_j ∈ Iλ_j
≥∑_j =1^N λ_j W_1(μ_j,ν) + ξ (1 - 2 δ) - 2Cδ
> ∑_j =1^N λ_j W_1(μ_j,ν),
which contradicts μ being a Wasserstein median for the corrupted collection μ. We thus have
ξ≤2Cδ/1-δ W_1(ν,μ) ≤ξ+ 2 C ≤2Cδ/1-2δ + 2C,
and b(𝖬𝖾𝖽_λ(ν)) >δ, yielding b(𝖬𝖾𝖽_λ(ν))≥ B. To obtain the exact value of the breakdown point, take now J ⊂{1,…,N} with ∑_j ∈ Jλ_j ≥1/2
and consider a sequence (x_n)_n ∈ in such that d(x_n,x_0) → +∞ as n →∞. Then, by using the sequence of corrupted collections defined by μ^n:=(μ^n_1,…,μ^n_N) with μ^n_j = δ_x_n for j ∈ J and μ^n_k = ν_k for k ∉ J we have δ_x_n∈𝖬𝖾𝖽_λ(μ^n) as we have observed in Example <ref> and
W_1(δ_x_n,ν) ≥ d(x_n,x_0) - W_1(δ_x_0,ν) → + ∞ as n →∞,
implying b(𝖬𝖾𝖽_λ(ν))≤ B and concluding the proof.
Note that in the case of uniform weights, i.e. with λ:=(1/N, …, 1/N), (<ref>) turns into the classical estimate b(𝖬𝖾𝖽_λ(ν)) = ⌊N+12⌋/N. Let us finally emphasize that the proof of Theorem <ref> actually works for Fréchet medians on any geodesic metric space.
§ ONE DIMENSIONAL WASSERSTEIN MEDIANS
In this section, we study the case of Wasserstein medians on =ℝ with distance d induced by the absolute value. Since the Wasserstein distance of order 1 is equal to the L^1 distance between cumulative or quantile distribution functions, the problem becomes more explicit. This will in particular enable us to find different explicit constructions of Wasserstein medians. In this section for all ν∈_1(ℝ) we denote by F_ν its associated cumulative distribution function (cdf), which is defined by F_ν(x)=ν((-∞,x]) for all x ∈ℝ. We also denote by Q_ν: [0,1]→ its pseudo-inverse or quantile distribution function (qdf), which is defined by
Q_ν(t):=inf{x ∈ℝ : F_ν(x)≥ t }.
Denoting by the Lebesgue measure on [0,1], it is well-known that one recovers ν from its qdf Q_ν through Q_ν_#=ν, that is Q_ν is the monotone transport between and ν. We first recall that in one dimension, both maps ν∈𝒫_1() ↦ F_ν and ν∈𝒫_1() ↦ Q_ν map isometrically, for the L^1 distance, the Wasserstein space (𝒫_1(), W_1) to the set of cdf's of probabilities in 𝒫_1() (i.e. the set of nondecreasing, right-continuous function F:→ [0,1] such that (1-F)∈ L^1((0, +∞)), F∈ L^1((-∞, 0)), F(+∞)=1 and F(-∞)=0) and the set of qdf's (i.e. the set of L^1((0,1), ) non-decreasing left-continuous functions) respectively. More precisely, for (μ,ν)∈𝒫_1(ℝ)^2 we have the following convenient expressions for the 1-Wasserstein distance between μ and ν (see Theorem 2.9 in <cit.>):
W_1(μ,ν) =∫_0^1 | Q_ν(t)-Q_μ(t) | dt=‖ Q_ν-Q_μ‖_L^1([0,1])
=∫_ℝ |F_μ(x)-F_ν(x)| dx=‖ F_ν-F_μ‖_L^1(ℝ).
This enables us to reformulate the Wasserstein median problem as
min(<ref>) =min_ν∈𝒫_1(ℝ) ∫_ℝ∑_i=1^N λ_i |F_ν(t)-F_ν_i(t)| ṭ
=min_ν∈𝒫_1(ℝ) ∫_0^1 ∑_i=1^Nλ_i |Q_ν(t)-Q_ν_i(t)| ṭ,
which will be referred to as the vertical (<ref>) and horizontal (<ref>) formulations; the terminology will become clear in the sequel. Note that, in this way, the problem amounts to performing a proper selection of a weighted median of the cumulative or quantile distribution functions, so the lower and upper median maps m^+ and m^- defined in (<ref>) in Example <ref> and their regularity properties will be particularly useful in this setting; a discrete illustration of these formulas is sketched below.
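For instance, the cdf expression of W_1 can be evaluated numerically as follows (a rough grid discretization; the helper name and grid size are our own illustrative choices):

```python
import numpy as np

def w1_from_samples(x, y, x_weights=None, y_weights=None, grid_size=2000):
    """W_1 between two discrete measures on R as the L^1 distance of their cdfs."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    wx = np.full(len(x), 1 / len(x)) if x_weights is None else np.asarray(x_weights, float)
    wy = np.full(len(y), 1 / len(y)) if y_weights is None else np.asarray(y_weights, float)
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    grid = np.linspace(lo, hi, grid_size)
    Fx = np.array([wx[x <= t].sum() for t in grid])     # empirical cdf of x on the grid
    Fy = np.array([wy[y <= t].sum() for t in grid])
    return np.trapz(np.abs(Fx - Fy), grid)              # integral of |F_x - F_y|

rng = np.random.default_rng(0)
a, b = rng.normal(0.0, 1.0, 500), rng.normal(0.5, 1.0, 500)
# scipy.stats.wasserstein_distance(a, b) returns the same 1D quantity
print(w1_from_samples(a, b))   # roughly 0.5 for these two shifted samples
```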
Let λ∈Δ_N, 𝛎:=(ν_1, …, ν_N) ∈𝒫_1(ℝ)^N and ν∈𝒫_1(ℝ), then the following statements are equivalent
* ν∈𝖬𝖾𝖽_λ(𝛎),
* F_ν(x) ∈ med_λ(F_ν_1(x), …, F_ν_N(x)) for all x∈ℝ,
* Q_ν(t) ∈ med_λ(Q_ν_1(t), …, Q_ν_N(t)) for all t∈ [0,1].
In particular, if (<ref>) holds, there exists a unique Wasserstein median.
The fact that <ref> implies <ref> obviously follows from the definition of and the expression in (<ref>) for the Wasserstein distance. Assume now that ν∈(), then we should have for a.e. x∈,
^-(F_ν_1(x), …, F_ν_N(x)) ≤ F_ν(x) ≤^+(F_ν_1(x), …, F_ν_N(x)).
Hence for x∈, there exists a sequence ε_n >0, ε_n → 0 such that the previous inequality holds at x+ε_n,
by the right continuity of F_ν, (F_ν_1(·), …, F_ν_N(·)) at x and the continuity of ^±, we easily get that (<ref>) actually holds at x hence everywhere, proving the equivalence between <ref> and <ref>. The equivalence between <ref> and <ref> follows the same lines (using left-continuity of qdf's).
This suggests to define
F^-(x):= m^-(F_ν_1(x), …, F_ν_N(x)), F^+(x):= m^+(F_ν_1(x), …, F_ν_N(x)), for all x∈ℝ,
as well as for θ∈ [0,1],
F_θ(x):=(1-θ) F^-(x)+ θ F^+(x), for all x∈ℝ.
Thanks to the properties of m^+ and m^- we saw in Example <ref> and the fact that the F_ν_i's are the cdf's of probability measures with finite first moments, F^+ and F^- are also the cdf's of measures with finite first moments and then so is F_θ. Thanks to Proposition <ref>, F_θ is the cdf of a measure ν^θ which belongs to the set of Wasserstein medians 𝖬𝖾𝖽_λ(𝛎); we call these measures ν^θ vertical median selections:
[Vertical median selections]
For every θ∈ [0,1], the measure ν^θ whose cdf is F_θ given by (<ref>) is called the vertical median selection of with weights and interpolation parameter θ and simply denoted (θ, ).
Let us also define
Q^-(t):= m^-(Q_ν_1(t), …, Q_ν_N(t)), Q^+(t):= m^+(Q_ν_1(t), …, Q_ν_N(t)), for all t∈ (0,1),
as well as for θ∈ [0,1],
Q_θ(t):=(1-θ) Q^-(t)+ θ Q^+(t), for all t∈ (0,1).
It is easy to see that Q_θ is nondecreasing, left-continuous and in L^1((0,1)); it is therefore the qdf of a median μ^θ∈𝖬𝖾𝖽_λ(𝛎) which we call a horizontal median selection:
[Horizontal median selections]
For every θ∈ [0,1], the measure μ^θ whose qdf is Q_θ given by (<ref>) is called the horizontal median selection of with weights and interpolation parameter θ and simply denoted (θ, ).
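To illustrate these two constructions on discretized data, the following sketch (our own; it approximates the sample measures by empirical cdfs and quantile functions on finite grids and applies the weighted median formulas of Example <ref> pointwise) computes the cdf F_θ of a vertical selection and the qdf Q_θ of a horizontal selection:

```python
import numpy as np

def pointwise_weighted_median(values, lam, upper=False):
    """Weighted lower (m^-) or upper (m^+) median of each column of `values`.

    values: (N, K) array -- the i-th row holds F_{nu_i} (or Q_{nu_i}) on a grid of K points.
    lam:    (N,) weights summing to 1.
    """
    order = np.argsort(values, axis=0)
    sorted_vals = np.take_along_axis(values, order, axis=0)
    sorted_lam = np.asarray(lam, float)[order]
    cum = np.cumsum(sorted_lam, axis=0)
    if upper:   # m^+: last index whose strictly-below cumulative weight is <= 1/2
        idx = ((cum - sorted_lam) <= 0.5).sum(axis=0) - 1
    else:       # m^-: first index where the cumulative weight reaches 1/2
        idx = (cum < 0.5).sum(axis=0)
    return np.take_along_axis(sorted_vals, idx[None, :], axis=0)[0]

# three Gaussian samples discretized through empirical cdfs/quantiles on common grids
rng = np.random.default_rng(1)
samples = [rng.normal(m, 1.0, 1000) for m in (-2.0, 0.0, 5.0)]
lam = np.array([1 / 3, 1 / 3, 1 / 3])
theta = 0.5

grid = np.linspace(-6.0, 9.0, 1500)
F = np.stack([np.searchsorted(np.sort(s), grid, side="right") / len(s) for s in samples])
F_theta = ((1 - theta) * pointwise_weighted_median(F, lam)
           + theta * pointwise_weighted_median(F, lam, upper=True))   # cdf of a vertical selection

t_grid = np.linspace(0.01, 0.99, 99)
Q = np.stack([np.quantile(s, t_grid) for s in samples])
Q_theta = ((1 - theta) * pointwise_weighted_median(Q, lam)
           + theta * pointwise_weighted_median(Q, lam, upper=True))   # qdf of a horizontal selection
```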
A first nice feature of both vertical and horizontal median selections is that it selects medians in a Lipschitz continuous way with respect to the sample measures:
Let ∈Δ_N, =(ν_1, …, ν_N) ∈𝒫_1()^N, =(_1, …, _N)∈𝒫_1()^N, θ∈ [0,1] then
W_1( (θ, ) , (θ, )) ≤∑_i=1^N W_1(ν_i, _i),
and
W_1( (θ, ) , (θ, ))≤∑_i=1^N W_1(ν_i, _i) .
From (<ref>), we have for every x∈:
|^± (F_ν_1(x), …, F_ν_N(x))- ^± (F__1(x), …, F__N(x)) |
≤max_i=1,…, N| F_ν_i(x)- F__i(x)|≤∑_i=1^N| F_ν_i(x)- F__i(x)|.
Integrating and recalling the cdf expression (<ref>) for the Wasserstein distance, we readily get (<ref>) for θ=0 and θ=1, the general case θ∈ [0,1] follows by the triangle inequality. The proof of (<ref>) is similar using the expression of W_1 in terms of quantiles as in (<ref>).
One may wonder whether some medians inherit properties of the sample measures and in particular whether samples consisting of probabilities with an L^p density with respect to the Lebesgue measure have medians with the same property. As we will shortly see, vertical and horizontal medians will enable us to answer these questions by the positive.
Let ∈Δ_N, =(ν_1, …, ν_N)∈𝒫_1()^N, θ∈ [0,1], and ν^θ:=(θ, ), μ^θ:=(θ, ), then
* if ν_1, …, ν_N are atomless, then so are ν^θ and μ^θ,
* if ν_1, …, ν_N have connected supports, then so does μ^θ.
Recall that for a probability measure, being atomless is equivalent to having a continuous cdf as well as to having a strictly increasing qdf, see, e.g., Proposition 1 in <cit.>. Let us denote by F_θ the cdf (see (<ref>)) of ν^θ and by Q_θ the qdf of μ^θ (see (<ref>)). If ν_1, …, ν_N are atomless then F_θ is continuous by continuity of F_ν_i so that ν^θ is atomless. On the other hand, (<ref>) entails
Q_θ(t)-Q_θ(s) ≥min_1 ≤ i ≤ N (Q_ν_i(t)-Q_ν_i(s)), for all (t,s)∈ (0,1)^2,
so that Q_θ is strictly increasing whenever each Q_ν_i is. Let us assume now that ν_1, …, ν_N have connected supports then each Q_ν_i is continuous and so is Q_θ. Thus, μ^θ has a connected support (again by Proposition 1 in <cit.>).
Considering absolute continuity of medians, we first discuss the easier case of vertical selections:
Let ∈Δ_N, =(ν_1, …, ν_N)∈𝒫_1()^N, θ∈ [0,1], and ν^θ:=(θ, ). If ν_1,…,ν_N are all absolutely continuous (with respect to the Lebesgue measure on ) with densities f_1,…,f_N∈ L^1() then ν^θ is absolutely continuous with a density f_ν^θ∈ L^1() which satisfies
min_1 ≤ i ≤ N f_i ≤ f_ν^θ≤max_1 ≤ i ≤ N f_i, a.e. on .
In particular, if, for some p∈ [1, ∞], f_i∈ L^p() for i=1, …, N, then f_ν^θ∈ L^p() and
min_1 ≤ i ≤ N f_i_L^p()≤ f_ν^θ_L^p()≤max_1 ≤ i ≤ N f_i_L^p()≤∑_i=1^N f_i_L^p().
Let x∈ and h≥ 0, it follows from (<ref>) and the definition of the cdf F_θ that
0≤ F_θ(x+h)-F_θ(x)≤max_1 ≤ i ≤ N{ F_ν_i(x+h)-F_ν_i(x)} = max_1 ≤ i ≤ N∫_x^x+h f_i ≤∫_x^x+hmax_1 ≤ i ≤ N f_i,
which yields absolute continuity of F_θ, i.e. ν^θ is absolutely continuous with respect to Lebesgue's measure, and the upper bound in (<ref>). In a similar fashion,
F_θ(x+h)-F_θ(x) ≥∫_x^x+hmin_1 ≤ i ≤ N f_i,
which shows shows the lower bound in (<ref>) and concludes the proof.
In particular, in dimension one, vertical medians automatically select medians which inherit integrability properties of the sample measures, with simple explicit pointwise bounds.
Let us now turn our attention to the case horizontal selections which is slightly more involved.
Let ∈Δ_N, =(ν_1, …, ν_N)∈𝒫_1()^N, θ∈ [0,1], and μ^θ:=(θ, ). If ν_1,…,ν_N are all absolutely continuous (with respect to the Lebesgue measure on ) with densities f_1,…,f_N∈ L^1() then:
*
μ^0 and μ^1 are absolutely continuous with densities f_μ^0, f_μ^1 which satisfy
min_1 ≤ i ≤ N f_i ≤min(f_μ^0, f_μ^1) ≤max(f_μ^0, f_μ^1) ≤max_1 ≤ i ≤ N f_i, a.e. on ,
* for every θ∈ [0,1], μ^θ is absolutely continuous, we denote its density f_μ^θ,
* if, for some p∈ [1, ∞], f_i∈ L^p() for i=1, …, N, then f_μ^θ∈ L^p() and
f_μ^θ_L^p()≤max_1 ≤ i ≤ N f_i_L^p()≤∑_i=1^N f_i_L^p().
We shall proceed in three steps.
Step 1: Let us show (<ref>), under the extra assumption that each f_i satisfies
f_i ∈ L^∞(), 1/f_i∈ L^∞_().
Recall that by construction
μ^0:=Q^-_#, μ^1:=Q^+_# Q^±:=^±(Q_ν_1, …, Q_ν_N),
and (<ref>) ensures that each F_ν_i is Lipschitz (with Lipschitz constant ‖ f_i‖_L^∞()) with inverse Q_ν_i, which is locally Lipschitz on (0,1), with Q_ν_i' satisfying
f_i(Q_ν_i) Q_ν_i' =1, Q_ν_i' ≥1/M M:= max_j ‖ f_j‖_L^∞().
Hence, Q^± are locally Lipschitz on (0,1), and it follows from (<ref>) that for 0< t< s <1, one has
Q^±(s)-Q^±(t) ≥1/M (s-t),
which implies that Q^-=Q_μ^0 and Q^+=Q_μ^1 have M-Lipschitz inverses which are the cdf's F_μ^0 and F_μ^1. Thus, μ^0 and μ^1 are absolutely continuous with bounded positive densities f_μ^0, f_μ^1, and
f_μ^0 (Q_μ^0) Q_μ^0' =1, f_μ^1 (Q_μ^1) Q_μ^1' =1 .
Using (<ref>), we also have for 0< t < s < 1 with |t-s| small enough
max_i ∈ I_-(t) (Q_ν_i(s)-Q_ν_i(t) )≥ Q_μ^0(s)-Q_μ^0(t) ≥min_i ∈ I_-(t) (Q_ν_i(s)-Q_ν_i(t)),
where I_-(t):={i : Q^-(t)=Q_ν_i(t)}. If we choose t a point where all qdf's Q_μ^0, Q_ν_i are differentiable and the change of variable formulas (<ref>) and (<ref>) hold dividing the previous inequality by (s-t) and letting s→ t^+ yields
max_i ∈ I_-(t) Q_ν_i'(t)=max_i ∈ I_-(t)1/f_i(Q_μ^0(t))≥ Q'_μ^0(t)=1/ f_μ^0(Q_μ^0(t))
≥min_i ∈ I_-(t) Q_ν_i'(t)=min_i ∈ I_-(t) 1/f_i(Q_μ^0(t)),
so that
min_ 1≤ i ≤ N f_i ( Q_μ^0(t)) ≤
f_μ^0 ( Q_μ^0(t)) ≤max_ 1≤ i ≤ N f_i ( Q_μ^0(t)) .
But since μ^0= Q_μ^0_# has a positive and bounded density, it has the same null sets as hence the previous inequality can be simply reformulated as
min_ 1≤ i ≤ N f_i ≤ f_μ^0≤max_ 1≤ i ≤ N f_i .
The fact that f_μ^1 obeys the same inequality can be proved in a similar way using (<ref>) for ^+ instead of ^-, we thus have shown (<ref>) under (<ref>). Note also that the L^p bound (<ref>) follows for θ∈{0,1}.
Step 2: Again assuming (<ref>), let us show absolute continuity of μ^θ and the L^p bound (<ref>) for θ∈ (0,1). We shall proceed by a displacement convexity argument which is reminiscent of McCann's seminal work <cit.>. Let us recall that F_μ^0 is Lipschitz with locally Lipschitz inverse Q^- so that F_μ^0_#μ^0= and
μ^θ=((1-θ) Q^-+ θ Q^+)_#= ((1-θ) Q^-+ θ Q^+)_# (F_μ^0_#μ^0) =((1-θ) 𝕀+ θ T)_#μ^0,
where T:=Q^+ ∘ F_μ^0 is the monotone transport from μ^0 to μ^1 so that μ^θ is the displacement interpolation between μ^0 and μ^1 as defined by McCann in <cit.> (in the more general and involved multi-dimensional setting). Since the (locally Lipschitz) map (1-θ) 𝕀 + θ T has a Lipschitz inverse and μ^0 is absolutely continuous, μ^θ is absolutely continuous, we then denote by f_μ^θ its density. Recalling that the qdf of μ^θ, Q_θ= (1-θ) Q^-+ Q^+ is locally Lipschitz, differentiable with a strictly positive derivative a.e. and we have the change of variable formula:
f_μ^θ(Q_θ) Q_θ'=1
Let V:→ with V(0)=0 be convex, then the function α>0 ↦Φ(α) := α V(α^-1) is convex as well and then we have
∫_ V(f_μ^θ(x))x̣ = ∫_0^1 V (1/Q_θ'(t)) Q_θ'(t) ṭ =∫_0^1 Φ ((1-θ) Q_μ^0'(t)+ θ Q_μ^1'(t))) ṭ
≤ (1-θ) ∫_0^1 Φ (Q_μ^0'(t)) ṭ + θ∫_0^1 Φ (Q_μ^1'(t)) ṭ
= (1-θ) ∫_ V(f_μ^0(x)) x̣ + θ∫_ V (f_μ^1(x)) x̣.
Taking V(α)=|α|^p, recalling (<ref>) we in particular get
∫_ (f_μ^θ(x))^px̣≤ (1-θ) ∫_ (f_μ^0(x))^px̣ + θ∫_ (f_μ^1(x))^px̣≤‖max_1≤ i ≤ N f_i ‖_L^p()^p,
which gives (<ref>).
Step 3: general case by Lemma <ref>. To get rid of the extra assumption (<ref>), let g be the density of a standard Gaussian measure and for >0 set
f_i^ := min ((1-) f_i + g, ^-1)/∫_min( (1-) f_i + g, ^-1) .
Applying the previous steps to μ_^θ:=(θ, f_1^, …, f_N^), we get
min_1 ≤ i ≤ N f_i^≤min(f_μ_^0, f_μ_^1) ≤max(f_μ_^0, f_μ_^1) ≤max_1 ≤ i ≤ N f_i^,
and for every θ∈ [0,1],
f_μ_^θ_L^p()≤max_1 ≤ i ≤ N f_i^_L^p().
Since f_i^ converges to f_i in L^p and μ_^θ converges to μ^θ in Wasserstein distance thanks to Lemma <ref>, we can pass to the limit → 0^+ in these bounds, obtaining (<ref>) and (<ref>).
For Wasserstein barycenters (in any dimension), the fact that one sample measure with positive weight is L^p implies that the barycenter is L^p as well (see <cit.>). For Wasserstein medians in one dimension, we really need all sample measures to be L^p to find an L^p median. To see this, recall that the median of ν_1:=δ_x with weight 2/3 and any probability ν_2∈_1() (with the smoothest density one can think of) with weight 1/3 is δ_x. Note also that due to the fact that ^± are Lipschitz but nonsmooth, vertical and horizontal median selections of sample measures with smooth (or Sobolev) densities do not have a continuous density in general.
§ MULTI-MARGINAL AND DUAL FORMULATIONS
§.§ Multi-marginal formulation
Given the proper metric space (, d), ∈Δ_N and =(ν_1, …, ν_N)∈_1()^N, the Wasserstein median problem (<ref>) is, like the Wasserstein barycenter problem, a special instance of the matching for teams problem <cit.> and, as such, admits linear reformulations which take the form of multi-marginal optimal transport problems. Let us now recall this reformulation in the Wasserstein median context. For :=(x, x_1, …, x_N)∈^N+1, let us define:
f_ (x, x_1, …, x_N):=∑_i=1^N λ_i d(x_i, x), c_(x_1, …, x_N):=min_y∈ f_ (y, x_1, …, x_N),
and the projections:
π_0()=x, π_j()=x_j, 1≤ j ≤ N, π_0,j ()=(x, x_j), π_1, …, N ()=(x_1, …, x_N).
We denote by Π(ν_1,…,ν_N) the set of Borel probability measures on ^N having ν_i as i-th marginal and the linear multi-marginal problems
inf{∫_^N+1 f_θ̣ : θ∈_1(X^N+1), π_1,…, N_#θ∈Π(ν_1,…,ν_N) },
and
inf_γ∈Π(ν_1,…,ν_N)∫_^N c_γ̣.
Since (, d) is Polish, it follows from the disintegration theorem (see paragraph 5.3 in <cit.>) that if θ is admissible for (<ref>) it can be disintegrated with respect to its marginal
γ:=π_1,…, N_#θ as
θ= θ^x_1, …, x_N⊗γ,
for a Borel family of conditional probability measures θ^x_1, …, x_N on . For fixed marginal γ:=π_1,…, N_#θ, minimizing with respect to the conditional probability θ^x_1, …, x_N the integral of f_ obviously amounts to choose it supported on (x_1, …, x_N) so that it is easy to see that (<ref>) and (<ref>) are equivalent in the sense that they have the same value and that θ solves (<ref>) if and only if γ:=π_1,…, N_#θ solves (<ref>) and θ is supported by the set of (x, x_1, …, x_N) such that x∈(x_1, …, x_N). The fact that Π(ν_1, …, ν_N) is tight and the properness of (,d) ensure that the infimum in both (<ref>) and (<ref>) is attained. The connection with the Wasserstein median problem (<ref>) and its solutions () is summarized by:
The following hold:
* min(<ref>) = min(<ref>) = min(<ref>).
* If ν∈() i.e. ν solves (<ref>), then, there exists θ solving (<ref>) such that ν=π_0_#θ and, conversely, if θ solves (<ref>), then π_0_#θ∈().
* If θ solves (<ref>) and ν=π_0_#θ, then for every j such that λ_j>0, γ_j :=π_0,j_#θ is an optimal transport plan between ν and ν_j.
* If 𝗆_λ:^N→ is a Borel selection of and γ solves (<ref>), then (𝗆_λ)_#γ∈().
* ν∈() if and only if there exists γ solving (<ref>) and a Borel family of probability measures θ^x_1, …, x_N such that θ^x_1, …, x_N is supported on (x_1,…, x_N) for γ-a.e. (x_1, …, x_N) and ν= π_0_# (θ^x_1, …, x_N⊗γ).
The proof of similar results can be found in <cit.> and therefore omitted. Even though <cit.> consider a compact setting (with general costs), the same proof easily adapts to the present setting of a proper metric space with the distance as cost. Note that, as in <cit.>, one can deduce from a Wasserstein median ν∈() a solution of (<ref>) with first marginal ν as follows: let γ_i be an optimal plan between ν and ν_i disintegrated with respect to ν as γ_i =ν⊗γ_i^x and define θ by gluing i.e.:
∫_ϕ(x, x_1, …, x_N) θ̣(x, x_1, …, x_N):=∫_( ∫_^Nϕ(x, x_1, …, x_N) γ̣_1^x(x_1) …γ̣_N^x(x_N) ) ν̣(x)
for every ϕ∈ C_b(^N+1). Then, θ solves (<ref>) and by construction π_0_#θ=ν. Note that pushing forward by a selection of a solution of the multi-marginal (<ref>) as in <ref> above corresponds to special medians for which, using the notation of <ref>, θ^x_1,…, x_N=δ_𝗆_λ(x_1, …, x_N) is a Dirac mass. Since is in general not single-valued, not all medians are of this form. Consider for instance 𝒳=[-1,1] equipped with the usual Euclidean distance and let ν_1=δ_-1/2 and ν_2=δ_1/2 with uniform weights. Then (ν_1,ν_2) is the set of all probability measures supported on [-1/2,1/2] whereas <ref> only selects Dirac masses.
Let us now give an application of Theorem <ref>:
Let =ℝ^d be equipped with the Euclidean distance. If all the sample measures are supported on a closed convex subset 𝒦⊂ℝ^d, then every Wasserstein median ν∈() is supported on 𝒦 as well. Moreover, if V:^d →_+ is quasiconvex (i.e. {V ≤ t} is convex for every t≥ 0) then
∫_^d V ν̣≤∑_i=1^N ∫_^d V ν̣_i.
In particular, for any p∈ (0, +∞) we have the following bound on the p-moments of ν:
∫_ℝ^d |x|^p ν̣≤∑_i=1^N∫_ℝ^d |x|^p ν̣_i.
We know from point <ref> of Theorem <ref>, that there exists γ∈Π(ν_1, …ν_N) and a family of probability θ^x_1, …, x_N supported by (x_1, …, x_N) such that for every continuous and bounded (or more generally Borel) function f:^d → one has
∫_^d f(x) ν̣(x) =∫_(^d)^N( ∫_^d f(x) θ̣^x_1, …, x_N(x) ) γ̣(x_1, …, x_N),
but since θ^x_1, …, x_N is supported by (x_1, …, x_N)⊂co{x_1, …, x_N} (as we have seen in Example <ref>), if all the ν_i's are supported by the closed convex set 𝒦 then so is θ^x_1, …, x_N for γ-a.e. (x_1, …, x_N) and then ν(𝒦)=1. Likewise, if V is nonnegative and quasiconvex, then for θ^x_1, …, x_N a.e. x we have
V(x) ≤max_1≤ i ≤ N V(x_i) ≤∑_i=1^N V(x_i),
and integrating this inequality with respect to θ^x_1, …, x_N first and then with respect to γ in Π(ν_1, …, ν_N) we obtain the announced moment bounds.
A counterexample to linear L^∞ density bounds in several dimensions. We have seen in Theorems <ref> and <ref> that when =, and the sample measures have densities uniformly bounded by some M, vertical and horizontal median selections enable to find Wasserstein medians with a density which is bounded by the same bound M. In other words, in dimension one, it is possible to have a linear control on the L^∞ norm of some well-chosen Wasserstein median in terms of L^∞ bounds of the sample measures. The situation seems to be more intricate in higher dimensions. The following example shows that a linear L^∞-bound cannot hold in two dimensions.
For 0<ϵ<1 let ν_1 be a uniform measure supported on the rectangle [-1-ℓ, -1]×[-ϵ2, ϵ2] and let ν_2, ν_3 and ν_4 be obtained by successive rotations by 90^∘ of ν_1 as in Figure <ref>. Consider uniform weights λ_i=14, for i=1, …, 4, and let ν∈(ν_1, …, ν_4). We know from Theorem <ref> that one can write ν:=π_0_#θ where π_i_#θ=ν_i for i=1, …, 4 and x is a geometric median of (x_1,…,x_4) for θ-a.e. (x,x_1,…,x_4). Now note that, with this construction, four points x_i ∈ ν_i always form a convex quadrilateral, and as shown in Theorem 1 in <cit.>, their unique median is the intersection of the two segments [x_1,x_3] and [x_2,x_4]. In particular such geometric medians belong to the square [-ϵ/2,ϵ/2]^2, which therefore supports any ν∈(ν_1, …, ν_4). This shows that the L^∞ norm of ν is at least ^-2: it cannot be bounded from above uniformly in by a multiple of max_i=1,…, 4‖ν_i ‖_L^∞=ℓ^-1^-1.
§.§ Dual Formulation
To introduce a dual formulation à la Kantorovich of (<ref>), we fix a point x_0 ∈ and define the spaces
Y_0 := { f ∈ C() : lim_d(x,x_0) →∞f(x)/1 + d(x,x_0) = 0},
Y_b := { f ∈ C() : sup_x ∈| f(x)|/1 + d(x,x_0) < ∞}.
Note that these spaces are independent of the choice of x_0 and that the dual of Y_0 may be identified with the space of signed measures
with finite first moment
(Y_0)^* ={μ∈() : (1+d(x,x_0))μ∈() }.
We will also assume here without loss of generality that all the weights λ_i are strictly positive in the Wasserstein median problem (<ref>) and define for λ>0:
_λ():={ v : →, | v(x)-v(y)|≤λ d(x,y), for all (x,y)∈^2}.
Setting c_i:=λ_i d, the c_i-transform of a function u:→, denoted u^c_i, is by definition given by
u^c_i(x):=inf_y ∈{λ_i d(x,y)-u(y) }, for all x∈,
note that, by the triangle inequality u^c_i is either everywhere -∞ or a λ_i-Lipschitz function. It is also a classical fact (see, e.g. Proposition 3.1 in <cit.>) that u∈_λ_i() if and only if u^c_i=-u.
Following <cit.>, let us now consider the concave maximization problem
sup{∑_i=1^N∫_ u_i^c_i dν_i : u_i ∈ Y_0, ∑_i=1^N u_i=0},
and its relaxed version
sup {∑_i=1^N∫_ u_i^c_iν̣_i : u_i ∈ Y_b, ∑_i=1^N u_i=0 }.
By definition of the c_i-transform, it is easy to check the weak duality relation
min(<ref>)≥sup(<ref>)≥sup(<ref>).
Using convex duality by proceeding exactly as in the proof of Propositions 2.2 and 2.3 in <cit.> for the Wasserstein barycenter case, one can show that (<ref>) is the dual of (<ref>) and that strong duality holds i.e.: min(<ref>) = sup(<ref>) =sup(<ref>). It will be convenient in the sequel to consider yet another formulation of (<ref>):
sup{∑_i=1^N∫_ u_i ν̣_i : u_i ∈_λ_i(), i=1, …, N, ∑_i=1^Nu_i≤ 0 }.
Let (ν_1, …, ν_N)∈𝒫_1()^N and :=(λ_1, …, λ_N)∈Δ_N with each λ_i strictly positive. Then we have
min(<ref>) = sup(<ref>)= max(<ref>),
where we have written max (<ref>) to emphasize that the supremum in (<ref>) is attained.
Recall that min(<ref>) = sup(<ref>).
Step 1: sup(<ref>)≥sup(<ref>). Let (u_1,…,u_N) be admissible for (<ref>), take ψ=(ψ_1,…,ψ_N), with ψ_i = -u_i for all i=1,…, N-1 and ψ_N = u_1 + … + u_N. Since Lipschitz functions belong to Y_b, ψ is admissible for (<ref>) and we have:
∑_i=1^N∫_ u_i ν̣_i =∑_i=1^N-1∫_ (-ψ_i) ν̣_i+∫_ u_N ν̣_N.
For i=1, …, N-1, since ψ_i ∈_λ_i(), we have -ψ_i=ψ_i^c_i. Moreover, ψ_N=u_1+… + u_N-1≤ -u_N, hence ψ_N^c_N≥ u_N, yielding
∑_i=1^N∫_ u_i ν̣_i ≤∑_i=1^N∫_ψ_i^c_iν̣_i ≤sup(<ref>) .
Step 2: sup(<ref>)≥sup(<ref>). Let ψ=(ψ_1,…,ψ_N) be admissible for (<ref>). Consider u=(u_1,…,u_N)=(ψ_1^c_1,…,ψ_N^c_N). By construction, each u_i is λ_i-Lipschitz and to see that u is admissible for (<ref>) we observe that for every x∈:
∑_i=1^N u_i(x)=∑_i=1^N ψ_i^c_i(x)=∑_i=1^N inf_y {λ_i d(x,y)-ψ_i(y) }≤ -∑_i=1^N ψ_i(x)=0,
and, then,
∑_i=1^N ∫_ψ_i^c_i ν̣_i =∑_i=1^N ∫_ u_i ν̣_i ≤sup(<ref>).
Step 3: the supremum is attained in sup(<ref>). We note that both constraints and the objective function in (<ref>) are unchanged when one replaces u_i by u_i+ α_i where the α_i's are constant that sum to 0, we may therefore restrict the maximization in sup(<ref>) to the smaller admissible set of potentials (u_1, …, u_N) such that
u_i ∈_λ_i(), ∑_i=1^N u_i ≤ 0, and ∫_ u_i ν̣_i=0, for i=1, …, N-1.
Since this set contains (0, …, 0) we can reduce it even further this set by considering only potentials for which the objective is positive:
∫_ u_N ν̣_N ≥ 0.
If we denote by K the set of potentials that satisfy (<ref>) and (<ref>), we observe that if (u_1, …, u_N)∈ K then for i=1, …, N-1 and x∈, since u_i is λ_i-Lipschitz, one has
u_i(x) ≤∫_ u_i ν̣_i + λ_i ∫_ d(x, y) ν̣_i (y) ≤λ_i d(x,x_0) + m_i, m_i:=λ_i ∫_ d(x_0, y) ν̣_i(y).
Reasoning in a similar way for -u_i, we get bounds with linear growth, namely | u_i |≤λ_i d(·,x_0)+ m_i for i=1, …, N-1. Since u_N ≤ -∑_i=1^N u_i we get a similar upper bound with linear growth for u_N, and, for a lower bound, we use (<ref>) which, together with the fact that u_N is λ_N-Lipschitz gives
u_N ≥ -λ_N d(·, x_0)-λ_N ∫_ d(x_0, y) ν̣_N(y).
Let us now take a maximizing sequence in K for (<ref>). The above linear bounds and Ascoli–Arzelà's theorem guarantee that this sequence converges locally uniformly to some u, and again by these linear bounds, the fact that ν_i∈_1() for all i =1,…, N, and Lebesgue's dominated convergence theorem, one deduces that u∈ K and u actually solves (<ref>).
We may derive from the primal-dual relations between (<ref>) and (<ref>) a characterization of Wasserstein medians in terms of Kantorovich potentials
Let =(ν_1, …, ν_N) ∈_1()^N, =(λ_1, …, λ_N) ∈Δ_N with λ_i>0 and let ν∈_1(). Then ν∈() if and only if there exist ψ_1, …, ψ_N such that
* for i=1, …, N, ψ_i ∈_1() is a Kantorovich potential between ν_i and ν, i.e.
W_1(ν_i, ν)=∫_ψ_i ν̣_i- ∫_ψ_i ν̣,
* there holds
∑_i=1^N λ_i ψ_i ≤ 0 on , and ∑_i=1^N λ_i ψ_i = 0 .
It follows from the duality result of proposition <ref> that ν∈() if and only if there exists (u_1, …, u_N) admissible for (<ref>) such that
∑_i=1^N λ_i W_1(ν_i, ν)=∑_i=1^N ∫_ u_i ν̣_i
(in which case (u_1, …, u_N) automatically solves (<ref>)). Setting ψ_i= u_i/ λ_i we thus have ψ_i ∈_1() and ∑_i=1^N λ_i ψ_i ≤ 0 on . By the Kantorovich–Rubinstein duality formula (<ref>), we have
W_1(ν_i, ν) ≥∫_ψ_i ν̣_i- ∫_ψ_i ν̣.
Multiplying by λ_i summing and using the fact that ν is a nonnegative measure and ∑_i=1^N λ_i ψ_i ≤ 0 thus yields
∑_i=1^N λ_i W_1(ν_i, ν)≥∑_i=1^N λ_i ∫_ψ_i ν̣_i - ∑_i=1^N λ_i ∫_ψ_i ν̣
≥∑_i=1^N λ_i ∫_ψ_i ν̣_i=
∑_i=1^N ∫_ u_i ν̣_i,
so that (<ref>) holds if and only if each inequality (<ref>) is an equality, i.e. ψ_i is a Kantorovich potential between ν_i and ν and
∫_(∑_i=1^N λ_i ψ_i) ν̣=0,
i.e. ∑_i=1^N λ_i ψ_i = 0 on supp(ν) since each ψ_i is continuous.
§ BECKMANN MINIMAL FLOW FORMULATION
In this section, we consider the Wasserstein median problem on a convex compact subset Ω of ^d with nonempty interior (which is not really restrictive), equipped with the Euclidean distance. In this setting, we will see that, taking advantage of the so-called Beckmann minimal flow formulation of Monge's problem, one can derive a system of PDEs that characterizes Wasserstein medians. We are given ∈Δ_N with λ_i>0 for all i=1,…, N, and =(ν_1, …, ν_N)∈(Ω)^N; we know from Corollary <ref> that any measure in () is supported on Ω.
§.§ The Beckmann problem
We denote by ℳ(Ω,ℝ^d) the set of vector valued measures on Ω. For such a measure σ, we denote by |σ|∈ℳ_+(Ω) its total variation measure and recall that one can write σ̣= σ̂|̣σ| for some Borel map σ̂ such that |σ̂| =1, |σ|-a.e.; for every test-function ϕ∈ C(Ω, ^d), one can therefore write
∫_Ωϕ· σ̣= ∫_Ωϕ(x) ·σ̂(x) |̣σ|(x).
Let us denote by ℳ_÷(Ω,ℝ^d) the set of vector valued measures σ whose divergence ∇·σ is a finite measure, where ∇·σ is defined in the sense of distributions. Given i=1, …, N and ν∈(Ω), a vector-valued measure σ_i∈ℳ_÷(Ω,ℝ^d) is an admissible flow between ν_i and ν if it solves
∇·σ_i +ν_i=ν
in the weak sense, i.e.
∫_Ω∇ϕ· σ̣_i=∫_Ωϕ (̣ν_i-ν), for all ϕ∈ C^1(Ω).
Beckmann's formulation of the optimal transport problem with distance cost between ν_i and ν consists in finding an admissible flow with minimal total variation; it thus reads
inf_σ_i ∈ℳ_÷(Ω,ℝ^d){|σ_i | (Ω) : ∇·σ_i +ν_i=ν}
where |σ_i | (Ω) denotes the total variation of σ_i. This problem was introduced by Beckmann in the 1950s <cit.> and its connections with the optimal transport problem W_1(ν, ν_i) are well-known, as we shall recall now, referring the reader to <cit.> and <cit.> for detailed statements and proofs. First of all, let us recall that the value of (<ref>) coincides with the Wasserstein distance W_1(ν, ν_i), so recalling the Kantorovich–Rubinstein formula, we have (and we write min and max on purpose to emphasize the existence of solutions):
W_1(ν_i, ν)=min_σ_i ∈ℳ_÷(Ω,ℝ^d){|σ_i | (Ω) : ∇·σ_i +ν_i=ν}=max_u_i∈_1(Ω)∫_Ω u_i (̣ν_i-ν).
Following the seminal work of <cit.>, the sharp connection between optimal flows, i.e. solutions of (<ref>), and Kantorovich potentials is captured by the Monge–Kantorovich PDE system, which we now recall.
[Monge–Kantorovich PDE]
A pair (u_i, ρ_i)∈_1(Ω)×ℳ_+(Ω) solves the Monge–Kantorovich system between ν_i and ν:
∇· (ρ_i ∇_ρ_i u_i)+ ν_i=ν, |∇_ρ_i u_i | =1
if there exists (u_i^ϵ)_ϵ>0∈ C^1(Ω)∩_1(Ω) converging uniformly to u_i as ϵ→ 0, such that ∇ u_i^ϵ converges in L^2(ρ_i) to some σ̂_i (so that |σ̂_i|≤ 1) and
∇· (ρ_i σ̂_i)+ ν_i=ν, |σ̂_i | =1 .
Assume that (u_i, ρ_i)∈_1(Ω)×ℳ_+(Ω) solves the Monge–Kantorovich system between ν_i and ν, and let (u_i^ϵ)_ϵ>0∈ C^1(Ω)∩_1(Ω) converge uniformly to u_i as ϵ→ 0, and be such that ∇ u_i^ϵ converges in L^2(ρ_i) to some σ̂_i which satisfies (<ref>); then, using the fact that σ_i:= ρ_i σ̂_i is admissible for (<ref>), we deduce from (<ref>) and (<ref>):
W_1(ν_i, ν) ≥∫_Ω u_i (̣ν_i-ν) = lim_ϵ→ 0∫_Ω u_i^ϵ (̣ν_i-ν)= lim_ϵ→ 0∫_Ω∇ u_i^ϵ·σ̂_i ρ̣_i
= ∫_Ω|σ̂_i |^2 ρ̣_i = ρ_i (Ω)= |σ_i |(Ω) ≥ W_1(ν_i, ν)
which proves that u_i is a Kantorovich potential and σ_i:= ρ_i σ̂_i is an optimal flow:
W_1(ν_i, ν)= ∫_Ω u_i (̣ν_i-ν)=|σ_i |(Ω).
This also enables one to define unambiguously the L^2(ρ_i)-limit of ∇ v_i^ϵ for any approximation[Note that such approximations can easily be performed by first extending u_i to a 1-Lipschitz function on the whole of ^d and then mollifying this extension by convolution.] of u_i by C^1(Ω)∩_1(Ω); indeed, if (v_i^ϵ)_ϵ>0 is a sequence of such approximations, using again (<ref>), we have:
‖∇ v_i^ϵ- σ̂_i ‖^2_L^2(ρ_i)≤ 2 |σ_i |(Ω)-2 ∫_Ω∇ v_i^ϵ·σ̂_i ρ̣_i= 2 W_1(ν_i, ν)- 2∫_Ω v_i^ϵ(̣ν_i-ν) → 0 .
In other words, in definition <ref>, the direction σ̂_i ∈ L^2(ρ_i) only depends on ρ_i and u_i and not on the approximation of u_i and it is legitimate to set ∇_ρ_i u_i=σ̂_i
and to call it the tangential gradient of u_i with respect to ρ_i (and justify a posteriori the notation ∇_ρ_i u_i). We have seen that solutions of the Monge–Kantorovich system yield optimal flows and optimal potentials, but the converse is easy to check. Indeed, let u_i ∈_1(Ω) and σ_i ∈ℳ_÷(Ω,ℝ^d) be such that
W_1(ν_i, ν)=∫_Ω u_i (̣ν_i-ν)=|σ_i |(Ω)
setting ρ_i:=|σ_i| and σ̂_i such that |σ̂_i|=1 ρ_i-a.e. and σ̣_i = σ̂_i ρ̣_i, then (<ref>) holds and if (u_i^ϵ)_ϵ>0 is a sequence of C^1 ∩_1 approximations of u_i then
‖∇ u_i^ϵ- σ̂_i ‖^2_L^2(ρ_i)≤ 2 |σ_i |(Ω)-2 ∫_Ω∇ u_i^ϵ·σ̂_i ρ̣_i= 2 W_1(ν_i, ν)- 2∫_Ω u_i^ϵ(̣ν_i-ν) → 0
so that (u_i, ρ_i) solves the Monge–Kantorovich system (<ref>) which therefore fully characterizes the primal-dual extremality relations in (<ref>).
Note that if ρ_i is absolutely continuous with respect to the Lebesgue measure ρ_i ∈ L^1(Ω), then whenever u_i ∈_1(Ω), ρ_i ∇ u_i belongs to L^1(Ω) so ∇· (ρ_i ∇ u_i) is well defined in the sense of distributions and (<ref>) simplifies to
∇· (ρ_i ∇ u_i)+ ν_i=ν, |∇ u_i | =1
In Monge–Kantorovich theory, ρ_i=|σ_i| where σ_i is an optimal flow is called the transport density and the study of integral estimates for transport densities has been the object of an intensive stream of research <cit.>. In particular, if ν_i is absolutely continuous with respect to the Lebesgue measure (and ν is an arbitrary probability measure) then the solution σ_i of (<ref>) is unique (Theorem 4.14 and Corollary 4.15 in <cit.>) and absolutely continuous as well (Theorem 4.16 in <cit.>) so that the transport density ρ_i is in L^1 and the Monge–Kantorovich PDE can be understood as in (<ref>) without using the notion of tangential gradient. Higher integrability results can be found in Theorem 4.20 in <cit.>.
The connection between optimal flows, transport densities and optimal plans is also well-known: given an optimal γ_i ∈Π(ν, ν_i), i.e. such that W_1(ν_i, ν)=∫_Ω×Ω| x-x_i|γ̣_i(x, x_i), define the vector-valued measure σ_γ_i by
∫_Ωϕ·σ̣_γ_i=∫_Ω×Ω∫_0^1 ϕ(x+t(x_i-x)) · (x_i-x) ṭ γ̣_i(x, x_i), for all ϕ∈ C(Ω, ^d).
Then ∇·σ_γ_i+ν_i=ν and σ_γ_i is an optimal flow, i.e. solves (<ref>); moreover (see Theorem 4.13 in <cit.>), any σ_i solving (<ref>) is of the form σ_γ_i for some optimal plan γ_i. We also refer to <cit.> and <cit.> for more on the subject and in particular on the connections between optimal flows and the directions of the so-called transport rays.
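For a discrete plan, the defining formula for σ_γ_i above can be evaluated against any test field by a simple quadrature in the segment parameter t. The following Python sketch is our own illustration (the function name and the Gauss–Legendre quadrature are our choices, not taken from the text):

# Hypothetical sketch: evaluate the pairing of sigma_gamma_i with a test field phi,
# i.e. the sum of mass * int_0^1 phi(x + t(y - x)) . (y - x) dt over the atoms of the plan.
import numpy as np

def pair_flow_with_field(plan, phi, n_quad=64):
    """plan: iterable of (x, y, mass) with x, y in R^d; phi: callable R^d -> R^d."""
    t, w = np.polynomial.legendre.leggauss(n_quad)   # nodes/weights on [-1, 1]
    t = 0.5 * (t + 1.0); w = 0.5 * w                 # rescale to [0, 1]
    total = 0.0
    for x, y, mass in plan:
        x, y = np.asarray(x, float), np.asarray(y, float)
        seg = y - x
        vals = np.array([np.dot(phi(x + ti * seg), seg) for ti in t])
        total += mass * np.dot(w, vals)
    return total

Each σ_γ_i is handled separately; no aggregation of the flows is needed in the multi-flow reformulation of the next subsection.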
§.§ A system of PDEs for Wasserstein medians
We now rewrite the Wasserstein median problem (<ref>) in terms of a multi-flow minimization:
inf_(σ_1, …, σ_N, ν) ∈ℳ_÷(Ω,ℝ^d)^N×𝒫(Ω){∑_j=1^N λ_j |σ_j | (Ω): ∇·σ_j+ν_j = ν, j=1, …, N },
and observe that ν solves (<ref>) if and only if there exist (σ_1, …, σ_N) ∈ℳ_÷(Ω,ℝ^d)^N such that (σ_1, …, σ_N, ν) solves (<ref>). Since we have assumed λ_i>0 we can perform the change of unknown u_i → u_i/λ_i in (<ref>) and rewrite it as
sup{∑_i=1^Nλ_i∫_Ω u_i ν̣_i : u_i ∈_1(Ω), i=1, …, N, ∑_i=1^Nλ_i u_i≤ 0 }.
We may deduce from what we have recalled in the previous paragraph a characterization of Wasserstein medians, as well as of optimal flows in (<ref>) and optimal potentials in (<ref>), by a system of PDEs of Monge–Kantorovich type. Note that if a median ν∈(ν_1, …, ν_N) were known, the problem of finding the corresponding optimal flows would decouple into N Monge–Kantorovich PDEs in the sense of definition <ref>; but to determine ν, we should take into account the obstacle constraint ∑_i=1^N λ_i u_i ≤ 0 from (<ref>) and the optimality condition from Theorem <ref> that requires ∑_i=1^N λ_i u_i to vanish on the support of ν. All this can be summarized as:
Let ν∈(Ω) then ν∈(ν_1, …, ν_N) if and only if there exist (u_1, …, u_N)∈_1(Ω)^N and (ρ_1, …, ρ_N)∈ℳ_+(Ω)^N such that, for i=1, …, N
∇· (ρ_i ∇_ρ_i u_i)+ ν_i=ν, |∇_ρ_i u_i | =1
coupled with the obstacle conditions
∑_i=1^N λ_i u_i ≤ 0 , ∑_i=1^N λ_i u_i= 0 on supp(ν)
Moreover in this case (u_1,…, u_N) solves (<ref>) and (ρ_1 ∇_ρ_1 u_1, …, ρ_N ∇_ρ_N u_N, ν) solves (<ref>).
In dimension 1, one can integrate the equation ∇·σ_i +ν_i=ν and in this case, (<ref>) appears as the vertical formulation of the median problem (<ref>). One can therefore interpret (<ref>) in higher dimensions as a multidimensional extension of (<ref>).
If ν_i is in L^1(Ω), then the corresponding optimal flow σ_i and transport density ρ_i are also in L^1(Ω) (even though medians need not be absolutely continuous) and one can replace the tangential gradient ∇_ρ_i u_i by ∇ u_i in the Monge–Kantorovich PDE (<ref>).
If θ solves the multimarginal problem (<ref>), then we know that ν:=π_0_#θ is a median and we can recover the corresponding flows as in (<ref>) i.e. by defining:
∫_Ωϕ·σ̣^θ_i=∫_Ω^N+1∫_0^1 ϕ(x+t(x_i-x)) · (x_i-x) ṭ θ̣(x, x_1, …, x_N), for all ϕ∈ C(Ω, ^d),
with this construction (σ^θ_1, …, σ^θ_N, π_0_#θ) is a solution of (<ref>). In fact, invoking Theorem 4.13 in <cit.>, any solution of (<ref>) can be obtained in this way from an optimal multi marginal plan θ.
§.§ Approximation by a system of p-Laplace equations
We shall now see how to approximate a median, as well as dual potentials and Beckmann flows, by a single system of p-Laplace equations (with p large, as in the seminal work of Evans and Gangbo <cit.>; see also <cit.> for a similar strategy for a matching problem involving two sample measures). Given ϵ>0, we consider an exponent p_ϵ≥ 2d, and assume that these exponents satisfy
lim_ϵ→ 0^+ p_ϵ=+∞.
We then consider the functional, defined for u=(u_1, …, u_N) ∈ W^1,p_ϵ(Ω)^N by
J_ϵ(u):=1/p_ϵ∑_i=1^N ∫_Ω|∇ u_i|^p_ϵ+ 1/2 ϵ∫_Ω(∑_j=1^N λ_j u_j)_+^2 -∑_i=1^N λ_i ∫_Ω u_i ν̣_i,
observing that J_ϵ(u)=J_ϵ(u+α) if the α_i's are constants that sum to 0, we can add the normalizing constraint
∫_Ω u_i =0, i=1, …, N-1.
With this normalization at hand we can prove the following.
Let ϵ > 0 and p_ϵ > d. Then
inf_u ∈ W^1,p_ϵ(Ω)^N J_ϵ(u)
admits a unique solution which satisfies the normalization (<ref>).
Existence. First note that for i=1,…,N-1 and u_i ∈ W^1,p_ϵ(Ω) with ∫_Ω u_i=0, using successively the Poincaré–Wirtinger, Morrey and Young inequalities, we have
∫_Ω|∇ u_i|^p_ϵ - λ_i ∫_Ω u_i ν̣_i
≥ C_ϵ/2 ‖ u_i ‖^p_ϵ_W^1,p_ϵ(Ω) - λ_i ‖ u_i ‖_L^∞(Ω)
≥ C_ϵ/4 ‖ u_i ‖^p_ϵ_W^1,p_ϵ(Ω) +C'_ϵ‖ u_i ‖^p_ϵ_L^∞(Ω) - δ/p_ϵ‖ u_i ‖_L^∞(Ω)^p_ϵ - 1/qδ^q(λ_i)^q,
where C_ϵ, C'_ϵ >0 are constants (independent of u_i), δ > 0 and q = p_ϵ/(p_ϵ-1) is the conjugate exponent.
To treat the N-th component, let u_N ∈ W^1,p_ϵ(Ω) and define a_N := ⨍_Ω u_N x̣; then, similarly as before,
∫_Ω|∇ u_N|^p_ϵ - λ_N ∫_Ω u_N ν̣_N
≥ C_ϵ/2 ‖ u_N - a_N ‖^p_ϵ_W^1,p_ϵ(Ω) - λ_N ‖ u_N - a_N ‖_L^∞(Ω) - λ_N a_N
≥ C_ϵ/4 ‖ u_N - a_N ‖^p_ϵ_W^1,p_ϵ(Ω) +C'_ϵ‖ u_N - a_N ‖^p_ϵ_L^∞(Ω) - δ/p_ϵ‖ u_N - a_N‖_L^∞(Ω)^p_ϵ - 1/qδ^q(λ_N)^q - λ_N a_N.
By choosing δ >0 small enough, we obtain altogether
J_ϵ(u) ≥C_ϵ/4∑_i=1^N-1‖ u_i ‖^p_ϵ_W^1,p_ϵ(Ω) + C_ϵ/4‖ u_N - a_N ‖^p_ϵ_W^1,p_ϵ(Ω) + C - λ_N a_N + 1/2 ϵ∫_Ω(∑_j=1^N λ_j u_j)_+^2,
where C is a constant only depending on δ, the λ_i (i=1,…,N) and C'_ϵ.
Now let (u^n)_n ∈ℕ=(u_1^n,…,u_N^n)_n ∈ℕ∈( W^1,p_ϵ(Ω)^N )^ℕ be a minimizing sequence of J_ϵ satisfying our normalization. In order to conclude that (u^n)_n ∈ℕ is bounded in W^1,p_ϵ(Ω), it is enough to find an upper bound on a_N^n = ⨍_Ω u_N^n x̣. Assume by contradiction that (up to a not relabeled subsequence) a_N^n → + ∞ as n →∞; then, by (<ref>), there are constants K, C̃_ϵ >0 (independent of n) such that for i=1,…, N-1
(K + λ_N a_N^n/C̃_ϵ)^1/p_ϵ≥‖ u_i^n ‖_L^∞(Ω), and (K + λ_N a_N^n/C̃_ϵ)^1/p_ϵ≥‖ u_N^n - a_N^n‖_L^∞(Ω),
for all n ∈ℕ.
But then, denoting K^n_ϵ := (K + λ_N a_N^n/C̃_ϵ)^1/p_ϵ,
1/2 ϵ∫_Ω(∑_j=1^N λ_j u_j^n)_+^2 - λ_N a^n_N
≥ 1/2 ϵ∫_Ω(λ_N a^n_N - K^n_ϵ)_+^2 - λ_N a^n_N
≥ 1/2 ϵ∫_Ω(λ_N a^n_N(1 - o(1)) )_+^2 - λ_N a^n_N → + ∞ as n →∞,
contradicting the fact that (u^n)_n ∈ℕ is a minimizing sequence.
This implies that (a_N^n)_n ∈ℕ is bounded, hence (u_N^n)_n ∈ℕ is bounded in W^1,p_ϵ(Ω). Since (u_i^n)_n ∈ℕ is bounded in W^1,p_ϵ(Ω) for i=1, …, N, it has a subsequence that converges weakly in W^1,p_ϵ(Ω); by the weak lower semicontinuity of J_ϵ, the weak limit of this subsequence is indeed a minimizer of J_ϵ.
Uniqueness. Let u, u̅ be minimizers of J_ϵ. Then by strict convexity of |·|^p_ϵ and convexity of (·)_+^2 we have
∇ u_i = ∇u̅_i ℒ^d-a.e. for i = 1,…,N,
(∑_j=1^N λ_j u_j)_+^2 = (∑_j=1^N λ_j u̅_j)_+^2 ℒ^d-a.e.
By the normalization (<ref>) we then get u_i = u̅_i for i=1,…,N-1, and there is c_N ∈ℝ such that u_N = u̅_N + c_N. But then
0 = J_ϵ(u̅) - J_ϵ(u) =λ_N ∫_Ω u_N ν̣_N - λ_N ∫_Ω (u_N - c_N)ν̣_N = λ_N c_N,
which is only possible if c_N = 0.
The unique minimizer u^ϵ=(u_1^ϵ, …, u_N^ϵ) of J_ϵ under the normalization (<ref>) is characterized by the system of PDEs
-∇·(|∇ u_i^ϵ|^p_ϵ-2∇ u_i^ϵ) + λ_i (∑_j=1^N λ_j u_j^ϵ/ϵ)_+=λ_i ν_i, i=1, …, N
with Neumann boundary conditions, in the weak sense, which means that, for every i and every φ∈ W^1,p_ϵ (Ω), one has
∫_Ω|∇ u_i^ϵ|^p_ϵ-2∇ u_i^ϵ·∇φ+ λ_i ∫_Ω(∑_j=1^N λ_j u_j^ϵ/ϵ)_+ φ = λ_i ∫_Ωφν̣_i,
supplemented, of course, by the normalization (<ref>). To shorten notations and for further use, let us define
σ_i^ϵ:= |∇ u_i^ϵ|^p_ϵ-2∇ u_i^ϵ/λ_i, ν^ϵ:=1/ϵ( ∑_j=1^N λ_j u_j^ϵ)_+,
so that the optimality system (<ref>) can be rewritten as
-∇·σ_i^ϵ + ν^ϵ=ν_i, i=1, …, N.
In particular (testing the N-th equation against a constant), ν^ϵ, which is a nonnegative continuous (at least 1/2-Hölder when p_ϵ≥ 2d) function, is a probability density on Ω.
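As a purely illustrative companion to this regularized system, the following Python sketch (ours; the one-dimensional grid, the moderate exponent p and the use of scipy's L-BFGS are assumptions made for readability, not choices from the text) minimizes a finite-difference version of J_ϵ and reads off an approximate median density from the positive part of ∑_j λ_j u_j^ϵ:

# Hypothetical sketch: minimize a discretized J_eps on a uniform grid of [0, 1]
# (Neumann boundary); a moderate p is used instead of p_eps -> infinity for stability.
import numpy as np
from scipy.optimize import minimize

def solve_p_laplace_system(nus, lams, p=8.0, eps=1e-2):
    """nus: (N, n) histograms on the grid (each summing to 1), lams: (N,) weights."""
    nus, lams = np.asarray(nus, float), np.asarray(lams, float)
    N, n = nus.shape
    h = 1.0 / (n - 1)

    def J(u_flat):
        u = u_flat.reshape(N, n)
        grad = np.diff(u, axis=1) / h                       # forward differences
        term_p = (h / p) * np.sum(np.abs(grad) ** p)
        s = (lams[:, None] * u).sum(axis=0)                 # sum_j lam_j u_j
        term_pen = (h / (2.0 * eps)) * np.sum(np.maximum(s, 0.0) ** 2)
        term_lin = np.sum(lams[:, None] * u * nus)          # sum_i lam_i <u_i, nu_i>
        return term_p + term_pen - term_lin

    res = minimize(J, np.zeros(N * n), method="L-BFGS-B")
    u = res.x.reshape(N, n)
    nu_eps = np.maximum((lams[:, None] * u).sum(axis=0), 0.0) / eps
    return u, nu_eps / max(nu_eps.sum(), 1e-12)             # normalized approximate median

The normalization of the output mimics the fact, noted above, that ν^ϵ is a probability density; with p fixed rather than p_ϵ→∞, this can only serve as a rough qualitative illustration.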
Then we have the following convergence result:
Up to extracting a vanishing (not explicitly written) sequence ε_n → 0 as n→∞, one may assume that
* (u^ϵ)_ϵ>0 converges uniformly to some u=(u_1, …, u_N) which is a vector of optimal dual potentials, i.e. solves (<ref>),
* for each i, (σ_i^ϵ)_ϵ>0 converges weakly * to some vector-valued measure σ_i, (ν^ϵ)_ϵ>0 converges weakly * to some probability measure ν and (σ_1, …, σ_N, ν) solves the Beckmann problem (<ref>). In particular, ν is a Wasserstein median.
Step 1: bounds on u^ϵ. Multiplying (<ref>) by u_i^ϵ first yields
‖∇ u_i^ϵ‖_L^p_ϵ^p_ϵ + λ_i ∫_Ω u_i^ϵν^ϵ =λ_i ∫_Ω u_i^ϵν̣_i, i=1, …, N.
Summing over i we thus get
∑_i=1^N ‖∇ u_i^ϵ‖_L^p_ϵ^p_ϵ + 1/ϵ∫_Ω( ∑_j=1^N λ_j u_j^ϵ)^2_+ =∑_i=1^Nλ_i ∫_Ω u_i^ϵν̣_i.
By Morrey's and Hölder's inequalities, p_ϵ≥ 2d and the fact that u_i^ϵ has zero mean for i=1, …, N-1, we have for positive constants C and C' depending only on Ω (but possibly changing from one line to another)
‖ u_i^ϵ‖_∞≤ C ‖∇ u_i^ϵ‖_L^2d≤ C |Ω|^1/2d-1/p_ϵ‖∇ u_i^ϵ‖_L^p_ϵ≤ C' ‖∇ u_i^ϵ‖_L^p_ϵ,
which together with (<ref>) and the fact that both ν^ϵ and ν_i are probability measures gives
max_i=1, …, N-1‖ u_i^ϵ‖_∞≤ C, max_i=1, …, N-1‖∇ u_i^ϵ‖_L^p_ϵ^p_ϵ≤ C.
Let us now get similar bounds on u_N^ϵ: using (<ref>) with i=N, the fact that ν^ϵ and ν_N are probability measures and then again Morrey's inequality (applied to u_N^ϵ - ⨍_Ω u_N^ϵ), we get
‖∇ u_N^ϵ‖_L^p_ϵ^p_ϵ = λ_N ∫ (u_N^ϵ-min_Ω u_N^ϵ) (̣ν_N -ν^ϵ) ≤λ_N osc _Ω ( u_N^ϵ) ≤ C ‖∇ u_N^ϵ‖_L^p_ϵ,
which gives
‖∇ u_N^ϵ‖_L^p_ϵ^p_ϵ≤ C, osc _Ω ( u_N^ϵ) ≤ C.
With (<ref>) and (<ref>) and the bound on osc _Ω ( u_N^ϵ), we thus get, taking C' ≥∑_i=1^N-1λ_i ‖ u_i^ϵ‖_∞,
0 ≤1/ϵ∫_Ω(λ_N max_Ω u_N^ϵ-C')_+^2 ≤ C + λ_N ∫ u_N^ϵν̣_N≤ C+ λ_N max_Ω u_N^ϵ
from which one readily deduces that max_Ω u_N^ϵ is bounded uniformly in ϵ, hence (u_N^ϵ)_ϵ>0 is bounded in L^∞ because of the bound on osc _Ω ( u_N^ϵ). Finally, we have shown that
max_i=1, …, N‖ u_i^ϵ‖_∞≤ C, max_i=1, …, N‖∇ u_i^ϵ‖_L^p_ϵ^p_ϵ≤ C,
which also implies C^0,1/2 bounds, so, extracting a vanishing (not explicitly written) sequence ε_n → 0 as n→∞ and thanks to the Ascoli–Arzelà theorem, one may assume that (u^ϵ)_ϵ>0 converges uniformly to some u with u∈ W^1,q(Ω) for every q∈ (1, +∞). And since (∇ u^ϵ)_ϵ>0 is bounded in every L^q, we may also assume that for every q∈ (1, +∞), (∇ u^ϵ)_ϵ>0 converges weakly to ∇ u in L^q(Ω). Of course, we may also assume that (ν^ϵ)_ϵ>0 converges weakly * to some probability measure ν and that the (bounded in L^1, thanks to (<ref>) and the definition of σ_i^ϵ) sequence (σ_i^ϵ)_ϵ>0 converges weakly * to some vector-valued measure σ_i.
Step 2: u satisfies the constraints of the dual. By (<ref>) and (<ref>), we have
∫_Ω( ∑_j=1^N λ_j u_j^ϵ)^2_+ ≤ Cϵ.
so that, letting ϵ→ 0^+, we get
∑_j=1^N λ_j u_j ≤ 0.
Let us now prove that each u_i is 1-Lipschitz as a consequence of (<ref>) and (<ref>). First fix q and let ϵ be small enough so that p_ϵ≥ q; then
‖∇ u_i^ϵ‖_L^q≤|Ω|^1/q-1/p_ϵ C^1/p_ϵ.
So letting ϵ→ 0, we get with (<ref>)
‖∇ u_i ‖_L^q≤|Ω|^1/q, for all q∈ (1,+∞).
So letting now q→ +∞ we obtain
‖∇ u_i ‖_L^∞≤ 1
which implies that each u_i is 1-Lipschitz by convexity of Ω.
Step 3: optimality of the limits. We already know that ν is a probability measure. Passing to the limit in (<ref>), we get
-∇·σ_i + ν=ν_i, i=1, …, N,
which is the constraint in Beckmann problem (<ref>). Since u is admissible in the dual, to conclude, by weak duality, it is enough to show that
∑_i=1^N λ_i |σ_i | (Ω) ≤∑_i=1^N λ_i ∫_Ω u_i ν̣_i.
First observe that (<ref>) entails
∑_i=1^N λ_i ∫_Ω u_i^ϵν̣_i ≥∑_i=1^N ∫_Ω|∇ u_i^ϵ|^p_ϵ
= ∑_i=1^N ∫_Ω|λ_i σ_i^ϵ|^p_ϵ/p_ϵ-1 .
Note then that, by Hölder's inequality, we have
∫_Ω|σ_i^ϵ|^p_ϵ/p_ϵ-1≥‖σ_i^ϵ‖_L^1^p_ϵ/p_ϵ-1|Ω|^-1/p_ϵ-1
so that
lim inf_ϵ→ 0^+∫_Ω|σ_i^ϵ|^p_ϵ/p_ϵ-1≥lim inf_ϵ→ 0^+‖σ_i^ϵ‖_L^1‖σ_i^ϵ‖_L^1^1/p_ϵ-1≥lim inf_ϵ→ 0^+‖σ_i^ϵ‖_L^1≥|σ_i | (Ω)
where the second inequality is obtained by distinguishing the (obvious) case where (after a suitable extraction) (σ_i^ϵ)_ϵ>0 converges strongly to 0 in L^1 from the case where ‖σ_i^ϵ‖_L^1 remains bounded away from 0, and the last inequality follows from the weak * convergence of (σ_i^ϵ)_ϵ>0 to σ_i. We thus get
∑_i=1^N λ_i ∫_Ω u_i ν̣_i = lim inf_ϵ→ 0^+∑_i=1^N λ_i ∫_Ω u_i^ϵν̣_i≥lim inf_ϵ→ 0^+∑_i=1^N λ_i^p_ϵ/p_ϵ-1∫_Ω|σ_i^ϵ|^p_ϵ/p_ϵ-1≥∑_i=1^N λ_i |σ_i |(Ω),
which proves (<ref>) and ends the proof.
§ NUMERICS
In this section, we briefly mention the numerical methods we employed to generate the figures in the paper and present a new one based on a Douglas–Rachford scheme for the multi-flow formulation (<ref>). All the experiments are performed in Python on an Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz with 8 Gb of RAM and are available for reproducibility at <https://github.com/TraDE-OPT/wasserstein-medians>.
§.§ Sorting, Linear Programming, Sinkhorn
Recall from Section <ref> that in the one-dimensional case the Wasserstein median problem admits an almost closed-form solution, which can be computed directly with simple sorting procedures. We implemented these well-known schemes to generate Figure <ref>. Here we rather focus on the case Ω⊂ℝ^2, which is more relevant e.g. for imaging.
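For reference, the one-dimensional sorting approach just mentioned can be sketched in a few lines of Python (our illustration; taking, at each quantile level, a weighted median of the sample quantile functions yields one particular median selection, and the helper names are ours):

import numpy as np

def weighted_median(values, weights):
    # smallest value whose cumulative weight reaches half of the total weight
    order = np.argsort(values)
    v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
    cw = np.cumsum(w)
    return v[np.searchsorted(cw, 0.5 * w.sum())]

def median_1d_quantiles(samples, lams, n_levels=200):
    """samples: list of N one-dimensional arrays; lams: weights summing to 1.
    Returns the quantile function of one Wasserstein median, sampled at n_levels levels."""
    ts = (np.arange(n_levels) + 0.5) / n_levels
    Q = np.array([np.quantile(s, ts) for s in samples])     # (N, n_levels)
    return np.array([weighted_median(Q[:, k], lams) for k in range(n_levels)])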
Wasserstein median problems on a fixed grid of size n = p^2 for a sample of size N can be tackled either via Linear Programming methods, taking advantage of the minimum-cost flow nature of the problem <cit.>, or via Sinkhorn-like methods applied to an entropy-regularized finite-dimensional variant of (<ref>) <cit.>, see also <cit.>. The latter represents the most popular approach. We employed the Sinkhorn method to generate Figure <ref>. Despite their well-known advantages, the Sinkhorn algorithm and entropic regularization methods can lead to severe computational issues, such as blurred outputs, important numerical instabilities, and memory issues to store the so-called kernel matrix <cit.>. It is worth mentioning that several efforts have been made to develop Sinkhorn-like methods that address these limitations, including log-space tricks for stability <cit.>, de-biased variants for blurring artifacts <cit.>, and truncation strategies for memory and speed improvements <cit.>.
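For completeness, here is a minimal Python sketch of a fixed-support entropic scheme of this kind, written as iterative Bregman projections with the distance (not squared) cost; it is our own schematic version of the standard algorithm and not the exact implementation used for the figures:

# Hypothetical sketch of an entropic (Sinkhorn-like) median on a fixed grid,
# via iterative Bregman projections; C is the distance cost matrix.
import numpy as np

def entropic_median(nus, lams, C, gamma=1e-2, n_iter=500):
    """nus: (N, n) histograms, lams: (N,) weights, C: (n, n) distance matrix."""
    nus, lams = np.asarray(nus, float), np.asarray(lams, float)
    N, n = nus.shape
    K = np.exp(-C / gamma)                                 # kernel matrix
    v = np.ones((N, n))
    for _ in range(n_iter):
        u = nus / np.maximum(K @ v.T, 1e-300).T            # u_i = nu_i / (K v_i)
        Ktu = np.maximum(K.T @ u.T, 1e-300).T              # rows: K^T u_i
        nu = np.exp(lams @ np.log(Ktu))                    # weighted geometric mean
        v = nu[None, :] / Ktu
    return nu / nu.sum()

Small values of gamma sharpen the output but worsen exactly the numerical issues recalled above, which motivates the flow-based method presented next.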
In the next paragraph, we present a new approach which targets (<ref>), benefits from low memory requirements and a fast convergence behaviour, and produces non-blurred approximate medians. Note, however, that this approach, while well-suited for Wasserstein medians, cannot be easily generalized to approximate Wasserstein barycenters.
§.§ Douglas–Rachford on the Beckmann formulation.
Given a square domain Ω and N≥ 2 measures (ν_1, …, ν_N) ∈𝒫(Ω)^N, consider the Beckmann minimal flow formulation of the Wasserstein median problem (<ref>). To discretize (<ref>), we introduce the square grid 𝒢_h:={hi : i=1,…, p}^2 with step-length h:=1/p, and the discrete spaces ℳ_h:={μ: 𝒢_h→ℝ} and 𝒮_h:={σ: 𝒢_h→ℝ^2}. Note that ℳ_h and 𝒮_h are finite-dimensional vector spaces which can be identified with ℝ^n and ℝ^n× 2, respectively, where n :=p^2. Thus, we often treat elements in ℳ_h and 𝒮_h as vectors. We consider the usual discretization of the gradient ∇_h: ℳ_h→𝒮_h defined via forward differences with homogeneous Neumann boundary conditions as in <cit.>. The discrete divergence operator, which we denote by ÷_h = -∇_h^*, is the opposite of the adjoint of ∇_h with respect to the scalar products ⟨·, ·⟩_ℳ_h and ⟨·, ·⟩_𝒮_h (i.e. the usual ℓ^2 scalar products on ℝ^n and ^n× 2, respectively). Now, let
ℱ_h:={ (σ_1, …, σ_N, ν)∈𝒮_h^N×ℳ_h : ÷_h σ_k+ν_k=ν for all k= 1, …, N },
where (ν_1, …, ν_N)∈ℳ_h^N are suitable (not renamed) discretizations of ν_1, …, ν_N on the grid 𝒢_h. With this notation, let us consider the discretized version of (<ref>):
min_(σ_1, …, σ_N, ν) ∈ℱ_h ∑_k=1^N λ_k ‖σ_k ‖_1,2 + 𝕀_Δ(ν)
where Δ is the unit simplex, and ‖·‖_1,2 is the ℓ_1,2 norm on 𝒮_h, also known as the group-Lasso penalty, which is defined for all σ∈𝒮_h by ‖σ‖_1,2:=∑_i=1^n ‖σ(x_i)‖, where ‖·‖ is the usual ℓ_2 norm on ℝ^2.
To solve (<ref>), we apply a Douglas–Rachford method to
min_(σ_1, …, σ_N, ν) ∈𝒮_h^N×ℳ_h ∑_k=1^N λ_k σ_k _1,2 + 𝕀_Δ(ν)_:=g_1(σ_1, …, σ_N, ν)+𝕀_ℱ_h(σ_1, …, σ_N, ν)_ :=g_2(σ_1, …, σ_N, ν).
The Douglas–Rachford method <cit.> is an instance of the proximal point algorithm <cit.>, which can be employed to solve a minimization problem consisting of the sum of two convex lower semicontinuous functions which are accessible through the evaluation of their proximity operators. In our case, the proximity operator of g_1, which is separable, consists in a projection onto the unit simplex, denoted by P_Δ, for the discrete measure ν, and in the application of the proximity operator of the group-Lasso penalty with parameter τ >0 on each component σ_i, which can be computed in closed form <cit.>.
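Both ingredients of the proximity operator of g_1 can be written in a few lines. The sketch below is our own illustration (the function names are ours); the row-wise shrinkage is applied with threshold τλ_k to the k-th flow, and the simplex projection follows the classical sorting-based algorithm:

import numpy as np

def group_soft_threshold(sigma, tau):
    """sigma: (n, 2) array of vector values; prox of tau * ||.||_{1,2} shrinks each row."""
    norms = np.linalg.norm(sigma, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-300), 0.0)
    return scale * sigma

def project_simplex(nu):
    """Euclidean projection of a real vector nu onto the probability simplex."""
    u = np.sort(nu)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(nu)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(nu - theta, 0.0)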
The proximity operator of g_2, i.e. the projection onto the affine subspace ℱ_h, is more delicate. Recall from optimality conditions that, formally, the projection onto the solution set of a linear system of the form Ax=b is given, for all y, by Py = y - A^*ξ where ξ is any element that solves AA^*ξ = Ay-b. In our case, we have b = -[ν_1, …, ν_N]^T and the linear operators A and AA^* can be written in block form as
A :=
[ ÷_h -I; ⋱ ⋮; ÷_h -I ],
AA^* =
[ -Δ_h+I I ⋯ I; I ⋱ ⋮; ⋮ I; I I ⋯ -Δ_h+I ],
where Δ_h:ℳ_h→ℳ_h is the discrete Laplacian operator, namely Δ_h = ÷_h∇_h.
Let σ =(σ_1, …, σ_N) ∈𝒮_h^N and ν∈ℳ_h ∩Δ, and let ∇_h:ℳ_h→𝒮_h be the discrete gradient operator defined via forward differences with homogeneous Neumann boundary conditions. Then the projection (σ̃, ν̃) of (σ, ν) onto ℱ_h is given by
σ̃_i := σ_i+∇_hξ_i, ν̃ := ν+ξ_1+⋯+ξ_N,
where ξ_i := ξ_i'-(I-1/NΔ_h)^-1(1/N∑_j=1^Nξ_j') and ξ_i' is any solution to
-Δ_h ξ_i' = ÷_hσ_i+ν_i-ν for all i = 1, …, N.
First, let i = 1, …, N, let 1∈ℳ_h be the function constantly equal to 1, and note that, by definition of the scalar products, since ν_i, ν∈Δ and ∇_h 1 = 0, we get
⟨÷_hσ_i+ν_i-ν, 1⟩_ℳ_h = ⟨÷_hσ_i, 1⟩_ℳ_h = -⟨σ_i, ∇_h 1⟩_𝒮_h = 0.
Hence, ÷_hσ_i+ν_i-ν∈ (ker∇_h)^⊥ =(kerΔ_h)^⊥ = ranΔ_h for all i = 1, …, N, and, thus, (<ref>) actually admits a solution. From the optimality conditions, we only need to show that A(σ̃, ν̃) = b and that ξ:=(ξ_1, …, ξ_N) solves AA^*ξ=A(σ, ν)-b where A and AA^* are defined in (<ref>). Let us start with the latter. Denoting ξ̅':=1/N∑_i=1^Nξ_i', we have for all i = 1, …, N that
-Δ_h ξ_i+ξ_1+⋯+ξ_N = -Δ_h ξ_i'+Δ_h(I-1/NΔ_h)^-1ξ̅'+Nξ̅'-N(I-1/NΔ_h)^-1ξ̅'
=÷_hσ_i+ν_i-ν+Nξ̅'-N(I-1/NΔ_h)(I-1/NΔ_h)^-1ξ̅'
= ÷_h σ_i+ν_i-ν.
Hence AA^*ξ=A(σ, ν)-b. Regarding A(σ̃, ν̃) = b, we have
÷_hσ̃_i +ν_i = ÷_h σ_i+Δ_hξ_i +ν_i = ÷_h σ_i+Δ_h ξ_i'-Δ_h(I-1/NΔ_h)^-1ξ̅'+ν_i
= ν -Δ_h(I-1/NΔ_h)^-1ξ̅'=ν+Nξ̅'-N(I-1/NΔ_h)^-1ξ̅'=ν̃,
which concludes the proof.
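A direct transcription of this projection in Python could look as follows; this is our own sketch (the operators Dh, Gh and Lh = Dh·Gh are assumed to be pre-assembled sparse matrices acting on flattened grid functions, with flows stored as (n, 2) arrays), not the repository code:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def project_onto_Fh(sigmas, nu, nus, Dh, Gh, Lh):
    """sigmas: list of N (n, 2) flow arrays, nu: (n,) median weights, nus: list of (n,) data weights.
    Dh: (n, 2n) sparse divergence, Gh: (2n, n) sparse gradient, Lh = Dh @ Gh."""
    N, n = len(sigmas), nu.shape[0]
    xi_p = []
    for sig, nui in zip(sigmas, nus):
        rhs = Dh @ sig.ravel() + nui - nu
        rhs = rhs - rhs.mean()                       # keep the right-hand side in ran(Lh)
        xi_p.append(spla.lsqr(-Lh, rhs)[0])          # one solution of -Lh xi' = rhs
    xi_bar = sum(xi_p) / N
    M = sp.identity(n, format="csc") - Lh / N
    shift = spla.spsolve(M, xi_bar)                  # (I - Lh/N)^{-1} xi_bar
    xis = [x - shift for x in xi_p]
    sig_proj = [sig + (Gh @ xi).reshape(sig.shape) for sig, xi in zip(sigmas, xis)]
    nu_proj = nu + sum(xis)
    return sig_proj, nu_proj

The mean subtraction on the right-hand side mirrors the compatibility issue discussed below for the first Laplacian system.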
Proposition <ref> allows us to implement a Douglas–Rachford scheme on (<ref>), which we summarize in Algorithm <ref>.
Note that, in Algorithm <ref>, we are required to solve two sparse (elliptic) linear systems, which we tackle with generic sparse linear solvers provided by standard Python libraries. However, one should take adequate care when solving the first Laplacian system. Indeed, if the projection onto the simplex is not computed sufficiently accurately, the right-hand side can lie outside the range of the Laplacian. For this reason, in our numerical implementation, we smoothed out all possible numerical errors with a further projection of the right-hand side onto the set of discrete measures with total mass equal to one.
The computational cost required to solve the aforementioned linear systems is overall balanced by a very fast iteration-wise convergence behaviour. Remarkably, there is no need to store dense n× n matrices. This makes the proposed method suitable for very large-scale instances, see e.g. Figures <ref> and <ref>.
Convergence. The Douglas–Rachford splitting method benefits from robust convergence guarantees, without any condition on either the starting point or the step-size τ>0 <cit.>. In particular, we have that if (σ_q^k)_k∈ℕ, (ν^k)_k∈ℕ, (η^k)_k∈ℕ and (μ^k)_k ∈ℕ are the sequences generated by Algorithm <ref>, then for each q= 1, …, N, we have σ_q^k →σ_q^* and ν^k →ν^*, and (σ_1^*, …, σ_N^*, ν^*) solves (<ref>). As a stopping criterion, we measure the residual r^k:= ∑_q=1^N‖η_q^k+1-η_q^k‖_𝒮_h^2+‖μ^k+1-μ^k‖^2_ℳ_h, which is guaranteed to converge to zero with a o(k^-1) worst-case rate, and we stop the iterations as soon as the residual drops below a prescribed tolerance.
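For orientation, the overall iteration can be sketched as follows in Python, combining the two proximal maps sketched above (the loop structure is the standard Douglas–Rachford recursion; the variable names, the step τ and the simplified stopping test are our own assumptions and do not reproduce Algorithm <ref> verbatim):

import numpy as np

def douglas_rachford_median(z_sig, z_nu, nus, lams, Dh, Gh, Lh,
                            tau=1.0, n_iter=2000, tol=1e-7):
    """z_sig: list of N (n, 2) arrays, z_nu: (n,) array (initial auxiliary variables)."""
    for _ in range(n_iter):
        # x = prox_{tau g1}(z): group shrinkage on each flow, simplex projection on nu
        x_sig = [group_soft_threshold(zs, tau * lam) for zs, lam in zip(z_sig, lams)]
        x_nu = project_simplex(z_nu)
        # y = prox_{tau g2}(2x - z): projection onto the affine set F_h
        y_sig, y_nu = project_onto_Fh([2.0 * a - b for a, b in zip(x_sig, z_sig)],
                                      2.0 * x_nu - z_nu, nus, Dh, Gh, Lh)
        # z <- z + y - x, with a (simplified) fixed-point residual as stopping test
        res = sum(np.linalg.norm(ys - xs) ** 2 for ys, xs in zip(y_sig, x_sig)) \
              + np.linalg.norm(y_nu - x_nu) ** 2
        z_sig = [zs + ys - xs for zs, ys, xs in zip(z_sig, y_sig, x_sig)]
        z_nu = z_nu + y_nu - x_nu
        if res < tol:
            break
    return x_sig, x_nu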
Comments. Note that, to solve (<ref>), we also implemented the Primal Dual Hybrid Gradient method by Chambolle and Pock, with different step-size selection strategies, such as backtracking and adaptive schemes <cit.>, as well as several different fixed step-size choices, which, however, always exhibited very slow convergence; we therefore chose not to discuss it further. Note that for OT-like problems, the Douglas–Rachford splitting method has been employed first in its dual formulation (ADMM) in <cit.>, then in <cit.> and more recently in <cit.>. Its extension to the Wasserstein median case proposed in the present paper has, surprisingly, been overlooked so far.
Acknowledgments: E.C. would like to thank Stefano Gualandi for the helpful discussions during the preparation of this work. G.C. acknowledges the support of the Lagrange Mathematics and Computing Research Center. E.C. has received funding from the European Union’s Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant Agreement No. 861137. The Department of Mathematics and Scientific Computing, to which E.C. is affiliated, is a member of NAWI Graz (https://www.nawigraz.at/enhttps://www.nawigraz.at/). K.E. acknowledges that this project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754362.
§ APPENDIX
Proof of (<ref>).
Of course, if all the x_i's are equal then I_±(x)={1, …, N} and (<ref>) is nothing but (<ref>). We may therefore assume that
Δ :=min{| x_i-x_j | : x_i ≠ x_j } >0.
Then, setting
δ_i^+:=max_k { y_k-x_k : x_k=x_i}, δ_i^-:=min_k { y_k-x_k : x_k=x_i},
and ^±:=(δ_1^±, … , δ_N^±) we have + ^+ ≥≥+ ^- and then by monotonicity
^±( + ^+) ≥^±() ≥^±(+ ^-),
but if we choose close enough to , namely such that
max_i, j|δ^±_i - δ^±_j |≤Δ/2
this, together with the definition of Δ, implies that the components of and + ^± are ordered in the same way, i.e. x_j < x_i if and only if x_j + δ_j^± < x_i + δ_i^±. Thus, for i∈ I_+(x), x_i =^+() and
∑_j : x_j + δ_j^- < x_i + δ_i^- λ_j = ∑_j : x_j < x_i λ_j ≤1/2 ,
so that
^+() ≥^+(+ ^-) ≥ x_i + ^-_i ≥^+() + min_k∈ I_+() (y_k-x_k).
In a similar way for i∈ I_-(x), x_i =^-() and
∑_j : x_j + δ_j^- ≤ x_i + δ_i^- λ_j = ∑_j : x_j ≤ x_i λ_j ≥1/2,
so that
^-() ≥^-(+ ^-) ≥ x_i + ^-_i ≥^-() + min_k∈ I_-() (y_k-x_k).
This proves the rightmost inequalities in (<ref>). The proof of the leftmost inequalities in (<ref>) is similar and thus omitted.
|
http://arxiv.org/abs/2307.00672v1
|
20230702214425
|
Electron delocalization in aromaticity as a superposition phenomenon
|
[
"Mahir H. Yeşiller",
"Onur Pusuluk"
] |
quant-ph
|
[
"quant-ph",
"physics.chem-ph"
] |
[email protected]
Department of Physics, Koç University, 34450 Sarıyer, Istanbul, Turkey
[email protected]
Department of Physics, Koç University, 34450 Sarıyer, Istanbul, Turkey
This letter investigates the applications and extensions of the resource theory of quantum superposition within the realm of quantum chemistry. Specifically, our emphasis is placed on the exploration of aromaticity, a fundamental concept originally developed to elucidate the structural symmetry, energetic stability, and chemical reactivity of benzene and its derivatives. While both aromaticity and its counterpart, antiaromaticity, are associated with the delocalization of electrons between nonorthogonal atomic orbitals, they lack a universally accepted and comprehensive definition. We demonstrate that the genuine quantum superposition exhibited by biorthogonal atomic orbitals effectively captures the aromaticity order of molecules. These findings reveal that the quantum resource theories hold significant implications, offering fresh insights into our comprehension of chemical bonding phenomena.
Electron delocalization in aromaticity as a superposition phenomenon
Onur Pusuluk
August 1, 2023
====================================================================
Introduction.— One of the counterintuitive concepts that make quantum mechanics an intriguing theory is the superposition principle, which allows quantum systems to exist in multiple states simultaneously <cit.>. This unique and exceptional phenomenon has significant implications as a resource for tasks that cannot be accomplished through classical means, exemplified by quantum teleportation <cit.>. Therefore, the quantification and manipulation of various forms of quantum superposition, such as quantum coherence and quantum entanglement, have become subjects of great interest <cit.>.
Accordingly, the quantum superposition of distinguishable (orthogonal) states, known as quantum coherence, has a well-grounded resource theory <cit.>. When this special form of superposition is shared between systems separated in space, it is referred to as quantum correlation, which is better understood through its quantification and manipulation <cit.>. The basis-independent nature of quantum correlations makes them a crucial resource for many emerging quantum technologies <cit.>. For instance, quantum entanglement, an exemplary manifestation of quantum superposition, is considered a perfect illustration of quantum correlations, and its substantial role as a resource within quantum technologies is widely recognized <cit.>. Nevertheless, quantum entanglement is only a subset of the most general quantum correlations, known as quantum discord <cit.>.
For indistinguishable (nonorthogonal) states, quantification and manipulation of superposition require the generalization of the framework used in coherence theory. This generalization, called the resource theory of superposition (RTS), relaxes the orthogonality condition of basis states to linear independence <cit.>. Here, quantum superposition can exist in two different forms: not only as the linear combination of basis states but also with overlaps <cit.>. From this perspective, quantum coherence is viewed as a subset of superposition when nonclassicality is formed entirely in the form of a linear combination of basis states, and all the overlaps between these states vanish.
The quantification and manipulation of nonclassicality in optical coherent states can be considered an application of RTS <cit.>. However, the importance of overlaps between nonorthogonal states is most evident in molecular electronic states <cit.>. In the molecular world, nonorthogonal states correspond to mathematical objects representing chemical structures such as atomic orbitals <cit.>. Chemical bonding and other related quantum chemical phenomena arise from the overlaps between these states <cit.>. Some of these phenomena form the basis of (quantum) chemistry science and education <cit.> but are referred to as “unicorns” in chemistry because they cannot be physically observed <cit.>.
Aromaticity is one of the most ancient unicorns in chemistry <cit.>, tracing its origins back even further than the discovery of quantum physics and the principle of superposition <cit.>. However, it still lacks a comprehensive and universally accepted definition <cit.>. Conventionally, both aromaticity and its counterpart, antiaromaticity, are believed to arise from electron delocalization within a group of atoms that form a closed loop, either in two or three dimensions <cit.>. In simpler terms, a molecule is considered nonaromatic if its electronic structure cannot be represented by a quantum superposition that spreads over a closed region or volume in space.
This letter investigates the delocalization of electrons in the π-electronic structures of some archetypal monocyclic aromatic molecules as an application of RTS. To this aim, we consider the ground state of molecules in the basis of localized nonorthogonal atomic orbitals at the post-Hartree-Fock level of theory. By using the genuine superposition measure that we proposed in <cit.>, we show that the amount of superposition shared between biorthogonal atomic orbitals can effectively capture the aromaticity order of the molecules, comparable to the most successful measures of aromaticity used in the literature.
The structure of the paper is as follows. We begin by providing an overview of quantifying superposition in nonorthogonal quantum states and its limitations, followed by the introduction of the biorthogonal framework that addresses these limitations. Subsequently, we present results for applying these concepts to analyze electron delocalization in the π-electronic structures of ground states for some aromatic molecules.
Superposition in a nonorthogonal basis.— Consider a d-dimensional Hilbert space denoted by ℋ, which possesses a normalized, linearly independent, and nonorthogonal basis consisting of states |c_i⟩ such that ⟨ c_i|c_j⟩ = S_ij∈ℂ. Any density operator ρ̂ existing within ℋ can be represented using this basis as follows:
ρ̂ = ∑_i,j=1^d ρ_ij|c_i⟩⟨ c_j|.
Here, the complex coefficients ρ_ij are equal to ⟨ c_i^⊥ | ρ̂ | c_j^⊥⟩, where {|c_i^⊥⟩} with ⟨ c_i^⊥ |c_j ⟩ = δ_i,j is another nonorthogonal basis called the dual of the basis {|c_i⟩}.
When the density operator takes the form ρ̂_f=∑_i p_i |c_i⟩⟨ c_i|, where p_i represents a probability distribution, the matrix ρ formed by ρ_ij contains non-zero elements only in its diagonal. These states are referred to as superposition-free according to Ref. <cit.>. Conversely, states that possess non-zero off-diagonal elements in ρ are termed superposition states. By taking ρ̂_f as a reference point or benchmark for quantifying and comparing the resource content of other states, the amount of superposition present in state (<ref>) can be quantified by
l_1[ ρ ] = ∑_i≠ j |ρ_ij|.
However, as demonstrated in Ref. <cit.>, when we regard ρ̂_f as the state of a composite system, it becomes evident that it should encompass superposition. This superposition arises not from a mere linear combination of nonorthogonal states, but rather from their mutual overlaps. Consequently, the off-diagonal elements of ρ do not fully encode the complete information regarding the quantum superposition carried by the state ρ̂.
Superposition in a biorthogonal basis.— The limitations of the nonorthogonal matrix representation ρ become apparent when it fails to satisfy the unit-trace requirement. To address this, we should introduce the overlap or Gram matrix S, which contains the necessary overlap information. Then, the unit-trace requirement can be fulfilled by considering the following expressions:
tr[ ρ̂]= ∑_i ⟨ c_i^⊥| ρ̂ |c_i⟩ = 1 = tr[ ρ S ].
On the other hand, an extension of the nonorthogonal basis {|c_i⟩} to {|c_i⟩, |c_i^⊥⟩} naturally incorporates the overlap information through the structure of the dual basis <cit.>, defined by
|c_i^⊥⟩ = ∑_j S_ji^-1|c_j⟩.
By employing the biorthogonal basis {|c_i⟩, |c_i^⊥⟩}, we can represent the state described in Eq. (<ref>) as follows:
ρ̂ = ∑_i,j=1^d ρ̅_ij |c_i^⊥⟩⟨ c_j|,
which allows us to construct an alternative matrix form for ρ̂, known as the biorthogonal matrix representation. It is composed of the elements ρ̅_ij=⟨ c_i^⊥|ρ̂|c_j⟩ and is denoted as ρBO. Although this matrix is non-Hermitian, it satisfies the unit-trace property. In other words, since ρBO = ρ S, the trace of ρBO should be equal to one, as shown in Eq. (<ref>). Therefore, ρBO encapsulates all the (non-)probabilistic information in its (off-)diagonal elements. Consequently, we can quantify the total superposition in the state ρ̂ using the following expression:
l_1[ ρ_BO ] = ∑_i≠ j |ρ̅_ij|.
Hence, Eq. (<ref>) serves as a measure for genuine quantum superposition since it quantifies all types of superposition. On the other hand, Eq. (<ref>) only quantifies inter-basis superposition. It is worth noting that the biorthogonality framework presented here also facilitates consistent generalizations of conventional quantum mechanics for finite-dimensional systems <cit.>. These generalizations incorporate non-Hermitian observables with complete, yet non-orthogonal, eigenstates.
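As a small numerical illustration of the difference between the two measures (this snippet is ours and is not taken from the paper; the two-state example and its overlap value are arbitrary choices), one can compare l_1[ρ] and l_1[ρ_BO] = l_1[ρ S] directly:

import numpy as np

def l1_nonorthogonal(rho):
    # inter-basis superposition: off-diagonal magnitudes of the coefficient matrix rho
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def l1_biorthogonal(rho, S):
    # genuine superposition: the same functional applied to rho_BO = rho S
    rho_bo = rho @ S
    return np.abs(rho_bo).sum() - np.abs(np.diag(rho_bo)).sum()

# Two nonorthogonal states with overlap s, mixed with equal probabilities:
s = 0.4
S = np.array([[1.0, s], [s, 1.0]])
rho = 0.5 * np.eye(2)            # "superposition-free" from the nonorthogonal viewpoint
print(l1_nonorthogonal(rho))     # 0.0
print(l1_biorthogonal(rho, S))   # 0.4: the superposition hidden in the overlap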
Superposition in aromaticity— Let us explore the application of the superposition measures discussed thus far to analyze the quantum nature of chemical bonding phenomena. Specifically, our focus will be on examining some representative aromatic and antiaromatic molecules presented in Tables <ref>, <ref>, and <ref>. These molecules exhibit planar and cyclic structures, wherein the electrons involved in the formation of π-bonds become delocalized across the atoms within the ring. This electron delocalization greatly affects the stability of the entire molecule and gives rise to various aromatic or antiaromatic properties. We can conceptualize it as a superposition that takes place within the cyclic π-subspace of the molecular electronic system. At this point, the question arises: what is the nature of this superposition? Does it correspond to inter-basis superposition, or is it a genuine superposition?
The electronic ground state of the molecules under investigation can be described as a linear combination of nonorthogonal states known as Slater determinants. These determinants are constructed using the occupation of p_z atomic orbitals (AOs) that are arranged in a circular manner and form the π-subspace of the molecular electronic system (for additional information, please refer to Appendix <ref>). The overlaps between the Slater determinants are solely determined by the overlaps between the AOs. Similarly, the duals of the Slater determinants are formed by the duals of the AOs. In the field of quantum chemistry, these AOs and their duals are commonly referred to as biorthogonal orbitals <cit.>. In this context, the genuine quantum superposition present in the molecular electronic state can be understood as the quantum superposition shared between the biorthogonal AOs.
Today, we recognize the existence of various types of aromaticity. Among the oldest and simplest is Hückel aromaticity. According to Hückel rule, monocyclic conjugated hydrocarbon, also known as annulenes (C_nH_n), with D_nh symmetry are deemed aromatic if they contain 4 n + 2 π-electrons. Conversely, those with 4 n π-electrons are classified as antiaromatic. The triangular structures in Table <ref> are the smallest molecular systems that follow this rule. According to our results, when cyclopropene is converted from its nonaromatic form (C_3H_3) to either the aromatic or antiaromatic ionic forms, the amount of superposition possessed by the molecule increases. While the increase in the inter-basis superposition shared between nonorthogonal Slater determinants is more pronounced in the antiaromatic case (C_3H_3^-), the enhancement in aromaticity (C_3H_3^+) is more evident in the genuine superposition shared between biorthogonal Slater determinants.
The following molecules under consideration are pentagonal structures represented by C_4H_4X (see Table <ref>). The π-electronic structures of the cyclopentadienyl anion (C_5H_5^-), pyrrole (C_4H_5N), and furan (C_4H_4O) consist of 5 orbitals and 6 electrons. Based on Hückel's rule, the cyclopentadienyl anion is classified as aromatic. Also, pyrrole and furan are known as hetero-aromatic molecules. The expected order of aromaticity among these three molecules is as follows: cyclopentadienyl anion > pyrrole > furan <cit.>. Likewise, the cyclopentadienyl cation (C_5H_5^+) is considered antiaromatic according to Hückel's rule. Borole (C_4H_5B), which also has 5 orbitals and 4 electrons in its π-electronic structure, exhibits less antiaromaticity compared to the cyclopentadienyl cation <cit.>. Our findings indicate that both l_1[ρ] and l_1[ρ_BO] effectively quantify the extent of electron delocalization in the π-electronic structures of these five-membered molecules, consistent with the expected order of aromaticity and antiaromaticity among them.
Now, let us move on to a collection of molecules presented in Table <ref> and characterized by a hexagonal π-electronic structure comprising 6 orbitals and 6 electrons (see Appendix <ref> for more detail). These molecules are commonly classified as aromatic in the literature, with their expected order of aromaticity as follows: benzene (C_6H_6) > pyridine (C_5H_5N) > pyridazine (C_4H_4N_2) ≈ pyrazine (C_4H_4N_2) ≈ pyrimidine (C_4H_4N_2) > triazine (C_3H_3N_3) > hexazine (N_6) > borazine (B_3N_3H_6) <cit.>.
Evaluating the order above serves as one of the tests used to assess the effectiveness of electron delocalization-based aromaticity descriptors. However, even the most successful descriptors fail to accurately predict the complete expected trend. For instance, the para-delocalization index (PDI) considers pyrazine, pyridazine, and hexazine more aromatic than benzene <cit.>, which contradicts the convention stating that as carbon atoms in benzene are replaced with nitrogen atoms, the aromaticity should decrease gradually. Similarly, the multicenter indices such as multicenter electron delocalization index (MCI) and I_ring infer that pyridazine is more aromatic than pyridine <cit.>.
Table <ref> demonstrates that the inter-basis superposition measured by l_1[ρ] fails to replicate the expected aromaticity order. In contrast, the genuine quantum superposition quantified by l_1[ρ_BO] achieves a high degree of accuracy in reproducing this order, except for the ordering between triazine and hexazine. This finding suggest that the density matrix representation in the biorthogonal AO basis, ρ_BO, encompasses comprehensive information about the extent of electron delocalization in the π-structure of an aromatic molecular system.
Besides, the deviation of our results from the conventional order, in which the π-electronic structure of triazine is expected to be more delocalized than that of hexazine, warrants further investigation. We previously mentioned that hexazine appears to be more aromatic than triazine according to certain aromaticity measures like PDI. Our calculations may yield a similar finding because, despite having more nitrogen atoms, hexazine possesses a more symmetric structure than triazine. In a sense, when we examine the overlaps between Slater determinants, we may measure not only the amount of electron delocalization but also the uniformity of its distribution. This observation is supported by the ordering of pyrazine > pyrimidine > pyridazine, as indicated in Table <ref>. Although these three molecules have an equal number of nitrogen atoms, the hydrocarbon pathways connecting the two nitrogen atoms exhibit symmetry in pyrazine, while they display complete antisymmetry in pyridazine.
Outlook.— Throughout this study, we have conducted an in-depth exploration of the concept of aromaticity as an application of the RTS. Our primary objective was to quantify the electron delocalization in the ground state π-electronic structures of representative monocyclic molecules in terms of the quantum superposition present among localized AOs. We performed electronic state calculations using the CASSCF method and adopted the mode picture, which is particularly well-suited for characterizing quantum correlations in systems comprising indistinguishable fermionic particles <cit.>. Furthermore, we utilized the l_1 norm to measure the amount of quantum superposition in the electronic density matrices expressed in both nonorthogonal and biorthogonal AO bases. This comprehensive framework enabled us to capture not only the shared quantum superposition between nonorthogonal Slater determinants but also the local quantum superposition embedded within their overlaps.
Our study provides compelling evidence for the accurate reproduction of the expected order of aromaticity in the majority of the molecules we examined. This remarkable achievement is made possible through the genuine quantum superposition displayed by the biorthogonal density matrices. In contrast, the nonorthogonal density matrices, which carry inter-base quantum superposition, prove inadequate in accurately ordering the same molecules. These findings highlight the exceptional success of the biorthogonal framework in capturing all nonclassicality in nonorthogonal systems, as demonstrated by the electron delocalization in molecular systems.
Additionally, our research uncovers the potential applications of chemical systems within the RTS, particularly in technological domains. Numerous experimental studies have already explored the phenomenon of delocalization in these systems <cit.>, providing a foundation for employing similar experimental setups to harness the power of superposition in various areas. This opens up exciting possibilities for leveraging molecular systems in cutting-edge fields such as quantum information, quantum metrology, and quantum cryptology. Moreover, while the biorthogonal extension of the RTS provides a comprehensive framework for understanding electron delocalization in the π-structure of molecular systems, the current superposition measures used in this study fail to differentiate between electron delocalization in aromatic and antiaromatic molecules. Identifying a method to make this distinction could potentially lead us toward a superposition-based measure for aromaticity and even facilitate the development of a quantum resource theory for this phenomenon.
[Dirac(1981)] P. A. M. Dirac, The Principles of Quantum Mechanics, No. 27 (Oxford University Press, 1981).
[Bennett et al.(1993)] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels, Phys. Rev. Lett. 70, 1895 (1993).
[Hu et al.(2023)] X.-M. Hu, Y. Guo, B.-H. Liu, C.-F. Li, and G.-C. Guo, Progress in quantum teleportation, Nat. Rev. Phys., 1 (2023).
[Aberg(2006)] J. Aberg, Quantifying superposition, arXiv preprint quant-ph/0612146 (2006).
[Baumgratz et al.(2014)] T. Baumgratz, M. Cramer, and M. B. Plenio, Quantifying coherence, Phys. Rev. Lett. 113, 140401 (2014).
[Streltsov et al.(2017)] A. Streltsov, G. Adesso, and M. B. Plenio, Colloquium: Quantum coherence as a resource, Rev. Mod. Phys. 89, 041003 (2017).
[Modi et al.(2012)] K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, The classical-quantum boundary for correlations: Discord and related measures, Rev. Mod. Phys. 84, 1655 (2012).
[Adesso et al.(2016)] G. Adesso, T. R. Bromley, and M. Cianciaruso, Measures and applications of quantum correlations, J. Phys. A: Math. Theor. 49, 473001 (2016).
[Dowling and Milburn(2003)] J. P. Dowling and G. J. Milburn, Quantum technology: the second quantum revolution, Philos. Trans. Royal Soc. A 361, 1655 (2003).
[Georgescu and Nori(2012)] I. Georgescu and F. Nori, Quantum technologies: an old new story, Phys. World 25, 16 (2012).
[Jaeger(2018)] L. Jaeger, The Second Quantum Revolution (Springer International Publishing, 2018).
[Horodecki et al.(2009)] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, Rev. Mod. Phys. 81, 865 (2009).
[Pezzè et al.(2018)] L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Quantum metrology with nonclassical states of atomic ensembles, Rev. Mod. Phys. 90, 035005 (2018).
[Pirandola et al.(2020)] S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, et al., Advances in quantum cryptography, Adv. Opt. Photonics 12, 1012 (2020).
[Ollivier and Zurek(2001)] H. Ollivier and W. H. Zurek, Quantum discord: a measure of the quantumness of correlations, Phys. Rev. Lett. 88, 017901 (2001).
[Henderson and Vedral(2001)] L. Henderson and V. Vedral, Classical, quantum and total correlations, J. Phys. A: Math. Gen. 34, 6899 (2001).
[Modi et al.(2010)] K. Modi, T. Paterek, W. Son, V. Vedral, and M. Williamson, Unified view of quantum and classical correlations, Phys. Rev. Lett. 104, 080501 (2010).
[Bera et al.(2017)] A. Bera, T. Das, D. Sadhukhan, S. S. Roy, A. S. De, and U. Sen, Quantum discord and its allies: a review of recent progress, Rep. Prog. Phys. 81, 024001 (2017).
[Theurer et al.(2017)] T. Theurer, N. Killoran, D. Egloff, and M. B. Plenio, Resource theory of superposition, Phys. Rev. Lett. 119, 230401 (2017).
[Pusuluk(2022)] O. Pusuluk, Unified view of quantum superposition and quantum indistinguishability, arXiv preprint arXiv:2210.02398 (2022).
[Pauling(1960)] L. Pauling, The Nature of the Chemical Bond and the Structure of Molecules and Crystals, 3rd ed. (Cornell University Press, 1960).
[Helgaker et al.(2013)] T. Helgaker, P. Jorgensen, and J. Olsen, Molecular Electronic-Structure Theory (John Wiley & Sons, 2013).
[Weinhold(1999)] F. Weinhold, Chemical bonding as a superposition phenomenon, J. Chem. Educ. 76, 1141 (1999).
[Weinhold and Klein(2014)] F. Weinhold and R. A. Klein, What is a hydrogen bond? Resonance covalency in the supramolecular domain, Chem. Educ. Res. Pract. 15, 276 (2014).
[Frenking and Shaik(2014)] G. Frenking and S. Shaik, The Chemical Bond: Fundamental Aspects of Chemical Bonding (John Wiley & Sons, 2014).
[McQuarrie(2008)] D. A. McQuarrie, Quantum Chemistry (University Science Books, 2008).
[Kauzmann(2013)] W. Kauzmann, Quantum Chemistry: An Introduction (Elsevier, 2013).
[Frenking and Krapp(2007)] G. Frenking and A. Krapp, Unicorns in the world of chemical bonding models, J. Comput. Chem. 28, 15 (2007).
[Kekulé(1865)] A. Kekulé, Sur la constitution des substances aromatiques, Bull. Soc. Chim. Fr. 3, 98 (1865).
[Martín and Scott(2015)] N. Martín and L. T. Scott, Challenges in aromaticity: 150 years after Kekulé's benzene, Chem. Soc. Rev. 44, 6397 (2015).
[Solà(2016)] M. Solà, Aromaticity, Encycl. Phys. Org. Chem., 1 (2016).
[Solà(2017)] M. Solà, Why aromaticity is a suspicious concept? Why?, Front. Chem. 5, 22 (2017).
[Merino et al.(2023)] G. Merino, M. Solà, I. Fernández, C. Foroutan-Nejad, P. Lazzeretti, G. Frenking, H. L. Anderson, D. Sundholm, F. P. Cossío, M. A. Petrukhina, et al., Aromaticity: Quo vadis, Chem. Sci. (2023).
[Minkin(1999)] V. I. Minkin, Glossary of terms used in theoretical organic chemistry, Pure Appl. Chem. 71, 1919 (1999).
[Chen et al.(2005)] Z. Chen, C. S. Wannere, C. Corminboeuf, R. Puchta, and P. v. R. Schleyer, Nucleus-independent chemical shifts (NICS) as an aromaticity criterion, Chem. Rev. 105, 3842 (2005).
[Brody(2013)] D. C. Brody, Biorthogonal quantum mechanics, J. Phys. A: Math. Theor. 47, 035305 (2013).
[Mostafazadeh(2010)] A. Mostafazadeh, Pseudo-Hermitian representation of quantum mechanics, Int. J. Geom. Methods Mod. Phys. 7, 1191 (2010).
[Ju et al.(2019)] C.-Y. Ju, A. Miranowicz, G.-Y. Chen, and F. Nori, Non-Hermitian Hamiltonians and no-go theorems in quantum information, Phys. Rev. A 100, 062118 (2019).
[Moshinsky and Seligman(1971)] M. Moshinsky and T. Seligman, Group theory and second quantization for nonorthogonal orbitals, Ann. Phys. 66, 311 (1971).
[Gouyet(1973)] J. F. Gouyet, Use of biorthogonal orbitals in calculation by perturbation of molecular interactions, J. Chem. Phys. 59, 4637 (1973).
[Cantu et al.(1975)] A. A. Cantu, D. J. Klein, F. A. Matsen, and T. H. Seligman, A second quantized formulation of valence bond theory, Theor. Chim. Acta 38, 341 (1975).
[Malmqvist(1986)] P. Å. Malmqvist, Calculation of transition density matrices by nonunitary orbital transformations, Int. J. Quantum Chem. 30, 479 (1986).
[Solà et al.(2022)] M. Solà, A. I. Boldyrev, M. K. Cyrañski, T. M. Krygowski, and G. Merino, Aromaticity and Antiaromaticity: Concepts and Applications (John Wiley & Sons, 2022).
[Cyrański(2005)] M. K. Cyrański, Energetic aspects of cyclic pi-electron delocalization: evaluation of the methods of estimating aromatic stabilization energies, Chem. Rev. 105, 3773 (2005).
[Feixas et al.(2008)] F. Feixas, E. Matito, J. Poater, and M. Solà, On the performance of some aromaticity indices: a critical assessment using a test set, J. Comput. Chem. 29, 1543 (2008).
[Wiseman and Vaccaro(2003)]wiseman2003entanglement
author author H. M. Wiseman and author J. A. Vaccaro, title title Entanglement of
indistinguishable particles shared between two parties, https://doi.org/10.1103/PhysRevLett.91.097902 journal
journal Phys. Rev. Lett. volume 91, pages 097902 (year 2003)NoStop
[Benatti et al.(2020)Benatti, Floreanini, Franchini, and Marzolino]benatti2020entanglement
author author F. Benatti, author R. Floreanini,
author F. Franchini, and author U. Marzolino, title title Entanglement in indistinguishable particle
systems, https://doi.org/10.1016/j.physrep.2020.07.003 journal journal Phys. Rep. volume
878, pages 1 (year 2020)NoStop
[Hornberger et al.(2012)Hornberger, Gerlich, Haslinger,
Nimmrichter, and Arndt]Hornberger2012
author author K. Hornberger, author S. Gerlich,
author P. Haslinger, author S. Nimmrichter, and author M. Arndt, title
title Colloquium: Quantum interference of clusters and
molecules, https://doi.org/10.1103/revmodphys.84.157 journal journal Rev. Mod. Phys. volume
84, pages 157 (year 2012)NoStop
[Brand et al.(2018)Brand,
Stickler, Knobloch, Shayeghi,
Hornberger, and Arndt]Brand2018
author author C. Brand, author B. A. Stickler,
author C. Knobloch, author A. Shayeghi, author
K. Hornberger, and author
M. Arndt, title title Conformer selection by matter-wave interference, https://doi.org/10.1103/PhysRevLett.121.173002 journal
journal Phys. Rev. Lett. volume 121, pages 173002 (year 2018)NoStop
[Brand et al.(2020)Brand,
Kiałka, Troyer, Knobloch,
Simonović, Stickler, Hornberger, and Arndt]Brand2020
author author C. Brand, author F. Kiałka,
author S. Troyer, author C. Knobloch, author
K. Simonović, author
B. A. Stickler, author
K. Hornberger, and author
M. Arndt, title title Bragg diffraction of large organic molecules, https://doi.org/10.1103/PhysRevLett.125.033604 journal
journal Phys. Rev. Lett. volume 125, pages 033604 (year 2020)NoStop
[Hiberty and Leforestier(1978)]hiberty1978expansion
author author P. Hiberty and author C. Leforestier, title title Expansion of
molecular orbital wave functions into valence bond wave functions. a
simplified procedure, https://doi.org/10.1021/ja00475a007
journal journal J. Am. Chem. Soc. volume 100, pages 2012 (year
1978)NoStop
[Shaik and Hiberty(2007)]shaik2007chemist
author author S. S. Shaik and author P. C. Hiberty, @noop title A chemist's guide to
valence bond theory (publisher John Wiley & Sons, year 2007)NoStop
[Johnson(2002)]johnson2013nist
author author R. Johnson, https://doi.org/10.18434/T47C7Z title
Computational chemistry comparison and benchmark database, nist standard
reference database 101 (year 2002)NoStop
[Frisch and et al.(2009)]g09
author author M. J. Frisch and author et al., @noop
title Gaussian 09 Revision E.01 (year 2009), note (Gaussian Inc., Wallingford CT)NoStop
[Sun et al.(2018)Sun,
Berkelbach, Blunt, Booth,
Guo, Li, Liu, McClain, Sayfutyarova, Sharma et al.]sun2018pyscf
author author Q. Sun, author T. C. Berkelbach,
author N. S. Blunt, author G. H. Booth, author
S. Guo, author Z. Li, author J. Liu, author J. D. McClain,
author E. R. Sayfutyarova,
author S. Sharma, et al., title title Pyscf: the python-based
simulations of chemistry framework, https://doi.org/10.1002/wcms.1340 journal journal Wiley Interdiscip. Rev. Comput. Mol. Sci. volume 8, pages e1340 (year
2018)NoStop
[Sun et al.(2020)Sun,
Zhang, Banerjee, Bao,
Barbry, Blunt, Bogdanov,
Booth, Chen, Cui et al.]sun2020recent
author author Q. Sun, author X. Zhang, author S. Banerjee, author
P. Bao, author M. Barbry, author N. S. Blunt, author N. A. Bogdanov, author G. H. Booth,
author J. Chen, author
Z.-H. Cui, et al., title title Recent developments in the pyscf program
package, https://doi.org/10.1063/5.0006074 journal
journal J. Chem. Phys. volume 153, pages 024109 (year 2020)NoStop
§ CONSTRUCTION OF ELECTRONIC STATES
Consider a molecular system composed of N molecular orbitals (MOs) and n_e electrons. The dominant configuration of the system, known as the Hartree-Fock (HF) state |Ψ_HF⟩, can be expressed as follows:
|Ψ_HF⟩ = |ψ_n_e⟩∧…∧|ψ_2⟩∧|ψ_1⟩
≡ |11… 10… 0 ⟩_ψ_1 ψ_2 …ψ_n_eψ_n_e+1…ψ_2N.
In the equation above, each MO consists of two spin modes, referred to as molecular spin-orbitals. For instance, |ψ_2μ-1⟩ and |ψ_2μ⟩ represent the up and down spin-orbitals, respectively, of the μth MO. Additionally, to ensure the anti-symmetric algebra of the Fock space, the joint electronic system is constructed using the wedge product (∧) rather than the tensor product (⊗).
Each system configuration corresponds to a Slater determinant, which is an antisymmetric product of occupied molecular spin-orbitals. For example, |Ψ_HF⟩≡ |ψ_1 ψ_2 …ψ_n_e| represents the occupied MOs in the HF state. In this study, our specific focus lies on the π-electronic sub-structure, which is characterized solely by the π-MOs and the π-electrons occupying them. Therefore, all the relevant configurations can be obtained by exciting the π-electrons from the reference state |Ψ_HF⟩ within the π-MOs. Subsequently, the electronic ground state of the π-structure, denoted as |Ψ_π⟩, can be expressed as a linear combination of these configurations:
|Ψ_π⟩ = ∑_s⃗λ_s⃗ | s_1 s_2 … s_2N_π⟩_ψ_1^πψ_2^π…ψ_2N_π^π.
Here, N_π and n_e^π represent the number of π-MOs and π-electrons, respectively. The set {ψ_μ^π} denotes the π-molecular spin-orbital basis. Each s_μ∈{0,1}, and the sum of s_μ from μ=1 to 2N_π equals n_e^π. A specific electronic configuration is represented by the vector s⃗={s_1,s_2,...,s_2N_π}. The coefficients λ_s⃗ correspond to the different configurations and are determined by selecting the π-MOs and π-electrons as the active space and optimizing the π-MOs within this space using the CASSCF method.
The subsequent step involves rewriting |Ψ_π⟩ in the nonorthogonal AO basis. To accomplish this, we leverage the fact that for the planar cyclic molecules we consider, each π-molecular spin-orbital ψ_μ^π can be expressed as a linear combination of the p-atomic orbitals {p_ν^π} involved in the π-bonds, as shown below:
ψ_μ^π = ∑_ν C_νμ p_ν^π.
Here, C_νμ represents the contribution of the νth atom's p-AO p_ν^π to the μth π-MO. The AO coefficients of MOs can be obtained from a CASSCF calculation.
Once we have obtained the coefficients for all π-MOs in Eq. (<ref>), we can construct the coefficient matrix C. This matrix contains the coefficients of each p-AO in each π-MO. By utilizing this matrix, we can represent each configuration in Eq. (<ref>) as a linear combination of configurations in the p-AO basis <cit.>. Consequently, we can expand the π-electronic structure state in Eq. (<ref>) using the π-AO basis {p_ν^π} in the following manner:
|Ψ_π^AO⟩ = ∑_s⃗λ_s⃗^AO |s_1 s_2 … s_2N_π⟩_p_1^π p_2^π… p_2N_π^π,
where the configuration label runs over the 2N_π atomic spin-orbitals, with χ_2ν-1 and χ_2ν denoting the up and down spin-orbitals of the νth atomic orbital p_ν^π, respectively, and λ_s⃗^AO represents the expansion coefficient of the electronic state in the p-AO basis.
To quantify the inter-basis superposition, we utilize Eq. (<ref>), specifically l_1[|Ψ_π^AO⟩⟨Ψ_π^AO| ]. In order to represent the π-electronic structure state in the biorthogonal p-AO basis, we begin by obtaining the overlap matrix S. The elements of S indicate the overlaps of electron configurations in the AO basis, determined through the inner product of configurations:
S_km=⟨ k_1k_2… k_2N_π | m_1m_2… m_2N_π⟩_p_1^π p_2^π… p_2N_π^π.
Once the overlap matrix is obtained, it becomes straightforward to derive the biorthogonal density matrix representation ρ_BO for |Ψ_π^AO⟩, using the relationship ρ_BO=|Ψ_π^AO⟩⟨Ψ_π^AO| S.
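As a minimal illustration of this construction, the following numpy sketch assembles S from the overlaps between determinant configurations built on a common set of nonorthogonal spin-orbitals and then forms ρ_BO. It is a sketch under simplifying assumptions (a randomly generated stand-in spin-orbital overlap matrix and placeholder CI coefficients), not the code used in this work:

import numpy as np
from itertools import combinations

def config_overlap(s_so, occ_k, occ_m):
    # <k|m> for two determinants built from the same nonorthogonal spin-orbital set:
    # the determinant of the occupied-occupied block of the spin-orbital overlap matrix.
    return np.linalg.det(s_so[np.ix_(list(occ_k), list(occ_m))])

def overlap_matrix(s_so, configs):
    # S_km for every pair of occupation patterns (tuples of occupied spin-orbital indices).
    n = len(configs)
    S = np.empty((n, n))
    for i, k in enumerate(configs):
        for j, m in enumerate(configs):
            S[i, j] = config_overlap(s_so, k, m)
    return S

# Toy setting: 6 AO spin-orbitals (3 p-AOs x 2 spins) and 4 electrons.
n_so, n_el = 6, 4
rng = np.random.default_rng(0)
a = rng.normal(size=(n_so, n_so))
s_so = a @ a.T / n_so + np.eye(n_so)              # stand-in positive-definite overlap matrix
configs = list(combinations(range(n_so), n_el))   # all occupation patterns

S = overlap_matrix(s_so, configs)
lam_ao = rng.normal(size=len(configs))            # placeholder AO-basis expansion coefficients
lam_ao /= np.sqrt(lam_ao @ S @ lam_ao)            # normalize <Psi|Psi> = lam^T S lam = 1

rho = np.outer(lam_ao, lam_ao)                    # |Psi><Psi| in the AO configuration basis
rho_bo = rho @ S                                  # biorthogonal representation rho_BO
print(np.trace(rho_bo))                           # ~1 by construction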
§ COMPUTATIONAL DETAILS
Except for hexazine (N_6), borole (C_4H_5B), and the cyclopentadienyl cation (C_5H_5^+), the optimized geometries of all molecules were obtained at the B3LYP/aug-cc-pVTZ level from the CCCBDB database <cit.>. The remaining geometries were optimized in Gaussian 09 <cit.> at the same level of theory. All HF and CASSCF calculations were performed with the PySCF program package <cit.> using the STO-6G minimal basis set.
In the CASSCF calculations of the 6-MR compounds, each molecule's π-electronic structure consists of 6 π-molecular orbitals (MOs), 3 of bonding and 3 of anti-bonding character, and 6 electrons within these orbitals. Similarly, the AO basis for each molecule is composed of 6 p-AOs, one contributed by each distinct atom in the ring to the π-electronic structure.
In the MO (AO) basis, the π-electronic structures of the cyclopentadienyl anion (C_5H_5^-), pyrrole (C_4H_5N), and furan (C_4H_4O) consist of 5 π-MOs (p-AOs) and 6 electrons, whereas those of borole (C_4H_5B) and the cyclopentadienyl cation (C_5H_5^+) comprise 5 π-MOs (p-AOs) and 4 electrons. In pyrrole (furan), each carbon atom contributes one p-AO and one electron to the π-system. In contrast, the nitrogen (oxygen) atom, which undergoes sp^2 hybridization and places a lone electron pair in a p-AO, contributes a p-AO and its 2 electrons to the π-system. In borole, however, the p-AO contributed by the boron atom to the π-electronic system is empty.
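For concreteness, a minimal PySCF sketch of the workflow described above is given below for benzene. The geometry values are illustrative only, and the assumption that the six optimized active orbitals are exactly the π MOs is a simplification for illustration, not a statement about the actual calculations reported here:

from pyscf import gto, scf, mcscf

# Idealized planar benzene geometry (Angstrom); values are illustrative only.
mol = gto.M(
    atom="""
    C  1.397  0.000 0.0; C  0.699  1.210 0.0; C -0.699  1.210 0.0;
    C -1.397  0.000 0.0; C -0.699 -1.210 0.0; C  0.699 -1.210 0.0;
    H  2.481  0.000 0.0; H  1.241  2.149 0.0; H -1.241  2.149 0.0;
    H -2.481  0.000 0.0; H -1.241 -2.149 0.0; H  1.241 -2.149 0.0
    """,
    basis="sto-6g",
)
mf = scf.RHF(mol).run()

# CASSCF(6,6): 6 electrons in 6 active orbitals (assumed here to converge to the pi MOs).
mc = mcscf.CASSCF(mf, 6, 6).run()

ncore, ncas = mc.ncore, mc.ncas
C_active = mc.mo_coeff[:, ncore:ncore + ncas]   # AO coefficients C of the active (pi) MOs
ci_coeffs = mc.ci                               # CI coefficients lambda_s in the MO basis
s_ao = mol.intor("int1e_ovlp")                  # AO overlap matrix used in the biorthogonal step
print(C_active.shape, s_ao.shape)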
|
http://arxiv.org/abs/2307.02092v1
|
20230705081017
|
Make A Long Image Short: Adaptive Token Length for Vision Transformers
|
[
"Qiqi Zhou",
"Yichen Zhu"
] |
cs.LG
|
[
"cs.LG",
"cs.CV"
] |
Q. Zhou, Y. Zhu.
Shanghai University of Electric Power, Shanghai, China Midea Group, Shanghai, China
{zhouqq31, zhuyc25}@midea.com
[* Work done during internships at Midea Group.
Corresponding authors.]
Make A Long Image Short: Adaptive Token Length for Vision Transformers
Qiqi Zhou 1,* Yichen Zhu 2,
August 1, 2023
======================================================================
The vision transformer is a model that breaks down each image into a sequence of tokens with a fixed length and processes them similarly to words in natural language processing. Although increasing the number of tokens typically results in better performance, it also leads to a considerable increase in computational cost. Motivated by the saying "A picture is worth a thousand words," we propose an innovative approach to accelerate the ViT model by shortening long images. Specifically, we introduce a method for adaptively assigning token length for each image at test time to accelerate inference speed. First, we train a Resizable-ViT (ReViT) model capable of processing input with diverse token lengths. Next, we extract token-length labels from ReViT that indicate the minimum number of tokens required to achieve accurate predictions. We then use these labels to train a lightweight Token-Length Assigner (TLA) that allocates the optimal token length for each image during inference. The TLA enables ReViT to process images with the minimum sufficient number of tokens, reducing token numbers in the ViT model and improving inference speed. Our approach is general and compatible with modern vision transformer architectures, significantly reducing computational costs. We verified the effectiveness of our methods on multiple representative ViT models on image classification and action recognition.
§ INTRODUCTION
The transformer has achieved remarkable success in computer vision since the introduction of ViT <cit.>. It has demonstrated impressive performance compared to convolutional neural networks (CNNs) on various visual domains, including image classification <cit.>, object detection <cit.>, semantic segmentation <cit.>, and action recognition <cit.>, using both supervised and self-supervised <cit.> training configurations. Despite the development of ViT models, their deployment remains a challenge due to the high computational cost associated with them.
Figure: The motivation for our approach. While some images (right) may need many tokens to predict their category, some images are easy to recognize. Thus, only a small number of tokens is sufficient to classify them correctly.
Accelerating ViT is a crucial yet understudied area. While many techniques like pruning, distillation, and neural architecture search have been applied to accelerate CNNs, these cannot be directly applied to ViT due to significant differences between the models <cit.>. As the attention module in the transformer computes the fully-connected relations among all input patches <cit.>, the computational cost becomes quadratic with respect to the length of the input sequence <cit.>. Consequently, the transformer can be computationally expensive, particularly for longer input sequences. In the ViT model, images are divided into a fixed number of tokens; following conventional practice <cit.>, an image is represented by 16 × 16 tokens. We aim to reduce the computational complexity of ViT by reducing the number of tokens used to split the images. Our motivation is depicted in Figure <ref>, which shows three examples predicted by individually trained DeiT-S models <cit.> with different token lengths. The checkmark denotes correct prediction, and the cross denotes the wrong prediction. We observe that some "easy-to-classify" images only require a few tokens to determine their category accurately, while some images require more tokens to make the right prediction. These observations motivate us to reduce the computational complexity of the existing ViT model by accurately classifying the input using the minimum possible number of tokens.
In an ideal scenario, we would know the minimum number of tokens required to accurately predict an image, and we could train a model to assign the optimal token length to the ViT model. However, training multiple ViT models, each with a fixed token length, would be computationally infeasible. To address this, we propose a modification to the transformer architecture, changing it from "static" to "dynamic," enabling the ViT model to adaptively process images with varying token lengths. This dynamic transformer, called Resizable-ViT (ReViT), identifies the minimum token length required to achieve correct predictions for each image. We then train a lightweight Token-Length Assigner (TLA) to predict the appropriate token length for a given image, with the label obtained from the ReViT. Consequently, the ReViT can process images with lower computational costs based on the assigned token length.
The primary challenge of our approach is training the ReViT to enable the ViT model to process images of any size provided by the TLA. To tackle this challenge, we introduce a token length-aware layer normalization that switches the normalization statistics for each type of token length, and a self-distillation module that enhances the model's performance when using short token lengths in ReViT. Additionally, the ViT model needs to see the images with the corresponding token lengths beforehand to handle various token lengths effectively. However, as the number of predefined token-length choices increases, the training cost linearly increases. To overcome this, we introduce a parallel computing strategy for efficient training that makes the ReViT training almost as inexpensive as a vanilla ViT model's training.
We showcase the efficacy of our approach on several prominent ViT models, such as DeiT <cit.> and LV-ViT <cit.> for image classification, and TimesFormer <cit.> for video recognition. Our experiments demonstrate that our method can significantly reduce computational costs while maintaining performance levels. For instance, we achieve a 50% acceleration in DeiT-S <cit.> model with an accuracy reduction of only 0.1%. On action recognition, the computational cost of TimesFormer <cit.> can be reduced up to 33% on Kinetic 400 with only a 0.5% loss in recognition accuracy.
§ RELATED WORKS
Vision transformer. ViTs have recently gained much attention in computer vision due to their strong capability to model long-range relations. Many attempts have been made to integrate long-range modeling into CNNs, such as non-local networks <cit.> and relation networks <cit.>, among others. The Vision Transformer (ViT) <cit.> introduced a set of pure Transformer backbones for image classification, and its follow-ups soon modified the vision transformer to dominate many downstream computer vision tasks, such as object detection <cit.>, semantic segmentation <cit.>, action recognition <cit.>, 2D/3D human pose estimation <cit.>, 3D object detection <cit.>, and even self-supervision <cit.>. ViT has shown great potential as an alternative backbone to convolutional neural networks.
Dynamic vision transformer. The over-parameterized model is known to have many attractive merits and can achieve better performance than smaller models. However, in real-world scenarios, computational efficiency is critical as executed computation is translated into power consumption or carbon emission. To address this issue, many works have attempted to reduce the computational cost of Convolutional Neural Networks (CNNs) through methods such as neural architecture search <cit.>, knowledge distillation <cit.>, and pruning <cit.>.
Recent work has shift its attention to reduce the number of tokens used for inference, as the number of tokens can be a computational bottleneck to the vision transformer. There are two major approaches: unstructured token sparsification and structured token division. The majority of works, including PatchSlim <cit.>, TokenSparse <cit.>, GlobalEncoder <cit.>, IA-RED <cit.>, and Tokenlearner <cit.>, focus on the former. TokenLearner <cit.> uses an MLP to reduce the number of tokens. TokenPooling <cit.> merges tokens via a k-mean based algorithm. TokenMerge <cit.> calculates the token similarity and merges tokens via bipartite soft matching.
They aim to remove uninformative tokens, such as those that learn features from the background of the image, thereby boosting inference speed by reserving only informative tokens. These approaches typically need to progressively reduce the number of tokens based on the inputs and can be performed either jointly with ViT training or afterward. However, pruning tokens sparsely can bring unstable training issues, especially when the model is huge <cit.>.
The latter, structured token division, is the most relevant to our research. Wang et al. <cit.> proposed the Dynamic Vision Transformer (DVT) to dynamically determine the number of patches into which an image is divided. They employed a cascade of ViT models, with each ViT responsible for a specific token length. The cascade makes a sequential decision and stops inference for an input image once it has sufficient confidence in the prediction at the current token length. In contrast to DVT <cit.>, our method is more practical and accessible, as it only requires a single ViT model. Additionally, we focus on how to accurately determine the minimum token length the transformer needs to provide a correct prediction for each image.
§ METHODOLOGY
The vision transformers treat an image as a sentence by dividing the 2D image into 1D tokens and modeling the long-range dependencies between them using the multi-head self-attention mechanism. However, the self-attention is considered the computational bottleneck in the transformer model, as its computational cost increases quadratically with the number of incoming tokens. As mentioned earlier, our approach is motivated by the observation that many easy-to-recognize images do not require 16×16 tokens <cit.> to be correctly classified. Therefore, computational costs can be reduced by processing fewer tokens on easy images while using more tokens on hard images. It is worth noting that the key to a successful input-dependent token-adaptive ViT model is to determine precisely the minimum number of tokens required to accurately classify the image.
To achieve our goal, we propose a two-stage model training approach. In the first stage, we train a ViT model that can handle images with any predefined token lengths. Usually, a single ViT model can only handle one token length. We describe the model design and training strategy of this ViT model in detail in Section <ref>. In the second stage, we train a model to determine the appropriate token length for each image. We first obtain the token-length label, which represents the minimum number of tokens required for accurate classification, from the previously trained ViT model. Then, we train a Token-Length Assigner (TLA) using the training data, where the input is an image and the label is the corresponding token length. This decoupled procedure allows the TLA to make a better decision regarding the number of tokens required for each image. During inference, the TLA guides the ViT model on the optimal number of tokens required for accurate classification based on the input. The complete training and testing process is illustrated in Figure <ref>.
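A rough sketch of the resulting test-time pipeline is given below; the token_length keyword of the ReViT forward pass and the TLA interface are notational assumptions made for illustration, not the released implementation:

import torch

@torch.no_grad()
def adaptive_inference(tla, revit, images, token_lengths):
    # Stage 1: the lightweight TLA predicts an index into the predefined token-length list.
    idx = tla(images).argmax(dim=1)
    # Stage 2: ReViT processes each image with its assigned (minimum sufficient) token length.
    logits = [revit(img.unsqueeze(0), token_length=token_lengths[int(i)])
              for img, i in zip(images, idx)]
    return torch.cat(logits, dim=0)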
In the following, we first introduce the Token-Length Assigner, then present the training method for the Resizable-ViT model and the improved techniques.
§.§ Token-Length Assigner
The purpose of the Token-Length Assigner (TLA) is to make accurate predictions based on the feedback from ReViT. TLA training is performed after ReViT. We first define a list of token lengths L = [l_1, l_2, …, l_n] in descending order. For simplicity, we use a single number to represent the token length, such as L = [14 × 14, 10 × 10, 7 × 7]. The model with a token length of 7 × 7 has the lowest computational cost among the three token lengths.
In order to train the Token-Length Assigner (TLA), it is necessary to obtain a token-length label from the ReViT model at convergence. For an image, the token-length label is defined as the minimum token length required by the ViT model to accurately classify that image. The inference speed of the ReViT model, denoted by M, can be ranked as Speed(M_l_1) < Speed(M_l_2) < … < Speed(M_l_k), where k = len(L) represents the total number of token-length options. For each input x, we obtain the prediction y_l_i = M_l_i(x) for all i ≤ k. The label of the input x is the smallest token length l_j that still yields a correct prediction, i.e., y_l_j = y^gt while the next smaller token length fails, y_l_j+1≠ y^gt, where gt denotes the ground truth. Therefore, a set of input-output pairs (x, l_j) can be obtained and used to train the TLA. Since token-length assignment is straightforward, the TLA is a lightweight module that introduces minimal computational overhead. Moreover, since unnecessary tokens are removed from the ViT model, the additional overhead is relatively small.
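The label-extraction step can be sketched as follows; it assumes a converged ReViT model whose forward pass takes a token_length argument and a descending list token_lengths, both of which are illustrative assumptions rather than the authors' released code:

import torch

@torch.no_grad()
def token_length_labels(model, loader, token_lengths, device="cuda"):
    # For each image, record the index of the smallest token length at which
    # the converged ReViT still predicts the ground-truth class.
    model.eval()
    all_labels = []
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        # correct[i, j] is True if token_lengths[i] classifies sample j correctly.
        correct = torch.stack([
            model(images, token_length=t).argmax(dim=1).eq(targets)
            for t in token_lengths            # descending, e.g. [14*14, 10*10, 7*7]
        ])
        labels = torch.zeros_like(targets)    # default: largest token length (index 0)
        for i in range(1, len(token_lengths)):
            # move to a smaller token length only while the prediction stays correct
            keep = correct[: i + 1].all(dim=0)
            labels = torch.where(keep, torch.full_like(targets, i), labels)
        all_labels.append(labels.cpu())
    return torch.cat(all_labels)              # training targets for the TLA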
§.§ Resizable-ViT
In this section, we present the Resizable-ViT (ReViT), a dynamic ViT model capable of accurately classifying images with various token lengths. We introduce two techniques that enhance the performance of ReViT and subsequently present the training strategy. Additionally, we offer an efficient training implementation that accelerates the training process of ReViT.
Token-Aware Layer Normalization.
The Layer Normalization (LN/LayerNorm) layer is a widely used normalization technique that accelerates training and improves the generalization of the Transformer architecture. In both natural language processing and computer vision, it is common to adopt an LN layer after the addition in the transformer block. However, as the token length changes, the feature maps of the self-attention matrices and feed-forward networks change as well. Consequently, sharing the same layer across different token lengths leads to inaccurate normalization statistics, which impairs test accuracy. Indeed, we found empirically that LN cannot be shared in ReViT.
To address this issue, we propose a Token-Length-Aware LayerNorm (TAL-LN), which uses an independent LayerNorm for each choice of token length in the predefined token length list. In other words, we use Add & {LN_1, ..., LN_k} as a building block, where k represents the number of predefined token lengths. Each LayerNorm layer calculates layer-wise statistics specifically and learns the parameters of the corresponding feature map. Furthermore, the number of extra parameters in TAL-LN is negligible since the number of parameters in normalization layers typically takes less than one percent of the total model size <cit.>. A brief summary is illustrated in Figure <ref>.
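A minimal sketch of TAL-LN is given below, with one independent LayerNorm per predefined token length; the indexing interface is an assumption made for illustration:

import torch.nn as nn

class TokenLengthAwareLayerNorm(nn.Module):
    # One independent LayerNorm per predefined token length; the active branch
    # is selected by the index of the token length currently being processed.
    def __init__(self, dim, num_token_lengths):
        super().__init__()
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_token_lengths))

    def forward(self, x, length_idx):
        return self.norms[length_idx](x)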
Self-Distillation
It is known that the performance of ViT is strongly correlated with the number of patches, and experiments show that reducing the token length significantly hampers the accuracy of the small-token ViT. Directly optimizing via supervision from the ground truth therefore poses a challenge for the small-token-length sub-model. Motivated by self-distillation, a variant of knowledge distillation in which the teacher can be insufficiently trained or even be the student model itself, we propose a token-length-aware self-distillation (TLSD). In the next section, we will show that the model with the largest token length M_l_1 is always trained first. For M_l_1, the training objective is to minimize the cross-entropy loss ℒ_CE. For the models with other token lengths M_l_i, i ≤ k, i ≠ 1, we use a distillation objective to train the target model:
ℒ_teacher = (1 - λ)ℒ_CE(ϕ(Z_s), y) + λτ^2KL(ϕ(Z_s/τ), ϕ(Z_t/τ))
where Z_s and Z_t are the logits of the student model M_l_i and the teacher model M_l_1, respectively, τ is the distillation temperature, λ is the coefficient balancing the KL loss (Kullback-Leibler divergence) and the CE loss (cross-entropy) on the ground-truth label y, and ϕ is the softmax function. Similar to DeiT, we add a distillation token for the student models. Figure <ref> gives an overview. Notably, this distillation scheme is essentially computation-free: we can directly use the prediction of the model with the largest token length as the training target for the other sub-models, while for the largest-token-length model we use the ground truth.
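A sketch of this objective in PyTorch-like code is shown below; the default values of τ and λ are placeholders, not the settings used in the experiments:

import torch.nn.functional as F

def tlsd_loss(student_logits, teacher_logits, target, tau=1.0, lam=0.5):
    # Hard cross-entropy on the ground truth plus a temperature-scaled KL term
    # against the largest-token-length model (the teacher).
    ce = F.cross_entropy(student_logits, target)
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits.detach() / tau, dim=1),
        reduction="batchmean",
    )
    return (1.0 - lam) * ce + lam * (tau ** 2) * kl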
§.§ Training Strategy
Figure: Efficient training implementation for the Resizable Transformer through parallel computing. All gradients from the replica nodes are synchronized on the node with the largest token length to save communication cost.
To enable the ViT model to adaptively process various token lengths in the predefined choice list, it is necessary to expose it to images with different token lengths. Inspired by batch gradient accumulation, a technique used to overcome the problem of small batch size by accumulating gradient and batch statistics in a single iteration, we propose a mixing token length training. As shown in Algorithm <ref>, a batch of images is processed with different token lengths to compute the loss through feed-forward, and individual gradients are obtained. After looping through all token length choices, the gradients of all parameters calculated by feeding different token lengths are accumulated to update the parameters.
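A simplified sketch of this mixed token-length training loop, in the spirit of Algorithm <ref>, is shown below; the token_length keyword and the tlsd_loss helper from the previous sketch are illustrative assumptions:

import torch.nn.functional as F

def train_one_epoch(model, loader, optimizer, token_lengths, tau=1.0, lam=0.5):
    model.train()
    for images, targets in loader:
        optimizer.zero_grad()
        # The largest token length is supervised by the ground truth and acts as teacher.
        teacher_logits = model(images, token_length=token_lengths[0])
        F.cross_entropy(teacher_logits, targets).backward()
        # Smaller token lengths accumulate gradients from the self-distillation objective.
        for t in token_lengths[1:]:
            student_logits = model(images, token_length=t)
            tlsd_loss(student_logits, teacher_logits.detach(), targets, tau, lam).backward()
        optimizer.step()    # one update with the gradients accumulated over all token lengths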
Efficient Training Implementation.
An issue with the aforementioned training strategy is that the training time increases linearly with the number of predefined token-length choices. To address this issue, we propose an efficient implementation strategy that trades memory cost for training time. As shown in Figure <ref>, we replicate the model, with each replica corresponding to a specific token length. At the end of each iteration, the gradients of the different replicas are synchronized and accumulated. Notably, we always send the gradients of the replicas with smaller token lengths to the replica with the largest token length, since the latter is the training bottleneck; the communication cost of the gradient synchronization step is therefore negligible. The model parameters are then updated through back-propagation. After the parameter update is complete, the main process distributes the learned parameters to the rest of the replicas. These steps are repeated until the end of training, after which all replicas except the model in the main process can be removed. As such, the training time of the Resizable Transformer is reduced from O(k) to O(1), where k is the number of predefined token lengths. Though k is small in practice (e.g., k=3), the computational cost of training k ViTs is high. With the designed parallel computing, the training cost of ReViT is almost the same as that of a vanilla ViT, since the communication cost between replicas is negligible compared to the model training cost. In exchange for fast training, extra computational resources are required for parallel computing.
§ EXPERIMENTS
Implementation details. For image classification, we trained all models on the ImageNet <cit.> training set consisting of around 1.2 million images and reported their accuracy on the 50k test images. The predefined token lengths were set to 14 × 14, 10 × 10, and 7 × 7 by default, with the token length of 4 × 4 excluded due to a significant accuracy drop. We conducted experiments on DeiT-S <cit.> and LV-ViT-S <cit.> using an image resolution of 224 × 224, unless otherwise specified. We followed the training settings and optimization methods described in the original papers of DeiT <cit.> and LV-ViT <cit.>. For LV-ViT, we obtained token labels for smaller token lengths using their proposed method. We also trained the ReViT on resized images with higher resolutions, such as 384 on DeiT-S. To avoid optimization difficulties caused by the large-kernel, large-stride convolutional layer required for patch embedding, we replaced it with consecutive convolutions following the method of Xiao et al. <cit.>. After training the ReViT, we obtained token-length labels for all training data and trained the Token-Length Assigner (TLA), which is a small model (a scaled-down EfficientNet-B0) compared to the ViT. We also included feature-map transfer and attention transfer as part of self-distillation, which we found empirically useful.
We use Something-Something V2 <cit.> to conduct experiments on action recognition. Something-Something V2 is a large-scale video dataset with around 169k videos for training and 20k videos for validation. We follow the training settings of MotionFormer <cit.>. Specifically, two versions of MotionFormer are tested: the default version operates on 16 × 224 × 224 video clips, and a high-spatial-resolution variant operates on 32 × 448 × 448 video clips.
§.§ Experimental Results
Main Results on ImageNet Classification. We present the main results of our ReViT based on DeiT-S and LV-ViT-S in Figure <ref>. Our approach is compared with several models, including DeiT <cit.>, CaiT <cit.>, LV-ViT <cit.>, CoaT <cit.>, Swin <cit.>, Twins <cit.>, Visformer <cit.>, ConViT <cit.>, TNT <cit.>, and EfficientNet <cit.>. The results show that our method achieves a favorable accuracy-throughput trade-off. Specifically, ReViT reduces the computational cost of the baseline counterpart by decreasing the number of tokens used for inference. By increasing the input resolution, we manage to outperform the baseline counterpart at a similar computational cost. We also highlight the experimental results of DVT <cit.> in red. Our method achieves significantly better performance in terms of both accuracy and throughput. We hypothesize that, despite the low FLOPs of DVT, its practical runtime cost is high due to its cascade of multiple ViT models.
Main Results on Video Recognition. One of the core motivations behind ReViT is to address the issue of high computational costs in extremely long token lengths during inference for image classification tasks. To further explore this idea, we investigate the applicability of our method to video recognition tasks, where the token length in transformers is typically much longer than that in image classifiers.
To this end, we train the ReViT-MotionFormer models with ViT-B and ViT-L, two different backbones, and compare them with the baseline models, respectively. The results are presented in Table <ref>. Our method demonstrates a significant speedup over the MotionFormer baseline, with a computational cost reduction of approximately 51% and a 0.1% accuracy increase. By training on larger image resolutions, we correspondingly reduce the model size by 48% with a 0.5% accuracy drop, which is slightly worse than the smaller resolution counterpart. Nonetheless, our experiments demonstrate that ReViT is effective for action recognition tasks.
Visualization of samples with different token-length.
We selected eight classes from the ImageNet validation set and chose three samples from each category, classified as easy, medium, and hard, corresponding to tokens with dimensions of 14 × 14, 10 × 10, and 7 × 7, respectively. The image samples were selected based on the token length assigned by the Token-Length Assigner. The resulting images are displayed in Figure <ref>. Notably, some classes do not have all images filled because less than three samples in the validation set belong to those categories. For example, only one image in the dog class requires the largest token length for classification. We observe that the number of tokens required to predict the category is highly correlated with the object's size. For larger objects, only a few tokens are sufficient to predict their category.
§.§ Ablation Study
Shared patch embedding and position encoding.
We conducted an experiment to evaluate the impact of using shared patch embedding and position encoding. As the token number changes during training, we applied some techniques to enable sharing of both operations. To handle position encoding, we followed the approach of ViT <cit.> and zero-padded the position encoding module whenever the token size changed. This technique was initially used to adjust the positional encoding in the pretrain-finetune paradigm. For shared patch embedding, we used a weight-sharing kernel <cit.>. A large kernel was constructed to process a large patch size, and when the patch size changed, a smaller kernel with shared weight on the center was adopted to flatten the image patch.
As shown in Table <ref>, both shared patch embedding and shared positional encoding decreased the model's accuracy. In particular, the accuracy dropped by nearly 14% for the large token length model when using the shared patch strategy. The shared positional encoding module performed better than shared patch embedding but still significantly impacted the performance of ReViT.
The effect of self-distillation and choice of τ. We conducted experiments to verify the effectiveness of self-distillation in ReViT and investigated the impact of the hyper-parameter τ. We tested two different values of τ, 0.9 and 0.5, for all sub-networks and demonstrated the results in Table <ref>. Without self-distillation, the accuracy on small token lengths was comparable to tokens of size 10 × 10, but significantly worse on tokens of size 7 × 7. When we applied self-distillation with τ = 5, the accuracy of both models increased. To further evaluate the model, we used τ = 5. The higher value of τ negatively impacted the accuracy of the largest token length, dropping the accuracy by around 0.3%, but significantly improving the performance of models with token size 7 × 7. This highlights the necessity of using self-distillation in our scenario and demonstrates the importance of carefully selecting the hyper-parameter τ for optimal performance.
Training cost and Memory Consumption. We compared ReViT with DeiT-S and DVT <cit.> in terms of training cost and memory consumption, as shown in Figure <ref>. ReViT-B denotes the baseline approach of ReViT, while ReViT-E is the efficient implementation. Both ReViT-B and DeiT-S show a linear increase in training cost as the number of token-length choices increases; ReViT-B is cheaper because the backpropagation for the multiple token lengths is merged. The training time of ReViT-E increases only slightly, as the communication cost between the parallel replicas grows.
As for memory consumption (number of parameters) during testing, since our method only has a single ViT where most computational heavy components are shared, the memory cost is slightly higher than the baseline. However, compared to DVT, the increase in the number of parameters with respect to the increasing number of token length choices is negligible. This indicates that our approach is more practical than DVT in terms of both training cost and memory cost. Furthermore, our method is easier to apply to existing ViT models than DVT.
Comparison with DVT. We conducted a further investigation of our proposed method based on DeiT-S and compared it with DVT, which was also developed based on DeiT-S. Figure <ref> shows that our proposed ReViT achieves superior performance compared to DVT. This could be due to our better selection of the number of patches that achieves the best accuracy-speed tradeoff.
§ CONCLUSIONS
This paper aims to reduce the token length to split the image in the ViT model to eliminate unnecessary computational costs. First, we propose the Resizable Transformer (ReViT), which adaptively processes any predefined token size for a given image. Then, we define a Token-Length Assigner to decide the minimum number of tokens that the transformer can use to classify the individual image correctly. Extensive experiments indicate that ReViT can significantly accelerate the state-of-the-art ViT model. Also, compared to the prior SOTA method, our approach achieves better training speed, inference cost, and model performance. Therefore, we believe our paper benefits practitioners who would like to adopt ViT in deployment.
|
http://arxiv.org/abs/2307.00908v1
|
20230703101234
|
Quantum Machine Learning on Near-Term Quantum Devices: Current State of Supervised and Unsupervised Techniques for Real-World Applications
|
[
"Yaswitha Gujju",
"Atsushi Matsuo",
"Rudy Raymond"
] |
quant-ph
|
[
"quant-ph",
"cs.LG",
"stat.ML"
] |
Quantum Machine Learning on Near-Term Quantum Devices: Current State of Supervised and Unsupervised Techniques for Real-World Applications
Yaswitha Gujju, Atsushi Matsuo, Rudy Raymond
============================================================
The past decade has seen considerable progress in quantum hardware in terms of speed, number of qubits, and quantum volume, which is defined as the maximum size of a quantum circuit that can be effectively implemented on a near-term quantum device. Consequently, there has also been a rise in the number of works applying Quantum Machine Learning (QML) on real hardware to attain quantum advantage over classical counterparts. In this survey, our primary focus is on selected supervised and unsupervised learning applications implemented on quantum hardware, specifically targeting real-world scenarios. Our survey explores and highlights the current limitations of QML implementations on quantum hardware. We delve into various techniques to overcome these limitations, such as encoding techniques, ansatz structure, error mitigation, and gradient methods. Additionally, we assess the performance of these QML implementations in comparison to their classical counterparts. Finally, we conclude our survey with a discussion of the existing bottlenecks associated with applying QML on real quantum devices and propose potential solutions for overcoming these challenges in the future.
§ INTRODUCTION
Machine Learning (ML) has made its presence ubiquitous with applications ranging from image recognition, healthcare diagnosis, text translation, anomaly detection and even physics. Recently, near-term quantum devices <cit.> have gained attention for their potential to solve simpler instances of classical intractable problems despite being limited by noise arising from short coherence times and limited qubit connectivity. Experimental demonstrations of quantum factoring algorithms, like Shor's algorithm, have proven challenging, but progress has been made, such as the factorization of N=15 using nuclear spins as quantum bits with room temperature liquid state nuclear magnetic resonance techniques <cit.>. The combination of quantum computing <cit.> and machine learning, termed, Quantum Machine Learning (QML) <cit.> has become an active research area with great advancements being made in the last decade. Within QML itself, depending on the type of data (classical or quantum) and algorithm (classical or quantum) there are multiple possible subdomains. In this survey, we primarily focus on supervised and unsupervised based quantum-enhanced machine learning algorithms which involve a quantum subroutine that is run on the real quantum hardware.
The past decade has witnessed significant advancements in the performance of quantum hardware, including the number of qubits, speed, and quantum volume <cit.>. Consequently, there has been an increase in the number of works implementing Quantum Machine Learning (QML) on real hardware. The common objective of these works is to demonstrate the advantages of utilizing quantum computers, with their unique properties such as entanglement and superposition, for practical machine learning tasks. To gain a comprehensive understanding of the current performance and limitations of near-term quantum devices in QML, it is necessary to conduct a thorough study. In this survey, we aim to consolidate and analyze works that involve the implementation of QML on real hardware to assess their performance. There has been a growing trend in utilizing quantum computing for commercial and industrial applications, as evidenced by several studies <cit.>. In light of these recent publications, our focus is specifically directed towards exploring applications and techniques that hold relevance for real-world scenarios. Consequently, we have identified high energy physics <cit.>, finance <cit.>, and healthcare <cit.> as the domains of interest for our survey.
Two main QML frameworks, quantum kernel methods <cit.> and the variational quantum algorithms <cit.> have been widely used due to their ability to be implemented with relative ease on the quantum hardware. They have also been shown to work on general datasets. The first method involves building a kernel similar to the technique used in Support Vector Machines (SVM) <cit.> while the second method employs a parameterized quantum circuit (PQC) whose parameters need to be optimized.
The main focus of this study is to understand and highlight the limitations encountered and the techniques employed across different fields, datasets, and algorithms in order to run QML on current ion-trap and superconducting quantum hardware. It is worth noting that quantum computing encompasses hardware architectures beyond gate-based systems, such as D-Wave's quantum computer that utilizes quantum annealing <cit.>. However, for the purpose of this study, we focus on gate-based architectures, as they adhere to the circuit-model paradigm, which differentiates them from alternative approaches.
The current state of Quantum Machine Learning (QML) <cit.> is confronted with several limitations, primarily attributed to the capabilities of quantum hardware. These limitations encompass aspects such as limited qubit connectivity, noise, coherence times, and errors in state preparation and measurement. Furthermore, the prolonged running times on quantum hardware can influence the execution and results of QML algorithms. Another crucial aspect is the efficient encoding of classical data into quantum features. Additionally, the loading and storage of prepared quantum states without succumbing to decoherence pose significant challenges. Subsequent to state preparation, the design of efficient quantum algorithms becomes paramount. Depending on the utilization of either kernels or variational quantum circuits, algorithm-specific challenges must be addressed. For variational algorithms, issues such as barren plateaus require particular attention. The training of these models, as well as the selection of suitable optimizers and loss functions, greatly impact algorithm efficiency. In the case of kernel techniques, the choice of an appropriate feature map is vital. Overall, optimization and scalability play crucial roles in refining the generalization capabilities of QML models. Building efficient hybrid classical-quantum algorithms that leverage the strengths of both frameworks remains a major challenge in QML. Furthermore, the scalability of these algorithms is pivotal to ensure their applicability to real-world problems. Lastly, addressing the vulnerability and security of QML models is of utmost importance to mitigate potential adversarial attacks.
To organize the papers, we adopt a hierarchical classification approach. Initially, we categorize them based on their real-world applications, such as High Energy Physics (refer to Table <ref>), Finance (refer to Table <ref>), and Healthcare datasets (refer to Table <ref>). We also include papers that utilize benchmark datasets, such as MNIST and Iris, which are displayed in Table <ref>. Additionally, we incorporate papers that employ artificial datasets (refer to Table <ref>) and quantum datasets (refer to Table <ref>).
Here, quantum data refers to data that is naturally already embedded in a Hilbert space which means that the data is already in the form of a set of quantum states or a set of unitaries. This is in contrast to classical data which needs to be encoded in a quantum system.
The tables summarize various information, including the reference, number of qubits used, and the type of problem being addressed. The problems encompass classification, which involves categorizing data points into predefined classes; regression/simulation, which entails predicting continuous or numerical values based on input data; clustering, which involves grouping similar data points together based on their characteristics or similarities; and dimensionality reduction, which aims to reduce the number of input features while retaining essential information. The tables also indicate the type of hardware used, such as superconducting or ion-trapped, and specify whether the model training was conducted on a QPU or simulator. It should be noted that all the listed papers conducted their testing on a QPU. Lastly, the tables provide comprehensive details about the quantum models employed, encompassing various models such as Quantum Generative Adversarial Network (QGAN), Variational Quantum Circuit (VQC), Quantum Tensor Network (QTN), Quantum Principal Component Analysis (qPCA) and Kernel, Quantum K-means.
It is important to note that the list may be biased towards results obtained using specific quantum devices, and there is a possibility that we may have missed some papers.
The paper is organized as follows: In Sections II, III, and IV, we provide an overview of the fundamental concepts in Classical Machine Learning, Quantum Computing, and Quantum Machine Learning, respectively. In Section V, we examine the applications of Quantum Machine Learning techniques, specifically focusing on kernel techniques and variational quantum classifiers, categorized into supervised and unsupervised learning. Section VI delves into works that explore techniques such as encoding and feature engineering for running QML on real quantum hardware, with evaluations conducted on benchmark datasets. In Section VII, we investigate the limitations related to hardware and algorithms. Finally, we conclude with discussions on the current bottlenecks and propose possible solutions for future research.
§ NOTATION
Throughout the paper, the dataset is denoted as 𝒟 = {(x^1,y^1),…,(x^m,y^m)}, where 𝒟 represents the dataset for supervised learning containing m samples or observations. For unsupervised learning, the data does not contain labels and can be represented as 𝒟 = {x^1,…,x^m}. Each x^i represents the i-th input data sample and can be understood as a vector defined as x^i = [x^i_1, x^i_2, …, x^i_d] where d is the number of features in the input data. The corresponding label or output associated with x^i is represented by y^i. Moving over to the vector spaces, we represent the N-dimensional Hilbert space as ℋ^N for a system with n qubits such that N = 2^n. The complex space is represented using ℂ, while the real space is denoted as ℛ. The feature map is denoted as ϕ. The quantum gates are represented as H for the Hadamard gate, X for the Pauli-X gate, Y for the Pauli-Y gate, Z for the Pauli-Z gate, and U for the unitary operator. The gate parameters are denoted using θ. The depth of the encoder part of the circuit is represented as N_depth^in, while N_depth^var is used to represent the depth of the variational part of the circuit.
§ CLASSICAL MACHINE LEARNING
The field of Artificial intelligence (AI) has become omnipresent with many practical applications such as automation of routine labor, speech recognition, computer vision etc. To avoid depending on hard-coded knowledge, it is essential for these AI systems to acquire knowledge from their surroundings by solving a learning problem <cit.>. Machine learning (ML)<cit.> is an evolving branch of Artificial intelligence that is essentially devoted to solving such problems where the goal is to improve some measure of performance when executing a task, through some type of training experience. ML models are trained on sample data, called training data, which enables them learn properties of the data and make predictions or decisions accordingly.
In this survey, we look at supervised and unsupervised learning.
Supervised Learning
Here, the model is provided with labelled data. To measure the performance, the model is evaluated on unseen data called testing data. Two common types of supervised-learning algorithms include classification and regression.
Training involves minimizing the cost function over the input data and adjusting its weights until the model has been fitted appropriately. Examples of common classifiers include linear classifiers, support vector machines (SVM), random forest etc. In regression type problems, the goal is to fit a function over the data (independent variables) to predict the output. Commonly used regression models include linear regression, support vector regression etc.
Unsupervised Learning
On the other hand, unsupervised learning involves training the model to analyze and cluster unlabeled datasets with the goal of discovering hidden patterns or structure in the data. In addition to clustering, which involves finding structure in the data by grouping similar points and separating dissimilar points, unsupervised learning also includes dimensionality reduction techniques such as PCA, autoencoders, singular value decomposition etc. Here, the objective is to reduce the dimension of the data without losing out on too much information.
§ QUANTUM COMPUTING
The phenomenon of quantum superposition and entanglement is what gives quantum computing an edge over classical computing. This can translate to significant speedup or reduced computational resources in terms of time and space. Here, we briefly discuss the basics of quantum computing. The basic unit of quantum computation is the qubit,
|ψ⟩ = α |0⟩ + β |1⟩
(where α , β ∈ ℂ and |0⟩, |1⟩ represent the computational basis in the
two-dimensional Hilbert space ℋ).
The absolute squares of the amplitudes (i.e. |α|^2 and |β|^2) are the probabilities to measure the qubit in either 0 or 1 state, respectively, such that |α|^2 + |β|^2 = 1.
As such, |ψ⟩ can be rewritten as
|ψ⟩ = cos(θ/2) |0⟩ + e^iϕ sin(θ/2) |1⟩
where 0 ≤ θ ≤ π and 0 ≤ ϕ ≤ 2π are real numbers. Unitary matrices (quantum gates) can be applied to quantum states to transform them into other quantum states while maintaining the condition on the amplitude-based probabilities. Through single-qubit gates we can manipulate the basis state, amplitude, or phase of a qubit (for example through the so-called X gate, the Z gate, and the Y gate, respectively), or put a qubit with β = 0 (α = 0) into an equal superposition α = 1/√(2), β = ±1/√(2) (the Hadamard or H gate). Multi-qubit gates are often based on controlled operations that execute a single-qubit operation only if another (ancilla or control) qubit is in a certain state. One of the most important gates is the two-qubit controlled-NOT (CNOT, or CX) gate, which flips the basis state of the target qubit when the control qubit is in state |1⟩. The set of arbitrary one-qubit rotation gates together with the two-qubit CNOT gate is universal, which means that any quantum operation can be implemented using a combination of these basic gates. We list typical quantum gates (along with their symbols and matrix forms) used in quantum circuits for quantum machine learning in Figure <ref>.
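As a small, generic illustration of these gates (not code from any of the surveyed works), the following Qiskit snippet prepares a two-qubit Bell state with a Hadamard followed by a CNOT:

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into the equal superposition (|0> + |1>)/sqrt(2)
qc.cx(0, 1)    # CNOT: flip qubit 1 whenever qubit 0 is |1>, creating entanglement

print(Statevector(qc))   # amplitudes 1/sqrt(2) on |00> and |11>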
§ QUANTUM MACHINE LEARNING
In this study, we focus on two widely used QML algorithms based on variational quantum circuits and quantum kernel methods.
In both of these approaches, we start by encoding d-dimensional classical data so that it is embedded as a quantum state vector in the Hilbert space. By doing so, we can exploit the exponential dimensionality of the Hilbert space that grows with the number of qubits, giving it a stronger representational power over the classical feature space which may help capture strong correlation between variables. Both the models involve data encoding but differ in the way the quantum state is handled. We look at some of the most commonly used techniques for encoding classical data along with the different QML models.
§.§ Encoding Datasets
Quantum Machine Learning (QML) involves learning from either classical or quantum data. It is more likely to obtain exponential quantum advantage in machine learning when data comes from quantum-mechanical processes, such as experiments in various fields <cit.>. Classical data is encoded in bits (0s and 1s), such as images, texts, medical records, etc. Quantum data, on the other hand, is encoded in quantum bits called qubits, which can represent states beyond 0 and 1. Qubits can contain information from physical processes like quantum sensing or quantum control. While classical data can be efficiently encoded in qubits, the reverse is not true. In QML, quantum data refers to data already in a quantum state, while classical data needs to be encoded into a quantum system.
A requisite for obtaining quantum advantage in both VQC and QKE <cit.> on classical datasets is that the embedding or encoding of the dataset has to be efficiently implementable on quantum circuits to avoid the so-called data-loading problem <cit.>. The quantum embeddings represent classical data as quantum states in the Hilbert space, thereby allowing us to truly harness the power of quantum systems. Some desirable properties of an encoder are that the number of gates required to implement it must be at most polynomial in the number of qubits, and that classical intractability of simulating it is preferred. Additionally, it is ideal to have a bijective encoding such that there is a unique quantum state ρ_x_i for each sample x_i. Finally, the single- and two-qubit gates required to implement the encoder should be compatible with the native gate set of near-term quantum devices so that the compilation of the circuit is hardware efficient <cit.>. Thus, data encoding plays an important role, as it determines the features that quantum models represent <cit.>, the decision boundaries learnt <cit.>, and the measurements that optimally distinguish between data classes <cit.>.
§.§.§ Basis Encoding
This is the simplest and one of the most common encodings <cit.>; it maps classical binary-string data x = x_1… x_n into the computational basis state |x⟩ = |x_1… x_n⟩. It requires n qubits to encode n bits of classical data, and is useful to feed one classical sample at a time to a QML model. The power of the quantum resource comes when batches of classical samples are represented as superpositions of basis states <cit.>. Quantum bits can be used to create quantum states that are superpositions of classical datasets, i.e., quantum batches.
In the case of supervised learning, as pointed out in <cit.>, one can create quantum states |+1⟩ and |-1⟩ each of which is a superposition of the basis encoding of samples with label l(x) as +1 and -1, respectively, as below (omitting ancilla and working qubits), and use them to train a QML model on superposition states of real world data.
|+1⟩ = 1/√(N_+)∑_x:l(x)=+1|x⟩
|-1⟩ = 1/√(N_-)∑_x:l(x)=-1|x⟩,
where N_+ and N_- are, respectively, the number of samples with label +1 and -1. It is argued in <cit.> that the above quantum batches can result in training a QML model with smoother loss fluctuation and can be more efficient in the sample complexity for better generalization error than individual samples.
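As an illustration, the following NumPy sketch builds such quantum batches for a small hypothetical labeled dataset of 3-bit samples; the samples and labels are made up for the example.

```python
import numpy as np

# Hypothetical 3-bit samples with binary labels
samples = {"101": +1, "110": +1, "011": -1, "000": -1}

n = 3
plus_state, minus_state = np.zeros(2 ** n), np.zeros(2 ** n)
for bits, label in samples.items():
    idx = int(bits, 2)                 # basis encoding: bitstring -> basis index
    (plus_state if label == +1 else minus_state)[idx] = 1.0

# Normalize to obtain |+1> and |-1> as equal superpositions over each class
plus_state /= np.linalg.norm(plus_state)
minus_state /= np.linalg.norm(minus_state)
```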
§.§.§ Amplitude Encoding
The classical data x, which is a d-dimensional vector, is encoded into the amplitudes of a quantum state <cit.>. Namely, for x = (x_1,…,x_d) such that ∑_i |x_i|^2 = 1, the corresponding encoding is the quantum state
|ψ_x⟩ = ∑_i=1^d x_i|i⟩,
which requires only ⌈log_2 d⌉ qubits to store x.
The advantage of this encoding is in the exponential memory saving and, if one can design a QML model that runs in polynomial time in the size of the number of qubits, then there are hopes for exponential quantum advantage. In fact, many QML models promising quantum advantages use this encoding combined with quantum basic linear algebras, such as, HHL <cit.> and others (see, e.g., <cit.>). The main drawback is that quantum circuits that generate |ψ_x⟩ can require quantum circuits with exponential number of native gates <cit.>, and hence the data-loading problem <cit.>.
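The following sketch shows the classical pre-processing behind amplitude encoding: normalizing the feature vector and padding it to a power-of-two length so it can serve as a state vector (the circuit that actually prepares this state is the expensive part discussed above).

```python
import numpy as np

def amplitude_encode(x):
    """Normalize x and pad it to the nearest power of two so that it can be
    interpreted as a state vector on ceil(log2(d)) qubits."""
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)

state = amplitude_encode([0.2, 0.5, 0.1, 0.7, 0.3])  # 5 features -> 3 qubits
```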
To avoid exponential circuit complexity, recent works <cit.> propose the use of unary amplitude encoding to encode x using an d-qubit quantum state (i.e., a qubit per feature) as
|ϕ_x⟩ = ∑_i=1^d x_i|e_i⟩,
where |e_i⟩ is the i-th unary computational basis state |0…010…0⟩ with a "1" only at the i-th qubit. It has been shown that the depth of the circuit to generate the unary encoding is logarithmic in d <cit.>, and linear when using a cascade of RBS gates <cit.>.
§.§.§ Divide-and-Conquer Approach
This data-loading technique is a modified version of amplitude encoding introduced in <cit.> that uses controlled-swap gates and ancilla qubits. As the name suggests, the method is based on a divide-and-conquer approach and derives its motivation from <cit.>. The d-dimensional input vector is loaded into the probability amplitudes of computational basis states, with entangled information in ancillary qubits. The results show an exponential time advantage using a quantum circuit with poly-logarithmic depth and O(d) qubits. However, the reduced circuit depth comes at the cost of increasing the circuit width and creating additional entanglement between data register qubits and an ancillary system.
§.§.§ Angle Encoding
While the aforementioned amplitude encodings require at least O(log d)-depth circuits, one can load x with constant-depth quantum circuits by embedding x_i ∈ℝ, i.e., the i-th element of x, as a parameter of the Pauli rotation gates R_X(x_i) ≡ e^-ix_iX/2, R_Y(x_i) ≡ e^-ix_iY/2, or R_Z(x_i) ≡ e^-ix_iZ/2. The data also needs to be normalised or scaled, e.g. using min-max scaling, into a suitable range to be used as gate angles, and the choice of this range can influence the performance. In <cit.>, for example, angles in the range [-1, 1] were found to perform better than [-π, π]. Starting from the all-zero quantum state, one can create the following n-qubit quantum state (where n = d) representing x by applying R_Y(x_i) to the i-th qubit for i=0,…,d-1.
|x⟩≡⊗_i=0^d-1 R_Y(x_i) |0⟩^⊗ d = ⊗_i=0^d-1( cos(x_i/2)|0⟩ + sin(x_i/2)|1⟩)
The above quantum state is a product state that can be represented classically in O(d) computational space and time, but when combined with entanglement layers and their block repetitions,
the angle encoding can be used as a building block to generate sophisticated entangled states that are difficult to compute classically.
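A minimal PennyLane sketch of angle encoding followed by one entangling layer is shown below; the library choice, number of qubits, and feature values are illustrative assumptions.

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def angle_encoding_block(x):
    # One feature per qubit as an R_Y rotation angle
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # Entangling layer turning the product state into an entangled state
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.state()

x = np.array([0.1, 0.4, 0.7, 0.9])  # features scaled to a suitable range
state = angle_encoding_block(x)
```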
Also worth mentioning are the so-called First Order Encoding (FOE) and Second Order Encoding (SOE) as defined in <cit.>. In FOE, to encode x_k ∈ℝ, the single-qubit gate R_Z(x_k) is used. This can be lifted to a higher-order encoding using SOE, where more parameters are used along with entangling gates. For example, to encode x_l, x_m ∈ℝ along with their correlation in the l-th and m-th qubits, SOE utilizes the gate e^i(π - x_l)(π - x_m)Z_l Z_m.
When the classical data x is a bitstring of length d, which is often used to represent discrete features, <cit.> proposes to utilize so-called Quantum Random Access Codes (QRAC) to obtain a constant-factor saving in the number of qubits. For example, the previous |e_i⟩ is known as one-hot encoding in classical machine learning and requires d qubits. With the QRAC encoding, the bitstring x = x_0…x_d-1∈{0,1}^d can be represented with a ⌈d/3⌉-qubit quantum state ρ_x as below.
ρ_x≡|ψ_x⟩⟨ψ_x| = ⊗_i=0^d/3-11/2(I + 1/√(3)( (-1)^x_3i X + (-1)^x_3i+1Y + (-1)^x_3i+2Z )),
where for simplicity d > 0 is assumed to be divisible by 3. Notice that the value of x_3i+j can be retrieved by measuring the i-th qubit of ρ_x in X, Y or Z bases for j = 0, 1, 2, respectively. The QRAC encoding can be run with a single-qubit gate for each qubit.
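The QRAC state above is easy to construct numerically; the sketch below (NumPy, illustrative bitstring) builds the single-qubit blocks and their tensor product.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def qrac_block(b0, b1, b2):
    """(3,1)-QRAC: encode three classical bits into one single-qubit density matrix."""
    return 0.5 * (I2 + ((-1) ** b0 * X + (-1) ** b1 * Y + (-1) ** b2 * Z) / np.sqrt(3))

# Encode the 6-bit string 101100 into 2 qubits (one block per group of three bits)
rho_x = np.kron(qrac_block(1, 0, 1), qrac_block(1, 0, 0))
```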
Data Re-uploading
Angle encoding applies a Pauli rotation gate with a single degree of freedom, say R_Z(x_j) at the j-th qubit for x_j ∈ℝ. Meanwhile, it is known that a general single-qubit rotation gate U(·) has three degrees of freedom; its matrix form is shown in Figure <ref>.
First proposed in <cit.>, the data re-uploading technique utilizes the above U(·) to encode three elements of x in a qubit. By repeating the application of U(·), each time with a different set of three elements of x for j∈{0,…,d/3-1} (hence the re-uploading), the whole data point x can be encoded in a single qubit.
We can easily see that the data re-uploading is the angle encoding repeated with different parameters x_j's because the above U(·) can be decomposed into a sequence of Pauli rotation gates as below.
U(x_3j, x_3j+1, x_3j+2) = R_Z(x_3j+1+π) √(X) R_Z(x_3j+π) √(X) R_Z(x_3j+2)
The parameters of data re-uploading can be linearly transformed before being used in U(·) or trained to fit the prediction <cit.>.
This method has been used in a variety of applications, ranging from drug discovery <cit.> and image classification on the MNIST dataset <cit.> to the Variational Quantum Eigensolver <cit.>. Due to the structure of single-qubit unitary gates, this encoding is particularly suited for data with rotational symmetry.
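A minimal data re-uploading circuit on a single qubit can be sketched in PennyLane as follows; the trainable weights, feature values, and the use of qml.Rot for U(·) are illustrative assumptions rather than the exact construction of the cited works.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def reuploading_model(x, weights):
    # Re-upload groups of three features into the same qubit,
    # interleaved with trainable single-qubit rotations
    for j in range(len(x) // 3):
        qml.Rot(x[3 * j], x[3 * j + 1], x[3 * j + 2], wires=0)   # data block
        qml.Rot(*weights[j], wires=0)                             # trainable block
    return qml.expval(qml.PauliZ(0))

x = np.array([0.1, 0.5, 0.9, 0.2, 0.4, 0.8])                      # d = 6 features
weights = np.random.uniform(0, 2 * np.pi, (2, 3))
print(reuploading_model(x, weights))
```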
§.§.§ Hamiltonian Encoding
While the encoding quantum state from the angle encoding is obtained by transforming the all-zero quantum state with single-qubit rotation gates, which are classically computable, the Hamiltonian encoding evolves the all-zero quantum state according to a Hamiltonian parameterized by x to generate highly entangled states. Namely, let the Hamiltonian be H(x) = ∑_i f_i(x) H_i, where f_i(x)∈ℝ is a weight function and H_i = ⊗_j=1^nσ_i^j for σ_i^j ∈{I, X, Y, Z}. For a fixed t, the quantum state |ψ_t(x)⟩ that encodes x is obtained from the time evolution
|ψ_t(x)⟩ = e^-iH(x)t/ħ|0⟩^⊗ n,
which can be run on gate-based quantum hardware using techniques such as Trotterization<cit.>, variational approaches <cit.> and linear combination of unitaries<cit.>.
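For small systems, the Hamiltonian-encoded state can be computed exactly with a matrix exponential, as in the NumPy/SciPy sketch below (a toy two-qubit Hamiltonian with data-dependent weights, ħ set to 1; on hardware the evolution would instead be Trotterized or approximated variationally).

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def hamiltonian(x):
    # Toy H(x) = x[0] * Z(x)Z(x) term + x[1] * X term, weights f_i(x) = x[i]
    return x[0] * np.kron(Z, Z) + x[1] * np.kron(X, np.eye(2))

x = np.array([0.8, 0.3])
t = 1.0
psi0 = np.zeros(4); psi0[0] = 1.0                    # |00>
psi_t = expm(-1j * hamiltonian(x) * t) @ psi0        # exact time evolution
```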
§.§ Models
§.§.§ Quantum Kernel Estimation
The kernel trick enables one to process higher-dimensional data without explicitly computing the feature vector. This method is most commonly used for classification with the support vector machine (SVM) <cit.>. By means of the kernel, every feature map corresponds to a distance measure in input space via the inner product of feature vectors <cit.>. The key highlight of kernel tricks with quantum states, or quantum kernels, comes from the ability to compute similarities from the encoding of classical data into the quantum state space through entanglement and interference, so as to generate correlations between variables that are classically intractable <cit.>. This is expected to give more expressive feature embeddings, leading to better performance in pattern recognition and classification tasks compared to classical counterparts. However, the true advantage does not come from the high-dimensional space (which is also accessible using classical kernels) but rather from being able to construct complex circuits which are hard to calculate classically. Even so, while classical kernels can be computed exactly, quantum kernels are subject to small additive noise in each kernel entry due to finite sampling. To tackle this, error-mitigation techniques have been developed <cit.> for cases where the feature map circuit is sufficiently shallow.
The following steps are key components involved in QKE<cit.>:
* Quantum Feature Map : A feature map ϕ is employed to encode the classical data x to the quantum state space using unitary operations. For any two data points x^i, x^j ∈ 𝒟, the encoded data is represented as Φ(x^i) and Φ(x^j) respectively.
* Inner product : The kernel entry can be obtained as the inner product between two data-encoded feature vectors Φ(x^i) and Φ(x^j) i.e.
κ(x^i, x^j) = |⟨Φ(x^j)|Φ(x^i)⟩|^2
The kernel entry can be estimated by recording the frequency of the all-zero outcome 0^n. This procedure is referred to as quantum kernel estimation (QKE).
Different methods <cit.> can be employed to estimate the fidelity between general quantum states, one of which is the swap test.
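The sketch below (illustrative feature map, exact statevector overlaps instead of measured frequencies) shows how a quantum kernel matrix of the form of Eq. (<ref>) can be assembled and plugged into a classical SVM with a precomputed kernel.

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Toy feature map: angle-encode two features as R_Y rotations on two qubits."""
    def ry(theta):
        return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                         [np.sin(theta / 2),  np.cos(theta / 2)]])
    zero = np.array([1.0, 0.0])
    return np.kron(ry(x[0]) @ zero, ry(x[1]) @ zero)

def quantum_kernel(XA, XB):
    # k(x_i, x_j) = |<phi(x_j)|phi(x_i)>|^2; on hardware each entry would be
    # estimated from the frequency of the all-zero measurement outcome.
    return np.array([[abs(np.vdot(feature_state(b), feature_state(a))) ** 2
                      for b in XB] for a in XA])

X_train = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.7], [0.9, 0.1]])
y_train = np.array([0, 1, 0, 1])
svm = SVC(kernel="precomputed").fit(quantum_kernel(X_train, X_train), y_train)
```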
Quantum Support Vector Machines use the kernel built using QKE with a classical SVM. It was first introduced in <cit.> while the proof-of-principle was first demonstrated for classifying handwritten characters in <cit.>.
The advantage of using quantum kernels is not so apparent when we have large datasets where the quantum cost scales quadratically with the training dataset size <cit.>.
Efficient data encoding and generating useful quantum kernels is constrained by the limited number of qubits and heuristic characterization <cit.>. Additionally, fewer measurements, and large system noise necessitate error mitigation techniques requiring significant additional quantum resources <cit.>. In <cit.>, an indefinite kernel learning based method is implemented to demonstrate the advantage of kernel methods for near term quantum devices by suppressing the estimation error. Recently, the work in <cit.> introduced a novel approach for measuring quantum kernels using randomized measurements showing a linear scaling of features based on circuit depth. The method also incorporates a cost-free error mitigation and offers improved scalability, with the quantum computation time scaling linearly with the dataset size and quadratic scaling for classical post-processing.
Different types of kernels
In the previous section, the quantum kernel is computed as the (non-negative) frequency of observing the all-zero bits after running the concatenation of the quantum circuit encoding x^i with the inverse of the quantum circuit encoding x^j, as in Eq. (<ref>). This type of quantum kernel is quite powerful for classifying artificial data derived from the discrete-log problem <cit.>, and for classifying group-structured data when the initial state |0^n⟩ in Eq. (<ref>) is replaced with an optimized fiducial quantum state computed from kernel alignment <cit.>. For the latter, experimental results are demonstrated on a 27-qubit device, where the data are encoded with single-qubit rotation gates and the fiducial quantum state is matched to the qubit connectivity of the device.
There are many other types of quantum kernels available whose elements are not necessarily restricted to be non-negative. For example, the Hadamard-test classifier (HTC), which encodes real-valued vectors with amplitude encoding, computes the weighted sum of inner products between a test data vector and the superposition of training data vectors for binary classification <cit.>. A compact version of the HTC is given in <cit.>. While the full quantum feature space in Eq. (<ref>) seems to be powerful, it is pointed out in <cit.> that it can fail to learn even a simple function. To overcome this, the projected quantum kernel, which projects the quantum states back to a classical representation and computes the kernel elements from functions of reduced density matrices, is introduced in <cit.> to obtain better quantum kernels that can also learn the data derived from the discrete-log problem in <cit.>.
§.§.§ Swap-test Classifier
The Swap-test classifier as proposed in
<cit.> is implemented as a distance-based quantum classifier where the kernel is based on the quantum state fidelity raised to a certain power at the cost of using multiple copies of training and test data. The choice of the quantum feature map plays a pivotal role in defining the kernel and the overall efficiency of the classifier. The training and test data are encoded in a specific format following which the classifier is realized by means of the swap-test <cit.>.
The swap test measures the similarity between the input quantum state and the reference quantum states for each class using measurements to compute a similarity score that indicates the overlap between the input state and the reference states.
§.§.§ Variational Quantum Circuits (VQC)
These algorithms primarily focus on optimizing the parameters of the PQC and are known to provide a general framework that is compatible with different classes of problems, leading to different structures and grades of complexity. The optimization is performed classically while allowing the circuit to remain shallow, making VQCs a versatile tool for near-term quantum devices.
The basic structure of a VQC involves the following three steps:
* Quantum Feature Map : A non-linear feature map ϕ is employed to encode the classical data x to the quantum state space. This is done by applying the circuit U_ϕ(x) to the initial state |0⟩^⊗ n :
| Φ(x)⟩ = U_ϕ(x)|0⟩^⊗ n.
The initial state |0⟩^⊗n can be replaced by any fiducial quantum state as shown in <cit.>. The encoding circuit U_ϕ(x) can also be applied more than once and/or interleaved with the model circuit described later.
* Model Circuit : A short-depth parameterised quantum circuit W(θ) is applied on the obtained quantum state with layers that are parameterized by the rotational angles for the gates that needs to be optimized during training. The optimization is performed over a cost function.
* Measurement and Preprocessing : The outcome of the measurement results in a bit string z ∈{0,1}^n that is mapped to a label. This circuit is re-run multiple times and sampled to estimate the probability of observation z which can be obtained as
⟨Φ(x) | W^†(θ)M_y W(θ)|Φ(x)⟩
which is calculated for each of the different classes y using the measurement operator M_y.
In the aforementioned quantum feature map and model circuit, CZ and CNOT gates (along with the Hadamard gate) are commonly used to create entanglement. A common strategy to optimize the entangling sub-circuit is to entangle adjacent qubits: we first entangle the 2i-th qubit with the (2i+1)-th qubit, and only afterwards entangle the (2i+1)-th qubit with the (2i+2)-th qubit, for i=0,1,…. By doing so, we can parallelize the entanglement operations and reduce the execution dependency <cit.>. The circuit for this is shown in Figure <ref>. Based on the chosen circuit depth, we can repeat the quantum feature map with the entangling sub-circuit, or the model circuit with the entangling sub-circuit.
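A minimal PennyLane sketch of this three-step structure (angle-encoding feature map, layered trainable rotations with CNOT entanglers, and a single-qubit expectation value as the class score) is shown below; the specific ansatz and hyperparameters are illustrative assumptions.

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(x, theta):
    # 1) quantum feature map: angle encoding of the input
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # 2) model circuit: trainable rotations and entangling CNOTs, layer by layer
    for layer in theta:
        for i in range(n_qubits):
            qml.RY(layer[i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    # 3) measurement: expectation value used to derive the predicted label
    return qml.expval(qml.PauliZ(0))

theta = np.random.uniform(0, 2 * np.pi, (2, n_qubits))
x = np.array([0.3, 0.7, 0.1, 0.9])
print(vqc(x, theta))
```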
Ansatz
The choice of the ansatz also plays a pivotal role as the parameters θ of the circuit represented by W are optimized during the training. For example, the experiments in <cit.> showed better performance for the weighted ansatz (Section <ref>) in comparison to the Fourier ansatz (inspired from <cit.>), which introduces linear and logarithmic dependencies to the same gate without using tunable weights. The authors speculate that using weights allows better representability especially for smaller layers.
The layered ansatz is another common choice: it comprises layers, each composed of a set of entangling gates preceded by two alternating single-qubit rotation gates. The number of layers is a hyperparameter.
Modified Layerwise Learning
As the name suggests, this strategy involves training the circuit layer by layer such that only a small set of parameters is optimized in a single update <cit.>. Initially, a small circuit with a few starting layers is chosen, with all parameters set to 0. This circuit is optimized by running it for a few epochs. The parameters are then frozen and a new set of layers is added. The new layers' parameters are optimized while the previous layers' parameters remain frozen, until no further improvement is obtained in the cost function or until the desired depth is reached. Then, the circuit depth is fixed and a larger set of the parameters is trained again. This strategy can help avoid barren plateaus due to the small number of layers and also maintains a favorable signal-to-noise ratio <cit.>.
Optimizers
The work in <cit.> demonstrated that the gradient-free optimizers Simultaneous Perturbation Stochastic Approximation (SPSA) and Powell's method, and the gradient-based optimizers AMSGrad and BFGS, performed best in noisy simulation and appeared to be less affected by noise than the other methods. SPSA appeared to be the best-performing method, while COBYLA, Nelder-Mead and Conjugate-Gradient were the most heavily affected by noise, even at the slightest noise levels.
Recently, the work presented in <cit.> introduces a novel approach that combines the approximated gradient from Simultaneous Perturbation Stochastic Approximation (SPSA) with classical optimizers. This approach surpasses the performance of standard SPSA and the parameter-shift rule in regression tasks, demonstrating enhanced convergence rate and error reduction, especially when considering noise.
Even the choice of batch size for training affects the convergence rate. In principle, quantum computing allows encoding a batch of training inputs into a quantum state in superposition and feeding it into the classifier, which can be used to extract gradients for the updates from the quantum device. However, this would increase the time complexity of the state preparation routine in general, and even more so for sophisticated feature maps. Single-batch stochastic gradient descent, where only one randomly sampled training input is considered in each iteration, can have favourable convergence properties, especially in cases where there is a lot of data available <cit.>. However, in <cit.>, training with a single data point per update led to slow convergence with volatile validation loss per epoch, which was avoided by increasing the batch size to 64.
Some of the major limitations associated with classical optimizers are repeated measurements and the complexity of gradient calculation <cit.>. Classical optimizers often require repeatedly measuring the outputs of a quantum circuit and feeding them into the classical computer. This process can lead to slower convergence rates for the optimization algorithm. The complexity of calculating gradients can impact the convergence of the optimization algorithm, particularly as the feature size (d) of the input increases. For instance, gradient-based methods like gradient descent have a complexity of O(d) <cit.>, which can become a scalability bottleneck.
To address these limitations, researchers have proposed quantum gradient methods <cit.> in the recent past as potential alternatives. These methods aim to leverage the benefits of quantum computation to overcome the challenges associated with classical optimization. However, their practical implementation still faces challenges related to applicability and complexity.
Parameter shift rule
To optimize the objective, it is useful to have access to exact gradients of quantum circuits with respect to the gate parameters. The parameter update requires computing ∇L(θ), which in turn requires computing the gradient of the quantum circuit output f via the chain rule, since the loss function is a function of the output of the quantum circuit. The gradient of the quantum circuit output is calculated using the parameter-shift rule <cit.> by shifting the value of the gate parameters θ.
For the gates used in angle encoding, the parameter-shift can be applied as
∂ f/∂θ = 1/2[f(θ + π/2) - f(θ - π/2)]
In other cases, different strategies can be applied as discussed in <cit.>. When the ansatz consists of single-qubit rotation gates R_x(θ), R_y(θ), R_z(θ) as in Figure <ref>, the loss function can be optimized with gradient-free optimizers using coordinate descent <cit.>. While the gradient-based optimizers can be parallelized <cit.>, the gradient-free coordinate descent methods are sequential but have been shown to converge to local optima faster <cit.>. Generalization of gradient-free sequential single-qubit gate optimizers are derived in <cit.>. Nevertheless, within the existing training framework for Quantum Neural Networks (QNNs), it is necessary to compute gradients with respect to the objective function directly on the quantum device. However, this computation faces significant scalability challenges and is susceptible to hardware limitations and sampling noise inherent in the current generation of quantum hardware <cit.>.
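As a quick numerical check (pure NumPy, single R_Y rotation, not tied to any specific framework), the shift rule reproduces the analytic derivative of f(θ) = ⟨ψ(θ)|Z|ψ(θ)⟩ = cos θ:

```python
import numpy as np

def expval_z(theta):
    """f(theta) = <psi|Z|psi> for |psi> = R_Y(theta)|0>, which equals cos(theta)."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    Z = np.array([[1, 0], [0, -1]])
    return psi @ Z @ psi

theta = 0.37
shift_grad = 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))
exact_grad = -np.sin(theta)          # analytic derivative of cos(theta)
print(shift_grad, exact_grad)        # both print approximately -0.3616
```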
In a recent study, <cit.> presented an alternative training algorithm that circumvents the need for gradient information. They introduced a novel meta-optimization algorithm, which involves training a meta-optimizer network to generate optimal parameters for the quantum circuit. These parameters are carefully chosen to minimize the objective function without relying on traditional gradient-based approaches.
Quantum Natural Gradient
The geometry of the parameter space plays a huge role in the efficient optimization of the VQC parameters <cit.>. In <cit.>, the authors expect that a smaller network structure of the VQC can lead to a significant advantage, as it allows using computationally more expensive optimization algorithms, resulting in a faster learning rate. This is also advantageous when the training data is limited.
In vanilla gradient descent, the loss function L(θ) is minimized in the l2 vector space by updating the network parameter θ^(t) at time t to θ^(t+1) in the direction of the steepest slope as
θ^(t+1) = θ^(t) - η∇ L(θ)
Since each model parameter is updated by the same Euclidean step, the optimization may get stuck in local minima, because L(θ) varies at a different rate with respect to each parameter. This is tackled in natural gradient descent, where the parameter space is endowed with a Riemannian geometry defined by the Fisher Information Matrix <cit.>, which is invariant under re-parametrisation. The parameters are updated as
θ^(t+1) = θ^(t) - η F^-1∇ L(θ)
where F is the Fisher Information Matrix. The calculation of F^-1 is in general computationally expensive. However, this approach leads to faster convergence and can help avoid getting stuck in local minima <cit.>.
For VQC parameter optimization, it has been shown that using the standard Euclidean geometry is sub-optimal <cit.>. Quantum natural gradient descent is the quantum analogue of natural gradient descent that uses the Fubini-Study metric g <cit.>. The Fubini-Study metric tensor is the unique invariant metric tensor on the space of quantum states and exploits the geometric structure of the VQC's parameter space. The parameters are updated as
θ^(t+1) = θ^(t) - η g^+∇ L(θ)
where g^+ is the pseudo-inverse of the Fubini-Study metric g.
Faster convergence has been observed for quantum gradient descent compared to the vanilla gradient descent with similar number of trainable parameters <cit.>.
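The update rule itself is a one-liner once the metric tensor and the gradient are available, as in the NumPy sketch below (toy parameter values and a diagonal metric chosen purely for illustration).

```python
import numpy as np

def natural_gradient_step(theta, grad, metric, lr=0.1):
    """Quantum natural gradient update: theta <- theta - lr * g^+ grad,
    with g^+ the pseudo-inverse of the Fubini-Study metric tensor."""
    return theta - lr * np.linalg.pinv(metric) @ grad

theta = np.array([0.1, 0.5, 0.9])            # toy circuit parameters
grad = np.array([0.3, -0.2, 0.05])           # gradient of the loss
metric = np.diag([0.25, 0.25, 0.05])         # (block-)diagonal approximations are common
theta = natural_gradient_step(theta, grad, metric)
```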
Quantum Natural SPSA
The large computational cost associated with calculating the Quantum Fisher Information (QFI), which scales quadratically in the number of ansatz parameters, limits the advantage of using quantum natural gradient descent over standard gradients. To counter this, a new approach was introduced in <cit.>, Quantum Natural SPSA (QN-SPSA), which inherits the fast convergence and robustness of the quantum natural gradient with respect to the initial parameters, while retaining the computational cost benefits of SPSA <cit.>.
Additionally, it is worth mentioning some recent works, such as the Pure Quantum Gradient Descent Algorithm, which was proposed in a recent study <cit.>. This innovative quantum-based method for gradient calculation claims to provide a theoretical computational complexity of O(1) in contrast to the O(d) complexity of the classical algorithm <cit.>.
§.§.§ Quantum Principal Component Analysis (PCA)
PCA has been used for the optimal low-rank approximation of a matrix through spectral decomposition by setting a threshold on the eigenvalues. By doing so, we only retain the principal components of the spectral decomposition while discarding those with the smaller eigenvalues. However, when the size of the matrix is large, the computational costs increase which is why we look at quantum algorithms.
The implementation of quantum PCA in <cit.> helps construct the eigenvectors and eigenvalues of the unknown density matrix thereby discovering their properties. The authors assume that the matrix can be represented by a quantum state, i.e. it is a non-negative matrix with trace equal to one, which covers a wide range of interesting cases. It uses multiple copies of an unknown density matrix to construct the eigenvectors corresponding to the large eigenvalues of the state (the principal components) in time O(log N) where N is the dimension of the Hilbert space, resulting in an exponential speed-up over existing algorithms. They provide novel methods of state discrimination and cluster assignment.
§.§.§ Quantum Orthogonal Neural Networks
Orthogonal neural networks are neural networks with orthogonal trained weight matrices which provide the advantage of avoiding vanishing gradients and improved accuracies <cit.>. The PQC for implementing the orthogonal neural networks was first introduced in <cit.> using unary amplitude encoding and a pyramidal structure using only RBS gates. The orthogonality of the weight matrix is preserved by performing gradient descent on the parameters of the quantum circuit. This works because a quantum circuit with real-valued unitary gates is an orthogonal matrix hence the gradient descent is equivalent to updating the weight matrix. Another feature of the circuit is one-to-one mapping between the parameters of the orthogonal matrix and the quantum gates of the circuit. The circuit architecture benefits from linear circuit depth and error mitigation due to unary encoding along with nearest neighbor connectivity due to the distribution of the RBS gates. In <cit.>, the results show linear scaling of the training run time with respect to the number of parameters.
§.§.§ Quantum Generative Adversarial Networks
The primary goal of a classical generative adversarial network (GAN) <cit.> is to generate data by studying a collection of training examples and learning the underlying probability distribution. It typically involves an iterative adversarial training procedure between two neural networks, the discriminator and the generator model. The generator creates fake data with the goal of generating data as close as possible to the real training dataset while the discriminator tries to separate this fake data from the real data.
The quantum variant of the GAN (QGAN) was proposed independently in <cit.>, where a QNN is used as the discriminator, the generator, or both. In <cit.>, faster convergence was noted for a classical discriminator in comparison to other architectures. For more details refer to <cit.>.
Quantum Adversarial Learning
Adversarial machine learning involves assessing vulnerabilities of machine learning in adversarial settings and consequently implementing techniques to make the models more robust to such manipulations. In the quantum setting, <cit.> shows that a quantum classifier which performs with nearly the state-of-the-art accuracy can be deceived by adding unnoticeable perturbations to the original samples.
§.§.§ Tensor Networks
Tensor networks (TN) are a popular method in the field of quantum many-body problems due to their ability to represent many-body localized systems and are already known for their performance in the classical setting for supervised and unsupervised learning tasks. TNs can represent both quantum states and circuits <cit.> using VQCs with rules described in <cit.>. They can also simulate strongly entangled quantum systems <cit.>. Depending on the architecture, the number of physical qubits scales only logarithmically with, or independently of the input or output data sizes which can be implemented on small, near-term quantum devices using lesser physical qubits. The work in <cit.> shows that classical TNs require exponentially more trainable parameters and higher Hilbert-space mapping to perform on par with the quantum counterparts which makes them vulnerable to a highly flat loss landscape. A review can be found in <cit.>.
§.§.§ Quantum Autoencoder
The task of a classical autoencoder is to obtain a low level representation of a given input such that the original data can be recovered. This has applications in dimensionality reduction and generative data model. The quantum version of the classical encoder was first implemented in <cit.> where an ansatz is trained to obtain a compressed version of an ensemble of pure quantum states. Different variants are explored in <cit.>. The learning task involves finding unitaries that preserve the quantum information of the input through the smaller intermediate latent space. The PQC initially encodes the input state into an intermediate latent space. Following this, the decoder acts with the goal of being able to reconstruct the input. A cost function is used to estimate the fidelity (distance) between the input and output states.
§ APPLICATIONS
Performance is usually measured using the AUC-ROC, which stands for Area Under the ROC Curve <cit.>. The ROC curve (receiver operating characteristic curve) is a commonly used graph that summarizes the performance of a classifier over all possible probability thresholds, with the True Positive Rate (TPR) on the y-axis and the False Positive Rate (FPR) on the x-axis. The AUC-ROC provides intuition about the capability of the model to distinguish accurately between true positives and false positives. The score varies from 0 to 1, where a higher score implies better distinction/performance and a score of 0.5 corresponds to random guessing.
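For reference, the metric is readily computed from labels and classifier scores, e.g. with scikit-learn (the labels and scores below are made-up toy values):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]                    # true binary labels
y_score = [0.1, 0.65, 0.85, 0.7, 0.6, 0.3]     # classifier scores in [0, 1]
print(roc_auc_score(y_true, y_score))          # ~0.89: one negative outranks one positive
```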
§.§ High Energy Physics
Most of the studies focus on obtaining better performance with limited data <cit.>.
Among recent works, VQC has been used widely for HEP based applications with data agnostic techniques for feature encoding like single qubit rotation gate or ZZ-gate <cit.>. However, these methods are not suitable for HEP as they end up incurring large overhead on the number of qubits or gates for multi-dimensional data.
Here we review the applications of VQC and QSVM in HEP.
§.§.§ Classification
A prominent use case under this category is the Event Classification which involves discriminating signal events from the background events in the context of the Standard Model of particle physics<cit.>.
In <cit.>, the data is encoded using first-order encoding (FOE) with a variational part based on a simplified version of <cit.> with depth 1. A combination of three input variables is selected using a DNN based on the AUC-ROC, and the circuit is run on the IBM Quantum device <cit.> with 3 qubits. The cross-entropy cost function is optimized using COBYLA. Results showed a higher cost function with more fluctuations for the real device compared to the simulator, but both showed consistent AUC (around 0.80) within the standard deviation. Additionally, second-order encoding (SOE) was employed, but no improvement was observed on a real quantum computer. This may be attributed to the 60% increase in single- and two-qubit gates when transitioning from FOE to SOE, resulting in increased hardware noise due to gate errors.
In <cit.>, an improvement over this method is shown using data re-uploading and modified layer-wise learning with only 1 qubit and 5 layers. However, training is performed on the PennyLane simulator, optimizing the MSE cost function with the Adam optimizer. Inference tests on Rigetti's 32-qubit superconducting quantum processor obtained an AUC of 0.830, surpassing <cit.> while using fewer qubits. Training and testing AUC using 2000 samples demonstrate that data re-uploading generalizes well without overfitting or underfitting.
To compare the performance of variational circuit-based and kernel-based methods on the same dataset, we refer to <cit.> and <cit.>, which use VQC and QSVM, respectively. Both works employ PCA <cit.> for data preprocessing, matching the number of encoded variables with the available qubits of the IBM Quantum device <cit.>, followed by angle encoding.
The ansatz in <cit.> uses parallelized entangling CZ gates with linear qubit connectivity (Figure <ref>). For training on real hardware, the feature map and variational circuit depth were set to 1. SPSA was used for optimization, and the results were benchmarked against classical SVM <cit.> and binary decision tree <cit.>. The simulator performance was comparable to classical methods (AUC around 0.82). To minimize readout errors, only half the qubits were observed after pairing them with CZ gates. The performance on real quantum hardware was similar (AUCs > 0.80) to the simulator. However, the authors note the long training time on quantum hardware (200 hours) for 500 training iterations on 100 events.
For the QSVM-based approach in <cit.>, a parallel entangling circuit was constructed using 15 qubits of the IBM Quantum device, similar to <cit.>, to obtain a short-depth circuit for execution on real quantum hardware. To reduce statistical uncertainties, 8192 measurement shots were performed for each kernel entry. The hardware performance approached that of the noiseless simulator for small training samples of size 100 (average AUC: 0.831).
Testing the model on an IBM Quantum device showed good performance, with an AUC of approximately 0.78. The authors attribute the faster learning rate, despite the computationally expensive optimization algorithm, to the small structure of the VQC.
In <cit.>, the authors note that data obtained from quantum-enhanced experiments can achieve a quantum advantage, in terms of sampling complexity, for the tasks of learning a physical state and a physical process. Quantum-enhanced experiments consist of quantum sensors, quantum memory, and quantum computers.
In quantum-enhanced experiments, quantum information is stored directly in quantum memory, while classical experiments require measurements to store classical data in classical memory.
Quantum-enhanced experiments preserve the quantumness of quantum data until entangled measurements are performed on pairs of copies of the data in quantum memory. It is expected that research utilizing the power of quantum data will become increasingly active in the future.
§.§.§ Regression
Simulation
The use of different variants of QGAN architectures in HEP for simulation can be seen in <cit.> and <cit.>. For example, in <cit.>, the proposed QGAN contains a classical discriminator and two parameterized quantum circuit generators for generating images. The performance was measured via relative entropy and individual relative entropy. The model was trained using a simulator, while the inference results of the pre-trained model on superconducting chips and ion-trap machines showed low standard deviation and error rates, indicating the feasibility of dual-PQC training for superconducting chips. However, the authors note the vulnerability of the training process to mode collapse <cit.>, where the model only reproduces a low variety of samples. They suggest techniques such as increasing the training set size and adding an additional term to the loss function as possible ways to ameliorate the problem.
In contrast, the QGAN in <cit.>, named style-QGAN, was implemented using a QNN generator and a classical NN discriminator. The data was encoded using angle encoding and the cross-entropy loss function was optimized for both networks using Adadelta <cit.>. While earlier QGAN implementations provided the prior noise distribution to the generator via the first input gates, the work in <cit.> embeds it in every layer of single-qubit and entangling gates in the circuit. The results showed an improvement over the state of the art with shallow circuits on both the 3-qubit superconducting and ion-trapped architectures, implying potential hardware-independent viability. Additionally, both quantum hardware platforms are able to capture the correlations even with a small sample set.
§.§ Healthcare
A few applications of quantum machine learning in healthcare include diagnostics and treatment, cancer detection, prediction of different stages of diabetes, and even the security of sensitive information such as healthcare data.
Given the sensitivity of these applications, incorrect predictions may have huge negative consequences, and hence utmost care is required. In this regard, binary classification of MRI images using a VQC is performed in <cit.> to check the vulnerability of quantum learning systems in healthcare diagnostics.
To prepare highly entangled multi-qubit quantum states, interleaved block-encoding <cit.> was used on 10 qubits. The variational parameters are fixed for the adversarial perturbations, and the original images differ from the adversarial ones only by a small amount of perturbation. The results show that the quantum classifier predicts the legitimate states accurately while mispredicting all (or half) of the adversarial examples, highlighting the vulnerability. Additionally, experiments were performed on quantum data as well, with the quantum classifier reaching perfect accuracy in about 30 epochs on both the train and test datasets.
In <cit.>, QNNs and quantum orthogonal neural networks are used for healthcare image classification on the RetinaMNIST and PneumoniaMNIST datasets <cit.>. The images were pre-processed using PCA, followed by unary amplitude encoding. A series of experiments was performed on real hardware using 5 and 9 qubits. The results show accuracies comparable to the classical counterparts for the majority of the classification experiments performed on real quantum hardware. However, the hardware limitations come into play for more difficult tasks. Additionally, circuit optimization based on the hardware and translation of the RBS gates into native hardware gates were performed to reduce the overall gate count. The results show better performance for the 5-qubit experiments, in contrast to the 9-qubit experiments where the hardware performance diverges from the simulator performance. The authors note the unstable performance of the quantum hardware due to randomness in training and inference, making it incapable of performing healthcare image classification on par with the classical models.
To analyse the advantage of using quantum machine learning in terms of sample complexity, the work in <cit.> conducts experiments using a QSVM on a small dataset of 200–300 training samples, using kernel techniques to predict six-month persistence of rheumatoid arthritis. The experiments were conducted on different configurations of features and data sizes to identify cases where quantum kernels could provide an advantage. A new metric, Empirical Quantum Advantage (EQA), is proposed to quantitatively estimate the model performance as a function of the number of features and the sample size. The estimation of the custom kernel turns out to be the most computationally expensive task. The authors claim to be the first to use the geometric difference to analyse the relative separation between classical and quantum feature spaces. They note that the kernels are noisy and that the quantum advantage, expressed in terms of the generalization error, vanishes with large datasets, fewer measurements, and increased system noise.
§.§ Finance
Some applications of ML operations applicable to finance include regression for asset pricing <cit.>,
classification for portfolio optimization <cit.>, clustering for portfolio risk analysis and stock selection <cit.>, generative modeling for market regime
identification, feature extraction for fraud detection<cit.>, reinforcement
learning for algorithmic trading <cit.>, and Natural Language Processing
(NLP) for risk assessment <cit.>, financial forecasting <cit.> and accounting and
auditing <cit.>. In a similar vein, QML has been used for different applications, such as feature selection for fraud detection in <cit.>, where a PQC was trained on a subset of good features selected based on their performance using a predefined metric. The use of QN-SPSA showed good convergence for training on 20 qubits, with potential for deeper circuits. The results on hardware were comparable to state-of-the-art classical methods in certain aspects, while in others they showed the potential to find better feature subsets. The authors of <cit.> note that the model run on an IBM Quantum device was able to outperform traditional methods without using error mitigation.
Another application of QML was explored in <cit.> to reduce the number of noisy factors for pricing interest-rate financial derivatives using qPCA. The experiments were performed on a 5-qubit IBM Quantum device for 2 × 2 and 3 × 3 cross-correlation matrices based on historical data for two and three time-maturing forward rates. However, this method showed difficulty in scaling to larger datasets.
§ LIMITATIONS
The current quantum hardware is susceptible to noise, and qubit coherence times are very low, on the order of a few hundred microseconds.
Common sources of noise include (1) crosstalk due to simultaneous gate execution in quantum algorithms that allow parallel operations, (2) quantum decoherence, (3) single-qubit rotation and two-qubit gate errors due to imperfect implementation, and (4) shot noise from measurements on quantum states. Additional limitations due to qubit count and gate fidelity prevent the use of quantum error correction. VQCs provide a framework that enables practical applications of noisy quantum hardware. Here, we briefly look at some of the limitations associated with current QML approaches.
Hardware limitations
The common causes of error are State Preparation and Measurement (SPAM) errors and gate errors. The SPAM error rate measures the correctness of the initial calibration settings and of the final readout measurement, and controlling it is indispensable for scaling to hundreds or thousands of qubits. A general strategy to counter the noise in quantum hardware is to increase the number of measurements, which helps reduce the generalization error <cit.>. However, this may also be counterproductive due to readout errors during measurement. For example, the prediction accuracy dropped when increasing the number of shots from 500 to 1000 in <cit.>. The authors note that the experiment was already dominated by systematic noise, which was prone to change every time the system was calibrated, indicating variability in the calibration of the system. Other options include using shot-frugal optimizers <cit.>, which use a stochastic-gradient-descent-based approach while adapting the number of shots (or measurements) needed at each iteration.
A popular noise mitigation technique is zero-noise extrapolation to first order for gate-error mitigation, described in <cit.>, which can be implemented in software without requiring any prior knowledge of the quantum computer's noise parameters. Factors such as qubit lifetime and coherence time are affected by decoherence.
Decoherence, characterized by uncontrolled interactions between a quantum system and its environment, poses a significant challenge in quantum computing: it results in the loss of quantum behavior within the quantum processor, nullifying any potential advantages offered by quantum algorithms. The decoherence time significantly restricts the number of operations that can be performed in a quantum algorithm. Additionally, the development of high-fidelity qubits poses another critical hardware challenge. To tackle these issues, an effective approach is to treat qubits as part of an open environment and leverage classical simulation software packages during the design phase.
Superconducting QPUs have coherence times of around 100 microseconds, while certain trapped-ion devices have extended that to 50 seconds. Gate speeds must be fast enough relative to decoherence so that the gates are applied before the system decoheres; superconducting and photonic platforms generally have the fastest gate speeds. The qubit connectivity, i.e. the general layout of the qubits, dictates the interaction between a given qubit and its neighbours. Due to limited connectivity, SWAP gates can be inserted, but this results in additional overhead and subsequent errors. While some devices offer all-to-all connectivity, long-range gates are generally noisier.
The delay between submitting a circuit to the cloud and receiving a result, without clarity on the calibration timings, can lead to significant statistical errors <cit.>, as it is unclear how these errors influence circuit performance between runs on the various systems. The lack of information on aspects such as qubit assignment, compiler/transpiler methods, component drift rate, and time since last calibration also affects the analysis, as noted in <cit.>.
Long running time
Often, studies require a large number of samples and qubits (20 qubits or more), which necessitates a large amount of computational power for quantum computer simulations. Long running times have been noted in <cit.> on current quantum hardware even when using small data samples, likely due to the initialization, queuing, execution and measurement times on current devices. For example, the study in <cit.> took around 200 hours to run 500 training iterations on 100 events on quantum hardware. This poses a serious limitation for real-world applications such as HEP, which generally require large training datasets. In terms of model performance, using a small sample size often leads to significant variance and poor performance. Furthermore, the limited access to QPU resources makes it infeasible to conduct validation on multiple sets <cit.>.
In <cit.>, the authors propose measuring the speed using circuit layer operations per second (CLOPS) by considering the interaction between classical and quantum computing. The CLOPS benchmark consists of 100 parameterized templated circuits and takes into account various factors such as data transfer, run-time compilation, latencies, gate times, measurements, qubit reset time, delays, parameter updates, and result processing. However, CLOPS focuses mainly on the quantum computing aspect and considers classical computation as an auxiliary to quantum computing. Furthermore, factors such as qubit quality and gate operations are not captured in the metric. Experimental results indicate that the execution time of quantum circuits constitutes a small proportion (less than 1%) of the total execution time <cit.>.
Another proposed solution to improve the training time is Quantum Federated Learning (QFL), which uses distributed training across several quantum computers. Federated learning consists of several clients or local nodes learning on their own data, and a central node that aggregates the models collected from those local nodes.
A framework for federated training using hybrid quantum-classical machine learning models was presented in <cit.>. Their simulation results show faster convergence compared to non-federated training while reaching the same level of trained model accuracy. Other works include <cit.>, which introduces slimmable QFL (SlimQFL), a dynamic QFL framework that has been shown to achieve higher classification accuracy than standard QFL.
In contrast, ensemble learning involves the combination of multiple individual models, referred to as base models or weak learners, to create a more accurate and robust predictive model. These base models can be of the same type or different types, and their predictions are aggregated using methods such as voting, averaging, or weighted averaging. Ensemble learning aims to improve overall performance and accuracy by leveraging the strengths of multiple models.
On the other hand, federated learning is distinct from ensemble learning in that it enables collaborative training across distributed entities without sharing raw data, ensuring privacy and security. While ensemble learning focuses on model aggregation, federated learning emphasizes the distributed nature of training.
Some works that explore ensemble learning in the context of quantum machine learning include <cit.>.
Inefficient data loader
Being able to load classical data as quantum states efficiently is a bottleneck that has often been sidelined in works that claim speedups using QML algorithms. Given a classical data point, the job of a data loader is to read the data once and output a PQC that prepares an appropriate quantum representation. The encoding of the input data generally consumes a significant portion of the coherence time, often leaving little time for the actual algorithm to process the data <cit.>. Several proposals for more efficient data loading have been made in this regard. For example, the work in <cit.> describes ways to load a classical data point with logarithmic-depth quantum circuits while using the same number of qubits as the feature dimension. Another technique is described in <cit.>, where a shallow parallel data loader is implemented for d-dimensional data points using d qubits, d - 1 RBS gates and circuits of depth only log d. However, the viability of this approach is limited by connectivity requirements beyond those supported by the hardware.
The idea of Quantum Random Access Memory (QRAM)<cit.> has been proposed for the long-term storage of the state of quantum registers and can be considered to be a specific hardware device that can access classical data in superposition natively, thus having the ability to create quantum states in logarithmic time. Despite challenges in implementation, alternative proposals with similar functionality have emerged. In <cit.>, a circuit with O(d) qubits and O(d) depth was described to perform the bucket brigade architecture with proven robustness to a certain level of noise.
Barren plateau
Flat optimization landscapes, where the gradient variance diminishes exponentially with the number of qubits, are commonly encountered in variational quantum algorithms. Similar to classical machine learning, quantum loss landscapes are susceptible to numerous local minima. Recent studies <cit.> have demonstrated that overparameterization can help alleviate Barren Plateaus by utilizing more parameters than necessary for a given problem. This allows the Quantum Neural Network (QNN) to explore all relevant directions in the state space. However, factors such as ansatz architecture <cit.>, cost function <cit.>, and parameter initialization contribute to encountering Barren Plateaus <cit.>.
For instance, highly expressive ansatz <cit.> or ansatz with exhaustive entanglement <cit.> can result in exponentially flat landscapes as the number of qubits increases <cit.>. In such cases, informed parameter initialization or problem-dependent ansatz design can be beneficial. Limiting entanglement in the ansatz <cit.> can help overcome exhaustive entanglement-induced Barren Plateaus <cit.>. The choice of observables to define the loss function also influences the presence of Barren Plateaus. Using global observables that require measuring all n qubits simultaneously <cit.> can lead to Barren Plateaus, whereas employing local observables that compare quantum states at the single-qubit level <cit.> can avoid this issue. Recent research <cit.> has shown that local cost functions encounter Barren Plateaus when learning random unitary properties. Furthermore, local noise in the hardware <cit.> can affect the optimization process. Techniques such as error mitigation <cit.> can help reduce the impact of local noise. Different ansatz designs, including Variable ansatz <cit.>, Hamiltonian Variational Ansatz <cit.>, or Hardware-Efficient ansatz, which aim to reduce gate overhead <cit.>, can be utilized and optimized using quantum-specific optimizers <cit.> for training.
Gradient-based methods are generally preferred for large parameter spaces <cit.>. However, gradient-free methods have also been utilized for optimization, as shown in <cit.>, where Nelder-Mead was employed for QVE optimization. However, scaling results in <cit.> indicate that deep versions of randomly initialized hardware-efficient ansatzes suffer from exponentially vanishing gradients. As an alternative, one can opt for barren-plateaus-immune ansatzes <cit.> instead of hardware-efficient ansatzes. Additionally, using shallow circuits with local cost functions <cit.> can help mitigate the presence of Barren Plateaus. In <cit.>, an alternating layered ansatz is proposed, which was later proven to have sufficient expressibility <cit.>. The results in <cit.> demonstrate that the barren plateau phenomenon extends to VQAs with randomly initialized shallow alternating layered ansatzes and establish a relationship between locality and trainability of VQCs. They also show that despite using a shallow circuit, defining a cost function using global observables leads to exponentially vanishing gradients. Among other techniques, the initialization strategy using identity blocks described in <cit.> and layer-wise training can be employed to mitigate Barren Plateaus.
In the study by <cit.>, researchers developed a scalable method to calculate the gradient and its variance by proving that randomly initialized circuits can be exactly mapped to a set of simpler circuits that can be efficiently simulated on a classical computer.
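To make the scaling statement concrete, the following minimal sketch (not taken from any of the cited works) estimates the variance of a single partial derivative for a randomly initialized hardware-efficient ansatz with a global ⟨Z⊗...⊗Z⟩ cost. PennyLane is used for convenience; the RY/CNOT layer structure, the sample count and the choice of which angle to differentiate are purely illustrative assumptions.

    import numpy as np
    import pennylane as qml
    from pennylane import numpy as pnp

    def gradient_variance(n_qubits, n_layers, n_samples=50):
        dev = qml.device("default.qubit", wires=n_qubits)

        @qml.qnode(dev)
        def cost(params):
            for layer in range(n_layers):
                for w in range(n_qubits):
                    qml.RY(params[layer, w], wires=w)
                for w in range(n_qubits - 1):
                    qml.CNOT(wires=[w, w + 1])
            obs = qml.PauliZ(0)
            for w in range(1, n_qubits):          # global observable Z x Z x ... x Z
                obs = obs @ qml.PauliZ(w)
            return qml.expval(obs)

        grads = []
        for _ in range(n_samples):
            params = pnp.array(np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits)),
                               requires_grad=True)
            grads.append(qml.grad(cost)(params)[0, 0])   # derivative w.r.t. one angle
        return np.var(grads)

    for n in (2, 4, 6, 8):
        print(n, gradient_variance(n, n_layers=n))       # variance shrinks with n

In a barren-plateau regime this estimated variance decays exponentially with the number of qubits, which is the practical signature discussed above.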
§ OPEN QUESTIONS
The key objective in the field of Quantum Machine Learning (QML) is to demonstrate quantum advantage, surpassing classical methods in data science applications either in terms of sample complexity or time complexity. This requires a flexible and exploratory approach to identify the areas where QML can have the greatest impact. Although there are claims of polynomial and exponential speed-ups in QML, empirical evidence establishing a clear advantage over classical algorithms is still limited. Furthermore, providing a robust theoretical foundation for quantum advantage poses significant challenges in the field. It remains unclear whether the observed performance improvements are solely attributed to careful hyperparameter selection, benchmarks, and comparisons, or if there is a fundamental structural advantage <cit.>.
It can be observed that QML as a field is moving towards becoming an empirical science. Proving theoretical guarantees is anticipated to be challenging, and the emphasis is increasingly placed on practical demonstrations. This trend is particularly notable as devices surpass roughly 100 qubits and circuit depths of 100 layers.
There is a possibility that an efficient classical algorithm exists for a given learning problem that can achieve comparable results to quantum learning algorithms. This is exemplified in <cit.> where the variational circuits can be replaced by a classical support vector machine if the encoding is classically tractable.
Furthermore, due to finite sampling noise, none of the heuristic quantum learning algorithms has been shown to solve a classically hard learning problem <cit.>. These inherent limitations imply that the current benefits of quantum algorithms can only be realized under certain circumstances. Specifically, only a few variational quantum-based algorithms have shown an apparent advantage in a constrained setting <cit.>. Recently, <cit.> investigated the impact of finite sampling noise and subsequently introduced a technique called variance regularization, based on the expressivity of QNNs, to reduce the variance of the output.
To shed light on the current state of quantum machine learning, several research directions and areas of investigation are identified:
Establishing Standardized Benchmarks In order to effectively evaluate the superiority of QML algorithms compared to classical ones, it is crucial to establish standardized benchmarks. Currently, standard classical benchmarks such as MNIST and Iris are used (as shown in Table <ref>). The lack of standardized quantum datasets highlights the need for easily preparable quantum states to serve as benchmarks for evaluating QML models <cit.>.
Quantum Data Preparation While achieving a quantum advantage with classical data is challenging, QML models utilizing quantum data show more promise. Finding the most suitable encoding technique for a given dataset is another crucial challenge that needs to be addressed. Such embeddings require feature maps that are classically hard to simulate yet practically useful. Identifying datasets that can take advantage of quantum computing for computing kernels is an important avenue of research. Currently, there is a lack of efficient quantum RAM (qRAM) capable of encoding and reliably storing information as a quantum state. This presents a significant hardware challenge in quantum computing.
Error Mitigation and Quantum Error Correction Error mitigation and error correction are crucial for the long-term viability of fault-tolerant quantum computers. However, the implementation of quantum error correction introduces overhead that can reduce the speedup of quantum computations <cit.>. Therefore, finding efficient quantum error correcting codes and developing methods to generate ground states using QML models are important areas of research.
In recent studies, various error-mitigation techniques have been explored, including the use of ensemble learning approaches that combine multiple VQCs to improve the precision of classifiers for both classical and quantum datasets. One such study by <cit.> proposes two ensemble-learning error mitigation methods for VQCs: bootstrap aggregating and adaptive boosting. These methods can be applied to classification, kernel learning, regression, and even extended to QSVM. Importantly, their ensemble-learning VQCs are designed to be compatible with near-term quantum devices, distinguishing them from other ensemble-learning proposals that rely on resource-intensive hardware implementations involving multi-qubit controlled unitaries and complex quantum subroutines such as quantum phase estimation <cit.>, Grover search<cit.>, and quantum mean estimation <cit.>.
Ansatz Selection and Scalability Ansatz selection plays a crucial role in preventing Barren plateaus and achieving efficient scalability. Despite the theoretical work done to demonstrate provable advantage on synthetic datasets <cit.>, more research is needed to understand the impact of entanglement in the model ansatz. Developing efficient methods to adjust parameter values and train quantum circuits to minimize specific loss functions in VQCs is an active area of research. Parameter initialization strategies for large-scale QNNs need to be explored to improve their scalability.
To comprehend the scalability of QML methods for large problems, analyzing trainability and prediction error is necessary. Access to reliable quantum hardware is also crucial. In QML, training the model involves minimizing a loss function to find the optimal set of parameters. Quantum landscape theory explores the properties of this loss function landscape, focusing on challenges like local minima and barren plateaus<cit.>.
Backpropagation and Scalability Backpropagation plays a crucial role in the success of deep neural networks by efficiently computing gradients using the computational graph. This computational advantage allows for the training of deep networks. Recent applications like ChatGPT <cit.> rely on backpropagation during training to compute gradients efficiently for batches of input-output pairs, which enables scalability to large datasets. This technique allows for parallel computation and parameter updates, contributing to the model's ability to handle increased complexity. However, when it comes to parameterized quantum circuits, backpropagation is significantly less efficient than for classical networks. This inefficiency directly impacts the trainability of quantum models. The existing gradient methods used in parameterized quantum models lack the scaling properties of backpropagation, raising questions about their computational complexity.
Addressing this issue, a recent study by <cit.> highlights the need to explore alternative architectures and optimization methods to improve the scalability of quantum models. The authors argue that backpropagation may not be the appropriate optimization method for quantum models and propose an alternative, emphasizing the importance of finding optimization methods that can effectively handle the computational complexity of parameterized quantum circuits to enhance the trainability and scalability of quantum models.
QML Model Security The current state of QML lacks privacy-preserving features, raising concerns about the potential exposure of sensitive information in machine learning datasets <cit.>.
To address this issue, it is crucial to implement privacy-preserving algorithms in QML, such as differential privacy, which minimizes the influence of individual data points on the training process.
However, the application of differential privacy in the context of QML requires further study and exploration to ensure effective privacy protection in machine learning models. Recently, <cit.> demonstrated the first proof‐of‐principle of privacy‐preserving QML.
Towards Explainable QML models
Realizing explainable AI (XAI) is a challenging yet crucial research field that provides insights into the decision-making process of machine learning models, addressing aspects such as fairness and security, especially in domains like medical research <cit.>. An organic extension to QML also necessitates studying the fundamental aspects of QML <cit.>. Given this context, exploring explainability in QML, called explainable QML (XQML), aims to provide humanly understandable interpretations of QML systems, similar to classical ML. Currently, the field of XQML remains relatively unexplored; however, it holds great potential for yielding fundamental insights, particularly given that QML is still in its early stages. Certain aspects of XQML, such as intuitively explaining quantum feature spaces and understanding the behavior of QML models in relation to QPUs through transformations and operations, go beyond the scope of classical XAI. Addressing these aspects may require the development of entirely new approaches to explainability or interpretability. The exploration of XQML, in conjunction with the prospect of improved hardware, may be considered more promising than solely focusing on identifying quantum advantages <cit.>.
Hyperparameter Choices and Transparency The lack of extensive discussions on hyperparameter choices in current quantum machine learning studies poses challenges to transparency, interpretability, and progress in the field. Many studies that demonstrate promising results on benchmark datasets often fail to provide open-source reference implementations of their competitive algorithms. This lack of accessibility hinders the reproducibility of results and raises concerns about potential positive bias, where only a selected set of experiments showing favorable model performance are reported, while others are disregarded.
Furthermore, reproducibility can be challenging when working with open-source projects like Qiskit, which undergo regular updates and improvements to enhance functionality, address bugs, and introduce new features. These updates can lead to deprecated features and code incompatibility, affecting the ability to reproduce results.
To address this issue, researchers should prioritize providing comprehensive documentation that includes detailed information on hyperparameter selection. This documentation should offer insights into the decision-making processes behind choosing specific hyperparameters and discuss the potential implications of different selections. By sharing this information, researchers can enhance transparency and enable others to replicate and build upon their work effectively. Furthermore, the provision of open-source reference implementations is crucial for fostering collaboration, promoting rigorous evaluation, and advancing the field collectively. Accessible and reproducible code allows researchers to validate and compare different approaches, facilitating the identification of strengths and weaknesses in quantum machine learning algorithms.
Reproducibility in experiments on quantum hardware can be challenging due to noise, limited access, calibration issues, and algorithmic variability. To address these challenges, researchers should thoroughly document the experimental setup, share the source code, use standardized benchmarks, and promote collaboration and open science practices.
Federated Learning and Quantum Boosting Thoroughly studying the use of federated learning to distribute computational tasks among limited-capability quantum machines, coupled with investigating the potential of quantum boosting classifiers, can significantly enhance the scalability and utilization of available Near-term noisy devices <cit.>. These research directions hold promise for leveraging the collective power of distributed quantum resources and improving the overall performance of quantum machine learning systems.
The challenge of scalability in quantum algorithms and its impact on real-world applications is a critical issue that requires further investigation. The recent findings in <cit.> demonstrate the successful measurement of accurate expectation values for large circuit volumes using a noisy 127-qubit quantum processor, highlighting the potential of quantum computing in a pre-fault-tolerant era. However, it is important to acknowledge that the error mitigation techniques discussed in <cit.> suffer from exponential computational time as the number of qubits increases. Moreover, comparing these techniques to "brute force" classical methods may not be entirely fair, as it fails to acknowledge the significant advancements made by classical methods in simulating quantum dynamics.
To advance the field, it is crucial to establish a shared community consensus on identifying problems that are both interesting for practical applications and genuinely challenging to simulate classically. This requires acknowledging the progress made by classical methods and not solely equating high entanglement with classical simulation difficulty. It is necessary to continue the development and exploration of both quantum and classical approximation methods, as they provide valuable benchmarks for each other's capabilities. By addressing these challenges and fostering collaboration between quantum and classical approaches, we can drive the field forward and unlock the full potential of quantum computing.
§ ACKNOWLEDGEMENT
We want to emphasize that the list of papers presented in this draft has been compiled to include as many relevant papers as possible. However, we acknowledge that due to the rapid pace of development in this field, our coverage may not be exhaustive. We welcome suggestions and feedback and are willing to incorporate additional papers as appropriate. Special thanks to Tamiya Onodera of IBM Quantum, IBM Research Tokyo, and Shesha Raghunathan of IBM Quantum, IBM Research India for their valuable discussions and comments.
|
http://arxiv.org/abs/2307.01572v1
|
20230704085601
|
Efficient computation of optical excitations in two-dimensional materials with the Xatu code
|
[
"Alejandro José Uría-Álvarez",
"Juan José Esteve-Paredes",
"Manuel Antonio García-Blázquez",
"Juan José Palacios"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
a]Alejandro José Uría-Álvarez [author]
a]Juan José Esteve-Paredes
a]M. A. García-Blázquez
a,b]Juan José Palacios
[author] Corresponding author.
E-mail address: [email protected]
[a]Departamento de Física de la Materia Condensada, Universidad Autónoma de Madrid, 28049 Madrid, Spain
[b]Instituto Nicolás Cabrera, Condensed Matter Physics Centre (IFIMAC), 28049 Madrid, Spain
Here we describe an efficient numerical implementation of the Bethe-Salpeter equation
to obtain the excitonic spectrum of semiconductors. This is done on the electronic structure calculated either at the simplest tight-binding level or through
density functional theory calculations based on local orbitals. We use a simplified model
for the electron-electron interactions which considers atomic orbitals as point-like orbitals and a phenomenological screening. The optical conductivity can then be optionally computed within the Kubo formalism. Our results for paradigmatic two-dimensional materials such as hBN and MoS_2, when compared with those of more sophisticated first-principles methods, are excellent and envision a practical use of our implementation
beyond the computational limitations of such methods.
Exciton, Bethe-Salpeter Equation, Optics, Many-Body Physics, Localized Orbitals, Tight-Binding
PROGRAM SUMMARY
Program Title: Xatu
CPC Library link to program files: (to be added by Technical Editor)
Developer's repository link:
https://github.com/alejandrojuria/xatu
Code Ocean capsule: (to be added by Technical Editor)
Licensing provisions: GPLv3
Programming language: C++, Fortran, Python
Nature of problem: The exciton spectrum is obtained as the solution of the Bethe-Salpeter equation for insulators and semiconductors. Constructing the equation involves determining the screening of the electrostatic interaction and then determining the matrix elements of the interaction kernel, which are computationally intensive tasks, especially if one takes a purely ab-initio approach.
Solution method: The Bethe-Salpeter equation can be efficiently set up and solved assuming that the basis of the reference electronic structure calculation, obtained either from tight-binding models or density functional theory with actual localized orbitals, corresponds to point-like localized orbitals. This, in addition to using an effective screening instead of computing the dielectric constant, allows the interaction kernel to be obtained at very low computational cost and, from it, the exciton spectrum as well as the light absorption of materials.
Additional comments including restrictions and unusual features:
The code requires using at least C++11, given that it uses version-specific features. All linear algebra routines have been delegated to the Armadillo library.
§ INTRODUCTION
Bound electron-hole pairs, namely excitons, are known to be largely responsible for the most prominent features of the optical response of semiconductors near the band edge<cit.>, particularly for low dimensional materials<cit.>. This includes, of course, absorption and photoluminescence, but also the photovoltaic response, where substantial efforts, both experimentally and theoretically, are being made for energy-harvesting real-life applications <cit.>. On the theory side, multiple ways to describe excitons have been developed varying in accuracy and sophistication <cit.>, from an effective two-body description <cit.> and configuration interaction <cit.> to many-body perturbation theory (MBPT) <cit.> or time-dependent techniques <cit.>. MBPT itself can be purely electronic or include electron-phonon interactions <cit.>. The current standard for exciton calculations is GW-BSE: the GW approximation <cit.> is used to correct the density functional theory (DFT) electronic band structure, specifically the gap in insulators and semiconductors <cit.>. The resulting band structure is then used to compute the exciton spectrum with the Bethe-Salpeter equation (BSE). The accuracy of the MBPT approach has prompted the development of several software applications for the calculation of first-principles many-body excitations <cit.>.
Being first-principles calculations, one can seek quantitative agreement with experiments, at the cost of computational time. Alternatively, one can seek less costly, qualitative comparison through an effective description of the interactions. Our code Xatu is intended for such purpose. While the base electronic structure can be computed at any degree of fidelity, from the simplest tight-binding (TB) model to the more sophisticated GW approximation, electron-hole interactions are taken into consideration through a simplified model where orbitals are considered to be point-like along with phenomenological models for screening. In principle, any band structure can be used as the starting point as long as it comes from a localized orbitals basis code; plane waves-based calculations are out of scope since they go against the nature of the approximation used for the interaction. Ultimately, these two approximations result in a considerable reduction of the computational cost.
Beyond the intrinsic speed-up coming from the calculation scheme itself, Xatu has been written mainly in C++, and is designed to be as efficient and general as possible while keeping its usability relatively simple. It targets a wide range of systems in the landscape of computational tools for optical excitations, from those that are simply out of the range of first-principles methods because of the complexity of the unit cell, to those that require quick iteration, while obtaining qualitative and sometimes even quantitative agreement with experiments.
§ EXCITON THEORY
§.§ The Bethe-Salpeter equation
Here we review the basic aspects of our theoretical approach, highlighting the simplifications and analogies with respect to the standard GW-BSE method (see e.g. <cit.>).
From a quantum chemistry perspective, for the description of excitons we consider the exact, non-relativistic electronic Hamiltonian of the solid of interest:
H = H_0 + V =∑_i,jt_ijc^†_ic_j + 1/2∑_i,j,k,lV_ijklc^†_ic^†_jc_lc_k,
where the indices include orbital and position degrees of freedom; we restrict to basis of localized orbitals. H_0 describes the kinetic and ion-electron interaction terms and V is the electrostatic interaction between electrons. Diagonalization of H_0 yields a Bloch eigenbasis |nk⟩ with energies ε_nk, which here will correspond to insulating or semi-conducting materials. The interaction term in <ref> contains
V_ijkl = ⟨i,j|V|k,l⟩ = ∫ dr dr' φ^*_i(r)φ^*_j(r')V(r,r')φ_k(r)φ_l(r')
where V(r,r') is the two-body interaction. This can be the bare Coulomb interaction or some alternative interaction to take into account dimensionality or screening.
Since the non-interacting Hamiltonian H_0 describes insulating materials, it is usually a good approximation to take the ground state for the interacting Hamiltonian H as the Fermi sea:
|GS⟩ = ∏_n,k^ε_nk≤ε_Fc^†_nk|0⟩
where |0⟩ denotes the state with zero electrons, and ε_F is the Fermi energy. Then, an electron-hole pair of center-of-mass momentum Q between the conduction band c and the valence band v, and located at momentum k is defined as:
|v,c,k,Q⟩ = c^†_ck + Qc_vk|GS⟩
meaning that one electron of momentum k from the valence bands is promoted to the conduction bands with momentum k+Q. Note that even though we denote these states as electron-hole pairs, we are not actually using hole quasiparticle operators, but simply refer to the hole as the absence of an electron in the Fermi sea. We will stick to the electron picture throughout this work, unless specified otherwise. These electron-hole pairs will serve as the basis for the exciton states, |X⟩_Q:
|X⟩_Q = ∑_v,c,kA_vc^Q(k)|v,c,k,Q⟩
= ∑_v,c,kA_vc^Q(k)c^†_ck + Qc_vk|GS⟩
Therefore, the exciton is expressed as a linear combination of electron-hole pairs over different bands and momenta. Note that Q serves as a good quantum number for the exciton states, since the interaction is momentum-conserving. The interaction only mixes electron-hole pairs with the same net momentum, which is Q. This can be seen by computing explicitly a general interaction matrix element, V_ijkl.
Next, we determine the A_vc^Q(k) coefficients that minimize the expectation value ⟨X|H|X|_⟩Q:
δ E[X]/δ X = δ/δ X[⟨X|H|X|_⟩Q/⟨X|X|_⟩Q] = 0
Performing this derivative explicitly is equivalent to the problem of diagonalizing the Hamiltonian represented in the basis of electron-hole pairs:
∑_v',c',k'H_vc,v'c'(k, k', Q)A_v'c'^Q(k') = E_XA_vc^Q(k)
where H_vc,v'c'(k, k', Q) = ⟨v,c,k,Q|H|v',c',k',Q⟩.
The expansion in electron-hole pairs of the exciton is actually an ansatz: we obtain exact eigenstates of the Hamiltonian restricted to a partition of the Hilbert space, PHP, where P is a projector over the single electron-hole pairs.
PHP = ∑_{v,c,k,v',c',k'} H_vc,v'c'(k, k', Q) |v,c,k,Q⟩⟨v',c',k',Q|
In fact, if we only consider charge-conserving excitations, we could represent the Hamiltonian in the following way:
H = ⊕_{n=0}^{N_e} P_n H P_n + C,
where
P_n = ∑_{{c_i},{v_i},{c'_i},{v'_i}} |{c_i}, {v_i}⟩⟨{c'_i}, {v'_i}|
and
|{c_i}, {v_i}⟩ = ∏_{i=1}^{n} c^†_{c_i} ∏_{i=1}^{n} c_{v_i} |GS⟩.
Here N_e is the total number of electrons, C is the coupling between the different excitation sectors, and P_n is the projector onto the n-th electron-hole pair sector.
If instead of using the Bloch states from H_0 we formulate the problem in terms of the Hartree-Fock (HF) solution to <ref>, then the coupling between the Fermi sea and the single-pair sector, P_0 H P_1, is exactly zero according to Brillouin's theorem <cit.>. As we will mention later, we will assume that this always holds even when the ground state has not been calculated in the HF approximation.
The same, however, is not true for P_0 H P_2 or P_1 H P_2, i.e., the interaction couples the ground state and the one electron-hole pair sector with the two electron-hole pair sector. Thus, the proposed ground state and the exciton states are never exact but approximate eigenstates. Given that the material is insulating, we expect the coupling to be weak due to the energy differences, which justifies the ansatz. Keeping with the exact diagonalization approach, one could try to diagonalize the Hamiltonian including more excitation sectors. Although possible in principle, it quickly becomes unfeasible since the Hilbert space in many-body systems grows exponentially and, in this case, the eigenstates would involve a mixture of excitations, losing the interpretation as a bound electron-hole pair.
Going back to <ref>, we next compute the Hamiltonian matrix elements in the H_0 basis, which are given in terms of the single-particle energies and the interaction matrix elements:
H_vc,v'c'(k,k', Q) =
δ_kk'δ_vv'[ε_ck+Qδ_cc'+Σ_cc'(k+Q,k'+Q)]
-δ_kk'δ_cc'[ε_vkδ_vv'+Σ_v'v(k',k)] -(D - X)_vcv'c'(k,k',Q)
where
D_vc,v'c'(k,k', Q) = V_ck+Q,v'k',c'k'+Q,vk
X_vc,v'c'(k,k', Q) = V_ck+Q,v'k',vk,c'k'+Q
and
Σ_nm(k,k') = ∑_{j,k''}^{occ} ( V_{nk, jk'', mk', jk''} - V_{nk, jk'', jk'', mk'} ).
D, X correspond to the direct and exchange interactions between the electron-hole pair, whereas Σ is the self-energy coming from the interaction of the electron/hole with the Fermi sea. At this point we could obtain the exciton spectrum diagonalizing (<ref>). Instead, it is more convenient to solve first for the ground-state of (<ref>) at the mean-field level, i.e. in the HF approximation <cit.>. If we now write (<ref>) in the HF band basis, we obtain:
(ε_ck+Q - ε_vk)A_vc^Q(k) + ∑_v',c',k'K_vc,v'c'(k,k', Q)A_v'c'^Q(k')
= E_XA_vc^Q(k)
where ε_nk are now the HF quasiparticle energies, and K = -(D-X) is the interaction kernel. Thus, the self-energies are now incorporated into the quasiparticle energies instead. Note that the Fermi sea energy has been set to zero, so that exciton energies can be compared directly with the gap of the system. This is the standard form of the Bethe-Salpeter equation for excitons using the Tamm-Dancoff approximation (TDA) <cit.>, and it defines the starting point for any exciton calculation. The main difference with MBPT comes from the interaction kernel, which there involves a dynamically screened interaction, usually in the random-phase approximation <cit.>. The determination of the dielectric constant is a computationally intensive task <cit.>, which we avoid by setting instead an effective static screening.
So far we have seen that it is more convenient to pose the exciton problem in terms of the HF basis, as it simplifies the problem and allows to decouple excitation sectors.
In practice, we do not address the problem of determining the mean-field solution to (<ref>). Instead, we start directly from equation (<ref>) assuming that the initial band structure, which is already known, verifies it. Namely, for tight-binding band structures we drop the self-energy terms assuming that we are using a HF solution. Alternatively, if the band structure comes from DFT or MBPT (e.g. GW approximation), then we also remove the self-energy terms since the quasiparticle energies already include self-energy corrections (although they do not cancel exactly with those from (<ref>)). Thus, from now on we regard the starting band structure as the non-interacting Hamiltonian H_0.
§.§ Interaction matrix elements
With Eq. (<ref>) established, a practical expression for the interaction matrix elements (<ref>) remains to be obtained. The single-particle states, using a basis of localized orbitals, can be written as:
φ_nk(r) = 1/√(N)∑_Re^ik·R∑_i,αC^nk_iαϕ_α(r-R - t_i)
where {ϕ_α} denote the orbitals located at the atom i of the motif and N is the number of unit cells of the system. As mentioned before, this wavefunction may correspond to that of a tight-binding model (meaning that the spatial nature of the orbitals is ignored and they are typically considered orthonormal), or a DFT calculation with a local orbital basis set, which are in general non-orthogonal. While the origin of the single-particle states can be different, for the actual calculation of the interactions we will treat them on the same footing, approximating them as point-like orthonormal orbitals.
Depending on how we treat the interaction, different working expressions for the matrix elements can be obtained. For instance, we address first the direct term, which is given by:
D_vc,v'c'(k,k', Q)
= ∫ drdr'φ^*_ck+Q(r)φ^*_v'k'(r')V(r,r')φ_c'k'+Q(r)φ_vk(r')
We substitute the single-particle Bloch states (<ref>) in Eq. (<ref>). Expanding each term, we end up having to evaluate the same four-body integral, but now between the orbitals that compose each state:
∫ dr dr' ϕ^*_α(r)ϕ^*_β(r')V(r,r')ϕ_γ(r)ϕ_δ(r')
At this point, there are two ways to compute the present four-body integral: we can evaluate directly the interaction in real space, or, instead, use its Fourier series to work in reciprocal space. In both cases we consider point-like orbitals centered at R + t_i:
ϕ_α(r - R - t_i)ϕ_β(r - R' - t_j) ≈δ_αβδ(r - R - t_i)δ_ijδ_R,R'.
Integrating in real space, after simplifying the resulting deltas, we obtain the following expression for the direct term D:
D_vc,v'c'(k, k', Q)
= 1/N∑_ij∑_αβ(C_iα^ck+Q)^*(C_jβ^v'k')^*C_iα^c'k'+QC_jβ^vkV_ij(k'-k)
where
V_ij(k'-k) = ∑_Re^i(k'-k)RV(R - (t_j - t_i)).
Here V_ij(k'-k) can be regarded as a lattice Fourier transform centered at t_j - t_i. Since it is defined as a sum over lattice vectors and not an integral, one cannot use the shift property of the Fourier transform. Attempting to do so would result in breaking the spatial symmetries of the Hamiltonian. Then, the direct term can be interpreted as the weighted average of the Fourier transform of the interaction between the electron and the hole, over all positions and orbitals. The exchange term is computed analogously:
X_vc,v'c'(k, k', Q)
= 1/N∑_ij∑_αβ(C_iα^ck+Q)^*(C_jβ^v'k')^*C_iα^vkC_jβ^c'k'+QV_ij(Q)
If there is only one atom in the motif, expressions (<ref>), (<ref>) simplify even further, since the interaction decouples from the tight-binding coefficients, yielding:
D_vc,v'c'(k, k', Q) =
1/NV(k'-k)(U^†_k+QU_k'+Q)_cc'(U^†_kU_k')_v'v
X_vc,v'c'(k, k', Q) =
1/NV(Q)(U^†_k+QU_k)_cv(U^†_k'U_k'+Q)_v'c'
where U_k is the unitary matrix that diagonalizes the Bloch Hamiltonian H(k) <cit.>. The evaluation of these expressions is much faster than that of the corresponding general ones (<ref>) and (<ref>). Additionally, for Q=0, the exchange term (<ref>) becomes exactly zero, which is not true in general, although it is usually neglected. As mentioned before, for DFT band structures we evaluate the interaction using the same point-like approximation, performing first a Löwdin orthogonalization of the basis. This allows one to improve on the TB descriptions, incorporating fine details of the quasiparticle dispersion along the BZ. In such treatments, our interaction matrix elements are an approximation to the true ones involving ab-initio orbitals. Given that in DFT the orbitals are known (e.g. gaussian-type basis in the CRYSTAL <cit.> code), one could, in principle, evaluate the integrals (<ref>) exactly for a closer ab-initio calculation of excitons.
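As an illustration of how compact the simplified one-atom-motif expressions above become, the following numpy sketch evaluates a single direct matrix element for Q = 0. The eigenvector matrices U_k (with columns ordered by band) and the value of the interaction at k'-k are assumed to be supplied by the user; the function name is illustrative and is not part of the Xatu interface.

    import numpy as np

    def direct_element_single_atom(U_k, U_kp, V_kpk, c, cp, v, vp, N):
        # D_{vc,v'c'}(k, k', Q=0) = (1/N) V(k'-k) (U_k^+ U_k')_{cc'} (U_k^+ U_k')_{v'v}
        overlap = U_k.conj().T @ U_kp      # (U_k^dagger U_k')
        return (V_kpk / N) * overlap[c, cp] * overlap[vp, v]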
The previous calculation corresponds to the evaluation of the interaction matrix elements
in real space. An alternative approach consists of writing the interaction as its Fourier series before evaluating (<ref>) <cit.>:
V(r-r')=1/N∑_qV(q)e^iq·(r-r')
where
V(q) = 1/V_cell ∫_Ω V(r) e^{-iq·r} dr
and Ω = N V_cell denotes the volume of the crystal. Usually, one takes Ω→∞, meaning that we can evaluate the integral analytically, that is, V(q) becomes the Fourier transform of the potential. Note, however, that q is not restricted to the first Brillouin Zone (BZ), and V(q) is not periodic in the BZ. Therefore, in principle one has to sum over q ∈ BZ, but also over reciprocal vectors G, i.e.:
V(r-r')=1/N∑_q∈BZ∑_GV(q + G)e^i(q + G)·(r-r')
The evaluation of the integral is done in the same way, although in this case there is a plane wave instead of the electrostatic interaction. This approach is particularly useful when using a plane wave basis, since it allows the four-body integrals to be evaluated exactly without the need for approximation (<ref>). The interaction matrix elements D, X are now given by:
D_vc,v'c'(k, k', Q) = 1/N∑_GV(k-k' + G)I^G_ck+Q,c'k'+Q(I^G_vk,v'k')^*
X_vc,v'c'(k, k', Q) = 1/N∑_GV(Q + G)I^G_ck + Q,vk(I^G_c'k' + Q,v'k')^*
where
I^G_nk,mk' = ∑_iα(C_iα^nk)^*C_iα^mk'e^i(k-k'+G)·t_i
Usually V(q) decays fast enough so that it suffices to sum only over G=0 for the excitons to converge in energy. Xatu allows the interactions to be evaluated in real space (expressions (<ref>), (<ref>)) or in reciprocal space (expressions (<ref>)). They are benchmarked in section <ref>; by default we use the interactions in real space, since the calculation converges faster with the number of points in the BZ mesh, it can be used rigorously for finite systems such as ribbons, and the current implementation performs on par with the reciprocal one.
Once the interaction kernel is determined, Eq. (<ref>) can be solved to obtain the exciton energies and wavefunctions, i.e., the coefficients A_vc^Q(k). These can be used to compute different quantities. For instance, given that the exciton is written as a linear combination of electron-hole pairs with well-defined k quantum number, we can define the probability density of finding the exciton in a specific pair in k-space as:
|ψ_X(k)|^2=∑_v,c|A_vc^Q(k)|^2
which is the straightforward definition since all electron-hole pairs are orthonormal to each other.
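As a small illustration (again not part of the code's interface), this reciprocal-space density can be obtained directly from a BSE eigenvector once it is reshaped by k point and band pair; the (k, v, c) array layout is an assumption made for the sketch.

    import numpy as np

    def k_space_density(A):
        # A[k, v, c] = A_vc^Q(k): BSE eigenvector reshaped to (n_k, n_v, n_c)
        return np.sum(np.abs(A) ** 2, axis=(1, 2))   # |psi_X(k)|^2 on the BZ mesh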
§.§ Spinful excitons
If the single-particle basis includes spin, one can also compute the expected value of the total spin of the exciton, S_z^T = S_z^e + S_z^h. Given that we are using a fully electronic description of the exciton, we need to specify the electrons whose spin we want to measure. For this purpose, we write the total spin operator in second quantization as:
S^T_z = ∑_c',c,kσ_c'k+Q, ck+Qc^†_c'k+Qc_ck+Q - ∑_v',v,kσ_v'k,vkc_v'kc^†_vk
where σ_nm = ⟨n|S_z|m⟩. The labels c,c',v,v' refer exclusively to the conduction and valence bands used in the definition of the excitons. Note that the second term, which corresponds to the spin of the hole, has a minus sign. This is because holes, when described as quasiparticles, have opposite momentum and spin to the corresponding electronic state, i.e. h^†_{n,-k,-σ} = (-1)^σ c_{nkσ}, for states below the Fermi energy, ε_nk < ε_F <cit.>. These h operators describe creation/annihilation of holes in terms of their electronic counterpart. Although we keep k the same (since we are still in the electronic picture), we already incorporate this minus sign to give a correct description of the total spin of the exciton. As we will see later, this sign change is also necessary to retrieve the known singlet and triplet states when summing angular momenta. The two pictures are equivalent, and all the previous calculations can be reproduced in the electron-hole picture.
The expected value of the total spin is then given by:
⟨X|S^T_z|X⟩ = ∑_{v,c,k} [ ∑_{c'} A^Q_vc(k) (A^Q_vc'(k))^* σ_{ck+Q, c'k+Q} - ∑_{v'} A^Q_vc(k) (A^Q_v'c(k))^* σ_{vk, v'k} ]
If [H_0, S_z] = 0, then the spin projection S_z is also a good quantum number for the Bloch states. Therefore, they can be written now as |nkσ⟩, or in real space as φ_nk(r)χ_σ, where χ_σ denotes the spin part of the state. This means that the spin operator S_z is diagonal, σ_nm=σ_nδ_nm, which allows us to simplify expression (<ref>):
⟨S_z^T⟩ = ∑_{v,c,k} |A^Q_vc(k)|^2 (σ_c - σ_v)
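For a spin-diagonal basis this expectation value is therefore a simple weighted sum over the exciton probabilities; a minimal numpy sketch (the (k, v, c) array layout is an assumption, not the code's internal ordering) reads:

    import numpy as np

    def total_spin_z(A, sigma_c, sigma_v):
        # A[k, v, c] = A_vc^Q(k); sigma_c[c], sigma_v[v] are the S_z values of the
        # conduction and valence Bloch states (assumed diagonal, i.e. [H_0, S_z] = 0)
        prob = np.abs(A) ** 2
        return np.einsum('kvc,c->', prob, sigma_c) - np.einsum('kvc,v->', prob, sigma_v)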
Another consequence of having the spin well-defined is that it also propagates to the electron-hole pairs that serve as a basis for the exciton states, i.e. |v, c, k,Q⟩=c^†_ck+Qc_vk|GS⟩, where v=(v,σ_v), c=(c,σ_c). In principle, we allow the spin of the electron and the hole to be different, σ_c ≠σ_v. Taking into account the spin in the computation of the interaction matrix elements, we arrive at constraints on which electron-hole pairs interact. Then the direct and exchange terms read:
D_vc,v'c'(k,k', Q) = δ_σ_cσ_c'δ_σ_vσ_v'D_vc,v'c'(k,k', Q)
X_vc,v'c'(k,k', Q) = δ_σ_cσ_vδ_σ_v'σ_c'X_vc,v'c'(k,k', Q)
which can be directly obtained by substituting the single-particle states, since the spin part is not mixed with the orbital part of the states (i.e. |nkσ⟩=|nk⟩⊗|σ⟩). At this point, we can arrange the electron-hole pairs into four groups depending on their spin:
{|++⟩, |–⟩, |+-⟩, |-+⟩}_e={|σ_cσ_v⟩}_e
The subscript e denotes that this corresponds to the electronic picture. Then the Hamiltonian represented in terms of the spin groups, taking into account (<ref>), becomes:
H =
[ H_0 - D + X X 0 0; X H_0 - D + X 0 0; 0 0 H_0 - D 0; 0 0 0 H_0 - D ]
where H_0, D, X are blocks which include matrix elements corresponding to different electron-hole pairs but same spin group. If we now take into account that the hole in its quasiparticle representation must have spin opposite of that of the electron vacancy, then our states are {|+-⟩, |-+⟩, |++⟩, |–⟩}_eh, where eh denotes electron-hole picture. Therefore, the exciton spectrum would be composed of groups of three triplet states and one singlet state, as when adding angular momenta. If instead we turn off the exchange interaction, then every state should have at least four-fold degeneracy. Any additional degeneracy would come from spatial symmetries of the Hamiltonian, in particular from the irreducible representations of the little group at Q (see <ref>).
§.§ Real-space wavefunction
Plotting the probability density (<ref>) is useful to extract some information about the exciton such as the wavefunction type (s, p, etc, following the hydrogenic model). The same can be argued for its real-space wavefunction, ψ_X(r_e, r_h). However, obtaining it is not as straightforward as the k wavefunction. To do so, first we define the field operators as:
ψ^†(r) = ∑_nkφ^*_nk(r)c^†_nk, ψ(r) = ∑_nkφ_nk(r)c_nk
where φ_nk(r) are the single-particle states in coordinate representation. Then, we can define the amplitude or real space wavefunction of the exciton in the following way:
ψ_X(r_e,r_h) = ⟨GS|ψ(r_e)ψ^†(r_h)|X|⟩
This definition is motivated by the fact that φ_nk(r) = ⟨GS|ψ(r)|nk⟩. Before computing the amplitude, it is convenient to switch to the electron-hole picture. The field operator written in terms of electron and hole operators is:
ψ(r)=∑_ckφ_ck(r)c_ck + ∑_vkφ_vk(r)h^†_v-k≡ψ_e(r) + ψ^†_h(r)
where ψ_e(r), ψ_h(r) are the annihilation field operators for electrons and holes, respectively. Since we are switching from the electronic to the electron-hole picture, the same has to be done for the exciton state, |X⟩ = ∑_{v,c,k} A_vc^Q(k) c^†_{ck+Q} h^†_{v,-k}|0⟩. Evaluating the exciton amplitude in terms of the electron and hole field operators, we obtain:
ψ_X(r_e,r_h) = ⟨GS|ψ_e(r_e)ψ_h(r_h)|X⟩ = ∑_{v,c,k} A_vc^Q(k) φ_{ck+Q}(r_e) φ^*_{vk}(r_h)
To obtain the first equality note that there are four cross terms containing electron and hole field operators. Two of them are zero, since they move around the electron or the hole [e.g. ψ_e(r_e)ψ^†_e(r_h)], meaning that the final state is still orthogonal to the ground state. There is a third term consisting of the creation of an electron and a hole, ψ^†_e(r_e)ψ^†_h(r_h). This term is also zero because we assume that our ground state is the Fermi sea, meaning that it does not contain excited electrons. If this were the case, then the exciton could also consist of deexcitations or antiresonant transitions. This is known as the Tamm-Dancoff approximation, and it is also usually present in GW-BSE. To obtain the final expression for the wavefunction, it remains to substitute the expression of the field operators. One recovers the electron-hole pair states of the exciton basis (up to a sign from operator permutation), and from orthonormality it results in expression (<ref>).
At this point, to be able to plot the exciton real-space wavefunction, we still need to evaluate (<ref>) in terms of the single-particle states φ_nk(r). Since the exciton wavefunction depends on both the position of the electron and the hole, first we need to fix the position of either of them to be able to plot the wavefunction. Since we assume the orbitals are point-like, both the electron and the hole can only be localized at the atomic positions, so we will evaluate the wavefunction and the probability density at these points only.
We set the electron to be located at cell R_e and atom t_m of the motif, r_e = R_e + t_m, while the hole is at position r_h = R_h + t_n. Using the point-like approximation (<ref>), the probability density of finding the electron at a given position with the hole fixed reads:
|ψ_X(R_e + t_m, R_h + t_n)|^2 = ∑_αβ|ψ^αβ_X(R_e + t_m, R_h + t_n)|^2
where
|ψ_X^αβ(R_e + t_m, R_h + t_n)|^2 =
1/N^2∑_v,c,k∑_v',c',k'A_vc^Q(k)(A_v'c'^Q(k'))^*e^i(k - k')·(R_e - R_h)
· C^c,k+Q_mα(C^c',k'+Q_mα)^*(C^v,k_nβ)^*C^v',k'_nβ
For both the reciprocal and the real-space probability densities, one could expect them to have the symmetries of the crystal, since [H,C] = 0, where C is any symmetry operator from the space group. However, if the states are degenerate, then they are not necessarily eigenstates of the symmetry operators and consequently the associated densities will not be invariant under symmetry transformations. Still, in this case it is possible to define a probability density that preserves the symmetry of the crystal for each degenerate subspace:
|ψ_X(r, r_h)|^2=∑_n|ψ_X^(n)(r, r_h)|^2
where the index n runs over exciton states degenerate in energy. An analogous expression holds for the k wavefunction. It is always good practice to check that the resulting probability densities preserve the symmetry of the crystal to ensure that the calculation was done correctly. The proof of the invariance under symmetry operations of (<ref>) is given in <ref>.
§.§ Optical conductivity and light absorption
As an example of a post-processing calculation, we investigate here the interaction of the material with an incident linearly-polarized electric field. We elaborate below how to compute the optical response by means of the exciton eigenfunctions.
For a sufficiently low-intensity, linearly-polarized, homogeneous electric pulse, ε(t), the induced current per unit frequency in the bulk of the material can be written as J_a = ∑_b σ_ab(ω) ε̃_b(ω), where the linear optical conductivity reads <cit.>
σ_ab(ω)= π e^2 ħ/V∑_k^N_X1/E_k[ V_k^a(V_k^b)^∗] δ(ħω-E_k)
Here, N_X is the number of exciton states, E_k is the energy of the k-th excited state, V_k^a = ⟨GS|v̂^a|X_k⟩ is the velocity matrix element (VME) of the transition to the ground state and V is the volume of the solid under periodic boundary conditions. In the equation above, only excitons with Q = 0 are considered, as finite momentum excitons cannot be achieved by light incidence. Thus, we drop Q from the notation, and instead specify the excitation index k in the exciton coefficients, A^k_vc(k).
V_k^a=∑_cv kA_vc^k(k)v^a_vc(k),
where v^a_vc(k) ≡ ⟨vk|v̂^a|ck⟩ = iħ^{-1}⟨vk|[H_0, r̂^a]|ck⟩ (H_0 is the non-interacting or mean-field Hamiltonian). With light polarized along the a direction, an exciton is dark (or bright) if V_k^a = 0 (V_k^a ≠ 0). In general, the brightness of an exciton and its contribution to Eq. (<ref>) is dictated by the selection rules of A_vc^k(k) and v^a_vc(k) over the Brillouin zone. The calculation of VMEs has to be worked out taking into account the underlying local basis of our approach. It is found <cit.> (we simplify the notation here by doing iα→α for the rest of the section)
⟨nk|v̂|n'k⟩ = ∑_{αα'} (C^nk_α)^* C^n'k_α' ∇_k H_αα'(k) + i ∑_{αα'} (C^nk_α)^* C^n'k_α' [ ϵ_n(k) ξ_αα'(k) - ϵ_n'(k) ξ_α'α^∗(k) ]
with ξ_αα'(k) = i⟨u_αk|∇_k u_α'k⟩ the Berry connection between Bloch basis states. After some algebra, it reads
ξ_αα'(k)=∑_Re^ik·R⟨α0|r̂|α' R|+⟩i∇_kS_αα'(k).
In the case of an underlying non-orthonormal local orbital basis, the overlap matrix S_αα'(k) ≡ ⟨αk|α'k⟩ is accounted for and makes the Berry connection above non-hermitian. Instead, one has ξ_αα'(k) = ξ_α'α^∗(k) + i∇_k S_αα'(k). Eq. (<ref>) allows one to evaluate the optical matrix elements by means of the non-interacting Hamiltonian plus position matrix elements between the local orbitals.
In the case of an orthogonal basis set, as in tight-binding models, the overlap matrix is the identity at all points of the Brillouin zone. In this case VMEs read
v^a_vc(k)=∑_αα'(C^nk_α)^*C^n'k_α'[∂ H_αα'(k)/∂ k_a+iH_αα'(k)(t^a_α'-t^a_α) ]
This expression is sometimes known as the “diagonal tight-binding approximation (TBA)” in ab-initio calculations involving the maximally-localized Wannier functions <cit.>, where the inter-orbital position matrix elements are discarded. Eq. (<ref>) can thus be evaluated and is implemented in our code.
Additionally, Eq. (<ref>) can be compared with the frequency-dependent expression for its non-interacting counterpart. In the limit of no correlations, it reduces to
σ_ab(ω)=π e^2 ħ/V∑_cvk1/ε_c k-ε_vk [ v_cv^a(k)v_vc^b(k) ]
·δ(ħω-[ε_ck-ε_v k])
From the frequency-dependent optical conductivity one can obtain related quantities of interest. For instance, the ratio of absorbed incident flux per unit frequency and unit length (considering vacuum surroundings) is <cit.> S(ω) = σ(ω)/cϵ_0, also called absorbance. Note that this quantity is ill-defined for 2D lattice systems. In such case, all the absorbance is assumed to occur at the z = 0 reference plane of the material.
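A possible post-processing sketch of the conductivity formula above is shown below; the Dirac delta is broadened into a normalized Gaussian of width eta, and the exciton energies, velocity matrix elements, unit system and broadening are all assumed to be supplied by the user (this is not the routine implemented in the code).

    import numpy as np

    def sigma_ab(omega, E_X, V_a, V_b, volume, eta=0.05, hbar=1.0, e=1.0):
        # omega: photon energies (same units as E_X); E_X: exciton energies (N_X,)
        # V_a, V_b: exciton velocity matrix elements along a and b (complex, N_X)
        delta = np.exp(-0.5 * ((omega[:, None] - E_X[None, :]) / eta) ** 2)
        delta /= eta * np.sqrt(2.0 * np.pi)               # broadened delta(hw - E_k)
        weight = np.real(V_a * np.conj(V_b)) / E_X
        return np.pi * e**2 * hbar / volume * (delta @ weight)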
§ IMPLEMENTATION
The programming languages of choice for the implementation of the exciton theory were C++ and Fortran, which are the usual options for heavy numerical computations. In the case of C++, to facilitate the manipulation of matrices we use the Armadillo library <cit.>, on top of the usual libraries for linear algebra (BLAS and LAPACK). The core of the code was written in C++, except for the post-diagonalization calculation of the optical conductivity, which is written in Fortran. This routine is wrapped inside the library. The software was designed with a hybrid approach in mind: previous packages such as DFT codes require the preparation of input files, which are then fed to the program and result in some output files which may be post-processed to extract information. We propose to use the same scheme, i.e. to prepare an input file which describes the system where we want to compute the excitons (namely the Hamiltonian H_0), and another one with the description of the excitons (participating bands, k mesh, etc). However, there is an alternative usage, which is employing directly the exciton API defined to build the program. This is a common approach, where one builds a library to expose some functionality to the user (e.g. libraries). Therefore, one can define some system and script the computation of excitons using the API. This is advised whenever we are interested in performing some manipulation of the excitons, and not only obtaining the spectrum or the absorption. There is a third approach, consisting of using the system files to leverage the definition of the system to other programs (e.g. DFT), and then use the API instead of the exciton configuration file. The different ways to use the code will be reviewed in <ref>. The CLI option parsing has been done using the header-only library https://tclap.sourceforge.net/TCLAP, which is distributed with this package.
§.§ Complexity analysis
Next we will discuss the numerical implementation of the exciton computation and related quantities. Solving the Bethe-Salpeter equation (<ref>) amounts to diagonalizing the corresponding matrix PHP.
Diagonalization is done using the standard linear algebra libraries, meaning that the main problem is constructing PHP as fast as possible. Consider a system formed by N unit cells in total (meaning √N along each Bravais vector for a two-dimensional system). To treat the interaction rigorously, one has to compute the excitons on a BZ mesh with the same number of k points as unit cells, due to the periodic boundary conditions. Therefore, one has to compute N^2 matrix elements, and each of them requires computing the lattice Fourier transform, which involves summations over the N unit cells. This has to be done for all possible band pairs B, so a naive implementation of (<ref>) would have 𝒪(N^3B^2) time complexity, on par with matrix diagonalization algorithms. Note that each interaction matrix element also requires knowing the tight-binding coefficients {C^nk_iα}. If the dimension of the Bloch Hamiltonian is M, then diagonalizing the system on the fly for each element of the BSE would result in time 𝒪(N^2B^2(N + M^3)).
The easiest way to reduce the time complexity of the BSE construction is to increase the space complexity, i.e. to precompute and store quantities that appear multiple times, instead of computing them on the fly. This can be done for the Bloch Hamiltonian eigenvectors. Before constructing PHP, we diagonalize H(k) ∀ k ∈ BZ, and store the eigenvectors. At this point, if we were to store all eigenvectors, the spatial complexity would go from 𝒪(1) to 𝒪(NM^2). Since we only need the eigenvectors corresponding to the bands that participate in the exciton formation, it suffices to store only those, meaning that the spatial complexity would be 𝒪(NMB), i.e. we have to store N matrices of size M×B. Accessing directly the eigenvectors results in a time complexity of 𝒪(N^3B^2 + NM^3).
The same could be done for the lattice Fourier transform V_ij(k-k'). Since it depends on the difference between two k points, we could simply store V_ij for each pair of k points. This implies a high spatial complexity 𝒪(N^2), but overall it does not report any speed advantage, since precomputing this would be of order 𝒪(N^3). However, it is possible to reduce the time cost of the algorithm: as long as the k point mesh covers the whole BZ uniformly (as given by Monkhorst-Pack), then we can map the k point difference back to a single k point using the periodicity of V_ij(k-k'):
∀k,k'∈BZ, ∃G∈Reciprocal lattice, k”∈BZ
s.t. k-k' = G + k”
Therefore, it suffices to compute and store V_ij(k) ∀ k ∈ BZ. Then, when initializing the matrix elements of PHP, one has to find the vector k'' such that it verifies (<ref>). The time complexity now is 𝒪(N^2), which is a reduction of an order of magnitude. The space complexity is also reduced, being now 𝒪(N).
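The folding of k - k' back into the mesh is simply modular arithmetic on the integer coordinates of a uniform N1 × N2 Monkhorst-Pack grid; a schematic version (indices and storage layout are illustrative, not the code's internal representation) is:

    def folded_index(n, n_prime, N1, N2):
        # k points labelled by integer coordinates (n1, n2), k = (n1/N1) b1 + (n2/N2) b2;
        # k - k' folds back to ((n1-n1') mod N1, (n2-n2') mod N2) up to a reciprocal
        # lattice vector G, so V_ij(k - k') is read from a table of N1*N2 entries.
        d1 = (n[0] - n_prime[0]) % N1
        d2 = (n[1] - n_prime[1]) % N2
        return d1 * N2 + d2            # flat index of k'' in the precomputed table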
With this, the algorithm for determining PHP has time order 𝒪(N^2B^2 + NM^3), and the memory requirements are 𝒪(N + NMB) = 𝒪(NMB). As we will see, this allows for very fast computation of the BSE matrix, meaning that the main bottleneck lies in the diagonalization, as it often happens. In some cases we might be interested in the whole spectrum, but usually it suffices to determine the lowest energy eigenstates. To address this, the code includes a custom implementation of the Davidson algorithm, which is suited to obtain the ground state of quantum chemistry Hamiltonians <cit.>.
So far the discussion was focused on how to reduce the complexity of the algorithm, but it is equally important to comment on how to perform the actual computation of the matrix elements. The big O notation neglects all constant factors, which is fine for theoretical considerations, but might have a considerable impact on the real behaviour of the code. The general strategy followed was to vectorize all calculations to make use of the highly optimized and parallel existing linear algebra routines. The remaining parts that do not allow vectorization, such as the matrix element initialization in PHP, were all parallelized with OpenMP. Currently, all the parallelism is shared-memory and distributed parallelism might be implemented in the future.
For instance, consider the direct interaction term, which requires computing expression (<ref>). Supposing that the lattice Fourier transform of the interaction is already computed for all motif combinations i, j and for all k points, we basically have to sum over tight-binding coefficients multiplied by the interaction. Given that the Bloch eigenstates are already stored as columns in matrices, we want to write this as matrix-vector products. Specifically, we can use V_ij as a bilinear form, so with a well-defined matrix V the direct term can be written as:
D_vc,v'c'(k,k',Q) = C^T_cc'V(k'-k)C_v'v
where
V = V(k-k') ⊗ 𝕀_n, C_nm = C^*_n ⊙ C_m
Here ⊙ denotes the element-wise array product, C_n is the vector of coefficients corresponding to state |n⟩ and 𝕀_n denotes a square matrix of ones of dimension n, n being the number of orbitals per atom. Note that this expression is only valid if all atoms have the same number of orbitals. Otherwise, one must take into account the different number of orbitals per chemical species when performing the Kronecker products. The exchange term X can be computed in an analogous way. Note that this assumes that the order of the single-particle basis is {|i⟩⊗|α⟩⊗|σ⟩}, i.e. for each atomic position, we run over orbitals, and for each orbital we run over spin. This is also relevant for the computation of the spin of the excitons, since it follows this convention.
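A direct numpy transcription of this bilinear-form expression (assuming an equal number of orbitals per atom; the argument names are illustrative) could look as follows:

    import numpy as np

    def direct_term(C_c, C_cp, C_vp, C_v, V_ij, n_orb, N):
        # C_c, C_cp, C_vp, C_v: coefficient vectors of |c,k+Q>, |c',k'+Q>, |v',k'>, |v,k>,
        # each of length n_atoms * n_orb; V_ij: (n_atoms x n_atoms) matrix V_ij(k'-k)
        V_big = np.kron(V_ij, np.ones((n_orb, n_orb)))   # V(k'-k) Kronecker ones_n
        C_ccp = np.conj(C_c) * C_cp                      # element-wise products
        C_vpv = np.conj(C_vp) * C_v
        return (C_ccp @ V_big @ C_vpv) / N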
As we mentioned at the beginning, to compensate for the lack of screening of the theory, one typically uses the Rytova-Keldysh potential <cit.> instead of the bare Coulomb potential in the context of two-dimensional materials. However, both interactions diverge at r = 0. We regularize this divergence by setting V(0) = V(a) <cit.>, where a denotes the lattice parameter. Currently, the code only implements the Keldysh potential, given by:
V(r)=e^2/8ε_0εr_0[H_0(r/r_0)- Y_0(r/r_0)]
where ε = (ε_m+ε_s)/2, with ε_s, ε_m being the dielectric constants of the substrate and the embedding medium (usually vacuum) respectively, and r_0 the effective screening length. These three parameters have to be specified for all calculations. H_0, Y_0 are the Struve and Bessel functions of the second kind, respectively.
Also, since the interaction decays quickly, we employ a radial cutoff, such that for distances r > R_c we take the interaction to be zero. Then, the effective interaction is:
V(r)={[ V(a) if r = 0; V(r) if r < R_c; 0 else ].
where R_c is the cutoff radius. The cutoff has two purposes: first, it enforces the crystal symmetries in the transformed potential (as a function of k). Secondly, it allows the summation over lattice positions to be computed faster. Instead of evaluating the potential over all lattice positions, we restrict the sum to the lattice positions where we know the potential is different from zero. As for the interactions computed using the Fourier series of the potential, we set V(q=0) = 0 to remove the long wavelength divergence.
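A scalar reference implementation of the truncated, regularized potential may be sketched as follows; the prefactor e^2/(8 ε_0 ε r_0) is written through e^2/(4π ε_0) ≈ 14.40 eV·Å, so that r, r_0, a and R_c are assumed to be in Å and the result in eV. This is only an illustration of the formulas above, not the routine used in Xatu.

    import numpy as np
    from scipy.special import struve, y0

    def keldysh(r, r0, eps, a, R_c, e2_4pieps0=14.399):
        # V(r) = e^2/(8 eps0 eps r0) [H_0(r/r0) - Y_0(r/r0)], regularized with
        # V(0) = V(a) and truncated beyond the cutoff radius R_c
        prefactor = np.pi * e2_4pieps0 / (2.0 * eps * r0)
        r = np.asarray(r, dtype=float)
        x = np.where(r == 0.0, a, r) / r0          # regularization V(0) = V(a)
        bare = prefactor * (struve(0, x) - y0(x))
        return np.where(r < R_c, bare, 0.0)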
Lastly, it is also worth mentioning how to compute the probability of finding the electron at a given spatial position (<ref>). Since this requires two summations over k, k', its cost would be 𝒪(N^2). To obtain the whole wavefunction, a priori we have to evaluate this over each position in the crystal, meaning that the cost would be 𝒪(N^3). However, this would be the worst case scenario, in which the exciton is strongly delocalized in real space. Usually, it will suffice to compute the real-space wavefunction on a contour of the hole position, for a few unit cells only. To actually compute the probability, we want to use the fact that we are storing the exciton coefficients as vectors. First, note that (<ref>) can be written as:
|ψ_X^αβ(t_n + R_e, t_m + R_h)|^2
= |1/N∑_v,c,kA_vc^Q(k)e^ik·(R_e -R_h)
C^c,k+Q_mα(C^v,k_nβ)^*|^2
which already reduces the complexity down to 𝒪(N). Then, the probability is computed as ||A ⊙ C||^2, where A is the vector of exciton coefficients that incorporates the exponential terms and C are the tight-binding coefficients arranged such that they match the electron-hole pair ordering of the exciton.
§ EXAMPLES
So far we have discussed the theory underlying the code and its numerical implementation. Therefore, it remains to show actual examples of the capabilities of the code. One context where excitons are relevant is valleytronics: materials with honeycomb structure which exhibit the band gap at the K, K' points of the Brillouin zone (the "valleys"), and whose optical excitations can be tuned according to the valley <cit.>. The materials most commonly used for this purpose are transition metal dichalcogenides (TMDs), with formula WX_2, where W is the transition metal and X some chalcogen. Another similar material that has become highly relevant is hexagonal boron nitride (hBN), although in this case due to its good properties as an insulating substrate <cit.>.
These materials have become the prototypical examples to test the capabilities of an exciton code, and have been studied extensively. We will characterize the excitons in both hBN and MoS_2, i.e. obtain the exciton spectrum for Q = 0, show the associated wavefunctions and compute the optical conductivity. We will also show how a simple strain model of hBN can be used to break some crystal symmetries and modify the excitonic ground state.
All the calculations shown are done with the real-space approach to the interaction matrix elements, neglecting the exchange term, unless specified otherwise.
§.§ hBN
Monolayer hexagonal boron nitride has a large quasi-particle band gap, with ab-initio calculations predicting a value of 6-8 eV depending on the method <cit.>. As we will see, the band structure of hBN is relatively flat along the M-K path in the Brillouin zone. This, in conjunction with small screening, results in excitons that are strongly delocalized in reciprocal space, but are tightly bound in real space.
This material can be described easily with a minimal 2-band tight-binding model <cit.>, equivalent to graphene but with opposite onsite energies for each atom of the motif. The tight-binding model for hBN reads:
H = ∑_i Δ/2 (c^†_i c_i - d^†_i d_i) + ∑_⟨i,j⟩ [t c^†_i d_j + h.c.]
where c^† (d^†) denote creation operators for B (N) atoms. The indices i, j run over unit cells, and the summation over ⟨i,j⟩ spans only the first neighbours.
The parameters are t = -2.3 eV, Δ/2 = 3.625 eV, and the corresponding system file can be found in the code repository under the folder .
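For concreteness, the 2×2 Bloch Hamiltonian of this minimal model can be assembled directly from the honeycomb geometry. The following sketch uses the parameters quoted above; the lattice constant and the neighbour vectors are standard assumptions, and the snippet is not part of the Xatu code.

```python
import numpy as np

a = 2.50                      # hBN lattice parameter in Angstrom (assumed value)
t, half_gap = -2.3, 3.625     # eV, as quoted above

# Vectors from a B site to its three nearest N neighbours (honeycomb geometry).
d = a / np.sqrt(3)
deltas = d * np.array([[0.0, 1.0],
                       [ np.sqrt(3) / 2, -0.5],
                       [-np.sqrt(3) / 2, -0.5]])

def h_bloch(k):
    """2x2 Bloch Hamiltonian of the minimal hBN model; k in 1/Angstrom, basis (B, N)."""
    f = np.sum(np.exp(1j * deltas @ np.asarray(k)))   # nearest-neighbour structure factor
    return np.array([[half_gap,       t * f   ],
                     [np.conj(t * f), -half_gap]])

K = np.array([4 * np.pi / (3 * a), 0.0])              # a valley of the Brillouin zone
print(np.linalg.eigvalsh(h_bloch(K)))                 # [-3.625, 3.625]: 7.25 eV gap at K
```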
From a Slater-Koster perspective, hBN is described by p_z orbitals. Taking the model to be spin-polarized for simplicity, there are only two bands and there must be only one electron per unit cell (half-filling) for it to be an insulator. Once the model is defined and the system file appropriately constructed, we can begin setting the parameters of the calculation. First, we need to specify the constants that appear in the Keldysh potential in Eq. (<ref>). These parameters determine the strength of the electrostatic interaction and consequently affect the exciton binding energies. Here we follow previous works to set these quantities <cit.>, but sometimes we will be interested in exploring the effect of tuning the dielectric constants, or instead we will want to set them to reproduce known experimental results. Nevertheless, values for typical substrates can be found in the literature and r_0 can also be estimated from ab-initio calculations <cit.>.
The other parameters of the exciton file are related to the convergence of the excitons themselves. Varying the number of k points in the mesh, N_k, one obtains the convergence curves shown in Fig. (<ref>). The convergence has been done with both the default interactions (in real space) and with reciprocal interactions (Fig. (<ref>a)). For reciprocal interactions the energies converge much more slowly than with the real-space counterpart, on top of requiring a sum over several reciprocal cells G. In materials with highly localized excitons in k space, it usually suffices to take only G=0 (e.g. MoS_2). However, we will see later that hBN excitons are highly delocalized in reciprocal space, which is why the interaction can see neighbouring reciprocal unit cells.
After checking convergence, we can start studying the excitons themselves. The energies of the first 8 states and their degeneracies are given in table (<ref>). To make sense of the degeneracies, one has to check the character table of the point group of the material: hBN has the crystallographic point group D_3h,
with both one- and two-dimensional irreducible representations.
Since the symmetry operations and their action on single-particle states are specific to each problem, the code does not address the problem of identifying the irreducible representation of each exciton, nor of labeling them in terms of symmetry eigenvalues. Instead, we only check that the Q-excitonic wavefunctions have the allowed degeneracies and that (<ref>) is invariant under the little group at Q.
The k probability densities of the first eight excitonic states, grouped by degenerate levels, are shown in Fig. (<ref>). Each energy level has the symmetry of the lattice, as expected since we are plotting (<ref>). The additional symmetry in this case is due to time-reversal symmetry and the fact that Q=0 is a time-reversal invariant momentum. We see that the wavefunctions peak at the valleys, although they also spread over the K-M-K' paths. This means that the excitons are formed by strongly interacting electron-hole pairs in k space, which explains why we need to sum over several reciprocal cells when using the reciprocal interactions. As for the shape of the excitons, we find the common pattern: the first state is s-like in the sense that it does not have nodes, the next state would be p-like, and so on. Note that the hydrogen analogy only concerns the shape of the wavefunctions, and not the energy spectrum, which in general differs from the hydrogen series.
Since the excitons are delocalized in reciprocal space, we expect them to be strongly localized in real space. The real-space densities of each degenerate level are shown in Fig. (<ref>). The hydrogenic picture makes more sense when looking at the real-space wavefunction, since the exciton can then be understood as the problem of two interacting charges of opposite sign. The spectrum and the degeneracies do not match those of hydrogen, but the wavefunctions behave radially as we would expect.
In hBN the spin-orbit coupling is small and it suffices to compute the excitons as a spinless system, in particular given that we are also neglecting the exchange interaction. If we consider a spinful system, again without exchange, we obtain exactly the same energy levels but now four-fold degenerate (on top of the previous spatial degeneracy). The same holds for both types of wavefunctions.
Our study of the exciton spectrum in hBN concludes with the calculation of the optical conductivity <cit.>, which reflects the light absorbance from a source up to a constant factor. So far we have not discussed which excitons of the spectrum are bright or dark. This can be seen through the calculation of the optical oscillator strengths within Eq. (<ref>), which determine the transition rate for photon emission. The frequency-dependent conductivity of monolayer hBN is given in Fig. (<ref>). Electron-hole interactions move the spectral power from the continuum to pronounced sub-band-gap peaks. Looking at Table (<ref>) and Fig. (<ref>), we see that non-degenerate excitons with mainly s character are bright. The relative height of the peaks can be understood by looking at the magnitude of the wavefunctions near the K and K' points. All bright excitons can be excited with linearly polarized light along two orthonormal polarization directions, giving rise to an isotropic conductivity consistent with the D_3h point group of the material.
It is of interest to check the validity of the results against a more refined description of the band structure of the material. This can be done with the code by using a local-orbital-based DFT calculation as the starting Hamiltonian, instead of a parametrized tight-binding model. The exciton energies will depend on the gap as estimated from the functional used, but we expect to get similar wavefunctions and conductivity. Since we now consider several orbitals for each chemical species, we have multiple valence and conduction bands, so we should converge the excitons with respect to the number of bands as well. This is a proper check to do, but in this case the different bands are well separated, so their effect should be negligible.
The DFT band structure and the wavefunctions of the ground-state exciton are shown in Fig. (<ref>). One could use standard LDA functionals, but here we opt for a hybrid functional (HSE06 <cit.> in this case), which is efficiently implemented in CRYSTAL <cit.>. This type of functional yields a better estimation of the single-particle gap due to a different treatment of the exchange-correlation term. For both LDA (not shown) and hybrid functionals such as HSE06, the wavefunctions closely resemble those obtained with TB models. For instance, we observe the same sublattice polarization present in the TB real-space densities with the HSE06 calculation (Fig. <ref>c).
The energy spectrum shows the same degeneracies, although the positions of some of the levels are exchanged.
To illustrate the applicability of the code beyond standard cases, we now study the effect of strain on the exciton spectrum. If we apply some uniaxial in-plane strain along the x axis, the point group of the material will change to C_2v (with rotation axis along x). The degeneracy of the ground state came from the spatial symmetries, meaning that it should be broken for any strain value, given that all irreducible representations of C_2v are of dimension 1. Therefore, we can study the energy splitting of the ground state as a function of the applied strain.
The strain model used is fairly straightforward. Based on the original tight-binding model, we now consider the hopping parameters to have an exponential dependence on the distance:
t(r) = t_0 e^{-a(r - r_0)},
where a is a decay constant, t_0 the original value of the hopping and r_0 the reference bond length. Additionally, the distortion of the lattice due to strain is taken to affect only bonds parallel to the strain. A rigorous approach would have to implement the appropriate distortion of all atomic positions according to the stress tensor <cit.>, but for our purposes this simple model suffices. This is illustrated in Fig. (<ref>).
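A minimal sketch of this hopping law is given below; the decay constant and reference bond length are placeholder values, not fitted parameters.

```python
import numpy as np

t0, r_ref, decay = -2.3, 1.45, 2.0   # eV, Angstrom, 1/Angstrom (illustrative values)

def strained_hopping(r):
    """Exponential hopping law t(r) = t0 * exp(-decay*(r - r_ref))."""
    return t0 * np.exp(-decay * (r - r_ref))

# Only bonds parallel to the strain axis are stretched in the simple model above:
epsilon = 0.02                                  # 2% uniaxial strain
print(strained_hopping(r_ref))                  # unaffected bond: -2.3 eV
print(strained_hopping(r_ref * (1 + epsilon)))  # stretched bond: reduced |t|
```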
The procedure to study the exciton spectrum as a function of strain is as follows: we generate different system files (i.e. different Hamiltonians) for different values of the strain, which translates into different atomic positions. Then, we run the exciton simulation for each system file, storing the energies. As expected, all states are now non-degenerate because of the symmetry group C_2v. We can plot the ground-state splitting as a function of strain, which is shown in Fig. <ref>(a). In Fig. <ref>(b) we show the conductivity for some finite value of the strain. The response is no longer isotropic due to the lattice symmetry breaking caused by strain, and the exciton peaks shift for both light polarizations.
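Such a strain sweep can be scripted around the command-line interface described in the appendix. The system and exciton file names below are hypothetical, and we assume the strained system files have already been generated; energies are captured from standard output, where they are printed by default.

```python
import subprocess

strains = [0.00, 0.01, 0.02, 0.03, 0.04]
for eps in strains:
    system_file = f"hBN_strain_{eps:.2f}.model"   # hypothetical, pre-generated system files
    with open(f"energies_{eps:.2f}.txt", "w") as out:
        # By default Xatu prints the exciton energies, so we simply redirect stdout.
        subprocess.run(["xatu", "--states", "8", system_file, "hBN_exciton.txt"],
                       stdout=out, check=True)
```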
§.§ MoS_2
To conclude the examples section, we also analyze the exciton spectrum of MoS_2. Like hBN, in monolayer form this material crystallizes in a honeycomb lattice, although it is not planar: it is formed by three atomic layers with composition S-Mo-S.
The description of the band structure of MoS_2 requires a more complex model, which is why we use it to showcase the code. We use a Slater-Koster tight-binding model <cit.>, where each chemical species has a different set of orbitals (Mo has d orbitals, and S only p orbitals). This, together with the non-negligible spin-orbit coupling, results in a more complex band structure than that of hBN. Both the lattice and the band structure can be found in Fig. (<ref>).
After checking convergence with the number of k points and the number of bands, we obtain the spectrum shown in table (<ref>). In this case, the point group of the material is again D_3h and the irreducible representations realized by the wavefunctions at Q=0 are compatible with the character table of the group <cit.>.
As before, to make sure that the excitons were computed correctly we can plot the total densities and check that they have the expected symmetries.
In Fig. (<ref>a) we show the reciprocal probability density of the first energy level. As opposed to hBN, we observe that the states are strongly localized at the valleys.
Resolving the degeneracy by labeling each exciton with the C_3 eigenvalues would result in each exciton being localized in a different valley <cit.>. This shows that at least the low-energy spectrum of MoS_2 can be studied at one valley instead of over the whole BZ <cit.>. This allows for a more precise description of the exciton, since one can use a more refined mesh. The wavefunction for the exciton obtained at one valley can be seen in Fig. (<ref>b), using the code feature to reduce the BZ mesh by some integer factor. For higher excited states this does not hold, since the states become more extended across the BZ, reaching both valleys.
Since the excitons are very localized in reciprocal space, they should be delocalized in real space, meaning that the radius of the exciton should be large (e.g. compared to that of hBN). To complete the characterization of the excitons, we calculate the optical conductivity, shown in Fig. (<ref>).
While the exciton energies converge quickly with N_k, it is usually necessary to include more k points in the calculation of the optical conductivity in order to smooth unphysical oscillations derived from the discrete mesh. As can be seen, the shape of the spectrum matches previous tight-binding studies <cit.> and agrees well with ab-initio results <cit.>. At low energies, the optical conductivity of MoS_2 presents the characteristic A and B exciton peaks, which are understood considering the main spin-allowed electron-hole excitations at the K and K' points. The split of ∼100 meV between such peaks reflects the effect of SOC in TMD materials <cit.>. At higher energies, the main feature of the spectrum is a pronounced peak similar to the non-interacting case but red-shifted in energy. The excitons giving rise to such a peak are often called "C" excitons and were fully characterized in Ref. <cit.>, already showing the potential of tight-binding methods for studying new exciton physics.
§ CONCLUSIONS
We have developed a software package that allows solving the Bethe-Salpeter equation constructed from either tight-binding models or DFT calculations based on localized orbitals. By considering orbitals as point-like, the computation of the interactions becomes drastically simplified. Together with an effective screening, this results in a fast determination of the BSE matrix. More specifically, our real-space implementation of the interaction matrix elements is shown to be faster and more precise than its reciprocal-space counterpart, which is the formulation more commonly used.
As in GW-BSE approximations, the starting band structure plays a crucial role in determining the resulting exciton spectrum. Therefore it is key to select the best possible functional (typically hybrids) or the most accurate tight-binding models that capture the most prominent features of the band structure. Then, by choosing appropriately the screening parameters, it is possible to reproduce the results of GW-BSE or similar first-principles codes at a fraction of the computational cost.
The Xatu code currently provides all the tools needed to extract and characterize the exciton spectrum, either using the binary or via its API. Nevertheless, the package is still under development, as new functionalities and optimizations are added. Our future plans include giving support for distributed parallelism to enable bigger system sizes and calculation of different excitation types such as trions or biexcitons. The code is currently aimed at the description of 2D materials, but it can support 0D and 3D systems. Since the Keldysh potential is only adequate for 2D systems, we will implement additional potentials suitable for different dimensionalities.
We also plan to add the possibility of performing exact calculations of the interaction matrix elements when using Gaussian-based DFT codes to compute the band structure. Currently we provide an interface with the CRYSTAL code <cit.>, and ideally more interfaces to community codes will be added over time, such as SIESTA <cit.> or Wannier90 <cit.>. The project has been released under an open-source license and as such community contributions are welcome and encouraged.
Note added: Upon completion of this work we became aware of a very recent submission which also addresses the problem of determining the exciton spectrum from Wannier-based tight-binding models <cit.>. Nevertheless, we believe that the thorough characterization of excitons we provide here can be advantageous and complementary to other different tools.
§ ACKNOWLEDGMENTS
The authors acknowledge financial support from Spanish MICINN (Grant Nos. PID2019-109539GB-C43 & TED2021-131323B-I00), María de Maeztu Program for Units of Excellence in R&D (Grant No. CEX2018-000805-M), Comunidad Autónoma de Madrid through the Nanomag COST-CM Program (Grant No. S2018/NMT-4321), Generalitat Valenciana through Programa Prometeo (2021/017), Centro de Computación Científica of the Universidad Autónoma de Madrid, and Red Española de Supercomputación.
§ SYMMETRY
Throughout the document, we have mentioned and used several times the fact that each individual exciton state does not necessarily have the symmetries of the lattice, but that the sum of their squared amplitudes does. In this appendix we give a proof of this statement: Given |Ψ|^2=∑_n|ψ_n|^2, where ψ_n denotes the wavefunction of the exciton states on some degenerate subspace of PHP, and given some symmetry operation C such that [H, C] = 0, then
C|Ψ|^2 = |Ψ|^2.
First we have to consider the action of the symmetry operator C on an exciton state. Given that the eigenstates of a degenerate subspace of H are not in general eigenstates of C, the most general action is to mix the degenerate states, i.e.:
Cψ_n=∑_iα_inψ_i
The coefficients α_in are the matrix elements of C. To prove (<ref>), we need to know the action of C on the squared amplitude, C|ψ_n|^2. So first we want to prove the following property:
C|ψ_n|^2=|Cψ_n|^2
This can be proven using the action of the symmetry operation on the coordinate of the wavefunction, i.e. Cψ_n(x) = ψ_n(C^-1x):
C|ψ_n|^2(x) =|ψ_n|^2(C^-1x)=ψ_n(C^-1x)ψ_n^*(C^-1x)
=Cψ_n(x)Cψ_n^*(x) = |Cψ_n|^2(x)
where we have also used that Cψ_n^*(x) = (Cψ_n)^*(x). This last identity can be proved by conjugating the action of the symmetry on the coordinates:
(Cψ_n)^*(x) = ψ_n^*(C^-1x) = Cψ_n^*(x)
This enables us to compute C|ψ_n|^2 in terms of an expansion in the states of the degenerate subspace:
C|ψ_n|^2=|Cψ_n|^2=|∑_iα_inψ_i|^2=∑_i,jα_inα^*_jnψ_iψ^*_j
Finally, with this expression we can prove the symmetry invariance of |Ψ|^2 = ∑_n|ψ_n|^2. To do so, we act with the symmetry operation C on |Ψ|^2:
C|Ψ|^2 = ∑_n C|ψ_n|^2 = ∑_n [∑_i,jα_inα^*_jnψ_iψ^*_j]
= ∑_ij[∑_nα_inα^*_jn]ψ_iψ^*_j = ∑_i|ψ_i|^2 = |Ψ|^2
where we have used that C is unitary, i.e. ∑_nα_inα^*_jn = δ_ij. This proves that the sum of the squared amplitudes of the degenerate states is invariant under the symmetry operations.
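This invariance is easy to verify numerically with random states and a random unitary mixing matrix, independently of any particular material:

```python
import numpy as np

rng = np.random.default_rng(0)
d, npts = 3, 50                                    # a 3-fold degenerate level sampled at 50 points
psi = rng.normal(size=(d, npts)) + 1j * rng.normal(size=(d, npts))

# Random unitary mixing alpha, so that psi'_n = sum_i alpha_{in} psi_i.
alpha, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
psi_mixed = alpha.T @ psi                          # row n is sum_i alpha_{in} psi_i

total_before = np.sum(np.abs(psi) ** 2, axis=0)
total_after = np.sum(np.abs(psi_mixed) ** 2, axis=0)
print(np.allclose(total_before, total_after))      # True: the summed density is invariant
```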
On a different note, back in the examples we used the character table of the point group of the solid to justify the observed state degeneracies, and in the previous proof we also considered some general symmetry C such that [H, C] = 0. For the abstract, unrepresented Hamiltonian, given any operation C of the point group of the solid, it is true that [H, C]= 0. However, we are not working with the total Hamiltonian, but with a sector of it. So one must actually look for symmetry operations that commute with PHP:
[PHP, C] = 0
Since the sectors of electron-hole pairs of different momentum are disconnected, we can define H(Q) = P_QHP_Q, where P_Q is the projector over electron-hole pairs of Q total momentum. This Hamiltonian is analogous to the Bloch Hamiltonian H(k), and it can be shown that it transforms in the same way:
C^-1H(Q)C = H(C^-1Q)
meaning that for Q=0 the symmetry group is the crystallographic point group, but for Q≠ 0 the Hamiltonian is invariant only under symmetry operations of the little group of Q, whose irreducible representations thus dictate the (unitary) transformation properties of the Q-excitonic wavefunctions.
Proof that H(Q) transforms as the Bloch Hamiltonian H(k) under symmetry operations:
C^-1H(Q)C = C^-1P_QCC^-1HCC^-1P_QC
= C^-1P_QCHC^-1P_QC
where we have used that [H,C]=0. So we only need to see how the projectors transform under the symmetry operation to determine how H(Q) transforms.
C^-1P_QC = ∑_k,v,cC^-1c^†_c,k+Qc_vk|GS⟩⟨GS|c^†_vkc_c,k+QC
Inserting identities, we can transform each creation/annihilation operator according to C^-1c^†_nkC = c^†_n,C^-1k, up to an arbitrary phase that is cancelled in (<ref>). Since from (<ref>) it follows that the Fermi sea is invariant under point group operations, i.e. C|GS⟩ = |GS⟩ (again, up to an arbitrary phase that is cancelled), we arrive at the following expression:
C^-1P_QC
= ∑_k,v,cc^†_c,C^-1k+C^-1Qc_v,C^-1k|GS⟩⟨GS|c^†_v,C^-1kc_c,C^-1k+C^-1Q
From the C-invariance of the BZ in k-space, we arrive at the final expression for the transformed projector:
C^-1P_QC = ∑_k,v,c|v,c,k,C^-1Q⟩⟨v,c,k,C^-1Q| = P_C^-1Q
Therefore, the projected exciton Hamiltonian H(Q) also transforms in the same way:
C^-1H(Q)C = H(C^-1Q)
Likewise, the application of time-reversal yields T^-1H(Q)T=H(-Q) whenever it is a symmetry of the system.
§ USAGE
The installation instructions can be found at the repository <https://github.com/alejandrojuria/xatu>, so they will not be discussed here.
The code has been developed with a hybrid approach in mind: one can resort to configuration files to run the program, in analogy with DFT codes, or instead program both the non-interacting system and run the exciton simulation using the provided API. First we are going to discuss its usage with configuration files. The basic usage as a CLI program is described by:
xatu [OPTIONS] systemfile [excitonfile]
The executable always expects one file describing the system where we want to compute the excitons, and then another file specifying the parameters of the simulation. Their content is addressed in the next sections. The executable can also take optional flags, generally to tune the output of the simulation. By default, running the program without additional flags prints the exciton energies, without writing the results to any file.
-h (--help)
Used to print a help message with the usage of the executable and a list of all possible flags that may be passed. The simulation is not performed (even in the presence of configuration files).
-s (--states) nstates
The number of states specified with this flag is also used for any of the output flags. By default, the number of states is 8 (i.e. if the flag is not present).
-p (--precision) decimals
One can specify the number of decimals used when printing the exciton energies. This is relevant to detect state degeneracy without manually inspecting the states. Defaults to 6 decimals if not present.
-d (--dft) [ncells]
This flag is used to indicate that the provided corresponds to a CRYSTAL output file, instead of following the standarized format. DFT calculations usually involve several unit cells to determine the Bloch Hamiltonian, so the optional value can be passed to specify how many we want to take into account. Otherwise all of them are read and used.
-eck (--energy, --eigenstates, --kwf)
The optional flags , , , are used to specify which exciton output is written to file. writes the energies, writes the eigenvectors, writes the reciprocal density. Note that they can be combined instead of being written separately (e.g. instead of ).
-r (--rswf) [holeIndex] [-r ncells]
Used to write the real-space probability densities to a file. One can give the index of the atom where the hole is located (defaults to first atom of the motif). It can be used a second time to specify the number of unit cells where we want to compute the amplitude (e.g. fixes the hole at the second atom of the motif, and uses 10 unit cells along each axis).
-s (--spin)
Computes the total spin of the excitons, and writes it to a file. This assumes that the single-particle basis includes spin without performing any check, so incorrect usage could result in wrong results or runtime errors.
-a (--absorption)
Computes the optical conductivity (which reflects the absorption of light up to a constant factor) as a function of frequency using the exciton spectrum, and saves the result to a file. A file named "kubo_w.in" with the adequate format (shown below) must be present in the working directory.
-m (--method) diag | davidson | sparse
Choose method to obtain the eigenstates of the BSE. By default, full diagonalization is used. If the Davidson or sparse (Lanczos) method is selected, then it is used to compute the number of states specified before.
-b (--bands) kpointsfile
To check that the system file was written correctly, one can use this option to diagonalize the Bloch Hamiltonian on the k points specified in a file, and write the energy bands to a file. No exciton calculation is performed.
§.§ Structure of a system file
The system configuration files contain all the information needed to completely characterize the material under study: they provide the lattice vectors and motif positions, which are required for the real-space evaluation of the excitons. Then, we have the number of orbitals of each unique chemical species, which is needed to compute the matrix elements correctly, and the filling, which determines which bands participate in the formation of the exciton. Finally, the file contains the matrices needed to build the Bloch Hamiltonian, that is, the Fock matrices H(R) and their corresponding Bravais vectors R. The Bloch Hamiltonian is then reconstructed as:
H(k)=∑_RH(R)e^ik·R
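A minimal sketch of this reconstruction (including the generalized eigenvalue problem mentioned below for non-orthonormal bases) is shown here; the function and variable names are illustrative and do not correspond to the Xatu API.

```python
import numpy as np
from scipy.linalg import eigh

def bloch_matrix(k, R_list, M_list):
    """Assemble M(k) = sum_R M(R) * exp(i k.R) from the stored matrices."""
    phases = np.exp(1j * np.asarray(R_list) @ np.asarray(k))
    return sum(p * M for p, M in zip(phases, M_list))

def bands_at_k(k, R_list, H_list, S_list=None):
    Hk = bloch_matrix(k, R_list, H_list)
    if S_list is None:                       # orthonormal basis (typical tight-binding)
        return np.linalg.eigvalsh(Hk)
    Sk = bloch_matrix(k, R_list, S_list)     # non-orthogonal basis: H(k) psi = E S(k) psi
    return eigh(Hk, Sk, eigvals_only=True)
```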
Note that even though one has to provide the orbitals of each species, the specific type of orbital is not needed since the interaction is computed using the point-like approximation. A system file is specified using labels for each block. Blocks always begin with the block delimiter , followed by a label. A block is then defined as all the content between two consecutive block delimiters. The expected content for each label will be discussed next. Any line containing is regarded as a comment, and empty lines are skipped.
#: Basis vectors of the Bravais lattice. The number of vectors present is also used to determine the dimensionality of the system. The expected format is one vector per line, .
#: List with the positions and chemical species of all atoms of the motif (unit cell). The chemical species are specified with an integer index, used later to retrieve the number of orbitals of that species. The expected format is one atom per line, .
#: Number of orbitals of each chemical species present. The position of the number of orbitals for each species follows the indexing used in the motif block. This block expects one or more numbers of orbitals, the same as the number of different species present, .
#: Total number of electrons in the unit cell. Required to identify the Fermi level, which is the reference point in the construction of the excitons. Must be an integer number.
#: List of Bravais vectors R that participate in the construction of the Bloch Hamiltonian (<ref>). Expected one per line, in format .
#: Matrices H(R) that construct the Bloch Hamiltonian H(k). The matrices must be fully defined, i.e., they cannot be triangular, since the code does not use hermiticity to generate the Bloch Hamiltonian. The Fock matrices given must follow the ordering given in the block . The matrices can be real or complex, and each one must be separated from the next using the delimiter . In case the matrices are complex, the real and imaginary parts must be separated by a space, and the complex part must carry the imaginary number symbol (e.g. 1.5-2.1j). Both i and j can be used.
#: In case that the orbitals used are not orthonormal, one can optionally provide the overlap matrices S(R). The overlap in k space is given by:
S(k) = ∑_RS(R)e^ik·R
This is necessary to be able to reproduce the bands, which come from solving the generalized eigenvalue problem H(k)Ψ = ES(k)Ψ.
This will be especially necessary if the system was determined using DFT, since in tight-binding we usually assume orthonormality. This block follows the same rules as : each matrix S(R) must be separated with the delimiter , and they must follow the order given in .
Several examples of valid system files are provided in the code repository, under the folder .
§.§ Structure of an exciton file
The purpose of the system file was to specify completely the system where we want to compute the excitons. The exciton file, in turn, is used to describe the excitons themselves: for example the number of points in the mesh or submesh, the bands that participate and the center-of-mass momentum, as well as some additional flags. The idea is to keep the functionality as orthogonal as possible between the files: with one system file, we can test for the convergence of the excitons with the number of k points, or with the number of bands, by modifying the exciton file only. Finally, we have the runtime options of the program, which in general do not affect the energies and modify the output exclusively. The philosophy is to maximize reproducibility and facilitate tracking of the experiments.
The exciton files are built following the rules of the system files. They are composed of blocks, starting with . Each block has a label, which determines the expected content of the block. Next we provide a list of the possible parameters used in the construction of an exciton file:
#: Used to specify the name of the files containing the output of the program. The files will be named , , etc.
#: Number of bands above and below the Fermi level. The minimum value is 1, to describe one conduction band and one valence band (i.e. only one combination of bands).
#: As an alternative to , one can specify a list with the indices of the bands that compose the exciton. 0 is taken as the last valence band, meaning that 1 would be the first conduction band, -1 is the second valence band and so on. This option can be used to generate asymmetric combinations of bands. It overrides the block.
#: Number of points in one direction of the Brillouin zone, or equivalently number of unit cells along one axis. The same number of points is taken along all directions.
#: Used to specify a submesh of the Brillouin zone. Takes a positive integer m, which divides the BZ along each axis by that factor. The resulting area is meshed with the number of points specified in the block. This option can become memory intensive (it scales as 𝒪(m^d), d the dimension).
#: In case that we are using a submesh, then probably we also want to shift the meshed area to center it at the gap, where the exciton peaks. Takes a vector with its components, .
#: The Keldysh interaction requires setting the dielectric constants of substrate ϵ_s, the medium ϵ_m and the screening length r_0, which involves the dielectric constant of the material. This block expects three values, .
#: One can optionally specify the total or center-of-mass momentum Q of the exciton. By default, it is taken to be zero, unless this block is specified. It expects a vector in form .
#: If present, the interaction matrix elements are computed in reciprocal space instead of direct space, which is the default. It takes an integer argument to specify the number of reciprocal cells to sum over, .
#: Flag to turn on the exchange interaction. By default computations neglect the exchange, and use only the direct term. It has to be set to or .
#: Used to specify a scissor shift of the bands to correct the gap. This optional field takes a single value,
As can be seen, a minimal exciton simulation only requires specifying the number of bands, the number of k points and the dielectric constants. The modification of any of these parameters is expected to result in a variation of the exciton results (energies, wavefunctions, conductivity), which is why all of them have been delegated to the same file.
§.§ Absorption file
For the calculation of the Kubo conductivity, one needs to provide a separate input file named "kubo_w.in" in the working folder. This file is used to specify all parameters relative to the conductivity calculation, namely the desired energy interval, the point sampling and the broadening to be used, as well as the output files. Its format is as follows:
Do note that as opposed to the previous configuration files, the name of each section starting with # is not actually relevant for the parsing; the program always expects the same fields to be present in the file, and in the same order as presented here. For the broadening, three different options are allowed ()
§.§ As a library
So far we have discussed a more streamlined usage of the package. In some cases, however, the user could benefit from directly accessing the results of an exciton calculation, instead of having to dump them to a file for later postprocessing. To enable this possibility, the package has also been designed as a library, meaning that one can import the classes and functions defined in the API and use them to build extra functionality. Some use cases could be scenarios with exciton interactions, such as exciton-exciton interactions or exciton-polaritons.
To do so, the package provides a header file which defines a namespace. Within the namespace we have access to all the exciton functionality, which is completely documented. For instructions on how to build the documentation, we refer to the project repository where the most up-to-date information will be present. Additionally, some usage examples can be found under the root directory in the folder .
The outline for a general exciton simulation is the following: one first has to create a System object, which can be done with a system file. Alternatively, one can define a subclass that inherits from System, and use it to implement the desired behaviour (namely the Bloch Hamiltonian). Then this System is passed on to the Exciton class, which we configure with the desired parameters. The interacting Hamiltonian is initialized and solved, returning a Result object which contains the eigenvalues and eigenvectors. With this, now we can compute some observables, or instead use these states to perform some other calculations out of the scope of the code.
§.§ Output
To conclude the usage section, we will describe briefly the structure of the output files. Here we describe how each file is written so the user can write their own custom routines; we also provide some example Python scripts under the folder .
* Energy: The energy file has in the first line the total number of energies written in the file. The second line contains all the energies, separated by a tabulation, . All energies are written, including degenerate levels. All the exciton energies are given with respect to the Fermi sea energy. To obtain the binding energy, one must subtract the gap from the exciton energy. Units are [E]= eV. A minimal parsing sketch is given after this list.
* States: The first line contains the dimension of the BSE matrix n, i.e. the number of different electron-hole pairs. The next n lines specify the valence, conduction bands of each electron-hole pair and their k point, . Afterwards, each line specifies completely the coefficients of each exciton state. The format per line is: .
* Reciprocal probability density: For the reciprocal density, on each line we specify the coordinates of the k point and the associated probability, . Each state is separated from the next by a delimiter . Units are [k] =Å^-1.
* Real-space probability density: The first line has the coordinates of the hole, . The following ones each have the coordinates of one atomic position, and the probability of finding the electron: . Densities for different states are separated by . Units are [x]=Å.
* Spin: On each line we write the index of the current exciton, and next the total spin projection, the hole and the electron spin, . Spin units are [S_z]=ħ.
* Absorption: Both conductivities with and without exciton effects are computed and written to two different files. Each row contains the following columns:
ω, σ_xx, σ_xy, σ_xz, σ_yx, σ_yy, σ_yz, σ_zx, σ_zy, σ_zz. Units are [ω] = eV, [σ_ij] = e^2/ħ.
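As an illustration, the energy file described above can be parsed with a few lines of Python; the output file name is hypothetical, since the actual names derive from the label given in the exciton file.

```python
import numpy as np

def read_energy_file(path):
    """Parse the energy output: line 1 = number of energies, line 2 = tab-separated energies (eV)."""
    with open(path) as f:
        n = int(f.readline().strip())
        energies = np.array(f.readline().split(), dtype=float)
    assert energies.size == n
    return energies

# energies = read_energy_file("hBN_test.energies")   # hypothetical output file name
```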
|
http://arxiv.org/abs/2307.03337v1
|
20230707004406
|
Personalized Prediction of Recurrent Stress Events Using Self-Supervised Learning on Multimodal Time-Series Data
|
[
"Tanvir Islam",
"Peter Washington"
] |
cs.LG
|
[
"cs.LG",
"eess.SP"
] |
Personalized Prediction of Recurrent Stress Events Using Self-Supervised Learning on Multimodal Time-Series Data
Tanvir Islam, Peter Washington
Information and Computer Sciences Department, University of Hawaii at Manoa, Honolulu, Hawaii, USA
Correspondence: Tanvir [email protected]
Chronic stress can significantly affect physical and mental health. The advent of wearable technology allows for the tracking of physiological signals, potentially leading to innovative stress prediction and intervention methods. However, challenges such as label scarcity and data heterogeneity render stress prediction difficult in practice. To counter these issues, we have developed a multimodal personalized stress prediction system using wearable biosignal data. We employ self-supervised learning (SSL) to pre-train the models on each subject's data, allowing the models to learn the baseline dynamics of the participant's biosignals prior to fine-tuning the stress prediction task. We test our model on the Wearable Stress and Affect Detection (WESAD) dataset, demonstrating that our SSL models outperform non-SSL models while utilizing less than 5% of the annotations. These results suggest that our approach can personalize stress prediction to each user with minimal annotations. This paradigm has the potential to enable personalized prediction of a variety of recurring health events using complex multimodal data streams.
§ INTRODUCTION
Chronic stress can have profound detrimental effects on health and well-being. Research has shown that prolonged exposure to stress hormones can lead to increased risk of mental health disorders such as anxiety and depression <cit.> and can contribute to the development of cardiovascular diseases, including hypertension and heart disease <cit.>. Furthermore, chronic stress has been associated with dysregulation of the immune system, leading to impaired immune function and increased susceptibility to infections and autoimmune disorders <cit.>.
Biosignal-based stress detection methods are of increasing interest to the digital health community due to their potential to recognize stress levels in real-time. Commonly used biosignals for stress prediction include electrocardiograms (ECG) <cit.>, galvanic skin response (GSR) <cit.>, electromyograms (EMG) <cit.>, skin temperature (ST) <cit.>, skin conductivity <cit.>, and respiratory rate <cit.>.
A fundamental challenge for the prediction of stress from multimodal biosignals is the engineering of appropriate features. Traditional machine learning approaches face a significant challenge in the need for manual feature generation, and the feature representation approaches vary depending on the physiological signal <cit.>. These manually created features, although widely used <cit.>, often do not lead to optimal performance outcomes for subjective prediction targets such as stress due to the inherent subjectivity, complexity, and heterogeneity of the data. Deep neural networks (DNNs) have revolutionized machine learning <cit.> by learning to extract complex data patterns across various fields <cit.>, including in the medical sciences <cit.>, without the need for manual feature extraction <cit.>.
Another major challenge in machine learning with multimodal biosignal data is obtaining datasets with high-quality labels <cit.>, a process which is notably costly and where the clinical ground truth is often difficult to define <cit.>. Annotation of stress events is laborious for the end user, leading to relatively sparse labels in most, if not all, datasets containing biosignals. The sparse labels, coupled with the user-dependent nature of the physiological stress response, lead to immense difficulty in training a generalizable stress prediction model.
To address these two challenges, which have traditionally led to a subpar performance in the use of machine learning prediction of subjective psychiatric outcomes such as stress, we suggest a personalized self-supervised learning (SSL) approach to stress prediction. By training a separate model per user (personalization) and using SSL to learn the baseline dynamics of each user's biosignals, we are able to learn using relatively few annotations from each user, even for annotations which are inherently subjective such as stress. We capitalize on multiple sensor modalities, performing stress recognition through the integration of diverse information <cit.>. We use electrodermal activity (EDA), electrocardiogram (ECG), electromyography (EMG), respiration (RESP), core body temperature (TEMP), and three-axis acceleration (ACC) to train our stress prediction models.
Our contributions to the machine learning for the healthcare field are as follows:
* We evaluate stress prediction models trained using multiple biosignal modalities.
* We explore the personalization of these multimodal stress prediction models, enabled through SSL procedures which learn the baseline temporal dynamics of each user's biosignals.
* We compare multimodal personalized models with and without SSL pre-training to quantify the impact of personalized multimodal SSL.
§ RELATED WORKS
Stress prediction from biosignals is a rich and expanding field. Established machine learning approaches have yielded preliminary successes while leaving room for improvement. Fernández et al. introduced a non-invasive, radar-based stress detection method that primarily uses respiratory patterns and applied Recurrence Quantification Analysis (RQA) to achieve 94.4% accuracy, offering broader applicability than heartbeat-based methods, particularly for individuals with a higher body mass index <cit.>. Ghaderi et al. employed machine learning algorithms to process various biological signals for accurate stress level classification, emphasizing the role of respiration as a crucial sensor and suggesting its potential applications for tailoring stress management treatments <cit.>. Karthikeyan et al. studied the correlation between stress levels and EMG measurements of muscle tension <cit.>. Other lines of research emphasize the utility of ECG data in stress measurement, including a novel ECG-based mono-fuzzy measure to assess stress levels <cit.> as well as the application of person-specific and person-independent stress prediction methods <cit.>.
SSL has only recently been applied to biosignal-based stress prediction. Robert et al. proposed a stress prediction system based on contrastive learning, utilizing modified EDA signals to classify stress and non-stress scenarios <cit.>. Contrastive learning techniques such as SimCLR <cit.> and BYOL <cit.> heavily rely on data augmentation. However, this approach has limitations, as determining the most beneficial augmentations is challenging <cit.>. By contrast, our study explores a different approach by excluding data augmentation and instead focusing on self-supervised pre-training using the raw data.
Multimodal machine learning techniques are required to handle the multiple concurrent biosignals which are recorded by consumer and research wearables. Bobade et al. detected stress levels in individuals by analyzing various bio-signals such as acceleration, ECG, blood volume pulse, body temperature, respiration, EMG, and EDA. Their experimental results included accuracy levels of up to 84.32% for three-way classification (amusement vs. baseline vs. stress) and up to 95.21% accuracy for binary classification (stress vs. non-stress), outperforming previous work in the field <cit.>.
Aigrain et al. developed a methodology for analyzing multimodal stress detection results by considering multiple assessments of stress <cit.>. Data from 25 subjects in a stressful situation were collected, along with three different assessments: self-assessment, assessments from external observers, and assessments from a physiology expert. The study found that a combination of behavioral and physiological features, such as body movement, blood volume pulse, and heart rate, provided valuable information for classifying stress across the three assessments <cit.>. Bara et al. introduced a deep learning-based approach for multimodal stress detection, utilizing the MuSE dataset <cit.> to evaluate various configurations and modular architectures. The results demonstrated the potential of deep learning methods in capturing affective representations related to stress, paving the way for further investigations and applications in affective computing <cit.>.
In contrast to these prior works, we combine the fields of multimodal machine learning and SSL to achieve optimal performance with a minimal number of training labels. Multimodal models traditionally require more labels to accommodate the expanded feature space, but we hypothesize that SSL can drastically reduce this need. Furthermore, we combine these ideas with the idea of model personalization, where a separate model is trained per individual.
§ METHODOLOGY
We aim to develop a model that can accurately predict stress by learning representations of six concurrent biosignals: EDA, ECG, EMG, Temp, Resp, and ACC. Our model is developed in two stages: (1) pre-training to predict a pretext task T_p and (2) fine-tuning to the downstream task T_d, which in this case is stress prediction. In the pretext task (T_p), the goal is to learn robust and generalized features from unlabeled biosignals through a self-supervised process which learns meaningful representations that can capture relevant patterns and information from each user's individual biosignal dynamics. Once the representations are learned from the pretext task, the second stage (T_d) focuses on predicting stress using the original biosignal data along with human-annotated stress labels (y_i). We compare the fine-tuned SSL-based model against a purely supervised model. For fully supervised training, we use the same architecture as T_d.
Figure <ref> illustrates the proposed self-supervised solution, depicting the two-stage process of learning representations and predicting stress.
§.§ Dataset
We use a publicly available dataset called WESAD <cit.>, consisting of data collected using the Empatica E4 wrist-worn device and RespiBAN device worn on the chest. WESAD contains six biosignals per participant: ECG, EDA, EMG, RESP, TEMP, and ACC. In addition to the physiological data collected, participants periodically completed questionnaires assessing their emotional state during the data collection session. Our models are trained exclusively on the RespiBAN device data.
During the data collection process, information related to three distinct affective states, namely stress, amusement, and relaxation, were gathered from a group of 15 participants. In order to assess each participant's anxiety levels, they were presented with six questions derived from the State-Trait Anxiety Inventory (STAI). These questions required the participants to indicate their responses on a four-point Likert scale. The six STAI questions covered six distinct affective states: feeling at ease, nervous, jittery, relaxed, worried, and pleasant. By asking these STAI questions, the developers of WESAD aimed to gauge each participant's anxiety levels and to capture their subjective emotional states.
Our models were trained using these six biosignals from the RespiBAN device for the baseline condition, thus facilitating a multimodal training process.
§.§ Label Representation
The WESAD dataset includes participants' responses to six questions from the STAI, which were rated on a four-point Likert scale (ranging from 1 to 4). In this paper, we address the task of stress detection by proposing a label quantification approach. Rather than treating stress detection as a discrete classification problem with predefined stress levels, we transform it into a regression problem by assigning continuous values to the stress labels. Specifically, we convert the stress labels of 1, 2, 3, and 4 into quantified values of 0.25, 0.5, 0.75, and 1, respectively. To ensure a quantified and uniformly distributed representation of labels, we perform a conversion based on the evenly spaced four-point Likert scale. This label quantification technique allows us to capture the underlying intensity or severity of stress, enabling more fine-grained analysis and prediction. By leveraging DNN over raw biosignals, we can now model and predict stress levels with greater precision and sensitivity, contributing to more accurate and nuanced stress prediction systems. Our experimental results demonstrate the effectiveness of label quantification in improving the performance and interpretability of stress prediction models, highlighting its potential for advancing the field of stress analysis and management.
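Concretely, the quantification amounts to a single rescaling of the Likert responses; a minimal sketch:

```python
import numpy as np

stai_responses = np.array([1, 3, 2, 4, 4, 1])   # example answers on the four-point Likert scale
targets = stai_responses / 4.0                   # quantified regression targets: 0.25 ... 1.0
print(targets)
```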
§.§ Self-supervised Learning of Biosignal Representations
Self-supervised pre-training involves learning a comprehensive representation of the data without relying on labeled information, by solving a so-called “pretext task”. By training a model with a robust representation, it becomes more feasible to transfer it to a specific task. We trained separate models for each subject using only their respective data, allowing the models to learn the baseline dynamics of each individual.
The model is pre-trained by dividing each of the six signals into fixed windows of 7,000 samples (10 seconds), resulting in 7,910 data points with a 100-point overlap using the forecasting method. Each training data point consists of a 7,000-dimensional vector, with the target output being the subsequent 40 data points. We set the window size to 10 seconds because we aim to predict stress over 10-second intervals. A 1D CNN is utilized for the pre-training process. The 1D CNN architecture used for this pre-training is presented in Table <ref>.
Let the original signal sequence be S = S_1, S_2, ..., S_n. We then create a series of training data points X and corresponding targets Y as follows: for each i from 1 to D (where D is the total number of data points), we have X_i = (S_{(i-1)O + 1}, ..., S_{(i-1)O + W}) and the corresponding target Y_i = (S_{(i-1)O + W + 1}, ..., S_{(i-1)O + W + P}), where S is the original signal, D is the total number of data points, W is the window size (7000 in our case), P is the number of points to predict (40 in our case), O is the overlap (100 in our case), X_i is the i-th data point (an array of W elements from the original signal), and Y_i is the corresponding target (an array of P elements from the signal, immediately following the elements in X_i).
Each X_i is a window of the original signal, and each Y_i is the sequence of points immediately following the corresponding X_i in the signal.
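A minimal sketch of this window/target construction for a single biosignal channel, following the indexing above (array contents are synthetic):

```python
import numpy as np

W, P, O = 7000, 40, 100   # window size, forecast length, step between windows

def make_pretext_pairs(signal):
    """Slice one biosignal channel into (X_i, Y_i) pairs following the indexing above."""
    signal = np.asarray(signal, dtype=float)
    n_pairs = (len(signal) - W - P) // O + 1
    X = np.stack([signal[i*O : i*O + W] for i in range(n_pairs)])
    Y = np.stack([signal[i*O + W : i*O + W + P] for i in range(n_pairs)])
    return X, Y

X, Y = make_pretext_pairs(np.random.randn(W + P + 10 * O))  # synthetic channel
print(X.shape, Y.shape)                                     # (11, 7000) (11, 40)
```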
With this method, we have learned baseline representations of all biosignal modalities for each user: R_EDA, R_ECG, R_EMG, R_Temp, R_ACC, R_Resp
§.§ Stress Prediction Task
In the second phase of learning (T_d), the model uses the initial biosignals and the responses provided by participants (y_i). In this phase, the framework utilizes the fixed (frozen) convolutional layers from the first network to extract features that help to predict stress. The fully-connected layers at the end of the network are subsequently trained in a supervised manner to predict stress using the extracted features.
In order to perform the stress prediction task (T_d) using multiple modalities, the representations of the biosignals, namely R_EDA, R_ECG, R_EMG, R_Temp, R_ACC, and R_Resp, are fused together into a single multimodal representation R. This late-stage fusion process combines the information from each modality to create a comprehensive representation that captures the relevant features for predicting stress. To accomplish T_d with representation R, a network: ρ = w(R, θ) is trained where ρ is the output prediction vector and θ is
the set of trainable parameters. Finally, we calculate the optimal value of θ by minimizing mean squared error loss:
L = 1/n∑_i=1^n (y_i - ŷ_i)^2
where n represents the number of data points, y_i represents the observed values, and ŷ_i represents the predicted values.
We describe the details of the network that is used for the downstream stress prediction task (T_d) in Table <ref>.
§.§ Experimental Procedures
We examine the performance of the models pre-trained with SSL in comparison to models trained solely through supervised training without pre-training. To conduct this comparison, we utilize a dataset consisting of 7,000 data points from six biosignals, with each data point representing a 10-second window and all six signal modalities. These data points are used to train both types of models (SSL pre-training followed by supervised fine-tuning and purely supervised training). To test the models, we hold out the last set of 910 data points with 10-second windows. Both models are evaluated using the same test set for each subject.
§ RESULTS AND EVALUATION
We evaluate each model using RMSE (Root Mean Squared Error). We show the RMSE scores for all subjects across the various questions in Figure <ref>. The scores were obtained by training on 10 sets of random data points three times, each time using different random data samples, and calculating the average RMSE score.
We observe that self-supervised pre-training achieves significantly lower RMSE compared to models trained solely through supervised learning. This indicates that SSL, which involves learning representations from unlabeled data, results in improved performance and reduced prediction errors compared to models trained only with labeled data. The figure demonstrates the superiority of SSL pre-training in terms of RMSE scores.
We also compare the models pre-trained with SSL and the purely supervised model using a set of different labeled data points. Figure <ref> shows the comparison between these two models for Subject 2 for all questions. Figure <ref> demonstrates that even when sampling only 5 random data points, the SSL pre-trained model repeatedly outperforms purely supervised training for all users and all outcome measures.
§ DISCUSSION
We present an analysis of a personalized multimodal learning framework, focusing on the advantages of self-supervised pre-training in comparison to a fully-supervised training paradigm. The results of this study emphasize the superiority of SSL over purely supervised training methods within the context of personalized learning. We observe this benefit with only a few labeled examples, demonstrating the personalized SSL can enable learning of complex and subjective outcomes such as stress using relatively small datasets.
With the aid of SSL, the personalized models can leverage both labeled and unlabeled data, making more efficient use of available information. This is especially valuable in real-world situations where limited labeled data or data of varying quality is present such as in the personalization of machine learning-powered healthcare applications <cit.>, where ample data are recorded per individual but each label is burdensome for the end-user to provide. By contrast, purely supervised models display high variability in their performance as indicated by a significant fluctuation in RMSE values across different training runs or data samples. This variability can be problematic for personalized applications, where consistent and dependable outcomes are crucial.
There are some limitations of this study which should be addressed in follow-up work. The WESAD dataset was created in a highly controlled environment. Future research endeavors should involve comprehensive and unconstrained data collection setups to test the system using real-time, “in-the-wild” data streams. Such experimental procedures will provide valuable insights into multimodal personalization in real-world scenarios.
§ CONCLUSION
We introduce a novel approach for enhancing deep learning models to predict stress using multimodal biosignals. Our technique empowers deep learning models to adapt to an individual's unique baseline temporal dynamics, enabling more precise and personalized predictions of stress with significantly fewer annotations required. By leveraging multi-modal biosignals, our method opens up new possibilities for understanding and addressing stress-related challenges in various contexts, including healthcare, workplace environments, and personalized mental health interventions. The potential impact of our findings is substantial, as the personalized learning technique presented here significantly reduces the need for extensive human annotation typically associated with deep learning models for healthcare.
§ ACKNOWLEDGEMENTS
The technical support and advanced computing resources from University of Hawaii Information Technology Services – Cyberinfrastructure, funded in part by the National Science Foundation CC* awards # 2201428 and # 2232862 are gratefully acknowledged.
|
http://arxiv.org/abs/2307.03185v1
|
20230706175829
|
Spin-Polarized Majorana Zero Modes in Proximitized Superconducting Penta-Silicene Nanoribbons
|
[
"R. C. Bento Ribeiro",
"J. H. Correa",
"L. S. Ricco",
"I. A. Shelykh",
"M. A. Continentino",
"A. C. Seridonio",
"M. Minissale",
"G. L. Lay",
"M. S. Figueira"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud, 150, Urca 22290-180, Rio de Janeiro, RJ, Brazil
Universidad Tecnológica del Perú, Nathalio Sánchez, 125, 15046, Lima, Perú
AGH University of Krakow, Academic Centre for Materials and Nanotechnology, al. A. Mickiewicza 30, 30-059 Krakow, Poland
Science Institute, University of Iceland, Dunhagi-3, IS-107,Reykjavik, Iceland
Science Institute, University of Iceland, Dunhagi-3, IS-107,Reykjavik, Iceland
Russian Quantum Center, Skolkovo IC, Bolshoy Bulvar 30 bld. 1, Moscow 121205, Russia
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud, 150, Urca 22290-180, Rio de Janeiro, RJ, Brazil
School of Engineering, Department of Physics and Chemistry, São Paulo State University (UNESP), 15385-000 Ilha Solteira-SP, Brazil
Aix-Marseille Université, CNRS, PIIM UMR 7345, 13397 Marseille Cedex, France
Aix-Marseille Université, CNRS, PIIM UMR 7345, 13397 Marseille Cedex, France
Instituto de Física, Universidade Federal Fluminense, Av. Litorânea s/N, CEP: 24210-340, Niterói, RJ, Brasil
We theoretically investigate the possibility of obtaining Majorana zero modes (MZMs) in penta-silicene nanoribbons (p-SiNRs) with induced p-wave superconductivity. The model explicitly considers an external magnetic field perpendicularly applied to the nanoribbon plane, as well as an extrinsic Rashba spin-orbit coupling (RSOC), in addition to the first nearest neighbor hopping term and p-wave superconducting pairing. By analyzing the dispersion relation profiles, we observe the successive closing and reopening of the induced superconducting gap with a single spin component, indicating a spin-polarized topological phase transition (TPT). Correspondingly, the plots of the energy spectrum versus the chemical potential reveal the existence of zero-energy states with a preferential spin orientation characterized by nonoverlapping wave functions localized at opposite ends of the superconducting p-SiNRs. These findings strongly suggest the emergence of topologically protected, spin-polarized MZMs at the ends of the p-SiNRs with induced p-wave superconducting pairing, which can be realized by proximitizing the nanoribbon with an s-wave superconductor, such as lead. The proposal paves the way for silicene-based Majorana devices hosting multiple MZMs with a well-defined spin orientation, with possible applications in fault-tolerant quantum computing platforms and Majorana spintronics.
Spin-Polarized Majorana Zero Modes in Proximitized Superconducting Penta-Silicene Nanoribbons
M. S. Figueira
August 1, 2023
=============================================================================================
§ INTRODUCTION
Ultra-scaling of nanoelectronic devices, beyond Moore’s law, still using the ubiquitous silicon technology, could come from silicene <cit.>, the first silicon-based graphene-like artificial two-dimensional (2D) quantum material, which further engendered the Xenes family <cit.>, and which was used to fabricate an atom-thin channel in a field effect transistor <cit.>. Moreover, topological silicon nanowires hosting Majorana fermions could be a materials platform for a quantum computer <cit.>. However, like other nanowire candidates, even proximitized ones based on heavier constituents with larger spin-orbit coupling, until now, no conclusive experimental measurements guarantee incontrovertibly the existence of topologically protected Majorana zero modes (MZMs) for the possible realization of qubits <cit.>.
Since the appearance of the generic Kitaev model <cit.>, several platforms were proposed to realize it, both from theoretical <cit.> and experimental points of view <cit.>. A helpful review of the experimental state-of-the-art on this subject can be found in Refs. <cit.>. This model considers p-wave superconducting pairing between electrons on different sites of a one-dimensional chain (Kitaev chain) and predicts the existence of unpaired MZMs at opposite ends of a finite Kitaev chain. However, conclusive experimental evidence for topologically protected MZMs is still lacking <cit.>, and their detection remains an elusive problem. Per se, this situation justifies the search for new platforms.
One possible alternative platform is provided by one-dimensional honeycomb nanoribbons (HNRs), which have been receiving growing attention in the literature <cit.>. In this respect, the mono-elemental 2D graphene-like materials coined Xenes, where X represents elements from group IIIA to group VIA of the periodic table, could constitute possible candidates to build HNRs with the ability to harbor MZMs at their ends <cit.>. Penta-silicene (X=Si) is an up-and-coming candidate in this family for obtaining an HNR geometry that can host MZMs <cit.>.
A paradigmatic breakthrough would be the experimental implementation of the generic Kitaev toy model with a silicon platform <cit.>. In a previous work <cit.>, we addressed the problem of Majorana spin discrimination employing a double-spin Kitaev zigzag honeycomb nanoribbons (KzHNR), which mimics two parallel Kitaev chains connected by the hopping t (see figure 1 of <cit.>). Since such KzHNRs have not been realized in experiments, we look instead in the present paper at the possibility of obtaining MZMs in p-SiNRs, harboring Dirac fermions, which have been epitaxially grown on Ag(110) surfaces <cit.>. Typically, highly perfect, atom thin, massively aligned single strand p-SiNRs, 0.8 nm in width, and with lengths extending to tens of nanometers were obtained by molecular beam epitaxy upon in situ Si deposition onto Ag(110) surfaces held at room temperature, as shown in Fig. <ref>(a). In scanning tunneling microscopy (STM) and high-resolution nc-AFM images, these p-SiNRs appear as two shifted lines of protrusions along the [110] direction as shown in Fig. <ref>(b-c) and are separated by twice the nearest neighbor Ag-Ag distance, i.e., 0.577 nm. Their hidden internal atomic structure was initially uncovered employing thorough density functional theory (DFT) calculations and simulations of the STM images <cit.>, pointing to an arrangement of pure Si pentagonal building blocks, as displayed in Fig. <ref>(d), which defines the missing pentagonal row (P-MR) model employed in the Supplemental information of reference <cit.> to optimize the angles and the distance between the silicon atoms in the pentagonal arrangement. This unique atomic geometry was later directly visualized by high-resolution non-contact atomic force microscopy (Fig. <ref>(c) from <cit.>). We will theoretically demonstrate that these p-SiNRs could constitute a tantalizing disruptive new Kitaev platform.
We propose an experimental implementation for discriminating spin-polarized MZMs in p-SiNRs grown on the Ag(110) surface and aligned along the [110] direction. Since silver is not a superconductor, we will proximitize them with lead, a conventional Bardeen–Cooper–Schrieffer (BCS) superconductor with a relatively high critical temperature of T_c = 7.2 K, upon evaporating in situ a thin lead film on top through a mask, as already mentioned in <cit.>. Indeed, Pb can be easily grown on Ag(110) surfaces <cit.> and is known to interact only very weakly with the SiNRs, preserving their integrity and their electronic properties <cit.>. Then, detecting and distinguishing the MZMs at the ends of the SiNRs will be done in situ at low temperatures with the STM following the methodology of Yazdani and co-workers <cit.>.
In this paper, we characterize the TPTs employing the spinless version of the model; the inclusion of the p-wave superconducting pairing and the magnetic field reveals the emergence of topologically protected MZMs with discriminated spin at opposite ends of the p-SiNRs, which constitutes the main finding of this work. We also calculate the wave function of the MZMs at the ends of the p-SiNR, showing its topological signature.
§ THE MODEL
§.§ Lattice transformations
In Fig. <ref>(a), to reduce the geometric complexity of the p-SiNR and facilitate the tight-binding calculations, we redefine its structure using square-shaped pentagons. In the geometry of the pentagons that constitute the p-SiNRs of Fig. <ref>(b), four silicon atoms are located on the missing pentagonal row, and only one exhibits a buckling structure (pink atoms). We neglect the buckling of these atoms and employ a planar configuration composed of square-shaped pentagons. As the distances between the silicon atoms that constitute the pentagons are similar, we take them all equal to a_0 and identify a_0 as the lattice parameter of the p-SiNR. We also define the nearest-neighbor hopping as t, which is taken as the energy unit in all calculations. L ≡ 2Na_0 is the length of the p-SiNR, with a_0 being the distance between atoms and N the number of sites of the corresponding Kitaev chain (top or bottom) employed in the calculation, as indicated in Fig. <ref>(c), which shows the shape of the p-SiNR and the unit cell composed of six atoms inside the dashed rectangle. We expect these simplifications not to change the results, since the lattice connectivity is preserved.
§.§ Effective Hamiltonian - spinless case
The total Hamiltonian, which describes the spinless p-SiNR of Fig. <ref> is given by
H = H_t + H_Δ ,
with
H_t = - ∑_i=1^N μ ( a^†_i,+ a_i,+ - a^†_i,- a_i,- + b^†_i,+ b_i,+ - b^†_i,- b_i,- + c^†_i,+ c_i,+ - c^†_i,- c_i,- + d^†_i,+ d_i,+ - d^†_i,- d_i,- + e^†_i,+ e_i,+ - e^†_i,- e_i,- + f^†_i,+ f_i,+ - f^†_i,- f_i,- )
- ∑_i=1^N t ( a^†_i,+ b_i,+ - b_i,- a^†_i,- + b^†_i,+ c_i,+ - c_i,- b^†_i,- + c^†_i,+ d_i,+ - d_i,- c^†_i,- + d^†_i,+ e_i,+ - e_i,- d^†_i,- + e^†_i,+ f_i,+ - f_i,- e^†_i,- )
- ∑_i=1^N-1 t ( a^†_i+1,+ f_i,+ - f_i,- a^†_i+1,- + a^†_i+1,+ c_i,+ - c_i,- a^†_i+1,- + d^†_i+1,+ f_i,+ - f_i,- d^†_i+1,- ) + H.c. ,
where μ is the chemical potential, the indices (-) and (+) label the electron and hole channels of the creation and annihilation operators, respectively, and H.c. denotes the Hermitian conjugate. The system Hamiltonian of Eq. (<ref>) was built according to the unit cell of nonequivalent Si atoms (a,b,c,d,e,f) shown in Fig. <ref>(c).
The p-SiNRs are grown on Ag(110) surfaces in the setup proposed here. However, silver is not a superconductor, and to generate a p-wave pairing Δ on the pink and yellow atoms of Fig. <ref>, we evaporate in situ a thin lead film over the Ag(110) surface in such a way that the buckled silicon atoms come into contact with the lead atoms. In the presence of a strong RSOC arising from the Pb atoms and an applied magnetic field, the s-wave Cooper pairs of the Pb film can enter the p-SiNR region via the proximity effect (Andreev reflections) <cit.>, giving rise to a p-wave-induced pairing in the double p-SiNR structure. Following the same procedure as in our previous work <cit.> and based on the Kitaev model <cit.>, we introduce a spinless p-wave superconducting pairing Δ between the “external” pink and yellow atoms of the same type as shown in Fig. <ref>. The Hamiltonian, which describes such a pairing, reads
H_Δ = ∑_i=1^N-1 Δ ( b^†_i,+ b^†_i+1,- - b^†_i+1,- b^†_i,+ + e^†_i,+ e^†_i+1,- - e^†_i+1,- e^†_i,+ ) + H.c. .
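As a rough numerical illustration of the spinless model above, the short Python sketch below assembles a finite-ribbon Bogoliubov-de Gennes (BdG) matrix with the intra- and inter-cell bonds of the unit cell (a,b,c,d,e,f) and the p-wave pairing on the b and e sites, and diagonalizes it. This is not the authors' code: a generic particle-hole block structure H = [[h-μ, Δ],[Δ†, -(h-μ)^T]] is assumed (matching the hopping and pairing terms above up to operator-ordering conventions), and the parameter values are those used later in the text (Δ = 0.5t, μ = 0.4t, N = 100).

```python
import numpy as np

t, mu, Delta, N = 1.0, 0.4, 0.5, 100          # energies in units of t
norb = 6 * N                                   # total number of lattice sites
a, b, c, d, e, f = range(6)                    # orbital labels inside one unit cell

def idx(cell, orb):
    return 6 * cell + orb

h = np.zeros((norb, norb))                     # single-particle hopping matrix
for i in range(N):
    # intra-cell nearest-neighbour bonds a-b, b-c, c-d, d-e, e-f
    for o1, o2 in [(a, b), (b, c), (c, d), (d, e), (e, f)]:
        h[idx(i, o1), idx(i, o2)] = -t
    # inter-cell bonds f_i-a_{i+1}, c_i-a_{i+1}, f_i-d_{i+1}
    if i < N - 1:
        for o1, o2 in [(f, a), (c, a), (f, d)]:
            h[idx(i, o1), idx(i + 1, o2)] = -t
h = h + h.T                                    # add the Hermitian-conjugate terms

D = np.zeros((norb, norb))                     # p-wave pairing on the edge atoms b and e
for i in range(N - 1):
    for o in (b, e):
        D[idx(i, o), idx(i + 1, o)] = Delta
D = D - D.T                                    # antisymmetric (p-wave) pairing matrix

H_BdG = np.block([[h - mu * np.eye(norb), D],
                  [D.conj().T, -(h - mu * np.eye(norb)).T]])
E, V = np.linalg.eigh(H_BdG)
print("four smallest |E|:", np.sort(np.abs(E))[:4])
```

In the topological regime one expects the two smallest |E| values to be pinned near zero, corresponding to the end-localized modes discussed below.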
§.§ Effective Hamiltonian - spinful case
In order to properly account for the spin degree of freedom in the superconducting p-SiNRs, we follow our previous work <cit.>. We introduce a Zeeman effect due to the application of an external magnetic field perpendicular to the p-SiNRs plane. The Hamiltonian, which accounts for the Zeeman effect, reads:
H_z = ∑_i=1,σ^N Z sgn(σ) ( a^†_i,σ a_i,σ + b^†_i,σ b_i,σ + c^†_i,σ c_i,σ + d^†_i,σ d_i,σ + e^†_i,σ e_i,σ + f^†_i,σ f_i,σ ) + H.c. ,
wherein Z is the effective strength of the external Zeeman field and σ=↑,↓ is the spin index for each operator.
The extrinsic RSOC induced in the p-SiNRs can be modulated by the action of an external electric field E⃗ applied perpendicularly to the nanoribbon plane <cit.>. Its corresponding general Hamiltonian reads
H_R = ∑_i,j=1,σ^N i c^†_i,σ ( u⃗_i,j · γ⃗ ) c_j,σ̄ + H.c.,
where u⃗_i,j = -R/a_0 k̂ × δ⃗_i,j, with R being the extrinsic RSOC parameter, δ⃗_i,j is the vector that connects the adjacent lattice sites i and j, and γ⃗ are the Pauli matrices. The index σ̄ indicates the spin direction opposite to σ. Eq. (<ref>) then becomes
H_R = ∑_i=1,σ^N ( γ_1 a^†_i,σ b_i+1/2,σ + γ_2 b^†_i+1/2,σ a_i,σ + a^†_i,σ c_i-1,σ - c^†_i-1,σ a_i,σ + (-i) a^†_i,σ f_i,σ + (i) f^†_i,σ a_i,σ
+ γ_3 b^†_i+1/2,σ c_i+1,σ + γ_4 c^†_i+1,σ b_i+1/2,σ + (-i) c^†_i+1,σ d_i+1,σ + (i) d^†_i+1,σ c_i+1,σ - d^†_i+1,σ f_i,σ + f^†_i,σ d_i+1,σ
+ γ_3 d^†_i+1,σ e_i+3/2,σ + γ_4 e^†_i+3/2,σ d_i+1,σ + γ_1 e^†_i+3/2,σ f_i+2,σ + γ_2 f^†_i+2,σ e_i+3/2,σ ) + H.c.,
where γ_1 = -1/2 + i√(3)/2 , γ_2 =1/2 - i√(3)/2 , γ_3 =-1/2 - i√(3)/2 and γ_4 =1/2 + i√(3)/2.
Notice that from Eqs. (<ref>) and (<ref>), we are assuming the external Zeeman field Z to be perfectly perpendicular to the RSOC field, i.e., Z ≡ Z_⊥ ≠ 0 and Z_∥ = 0. In Rashba nanowire setups, this condition is responsible for the vanishing of the induced superconducting gap at zero momentum (inner gap) and the opening of a constant gap at finite momentum (outer gap), which characterizes the topological phase transition and the concomitant emergence of MZMs protected by the outer gap <cit.>.
However, from the experimental perspective, ensuring that the magnetic field is applied only along the direction perpendicular to the RSOC field can be challenging. It is therefore natural to also consider the effects of Z_∥ ≠ 0. In this situation, we have both components of the Zeeman field, and the critical magnetic field condition for the topological phase transition remains the same. However, the behavior of the outer gap is no longer constant, which affects the topological protection of the MZMs towards fault-tolerant quantum computing operations. The effect of Z_∥ on the outer gap is not so detrimental if the RSOC is strong.
It is worth noticing that the opposite cases of Z ≡ Z_∥≠ 0 and Z_⊥ = 0 can lead to the vanishing of the outer gap, hence preventing the topological phase and emergence of MZMs. Therefore, since our system is qualitatively described by the similar underlying physics of Rashba nanowires, it is appropriate to experimentally ensure the dominance of the magnetic field component perpendicular to the Rashba field.
Considering also the spin degree of freedom in both H_t and H_Δ [Eqs. (<ref>) and (<ref>)], we can now define the total system Hamiltonian as
H_total = H_t + H_Z + H_R + H_Δ,
which can be written in the corresponding Bogoliubov-de Gennes (BdG) form in k-space as
H_total(k) = Φ^T H_BdG(k) Φ, with
H_BdG(k) =
[
H_↑, ↑(k) H_↑, ↓(k) H_Δ,↑, ↑(k) H_Δ,↑, ↓(k)
H_↑, ↓(k) H_↓, ↓(k) H_Δ,↓, ↑(k) H_Δ,↓, ↓(k)
H^* _Δ,↑,↑(-k) H^* _Δ,↑,↓(-k) H^*_↑, ↑(-k) H^*_↑, ↓(-k)
H^* _Δ,↓,↑(-k) H^* _Δ,↓,↓(-k) H^*_↓, ↑(-k) H^*_↓, ↓(-k)
],
where H_σ,σ'(± k) and H_Δ,σ,σ'(± k) represent the matrix elements for different spin directions and the matrix elements corresponding to the part of the matrix where superconducting couplings Δ appear, respectively. The spinor Φ was constructed with the fermionic operators in the following order:
Φ^T=(a_k,↑,b_k,↑,c_k,↑,d_k,↑,e_k,↑,f_k,↑,
a_k,↓,b_k,↓,c_k,↓,d_k,↓,e_k,↓,f_k,↓,
a^†_-k,↑,b^†_-k,↑,c^†_-k,↑,d^†_-k,↑,e^†_-k,↑,f^†_-k,↑,
a^†_-k,↓,b^†_-k,↓,c^†_-k,↓,d^†_-k,↓,e^†_-k,↓,f^†_-k,↓) .
The spin alignment for each situation in the next section is computed numerically. We calculate the mean value of the Pauli matrix in ẑ direction Ŝ_z, i.e., ⟨Ŝ_z ⟩ = ⟨Ψ|Ŝ_z|Ψ⟩, where |Ψ⟩ are the eigenvectors of the total Hamiltonian given by Eq. (<ref>).
In hybrid semiconducting-superconducting nanowires, sometimes called Majorana nanowires, the following features strongly suggest the emergence of MZMs at the nanowire ends <cit.>:
(a) Closing and subsequent reopening of the superconducting gap in the bulk relation dispersion as the chemical potential μ changes, indicating a TPT;
(b) Emergence of persistent zero-modes for specific system parameter values associated with nonoverlapping wave functions localized at the opposite ends of the nanowire.
To obtain the TPTs present in the p-SiNRs, we will consider the infinite case given by the Hamiltonian of Eq. (<ref>). We calculate the bulk band structure, discussed in detail in the supplemental material (SM). To investigate the existence of MZMs in the p-SiNRs, we will analyze the spinless p-SiNRs with finite size N=100 and calculate the energy spectrum as a function of the chemical potential μ and the probability density function |ψ|^2 associated with the zero-energy states which arise on the real axis of the energy spectrum.
Both the energies E_n and eigenvectors ψ_n per site are obtained by numerically solving the Schrödinger equation H ψ_n = E_n ψ_n for the Hamiltonian of Eq. (<ref>). To evaluate the position dependence of the wave functions associated with zero energy states, we numerically calculate the eigenvector ψ_n when E_n=0, which allows obtaining the probability density per lattice site according to
| ψ_n | ^2 = ψ_n ψ_n ^*.
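A minimal numerical sketch of this step, assuming the arrays E, V, N and norb produced by the spinless BdG construction sketched earlier, is:

```python
import numpy as np

# Probability density per unit cell of the eigenstate closest to E = 0,
# summing particle and hole weight on every site (cf. the relation above).
n0 = np.argmin(np.abs(E))                  # index of the state closest to zero energy
psi = V[:, n0]
density = np.abs(psi[:norb])**2 + np.abs(psi[norb:])**2
cell_density = density.reshape(N, 6).sum(axis=1)
print("weight in first / last unit cell:", cell_density[0], cell_density[-1])
```

For a state in the topological phase, most of the weight should sit in the first and last cells, signalling end-localized modes.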
§ RESULTS AND DISCUSSION
§.§ Finite spinless p-SiNRs
We employed the following parameter set in all the calculations: Δ = 0.5 t, Z=0.1t, R=0.05t and N=100. The top panels of Fig. <ref> show the bulk energy dispersion of the p-SiNRs, in the presence of the superconducting p-wave pairing, described by Eqs. (<ref>-<ref>), along the k_x direction, for three representative values of the chemical potential μ [vertical arrows in panel (d)]. Fig. <ref>(a) depicts the closing of the SC gap at k_x=0 for μ = 0.0t. As μ increases, the SC gap opens, as shown in panel (b) for μ = 0.4t, and closes again at k_x=0 for μ = 0.7t, as shown in panel (c). This closing and reopening of the SC gap upon tuning μ characterizes a topological phase transition. The bulk-boundary correspondence principle <cit.> ensures the existence of topologically protected MZMs at the ends of the p-SiNRs.
To verify the emergence of MZMs associated with the TPTs depicted in Fig. <ref>(a)-(c), we plot the p-SiNRs energy spectrum as a function of μ in Fig. <ref>(d). There are no zero-energy modes for the values of μ where the gap closes (red and magenta vertical lines). However, for values of μ inside the topological gap, for example, when μ=0.4t (green vertical line), two zero-energy states appear on the real axis, indicating the presence of MZMs at the opposite ends of the p-SiNRs, topologically protected by the effective p-wave SC gap [Fig. <ref>(b)]. This finding is similar to what was obtained in our previous work <cit.>, wherein the MZMs emerge at the opposite ends of a finite double zHNR.
Fig. <ref>(f) shows isolated zero-energy modes for μ=0.4t, which are associated with a nonoverlapping wave function, well-localized at the ends of the p-SiNRs, as depicted in Fig. <ref>(j); which together with the topological phase transition [Fig. <ref>(a)-(c)], is a piece of strong evidence that topologically protected MZMs emerge at the opposite ends of the spinless p-SiNRs. In the Supplemental Material, we developed an extensive analysis of the topological and trivial phases of the spinless p-wave superconducting p-SiNR, that can be distinguished by the Zak number topological invariant <cit.>. However, we cannot afford to do the same study for the spinful case due to the extreme mathematical complexity.
Although there are zero-energy modes for other values of μ [Fig. <ref>(e) and (g)], they are not associated with wave functions well-localized at the ends of the p-SiNR, as can be seen in Figs. <ref>(i) and (k), for μ=0.0t and μ=0.7t, respectively.
We also highlight that we analyze only one region of the full energy spectrum shown in Fig. <ref>(d), which presents other ranges of chemical potential wherein zero-energy states, associated with the emergence of MZMs, arise. A more detailed study of this energy spectrum can be found in the SM. We can also observe that, unlike the system of our previous work <cit.>, the energy spectrum of Fig. <ref>(d) is asymmetric about μ = 0.
§.§ Finite spinful p-SiNRs
Now we will analyze how the spinless scenario shown in Fig. <ref> is affected by the presence of both Zeeman field [Eq. (<ref>)] and extrinsic RSOC [Eq. (<ref>)] coupling within the spinful description [Eq. (<ref>)].
Figs. <ref>(a)-(e) exhibit the energy dispersion of the p-SiNRs given by the eigenenergies of the BdG Hamiltonian [Eq. (<ref>)] as a function of k_x, for distinct values of the chemical potential μ, indicated by vertical lines in Fig. <ref>(f). The spin polarization is indicated by the vertical color bar, in which red represents spin ↑=1, blue stands for spin ↓=-1, and lighter shades indicate states without a well-defined spin projection. As μ is tuned, we can see the opening and closing of the superconducting gap, thus indicating a TPT, as previously verified in the spinless situation [Fig. <ref>(a)-(c)]. However, here we can notice that each TPT associated with a specific value of μ has a preferential spin orientation, except in Fig. <ref>(c), where the system exhibits a conventional band gap.
The spin-polarized TPTs in Figs. <ref>(a),(b),(d), and (e) lead to the appearance of spin-polarized zero-modes in Fig. <ref>(f), which shows the system energy spectrum as a function of μ. These zero-modes indicate the emergence of spin-polarized MZMs at the ends of the p-SiNRs as μ is changed, similar to those found in <cit.>.
The panels (g)-(k) of Fig. <ref> depict the corresponding energy levels sorted in ascending order. The different values of μ used to calculate the MZMs are indicated by vertical black lines in Fig. <ref>(f). For μ=-2.7t [Fig. <ref>(g)], there are two spin-up zero modes on the real axis (red points), associated with the nonoverlapping wave functions shown in Fig. <ref>(l). For μ=-2.35t [Fig. <ref>(h)], there are two zero-energy states with spin up and another two with spin down, associated with degenerate (blue and red) nonoverlapping wave functions shown in Fig. <ref>(m). For μ=1.1t [Fig. <ref>(i)], there are four spin-down energy states away from the real axis, there are no MZMs, and the wave functions completely overlap along the ribbon. For μ=2.09t [Fig. <ref>(j)], there are two spin-up zero modes on the real axis (red points). Finally, for μ=2.2t [Fig. <ref>(k)], there are four MZMs with spin-down energy states on the real axis. This situation happens because, at μ=2.09t, a TPT occurs for spin up, the gap closes at k=±π, and for μ>2.09t the gap defines a trivial band insulator for this spin orientation, so spin-up MZMs are no longer available. These well-localized probability densities, describing wave functions centered at the opposite ends of the superconducting p-SiNRs and associated with zero-energy edge states, indicate the emergence of MZMs in the same way as previously found for the spinless system.
Fig. <ref> represents the same situation as Fig. <ref> but with the magnetic field pointing in the opposite direction. The net effect on the p-SiNRs is to flip the spin of the MZMs from up to down and vice versa for all values of μ. Therefore, it is possible to select the spin polarization of the MZMs by changing the chemical potential μ or the magnetic field orientation.
In Fig. <ref>, we analyze the dispersion relation, the energy spectrum, and the nature of the zero modes at μ=0 of Fig. <ref>, with the magnetic field pointing in the up direction. Fig. <ref>(a) depicts E(k) as a function of k_x, showing that there is a finite topological superconducting gap only for the spin-down orientation (blue line), while the spin-up branch (red line) remains gapless. This behavior suggests a spin-polarized TPT at zero chemical potential, implying that only the spin-down component of the system is within the topological regime, while the spin-up component belongs to a metallic phase. Figs. <ref>(b),(c) show two spin-down MZMs and their corresponding nonoverlapping wave functions, respectively, and Fig. <ref>(d) shows a detail of the region around μ=0.
We also investigate how the energy spectrum as a function of μ is affected by the length of the p-SiNRs. Fig. <ref> exhibits the energy spectrum of the superconducting p-SiNRs for increasing values of the nanoribbon length N. From the smallest system considered [N=10, Fig. <ref>(a)] to the largest one [N=100, Fig. <ref>(e)], a decrease in the amplitude of the oscillations around the real axis (E=0) can be noticed; at the same time, the MZMs on the real axis become better defined as N increases, and for N=100 the MZMs are well defined along the real axis. It should be mentioned that these oscillations around zero energy are expected for short Majorana nanowires due to the overlap between the Majorana wave functions at opposite ends. Therefore, such oscillations are expected to decrease as the system becomes larger. The same behavior was verified in the work <cit.>.
§ CONCLUSIONS AND PERSPECTIVES
This paper demonstrates the emergence of topologically protected MZMs at opposite ends of spinless and spinful p-SiNRs with p-wave superconducting pairing. These MZMs exhibit spin discrimination, and their polarization can be controlled by adjusting the nanoribbon chemical potential or the external magnetic field. To implement our findings experimentally, we propose a material engineering of p-SiNRs grown over an Ag(110) surface [cf. Fig. <ref>(a)], with a thin Pb film deposited on top <cit.>. In this device, the proximity effect will enable the penetration of Cooper pairs from the Pb s-wave superconductor into the p-SiNRs <cit.>, and in combination with an external magnetic field and the extrinsic RSOC modulated by the action of an external electric field E⃗ applied perpendicularly to the nanoribbon plane <cit.>, it will induce p-wave pairing in the buckled atoms of the double p-SiNRs structure [cf. Fig. <ref>(d)].
We should highlight the potential applications driven by the spin-polarized MZMs presented in this work, notably demonstrated in the results of Fig. <ref>, with the down spin component associated with MZMs, while the up component displays metallic features, resulting in a half-metallic behavior for the system <cit.>. This property could be harnessed to design a single Majorana transistor (SMT) built from a quantum dot (QD) sandwiched by finite p-SiNR leads <cit.>. This setup resembles the conventional single electron transistor (SET) <cit.>. The SMT can be a valuable tool for discerning between MZMs and trivial Andreev bound states <cit.>. Particularly, the leakage of MZMs through the QD <cit.>, along with both local and crossed Andreev reflections induced by a specific spin orientation within the p-SiNR-QD-p-SiNR SMT structure, is expected to generate distinct electronic transport signatures, enabling the identification of MZMs.
In addition to the spin-polarization of MZMs, our proposal also features the emergence of four MZMs at the ends of the p-SiNR, as illustrated in Figs. <ref>(h,k) and <ref>(h,k). Two MZMs are located at opposite ends of the top chain, while another two are at the bottom. Depending on the chemical potential and applied magnetic field orientation, these MZMs can exhibit either the same or opposite spin orientations. Having four MZMs, at least, is crucial for implementing quantum computing operations between two qubits, as it requires the presence of two fermionic sites, i.e., four MZMs <cit.>. Therefore, our proposal is a promising candidate for realizing hybrid quantum computing operations <cit.> between conventional qubits and spin-polarized Majorana-based qubits and paves the way for defining quantum computing operations using Majorana spintronics <cit.>.
§ APPENDIX
§.§ Topological Classification and Zak phase topological invariant
The classification of the topological phases of matter is provided by the analysis of fundamental symmetries for a given Hamiltonian in the discrete reciprocal space <cit.>, namely time-reversal (𝒯ℛ), particle-hole (𝒫ℋ) or charge conjugation and chiral symmetries (𝒦).
For the particular case of the spinless penta-silicene nanoribbons (p-SiNRs) with p-wave superconducting pairing at their edges [Eq. (1-3) of main text], it is verified that both 𝒯ℛ and 𝒫ℋ symmetries are preserved, since
𝒯h(k) 𝒯^-1 = h(-k)
and
𝒞h(k)𝒞^-1=-h(-k),
where 𝒯 and 𝒞 are the time-reversal and charge conjugation operators, respectively, and h(k) is a matrix coming from the Hamiltonian of Eq. (1-3) in the manuscript, rewritten in the Bogoliubov-de Gennes (BdG) representation, i.e.,
ℋ(k)= 1/2∑_kΨ^†_k h(k) Ψ_k,
with
Ψ_k≡(a_k, a^†_-k, b_k,b^†_-k, c_k, c^†_-k, d_k, d^†_-k, e_k, e^†_-k ,f_k,
f^†_-k)^T
being the spinor, which accounts the assumption of 𝒫ℋ symmetry.
The fulfilment of both 𝒯ℛ and 𝒫ℋ symmetries directly implies that the 𝒦 symmetry is also preserved <cit.>, meaning that
𝒦h(k)𝒦^-1 = -h(k),
where 𝒦= 𝒯·𝒞 corresponds to the chiral operator. Moreover, from the relations expressed in Eqs. (<ref>), (<ref>) and (<ref>), we obtain 𝒯^2=1, 𝒞^2=1 and 𝒦^2=1, meaning that the BdG Hamiltonian [Eq. (<ref>)] of the spinless superconducting p-SiNR [Eq. (1-3), main text] is a representative of the BDI symmetry class <cit.>, the same class of the well-known Kitaev chain <cit.>.
It is worth mentioning that the spinless superconducting p-SiNR is a simplification that assumes an “intrinsic” magnetic field. The presence of this field is crucial for inducing the formation of p-wave superconducting pairing along the nanoribbon edges. However, in practical experimental setups, the source of the spin polarization is an external magnetic field, which naturally breaks the 𝒯ℛ symmetry and hence the 𝒦 symmetry. From this argument, the “artificial” 𝒯ℛ symmetry of the spinless model can be neglected, and the BdG Hamiltonian of Eq. (<ref>) belongs to the D symmetry class <cit.>. Therefore, the p-SiNR in the presence of an applied magnetic field is a ℤ_2 superconductor in one dimension <cit.>, since the proposed double-spin Kitaev zigzag nanoribbon configuration can be regarded as two interconnected Kitaev chains with a hopping term (cf. discussion in the main text).
From the previously discussed perspective, the topological and trivial phases of the spinless p-wave superconducting p-SiNR, as described by the BdG Hamiltonian of Eq. (<ref>), can be distinguished by the Zak number topological invariant <cit.>
φ_Zak = - ∫_-π^π (dk / 2πi) ∂_k ln[Det(A(k))].
A nonzero quantized Zak phase φ_Zak is associated with the emergence of topologically protected edge states, which is an outcome of the conventional bulk-boundary correspondence <cit.>. Specifically, the integer values of φ_Zak topological invariant correspond to the number of topologically protected edge modes present in the system and characterize its topological phase transitions (TPTs).
To compute the Zak number through Eq. (<ref>), it is necessary to obtain a chiral matrix 𝒜(k) associated with h(k), which is performed through the computation of a unitary transformation outlined below:
h̃(k) = 𝒰^†h(k)𝒰 = [ 0 A(k); A^*(k) 0 ],
bringing h(k) to its chiral form, where
A(k)=
[ 2μ ,  2t e^{i k/2} ,  2t e^{-i k} ,  0 ,  0 ,  2t ;
  2t e^{-i k/2} ,  2Φ_k + 2μ ,  2t e^{i k/2} ,  0 ,  0 ,  0 ;
  2t e^{i k} ,  2t e^{-i k/2} ,  2μ ,  2t ,  0 ,  0 ;
  0 ,  0 ,  2t ,  2μ ,  2t e^{i k/2} ,  -2t e^{-i k} ;
  0 ,  0 ,  0 ,  2t e^{-i k/2} ,  2Φ_k + 2μ ,  2t e^{i k/2} ;
  2t ,  0 ,  0 ,  -2t e^{i k} ,  2t e^{-i k/2} ,  2μ ],
is the chiral matrix, with Φ_k = i Δsin(2 k ).
By considering Eq. (<ref>) and Eq. (<ref>) and employing numerical integration, it becomes feasible to compute the Zak number for several values of chemical potential μ. The manipulation of μ triggers the closing and subsequent reopening of the superconducting gap, a phenomenon closely related to the TPTs, as discussed in the main text.
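A minimal numerical sketch of such an evaluation is given below. The 6×6 block is transcribed directly from the chiral matrix quoted above (in units of t), and the accumulated phase of Det A(k) across the Brillouin zone implements the integral for φ_Zak; the grid size and the parameter values are illustrative assumptions, not the exact settings used by the authors.

```python
import numpy as np

def A_of_k(k, t=1.0, mu=0.4, Delta=0.5):
    """Chiral block A(k) of the spinless BdG Hamiltonian, as quoted above."""
    Phi = 1j * Delta * np.sin(2 * k)
    e = np.exp
    return np.array([
        [2*mu,           2*t*e(1j*k/2),  2*t*e(-1j*k), 0,              0,              2*t],
        [2*t*e(-1j*k/2), 2*Phi + 2*mu,   2*t*e(1j*k/2),0,              0,              0],
        [2*t*e(1j*k),    2*t*e(-1j*k/2), 2*mu,         2*t,            0,              0],
        [0,              0,              2*t,          2*mu,           2*t*e(1j*k/2),  -2*t*e(-1j*k)],
        [0,              0,              0,            2*t*e(-1j*k/2), 2*Phi + 2*mu,   2*t*e(1j*k/2)],
        [2*t,            0,              0,            -2*t*e(1j*k),   2*t*e(-1j*k/2), 2*mu],
    ])

def zak_number(mu, nk=4001):
    """Numerical estimate of -1/(2*pi) times the phase winding of Det A(k)."""
    ks = np.linspace(-np.pi, np.pi, nk)
    dets = np.array([np.linalg.det(A_of_k(k, mu=mu)) for k in ks])
    phase = np.unwrap(np.angle(dets))        # continuous phase of Det A(k)
    return -(phase[-1] - phase[0]) / (2 * np.pi)

print(round(zak_number(0.4), 3))             # estimate of the Zak number at mu = 0.4t
```

Values of μ for which the returned number changes abruptly correspond to the gap closings discussed in the main text.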
In this context, Fig. <ref> illustrates the Zak number across distinct regions in the bulk energy dispersion of the spinless p-wave superconducting p-SiNR. Notably, a Zak phase of zero corresponds to regions where zero modes are absent, indicating that the system resides within the topologically trivial phase. Conversely, for φ_Zak≠ 0, zero-energy modes emerge, indicating the presence of topologically protected Majorana zero modes (MZMs) at the edges of either one (φ_Zak= 1) or both top/bottom chains (φ_Zak = 2) of the p-SiNR <cit.>.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ ACKNOWLEDGEMENTS
M S F, M A C, and A C S acknowledge financial support from the National Council for Scientific and Technological Development (CNPq) grant numbers 311980/2021-0, 305810/2020-0, 308695/2021-6, respectively. M S F acknowledges the Foundation for Support of Research in the State of Rio de Janeiro (FAPERJ) processes number 210 355/2018 and 211.605/2021. M A C acknowledges financial support to the Foundation for Support of Research in the State of Rio de Janeiro (FAPERJ) for the fellowship of the Programa Cientistas do Nosso Estado, E-26/201.223/2021. L S R and I A S acknowledge the Icelandic Research Fund (Rannis), grant No. 163082-051.
§ AUTHOR CONTRIBUTIONS
All authors participate in the scientific discussion of the work. All authors reviewed the paper. M.S.F., R.C.B.R., A.C.S., L.S.R., M.M., and G.L.L. edit the paper. R.C.B.R. performed the numerical calculations. R.C.B.R., and L.S.R. performed analytical calculations.
§ COMPETING INTERESTS
The authors declare no competing interests.
§ ADDITIONAL INFORMATION
Supplementary Information The online version contains supplementary material
Correspondence and requests for materials should be addressed to M.S.F ([email protected]).
|
http://arxiv.org/abs/2307.01976v1
|
20230705012957
|
Fragile superconductivity in a Dirac metal
|
[
"Chris J. Lygouras",
"Junyi Zhang",
"Jonah Gautreau",
"Mathew Pula",
"Sudarshan Sharma",
"Shiyuan Gao",
"Tanya Berry",
"Thomas Halloran",
"Peter Orban",
"Gael Grissonnanche",
"Juan R. Chamorro",
"Kagetora Mikuri",
"Dilip K. Bhoi",
"Maxime A. Siegler",
"Kenneth K. Livi",
"Yoshiya Uwatoko",
"Satoru Nakatsuji",
"B. J. Ramshaw",
"Yi Li",
"Graeme M. Luke",
"Collin L. Broholm",
"Tyrel M. McQueen"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con"
] |
Fragile superconductivity in a Dirac metal
August 1, 2023
==========================================
* Institute for Quantum Matter and William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, Baltimore, Maryland 21218, USA
* Department of Physics and Astronomy, McMaster University, Hamilton, Ontario, L8S 4M1, Canada
* Department of Chemistry, Johns Hopkins University, Baltimore, Maryland, 21218, USA
* Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY, USA
* Kavli Institute at Cornell for Nanoscale Science, Ithaca, NY, USA
* Institute for Solid State Physics (ISSP), University of Tokyo, Kashiwa, Chiba, 277-8581, Japan
* Department of Physics, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
* Trans-scale Quantum Science Institute, University of Tokyo, Bunkyo-ku, Tokyo 113-8654, Japan
* CREST, Japan Science and Technology Agency (JST), 4-1-8 Honcho Kawaguchi, Saitama, 332-0012, Japan
* Canadian Institute for Advanced Research, Toronto, M5G 1Z7, ON, Canada
* TRIUMF, Vancouver, British Columbia, V6T 2A3, Canada
* NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland, 20899, USA
* Department of Materials Science and Engineering, Johns Hopkins University, Baltimore, Maryland, 21218, USA
Studying superconductivity in Dirac semimetals is an important step in understanding quantum matter with topologically non-trivial order parameters. We report on the properties of the superconducting phase in single crystals of the Dirac material LaCuSb2 prepared by the self-flux method. We find that chemical and hydrostatic pressure drastically suppress the superconducting transition. Furthermore, due to large Fermi surface anisotropy, magnetization and muon spin relaxation measurements reveal Type-II superconductivity for applied magnetic fields along the a-axis, and Type-I superconductivity for fields along the c-axis. Specific heat confirms the bulk nature of the transition, and its deviation from single-gap s-wave BCS theory suggests multigap superconductivity. Our tight-binding model points to an anisotropic gap function arising from the spin-orbital texture near the Dirac nodes, providing an explanation for the appearance of an anomaly in specific heat well below T_c. Given the existence of superconductivity in a material harboring Dirac fermions, LaCuSb2 proves an interesting material candidate in the search for topological superconductivity.
§ INTRODUCTION
In topological semimetals, the electronic band structure features relativistic linearly-dispersive band crossings. This degeneracy gives rise to quasiparticles in condensed matter systems known as Dirac or Weyl fermions, analogous to those found in quantum field theory. Realizing these in bulk materials is an exciting prospect because they bring about topological protection, physical properties beyond those seen in the semiclassical regime, and novel phases of matter. For example, Dirac semimetals are often found to have large linear transverse magnetoresistance, and negative longitudinal magnetoresistance due to the chiral anomaly <cit.>. Furthermore, new phases of matter can be realized when these topological fermions influence other electronic or magnetic properties, and vice-versa. Exotic topological phases like a monopole superconductor are believed to arise in superconducting magnetic Weyl semimetals with inversion symmetry, whose gap functions are given by monopole harmonics <cit.>. On the way to realizing such monopole superconductors, it is natural to study centrosymmetric, non-magnetic superconductors that harbor Dirac fermions to understand the physical phenomena that emerge. One family of interest is the set of square-net materials that contain pnictogens like Sb or Bi, where Dirac fermions arise due to nonsymmorphic symmetry <cit.>. Chamorro et al. <cit.> found a large linear magnetoresistance and small effective masses in the square-net material LaCuSb2 that were attributed to Dirac fermions. Earlier reports of superconductivity <cit.> in this material make it an ideal system to study the interplay between Dirac fermions and superconductivity; however, superconductivity was not detected by Chamorro et al.
Here we resolve this contradiction, demonstrating that there is an extreme sensitivity of the physical properties to copper stoichiometry in LaCuSb2. We have grown single crystals of LaCuSb2 with varying copper content to study the effect of disorder on the superconducting state. In optimized samples with the highest superconducting T_c and closest to ideal stoichiometry, we used specific heat, susceptibility, magnetization, and muon spin rotation and relaxation (μSR) to characterize the superconductivity and its critical behavior. We utilize a combination of simple free-electron models, Bardeen-Cooper-Schrieffer (BCS) models, and tight-binding models to understand the fragile and anisotropic superconductivity in LaCuSb2.
§ RESULTS
§.§ Structure
LaCuSb2 crystallizes in the centrosymmetric, nonsymmorphic tetragonal space group P4/nmm (129). It can support both the Cu-deficient ZrCuSiAs structure <cit.>, and Cu-excess defect-CaBe2Ge2 structure, where the latter features an interstitial Cu site <cit.>, as shown in Fig. <ref>a. We used N_Cu, related to the flux ratio used for crystal growth, as a parameter to tune stoichiometry in LaCuSb2 (see Methods). We used powder X-ray diffraction (XRD) to determine the lattice constants of the grown crystals. Fig. <ref>b shows the space of lattice parameters a versus c (in Å) for various crystals at room temperature. There is a clear nearly-linear trend in the a versus c data below and above N_Cu≈ 2.25, where the trend changes from expanding c, contracting a to both expanding a and c. This is consistent with data from Ohta <cit.>, whereby the change in the lattice trends occurs at the change between Cu-deficient samples and Cu-excess samples. In the same plot, we have included data from crystals grown with differing starting flux ratios than that reported in this paper. These fall on roughly the same trendlines, affirming that the single parameter N_Cu can be used to tune a wide range of stoichiometry. We used single-crystal XRD to provide a quantitative estimate of the ratio of the elements, and in particular the copper concentration x_Cu, in various samples, as shown in Fig. <ref>a.
§.§ Superconducting Dome
To demonstrate the effect of off-stoichiometry on the superconductivity, we measured DC magnetization on multiple samples grown with different flux ratios. We plotted T_c against parameters characterizing the sample that, over the relevant range of stoichiometry, can be expected to vary if not in proportion then monotonically with the chemical potential. Ours is a self-doping process with Cu1 vacancies in the case of the ZrCuSiAs structure, or occupied Cu1 with interstitial Cu1' ions in the case of the defect-CaBe2Ge2 structure. Both the unit cell volume V_0 and the refined copper occupancy x_Cu are suitable parameters as a function of which to trace out the superconducting dome. The results are shown in Fig. <ref>a and Fig. <ref>b, while the supporting susceptibility data are shown in Extended Fig. <ref>a. We indeed see a systematic change in the superconducting transition temperature as a function of the unit cell and the Cu-occupancy. There is a nearly-linear increase of T_c at small unit cell volumes (Cu-deficient samples), a saturation near nominal stoichiometry until N_Cu≈ 2, and a nearly-linear decrease with larger unit cell volumes (Cu-excess samples).
§.§ Hydrostatic Pressure
Given the large variation in the superconducting transition with chemical composition, we studied the effects of hydrostatic pressure on the superconducting state. As shown in Fig. <ref>c, T_c decreases with increasing pressure, though full diamagnetic screening is achieved up to the largest pressure of 2 GPa accessed. The reduction in T_c with pressure amounts to dT_c/dP = -0.31(3) K/GPa. The suppression of T_c with doping also occurs as the volume decreases, at a rate dT_c/dv|_chem = 80(20) K, where dv=(V-V_0)/V_0 relative to the optimized N_Cu=2. If the effect of doping were solely associated with chemical pressure, then the corresponding bulk modulus would be K = -(dT_c/dv|_chem)(dT_c/dP)^-1 = 270(80) GPa. With this, the extracted T_c and pressure can be directly compared to the chemical doping, as seen in Fig. <ref>b. Note the large effective bulk modulus may be taken as an indication that the effects of copper doping go beyond chemical pressure. Interestingly, for pressures between 1.7 GPa and 2 GPa, there is an anomaly within the transition to diamagnetism near T^* ≈ 0.35 K. The ambient-pressure specific heat capacity displays a potentially related second anomaly at lower temperature, near this value.
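As a quick consistency check, the effective bulk modulus quoted above follows from the two slopes; the short sketch below uses the values given in the text, with uncertainties propagated naively (an assumption, not the authors' error analysis).

```python
import numpy as np

dTc_dP, s_P = -0.31, 0.03      # K/GPa and its uncertainty
dTc_dv, s_v = 80.0, 20.0       # K and its uncertainty, with dv = (V - V0)/V0
K = -dTc_dv / dTc_dP
s_K = abs(K) * np.hypot(s_v / dTc_dv, s_P / dTc_dP)
print(f"K = {K:.0f} +/- {s_K:.0f} GPa")   # ~258 +/- 70 GPa, consistent with 270(80) GPa
```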
§.§ Specific Heat
To determine whether superconductivity is a bulk effect in LaCuSb2 and glean information about the superconducting gap function, we turn to specific heat capacity measurements. For a stoichiometric sample in zero field, C_p/T features a sharp jump with midpoint T_c = 0.98(2) K (Fig. <ref>a). Apart from a higher transition temperature, C_p(T) is similar to that of polycrystalline samples reported by Muro et al. <cit.>. The single crystalline sample, however, enables measurements with a well-defined field orientation. Our measurements of C_p(T) for fields applied along the c-axis are in Fig. <ref>a. For H = 75 Oe superconductivity is fully suppressed, while for H=25 Oe the specific heat jump Δ C_p at T_c=0.81(4) K is actually enhanced over the zero field data. In a Type-I superconductor the superconducting transition becomes first order in an applied field meaning there is latent heat transfer at T_c <cit.>. While no latent heat was detected for LaCuSb2 the enhanced value of Δ C_p(T_c)/C_p(T_c)= 1.16(3) in 25 Oe compared with Δ C_p(T_c)/C_p(T_c)= 0.94(4) in 0 Oe might be a result of a first-order transition smeared by the inhomogeneous internal field distribution resulting from demagnetization effect in the irregularly shaped sample.
In contrast to the high-temperature specific heat capacity, which is dominated by phonons and electrons (quantified by the Debye factor β_3 and Sommerfeld constant γ_n, respectively), C_p(T) at low T is dominated by a nuclear Schottky anomaly (see SI <ref> for more details on the corresponding modeling). Subtracting the nuclear Schottky and phonon contributions from the measured specific heat allows us to estimate the zero-field electronic specific heat, C_el(T) = C_p(T) - C_N(T) - β_3 T^3, shown in Fig. <ref>b. From these data, we conclude that superconductivity in LaCuSb2 is of bulk origin and not a secondary phase or surface effect. The electronic specific heat capacity also may be directly compared with predictions from BCS theory. For instance, the size of the specific heat jump in zero field Δ c/γ_n T_c = 0.94(4) is less than that expected from BCS theory of Δ c/γ_n T_c = 1.43. This suggests a gap Δ(0)/γ_n T_c differing from conventional BCS theory, or possibly multiband superconductivity. Furthermore, the electronic specific heat features an abrupt drop at low temperatures near T^* ≈ 0.35 K, leading to an exponentially-activated decline. This is a similar temperature scale to the kink seen in the high-pressure susceptibility data in Fig. <ref>c. As we will show, our tight-binding model points to the possibility of an anisotropic gap function that causes the feature at T^*. To highlight the main features of the data, we explore several possible models in the SI <ref>, including the Eilenberger self-consistent two band model <cit.>, shown in Fig. <ref>b.
§.§ Quantum oscillations
To quantify changes in the Fermi surface, we studied Shubnikov-de Haas quantum oscillations in the Hall resistivity for a stoichiometric sample. The large residual-resistivity ratio of 16 indicates high sample quality (see SI, <ref>). The magnetoresistance ρ(H)/ρ(0) is nearly linear in fields up to 16 T (Fig. <ref>a), and is largest at the lowest temperatures, similar to previous results found for this Dirac material <cit.>. Next, the Hall data (Fig. <ref>b) show nearly linear behavior at low fields less than 7T, but reveals non-linearity and quantum oscillations at high fields up to 16 T. We extracted the oscillatory behavior as a function of field and temperature from the Hall data (Fig. <ref>c). Using the amplitude of the 48 T frequency and the Lifshitz-Kosevich formula <cit.>, we could extract the effective masses of the charge carriers (Fig. <ref>d). This is dominated by small effective masses that mainly arise from the small electron pockets near the X point. We find m_a^* = 0.058 m_e from the 48 T oscillation, which is within error of the masses found in samples from Chamorro et al. <cit.> and Akiba et al. <cit.>. It is worth noting the 48 T frequency found here is relatively unchanged compared to the above works, suggesting self-doping has negligible effects on the Fermi surface geometry.
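For illustration, the temperature dependence of an oscillation amplitude can be fitted with the Lifshitz-Kosevich thermal damping factor as in the sketch below; the field value and the synthetic data are placeholders, not the measured amplitudes or the actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Lifshitz-Kosevich thermal damping: R_T = X/sinh(X), X = 2*pi^2*k_B*T*m*/(hbar*e*B).
kB, hbar, e_ch, m_e = 1.380649e-23, 1.054571817e-34, 1.602176634e-19, 9.1093837015e-31
B = 14.0                                   # assumed mean field of the oscillation window (T)

def lk(T, A0, m_ratio):
    X = 2 * np.pi**2 * kB * T * (m_ratio * m_e) / (hbar * e_ch * B)
    return A0 * X / np.sinh(X)

rng = np.random.default_rng(0)             # synthetic illustration data only
T_data = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
amp = lk(T_data, 1.0, 0.058) * (1 + 0.02 * rng.standard_normal(T_data.size))
popt, pcov = curve_fit(lk, T_data, amp, p0=[1.0, 0.1])
print("m*/m_e =", popt[1])                 # recovers the input effective mass ratio
```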
§.§ Anisotropic Magnetization
We turn to magnetization measurements on the optimized sample to study the critical fields in LaCuSb2. Demagnetization corrections were taken into account to allow us to report the internal magnetic field H_int (see SI, <ref>). The 4π M versus H_int data can distinguish between Type-I and Type-II superconductivity. Fig. <ref>c,d shows the demagnetization-corrected magnetization data for internal fields along the a- and c-axis, respectively. Note the slopes χ = dM/dH_int are consistent with the expected value for bulk superconducting susceptibility 4πχ = -1 in both cases. With applied fields along the a-axis, the magnetization is linear at small fields indicative of a Meissner state, but then the diamagnetic magnetization abruptly decreases for H>H_c1(T). This suggests Type-II superconductivity with a small critical field H_c2(0) = 172(6) Oe, as deduced from extrapolating the data to zero temperature (see SI, <ref>). For applied fields along the c-axis, the magnetization remains linear for an extended field range until it sharply drops to zero magnetization near a critical field, rather than gradually decaying due to the appearance of flux lines as in the previous case, indicative of Type-I superconductivity. The response of LaCuSb2 to magnetic fields is highly anisotropic, as shown schematically in Fig. <ref>e.
§.§ Transverse-field μSR
So far, our discussions have involved measurements that probe bulk sample-averaged thermodynamic properties. The use of a local probe like μSR complements these results and confirms the anisotropic superconducting nature of LaCuSb2. We used transverse-field (TF) μSR to study the response of the superconducting state to applied magnetic fields. The geometry of the experiment is shown in Fig. <ref>a, with the co-mounted samples shown in Fig. <ref>b. When the samples were co-aligned with magnetic field parallel to the a-axis, the real Fourier transform amplitude (Fig. <ref>d) of the asymmetry data (Fig. <ref>c) showed a broad peak centered at fields lower than the applied field. The broad distribution of internal fields sampled by the muon ensemble indicates the formation of a vortex lattice. We fit the asymmetry data to
A_a(t) = A_0 [F e^-σ^2 t^2/2cos(ω t +ϕ) + (1-F) e^-λ_bg tcos(ω_bg t +ϕ)].
Here F is the fraction of muons that stop in LaCuSb2 with the remainder stopping in the Ag sample holder. ω = γ_μ H_int is the muon precession frequency in the average internal field; ω_bg = γ_μ H_app is the corresponding frequency for muons that stop in the Ag sample holder. λ_bg describes the exponential relaxation rate for muons that stop in Ag. σ characterizes the width of the internal field distribution in the sample; γ_μ = 2π· 135.5 MHz/T is the gyromagnetic ratio of the muon; and ϕ is the phase angle of the initial muon polarization. For the purpose of our analysis, the gaussian relaxation term in Equation <ref> provides an adequate approximation to the Fourier transform of the internal field distribution n(H) for the vortex lattice, which peaks at fields H<H_app <cit.>. With certain assumptions (see SI, <ref>), we used the temperature dependence of the relaxation rate to obtain the superfluid density ρ(T) = λ^2(0)/λ^2(T). This may be compared to theoretical models to extract information about the gap function, and possible multiband effects (see Extended Fig. <ref> for fitting models and their discussion). Most notably, we find the low temperature limit is consistent with conventional s-wave behavior and no thermal anomaly is observed in ρ(T) near T^*.
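For concreteness, the asymmetry model above can be written as a short fitting function; the calibration strategy noted in the comments is an assumption about standard practice, not a description of the actual analysis pipeline.

```python
import numpy as np

# Transverse-field asymmetry model quoted above; fields in tesla, times in seconds.
gamma_mu = 2 * np.pi * 135.5e6            # muon gyromagnetic ratio, rad s^-1 T^-1

def asymmetry(t, A0, F, sigma, H_int, H_app, lam_bg, phi):
    w, w_bg = gamma_mu * H_int, gamma_mu * H_app
    sample = F * np.exp(-0.5 * (sigma * t) ** 2) * np.cos(w * t + phi)
    silver = (1 - F) * np.exp(-lam_bg * t) * np.cos(w_bg * t + phi)
    return A0 * (sample + silver)

# In a typical analysis (assumed here), A0, H_app and phi are calibrated above T_c
# and held fixed, while F, sigma and H_int are fitted at each temperature below T_c.
```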
We then measured the same crystals with the magnetic field oriented parallel to the c-axis. In stark contrast to the previous case, the real Fourier transform amplitude (Fig. <ref>f) of the asymmetry data (Fig. <ref>e) featured a broad peak at internal fields higher than the applied field. To understand this, we note that in the Meissner state, most muons will enter the superconductor where B=0, while muons implanted in the Ag sample holder precess in the applied field. However, due to demagnetization effects, some regions of the superconducting sample sustain an internal field that is greater than the critical field H_c. In Type-I superconductors, such a region must become normal, as there is no other state (such as a vortex state) to support superconductivity. A Type-I superconductor with demagnetization factor N in an applied field H_app > (1-N)H_c is in the intermediate state, characterized by laminar structures of superconducting and normal regions, the latter of which maintains a constant internal field equal to the critical field H_c <cit.>. In our μSR experiment, N≈ 0.86 for the plate-like ensemble of co-aligned single crystals, and H_c = 59.8(1.0) Oe from magnetization data, meaning that in applied magnetic fields above H_app≈ 8.4 Oe, the samples were assuredly in the intermediate state. Therefore, the observed oscillation frequency in the μSR spectrum corresponds to the critical field H_c. To model the data and extract H_c(T), we fit the asymmetry spectrum using the equation
A_c(t) = A_0 [F (1-F_S) e^-σ^2 t^2/2cos(ω t +ϕ) + (1-F) e^-λ_bg tcos(ω_bg t +ϕ)]
Here F as before is the fraction of muons stopping in the sample, and F_S is the superconducting volume fraction of that sample. Only the normal regions with volume fraction F(1-F_S) will have a non-zero frequency ω due to the intermediate state. From the fits to the data, we find a large superconducting volume fraction F_S = 0.924 at 20 mK and 10 Oe, indicative of a bulk superconducting response as we expect from the thermodynamic data. We also extracted the critical field H_c(T) by tracking the temperature-dependence of the frequency ω>ω_bg from the normal regions in the sample. We found that in this entire temperature range (down to 0.02 K) in 10 Oe and 40 Oe, the FFT spectrum of muon precession always showed a peak at a field higher than the applied field. This provides microscopic evidence that LaCuSb2 is in a pure Type-I superconducting state when magnetic fields are applied parallel to the c-axis. Combining the a-axis and c-axis oriented magnetic fields allows us to map out the phase diagram, with the vortex behavior (and lack thereof) sketched in Fig. <ref>e and demonstrated quantitatively in Extended Fig. <ref>.
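The intermediate-state threshold quoted above follows directly from these numbers:

```python
N_demag, H_c = 0.86, 59.8        # demagnetization factor and critical field (Oe), from the text
print((1 - N_demag) * H_c)       # ~8.4 Oe: applied fields above this put the sample in the intermediate state
```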
§.§ Tight-Binding Model and Superconductivity
Towards a microscopic understanding of superconductivity in LaCuSb2, we implemented a tight-binding model for the pocket near the X-point that describes multiple bands derived from Sb orbitals related by the space-group symmetry <cit.>. The derivation is described in detail in the SI <ref>. The pair-wise degeneracy of the bands at X is enforced by the nonsymmorphic symmetry of the space group. These bands intertwine across the Brillouin zone, which contributes additional spin-orbital texture near X point. Assuming local isotropic attractions, we find anisotropic superconducting gaps on the Fermi pockets around X. We tentatively attribute the anomaly T^* observed in the specific heat measurement to this extreme anisotropy of the pairing gaps. At lowest temperatures, the excitations are exponentially-suppressed due to nodeless superconducting gaps, as in a conventional s-wave superconductor. When the temperature increases and reaches the smallest gap size, thermal fluctuations are significantly enhanced, giving rise to the anomalous increase in the specific heat observed around T^*=0.35 K. However, the superconducting gaps do not close at T^*, so this is a thermal cross-over phenomenon and there is no peak in the specific heat. When the temperature reaches T_c, the superconducting gaps close, which gives rise to the sharp discontinuity at T_c as usual for a BCS superconductor. This picture is sketched in Fig. <ref>f.
§ DISCUSSION
Despite the anisotropy we observe experimentally, we gain quantitative insight into the superconducting state by calculating thermodynamic quantities in a one-band, isotropic free-electron model as a first approximation. The relevant thermodynamic and superconducting quantities are tabulated in Table <ref>. In particular, we find a coherence length of ξ≈ 1μm, whereas the mean-free path is ℓ≈ 0.06 μm, suggesting LaCuSb2 is in the dirty limit<cit.> since ξ≫ℓ. The relevant expression <cit.> for calculating the Ginzburg-Landau (GL) parameter in the dirty limit is κ = (7.49× 10^3) γ_nV^1/2ρ_0 (with all quantities in cgs units), where ρ_0 is the value of the low-temperature residual resistivity plateau. The out-of-plane GL parameter κ_c depends on currents generated in the ab plane, so using ρ=ρ_0a for in-plane resistivity, we find κ_c = 0.398(7). That κ_c<1/√(2) is consistent with Type-I superconductivity for H∥ c. Extrapolating our magnetization data to T=0, we have H_c = 59.8(1.0) Oe and H_c2 = 172(6) Oe and thereby estimate<cit.> κ_a = H_c2/√(2) H_c = 2.03(8). Next, we use conventional BCS theory to estimate the thermodynamic critical field H_c. This field is related to the gap function and density of states <cit.>, and in cgs units is given by H_c(0) = 1.764√(6/π)·γ_nV^1/2 T_c. From this we estimate that H_c(0) ≈ 68(1) Oe, in remarkable agreement with the measured phase diagram (see Extended Fig. <ref>). We attribute the small critical field to the low T_c and the small density of states in this Dirac material.
Although Type-I superconductivity is mainly found in elemental superconductors and several binary compounds (e.g. noncentrosymmetric BeAu <cit.> and the Dirac semimetal candidate PdTe2 <cit.>), it has also been observed in ternary compounds including LaRhSi3 <cit.> and LiPd2Ge <cit.>, among others. However, superconductors that are Type-I for one field orientation and Type-II for another are not as widely reported. In the theory of conventional anisotropic superconductors <cit.>, the anisotropy factor γ = κ_a/κ_c is related to the effective masses as γ^2 = m_c/m_a. The anisotropy of the GL parameter is directly related to the anisotropy of the Fermi surface, as schematized in Fig. <ref>e. Furthermore, when γ≫ 1 it is possible to simultaneously satisfy κ_c<1/√(2) and κ_a > 1/√(2), implying a superconductor whose type depends on the direction of the applied magnetic field <cit.>. In particular, the direction of the field relative to the crystallographic axes will either favor or disfavor vortices in the superconducting order parameter, based on the free energy in distinct crystallographic directions. While rare, this behavior has been observed in C8K <cit.> and TaN <cit.>, where the angular dependence of the critical field was explicitly studied. However, such anisotropy appears not to have been well studied in ternary systems. Considering the previous estimates of κ_a and κ_c, the large anisotropy parameter γ^2 ≈ 26(2) is a consequence of the small in-plane effective masses of this Dirac material.
From specific heat, we have pointed to the possibility of multigap behavior or an anisotropic gap, supported by the findings of our tight-binding model. However, determining the temperature dependence of the gaps using our Hamiltonian requires fitting many free parameters. In Extended Fig. <ref>, we highlight several toy models that capture the overall features in the specific heat and superfluid density, but do not capture all features. One possible route to measuring this anisotropic gap is quasiparticle interference with scanning tunneling microscopy (STM). The quasiparticle interference signal should be significantly enhanced near the gap edge, which may be directly compared to the anomaly in the specific heat. We find it quite compelling to obtain a more complete microscopic understanding of the superconducting state of LaCuSb2 using a combination of theory and probes like STM. Overall, despite the fragility of the superconducting state in LaCuSb2 to stoichiometry, pressure, and magnetic fields, it is worth exploring related compounds, perhaps ones that contain magnetic moments, to continue the search for exotic topological phases like the monopole superconductor.
§ AUTHOR CONTRIBUTIONS
Sample synthesis and data analysis on the superconducting state was performed by C.J.L. under the supervision of C.L.B. and T.M.M. The μSR experiments were carried out by C.J.L., J.G., M.P., S.S., and P.O. under the supervision of C.L.B and G.M.L. The tight-binding analysis was performed by J.Z. under the supervision of Y.L. The DFT calculations for the band structure were performed by S.G. The heat capacity measurements were performed by C.J.L. and T.H. The He3 susceptibility and magnetization measurements were performed by C.J.L., T.B. and J.R.C. Quantum oscillations were measured and analyzed by G.G. under the supervision of B.J.R. Single-crystal X-ray diffraction measurements and refinements were performed by M.A.S. TEM measurements were performed by K.L. The hydrostatic pressure measurements were performed by K.M. and D.K.B. under the supervision of Y.U. and S.N. All authors contributed with comments and edits.
Synthesis: Single crystals of LaCuSb2 were grown by the self-flux method. Cut pieces from lanthanum ingot (Ames Laboratory, 99.99%), pieces of copper (Alfa Aesar, 99.999%), and antimony shot (Strem Chemicals, 99.9999%) were weighed with a total mass of about 4 g in various molar ratios and placed in an alumina crucible. An inverted catch crucible with an alumina strainer was placed atop the first crucible, and both were sealed in a quartz tube under partial pressure of argon gas. The ampules were heated to 1070^∘ C at a rate of 100^∘ C/h and held for 12 h, then cooled to 670^∘ C at a rate of 4^∘ C/h before centrifuging. Inspired by previous literature on isostructural LaAgSb2, which was reported to produce stoichiometric samples with starting ratio 0.045:0.091:0.864 (or roughly, 1:2:19.2) <cit.>, we used varying compositions 1:N_Cu:19.2 with 1 ≤ N_Cu≤ 6 as a parameter to tune the chemical potential of the system and grow crystals with different Cu content.
Polycrystalline samples were also prepared using a simple reaction of the elements in a quartz tube. Powders with ratios 1:δ:2 with δ = 0.8, 1.0, 1.2 were synthesized by first melting about 1.0 g of the elements in a quartz tube, using a step furnace at 600^∘ C for 24 hours. This polycrystal was then ground and reheated to 800^∘C for roughly 12 hours. Further heating of the powder at high temperatures, or low temperatures for extended periods of time, was disadvantageous due to the decomposition of the structure, as evidenced by larger relative percentages of secondary phases Sb and Cu2Sb. We compared the lattice constants (in particular, the c/a ratio) of the polycrystalline samples to those of our single crystals, and to those from the literature.
X-ray diffraction: Powder XRD data were collected at room temperature using a laboratory Bruker D8 Venture Focus diffractometer with LynxEye detector in the range from 10-80 degrees. Refinements on the powder XRD data were performed using Topas 5.0 (Bruker). The occupancy of La and Sb ions were constrained to 100%, while the Cu occupancy was refined freely. We also refined the strain parameter and preferred orientation, due to the layered nature of the crystals.
Single-crystal XRD data were acquired at 110(2) K using a SuperNova diffractometer (equipped with Atlas detector) with Mo Kα radiation (λ = 0.71073 Å) under the program CrysAlisPro (Version CrysAlisPro 1.171.39.29c, Rigaku OD, 2017). The same program was used to refine the cell dimensions and for data reduction. The structure was solved with the program SHELXS-2018/3 and was refined on F2 with SHELXL-2018/3 <cit.>. Analytical numeric absorption correction using a multifaceted crystal model was applied using CrysAlisPro. The temperature of the data collection was controlled using the system Cryojet (manufactured by Oxford Instruments). For all samples in the ZrCuSiAs structure type, the occupancy factor for Cu1 was refined freely. For samples in the defect-CaBe2Ge2 structure type, the additional Cu site (denoted Cu1') was necessary to obtain good fits to the data. The occupancy factor for both Cu1 and Cu1' were refined freely, and the reported occupancy is x_Cu = x[Cu1]+x[Cu1']. Crystallographic data tables are included in the Supplementary Information.
Specific heat: Heat capacity measurements were performed in a Quantum Design Physical Properties Measurement System (PPMS) with the dilution refrigerator option. The magnetic field was degaussed above 3.8 K to minimize effects of persistent fields. Measurements were performed on a single crystalline sample of LaCuSb_2 of 10.7(1) mg mass oriented such that the applied magnetic field was along the nominal c-axis, with reported fields depicted in Fig. <ref> between 0 Oe and 75 Oe. All measurements were performed in a fixed magnetic field and measured upon cooling, with a minimum temperature between T=0.05-0.1 K and a maximum temperature of T=3.8 K.
Magnetization: Magnetization measurements were performed in a Quantum Design (QD) Magnetic Properties Measurement System (MPMS) with QD iHelium3 He3-insert. To extract the superconducting dome, we measured samples with nominal applied fields of 2 Oe along the a-axis, after degaussing at temperatures above T_c. The samples were cut with a large aspect ratio along the a-axis such that the demagnetization corrections were small. Measurements on the optimal sample were performed taking into account the non-zero demagnetization factors. We performed isothermal measurements in applied field by first degaussing at high temperature above T_c. We then cooled in zero field to the appropriate temperature, and measured from zero to fields well past the point that magnetization vanished to study 4π M versus H_int.
Electrical transport: Shubnikov-de Haas oscillations were measured in a single crystal cut from the same crystal used for other thermodynamic measurements, at temperatures down to 0.3 K and fields up to 16 T. Resistivity and Hall effect measurements above 2K were performed on a PPMS using the AC Transport (ACT) option. For all resistivity measurements we prepared a polished bar-shaped crystal with the current applied along the a-axis, using a four-probe configuration consisting of platinum wires and Epo-Tek silver epoxy. For Hall effect measurements we used the five-probe method which allowed the Hall signal to be tuned to zero in zero applied field. Low-temperature resistivity measurements were performed in a PPMS using a Lake Shore Model 372 AC Resistance Bridge. We applied an AC current of 316 μA at a frequency of 13.7 Hz. The field was applied along the c-axis after degaussing the magnet at 1.1 K (above the transition) to reduce the effect of persistent magnetic fields. Measurements were taken upon cooling in various applied fields.
Hydrostatic pressure: Measurements under hydrostatic pressure were performed using a Bluefors dilution refrigerator down to 0.05 K and up to 2.0 GPa at the Institute for Solid State Physics (ISSP), The University of Tokyo. To track the variation of the superconducting transition temperature under hydrostatic pressure, the AC magnetic susceptibility of a sample was measured with a mutual induction method at a fixed frequency of 317 Hz with a modulation field of about 1 Oe. Measurements were performed on the optimized samples N_Cu = 2, cut from the same crystal used for magnetization, specific heat, and resistivity measurements. For applying pressure, a piston-cylinder cell made from nonmagnetic BeCu and NiCrAl alloys was used with Daphne 7373 as the pressure transmitting medium. The pressure was determined from the superconducting transition temperature of Pb.
Muon spin rotation: Zero-field and transverse-field muon spin rotation and relaxation measurements were performed at the TRIUMF facility in Vancouver, Canada. A spectrometer incorporating a dilution refrigerator was used on the M15 beamline, to allow for measurements down to 20 mK. The setup makes use of a superconducting magnet to allow for magnetic fields up to 5 T, and resistive coils for finer control and field-zeroing. The magnetic field was applied horizontally, parallel to the direction of the muon beam. In the ZF measurements, the muon spin was (anti)parallel to the beam direction, while in TF measurements the muon spin was perpendicular to the field and beam direction. Single crystals grown with the optimal ratio N_Cu=2 and with the greatest thickness (average 1.0 mm) were cut along the a-axis. We used multiple co-aligned single crystals, totaling a mass of about 0.98g, to maximize the measured signal and reduce background from muons not implanted in LaCuSb2. The crystals were first mounted with the a-axis parallel to the applied field for the first measurement, and the same crystals were then individually rotated and remounted with the c-axis parallel to the applied field for the second measurement. The samples were mounted on a silver cold finger using a mixture of Apezion N grease and copper-loaded Cry-Con grease to ensure good thermal contact. For the field along the a-axis, we used H_app = 40 Oe. For the field along the c-axis, we used H_app = 40 Oe to access low critical fields and H_app = 10 Oe to access higher critical fields. All data at constant field were simultaneously refined and fit for various temperatures using the musrfit program. <cit.>
References
1. Jia, S., Xu, S.-Y. & Hasan, M. Z. Weyl semimetals, Fermi arcs and chiral anomalies. Nat. Mater. 15, 1140–1144 (2016).
2. Liang, T. et al. Ultrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd_3As_2. Nat. Mater. 14, 280–284 (2015).
3. Li, Y. & Haldane, F. D. M. Topological Nodal Cooper Pairing in Doped Weyl Metals. Phys. Rev. Lett. 120, 067003 (2018). URL https://link.aps.org/doi/10.1103/PhysRevLett.120.067003.
4. Lee, G., Farhan, M. A., Kim, J. S. & Shim, J. H. Anisotropic Dirac electronic structures of AMnBi_2 (A=Sr,Ca). Phys. Rev. B 87, 245104 (2013). URL https://link.aps.org/doi/10.1103/PhysRevB.87.245104.
5. Young, S. M. & Kane, C. L. Dirac semimetals in two dimensions. Phys. Rev. Lett. 115, 126803 (2015).
6. Chamorro, J. R. et al. Dirac fermions and possible weak antilocalization in LaCuSb2. APL Materials 7 (2019).
7. Gamayunova, N. V. et al. Electron-phonon interaction in ternary rare-earth copper antimonides LaCuSb2 and La(Cu0.8Ag0.2)Sb2 probed by Yanson point-contact spectroscopy. 2017 IEEE 7th International Conference Nanomaterials: Application & Properties (NAP) 1–4 (2017).
8. Muro, Y., Takeda, N. & Ishikawa, M. Magnetic and transport properties of dense Kondo systems, CeTSb2 (T=Ni, Cu, Pd and Ag). Journal of Alloys and Compounds 257, 23–29 (1997). URL https://www.sciencedirect.com/science/article/pii/S0925838896031283.
9. Sologub, O., Hiebl, K., Rogl, P., Noël, H. & Bodak, O. On the crystal structure and magnetic properties of the ternary rare earth compounds RETSb2 with RE = rare earth and T = Ni, Pd, Cu and Au. Journal of Alloys and Compounds 210, 153–157 (1994).
10. Ohta, M. Thermoelectric Properties of Ternary Rare-Earth Copper Antimonides LaCu_xSb_2 (0.9<x<1.3). Materials Transactions 50, 1881–1884 (2009).
11. Yang, X. X. et al. RCu_1+xSb_2 (R = La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Ho and Y) Phases with Defect CaBe2Ge2-Type Structure. In PRICM-5, vol. 475 of Materials Science Forum, 861–864 (Trans Tech Publications Ltd, 2005).
12. Klemm, R. Layered Superconductors: Volume 1. International Series of Monographs on Physics (OUP Oxford, 2012). URL https://books.google.com/books?id=EWORhjhzdqMC.
13. Prozorov, R. & Kogan, V. G. London penetration depth in iron-based superconductors. Reports on Progress in Physics 74, 124505 (2011).
14. Shoenberg, D. Magnetic oscillations in metals (Cambridge University Press, 2009).
15. Akiba, K. & Kobayashi, T. C. Phonon-mediated superconductivity in the Sb square-net compound LaCuSb_2. Phys. Rev. B 107, 245117 (2023). URL https://link.aps.org/doi/10.1103/PhysRevB.107.245117.
16. Sonier, J. E., Brewer, J. H. & Kiefl, R. F. μSR studies of the vortex state in type-II superconductors. Rev. Mod. Phys. 72, 769–811 (2000). URL https://link.aps.org/doi/10.1103/RevModPhys.72.769.
17. Beare, J. et al. μSR and magnetometry study of the type-I superconductor BeAu. Phys. Rev. B 99, 134510 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.99.134510.
18. Kozhevnikov, V., Suter, A., Prokscha, T. & Van Haesendonck, C. Experimental Study of the Magnetic Field Distribution and Shape of Domains Near the Surface of a Type-I Superconductor in the Intermediate State. Journal of Superconductivity and Novel Magnetism 33, 3361–3376 (2020). URL https://doi.org/10.1007/s10948-020-05576-1.
19. Schoop, L. M. et al. Dirac cone protected by non-symmorphic symmetry and three-dimensional Dirac line node in ZrSiS. Nature Communications 7, 11696 (2016). URL https://doi.org/10.1038/ncomms11696.
20. Tinkham, M. Introduction to Superconductivity. Dover Books on Physics Series (Dover Publications, 2004). URL https://books.google.com/books?id=VpUk3NfwDIkC.
21. Weber, H. W., Sporna, J. F. & Seidl, E. Transition from Type-II to Type-I Superconductivity with Magnetic Field Direction. Phys. Rev. Lett. 41, 1502–1506 (1978). URL https://link.aps.org/doi/10.1103/PhysRevLett.41.1502.
22. Itoh, N. Superconducting state of neutron stars. Progress of Theoretical Physics 42, 1478–1479 (1969).
23. Leng, H., Orain, J.-C., Amato, A., Huang, Y. K. & de Visser, A. Type-I superconductivity in the Dirac semimetal PdTe_2 probed by μSR. Phys. Rev. B 100, 224501 (2019). URL https://link.aps.org/doi/10.1103/PhysRevB.100.224501.
24. Anand, V. K. et al. Specific heat and μSR study on the noncentrosymmetric superconductor LaRhSi_3. Phys. Rev. B 83, 064522 (2011). URL https://link.aps.org/doi/10.1103/PhysRevB.83.064522.
25. Górnicka, K. et al. Soft-mode enhanced type-I superconductivity in LiPd_2Ge. Phys. Rev. B 102, 024507 (2020). URL https://link.aps.org/doi/10.1103/PhysRevB.102.024507.
26. Kogan, V. G. Macroscopic anisotropy in superconductors with anisotropic gaps. Phys. Rev. B 66, 020509 (2002). URL https://link.aps.org/doi/10.1103/PhysRevB.66.020509.
27. Koike, Y., Tanuma, S., Suematsu, H. & Higuchi, K. Superconductivity in the graphite-potassium intercalation compound C_8K. Journal of Physics and Chemistry of Solids 41, 1111–1118 (1980). URL https://www.sciencedirect.com/science/article/pii/0022369780900670.
28. Sporna, J. F., Seidl, E. & Weber, H. W. Anisotropy of the superconductive to normal transition in tantalum-nitrogen single crystals. Journal of Low Temperature Physics 37, 639–661 (1979). URL https://doi.org/10.1007/BF00113876.
29. Myers, K. et al. Systematic study of anisotropic transport and magnetic properties of RAgSb2 (R=Y, La–Nd, Sm, Gd–Tm). Journal of Magnetism and Magnetic Materials 205, 27–52 (1999). URL https://www.sciencedirect.com/science/article/pii/S0304885399004722.
30. Masubuchi, S. et al. Chemical Substitution Effect on CDW State in LaAgSb_2. In Proceedings of the International Conference on Strongly Correlated Electron Systems (SCES2013), 011053 (2014).
31. Sheldrick, G. M. SHELXT–Integrated space-group and crystal-structure determination. Acta Crystallographica Section A: Foundations and Advances 71, 3–8 (2015).
32. Suter, A. & Wojek, B. Musrfit: a free platform-independent framework for μSR data analysis. Physics Procedia 30, 69–73 (2012).
Acknowledgments This work was supported as part of the Institute for Quantum Matter, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award DESC0019331. C.J.L. acknowledges the support of the William Gardner Fellowship. The 3He MPMS was funded by the National Science Foundation, Division of Materials Research, Major Research Instrumentation Program, under Award 1828490. Research at McMaster University was supported by the Natural Sciences and Engineering Research Council (GML). The high pressure measurements were funded by KAKENHI (no. JP19H00648), JST-Mirai Program (no. JPMJMI20A1), JST-CREST (no. JPMJCR18T3 and JPMJCR15Q5). We would like to thank Bassam Hitti and Sarah Dunsiger for their assistance during the μSR experiment; Lisa Pogue for her preliminary DFT work; and Yishu Wang for her assistance with resistivity measurements.
Competing Interests The authors declare that they have no competing financial interests.
Correspondence Correspondence and requests for materials should be addressed to [email protected]
Note added While writing this paper, we became aware of an experimental study on single crystals of LaCuSb2 <cit.>. The specific heat, resistivity, and Hall effect measurements are consistent with our data. However, no investigation into the magnetic field anisotropy was reported. Furthermore, the presence of a low-temperature feature in thermodynamic measurements allows us to gain insight on the contributions from the Dirac band structure.
Supplementary Information
§ CRYSTAL GROWTH AND EFFECTS OF STOICHIOMETRY
The flux-grown crystals were plate-like with average surface areas of 1 cm^2, often limited by the crucible diameter, and with varying thickness. The least Cu-rich fluxes produced crystals with the most appreciable thickness, while the most Cu-rich fluxes gave large surface areas and decreased thickness. Crystals grown with copper-rich fluxes often came out with layers of solidified Sb-Cu2Sb flux on the surface, which could be removed with mechanical polishing. The presence of these phases is expected from the ternary phase diagram reported previously [Gschneidner]. Inclusions, primarily Sb and less so Cu2Sb, were present in all samples, and Sb was used as a reference for the lattice constants in powder XRD. Importantly, Sb is not superconducting at ambient pressure [Wittig] and Cu2Sb is only superconducting below 0.085 K [Andres], and thus neither can account for the superconducting signals observed near 1 K in our samples. However, the nature of the off-stoichiometry may affect the physical properties we observe. For example, removal of entire planes of Cu would in principle shrink the unit cell to an effective LaSb2 structure, which (in the orthorhombic structure) is known to be superconducting [Guo, Ruszala].
The fact that N_Cu can tune a wide range of compositions is likely due to the change in thermodynamic chemical potential and the stable phases that can emerge in the LaCu_xSb2 solid-solution-type structure. If the main role of the Cu off-stoichiometry is interpreted as the introduction of non-magnetic impurities (as opposed to stacking defects, see TEM results below), in the form of vacancies or interstitial sites, then according to Anderson's theorem on dirty s-wave superconductors, such defects should have no effect on T_c [Anderson]. There may be several ways to reconcile this. One possibility is that the small effects of off-stoichiometry may be affecting the Fermi level and density of states, as suggested by resistivity measurements (see <ref>), thereby resulting in a stronger dependence of T_c as expected under conventional BCS theory <cit.>. Another possible explanation is that the gap function itself is anisotropic, leading to a strong dependence of T_c on the scattering rate, a well-known result from Gor'kov [Petrovic, Gorkov]. Indeed, an anisotropic pairing gap function is expected from our tight-binding model. It is possible that one or both effects could explain the drastic suppression of T_c found in the superconducting dome.
§ TRANSMISSION ELECTRON MICROSCOPY (TEM)
A Cu-deficient sample with N_Cu = 1, and a near-stoichiometric sample with N_Cu = 2, were ground into a fine powder for use in TEM. The samples used were from the same crystals cut for thermodynamic measurements. Microcrystallites nearly oriented along a and c were studied for their fringe patterns to estimate the lattice constants, and to gauge small variations in fringe width, which may give evidence of stacking defects in the unit cell. These high-resolution TEM data constrain any variation of the lattice constant to within ± 0.44 Å over 27 unit cells. We used the GMS-3 software to integrate over the regions shown in Extended Fig. <ref>, and extract the intensity as a function of position over the length of the rectangles. Near each maximum, we fit the location of the central peak using a simple quadratic fit. This was used to deduce the distribution of distances between successive fringes. The resulting histogram shows a spread within one pixel, i.e. a spread of 0.44 Å. The lack of a clear bimodal distribution outside the resolution of the scan indicates there is no obvious stacking disorder that could produce LaSb2. Given the off-stoichiometry on the order of 0-5%, this points to statistical vacancies of Cu atoms, as suggested in [Klemenz]. Interestingly, the samples with smaller copper flux ratios N_Cu≈ 1 that were cut into the bulk of the sample appeared to have a copper-colored tint on the exposed cut surface after a period of several months. This indicates the copper ions may be mobile in the structure at room temperature, made possible by the presence of vacancies and a path by which ions can travel.
§ MAGNETIZATION AND SUSCEPTIBILITY
The volume magnetization was deduced from the total moment μ_tot from SQUID measurements by taking into account the mass m of the samples, and using theoretical density ρ_th = 7.48 g cm^-3, to compute M = μ_tot/V = μ_totρ_th/m. In magnetization data, we computed the internal field to correct for the effects of the demagnetization factor,
H_int,i = H_a,i - 4π N_i M_i
where H_a,i is the applied field in the direction i, M_i is the (volume) magnetization, and N_i is the demagnetization factor with 0≤ N_i ≤ 1. For diamagnetic samples with a rectangular prism geometry, the demagnetization factor can be estimated as [Prozorov]
N ≈4AB/4AB+3C(A+B)
where A,B (C) are the lengths of the sides perpendicular (parallel) to the applied field. Anisotropic magnetization measurements on an optimized sample (N_Cu = 2) were performed on a nearly-rectangular-prism sample with approximate dimensions 3.2 × 1.52 × 0.84 mm^3 and mass 27.54 mg. From this, we estimate a demagnetization factor N_a ≈ 0.18 for fields along the a-axis and N_c ≈ 0.62 for fields along the c-axis. In the μSR measurement, our collection of co-aligned samples formed the shape of a rectangular prism with effective dimensions 15 × 12 × 1.4 mm^3. Here the magnetic field was always applied parallel to the thinnest dimension, yielding a demagnetization correction factor of about N≈ 0.86.
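As a quick numerical check of these geometric estimates, the minimal sketch below (Python; the sample dimensions are the approximate ones quoted in this section) evaluates the rectangular-prism approximation for the samples used in magnetization and μSR.

```python
def demag_factor(A, B, C):
    """Approximate demagnetization factor of a diamagnetic rectangular prism,
    with A, B the side lengths perpendicular and C the side parallel to the field."""
    return 4 * A * B / (4 * A * B + 3 * C * (A + B))

# Magnetization sample, approx. 3.2 x 1.52 x 0.84 mm^3:
N_a = demag_factor(1.52, 0.84, 3.2)   # field along the long a-axis -> about 0.18
N_c = demag_factor(3.2, 1.52, 0.84)   # field along the thin c-axis -> about 0.62

# Co-aligned muSR mosaic, approx. 15 x 12 x 1.4 mm^3, field along the thin direction:
N_musr = demag_factor(15, 12, 1.4)    # -> about 0.86

print(round(N_a, 2), round(N_c, 2), round(N_musr, 2))
```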
Quantities such as the superconducting transition temperature in susceptibility were estimated by fitting to a simple model of a diamagnetic response with a Gaussian distribution of transition temperatures. Supposing the susceptibility is exactly 4πχ =-1 below T_c and exactly 4πχ = 0 above T_c, we get a step-like transition at T_c. For T_c' distributed in a Gaussian distribution about some mean T_c, this becomes
4πχ (T) = ∫_-∞^∞ [Θ(T-T_c') -1] (e^-(T_c'-T_c)^2/2σ^2/√(2πσ^2)) dT_c' = 1/2[ erf( (T-T_c)/√(2σ^2)) - 1 ] ≡ B_σ(T,T_c)
For the ambient-pressure measurements, the susceptibility for all samples measured showed sharp transitions and no broad tails at low temperatures, with all samples having a saturated susceptibility by 0.4 K. In all cases σ, the width of the superconducting transition, was used to estimate the error bar on T_c, rather than the error reported by the fit routine. The susceptibility for various values of N_Cu is shown in Extended Fig. <ref>a.
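A minimal sketch of this fitting procedure (Python with scipy; the synthetic data and starting values below are placeholders, not measured values) is:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def B_sigma(T, Tc, sigma):
    """Gaussian-broadened diamagnetic step: 4*pi*chi = -1 well below Tc, 0 well above."""
    return 0.5 * (erf((T - Tc) / np.sqrt(2.0 * sigma**2)) - 1.0)

# Synthetic example data; replace with the measured 4*pi*chi(T) curve.
T = np.linspace(0.2, 1.4, 200)
chi = B_sigma(T, 0.95, 0.03) + 0.002 * np.random.randn(T.size)

(Tc_fit, sigma_fit), _ = curve_fit(B_sigma, T, chi, p0=[1.0, 0.05])
# sigma_fit is then quoted as the uncertainty on Tc rather than the covariance error.
```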
For the high-pressure measurements above 1.7 GPa, the AC data were fit using a modified form due to the double-peak nature of the transition. We model this double-peak feature as arising from the successive opening of two gaps, for the sake of extracting T_c(p) and T^*(p):
4πχ'(T,p) = A[ fB_σ_1(T,T_c(p)) + (1-f)B_σ_2(T,T^*(p)) ] + C
where A is the voltage amplitude, f is the fraction of each component, σ_1 and σ_2 are the widths of the transitions, and C is a constant voltage offset.
Ambient-pressure susceptibility data are shown in Extended Fig. <ref>b, highlighting the zero-field cooled (ZFCW) and field-cooled (FCW) curves for the optimized sample (measured upon warming), using 4πχ_0 = 4πχ_v (1-N_i), where χ_v is the volume susceptibility calculated with the nominal applied field value. Note that the magnitude of 4πχ_0 slightly exceeds 1; this large relative error suggests the applied field might have been slightly larger than the reported 2 Oe. While the FC volume susceptibility 4πχ_v is an indication of the Meissner fraction, errors in field calibration or density estimations resulted in inferred Meissner fractions greater than 100%. However, the demagnetization-corrected magnetization versus applied field demonstrates the 4πχ_v = -1 relation in Fig. <ref>c,d.
To determine the critical field from the c-axis magnetization data H_int,c(T), we find the field that results in a discontinuous transition into the normal state (M≠ 0 to M=0). This was done by fitting the H_int data versus 4π M_c to a constant value, for each fixed temperature. For magnetization along the a-axis, we define the critical field H_c1 to be the inflection point of the magnetization. Numerically, this was determined by taking the derivative data f = d(4π M)/dH, and fitting it near H_c1 to the piecewise function
f(H,H_0) = -1 for H≤ H_0, and f(H,H_0) = a(H-H_0)-1 for H>H_0.
The critical field H_c1 was defined as the field where 4π M changes slope, or f(H_c1,H_0) = 0, thus
H_c1 = (aH_0 + 1)/a = H_0 + 1/a
Furthermore, H_c2 was determined by fitting 4π M as a linear function at high fields (for small values of 4π M),
4π M ≈ bH + c (4π M≈ 0)
The critical field was defined such that 4π M(H_c2) = 0, or H_c2 = -c/b from the fitted parameters. In all of these fits, the uncertainty was obtained through error propagation.
We can finally extract the zero-temperature critical field by fitting the data to conventional models for the temperature dependence:
H_c1(T) = H_c1(0) · (1-t^2)
H_c2(T) = H_c2(0) ·1-t^2/1+t^2
where t=T/T_c is the reduced temperature. This was done for the data in magnetization, specific heat, and μSR where we assumed only the a-axis oriented samples have a critical field H_c2, while the remainder of the data were fit for H_c1.
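A minimal sketch of these temperature-dependence fits (Python; the data-array names and starting values are placeholders for the extracted critical-field points) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def Hc1_model(T, H0, Tc):
    """H_c1(T) = H_c1(0) (1 - t^2), with t = T/Tc."""
    t = T / Tc
    return H0 * (1.0 - t**2)

def Hc2_model(T, H0, Tc):
    """H_c2(T) = H_c2(0) (1 - t^2)/(1 + t^2)."""
    t = T / Tc
    return H0 * (1.0 - t**2) / (1.0 + t**2)

# Example usage with placeholder arrays T_data (K) and H_data (Oe):
# (H0, Tc), _ = curve_fit(Hc1_model, T_data, H_data, p0=[60.0, 1.0])
# (H0, Tc), _ = curve_fit(Hc2_model, T_data, H_data, p0=[170.0, 1.0])
```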
§ TRANSPORT
Why superconductivity exists in some samples and not in others with differing copper content is an important question. Given the complicated band structure with various bands crossing the Fermi level, it is not out of the question that tuning the stoichiometry will affect disorder scattering. Indeed, the resistivity of the optimized sample is about an order of magnitude lower than that reported for LaCuSb2 in the literature <cit.>. As seen in Extended Fig. <ref>a, the residual resistivity at low temperatures is ρ_0a = 1.883(15) μΩ-cm with a residual resistivity ratio (RRR) of about 14, whereas previous samples have been in the ∼ 1 mΩ-cm range. Furthermore, the low-temperature residual resistivity is the lowest in the optimized sample N_Cu=2, whereas it is larger for other samples near the endpoints of the superconducting dome. Importantly, the large linear magnetoresistance seen in Extended Fig. <ref>b appears resilient in LaCuSb2, as it is found in the normal state of the superconducting samples as well as in the samples from Chamorro et al. <cit.>. This suggests that the Dirac electrons still play an important role at low temperatures in our samples.
Besides effects of disorder, varying the copper concentration can also affect the chemical potential, and thus the carrier density. To study this, Hall effect measurements were taken using a five-probe configuration with AC Transport in the PPMS, on a sample cut from the same crystal as used in other measurements. The lead separation was 0.38 mm and the sample thickness was 0.15 mm. The applied current was 40 mA with frequency 103 Hz. Symmetrization difficulties resulted in apparent non-linearity. Regardless, the Hall coefficient gave a carrier density of approximately 4.16(2)× 10^22 cm^-3. Interestingly, the sign of the charge carriers in the sample with N_Cu = 2 was positive, while the sign for samples from Chamorro et al. <cit.> was negative. This suggests the change in stoichiometry also results in a change in the Fermi level, which can result in different contributions from electron or hole charge carriers. In our work, there is also an apparent trend that the end members of the superconducting dome have a smaller carrier density than the optimized sample, as seen for example in Extended Fig. <ref>c. Off-stoichiometry not only affects the sample quality but also the electronic structure, and may correlate with the presence of superconductivity at low temperatures. However, the carrier densities across the superconducting dome are within an order of magnitude of each other, so the density of states is likely to change in proportion to the small changes in the copper content x_Cu.
We also studied the resistivity in the superconducting state for the optimized N_Cu=2 sample. The low-temperature (dilution refrigerator) resistivity data in applied fields are shown in Fig. <ref>d. Like the susceptibility, the temperature dependence of the resistivity was fit assuming Gaussian broadening:
ρ(T) = 1/2[ erf( T-T_c/√(2σ^2)) + 1 ] ρ_0a
with ρ_0a the low-temperature residual resistivity. The resistivity was measured for applied magnetic field along the c-axis and currents along the a-axis, varying temperature at fixed fields. In these measurements we did not reach a zero-resistance state, but the temperature of the midpoint T_c found in the resistivity drop is consistent with T_c from other thermodynamic measurements, in low fields H<25 Oe. At fields higher than H_c1, the width σ became exceedingly large and the resistance did not saturate at low temperatures, so it was difficult to estimate T_c and σ. This also suggests that the surviving superconducting state at these higher fields is due to percolation or filamentary superconductivity, and is not necessarily a bulk response. For this reason, we did not fit the resistivity data to extract critical field(s) in Extended Fig.<ref>.
Overall, resistivity data for various samples highlights the importance of the copper stoichiometry. While the susceptibility data indicate changes in the superconductivity, we find concomitant changes in the resistivity and Hall effect that suggest these phenomena are related. However, despite changes in defect density, Fermi level, and carrier densities, we still find large linear magnetoresistance in all samples suggestive of the presence of Dirac fermions.
§ SPECIFIC HEAT
To extract the transition temperatures from specific heat, we first fit the data c_p/T to a spline near the transition temperature. The derivative of this splined data was taken, and the resulting derivative data were fit to a Gaussian function. The transition temperature was then reported as the midpoint and standard deviation of this fit function, at various fields, for use in the phase diagram.
In the normal state above 1 K, we fit the Sommerfeld coefficient and phonon contribution, c_p = γ_n T + β_3 T^3, to lowest order and extrapolate to zero temperature. This yields a Sommerfeld coefficient γ_n = 4.78(1) mJ/mol-K^2 and a phonon contribution β_3 = 0.571(2) mJ/mol-K^4. With d=4 atoms per formula unit LaCuSb2, the corresponding Debye temperature is Θ_D = (12π^4 Rd/(5β_3))^1/3 = 238.8(3) K. This is consistent with previously reported values <cit.>.
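The Debye temperature quoted above follows directly from β_3; a short numerical check (Python, using the fitted β_3 and d = 4) is:

```python
import numpy as np

R = 8.314          # gas constant, J / (mol K)
d = 4              # atoms per formula unit of LaCuSb2
beta3 = 0.571e-3   # J / (mol K^4), from the normal-state fit

theta_D = (12 * np.pi**4 * R * d / (5 * beta3)) ** (1.0 / 3.0)
print(theta_D)     # ~ 239 K, consistent with the value quoted above
```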
In the isotropic free-electron model, the molar specific heat Sommerfeld coefficient γ_n in the normal state can be used to determine the specific heat effective mass of the charge carriers [Ashcroft],
γ_n = (V_fu N_A) γ_nV = (π^2/2) · V_0 R k_B n/(ħ k_F)^2 · m^*
where γ_nV is the volume specific heat Sommerfeld constant, and V_0 and V_fu = 1/2 V_0 are the volumes of the unit cell and one formula unit, respectively. Using the measured Sommerfeld constant, along with the carrier density derived from the Hall effect, we find a specific heat effective mass of m^* = 1.44(1)m_e. This is to be contrasted with the low effective in-plane masses of charge carriers in our previous report <cit.>, likely due to the anisotropy of the Fermi surface.
From specific heat one can obtain the electron-phonon coupling parameter λ_e-p, given by
λ_e-p = [1.04+μ^* ln(Θ_D/(1.45T_c))] / [(1-0.62μ^* )ln(Θ_D/(1.45T_c)) - 1.04]
where μ^* ≈ 0.13 for intermetallic superconductors. We find a value of λ_e-p≈ 0.466, which is in contrast to the value assumed in a previous theoretical work that found λ_e-p≈ 0 [Ruszala]. However, the free-electron value γ_b ≈ 2.85 mJ/mol-K^2 and the experimental value differ by the enhancement factor λ_e-p = γ_n/γ_b - 1 ≈ 0.678, in reasonable agreement with the calculated value in Eq. (<ref>).
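A small numerical sketch of the inverted McMillan expression above (Python; T_c ≈ 1 K is an assumed round value for the optimized sample, used only for illustration) is:

```python
import numpy as np

theta_D = 238.8   # K, Debye temperature from the specific heat fit
Tc = 1.0          # K, assumed approximate transition temperature
mu_star = 0.13    # Coulomb pseudopotential for intermetallics

x = np.log(theta_D / (1.45 * Tc))
lam_ep = (1.04 + mu_star * x) / ((1.0 - 0.62 * mu_star) * x - 1.04)
print(lam_ep)     # ~ 0.47 for these inputs, close to the value quoted above
```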
The main contribution to the total specific heat at the lowest temperatures is from the nuclear Schottky anomaly. This is ascribed to the interaction of nuclear quadrupolar moments with local electric field gradients at the nucleus (see SI <ref> for more details on the corresponding modeling). This contribution depends on the spinful nuclear moments and the point group symmetry of the ions involved, and is to be expected for T<100 mK in low-symmetry solids containing Cu [Caspary] and/or Sb [Ortiz]. The anomaly also grows with magnetic field, as seen in other Sb-based superconductors [Aoki]. To extract the electronic specific heat, we model the total contribution to the specific heat for T<0.35 K by the equation
c_p(T,H) = c(H) e^-Δ(H)/k_B T + A(H)/T^2 + β_3 T^3
where c(H) and Δ(H) are the phenomenological parameters used to model the activated behavior associated with the knee-like feature; A(H) is related to the quadrupole coupling and increases in applied magnetic fields as the nuclear spin states undergo additional Zeeman splitting. In this way, we simultaneously fit the different contributions to separate out the Schottky anomaly (dominant at low temperature) and phonons (dominant at high temperature) from the electronic contributions.
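A minimal sketch of this decomposition at a fixed field (Python with scipy; the activation gap Δ is expressed in kelvin so that Δ(H)/k_BT becomes Δ/T, and the data arrays and starting values are placeholders) is:

```python
import numpy as np
from scipy.optimize import curve_fit

beta3 = 0.571  # mJ/(mol K^4), fixed from the normal-state fit; keep units consistent with c_p

def cp_lowT(T, c0, Delta, A):
    """Activated electronic term + nuclear Schottky tail + phonons, for T < 0.35 K."""
    return c0 * np.exp(-Delta / T) + A / T**2 + beta3 * T**3

# Example usage with placeholder arrays T_data and cp_data measured at one field:
# (c0, Delta, A), _ = curve_fit(cp_lowT, T_data, cp_data, p0=[1.0, 0.3, 1e-2])
```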
The entropy of the electronic charge carriers is calculated as
Δ S_el(T)/γ_n T = 1/T∫_0^T c_el(T')/γ_n T' T'
For free electrons, Δ S_el(T)/γ_n T = 1 at all temperatures. In a superconducting sample, Δ S_el(T)/γ_n T = 1 for T>T_c in the normal state, and there is an entropy balance at T_c such that Δ S_el(T_c)/γ_n T_c = 1. For T<T_c in the superconducting state, Δ S_el(T_c)/γ_n T_c<1 and decreases to zero as T→ 0. Extended Fig. <ref> shows the calculated entropy, which is close to 1 at T_c and above. This indicates the sample is a bulk superconductor and that the sharp drop in the electronic specific heat at T^* is intrinsic. Note that the sample coupling reported by the PPMS decreases at decreasing temperatures, due to the combination of the exponentially-activated thermal conductivity and the large sample mass required for a good measurement signal. However, this sudden drop near T^* is necessary to obey entropy balance at T_c, where Δ S(T_c) = γ_n T_c. The slight overshoot of the entropy Δ S_el/γ_n T_c > 1 is likely a result of the uncertainty associated with subtracting the large nuclear Schottky anomaly when isolating the electronic contribution to the specific heat capacity. Furthermore, around phase transitions the specific heat can be difficult to extract C_p precisely, adding further error to calculating the entropy. These limit the rigor possible when fitting to multi-gap models, as these BCS and self-consistent models strictly obey entropy balance.
§ ZERO-FIELD MUON SPIN ROTATION
The existence of Dirac fermions in LaCuSb2, coupled with any unconventional time-reversal symmetry (TRS) breaking, would make LaCuSb2 a prime material candidate in the search for monopole superconductivity. TRS breaking is not a priori expected in LaCuSb2, since the nominal crystal structure P4/nmm is centrosymmetric, and all ions in LaCuSb2 are nonmagnetic. To determine whether TRS is broken below T_c in LaCuSb2, we used μSR in zero field as a way to search for inhomogeneous magnetic fields. Extended Fig. <ref> shows the asymmetry A(t) as a function of time for temperatures above and below T_c. At all temperatures, we fit the asymmetry to the equation
A(t) = A_0 [F · G_KT(t) e^-Λ t + (1-F) e^-λ_bg t]
where G_KT(t) is the Kubo-Toyabe function due to random fields from nuclear moments; Λ is the temperature-dependent relaxation rate; and F is the fraction of muons that stop in LaCuSb2 as opposed to the silver mounting plate. In exotic superconductors that break TRS, Λ(T) increases with decreasing temperature due to the larger field inhomogeneity that often appears below T_c, such as from domains related by TRS. Simultaneous fits of our high- and low-temperature ZF spectra reveal a relaxation rate Λ(T) that is essentially constant with temperature (within error), with Λ = 9(4)× 10^-3 μs^-1 at 1.293(3) K and Λ = 11(4)× 10^-3 μs^-1 at 0.017(1) K. That is, the maximum field that could exist consistent with these data is ΔΛ/γ_μ = 0.02(7) Oe.
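A minimal sketch of the zero-field asymmetry model above (Python; the static Gaussian Kubo-Toyabe form is the standard choice and is assumed here) is:

```python
import numpy as np

def kubo_toyabe(t, Delta):
    """Static Gaussian Kubo-Toyabe function for random nuclear dipolar fields."""
    x = (Delta * t) ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - x) * np.exp(-x / 2.0)

def zf_asymmetry(t, A0, F, Delta, Lam, lam_bg):
    """Sample fraction F relaxed by nuclear moments times exp(-Lam*t),
    plus a slowly relaxing background term from muons stopping in silver."""
    return A0 * (F * kubo_toyabe(t, Delta) * np.exp(-Lam * t)
                 + (1.0 - F) * np.exp(-lam_bg * t))
```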
§ VORTICES AND TRANSVERSE-FIELD MUON SPIN ROTATION
For a Type-II superconductor, the relaxation rate σ is related to the superconducting relaxation rate by σ = √(σ_SC^2 + σ_n^2 ), where σ_n is the temperature-independent nuclear-spin-induced relaxation rate. We estimate the London penetration depth from σ_SC as follows. Firstly, for H∥ a, muons probe the London penetration depth λ_ac = √(λ_a λ_c) due to superconducting currents flowing in the ac plane [Liarte]. Assuming that the field ratio b≡ H_app/H_c2 satisfies 0.13/κ^2 ≪ b ≪ 1, where κ is the GL parameter, the London penetration depth and superconducting relaxation rate are related by [Brandt]
σ_SC≈0.0609 γ_μΦ_0/λ^2,
where Φ_0 ≈ 2.067× 10^-15 Wb is the magnetic flux quantum. With an applied field of H_app = 40 Oe, we were above H_c1 = 32(1) Oe to set us firmly in the mixed phase, and such that b≈ 40 Oe/172 Oe ≈ 0.23 satisfies the condition that 0.13/κ^2 ≪ b ≪ 1, assuming κ_a ∼ 1.
In this way, we were able to estimate the geometric mean of the zero-temperature anisotropic London penetration depth, λ_ac(0) ≈ 408(2) nm. We note that more accurate estimations of λ_ac(0) will involve functions of b and κ, which we do not extract directly from the μSR data. We can extract the superfluid density ρ(T)=λ^2(0)/λ^2(T) independent of the assumptions made in Eq. (<ref>) if we assume the carrier effective mass is constant.
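As a rough consistency check on these numbers, the short sketch below (Python; constants in SI units; the 0.0609 prefactor is the one quoted above) evaluates the relation in both directions for λ_ac(0) = 408 nm.

```python
import numpy as np

gamma_mu = 2 * np.pi * 135.54e6   # muon gyromagnetic ratio, rad s^-1 T^-1
Phi0 = 2.067834e-15               # magnetic flux quantum, Wb

def sigma_from_lambda(lam):
    """Superconducting Gaussian relaxation rate (s^-1) implied by a penetration depth lam (m)."""
    return 0.0609 * gamma_mu * Phi0 / lam**2

def lambda_from_sigma(sigma):
    """Penetration depth (m) from the superconducting relaxation rate (s^-1)."""
    return np.sqrt(0.0609 * gamma_mu * Phi0 / sigma)

print(sigma_from_lambda(408e-9) * 1e-6, "us^-1")   # implied relaxation rate for 408 nm
```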
§ TIGHT-BINDING MODEL FOR FERMI SURFACES OF TOPOLOGICAL BANDS
LaCuSb2 comprises multiple Fermi surfaces. Besides several small pockets around the Γ-point, there are a pair of large quasi-2D diamond-shaped Fermi surfaces. Along with the pockets around the X-point, these arise from the topological bands with Dirac nodal lines protected by the nonsymmorphic symmetries of the space group. To better demonstrate the band topology and study the consequences for superconductivity, we construct an eight-band tight-binding model that captures essential features of the Fermi surfaces of topological bands.
According to the first-principles calculations, the topological bands mainly consist of p_x,y-orbitals of Sb, which allows us to focus on a 2D Sb square-net layer. The Sb square net is geometrically a square lattice where Sb atoms occupy the lattice sites. However, the unit cell of the square net (defined by the base vectors 𝐚 and 𝐛 of LaCuSb2) is the doubled unit cell of the square lattice. Therefore, in each unit cell of the Sb square net, there are two Sb atoms. These two sites per unit cell can be artificially distinguished by slightly displacing the atoms along the ±z-directions.
It should be emphasized that, in reality, the Sb atoms forming the square net are geometrically all in a plane, and the difference comes from the chemical environment of the neighbouring layers above and below.
Nevertheless, the artificial displacements reduce the accidental symmetries of a square lattice to the true symmetries of the layer group P4/nmm inherited from the space group bearing the same name.
Extended Fig. <ref>a,b show the high-symmetry points and paths in the Brillouin zone for space group P4/nmm, along with those at k_z=0, respectively. Extended Fig. <ref>c shows the unit cell of Sb square net. The orange circles represent the Sb atoms. The inversion center is indicated by the red circle in between the Sb sites, and the two-fold rotational symmetry along z-axis is centered on Sb sites.
After putting p_x,y-orbitals on each Sb site, and labeling the two sublattices by r = {A,B}, we now describe our tight-binding model. The nearest neighbour hoppings and second nearest neighbour hoppings are parameterized by Slater-Koster type parameters. Notice that, for nearest neighbour hoppings, the hoppings are along diagonals, so
the orbital basis are rotated by 45^∘ as
[ c_r,p_ξ; c_r,p_η ]
= 1/√(2)[ 1 1; -1 1 ][ c_r,p_x; c_r,p_y ].
The Hamiltonian for the nearest neighbour hoppings can be written as
ℋ_nn
= 1/2∑_𝐑,δ,α
t_1;δ,α
c_𝐑+δ, B, p_α^†
c_𝐑, A, p_α
+ h.c.
= ∑_𝐤ψ_𝐤, r, p_α^†[ h_ξ(𝐤) 0; 0 h_η(𝐤) ]ψ_𝐤, r, p_α,
t_1;δ,α = +t_1 if δ = α, and -t'_1 if δ≠α;
ψ_𝐤, r, p_α^†
= [ c_𝐤,A, p_ξ^† c_𝐤, B, p_ξ^† c_𝐤,A, p_η^† c_𝐤, B, p_η ^† ],
h_ξ(𝐤) = [+t_1 cos(k_x a/2 + k_ya/2) - t'_1cos(-k_x a/2 + k_ya/2)] τ^(r)_1 ,
h_η(𝐤) = [ - t'_1cos(k_x a/2 + k_ya/2) +t_1 cos(-k_x a/2 + k_ya/2) ] τ^(r)_1,
where the sum of 𝐑 runs through all the unit cells, and δ points towards the nearest neighbouring sites. The matrices τ_i^(r) are Pauli matrices with superscript specifying the physical degrees of freedom the Pauli matrices act upon. Here (r) indicates the sublattice index. We will use τ^(σ)_i for spins and τ^(Δ)_i for particle-hole doubling in the Bogoliubov-de Gennes formalism of superconductivity.
The Hamiltonian for the second nearest neighbour hoppings can be written in a similar fashion as
ℋ_2nn
= 1/2∑_𝐑,δ',α
t_2;δ',α[
c_𝐑+δ', A, p_α^†
c_𝐑, A, p_α + c_𝐑+δ', B, p_α^†
c_𝐑, B, p_α]
= ∑_𝐤ψ_𝐤, r, p_α^†[ h_x(𝐤) 0; 0 h_y(𝐤) ]ψ_𝐤, r, p_α,
t_2;δ',α = +t_2 if δ' = α, and -t'_2 if δ' ≠α;
ψ_𝐤, r, p_α^†
= [ c_𝐤,A, p_x^† c_𝐤, B, p_x^† c_𝐤,A, p_y^† c_𝐤, B, p_y ^† ],
h_x(𝐤) = [+t_2 cos(k_xa) -t'_2 cos(k_ya) ] τ_0^(r),
h_y(𝐤) = [ - t'_2cos(k_xa ) +t_2 cos(k_ya) ] τ^(r)_0,
where δ' points towards the second nearest neighbour sites.
We also include the orbital energies and the chemical potential for the sake of describing doping and discussing superconductivity. These are simply diagonal terms
ℋ_μ
= ∑_𝐑,r,α
(ϵ_p - μ )[
c_𝐑, r, p_α^†
c_𝐑, r, p_α]
= ∑_𝐤ψ_𝐤, r, p_α^†[ (ϵ_p-μ)τ_0^(r) 0; 0 (ϵ_p-μ)τ_0^(r) ]ψ_𝐤, r, p_α,
where we assume the orbital energies ϵ_r,p_α = ϵ_p are the same for both sublattices required by the crystal symmetry (e.g., the inversion symmetry).
To determine the tight-binding parameters, we fit our model band structure to the first-principles calculations. The full band structure is shown in Extended Fig. <ref>d. We find t_1 = 3.375, t'_1 = 0.875, t_2=0.125, t'_2=0.125, and ϵ_p-μ = -0.625, all in units of eV. Extended Fig. <ref>e shows our model bands that give rise to the Dirac nodal lines.
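A minimal numerical sketch of this model (Python/numpy; spin is omitted, the lattice constant is set to a = 1, and the 45° rotation between the (p_x, p_y) and (p_ξ, p_η) bases defined above is applied to bring the nearest-neighbour blocks back to the (p_x, p_y) basis) is:

```python
import numpy as np
import matplotlib.pyplot as plt

# Tight-binding parameters fitted to DFT (eV); lattice constant set to a = 1.
t1, t1p, t2, t2p, eps = 3.375, 0.875, 0.125, 0.125, -0.625
a = 1.0
tau0 = np.eye(2, dtype=complex)
tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
# Orbital rotation (p_x, p_y) -> (p_xi, p_eta) used for the nearest-neighbour term.
R = np.array([[1, 1], [-1, 1]], dtype=complex) / np.sqrt(2)

def hamiltonian(kx, ky):
    """4x4 Bloch Hamiltonian per spin in the (orbital) x (sublattice A, B) basis."""
    # Nearest-neighbour blocks, written in the rotated (xi, eta) orbital basis.
    g_xi  =  t1 * np.cos((kx + ky) * a / 2) - t1p * np.cos((-kx + ky) * a / 2)
    g_eta = -t1p * np.cos((kx + ky) * a / 2) + t1 * np.cos((-kx + ky) * a / 2)
    H_nn_rot = np.kron(np.diag([g_xi, g_eta]), tau1)
    Rfull = np.kron(R, tau0)
    H_nn = Rfull.conj().T @ H_nn_rot @ Rfull   # back to the (p_x, p_y) basis
    # Second-neighbour blocks, diagonal in sublattice space.
    f_x =  t2 * np.cos(kx * a) - t2p * np.cos(ky * a)
    f_y = -t2p * np.cos(kx * a) + t2 * np.cos(ky * a)
    H_2nn = np.kron(np.diag([f_x, f_y]), tau0)
    return H_nn + H_2nn + eps * np.eye(4)

# Band cut along Gamma -> X -> M -> Gamma in the k_z = 0 plane.
G, X, M = np.array([0.0, 0.0]), np.array([np.pi, 0.0]), np.array([np.pi, np.pi])
path = np.vstack([np.linspace(p, q, 100) for p, q in [(G, X), (X, M), (M, G)]])
bands = np.array([np.linalg.eigvalsh(hamiltonian(kx, ky)) for kx, ky in path])

plt.plot(bands, color="k")
plt.ylabel("E (eV)")
plt.show()
```

Note that the pairwise degeneracy of the bands along X-M discussed below can be checked directly on the eigenvalues returned by this sketch.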
The band structure in Extended Fig. <ref>e exhibits several band crossings near the X-point. Bands that are degenerate at the X-point remain doubly degenerate along X-M, which gives rise to a dispersive Dirac nodal line. We also note that the crossings between Γ-X and Γ-M have the same origin. They are, in fact, part of the diamond-shaped Dirac nodal line intersecting with the high symmetry planes. The Dirac nodal lines are protected by the non-symmorphic crystal symmetry, as we will demonstrate below.
The key symmetry element that protects the nodal lines is the glide mirror plane g={R=M_z|𝐭= [1/2,1/2,0]}. For any 𝐤 in k_z=0 plane,
g𝐤 = 𝐤,
therefore, we may choose the Bloch waves to be eigenstates of g as well, i.e. g|u_𝐧,𝐤⟩ = λ_n,𝐤|u_𝐧,𝐤⟩, λ_n,𝐤= λ_n exp(i𝐤·𝐭).
Since R^2=1, λ_n = ±1. Extended Fig. <ref>f shows band cuts along 𝐆_x= 2π/a x̂. The solid curves are bands cut through the Γ-point, and the dashed curves are bands cut through Γ+π/2 ŷ. Band eigenvalues are calculated according to our tight-binding Hamiltonian. Bands with eigenvalue λ=-1 are in red and λ=+1 in blue. Furthermore, the time-reversal and the inversion symmetries ensure two-fold degeneracy of each band due to spins. Therefore, the symmetry protected crossings are Dirac nodes.
It bears emphasizing that the nodes on BZ boundaries and those within the BZ are of different types. The TR symmetry Θ together with the non-symmorphic symmetry g protects the nodal line along X-M, which can be understood as a Kramers degeneracy with respect to the antiunitary operator Θ̃ = g Θ, with Θ̃^2=-1. Therefore, these crossings, located at the BZ boundaries, are like type-II Dirac nodes.
Regarding four bands as two pairs of intertwined bands crossing at BZ boundaries, the band crossings between the pairs are also protected by g. However, these crossings are less robust compared to the previous case, in the sense that when the pairs of bands are deformed, the inter-pair crossings can move and even annihilate pairwise. Extended Fig. <ref>f shows two cuts along 𝐆_x= 2π/a x̂ through the Γ-point and Γ+π/2 ŷ. Note that the inter-pair crossing moves towards k_x=0. These crossings can be finally annihilated and give rise to a closed diamond-shaped nodal line inside BZ. Therefore, unlike the crossings at BZ boundaries, the inter-pair crossings are like type-I Dirac nodes.
In the presence of the spin-orbit couplings (SOC), the diamond-shaped nodal line will generically be gapped (also hybridized with other bands). However, as the SOC gaps are small and more than 200 meV below the Fermi energy, our simple tight-binding model still gives a good description of the spin-orbital textures of the Fermi surface that are relevant for superconductivity and transport measurements. The SOC effect on the nodal line along X-M also vanishes to leading order. Therefore we neglect SOC in our tight-binding model, and the model band Hamiltonian is
ℋ = ℋ_nn + ℋ_2nn + ℋ_μ
We now assume the mean-field approximation applies to LaCuSb_2 and discuss the superconducting pairings according to the Bogoliubov-de Gennes (BdG) effective Hamiltonian. We write the generic superconducting pairing terms for a Cooper pair with zero center-of-mass momentum as
ℋ_Δ
= ∑_𝐤ψ_-𝐤, σ, r, p_α^†Δ̂_σ, r, p_α; σ', r', p_β(𝐤)
ψ_𝐤, σ', r', p_β^†
+ h.c.,
where Δ̂^T(-𝐤) = -Δ̂(𝐤) is required by the anticommutation relation for the fermionic operators. The transpose T acts on all the indices including spins, sublattices, and orbitals. We choose the basis of matrices
τ_μ^(σ)⊗τ_ν^(α)⊗τ_ρ^(r),
Therefore, there are 28 possible terms. From current experimental results, the superconductivity is consistent with singlet pairing, so we can fix the spin part to be iτ_y^(σ). Transforming Eq. <ref> back to real space, we assume that the leading pairing terms come from the local on-site attractive interactions, which further fixes pairing in the sublattice space to be τ_0^(r).
There remain three possibilities for orbital degrees of freedom i.e., τ_0^(α), τ_1^(α), τ_3^(α).
Therefore, we have
Δ̂ = Δ_1 iτ_y^(σ)⊗τ_0^(α)⊗τ_0^(r)
+ Δ_2 iτ_y^(σ)⊗τ_1^(α)⊗τ_0^(r)
+ Δ_3 iτ_y^(σ)⊗τ_3^(α)⊗τ_0^(r).
Although the pairing amplitudes Δ_i are allowed to take generic complex values, an overall U(1) phase is chosen by the spontaneous symmetry breaking of the superconductivity. Furthermore, notice that if Δ_1/Δ_3 is not purely imaginary, the pairing amplitudes for the p_x- and p_y-orbitals are anisotropic, which has not been evidenced by the experiments. The resulting BdG spectrum is shown in Extended Fig. <ref>g,h, which highlights the anisotropic gap that results from the theory, with typical values of the pairing amplitudes Δ_1=Δ_2 = 20 meV. The pairing amplitudes are set large compared to the experimental value determined from T_c for illustrative purposes. Zooming in on the spectrum near X, Extended Fig. <ref>h shows that although the pairing amplitudes are isotropic, very anisotropic gaps are induced on the Fermi surfaces. This anisotropy originates from the spin-orbital textures of the Dirac nodal lines. While the detailed profile of the gap size depends on the various combinations of the pairing amplitudes Δ_i, the anisotropy induced by the spin-orbit texture is generic.
§ GAP FITTING MODELS
The complicated structure of the gap function discussed in the tight-binding analysis, along with the many free parameters, makes microscopic fitting untenable. We discuss below how we can try to faithfully represent the gap function using fewer parameters. To start, in the alpha model, the gaps Δ_i(0)/(k_B T_ci) ≡ 1.764·α_i are taken as variables that can differ from the nominal BCS value α_0 = 1. In principle T_c can also differ between the gaps. The specific heat in the superconducting state is obtained from the entropy [Bouquet],
S_i/γ_i = -6/(π^2 k_B)∫_0^∞ [fln f + (1-f)ln(1-f) ] dϵ
where f = 1/(exp(β E) + 1), with β= (k_B T)^-1, and E = √(ϵ^2 + Δ_i^2 (T)). The gap can be modeled approximately with the BCS gap modified by the α-model [Carrington]
Δ_i(T) = α_i · 1.764 k_B T_ci tanh{ 1.82 [1.018 (T_ci/T-1)]^0.51}
The specific heat contribution from each gap is
c_i/(γ_i T) = ∂(S_i/γ_i)/∂ T
and the total specific heat is
c/(γ_n T) = 1/(γ_n T)∑_i c_i = ∑_i n_i ·c_i/(γ_i T)
with γ_i/γ_n = n_i and ∑_i n_i = 1.
The model should also be consistent with the superfluid density extracted from μSR. In the dirty limit, the superfluid density is
ρ_μ(T) ≡λ^2_μ(0)/λ^2_μ(T) = Δ_μ(T)/Δ_μ(0)tanh( Δ_μ(T)/2k_B T)
The measured superfluid density would then be related to the two separate contributions by
ρ(T) = ∑_μγ_μρ_μ (T)
with ∑_μγ_μ = γ_1 + (1-γ_1) = 1. For two bands the γ_1 parameter depends on the Fermi velocities of the bands as follows
γ_1 = n_1 v_1^2/(n_1 v_1^2 + n_2 v_2^2)
While in our BdG model the gap can be anisotropic, we use a single parameter for the gap function (in each band) as an effective gap size.
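A minimal numerical sketch of the two-gap α-model described above (Python; k_B is set to 1 so that temperatures and energies share the same units, and the numerical-derivative step and integration cutoff are free choices) is:

```python
import numpy as np
from scipy.integrate import quad

def gap(T, Tc, alpha):
    """Approximate BCS-like gap with the alpha-model prefactor."""
    if T >= Tc:
        return 0.0
    return alpha * 1.764 * Tc * np.tanh(1.82 * (1.018 * (Tc / T - 1.0)) ** 0.51)

def entropy_norm(T, Tc, alpha):
    """Normalized entropy S_i/gamma_i of a single gap (k_B = 1)."""
    d = gap(T, Tc, alpha)
    def integrand(eps):
        E = np.sqrt(eps**2 + d**2)
        f = 1.0 / (np.exp(min(E / T, 700.0)) + 1.0)
        return f * np.log(f) + (1.0 - f) * np.log(1.0 - f) if 0.0 < f < 1.0 else 0.0
    val, _ = quad(integrand, 0.0, 30.0 * Tc, limit=200)
    return -6.0 / np.pi**2 * val

def c_over_gammaT(T, Tc, alpha, dT=1e-3):
    """c_i/(gamma_i T) = d(S_i/gamma_i)/dT by central difference."""
    return (entropy_norm(T + dT, Tc, alpha) - entropy_norm(T - dT, Tc, alpha)) / (2 * dT)

def rho_dirty(T, Tc, alpha):
    """Dirty-limit superfluid density of a single gap."""
    d = gap(T, Tc, alpha)
    d0 = alpha * 1.764 * Tc
    return (d / d0) * np.tanh(d / (2.0 * T))

# Two-gap combinations: n1 weights the specific heat, g1 weights the superfluid density.
def two_gap_c(T, Tc1, Tc2, a1, a2, n1):
    return n1 * c_over_gammaT(T, Tc1, a1) + (1.0 - n1) * c_over_gammaT(T, Tc2, a2)

def two_gap_rho(T, Tc1, Tc2, a1, a2, g1):
    return g1 * rho_dirty(T, Tc1, a1) + (1.0 - g1) * rho_dirty(T, Tc2, a2)
```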
The alpha-model explicitly assumes that there is no inter-band pairing interaction; however, such an interaction cannot be ruled out. Complementary to the alpha-model is the self-consistent Eilenberger two-band model <cit.>, which accounts for inter-band pairing. To calculate the relevant thermodynamic quantities for the superconducting state, we write δ_μ = Δ_μ/(2π T) for the bands μ=1,2, and we self-consistently solve the coupled equations
δ_ν = ∑_μ n_μλ_νμδ_μ·( λ^-1 + ln(T_c/T) - A_μ)
A_μ = ∑_n=0^∞[ 1/(n+1/2) - 1/√(δ_μ^2 + (n+1/2)^2)]
where
λ = 2 n_1 n_2 (λ_11λ_22 - λ_12^2)/[n_1 λ_11 + n_2 λ_22 - √( (n_1 λ_11 - n_2 λ_22)^2 + 4n_1 n_2λ_12^2 )]
and λ_νμ = N(0) V(ν,μ) are dimensionless effective interaction coefficients. In the clean limit
ρ_μ(T) = ∑_n=0^∞δ_μ^2/[δ_μ^2 + (n+1/2)^2]^3/2
We have modeled the specific heat data using several models: (1) a single BCS-like gap; (2) a two-gap α-model with different T_c's; and (3) a two-gap Eilenberger model. The specific heat and superfluid density were simultaneously refined by minimizing χ^2 = χ^2_c + χ^2_ρ. The results of the fits are shown in Table <ref>. We emphasize that the fits may suffer from the slight deviation of the data from entropy balance, which requires Δ S(T_c)/γ_n T_c = 1. Furthermore, due to the many parameters in the tight-binding and BdG models, we cannot at present select a specific model for LaCuSb2. However, it is worth noting that Model 3 comes closest to matching the superfluid density in the dirty limit, primarily due to the influence of one gap (as γ_1 ≈ 1). Simultaneously, the curvature of the second gap comes closer to representing the specific heat drop near T^*.
§ PHASE DIAGRAM
From the specific heat, magnetization, resistivity, and μSR data, we deduced the field-temperature phase diagram and estimated the critical field. We show the complete phase diagram for magnetic fields applied along the a- and c-axis in Extended Fig. <ref>. Using fits to the critical fields as a function of temperature, we find that H_c1(0) = 32(1) Oe and H_c2(0) = 172(6) Oe for the Type-II superconducting state using magnetization data, whereas H_c(0) ≈ 65.1(2) Oe in the Type-I superconducting state using μSR data.
The thermodynamic critical field H_c as deduced in a Type-II superconductor can be estimated from H_c ≈√(H_c1 H_c2). To verify if this is the case, we plotted √(H_c1 H_c2) as a function of temperature using the magnetization data in the Type-II superconducting state. As seen in Extended Fig. <ref>, the resulting data are quite comparable to the critical fields derived from the Type-I superconducting state, which affirms that both the anisotropy and low critical fields are intrinsic to LaCuSb2 and its particular Dirac band structure.
Other references
[Gschneidner] Sologub, O. & Salamakha, P. Handbook on the Physics and Chemistry of Rare Earths, vol. 33, chap. Rare Earth-Antimony Systems (Elsevier, 2003).
[Wittig] Wittig, J. A study of the superconductivity of antimony under pressure and a search for superconductivity in arsenic. Journal of Physics and Chemistry of Solids 30, 1407–1410 (1969). <https://www.sciencedirect.com/science/article/pii/0022369769902029>
[Andres] Andres, K., Bucher, E., Maita, J. & Cooper, A. Superconductivity of Cu-Sb phases and absence of antiferromagnetism in Cu_2Sb. Physics Letters A 28, 67–68 (1968). <https://www.sciencedirect.com/science/article/pii/0375960168906051>
[Guo] Guo, S. et al. Dimensional crossover in the electrical and magnetic properties of the layered LaSb_2 superconductor under pressure: The role of phase fluctuations. Phys. Rev. B 83, 174520 (2011). <https://link.aps.org/doi/10.1103/PhysRevB.83.174520>
[Ruszala] Ruszala, P., Winiarski, M. & S-C, M. Dirac-like band structure of LaTESb_2 (TE = Ni, Cu, and Pd) superconductors by DFT calculations. Computational Materials Science 154, 106–110 (2018). <https://www.sciencedirect.com/science/article/pii/S0927025618304774>
[Anderson] Anderson, P. W. Theory of dirty superconductors. Journal of Physics and Chemistry of Solids 11, 26–30 (1959).
[Petrovic] Petrovic, C., Bud'ko, S. L., Kogan, V. G. & Canfield, P. C. Effects of La substitution on the superconducting state of CeCoIn_5. Phys. Rev. B 66, 054534 (2002). <https://link.aps.org/doi/10.1103/PhysRevB.66.054534>
[Gorkov] Gor'kov, L. & Melik-Barkhudarov, T. Microscopic derivation of the Ginzburg-Landau equations for an anisotropic superconductor. Soviet Physics JETP 18 (1964).
[Klemenz] Klemenz, S. et al. The role of delocalized chemical bonding in square-net-based topological semimetals. Journal of the American Chemical Society 142, 6350–6359 (2020).
[Prozorov] Prozorov, R. & Kogan, V. G. Effective demagnetizing factors of diamagnetic samples of various shapes. Phys. Rev. Appl. 10, 014030 (2018).
[Ashcroft] Ashcroft, N. & Mermin, N. Solid State Physics (Holt, Rinehart and Winston, 1976). <https://books.google.com/books?id=oXIfAQAAMAAJ>
[Caspary] Caspary, R., Winkelmann, M. & Steglich, F. Origin of the nuclear specific heats in high-Tc superconductors. Physica C: Superconductivity and its Applications 162-164, 474–475 (1989). <https://www.sciencedirect.com/science/article/pii/092145348991112X>
[Ortiz] Ortiz, B. R. et al. Superconductivity in the ℤ_2 kagome metal KV_3Sb_5. Phys. Rev. Materials 5, 034801 (2021). <https://link.aps.org/doi/10.1103/PhysRevMaterials.5.034801>
[Aoki] Aoki, Y. et al. Thermodynamical Study on the Heavy-Fermion Superconductor PrOs_4Sb_12: Evidence for Field-Induced Phase Transition. Journal of the Physical Society of Japan 71, 2098–2101 (2002). <https://doi.org/10.1143/JPSJ.71.2098>
[Liarte] Liarte, D., Transtrum, M. & Sethna, J. Ginzburg-Landau theory of the superheating field anisotropy of layered superconductors. Phys. Rev. B 94, 144504 (2016). <https://link.aps.org/doi/10.1103/PhysRevB.94.144504>
[Brandt] Brandt, E. H. Properties of the ideal Ginzburg-Landau vortex lattice. Phys. Rev. B 68, 054506 (2003). <https://link.aps.org/doi/10.1103/PhysRevB.68.054506>
[Bouquet] Bouquet, F. et al. Phenomenological two-gap model for the specific heat of MgB_2. Europhysics Letters (EPL) 56, 856–862 (2001). <https://doi.org/10.1209/epl/i2001-00598-7>
[Carrington] Carrington, A. & Manzano, F. Magnetic penetration depth of MgB_2. Physica C: Superconductivity 385, 205–214 (2003).
|
http://arxiv.org/abs/2307.00539v1
|
20230702110614
|
The hidden-charm pentaquark states in a mass splitting model
|
[
"Shi-Yuan Li",
"Yan-Rui Liu",
"Zi-Long Man",
"Zong-Guo Si",
"Jing Wu"
] |
hep-ph
|
[
"hep-ph",
"hep-ex",
"hep-lat",
"nucl-th"
] |
[email protected]
[email protected]
[email protected]
^1School of Physics, Shandong University, Jinan, Shandong 250100, China
^2School of Science, Shandong Jianzhu University, Jinan 250101, China
Assuming that the P_c(4312)^+ is a I(J^P)=1/2(3/2^-) compact pentaquark, we study the mass spectrum of its S-wave hidden-charm partner states in a color-magnetic interaction model. Combining the information from their decays obtained in a simple rearrangement scheme, one finds that the quantum numbers of P_c(4457)^+, P_c(4440)^+, and P_c(4337)^+ can be assigned to be I(J^P)=1/2(3/2^-), 1/2(1/2^-), and 1/2(1/2^-), respectively, while both P_cs(4338)^0 and P_cs(4459)^0 can be interpreted as I(J^P)=0(1/2^-) udscc̅ compact states. Based on the numerical results, we also find narrow pentaquarks in ssncc̅ (n=u,d) and ssscc̅ systems. The decay properties of the studied pentaquarks and the searching channels for them can be tested in future experiments.
The hidden-charm pentaquark states in a mass splitting model
Jing Wu^2
August 1, 2023
============================================================
§ INTRODUCTION
In 2015, two exotic states P_c(4380)^+ and P_c(4450)^+ were observed in the J/ψ p invariant mass distributions in the decay Λ_b^0 → J/ψ pK^- by the LHCb Collaboration <cit.>. Because their masses are very high, one cannot interpret them as excited three-quark baryons. Their minimal quark content should be uudcc̅. Therefore, they are good candidates for hidden-charm pentaquark states. In 2019, LHCb<cit.> reported a new hidden-charm pentaquark-like state P_c(4312)^+, while the P_c(4450)^+ was resolved into two states P_c(4440)^+ and P_c(4457)^+ with updated statistics. Recently, LHCb announced evidence for new pentaquark-like states P_c(4337)^+, P_cs(4459)^0, and P_cs(4338)^0 in the decay channels B^0_s → J/ψ pp̅ <cit.>, Ξ_b^- → J/ψΛ K^- <cit.>, and B^-→ J/ψΛp̅ <cit.>, respectively. The minimal quark content of the P_cs(4459)^0 and P_cs(4338)^0 is udscc̅. We summarize the masses, decay widths, and observed channels of these states in Table <ref>. The newly observed hidden-charm pentaquark-like states have inspired many debates about their inner structures and quantum numbers <cit.>.
In the literature, interpretations of the above mentioned exotic baryons include compact pentaquark states <cit.>, molecule states <cit.>, cusp effects <cit.>, coupled channel effects <cit.>, etc. There are also studies of their decay and production properties <cit.>. One may consult Refs. <cit.> for more discussions. Most studies support the molecule interpretation.
In fact, one can still hardly distinguish the inner structures of these observed pentaquark-like states from the current experimental data. The possibility that their properties can be understood in the compact pentaquark picture is still not ruled out.
In previous papers <cit.>, we have studied the mass spectra and rearrangement decays of S-wave hidden-charm pentaquark states with the (qqq)_8_c(QQ̅)_8_c (q=u,d,s) configuration in the chromomagnetic interaction (CMI) model by choosing a reference hadron-hadron channel. From the combined analysis of spectrum and decay, our results indicate that the P_c(4457)^+, P_c(4440)^+, and P_c(4312)^+ are probably J^P=3/2^-, 1/2^-, and 3/2^- (uud)_8_c(cc̅)_8_c pentaquark states, respectively. However, there are two drawbacks in these works. On the one hand, the mass spectra are estimated by using a hadron-hadron threshold as the reference scale and the choice of meson-baryon channel affects the results. On the other hand, the contributions from the color-singlet (qqq)_1_c(cc̅)_1_c component were not considered in the pentaquark wave functions, which meant that information on the charmonium decay channels was missing. Here, we revisit the compact hidden-charm pentaquark states with an improved framework.
In Ref. <cit.>, we found that the X(4140) can be interpreted as a compact csc̅s̅ tetraquark state. Later in Refs. <cit.>, we found that the mass spectra of other tetraquarks may be obtained by treating the X(4140) as a reference state. Now, we use a similar idea to study compact pentaquarks.
We improve the CMI model to estimate the masses of the hidden-charm pentaquark states assuming that the P_c(4312)^+ is a compact pentaquark. Up to now, all the hidden-charm pentaquarks have been observed in the J/ψ channels. Comparing theoretical calculations with experimental data for such decay properties can provide more information about the internal structures of hadrons. Therefore, we also include the hidden-charm channels in the calculation of decay widths with a simple scheme.
This paper is arranged as follows. After the introduction, we present the formalism to study mass spectra and rearrangement decays of hidden-charm pentaquark states in Sec. <ref>. The numerical results which include discussions about predicted stable pentaquarks and possible assignments for the observed states will be given in Sec. <ref>. The last section is for summary.
§ FORMALISM
§.§ Mass splitting model
We employ the chromomagnetic interaction model to study the S-wave qqqcc̅ (q=u,d,s) systems. The model Hamiltonian reads
H=∑_im_i+H_CMI=∑_im_i-∑_i<jC_ijλ_i·λ_jσ_i·σ_j,
where λ_i and σ_i are the Gell-Mann matrices and the Pauli matrices for the i-th quark, respectively, and m_i is the effective quark mass. The effective coupling coefficient C_ij reflects the coupling strength between the i-th and j-th quarks and can be extracted from the ground-state hadrons. One calculates the mass of an S-wave pentaquark with
M=∑_im_i+⟨ H_CMI⟩
after diagonalizing the Hamiltonian. In fact, we obtained overestimated hadron masses with this formula in our previous studies. They may be regarded as theoretical upper limits <cit.>. The overestimated masses are mainly due to the values of the m_i's. Because each system actually has its own m_i's, the model cannot provide an appropriate description of the attraction between the quark components for all systems. To get more reasonable theoretical results, the mass of a pentaquark state can be rewritten as
M=(M_ref-⟨ H_CMI⟩_ref)+⟨ H_CMI⟩,
where M_ref and ⟨ H_CMI⟩_ref are the measured mass and chromomagnetic interaction matrix of the reference system, respectively. This method can partially compensate the uncertainty caused by the effective quark masses <cit.>.
There are two schemes to choose the reference system for the hidden-charm pentaquark states. The first scheme involves a meson-baryon channel whose threshold is treated as the reference scale. It yields more reasonable results than the scheme adopting Eq. (<ref>). In our previous work <cit.>, we obtained masses of hidden-charm pentaquark states with different thresholds, but it is difficult to determine which threshold is a more appropriate choice. The second scheme adopts a compact reference pentaquark, which is more reasonable than the first scheme since the structure of a meson-baryon state is actually different from a compact state. The procedure is similar to getting the estimated masses for tetraquark systems, where one identifies the X(4140) as the lowest 1^++ csc̅s̅ compact tetraquark and treats it as the reference state <cit.>. In Ref. <cit.>, we studied the (uud)_8_c(cc̅)_8_c pentaquark states within the CMI model using a (charmed meson)-(charmed baryon) threshold as a reference. The results indicate that the P_c(4312)^+ can be assigned as a J=3/2 (uud)_8_c(cc̅)_8_c compact pentaquark. Here, we still assume that the P_c(4312)^+ is a compact state with I(J^P)=1/2(3/2^-) and choose it as a reference in the present case. The difference is that it is now a mixed state of (uud)_8_c(cc̅)_8_c and (uud)_1_c(cc̅)_1_c. From the following numerical results (see Sec. <ref>), one finds that the colored (qqq)_8_c(cc̅)_8_c component of P_c(4312)^+ plays a dominant role in the wave function and the adopted assumption is consistent with Ref. <cit.>. In this updated scheme, the mass formulas for the considered systems are
M_nnncc̅=(M_P_c(4312)^+-⟨ H_CMI⟩_P_c(4312)^+)+⟨ H_CMI⟩_nnncc̅,
M_nnscc̅=(M_P_c(4312)^+-⟨ H_CMI⟩_P_c(4312)^+)+Δ_sn+⟨ H_CMI⟩_nnscc̅,
M_ssncc̅=(M_P_c(4312)^+-⟨ H_CMI⟩_P_c(4312)^+)+2Δ_sn+⟨ H_CMI⟩_ssncc̅,
M_ssscc̅=(M_P_c(4312)^+-⟨ H_CMI⟩_P_c(4312)^+)+3Δ_sn+⟨ H_CMI⟩_ssscc̅,
where Δ_sn=m_s-m_n denotes the effective quark mass gap between s quark and n(=u,d) quark.
To relate the masses of the nnscc̅, ssncc̅, and ssscc̅ systems to that of P_c(4312)^+, we introduce this parameter. Compared with Eq. (<ref>), the problem of the effective quark masses becomes that of the mass gap between different quark flavors, and the uncertainty caused by the effective quark masses is partially compensated <cit.>.
To calculate the CMI Hamiltonians of pentaquark systems, one constructs their wave functions. In Refs. <cit.>, the wave functions involving color-octet component (qqq)_8_c(cc̅)_8_c have been obtained. In the present work, we reconstruct wave functions by incorporating the color-singlet component (qqq)_1_c(cc̅)_1_c. These wave functions which are summarized in Table <ref> will also be used to understand the decay properties of hidden-charm pentaquark states. In the table, we adopt the notation [(qqq_flavor)^spin_color(cc̅)^spin_color]^spin_color. For brevity, we use F (D) to denote the flavor wave function of the first three (two) light quarks. The notation MS (MA) indicates that the first two light quarks are symmetric (antisymmetric) and S (A) means that the wave function is totally symmetric (antisymmetric) in flavor, spin, or color space. For example, the wave function [(F_S)^S_A(cc̅)^1_1]^5/2_1 is for the I(J)=3/2(5/2) case. The subscript S in F_S indicates that the flavor wave function for the first three quarks is symmetric under the permutation of any two quarks and the superscript S (subscript A) of F_S means that the spin (color) wave function for the three light quarks is totally symmetric (antisymmetric).
Here, we present the calculated CMI matrices with explicit expressions. For the I=3/2,Y=1 case, we have
⟨ H_CMI⟩_J=5/2=8C_12+16/3C_45;
⟨ H_CMI⟩_J=3/2=[ 10C_12+10/3C_14-10/3C_15-2/3C_45 8/√(3)(C_14+4C_15) 8√(5)/3(-C_14+C_15); 8(C_12-2C_45) 0; 8C_12+16/3C_45; ];
⟨ H_CMI⟩_J=1/2=[ 10C_12+2C_45 10/√(3)(C_14+C_15) -8√(2/3)(C_14+C_15); 10C_12-20/3(C_14-C_15)-2/3C_45 8√(2)/3(-C_14+C_15); 8C_12+16/3C_45 ].
For the I=1/2,Y=1 case, the matrices are
⟨ H_CMI⟩_J=5/2=2C_12+6C_14+6C_15-2/3C_45;
⟨ H_CMI⟩_J=3/2=[ -2C_12+2C_14+2C_15-2/3C_45 2√(2/3)(C_14+4C_15) -2/3√(10)(C_14-4C_15) 8/3(C_14-C_15); 2(C_12+C_45) 2√(15)(C_14-C_15) 4√(2/3)(C_14+C_15); 2C_12-2/3(6C_14+6C_15+C_45) 4/3√(10)(-C_14+C_15); -8C_12+16/3C_45 ];
⟨ H_CMI⟩_J=1/2=[ -2C_12+2C_45 2√(3)(C_14-C_15) -4/√(3)(C_14+4C_15) 0 8/√(3)(C_14+C_15); -2/3([ 3C_12+6C_14; +6C_15+C_45 ]) -4/3(C_14-4C_15) 8/√(3)(C_14+C_15) -16/3(C_14-C_15); ([ 2C_12-10C_14; -10C_15-2/3C_45 ]) -8/√(3)(C_14+C_15) -8/3(C_14-C_15); -8C_12-16C_45 0; -8C_12+16/3C_45 ].
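To illustrate the mass prescription above, the short sketch below diagonalizes the I=3/2, J=1/2 nnncc̅ CMI block written above and shifts its eigenvalues by the P_c(4312)^+ reference scale. The coupling constants C_ij and the reference value ⟨ H_CMI⟩ of P_c(4312)^+ used here are placeholders, not the fitted parameters of this work.

import numpy as np

# Minimal sketch of M = (M_ref - <H_CMI>_ref) + <H_CMI>: diagonalize a CMI block
# and shift the eigenvalues by the reference scale. All numbers are illustrative.
C12, C14, C15, C45 = 18.4, 4.0, 6.6, 5.3          # MeV, placeholder couplings

# I = 3/2, J = 1/2 nnnccbar block, transcribed from the text (symmetric matrix).
H = np.array([
    [10*C12 + 2*C45,              10/np.sqrt(3)*(C14 + C15),            -8*np.sqrt(2/3)*(C14 + C15)],
    [10/np.sqrt(3)*(C14 + C15),   10*C12 - 20/3*(C14 - C15) - 2/3*C45,   8*np.sqrt(2)/3*(-C14 + C15)],
    [-8*np.sqrt(2/3)*(C14 + C15), 8*np.sqrt(2)/3*(-C14 + C15),           8*C12 + 16/3*C45],
])

eigvals = np.linalg.eigvalsh(H)

M_ref = 4311.9        # MeV, P_c(4312)^+ mass used as the reference state
H_ref = 20.0          # MeV, <H_CMI> of the reference state (placeholder)

print("Estimated masses (MeV):", np.round((M_ref - H_ref) + eigvals, 1))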
Now we move on to the nnscc̅ systems. For simplicity, we write the CMI matrix in the form
⟨ H_CMI⟩=[ X Y; Y^T Z ],
where the symmetric matrix X involves only color-octet contributions and the symmetric Z is for the color-singlet component. The X expressions can be found in Ref. <cit.>. We here just give Y and Z results. For the I=1,Y=0 case, we have
Y_J=5/2=4√(2)/3(β-ν),
Z_J=5/2=8/3(C_12+2C_13+2C_45);
Y_J=3/2=[ 4√(2)/9(2β+ν) 4/3√(2/3)(α+2μ) -4√(10)/9(β+2ν); 4/3√(2/3)(α+2μ) 0 4/3√(10/3)(α-μ); -4√(10)/9(β+2ν) 4/3√(10/3)(α-μ) -8√(2)/9(β-ν); 4√(2)/3β -4√(2/3)α 4√(10)/3β ],
Z_J=3/2=diag(8/3(C_12-4C_13+2C_45),8/3(C_12+2C_13)-16C_45,8/3(C_12+2C_13+2C_45));
Y_J=1/2=[ 0 4/3√(2/3)(2α+μ) -8/3√(3)(α+2μ); 4/3√(2/3)(2α+μ) -8√(2)/9(2β+ν) -8/9(β+2ν); -8/3√(3)(α+2μ) -8/9(β+2ν) -20√(2)/9(β-ν); 0 4√(2/3)α 8/√(3)α; 4√(2/3)α -8√(2)/3β 8/3β ],
Z_J=1/2=diag(8/3(C_12-4C_13)-16C_45,8/3(C_12-4C_13+2C_45),8/3(C_12+2C_13+2C_45)),
where α=C_14+C_15, β=C_14-C_15, μ=C_34+C_35, and ν=C_34-C_35. For the I=0,Y=0 case, the Y and Z blocks are
Y_J=3/2=[ 4√(2)/3β; -4√(2/3)α; 4√(10)/3β; -4√(2)/3ν ],
Z_J=3/2=-8C_12+16/3C_45;
Y_J=1/2=[ 0 4√(2/3)α; 4√(2/3)α -8√(2)/3β; 8/√(3)α 8/3β; 0 -4√(2/3)μ; -4√(2/3)μ 8√(2)/3ν; ],
Z_J=1/2=diag(-8(C_12+2C_45),-8C_12+16/3C_45).
For the I=1/2, Y=-1 (I=0, Y=-2) case, the matrices are similar to the I=1, Y=0 (I=3/2, Y=1) case.
§.§ Rearrangement decay
In previous works <cit.>, a simple decay scheme with a constant Hamiltonian H=α has been adopted in order to study the rearrangement decay properties of a multiquark state into two conventional hadrons. In principle, the decay constant α should be changed for different systems. From our study, one finds that the theoretical ratios between widths of P_c(4312)^+, P_c(4440)^+, and P_c(4457)^+ by using this simple model are roughly consistent with the experimental results. Here, we still adopt this model to investigate decay properties of the hidden-charm pentaquark states.
There are four possible rearrangement decay types,
(q_1q_2q_3)(cc̅)→(q_1q_2c)_1c+(q_3c̅)_1c,
(q_1q_2q_3)(cc̅)→(q_1cq_3)_1c+(q_2c̅)_1c,
(q_1q_2q_3)(cc̅)→(cq_2q_3)_1c+(q_1c̅)_1c,
(q_1q_2q_3)(cc̅)→(q_1q_2q_3)_1c+(cc̅)_1c.
To calculate their matrix elements, one projects the wave function of the final meson-baryon state onto the initial pentaquark. In the color space, the final state is recoupled to the (qqq)(cc̅) base by using the SU(3) Clebsch-Gordan coefficients <cit.>,
(q_1q_2c)_1(q_3c̅)_1 =2√(2)/3(q_1q_2q_3)_MA(cc̅)_8+1/3(q_1q_2q_3)_1(cc̅)_1,
(q_1cq_3)_1(q_2c̅)_1 =-√(2/3)(q_1q_2q_3)_MS(cc̅)_8-√(2)/3(q_1q_2q_3)_MA(cc̅)_8+1/3(q_1q_2q_3)_1(cc̅)_1,
(cq_2q_3)_1(q_1c̅)_1 =√(2/3)(q_1q_2q_3)_MS(cc̅)_8-√(2)/3(q_1q_2q_3)_MA(cc̅)_8+1/3(q_1q_2q_3)_1(cc̅)_1,
(q_1q_2q_3)_1(cc̅)_1 =(q_1q_2q_3)_1(cc̅)_1.
In the spin and flavor spaces, similar recouplings are also conducted. The initial wave function of a pentaquark state, as an eigenstate of the chromomagnetic interaction, can be written as Ψ_penta=∑_ix_i(q_1q_2q_3cc̅) where x_i is the element of an eigenvector of the CMI matrix. Then the amplitude squared of a rearrangement decay channel is |ℳ|^2=α^2|∑_ix_iy_i|^2. Here, y_i represents the coefficient when one recouples the meson-baryon base to the (qqq)(cc̅) base. The rearrangement decay width for a pentaquark is then given by
Γ=|ℳ|^2|p⃗_1|/(8π M^2_pentaquark),
where p⃗_1 is the three-momentum of a final hadron in the center-of-mass frame.
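The decay estimate can be written in a few lines; the sketch below implements Γ = α²|∑_i x_i y_i|² |p⃗_1|/(8π M²) with the standard two-body phase-space momentum. The eigenvector components x_i, the recoupling coefficients y_i and the channel masses are placeholders chosen only for illustration.

import numpy as np

# Hedged sketch of the rearrangement-decay estimate described above.
def momentum_cm(M, m1, m2):
    """Final-state three-momentum for a particle of mass M decaying to m1 + m2 (MeV)."""
    if M < m1 + m2:
        return 0.0                                   # channel closed
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return np.sqrt(lam) / (2 * M)

def width(alpha, x, y, M, m1, m2):
    """Two-body rearrangement width (MeV) for one decay channel."""
    amp2 = alpha**2 * abs(np.dot(x, y))**2           # |M|^2 = alpha^2 |sum_i x_i y_i|^2
    return amp2 * momentum_cm(M, m1, m2) / (8 * np.pi * M**2)

# Example: a hypothetical pentaquark of mass 4312 MeV decaying to N + J/psi.
alpha = 4647.94                     # MeV, fixed from Gamma(P_c(4312)^+) in the text
x = np.array([0.9, 0.3, 0.3])       # CMI eigenvector components (placeholder)
y = np.array([0.2, -0.1, 0.4])      # recoupling coefficients (placeholder)
print(width(alpha, x, y, 4312.0, 938.3, 3096.9), "MeV")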
§ NUMERICAL RESULTS
§.§ Model parameters
In our calculations, we use the coupling parameters listed in Table <ref>. We will set m_n=362 MeV, m_s=540 MeV, m_c=1725 MeV, and m_b=5053 MeV for the effective quark masses when adopting Eq. (<ref>). The mass gap Δ_sn=90.6 MeV extracted from ground hadrons is taken from Ref. <cit.>. One may consult Ref. <cit.> for details regarding the selection procedure for this parameter. The masses of final hadrons used in calculations are taken from the particle data book <cit.>. Here, we assume that the two-body rearrangement decays saturate the total width.
That is, the sum of two-body rearrangement decay widths is equal to the measured width for a hidden-charm pentaquark state, Γ_sum=Γ_total. One determines the parameter α=4647.94 MeV from the decay width of P_c(4312)^+.
With the above parameters, the masses and decay widths of ground hidden-charm pentaquark states are calculated. We list these results in Tables <ref>-<ref>.
§.§ The nnncc̅ system
There are four I(J^P)=1/2(3/2^-) nnncc̅ states when one considers contributions from both color-octet and color-singlet structures. Following the conclusion of Ref. <cit.>, we assume that the P_c(4312)^+ is the second lowest I(J^P)=1/2(3/2^-) nnncc̅ compact pentaquark and treat it as the reference state in studying other pentaquarks.
We collect the numerical results for the masses of nnncc̅ compact states in Table <ref>. In the table, the first column shows the quantum numbers. The second and third columns list the numerical values for the CMI matrix and the corresponding eigenvalues, respectively. The fourth column gives the pentaquark masses by referencing to P_c(4312)^+. The masses in the fifth and sixth columns are estimated with the NJ/ψ (Δ J/ψ) threshold and Eq. (<ref>), respectively. They can be treated as the lower and upper limits for the masses of the nnncc̅ states.
Fig. <ref> displays the relative positions for the nnncc̅ compact states. In the I=1/2 case, four pentaquark states are located above 4.4 GeV and three pentaquarks have masses around 4.3 GeV. The results indicate that one may identify the calculated J^P=3/2^- (J^P=1/2^-) pentaquark with mass 4461 (4421) MeV to be the P_c(4457)^+ (P_c(4440)^+), which is consistent with the assignment given in Ref. <cit.>. Just from the mass, the P_c(4337)^+ seems to be a J=1/2 state. One can check whether this assignment is reasonable from the decay properties.
In Table <ref> (Table <ref>), we present the rearrangement decay widths for the I=1/2 (I=3/2) nnncc̅ pentaquarks. The ratios between the widths of the states of interest will be checked. To avoid confusion, we use the symbol P̃ to denote theoretical states. From the results in Table <ref>, one gets
Γ(P̃_c(4421)^+):Γ(P̃_c(4461)^+)=2.42 ,
Γ(P̃_c(4421)^+):Γ(P̃_c(4312)^+)=1.24 ,
Γ(P̃_c(4312)^+):Γ(P̃_c(4461)^+)=1.96,
Γ(P̃_c(4324)^+):Γ(P̃_c(4461)^+)=2.64,
Γ(P̃_c(4324)^+):Γ(P̃_c(4312)^+)=1.35,
Γ(P̃_c(4324)^+):Γ(P̃_c(4421)^+)=1.09.
On the other hand, the ratios between the measured widths are
Γ(P_c(4440)^+):Γ(P_c(4457)^+)=3.2^+2.1_-3.5,
Γ(P_c(4440)^+):Γ(P_c(4312)^+)=2.1^+1.5_-1.5,
Γ(P_c(4312)^+):Γ(P_c(4457)^+)=1.5^+1.0_-1.7,
Γ(P_c(4337)^+):Γ(P_c(4457)^+)=4.5^+5.0_-5.2,
Γ(P_c(4337)^+):Γ(P_c(4312)^+)=3.0^+3.4_-2.3,
Γ(P_c(4337)^+):Γ(P_c(4440)^+)=1.4^+1.6_-1.1.
In order to clearly compare the results in Eq. (<ref>) with those in Eq. (<ref>), we plot the values of ratios in Fig. <ref>. One finds that the calculated ratios between widths are compatible with the experimental data within error. Then it is reasonable to regard the P_c(4457)^+, P_c(4440)^+, and P_c(4337)^+ as our P̃_c(4461) with I(J^P)=1/2(3/2^-), P̃_c(4421) with I(J^P)=1/2(1/2^-), and P̃_c(4324) with I(J^P)=1/2(1/2^-), respectively.
If the above assignment is correct, we can give an estimate for the partial width ratios for the four P_c states. In the P_c(4457)^+ case, one has Γ(Σ^*_c D̅):Γ(Λ_c D̅^*):Γ(N J/Ψ)=2.3:4.0:1.0. Since the contributions from the color-singlet component are included now, the hidden-charm decay modes can be described. The P_c(4440)^+ would mainly decay into Λ_cD̅^*, while its decays into Σ_cD̅, Λ_cD̅, N J/Ψ, and Nη_c are relatively suppressed. The ratios between partial widths of these five channels are 45.5:3.0:3.0:7.5:1.0. For the P_c(4312)^+, the partial width ratio between the two dominant decay modes Λ_cD̅^* and N J/Ψ is Γ(N J/Ψ):Γ(Λ_cD̅^*)=1.1. This is different from our previous result <cit.>. The P_c(4337)^+ may have two dominant decay channels Λ_cD̅ and N J/Ψ with the branching fraction reaching up to 91%. The ratio between their partial widths is found to be Γ(Λ_cD̅):Γ(NJ/Ψ)=1.3.
The decay into Nη_c is also sizable with a branching fraction of Br[P_c(4337)→ Nη_c]∼7%. However, the decay channels Σ_cD̅ and Λ_cD̅^* are suppressed. If our results are all acceptable, it is worth noting that the I(J^P)=1/2(5/2^-) hidden-charm state P̃_c(4479)^+, a compact structure without hidden-charm decay channels, may be stable, because its mass is lower than the Σ_c^*D̅^* threshold. Besides these five states, four additional pentaquarks may also exist whose decay properties can be found in Table <ref>.
Compared with the I=1/2 nnncc̅ pentaquarks, the masses and rearrangement decay widths of the I=3/2 states are overall larger. All the I=3/2 states can decay into Δ J/ψ and searching for all of them in this mode is possible. However, the J=5/2 state (P̃_c(4557)), the two heaviest J=3/2 states (P̃_c(4581) and P̃_c(4549)), and the second heaviest J=1/2 state (P̃_c(4579)) have similar masses, which probably makes it difficult to distinguish them in a preliminary experimental study. The P̃_c(4557) mainly decays into Δ J/ψ and Σ_c^*D̅^*, while the J=3/2 (J=1/2) states have special rearrangement channels Σ_c^*D̅ and Δη_c (Σ_cD̅).
The above discussions are based on the assignment that the P_c(4312)^+ is a compact pentaquark with I(J^P)=1/2(3/2^-). This assumption results from the combined analysis of mass spectrum and decay properties. To see the consistency between the present study and the study in Ref. <cit.>, we list the eigenvalues and eigenvectors of the I(J^P)=1/2(3/2^-) CMI matrix in Table <ref>. Clearly, the color-octet component dominates the wave function of P_c(4312)^+ with a probability ∼83%.
§.§ The nnscc̅ system
The masses of the nnscc̅ compact pentaquarks are calculated and are listed in Table <ref>. We depict the relative positions for these states in Fig. <ref>. In the I=0 case, five pentaquarks have masses around 4338 MeV and two pentaquarks have masses close to 4459 MeV. Just from the spectrum, the theoretical P̃_cs(4338) and P̃_cs(4478) with J=3/2 are good candidates for the P_cs(4338)^0 and P_cs(4459)^0, respectively, but there are also other possibilities. To discuss possible assignments for the quantum numbers of the two observed P_cs states, we again adopt the decay widths estimated with the simple rearrangement scheme. The results in the isoscalar case are summarized in Table <ref>.
If one assigns the P_cs(4459)^0 and P_cs(4338)^0 to be J=3/2 pentaquark states P̃_cs(4478)^0 and P̃_cs(4338)^0, respectively, the calculated width ratio Γ(P̃_cs(4478)^0):Γ(P̃_cs(4338)^0)=0.12 contradicts the experimental value Γ(P_cs(4459)^0):Γ(P_cs(4338)^0)=2.5^+1.6_-1.4. We have to consider other possible assignments. The relevant width ratios are
Γ(P̃_cs(4478)^0):Γ(P̃_cs(4371)^0) = 0.15,
Γ(P̃_cs(4478)^0):Γ(P̃_cs(4328)^0) = 0.56,
Γ(P̃_cs(4478)^0):Γ(P̃_cs(4318)^0) = 2.57,
Γ(P̃_cs(4478)^0):Γ(P̃_cs(4304)^0) = 0.17,
Γ(P̃_cs(4497)^0):Γ(P̃_cs(4371)^0) = 0.72,
Γ(P̃_cs(4497)^0):Γ(P̃_cs(4338)^0) = 0.61,
Γ(P̃_cs(4497)^0):Γ(P̃_cs(4328)^0) = 2.78,
Γ(P̃_cs(4497)^0):Γ(P̃_cs(4318)^0) = 12.71,
Γ(P̃_cs(4497)^0):Γ(P̃_cs(4304)^0) = 0.83.
The third and seventh ratios are consistent with the experimental value. However, the width of P̃_cs(4318)^0 is much smaller than the measured Γ(P_cs(4338)^0), which leads to the most plausible assignment, namely that the observed P_cs(4459)^0 and P_cs(4338)^0 correspond to P̃_cs(4497)^0 and P̃_cs(4328)^0, respectively. Therefore, our analysis indicates that the quantum numbers of both P_cs(4338)^0 and P_cs(4459)^0 may be assigned as I(J^P)=0(1/2^-). The comparison of width ratio between model calculation and experimental value with this assignment is also shown in Fig. <ref>.
If the P_cs(4459)^0 indeed corresponds to the highest J=1/2 pentaquark state P̃_cs(4497)^0, it may mainly decay into Λ_cD̅^*_s, Ξ_cD̅^*, and Λ J/Ψ, while the decays into Λ_cD̅_s, Ξ_c^'D̅, Ξ_c D̅, and Λη_c are suppressed because of small phase space. The ratios between the main partial widths of P_cs(4459)^0 are predicted to be Γ(Λ_cD̅^*_s):Γ(Ξ_cD̅^*):Γ(Λ J/Ψ)=2.3:1.1:1.0, which can be tested in future experiments. If the P_cs(4338)^0 really corresponds to the fourth highest J=1/2 pentaquark state P̃_cs(4328)^0, its main decay modes would be Λ J/Ψ and Λ_cD̅_s. The ratio between the corresponding partial widths is estimated to be Γ(Λ J/Ψ):Γ(Λ_cD̅_s)=3.0.
It is interesting to note that the J=5/2 state P̃_cs(4550)^0, the lightest J=3/2 state P̃_cs(4318)^0, and the lightest J=1/2 state P̃_cs(4127)^0 may be stable. The P̃_cs(4550)^0 being a compact hidden-color structure can be searched for in the radiative decay channel Ξ_c^*+D^-γ. The search for P̃_cs(4318)^0 can be conducted with more analyses in the Λ J/Ψ channel. The experimentalists may search for the P̃_cs(4127)^0 in the Λ^0 η_c or Λ^0 π^+D_s^- channel.
In the I=1 case, many nnscc̅ states have the Σ^* J/ψ decay mode. They can be searched for in this channel. Of course, other channels listed in Table <ref> can also be used. The light J=5/2 pentaquark state P̃_cs(4575) should be a stable one, which can be searched for in the Σ^*++ J/Ψ and Λ_c^+π^- D^*-_s channels.
§.§ The ssncc̅ system
The symmetry for the wave functions of ssncc̅ states is the same as that for I=1, Y=0 nnscc̅ states. Noticing the difference in effective coupling parameters, one can get similar CMI matrices from those for nnscc̅. The numerical results are collected in Table <ref> where the data listed in the fourth, fifth, and sixth columns are estimated with the P_c(4312)^+, J/ΨΞ threshold, and effective quark masses, respectively. We also plot the relative positions for pentaquark states and relevant meson-baryon thresholds in Fig. <ref>(a). The rearrangement decay information can be found from Table <ref>.
From the results, the lightest state whose spin is 1/2 has a mass around 4.3 GeV. It has only one rearrangement decay channel Ξη_c. Although the coupling with this channel is strong, the width is not large because of the small phase space. The rearrangement decay width of the light J=5/2 pentaquark is very small, which indicates that it is also stable. Searching for such a state in the Ξ^*J/ψ channel will give more information. The fourth highest J=3/2 state also has a relatively stable structure. It may be searched for in the Ξ^*η_c and Ξ J/Ψ channels. Compared with the nnncc̅ and nnscc̅ cases, the rearrangement decay widths in the ssncc̅ case are relatively smaller. It is possible to observe many double-strange hidden-charm exotic structures in the Ξ J/ψ or Ξ^* J/ψ mass distribution. The open-charm decay channels listed in Table <ref> may be used to distinguish the spins of the observed structures.
§.§ The ssscc̅ system
As for the ssscc̅ case, the calculation procedure and resulting expressions are similar to the I=3/2,Y=1 nnncc̅ case, but the numerical results are different. We present the mass results in Table <ref>, show the relative positions for pentaquarks and relevant meson-baryon thresholds in Fig. <ref>(b), and give the rearrangement decay information in Table <ref>.
From Tables <ref> and <ref>, compared with the I=3/2,Y=1 nnncc̅ case, the decay widths of ssscc̅ states are relatively small because of the smaller phase space. The model calculation tells us that the lightest J=1/2 state with mass 4623 MeV, the lightest J=3/2 state with mass 4591 MeV, and the J=5/2 state with mass 4728 MeV are below their rearrangement decay thresholds and should all be stable. The search for them in the Ξ^0π^-J/ψ channel is called for. The second lightest J=1/2 pentaquark with mass 4734 MeV has one rearrangement decay channel Ω_cD̅_s. Although it is higher than the threshold, the coupling with this channel is weak. It should also be a stable state and a search for this pentaquark in the Ω_cD̅_s channel is strongly proposed.
§ SUMMARY
In this work, we investigate the mass spectra and two-body rearrangement decays of the S-wave hidden-charm pentaquark states within a mass splitting model. In this model, the P_c(4312)^+ is assumed to be a hidden-charm compact pentaquark with I(J^P)=1/2(3/2^-) and the properties of other pentaquarks are studied by treating the P_c(4312)^+ as the reference state. Both color-octet (qqq)_8_c(cc̅)_8_c (q=u,d,s) and color-singlet (qqq)_1_c(cc̅)_1_c components are considered for the wave functions.
From the numerical analyses, one finds that the P_c(4457)^+, P_c(4440)^+, and P_c(4337)^+ can be regarded as the I(J^P)=1/2(3/2^-), 1/2(1/2^-), and 1/2(1/2^-) pentaquark states, respectively. The P_c(4457)^+ mainly rearranges into Σ^*_cD̅, Λ_cD̅^*, and N J/Ψ. The dominant decay channel of P_c(4440)^+ is Λ_cD̅^*. For the rearrangement decay of P_c(4312)^+, the N J/Ψ and Λ_cD̅^* channels are of equal importance. The P_c(4337)^+ mainly decays into Λ_cD̅ as well as N J/Ψ. The high spin pentaquark state nnncc̅ (n=u,d) with I(J^P)=1/2(5/2^-) has a mass around 4479 MeV, but it should be narrow. This state has only a color-octet (nnn)_8_c(cc̅)_8_c component and can be searched for in the Λ^+_cπ^-D^*+ channel in future experiments.
From the spectrum of I=0, Y=0 nnscc̅ pentaquark states, we get good candidates of P_cs(4338)^0 and P_cs(4459)^0 whose quantum numbers are I(J^P)=0(3/2^-). However, the ratio between their widths cannot be understood. When a slightly larger uncertainty in mass is allowed, we find that assigning both P_cs(4338)^0 and P_cs(4459)^0 to be pentaquark states with I(J^P)=0(1/2^-) can result in a width ratio consistent with the experimental data. The lightest isoscalar pentaquarks with J=1/2, 3/2, and 5/2 should all be narrow states. This J=5/2 state, similar to the case of I(J^P)=1/2(5/2^-) nnncc̅, also has only color-octet component.
According to our results for the ssncc̅ case, the light J=5/2 state and the fourth highest J=3/2 state have narrow widths. For the ssscc̅ case, there may be four stable states which are the lightest ones with J=1/2,3/2,5/2 and the second lightest one with J=1/2.
§ ACKNOWLEDGMENTS
We would like to thank Dr. Jian-Bo Cheng for useful discussions. This project was supported by the National Natural Science Foundation of China under Grant Nos. 12235008, 12275157, and 11905114.
|
http://arxiv.org/abs/2307.01104v1
|
20230703153119
|
Dephasing effects on quantum correlations and teleportation in presence of state dependent bath
|
[
"Mehboob Rashid",
"Muzaffar Qadir Lone",
"Prince A Ganai"
] |
quant-ph
|
[
"quant-ph"
] |
^1Department of Physics, National Institute of Technology, Srinagar-190006 India.
^2Quantum Dynamics Lab, Department of Physics, University of Kashmir, Srinagar-190006 India
Quantum information protocols are often designed in the ideal situation with no decoherence. However, in real setup, these protocols are subject to the decoherence and thus reducing fidelity of the measurement outcome. In this work, we analyze the effect of state dependent bath on the quantum correlations and the fidelity of a single qubit teleportation. We model our system-bath interaction as qubits interacting with a common bath of bosons, and the state dependence of the bath is generated through a projective measurement on the joint state in thermal equilibrium. The analytic expressions for the time evolution of entanglement, discord and average fidelity of quantum teleportation are calculated. It is shown that due to the presence of initial system-bath correlations, the system maintains quantum correlations for long times. Furthermore, due to the presence of finite long time entanglement of the quantum channel, the average fidelity is shown to be higher than its classical value.
Dephasing effects on quantum correlations and teleportation in presence of state dependent bath
Prince A Ganai^1
Received XXX; accepted XX
===============================================================================================
§ INTRODUCTION
Quantum correlations described by entanglement<cit.> and discord<cit.> are important features of quantum mechanics that arise due to non-separability, non-locality or the impossibility of local discrimination. In addition to their role in fundamental aspects of physics, these correlations find applications as resources for quantum computation and quantum information<cit.>.
Examples include quantum teleportation<cit.> and superdense coding<cit.>. In many ways, quantum communication protocols are superior to their conventional counterparts<cit.>; for instance, they feature excellent security and channel capacity<cit.>. The quantum teleportation protocol is one of several techniques that allow for unit-fidelity
transfer of a quantum state between two parties sharing a maximally entangled state. Furthermore, a certain class of separable states with non-zero discord has been recognised as a resource for speeding up certain computational tasks over their classical counterparts<cit.>.
In contrast to isolated quantum systems, the interaction of a system with the bath degrades quantum correlations<cit.>. This in turn affects the utilization of quantum correlations for quantum technologies.
The effects of these system-bath (SB) interactions lead to Markovian or non-Markovian dynamics. In the Markovian case, the dynamics is memoryless, while in non-Markovian dynamics the system retrieves information back from the bath, signalling the presence of memory effects<cit.>. Non-Markovian effects have been shown to play a significant role in various quantum protocols like dissipative quantum computation<cit.>, quantum metrology<cit.>, entanglement generation<cit.>, and the dynamical control of correlations in various systems like quantum optics<cit.>, nuclear magnetic resonance <cit.>,
nanophysics<cit.>, etc. In understanding such dynamics, it is often assumed that the system and bath are initially uncorrelated, which is a consequence of the Born approximation. However, under strong coupling this assumption is violated, for example in quantum state preparation. In this direction many works have analyzed the role of these initial SB correlations in dephasing models<cit.>, superconducting qubits<cit.>, quantum dots<cit.>, etc.
In this work, we consider a dephasing model represented by two qubits coupled to a collective bath with a distance-dependent interaction. Our goal is to study the role of initial SB correlations in the dynamics of quantum correlations and quantum teleportation. In earlier works, the effects of such initial SB correlations have been studied. For example, Li et al.<cit.> and Zhang et al.<cit.>
have shown that initial SB correlations have a strong influence on the dynamics of quantum discord and entanglement. However, the types of initial states considered in these works are restricted to pure states at zero temperature only. Here we consider a class of initial states at finite temperature obtained via projective measurements.
Furthermore, the dynamics of the average fidelity of teleportation of a single qubit in the presence of some particular types of noise has also been studied<cit.>.
It has been shown that local noise can even boost the fidelity of single-qubit teleportation <cit.>. In these works, initial SB correlations present in the teleported qubit or in the entangled channel are not considered. Here we attempt to analyze whether these initial SB correlations affect the average fidelity of teleportation.
This paper is structured in the following way. We introduce the model system with SB correlations in section II.
The dynamics of quantum correlations given by negativity and discord is presented in section III. In section IV, we discuss the quantum teleportation protocol and find that the initial SB correlations help to maintain the average fidelity above the classical value. Finally, we conclude in section V.
§ MODELLING SYSTEM-BATH INTERACTIONS
We consider a two-qubit channel shared by Alice and Bob that evolves according to a dephasing model in which the qubits, separated by a distance L, are coupled to a collective bath (ħ = 1):
H = H_S + H_B + H_int
= ω_0/2∑_iσ_i^z + ∑_kω_kb_k^†b_k + ∑_ikσ_i^z(g_ke^-ik⃗.r⃗_⃗i⃗b_k + h.c.) .
ω_0 is the energy splitting of the qubits; the bath modes are characterized by energies ω_k, with b_k, b_k^† the annihilation and creation operators for the kth bath mode. σ^z_i and r⃗_⃗i⃗ are the z-Pauli matrix and position vector of the ith qubit, respectively. Here, h.c. means Hermitian conjugate. For notational convenience we call the channel shared by Alice and Bob the “system (S)". In the Born approximation, system-bath correlations are neglected. In this work, however, we consider a particular type of initial state that incorporates system-bath correlations at finite temperature<cit.>. In order to generate a state-dependent bath, i.e. initial system-bath correlations,
we consider a thermal equilibrium state given by ρ_SB^T = e^-β H /Z. Here Z is the partition function and β =1/T. Now we make a projective measurement via projection operators {Π_i} on the state of the system, such that the total SB density operator collapses to
ρ_SB^T = 1/Z∑_i Π_i e^-β H Π_i.
Now we prepare the state of the system to be in
|ψ⟩ so that Π_i=|ψ⟩⟨ψ|. With this projection, the above sum reduces to a single term:
ρ_SB^T =|ψ⟩⟨ψ| ⊗1/Z_B⟨ψ| e^-β H |ψ⟩ = ρ_S ⊗ρ_B^ψ
where Z_B = Tr_B⟨ψ| e^-β H |ψ⟩. First, we compare this state with the uncorrelated state used in the Born approximation: ρ_tot(0)= ρ_S (0) ⊗ρ_B, where ρ_B = e^-β H_B /Z is the bath density matrix. Here the bath state does not depend on the parameters of the system, while the bath state defined in equation <ref> depends non-trivially on the parameters of the state of the system |ψ⟩. Next, we make a comparison to the correlated initial states reported in the literature, for example in references <cit.>, which
are of the form |ψ⟩_r =α |0⟩_S|0⟩_B + β |1⟩_S|1⟩_B,
where |0⟩_B is the vacuum state of the bath and |1⟩_B = b^† |0⟩ is a bath state with a single excitation. This assumption is ad hoc in the sense that it does not incorporate a larger number of excitations, which are important in the strong coupling limit or when the Born approximation is not valid. These are pure states with no temperature dependence. The form of the initially correlated states considered in equation <ref> is entirely different in its construction. These states arise due to selective measurements on a pre-defined thermal equilibrium state, which give rise to non-trivial initial system-bath correlations. In contrast, the measurement on the state |ψ⟩_r results in an uncorrelated state. Furthermore, a peculiarity of the state in equation <ref> is the variation of the SB correlation with temperature. As T → 0, the joint state of the system and bath ρ = e^-β H/Z→ | gnd⟩⟨ gnd|, where | gnd⟩ is some ground state of the system (S) plus bath (B) Hamiltonian H_S +H_B, becomes uncorrelated, whereas for T ≠ 0, the joint state becomes correlated due to thermal fluctuations. The opposite happens for |ψ⟩_r. The total state in equation <ref> is a non-Gibbs state.
Next, we consider the initial state of the system to be |ψ⟩_S= √(α)|00⟩ + √(1-α)|11⟩; therefore, we can calculate the density matrix of the bath as:
ρ_B^ψ (0) = 1/Z_B⟨ψ| e^-β H |ψ⟩
= 1/Z_B[ α e^-βω_0e^-β H_1^++ (1-α) e^βω_0 e^-β H_1^-) ],
where H_1^± = H_B ∓ (B_1k± B_2k) and B_ik= g_ikb_k+ g_ik^⋆b_k^†, i=1,2. Now, we consider the time evolution operator U(t) =Te^-i ∫_0^t dτ H_I(τ), where T is the time-ordering operator and H_I(t) is the interaction Hamiltonian in the interaction picture. The time-evolved density matrix of the system can be calculated using the dephasing model above and is given by <cit.>
ρ_S(t) = Tr_B[U(t) ρ^T_SB(0) U(t)^†]= Tr_B [U(t) ρ_S ⊗ρ_B^ψ U(t)^† ]
= α |00⟩⟨ 00| + √(α(1-α))κ(t)|00⟩⟨ 11| +
√(α(1-α))κ^*(t) |11⟩⟨ 00| + (1-α) |11⟩⟨ 11|
where
κ(t) = [ (α e^-βω_0 e^2iζ(t)+(1-α) e^βω_0 e^-2iζ(t))/(α e^-βω_0 +(1-α) e^βω_0) ] exp(-4∑_k (|g_k|^2/ω_k^2)(1+cos(k⃗·L⃗))(1-cosω_kt) coth(βω_k/2))
and ζ(t)= 8 ∑_k (|g_k|^2/ω_k^2) sin(ω_k t) [1+cos (k⃗·L⃗)]. The term in square brackets of κ(t) captures the initial SB correlations, with a dependence on the system parameters, while the exponential is the standard decoherence function, which depends on the separation of the qubits in addition to the temperature and bath parameters. To simplify the analysis, we use α=1/2.
§ DYNAMICS OF QUANTUM CORRELATIONS
§.§ Entanglement
In this section, we study the time evolution of the entanglement of the two-qubit quantum channel represented by the aforementioned model via the negativity (𝒩). This measure is based on the positive partial transpose (PPT) criterion for separability
and is defined as<cit.>:
𝒩(ρ) := (|| ρ^T_A||_1-1)/2
where ρ^T_A is the partial transpose of ρ with respect to subsystem A. The trace norm of an operator Ô is given as
||Ô||_1=Tr|Ô|=Tr√(O^† O). For the two qubit channel given by the class of states |ψ⟩_S=√(α)|00⟩ + √(1-α)|11⟩, with 0≤α≤ 1, we can write entanglement negativity as
𝒩(ρ)= 2√(α(1-α)) √(cos^2(ζ(t)) + sin^2(ζ(t)) tanh^2(βω_0)) e^-γ_s(t).
The standard decoherence function γ_s(t) is given by
γ_s(t)= 4∑_k |g_k|^2 cos^2(k⃗·L⃗/2) (1-cosω_k t)/ω_k^2 coth(βω_k/2)
while the decoherence due to initial SB correlations are encoded in the function γ_ic(t):
γ_ic(t) =-1/2log [ cos^2(ζ) + sin^2(ζ) tanh^2(βω_0) ].
§.§ Quantum Discord
Quantum discord represents quantum correlations beyond entanglement. It is defined as the difference between total correlations and classical correlations in a given system. Let ρ^AB be the density operator for a bipartite system AB, then the total correlations are given by the mutual information I(ρ^AB):
I(ρ^AB) = S(ρ^A) + S(ρ^B)-S(ρ^AB)
where S(ρ)=-tr (ρ log ρ) is the von Neuman entropy of the density matrix ρ. In order to determine the classical correlations, we define one dimensional projectors P_k, so that the conditional density matrix after measurements on the subsystem B, can be written as
ρ_k =1/p_k (I_A ⊗ P_k) ρ^AB (I_A⊗ P_k), with p_k = tr((I_A ⊗ P_k) ρ^AB (I_A⊗ P_k)). Therefore, we write the entropy corresponding to this measurement as S(ρ^AB|P_k) = ∑_k p_k S(ρ_k). The mutual information after this measurement can be written as
I(ρ^AB|P_k)= S(ρ^A)-S(ρ^AB|P_k). Therefore, we write the classical correlations C(ρ^AB) present in the quantum system as the supremum over all von Neuman measurements P_k:
C(ρ^AB)= {P_k} sup I(ρ^AB|P_k).
Quantum discord Q(ρ^AB) is therefore given as the difference between mutual information I(ρ^AB) and the classical correlations C(ρ^AB)<cit.>:
Q(ρ^AB)= I(ρ^AB)-C(ρ^AB).
This expression is in general difficult to evaluate. However, for the initial states considered here it can be calculated exactly and is given by <cit.>
Q= min(Q_1, Q_2)
with Q_1=1 and
Q_2= ((1+|κ(t)|)/2) log_2((1+|κ(t)|)/2) + ((1-|κ(t)|)/2) log_2((1-|κ(t)|)/2) + 1.
Next, in order to analyze the behaviour of quantum correlations given by concurrence and quantum discord, we first define bath spectral density J(ω) as
J(ω) = ∑_k |g_1k+ g_2k|^2 δ(ω-ω_k).
The exact form of this function is very complicated and depends on dimensionality of the bath, however we can model it phenomenologically. We assume the form of g_k in ω-space as g(ω)=ηω/ω_c e^-ω^2/ω^2_c, where η is the intrinsic SB coupling
and ω_c is the cutoff frequency of the bath. Using this form of g(ω) and integrating over the solid angle in the equations for γ_s(t) and ζ(t),
we get
γ_s(t) = (8η/(ω_c^2 π^2 c^3)) ∫_0^∞ dω ω e^-ω^2/ω^2_c (1+ sin(ω s)/(ω s)) sin^2(ω t/2) coth(βω/2) .
ζ(t) = (4η/(π^2 c^3ω_c^2)) ∫_0^∞ dω ω e^-ω^2/ω_c^2 (1+ sin(ω s)/(ω s)) sin(ω t) .
Here s=L/c, with c the velocity of the bath modes and L the distance between the qubits of the channel. s defines a time scale due to the SB interaction mediated by the bath modes.
Furthermore, different time scales arise in our model: the cutoff frequency ω_c sets the relaxation time scale for the bath, while the energy ω_0 sets the relaxation time scale for the qubits. Using these time scales, we parameterize the above equations as ω→ω/ω_c, t →ω_c t, s→ω_c s, and measure temperature with respect to ω_c: β→βω_c. For notational convenience, we take ω_c=1 without loss of generality.
From the expression in equation (<ref>), we see that for cos(k⃗·L⃗)=-1, both γ_s(t) and ζ(t) vanish, thus resulting in no decay of negativity. Discord, however, depends on |κ(t)|, which in this case turns out to be |κ(t)|= (1/√(2)) √(cosh(2βω_0))/cosh(βω_0). Therefore, even though the initial correlations do not have any effect on the entanglement decay, they strongly affect the discord. Next, in figure <ref>, we plot the negativity 𝒩 and discord Q with respect to the rescaled time t for βω_c ∼ 1 and βω_c << 1, which correspond to low and high temperatures. In the low temperature regime βω_c ∼ 1 (fig. <ref>(a)), we observe that both negativity 𝒩(t) and discord Q(t) show non-monotonic behaviour, decaying initially but finally saturating to a finite non-zero value. However, at high temperature βω_c <<1 (fig. <ref>(b)), we see that quantum discord initially decays abruptly in comparison to negativity, with a saturation to a non-zero value in the long time limit. We can understand this behavior in an intuitive way as follows. The number of modes that would cause fast decoherence is suppressed by the initial SB correlations. However, at high temperatures, thermal fluctuations increase the number of modes being scattered, causing an abrupt decay of correlations; the competition of these thermal fluctuations with the initial SB correlations results in a smaller finite value in comparison to the low temperature case. This can be further verified from the comparison to the Markovian and uncorrelated initial SB states plotted in figure <ref>(c) and <ref>(d) for the low and high temperature regimes, respectively. We see that initial SB correlations help to maintain coherence in the system for long times.
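The following minimal numerical sketch (in units of ω_c = 1 and for α = 1/2) evaluates γ_s(t), ζ(t), and the resulting negativity and discord from the expressions above; the values of η, ω_0, s and β are illustrative placeholders and the integrals are cut off by the Gaussian factor.

import numpy as np
from scipy.integrate import quad

# Sketch of the dephasing functions and correlation measures discussed above.
eta, omega0, s, beta = 0.1, 1.0, 1.0, 1.0     # illustrative parameters, omega_c = 1

def gamma_s(t):
    f = lambda w: w * np.exp(-w**2) * (1 + np.sinc(w * s / np.pi)) \
        * np.sin(w * t / 2)**2 / np.tanh(beta * w / 2)
    return 8 * eta * quad(f, 1e-8, np.inf)[0]

def zeta(t):
    f = lambda w: w * np.exp(-w**2) * (1 + np.sinc(w * s / np.pi)) * np.sin(w * t)
    return 4 * eta * quad(f, 1e-8, np.inf)[0]

def kappa_abs(t):
    # |kappa(t)| for alpha = 1/2, following the expressions for N(t) and gamma_ic(t) above
    z = zeta(t)
    return np.sqrt(np.cos(z)**2 + np.sin(z)**2 * np.tanh(beta * omega0)**2) * np.exp(-gamma_s(t))

def negativity(t):
    return kappa_abs(t)               # 2*sqrt(alpha(1-alpha)) = 1 for alpha = 1/2

def discord(t):
    k = kappa_abs(t)
    p, m = (1 + k) / 2, (1 - k) / 2
    h = -(p * np.log2(p) + m * np.log2(m)) if 0 < k < 1 else 0.0
    return min(1.0, 1 - h)            # Q = min(Q_1, Q_2) with Q_1 = 1

for t in [0.5, 2.0, 10.0]:
    print(f"t = {t:5.1f}: N = {negativity(t):.3f}, Q = {discord(t):.3f}")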
§ QUANTUM TELEPORTATION IN PRESENCE OF DEPHASING
§.§ Standard Teleportation Protocol
In the standard quantum teleportation protocol<cit.>, an unknown quantum state |ψ_in⟩ is teleported from Alice to Bob, who share an entangled state that acts as a quantum channel between them. The protocol can be formulated in the density matrix formalism as follows. Let ρ_in be the density matrix of the unknown state to be teleported, ρ_AB the channel density matrix shared by Alice and Bob, and ρ_B the output density matrix, i.e. the density matrix of the teleported state recovered by Bob. The total initial state (ρ_in and the channel) is given by
ρ^T = ρ_in⊗ρ_AB.
As the first step of the protocol, Alice performs projective measurements on her qubits, namely the input state and her portion of the entangled channel. Let {Π_i} be the set of projection operators used by Alice. Thus after the projective measurements, the state of the total system changes to
ρ_i^T = Π_i ρ^T Π_i/P_i
where P_i= Tr[Π_i ρ^T Π_i] is the probability of occurrence of the specific density matrix ρ_i^T corresponding to the Π_i-projection. As a next step, Alice communicates these measurement results to Bob via a classical channel. With this knowledge Bob recovers the teleported state ρ_B by applying suitable unitary operators to his density matrix:
ρ_i^B = U_i Tr_A [ρ^T_i] U^†_i/Q_i= U_i Tr_A [Π_i ρ^T Π_i] U^†_i/Q_i.
Here Tr_A means trace over the Alice's qubits. The unitary operators U_i, which Bob must apply to complete the protocol, is dependent not only on Alice's measurement outcome but also on the quantum channel that was employed. As a specific example, which we consider in this work, is the teleportation of a single qubit |ψ⟩ = cosθ/2|0⟩ + sinθ/2e^iϕ|1⟩ ( θ and ϕ are the polar and azimuthal angles) through a noisy channel of two qubits shared by Alice and bob. The {Π_i} projection operators are specified by Bell states while the unitary operators U_i are given by {I, σ^x, σ^y, σ^z} depending on the measurements due to Alice. Here I is the identity operator while σ^i (i=x, y, z) are the Pauli spin operators.
The performance of a given teleportation protocol can be represented by Fidelity F. It represents the overlap of the initial state with the final output state:
F= Tr [ρ_inρ^B] = ⟨ψ_in|ρ^B |ψ_in⟩.
The fidelity is bounded as 0≤ F≤ 1, where F=0 (no teleportation ) means initial and final states are orthogonal to each other while F=1 (perfect teleportation) means initial and final states are same. The classical bound on fidelity is F=2/3 which is simulated by classical channel.
Since the state to be teleported is typically unknown, it is more practical to determine the average fidelity provided by
F_av= 1/4π∫_0^π dθ∫_0^2π dϕ F(θ,ϕ) sinθ
where 4π is the solid angle.
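The protocol described above is straightforward to simulate with density matrices. The sketch below teleports a qubit through a dephased channel with off-diagonal weight κ (κ = 1 is a perfect Bell pair) and averages the fidelity over the Bloch sphere; it follows the standard bookkeeping rather than the specific normalization of the equations below, and all parameter values are illustrative.

import numpy as np

# Minimal density-matrix simulation of single-qubit teleportation through a
# dephased channel. Qubit ordering: (input, Alice's channel qubit, Bob's qubit).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bell_projectors():
    v00 = np.array([1, 0, 0, 0.]); v11 = np.array([0, 0, 0, 1.])
    v01 = np.array([0, 1, 0, 0.]); v10 = np.array([0, 0, 1, 0.])
    states = [(v00 + v11) / np.sqrt(2), (v00 - v11) / np.sqrt(2),
              (v01 + v10) / np.sqrt(2), (v01 - v10) / np.sqrt(2)]
    return [np.outer(v, v) for v in states]             # Phi+, Phi-, Psi+, Psi-

def teleport_fidelity(theta, phi, kappa):
    psi_in = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    rho_in = np.outer(psi_in, psi_in.conj())
    # dephased channel: (|00><00| + |11><11|)/2 + (kappa/2)(|00><11| + |11><00|)
    rho_AB = np.zeros((4, 4), dtype=complex)
    rho_AB[0, 0] = rho_AB[3, 3] = 0.5
    rho_AB[0, 3] = rho_AB[3, 0] = kappa / 2
    rho_T = np.kron(rho_in, rho_AB)
    corrections = [I2, sz, sx, sx @ sz]                  # U_i for each Bell outcome
    F = 0.0
    for Pi, Ui in zip(bell_projectors(), corrections):
        M = np.kron(Pi, I2).astype(complex)              # Alice measures the first two qubits
        rho_proj = M @ rho_T @ M
        rho_B = rho_proj.reshape(4, 2, 4, 2).trace(axis1=0, axis2=2)   # trace out Alice
        Qi = np.real(np.trace(rho_B))
        rho_out = Ui @ (rho_B / Qi) @ Ui.conj().T
        F += Qi * np.real(psi_in.conj() @ rho_out @ psi_in)
    return F

# Average over the Bloch sphere by uniform sampling; expect F_av = (2 + kappa)/3.
rng = np.random.default_rng(0)
for kappa in [1.0, 0.5, 0.0]:
    thetas = np.arccos(rng.uniform(-1, 1, 1000)); phis = rng.uniform(0, 2 * np.pi, 1000)
    F_av = np.mean([teleport_fidelity(t, p, kappa) for t, p in zip(thetas, phis)])
    print(f"kappa = {kappa:.1f}: F_av = {F_av:.3f}")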
§.§ Teleportation in presence of initial SB correlations
Here we consider the following cases to understand the influence of dephasing in presence of initial SB correlations on quantum teleportation. As a first case, we consider the channel shared by Alice and Bob coupled to the bath. We consider the teleportation of a single qubit state ρ_in=|ψ_in⟩⟨ψ_in |, with |ψ⟩ = cosθ/2|0⟩ + sinθ/2e^iϕ|1⟩. Also, we assume a two qubit channel (system) shared by Alice and Bob given by
|ψ⟩_S= √(α)|00⟩ + √(1-α)|11⟩ with 0≤α≤ 1. The first qubit is in the possession of Alice while Bob holds the second qubit. The channel is coupled to the bath, and the joint SB state evolves according to the dephasing model given above. Due to the initial SB correlations, the state of the bath depends in a non-trivial way on the parameters of the channel, and accordingly the density matrix of the channel can be written as the ρ_S(t) given in equation <ref>.
Using this channel ρ_S(t), Alice can teleport a given state |ψ_in ⟩ faithfully to Bob. To achieve this, Alice performs Bell measurements on her qubits using projection operators defined by {Π_i} where
Π_1 = |Φ^+⟩⟨Φ^+|, Π_2 = |Φ^-⟩⟨Φ^-|, Π_3 = |Ψ^+⟩⟨Ψ^+| and Π_4 = |Ψ^-⟩⟨Ψ^-|. The Bell states are defined as |Φ^±⟩=1/√(2)(|00⟩±|11⟩) and |Ψ^±⟩=1/√(2)(|01⟩±|10⟩).
Using the teleportation protocol given above, the state of the Bob (without applying unitary operation) corresponding to projection Π_1 is given by
ρ^B_1 = 1/4Q_1[ αcos^2θ/2 |0⟩⟨ 0| + 1/2√(α(1-α))sinθκ(t) |0⟩⟨ 1|
+ 1/2√(α(1-α))sinθκ^*(t) |1⟩⟨ 0| + (1-α) sin^2θ/2 |1⟩⟨ 1|] ]
where Q_1 = (1/2)(1-αsin^2(θ/2)). Based on the measurement outcome of Alice, Bob now applies the unitary transformation U_1=I on his qubit to get the output state of the teleportation:
ρ^B_out_1 = U_1ρ^B_1 U_1^† =ρ^B_1.
Next, for the projective measurement by Alice using Π_2, the state of Bob is given by
ρ^B_2 = 1/4Q_2[ αcos^2θ/2 |0⟩⟨ 0| - 1/2√(α(1-α))sinθκ(t) |0⟩⟨ 1|
- 1/2√(α(1-α))sinθκ^*(t) |1⟩⟨ 0| + (1-α) sin^2θ/2 |1⟩⟨ 1|] ].
with Q_2=Q_1. Bob now applies unitary transformation U_2= σ^z to get the teleported state
ρ^B_out_2 = U_2ρ^B_2 U_2^†= σ^z ρ^B_2 σ^z.
Along similar lines, Bob applies the unitary transformations U_3= σ^x and U_4= σ^xσ^z, corresponding to the projective measurements of Alice with Π_3 and Π_4, to get the teleported states ρ^B_out_3 and ρ^B_out_4, respectively, with probabilities Q_3 and Q_4. Since the different ρ^B_out_i, i=1,2,3,4,
occur in general with different probabilities Q_i, we take the weighted average F̃=∑_i Q_i F_i. Since F̃ depends on the input state |ψ_in⟩, assuming a uniform distribution of all these states we write the efficiency of the protocol in terms of the average fidelity F_av (for α =1/2) as:
F_av = 1/4π∫_0^π dθ∫_0^2π dϕF̅sinθ
= 2/3 + (κ(t) + κ^*(t))/6 = 2/3 + (1/3)cos(2ζ(t)) e^-γ(t).
Since ζ(t) and γ(t) depend only on the parameters of the bath and the separation L of the qubits, this result shows that the average fidelity F_av is independent of the initial SB correlations. Next, we plot in figure <ref> the time dependence of F_av for different values of s in the high and low temperature cases. In the low temperature regime, figure <ref>(a), we see that F_av has strong non-Markovian behaviour, decaying first to the classical optimal value and then saturating at a value higher than the classical value of 2/3. Also, for large values of s, i.e. for larger separation of the qubits of the channel, we see that F_av is always greater than 2/3. However, in the high temperature limit βω_c<<1, figure <ref>(b), F_av saturates at the classical value for almost all values of s. As we increase the distance between the qubits of the channel, fewer modes interact, which results in useful quantum correlations at βω_c ∼ 1 (figure <ref>(c)). In the other case, βω_c <<1, we see that thermal fluctuations play an important role in destroying long range correlations. Since F_av does not depend on the initial SB correlations, there is nothing to compete with the thermal fluctuations. Thus, due to the small correlations present in the βω_c <<1 case (figure <ref>(d)), F_av saturates at the classical value.
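Taking γ(t) to be the γ_s(t) of Sec. III, the average fidelity of the dephased channel can be evaluated directly and compared with the classical bound; the parameters below are again illustrative placeholders.

import numpy as np
from scipy.integrate import quad

# Sketch of F_av(t) = 2/3 + (1/3) cos(2 zeta(t)) exp(-gamma(t)) for the dephased channel.
eta, s, beta = 0.1, 1.0, 1.0                        # illustrative, omega_c = 1
spec = lambda w: w * np.exp(-w**2) * (1 + np.sinc(w * s / np.pi))
gamma = lambda t: 8 * eta * quad(lambda w: spec(w) * np.sin(w * t / 2)**2
                                 / np.tanh(beta * w / 2), 1e-8, np.inf)[0]
zeta  = lambda t: 4 * eta * quad(lambda w: spec(w) * np.sin(w * t), 1e-8, np.inf)[0]

for t in [0.5, 2.0, 10.0]:
    F = 2/3 + np.cos(2 * zeta(t)) * np.exp(-gamma(t)) / 3
    print(f"t = {t:5.1f}: F_av = {F:.4f}  (classical bound = {2/3:.4f})")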
As the second case, we consider the situation where the qubits with Alice are subjected to decoherence in the presence of initial SB correlations. It can be shown that the fidelity is again independent of the initial SB correlations and is given by the same equation <ref>.
In the third case, we consider the qubit that is being teleported to be subjected to decoherence. In this case, the average fidelity F_av does depend on the initial SB correlations and is given by
F_av = 2/3 + ((βω_0/2-1)cos(ζ_0(t))/(6 sinh(βω_0/2))) e^-γ(t) ,
where ζ_0(t) = 4∑_k (|g_k|^2/ω_k^2) sin(ω_kt). From this result, we observe that at the critical temperature βω_0/2∼ 1, we get the classical bound of the fidelity, F_av =2/3. For βω_0 <<1, we have F_av = 2/3 - (cos(ζ_0(t))/(3βω_0)) e^-γ(t) < 2/3. Therefore, at high temperatures, where thermal fluctuations destroy coherence, we get F_av <2/3. In the case of low temperatures βω_0 >>1, we have F_av=2/3+(βω_0/6) e^-βω_0/2cos(ζ_0(t)) e^-γ(t), which is always greater than 2/3 for any finite temperature.
§ CONCLUSION
In conclusion, we have studied the role of initial SB correlations on the dynamics of quantum correlations given by entanglement and discord in dephasing model with distance dependent interactions. The joint state of SB is constructed via projected measurements on an initially thermal equilibrium state of system and bath. In the low temperature regime βω_c ∼ 1, we have shown that negativity and discord have non-monotonic behavior due
to underlying non-Markovian effects present in the dynamics. Due to the presence of initial SB correlations, the negativity and discord saturate to finite non-zero values.
Next, in order to investigate the usefulness of these saturated values of quantum correlations in the long time limit, we studied the standard teleportation protocol. In the case where the channel is coupled to the bath, we have shown that the initial SB correlations play no role in the average fidelity of teleportation. Moreover, we have shown that the distance between the qubits of the channel affects the dynamics of the average fidelity. In the low temperature case, the average fidelity is always greater than the classical value, while in the high temperature case it saturates to the classical value.
The same results holds true if the qubits with Alice undergo dephasing dynamics. However, if the qubit that is being teleported is subjected to dephasing, the average fidelity strongly depends on the initial SB correlations. At high temperatures in this case, it is shown that due to thermal fluctuations, the average fidelity is always less than classical value while at low temperatures it is saturates to classical value in the long time limit. Also, there exist a critical temperature βω_0/2∼ 1, for which F_av→2/3.
|
http://arxiv.org/abs/2307.02671v1
|
20230705220428
|
AI4OPT: AI Institute for Advances in Optimization
|
[
"Pascal Van Hentenryck",
"Kevin Dalmeijer"
] |
math.OC
|
[
"math.OC",
"cs.AI"
] |
AI4OPT: AI Institute for Advances in Optimization
Pascal Van Hentenryck
Kevin Dalmeijer
August 1, 2023
=================================================
This article is a short introduction to AI4OPT, the NSF AI Institute for Advances in Optimization.
AI4OPT fuses AI and Optimization, inspired by end-use cases in supply chains, energy systems, chip design and manufacturing, and sustainable food systems. AI4OPT also applies its “teaching the teachers” philosophy to provide longitudinal educational pathways in AI for engineering.
Keywords:
optimization, machine learning,
supply chains, energy systems, chip design and manufacturing, resilience, sustainability.
§ INTRODUCTION
The mission of the NSF Artificial Intelligence (AI) Institute for
Advances in Optimization (AI4OPT) is
to revolutionize decision making at massive scales by fusing AI and mathematical optimization and delivering scientific breakthroughs that the two fields cannot achieve
independently.
This Institute pursues this objective by integrating the model-driven
paradigm typically followed in Operations Research with data-driven
methodologies coming from AI. The research in AI4OPT is
use-inspired, addressing fundamental societal and technological
challenges of our times. They include
* How to design agile, sustainable, resilient, and equitable supply chains?
* How to operate energy systems powered by distributed renewable energy resources?
* How to deliver a step change in chip design and manufacturing, and manufacturing as a whole?
* How to create sustainable eco-systems within the food-water-energy
nexus?
AI4OPT focuses on AI for Engineering, which raises deep
scientific challenges in terms of reliability, robustness, and
scalability. Indeed, AI4OPT is driven by high-stakes applications
that feature physical, engineering, and business constraints, and
complex objectives that must balance efficiency, resilience,
sustainability, and equity. Moreover, the underlying optimization
problems at the core of the grand challenges are of very large scale,
many of which are beyond the scope of existing technologies. To
address these, AI4OPT is organized around methodology thrusts that
focus on specific challenges: they include a new generation of
data-driven optimization solvers, decision making under uncertainty,
combinatorial and reinforcement learning, end-to-end optimization, and
decentralized learning and optimization. In addition, a transversal
thrust on Ethical AI ensures that ethics is included in the design of
every fundamental and use-inspired project, not as an afterthought.
The complementarity of the end-use cases and the methodology thrusts
creates a virtuous cycle of innovation, both in foundational research
and industrial impact.
The research mission of AI4OPT is complemented by its educational vision, which is
to create longitudinal pathways for AI in engineering, from high-school to graduate education, using a “teach the teachers” philosophy to maximize impact.
The pathways start in middle and high-schools (through summer camps and engineering practices), move to undergraduate education
through the Faculty Training Program, and the creation of graduate
programs. The “teaching the teachers” philosophy is pervasive
across the Institute: its goal is to empower teachers at every
education level to create programs in their own institutions, e.g.,
minors and majors in artificial intelligence.
AI4OPT also includes mentorship programs to help students take leadership
roles and reinforce their understanding of the material over time.
The Institute has a strong focus on historically black high schools,
Historically Black Colleges and Universities (HBCUs), and Minority
Serving Institutions (MSI), to develop talent and increase the
diversity of the AI workforce.
AI4OPT is led by the Georgia Institute of Technology in collaboration
with universities in California (UC Berkeley, USC, UC San Diego),
Texas (UT Arlington), and Georgia (Clark Atlanta University). The
Institute is creating a vibrant nexus in AI and optimization, bringing
together academic institutions, industrial partners, international
collaborators, and educators. In Atlanta, AI4OPT is located on the
12th floor of the CODA building in Midtown, providing a prime space for
faculty, research scientists, and students that encourages knowledge
cross-fertilization.
The Industrial Partner Program (IPP) of AI4OPT features novel
internship programs to facilitate research collaborations between
academia and industry. It assembles some of the most innovative
companies in supply chains, manufacturing, and energy systems, with
the goal of maximizing the impact emerging from the fusion of AI and
optimization.
The rest of this article is organized as follows. Section
<ref> illustrates how the Institute contributes
scientific breakthroughs that the two fields cannot achieve
independently. Section <ref> describes some of the
societal and technological challenges that are driving
AI4OPT research. Section <ref> briefly reviews the
methodology thrusts driven by the end-use cases. Sections
<ref> and <ref> outline some workforce
development activities and how AI4OPT acts as a nexus at the
intersection of AI and optimization. It is impossible to do justice
to all the activities of the Institute in a short article, but the
hope is that this presentation will encourage readers to learn more
about AI4OPT.
§ OPTIMIZATION PROXIES
At their core, optimization models are used in decision-making applications to map problem inputs into optimal solutions. Optimization solvers have seen dramatic
progress over the last decades, producing optimal solutions to many
industrial problems. For instance, optimization models quite literally keep the lights on by committing generators and dispatching electricity in real time
every five minutes. Optimization models run end-to-end supply chains, which involve aspects such as the scheduling of manufacturing plants, load consolidation for middle-mile logistics,
and the design of e-commerce
networks, to name a few. Yet recent developments are
challenging even the best solvers:
optimization models have grown even larger, are expected to capture the realities of an increasingly uncertain and volatile world, and are blurring the distinction between planning and operations. As a result,
optimization solvers have become too slow in many
contexts: they include real-time applications, large-scale Monte-Carlo
simulations that are based on optimization, and environments where
humans interact with optimization technology.
In those circumstances, it is natural to explore whether machine
learning can replace optimization, moving most of the computational
burden offline. A machine-learning model can then learn the
input-output mapping of the optimization model, producing a first
approximation to the concept of an optimization proxy.
The challenge,
however, comes from applying this idea to AI for
Engineering. Indeed, many of the end-use cases of the Institute
feature optimization problems with hard physical, engineering, and
business constraints. For instance, in an electrical grid, the load
(demand) and the generation (supply) must be equal at all times. In
supply chains, shipments must fit within the vehicle capacity.
In addition, optimization proxies will be deployed in
high-stakes applications, which means that they must cater to a wide variety of instances, deliver high-quality solutions with performance guarantees, learn
instances with millions of input parameters, and make hundreds of thousands of predictions.
To address these considerations, the End-to-End Optimization
thrust explores the science and engineering of optimization
proxies. Its research led to novel architectures including the one
depicted in Figure <ref>. This architecture postulates an
optimization proxy as the composition of a machine-learning layer that
produces a high-quality approximation of the optimization model,
followed by a feasibility layer that repairs the prediction to deliver
a feasible solution. This Learning and Repair architecture can be
trained end-to-end, backpropagating the loss function through the
feasibility layers.
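As a minimal, hypothetical sketch of this idea (not AI4OPT's actual implementation), the following snippet pairs an MLP prediction layer with a differentiable repair layer that enforces a toy power-balance constraint, so the two can be trained jointly:

```python
import torch
import torch.nn as nn

class ProxyWithRepair(nn.Module):
    """Sketch of a Learning-and-Repair proxy: an MLP predicts a dispatch,
    and a differentiable feasibility layer rescales it so that total
    generation exactly matches the load (a simple power-balance repair)."""
    def __init__(self, n_inputs, n_generators):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, n_generators), nn.Softplus(),   # nonnegative dispatch
        )

    def forward(self, features, load):
        raw = self.net(features)                            # learning layer
        return raw * load / raw.sum(dim=-1, keepdim=True)   # repair layer

proxy = ProxyWithRepair(n_inputs=16, n_generators=8)
features = torch.randn(4, 16)                # instance features, e.g., forecasts
load = torch.full((4, 1), 100.0)             # total demand per instance
dispatch = proxy(features, load)
print(dispatch.sum(dim=-1))                  # ~100 for every instance
```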
This fusion of AI and optimization can deliver breakthroughs that cannot
be achieved by the two fields independently. One such application is
real-time risk assessment, which is becoming increasingly pervasive in
energy systems and supply chains, including for some of the key
partners of . As mentioned earlier, transmission
system operators typically solve a market-clearing optimization every
five minutes to balance load and generation. Given the increased
volatility in net load due to intermittent renewable resources, grid
operators are interested in real-time risk assessment tools,
like the one described in Figure <ref>. Such tools run a
large number of Monte-Carlo simulations, using scenarios from
probabilistic forecasts, to quantify the system-wide risk. A single
simulation may take up to 45 minutes, given the computational
complexity and the sheer number of optimization problems, making it
impractical to assess risk in real time. Work by the Institute has
shown that optimization proxies are a transformative technology for
this application: they make it possible to run these simulations in a
few milliseconds with relative errors below 1%, giving rise to
potential new tools to manage risk in real time
<cit.>.
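Schematically, and with hypothetical interfaces, such a risk-assessment loop amounts to a single batched forward pass of a trained proxy over the Monte-Carlo scenarios instead of thousands of solver calls (reusing the ProxyWithRepair sketch above; the operating limit is an illustrative number):

```python
import torch

def assess_risk(proxy, scenarios, load, threshold):
    """Push many forecast scenarios through a trained proxy and report the
    fraction of scenarios whose most-loaded unit exceeds an operating limit."""
    with torch.no_grad():
        dispatch = proxy(scenarios, load)            # one batched forward pass
        worst_unit = dispatch.max(dim=-1).values
        return (worst_unit > threshold).float().mean().item()

scenarios = torch.randn(10_000, 16)                  # samples from a probabilistic forecast
load = torch.full((10_000, 1), 100.0)
risk = assess_risk(proxy, scenarios, load, threshold=30.0)
print(f"estimated violation probability: {risk:.3f}")
```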
§ END-USE CASES
As mentioned already, at AI4OPT, challenges from end-use cases inspire foundational research, which then delivers innovations to address them. Here is a brief description of these end-use cases.
Supply Chains
Supply-chain management used to be an arcane topic, discussed by a few
and invisible to the general public. This has changed after the
pandemic: the public is now aware of a topic that has become
top-of-mind in many corporate boards. Supply chains have become
larger, and e-commerce has proliferated, imposing significant
environmental costs to meet new customer expectations. At the same time,
many customers and suppliers, especially in rural regions, face
increasing difficulties in procuring or delivering specific
products. What is needed is a paradigm change, a new vision for
supply chains that complements efficiency with resilience,
sustainability, and equity goals. Research in supply chains at
is centered around end-to-end supply chains, with
scalability, resilience, sustainability, and equity as core
challenges. has assembled a consortium of partners that cover
(almost) all aspects of supply chains. It leverages novel forecasting
methods, optimization proxies, decision making under uncertainty, and
automation to meet these challenges.
Energy Systems
The challenge for energy systems is clear: how to reinvent the planning and operations of a grid powered by renewable energy sources and
storage. Energy systems are transitioning from the century-old
“generation follows the load” organization of the grid to a paradigm
centered on risk assessment and risk management. This end-use case
helps reinvent energy systems by pursuing four overarching
themes: (1) probabilistic forecasting to quantify uncertainty; (2)
stochastic and risk-aware optimization to capture this uncertainty in
decision processes; (3) optimization proxies to perform real-time risk
assessment and risk-aware optimization; and (4) decentralized
optimization to address the massive proliferation of distributed
energy resources.
Chip Design and Manufacturing
Each generation of chips is becoming more expensive to design,
requiring numerous cycles between expert designers and simulators. It
is no longer possible or desirable to separate the various phases of
the design, e.g., circuit synthesis, placement, and routing. What is
needed is a new generation of tools that help engineers design
circuits more holistically. Machine learning (including RL and inverse
learning) has been shown to have significant potential in this area; it is the goal of this end-use case to explore its role in circuit
optimization. Chip manufacturing has also become increasingly
complex and subject to uncertainty in supply, demand, and the
complexity of the bill of materials.
In conjunction with supply chains, this end-use case also aims at transforming the optimization of the chip manufacturing process.
Sustainable Systems
The food-energy-water nexus is identified as one of the key grand
challenges of the 21st century and AI has demonstrated early potential
to address complex problems in this space. This end-use case conducts
three interconnected projects on biogas, water, and food to reduce greenhouse emissions and boost food production.
§ METHODOLOGY THRUSTS
The methodology thrusts carry out foundational AI research inspired by the end-use cases. Here is a brief description of the methodology thrusts of AI4OPT.
End-to-End Optimization
The End-to-End Optimization thrust primarily focuses on the science and engineering of optimization proxies that were described
in Section <ref>. Recent contributions include the
concepts of self-supervised primal-dual learning
<cit.>, compact learning
<cit.>, and End-To-End
Learning and Repair <cit.>. The thrust
draws inspiration from the end-use cases in energy systems and supply
chains, and also explores topics in decision-focused learning, learning to optimize, verification, explanation, and
formal guarantees.
New Generation Solvers
The Solvers thrust works on a new generation of highly
tunable optimization solvers that use machine learning and historical
data to dramatically improve performance in settings where an
optimization model is used repeatedly. Recent results apply the
Learning to Optimize paradigm to mixed-integer programming
<cit.>, mixed-integer
nonlinear programming <cit.>, and
AI planners <cit.>.
The end-use cases contribute problem instances to benchmark solver performance.
Decision Making Under Uncertainty
The energy and supply-chain end-use cases clearly indicate the need
for advances in decision making under uncertainty.
This includes probabilistic forecasting, uncertainty
quantification, scenario generation, and detection of rare events in
presence of spatial-temporal correlations
<cit.>. Of particular
interest is the fundamental and applied research on conformal
predictions. The thrust also explores solution techniques for specific
classes of multi-stage stochastic optimization problems
<cit.> and new Bayesian
risk-sensitive and distributionally-robust optimization models
<cit.>.
Reinforcement Learning
The RL thrust focuses on the end-use cases of the Institute, which are
much larger and more complex than environments in which RL has been
successful so far. It contributes foundational advances to deep RL to
handle such complex environments
<cit.>, and expands
RL research to include societal and ethical considerations.
The thrust will be increasingly focused on offline RL
<cit.> to make the technology
safer and more amenable to industrial use.
Combinatorial Learning
This thrust studies machine learning in the context of combinatorial
and highly constrained applications with the goal to improve
generalization and interpretability and reduce errors. Recent results
include highly-specific strong convex models with structured sparsity
<cit.>,
pairwise-based optimization algorithms for counteracting learning bias
<cit.>, and meta-algorithms
to automatically select the best solver
<cit.>.
Distributed and Multi-Agent Learning and Optimization
This thrust explores decentralized solutions to manage a large number
of agents, motivated by applications in energy systems and automated
warehouses. Recent results focus on AI-based decentralized path
planning and execution <cit.>, on how
agents can help each other learn through communication
<cit.>, and
distributed learning algorithms that are robust against communication
imperfections <cit.>.
Ethical AI
The Ethical AI thrust is transversal: it draws from, and informs,
every thrust and end-use case in the Institute. It leverages the
fusion of optimization and machine learning to ensure ethical and
socially conscious design of large scale deployments. Projects
include creating new theoretical foundations for ethics in practice
<cit.>, including fairness into supply
chains and energy networks <cit.>, technological
progress for high-impact policy changes
<cit.>, and the
incorporation of IEEE Well-being Metrics into the design and
deployment of AI and optimization research.
§ WORKFORCE DEVELOPMENT
The education and workforce development initiatives of AI4OPT are
presented in detail by
<cit.> and are only briefly mentioned
here. Perhaps the most distinctive feature of these programs is the
“teaching the teachers” philosophy that permeates the
initiatives.
AI4OPT is reaching middle and high-school students through the Seth Bonder summer camps (https://www.ai4opt.org/seth-bonder-camp), which are delivered not only to students but also to high-school teachers.
The camps are organized at Georgia Tech, at UC Berkeley, and online (in collaboration with Kids Teach Tech), and attract a diverse range of participants, with a large proportion of minority students and young women.
Students who successfully complete the camp are invited to become mentors in the following year.
The Faculty Training Program (FTP, https://www.ai4opt.org/undergraduate-education) provides faculty members from HBCUs and MSIs with courses in AI, data science, and course design to create minors and majors in AI at their own institutions.
This three-year program includes a yearly 3-4 week visit to Georgia Tech and online courses throughout the year.
The program started in June 2022, and multiple FTP participants are already creating minors and majors, and are working with AI4OPT to expand their AI offerings.
§ AI4OPT AS A NEXUS
AI4OPT acts as a nexus for AI and optimization with applications in
supply chains, energy systems, manufacturing, and sustainability. The Institute nexus is organized
around its research and educational programs, its Industrial Partner Program (IPP), its national and international collaborations, and its outreach activities. In outreach, AI4OPT is pursuing additional partnerships, e.g., with Jackson State University in Mississippi and Huston-Tillotson University in Austin, Texas. The IPP continues to grow and covers the entire spectrum of activities in end-to-end supply chains and the planning and operations of electrical power systems. AI4OPT features novel longitudinal internships that are piloted by the Institute at Georgia Tech, and strong collaborations with DOE national laboratories, Independent System Operators (ISOs), and peer international institutions. A particularly exciting development is the collaboration with the AI Institute ICICLE around decentralized collaborative multimodal food supply chains with a focus on tribal communities.
|
http://arxiv.org/abs/2307.01362v1
|
20230703213340
|
Direct Superpoints Matching for Fast and Robust Point Cloud Registration
|
[
"Aniket Gupta",
"Yiming Xie",
"Hanumant Singh",
"Huaizu Jiang"
] |
cs.CV
|
[
"cs.CV"
] |
Direct Superpoints Matching for Fast and Robust Point Cloud Registration
Aniket Gupta, Yiming Xie, Hanumant Singh, Huaizu Jiang
August 1, 2023
=========================================================================
Although deep neural networks endow the downsampled superpoints with discriminative feature representations, directly matching them is usually not used alone in state-of-the-art methods, mainly for two reasons.
First, the correspondences are inevitably noisy, so RANSAC-like refinement is usually adopted.
Such ad hoc postprocessing, however, is slow and not differentiable, and thus cannot be jointly optimized with feature learning.
Second, superpoints are sparse and thus more RANSAC iterations are needed.
Existing approaches use the coarse-to-fine strategy to propagate the superpoints correspondences to the point level, but point-level features are not discriminative enough, which further necessitates the postprocessing refinement.
In this paper, we present a simple yet effective approach that extracts correspondences by directly matching superpoints using a global softmax layer in an end-to-end manner; these correspondences are then used to determine the rigid transformation between the source and target point clouds.
Compared with methods that directly predict corresponding points, by leveraging the rich information from the superpoints matchings, we can obtain a more accurate estimation of the transformation and effectively filter out outliers without any postprocessing refinement.
As a result, our approach is not only fast, but also achieves state-of-the-art results on the challenging ModelNet and 3DMatch benchmarks.
Our code and model weights will be publicly released.
§ INTRODUCTION
Point cloud registration refers to the task of aligning two partially overlapping point clouds into a shared coordinate system. In this paper, we tackle the problem of rigid registration where the goal is to determine the optimal transformation matrix, including rotation and translation, from one point cloud (source) to the other (target).
It has attracted a lot of research interest due to its broad applications in SLAM (Simultaneous Localization and Mapping) <cit.>, autonomous driving <cit.>, 3D reconstruction <cit.>, etc.
A prevailing paradigm to solve the registration task is to leverage the correspondences of superpoints across two point clouds, which can be obtained using separate keypoint detectors <cit.> to capture salient and distinctive points or regions within a point cloud.
With the rapid advancement in representation learning in point clouds using deep neural networks <cit.>, the keypoint-free approaches have gained significant attention, where the point cloud is downsampled into superpoints.
Although they may not be designed to capture the salient keypoints in the point cloud, the learned representations endow the superpoints with discriminative power so they can be matched across the source and target point clouds.
The transformation matrix can then be obtained based on the superpoints correspondences using the Kabsch-Umeyama algorithm <cit.>.
Direct superpoint matching is typically not used by itself in existing approaches for point cloud registration for two reasons.
First, the correspondences of superpoints inevitably contain errors, either because the point clouds contain noisy sensory data or because some of the correspondences are simply incorrect.
As a result, postprocessing refinement is usually needed, for instance, using RANSAC to prune out the outliers <cit.>.
However, RANSAC is inherently slow and not differentiable, and thus cannot be integrated into the training step.
Second, superpoints are essentially sparse, particularly because of downsampling in neural networks.
Therefore, more RANSAC iterations are needed as the number of inliers (i.e., correct correspondences) is small, which imposes a significant computation burden.
To this end, the coarse-to-fine registration scheme is usually adopted in state-of-the-art approaches <cit.>, where superpoints correspondences serve as coarse correspondences only.
They are then propagated to the point level, forming finer-level correspondences.
Since point-wise features only capture local information, they lack enough discriminative power. Consequently, the noise in the point-wise correspondences is more prominent, which further necessitates the additional refinement step.
Although the Local-to-Global Registration (LGR) module introduced in <cit.>, which works similarly to RANSAC in an iterative manner, improves the speed, the selection and refinement of the transformation still remain non-differentiable.
The entire model thus cannot be optimized jointly, leading to inferior accuracy.
In this paper, we present a simple yet effective approach for rigid point cloud registration by directly matching the superpoints.
Specifically, superpoints features are first obtained from the KPConv backbone <cit.>, which are then enhanced using Transformer blocks with interleaved multi-head self- and cross-attention modules <cit.>, enabling effective learning of the structural information from the target to the source point cloud.
The enhanced superpoint features are finally matched using a Global Softmax layer inspired by the 2D matching work <cit.>, which generates a correlation matrix capturing the pairwise relationships between the keypoints from the source and target point clouds.
But unlike <cit.>, we fully harness the rich information in the correlation matrix.
We employ a differentiable variant of the Kabsch-Umeyama algorithm <cit.>, considering the strengths of the matchings, where the weights of the SVD (Singular Value Decomposition) are the correlation scores between two corresponding superpoints.
By examining the correlation scores of the superpoints, we can easily identify the highly reliable correspondences without resorting to the time-consuming RANSAC or the non-differentiable LGR module <cit.>.
Our approach is fully differentiable and can be trained in an end-to-end manner, which enables joint optimization of the feature representation learning, superpoints matching, and transformation estimation, leading to improved alignment results.
Moreover, since our approach does not require ad hoc postprocessing, e.g., RANSAC, it runs fast, making it potentially more useful in practice.
Compared with the recent end-to-end method <cit.>, our approach directly matches the superpoints.
With the rich information obtained from the matching step, our approach can get a more accurate estimation of the transformation between the point clouds robustly by filtering out the outliers effectively.
We run experiments on standard benchmarks, including ModelNet <cit.> and 3DMatch <cit.>, and achieve state-of-the-art results.
Extensive ablation studies validate the effectiveness of each module in our proposed approach.
The key contributions of the paper can be summarized as follows:
* Efficiency. Our approach tackles the point cloud registration by directly matching superpoints. The rich information in the matching step allows us to effectively filter out outliers without using RANSAC-like approaches. As a result, our approach is efficient and runs fast in practice.
* Simplicity. Our registration pipeline can be trained end-to-end without any ad hoc postprocessing refinement. It streamlines the process and makes it more straightforward to implement and understand.
* Robustness. We thoroughly evaluate our method on various datasets, demonstrating its superior performance in comparison to existing techniques. Our approach achieves state-of-the-art results on multiple benchmarks, with notably high inlier ratios. This showcases the robustness and accuracy of our proposed registration framework.
§ RELATED WORK
Traditional registration approaches.
The most known algorithm Iterative Closest Point (ICP) <cit.> has been widely used for point cloud registration.
ICP solves the registration problem iteratively in two steps: (1) It obtains the spatially closest point correspondence and then (2) finds the least-squares rigid transformation.
The spatial-distance-based correspondences are sensitive to the initial transformation and point noises.
A lot of variants <cit.> have been proposed to improve ICP.
Another line of work <cit.> formulates the point cloud registration as probability distribution matching problems to improve the robustness and to converge quickly.
However, they still heavily rely on an appropriate initialization and can easily converge to a local optimum as opposed to the global solution.
Unlike the local methods mentioned above, global approaches are invariant to the initialization.
Based on branch-and-bound techniques, several works <cit.> have been proposed, but they are very slow and impractical in some scenarios.
Another method is to extract and match keypoints based on feature extraction methods such as FPFH <cit.> and SHOT <cit.>.
Then RANSAC <cit.> can be used for registration.
However, RANSAC is computationally very slow compared to ICP.
Learning-based registration approaches.
Recently, many works have used deep learning for point cloud learning and registration.
Some work first estimates the correspondence between two point clouds and then computes the transformation with some robust pose estimators.
To predict the correspondence between two point clouds, 3DMatch <cit.> detects the repeatable keypoints and learns discriminative descriptors for keypoints.
The following works aim to either improve the keypoint detections <cit.>
or learn better feature descriptors <cit.>.
Predator <cit.> uses the attention mechanism proposed in Transformers <cit.> to enhance the point feature descriptors.
Other detector-free methods <cit.> extract the correspondences by considering all possible matches.
Another line of work <cit.> has included the transformation computation into the training pipeline.
Following the idea of ICP <cit.>, <cit.> iteratively predict the soft correspondences and computes the transformation with differentiable weighted SVD.
Another way <cit.> first extracts a global feature vector for each point cloud and predicts the transformation with global feature vectors. This approach usually fails in large-scale scenes.
Unlike these works, which require either ad hoc postprocessing or coarse-to-fine registration, our method directly matches the superpoints without any refinement.
Correspondence filters.
RANSAC <cit.> is typically used to filter out the outliers in the predicted correspondence to obtain a robust transformation estimation.
However, RANSAC is relatively slow and cannot be incorporated into the training pipeline because the hypothesis selection step is non-differentiable.
To alleviate these problems, DSAC <cit.> modify the RANSAC pipeline and make it differentiable.
Other deep robust estimators <cit.> usually use the classification network to identify which correspondences are outliers and then reject them.
Instead of using these complex correspondence filters, our method can directly filter out outliers effectively by leveraging the rich information in the superpoints matching.
§ METHOD
Given the source and target point clouds 𝐗∈ℝ^M×3 and 𝐘∈ℝ^N×3, our goal is to determine the optimal rigid transformation 𝐓 = {𝐑, 𝐭} with rotation 𝐑∈ SO(3) and translation 𝐭∈ℝ^3 to align two point clouds into a common coordinate system.
M and N denote the numbers of points.
§.§ Superpoints Feature Extraction and Enhancement
In our approach, we use Kernel Point Convolution (KPConv) <cit.> as the backbone to selectively downsample the point cloud into a set of superpoints and extract global feature vectors for each superpoint.
Specifically, the KPConv backbone uses a series of ResNet-like blocks <cit.> and convolutions to downsample the input point clouds into reduced sets of superpoints 𝐗'∈ℝ^M'×3 and 𝐘'∈ℝ^N'×3, where M' < M and N' < N.
The superpoints are described by their feature vectors, collected in 𝐅_𝐗'∈ℝ^M'× D and 𝐅_𝐘'∈ℝ^N'× D, respectively, with D being the feature dimension.
The network weights are shared among the two point clouds.
We use a shallower backbone for 3DMatch dataset compared to <cit.> to avoid significant downsampling.
Although KPConv backbone provides reasonably good representations, the superpoints features are obtained within each point cloud independently.
To obtain highly discriminative feature representations for superpoints matching, we further enhance their feature representations in the source point cloud to be conditioned on the target one and vice-versa.
Following the previous work on point cloud registration <cit.> and the 2D counterpart of superpoints matching (e.g., optical flow) <cit.>, we adopt the multi-head attention mechanism in the Transformer model <cit.> as the feature enhancement module, shown in Fig. <ref>.
It consists of both self- and cross-attention, where self-attention integrates information from other points within the same point cloud and cross-attention allows interactions with points in the other point cloud to capture mutual dependencies.
In addition to the multi-head attention, other components in the Transformer model, including position encodings of 3D points, residual connections, layer normalization, and feed-forward network are applied to each layer.
The entire feature enhancement module consists of 6 such layers with 256 dimensions and 8 attention heads.
The outputs of the feature enhancement module are features 𝐅̃_𝐗'∈ℝ^M'× D and 𝐅̃_𝐘'∈ℝ^N'× D, which have aggregated geometric information from both the source and target point clouds. The strongly associated features are strengthened while the weakly associated features are weakened.
With such highly discriminative feature representations, we can obtain high-quality superpoints matchings.
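A minimal sketch of one such enhancement layer, under our reading of the text (256-dimensional features, 8 heads; positional encodings and other details omitted), could look as follows:

```python
import torch
import torch.nn as nn

class EnhancementLayer(nn.Module):
    """One self-attention + cross-attention layer of the feature enhancement
    module (a sketch; the actual block also uses positional encodings)."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 1024), nn.ReLU(), nn.Linear(1024, d_model))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, feats, other_feats):
        # Self-attention within one point cloud.
        x = self.norm1(feats + self.self_attn(feats, feats, feats)[0])
        # Cross-attention towards the other point cloud.
        x = self.norm2(x + self.cross_attn(x, other_feats, other_feats)[0])
        return self.norm3(x + self.ffn(x))

layers = nn.ModuleList(EnhancementLayer() for _ in range(6))   # weights shared by both clouds
feat_x, feat_y = torch.randn(1, 500, 256), torch.randn(1, 480, 256)
for layer in layers:
    feat_x, feat_y = layer(feat_x, feat_y), layer(feat_y, feat_x)
```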
§.§ Direct Superpoint Matching for Rigid Transformation Estimation
To get the correspondences between two point clouds, we first compare the feature similarity of each point in 𝐅̃_𝐗' to all points in 𝐅̃_𝐘' by computing their correlations <cit.>, which can be done efficiently in a single step as follows:
𝐂 = softmax(𝐅̃_𝐗'𝐅̃_𝐘'^T) ∈ℝ^M' × N',
where 𝐂 is the normalized correlation matrix that represents the similarity between two point clouds. Based on the correlation matrix, the correspondences Ŷ and 𝐗̂ can be directly calculated by using the largest correlation for each point.
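In code, this matching step is a single dense correlation followed by a softmax over the target dimension and an argmax; a sketch (tensor names are ours):

```python
import torch

def match_superpoints(feat_x, feat_y):
    """Correlate every source superpoint with every target superpoint,
    normalize over the target dimension, and keep the best match and its score."""
    corr = torch.softmax(feat_x @ feat_y.transpose(-1, -2), dim=-1)   # (M', N')
    scores, idx = corr.max(dim=-1)   # best target index and confidence per source point
    return idx, scores

feat_x, feat_y = torch.randn(500, 256), torch.randn(480, 256)
idx, scores = match_superpoints(feat_x, feat_y)   # scores later weight the SVD fit
```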
The rigid transformation between the source and target point clouds can then be estimated from the superpoint correspondences with a weighted variant of the Kabsch-Umeyama algorithm <cit.>:
𝐑̂, 𝐭̂ = argmin_𝐑, 𝐭∑_i^min(M', N') w_i ‖𝐑𝐱̂_i + 𝐭 - 𝐲̂_i‖^2,
where 𝐱̂_i, ŷ_i are two matched superpoints, denoting the i^th point in 𝐗̂, Ŷ, respectively.
The coefficient w_i can be used to weigh different correspondences.
Although we use the highly discriminative feature representations enhanced by the attention module, the correspondences are inevitably noisy.
How to filter out the outliers (, incorrect correspondences)?
We show that the normalized correlation matrix obtained from superpoints matching in Eq. (<ref>) contains rich information, which allows us to effectively select highly confident superpoint correspondences to estimate the transformation.
Specifically, if _i is similar to multiple superpoints, , because of the repetitive patterns, its matching to _i tends to be unreliable.
Therefore, the normalized correlation score between them will be low since is normalized w.r.t. all other superpoints in the target point cloud.
We can therefore set w_i=(𝐱̂_i, 𝐲̂_i), capturing the confidence of the superpoints matchings.
On the other hand, we can use w_i to select only highly confident superpoint matchings, , top 15% of them, to estimate the transformation in Eq.(<ref>.
We can also augment the weights w_i by considering the overlap score, which is introduced below, to further improve the accuracy.
Compared to previous work <cit.>, our approach is simple yet effective: it not only eliminates the coarse-to-fine strategy but, more importantly, also the slow and non-differentiable RANSAC.
As a result, our approach is fast and the superpoints feature learning, superpoints matching, and transformation estimation can be optimized jointly in an end-to-end manner, leading to superior accuracy.
Although the feature enhancement module is also used in <cit.>, our approach is fundamentally different.
Instead of predicting the correspondences by using the feature representations of the source point cloud only, we perform superpoints matching and use the rich information in the matching step to effectively filter outliers and get a robust estimation of the transformation.
§.§ Loss Functions
We train our approach using the following three loss functions, where the transformation loss is the main loss term and the other two are auxiliary ones.
Transformation Loss.
We apply an L1 loss between the locations of all keypoints transformed with the predicted and with the ground-truth transformation matrices.
ℒ_T = 1/M'∑_i^M'|𝐑̂𝐱_𝐢' + 𝐭̂ - (𝐑_gt𝐱_𝐢' + 𝐭_gt)|_1.
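A direct sketch of this loss (the tensor and array names are ours):

```python
import torch

def transformation_loss(R_pred, t_pred, R_gt, t_gt, pts):
    """L1 distance between keypoints transformed by the predicted and the
    ground-truth rigid transformations, averaged over the M' superpoints."""
    pred = pts @ R_pred.T + t_pred
    gt = pts @ R_gt.T + t_gt
    return (pred - gt).abs().sum(dim=-1).mean()
```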
Overlap Loss. Inspired by <cit.>, we estimate the overlap values Ô_𝐗' and Ô_𝐘' using a separate MLP layer based on the enhanced features 𝐅̃_𝐗' and 𝐅̃_𝐘', respectively.
The overlap estimation is formulated as a binary classification problem, so we use binary cross-entropy loss:
ℒ^X_o = -1/M'∑_i^M'[ o_x,i^*·logô_x,i + (1 - o_x,i^*)·log(1 - ô_x,i) ],
where ô_x,i is the estimated overlap probability and o_x,i^* is the ground truth label.
We compute the overlap loss ℒ^Y_o for the target point cloud similarly.
Feature Loss.
Following <cit.>, to ensure that the enhanced features of both point clouds are in the same feature space, we apply an InfoNCE <cit.> loss on the enhanced features 𝐅̃_𝐗' and 𝐅̃_𝐘'.
Given a set of superpoint correspondences {(𝐱̂_i, 𝐲̂_i)}_i=1^K and their associated feature representations {(𝐟_𝐱̂_i, 𝐟_𝐲̂_i)}_i=1^K, the feature loss is defined as
ℒ_f = -1/K∑_i=1^K log exp(𝐟_𝐱̂_i^T 𝐖 𝐟_𝐲̂_i) / ( exp(𝐟_𝐱̂_i^T 𝐖 𝐟_𝐲̂_i) + ∑_j≠ i exp(𝐟_𝐱̂_i^T 𝐖 𝐟_𝐲̂_j) ).
The linear transformation 𝐖 is enforced to be symmetrical by parameterizing it as the sum of an upper triangular matrix 𝐔 and its transpose, 𝐖 = 𝐔 + 𝐔^T.
The final loss is a weighted sum of all the losses above with
ℒ = ℒ_T + αℒ_f + β (ℒ_o^X + ℒ_o^Y),
where we set the loss weights α = 0.1 and β = 1 empirically.
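For concreteness, a sketch of the feature loss with the symmetric bilinear form (a minimal reading of the description above; the exact parameterization in the released code may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureLoss(nn.Module):
    """InfoNCE-style feature loss; W = U + U^T keeps the bilinear form symmetric."""
    def __init__(self, dim=256):
        super().__init__()
        self.U = nn.Parameter(0.01 * torch.randn(dim, dim))

    def forward(self, fx, fy):
        # fx, fy: (K, D) features of matched superpoints, row i paired with row i.
        W = self.U + self.U.T
        logits = fx @ W @ fy.T                      # (K, K) pairwise similarities
        labels = torch.arange(fx.shape[0])          # positives lie on the diagonal
        return F.cross_entropy(logits, labels)

# Final objective (Eq. above): loss = loss_T + 0.1 * loss_f + 1.0 * (loss_oX + loss_oY)
```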
§ EXPERIMENTS
We evaluate our approach on two datasets with overlap ranging from 10% to 75%. The first is ModelNet, with two benchmark settings following <cit.>, and the second is 3DMatch <cit.>, with two benchmarks following <cit.>.
§.§ Implementation details
Our approach is implemented using the PyTorch framework <cit.> on a system with an Intel i9-1300K CPU and a single RTX 3090 GPU. The network training is performed with the AdamW optimizer <cit.>, utilizing a learning rate of 0.0001 and a weight decay of 0.0001. For the ModelNet dataset, the network is trained for 400 epochs with a batch size of 4. On the 3DMatch dataset, the network is trained for 50 epochs with a batch size of 4 as well. Training the network requires approximately 22 hours on ModelNet and around 2 days on 3DMatch.
§.§ ModelNet and ModelLoNet Benchmarks
The ModelNet40 <cit.> dataset comprises synthetic CAD models. Following the data setting in <cit.>, the point clouds are randomly sampled from mesh faces of the CAD models, cropped, and subsampled. For the ModelNet and ModelLoNet benchmarks, the average overlap is 73.5% and 53.6%, respectively. Our network is trained exclusively on ModelNet, and we evaluate its generalization performance on ModelLoNet. For benchmarking the performance of our model, we use the Relative Rotation Error (RRE) and Relative Translation Error (RTE) as the primary metrics. Following <cit.>, we also calculate the Chamfer Distance (CD) between the registered scan pairs.
The results are shown in Table <ref>. We compare against correspondence-based approaches <cit.>, coarse-to-fine registration approaches <cit.>, and end-to-end methods <cit.>. We see that our approach performs well on both benchmarks. Moreover, the low chamfer error suggests that predicted correspondences have very high accuracy. Our approach is also able to outperform methods using post-processing steps like RANSAC <cit.> by a significant margin, which further strengthens the point that direct superpoints matchings can actually work well for point cloud registration.
§.§ 3DMatch and 3DLoMatch Benchmarks
3DMatch <cit.> is a collection of 62 scenes, from which we use 46 for training, 8 for testing, and 8 for validation following <cit.>. We use the preprocessed data from <cit.> which contains point clouds downsampled using a voxel-grid subsampling method. The 3DMatch benchmark contains point clouds pairs with >30% overlap while the 3DLoMatch benchmark contains scan pairs with only 10%-30% overlap. Following <cit.>, we perform training data augmentation by applying small rigid perturbations, jittering, and shuffling of points.
Following the literature <cit.>, we report results on the 3DMatch dataset using 5 metrics: RRE, RTE, Registration Recall (RR), Feature Matching Recall (FMR), and Inlier Ratio (IR).
We compare our approach against several methods, including learned correspondence-based algorithms <cit.> and coarse-to-fine approaches <cit.>. Since the algorithms which use post-processing steps like RANSAC perform well only with a large number of interest points, we only show results for the maximum number of sampled points (5000). We also compare against several approaches designed to avoid RANSAC <cit.>.
The quantitative results are shown in Table <ref>. For the 3DMatch benchmark, our approach outperforms all the previous methods.
This implies that in cases of significant overlap (>30%), the superpoint correspondences are very distinctive and accurate. One of the problems associated with superpoint matching in point clouds is the resolution issue, where the correspondences might not actually coincide with each other when transformed with the ground truth transformation matrix due to the subsampling of the point cloud. In our case, the resolution issue is automatically handled by the correlation weights.
We also show systematic comparisons with other approaches in Table <ref>.
We can see our model runs fast and is compact.
Comparisons of inference speed and number of parameters on the 3DMatch dataset.
Methods        | Matching       | e2e | Time   | Param
CoFiNet        | coarse-to-fine |     | 1.922s | 5.5M
GeoTransformer | coarse-to-fine |     | 0.088s | 9.8M
RegTR          |                |     | 0.063s | 11.8M
Ours           | direct         |     | 0.073s | 7.8M
For the 3DLoMatch benchmark, our method performs significantly better than all the other approaches that do not use any post-processing. Note that <cit.> uses LGR as the refinement step to get the best results, but when using all the predicted correspondences with their scores and applying weighted SVD, they get much worse results. This implies that the final set of all the correspondences is not optimal and contains outliers. In our case, these outliers are filtered by the overlap and correlation values, thus providing accurate correspondences. The validity of the correspondences predicted by our approach can be verified by comparing the mean Inlier Ratio in Table <ref>. We get the highest mean IR, almost 19% better than the second-best approach on 3DMatch and 14% higher on 3DLoMatch.
§.§ Ablation Studies
Effectiveness of the feature enhancement.
The feature enhancement module is essential in our approach. To assess its effectiveness, we trained various networks with attention layers ranging from 0 to 6. Figure <ref> illustrates how the performance changes as we increase the number of attention layers. When no feature enhancement is used, the model's performance is significantly poorer, which is understandable since the source point cloud's superpoint features lack information about the target point cloud. Performance stabilizes at around 6 attention layers. It is worth noting that increasing the number of layers also leads to a higher computational burden.
Using correlation scores for filtering out outliers.
To filter out outliers in the predicted correspondences we primarily use the correlation values along with overlap values as weights for weighted SVD.
Since these weights are much lower for outliers, the registration results are very accurate. If we use unweighted SVD (with equal weights for all the correspondences), the performance drops rapidly with the increase in the number of correspondences, as shown in Fig. <ref>.
We experiment with a simple iterative refinement scheme proposed in <cit.>, where we iteratively re-estimate the transformation by pruning out outliers. We found that the performance of this refinement saturates after 5 iterations; see Table <ref>, row 4, for results. Fig. <ref> also shows that this approach is quite stable in pruning outliers for different numbers of correspondences and improves the results by a slight margin.
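A sketch of this optional refinement loop (reusing the weighted_kabsch sketch above; the inlier threshold is an illustrative value, not the paper's setting):

```python
import torch

def iterative_refine(src, tgt, scores, n_iters=5, inlier_thresh=0.05):
    """Re-fit the transformation a few times, zeroing the weight of
    correspondences that are far apart under the current estimate."""
    w = scores.clone()
    R, t = weighted_kabsch(src, tgt, w)
    for _ in range(n_iters):
        residual = (src @ R.T + t - tgt).norm(dim=-1)
        keep = (residual < inlier_thresh).float()
        if keep.sum() < 3:            # not enough inliers left; stop early
            break
        w = scores * keep
        R, t = weighted_kabsch(src, tgt, w)
    return R, t
```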
Effectiveness of different loss terms.
Overlap Loss | Feature Loss | 3DMatch: RRE / RTE / RR | 3DLoMatch: RRE / RTE / RR
             |              | 2.521 / 0.076 / 76.5    | 5.272 / 0.132 / 31.2
             |              | 2.123 / 0.062 / 79.6    | 4.020 / 0.105 / 37.2
             |              | 1.651 / 0.049 / 90.3    | 2.876 / 0.082 / 58.0
             |              | 1.436 / 0.045 / 93.7    | 2.553 / 0.074 / 65.0
We also experiment with RANSAC on the set of superpoint correspondences; see Table <ref>, rows 2 and 3. As expected, RANSAC gives slightly worse results while taking more computation time. This implies
that our approach is much more effective in pruning outliers, thus eliminating the need for refinement.
Effectiveness of the loss terms.
Lastly, we analyze the effectiveness of each loss function. Table <ref> shows the results with different loss function configurations. We see that by using only the Feature Loss together with the Transformation Loss, the model is able to achieve good performance, and adding the Overlap Loss is the final step that helps the model prune the remaining outliers. Row 3 in Table <ref> again implies that the superpoint correspondences are very distinctive.
§ LIMITATIONS AND BROADER IMPACTS
We have identified two limitations of the current architecture. First, our approach requires a sufficient number of superpoints in the low-overlap region to perform well.
Second, our approach samples roughly the 200 best correspondences, but in low-overlap cases only a few of them lie in the overlapping region, making it hard for the network to achieve good performance on the 3DLoMatch benchmark.
Instead of regular downsampling, structure-aware sampling strategies are needed.
This research has the potential to significantly impact the fields of 3D computer vision and robotics, as well as any domains that rely on accurate and efficient point cloud registration, such as autonomous navigation, 3D mapping, and augmented reality.
By introducing a simple yet effective approach for directly matching superpoints in an end-to-end manner, the research not only speeds up the registration process but also enhances its robustness by effectively filtering out outliers without any postprocessing refinement.
§ CONCLUSION
We introduce a straightforward yet highly effective method for point cloud registration that offers simplicity in understanding and implementation. Our approach directly matches superpoints to establish correspondences, enabling the computation of rigid transformations and correlation weights.
To enhance performance in low overlap regions, we augment the correlation weights with predicted overlap values.
Experimental evaluations on the 3DMatch and ModelNet datasets demonstrate the efficacy of our approach across different overlap scenarios. Notably, our approach is computationally efficient, robust to outliers, and does not require post-processing steps, making it suitable for real-time applications. In the future, we plan to extend its application to cross-modalities such as image-to-image and image-to-point cloud registration.
§ APPENDIX
§.§ Network Architecture
The network architecture for the KPConv backbone used is shown in Fig. <ref> and the architecture for the Feature Enhancement module is shown in Fig. <ref>.
We do not downsample the point cloud by a large factor in the backbone, unlike other methods <cit.>, as this leads to a very low resolution in the overlapping region of the point cloud pair. Since we are training the network to learn correlations between the downsampled superpoints, it is beneficial to have a moderately larger number of points.
§.§ Evaluation Metrics
We report Registration Recall (RR), Relative Rotation Error (RRE), Relative Translation Error (RTE), Feature Matching Recall (FMR), and Inlier Ratio (IR) for the 3DMatch dataset following common practice <cit.>. For ModelNet dataset, we report RRE, RTE, and Chamfer Distance Error (CD) following <cit.>.
§.§.§ 3DMatch/3DLoMatch
Inlier Ratio is the fraction of inlier correspondences in all predicted correspondences. Corresponding points are considered inliers if the distance between them under the ground-truth transformation is smaller than τ_1 = 10 cm
IR(𝐗, 𝐘) = 1/|Ĉ| ∑_(𝐱_i, 𝐲_i) ∈ Ĉ [ ‖𝐓^𝐗_𝐘(𝐱_i) - 𝐲_i‖_2 < τ_1 ],
where [·] is the Iverson bracket, 𝐓^𝐗_𝐘 is the ground-truth transformation and (𝐱_i, 𝐲_i) ∈ Ĉ are the predicted correspondences.
Feature Matching Recall is the fraction of point cloud pairs whose inlier ratio is above τ_2 = 0.05. It indicates the likelihood of recovering the optimal transformation between two point clouds. It is a good indicator for methods which use post-processing refinement steps like RANSAC <cit.> as having a higher FMR increases the probability of recovering the correct transformation. If the total number of point cloud pairs in the test set is 𝒩, FMR can be calculated as
FMR = 1/𝒩 ∑^𝒩_i=1 [ IR_i > τ_2 ].
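A minimal NumPy sketch of these two metrics, with the thresholds as defined above, is given below; the exact evaluation code may differ in details such as units.

import numpy as np

def inlier_ratio(src, tgt, R_gt, t_gt, tau1=0.10):
    # src, tgt: (N, 3) predicted correspondences; (R_gt, t_gt): ground-truth transform (meters).
    dist = np.linalg.norm(src @ R_gt.T + t_gt - tgt, axis=1)
    return float((dist < tau1).mean())

def feature_matching_recall(irs, tau2=0.05):
    # irs: list of per-pair inlier ratios over the whole test set.
    return float(np.mean([ir > tau2 for ir in irs]))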
Registration Recall represents the fraction of point cloud pairs whose Root Mean Square Error (RMSE) is less than τ_3 = 20 cm.
RR directly measures the effectiveness of the registration algorithm.
RR = 1/𝒩 ∑^𝒩_i=1 [ √( 1/|𝒞^*| ∑_(𝐱_j, 𝐲_j) ∈ 𝒞^* ‖𝐓^𝐗_𝐘(𝐱_j) - 𝐲_j‖^2_2 ) < τ_3 ],
where 𝒞^* is the set of ground truth correspondences and 𝐓^𝐗_𝐘 is the predicted transformation matrix.
Relative Rotation Error and Relative Translation Error are the rotation and translation errors between the ground-truth transformation 𝐓̅^𝐗_𝐘 ∈ SE(3) and the predicted transformation 𝐓^𝐗_𝐘 ∈ SE(3). We denote by 𝐑 ∈ SO(3) and 𝐭 ∈ ℝ^3 the predicted rotation matrix and translation vector, and by 𝐑̅ ∈ SO(3) and 𝐭̅ ∈ ℝ^3 the corresponding ground-truth rotation matrix and translation vector. RRE and RTE can then be calculated as follows
RRE = arccos( (trace(𝐑̅^T 𝐑) - 1) / 2 ), RTE = ‖𝐭̅ - 𝐭‖_2.
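As an illustration, the following sketch computes RRE (converted to degrees, as the errors are commonly reported) and RTE from a predicted and a ground-truth transformation; the clipping guards against numerical round-off.

import numpy as np

def rre_rte(R_pred, t_pred, R_gt, t_gt):
    # Relative rotation error (degrees) and relative translation error (same unit as t).
    cos_theta = (np.trace(R_gt.T @ R_pred) - 1.0) / 2.0
    rre = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    rte = np.linalg.norm(t_gt - t_pred)
    return rre, rte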
§.§.§ ModelNet/ModelLoNet
Chamfer Distance measures the dissimilarity or discrepancy between two sets of points or point clouds. It quantifies the minimum average distance between each point in one set and its nearest neighbor in the other set. It provides a measure of how well two point clouds align or match with each other, with lower values indicating a better alignment.
CD(𝐗, 𝐘) = ∑_x ∈ 𝐗 min_y ∈ 𝐘 ‖y - 𝐓^𝐗_𝐘(x)‖_2^2 + ∑_y ∈ 𝐘 min_x ∈ 𝐗 ‖y - 𝐓^𝐗_𝐘(x)‖_2^2,
where 𝐗 and 𝐘 represent the sets of points in two point clouds, ·_2 denotes the Euclidean distance and 𝐓^𝐗_𝐘 is the predicted transformation between point clouds 𝐗, and 𝐘.
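A brute-force NumPy sketch of this metric is given below; for large point clouds a nearest-neighbor structure such as a KD-tree would replace the dense pairwise distance matrix.

import numpy as np

def chamfer_distance(X, Y, R, t):
    # X: (N, 3), Y: (M, 3) point clouds; (R, t): predicted transform mapping X into Y's frame.
    Xt = X @ R.T + t
    d = np.linalg.norm(Xt[:, None, :] - Y[None, :, :], axis=-1) ** 2  # (N, M) squared distances
    return float(d.min(axis=1).sum() + d.min(axis=0).sum())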
§.§ Additional Results
Table <ref> shows the scene-wise registration results on the 3DMatch and 3DLoMatch benchmarks. Following the literature <cit.>, we only calculate the mean RRE and RTE for the successfully registered scan pairs. For the 3DMatch benchmark, our method outperforms other methods in every metric. Our method achieves not only the highest registration recall but also the lowest mean RRE and RTE. This implies that, for all the successfully registered point clouds, the predicted correspondences and their correlation values are very accurate. For the 3DLoMatch benchmark, our method outperforms every method that does not use post-processing refinement, such as RegTR,
while giving the lowest RRE and RTE values. In comparison to approaches which use post-processing refinement like GeoTransformer, Predator, and CoFiNet,
the performance gap is expected as the predictions in 3DLoMatch contain a much higher number of outliers. But the gap is minor and our approach runs faster.
§.§ Iterative Pose Refinement
We experiment with a simple iterative refinement scheme proposed in <cit.>, where we can iteratively re-estimate the transformation by pruning out outliers.
𝐑, 𝐭 = argmax_{𝐑_i, 𝐭_i} ∑_j [ ‖𝐑_i 𝐱_j + 𝐭_i - 𝐲_j‖_2 < τ_a ],
where [·] is the Iverson bracket, τ_a is the threshold radius (10 cm for 3DMatch and 5 cm for the ModelNet dataset) and (𝐱_j, 𝐲_j) ∈ Ĉ are the predicted correspondences.
We can iteratively re-estimate the transformation with the matches that satisfy the criteria and prune out the outliers in every step. We can use this pose refinement in cases of very low overlap cases like the 3DLoMatch benchmark. Although we do not use this refinement in our results in the paper, we provide the analysis for practical usage.
For our tests, we found that the performance of this refinement saturated after 5 iterations, as shown in Fig. <ref>.
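A sketch of this refinement loop, reusing the weighted SVD routine sketched earlier and the thresholds stated above, could look as follows; the iteration count and tolerances are illustrative, not the exact settings used in the paper.

import numpy as np

def iterative_refinement(src, tgt, weights, tau_a=0.10, iters=5):
    # Re-estimate the transform, keeping only correspondences that agree within tau_a.
    R, t = weighted_kabsch(src, tgt, weights)
    for _ in range(iters):
        residual = np.linalg.norm(src @ R.T + t - tgt, axis=1)
        inlier = residual < tau_a
        if inlier.sum() < 3:          # need at least 3 correspondences for a stable SVD
            break
        R, t = weighted_kabsch(src[inlier], tgt[inlier], weights[inlier])
    return R, t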
§.§ Qualitative Results
We provide more visual results in Fig. <ref> and Fig. <ref>.
|
http://arxiv.org/abs/2307.02469v2
|
20230705174428
|
What Matters in Training a GPT4-Style Language Model with Multimodal Inputs?
|
[
"Yan Zeng",
"Hanbo Zhang",
"Jiani Zheng",
"Jiangnan Xia",
"Guoqiang Wei",
"Yang Wei",
"Yuchen Zhang",
"Tao Kong"
] |
cs.CV
|
[
"cs.CV",
"cs.CL"
] |
Recent advancements in Large Language Models (LLMs) such as GPT4 have displayed exceptional multi-modal capabilities in following open-ended instructions given images. However, the performance of these models heavily relies on design choices such as network structures, training data, and training strategies, and these choices have not been extensively discussed in the literature, making it difficult to quantify progress in this field. To address this issue, this paper presents a systematic and comprehensive study, quantitatively and qualitatively, on training such models. We implement over 20 variants with controlled settings. Concretely, for network structures, we compare different LLM backbones and model designs. For training data, we investigate the impact of data and sampling strategies. For instructions, we explore the influence of diversified prompts on the instruction-following ability of the trained models. For benchmarks, we contribute the first, to our best knowledge, comprehensive evaluation set including both image and video tasks through crowd-sourcing. Based on our findings, we present , which performs the most accurate multi-modal understanding while keeping the best multi-modal generation ability compared to existing open-sourced GPT4-style models.
§ INTRODUCTION
Large Language Models (LLMs) <cit.> have progressed rapidly in recent years and achieved impressive performance in language understanding and generalization.
With instruction fine-tuning <cit.>, LLMs can be further improved to follow open-ended instructions from non-expert users and serve as dialog-based assistants in our daily lives. Leveraging powerful LLMs, recent studies have examined methods for adapting LLMs to multimodal inputs (e.g., images <cit.>, videos <cit.>, and audio <cit.>) and outputs (e.g., vision tasks <cit.>, and robotic manipulation skills <cit.>).
Notably, GPT4 has astounded the world with its impressively stable zero-shot versatile yet practical capabilities, such as generating descriptions, stories, poetry, advertisements, and codes given images, which were rarely observed in previous vision language models <cit.>.
However, it still remains a mystery that: How does GPT4 obtain its impressive smartness? Though actively investigated recently, the existing models are usually different in network structure, training data, training recipes, prompts, and evaluation benchmarks, which makes it extremely hard to tell which factors are crucial in achieving a high-performance multi-modal language model. In addition, suitable quantitative benchmarks for evaluating and comparing such models are lacking, making it difficult to attribute and quantify the progress in open-sourced multi-modal LLMs.
Therefore, in this paper, we conduct a systematic study on training GPT4-style models to address the aforementioned issues.
According to the existing literature, we identify three possible keys to achieving high performance for multi-modal LLMs: network structures, training data, and diversified instructions.
Regarding network structures, we explore different LLM adaptation strategies, including the widely utilized cross-attention-based structure <cit.> and the recently popular decoder-only structure with a multi-modal adapter <cit.>.
Besides, we investigate different backbones including LLaMA-7B and Vicuna-7B to assess whether language instruction fine-tuning affects the final multi-modal performance. As for training data, we experiment with several large-scale datasets (e.g. COYO700M <cit.>, DataComp1B <cit.>, and BlipCapFilt <cit.>) consisting of image-text pairs to observe the effects of different data combinations. For instructions, we manually label at least three prompts for each task and generate more with GPT4 to figure out the influence of the diversity of language prompts. In total, there are 500 prompts for over 50 tasks.
In summary, we implement ∼20 variants with controlled settings and conduct extensive experiments to draw reliable conclusions both quantitatively and qualitatively.
For benchmarking, we argue that the evaluation of multi-modal LLMs is essentially different from typical visual-language methods.
The primary challenge when evaluating a GPT4-style model is balancing text generation capability and multi-modal understanding accuracy. To address this, we present a new benchmark incorporating both video and image data to evaluate both the multi-modal understanding and text generation performances. Using our proposed benchmark, we evaluate a large bunch of open-source methods and provide a comprehensive review. Concretely, we adopt two protocols for quantitative evaluation. First, we collect an Open-ended Visual Question Answering (Open-VQA) test set, including questions on objects, OCR, counting, reasoning, action recognition, chronological ordering, and more. Different from standard VQA <cit.>, the ground-truth answer in Open-VQA is open-ended. To evaluate the performance on Open-VQA, we prompt GPT4 to make it a discriminator, yielding a 95% agreement with human evaluation. This benchmark is used to evaluate the accuracy of all models. Additionally, we adopt the OwlEval test set proposed by mPLUG-owl <cit.> to assess the text generation ability given images. Though OwlEval is a tiny set containing only 82 questions based on 50 images, it covers a diverse range of tasks such as generating descriptions, stories, poems, advertisements, codes, and other sophisticated yet practical analyses of given images. In this part, we recruit human annotators to rank different models.
Based on extensive analysis of our controlled experiments, our findings can be summarized as follows:
* Prefix-tuning with trainable adaptors has shown better performances to adapt LLMs to multi-modal inputs compared to cross attention (e.g. Flamingo <cit.>).
* Data quality is more important than quantity. We find that models trained on large-scale image-text pairs such as COYO700M and DataComp1B are no better at language generation than models trained on a much smaller but higher-quality dataset, since the noisy captions can contaminate the output distribution.
* Diversified prompts are crucial to the improvement of the instruction-following ability and, consequently, final performance.
* For the multi-modal adaptation of LLMs, it is crucial to carefully balance the multi-modal understanding and text generation abilities. Multi-modal adaptation based on instruction-finetuned models like Vicuna can improve the instruction-following abilities.
Through our study, we present , a simple prefix-tuning GPT4-style model, with a two-stage training recipe.
For the first stage, we use ∼120M image-text pairs to align visual and linguistic embeddings. For the second stage, we finetune our model with 20 multi-modal tasks with image or video inputs and NLP instruction data to learn to follow instructions.
We transform all multi-modal datasets into the instruction-following format with manually written prompts and more GPT4-generated ones to keep the consistency of all training data. The resulting model performs the most accurate multi-modal understanding while exhibiting the best multi-modal generation ability compared to existing open-sourced models.
§
is a GPT4-style large language model that can take images and videos as inputs.
Built on top of Vicuna, it is further trained with additional trainable adapters on high-quality image-text pairs and visual language tasks.
In this section, we will introduce our in detail, including the problem formulation (<ref>), architecture (<ref>), pretraining (<ref>), and instruction finetuning (<ref>).
§.§ Formulations
A GPT4-style large language model is defined as a decoder-only transformer <cit.> that takes both visual and instructional tokens as inputs and generates responses in text auto-regressively.
Formally, the input includes vision tokens w_v={w_i}_i=1^V and instruction tokens w_l={w_j}_j=V+1^V+L,
where V and L represent the number of vision tokens and instruction tokens. The vision tokens and instruction tokens in our model are directly concatenated to form the input of the decoder-only model. Conditioned on the multi-modal inputs, the model predicts the response in an auto-regressive manner, i.e., each word w_i is predicted conditioned on all input tokens and previous predictions. Therefore, the sentence is predicted by the following equation:
p(w_V+L+1 : V+L+T | w_1:V+L) ∼ ∏_t=V+L+1^V+L+T P(w_t | w_<t)
In large language models <cit.>, the network is usually trained on numerous text corpus to learn the causal relationships among tokens.
Similarly, our model is also trained on the collected visual-language tasks to learn the next-word distribution.
Notably, compared to the contrastive pretraining <cit.>, pretraining with next-word prediction requires data with fluent texts that can represent the “natural” causal dependency between the predicted word and the past context very well <cit.>.
We will introduce the details of data collection and selection in Section <ref> and <ref> in detail.
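As an illustration of this next-word prediction objective over the concatenated vision and instruction tokens, a PyTorch-style sketch is given below; restricting the loss to response tokens via a mask is an assumption about the training setup that the text does not spell out explicitly.

import torch
import torch.nn.functional as F

def multimodal_lm_loss(logits, labels, response_mask):
    # logits: (B, S, vocab) from the decoder over [vision; instruction; response] tokens.
    # labels: (B, S) token ids; response_mask: (B, S) True where response tokens are located.
    # Standard next-token shift: position i predicts token i+1.
    shift_logits = logits[:, :-1, :]
    shift_labels = labels[:, 1:]
    shift_mask = response_mask[:, 1:]
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    )
    return (loss * shift_mask.reshape(-1).float()).sum() / shift_mask.float().sum().clamp(min=1)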
§.§ Details of Model Architecture
Overview
Our model takes vision and language inputs simultaneously and generates text responses following the input instructions.
The overall structure of our model is shown in Fig.<ref>.
Concretely, vision inputs are first processed by a vision encoder to get a sequence of vision tokens w_v.
After that, w_v are fused with instruction tokens w_l for multi-modal tasks.
In our model, we directly concatenate the projected vision tokens and instruction tokens as the input of LLMs, which can then be processed by the decoder-only LLMs naturally.
We call this structure “prefix-finetuning” (PT) in contrast to the cross-attention-based models like Flamingo <cit.>. Moreover, we find that by adding a small trainable adapter after some layers in the frozen LLMs, the performance could be further improved with low training costs.
To generate responses, the left-to-right causal decoder auto-regressively predicts the next token by taking all previous tokens as inputs until encountering the <EOS>.
Adapter
The trainable adapters are inserted into the LLMs after every M blocks.
In our experiments, M=1. As shown in Figure <ref>(b), the adapter linearly projects each token into a lower-dimensional space and then re-projects it back.
Concretely, in , the hidden state for each token is 4096-d.
The adapter first applies layer normalization <cit.> to the hidden states.
Then a linear layer downsamples each token state from 4096 to 2048 dimensions, followed by SiLU <cit.> as the non-linear activation function, which is consistent with LLaMA <cit.>.
Finally, another linear layer maps the 2048-d hidden state back to 4096-d.
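A PyTorch sketch of such an adapter module is shown below; the residual connection is an assumption typical of bottleneck adapter designs and is not stated explicitly above.

import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck adapter: LayerNorm -> down-project 4096->2048 -> SiLU -> up-project 2048->4096,
    # added back onto the frozen block's output (residual assumed).
    def __init__(self, dim=4096, bottleneck=2048):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.SiLU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(self.norm(hidden_states))))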
[Figure] Architecture of the model: (a) Overall; (b) Adapter.
Vision Encoder
To extract vision features of images and video frames, we apply EVA-1B <cit.> as our vision encoder ϕ_v(x).
It maps an image to a sequence of visual tokens.
The downsample rate is 14, meaning that an image with resolution H× W will be represented by a sequence of H/14×W/14 tokens.
To improve the efficiency of training and inference, we adopt the resampler mechanism Φ <cit.>, which reduces the number of vision tokens by condensing the long vision token sequence into a short, learnable query sequence w_v^q:
w_v = Φ(ϕ_v(x), w_v^q)
where x is the input image, ϕ_v(x) is the raw tokens directly given by the vision encoder, w_v is the condensed token sequence consisting of 32 tokens regardless of the number of raw tokens from the vision encoder.
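A Perceiver-style sketch of this resampler in PyTorch is given below; the hidden dimension and number of heads are placeholders, and the actual module may stack several such cross-attention layers rather than a single one.

import torch
import torch.nn as nn

class Resampler(nn.Module):
    # A fixed set of learnable queries cross-attends to the raw vision tokens and
    # returns a short sequence (32 tokens here), regardless of the input length.
    def __init__(self, dim=1024, num_queries=32, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vision_tokens):                  # (B, H/14 * W/14, dim)
        b = vision_tokens.size(0)
        q = self.queries.unsqueeze(0).repeat(b, 1, 1)  # (B, 32, dim)
        out, _ = self.attn(q, vision_tokens, vision_tokens)
        return self.norm(out)                          # (B, 32, dim)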
§.§ Pretraining
During pretraining, we utilize more than 120M image-text pairs to train the newly added layers so as to build connections of different modalities.
Our pretraining follows the typical next-word prediction training with the cross entropy loss.
To accelerate pretraining, we first pre-train our model on images of 224×224 resolution.
Nevertheless, we found that only pretraining on a low resolution is not enough for some downstream tasks like table reading and OCR.
Therefore, after 100k steps of pretraining on low-res images, we continue to increase the input resolution to 420×420 and train the model for another 10k steps.
Training data during this phase mainly consists of
BlipCapFilt 115M <cit.>, CC12M <cit.>, CC3M <cit.>, and SBU <cit.>.
Besides, we also add high-quality labeled data during pretraining that have been also used in the instruction finetuning phase, like captioning, visual question answering, and classification.
Details of all pretraining datasets are listed in Table <ref>.
Our model is trained on a total of ∼14B tokens from all these datasets during the pretraining stage and ∼3B tokens during the instruction-finetuning stage.
§.§ Instruction Finetuning
To finetune our model with diversified instructions, we collect an instruction finetuning multi-modal dataset based on the public ones.
Our dataset consists of 50+ text-only, image-text, and video-text tasks mainly belonging to 5 categories: Text-only Instruction-Following, Image/Video Visual Question Answering, Image/Video Captioning, Classification, and Image-conditioned Dialog for Complex Reasoning and Instruction Following.
We also provide the corresponding instructions for all of these tasks (see Appendix Table <ref> for details).
To do so, we manually labeled at least 3 different prompts for each of these tasks, and then invoke GPT4 to automatically generate more based on the following “meta prompt”, i.e., the prompt used to generate prompts for different tasks:
Here are some instructions that define a visual-language task. Continue to write 15 instructions with the same meaning: 1) PROMPT1; 2) PROMPT2; 3) PROMPT3;
Besides, we also collect some available public (visual-)text instruction data (also listed in Table <ref>) to further improve the ability of our model to follow open-ended instructions, including the instruction data used in FlanT5 <cit.>, Alpaca <cit.>, Mini-GPT4 <cit.>, LLAVA <cit.>, and Baize <cit.>.
We follow the same causal prediction loss as in pretraining, i.e., the cross entropy loss to predict the next word based on all previous tokens.
Nevertheless, we observed that different weight combinations of the instruction data have a crucial influence on the final performance.
Empirically, we finally impose the weight strategy presented in Table <ref>.
§ EXPERIMENT
In this section, we aim to answer the following questions according to empirical studies:
a) How can we evaluate the performance of a GPT4-style model? (Section <ref>)
b) Compared to existing models, what are the advantages of our ? (Section <ref>)
c) What matters to train a high-performance GPT4-style model? (Section <ref>)
d) What is the performance of in open-world zero-shot scenarios? (Appendix <ref>)
§.§ Evaluation Protocols
The evaluation of GPT4-style generative language models is challenging because the quality of natural languages is inherently subjective and highly depends on specific cases.
Existing models like PaLM-E <cit.>, PaLI <cit.>, BLIP2 <cit.>, or InstructBLIP <cit.> turn to the evaluation on visual-language benchmarks like image caption <cit.> or visual question answering <cit.>, i.e., fine-tuning multi-modal LLMs on a single downstream task on which the evaluation is conducted.
Nevertheless, though it may achieve better performance, over-finetuning on such benchmarks will damage the generation ability of large language models, which conflicts with the primary motivation to use large language models.
Moreover, such benchmarks, especially the (semi-)automatically generated ones like TDIUC <cit.>, always contain a high ratio of easy or noisy examples, making them less suitable.
On the contrary, other methods like MiniGPT4 <cit.> or LLaVA <cit.> only showcase their performance in some challenging yet practical scenarios without quantitative results due to the lack of quantitative benchmarks for such generative multi-modal language models.
Therefore, in this section, we propose to evaluate the GPT4-style models in the following two aspects:
* A cleaned subset of visual-language benchmark, which should be challenging and compatible with generative models, with prompted GPT4 to get the quantitative results.
* An open-world challenging yet practical test set to evaluate the performance on realistic scenarios where GPT4-style models are needed, with humans to evaluate the user experience.
To do so, we manually collect an Open-VQA test set consisting of 450 samples with image or video input, which contains diverse questions on objects, OCR, counting, reasoning, action recognition, chronological ordering, etc., from VQA 2.0 <cit.>, OCRVQA <cit.>, Place365 <cit.>, MSVD <cit.>, MSRVTT <cit.>, and Something-Something-V2 (SthV2) <cit.>.
Though Place365 is a classification task and SthV2 is a video captioning task, we write proper prompts to make them both VQA tasks.
Besides, we carefully examine the data and modify the questions and ground-truth answers if necessary to make them reliably correct and challenging enough to be a benchmark for GPT4-style models.
Randomly sampled examples are given in Fig. <ref>(a).
Different from the traditional VQA benchmark, Open-VQA supports open-ended answers.
To achieve so, we prompt GPT4 to make it the referee, which achieves a consistency of more than 95% compared with humans[We evaluate the consistency on 100 samples from a randomly selected subset with our model.].
The prompt for GPT4 used in this phase is as follows:
Given the question “QUESTION”, does the answer “PREDICTION” imply the answer “GROUND_TRUTH”? Answer with Yes or No.
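For illustration, the scoring loop over the Open-VQA set might look like the following sketch, where ask_gpt4 is a hypothetical helper that sends a prompt to GPT4 and returns its textual reply; the actual API calls and post-processing are not specified in the text.

REFEREE_TEMPLATE = (
    'Given the question "{question}", does the answer "{prediction}" '
    'imply the answer "{ground_truth}"? Answer with Yes or No.'
)

def open_vqa_accuracy(samples, ask_gpt4):
    # samples: list of dicts with keys "question", "prediction", "ground_truth".
    # ask_gpt4: hypothetical callable mapping a prompt string to GPT4's reply string.
    correct = 0
    for s in samples:
        reply = ask_gpt4(REFEREE_TEMPLATE.format(**s))
        correct += reply.strip().lower().startswith("yes")
    return correct / max(len(samples), 1)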
Moreover, general-purpose language generation with image inputs is also important to multi-modal LLMs.
Therefore, we also adopt the OwlEval test set proposed by mPLUG-owl <cit.>, which contains 82 questions based on 50 images, where 21 from MiniGPT-4 <cit.>, 13 from MM-REACT <cit.>, 9 from BLIP2 <cit.>, 3 from GPT4 <cit.>, and 4 collected by mPLUG-owl itself.
The test set includes diversified and practical cases such as dense image captioning, dialogue writing, story writing, poem writing, teaching, programming, etc.
We give some examples in Fig.<ref>(b).
However, OwlEval is proposed together with mPLUG-owl.
Hence, directly using it as the benchmark is possibly unfair to other models.
To make the comparison fair, we pad each image in the OwlEval with 8 pixels as shown in Fig.<ref>(b) before feeding them into the models.
We recruit human annotators to evaluate the performance.
Scores range from 1 to 5.
If two models are considered to be equally good or bad, they will have the same score.
For each data, the annotator will assign a score for each model.
We only allow at most 2 models that are equally good or bad, and for each annotator, the total number of ties should be no more than 10 for the whole set.
During the evaluation, the correctness has the highest priority, then should be the richness of the generated content.
Finally, we also compare our method with others on the newly proposed MME benchmark <cit.>, which includes 14 different subtasks that evaluate the perception and cognition ability of multi-modal large language models.
§.§ Quantitative Experiments
Open-VQA benchmark We first evaluate our model as well as several existing open-sourced multi-modal LLMs on the Open-VQA benchmark.
Results are shown in Table <ref>.
We can conclude that our model has achieved the best performance both in the image and video understanding tasks.
Notably, InstructBLIP <cit.> also achieves high performance in most cases, even better than our model in OCR, color recognition, and action recognition tasks.
However, we observe that it always outputs one word for the question as shown in Fig.<ref> and <ref>, which is less preferred by most of the users (see Fig.<ref>).
We also showcase some of the examples in Fig. <ref>.
More cases including video VQA examples can be found in Fig. <ref> and <ref> in the appendix.
We can see that our model can give the correct answer in most cases as well as a concise reason that supports the answer, which makes it more user-friendly.
OwlEval benchmark
We evaluate the performances of general-purpose natural language generation on OwlEval test set.
From the human evaluation results in Fig.<ref>, we can see that our model has the best language generation performance while keeping high performance on the Open-VQA benchmark.
BLIP2 <cit.> and InstructBLIP <cit.>, though achieved high performance on the Open-VQA benchmark, are not preferred by human users due to their extremely short outputs, i.e., in most cases, they only output one word or phrase as the answer without any explanation.
In contrast, MiniGPT4 <cit.> and mPLUG-Owl <cit.> are trained less to fit the Open-VQA benchmark and keep more language generation ability.
Hence, they are preferred over the BLIP models, though they may make more factual errors.
We also show some results on the OwlEval in Fig. <ref>.
In general, we observe that if a model has lower accuracy on the Open-VQA benchmark, it tends to make factual errors inconsistent with the given image during text generation.
Nevertheless, models with higher performance on the Open-VQA benchmark usually tend to lose language generation ability, e.g., generate short sentences.
We attribute this conclusion to the under-training or over-training on visual-language tasks.
To be specific, existing training data from visual-language tasks always includes short outputs.
By training on these data, the model can learn to align the visual and linguistic concepts, yet lose the language generation ability inherited from the large language model.
From the high performance of our model, we can see that one possible way to train a high-performance model with better language generation ability is to carefully select and clean the data, as well as design the proper sampling ratios.
Nevertheless, the key to balance language generation and correctness is a high-quality visual-language dataset that contains clean and rich expressions, which should be explored in our future work.
MME benchmark
[Figure] Comparison on the MME benchmark.
We also compare with available existing open-source models on the MME benchmark <cit.>.
Results are shown in Figure <ref> and Appendix <ref>.
We can see that our model is a state-of-the-art model in 7 out of 14 subtasks, especially for the perception tasks including Color, Celebrity, Scene, Landmark, Position, Count, and Existence.
Yet, from the figure, we can also see that our model seems not to perform well on cognition tasks including Code Reasoning, Text Translation, and Numerical.
Notably, cognition benchmarks including Code Reasoning, Text Translation, and Numerical in MME only contain 20 examples, which may cause high variance in the evaluation of different checkpoints.
§.§ Ablation Study
We conduct an in-depth ablation study to investigate the impact of different components or training recipes on multi-modal understanding and language generation performances.
In this section, we follow the same evaluation method proposed in Section <ref>.
LLaMA vs. Vicuna
As shown in Table <ref>, our experiments show that in terms of correctness, the instruction-finetuned backbone (e.g., Vicuna) performs slightly better on our Open-VQA benchmark (as does LLaVA), as shown in Tables <ref> and <ref>, but slightly worse on the OwlEval benchmark (Figure <ref>).
However, the Vicuna-based model does indeed follow instructions better. For example, the average answer length given the instruction “give a short answer” is 15.81, compared to 20.15 for the LLaMA-based model. One can also refer to Figure <ref>(a) for examples comparing their instruction-following ability.
Impact of Diversified Prompts
It has been proved to be important to train LLMs on instruction data so as to make them follow instructions correctly <cit.>.
Therefore, we ablate our model with diversified prompts written by both users and GPT4.
The results in Table <ref> and <ref> show that our prompts help to balance different abilities.
Moreover, we also find that by using diversified prompts, our model can follow the open-ended instructions better than the ones trained without these prompts (Table <ref>).
This observation accords with the text-only models.
The human evaluation results in Figure <ref>(b) also accord with our observations.
Diversified tasks and prompts will help to improve the generalization of the model to new tasks and instructions.
Impact of Training Data
We investigate the impact of data quantity and quality by training our model with or without the large-scale yet noisy image-text pairs (COYO700M <cit.> and DataComp1B <cit.>).
During our experiments, we find training data in both pretraining and finetuning largely influence the model performance.
Different from traditional visual-language pretraining <cit.>, we find that multi-modal LLMs do not benefit from large-scale but noisy image-text pairs because many of the texts in such datasets are not fluent or natural language expressions.
For the generative pretraining in our model, they largely damage the language generation ability as shown in Figure <ref>(b).
As a result, pretraining on such large-scale datasets achieves no better results than only training on a much smaller but cleaner dataset as evaluated by the human users as shown in Figure <ref>(c).
Prefix-Tuning vs. Cross-Attn We follow Flamingo <cit.>, concretely Open-Flamingo <cit.>, to implement the cross-attention method.
Following its original settings, we only use multi-modal instruction data for pre-training.
For the finetuning stage, we experiment with two variants, with or without trainable LLM, i.e., with or without the use of text instruction data.
As shown in Table <ref> and <ref>, both of them perform worse than our prefix-tuning with adapters.
Though the models can generate fluent and relevant responses, their outputs usually do not give correct answers to the questions.
We also verified our conclusion with human annotators, as shown in Figure <ref>(d).
Results show that human users give lower preference to the cross-attention models. Overall, cross-attention models could require more hyper-parameter searching to achieve better performances, and we leave it to further work.
Impact of Larger Image Resolution
We increase image resolution in the first stage with only 10K step training.
After that, we freeze the vision encoder and thus the expense of increasing image resolution is affordable.
For rigor, we also conducted an experiment to verify the impact of image resolutions on the model performance.
The experiment results in Table <ref> and <ref> show that the training on 420x420 resolution achieves better performance than the models only trained on 224x224.
§ RELATED WORK
Large-language models.
Large language models (LLMs) have been widely investigated in recent years due to their good generality on zero-shot tasks, including GPT3 <cit.>, PaLM <cit.>, BLOOM <cit.>, Chinchilla <cit.>, T5 <cit.>, LLaMA <cit.>, OPT <cit.>, GLM <cit.>, etc.
After being pre-trained on massive text corpora, such models can perform surprisingly well on downstream tasks without further finetuning.
In particular, the simple yet efficient structure of decoder-only models like GPT-3 can easily scale up to hundreds of billions of parameters and show an elegant scaling law with the increase of model size and data amounts <cit.>.
Moreover, recent advances in instruction finetuning <cit.> have also shown that large-scale language models can be finetuned with limited amounts of instruction data to follow open-ended instructions in natural language.
This not only improves their performance on downstream tasks substantially but also makes it a user-friendly assistant in our daily life <cit.>.
Centralized Multi-modal Interactive System. Inspired by the achievements in LLMs, it is straightforward to ask a question: Is it possible to design a model that accepts multi-modal inputs while being able to chat with humans in natural language?
Therefore, recent works investigate actively to design of such multi-modal interactive models.
One of the most intuitive ideas, such as Visual ChatGPT <cit.>, MM-REACT <cit.>, HuggingGPT <cit.>, InternGPT <cit.>, SayCan <cit.>, InnerMonologue <cit.>, integrates various existing individual models or tools such as OCR, object detection, image captioning, visual question answering, text-to-image generation, or robot manipulation policies by a centralized controller.
In such a system, the LLM works as a “manager” that directly accepts instructions from users and selects the most appropriate tools to respond to requests while the integrated individual models are “workers” responsible for a specific kind of task.
Typically, such models are powerful to address problems that are already well-defined.
Yet, they, to some extent, lack zero-shot ability when encountering open-ended instructions which cannot be handled by any of their workers.
End-to-end Multi-modal Large Language Models. By contrast, inspired by the recent advances of LLMs, it has also been shown feasible and promising to directly train the neural networks that directly accept multi-modal inputs and output responses end-to-end.
To achieve so, one intuitive idea is to adapt the LLMs to multi-modal inputs by adding some additional trainable parameters and finetuning them on multi-modal data.
For example, Flamingos <cit.> is one of the early works to explore this idea.
Firstly, it takes a vision encoder (like NFNet <cit.> in their original version, or recent CLIP ViT <cit.>) to extract visual embeddings.
Then, it applies multi-layer cross-attention to fuse the multi-modal inputs for the final prediction.
Recent works directly concatenate vision embeddings to the inputs of LLMs and finetune LLMs end-to-end.
To do so, they usually add an additional projection layer to map the vision embeddings to the same dimension as the language embeddings, and then directly feed them into LLMs for further training.
Different methods may take different training strategies.
BLIP2 <cit.> designs a Q-Former, which is the only trainable part, to align the dimensions of vision and language tokens.
PaLM-E <cit.>, which is built upon PaLM <cit.>, is trained totally end-to-end with no fixed layers using a mix of multi-modal datasets including WebLI 10B dataset <cit.>.
Mini-GPT4 <cit.> freezes all weights of the vision encoder and the LLM while only finetuning the weights of the projection layer.
LLAVA <cit.> fixes the vision encoder while keeping the LLMs trainable during the instruction finetuning stage.
mPLUG-owl <cit.> tunes the vision encoder and keeps LLMs fixed to align the vision and language embeddings in the first stage while further tuning the LLMs and keeping the vision encoder fixed in the second instruction-finetuning stage.
KOSMOS-1 <cit.> does not rely on any pretrained LLMs and is trained from scratch on large amounts of mixed data including image-text pairs (COYO700M <cit.>, LAION2B <cit.>, etc.), text corpora (Common Crawl, the Pile <cit.>, etc.), and interleaved image-text data.
These models are all powerful and show promising results to develop multi-modal large language models.
§ DISCUSSIONS AND LIMITATIONS
§.§ Findings and Takeaways
Prefix-tuning has shown better performances than cross-attention methods on multi-modal adaptation for large language models.
As shown in our experiments, prefix-tuning with adapters shows good performance on open-ended instruction-following tasks after training on billions of multi-modal tokens. By contrast, cross-attention models are less efficient at achieving good performance, though more hyper-parameter searching could improve their performance, and we leave this to future work.
Multi-modal LLMs are not as instruction-following as LLMs.
In our experiments, we find that current multi-modal LLMs are not as good at the instruction following as language models.
For example, InstructBLIP <cit.> tends to generate short responses regardless of the input instructions, while other models tend to generate long sentences without considering the instruction like “Give a short answer” or “Answer in one word”.
We assume that this is from the lacking of high-quality and diversified multi-modal instruction data.
The quality of training data is critical to model performance.
As concluded in Section <ref>, based on the experimentation on different pretraining data, we find that a small number of high-quality data with fluent texts can perform even slightly better than the large-scale noisy datasets.
We attribute this to the difference between generative pretraining and contrastive pretraining, since generative pretraining is directly learning the conditional distribution of words but not the similarity between texts and images.
Therefore, to train a high-performance multi-modal LLM, despite the quantity of data, it is crucial to prepare a high-quality dataset that satisfies: 1) it includes high-quality and fluent texts; 2) it aligns the texts and images well.
Tasks and prompts are crucial for zero-shot abilities.
As shown in Section <ref>, diversified prompts have a great impact on the final performance.
The essential observation behind this is that the zero-shot generality of multi-modal language models depends on the diversity of tasks involved during training.
The model can generalize to more and more unseen instructions as it sees more and more types of tasks.
This accords with the observation in text-only models <cit.>.
Balancing the correctness and language generation ability is important.
In our experiments, we find that if the model is under-trained on downstream tasks such as VQA, it will suffer from the problem of hallucination and keep making mistakes.
While if the model is over-trained on downstream tasks, it will not be able to follow the user's instructions to generate long answers.
Therefore, it would be important to carefully balance the training data to train it so as to correctly read images and videos while keeping its generation ability.
§.§ Limitations
Evaluation
It is hard to evaluate a multi-modal large language model since its evaluation is essentially different from traditional visual-language models.
Though we take the first step to quantitatively evaluate both the multi-modal understanding accuracy and language generation ability, it is still an open problem: how can we establish a comprehensive and automatic benchmark to evaluate existing multi-modal large language models?
Training Data
Though we have successfully collected and cleaned a mixed dataset to train our , we still put a lot of effort to balance different abilities (e.g. correctness and language generation, long and short answers).
Moreover, there are still no available image-text datasets that contain long texts which are ideal for pretraining.
Besides, restricted by the computational resources that we can use, we do not conduct extensive experiments to find the optimal data combination strategy (e.g. sampling ratios, tasks, and prompts), which has been left for future work.
Multi-lingual
Our model is built upon LLaMA <cit.>, which is mainly trained on English corpus.
Therefore, our model is not that good at multi-lingual responses.
Though it can understand and sometimes output other languages (like shown in Figure <ref>), it is still unexplored how to build a high-performance multi-lingual and multi-modal large language model.
Safety
Currently, we do not conduct safety checks and restrict the outputs of our model.
Therefore, the model may output contents that are not appropriate and even toxic, depending on and restricted by the data used for training.
The authors do not support the use of harmful language generation using our codes and models, like any usage on ethical, political, and racism issues.
§ CONCLUSIONS
In this paper, we present , a multi-modal GPT4-style large language model that can take as input images/videos and responses with open-ended natural languages.
Through extensive empirical study, we show that our model outperforms other existing open-source models both in multi-modal understanding and language generation.
We also explore different factors that can affect the performance of a multi-modal large language model and conclude that: 1) for network structure, prefix-tuning is better than cross-attention to fuse different modalities; 2) instruction following is closely related to the number of tasks and prompts used for training; 3) the generative pretraining is much more sensitive the quality of training data than previous pretraining methods such as contrastive training; 4) balancing the correctness and language generation is important for multi-modal large language models.
For future work, it is promising to scale up the model to a larger size (e.g. 30B and 65B LLaMA <cit.>), as well as a larger and more diversified set of instructional tasks.
Moreover, a large-scale and high-quality multi-modal dataset is also needed to train such models.
Therefore, it is worth the effort to collect such a dataset, which will be a great contribution to this area.
Multi-lingual ability and safety are also undoubtedly crucial for realistic applications.
§ ACKNOWLEDGEMENTS
We would like to acknowledge Hang Li at ByteDance for his generous assistance in insightful comments in technical discussions.
Additionally, we extend our appreciation to the colleagues at ByteDance for their efforts and support of this project. We are also thankful to the LLaMA and Vicuna teams for granting us access to their models.
unsrtnat
§ EXPERIMENTAL DETAILS
§.§ Training Details
We use the DeepSpeed <cit.> to accelerate training, and set the BFloat16 as the default model precision.
We report the detailed model training hyperparameters in Table <ref>.
§.§ Hyper-parameters for Generation
During the deployment of all models, we find that for most of them, the performance would be better if we apply a description-first strategy.
That is, before sending the request from the user, by default, we feed a fixed prompt “Describe the image in detail” first in the “0th” round of the conversation.
After that, the user's instructions will be sequentially processed.
Nevertheless, we found that the quality of generated texts by MiniGPT4 using this description-first strategy is worse than the ones directly generated.
Therefore, for MiniGPT4 <cit.>, we generated the response with its default settings.
Similarly, for mPLUG-owl <cit.>, we follow the default parameters presented at http://vlarena.opengvlab.com/http://vlarena.opengvlab.com/.
Detailed settings can be found in <ref> for different tasks.
§ MME PERFORMANCE
§ CASE STUDY
§.§ Image VQA Cases
§.§ Video VQA Cases
§ TRAINING DATA
§.§ Data & Tasks
§.§ Prompt Examples
§ OWLEVAL CASES
§ OPEN DEMONSTRATIONS
§.§ Multi-turn Dialog
§.§ Multi-lingual Response
§.§ Instruction-following Ability
We also demonstrate the instruction-following ability of different models.
We can see that both and mPLUG-owl can follow instructions correctly to some extent.
Yet, InstructBLIP is not sensitive to different instructions.
|
http://arxiv.org/abs/2307.01434v1
|
20230704015607
|
Learning to Branch in Combinatorial Optimization with Graph Pointer Networks
|
[
"Rui Wang",
"Zhiming Zhou",
"Tao Zhang",
"Ling Wang",
"Xin Xu",
"Xiangke Liao",
"Kaiwen Li"
] |
cs.LG
|
[
"cs.LG",
"cs.NE",
"math.CO"
] |
Learning to Branch in Combinatorial Optimization with Graph Pointer Networks
Rui Wang, Senior Member, IEEE,
Zhiming Zhou,
Tao Zhang,
Ling Wang,
Xin Xu,
Xiangke Liao,
Kaiwen Li
This paper is partially supported by the National Science Fund for Outstanding Young Scholars (62122093) and the National Natural Science Foundation of China (No. 72071205).
Rui Wang, Kaiwen Li (corresponding author), Tao Zhang are with the College of Systems Engineering, National University of Defense Technology, Changsha 410073, PR China, and also with the Hunan Key Laboratory of Multi-Energy System Intelligent Interconnection Technology, HKL-MSI2T,Changsha 410073, PR China. (e-mail: [email protected], [email protected], [email protected]); Zhiming Zhou is with Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, P. R. China. ([email protected]);
Xin Xu is with the College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, PR China;
Ling Wang is with Department of Automation, Tsinghua University, Beijing, 100084, P. R. China. ([email protected]);
Xiangke Liao is with the College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, PR China.
Manuscript received March 19, 2022; revised March 26, 2021.
August 1, 2023
Branch-and-bound is a typical way to solve combinatorial optimization problems. This paper proposes a graph pointer network model for learning the variable selection policy in the branch-and-bound. We extract the graph features, global features and historical features to represent the solver state. The proposed model, which combines the graph neural network and the pointer mechanism, can effectively map from the solver state to the branching variable decisions. The model is trained to imitate the classic strong branching expert rule by a designed top-k Kullback-Leibler divergence loss function. Experiments on a series of benchmark problems demonstrate that the proposed approach significantly outperforms the widely used expert-designed branching rules. Our approach also outperforms the state-of-the-art machine-learning-based branch-and-bound methods in terms of solving speed and search tree size on all the test instances. In addition, the model can generalize to unseen instances and scale to larger instances.
Branch-and-bound, Deep learning, Graph neural network, Imitation learning, Combinatorial optimization.
§ INTRODUCTION
Combinatorial optimization explores discrete decision spaces to find the optimal solution within acceptable execution time. Combinatorial optimization problems arise in diverse real-world domains such as manufacturing, telecommunications, transportation and various types of planning problem <cit.>. Such problems can be immensely difficult to solve, since it is computationally impractical to find the best combination of the discrete variables through exhaustive enumeration. Actually, most of the NP-hard problems in the mathematical and operational research fields are typical examples of combinatorial optimization, such as the Traveling Salesman Problem (TSP), Maximum Independent Set <cit.>, Graph Coloring <cit.>, Boolean Satisfiability <cit.>, etc.
A vast number of approaches have been proposed in recent years to tackle combinatorial optimization challenges. They can be basically divided into the following categories: exact algorithms, approximation algorithms and heuristics. Exact algorithms are algorithms that can always find the optimal solution to a combinatorial optimization problem. A naive way is to search all possible solutions through enumeration, which, however, costs intractable solving time. Some advanced techniques have been proposed, such as branch-and-bound, to efficiently prune the search space. Approximation algorithms, in some cases, can solve an optimization problem in polynomial time, and can provide a theoretically guaranteed bound on the ratio between the obtained solution and the optimal one. However, such algorithms may not exist for all real-world combinatorial optimization problems. Heuristics provide no guarantees for the solution quality, but are faster than the above approaches. Hard-won expertise and trial-and-error efforts are often required to design the heuristics.
As exact algorithms can always solve an problem to optimality, and no problem-specific heuristic requires to be hand-crafted, modern optimization solvers generally employ exact algorithms, typically the branch-and-bound (B&B) approach, to solve the combinatorial optimization problems, which can be formulated as mixed-integer linear programs (MILPs). B&B solves general MILPs in a divide-and-conquer manner. B&B <cit.> recursively splits the search space of the problem into smaller regions in a tree structure, where each node represents the subproblem that searches subsets of the solution set. Subtrees can be pruned once it provably cannot produce better solutions than the current best solution; otherwise, the subtree is further partitioned into subproblems until an integral solution is found or the subproblem is infeasible. In this solving process, there are several decision-making problems that should be considered to improve the performance: node selection problem, i.e., which node/subproblem should we select to process next given a set of leaf nodes in the search tree?; variable selection problem (a.k.a. branching), i.e., which variable should we branch on to partition the current node/subproblem?
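To make these two decision points concrete, a schematic and deliberately simplified Python sketch of B&B for a minimization MILP is given below; select_node and select_variable are the pluggable policies in question, while solve_lp, with_bound, value and objective are hypothetical interfaces standing in for an LP solver and a node representation, not calls to any specific library.

import math

def branch_and_bound(root, select_node, select_variable, solve_lp):
    # Schematic B&B loop with the node-selection and variable-selection decisions exposed.
    best_obj, best_sol = math.inf, None
    open_nodes = [root]
    while open_nodes:
        node = select_node(open_nodes)             # node selection decision
        open_nodes.remove(node)
        lp = solve_lp(node)                        # LP relaxation of the subproblem
        if lp is None or lp.objective >= best_obj:
            continue                               # infeasible, or pruned by bound
        frac = [j for j in node.integer_vars
                if abs(lp.value(j) - round(lp.value(j))) > 1e-6]
        if not frac:                               # integral solution improves the incumbent
            best_obj, best_sol = lp.objective, lp
            continue
        j = select_variable(node, lp, frac)        # branching (variable selection) decision
        v = lp.value(j)
        open_nodes.append(node.with_bound(j, upper=math.floor(v)))   # left child:  x_j <= floor(v)
        open_nodes.append(node.with_bound(j, lower=math.ceil(v)))    # right child: x_j >= ceil(v)
    return best_obj, best_sol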
For a long time, such decisions have been made according to heuristics carefully designed for a specific type of MILP instances. A lot of design and trial-and-error effort is required to hand-craft these hard-coded expert heuristics. In recent years, with the development of artificial intelligence, more attention has been drawn to learning the heuristics with machine learning models instead of designing them by experts. This idea makes sense since heuristics are typically formed by a set of rules, which can possibly be parameterized by models such as deep neural networks. Such learning-based approaches have been investigated in recent years <cit.>. However, this line of work still raises the following challenges: how to extract effective features to represent the current state of the B&B process, based on which the branching decision is made; and how to design effective models to map from the B&B state to the branching decision.
In this paper we propose a graph pointer network model to address the above challenges. In specific, we focus on the branching problem, i.e., which variable to branch on. Instead of designing the branching heuristics manually for each problem type, we propose to learn the branching heuristics automatically by a novel model to reduce the solving time of MILPs. We achieve this by using imitation learning to approximate the strong branching branching rule, which is empirically effective but computationally expensive. Though this idea is not new <cit.>, we improve the performance of the learning model in a novel way. The contributions are as follows:
* In addition to graph features, we design the global and historical features to represent the solver state. The extracted features can provide a richer representation for the problem state.
* We develop a new model that combines the graph neural network and the pointer mechanism. Graph neural network is used to encode the graph features, and the pointer mechanism is used to incorporate the global and historical features to output the variable index.
* A top-k Kullback-Leibler divergence loss function is designed to train the model to imitate the expert branching rules.
* The proposed approach can outperform expert-designed branching rules and state-of-the-art machine learning methods on all the test problems.
* Once trained, the model can generalize to unseen larger instances.
§ RELATED WORK
Recent years have seen a surge of interest in applying artificial intelligence methods to combinatorial optimization.
Vinyals et al. <cit.> developed the pointer network model for solving small-scale combinatorial optimization problems such as the traveling salesman problem (TSP). It borrowed the idea of the sequence-to-sequence models widely used in machine translation, and used the attention mechanism to map the input sequence to the output sequence. This work inspired a number of subsequent studies that use machine/deep learning methods for combinatorial optimization.
Most of the current works focus on solving combinatorial optimization problems in an end-to-end manner. Bello et al. <cit.> first proposed a deep reinforcement learning (DRL) method to optimize the pointer network model, which can output the solution sequence directly. Nazari et al. <cit.> investigated the vehicle routing problem (VRP) by modifying the pointer network and the attention mechanism. Khalil et al. <cit.> developed a structure2vec graph neural network (GNN) model for combinatorial optimization; the GNN model can encode the graph features of the problem and aid the decisions. Other works <cit.> explored advanced GNN models such as graph convolutional networks (GCNs) and diverse training methods to solve combinatorial optimization problems more effectively. Moreover, the authors in <cit.> improved the attention mechanism of the pointer network by leveraging recent advances of the well-known Transformer model <cit.> in sequence-to-sequence learning. The attention model developed by Kool et al. <cit.> achieved the state-of-the-art performance among the above approaches; it can solve a number of combinatorial optimization problems, such as the TSP, the VRP, and the Orienteering Problem. In addition, Li et al. <cit.> extended this line of work to a multiobjective version.
Regarding the use of artificial intelligence methods to improve the B&B algorithm for combinatorial optimization, Bengio et al. <cit.> provide a thorough survey of this line of work. He et al. <cit.> developed a DAgger model to learn the node selection strategy by imitation learning. In contrast, Khalil et al. <cit.> focused on the variable selection problem and developed a machine learning model to mimic the classic strong branching strategy: extensive features of the candidate branching variables are carefully extracted as input to the model, which is trained by minimizing the difference between the predicted branching decisions and the decisions made by strong branching. Moreover, Gasse et al. <cit.> developed a novel GCN model for learning the variable selection strategy. They exploited the variable-constraint bipartite graph structure of mixed-integer linear programs (MIPs) and encoded the branching strategy into a graph neural network, trained to mimic the strong branching policy by imitation learning. Following this work, Gupta et al. <cit.> designed a hybrid model that uses the above GCN model at the root node and a weaker but faster model at the remaining nodes; it has weaker predictive performance but an overall faster solving speed due to its lower computational cost. In addition, Nair et al. <cit.> proposed two models, Neural Diving and Neural Branching, to enhance a traditional MIP solver: Neural Diving predicts partial assignments of the integer variables, which results in smaller MIPs, while Neural Branching learns a neural-network-based variable selection policy that reduces the overall solving time.
The remainder of the paper is organized as follows. pre introduces the preliminaries of the work. The proposed graph pointer network model is described in s:model. sec:policy-learning outlines the imitation learning method for optimizing the model parameters. The experiment setup and numerical results are presented in sec:experiments. The last section gives some concluding remarks and future perspectives.
§ PRELIMINARIES
§.§ Problem Definition
Mixed Integer Linear Program
A combinatorial optimization problem can always be modeled as a mixed integer linear programming problem (MILP) of the form:
min 𝐜^⊤ 𝐱
s.t. 𝐀 𝐱≤𝐛
𝐥≤𝐱≤𝐮
x_i∈ℤ, i ∈ℐ
𝐜∈ℝ^n, 𝐀∈ℝ^m × n, 𝐛∈ℝ^m, 𝐥, 𝐮∈ℝ^n,
where the aim is to find an optimal assignment of 𝐱 that minimizes the objective function, with 𝐜 the objective coefficient vector. There are m constraints and n decision variables. A subset of the decision variables is integer, and ℐ⊆{1, …, n} is their index set. 𝐀 and 𝐛 are the coefficient matrix and the right-hand-side vector of the constraints, and 𝐥, 𝐮 bound the decision variables.
LP relaxation of a MILP
An MILP can be relaxed to a linear program (LP) by dropping all the integrality constraints. For the minimization form above, the solution obtained by solving the LP relaxation of (<ref>) provides a lower bound to (<ref>).
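To make the notation concrete, the sketch below builds the data of a toy instance of (<ref>) and solves its LP relaxation with SciPy's linprog; the specific numbers are invented purely for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Toy instance of (1): min c^T x  s.t.  A x <= b,  l <= x <= u,  x_i integer for i in I.
    c = np.array([-5.0, -4.0])            # objective coefficients (negative entries are allowed)
    A = np.array([[6.0, 4.0],
                  [1.0, 2.0]])            # constraint matrix (m = 2, n = 2)
    b = np.array([24.0, 6.0])             # right-hand side
    bounds = [(0.0, None), (0.0, None)]   # l <= x <= u (no upper bound here)
    I = [0, 1]                            # index set of the integer variables

    # LP relaxation: drop the integrality constraints and solve the remaining LP.
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    print("LP lower bound:", res.fun)     # -21.0, attained at x = (3.0, 1.5)
    print("fractional integer variables:",
          [i for i in I if abs(res.x[i] - round(res.x[i])) > 1e-6])   # [1]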
Branch-and-bound
Branch-and-bound begins by solving the LP relaxation of the original MILP. The obtained solution 𝐱^* provides a lower bound to the problem. If it satisfies all the MILP integrality constraints, it is the optimal solution to (<ref>) and the algorithm terminates. If not, the relaxation is further partitioned into two subproblems by branching on an integer variable that violates integrality. This is done by adding one of the following two constraints to the LP relaxation, respectively <cit.>:
x_i≤⌊ x_i^⋆⌋, x_i≥⌈ x_i^⋆⌉, ∃ i ∈ℐ| x_i^⋆∉ℤ,
where ⌊ x_i^⋆⌋ is the largest integer value not exceeding x_i^⋆, and ⌈ x_i^⋆⌉ is the smallest integer value not below x_i^⋆. Here i is called the branching variable.
By branching on i, two new LPs are constructed, which become the leaf nodes/subproblems of the search tree. The next step is to pick one leaf node and repeat the above steps. Once a feasible solution x̂ is found, i.e., one satisfying all the MILP integrality constraints, it provides an upper bound to the problem; if a later solution has a lower objective value than x̂, the upper bound is updated. On the other hand, if a subproblem's relaxation value is worse than the current upper bound, the subproblem is pruned and not branched further. A subproblem is also fathomed if its solution is integral or its LP is infeasible. The above procedure repeats until no subproblems remain, and the incumbent solution with the best bound is returned <cit.>.
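A minimal sketch of this loop, reusing the LP-relaxation data above and simply branching on the first fractional integer variable, is given below; it is meant only to make the control flow concrete, not to reflect how a production solver organizes its search.

    import math
    from scipy.optimize import linprog

    def branch_and_bound(c, A, b, bounds, I, tol=1e-6):
        """Minimal B&B for min c^T x, A x <= b, l <= x <= u, x_i integer for i in I."""
        best_obj, best_x = math.inf, None        # incumbent (global upper bound)
        queue = [list(bounds)]                   # a node is identified by its variable bounds
        while queue:
            node_bounds = queue.pop()
            res = linprog(c, A_ub=A, b_ub=b, bounds=node_bounds, method="highs")
            if not res.success or res.fun >= best_obj:
                continue                         # infeasible node, or pruned by bound
            frac = [i for i in I if abs(res.x[i] - round(res.x[i])) > tol]
            if not frac:                         # integral solution: update the incumbent
                best_obj, best_x = res.fun, res.x
                continue
            i = frac[0]                          # branching variable (a naive choice here)
            lo, hi = node_bounds[i]
            down, up = list(node_bounds), list(node_bounds)
            down[i] = (lo, math.floor(res.x[i]))   # child with x_i <= floor(x_i*)
            up[i] = (math.ceil(res.x[i]), hi)      # child with x_i >= ceil(x_i*)
            queue.extend([down, up])
        return best_obj, best_x

    # For the toy data above: branch_and_bound(c, A, b, bounds, I) -> (-20.0, x = (4, 0)).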
§.§ Branching strategies
In the branching variable selection process, an integer variable i is selected from the candidate set 𝒞={i | x_i^⋆∉ℤ, i ∈ℐ} of variables that violate the integrality constraints. Existing methods usually score each candidate variable in 𝒞 according to some handcrafted heuristic, and the variable with the largest score is selected for branching. The most commonly used scoring criterion is the change in the lower bound of the subproblem after branching on a variable. Based on this criterion, a series of branching rules have been designed to improve the efficiency of B&B.
Strong branching (SB) is an effective but expensive scoring heuristic, which has been found empirically to produce the smallest B&B search trees compared with other heuristics <cit.>. The SB rule explicitly measures the bound changes of the two subproblems in order to select the best branching variable, and is computed as follows. For the LP subproblem corresponding to the current node N, let 𝐱^* be its LP solution and z^* the corresponding objective value. By branching on variable i, two LP subproblems N_i^- and N_i^+ are obtained, with objective values z_i^*- and z_i^*+; if N_i^- or N_i^+ is infeasible, the corresponding value is set to a very large number. The change of the objective value after branching on variable i is therefore Δ_i^-=z_i^*--z^* and Δ_i^+=z_i^*+-z^*. The SB score is calculated as <cit.>:
SB_i=score(max{Δ_i^-, ϵ}, max{Δ_i^+, ϵ})
where the product function is usually taken as the scoring function, i.e., score(a,b)=a× b. The SB rule computes the SB scores of all the candidate variables in 𝒞 and selects the variable with the largest score to branch on. Each branching decision requires a long computation time, since each SB score requires solving two LP subproblems; hence the SB-based B&B algorithm usually suffers from a heavy computational burden even though SB greatly reduces the search tree size.
In view of the heavy computational burden of the SB method, computing a pseudocost instead of the SB score is another approach commonly used in modern optimization solvers. Pseudocost branching (PB) estimates the score of a variable from its historical scores during the previous search: instead of solving the two subproblems obtained by branching on i, the downward (upward) score of variable i is the average objective value change observed when branching downward (upward) on variable i in the previous branching steps. This greatly shortens the computation time. Denoting the downward and upward average scores of variable i by Ψ_i^- and Ψ_i^+, PC is calculated as <cit.>:
P C_i=score((x_i^*-⌊ x_i^* ⌋) Ψ_i^-, (⌈ x_i^* ⌉-x_i^* ) Ψ_i^+)
where x_i^*-⌊ x_i^* ⌋ and ⌈ x_i^* ⌉-x_i^* are the fractional parts of the variable value in the downward and upward directions. The PC method effectively reduces the computation time of each branching decision. However, the resulting search tree is much larger than the one obtained by SB, since in the early stage of the search there is not enough historical data to estimate the variable scores, which leads to poor branching decisions. In view of the pros and cons of SB and PC, the reliability branching (RB) method applies SB at the beginning of the search until enough historical data has been accumulated, and then applies PB in the subsequent process.
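As a sketch, the two scores can be written as follows, where solve_lp is an assumed helper returning the optimal objective value of a node's LP after adding one extra bound (or +inf if infeasible), and psi_minus/psi_plus hold the running pseudocost averages maintained by the solver; all names are illustrative.

    import math

    def sb_score(solve_lp, node, z_star, x_star, i, eps=1e-6):
        """Strong branching score of candidate variable i at the current node."""
        z_minus = solve_lp(node, bound=("<=", i, math.floor(x_star[i])))  # down child
        z_plus = solve_lp(node, bound=(">=", i, math.ceil(x_star[i])))    # up child
        delta_minus, delta_plus = z_minus - z_star, z_plus - z_star
        return max(delta_minus, eps) * max(delta_plus, eps)               # product scoring rule

    def pc_score(psi_minus, psi_plus, x_star, i):
        """Pseudocost score of candidate variable i from historical objective gains."""
        f_down = x_star[i] - math.floor(x_star[i])    # fractional part in the downward direction
        f_up = math.ceil(x_star[i]) - x_star[i]       # fractional part in the upward direction
        return (f_down * psi_minus[i]) * (f_up * psi_plus[i])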
There is thus a trade-off between the quality of each branching decision and the time spent making it.
In this study, we use a deep learning model to imitate the SB heuristic, which performs well but is expensive, so as to reduce the computational burden.
§ MODEL
We design a graph pointer network (GPN) model to mimic the above-mentioned SB strategy. The input of the model is the current state of the solver, and the output is the variable selection decision. We first formulate B&B as a Markov decision process: at each step, the model perceives the current state, selects a variable, and the state of the solver changes accordingly. We then define the state of the solver, consisting of graph-structured, global and historical features. Finally, a graph pointer neural network model is designed according to this state definition; it perceives the current state of the solver and makes the branching decisions.
§.§ Markov decision process modeling
B&B can be modeled as a Markov decision process <cit.>, as shown in fig:bb1.
At each decision step t, the current state of the solver is 𝐬_t, which represents the state of the current search tree. Based on the current state of the solver 𝐬_t, the agent selects a variable a_t = i from the candidate set 𝒞={i | x_i^⋆∉ℤ, i ∈ℐ} according to the strategy π(𝐚_t | 𝐬_t).
The solver solves the two LP subproblems obtained by branching on variable i, then updates the upper and lower bounds, prunes the search tree, and selects the next leaf node to process. At this point the solver has moved to a new state 𝐬_t+1, and the branching strategy π(𝐚_t+1 | 𝐬_t+1) is applied again to make the next decision. This process loops until all leaf nodes have been explored.
The initial state of the Markov decision process corresponds to the root node of the B&B search tree. And the final state is the end of the optimization process, i.e., all leaf nodes cannot be branched further. Denote the branching strategy as π, the Markov decision process can be modeled as <cit.>:
p_π(τ) = p(𝐬_0) ∏_t=0^T-1∑_𝐚∈𝒜(𝐬_t)π(𝐚 | 𝐬_t) p(𝐬_t+1 | 𝐬_t, 𝐚).
In this paper, we learn a branching strategy π that imitates the SB rule through the following steps: 1) Define the problem state s_t. At each branching step, the decision must be made according to the current problem state; however, there is no standardized definition of the solver state, so effective features must be extracted to represent it well and support good decisions. 2) Parameterize the branching strategy π with a novel model. The model should correctly map the problem state s_t to the branching action a_t; models such as neural networks, random forests and support vector machines need to be designed according to the characteristics of B&B. 3) Optimize the parameters of the model with an effective training algorithm. The model π can be learned through a variety of machine learning methods so as to minimize the size of the search tree or reduce the total running time of the B&B algorithm.
The proposed deep-learning-based B&B method consists of the above three parts, which are introduced in the following sections.
§.§ State Definition
We first define the state s_t of B&B at decision step t. In addition to the graph features introduced in <cit.>, we further design global and historical features of the problem, which provide a more thorough representation of the solver state. Therefore, s_t is composed of variable features, constraint features, edge features, global features, and historical features, namely s_t = (V,C,E,G,H).
The graph features (V,C,E) of the problem are defined by the bipartite graph of the current solver state, as shown in fig:bb2. The bipartite graph is composed of m constraints and n variables: the variables x_1, x_2, ⋯, x_n lie on one side of the graph and the constraints, with their right-hand-side constants, on the other side. An edge (i,j) ∈ E connects variable i and constraint j, i.e., it indicates that constraint j involves variable i, and the weight of the edge is the coefficient of variable i in constraint j.
According to the bipartite graph structure, we define the solver state which is composed of variable features, constraint features, edge features, global features, and historical features:
(1) Variable features represent the attributes of the candidate variables at branching step t, including the variable type, the variable coefficient, the current value of the variable, whether the current value lies at its bound, the fractional part of the variable's LP value, etc. There are n candidate variables in total and the feature dimension is d, so the variable features have dimension n × d. The variable features are detailed in tab:var.
(2) Constraint features represent the attributes of the LP constraints at branching step t, such as the right-hand-side value of the constraint, whether the left-hand side exactly reaches its bound, the similarity between the constraint coefficients and the objective coefficients, etc. The current LP problem has m constraints in total and the feature dimension is c, so the constraint features have dimension m × c; a detailed description is given in tab:cons.
(3) Edge feature is the coefficient of each variable in each constraint. Therefore, there are m × n edges in total, and the feature dimension is 1. The coefficient value is 0 if the constraint does not contain a certain variable.
(4) Global feature G represents the global state of the solver, such as the current optimality gap of the problem, the gap between the objective value of the current node and the global upper/lower bounds, the depth of the current search tree, the depth of the current node, etc. We design and extract the global features using the API of PySCIPOpt, the Python interface to the open-source SCIP solver. The detailed global features are listed in tab:global.
G mainly includes two parts: 1) global features of the whole MILP, including the gap between the upper and lower bounds of the current stage of MILP, the number of feasible solutions/infeasible solutions, etc.; 2) global features of the current LP sub-problem node, including the depth of the current node, the LP objective value information of the current node, etc.
The depth of the current node and the gap between the upper and lower bounds can be directly obtained by calling the PySCIPOpt interface. The number of feasible/infeasible solutions is computed by the proportion of leaf nodes that produce feasible/infeasible solutions:
P_feasible = N_feasible/max (N_leaves,0.1)
The gap between the current node's LP objective value and the global upper/lower bounds gap is calculated by the following formula according to <cit.>:
gap(x, y)= {[ 0 , if x · y<0; |x-y|/max{|x|,|y|, 1 × 10^-10} , otherwise ].
where the current node's LP objective value and the global upper/lower bounds are obtained from the PySCIPOpt interface.
The relative position pos of the current node's LP objective value to the global upper/lower bounds is computed by <cit.>:
pos(z,x,y) = |x-z|/|x-y|.
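Both quantities translate directly into the small helpers below (a transcription of the two formulas above; the guard in pos is an added safeguard against division by zero).

    def gap(x, y):
        """Relative gap between two objective values, as defined above."""
        if x * y < 0:                        # opposite signs: the gap is set to 0
            return 0.0
        return abs(x - y) / max(abs(x), abs(y), 1e-10)

    def pos(z, x, y):
        """Relative position of the node LP value z between the bounds x and y."""
        return abs(x - z) / max(abs(x - y), 1e-10)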
(5) Historical feature consists of two parts. The first part comprises the features of the past branching decisions 𝒞_1 = {a_1, ⋯, a_t-1} made at the previous steps 1, ⋯, t-1. The second part comprises the features of the variables 𝒞_2 = {x_1, x_2, ⋯} whose values have changed when generating the current node, that is, the set of variables whose values changed in the solution of the new subproblem after a branching constraint was added to its parent problem.
Traditional approaches only consider variable features, constraint features and edge features <cit.>. This work further extracts global and historical features so as to obtain a richer representation of the environment state s_t. The global status of the current search tree and of the current node provides additional information for making the branching decisions. Moreover, observing the variables whose values changed when generating the current node, as well as the variables selected during the previous branching steps, also provides useful information. Therefore, adding the global and historical features is expected to better describe the state of the current problem.
§.§ Graph Pointer Network Model
In this section, we propose a graph pointer network (GPN), similar in spirit to <cit.>, that combines a graph neural network with a pointer mechanism to model the branching policy, mapping the solver state to branching decisions effectively.
From the features extracted in the previous section, it can be seen that the solver state has a bipartite graph structure, i.e., the variable nodes and the constraint nodes are connected by edges, as shown in fig:bb2. Graph neural networks can effectively process graph-structured information and have been successfully applied to various machine learning tasks with graph-structured input, such as social networks and citation networks. Therefore, we encode the graph structure of the solver state with a graph neural network model.
In addition, we take the global and historical features as a query and compute attention values over the candidate variables, which are then normalized by a softmax into a probability distribution acting as a pointer to the input sequence. The variable with the largest probability is selected as the branching variable.
The proposed graph pointer neural network model is composed of two parts: 1) the graph neural network calculates the feature vector for each variable based on variable features, constraint features and edge features; 2) the pointer mechanism outputs the variable selection probabilities by computing the attention values according to variables' feature vectors and the query which is constructed by the global and historical features. The detailed process of modeling the branching policy is as follows.
(1) Initial embedding calculation
Variable features, constraint features, edge features, and global features have different dimensions; for example, the variable features are 13-dimensional and the global features are 9-dimensional. Therefore, we first compute d_h-dimensional embeddings of the variable features 𝐱_v, constraint features 𝐱_c, edge features 𝐱_e and global features 𝐱_g:
[ 𝐱_v←EMBEDDING(𝐱_v); 𝐱_c←EMBEDDING(𝐱_c); 𝐱_e←EMBEDDING(𝐱_e); 𝐱_g←EMBEDDING(𝐱_g) ]
where EMBEDDING(·) is a two-layer fully connected neural network. The hidden dimension is d_h and the activation function between layers is LeakyRELU:
LeakyRELU (x)={[ x, if x ≥ 0; 10^-2× x, otherwise ].
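A minimal PyTorch sketch of these embedding layers is shown below, with d_h = 64 as used in the experiments; the constraint-feature dimension is an assumption, since only the variable (13) and global (9) dimensions are stated above.

    import torch.nn as nn

    def make_embedding(in_dim, d_h=64):
        """Two-layer fully connected embedding with LeakyReLU activations."""
        return nn.Sequential(
            nn.Linear(in_dim, d_h), nn.LeakyReLU(negative_slope=1e-2),
            nn.Linear(d_h, d_h), nn.LeakyReLU(negative_slope=1e-2),
        )

    var_embed = make_embedding(13)    # variable features (13-dimensional, as stated above)
    cons_embed = make_embedding(5)    # constraint features (dimension assumed for illustration)
    edge_embed = make_embedding(1)    # edge features (one coefficient per edge)
    glob_embed = make_embedding(9)    # global features (9-dimensional, as stated above)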
(2) Graph Neural Network
Next, we compute the final variable features by a graph convolution neural network similar to <cit.>:
[ 𝐱_c^i ←𝐟_𝒞(𝐱_c^i, ∑_(i, j) ∈ E𝐠_𝒞(𝐱_c^i, 𝐱_v^j, 𝐱_e^i,j)); 𝐱_v^j ←𝐟_𝒱(𝐱_v^j, ∑_(i, j) ∈ E𝐠_𝒱(𝐱_v^j, 𝐱_c^i, 𝐱_e^i,j)) ]
Function 𝐠(·) is defined as:
g(𝐱_c^i, 𝐱_v^j, 𝐱_e^i,j) = MLP(𝐱_c^i + 𝐱_v^j + 𝐱_e^i,j)
where MLP is a two-layer fully connected neural network with LeakyReLU activation. The function 𝐟(·) is also a two-layer fully connected neural network with LeakyReLU activation. As shown in eq:graph, the graph embedding is computed by two successive convolution passes, one from variables to constraints and one from constraints to variables. The first pass computes the features 𝐱_c^i of constraint i from the features 𝐱_v^j of its connected variables j, the edge features 𝐱_e^i,j and its own features. The second pass computes the embedding 𝐱_v^j of variable j from the updated features 𝐱_c^i of its connected constraints i, the edge features 𝐱_e^i,j and its own features. Through this graph convolution process, the final variable features aggregate the original variable, constraint and coefficient features of the problem, and thus effectively capture the graph information of the MILP state.
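A sketch of one such half-convolution in PyTorch is given below; edge_index is assumed to hold the endpoint indices of the edges in E, and the two-argument function 𝐟 is realized here by concatenating a node's own embedding with its aggregated messages.

    import torch
    import torch.nn as nn

    class BipartiteConv(nn.Module):
        """One convolution pass of eq:graph: messages from one side of the bipartite graph to the other."""
        def __init__(self, d_h=64):
            super().__init__()
            self.g = nn.Sequential(nn.Linear(d_h, d_h), nn.LeakyReLU(1e-2), nn.Linear(d_h, d_h))
            self.f = nn.Sequential(nn.Linear(2 * d_h, d_h), nn.LeakyReLU(1e-2), nn.Linear(d_h, d_h))

        def forward(self, x_src, x_dst, x_edge, edge_index):
            src, dst = edge_index                                  # sender / receiver node indices per edge
            msg = self.g(x_src[src] + x_dst[dst] + x_edge)         # g applied to (sender, receiver, edge)
            agg = torch.zeros_like(x_dst).index_add_(0, dst, msg)  # sum the messages of each receiver
            return self.f(torch.cat([x_dst, agg], dim=-1))         # f applied to (own features, aggregate)

    # Two successive passes, as in eq:graph:
    # x_c = conv_vc(x_v, x_c, x_e, (edge_var_idx, edge_cons_idx))   # variables -> constraints
    # x_v = conv_cv(x_c, x_v, x_e, (edge_cons_idx, edge_var_idx))   # constraints -> variables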
(3) Historical feature calculation
At branching step t, the first part of the historical features is the past branching decisions 𝒞_1 = a_1 ⋯ a_t-1 at steps 1 ⋯ t-1. We compute this part of d_h-dimensional historical features as:
𝐱_h1^t = MLP(1/t-1∑_i=1^t-1𝐱_v^a_i)
where MLP is a single-layer fully connected neural network layer, and a_i is the variable selected by the solver at step i.
The second part of the historical features is the variable set 𝒞_2 whose values changed when generating the current node. The same operation is performed on 𝒞_2 to obtain the d_h-dimensional vector 𝐱_h2^t. In addition, 𝐱_h1^t and 𝐱_h2^t are zero vectors if t=0.
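Both parts can be computed with the same small helper, sketched below, where idx is either the set of past branching decisions 𝒞_1 or the set of changed variables 𝒞_2, and mlp stands for the single-layer network in the equation above.

    import torch

    def history_embedding(x_vars, idx, mlp, d_h=64):
        """Mean-pool the embeddings of the variables in idx and project them; zero vector if idx is empty."""
        if len(idx) == 0:                 # t = 0, or no variable changed
            return torch.zeros(d_h)
        return mlp(x_vars[list(idx)].mean(dim=0))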
(4) Pointer Mechanism
We compute the attention value as a pointer to the candidate variables. The attention value is computed by a compatibility function of the query with the key.
The query, which is composed of global features and historical features, represents the current state of the solver. The key represents the feature of each candidate variable. In specific, the query vector is calculated as the weighted average of global and historical features:
𝐪_𝐭 = w_1*𝐱_g^t + w_2* 𝐱_h1^t + w_3 *𝐱_h2^t
where w_1, w_2, w_3 are weight values optimized during training. Moreover, the key of variable i is defined as k_i = W_k 𝐱_v^i, i ∈𝒞, a linear projection of the variable features. Denoting the query at branching step t by 𝐪_𝐭 and the keys of the candidate variables by 𝐤_𝐢, i ∈𝒞, one has:
u_i^t =W_3 (W_1 k_i+W_2 q_t) i ∈(1, …, n)
a_i^t =softmax(u_i^t) i ∈(1, …, n)
where u_i^t is the attention value computed by the compatibility function; other compatibility functions can also be used, see <cit.> for details. The softmax normalizes the attention values into a probability distribution a_i^t, representing the probability of selecting variable i at branching step t. The variable with the highest probability a_i^t is chosen as the branching variable.
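In code, the pointer step can be sketched as follows; the key projection W_k is folded into W_1 here, and the parameter names are illustrative.

    import torch
    import torch.nn as nn

    class Pointer(nn.Module):
        """Attention-based pointer over the candidate variables."""
        def __init__(self, d_h=64):
            super().__init__()
            self.W1 = nn.Linear(d_h, d_h, bias=False)   # projects the keys (W_k folded in)
            self.W2 = nn.Linear(d_h, d_h, bias=False)   # projects the query
            self.W3 = nn.Linear(d_h, 1, bias=False)     # maps to a scalar attention value u_i
            self.w = nn.Parameter(torch.ones(3))        # query weights w_1, w_2, w_3

        def forward(self, x_vars, x_global, x_hist1, x_hist2, candidate_idx):
            q = self.w[0] * x_global + self.w[1] * x_hist1 + self.w[2] * x_hist2
            keys = x_vars[candidate_idx]                            # features of the candidates
            u = self.W3(self.W1(keys) + self.W2(q)).squeeze(-1)     # attention values u_i^t
            return torch.softmax(u, dim=-1)                         # selection probabilities a_i^t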
In addition, it is necessary to normalize the variable, constraint, edge, and global features due to their different value ranges. To this end, we apply the prenorm layer introduced in <cit.> to normalize the variable, constraint, and edge features, and we add a corresponding prenorm layer for the global features, so that the neural network model can handle problem instances whose global features have different scales.
§.§ Branch and Bound algorithm based on GPN
We use the GPN model to select the branching variable in B&B. The GPN-based B&B algorithm is illustrated in algorithm <ref>.
First, the LP relaxation of the original MILP is set as the root node. A queue data structure is maintained to store the subproblem nodes to be solved. Each node stores an initial lower bound l, the lower bound of its parent node. After the global upper bound is updated, any queued node whose l exceeds the global upper bound can be pruned; in addition, when a node is taken out of the queue, its lower bound is compared with the global upper bound and the node is pruned if its lower bound is larger. The global upper bound is initialized to ∞ and is updated every time a better feasible solution is found. We extract the variable, constraint, edge, global and historical features of the candidate variables, which are then fed to the GPN model. The model outputs a probability distribution over the candidate variables, the variable with the highest probability is selected to branch on, and two subproblems are generated accordingly. This process loops until the queue is empty, i.e., all leaves of the search tree have been explored.
§ TRAINING METHOD
We use imitation learning to train the proposed model. The objective is to imitate the strong branching rule.
Imitation learning <cit.> can solve various multi-step decision-making problems. In comparison with reinforcement learning methods, which learn without expert supervision, imitation learning can improve training efficiency with the help of expert experience. Imitation learning requires labeled training data provided by an expert, {τ_1, τ_2, …, τ_m}, where τ_i=<s_1^i, a_1^i, s_2^i, a_2^i, …> and (s_t^i, a_t^i) are the "state-action" pairs of a Markov decision process generated by solving an instance with the SB-based B&B. The labeled training set can thus be constructed as 𝒟={(s_1, a_1),(s_2, a_2),(s_3, a_3), …}. Taking a_i as the label, the variable selection problem is converted into a classification problem whose objective is to minimize the difference between the expert actions and the predicted actions.
Specifically, we run the SB-based B&B on randomly generated combinatorial optimization instances and record the "state-action" pairs to form a training set 𝒟={(𝐬_i, 𝐚^⋆_i)}_i=1^N. Denoting the expert actions by 𝐚^⋆ and the predicted actions by π(s), we optimize the model parameters θ by minimizing:
ℒ(θ) = 1/N∑_(𝐬, 𝐚^*)∈𝒟loss( π_θ(𝐬), 𝐚^⋆).
where loss(·) is a function measuring the difference between the true and predicted values. For classification problems, a number of such loss functions exist, for example the cross-entropy.
However, in B&B the SB scores of different variables may be equal or very close, in which case branching on any of them is equally good. We therefore record the SB scores instead of the variable indices and construct the training set 𝒟={(𝐬_i, score^⋆_i)}_i=1^N, with the aim of imitating the distribution of the SB scores rather than the branching actions. To this end, we use the Kullback-Leibler (KL) divergence as a measure of the difference between the SB score distribution and the predicted probability distribution. By minimizing the KL divergence, the model can better handle the situation where multiple variables have equal or similar SB scores.
Denote P as the true distribution of the data and Q as the predicted distribution of the model to fit P, KL divergence is defined as:
D_KL(P Q)=∑_x ∈𝒳 P(x) log(P(x)/Q(x))
Therefore, we optimize the model parameters θ by minimizing:
ℒ(θ) = D_KL(score^⋆π_θ(𝐬)) = ∑_(𝐬, score^⋆)∈𝒟score^⋆log(score^⋆/π_θ(𝐬))
In addition, we mainly care about the variables with high SB scores; the predicted probabilities of the remaining variables have no effect on the branching variable selection. We therefore emphasize, during training, the loss on the variables with high SB scores. Specifically, we sort the variables according to the probabilities output by the model and pay more attention to the leading ones: the KL divergence over the top-k variables is added to the loss. We first sort the probabilities π_θ(𝐬) output by the model and select the first k variables ℐ_k. The KL divergence over ℐ_k is computed by eq:kl as D_KL(score_ℐ_k^⋆π_θ(𝐬)_ℐ_k), and the loss used to train the model is defined as:
ℒ(θ) = D_KL(score^⋆π_θ(𝐬)) + D_KL(score_ℐ_k^⋆π_θ(𝐬)_ℐ_k)
The first term of the loss makes the overall predicted distribution similar to the distribution of the SB scores, while the second term makes the model pay more attention to the variables with large probabilities and down-weights the variables that are irrelevant for selecting the branching variable. This alleviates the situation where a large amount of training effort is spent fitting the distribution of irrelevant variables.
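A PyTorch sketch of this loss for a single training sample is shown below, assuming sb_probs and pred_probs are the normalized SB-score distribution and the model output over the same candidate set.

    import torch

    def topk_kl_loss(pred_probs, sb_probs, k=10, eps=1e-10):
        """KL(score* || prediction) over all candidates plus the same term restricted to the top-k predictions."""
        kl_all = torch.sum(sb_probs * torch.log((sb_probs + eps) / (pred_probs + eps)))
        k = min(k, pred_probs.numel())
        top = torch.topk(pred_probs, k).indices       # variables the model ranks highest
        kl_top = torch.sum(sb_probs[top] * torch.log((sb_probs[top] + eps) / (pred_probs[top] + eps)))
        return kl_all + kl_top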
§ EXPERIMENTAL RESULTS AND DISCUSSION
§.§ Experiment Settings
§.§.§ Comparison Algorithm
We compare the proposed approach against the following approaches:
(1) First, we compare the proposed approach against the classic B&B algorithm with the reliability branching (RB), strong branching (SB) and pseudocost branching (PB) rules, respectively, all as implemented in the well-known SCIP solver. Cutting planes are allowed only at the root node, and other heuristics are disabled during branching for a fair comparison. Our method is also implemented in the SCIP solver and uses the same parameter settings as the competitor methods.
(2) Next, the proposed approach is compared with state-of-the-art machine-learning-based B&B algorithms: the branching method based on the ExtraTrees <cit.> model <cit.> (TREES); the branching methods <cit.> (SVMRANK) and <cit.> (LMART), based on the SVMrank <cit.> and LambdaMART <cit.> models respectively; and the branching method based on a graph neural network <cit.> (GNN).
§.§.§ Test Problems
Effectiveness of the proposed method is evaluated on the following three benchmark combinatorial optimization problems.
(1) Set covering problem <cit.>
The set covering instances contain 1,000 columns. The model is trained on instances with 500 rows, and is evaluated on instances with 500 and 1,000 rows, respectively.
(2) Capacitated facility location problem <cit.>
The instances are generated with 100 facilities. The model is trained on instances with 100 customers, and is evaluated on instances with 100 and 200 customers, respectively.
(3) Maximum independent set problem <cit.>
The instances are generated following the process in <cit.>. The model is trained on instances of 500 nodes, and is evaluated on instances with 500 and 1000 nodes, respectively.
§.§.§ Experimental parameter settings
All compared algorithms are implemented in Python on top of the SCIP solver, which uses its default parameters. The hidden dimension of the models is set to d_h=64. The Adam optimizer is used for training with a learning rate of 0.001. We set k=10 for the top-k imitation learning. The learning rate is decreased by 80% if the loss does not decrease for 10 epochs, and training is terminated if the loss does not decrease for 20 epochs.
§.§.§ Training and evaluation
(1)Training data generation
The SCIP solver with default settings is used to collect training samples offline. We generate random instances and solve them with SCIP. During collection, the RB rule is applied with probability 95% and the SB rule with probability 5%, and only the samples generated by SB are collected. For each sample we record the variable, constraint, edge, global and historical features, the candidate variable set, and the SB scores of the variables.
Instances are randomly generated and solved until 140,000 samples are collected. 100,000 samples are used as the training set, 2,000 samples are used as the validation set, and 2,000 samples are used as the test set.
(2) Evaluation method
We first evaluate the capability of the GPN method in imitating the SB rule. Since multiple variables may have the same or similar SB scores, the following indices are used to evaluate the model accuracy <cit.>: 1) the percentage of times the output of the model is exactly the variable with the highest SB score (acc@1); 2) the percentage of times the output of the model is one of the five variables with the highest SB scores (acc@5); 3) the percentage of times the output of the model is one of the ten variables with the highest SB scores (acc@10). Moreover, we evaluate the total solving time of the GPN-based B&B in comparison with benchmark methods.
§.§ Results
fig:set, fig:loc and fig:ind present the training performances of the proposed GPN model and the classic GNN model on three test problems. The convergence of the loss and model accuracy on the validation set are compared.
GNN is currently the benchmark model for imitating the branching rule. The results show that the proposed GPN model outperforms the traditional GNN model in terms of both convergence speed and final convergence performance during training. Moreover, GPN comfortably outperforms GNN in terms of model accuracy on the validation set for all three problems. The advantage of GPN is more obvious on the facility location and maximum independent set problems, where the GPN loss converges to 0.66 and 0.003 while the GNN loss only converges to 0.72 and 0.0047. This validates the effectiveness of the proposed GPN model: the attention-based pointer mechanism can effectively exploit the graph and global characteristics of the problem and thus make more accurate decisions.
tab:set, tab:location, tab:ind present the model accuracy of GPN, TREES, SVMRANK, LMART, and GNN methods on test set. Results of TREES, SVMRANK and LMART are from <cit.>. Results of acc@1, acc@5 and acc@10 are listed respectively.
It is observed that the GPN method has the highest accuracy among the compared approaches on all three test problems, and its advantage over traditional machine learning methods is most obvious on the maximum independent set problem. This demonstrates the effectiveness of the GPN model.
In addition, we evaluate the running time of the approaches, since the aim of the branching models is to reduce the overall solving time of B&B. The solving time is determined by the size of the search tree, i.e., the number of explored nodes, and by the time consumed in making the branching decisions. Therefore, a good branching model should reduce the size of the search tree while making each branching decision quickly.
tab:s1, tab:s2, tab:s3 list the solving time and the number of explored nodes when using GPN and the compared approaches to solve the three test problems. Results are obtained by solving 100 randomly generated instances and taking the average.
tab:s1 shows that, in comparison with the PB and RB rules, the proposed GPN method achieves at least a 40% increase in solving speed on the 500- and 1000-row set covering instances. In terms of the number of explored nodes, GPN outperforms all of the compared methods except the SB rule on the set covering instances. SB always yields the smallest search tree, but its total solving time has no advantage due to the long computation time of each branching decision. It is clear that the GPN method outperforms all the compared machine learning methods in terms of solving speed and search tree reduction on the set covering instances.
It can be seen from tab:s2 that the GPN method shows even greater advantages on the 100- and 200-customer capacitated facility location instances. Specifically, GPN runs twice as fast as the PB and RB methods. Compared with the machine learning methods, GPN has the fastest solving speed and the fewest explored nodes.
On the maximum independent set instances, GPN achieves nearly a 10% improvement in solving speed and a 20% reduction in the number of explored nodes, as seen in tab:s3, and the solving time is roughly halved compared with the PB and RB methods.
Note that, the test instances are generated randomly, and are different from the training set. Once the model is trained, it can generalize to unseen instances, and scale to larger instances. Although the RB heuristic is carefully handcrafted by experts, it is still defeated by the proposed GPN method, which can learn the heuristics from data. Experiments validate the novelty and efficiency of the GPN method.
The goal of B&B is to solve the combinatorial optimization problem as fast as possible, so a branching strategy must trade off the quality of each decision against the time spent making it. An extreme example is the SB rule: by computing the SB scores for variable selection, the final solution can be obtained with a small number of explored nodes, but each decision step is so time-consuming that the overall running time is very long. The method proposed in this study achieves a better balance between decision quality and decision time: its decisions are slightly worse than those based on the exact SB scores, but they require much less computation time, thus improving the overall solving speed.
§ CONCLUSION
This paper modeled the variable selection strategy in B&B with a deep neural network model. In addition to graph features, we further designed global and historical features to represent the solver state. The model combines a graph neural network with a pointer mechanism: the graph neural network encodes the graph features into the keys of the pointer, while the global and historical features are processed into the query. The attention values computed from the query and the keys act as a pointer to the input sequence. We demonstrate on benchmark problems that our approach improves the overall B&B performance over traditional expert-designed branching rules, and also outperforms state-of-the-art machine-learning-based B&B methods.
In future work, more combinatorial problems should be investigated via the proposed GPN model. Reinforcement learning methods can also be studied to improve the models trained by imitation learning.
Rui Wang received his Bachelor degree from the National University of Defense Technology, P.R. China in 2008, and the Doctor degree from the University of Sheffield, U.K. in 2013. Currently, he is an Associate Professor with the National University of Defense Technology. His current research interests include evolutionary computation, multi-objective optimization and the development of algorithms applicable in practice.
Dr. Wang received the Operational Research Society Ph.D. Prize in 2016, and the National Science Fund for Outstanding Young Scholars in 2021. He is also an Associate Editor of Swarm and Evolutionary Computation and the IEEE Transactions on Evolutionary Computation.
Kaiwen Li received the B.S., M.S. degrees from National University of Defense Technology (NUDT), Changsha, China, in 2016 and 2018.
He is a student with the College of Systems Engineering, NUDT. His research interests include prediction technique, multiobjective optimization, reinforcement learning, data mining, and optimization methods on Energy Internet.
Tao Zhang received the B.S., M.S., Ph.D. degrees from National University of Defense Technology (NUDT), Changsha, China, in 1998, 2001, and 2004, respectively.
He is a Professor with the College of Systems Engineering, NUDT. His research interests include multicriteria decision making, optimal scheduling, data mining, and optimization methods on energy Internet network.
Ling Wang received his B.Sc. in automation and Ph.D. degree in control theory and control engineering from Tsinghua University, Beijing, China, in 1995 and 1999, respectively. Since 1999, he has been with the Department of Automation, Tsinghua University, where he became a Full Professor in 2008. His current research interests include intelligent optimization and production scheduling. He was the recipient of the National Natural Science Fund for Distinguished Young Scholars of China, the National Natural Science Award (second place) in 2014, the Science and Technology Award of Beijing City in 2008, and the Natural Science Award (first place in 2003, and second place in 2007) nominated by the Ministry of Education of China.
Xiangke Liao received the BS degree from Tsinghua University, China, in 1985, and the MS degree from the National University of Defense Technology (NUDT), China, in 1988, both in computer science. He is currently a professor with the College of Computer, NUDT. His research interests include high-performance computing systems, operating systems, and parallel and distributed computing. He is the principal investigator and chief designer of the Tianhe-2 supercomputer.
|
http://arxiv.org/abs/2307.01000v1
|
20230703132914
|
Pareto optimal proxy metrics
|
[
"Lee Richardson",
"Alessandro Zito",
"Dylan Greaves",
"Jacopo Soriano"
] |
stat.ME
|
[
"stat.ME",
"cs.LG"
] |
North star metrics and online experimentation play a central role in how technology companies improve their products. In many practical settings, however, evaluating experiments based on the north star metric directly can be difficult. The two most significant issues are 1) low sensitivity of the north star metric and 2) differences between the short-term and long-term impact on the north star metric. A common solution is to rely on proxy metrics rather than the north star in experiment evaluation and launch decisions. Existing literature on proxy metrics concentrates mainly on the estimation of the long-term impact from short-term experimental data. In this paper, instead, we focus on the trade-off between the estimation of the long-term impact and the sensitivity in the short term. In particular, we propose the Pareto optimal proxy metrics method, which simultaneously optimizes prediction accuracy and sensitivity. In addition, we give an efficient multi-objective optimization algorithm that outperforms standard methods. We applied our methodology to experiments from a large industrial recommendation system, and found proxy metrics that are eight times more sensitive than the north star and consistently moved in the same direction, increasing the velocity and the quality of the decisions to launch new features.
§ INTRODUCTION
North star metrics are central to the operations of technology companies like Airbnb, Uber, and Google, amongst many others <cit.>. Functionally, teams use north star metrics to align priorities, evaluate progress, and determine if features should be launched <cit.>.
Although north star metrics are valuable, there are issues using north star metrics in experimentation. To understand the issues better, it is important to know how experimentation works at large tech companies. A standard flow is the following: a team of engineers, data scientists and product managers have an idea to improve the product; the idea is implemented, and an experiment on a small amount of traffic is run for 1-2 weeks. If the metrics are promising, the team takes the experiment to a launch review, which determines if the feature will be launched to all users. The timescale of this process is crucial – the faster one can run and evaluate experiments, the more ideas one can evaluate and integrate into the product. Two main issues arise in this context. The first is that the north star metric is often not sufficiently sensitive <cit.>. This means that the team will have experiment results that do not provide a clear indication of whether the idea is improving the north star metric. The second issue is that the north star metric can be different in the short and long term <cit.> due to novelty effects, system learning, and user learning, amongst other factors.
A solution to deal with this problem is to use a proxy metric, also referred to as a surrogate metric, in place of the north star <cit.>. The ideal proxy metric is short-term sensitive, and an accurate predictor of the long-term impact of the north star metric. Figure <ref> visualizes the ideal proxy metric in two scenarios where it helps teams overcome the limitations of the north star metric.
Existing literature on proxy metrics <cit.> has focused more on predicting the long-term effect, but has not focused on its trade-off with short-term sensitivity. In this paper, we fulfill both goals with a method that optimizes both objectives simultaneously, called Pareto optimal proxy metrics. To our knowledge, this is the first method that explicitly optimizes sensitivity.
The paper is divided as follows. Section <ref> discusses how to measure the objectives and their empirical trade-off. Section <ref> covers our methodology and algorithms. Section <ref> discusses our results, and we conclude in Section <ref> with some observations on how to use proxy metrics effectively.
§ HOW TO MEASURE PROXY METRIC PERFORMANCE
The two key properties for metrics are metric sensitivity and directionality <cit.>. The first refers to the ability of a metric to detect a statistically significant effect, while the second measures the level of agreement between the metric and the long-term effect of the north star. This Section discusses each property individually, and proposes metrics to quantify them. We conclude with our empirical observation regarding the trade-off between sensitivity and directionality, which motivated the methodology in this paper (see Figure <ref>).
§.§ Metric sensitivity
Metric sensitivity is commonly associated with statistical power. However, it can be expressed as a broader concept <cit.>. In simple terms, metric sensitivity measures the ability to detect a significant effect for a metric. Following <cit.>, we can write this as
P(Reject H_0) = ∫ P(Reject H_0 | δ)dP(δ),
where δ is the true treatment effect, P(Reject H_0 | δ) is the statistical power, and dP(δ) is the distribution of true treatment effects in a population of related experiments. Sensitivity depends heavily on the type of experiments. This is captured in the dP(δ) term in Equation <ref>, and is sometimes referred to as the moveability of the metric. For example, metrics related to Search quality will be more sensitive in Search experiments, and less sensitive in experiments from other product areas (notifications, home feed recommendations, etc.). Although each experiment is unique, our analysis groups together experiments with similar treatments, and we assume that the underlying treatment effects are independent and identically distributed draws from a common distribution of treatment effects.
We need to define quantities that summarize how sensitive a metric is. Our intuition is that we can estimate the probability a metric will detect a statistically significant effect by seeing how often such an effect was statistically significant in historical experiments. Suppose that there are J experiments whose outcome is recorded by M metrics. In each experiment, the population is randomly partitioned into N ≈ 100 equal groups, and within each group, users are independently assigned to a treatment and a control group. We refer to these groups as independent hash buckets <cit.>.
Let X_i,j, m^Tr and X_i,j, m^Ct with m = 1, …, M and j= 1,…, J denote the short-term recorded values for metric m in experiment j in the treatment and in the control group, respectively, and let X_i,j, m= 100%× (X_i,j, m^Tr - X_i,j, m^Ct)/X_i,j, m^Ct their percentage differences, in hash bucket i= 1,…, N. We refer to these metrics as auxiliary metrics, since their combination will be used to construct a proxy metric in Section <ref>. The within hash bucket sample sizes are typically large enough that we can use the central limit theorem to assume that X_i,j, miid∼ N(θ_j,m, σ^2_j,m) for i = 1, …, N, where θ_j, m and σ^2_j, m are unknown mean and variance parameters, and test
H_0, j, m: θ_j, m = 0 vs H_1, j, m: θ_j, m≠ 0.
Calling X̅_j, m = N^-1∑_i = 1^N X_i, j, m the mean percentage difference between the two groups and se_j, m the standard error, calculated at Google via the Jackknife method <cit.>, the null hypothesis H_0, j, m is rejected at the α level if the test statistics t_j, m =X̅_j, m/se_j, m is larger than a threshold τ_α, N-1 in absolute value. The common practice is to let α = 0.05.
From the above, it naturally follows that metric sensitivity should be directly related to the value of the test statistic t_j, m. For instance, we call binary sensitivity for metric m the quantity
(X̅_·, m) = 1/J∑_j = 1^J1(|t_j, m| > τ_α, N-1), (m = 1,…, M),
where X̅_·, m = {X̅_1, m,…, X̅_J, m}. Equation (<ref>) measures the proportion of statistically significant experiments in our pool of experiments for every metric m. Another characteristic of equation (<ref>) is that it takes on a discrete set of values. This is an issue when the number of experiments J is low. In this case, one can resort to smoother versions of binary sensitivity, such as the average sensitivity, defined as
(X̅_·, m) = 1/J∑_j = 1^J |t_j, m|, (m = 1, …, M).
The above quantity is the average absolute value of the test statistic across experiments. It has the advantage of being continuous and thus easier to optimize, but it pays a cost in terms of lack of interpretability and is also more susceptible to outliers. In the case of large outliers, one effective strategy is to cap the value of the t-statistic.
Which measure of sensitivity to use depends on the application. When a large pool of experiments is available, we recommend using equation (<ref>) due to its interpretation and intrinsic simplicity. Equation (<ref>) should be invoked when optimizing over a discrete quantity yields unstable results.
§.§ Directionality
The second key metric property we need to quantify is called directionality. Through directionality, we want to capture the alignment between the increase (decrease) in the metric and long-term improvement (deterioration) of the user experience. While this is ideal, getting ground truth data for directionality can be complex. A few existing approaches either involve running degradation experiments or manually labeling experiments, as discussed in <cit.>. Both approaches are reasonable, but suffer from scalability issues.
Our method measures directionality by comparing the short-term value of a metric against the long-term value of the north star. The advantage of this approach is that we can compute the measure in every experiment. The disadvantage is that the estimate of the treatment effect of the north star metric is noisy, which makes it harder to separate the correlation in noise from the correlation in the treatment effects. This can be handled, however, by measuring correlation across repeated experiments.
There are various ways to quantify the directionality of a metric. In this paper, we consider two measures: the first is the mean squared error, while the second is the empirical correlation. Following the setting of Section <ref>, let Y_i, j^Tr and Y_i, j^Ct define the long-term value of the north star in the treatment and in the control group for every cookie bucket i and experiment j. The resulting recorded percentage difference is Y_i, j = 100% (Y_i, j^Tr - Y_i, j^Ct)/Y_i, j^Ct. Then we can define the mean squared error as
(X̅_·, m) = 1/J∑_j=1^J (Y̅_j - X̅_j, m)^2, (m = 1, …, M),
where again Y̅_j = N^-1∑_i=1^N Y_i, j is the long-term mean of the north star in experiment j. Equation (<ref>) measures how well metric m predicts the long-term north star on average. Such a measure depends on the scale of X and Y and may require standardization of the metrics. For a scale-free measure, one instead may adopt correlation, which is defined as follows
(X̅_·, m) = ∑_j=1^J (Y̅_j - Y̅)(X̅_j, m- X̅_m)/√(∑_j=1^J (Y̅_j - Y̅)^2∑_j=1^J (X̅_j, m - X̅_m)^2), (m = 1, …, M),
where X̅_m = J^-1∑_j = 1^JX̅_j,m and Y̅= J^-1∑_j = 1^JY̅_j are the grand mean of metric m and the north star across all experiments.
Equations (<ref>) and (<ref>) quantify the agreeableness between a metric m and the north star, and their use is entirely dependent on the application. Notice that equation (<ref>) measures the linear relationship, but other measures of correlation may be employed, such as Spearman correlation. It is possible to use different measures of correlation because our methodology is agnostic to specific measures of sensitivity and directionality, as detailed in Section <ref>.
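For concreteness, both families of measures reduce to a few lines of numpy; in the sketch below t_stats and X_bar are assumed to be J × M arrays of per-experiment t-statistics and short-term metric means, and Y_bar holds the J long-term north star means.

    import numpy as np

    def binary_sensitivity(t_stats, tau):
        """Fraction of experiments in which each metric is statistically significant."""
        return np.mean(np.abs(t_stats) > tau, axis=0)

    def average_sensitivity(t_stats, cap=None):
        """Mean absolute t-statistic per metric, optionally capped to limit outliers."""
        abs_t = np.abs(t_stats) if cap is None else np.minimum(np.abs(t_stats), cap)
        return abs_t.mean(axis=0)

    def directionality(X_bar, Y_bar):
        """MSE and Pearson correlation of each short-term metric against the long-term north star."""
        mse = np.mean((Y_bar[:, None] - X_bar) ** 2, axis=0)
        corr = np.array([np.corrcoef(X_bar[:, m], Y_bar)[0, 1] for m in range(X_bar.shape[1])])
        return mse, corr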
§.§ The trade-off between sensitivity and directionality
So far, we have established two key properties for a metric: sensitivity and directionality. Empirically, we observe an inverse relationship between these two properties. This can be clearly seen from Figure <ref>, where we plot the value of the binary sensitivity in equation (<ref>) and the correlation with the north star in equation (<ref>) for over 300 experiments on a large industrial recommendation system.
As such, there is a trade-off between sensitivity and directionality: the more we increase sensitivity, the less likely our metric will be related to the north star. Thus, our methodology aims to combine auxiliary metrics into a single proxy metric to balance such trade-off in an optimal manner.
§ PARETO OPTIMAL PROXY METRICS
Our core idea is to use multi-objective optimization to learn the optimal trade-off between sensitivity and directionality. Our algorithm learns a set of proxy metrics with the optimal trade-off, known as the Pareto front. The proxy metrics in the Pareto front are linear combinations of auxiliary metrics. Each proxy in the Pareto front is Pareto optimal, in that we can not increase sensitivity without decreasing correlation, and vice versa.
In this section, we first describe the proxy metric problem, and we later cast the proxy metric problem into the Pareto optimal framework. Then we discuss algorithms to learn the Pareto front and compare their performance.
§.§ The proxy metric problem
We define a proxy metric as a linear combination between the auxiliary metrics m = 1, …, M. Let ω = (ω_1, …, ω_M) be a vector of weights. A proxy metric is obtained as
Z_i, j(ω) = ∑_m = 1^M ω_m X_i, j, m,
for each i = 1,…, N and each experiment j = 1,…, J. Here, ω_m defines the weight that metric m has on the proxy Z_i, j. For interpretability reasons, it is useful to consider a normalized version of the weights, namely imposing that ∑_m=1^M ω_m = 1 with each ω_m≥0. In doing so, we require that a positive outcome is associated with an increase in the auxiliary metrics. This means we must swap the sign of metrics whose decrease has a positive impact. These include, for example, metrics that represent bad user experiences, like abandoning the page or refining a query, and which are negatively correlated with the north star metric. Within such a formulation, the proxy metric becomes a weighted average across single metrics where ω_m measures the importance of metric m. Un-normalized versions of the proxy weights can also be considered, depending on the context and the measures over which the optimization is carried over. In general, the binary sensitivity in equation (<ref>) and the correlation in equation (<ref>) are invariant to the scale of ω_m, which implies that they remain equal irrespective of whether the weights are normalized or not.
Within such a framework, our goal is to find the weights in equation (<ref>). Let Z̅_j(ω) = N^-1∑_i = 1^N Z_i,j(ω) be the average value of the proxy metric in experiment j= 1, …, J and Z̅_·(ω) = {Z̅_1(ω), …, Z̅_J(ω)} their collection. When binary sensitivity and correlation are used as measures for sensitivity and directionality, multi-objective optimization is performed via the following problem
ω^* = max_ω = (ω_1, …, ω_M){(Z̅_·(ω)), (Z̅_·(ω))}.
The solution to the optimization in equation (<ref>) is not available in an explicit analytical form, which means that we need to resort to multi-objective optimization algorithms to find ω^*. We discuss these algorithms after first introducing the concept of Pareto optimality.
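As a sketch, the two objectives of a candidate weight vector can be evaluated as below, assuming the per-bucket metric deltas X_{i,j,m} are stored in an N × J × M array; the plain sample standard error is used here in place of the jackknife estimate described earlier.

    import numpy as np

    def proxy_objectives(X, Y_bar, w, tau):
        """Return (binary sensitivity, correlation) of the proxy Z(w) built from X: [N, J, M]."""
        Z = np.tensordot(X, w, axes=([2], [0]))            # Z_{i,j}(w), shape [N, J]
        Z_bar = Z.mean(axis=0)                             # per-experiment means of the proxy
        se = Z.std(axis=0, ddof=1) / np.sqrt(Z.shape[0])   # naive standard errors (jackknife in practice)
        t = Z_bar / se                                     # per-experiment t-statistics
        sensitivity = np.mean(np.abs(t) > tau)             # binary sensitivity of the proxy
        correlation = np.corrcoef(Z_bar, Y_bar)[0, 1]      # directionality of the proxy
        return sensitivity, correlation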
§.§ Pareto optimality for proxy metrics
A Pareto equilibrium is a situation where any action taken by an individual toward optimizing one outcome will automatically lead to a loss in other outcomes. In this situation, there is no way to improve both outcomes simultaneously. If there was, then the current state is said to be Pareto dominated. In the context of our application, the natural trade-off between correlation and sensitivity implies that we cannot unilaterally maximize one dimension without incurring in a loss in the other. Thus, our goal is to look for weights that are not dominated in any dimension.
In reference to equation (<ref>), we say that a set of weights ω is Pareto dominated if there exists another set of weights ω' such that bSens(Z̅_·(ω')) ≥ bSens(Z̅_·(ω)) and Corr(Z̅_·(ω')) ≥ Corr(Z̅_·(ω)), with at least one of the two inequalities strict. We write ω≺ω' to indicate the dominance relationship. The set of non-dominated points is called the Pareto set. We denote it by 𝒲 = {ω_1,…, ω_q}, where for all ω, ω' ∈𝒲 neither ω≺ω' nor ω' ≺ω holds. The objective values associated with the Pareto set are called the Pareto front.
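A minimal sketch of the dominance check that underlies the Pareto set (the function and variable names are ours, chosen for illustration):

```python
import numpy as np

def pareto_mask(objectives):
    """Given an (n_candidates, 2) array of (sensitivity, correlation) values,
    return a boolean mask of the non-dominated (Pareto optimal) candidates.
    A point is dominated if another point is >= in both objectives and
    strictly > in at least one."""
    obj = np.asarray(objectives, dtype=float)
    n = obj.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(obj, i, axis=0)
        dominated = np.any(np.all(others >= obj[i], axis=1) &
                           np.any(others > obj[i], axis=1))
        keep[i] = not dominated
    return keep

# Example: three candidates; the second is dominated by the first.
vals = np.array([[0.6, 0.8], [0.5, 0.7], [0.9, 0.3]])
print(pareto_mask(vals))  # [ True False  True]
```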
Figure <ref> shows an example of what the Pareto front and the Pareto set look like. The grey points represent the value of the objectives for a set of weights generated at random, while the red points are the ones in the Pareto set. The green dot is an example point that is Pareto dominated by the area highlighted in grey. It is easy to see that any point in the grey area is strictly better than the green dot. The purpose of multi-objective optimization is to efficiently identify the Pareto front and the weights in the Pareto set. Algorithms to estimate the Pareto front are reported in the next Section.
§.§ Algorithms for Pareto optimal proxies
Multi-objective optimization is a well-studied problem that can be solved via a wealth of efficient algorithms. Common methods to extract the Pareto front combine Kriging techniques with expected improvement minimization <cit.>, or black box methods via transfer learning <cit.>. These methods are particularly suitable for cases where the objective functions are intrinsically expensive to calculate, and therefore one wishes to limit the number of evaluations required to extract the front. In our case, however, both objective functions can be calculated with minimal computational effort. As such, we propose two algorithms to efficiently extract the front that rely on sampling strategies and nonlinear optimization routines. We then compare our algorithms against a standard Kriging-based implementation.
Our first method to extract the Pareto front is a simple randomized search, described in Algorithm <ref> below. The mechanism is straightforward: at each step, we propose a candidate weight vector ω and calculate the associated proxy Z_i,j(ω) for every unit i = 1,…, N and every experiment j = 1, …, J. We then evaluate the desired objective functions, such as the binary sensitivity and the correlation in equations (<ref>) and (<ref>), and check whether ω is Pareto dominated by the current Pareto set. If it is not, we update the Pareto set by removing any weights that ω dominates and adding ω to the set.
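The following sketch illustrates this loop, reusing the proxy_values and pareto_mask helpers from the sketches above; the Dirichlet proposal distribution is an assumption made for illustration and need not match the sampling scheme of Algorithm <ref>:

```python
import numpy as np

def random_search_front(X, flip_mask, objective_fns, n_iter=2000, seed=1):
    """Randomized search sketch of Algorithm 1: sample candidate weights,
    score them with the two objectives, keep the non-dominated candidates.

    X             : (N, J, M) array of auxiliary metric values.
    flip_mask     : boolean mask of metrics whose sign must be flipped.
    objective_fns : pair of callables mapping the per-experiment proxy
                    means to (sensitivity, correlation).
    """
    rng = np.random.default_rng(seed)
    M = X.shape[2]
    cand_w, cand_obj = [], []
    for _ in range(n_iter):
        omega = rng.dirichlet(np.ones(M))      # candidate weights on the simplex
        Z = proxy_values(X, omega, flip_sign=flip_mask)
        z_bar = Z.mean(axis=0)                 # average proxy value per experiment j
        cand_w.append(omega)
        cand_obj.append([f(z_bar) for f in objective_fns])
    cand_w, cand_obj = np.array(cand_w), np.array(cand_obj)
    keep = pareto_mask(cand_obj)               # drop Pareto-dominated candidates
    return cand_w[keep], cand_obj[keep]
```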
The advantage of Algorithm <ref> is that it explores the whole space of possible weights and can be performed online with minimum storage requirements. However, such exploration is often inefficient, since the vast majority of sampled weights are not on the Pareto front. Moreover, the method may suffer from a curse of dimensionality: if the total number of auxiliary metrics M is large, then a massive number of candidate weights is required to explore the hypercube [0,1]^M exhaustively. A standard solution to such a problem relies on a more directed exploration of the space of weights via Kriging, where the weight at one iteration is sampled from normal distributions whose mean and variance are obtained by minimizing an in-fill criterion <cit.>. Refer to <cit.> for a practical overview. Since evaluating sensitivity and correlation is a relatively simple operation, we propose a more directed algorithm, which we now illustrate.
Consider the bivariate optimization problem in equation (<ref>). If we fix one dimension, say sensitivity, to a certain threshold and later optimize with respect to the other dimension in a constrained manner, then varying the threshold between 0 and 1 should equivalently extract the front. In practice, this procedure is approximated by binning the sensitivity in disjoint intervals, say [u_b, u_b+1) with b = 1, …, B-1, with u_1 = 0 and u_B = 1, and then solving
ω^*_b = arg max_ω : bSens(Z̅_·(ω)) ∈ [u_b, u_b+1) Corr(Z̅_·(ω)),
for each b=1, …, B-1. The resulting Pareto set then consists of at most B-1 weight vectors. We summarize this in Algorithm <ref> below.
The optimization problem in equation (<ref>) and Algorithm <ref> can be solved via common nonlinear optimization methods, such as those available in standard nonlinear optimization packages. See <cit.> and references therein.
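As an illustration, the sketch below implements the binned problem in equation (<ref>) with scipy's Nelder-Mead routine and a penalty for the sensitivity constraint; this is a simplification of Algorithm <ref>, and the bin handling, starting points, and choice of optimizer are our own:

```python
import numpy as np
from scipy.optimize import minimize

def binned_front(sens_fn, corr_fn, M, bin_edges, seed=2):
    """Sketch of Algorithm 2: for each sensitivity bin [u_b, u_{b+1}),
    maximize correlation subject to the sensitivity falling in the bin
    (the constraint is handled with a simple penalty here)."""
    rng = np.random.default_rng(seed)
    front = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        def objective(w, lo=lo, hi=hi):
            w = np.abs(w) / np.abs(w).sum()          # keep weights on the simplex
            s = sens_fn(w)
            outside = abs(s - np.clip(s, lo, hi))    # distance to the target bin
            penalty = 0.0 if lo <= s < hi else 10.0 * (1.0 + outside)
            return -corr_fn(w) + penalty
        w0 = rng.dirichlet(np.ones(M))               # random starting point
        res = minimize(objective, w0, method="Nelder-Mead")
        w_star = np.abs(res.x) / np.abs(res.x).sum()
        front.append((w_star, sens_fn(w_star), corr_fn(w_star)))
    return front
```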
Each algorithm produces a set of Pareto optimal proxy metrics. However, we typically rely on a single proxy metric for experiment evaluation and launch decisions. This means we need to select a proxy from the Pareto front. In practice, we use the Pareto set to reduce the space of candidate proxies, and later choose the final weights based on statistical properties and other product considerations.
§.§ Algorithm performance
This Section evaluates the performance of our proposed algorithms. The task is extracting the Pareto front between binary sensitivity and correlation from a set of over 300 experiments. Details on the data are described in Section <ref>. We test three different algorithms:
* Randomized search (Algorithm <ref>). We let the algorithm run for M × 4000 iterations.
* Constrained optimization via binning (Algorithm <ref>). We split sensitivity into 14 discrete bins, ranging from 0 to the maximum sensitivity of a single metric in our data set, and rely on the locally biased dividing rectangles algorithm <cit.> from a standard nonlinear optimization package.
* Kriging combined with an expected hyper-volume improvement in-fill criterion <cit.>, using an off-the-shelf package <cit.>. We let the algorithm run for M × 40 iterations.
We estimate the Pareto front for M = 5, 10, and 15 metrics to understand how algorithm performance scales with the number of metrics. Figure <ref> compares the Pareto front extracted by each algorithm. Each algorithm yields a similar Pareto front. We notice that constrained optimization detects points in the high-sensitivity and high-correlation regions better than the other two methods, especially as the number of metrics increases. However, the middle portions of the extracted curves are very similar.
A more direct comparison is reported in Figure <ref>. Here, we quantify the extracted Pareto Front using the Area under the Pareto Front metric (larger values are better). We also compare the run-time of each algorithm. The clear takeaway from Figure <ref> is that the choice of algorithms does not matter much for a small number of metrics (5). However, constrained optimization is the best trade-off between accuracy and speed when the number of metrics is large.
§ RESULTS
We implemented our methodology on over 300 experiments in a large industrial recommendation system. We then evaluated the performance of the resulting proxy on over 500 related experiments that ran throughout the subsequent six months. Specifically, we compare the proxy with the short-term north star metric, since its precise goal is to improve upon the sensitivity of the short-term north star itself. As success criteria, we use Binary Sensitivity in equation (<ref>) and the proxy score, which is a one-number statistic that evaluates proxy quality. See Appendix <ref> for a detailed definition.
Table 1 compares our short-term proxy metric against the short-term north star metric. Our proxy metric was 8.5 times more sensitive. In the cases where the long-term north star metric was statistically significant, the proxy was statistically significant 72% of the time, compared to just 40% of the time for the short-term north star. In this set of experiments, we did not observe any case where the proxy metric was statistically significant in the opposite direction as the long-term north star metric. We have, however, seen this occur in different analyses. But the occurrence is rare and happens in less than 1% of experiments. Finally, our proxy metric has a 50% higher proxy score than the short-term north star. Our key takeaway is that we can find proxy metrics that are dramatically more sensitive while barely sacrificing directionality.
Table 1 only evaluates the relationship between the proxy and north star metric when the north star is statistically significant. These experiments are useful because we have a clear direction from the north star metric. However, it is also important to assess the proxy metric when the long-term north star metric is neutral. For this, we can look at the magnitude of the north star metric when the long-term effect is not statistically significant, split by whether the proxy is negative, neutral, or positive. We display this in Figure <ref>, which shows that, although we may not get statistically significant results for the north star metric, making decisions based on the proxy will be positive for the north star on average. In practice, we are careful when rolling out these cases, and have tools to catch any launch that does not behave as expected.
Finally, it is instructive to analyze how the weights of the proxy metrics vary as we move along the Pareto front from directionality to sensitivity, as illustrated in the example in Figure <ref>. As expected, when we select points that emphasize correlation, our proxy metric puts more weight on the short-term north star. But when we choose points that emphasize sensitivity, we put much more weight on sensitive, local metrics.
§ DISCUSSION
This paper proposes a new method to find proxy metrics that optimizes the trade-off between sensitivity and directionality. To our knowledge, this is the first approach that explicitly incorporates metric sensitivity into the objective. In our experiments, we found proxy metrics that were 6-10 times more sensitive than the short-term north star metric, and minimal cases where the proxy and the north star moved in opposite directions.
Our experience developing proxy metrics with multiple teams across multiple years has spurred many thoughts on their pros, cons, and things to watch out for. These considerations go beyond the mathematical framework discussed in this paper, and we list them in the next section. We then discuss some other benefits of using proxy metrics. Finally, we'll discuss some limitations in our methodology and future areas of improvement.
§.§ Considerations beyond Pareto optimality
Below are other important considerations we learned from deploying proxy metrics in practice:
* Make sure you need proxies before developing them. Proxies should be motivated by an insensitive north star metric, or one that is consistently different between the short and long term. It is important to validate that you have these issues before developing proxies. To assess sensitivity, you can compute the Binary Sensitivity in a set of experiments. To assess short and long-term differences, one possibility is to compare the treatment effects at the beginning and end of your experiments.
* Try better experiment design before using proxies. Proxies are one way to increase sensitivity, but they are not the only way. Before you create proxy metrics, you should assess if your sensitivity problems can be solved with a better experiment design. For example, you may be able to run larger experiments, longer experiments, or narrower triggering to only include users that were actually impacted by the treatment. Solving at the design stage is ideal because it allows us to target the north star directly.
* Choose proxies with common sense. The best auxiliary metrics in our proxy metric captured intuitive, critical aspects of the specific user journey targeted by that class of experiments. For example, whether a user had a satisfactory watch from the homepage is a good auxiliary metric for experiments changing the recommendations on the home feed. In fact, many of the best auxiliary metrics were already informally used by engineers, suggesting that common sense metrics have superior statistical properties.
* Validate and monitor your proxies, ideally using holdbacks. It is important to remember that proxy metrics are not what we want to move. We want to move the north star, and proxies are a means to this end. The best tool we have found for validating proxies is the cumulative long-term holdback, including all launches that were made based on the same proxy metric. It is also helpful to regularly repeat the model fitting process on recent data, and perform out-of-sample testing, to ensure your proxy is still at an optimal point.
§.§ Other benefits of proxy metrics
Developing proxies had many unplanned benefits beyond their strict application as a tool for experiment evaluation. The first major benefit is the sheer educational factor: the data science team and our organizational partners developed a much deeper intuition about our metrics. We learned baseline sensitivities, how these baseline sensitivities vary across different product areas, and the correlations between metrics.
Another unplanned benefit is that the proxy metric development process highlighted several areas to improve the way we run experiments. We started to do better experiment design, and to collect data from experiments more systematically, now that the experiments can also be viewed as training data for proxy metrics.
Finally, the most important benefit is that we uncovered several auxiliary metrics that were correlated with the north star, but not holistic enough to be included in the final proxy. We added these signals directly into our machine-learning systems, which resulted in several launches that directly improved the long-term user experience.
§.§ Discussion, limitations, and future directions
This methodology is an important milestone, but there are still many areas to develop, and our methodology is sure to evolve over time.
The first area to explore is causality. Our approach relies on the assumption that the treatment effects of the experiments are independent draws from a common distribution of treatment effects, and that future experiments come from the same generative process. Literature from clinical trials <cit.>, however, has more formal notions of causality for surrogate metrics, and we plan to explore this area and see if there's anything we can glean.
Another important improvement would be a more principled approach to select the final proxy metric. Some initial work along these lines revolves around our proxy score (Appendix <ref>) and Area under the Pareto curve (Figure <ref>). We hope to have a more refined perspective on this topic in the future.
We also did not explore more classic model-building improvements in detail. For example, we do not address non-linearity and feature selection. Non-linearity is particularly important, because it helps in cases where two components of the proxy metric move in opposite directions. For feature selection, we currently hand-pick several auxiliary metrics to include in the proxy metric optimization. However, we should be able to improve upon this by either inducing sparsity when estimating the Pareto front, or adopting a more principled feature selection approach.
To conclude, let's take a step back and consider the practical implications of our results. Essentially, we found that the appropriate local metrics, that are close to the experiment context, are vastly more sensitive than the north star, and rarely move in the opposite direction. The implication is that using the north star as a launch criterion is likely too conservative, and teams can learn more and faster by focusing on the relevant local metrics.
Faster iteration has also opened our eyes to other mechanisms we can use to ensure that our launches are positive for the user experience. We mentioned earlier that launches using proxies should be paired with larger and longer running holdbacks. In fact, through such holdbacks we were able to catch small but slightly negative launches (case 1 in Figure <ref>, but with the opposite sign), and further refine our understanding of the differences between the short and long term impact on the north star metric (case 2 in Figure <ref>, but with the opposite sign).
§ THE PROXY SCORE
It is useful to have a single metric that quantifies the performance of a proxy metric. We have relied on a measure called proxy score. The proxy score rewards properties of an ideal proxy metric: short-term sensitivity, and moving in the same long-term direction as the north star (Figure <ref>). The motivation behind our specific definition comes from the contingency table visualized in Figure <ref>, which is generated from 1000 simulated experiments.
The green cells in Figure <ref> represent cases where the proxy is statistically significant in the short-term, the north star is significant in the long-term, and the proxy and north star move in the same direction. These are unambiguously good cases, and we refer to them as Detections. The red cells are unambiguously bad cases: both the short-term proxy and north star are statistically significant, but they move in opposite directions. We call these Mistakes. Informally, we define the proxy score as
Proxy Score = (Detections − Mistakes) / (Number of experiments where the north star is significant).
The key idea is that the proxy score rewards both sensitivity, and accurate directionality. More sensitive metrics are more likely to be in the first and third rows, where they can accumulate reward. But metrics in the first and third rows can only accumulate reward if they are in the correct direction. Thus, the proxy score rewards both sensitivity and directionality. Microsoft independently developed a similar score, called Label Agreement <cit.>.
More formally, and following the notation in Section <ref>, we can define the proxy score using hypothesis tests for the proxy metric and the north star metric, defined as
H_0, j^ns: θ_j^ns = 0 vs H_1,j^ns: θ^ns_j≠ 0,
H_0, j^z: θ_j^z = 0 vs H_1, j^z: θ^z_j≠ 0.
If we let D_j = {θ_j^ns, σ_j^ns, θ_j^z, σ_j^z} be the data required to compute the hypothesis tests, then the proxy score PS(D_j) for experiment j can be written as
PS(D_j) = 1(H^z_0,j rejected) (Proxy Significant)
× 1(H^ns_0,j rejected) (North Star Significant)
× [ 1(θ_j^ns > 0 and θ_j^z > 0) + 1(θ_j^ns < 0 and θ_j^z < 0) (Agree)
− 1(θ_j^ns > 0 and θ_j^z < 0) − 1(θ_j^ns < 0 and θ_j^z > 0) ], (Disagree)
where 1(·) is an indicator equal to one if its argument is true, and zero otherwise.
We can aggregate these values across all experiments in our data, and scale by the number of experiments where the north star is significant, to compute the final proxy score for a set of experiments. The scaling factor ensures that the proxy score is always between -1 and 1.
PS(D) = ∑_j=1^J PS(D_j) / ∑_j=1^J 1(H^ns_0,j rejected).
Similar to Binary sensitivity, there can be issues with the proxy score when the north star metric is rarely significant. We have explored a few ways to make this continuous, for example by substituting indicators for Bayesian posterior probabilities.
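For completeness, a small sketch of the proxy score computation under two-sided z-tests at the 5% level (field names and the toy data are placeholders):

```python
import numpy as np
from scipy.stats import norm

def proxy_score(theta_ns, se_ns, theta_z, se_z, alpha=0.05):
    """Compute PS(D) from per-experiment north-star and proxy estimates and
    their standard errors (arrays of length J)."""
    crit = norm.ppf(1 - alpha / 2)
    ns_sig = np.abs(theta_ns / se_ns) > crit       # north star significant
    z_sig = np.abs(theta_z / se_z) > crit          # proxy significant
    agree = np.sign(theta_ns) == np.sign(theta_z)
    per_exp = np.where(ns_sig & z_sig, np.where(agree, 1.0, -1.0), 0.0)
    return per_exp.sum() / ns_sig.sum()            # scaled by # significant north stars

# Toy example with J = 4 experiments.
print(proxy_score(theta_ns=np.array([0.5, -0.3, 0.1, 0.4]),
                  se_ns=np.array([0.1, 0.1, 0.2, 0.1]),
                  theta_z=np.array([0.4, 0.2, 0.05, 0.3]),
                  se_z=np.array([0.05, 0.05, 0.1, 0.05])))  # 1/3
```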
|
http://arxiv.org/abs/2307.01142v1
|
20230703163246
|
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
|
[
"Stephen MacNeil",
"Andrew Tran",
"Joanne Kim",
"Ziheng Huang",
"Seth Bernstein",
"Dan Mogil"
] |
cs.HC
|
[
"cs.HC"
] |
Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances
Temple University
1801 N Broad St
Philadelphia
PA
USA
19122
[email protected]
Temple University
1801 N Broad St
Philadelphia
PA
USA
19122
[email protected]
Temple University
1801 N Broad St
Philadelphia
PA
USA
19122
[email protected]
University of California—San Diego
9500 Gilman Drive
La Jolla
CA
USA
92093
[email protected]
Temple University
1801 N Broad St
Philadelphia
PA
USA
19122
[email protected]
Temple University
1801 N Broad St
Philadelphia
PA
USA
19122
[email protected]
[Teaser figure] Three methods to connect user interface components to large language models: 1) static prompts are predefined prompts that can be selected directly from the UI, 2) template-based prompts generate prompts based on selected options in the UI, 3) free-form prompts provide a direct way of interacting with prompts.
[
Dan Mogil
August 1, 2023
==================
§ ABSTRACT
To help users do complex work, researchers have developed techniques to integrate AI and human intelligence into user interfaces (UIs). With the recent introduction of large language models (LLMs), which can generate text in response to a natural language prompt, there are new opportunities to consider how to integrate LLMs into UIs. We present Prompt Middleware, a framework for generating prompts for LLMs based on UI affordances. These include prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). We demonstrate this framework with FeedbackBuffet, a writing assistant that automatically generates feedback based on a user's text input. Inspired by prior research showing how templates can help non-experts perform more like experts, FeedbackBuffet leverages template-based prompt middleware to enable feedback seekers to specify the types of feedback they want to receive as options in a UI. These options are composed using a template to form a feedback request prompt to GPT-3. We conclude with a discussion about how Prompt Middleware can help developers integrate LLMs into UIs.
§ INTRODUCTION
Previous research has demonstrated ways that intelligence can be integrated into UI <cit.>. In `Wizard of Oz' systems, an expert manually controls UI features to simulate an intelligent user interface <cit.>. Similarly, crowdsourcing systems, such as Soylent <cit.>, integrate crowdworkers to power UIs through crowd workflows <cit.>. Finally, specialized machine learning models have also been trained for a specific task and then embedded into systems and interfaces <cit.>. Across these systems, rules, heuristics, workflows, and specialized models guide the ways that interface affordances can be enhanced with intelligence.
Recent advances in natural language processing have resulted in large language models (LLMs), such as GPT-3 <cit.>, which have the ability to understand natural language prompts and generate relevant text responses. These models are already being used to facilitate creative work <cit.>. However, it is not yet clear how to best integrate LLMs into existing UI. In this paper, we explore three methods for integrating LLMs into UI using prompts that are predefined by experts (static prompts), generated from templates with fill-in options in the UI (template-based prompts), or created from scratch (free-form prompts). These three techniques for integrating LLMs into UIs, which we call Prompt Middleware, provide varying amounts of control and guidance to users over the underlying prompt generation process. We demonstrate the concept of Prompt Middleware by developing FeedbackBuffet, a writing assistant that generates feedback for users by guiding them through a menu of feedback options allowing them to determine the type of feedback they would like to receive. FeedbackBuffet implements the `template prompt' middleware to package these feedback options into a prompt for GPT-3.
§ PROMPT MIDDLEWARE: CONNECTING UI AFFORDANCES TO LLMS
Crafting high-quality prompts is challenging <cit.>. To help people create high-quality prompts, PromptMaker guides users to create their own prompts with templates and procedural guidance <cit.>. Another approach called AI Chaining simplifies a complex prompting process by splitting a request into smaller requests which are individually prompted and then stitched back together <cit.>. This approach was shown to improve performance and transparency.
Where previous work has focused on making prompt engineering easier, these approaches have not yet addressed two crucial aspects: 1) techniques to scaffold domain expertise into the prompting process, and 2) directly integrating LLMs into user interfaces. We propose Prompt Middleware as a framework for achieving these two goals by mapping options in the UI to generated prompts for an LLM. Prompt Middleware acts as a middle layer between the LLM and the UI, while also embedding domain expertise into the prompting process. The UI abstracts away the complexity of the prompts and separates concerns between a user completing their tasks and the prompts that might guide LLMs to help them in those tasks. Summarized in Figure <ref>, the following sections introduce three Prompt Middleware types: static, template-based, and free-form.
§.§ Static prompts leverage best practices
Prompt engineering and few-shot learning are common techniques to improve the quality of responses from LLMs <cit.>. We propose the concept of static prompts as a method for making these best practices available to users in a UI. A static prompt is a predefined prompt generated by experts through prompt engineering to achieve high-quality responses from an LLM. As shown in Figure <ref>, static prompts can be hidden behind a button in a UI to send a predefined prompt to an LLM on behalf of the user. This allows users to tap into best practices with minimal effort but at the cost of giving up control of prompt generation.
§.§ Template-based prompts provide flexibility
Previous researchers have shown how expertise and best practices can be directly embedded into templates to guide crowdworkers <cit.>, non-experts <cit.>, and even experts <cit.> to do better work. For example, Motif, a video storytelling application, leverages storytelling patterns to guide users’ story creation <cit.>. Inspired by how templates can guide people, we explore how expert templates might similarly guide LLMs. We propose template-based prompts as a method for generating prompts by filling in a pre-made template with options from a user interface. The template and user interface can integrate expertise and best practices while giving users more control through options in the UI.
§.§ Free-form prompts provide full control
Previous research shows that developing free-form prompts can be challenging <cit.>. However, experts can generate high-quality prompts through the process of prompt engineering. Providing users with full control of the prompting process may be desired in some cases. Free-form prompts provide full access to users as they design their prompt from scratch.
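To make the three middleware types concrete, the following Python sketch (our own illustration; it is not the FeedbackBuffet code, which is written in ReactJS) shows how each type maps UI input to a prompt string:

```python
# Hypothetical sketch of the three Prompt Middleware types.

# 1) Static prompt: a fixed, expert-engineered prompt behind a UI button.
STATIC_PROMPTS = {
    "pros_and_cons": "List the pros and cons of the following text:\n{text}",
}

def static_prompt(button_id, text):
    return STATIC_PROMPTS[button_id].format(text=text)

# 2) Template-based prompt: UI options fill slots in an expert template.
FEEDBACK_TEMPLATE = (
    "Give {valence} feedback about the {aspect} of the following "
    "{document_type}, phrased as {feedback_type}:\n{text}"
)

def template_prompt(options, text):
    return FEEDBACK_TEMPLATE.format(text=text, **options)

# 3) Free-form prompt: the user writes the prompt from scratch.
def free_form_prompt(user_prompt):
    return user_prompt

print(template_prompt(
    {"valence": "positive", "aspect": "structure",
     "document_type": "statement of purpose", "feedback_type": "a question"},
    "My journey into computing began ..."))
```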
§ CASE STUDY
The case study methodology is a technique for illustrating an idea through the use of examples <cit.>. To engage more deeply with the concept of Prompt Middleware, we developed the FeedbackBuffet prototype which implements the template-based prompting design pattern. This case study illustrates what template-based prompting might look like when implemented in a user interface.
§.§ FeedbackBuffet System
FeedbackBuffet is a writing assistant that allows users to request automated feedback for any writing sample, such as an essay, email, or statement of purpose, based on UI options. As shown in Figure <ref>, UI options offer users relevant feedback options which are combined using a template to form a prompt for GPT-3. The template integrates best practices of feedback design and cues the feedback seeker to consider qualities of good feedback. FeedbackBuffet implements the template-based prompt middleware to integrate intelligence into the interface.
§.§.§ System Implementation
The system is implemented as a ReactJS web app. The prompts are generated through template literals (i.e.: string interpolation) where each selected option from the UI is injected into the template to form a string that is sent as a prompt to OpenAI via API calls using zero-shot learning.
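A hedged Python sketch of this step is shown below; the real system uses JavaScript template literals inside a ReactJS app, and the model name and sampling parameters here are placeholders rather than the system's actual settings:

```python
import openai  # assumes the legacy Completions endpoint of the openai Python client

def request_feedback(prompt_text: str, api_key: str) -> str:
    """Send the composed feedback prompt to GPT-3 (zero-shot) and return the reply."""
    openai.api_key = api_key
    response = openai.Completion.create(
        model="text-davinci-003",  # placeholder model name
        prompt=prompt_text,
        max_tokens=256,
        temperature=0.7,
    )
    return response["choices"][0]["text"]
```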
§.§.§ Integrate Best Practices in Feedback Design
There are principles and best practices for feedback design, such as asking a clarifying question and then making a statement <cit.>, sandwiching criticism between two positive comments <cit.>, and making feedback actionable <cit.>. The feedback template used by FeedbackBuffet is based on a feedback framework that includes valence, level of abstraction, and feedback type <cit.>, summarized in Figure <ref>. We present examples of the feedback generated by GPT-3 using our template in Figure <ref>.
§.§ Use Case: Requesting Design Feedback
To illustrate how FeedbackBuffet operates, we present the following use case about Sasha, a CS student who is taking career preparatory course to work on his statement of purpose. Sasha completes a first draft of his statement and he receives feedback from his instructor that critiques the structure—Sasha did not start with a strong motivation. He focused too much on the graduate program before motivating the reader. After adding motivation based on his journey into computers, he uses FeedbackBuffet to get more feedback before his next class. He pastes his statement into the input area and selects options to request feedback about the content of this draft. These options along with his draft are packaged as a prompt and then sent to GPT-3. He receives the feedback shown in Figure <ref>. Based on this feedback, Sasha edits his statement to add more specific details about how he learned to code by forming an informal group with his friends. He continues to iterate on his statement of purpose, periodically referring back to FeedbackBuffet, and he is excited to show his progress to his instructor.
§ DISCUSSION
In this paper, we build on existing research <cit.> for integrating expertise and intelligence into UIs. We introduce the Prompt Middleware framework to guide the process of integrating LLMs into a UI. We demonstrate this vision with FeedbackBuffet, an intelligent writing assistant that automatically generates feedback based on text input. Given that previous approaches to integrating intelligence may require costly effort to acquire intelligence sources, FeedbackBuffet offers a lightweight method for integrating intelligence and best practices into a UI. FeedbackBuffet's UI acts as a facade around the LLM, abstracting away the complexity of interacting with LLMs. While FeedbackBuffet currently focuses on template-based prompts, we could include static prompts as well, for example via a button titled `Pros and Cons' that would send the prompt shown in Figure <ref> to an LLM.
Researchers have identified several challenges individuals face when interacting with general purpose AI, such as a lack of awareness about the AI's capabilities, which can lead them to request overly complicated or non-existent tasks from the AI agent <cit.>. Researchers are still developing an understanding of the capabilities of LLMs, but in this paper we show it is possible to convey known possibilities afforded by LLMs through a UI. Through static prompts, users can use prompts that have been engineered by experts to be effective. Through template-based prompts, they can choose from a list of menu options to generate prompts that have been previously tested by experts. This ability to communicate the capabilities afforded by LLMs has the potential to make them more accessible for non-experts.
As future work, we plan to evaluate three systems, including FeedbackBuffet, that embody these three types of prompt middleware to understand how best to integrate LLMs into existing UIs. Through this evaluation, we also hope to develop a better understanding of how much control users want when interacting with LLMs through a UI. While complete control in the form of free-form prompts might be desired in some contexts, the appropriate level of control likely depends on the user and the task. For example, a feedback system based on static prompts, which provide less control, may simplify the feedback request process.
§ CONCLUSION
In this paper, we present FeedbackBuffet, a writing assistant that generates feedback on writing samples using GPT-3. The user can choose from a set of feedback options that are combined using a template to form a prompt for GPT-3. This system demonstrates how templates can serve as middleware to map affordances in a user interface to prompt a large language model. This work serves as an initial step toward developing a prompt middleware that can bridge the gap between users and large language models.
|
http://arxiv.org/abs/2307.02604v2
|
20230705185653
|
Bayesian D- and I-optimal designs for choice experiments involving mixtures and process variables
|
[
"Mario Becerra",
"Peter Goos"
] |
stat.ME
|
[
"stat.ME",
"stat.AP"
] |
Bayesian D- and I-optimal designs for choice experiments involving mixtures and process variables
Mario Becerra, Peter Goos
=========================================================================================================
Many food products involve mixtures of ingredients, where the mixtures can be expressed as combinations of ingredient proportions. In many cases, the quality and the consumer preference may also depend on the way in which the mixtures are processed. The processing is generally defined by the settings of one or more process variables. Experimental designs studying the joint impact of the mixture ingredient proportions and the settings of the process variables are called mixture-process variable experiments. In this article, we show how to combine mixture-process variable experiments and discrete choice experiments, to quantify and model consumer preferences for food products that can be viewed as processed mixtures. First, we describe the modeling of data from such combined experiments. Next, we describe how to generate D- and I-optimal designs for choice experiments involving mixtures and process variables, and we compare the two kinds of designs using two examples.
§ INTRODUCTION
As pointed out in the review paper on the state of the art of discrete choice experiments in food research by Lizin2022state, there has been a steady increase in the number of publications on the use of discrete choice experiments concerning food since 2000. A large number of discrete choice experiments in these papers deal with food safety or safety risks, origin or traceability, health or nutrition, biotechnology or genetic modification and animal welfare. The product categories mainly involved meat (beef, pork, poultry, and processed meat products), organic foods, functional foods and foods with nutrition or health claims. In recent years, alternatives to conventional meat received increasing attention. Lizin2022state also mention that a limited number of choice experiments in published papers were concerned with wine, olive oil, eggs and vegetables.
Despite the fact that many food products involve mixtures of ingredients, publications concerning food-related choice experiments with mixtures are scarce. The first known application of a discrete choice experiment concerning mixtures was published by courcoux1997methode, who modeled the preferences for cocktails involving different proportions of mango juice, lime juice, and blackcurrant syrup. goos_hamidouche_2019_choice defined a way to combine Scheffé models for data from mixture experiments with the logit type models typically used for choice experiments, and presented an alternative analysis of the data from courcoux1997methode. ruseckaite_bayesian_2017 and becerra2021bayesian demonstrated how D- and I-optimal designs can be generated for choice experiments with mixtures, applied their work to the cocktail experiment and used an additional example concerning a sports drink.
As witnessed by many of the examples in cornell2002experiments, the quality of food products involving mixtures of ingredients often also depends on characteristics unrelated to the composition of the mixture. For example, the firmness of a fish patty depends not only on the types of fish used, but also on baking temperature, baking time, and frying time.
The color, aroma, taste, texture and mouthfeel of pastillas de leche, a popular Filipino candy, depend on baking time and temperature in addition to mixture ingredients such as cornstarch, flour, glucose, sugar and milk apelladooptimization.
The aroma, hardness, crispness, color and fracture force of apple biscuits are affected by the mixture ingredients and the microwave blanching of the apples skaltsi2022development. In the general literature on mixture experiments, variables such as baking temperature, baking time, frying time, serving temperature, and microwave blanching are typically called process variables goos_jones_optimal_2011.
The fact that the quality of food products involving mixtures depends on the settings of such process variables implies that consumer preferences for these kinds of products will also be impacted by the process variables' settings. For this reason, in this article, we develop the methodology required to perform discrete choice experiments involving mixtures as well as process variables. First, we present a parsimonious model for data from choice experiments with mixture and process variables. Next, we discuss how to generate D- and I-optimal designs for such choice experiments. We discuss D-optimal designs because the D-optimality criterion is the most popular criterion for designing choice experiments; and I-optimal designs because they focus on precise predictions and precise predictions are helpful to find the optimal mixture formulation in combination with optimal settings for the process variables.
The rest of the paper is organized as follows. In Section <ref>, we introduce the most often used models for mixture experiments with process variables, the multinomial logit model for choice data and the combination of these two to model choice data concerning mixtures and process variables. In Section <ref>, we discuss the two most commonly used metrics to measure the quality of experimental designs. In Section <ref>, we present some of our computational results and provide example designs for two choice experiments involving a mixture and one or more process variables. Finally, in Section <ref>, we summarize our work and sketch possible directions for future research.
§ MODELS
In this section, we introduce the most commonly used models for data from mixture experiments with process variables as well as the multinomial logit model for choice data, and explain how to combine the two models for data from choice experiments involving mixtures and process variables.
§.§ Models for data from mixture experiments including process variables
Mixture experiments involve two or more ingredients and a response variable that depends only on the relative proportions of the ingredients in the mixture. Each mixture is described as a combination of q ingredient proportions, with the constraint that these proportions sum up to one. Due to this constraint, a classical regression model involving an intercept and linear terms in the ingredient proportions exhibits perfect collinearity. Therefore, researchers must use dedicated regression models when analyzing data from mixture experiments. The most commonly used family of models for data from mixture experiments is the Scheffé family (<cit.>; <cit.>). The most popular Scheffé models are the first-order, second-order, and special-cubic models.
Denoting the response in a traditional mixture experiment with a continuous outcome by Y and the q ingredient proportions by x_1,x_2,…,x_q, with x_i ≥ 0 and ∑_i = 1^q x_i = 1, the first-order Scheffé model is
Y = ∑_i = 1^q β_i x_i + ε.
The second-order Scheffé model is
Y =
∑_i = 1^q β_i x_i +
∑_i = 1^q-1∑_j = i+1^q β_ij x_i x_j +
ε,
and, finally, the special-cubic Scheffé model is
Y =
∑_i = 1^q β_i x_i +
∑_i = 1^q-1∑_j = i+1^q β_ij x_i x_j +
∑_i = 1^q-2∑_j = i+1^q-1∑_k = j+1^qβ_ijk x_i x_j x_k +
ε.
In all three cases, ε denotes the error term, which, for continuous outcomes, is typically assumed to be normally distributed.
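As a small illustration, the sketch below (not from the original papers) builds the model-expansion row of the second-order Scheffé model in Equation (<ref>) for a single blend:

```python
from itertools import combinations
import numpy as np

def scheffe2_row(x):
    """Model expansion of a q-ingredient blend for the second-order
    Scheffé model: the q proportions followed by all pairwise products."""
    x = np.asarray(x, dtype=float)
    assert np.isclose(x.sum(), 1.0) and np.all(x >= 0)
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, pairs])

print(scheffe2_row([0.5, 0.3, 0.2]))
# [0.5  0.3  0.2  0.15 0.1  0.06]
```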
In certain experiments involving mixtures, additional factors that might affect the response are studied as well. Generally, these factors describe how the mixture is processed (where the word `processed' should be interpreted in a broad sense). These additional factors are therefore referred to as process variables, and the resulting experiments are called mixture-process variable experiments. For instance, a dough needs to be baked at a certain temperature for a certain time, while the cocktails from the example in Section <ref> need to be cooled to a certain temperature before being served, and the fish patties from the example in Section <ref> are cooked and fried for a specific time at a specific temperature.
Models that involve q mixture ingredients and r process variables can be obtained by combining Scheffé models for the ingredient proportions with response surface models for the process variables (<cit.>; <cit.>; <cit.>; <cit.>). For example, consider the second-order Scheffé model in Equation (<ref>) for q ingredients x_1, x_2, …, x_q and a main-effects-plus-two-factor-interaction model for r process variables z_1, z_2, …, z_r defined as
Y =
α_0 +
∑_k = 1^r α_k z_k +
∑_k = 1^r-1∑_l = k+1^r α_kl z_k z_l +
ε.
One combined model crosses the terms in Equation (<ref>) with each of those in Equation (<ref>):
Y = ∑_i = 1^q β_i x_i
+ ∑_i = 1^q-1∑_j = i+1^q β_ij x_i x_j
+ ∑_i = 1^q∑_k = 1^r γ_ik x_i z_k
+ ∑_i = 1^q ∑_k = 1^r-1∑_l = k+1^r γ_ikl x_i z_k z_l
+ ∑_i = 1^q-1∑_j = i+1^q ∑_k = 1^r δ_ijk x_i x_j z_k
+ ∑_i = 1^q-1∑_j = i+1^q ∑_k = 1^r-1∑_l = k+1^r δ_ijkl x_i x_j z_k z_l
+ ε.
This model allows the effects of both the ingredient proportions and process variables to jointly affect the response variable. In other words, the model allows the effects of the process variables to depend on the ingredient proportions and the effects of the ingredient proportions to depend on the process variables. The combined model in Equation (<ref>) does not include any main effects of the process variables z_1, …, z_r. This is because their inclusion would result in an inestimable model due to perfect collinearity. In the event that the effects of the process variables do not depend on the ingredient proportions, all γ_ik as well as all γ_ikl in the combined model are equal and all
δ_ijk and all δ_ijkl are zero. In such event, the model simplifies to
Y = ∑_i = 1^q β_i x_i
+ ∑_i = 1^q-1∑_j = i+1^q β_ij x_i x_j
+ ∑_k = 1^r α_k z_k
+ ∑_k = 1^r-1∑_l = k+1^r α_kl z_k z_l
+ ε.
This alternative model also combines the models in Equations (<ref>) and (<ref>), but without crossing any of the terms. Depending on the application, it may be necessary to extend the above models by including cubic terms involving the mixture ingredient proportions (as in the special-cubic Scheffé model in Equation (<ref>)) or quadratic terms in the process variables. An example of such an extended model would be
Y = ∑_i = 1^q β_i x_i
+ ∑_i = 1^q-1∑_j = i+1^q β_ij x_i x_j + ∑_i = 1^q-2∑_j = i+1^q-1∑_k = j+1^qβ_ijk x_i x_j x_k
+ ∑_i = 1^q∑_k = 1^r γ_ik x_i z_k
+ ∑_i = 1^q ∑_k = 1^r-1∑_l = k+1^r γ_ikl x_i z_k z_l
+ ∑_i = 1^q-1∑_j = i+1^q ∑_k = 1^r δ_ijk x_i x_j z_k
+ ∑_i = 1^q-1∑_j = i+1^q ∑_k = 1^r-1∑_l = k+1^r δ_ijkl x_i x_j z_k z_l + ∑_i = 1^r α_i z_i^2
+ ε.
A problem with the combined model in Equation (<ref>) is that its number of parameters quickly increases with the number of mixture ingredients and process variables: for q mixture ingredients and r process variables, the total number of parameters is [q + q(q - 1)/2] × [1 + r + r(r - 1)/2]. The extended model in Equation (<ref>) even involves q(q-1)(q-2)/6+r extra parameters. In contrast, the model described in Equation (<ref>) involves a number of parameters that is as low as [q + q(q - 1)/2]+[r + r(r - 1)/2]. The drawback of the latter model is that it may not be realistic. For this reason, kowalski2000new suggest a compromise model involving q + q(q-1)/2 + qr + r(r-1)/2 + r terms:
Y =
∑_k = 1^q γ_k^0 x_k +
∑_k = 1^q-1∑_l = k+1^q γ_kl^0 x_k x_l +
∑_i = 1^r∑_k = 1^q γ_k^i x_k z_i +
∑_i = 1^r-1∑_j = i + 1^r α_ij z_i z_j +
∑_i = 1^r α_i z_i^2 +
ε.
Because this compromise model strikes a balance between the overly complex models in Equations (<ref>) and (<ref>) and the overly simple model in Equation (<ref>), we use it as our starting point for computing optimal designs for choice experiments involving mixtures and process variables in the remainder of this paper.
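For illustration, the following sketch expands a single alternative into the terms of the compromise model in Equation (<ref>); note that, for the choice models discussed below, one of the linear mixture terms is later dropped for identifiability, so this helper is only a starting point:

```python
from itertools import combinations
import numpy as np

def compromise_row(x, z):
    """Terms of the compromise model for ingredient proportions x (length q)
    and process variables z (length r): x_k, x_k*x_l, x_k*z_i, z_i*z_j, z_i^2."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    terms = list(x)
    terms += [x[k] * x[l] for k, l in combinations(range(len(x)), 2)]
    terms += [x[k] * z[i] for i in range(len(z)) for k in range(len(x))]
    terms += [z[i] * z[j] for i, j in combinations(range(len(z)), 2)]
    terms += list(z ** 2)
    return np.array(terms)

print(compromise_row([0.5, 0.3, 0.2], [0.4]).round(3))
# [0.5 0.3 0.2 0.15 0.1 0.06 0.2 0.12 0.08 0.16]
```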
§.§ Multinomial logit model for choice data
The multinomial logit model builds on random-utility theory and assumes that a respondent in a choice experiment faces S choice sets involving J alternatives. The model assumes that, within each choice set s ∈1, ..., S, each respondent chooses the alternative that has the highest perceived utility. Therefore, the probability that a respondent chooses alternative j ∈1, ..., J in choice set s, denoted by p_js, is the probability that the perceived utility of alternative j in choice set s, denoted by U_js, is larger than that of the other alternatives in the choice set:
p_js = ℙ[ U_js > max(U_1s, ..., U_j-1, s, U_j+1, s, ..., U_Js ) ].
Since, generally, each alternative in a choice set has a set of observable attributes that characterize it, the perceived utility U_js can be expressed as
U_js = f^T(a_js) θ + ε_js,
where a_js is the vector that contains the attributes corresponding to alternative j in choice set s, f(a_js) represents the model expansion of this attribute vector, and θ is the vector containing the model parameters. The model parameters contained within θ express the preferences of the respondents for the alternatives' attributes. In the multinomial logit model, the error terms ε_js are assumed to be independent and identically Gumbel distributed. The Gumbel distribution is also known as the generalized extreme value distribution of type I and as the log-Weibull distribution. As a result of the distributional assumption, it can be shown that
p_js = exp[ f^T(a_js) θ]/∑_t = 1^J exp[ f^T(a_ts) θ].
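Numerically, the choice probabilities in Equation (<ref>) are a softmax over the alternatives' deterministic utilities; a short sketch with illustrative data:

```python
import numpy as np

def choice_probabilities(X_s, theta):
    """Multinomial logit probabilities for one choice set.
    X_s   : (J, m) model matrix whose rows are f(a_js)^T.
    theta : length-m parameter vector."""
    utilities = X_s @ theta
    utilities -= utilities.max()          # numerical stabilization
    expu = np.exp(utilities)
    return expu / expu.sum()

# Two alternatives described by three model terms.
print(choice_probabilities(np.array([[0.5, 0.3, 0.15],
                                     [0.2, 0.6, 0.12]]),
                           np.array([1.0, -0.5, 2.0])))
```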
§.§ Model for choice data concerning mixtures and process variables
In this paper, we focus on choice experiments involving mixtures and process variables. Therefore, we assume that the attributes of the alternatives in the experiments are the proportions of the ingredients of a mixture and the settings of the process variables. Consequently, we assume that the attribute vector a_js from Equation (<ref>) contains the q ingredient proportions x_1,x_2,…,x_q and the r process variables z_1,…,z_r of the j-th alternative in choice set s and that f(a_js) represents the model expansion of these proportions and process variables. As a proof of concept, in this paper we base the polynomial expansion f(a_js) on a model combining a second-order Scheffé model for the q ingredients in the mixture with a main-effects-plus-two-factor-interaction model for the r process variables, as in Equation (<ref>).
When starting from the main-effects-plus-two-factor-interaction model in Equation (<ref>), the most natural thing to do would be to write the perceived utility U_js of alternative j in choice set s as
U_js =
∑_i = 1^qγ_i^0 x_ijs +
∑_i = 1^q-1∑_k = i+1^q γ_ik^0 x_ijs x_kjs +
∑_i = 1^r∑_k = 1^q γ_k^i x_kjs z_ijs +
∑_i = 1^r-1∑_k = i + 1^r α_ik z_ijs z_kjs +
∑_i = 1^r α_i z_ijs^2 +
ε_js,
where x_ijs denotes the proportion of the i-th mixture ingredient in alternative j from choice set s, and z_kjs denotes the setting of the k-th process variable for alternative j in choice set s, and the error terms ε_js are assumed to be independent and identically Gumbel distributed. However, as explained by ruseckaite_bayesian_2017, goos_hamidouche_2019_choice, and becerra2021bayesian, due to the constraint that the ingredient proportions sum up to one, this leads to an inestimable multinomial logit model. As a consequence of the constraint, we can rewrite x_qjs as 1 - x_1js - ... - x_q-1,js and U_js as
U_js =
∑_i = 1^q-1γ_i^0 x_ijs +
γ_q^0 (1 - x_1js - ... - x_q-1,j,s) +
∑_i = 1^q-1∑_k = i+1^q γ_ik^0 x_ijs x_kjs +
∑_i = 1^r∑_k = 1^q γ_k^i x_kjs z_ijs +
∑_i = 1^r-1∑_k = i + 1^r α_ik z_ijs z_kjs +
∑_i = 1^r α_i z_ijs^2 +
ε_js
=
γ_q^0 +
∑_i = 1^q-1 (γ_i^0 - γ_q^0) x_ijs +
∑_i = 1^q-1∑_k = i+1^q γ_ik^0 x_ijs x_kjs +
∑_i = 1^r∑_k = 1^q γ_k^i x_kjs z_ijs +
∑_i = 1^r-1∑_k = i + 1^r α_ik z_ijs z_kjs +
∑_i = 1^r α_i z_ijs^2 +
ε_js.
This final expression for the perceived utility U_js involves a constant, γ_q^0. Since the multinomial logit model only takes into account differences in utility, that constant causes the model to be ill-defined and, hence, inestimable. This can be circumvented by dropping γ_q^0, defining the parameters
γ_i^0* = γ_i^0 - γ_q^0 for i ∈1, ..., q-1, and using the following expression for the perceived utility:
U_js =
∑_i = 1^q-1γ_i^0* x_ijs +
∑_i = 1^q-1∑_k = i+1^q γ_ik^0 x_ijs x_kjs +
∑_i = 1^r∑_k = 1^q γ_k^i x_kjs z_ijs +
∑_i = 1^r-1∑_k = i + 1^r α_ik z_ijs z_kjs +
∑_i = 1^r α_i z_ijs^2 +
ε_js.
The parameter vector θ then becomes
θ =
(
γ_1^0*, γ_2^0*, ..., γ_q-1^0*, γ_1,2^0, ..., γ_q-1,q^0, γ_1^1, ..., γ_q^1, γ_1^2, ..., γ_q^2, ..., γ_1^r, ..., γ_q^r, α_1,2, ..., α_r-1,r, α_1, ..., α_r)^T.
This vector has q + q(q-1)/2 + qr + r(r-1)/2 + r - 1 elements.
§ OPTIMAL DESIGN CRITERIA
In the literature on the optimal design of choice experiments in general, several criteria have been studied. kessels2006comparison elaborate on the D-, I-, A-, and G-optimality criteria for the multinomial logit model and compare the performances of the resulting choice designs. However, in the literature on optimal design of choice experiments with mixtures, the two optimality metrics that have been studied are D-optimality and I-optimality. In this section, we extend the D- and I-optimality criteria to cope with the multinomial logit model for choice experiments involving mixtures as well as process variables.
§.§ Information matrix
In order to create D- and I-optimal experimental designs, we need to compute a design's information matrix corresponding to the model under investigation. For the multinomial logit model, the information matrix depends on the unknown parameter vector θ through the choice probabilities p_js defined in Equation (<ref>).
This is typical for models that are not linear in the parameters, such as discrete choice models, and it implies that prior information is needed to find optimal designs. This information can be provided in the form of a point estimate, or in the form of a prior distribution (<cit.>; <cit.>; <cit.>; <cit.>). The use of a point estimate leads to so-called locally optimal designs, which have the problem that they may perform poorly for values of the parameter vector θ for which they were not optimized. This weakness of locally optimal designs is, of course, highly relevant given that the true values of the model parameters are unknown. An alternative is to use a prior distribution, which leads to so-called Bayesian optimal designs. In addition to taking into account prior information, Bayesian optimal designs also take into account the uncertainty about the parameter vector θ through the use of a prior distribution π(θ) that summarizes the prior knowledge concerning the parameter vector θ.
The information matrix I(X, θ) for the multinomial logit model is the sum of the information matrices of each of the S choice sets kessels2006comparison:
I(X, θ) =
∑_s = 1^S
X_s^T (P_s - p_s p_s^T) X_s,
with p_s = ( p_1s, ..., p_Js)^T, P_s = diag(p_s), X_s^T = [ f(a_1s), f(a_2s), ..., f(a_Js) ] the model matrix containing the model expansions of the attribute levels of all J alternatives in choice set s, and X = [ X_1, ..., X_S ] the model matrix for all S choice sets. The inverse of the information matrix is the asymptotic variance-covariance matrix of the maximum likelihood estimates of the parameter vector θ.
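A direct translation of Equation (<ref>) into code might look as follows (illustrative only; it reuses the choice_probabilities helper sketched above):

```python
import numpy as np

def information_matrix(X_sets, theta):
    """Sum of per-choice-set information matrices
    X_s^T (P_s - p_s p_s^T) X_s for the multinomial logit model.
    X_sets : list of (J, m) model matrices, one per choice set."""
    m = X_sets[0].shape[1]
    info = np.zeros((m, m))
    for X_s in X_sets:
        p_s = choice_probabilities(X_s, theta)
        middle = np.diag(p_s) - np.outer(p_s, p_s)
        info += X_s.T @ middle @ X_s
    return info
```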
§.§ D-optimal designs
For a model matrix X and prior parameter vector θ, the D-optimality criterion can be defined as
𝒟 =
[ (
I^-1(X, θ)
) ]^1/m,
where I^-1(X, θ) is the inverse of the information matrix and m is the number of parameters in the model. A D-optimal design minimizes the 𝒟-value. Since the D-optimal design approach focuses on minimizing the generalized variance of the maximum likelihood estimators of the model parameters, it can be viewed as an estimation-based approach. D-optimality is arguably the most traditional metric used in the literature on the design of choice experiments (<cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>).
The definition in Equation (<ref>) uses a prior point estimate of the parameter vector θ. However, as we mentioned above, a prior distribution can also be used to obtain a Bayesian D-optimal design. The Bayesian D-optimality criterion is generally defined in the literature as the average of the D-optimality criterion over the prior distribution (<cit.>; <cit.>; <cit.>). Therefore, following ruseckaite_bayesian_2017 and becerra2021bayesian, we define the Bayesian D-optimality criterion for the multinomial logit model as
𝒟_B =
∫_ℝ^m[
(
I^-1(X, θ)
)
]^1/mπ(θ) dθ,
where π(θ) is the prior distribution of θ. Note that we call a design that minimizes the expression in Equation (<ref>) a Bayesian D-optimal design, even though the criterion does not take into account the posterior distribution and some authors therefore prefer to call such a design a pseudo-Bayesian design (e.g., ryan2016review).
§.§ I-optimal designs
The I-optimality criterion is generally defined as the average prediction variance over the experimental region, which is why it can be seen as a prediction-oriented criterion: it focuses on getting precise predictions with the estimated statistical model. I-optimality is also sometimes called V-optimality (<cit.>; <cit.>).
When using choice models, there are two ways in which we can define I-optimality. If the goal is to predict choice probabilities, the I-optimality criterion is the average variance of the predicted choice probabilities. If the goal is to predict perceived utilities, the I-optimality criterion is the average variance of the predicted utilities. becerra2021bayesian introduced a computationally efficient definition for I-optimal designs for choice experiments focused on the perceived utilities. This is the definition we will use here too. Under this definition, the I-optimality criterion is
ℐ =
tr[ I^-1(X, θ) W],
where I^-1(X, θ) again denotes the inverse of the information matrix for model matrix X and prior parameter vector θ. The matrix W is the moments matrix, defined as
W = ∫_χf(a_js) f^T(a_js) da_js,
with f(a_js) again the model expansion of attribute vector a_js and χ the experimental region which combines the (q-1)-dimensional simplex S_q-1 for the ingredient proportions and an r-dimensional hyperrectangle for the possible settings of the process variables.
To compute the moments matrix W for the model described in Equation (<ref>), we first need to compute the matrix f(a_js) f^T(a_js), which has elements of the form
( ∏_k = 1^q x_k^n_k) ( ∏_l = 1^r z_l^m_l),
for some n_k, m_l ∈ℕ, k ∈{ 1, ..., q } and l ∈{ 1, ..., r }.
Hence, each element of the moments matrix is of the form
∫_χ( ∏_k = 1^q x_k^n_k) ( ∏_l = 1^r z_l^m_l) dx_1 … dx_q dz_1 … dz_r,
which can be separated in two parts: one corresponding to the process variables and one part corresponding to the ingredient proportions. Therefore, assuming that the r process variables take values from the intervals [ a_1, b_1 ], [ a_2, b_2 ], …, [ a_r, b_r ], the i-th element in the j-th column of the moments matrix, denoted by W_ij, can be calculated as
W_ij =
∫_a_1^b_1∫_a_2^b_2 ... ∫_a_r^b_r∏_l = 1^r z_l^m_l( ∫_S_q-1∏_k = 1^q x_k^n_k dx_1 ... dx_q-1) dz_1 … dz_r,
=
∫_a_1^b_1∫_a_2^b_2 ... ∫_a_r^b_r∏_l = 1^r z_l^m_l( ∏_k = 1^q n_k!/(q - 1 + ∑_k = 1^q n_k)!) dz_1 … dz_r,
=
( ∏_l = 1^r b_l^m_l+1 - a_l^m_l+1/m_l + 1) ( ∏_k = 1^q n_k!/(q - 1 + ∑_k = 1^q n_k)!).
If we adopt the convention that the settings of the process variables are rescaled to the [-1, +1 ] interval, the hyperrectangle becomes a hypercube and the expression for W_ij can be simplified to
W_ij = ( ∏_l = 1^r 1^m_l+1 - (-1)^m_l+1/m_l + 1) ( ∏_k = 1^q n_k!/(q - 1 + ∑_k = 1^q n_k)!).
In the event one of the m_l values is odd, 1^m_l+1 - (-1)^m_l+1 is zero and W_ij also becomes zero. In the event all m_l values are even, 1^m_l+1 - (-1)^m_l+1 is equal to 2.
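The closed form in Equation (<ref>) is straightforward to code; the helper below (our own, not from the paper) computes a single element W_ij from the exponent vectors of the corresponding product of model terms and reproduces the two worked examples that follow:

```python
from math import factorial
import numpy as np

def moments_entry(n, m):
    """W_ij for mixture exponents n = (n_1,...,n_q) and process-variable
    exponents m = (m_1,...,m_r), with each z_l scaled to [-1, 1]."""
    q = len(n)
    z_part = np.prod([(1.0 - (-1.0) ** (ml + 1)) / (ml + 1) for ml in m]) if len(m) else 1.0
    x_part = np.prod([factorial(nk) for nk in n]) / factorial(q - 1 + sum(n))
    return z_part * x_part

print(moments_entry((2, 0, 0), (0,)))   # W_11 = 1/6
print(moments_entry((0, 0, 0), (4,)))   # W_99 = 1/5
```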
So, for example, in the case where there are three mixture variables and one process variable (i.e., q = 3 and r = 1) the model expansion
f(a_js) is
(x_1, x_2, x_1x_2, x_1x_3, x_2x_3, x_1z, x_2z, x_3z, z^2)^T. Multiplying f(a_js) by its transpose yields the matrix
f(a_js) f^T(a_js) =
[ x_1^2 x_1 x_2 x_1^2 x_2 x_1^2 x_3 x_1 x_2 x_3 x_1^2 z x_1 x_2 z x_1 x_3 z x_1 z^2; x_1 x_2 x_2^2 x_1 x_2^2 x_1 x_2 x_3 x_2^2 x_3 x_1 x_2 z x_2^2 z x_2 x_3 z x_2 z^2; x_1^2 x_2 x_1 x_2^2 x_1^2 x_2^2 x_1^2 x_2 x_3 x_1 x_2^2 x_3 x_1^2 x_2 z x_1 x_2^2 z x_1 x_2 x_3 z x_1 x_2 z^2; x_1^2 x_3 x_1 x_2 x_3 x_1^2 x_2 x_3 x_1^2 x_3^2 x_1 x_2 x_3^2 x_1^2 x_3 z x_1 x_2 x_3 z x_1 x_3^2 z x_1 x_3 z^2; x_1 x_2 x_3 x_2^2 x_3 x_1 x_2^2 x_3 x_1 x_2 x_3^2 x_2^2 x_3^2 x_1 x_2 x_3 z x_2^2 x_3 z x_2 x_3^2 z x_2 x_3 z^2; x_1^2 z x_1 x_2 z x_1^2 x_2 z x_1^2 x_3 z x_1 x_2 x_3 z x_1^2 z^2 x_1 x_2 z^2 x_1 x_3 z^2 x_1 z^3; x_1 x_2 z x_2^2 z x_1 x_2^2 z x_1 x_2 x_3 z x_2^2 x_3 z x_1 x_2 z^2 x_2^2 z^2 x_2 x_3 z^2 x_2 z^3; x_1 x_3 z x_2 x_3 z x_1 x_2 x_3 z x_1 x_3^2 z x_2 x_3^2 z x_1 x_3 z^2 x_2 x_3 z^2 x_3^2 z^2 x_3 z^3; x_1 z^2 x_2 z^2 x_1 x_2 z^2 x_1 x_3 z^2 x_2 x_3 z^2 x_1 z^3 x_2 z^3 x_3 z^3 z^4 ].
To illustrate how W_11 is calculated, we start from the first element in the first row and the first column in this matrix, i.e., x_1^2. This term is the square of the first mixture ingredient proportion. Hence, its exponent n_1 is equal to 2. The other two mixture variables, x_2 and x_3, are not present, meaning their exponents n_2 and n_3 are 0. Additionally, this element does not involve any process variables, meaning m_1 = 0. Using Equation (<ref>), we obtain
W_11 =
( 1^m_1+1 - (-1)^m_1+1/m_1 + 1) ( n_1! × n_2! × n_3!/(3 - 1 + n_1 + n_2 + n_3)!) =
( 1^0+1 - (-1)^0+1/0 + 1) ( 2! × 0! × 0!/(3 - 1 + 2 + 0 + 0)!)
= ( 2/1)( 2/24)
= 1/6.
As another illustration, we calculate W_99. To this end, we start from the element in the last row and the last column of f(a_js) f^T(a_js), i.e., z^4. This term is the process variable raised to the 4-th power. Hence, m_1 = 4. None of the mixture variables are present, meaning that their exponents are all 0, and thus n_1 = n_2 = n_3 = 0. So, using Equation (<ref>) again, we obtain
W_99 =
( 1^m_1+1 - (-1)^m_1+1/m_1 + 1) ( n_1! × n_2! × n_3!/(3 - 1 + n_1 + n_2 + n_3)!) =
( 1^4+1 - (-1)^4+1/4 + 1) ( 0! × 0! × 0!/(3 - 1 + 0 + 0 + 0)!)
= ( 2/5)( 1/2)
= 1/5.
Following this process for each of the elements in the matrix f(a_js) f^T(a_js), we obtain the full moments matrix,
W =
[ 1/6 1/12 1/30 1/30 1/60 0 0 0 1/9; 1/12 1/6 1/30 1/60 1/30 0 0 0 1/9; 1/30 1/30 1/90 1/180 1/180 0 0 0 1/36; 1/30 1/60 1/180 1/90 1/180 0 0 0 1/36; 1/60 1/30 1/180 1/180 1/90 0 0 0 1/36; 0 0 0 0 0 1/18 1/36 1/36 0; 0 0 0 0 0 1/36 1/18 1/36 0; 0 0 0 0 0 1/36 1/36 1/18 0; 1/9 1/9 1/36 1/36 1/36 0 0 0 1/5 ].
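The computation of W illustrated above is easy to script. The following is a minimal Python sketch (the implementation referenced in this paper is an R package, so this snippet is purely illustrative) that rebuilds the 9 × 9 moments matrix for q = 3 mixture ingredients and r = 1 process variable from the closed-form expression for W_ij; each model term is encoded by the exponents of x_1, x_2, x_3 and z in the model expansion f(a_js).

from math import factorial
from fractions import Fraction

q, r = 3, 1
# Model terms as exponents (n1, n2, n3, m1) of x1, x2, x3 and z, in the order
# of the model expansion f(a_js) given above.
terms = [
    (1, 0, 0, 0), (0, 1, 0, 0),                 # x1, x2
    (1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 0),   # x1*x2, x1*x3, x2*x3
    (1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1),   # x1*z, x2*z, x3*z
    (0, 0, 0, 2),                                # z^2
]

def moment(n, m):
    # Closed-form integral of prod(x_k^n_k) * prod(z_l^m_l) over the simplex
    # S_{q-1} and the hypercube [-1, 1]^r (the expression for W_ij above).
    z_part = Fraction(1)
    for ml in m:
        z_part *= Fraction(1 - (-1) ** (ml + 1), ml + 1)
    x_part = Fraction(1)
    for nk in n:
        x_part *= factorial(nk)
    return z_part * x_part / factorial(q - 1 + sum(n))

def w_entry(ti, tj):
    # Exponents of the product f_i(a) * f_j(a) are the element-wise sums.
    return moment([ti[k] + tj[k] for k in range(q)],
                  [ti[q + l] + tj[q + l] for l in range(r)])

W = [[w_entry(ti, tj) for tj in terms] for ti in terms]
print(W[0][0], W[8][8])   # Fraction(1, 6) and Fraction(1, 5), as computed above

Running the sketch reproduces, for instance, W_11 = 1/6 and W_99 = 1/5 from the worked examples, as well as the full matrix displayed above.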
As with the D-optimality criterion, we define the Bayesian I-optimality criterion as the I-optimality criterion averaged over the prior distribution π(θ) of the parameter vector θ:
ℐ_B =
∫_ℝ^mtr[ I^-1(X, θ) W]
π(θ) dθ.
§.§ Numerical approximation to optimality criteria
The Bayesian optimality criteria must be approximated numerically because there is no closed-form solution to the integrals in Equations (<ref>) and (<ref>). This is usually done by using random or systematic draws from the prior distribution π(θ) (<cit.>; <cit.>; <cit.>; <cit.>; <cit.>). In our work, we utilize Halton draws from the prior distribution because they reduce the variance of the approximation to the integral and provide a good coverage of the entire domain of the prior distribution (<cit.>; <cit.>). Moreover, bhat2001quasi verified that around 100 Halton draws provide about the same level of accuracy as 2000 pseudo-random draws in the context of a 5-dimensional approximation to the likelihood of a mixed multinomial model. yu2010comparing showed that Halton draws also produce good approximations of integrals with higher dimensions in the context of optimal design for choice experiments.
Denoting the number of Halton draws by R and each individual draw by θ^(i), our approximations for Equations (<ref>) and (<ref>) are
𝒟_B ≈1/R∑_i = 1^R [ det( I^-1(X, θ^(i)) ) ]^1/m,
and
ℐ_B ≈1/R∑_i = 1^R
tr[ I^-1(X, θ^(i)) W],
respectively.
Like ruseckaite_bayesian_2017 and becerra2021bayesian, we used R = 128 Halton draws from a multivariate normal prior distribution in both of our examples in the next section. We verified numerically that this number of draws provided a sufficiently good approximation of the Bayesian optimality criteria for the numbers of parameters in the models used in the two examples.
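As an illustration of these approximations, the following Python sketch draws R Halton points, maps them to a multivariate normal prior, and averages the two criteria. The function info_matrix(X, theta), which should return the multinomial logit information matrix for a given design and parameter vector, and the moments matrix W are assumed helpers and not part of the original text; the published implementation is in R, so this is only a sketch of the idea.

import numpy as np
from scipy.stats import norm, qmc

def bayesian_criteria(X, W, mu, Sigma, info_matrix, R=128, seed=1):
    # Approximate the Bayesian D- and I-criteria with R Halton draws from a
    # N(mu, Sigma) prior; info_matrix(X, theta) is an assumed helper that
    # returns the multinomial logit information matrix of design X.
    m = len(mu)
    u = qmc.Halton(d=m, scramble=True, seed=seed).random(R)   # points in (0, 1)^m
    thetas = np.asarray(mu) + norm.ppf(u) @ np.linalg.cholesky(Sigma).T
    d_vals, i_vals = [], []
    for theta in thetas:
        inv_info = np.linalg.inv(info_matrix(X, theta))
        d_vals.append(np.linalg.det(inv_info) ** (1.0 / m))
        i_vals.append(np.trace(inv_info @ W))
    return float(np.mean(d_vals)), float(np.mean(i_vals))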
§.§ Construction of D- and I-optimal designs
To compute our optimal designs, we used a coordinate-exchange algorithm (<cit.>; <cit.>). A coordinate-exchange algorithm was also used by kessels_efficient_2009, ruseckaite_bayesian_2017, and becerra2021bayesian in the context of choice experimentation. becerra2021bayesian implemented their algorithm in the R programming language rlang with the aid of several existing R packages (<cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>), and created a package called , available at , which allows the computation of locally D-optimal, Bayesian D-optimal, locally I-optimal, and Bayesian I-optimal designs for first-order, second-order, and special-cubic Scheffé models. We extended the package and added the functionality to compute locally D-optimal, Bayesian D-optimal, locally I-optimal and Bayesian I-optimal designs for the model presented in Equation (<ref>), involving mixture ingredient proportions as well as process variables.
The coordinate-exchange algorithm we implemented starts from a random initial design, and begins by optimizing the first ingredient proportion of the first alternative within the first choice set, followed by the second ingredient proportion of the first alternative within the first choice set, and so on, until all q ingredient proportions have been optimized. Then, it continues with each of the r process variables. The algorithm then repeats this process for each alternative in each choice set in the design. The whole process is repeated until the design can no longer be improved or until a maximum number of iterations has been reached. At each step of the coordinate-exchange algorithm, we seek the optimal value of every individual ingredient proportion x_ijs or process variable setting z_ijs. This is a univariate optimization problem which can be solved in a straightforward way using Brent's univariate optimization method brent1973algorithms. Every time Brent's univariate optimization method is invoked during the course of the coordinate-exchange algorithm, the Bayesian D- or I-optimality criterion has to be evaluated. Despite the efficient approximation of these criteria using Halton draws, this renders the coordinate-exchange algorithm for choice experiments computationally intensive.
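The inner univariate step can be sketched as follows in Python, with the design stored as a flat array of coordinates and criterion() standing in for the (assumed) evaluation of the Bayesian D- or I-optimality criterion. The actual implementation discussed above is in R, so this only illustrates the structure of one Brent-type coordinate optimization.

from scipy.optimize import minimize_scalar

def improve_process_coordinate(design, idx, criterion):
    # One coordinate-exchange step for a single process-variable setting:
    # find the best value in [-1, 1] for coordinate `idx` of `design`,
    # holding all other coordinates fixed.
    def objective(z):
        trial = design.copy()
        trial[idx] = z
        return criterion(trial)   # assumed Bayesian D- or I-criterion evaluation
    res = minimize_scalar(objective, bounds=(-1.0, 1.0), method="bounded")
    design[idx] = res.x           # accept the minimizer found by the bounded Brent method
    return res.fun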
As indicated in piepel_construction_2005, goos_jones_optimal_2011, ruseckaite_bayesian_2017, and becerra2021bayesian, the coordinate-exchange algorithm must be modified to deal with mixtures. Since the mixture proportions must sum up to one, they cannot be independently changed. As a matter of fact, a change in one proportion requires a change in at least one other proportion. This dependency is solved by using the so-called Cox effect direction (<cit.>; <cit.>; <cit.>). After a change of one of the ingredient proportions, x_ijs, to x_ijs + Δ, we modify the other q-1 proportions as follows:
x_kjs^new =
( 1 - Δ/(1 - x_ijs) ) x_kjs if x_ijs ≠ 1,
(1 - (x_ijs + Δ))/(q - 1) if x_ijs = 1.
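A small Python sketch of this adjustment (the example mixture is made up for illustration):

import numpy as np

def cox_adjust(x, i, delta):
    # Change proportion i of mixture x by delta and rescale the remaining
    # proportions along the Cox effect direction so they still sum to one.
    x = np.asarray(x, dtype=float)
    new_x = x.copy()
    if x[i] != 1.0:
        new_x *= 1.0 - delta / (1.0 - x[i])
    else:
        new_x[:] = (1.0 - (x[i] + delta)) / (len(x) - 1)
    new_x[i] = x[i] + delta
    return new_x

print(cox_adjust([0.5, 0.3, 0.2], 0, 0.1))   # [0.6, 0.24, 0.16], still sums to one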
§ RESULTS
In this section, as proofs of concept, we present D- and I-optimal designs for two example choice experiments involving a mixture and one or more process variables. In both examples, we use a normal prior distribution, which is the most commonly used prior in the literature on the optimal design of choice experiments. The first example involves a cocktail tasting experiment and was inspired by courcoux1997methode, while the second example involves a fish patty experiment. The inspiration for this example came from cornell2002experiments, cornell1984fractional, cornell1988analyzing and goos2022fish.
§.§ Cocktail example
courcoux1997methode discussed an experiment in which cocktails involving mango juice, blackcurrant syrup, and lemon juice were tasted. The experiment was conducted by asking respondents to taste different pairs of cocktails and indicating their preferred one in each pair. ruseckaite_bayesian_2017 and becerra2021bayesian revisited this experiment, computed prior distributions for the parameter vector θ of a special-cubic Scheffé model, and created optimal experimental designs using this prior.
In the experiment, courcoux1997methode imposed lower bounds of 0.3, 0.15 and 0.1 on the three ingredient proportions. To deal with this issue and to be able to use our implementation of the coordinate-exchange algorithm, like ruseckaite_bayesian_2017 and becerra2021bayesian, we expressed the mixtures defining the cocktails in terms of so-called pseudocomponents x_1, x_2, and x_3. These pseudocomponents are defined such that they take a minimum value of 0 and a maximum value of 1, and sum up to one. The conversion of the true ingredient proportions into pseudocomponent proportions is done via the formula x_i = (a_i - L_i)/(1 - L), where L_i denotes the lower bound of ingredient i, a_i denotes the true ingredient proportion, and L is the sum of the lower bounds for all q ingredient proportions.
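For illustration, a short Python sketch of this conversion, using the three lower bounds quoted above (the example mixture itself is hypothetical):

def to_pseudocomponents(a, lower):
    # x_i = (a_i - L_i) / (1 - L), with L the sum of all lower bounds L_i.
    L = sum(lower)
    return [(ai - li) / (1.0 - L) for ai, li in zip(a, lower)]

print(to_pseudocomponents([0.60, 0.25, 0.15], [0.30, 0.15, 0.10]))
# [0.666..., 0.222..., 0.111...] -- the pseudocomponent proportions again sum to one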
In September 2019, students at KU Leuven replicated the experiment by asking 35 respondents to taste cocktails made with mango juice, blackcurrant syrup, and lemon juice and say which one they preferred. Each respondent tasted four choice sets of two cocktails. This experiment, as the original in courcoux1997methode, did not have process variables. Nonetheless, since the preference for a cocktail may depend on the temperature at which it is served, we used this data and created additional simulated responses with a synthetic process variable related to temperature to obtain a prior normal distribution for parameter vector θ using the model in Equation (<ref>).
We then fitted a multinomial logit model to these data, which gave us an estimated mean and variance-covariance-matrix, which in turn we used to construct the prior distribution in our cocktail example.
Our prior mean vector is θ = (7.562, 0.907, 5.109, 14.573, 17.1806, 19.2705, 19.2705, 19.2705, 0)^T. This means that the utility of alternative j in choice set s was modeled as
U_js =
7.562 x_1js + 0.907 x_2js
+ 5.109 x_1js x_2js + 14.573 x_1js x_3js + 17.1806 x_2js x_3js
+ 19.2705 x_1js z_1js + 19.2705 x_2js z_1js + 19.2705 x_3js z_1js
+ 0 z_1js^2 + ε_js.
The prior variance-covariance matrix we used is Σ_0 = diag(4, 9, 49, 36, 49, 900, 900, 900, 900). It must be noted that the variances in this matrix were rounded to the nearest integer from the estimated multinomial logit model. With this prior distribution, we computed Bayesian D- and I-optimal designs using the coordinate-exchange algorithm discussed in Section <ref>.
Our Bayesian D- and I-optimal designs are shown graphically in Figure <ref>. In the figure, the mixtures in each of the 35 × 4 = 140 choice sets are presented in terms of the pseudocomponent proportions. The shade of blue of each dot denotes the level of process variable temperature for the corresponding mixture. Figure <ref> shows the distribution of the temperatures selected for the alternatives in the 140 choice sets in each of the designs.
In Figure <ref>, it can be seen that the points in the I-optimal design are spread more evenly over the entire simplex compared to those of the D-optimal counterpart. This is consistent with the results of becerra2021bayesian for choice experiments with mixtures in the absence of process variables. It is also worth pointing out that both designs use levels other than -1 and +1 for the process variable temperature, even though the mean prior value for the quadratic effect of the process variable temperature is zero.
Figure <ref> shows the fraction of design space plots of the two Bayesian optimal designs. These plots display the performance of the designs in terms of the prediction variance for each point in the experimental region or design space zahran2003fraction. The horizontal axis corresponds to a fraction of the experimental region, while the vertical axis ranges from the minimum to the maximum prediction variance over the entire experimental region goos_jones_optimal_2011. A curve in a fraction of design space plot shows the prediction variances f^T(x) I^-1(X, θ) f(x) for a large number of random points selected from the experimental region, ordered from small to large. Ideally, all prediction variances are small throughout the entire experimental region, in which case the curve in the fraction of design space plot is virtually flat. Another way of explaining the fraction of design space plot is to say that it is the cumulative distribution function of the prediction variances across the experimental region, but with the positions of the two axes swapped.
The typical method to construct a fraction of design space plot for a given design is to randomly sample a large number of points M (e.g., 10,000 points) inside the experimental region. Then, the prediction variance f^T(x) I^-1(X, θ) f(x) is calculated for each of these points, and all M prediction variances
are sorted from smallest to largest to obtain the empirical cumulative distribution function of the prediction variances
(<cit.>; <cit.>; <cit.>). If we denote the prediction variance of the i-th sampled point by v_i, then the non-decreasing curve joining the M pairs (i/M, v_i) forms the fraction of design space plot. A point i/M on the horizontal axis of the fraction of design space plot gives the proportion of the design space that has a prediction variance less than or equal to the corresponding value v_i on the vertical axis smucker2018optimal. In order to deal with the issue of the prediction variance depending on the unknown parameter vector, we computed prediction variances for 128 Halton draws from the prior distribution of the parameter vector θ and averaged the results.
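The construction just described can be sketched in a few lines of Python; f, info_matrix and sample_region are assumed helpers (the model expansion, the multinomial logit information matrix, and a sampler of random points in the experimental region) and are not part of the original text.

import numpy as np

def fds_curve(X, f, info_matrix, sample_region, thetas, M=10000, seed=1):
    # Prediction variances f(a)' I^{-1}(X, theta) f(a) at M random points of the
    # experimental region, averaged over the prior draws in `thetas` and sorted.
    rng = np.random.default_rng(seed)
    inv_infos = [np.linalg.inv(info_matrix(X, th)) for th in thetas]
    variances = []
    for _ in range(M):
        fa = f(sample_region(rng))
        variances.append(np.mean([fa @ Minv @ fa for Minv in inv_infos]))
    v = np.sort(np.array(variances))
    fractions = np.arange(1, M + 1) / M
    return fractions, v   # plotting v against fractions gives the FDS plot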
The main takeaway from Figure <ref> is that the prediction variance is much higher for the Bayesian D-optimal design than for its I-optimal counterpart. The median prediction variance for the Bayesian D-optimal design is about 21.6, while it is about 10.9 for the Bayesian I-optimal one.
§.§ Fish patty example
The second example we discuss involves a fish patty and was inspired by the work of cornell2002experiments, cornell1984fractional, cornell1988analyzing, goos2022fish. In the original experiment, the interest was in the firmness of patties made with a mixture of three fish species: mullet, sheepshead, and croaker. These patties were subjected to different processing conditions: oven cooking temperature (375 or 425 degrees Fahrenheit), oven cooking time (25 or 40 minutes), and deep fat frying time (25 or 40 seconds). The first three variables are mixture variables and the last three are process variables.
Since the original interest was in the firmness of the patty, no preference data is available to construct a normal prior distribution for our example. However, assuming firmness is proportional to utility, we used the original data and the model
Y =
γ_1^0 x_1 + γ_2^0 x_2 + γ_3^0 x_3
+ γ_12^0 x_1 x_2 + γ_13^0 x_1 x_3 + γ_23^0 x_2 x_3
+ γ_1^1 x_1 z_1 +γ_2^1 x_2 z_1 + γ_3^1 x_3 z_1
+ γ_1^2 x_1 z_2 + γ_2^2 x_2 z_2 + γ_3^2 x_3 z_2
+ γ_1^3 x_1 z_3 + γ_2^3 x_2 z_3 + γ_3^3 x_3 z_3
+ α_12 z_1 z_2 + α_13 z_1 z_3 + α_23 z_2 z_3 + ε
to obtain a prior point estimate for the parameter vector θ. This model is the same as the one in Equation (<ref>), but without the quadratic terms for the three process variables. The reason we did not include these quadratic effects is that, in the original experiment, the process variables were studied at two levels only. As a consequence, the quadratic effects were inestimable.
We obtained the following estimate for the parameter vector
θ^T =
(γ_1^0, γ_2^0, γ_3^0, γ_12^0, γ_13^0, γ_23^0, γ_1^1, γ_2^1, γ_3^1, γ_1^2, γ_2^2, γ_3^2, γ_1^3, γ_2^3, γ_3^3, α_12, α_13, α_23)
= (2.864, 1.074, 2.003, -0.974, -0.834, 0.356, 0.376, 0.106, 0.206, 0.642, 0.2, 0.403, -0.078, -0.087, -0.01, 0.027, 0.001, -0.008).
Next, we transformed the parameter vector to the identified parameter space, as explained in Section <ref>. To this end, we computed
γ_1^0* = γ_1^0 - γ_3^0 = 2.864 - 2.003 = 0.861 and γ_2^0* = γ_2^0 - γ_3^0 = 1.074 - 2.003 = -0.929. As a result, our prior model for the utility of alternative j in choice set s in the fish patty example is
U_js =
0.861 x_1js - 0.929 x_2js
-0.974 x_1js x_2js -0.834 x_1js x_3js + 0.356 x_2js x_3js
+ 0.376 x_1js z_1js + 0.106 x_2js z_1js + 0.206 x_3js z_1js
+ 0.642 x_1js z_2js + 0.2 x_2js z_2js + 0.403 x_3js z_2js
- 0.078 x_1js z_3js -0.087 x_2js z_3js -0.01 x_3js z_3js
+ 0.027 z_1js z_2js + 0.001 z_1js z_3js - 0.008 z_2js z_3js
+ 0 z_1js^2 + 0 z_2js^2 + 0 z_3js^2 + ε_js.
The estimates of the parameters in the initial model were used as the means of a set of normal prior distributions with variance-covariance matrices of the form Σ_0 = κI_21, where κ is a positive scalar that controls the level of uncertainty and I_21 is the identity matrix of size 21. A higher value of κ indicates a higher level of uncertainty concerning the parameter values. This structure of variance-covariance gives us a simple way to study the impact of different levels of uncertainty expressed by the prior distribution on the final design.
The variance-covariance matrix Σ_0 corresponding to the initial 21-parameter model must then also be transformed to the identified 20-dimensional parameter space. This results in a new 20 × 20 prior variance-covariance matrix
Σ_0^' =
[ 2κ κ 0 … 0 0; κ 2κ 0 … 0 0; 0 0 κ … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … κ 0; 0 0 0 … 0 κ ].
We computed Bayesian D- and I-optimal designs for the same κ values as ruseckaite_bayesian_2017 and becerra2021bayesian, that is 0.5, 5, 10 and 30. All of our Bayesian D- and I-optimal designs are shown graphically in Figures <ref> and <ref>.
It can be seen that the spread in the points in the optimal designs increases with κ, and the spread is more pronounced for the Bayesian I-optimal designs than for the Bayesian D-optimal designs.
Figure <ref> shows the fraction of design space plots for the Bayesian D- and I-optimal designs. For each value of κ, the D-optimal design has a much higher prediction variance than its I-optimal counterpart. Hence, the Bayesian I-optimal designs add substantial value in terms of precision of prediction when compared to Bayesian D-optimal designs.
§ DISCUSSION
We introduced the theory for choice experiments involving mixtures and process variables, and embedded the Bayesian D- and I-optimality criteria in a coordinate-exchange algorithm for constructing optimal designs for this type of choice experiments. We also showed two examples in which the I-optimal designs perform substantially better than their D-optimal counterparts in terms of the variance of the predicted utility, which is something desirable because it is crucial to have precise predictions for any combination of ingredient proportions and process variables when optimizing the formulation of a mixture and the settings of the related process variables.
We identified three possible extensions of our work. The first possibility is inspired by a practical difficulty that arises when conducting choice experiments with mixtures with or without process variables. When the number of distinct mixtures appearing in the Bayesian optimal designs is large and the mixtures have to be tasted, it is logistically very complicated to perform the experiment. For instance, for a given number of tasters, organizing a choice experiment in which 40 distinct mixtures have to be tasted in perhaps 80 different choice sets is much harder to organize and perform than a choice experiment in which only 20 distinct mixtures have to be tasted in 40 different choice sets. While the former experiment may be preferable from a statistical viewpoint, it may be practically infeasible. Therefore, it is valuable to develop an algorithm that finds optimal designs with mixtures and process variables with an upper bound on the number of distinct mixtures and/or an upper bound on the number of distinct choice sets, as well as an upper bound on the number of distinct settings and values that the process variables can take.
Second, we focused on the multinomial logit model, which assumes that there is homogeneity in the preferences of the respondents. This works well in many practical scenarios, but it might be an unrealistic assumption, as demonstrated by courcoux1997methode and goos_hamidouche_2019_choice. Hence, it would make sense to extend the algorithms presented here to other types of choice models that take into account the possible presence of consumer heterogeneity, such as the mixed logit model and the latent class choice model.
A third topic for future research would be to modify our coordinate-exchange algorithm, so that it can also cope with experimental regions for the ingredient proportions that are not a simplex. Such experimental regions arise when there are constraints on the ingredient proportions other than lower bounds for individual proportions. Methodologically speaking, this is not highly innovative, since the mixture coordinate-exchange algorithm of piepel_construction_2005 for linear regression models is able to deal with this complication. However, embedding this capability in our implementation of the coordinate-exchange algorithm for choice experiments with mixtures would be useful for practitioners.
Finally, we would like to point out that the work we presented here has applications in fields of research other than food. This is because choice experiments involving mixtures are relevant in, for example, transportation and economics too. As a matter of fact, zijlstra2019mixture conducted a choice experiment in which the mixtures between which the respondents had to choose were different ways in which a given mobility budget could be spent. khademi2013traveler discuss a choice experiment involving a mixture of road toll, congestion pricing and parking price. boonaert2021twofold use a choice experiment concerning the desired composition of a family, where the family composition is considered a mixture of boys and girls with different education levels. Finally, yang2016prevalence use a mixture choice experiment to measure context-dependent responses to accumulative energy charges under budget constraints. In all of these non-food-related choice experiments, an ad-hoc experimental design was used and there was a variable related to the total amount of the mixture. This total amount can be viewed as a process variable, and, therefore, the models and the optimal design approach we present here would be applicable to these choice experiments too.
|
http://arxiv.org/abs/2307.02968v1
|
20230706130827
|
A Simple $(1-ε)$-Approximation Semi-Streaming Algorithm for Maximum (Weighted) Matching
|
[
"Sepehr Assadi"
] |
cs.DS
|
[
"cs.DS",
"cs.DC"
] |
A Simple (1-)-Approximation Semi-Streaming Algorithm for Maximum (Weighted) Matching
Sepehr Assadi[([email protected]) Cheriton School of Computer Science, University of Waterloo, and Department of Computer Science, Rutgers University.
Supported in part by an Alfred P. Sloan Fellowship, a University of Waterloo startup grant, an NSF CAREER grant CCF-2047061, and a gift from Google Research.]
We present a simple semi-streaming algorithm for (1-ε)-approximation of bipartite matching in O(log(n)/ε) passes. This
matches the performance of state-of-the-art “ε-efficient” algorithms, while being considerably simpler.
The algorithm relies on a “white-box” application of the multiplicative weight update method with a self-contained primal-dual analysis that can be of independent interest.
To showcase this, we use the same ideas, alongside standard tools from matching theory, to present an equally simple semi-streaming algorithm for (1-ε)-approximation of weighted matchings in general (not necessarily bipartite)
graphs, again in O(log(n)/ε) passes.
§ INTRODUCTION
We consider the maximum matching problem in the semi-streaming model of <cit.>: given any n-vertex graph G=(V,E) whose edges are presented in a stream, the goal is to make a minimal number of passes
over this stream and use a limited space of Õ(n) := O(n ·log(n)) bits to output a (1-ε)-approximate maximum matching of G for some given ε > 0.
The maximum matching problem is arguably the most studied problem in the graph streaming literature at this point (see, e.g. <cit.> for a quick summary).
Most relevant to our work, the first (1-ε)-approximation algorithm for maximum cardinality matching
was designed by <cit.> which requires (1/ε)^O(1/ε) passes.
This algorithm has since been improved numerous times <cit.> culminating in the state-of-the-art, consisting of two incomparable families of algorithms:
* “Constant pass” algorithms. The algorithm of <cit.> (and its precursor <cit.>) with O(1/ε^2) passes for bipartite graphs and that of <cit.> with poly(1/ε) passes for general graphs. Similarly, for weighted graphs,
we have the algorithm of <cit.> with O(log(1/ε)/ε^2) passes for bipartite graphs and <cit.> with poly(1/ε) passes for general graphs[It is worth mentioning that the dependence on
ε in <cit.> is quite high – it appears to be O((1/ε)^19) passes in <cit.> (see Lemma 5.6 of arXiv version 5) and can only be higher in <cit.>. ].
* “ε-efficient” algorithms. The algorithm of <cit.> (and its precursor <cit.>) with O(log(n)/ε) passes for general weighted graphs, and a simpler and space-optimal algorithm of <cit.>
with O(log(n)·log(1/ε)/ε) passes that is specific to bipartite cardinality matching.
See <Ref> for a detailed summary. Nevertheless, despite significant progress in bringing down the pass-complexity of more general cases, for the most basic version of the problem, namely, maximum (cardinality) bipartite matching (MBM),
the best bounds have been stuck at O(1/ε^2) and O(log(n)/ε) passes for over a decade now (since <cit.> and <cit.>, respectively). On the other hand,
even the recent breakthroughs on multi-pass graph streaming lower bounds in <cit.> can only rule out o(log(1/ε))-pass algorithms for MBM <cit.> (under a certain combinatorial hypothesis), leaving an exponential gap open for further progress on both ends.
In our opinion, a key contributing factor to this lack of algorithmic progress is the fact that the O(log(n)/ε)-pass algorithms of <cit.> are quite complicated (even for MBM). While some simplifications have been
made in <cit.>, even this new algorithm is far from being simple. This is in contrast with the constant-pass algorithms that, at least for MBM,
admit quite simple algorithms in <cit.> and even already in <cit.>. The goal of this paper is to remedy this state of affairs.
§.§ Our Contributions
We present a novel way of approximating matchings that is easily implementable via semi-streaming algorithms (among others).
The high level idea—with some ambiguity left on purpose—is:
Our general algorithmic approach:
* Sample Õ(n/ε) edges uniformly and compute a maximum matching M of the sample.
* If M is large enough, return M; otherwise, (a) find edges that “could have potentially led to a larger matching”,
(b) increase their “importance”, and repeat the sampling.
This general idea of “sample-and-solve” is a staple in the graph streaming literature dating back, at the very least, to the filtering <cit.> and sample-and-prune <cit.> techniques (both for implementing greedy algorithms).
It relies on a fundamental power of semi-streaming algorithms: once we sparsify the input to fit into the memory, we can process it essentially however we want (in this context,
once the algorithm only has Õ(n/ε) edges to work with in the sample, it can find its maximum matching or perform any other “heavy” computation easily).
The approach proposed above, based on adjusting importance of edges, is clearly reminiscent of the Multiplicative Weight Update (MWU) method (see <cit.>) and the Plotkin-Shmoys-Tardos framework for approximating
packing/covering LPs <cit.> using MWU. There is just one issue here: we would like this algorithm to converge in ≈ 1/ε passes, while these MWU-based approaches
tend to only guarantee ≈ 1/ε^2 iterations for convergence to a (1-ε)-approximate solution (see, e.g., <cit.>). Addressing this issue is the key difference in our work compared to prior work.
Prior approaches. The algorithms in <cit.> also start with the same overall approach and address the above-mentioned issue through several different steps:
(i) using a non-standard LP relaxation of the problem, (ii) relying on the dual variables of this LP to guide step (a) of the approach, (iii) adding a penalty-term to the LP to still maintain an O(log(n)/ε^2) iterations
convergence guarantee in Plotkin-Shmoys-Tardos framework (to reduce the width of the resulting problem; see <cit.>), (iv) “folding” O(1/ε) iterations of this framework in O(1) passes, and (v) using a notion of a
“deferred (cut) sparsification” (instead of sampling) that allows for implementing this last step. We refer the reader to <cit.> for more details on this algorithm; here, we only note
that the end result is a highly sophisticated algorithm that barely resembles the above strategy but can now run in O(log(n)/ε) passes.
The recent algorithm of <cit.> entirely deviates from the above approach. It instead relies on more sophisticated optimization tools in <cit.> on area convexity based on classical work in <cit.> that give ≈ 1/ε-iteration solvers directly. This approach to some extent ignores the aforementioned power of semi-streaming algorithms—meaning arbitrary computation power on sparse-enough inputs—and
seems to be highly tailored to bipartite cardinality matching[The work of <cit.> also have an algorithm for weighted bipartite matching but the pass-complexity depends linearly on maximum weight of an edge (which can be polynomial in n) and hence is typically not efficient.].
Our approach. Unlike prior work, we are going to revert to the original approaches of <cit.> for greedy algorithms and
implement the above algorithmic approach quite literally, without relying on Plotkin-Shmoys-Tardos or similar generic frameworks.
Concretely, our algorithm for MBM is this: in step (a), find
a minimum (bipartite) vertex cover of the sampled graph (relying on Konig's theorem; see <Ref>); we then consider any edge of the original graph not covered by this vertex cover
as an edge that “could have potentially led to a larger matching”. For step (b), we double the importance of these edges (making
them twice as likely to be sampled next)[As a side note, this is a much more aggressive update rule compared to a typical MWU application, say in Plotkin-Shmoys-Tardos framework, which would have updated the weights
by only a (1 + ε) factor; see also <Ref>.]. A simple analysis similar to MWU, relying on the duality of matching and vertex covers, bounds
the number of iterations by O(log(n)/ε), leading to the following result.
There is a simple semi-streaming algorithm that given any n-vertex bipartite graph G=(L,R,E) and a parameter ε ∈ (0,1), uses O(nlog(n)/ε) bits of space and O(log(n)/ε) passes
and with high probability outputs a (1-ε)-approximate maximum matching of G.
We believe <Ref> is our main contribution as it already contains our key new ideas. Still, this result can be
considerably generalized all the way to maximum weight (general) matchings, without much extra work and by relying on standard tools from matching theory.
There is a simple semi-streaming algorithm that given any n-vertex general graph G=(V,E) with integer edge weights w: E →ℕ and a parameter ε ∈ (0,1), uses O(nlog^2(n)/ε) bits of space and O(log(n)/ε) passes
and with high probability outputs a (1-ε)-approximate maximum weight matching of G.
<Ref> is now providing a considerably simpler version of the main results of <cit.> (also with a better space-dependence by poly(logn, 1/ε) factors). We hope this can pave the path for both future theoretical improvements
and more practical algorithms for this fundamental problem[It is worth mentioning that the generic approaches of <cit.> that are most similar to our algorithms have
indeed led to highly practical algorithms; see these two papers for the empirical evaluations.].
Finally, we note that in the interest of keeping the ideas in this paper as clear and transparent as possible, we have opted to focus only on the most important aspects of our algorithms. However,
in <Ref>, we point out several standard and not-so-standard extensions of our algorithms such as improved runtime, O(1/ε)-pass algorithms in n^1+Ω(1) space, derandomization, and others.
Notation. For any graph G=(V,E), we use n to denote the number of vertices and m as the number of edges.
We further use μ(G) to denote the maximum matching size in G and μ(G,w) to denote the maximum matching weight in G under edge weights w: E →.
We say an event happens “with high probability” or “with exponentially high probability” if the probability of it not happening can be bounded by n^-Θ(1) or exp(-Θ(n)), respectively.
We are going to prove that our algorithms in both <Ref> and <Ref> succeed even with exponentially high probability (which is a stronger guarantee than what is typical in this context).
§ MAXIMUM CARDINALITY BIPARTITE MATCHING
We prove <Ref> in this section. We start by recalling some basic facts and definitions
from matching theory in bipartite graphs. We then present our new algorithm in a generic and model-independent way, and subsequently show how it can be implemented in the semi-streaming model.
§.§ Basics of Matching Theory in Bipartite Graphs
Let G=(L,R,E) be a bipartite graph. Recall the following definitions:
* A matching M is a set of vertex-disjoint edges in E and a fractional matching x ∈ [0,1]^E is an assignment to the edges
so that for every vertex v ∈ L ∪ R, we have ∑_e ∋ v x_e ≤ 1. We denote the size of a fractional matching x by |x| := ∑_e ∈ E x_e.
* Similarly, a vertex cover U is a set of vertices incident on every edge and a fractional vertex cover y ∈ [0,1]^V is an assignment to the vertices so that for every edge e=(u,v) ∈ E, y_u + y_v ≥ 1.
We denote the size of a fractional vertex cover y by |y| := ∑_v ∈ L ∪ R y_v.
(We only use fractional matchings and vertex covers in the analysis).
Konig's theorem <cit.> establishes duality of maximum (fractional) matchings and minimum (fractional) vertex covers in bipartite graphs.
[Konig's theorem]
In any bipartite graph, sizes of maximum matchings, fractional matchings, minimum vertex covers, and fractional vertex covers are all the same.
See the excellent book of Lovasz and Plummer <cit.> on matching theory for more details.
§.§ The Algorithm
We give the generic algorithm here and postpone its semi-streaming implementation to <Ref>.
A sample-and-solve approximation algorithm for maximum bipartite matching.
* Input: A bipartite graph G=(L,R,E) and parameter ε ∈ (0,1);
* Output: A (1-ε)-approximate maximum matching in G.
* Start with importance[While it is more common to refer to this concept as “weight” in the context of MWU, given we will eventually work with weighted matchings, we use
“importance” to avoid ambiguity.] q^(1)_e = 1 for every edge e ∈ E and define Q^(1) := ∑_e ∈ E q^(1)_e.
* For r=1 to R := 4·log(m)/ε iterations:
* Sample each edge e ∈ E with probability:
p^(r)_e := (2n/ε)·(q^(r)_e/Q^(r)).
* Compute a maximum matching M^(r) and a minimum vertex cover U^(r) of the sample.
* For any edge e ∈ E not covered by U^(r), update:
q^(r+1)_e = q^(r)_e · 2.
Then, let Q^(r+1) = ∑_e ∈ E q^(r+1)_e.
* Return the largest of matchings M^(r) for r ∈ [R].
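To make the loop structure concrete, here is a minimal offline (non-streaming) Python sketch of the algorithm above; it is purely illustrative and is not the semi-streaming implementation of the next subsection. The maximum matching and the Konig vertex cover of each sample are computed with a simple augmenting-path routine.

import math
import random
from collections import defaultdict

def matching_and_koenig_cover(edges, left):
    # Maximum matching (augmenting paths) and a minimum vertex cover, via
    # Konig's theorem, of a bipartite edge set; `left` lists the left vertices.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    match_r = {}                               # right vertex -> matched left vertex
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_r or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False
    for u in left:
        augment(u, set())
    matched_left = set(match_r.values())
    # Alternating search from unmatched left vertices (Konig's construction).
    visited_l = {u for u in left if u not in matched_left}
    visited_r, stack = set(), list(visited_l)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in visited_r:
                visited_r.add(v)
                w = match_r.get(v)
                if w is not None and w not in visited_l:
                    visited_l.add(w)
                    stack.append(w)
    cover = (set(left) - visited_l) | visited_r
    matching = [(u, v) for v, u in match_r.items()]
    return matching, cover

def sample_and_solve(edges, left, right, eps):
    n, m = len(left) + len(right), len(edges)
    imp = {e: 1.0 for e in edges}              # q^(1)_e = 1
    best = []
    for _ in range(int(4 * math.log2(m) / eps) + 1):
        total = sum(imp.values())              # Q^(r)
        sample = [e for e in edges
                  if random.random() < min(1.0, (2 * n / eps) * imp[e] / total)]
        M, U = matching_and_koenig_cover(sample, left)
        if len(M) > len(best):
            best = M
        for (u, v) in edges:                   # double the importance of uncovered edges
            if u not in U and v not in U:
                imp[(u, v)] *= 2.0
    return best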
For any bipartite graph G=(L,R,E) and parameter ε ∈ (0,1), <Ref> outputs a matching of size (1-ε) ·μ(G) in G with exponentially high probability.
The proof follows the recipe of MWU analysis, using Q^(r) as a potential function. The first lemma upper bounds Q^(R) at the end of the algorithm.
With exponentially high probability, Q^(R) ≤ (1+ε/2)^R· m.
Fix any iteration r ∈ [R]. Let F^(r)⊆ E denote the set of edges not covered by U^(r). We claim that with exponentially high probability, we have,
∑_e ∈ F^(r) q^(r)_e ≤ (ε/2)·Q^(r);
in words, <Ref> states that the importance of edges not covered by U^(r) in the graph, according to the importances of iteration r, is relatively “small” (despite the fact that U^(r) was computed
on a sample and not the entire input). Before proving this claim, let us see how it concludes the proof.
By a union bound over all O(log(n)/ε) iterations of the algorithm, we have that for every r ∈ [R],
Q^(r+1) = ∑_e ∈ F^(r) q^(r+1)_e + ∑_e ∈ E ∖ F^(r) q^(r+1)_e by partitioning the edges in and out of F^(r)
= ∑_e ∈ F^(r) 2 · q^(r)_e + ∑_e ∈ E ∖ F^(r) q^(r)_e by the update rule of the algorithm
= (∑_e ∈ F^(r) q^(r)_e) + Q^(r) by the definition of Q^(r)
≤ (1+ε/2)·Q^(r),
where the inequality is by <Ref>. The lemma then follows from this and since Q^(1) = m.
Proof of <Ref>. Let U ⊆ V be any subset of vertices in the graph and F(U) be the set of edges not covered by U.
Suppose
∑_e ∈ F(U) q^(r)_e > (ε/2)·Q^(r);
we show that in this case, with an exponentially high probability, U cannot be a vertex cover of the sampled edges either. Indeed, we have,
Pr[U can be a vertex cover] = Pr[no edge from F(U) is sampled]
= ∏_e ∈ F(U) (1-p^(r)_e) by the independence and the sampling probability of edges
≤exp(-∑_e ∈ F(U) p^(r)_e) as (1-x) ≤ e^-x for all x ∈ [0,1]
= exp(-(2n/ε)·∑_e ∈ F(U) q^(r)_e/Q^(r)) by the choice of p^(r)_e in <Ref>
≤exp(-n) ≈ 2^-1.44n
by our assumption about U and importance of edges in F(U) earlier.
A union bound over 2^n choices of U ensures that with exponentially high probability, any choice of U^(r) that is returned as a minimum vertex cover
of sampled edges should satisfy <Ref>.
On the other hand, we are going to show that if none of the matchings M^(r) is sufficiently large, then the importance of at least one edge should have dramatically increased to the point that it will contradict the bounds in <Ref>.
The proof of this lemma is based on a simple primal-dual analysis using Konig's theorem in <Ref>.
Suppose in every iteration r ∈ [R], we have |M^(r)| < (1-ε) ·μ(G). Then,
there exists at least one edge e ∈ E such that q^(R)_e ≥ 2^ε· R.
Assume the contrary that for every edge e ∈ E, q^(R)_e < 2^ε· R. Given the update rule of the algorithm, this means that every single edge e ∈ E is covered by U^(r)'s at least (1-ε) · R times.
Define the fractional vector y ∈ [0,1]^V so that for every v ∈ V:
y_v := 1/R·|{r ∈ [R] : v ∈ U^(r)}|,
i.e., y_v is the fraction of iterations wherein v belonged to the computed vertex cover U^(r). By the above argument,
for every edge e=(u,v) ∈ E, we have,
y_u + y_v ≥ 1/R·|{r ∈ [R] : e is covered by U^(r)}| ≥ (1-ε).
This implies that the vector (1-ε)^-1· y is a fractional vertex cover of G.
At the same time, we also have that
|y| = ∑_v ∈ V y_v = 1/R·∑_r=1^R |U^(r)| < 1/R· R · (1-ε) ·μ(G) = (1-ε) ·μ(G),
where the inequality is by Konig's theorem (<Ref>) and the assumption that |M^(r)| < (1-ε) ·μ(G)
in the lemma statement. This implies that the vector (1-ε)^-1· y is a fractional vertex cover of G with size strictly less than μ(G).
Another application of Konig's theorem (<Ref>) implies that maximum matching in G has to be of size strictly less than μ(G), a contradiction.
We are now ready to conclude the proof of <Ref>.
Assume the exponentially high probability event of <Ref> happens, thus
Q^(R) ≤ (1+ε/2)^R· m < 2^3ε R/4 + log(m), as (1+x) < 2^3x/2 for x > 0.
Suppose towards a contradiction that none of the matchings M^(r) for r ∈ [R] computed by the algorithm
are of size at least (1-ε) ·μ(G). Then, by <Ref>, there is an edge e ∈ E such that
q^(R)_e ≥ 2^ε R.
Putting these two equations together, as q^(R)_e ≤ Q^(R) (by positivity of importances), we obtain that
ε R < 3ε R/4 + log(m),
which only holds for R < 4log(m)/ε, contradicting the choice of R in the algorithm. Thus, at least one of the matchings returned by the algorithm is of size (1-ε) ·μ(G),
concluding the proof.
§.§ Semi-Streaming Implementation
We now present a semi-streaming implementation of <Ref> in the following lemma.
<Ref> can be implemented in the semi-streaming model with O(nlog(n)/ε) bits of memory and O(log(n)/ε) passes.
We implement each iteration of the algorithm in O(1) streaming passes. The main part of the implementation is to maintain the importance of the edges implicitly. We do this as follows:
* For every vertex v ∈ V and every iteration r ∈ [R],
we maintain a bit b(v,r) denoting if v belongs to the vertex cover U^(r) of iteration r (this needs O(logn) bits per vertex in total);
* Whenever an edge e=(u,v) ∈ E arrives in the stream in the r-th pass, we can compute the number of times e has remained uncovered by U^(r') for r' < r, denoted by c(e,r). This is done by checking b(u,r') and b(v,r') stored
so far; in particular,
c(e,r) = |{r' < r : b(u,r') = b(v,r') = 0}|.
The importance q^(r)_e of the edge e in this pass r is then 2^c(e,r).
* Once we calculate the importance of an edge upon its arrival, we can sample the edge with probability prescribed by <Ref> (maintaining the normalization factor Q^(r) can be done the
same way also).
The rest of the algorithm can be implemented directly. In particular, the total number of edges sampled in each iteration is O(n/ε) with exponentially high probability by Chernoff bound. Thus, in the semi-streaming algorithm,
we can store these edges and then compute M^(r) and U^(r) at the end of the pass. This concludes the proof of the lemma.
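For concreteness, a small Python sketch of this implicit bookkeeping (the names and the dictionary-of-bit-lists layout are illustrative assumptions, with passes indexed from 0):

def importance(edge, r, cover_bits):
    # Importance q^(r)_e of edge e = (u, v), recomputed when e arrives in pass r:
    # cover_bits[v][rp] is 1 iff vertex v was in the stored cover U^(rp).
    u, v = edge
    c = sum(1 for rp in range(r)
            if cover_bits[u][rp] == 0 and cover_bits[v][rp] == 0)
    return 2 ** c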
<Ref> now follows immediately from <Ref> and <Ref>.
§ MAXIMUM WEIGHT GENERAL MATCHING
We now switch to proving <Ref> which is a vast generalization of <Ref> in the last section. Interestingly however,
despite its generality, the proof is more or less a direct “pattern matching” of the previous ideas to general weighted graphs using
the existing rich theory of matching polytopes and duality. As before, we start by recalling the basics of matching theory in general weighted graphs, and
then present our general algorithm followed by its semi-streaming implementation. Also, for simplicity of exposition, we are going to present our algorithm
with space- and pass-complexity depending on the total weights of the edges, and then show how to fix this using
standard ideas and prove <Ref>.
§.§ Basics of Matching Theory in General (Weighted) Graphs
Let G=(V,E) be a (general) graph with edge weights w: E →. The duality between matchings and vertex covers no longer holds in general graphs, nor
the equivalence of fractional matchings and integral ones (the way defined in the previous section). Thus, one needs a more general definition here. In the following, we use odd(V) to denote
the collection of all sets of vertices in V with odd cardinality. For a set S ⊆ V, we use E[S] to denote the set of edges with both endpoints in S.
* As before, a matching M is a set of vertex-disjoint edges in E and w(M) is its weight. We define a (general) fractional matching x ∈ [0,1]^E as an assignment to the edges satisfying:
for all v ∈ V: ∑_e ∋ v x_e ≤ 1 and for all S ∈ odd(V): ∑_e ∈ E[S] x_e ≤ (|S|-1)/2.
We define the weight of a fractional matching as ∑_e ∈ E w(e) · x_e.
* The dual to maximum fractional matchings is the following “odd-set cover” problem. We define a fractional odd-set cover as a pair of assignments y ∈^V and z ∈^odd(V) to vertices and odd-sets in G
satisfying the following:
for all e=(u,v) ∈ E: y_u + y_v + ∑_S ∈ odd(V), e ∈ E[S] z_S ≥ w(e).
The value of a fractional odd-set cover is then
(y,z) := ∑_v ∈ V y_v + ∑_S ∈ odd(V) (|S|-1)/2 · z_S.
We have the following results on maximum fractional matchings and odd-set covers.
[Edmond's matching polytope theorem <cit.>]
In any graph, weights of maximum weight matching and fractional matchings, and size of minimum odd-set covers are all the same.
[Cunningham-Marsh theorem <cit.>]
In any graph G with integer edge-weights, there is an optimal fractional odd-set cover y ∈^V and z ∈^odd(V) such that (i) both y and z only take integer values,
and (ii) z_S > 0 only for a family of sets in odd(V) that form a laminar family[A family of sets ℱ is laminar iff for all A,B ∈ℱ, either A ∩ B = ∅ or A ⊆ B or B ⊆ A.].
Again, see <cit.> for more details and proofs of these facts.
§.§ The Algorithm
We now give a generalization of <Ref> for finding maximum weight (general) matchings. The two main changes are: (i) while importances are defined more or less as before,
we also take into account the weights of edges in the sampling step; and (ii) we use odd-set covers in place of vertex covers to guide us in updating the importance of edges.
A sample-and-solve approximation algorithm for weighted general matching.
* Input: A (general) graph G=(V,E) with weights w: E →ℕ and parameter ε ∈ (0,1);
* Output: A (1-ε)-approximate maximum weight matching in G.
* Start with importance q^(1)_e = 1 for every edge e ∈ E and define Q^(1) := ∑_e ∈ E w(e) · q^(1)_e.
* Let W:= ∑_e ∈ E w(e). For r=1 to R := 4·log(W)/ε iterations:
* Sample each edge e ∈ E with probability:
p^(r)_e := (8n ·ln(nW)/ε)·(q^(r)_e · w(e)/Q^(r)).
* Compute a maximum weight matching M^(r) and a minimum odd-set cover solution (y^(r),z^(r)) of the sample (using the original weights w(·) on sampled edges).
* For any edge e ∈ E not covered by (y^(r),z^(r))[By this, we mean (y,z) does not satisfy <Ref> for the given edge e ∈ E.], update:
q^(r+1)_e = q^(r)_e · 2.
Then, let Q^(r+1) = ∑_e ∈ E w(e) · q^(r+1)_e.
* Return the maximum weight matching among M^(r)'s for r ∈ [R].
For any general graph G=(V,E) with weights w: E →ℕ and parameter ε ∈ (0,1), <Ref> outputs a matching of weight (1-ε) ·μ(G,w) in G with exponentially high probability.
We follow the same exact strategy as before. The first step is to bound the “potential” function. The following lemma
is an analogue of <Ref>, with a similar proof. The key difference is a new union bound argument at the very end
for all potential odd-set covers which needs to be more careful compared to the trivial 2^n-bound for vertex covers.
With an exponentially high probability, Q^(R) ≤ (1+ε/2)^R· W.
Fix any iteration r ∈ [R]. Let F^(r)⊆ E denote the set of edges not covered by (y^(r),z^(r)) as defined in <Ref>. We claim that with exponentially high probability, we will have,
∑_e ∈ F^(r)r_e · w(e) ≤/2·r;
in words, <Ref> states that the total “importance × weight” of the edges not covered by the odd-set cover solution on the entire graph, according to the importances of iteration r, is relatively “small”. Before proving this claim, let us see how it concludes the proof.
By a union bound over all iterations of the algorithm, we have that for every r ∈ [R],
r+1 = ∑_e ∈ F^(r)r+1_e · w(e) + ∑_e ∈ E ∖ F^(r)r+1_e · w(e) by partitioning the edges in and out of F^(r)
= ∑_e ∈ F^(r) 2 ·r_e · w(e) + ∑_e ∈ E ∖ F^(r)r_e · w(e) by the update rule of the algorithm
= (∑_e ∈ F^(r)r_e · w(e)) + rby the definition of r
≤1+/2·r,
where the inequality is by <Ref>. The lemma then follows from this and the choice of 1_e=1 for all edge e ∈ E which implies 1 = ∑_e ∈ E 1 · w(e) = W.
Proof of <Ref>. Let y ∈^V and z ∈^odd(V) be any “potential” odd-set cover of G.
We define F(y,z) ⊆ E as the set of edges not covered by this potential odd-cut cover.
Suppose
∑_e ∈ F(y,z)r_e · w(e) > /2·r;
we show that in this case, with an exponentially high probability, (y,z) cannot be a feasible odd-set cover of the sampled edges either. Indeed, we have,
(y,z) is feasible on sampled edges = no edge from F(y,z) is sampled
= ∏_e ∈ F(y,z) (1-p^(r)_e) by the independence and the sampling probability of edges
≤exp-∑_e ∈ F(y,z) p_eas (1-x) ≤ e^-x for all x ∈ [0,1]
= exp-8n ·ln(nW)/·∑_e ∈ F(y,z)r· w(e)/rby the choice of p^(r)_e in <Ref>
≤exp-4n ·ln(nW),
by our assumption about (y,z) and importance of edges in F(y,z) earlier.
The last step of the proof is to union bound over all potential odd-set covers (y,z) using the calculated probabilities above.
This step needs to be more careful compared to <Ref> because (y,z) can be fractional and even for integral values, z can have an exponential support, leading to a doubly exponential number of choices for it which
is too much for the above probabilities to handle. Both of these are handled using Cunningham-Marsh theorem (<Ref>):
* Firstly, we can assume without loss of generality that (y,z) only take integer values in [0:W] in the optimal solutions computed in Line (<ref>) of <Ref>.
This, for instance, implies that the total number of choices for y that we need to consider are only (W+1)^n many.
* Secondly, and more importantly, we can use the laminarity of ℱ(z) := S ∈ odd(V) | z_S > 0 in the optimal solution.
A standard observation about
laminar families (over n elements/vertices) is that they can have size at most 2n-1[Without loss of generality, assume ℱ is a maximal laminar family on [n].
Proof by induction (base case is trivial):
maximality ensures that there are two non-empty sets A,B ∈ℱ with A ∪ B = [n] and A ∩ B = ∅. By induction, there are at most 2A-1 subsets of A in ℱ, at most 2B-1 subsets of B,
and the set A ∪ B, implying that ℱ≤ (2A-1) + (2B-1) + 1 = 2n-1.]. We can use this to provide a (very crude) upper bound
the number of choices for z as follows:
* Define T(n) as the number of laminar families on [n][There are more accurate ways of bounding T(n) (see, e.g. <cit.>), but given
that these more accurate bounds do not help with our subsequent calculations, we just establish a crude upper bound with a self-contained proof here.]. We claim T(n) ≤ (2n) · T(n-1). Because, we can pick a laminar family on n-1 elements
in T(n-1) ways, and then decide to put the last element in (a) one of its (at most) (2n-3) sets and “propagate” it to each of its supersets also, (b) in a singleton set, or (c) not placing it anywhere.
This leads to <(2n) options times T(n-1) and establishes the claim.
* Given that T(1) = 2 (singleton or empty-set), we get T(n) ≤ (2n)^n. To pick z, we can first pick ℱ(z) using at most (2n)^n ways, and then assign values from [W] to each of its at most (2n-1) sets. Hence,
there are at most W^2n-1· (2n)^n choices for z.
All in all, we obtain that the total number of possible optimal solutions (y,z) that we need to take a union bound over can be (very) crudely upper bounded by the following (assuming n > 4 without loss of generality):
(W+1)^n· W^2n-1· (2n)^n ≤ (2n)^2n· W^3n < (nW)^3n < exp3n ·ln(nW).
Finally, we can apply a union bound over these many choices and get that with an exponentially high probability no such solution (y,z) can be a feasible (and optimal) solution on the sample, and returned by the algorithm.
This concludes the proof.
We then prove an analogue of <Ref>, using the duality of odd-set covers and (general) matchings, in place of vertex cover/matching duality in bipartite graphs.
Suppose in every iteration r ∈ [R], we have w(M^(r)) < (1-ε) ·μ(G,w). Then,
there exists at least one edge e ∈ E such that q^(R)_e · w(e) ≥ 2^ε· R.
Assume the contrary that for every edge e ∈ E, R_e · w(e) < 2^· R. Given the update rule of the algorithm, this means that every single edge e ∈ E is covered by the chosen
odd-set covers in Line (<ref>) at least (1-) · R times.
Define the fractional vectors (y,z) ∈^V ×^odd(V) as:
y := 1/R·∑_r=1^R y^(r), and z := 1/R·∑_r=1^R z^(r).
i.e., (y,z) is the average of odd-set covers (y^(r),z^(r)) computed by the algorithm across the R iterations. Given that every edge e=(u,v) ∈ E has been covered (1-) · R times
by the algorithm, we have that:
y_u + y_v + ∑_S ∈ odd(V)
E[S] ∋ ez_S ≥1/R·∑_r=1^R(y^(r)_u + y^(r)_v + ∑_S ∈ odd(V)
G[S] ∋ e z^(r)_S) ≥1/R·(1-) · R · w(e)_covered iterations = (1-) · w(e).
This implies that ((1-)^-1· y,(1-)^-1· z) is a fractional odd-set cover of G in <Ref>.
At the same time, we also have that
(y,z) = 1/R·∑_r=1^R (y^(r),z^(r)) < 1/R· R · (1-) ·μ(G,w) = (1-) ·μ(G,w),
where the inequality is by Edmond's matching polytope theorem (<Ref>) and the assumption that w(M^(r)) < (1-) ·μ(G)
in the lemma. This implies that the solution ((1-)^-1· y, (1-)^-1· z) is a feasible fractional odd-set cover of G with size strictly less than μ(G,w).
Another application of Edmond's theorem (<Ref>) implies that maximum weight matching in G has to be of size strictly less than μ(G,w), a contradiction.
We are now ready to conclude the proof of <Ref> exactly as in that of <Ref>, using the above established lemmas instead.
Assume the exponentially high probability event of <Ref> happens, thus
R≤ (1+/2)^R· W < 2^3 R/4 + logW. as (1+x) < 2^3x/2 for x > 0
Suppose towards a contradiction that none of the matchings M^(r) for r ∈ [R] computed by the algorithm
are of weight at least (1-) ·μ(G,w). By <Ref>, there is an edge e ∈ E such that
R_e ≥ 2^ R.
Putting these two equations together, as R_e ≤R (by positivity of importances), we obtain that
R < 3 R/4 + logW,
which only holds for R < 4logW/, contradicting the choice of R in the algorithm. Thus, at least one of the found matchings M^(r)'s is of weight (1-) ·μ(G,w),
concluding the proof.
§.§ Semi-Streaming Implementation
Finally, we are going to show a semi-streaming implementation of <Ref> in the following lemma.
The proof is similar to that of <Ref> although again with crucial changes to account for the difference of vertex covers in <Ref> with odd-set covers in <Ref> (we need some minor
modifications after this also in order to be able to prove <Ref> which will be done next).
<Ref> is implementable in the semi-streaming model with O(n log(nW) ·log(n)/ε) bits of memory and O(log(W)/ε) passes.
We implement each iteration of the algorithm via O(1) passes over the stream. The main part of the semi-streaming implementation is to maintain the importance of the edges implicitly. To do this, we do as follows:
* For every iteration/pass r ∈ [R]:
* Store the vector y^(r) explicitly using O(nlogW) bits using the integrality of the optimal solution (by <Ref>).
* Store the vector z^(r) explicitly using O(nlogn + nlogW) bits using the laminarity of support of z in the optimal solution (by <Ref>): this can be done
because earlier (a) we bounded the number of laminar families by (2n)^n and hence they require O(nlogn) bits of representation, and (b) we bounded the number of sets in any laminar family
by 2n-1 and so we only need to store O(nlogW) bits to store their values.
* Whenever an edge e=(u,v) ∈ E arrives in the stream in the r-th pass, we can compute the number of times e has remained uncovered by U^(r') for r' < r, denoted by c(e,r), by checking (y^(r'),z^(r')) stored
so far and counting the number of times they violate <Ref> for this particular edge e. The importance r_e of the edge e is then 2^c(e,r).
* Once we calculate the importance of an edge upon its arrival, we can sample the edge with probability prescribed by <Ref>.
The rest of the algorithm can be implemented directly. In particular, the total number of edges sampled in each iteration is O(nlog(nW)/ε) with exponentially high probability by a Chernoff bound. Thus, in the semi-streaming algorithm,
we can store these edges and then compute M^(r) and (y^(r),z^(r)) at the end of the pass. This concludes the proof of the lemma.
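To make the bookkeeping above concrete, the following is a minimal Python sketch of how the per-pass importance computation and importance-proportional sampling could look. It is only an illustration: the names (violates_cover, sample_rate) and the exact sampling rule are our own placeholders, not the paper's pseudocode.

import random

def violates_cover(edge, w, y, z, odd_sets):
    # Placeholder check: (y, z) covers edge (u, v) if
    # y[u] + y[v] + sum of z over odd sets containing both endpoints >= w(u, v).
    u, v = edge
    covered = y.get(u, 0) + y.get(v, 0)
    covered += sum(z[S] for S in odd_sets if u in S and v in S)
    return covered < w[edge]

def process_pass(stream, w, past_covers, odd_sets_per_cover, sample_rate):
    """One pass: recompute each arriving edge's importance from the stored
    covers of earlier passes, then sample it with probability proportional
    to that importance."""
    sample = []
    for edge in stream:
        # c(e, r): number of earlier passes whose cover left this edge uncovered.
        c = sum(
            violates_cover(edge, w, y, z, odd_sets)
            for (y, z), odd_sets in zip(past_covers, odd_sets_per_cover)
        )
        importance = 2 ** c
        # sample_rate is a stand-in for the probability prescribed by the sampling lemma.
        if random.random() < min(1.0, sample_rate * importance):
            sample.append(edge)
    return sample

# Toy usage on a triangle with unit weights and no stored covers yet:
edges = [(0, 1), (1, 2), (0, 2)]
w = {e: 1 for e in edges}
print(process_pass(edges, w, past_covers=[], odd_sets_per_cover=[], sample_rate=0.5))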
We can now conclude the proof of <Ref>. The only remaining part is to replace the dependence on W,
using an entirely standard idea.
We can use a single pass to find the maximum edge weight w^* and subsequently ignore entirely all edges with weight less than (ε/n) · w^*, because the total contribution of those edges to the maximum weight matching
is always less than an ε-fraction of its weight. Thus, we can assume that the weights are in [1: n/ε] from now on.
Moreover, we can assume without loss of generality that ε > 1/n^3, as otherwise we can simply store the entire graph in O(n^2log(n/ε)) = O(nlogn/ε) bits (consistent with the bounds of <Ref>) and
trivially solve the problem in one pass (as a side note, we are almost always interested in much larger values of ε anyway).
This step implies that without loss of generality, when proving <Ref>, we can assume all edges have integer weights bounded by n^4, by re-scaling ε to some Θ(ε). Combining this with <Ref> and <Ref>
now gives an O(nlog^2(n)/ε) memory algorithm in O(log(n)/ε) passes.
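The following Python sketch illustrates this standard weight-reduction step; it is our own illustration of the preprocessing described above (drop light edges, then rescale and round), with the rescaling factor standing in for the Θ(ε) adjustment, not code from the paper.

import math

def preprocess_weights(edges, w, eps):
    """Drop edges lighter than (eps/n) * w_max and round the remaining
    weights to integers in [1, n/eps]; rounding costs at most another
    O(eps) fraction of the optimum."""
    n = len({v for e in edges for v in e})
    w_max = max(w[e] for e in edges)
    threshold = (eps / n) * w_max
    kept = [e for e in edges if w[e] >= threshold]
    scale = n / (eps * w_max)
    w_int = {e: max(1, math.floor(w[e] * scale)) for e in kept}
    return kept, w_int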
§ CONCLUDING REMARKS
In this paper, we presented a rather complete simplification of the prior O(log(n)/)-pass algorithms of <cit.> in our <Ref> and <Ref>.
As stated earlier, in the interest of having a clear and concise exposition, we opted to focus only on the most important aspects of our algorithms. In this section,
we present further natural extensions of our results and conclude the paper with some key open questions at this point.
§.§ Further Extensions
§.§.§ (I). Running time
Given that the main resource of interest in the streaming model is the space, we did not put any emphasis on the runtime of our algorithms in the preceding discussions.
Nevertheless, our algorithms can be made time efficient as well (besides just being polynomial time).
It can be shown that in both <Ref> and <Ref>,
we can even get away with computing a (1-ε)-approximate maximum matching and minimum vertex cover or odd-set cover, respectively, over the samples via minimal
changes to the rest of the algorithms and analysis. This then allows us to use the approximation algorithm of <cit.> for (general) weighted matchings (in the case of the odd-set covers, we also need to guarantee
laminarity of the support of the odd-set variables, but that is already guaranteed by the algorithm). This means the total runtime of the algorithm can also be reduced to only Õ(m/ε) time, matching the
best offline results up to (n) factors.
§.§.§ (II). Fewer passes in more space
Our algorithms, similar to that of <cit.>, can be made more pass efficient at the cost of increasing the space, allowing us to prove the following result:
For every p ≥ 1 and ε ∈ (0,1), there is a randomized streaming algorithm for (1-ε)-approximation of maximum weight matching
in O(n^1+1/plog^2(n)/ε) space and O(p/ε) passes.
This in particular means that if, instead of semi-streaming space, we allow the streaming algorithm to use n^1+δ space for any constant δ > 0, then the number of passes needed is only O(1/ε).
This also implies that even for semi-streaming algorithms, the pass-complexity can actually be brought down to O(log(n)/(loglogn·ε)) passes by taking p=Θ(logn/loglogn) and having n^1+1/p = Õ(n).
We provide a proof sketch in <Ref>. Here, we only mention that this result is also obtained via a direct analysis of this MWU-based approach in
our paper, as opposed to the more common technique of “delayed sampling” used, e.g., in <cit.>, in similar contexts.
We remark that in addition to <cit.> and <Ref>, <cit.> also provides an O(n^1.5/ε)-space, O(1/ε)-pass algorithm for maximum (cardinality) bipartite matching.
Incidentally, the algorithm behind our <Ref> (and its extension in <Ref>) turns out to be quite similar in hindsight to the algorithm of <cit.>, which also relies on a sample-and-solve approach
using vertex covers to guide its sampling; however, unlike our approach, <cit.> sticks to uniform sampling and does not adjust any importances, which leads to the larger space-complexity of O(n^1.5) instead
of (essentially) n^1+o(1) space in our work for O(1/ε)-pass algorithms.
§.§.§ (III). Extension to other related models
Our algorithmic approach in this paper is quite flexible and easily extends to many other models. In particular, given its sample-and-solve nature, the algorithm can be implemented via
a linear sketch (see <cit.>), which also implies the following two results:
* Dynamic streams. There exists a randomized semi-streaming algorithm for (1-ε)-approximation of weighted (general) matching that, for every p ≥ 1, uses Õ(n^1+1/p/ε) space and O(p/ε) passes.
* Massively parallel computation (MPC): There exists a randomized MPC algorithm for (1-ε)-approximation of weighted (general) matching that, for every p ≥ 1, uses machines of
memory Õ(n^1+1/p/ε) and asymptotically the same global memory, and O(p/ε) rounds.
As this is not the focus of the paper, we omit the definitions and details of these models and instead
refer the interested reader to <cit.> and <cit.> for each model, respectively.
We only note that the prior results in <cit.> also achieved similar corollaries, but this is not true of the approach of <cit.> in the case of MPC algorithms.
§.§.§ (IV). Derandomization via cut sparsifiers
The failure probability of our algorithms is exponentially small, which is better than the typical "with high probability" bounds in the same context.
But, in fact, we can also derandomize the algorithms at the cost of increasing (only) the space by poly(logn, 1/ε) factors.
We state and sketch the proof of this only for unweighted graphs, but with more technical work, one can also extend this to weighted graphs (we omit the latter result as
it requires too much of a detour).
For every p ≥ 1 and ε ∈ (0,1), there is a deterministic streaming algorithm for (1-ε)-approximation of maximum cardinality matching
in Õ(n^1+1/p/ε^2) space and O(p/ε) passes.
<Ref> is proven via replacing the sampling step of the algorithm with the cut sparsifiers of <cit.>: these are (re-weighted) subgraphs of the input
that preserve the weights of all cuts to within a (1±ε)-approximation while having only O(nlog(n)/ε^2) edges. We note that in general cut sparsifiers are not good at preserving large matchings[There are examples wherein a cut sparsifier of a graph with a perfect matching may only have a
maximum matching of size O(√(n)) edges (for instance, a union of a perfect matching plus O(√(n)) vertices connected to all other vertices).], but they appear to be good at preserving the "near feasible" vertex covers and odd-set covers
we need for our primal-dual analysis.
Let us focus on derandomizing <Ref> for MBM. Suppose that in <Ref>, instead of sampling edges proportional to importances,
we pick a Θ(ε)-cut sparsifier H of G with edges weighted by the importances. Then, we simply pick a vertex cover of H (ignoring the weights now). We claim that <Ref>
still holds. Let U ⊆ V be a "potential" vertex cover such that the total importance of the edges it does not cover is > (ε/2) · Q,
where Q is the importance of all edges in G in this iteration. One can show that H needs to contain at least one edge entirely inside V ∖ U
in order to be a, say, (ε/100)-cut sparsifier of G.
This means
that U could not have been chosen as a vertex cover of H. Thus, vertex covers of H satisfy <Ref> and the rest of the analysis is the same (recall that H ⊆ G, so
a "good" vertex cover always exists). We provide a proof sketch in <Ref>.
Finally, it is known how to compute a Θ(ε)-cut sparsifier in the streaming model using a single pass and Õ(n/ε^2) space deterministically, using any deterministic static (non-streaming)
algorithm for this problem, say <cit.>; see, e.g., <cit.> for this elegant and quite simple reduction (based on the merge-and-reduce technique dating back to the work of <cit.> on quantile estimation).
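As an illustration of the merge-and-reduce idea, here is a minimal Python sketch; the function static_sparsify is a placeholder for any deterministic offline cut-sparsification routine (it is not one of the cited algorithms), and the per-level error budgeting needed to keep the overall error at Θ(ε) is omitted.

def merge_and_reduce(stream_chunks, static_sparsify):
    """Sparsify each chunk of the stream, then repeatedly merge pairs of
    sparsifiers and re-sparsify, so only O(log(#chunks)) partial sparsifiers
    are kept at any point in time."""
    levels = []  # levels[i] holds at most one sparsifier of "rank" i
    for chunk in stream_chunks:
        current = static_sparsify(chunk)
        i = 0
        # Carry-style merging, as in binary addition.
        while i < len(levels) and levels[i] is not None:
            current = static_sparsify(levels[i] + current)  # merge, then reduce
            levels[i] = None
            i += 1
        if i == len(levels):
            levels.append(None)
        levels[i] = current
    remaining = [s for s in levels if s is not None]
    return static_sparsify([e for s in remaining for e in s]) if remaining else []

# Toy usage with an identity "sparsifier" (keeps all edges), just to show the control flow:
chunks = [[(0, 1), (1, 2)], [(2, 3)], [(3, 0), (0, 2)]]
print(merge_and_reduce(chunks, static_sparsify=lambda edges: list(edges)))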
We shall remark that we were inspired by the use of cut sparsifiers in <cit.> for this part of the argument, although, to our knowledge, the use of sparsifiers in <cit.> is for the different purpose
of their "delayed sparsification", namely folding O(1/ε) iterations of their optimization method into O(1) passes; we are instead using them for derandomization purposes (the algorithms of <cit.> are randomized despite using sparsifiers, even
in insertion-only streams).
§.§.§ (V). Explicit Connections to MWU and Plotkin-Shmoys-Tardos Framework
It turns out that our algorithms can be cast in the Plotkin-Shmoys-Tardos framework <cit.> for solving covering/packing LPs via MWU—despite the fact that the number of iterations is O(log(n)/ε^2) in this framework—by making the following observation (this discussion assumes
a basic familiarity with this framework; see <cit.> for a quick introduction).
Firstly, we use this framework for solving the (fractional) vertex/odd-set cover LP, which translates
into maintaining "MWU weights" over the edges. The goal, perhaps counter-intuitively, is to fail in finding a small vertex/odd-set cover, which implies
that we have found a "witness" to the existence of a large matching in the graph. The oracle used in these techniques
can be implemented by our sampling approach. The key observation is that this oracle is extremely efficient compared to a typical oracle, in that it achieves a very accurate solution with a very small width. This leads
to something interesting: the weights of the variables, under the "cautious" updates of MWU, grow so slowly that the same oracle solution remains approximately valid for the next O(1/ε) iterations!
The implication of the above for semi-streaming algorithms is that, effectively, one only needs to
rerun the oracle, using another pass over the stream, once every O(1/ε) iterations of the framework. This allows running the O(log(n)/ε^2) iterations of the framework in O(log(n)/ε) passes.
We shall however caution the reader that while the above intuition is morally true, implementing the algorithm and following the standard analysis this way is quite "messy" and does not seem to yield a particularly simple algorithm or analysis.
Thus, we find the direct proof presented in the paper much more illuminating and opted to provide that instead[We should also add that this connection was only made in hindsight, after having the new algorithm and analysis.].
§.§ Open Questions
Finally, we conclude the paper with some open questions.
As stated earlier, the key question at this point—which was also one motivation behind this work itself—is to bridge the gap between the two types of pass-complexity of semi-streaming algorithms obtained for bipartite (cardinality) matching: the O(log(n)/ε) passes of <cit.> and <Ref>, and the O(1/ε^2) passes of <cit.>.
Open question 1: Can we design a semi-streaming algorithm for maximum bipartite cardinality matching with O(1/ε) passes?
We would like to make a (rather bold) conjecture that the "right" pass-complexity of this problem might be even (much) lower than O(1/ε) passes. But, at this point, we seem to be
far from achieving such results or ruling out their possibility[Although <cit.> provide a semi-streaming n^3/4+o(1)-pass algorithm for solving MBM exactly, which corresponds to ε=1/n. Thus,
at least for such very small values of ε, we already know ≪ 1/ε-pass algorithms.].
A slightly less exciting question than the above is to significantly reduce the pass-complexity of the “constant-pass” algorithms for maximum matching in general graphs
in <cit.> to match the results for MBM in <cit.> (see <Ref>).
In particular,
Open question 2: Can we obtain a semi-streaming algorithm for maximum matching in general graphs (weighted or unweighted) with O(1/ε^2) passes?
We hope that by simplifying the state-of-the-art, our results in this paper can pave the path for addressing these questions. Note that as stated earlier, we can indeed provide a positive answer to both questions for the (strictly) more relaxed
case when the space of the algorithms is n^1+δ for constant δ > 0 (this was also already known by the prior work of <cit.>).
§ ACKNOWLEDGEMENT
Many thanks to Soheil Behnezhad, Shang-En Huang, Peter Kiss, Rasmus Kyng, and Thatchaphol Saranurak for helpful discussions and to all
the organizers of the DIMACS Workshop on “Modern Techniques in Graph Algorithms” (June 2023) where these conversations happened.
I am also grateful to Aaron Bernstein, Aditi Dudeja, Arun Jambulapati, Michael Kapralov, Sanjeev Khanna, Kent Quanrud, and Janani Sundaresan for various helpful discussions about this problem over the years.
Finally, thanks to Soheil Behnezhad for pointing out the similarity of our <Ref> with the O(n^1.5/)-space O(1/)-pass streaming algorithm of <cit.> for MBM discussed in <Ref>.
§ APPENDIX: MORE DETAIL ON FURTHER EXTENSIONS
§.§ Proof Sketch of <Ref>
A common technique for proving <Ref> (from an already-existing semi-streaming algorithm) is the "delayed sampling" used, e.g., in <cit.>, by
running multiple iterations of the semi-streaming algorithm in a single pass of the larger-space algorithm. This is done by oversampling the input first and then doing rejection sampling
(see <cit.> for more details). While this approach would work for us also,
it would require (slightly) more space (by some n^1/p/ε factors) and, more importantly, an indirect analysis.
Instead, one can directly adjust our importance-sampling based approach, say, in <Ref>.
It simply involves sampling the edges by a factor of O(n^1/p) more and then also increasing the importance of violated edges in the algorithm, quite aggressively, by a factor of O(n^1/p) instead of only 2.
We provide a proof sketch in the following.
Let η be a parameter (that will be later chosen to be n^1/p). We change <Ref> as follows:
* We increase the sampling rate in <Ref> by a factor of η.
* We increase the importance of any violated edge in <Ref> by a factor of (1+η) instead.
The implications of these changes are:
* The space of the algorithm is increased by an O(η) factor because we store a larger sample.
* The upper bound of (1+ε/2)^R· W on the potential function in <Ref> still holds, because <Ref> now holds with ε/2 replaced by ε/2η, which
"cancels out" the effect of increasing importances by a (1+η) factor instead.
* On the other hand, the lower bound of 2^(ε R) on the potential function in <Ref> simply becomes (1+η)^(ε· R)
given this new update rule on the importances.
Thus, by combining (ii) and (iii), we get that
ε R logη ≤ 3 ε R/4 + logW,
which gives us the bound of R=O(logW/(logη·ε)).
Now, recall that we earlier argued we can take W to be at most n^4. This implies that by setting η = n^1/p, we get
R = O(logW/(logη·ε)) = O(logn/((1/p) ·logn·ε)) = O(p/ε),
This concludes the proof sketch of <Ref>.
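As a quick sanity check on this pass count, here is a small Python sketch of the resulting bound; the helper name and constants are ours, purely for illustration of the arithmetic R = O(log W / (ε · log η)) with W = n^4 and η = n^(1/p).

import math

def passes_for(n, eps, p):
    """Illustrative pass bound: R = O(log W / (eps * log eta)) = O(p / eps)."""
    W = n ** 4
    eta = n ** (1 / p)
    return math.ceil(4 * math.log2(W) / (eps * math.log2(eta)))

# e.g., with n = 10**6, eps = 0.1, p = 4, this evaluates to Theta(p/eps) passes
# (the constants here are not optimized).
print(passes_for(10**6, 0.1, 4))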
§.§ Proof Sketch of <Ref>
As outlined in <Ref>, the algorithm is the following. Instead of sampling edges in <Ref>, we compute an (/100)-cut sparsifier H of G whose edges are weighted by the importances in this iteration.
Then, we compute an odd-set cover (y,z) of H, ignoring all edge weights in this step, and continue exactly as before. Given that a cut sparsifier can be computed in (n/^2) space in the semi-streaming model (see <cit.>),
we will obtain the desired deterministic algorithm.
Recall that the sampling step was only used in the analysis in <Ref> and in particular to establish <Ref> with an exponentially high probability. We instead show that this new approach
deterministically satisfies <Ref>. The rest of the proof of <Ref> then follows verbatim as in <Ref>. Thus,
we only need to show the following:
* Let H be an (ε/100)-cut sparsifier of G=(V,E) under the edge weights q_e. Then, any odd-set cover (y,z) of H satisfies <Ref> deterministically, i.e.,
∑_e ∈ F(y,z) q_e ≤ (ε/2) · Q,
where F(y,z) is the set of edges violated by (y,z) in the unweighted graph G, and q_e and Q are the importance of edge e ∈ E and the total importance of all edges, respectively.
We now prove this statement.
In the following, for a graph G=(V,E) and two disjoint sets of vertices A,B ⊆ V, we define cut_G(A) and cut_G(A,B)
as the weight of the cuts (A, V ∖ A) and (A,B), respectively (we apply this to G with weight function being edge importances, and to H with the re-weighted weights of the sparsifier).
Let (y,z) be an optimal odd-set cover of H. By the Cunningham-Marsh theorem (<Ref>),
y and z are both integral and ℱ(z) := {S ∈ odd(V) : z_S > 0} forms a laminar family. Moreover, given the optimality of (y,z) and since H is unweighted (when calculating the odd-set cover),
we have that y,z ∈ {0,1}^n, which implies that ℱ(z) = {S_1,S_2,…,S_s} is actually a collection of disjoint sets, and is disjoint from the support of y, denoted by T.
Notice that the set of violated edges by (y,z) in G are the ones
that are not inside S_1,…,S_s, nor incident on T.
Suppose towards a contradiction that <Ref> does not hold. Let (A,B) be a maximum cut of the graph G[V ∖ T], among all cuts where each S_i ∈ ℱ(z) is entirely on one side of the cut.
Since a maximum cut always has weight at least half of the total weight of edges in the graph, we have
cut_G(A,B) > (ε/4) · Q.
On the other hand, in any graph, we also have
cut_G(A) + cut_G(B) = cut_G(A ∪ B) + cut_G(A, B).
Given that cut_G(A), cut_G(B), cut_G(A ∪ B) ≤ Q trivially, and since H is an (ε/100)-cut sparsifier,
cut_H(A,B) = cut_H(A) + cut_H(B) - cut_H(A ∪ B)
≥ cut_G(A) + cut_G(B) - cut_G(A ∪ B) - 3 · (ε/100) · Q
= cut_G(A,B) - 3 · (ε/100) · Q
≥ (ε/4) · Q - 3 · (ε/100) · Q > 0,
which implies that there is at least one edge between A and B in H. But recall that none of the edges between A and B were covered by (y,z), contradicting the fact that (y,z) was a
feasible odd-set cover of H. This proves <Ref>.
|
http://arxiv.org/abs/2307.03147v1
|
20230706172350
|
Trend to equilibrium for flows with random diffusion
|
[
"Shrey Aryan",
"Matthew Rosenzweig",
"Gigliola Staffilani"
] |
math.AP
|
[
"math.AP",
"math.PR",
"35Q35, 35Q49, 35Q70, 35R60, 60H50"
] |
Motivated by the possibility of noise to cure equations of finite-time blowup, recent work <cit.> by the second and third named authors showed that with quantifiable high probability, random diffusion restores global existence for a large class of active scalar equations in arbitrary dimension with possibly singular velocity fields. This class includes Hamiltonian flows, such as the SQG equation and its generalizations, and gradient flows, such as the Patlak-Keller-Segel equation. A question left open is the asymptotic behavior of the solutions, in particular, whether they converge to a steady state. We answer this question by showing that the solutions from <cit.> in the periodic setting converge in Gevrey norm exponentially fast to the uniform distribution as time t→∞.
§ INTRODUCTION
Taking inspiration from <cit.>, recent work <cit.> by the second and third named authors showed for a large class of scalar flows that the addition of a random diffusion to the dynamics leads to global classical solutions with high probability. Such an effect is significant, as without noise, the class considered includes equations, such as aggregation equations, for which finite-time blowup holds for classical solutions, as well as equations such as the inviscid SQG equation, for which global existence of classical solutions is unknown. We refer to the introduction of <cit.> for a detailed discussion of the physical relevance and mathematical history of the class of equations considered.
A question left open in the cited work is the asymptotic behavior of solutions as t→∞. The purpose of this note is to answer this question by showing that with high probability, solutions converge to the uniform distribution with mass equal to that of the initial data. One may interpret this as "equilibration" of the system. As the uniform distribution is a stationary solution, in particular, this implies that it is the unique equilibrium. The present work, together with the previous works <cit.>, demonstrates a fairly complete global theory for the effect of random damping/diffusion.
§.§ The model
The stochastic partial differential equation (SPDE) we consider is
_tθ +÷(θ∇∗θ) = ν^sθẆ
θ|_t=0 = θ^0
(t,x)∈_+×^d.
Above, is a d× d constant matrix with real entries and ∈'(^d) is a tempered distribution, such that there is a >0 so that the Fourier transform (k) satisfies the bound
∀ k∈^d, |(k)| ≲ |k|^-.
The random diffusion corresponds to the term in the right-hand side of (<ref>), where ν>0, ^s is the fractional Laplacian of order s (i.e., the Fourier multiplier with symbol |k|^s), and W is a one-dimensional standard Brownian motion. The randomness stems from the fact that the diffusivity coefficient ν is modulated by the white noise Ẇ. The addition of such a term was first proposed by Buckmaster et al. <cit.> to obtain global existence in the case of , corresponding to the inviscid SQG equation, following an earlier random damping term proposed by Glatt-Holtz and Vicol <cit.> in the case of , corresponding to the d=2 incompressible Euler vorticity equation. In <cit.>, an inhomogeneous diffusion ν(1+^s)θẆ was instead used because the problem was set on ^d, which entails issues at low frequencies (see <ref> for further elaboration).
The mathematical interpretation of the SPDE (<ref>) is based on a pathwise change of unknown. Supposing we have a solution θ to (<ref>) and formally setting μ^tΓ^tθ^t, where for each realization of the Brownian motion W, Γ^t e^-ν W^t^s is the Fourier multiplier with symbol e^-ν W^t|k|^s, Itô's lemma implies
_tμ = - ÷Γ*Γ^-1μ∇∗Γ^-1μ- ν^2/2^2sμ.
See <cit.> or <cit.> for details of the computation and <cit.> for an explanation of the choice of Itô noise, as opposed to Stratonovich noise. Equation (<ref>) is a random PDE that may be interpreted pathwise: for a fixed realization of W, which almost surely is a locally continuous path on [0,∞), one studies the Cauchy problem for (<ref>).
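For the reader's convenience, here is a schematic, mode-by-mode version of the Itô computation behind (<ref>), written in standard LaTeX notation (with hats denoting Fourier coefficients and |k|^s the symbol of the fractional Laplacian); we stress that this is only a formal sketch under the Itô interpretation above, and we refer to the cited works for the rigorous version.

\begin{align*}
d\hat{\mu}^t(k) &= \hat{\theta}^t(k)\, d\big(e^{-\nu W^t |k|^s}\big) + e^{-\nu W^t|k|^s}\, d\hat{\theta}^t(k) + d\big\langle e^{-\nu W |k|^s}, \hat{\theta}(k)\big\rangle_t, \\
d\big(e^{-\nu W^t|k|^s}\big) &= -\nu |k|^s e^{-\nu W^t|k|^s}\, dW^t + \tfrac{\nu^2}{2}|k|^{2s} e^{-\nu W^t|k|^s}\, dt .
\end{align*}

Since the martingale part of d\hat{\theta}^t(k) is \nu |k|^s \hat{\theta}^t(k)\, dW^t, the covariation term contributes -\nu^2 |k|^{2s} \hat{\mu}^t(k)\, dt; the dW^t terms cancel, and the surviving drift consists of the transported nonlinearity together with -\tfrac{\nu^2}{2}|k|^{2s}\hat{\mu}^t(k)\, dt, which is precisely (<ref>).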
§.§ Main results
To state our results, we first fix some notions. Here and throughout this paper, we assume that the potential satisfies the condition (<ref>). We assume that we have a standard real Brownian motion {W^t}_t≥ 0 defined on a filtered probability space (Ω, , {^t}_t≥ 0,) satisfying all the usual assumptions. For ,,ν>0, we define the event
Ω_,,ν{ω∈Ω : +β t - ν W^t(ω) ≥ 0 ∀ t∈ [0,∞)}⊂Ω.
It is known that (Ω_,,ν) = 1 - e^-2/ν^2 <cit.>. The definition of the Fourier-Lebesgue space Ŵ^,r and norm ·_Ŵ^,r used below may be found in <ref>.
Let d≥ 1, >0, max(1/2,2-/2)<s≤ 1. Suppose that satisfies (<ref>) for . Given ,,ν>0, set ϕ^t+β t and
ζinf_k∈^d : k≠ 0(ν^2/2-β|k|^-s - ((k)|k|^-2s(k· k))).
Assume that ζ>0.
If s is sufficiently large depending on , then there exists an r_0≥ 1 depending on d,,s, such that the following holds. For any 1≤ r ≤ r_0 and any σ>0 sufficiently large depending on d,,r,s, there is a constant C>0 depending only on d,,r,s,σ, such that for initial data μ^0 satisfying 1/(2π)^d∫_^dμ^0dx=1 and
e^(+)^sμ^0-1_Ŵ^σ s,r < ζ/C||,
for >0, and any path in Ω_,,ν, there exists a unique global solution μ∈ C^0([0,∞); Ŵ^σ s,r) to equation (<ref>) with initial datum μ^0. Moreover,
∀ t≥ 0, e^(ϕ^t+)^sμ^t-1_Ŵ^σ s,r≤ e^-ζ t/2e^(+)^sμ^0-1_Ŵ^σ s,r.
To make the statement of <ref> reader-friendly, we have opted not to include the explicit relations between parameters, such as d,,s,r_0,σ. These relations are explicitly worked out in <Ref>. Throughout the paper, the reader should keep in mind that the most favorable choice is (s,r) = (1,1).
The condition s>2-/2 ensures that we can make ζ>0 by fixing β,, and then taking ν sufficiently large.
By rescaling time and using conservation of mass (see <ref> below), we may always reduce to the case 1/(2π)^d∫_^dμ^0 dx=1 up to a change of ν. More precisely, suppose that μ is a solution to (<ref>). Letting m = 1/(2π)^d∫_^dμ^0 dx, set μ_m^t 1/mμ^t/m. Then using the chain rule,
_tμ_m^t = -1/m^2Γ^t/m÷*(Γ^t/m)^-1μ^t/m∇∗(Γ^t/m)^-1μ^t/m -ν^2/2m^2^2sμ^t/m
= -Γ_m^t÷*(Γ_m^t)^-1μ_m^t∇∗(Γ_m^t)^-1μ_m^t - ν_m^2/2^2sμ_m^t,
where ν_m ν/√(m), W_m^t √(m)W^t/m, and Γ_m^t e^-ν_m W_m^t ^s. Note that W_m is again a standard Brownian motion (e.g., see <cit.>).
As advertised at the beginning of the introduction, our main result shows that with quantifiable high probability, solutions of the random PDE (<ref>) with Gevrey initial data are global and as t→∞, converge exponentially fast in Gevrey norm to the uniform distribution with the same mass as μ^t. The essential point and importance of our work is that our result is agnostic to (no gradient flow or repulsive-type assumptions) and to , subject to the very general condition (<ref>). This generality means our result covers equations for which global existence, let alone asymptotic behavior, is unknown or for which finite-time blow-up happens in the deterministic case.
The long-time behavior of equation (<ref>) with ν=0 is highly dependent on the nature of and the singularity of . In general, little is known in the Hamiltonian case where is antisymmetric. For instance, if d=2, is rotation by π/2, and (k)=|k|^-2, the equation becomes the incompressible Euler vorticity equation (see <cit.>, <cit.>). Global well-posedness of classical/weak solutions <cit.> is known, but the asymptotic behavior is only partially understood (e.g., see <cit.> and references therein). For the same choice of d,, if (k)=|k|^-, for ∈ (0,2), then equation (<ref>) becomes the inviscid generalized SQG equation <cit.>. Global existence of smooth solutions to the gSQG equation is a major open problem <cit.>. It is only known if one adds suitably deterministic strong diffusion (e.g., see <cit.>). In the gradient case where =∓, global existence vs. finite-time blow-up depends on the choice of sign. We discuss only the model interaction (k) = |k|^- which is sometimes called a fractional porous medium equation. Local well-posedness of classical solutions is known <cit.>. But in the attractive case , suitably strong solutions blow up in finite time <cit.>. In the repulsive case -, global existence, uniqueness, and asymptotic behavior of nonnegative classical and L^∞ weak solutions are known when =2 <cit.> (see also <cit.>). The easier case >2 follows by the same arguments <cit.> (see also <cit.>). For 0<<2, global existence, regularity, and asymptotic behavior of certain nonnegative weak solutions are known <cit.>; but per our knowledge, these weak solutions are only known to be unique if d=1 <cit.>. It is an open problem whether classical solutions are global if 0<<2. In the interests of completeness, we also mention there is a large body of work on the long-time behavior of the gradient case for regular potentials satisfying convexity assumptions (on ^d). For example, see <cit.>, to which the title of our paper pays homage.[Many of the references discussed in this paragraph are set on ^d; but in general, these results have analogues on ^d.]
There is an extensive literature on the effects of noise (e.g., "regularization by noise"), a sample of which is contained in the references <cit.>. But to our knowledge, these previous works have not investigated the equilibrating properties of stochastic perturbations. Related in spirit to our work, we mention some works <cit.> on the ergodicity of fluid equations subject to stochastic forcing. But we emphasize that these results add noise to a diffusive deterministic model, for which a result comparable to ours is already known (e.g., see <cit.> for 2D Navier-Stokes), and are instead about the balance between the injection of energy through noise and the dissipation of energy through viscosity.
§.§ Comments on the proof
The proof of <ref> builds on the previous work <cit.>. The key point there for obtaining global solutions is a monotonicity formula for the Gevrey norm, asserting that it is strictly decreasing, provided the initial data and parameters are appropriately chosen. Showing this monotonicity requires carefully estimating the size of the nonlinearity and showing it does not overwhelm the dissipative effect of the diffusion. In the present work, we go a step further by considering the evolution equation satisfied by the unknown ϱ^t μ^t-1. We show a dissipation inequality for the Gevrey norm of ϱ^t, which, under suitable conditions on the initial data, allows us to deduce the exponential-in-time decay of the Gevrey norm of ϱ^t through a delicate continuity argument and Grönwall's lemma.
One might ask why we work on the torus for the equation (<ref>), as opposed to on ^d for the equation
_tθ +÷(θ∇∗θ) = ν(1+^s)θẆ^t
originally considered in <cit.>. The periodic setting is technically simpler since the spectrum is discrete and one does not have to worry about low-frequency issues, in particular, when >1. This allows us to replace the inhomogeneous multiplier (1+^s), which kills off all Fourier modes, by ^s, which kills off only nonzero Fourier modes. We expect that a similar analysis can be performed for (<ref>) on ^d mutatis mutandis, where now with high probability, μ^t e^-ν W^t(1+^s)θ^t should converge to zero (vacuum) in Gevrey norm as t→∞.
Finally, let us mention that our method is quite robust and would also work, for example, for the periodic 3D incompressible Euler equation modified by random diffusion (alternatively, the 3D Navier-Stokes with white noise modulated hyperviscosity):
_t u + u ·∇ u = -∇ p + ν^s uẆ.
This becomes clearer from rewriting (<ref>) in Leray projector form. One can show that with high probability, the transformed unknown v^t Γ^t u^t converges in Gevrey norm exponentially fast as t→∞ to the vector ∫_^3v^0 dx. To minimize the length of the paper, we leave such extensions to the interested reader.
§.§ Organization of paper
We briefly comment on the organization of the remaining body of the paper. <ref> introduces the scale of Gevrey function spaces, some elementary embeddings for these spaces, and then uses these spaces to show the local well-posedness for equation (<ref>), with the main result being <ref>. <ref> then shows the global existence and exponential decay to equilibrium, completing the proof of <ref>. This is spread over two preliminary results: <ref> and <ref>.
§.§ Notation
Let us conclude the introduction by reviewing the essential notation of the paper, following the conventions of <cit.>.
Given nonnegative quantities A and B, we write A≲ B if there exists a constant C>0, independent of A and B, such that A≤ CB. If A ≲ B and B≲ A, we write A∼ B. To emphasize the dependence of the constant C on some parameter p, we sometimes write A≲_p B or A∼_p B.
The Fourier and inverse transform of a function f:^d→^m are given by
f̂(k) = (f)(k) ∫_^df(x)e^-ix· kdx,
f̌(x) = ^-1(f)(x) 1/(2π)^d∑_k∈^df(k)e^ik· x,
The homogeneous Bessel potential space Ẇ^s,p is defined by
f_Ẇ^s,p|∇|^sf_L^p, s∈, p∈ (1,∞),
and the Fourier-Lebesgue space Ŵ^s,p is defined by
f_Ŵ^s,p|·|^sf̂_ℓ^p, s∈, p∈ [1,∞].
C^0([0,T); X) denotes the space of functions taking values in the Banach space X, which are continuous and bounded.
§ LOCAL WELL-POSEDNESS
We show local well-posedness for the equation (<ref>), the main result of this section being <ref> stated below. This proposition—and its proof via a contraction mapping argument—is a modification of <cit.>. Although it was noted in <cit.> that the results from that paper have corresponding analogues on the torus, we present the proof anyway because it is not written anywhere else and, more importantly, the two-tier function space (see (3.7) in the cited work) used on ^d becomes unnecessary on the torus because the spectrum is discrete. We also improve on <cit.> (and the earlier <cit.>) by removing the smallness condition β < ν^2/2, which explains why the statement may not look comparable.
Set A^2s and define the bilinear operator
B(f,g) ÷Γ*Γ^-1f(∇∗Γ^-1g).
Strictly speaking, B is time-dependent through Γ. When necessary, we make explicit this time dependence by writing B^t(f,g). Assume 1/(2π)^d∫_^dμ^0dx=1. In contrast to <cit.>, it will be more convenient to work with the unknown ϱ^t μ^t - 1, which satisfies the equation
_tϱ^t =-÷Γ*Γ^-1ϱ^t∇∗Γ^-1ϱ^t - ÷(∇∗ϱ^t) - ν^2/2^2sϱ^t
= -B^t(ϱ^t, ϱ^t) -L ϱ^t -ν^2/2Aϱ^t,
where L ÷(∇∗(·)). Note that L=0 if is antisymmetric. If we have a solution ϱ^t to (<ref>), then μ^t 1+ϱ^t is a solution to (<ref>). So, there is no loss in working with the unknown ϱ^t. We rewrite the Cauchy problem for (<ref>) in the mild form,
ϱ^t = e^-t(ν^2 A/2+L)ϱ^0 - ∫_0^t e^-(t-τ)(ν^2 A/2+L) B^τ(ϱ^τ,ϱ^τ)dτ.
Observe that the real part of the symbol of ν^2 A/2+L is
ν^2|k|^2s/2 + ((k))( k· k) ≥ν^2|k|^2s/2 - C|| |k|^2-,
where we have used assumption (<ref>) to obtain the lower bound. If 2s≥ 2-, then for all |k|≥(2C||/ν^2)^1/2s-2+, the symbol of ν^2 A/2+L has nonnegative real part.
To perform a contraction mapping argument based on (<ref>), we use a scale of Gevrey function spaces from <cit.> (see <cit.> for earlier L^2 special cases). For a ≥ 0, κ∈ℝ, define
f__a^κ, re^a A^1 / 2 f_Ŵ^κ s, r.
For 0<T<∞ and a continuous function ϕ:[0,T]→ [0,∞), we define
f_C_T^0_ϕ^,rsup_0≤ t≤ Tf^t__ϕ^t^,r.
We write C_∞^0 when sup_0≤ t≤ T is replaced by sup_0≤ t<∞. Define the Banach space
C_T^0_ϕ^,r{f∈ C([0,T]; Ŵ^ s,r(^d)) : f_C_T^0_ϕ^,r < ∞}.
We also allow for T=∞, replacing [0,T] in the preceding line with [0,∞).
Let d≥ 1, >0, max(1/2,2-/2)<s≤ 1. Given ,,ν>0, suppose W is a realization from the set Ω_,,ν and set ϕ^t+β t.
There exists r_0 ≥ 1 depending on d,,s, such that the following holds. For any 1≤ r≤ r_0, there exists σ_0 ∈ (0,2s-1/s) depending on d,,r,s, such that for any σ∈ (σ_0,2s-1/s) with 1-≤σ s, there exists a time T>0 such that for ϱ^0__^σ,r≤ R, there exists a unique solution ϱ∈ C_T^0 _ϕ^σ,r to the Cauchy problem for (<ref>). Moreover,
ϱ_C_T^0_^σ,r≤ 2ϱ^0__^σ,r.
Additionally, if ϱ_j^0__^σ,r≤ R, for j∈{1,2}, then
ϱ_1-ϱ_2_C_T^0_^σ,r≤ 2ϱ_1^0-ϱ_2^0__^σ,r.
The solutions given by <ref> conserve mass, hence solutions to the original equation (<ref>) also conserve mass. One readily sees this by integrating both sides of (<ref>) over ^d and by using the fundamental theorem of calculus together with ℱ(e^-t(ν^2 A/2+mL))(0) = 1. Thus,
∫_^dϱ^t(x)dx =∫_^dϱ^0(x)dx = 0.
§.§ Gevrey and Sobolev embeddings
Before proceeding to the proof of <ref>, we record some elementary embeddings satisfied by the spaces _a^,r. For proofs of the following lemmas, see <cit.>.
If a'≥ a≥ 0 and '≥, then
f__a^,r≤ e^a-a'f__a'^',r.
If '≥ and a'>a≥ 0, then
f__a^',r≤⌈'-⌉ !/(a'-a)^⌈'-⌉f__a'^,r,
where ⌈·⌉ denotes the usual ceiling function.
If 1≤ p <r≤∞, then
f_Ŵ^s,p≲_d,p,rf_Ŵ^(s+d(r-p)/rp)+,r,
where the notation (·)+ means (·)+ε, for any ε>0, with the implicit constant then depending on ε and possibly blowing up as ε→ 0^+. If 2≤ p≤∞, then if f̂(0) =0,
f_Ŵ^s,p≲_d,pf_Ẇ^s,p/p-1.
§.§ Contraction mapping argument
Throughout this subsection, assume that we have fixed a realization of W from Ω_,,ν. Fix ϱ^0 and define the map
ϱ^t↦ (ϱ)^t e^-t(ν^2 A/2+L)ϱ^0 - ∫_0^t e^-(t-τ)(ν^2 A/2+L) B^τ(ϱ^τ,ϱ^τ)dτ.
We check that is well-defined on C_T^0_ϕ^σ,r for ϕ^t=+ t, with ,,σ,r>0 satisfying the conditions in the statement of <ref>.
First, we control the linear term in (<ref>). We introduce some notation that will be used in what follows. Define the parameters
|k_0| sup{|k|: k ∈^d, β |k|^s + ((k) k· k)-ν^2 |k|^2s/2≥ 0 },
sup_k∈^d(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2).
Since 2s>max(2-,s) by assumption, |k_0| is finite and
= sup_k:|k|≤ |k_0|(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2).
For any 1≤ r≤∞, max(2-/2,0)<s≤ 1, σ∈, and ,,ν>0, it holds that
∀ t≥ 0, e^-t(ν^2 A/2+L)f__ϕ^t^σ,r≤ e^ t f__^σ,r.
Unpacking the definition of the _ϕ^t^σ, r norm, it holds that
e^-t(ν^2 A/2+L)f__ϕ^t^σ,r^r = e^ϕ^t A^1 / 2 e^-t(ν^2 A/2+L)f_Ŵ^σ s,r^r
=∑_k|k|^r s σ|e^ϕ^t|k|^s-ν^2|k|^2s/2 + ( k· k)(k)f̂(k)|^r
= [∑_|k|≤ |k_0| + ∑_|k| >|k_0|]|k|^r s σe^rα |k|^s|e^ t(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2)f̂(k)|^r
≤∑_|k|≤ |k_0| |k|^r s σe^rα |k|^se^rt|f̂(k)|^r +∑_|k|> |k_0||k|^r s σe^rα |k|^s |f̂(k)|^r
≤ e^rt e^ A^1/2f_Ŵ^σ s,r^r,
where the final line follows from ≥ 0.
Next, we control the bilinear term in (<ref>).
Let d≥ 1, >0, max(2-/2,1/2)<s≤ 1. There exists an r_0 ∈ [1,∞], depending on d,s, such that the following holds. For any 1≤ r ≤ r_0, there exists σ_0 ∈ (0,2s-1/s) depending on d,s,r, such that for any σ∈ (σ_0, 2s-1/s) with 1-≤σ s, there exists a constant C depending only on d,r,q,σ,s,β,ν, such that for any T>0,
∫_0^t e^-(ν^2A/2 + L)B^τ(ϱ_1^τ,ϱ_2^τ)dτ_C_T^0_ϕ^σ,r
≤ C|| ϱ_1_C_T^0_ϕ^σ,rϱ_2_C_T^0_ϕ^σ,r(|k_0|^sσ(e^t-1/) + T^1-σ s+1/2s).
We make the change of unknown ϱ_j^t e^-ϕ^t A^1/2|∇|^-σ sρ_j^t, so that
ρ_j^t_L̂^r = ϱ_j^t__ϕ^t^σ,r.
By Minkowski's inequality, we see that
e^ϕ^t A^1/2∫_0^te^-(t-τ)(ν^2 A/2+L)B^τ(ϱ_1^τ,ϱ_2^τ) dτ_Ŵ^σ s,r≤∫_0^t e^ϕ^t A^1/2 - (t-τ)(ν^2 A/2+L)B^τ(ϱ_1^τ,ϱ_2^τ)_Ŵ^σ s,rdτ,
and by definition of the Ŵ^σ s,r norm, the preceding right-hand side equals
∫_0^t (∑_ke^r(t-τ)(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2) |k|^rσ s
|∑_j|k· j| |(j)|/|k-j|^σ s|j|^σ s e^(ϕ^τ-ν W^τ)[|k|^s-|k-j|^s-|j|^s]ρ̂^τ_1(k-j) ρ̂^τ_2(j)|^r )^1/r d τ.
We adopt the notational convention ρ̂_1^τ(k-j)/|k-j|^σ s 0 when k=j (similarly, for ρ̂_2^τ). Using ϕ^t-ϕ^τ = (t-τ), the preceding expression is controlled by
∫_0^t (∑_ke^r(t-τ)(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2)|k|^rσ s
|∑_j |k· j| |(j)|/|k-j|^σ s|j|^σ s e^(ϕ^τ-ν W^τ)[|k|^s-|k-j|^s-|j|^s]ρ̂^τ_1(k-j) ρ̂^τ_2(j)|^r )^1/r d τ.
Since 0<s≤ 1, we have |k|^s - |k-j|^s-|j|^s≤ 0 for all k,j∈^d. Since ϕ^τ-ν W^τ≥ 0 for all 0≤τ≤ t by assumption, it follows that
e^(ϕ^τ-ν W^τ)[|k|^s-|k-j|^s-|j|^s]≤ 1.
Thus, for fixed k, estimating the inner sum of (<ref>), we find
|∑_j|k· j| |(j)|/|k-j|^σ s|j|^σ s e^(ϕ^τ-ν W^τ)[|k|^s-|k-j|^s-|j|^s]ρ̂^τ_1(k-j) ρ̂^τ_2(j)|^r
≲(∑_j|k· j| |(j)|/|k-j|^σ s|j|^σ s |ρ̂^τ_1(k-j)| |ρ̂^τ_2(j)|)^r
≲ |𝕄|^r |k|^r (∑_j |k-j|^-σ s |ρ̂^τ_1(k-j)| |j|^1-γ-σ s |ρ̂^τ_2(j)|)^r,
where we have implicitly used that satisfies (<ref>) to obtain the last line. With |k_0| defined as in (<ref>) above, there exists a constant >̣0, such that for frequencies |k|>|k_0|,
ν^2 |k|^2s/2 - β |k|^s - ((k) k· k)≥|̣k|^2s.
Furthermore, observe that by writing |k| = (t-τ)^-1/2s(t-τ)^1/2s|k|, it follows from the power series for z↦ e^z that
e^r(t-τ)(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2) |k|^rsσ≤ e^-r(̣t-τ)|k|^2s|k|^rσ s≲_(̣t-τ)^-r(σ s+1)/2s.
For frequencies |k|≤ |k_0| (of which there are at most finitely many), we crudely estimate
e^r(t-τ)(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2) |k|^rsσ≤ e^r(t-τ)|k_0|^rsσ,
with as in (<ref>). With these observations, we find
∫_0^t (∑_ke^r(t-τ)(β |k|^s + ((k) k· k)-ν^2 |k|^2s/2)|k|^rσ s
|∑_j|k· j| |(j)|/|k-j|^σ s|j|^σ s e^(ϕ^τ-ν W^τ)[|k|^s-|k-j|^s-|j|^s]ρ̂^τ_1(k-j) ρ̂^τ_2(j)|^r )^1/r d τ
≲ |||k_0|^sσ∫_0^t e^(t-τ)(∑_|k|≤ |k_0|(∑_j|k-j|^-σ s |ρ̂^τ_1(k-j)| |j|^1-γ-σ s |ρ̂^τ_2(j)|)^r )^1/r d τ
+ || ∫_0^t (t-τ)^-(σ s+1)/2s(∑_|k|>|k_0|(∑_j|k-j|^-σ s |ρ̂^τ_1(k-j)| |j|^1-γ-σ s |ρ̂^τ_2(j)|)^r )^1/r d τ.
Thus, it remains to estimate the factor containing the ℓ_k^r norm of the sum over j. For this, we use Young's inequality followed by Sobolev embedding <ref>,
(∑_k(∑_j|k-j|^-σ s |ρ̂^τ_1(k-j)| |j|^1-γ-σ s |ρ̂^τ_2(j)|)^r )^1/r
≤|·|^-σ sρ̂_1^τ_ℓ^p|·|^1--σ sρ̂_2^τ_ℓ^rp/(r+1)p-r
≲ρ_1^τ_Ŵ^-σ s, 1ρ_2^τ_Ŵ^1--σ s, 1_r=1 + ρ_1^τ_Ŵ^(d(r-1)/r-σ s)+, rρ_2^τ_Ŵ^1--σ s, r_p=1
r>1
+ ρ_1^τ_Ŵ^-σ s,rρ_2^τ_Ŵ^(1--σ s + d(r-1)/r)+, r_ p=r
r>1
+ρ_1^τ_Ŵ^(d(r-p)/rp-σ s)+,rρ_2^τ_Ŵ^(1--σ s+d(p-1)/p)+, r_1<p<r
r>1
= e^ϕ^τ A^1/2ϱ_1^τ_Ŵ^0,1e^ϕ^τ A^1/2ϱ_2^τ_Ŵ^1-,1_r=1 + e^ϕ^τ A^1/2ϱ_1^τ_Ŵ^d(r-1)/r+,re^ϕ^τ A^1/2ϱ_2^τ_Ŵ^1-,r_p=1
r>1
+ e^ϕ^τ A^1/2ϱ_1^τ_Ŵ^0,re^ϕ^τ A^1/2ϱ_2^τ_Ŵ^(1- + d(r-1)/r)+,r_ p=r
r>1
+ e^ϕ^τ A^1/2ϱ_1^τ_Ŵ^(d(r-p)/rp)+,r e^ϕ^τ A^1/2ϱ_2^τ_Ŵ^(1-+d(p-1)/p)+, r_1<p<r
r>1,
where the final equality follows from unpacking the definition of ρ^t. To obtain estimates that close, the top Sobolev index appearing in (<ref>) must be ≤σ s. This leads to the following conditions:
1-≤σ s, r=1
d(r-1)/r<σ s and 1-≤σ s, p=1 and r>1
1-+d(r-1)/r<σ s, p=r and r>1
d(r-p)/rp<σ s and 1-+d(p-1)/p < σ s, 1<p<r and r>1.
Assuming the preceding conditions are met and also that (σ s+1)/2s <1, it follows from our work that
e^ϕ^t A^1/2∫_0^te^-(t-τ)(ν^2 A/2+L)B^τ(ϱ_1^τ,ϱ_2^τ) dτ_Ŵ^σ s,r≲ |||k_0|^sσ(e^T-1/)ϱ_1_C_T^0_ϕ^σ,rϱ_2_C_T^0G_ϕ^σ,r
+|| T^1-σ s+1/2sϱ_1_C_T^0_ϕ^σ,rϱ_2_C_T^0G_ϕ^σ,r
for any 0≤ t≤ T. We adopt the convention that the first term in the preceding right-hand side is zero if =0.
To complete the proof of the lemma, it is important to list all the conditions we imposed on the parameters d,,σ,s,r during the course of the above analysis:
*
2-/2<s≤ 1;
*
*
r=1 and 1-≤σ s,
*
or r>1 and d(r-1)/r<σ s and 1-≤σ s,
*
or r>1 and 1- + d(r-1)/r<σ s,
*
or r>1 and ∃ p∈ (1,r) such that d(r-p)/rp<σ s and 1- +d(p-1)/p<σ s;
*
(σ s+1)/2s<1
We refer the reader to the proof of <cit.> for the existence of a non-trivial choice of parameters satisfying the above conditions.
Putting together the estimates of <Ref>, we have shown that there exists a constant C>0 depending on d,,r,σ,s,β,ν, such that
(ϱ)_C_T^0_ϕ^σ,r≤ e^Tϱ^0__ϕ^σ,r + C|| ϱ_C_T^0_ϕ^σ,r^2 (|k_0|^sσ(e^T-1/) + T^1-σ s+1/2s)
and
(ϱ_1)-(ϱ_2)_C_T^0_ϕ^σ,r≤ C||(|k_0|^sσ(e^T-1/) + T^1-σ s+1/2s)ϱ_1-ϱ_2_C_T^0_ϕ^σ,r
×*ϱ_1_C_T^0_ϕ^σ,r+ϱ_2_C_T^0_ϕ^σ,r.
We want to show that for any appropriate choice of T, the map is a contraction on the closed ball B_R(0) of radius R≥ 2ϱ^0__ϕ^σ,r centered at the origin in the space C_T^0_ϕ^σ,r. From the estimates (<ref>) and (<ref>), we see that if
e^T≤3/2,
2C||R(|k_0|^sσ(e^T-1/) + T^1-σ s+1/2s) ≤1/8,
then is a contraction on B_R(0). So by the contraction mapping theorem, there exists a unique fixed point ϱ=(ϱ) ∈ C_T^0_ϕ^σ,r. We let T_0 denote the maximal T such that (<ref>), (<ref>) both hold. We note that the maximal lifespan of the solution is ≥ T_0.
The preceding result shows the local existence and uniqueness. To complete the proof of <ref>, we now prove continuous dependence on the initial data. For j=1,2, let ϱ_j be a solution in C_T_j^0_ϕ^σ,r to (<ref>) with initial datum ϱ_j^0, such that ϱ_j^0__ϕ^σ,r≤ R. From the mild formulation (<ref>), the triangle inequality, <Ref>, we see that
ϱ_1-ϱ_2_C_T^0_ϕ^σ,r≤ϱ_1^0-ϱ_2^0__ϕ^σ,r
+ C||(|k_0|^sσ(e^T-1/) + T^1-σ s+1/2s)ϱ_1-ϱ_2_C_T^0_ϕ^σ,r*ϱ_1_C_T^0_ϕ^σ,r + ϱ_2_C_T^0_ϕ^σ,r.
Taking T smaller if necessary while still preserving T≳ T_0, we may assume that
2C||(|k_0|^sσ(e^T-1/) + T^1-σ s+1/2s) ≤1/4.
Bounding each ϱ_j_C_T^0_ϕ^σ,r by R in the last factor, it then follows from (<ref>) that
ϱ_1-ϱ_2_C_T^0_ϕ^σ,r≤ 2ϱ_1^0-ϱ_2^0__ϕ^σ,r.
This last estimate completes the proof of <ref>.
An examination of the proof of <ref> reveals that when ≤ 0, which is implied by the assumption ζ≥ 0 (recall (<ref>)), the time of existence T given by the fixed point argument satisfies the lower bound
T ≥ C(||R)^-2s/(2-σ)s-1,
where the constant C>0 depends quantitatively on the parameters d,,s,σ,r,β,ν.
§ GLOBAL EXISTENCE AND CONVERGENCE TO EQUILIBRIUM
We now conclude the proof of <ref> by showing that the solutions are global and converge to the uniform distribution as t→∞.
Assume that 1/(2π)^d∫_^dμ^0 dx=1. Let ϱ^t = μ^t -1 be as in <ref>, and recall that ϱ^t satisfies the equation
_tϱ^t = -B^t(ϱ^t, ϱ^t) -L ϱ^t -ν^2/2Aϱ^t,
where we remind the reader that A=^2s, the operator B was defined in <ref>, and L= ÷(∇∗(·)). Also, recall that ϱ̂^0(0) = ∫_^dϱ^0dx =0, so by conservation of mass (<ref>), ϱ̂^t(0)=0 for every t≥ 0.
Our first result shows that if ϱ belongs to a higher regularity Gevrey space on [0,T], then the norm associated to a lower regularity Gevrey space decays exponentially in time on [0,T].
Let d≥ 1, >0, 1≤ r≤∞, max(1/2,2-/2)<s≤ 1. Given ,,ν>0, set ϕ^t+ t and assume that W is a realization from Ω_,,ν. Define
ζinf_k∈^d : k≠ 0(ν^2/2-β|k|^-s +((k))|k|^-2s k· k)
and assume that ζ>0.
There is a threshold _0∈ depending on r,d,s,, such that for any >_0, the following holds. There is a constant C>0, depending only on d,,r,s,, such that if ϱ∈ C_T^0_ϕ^+2/r,r is a solution to (<ref>), for some T>0, satisfying
ϱ^0__^,r < ζ/C||,
then
∀ t∈ [0,T], ϱ^t__ϕ^t^,r≤ e^-ζ t/2ϱ^0__^,r .
The starting point of the proof of <ref> (cf. <cit.>) is to compute for k∈^d,
d/d t|e^ϕ^t |k|^sϱ̂^t(k)|=(|e^ϕ^t | k |^sϱ̂^t(k)|^-1 e^ϕ^t | k |^sϱ̂^t(k)(β|k|^s e^ϕ^t|k|^sϱ̂^t(k)..
. -e^ϕ^t|k|^sℱ(B(ϱ^t, ϱ^t))(k) + (k· k )(k)e^ϕ^t|k|^sϱ̂^t(k)-ν^2/2|k|^2s e^ϕ^t|k|^sϱ̂^t(k)).
Majorizing the nonlinear term by its absolute value, we obtain
d/d t|e^ϕ^t|k|^sϱ̂^t(k)| ≤-|e^ϕ^t|k|^sϱ̂^t(k)|(ν^2/2|k|^2 s-β|k|^s - ((k)) k· k)+|e^ϕ^t|k|^sℱ(B^t(ϱ^t, ϱ^t))(k)|.
Using (<ref>), we compute
1/rd/d te^ϕ^t A^1 / 2ϱ^t_Ŵ^σ s, r^r =1/rd/dt∑_k |k|^rσ s|e^ϕ^t |k|^sϱ̂^t(k)|^r
=∑_k |k|^rσ s|e^ϕ^t |k|^sϱ̂^t(k)|^r-1d/d t|e^ϕ^t|k|^sϱ̂^t(k)|
≤-ζ∑_k|e^ϕ^t|k|^s|k|^(σ+2/r) sϱ̂^t(k)|^r
+∑_k|e^ϕ^t|k|^s|k|^σ sϱ̂^t(k)|^r-1|k|^σ s e^ϕ^t |k|^s|ℱ(B^t(ϱ^t, ϱ^t))(k)|.
To estimate the nonlinear term in the preceding right-hand side, we use two lemmas, which are periodic analogues of <cit.>, respectively. We omit their proofs, as the arguments are essentially the same as the ≤ 1 Euclidean case.
For any t≥ 0 with ϕ^t - ν W^t≥ 0, it holds for any test functions f,g that
|e^ϕ^t|k|^s(B^t(f,g))(k)| ≲_ ||∑_j≠ 0|k| |j|^1-|e^ϕ^t|k-j|^sf̂(k-j) e^ϕ^t|j|^sĝ(j)|.
Let d≥ 1, >0, 1≤ r≤∞, 1/2< s≤ 1. Then there exists a threshold _0 depending on d,,r,s, such that for any >_0, there exists a constant C>0 depending on d,,r,s, so that
∑_k|e^ϕ^t|k|^s|k|^ sĥ(k) |^r-1|k|^ s+1∑_j≠ 0|j|^1-|e^ϕ^t|k-j|^sf̂(k-j) e^ϕ^t|j|^sĝ(j)|
≤ Ce^ϕ^t A^1/2 h_Ŵ^(+2/r)s,r^r-1(e^ϕ^t A^1/2f_Ŵ^( + 2/r)s,re^ϕ^t A^1/2g_Ŵ^ s,r + e^ϕ^t A^1/2f_Ŵ^ s,re^ϕ^t A^1/2g_Ŵ^( + 2/r)s,r).
Applying <Ref> with f=g=h=ϱ^t and >_0, and choosing σ= in the inequality (<ref>), we find that
d/d t1/re^ϕ^t A^1 / 2ϱ^t_Ŵ^κ s,r^r ≤-ζe^ϕ^t A^1 / 2ϱ^t_Ŵ^(κ+2/r) s, r^r + C|𝕄| e^ϕ^tA^1/2ϱ^t^r_Ŵ^(κ+2/r)s,re^ϕ^t A^1 / 2ϱ^t_Ŵ^κ s, r
= e^ϕ^t A^1 / 2ϱ^t_Ŵ^(κ+2/r) s, r^r(C|𝕄|e^ϕ^t A^1 / 2ϱ^t_Ŵ^κ s, r-ζ).
If we assume that
e^ A^1 / 2ϱ^0_Ŵ^κ s, r < ζ/2C|𝕄|,
where C is the same constant as in (<ref>), then we claim that this inequality persists for all time t ∈ [0,T]. We argue by contradiction. Let T_*≥ 0 denote the maximal time such that
∀ t∈ [0,T_*), e^ϕ^t A^1/2ϱ^t_Ŵ^ s, r < ζ/2C|𝕄|.
Such a T_* exists and is positive since the preceding inequality is true at t=0 by assumption and the function t↦e^ϕ^t A^1/2ϱ^t_Ŵ^ s,r is continuous. If T_*=T, then there is nothing to prove, so assume otherwise. (<ref>) together with (<ref>) imply that t↦e^ϕ^t A^1/2ϱ^t_Ŵ^ s, r is strictly decreasing on [0,T_*) (assuming ϱ^t is a nonzero solution), implying
e^ϕ^t A^1/2ϱ^T_*_Ŵ^ s,r <e^ϕ^t A^1/2ϱ^0_Ŵ^ s, r < ζ/2C|𝕄|.
This inequality implies by maximality that T_*=T. Therefore, for t∈ [0,T],
d/d te^ϕ^t A^1 / 2ϱ^t_Ŵ^κ s, r^r≤ -rζ/2e^ϕ^t A^1 / 2ϱ^t_Ŵ^(κ+2/r) s, r^r≤ -rζ/2e^ϕ^t A^1 / 2ϱ^t_Ŵ^κ s, r^r.
Applying Grönwall's lemma, we conclude that
∀ t∈ [0,T], e^ϕ^t A^1 / 2ϱ^t_Ŵ^κ s, r^r≤ e^-rζ t/2e^α A^1 / 2ϱ^0_Ŵ^κ s, r^r,
which completes the proof of <ref>.
On its own, <ref> does not imply <ref> because the former assumes that ϱ^t lives in a higher index Gevrey space on [0,T], while only showing that a lower index Gevrey norm of ϱ^t decays on [0,T]. The lower index norm does not control the higher index norm, so somehow we have to make up for this discrepancy between spaces.
Fix >0 and suppose that ϱ^0∈_+^σ_0,r for σ_0 above the regularity threshold _0 given by <ref>. Assume that the parameters d,,r,s,σ_0,,,ν satisfy all the constraints of <ref> and also assume that
ϱ^0__+^σ_0,r < ζ/C_exp||,
where ζ is as in (<ref>) and C_exp>0 is the constant from <ref>. Assuming a realization of W from Ω_,,ν and given r≥ 1 sufficiently small depending on d,,s, <ref> implies that for any 0<σ<2s-1/s, with 1-≤σ s, sufficiently large depending on d,,s,r, there is a maximal solution ϱ to equation (<ref>) with lifespan [0,T_max,σ,), such that ϱ belongs to C_T^0_ϕ+^σ,r for any 0≤ T<T_max,σ,. The main lemma to conclude global existence relates the lifespan of ϱ^t in _ϕ^t+^σ,r to the lifespan of ϱ^t in the larger space _ϕ^t+'^σ,r, for any '∈ [0,). For details on how to prove such a result, see the proof of <cit.>, bearing in mind <ref>.
Let ϱ be as above. There exists a constant C>0 depending on d,,r,s,σ,,ν such that for any 0≤_2<_1≤, the maximal times of existence T_max,σ,_1, T_max,σ,_2 of ϱ^t as taking values in _ϕ^t+_1^σ,r, _ϕ^t+_2^σ,r, respectively, satisfy the inequality
T_max,σ,_2≥ T_max,σ,_1 + C(||ϱ^0__+^σ_0,r)^-2s/2s-σ s-1.
Fix 0<'<, and let σ,σ_0 be as above. If T_max, σ, '<∞, then let n∈ be such that n C (||ϱ^0__+^σ_0,r)^-2s/2s-σ s-1 satisfies the inequality
n C(||ϱ^0__+^σ_0,r)^-2s/2s-σ s-1 > T_max,σ,'-T_max,σ,,
where C is the same constant as in the inequality (<ref>). We observe from <ref> that
T_max,σ,' - T_max,σ, = ∑_j=0^n-1*T_max, σ, - (j+1)(-')/n - T_max,σ,-j(-')/n
≥∑_j=0^n-1 C(||ϱ^0__+^σ_0,r)^-2s/2s-σ s-1
> T_max,σ,'-T_max,σ,,
which is a contradiction. Thus, T_max,σ,'=∞.
For any 0<'< and any 0<σ<2s-1/s sufficiently large depending on d,,s,r, it therefore holds that ϱ_C_T^0_ϕ+'^σ,r<∞ for all T>0. Using the arbitrariness of ', <ref> implies that for any T>0, ϱ_C_T^0_ϕ+'^σ_0+2/r,r < ∞. Using that
ϱ^0__+'^σ_0,r≤ϱ^0__+^σ_0,r < ζ/C_exp||
by assumption (<ref>), we can apply <ref> on the interval [0,T] to obtain that
∀ t∈ [0,T], ϱ^t__ϕ^t+'^σ_0,r≤ e^-ζ t/2ϱ^0__+'^σ_0,r .
Since T>0 was arbitrary, the decay (<ref>), in fact, holds on [0,∞).
Finally, we can replace ' in both sides of (<ref>) by the larger . Indeed, the result of the preceding paragraph and the trivial inequality ·__+'^σ_0,r≤·__+^σ_0,r, for '≤, give
∀ t≥ 0, ϱ^t__ϕ^t+'^σ_0,r≤ e^-ζ t/2ϱ^0__+'^σ_0,r≤ e^-ζ t/2ϱ^0__+^σ_0,r < ∞.
The desired conclusion now follows by unpacking the definition of the left-hand side and appealing to the monotone convergence theorem.
|
http://arxiv.org/abs/2307.02053v1
|
20230705063654
|
Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning
|
[
"Deepanway Ghosal",
"Yew Ken Chia",
"Navonil Majumder",
"Soujanya Poria"
] |
cs.CL
|
[
"cs.CL"
] |
Code: <https://github.com/declare-lab/flacuna>
Model: <https://huggingface.co/declare-lab/flacuna-13b-v1.0>
Flan-mini dataset: <https://huggingface.co/declare-lab/flan-mini>
Recently, the release of <cit.> has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architectures. Interestingly, despite being introduced four years ago, T5-based LLMs, such as Flan-T5, continue to outperform the latest decoder-based LLMs, such as LLaMA and Vicuna, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging Vicuna, a large language model based on LLaMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned Vicuna using a customized instruction dataset collection called Flan-mini.
This collection includes a subset of the large-scale instruction dataset known as Flan, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, Flacuna, are obtained through fine-tuning Vicuna on the Flan dataset, leading to significant improvements across numerous benchmark datasets in that evaluation suite. Flacuna is publicly available at <https://huggingface.co/declare-lab/flacuna-13b-v1.0>.
§ INTRODUCTION
ChatGPT and its successor GPT-4 have surpassed the prior state-of-the-art models on a vast majority of the benchmarking tasks and datasets. However, to preserve privacy, natively running a 175B+ sized model like GPT-3 is beyond the capabilities of most organizations, let alone individuals. This has prompted many researchers to fine-tune manageable-sized LLMs—from 7B to 30B—on a diverse set of
instruction examples
generated by ChatGPT or GPT-4. This has birthed LLMs, such as Alpaca <cit.> and Vicuna <cit.>, which are fine-tuned checkpoints of LLaMA <cit.>. These models have attained close to ChatGPT-level performance on some specific benchmarking tasks, but overall generalization still remains elusive. Recent works like <cit.> strongly hint that the fine-tuning datasets dictate the task-specific performances. For instance, it has been observed that Flan-T5 — a T5 checkpoint fine-tuned on the Flan Collection instruction dataset — outperforms Vicuna and Alpaca on tasks involving strong reasoning and problem-solving skills. This spurred us to fine-tune Vicuna on the Flan-mini Collection dataset, anticipating improvement on reasoning-intensive tasks in <cit.>.
To this end, we first sample a 1M-sized instruction dataset from the 15M-sized Flan Collection dataset <cit.> and combine it with several other datasets comprising coding tasks and ChatGPT/GPT-4 distilled conversations. The resulting smaller dataset, Flan-mini, is then cast into the conversational format of Vicuna.
To ensure a reasonable computational cost for the fine-tuning process,
we retrofit a LoRA <cit.> adapter into the LLaMA <cit.> decoder-transformer of Vicuna. Following a parameter-efficient LoRA fine-tuning of the Vicuna checkpoint on Flan-mini, we obtain Flacuna. As expected, Flacuna outperforms Vicuna by a substantial margin on most benchmark datasets, especially on reasoning-intensive tasks. However, the performance of Flacuna still remains below that of Flan-T5 on the same reasoning benchmarks. This could be attributed to the 15-times smaller size of the Flan-mini instruction dataset, which may contain less diverse samples. Furthermore, full fine-tuning of Flacuna may narrow the gap with Flan-T5.
This work overall has the following contributions:
* Improving the problem-solving capability of Vicuna through parameter-efficient fine-tuning on Flan-mini.
* Introducing an instruction tuning dataset, Flan-mini, comprising a diverse set of tasks and templates.
§ TRAINING DETAILS
Preparing the Flan-mini Collection.
Given the enormous size of the Flan Collection <cit.>, we opted to work with a carefully selected subset that maintains a high level of task diversity while reducing the overall dataset size. In Table <ref>, we present the specific tasks included in our subset of Flan, along with their respective dataset sizes.
As the public release of the Flan Collection does not include programming tasks, we augment the collection with existing code datasets.
Specifically, we include CodeContests <cit.>, APPS <cit.> and CodeSearchNet <cit.>.
Following the data processing pipeline of Flan Collection, we sample a fixed number of examples from each dataset, where each example is randomly augmented with different prompt templates.
Specifically, the examples are processed with a pool of handcrafted prompt templates and may be used as zero-shot examples or grouped together with few-shot demonstrations <cit.>.
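As a rough illustration of this sampling-and-templating step, a minimal Python sketch is given below; the template strings, the per-dataset cap, and the few-shot probability are invented placeholders, not the actual Flan Collection templates or our exact processing code.

import random

# Hypothetical prompt templates standing in for the handcrafted Flan templates.
TEMPLATES = [
    "{instruction}\n\nAnswer:",
    "Question: {instruction}\nResponse:",
    "Task: {instruction}\nOutput:",
]

def build_subset(datasets, per_dataset_cap, few_shot_prob=0.2, seed=0):
    """Sample a fixed number of examples per dataset and wrap each one in a
    randomly chosen template; occasionally prepend a few-shot demonstration."""
    rng = random.Random(seed)
    subset = []
    for name, examples in datasets.items():
        for ex in rng.sample(examples, min(per_dataset_cap, len(examples))):
            template = rng.choice(TEMPLATES)
            prompt = template.format(instruction=ex["input"])
            if rng.random() < few_shot_prob and len(examples) > 1:
                demo = rng.choice(examples)
                prompt = (template.format(instruction=demo["input"])
                          + " " + demo["output"] + "\n\n" + prompt)
            subset.append({"source": name, "prompt": prompt, "target": ex["output"]})
    return subset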
Maintaining Vicuna's Chatting Ability.
Vicuna has demonstrated remarkable chatting ability, achieving 90% of the performance of ChatGPT. This indicates its significant potential as an open-source alternative to closed-source large language models (LLMs) like ChatGPT. To ensure that Flacuna retains Vicuna's learned knowledge and chatting ability, we incorporated various ChatGPT datasets, including Alpaca <cit.>, Code Alpaca <cit.>, and ShareGPT <cit.>, into our Flan collection. Among these three datasets, Vicuna was originally fine-tuned using the ShareGPT dataset. The final collection was then used to train Flacuna.
Architecture. We employed LoRA in the Vicuna model for fine-tuning on the Flan-mini collection. We inserted the low-rank adapters on all the query and value projection layers, resulting in a total trainable parameter count of 6.55M, which is only around 0.05% of the parameter count of the original 13B Vicuna model. The maximum input sequence length was set to 1280, and efficient training was facilitated by utilizing bf16 precision.
Hyperparameter Details.
was trained on 4×A6000 GPUs for one epoch. We used 16 gradient accumulation steps with a per-device batch size of 2, resulting in a total batch size of 128 (4 GPUs × 2 × 16). We used 3000 warm-up steps and a learning rate of 2e-5.
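A minimal sketch of this setup using Hugging Face's transformers and peft libraries is given below. The checkpoint identifier, LoRA rank, alpha, and dropout are our assumptions; the paper only specifies the target modules, bf16 precision, and the 6.55M trainable-parameter count (a rank of 8 on the query/value projections of a 13B LLaMA is consistent with that count).

```python
# Sketch of the LoRA setup; model name and LoRA hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-13b-v1.3",              # placeholder checkpoint identifier
    torch_dtype=torch.bfloat16,            # bf16 precision, as reported
)

lora_config = LoraConfig(
    r=8,                                   # assumed; ~6.55M trainable params on a 13B LLaMA
    lora_alpha=16,                         # assumed
    lora_dropout=0.05,                     # assumed
    target_modules=["q_proj", "v_proj"],   # query and value projections, as stated
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Reported hyperparameters: 4 GPUs x batch 2 x 16 accumulation steps = 128 total batch
# size, learning rate 2e-5, 3000 warm-up steps, max sequence length 1280, 1 epoch.
```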
§ EVALUATION TASKS AND RESULTS
§.§ Problem Solving Evaluation
To assess the problem-solving prowess of instructed large language models (LLMs), employs a range of benchmarks encompassing real-world exams that delve into diverse topics. These benchmarks encompass complex instructions, arithmetic problems, programming challenges, and causal reasoning tasks. In order to excel in these benchmarks, models need to exhibit a profound understanding of the world, demonstrate multi-hop reasoning capabilities, showcase creativity, and employ a plethora of other cognitive skills.
World Knowledge.
The Massive Multitask Language Understanding (MMLU) benchmark, introduced in the work by <cit.>, serves as an assessment tool to gauge the problem-solving aptitude and world knowledge of language models across various subjects. It offers evaluations in both zero-shot and few-shot settings, presenting a more challenging and human-like evaluation scenario. The MMLU benchmark encompasses a comprehensive range of 57 subjects spanning STEM, humanities, social sciences, and other domains. The difficulty levels of the tasks within the benchmark vary from elementary to advanced professional levels, providing a comprehensive assessment of the model's capabilities in problem-solving and domain understanding.
Complex Instructions.
The subset known as BIG-Bench Hard (BBH) comprises 23 highly demanding tasks carefully selected from the BIG-Bench benchmark <cit.> to specifically target tasks that are considered to surpass the current capabilities of language models <cit.>. BBH presents models with intricate instructions that require advanced skills in navigation, logical deduction, and fallacy detection.
Comprehension and Arithmetic.
Discrete Reasoning Over Paragraphs (DROP) is a reading comprehension task with a mathematical focus. It challenges systems to engage in discrete reasoning by analyzing passages extracted from Wikipedia articles. In order to excel in the DROP task, a system needs to adeptly navigate references within a question and identify the appropriate sections of the provided passage. Additionally, the system must demonstrate proficiency in performing discrete operations like addition, counting, or sorting.
Programming.
HumanEval serves as a problem-solving benchmark specifically designed for assessing the performance of large language models that are trained on code <cit.>. The benchmark comprises 164 unique programming problems, encompassing areas such as language comprehension, algorithms, and basic mathematics. Some of the problems included in HumanEval are similar in nature to straightforward software interview questions. In the evaluation process, models are assessed based on the functional correctness of the code programs they generate, with the criteria for correctness determined by the given docstrings. HumanEval provides a comprehensive evaluation framework for assessing the problem-solving capabilities of language models in a code-centric context.
Causality.
The Counterfactual Reasoning Assessment (CRASS) benchmark is a novel dataset and evaluation tool developed specifically to assess the causal reasoning abilities of large language models. By employing counterfactual scenarios, CRASS tests the model's capability to identify and select appropriate causal explanations. This benchmark provides a unique and rigorous evaluation framework to gauge the causal reasoning capabilities of language models.
§.§ Alignment to Human Values
Noting the importance of aligning LLMs to human values, incorporates the Helpful, Honest, and Harmless (HHH) benchmark <cit.>. The benchmark showcases engaging dialogues between humans and conversational assistants, challenging the model to discern and provide the most appropriate response. It encompasses a diverse array of 61 honesty-related, 59 helpfulness-related, and 58 harmlessness-related samples, along with 43 unique instances falling within the "other" category. The inclusion of the "other" category accounts for examples that embody values not explicitly covered by honesty, helpfulness, or harmlessness.
§.§ Writing Experiments
For the writing experiment, we utilized the IMPACT dataset, which is readily available in . This comprehensive dataset consists of 50 prompts across distinct categories, namely informative, professional, argumentative, and creative. Following that, ChatGPT was assigned the responsibility of scoring the models' responses in terms of relevance (Rel.) and coherence (Coh.) on a scale of 1 to 5. For more comprehensive information regarding this evaluation, we refer readers to <cit.>.
§.§ Results
Comparative Baselines.
As baselines, we selected Vicuna <cit.> and StableVicuna[<https://huggingface.co/CarperAI/stable-vicuna-13b-delta>].
Few-shot Problem-solving.
We present the results of on five datasets (see Table <ref>) from the benchmark, focusing on problem-solving tasks. In 4 out of 5 tasks, outperformed Vicuna, showing an average performance improvement of 5.6 points over the LLaMA backbone. However, it performed slightly worse on code-related problem-solving tasks in the HumanEval dataset, with a margin of 0.6 points. Overall, the improvement in compared to Vicuna is 5.1 points averaged over the five tasks.
Out of the five problem-solving datasets, one of them, DROP, is categorized as a held-in dataset. It is a part of our Flan collection and was utilized for training . As a result, we observed a significant performance boost of 11 points compared to Vicuna. The remaining datasets are considered held out.
0-shot Problem-solving.
We conducted a 0-shot performance evaluation of and compared it against both Vicuna and StableVicuna. The results presented in Table <ref> demonstrate a noteworthy performance leap by compared to its competitors. This improvement can be attributed to the training of on the high-quality Flan instruction dataset.
HHH Evaluation.
We conducted a further evaluation using BBH's HHH evaluation dataset (see Table <ref>), where exhibited an impressive 11% improvement over Vicuna. Notably, our instruction dataset collection aimed to enhance Vicuna's problem-solving abilities, but it also had a positive impact on its HHH performance. This observation aligns with the experience of Flan-T5, which achieved a 24.2% performance improvement over its T5 backbone after fine-tuning on Flan.
Writing Evaluation.
While primarily excels in problem-solving tasks, we made efforts to maintain the impressive writing and chatting ability of Vicuna. To achieve this, we incorporated conversational datasets generated by GPT-4, such as GPT-4-Alpaca and ShareGPT, into the Flan-mini collection. However, despite these efforts, we observed certain issues in 's
writing
performance. In some cases, it generates code snippets in response to prompts that are unrelated to coding. We attribute this behavior to the significant data imbalance, where the conversational dataset constitutes only 8.2% of the entire data mixture. Prompt engineering techniques can help rectify such issues.
We discovered that generates responses of reasonable quality when provided with the following template: . This finding suggests that with the appropriate prompts, we can improve 's chatting performance.
However, upon careful examination of the generated samples, it becomes apparent that does not outperform as
a writing assistant.
This observation is reinforced by the evaluation of the generated responses to the prompts in the IMPACT dataset using ChatGPT, as depicted in Table <ref>. ChatGPT consistently ranks 's responses significantly higher than those of .
The subpar performance of in
writing-based
scenarios can be attributed to several factors. Firstly, the disproportionate
scarcity
of conversational datasets in may have contributed to this outcome.
Additionally, parameter-efficient tuning methods such as LoRA may limit the effectiveness of the model in learning both problem-solving and general writing abilities. Hence, we may explore other efficient training methods for LLMs in the future <cit.>.
An example of the prompt and 's response is shown below.
Prompt:
's response:
1. Introduction
Welcome to the user manual for the XYZ household appliance. This appliance is designed to make your life easier and more convenient. With this appliance, you can prepare your meals quickly and efficiently.
2. Operation
To operate the appliance, follow these steps:
a. Turn on the appliance by pressing the power button.
b. Select the desired cooking mode by using the mode selector.
c. Place the food inside the appliance.
d. Close the lid and press the start button.
e. Wait for the appliance to complete the cooking process.
3. Maintenance
To keep your appliance in good working condition, follow these steps:
a. Clean the appliance after each use.
b. Check the appliance for any signs of damage or wear and tear.
c. If you notice any issues, contact the manufacturer or a certified repair technician.
4. Troubleshooting
If you encounter any issues with your appliance, try the following troubleshooting steps:
a. Check the power cord and plug to ensure they are securely connected.
b. Check the fuse to ensure it is not blown.
c. Check the appliance manual for troubleshooting tips.
d. If the issue persists, contact the manufacturer or a certified repair technician.
5. Safety Precautions
Always follow these safety precautions when using the appliance:
a. Keep the appliance clean and free of debris.
b. Do not use the appliance if it is damaged or showing signs of wear and tear.
c. Do not leave the appliance unattended while it is in operation.
d. Always use the appliance according to the manufacturer's instructions.
6. Conclusion
Congratulations on your new XYZ household appliance! With proper use and maintenance, your appliance will provide you with years of reliable service. If you have any questions or concerns, please refer to the appliance manual or contact the manufacturer.
§ LIMITATIONS AND FUTURE WORK
Despite the promising advancements of compared to Vicuna, we have identified some issues that require addressing:
* If is asked to provide descriptive answers to questions like “Present arguments for or against lowering the age bar for drinking,” generates code snippets instead. This behavior could be attributed to its imperfect understanding of instructions or a tendency to hallucinate.
* is still significantly behind Flan-T5 in terms of problem-solving abilities.
* Surprisingly, exhibits inferior performance compared to both LLaMA and Vicuna on coding-related problems. This outcome is unexpected, considering that we incorporated numerous coding problem-solving datasets into our instruction tuning collection.
* is trained with a maximum input sequence length of 1280 which limits its ability to comprehend longer input sequences.
To address these limitations and known issues, we can explore the following steps:
* Based on previous studies, it has been observed that LoRA performs better with larger models <cit.>, such as those with 30B or 65B parameters, and excels in task-specific settings. Therefore, in future work, we could enhance by fully fine-tuning , without LoRA, particularly on the Flan collection. Another future work is to train on longer token length.
* We can incorporate the original Flan collection into the training process, as it is fifteen times larger than the instruction dataset we used in this study. Flan-T5 underwent training on this extensive collection, which resulted in remarkable problem-solving performance.
* The chatting or writing performance of could be improved by incorporating larger conversational datasets in and subsequently training on it.
entry_id: http://arxiv.org/abs/2307.00504v1
published: 20230702073856
title: On efficient computation in active inference
authors: Aswin Paul, Noor Sajid, Lancelot Da Costa, Adeel Razi
primary_category: cs.LG
categories: cs.LG, cs.AI, q-bio.NC
On efficient computation in active inference
============================================
Despite being recognized as neurobiologically plausible, active inference faces difficulties when employed to simulate intelligent behaviour in complex environments due to its computational cost and the difficulty of specifying an appropriate target distribution for the agent. This paper introduces two solutions that work in concert to address these limitations. First, we present a novel planning algorithm for finite temporal horizons with drastically lower computational complexity. Second, inspired by Z-learning from control theory literature, we simplify the process of setting an appropriate target distribution for new and existing active inference planning schemes.
Our first approach leverages the dynamic programming algorithm, known for its computational efficiency, to minimize the cost function used in planning through the Bellman-optimality principle.
Accordingly, our algorithm recursively assesses the expected free energy of actions in the reverse temporal order. This improves computational efficiency by orders of magnitude and allows precise model learning and planning, even under uncertain conditions. Our method simplifies the planning process and shows meaningful behaviour even when specifying only the agent's final goal state.
The proposed solutions make defining a target distribution from a goal state straightforward compared to the more complicated task of defining a temporally informed target distribution. The effectiveness of these methods is tested and demonstrated through simulations in standard grid-world tasks. These advances create new opportunities for various applications.
§ INTRODUCTION
How should an organism perceive, learn, and act to ensure survival when born into a new world? How do `agents' eventually learn to exhibit sentient behaviour in nature, such as hunting and navigation?
A prominent framework that approaches these questions is stochastic optimal control (SOC), which determines the best possible set of decisions—given a specific criterion—at any given time and in the face of uncertainty. The fundamental problem that SOC addresses can be defined as follows: When born at time t=1 and ahead, an `agent' receives observations from its surrounding `environment'. This `agent' not only passively receives observations but also is capable of responding with `actions'. Additionally, it may receive information or has inbuilt reward systems that quantify its chance of survival and progress. So, this process may be summarised as a stream of data from the agent's perspective: (o_1; a_1), (o_2, r_2; a_2),..., (o_t, r_t). Here, o_t stands for the observation at time t, a_t stands for the agent's action at time t, and r_t stands for the `reward' at time t from the external environment or agent's inbuilt reward structure. In this setting, the primary goal of an agent is to
Maximise: Score = ∑_τ=1^t r_τ. [Reward scores the desirability of a particular outcome or state, akin to a cost function. Briefly, it can be defined explicitly by the `external' environment (extrinsic reward) or internally by the agent itself (intrinsic reward).]
Eq.<ref> is an optimisation problem, and due to its general structure, it has a vast scope in various disciplines across the sciences. Several fields of research grew around this idea in the past decades, like reinforcement learning (RL) <cit.>, control theory <cit.>, game theory <cit.>, and economics <cit.>. But in fact, formulating decision-making as utility maximisation originated much earlier in the ethical theory of utilitarianism in 18th-century philosophy <cit.>, and was later applied by Pavlov in the early 20th century to account for animal conditioning <cit.>. Many current engineering methods, such as Q-learning <cit.>, build upon the Bellman-optimality principle to learn proper observation-action mappings that maximise cumulative reward. Model-based methods in RL, like Dyna-Q <cit.>, employ an internal model of the `environment' to accelerate this planning process <cit.>. Similarly, efficient methods, e.g., which linearly scales with the problem dimensions, emerged in classical control theory to compute optimal actions in similar settings <cit.>.
Another critical and complementary research direction is studying systems showing `general intelligence', which abounds in nature. Indeed, we see a spectrum of behaviour in the natural world that may or may not be accountable by the rather narrow goal of optimising cumulative reward. By learning more about how the brain produces sentient behaviour, we can hope to accelerate the generation of artificial general intelligence <cit.>. This outlook motivates us to look into the neural and cognitive sciences, where an integral theory is the free energy principle (FEP), which brings together Helmholtz's early observations of perception with more recent ideas from statistical physics and machine learning <cit.> to attempt a mathematical description of brain function and behaviour in terms of inference that has the potential of unifying many previous theories on the subject, including but not limited to cumulative reward maximisation <cit.>.
In the last decade, the FEP has been applied to model and generate biological-like behaviour under the banner of active inference <cit.>. Active inference has since percolated into many adjacent fields owing to its ambitious scope as a general modelling framework for behaviour <cit.>. In particular, several recent experiments posit active inference as a promising approach to optimal control and explainable and transparent artificial intelligence <cit.>. In this article, we consider active inference as an approach to stochastic control, its current limitations, and how they can be overcome with dynamic programming and the adequate specification of a target distribution.
In the following three sections, we consider the active inference framework, discuss existing ideas accounting for perception, planning and decision-making—and identify their limitations. Next, in Section <ref>, we show how dynamic programming can address these limitations by enabling efficient planning and can scale up existing methods. We formalise these ideas in a practical algorithm for partially observed Markov decision processes (POMDP) in Section<ref>. Then we discuss the possibility of learning the agent's preferences by building upon Z-learning <cit.> in Section <ref>. We showcase these innovations with illustrative simulations in Section <ref>.
§ ACTIVE INFERENCE AS BIOLOGICALLY PLAUSIBLE OPTIMAL CONTROL
The active inference framework is a formal way of modelling the behaviour of self-organising systems that interface with the external world and maintain a consistent form over time <cit.>. The framework assumes that agents embody generative models of the environment they interact with, on which they base their (intelligent) behaviour <cit.>. The framework, however, does not impose a particular structure on such models. Here, we focus on generative models in the form of partially observed Markov decision processes (POMDPs) for their simplicity and ubiquitous use in the optimal control literature <cit.>. In the next section, we discuss the basic structure of POMDPs and how the active inference framework uses them.
§.§ Generative models using POMDPs
Assuming agents have a discrete representation of their surrounding environment, we turn to the POMDP framework <cit.>. POMDPs offer a fairly expressive structure to model discrete state-space environments where parameters can be expressed as tractable categorical distributions. The POMDP-based generative model can be formally defined as a tuple of finite sets (S, O, U, 𝔸, 𝔹, 𝔻, 𝔼):
∘ s ∈ S: S is a set of hidden states (s) causing observations o.
∘ o ∈ O: O is a set of observations, where o = s in the fully observable setting and o = f(s) in a partially observable setting.
∘ u ∈ U: U is a set of actions (u), e.g., U = {Left, Right, Up, Down}.
∘𝔹: encodes the one-step transition dynamics, P(s_t| s_t-1, u_t-1) i.e., the probability that when action u_t-1 is taken while being in state s_t-1 (at time t-1) results in s_t at time t.
∘ 𝔸: encodes the likelihood mapping, P(o_τ| s_τ) for the partially observable setting.
∘ 𝔻: Encodes the prior of the agent about the hidden state factor s.
∘ 𝔼: Encodes the prior of the agent about actions u.
In a POMDP, the hidden states (s) generate observations (o) through the likelihood mapping (𝔸) in the form of a categorical distribution, P(o_τ| s_τ) = Cat(𝔸× s_τ).
𝔹 is a collection of square matrices 𝔹_u, where 𝔹_u represents transition dynamics P(s_t| s_t-1, u_t-1 = u): The transition matrix (𝔹) determines the dynamics of s given the agent's action u as P(s_t| s_t-1, u_t-1) = Cat(𝔹_u_t-1× s_t-1). In [𝔸× s_τ] and [ 𝔹_u_τ× s_τ], s_τ is represented as a one-hot vector that is multiplied through regular matrix multiplication [One-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0). Here, the bit (1) is allocated to the state s= s_τ ]. The Markovianity of POMDPs means that state transitions are independent of history (i.e. state s_t only depends upon the state-action pair (s_t-1, u_t-1) and not s_t-2, u_t-2 etc.).
In summary, the generative model can be summarised as follows,
P(o_1:t,s_1:t,u_1:t) = P(𝔸) P(𝔹) P(𝔻) P(𝔼) ∏_τ=1^t P(o_τ| s_τ, 𝔸) ∏_τ=2^t P(s_τ| s_τ-1, u_τ-1,𝔹).
So, from the agent's perspective, when encountering a stream of observations in time, such as (o_1, o_2, o_3, ..., o_t), as a consequence of performing a stream of actions (u_1, u_2, u_3, ..., u_t-1), the generative model quantitatively couples and quantifies the causal relationship from action to observation through some assumed hidden states of the environment. These are called `hidden' states because, in POMDPs, the agent cannot observe them directly. Based on this representation, an agent can now attempt to optimise its actions to keep receiving preferred observations.
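As a concrete (toy) illustration, such a generative model can be encoded as a set of NumPy arrays. The dimensions and the randomly drawn parameters below are arbitrary; they only fix shapes and conventions for the sketches that follow.

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, num_obs, num_actions = 4, 3, 2

# Likelihood A: column s holds the categorical distribution P(o | s).
A = rng.dirichlet(np.ones(num_obs), size=num_states).T            # (num_obs, num_states)

# Transitions B[u]: column s holds P(s_t | s_{t-1} = s, u_{t-1} = u).
B = np.stack([rng.dirichlet(np.ones(num_states), size=num_states).T
              for _ in range(num_actions)])                        # (num_actions, num_states, num_states)

# Priors over the initial hidden state (D) and over actions (E).
D = np.ones(num_states) / num_states
E = np.ones(num_actions) / num_actions

def rollout(T=5):
    """Sample a trajectory (o_1, u_1), ..., (o_T, u_T) from the generative model."""
    s = rng.choice(num_states, p=D)
    trajectory = []
    for _ in range(T):
        o = rng.choice(num_obs, p=A[:, s])        # observation from the likelihood
        u = rng.choice(num_actions, p=E)          # action from the prior E
        trajectory.append((o, u))
        s = rng.choice(num_states, p=B[u][:, s])  # next hidden state from B
    return trajectory

print(rollout())
```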
Currently, the generative model has no concept of `preference' and `goal' <cit.>. Rather than attempting to maximise cumulative reward from the environment, active inference agents minimise the `surprise' of encountered observations <cit.>. We look at this idea closely in the next section.
§.§ Surprise and free energy
The surprise of a given observation in active inference <cit.> is defined through the relation
S(o) = -log(P(o)).
Please note that the agent does not have access to the true probability of an observation: P_true(o). However, the internal generative model expects an observation with a certain probability P(o), which quantifies surprise in Eq.<ref>. Minimising surprise directly requires the marginalisation of the generative model, i.e., P(o) = ∑_s P(o,s), which is often computationally intractable due to the large size of the state-space <cit.>. Since f(x) = log(x) is a convex function, we can solve this problem by defining an upper-bound to surprise using Jensen's inequality [Jensen's inequality: If X is a random variable and ψ is a convex function, ψ( 𝔼[X]) ≤𝔼[ ψ(X) ].]:
S(o) = -log∑_s P(o,s) ≤ -∑_s Q(s)logP(o,s)/Q(s) = F[Q].
The newly introduced term Q(s) is often interpreted as an (approximate posterior) belief about the (hidden) state: s. This upper bound (F) is called the variational free energy (VFE) (it is also commonly known as evidence lower bound – ELBO <cit.> [The connection between the two lies in the fact that they are essentially equivalent up to a constant (the log evidence), but with opposite signs. In other words, minimizing VFE is equivalent to maximizing the ELBO. Formally this is:
VFE = - ELBO + constant]). So, by optimising the belief Q(s) to minimise the variational free energy (F), an agent is capable of minimising the surprise S(o)=-log(P(o)) or at least maintain it bounded at low values.
How is this formulation useful for stochastic control? Imagine the agent embodies a biased generative model with `goal-directed' expectations for observations. The goal then becomes to minimise F, which can be done through the conjunction of perception, i.e., optimising the belief Q(s), or action, i.e., controlling the environment to sample observations that lead to a lower F <cit.>. So, instead of passively inferring what caused observations, the agent starts to `actively' infer, exerting control over the environment using available actions in U. The central advantage of this formalism is that there is now only one single cost function (F) to optimise all aspects of behaviour, such as perception, learning, planning, and decision-making (or action selection). There are related works in the reinforcement literature noting the use of similar information-theoretic metrics for control <cit.>. The following section discusses this feature in detail and further develops the active inference framework.
§ PERCEPTION AND LEARNING
§.§ Perception
From the agent's perspective, perception means (Bayes optimally) maintaining a belief about hidden states s causing the observations o. In active inference, agents optimise the beliefs Q(s) to minimise F. The VFE may be rewritten (from Eq.<ref>), using the identity P(o,s) = P(s)P(o | s), as:
F = ∑_s Q(s)[ log Q(s)-log P(o| s) - logP(s) ].
Differentiating F w.r.t Q(s) and setting the derivative to zero, we get (see Supplementary <ref>),
δ F/δ Q(s) = 1 + logQ(s) - logP(o| s) - logP(s) = 0.
Using the above equation, we can evaluate the optimal Q(s) that minimises[The second derivate of Eq.<ref> w.r.t to Q(s) is greater than zero which corresponds to local minima of F w.r.t to Q(s).] F using,
logQ^*(s) = logP(s) +logP(o| s).
This equation provides the (Bayesian) belief propagation scheme, given by
Q(s_t+1)_Posterior = σ(logP(s_t+1)_Prior + log(o_t+1·𝔸 s_t+1)_Likelihood).
Here, σ is the softmax function; that is, the exponential of the input that is then normalised so that the output is a probability distribution. Given a real-valued vector V in ℝ^K, the i-th element of σ(V) reads:
σ(V)^i = expV^i/∑_j=1^K expV^j,
where V^i corresponds to the i-th element of V.
We estimate the first term of Eq.<ref>, i.e. the prior using belief Q(s_t) at time t, and the action u_t taken at time t. Using the transition dynamics operator 𝔹_u_t, we write:
P(s_t+1)=𝔹_u_t· Q(s_t).
At the first time step, i.e. t=0, we use a known prior about the hidden state 𝔻 to substitute for the term P(s_t+1).
Similarly, the second term in Eq.<ref>, i.e., the estimate of the hidden state from the observation we gathered from the environment at time t+1 can be evaluated as the dot product between the likelihood function 𝔸 and the observation gathered at time t+1. The belief propagation scheme here is shown in the literature to have a degree of biological plausibility in the sense that it can be implemented by a local neuronal message-passing scheme <cit.>. The following section discusses the learning of the model parameters.
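Before turning to learning, this belief-propagation update can be sketched in a few lines, continuing the NumPy conventions of the earlier encoding (the variable names are illustrative):

```python
import numpy as np

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

def belief_update(A, B_u, q_prev, obs_idx, eps=1e-16):
    """One belief-propagation step: combine the prior predicted from the previous
    belief and the chosen action with the likelihood of the new observation."""
    prior = B_u @ q_prev            # P(s_{t+1}) = B_u . Q(s_t)
    likelihood = A[obs_idx, :]      # o_{t+1} . A s_{t+1}, as a function of s_{t+1}
    return softmax(np.log(prior + eps) + np.log(likelihood + eps))

# Example: start from the prior D, take action 0, then observe o = 1.
# q_s = belief_update(A, B[0], D, obs_idx=1)
```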
§.§ Learning
The parameter learning rules of our generative model are defined in terms of the optimised belief about states Q(s).
In our architecture, the agent uses belief propagation [We stick to the belief propagation scheme for perception in this paper. However, general schemes like variational message passing may be used to estimate Q(s).] to best estimate Q(s), the belief about (hidden) states in the environment. Given these beliefs, observations sampled, and actions undertaken by the agent, the agent hopes to learn the underlying contingencies of the environment. The learning rules of active inference consist of inferring parameters of 𝔸, 𝔹, and 𝔻 at a slower time scale. We discuss such learning rules in detail in the following.
§.§.§ Transition dynamics
Agents learn the transition dynamics, 𝔹, across time by maintaining a concentration parameter b_u, using conjugate update rules well documented in the active inference literature<cit.> such as:
b_u ← b_u + Q(u_t-1) ·(Q(s_t) ⊗ Q(s_t - 1) ),
where Q(u) is the probability of taking action u, Q(s_t) is the belief about the state at time t as a consequence of the action u taken at t-1, and Q(s_t) ⊗ Q(s_t-1) is the square matrix given by the Kronecker (outer) product of the two vectors Q(s_t) and Q(s_t-1).
Every column of the transition dynamics 𝔹_u, can be estimated from b_u column-wise as,
col(𝔹_u)_i = Dir[ col(b_u)_i].
Here, col(X)_i is the i-th column of X. Dir(b_u) represents the mean of the Dirichlet distribution [Dirichlet distributions are commonly used as prior distributions in Bayesian statistics given that the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution. We use the mean of the Dirichlet distribution here because we are performing a Bayesian update. Briefly, the mean of a Dirichlet distribution with parameters b = (b_1, ..., b_K) is given by μ = (μ_1, ..., μ_K) where μ_k = b_k/∑_j=1^K b_j. So, in this case, each entry in the estimated transition probabilities is the corresponding entry in b_u, divided by the sum of all entries in the corresponding column of b_u. This normalization ensures that the columns of 𝔹_u sum to 1.] with parameter b_u.
§.§.§ Likelihood
Similar to the conjugacy update in Eq.<ref>, the Dirichlet parameter (a) for the likelihood dynamics (𝔸) is learned over time within trials using the update rule,
a ← a + o_t⊗ Q(s_t).
Here, o_t is the observation gathered from environment at time t, and Q(s_t)≈ P(s_t| o_1:t) is the approximate posterior belief about the hidden-state (s) <cit.>.
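Both conjugate updates can be sketched compactly, again following the NumPy conventions above, with b of shape (num_actions, num_states, num_states) and a of shape (num_obs, num_states):

```python
import numpy as np

def update_transition_counts(b, q_u_prev, q_s_t, q_s_prev):
    """b_u <- b_u + Q(u_{t-1}) * (Q(s_t) outer Q(s_{t-1})), for every action u."""
    for u in range(b.shape[0]):
        b[u] += q_u_prev[u] * np.outer(q_s_t, q_s_prev)
    return b

def update_likelihood_counts(a, obs_onehot, q_s_t):
    """a <- a + o_t outer Q(s_t)."""
    return a + np.outer(obs_onehot, q_s_t)

def dirichlet_mean(counts):
    """Column-wise mean of the Dirichlet: normalise each column of the count matrix,
    giving the point estimate of A (from a) or B_u (from b[u])."""
    return counts / counts.sum(axis=0, keepdims=True)
```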
Like perception and learning, decision-making and planning can also be formulated around the cost function F and belief Q. In the next section, we review in detail existing ideas <cit.> for planning and decision-making. We then identify their limitations and, next, propose an improved architecture.
§ PLANNING AND DECISION MAKING
§.§ Classical formulation of active inference
Traditionally, planning and decision-making by active inference agents revolve around the goal of minimising the variational free energy of the observations one expects in the future. To implement this, we define a policy space comprising sequences of actions in time. The policy space in classical active inference <cit.> is defined as a collection of policies π_n
Π = {π_1, π_2, ..., π_N},
which are themselves sequences of actions indexed in time; that is, π = (u_1, u_2, ..., u_T), where u_t is one of the available actions in U, and T is the agent's planning horizon. N is the total number of unique policies, i.e. the number of sequences of available actions u over the planning horizon T.
To enable goal-directed behaviour, we need a way to quantify the agent's preference for sample observations o. The prior preference for observations is usually defined as a categorical distribution over observations,
ℂ = Cat(o).
So, if the value corresponding to an observation in ℂ is the highest, it is the most preferred observation for the agent.
Given these two additional parameters (Π and ℂ), we can define a new quantity called the expected free energy (EFE) of a policy π similar to the definition in <cit.> as,
G(π) = ∑_t=1^T D_KL[ Q(o_t|π^t) || ℂ]_Risk + 𝔼_Q(s_t| s_t-1, π^t-1)[ ℍ[ P(o_t| s_t)] ]_Expected ambiguity.
In Eq.(<ref>) above, π^t is the t-th element in π, i.e. the action corresponding to time t for policy π. The term, Q(o_t|π^t) represents the most likely observation caused by the policy π at time t. D_KL stands for the KL-divergence, which, when minimised, forces the distribution Q(o_t|π^t) closer towards ℂ. This term is also called the "Risk" term, representing the goal-directed behaviour of the agent. The KL-divergence between two distributions, P and Q, is defined as:
D_KL(P || Q) = ∑_i P(i) logP(i)/Q(i),
and P=Q if and only if D_KL(P || Q) = 0.
In the second term of Eq.<ref>, ℍ[ P(o_t| s_t) ] stands for the (Shannon) entropy of P(o_t| s_t) defined as,
ℍ(P(o)) = -∑_o ϵ O P(o)logP(o).
The second term is also called the `Expected ambiguity' term. When the expected entropy of P(o_t| s_t) w.r.t the belief Q(s_t| s_t-1, π^t-1) is less, the agent is more confident of the state-observation mapping (i.e., 𝔸) in its generative model.
Hence, by choosing the policy π that minimises G, the agent minimises `Risk' and, at the same time, its `Ambiguity' about the state-observation mapping. Decision-making in active inference therefore naturally balances the exploration-exploitation dilemma <cit.>. We also note that the agent does not optimise G itself but only evaluates and compares G across the policies π in the policy space Π. Once the best policy π is identified, the simplest decision rule is to choose the action u_t = π^t at time t, where π^t is the t-th element of π.
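To make the classical scheme concrete, a brute-force sketch that scores every action sequence of a given length is shown below. It reuses the arrays of the earlier NumPy encoding; C is a categorical preference over observations, and the expected-ambiguity term is included via the entropy of the columns of A.

```python
import itertools
import numpy as np

def kl(p, q, eps=1e-16):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def classical_efe(A, B, D, C, horizon, eps=1e-16):
    """Risk + expected ambiguity, accumulated over time, for every policy
    (sequence of actions) of length `horizon`."""
    num_actions = B.shape[0]
    entropy_per_state = -np.sum(A * np.log(A + eps), axis=0)   # H[P(o | s)] for each s
    G = {}
    for policy in itertools.product(range(num_actions), repeat=horizon):
        q_s, g = D.copy(), 0.0
        for u in policy:
            q_s = B[u] @ q_s                  # predicted state belief under the policy
            q_o = A @ q_s                     # predicted observation belief
            g += kl(q_o, C) + entropy_per_state @ q_s
        G[policy] = g
    return G

# Example usage:
# G = classical_efe(A, B, D, C, horizon=3)
# best_policy = min(G, key=G.get)
```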
It may already be evident that the above formulation has one fundamental problem: in the stochastic control problems commonly encountered in practice, the size of the action space U and the planning horizon T make the policy space computationally intractable. For example, with eight available actions in U and a planning horizon of T = 15, the total number of definable policies is 8^15 ≈ 3.5×10^13 (35 trillion). Even for this relatively small-scale example, the policy space is too large to simulate agent behaviour, unless additional decision-tree search methods <cit.>, policy amortisation <cit.>, or the elimination of implausible policy trajectories using Occam's principle are considered. We now turn to an improved scheme that redefines policy space and planning altogether.
§.§ Sophisticated inference
Graduating from the classical definition of policy as a sequence of actions in time, sophisticated inference <cit.> attempts to evaluate the EFE of observation-action pairs at a given time t, G(o_t, u_t). Given this joint distribution, an agent can sample actions using the conditional distribution Q(u_t | o_t) when observing o_t at time t,
u_t∼ Q(u_t | o_t) = σ[ -G(o_t, u_t)].
Given the prior-preference distribution of an agent P(s), in terms of hidden states s, the expected free energy of an observation-action pair is defined as <cit.>,
G(o_t, u_t) = 𝔼_P(o_t+1| s_t+1) Q(s_t+1| u_<t+1)[ log Q(s_t+1| u_<t+1) - logP(s_t+1)_Risk - log P(o_t+1| s_t+1)_Ambiguity]_EFE of action at time t +
𝔼_Q(u_t+1| o_t+1)Q(o_t+1| u_≤ t)[ G(o_t+1, u_t+1) ]_EFE of future actions.
We rewrite this equation in a familiar fashion to Eq.<ref>. In the above equation, the agent holds evaluated beliefs about future hidden states given all past actions in the term Q(s_t+1| u_<t+1). Beliefs about hidden states can be extrapolated to observations using the likelihood mapping (𝔸) as
P(o_t+1| s_t+1) Q(s_t+1| u_<t+1) = Q(o_t+1| u_<t+1).
Also, the prior preference of the agent is defined in terms of hidden states s in Eq.<ref>. Now the Eq.<ref> can be rewritten using mappings like Eq.<ref> as,
G(o_t, u_t) = 𝔼_Q(o_t+1| u_<t+1)[ log Q(o_t+1| u_<t+1) - logℂ_Risk - log P(o_t+1| s_t+1)_Ambiguity]_EFE of action at time t +
𝔼_Q(u_t+1| o_t+1)Q(o_t+1| u_≤ t)[ G(o_t+1, u_t+1) ]_EFE of future actions.
Note that the prior preference distribution in the equation above is over observations o, ℂ=P(o).
Rewriting Eq.<ref> in a similar fashion to the previously discussed classical active inference we obtain
G(o_t, u_t) = D_KL[ Q(o_t+1| u_<t+1) || ℂ] _Risk + 𝔼_Q(s_t+1| u_<t+1)ℍ[ P(o_t+1| s_t+1) ]_Expected ambiguity_EFE of action at time t +
𝔼_Q(u_t+1| o_t+1)Q(o_t+1| u ≤ t)[ G(o_t+1, u_t+1) ]_EFE of future actions.
The first two terms can be interpreted the same way as we did for Eq.<ref> in the previous section. However, the third term in Eq.<ref> gives rise to a recursive tree-search algorithm, accumulating free energies of the future (as deep as we evaluate forward in time). Such an evaluation is pictorially represented in Fig.<ref> (A).
While Bellman optimal <cit.>, the sophisticated inference planning algorithm has an unavoidable limitation: it faces an even worse curse of dimensionality, already for relatively small planning horizons. For example, evaluating the goodness of an action over a period of fifteen time steps into the future, with eight available actions and a hundred hidden states, requires an exorbitant (100×8)^15 ≈ 3.5×10^43 calculations, compared with 100×8^15 ≈ 3.5×10^15 for classical active inference. A simple solution proposed in <cit.> is to eliminate tree-search branches by setting a threshold value on predictive probabilities such as Q(u_t+1| o_t+1) in Eq.<ref>.
So, for example, when Q(u_t+1| o_t+1) < 1/16 during planning, the algorithm terminates the search over future branches. This restriction significantly reduces the computational time, and a set of ensuing (meaningful) simulations was presented in <cit.>.
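A simplified sketch of this recursion is given below. For readability, the pruning threshold is applied to the predicted observation probability rather than to Q(u_t+1| o_t+1) as in <cit.>, so this is an approximation of the published scheme rather than a faithful reimplementation.

```python
import numpy as np

def si_efe(u, q_s, A, B, C, depth, eps=1e-16, prune=1.0 / 16):
    """Forward-recursive EFE of taking action u under the current state belief q_s,
    accumulating the EFE of future observation-action branches up to `depth` steps."""
    q_s_next = B[u] @ q_s                          # Q(s_{t+1} | u_{<t+1})
    q_o_next = A @ q_s_next                        # Q(o_{t+1} | u_{<t+1})
    risk = np.sum(q_o_next * (np.log(q_o_next + eps) - np.log(C + eps)))
    ambiguity = -np.sum(np.sum(A * np.log(A + eps), axis=0) * q_s_next)
    g = risk + ambiguity
    if depth <= 1:
        return g
    for o_next in range(A.shape[0]):
        if q_o_next[o_next] < prune:               # prune unlikely branches
            continue
        q_post = A[o_next, :] * q_s_next           # belief after hypothetically seeing o_next
        q_post = q_post / (q_post.sum() + eps)
        g_next = np.array([si_efe(u2, q_post, A, B, C, depth - 1)
                           for u2 in range(B.shape[0])])
        q_u_next = np.exp(-(g_next - g_next.min()))
        q_u_next /= q_u_next.sum()
        g += q_o_next[o_next] * np.dot(q_u_next, g_next)
    return g
```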
Another limitation is that, to facilitate desirable behaviour in classical or sophisticated inference agents, a prior preference needs to be defined by the modeller or learned by the agent <cit.>, informing the agent that some states are preferable to others, as demonstrated in Fig.<ref> (B) for the grid problem given in Fig.<ref> (A). An informed prior preference enables the agent to solve this navigation task while planning only four or more time steps ahead: it can act and move towards a `more preferred state', if not the final goal state. Without such information, however, the agent is `blind' (cf. Fig.<ref> (C)) and can only find the optimal move when planning the whole eight-step trajectory for the given grid.
We first noticed this limitation when comparing different active inference schemes to various well-known reinforcement learning algorithms in <cit.> in a fully observable setting (i.e., MDPs). In the next section, we demonstrate how to scale the sophisticated inference scheme using dynamic programming for the general case of POMDP-based generative models.
§ DYNAMIC PROGRAMMING FOR EVALUATING EXPECTED FREE ENERGY
The Bellman optimality principle states that any sub-policy of an optimal policy (for a given problem) must itself be an optimal policy for the corresponding sub-problem <cit.>. Dynamic programming is an optimisation technique that naturally follows the Bellman optimality principle: rather than attempting to solve a problem as a whole, dynamic programming solves sub-parts of the problem and integrates the sub-solutions into a solution to the original problem. This approach makes dynamic programming scale favourably, as we solve one sub-problem at a time and integrate later. The more we break the large problem down into sub-problems, the more computationally tractable the solution becomes.
Inspired by this principle, let us consider a spatial navigation problem that the agent needs to solve in our setting. The optimal solution to this navigation problem is a sequence of individual steps. Our prior preference specifies the `goal state' at the end of the planning horizon. So, the agent may start planning from the last time step (a one-step sub-problem) and work backwards to solve the problem. This approach is also called planning by backward induction <cit.>.
So, for a planning horizon of T (i.e., the agent aims to reach goal state at time T), the EFE of the (last) action for the T-1th time step in a POMDP setting is written as:
G(u_T-1, o_T-1) = D_KL [Q(o_T | u_T-1,s_T-1) || ℂ].
The term G(u_T-1| o_T-1) is the expected free energy associated with any action u_T-1, given the belief about the (hidden) state s_T-1. This estimate measures how much we believe observations at time T will align with our prior preference ℂ.
Note that, for simplicity, we ignored the `expected ambiguity' term in the equation above, i.e. the uncertainty of the state-observation mapping (or likelihood), cf. Eq.<ref>. This does not affect our subsequent derivations; we can always add it as an additional term. The following derivation provides the technical details of dynamic programming while focusing only on the `risk' term in G.
To estimate Q(o_T| u_T-1,s_T-1), we make use of the prediction about states Q(s_T) that can occur at time T:
Q(s_T | u_T-1 ,s_T-1)=𝔹_u_T-1· Q(s_T-1),
and given the prediction Q(s_T), we write
Q(o_T | u_T-1,s_T-1)=𝔸· Q(s_T | u_T-1 ,s_T-1)
= 𝔸·( 𝔹_u_T-1· Q(s_T-1) ).
Next, using Eq.<ref>, the corresponding action distribution (for action selection) is calculated at time T,
Q(u_T-1| o_T-1)=σ( - G(u_T-1| o_T-1) ),
where we recursively calculate the expected free energy for actions and the corresponding action-distributions for time steps T-2, T-3, ..., t=1 backwards in time,[For times other than T-1, the first term in Eq.<ref> does not contribute to solving the particular instance if ℂ only accommodates preference to a (time-independent) goal-state. However, for a temporally informed ℂ, i.e. with a separate preference for reward maximisation at each time step, this term will meaningfully influence action selection.]
G(u_t| o_t)= D_KL [Q(o_t+1 | u_t,s_t) || ℂ]_EFE of action at time t + 𝔼_Q(o_t+1,u_t+1| o_t,u_t)[ G(u_t+1| o_t+1)]_EFE of next action at t+1.
In the equation above, the second term condenses information about all future observations rather than doing a forward tree search in time.
To inform G(u_t| o_t), we consider all possible observation-action pairs that can occur in time t+1 and use the previously evaluated G(u_t+1| o_t+1).
In Eq.<ref>, we evaluate Q(o_t+1,u_t+1| o_t,u_t) using,
Q(s_t+1,u_t+1| s_t,u_t)= Q(s_t+1| s_t,u_t)_𝔹·Q(u_t+1| s_t+1)_Action distribution.
We then map the distribution Q(s_t+1,u_t+1| s_t,u_t) to the observation space and evaluate Q(o_t+1,u_t+1| o_t,u_t) using the likelihood mapping 𝔸.
In Eq.<ref>, we assume that actions at different times are independent of each other, i.e. u_t is independent of u_t+1. Even though actions are assumed to be explicitly independent in time, information about their desirability is still propagated backwards in time through the recursive evaluation of the expected free energy.
While evaluating the EFE, G, backwards in time, we used the action distribution in Eq.<ref>. This action distribution can be directly used for action selection.
Given an observation o at time t, u_t may be sampled [Precision of action selection may be controlled by introducing a positive constant inside the softmax function σ(.) in Eq.<ref>. The higher the constant, the higher the chance of selecting the action with the lowest EFE.] from,
u_t∼ Q(u_t| o_t = o).
In the next section, we summarise the above formulation as a novel active inference algorithm useful for modelling intelligent behaviour in sequential POMDP settings.
§.§ Algorithmic formulation of DPEFE
Here, we formalise a generic algorithm that can be employed for a sequential POMDP problem. The main algorithm (see Alg.<ref>) works sequentially in time and brings together three different aspects of the agent's behaviour, namely, perception (inference), planning, and learning.
For planning, that is, to evaluate the expected free energy (G) of actions (given states) in time, we employ the planning algorithm (see Alg.<ref>) as a subroutine of Alg.<ref>. In the most general case, the algorithm is initialised with `flat' priors for the likelihood function (𝔸) and the transition dynamics (𝔹); the algorithm also allows us to equip the agent with more informed priors about 𝔸 and 𝔹. In the DPEFE algorithm, learning ℂ amounts to setting ℂ to a one-hot vector over the encountered goal state. This accelerates parameter learning during trials and improves agent performance. We can also make the `true' dynamics of the environment available to the agent whenever they are known; with the `true' dynamics at its disposal, the agent can infer hidden states and plan accurately.
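The backward planning subroutine can be sketched as follows, continuing the NumPy conventions above. For brevity the sketch indexes G by hidden state rather than observation, as in the MDP form of the EFE, and it omits the expected-ambiguity term; the observation-indexed POMDP version is obtained by additionally averaging over the posterior belief Q(s_t | o_t), as in the derivation.

```python
import numpy as np

def softmax(v, axis=0):
    v = v - v.max(axis=axis, keepdims=True)
    e = np.exp(v)
    return e / e.sum(axis=axis, keepdims=True)

def dpefe_plan(A, B, C, T, eps=1e-16):
    """Backward (dynamic-programming) evaluation of the risk term of the EFE.
    Returns G[t, u, s] and the action distributions Q_u[t, u, s] = Q(u | s)
    for planning steps t = 0, ..., T-2."""
    num_actions, num_states = B.shape[0], B.shape[1]
    G = np.zeros((T - 1, num_actions, num_states))
    Q_u = np.full((T - 1, num_actions, num_states), 1.0 / num_actions)
    for t in reversed(range(T - 1)):
        for u in range(num_actions):
            q_s_next = B[u]                 # columns: Q(s_{t+1} | s_t = s, u)
            q_o_next = A @ q_s_next         # columns: Q(o_{t+1} | s_t = s, u)
            risk = np.sum(q_o_next * (np.log(q_o_next + eps) - np.log(C + eps)[:, None]),
                          axis=0)
            G[t, u] = risk
            if t < T - 2:
                # E over Q(s_{t+1} | s_t, u) and Q(u_{t+1} | s_{t+1}) of the future EFE
                expected_future = np.sum(Q_u[t + 1] * G[t + 1], axis=0)
                G[t, u] += q_s_next.T @ expected_future
        Q_u[t] = softmax(-G[t], axis=0)
    return G, Q_u

def act(Q_u, t, q_s):
    """Sample an action at planning step t from the belief-weighted action distribution."""
    p_u = Q_u[t] @ q_s
    p_u = p_u / p_u.sum()
    return np.random.default_rng().choice(len(p_u), p=p_u)
```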
The next section discusses a different approach to ameliorating the curse of dimensionality in sophisticated inference. Later, we discuss a potential learning rule for the prior preference distribution ℂ inspired by seminal work in the control-theoretic literature.
§ LEARNING PRIOR PREFERENCES
In the previous section, we introduced a practical algorithm solution that speeds up planning in sophisticated inference. The second innovation on offer is to enable learning of preferences ℂ such that smaller planning horizons become sufficient for our agent to take optimal actions, as discussed in Fig.<ref>.
A seminal work from the control theory literature proposes using a `desirability' function, scoring how desirable each state is, to compute optimal actions for a particular class of MDPs and, importantly, shows that the planning complexity of computing those actions is linear in time <cit.>. When the underlying MDP model of the environment is unavailable and the agent needs to act based solely on a stream of samples of states and rewards (i.e., s_t, r_t, s_t+1), an online algorithm called Z-learning, inspired by the theoretical developments in <cit.>, was proposed to solve this problem.
Given an optimal desirability function z(s), the optimal control, or policy, is analytically computable. The calculation of z(s) does not rely on knowledge of the underlying MDP but instead, on the following online learning rule:
ẑ(s_t) ← (1 - η_t)ẑ(s_t) + η_t exp(r_t) ẑ(s_t+1),
where η_t is a learning rate that is updated over time (see below). These two terms form a weighted average that updates the estimate of ẑ(s_t), with η_t controlling the balance between the old estimate and the new information.
Inspired by these developments, we write a learning rule for updating ℂ which can be useful for the sophisticated inference agent. Given the samples (o_t, r_t, o_t+1), an agent may learn the parameter c online using a rule analogous to Eq.<ref>,
c(o_t) ← (1 - η_t) c(o_t) + η_t exp(r_t) c(o_t+1).
In the above equation, c(o_t) represents the desirability of an observation o at time t. The value of c(o_t) is updated depending on the reward received and the desirability of the observation received at the next time step c(o_t+1).
The learning rate η is a time-dependent parameter in Z-learning, as given in the equation below. e is a hyperparameter we optimise that influences how fast/slow η gets updated over time <cit.>:
η_t = e/e + t.
If η_t is high, the algorithm puts more weight on the new information. If η_t is low, the algorithm puts more weight on the current estimate. Using the update rule in Eq.<ref> with the learning rate evolving as in (<ref>), the value of c evolves over time and may be used to update ℂ online, ensuring that ℂ is a categorical distribution over observations using the softmax function:
ℂ = σ(c).
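A minimal implementation of this preference learner might look as follows; the default value of e is chosen in line with the values used in our simulations (e > 10000), but is otherwise an assumption.

```python
import numpy as np

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

class PreferenceLearner:
    """Online update of the preference vector c from (o_t, r_t, o_{t+1}) samples,
    with the decaying learning rate eta_t = e / (e + t)."""
    def __init__(self, num_obs, e=10000.0):
        self.c = np.ones(num_obs)   # desirability of each observation
        self.e = e                  # hyperparameter controlling the decay of eta
        self.t = 0

    def update(self, o_t, r_t, o_next):
        self.t += 1
        eta = self.e / (self.e + self.t)
        self.c[o_t] = (1 - eta) * self.c[o_t] + eta * np.exp(r_t) * self.c[o_next]

    def C(self):
        """Categorical prior preference over observations, C = softmax(c)."""
        return softmax(self.c)
```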
We use the standard grid world environments shown in Fig.<ref> to evaluate the performance of the various agents (more details in the next sections). Fig.<ref> visualises the learned prior preference (for the grid shown in Fig.<ref> (A)) that is useful for the sophisticated inference agent. With an informed prior preference like this, the agent needs to plan only one time step ahead to navigate the grid successfully. Note that in the DPEFE setting, the prior preference ℂ is either fixed before a trial or learned as a one-hot vector when the goal is encountered; we do not learn an informed prior preference for the DPEFE agent in the simulations presented in this paper. The method for learning the prior preference discussed in this section applies to any agent, but DPEFE does not use it, so as to demonstrate its ability to plan deeper. When an active inference agent is aided with the learning rule for ℂ, a planning horizon of T = 1 suffices to take desirable actions, with no deep tree search as in SI and no policy space (Π) as in CAIF. When only the next time step is considered (i.e., only the consequences of the immediately available actions), planning in all active inference agents (CAIF, SI, and DPEFE) is algorithmically equivalent. In the rest of the paper, we therefore call the agent with planning horizon T = 1, aided with the learning rule for ℂ, the AIF (T = 1) agent. In our simulations, we compare these two approaches: deep planning with a sparse ℂ and short-horizon planning with a learned ℂ.
An animation visualising the learning of the prior preference distribution for the grid in Fig.<ref> over 50 episodes can be found at https://github.com/aswinpaul/dpefe_2023. In the following section, we discuss and compare the computational complexity of planning between existing and newly introduced schemes.
§ COMPUTATIONAL COMPLEXITY
In this section, we compare the computational complexity in evaluating the expected free energy term, used for planning and decision making, with two other active inference approaches: classical active inference <cit.>,<cit.>, and sophisticated inference <cit.>.
In classical active inference (<cit.>, <cit.>), the expected free energy for an MDP (i.e., a fully observable case) is given by,
G(π| s_t-1) = D_KL[Q(s_t|π) || P(s_t)].
Here, P(s_t) represents an agent's prior preference and is equivalent to ℂ in an MDP setting. In this paper, ℂ is directly defined in terms of the hidden states. To avoid confusion, we always use the notation ℂ in this paper regarding the observations o.
Similarly, for sophisticated inference <cit.>, we have,
G(u_t) = D_KL[Q(s_t+1| u_< t+1) || P(s_t +1)] + 𝔼_Q(u_t+1)[G(u_t+1)].
In the above equation, we restrict the recursive evaluation of the second term, forward in time, up to a `planning horizon' (T), as mentioned in <cit.>. A horizon T corresponding to `full-depth planning', i.e. planning to the end of the episode, is often required when the prior preference is sparsely defined, since the agent cannot differentiate the desirability of actions unless the tree search reaches the last step of the episode.
In classical active inference, to evaluate Eq.<ref>, the computational complexity is proportional to:
𝒪 [𝐜𝐚𝐫𝐝(S) ×𝐜𝐚𝐫𝐝(U)^T].
For sophisticated inference, to evaluate Eq.<ref>, the complexity scales proportionally to:
𝒪 [(𝐜𝐚𝐫𝐝(S) ×𝐜𝐚𝐫𝐝(U)) ^ T]. The dimensions of the quantities involved are specified in Tab.<ref>. And recall that both Eq.<ref> and Eq.<ref> ignore the `ambiguity' term for simplicity.
For evaluating EFE using dynamic programming, the expected free energy for an MDP can be deduced from Eq.<ref> as,
G(u_t| s_t) = D_KL[Q(s_t+1| s_t, u_t) || P(s_t+1)] + 𝔼_Q(s_t+1| s_t, u_t)[G(u_t+1| s_t+1)].
Since we only evaluate one time step ahead in Eq.<ref>, even when evaluating backwards in time, the complexity scales as: 𝒪 [𝐜𝐚𝐫𝐝(S) ×𝐜𝐚𝐫𝐝(U) × T].
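To make the scaling concrete, the following lines compare the three operation counts for the exemplar dimensions used later in the paper (card(S) = 100, card(U) = 4); these are leading-order counts only, ignoring constant factors.

```python
# Leading-order number of EFE evaluations as a function of the planning horizon T.
S, U = 100, 4
for T in (2, 15, 30):
    classical     = S * U**T       # O(card(S) * card(U)^T)
    sophisticated = (S * U)**T     # O((card(S) * card(U))^T)
    dpefe         = S * U * T      # O(card(S) * card(U) * T)
    print(f"T={T:2d}: classical={classical:.2e}, "
          f"sophisticated={sophisticated:.2e}, DPEFE={dpefe}")
```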
§ SIMULATIONS RESULTS
§.§ Setup
We perform simulations in the standard grid world environment in Fig.<ref> to evaluate the performance of our proposed algorithms. The agent is born in a random start state at the beginning of every episode and can take one of the four available actions (North, South, East, West) at every time step to advance towards the goal state until the episode terminates either by a time-out (10000, 20000, and 40000 steps for the grids in Fig.<ref> respectively) or by reaching the goal state. For completeness, we compare the performance of the following algorithms in the grid world:
* Q-learning: a benchmark model-free RL algorithm <cit.>
* Dyna-Q: a benchmark model-based RL algorithm improving upon Q-learning <cit.>
* DPEFE algorithm with strictly defined (sparse) ℂ (See Sec.<ref>)
* Active inference agent aided with the learning rule for ℂ (see Sec.<ref>) and a planning horizon of T = 1, i.e. with no deep tree search as in SI and no policy space (Π) as in CAIF. As noted earlier, when only the next time step is considered (i.e., only the consequences of the immediately available actions), planning in all active inference agents (CAIF, SI, and DPEFE) is algorithmically equivalent. In the rest of the paper, we refer to this agent as the AIF (T = 1) agent.
We perform simulations in deterministic and stochastic grid variations shown in Fig.<ref>. The deterministic variation is a fully observable grid with no noise. So, an agent fully observes the present state—i.e., an MDP setting. Also, the outcomes of actions are non-probabilistic with no noise—i.e., a deterministic MDP setting. In the stochastic variation, we make the environment more challenging to navigate by adding 25% noise in the transitions and 25% noise in the observed state. In this case, the agent faces uncertainty at every time step about the underlying state (i.e., partially observable) and the next possible state (i.e., stochastic transitions)—i.e., a stochastic POMDP setting.
§.§ Summary of results
The agents' performance in this navigation problem is summarised in Fig.<ref> and Fig.<ref>. Performance is quantified in terms of how quickly an agent learns to solve the grid task, i.e. the total score. The agent receives a reward of ten points when the goal state is reached and a small negative reward for every step taken. The total score hence represents how fast the agent navigated to the goal state for a given episode. The grid has a fixed goal state throughout the episodes in Fig.<ref> (A, B) and Fig.<ref> (A). For simulations in Fig.<ref> (B), the goal state is shifted to another random state every 10 episodes. This setup helps to evaluate the adaptability of agents in the face of changes in the environment. It is clear that during the initial episodes, the agents take longer to reach the goal state but learn to navigate quicker as the episodes unfold.
Standard RL algorithms (i.e., Dyna-Q and Q-Learning) are used here to benchmark the performance of active inference agents, as they are efficient state-of-the-art algorithms to solve this sort of task.
In our simulations, the DPEFE algorithm performs on par with the Dyna-Q algorithm at a planning depth of T = 80 [a planning horizon longer than any optimal path in this grid; since the start state is randomised, optimal paths can have many lengths, and a planning depth of T = 80 ensures that the agent plans far enough ahead in any setting] (see Fig.<ref> (A, B), Fig.<ref> (A)). The DPEFE agent performs even better when the goal state is randomised every 10 episodes (Fig.<ref> (B)). In contrast to online learning algorithms like Dyna-Q, active inference agents can take advantage of the re-definable explicit prior preference distribution ℂ. For the AIF(T = 1) agent, we observe that performance improves over time but does not match the DPEFE agent, because the AIF(T = 1) agent plans only one step ahead by design. We also observe that the Q-Learning agent performs worse than the random agent and recovers more slowly than the AIF(T = 1) agent when faced with uncertainty in the goal state. It is a promising direction to optimise the learning of the prior preference ℂ in the AIF(T = 1) agent, ensuring accuracy in the face of uncertainty. All simulations were performed for 100 trials with different random seeds to ensure reproducibility of the results.
Besides this, we observe a longer time to achieve the goal state for both active inference agents (even longer than the `Random agent') in the initial episodes. This is a characteristic feature of active inference agents, as their exploratory behaviour dominates during the initial trials. The goal-directed behaviour dominates only after the agent sufficiently minimises uncertainty in the model parameters <cit.>.
§.§ Optimising learning rate parameter for AIF (T = 1) agent
The learning rule proposed for sophisticated inference in Eq.<ref> requires a (manually) optimised value of e for every environment that influences the learning rate η. <cit.> inspires this learning rule, where the value of η_t determines how fast the parameter c converges for a given trial. The structure of learned c is crucial for the active inference agent, as ℂ determines how meaningful the planning is for the agent. In Fig.<ref>, we plot the performance of the AIF(T = 1) agent as a function of e for the grids in Fig. <ref>. A promising direction for future research is to improve the learning rule based on η and fine-tune the method for learning ℂ. The observation in Fig.<ref> is that the performance of the AIF(T = 1) agent is not heavily dependent on the value of e. We used different values of e > 10000 in AIF(T = 1) agents in all settings in this paper.
§.§ An emphasis on computational complexity
To understand why the classical active inference (CAIF) and SI methods cannot solve these grid environments with the traditional planning method, we provide an exemplar setting in Tab. <ref>. Consider the small grid as shown in Fig.<ref> with 𝐜𝐚𝐫𝐝(S) = 100, 𝐜𝐚𝐫𝐝(U) = 4, T=30. Tab.<ref> summarises the computational complexity of simulating various active inference agents for this small grid world problem. The computational complexity exceeds practical implementations even with a T = 2 planning horizon. We can observe this visually in Fig.<ref>.
However, we note that the proposed solution of first learning the prior preferences (See Sec.<ref>) using the Z-learning rule enables the active inference (AIF) agent to learn and solve the unknown environment by avoiding the computational complexity of a deep tree search. It should also be noted that neither of the active inference algorithms (DPEFE and AIF (T=1)) was equipped with meaningful priors about the (generative) model parameters (𝔹, ℂ, and 𝔻). Agents start blindly with `uninformed' model priors and evolve by integrating all aspects of behaviour: perception, planning, decision-making, and learning. Yet, the fact that, like Dyna-Q, they start with a model of the world means that they are much less agnostic than the model-free alternative offered by Q-learning. The following section discusses the merits and limitations of the proposed solutions to optimise decision-making in active inference.
§ DISCUSSION
In this work, we explored the usefulness of active inference as an algorithm to model intelligent behaviour and its application to a benchmark control problem, the stochastic grid world task. We identified the limitations of some of the most common formulations of active inference <cit.>, which do not scale well for planning and decision-making tasks in high-dimensional settings. We proposed two computational solutions to optimise planning: harnessing the machinery offered by dynamic programming and the Bellman optimality principle and harnessing the Z-learning algorithm to learn informed preferences.
First, our proposed planning algorithm evaluates expected free energy backwards in time, exploiting Bellman's optimality principle and considering only the immediate future, as in the dynamic programming algorithm. We present an algorithm for general sequential POMDP problems that combines perception, action selection and learning under the single cost function of variational free energy. Additionally, the prior preference, i.e., the goal state of the control task, was strictly defined (i.e., uninformed) and supplied to the agent, in contrast to the well-informed prior preferences used in earlier formulations.
Secondly, we explored the utility of equipping agents with the ability to learn their prior preferences. We observed that learning the prior preference enables the agent to solve the task while avoiding the computationally (often prohibitively) expensive tree search. We used state-of-the-art model-based reinforcement learning algorithms, such as Dyna-Q, to benchmark the performance of active inference agents.
Lastly, there is further potential to optimise computational time by exploiting the approximation parameters involved in planning and decision-making. For example, the softmax functions used during planning and decision-making determine the precision of the output distributions. There is also scope to further optimise the SI agent proposed in this paper by learning the prior preference. The learning rule for the prior preference parameters, based on the Z-learning method, will be optimised and fine-tuned for active inference applications in future work. Since the Z-learning method is fine-tuned for a particular class of MDP problems <cit.>, we leave a detailed comparison of the two approaches to future work. We conclude that the above results advance active inference as a promising suite of methods for modelling intelligent behaviour and for solving stochastic control problems.
§ ACKNOWLEDGMENTS
AP acknowledges research sponsorship from IITB-Monash Research Academy, Mumbai and the Department of Biotechnology, Government of India. AR is funded by the Australian Research Council (Refs: DE170100128 & DP200100757) and Australian National Health and Medical Research Council Investigator Grant (Ref: 1194910). AR is a CIFAR Azrieli Global Scholar in the Brain, Mind & Consciousness Program. AR, NS, and LD are affiliated with The Wellcome Centre for Human Neuroimaging, supported by core funding from Wellcome [203147/Z/16/Z]. NS is funded by the Medical Research Council (MR/S502522/1) and the 2021-2022 Microsoft PhD Fellowship. LD is supported by the Fonds National de la Recherche, Luxembourg (Project code: 13568875). This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1).
§ SOFTWARE NOTE
All the code for agents, optimisation and grid environments used are custom written in Python 3.9.15 and is available in this project repository: <https://github.com/aswinpaul/dpefe_2023>.
§ DERIVATION OF OPTIMAL STATE-BELIEF
We want to differentiate the following w.r.t. Q(s):
F = ∑_s Q(s)[ log Q(s) - log P(o|s) - log P(s) ].
First, note that the derivative of the logarithm function is 1/x. Second, observe that the derivative of Q(s) with respect to Q(s) is 1. With these two pieces in mind, we can differentiate F:
Let's define f(s) = log Q(s) - log P(o|s) - log P(s), so that F = ∑_s Q(s) f(s). Only the term with index s depends on Q(s), so differentiating with respect to a particular Q(s) selects that term; the product rule then gives:
dF/dQ(s) = df/dQ(s)· Q(s) + f(s) ·dQ(s)/dQ(s)
The derivative of f(s) with respect to Q(s) can be computed as:
df/dQ(s) = 1/Q(s) - 0 - 0 = 1/Q(s)
This leads to:
dF/dQ(s) = Q(s) ·(1/Q(s)) + f(s)
= 1 + log Q(s) - log P(o|s) - log P(s)
So, the derivative of F with respect to Q(s) is:
dF/dQ(s) = 1 + log Q(s) - log P(o|s) - log P(s)
The goal is to minimize the free energy F with respect to the distribution Q(s). To find the minimum, we can set the derivative of F with respect to Q(s) to zero. From the previous derivation, we know that:
dF/dQ(s) = 1 + log Q(s) - log P(o|s) - log P(s)
Setting this equal to zero gives:
1 + log Q(s) - log P(o|s) - log P(s) = 0
log Q(s) = log P(o|s) + log P(s) - 1
However, since Q(s) is a probability distribution, it must be normalized so that its values sum to 1; the constant -1 is absorbed into this normalization, so we can safely ignore it:
log Q(s) = log P(o|s) + log P(s)
The optimal distribution Q^*(s) that minimizes the free energy F is thus:
log Q^*(s) = log P(o|s) + log P(s) .
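In other words, Q^*(s) ∝ P(o|s) P(s): the free-energy-minimising belief is the exact Bayesian posterior. A minimal numerical sketch of this update (with made-up probabilities, purely for illustration and not taken from the paper's experiments) is:

import numpy as np

def optimal_state_belief(log_likelihood, log_prior):
    # Return Q*(s) ∝ P(o|s) P(s), computed in log space and normalised so that sum_s Q*(s) = 1.
    log_q = log_likelihood + log_prior   # log Q*(s) up to an additive constant
    log_q -= log_q.max()                 # subtract the maximum for numerical stability
    q = np.exp(log_q)
    return q / q.sum()

P_o_given_s = np.array([0.7, 0.2, 0.1])  # hypothetical likelihood over three states
P_s = np.array([0.3, 0.3, 0.4])          # hypothetical prior over the same states
Q_star = optimal_state_belief(np.log(P_o_given_s), np.log(P_s))
print(Q_star)                            # ≈ [0.677, 0.194, 0.129], the Bayesian posterior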
§ OPTIMISING LEARNING PARAMETER FOR AIF (T=1) AGENT
|
http://arxiv.org/abs/2307.01506v1
|
20230704064002
|
The ideal test for the divergence of a series
|
[
"Rafał Filipów",
"Adam Kwela",
"Jacek Tryba"
] |
math.CA
|
[
"math.CA",
"math.RA",
"40A05, 46B87, 40A35 (Primary) 15A03, 46B45 (Secondary)"
] |
R. Filipów]Rafał Filipów
[R. Filipów]Institute of Mathematics
Faculty of Mathematics, Physics and Informatics
University of Gdańsk
ul. Wita Stwosza 57
80-308 Gdańsk
Poland
[email protected]
http://mat.ug.edu.pl/ rfilipow
A. Kwela]Adam Kwela
[A. Kwela]Institute of Mathematics
Faculty of Mathematics
Physics and Informatics
University of Gdańsk
ul. Wita Stwosza 57
80-308 Gdańsk
Poland
[email protected]
https://mat.ug.edu.pl/ akwela
J. Tryba]Jacek Tryba
[J. Tryba]Institute of Mathematics
Faculty of Mathematics, Physics and Informatics
University of Gdańsk
ul. Wita Stwosza 57
80-308 Gdańsk
Poland
[email protected]
[2020]Primary: 40A05, 46B87, 40A35 Secondary: 15A03, 46B45.
We generalize the classical Olivier's theorem which says that for any convergent series ∑_n a_n with positive nonincreasing real terms the sequence (n a_n) tends to zero. Our results encompass many known generalizations of Olivier's theorem and give some new instances. The generalizations are done in two directions: we either drop the monotonicity assumption completely or we relax it to the monotonicity on a large set of indices. In both cases, the convergence of (na_n) is replaced by ideal convergence.
In the second part of the paper, we examine families of sequences for which the assertions of our generalizations of Olivier's theorem fail. Here, we are interested in finding large linear and algebraic substructures in these families.
The ideal test for the divergence of a series
[
August 1, 2023
=============================================
§ INTRODUCTION
A basic test for divergence of an infinite series says that for any sequence (a_n) of reals we have
| ∑_n=1^∞ a_n| <∞ ⟹ lim_n→∞ a_n=0.
The following theorem says more about the speed of convergence to zero of the terms of a convergent series with positive and nonincreasing terms.
For every nonincreasing sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ lim_n→∞ na_n=0.
A nonempty family ⊆() of subsets of is an ideal on if it is closed under taking subsets and finite unions of its elements, ∉ and contains all finite subsets of . By we denote the ideal of all finite subsets of .
The family
_1/n={A⊆: ∑_n∈ A1/n<∞}
is an ideal on that is called the summable ideal.
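For instance, the set of perfect squares A={n^2: n≥ 1} belongs to _1/n, since ∑_n∈ A1/n=∑_k=1^∞1/k^2<∞, whereas the set of even numbers does not belong to _1/n, since ∑_k=1^∞1/(2k)=∞.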
For an ideal on and a sequence (a_n) of reals, we write
-lim a_n=L
if
{n∈: |a_n-L|≥ε}∈ for every ε>0. Obviously -lim a_n coincides with the ordinary limit of a sequence (a_n). (More information on ideal convergence can be found e.g. in <cit.>.)
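For instance, if A is a set of positive integers and (a_n) is its characteristic sequence (a_n=1 for n∈ A and a_n=0 otherwise), then for every ε∈(0,1) the set {n∈: |a_n-0|≥ε} equals A, so -lim a_n=0 if and only if A∈.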
The following theorem shows that using the ideal convergence, we can drop the monotonicity assumption in Olivier's theorem.
Let be an ideal on . The following conditions are equivalent.
* For every sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ -lim n a_n=0.
* The ideal extends the summable ideal:
_1/n⊆.
For an ideal on , we write ^*={∖ A: A∈} and call it the filter dual to .
For a sequence (a_n) of reals, we write
^*-lim a_n=L
if
there exists F∈^* such that the subsequence (a_n)_n∈ F has the limit L i.e.
∃ F∈^* ∀ε>0 ∃ k ∀ n∈ F (k≤ n ⟹ |a_n - L |<ε).
It is known (<cit.>)
that ^*-lim a_n=L implies -lim a_n=L, whereas the reversed implication holds if and only if is a P-ideal (an ideal is a P-ideal if for each countable family ⊆ there exists B∈^* such that B ∩ A is finite for each A∈).
In Section <ref>, we prove some generalizations (see Theorems <ref> and <ref>) of the above mentioned theorem of Šalát and Toma. Our results encompass other generalizations of Theorem <ref> considered in the literature (see <cit.> and <cit.>). Among others, we show (Corollary <ref>) that -convergence can be replaced by a much stronger condition of ^*-convergence in Theorem <ref>.
The results of Section <ref> utilize summable ideals (defined by Mazur <cit.>) which generalize the summable ideal _1/n and are defined in the following manner:
for every divergent series ∑_n=1^∞ d_n=∞ with nonnegative terms we define the summable ideal generated by the sequence (d_n) by
_(d_n) = {A⊂: ∑_n∈ Ad_n<∞}.
All summable ideals are P-ideals (see e.g. <cit.>).
For an ideal on , we write ^+={ A: A∉} and call it the coideal of .
For a sequence (a_n) of reals we write
^+-lim a_n=L
if
there exists A∈^+ such that the subsequence (a_n)_n∈ A has the limit L i.e.
∃ A∈^+ ∀ε>0 ∃ k ∀ n∈ A (k≤ n ⟹ |a_n - L |<ε).
Note that ^+-limit of a sequence does not have to be unique. This kind of limit was considered in <cit.>, where the authors proved among others that for a large class of ideals (e.g. for the summable ideal _1/n) every bounded sequence of reals has ^+-limit.
Obviously ^*-lim a_n=L implies ^+-lim a_n=L, and the reversed implication holds if and only if is a maximal ideal (an ideal is maximal if ^+=^*). In general, there is no relation between -convergence and ^+-convergence, but
^+-lim a_n=L implies -lim a_n=L if and only if is a maximal ideal, whereas
-lim a_n=L implies ^+-lim a_n=L if and only if is a weak P-ideal (an ideal is a weak P-ideal if for each countable family ⊆ there exists B∈^+ such that B∩ A is finite for each A∈).
In Section <ref>, we prove (see Theorem <ref>) similar results as in Section <ref>, but with the aid of ^+-convergence which is independent of -convergence in general.
We say that a sequence (a_n) of reals is ^*-nonincreasing if there exists F∈^* such that the subsequence (a_n)_n∈ F is nonincreasing i.e.
∃ F∈^* ∀ n,k∈ F (n≤ k ⟹ a_n≥ a_k).
In <cit.>, the authors weakened the monotonicity assumption in Olivier's theorem instead of dropping it entirely. Among others, they posed the following problem.
[Faisant-Grekos-Mišík <cit.>]
Characterize ideals with the property that for every ^*-nonincreasing sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ ^*-lim na_n =0.
One can see that Olivier's theorem (Theorem <ref>) says that the ideal = has this property.
On the other hand, in <cit.>, the authors construct an ideal without this property.
In Section <ref>, we solve the above problem by providing (Theorems <ref>
and <ref>) some characterizations of the above mentioned property and other properties of similar flavour.
In all of the above mentioned results there is an assumption that the considered series has positive terms, but we can weaken this assumption to consider also absolutely convergent series with arbitrary terms (as -lim n|a_n| =0 implies -lim na_n=0, and similarly for other types of convergences).
However, the alternating harmonic series
∑_n=1^∞ (-1)^n/n
shows that both Olivier's theorem and Theorem <ref> fail (as the sequence (-1)^n is not -convergent to zero for any ideal ) if we allow the series to be conditionally convergent in these theorems.
On the other hand, it is known (Kronecker <cit.>, see also <cit.>) that a version of Olivier's theorem for arbitrary series holds if one replaces ordinary convergence by Cesàro convergence (also known as Cesàro summation or the Cesàro mean).
If a series ∑_n=1^∞ a_n is convergent, then the sequence (na_n) is Cesàro convergent to zero i.e.
lim_n→∞(a_1+2a_2+…+na_n)/n=0.
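To illustrate, for the alternating harmonic series mentioned above we have a_n=(-1)^n/n and na_n=(-1)^n, and indeed
lim_n→∞((-1)^1+(-1)^2+…+(-1)^n)/n=0,
even though the sequence ((-1)^n) itself does not converge to zero (and, as noted above, is not ideal convergent to zero for any ideal).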
For more than a decade, research on finding linear subsets of nonlinear sets in vector spaces (the trend nowadays known as lineability or spaceability) has been attracting more and more mathematicians. Below we provide the notions we will use in the last part of our paper (for more on the subject see e.g. the book <cit.> or the survey <cit.>).
Let X be a Banach algebra and κ be a cardinal number.
We say that a subset Y⊆ X is
* κ-lineable if Y∪{0} contains a vector subspace of dimension κ;
* κ-spaceable if Y∪{0} contains a closed vector subspace of dimension κ;
* κ-algebrable if Y∪{0} contains a κ-generated subalgebra;
* strongly κ-algebrable if Y∪{0}
contains a κ-generated subalgebra which is a free algebra;
* lineable if it is κ-lineable for some infinite κ (and similarly for other notions defined above).
In <cit.>, the authors consider the Banach algebra ℓ_1 (of all real sequences (a_n) with absolutely convergent series ∑_n=1^∞ a_n equipped with the norm ‖ (a_n)‖ = ∑_n=1^∞ |a_n| and coordinatewise addition and multiplication) and examine the lineability-like notions of the set of those sequences for which the assertion of Olivier's theorem fails, namely they examine the set:
AOS = {(a_n)∈ℓ_1:(na_n) is not convergent to zero}.
Among others, they proved that AOS is strongly -algebrable, -lineable and spaceable.
In Section <ref>, we examine the lineability-like notions of the following sets:
AOS() = {(a_n)∈ℓ_1:(na_n) is not -convergent to zero},
AOS(^*) = {(a_n)∈ℓ_1:(na_n) is not ^*-convergent to zero},
AOS(^+) = {(a_n)∈ℓ_1:(na_n) is not ^+-convergent to zero}
for an arbitrary ideal on .
Since ^*-convergence implies both -convergence and ^+-convergence, AOS()⊆ AOS(^*) and AOS(^+)⊆ AOS(^*). However, in general, these inclusions do not reverse (see Proposition <ref>).
In Section <ref>, we prove that a necessary and sufficient condition for AOS(), AOS(^*) and AOS(^+) to be -lineable is that these families are nonempty (see Theorem <ref>).
In Section <ref>, we describe some classes of ideals for which
a necessary and sufficient condition for AOS() and AOS(^*) to be spaceable is that these families are nonempty (see
Theorems <ref> and <ref>).
An example of such a class of ideals is the family of all Borel ideals (see Theorem <ref>).
However, we do not know if this condition works for every ideal (see Question <ref>).
Moreover, we do not know any conditions under which AOS(^+) is spaceable (see Question <ref>).
In Section <ref>, we describe some classes of ideals for which AOS(), AOS(^*) and AOS(^+) are strong -algebrable (see Theorem <ref>).
§ THE IDEAL TEST FOR THE DIVERGENCE OF AN INFINITE SERIES
§.§ and ^* tests
Let be an ideal on .
Let g:(0,∞)→(0,∞) be a strictly increasing function such that
lim_x→ 0^+g(x)/x^γ=M
for some positive constants γ,M∈(0,∞).
Let (b_n) and (c_n) be sequences of positive reals such that
lim_n→∞ c_n=∞.
Then the following conditions are equivalent.
* For every sequence (a_n) with a_n∈(g) we have
∑_n=1^∞ b_n a_n<∞ ⟹ ^*-lim c_n g^-1(a_n)=0.
* For every sequence (a_n) with a_n∈(g) we have
∑_n=1^∞ b_n a_n<∞ ⟹ -lim c_n g^-1(a_n)=0.
* The ideal extends the summable ideal generated by the sequence (b_n g(1/c_n)):
_(b_n g(1/c_n))⊆.
(<ref>)(<ref>)
It follows from the fact that ^*-convergence implies -convergence.
(<ref>)(<ref>)
Suppose that there exists A∈_(b_n g(1/c_n))∖. For any n∈, take as d_n some element of (g) with d_n≤1/n^2 b_n. We can find such an element by the assumption lim_x→ 0^+ g(x)=0.
We define
a_n={[ g(1/c_n) for n∈ A,; d_n for other n. ].
We can see that ∑_n=1^∞ b_n a_n<∞. Indeed, observe that
∑_n=1^∞ b_n a_n≤∑_n∈ Ab_n g(1/c_n) + ∑_n∉Ab_n/n^2 b_n.
Since A∈_(b_n g(1/c_n)), the first sum is finite and the second sum is finite because the series ∑1/n^2 is convergent.
It follows from the assumption that -lim c_n g^-1(a_n)=0. On the other hand, for every n∈ A we have
c_n g^-1(a_n)=c_n g^-1( g(1/c_n) )=c_n (1/c_n)=1.
Since A∉, we obtain -lim c_n g^-1(a_n)≠0, a contradiction.
(<ref>)(<ref>)
Let ∑_n=1^∞b_n a_n<∞. We need to show that -lim c_n g^-1(a_n)=0.
Take ε>0. We consider the set
A={n∈:c_n g^-1(a_n)≥ε}={n∈: a_n≥ g(ε/c_n) }={n∈: b_n a_n≥ b_n g(ε/c_n) }.
In order to prove that A∈, we only need to show that ∑_n∈ A b_n g(1/c_n)<∞.
First we notice that ∑_n∈ A b_n g(ε/c_n)<∞ since
∑_n∈ A b_n g(ε/c_n)≤∑_n∈ Ab_n a_n≤∑_n=1^∞b_n a_n<∞.
Now, we would like to prove that convergence of the series ∑_n∈ A b_n g(ε/c_n) implies convergence of the series ∑_n∈ A b_n g(1/c_n).
Notice that since lim_n→∞ c_n=∞, we have lim_n→∞ 1/c_n=0. Therefore
lim_n→∞g(1/c_n)/(1/c_n)^γ=M and lim_n→∞g(ε/c_n)/(ε/c_n)^γ=M.
It follows that
lim_n→∞g(1/c_n)/g(ε/c_n)·(ε/c_n)^γ/(1/c_n)^γ=1,
thus
lim_n→∞g(1/c_n)/g(ε/c_n)=ε^-γ∈(0,∞).
Because of that, ∑_n∈ A b_n g(ε/c_n)<∞ is equivalent to ∑_n∈ A b_n g(1/c_n)<∞.
(<ref>)(<ref>)
Since
(<ref>)(<ref>)
and _(b_n g(1/c_n))⊆_(b_n g(1/c_n)),
the convergence of the series ∑_n=1^∞ b_n a_n implies
_(b_n g(1/c_n))-lim c_n g^-1(a_n)=0.
Since _(b_n g(1/c_n)) is a P-ideal, we obtain _(b_n g(1/c_n))^*-lim c_n g^-1(a_n)=0. By the assumption _(b_n g(1/c_n))⊆, we have _(b_n g(1/c_n))^*⊆^*, thus ^*-lim c_n g^-1(a_n)=0.
Let be an ideal on .
Let g:(0,∞)→(0,∞) be a strictly increasing function such that
lim_x→ 0^+ g(x)=0 and ∀ε>0 ∃ M ∀ x>0 (g(x)/g(ε x)≤ M).
Let (b_n) and (c_n) be sequences of positive reals.
Then the following conditions are equivalent.
* For every sequence (a_n) with a_n∈(g) we have
∑_n=1^∞ b_n a_n<∞ ⟹ ^*-lim c_n g^-1(a_n)=0.
* For every sequence (a_n) with a_n∈(g) we have
∑_n=1^∞ b_n a_n<∞ ⟹ -lim c_n g^-1(a_n)=0.
* The ideal extends the summable ideal generated by the sequence (b_n g(1/c_n)):
_(b_n g(1/c_n))⊆.
(<ref>)(<ref>)
It follows from the fact that ^*-convergence implies -convergence.
(<ref>) (<ref>)
Suppose that there exists A∈_(b_n g(1/c_n))∖. For any n∈, take as d_n some element of (g) with d_n≤1/n^2 b_n. We can find such an element because lim_x→ 0^+ g(x)=0.
We define
a_n={[ g(1/c_n) for n∈ A,; d_n for other n. ].
We can see that ∑_n=1^∞ b_n a_n<∞. Indeed, observe that
∑_n=1^∞ b_n a_n≤∑_n∈ Ab_n g(1/c_n) + ∑_n∉Ab_n/n^2 b_n.
Since A∈_(b_n g(1/c_n)), the first sum is finite and the second sum is finite because the series ∑1/n^2 is convergent.
It follows from the assumption that -lim c_n g^-1(a_n)=0. On the other hand, for every n∈ A we have
c_n g^-1(a_n)=c_n g^-1( g(1/c_n) )=c_n (1/c_n)=1.
Since A∉, we obtain -lim c_n g^-1(a_n)≠0, a contradiction.
(<ref>) (<ref>)
Let ∑_n=1^∞b_n a_n<∞. We need to show that -lim c_n g^-1(a_n)=0.
Take ε>0. We consider the set
A={n∈:c_n g^-1(a_n)≥ε}={n∈: a_n≥ g(ε/c_n) }={n∈: b_n a_n≥ b_n g(ε/c_n) }.
In order to prove that A∈, we only need to show that ∑_n∈ A b_n g(1/c_n)<∞.
First we notice that ∑_n∈ A b_n g(ε/c_n)<∞ since
∑_n∈ A b_n g(ε/c_n)≤∑_n∈ Ab_n a_n≤∑_n=1^∞b_n a_n<∞.
Now, observe that there exists M such that for all n∈ we have
g(1/c_n)/g(ε/c_n)≤ M,
thus
∑_n∈ A b_n g(1/c_n)≤ M ∑_n∈ A b_n g(ε/c_n)<∞.
(<ref>)(<ref>)
Since
(<ref>)(<ref>)
and _(b_n g(1/c_n))⊆_(b_n g(1/c_n)),
the convergence of the series ∑_n=1^∞ b_n a_n implies
_(b_n g(1/c_n))-lim c_n g^-1(a_n)=0.
Since _(b_n g(1/c_n)) is a P-ideal, we obtain _(b_n g(1/c_n))^*-lim c_n g^-1(a_n)=0. By the assumption _(b_n g(1/c_n))⊆, we have _(b_n g(1/c_n))^*⊆^*, thus ^*-lim c_n g^-1(a_n)=0.
* Notice that both Theorems <ref> and <ref> would still be true if we add 0 to the domain and codomain of g and require that g(0)=0. There is even no need to change any of the proofs, and then we can strengthen these theorems by requiring the sequence (a_n) to be non-negative instead of positive.
* Note that Theorem <ref>
does not imply Theorem <ref>.
Indeed, the function g(x)=e^x-1 satisfies the assumption of Theorem <ref> (as, using l'Hospital's rule, we have lim_x→ 0^+g(x)/x=1, so γ=1 and M=1 work), but it does not satisfy the assumption of Theorem <ref> (as lim_x→∞g(x)/g(x/2)=∞, so if ε=1/2, then for every M>0 one can find x>0 such that g(x)/g(x/2)>M).
* On the other hand, Theorem <ref> works for any sequences (c_n) whereas Theorem <ref> works only for sequences (c_n) which are divergent to infinity.
* If a function g(x) satisfies the assumptions of Theorem <ref>, it also satisfies lim_x→ 0^+g(x)=0. On the other hand, if g(x) = e^-1/x, then lim_x→ 0^+ g(x) = 0, but g(x) does not satisfy the assumption of Theorem <ref>.
Moreover, the equivalence from Theorem <ref> does not hold for the function g(x)=e^-1/x as it is witnessed by sequences
a_n=1/n^2, b_n=1 and c_n=ln n.
If g, (b_n) and (c_n) are like in Theorem <ref> or Theorem <ref>, then
for every sequence (a_n) with a_n∈(g) we have
∑_n=1^∞ b_n a_n<∞ ⟹ _(b_n g(1/c_n))^*-lim c_n g^-1(a_n)=0
and
∑_n=1^∞ b_n a_n<∞ ⟹ _(b_n g(1/c_n))-lim c_n g^-1(a_n)=0.
Apply Theorem <ref> or Theorem <ref> with the ideal = _(b_n g(1/c_n)).
The equivalence “(<ref>) (<ref>)” in the following result is just Theorem <ref> and it was proved in <cit.>. Here, we strengthen this theorem essentially, because ^*-convergence is stronger than -convergence.
Let be an ideal on . The following conditions are equivalent.
* For every sequence (a_n) of non-negative numbers we have
∑_n=1^∞ a_n<∞ ⟹ -lim n a_n=0.
* For every sequence (a_n) of non-negative numbers we have
∑_n=1^∞ a_n<∞ ⟹ ^*-lim n a_n=0.
* The ideal extends the summable ideal:
_1/n⊆.
Apply Theorem <ref> with
g(x)=x, b_n=1 and c_n=n and Remark <ref>(<ref>).
Let be an ideal on .
Let p, q be fixed positive numbers and α,β be fixed nonnegative numbers.
Then the following conditions are equivalent.
* For every sequence (d_n) of positive numbers we have
∑_n=1^∞ n^α d_n^p<∞ ⟹ -lim n^β d_n^q=0.
* The ideal extends the summable ideal generated by the sequence (n^α - β p/q):
_(n^α - β p/q)⊆.
We can apply Theorem <ref> with
g(x)=x^p/q,
a_n=d_n^p, b_n=n^α and c_n=n^β.
Let be an ideal on .
Let (b_n), (c_n) be sequences of positive numbers and let p, q be fixed positive numbers.
Then the following conditions are equivalent.
* For every sequence (d_n) of positive numbers we have
∑_n=1^∞ b_n d_n^p<∞ ⟹ -lim c_n d_n^q=0.
* The ideal extends the summable ideal generated by the sequence (b_n c_n^-p/q):
_(b_n c_n^-p/q)⊆.
We can apply Theorem <ref> with
g(x)=x^p/q,
a_n=d_n^p, b_n=b_n and c_n=c_n.
One can show that for instance functions g(x)=e^x-1, g(x)=ln(x+1), g(x)=arctan x and even g(x)=Φ(x)-1/2 (where Φ(x) is the cumulative distribution function of the standard normal distribution) satisfy the assumptions of Theorem <ref>.
On the other hand, all these functions are not the power functions x^p/q considered in Corollary <ref>.
§.§ ^+ test
Let be an ideal on .
Let g, (b_n) and (c_n) be like in Theorem <ref> or Theorem <ref>.
Then the following conditions are equivalent.
* For every sequence (a_n) with a_n∈(g) we have
∑_n=1^∞ b_n a_n<∞ ⟹ ^+-lim c_n g^-1(a_n)=0.
* The filter dual to is disjoint from the summable ideal generated by the sequence (b_n g(1/c_n)):
_(b_n g(1/c_n))∩^*=∅.
(<ref>) (<ref>)
Suppose that there exists A∈_(b_n g(1/c_n))∩^*. For any n∈, take as d_n some element of (g) with d_n≤1/n^2 b_n. We can find such an element by the assumption lim_x→ 0^+ g(x)=0.
We define
a_n={[ g(1/c_n) for n∈ A,; d_n for n∉A. ].
We can see that ∑_n=1^∞ b_n a_n<∞ because
∑_n=1^∞b_n a_n≤∑_n∈ A b_n g(1/c_n) + ∑_n∉Ab_n/b_n n^2.
Since A∈_(b_n g(1/c_n)), the first sum is finite and the second sum is finite because the series ∑1/n^2 is convergent.
On the other hand, for all n∈ A we have
c_n g^-1(a_n)=c_n g^-1(g(1/c_n))=c_n (1/c_n)=1.
Since A∈^*, it follows that for any B∈^+ there are infinitely many n∈ B∩ A with c_n g^-1(a_n)=1, thus we cannot have ^+-lim c_n g^-1(a_n)=0.
(<ref>) (<ref>)
Suppose that there exists a positive sequence (a_n) with ∑_n=1^∞ b_n a_n<∞ such that for any B∈^+ we have lim sup_n∈ B c_n g^-1(a_n)>0.
Consider the sets A_k={n∈: c_n g^-1(a_n)≥ 1/k}. We can notice that for each k∈ we have A_k∈_(b_n g(1/c_n)). Indeed, let us assume that it is not the case for some k∈. Then
∞>∑_n∈ b_n a_n ≥∑_n∈ A_kb_n a_n ≥∑_n∈ A_kb_n g(1/k c_n),
which is infinite since A_k∉_(b_n g(1/c_n)) and
lim sup_n→∞g(1/c_n)/g(ε/c_n)∈(0,∞)
for any ε>0 by the reasonings presented in the proofs of Theorems <ref> and <ref>, thus bringing us to a contradiction.
Now, since _(b_n g(1/c_n)) is a P-ideal, there exists B∈_(b_n g(1/c_n))^* such that B∩ A_k is finite for all k∈. We can see that lim_n∈ Bc_n g^-1(a_n)=0. By our assumption we get B∉^+, hence B∈. If we now take C=∖ B, we obtain C∈_(b_n g(1/c_n))∩^*.
Notice that Theorem <ref> would still be true if we add 0 to the domain and codomain of g and require that g(0)=0.
Let be an ideal on .
Then the following conditions are equivalent.
* For every sequence (a_n) of non-negative numbers we have
∑_n=1^∞ a_n<∞ ⟹ ^+-lim n a_n=0.
* The filter dual to is disjoint from the summable ideal:
_1/n∩^*=∅.
Apply Theorem <ref> with
g(x)=x,
b_n=1 and c_n=n and Remark <ref>.
§ THE IDEAL TEST FOR THE DIVERGENCE OF AN INFINITE SERIES WITH MONOTONE TERMS
For an infinite set X={x_1<x_2<…}⊆, we define f_X:→ by
f_X(i)=1/x_n i∈(x_n-1, x_n] for some n∈
(we take x_0=0), and
_X={A⊆: ∑_n∈ A f_X(n)<∞}.
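For example, if X={2,4,6,…} is the set of even numbers, then x_n=2n and f_X(i)=1/(2n) for i∈(2n-2,2n], so that 1/(2i)≤ f_X(i)≤ 1/i for every i; consequently ∑_n∈ Af_X(n)<∞ if and only if ∑_n∈ A1/n<∞, and _X=_1/n in this case.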
The following easy proposition summarizes a few basic properties of ideals of the form _X.
* _=_1/n.
* For any infinite X, _X is equal to the summable ideal generated by the sequence f_X: _X=_(f_X).
* If X⊆ Y then _X⊇_Y.
* _1/n⊆_X for every infinite X.
* For any infinite X, if A∈_X^* then A has upper asymptotic density 1:
A∈^*_X⟹lim sup_n→∞|A∩{1,…,n}|/n=1.
The first four items are easy observations. We will prove the last item by showing that if A has positive lower asymptotic density then A∉_X, i.e.
lim inf_n→∞|A∩{1,…,n}|/n>0⟹ A∉_X.
Take A⊆ with positive lower asymptotic density. Pick α>0 such that the lower asymptotic density of A is greater than 2α.
First, observe that there exist k,N∈ such that for all n≥ N we have |A∩[2^nk,2^nk+k)|/2^nk+k>α. Indeed, otherwise for infinitely many n∈ we have
2α<|A∩[1,2^nk+k]|/2^nk+k≤2^nk/2^nk+k+ |A∩[2^nk,2^n+k)|/2^nk+k≤ 2^-k+α,
which is a contradiction for any k∈ with 2^-k≤α.
Now, for any n∈ denote by I_n the interval [2^nk,2^nk+k). Let Y={y_1<y_2<…} be such a subset of X that |Y∩ I_n|≤ 1 for all n∈ and y_1>max I_N. Since _X⊆_Y by (3), we will finish the proof by showing that A∉_Y.
Take any y_n. Then y_n∈ I_m for some m>N, thus
∑_i∈ A∩ I_m-1f_Y(i)≥α 2^mk/y_n≥α 2^mk/2^mk+k=α/2^k.
Since that calculation holds for any y_n and Y is infinite, we obtain
∑_i∈ Af_Y(i)≥∑_n=1^∞α/2^k=∞,
hence A∉_Y.
Let be an ideal on .
Then the following conditions are equivalent.
* For every ^*-nonincreasing sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ ^*-lim n a_n=0.
* For every ^*-nonincreasing sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ -lim n a_n=0.
* The filter dual to is disjoint from each ideal _X with X∈^+:
∀ X∈^+ ( _X∩^*=∅).
(<ref>) (<ref>)
It follows from the fact that ^*-convergence implies -convergence.
(<ref>) (<ref>)
Suppose there exist X∈^+ and A⊆ such that A∈_X∩^*≠∅.
Define
a_n=
f_X(n) for n∈ A,
1/n^2 for n∉A.
Since f_X is nonincreasing, the sequence a_n is nonincreasing on the set A∈^*. Moreover,
∑_n∈a_n=∑_n∈ A f_X(n)+∑_n∉A1/n^2 <∞,
because A∈_X.
On the other hand, for every n=x_k∈ X∩ A we have
n a_n=x_k f_X(x_k)=x_k·1/x_k=1.
Since X∩ A∈^+, the sequence (n a_n) cannot be -convergent to zero.
(<ref>) (<ref>)
Suppose there exists a sequence (a_n) with ∑_n∈a_n<∞ that is nonincreasing on some set A∈^* and the sequence (n a_n) is not ^*-convergent to zero.
We have two cases.
(a) There is an ε>0 such that {n∈ A: n a_n>ε}∈^+.
(b)
For every ε>0 we have {n∈ A: n a_n>ε}∈
In case (a), let ε>0 be such that X = {n∈ A: n a_n>ε}∈^+, and enumerate the elements of X increasingly by x_1,x_2,….
Since the sequence (a_n) is nonincreasing on A and X⊆ A, we can notice that a_n≥ a_x_k for n∈ (x_k-1,x_k]∩ A, k∈.
Therefore,
∑_n∈ A a_n
≥∑_k∈∑_n∈ (x_k-1,x_k]∩ Aa_x_k
>
∑_k∈∑_n∈ (x_k-1,x_k]∩ Aε/x_k
=
ε∑_k∈∑_n∈ (x_k-1,x_k]∩ A1/x_k
=
ε∑_n∈ A f_X(n).
From the assumption that ∑_n∈ A a_n<∞, it follows that A∈_X. Thus, A∈_X∩^*, which makes _X∩^* nonempty.
In case (b), since the sequence (n a_n) is not ^*-convergent to 0 and A∉, we can find a strictly decreasing sequence (ε_k) tending to 0 such that X_k={n∈ A: n a_n∈ [ε_k,ε_k-1)}∈∖ for every k∈ (we put ε_0=∞). Observe also that for every B∈^* there is some k∈ with B∩ X_k∉. Enumerate elements of each X_k increasingly by x_1^(k),x_2^(k),… and add x_0^(k)=0.
We will prove that A∈_X_k for every k∈. Take any k∈ and notice that for any n∈ X_k we have a_n≥ε_k/n, thus, using the fact that (a_n) is nonincreasing on A, we have
∑_n∈ Af_X_k(n)=∑_i∈|A∩ (x_i-1^(k),x_i^(k)] |/x_i^(k)≤1/ε_k∑_n∈ A a_n<∞.
Because ∑_n∈ Af_X_k(n)<∞ for each k∈, we can see that we may always find such t_k∈ that
∑_i≥ t_k|A∩ (x_i-1^(k),x_i^(k)] /x_i^(k)<1/2^k.
Moreover, by increasing t_k if necessary, we can assume that for each k>1 there exist some j≥ t_1 such that x_j^(1)∈ (x_t_k-1^(k),x_t_k^(k)).
Next, define X=⋃_k∈X_k∖{x_1^(k),…,x_t_k-1^(k)}. Note that X∈^+ as otherwise would be a set in ^* that has finite intersections with every X_k. Enumerate increasingly elements of X by x_1,x_2,… and add x_0=0. Observe that x_1=x_t_1^(1).
We will finish the proof by showing that A∈_X. In order to prove that, observe that every x_j is equal to x_i^(k) for some k∈ and i≥ t_k. Moreover, for every x_j=x_i^(k) other than x_1=x_t_1^(1) we can notice that (x_j-1,x_j]⊆ (x_i-1^(k), x_i^(k)] because either i>t_k and then x_i-1^(k)∈ X, thus x_i-1^(k)≤ x_j-1, or i=t_k and then X_1∩ X∩ (x_t_k-1^(k),x_t_k^(k))≠∅, thus x_i-1^(k)< x_j-1. Therefore,
∑_n∈ A∖{1,…,x_1}f_X(n)
=
∑_j≥ 2|A∩(x_j-1,x_j]|/x_j
≤∑_k∈∑_i≥ t_k|A∩ (x_i-1^(k),x_i^(k)] |/x_i^(k)
<
∑_k∈1/2^k=1<∞.
It clearly follows that ∑_n∈ Af_X(n)<∞, thus A∈_X∩^*.
Let be an ideal on .
Then the following conditions are equivalent.
* For every ^*-nonincreasing sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ ^+-lim n a_n=0.
* The filter dual to is disjoint from the summable ideal:
_1/n∩^*=∅.
(<ref>) (<ref>)
It follows from Corollary <ref>.
(<ref>) (<ref>)
Suppose that there exists A∈_1/n∩^*.
We define
a_n=1/n for n∈ A
and a_n=1/n^2 for n∉A.
Then (a_n) is ^*-nonincreasing and we can see that
∑_n=1^∞a_n=∑_n∈ A1/n + ∑_n∉A1/n^2<∞,
because A∈_1/n.
On the other hand, for all n∈ A we have n a_n=1. Since A∈^*, it follows that for any B∈^+ there are infinitely many n∈ B∩ A with n a_n=1, thus we cannot have ^+-lim n a_n=0.
If is an ideal on such that every A∈ has the upper asymptotic density less than 1, then
∑_n=1^∞ a_n<∞ ⟹ ^*-lim n a_n=0
for every ^*-nonincreasing sequence (a_n) of positive reals.
Since every A∈ has the upper asymptotic density less than 1, by Proposition <ref>(<ref>) it follows that ∩_X^*=∅ for every infinite X⊆. Hence, in particular, _X∩^*=∅ for every X∉. Therefore, by Theorem <ref> we obtain the desired result.
Let be an ideal on .
In the following list of conditions, each implies the next
and none of the implications reverse.
* _1/n∩^+=∅.
* _X∩^*=∅ for every X∈^+.
* _1/n∩^*=∅.
(1)(2)
Suppose that _1/n∩^+=∅.
Then using Theorems <ref> and <ref> we obtain
_X∩^*=∅ for every X∈^+.
(2)(3)
Suppose that _X∩^*=∅ for every X∈^+.
Taking X= we get _X=_1/n, so
_1/n∩^*=∅.
(2) (1)
Consider a summable ideal =_1/ln (n+1).
Then ⊆_1/n, and since
{n^2:n∈}∈_1/n∖,
we have _1/n∩^+≠∅.
On the other hand, since ⊆_1/n⊆_X for every X, we have _X∩^*=∅ for every X.
(3) (2)
Let k_n=⌊ nln (n+1) ⌋ for n∈ and define K={k_n:n∈}. Take ={A⊆: A∩ K∈}.
Notice that if A∈^* then K ∖ A is finite and
∑_n∈ K1/n=∑_n∈1/k_n≥∑_n∈1/n ln(n+1)=∞,
thus K∉_1/n, hence _1/n∩^*=∅.
Now, we pick the sequence i_1<i_2<… in such a way that i_n/k_i_n<2^-n. We can do it because lim_n→∞n/k_n=0.
Consider the set A={k_i_n:n∈}. Then A∈^+ as A⊆ K and A is infinite.
Moreover,
∑_k∈ Kf_A(k)≤∑_n∈(i_n-i_n-1)1/k_i_n≤∑_n∈i_n/k_i_n< ∑_n∈1/2^n <∞.
Therefore, there is A∈^+ such that K∈_A∩^*, thus _A∩^*≠∅.
Let be an ideal on .
In the following list of conditions, each implies the next
and none of the implications reverse.
* The coideal of is disjoint from the summable ideal:
_1/n∩^+=∅.
* For every ^*-nonincreasing sequence (a_n) of positive reals we have
∑_n=1^∞ a_n<∞ ⟹ ^*-lim n a_n=0.
* The filter dual to is disjoint from the summable ideal:
_1/n∩^*=∅.
Use Theorem <ref> along with Proposition <ref>.
§ ALGEBRAIC STRUCTURES IN FAMILIES OF SEQUENCES RELATED TO OLIVIER'S THEOREM
The following proposition gives necessary and sufficient conditions under which the families AOS(), AOS(^*) and AOS(^+) are nonempty.
Let be an ideal on .
* The following conditions are equivalent.
* AOS()≠∅.
* AOS(^*)≠∅.
* ^+∩_1/n≠∅.
* The following conditions are equivalent.
* AOS(^+)≠∅.
* ^*∩_1/n≠∅.
(1) It follows from Corollary <ref>.
(2) It follows from Corollary <ref>.
Since ^*-convergence implies both -convergence and ^+-convergence,
AOS()⊆ AOS(^*) and AOS(^+)⊆ AOS(^*).
Below, we show that, in general, these inclusions do not reverse, and there are no inclusions between AOS() and AOS(^+).
Let be an ideal on .
* If ^+∩_1/n≠∅ and ^*∩_1/n=∅, then AOS()⊈AOS(^+) and AOS(^*)⊈AOS(^+).
* Every ideal which is strictly contained in _1/n (e.g. =) satisfies assumptions of item (1).
* If is not a weak P-ideal and ^*∩_1/n≠∅, then
AOS(^+)⊈AOS().
* If is not a P-ideal and ^*∩_1/n≠∅, then
AOS(^*)⊈AOS().
* There exists an ideal which satisfies the assumptions of items (3) and (4).
(1)
By Proposition <ref>, we have AOS()≠∅, AOS(^*)≠∅ and AOS(^+)=∅.
(2) It is obvious.
(3)
Take A∈^*∩_1/n. Let sets A_1,A_2…∈ be pairwise disjoint such that for any B∉ there is n∈ with B∩ A_n∉. We may assume that ⋃_n=1^∞ A_n=A.
Indeed, if that is not the case then enumerate A∖⋃_n=1^∞ A_n by (x_i) (for either i∈ or i≤ N depending on whether this difference is infinite or not) and for each n∈ put A_n'=(A_n∩ A)∪{x_n} (or A'_n=A_n ∩ A in case x_n is not defined). Then clearly A'_n∈ for every n∈ and ⋃_n=1^∞ A'_n=A.
We define
a_n=1/n i for n∈ A_i,
1/n^2 for other n.
Obviously
∑_n=1^∞ a_n≤∑_n=1^∞1/n^2 +∑_n∈ A1/n<∞.
Moreover, we can notice that for every k∈ we have
{n∈:n a_n≥1/k}⊆ (∖ A)∪⋃_i=1^k A_i∈,
thus -lim n a_n=0, hence (a_n)∉AOS().
On the other hand for every B∉ there is k∈ with B∩ A_k∉, thus there are infinitely many n∈ B such that n a_n=1/k, hence (n a_n)_n∈ B cannot be convergent to 0. It follows that (a_n)∈ AOS(^+).
(4)
The obvious modification of the proof of item (3) gives the proof of item (4).
(5)
Take an infinite set A∈_1/n.
Let {A_n : n∈} be an infinite partition of A into infinite sets.
We define an ideal by
B∈ B∩ A_n∈ for all but finitely many n∈.
Then is not a weak P-ideal (so also not a P-ideal) and A∈^*∩_1/n.
§.§ Lineability
Let be an ideal on .
* The following conditions are equivalent.
* AOS()≠∅.
* AOS(^*)≠∅.
* AOS() is 𝔠-lineable.
* AOS(^*) is 𝔠-lineable.
* _1/n∩^+≠∅.
* The following conditions are equivalent.
* AOS(^+)≠∅.
* AOS(^+) is 𝔠-lineable.
* _1/n∩^*≠∅.
The equivalence of (1a), (1b) and (1e) follows from Proposition <ref>.
The implications (1c)(1a) and (1d)(1b) are obvious.
By AOS()⊆ AOS(^*) we get (1c)(1d). Thus, it suffices to show (1e)(1c).
The equivalence of (2a) and (2c) follows from Proposition <ref>. The implication (2b)(2a) is obvious. Thus, it suffices to show (2c)(2b).
Below we prove that (1e)(1c) ((2c)(2b), resp.).
Assume that there is some A∈_1/n∩^+ (A∈_1/n∩^*, resp.).
Since ∑_n∈ A1/n<∞, we can find an increasing sequence (j_k) of integers such that
∑_n>j_k,n∈ A1/n<1/k2^k.
We put A_0=(∖ A)∪[1,j_1]∩ and A_k=A∩(j_k,j_k+1] for each k≥ 1.
Observe that (A_k) is a partition of and A_k∈ for k≥ 1, while A_0∉^* (A_k∈ for all k, resp.). Moreover,
∑_k=0^∞(k∑_n∈ A_k1/n)<∞.
For each α∈ (0,1) let x^(α) be a sequence given by
x^(α)(n)=k^α/n n∈ A_k.
Note that x^(α)∈ℓ_1 for each α∈(0,1) as
∑_n=1^∞ x^(α)(n)=∑_k=0^∞(k^α∑_n∈ A_k1/n)≤∑_k=0^∞(k∑_n∈ A_k1/n)<∞.
In order to show 𝔠-lineability of AOS() (AOS(^+), resp.), consider a linear combination
y=c_1 x^(α_1)+…+c_m x^(α_m),
where c_i∈∖{0} for i≤ m and α_1>…>α_m. We need to show that y∈ AOS() (y∈ AOS(^+), resp.).
Obviously, y∈ℓ_1 as a linear combination of ℓ_1-sequences.
Observe that for each n∈ A_k with k≥ 1 we have
|n y(n)|
=
|n ∑_i=1^m c_i·k^α_i/n|
=
|∑_i=1^m c_i k^α_i|
=
|c_1k^α_1|·|1+ ∑_i=2^m c_i/c_1· k^α_i-α_1|
≥
|c_1k^α_1|·(1- |∑_i=2^m c_i/c_1· k^α_i-α_1|)
∞,
as
lim_k→∞ k^α_1=∞ and lim_k→∞ k^α_i-α_1=lim_k1/k^α_1-α_i=0 for all i=2,…,m.
To show that y∈ AOS(), find k_0 such that |n y(n)|≥ 1 for all n∈⋃_k≥ k_0A_k and note that ⋃_k≥ k_0A_k∉ as A_0∉^* and A_k∈ for k≥ 1. Hence, y∈ AOS().
To show that y∈ AOS(^+), fix any B∈^+. Since A_k∈ for all k∈∪{0}, the set {k∈: B∩ A_k≠∅} has to be infinite. Thus, (|n y(n)|)_n∈ B contains a subsequence (|n y(n)|)_n∈ B∖ A_0 which is divergent to infinity. Hence, y∈ AOS(^+).
§.§ Spaceability
For any x∈ℓ_1, we write
‖ x‖ = ∑_n=1^∞ |x(n)| and (x) = {n∈:x(n)≠ 0}.
For an ideal and a set C∈^+, we define an ideal C on the set C by
C = {A∩ C:A∈}.
Let be an ideal on such that
C is not a maximal ideal for every C∈^+ (i.e. every set from ^+ can be divided into two disjoint sets from ^+).
Then the following conditions are equivalent.
* AOS()≠∅.
* AOS(^*)≠∅.
* AOS() is 𝔠-lineable.
* AOS(^*) is 𝔠-lineable.
* AOS() is spaceable.
* AOS(^*) is spaceable.
* _1/n∩^+≠∅.
The equivalence of (1), (2), (3), (4) and (7) is due to Theorem <ref>.
It is known (<cit.>, see also <cit.> or <cit.>) that every infinite-dimensional Banach space has dimension at least , so we obtain the implications (6)(4) and (5)(3).
By AOS()⊆ AOS(^*) we get (5)(6).
Thus, it suffices to show (1)(5).
Let (b_n)∈ AOS().
Without loss of generality, we can assume that ‖ (b_n)‖=1 and b_n≥ 0 for each n∈.
Then there exists ε>0 such that
C = {n∈: nb_n≥ε}∉,
and, consequently, there exist
pairwise disjoint sets D_n∉ such that D_n⊆ C for each n∈.
For each i,n∈, we define
x^(i)(n) =
b_n if n∈ D_i∖{min D_i},
1-∑_n∈ D_i∖{min D_i}b_n if n=min D_i,
0 otherwise.
Then ‖ x^(i)‖=1, (x^(i)) = D_i and (x^(i))∩(x^(j))=∅ for each i,j∈, i≠ j.
Thus
V={∑_i=1^∞ t_i x^(i): (t_i)∈ℓ_1}
is a closed subspace of infinite dimension in ℓ_1. Hence, we only need to show that V⊆ AOS()∪{0}.
Let (t_i)∈ℓ_1. If t_i=0 for all i∈ then obviously ∑_i=1^∞ t_i x^(i)∈ AOS()∪{0}.
Suppose that t_i_0≠ 0 for some i_0∈.
Then for any n∈ D_i_0∖{min D_i_0} we have
|n (∑_i=1^∞ t_i x^(i)(n))|
=
|n t_i_0 b_n|
≥| t_i_0ε|>0.
Since
D_i_0∉,
we obtain that the sequence
(n (∑_i=1^∞ t_i x^(i)(n)))_n
is not -convergent to zero, hence it belongs to AOS().
By identifying sets of natural numbers with their characteristic functions,
we equip () with the topology of the Cantor space {0,1}^ (with the product measure of a countable sequence of the uniform measures on each {0,1}, resp.) and therefore
we can assign topological (measure-theoretic) notations to ideals on .
In particular, an ideal has the Baire property (is Lebesgue measurable or is Borel, resp.) if has the Baire property (is Lebesgue measurable or is a Borel set, resp.) as a subset of {0,1}^.
For instance, summable ideals are Borel (even F_σ) ideals.
We say that an ideal has the hereditary Baire property (is hereditary Lebesgue measurable, resp.) if C has the Baire property (is Lebesgue measurable, resp.) for every C∈^+. (Using <cit.>, one can construct an ideal with the Baire property which does not have the hereditary Baire property).
On the other hand, there is no use defining the hereditary Borel ideals because it is known that if is a Borel ideal then C is a Borel ideal for every C∈^+ (see for instance the proof of <cit.>).
Consequently, Borel ideals have the hereditary Baire property and are hereditary Lebesgue measurable.
Let be an ideal on which has the hereditary Baire property or is hereditary Lebesgue measurable (in particular, if it is a Borel ideal).
Then the following conditions are equivalent.
* AOS()≠∅.
* AOS(^*)≠∅.
* AOS() is 𝔠-lineable.
* AOS(^*) is 𝔠-lineable.
* AOS() is spaceable.
* AOS(^*) is spaceable.
* _1/n∩^+≠∅.
Let be an ideal with the hereditary Baire property (hereditary Lebesgue measurable, resp.).
Since a maximal ideal does not have the Baire property and is not Lebesgue measurable (see e.g. <cit.>), we obtain that C
is not a maximal ideal for any C∈^+.
Thus, Theorem <ref> finishes the proof.
An ideal on is called tall if for every infinite A⊆ there is an infinite B⊆ A such that B∈.
The assumption of Theorem <ref> is not satisfied for some non-tall ideals. Below, we provide an additional result (see Theorem <ref>) which for instance guarantees that AOS() is spaceable for every non-tall ideal (see Corollary <ref>).
By e_D:→ D we denote the increasing enumeration of a set D⊆.
If is an ideal on such that
there exist pairwise disjoint sets D_n∈^+, n∈, and a set C∈^+∩_1/n such that
{e_D_n(i):i∈ C}∈^+
for each n∈,
then
* AOS()≠∅,
* AOS(^*)≠∅,
* AOS() is 𝔠-lineable,
* AOS(^*) is 𝔠-lineable,
* AOS() is spaceable,
* AOS(^*) is spaceable.
Since ^+∩_1/n≠∅, we obtain
(1), (2), (3) and (4) from Theorem <ref>.
By AOS()⊆ AOS(^*) we get (5)(6).
Thus, it suffices to show (5).
Let D_n and C be as in the assumption of the theorem.
For each n∈ we define
a_n = 1/n for n∈ C,
1/n^2 otherwise.
Then (a_n)∈ℓ_1, ‖ (a_n)‖>0
and the sequence (na_n) is not -convergent to zero.
Now, we define b_n=a_n/‖ (a_n)‖ for each n∈, and notice that
(b_n)∈ℓ_1, ‖ (b_n)‖=1 and the sequence (nb_n) is not -convergent to zero.
For each i,n∈, we define
x^(i)(n) =
b_j if n∈ D_i, n=e_D_i(j),
0 otherwise.
Then x^(i)∈ℓ_1, ‖ x^(i)‖=1, (x^(i)) = D_i and (x^(i))∩(x^(j))=∅ for each i,j∈, i≠ j.
Thus
V={∑_i=1^∞ t_i x^(i): (t_i)∈ℓ_1}
is a closed subspace of infinite dimension. Hence, we only need to show that V⊆ AOS()∪{0}.
Let (t_i)∈ℓ_1. If t_i=0 for all i∈ then obviously ∑_i=1^∞ t_i x^(i)∈ AOS()∪{0}.
Suppose that t_i_0≠ 0 for some i_0∈.
Then for any j∈ C and n=e_D_i_0(j) we have
|n(∑_i=1^∞ t_i x^(i)(n))|
=
|e_D_i_0(j) (∑_i=1^∞ t_i x^(i)(e_D_i_0(j)))|
=
|e_D_i_0(j) t_i_0 x^(i_0)(e_D_i_0(j))|
=
|e_D_i_0(j) t_i_0 b_j|
≥|j t_i_0 b_j|
=
| j t_i_0·1/j/‖ (a_k)‖|
=
|t_i_0|/‖ (a_k)‖ >0.
Since
{e_D_i_0(j):j∈ C}∉,
we obtain that the sequence
(n (∑_i=1^∞ t_i x^(i)(n)))_n
is not -convergent to zero, hence it belongs to AOS().
If an ideal is not tall, then
* AOS()≠∅,
* AOS(^*)≠∅,
* AOS() is 𝔠-lineable,
* AOS(^*) is 𝔠-lineable,
* AOS() is spaceable,
* AOS(^*) is spaceable.
Let A⊆ be an infinite set which does not contain an infinite subsets from .
Let D_n⊆ A, n∈, be pairwise disjoint infinite sets.
Take any C∈^+∩_1/n.
Then C is infinite, so
{e_D_n(i):i∈ C} is an infinite subset of A, hence it belongs to ^+.
Now, Theorem <ref> finishes the proof.
Let be an ideal on such that
* there exists an infinite partition of into sets from ^+,
* for each B∈^+ there exists D⊆ B such that D∈^+ and
∀ A⊆ (A∈ ⟺ {e_D(i):i∈ A}∈)
(i.e. the bijection e_D witnesses the fact that the ideals and D are isomorphic).
Then the following conditions are equivalent.
* AOS()≠∅.
* AOS(^*)≠∅.
* AOS() is 𝔠-lineable.
* AOS(^*) is 𝔠-lineable.
* AOS() is spaceable.
* AOS(^*) is spaceable.
* _1/n∩^+≠∅.
The equivalence of (1), (2), (3), (4) and (7) is due to Theorem <ref>.
It is known (<cit.>, see also <cit.> or <cit.>) that every infinite-dimensional Banach space has dimension at least , so we obtain the implications (6)(4) and (5)(3).
By AOS()⊆ AOS(^*) we get (5)(6).
Thus, it suffices to show (7)(5).
Let C∈^+∩_1/n.
Let B_n∈^+, n∈, be an infinite partition of .
For every n∈, we take D_n⊆ B_n such that D_n∈^+ and e_D_n witnesses the fact that and D_n are isomorphic.
Since C∈^+, we obtain that the set {e_D_n(i):i∈ C}∈^+ for each n∈.
Now Theorem <ref> finishes the proof.
The first assumption of Corollary <ref> can be characterized in terms of maximal ideals (see Proposition <ref>), which in turn can be used to show (see Proposition <ref>) that this assumption is valid for most ideals used in the literature (e.g. for all Borel ideals).
Let be an ideal on . Then the following conditions are equivalent.
* There exists an infinite partition of into sets from ^+.
* is not equal to the intersection of finitely many maximal ideals.
Let be an ideal on .
If has the Baire property or is Lebesgue measurable (in particular, if it is a Borel ideal), then there exists an infinite partition of into sets from ^+.
Let be an ideal with the Baire property (Lebesgue measurable, resp.).
In view of Proposition <ref>, we only need to show that is not the intersection of finitely many maximal ideals.
In <cit.> (<cit.>, resp), the authors proved that the intersection of countably many ideals without the Baire property (Lebesgue nonmeasurable, resp.) does not have the Baire property (is not Lebesgue measurable, resp.).
Since maximal ideals do not have the Baire property and are not Lebesgue measurable (see e.g. <cit.>), we obtain that is not the intersection of countably many (in particular, finitely many) maximal ideals.
Below, we show two examples of tall ideals which satisfy assumptions of Corollary <ref>.
[Hindman ideal]
A set A⊆ is an IP-set if there exists an infinite set D⊆ such that (D)⊆ A where (D) denotes the set of all finite non-empty sums of distinct elements of D.
It follows from Hindman's theorem (<cit.>, see also <cit.>) that if A∪ B is an IP-set, then A or B is an IP-set as well.
Thus, the family
_IP = {A⊆: A is not an IP-set}
is an ideal on .
The ideal _IP is coanalytic as
_IP^+ is a projection on the first coordinate of a closed set
B = {(A,D)∈{0,1}^× []^ω: (D)⊆ A},
where []^ω is a set of all infinite subsets of (which is a G_δ subset of {0,1}^, hence the Polish space).
Consequently, _IP has the Baire property, so by Proposition <ref>, the ideal _IP satisfies the first assumption of Corollary <ref>.
The fact that _IP satisfies the second assumption of Corollary <ref> follows from the proof of
<cit.>.
[van der Waerden ideal]
A set A⊆ is an AP-set if A contains arithmetic progressions of arbitrary finite length.
It follows from van der Waerden's theorem (<cit.>, see also <cit.>) that if A∪ B is an AP-set, then A or B is an AP-set as well.
Thus, the family
_AP = {A⊆: A is not an AP-set}
is an ideal on .
One can show, that this is a Borel ideal (even F_σ ideal).
Indeed, _AP is F_σ as _AP^+ is G_δ and that is true because
_AP^+=⋂_n=1^∞⋃_k=1^∞⋃_r=1^∞ A_n,k,r,
where A_n,k,r={A⊆:{k,k+r,k+2r,…, k+nr}⊆ A} is a basic open set.
Therefore, by Proposition <ref>, the ideal _AP satisfies the first assumption of Corollary <ref>.
The fact that _AP satisfies the second assumption of Corollary <ref> follows from the proof of
<cit.>.
If is an ideal on such that
there exists a set C∈^+∩_1/n such that there exists an infinite partition of C into sets from ^+,
then
* AOS()≠∅,
* AOS(^*)≠∅,
* AOS() is 𝔠-lineable,
* AOS(^*) is 𝔠-lineable,
* AOS() is spaceable,
* AOS(^*) is spaceable.
Let (b_n) be defined as in the proof of Theorem <ref> and then proceed as in the proof of Theorem <ref>.
Let C∈_1/n be an infinite set. Let be an ideal such that
C is a Borel ideal and (∖ C) is a maximal ideal.
Then the ideal
satisfies the assumption of the above theorem, but it does not satisfy the assumption of Theorem <ref>.
Is AOS()≠∅ (AOS(^*)≠∅, resp.) a necessary and sufficient condition for AOS() (AOS(^*), resp.) to be spaceable for each ideal ?
Does there exist an ideal such that AOS(^+) is nonempty and spaceable?
§.§ Algebrability
Let (a_k) be a sequence tending to infinity and (m_k) be an increasing sequence of positive integers such that
m_k≥ k^a_k for each k.
If M={m_k: k∈} and
* M ∈^+, then AOS() and AOS(^*) are strongly -algebrable;
* M∈^*, then AOS(^+) is strongly -algebrable.
Since AOS()⊆ AOS(^*), strong -algebrability of AOS(^*) will follow from
strong -algebrability of AOS().
Below we simultaneously prove strong -algebrability of AOS() and AOS(^+).
Let (a_k) and M∈^+ (M∈^*, resp.) satisfy the assumptions of the theorem.
Let Λ⊆(1,2) be a linearly independent set over rationals with |Λ|=.
For every α∈Λ, we define a sequence a^(α) by
a^(α)(n)=1/k^α for n=m_k∈ M,
1/n^α for n∉M.
Note that a^(α)∈ℓ_1 because
∑_n=1^∞a^(α)(n)≤∑_k=1^∞1/k^α +∑_n=1^∞1/n^α<∞.
Moreover, a^(α)∈ AOS() (a^(α)∈ AOS(^+), resp.) as for every n=m_k∈ M we have
n a^(α)(n)= m_k a^(α)(m_k)=m_k/k^α≥ k^a_k-α≥ k^a_k-2,
which tends to infinity.
Using <cit.>, we know that
in order to show strong -algebrability of AOS() (AOS(^+), resp.),
it is enough to prove that
P(a^(α_1),…,a^(α_q))∈ AOS() (∈ AOS(^+), resp.)
for any pairwise distinct α_1,…,α_q∈Λ
and any polynomial
P(x_1,…,x_q)=∑_i=1^p c_i x_1^β_i,1… x_q^β_i,q,
where c_i are nonzero reals and [β_i,j] is a matrix of nonnegative integers with pairwise distinct, nonzero rows.
First, observe that for any m_k∈ M we have
P(a^(α_1),…,a^(α_q)) (m_k)=∑_i=1^p c_i k^-(α_1 β_i,1+…+α_qβi,q)=∑_i=1^p c_ik^-r_i,
where r_i=α_1 β_i,1+…+α_qβi,q.
Since Λ is linearly independent, all r_i are positive and pairwise distinct. We may assume that r_1=min{r_1,…,r_p}.
Then
|m_kP(a^(α_1),…,a^(α_q)) (m_k)|
=
m_k|∑_i=1^p c_ik^-r_i|
=
m_k |c_1k^-r_1|·|1+ ∑_i=2^p c_i/c_1· k^-r_i+r_1|
≥
k^a_k|c_1k^-r_1|·|1+ ∑_i=2^p c_i/c_1· k^-r_i+r_1|
=
|c_1|· k^a_k-r_1·|1+ ∑_i=2^p c_i/c_1·1/k^r_i-r_1|
∞,
because r_i-r_1>0 for each i≥ 2 and
a_k tends to infinity.
Since M ∈^+ (M∈^*, resp.), we conclude that P(a^α_1,…,a^α_q)∈ AOS() (∈ AOS(^+), resp.).
The ideal = {A⊆: A∩{n^n:n∈} is finite}
satisfies the assumptions of the above theorem, so
AOS(), AOS(^*)
and AOS(^+)
are strongly -algebrable. However,
the nonemptiness of
AOS(), AOS(^*)
or AOS(^+) does not guarantee even 1-algebrability of these sets.
There exists an ideal such that AOS()≠∅, AOS(^+)≠∅ and AOS(^*)≠∅ but neither AOS() nor AOS(^+) nor AOS(^*) is 1-algebrable.
For a set B={n^2:n∈}, we define a summable ideal
= {A⊆:∑_n∈ A∩ B1/√(n)<∞}.
Since B∈^*∩_1/n, the sets AOS(^+), AOS() and AOS(^*) are nonempty by Proposition <ref>.
Now, we show that AOS(^*) does not contain any subalgebra generated by a singleton. This will finish the proof as AOS()⊆ AOS(^*) and AOS(^+)⊆ AOS(^*).
Take any a=(a_n)∈ AOS(^*).
Then C={n∈: |a_n|≤ 1/ √(n)}∈^* as otherwise B∖ C∈^+ and consequently
∑_n∈ |a_n|≥∑_n∈ B∖ C 1/√(n)=∞,
a contradiction with a_n∈ℓ_1.
Now, consider the polynomial P(x)=x^3. Then for any n∈ C we have
|n a_n^3|≤n/(√(n))^3=1/√(n),
which tends to zero, thus P(a_n)∉AOS(^*).
|
http://arxiv.org/abs/2307.03377v1
|
20230707041037
|
Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and Toxic Language Detection
|
[
"Angel Felipe Magnossão de Paula",
"Paolo Rosso",
"Damiano Spina"
] |
cs.CL
|
[
"cs.CL",
"cs.LG"
] |
Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and
Toxic Language Detection
Angel Felipe Magnossão de Paula1,
Paolo Rosso1 and
Damiano Spina2
1Department of Computer Systems and Computation,
Universitat Politècnica de València, València, Spain 46022
2School of Computing Technologies,
RMIT University, Melbourne, Australia 3000
Email: {adepau@doctor, prosso@dsic}.upv.es, [email protected]
This paper proposes a novel approach to mitigate the negative transfer problem. In the field of machine learning, the common strategy is to apply the Single-Task Learning approach in order to train a supervised model to solve a specific task. Training a robust model requires a lot of data and a significant amount of computational resources, making this solution unfeasible in cases where data are unavailable or expensive to gather. Therefore another solution, based on the sharing of information between tasks, has been developed: Multi-task Learning (MTL). Despite the recent developments regarding MTL, the problem of negative transfer has still to be solved. Negative transfer is a phenomenon that occurs when noisy information is shared between tasks, resulting in a drop in performance. This paper proposes a new approach to mitigate the negative transfer problem based on the task awareness concept. The proposed approach reduces negative transfer while improving performance over the classic MTL solution. Moreover, the proposed approach has been implemented in two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments. The proposed architectures set a new state-of-the-art both in EXIST-2021 and HatEval-2019 benchmarks.
Multi-task Learning, Negative Transfer, Natural Language Processing, Deep Learning
§ INTRODUCTION
Machine Learning has numerous applications in fields as diverse as Natural Language Processing (NLP) (e.g., named entity recognition and hate speech detection) <cit.> and Computer Vision (CV) (e.g., object detection and object classification) <cit.>. Generally, a single model or an ensemble of models is trained to address each task of interest separately. These models are then fine-tuned and tweaked on the chosen task until they specialize and their performance no longer increases. Despite producing satisfactory results, a Single-Task Learning (STL) strategy ignores knowledge that could be gathered from datasets of related tasks and that would allow the model to generalize better on the original task. Furthermore, in many cases the available data are insufficient to train a model robustly. Therefore, several strategies to transfer knowledge from one task to another have been developed <cit.>.
Multi-Task Learning (MTL) <cit.> is a new area of study that aims at exploiting the synergy between different tasks to reduce the amount of data or computational resources required for these activities. This approach aims at improving generalization by learning multiple tasks simultaneously.
The soft <cit.> or hard parameter-sharing <cit.> strategies are two of the most commonly used
methods for MTL employing neural networks.
In soft parameter-sharing, task-specific networks are implemented, while feature-sharing mechanisms handle cross-task communication and encourage the parameters to be similar. Since the size of the multi-task network grows linearly with the number of tasks, scalability is an issue for soft parameter-sharing systems.
In hard parameter-sharing, the parameter set is split into shared and task-specific operations.
It is commonly implemented with a shared encoder and numerous task-specific decoding heads <cit.>. One of the benefits of this method is the minimization of overfitting <cit.>.
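A minimal sketch of hard parameter-sharing is given below (a PyTorch-style illustration; the layer sizes, the two binary heads and the unweighted sum of losses are assumptions of the sketch, not the architecture proposed in this paper).

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    # One shared encoder plus one task-specific decoding head per task.
    def __init__(self, input_dim=768, hidden_dim=256, num_labels_per_task=(2, 2)):
        super().__init__()
        self.shared_encoder = nn.Sequential(            # shared parameters
            nn.Linear(input_dim, hidden_dim), nn.ReLU()
        )
        self.heads = nn.ModuleList(                     # task-specific parameters
            [nn.Linear(hidden_dim, n) for n in num_labels_per_task]
        )

    def forward(self, x):
        shared = self.shared_encoder(x)
        return [head(shared) for head in self.heads]    # one set of logits per task

# Joint training minimises the (here unweighted) sum of the per-task losses.
model = HardSharingMTL()
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 768)                                  # e.g. a batch of sentence embeddings
targets = [torch.randint(0, 2, (8,)) for _ in range(2)]  # one label tensor per task
loss = sum(criterion(logits, y) for logits, y in zip(model(x), targets))
loss.backward()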
Multilinear relationship networks <cit.> enhanced this architecture by imposing tensor normal priors on the parameter set of the fully connected layers. In these works, the branching points in the network are set ad hoc, which can result in inefficient task groupings. To address this limitation, tree-based approaches <cit.> have been proposed. Despite the improvements introduced by those works, jointly learning multiple tasks might lead to negative transfer <cit.> if noisy information is shared among the tasks. During training, the hard parameter-sharing encoder learns to construct a generic representation that focuses on extracting specific features from the input data. However, a subset of these features may provide critical information for a given decoder head while introducing noise for another decoder trying to solve its respective task. Hence, negative transfer refers to situations in which the transfer of information results in a decrease in the overall model performance.
In this work, we propose a new approach to overcome the negative transfer problem based on the concept of Task Awareness (TA). This approach enables the MTL model to take advantage of the information regarding the addressed task.
The overarching goal is for the model to handle its internal weights for its own task prioritization. Unlike the State-Of-The-Art (SOTA) approaches (presented later in Section <ref>), the proposed solution does not require a recursive structure, saving time and resources. Moreover, we designed two mechanisms based on the TA approach and implemented them in two Multi-Task Learning TA (MTL-TA) architectures to address SOTA challenges: Sexism, Hate Speech, and Toxic Language detection. The source code is publicly available.[<https://github.com/AngelFelipeMP/Mitigating-Negative-Transfer-with-TA>]
The main contributions of our work are as follows:
* We propose the use of the TA concept to mitigate the negative transfer problem during MTL training.
* Design of the Task-Aware Input (TAI) mechanism to grant the MTL models with task awareness ability to mitigate negative transfer and even improve results compared with traditional MTL models.
* Design of the Task Embedding (TE) mechanism to give MTL models task recognition capability to diminish negative transfer and improve the results over classic MTL solutions.
* Creation and validation of two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments.
* Our proposed method outperforms the SOTA on two public benchmarks for Sexism and Hate Speech detection: (i) EXIST-2021 and (ii) HatEval-2019 datasets.
The rest of the paper is structured as follows. Section <ref>
presents the related works of transfer learning and MTL. Section <ref> describes the details of our proposed method.
Section <ref> illustrates the experiment setup.
Section <ref> discusses and evaluates the experimental results.
Section <ref> presents the limitation of our approach.
Finally, conclusions and future work are drawn in Section <ref>.
§ RELATED WORK
Transfer learning is a widespread technique in machine learning based on the idea that a model created for one task can be improved by transferring information from another task <cit.>.
Training a model from scratch requires a large quantity of data and resources, but there are some circumstances where gathering training data is prohibitively expensive or impossible. As a result, there is the need to construct high-performance learners trained with more easily accessible data from different tasks. Transfer learning techniques allow us to improve the results of target tasks through information extracted from related tasks.
These techniques have been effectively used for a variety of machine learning applications, including NLP <cit.> and CV <cit.>. The MTL framework <cit.>, which seeks to learn many tasks at once even when they are distinct, is a closely related learning technique to transfer learning. This approach works well and can take advantage of sharing information among tasks. Still, if the tasks are not sufficiently related, it can lead to negative transfer. The problem of negative transfer consists of performance degradation caused by noisy information being shared between tasks.
To solve this issue, several approaches for balancing learning between different tasks have been proposed based on a re-weighing of the losses (for instance, via Homoscedastic uncertainty <cit.>, Gradient normalization <cit.> and Adversarial training <cit.>) or task prioritization <cit.>.
Further recent approaches <cit.> make use of the initial predictions obtained through multi-task networks to improve, once or repeatedly, each task output, overcoming a characteristic of the previously mentioned methods that computed all the task outputs for a given input at once.
However, due to their recursive nature, those approaches turn out to be very time-consuming and to require a lot of computational resources.
This paper proposes two unified architectures to detect Sexism, Hate Speech, and Toxic Language in text comments.
<cit.> represents the first semi-supervised multi-task approach for sexism classification. The authors addressed three tasks based on labels obtained through unsupervised learning or weak labeling. The neural multi-task architecture they proposed allows shared learning across multiple tasks via common weights and a combined loss function.
The method outperforms several SOTA baselines.
<cit.> proposed an innovative MTL approach to solve Aggressive Language Detection (ALD) together with text normalization.
The authors propose a shared encoder to learn the common features between the two tasks and a single encoder dedicated to learning the task-relevant features. The proposed model achieved a significant improvement in performance concerning the ALD task.
Those last approaches inspired the mechanism we propose in this paper. The main commonality is to have additional mechanisms added to the MTL models to improve the representation sent to the task heads.
The main difference with respect to the TA approach we propose is that we enrich the model with the ability to discover by itself which task it has to perform. This allows the MTL-TA models to create a suitable representation for each task head. In addition, the MTL-TA models do not need to learn an auxiliary task, resulting in greater efficiency. In fact, the TA approach allows the MTL models, at each step, to optimize over the task at hand. The key idea is to learn a task-relevant latent representation of the data, efficiently solving many NLP tasks <cit.>. The resulting mechanisms are proposed in the following section.
§ PROPOSED APPROACH
This section describes the details of the MTL-TA models. We first introduce the notion of TA and explain how it can be beneficial in diminishing the negative transfer <cit.> for multi-task joint training <cit.>. Secondly, two different TA mechanisms are proposed in order to incorporate the task self-awareness capability into MTL models.
The mainstream approach to supervised multi-task learning is the hard parameter-sharing method <cit.>. The model is composed of an encoder and N decoders or task heads, where N corresponds to the number of tasks on which the model is simultaneously trained <cit.>. During execution, the encoder receives an input and creates a task-agnostic latent representation that is sent to a certain task head, which is in charge of producing the final prediction.
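As a point of reference, the sketch below illustrates this hard parameter-sharing layout in PyTorch; the toy encoder, hidden size, and task names are hypothetical placeholders rather than the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared encoder followed by one task-specific decoding head per task."""
    def __init__(self, encoder: nn.Module, hidden_size: int, task_classes: dict):
        super().__init__()
        self.encoder = encoder                              # shared, task-agnostic
        self.heads = nn.ModuleDict({                        # one decoder per task
            task: nn.Linear(hidden_size, n_cls) for task, n_cls in task_classes.items()
        })

    def forward(self, inputs: torch.Tensor, task: str) -> torch.Tensor:
        latent = self.encoder(inputs)                       # task-agnostic representation
        return self.heads[task](latent)                     # only the selected head is used

# Toy usage; in this work the encoder is a Transformer (see the implementation details).
toy_encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
model = HardSharingMTL(toy_encoder, hidden_size=64,
                       task_classes={"sexism": 2, "toxicity": 2, "hate": 2})
logits = model(torch.randn(4, 32), task="sexism")           # (4, 2) class scores
```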
The lack of a closer relationship between the latent representation generated by the encoder and the tasks degrades the overall MTL model performance <cit.>. For the same input, the optimal latent representations for the different task heads are likely to differ <cit.>. Furthermore, during training the encoder representation can become biased toward more demanding tasks or toward tasks with a larger data volume <cit.>. These performance deteriorations are a reflection of the negative transfer phenomenon <cit.>, where a task head receives an inaccurate input representation for solving its respective task.
We propose two TA mechanisms to mitigate negative transfer when solving multiple NLP tasks with the MTL approach <cit.>. These mechanisms tailor the input representation sent to each head depending on the specific task being addressed. In addition, our proposed MTL models still take advantage of the generalization improvements provided by multi-task joint training. Hence, the encoder and the other parts of the MTL model located before the task heads are updated during training for every task. It should be noted that all our proposed MTL models belong to the MTL-TA class and follow the conventional MTL paradigm. Therefore, only the specific task head attached to the input data is considered during the parameter update for that task.
§.§ Task-Aware Input
The first mechanism we designed to introduce task awareness into MTL models is Task-Aware Input (TAI). To compel the encoder to generate a suitable representation for each task head, we propose to modify the conventional MTL input for NLP tasks.
The TAI includes a Text Snippet (TS) plus a Task Description (TD), as shown in Fig. <ref>. The TS is a text chunk whose length varies according to the task; it is usually the entire input for the MTL encoder. The TD is a piece of text describing what a specific head is dealing with, such as `Sexism Detection' and `Hate Speech Detection'. The new modified input provides context for the encoder to generate a task-centered representation. The MTL model endowed with the TAI mechanism is referred to as MTL Task-Aware Input (MTL-TAI).
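A minimal sketch of how a TAI input could be assembled is given below; the task keys, the separator token, and the choice of placing the TD before the TS are assumptions made for illustration (the actual TD strings are listed in the implementation details).

```python
# Hypothetical mapping from task key to Task Description (TD).
TASK_DESCRIPTIONS = {
    "sexism": "Sexism detection",
    "toxicity": "Toxic Language detection",
    "hate": "Hate Speech detection",
}

def build_tai_input(text_snippet: str, task: str, sep_token: str = "[SEP]") -> str:
    """Combine the Task Description (TD) with the Text Snippet (TS) into a single input."""
    return f"{TASK_DESCRIPTIONS[task]} {sep_token} {text_snippet}"

print(build_tai_input("example social media comment", task="sexism"))
# -> "Sexism detection [SEP] example social media comment"
```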
§.§ Task Embedding
The second mechanism we designed to endow MTL models with the TA capability is named Task Embedding (TE). We propose to insert an additional building block between the encoder and the task heads, which we call Task Embedding Block (TEB), as displayed in Fig. <ref>. It receives two inputs: (i) the Task Identification Vector (TIV) and (ii) the latent encoder representation. The TIV is a one-dimensional one-hot vector whose size corresponds to the number of task heads. Each TIV location is associated with one of the task heads.
The TEB is composed of Learning Units (LU) that encompass a linear layer followed by a ReLU layer. The number of LUs is a hyperparameter that depends on the task and data, among other factors. The TEB objective is to generate a suitable representation for the task the MTL model is solving at a specific time. Hence, depending on the task, the TEB will retrieve a different output for the same exact encoder representation. It relies on the TIV to indicate for which task the TEB will generate a representation. The TIV has the number one in the location that corresponds to the task the model is about to solve. The remainder of the vector is populated with zeros, as Fig. <ref> reflects.
The MTL model equipped with the TE mechanism is referred to as MTL Task Embedding (MTL-TE).
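The following PyTorch sketch shows one possible realization of the TEB with two LUs; how the TIV is combined with the encoder representation (here, simple concatenation before the first LU) and the latent dimension are assumptions, since only the LU structure (a linear layer plus ReLU) and the preserved output dimension are fixed above.

```python
import torch
import torch.nn as nn

class TaskEmbeddingBlock(nn.Module):
    """TEB: turns a task-agnostic encoder representation into a task-tailored one."""
    def __init__(self, latent_dim: int, n_tasks: int, n_units: int = 2):
        super().__init__()
        self.n_tasks = n_tasks
        layers, in_dim = [], latent_dim + n_tasks           # TIV concatenated to the latent vector
        for _ in range(n_units):                            # each Learning Unit = Linear + ReLU
            layers += [nn.Linear(in_dim, latent_dim), nn.ReLU()]
            in_dim = latent_dim                              # output keeps the latent dimension
        self.units = nn.Sequential(*layers)

    def forward(self, latent: torch.Tensor, task_idx: int) -> torch.Tensor:
        tiv = torch.zeros(latent.size(0), self.n_tasks, device=latent.device)
        tiv[:, task_idx] = 1.0                               # one-hot Task Identification Vector
        return self.units(torch.cat([latent, tiv], dim=-1))

teb = TaskEmbeddingBlock(latent_dim=1536, n_tasks=3, n_units=2)
tailored = teb(torch.randn(4, 1536), task_idx=1)             # representation for task head 1
```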
§ EXPERIMENTAL SETUP
This section first describes the tasks and the datasets used to evaluate our approach. It then presents the implementation details and models for reference. Finally, we share the settings for the experiments.
§.§ Data
Our approach for selecting the datasets for Sexism, Hate Speech, and Toxic Language detection was based on two requirements: (i) being publicly available; (ii) having been used to evaluate a high number of ML models. We use three datasets – EXIST-2021 <cit.>, DETOXIS-2021 <cit.>, and HateEval-2019 <cit.> – which we describe below.
EXIST-2021 <cit.>: The dataset was created for the sExism Identification in Social neTworks (EXIST) shared task at the Iberian Languages Evaluation Forum (IberLEF) 2021. The dataset consists of 11345 annotated social media text posts in English and Spanish from Twitter and Gab.com (Gab), an uncensored social media platform. The dataset development was supervised and monitored by experts in gender issues. EXIST was the first challenge on sexism detection in social media, whose objective was to identify sexism in a broad sense, from explicit misogyny to more implicit sexist behaviors. The challenge received 70 official runs for the Sexism identification task. It is a binary classification task where each sample belongs either to the Sexist class or to the Not-Sexist class. The official evaluation metric was accuracy, and the data were split into training and test sets. Table <ref> shows the data distribution.
DETOXIS-2021 <cit.>: The dataset was collected for the DEtection of TOxicity in comments In Spanish (DETOXIS) shared task at IberLEF 2021. The objective of the shared task was toxic language detection in comments to various online news articles regarding immigration. The proposed annotation methodology focused on diminishing the subjectivity of toxicity labeling by considering contextual information (e.g., linguistic features and conversational threads). The team that worked on the data annotation was composed of trained annotators and expert linguists. The dataset consists of 4354 text comments from Twitter in Spanish and provides labels for Toxic Language detection. The task is a binary classification where the samples are divided between the Toxic and Not-Toxic classes. More than 30 teams evaluated their machine learning models on the collected dataset as part of the DETOXIS shared task. The official evaluation metric was the F1-score on the Toxic class, and the data were divided into training and test sets. Table <ref> shows the data distribution.
HatEval-2019 <cit.>: The dataset was constructed for the Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) shared task, which was part of the International Workshop on Semantic Evaluation (SemEval) 2019. The dataset comprises 19600 tweets published in English and Spanish and supplies labels for Hate Speech detection. The data collection methodology employed different gathering strategies: (i) monitoring likely victims of hate accounts; (ii) downloading the records of recognized haters; (iii) filtering Twitter streams with keywords. The annotation was performed by experts and crowdsourced contributors tested for reliable annotation. The task was defined as a binary classification where the samples are associated with the Hateful class or the Not-Hateful class. The data is composed of training, development, and test sets, and the official evaluation metric was the F1-macro, which is the unweighted mean of the F1-score calculated for the two classes. HatEval was one of the most popular shared tasks in SemEval 2019, with more than 100 submitted runs for Hate Speech detection. We can see the dataset distribution in Table <ref>.
§.§ Implementation Details
The encoder was constructed using a popular BERT <cit.> version for Spanish called BETO <cit.>, followed by max and mean pooling calculation over its output. BETO has 12 self-attention layers, each with 12 attention-heads, using 768 as the hidden size with around 110 million parameters.
BETO receives a text sequence and returns a hidden representation dimensionally equivalent to its hidden size for each token that belongs to the sequence. The latent encoder representation is created by a concatenation of max pooling and mean pooling calculation on the entire 768-dimensional sequence of tokens returned by BETO.
Regarding the TE approach, the TEB preserves the same dimension of the latent encoder representation.
The task heads are linear classifiers whose input dimension corresponds to the latent encoder representation, and the output depends on the task. In the case of binary classification, the linear classifier returns two values, and the higher value corresponds to the predicted class.
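The pooling step can be sketched as follows; the masking of padding tokens is an assumption, as only the concatenation of max and mean pooling over the 768-dimensional token outputs is specified above.

```python
import torch

def pool_encoder_output(token_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Concatenate mean and max pooling over the token-level hidden states.

    token_states: (batch, seq_len, 768) last hidden states returned by BETO;
    attention_mask: (batch, seq_len) with 1 for real tokens and 0 for padding.
    Returns the (batch, 1536) latent encoder representation.
    """
    mask = attention_mask.unsqueeze(-1).float()
    mean_pool = (token_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    max_pool = token_states.masked_fill(mask == 0, float("-inf")).max(dim=1).values
    return torch.cat([mean_pool, max_pool], dim=-1)

latent = pool_encoder_output(torch.randn(2, 10, 768), torch.ones(2, 10))  # (2, 1536)
```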
Furthermore, the TDs for the EXIST-2021 <cit.>, DETOXIS-2021 <cit.>, and HatEval-2019 <cit.> datasets are, respectively, the following pieces of text: `Sexism detection', `Toxic Language detection', and `Hate Speech detection'.
The models were trained using the AdamW optimization algorithm <cit.> with a linear decay learning rate schedule and a learning rate varying from 5e-6 to 1e-4. We trained our models for 15 epochs with a dropout of 0.3 and a batch size of 64. Additionally, we experimented with 1 up to 3 LUs. Similarly to the early stopping strategy <cit.>, we kept the model with the best performance across epochs according to the task's official metric.
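A minimal optimization setup consistent with the description above might look as follows; the specific learning rate and the number of steps per epoch are placeholders within the ranges quoted in the text.

```python
import torch.nn as nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import LinearLR

def configure_optimization(model: nn.Module, lr: float = 2e-5,
                           epochs: int = 15, steps_per_epoch: int = 100):
    """AdamW with a learning rate that decays linearly to zero over training."""
    optimizer = AdamW(model.parameters(), lr=lr)
    scheduler = LinearLR(optimizer, start_factor=1.0, end_factor=0.0,
                         total_iters=epochs * steps_per_epoch)
    return optimizer, scheduler

optimizer, scheduler = configure_optimization(nn.Linear(1536, 2))
```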
§.§ Comparison Models
We compare our approach with two types of models: (i) Baselines and (ii) SOTA models. The baselines are the two models that we implemented:
* MTL is the classic MTL model. It is constructed with the same architecture as the MTL-TAI model (described in Section <ref>), but it does not include the TAI mechanism. Therefore, the MTL model receives only the TS as input.
* STL is the classic STL model. It has the same architecture as the MTL model, yet it encompasses only one task head. Hence, to compare this model type with the MTL models, it is necessary to train one model for each one of the addressed tasks.
The SOTA are the models which currently achieved the best performance on the datasets considered in our experiments:
* AI-UPV <cit.>: is a deep learning architecture based on the combination of different Transformers models <cit.>. It takes advantage of ensemble methods and, during training, applies data augmentation mechanisms. It is the SOTA for EXIST-2021 <cit.>.
* SINAI <cit.>: is a BERT base model <cit.> trained using the MTL hard parameter-sharing method. In spite of addressing five tasks and six datasets, the model was focused on Toxic Language detection, while the other tasks were used as auxiliary tasks. It is the SOTA for DETOXIS-2021 <cit.>.
* Atalaya <cit.>: is a model based on Support Vector Machines <cit.>. It was trained on several representations computed from FastText <cit.> sentiment-oriented word vectors, such as tweet embeddings <cit.>, bag-of-characters <cit.>, and bag-of-words <cit.>. It is the SOTA for HatEval-2019 <cit.>.
§.§ Experimental Settings
We conducted two experiments to evaluate our TA approach for mitigating negative transfer <cit.>, as described below.
*Cross-Validation Experiment To assess whether the TAI and TE mechanisms were capable of reducing the negative transfer during MTL training, we performed a cross-validation experiment. For each of the datasets described in Subsection <ref>, we aggregated the different sets that compose the dataset into a single set. Then, we ran 5-fold cross-validation on the STL, MTL, MTL-TAI, and MTL-TE models, as sketched below.
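The fold construction can be sketched as in the following snippet; whether the folds were shuffled or stratified is not specified in the text, so those choices are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder for the aggregated samples of one dataset (all official splits merged).
texts = np.array([f"comment {i}" for i in range(20)])
labels = np.random.randint(0, 2, size=20)

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(texts)):
    train_texts, train_labels = texts[train_idx], labels[train_idx]
    test_texts, test_labels = texts[test_idx], labels[test_idx]
    # train STL / MTL / MTL-TAI / MTL-TE on the training fold, evaluate on the test fold
```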
*Official Training-Test Split In order to compare our approach with the SOTA models <cit.> on the utilized datasets, we carried out an experiment using the official training-test split of the respective datasets. We trained our models on the training set, or on a combination of the training and development sets when the latter was available. After that, we evaluated the models on the test partitions.
In both experiments, we use only the data samples in the Spanish language and evaluate the models with the datasets' respective official metrics (described in Section <ref>). For the MTL models, we explored versions that combined two and three tasks, and we selected the models achieving the highest results on the evaluation metrics.
Finally, we applied the t-test to calculate the 95% confidence interval for the experiments' results.
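A possible construction of such an interval from per-fold scores, based on the Student t distribution, is sketched below; the exact procedure used in this work may differ.

```python
import numpy as np
from scipy import stats

def mean_confidence_interval(scores, confidence=0.95):
    """Mean of per-fold scores with a t-distribution confidence interval."""
    scores = np.asarray(scores, dtype=float)
    mean, sem = scores.mean(), stats.sem(scores)
    half_width = sem * stats.t.ppf((1.0 + confidence) / 2.0, df=len(scores) - 1)
    return mean, mean - half_width, mean + half_width

print(mean_confidence_interval([0.78, 0.80, 0.79, 0.81, 0.77]))
```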
§ RESULTS AND ANALYSIS
This section presents the experiment’s results and the comparison among the evaluated models described in Section <ref>.
§.§ Cross-Validation Experiment
Table <ref> shows the cross-validation results. It is organized into three parts in the following order: model type, model’s task heads, and model’s performance. Regarding the Baseline models (described in Section <ref>), results show that the MTL training approach suffered negative transfer on nearly all occasions.
The MTL model showed improvement over the STL model only for the Sexism detection task when the model was trained for Sexism and Hate Speech detection and when it was trained on the three tasks.
Apart from that, the STL model achieved superior performance in the rest of the explored combinations. This probably happened because negative transfer hindered the learning process of the MTL model on all the other occasions.
According to our results, the TA mechanisms worked well to diminish negative transfer.
The MTL-TAI model, equipped with the TAI mechanism, and the MTL-TE model, equipped with the TE mechanism, outperformed the classic MTL model on all occasions, as shown in Table <ref>. The MTL-TAI and MTL-TE models also surpassed the results obtained by the STL model for the three evaluated tasks. In general, the MTL-TE model performs better than the MTL-TAI model.
§.§ Official Training-Test Split
Table <ref>, following the same organization as Table <ref>, presents the experiment carried out on the three datasets using their respective official training-test split. We see in Table <ref>
that the MTL training was not beneficial for the classic MTL model when addressing the sexism detection task.
The model achieved lower accuracy than the STL model, which we again attribute to the negative transfer phenomenon. Nevertheless, thanks to the TA mechanisms, the MTL-TAI and MTL-TE models mitigated the negative transfer present in the classic MTL training, achieving higher accuracy than the STL model and the EXIST-2021 SOTA (AI-UPV <cit.>).
The MTL training improves the result for Toxic Language detection over the STL baseline in the training-test experiment. In general, the MTL, MTL-TAI, and MTL-TE models achieved similar results, meaning that the level of negative transfer for this task during conventional MTL training was low.
We see in Table <ref> that, for the training-test experiment, the MTL training improved the result for Hate Speech detection. The MTL model obtained a higher F1-macro than the HatEval-2019 SOTA (Atalaya <cit.>) and the STL baseline. The MTL models with the TA mechanisms improved the results even further: they mitigate the negative transfer of the traditional MTL training, and both achieved a higher F1-macro than the conventional MTL model.
§.§ Overall Analysis
Analyzing Tables <ref> and <ref>, we see evidence that the STL model was a competitive baseline against which to compare our TA approach. Indeed, the STL models achieved results close to or better than those of the SOTA models in the training-test experiment. The STL model achieved the same results as the EXIST-2021 SOTA (AI-UPV <cit.>) and comparable results to the DETOXIS-2021 SOTA (SINAI <cit.>). Furthermore, it obtained better results than the HatEval-2019 SOTA (Atalaya <cit.>).
Summarizing the results of the two experiments, the MTL-TA models (MTL-TAI and MTL-TE) outperformed both the STL and the classic MTL models. This shows that our proposed TA approach can mitigate the negative transfer present in conventional MTL training.
§ LIMITATIONS
In this section, we mention the main limitations of our MTL-TA models.
First, the two models depend on a powerful encoder to achieve good performance. This could be a problem for low-resource computation systems that cannot afford deep learning architectures such as Transformers <cit.> for the encoder.
Secondly, dealing with a higher number of tasks means having more task heads, which increases the number of model parameters. Therefore, MTL-TA models will require more computational power to be fine-tuned.
Finally, it remains to be investigated whether fine-tuning with explicit task information reduces the ability of the MTL-TA models to adapt to unseen tasks (e.g., in few-shot learning or with instruction-based prompts).
§ CONCLUSION AND FUTURE WORK
We proposed the TA strategy to address the negative transfer <cit.> problem during MTL training. The proposed method has been translated into two mechanisms: TAI and TE.
The TAI mechanism is the inclusion of the TD information to enrich the input of the MTL model encoder. The TE mechanism is the introduction of the TEB, an extra component that receives the representation generated by the encoder plus a TIV representation. The TD and the TIV provide information regarding the task the MTL model will perform at that precise moment.
The objective of the TAI and TE is to enable the MTL model to construct task-dependent representations for the task heads to diminish negative transfer during MTL training and improve the MTL model performance.
We proposed two MTL models: the MTL-TAI, equipped with the TAI mechanism, and the MTL-TE, which includes the TE mechanism.
Our two experiments show that the TA capability reduces negative transfer during traditional MTL training and improves performance over standard MTL solutions.
We achieved competitive results compared with SOTA for the two proposed MTL-TA models for the addressed tasks: Sexism, Hate Speech, and Toxic Language detection.
In particular, the proposed models set a new SOTA on two public benchmarks: (i) EXIST-2021 <cit.> and (ii) HatEval-2019 <cit.> datasets, demonstrating a general performance improvement of the proposed approach with respect to both the STL and classic MTL model. The TA mechanisms proved to be a valid approach to mitigate the negative transfer <cit.> problem in the MTL training.
This research demonstrated how an MTL approach equipped with TA mechanisms leads to performance improvements in several NLP tasks, and proved feasible in cases with a scarcity of labeled data. In future studies, it would be interesting to deepen the analysis to find out below which amount of labeled samples, or volume of information, it is worth applying MTL rather than STL.
Further analyses regarding the enrichment of the MTL model input with low-level task supervision are also worthwhile. In this scenario, the decoder receives all, or a subset, of the encoder's hidden representations instead of just the last one. It would be interesting to analyze the impact of these different encoder representations in an MTL model.
We also plan to apply MTL with TA to other scenarios, such as sexism identification under the learning with disagreement regime <cit.>, where it is necessary to learn from all the labels provided by the annotators rather than the aggregated gold label. This new paradigm is gaining importance in NLP, especially for tasks where often there is not only one correct label.
Finally, we would like to research unsupervised techniques to improve the suggested models and tackle the same problems (detecting Hate Speech, Toxic Language, and Sexism). For instance, Latent Dirichlet Allocation <cit.>, Self-Organizing Maps <cit.>, and K-Means Clustering <cit.> could be considered.
§ ACKNOWLEDGMENTS
Angel Felipe Magnossão de Paula has received a mobility grant for doctoral
students by the Universitat Politècnica de València.
The work of Paolo Rosso was in the framework of the FairTransNLP-Stereotypes research project
(PID2021-124361OB-C31) on Fairness and Transparency for equitable NLP
applications in social media: Identifying stereotypes and prejudices and
developing equitable systems, funded by MCIN/AEI/10.13039/501100011033 and
by ERDF, EU A way of making Europe. Damiano Spina is the recipient of an
Australian Research Council DECRA Research Fellowship (DE200100064).
http://arxiv.org/abs/2307.02359v1
20230705152053
Transverse $Λ$ polarization in $e^+e^-$ annihilations and in SIDIS processes at the EIC within TMD factorization
Umberto D'Alesio, Leonard Gamberg, Francesco Murgia, Marco Zaccheddu
hep-ph
hep-ph
[email protected]
Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, I-09042 Monserrato (CA), Italy
INFN, Sezione di Cagliari, Cittadella Universitaria, I-09042 Monserrato (CA), Italy
[email protected]
Division of Science, Penn State Berks, Reading, PA 19610, USA
[email protected]
INFN, Sezione di Cagliari, Cittadella Universitaria,
I-09042 Monserrato (CA), Italy
[email protected]
Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, I-09042 Monserrato (CA), Italy
INFN, Sezione di Cagliari, Cittadella Universitaria, I-09042 Monserrato (CA), Italy
We present a phenomenological study on the role of charm contribution and SU(2) isospin symmetry in the extraction of the Λ polarizing fragmentation functions from e^+e^- →Λ^↑ (Λ̅^↑) h + X annihilation processes. We adopt the well-established transverse-momentum-dependent factorization formalism, within the Collins-Soper-Sterman evolution scheme at next-to-leading logarithm accuracy, carefully exploiting the role
of the nonperturbative component of the polarizing fragmentation function. We then discuss the impact of these results on the predictions for transverse Λ, Λ̅ polarization in semi-inclusive deep inelastic scattering processes at typical energies of the future Electron-Ion Collider.
Transverse Λ polarization in e^+e^- annihilations
and in SIDIS processes at the EIC within TMD factorization
Marco Zaccheddu
August 1, 2023
=============================================================================================================
§ INTRODUCTION
The study of the fragmentation mechanism of partons into hadrons within the field theoretic framework of quantum chromodynamics (QCD), along with factorization theorems, which connect perturbative parton dynamics to universal hadron fragmentation functions, is
fundamental to unfolding the quark and gluon structure of hadrons.
When one includes also spin and its correlations with intrinsic transverse momentum the information one can extract is much richer and the description is more complete. This can be achieved, for instance, by studying the spontaneous transverse Λ polarization in processes where factorization theorems, in terms of transverse momentum dependent distributions (TMDs), hold. We refer, in particular, to double-hadron production in e^+e^- annihilation and semi-inclusive deep inelastic scattering (SIDIS) processes <cit.>.
These are characterized by the presence of two ordered energy scales, a small one (the transverse momentum unbalance of the two hadrons in e^+e^- processes or the transverse momentum of the final hadron in SIDIS) and a large one, the virtuality of the exchanged photon.
We emphasize that the understanding of the transverse Λ polarization, originally measured in inclusive unpolarized proton-proton and proton-nucleus collisions in the late 70's <cit.>, still represents a challenging problem in hadron physics.
One of the earliest attempts to describe this phenomenon within a phenomenological model was presented in Ref. <cit.> and further extended to SIDIS processes in Ref. <cit.>.
Recently, experimental data collected by the Belle Collaboration <cit.>, for the transverse Λ,Λ̅ polarization in almost back-to-back
two-hadron production in e^+e^- processes, has triggered a renewed interest in the subject matter. Preliminary studies within a simplified TMD model at fixed scale
were discussed in Refs. <cit.>.
Moreover, a series of phenomenological analyses within the TMD factorization framework adopting the Collins-Soper-Sterman (CSS) approach <cit.> has been carried out <cit.>.
The general TMD formalism, following the Lorentz decomposition or the helicity approach, was developed and presented in Refs. <cit.> for e^+e^- processes, and in Refs. <cit.> for SIDIS.
One of the main goals of these phenomenological studies, besides the description of data, is the extraction of the polarizing fragmentation function (pFF) for Λ hyperons, that provides information on the correlations between the intrinsic transverse momentum in the parton-to-hadron fragmentation process and the final hadron polarization. In this respect, this TMD function represents a window towards a deeper understanding of the nonperturbative fragmentation mechanism when also spin-polarization effects are taken into account.
In this paper, which represents a natural extension of Ref. <cit.>, we reanalyze Belle data for the transverse Λ polarization, limiting the study to the associated production case and paying special attention to two issues mentioned in our previous work that we examine here in depth: namely, the role of SU(2) isospin symmetry (see also Refs. <cit.>) and the charm contribution in the fragmentation of Λ hyperons.
We consider three different scenarios, discussing their statistical significance in the data description and the difference in the extracted polarizing fragmentation functions. We then employ these results to give predictions for the same observable in SIDIS processes at the energies and kinematics typical of the future Electron-Ion Collider (EIC). We will show how new measurements could help in disentangling among the different scenarios. The role of intrinsic charm in the proton <cit.> will be also addressed.
This analysis will allow us to check, at the same time, other fundamental issues, like the universality of the TMD fragmentation functions and their QCD evolution with the energy scale.
The paper is organized as follows: in Sec. <ref> we present the formalism and the cross sections for the production of a transversely polarized spin-1/2 hadron in e^+e^- collisions, in association with a light hadron, and in semi-inclusive deep inelastic scattering processes. The main results are then employed in the phenomenology part in Sec. <ref>, where we discuss the role of the charm quark contribution and the issue of SU(2) isospin symmetry in the re-analysis of Belle data <cit.>. Estimates for the transverse Λ/Λ̅ polarization in e^+e^- collisions and in SIDIS processes, at different center of mass energies, are presented with particular focus on how these are influenced by the choice of the pFF parametrization and of the nucleon PDF set. Lastly, in Sec. <ref> we collect our concluding remarks.
§ FORMALISM
In this section, we briefly recall the formalism for the production of a transversely polarized spin-1/2 hadron in e^+e^- annihilation processes, in association with an unpolarized light-hadron, and in semi-inclusive deep inelastic scattering processes. The main equations will be used in the following section to study the production of transversely polarized Λ hyperons in both processes.
§.§ Double-hadron production in e^+e^- processes
We start considering double-hadron production in e^+e^- collisions:
e^+(l_e^+) e^-(l_e^-)→ h_1(P_1,S_1) h_2(P_2) +X ,
where h_1 is a spin-1/2 hadron, with momentum P_1, spin-polarization vector S_1 and mass M_1, while h_2 is a light unpolarized hadron with momentum P_2 (we will neglect its mass), and they are produced almost back-to-back in the center-of-mass frame of the incoming leptons. For more details we refer the reader to Refs. <cit.>.
In Fig. <ref> we show the kinematics of the process in the hadron-frame configuration, where we fix the momentum of the second hadron, h_2, along the ẑ_L axis, while the first one, h_1, moving in the opposite hemisphere, has a small transverse momentum P_1T with respect to the second hadron direction.
From the theoretical point of view, it is however more convenient to adopt a different frame, where the two hadrons are exactly back to back along a new ẑ axis, and the hadron transverse unbalance (P_1T) is now carried by the virtual photon. In this frame, the differential cross section can be expressed, neglecting terms not relevant to the present study, as <cit.>
dσ^e^+e^-→ h_1(S_1) h_2 X/2 dy dz_h_1dz_h_2d^2q_T = σ^e^+e^-_0[ F_UU - |S_1T|sin(ϕ_1 - ϕ_S_1) F^sin(ϕ_1 - ϕ_S_1)_TU + ⋯] ,
where ϕ_S_1 is the azimuthal angle of the spin of the hadron h_1. Here q_T is the transverse momentum of the virtual photon (of momentum q), related to the transverse momentum of the hadron h_1 as P_1T = - z_1 q_T, where z_1 is its light-cone momentum fraction, defined for both hadrons as
z_1 = P^-_1/p_q^-, z_2 = P^+_2/p_q̅^+ ,
where p_q and p_q̅ are the four-momenta of the quark and the antiquark fragmenting into the hadron h_1 and h_2, carrying a transverse momentum k_⊥ and p_⊥ with respect to the parent quark momenta, respectively.
The two scaling variables in Eq. (<ref>), z_h_1, z_h_2, are the usual invariants (energy fractions), related to the light-cone momentum fractions as
z_h = 2P_h· q/ Q^2 = 2E_h/Q≃ z ( 1 + M^2_h/ z^2 Q^2) ,
where Q is the center-of-mass energy of the process, Q^2 = q^2,
and where in the last relation we have neglected terms of the order 𝒪(k_⊥^2/(zQ)^2). Another scaling variable, usually adopted in phenomenological analyses, is the hadron momentum fraction
z_p =2|P_h|/Q≃ z ( 1 - M^2_h/ z^2 Q^2) .
Notice that since for the light hadron h_2 we neglect its mass, in the following we will use z_2= z_h_2= z_p_2, within this approximation.
The remaining variable is the fraction y=P_2· l_e^+ /P_2· q, related to the polar angle θ in the hadron frame (see Fig. <ref>). Lastly we have
σ^e^+e^-_0 = 3πα^2/Q^2[y^2 + (1-y)^2] .
In Eq. (<ref>), the F terms are convolutions of two fragmentation functions, where the subscripts denote the polarization states of, respectively, the first and the second hadron (U = unpolarized, T = transversely polarized).
These have the following expressions <cit.>:
F_UU = z_p_1^2z_p_2^2ℋ^(e^+e^-)(Q) ℱ[D_1 D̅_1] ,
F^sin(ϕ_1 - ϕ_S_1)_TU = z_p_1^2z_p_2^2ℋ^(e^+e^-)(Q) ℱ[ĥ·k_T/M_1D^⊥_1TD̅_1] ,
where ℋ^(e^+e^-)(Q) is the hard scattering part
for the massless on-shell process e^+e^-→ q q̅ (normalized to one at leading order), at the center-of-mass energy Q,
D_1(z,k_) is the unpolarized TMD fragmentation function (FF) and D^⊥_1T(z,k_) is the polarizing FF, with
ĥ= P_1T/|P_1T| and k_T = -k_/z_p_1 (and similarly p_T = -p_/z_p_2), where k_T (p_T) is the transverse momentum of the quark (antiquark) with respect to the hadron h_1 (h_2) direction of motion. The ℱ are proper convolutions of TMD-FFs, defined as follows:
ℱ[ω D D̅] = ∑_q e^2_q ∫ d^2k_T d^2p_T δ^(2)(k_T + p_T - q_T) ω(k_T, p_T) D(z_1,k_) D̅(z_2,p_) ,
where ω is a suitable weight factor depending on the two transverse momenta and D and D̅ are the TMD-FFs.
In order to employ the Collins-Soper-Sterman (CSS) evolution equations, it is useful to write the convolutions in the conjugate b_T-space:
F_UU = z_p_1^2z_p_2^2ℬ_0 [D_1 D̅_1] =
z_p_1^2z_p_2^2∑_q e^2_q ∫d b_T/2 π b_T J_0(b_T q_T) D_1(z_1,b_T) D̅_1(z_2,b_T) ,
F^sin(ϕ_1 - ϕ_S_1)_TU =
M_1 z_p_1^2z_p_2^2 ℬ_1 [D^⊥ (1)_1TD̅_1]
=
M_1 z_p_1^2z_p_2^2∑_q e^2_q ∫d b_T/2 π b^2_T J_1(b_T q_T) D^⊥ (1)_1T(z_1,b_T) D̅_1(z_2,b_T) ,
where
D_1(z_1,b_T) is the Fourier transform of the unpolarized FF, D^⊥ (1)_1T(z_1,b_T) is the first moment of the polarizing fragmentation function in b_T-space, and J_i is the Bessel function of the first kind of i-th order. Notice that we have already used ℋ^(e^+e^-)(Q)=1 and all light-cone momentum fractions have to be properly understood in terms of the corresponding energy fractions, z_h.
After solving the CSS evolution equations, as discussed in Refs. <cit.>, the convolutions can be written again as:
ℬ_0 [D_1 D̅_1] = 1/z^2_1 z^2_2∑_q e^2_q∫d b_T/2 π b_T J_0(b_T q_T) d_h_1/q(z_1; μ̅_b) d_h_2/q̅(z_2; μ̅_b)
× M_D_1(b_c(b_T),z_1) M_D_2(b_c(b_T),z_2)
e^-g_K(b_c(b_T);b_max)ln(Q^2 z_1 z_2/M_1 M_2)- S_ pert(b_*;μ̅_b) ,
ℬ_1 [D^⊥ (1)_1TD̅_1]
= 1/z^2_1 z^2_2∑_q e^2_q∫d b_T/2 π b^2_T J_1(b_T q_T) D^⊥ (1)_1T (z_1;μ̅_b) d_h_2/q̅(z_2; μ̅_b)
× M^⊥_D_1(b_c(b_T),z_1) M_D_2(b_c(b_T),z_2)e^-g_K(b_c(b_T);b_max)ln(Q^2 z_1 z_2/M_1 M_2)- S_ pert(b_*;μ̅_b) ,
where the d_h/j's are the p_⊥-integrated unpolarized fragmentation functions. M_D_i and M^⊥_D_1 are, respectively, the nonperturbative functions of the unpolarized and of the polarizing FFs, and g_K is the nonperturbative function of the Collins-Soper Kernel. All other quantities appearing in the above equations, necessary to properly separate the perturbative from the nonperturbative region, are defined and discussed in detail in Ref. <cit.>. See also below.
It is worth recalling that Eqs. (<ref>) and (<ref>) are obtained by using the leading term of the operator product expansions (OPEs), for small-b_T values, of the TMD distribution functions <cit.>.
Lastly, S_ pert is the perturbative Sudakov factor, defined as (see also Appendix <ref> for more details):
S_ pert(b_*;μ̅_b)=-K(b_*;μ̅_b) lnQ^2/μ̅_b^2 - ∫^Q_μ̅_bdμ'/μ' [ 2γ_D(g(μ');1) - γ_K(g(μ')) lnQ^2/μ'^2] .
The expression of the transverse polarization for the hadron h_1 is defined as:
P^h_1_n = dσ^↑ -dσ^↓/dσ^↑ +dσ^↓ = dσ^↑ -dσ^↓/dσ^ unp ,
where dσ^↑(↓) is the differential cross section, Eq. (<ref>), for the production of a transversely polarized hadron along the up(down) direction (n̂) with respect to the production plane,[Notice that in such a configuration sin(ϕ_1-ϕ_S_1)=-1.] and dσ^ unp is the unpolarized cross section.
Finally, we can write the q_T-integrated transverse polarization as the ratio of the two convolutions in b_T-space <cit.>:
P^h_1_n(z_h_1,z_h_2) = ∫ d^2q_T F^sin(ϕ_1 - ϕ_S_1)_TU/∫ d^2q_T F_UU = M_1∫ dq_T q_T dϕ_1 ℬ_1 [D^⊥ (1)_1TD̅_1]/∫ dq_T q_T dϕ_1 ℬ_0 [D_1 D̅_1] .
The integration over the azimuthal angle, ϕ_1, is trivial.
Moreover, since the only terms inside the convolutions depending on q_T are the Bessel functions, we can separately integrate them, obtaining
∫^q_T_max_0 dq_T q_T J_0(b_T q_T) = q_T_max/b_TJ_1(b_T q_T_ max) ,
∫^q_T_max_0 dq_T q_T J_1(b_T q_T) =π q_T_max/2 b_T{J_1(b_T q_T_max)H_0(b_T q_T_ max) - J_0(b_T q_T_max)H_1(b_T q_T_max) } ,
where H_0,1 are the Struve functions of order zero and one, respectively. Notice that in the above integration we have introduced a maximum value q_T_max, which has to fulfil the condition q_T_max≪ Q in order to guarantee the validity of TMD factorization <cit.>.
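The two analytic results above are straightforward to cross-check numerically; the short sketch below does so with SciPy, using placeholder values for b_T and q_T_max (the latter set to 0.27 Q with Q = 10.58 GeV, as adopted later in the fit).

```python
import numpy as np
from scipy import integrate, special

def j0_moment(bT, qT_max):
    """Closed form of the integral of qT*J0(bT*qT) from 0 to qT_max."""
    return qT_max / bT * special.j1(bT * qT_max)

def j1_moment(bT, qT_max):
    """Closed form of the integral of qT*J1(bT*qT), involving the Struve functions H0, H1."""
    x = bT * qT_max
    return (np.pi * qT_max / (2.0 * bT)) * (
        special.j1(x) * special.struve(0, x) - special.j0(x) * special.struve(1, x))

bT, qT_max = 1.3, 0.27 * 10.58                       # GeV^-1 and GeV, placeholder values
num0, _ = integrate.quad(lambda qT: qT * special.j0(bT * qT), 0.0, qT_max)
num1, _ = integrate.quad(lambda qT: qT * special.j1(bT * qT), 0.0, qT_max)
print(np.isclose(num0, j0_moment(bT, qT_max)), np.isclose(num1, j1_moment(bT, qT_max)))
```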
§.§ Semi-inclusive Deep Inelastic Scattering
Here, we present the formal expressions for the production of a transversely polarized massive hadron h_1 in unpolarized SIDIS processes:
e(l) N(P)→ e(l') h_1(P_1,S_1) +X ,
where N is an unpolarized nucleon with momentum P.
In Fig. <ref> we show the kinematics of the process in the γ^*N c.m. frame, where the virtual photon, with momentum q = l- l' (virtuality q^2=-Q^2), and the nucleon collide along the ẑ_L axis, while the hadron h_1 moves towards the negative ẑ_L direction with transverse momentum P_1T with respect to the γ-N direction. Notice that at variance with the configuration adopted in the “Trento Conventions” paper <cit.>,
the photon moves along -ẑ_L.
As for the case of the e^+e^- annihilation process, it is more convenient to adopt a frame where the nucleon and the hadron h_1 move back to back along a new ẑ axis, and the hadron transverse unbalance is again carried by the virtual photon. In this frame, the differential cross section, keeping only the terms relevant to the present study, can be written as:
dσ^e N → e h_1(S_1) X/ dy dx_B dz_hd^2q_T = σ^ DIS_0[ F_UU - |S_1T|sin(ϕ_1 - ϕ_S_1) F^sin(ϕ_1 - ϕ_S_1)_UT + …] ,
with
x_B=Q^2/2 P· q=x , y=P · q/ P· l , z_h=P · P_1/ P· q=z = z_p ,
where x = p^+/P^+ is the light-cone momentum fraction of the nucleon momentum carried by the parton with momentum p, and z is the light-cone momentum fraction, defined in Eq. (<ref>), for the final-state hadron. Notice that the last equalities are exactly true when neglecting the nucleon and the hadron masses, together with terms of order 𝒪 (k_⊥^2/Q^2).
It can be shown that if we keep the final hadron mass (relevant in some kinematical regions)[The nucleon mass can be safely neglected in our study.] we have
z_p ≃ z_h ( 1 - M^2_1/z^2_h Q^2x_B/1-x_B) .
Another set of invariants adopted in SIDIS, useful from the phenomenological point of view, are the following:
s = (P +l)^2 , Q^2 =- q^2= x_Bys , (P + q)^2 = W^2 = 1 - x_B/x_BQ^2 ,
where s is the total c.m. energy squared
and W is the c.m. energy of the photon-nucleon system.
In the lepton-nucleon c.m. frame they can be expressed as:
s = 4 E_N E_e , Q^2= 4 x_B y E_N E_e ,
where E_N,e are respectively the nucleon and electron beam energy. Lastly, for the elementary cross section, we have <cit.>:
σ^ DIS_0 = 2πα^2/Q^21 + (1-y)^2/y .
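For orientation, the kinematic relations above can be evaluated numerically as in the following sketch; the beam energies used here are purely illustrative and not necessarily those of the configurations considered later.

```python
import numpy as np

def sidis_kinematics(E_N, E_e, x_B, y):
    """c.m. energy, photon virtuality and photon-nucleon energy (masses neglected), in GeV."""
    s = 4.0 * E_N * E_e                  # s = (P + l)^2 in the lepton-nucleon c.m. frame
    Q2 = x_B * y * s                     # Q^2 = x_B y s
    W2 = (1.0 - x_B) / x_B * Q2          # W^2 = (1 - x_B)/x_B * Q^2
    return np.sqrt(s), np.sqrt(Q2), np.sqrt(W2)

print(sidis_kinematics(E_N=100.0, E_e=10.0, x_B=0.1, y=0.4))   # ~ (63.2, 12.6, 37.9) GeV
```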
In Eq. (<ref>) the F terms are now convolutions of a TMD-PDF and a TMD-FF, where again the subscripts denote the polarization states of the initial-state nucleon and the final-state hadron. These are defined as follows <cit.>:
F_UU = z_p^2 ℋ^( DIS)(Q) ℱ[f_1 D_1] ,
F^sin(ϕ_1 - ϕ_S_1)_UT = z_p^2 ℋ^( DIS)(Q) ℱ[ĥ·k_T/M_1f_1 D^⊥_1T] ,
where f_1(x,p_⊥) is the TMD unpolarized parton distribution function and ℋ^( DIS)(Q) is the hard scattering part for the massless on-shell process e q→ e q, at the center-of-mass energy Q. Once again at LO this last quantity is normalized to one and will be dropped in the following.
The convolutions can be written in the conjugate b_T-space as Fourier transforms:
F_UU = z_p^2 ℬ_0 [f_1 D_1]= z_p^2 ∑_q e^2_q ∫d b_T/(2 π) b_T J_0(b_T q_T) f_1(x,b_T) D_1(z,b_T) ,
F_UT^sin(ϕ_1 - ϕ_S_1) = M_1 z_p^2 ℬ_1 [ f_1 D^⊥ (1)_1T]
= M_1 z_p^2 ∑_q e^2_q ∫d b_T/2 π b^2_T J_1(b_T q_T) f_1(x,b_T)D^⊥ (1)_1T(z,b_T) ,
and, after solving the CSS evolution equations, they can be expressed in their full form as:
ℬ_0 [f_1 D_1] = 1/z^2∑_q e^2_q∫d b_T/(2 π) b_T J_0(b_T q_T) f_q/N(x; μ̅_b) d_h/q(z; μ̅_b)
× M_f_1(b_c(b_T),x) M_D_h(b_c(b_T),z)
e^-g_K(b_c(b_T);b_max)ln(Q^2 z/x M_P M_h)- S_ pert(b_*;μ̅_b) ,
ℬ_1 [ f_1 D^⊥ (1)_1T] = 1/z^2∑_q e^2_q∫d b_T/(2 π) b^2_T J_1(b_T q_T) f_q/N(x; μ̅_b) D^⊥ (1)_1T,q (z;μ̅_b)
× M_f_1(b_c(b_T),x) M^⊥_D_1(b_c(b_T),z)e^-g_K(b_c(b_T);b_max)ln(Q^2 z/x M_P M_h)- S_ pert(b_*;μ̅_b) ,
where f_q/N is the integrated unpolarized parton distribution function, and M_f_1 is the nonperturbative component of the unpolarized PDF.
All the remaining terms that appear in Eq. (<ref>) and (<ref>) are the same defined in the previous section.
The operative expression of the transverse polarization can be obtained from Eq. (<ref>), where now dσ^↑(↓) is the differential cross section for a transversely polarized hadron along the up(down) n̂ direction, with respect to the production plane, in Eq. (<ref>).
For nucleons, we can directly write the transverse polarization of the final state hadron and the q_T-integrated one as the ratio of the two convolutions in b_T-space:
P^h_1_n(x_B,z_h,q_T) = F^sin(ϕ_1 - ϕ_S_1)_UT/ F_UU = M_1∫ dϕ_1 ℬ_1 [ f_1 D^⊥ (1)_1T]/∫ dϕ_1 ℬ_0 [f_1 D_1] ,
P^h_1_n(x_B,z_h) = ∫ d^2q_T F^sin(ϕ_1 - ϕ_S_1)_UT/∫ d^2q_T F_UU = M_1∫ dq_T q_T dϕ_1 ℬ_1 [ f_1 D^⊥ (1)_1T]/∫ dq_T q_T dϕ_1 ℬ_0 [f_1 D_1] .
To compute the cross section for the scattering off nuclei, we adopt a simple approach taking the incoherent sum of the contribution of every nucleon that composes the nucleus, neglecting nuclear effects. That is, for the scattering off a nucleus with A nucleons and Z protons we use:
dσ^e A → e h_1(S_1) X = Z dσ^e p → e h_1(S_1) X + (A-Z) dσ^e n → e h_1(S_1) X .
§ PHENOMENOLOGY
In this section, after recalling the main results of the analysis of Belle data <cit.> presented in Ref. <cit.>, we will focus more extensively on the role of the charm contribution and of the SU(2) isospin symmetry.
Then we will give predictions for the transverse Λ polarization in e^+e^- collisions, for different values of the c.m. energy. Finally, we will present estimates for the same observable in semi-inclusive deep inelastic scattering processes, for different values of the lepton and nucleon beam energies.
§.§ Two-hadron production data fit: charm and SU(2) isospin symmetry
We begin by giving the setup for the phenomenological analysis of Belle data. This is mainly based on our previous work <cit.>.
Here we consider only the Belle data set for the polarization of Λ/Λ̅ hyperons produced in association with a light hadron, π^± or K^±, measured at √(s) = 10.58 GeV. The 128 data points are given as a function of z_Λ and z_π/K, the energy fractions of the Λ/Λ̅ and π/K particles. For the current analysis, we impose a cut on large values of the light-hadron energy fractions, z_π/K<0.5, keeping only 96 data points, as discussed and motivated in Ref. <cit.>. We will come back to this point below.
We will use the following expression to parametrize the z dependence of the first transverse moment of the polarizing Λ FF, D^⊥ (1)_1T, Λ/q:
D^⊥ (1)_1T, Λ/q(z;μ_b)=𝒩^ p_q(z)
d_Λ/q(z;μ_b) ,
with, as adopted and motivated in Ref. <cit.>, q = u, d, s, u̅, d̅, s̅, and where
𝒩^ p_q(z) (the superscript here refers to the polarizing FF) is parametrized as:
𝒩^ p_q(z) = N_q z^a_q(1-z)^b_q(a_q +b_q )^(a_q +b_q )/a_q^a_qb_q^b_q .
In Eq. (<ref>), d_Λ/q is the collinear unpolarized Λ fragmentation function for which we employ
the AKK08 set <cit.>.
This parametrization is given for Λ + Λ̅ and adopts the longitudinal momentum fraction, z_p, as scaling variable. In order to separate the two contributions we assume
d_Λ̅/q(z_p) = d_Λ/q̅(z_p) = (1- z_p) d_Λ/q(z_p) .
This is a common way to take into account the expected difference between the quark and antiquark FFs, with a suppressed sea at large z_p as compared to the valence component. Other similar choices have very little impact on the fit.
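The z-dependence of the parametrization above, and the quark-antiquark splitting of the collinear FF, can be coded compactly as in the sketch below; the normalization factor makes the shape z^a (1-z)^b peak at one, so that |N_q| bounds the size of 𝒩^p_q(z). The parameter values used here are illustrative, not the fitted ones.

```python
import numpy as np

def N_p(z, N_q, a_q, b_q):
    """z-shape of the pFF first moment: N_q z^a (1-z)^b, normalized to peak at N_q."""
    norm = (a_q + b_q) ** (a_q + b_q) / (a_q ** a_q * b_q ** b_q)
    return N_q * z ** a_q * (1.0 - z) ** b_q * norm

def d_lambda_bar(z_p, d_lambda_q):
    """Antiquark (Lambda-bar) collinear FF obtained from the quark one via a (1 - z_p) suppression."""
    return (1.0 - z_p) * d_lambda_q

z = np.linspace(0.2, 0.9, 8)
print(N_p(z, N_q=-0.3, a_q=1.5, b_q=2.0))            # illustrative parameters only
```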
Concerning the nonperturbative function M^_D, Λ we employ the Gaussian model:
M^_D, Λ(b_T,z) = exp(-⟨ p_⊥^2 ⟩_p b^2_T/4 z^2_p) ,
where ⟨ p_⊥^2 ⟩_p is the Gaussian width, a free parameter that we extract from the fit.
Regarding the collinear FFs of the unpolarized light hadrons, π and K, we adopt the DSS07 set <cit.>, while for M_D we consider the PV17 model <cit.>:
M_D(b_T,z) =g_3 e^-b^2_Tg_3/4z^2 +λ_F/z^2g^2_4(1 -g_4b_T^2/4z^2 )e^-b^2_Tg_4/4z^2/g_3 + λ_F/z^2g^2_4 ,
where
g_3,4 = N_3,4(z^β+δ)(1-z)^γ/(ẑ^β+δ)(1-ẑ)^γ
ẑ = 0.5 ; N_3= 0.21 GeV^2 ; N_4 = 0.13 GeV^2 ;
β = 1.65 ; δ = 2.28 ; γ=0.14 ; λ_F= 5.50 GeV^-2 .
For the g_K function, we use the one extracted in Ref. <cit.>:
g_K(b_T;b_max) = g_2 b^2_T/2 ; g_2 = 0.13 GeV^2 .
For what concerns the Λ unpolarized FF, for M_D we use a Power-Law model, see Refs. <cit.>:
M_D(b_T,z,p,m) = 2^2-p/Γ(p-1) (b_T m/z_p)^p-1K_p-1(b_T m/z_p) ,
with p=2 and m=1 GeV.
Notice that in the above equations all conversions among the different scaling variables (z,z_p,z_h) involved are properly taken into account.
In Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) we use the following definition for the μ̅_b variable:
μ̅_b = C_1/b_*(b_T) ,
where C_1 = 2e^-γ_E (with γ_E being the Euler-Mascheroni constant), and the b_* prescription of Ref. <cit.>:
b_* ≡ b_*(b_T;b_ min,b_ max) = b_ max(1-e^-b_T^4/b^4_ max/1-e^-b_T^4/b^4_ min)^1/4 .
Moreover, we adopt
b_c(b_T) = √(b^2_T + b^2_min) ,
with b_min = 2e^-γ_E/Q and b_max = 0.6 GeV^-1 where, in this analysis, Q = 10.58 GeV.
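For completeness, the b_*-prescription and the associated scales can be implemented in a few lines, as sketched below with the values of b_max and Q quoted above.

```python
import numpy as np

GAMMA_E = 0.5772156649015329
C1 = 2.0 * np.exp(-GAMMA_E)

def b_star(bT, Q, b_max=0.6):
    """b_* prescription: interpolates between b_min = C1/Q at small bT and b_max at large bT."""
    b_min = C1 / Q
    num = 1.0 - np.exp(-(bT / b_max) ** 4)
    den = 1.0 - np.exp(-(bT / b_min) ** 4)
    return b_max * (num / den) ** 0.25

def mu_bar_b(bT, Q, b_max=0.6):
    """Scale entering the collinear functions and the Sudakov factor."""
    return C1 / b_star(bT, Q, b_max)

def b_c(bT, Q):
    """Regularized bT entering the nonperturbative functions."""
    return np.sqrt(bT ** 2 + (C1 / Q) ** 2)

bT = np.linspace(0.05, 3.0, 5)
print(b_star(bT, Q=10.58), mu_bar_b(bT, Q=10.58), b_c(bT, Q=10.58))
```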
Since our present goal is a phenomenological analysis at NLL accuracy, for the perturbative Sudakov factor in Eq. (<ref>) we use α_s at LO, the anomalous dimension γ_K at the second order, and the Collins-Soper kernel K and γ_D at the first order (see Appendix <ref> for the explicit expressions of the Sudakov factor and of the anomalous dimensions). A complete N^2LL extraction could be achieved only by adopting the coefficient functions, in the OPE, at the next order.
Lastly, for the integration in Eqs. (<ref>) and (<ref>), we use q_T_max = 0.27 Q. This specific value is chosen on the basis of the results obtained in Ref. <cit.> (Fig. 7), where, as shown for this particular choice of nonperturbative functions, the χ^2_ dof reaches its minimum.
Concerning the phenomenological analysis and the extraction of the polarizing FFs from Belle data, we consider three different scenarios, exploiting the role of the charm quark contribution and the SU(2) isospin symmetry:
* Scenario 1. Here we do not include the charm contribution in the unpolarized cross section and we do not impose the SU(2) isospin symmetry. We then extract different Λ pFFs for the u, d, s quarks and a single pFF for the sea (u̅=d̅=s̅) antiquarks. As discussed in our previous analysis <cit.>, the optimal choice turns out to be an eight-parameter fit: N_u, N_d, N_s, N_ sea, a_s, b_u, b_ sea and ⟨ p_⊥^2⟩_ p.
* Scenario 2. We include the charm contribution in the unpolarized cross section, but we still do not impose the SU(2) isospin symmetry.
We continue to extract different Λ pFFs for the u,d, s quarks and a single pFF for the sea (u̅=d̅=s̅) antiquarks, as in the first scenario. Here, we need to include an extra parameter resulting in a nine-parameter fit: N_u, N_d, N_s, N_ sea, a_d, a_s, b_u, b_ sea and ⟨ p_⊥^2⟩_ p.
* Scenario 3. We include the charm contribution in the unpolarized cross section and impose SU(2) isospin symmetry for the u, d quark pFFs,
while still adopting different pFFs for the s and s̅ quarks. Notice that the AKK08 FF set allows for a slight violation of the SU(2) symmetry; therefore, even imposing 𝒩_u^ p = 𝒩_d^ p and 𝒩_u̅^ p = 𝒩_d̅^ p, the extracted pFFs will be still slightly different, see below. In such a case the nine free parameters are: N_u,d, N_u̅,d̅, N_s, N_s̅, a_u,d, a_s, b_u,d, b_s̅ and ⟨ p_⊥^2⟩_ p. Notice that the inclusion of a further parameter for the sea pFFs, namely b_u̅,d̅, does not improve the quality of the fit.
As already discussed in our previous analyses, the imposition of the SU(2) symmetry alone within a three-flavor scheme would lead to a very poor quality of the fit.
The best-fit parameters extracted for the first moment of the pFFs are given in Tab. <ref>, together with the χ^2_ dofs for each scenario, while in Fig. <ref> we show the corresponding estimates of the transverse Λ, Λ̅ polarizations, produced in association with a light-hadron, compared against Belle data <cit.>.
Some comments are in order here: Comparing the parameter values obtained within the first and second scenario, we see a significant difference in their magnitudes, somehow due to the inclusion of the charm quark contribution in the second one. However, as already shown in our previous works <cit.>, in both cases only the up pFF is positive, while the remaining pFFs are all negative. On the contrary, within the third scenario, we observe that both the up and down pFF are positive, having an opposite sign with respect to the anti-up and anti-down pFFs. The strange and anti-strange pFFs come out still negative. The main point in this comparison is that if we allow for different normalization factors for the up and down pFFs (Sc. 1 and Sc. 2), they come out opposite in sign, leading to a strong violation of the SU(2) symmetry. And this happens even if we allow for independent normalization factors for the sea contributions. In other words, only imposing N_d=N_u we can restore, at least approximately with this set of unpolarized FFs, the symmetry.
Despite these differences, within all scenarios we obtain similar sizes for the Gaussian width.
The first k_⊥-moments of the polarizing FFs are shown in Fig. <ref> for scenarios 1 and 2, and in Fig. <ref> for scenario 3.
In scenarios 1 and 2 the first moments are all compatible, at least within the uncertainty bands, with the exception of the strange pFF. When we move to the third scenario the up pFF comes out still compatible with the results in the other scenarios, with the strange pFF somehow in between. The most interesting finding is that since in this scenario the down pFF is positive (SU(2) constrained), the negative sea contributions are larger in size.
It is important to stress here that in the extraction of the first moment of the polarizing FFs we do not impose any positivity bound, which, in principle, could prevent a proper sampling of the parameter space. On the other hand, we have checked, a posteriori, that the bound is fulfilled in all scenarios considered.
Moving to the comparison with data, we can generally say that all three scenarios are able to describe the Λπ^±, Λ̅π^±, Λ K^- and Λ̅ K^+ polarization data reasonably, or even quite, well. However, as already pointed out in our first works, where we did <cit.> or did not <cit.> employ the full TMD machinery, within scenario 1 one cannot describe, at variance with the Λπ case, the Λ K^+ and Λ̅K^- data with z_K>0.5.
Quite interestingly, when we include the charm contribution, imposing or not the SU(2) isospin symmetry, we can still obtain similarly good fits together with a very good description of these data points (even if they are not included in the fit), see Figs. <ref>c and <ref>d, lower panels.
This result, focusing on Λ K^+ for simplicity, can be understood as follows: in scenarios 2 and 3 the inclusion of the charm contribution in the denominator, with non-negligible charm FFs both for K^+'s and Λ's, requires pFFs that are larger in size in the fit. Moreover, since this extra piece in the denominator happens to be a decreasing function of z_K, the polarization eventually increases in size with z_K.
From the present study, it appears that the inclusion of the charm contribution, at least in the unpolarized cross section, must be considered necessary for the analysis of Belle data. Several attempts to include this contribution also in the numerator of the transverse polarization (that is, parametrizing pFFs also for charm quarks) have been carried out, but no significant improvement in the χ^2_ dof value or in the description of the data has been found.
Similar conclusions, even if on more qualitative grounds since they do not provide any χ^2 value or uncertainty band, have been obtained in Ref. <cit.>. There, by including the charm contribution also for the polarizing FFs (resulting in a 20-parameter fit), they show that Belle data can be described reasonably well even without any isospin symmetry violation.
In this respect, we agree that the issue of SU(2) symmetry has to be treated with care and that it cannot be settled by analysing only the data on the transverse polarization of Λ/Λ̅ produced in e^+e^- processes. More experimental information is therefore certainly needed.
§.§ Predictions for the transverse Λ polarization in e^+e^- collisions at different energies
Here we give some predictions at different energies, focusing on Λ-K production, with the aim of looking for possible significant differences among the three scenarios.
In Fig. <ref> we show the estimates for the transverse polarization of Λ's produced with K^± mesons, at different energies, namely, 8.48 GeV (left panel) and 12.58 GeV (right panel). Notice that in the first case we cannot have the z_Λ=0.25 bin for kinematical reasons.
At both energies, only the first z_Λ bins show some discrepancies at large z_K values between the predictions obtained within scenarios 2 and 3. On the other hand, for higher values of z_Λ all predictions become very similar within the uncertainties. This behavior, which persists at higher energies, could prevent distinguishing between the two scenarios in future e^+e^- measurements.
§.§ Predictions for the transverse Λ polarization in SIDIS
In this section, by using Eqs. (<ref>), (<ref>), (<ref>) and (<ref>), we present the predictions for the transverse Λ polarization in unpolarized SIDIS electron-proton and electron-deuterium collisions, for different values of their beam energies, and considering the three previously presented scenarios.
A similar analysis, showing also estimates obtained from our previous extraction <cit.>, has been presented in Ref. <cit.>. In that paper the transverse Λ polarization alone is considered in detail, including the charm contribution and imposing SU(2) symmetry (see Ref. <cit.> and our comments above). In this respect, our predictions are indeed in qualitative agreement with theirs. On the other hand, they do not show any result for the Λ̅ case, which, as we will discuss below, represents a much more powerful tool to discriminate among the different scenarios. It is also worth noticing that a comprehensive phenomenological impact study of the transverse Λ polarization at the future EIC, even if limited to scenario 1 and at LO accuracy, has been carried out in Ref. <cit.>. In that work EIC pseudodata are included to reweight the parametrization of the polarizing FFs as extracted from Belle e^+e^- data, leading to a significant reduction in the theoretical uncertainties.
Concerning the present analysis, for the Λ hyperon we employ the unpolarized nonperturbative function in Eq. (<ref>), and the polarizing FF first moment and nonperturbative function in Eqs. (<ref>), (<ref>) and (<ref>), adopting the parameters given in Tab. <ref>. As for the proton PDFs, we use the CT14 NNLO set <cit.> for the unpolarized ones, and for the nonperturbative function the one extracted in Ref. <cit.>, which has the following form:
M_f_1(b_T,x) = (1/2π) e^-g_1 b_T^2/4 (1 - λ g^2_1/(1+λ g_1) · b_T^2/4) ,
where
g_1 = N_1 (1-x)^α x^σ / ((1-x̂)^α x̂^σ) ,
x̂ = 0.1 ,  N_1 = 0.28 GeV^2 ,
α = 2.95 ,  σ = 0.173 ,  λ = 0.86 GeV^-2 .
Regarding the neutron PDF, we use the same proton nonperturbative function and unpolarized PDF set but with the following substitution for the up and down quarks:
u_n = d_p , d_n = u_p , u̅_n = d̅_p , d̅_n = u̅_p .
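For orientation, a minimal numerical sketch of this nonperturbative input is given below (Python). It simply evaluates g_1(x) and M_f_1(b_T,x) with the parameters quoted above; the evaluation point and the wrapper around a generic proton-PDF call are purely illustrative assumptions, not part of the fit.

```python
import numpy as np

# Parameters quoted above for the proton unpolarized nonperturbative function
x_hat, N1  = 0.1, 0.28       # N1 in GeV^2
alpha, sig = 2.95, 0.173
lam        = 0.86            # GeV^-2

def g1(x):
    """Collinear x dependence of the Gaussian width g_1(x), as given above."""
    return N1 * (1.0 - x)**alpha * x**sig / ((1.0 - x_hat)**alpha * x_hat**sig)

def M_f1(bT, x):
    """Nonperturbative factor M_f1(b_T, x) of the unpolarized proton TMD PDF."""
    g = g1(x)
    return (1.0 / (2.0 * np.pi)) * np.exp(-g * bT**2 / 4.0) \
           * (1.0 - lam * g**2 / (1.0 + lam * g) * bT**2 / 4.0)

# Neutron collinear PDFs from isospin symmetry: swap up <-> down (and ubar <-> dbar).
# 'proton_pdf' is a hypothetical callable standing in for the CT14 NNLO interface.
def neutron_pdf(proton_pdf, flavor, x, Q2):
    swap = {'u': 'd', 'd': 'u', 'ubar': 'dbar', 'dbar': 'ubar'}
    return proton_pdf(swap.get(flavor, flavor), x, Q2)

print(M_f1(bT=1.0, x=0.1))   # illustrative evaluation at b_T = 1 GeV^-1, x = 0.1
```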
In the following we show estimates for the transverse Λ polarization integrated over its transverse momentum (or, more precisely, over q_T), using Eq. (<ref>).
We will consider two different values of the c.m. energy √(s_eN), Eq. (<ref>), reported in Tab. <ref>, corresponding to various combinations of nucleon (electron) beam energies E_N (E_e). We will keep y fixed at 0.4, and explore different values of x_B and z_Λ.
In Fig. <ref> we show the estimates for the q_T-integrated transverse Λ/Λ̅ polarization in electron-proton (deuterium) collisions, see Eqs. (<ref>) and (<ref>), for different values of √(s_eN), x_B and z_Λ, in the three scenarios.
Since we adopt a fixed ratio q_T_ max/Q=0.27, by exploring large values of Q, up to 30 GeV in our case, for certain x_B values we enter the region of large q_T (up to 8 GeV).
Firstly, we notice that scenarios 1 and 2 lead to Λ/Λ̅ polarization with similar size and behavior, for the two values of √(s_eN) both for proton and deuteron targets.
We can see how the Λ polarization tends to decrease, becoming negative, as z_Λ increases, while the Λ̅ polarization is always negative. In particular, for √(s_eN) = 28.6 GeV (Fig. <ref>.a and <ref>.c), the polarization has the same pattern and size in each x_B bin, while for greater √(s_eN) values (Fig. <ref>.b and <ref>.d), we have a general reduction in size of the polarization as x_B grows.
As for the third scenario, we can see that the polarization follows a pattern similar to that illustrated for the first and second scenarios, but with some differences. The Λ polarization has a similar or slightly greater size than in the other two scenarios; the most significant difference is found for the Λ̅ polarization, which is much greater in size, reaching values of about 40% for x_B=0.6 and √(s_eN)=28.6 GeV.
Finally we provide a comment on the strong similarities between the Λ̅ polarizations in ep and eD collisions: the reasons can be traced back to the dominant contribution driven by the up and down distribution functions in both targets.
For Λ̅ production these distributions enter
directly convoluted with the polarizing FFs for sea quarks in the numerator and with the unpolarized sea FFs in the denominator.
For Λ production this does not happen, since the up and down parton distributions couple in a different way to the up and down pFFs when one considers a proton or a deuterium target.
At variance with the case of the double-hadron production in e^+e^- collisions, the estimates for the transverse polarization within the second and third scenarios are clearly
separated. Thus, future measurements of transversely polarized Λ/Λ̅ in SIDIS will potentially allow us to gain further insights and to distinguish between the two scenarios.
It is worth noticing that the corresponding
estimates for the transverse polarization as a function of the Λ/Λ̅ transverse momentum, P_1T, are not able to discriminate among the different scenarios.
§.§ Role of intrinsic charm contribution
From the previous discussion it is clear that the charm contribution in the fragmentation process can be relevant for the study of the transverse Λ polarization.
Here, we explore how the employment of collinear PDFs that take into account the presence of an intrinsic charm (IC) component in the proton can play a role in this context.
For this study, we consider again the CT14NNLO set and two recent PDF sets: the CT14NNLO IC set <cit.>, by using the Brodsky-Hoyer-Peterson-Sakai (BHPS) model for the intrinsic charm component, and the NNPDF4.0 NNLO set <cit.>.
In Fig. <ref>, we present a comparison of the estimates of the transverse Λ/Λ̅ polarization in electron-proton (deuterium) scattering at √(s_eN) = 28.6 GeV, obtained using the second scenario parameters for the polarizing FFs and the three PDF sets. We observe that the estimated polarizations obtained using the BHPS model for IC (violet bands) and the NNPDF set (green bands) do not differ significantly from the predictions shown in the previous section (Fig. <ref>) without the IC component (orange bands). This behavior is also present for smaller and greater values of the c.m. energy. For completeness, we have also explored the role of the perturbative charm component (NNPDF set) without finding any significant difference.
On the contrary, when adopting the third scenario parameters, the estimates can vary significantly as x_B increases. As we can see in Fig. <ref>, the transverse Λ̅ polarization estimates obtained with the BHPS model for IC (violet bands) and the NNPDF set (green bands) gradually move away from the predictions without the IC component, both in electron-proton and in electron-deuterium collisions, leading to a smaller polarization size. Notice that this conclusion is valid also when adopting the perturbative NNPDF charm component.
Concerning the Λ polarization, only the estimates with the NNPDF set differ from the other two predictions. In fact, both the bands without the IC component and the ones with the BHPS model decrease to zero as z_Λ increases, while the NNPDF predictions become negative and reach about 10% in size. It is worth noting that this is mainly due to the different PDF set adopted and not to the inclusion of the IC component.
In Fig. <ref>, we can see that, even including the intrinsic charm contribution, the estimates for Λ̅ in eD collisions in the second and third scenarios remain sufficiently well separated for both PDF sets.
§ CONCLUSIONS
In this paper we have carried out a comprehensive reanalysis, within a TMD framework at NLL accuracy, of the transverse Λ/Λ̅ polarization data from the Belle Collaboration in associated two-hadron production in e^+e^- processes. In particular, we have focused on the role of isospin symmetry and of the charm contribution in the extraction of the polarizing fragmentation functions. While requiring SU(2) symmetry alone (within a three-flavor scheme) leads to a very unsatisfactory fit, we have shown that all other scenarios considered allow for very similar and quite good descriptions of the available data. We can then conclude that Belle e^+e^- data, or more generally e^+e^- processes alone, are not able to discriminate among the different scenarios and, in particular, to shed light on the SU(2) symmetry issue.
We have therefore explored this fundamental aspect by considering the same observable in SIDIS processes. By assuming the expected universality of the polarizing FFs we have given several predictions for the kinematical set-up reachable at the EIC, exploiting three different scenarios.
In such a case, by including the charm contribution in the unpolarized cross section, one can indeed distinguish between scenarios where isospin symmetry is respected or violated. We have also considered a pFF for charm quarks in the numerator of the polarization, without any improvement in the fit.
For completeness, we have discussed the role of the intrinsic charm in the proton for SIDIS processes and shown that the above conclusion does not change.
The spontaneous transverse Λ polarization remains a challenging subject, but at the same time offers a unique opportunity to study the fragmentation mechanism and, more specifically, spin and transverse momentum correlations.
The present study, focused on processes where TMD factorization has been proven to hold, provides a further step to shed light on this very interesting phenomenon.
As we have shown, future EIC measurements can play a significant role in this context: certainly in testing the phenomenological results obtained in e^+e^- annihilation processes and, more generally, in testing fundamental issues like the universality of the polarizing FFs, their scale dependence, their flavor decomposition as well as the role of SU(2) symmetry.
§ ACKNOWLEDGMENTS
We thank Carlo Flore for his suggestions on the role of the intrinsic charm. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement N. 824093 (STRONG-2020). U.D. and M.Z. also acknowledge
financial support by Fondazione di Sardegna under the projects “Proton tomography at the LHC”, project number F72F20000220007 and “Matter-antimatter asymmetry and polarisation in strange hadrons at LHCb`”, project number F73C22001150007 (University of Cagliari).
L.G. acknowledges
support from the US Department of Energy under contract No. DE-FG02-07ER41460.
§ PERTURBATIVE SUDAKOV FACTOR
Here we give the analytic expression of the perturbative Sudakov factor presented in Eq. (<ref>):
S_ pert(b_*;μ̅_b) = -K(b_*;μ̅_b) ln(Q^2/μ̅_b^2) - ∫_μ̅_b^Q (dμ'/μ') [ 2γ_D(g(μ');1) - γ_K(g(μ')) ln(Q^2/μ'^2) ] .
As discussed in Section <ref>, since our goal is a phenomenological analysis at NLL accuracy, we take α_s at LO order:
α_s(μ^2) = 1/[β_0 ln(μ^2/Λ^2_ QCD)] ,
and we expand the anomalous dimensions as follows:
γ_K = ∑_n γ^[n]_K (α_s/4π)^n γ_D = ∑_n γ^[n]_D(α_s/4π)^n ,
up to, respectively, the second and first order.
Given that the first order term of K(b_*;μ̅_b) is zero <cit.>, the perturbative Sudakov factor can be written again as:
S_ pert(b_*;μ̅_b) = [γ^[1]_D/(4πβ_0)] ln(ln(Q/Λ_ QCD)/ln(μ̅_b/Λ_ QCD))
+ [γ^[1]_K/(4πβ_0)] [ ln(Q/μ̅_b) - ln(Q/Λ_ QCD) ln(ln(Q/Λ_ QCD)/ln(μ̅_b/Λ_ QCD)) ]
+ [γ^[2]_K/(2(4πβ_0)^2)] [ - ln(Q/μ̅_b)/ln(μ̅_b/Λ_ QCD) + ln(ln(Q/Λ_ QCD)/ln(μ̅_b/Λ_ QCD)) ] ,
where <cit.>:
β_0 = (11 C_A - 4 T_F n_f)/(12 π) , γ^[1]_D = 6 C_F ,
γ^[1]_K = 8 C_F , γ^[2]_K = C_A C_F (536/9 - 8 π^2/3) - (80/9) C_F n_f ,
with C_F=4/3, C_A=3, T_F=1/2, and Λ_ QCD=0.2123 GeV for n_f=3 or Λ_ QCD=0.1737 GeV for n_f=4.
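As a cross-check of the expression above, the following minimal Python sketch evaluates S_pert(b_*;μ̅_b) at NLL using the constants just quoted; the numerical values of Q and μ̅_b in the example call are illustrative only and are not tied to any specific fit.

```python
import numpy as np

CF, CA, TF = 4.0/3.0, 3.0, 0.5

def sudakov_pert(mub, Q, nf=4):
    """NLL perturbative Sudakov factor S_pert(b*; mubar_b), Eq. above;
    mubar_b (the scale associated with b*) is passed directly as `mub`."""
    Lqcd  = 0.1737 if nf == 4 else 0.2123                  # GeV, as quoted above
    beta0 = (11*CA - 4*TF*nf) / (12*np.pi)
    gD1   = 6*CF
    gK1   = 8*CF
    gK2   = CA*CF*(536.0/9 - 8*np.pi**2/3) - 80.0/9*CF*nf

    lQ   = np.log(Q / Lqcd)
    lmu  = np.log(mub / Lqcd)
    lrat = np.log(lQ / lmu)

    return ( gD1/(4*np.pi*beta0) * lrat
           + gK1/(4*np.pi*beta0) * (np.log(Q/mub) - lQ*lrat)
           + gK2/(2*(4*np.pi*beta0)**2) * (-np.log(Q/mub)/lmu + lrat) )

# illustrative values: Q = 10.58 GeV, mubar_b = 2 GeV
print(sudakov_pert(2.0, 10.58))
```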
10
collins_2011
J. Collins, Foundations of Perturbative QCD, Cambridge Monographs on
Particle Physics, Nuclear Physics and Cosmology, Cambridge University Press
(2011),
https://doi.org/10.1017/CBO978051197559210.1017/CBO9780511975592.
Ji:2004xq
X.-d. Ji, J.-P. Ma and F. Yuan, QCD factorization for spin-dependent
cross sections in DIS and Drell-Yan processes at low transverse momentum,
https://doi.org/10.1016/j.physletb.2004.07.026Phys. Lett. B
597 (2004) 299
[https://arxiv.org/abs/hep-ph/0405085hep-ph/0405085].
Ji:2004wu
X.-d. Ji, J.-P. Ma and F. Yuan, QCD factorization for semi-inclusive
deep-inelastic scattering at low transverse momentum,
https://doi.org/10.1103/PhysRevD.71.034005Phys. Rev. D
71 (2005) 034005
[https://arxiv.org/abs/hep-ph/0404183hep-ph/0404183].
Bunce:1976yb
G. Bunce et al., Λ^0 Hyperon Polarization in Inclusive
Production by 300 GeV Protons on Beryllium,
https://doi.org/10.1103/PhysRevLett.36.1113Phys. Rev. Lett.
36 (1976) 1113.
Schachinger:1978qs
L. Schachinger et al., A Precise Measurement of the Λ^0 Magnetic
Moment, https://doi.org/10.1103/PhysRevLett.41.1348Phys. Rev.
Lett. 41 (1978) 1348.
Heller:1978ty
K.J. Heller et al., Polarization of Λ's and Λ̅'s
Produced by 400 GeV Protons,
https://doi.org/10.1103/PhysRevLett.41.607Phys. Rev. Lett.
41 (1978) 607 [Erratum: Phys. Rev. Lett. 45, 1043 (1980)].
Erhan:1979xm
S. Erhan et al., Λ^0 polarization in proton proton interactions
at √(s)=53 GeV and 62 GeV,
https://doi.org/10.1016/0370-2693(79)90761-5Phys. Lett.
82B (1979) 301.
Lundberg:1989hw
B. Lundberg et al., Polarization in inclusive Λ and
Λ̅ production at large p_T,
https://doi.org/10.1103/PhysRevD.40.3557Phys. Rev. D
40 (1989) 3557.
Ramberg:1994tk
E.J. Ramberg et al., Polarization of Λ and Λ̅
produced by 800 GeV protons,
https://doi.org/10.1016/0370-2693(94)91397-8Phys. Lett. B
338 (1994) 403.
Abt:2006da
HERA-B Collaboration, Polarization of Λ
and Λ̅ in 920 GeV fixed-target proton-nucleus collisions,
https://doi.org/10.1016/j.physletb.2006.05.040Phys. Lett. B
638 (2006) 415
[https://arxiv.org/abs/hep-ex/0603047hep-ex/0603047].
Anselmino:2000vs
M. Anselmino, D. Boer, U. D'Alesio and F. Murgia, Λ polarization
from unpolarized quark fragmentation,
https://doi.org/10.1103/PhysRevD.63.054029Phys. Rev. D
63 (2001) 054029
[https://arxiv.org/abs/hep-ph/0008186hep-ph/0008186].
Anselmino:2001js
M. Anselmino, D. Boer, U. D'Alesio and F. Murgia, Transverse lambda polarization in semiinclusive DIS,
https://doi.org/10.1103/PhysRevD.65.114014Phys. Rev. D
65 (2002) 114014
[https://arxiv.org/abs/hep-ph/0109186hep-ph/0109186].
Belle:2018ttu
Belle Collaboration, Observation of Transverse
Λ/Λ̅ Hyperon Polarization in e^+e^- Annihilation at
Belle, https://doi.org/10.1103/PhysRevLett.122.042001Phys.
Rev. Lett. 122 (2019) 042001
[https://arxiv.org/abs/1808.050001808.05000].
DAlesio:2020wjq
U. D'Alesio, F. Murgia and M. Zaccheddu, First extraction of the
Λ polarizing fragmentation function from Belle e^+e^- data,
https://doi.org/10.1103/PhysRevD.102.054001Phys. Rev. D
102 (2020) 054001
[https://arxiv.org/abs/2003.011282003.01128].
Callos:2020qtu
D. Callos, Z.-B. Kang and J. Terry, Extracting the transverse momentum
dependent polarizing fragmentation functions,
https://doi.org/10.1103/PhysRevD.102.096007Phys. Rev. D
102 (2020) 096007
[https://arxiv.org/abs/2003.048282003.04828].
Collins:1981uk
J.C. Collins and D.E. Soper, Back-to-back jets in QCD,
https://doi.org/https://doi.org/10.1016/0550-3213(81)90339-4Nucl.
Phys. B 193 (1981) 381 [Erratum: Nucl. Phys. B 213, 545 (1983)].
Collins:1981va
J.C. Collins and D.E. Soper, Back-to-back jets: Fourier transform from
b to k_T,
https://doi.org/https://doi.org/10.1016/0550-3213(82)90453-9Nucl.
Phys. B 197 (1982) 446.
Collins:1984kg
J.C. Collins, D.E. Soper and G. Sterman, Transverse momentum
distribution in Drell-Yan pair and W and Z boson production,
https://doi.org/10.1016/0550-3213(85)90479-1Nucl. Phys. B
250 (1985) 199.
Gamberg:2021iat
L. Gamberg, Z.-B. Kang, D.Y. Shao, J. Terry and F. Zhao, Transverse
Λ polarization in e^+e^- collisions,
https://doi.org/10.1016/j.physletb.2021.136371Phys. Lett. B
818 (2021) 136371
[https://arxiv.org/abs/2102.055532102.05553].
Kang:2021kpt
Z.-B. Kang, J. Terry, A. Vossen, Q. Xu and J. Zhang, Transverse Lambda
production at the future Electron-Ion Collider,
https://doi.org/10.1103/PhysRevD.105.094033Phys. Rev. D
105 (2022) 094033
[https://arxiv.org/abs/2108.053832108.05383].
Li:2020oto
H. Li, X. Wang, Y. Yang and Z. Lu, The transverse polarization of
Λ hyperons in e^+e^-→Λ ^↑ h X processes
within TMD factorization,
https://doi.org/10.1140/epjc/s10052-021-09064-1Eur. Phys. J. C
81 (2021) 289
[https://arxiv.org/abs/2009.071932009.07193].
Chen:2021hdn
K.-B. Chen, Z.-T. Liang, Y.-L. Pan, Y.-K. Song and S.-Y. Wei, Isospin
symmetry of fragmentation functions,
https://doi.org/10.1016/j.physletb.2021.136217Phys. Lett. B
816 (2021) 136217
[https://arxiv.org/abs/2102.006582102.00658].
DAlesio:2022brl
U. D'Alesio, L. Gamberg, F. Murgia and M. Zaccheddu, Transverse
Λ polarization in e^+e^- processes within a TMD factorization
approach and the polarizing fragmentation function,
https://doi.org/10.1007/JHEP12(2022)074JHEP 12
(2022) 074 [https://arxiv.org/abs/2209.116702209.11670].
Boer:1997mf
D. Boer, R. Jakob and P.J. Mulders, Asymmetries in polarized hadron
production in e^+ e^- annihilation up to order 1/Q,
https://doi.org/10.1016/S0550-3213(97)00456-2Nucl. Phys. B
504 (1997) 345
[https://arxiv.org/abs/hep-ph/9702281hep-ph/9702281].
Pitonyak:2013dsu
D. Pitonyak, M. Schlegel and A. Metz, Polarized hadron pair production
from electron-positron annihilation,
https://doi.org/10.1103/PhysRevD.89.054032Phys. Rev. D
89 (2014) 054032
[https://arxiv.org/abs/1310.62401310.6240].
DAlesio:2021dcx
U. D'Alesio, F. Murgia and M. Zaccheddu, General helicity formalism for
two-hadron production in e^+e^- annihilation within a TMD approach,
https://doi.org/10.1007/JHEP10(2021)078JHEP 10
(2021) 078 [https://arxiv.org/abs/2108.056322108.05632].
Mulders:1995dh
P.J. Mulders and R.D. Tangerman, The complete tree level result up to
order 1/Q for polarized deep inelastic leptoproduction,
https://doi.org/10.1016/0550-3213(95)00632-XNucl. Phys. B
461 (1996) 197 [Erratum: Nucl. Phys. B 484, 538–540
(1997)]
[https://arxiv.org/abs/hep-ph/9510301hep-ph/9510301] .
Bacchetta:2006tn
A. Bacchetta, M. Diehl, K. Goeke, A. Metz, P.J. Mulders and M. Schlegel,
Semi-inclusive deep inelastic scattering at small transverse
momentum, https://doi.org/10.1088/1126-6708/2007/02/093JHEP
02 (2007) 093
[https://arxiv.org/abs/hep-ph/0611265hep-ph/0611265].
Anselmino:2011ch
M. Anselmino, M. Boglione, U. D'Alesio, S. Melis, F. Murgia, E.R. Nocera
et al., General helicity formalism for polarized semi-inclusive deep
inelastic scattering,
https://doi.org/10.1103/PhysRevD.83.114019Phys. Rev. D
83 (2011) 114019
[https://arxiv.org/abs/1101.10111101.1011].
Chen:2021zrr
K.-b. Chen, Z.-T. Liang, Y.-K. Song and S.-Y. Wei, Longitudinal and
transverse polarizations of Λ hyperon in unpolarized SIDIS and
e^+e^- annihilation,
https://doi.org/10.1103/PhysRevD.105.034027Phys. Rev. D
105 (2022) 034027
[https://arxiv.org/abs/2108.077402108.07740].
Brodsky:1980pb
S.J. Brodsky, P. Hoyer, C. Peterson and N. Sakai, The intrinsic charm of
the proton, https://doi.org/10.1016/0370-2693(80)90364-0Phys.
Lett. B 93 (1980) 451.
Brodsky:2015fna
S.J. Brodsky, A. Kusina, F. Lyonnet, I. Schienbein, H. Spiesberger and R. Vogt,
A review of the intrinsic heavy quark content of the nucleon,
https://doi.org/10.1155/2015/231547Adv. High Energy Phys.
2015 (2015) 231547
[https://arxiv.org/abs/1504.062871504.06287].
Collins:2016hqq
J. Collins, L. Gamberg, A. Prokudin, T.C. Rogers, N. Sato and B. Wang,
Relating transverse momentum dependent and collinear factorization
theorems in a generalized formalism,
https://doi.org/10.1103/PhysRevD.94.034014Phys. Rev. D
94 (2016) 034014
[https://arxiv.org/abs/1605.006711605.00671].
Bacchetta:2004jz
A. Bacchetta, U. D'Alesio, M. Diehl and C.A. Miller, Single-spin
asymmetries: The Trento conventions, https://doi.org/10.1103/PhysRevD.70.117504Phys. Rev. D 70
(2004) 117504 [https://arxiv.org/abs/hep-ph/0410050hep-ph/0410050].
Boer:1999uu
D. Boer, R. Jakob and P.J. Mulders, Angular dependences in electroweak
semi-inclusive leptoproduction,
https://doi.org/10.1016/S0550-3213(99)00586-6Nucl. Phys. B
564 (2000) 471
[https://arxiv.org/abs/hep-ph/9907504hep-ph/9907504].
Albino:2008fy
S. Albino, B.A. Kniehl and G. Kramer, AKK update: Improvements from new
theoretical input and experimental data,
https://doi.org/10.1016/j.nuclphysb.2008.05.017Nucl. Phys. B
803 (2008) 42 [https://arxiv.org/abs/0803.27680803.2768].
deFlorian:2007aj
D. de Florian, R. Sassot and M. Stratmann, Global analysis of
fragmentation functions for pions and kaons and their uncertainties, https://doi.org/10.1103/PhysRevD.75.114010Phys. Rev. D 75 (2007) 114010
[https://arxiv.org/abs/hep-ph/0703242hep-ph/0703242].
Bacchetta:2017gcc
A. Bacchetta, F. Delcarro, C. Pisano, M. Radici and A. Signori,
Extraction of partonic transverse momentum distributions from
semi-inclusive deep-inelastic scattering, Drell-Yan and Z-boson production,
https://doi.org/10.1007/JHEP06(2017)081JHEP 06
(2017) 081 [https://arxiv.org/abs/1703.101571703.10157].
Boglione:2017jlh
M. Boglione, J.O. Gonzalez-Hernandez and R. Taghavi, Transverse parton
momenta in single inclusive hadron production in e^+e^-
annihilation processes,
https://doi.org/10.1016/j.physletb.2017.06.034Phys. Lett. B
772 (2017) 78
[https://arxiv.org/abs/1704.088821704.08882].
Boglione:2020auc
M. Boglione and A. Simonelli, Factorization of e^+e^- → H X cross
section, differential in z_h, P_T and thrust, in the 2-jet limit,
https://doi.org/10.1007/JHEP02(2021)076JHEP 02
(2021) 076 [https://arxiv.org/abs/2011.073662011.07366].
Boglione:2022nzq
M. Boglione, J.O. Gonzalez-Hernandez and A. Simonelli, Transverse
momentum dependent fragmentation functions from recent BELLE data,
https://doi.org/10.1103/PhysRevD.106.074024Phys. Rev. D
106 (2022) 074024
[https://arxiv.org/abs/2206.088762206.08876].
Dulat:2015mca
S. Dulat, T.-J. Hou, J. Gao, M. Guzzi, J. Huston, P. Nadolsky et al.,
New parton distribution functions from a global analysis of quantum
chromodynamics,
https://doi.org/10.1103/PhysRevD.93.033006Phys. Rev. D
93 (2016) 033006
[https://arxiv.org/abs/1506.074431506.07443].
Hou:2017khm
T.-J. Hou, S. Dulat, J. Gao, M. Guzzi, J. Huston, P. Nadolsky et al.,
CT14 Intrinsic Charm Parton Distribution Functions from CTEQ-TEA
Global Analysis, https://doi.org/10.1007/JHEP02(2018)059JHEP
02 (2018) 059
[https://arxiv.org/abs/1707.006571707.00657].
NNPDF:2021njg
NNPDF collaboration, The path to proton structure at 1%
accuracy, https://doi.org/10.1140/epjc/s10052-022-10328-7Eur.
Phys. J. C 82 (2022) 428
[https://arxiv.org/abs/2109.026532109.02653].
Aybat:2011zv
S.M. Aybat and T.C. Rogers, Transverse momentum dependent parton distribution and fragmentation
functions with QCD evolution,
https://doi.org/10.1103/PhysRevD.83.114042Phys. Rev. D
83 (2011) 114042
[https://arxiv.org/abs/1101.50571101.5057].
Collins:2017oxh
J. Collins and T.C. Rogers, Connecting different TMD factorization
formalisms in QCD,
https://doi.org/10.1103/PhysRevD.96.054011Phys. Rev. D
96 (2017) 054011
[https://arxiv.org/abs/1705.071671705.07167].
|
http://arxiv.org/abs/2307.00621v1
|
20230702172517
|
Some exact anisotropic cosmological solutions of a simple nonlocal de Sitter gravity
|
[
"Ivan Dimitrijevic"
] |
gr-qc
|
[
"gr-qc"
] |
University of Belgrade, Faculty of Mathematics, Studentski Trg 16
Belgrade, Serbia
[email protected]
Some exact anisotropic cosmological solutions of a simple nonlocal de Sitter gravity
Ivan Dimitrijevic
August 1, 2023
======================================================================================
It was shown recently that a very simple nonlocal de Sitter gravity model contains an exact vacuum cosmological solution which mimics dark energy and dark matter in flat space. Some other interesting solutions have also been found. In this paper we proceed with finding several new exact cosmological solutions which belong to Bianchi I space. These solutions are simple generalizations of solutions previously found in the FLRW case of the same nonlocal de Sitter gravity model. The obtained results are discussed.
PACS numbers:04.50.Kd, 04.20.Jb, 02.40.Ky
§ INTRODUCTION
The current state of the Universe is very well described by the Standard Model of Cosmology (SMC), which is mainly based on General Relativity (GR), the Standard Model of Particle Physics (SMPP), the observation that the Universe is homogeneous and isotropic at very large cosmic scales, and its accelerating expansion. According to the SMC, the Universe at present consists of about 68% dark energy (DE), 27% dark matter (DM) and only 5% standard matter described by the SMPP<cit.>. DM was introduced as a possible explanation of the large velocities within and between clusters of galaxies. After the discovery of the accelerating expansion of the Universe in 1998, DE was introduced as a new kind of matter with negative pressure that acts as antigravity, causing this acceleration. According to the SMC, DE is related to the cosmological constant Λ. Therefore, the SMC is also known as ΛCDM, where CDM means cold dark matter.
Since the existence of dark matter and dark energy has not yet been experimentally confirmed, some researchers have turned to alternative explanations of the flat rotation curves in spiral galaxies, as well as of the late-time cosmic acceleration<cit.>. In practice, this means modifying the geometric sector of general relativity. Since there is, so far, no guiding principle for choosing an appropriate extension of the Einstein-Hilbert (EH) action, there are many phenomenological approaches. Usually, these approaches extend the scalar curvature R in the EH action by various scalars that can be constructed in pseudo-Riemannian geometry. The most elaborated version has been f(R) gravity<cit.>, where R is replaced by a function f(R). One of the current and attractive approaches to the extension of GR is nonlocal modified gravity, see the review<cit.>. Note that general relativity, despite its enormous success, has its own problems, such as the black hole and big bang singularities, and problems with its quantization<cit.>.
Recently, it was shown that a very simple nonlocal de Sitter gravity model, given by the action
S = 1/16 π G∫(R- 2 Λ + √(R-2Λ) F(□) √(R-2Λ)) √(-g) d^4 x ,
contains an exact vacuum cosmological solution which mimics dark energy and dark matter in flat space<cit.>. Some other interesting solutions have also been found<cit.>. In (<ref>), R is the scalar curvature, Λ is the cosmological constant and F(□) = ∑_n=1^+∞ f_n □^n + ∑_n=1^+∞ f_-n□^-n is a nonlocal operator built from the d'Alembertian □. In this paper, we present several exact vacuum anisotropic cosmological solutions of the Bianchi I type for the same nonlocal gravity model (<ref>), which are connected with its exact solutions in the Friedmann-Lemaître-Robertson-Walker (FLRW) metric case. While the Universe is homogeneous and isotropic at very large cosmic scales, anisotropic solutions are nevertheless interesting as exact solutions and may be relevant for the very early evolution of the Universe, e.g. see recent references in modified gravity<cit.>. It is worth mentioning that in a recent article<cit.> an anisotropic bouncing cosmological solution in higher-derivative non-local gravity was found.
This paper is organized as follows. In Sec. 2 the metric of the Bianchi I space is presented. The nonlocal gravity model (<ref>) and the corresponding equations of motion are considered in Sec. 3. Some anisotropic cosmological solutions are investigated in Sec. 4. Sec. 5 contains some concluding remarks.
§ THE METRIC
Let us consider the Bianchi type I anisotropic metric in the form
ds^2 = -d t^2 + a_1(t)^2dx^2 + a_2(t)^2dy^2 + a_3(t)^2dz^2,
with three scale factors a_1(t), a_2(t) and a_3(t). It is worth noting that if all three scale factors are equal one obtains the flat FLRW metric. Also, the d'Alembertian of the metric (<ref>), acting on a function u(t), reads
□ u(t) = - ü(t) - (H_1(t) + H_2(t) + H_3(t)) u̇(t),
where H_i(t) = ȧ_i(t)/a_i(t).
Therefore if we introduce the Hubble parameter H(t) by
H(t) = 1/3 (H_1(t) + H_2(t) + H_3(t)),
we obtain the same d'Alembertian as in the FLRW metric. By integrating H(t) one obtains the corresponding scale factor
a(t) = (a_1(t) a_2(t) a_3(t))^1/3.
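These statements can be checked symbolically. The small sympy sketch below assumes the standard expression □u = (1/√(-g)) ∂_μ(√(-g) g^μν ∂_ν u) with √(-g) = a_1 a_2 a_3 for the diagonal metric (<ref>) and u = u(t), and verifies both the d'Alembertian formula above and that H(t) is the logarithmic derivative of (a_1 a_2 a_3)^1/3:

```python
import sympy as sp

t = sp.symbols('t')
a1, a2, a3, u = (sp.Function(f)(t) for f in ('a1', 'a2', 'a3', 'u'))

# box u = (1/sqrt(-g)) d/dt( sqrt(-g) * g^{tt} * u' ),  with g^{tt} = -1
sqrtg = a1 * a2 * a3
box_u = sp.diff(sqrtg * (-1) * sp.diff(u, t), t) / sqrtg

H1, H2, H3 = (sp.diff(ai, t) / ai for ai in (a1, a2, a3))
claimed = -sp.diff(u, t, 2) - (H1 + H2 + H3) * sp.diff(u, t)
print(sp.simplify(box_u - claimed))                 # -> 0, the formula above

a_mean = (a1 * a2 * a3) ** sp.Rational(1, 3)        # a(t) = (a1 a2 a3)^(1/3)
H = sp.diff(a_mean, t) / a_mean
print(sp.simplify(H - (H1 + H2 + H3) / 3))          # -> 0, so box u = -u'' - 3 H u'
```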
For further calculations in the sequel it would be convenient to introduce the following notation
a_i(t) = a(t) e^β_i(t), i=1,2,3.
Moreover, we will consider the functions β_1, β_2 and β_3 as the components of a curve β(t) = (β_1(t), β_2(t), β_3(t)) in ℝ^3. Using the condition (<ref>) it is easy to see that β(t) is a plane curve lying in the plane x+y+z =0, i.e.
β_1(t)+ β_2(t) + β_3(t) =0.
Let us denote the speed of curve β(t) by σ(t), hence
σ(t)^2 = β̇_1(t)^2+ β̇_2(t)^2 + β̇_3(t)^2.
Therefore the metric takes the form
ds^2 = -d t^2 +a(t)^2(e^2β_1(t)dx^2+ e^2β_2(t)dy^2 + e^2β_3(t)dz^2).
The metric of the form (<ref>) has been introduced in the paper<cit.>.
The velocity vector β̇(t) has norm σ(t) and therefore can be written as
β̇(t) = σ(t) β̂(t), where β̂(t) is a unit vector for all t. Assuming that β(t) lies in the plane z=0, one obtains
β̇(t) = σ(t) (cosθ(t), sinθ(t),0),
for some function θ(t) which will be determined later. Direct integration yields that
β(t) = (∫σ(t) cosθ(t)d t,∫σ(t) sinθ(t) d t,0).
Since the curve β(t) lies in the plane z=0 while we need it to lie in the plane x+y+z=0,
it remains to find the rotation of space that maps the plane z=0 to the plane x+y+z=0. Let us recall that each rotation can be written as a composition of three rotations around the coordinate axes using Euler angles. Therefore an arbitrary rotation in space can be expressed by the following matrix
M = (
[ cosζ cosη cosξ - sinζ sinξ ,  -cosη cosξ sinζ - cosζ sinξ ,  cosξ sinη ;  cosξ sinζ + cosζ cosη sinξ ,  cosζ cosξ - cosη sinζ sinξ ,  sinη sinξ ;  -cosζ sinη ,  sinζ sinη ,  cosη ]).
Since M is an orthogonal matrix, which preserves lengths and angles, it is sufficient to map the normal vector of one plane to the normal vector of the other,
1/√(3)(
[ 1; 1; 1; ]) = M (
[ 0; 0; 1; ]).
The solution is given by
M = (
[ cosζ/√(6) - sinζ/√(2) ,  -cosζ/√(2) - sinζ/√(6) ,  1/√(3) ;  cosζ/√(6) + sinζ/√(2) ,  cosζ/√(2) - sinζ/√(6) ,  1/√(3) ;  -√(2/3)cosζ ,  √(2/3)sinζ ,  1/√(3) ]).
For the purpose of the sequel it is sufficient to take one solution, therefore let ζ=0 and
M = (
[ 1/√(6) ,  -1/√(2) ,  1/√(3) ;  1/√(6) ,  1/√(2) ,  1/√(3) ;  -√(2/3) ,  0 ,  1/√(3) ]).
The curve β(t) then takes the form
β(t) = M ([ ∫σ (t) cosθ(t) d t; ∫σ (t) sinθ(t) d t; 0 ])
= ([ ∫σ (t) cosθ (t) dt/√(6) - ∫σ (t) sinθ (t) dt/√(2); ∫σ(t) sinθ (t) dt/√(2)+∫σ (t) cosθ (t) dt/√(6); -√(2/3)∫σ (t) cosθ (t) dt ]).
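A short numerical sketch of this construction is given below (Python). It builds the rotation (<ref>) for generic ζ, checks that it maps the unit normal of z=0 to that of x+y+z=0, and assembles β(t) by quadrature; the profiles chosen for σ(t) and θ(t) are purely illustrative, since θ(t) is not fixed by the equations of motion.

```python
import numpy as np

def euler_M(zeta):
    """Rotation mapping the plane z=0 onto the plane x+y+z=0, for generic zeta."""
    c, s = np.cos(zeta), np.sin(zeta)
    return np.array([
        [ c/np.sqrt(6) - s/np.sqrt(2), -c/np.sqrt(2) - s/np.sqrt(6), 1/np.sqrt(3)],
        [ c/np.sqrt(6) + s/np.sqrt(2),  c/np.sqrt(2) - s/np.sqrt(6), 1/np.sqrt(3)],
        [-np.sqrt(2/3)*c,               np.sqrt(2/3)*s,              1/np.sqrt(3)]])

M = euler_M(0.0)
print(M @ np.array([0.0, 0.0, 1.0]))       # -> (1,1,1)/sqrt(3), the normal of x+y+z=0

def cumtrapz(y, x):
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

# illustrative choices: sigma(t) = 0.3 t, constant theta(t)
tgrid = np.linspace(0.0, 1.0, 1001)
sigma = 0.3 * tgrid
theta = np.full_like(tgrid, 0.7)

planar = np.vstack([cumtrapz(sigma*np.cos(theta), tgrid),
                    cumtrapz(sigma*np.sin(theta), tgrid),
                    np.zeros_like(tgrid)])
beta = M @ planar                           # columns are beta(t) on the grid
print(beta[:, -1], beta[:, -1].sum())       # components sum to ~0: beta lies in x+y+z=0
```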
§ MODEL AND EOM
In this paper we discuss the action given by
S = 1/16 π G∫(R- 2 Λ + P(R) F(□) Q(R)) √(-g) d^4 x ,
where the Universe is represented by a pseudo-Riemannian manifold with metric g_μν of signature (1,3), P and Q are differentiable functions of the scalar curvature R, Λ is the cosmological constant and F(□) = ∑_n=1^+∞ f_n □^n + ∑_n=1^+∞ f_-n□^-n is a nonlocal operator. It is obvious that this model includes GR if we set F(□) = 0. Since the emphasis in this paper is on the nonlocal modification of gravity, we will not include a matter term in the action.
The first step is the derivation of the equations of motion, which is a lengthy procedure presented in <cit.>, in particular<cit.>,
G_μν + Λ g_μν - 1/2 g_μν P F(□) Q + R_μν W - K_μν W + 1/2Ω_μν =0,
where
W = P'(R) F(□) Q(R) + Q'(R) F(□) P(R),
K_μν = ∇_μ∇_ν - g_μν□,
S_μν(A,B) = g_μν∇_λ A ∇^λ B + g_μν A □ B - 2 ∇_μ A ∇_ν B,
Ω_μν = ∑_n=1^+∞ f_n ∑_l=0^n-1 S_μν(□^l P, □^n-1-l Q)
- ∑_n=1^+∞ f_-n∑_l=0^n-1 S_μν(□^-(l+1) P, □^-(n-l) Q),
and ' denotes the derivative with respect to R.
Equation (<ref>) contains infinitely many derivatives and therefore we cannot find its general solution, but we can simplify it considerably by choosing P=Q and, moreover, letting Q be an eigenfunction of the operator □ with eigenvalue q. Hence, equation (<ref>) simplifies to
G_μν + Λ g_μν - 1/2 g_μν F(q) Q^2 + R_μν W - K_μν W + 1/2Ω_μν =0,
where
W = 2 Q'(R) F(□) Q(R),
Ω_μν = F'(q) S_μν(Q,Q).
This equation transforms as
(G_μν + Λ g_μν)(1+ 2 F(q) Q Q') + F(q) g_μν(-1/2 Q^2 + QQ' (R-2Λ))
-2 F(q) K_μν QQ' + 1/2 F'(q) S_μν(Q,Q) =0.
In particular, the most interesting case for us is Q = √(R-2Λ), which gives Q Q' = 1/2, and the equations of motion take the form
(G_μν + Λ g_μν)(1+ F(q)) + 1/2 F'(q) S_μν(Q,Q) =0.
It is clear that if we choose the function F such that
F(q) = -1, F'(q) = 0,
then equation (<ref>) is satisfied. Therefore the next section is devoted to solving the following eigenvalue problem:
□√(R-2Λ) = q √(R-2Λ).
From the previous discussion we conclude that if we solve (<ref>) and the function F is constrained by (<ref>), then the equations of motion (<ref>) are satisfied as well.
§ COSMOLOGICAL SOLUTIONS
To begin with, it is interesting to note that the Ricci tensor, the scalar curvature and the d'Alembertian of the metric (<ref>) do not depend on θ(t), and hence θ(t) will remain undetermined in the following calculations.
R = R_FLRW + σ^2,
□ u(t) = □_FLRW u(t),
R_00 = R_00,FLRW - σ^2,
G_00 = G_00,FLRW - 1/2σ^2,
where index FLRW denotes quantities corresponding to the FLRW metric with scale factor a(t) and k=0.
§.§ Scale factor in exponential form a(t)=e^γ t^2
As a first case we take
a(t)= A e^γ t^2,
then the eigenvalue problem (<ref>) takes the form
4 σ(t)^2 (q (6 γ-Λ +24 γ ^2t^2)+12 γ ^2 (6
γ t^2+1))+4 q (-6 γ +Λ -24γ ^2 t^2)^2
+q σ(t)^4 + 2 (6 γ -Λ+24 γ ^2 t^2) σ̇(t)^2+2 σ (t) ((6 γ -Λ +24 γ ^2 t^2) σ̈(t)
+6 γ t (-2 γ -Λ +24 γ ^2 t^2) σ̇(t))+96 γ ^2 (-Λ +144 γ ^3
t^4+36 γ ^2 t^2+γ(6-6 Λ t^2))
+σ (t)^3 (6 γ t σ̇(t)+σ̈(t)) = 0.
The resulting equation in σ(t) is a nonlinear second-order equation and we can obtain only particular solutions. To simplify it further, we take γ = Λ/6 and q=-Λ, as in the corresponding FLRW solution (see <cit.> for details), and obtain the following equation in σ(t):
4 Λ ^2 t^2 σ̇(t)^2+4 Λ ^2 t σ (t) ((Λ t^2-2)
σ̇(t)+t σ̈(t))-4 Λ ^2 (Λ t^2-1) σ
(t)^2
+3 σ (t)^3 (Λ t σ̇(t)+σ̈(t))-3 Λσ(t)^4 =0.
One particular solution is
σ(t) =σ_0 t,
for some arbitrary constant σ_0.
Hence we have obtained a solution of the eigenvalue problem (<ref>) in the form
q = -Λ, a(t) = A e^Λ/6 t^2, σ(t) =σ_0 t,
which is also a solution of EOM if
F(-Λ) = -1, F'(-Λ) = 0,
as we have already seen in the previous section.
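This solution can be verified symbolically. Writing u = √w with w = R-2Λ, the eigenvalue problem (<ref>) is equivalent, away from zeros of w, to the square-root-free identity -2 w ẅ + ẇ^2 - 6 H w ẇ = 4 q w^2 (obtained by substituting u = √w into □u = -ü - 3Hu̇); this rewriting is ours and is used only to avoid branch issues in the check below.

```python
import sympy as sp

t, A, Lam, s0 = sp.symbols('t A Lambda sigma0', positive=True)

a   = A * sp.exp(Lam * t**2 / 6)        # scale factor of this subsection
sig = s0 * t                             # sigma(t) = sigma_0 t
q   = -Lam

H = sp.diff(a, t) / a
R = 6*(a*sp.diff(a, t, 2) + sp.diff(a, t)**2)/a**2 + sig**2   # R = R_FLRW + sigma^2
w = R - 2*Lam

# box(sqrt(w)) = q*sqrt(w)  <=>  -2*w*w'' + w'^2 - 6*H*w*w' = 4*q*w^2
lhs = -2*w*sp.diff(w, t, 2) + sp.diff(w, t)**2 - 6*H*w*sp.diff(w, t)
print(sp.simplify(lhs - 4*q*w**2))       # -> 0, confirming q = -Lambda
```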
§.§ Scale factor as a linear combination of exponential functions a(t) = α e^λ t + β e^-λ t
As a second case let us take
a(t) = α e^λ t + β e^-λ t ,    σ(t) = σ_0 a(t)^-2.
Inserting these values into (<ref>) yields an equation of the form
∑_n=0^8 A_n e^2λ n t =0,
where the coefficients A_n are given by
A_0 =4 β ^8 q (6 λ^2-Λ)^2,
A_1 =16 αβ ^7 (6 λ^2-Λ) (3 λ ^4+6 λ ^2 q-2 Λ q),
A_2 =4 β ^4 (144α ^2 β ^2 λ ^6-72 α ^2 β ^2 λ ^4
Λ +288 α ^2 β ^2λ ^4 q-192 α ^2 β^2 λ ^2 Λ q
+28 α^2 β ^2 Λ ^2 q+6 λ^2 q σ_0^2-Λ q
σ_0^2+6 λ ^4σ_0^2-λ ^2 Λσ_0^2),
A_3 =8 αβ^3 (252 α ^2 β ^2 λ ^6-90 α ^2 β ^2λ ^4 Λ +216 α ^2 β ^2 λ ^4 q-156 α^2 β ^2 λ ^2 Λ q
+28 α ^2 β ^2 Λ ^2 q+6 λ ^2 q σ_0^2-2
Λ q σ_0^2-3 λ ^4 σ_0^2+2 λ ^2
Λσ_0^2),
A_4 =3456 α ^4 β ^4 λ ^6-960 α ^4 β ^4 λ ^4
Λ +2016 α ^4 β ^4 λ ^4 q-1440 α ^4 β
^4 λ ^2 Λ q
+280 α ^4 β ^4 Λ ^2 q+q σ_0^4+48 α ^2 β ^2 λ ^2 q σ_0^2-24 α ^2 β ^2 Λ q σ_0^2-2 λ ^2
σ_0^4
-96 α ^2 β ^2 λ ^4 σ_0^2+40 α
^2 β ^2 λ ^2 Λσ_0^2,
A_5 =8 α ^3 β(252 α ^2 β ^2 λ ^6-90 α ^2 β ^2 λ ^4 Λ +216 α ^2 β ^2 λ ^4 q-156 α
^2 β ^2 λ ^2 Λ q
+28 α ^2 β ^2 Λ ^2 q+6 λ ^2 q σ_0^2-2 Λ q σ_0^2-3 λ^4 σ_0^2+2 λ ^2
Λσ_0^2),
A_6 =4α ^4 (144 α ^2β ^2 λ ^6-72 α ^2β ^2 λ ^4 Λ +288 α ^2 β ^2 λ ^4q -192 α ^2 β ^2 λ^2 Λ q
+28 α ^2 β ^2 Λ ^2 q+6 λ ^2 qσ_0^2-Λ q σ_0^2+6 λ ^4σ_0^2-λ ^2 Λσ_0^2),
A_7 =16 α ^7 β(6 λ ^2-Λ) (3 λ ^4+6λ ^2 q-2 Λ q),
A_8 =4α ^8 q (6 λ^2-Λ)^2.
Thus, we need to solve the system
A_n = 0 ,  n = 0, 1, …, 8.
The highest order coefficient A_8 vanishes if λ^2 = Λ/6, while the remaining coefficients simplify to
A_2 =576 α ^2 β ^6 λ ^4 (q-2 λ
^2),
A_3 =24 αβ ^3 λ ^2 (-96 α ^2β ^2 λ ^4+96 α ^2β ^2 λ ^2 q-2 q
σ_0^2+3 λ ^2σ_0^2),
A_4 =-2304 α ^4β ^4 λ ^6+3456 α ^4 β ^4 λ ^4 q +qσ_0^4,
A_5 =-96 α ^2 β ^2
λ ^2 q σ_0^2-2 λ^2 σ_0^4+144 α ^2 β^2 λ ^4 σ_0^2,
A_6 =24 α ^3 βλ ^2(-96 α ^2 β ^2 λ ^4+96 α ^2 β ^2λ ^2 q-2 q σ_0^2+3
λ ^2 σ_0^2),
A_7 =576α ^6 β ^2 λ ^4 (q-2 λ ^2).
Now we set q = 2λ^2 and the remaining equations are
24 αβ ^3λ ^4 (96 α ^2β ^2 λ^2-σ_0^2) =0,
-48 α^2 β ^2λ ^4 (96 α ^2β ^2 λ^2-σ_0^2) =0,
24 α^3 βλ ^4 (96 α ^2β ^2 λ^2-σ_0^2) =0.
Hence we get a solution
a(t)= α e^λ t + β e^-λ t ,    σ(t) = σ_0 (α e^λ t + β e^-λ t)^-2,
in the following two cases
* λ = ±√(Λ/6), q = Λ/3,
σ_0^2 = 16 α^2 β^2 Λ,
* λ = ±√(Λ/6), q = Λ/3, αβ=0.
For example, in the first case we get scale factors of the form a(t)= A coshλ t and a(t)= A sinhλ t, while in the second case we get a(t)= α e^λ t.
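The first case can be checked in the same square-root-free form used above; the sympy sketch below verifies that λ^2 = Λ/6, q = Λ/3 and σ_0^2 = 16α^2β^2Λ indeed satisfy the eigenvalue problem (<ref>):

```python
import sympy as sp

t, al, be, Lam = sp.symbols('t alpha beta Lambda', positive=True)
lam = sp.sqrt(Lam / 6)
q   = Lam / 3

a    = al*sp.exp(lam*t) + be*sp.exp(-lam*t)
sig2 = 16*al**2*be**2*Lam / a**4          # sigma^2 = sigma_0^2 a^-4, sigma_0^2 = 16 alpha^2 beta^2 Lambda

H = sp.diff(a, t) / a
R = 6*(a*sp.diff(a, t, 2) + sp.diff(a, t)**2)/a**2 + sig2
w = R - 2*Lam

lhs = -2*w*sp.diff(w, t, 2) + sp.diff(w, t)**2 - 6*H*w*sp.diff(w, t)
print(sp.simplify(lhs - 4*q*w**2))        # -> 0, so q = Lambda/3 works in this case
```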
§.§ Scale factor a(t)= (α e^λ t + β e^-λ t)^1/2
One can also take the scale factor a(t) in the form
a(t)= (α e^λ t + β e^-λ t)^1/2,
and the condition (<ref>) is transformed into
2 (3 λ ^2-2 Λ) (β +α e^2
λ t) (3 λ
^2 q-2 Λ q+ σ̇(t)^2)
+4 q (3 λ
^2-2 Λ) σ (t)^2
(β +α e^2 λ
t)+2 q σ (t)^4
(β +α e^2 λ
t)
+(3 λ ^2-2 Λ) σ (t) (2
σ̈(t) (β +α
e^2 λ t)+3 λσ̇(t) (α e^2
λ t-β))
+σ (t)^3
(2 σ̈(t) (β
+α e^2 λ t)+3
λσ̇(t) (α
e^2 λ t-β)) = 0.
This expression is simplified by taking λ^2 = 2/3Λ
σ (t)^3 (2 q σ (t) (β +α e^2 λ
t)+2 σ̈(t) (β +α e^2 λ
t)+3 λσ̇(t) (α e^2 λ
t-β)) = 0.
The last equation has the obvious solution σ(t)=0, and its general solution is expressed in terms of hypergeometric functions
σ(t) = C_1 (e^λ t√(α/β))^3/4 -η _2F_1(3/4, 3/4-η;1-η; -α e^2 λ t/β)
+ C_2 (e^λ t√(α/β))^3/4 +η _2F_1(3/4, 3/4+η;1+η; -α e^2 λ t/β),
where η = √(9/16 - λ^-2 q). As an example one can take η = 1/2 and hence q = 5/16λ^2 which gives us
σ(t) = (e^λ t√(α/β))^1/4(C_1
√(1+√(1+α/βe^2λ t))/√(1 + α/βe^2λ t)
+ C_2 sin(1/2arctan√(α/β)e^λ t)/√(1 + α/βe^2λ t)).
§.§ Constant σ solutions
Consider the scale factor
a(t) = t^n(α e^γ t^2 + β e^-γ t^2),
σ(t) = σ_0 = const,
and the eigenvalue problem is transformed into a polynomial equation in e^2γ t^2
∑_j=0^6 B_j e^2j γ t^2=0.
From the highest order term we get n(2n-1)(3n-2)=0.
On the other hand we see that αβ =0 and without loss of generality we set β =0.
In the case n=2/3 it remains to solve
4/9α ^6 (108 γ -6 Λ +4 q+3
σ_0^2) =0,
8/3α ^6 (12 γ ^2+6 γΛ -3 γσ_0^2+44 γ q-2 Λ q+q
σ_0^2) =0,
α^6 (6336 γ ^3-288 γ ^2 Λ +144 γ ^2
σ_0^2+2064 γ ^2 q
-176 γΛ q+88 γ q σ_0^2+4 Λ
^2 q-4 Λ q σ_0^2+q σ_0^4) =0 ,
96 α ^6 γ^2 (180 γ ^2-6 γΛ +3 γσ_0^2+44 γ q-2 Λ q+qσ_0^2)=0,
2304 α ^6 γ ^4 (6 γ+q)=0.
It is evident that q=-6γ and after substitution one finds that
σ_0^2=2Λ-28γ.
Instead of σ_0 we will introduce a parameter η such that σ_0^2 = 2Λη, and hence the final solution reads
a(t) = A t^2/3e^Λ/14(1-η) t^2,
q = -3/7Λ (1-η),
σ^2 = 2Λη.
In case n=1/2 we conclude that β =0 and q=-6γ in the same way as in the previous case. Hence we have the following conditions
-6 α ^6 γ(16
γ -2 Λ +σ_0^2) (36 γ -2
Λ +σ_0^2) =0,
-288 α ^6 γ^3 (24 γ -2 Λ+σ_0^2)=0,
which clearly has no solution.
Finally the third case n=0 was discussed previously.
The solution (<ref>) converges to an isotropic solution as η tends to 0. This isotropic solution has been found and discussed in papers<cit.>. Moreover, there are several more solutions of the flat FLRW model that can be extended to the anisotropic case with constant σ. The other solutions of the FLRW model can be treated similarly and provide the following solutions:
a_1(t) = A cosh^2/3(√(3 Λ/8) (1-η) t) ,
q = 3 Λ/8 (1-η)^2, σ^2 = 2 Λη (2-η),
a_2(t) = A sinh^2/3(√(3 Λ/8) (1-η) t) ,
q = 3 Λ/8 (1-η)^2, σ^2 = 2 Λη (2-η),
a_3(t) = A cos^2/3(√(-3 Λ/8) (1-η) t) ,
q = 3 Λ/8 (1-η)^2, σ^2 = 2 Λη (2-η),
a_4(t) = A sin^2/3(√(-3 Λ/8) (1-η) t) ,
q = 3 Λ/8 (1-η)^2, σ^2 = 2 Λη (2-η).
It is worth noting that in the case of the solutions a_1(t) and a_2(t) the cosmological constant Λ is positive, while for a_3(t) and a_4(t) it is negative.
§.§ FLRW solutions as anisotropic solutions
The expressions for scalar curvature for the metric (<ref>) and FLRW metric (with arbitrary k) are
R = 6 (a(t) ä(t)+ȧ(t)^2)/a(t)^2+σ (t)^2,
R_FLRW = 6 (a(t) ä(t)+ȧ(t)^2 +k )/a(t)^2.
Comparing these expressions for the scalar curvature, we see that if we choose a scale factor a(t) which is a solution of the FLRW model with k≠ 0 and take σ(t) = σ_0 a(t)^-1, then expressions (<ref>) and (<ref>) are equal provided σ_0^2 = 6k. Therefore each scale factor a(t) which is a solution of the FLRW model with k=1 can be extended to an anisotropic solution by formulas (<ref>) and (<ref>). According to the paper <cit.>, three such scale factors have been found:
* a(t) = A e^±√(1/6Λ) t, σ(t) = √(6)/A e^∓√(1/6Λ) t,
* a(t) = A cosh^1/2√(2/3Λ) t, σ(t) = √(6)/Acosh^-1/2√(2/3Λ) t,
* a(t) = A sinh^1/2√(2/3Λ) t, σ(t) = √(6)/Asinh^-1/2√(2/3Λ) t.
On the other hand if we take any FLRW solution for k=0 in nonlocal de Sitter model (<ref>) and choose σ(t) in the following way
σ(t)^2 =σ_0^2(6 (a(t) ä(t)+ȧ(t)^2)/a(t)^2-2 Λ),
we see that R-2Λ = (1+σ_0^2)(R_FLRW -2Λ), i.e. the two quantities are proportional. Since the d'Alembertian of the metric (<ref>) depends only on H(t) and the eigenvalue problem (<ref>) is linear, each solution of the FLRW model is then an anisotropic solution as well with this choice of σ(t).
There are (at least) eight such solutions
* a(t) = A t^2/3 e^Λ/14 t^2, σ(t)= σ_0 t^-1(7+3Λ t^2),
* a(t) = A e^Λ/6 t^2, σ(t) = σ_0 Λ t,
* a(t) = A cosh^2/3√(3/8Λ) t, σ(t) = σ_0 √(10Λ -9Λcosh^-2√(3/8Λ) t),
* a(t) = A sinh^2/3√(3/8Λ) t, σ(t) = σ_0 √(10Λ +9 Λsinh^-2√(3/8Λ) t),
* a(t) = A (1±sin√(-3/2Λ) t)^1/3, σ(t) = σ_0/√(± 1 +sin√(-3/2Λ) t),
* a(t) = A cos^2/3√(-3/8Λ) t, σ(t) = σ_0/√(1+cos√(-3/2Λ) t),
* a(t) = A sin^2/3√(-3/8Λ) t, σ(t) = σ_0/√(-1+cos√(-3/2Λ) t).
§ CONCLUDING REMARKS
In this paper, several anisotropic and homogeneous Bianchi I cosmological solutions of the nonlocal de Sitter gravity model (<ref>) have been presented. The anisotropy depends on two time-dependent parameters, σ(t) and θ(t). The equations of motion contain σ(t), while θ(t) remains undetermined. These anisotropic solutions are an extension of the corresponding homogeneous and isotropic ones, and when the parameter σ(t) tends to zero, the anisotropy disappears.
It is worth noting that anisotropic cosmological solutions may be important not only for the study of space-time dynamics at an early stage of the Universe but also at late cosmic scales in the framework of dipole cosmology<cit.>, where there is evidence of some dipole anisotropy, which may have implications for the currently debated cosmic tensions (including H_0).
The simple nonlocal de Sitter gravity model (<ref>) exhibits a rich spectrum of cosmological solutions obtained so far, see also<cit.>. We plan to continue exploring new cosmological possibilities of (<ref>).
§ ACKNOWLEDGMENTS
I would like to thank Branko Dragovich, Zoran Rakic and Jelena Stankovic for numerous discussions and comments about the paper.
This research was partially funded by the Ministry of Education, Science and Technological Developments of the Republic of Serbia: grant number 451-03-47/2023-01/ 200104 with University of Belgrade, Faculty of Mathematics. It is also
partially supported by the COST Action: CA21136 – Addressing observational tensions in cosmology with
systematics and fundamental physics (CosmoVerse).
0
Planck2018 N. Aghanim, et al., Planck 2018 results. VI. Cosmological parameters, Planck 118 collaboration
A&A 641 (2020) A6 [arXiv:1807.06209 [astro-ph.CO]].
faraoni T. P. Sotiriou and V. Faraoni, f(R) theories of gravity, Rev. Mod. Phys.
82 (2010) 451 [arXiv:0805.1726v4 [gr-qc]].
nojiri S. Nojiri and S. D. Odintsov, Unified cosmic history in modified gravity: from F(R) theory
to Lorentz non-invariant models, Phys. Rep. 505 (2011) 59–144 [arXiv:1011.0544v4 [gr-qc]].
clifton T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis, Modified gravity and cosmology,
Phys. Rep. 513 (2012) 1 [arXiv:1106.2476v2 [astro-ph.CO]].
nojiri1 S. Nojiri, S. D. Odintsov and V. K. Oikonomou, Modified gravity theories on a nutshell: Inflation, bounce and late-time evolution, Phys. Rep. 692 (2017) 1–104 [arXiv:1705.11098 [gr-qc]].
capozziello S. Capozziello and F. Bajardi, Nonlocal gravity cosmology: An overview, Int. J. Mod. Phys. D 31 (2022) 2230009 [arXiv:2201.04512 [gr-qc]].
modesto L. Modesto and L. Rachwal, Super-renormalizable and finite gravitational theories,
Nucl. Phys. B 889 (2014) 228. arXiv:1407.8036 [hep-th].
dimitrijevic10 I. Dimitrijevic, B. Dragovich, A. S. Koshelev, Z. Rakic and J. Stankovic, Cosmological solutions of a nonlocal square-root gravity, Phys. Lett. B 797 (2019) 134848 arXiv:1906.07560 [gr-qc].
dimitrijevic11 I. Dimitrijevic, B. Dragovich, A. S. Koshelev, Z. Rakic and J. Stankovic, Some cosmological solutions of a new nonlocal gravity model, Symmetry 2020 12 (2020) 917 [arXiv:2006.16041 [gr-qc]].
dimitrijevic12 I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, New cosmological solutions of a nonlocal gravity model, Symmetry 2022 14 (2022) 3 [arXiv:2112.06312 [gr-qc]].
dimitrijevic13 I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, Nonlocal de Sitter gravity and its exact cosmological solutions, JHEP12(2022)054 [arXiv:2206.13515v1 [gr-qc]].
dimitrijevic20 I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic,On the Schwarzschild-de Sitter metric of nonlocal de Sitter gravity, to be published in Filomat, arXiv:2212.13896 [gr-qc], (2023).
nojiri2 S. Nojiri, S. D. Odintsov, V. K. Oikonomou and A. Constantini, Formalizing anisotropic inflation in modified gravity, Nucl. Phys. B 985, (2022) 116011, arXiv:2210.16383 [gr-qc].
devi L. A. Devi, S. S. Singh, L. Kumrah and M. K. Alam, Anisotropic Universe in f(Q) gravity with hybrid expansion, arXiv:2209.03959 [gr-qc], (2022).
kumar K. S. Kumar, S. Maheshwari, A. Mazumdar and J. Peng, An anisotropic bouncing universe in non-local gravity, JCAP07(2021)025,
arXiv:2103.13980 [gr-qc], (2021).
dimitrijevic9 I. Dimitrijevic, B. Dragovich, Z. Rakic and J. Stankovic, Variations of infinite derivative modified gravity, Springer Proc. Mathematics & Statistics 263 (2018) 91.
biswas4 T. Biswas, A. Conroy, A. S. Koshelev and A. Mazumdar, Generalized gost-free quadratic curvature gravity, Class. Quantum Grav. 31 (2014) 015022 [arXiv:1308.2319].
krishnan C. Krishnan, R. Mondol and M. M. Sheikh-Jabbari, Dipole cosmology: The Copernican paradigm beyond FLRW, arXiv:2209.14918v2 [astro-ph.CO], (2022).
|
http://arxiv.org/abs/2307.02163v1
|
20230705100554
|
EMORF/S: EM-Based Outlier-Robust Filtering and Smoothing With Correlated Measurement Noise
|
[
"Aamir Hussain Chughtai",
"Muhammad Tahir",
"Momin Uppal"
] |
eess.SP
|
[
"eess.SP"
] |
EMORF/S: EM-Based Outlier-Robust Filtering and Smoothing With Correlated Measurement Noise
Aamir Hussain Chughtai, Muhammad Tahir, Senior Member, IEEE, and Momin Uppal, Senior Member, IEEE
The authors are with Department of Electrical Engineering, Lahore University of Management Sciences, DHA Lahore Cantt., 54792, Lahore Pakistan. (email: [email protected]; [email protected]; [email protected])
June 2023
========================================================================================================================================================================================================================================================================================================================================
In this article, we consider the problem of outlier-robust state estimation where the measurement noise can be correlated. Outliers in data arise for many reasons, such as sensor malfunctioning, environmental effects, communication glitches, etc. Moreover, noise correlation emerges in several real-world applications, e.g. sensor networks, radar data, GPS-based systems, etc. We account for these effects in the system model, which is subsequently used for inference. We employ the Expectation-Maximization (EM) framework to derive both outlier-resilient filtering and smoothing methods, suitable for online and offline estimation respectively. The standard Gaussian filtering and the Gaussian Rauch–Tung–Striebel (RTS) smoothing results are leveraged to devise the estimators. In addition, Bayesian Cramer-Rao Bounds (BCRBs) for a filter and a smoother which can perfectly detect and reject outliers are presented. These serve as useful theoretical benchmarks to gauge the error performance of different estimators. Lastly, numerical experiments for an illustrative target tracking application are carried out, indicating performance gains compared to similarly engineered state-of-the-art outlier-rejecting state estimators. The advantages are in terms of simpler implementation, enhanced estimation quality, and competitive computational performance.
State-Space Models, Approximate Bayesian Inference, Nonlinear Filtering and Smoothing, Outliers, Kalman Filters, Variational Inference, Expectation-Maximization, Robust Estimation, Statistical Learning, Stochastic Dynamical Systems.
§ INTRODUCTION
State estimation is a fundamental task in analyzing different dynamical systems, with subsequent decision-making and control actions, arising in a variety of fields including cybernetics, robotics, power systems, sensor fusion, positioning, and target tracking <cit.>. The states describing the system dynamics can evolve intricately. Moreover, they are not directly observable, manifesting themselves only in the form of external measurements. Mathematically, this means that state estimation in general requires inference over stochastic nonlinear equations, making it a nontrivial task.
Filtering is the common term used for online state estimation, where inference is carried out at each arriving sample. The Kalman filter, with its linear and nonlinear versions <cit.>, is considered the primary choice for filtering given its ease of implementation and estimation performance. Other options for nonlinear filtering are also available, including methods based on Monte-Carlo (MC) approximations, e.g. Particle Filters (PFs) <cit.>, the ensemble Kalman filter (EnKF) <cit.>, etc.
Smoothing, on the other hand, refers to offline state estimation, where the primary concern is not to work on a per-sample basis; rather, state inference is performed considering the entire batch of measurements. Different options for smoothing exist, including the famous Rauch–Tung–Striebel (RTS) and two-filter smoothers <cit.>.
The standard state estimators are devised under the assumption that the dynamical system under consideration is perfectly modeled. The estimators assume the availability of system and observation mathematical models, including the process and measurement noise statistics. However, any modeling mismatch can result in deteriorated performance, possibly even crippling the functionality of the regular estimators completely.
In this work, we are interested in coping with the modeling discrepancy and the associated estimation degradation that result from the occurrence of outliers in the measurements. Data outliers can arise due to several factors including data communication problems, environmental variations and effects, data preprocessing front-end malfunctioning, inherent sensor defects and degradation, etc. <cit.>. We keep our consideration generic by taking into account the possibility of correlated measurement noise with a fully enumerated nominal noise covariance matrix. This is in contrast to the existing approaches where noise in each data dimension is assumed to be independent, targeting a specific class of applications <cit.>. However, this leaves out several important application scenarios where noise correlation exists and should be taken into account. For example, due to double differencing of the original measurements in Real Time Kinematic (RTK) systems, noise correlation appears <cit.>. Likewise, a significant negative correlation exists between the range and range-rate measurement noise in radar data <cit.>. Similarly, due to the use of a common reference sensor to extract the time difference of arrival (TDOA), correlated range measurement noise arises <cit.>. Besides, in different sensor networks, correlated observation noise also emerges <cit.>.
The problem of neutralizing data outliers during state estimation has been approached with various proposals. The traditional way of dealing with outliers is based on assuming fixed statistics for the measurement noise or for the residuals between predicted and actual measurements. For example, different methods resort to describing the observation noise using heavy-tailed distributions like the Student-t and Laplace densities <cit.>. Similarly, the theory of robust statistics suggests the use of prior models for the residuals to downweight the effect of outliers during inference <cit.>. Moreover, some techniques are based on rejecting a data sample by comparing the normalized measurement residuals with some predefined thresholds <cit.>.
A literature survey indicates that the performance of the conventional approaches is sensitive to the tuning of the design parameters which affect the static residual error loss functions during estimation <cit.>. Therefore, tuning-free learning-based techniques, which make the error loss function adaptive, have been advocated in prior works <cit.>. These approaches consider appropriate distributions for the measurement noise and subsequently learn the parameters describing the distributions, and consequently the loss functions, during state estimation. Fig. <ref> depicts the comparison of typical static loss functions in traditional approaches and dynamic loss functions in learning-based methods, considering a uni-dimensional model for visualization (see Section III-D <cit.> for more details). As a result, learning-based robust state estimators offer more advantages by reducing user input, being more general, and being better suited for one-shot scenarios.
Several learning-based methods for robust state estimation have been reported in the relevant literature. As exact inference is not viable for developing these approaches, approximate inference techniques like PFs and variational Bayesian (VB) methods can be used in their design.
Since PFs can be computationally prohibitive, VB-based techniques are an appealing alternative, considering that they can leverage the existing standard filtering and smoothing results. Our focus in this work remains on the learning-based outlier mitigation approaches designed using VB.
In previous works, we observe that various outlier-robust state estimators devised using VB treat the entire measurement vector collectively during estimation, owing to under-parameterized modeling <cit.>. Instead of treating each dimension individually, the complete vector is either considered or downweighted by varying the noise covariance matrix by a scalar multiplicative factor. This leaves room for improvement, considering that useful information is unnecessarily lost during inference. In this regard, we offer a vectorial parameterization to treat each dimension individually in <cit.>. Therein we also suggest a way to make the estimators in <cit.> selective. However, these proposals are based on the assumption of independent noise for each measurement dimension. Another learning-based outlier-resilient filter has been presented in <cit.>. However, the authors only consider linear systems and test the method with diagonal measurement noise covariance matrices. With the possibility of correlated measurements, the Variational Bayes Kalman Filter (VBKF) has been devised <cit.> by extending the work in <cit.>. However, we observe that VBKF assumes a complex hierarchical model. As a result, along with updating the state densities, it involves updating the nuisance parametric distributions and their hierarchical distributions during the VB updates. This includes evaluation of the digamma function to find the expectation of logarithmic expressions <cit.>. Therefore, implementing VBKF can get complicated, e.g. within an embedded computing device where access to such functions is not inherently available and additional libraries are required. Moreover, extending VBKF to outlier-robust smoothing also gets cumbersome. This calls for simpler state estimation approaches, for systems with correlated noise, that can weather the effect of outliers.
With this background, considering the possibility of correlated measurements in nonlinear dynamical systems, we make the following contributions in this work.
* Using a suitable model and VB (more specifically, Expectation-Maximization (EM)), we devise an outlier-robust filter leveraging the standard Gaussian filtering results. These results are further utilized in deriving an outlier-robust smoother based on the standard Gaussian RTS smoothing. Since our proposed method is inspired by our prior work which considers independent measurement noise <cit.>, we also present insightful connections.
* We derive Bayesian Cramer-Rao Bounds (BCRBs) for a filter and a smoother which can perfectly detect and reject outliers. This provides a useful benchmark to assess the estimation ability of different outlier-mitigating estimators.
* We evaluate the performance of the devised estimators as compared to the other similarly devised outlier-discarding methods. Different scenarios of a relevant TDOA-based target tracking application are considered in numerical experiments indicating the merits of the proposed methods.
The rest of the article is organized as follows. Section <ref>
provides the modeling details. In Section <ref>, we present the derivation of the proposed filter. Thereafter, the derivation of the proposed smoother is given in Section <ref>. In Section <ref>, BCRBs for a filter and a smoother with perfect outlier detecting and rejecting capabilities are provided. Subsequently, the performance evaluation results have been discussed in Section <ref>. The paper ends with a conclusive commentary in Section <ref>.
Notation: As a general notation in this work, 𝐫^⊤ is the transpose of the vector 𝐫, r^i denotes the ith element of a vector 𝐫; 𝐫^i- is the vector 𝐫 with its ith element removed; the subscript k is used for time index; 𝐫_k is the vector 𝐫 at time instant k; 𝐫_k- is the group of vectors 𝐫 considering the entire time horizon except the time instant k; R^i,j is the element of the matrix 𝐑 present at the ith row and jth column; 𝐑^-1 is the inverse of 𝐑; |𝐑| is the determinant of 𝐑; ℜ is the swapped form of 𝐑 where the swapping operation is defined in a particular context; δ(.) represents the delta function; ⟨.⟩_q(ψ_k) denotes the expectation of the argument with respect to the distribution q(ψ_k); tr(.) is the trace operator; a mod b denotes the remainder of a/b; the superscripts - and + are used for the predicted and updated filtering parameters respectively; the superscript s is used for the parameters of the marginal smoothing densities. Other symbols are defined in their first usage context.
§ STATE-SPACE MODELING
§.§ Standard modeling
Consider a standard nonlinear discrete-time state-space model (SSM) to represent the dynamics of a physical system given as
𝐱_k = 𝐟(𝐱_k-1)+𝐪_k-1
𝐲_k = 𝐡(𝐱_k)+𝐫_k
where 𝐱_k∈ℝ^n and y_k ∈ℝ^m denote the state and measurement vectors respectively; the nonlinear functions f(.):ℝ^n→ℝ^n and h(.):ℝ^n→ℝ^m represent the process dynamics and observation transformations respectively; q_k∈ℝ^n and r_k∈ℝ^m account for the additive nominal process and measurement noise respectively. q_k and r_k are assumed to be statistically independent, White, and normally distributed with zero mean and known covariance matrices Q_k and R_k respectively. We consider that R_k can be a fully enumerated matrix capturing the correlations between the measurement noise entries.
§.§ Modeling outliers for inference
The model in (<ref>)-(<ref>) assumes that the measurements are only affected by nominal measurement noise r_k. However, the observations in every dimension can be corrupted with outliers leading to the disruption of standard state estimators as the measurement data cannot be described by the regular model. Therefore, data outliers need to be appropriately modeled within the generative SSM with two basic objectives. Firstly, the model should sufficiently capture the effect of outlier contamination in the data. Secondly, the model should remain amenable to inference.
To model the outliers in the SSM, we consider an indicator vector ℐ_k∈ℝ^m with Bernoulli-type elements, where its ith element ℐ^i_k can assume two possible values: ϵ (close to zero) and 1. ℐ^i_k=ϵ denotes the presence of an outlier, whereas ℐ^i_k=1 indicates no outlier in the ith dimension at time k. Since outliers can occur independently at any instant, we assume that the elements of ℐ_k are statistically independent of their past. Additionally, we assume the entries of ℐ_k to be independent of each other since, in general, no knowledge of correlations between outliers is available, and such correlations are not easy to model anyway. Moreover, this choice is motivated by the goal of inferential tractability. We also consider ℐ_k and 𝐱_k to be statistically independent since the outlier occurrence does not depend on the state value. The assumed distribution of ℐ_k is given as
p(ℐ_k)=∏_i=1^mp(ℐ^i_k)=∏_i=1^m (1-θ^i_k) δ(ℐ^i_k-ϵ)+θ^i_kδ( ℐ^i_k-1)
where θ^i_k denotes the prior probability of no outlier in the ith observation at time k. Further, the conditional measurement likelihood given the current state 𝐱_k and the indicator ℐ_k, is proposed to be normally distributed as
p(𝐲_k|𝐱_k,ℐ_k)=𝒩(𝐲_k|𝐡(𝐱_k),𝐑_k(ℐ_k) )
=1/√((2 π)^m|𝐑_k(ℐ_k)|) exp{-1/2(𝐲_k-𝐡(𝐱_k))^⊤𝐑_k^-1(ℐ_k)(𝐲_k-𝐡(𝐱_k)) }
where
𝐑_k(ℐ_k) ≜ [ R^1,1_k / ℐ^1_k … R^1,m_k δ( ℐ^1_k-1) δ( ℐ^m_k-1); ⋮ ⋱ ⋮; R^m,1_k δ( ℐ^m_k-1) δ( ℐ^1_k-1) ⋯ R^m,m_k / ℐ^m_k ]
𝐑_k(ℐ_k) is the modified covariance matrix of the measurements considering the effect of outliers. The effect of 𝐑_k(ℐ_k) on the data generation process can be understood by considering the possible values of ℐ^i_k. In particular, ℐ^i_k=ϵ leads to a very large ith diagonal entry of 𝐑_k(ℐ_k), while placing zeros at the remaining ith row and column of the matrix. Resultingly, when an outlier occurs in the ith dimension its effect on state estimation is minimized. Moreover, the ith dimension no longer has any correlation with any other entry, ceasing to have any effect on any other dimension during inference. This is in contrast to ℐ^i_k=1 which ensures the diagonal element and the off-diagonal correlation entries with other non-affected dimensions are preserved. Lastly, note that the conditional likelihood is independent of the batch of all the historical observations 𝐲_1:k-1. Fig. <ref> shows how the standard probabilistic graphical model (PGM) is modified into the proposed PGM for devising outlier-robust state estimators. The suggested PGM meets the modeling aims of describing the nominal and corrupted data sufficiently while remaining docile for statistical inference.
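To make this construction concrete, the following is a minimal sketch (Python/NumPy, with hypothetical names; not the authors' reference implementation) of how 𝐑_k(ℐ_k) can be assembled from the nominal covariance and an indicator vector.

import numpy as np

def modified_covariance(R, indicator, eps=1e-6):
    """Build R_k(I_k): dimensions flagged as outliers (I^i = eps) get their
    diagonal entry inflated to R[i, i] / eps and their cross-correlations
    zeroed; clean dimensions (I^i = 1) are kept unchanged."""
    m = R.shape[0]
    R_mod = np.array(R, dtype=float, copy=True)
    outlier = np.isclose(indicator, eps)          # boolean mask of flagged dims
    for i in range(m):
        if outlier[i]:
            R_mod[i, :] = 0.0                     # remove correlations
            R_mod[:, i] = 0.0
            R_mod[i, i] = R[i, i] / eps           # inflate variance
    return R_mod

# Example: a fully populated 3x3 nominal covariance with dimension 1 flagged
R_nom = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])
I_hat = np.array([1.0, 1e-6, 1.0])
print(modified_covariance(R_nom, I_hat))

For ℐ^i_k=1 the ith row and column of the nominal covariance are retained, while for ℐ^i_k=ϵ the variance is inflated to R^i,i_k/ϵ and the cross-correlations are zeroed, exactly as described above.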
§ PROPOSED ROBUST FILTERING
In filtering, we are interested in the posterior distribution of 𝐱_k conditioned on all the observations 𝐲_1:k that have been observed till time k. For this objective, we can employ the Bayes rule recursively. Given the proposed observation model, the analytical expression of the joint posterior distribution of 𝐱_k and ℐ_k conditioned on the set of all the observations 𝐲_1:k is given as
p(𝐱_k,ℐ_k|𝐲_1:k)=p(𝐲_k|ℐ_k,𝐱_k) p(𝐱_k|𝐲_1:k-1)p(ℐ_k)/p(𝐲_k|𝐲_1:k-1)
Theoretically, the joint posterior can further be marginalized to obtain the required posterior distribution p(𝐱_k|𝐲_1:k). Assuming p(𝐱_k|𝐲_1:k-1) as a Gaussian distribution, we need to run 2^m Kalman filters corresponding to each combination of ℐ_k to obtain the posterior. This results in computational complexity of around 𝒪(2^m m^3) where m^3 appears due to matrix inversions (ignoring sparsity). Moreover, we need to resort to a debatable approximation of the resulting Gaussian mixture distribution for p(𝐱_k|𝐲_1:k) as single Gaussian density for recursive tractability. Therefore, this approach clearly becomes impractical and unsuitable.
To get around the problem, we can possibly employ the standard VB method where the product of VB marginals is conveniently used to approximate the joint posterior. We assume the following factorization of the posterior
p(𝐱_k,ℐ_k|𝐲_1:k)≈ q^f(𝐱_k)∏_i q^f(ℐ^i_k)
The VB approximation aims to minimize the Kullback-Leibler (KL) divergence between the product approximation and the true posterior and leads to the following marginals <cit.>
q^f(𝐱_k) ∝exp( ⟨ln ( p(𝐱_k,ℐ_k|𝐲_1:k))⟩_ q^f(ℐ_k) )
q^f(ℐ^i_k) ∝exp( ⟨ln ( p(𝐱_k,ℐ_k|𝐲_1:k))⟩_q^f(𝐱_k) q^f(ℐ^i-_k) ) ∀ i
Using (<ref>)-(<ref>) alternately the VB marginals can be updated iteratively until convergence. The procedure provides a useful way to approximate the true marginals of the joint posterior by approximating these as p(𝐱_k|𝐲_1:k) ≈q^f(𝐱_k) and p(ℐ_k|𝐲_1:k)≈∏_i q^f(ℐ^i_k).
For our model, (<ref>) becomes computationally unfriendly if we use the standard VB approach. In fact, the same complexity order of 𝒪(m^3 2^m) appears as with the basic marginalization approach making this approach intractable too. We elaborate more on it in the upcoming subsection.
§.§ Expectation-Maximization as a particular case of variational Bayes
To deal with the complexity issue, instead of considering distributions we can resort to point estimates for ℐ^i_k. In particular, consider q^f(ℐ^i_k)= δ(ℐ^i_k-ℐ̂^i_k) where ℐ̂^i_k denotes the point approximation of ℐ^i_k. Consequently, the variational distributions can be updated in an alternating manner in the Expectation (E) and Maximization (M) steps in the EM algorithm given as <cit.>
§.§.§ E-Step
q^f(𝐱_k) =p(𝐱_k|𝐲_1:k,ℐ̂_k) ∝ p(𝐱_k,ℐ̂_k|𝐲_1:k)
§.§.§ M-Step
ℐ̂^i_k= argmax_{ℐ^i_k}⟨ln p(𝐱_k,ℐ^i_k,ℐ̂^i-_k|𝐲_1:k)⟩_q^f(𝐱_k)
where all ℐ̂^i_k in the M-Step are successively updated using the latest estimates.
§.§ Prediction
For filtering, we first obtain the predictive distribution p(𝐱_k|𝐲_1:k-1) using the posterior distribution at the previous instant p(𝐱_k-1|𝐲_1:k-1) approximated as Gaussian q^f(𝐱_k-1)≈𝒩(𝐱_k-1|𝐦^+_k-1,𝐏^+_k-1). Using Gaussian (Kalman) filtering results we make the following approximation <cit.>
p(𝐱_k|𝐲_1:k-1)≈𝒩(𝐱_k|𝐦^-_k,𝐏^-_k)
where
𝐦^-_k= ∫𝐟(𝐱_k-1) 𝒩(𝐱_k-1|𝐦^+_k-1,𝐏^+_k-1) d𝐱_k-1
𝐏^-_k= ∫{( 𝐟(𝐱_k-1)-𝐦^-_k)(𝐟(𝐱_k-1)-𝐦^-_k)^⊤
𝒩(𝐱_k-1|𝐦^+_k-1,𝐏^+_k-1)} d𝐱_k-1+𝐐_k-1
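As an illustration, these prediction integrals can be approximated with any Gaussian quadrature rule; the sketch below (hypothetical helper names) uses a basic unscented transform, which is also the choice made in the numerical experiments later on.

import numpy as np

def sigma_points(m, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights."""
    n = m.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.column_stack([m, m[:, None] + S, m[:, None] - S])   # (n, 2n+1)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def predict(f, m_post, P_post, Q):
    """Gaussian-filter prediction: m_k^- and P_k^- via the unscented transform."""
    pts, wm, wc = sigma_points(m_post, P_post)
    fx = np.column_stack([f(pts[:, j]) for j in range(pts.shape[1])])
    m_pred = fx @ wm
    d = fx - m_pred[:, None]
    P_pred = d @ np.diag(wc) @ d.T + Q
    return m_pred, P_pred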
§.§ Update
Using the expressions of the prior distributions from (<ref>) and (<ref>) along with the conditional measurement likelihood in (<ref>), we can express the joint posterior distribution as
p(𝐱_k, ℐ_k|𝐲_1:k) ∝ 𝒩(𝐱_k | 𝐦_k^-, 𝐏_k^-) 1/√((2 π)^m|𝐑_k(ℐ_k)|) exp{-1/2(𝐲_k-𝐡(𝐱_k))^⊤𝐑_k^-1(ℐ_k)(𝐲_k-𝐡(𝐱_k))}
×∏_i{(1-θ_k^i) δ(ℐ_k^i-ϵ)+θ_k^iδ(ℐ_k^i-1) }
§.§.§ Derivation of q^f(𝐱_k)
With the E-Step in (<ref>) we can write
q^f(𝐱_k) ∝ exp{-1/2(𝐲_k-𝐡(𝐱_k))^⊤𝐑_k^-1 (ℐ̂_k) (𝐲_k-𝐡(𝐱_k))
-1/2(𝐱_k-𝐦_k^-)^⊤ (𝐏_k^-)^-1(𝐱_k-𝐦_k^-)}
where 𝐑_k^-1(ℐ̂_k) assumes a particular form resulting from the inversion of 𝐑_k(ℐ̂_k) as described in Appendix <ref>.
Note that we avoid evaluating ⟨𝐑_k^-1 (ℐ_k)⟩_ q^f(ℐ_k) that would be required in the standard VB approach. This means that we are able to evade the complexity level of around 𝒪(m^3 2^m) since matrix inversion for each of the 2^m combinations are required to evaluate the expectation. However, thanks to EM, we are now working with 𝐑_k^-1(ℐ̂_k) which can be evaluated with the maximum complexity of 𝒪(m^3) (considering a fully populated matrix).
To proceed further, we use the results of the general Gaussian filter, to approximate q^f(𝐱_k) with a Gaussian distribution, 𝒩(𝐱_k|𝐦^+_k,𝐏^+_k), with parameters updated as
𝐦^+_k =𝐦^-_k+𝐊_k (𝐲_k-μ_k)
𝐏^+_k =𝐏^-_k-𝐂_k𝐊^⊤_k
where
𝐊_k= 𝐂_k (𝐔_k+𝐑_k(ℐ̂_k) )^-1={𝐂_k ( 𝐑_k^-1(ℐ̂_k)
-𝐑_k^-1(ℐ̂_k)(𝐈+𝐔_k 𝐑_k^-1(ℐ̂_k) )^-1𝐔_k 𝐑_k^-1(ℐ̂_k) )}
μ_k= ∫𝐡(𝐱_k) 𝒩(𝐱_k|𝐦^-_k,𝐏^-_k) d𝐱_k
𝐔_k= ∫(𝐡(𝐱_k)-μ_k)(𝐡(𝐱_k)-μ_k)^⊤𝒩(𝐱_k|𝐦^-_k,𝐏^-_k)d𝐱_k
𝐂_k= ∫(𝐱_k-𝐦^-_k)(𝐡(𝐱_k)-μ_k)^⊤𝒩(𝐱_k|𝐦^-_k,𝐏^-_k)d𝐱_k
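A corresponding sketch of the E-step measurement update is given below. It reuses the sigma_points helper from the prediction sketch, forms the moments μ_k, 𝐔_k and 𝐂_k with the same quadrature, and applies the gain in the matrix-inversion-lemma form above so that only 𝐑_k^-1(ℐ̂_k) is required; names are hypothetical.

def update(h, m_pred, P_pred, y, R_mod_inv):
    """EM E-step state update given the current R_k^{-1}(I_hat)."""
    pts, wm, wc = sigma_points(m_pred, P_pred)
    hx = np.column_stack([h(pts[:, j]) for j in range(pts.shape[1])])
    mu = hx @ wm                                   # predicted measurement
    dh = hx - mu[:, None]
    dx = pts - m_pred[:, None]
    U = dh @ np.diag(wc) @ dh.T                    # innovation covariance (without R)
    C = dx @ np.diag(wc) @ dh.T                    # state-measurement cross-covariance
    m_dim = y.size
    # K = C (U + R(I))^{-1}, written via the matrix inversion lemma as above
    inner = np.linalg.inv(np.eye(m_dim) + U @ R_mod_inv)
    K = C @ (R_mod_inv - R_mod_inv @ inner @ U @ R_mod_inv)
    m_post = m_pred + K @ (y - mu)
    P_post = P_pred - C @ K.T
    return m_post, P_post, mu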
§.§.§ Derivation of ℐ̂^i_k
With the M-Step in (<ref>) we can write
ℐ̂^i_k= argmax_{ℐ^i_k}⟨ln p(𝐱_k,ℐ^i_k,ℐ̂^i-_k|𝐲_1:k )⟩_q^f(𝐱_k)
Using the Bayes rule we can proceed as
ℐ̂^i_k= argmax_{ℐ^i_k}{⟨ln p(𝐲_k|𝐱_k,ℐ^i_k,ℐ̂^i-_k, 𝐲_1:k-1)⟩_q^f(𝐱_k)
+ln p(ℐ^i_k| 𝐱_k,ℐ̂^i-_k,𝐲_1:k-1 )+const.}
where const. is a constant and the conditional independence of 𝐱 and 𝐳 given 𝐲 implies p(𝐱|𝐲,𝐳)=p(𝐱|𝐲). We can further write
ℐ̂^i_k= argmax_{ℐ^i_k}{ -1/2 tr( 𝐖_k 𝐑_k^-1( ℐ^i_k , ℐ̂^i-_k ) ) -1/2 ln|𝐑_k( ℐ^i_k , ℐ̂^i-_k ) |
+ln( (1-θ_k^i) δ(ℐ_k^i-ϵ)+θ_k^iδ (ℐ_k^i-1) ) }
where 𝐑_k( ℐ^i_k , ℐ̂^i-_k ) denotes 𝐑_k(ℐ_k) evaluated at ℐ_k with its ith element as ℐ^i_k and the remaining entries as ℐ̂^i-_k, and
𝐖_k=∫(𝐲_k-𝐡(𝐱_k)) (𝐲_k-𝐡(𝐱_k))^⊤𝒩(𝐱_k|𝐦^+_k,𝐏^+_k)d𝐱_k
Resultingly, ℐ̂^i_k can be determined as
ℐ̂^i_k =
1 if τ̂^i_k ≤ 0,
ϵ if τ̂^i_k >0
with
τ̂^i_k = { tr ( 𝐖_k 𝐑̂_k^-1 ) +ln(|𝐑_k( ℐ^i_k=1 , ℐ̂^i-_k ) |/|𝐑_k( ℐ^i_k=ϵ , ℐ̂^i-_k )|)
+2ln(1/θ^i_k-1 ) }
where
𝐑̂_k^-1= (𝐑_k^-1( ℐ^i_k=1 , ℐ̂^i-_k )-𝐑_k^-1( ℐ^i_k=ϵ , ℐ̂^i-_k ))
Using the steps outlined in Appendix <ref>, we can further simplify τ̂^i_k as
τ̂^i_k= { tr ( 𝐖_k 𝐑̂_k^-1 )
+ln|I-𝐑^-i,i_k𝐑^i,-i_k (𝐑̂^-i,-i_k)^-1/R^i,i_k|
+ ln(ϵ)+2ln(1/θ^i_k-1 ) }
where 𝐑̂^-i,-i_k is the submatrix of 𝐑_k(ℐ̂_k) corresponding to entries of ℐ̂^i-_k. 𝐑^i,-i_k and 𝐑^-i,i_k contain the measurement covariances between ith and rest of the dimensions.
Though we can directly evaluate 𝐑̂_k^-1 in (<ref>), we can save computations by avoiding repetitive calculations. To this end, we first need to compute
ℜ̂_k^-1=[ Ξ^i,i Ξ^i,-i; Ξ^-i,i Ξ^-i,-i ]
with
Ξ^i,i = 1/(R^i,i_k-𝐑^i,-i_k(𝐑̂^-i,-i_k)^-1𝐑^-i,i_k) - ϵ/R^i,i_k
Ξ^i,-i = -𝐑^i,-i_k (𝐑̂^-i,-i_k)^-1/(R^i,i_k-𝐑^i,-i_k(𝐑̂^-i,-i_k)^-1𝐑^-i,i_k)
Ξ^-i,i = -(𝐑̂^-i,-i_k)^-1𝐑^-i,i_k/(R^i,i_k-𝐑^i,-i_k(𝐑̂^-i,-i_k)^-1𝐑^-i,i_k)
Ξ^-i,-i = (𝐑̂^-i,-i_k)^-1𝐑^-i,i_k 𝐑^i,-i_k (𝐑̂^-i,-i_k)^-1/(R^i,i_k-𝐑^i,-i_k(𝐑̂^-i,-i_k)^-1𝐑^-i,i_k)
where the ith row/column entries in 𝐑̂_k^-1 have been conveniently swapped with the first row/column elements to obtain ℜ̂_k^-1. By swapping the first row/column entries of ℜ̂_k^-1 to the ith row/column positions we can reclaim 𝐑̂_k^-1. Appendix <ref> provides further details in this regard.
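For illustration, a direct (unoptimized) evaluation of the M-step statistic is sketched below; it simply builds 𝐑_k^-1(ℐ_k) for the two candidate values of ℐ^i_k, reusing the modified_covariance helper from the earlier sketch, rather than the computation-saving swapped form described above. The names and the default θ^i_k=0.5 follow the recommendations given later in the text.

def tau_statistic(i, I_hat, R, W, theta=0.5, eps=1e-6):
    """Direct evaluation of the decision statistic for dimension i (W = W_k above)."""
    I_one, I_eps = I_hat.copy(), I_hat.copy()
    I_one[i], I_eps[i] = 1.0, eps
    R_one = modified_covariance(R, I_one, eps)
    R_eps = modified_covariance(R, I_eps, eps)
    R_diff_inv = np.linalg.inv(R_one) - np.linalg.inv(R_eps)
    _, logdet_one = np.linalg.slogdet(R_one)
    _, logdet_eps = np.linalg.slogdet(R_eps)
    return (np.trace(W @ R_diff_inv) + logdet_one - logdet_eps
            + 2.0 * np.log(1.0 / theta - 1.0))

def m_step(I_hat, R, W, theta=0.5, eps=1e-6):
    """Successively update each indicator using the latest estimates."""
    for i in range(I_hat.size):
        I_hat[i] = 1.0 if tau_statistic(i, I_hat, R, W, theta, eps) <= 0 else eps
    return I_hat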
The resulting EM-based outlier-robust filter (EMORF) is outlined as Algorithm <ref>. For the convergence criterion, we suggest using the ratio of the L2 norm of the difference of the state estimates from the current and previous VB iterations and the L2 norm of the estimate from the previous iteration. This criterion has been commonly chosen in similar robust filters <cit.>.
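The convergence criterion mentioned above can be coded, for instance, as the following small check (hypothetical names; the threshold of 10^-4 used later in the experiments is taken as the default):

def has_converged(m_new, m_old, tol=1e-4):
    """Relative change between successive EM/VB iterates of the state estimate."""
    return np.linalg.norm(m_new - m_old) / np.linalg.norm(m_old) < tol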
§.§ VB factorization of p(𝐱_k,ℐ_k|𝐲_1:k) and the associated computational overhead
Note that for better accuracy we can factorize p(𝐱_k,ℐ_k|𝐲_1:k)≈ q^f(𝐱_k) q^f(ℐ_k) instead of forcing independence between all ℐ^i_k. In this case, expression for evaluating q^f(𝐱_k) remains same as in (<ref>). However, this choice leads to the following VB marginal distribution of ℐ_k
q^f(ℐ_k) ∝exp( ⟨ln ( p(𝐱_k,ℐ_k|𝐲_1:k))⟩_q^f(𝐱_k) )
This results in a modified M-Step in the EM algorithm given as
§.§.§ M-Step
ℐ̂_k= argmax_{ℐ_k}⟨ln p(𝐱_k,ℐ_k|𝐲_1:k)⟩_q^f(𝐱_k)
Proceeding further, with the prediction and update steps during inference, we can arrive at
ℐ̂_k= argmax_{ℐ_k}{ -1/2 tr ( 𝐖_k 𝐑_k^-1( ℐ_k ) ) -1/2 ln|𝐑_k( ℐ_k ) |
+∑_i ln( (1-θ_k^i) δ(ℐ_k^i-ϵ)+θ_k^i δ(ℐ_k^i-1) ) }
It is not hard to notice that determining ℐ̂_k using (<ref>) involves tedious calculations. In fact, we run into the same computational difficulty that we have been trying to avoid. To arrive at the result, we need to evaluate the inverses and determinants for each of the 2^m combinations corresponding to the entries of ℐ_k. This entails the prohibitive complexity level of 𝒪(m^3 2^m).
With the proposed factorization in (<ref>), we obtain a more practical and scalable algorithm. The resulting complexity is 𝒪(m^4) following from the evaluation of matrix inverses and determinants for calculating each of the ℐ̂^i_k ∀ i=1⋯ m in (<ref>).
§.§ Connection between EMORF and SORF <cit.>
Since the construction of EMORF is motivated by the selective observations rejecting filter (SORF) <cit.>, it is insightful to discuss their connection. We derived SORF considering a diagonal measurement covariance matrix 𝐑_k <cit.>. In SORF, we used distributional estimates for ℐ_k since this did not induce any significant computational strain. It is instructive to remark here that if we use point estimates of ℐ_k in SORF, it becomes a special case of EMORF. In particular, the point estimates for ℐ^i_k ∀ i in <cit.> can be obtained with the following criterion
ℐ̂^i_k =
1 if τ̅^i_k ≤ 0,
ϵ if τ̅^i_k >0
To deduce ℐ̂^i_k=1, the following should hold
Ω^i_k ≥1-Ω^i_k
Ω^i_k ≥0.5
where Ω^i_k denotes the posterior probability of ℐ^i_k=1. Using the expression of Ω^i_k from <cit.> we can write (<ref>) as
1/(1+√(ϵ)(1/θ^i_k-1)exp(W^i,i_k(1-ϵ)/(2R^i,i_k)))≥0.5
leading to
τ̅^i_k ≤ 0
where
τ̅^i_k = (W^i,i_k/R^i,i_k)(1-ϵ)+ln(ϵ) + 2ln(1/θ^i_k-1)
which can be recognized as the particular case of (<ref>) given that 𝐑^i,-i_k and 𝐑^-i,i_k vanish as 𝐑_k is considered to be diagonal. The first term in (<ref>) reduces to (W^i,i_k/R^i,i_k)(1-ϵ) since Ξ^i,i=(1-ϵ)/R^i,i_k, while Ξ^i,-i, Ξ^-i,i and Ξ^-i,-i all reduce to zero. Moreover, the second term in (<ref>) also disappears, resulting in (<ref>).
§.§ Choice of the parameters θ^i_k and ϵ
For EMORF we propose setting the parameters θ^i_k and ϵ the same as in SORF. Specifically, we suggest choosing a neutral value of 0.5 or an uninformative prior for θ^i_k. The Bayes-Laplace and the maximum entropy approaches for obtaining uninformative prior for a finite-valued parameter lead to the choice of the uniform prior distribution <cit.>. Moreover, the selection has been justified in the design of outlier-resistant filters assuming no prior information about the outliers statistics is available <cit.>. For ϵ we recommend its value to be close to zero since the exact value of 0 denies the VB/EM updates as in <cit.>.
§ PROPOSED ROBUST SMOOTHING
In smoothing, our interest lies in determining the posterior distribution of all the states 𝐱_1:K conditioned on the batch of all the observations 𝐲_1:K. With that goal, we take a similar approach to filtering and approximate the joint posterior distribution as a product of marginals
p(𝐱_1:K,ℐ_1:K|𝐲_1:K)≈ q^s(𝐱_1:K)∏_k∏_i q^s(ℐ^i_k)
where the true marginals are approximated as p(𝐱_1:K|𝐲_1:K)≈q^s(𝐱_1:K) and p(ℐ_1:K|𝐲_1:K)≈∏_k∏_i q^s(ℐ^i_k). Let us assume q^s(ℐ^i_k)= δ(ℐ^i_k-ℐ̆^i_k) where ℐ̆^i_k denotes the point approximation of ℐ^i_k. Consequently, the EM steps are given as
§.§.§ E-Step
q^s(𝐱_1:K) =p(𝐱_1:K|𝐲_1:K,ℐ̆_1:K) ∝ p(𝐱_1:K,ℐ̆_1:K|𝐲_1:K)
§.§.§ M-Step
ℐ̆^i_k= argmax_{ℐ^i_k}⟨ln p(𝐱_1:K,ℐ^i_k,ℐ̆^i-_k,ℐ̆_k- |𝐲_1:K)⟩_q^s(𝐱_1:K)
where all ℐ̆^i_k in the M-Step are sequentially updated using the latest estimates.
§.§.§ Derivation of q^s(𝐱_1:K)
With the E-Step in (<ref>) we can write
q^s(𝐱_1:K) ∝ p(𝐲_1:K|𝐱_1:K,ℐ̆_1:K)p(𝐱_1:K)
q^s(𝐱_1:K) ∝∏_k p(𝐲_k|𝐱_k,ℐ̆_k)p(𝐱_k|𝐱_k-1)
We can identify that q^s(𝐱_1:K) can be approximated as a Gaussian distribution from the results of general Gaussian RTS smoothing <cit.>. Using the forward and backward passes, we can determine the parameters of q^s(𝐱_k)∼𝒩(𝔪^s_k,𝒫^s_k), which denotes the marginalized densities of q^s(𝐱_1:K).
§.§.§ Forward pass
The forward pass essentially involves the filtering equations given as
𝔪^-_k= ∫𝐟(𝐱_k-1) 𝒩(𝐱_k-1|𝔪^+_k-1,𝒫^+_k-1) d𝐱_k-1
𝒫^-_k= ∫{( 𝐟(𝐱_k-1)-𝔪^-_k)(𝐟(𝐱_k-1)-𝔪^-_k)^⊤
𝒩(𝐱_k-1|𝔪^+_k-1,𝒫^+_k-1)} d𝐱_k-1+𝐐_k-1
𝔪^+_k= 𝔪^-_k+𝒦_k (𝐲_k-ν_k)
𝒫^+_k= 𝒫^-_k-𝒞_k𝒦^⊤_k
where
𝒦_k= 𝒞_k (𝒰_k+𝐑_k(ℐ̆_k) )^-1 = {𝒞_k ( 𝐑_k^-1(ℐ̆_k)
-𝐑_k^-1(ℐ̆_k)(𝐈+𝒰_k 𝐑_k^-1(ℐ̆_k) )^-1𝒰_k 𝐑_k^-1(ℐ̆_k) )}
ν_k= ∫𝐡(𝐱_k) 𝒩(𝐱_k|𝔪^-_k,𝒫^-_k) d𝐱_k
𝒰_k= ∫(𝐡(𝐱_k)-ν_k)(𝐡(𝐱_k)-ν_k)^⊤𝒩(𝐱_k|𝔪^-_k,𝒫^-_k)d𝐱_k
𝒞_k= ∫(𝐱_k-𝔪^-_k)(𝐡(𝐱_k)-ν_k)^⊤𝒩(𝐱_k|𝔪^-_k,𝒫^-_k)d𝐱_k
Note that 𝐑_k(ℐ̆_k) and 𝐑_k^-1(ℐ̆_k) can be evaluated similar to 𝐑_k(ℐ̂_k) and 𝐑_k^-1(ℐ̂_k).
§.§.§ Backward pass
The backward pass can be completed as
ℒ_k+1 = ∫(𝐱_k-𝔪^+_k)(𝐟(𝐱_k)-𝔪_k+1^-)^⊤𝒩(𝐱_k|𝔪^+_k,𝒫^+_k)d𝐱_k
𝒢_k = ℒ_k+1(𝒫_k+1^-)^-1
𝔪_k^s = 𝔪^+_k+𝒢_k(𝔪_k+1^s-𝔪_k+1^-)
𝒫_k^s = 𝒫^+_k+𝒢_k(𝒫_k+1^s-𝒫_k+1^-) 𝒢_k^⊤
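A minimal sketch of one step of this backward pass, again using the unscented transform (via the sigma_points helper from the filtering sketch) for the cross-covariance ℒ_k+1, could look as follows; names are hypothetical.

def rts_backward_step(f, m_filt, P_filt, m_pred_next, P_pred_next, m_s_next, P_s_next):
    """One step of the Gaussian RTS backward pass."""
    pts, wm, wc = sigma_points(m_filt, P_filt)
    fx = np.column_stack([f(pts[:, j]) for j in range(pts.shape[1])])
    dx = pts - m_filt[:, None]
    df = fx - m_pred_next[:, None]
    L = dx @ np.diag(wc) @ df.T                    # cross-covariance L_{k+1}
    G = L @ np.linalg.inv(P_pred_next)             # smoother gain G_k
    m_s = m_filt + G @ (m_s_next - m_pred_next)
    P_s = P_filt + G @ (P_s_next - P_pred_next) @ G.T
    return m_s, P_s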
§.§.§ Derivation of ℐ̆^i_k
With the M-Step in (<ref>) we can write
ℐ̆^i_k= argmax_{ℐ^i_k}⟨ln p(𝐱_1:K,ℐ^i_k,ℐ̆^i-_k,ℐ̆_k-|𝐲_1:K)⟩_q^s(𝐱_1:K)
Using the Bayes rule we can proceed as
ℐ̆^i_k= argmax_{ℐ^i_k}{⟨ln p(𝐲_k|𝐱_k,ℐ^i_k,ℐ̆^i-_k,ℐ̆_k-,𝐲_k- )
+ln p(𝐱_1:K,ℐ^i_k,ℐ̆^i-_k,ℐ̆_k-|𝐲_k- ) ⟩_q^s(𝐱_1:K)+const.}
which leads to
ℐ̆^i_k= argmax_{ℐ^i_k}{⟨ln p(𝐲_k|𝐱_k,ℐ^i_k,ℐ̆^i-_k )⟩_q^s(𝐱_k)
+ln p(ℐ^i_k|𝐱_1:K,ℐ̆^i-_k,ℐ̆_k-,𝐲_k- )+const.}
which is similar to (<ref>) except that the expectation is taken with respect to the marginal smoothing distribution q^s(𝐱_k). Consequently, ℐ̆^i_k can be determined as
ℐ̆^i_k =
1 if τ̆^i_k ≤ 0,
ϵ if τ̆^i_k >0
where
τ̆^i_k = { tr ( 𝒲_k 𝐑̆_k^-1 ) +
ln|I-𝐑^-i,i_k𝐑^i,-i_k (𝐑̆^-i,-i_k)^-1/R^i,i_k|
+ ln(ϵ)+2ln(1/θ^i_k-1 ) }
with
𝐑̆_k^-1=(𝐑_k^-1( ℐ^i_k=1 , ℐ̆^i-_k )-𝐑_k^-1( ℐ^i_k=ϵ , ℐ̆^i-_k ))
which can be calculated similarly to 𝐑̂_k^-1 in (<ref>). 𝐑̆^-i,-i_k denotes the submatrix of 𝐑_k(ℐ̆_k) corresponding to entries of ℐ̆^i-_k and
𝒲_k=∫(𝐲_k-𝐡(𝐱_k)) (𝐲_k-𝐡(𝐱_k))^⊤𝒩(𝐱_k|𝔪^s_k,𝒫^s_k)d𝐱_k
The resulting EM-based outlier-robust smoother (EMORS) is outlined as Algorithm <ref>. We suggest using the same convergence criterion and parameters as for robust filtering.
§ PERFORMANCE BOUNDS
It is useful to determine the performance bounds of outlier-discarding state estimators considering correlated measurement noise. We evaluate the estimation bounds of filtering and smoothing approaches that are perfect outlier rejectors, having complete knowledge of the instances of outlier occurrences. In particular, we assume that the measurement covariance matrix is a function of perfectly known values of ℐ_k given as 𝐑_k(ℐ_k). In this case, ℐ^i_k=0 means rejection of the ith corrupted dimension, whereas ℐ^i_k=1 denotes inclusion of the ith measurement. Resultingly, 𝐑^-1_k(ℐ_k) has zeros at the diagonals, rows, and columns corresponding to dimensions for which ℐ^i_k=0. Remaining submatrix of 𝐑^-1_k(ℐ_k) can be evaluated as the inverse of submatrix of 𝐑_k considering the dimensions with ℐ^i_k=1.
Note that we set ℐ^i_k=ϵ, not exactly 0, for outlier rejection in the proposed state estimators since the exact value of 0 breaks the inference. However, for evaluating performance bounds this choice is appropriate, resulting in perfect outlier rejection. Also note that during robust state estimation we do not know ℐ_k a priori and model it statistically for subsequent inference. The use of a perfectly known ℐ_k for the estimation bounds gives us an idea of how well we can estimate the state if outliers are somehow perfectly detected and rejected.
We evaluate BCRBs for the perfect rejector for the model in (<ref>)-(<ref>) that have been corrupted with measurement outliers for both filtering and smoothing.
§.§ Filtering
For the estimation error of 𝐱_k during filtering, the BCRB matrix can be written as <cit.>
BCRB^f_k≜ (𝐉^+_k)^-1
where the corresponding filtering Fisher information matrix (FIM) denoted as 𝐉^+_k can be evaluated recursively as
𝐉^-_k=𝐃_k-1^22(1)-𝐃_k-1^21(𝐉^+_k-1+𝐃_k-1^11)^-1𝐃_k-1^12
𝐉^+_k=𝐉^-_k+𝐃_k-1^22(2)
where 𝐉^+_0=⟨ -Δ _𝐱_0^𝐱_0ln p(𝐱_0) ⟩ _p(𝐱_0) and
Δ _Ψ^Θ=∇ _Ψ∇ _Θ^⊤
∇ _Θ=[∂/∂Θ _1,…, ∂/∂Θ _r]^⊤
𝐃_k^11 =⟨-Δ_𝐱_k^𝐱_k lnp(𝐱_k+1 |𝐱_k)⟩_p(𝐱_k+1,𝐱_k)
𝐃_k^12 =⟨-Δ_𝐱_k^𝐱_k+1 lnp(𝐱_k+1 |𝐱_k)⟩_p(𝐱_k+1,𝐱_k)
𝐃_k^21 =⟨-Δ_𝐱_k+1^𝐱_k lnp(𝐱_k+1 |𝐱_k)⟩_p(𝐱_k+1,𝐱_k)=(𝐃_k^12)^⊤
𝐃_k^22 = 𝐃_k^22(1)+𝐃_k^22(2)
𝐃_k^22(1) =⟨-Δ_𝐱_k+1^𝐱_k+1 lnp(𝐱_k+1 |𝐱_k)⟩_p(𝐱_k+1,𝐱_k)
𝐃_k^22(2) = ⟨-Δ_𝐱_k+1^𝐱_k+1 lnp(𝐲_k+1 |𝐱_k+1)⟩_p(𝐲_k+1,𝐱_k+1)
The bound is valid given the existence of the following derivatives and expectations terms for an asymptotically unbiased estimator <cit.>. For the perfect rejector considering the system model in (<ref>)-(<ref>) that is infested with observation outliers we can write
𝐃_k^11 =⟨𝐅̃^⊤(𝐱_k) 𝐐_k^-1𝐅̃(𝐱_k) ⟩_p(𝐱_k)
𝐃_k^12 = -⟨𝐅̃^⊤(𝐱_k) ⟩_p(𝐱_k)𝐐_k^-1
𝐃_k^22(1) = 𝐐_k^-1
𝐃_k^22(2) = ⟨𝐇̃^⊤(𝐱_k+1) 𝐑_k+1^-1(ℐ_k+1) 𝐇̃(𝐱_k+1) ⟩ _p(𝐱_k+1)
where 𝐅̃(.) and 𝐇̃(.) are the Jacobians of the transformations 𝐟(.) and 𝐡(.) respectively.
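As a sketch, the BCRB recursion for the perfect rejector can be evaluated as below, with the expectations over the state approximated by Monte Carlo averages over simulated trajectories; F_jac, H_jac and the sample set are hypothetical inputs, and for brevity the same sample set is used for 𝐱_k and 𝐱_k+1.

def bcrb_filter_step(J_prev, x_samples, F_jac, H_jac, Q, R_inv_rejected):
    """One recursion of the filtering FIM J_k^+ for the perfect outlier rejector.
    x_samples: Monte Carlo draws of the state used to approximate expectations;
    R_inv_rejected: R_{k+1}^{-1}(I_{k+1}) with outlier-hit rows/columns zeroed."""
    Qinv = np.linalg.inv(Q)
    D11 = np.mean([F_jac(x).T @ Qinv @ F_jac(x) for x in x_samples], axis=0)
    D12 = -np.mean([F_jac(x).T for x in x_samples], axis=0) @ Qinv
    D22_1 = Qinv
    D22_2 = np.mean([H_jac(x).T @ R_inv_rejected @ H_jac(x) for x in x_samples], axis=0)
    J_minus = D22_1 - D12.T @ np.linalg.inv(J_prev + D11) @ D12
    J_plus = J_minus + D22_2
    return J_plus          # BCRB_k = inv(J_plus)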
§.§ Smoothing
Similarly, for the estimation error of 𝐱_k during smoothing, the BCRB matrix can be written as <cit.>
BCRB^s_k≜ (𝐉^s_k)^-1
where 𝐉^s_K=𝐉^+_K. We can compute the associated smoothing FIM denoted as 𝐉^s_k recursively as
𝐉^s_k=𝐉^+_k+𝐃_k^11-𝐃_k^12(𝐃_k^22(1)+𝐉^s_k+1+𝐉^-_k+1)^-1𝐃_k^21
§ NUMERICAL EXPERIMENTS
To test the performance of the proposed outlier-resilient state estimators, we carry out several numerical experiments. We use Matlab on a computer powered by an Intel i7-8550U processor. All the experiments were conducted while considering SI units.
For performance evaluation, we resort to a target tracking problem with TDOA-based range measurements inspired by <cit.>. Fig. <ref> shows the setup of the considered example. Owing to the use of a common reference sensor to obtain the TDOA observations, from the difference of the time of arrival (TOA) measurements, the resulting covariance matrix becomes fully populated.
We consider the process equation for the target assuming an unknown turning rate as <cit.>
𝐱_k=𝐟(𝐱_k-1)+𝐪_k-1
with
𝐟(𝐱_k-1) = [ 1 sin(ω_k-1ζ)/ω_k-1 0 cos(ω_k-1ζ)-1/ω_k-1 0; 0 cos(ω_k-1ζ) 0 -sin(ω_k-1ζ) 0; 0 1-cos(ω_k-1ζ)/ω_k-1 1 sin(ω_k-1ζ)/ω_k-1 0; 0 sin(ω_k-1ζ) 0 cos(ω_k-1ζ) 0; 0 0 0 0 1 ]𝐱_k-1
where the state vector 𝐱_k= [a_k,ȧ_̇k̇,b_k,ḃ_̇k̇,ω_k]^⊤
is composed of the 2D position coordinates (a_k , b_k ), the corresponding velocities (ȧ_̇k̇ , ḃ_̇k̇ ), the angular velocity ω_k of the target at time instant k, ζ denotes the sampling period, and 𝐪_k-1∼ N(0,𝐐_k-1). 𝐐_k-1 is given in terms of scaling parameters η_1 and η_2 as <cit.>
𝐐_k-1=[ η_1 𝐌 0 0; 0 η_1 𝐌 0; 0 0 η_2 ], 𝐌=[ ζ^3/3 ζ^2/2; ζ^2/2 ζ ]
Range readings are obtained using m sensors installed in a zig-zag fashion as depicted in Fig. <ref>. The ith sensor is located at (a^ρ_i=350(i-1), b^ρ_i=350((i-1) mod 2)) for i=1 ⋯ m. We take the first sensor as the common reference sensor, resulting in m-1 TDOA-based measurements. The nominal measurement equation can be expressed as
𝐲_k = 𝐡(𝐱_k)+𝐫_k
with
h^j(𝐱_k ) = { √( (a_k - a^ρ_1)^2 + (b_k - b^ρ_1)^2 )
- √( (a_k - a^ρ_j+1)^2 + (b_k - b^ρ_j+1)^2 ) }
for j=1 ⋯ m-1. The corresponding nominal covariance measurement matrix is fully populated given as <cit.>
𝐑_k= [ σ^2_1+σ^2_2 … σ^2_1; ⋮ ⋱ ⋮; σ^2_1 ⋯ σ^2_1+σ^2_m ]
where σ^2_i is the variance contribution of the ith sensor in the resulting covariance matrix. To consider the effect of outliers the measurement equation can be modified as
𝐲_k = 𝐡(𝐱_k)+𝐫_k+𝐨_k
where 𝐨_k produces the effect of outliers in the measurements and is assumed to obey the following distribution
p(𝐨_k) =∏_j=1^m-1𝒥^j_k 𝒩(o^j_k|0,γ (σ^2_1+σ^2_j))
where 𝒥^j_k is a Bernoulli random variable, with values 0 and 1, that controls whether an outlier in the jth dimension occurs. Let λ denote the probability that a sensor's TOA measurement is affected. Therefore, the probability that no outlier appears in the jth dimension, corresponding to 𝒥^j_k=0, is (1-λ)^2 since the first sensor is a common reference for the TDOA-based measurements. We assume that the TOA measurements are independently affected and the corruption of the first TOA observation affects all the measurements. Similarly, the parameter γ controls the variance of an outlier in each dimension respectively. Using the proposed model we generate the effect of outliers in the data.
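A small sketch of this simulation setup (sensor geometry, TDOA measurement function, correlated nominal covariance, and Bernoulli-gated outliers) is given below; parameter values follow the description above, names are hypothetical, and the per-dimension outlier standard deviation is taken from the description (all sensor variances are equal in the experiments anyway).

import numpy as np

rng = np.random.default_rng(0)
m = 10                                     # number of sensors; the first is the reference
sensors = np.array([[350.0 * i, 350.0 * (i % 2)] for i in range(m)])
sigma2 = np.full(m, 10.0)                  # per-sensor variance contribution

def h_tdoa(x):
    """TDOA-type range differences w.r.t. the reference sensor (state x = [a, adot, b, bdot, w])."""
    pos = np.array([x[0], x[2]])
    r = np.linalg.norm(pos - sensors, axis=1)
    return r[0] - r[1:]

# Fully populated nominal covariance: diagonal sigma_1^2 + sigma_{j+1}^2, off-diagonal sigma_1^2
R_nom = sigma2[0] * np.ones((m - 1, m - 1)) + np.diag(sigma2[1:])

def corrupted_measurement(x, lam=0.3, gamma=200.0):
    """Nominal correlated noise plus Bernoulli-gated outliers, per the description above."""
    y = h_tdoa(x) + rng.multivariate_normal(np.zeros(m - 1), R_nom)
    toa_hit = rng.random(m) < lam                   # which TOA readings are corrupted
    for j in range(m - 1):
        if toa_hit[0] or toa_hit[j + 1]:            # reference or own TOA corrupted
            y[j] += rng.normal(0.0, np.sqrt(gamma * (sigma2[0] + sigma2[j + 1])))
    return y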
For filtering performance comparisons, we choose a hypothetical Gaussian filter that is a perfect rejector with a priori knowledge of all outlier instances. We also consider the generalized and independent VBKF estimators <cit.>, referred to hereafter as Gen. VBKF and Ind. VBKF. In the VBKFs, we set the design parameter as N=1 (essentially resorting to the EM method) since a higher N results in more computational strain. Lastly, we use the derived BCRB-based filtering lower bounds to benchmark the performance of all the filters. Similarly, for smoothing we use the counterparts of all the considered filters, i.e., a perfect outlier-rejecting general Gaussian RTS smoother and the generalized/independent VBKF-based RTS smoothers, denoted as Gen. VBKS and Ind. VBKS.
For simulations the following parameter values are used: the initial state 𝐱_0= [0,1,0,-1,-0.0524]^⊤, ζ=1, η_1=0.1, η_2=1.75×10^-4, and σ^2_j=10. The initialization parameters of the estimators are: 𝐦^+_0∼𝒩(𝐱_0,𝐏^+_0), 𝐏^+_0=𝐐_k, ϵ=10^-6 and θ^i_k=0.5 ∀ i. We use the unscented transform (UT) to approximate the Gaussian integrals <cit.> in all the considered methods. As a result, the Unscented Kalman Filter (UKF) becomes the core inferential engine for all the techniques.
In each method, UT parameters are set as α=1, β=2, and κ=0. Moreover, we use the same threshold of 10^-4 for the convergence criterion in each algorithm. Other parameters for VBKFs/VBKSs are assigned values as originally documented. All the simulations are repeated with a total time duration K=400 and 100 independent MC runs. Moreover, we use box and whisker plots to visualize all the results.
§.§ Filtering Performance
We assess the relative filtering performance under different scenarios.
First, we choose 10 number of sensors with γ=1000 and increase the TOA contamination probability λ. Fig. <ref> shows the mean squared error (MSE) of the state estimate of each filter as λ is increased. For λ=0 all the filters essentially work as the standard UKF having similar performance. As λ increases, MSE of each method and the lower bound value are seen to increase. The hypothetical ideal UKF exhibits the best performance followed by the proposed EMORF, Gen. VBKF, and Ind. VBKF respectively. The trend remains the same for each λ. Similar patterns have been observed for other combinations of m and γ. Performance degradation of Ind. VBKF as compared to EMORF and Gen. VBKF is expectable as it ignores the measurement correlations during filtering. We find EMORF to be generally more robust in comparison to Gen. VBKF. Our results are not surprising given that we found the modified selective observation rejecting (mSOR)-UKF to be more resilient to outliers as compared to the modified outlier-detecting (mOD)-UKF <cit.>, which are designed for independent measurements having similar structures to EMORF and VBKF respectively.
Next, we vary the number of sensors and assess the estimation performance of the filters. Fig. <ref> shows the MSE of each method as the number of sensors is increased with λ=0.3 and γ=200. As expected, the error bound and MSE of each filter decrease with increasing number of sensors since more sources of information become available. We see a pattern similar to the previous case with the best performance exhibited by the hypothetical ideal UKF followed by EMORF, Gen. VBKF, and Ind. VBKF respectively. Moreover, we have observed similar trends for other values of λ and γ as well.
Subsequently, we evaluate the processing overhead of each algorithm by varying the number of sensors. Fig. <ref> shows the execution time taken by each algorithm as the number of sensors is increased with λ=0.2 and γ=100. We observe that the ideal UKF and Ind. VBKF take lesser time for execution having a complexity of 𝒪(m^3). However, EMORF and Gen. VBKF induce more computational overhead, having a complexity of 𝒪(m^4), due to utilization of matrix inverses and determinants for evaluating each of the ℐ^i_k and 𝐳^(i)_t ∀ i=1⋯ m in EMORF and Gen. VBKF respectively. This is the cost we pay for achieving robustness with correlated measurement noise. Nevertheless, we find that EMORF generally takes less processing time as compared to Gen. VBKF as shown in Fig. <ref>. Moreover, similar performance has been observed for other combinations of λ and γ. This can be attributed to a simpler model being employed in EMORF resulting in reduced computations.
§.§ Smoothing Performance
For smoothing we perform analogous experiments and observe similar performance.
First, we choose 15 number of sensors and increase the TOA contamination probability λ with γ=100. Fig. <ref> shows how MSE of the state estimate of each smoother changes as λ is increased. Similar to filtering, we observe that MSE of each estimator grows with increasing λ including the BCRB-based smoothing lower bound. The hypothetical RTS smoother performs the best followed by the proposed EMORS, Gen. VBKS, and Ind. VBKS respectively. The trend remains the same for each λ. Similar patterns have been seen for other combinations of m and γ.
Thereafter, we assess the estimation performance of the smoothers by varying the number of sensors. Fig. <ref> depicts MSE of each estimator as the number of sensors increases with λ=0.2 and γ=100. MSE for each smoother decreases with growing number of sensors including the BCRB-based lower bound. The hypothetical RTS smoother is the best performing followed by EMORS, Gen. VBKS, and Ind. VBKS respectively. We have observed similar trends for other values of λ and γ as well.
Lastly, we evaluate the computational overhead of each algorithm by varying the number of sensors. Fig. <ref> shows the time each method takes as the number of sensors is increased with λ=0.2 and γ=100. We observe similar patterns as for filtering. The ideal RTS smoother and Ind. VBKS having a complexity of 𝒪(m^3) take lesser execution time. EMORS and Gen. VBKS having a complexity 𝒪(m^4) are more time consuming. EMORS generally induces lesser computing overhead as compared to Gen. VBKS as depicted in Fig. <ref>. We have observed similar patterns for different combinations of λ and γ .
§ CONCLUSION
We consider the problem of outlier-robust state estimation assuming the existence of measurement noise correlation. Given their advantages, resorting to tuning-free learning-based approaches is an attractive option in this regard. Identifying the shortcomings of such existing VB-based tractable methods, we propose EMORF and EMORS. Since the standard VB approach entails significant processing complexity, we adopt EM in our algorithmic constructions. We can conclude that the presented methods are simpler and hence more practicable as compared to the state-of-the-art Gen. VBKF/VBKS, devised for the same conditions. This is possible due to the reduction of inference parameters resulting from the proposal of an uncomplicated model. Also, the need of the specialized digamma function during implementation is obviated. In addition, numerical experiments in an illustrative TDOA-based target tracking example suggest further merits of the proposed methods. We find that EMORF/EMORS generally exhibit lesser errors as compared to Gen. and Ind. VBKF/VBKS in different scenarios of the example. Moreover, though the complexity order of EMORF/EMORS and Gen. VBKF/VBKS is the same, the proposed estimators are found to be computationally more competitive in general for different test conditions. These merits make the proposed state estimators worthy candidates for implementation in relevant scenarios.
§ EVALUATING 𝐑_K^-1(ℐ̂_K)
For evaluating 𝐑_k^-1(ℐ̂_k) we consider that 𝐑_k(ℐ̂_k) can easily be rearranged by swapping rows/columns depending on ℐ̂_k as
ℜ_k(ℐ̂_k)=
[ 𝚁_k / ϵ 0; 0 𝐑̂_k ]
where 𝚁_k is a sub-matrix with diagonal entries of 𝐑_k. 𝐑̂_k contains the rest of the fully populated submatrix of 𝐑_k(ℐ̂_k) corresponding to entries of ℐ̂_k=1. Inversion of ℜ_k (ℐ̂_k) results in
ℜ_k^-1(ℐ̂_k)= [ ϵ𝚁^-1_k 0; 0 𝐑̂^-1_k ]
Finally, ℜ_k^-1(ℐ̂_k) can be swapped accordingly to obtain the required matrix 𝐑_k^-1(ℐ̂_k) .
§ SIMPLIFYING Τ̂^I_K
We can swap the ith row/column entries of 𝐑_k( ℐ^i_k =1 , ℐ̂^i-_k ) and 𝐑_k( ℐ^i_k =ϵ , ℐ̂^i-_k ) with the first row/column elements to obtain
|ℜ_k( ℐ^i_k =1 , ℐ̂^i-_k )| = | [ R^i,i_k 𝐑^i,-i_k; 𝐑^-i,i_k 𝐑̂^-i,-i_k ] |
|ℜ_k( ℐ^i_k =ϵ , ℐ̂^i-_k )| = | [ R^i,i_k/ϵ 0; 0 𝐑̂^-i,-i_k ] |
Consequently, we can write
ln(|𝐑_k( ℐ^i_k=1 , ℐ̂^i-_k ) |/|𝐑_k( ℐ^i_k=ϵ , ℐ̂^i-_k )|)=ln(|ℜ_k( ℐ^i_k=1 , ℐ̂^i-_k ) |/|ℜ_k( ℐ^i_k=ϵ , ℐ̂^i-_k )|)
= ln|I-𝐑^-i,i_k𝐑^i,-i_k (𝐑̂^-i,-i_k)^-1/R^i,i_k|+ ln(ϵ)
where we have used the following property from matrix algebra <cit.>
| [ 𝖠 𝖡; 𝖢 𝖣 ] | = |𝖠| |𝖣-𝖢𝖠^-1𝖡|
Resultingly, we can simplify (<ref>) to (<ref>).
§ EVALUATING 𝐑̂_K^-1
To avoid redundant calculations during the evaluation of 𝐑̂_k^-1, we can first swap the ith row/column elements of matrices with the first row/column entries in (<ref>) to obtain
ℜ̂_k^-1 =(ℜ_k^-1( ℐ^i_k=1 , ℐ̂^i-_k )-ℜ_k^-1( ℐ^i_k=ϵ , ℐ̂^i-_k ))
=[ R^i,i_k 𝐑^i,-i_k; 𝐑^-i,i_k 𝐑̂^-i,-i_k ]^-1-[ R^i,i_k/ϵ 0; 0 𝐑̂^-i,-i_k ]^-1
To simplify (<ref>), we use the following property from matrix algebra <cit.>
[ 𝖠 𝖡; 𝖢 𝖣 ]^-1=
[ 𝖲^-1 -𝖲^-1𝖡𝖣^-1; -𝖣^-1𝖢𝖲^-1 𝖣^-1+𝖣^-1𝖢𝖲^-1𝖡𝖣^-1 ]
where 𝖲 is the Schur's complement of 𝖣 given as 𝖲=𝖠-𝖡𝖣^-1𝖢. As a result, we obtain
ℜ̂_k^-1 =[ Ξ^i,i Ξ^i,-i; Ξ^-i,i Ξ^-i,-i ]
where the expressions of Ξ^i,i,Ξ^i,-i,Ξ^-i,i and Ξ^-i,-i are given in (<ref>)-(<ref>). The redundant calculations in (<ref>)-(<ref>) can be computed once and stored for further computations e.g. (R^i,i_k-𝐑^i,-i_k(𝐑̂^-i,-i_k)^-1𝐑^-i,i_k)^-1 and (𝐑̂^-i,-i_k)^-1. Lastly, the first row/column entries of ℜ̂_k^-1 are interchanged to the actual ith row/column positions to obtain the required 𝐑̂_k^-1.
|
http://arxiv.org/abs/2307.00284v1
|
20230701093821
|
Particle Acceleration at Shocks: An Introduction
|
[
"Damiano Caprioli"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"physics.plasm-ph",
"physics.space-ph"
] |
These notes present the fundamentals of Fermi acceleration at shocks, with a special attention to the role that supernova remnants have in producing Galactic cosmic rays.
Then, the recent discoveries in the theory of diffusive shock acceleration (DSA) that stem from first-principle kinetic plasma simulations are discussed.
When ion acceleration is efficient, the back-reaction of non-thermal particles and self-generated magnetic fields becomes prominent and leads to both enhanced shock compression and particle spectra significantly softer than those predicted by the standard test-particle DSA theory.
These results are discussed in the context of the non-thermal phenomenology of astrophysical shocks, with a special focus on the remnant of SN1006.
§ THE SNR PARADIGM FOR THE ORIGIN OF GALACTIC CRS
The origin of cosmic rays (CRs) has been an outstanding issue in Astrophysics since the pioneering discovery by V. Hess in 1911.
At least for relatively low-energies, below and around the so-called knee of the overall CR spectrum (∼ 10^15 eV), the best source candidates have been supernova remnants (SNRs).
In 1934 Baade and Zwicky <cit.> suggested that supernova (SN) explosions were due to the release of a huge amount of gravitational binding energy during the transition from an ordinary star to a neutron star;
not satisfied with having introduced this groundbreaking idea, they also argued that SNe were responsible for the acceleration of CRs.
The latter statement was based on an energetic argument which estimated a fraction of 20–30 per cent of the SN ejecta kinetic energy (∼ 10^51 erg) to be channelled into relativistic particles in order to account for the CR energetics.
Although Baade and Zwicky were concerned with extragalactic SNe, the idea that Galactic CRs are accelerated in Galactic SNRs is widely popular and usually referred to as the SNR paradigm
<cit.>.
Already in the '60s, Ginzburg and Syrovatskii interpreted the radio emission of many SNRs as a result of synchrotron radiation emitted by accelerated electrons <cit.>, providing the first observational evidence that particle acceleration should be effective in SNRs.
Only with the advent of gamma-ray astronomy, though, it has been possible to infer that such systems can efficiently accelerate protons and heavier nuclei, as attested by the GeV and TeV photons that come from the decay of neutral pions produced by nuclear interactions between CRs and thermal plasma <cit.>.
An energetic argument alone, however, cannot be a satisfactory theory of particle acceleration;
in the past century, much effort has gone into figuring out the exact process through which a few particles can be extracted from a thermal plasma and accelerated to ultra-relativistic energies, with spectra that are extremely regular, usually fitted with power-law distributions in momentum.
In the past few decades, the advancement of numerical methods and the advent of modern supercomputers have made possible ab-initio simulations of astrophysical plasmas and allowed astrophysicists to model the complex, nonlinear, interplay between particles and electromagnetic waves, eventually unravelling the processes responsible for the energization of the highest-energy particles in the Universe.
In this lecture we present the basic ingredients of the theory of particle acceleration in collisionless shocks, focusing on the basic properties of Fermi acceleration and diffusive shock acceleration (DSA).
In particular, we outline the recent progress in modeling DSA via kinetic plasma simulations, especially hybrid simulations (with kinetic ions and fluid electrons), and discuss the observational counterparts of such findings.
§ ACCELERATION AT SHOCKS
In this section we lay the groundwork for the theory of self-sustained particle acceleration at shocks.
More precisely, we discuss how repeated interactions of a particle with the magnetic structures embedded in a shock may lead to energy gain and to universal spectra of accelerated particles.
§.§ Magnetic Mirroring
When a magnetic field is effectively constant over one Larmor gyration, a particle conserves its magnetic moment μ=p_⊥^2/B, where p_⊥ is the component of the momentum transverse to the magnetic field.
Since magnetic fields do no work, the total momentum p^2= p_∥^2 + p^2_⊥ is constant, too.
This means that if a particle enters a region with stronger B, its p_⊥ must increase, and hence its p_∥ decrease;
if the magnetic gradient is sufficiently large, the particle may come to a stop before eventually reversing its motion.
This effect is known as magnetic mirroring and can be regarded as an effective way of scattering particles in non-homogeneous magnetic fields.
§.§ The Fermi Mechanism
Soon after listening to a lecture at the University of Chicago, in which H. Alfvén argued that the interstellar medium is permeated by magnetic irregularities, E. Fermi realized that the scattering of particles against such waves could naturally lead to energization <cit.>.
In fact, collisions are elastic in the mirror frame, but
when mirrors move, they may lead either to an energy gain if they are head-on, or to an energy loss, if they are tail-on;
physically speaking, an acceleration arises because of the motional electric field.
In astrophysical contests, where the mirrors are typically represented by Alfvén waves or by magnetized clouds (henceforth, magnetic irregularities), particles may statistically be accelerated as a consequence of the fact that head-on collisions are more frequent than tail-on ones[One may ponder the analogy with inverse-Compton scattering: Fermi, who moved to Chicago invited by A. Compton himself, must have been quite familiar with the idea...].
If a thermal particle succeeds in crossing an injection threshold <cit.>, which may be dictated either by ionization losses or by the minimum energy to enter the acceleration process, it can achieve relativistic energies and become a bona-fide CR.
Let us consider the interaction between a mirror that moves with velocity V⃗ along the x̂-axis and a particle moving with velocity v⃗=(-v cosϑ,-vsinϑ), such that ϑ=0 corresponds to a head-on collision.
Setting c=1 for simplicity, let the initial energy and momentum of such a particle be E_i and p_i=v E_i.
Performing a Lorentz transformation into the mirror frame, where quantities are labelled with the prime subscript, we have
E'_i=Γ (V p_xi +E_i); p'_xi=Γ (p_xi+VE_i),
where Γ=1/√(1-V^2) is the Lorentz factor of the boost.
If the reflection is elastic in the mirror frame, then the final energy and momentum along x are
E'_f=E'_i; p'_xf=-p'_xi.
Boosting back in the laboratory frame one has
E_f= Γ^2 E_i (2 V v cosϑ +1 + V^2),
so that the fractional energy gain in a reflection is
Δ E/E≡(E_f-E_i)/E_i=2 V (vcosϑ+V).
In general, the rate of collisions is proportional to the relative velocity between the mirror and the particle.
The projection of the mirror's speed along v⃗ is Vcosϑ;
using the relativistic composition of velocities, the probability of collision per solid angle reads (consider the symmetry in the azimuthal angle) is:
dP(cosϑ)∝ (v+Vcosϑ)/(1+vVcosϑ) dcosϑ∝ (1+Vcosϑ)dcosϑ,
where in the last step we also assumed v→ 1 and V≪ 1 (relativistic particle scattered by a non-relativistic mirror).
Note that Eq. <ref> contains an important piece of physics: it expresses the fact that head-on collisions (cosϑ=1) are more probable than tail-on ones (cosϑ=-1), which is at the basis of Fermi acceleration.
Finally, averaging the energy gain in Eq. <ref> over an isotropic distribution and reintroducing c, one gets:
⟨Δ E/E⟩_P=(8/3) V^2/c^2.
Since the gain is ∝ (V/c)^2, this process is often referred to as the second-order Fermi mechanism, and it may not be very efficient since typically V≪ c; for instance, for interstellar magnetic fluctuations V is of the order of the Alfvén speed, i.e., ∼ 1-10 km/s, so that ∼ 10^10 collisions would be needed just to double the particle's energy.
This is where Fermi's original idea of accelerating CRs in the interstellar turbulence quantitatively fails: it is just too slow (in addition to not providing universal spectra, as we will show below).
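The averaged gain in Eq. <ref> is easy to check with a quick Monte Carlo experiment: sample the collision angle from the flux-weighted distribution of Eq. <ref> and average the gain of Eq. <ref>. A sketch (Python/NumPy, assuming a relativistic particle with v→c) follows.

import numpy as np

rng = np.random.default_rng(1)
V = 1e-2                      # mirror speed in units of c
N = 2_000_000

# Sample mu = cos(theta) from dP ∝ (1 + V*mu) on [-1, 1] via rejection sampling
mu = rng.uniform(-1.0, 1.0, N)
keep = rng.uniform(0.0, 1.0 + V, N) < (1.0 + V * mu)
mu = mu[keep]

gain = 2.0 * V * (mu + V)               # Delta E / E per collision, v -> 1
print(gain.mean(), 8.0 / 3.0 * V**2)    # agree within Monte Carlo noise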
If all the collisions were head-on, though, the average over the distribution in Eq. <ref> would be only on values of cosϑ>0, and the energy gain would be dominated by the first term in Eq. <ref>, which is instead proportional to V/c (first-order Fermi mechanism).
In the late '70s, various authors independently realized that, when Fermi mechanism is applied to shocks, it can lead to a very efficient acceleration of CRs <cit.>.
This process, referred to as Diffusive Shock Acceleration (DSA), is based on the fact that —if particles can be repeatedly scattered back and forth across the shock— they can experience multiple head-on collisions and gain energy very efficiently.
Before diving into the details of how particles can be accelerated, it is worth sketching a brief summary of the main characteristics of shocks.
§.§ Shock Hydrodynamics
Schematic structure of a shock, in a frame centered on the discontinuity: the upstream medium is un-shocked and moves towards the shock with velocity u_1.
The hydrodynamics of a stationary, non-viscous, non-relativistic fluid is described by the classical equations for the conservation of mass, momentum and energy <cit.>.
A 1D shock is a solution of such equations where there is a surface across which physical quantities are discontinuous and entropy increases.
In the shock frame, where the system can be assumed to be stationary, in the sense that quantities on both sides of the shock are homogeneous and time-independent, the differential conservation laws for mass, momentum and energy read:
∂/∂ x( ρ u ) = 0 ;
∂/∂ x( ρ u^2 + p)=0 ;
∂/∂ x( 1/2ρ u^3 + γ/(γ-1) p u) = 0 .
As usual, ρ, u, p and γ stand for density, velocity, pressure and ratio of the specific heats of the fluid (i.e., the adiabatic index).
In the two terms of the energy conservation equation, we recognize the kinetic energy flux and the enthalpy flux.
In this simple treatment each physical quantity is continuous both ahead (upstream) and behind (downstream) the shock, but may be discontinuous at the shock surface.
The conditions which describe the jumps of these quantities between the shocked (labelled with the subscript 2) and unshocked physical quantities (labelled with subscript 1) are known as Rankine–Hugoniot relations:
R≡ρ_2/ρ_1=u_1/u_2=(γ+1)M_1^2/[(γ-1)M_1^2+2]→(γ+1)/(γ-1) ,
p_2/p_1=[2γ M_1^2-(γ-1)]/(γ+1)→2γ M_1^2/(γ+1) ,
T_2/T_1=[2γ M_1^2-(γ-1)][(γ-1)M_1^2+2]/[(γ+1)^2M_1^2]→ [2γ(γ-1)/(γ+1)^2] M_1^2 ,
where we introduced the compression factor between the downstream and upstream density R≡ρ_2/ρ_1 and the sonic Mach number M_1≡ u_1/c_s,1, where the sound speed c_s is defined as c_s^2= γ p/ρ.
The asymptotic values for M_1^2≫ 1 are also reported after the → symbol.
For an ideal monoatomic gas γ=5/3, so that generally R=4 for any strong shock.
We notice that both the pressure and temperature jumps are instead proportional to M_1^2, so that a strong shock tends to heat up the downstream plasma very efficiently.
Astrophysical shocks are often very strong, so we often expect the shock dynamics to be in such a regime.
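These jump conditions are straightforward to evaluate numerically; the short sketch below computes R, p_2/p_1 and T_2/T_1 as functions of M_1 for a monoatomic gas and illustrates the strong-shock limits quoted above.

import numpy as np

def rankine_hugoniot(M1, gamma=5.0 / 3.0):
    """Density, pressure and temperature jumps for a hydrodynamic shock of sonic Mach M1."""
    R = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)
    p_jump = (2 * gamma * M1**2 - (gamma - 1)) / (gamma + 1)
    T_jump = p_jump / R                  # ideal gas: T2/T1 = (p2/p1) / (rho2/rho1)
    return R, p_jump, T_jump

for M1 in (2.0, 10.0, 100.0):
    print(M1, rankine_hugoniot(M1))
# Strong-shock limit: R -> (gamma+1)/(gamma-1) = 4, while p2/p1 and T2/T1 grow as M1^2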
For the Rankine–Hugoniot conditions of shocks propagating into a large-scale magnetic field, in the MHD limit, one may refer to <cit.>, while the relativistic generalization is provided by the Taub conditions <cit.>.
§.§ Diffusive Shock Acceleration
As mentioned above, DSA was proposed at the same time by several authors <cit.> as a way to achieve a first-order Fermi acceleration, with the additional bonus that the velocity of the “mirror” could be very large (V≈ thousands of km/s for SNR shocks).
Let us consider a shock, in its reference frame where the downstream(upstream) is for x>0(x<0), so that both u_1 and u_2 are positive (see figure <ref>).
Let us also follow a test particle that is initially in the downstream with velocity v≫ u_1 and flight cosine μ=cosϑ;
in this situation, V=u_1-u_2 is the relative velocity between upstream and downstream fluids.
If the particle crosses the shock, its energy in the upstream frame would read
E'_i=Γ E_i (1+V v μ_i),
where we used the same notation as above.
While upstream, the particle may be scattered back by magnetic turbulence and re-encounter the shock with a different direction μ' and velocity v'.
If the particle makes it back downstream, a cycle is completed and its final energy reads:
E_f=Γ^2 E_i (1+Vv μ_i)(1-Vv'μ').
We need to express all the directions in the downstream frame, so we use the relativistic transformation of angles
μ_f=(μ'v-V)/(1-μ'V)→μ'=(μ_f+V)/(v+μ_fV).
Finally, assuming v→ 1 and v'→ 1 (relativistic particles), we obtain
E_f/E_i≃(1+Vμ_i)/(1+Vμ_f).
This equation holds for arbitrary shock speed V≤ 1 and conveys the fact that the energy gain only depends on the in/out-going directions.
For relativistic shocks, this leads to a typical energy gain of order of Γ^2, analog to a Compton scattering <cit.>.
Such an energy gain may be important also for one-shot interactions of CRs with relativistic jets (espresso mechanism), which is potentially important for the acceleration of ultra-high-energy CRs and neutrinos in relativistic AGN jets <cit.>.
Here we focus on the non-relativistic limit V≪ 1, which returns
E_f/E_i≃ (1+Vμ_i)(1-Vμ_f).
As we did before for the mirror-particle scattering, we want to calculate the mean fractional energy gain, averaging on both the initial and final distributions of μ.
In order to do this, we need to calculate the directions μ that lead to a shock crossing.
More precisely, for the particle to cross from downstream to upstream one needs
μ_i+u_2≤ 0 → -1≤μ_i≤ -u_2,
while for crossing from upstream to downstream one requires
μ_f+u_2≥ 0 → -u_2≤μ_f ≤ 1.
Assuming that particles are isotropized in both the upstream and downstream reference frame, the probability of interacting is proportional to the particle flux in the direction x, i.e., P(μ)∝μ (see Eq. <ref>).
Performing the average of Eq. <ref> with such a probability and for the extremes of integration above, one obtains (after some simple but somewhat tedious algebra):
⟨Δ E/E⟩≃(4/3)(u_1-u_2)/c,
which expresses the fractional energy gain for one DSA cycle at a non-relativistic shock.
The presence of a tangled magnetic field, and/or of some diffusion process are fundamental to ensure that the process may occur several times.
Strong particle diffusion may not be sufficient if particles are not injected into the acceleration process, as it is clear that not all of the thermal particles can be promoted to CRs.
The exact conditions required for ions and electrons to be injected into DSA will be discussed below, but at this stage it is worth pointing out that DSA is not bound to happen at any shock.
For instance, in a relativistic shock one has u_2=c/3 and it may be very hard for particles to swim back upstream once they have crossed the shock.
If the magnetic geometry of the shock allowed it (see, e.g., the discussion in <cit.>), in its first cycle a particle could gain energy proportional to Γ^2;
yet, multiple iterations of the process would be severely discouraged and/or yield much smaller energy gain due to the correlation between μ_i and μ_f <cit.>.
For a detailed treatment of these aspects of particle acceleration at relativistic shocks see, e.g., refs. <cit.>.
§.§ Bell's Argument for Power-law Distributions
The most important property of DSA is that it accelerates particles with a spectrum that is fully independent of the details of particle scattering.
This can be shown with Bell's elegant approach <cit.>.
Let us consider N_0 test particles of initial energy E_0 injected in a generic acceleration mechanism;
let G=1+Δ E/E be the energy amplification factor per cycle and 1-P the probability of leaving the accelerator after each cycle.
After one cycle there will be N_0 P particles with energy GE_0.
Similarly, after k steps there will be N_k=N_0P^k particles with energy E_k=E_0G^k.
Therefore, eliminating k= ln(N_k/N_0)/ln P=ln(E_k/E_0)/ln G, we obtain
ln(N_k/N_0)=-Qln(E_k/E_0)→
N_k = N_0(E_k/E_0)^-Q with Q≡-ln P/ln G.
For a shock, G is given by Eq. <ref>, while the probability of leaving the accelerator corresponds to the probability of being advected away downstream, i.e, P=J_-/J_+ is the ratio between the flux of particles returning to the shock, J_-, and the flux of particles impinging on the shock, J_+ (see Fig. <ref>).
Calling J_∞ the flux of particles escaping towards downstream infinity, stationarity requires
J_+=J_-+J_∞ .
If n is the particle number density and particles are isotropic in the fluid frame, then
J_+=∫^1_0 dcosϑ (nc/2)cosϑ=nc/4 ,
while the flux advected to infinity is, by definition,
J_∞=nu_2
and eventually:
P=(J_+-J_∞)/J_+=c/(c+4u_2)≈1-4u_2/c ,
where the last approximation holds for non-relativistic shocks, i.e., u_1,u_2≪ c.
In the same limit, one can substitute Eq. <ref> in Eq. <ref> and obtain
Q=-ln P/ln G≈-ln(1-4u_2/c)/ln[1+(4/3)(u_1-u_2)/c]≈3u_2/(u_1-u_2)≈3/(R-1) .
Note that Q corresponds to the spectral index of the integral energy spectrum (i.e., the total number of particles with energy larger than E);
the differential spectrum then reads:
dN(E)/dE=f(E)∝ E^-q_E with
q_E=Q+1=(R+2)/(R-1) .
It is very important to notice that the spectral index q_E depends only on the compression ratio R=u_1/u_2.
This means that for any strong shock, i.e. a shock for which V/c_s≫ 1 and consequently R=4, one always gets q_E=2, independently of the details of the mechanisms responsible for the diffusion.
Note that the derivation above holds for relativistic particles at non-relativistic shocks.
If one relaxes the hypothesis that particles are relativistic, the universal spectrum is a power-law in momentum that reads:
dN(p)/dp=4π p^2 f(p)∝ p^-q_p with q_p=3R/(R-1) .
The conversion between energy and momentum must be done remembering that
4π p^2 f(p)dp= f(E)dE → f(E)=4π p^2 f(p)dp/dE;
therefore, for R=4, in the non-relativistic regime E=p^2/2m and q_E=1.5, while
in the relativistic limit E∝ p and q_E=2.
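Bell's argument also lends itself to a direct Monte Carlo illustration: boost each particle's energy by the factor G per cycle and let it escape downstream with probability 1-P; the surviving population then builds up the predicted power law. A sketch for a strong, non-relativistic shock is given below (the parameter choices are illustrative).

import numpy as np

rng = np.random.default_rng(2)
c, u1 = 1.0, 1e-2                         # shock speed in units of c
u2 = u1 / 4.0                             # strong shock, R = 4
G = 1.0 + 4.0 / 3.0 * (u1 - u2) / c       # energy amplification per cycle
P = 1.0 - 4.0 * u2 / c                    # probability of returning to the shock

N = 200_000
E = np.ones(N)
active = np.ones(N, dtype=bool)
while active.any():
    E[active] *= G                        # complete one more cycle
    active &= rng.random(N) < P           # escape downstream with probability 1-P

# Integral spectrum N(>E) should scale as E^{-Q} with Q = -ln(P)/ln(G) ≈ 3/(R-1) = 1
E_sorted = np.sort(E)[::-1]
Q_est = -np.polyfit(np.log(E_sorted), np.log(np.arange(1, N + 1)), 1)[0]
print(Q_est, -np.log(P) / np.log(G))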
At relativistic shocks <cit.>, or when shocks are strongly magnetized and the magnetic field is quasi-perpendicular to the shock normal <cit.>, instead, the assumptions that the CR distribution is isotropic in the fluid frame should be relaxed, in general leading to spectra that deviate from the universal one and depend on the details of the scattering process.
Also, when CR acceleration is efficient, we anticipate that magnetic fluctuations may have a finite speed u_w with respect to the thermal gas, so that the effective compression ratio seen by the particles may differ from R; this effect is discussed in <ref>.
§.§ A Kinetic Derivation
It is instructive to derive the results above also from a kinetic perspective, i.e., by solving the Vlasov (also known as the non-collisional Boltzmann) equation for the CR distribution function f(x⃗,p⃗,t):
∂ f/∂ t+v⃗(p)·∇⃗_x f+dp⃗/dt·∇⃗_p f=0 , dp⃗/dt=q[E⃗+1/cv⃗(p)×B⃗] .
The standard approach is to introduce some assumptions that allow to reduce this equation to a transport equation that can be solved even analytically when the shock structure is known <cit.>.
Such main assumptions are:
1) CRs have Larmor radii larger than the shock thickness (i.e., the equation does not apply to thermal particles) and in general v≫ u;
2) CRs undergo many small pitch-angle scatterings on magnetic irregularities (generically called waves), which drive them towards isotropy in the local wave frame;
3) waves propagate parallel to the background field B⃗_0, assumed to be aligned with the shock velocity, i.e., we have a parallel shock[When B⃗_0 is perpendicular to the shock normal, we talk of a perpendicular shock.].
Under these assumptions, the CR distribution function can be expanded, in the wave frame, as a sum f=f_0+f_1+... of terms with growing anisotropy, with the n^th term of order (u/c)^n.
For non-relativistic shocks, such an expansion can be truncated at the second order and the first-order term can be written as a diffusive flux, proportional to the spatial derivative of f_0.
Detailed calculation of the procedure described above can be found for example in ref. <cit.> or in the textbook at ref. <cit.>.
In the case of a non-relativistic, one-dimensional, parallel shock, we obtain a diffusion–advection equation for the CR isotropic part of the distribution function f_0=f(x,p):
∂ f(x,p)/∂ t+ũ ∂ f(x,p)/∂ x=∂/∂ x[D(x,p) ∂ f(x,p)/∂ x]+(p/3) (dũ/dx) ∂ f(x,p)/∂ p .
Here we have introduced the effective fluid speed ũ=u+v_w, which accounts for a finite velocity v_w of the scattering centers (waves) with respect to the fluid (v_w can be either positive or negative), and the diffusion coefficient D(x,p), which encodes the details of the wave-particle interaction and describes the spatial random walk of particles along the gradient of f.
The four terms correspond to: time evolution, advection, diffusion, and adiabatic compression, respectively.
Using the same notation of Fig. <ref> and assuming that:
1) f is continuous at the shock, which comes from the fact that CRs have large Larmor radii;
2) f(x→ -∞)→ 0, i.e., no CRs far upstream;
3) f is homogeneous downstream, i.e., ∂ f/∂ x(x>0)=0, which is the only acceptable choice in the stationary limit;
and
4) D is independent of x upstream (downstream it does not matter because of assumption 3),
we can promptly solve Eq. <ref> obtaining:
f(x,p)= f_s(p)exp(ux/D); f_s(p)∝ p^-q_p, q_p=3R/(R-1),
where f_s(p) is the CR distribution at the shock and q_p is the same derived in Eq. <ref>, provided that the velocity of the scattering centers is small with respect to the fluid speed.
Since typically v_w is of the order of the Alfvén speed, this is usually thought to be a good approximation, but we will see below that this is not generic and has important implications.
Again, we find that the CR spectrum is a power law in momentum, which depends only on the shock compression ratio (more precisely, on the compression ratio of the scattering-center velocities, R̃=ũ_1/ũ_2).
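For orientation, a small numerical example (mine; the values of D_1 and u_1 are purely illustrative) evaluates the analytic solution above and the e-folding length D_1/u_1 of the upstream CR precursor:

```python
import math

def f_upstream(x, p, D1, u1, q_p, f0=1.0):
    """Test-particle steady-state solution upstream of the shock (x < 0):
    f(x, p) = f_s(p) * exp(u1 * x / D1), with f_s(p) = f0 * p**(-q_p)."""
    return f0*p**(-q_p)*math.exp(u1*x/D1)

R   = 4.0
q_p = 3.0*R/(R - 1.0)          # = 4 for a strong, unmodified shock
u1  = 5e8                      # cm/s (~5000 km/s, illustrative)
D1  = 1e25                     # cm^2/s (illustrative upstream diffusion coefficient)

L_prec = D1/u1                 # e-folding length of the CR precursor
print(f"precursor length D1/u1 ~ {L_prec:.1e} cm ~ {L_prec/3.086e18:.3f} pc")
print(f"f(-L_prec)/f(0) = {f_upstream(-L_prec, 1.0, D1, u1, q_p):.3f}")   # = 1/e
```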
In summary, the very reason why DSA is so attractive is that it just depends on particles tending to become isotropic in the fluid (or wave) frame on each side of the shock;
since there is a relative velocity u_1-u_2 between the two frames, repeatedly achieving such an isotropization requires energy transfer from the fluid to the particles.
Since also escape depends on the shock hydrodynamics, this results in a universal mechanism that returns extended power-law spectra in momentum.
At the zero-th order, these spectra are consistent with the astrophysical phenomenology of CRs and of the multi-wavelength emission from individual sources powered by non-relativistic shocks, such as novae, supernovae, heliospheric shocks, intra-cluster shocks, etc...
In <ref> we discuss where such predictions fail to explain observations and in <ref> how the DSA theory needs to be modified to resolve the discrepancy.
§.§ The DSA Maximum Energy
DSA is a very generic method for generating power-law spectra with roughly the expected slope, but it does not predict how many particles are injected into the acceleration mechanism <cit.>, and hence the overall normalization of f;
also, it does not predict the maximum energy that can be achieved in a given source.
Note that a spectrum ∝ E^-2 is mildly divergent (it has the same energy per decade), so a maximum energy E_max is required in order to keep the total energy in CRs finite.
This energy limit may be due to the time needed to accelerate a CR up to E_max or to the size of the accelerator, or even to energy losses particles may suffer.
The most relevant factor limiting the energy of cosmic hadrons is the acceleration time, which is related to the time a particle takes to diffuse back to the shock, i.e., the duration of a cycle, which increases with energy <cit.>.
At some point, such an acceleration time becomes comparable with the age of the source (or the dynamical time-scale over which a shock must slow down), and a maximum energy is achieved.
Given the spatial diffusion coefficient D(E), the typical displacement from the shock scales in time as for a standard random walk (Δ x)^2∝ Dt;
diffusion has to fight against a shock that “chases” the particle at a rate Δ x∝ v_sh t and hence, substituting Δ x, one derives a timescale for the residence time of the particle upstream of the shock
T_acc(E)≈ D(E)/v_sh^2.
A more refined calculation that takes into account also the residence time in the downstream <cit.> returns
T_acc(E)= 3/(u_1-u_2) (D_1/u_1+D_2/u_2)≃ 6R/(R-1) D(E)/v_sh^2,
where in the last step we assumed D∝ B^-1∝ u,
which is realized for Bohm diffusion since D(E)=(c/3) r_L(E), with r_L(E)=E/(eB).
A similar constraint comes from requiring the diffusion length of an accelerating particle λ∝ D(E)/v_sh not to exceed the source size R_ sh≃ v_sh t.
This is analog to the Hillas criterion <cit.>, which states that the maximum energy achievable in a source corresponds to the potential drop of the motional electric field over the source extent.
Note that for non-relativistic systems such an electric field is a factor of v_sh/c smaller than the magnetic field, which means that the maximum allowed gyroradius is a factor v_sh/c smaller than the system;
for Bohm diffusion v_sh/c is of the order of the ratio between the particle's gyroradius and diffusion length.
It is easy to show that, if particle diffusion occurred in SNRs at the same rate as in the interstellar medium <cit.>, the maximum energy achievable by CRs would not be larger than a few GeV <cit.>.
Even considering a mean free path for pitch-angle scattering as small as the Larmor radius (Bohm diffusion), for the typical interstellar magnetic fields of 1–10 μG it would be hard to account for energies as high as the knee <cit.>, and thus the SNR paradigm would fail.
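This statement can be made quantitative with a short order-of-magnitude script (a sketch, not from the original notes; the SNR parameters are illustrative) that inverts T_acc(E_max) = age assuming Bohm diffusion for relativistic protons:

```python
e_esu   = 4.803e-10          # statC
c_cgs   = 2.998e10           # cm/s
erg2eV  = 6.242e11
yr      = 3.156e7            # s

def E_max_bohm(B_muG, v_sh_kms, age_yr, R=4.0):
    """Maximum energy (eV) from T_acc(E) = age, with Bohm diffusion D = r_L c / 3
    and T_acc ~ 6R/(R-1) * D / v_sh^2 (relativistic protons assumed)."""
    B    = B_muG*1e-6
    v_sh = v_sh_kms*1e5
    age  = age_yr*yr
    prefac = 6.0*R/(R - 1.0)               # = 8 for R = 4
    E_erg  = 3.0*e_esu*B*v_sh**2*age/(prefac*c_cgs)
    return E_erg*erg2eV

# illustrative young-SNR numbers
print(f"B =   3 muG : E_max ~ {E_max_bohm(3,   5000, 1000):.2e} eV")
print(f"B = 100 muG : E_max ~ {E_max_bohm(100, 5000, 1000):.2e} eV   (amplified field)")
```

With interstellar-like fields of a few μG the result stalls around 10^14 eV, while fields of order 100 μG bring E_max close to the knee, anticipating the need for the magnetic field amplification discussed below.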
§.§ Magnetic field amplification in young SNRs
A fundamental tile of the mosaic has been provided by the present generation of X-ray telescopes, such as Chandra and XMM–Newton.
Their high spatial resolution has revealed that young SNRs exhibit bright, narrow, rims, whose emission is due to synchrotron radiation by electrons with energies as high as 1–10 TeV <cit.>.
These results are important for two reasons: first, they prove that SNRs accelerate electrons up to very high energies, and second, the measurement of the rim thickness provides a lower limit to the magnetization of the post-shock medium.
Such non-thermal rims are not resolved by Chandra even in close SNRs such as SN1006, which means that they are thinner than ∼ 0.01pc.
For TeV electrons to lose energy on such small scales, the local magnetic fields must be of order of a few hundreds μG, a factor 10–100 larger than typical interstellar ones, implying that some kind of magnetic field amplification has to occur in young SNRs <cit.>.
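The logic of this estimate can be sketched in a few lines (my own sketch, with illustrative numbers, assuming pure downstream advection and neglecting projection effects and magnetic damping): the downstream field is the value for which the electrons emitting keV synchrotron photons cool within the observed rim width.

```python
import numpy as np
from scipy.optimize import brentq

# cgs constants
m_e, c, sigma_T, e = 9.109e-28, 2.998e10, 6.652e-25, 4.803e-10
pc = 3.086e18

def gamma_keV(B, E_ph_keV=1.0):
    """Lorentz factor of electrons whose synchrotron peak falls at E_ph_keV,
    using nu_c ~ 3 e B gamma^2 / (4 pi m_e c)."""
    nu = E_ph_keV*1e3*1.602e-12/6.626e-27        # photon frequency in Hz
    return np.sqrt(4*np.pi*m_e*c*nu/(3*e*B))

def t_syn(B, gamma):
    """Synchrotron loss time, t = 6 pi m_e c / (sigma_T B^2 gamma)."""
    return 6*np.pi*m_e*c/(sigma_T*B**2*gamma)

def rim_width(B, v_sh_kms, R=4.0):
    """Downstream advection length of the X-ray emitting electrons."""
    v2 = v_sh_kms*1e5/R
    return v2*t_syn(B, gamma_keV(B))

width_obs = 0.01*pc                               # unresolved rim, ~0.01 pc
B_est = brentq(lambda B: rim_width(B, 3000.) - width_obs, 1e-6, 1e-2)
print(f"implied downstream field B ~ {B_est*1e6:.0f} muG")
```

The returned value is of a few hundred μG, consistent with the statement above.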
This piece of information perfectly fits in the DSA paradigm: the diffusive flux of CR upstream of the shock has a net drift speed that is super-Alfvénic, which is expected to drive plasma instabilities that can change the topology and amplify any pre-existing magnetic field.
Such streaming instabilities may lead to a rapid growth of different modes, either resonant <cit.> or non-resonant <cit.> with the Larmor radius of the relativistic particles, as discussed in other contributions in this volume.
Eventually, accounting for the CR-induced magnetic field amplification may allow SNRs to accelerate protons and heavier ions up to the knee <cit.>, though the details depend on a complex chain in which non-linear wave-particle interactions make it hard to estimate the shape and the extent of the spectrum of CRs produced in different SN environments <cit.>.
Whether SNRs can be PeVatrons remains an outstanding question that seeks both theoretical and observational answers.
§ THE NEED FOR A NON-LINEAR THEORY OF DSA
According to the SNR paradigm for the acceleration of Galactic CRs, a substantial fraction (≳ 10%) of the kinetic energy of the SN shocks should be converted into CRs.
As a consequence, at some point the validity of the test-particle approach should break and accelerated particles should not be considered passive spectators of the shock dynamics any longer.
In fact, as soon as the first quantitative calculations of the DSA efficiency were worked out, it became clear that pressure and energy in the shape of accelerated particles could no longer be neglected with respect to fluid ones, revealing the need for a non-linear theory of DSA (NLDSA) in which particle acceleration and shock dynamics are self-consistently calculated.
The first attempts to carry out a study of NLDSA, often referred to as two-fluid models, treated CRs as a fluid of relativistic particles <cit.>.
This approach is of great interest for pointing out the main effects of the non-linear CR backreaction on the shock.
Because of the CR pressure, which reaches its maximum at the shock (Eq. <ref>), the upstream develops a precursor in which the fluid approaching the shock gradually slows down:
the net result is to produce a weaker shock (now called the subshock) and in turn a heating of the downstream plasma less efficient than in the test-particle case.
The presence of CRs modifies the shock compression ratio in two ways:
first, the contribution of relativistic particles to the total pressure makes the fluid more compressible, as if its adiabatic index γ→ 4/3 and hence R→ 7;
second, CRs escaping upstream because of the lack of confinement at large distances from the shock act as a sink of energy, making the shock partially radiative.
The net effect is that the far-upstream to downstream density ratio becomes much larger than 4, while the subshock density ratio is typically in the range 3–4.
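The first effect is easily quantified with the standard Rankine–Hugoniot jump condition; the following snippet (a sketch with textbook formulas, not taken from the references) shows how softening the equation of state raises the compression ratio:

```python
def compression_ratio(M_s, gamma=5.0/3.0):
    """Rankine-Hugoniot density jump for a hydrodynamic shock of sonic Mach number M_s."""
    return (gamma + 1.0)*M_s**2/((gamma - 1.0)*M_s**2 + 2.0)

for gamma, label in [(5.0/3.0, "thermal gas"), (4.0/3.0, "relativistic (CR-dominated) gas")]:
    print(f"{label:32s}: R(M=10) = {compression_ratio(10, gamma):.2f}, "
          f"R(M->inf) = {(gamma + 1)/(gamma - 1):.0f}")
```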
Two-fluid approaches, though, cannot provide self-consistent information about the spectrum of the accelerated particles;
this can be achieved via a kinetic approach, in which CRs are described by means of a distribution function in both space and momentum.
Different ways of dealing with a kinetic analysis have been proposed in the literature, spanning from the semi-analytic models by <cit.>, to the Monte Carlo approach by <cit.>, and to the numerical procedures by <cit.>.
The standard non-linear theory of DSA (e.g., <cit.> for reviews) suggests that CR spectra should deviate from power-laws, but confirms that the slope at a given momentum should be determined by the effective compression ratio felt by such particles, which eventually depends on their diffusion length into the upstream.
High-p particles that can probe the full precursor should feel a total compression ratio ≫ 4, and hence exhibit a spectrum that is flatter than the standard ∝ p^-4 prediction.
Since most of the pressure is carried by relativistic CRs, all particles with energy larger than a few GeV should thus exhibit spectra (much) flatter than the DSA prediction.
Conversely, non-relativistic CRs should be confined close to the subshock and have spectra slightly steeper than the standard prediction, since the subshock compression ratio is ≲ 4.
Accounting for the dynamical role of the magnetic turbulence produced by CRs themselves may limit such a compression to values less than ∼ 10 <cit.>, but does not alter the theoretical expectation that shocks that are efficient in producing CRs should return concave spectra, invariably flatter than p^-4 at relativistic energies.
§.§ The Challenging Observations
Since the NLDSA theory has been developed, all the attempts to find evidence of concave spectra in shock-powered astrophysical systems were at best inconclusive.
Instead, more and more observations hinted at the fact that shock acceleration should produce particle spectra that are appreciably steeper than predicted even by the test-particle DSA theory.
The main three pieces of evidence are summarized in the following.
γ-ray emission from SNRs.
SNRs have been extensively observed by Cherenkov telescopes (HESS, MAGIC, VERITAS, and HAWC) in the TeV energy range and satellites (Fermi and AGILE) in the GeV band.
Very interestingly, the photon spectral index in most of the γ-ray bright SNRs is inferred to be appreciably larger than 2, typically in the range 2.2–3 <cit.>, see Fig. <ref>.
γ-ray emission may be either leptonic (relativistic bremsstrahlung and inverse-Compton scattering, IC) or hadronic (π^0 decay);
in the hadronic scenario, the γ-ray spectrum is parallel to the one of parent hadrons, while IC scattering produces harder photon spectra (∝ E^-1.5 for an E^-2 electron spectrum).
Therefore, away from the cut-off of the parent particle distribution, a steep spectrum represents a strong signature of hadron acceleration and suggests that the parent hadrons also have spectrum with q>2.
Remarkably, at GeV energies, where synchrotron cooling is not effective, protons and electrons are expected to show the same spectral index <cit.>, which means that steep spectra are required even in a leptonic scenario.
Radio-SNe.
Very young SNRs (days to months old), observed in other galaxies in the radio and X-rays, also offer us clues that electrons are typically accelerated to relativistic energies with spectra as steep as E^-3 <cit.>; radiative losses do not seem to be important in this case, either.
Note that these so-called radio-SNe probe a different regime of shock acceleration than Galactic SNRs, the shock velocity still being quite large: ≳ 10^4 km s^-1 and even transrelativistic.
Galactic CRs.
Connected with the problem of accelerating CRs with power-law distributions is the problem of preserving such regular structures during the CR journey from sources to the Earth.
The CR Galactic residence time can be estimated thanks to radioactive clocks such as ^10Be and to the ratios of secondary to primary species (e.g., B/C), which return the grammage traversed by primary CRs in the Galaxy.
If CRs are produced in the disk and diffusively escape at some distance H (∼ a few kpc) in the halo, the Galactic residence time is τ_ gal(E)≈ H^2/D_ gal(E), where D_ gal(E) is the diffusion coefficient that parametrizes CR transport in the Galaxy, assumed homogeneous and isotropic.
The energy dependence of primary/secondary ratios scales as τ_ gal∝ E^-δ and is crucial for connecting the spectra injected at sources (N_ s∝ E^-γ) with those measured at Earth, (∝ E^-2.65 below the knee <cit.>).
The equilibrium CR spectrum can in fact be written as N_ gal(E)∝ N_ s(E) ℛ_ 𝒮𝒩τ_ gal(E), which imposes δ+γ≈ 2.65.
Since the most recent AMS-02 data constrain δ≈ 0.33 <cit.>, one finds γ≈ 2.3-2.4, quite steeper than the DSA prediction for strong shocks.
Note that the cumulative spectrum produced over the SNR history is typically ∼ 0.1 steeper than the one at the beginning of the Sedov stage, i.e., γ≃ q_E+0.1 <cit.>.
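The bookkeeping of the last few sentences amounts to simple arithmetic, summarized below (values quoted from the text above):

```python
q_obs     = 2.65   # slope of the CR spectrum measured at Earth below the knee
delta     = 0.33   # energy dependence of the Galactic residence time (AMS-02 B/C)
gamma_src = q_obs - delta           # slope injected by the sources
print(f"required source slope      gamma ~ {gamma_src:.2f}")
print(f"slope at the Sedov stage   q_E   ~ {gamma_src - 0.1:.2f}  (cumulative spectrum ~0.1 steeper)")
```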
Despite its simplicity, this diffusive model for CR transport is quite solid because it simultaneously accounts for the measured CR secondary/primary ratios, the diffuse Galactic synchrotron and γ-ray emission (see, e.g., <cit.>), and even the observed anisotropy in the arrival directions of CRs <cit.>.
§ COLLISIONLESS SHOCKS: KINETIC SIMULATIONS
To address most of these questions it is necessary to model the non-linear interplay between energetic particles and the electromagnetic fields, which is very hard to tackle analytically.
Astrophysical plasmas are typically collisionless, i.e., their dynamics is mediated by collective interactions rather than by binary collisions, and can be fruitfully modeled ab initio by iteratively moving particles on a grid according to the Lorentz force and self-consistently adjusting the electromagnetic fields.
Such particle-in-cell (PIC) simulations essentially solve the Vlasov equation by sampling the phase space with individual macro-particles and are particularly useful to account for spectra that may span several order of magnitude in momentum, where standard Vlasov solvers may lose accuracy <cit.>.
While for understanding electron injection full PIC simulations are needed, the general dynamics of shocks should be sculpted by the accelerated ions;
therefore, one may revert to the hybrid approach, in which electrons are considered as a massless neutralizing fluid <cit.>, and still model shock formation, ion acceleration, and plasma instabilities self-consistently.
Hybrid simulations have been extensively used for heliospheric shocks[To give an idea, time and length scales accessible to hybrid simulations on modern supercomputers are comparable with the physical scales of the Earth's bow shock <cit.>.] (e.g., <cit.>), and more recently even to stronger astrophysical shocks.
SNR shocks are characterized by large sonic and Alfvénic (M_A≡ v_sh/v_A, with v_A=B_0/√(4π m n) the Alfvén velocity) Mach numbers, which makes it computationally challenging to capture the diffusion length of accelerated ions D/v_sh≈ (v/v_sh) r_L ≫ M_A c/ω_p while resolving the ion skin depth c/ω_p (ω_p=√(4π n e^2/m) is the ion plasma frequency and n, e, and m the ion density, charge, and mass).
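A rough estimate of this scale separation (my own back-of-the-envelope sketch, assuming Bohm-like scattering and non-relativistic ions) illustrates why boxes of tens of thousands of ion skin depths are needed:

```python
def diff_length_in_skin_depths(E_over_Esh, M_A):
    """Bohm-like upstream diffusion length D/v_sh in units of the ion skin depth c/omega_p,
    for non-relativistic ions of energy E (in units of E_sh = m v_sh^2 / 2):
    D/v_sh = (v/3 v_sh) r_L and r_L/(c/omega_p) = v/v_A  ->  (M_A/3) (E/E_sh)."""
    return M_A/3.0*E_over_Esh

for E in (10, 100, 1000):
    print(f"E = {E:5d} E_sh, M_A = 30 : lambda_diff ~ {diff_length_in_skin_depths(E, 30):8.0f} c/omega_p")
```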
Also promising is the coupling of the hybrid technique with a MHD description of the background plasma <cit.>, though in this framework injection into DSA must be specified by hand.
Hybrid simulations have been used to perform a comprehensive analysis of ion acceleration at collisionless shocks as a function of the strength and topology of the pre-shock magnetic field, the nature of CR-driven instabilities, and the transport of energetic particles in the self-generated magnetic turbulence <cit.>.
Moreover, they have been used to unravel the processes that lead to the injection into DSA of protons <cit.>, ions with arbitrary mass/charge ratio <cit.>, and pre-existing CRs <cit.>.
The progress in modeling non-relativistic shocks via first-principles simulations also features the first PIC simulations showing simultaneous acceleration of both ions and electrons <cit.>, though it is still computationally challenging to go beyond 1D setups for long-term simulations.
§.§ Hybrid Simulations: Ion Acceleration
Large 2D and 3D hybrid simulations of non-relativistic shocks have been performed with the Newtonian code dHybrid <cit.> and its descendent dHybridR <cit.>, which allows accelerated particles to become relativistic.
The typical set up is outlined in <cit.>: a supersonic/superalfvénic flow is smashed onto a reflective wall and the interaction between incoming and reflected flows produces a shock.
Lengths are measured in units of c/ω_p, velocities are normalized to the Alfvén speed v_A, and energies to E_sh≡ m v_sh^2/2, where v_sh is the velocity of the upstream fluid in the downstream frame.
The shock strength is expressed by the Alfvénic Mach number M_A, assumed to be comparable with M_s (both are indicated by M if not otherwise specified).
The shock inclination is defined by the angle ϑ between the shock normal and the background magnetic field B⃗_0;
therefore, ϑ=0 (90) for a parallel (perpendicular) shock.
The kinetic simulations presented in refs. <cit.> have been able, for the first time, to demonstrate that DSA acceleration at non-relativistic strong shocks in general depends on the shock inclination and that can indeed be efficient.
The left panel of Fig. <ref> shows the CR acceleration efficiency, i.e., the fraction of the bulk energy flow that goes into CRs, as a function of shock strength and inclination.
The acceleration efficiency can be as high as ≳ 15% at strong, quasi-parallel shocks, and drops for ϑ≳ 45°, independently of the shock Mach number.
The right panel of Fig. <ref>, instead, shows the ion spectra for shocks with M=50 and different inclinations;
the DSA non-thermal tail vanishes for quasi-perpendicular shocks, where ions gain a factor of few in energy, at most.
Also 3D simulations show the same dependence of the acceleration efficiency on ϑ <cit.>.
Note that when CR acceleration efficiency is large, the post-shock temperature is accordingly reduced with respect to Rankine–Hugoniot jump conditions;
such a modification is a manifestation of the back-reaction of efficient CR acceleration, as predicted by most models of non-linear DSA (more on this in <ref>).
§.§ Magnetic Field Amplification
Since the initial formulation of the DSA theory, particle acceleration has been predicted to be associated with plasma instabilities <cit.>, in particular with the generation of magnetic turbulence at scales comparable to the gyroradii of the accelerated particles (resonant streaming instability, see <cit.>).
Then, Bell pointed out that non-resonant, short-wavelength modes may grow faster than resonant ones (non-resonant hybrid instability <cit.>).
Fig. <ref> shows the structure of a parallel shock with M=30, with the upstream (downstream) to the right (left).
In the shock precursor, a cloud of non-thermal particles drives a current able to amplify the initial magnetic field by a factor of a few, also leading to the formation of underdense cavities filled with energetic particles and surrounded by dense filaments with strong magnetic fields.
The typical size of the cavities is comparable with the gyroradius of the highest-energy particles (a few hundred ion skin depths for the simulation in Fig. <ref>).
Note that, when the streaming instability enters its non-linear stage, filamentary modes are expected to grow, too (e.g., <cit.>).
The propagation of the shock through such an inhomogeneous medium leads to the formation of turbulent structures (via the Richtmyer–Meshkov instability), in which magnetic fields are stirred, stretched, and further amplified.
In this case, amplification via turbulent dynamo may become important even in the absence of large pre-existing density fluctuations.
Magnetic field generation depends on the presence of diffuse ions, hence it is more prominent at quasi-parallel shocks.
Simulations show that the maximum amplification achieved in the precursor scales as δ B/B_0∝√(M) and ranges from factors of a few for M≲ 5 to factors of ≳ 7 for M≳ 50 (see figure 5 in <cit.>).
For M≳ 20 the non-resonant instability grows significantly faster than the resonant one <cit.>, exciting distinctive right-handed modes with wavelength much smaller than the gyroradius r^*_L of the CRs driving the current.
Then, in the non-linear stage, an inverse cascade in k-space progressively channels magnetic energy into modes with increasingly small wavenumber k.
The non-resonant instability eventually saturates when the maximally-growing mode is k_max≈ 1/r^*_L, which effectively scatters the current ions <cit.>.
This is the very reason why the resonant instability saturates already when δ B/B_0∼1 <cit.> while the non-resonant one can grow up to non-linear levels before the driving current is disrupted.
For M≲ 10, δ B/B_0≲1 and both wave polarizations are observed, consistently with the prediction of quasi-linear theory <cit.>.
The reader can refer to <cit.> for a more detailed discussion of the wave spectra and the saturation of the two instabilities.
§.§ Particle Diffusion
CRs are scattered in pitch angle by waves with resonant wavenumbers k(p)∼ 1/r_L(p); in the regime of small deflections this process can be described by a diffusion coefficient.
The most popular choice is to assume the Bohm limit, which is obtained (in the quasi-linear limit δ B/B_0≲ 1) for an Alfvénic turbulence generated via resonant streaming instability by a CR distribution ∝ p^-4 <cit.>.
Bohm diffusion is often heuristically extrapolated into the regime of strong field amplification, but such a prescription used to lack a solid justification.
Global hybrid simulations allow to reconstruct CR diffusion in different regions of the shock, either by using an analytical procedure based on the extent of the CR distribution in the upstream or by tracking individual particles <cit.>.
The two methods return consistent results, as shown in the left panel of Fig. <ref> (see <cit.> for more details).
When magnetic field amplification occurs in the quasi-linear regime (M≲20), particle scattering is well described by the diffusion coefficient self-generated via resonant instability <cit.>, where the scattering rate depends on the magnetic power in resonant waves.
For stronger shocks, instead, D(E) is roughly proportional to the Bohm coefficient and its overall normalization depends on the level of magnetic field amplification δ B/B_0≳ 1 (see also <cit.>).
Such a scaling is determined by the fact that far upstream the spectrum of the excited magnetic turbulence (<cit.>, Figs. 6 and 7) peaks at relatively large wavelengths, comparable with the gyroradius of the highest-energy ions.
The effective scattering rate is also imprinted in the time evolution of the maximum ion energy E_max.
The right panel of Fig. <ref> shows such an evolution, which is linear with time with a slope inversely proportional to the measured diffusion coefficient (dashed lines), as expected for DSA (e.g., <cit.>).
§.§ Oblique Shocks and the Importance of 3D
The hybrid campaign detailed in <cit.> focused on 2D simulations, plus some 3D runs with M=6 <cit.>.
One natural question that arises is: Should the direction of the upstream magnetic field matter if the shock magnetization becomes smaller and smaller? (i.e., if M is sufficiently large?).
<cit.> performed a study of cases with M≳ 10 and found, for the first time in kinetic simulations, that in 3D a non-thermal tail develops spontaneously, i.e., without pre-existing seeds or turbulence <cit.>; ions perform multiple shock drift acceleration cycles before either being advected downstream or escaping upstream.
Oblique and quasi-perpendicular shocks are known to potentially be fast accelerators <cit.> but are also known to be less effective than quasi-parallel shocks in injecting thermal particles <cit.>.
On the other hand, oblique heliospheric shocks are often associated with particle acceleration (mostly electrons, less frequently ions) <cit.>.
While PIC simulations easily reproduce populations of back-streaming electrons <cit.>, 3D simulations are needed to study the injection and acceleration of thermal ions.
This likely explains why the acceleration efficiency in 2D hybrid simulations cuts off quite abruptly for ϑ≳ 45°, differently from the shallower trend in MMS events at the Earth bow shock (Fig. <ref>).
As pointed out in <cit.>, cross-field diffusion plays a crucial role in the return of ions from downstream, and is not properly captured if not in 3D.
<cit.> demonstrated that charged particles in an arbitrary electromagnetic field with at least one ignorable spatial coordinate remain forever tied to a B-field line.
Since in 2D field lines are effectively transverse “sheets", ion diffusion along the shock normal is inhibited;
in 3D, instead, field lines can twist and intertwine, and ions can diffuse cross-field, which effectively prevents them from being rapidly swept downstream.
Tracking reveals that in 2D ions are advected downstream after a couple of gyrations, while in 3D they diffuse back several times, gaining energy at each SDA cycle.
To some extent, this acceleration mechanism is similar to that proposed by <cit.>, who argued that the extreme case of a perpendicular shock where Bohm diffusion were realized downstream would be a rapid accelerator;
our self-consistent simulations show that the process may occur only for large M and may be intrinsically limited when ϑ<90°.
Particles initially gain energy through shock drift acceleration (SDA), tapping into the motional electric field E⃗=-(v⃗/c)×B⃗ during their gyrations around the shock <cit.>.
Acceleration then briefly transitions to DSA (where the energy gain scales ∝ t) before reaching a limit energy E^*, beyond which particles escape upstream.
Acceleration efficiency and spectral slope quite strongly depend on the shock Mach number M: while for M≲ 20 efficiency is only a few percent and spectra are very steep, for M≳ 50 efficiency can exceed 10–20% and spectra converge to the DSA ones, as flat as p^-4 in momentum (Fig. <ref>);
also the level of magnetic field amplification and the maximum energy limit increase with M.
The biggest questions that remain open are whether oblique/quasi-perpendicular shocks can efficiently drive plasma instabilities strong enough to self-sustain DSA up to energies significantly larger than E^*, and whether the same acceleration process is viable for electrons, too.
Both questions require different numerical approaches that are capable of either capturing the longer-term evolution of the system or the physics of electron injection.
§ HYBRID SIMULATIONS REVEAL COSMIC-RAY–MODIFIED SHOCKS
The dHybridR code, which allows accelerated particles to become relativistic <cit.>, has been recently used to investigate the long-term evolution of non-relativistic shocks.
The reader can refer to <cit.> for more information about the setup;
one important ingredient in this campaign is that the authors assumed the fluid electrons to be adiabatic, rather than prescribing an effective polytropic index γ_e aimed to enforce electron/ion equipartition downstream (see appendix of <cit.>).
The latter choice, in fact, requires to fix the compression ratio a priori: if one guesses r=4, the electron equation of state becomes very stiff (γ_e∼ 3) and prevents any shock modification, enforcing r∼ 4 <cit.>.
Instead, using γ_e∼ 5/3, or iteratively setting γ_e until equipartition is self-consistently achieved even for r≳ 4, yields consistent results.
Let us consider as benchmark a strong shock, with both sonic and Alfvénic Mach number M=20, propagating along a background magnetic field (parallel shock).
In this case, about 10% of the shock kinetic energy is converted into accelerated particles <cit.>.
§.§ CR-induced Precursor and Postcursor
Such a benchmark run confirmed the prediction that, when DSA is efficient, the shock develops an upstream precursor, in which the incoming flow is slowed down and compressed under the effect of the CR pressure (see figure 2 of <cit.>).
What was unexpected is that the shock also develops a postcursor, i.e., a region behind the shock where the dynamics is modified by the presence of CRs and self-generated magnetic perturbations.
More precisely, <cit.> attest to the presence of an extended region in which magnetic structures drift at a finite speed towards downstream infinity with respect to the thermal gas.
A Fourier analysis (figure 7 of <cit.>) shows that the phase speed of the magnetic fluctuations is comparable to the local Alfvén speed, both upstream and downstream;
as a result, CRs —which tend to become isotropic in the wave frame— also have a comparable net drift with respect to the background plasma (<cit.>, figure 5).
The development of the postcursor implies that energy/pressure in CRs and magnetic fields are advected away from the shock at a faster rate than in a gaseous shock, which has two crucial effects:
1) it makes the shock behave as partially radiative, enhancing its compression;
2) it makes the CR spectrum steeper, enhancing the rate at which particles leave the acceleration region.
§.§ Enhanced Shock Compression Ratio
The left panels of Fig. <ref> illustrate the hydrodynamical modification induced by the postcursor.
The normalized pressures in CRs and magnetic fields, ξ_c and ξ_B, are plotted as a function of time in the first panel (crosses and stars, respectively);
the color code corresponds to the time in the simulation.
Together, the normalized CR and magnetic pressure encompass 15-20% of the pressure budget in the postcursor:
ξ_c increases quickly to a value ≳ 0.1 and remains nearly constant throughout the simulation, whereas the magnetic pressure rises more slowly up to 0.05-0.075 towards the end of the simulation.
In appendix B of <cit.>, we solve the shock jump conditions between far upstream and the postcursor, including the contributions of CRs and Alfvén-like structures in the conservation of mass, momentum, and energy;
such a solution accounts for the extra compression observed in the simulation, where the total compression ratio reaches ≈ 6, as shown in the bottom left panel of Fig. <ref>;
to stress the importance of the postcursor in the shock dynamics, such a panel also includes as a dashed line the prediction with no CR/magnetic drift.
The CR pressure alone, without the magnetic/drift terms, is not sufficient to account for the strong shock modification that we observe, which demonstrates that the effect is inherently different from the enhanced compression expected in the classical theory of efficient DSA <cit.>.
§.§ Steep Spectra
Drastically differing from the classical theory, an enhanced shock compression is associated not with flatter CR spectra, but rather with steeper ones, as shown in the right panels of Fig. <ref>.
The standard prediction is that the CR momentum spectrum should flatten with time: such expected spectra would be described by Eq. <ref> with r→ R_tot (the total compression ratio) and are shown with dashed lines in the middle panel.
Instead, the measured postshock CR spectra (solid lines) are systematically steeper than such a prediction.
Since CRs do not feel the change in speed of the thermal plasma, but rather that of their scattering centers, the effective compression ratio that they feel is:
R̃≃ u_0/(u_2+v_w,2)≃ R_tot/(1+α); α≡ v_w,2/u_2,
where in the numerator we set v_w,0≈ 0 because fluctuations at upstream infinity should be small.
The α parameter quantifies the effect of the postcursor-induced spectral modification;
since α>0 (waves move towards downstream), the compression ratio felt by the CRs is always smaller than the fluid one.
Note that, when the magnetic field is compressed at the shock and B_2≈ R B_1, we have α= R^3/2α_1≲ 8α_1 and the correction due to the postcursor dominates over the one in the precursor.
CR spectra turn out to be even steeper than p^-4 and match very well the slope calculated using R̃, namely
q≡ 3R̃/(R̃-1)= 3R_tot/(R_tot-1-α).
This is plotted as dashed lines in the bottom right panel of Fig. <ref>.
In Bell's approach (Eq. <ref>), we interpret the steepening as induced by an increase in the escape probability, rather than by a reduction of the acceleration rate (figure 3 in <cit.>).
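A few lines of Python (a sketch based on the reconstruction of the slope formula above; the values of R_tot and α are illustrative) make the trend explicit:

```python
def q_momentum(R_tot, alpha=0.0):
    """Momentum slope q = 3*R_eff/(R_eff - 1), with the effective compression
    R_eff = R_tot/(1 + alpha) felt by CRs that are isotropic in the frame of the
    scattering centers (alpha = v_w,2/u_2, the postcursor drift)."""
    R_eff = R_tot/(1.0 + alpha)
    return 3.0*R_eff/(R_eff - 1.0)

print(f"standard strong shock       : q = {q_momentum(4.0):.2f}")
print(f"modified shock, no drift    : q = {q_momentum(6.0):.2f}")
print(f"modified shock, alpha = 0.6 : q = {q_momentum(6.0, 0.6):.2f}")
```

Even with a total compression of ≈6, a modest postcursor drift is enough to push q above 4, i.e., q_E above 2.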
§ A REVISED THEORY OF EFFICIENT DSA
In summary, an interesting non-linear, physics-rich, picture arises.
Quasi parallel shocks are inherently efficient in injecting thermal ions into the DSA mechanism <cit.>;
DSA is self-sustaining and the streaming of energetic particles upstream of the shock triggers violent plasma instabilities (especially, the non-resonant one <cit.>), which foster the rapid scattering and energization of CRs.
If no self-regulating effects kicked in, the DSA efficiency would grow uncontrolled, leading to the flatter and flatter spectra envisioned by the standard theory <cit.>.
Instead, when acceleration efficiency reaches ∼ 10%, the associated generation of magnetic field grows and back-reacts on both the shock modification and on the CR spectrum, as discussed above.
The net result of this non-linear chain is that both CR acceleration and B amplification saturate, yielding a DSA efficiency of ∼ 10% and CR spectra mildly steeper than p^-4.
One important implication is that the spectrum produced by efficient DSA is not universal, but rather depends on the strength of the self-generated fields.
This adds a novel, crucial, physical constraint when modeling the non-thermal emission from CR sources, as outlined in 6 of <cit.> and in <cit.>.
§.§ The Case of SN1006
Particularly interesting is the case of SN1006, which shows a bilateral symmetry defined by the direction of B⃗_0 <cit.>.
X/γ-ray emission comes from the quasi-parallel (polar caps) regions <cit.>, implying the presence of multi-TeV electrons, while radio emission is more azimuthally symmetric <cit.>, suggesting the presence of GeV electrons also in oblique regions.
While efficient ion DSA up to multi-TeV energies is consistent with the low polarization and strong synchrotron emission in the polar caps, electron acceleration in quasi-perpendicular regions is likely boot-strapped via SDA, as outlined here, and then can proceed up to GeV energies just because of the interstellar turbulence <cit.>.
Whether multi-TeV electrons should also be expected in quasi-perpendicular regions is an interesting question that hinges on the longer-term evolution of these systems.
It is worth noticing that
<cit.> used Chandra/XMM X-ray observations to show that the CR-induced shock modification depends on the orientation of the ambient magnetic field, finding convincing evidence that CR acceleration and field amplification are more efficient in quasi-parallel regions and vanish in quasi-perpendicular ones, as originally predicted <cit.>.
In particular, in quasi-parallel regions the shock compression ratio clearly exceeds R=4, by almost a factor of 2;
moreover, the slope of the accelerated particles inferred from the synchrotron emission is q_E≈ 2.3, steeper than the standard DSA prediction <cit.>.
Both pieces of evidence, as well as the level of magnetic field amplification inferred from the non-detection of a precursor in the X-rays <cit.>, agree perfectly well with the revised DSA theory that hinges on the role of the postcursor in shaping the shock dynamics and the CR spectral slopes.
§ BEYOND PROTON ACCELERATION
These notes do not have the presumption to present all the monumental work done by several groups in the past decades to unravel shock acceleration.
The review of the recent progress, which mostly stemmed out from kinetic simulations, is heavily biased towards the theory of proton acceleration, which almost invariably controls the overall shock dynamics.
This does not directly address the acceleration of electrons and of ions heavier than H, which are important to understand the spectrum and the sources of CRs <cit.>.
Nevertheless, DSA is general enough that particles with the same rigidity should undergo the same acceleration, so that most of the arguments about spectral slopes and maximum energies should still apply.
The main difference, though, is that particles with different mass/charge ratio may be injected into DSA in a different way, the chief example being the fact that the electron/proton ratio is ∼ 10^-4-10^-3 both in Galactic CRs and in SNRs <cit.>.
§.§ Acceleration of Ions with Arbitrary Mass and Charge
Hybrid simulations have been used to study the thermalization, injection, and acceleration of ions with different mass/charge ratios, A/Z, in non-relativistic collisionless shocks <cit.>.
These results can be summarized as follows:
1) ions thermalize to a post-shock temperature proportional to A, which is to be expected since the free kinetic energy available scales with the species' mass;
2) when diffusive shock acceleration is efficient, ions develop a non-thermal tail whose extent scales with Z, a manifestation of the fact that DSA is rigidity dependent;
3) the normalization of the power-law tail is enhanced ∝ (A/Z)^2, so that heavy ions are preferentially accelerated.
This last scaling, never predicted theoretically but just observed in kinetic simulations, provides a quantitative explanation for the observed chemical composition of Galactic CRs, which are systematically richer in heavier nuclei (see figure 3 in <cit.>).
The preferential injection and acceleration of heavy nuclei depends on the shock strength: for shocks with lower Mach number the enhancement is less prominent, proportional to A/Z.
Moreover, we find that proton and ion injection depend on the shock inclination, being suppressed for magnetized oblique and quasi-perpendicular shocks.
These two trends hinge on the pivotal role that the self-generated magnetic turbulence has in promoting the injection of heavy nuclei, which need to be heated non-collisionally at the shock crossing (also see <cit.>).
§.§ Electron Acceleration
More than a subsection, this part would deserve an entire paper, given all the effort that has gone into trying to unravel why electrons are injected into DSA less efficiently than protons.
Energetic electrons, despite being subdominant in number and energy density with respect to energetic ions, are responsible for most of the non-thermal radiation produced by a shock, emitting from radio to γ-rays via synchrotron, bremsstrahlung, and inverse-Compton emission.
Hybrid simulations have been pivotal in shaping the current theory of ion DSA and, in general, of CR-modified shocks, but they lack a kinetic treatment for the electrons and cannot address one outstanding question that arises in modeling multi-wavelength emission from shock-powered systems: When and how are electrons accelerated?
Accounting for electron physics naturally requires full-PIC simulations, which are dramatically more expensive than hybrid ones:
for the same problem in physical time/lengths, PIC is more expensive by a factor ∝ℳ^d/2+1, where ℳ is the proton/electron mass ratio and d is the number of spatial dimensions of the computational box[This scaling comes from resolving the electron inertial length and plasma frequency rather than the ion inertial length and cyclotron frequency. The problem worsens when also the Debye length must be resolved <cit.>, but this is not usually necessary for shocks <cit.>.].
A natural choice for keeping simulations manageable is to use an artificial value of ℳ≪ 1836 and a reduced box dimensionality, but these choices are not harmless because important pieces of the physics may be lost, as it was pointed out by several authors <cit.>.
In a nutshell, the processes that promote ion injection (outlined in <ref>), do not necessarily work for electrons, which have smaller gyroradii and opposite charge;
instead, electrons need to rely on conservation of their magnetic moment <cit.> and some pre-acceleration mechanism in order not to be advected downstream and thermalized after crossing the shock.
From both simulations and observations, we now know that particle injection depends on the shock inclination :
for ϑ≲ 45 (quasi-parallel shocks), ions are spontaneously injected into DSA and electron can be injected thanks to the magnetic turbulence driven by such energetic ions <cit.>;
however, for ϑ≳ 45 (oblique shocks), ion injection is suppressed in magnetized shocks <cit.> and electrons have to drive their own waves in order to diffuse back to the shock.
A lot of effort from several groups has gone into unraveling which pre-acceleration process(es) may eventually lead to electron injection into DSA for different simulations parameters;
a non-comprehensive list limited to the most promising ones includes: whistler waves, oblique firehose modes, shock surfing acceleration, shock drift acceleration, intermediate-scale instability, electron-cyclotron drift instability, and magnetic reconnection in the shock foot
<cit.>.
While these papers demonstrate that modeling electron DSA from first principles is possible, a comprehensive theory of electron injection and acceleration is still missing (see, e.g., the very recent review <cit.>).
§ PIC SIMULATIONS OF NON-RELATIVISTIC SHOCKS
As mentioned above, the shock inclination controls ion injection into DSA; it also plays a fundamental role in electron injection, so we will distinguish between quasi-parallel and oblique shocks.
Moreover, for every shock velocity there exists a critical angle ^* above which the shock becomes superluminal, i.e., even particles moving along a magnetic field line at the speed of light cannot overrun the shock because the projection of their velocity along the shock normal is invariably smaller than the shock speed <cit.>.
Such a condition can be expressed as cosϑ^*= v_sh/c, so the problem of having particles escape upstream (necessary to undergo DSA) is more serious at trans-relativistic and relativistic shocks.
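For reference, the critical angle implied by cosϑ^*= v_sh/c can be tabulated with a few lines (the shock speeds and the function name are illustrative, not from the original notes):

```python
import numpy as np

def theta_superluminal(v_sh_over_c):
    """Critical obliquity above which no particle moving along B at speed c
    can outrun the shock: cos(theta*) = v_sh/c."""
    return np.degrees(np.arccos(v_sh_over_c))

for beta in (0.01, 0.1, 0.5, 0.9):
    print(f"v_sh/c = {beta:4.2f} -> theta* = {theta_superluminal(beta):5.1f} deg")
```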
One may expect that the direction of the magnetic field matters less and less if the shock magnetization becomes smaller and smaller, i.e., if the shock Alfvénic Mach number is sufficiently large.
Qualitatively, one can also think that if the plasma β≡ P_ th/P_ B≈ M_A^2/M_s^2 (i.e., the ratio of the upstream thermal to magnetic pressure, M_s being the sonic Mach number) is large enough, particles are less magnetized and more susceptible to be injected into DSA.
Therefore, we expect that both ion and electron acceleration should depend on four main parameters: the shock inclination ϑ, the shock speed v_sh/c, and the Alfvénic and sonic Mach numbers, M_A and M_s (or equivalently β).
On top of these parameters set by the shock environment, there are additional parameters that need be adjusted to keep the simulation numerically acceptable.
Such parameters are the reduced proton/electron mass ratio, ℳ≲ 1836, and the dimensionality of the simulation box.
Time/space resolution and number of particles per cell are further model parameters that require a convergence test <cit.>.
§.§ Quasi-parallel shocks
<cit.> put forward the first PIC simulation that showed the simultaneous development of DSA tails in both electron and ion distributions at a non-relativistic shock (Fig. <ref>).
In order to capture the diffusion lengths of accelerated ions, and the instability that they produce, a very long box (about 10 million cells, a few times 10^4d_i, where d_i is the ion skin depth) is needed, which makes it challenging to go beyond 1D.
The faster the shock, the closer all the velocity scales are, which results in a lower computational burden.
This allowed us to successfully run trans-relativistic shocks with v_sh=0.3–0.8c <cit.> even in 2D, confirming the simultaneous acceleration of electrons and ions (Fig. <ref>).
The overall picture that arises from these simulations is that at quasi-parallel shocks both species are spontaneously injected into DSA and accelerated to larger and larger energies.
The process is self-similar: the maximum ion energy increases with time because particles scatter on top of the magnetic perturbations that they generate upstream.
§.§ Oblique shocks
<cit.> performed a 1D survey of electron and ion acceleration at oblique shocks with ϑ=63°.
By exploring the parameter space of different sonic and Alfvénic Mach numbers, they found that high Mach number quasi-perpendicular shocks can efficiently accelerate electrons to power-law spectra (Fig. <ref>).
Electrons are reflected by magnetic mirroring at the shock and drive nonresonant waves in the upstream.
Reflected electrons are trapped between the shock front and upstream waves, and undergo multiple cycles of shock drift acceleration before being injected into DSA.
Strong current-driven waves also temporarily change the shock obliquity and cause mild proton pre-acceleration even in quasi-perpendicular shocks, which usually have a hard time accelerating protons.
The injection of electrons in high-β, weak (M_s≲ 5), oblique shocks has also been investigated in 2D <cit.>;
such an environment seems to favor electron injection into DSA, even if conclusive evidence of DSA has not been reported, yet.
§.§ Relativistic Shocks
Here we refer to the pioneering PIC simulations performed by A. Spitkovsky and L. Sironi <cit.>, which cover the parameter spaces that lead to DSA of electron, positrons, and ions.
In a nutshell, when all the species' velocities become very close to c, there is little difference between electrons and protons, in the sense that they are injected into DSA in a similar way.
Spectra tend to be slightly steeper than the test-particle prediction because particles are advected away downstream more effectively than in non-relativistic shocks (i.e., particles are not isotropic in the downstream <cit.>), as discussed in the thorough review by <cit.>.
Note that PIC simulations have not been pushed long enough to see non-linear DSA in action, so this may need to be revised, too.
Another main difference with respect to non-relativistic shocks is that the turbulence driven by the accelerated particles seems to have too small a wavelength to scatter particles efficiently.
At non-relativistic shocks E_ max∝ t <cit.>, while at relativistic shocks generation of turbulence is hindered by the shorter advection timescales and acceleration is slower, E_ max∝√(t) <cit.>.
§ CONCLUSIONS
In these notes we introduced the fundamentals of shock acceleration (Fermi mechanisms, shock hydrodynamics, DSA) and the general expectations that stem from the linear theory ( <ref>).
We proceeded by discussing the limitations of such a theory and the pieces of observational evidence that complement and challenge it (<ref>).
Then, in <ref> we presented the modern hybrid (kinetic ions, fluid electrons) plasma simulations that validate and complete the predictions for the spectral slope of the accelerated particles and quantify the most elusive ingredients that a linear theory cannot predict (injection efficiency, maximum achievable energy).
Unprecedentedly long hybrid simulations are discussed in <ref>, where the main deviations from the linear theory are introduced, most notably the formation of a postcursor, which modifies both the shock compression ratio and the spectrum of the accelerated particles.
A revised theory of DSA arises (<ref>), which agrees well with observations of many shock-powered astrophysical objects, in particular SNRs.
Finally, we outline the progress in unraveling the injection and acceleration of heavy ions in <ref> and electrons (<ref>), for which PIC simulations are crucial players.
We conclude by pointing out what the author believes are the most important missing pieces of a complete theory of shock acceleration:
* understanding electron injection as a function of the shock parameters; this is a key ingredient (and currently essentially a free parameter) in modeling the multi-wavelength emission from shock-powered astrophysical systems <cit.>;
* quantifying the maximum energy achievable as a result of the CR-driven instabilities; this encompasses characterizing the saturation of the Bell instability <cit.> in the context of Galactic accelerators <cit.>;
* understanding the long-term evolution of oblique and quasi-perpendicular shocks <cit.>; this requires full-3D simulations, but has the potential to address the detailed phenomenology of many space/astro shocks.
These notes cover the introduction to shock acceleration presented at the International School of Physics “Enrico Fermi" on Foundations of Cosmic Ray Astrophysics, held in Varenna (Italy), in June 2022.
I hope this may be useful to the next generation of cosmic rays astrophysicists!
I warmly thank my mentors Pasquale Blasi, Mario Vietri, and Anatoly Spitkovsky for their guidance, competence and passion, and for having introduced me to cosmic ray and astroplasma physics.
I also want to acknowledge how much I have learned standing on the shoulders of the founders of the physics in these notes (in alphabetical order): A. Bell, R. Blandford, L. O'C. Drury, D. Ellison, D. Eichler, T. Gaisser, T. Jones, M. Malkov, and W. Matthaeus.
Last, but not least, all of my collaborators in the papers mentioned here, especially E. Amato, S. Gupta, C. Haggerty, B. Schroer, L. Sironi, L. Wilson III, and my students R. Diesing, R. Mbarek, L. Orusa, E. Simon, and G. Zacharegkas.
Simulations were performed on computational resources provided by the University of Chicago Research Computing Center.
D.C. was partially supported by NASA through grants 80NSSC20K1273 and 80NSSC18K1218 and NSF through grants AST-1909778, PHY-2010240, and AST-2009326.
|
http://arxiv.org/abs/2307.02899v1
|
20230706101417
|
Experimental realization of quantum non-Markovianity through the convex mixing of Pauli semigroups on an NMR quantum processor
|
[
"Vaishali Gulati",
"Vinayak Jagadish",
"R. Srikanth",
"Kavita Dorai"
] |
quant-ph
|
[
"quant-ph"
] |
[email protected]
Department of Physical Sciences, Indian Institute of Science Education & Research Mohali,
Sector 81 SAS Nagar, Manauli PO 140306 Punjab India
[email protected]
Instytut Fizyki Teoretycznej, Uniwersytet Jagielloński, Łojasiewicza 11, 30-348 Kraków, Poland
[email protected]
Theoretical Sciences Division,
Poornaprajna Institute of Scientific Research (PPISR),
Bidalur post, Devanahalli, Bengaluru 562164, India
[email protected]
Department of Physical Sciences, Indian Institute of Science Education & Research Mohali,
Sector 81 SAS Nagar, Manauli PO 140306 Punjab India
This experimental study aims to investigate the convex combinations of Pauli semigroups with arbitrary mixing
parameters to determine whether the resulting dynamical map exhibits Markovian or non-Markovian behavior. Specifically, we consider the cases of equal as well as unequal mixing of two Pauli semigroups, and demonstrate that the resulting map is always non-Markovian. Additionally, we study three
cases of three-way mixing of the three Pauli semigroups and determine the
Markovianity or non-Markovianity of the resulting maps by experimentally
determining the decay rates.
To simulate the non-unitary dynamics of a single qubit system with different mixing combinations of Pauli semigroups on an NMR quantum processor, we use an algorithm involving two ancillary qubits. The experimental results align with the theoretical predictions.
Experimental realization of quantum non-Markovianity through the convex mixing of Pauli semigroups on an NMR quantum processor
Kavita Dorai
August 1, 2023
==============================================================================================================================
§ INTRODUCTION
The field of quantum computing is rapidly developing, and there is a crucial need to develop reliable methods to characterize and control quantum systems. Quantum systems can interact with their environment in various ways, leading to
decoherence and dissipation, which could have a deleterious effect on the computational protocols. The study of open quantum systems <cit.> therefore has significant
implications for applications in quantum information processing, quantum
computing, and quantum communication. Recent research has focused on the effect of decoherence on the performance of quantum computers <cit.> and the use of error correction codes to
address this issue <cit.>. A critical aspect of open quantum systems is characterizing their dynamical behavior, with a particular focus on the distinction between Markovian and non-Markovian dynamics <cit.>. The theory of
non-Markovian dynamics has become an important area of research, with a focus on characterization, quantification, and detection of non-Markovian behavior <cit.>.
The reduced dynamics of the quantum system of interest undergoing open evolution is described by a time-continuous family of completely positive (CP) and trace-preserving (TP) linear maps {Λ (t): t≥ 0, Λ(0) = 1} known as the quantum dynamical map, acting on the bounded operators of the Hilbert space of the system of interest <cit.>.
The dynamical map is also related to the time-local generator ℒ(t) <cit.> in the time-local master equation, Λ̇(t) = ℒ(t)Λ(t), with
ℒ(t)[ρ]= -i[H(t),ρ]
+∑_i γ_i (t)
(L_i(t)ρ L_i(t)^†-(1/2){L_i(t)^† L_i(t),ρ}),
where H(t) is the effective Hamiltonian, the L_i(t) are the noise operators, and the γ_i (t) are the decoherence rates.
Λ(t_f, t_i) = V(t_f, t)Λ(t, t_i), ∀ t_f≥ t ≥ t_i≥ 0.
The map is CP-divisible if for all t, the propagator V(t_f, t) is CP and the corresponding decay rates γ_i (t) are positive at all times. Otherwise, the map is said to be CP indivisible.
In contrast with classical non-Markovianity, quantum non-Markovianity does not have a unique definition <cit.>. Two major proposals to address quantum non-Markovianity, are based on the CP-indivisibility criterion (RHP) <cit.> and on the distinguishability of states (BLP) <cit.>. According to the RHP divisibility criterion <cit.>, a quantum dynamical map is non-Markovian if it is CP-indivisible. A Markovian evolution, therefore is CP-divisible, with all the decay rates γ_i(t) in the time-local master equation Eq. (<ref>) are positive at all times. A temporarily negative decay rate is therefore a signature of CP-indivisibility of the map and therefore non-Markovianity. According to the BLP definition <cit.>, a quantum dynamical map Λ(t) is said to be Markovian if it does not increase the distinguishability of two initial states ρ_1 and ρ_2, i.e., if ‖Λ(t)(ρ_1) - Λ(t)(ρ_2)‖≤‖Λ(0)(ρ_1) - Λ(0)(ρ_2)||, where ‖·‖ denotes the trace distance. In this work, we stick to the CP-indivisibility criterion of non-Markovianity.
Convex combinations of Pauli semigroups and time-dependent Markovian Pauli dynamical maps were studied in <cit.>, with a discussion of the geometrical aspects and of non-Markovianity. These results showed the non-convexity of the sets of CP-divisible and CP-indivisible Pauli dynamical maps. Convex combinations of semigroups of generalized Pauli dynamical maps have been addressed in <cit.>.
Convex combinations of noninvertible dynamical maps has also been studied recently <cit.>. For the case of generalized Pauli dynamical maps, it was shown that mixing invertible maps can never result in noninvertible maps <cit.>. Subsequently, it was also shown that noninvertibility of the generalized Pauli input maps is necessary for getting a semigroup <cit.>. The fraction of (non)invertible maps obtained by mixing noninvertible generalized Pauli maps was quantified in <cit.>. The measure of the set of non-Markovian maps obtained by mixing noninvertible Pauli maps was studied in <cit.>.
In recent years, there has been a growing interest in the experimental
implementation of non-Markovian dynamics in various physical systems, including quantum dots <cit.>, superconducting qubits
<cit.>, trapped ions <cit.>, and nuclear magnetic resonance (NMR) systems <cit.>. NMR systems, in particular, are a useful platform to investigate non-Markovian dynamics due to their excellent
ability to control and manipulate system-environment interactions. Various studies in NMR investigate different quantum correlations present in the system <cit.> and their dynamics under various
environments <cit.>.
In this work, we aim to experimentally study the behavior of a single-qubit system under different mixing combinations of Pauli semigroups on an NMR quantum processor. We demonstrate that the mixing of any two Markovian Pauli semigroups produces a map which is CP-indivisible and therefore RHP non-Markovian; one of the decay rates always turns out to be negative in this scenario. We also verify our experimental results for arbitrary choices of the mixing
parameters in the dynamical semigroup realizations of the three Pauli semigroups, in agreement with the notion of the Pauli simplex as defined in <cit.>. We note that the non-Markovian nature of the map becomes apparent when one or more of the decay rates becomes negative. We consider the case of a single qubit with two ancilla qubits to
simulate non-unitary dynamics and make use of the algorithm for the
circuit design as in <cit.>.
The rest of this paper is organized as follows. Sec. <ref> briefly describes the theory of the convex combinations of Pauli semigroups. The experimental details and results are presented in Sec. <ref>. We then conclude in Sec. <ref>.
§ CONVEX COMBINATION OF PAULI SEMIGROUPS
Consider the three Pauli dynamical semigroups,
Λ_i(t)[ρ] = [1-p(t)] ρ + p(t) σ_i ρ σ_i,  i = 1,2,3, with
p(t) = [1 - e^-ct]/2,  c > 0.
Here p(t) is the decoherence function and σ_i are the Pauli matrices.
The convex combination of the three Pauli semigroups Eq. (<ref>), each mixed in proportions of x_i is,
Λ̃(t) = ∑_i=1^3 x_iΛ_i (t), (x_i >0, ∑_i x_i =1).
Let us call the three Λ_i (t)'s input maps and Λ̃(t) the output map. The associated time-local master equation for Λ̃(t) is
ℒ(t)[ρ] = ∑_i=1^3γ_i (t) (σ_iρσ_i-ρ),
with the decay rates
γ_1(t) = (1-x_2/1-2 (1-x_2)p(t)+1-x_3/1-2 (1-x_3)p(t)-1-x_1/1-2 (1-x_1) p(t))ṗ(t)/2
γ_2(t) = (1-x_1/1-2 (1-x_1)p(t)+1-x_3/1-2 (1-x_3)p(t)-1-x_2/1-2 (1-x_2) p(t))ṗ(t)/2
γ_3(t) = (1-x_1/1-2 (1-x_1)p(t)+1-x_2/1-2 (1-x_2)p(t)-1-x_3/1-2 (1-x_3) p(t))ṗ(t)/2.
The CP-divisibility, and therefore the Markovianity, of the output map Λ̃(t) depends on the mixing coefficients x_i. For instance, an equal mixing of the three Pauli semigroups results in a Markovian output. The fraction of non-Markovian (CP-indivisible) maps obtained by mixing Pauli semigroups was reported in <cit.>.
As opposed to three-way mixing, any mixing of two Pauli semigroups is always non-Markovian. To see this, let x_1 = 0. The decay rate γ_1(t) turns out to be
γ_1(t) = -[ (1-x_2) x_2 [1-p(t)] p(t)/[1-2p(t)] [1-2 (1-x_2) p(t)] [1-2 x_2 p(t)]] ṗ(t),
which remains negative for all values of x_2. (Note that x_3 = 1-x_2.)
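As an illustration of this sign analysis (our own numerical sketch, not part of the original analysis; all function names below are ours), the decay rates of Eq. (<ref>) can be evaluated for arbitrary mixing fractions. With c = 2, the sketch confirms that γ_1(t) dips below zero whenever x_1 = 0 (two-way mixing), while all rates remain non-negative for equal three-way mixing:

import numpy as np

def p(t, c=2.0):
    # Decoherence function p(t) = [1 - exp(-c t)]/2 of the Pauli semigroups
    return 0.5 * (1.0 - np.exp(-c * t))

def p_dot(t, c=2.0):
    # Time derivative of p(t)
    return 0.5 * c * np.exp(-c * t)

def gamma_rates(t, x, c=2.0):
    # Decay rates gamma_i(t) of the convex mixture sum_i x_i Lambda_i(t)
    x1, x2, x3 = x
    pt, pd = p(t, c), p_dot(t, c)
    term = lambda xi: (1.0 - xi) / (1.0 - 2.0 * (1.0 - xi) * pt)
    g1 = 0.5 * pd * (term(x2) + term(x3) - term(x1))
    g2 = 0.5 * pd * (term(x1) + term(x3) - term(x2))
    g3 = 0.5 * pd * (term(x1) + term(x2) - term(x3))
    return g1, g2, g3

t = np.linspace(0.01, 2.0, 200)
g1_two_way, _, _ = gamma_rates(t, (0.0, 0.25, 0.75))          # mixing Lambda_2 and Lambda_3 only
print(g1_two_way.min())                                        # negative: CP-indivisible, non-Markovian
print(min(g.min() for g in gamma_rates(t, (1/3, 1/3, 1/3))))   # non-negative: Markovian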
§ EXPERIMENTAL ANALYSIS OF MARKOVIANITY AND NON-MARKOVIANITY
§.§ NMR Simulation of Pauli semigroups
A dynamical map acting on a system with a d-dimensional Hilbert space can be simulated with a d^2-dimensional ancilla if one allows the most general unitary evolution
of the total system, under the assumption that the ancilla is initialized in a pure state <cit.>. Therefore, to simulate maps on a single qubit, two ancilla qubits
are sufficient.
The finite-time map Λ̃(t), as in Eq. (<ref>), being CPTP admits an operator-sum representation, Λ̃(t)(ρ) = ∑_k E_k(t) ρ E^†_k(t), where the operators
E_k(t) satisfy the trace-preservation condition ∑_k E^†_k(t) E_k(t) = 1.
The non-unitary operators E_k(t) associated with the dynamical map can be decomposed into linear combinations of four unitary operators (the Pauli matrices σ_i in this case) and are experimentally implemented using two ancillary qubits added to the working system. Efficient implementation of the non-unitary transformation represented by Λ̃(t) is achievable when suitable unitary operations U, V, and W are found such that E_k = ∑_i W_ki V_i0 σ_i.
By applying the overall unitary operation (I⊗ W)U(I⊗ V) to the initial state of the working and ancillary systems, followed by tracing out the ancilla, the simulation of the map is obtained. The algorithm involving three unitaries offers the advantage of implementing maps built from convex mixtures of Pauli semigroups in a general manner: it eliminates the need to design separate circuits for each specific mixing combination. By incorporating three unitaries into the algorithm, it becomes possible to adjust and experiment with different mixing parameters and Pauli operators, allowing for greater flexibility in simulating the desired non-unitary dynamics.
The algorithm is as follows.
* Transforming the state of the ancilla qubits: After initializing the three-qubit system in the state |0⟩_s|00⟩ where
|0⟩_s is the state of the system qubit and |00⟩ that of the ancillary qubits, a unitary operation V is performed on the ancillary qubits. The composite state evolves to V_00|0⟩_s|00⟩ + V_10|0⟩_s|01⟩ + V_20|0⟩_s|10⟩+ V_30|0⟩_s|11⟩. The mixing parameters and the decoherence function associated with the Kraus operators determine the values in the first column of the unitary matrix V.
* Transforming the state of the system: The unitary operations σ_i are applied
on the system qubit depending on the state of the ancilla qubits acting as control qubits.
U = σ_0 ⊗|00⟩⟨00| + σ_1 ⊗|01⟩⟨01| + σ_2 ⊗|10⟩⟨10| + σ_3 ⊗|11⟩⟨11|,
where σ_0 is the identity matrix. The system now evolves to the state V_00σ_0|0⟩_s|00⟩ + V_10σ_1|0⟩_s|01⟩ + V_20σ_2|0⟩_s|10⟩ + V_30σ_3|0⟩_s|11⟩.
* Finally, the unitary operation W is performed on the ancillary system, which transforms the state into ∑_i,k=0^3 W_ki V_i0 σ_i |0⟩_s |k⟩, where E_k = ∑_i=0^3 W_ki V_i0 σ_i. The elements of the matrix W are uniquely determined by the choice of matrix elements of V; in our cases, W turns out to be the identity matrix.
* On measuring the final state of the working system with the ancillary system in the state |k⟩⟨ k|, we obtain E_k|0⟩_s⟨0|_sE^†_k. By tracing out the ancillary qubits, summing over each state |k⟩⟨ k|, the resultant is ∑_k E_k(t)|0⟩_s⟨0|_s E^†_k(t) which corresponds to simulating the map Λ̃(ρ) where the initial state of the system ρ is |0⟩⟨0|.
The specific forms of the matrices V used in the experiments depend on the
dynamical map under consideration, and the specific forms used in our experiments are given in the following section.
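To check the dilation numerically, the following sketch (our own, in NumPy; it uses the V matrix quoted below for two-way mixing and takes W to be the identity, as stated above) applies (1⊗W)U(1⊗V) to |0⟩_s|00⟩, traces out the two ancillas, and verifies that the reduced state coincides with the direct action of the mixed map on |0⟩⟨0|:

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def V_two_way(p, a):
    # Ancilla-preparation unitary whose first column encodes the Kraus weights of
    # a*Lambda_3 + (1-a)*Lambda_2 (the two-way-mixing matrix V given below)
    return np.array([
        [np.sqrt(1 - p),        np.sqrt(p),                 0, 0],
        [0,                     0,                          1, 0],
        [np.sqrt(p * (1 - a)), -np.sqrt((1 - a) * (1 - p)), 0, np.sqrt(a)],
        [np.sqrt(a * p),       -np.sqrt(a * (1 - p)),       0, -np.sqrt(1 - a)],
    ], dtype=complex)

def controlled_U():
    # U = sum_i sigma_i (system) tensor |i><i| (ancillas), with sigma_0 the identity
    U = np.zeros((8, 8), dtype=complex)
    for i, s in enumerate(paulis):
        proj = np.zeros((4, 4), dtype=complex); proj[i, i] = 1.0
        U += np.kron(s, proj)
    return U

p, a = 0.3, 0.25
V = V_two_way(p, a)
assert np.allclose(V.conj().T @ V, np.eye(4))              # V is unitary

psi0 = np.kron(np.array([1, 0], dtype=complex),            # |0>_s
               np.array([1, 0, 0, 0], dtype=complex))      # |00> of the ancillas
psi = controlled_U() @ np.kron(I2, V) @ psi0               # W = identity, hence omitted
rho_sys = np.outer(psi, psi.conj()).reshape(2, 4, 2, 4).trace(axis1=1, axis2=3)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
target = a * ((1 - p) * rho0 + p * Z @ rho0 @ Z) \
       + (1 - a) * ((1 - p) * rho0 + p * Y @ rho0 @ Y)
assert np.allclose(rho_sys, target)                        # circuit reproduces the mixed map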
§.§ Experimental Parameters
The three NMR qubits were realized
using the three ^19F spin-1/2 nuclei in the
molecule trifluoroiodoethylene (Fig. <ref>)
dissolved in the deuterated solvent, d6-acetone.
All experiments were performed at ambient
temperature (≈ 298 K) on a Bruker AVANCE-III 400 MHz NMR spectrometer
equipped with a Broadband Observe (BBO) probe. The high-temperature, high-field approximation simplifies the NMR Hamiltonian by neglecting certain terms when the thermal and Zeeman energies dominate over other interactions. This approximation enables easier analysis and calculations in NMR experiments. The resulting Hamiltonian, assuming weak scalar coupling J_ij between spins i and j, is given by <cit.>
H = - ∑_i=1^3ω_i I_iz
+ 2 π∑_i<j^3 J_ij I_iz I_jz,
where ω_i is the chemical shift of the ith spin, and I_iz
represents the z-component of the spin-1/2 operator for the ith
spin.
Nuclear spins at thermal equilibrium are
represented by the density operator,
ρ =exp(-H/k_BT)/Z,
where H is the Hamiltonian of the system, k_B is
the Boltzmann's constant, T is the temperature,
and Z is the partition function.
Starting from thermal equilibrium, the system is prepared in a pseudopure
state (PPS) using the spatial averaging technique <cit.>,
with the density matrix corresponding to the PPS being given by
ρ_000 = [(1-ϵ)/8] 1_8 + ϵ|000⟩⟨000|,
where ϵ∼ 10^-5 is the spin polarization at
room temperature and 1_8 is the 8 × 8
identity operator. The identity part of the density operator plays no role
and the NMR signal arises solely
from
the traceless part of the density matrix given in Eq. (<ref>).
T_1 and T_2 relaxation times in NMR describe the return to equilibrium and the loss of phase coherence of nuclear spins, respectively: T_1 measures the recovery of longitudinal magnetization, while T_2 measures the decay of transverse magnetization.
The experimentally determined T_1 and T_2 relaxation times for the three qubits lie, on average, in the range of 1-5 s. The experimentally
measured scalar couplings are given by J_12 = 69.65 Hz, J_13 = 47.67 Hz,
and J_23 = -128.32 Hz.
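For concreteness, the secular Hamiltonian above can be assembled numerically as follows (our own sketch; the chemical-shift offsets are placeholders, since only the scalar couplings are quoted here):

import numpy as np

Iz = 0.5 * np.diag([1.0, -1.0])            # single-spin z operator

def embed(single, pos, n=3):
    # Place a single-spin operator at position pos in an n-spin register
    ops = [np.eye(2)] * n
    ops[pos] = single
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

omega = 2 * np.pi * np.array([1.0e3, 2.0e3, 3.0e3])    # placeholder chemical shifts (rad/s)
J = {(0, 1): 69.65, (0, 2): 47.67, (1, 2): -128.32}    # measured scalar couplings (Hz)

H = -sum(omega[i] * embed(Iz, i) for i in range(3))
H += 2 * np.pi * sum(Jij * embed(Iz, i) @ embed(Iz, j) for (i, j), Jij in J.items())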
The radiofrequency (rf) pulses required for creating the PPS were
designed using the Gradient Ascent Pulse Engineering
(GRAPE) technique <cit.>, along with pulsed
magnetic field gradients <cit.>.
The ^19F 90^∘ rf pulse duration
was set to 16.2 μs, at a power level of -14.56 dB.
The pulse length of the GRAPE pulses varied
between 700-2500 μs.
The system was evolved from the PPS to the other states
via state-to-state transfer unitaries, and all states
were created with high fidelities ≥
0.99.
The standard methods for quantum state reconstruction for NMR quantum
information processing typically involve performing full state
tomography <cit.> which is computationally expensive,
although some alternatives involving maximum likelihood estimation have been
proposed and used <cit.>. For this work, we
used a least squares constrained convex optimization method to reconstruct the
density matrix of the desired
state <cit.>.
Fidelities of the
experimentally reconstructed states (as compared to the theoretically expected
state) were computed using the Uhlmann-Jozsa measure <cit.>,
ℱ(χ_expt, χ_theo) = |Tr[χ_expt χ_theo^†]| / √( Tr[χ_expt^† χ_expt] Tr[χ_theo^† χ_theo] ),
where χ_theo and χ_expt denote the theoretical
and experimental density matrices, respectively. We experimentally prepared
the PPS with a fidelity of 0.96±0.01.
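A direct transcription of this fidelity measure (our own sketch; argument names are ours) reads:

import numpy as np

def state_fidelity(chi_expt, chi_theo):
    # Normalised overlap |Tr[A B^dag]| / sqrt(Tr[A^dag A] Tr[B^dag B]) of two density matrices
    num = abs(np.trace(chi_expt @ chi_theo.conj().T))
    den = np.sqrt(np.trace(chi_expt.conj().T @ chi_expt).real
                  * np.trace(chi_theo.conj().T @ chi_theo).real)
    return num / den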
§.§.§ Mixing of Two Pauli Semigroups
We experimentally demonstrate the mixing of two Pauli semigroups for two cases, each with the decoherence parameter
p(t) = [1-exp(-2t)]/2. To this end, we consider convex mixing as
Λ̃(t)(ρ) = aΛ_3(t)(ρ)+(1-a)Λ_2(t)(ρ).
The two cases considered are
* Equal mixing with the mixing parameter a=0.5 and
* unequal mixing with the mixing parameter a=0.25.
For the simulation of mixing two Pauli semigroups, the algorithm described above leads to the following matrix.
V = (
[ √(1-p(t)) √(p(t)) 0 0; 0 0 1 0; √(p(t)(1-a)) -√((1-a) (1-p(t))) 0 √(a); √(a p(t)) -√(a(1- p(t))) 0 -√(1-a); ]).
To experimentally implement the unitaries for the convex combinations in the cases of mixing two and three Pauli semigroups, we
utilized the quantum circuit shown in Fig. <ref>. For mixing of both two and three semigroups, the controlled operation U is the same, as in Eq. <ref>. The unitary operation V is
different for the two-way and three-way mixing. The W operation is equivalent to
the Identity operation for both cases and is hence not implemented experimentally. For the implementation of the NMR
pulse sequence, GRAPE-optimized pulses are used. The unitaries U and V are designed so as to be implemented by use of a single pulse for each time point in all the cases.
The experimental procedure involves three steps.
* Step 1- Initialization: The system is prepared in the state |000⟩⟨000|
with the help of optimized pulses and magnetic field gradients.
* Step 2- Simulation of the non-unitary dynamics: The implementation of U and V with GRAPE optimized pulses.
* Step 3- Measurement: The acquisition and tomography pulses are applied.
The rectangular shapes in Fig. <ref> depict the rf pulses used to prepare the initial pseudopure state required for step 1 of the algorithm. Each rectangle is associated with specific phases, which are indicated above them. The magnetic field direction is assumed to align with the z-axis. The rf pulses are applied along the x or y-axis at specific angles, allowing precise control over qubit rotations and transformations.
With the knowledge of the desired phases and angles of the rf pulses, we can perform operations like single-qubit rotations and two-qubit gates. For example, the first qubit is rotated by an angle of θ_1 =5π/12 radians around the y-axis, while the second qubit is rotated by an angle of θ_2 =π/3 radians. CNOT operations
between two qubits are represented by blue lines
between the corresponding qubits. The complete pulse sequence corresponding to the CNOT gate can be found in <cit.>. Before the CNOT gate operation, an x pulse with an angle of π/4 is applied. This pulse rotates the state of the qubit around the x-axis. Following the CNOT gate, a y pulse with an angle of -π/4 is applied, which rotates the state around the y-axis. The angles and phases of the rf pulses and gate operations are carefully chosen to achieve the desired output state or perform the targeted operation. The specific choice of angles and gates depends on our goal, which in this case is to prepare the PPS.
After the initialization, a GRAPE pulse corresponding to Step 2 of the algorithm is applied. This pulse applies the unitary operations V and U, depending on the specific case being considered.
§.§.§ Mixing of Three Pauli Semigroups
We next consider the case of the convex combination of three Pauli semigroups.
We experimentally demonstrate this for three cases,
each with the decoherence parameter
p(t) = [1-exp(-3t)]/2:
* Equal mixing with mixing parameters x_1=x_2=x_3=0.33,
* unequal mixing with mixing parameters
x_1=x_3=0.3, x_2=0.4 and
* unequal mixing with mixing parameters x_1=0.2, x_2=x_3=0.4.
The V matrix in this case is evaluated to be
V = (
[ √(1-p(t)) √(p(t)) 0 0; √(x_1 p(t)) -√(x_1(1-p(t))) √(1-x_1) 0; √(x_2 p(t)) -√(x_2(1-p(t))) -√(x_1 x_2/1-x_1) √(x_3/1-x_1); √(x_3 p(t)) -√(x_3(1-p(t))) -√(x_1 x_3/1-x_1) -x_2/√(x_2 (1-x_1)); ]).
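As a quick consistency check (our own sketch, not part of the original analysis), the columns of this V are orthonormal for any admissible p(t) and mixing fractions, so V is a valid unitary; note that the (4,4) entry -x_2/√(x_2(1-x_1)) simplifies to -√(x_2/(1-x_1)):

import numpy as np

def V_three_way(p, x):
    # Ancilla-preparation unitary for the three-way mixture; rows follow the matrix above
    x1, x2, x3 = x
    return np.array([
        [np.sqrt(1 - p),    np.sqrt(p),              0.0,                          0.0],
        [np.sqrt(x1 * p),  -np.sqrt(x1 * (1 - p)),   np.sqrt(1 - x1),              0.0],
        [np.sqrt(x2 * p),  -np.sqrt(x2 * (1 - p)),  -np.sqrt(x1 * x2 / (1 - x1)),  np.sqrt(x3 / (1 - x1))],
        [np.sqrt(x3 * p),  -np.sqrt(x3 * (1 - p)),  -np.sqrt(x1 * x3 / (1 - x1)), -np.sqrt(x2 / (1 - x1))],
    ])

V = V_three_way(p=0.2, x=(0.2, 0.4, 0.4))
assert np.allclose(V.T @ V, np.eye(4))   # columns are orthonormal, so V is a valid unitary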
The decay rate of the decoherence parameter p(t) is dependent on the chosen
constant c. Therefore, determining the optimal time interval required to study
the behavior of the system is directly linked to the selection of
c. Shorter time periods are preferable to minimize decoherence over the duration of the experiment. The appropriate choice of c is crucial to effectively
study the impact of the resulting dynamical map on the system, while minimizing noise
interference.
The final three-qubit density matrices were reconstructed using the least squares constrained convex optimization method; we achieved fidelities
ranging from 0.95 to 0.98. The experimental output matrix for the single-qubit
system is obtained after tracing over the ancilla qubits. We plot bar graphs (Fig. <ref>) to visually compare the real and imaginary parts of the theoretical and experimental density matrices for the specific example of the second case of mixing two semigroups at t = 0.1 ms. The fidelity of the experimental state in this case is 0.98. The decoherence
parameter p(t) is computed at every time point from the output matrix and the
experimental data is fitted to obtain the experimental parameter p_e(t) and
its time evolution ṗ_e(t). The experimental decay rates are subsequently computed with the help of
Eq. (<ref>).
Figures <ref> and <ref> depict a comparison of the theoretical
and experimental results for the two-way mixing case, for equal and
unequal mixing, respectively. For each case, the decoherence parameter p(t) is
plotted in the top panel. The blue dots represent the experimental data with
error bars, the blue curves represent the experimental fits, and the red
dashed curves represent the theoretical parameters. The experimental decay
rates γ_i(t) are negative for both case (i) and case (ii), indicating that the
resultant dynamical map, when two Pauli semigroups maps are mixed, is non-Markovian which
is consistent with the Theorem 1 in <cit.>.
Figures <ref>-<ref> present a comparison of the theoretical and
experimental results for the case of three-way mixing. For each case, the
decoherence parameter p(t) is plotted in the top panel. The blue dots
represent the experimental data with error bars, the blue curves represent the
experimental fits, and the red dashed curves represent the theoretical
parameters. To determine whether the resultant dynamical map is Markovian or
Non-Markovian, the decay rates are analyzed. The decay rates
γ_1 (t),γ_2(t),γ_3(t) were all positive for case (i) and case
(ii) as shown in plots (b),(c) and (d) respectively, indicating that the
resultant dynamical maps are Markovian. However, for case (iii), the negative decay
rate of γ_1 (t) suggests that the resultant dynamical map is non-Markovian which is
consistent with the theoretical results.
Figures <ref>-<ref> provide clear evidence of the
agreement between the theoretical and experimental results. The experimental
results clearly corroborate the Markovian or non-Markovian nature of the
dynamical map in both cases of two- and the three-way mixing, which is consistent
with Theorem 1 and the Pauli simplex in <cit.>. The
outcomes presented here, which successfully demonstrate the effects of combining
different Pauli semigroups with arbitrary mixing parameters, provide
valuable insights for the study of memory effects in open quantum systems.
Moreover, these results are significant for the development of quantum error
correction and fault-tolerant quantum computing.
§ CONCLUSIONS
In our experimental study, we have successfully demonstrated the combination of
two and three Pauli semigroups, with different mixing parameters. The main
objective was to investigate the Markovianity and non-Markovianity of the
resulting dynamical maps. By analyzing the decay rates associated with these
dynamical maps, we were able to assess the characteristics of the quantum maps
under investigation. We compared our experimental analysis with the
theoretical predictions. The comparative analysis allowed us to validate the
accuracy of our experimental findings and establish the reliability of our
approach. The good agreement between the experimental results and theoretical
expectations highlights the efficacy of our methodology in capturing the
underlying dynamics of the system-environment interactions. This research
represents a significant step forward in advancing our understanding of quantum
correlations and the interplay between the system and its surrounding
environment. Overall, our experimental investigation contributes to the growing
body of knowledge in the field of quantum dynamics, paving the way for further
studies on the characterization and manipulation of quantum information in
realistic environments. NMR, with its precise control, long coherence times and
accurate measurements, serves as a good platform for simulating the dynamics of
open quantum systems and understanding the correlations between quantum systems
and their environment.
V.J. acknowledges financial support by the Foundation for Polish Science
through TEAM-NET project (contract no. POIR.04.04.00-00-17C1/18-00). R.S. and K.D. acknowledge financial support from Department of Science and Technology (DST), India, Grants Nos:DST/ICPS/QuST/Theme-1/2019/14 and DST/ICPS/QuST/Theme-2/2019/Q-74, respectively. RS also acknowledges the support of the Govt. of India DST/SERB grant CRG/2022/008345.
[1] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2007).
[2] S. Haroche and J.-M. Raimond, Exploring the Quantum: Atoms, Cavities, and Photons (Oxford University Press, 2006).
[3] E. Knill, Nature 434, 39 (2005).
[4] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, Phys. Rev. A 86, 032324 (2012).
[5] H.-P. Breuer, E.-M. Laine, J. Piilo, and B. Vacchini, Rev. Mod. Phys. 88, 021002 (2016).
[6] L. Li, M. J. Hall, and H. M. Wiseman, Phys. Rep. 759, 1 (2018).
[7] I. de Vega and D. Alonso, Rev. Mod. Phys. 89, 015001 (2017).
[8] A. Carmele and S. Reitzenstein, Nanophotonics 8, 655 (2019).
[9] L. Zhang, H. Liang, Y. Sun, and C. K. Ahn, IEEE Trans. Syst. Man Cybern. 51, 2370 (2021).
[10] M. Jiang and S. Luo, Phys. Rev. A 88, 034101 (2013).
[11] E. C. G. Sudarshan, P. M. Mathews, and J. Rau, Phys. Rev. 121, 920 (1961).
[12] V. Jagadish and F. Petruccione, Quanta 7, 54 (2018).
[13] V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys. 17, 821 (1976).
[14] A. Rivas, S. F. Huelga, and M. B. Plenio, Rep. Prog. Phys. 77, 094001 (2014).
[15] A. Rivas, S. F. Huelga, and M. B. Plenio, Phys. Rev. Lett. 105, 050403 (2010).
[16] M. J. W. Hall, J. D. Cresser, L. Li, and E. Andersson, Phys. Rev. A 89, 042120 (2014).
[17] H.-P. Breuer, E.-M. Laine, and J. Piilo, Phys. Rev. Lett. 103, 210401 (2009).
[18] E.-M. Laine, J. Piilo, and H.-P. Breuer, Phys. Rev. A 81, 062115 (2010).
[19] H.-P. Breuer, E.-M. Laine, and J. Piilo, Phys. Rev. Lett. 103, 210401 (2009).
[20] V. Jagadish, R. Srikanth, and F. Petruccione, Phys. Rev. A 101, 062304 (2020).
[21] V. Jagadish, R. Srikanth, and F. Petruccione, Phys. Lett. A 384, 126907 (2020).
[22] K. Siudzińska and D. Chruściński, J. Phys. A: Math. Theor. 53, 375305 (2020).
[23] K. Siudzińska, Phys. Rev. A 103, 022605 (2021).
[24] V. Jagadish, R. Srikanth, and F. Petruccione, Phys. Rev. A 105, 032422 (2022).
[25] V. Jagadish, R. Srikanth, and F. Petruccione, Phys. Rev. A 106, 012438 (2022).
[26] V. Jagadish, R. Srikanth, and F. Petruccione, Noninvertibility and non-Markovianity of quantum dynamical maps, arXiv:2306.12773 [quant-ph] (2023).
[27] A. Liu, D. B. Almeida, W. K. Bae, L. A. Padilha, and S. T. Cundiff, Phys. Rev. Lett. 123, 057403 (2019).
[28] M. B. Harouni, Chin. Phys. B 29, 124203 (2020).
[29] G. E. Fux, E. P. Butler, P. R. Eastham, B. W. Lovett, and J. Keeling, Phys. Rev. Lett. 126, 200401 (2021).
[30] H. Zhang, B. Pokharel, E. Levenson-Falk, and D. Lidar, Phys. Rev. Appl. 17, 054018 (2022).
[31] B.-W. Li, Q.-X. Mei, Y.-K. Wu, M.-L. Cai, Y. Wang, L. Yao, Z.-C. Zhou, and L.-M. Duan, Phys. Rev. Lett. 129, 140501 (2022).
[32] C.-F. Li, G.-C. Guo, and J. Piilo, EPL 127, 50001 (2019).
[33] L. B. Ho, Y. Matsuzaki, M. Matsuzaki, and Y. Kondo, New J. Phys. 21, 093008 (2019).
[34] C. Bengs, J. Magn. Reson. 322, 106868 (2021).
[35] V. Gulati, Arvind, and K. Dorai, Eur. Phys. J. D 76, 194 (2022).
[36] A. Singh, H. Singh, K. Dorai, and Arvind, Phys. Rev. A 98, 032301 (2018).
[37] H. Singh, Arvind, and K. Dorai, Phys. Rev. A 97, 022302 (2018).
[38] A. Gautam, K. Dorai, and Arvind, Quantum Inf. Process. 21, 329 (2022).
[39] T. Xin, S.-J. Wei, J. S. Pedernales, E. Solano, and G.-L. Long, Phys. Rev. A 96, 062303 (2017).
[40] B. Schumacher, Phys. Rev. A 54, 2614 (1996).
[41] I. Oliveira, R. Sarthour Jr., T. Bonagamba, E. Azevedo, and J. C. C. Freitas, NMR Quantum Information Processing (Elsevier, 2007).
[42] D. G. Cory, M. D. Price, and T. F. Havel, Phys. D: Nonlinear Phenom. 120, 82 (1998).
[43] A. Mitra, K. Sivapriya, and A. Kumar, J. Magn. Reson. 187, 306 (2007).
[44] N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, J. Magn. Reson. 172, 296 (2005).
[45] S. Dogra, A. Dorai, and K. Dorai, Int. J. Quantum Inf. 13, 1550059 (2015).
[46] G. L. Long, H. Y. Yan, and Y. Sun, J. Opt. B: Quantum Semiclass. Opt. 3, 376 (2001).
[47] G. M. Leskowitz and L. J. Mueller, Phys. Rev. A 69, 052302 (2004).
[48] H. Singh, Arvind, and K. Dorai, Phys. Lett. A 380, 3051 (2016).
[49] A. Gaikwad, K. Shende, and K. Dorai, Int. J. Quantum Inf. 19, 2040004 (2021).
[50] A. Gaikwad, K. Shende, Arvind, and K. Dorai, Sci. Rep. 12, 3688 (2022).
[51] R. Jozsa, J. Mod. Opt. 41, 2315 (1994).
[52] A. Uhlmann, Rep. Math. Phys. 9, 273 (1976).
|
http://arxiv.org/abs/2307.02764v1
|
20230706041357
|
When Does Confidence-Based Cascade Deferral Suffice?
|
[
"Wittawat Jitkrittum",
"Neha Gupta",
"Aditya Krishna Menon",
"Harikrishna Narasimhan",
"Ankit Singh Rawat",
"Sanjiv Kumar"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
SeLiNet: Sentiment enriched Lightweight Network for Emotion Recognition in Images
Tuneer Khargonkar1,
Shwetank Choudhary2,
Sumit Kumar3,
Barath Raj KR4
Samsung R&D Institute, Bangalore, India
Email: {1t.khargonkar, 2sj.choudhary, 3sumit.kr, 4barathraj.kr}@samsung.com
August 1, 2023
===================================================================================================================================================================================================
Cascades are a classical strategy to enable inference cost to vary adaptively across samples,
wherein a sequence of classifiers are invoked in turn.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
One simple deferral rule
employs
the confidence of the current classifier,
e.g.,
based on the maximum predicted softmax probability.
Despite being oblivious to the structure of the cascade
—
e.g., not modelling the errors of downstream models
—
such confidence-based deferral often works remarkably well in practice.
In this paper, we seek to better understand
the conditions under which confidence-based deferral may
fail,
and when alternate deferral strategies can perform better.
We first present a theoretical characterisation of the optimal deferral rule,
which precisely characterises settings under which confidence-based deferral may suffer.
We then
study post-hoc deferral mechanisms,
and demonstrate they
can significantly improve upon confidence-based deferral in settings where
(i) downstream models are specialists that only work well on a subset of inputs,
(ii) samples are subject to label noise,
and
(iii) there is distribution shift between the train and test set.
§ INTRODUCTION
Large neural models with several billions of parameters have shown considerable promise in challenging real-world problems, such as language modelling <cit.> and image classification <cit.>.
While the quality gains of these models are impressive, they are typically accompanied by a sharp increase in inference time <cit.>, thus limiting their applicability.
Cascades offer one strategy to mitigate this <cit.>,
by allowing for faster predictions on “easy” samples.
In a nutshell, cascades involve arranging multiple models in a sequence of increasing complexities.
For any test input,
one iteratively
applies the following recipe, starting with the first model in the sequence:
execute the current model, and employ
a deferral rule to determine whether to
invoke the next model,
or
terminate with the current model's prediction.
One may further combine cascades with ensembles to significantly improve accuracy-compute trade-offs <cit.>.
A key ingredient of cascades is the choice of deferral rule.
The simplest candidate
is to defer when the current model's confidence in its prediction is sufficiently low.
Popular confidence measures include
the maximum predictive
probability over all classes <cit.>,
and the entropy of the predictive distribution <cit.>.
Despite being oblivious to the nature of the cascade
—
e.g., not modelling the errors of downstream models
—
such confidence-based deferral works remarkably well in practice
<cit.>.
Indeed, it has often been noted that such deferral can perform on-par with more complex modifications to the model training <cit.>.
However, the reasons for this success remain unclear;
further, it is not clear if there are specific practical settings where confidence-based deferral may perform poorly <cit.>.
In this paper, we initiate a systematic study of
the potential limitations of confidence-based deferral for cascades.
Our findings and contributions are:
* We establish a novel result characterising the theoretically optimal deferral rule (Proposition <ref>), which for a two-model cascade relies on the confidence of both model 1 and model 2.
* In many regular classification tasks where model 2 gives a consistent estimate of the true posterior probability, confidence-based deferral is highly competitive.
However, we show that in some settings,
confidence-based deferral can be significantly sub-optimal both in theory and practice (<Ref>).
This includes when
(1) model 2's error probability is highly non-uniform across samples, which can
happen when model 2 is a specialist model,
(2) labels are subject to noise,
and
(3) there is distribution shift between the train and test set.
* Motivated by this,
we then study a series of post-hoc deferral rules, that seek to mimic the form of the optimal deferral rule (<ref>).
We show that post-hoc deferral can significantly improve upon confidence-based deferral
in the aforementioned settings.
To the best of our knowledge, this is the first work that precisely identifies
specific practical problems settings where confidence-based deferral can be sub-optimal for cascades.
Our findings give insights on when it is appropriate to deploy a confidence-based cascade model in practice.
§ BACKGROUND AND RELATED WORK
Fix an instance space 𝒳 and label space 𝒴 = [L] ≜ {1, 2, …, L},
and let ℙ be a distribution over 𝒳 × 𝒴.
Given a training sample S = { (x_n, y_n) }_n ∈ [N] drawn from ℙ,
multi-class classification seeks a classifier h: 𝒳 → 𝒴 with low misclassification error R(h) = ℙ( y ≠ h(x) ).
We may parameterise h as
h(x) = argmax_y' ∈ 𝒴 f_y'(x)
for a scorer f: 𝒳 → ℝ^L.
In neural models,
one expresses f_y(x) = w_y^⊤ Φ(x) for class weights w_y and embedding function Φ.
It is common to construct a probability estimator p: 𝒳 → Δ^L using the softmax transformation, p_y(x) ∝ exp( f_y(x) ).
§.§ Cascades and Deferral Rules
Conventional neural models involve a fixed inference cost for any test sample x ∈ 𝒳.
For large models, this cost may prove prohibitive.
This has motivated
several approaches to uniformly lower the inference cost for all samples,
such as
architecture modification <cit.>,
stochastic depth <cit.>,
network sparsification <cit.>,
quantisation <cit.>,
pruning <cit.>,
and distillation <cit.>.
A complementary strategy is to
adaptively lower the inference cost for “easy” samples,
while reserving the full cost only for “hard” samples <cit.>.
Cascade models
are a classic example of
adaptive predictors,
which
have proven useful in vision tasks such as object detection <cit.>,
and have grown increasingly popular in natural language processing <cit.>.
In the vision literature,
cascades are often designed for binary classification problems,
with lower-level classifiers being used to quickly identify negative samples <cit.>.
A cascade is composed of two components:
* a collection of base models (typically of non-decreasing inference cost)
* a deferral rule
(i.e., a function that decides which model to use for each x ∈ 𝒳).
For any test input,
one executes the first model, and employs the deferral rule to determine whether to terminate with the current model's prediction, or to invoke the next model;
this procedure is repeated until one terminates, or reaches the final model in the sequence.
Compared to using a single model, the goal of the cascade is to offer comparable predictive accuracy, but lower average inference cost.
Typically, the deferral rule is based purely on the current model's predictions.
Formally, let h^(1), …, h^(K) denote a sequence of K classifiers, and r: 𝒳 → { 0, 1 } a deferral rule.
Let k_acc ≜ { k ∈ [K - 1] : r( h^(k)( x ) ) = 0 } be the set of indices where r recommends to not defer (i.e., terminate).
Then, the cascade classifier h^cas: 𝒳 → 𝒴 predicts
h^cas( x ) = h^(k^*)( x ),
where
k^* ≜ min k_acc if k_acc ≠ ∅, and k^* ≜ K otherwise.
While the base models and deferral rules may be trained jointly <cit.>,
we focus on a setting where
the base models are pre-trained and fixed, and the goal is to train only the deferral rule.
This setting is practically relevant,
as it is often desirable to re-use powerful models that involve expensive training procedures.
Indeed, a salient feature of cascades is their ability to leverage off-the-shelf models
and simply adjust the desired operating point
(e.g., the rate at which we call a large model).
§.§ Confidence-based Cascades
A simple way to define a deferral rule
is by thresholding a model's confidence in its prediction.
While there are several means of quantifying and improving such confidence <cit.>,
we focus on
the maximum predictive probability γ( x ) ≜ max_y' p( y' | x ).
Specifically, given K trained models,
a confidence-based cascade is formed by picking the first model whose confidence
γ( x ) is sufficiently high <cit.>.
This is made precise in <Ref>.
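A minimal sketch of this inference procedure (our own illustration, not the authors' implementation; function and variable names are ours) is:

import numpy as np

def cascade_predict(x, models, thresholds):
    # Confidence-based cascade: return the prediction of the first model whose
    # maximum softmax probability exceeds its threshold; the last model always predicts.
    #   models     : list of callables x -> probability vector p(. | x)
    #   thresholds : confidence thresholds c^(1), ..., c^(K-1)
    for model, c in zip(models[:-1], thresholds):
        probs = model(x)
        if probs.max() >= c:            # gamma(x) = max_y p(y | x)
            return int(np.argmax(probs))
    return int(np.argmax(models[-1](x)))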
Forming a cascade in this manner is appealing because it does not require retraining of any models or the deferral rule.
Such a post-hoc approach has been shown to give good trade-off between
accuracy and inference cost <cit.>.
Indeed, it has often been noted that such deferral can perform on-par with more complex modifications to the model training <cit.>.
However, the reasons for this phenomenon are not well-understood; further, it is unclear if there are practical settings where confidence-based cascades are expected to underperform.
To address this,
we now formally analyse confidence-based cascades, and explicate their limitations.
§ OPTIMAL DEFERRAL RULES FOR CASCADES
In this section,
we derive the oracle (Bayes-optimal) deferral rule, which will
allow us to understand the effectiveness and limitations of confidence-based deferral (<Ref>).
§.§ The Bayes-Optimal Deferral Rule
For simplicity, we consider a cascade of K=2 pre-trained classifiers h^(1), h^(2): 𝒳→𝒴;
our analysis readily generalises to cascades with K > 2 models (see Appendix <ref>).
Suppose that the classifiers
are based on probability estimators p^(1), p^(2),
where for i∈{1,2},
p_y'^(i)(x) estimates the probability
for class y' given x,
and
h^(i)(x) ≜ argmax_y' p_y'^(i)(x).
We do not impose any restrictions on the training procedure or performance of these classifiers.
We seek to learn a deferral rule
r: 𝒳 → { 0, 1 }
that can decide
whether h^(1) should be used (with r(x)=0),
or h^(2) (with r(x)=1).
To derive the optimal deferral rule, we must first specify a target metric to optimise.
Recall that cascades
offer a suitable balance between average inference cost, and predictive accuracy.
Thus,
we consider a weighted average of these two quantities as our metric.
It is straightforward to extend our analysis to generic cost-sensitive variants of the predictive accuracy <cit.>.
Concretely,
assume
without loss of generality
that h^(2) has higher computational
cost,
and that invoking h^(2)
incurs a constant cost c > 0.
The population risk for
the cascade can be written as
R(r; h^(1), h^(2)) = ℙ( y ≠ h^(1)(x), r(x) = 0 ) + ℙ( y ≠ h^(2)(x), r(x) = 1 ) + c · ℙ( r(x) = 1 ).
Intuitively, the first two terms measure the misclassification error only on samples where the respective classifier's predictions are used.
A good deferral rule ensures this error is small, while also invoking the larger model sparingly.
The Bayes-optimal rule,
which minimises R(r; h^(1), h^(2)) over all possible r: 𝒳 → { 0, 1 },
is given below.
(This relates to existing results from distinct lines of work; see <ref>.)
Let η_y'(x) ≜ ℙ( y' | x ).
Then,
the Bayes-optimal deferral rule for the risk
in (<ref>) is:
r^*(x) =1[η_h^(2)(x)(x)-η_h^(1)(x)(x)>c],
Proof can be found in <Ref>.
Note that for a classifier h, η_h(x)(x) = ℙ( y = h(x) | x ), i.e.,
the probability that h gives a correct prediction for x.
Observe that the randomness here reflects the inherent stochasticity of the labels y for an input x, i.e.,
the aleatoric uncertainty <cit.>.
We note that it is common to choose the deferral cost c such that ℙ( r( x ) = 1 ) = τ for a fixed deferral rate τ ∈ [ 0, 1 ].
By varying c, one obtains a deferral curve quantifying the trade-off between
accuracy and deferral.
For the purposes of this curve,
the key quantity is η_h^(2)(x)(x)-η_h^(1)(x)(x),
i.e., the difference in the probability of correct prediction under each classifier.
This is intuitive:
it is optimal to defer to h^(2)
if the expected reduction in misclassification error exceeds the cost of invoking h^(2).
The Bayes-optimal deferral rule in (<ref>) is a theoretical construct,
which relies on knowledge of the true posterior probability η.
In practice, one is likely to use an approximation to this rule.
To quantify the effect of such an approximation, we may consider the excess risk or regret
of an arbitrary deferral rule r over r^*.
We have the following.
Let α( x ) ≜ η_h^(1)(x)(x) - η_h^(2)(x)(x) + c.
Then, the excess risk for an arbitrary r is
R( r; h^(1), h^(2) ) - R( r^*; h^(1), h^(2) ) = 𝔼_x[ ( 1(r(x) = 1) - 1(α(x) < 0) ) ·α(x) ].
Intuitively, the above shows that when we make deferral decisions that disagree with the Bayes-optimal rule,
we are penalised proportional to the difference between the two models' error probability.
§.§ Plug-in Estimators of the Bayes-Optimal Deferral Rule
In practice, we may seek to approximate the Bayes-optimal deferral rule r^* in (<ref>) with an estimator r̂.
We now present several oracle estimators,
which will prove useful in our subsequent analysis.
One-hot oracle
Observe that η_y'( x ) = 𝔼_y | x[ 1[y = y'] ].
Thus,
given a test sample
(x, y),
one
may replace the expectation with the observed label
y
to yield the
ideal estimator
η̂_y'(x) = 1[ y' = y ].
This
results in the rule
r̂_01(x) ≜ 1[ 1[y=h^(2)(x)] - 1[y=h^(1)(x)] > c ].
One intuitive observation is that for high c,
this rule only defers samples with y ≠ h^(1)( x ) but y = h^(2)( x ),
i.e., samples where the first model is wrong, but the second model is right.
Unfortunately, this rule is impractical, since it depends on the label y.
Nonetheless, it
serves as an oracle to help understand what one can gain
if we knew exactly whether the downstream model makes an error.
Probability oracle
Following a similar same reasoning as r̂_ 01, another estimator is given by
r̂_prob(x) ≜ 1[ p_y^(2)(x) - p_y^(1)(x) > c ].
Intuitively p_y^(2)(x) can be seen as a label-dependent
correctness score of model 2 on an instance x.
Relative confidence
The above oracles rely on the true label y.
A more practical plug-in estimator is η̂_h^(i)( x )( x ) = max_y' p^(i)_y'( x ),
which simply uses each model's softmax probabilities.
The rationale for this rests upon the assumption that the probability
model p^(i) is a consistent estimate of the true posterior probability
so that (y|x)≈ p_y^(i)(x) for i∈{1,2}, where (x,y) is
a labeled example.
Thus,
η_h^(i)(x)(x)=(h^(i)(x)|x)≈ p^(i)(h^(i)(x)|x)=max_y'p_y'^(i)(x),
resulting in the rule
r̂_ rel(x) 1[max_y”p_y”^(2)(x)-max_y'p_y'^(1)(x)>c].
Observe that this
deferral decision depends on the confidence
of both models, in contrast to confidence-based deferral which relies only
on the confidence of the first model.
Note that the above oracles cannot be used directly for adaptive computation, because the second model is invoked on every input.
Nonetheless, they can inform us
about the available headroom to improve over confidence-based deferral
by
considering the confidence of the downstream model.
As shall be seen in <Ref>, these estimators are useful for deriving objectives to train a post hoc deferral rule.
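On a labelled evaluation set, the three oracles above can be computed directly; the following sketch (ours, for illustration; array names are assumptions) returns the oracle deferral decisions given both models' softmax outputs and the labels:

import numpy as np

def oracle_deferral_rules(p1, p2, y, c):
    # Oracle deferral decisions on labelled data (analysis only: they use model 2 and/or y).
    #   p1, p2 : [n, L] arrays of the two models' softmax probabilities
    #   y      : [n] array of true labels
    #   c      : deferral cost
    idx = np.arange(len(y))
    h1, h2 = p1.argmax(1), p2.argmax(1)
    r_onehot = ((h2 == y).astype(float) - (h1 == y).astype(float)) > c   # one-hot oracle
    r_prob = (p2[idx, y] - p1[idx, y]) > c                               # probability oracle
    r_rel = (p2.max(1) - p1.max(1)) > c                                  # relative confidence
    return r_onehot, r_prob, r_rel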
§.§ Relation to Existing Work
The two-model cascade
is closely connected to the literature on
learning to defer to an expert <cit.>.
Here, the goal is to learn a base classifier h^(1) that has the option of invoking an “expert” model h^(2);
this invocation is controlled by a deferral rule r.
Indeed, the risk (<ref>) is a special case of <cit.>, where the second model is considered to be an “expert”.
Proposition <ref> is a simple generalisation of <cit.>, with the latter assuming c = 0.
In <Ref>, we generalise <Ref> to the cascades
of K > 2 models.
§ FROM CONFIDENCE-BASED TO POST-HOC DEFERRAL
Having presented the optimal deferral rule in Proposition <ref>,
we now use it to explicate some
failure modes for confidence-based deferral,
which will be empirically demonstrated in <ref>.
§.§ When Does Confidence-Based Deferral Suffice?
Suppose as before we have probabilistic models p^(1), p^(2).
Recall from Algorithm <ref> that for constant c^(1) > 0,
confidence-based deferral employs
the rule
r̂_ conf( x ) = 1[ max_y' p^(1)_y'( x ) < c^(1)].
Following <ref>, (<ref>) may be regarded as a
plug-in estimator for the “population confidence” rule
r_conf( x ) ≜ 1[ η_h^(1)(x)( x ) < c^(1) ].
Contrasting this to Proposition <ref>,
we have the following:
The deferral rule
r_ conf
produces the same deferral curve as the Bayes-optimal rule (<ref>) if
η_h^(1)(x)( x ) and η_h^(1)(x)( x ) - η_h^(2)(x)( x ) produce the same ordering over instances x ∈ 𝒳.
Lemma <ref> studies the agreement between the deferral curves of
r_ conf and the Bayes-optimal solution,
which eliminates the need for committing to a specific cost c^(1).
The lemma has an intuitive interpretation:
population confidence-based deferral is optimal
if and only if the absolute confidence in model 1's prediction agrees with the relative confidence is model 1 versus model 2's prediction.
Based on this, we now detail some cases where confidence-based deferral succeeds or fails.
Success mode: expert h^(2)
Lemma <ref> has one immediate, intuitive consequence:
confidence-based deferral
is optimal when
the downstream model has a constant error probability,
i.e., η_h^(2)(x)( x ) is a constant for all x ∈ 𝒳.
This may happen,
e.g.,
if that the labels are deterministic given the inputs, and the second classifier h^(2) perfectly predicts them.
Importantly, note that this is a sufficient (but not necessary) condition for the optimality of confidence-based deferral.
Failure mode: specialist h^(2)
As a converse to the above,
one setting where confidence-based deferral may fail is
when the downstream model is a specialist,
which performs well only on a particular sub-group of the data (e.g., a subset of classes).
Intuitively,
confidence-based deferral may
erroneously forward samples where h^(2) performs worse than h^(1).
Concretely,
suppose there is a data sub-group 𝒳_good ⊂ 𝒳 where h^(2)
performs exceptionally well,
i.e.,
η_h^(2)(x)(x) ≈ 1 when x ∈ 𝒳_good.
On the other hand,
suppose
h^(2) does not perform well on 𝒳_bad ≜ 𝒳 ∖ 𝒳_good,
i.e.,
η_h^(2)(x)(x) ≈ 1/L when x ∈ 𝒳_bad.
Intuitively,
while η_h^(1)(x)(x) may be relatively low for x ∈ 𝒳_bad,
it is strongly desirable to not defer such examples,
as h^(2) performs even worse than h^(1);
rather, it is preferable to identify and defer samples x ∈ 𝒳_good.
Failure mode: label noise
Confidence-based deferral can fail when there are high levels of label noise.
Intuitively, in such settings,
confidence-based deferral may wastefully forward samples where h^(2) performs no better than h^(1).
Concretely,
suppose that instances x ∈ 𝒳_bad ⊂ 𝒳
may be mislabeled as one from a different, random class.
For x ∈ 𝒳_bad, regardless of how the two models h^(1), h^(2) perform, we have
η_h^(1)(x)(x), η_h^(2)(x)(x) = 1/L (i.e., the accuracy of classifying these instances is chance level in expectation). Since η_h^(1)(x)(x) is low, confidence-based deferral will tend to
defer such input instance x.
However, this is a sub-optimal decision since model 2 is more computationally expensive, and expected to have the same chance-level performance.
Failure mode: distribution shift
Even when model h^(2) is an expert model, an intuitive setting where confidence-based deferral can fail is if there is distribution shift between the train and test ℙ( y | x ) <cit.>.
In such settings, even if p^(1) produces reasonable estimates of the training class-probability, these may translate poorly to the test set.
There are numerous examples of confidence degradation under such shifts,
such as the presence of out-of-distribution samples <cit.>,
and the presence of a label skew during training <cit.>.
We shall focus on the latter in the sequel.
§.§ Post-Hoc Estimates of the Deferral Rule
Having established that
confidence-based deferral may be sub-optimal in certain settings,
we now consider the viability of deferral rules that are learned in a post-hoc manner.
Compared to confidence-based deferral, such rules aim to explicitly account for both the confidence of model 1 and 2,
and thus avoid the failure cases identified above.
The key idea behind such post-hoc rules is to directly mimic the optimal deferral rule in (<ref>).
Recall that
this optimal rule
has a dependence on the output of h^(2);
unfortunately, querying h^(2) defeats the entire purpose of cascades.
Thus, our goal is to estimate (<ref>) using only
the outputs of p^(1).
We summarise a number of post-hoc estimators in <Ref>,
which are directly
motivated by the One-hot, Probability, and Relative Confidence Oracle respectively from <Ref>.
The first is to
learn when model 1 is incorrect, and model 2 is correct.
For example,
given a validation set,
suppose we construct
samples
S_val ≜ { ( x_i, z^(1)_i ) },
where z_i^(1) = 1[ y_i = h^(2)( x_i ) ] - 1[ y_i = h^(1)( x_i ) ].
Then,
we fit
min_g: 𝒳 → ℝ  1/| S_val | ∑_( x_i, z^(1)_i ) ∈ S_val ℓ( z^(1)_i, g( x_i ) ),
where, e.g., ℓ is the square loss.
The score g( x ) may be regarded as the confidence in deferring to model 2.
Similarly,
a second approach is to
perform regression to predict z^(2)_i = p_y^(2)(x_i) - p_y^(1)(x_i).
The third approach is to
directly estimate
z^(3)_i = max_y' p^(2)_y'( x_i )
using predictions of the first model.
As shall be seen in <ref>, such post-hoc rules can learn to avoid the failure cases for confidence-based deferral identified in the previous section.
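A minimal sketch of the Diff-01 variant is given below (our own illustration; scikit-learn's MLPRegressor stands in for the lightweight MLP described later, and all names are ours):

import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_post_hoc_deferral(p1_val, p2_val, y_val):
    # Fit a post-hoc deferral score g on a held-out validation set (Diff-01 target):
    # z = 1[y = h2(x)] - 1[y = h1(x)], regressed with squared loss from model 1's
    # probability outputs only, so that model 2 is never queried at test time.
    h1, h2 = p1_val.argmax(1), p2_val.argmax(1)
    z = (h2 == y_val).astype(float) - (h1 == y_val).astype(float)
    g = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    g.fit(p1_val, z)
    return g

# At test time, defer whenever g(p^(1)(x)) exceeds a chosen threshold:
# defer = g.predict(p1_test) > threshold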
However, it is important to note that there are some conditions where such rules may not offer benefits over confidence-based deferral.
Failure mode: Bayes p^(2)
Suppose that the model p^(2) exactly matches the Bayes-probabilities, i.e.,
p^(2)_y'( x ) = ( y' | x ).
Then, estimating max_y' p^(2)_y'( x ) is equivalent to estimating max_y'( y' | x ).
However, the goal of model 1 is precisely to estimate ( y | x ).
Thus, if p^(1) is sufficiently accurate, in the absence of additional information (e.g., a fresh dataset), it is unlikely that one can obtain a better estimate of this probability than that provided by p^(1) itself.
This holds even if the ( y | x ) is non-deterministic, and so the second model has non-trivial error.
Failure mode: non-predictable p^(2) error
When the model p^(2)'s outputs are not strongly predictable,
post-hoc deferral may devolve to regular confidence-based deferral.
Formally, suppose we seek to predict z ≜ max_y' p^(2)_y'( x ), e.g., as in MaxProb.
A non-trivial predictor must achieve an average square error smaller than the variance of z, i.e.,
𝔼[ ( z - 𝔼[ z ] )^2 ].
If z is however not strongly predictable,
the estimate will be tantamount to simply using the constant 𝔼[ z ].
This brings us back to the assumption of model 2 having a constant probability of error,
i.e., confidence-based deferral.
§.§ Relation to Existing Work
As noted in <Ref>, learning a deferral rule for a two-model cascade is closely related to existing literature in learning to defer to an expert.
This in turn is a generalisation of the classical literature on learning to reject <cit.>, which refers to classification settings where one is allowed to abstain from predicting on certain inputs.
The population risk here is a special case of (<ref>),
where h^(2)( x ) is assumed to perfectly predict y.
The resulting Bayes-optimal classifier is known as Chow's rule <cit.>,
and exactly coincides with the deferral rule in
Lemma <ref>.
Plug-in estimates of this rule are thus analogous to confidence-based deferral,
and have been shown to be similarly effective <cit.>.
In settings where one is allowed to modify the training of h^(1), it is possible to construct losses that jointly optimise for both h^(1) and r <cit.>.
While effective, these are not applicable in our setting involving pre-trained, black-box classifiers.
Other variants of post-hoc methods have been considered in <cit.>, and implicitly in <cit.>;
however, here we more carefully study the different possible ways of constructing these methods, and highlight when they may fail to improve over confidence-based deferral.
§ EXPERIMENTAL ILLUSTRATION
In this section, we provide empirical evidence to support our analysis in <Ref> by considering the three failure modes in which confidence-based deferral underperforms.
For each of these settings, we compute deferral curves that plot the classification accuracy versus the fraction of samples deferred to the second model (which implicitly measures the overall compute cost).
In line with our analysis in <Ref>, post-hoc deferral rules offer better accuracy-cost trade-offs in these settings.
§.§ Confidence-Based versus Oracle Deferral
We begin by
illustrating the benefit of considering confidence of the second model when constructing a deferral rule.
In this experiment, h^(1) is a generalist (i.e., trained on all ImageNet classes), and h^(2) is a dog specialist trained on all images in the dog synset, plus a fraction of non-dog training examples, which we vary. There are 119 classes in the dog synset. We use MobileNet V2 <cit.> as h^(1), and a larger EfficientNet B0 <cit.> as h^(2). For hyperparameter details, see <Ref>.
<Ref> shows the accuracy of confidence-based deferral (Confidence) and Relative Confidence (<Ref>) on the standard ImageNet test set as a function of the deferral rate. We realise different deferral rates by varying the value of the deferral threshold c.
In <Ref>a, the fraction of non-dog training images is 100% i.e., model 2 is also a generalist trained on all images. In this case, we observe that Relative Confidence offers little gains over Confidence.
However, in <Ref>b and <Ref>c,
as the fraction of non-dog training images decreases, the
non-uniformity of h^(2)'s error probabilities increases i.e., h^(2) starts to specialise to dog images.
In line with our analysis in <Ref>, confidence-based deferral underperforms when model 2's error probability is highly non-uniform.
That is, being oblivious to the fact that h^(2) specialises in dog images,
confidence-based deferral may erroneously defer non-dog images to it.
By contrast, accounting for model 2's confidence, as done by Relative Confidence, shows significant gains.
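For concreteness, a sketch of the Relative Confidence rule as used here — deferring when model 2's maximum softmax probability exceeds model 1's by more than a threshold, which is our reading of the rule summarised in <Ref> — is given below; p1 and p2 are (N, L) arrays of predicted probabilities.

import numpy as np

def relative_confidence_defer(p1, p2, c=0.0):
    """Defer to model 2 when its maximum probability exceeds model 1's by more than c."""
    gap = p2.max(axis=1) - p1.max(axis=1)   # confidence of model 2 minus model 1
    return gap > c                          # boolean deferral decisions

Note that evaluating this rule requires querying model 2 at inference time, which is precisely what the post-hoc rules of the next subsection avoid.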
§.§ Confidence-Based versus Post-Hoc Deferral
From <ref>,
one may construct better deferral rules by querying model 2 for its confidence.
Practically, however, querying model 2 at inference time defeats the entire purpose of cascades.
To that end,
we now
compare
confidence-based deferral
and
the post-hoc estimators (<Ref>),
which
do not need to invoke model 2 at inference time.
We consider
each of the settings from <Ref>,
and demonstrate that post-hoc deferral can significantly outperform confidence-based deferral.
We present more experimental results in <Ref>,
where we illustrate post-hoc deferral rules for K > 2 models.
Post hoc model training
In all of the following experiments, the post-hoc model g: 𝒳 → ℝ
is based on a lightweight, three-layer Multi-Layer Perceptron (MLP) that takes as input the
probability outputs from model 1. That is, g(x) = MLP(p^(1)(x))
where p^(1)(x) ∈Δ_L denotes all probability outputs from model 1.
Learning g amounts to learning the MLP as the two base models are fixed.
We train g on a held-out validation set. For full technical details of the post-hoc model architecture and training, see <Ref>. We use the objectives described in <Ref> to train g.
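As a rough illustration of this recipe, the sketch below builds plausible regression targets for the objectives named in the experiments (our reading of MaxProb, Diff-Prob and Diff-01; the exact definitions are in <Ref>) and fits a small regressor on held-out outputs. scikit-learn's MLPRegressor is used purely as a stand-in for the MLP described in <Ref>.

import numpy as np
from sklearn.neural_network import MLPRegressor

def posthoc_targets(p1, p2, y, kind="diff01"):
    """Per-example regression targets built from the two models' validation outputs.
    p1, p2: (N, L) probability arrays; y: (N,) integer labels."""
    h1, h2 = p1.argmax(1), p2.argmax(1)
    if kind == "maxprob":     # estimate max_y' p2_y'(x)
        return p2.max(1)
    if kind == "diffprob":    # estimate p2_y(x) - p1_y(x)  (assumed reading of Diff-Prob)
        idx = np.arange(len(y))
        return p2[idx, y] - p1[idx, y]
    if kind == "diff01":      # estimate 1[y = h2(x)] - 1[y = h1(x)]  (assumed reading of Diff-01)
        return (h2 == y).astype(float) - (h1 == y).astype(float)
    raise ValueError(kind)

def fit_posthoc(features_val, targets_val):
    """Fit the deferral score g on held-out data; defer when g(x) exceeds a threshold."""
    g = MLPRegressor(hidden_layer_sizes=(64, 16), max_iter=500)
    return g.fit(features_val, targets_val)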
Specialist setting
We start with the same ImageNet-Dog specialist setting used in <Ref>.
This time, we compare six methods:
Confidence, Random, MaxProb, Diff-01, Diff-Prob, and Entropy.
Random is a baseline approach that defers to either model 1 or model 2 at random;
MaxProb, Diff-01, and Diff-Prob are the post-hoc rules described in <Ref>;
Entropy defers based on the thresholding the entropy of p^(1) <cit.>,
as opposed to the maximum probability.
Results for this setting are presented in <Ref> (first row).
We see that there are gains from post-hoc deferral, especially in the low deferral regime:
it can accurately determine whether the second model is likely to make a mistake.
Aligning with our analysis,
for the generalist setting (<Ref>a),
confidence-based deferral is highly competitive,
since h^(2) gives a consistent estimate of (y| x).
Label noise setting
In this setting,
we look at a problem with label noise. We consider the CIFAR 100 dataset where training examples from L_noise ∈ {0, 10, 25} pre-chosen classes are assigned a uniformly drawn label. The case of L_noise = 0 corresponds to the standard CIFAR 100 problem.
We set h^(1) to be CIFAR ResNet 8 and set h^(2) to be CIFAR ResNet 14,
and train both models on the noisy data.
The results are shown
in <Ref> (second row).
It is evident that when there is label noise, post-hoc approaches yield higher accuracy than confidence-based on a large range of deferral rates, aligning with our analysis in <Ref>.
Intuitively, confidence-based deferral tends to forward noisy samples to h^(2), which performs equally poorly, thus leading to a waste of deferral budget.
By contrast, post-hoc rules can learn to “give up” on samples with extremely low model 1 confidence.
Distribution shift setting
To simulate distribution shift,
we consider a long-tailed version of CIFAR 100 <cit.> where there are h ∈ {100, 50, 25} head classes and 100−h tail classes. Each head class has 500 training images, and each tail class has 50 training images. The standard CIFAR 100 dataset corresponds to h=100.
Both models h^(1) (CIFAR ResNet 8) and h^(2) (CIFAR ResNet 56) are trained
on these long-tailed datasets.
At test
time, all methods are evaluated on the standard CIFAR 100 balanced test set,
resulting in a label distribution shift.
We present our results in <Ref> (third row).
In <Ref>g, there is no distribution shift.
As in the case of the specialist setting, there is little to no gain from post-hoc approaches in this case since both models are sufficiently accurate.
As h decreases from 100 to 50 (<Ref>h) and 25 (<Ref>i), there is more distribution shift at test time, and post-hoc approaches (notably Diff-01) show clearer gains.
To elaborate, the two base models are of different sizes and respond to the
distribution shift differently, with CIFAR ResNet 56 being able to better handle
tail classes overall. Diff-01 is able to identify the superior performance of h^(2) and defer input instances from tail classes.
§.§ On the Generalisation of Post-Hoc Estimators
Despite the benefits of post-hoc approaches as demonstrated earlier, care must be taken in controlling
the capacity of the post-hoc models.
We consider the same ImageNet-Dog specialist setting as in the top row of <Ref>.
Here, model 2 is trained on all dog images, and a large fraction of non-dog images (8%).
Since model 2 has access to a non-trivial fraction of non-dog images,
the difference in the probability of correct prediction of the two models is less predictable.
We report deferral curves on both training and test splits in <Ref>.
Indeed, we observe that the post-hoc method Diff-01 can overfit,
and fail to
generalise. Note that this is despite using a feedforward network with two hidden layers of only 64 and 16 units (see <Ref> for details on hyperparameters) to control the capacity of the post-hoc model.
Thoroughly investigating approaches to increase generalisation of post-hoc models will be an interesting topic for future study.
§ CONCLUSION AND FUTURE WORK
The Bayes-optimal deferral rule we present suggests that key to optimally defer is to identify
when the first model is wrong and the second is right. Based on this result, we then study
a number of estimators (<Ref>) to construct trainable post hoc deferral rules, and show that they can improve upon the commonly used confidence-based deferral.
While we have identified conditions under which confidence-based deferral underperforms (e.g., specialist setting, label noise), these are not exhaustive.
An interesting direction for future work is to design post-hoc deferral schemes attuned for settings involving other forms of distribution shift,
such as the presence of out-of-distribution samples.
It is also of interest to study the efficacy of more refined confidence measures, such as those based on conformal prediction <cit.>.
Finally, while our results have focussed on image classification settings, it would be of interest to study analogous trends for natural language processing models.
Appendix
§ PROOFS
§.§ Proof of Proposition <ref>
The risk in (<ref>) can be written as
R(r;h^(1),h^(2))
=ℙ(y≠ h^(1)(x), r(x)=0) + ℙ(y≠ h^(2)(x), r(x)=1) + c·ℙ(r(x)=1)
=𝔼[1[y≠ h^(1)(x)]·1[r(x)=0] + 1[y≠ h^(2)(x)]·1[r(x)=1] + c·1[r(x)=1]]
=𝔼[1[y≠ h^(1)(x)]·(1-1[r(x)=1]) + 1[y≠ h^(2)(x)]·1[r(x)=1] + c·1[r(x)=1]]
=ℙ(y≠ h^(1)(x)) + 𝔼[-1[y≠ h^(1)(x)]·1[r(x)=1] + 1[y≠ h^(2)(x)]·1[r(x)=1] + c·1[r(x)=1]]
=ℙ(y≠ h^(1)(x)) + 𝔼_x 1[r(x)=1]·𝔼_y|x[1[y≠ h^(2)(x)] - 1[y≠ h^(1)(x)] + c]
=ℙ(y≠ h^(1)(x)) + 𝔼_x 1[r(x)=1]·𝔼_y|x[1[y=h^(1)(x)] - 1[y=h^(2)(x)] + c]
=ℙ(y≠ h^(1)(x)) + 𝔼_x 1[r(x)=1]·[η_h^(1)(x)(x) - η_h^(2)(x)(x) + c],
where we define η_y'(x) := ℙ(y=y'| x). Thus, it is
optimal to defer when
r(x)=1 ⟺ η_h^(1)(x)(x) - η_h^(2)(x)(x) + c < 0 ⟺ η_h^(2)(x)(x) - η_h^(1)(x)(x) > c.
§.§ Proof of <Ref>
For fixed h^(1),h^(2), we have already computed the Bayes-optimal
rejector r^*. Let α(x) := η_h^(1)(x)(x) - η_h^(2)(x)(x) + c.
Plugging r^* into the risk results in the Bayes-risk
R(r^*;h^(1),h^(2)) = ℙ(y≠ h^(1)(x)) + 𝔼_x 1[r^*(x)=1]·α(x).
The excess risk for an arbitrary r is thus
R(r;h^(1),h^(2))-R(r^*;h^(1),h^(2))
=𝔼_x[1[r(x)=1]-1[α(x)<0]]·α(x).
§.§ Proof of <Ref>
We start with <Ref> which will help prove <Ref>.
Given two base classifiers h^(1), h^(2): 𝒳→[L],
two deferral rules r_1, r_2: 𝒳→{0,1} yield the
same accuracy for the cascade if and only if 𝔼( r_1(x)β(x) ) = 𝔼( r_2(x)β(x) ),
where β(x) := η_h^(2)(x)(x) - η_h^(1)(x)(x).
For a deferral rule r, by definition, the accuracy is given by
A(r) =𝔼_(x,y)[1[h^(1)(x)=y] · (1-r(x))+1[h^(2)(x)=y] · r(x)]
=ℙ(h^(1)(x)=y)+𝔼[r(x) ·(1[h^(2)(x)=y]-1[h^(1)(x)=y])]
=ℙ(h^(1)(x)=y)+𝔼_xr(x)𝔼_y|x[1[h^(2)(x)=y]-1[h^(1)(x)=y]]
=ℙ(h^(1)(x)=y)+𝔼_xr(x) ·(η_h^(2)(x)(x)-η_h^(1)(x)(x)).
Given two deferral rules r_1, r_2, A(r_1)=A(r_2) ⟺ 𝔼( r_1(x)β(x) ) = 𝔼( r_2(x)β(x) ),
where β(x) := η_h^(2)(x)(x) - η_h^(1)(x)(x).
We are ready to prove <Ref>.
Recall the confidence-based deferral rule and the Bayes-optimal rule
are respectively
r_conf(x) =1[η_h^(1)(x)(x)<c'],
r^*(x) =1[η_h^(1)(x)(x)-η_h^(2)(x)(x)<c].
For brevity, we use η_i(x) and η_h^(i)(x)(x) interchangeably.
Define real-valued random variables z := η_1(x) and z^* := η_1(x) - η_2(x),
where x∼ℙ_x. Let ρ(c') := 𝔼(1[η_1(x)<c']) = ℙ(η_1(x)<c')
be the deferral rate of the confidence-based deferral rule with threshold
c'. Similarly, define ρ^*(c) := 𝔼(1[η_1(x)-η_2(x)<c]).
Let γ_α,γ_α^*
be the α-quantile of the distributions of z and z^*,
respectively. By definition, ρ(γ_α)=ρ^*(γ_α^*)=α.
Let A(r,c') denote the accuracy of the cascade with the deferral
rule r and the deferral threshold c'.
Formally, the deferral curve of r_conf is the set of
deferral rate-accuracy tuples {(ρ(c'),A(r_conf,c'))| c'∈[0,1]}.
The same deferral curve may be generated with {(α,A(r_conf,γ_α)) |α∈[0,1]}.
Similarly, for the Bayes-optimal rule, the deferral curve is defined
as {(α,A(r^*,γ_α^*)|α∈[0,1]}.
To show that the two deferral rules produce the same deferral curve,
we show that A(r_conf,γ_α)=A(r^*,γ_α^*)
for any α∈[0,1]. By Lemma <ref>, this is equivalent
to showing that
𝔼(1[η_1(x) <γ_α] ·β(x) ) =𝔼(1[ η_1(x) - η_2(x) <γ_α^*] ·β(x)),
where β(x) := η_2(x) - η_1(x).
Suppose that η_1(x) and η_1(x)-η_2(x) produce
the same ordering over instances x∈𝒳. This means that
for any x∈𝒳 and α∈[0,1], η_1(x) < γ_α ⟺ η_1(x) - η_2(x) < γ_α^*.
Thus, (<ref>) holds.
§ AMOUNT OF COMPUTE USED FOR EXPERIMENTS
There are two types of models involved in all experiments:
(1) base classifiers (i.e., h^(1) and h^(2)), and
(2) post-hoc model.
For post-hoc model training and evaluation, we use one Nvidia V100 GPU.
As discussed in <Ref>, our post-hoc model is only a small
MLP model with only two hidden layers. In each experiment reported in the main text, training one post-hoc model for 20 epochs only takes a few minutes.
For training and evaluating a post-hoc model, a GPU is not needed.
For training of base classifiers, the amount of compute varies depending on the dataset and model architecture. This is summarized in the following table. In the following table, a GPU always refers
to an Nvidia V100 GPU, and a TPU always refers to a Google Cloud TPU v3.[Google Cloud TPU v3: <https://cloud.google.com/tpu/docs/system-architecture-tpu-vm.>]
Dataset Model Devices Approximate Training Time
CIFAR 100 CIFAR ResNet 8 8× GPUs 12m (batch size: 1024, 256 epochs)
CIFAR 100 CIFAR ResNet 14 8× GPUs 20m (batch size: 1024, 256 epochs)
CIFAR 100 CIFAR ResNet 56 8× GPUs 20m (batch size: 1024, 256 epochs)
ImageNet MobileNet V2 8× TPUs 7h (batch size: 64, 90 epochs)
ImageNet EfficientNet B0 8× TPUs 4h (batch size: 1024, 90 epochs)
§ EXPERIMENTAL SETUP: HYPER-PARAMETERS
§.§ Training of Models in a Cascade
We describe hyperparameters we used for training all base models (i.e., h^(1) and h^(2)).
In the following table, BS denotes batch size, and schedule refers
to learning rate schedule.
Dataset | Model | LR | Schedule | Epochs | BS
Standard ImageNet | MobileNet V2 | 0.05 | anneal | 90 | 64
Standard ImageNet | EfficientNet B0 | 0.1 | cosine | 90 | 1024
ImageNet Dog (specialist) | EfficientNet B0 | 0.1 | cosine | 90 | 512
CIFAR 100 | CIFAR ResNet 8 | 1.0 | anneal | 256 | 1024
CIFAR 100 | CIFAR ResNet 14 | 1.0 | anneal | 256 | 1024
CIFAR 100 | CIFAR ResNet 56 | 1.0 | anneal | 256 | 1024
We use SGD with momentum as the optimisation
algorithm for all models. For annealing schedule, the specified learning
rate (LR) is the initial rate, which is then decayed by a factor of
ten after each epoch in a specified list. For CIFAR100, these epochs
are 15, 96, 192 and 224. For ImageNet, these epochs are 5, 30, 60,
and 80.
In the above table, CIFAR 100 includes the standard CIFAR 100 classification problem, CIFAR 100 with label noise, and CIFAR 100 with distribution shift as discussed in <Ref>.
§.§ Training of Post-hoc Deferral Rules
All post-hoc models g: 𝒳 → ℝ we consider are based
on a lightweight Multi-Layer Perceptron (MLP) that takes as input a small set of features constructed from probability outputs from model 1.
More precisely, let p^(1)(x)∈Δ_L denote all probability outputs from model 1 for an L-class classification problem.
Let v(p^(1)(x)) ∈ℝ^D be a list of features extracted from the probability outputs.
In all experiments, the post-hoc model is g(x)=MLP(v(p^(1)(x))) with v producing D=L+11 features. These features are
* The entropy of p^(1)(x).
* Top 10 highest probability values of p^(1)(x).
* One-hot encoding of max_y' p^(1)_y' (i.e., an L-dimensional binary vector).
For the MLP, let FC_K denote a fully connected layer
with K output units without an activation. Let FC_K,f denote
a fully connected layer with K output units with f as the activation.
In all experiments involving post-hoc rules, we use
g(x) = (FC_1∘FC_2^4,ReLU∘FC_2^6,ReLU∘ v ∘ p^(1))(x),
where ReLU denotes the Rectified Linear Unit.
Note that both the MLP and the set of input features to g are small. We found that
a post-hoc model can easily overfit to its training set. Controlling its capacity
helps mitigate this issue.
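The feature map v can be written down directly from the description above; the following sketch mirrors it (entropy, top-10 probabilities, one-hot of the argmax) and produces the D = L + 11 features fed to the MLP.

import numpy as np

def posthoc_features(p1):
    """v(p^(1)(x)) for a batch: entropy, top-10 probabilities, one-hot argmax.
    p1: (N, L) array of model-1 probabilities; returns an (N, L + 11) array."""
    eps = 1e-12
    entropy = -(p1 * np.log(p1 + eps)).sum(axis=1, keepdims=True)
    top10 = -np.sort(-p1, axis=1)[:, :10]            # ten largest probabilities
    onehot = np.eye(p1.shape[1])[p1.argmax(axis=1)]  # one-hot of the predicted class
    return np.concatenate([entropy, top10, onehot], axis=1)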
To train g, we use Adam <cit.>
as the optimisation algorithm with a constant learning rate of 0.0007
and batch size 128. For CIFAR 100 experiments, we use a held-out set of size 5000
as the training set. For ImageNet experiments, we use an held-out set of size 10000.
We train for 20 epochs.
Further, an L2 regularization of weights in each layer in the MLP is also added to the training objective. We set the regularization weight to 0.001.
§ CALIBRATION ANALYSIS
To further study the performance of confidence-based deferral,
Figure <ref> presents calibration plots <cit.>.
These ideally visualise
ℙ( y ≠ h^(1)( x ), y = h^(2)( x ) | x ∈ 𝒳_q ) for q ∈ [ 0, 1 ],
where 𝒳_q := { x ∈ 𝒳 : max_y' p^(1)_y'( x ) = q } is the set of samples where model 1's confidence equals q.
In practice, we discretise q into 10 buckets.
These plots reveal that, as expected,
model 1's confidence may be a poor predictor of when model 1 is wrong but model 2 is right:
indeed, it systematically over-estimates this probability, and thus may result in erroneously forwarding samples to the second model.
This suggests using post-hoc estimates of when model 1 is wrong, but model 2 is right.
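Concretely, the bucketed quantity plotted in these calibration plots can be estimated as in the sketch below (bin edges and array names are illustrative).

import numpy as np

def calibration_curve(conf1, correct1, correct2, num_bins=10):
    """Empirical P(model 1 wrong and model 2 right | confidence bin).
    conf1: model-1 max probabilities; correct1, correct2: boolean arrays."""
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    idx = np.clip(np.digitize(conf1, bins) - 1, 0, num_bins - 1)
    target = (~correct1) & correct2           # model 1 wrong, model 2 right
    rates = [target[idx == b].mean() if np.any(idx == b) else np.nan
             for b in range(num_bins)]
    return bins, np.array(rates)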
We remark here that in a different context, <cit.> showed that certain cascades of pre-trained language models may be amenable to confidence-based deferral:
the confidence of the small model can predict samples where the small model is wrong, but the large model is correct.
In this work, we focus on image classification settings.
Generalising our analysis to natural language processing models will be an interesting
topic for future study.
§ ADDITIONAL EXPERIMENTAL RESULTS
In this section, we present more experimental results to support our analysis in <Ref>.
§.§ When Model 2 is Highly Accurate
To illustrate that confidence-based deferral is optimal when the second model has
a constant error probability, we train an MLP model as h^(1) and a CIFAR ResNet 56 as h^(2) on the MNIST dataset <cit.>, a well known 10-class classification problem.
The MLP has one hidden layer composed of two hidden units with a ReLU activation,
and a fully connected layer with no activation function to produce 10 logit scores (for 10 classes).
<Ref> presents our results.
§.§ Oracle Curves
In this section, we illustrate different plug-in estimates
for the oracle deferral rule in (<ref>) (see <Ref>).
We consider two base models: MobileNet V2 <cit.> and EfficientNet B0 <cit.> as h^(1) and h^(2), which are trained independently on ImageNet. The second model is trained on a subset of ImageNet dataset containing only “dog” synset (i.e., a dog specialist model).
<Ref> illustrates the performance of confidence-based deferral
along with different plug-in estimates for the oracle deferral rule.
We see that in this specialist setting, confidence-based deferral is sub-optimal,
with a considerable gap to the oracle curves;
this is expected, since here ℙ( y = h^(2)( x ) ) is highly variable for different x.
In particular, ℙ( y = h^(2)( x ) ) ∼ 1 for dog images, but ℙ( y = h^(2)( x ) ) ∼ 1/(L − #non-dog classes) for other images. There are 119 classes in the dog synset.
Note that all Oracle curves are theoretical constructs that rely on the true label
y.
They serve as an upper bound on the performance we could achieve when we
learn post-hoc deferral rules to imitate them.
§.§ Confidence of a Dog-Specialist Model
In this section, we report confidence of the two base models we use in the ImageNet dog-specialist setting in <Ref>.
Recall that p^(1) is MobileNet V2 <cit.> trained on the full ImageNet dataset (1000 classes),
and p^(2) is EfficientNet B0 <cit.> trained only on the
dog synset from ImageNet (119 classes).
We show empirical distributions of max_y' p^(1)_y'(x), max_y' p^(2)_y'(x), max_y” p^(2)_y”(x)-max_y' p^(1)_y'(x), p^(2)_y(x)-p^(1)_y(x),
and p^(2)_y(x)-p^(1)_y(x) in <Ref>, grouped by the category
of each input image (Dog or Non-Dog).
These statistics are computed on the test set of ImageNet dataset, containing 50000 images.
We observe from the distribution of max_y' p^(2)_y'(x) that the specialist EfficientNet B0
is confident on dog images. To the specialist, non-dog images are out of distribution and so
max_y' p^(2)_y'(x) is not a reliable estimate of the confidence. That is, p^(2) can be highly confident on non-dog images. As a result, a deferral rule based on Relative Confidence (max_y” p^(2)_y”(x)-max_y' p^(1)_y'(x)) may erroneously route non-dog images
to the second model.
Thus, for training a post-hoc model, it is important to consider model 2's probability of the true label
(i.e., p_y^(2)(x)), which is low when (x,y) is a labeled image outside the dog synset.
We further show a scatter plot of confidence of both models in <Ref>.
We observe that there is no clear relationship between max_y' p^(1)_y'(x) and max_y' p^(2)_y'(x).
§ EXTENSION TO MULTI-MODEL CASCADES
One may readily employ post-hoc deferral rules to train multi-model cascades.
Given K classifiers h^(1), …, h^(K),
a simple recipe is to train K - 1 deferral rules, with the kth rule trained to predict whether or not h^(k) should defer to h^(k+1).
Each of these individual rules may be trained with any of the objectives detailed in Table <ref>.
At inference time, one may invoke these rules sequentially to determine a suitable termination point.
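A sketch of the resulting inference procedure is given below; each trained rule returns a boolean defer/stop decision, and we stop at the first model that does not defer (function names are placeholders, not an implementation we ship).

def cascade_predict(x, models, deferral_rules):
    """Sequentially run a K-model cascade.
    models: list of K predict functions; deferral_rules: list of K-1 boolean-valued rules."""
    for k, model in enumerate(models[:-1]):
        pred = model(x)
        if not deferral_rules[k](x, pred):   # rule k decides whether to pass x onward
            return pred, k                   # stop at model k
    return models[-1](x), len(models) - 1    # last model always answers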
In Figure <ref>,
we present results for a 3-model cascade in the label noise setting.
As with the 2-model case, post-hoc deferral improves significantly over confidence-based deferral,
owing to the latter wastefully deferring on noisy samples that no model can correctly predict.
Here, the relative inference cost is computed as the relative cost compared to always querying the large model.
The above can be seen as a particular implementation of the following generalisation of Proposition <ref>.
Suppose we have K classifiers h^(1), …, h^(K),
with inference costs c^(1), …, c^(K)∈ [0, 1].
We assume without loss of generality that 0 = c^(1)≤ c^(2)≤…≤ c^(K),
i.e.,
the costs reflect the excess cost over querying the first model.
Now consider learning
a selector s: 𝒳 → [ K ] that determines which of the K classifiers is used to make a prediction
for x ∈ 𝒳.
Our goal in picking such a selector is to minimise the standard misclassification accuracy under the chosen classifier, plus an appropriate inference cost penalty:
R( s; h^(1), …, h^(K) )
:= ℙ( y ≠ h^( s( x ) )( x ) )
+ 𝔼[ c^( s( x ) ) ].
We have the following, which is easily seen to generalise Proposition <ref>.
The optimal selector for (<ref>) is given by
s^*( x ) = argmin_{k ∈ [ K ]} ℙ( y ≠ h^( k )( x ) ) + c^( k ).
Observe that
R( s; h^(1), …, h^(K) ) = ℙ( y ≠ h^( s( x ) )( x ) ) + λ·𝔼[ c^( s( x ) ) ]
= 𝔼_x 𝔼_y | x[ 1( y ≠ h^( s( x ) )( x ) ) + λ· c^( s( x ) ) ].
Now suppose we minimise s without any capacity restriction.
We may perform this minimisation pointwise:
for any fixed x ∈ 𝒳,
the optimal prediction is thus
s^*( x ) = argmin_{k ∈ [ K ]} 𝔼_y | x[ 1( y ≠ h^( k )( x ) ) ] + λ· c^( k ).
When K = 2,
we exactly arrive at Proposition <ref>.
For K > 2, the optimal selection is the classifier with the best balance between misclassification error and inference cost.
Lemma <ref> may be seen as a restatement of <cit.>,
which established the Bayes-optimal classifiers for a sequential classification setting,
where each classifier is allowed to invoke a “reject” option to defer predictions to the next element in the sequence.
This equivalently packs together a standard classifier h^(k) and deferral rule r^(k) into a new classifier h̅^(k).
§ LIMITATIONS
In this work, we have identified problem settings where confidence-based
deferral can be sub-optimal. These problem settings are 1. specialist setting, 2. label noise, and 3. distribution shift (see <Ref> for experimental results in these settings).
These problem settings are not exhaustive. Identifying other conditions under
which confidence-based deferral performs poorly is an interesting direction for
future work. Another interesting topic worth investigating is finite-sample behaviors of cascades, both with confidence-based deferral and with post hoc rules.
http://arxiv.org/abs/2307.00766v2 | 20230703060528 | Accelerated variational quantum eigensolver with joint Bell measurement | ["Chenfeng Cao", "Hiroshi Yano", "Yuya O. Nakagawa"] | quant-ph | ["quant-ph"]
Department of Physics, The Hong Kong University of Science and Technology,
Clear Water Bay, Kowloon, Hong Kong, China
QunaSys Inc., Aqua Hakusan Building 9F, 1-13-7 Hakusan, Bunkyo, Tokyo 113-0001, Japan
Department of Applied Physics and Physico-Informatics, Keio University, Hiyoshi 3-14-1, Kohoku, Yokohama 223-8522, Japan
QunaSys Inc., Aqua Hakusan Building 9F, 1-13-7 Hakusan, Bunkyo, Tokyo 113-0001, Japan
QunaSys Inc., Aqua Hakusan Building 9F, 1-13-7 Hakusan, Bunkyo, Tokyo 113-0001, Japan
The variational quantum eigensolver (VQE) stands as a prominent quantum-classical hybrid algorithm for near-term quantum computers to obtain the ground states of molecular Hamiltonians in quantum chemistry. However, due to the non-commutativity of the Pauli operators in the Hamiltonian, the number of measurements required on quantum computers increases significantly as the system size grows, which may hinder practical applications of VQE. In this work, we present a protocol termed joint Bell measurement VQE (JBM-VQE) to reduce the number of measurements and speed up the VQE algorithm. Our method employs joint Bell measurements, enabling the simultaneous measurement of the absolute values of all expectation values of Pauli operators present in the Hamiltonian. In the course of the optimization, JBM-VQE estimates the absolute values of the expectation values of the Pauli operators for each iteration by the joint Bell measurement, while the signs of them are measured less frequently by the conventional method to measure the expectation values. Our approach is based on the empirical observation that the signs do not often change during optimization. We illustrate the speed-up of JBM-VQE compared to conventional VQE by numerical simulations for finding the ground states of molecular Hamiltonians of small molecules, and the speed-up of JBM-VQE at the early stage of the optimization becomes increasingly pronounced in larger systems. Our approach based on the joint Bell measurement is not limited to VQE and can be utilized in various quantum algorithms whose cost functions are expectation values of many Pauli operators.
Accelerated variational quantum eigensolver with joint Bell measurement
Chenfeng Cao, Hiroshi Yano, and Yuya O. Nakagawa
August 1, 2023
=======================================================================
§ INTRODUCTION
Noisy intermediate-scale quantum (NISQ) devices <cit.> have attracted considerable interest as they hold the potential to solve certain computational tasks faster than classical computers <cit.>.
These devices typically have a relatively small number of qubits (usually between 50 and a few hundred) and are subject to hardware noise, so the computational results obtained from them may not be completely reliable.
Despite these limitations, NISQ devices are expected to be capable of performing calculations that are beyond the capabilities of classical computers, which makes them exciting tools in the near future.
The research community has made substantial progress in developing NISQ-friendly variational quantum optimization algorithms for a variety of applications, including quantum machine learning <cit.>, fidelity estimation <cit.>, quantum error-correcting code discovery <cit.>.
The variational quantum eigensolver (VQE) is considered a flagship algorithm within this field, utilizing parameterized quantum circuits and the variational principle to prepare the ground or excited states of quantum many-body systems <cit.>.
VQE has already been experimentally realized on actual quantum hardware to solve small-sized problems in quantum chemistry and material calculation <cit.>.
However, the scalability of VQE to larger system sizes remains a challenge.
One of the main reasons, especially in the application to quantum chemistry, is the large number of measurements required during the optimization process.
For the molecular Hamiltonians in quantum chemistry, there are typically 𝒪(n^4) Pauli operators for an n-qubit system, and estimation of their expectation values is required in each iteration of VQE.
For example, it was estimated that a single evaluation of the expectation value of the Hamiltonian for analysing the combustion energies of some organic molecules requires ∼ 10^9 measurement shots and may take as long as several days <cit.>.
To tackle the problem of scalability, various methods have been proposed to reduce the number of measurements in the optimization of VQE.
One possible strategy is to divide the n^4 Pauli operators in the Hamiltonian into the groups of simultaneously-measurable operators, and there are methods that realize 𝒪(n^3) groups <cit.> and even 𝒪(n^2) groups <cit.>.
The reduction of the number of groups can result in the reduction of the number of measurements to estimate their expectation values, which leads to the alleviation of the scalability problem of VQE.
Nonetheless, it is still highly demanded to develop a method to reduce the number of measurements in the whole optimization process of VQE.
Although it is impossible to perform the projective measurement simultaneously on the non-commuting Pauli operators to estimate their expectation values, the absolute values of the expectation values can be estimated simultaneously by using the so-called joint Bell measurement in a doubled system consisting of 2n qubits (see Sec. <ref> for detailed explanations).
This is because for any n-qubit Pauli operators P_1, P_2, …, P_M acting on the original system of n qubit, the operators P_1 ⊗ P_1, P_2 ⊗ P_2, …, P_M ⊗ P_M acting on the doubled system of 2n qubits commute with each other.
Then the expectation values on the doubled state, (⟨ψ|⊗⟨ψ|)(P_j ⊗ P_j)(|ψ⟩⊗|ψ⟩) = (⟨ψ|P_j|ψ⟩)^2, can be estimated simultaneously for all j=1,⋯,M, where |ψ⟩ is a state in the original n-qubit system.
This joint Bell measurement was utilized to show an exponential advantage of quantum computers over classical ones in predicting properties of physical systems <cit.> or estimate the reduced density matrix of the quantum state efficiently <cit.>.
It is also noteworthy to point out that the reduction of the number of measurements by a constant factor was achieved in Ref. <cit.> through the use of Bell measurement.
We note that the classical shadow technique <cit.> also aims at predicting expectation values of many operators simultaneously, but the measurement overhead in the protocol with random Pauli measurements scales exponentially with the locality of the operator, so the application to the quantum chemistry Hamiltonians with non-local Pauli operators is not straightforward (see also Ref. <cit.>).
In this study, we introduce a method to reduce measurement overhead in VQE by employing joint Bell measurements (JBM), referred to as JBM-VQE (a schematic illustration is provided in Fig. <ref>). Our approach takes advantage of the correlation between Pauli expectation values across successive iterations of VQE. More concretely, we observe that the signs of expectation values of the Pauli operators in the Hamiltonian change infrequently during the optimization of VQE while their absolute values change frequently.
In JBM-VQE, one measures the absolute values by the joint Bell measurement, which requires only one circuit to measure, during every iteration.
On the other hand, the signs of expectation values of the Pauli operators are measured once in the fixed number of iterations by the conventional measurement method (using naively n^4, at least n^2, distinct circuits).
The expectation value of the Hamiltonian is subsequently constructed by combining the estimated absolute values and signs with assuming that the latest estimation of the signs is valid for the following iterations.
We can expect a reduction in the measurement cost during the optimization by using this protocol.
To exemplify this, we numerically compare JBM-VQE to the conventional VQE for molecular Hamiltonians of small molecules under the reasonable condition that the statistical fluctuations of the energy expectation values in both methods are almost the same.
JBM-VQE requires fewer shots to approach the vicinity of the exact ground state compared to conventional VQE, with this trend becoming more pronounced for larger molecules. Our proposal is applicable to any molecular Hamiltonian in quantum chemistry and is expected to expedite the practical utilization of VQE.
Furthermore, the protocols in JBM-VQE can be utilized in various variational quantum algorithms which optimize the expectation values of many Pauli operators, other than VQE.
The paper is organized as follows.
We present the JBM-VQE algorithm in Sec. <ref>.
The required number of measurements to estimate the expectation values of the Pauli operators at certain precision is discussed for JBM-VQE and the conventional VQE in Sec. <ref>.
The numerical comparison between our proposed JBM-VQE and the conventional VQE on various molecular Hamiltonians is presented in Sec. <ref>, demonstrating the acceleration of JBM-VQE.
We discuss several aspects of JBM-VQE in Sec. <ref>.
Finally, we summarize the paper and provide an outlook in Sec. <ref>.
§ ALGORITHM
In this section, we describe the algorithm of JBM-VQE.
We first explain our target Hamiltonian and the joint Bell measurement.
We then explain the algorithm of JBM-VQE.
§.§ Setup
We focus on the quantum chemistry Hamiltonian in the second-quantized form, given by
H_f = ∑_i,j=1^n h_i j c_i^† c_j+1/2∑_i,j,k,l=1^n V_i j k l c_i^† c_j^† c_l c_k,
where c_i (c_i^†) is an annihilation (creation) operator of an electron labeled by i = 1, 2, ⋯, n satisfying the canonical anti-commutation relation {c_i, c_j} = {c_i^†, c_j^†} = 0, {c_i, c_j^†} = δ_ij, and h_ij (V_ijkl) is the scalar related to the so-called one-electron (two-electron) integrals <cit.>.
This Hamiltonian is mapped to a qubit representation by fermion-qubit mappings such Jordan-Wigner mapping <cit.>, parity mapping <cit.>, or Bravyi-Kitaev mapping <cit.>, as follows:
H = ∑_j=1^M λ_j P_j,
where λ_j ∈ℝ is a coefficient, P_j ∈{I,X,Y,Z}^⊗ n is an n-qubit Pauli operator, and M = 𝒪(n^4) is the number of Pauli operators.
Throughout this paper, we omit the identity term I^⊗ n in the Hamiltonian and assume P_j ≠ I^⊗ n.
Similar to the conventional VQE, JBM-VQE is based on the ansatz quantum state:
|ψ(θ)⟩ = U(θ)|0⟩,
where U(θ) is a parameterized quantum circuit with parameters θ = (θ_1, ⋯, θ_N_θ).
Our objective is to minimize the energy expectation value:
E(θ) := ⟨H⟩_θ = ∑_j=1^M λ_j ⟨P_j⟩_θ,
where ⟨⋯⟩_θ := ⟨ψ(θ)|⋯|ψ(θ)⟩, with respect to the parameters θ.
§.§ Joint Bell measurement
The joint Bell measurement enables the determination of the absolute values of expectation values of all 4^n Pauli operators for an n qubit state <cit.>.
It requires a 2n-qubit system comprising two identical n qubit systems, denoted as A and B.
We prepare a 2n qubit state,
|Ψ(θ)⟩ := |ψ(θ)⟩_A ⊗ |ψ(θ)⟩_B = (U(θ) ⊗ U(θ)) |0⟩_A ⊗ |0⟩_B,
and apply CNOT gates and Hadamard gates between the corresponding qubits of A and B, and eventually measure them in the computational basis (see Fig. <ref>).
For each pair of qubits, measuring the state in the computational basis results in the projective measurement onto the Bell basis,
|Φ_1^±⟩ =1/√(2)(|0⟩_A ⊗|0⟩_B ±|1⟩_A ⊗|1⟩_B),
|Φ_2^±⟩ =1/√(2)(|0⟩_A ⊗|1⟩_B ±|1⟩_A ⊗|0⟩_B).
These basis states are common eigenstates of Pauli operators X_A ⊗ X_B, Y_A ⊗ Y_B, Z_A ⊗ Z_B. Therefore, the measurement for all 2n qubits in Fig. <ref> constitutes the projective measurement on simultaneous eigenstates of all 2n qubit Pauli operators in the form of P_j ⊗ P_j (P_j ∈{I,X,Y,Z}^⊗ n), whose total number is 4^n.
Consequently, for any n-qubit Pauli operator P_j, we can estimate
⟨P_j ⊗ P_j⟩_Ψ(θ) = ⟨P_j⟩_θ^2
from the measurement outcomes of the single quantum circuit in Fig. <ref>.
From the shot-based estimate of ⟨P_j⟩_θ^2 obtained in this way,
the absolute value of ⟨P_j⟩_θ is then estimated as
|⟨P_j⟩_θ| ≈ √(max{0, ⟨P_j⟩_θ^2}).
We refer to this protocol to estimate the absolute values of expectation values of all 4^n Pauli operators as the joint Bell measurement.
We note that the estimate (<ref>) is biased when the number of shots for measurements is finite because of the non-linearity of the square root and the max functions.
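To make the classical post-processing of the joint Bell measurement explicit, the following sketch estimates |⟨P⟩| for an arbitrary Pauli string from the 2n-bit outcomes of the circuit in Fig. <ref>. It assumes the convention that each CNOT acts from the A-qubit onto the corresponding B-qubit and is followed by a Hadamard on the A-qubit, so that the outcome bits (a, b) of a pair carry eigenvalue (−1)^a for X⊗X, (−1)^b for Z⊗Z, and −(−1)^(a+b) for Y⊗Y; with a different circuit convention the bookkeeping changes accordingly.

import numpy as np

def pair_eigenvalue(pauli, a, b):
    """Eigenvalue of sigma ⊗ sigma on the Bell state labelled by outcome bits (a, b)."""
    if pauli == "I":
        return 1
    if pauli == "X":
        return (-1) ** a
    if pauli == "Z":
        return (-1) ** b
    if pauli == "Y":
        return -((-1) ** (a + b))
    raise ValueError(pauli)

def estimate_abs_expectation(pauli_string, bits_a, bits_b):
    """Estimate |<P>| from joint Bell measurement outcomes.
    pauli_string: e.g. "XZIY" (length n); bits_a, bits_b: (shots, n) arrays of 0/1."""
    shots = bits_a.shape[0]
    vals = np.ones(shots)
    for q, sigma in enumerate(pauli_string):
        vals *= np.array([pair_eigenvalue(sigma, a, b)
                          for a, b in zip(bits_a[:, q], bits_b[:, q])])
    p_squared = vals.mean()                     # estimate of <P>^2
    return np.sqrt(max(0.0, p_squared))         # estimate of |<P>|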
§.§ JBM-VQE algorithm
In the JBM-VQE algorithm, we estimate the energy E(θ) and its gradient ∇E(θ) by decomposing the expectation value ⟨P_j⟩_θ into its sign,
s_j(θ) := sgn(⟨P_j⟩_θ) :=
+1 (if ⟨P_j⟩_θ ≥ 0),
−1 (if ⟨P_j⟩_θ < 0),
and its absolute value |⟨P_j⟩_θ|.
We employ the following two subroutines to estimate the energy and gradient in the algorithm.
Subroutine 1.
The first subroutine takes the parameters θ as its input.
In this subroutine, we first estimate the signs s_j(θ) by evaluating the expectation values ⟨P_j⟩_θ themselves with the standard measurement strategy using n qubits, as done in the conventional VQE.
One naive way to estimate the signs is to perform the projective measurement of each P_j with n^4 distinct measurement circuits, subsequently estimating the sign s_j(θ) via a majority vote of its ±1 results.
The estimated sign is denoted as ŝ_j(θ), and the total number of shots (repetitions of quantum circuit executions) to estimate all ŝ_j(θ) is represented as m_S^tot. Following this, the joint Bell measurement using 2n qubits is utilized to approximate the absolute values of the expectation values |⟨P_j⟩_θ|, resulting in their estimates.
The number of shots for the joint Bell measurement is denoted as m.
The estimate of the energy (expectation value of the Hamiltonian) is constructed as
Ê(θ) = ∑_j λ_j ŝ_j(θ) |⟨P_j⟩_θ|.
Additionally, we estimate the gradient of E(θ) by using the so-called parameter shift rule <cit.>, mathematically expressed in the simplest case as
∂⟨P_j⟩_θ / ∂θ_l = (1/(2 sin α)) ( ⟨P_j⟩_θ^(l)_+ − ⟨P_j⟩_θ^(l)_- ),
where θ^(l)_± = θ ± α δ_l, δ_l is a unit vector with only the l-th component non-zero, and α ∈ ℝ is a fixed constant.
We take α = π/4 in the numerical calculations in Sec. <ref>.
Analogous to the energy estimation, we use the standard measurement strategy to estimate the signs s_j(θ^(l)_±) using the quantum states |ψ(θ^(l)_±)⟩, which may require n^4 distinct quantum circuits in a naive way.
The absolute values |⟨P_j⟩_θ^(l)_±| are then estimated with the joint Bell measurement for the states |ψ(θ^(l)_±)⟩.
The gradient of the energy is estimated by
∂Ê(θ) / ∂θ_l = ∑_j (λ_j/(2 sin α)) ( ŝ_j(θ^(l)_+) |⟨P_j⟩_θ^(l)_+| − ŝ_j(θ^(l)_-) |⟨P_j⟩_θ^(l)_-| ).
The estimates of the energy (<ref>) and the gradient (<ref>) are the outputs of this subroutine.
Subroutine 2.
The second subroutine takes the parameters θ and a set of guessed signs
{t_j}_j=1^M, {t^(l=1)_j,+}_j=1^M, ⋯, {t^(l=N_θ)_j,+}_j=1^M,
{t^(l=1)_j,-}_j=1^M, ⋯, {t^(l=N_θ)_j,-}_j=1^M,
as inputs (t_j, t^(l)_j,± = ±1).
In this subroutine, we estimate only the absolute values (|⟨P_j⟩_θ|, |⟨P_j⟩_θ^(l)_+| and |⟨P_j⟩_θ^(l)_-|) by the joint Bell measurement.
The energy and the gradient are estimated by
Ê(θ) = ∑_j λ_j t_j |⟨P_j⟩_θ|,
∂Ê(θ) / ∂θ_l = ∑_j (λ_j/(2 sin α)) ( t^(l)_j,+ |⟨P_j⟩_θ^(l)_+| − t^(l)_j,- |⟨P_j⟩_θ^(l)_-| ).
These two estimates are outputs of the second subroutine.
The JBM-VQE algorithm is described in Algorithm <ref>.
Let us assume that the parameters take the value θ in the n_iter-th iteration (n_iter = 0, 1, ⋯) of the algorithm.
When n_iter is a multiple of T_S, we invoke subroutine 1 to estimate the energy and gradient, Ê(θ) [Eq. (<ref>)] and ∇Ê(θ) [Eq. (<ref>)], respectively.
Importantly, we also record the estimates of the signs ŝ_j(θ), ŝ_j(θ^(l)_±) for j = 1, ⋯, M and l = 1, ⋯, N_θ.
When n_iter is not a multiple of T_S, we invoke subroutine 2, utilizing the pre-recorded signs (obtained at some past iteration) as the guessed signs [Eq. (<ref>)].
In other words, we estimate the energy and the gradient by Eqs. (<ref>)(<ref>), performing only the joint Bell measurement that estimates the absolute values of the Pauli expectation values.
Then we update the parameters by the gradient descent step θ' = θ − η∇Ê(θ), where η is a learning rate.
It is worth noting that the gradient descent is not the only choice in the JBM-VQE and other sophisticated optimization algorithms can be employed (see the discussion in Sec. <ref>).
Several remarks regarding our JBM-VQE algorithm are in order.
Firstly, this algorithm relies on the expectation that the signs of the Pauli expectation values (⟨P_j⟩_θ and ⟨P_j⟩_θ^(l)_±) do not change frequently during the optimization process.
Subroutine 2 consists of the joint Bell measurement that uses only (2N_θ + 1) quantum circuits and may typically require fewer measurement shots to estimate the absolute values of the Pauli expectation values than the conventional VQE.
For this reason, we expect a reduction in the total number of shots in JBM-VQE compared with the conventional VQE.
Secondly, the joint Bell measurement has a bias on its estimates, causing the energy and gradient estimated in both subroutines 1 and 2 to exhibit bias, although this bias will vanish as the number of shots (m and m_S^tot) approaches infinity.
JBM-VQE should be employed when the bias remains relatively small compared to the required energy precision, such as during the early stage of VQE optimization (we discuss this point in Sec. <ref>).
The rough criteria of the number of shots for realizing a certain precision of the estimated expectation values are discussed in Sec. <ref>.
Thirdly, if we use a 2n-qubit system just as two independent copies of the original n-qubit system and conduct the conventional VQE, m executions of the circuits are equivalent to 2m shots in the original system. Consequently, our JBM-VQE algorithm must surpass the conventional VQE by at least a factor of two concerning the number of shots, and it is actually realized in the numerical simulation in Sec. <ref>. Lastly, the sign-updating period T_S influences the efficiency of JBM-VQE. A larger period leads to fewer shots required for optimization, albeit with the trade-off of less accurate energy and gradient estimates. While there is no a priori criterion for determining T_S, it can be set manually or adaptively by monitoring the optimization history.
Algorithm: JBM-VQE.
Input: Hamiltonian H = ∑_j λ_j P_j, variational circuit U(θ), number of shots for the joint Bell measurement m, sign-updating period T_S, number of shots for sign-updating m_S^tot, and learning rate η.
Output: Parameters θ_opt which approximate the ground state of H by |ψ(θ_opt)⟩ = U(θ_opt)|0⟩, and the estimated optimal energy Ê(θ_opt).
1. Initialize θ and set n_iter = 0.
2. While the energy estimate Ê(θ) has not converged:
3.   If n_iter % T_S = 0: call subroutine 1 with input θ, obtain the estimates of the energy [Eq. (<ref>)] and the gradient [Eq. (<ref>)], and record the estimated signs ŝ_j(θ), ŝ_j(θ^(l)_±) as t_j, t^(l)_j,±.
4.   Else (n_iter % T_S ≠ 0): call subroutine 2 with input θ and the guessed signs t_j, t^(l)_j,± recorded at a past iteration, and obtain the estimates of the energy [Eq. (<ref>)] and the gradient [Eq. (<ref>)].
5.   Update θ ← θ − η∇Ê(θ).
6.   Set n_iter ← n_iter + 1.
7. Return θ and Ê(θ).
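The overall loop can also be summarized in a few lines of classical pseudocode; the sketch below abstracts the two measurement subroutines into callables and uses plain gradient descent, so all function names are placeholders rather than an actual implementation.

import numpy as np

def jbm_vqe(lambdas, jbm_abs, measure_signs, theta0,
            T_S=30, eta=0.02, alpha=np.pi / 4, n_steps=300):
    """Sketch of the JBM-VQE outer loop with plain gradient descent.

    lambdas          : array of coefficients lambda_j of the Hamiltonian.
    jbm_abs(t)       : array of |<P_j>| estimates at parameters t (one joint Bell circuit).
    measure_signs(t) : array of estimated signs s_j at parameters t (projective measurements).
    """
    theta = np.array(theta0, dtype=float)
    n_params = len(theta)
    signs = signs_plus = signs_minus = None
    energy = None
    for n_iter in range(n_steps):
        plus = [theta + alpha * np.eye(n_params)[l] for l in range(n_params)]
        minus = [theta - alpha * np.eye(n_params)[l] for l in range(n_params)]
        if n_iter % T_S == 0:              # subroutine 1: refresh all signs
            signs = measure_signs(theta)
            signs_plus = [measure_signs(t) for t in plus]
            signs_minus = [measure_signs(t) for t in minus]
        abs0 = jbm_abs(theta)              # joint Bell measurement, every iteration
        energy = float(np.dot(lambdas, signs * abs0))
        grad = np.array([
            np.dot(lambdas,
                   signs_plus[l] * jbm_abs(plus[l])
                   - signs_minus[l] * jbm_abs(minus[l])) / (2 * np.sin(alpha))
            for l in range(n_params)
        ])
        theta = theta - eta * grad         # plain gradient descent update
    return theta, energy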
§ SHOT THRESHOLDS
In both JBM-VQE and the conventional VQE, the energy estimate exhibits finite statistical fluctuation due to the limited number of measurement shots. This occurs even without any noise present in quantum devices. To facilitate a fair comparison between JBM-VQE and the conventional VQE, it is essential to establish a common criterion ensuring that both methods display the same level of fluctuation.
In this section, we discuss such a criterion by investigating the number of shots required to estimate an expectation value of a single Pauli operator with a fixed level of accuracy.
We formulate the number of shots to estimate the expectation value with certain accuracy and probability, and numerically calculate the actual numbers.
The number established here is utilized in Sec. <ref>, where numerical demonstrations of JMB-VQE are performed for quantum chemistry Hamiltonians of small molecules.
§.§ Shot threshold for the conventional VQE
Let us define the number of shots to estimate the expectation value of a single Pauli operator with the projective measurement, which is the standard measurement strategy for the conventional VQE.
For a given n-qubit state |ψ⟩ and an n-qubit Pauli operator P, the probability of estimating ⟨P⟩ := ⟨ψ|P|ψ⟩ within an additive error τ_th under the m-shot projective measurement of P is given by
p(m, τ_th, ⟨P⟩) = ∑_x=x_min^x_max C(m,x) ((1+⟨P⟩)/2)^x ((1−⟨P⟩)/2)^(m−x),
x_min = max{ 0, ⌈ m(1+⟨P⟩−τ_th)/2 ⌉},
x_max = min{ m, ⌊ m(1+⟨P⟩+τ_th)/2 ⌋},
where C(m,x) is the binomial coefficient and ⌈ ⋯ ⌉ (⌊ ⋯ ⌋) is the ceiling (floor) function of integers.
Since there are various Pauli operators included in the Hamiltonian, we consider the averaged probability for estimating the expectation value within an additive error τ_th,
p^(av)(m, τ_th) = 1/2∫_-1^1 dy p(m, τ_th, y).
We then define the standard measurement (SM) shot threshold as follows:
The standard measurement (SM) shot threshold m^SM_th(τ_th, p_th) is defined as
m_th^SM(τ_th, p_th) := min{m∈ℤ^+ | p^(av)(m, τ_th) ≥ p_th}
The SM shot threshold m_th^SM(τ_th, p_th) indicates the minimum number of shots of the projective measurement of P needed to estimate ⟨P⟩ within an additive error τ_th with probability at least p_th, where the expectation value ⟨P⟩ is averaged over the uniform distribution on [−1, 1].
We leverage m_th^SM(τ_th, p_th) to determine the number of shots in the numerical simulation of the conventional VQE in Sec. <ref>.
We note that the value of ⟨P⟩ may cluster around 0 if we consider random states in the Hilbert space, e.g., Haar random states, but we employ the uniform distribution because the ground states of quantum chemistry Hamiltonians are not random states and various values of ⟨P⟩ may appear.
Finally, we evaluate actual numerical values of m_th^SM(τ_th, p_th) for various τ_th and p_th.
For a given m, we calculate p^(av)(m, τ_th) by approximating the integral through numerical integration, taking 2000 uniformly-spaced points of ⟨P⟩ in the interval [−1,1].
The results are presented in Fig. <ref>.
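These thresholds are straightforward to compute numerically; the sketch below uses the binomial CDF and a uniform grid over ⟨P⟩. The default grid size and the brute-force search over m are illustrative (the values quoted in this paper were obtained with a 2000-point grid).

import numpy as np
from scipy.stats import binom

def p_within(m, tau, expval):
    """p(m, tau, <P>): probability that the m-shot estimate is within tau of <P>."""
    q = (1 + expval) / 2
    x_min = max(0, int(np.ceil(m * (1 + expval - tau) / 2)))
    x_max = min(m, int(np.floor(m * (1 + expval + tau) / 2)))
    if x_max < x_min:
        return 0.0
    return binom.cdf(x_max, m, q) - binom.cdf(x_min - 1, m, q)

def sm_shot_threshold(tau, p_th, grid=201, m_max=100000):
    """Smallest m whose grid-averaged success probability reaches p_th.
    The linear search over m is a sketch; a bisection would be faster in practice."""
    ys = np.linspace(-1, 1, grid)
    for m in range(1, m_max + 1):
        if np.mean([p_within(m, tau, y) for y in ys]) >= p_th:
            return m
    return None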
§.§ Shot threshold for JBM-VQE
In JBM-VQE, the estimation of the absolute value |⟨P⟩| and the sign s := sgn(⟨P⟩) for a given state |ψ⟩ and a Pauli operator P is performed differently.
The absolute value is estimated using joint Bell measurements, while the sign is typically determined through the projective measurement of P.
We consider the number of shots to estimate each of them within certain error and probability.
First, let us define the number of shots necessary for accurately estimating the absolute value P.
Since the joint Bell measurement provides the expectation values of P ⊗ P, the probability of determining |⟨ P ⟩| within an error τ_th is calculated as follows:
q(m, τ_th, ⟨P⟩)
= ∑_x ∈𝕏 C(m,x) ((1+⟨P⟩^2)/2)^x ((1−⟨P⟩^2)/2)^(m−x),
where 𝕏 is the set of integers 0 ≤ x ≤ m that satisfy
| √(max{0, 2x/m − 1}) − |⟨P⟩| | ≤ τ_th
(see Eq. (<ref>)).
Similar to the SM shot threshold, we average the probability q(m, τ_th, ⟨P⟩) as
q^(av)(m, τ_th) = 1/2∫_-1^1 dy q(m, τ_th, y).
We can now define JBM shot threshold:
JBM shot threshold m^JBM_th(τ_th, p_th) is defined as
m_th^JBM(τ_th, p_th) := min{m∈ℤ^+ | q^(av)(m, τ_th) ≥ p_th}
This means that the JBM shot threshold m^JBM_th(τ_th, p_th) indicates the minimal number of shots needed to estimate the absolute value |⟨P⟩| within an additive error τ_th with probability p_th.
Numerical values of m^JBM_th(τ_th, p_th) are again calculated from 2000 uniformly-spaced points of ⟨P⟩ in [−1,1] and summarized in Fig. <ref>.
Next, we examine the estimation of the sign s in JBM-VQE that is performed through a majority vote of the results of the projective measurement of P.
When the number of shots is an even integer m_S, the probability of estimating s correctly is
p_S(m_S, ⟨P⟩) = ∑_x ∈𝕏_S C(m_S, x) ((1+⟨P⟩)/2)^x ((1−⟨P⟩)/2)^(m_S−x),
𝕏_S =
{ m_S/2, m_S/2+1, ⋯, m_S } (⟨P⟩ ≥ 0)
{ 0, 1, ⋯, m_S/2 − 1 } (⟨P⟩ < 0)
The numerical values of p_S(m_S, ⟨P⟩) are presented in Fig. <ref>.
We observe that even for a relatively small number of shots such as m_S = 17, the probability p_S is as high as ≥ 0.8 for |⟨P⟩| ≥ 0.2.
In our numerical simulations presented in the next section, we take these values into account for determining the value of m_S in the JBM-VQE algorithm.
Finally, it is worthwhile to point out that m_th^SM and m_S considered here are defined for a single Pauli operator P, so if there are M Pauli operators, it would require M · m_th^SM and M · m_S shots to estimate all expectation values and their sign.
Similarly, it is shown <cit.> that log(M)/ϵ^4 measurements are needed to estimate |⟨P_1⟩|, ⋯, |⟨P_M⟩| within the error ϵ simultaneously, so it would require log(M) · m_th^JBM shots to estimate all of the absolute values.
§ NUMERICAL DEMONSTRATION
In this section, we present numerical demonstrations of JBM-VQE for finding the ground states of quantum chemistry Hamiltonians.
We first examine an example where the signs of the Pauli expectation values, s_j(θ), do not undergo frequent flipping during the course of standard VQE optimization.
This observation leads to the anticipation that JBM-VQE can reduce the number of shots to optimize the parameters with a specified level of accuracy.
We then compare JBM-VQE and the conventional VQE by taking various small molecules as examples.
The parameter optimization in JBM-VQE proceeds more rapidly than in the conventional VQE, according to the metric we have established for the early stage of the optimization.
The advantage of JBM-VQE becomes increasingly apparent as system sizes (the number of qubits) grow larger.
§.§ Signs of Pauli expectation values during parameter optimization
We conduct a numerical simulation of the conventional VQE to prepare the ground state of the H2 molecular Hamiltonian whose bond distance is 0.74Å.
The quantum chemistry Hamiltonian of the form (<ref>) is constructed by using the Hartree-Fock molecular orbitals with the STO-3G basis set.
The Hamiltonian is then mapped to the qubit form (<ref>) via the Jordan-Wigner transformation.
The resulting qubit Hamiltonian consists of n=4 qubits and 14 non-identity Pauli operators.
The construction of the Hamiltonian was implemented using the numerical libraries PySCF <cit.> and OpenFermion <cit.>.
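For reference, the standard OpenFermion/PySCF workflow that produces such a qubit Hamiltonian looks roughly as follows; exact module paths may differ between library versions, so this is a sketch rather than the precise script we ran.

from openfermion import MolecularData, get_fermion_operator, jordan_wigner
from openfermionpyscf import run_pyscf

geometry = [("H", (0.0, 0.0, 0.0)), ("H", (0.0, 0.0, 0.74))]   # bond distance 0.74 Å
molecule = MolecularData(geometry, basis="sto-3g", multiplicity=1, charge=0)
molecule = run_pyscf(molecule, run_scf=True)                    # Hartree-Fock with PySCF

fermionic_h = get_fermion_operator(molecule.get_molecular_hamiltonian())
qubit_h = jordan_wigner(fermionic_h)     # qubit Hamiltonian as a sum of Pauli strings
print(len(qubit_h.terms))                # number of Pauli terms (including the identity)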
For the VQE ansatz state, we employ the symmetry-preserving ansatz <cit.> |ψ(θ)⟩ = U(θ)|n_e⟩, where U(θ) represents the variational quantum circuit depicted in Fig. <ref> of Appendix <ref> and |n_e⟩ is the computational basis state |0⋯0 1⋯1⟩ with n−n_e “0"s and n_e “1"s, n_e denoting the number of electrons (for the H2 molecule, n_e=2).
It has eight parameters in total, with initial values sampled randomly from [0,π/5].
The parameters are updated using the gradient descent method with a learning rate η = 0.02.
We simulate the energy expectation values E(θ) = ⟨ψ(θ)|H|ψ(θ)⟩ exactly, without any statistical error or noise sources, using the numerical library Qulacs <cit.>.
The result is presented in Fig. <ref>.
After 3000 iterations, the VQE algorithm successfully finds the exact ground state.
Importantly, we observe that the signs of the expectation values ⟨P_j⟩_ψ(θ) of most Pauli operators either remain unchanged or exhibit only a single change during the VQE optimization.
This observation motivates us to accelerate VQE by estimating only the absolute values |⟨P_j⟩_ψ(θ)| in the majority of iterations.
We note that expectation values of some Pauli operators (e.g., ⟨ X_1⊗ Y_2⊗ Y_3⊗ X_4⟩ and ⟨ Y_1⊗ X_2⊗ X_3⊗ Y_4⟩) are the same throughout the optimization process due to the ansatz symmetry.
§.§ Comparison of JBM-VQE with the conventional VQE for various small molecules
Here we present a numerical comparison of the measurement cost between JBM-VQE and the conventional VQE.
We consider eight molecular systems: H2, H3+, H4, H5+, LiH, H2O, NH3, and BeH2.
The geometries of these molecules are provided in Table <ref> of Appendix <ref>.
For the latter four molecules (LiH, H2O, NH3, and BeH2), an active-space approximation with four orbitals is taken so that the Hamiltonians become 8-qubit ones.
Similar to the calculations in the previous subsection, we construct quantum chemistry Hamiltonians of the form (<ref>) using Hartree-Fock molecular orbitals with the STO-3G basis set for the eight molecules under consideration.
The Jordan-Wigner transformation is employed to obtain the qubit Hamiltonian.
The symmetry-preserving ansatz (described in Fig. <ref> of Appendix <ref>) is employed for both JBM-VQE and the conventional VQE. The ansatz depths for molecules H2, H3+, H4, H5+, LiH, H2O, NH3, and BeH2 are 2, 3, 8, 14, 3, 5, 6, and 5, respectively. Initial parameters are uniformly sampled from [0,π/5] for all cases.
We evaluate the expectation value E(θ) = ⟨ψ(θ)|H|ψ(θ)⟩ and its gradient as follows.
In JBM-VQE, the signs { s_j(θ) }_j=1^M and { s_j(θ^(l)_±) }_j=1^M are estimated by the projective measurement of the Pauli operators P_j included in the Hamiltonian.
We employ the qubit-wise commuting (QWC) grouping <cit.> to make groups of simultaneously-measurable Pauli operators, which requires additional n one-qubit gates and reduces the number of the total shots for estimation (see Appendix <ref> for details).
We allocate the same number of shots for all generated groups, denoting the number of shots for each group as m_S. The absolute values {|⟨P_j⟩_θ|}_j=1^M are estimated using the joint Bell measurement with 2n qubits, as described in Sec. <ref>. The number of shots for the joint Bell measurement is denoted by m.
In the conventional VQE, we estimate the energy expectation value by directly estimating ⟨P_j⟩_ψ(θ) with the projective measurement of P_j.
We also employ QWC grouping to reduce the number of shots and allocate the same amount of shots for all groups.
The number of shots for each group is denoted by m_VQE.
For both JBM-VQE and the conventional VQE, all outcomes of the measurement shots (bitstrings) are simulated without considering any noise sources using the numerical library Qulacs <cit.>.
The learning parameter for the gradient descent is set to η=0.02 for both JBM-VQE and the conventional VQE.
As mentioned in Sec. <ref>, we set α=π/4 in the parameter shift rule (<ref>) to estimate the gradient because of the reason described in Appendix <ref>.
We consider the threshold of p_th = 0.9 and τ_th = 0.05 for estimating the Pauli expectation values.
According to Figs. <ref>, <ref> and <ref>, the numbers of the shots are taken as m=4159, m_S=513, and m_VQE=739.
Note that the standard deviations (or fluctuations) in estimating the energy expectation values are not strictly the same between JBM-VQE and the conventional VQE because the values of the thresholds discussed in Sec. <ref> are for a single Pauli operator.
The sign-updating period of JBM-VQE is fixed at T_S=30 for all simulations. For every 200 iterations, the mean estimated energy is computed. Should the energy fail to decrease by a minimum of 0.001 following 200 iterations, the optimization process is terminated.
The results are shown in Fig. <ref>.
For all molecules, JBM-VQE exhibits a faster decrease of energy with fewer measurement shots.
The sudden jump of the estimated energy in JBM-VQE appears to be due to the update of the signs.
It is also observed that the advantage of JBM-VQE becomes more evident as the system size increases from 4 qubits (H2) to 10 qubits (H5+).
To quantify the improvement of the measurement cost of JBM-VQE over the conventional VQE, we count the number of shots required to achieve a result with energy lower than the Hartree-Fock energy.
The ratio of the shot count of the conventional VQE to that of JBM-VQE is calculated by averaging the results of at least 50 independent numerical simulations with different initial parameters, which yields (n denotes the number of qubits)
H2(n=4):1.44, H3+(n=6):3.45,
H4(n=8):9.00, H5+(n=10):9.24,
BeH2(n=8):3.70, LiH(n=8):4.60,
NH3(n=8):11.60, H2O(n=8):2.24.
These numbers illustrate the reduction of the measurement cost of JBM-VQE.
Finally, we comment on the choice of the Hartree-Fock energy as the reference for counting the shots needed to optimize the ansatz parameters, even though the Hartree-Fock energy can already be obtained from the initial state of the ansatz without applying U(θ).
This is because the purpose of this numerical illustration is to simply show the reduction of the number of shots in the optimization process of (JBM-)VQE.
Furthermore, it is important to acknowledge that the ansatz selected in this study does not guarantee an accurate representation of the ground state of the given Hamiltonian. This limitation arises due to the expressibility constraints of the ansatz and makes it difficult to define the number of shots for the optimization by using the exact ground-state energy.
§ DISCUSSION
In this section, we discuss several aspects of the JBM-VQE method.
First, JBM-VQE can be expected to present a significant advantage over conventional VQE as the number of qubits n increases, as demonstrated in the hydrogen chain examples from the previous section.
This is because the number of distinct quantum circuits needed to evaluate the energy expectation value scales as at least 𝒪(n^2) (naively 𝒪(n^4)) in the conventional VQE, while JBM-VQE needs only a single circuit when we skip the evaluation of the signs of the Pauli expectation values.
Although a small number of distinct quantum circuits does not directly imply the efficiency of the evaluation, we anticipate a reduction of the total number of shots in JBM-VQE, as illustrated in the previous section.
Second, we propose using JBM-VQE as an initial optimizer when the accuracy of the energy estimate is not as high, as demonstrated in the numerical simulation in Sec. <ref>, because of the following two reasons.
The first reason is that the estimate of the energy in JBM-VQE is biased, i.e., the average of Ê(θ) (Eqs. (<ref>)(<ref>)) is not equal to the true value E(θ) (note that the estimate Ê(θ) is a random variable whose probability distribution is determined by that of the outcomes of the joint Bell measurement).
The bias has already been seen in the numerical simulation (Fig. <ref>), where the distribution of the dots (the energy estimates) is not centered at the corresponding line (the exact value of the energy at those parameters) for several molecules.
The second reason is the scaling of the number of shots in the joint Bell measurement with respect to the estimation error.
As shown in Ref. <cit.>, the number of shots required to estimate all M absolute values |⟨P_1⟩|, ⋯, |⟨P_M⟩| within the error ϵ by the joint Bell measurement scales as log(M)/ϵ^4.
This ϵ^-4 scaling can be problematic when we take small ϵ.
These two properties of JBM-VQE can pose challenges when optimizing the ansatz parameters with high accuracy required quantum chemistry, such as the so-called chemical accuracy 10^-3 Hartree.
Third, we point out that the JBM-VQE protocol is flexible and can be combined with various variational algorithms which require the evaluation of many Pauli expectation values to optimize the parameters.
For example, sophisticated optimizers like Adam <cit.> or ones more tailored to VQE <cit.> can be used in JBM-VQE, although we employ plain-vanilla gradient descent in the numerical simulations.
One can simply skip the evaluation of the signs of Pauli expectation values at some iterations (parameter updates) and perform the joint Bell measurement to estimate the energy and its gradient that are fed into the optimizers.
§ SUMMARY AND OUTLOOK
In this study, we introduce a protocol designed to accelerate the Variational Quantum Eigensolver (VQE) algorithm for determining the ground state of molecular Hamiltonians. Our approach employs the joint Bell measurement to estimate the energy expectation value and its gradient for n-qubit systems with a single (𝒪(n^0)=1) distinct quantum circuit in the majority of iterations, under the assumption that the signs of the Pauli expectation values do not change frequently during optimization. In contrast, the conventional VQE necessitates at least 𝒪(n^2) distinct quantum circuits for estimating the energy and gradient in each iteration.
We conducted numerical simulations of various small molecular Hamiltonians, demonstrating that our proposed protocol effectively reduces the number of measurement shots required to optimize the ansatz parameters to a certain level. JBM-VQE holds promise for application in a broad range of near-term quantum algorithms that depend on Pauli expectation value estimations.
In future work, it is interesting to apply our protocol to various NISQ algorithms other than the simple VQE presented in this study.
For example, it is possible to combine JBM-VQE with the variants of VQE for excited states <cit.>, quantum imaginary-time evolution <cit.>, and algorithmic error mitigation schemes <cit.>. There is reason to believe that in protocols involving the estimation of quantum computed moments ⟨ψ |H^k| ψ⟩, like the Lanczos-inspired error mitigation scheme <cit.>, variance-VQE (where k=2) <cit.>, and variance extrapolation <cit.>, the acceleration from the joint Bell measurement becomes more significant, since the number of Pauli terms to be evaluated is of order 𝒪(n^4k). Furthermore, it would be beneficial to explore the integration of our protocol with two notable strategies, the α-VQE <cit.> and the parallelized VQE <cit.>, both of which have demonstrated promising results in improving the efficiency of VQE optimization.
§ DETAILS OF NUMERICAL CALCULATION
§.§ Details of ansatz and molecules
In the numerical simulations presented in Sec. <ref>, we employ the symmetry-preserving real-valued ansatz depicted in Fig. <ref>.
We use molecules with geometries and active spaces listed in Table <ref>.
Some of these geometries are chosen as the stable structures at the level of Hartree-Fock/STO-3G, as referenced from the CCCBDB database.
§.§ Qubit wise commuting grouping
For the numerical simulations shown in Fig. <ref>, we employ the qubit-wise commuting (QWC) grouping <cit.> to evaluate the expectation values of the Hamiltonian H.
Consider two n-qubit Pauli operators, Q_1 = σ_1^(1)⊗σ_1^(2)⊗…⊗σ_1^(n) and Q_2 = σ_2^(1)⊗σ_2^(2)⊗…⊗σ_2^(n), where σ_1,2^(j) is a single qubit Pauli operator I, X, Y, Z acting on j-th qubit.
We define that Q_1 and Q_2 are qubit-wise commuting if and only if σ_1^(j)σ_2^(j)=σ_2^(j)σ_1^(j) for all j ∈{1,2,…,n}.
The Pauli operators {Q_1, Q_2, ⋯} that are mutually qubit-wise commuting can be simultaneously measured by applying an additional quantum circuit consisting of O(n) one-qubit gates to the state |ψ⟩.
Therefore, we can reduce the number of distinct quantum circuits needed to evaluate the expectation value of the Hamiltonian H=∑_j=1^M λ_j P_j by dividing the Pauli operators {P_j }_j=1^M into groups of mutually qubit-wise commuting operators.
It should be noted that various grouping methods have been explored in the literature, including those considering usual commutativity and anti-commutativity of the Pauli operators <cit.>.
The QWC grouping method is adopted in our simulation because it does not require deep and complicated quantum circuits to perform simultaneous measurements of the operators in each group.
A greedy search over the sorted Pauli operators <cit.> is employed to group the Pauli operators in the Hamiltonian. We sort the M Pauli operators {P_j }_j=1^M of the Hamiltonian in descending order of the absolute values of their coefficients |λ_j|; the sorted operators are denoted {P'_j }_j=1^M. We assign P'_1 to the first group. For j = 2,3,…,M, if P'_j qubit-wise commutes with all Pauli operators in an existing group, it is assigned to that group; if no such group exists, a new group is created to house P'_j. This procedure is repeated until all Pauli operators are assigned to a group.
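As an illustration, the grouping procedure can be sketched as follows (a minimal Python sketch; the Pauli strings and helper names are hypothetical and not taken from any particular package):

def qubit_wise_commute(p, q):
    # Two Pauli strings (e.g. "XIZY") are qubit-wise commuting iff, on every
    # qubit, the single-qubit Paulis are equal or at least one of them is I.
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def qwc_groups(hamiltonian):
    # hamiltonian: list of (coefficient, pauli_string) pairs.
    # Sort terms by descending |coefficient|, then greedily assign each term
    # to the first existing group whose members all qubit-wise commute with it.
    terms = sorted(hamiltonian, key=lambda t: abs(t[0]), reverse=True)
    groups = []
    for coeff, pauli in terms:
        for group in groups:
            if all(qubit_wise_commute(pauli, q) for _, q in group):
                group.append((coeff, pauli))
                break
        else:
            groups.append([(coeff, pauli)])
    return groups

# Toy 2-qubit Hamiltonian H = 0.5 ZZ + 0.3 ZI + 0.2 XX: ZZ and ZI share a group, XX does not.
print(qwc_groups([(0.5, "ZZ"), (0.3, "ZI"), (0.2, "XX")]))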
§.§ Choice of α in parameter-shift rule
Here, we explain the choice of α=π/4 in the parameter shift rule (<ref>) in our numerical calculation.
The parameter shift rule is related to the fact <cit.> that the functional form of the Pauli expectation value with respect to θ_l is a trigonometric function when other circuit parameters remain constant:
in the simplest cases where the parameter θ_l in the ansatz |ψ(𝛉)⟩ is an angle of some Pauli rotation gate e^-i θ_l/2 Q_l satisfying Q_l^2=I, we have
⟨ P_j ⟩_𝛉 = a_l cos( θ_l - ϕ_l ) + c_l,
where a_l, ϕ_l and c_l are real coefficients depending on θ_1, ⋯, θ_l-1, θ_l+1,⋯, θ_N_θ.
The parameter shift rule can improve the signal-to-noise ratio when we take larger α.
For example, if we use small α like α = 0.01, the values P_j_ + αδ_l and P_j_ - αδ_l become almost the same so that a lot of measurement shots are required to estimate the gradient (∝P_j_ + αδ_l - P_j_ - αδ_l) with high accuracy.
This is why α=π/2, the largest α considering the periodicity of the function (<ref>), is typically used in the literature <cit.>.
In our numerical calculation, we observed that some Pauli terms exhibit expectation values approaching ⟨ P_j ⟩_𝛉 ≈ ± 1 in the late stages of optimization, or in the vicinity of the exact ground state.
For these Pauli terms, ⟨ P_j ⟩_𝛉 ± (π/2) δ_l sometimes becomes close to zero (e.g., when a_l=1, c_l=0 in (<ref>)) and its estimation by the joint Bell measurement requires a large number of shots (see Table <ref>).
Consequently, to strike a balance, we adopt α = π/4 for our numerical computations presented in Sec. <ref>.
In fact, for Pauli terms satisfying ⟨ P_j ⟩_𝛉 = ± 1, ⟨ P_j ⟩_𝛉 ± (π/4) δ_l is farther away from zero than ⟨ P_j ⟩_𝛉 ± (π/2) δ_l and its absolute value is guaranteed to be at least √(2)/2,
|⟨ P_j ⟩_𝛉 ± (π/4) δ_l| = |a_l| ·√(2)/2 + |c_l| ≥√(2)/2,
because ⟨ P_j ⟩_𝛉 = ± 1 together with |a_l| + |c_l| ≤ 1 (which follows from |⟨ P_j ⟩_𝛉| ≤ 1 for any 𝛉) forces |a_l| + |c_l| = 1.
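As a quick sanity check of this choice, the sketch below (assuming the single-parameter trigonometric form of Eq. (<ref>) with hypothetical coefficients a_l, ϕ_l, c_l, and one common form of the generalized shift rule) verifies numerically that the shifted-difference estimator reproduces the exact derivative for both α = π/2 and α = π/4:

import numpy as np

a_l, phi_l, c_l = 0.7, 0.3, 0.1   # hypothetical coefficients of the trigonometric form
expval = lambda th: a_l * np.cos(th - phi_l) + c_l

def param_shift_grad(th, alpha):
    # Generalized parameter-shift estimator; exact for the cosine form for any 0 < alpha < pi.
    return (expval(th + alpha) - expval(th - alpha)) / (2.0 * np.sin(alpha))

theta = 1.2
exact = -a_l * np.sin(theta - phi_l)
for alpha in (np.pi / 2, np.pi / 4):
    print(alpha, param_shift_grad(theta, alpha), exact)  # the two estimates match the exact derivative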
|
http://arxiv.org/abs/2307.01995v2
|
20230705025629
|
Dynamic Feature-based Deep Reinforcement Learning for Flow Control of Circular Cylinder with Sparse Surface Pressure Sensing
|
[
"Qiulei Wang",
"Lei Yan",
"Gang Hu",
"Wenli Chen",
"Bernd R. Noack"
] |
cs.LG
|
[
"cs.LG",
"physics.flu-dyn"
] |
Dynamic Feature-based Deep Reinforcement Learning for Flow Control of Circular Cylinder with Sparse Surface Pressure Sensing
Qiulei Wang, Lei Yan, Gang Hu, Wenli Chen, Bernd R. Noack
============================================================================================================================
This study proposes a self-learning algorithm for closed-loop cylinder wake control targeting lower drag and lower lift fluctuations with the additional challenge of sparse sensor information, taking deep reinforcement learning as the starting point. DRL performance is significantly improved by lifting the sensor signals to dynamic features (DF), which predict future flow states. The resulting dynamic feature-based DRL (DF-DRL) automatically learns a feedback control in the plant without a dynamic model. Results show that the drag reduction of the DF-DRL model is 25% higher than that of the vanilla model based on direct sensor feedback. More importantly, using only one surface pressure sensor, DF-DRL can reduce the drag coefficient by about 8% at Re = 100, matching the state-of-the-art performance, and significantly mitigate lift coefficient fluctuations. Hence, DF-DRL allows the deployment of sparse sensing of the flow without degrading the control performance. The method also shows good robustness in controlling the flow at higher Reynolds numbers, reducing the drag coefficient by 32.2% and 46.55% at Re = 500 and 1000, respectively, indicating the broad applicability of the method. Since surface pressure information is more straightforward to measure in realistic scenarios than flow velocity information, this study provides a valuable reference for experimentally designing the active flow control of a circular cylinder based on wall pressure signals, which is an essential step toward further developing intelligent control in realistic multi-input multi-output (MIMO) systems.
§ INTRODUCTION
Flow control has been a popular research area of great academic and industrial interest; it can be divided into passive flow control and active flow control on the basis of whether external energy input is necessary. Passive control has the advantages of requiring no energy, being easy to set up, and having low cost, but if the actual flow differs from the expected one, it often fails to achieve the best effect. Active control can be divided into open-loop control and closed-loop control according to whether it is necessary to obtain feedback information from the flow field and adjust the flux of the actuator <cit.>. It has been found that, compared with open-loop control, closed-loop active control has a robust adaptive ability and can exploit the full potential of the actuator with only a small energy input. For example, <cit.> presented a sequence of experiments on the flow around a cylinder, which included MSBC (Moving Surfaces Boundary Control). This method injects momentum into the boundary layer of the cylinder by using two rotating cylinder modules, thereby delaying separation and preventing vortex shedding. Nevertheless, the complexity of the nonlinear Navier-Stokes equations leads to high-dimensional, multimodal flow fields, making it challenging to devise effective real-time closed-loop active flow control procedures.
In recent years, machine learning has made significant advances, and active flow control is becoming more effective and intelligent <cit.>. One of the earliest machine learning techniques applied in this field was genetic programming (GP). GP uses a population of computer programs as potential solutions to a problem. The programs evolve using genetic operators like mutation and crossover, and the fittest ones produce the next generation of solutions. <cit.> applied GP to search explicit control laws for reducing the recirculation zone behind a backward-facing step. <cit.> applied the linear GP to control the dynamics of a turbulent jet and discovered novel wake patterns. <cit.> adopted GP-identified control laws to successfully suppress vortex-induced vibrations in a numerical simulation environment. <cit.> demonstrated that many techniques from the ML family can be applied to AFC tasks, from GP to Bayesian Optimization (BO), LIPschitz global Optimization (LIPO), and Reinforcement Learning (RL), and that these methods have trade-offs relatively to each others.
Artificial neural networks (ANNs) can also be trained to learn complex patterns and relationships in fluid dynamics data and to generate control strategies that optimize fluid manipulation, which can be used for various tasks, including predicting fluid flow patterns, controlling robotic arms that manipulate fluids, and optimizing the design of microfluidic devices. <cit.> applied an adaptive controller based on a neural network for turbulent channel flow, demonstrating a simple control scheme that reduced skin friction by up to 20% and produced an optimum wall blowing and suction proportional to a local sum of wall-shear stress.
With the rapid development of deep reinforcement learning (DRL), which is effective at interacting with complex nonlinear environments, has brought new ideas to the above flow control problems. Previous studies have shown that deep reinforcement learning can effectively acquire control strategies in high-dimensional, non-linear, and other complex environments. Suppose that deep reinforcement learning is employed to interact with a flow control environment, in such a scenario, it is essential for the closed-loop flow control method to establish the control law based on the learned strategy after continuous trial and error and adjustment of the optimization strategy. For example, <cit.> introduced DRL to active flow control for the first time by applying deep reinforcement learning to blunt body drag reduction at Re = 100 and successfully demonstrated a closed-loop active control strategy that could achieve stable drag reduction of about 8% by using proximal strategy optimization (PPO) method. In this study, the velocity measured by 151 sensors located around the cylinder and in the downstream flow field (each sensor collects both the flow lateral velocity) is used as the feedback signal. Besides, <cit.> employed the S-PPO-CMA method, a novel DRL algorithm, to optimize the sensor position and investigate the efficiency and robustness of the identified control strategy. The algorithm is proposed to optimize the sensor layout and reduce the number of sensors while keeping state-of-the-art performance and successfully acquired a closed-loop active control strategy with stable drag reduction of about 18.4% at Re = 120. From a more technical viewpoint, the use of physics-inspired reward functions has been demonstrated by <cit.>, while speedup of the training process was demonstrated using several parallel simulations in <cit.>. In order to investigate higher Reynolds numbers, <cit.> applied the Lattice-Boltzmann Method (LBM) to establish a CFD environment with weak turbulence conditions and a Reynolds number of up to 1000 was effectively controlled. Similar to <cit.> study, the jet actuators were deployed on the lower and upper sides of the cylinder. The results show that the DRL agent could find an effective feedback law and achieve a drag reduction of more than 30%. Applications in even more chaotic conditions, corresponding to 2D cylinder at a Reynolds number Re = 2000, have recently been presented by <cit.>, highlighting that DRL controllers can learn drastically different control laws as the underlying dominating physics are changed. In order to optimize the sensor layout, sensitivity analysis was conducted. In another study, <cit.> placed four synthetic jets on the lower and upper sides symmetrically for active flow control of the cylinder. In <cit.>'s study, two small rotating cylinders were placed obliquely behind the main cylinder at a Reynolds number of 240. The rotational speed of the small cylinders was controlled by a DRL agent. This experimental setup aimed to investigate the potential of wake stabilization using DRL-controlled rotating control cylinders. The findings of the study were later confirmed by <cit.>, who experimentally verified the effectiveness and feasibility of this approach. In addition to its application in the field of AFC, researchers have also aimed to utilize DRL approach to achieve other objectives. 
These objectives include reducing the energy expenditure of the follower <cit.>, mitigating vortex-induced vibration <cit.>, shape optimization <cit.>, or the control of turbulent channel flows <cit.>. As the field of DRL applications for fluid mechanics is evolving fast, we refer the reader curious of more details to any of the recent reviews on the topic, i.e. <cit.>.
Most of the aforementioned studies have collected state information using a large number of velocity sensors in the wake region, which poses significant challenges for practical structural flow fields. For instance, in the case of vehicles and high-rise buildings, it would be more convenient, and easier to maintain, to deploy surface pressure sensors. However, compared to states measured in the wake region, the pressure on the surface of the structure may carry insufficient feature information, making it difficult for the DRL agent to estimate the state of the entire flow field. This results in typical reinforcement learning methods being unable to learn effective control strategies. Based on this fact, we introduce a dynamic feature (DF) lifting approach into deep reinforcement learning and propose the DF-DRL method. For the flow around a cylinder, this method significantly enhances the convergence of the DRL algorithm, enabling it to achieve a drag reduction almost identical to the benchmark (147 velocity sensors deployed in the wake region) while using 99.3% fewer sensors.
The novelty and main contributions of the present study are listed as follows:
* A critical and novel enabler is to lift the sensor signals into dynamic features as input to the actor. Dynamic features contain enough information to predict future states. Dynamic data-driven models of wake flows have, for instance, been built from two filtered pressure signals <cit.> and time derivatives of the lift coefficient <cit.>. Here, time-delay coordinates of surface pressure signals are used <cit.>.
* The resulting dynamic feature-based deep reinforcement learning (DF-DRL) method improves DRL convergence and reduces the lift and drag coefficients under sparse sensing of the flow state. The results show that DF-DRL further reduces the drag coefficient by 2.6% compared to the vanilla approach.
* A comprehensive study of the distribution of the pressure sensors is conducted. In contrast to existing studies on active flow control with deep reinforcement learning, where the sensors are arranged in the wake region, the present study focuses on surface pressure sensors and investigates their number and layout in detail. We observe that in a low-Reynolds-number active flow control scenario, a single surface pressure sensor can achieve an excellent control effect, comparable to that obtained with 147 sensors located in the wake region.
* To validate the robustness of the DF-DRL method, two additional inflow situations are considered, Re = 500 and 1000. The results indicate that even in a much more complicated scenario with a higher Reynolds number, the DF-DRL agent with sparse surface pressure sensing is capable of controlling the wake development behind a circular cylinder.
In the present study, we utilize the DRLinFluids package <cit.> to train a DRL agent and execute the interactions. The package leverages the Tensorforce <cit.> and Tianshou <cit.> packages to provide DRL algorithm libraries, and OpenFOAM <cit.> as the CFD interaction environment. Firstly, we compare the performance of the vanilla DRL and DF-DRL based plants on a benchmark case study of flow around a circular cylinder at a Reynolds number of 100. Subsequently, we vary the number of surface pressure sensors to validate the effectiveness of the proposed method. Finally, we train a DF-DRL agent with a single surface pressure sensor and deploy it to the flow at higher Reynolds numbers of Re = 500 and 1000 to illustrate the robustness of the approach.
§ ACTIVE FLOW CONTROL SYSTEM WITH DRL-BASED JET ACTUATORS
The present section is partitioned into two components: (1) an illustration of the DRL algorithm, especially for the Soft Actor-Critic (SAC) method, which will be used as the DRL part in the whole study; (2) a detailed introduction of the dynamic feature-based DRL framework, including the dynamic feature lifting and the coupling with the flow simulation.
§.§ Deep reinforcement learning
Deep reinforcement learning (DRL) is a powerful method of optimal control based on a parameterized policy, commonly referred to as an agent, that learns through trial and error. In the context of computational fluid dynamics (CFD), the environment can be modeled as the flow over a circular cylinder. During the optimization procedure, the DRL agent interacts with this environment to generate experiences according to the current policy. These experiences are then cached in a buffer and used by the training algorithm to improve the policy. This iterative process is repeated until the agent can yield a control strategy that satisfies the desired performance criteria. Thus, DRL has the potential to revolutionize the field of fluid mechanics by enabling the discovery of previously unknown control strategies that can enhance the performance of fluid systems.
There are several types of DRL algorithms <cit.>. One of the most popular types of DRL algorithms is Q-learning <cit.>, which uses a neural network to approximate the optimal action-value function, and updates the network's weights using the Bellman equation to minimize the difference between the predicted and actual reward. Another type of algorithm is policy gradient methods <cit.>, which directly optimize the agent's policy to maximize the expected reward, and often use techniques like Monte Carlo sampling or trust region optimization. Actor-critic methods <cit.> combine the advantages of both Q-learning and policy gradient methods by simultaneously learning a value function and a policy. Another type of DRL algorithm is model-based reinforcement learning <cit.>, which involves learning a model of the environment dynamics and using it to plan actions. Model-based algorithms can be more sample-efficient than model-free algorithms like Q-learning, but require additional computational resources to learn and maintain the model. Referring to our previous work <cit.>, the SAC algorithm is a feasible choice and is selected in the following study.
The Soft Actor-Critic (SAC) method <cit.> is an actor-critic off-policy DRL algorithm that learns by leveraging a maximum entropy reinforcement learning algorithm. The agent's goal is to maximize entropy and prospective reward and reach the desired value while acting as randomly as possible. Since it is an off-policy algorithm, training can be performed efficiently with limited samples. The optimal policy can be formulated as
π^* = argmax_π(θ) 𝔼_(s_t, a_t) ∼ρ_π[ ∑_t ( R(s_t, a_t) + αH(π_θ(·| s_t)) ) ],
H(π_θ(·| s_t)) = -∑_a π_θ(a | s_t) logπ_θ(a | s_t),
where R(s_t, a_t) represents the reward for taking action a_t in state s_t, H(π_θ(·|s_t)) is the entropy of the policy π_θ at state s_t, and α is the temperature coefficient. The larger α is, the more uniform the distribution of the output actions becomes; as α→ 0, maximum entropy reinforcement learning degrades to standard reinforcement learning. The objective is to find the optimal policy π_θ that maximizes the expected reward while also maximizing entropy.
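For concreteness, the entropy term of Eq. (<ref>) can be evaluated as in the following toy sketch (a discrete action distribution with hypothetical probabilities; the actual SAC implementation works with continuous stochastic policies):

import numpy as np

def policy_entropy(probs):
    # H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s)
    probs = np.asarray(probs, dtype=float)
    return float(-np.sum(probs * np.log(probs)))

print(policy_entropy([0.25, 0.25, 0.25, 0.25]))  # uniform policy: maximal entropy (log 4)
print(policy_entropy([0.97, 0.01, 0.01, 0.01]))  # near-deterministic policy: low entropy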
The core idea behind the maximum entropy approach is to randomize the policy by distributing the probability of each action output widely, rather than concentrating on a single action. This approach enables the neural network to explore all possible optimal paths and avoid losing the essence of maximum entropy to a single action or trajectory. The resulting benefits include the following: (1) Learning policies that can serve as initializations for more complex tasks, as the policy learns multiple ways to solve a given task, making it more conducive to learning new tasks. (2) Strengthening the ability to explore, which makes it easier to identify better patterns under multimodal reward conditions. (3) Enhancing the robustness and generalization ability of the approach since the optimal possibilities are explored in different ways, making it easier to adjust in the presence of interference.
The entropy term in the SAC algorithm affects the policy's exploration in two important ways. First, the entropy term encourages the policy to take more exploratory actions by adding a penalty to the objective function for actions that have low probability under the current policy. This penalty is proportional to the negative entropy of the policy, which measures the degree of randomness or uncertainty in the actions selected by the policy. By minimizing this penalty, the policy is incentivized to explore more widely and try out new actions that may lead to higher rewards. Second, the entropy term also helps prevent the policy from becoming too deterministic, which can limit its ability to adapt to changes in the environment or learn new behaviors. By adding an entropy term to the objective function, SAC encourages the policy to maintain a balance between exploration and exploitation, rather than becoming overly focused on a single optimal action. This can be particularly important in environments with multiple suboptimal solutions or where the optimal solution may change over time. In summary, the entropy term in SAC encourages exploration by penalizing the policy for taking low-probability actions, and helps prevent the policy from becoming too deterministic by promoting a balance between exploration and exploitation. This can lead to better performance and more robust learning in complex, dynamic environments.
The SAC algorithm used in this study employs a maximum entropy target as its optimization objective, which has been shown to enhance the algorithm's exploration properties and robustness. Effective exploration is achieved by maximizing information entropy, which spreads the probability over the possible action outputs rather than concentrating it on a single action; the uniform strategy, for instance, is a high-entropy strategy. The robustness of the algorithm, on the other hand, is reflected in its ability to generate alternative action outputs when faced with environmental noise, whereas a greedy strategy may render the agent ineffective because of the determinism of its actions. The SAC algorithm keeps a non-degenerate probability for every action, so that when the environment is perturbed by noise the agent can still produce alternative action outputs without failure. From these perspectives, the SAC algorithm is highly suitable for the present study, which applies active flow control (AFC) based on surface pressure time series. For a more detailed description of the SAC algorithm, please refer to <cit.>.
The present work employs a closed-loop control framework for the AFC task described in Sec. <ref>, as depicted in Fig. <ref>. The framework comprises two main components: the environment and the DRL agent (critic and actor in the case of the SAC algorithm used). The flow around a circular cylinder is simulated by OpenFOAM as described in Sec. <ref>. The flow velocity or pressure measured by specific sensors is collected as the state provided to the agent. Following the sensor setup of <cit.>, a total of 147 sensors can capture sufficient flow information for control policy learning, and this configuration is adopted as the baseline. The DRLinFluids package <cit.> is used to handle the interaction between the DRL agent and the CFD simulation, and Tianshou <cit.> is employed as the DRL algorithm backend. The DRL policy network consists of two dense layers, each with 512 fully connected neurons. The input layer receives data from the pressure sensors, and the output layer gives the jet velocity. The time interval between successive steps is set to 7.5% of the vortex-shedding period of the cylinder without actuation. The SAC agent interacts with the environment and updates the ANN parameters every 50 steps. The process is repeated three times with the same hyperparameters to ensure the stability and validity of the training. To save training time and provide a consistent starting point, an uncontrolled case is simulated in advance until it reaches a stable state; the flow field is then stored and used as the initialization for the following DRL training stage.
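A minimal sketch of such a policy network (PyTorch is used here only for illustration; the layer sizes follow the description above, while the output bound and variable names are assumptions rather than the DRLinFluids implementation):

import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    # Two dense hidden layers of 512 units; input: flattened sensor state,
    # output: jet velocity bounded by tanh (a sketch, not the actual code).
    def __init__(self, state_dim, action_dim=1, max_jet=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, action_dim), nn.Tanh(),
        )
        self.max_jet = max_jet

    def forward(self, state):
        return self.max_jet * self.net(state)

policy = PolicyNet(state_dim=31 * 2)        # e.g. a flattened (M+1) x (i+j) lifted state
action = policy(torch.zeros(1, 31 * 2))
print(action.shape)                         # torch.Size([1, 1]): one jet velocity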
To avoid non-physical abrupt changes in pressure and velocity resulting from the use of incompressible CFD algorithms, a continuous-time approach is adopted for the control mechanism. The control for each jet is determined at every time step during the simulation and smoothed to obtain a continuous control signal over time. An appropriate interpolation method is crucial to serialize the received time-discretized control signal for this system. Hence, we smooth the jet actuation to ensure a continuous change in the control signal without excessive lift fluctuations due to sudden changes in jet velocity. Based on the interpolation functions demonstrated by Tang<cit.>, the control action is set to change as follows:
V_Γ_i(t) = V_Γ_i(t-1)+ α[a - V_Γ_i(t-1)], i=1, 2,
where α = 0.1 is a numerical parameter determined by trial and error, V_Γ_i(t) and V_Γ_i(t-1) is the jet flow velocity used at the non-dimensional times t and t-1 respectively, and a is one jet flow velocity in an agent step, i.e. the action generated by the DRL agent.
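A minimal sketch of this first-order smoothing (variable names are illustrative and not taken from the DRLinFluids code):

def smooth_jet_velocity(v_prev, action, alpha=0.1):
    # V(t) = V(t-1) + alpha * (a - V(t-1)): exponential relaxation of the jet
    # velocity towards the agent action, avoiding non-physical jumps.
    return v_prev + alpha * (action - v_prev)

v = 0.0
for action in [1.0, 1.0, -0.5]:   # hypothetical agent actions at successive steps
    v = smooth_jet_velocity(v, action)
    print(v)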
Flow control of circular cylinders is a highly popular topic in both academia and industry. The aim here is to reduce or eliminate the drag and lift forces using advanced reinforcement learning techniques. This objective is encoded in a reward function that combines the drag and lift coefficients, set as follows:
R_t = (C_D)_Baseline - ⟨ C_D^t⟩_T - 0.1 ·|⟨ C_L^t⟩ _T|,
where (C_D)_Baseline is the mean drag coefficient of the circular cylinder without flow control, C_D^t and C_L^t denote the instantaneous drag and lift coefficients at time t, respectively, and ⟨·⟩ _T indicates a sliding average back in time over a duration corresponding to one jet flow control period T with active flow control.
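A sketch of how this reward could be evaluated from recorded force-coefficient histories (the baseline value and window contents below are placeholders, not simulation data):

import numpy as np

def reward(cd_window, cl_window, cd_baseline):
    # R_t = (C_D)_Baseline - <C_D>_T - 0.1 * |<C_L>_T|,
    # with <.>_T the sliding average over the last control period T.
    return cd_baseline - np.mean(cd_window) - 0.1 * abs(np.mean(cl_window))

print(reward(np.array([3.10, 3.05, 2.98]), np.array([0.4, -0.2, 0.1]), cd_baseline=3.2))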
§.§ Active flow control with dynamic feature-based DRL enhancement
For DRL tasks, the agent needs to perceive the state of the environment and produce corresponding actions. Deep reinforcement learning has long emphasized designing an appropriate reward function while neglecting the modeling of the environmental state. This may be related to the different types of learning tasks. For example, in the fields of robotics <cit.> or autonomous driving <cit.>, the DRL state is almost invariably a temporal image sequence, assisted by commonly used feature extraction techniques such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks. However, in the field of flow control, especially for structural drag reduction tasks in the real world (such as vehicles and high-rise buildings), it is often difficult to observe or model the velocity or pressure of the entire flow field of the complex dynamic system, which makes the perception of the environmental state itself a challenge. This problem can be formulated as estimating the full flow field from sparse sensor data, which relates to pattern recognition and reduced-order modelling, long-standing topics in fluid mechanics research.
CFD numerical simulations provide the opportunity to collect space-time-resolved data within the considered computational domain. However, in the real world, it is often difficult to obtain a comprehensive view of the flow field, which means only a limited number of time-resolved sensor measurements s are accessible. The objective of this study is to demonstrate how recent advancements in system identification and machine learning can be utilized to construct reduced-order models directly from these sparse sensor measurements. To achieve this, we simulate experimental conditions using direct numerical simulations, and focus on a single sensor measurement represented by
s( t) :=p( t;ℒ),
where p is the surface pressure. The measurement vector s, generally, can comprise various measurements such as the lift and drag coefficients, pressure measurements on a cylinder, or velocity field measurements at specific locations, e.g., wake region. However, for the scope of this study, the pressure alone is deemed adequate to characterize the flow according to the results shown in <ref>.
Given the sensor measurements s, our objective is to develop an effective flow state estimation from which a DRL agent can extract sufficient information. However, the raw signals may not be ideal for this purpose, and an augmentation, or dynamic feature lifting, that incorporates functions of the sensor measurements is required. In this regard, we define the augmented state S as a feature vector that encompasses such functions:
S=g(s)
There exist numerous options for the mapping function g, which can enhance sensor measurements and improve model accuracy. If the sensors are adequate to determine the system state, the identity map can be utilized as g, which means S=s. Alternatively, when the measurements provide high-dimensional snapshots, g can leverage Proper Orthogonal Decomposition (POD) mode coefficients. <cit.> and <cit.> use delay embedding technology to augment the measurements, resulting in a sufficiently high-dimensional feature vector that fully characterizes the system dynamics. The task of selecting an effective transformation function g is a critical unresolved issue that is relevant to both representation theory and the Koopman operator viewpoint on dynamical systems. Both <cit.> and <cit.> are actively engaged in investigating this problem. In this study, we choose g to augment the sensor measurement with its time derivative, while appropriately scaling the augmented measurement. Furthermore, <cit.> propose a comprehensive sparse reduced-order modelling for flow full-state estimation, which includes time-resolved sensor data and optional non-time-resolved particle image velocimetry (PIV) snapshots.
Inspired by the aforementioned facts, we present a novel approach, named dynamic feature-based DRL, to overcome the limitations of measurements in the real world, highlighting the potential of deep reinforcement learning techniques with sparse surface pressure sensing. An augmentation function g applied at time t lifts the sensor signals into a higher-dimensional dynamic feature space, which can be expressed as
S_t = ( [ α s_1^t-M ⋯ α s_i^t-M β a_1^t-M ⋯ β a_j^t-M; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; α s_1^t ⋯ α s_i^t β a_1^t ⋯ β a_j^t ]) ∈ℝ^(M+1)× (i+j),
where a is the agent action at time t, M is the number of backtracking time steps (set to 30 in this study), i and j denote the numbers of sensors and actuators, respectively, and α and β are the corresponding scaling factors.
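The assembly of such an augmented state from ring buffers of past sensor readings and actions can be sketched as follows (a simplified illustration; the scaling factors and buffer handling of the actual DF-DRL implementation may differ):

import numpy as np
from collections import deque

M = 30                          # number of backtracking time steps
n_sensors, n_jets = 1, 1
alpha_s, beta_a = 100.0, 1.0    # hypothetical scaling factors for sensors and actions

sensor_hist = deque([np.zeros(n_sensors)] * (M + 1), maxlen=M + 1)
action_hist = deque([np.zeros(n_jets)] * (M + 1), maxlen=M + 1)

def lifted_state():
    # Stack the last M+1 scaled sensor readings and actions into the
    # (M+1) x (i+j) dynamic-feature matrix S_t.
    s = alpha_s * np.stack(list(sensor_hist))   # shape (M+1, n_sensors)
    a = beta_a * np.stack(list(action_hist))    # shape (M+1, n_jets)
    return np.hstack([s, a])                    # shape (M+1, n_sensors + n_jets)

# At every agent step: push the newest measurement and action, then read the state.
sensor_hist.append(np.array([-0.012]))          # hypothetical surface pressure reading
action_hist.append(np.array([0.05]))            # hypothetical jet velocity
S_t = lifted_state()
print(S_t.shape)                                # (31, 2)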
In past studies, a common practice has been to use a single snapshot of the flow field as the state, e.g., the pressure data of four sensors at one time step, as input to the policy network. This is illustrated by the upper panel of Fig. <ref>. By contrast, the DF-DRL method assembles the pressure measurements of the 30 previous action time steps, resulting in an augmented agent state. The dynamic feature lifting within the DF-DRL method is illustrated in the lower panel of Fig. <ref>. It is expected (and, in fact, confirmed in <ref>) that the policy is improved by such dynamic feature lifting of the input data. A possible concern is that this substantially increases the dimensionality of the state input to the ANN, since it becomes a two-dimensional array with one dimension corresponding to the sensor index and the other to the time-series index. In particular, sensors on the surface observe lower-magnitude variations in flow velocity and pressure than sensors in the wake, and cannot directly observe changes in the wake and the shedding of cylinder vortices. Therefore, the DF-DRL method is most beneficial for the AFC training process with few surface sensors. Besides, it is vital to standardize the input of each sensing time series individually. In particular, the surface pressure observations must be normalized so that their fluctuations are well perceived even though they have a very different dynamic range compared with sensors in the wake region.
§ NUMERICAL PLANT: LAMINAR FLOW AROUND CIRCULAR CYLINDER
In this paper, we choose the classic benchmark proposed by <cit.>, laminar flow around a cylinder, and place jets symmetrically arranged on both lateral sides as actuators for active flow control. The objective is to reduce the drag force and lift fluctuation on the cylinder. Firstly, we formalize the research problem in <ref>. Then, we provide a detailed description of the flow configuration and numerical solution methods in <ref>, followed by a validation of the accuracy of the numerical algorithms. Finally, we define three different types of sensor installation methods in <ref>.
§.§ Problem formalization
The active flow control task formulated in this study aims to find a real-time control policy π for two jet actuators located on a circular cylinder with sensor feedback, which can effectively reduce the fluid forces on the cylinder. Generally, the surface pressure information s_t can be regarded as the input state of the control policy π, and the jet intensity can be viewed as the DRL action a_t at time t. The action is decided by the DRL controller based on the state observation. Therefore, the control process can be modeled as a deterministic or stochastic relationship:
a_t ∼ π ( ·| s_t ; θ ) .
Hence, given a deep reinforcement learning agent with the control policy π, the objective is to minimize the lift and drag coefficients of the cylinder by optimizing the set of weights θ of the DRL agent policy network:
π ^* =π( θ ^*) ,
θ ^* = argmax_θ 𝔼_( s_t ,a_t) ∼ρ _π ( θ )𝒯( s_t ,a_t),
where the superscript * represents the optimal value, 𝔼 is the expected value operator, and 𝒯 denotes a target function, which represents the current policy π.
§.§ Flow configuration and numerical method
In this work, we use the open-source computational fluid dynamics software package OpenFOAM (Open-source Field Operation And Manipulation) developed by the OpenFOAM Foundation to perform simulations. Under the assumption of incompressible viscous flow, the governing Navier-Stokes equations can be expressed in a non-dimensional manner as:
∂u/∂ t+(u·∇)u=-∇ p+Re^-1Δu,
∇·u=0,
Re = U̅D/ν,
where u is the non-dimensional velocity, t is the non-dimensionless time, p is the non-dimensional pressure, ν is the kinematic viscosity of the fluid, and U̅ is the mean velocity at the inlet. The corresponding Reynolds number Re is 100 in the training stage.
This study focuses on two-dimensional simulations of flow around a circular cylinder with a diameter D, which is the characteristic length scale. The computational domain has dimensions of L = 22D and B = 4.1D in the streamwise and cross-stream directions, respectively, as shown in Fig. <ref>. Following the widely recognized benchmark conducted by <cit.>, the cylinder is slightly off-center to induce faster development of the vortex shedding alley during the initial simulation convergence stage. The outlet boundary is placed 19.5D downstream of the cylinder to allow the wake to fully develop.
The inlet boundary denoted as Γ_in is subject to the parabolic velocity inlet boundary condition. The no-slip constraint, Γ_W, is applied to both the top and bottom of the channel and the surfaces of the cylinder. Additionally, the right boundary of the channel, Γ_out, is designated as a pressure outlet, wherein zero velocity gradient and constant pressure are maintained. The inlet boundary is assigned as a parabolic velocity form and expressed as the following in the streamwise direction:
U(0, y)=4 U_m y(H-y) / H^2,
where U_m is the maximum inflow velocity at the middle of the channel. Employing a parabolic inflow profile, U_m is 1.5 times the mean velocity U̅, as defined by:
U̅=1/H∫_0^H U(y) d y=2/3 U_m .
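The profile and its mean can be checked with a few lines (illustrative values; H is taken here as the channel height in units of D and U_m as the peak inlet velocity):

import numpy as np

H, U_m = 4.1, 1.5
y = np.linspace(0.0, H, 2001)
U = 4.0 * U_m * y * (H - y) / H**2      # parabolic inlet profile U(0, y)
print(U.mean(), 2.0 * U_m / 3.0)        # the mean velocity approximately recovers (2/3) U_m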
To accomplish active flow control, two jet actuators (Γ_1 and Γ_2) located on opposite sides of the cylinder are employed. A parabolic velocity profile with a jet width of ω = 10^∘ is imposed at both jets, as depicted in Fig. <ref>. Because the jet velocity is orthogonal to the inflow direction, drag reduction is achieved strictly by effective actuation rather than by momentum injection. Moreover, the jet flow on both sides is constrained as a synthetic jet flow, i.e., V_Γ_1 = V_Γ_2, so that the jets collectively neither add nor remove mass from the flow.
In the current study, unstructured meshes are adopted for computational fluid dynamics (CFD) simulations. Emphasis has been laid on refining the mesh around the surface boundary and the wake flow regions, as these are crucial for ensuring the appropriate resolution of these significant flow domains and of the physics happening there. The numerical solution is obtained at each time step, and the drag (F_D) and lift (F_L) forces are computed by integrating over the cylinder surface, following:
F=∮_Γ(σ·n) ·e_j dΓ,
where σ is the Cauchy stress tensor, the unit vector n is defined as normal to the cylinder surface, while e_j is denoted as a unit vector in the direction of the inflow velocity for drag force calculations, and as a vector perpendicular to the inflow velocity for lift force calculations.
C_D=F_D/1/2ρU̅^2 D,
C_L=F_L/1/2ρU̅^2 D,
where F_D and F_L are denoted as integral drag and lift force, respectively.
To further validate the accuracy of the CFD simulations, a series of mesh convergence studies are performed at a Reynolds number Re = 100. In particular, meshes of three different resolutions are employed, and the corresponding results for the maximum values of the drag coefficient C_D and lift coefficient C_L, denoted as C_D^max and C_L^max, respectively, are reported in Table <ref>. The numerical analysis reveals that the discrepancies among various mesh resolutions are insignificant. Considering the trade-off between computational cost and numerical accuracy, the meshing scheme of Grid II is preferred for the DRL training stage.
§.§ Layout of surface pressure sensors
In the present study, a series of pressure sensor layout schemes is proposed to study the influence of the sensor locations. First, a baseline configuration proposed by <cit.>, with 147 pressure sensors placed both around the cylinder and in the wake region, is set up, as shown in Fig. <ref>a. Then, a varying number of pressure sensors (4, 8, and 24) are symmetrically arranged on the surface of the cylinder (along the direction of the inflow). To avoid the poorly informative region with small pressure fluctuations at the front of the cylinder, the sensors are uniformly distributed with the exception of the point at the front, as shown in Fig. <ref>b. Finally, a comprehensive study is carried out for a single sensor. The single sensor is first placed at the front of the cylinder, θ = 0^∘ relative to the incoming flow, and its angular position on the cylinder is then increased in increments of 15^∘ until it reaches the rear edge of the cylinder (θ = 180^∘), as shown in Fig. <ref>c and Fig. <ref>c. As a consequence, a total of 13 single-pressure-sensor positions are investigated. One could expect that pressure sensors on the surface of the cylinder provide valuable information about the flow to the DRL controller. However, using surface sensors like those shown in Fig. <ref>b and c presents a challenge due to the limited amount of data provided and the placement being solely on the surface of the cylinder. This results in a lack of information regarding the cylinder wake and vortex shedding pattern during the DRL training stage.
In order to facilitate the description in the next sections, the notation ℒ is used to describe the different sensor layout schemes. The subscript indicates the sensor layout type: ℒ_1 represents the baseline configuration with 147 sensors placed in the flow field, ℒ_2 represents sensor configurations placed on the cylinder surface, and ℒ_3 represents a single-sensor configuration placed on the cylinder surface. For type ℒ_2^N, the superscript N indicates the number of sensors, and the polar coordinates of sensor i can be expressed as
r_i = 1/2D, θ_i = 2π i/(N+1), i=1,2,…,N,
while for type ℒ_3^θ, the superscript θ indicates the angular position of the sensor in a polar coordinate system whose polar axis points opposite to the inflow direction, so that the leading edge of the cylinder corresponds to the 0^∘ point, as illustrated in Figure <ref>.
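Both parameterized layout families can be generated as follows (a sketch; the Cartesian axes, the cylinder centered at the origin, and the angular origin at the leading edge are illustrative conventions):

import numpy as np

def layout_L2(N, D=1.0):
    # Scheme L_2^N: N sensors evenly spaced on the cylinder surface,
    # omitting the leading-edge slot: theta_i = 2*pi*i/(N+1), i = 1..N.
    i = np.arange(1, N + 1)
    theta = 2.0 * np.pi * i / (N + 1)
    return 0.5 * D * np.cos(theta), 0.5 * D * np.sin(theta)

def layout_L3(theta_deg, D=1.0):
    # Scheme L_3^theta: a single sensor at angle theta (degrees) from the leading edge.
    theta = np.deg2rad(theta_deg)
    return 0.5 * D * np.cos(theta), 0.5 * D * np.sin(theta)

print(layout_L2(4))     # four-sensor surface layout
print(layout_L3(150))   # single sensor at 150 degrees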
§ RESULTS AND DISCUSSION
In this section, we first evaluate the performance and reliability of the proposed DF-DRL approach in <ref>, comparing it with a vanilla DRL algorithm. Then, the impact of different numbers of surface sensors and of the layout of a single surface sensor on the flow control performance is investigated in <ref>. Furthermore, we verify the robustness of the DF-DRL controllers at two higher Reynolds numbers, Re = 500 and Re = 1000, in <ref>.
§.§ DRL-based AFC with sparse surface pressure sensing
To evaluate the effectiveness and reliability of the DF-DRL approach, the sensor layouts ℒ_1 and ℒ_2^4 are selected for illustration, as depicted in Fig. <ref>a and b, respectively. Figure <ref> shows the learning curves for active flow control using both vanilla DRL and DF-DRL techniques under these two sensor configurations (147 and 4 sensors). The three subplots a, b, and c correspond to the mean drag coefficient, the reward, and the standard deviation (std) of the lift coefficient, respectively.
The results indicate significant differences between these four cases. Using the vanilla DRL algorithm, a maximum drag reduction of 8% is achieved with 147 sensors (scheme ℒ_1), with a maximum reward value of 19.13. Moreover, the standard deviation of the lift coefficient is reduced to 0.15 once learning has converged. For the case of 4 sensors with the vanilla DRL method (scheme ℒ_2^4), the maximum drag reduction is only 6.4%, and the std of the lift coefficient decreases only to 0.29, as shown in Fig. <ref>. These two cases show that with few surface sensors the performance of active flow control deteriorates, because the limited observable data prevent the DRL agent from correctly estimating the flow field.
As for the results using the DF-DRL method, both ℒ_1 and ℒ_2^4 achieve similar drag reduction amplitudes of 8%, corresponding to the maximum drag reduction also observed using the 147 pressure sensors. The latter approach, however, achieves a higher reward value, as indicated by the decrease in the standard deviation of the lift coefficient due to the lift penalty term within the reward function. This improvement in performance occurs even though both the ℒ_1 and ℒ_2^4 approaches undergo a temporary rebound in their learning. Therefore, when utilizing the DF-DRL method, fewer sensors, as used in scheme ℒ_2^4, can achieve the same drag reduction as scheme ℒ_1, which is 25% better than the vanilla model based on direct sensor feedback, while also improving the reduction of lift fluctuations. These observations suggest that the DF-DRL method can maintain the drag-reduction performance while reducing the number of sensors required. The difference in reward observed between the two cases can be attributed to the reduction in the standard deviation of the lift coefficient achieved in scheme ℒ_2^4.
For scheme ℒ_1, with a large number of sensors distributed around the cylinder and in the wake region, the DRL controller already obtains exhaustive flow field information, and thus the inclusion of historical data by the DF-DRL method has a minor impact. However, for scheme ℒ_2^4, with a limited number of sensors and sparse information on the cylinder surface, there is not enough information available to perform effective flow control without the DF-DRL method. These results suggest, unsurprisingly, that the use of more sensors leads to a better drag coefficient reduction and a more stable reward convergence with a vanilla DRL agent. Moreover, when the flow field information is limited in the quantity and placement of the sensors (due to physical restrictions), the DF-DRL method demonstrates better convergence and yields a superior control policy.
§.§ Control performance and learning convergence with DF-DRL method
To further investigate the impact of the sensor quantity and placement azimuth on the control effectiveness and convergence of the DF-DRL based controller, we conduct case studies with different numbers of sensors (1, 4, 8, 12, 24, and 36) as well as various placement azimuths from 0^∘ to 180^∘ with a 15^∘ spacing.
§.§.§ Sensor quantity
Five typical surface pressure sensor layout schemes of type ℒ_2 are investigated in this section. The arrangement of these sensors is depicted in Fig. <ref>b, where the sensors are evenly distributed except that the leading-edge sensor is removed, as described in <ref>. Since the state includes the jet actions as a component, the pressure sensors located near the jets do not significantly impact the results.
The impact of adding more surface pressure sensors on training performance is illustrated in Fig. <ref>. Results show that increasing the number of sensors does not lead to a significant improvement in drag and lift coefficient reduction, which remains around 8% across all schemes. Additionally, all cases converge at approximately 200 episodes. Figure <ref>b displays learning curves that follow the same trend as the drag coefficient, indicating that the final DF-DRL performance is very similar to the benchmark case for different numbers of pressure sensors on the surface of the cylinder.
As depicted in Fig. <ref>c, the standard deviation of the lift exhibits a comparable declining pattern to that observed in the previous section. Following an initial decrease and temporary increase at around episode 200, all five groups undergo a consistent decrease until the completion of DF-DRL training.
This result can be explained by the fact that the pressure on the cylinder surface, as a surrogate for the lift, is a better dynamic feature than wake measurements, where varying vortex shedding destroys the phase relationship. The pressure on the cylinder surface provides a more accurate representation of the dynamic behavior of the flow, leading to a better understanding of the underlying physics. This conclusion is in agreement with previous work by <cit.>. The use of pressure as a dynamic feature highlights the potential of feature-based approaches for reduced-order modeling of fluid flows. Given that the disparities in drag coefficient and lift fluctuation reduction among the experiments are not substantial, and to further demonstrate the potential of using surface pressure as a dynamic feature for a nonlinear system in combination with DRL, we reduce the number of sensors to one in the next section.
§.§.§ Placement azimuth
Based on the results above, it is apparent that increasing the number of sensors utilized in the DF-DRL based AFC task does not necessarily result in better performance in terms of drag and lift reduction. To explore the maximum performance potential of DF-DRL and obtain the optimal sensor layout, the single-sensor schemes ℒ_3 are selected in the following study, as illustrated in Fig. <ref>c. Owing to the symmetry of the geometry and of the boundary conditions, this study only considers deploying sensors on the upper semicircle for training and analysis purposes. The cylinder surface features a single pressure sensor located every 15^∘, covering a range of 0^∘ to 180^∘, for a total of 13 configurations. The training is repeated three times for each case with the same hyperparameters to eliminate randomness. As shown in Fig. <ref>a, the mean drag coefficient C_D indicates that the AFC performance with only one pressure sensor is almost as good as that of the baseline scheme ℒ_1. The results suggest that scheme ℒ_3, a single sensor on the cylinder surface combined with the DF-DRL method, can reach the best active flow control performance.
It can also be observed from Fig. <ref>b that sensors near the trailing edge of the cylinder lead to a higher reward value than sensors near the leading edge, resulting in a lower mean drag coefficient. Furthermore, a sudden reduction in the drag coefficient can be observed between ℒ_3^75 and ℒ_3^90. This can be attributed to the influence of the jet actuator situated on the top side of the cylinder, where changes in the jet velocity can lead to abnormal pressure fluctuations on the surface at 90^∘ that confuse the DRL controller. Additionally, a significant jump in performance occurs between ℒ_3^90 and ℒ_3^150, characterized by a marked decrease in the mean drag coefficient and the lift coefficient. A decreasing trend can be observed in the standard deviation of the lift coefficient from ℒ_3^0 to ℒ_3^180, as depicted in Fig. <ref>c. This further emphasizes that a sensor located closer to the trailing edge of the cylinder can effectively provide information that can be used to mitigate the standard deviation of the lift coefficient.
To provide a more comprehensive explanation of this phenomenon, Fig. <ref> depicts the time-averaged vorticity field of the uncontrolled flow around the cylinder, with the red dots representing the 13 different single-sensor layout schemes. The sensors located at the trailing edge of the cylinder, namely ℒ_3^105 to ℒ_3^180, are positioned at the vortex shedding location, indicating that these pressure sensors contain crucial information about vortex shedding compared to those on the windward side of the cylinder. This observation explains why the trailing-edge pressure sensors outperform the leading-edge sensors in the overall training outcomes, including the reduction of the drag and lift coefficients.
To summarize, the general tendency of the mean drag coefficient, the reward, and the standard deviation of the lift coefficient is presented in Fig. <ref>d. Notably, a single sensor situated between 0^∘ and 180^∘ demonstrates near-optimal reduced drag and lift coefficients, corresponding to an elevated reward value. These results inform the training strategy for a single-sensor scheme ℒ_3 at higher Reynolds numbers.
§.§ Robustness of DF-DRL based plant under higher Reynolds number
Based on the promising active flow control performance demonstrated in <ref>, scheme ℒ_3^150 is chosen to investigate the robustness of a single surface sensor at higher Reynolds numbers (Re = 500 and 1000). The policy network architecture is kept the same as before, consisting of two dense layers of 512 fully connected neurons, with the input layer receiving data from a single pressure sensor and the output layer representing the jet velocity. As the vortex shedding frequency of the cylinder increases with the Reynolds number, the SAC agent interacts with the environment every 44 and 46 time steps at Re = 500 and 1000, respectively.
Figure <ref> shows the evolution of the mean drag coefficient, the reward value, and the lift coefficient obtained from three repeated training processes at Reynolds numbers of Re = 500 and 1000. After approximately 400 episodes at Re = 500 and 600 episodes at Re = 1000, the drag coefficient approaches convergence, demonstrating that a stable control strategy is achieved. Meanwhile, the reward curves gradually increase with each episode, while the standard deviation of the lift coefficient declines and then stabilizes. As described in Equation <ref>, the lift and drag coefficients both enter as first-order terms, where drag has a weight of 1 and lift has a weight of 0.1. For the deep reinforcement learning agent, this implies that the reduction of drag yields a higher reward, and once the drag reduction approaches its maximum (a mean drag coefficient of about 2.1 at Re = 500 and 1.9 at Re = 1000, respectively), suppressing the lift becomes the only remaining option. However, as the Reynolds number increases, the learning requires more episodes to converge, and the agents need more trial-and-error steps to comprehend the nonlinear relationships inherent in the dynamic system.
An interesting phenomenon is observed when comparing the final performance of drag reduction across different Reynolds numbers. As the Reynolds number increases, the drag reduction effect improves. This contradicts our intuition, as we would normally expect the flow field to become more complex at high Reynolds numbers, with increased turbulence and vortex shedding, making it difficult for the DRL agent to learn an effective strategy for flow control. However, this is not the case. The main reason for this lies in the drag force component <cit.>. The overall drag F_d on the circular cylinder submerged in a Newtonian fluid can be calculated by
F_d = ∮ p cos(θ) dA (pressure drag) + ∮ τ_w sin(θ) dA (skin friction drag),
τ_w = μ (∂ v_t/∂ n)|_surface,
where p and τ_w are the normal stress and the shear stress acting on the cylinder surface, respectively, v_t is the velocity along the cylinder surface, and n is the surface-normal direction.
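To make the decomposition concrete, a minimal numerical sketch (the discretization and radius below are illustrative assumptions, not values from this study) evaluates both contributions from surface samples of p and τ_w:

    import numpy as np

    def drag_components(p, tau_w, radius, span=1.0):
        """Approximate pressure drag and skin-friction drag per unit span from
        surface samples taken at equally spaced angles theta in [0, 2*pi)."""
        n = len(p)
        theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        dA = radius * (2.0 * np.pi / n) * span            # surface area element
        f_pressure = np.sum(p * np.cos(theta) * dA)       # discretized integral of p cos(theta) dA
        f_friction = np.sum(tau_w * np.sin(theta) * dA)   # discretized integral of tau_w sin(theta) dA
        return f_pressure, f_friction

    # example with synthetic surface data:
    # fp, ff = drag_components(np.random.rand(360), np.random.rand(360), radius=0.5)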
Figure <ref> shows the magnitudes of the pressure drag and skin friction drag, as well as the proportion of pressure drag in the total drag, at different Reynolds numbers. It can be observed that, as the Reynolds number increases, the proportion of pressure drag increases monotonically, growing from 79.2% at Re=100 to 94.5% at Re=1500. For a fixed aerodynamic shape and inflow velocity, the main mechanism of active flow control is to suppress the shedding of vortices at the rear of the cylinder <cit.>. This indicates that the significant drag reduction is mainly attributable to the reduction of pressure drag caused by flow separation. As the Reynolds number increases, the proportion of pressure drag in the overall drag force increases, leading to a more pronounced drag reduction effect when the flow separation is effectively manipulated by the DRL controller.
<cit.> also proposed a compelling explanation. The flow around a cylinder can be decomposed into a superposition of a steady baseflow and vortex shedding components. The baseflow was numerically simulated using a symmetric boundary condition at the equatorial plane of the cylinder. The results showed that the drag force on the cylinder controlled by DRL was consistent with the drag force of the baseflow, which indicates that the drag reduction achieved by DRL-based active flow control mainly originates from suppressing vortex shedding, and the drag generated by vortex shedding is mainly attributable to the pressure drag component. Consequently, under high Reynolds number conditions, the increased drag caused by the pressure component (both in absolute value and in proportion) gives the DRL agent greater potential for flow control. When the DRL agent finds the optimal control law, this leads to a larger decrease in the drag coefficient.
The results of the dynamic feature-based soft actor-critic (SAC) algorithm are presented in Fig. <ref>. The entire training process is parallelized across five environments provided by DRLinFluids. The algorithm successfully learns to perform active flow control, resulting in a continuous reduction of drag and suppression of lift. In the absence of actuation, the drag coefficient C_D oscillates periodically around a mean value, as shown in Fig. <ref>a. The mean value of C_D is 3.20, with a standard deviation of 0.283 for the drag coefficient and 2.17 for the lift coefficient. With DF-DRL based active flow control, the mean drag coefficient is reduced to 2.17, corresponding to a drag reduction of approximately 32.2%. Furthermore, the fluctuation of the drag coefficient is suppressed, as indicated by the reduced standard deviation of 0.252. Meanwhile, the standard deviation of the lift coefficient is slightly reduced to 1.61.
Power spectrum analyses are conducted to compare the drag C_D and lift C_L coefficients of the cylinder with and without active flow control, and the results are presented in Fig. <ref>c and d. The power spectrum curves for both C_D and C_L of the plain cylinder exhibit a distinct peak. This indicates regular vortex shedding at this frequency, which contributes the majority of the energy behind the mean drag and its fluctuation. By contrast, these peaks disappear in the power spectrum curves of C_D and C_L of the cylinder with DF-DRL based active flow control.
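Such a spectrum can be estimated, for instance, with a Welch periodogram of the force-coefficient time series; the sketch below is a generic illustration (the sampling interval and window length are assumptions, not values used in this study):

    import numpy as np
    from scipy.signal import welch

    def force_spectrum(coefficient_series, dt, nperseg=1024):
        """Welch power spectral density of a drag or lift coefficient time series."""
        freqs, psd = welch(coefficient_series, fs=1.0 / dt, nperseg=nperseg)
        return freqs, psd, freqs[np.argmax(psd)]   # also return the dominant (shedding) frequency

    # example with a synthetic shedding-like signal:
    # t = np.arange(0.0, 100.0, 0.01)
    # cl = 2.0 * np.sin(2.0 * np.pi * 0.2 * t) + 0.1 * np.random.randn(t.size)
    # freqs, psd, f_peak = force_spectrum(cl, dt=0.01)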
The results presented in Fig. <ref> demonstrate that at a Reynolds number of 1000, the turbulent conditions are relatively weak. Specifically, for the plain cylinder, the mean drag coefficient is measured to be 3.48 with a standard deviation of 0.455. In addition, the lift coefficient exhibits a standard deviation of 2.76. However, when active flow control is implemented, a significant reduction in the mean drag coefficient is achieved, resulting in a value of 1.86, which corresponds to a drag reduction of approximately 46.55%. Moreover, the standard deviation of the drag coefficient decreases to 0.31, indicating a more consistent behavior of the cylinder under flow control. Notably, the standard deviation of the lift coefficient is also markedly reduced to 1.61, which is highly desirable for suppressing the lift force and mitigating flow-induced instability of the cylinder.
The power spectrum analyses of C_D and C_L for the cylinder with and without active flow control are presented in Fig. <ref>c and d. The power spectrum curves for both C_D and C_L of the plain cylinder show an obvious peak, indicating the presence of regular vortex shedding with significant energy. In contrast, when active flow control is implemented, the peak in the power spectrum curves of C_D and C_L is eliminated, indicating that the regular vortex shedding has been completely disrupted by the jet actuation.
Figure <ref> displays the instantaneous flow field around a circular cylinder, with and without active control. The impact of controlled jet flow on reducing the aerodynamic force acting on the cylinder is explained in terms of the flow pattern. In Fig. <ref>a and c, which represent conditions for Re = 500 and 1000, respectively, a vortex shedding pattern is observed for the plain cylinder, as expected. This alternate vortex shedding pattern directly contributes to fluctuations in both the drag and lift coefficients, as demonstrated in Fig. <ref> and <ref>.
Figure <ref>b and d illustrate the impact of fluctuating actuation on the vortex shedding pattern. The alternate vortex shedding is suppressed, resulting in reduced fluctuations of both C_D and C_L. Meanwhile, an elongated recirculation bubble is formed in the near wake, which is associated with increased pressure and a reduction in drag force. The elongated wake implies a reduced curvature of the shear layer, which corresponds to increased pressure at the rearward cylinder side. As a result, the cylinder with active flow control experiences less drag.
Figure <ref> depicts the mean vorticity contours around a cylinder, with and without active control. The increase in the recirculation zone is very visible at Re = 500 and 1000 and illustrates the effective control strategy learned by the DRL agent. The results demonstrate that well-trained DRL agents, which utilize a single surface pressure sensor's temporal information as the state, can achieve efficient control even under flow conditions with strong nonlinearity and various Reynolds numbers.
It is worth noting that learning a DRL-based control law for the AFC task with a single surface pressure sensor as the state presents a significant challenge under weakly turbulent conditions. Nevertheless, these results demonstrate the efficacy of DF-DRL based active flow control of a circular cylinder with sparse surface pressure sensing and offer a promising avenue for reducing drag and enhancing AFC performance in fluid dynamics systems.
§ CONCLUSIONS AND OUTLOOK
This study presents significant progress towards practical active flow control with a surface pressure sensor located on a circular cylinder as the sole input to a DRL agent. This approach has potential for advancing deep reinforcement learning in real-world applications, such as drag and lift reduction for vehicles and high-rise buildings. The main results of this study are summarized as follows.
Firstly, a novel deep reinforcement learning method called dynamic feature-based deep reinforcement learning (DF-DRL) is introduced. Essentially, DF-DRL utilizes prior knowledge to extract one or several features of a nonlinear dynamic system, enabling it to estimate the complete state of the system to the fullest extent possible. This concept aligns with the ideas of pattern recognition and reduced-order modelling. The DF-DRL model combines identification and control and is not limited to traditional DRL tasks that take the state at a single instant as input. Instead, suitable dynamic feature states are selected and lifted to a higher-dimensional vector based on the characteristics of different dynamic systems. This vector is then used as the state input to the agent. Results show that using DF-DRL with a single surface pressure sensor can achieve the same drag reduction performance as the vanilla DRL method using 147 velocity sensors that fully sample the cylinder wake region.
Secondly, the study investigates the distribution of sensors needed for active flow control of cylinder wake. We conclude that in low to moderate Reynolds number scenarios, a single surface pressure sensor can achieve control results comparable to those obtained with 147 wake sensors under active flow control when DF-DRL is used. Additionally, we find that the reward value obtained with a single trailing edge sensor on the cylinder is higher than if the sensor is located at the leading edge, resulting in a lower mean drag coefficient and standard deviation of the lift coefficient.
Thirdly, two different flow situations were examined to verify the effectiveness and robustness of the proposed sensor configuration and DF-DRL method. Results show that the deep reinforcement learning agent utilizing a single surface pressure sensor is capable of controlling wake development behind the circular cylinder, even under more complex scenarios corresponding to higher Reynolds numbers.
Sparse reduced-order modeling <cit.> is a highly active research field, in which selecting appropriate dynamic feature lifting methods for different fluid dynamic systems enables more accurate estimation from fewer sensor measurements. Processing these features and using them as DRL states is a promising approach. In the present study, significant reductions in the drag coefficient of a cylinder are achieved through two distinct approaches. Specifically, typical DRL yields a reduction of 6.4% with a four-sensor layout scheme, while dynamic feature sensing with lifting yields an even greater reduction of 8.0% compared with the benchmark performance. Importantly, the results of this investigation demonstrate that the drag coefficient of the dynamic feature sensing with lifting and DRL (DF-DRL) model is markedly lower than that of the vanilla model relying solely on direct sensor feedback. In particular, the DF-DRL model exhibits a reduction of 25% in the drag coefficient, highlighting the efficacy of this approach for improving aerodynamic performance. The DF-DRL method presents a promising approach to significantly reducing the number of required sensors while achieving optimal drag and lift coefficient reduction performance, which offers a promising pathway for taming complex fluid dynamics systems.
§ ACKNOWLEDGEMENT
This study is supported by the National Key R&D Program of China (2021YFC3100702), National Natural Science Foundation of China (52278493, 52108451), Shenzhen Science and Technology Program (SGDX20210823103202018, GXWD20201230155427003-20200823230021001, KQTD20210811090112003), and Guangdong-Hong Kong-Macao Joint Laboratory for Data-Driven Fluid Mechanics and Engineering Applications (2020B1212030001).
This work is also supported
by the National Science Foundation of China (NSFC) through grants 12172109 and 12172111,
by Guangdong province, China, via the Natural Science and Engineering grant 2022A1515011492
and by the Shenzhen Research Foundation for Basic Research, China, through grant JCYJ20220531095605012.
jfm
|
http://arxiv.org/abs/2307.00586v1
|
20230702150515
|
ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition
|
[
"Debaditya Roy",
"Dhruv Verma",
"Basura Fernando"
] |
cs.CV
|
[
"cs.CV"
] |
1]Debaditya Roy
1,2]Dhruv Verma
1,2]Basura Fernando
[1]Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Republic of Singapore.
[2]Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Republic of Singapore
ClipSitu: Effectively Leveraging CLIP for Conditional Predictions in Situation Recognition
[
==========================================================================================
Situation Recognition is the task of generating a structured summary of what is happening in an image using an activity verb and the semantic roles played by actors and objects.
In this task, the same activity verb can describe a diverse set of situations, and the same actor or object category can play a diverse set of semantic roles depending on the situation depicted in the image.
Hence the model needs to understand the context of the image and the visual-linguistic meaning of the semantic roles.
Therefore, we leverage the CLIP foundational model that has learned the context of images via language descriptions.
We show that deeper-and-wider multi-layer perceptron (MLP) blocks obtain noteworthy results on the situation recognition task using CLIP image and text embedding features, even outperforming the state-of-the-art CoFormer, a Transformer-based model, thanks to the external implicit visual-linguistic knowledge encapsulated by CLIP and the expressive power of modern MLP block designs.
Motivated by this, we design a cross-attention-based Transformer using CLIP visual tokens that model the relation between textual roles and visual entities.
Our cross-attention-based Transformer, known as ClipSitu XTF, outperforms the existing state-of-the-art by a large margin of 14.1% on top-1 accuracy for semantic role labelling (value) on the imSitu dataset.
We will make the code publicly available.
§ INTRODUCTION
Situation recognition was first introduced to computer vision in pioneering work <cit.>.
Situation recognition is an important problem in scene understanding, activity understanding, and action reasoning as it provides a structured representation of the main activity depicted in the image.
The key component in situation recognition is the task of semantic role labeling (SRL).
Semantic role labeling is complex as the same activity verb may have different functional meanings and purposes depending on the context of the image.
For example, the verb “spray" has a completely different semantic meaning depending on the context as shown in <ref> where the semantic roles such as source, substance, destination, and place define the situation.
Furthermore, changing the substance being sprayed from grease to hair spray to perfume in <ref>(b), (c), and (d) changes the destination to salad, hair, or face, yet the visual appearance of the spraying action changes only minimally.
Hence, semantic role labeling requires a detailed understanding of the event in the image using contextual information from the image and how it relates to the linguistic definition of the event in terms of the activity (verb) and activity-specific roles.
Apart from the complexity of activity-specific roles, the activities in situation recognition are a mixture of literal and abstract verbs as shown in <ref>.
In <ref>(a) and (b), the situational verbs being described are nagging and cramming though the literal action is eating which is also a situational verb.
Therefore, the prediction model should sometimes ignore the literal action to understand the described situation.
Another challenge is identifying the nuanced difference between abstract situations shown in <ref>(c) and (d) which show nagging and encouraging.
Both these situations are opposites but visually similar and therefore the model should only concentrate on the agent's gestures (person nagging or providing encouragement) rather than the subject (person being nagged or encouraged) who shows similar expressions.
The way we humans overcome these challenges and understand these situations is based on our experiences from different areas in our daily life.
Therefore, we propose that these challenges of abstract and nuanced understanding of images are better addressed by models that have access to the context from a variety of images and their text descriptions.
Multimodal Foundation Models (MFM) such as CLIP <cit.> and ALIGN <cit.> provide this context as they are trained on many millions of image/text pairs to capture cross-modal dependencies between images and text.
CLIP is an excellent MFM for solving image semantic role labeling tasks as it provides a grounded understanding of visual and linguistic information.
In <cit.>, MFMs such as CLIP are shown to be trainable for complex vision and language tasks termed Structured Vision and Language Concepts (SVLC).
Another way to leverage MFMs is to apply an MLP on top of the image encoder in works such as VL-Adapter <cit.>, AIM <cit.>, EVL <cit.> and wise-ft <cit.>.
These approaches can be applied for predicting the main activity in the image i.e. for image classification <cit.> or action detection <cit.>.
However, semantic role labeling is a conditional classification task that needs verb and role along with the image.
Therefore, in <cit.>, authors convert situation recognition to a text-prompt-based prediction problem by fine-tuning a CLIP image encoder with the text outputs from a large language model – GPT-3 <cit.> called CLIP-Event.
The verbs are ranked using the prompt “An image of ⟨verb⟩” based on image CLIP embeddings.
After predicting the verb, each noun is predicted using another text prompt “The ⟨name⟩ is a ⟨role⟩ of ⟨verb⟩”, i.e. “The firefighter is an agent in spraying”.
Even with the world knowledge in GPT-3, CLIP-Event performs worse on semantic role labeling than state-of-the-art CoFormer <cit.> which is directly trained on the images.
The reason is that finetuning CLIP for semantic role labeling is not effective: the imSitu dataset <cit.> is not massive, containing only 126,102 images, yet it involves a large number of nouns (11,538) related to 190 unique roles. Therefore, learning the mapping between roles and nouns becomes an extremely challenging task.
We show that a well-designed multimodal MLP with a modern MLP block design is able to solve semantic role labeling using CLIP embeddings and even surpasses the state-of-the-art without finetuning the CLIP model.
This multimodal MLP is trained on a combination of the image embedding and the text embeddings of the verb and the role obtained from the CLIP model, and it predicts the entity corresponding to the role using a simple loss function.
Motivated by the effectiveness of CLIP-based multimodal MLP, we adopt a Transformer encoder to leverage the connection across semantic roles in an image.
Each semantic role is represented using a multimodal input of image and text embedding of the verb and the role.
We show that sharing information across semantic roles using a Transformer leads to slightly improved performance.
Motivated by these two findings, we design a cross-attention Transformer to learn the relation between semantic role queries and CLIP-based visual token representations of the image to further enhance the connection between visual and textual entities.
We term this model as ClipSitu XTF and it obtains state-of-the-art results for Situation Recognition on imSitu dataset outperforming state-of-the-art CoFormer <cit.> by 14.1% on top-1 value performance.
§ RELATED WORK
Situation Recognition. To understand the relationship between different entities in an image, tasks such as image captioning <cit.>, scene graph generation <cit.>, and human-object interaction detection <cit.> have been proposed in the literature.
Image captioning produces a natural language description of actions and entities in the image.
Image captions are subjective because annotators may emphasize certain entities while others may not.
So, it is not easy to evaluate whether all the entities involved in the activity have been adequately represented in the caption.
Role labeling between pairs of entities using scene graphs only describes generic relations between entities that may not be with respect to the activity in the image.
All these limitations are addressed with a structured description of a situation using verbs, roles and entities in <cit.> and an accompanying dataset called imSitu.
The situational verbs and their roles are obtained based on the meaning of the activity in each image from FrameNet <cit.>. The entities for each role are populated using the large object dataset ImageNet.
Situation recognition is more straightforward to evaluate than captions and the relationship between various entities is grounded in the activity.
Recently, situation recognition has also been extended to videos with the VidSitu dataset <cit.> where each video spans multiple events each of which is described using a situational verb, semantic roles, and their nouns.
The VidSitu dataset is extended with grounded entities in <cit.> while <cit.> proposes a contrastive learning objective framework for video semantic role labeling.
We limit the scope of this work to situation recognition in images.
One-stage prediction approaches predict the situational verb from the image and then the nouns associated with the roles of those verbs.
In <cit.>, a conditional random field model is proposed that decomposes the task of situation recognition into verb prediction and semantic role labeling (SRL).
For SRL, they optimize the log-likelihood of the ground-truth nouns corresponding to each role for an image over possible semantic role-noun pairs from the entire dataset.
In <cit.>, a tensor decomposition model is used on top of CRF that scores combinations of role-noun pairs.
They also perform semantic augmentation to provide extra training samples for rarely observed noun-role combinations.
In <cit.>, a predefined order for semantic roles is decided to predict the nouns for an image, and a recurrent neural network is used to predict the nouns in that order.
Authors in <cit.> propose a gated graph neural network (GGNN) to capture all possible relations between roles instead of a predefined order as in <cit.>.
In <cit.>, a mixture kernel
is applied to relate the nouns predicted for one role with respect to the noun predicted for another role.
These relations provide a prior for the GGNN <cit.> to predict nouns for each role.
In <cit.>, imSitu is extended with grounded entities in each image.
They propose two models – Independent Situation Localizer (ISL) and Joint Situation Localizer (JSL).
Both ISL and JSL use LSTMs to predict nouns in a predefined sequential order similar to <cit.> while RetinaNet estimates the locations of entities.
A transformer encoder-decoder architecture is proposed in <cit.> where the encoder
captures semantic features from the image for verb prediction and the decoder learns the role relations.
In <cit.>, situational verbs are predicted using a CLIP encoder on the image and the detected objects in the image.
Two-stage prediction approaches introduce an additional stage to enhance the verb prediction using the predicted nouns of the roles.
In <cit.>, transformers are used to predict semantic roles using interdependent queries that contain the context of all roles.
The context acts as the key and values while the verb and the role form the query to predict the noun.
They also consider the nouns of two predefined roles along with the image to enhance the verb prediction using a CNN.
In <cit.>, a coarse-to-fine refinement of verb prediction is proposed by re-ranking verbs based on the nouns predicted for the roles of the verb.
CoFormer<cit.> combines ideas from <cit.> and <cit.> with transformer encoder and decoder predicting verbs and nouns, respectively.
They add another encoder-decoder to refine the verb prediction based on the decoder outputs from the noun decoder.
§ CLIPSITU MODELS AND TRAINING
In this section, first, we present how we extract CLIP <cit.> embedding for situation recognition.
Then, we present three models for Situation Recognition using the CLIP model.
Afterward, we present the loss function that we use for training, and finally,
we present the verb prediction model.
§.§ Extracting CLIP embedding
Every image I has a situational action associated with it, denoted by a verb v.
For this verb v, there is a set of semantic roles R_v = {r_1, r_2, ⋯, r_m} each of which is played by an entity denoted by its noun value N = { n_1, n_2, ⋯, n_m}.
As shown in <ref>, the verb is spraying, and the roles are agent, source, substance, and destination.
The corresponding entity noun values
for each of the roles are firefighter, hose, water, and fire.
We use the CLIP <cit.> visual encoder ψ_v(·) and textual encoder ψ_t(·) to obtain representations for the image, verb, roles, and nouns, denoted by X_I, X_V, X_R_v and X_N, respectively.
Here X_R_v = { X_r_1, X_r_2, ⋯, X_r_m} and X_N = { X_n_1, X_n_2, ⋯, X_n_m} where X_r_i = ψ_t(r_i) and X_n_i = ψ_t(n_i). Similarly, the X_I = ψ_v(I) and X_V = ψ_t(v). Note that all representations X_I, X_V, X_r_i and X_n_i have the same dimensions.
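As a concrete illustration, these embeddings can be obtained with the publicly released CLIP package roughly as follows (a sketch that assumes the ViT-B/32 checkpoint; the image path and prompt strings are placeholders):

    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
    texts = clip.tokenize(["spraying", "agent", "source", "substance", "destination"]).to(device)

    with torch.no_grad():
        x_image = model.encode_image(image)   # X_I
        x_text = model.encode_text(texts)     # X_V and X_r_i for the verb and each role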
§.§ ClipSitu MLP
Here we present a modern multimodal MLP block design for semantic role labeling for Situation Recognition that predicts each semantic role of a verb in an image.
We term this method as ClipSitu MLP.
Specifically, given the image, verb, and role embedding, the ClipSitu MLP predicts the embedding of the corresponding noun value for the role.
In contrast to what has been done in the literature, ClipSitu MLP obtains contextual information by conditioning on the image, verb, and role embeddings.
While the image embedding provides context about the possible nouns for the role, the verb provides the context on how to interpret the image situation.
For example, <ref> shows a crane floating on the dock and lifting something from the water.
Specifying that the situational verb is floating implies that the model should look at the dock and not at the crane.
Finally, the role input specifies which particular function of the verb floating, i.e., medium, place, or agent, is to be represented.
We concatenate the role embedding for each role r_i to the image and verb embedding to form the multimodal input X_i where X_i = [X_I, X_v, X_r_i].
Then, we stack l MLP blocks to construct CLIPSitu MLP and use it to transform the multimodal input X_i to predict the noun embedding X̂_n_i as follows:
X̂_n_i = ϕ_MLP(X_i).
In ϕ_MLP, the first MLP block projects the input feature X_i to a fixed hidden dimension using a linear projection layer followed by a LayerNorm <cit.>.
Each subsequent MLP block consists of a Linear layer followed by a Dropout layer (with a dropout rate of 0.2), ReLU <cit.>, and a LayerNorm as shown in <ref>(a).
We predict the noun class from the predicted noun embedding using a dropout layer (rate 0.5) followed by a linear layer, which we call the classifier ϕ_c, as
ŷ_n_i = ϕ_c(X̂_n_i)
where ŷ_n_i is the predicted noun class.
We use cross-entropy loss between predicted ŷ_n_i and ground truth nouns y_n_i as explained later in <ref> to train the model.
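A minimal PyTorch sketch of this block structure is given below; the block count, hidden width, and output projection are illustrative assumptions rather than the released implementation:

    import torch
    import torch.nn as nn

    def mlp_block(in_dim, hidden_dim, dropout=0.2):
        return nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Dropout(dropout),
            nn.ReLU(),
            nn.LayerNorm(hidden_dim),
        )

    class ClipSituMLP(nn.Module):
        def __init__(self, in_dim=1536, hidden_dim=16384, num_blocks=3,
                     noun_dim=512, num_nouns=11538):
            super().__init__()
            blocks = [nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.LayerNorm(hidden_dim))]
            blocks += [mlp_block(hidden_dim, hidden_dim) for _ in range(num_blocks - 1)]
            self.mlp = nn.Sequential(*blocks, nn.Linear(hidden_dim, noun_dim))
            self.classifier = nn.Sequential(nn.Dropout(0.5), nn.Linear(noun_dim, num_nouns))

        def forward(self, x):
            # x = [X_I; X_V; X_r_i], shape (batch, in_dim)
            noun_embedding = self.mlp(x)              # predicted noun embedding
            return self.classifier(noun_embedding), noun_embedding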
§.§ ClipSitu TF: ClipSitu Transformer
The role-noun pairs associated with a verb in an image are related as they contribute to different aspects of the execution of the verb.
For example in <ref>(a), we see that in spraying the role of destination is played by fire, and the role of source is played by hose.
Knowing that the destination is fire and the source is hose restricts the output of medium to be water as the most plausible.
Similarly, the role of agent played by dock/pier is strongly linked to both water as a medium and ocean/river as place.
We extend our ClipSitu MLP model using a Transformer <cit.> to exploit the interconnected semantic roles and predict them in parallel.
The input to the Transformer is similar to ClipSitu MLP (i.e. X_i = [X_I, X_v, X_r_i]), however, we build a set of vectors using { X_1, X_2, ⋯, X_m} where m denotes the number of roles of the verb.
Each vector in the set is further processed by a linear projection to reduce dimensions.
We initialize a Transformer model ϕ_TF with l encoder layers and multi-head attention with h heads.
Using the Transformer model, we predict the value embedding of the m roles as output tokens of the transformer
{X̂_n_1, X̂_n_2, ⋯, X̂_n_m} = ϕ_TF({ X_1, X_2, ⋯, X_m}).
Similar to the MLP, we predict the noun classes using a classifier on the value embedding as ŷ_i = ϕ_c(X̂_n_i) where i= {1, ⋯, m} as shown in <ref>(b).
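A simplified sketch of this role-parallel encoder is shown below (masking of unused role slots and the exact projection dimensions are assumptions based on the description above):

    import torch
    import torch.nn as nn

    class ClipSituTF(nn.Module):
        def __init__(self, in_dim=1536, d_model=512, heads=8, layers=4, num_nouns=11538):
            super().__init__()
            self.project = nn.Linear(in_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
            self.classifier = nn.Linear(d_model, num_nouns)

        def forward(self, role_tokens, pad_mask=None):
            # role_tokens: (batch, m, in_dim), one token [X_I; X_V; X_r_i] per role
            h = self.encoder(self.project(role_tokens), src_key_padding_mask=pad_mask)
            return self.classifier(h)                 # (batch, m, num_nouns)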
§.§ ClipSitu XTF: Cross-Attention Transformer
Each semantic role in a situation is played by an object located in a specific region of the image.
Therefore, it is important to pay attention to the regions of the image that have a stronger relationship with the role.
Such a mechanism would allow us to obtain better noun prediction accuracy.
Hence, we propose to use the encoding for each patch of the image obtained from the CLIP model.
We design a cross-attention Transformer called ClipSitu XTF to model how each patch of the image is related to every role of the verb through attention as shown in <ref>(c).
Let the patch embedding of an image be denoted by X_I,p = { X^1_I, X^2_I, ⋯, X^p_I} where p is the number of image patches.
These patch embeddings form the key and values of the cross-attention Transformer while the verb-role embedding is the query in Transformer.
The verb embedding is concatenated with each role embedding to form m verb-role embeddings X_vr = { [X_V; X_r_1], [X_V; X_r_2], ⋯, [X_V; X_r_m] }.
We project each verb-role embedding to the same dimension as the image patch embedding using a linear projection layer.
Then the cross-attention operator in a Transformer block is denoted as follows:
Q = W_Q X_vr, K = V = W_I X_I,p
X̂ = softmax(QK^T/√(d_K)) V
where W_Q and W_I represent projection weights for queries, keys, and values and d_K is the dimension of the key token K.
As with ClipSitu TF, we have l cross-attention layers in ClipSitu XTF.
The predicted output from the final cross-attention layer contains m noun embeddings X̂ = {X̂_n_1, X̂_n_2, ⋯, X̂_n_m}.
Similar to the transformer in <ref>, we predict the noun classes using a classifier on the noun embeddings as ŷ_i = ϕ_c(X̂_n_i) where i= {1, ⋯, m}.
We call this network ClipSitu XTF.
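One cross-attention layer of this kind can be sketched in PyTorch as follows (a simplified assumption; feed-forward sublayers and the final classifier are arranged as described above):

    import torch
    import torch.nn as nn

    class ClipSituXTFLayer(nn.Module):
        """Verb-role queries attend to CLIP image patch tokens (keys and values)."""
        def __init__(self, d_model=512, heads=1):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, queries, patch_tokens):
            # queries: (batch, m, d_model), projected [X_V; X_r_i] for each role
            # patch_tokens: (batch, p, d_model), CLIP patch embeddings X_I,p
            out, _ = self.attn(queries, patch_tokens, patch_tokens)
            return self.norm(queries + out)

    # stacking l such layers and applying a linear classifier to the outputs
    # recovers the overall ClipSitu XTF structure described in the text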
§.§ Minimum Annotator Cross Entropy Loss
It is common to have multiple annotators for the same image and in some instances, annotators may not provide the same annotation.
Existing approaches <cit.> make multiple predictions instead of one to tackle this issue.
However, this can confuse the network during training as for the same instance there are multiple different annotations.
The loss function should not penalize a prediction that is close to any of the annotators' ground truth but further away from others.
We choose the minimum cross-entropy loss for a prediction ŷ_i across the ground truth from all the annotators 𝒜= { A_1, ⋯ A_q } to train our network
ℒ_MACE = min_A_j ∈𝒜( -∑_c=1^C y^(A_j)_i,c log(ŷ_i,c) ),
Here C denotes the total number of classes, and ℒ_MACE stands for the minimum annotator cross-entropy loss.
To train noun prediction models, we use this modified loss function.
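In code, this loss amounts to taking a per-sample minimum over annotator-specific cross-entropies, for example (a sketch assuming one noun label per annotator):

    import torch
    import torch.nn.functional as F

    def mace_loss(logits, annotator_labels):
        """Minimum annotator cross-entropy.
        logits: (batch, num_classes); annotator_labels: (batch, q) with one noun id per annotator."""
        per_annotator = torch.stack(
            [F.cross_entropy(logits, annotator_labels[:, j], reduction="none")
             for j in range(annotator_labels.shape[1])],
            dim=1,
        )
        return per_annotator.min(dim=1).values.mean()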
§.§ ClipSitu Verb MLP
Apart from semantic role labeling, the situation recognition performance of a system is also assessed on its ability to predict the situational verb correctly from the image.
We design a simple MLP with CLIP embeddings of the image X_I as input called ClipSitu Verb MLP as follows:
v̂ = ϕ_V(X_I).
where ϕ_V contains l linear layers of a fixed dimension with ReLU activation to predict the situational verb.
Just before the final classifier, there is a Dropout layer with a 0.5 rate.
We train ClipSitu Verb MLP with standard cross-entropy loss.
§ EXPERIMENTS
§.§ Evaluation Details
We perform our experiments on imSitu dataset <cit.> that contains a total of 125k images with 75k train, 25k validation, and 25k test images.
The metrics used for semantic role labeling are value and value-all <cit.> which predict the accuracy of noun prediction for a given role.
For a given verb with k roles, value measures whether the predicted noun for at least one of k roles is correct.
On the other hand, value-all measures whether all the predicted nouns for all k roles are correct.
A prediction is correct if it matches the annotation of any one of the three annotators.
The metrics value and value-all are evaluated in three settings based on whether we are using ground truth verb, top-1 predicted verb, or top-5 predicted verbs.
For our model ablation on semantic role labeling, we use the ground truth verb setting for measuring value and value-all.
As CLIP embeddings are not directly geared toward object detection, we have not evaluated the grounding of nouns in this work.
All experiments are performed on the imSitu dev set unless otherwise specified.
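For reference, the two metrics can be computed from per-role correctness flags along the following lines (a sketch; the exact imSitu evaluation script may differ in bookkeeping details):

    def value_metrics(per_image_correct):
        """per_image_correct: list with one boolean list per image, holding a flag
        for each of the k roles of that image's verb (True if the predicted noun is
        accepted by at least one annotator)."""
        n = len(per_image_correct)
        value = 100.0 * sum(any(flags) for flags in per_image_correct) / n
        value_all = 100.0 * sum(all(flags) for flags in per_image_correct) / n
        return value, value_all

    # example: value_metrics([[True, False, True], [True, True]]) -> (100.0, 50.0)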
§.§ Implementation Details
We use the CLIP model with ViT-B32 image encoder to extract image features unless otherwise specified.
The input to ClipSitu MLP is a concatenation of the CLIP embeddings of the image, verb, and role, each of 512 dimensions leading to 1536 dimensions.
For both ClipSitu TF and XTF, we set the sequence length to be 6 which refers to the maximum number of roles possible for a verb in imSitu following <cit.>.
Each verb has a varying number of roles and we mask the inputs that are not required.
For ClipSitu TF, each input token in the sequence is the concatenated image, verb, and role CLIP embedding same as the MLP above which is projected to 512 dimensions using a linear layer.
For the patch-based cross-attention Transformer (ClipSitu XTF), we obtain the embedding for input image patches from CLIP image encoder (ViT-B32 model) which results in 50 tokens (224/32 × 224/32 + 1 class) of 512 dimensions that are used as key and value.
The query tokens are concatenated verb and role CLIP embeddings that are projected to 512 dimensions using a linear layer.
Unless otherwise mentioned, we train all our models with a batch size of 64, a learning rate of 0.001, and an ExponentialLR scheduler with Adamax optimizer, on a 24 GB Nvidia 3090.
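A minimal sketch of this optimization setup is given below (the scheduler decay factor and the stand-in model are assumptions for illustration only):

    import torch
    import torch.nn as nn

    model = nn.Linear(1536, 11538)                 # stand-in for any ClipSitu model
    optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

    x = torch.randn(64, 1536)                      # one dummy batch of size 64
    y = torch.randint(0, 11538, (64,))
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()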
§.§ Ablations on hyperparameters for ClipSitu MLP, TF, and XTF
In <ref>, we explore combinations of MLP blocks and the hidden dimensions of each block to obtain the best MLP network for semantic role labeling.
Increasing the number of MLP blocks and hidden dimensions steadily improves performance, which is expected given that the number of unique nouns to be predicted is 11,538.
We train the MLP with small to very large hidden dimensions, i.e., from 128 to 16,384, which results in a steady improvement in both value and value-all.
No improvement in value and value-all is seen when we increase the layer dimension further to 32768 for 3 MLP blocks which demonstrates that we have reached saturation.
Our best ClipSitu MLP for semantic role labeling obtains 76.91 for value and 43.22 for value-all with 3 MLP blocks with each block having 16,384 hidden dimensions which beats the state-of-the-art CoFormer <cit.>.
The main reason our ClipSitu MLP performs so well on semantic role labeling is our modern MLP block design that contains large hidden dimensions along with LayerNorm which have not been explored in existing MLP-based CLIP finetuning approaches.
We also compare the performance of ClipSitu MLP with the proposed minimum annotator cross-entropy loss () versus applying cross-entropy using the noun labels of each annotator separately.
We find that produces better value and value-all performance (76.91 and 43.22) compared to cross-entropy (76.57 and 42.88).
In <ref>, we explore the number of heads and layers needed to obtain the best-performing architecture for semantic role labeling using ClipSitu TF and XTF.
We find that a single head with 4 transformer layers performs the best in terms of value for both ClipSitu TF and XTF while for value-all, an 8-head 4-layer ClipSitu TF performs the best and we use this for subsequent evaluation.
For both ClipSitu TF and XTF, increasing the number of layers beyond 4 does not yield any improvement in value or value-all when using fewer heads (1, 2, 4).
Similarly, for ClipSitu XTF, increasing the number of heads and layers leads to progressively deteriorating performance.
Both of these performance drops can be attributed to the fact that we have insufficient samples for training larger Transformer networks <cit.>.
§.§ Verb and noun prediction with different CLIP Image Encoders
In <ref>, we compare the proposed ClipSitu Verb MLP model against a state-of-the-art CLIP finetuning model called weight-space ensembles (wise-ft) <cit.> that leverages both zero-shot and fine-tuned CLIP models to make verb predictions.
We choose wise-ft for comparison as external context from MFMs can be useful in predicting abstract situational verbs as we discussed in <ref>.
We compare ClipSitu Verb MLP and wise-ft using 4 CLIP image encoders - ViT-B32, ViT-B16, ViT-L14, and ViT-L14@336px.
The image clip embeddings for ViT-B32 and ViT-B16 are 512 dimensions and for ViT-L14, and ViT-L14@336px are 768 dimensions.
These four encoders represent different image patch sizes, different depths of image transformers, and different input image sizes.
The hidden layers for ClipSitu Verb MLP are all 1024 dimensional.
Increasing the number of hidden layers does not improve performance for ClipSitu Verb MLP and we obtain the best top-1 and top-5 verb prediction with a single hidden layer.
ClipSitu Verb MLP performs better than wise-ft for all image encoders, which shows that training an MLP on frozen CLIP image features works better than finetuning the CLIP image encoder itself for situational verb prediction.
Our best performing ClipSitu Verb MLP outperforms wise-ft by 5.6% on Top-1 and 3.2% on Top-5 when using the same ViT-L14 image encoder.
Next, we study the effect of using different CLIP image encoders for noun prediction with ClipSitu MLP, TF and XTF.
We compare ViT-B32, ViT-B16, ViT-L14, and ViT-L14@336px.
For ClipSitu XTF, the number of image patch tokens used as key and value changes based on patch size and image size.
We have 197 tokens (224/16 × 224/16 + 1 class token) for ViT-B16, 257 tokens for ViT-L14 (224/14 × 224/14 + 1 class token), and
577 tokens for ViT-L14@336px (336/14 × 336/14 + 1 class token).
For ViT-L14, and ViT-L14@336px image encoders, we obtain 768-dimensional embeddings which are projected using a linear layer to 512.
To demonstrate the effect of different image encoders, we choose the best architectures for ClipSitu MLP, TF, and XTF obtained in <ref>.
In <ref>, we observe that the value and value-all using ground truth verbs steadily improve for all three models as the number of patches increases from 32 to 16 to 14 or the image size increases from 224 to 336.
For ViT-B32 and ViT-B16, the best performance is obtained by ClipSitu MLP but it drops with ViT-L14.
On the other hand, the maximum improvement is seen in ClipSitu XTF i.e. 5.1% for value-all compared to 1.6% and 2.8% for ClipSitu MLP and TF, respectively.
ClipSitu XTF is able to extract more relevant information when attending to more image patch tokens to produce better predictions.
To compare noun prediction using top-1 and top-5 predicted verbs, we use the best ClipSitu Verb MLP (ViT-L14@336px) from <ref>.
For both Top-1 and Top-5 predicted verbs, we observe a similar trend as the ground truth verb.
ClipSitu XTF again shows the most improvement in value and value-all to obtain the best performance among the three models across ground truth, Top-1 and Top-5 predicted verbs.
We compare the number of parameters for ClipSitu MLP, TF, and XTF using the ViT-L14-336 image encoder in <ref>.
We find that ClipSitu TF is the most efficient followed by ClipSitu XTF.
The most computationally expensive model is ClipSitu MLP with 12× the number of parameters of ClipSitu XTF and 29× the number of parameters of ClipSitu TF.
Therefore, we conclude that ClipSitu XTF not only performs the best at semantic role labeling but is also efficient in terms of the number of parameters compared to ClipSitu MLP.
§.§ Comparison with SOTA
In <ref>, we compare the performance of proposed approaches with state-of-the-art approaches on situation recognition.
We use ViT-L14@336px image encoder for all models – ClipSitu Verb MLP, ClipSitu MLP, ClipSitu TF, and ClipSitu XTF.
ClipSitu Verb MLP outperforms SOTA method CoFormer on Top-1 and Top-5 verb prediction by a large margin of 12.6% and 12.4%, respectively, on the imSitu test set, which shows the effectiveness of using CLIP image embeddings over directly predicting the verb from the images.
The comparison with existing works shows that with a well-designed MLP network, ClipSitu MLP outperforms state-of-the-art CoFormer <cit.> in all metrics comprehensively.
ClipSitu MLP, TF, and XTF also handily outperform the only other CLIP-based semantic role labeling method, CLIP-Event <cit.>.
ClipSitu XTF performs the best for noun prediction based on both the predicted top-1 verb and top-5 verbs for the value and value-all metrics.
ClipSitu XTF outperforms state-of-the-art CoFormer by a massive margin of 14.1% on top-1 value and by 9.6% on top-1 value-all using the Top-1 predicted verb on imSitu test set.
Qualitative Results
In <ref>, we compare the qualitative results of ClipSitu XTF with CoFormer.
ClipSitu XTF is able to correctly predict abstract verbs and role-based nouns where CoFormer falters.
For abstract verbs such as cramming (<ref>(b)), CoFormer focuses on the action of eating and hence incorrectly predicts the verb which also makes its noun predictions for the container and theme incorrect.
On the other hand, ClipSitu XTF correctly identifies the situation as cramming and the theme as hotdog which is generally associated with cramming in eating competitions.
CoFormer predicts the place as table and predicts the verb as dusting (<ref>(c)) instead of focusing on the action of nagging.
Finally, we see in <ref>(d) that CoFormer is confused by the visual context of kitchen as it predicts stirring instead of identifying the action which is drumming.
On the other hand, ClipSitu XTF correctly predicts drumming and the tool as drumsticks while still predicting the place as the kitchen.
§ CONCLUSION
We propose to leverage CLIP embeddings for semantic role labeling.
We show that multimodal ClipSitu MLP with large hidden dimensions outperforms the state-of-the-art semantic role labeling approach.
We propose a ClipSitu XTF model that employs cross-attention between image patch embeddings from the CLIP image encoder and text embeddings.
ClipSitu XTF sets the new state-of-the-art in semantic role labeling improving the current results by a large margin of 14.1% on top-1 value and by 9.6% on top-1 value-all.
We also show that our approach of using CLIP embeddings is much more effective than finetuning CLIP, given the relatively small size of the imSitu dataset.
Unlike VL-Adapter <cit.>, AIM <cit.>, EVL <cit.> and wise-ft <cit.>, our models can handle conditional inputs to solve the situation recognition task.
Despite its simplicity, our work shows that the traditional freeze-and-finetune approach can still be relevant when combined with modern neural network designs, especially when building on foundation models.
Acknowledgment This research/project is supported by the National Research Foundation, Singapore, under its NRF Fellowship (Award NRF-NRFF14-2022-0001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
plain
|
http://arxiv.org/abs/2307.01879v2
|
20230704184934
|
Stability Analysis Framework for Particle-based Distance GANs with Wasserstein Gradient Flow
|
[
"Chuqi Chen",
"Yue Wu",
"Yang Xiang"
] |
cs.LG
|
[
"cs.LG"
] |
Stability Analysis Framework for Particle-based Distance GANs with Wasserstein Gradient Flow
=============================================================================================
In this paper, we investigate the training process of generative networks that use a type of probability density distance, named the particle-based distance, as the objective function, e.g. MMD GAN, Cramér GAN, and EIEG GAN. However, these GANs often suffer from unstable training. We analyze the stability of the training process of these GANs from the perspective of probability density dynamics. In our framework, we regard the discriminator D in these GANs as a feature transformation mapping that maps high-dimensional data into a feature space, while the generator G maps random variables to samples that resemble the real data in terms of the feature space. This perspective enables us to perform a stability analysis of GAN training using the Wasserstein gradient flow of the probability density function. We find that the training process of the discriminator is usually unstable due to the min_G max_D E(G, D) formulation of GANs. To address this issue, we add a stabilizing term to the discriminator loss function. We conduct experiments to validate our stability analysis and our stabilizing method.
§ INTRODUCTION
Generative Adversarial Networks (GANs) <cit.> have emerged as a prominent framework for generative modeling in recent years, finding applications across a wide range of fields, including image style transformation <cit.>, super-resolution <cit.>, and 3D object generation <cit.> etc. In the GANs framework, there are two networks involved: the generator and the discriminator. The generator G is trained to map a random variable, typically drawn from a normal distribution i.e., z∼𝒩(0, I), to samples that resemble those from the data distribution (i.e., G(z) ∼ℙ_data). The discriminator D, on the other hand, is trained to evaluate the scores D(x) ∈ℝ^d of real or generated samples x. Together, the generator and discriminator networks are trained iteratively to improve the quality of generated samples until the generator is able to produce samples that gain the same score from the discriminator. The standard formulation of GANs is given by
min_Gmax_D E(G, D),
where the min and max of the objective function E with respect to G and D are taken over the sets of generator and discriminator functions.
Within the GANs framework, different probabilistic metrics can be used to define various objective functions for different GANs. For example, the original GAN <cit.> uses the JS divergence, and the WGAN <cit.> uses the Wasserstein distance. Other metrics include Cramér distance used by Cramér GAN <cit.>, the maximum mean discrepancy (MMD) used by the MMD GAN <cit.>, and the elastic interaction energy-based metric used by the EIEG GAN <cit.>. In this paper, we focus on the latter three GANs and introduce a unified expression for the probability density distance in these models, which we refer to as the particle-based distance. Furthermore, we analyze the stability of their training process using the Wasserstein gradient flow.
Our motivations originate from molecular dynamics <cit.> where we consider samples from ℙ_data and generated samples from ℙ_g as a system of interacting particles. The corresponding particle-based distance between the two distributions can be considered as the potential energy of this system. Under this framework, we treat the training process of the discriminator and generator as a process of evolution of the particles. We analyze the stability of the training by analyzing the density evolution equation for the particles, which is
the Wasserstein gradient flow of the particle-based distance. Our analysis shows that the training process of the discriminator is often unstable under the min_Gmax_D E(G, D) formulation of GANs. To address this issue, we propose adding a stabilizing term to the discriminator loss function.
To summarize, our contributions can be stated as follows:
* In Section <ref>, we propose a new framework for analyzing the training process of particle-based distance GANs using the Wasserstein gradient flow. The training stability is determined by the corresponding perturbation evolution equation. Our analysis reveals that the training of the discriminator is usually unstable.
* To address the unstable training issue of the discriminator in these GANs, we introduce a stabilizing term in the discriminator loss in Section <ref>.
* In Section <ref>, we conduct experiments to validate our analysis and the proposed stabilizing method.
Finally, we study the connection to existing works in Section <ref> and discuss the potential for extending our method of proving stability using functional gradient flow to other types of GAN models.
§ PRELIMINARIES
Notation. In this paper, we use p_r(x) to denote the probability density function corresponding to the data distribution ℙ_data, p_g(x) for the generated data distribution ℙ_g, and p_f(x) for the distribution of the samples in feature space ℙ_f. Without ambiguity, ∇ stands for ∇_x for conciseness.
For GANs, D_ϕ denotes the discriminator neural network parameterized by ϕ, and G_θ denotes the generator neural network parameterized by θ. The notation ‖·‖ denotes the l_2 norm on ℝ^d.
Generative Adversarial Networks (GANs) with particle-based distance.
We express the probability density distances of Cramér GAN <cit.>, MMD GAN <cit.>, and EIEG GAN <cit.> in a unified form, which we call the particle-based distance.
[Particle-based distance]
Consider two probability density functions p(x), q(x): ℝ^n↦ℝ, the particle-based distance between these two distributions is
E[p(x),q(x)] = ∫_ℝ^n∫_ℝ^n e(x,y)(p(x) - q(x))(p(y) - q(y)) dΩ_x dΩ_y,
where e(x,y) stands for a type of distance between x and y with e(x,y) ≥ 0.
The distance e(x, y) can be specified for the following GAN variants:
* Cramér GAN <cit.> uses a distance function given by
e(x,y) = ‖x-z_0‖ + ‖y-z_0‖ - ‖x-y‖,
for any choice of z_0 ∈ℝ^n. ( z_0 = 0 is often chosen to simplify notation <cit.>).
* MMD GAN with Gaussian RBF kernel <cit.> uses a distance function given by
e(x,y) = k_σ^rbf(x, y) = exp(-‖x-y‖^2/(2σ^2)),
where σ is a scaling factor.
* MMD GAN with rational quadratic kernel <cit.> uses a distance function given by
e(x,y) = k_α^rq(x, y) = (1 + ‖x-y‖^2/(2α))^-α,
where α is a scaling factor.
* EIEG GAN <cit.> uses a distance function given by
e(x,y) = 1/‖x-y‖^n-1,
where n is the dimension of x and y.
Remark. Given a type of distance e(x,y) ≥ 0 between x and y, then E[p(x),q(x)] = 0 if and only if p(x) = q(x).
The proposed particle-based distance can be written in the following form:
E[p,q] = - 2𝔼_x∼ p𝔼_y∼ qe(x,y) +𝔼_x∼ p𝔼_y∼ pe(x,y) + 𝔼_x∼ q𝔼_y∼ qe(x,y).
In Eq. (<ref>), the first term captures the interaction energy between samples from different distributions. The second and third terms, on the other hand, represent the self-energy of samples within their distributions, respectively.
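A plug-in estimate of this distance from two finite sample sets can be written in a few lines; the sketch below uses the Gaussian RBF choice of e as an assumed example:

    import torch

    def gaussian_e(x, y, sigma=1.0):
        # e(x, y) = exp(-||x - y||^2 / (2 sigma^2))
        return torch.exp(-torch.cdist(x, y) ** 2 / (2.0 * sigma ** 2))

    def particle_distance(p_samples, q_samples, e=gaussian_e):
        """Sample estimate of E[p, q]: the two self-energy terms plus the cross-interaction term."""
        return (e(p_samples, p_samples).mean()
                + e(q_samples, q_samples).mean()
                - 2.0 * e(p_samples, q_samples).mean())

    # example: particle_distance(torch.randn(128, 2), torch.randn(128, 2) + 1.0)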
The objective function for GAN variants based on the particle-based distance is
E(G,D) =
-2 𝔼_x∼ℙ_data, z∼𝒩(0,I)[e_D(x, G(z))]
+𝔼_x, x^'∼ℙ_data[e_D(x, x^')]+𝔼_z, z^'∼𝒩(0,I)[e_D(G(z), G(z^'))],
where e_D(x, y) = e(D(x), D(y)).
Wasserstein gradient flow and particle dynamics. Given a target distribution p_*(x), and a distance between p_t(x) and p_*(x), i.e., E[p_t(x), p_*(x)], a Wasserstein gradient flow is a curve for p_t(x) following the direction of the steepest descent of a functional
E(p_t(x)) = E[p_t(x),p_*(x)], which drives p_t(x) to converge to p_*(x).
[Wasserstein gradient flow <cit.>]
Given an energy functional E(p_t(x)), the Wasserstein gradient flow of density function p_t(x) is defined as
∂ p_t/∂ t = -∇_W E(p_t) = ∇·(p_t ∇δ E/δ p_t),
where ∇_W E(p_t) is the first variation of the functional E(p_t) in Wasserstein spaces and δ E/δ p_t is the first variation of the functional E(p_t) in Hilbert spaces.
The Wasserstein gradient flow possesses a physical interpretation in molecular dynamics.
When particles are initially distributed according to X_0 ∼ p_0, the distribution p_t(x) of the particles will approach p_*(x) as t →∞, following the dynamics described by the equation
dX_t = - ( ∇δ E/δ p_t) dt, X_0 ∼ p_0.
Specifically, Eq. (<ref>) defines the evolution equation of the particle X_t whose density distribution p_t satisfies Eq. (<ref>). The stability of the particle dynamics (Eq. (<ref>)) is consistent with the stability of the evolution equation of its corresponding density distribution (Eq. (<ref>)). Based on this, we propose our framework for analyzing the training stability of particle-based distance GANs.
§ STABILITY ANALYSIS
§.§ Analysis framework
Stability of training dynamics.
A training dynamics is stable if the perturbations appearing at some time during the training do not cause the perturbations to be magnified as the training is continued.
That is, the training dynamics is stable if the perturbations decay and eventually damp out as the training is carried forward.
Conversely, if the perturbations grow over time, the training dynamics is unstable.
A neutrally stable training dynamics is one in which the perturbations remain constant as the training progresses.
The framework of our stability analysis is as follows. The Wasserstein gradient flow of the particle-based distance E(p_t(x)) = E[p_t(x),p_*(x)] is
∂ p_t/∂ t = ∇·(p_t ∇δ E/δ p_t).
Consider a fixed point x_0 and a small perturbation v: near x_0, write p_t = p_t(x_0) + v, where |v| ≪ 1.
Substituting this into Eq. (<ref>) and, since |v| ≪ 1, keeping only the terms linear in v, we obtain
∂ v/∂ t = 𝒜v,
where 𝒜 is a linear operator acting on the perturbation function v.
From the evolution of the perturbation v in Eq. (<ref>): if |v| →∞ as t →∞, the dynamics is unstable; conversely, if |v| → 0 as t →∞, the dynamics is stable; if |v| remains constant, the dynamics is considered neutrally stable.
§.§ Training stability analysis
In our framework, the stability analysis of the training dynamics of particle-based distance GANs is based on the evolution equation of the distribution density, which is the Wasserstein gradient flow of the particle-based distance (Eq. (<ref>)).
We view the discriminator as a feature transformation mapping that projects the high-dimensional data space into a low-dimensional feature space in our framework.
To be specific, the discriminator with parameters ϕ maps data samples x∼ℙ_data and the generated samples G_θ(z) ∼ℙ_g to samples in the feature space, represented by D_ϕ(x) ∼ℙ_f_data and D_ϕ(G_θ(z)) ∼ℙ_f_g, respectively.
According to this understanding,
the generator G_θ is trained to generate samples whose distribution matches the distribution of the data in terms of feature space.
Specifically, the generator G_θ maps random variables z∼𝒩(0,I) to samples, such that ℙ_f_g approximates ℙ_f_data.
Such an understanding is also proposed in EIEG GAN <cit.>, in which the elastic discriminator maps the data into a two-dimensional feature space, while the generator is trained to minimize the elastic interaction energy-based distance between p_f_data and p_f_g in the feature space. More discussion can be found in Appendix <ref>.
The minmax formulation of GANs min_Gmax_D E(G, D)
is usually solved iteratively using gradient descent.
We update the parameters of the discriminator network D to maximize its objective function max_Dℒ_D while keeping the generator network G fixed. Then, we update the parameters of the generator network G to minimize the objective function min_Gℒ_G while keeping the discriminator network D fixed. We repeat this process for several iterations until convergence or until a stopping criterion is met.
We interpret the training process of these GANs as particle dynamics.
Starting from the objective function based on the particle-based distance (Eq. (<ref>)),
the loss function for the generator is
min_Gℒ_G = -2 𝔼_x∼ℙ_data, z∼𝒩(0,I) e_D (x, G(z)) +𝔼_z,z^'∼𝒩(0,I)e_D (G (z), G (z^')),
with a fixed D.
Correspondingly, the evolution of the generated sample dynamics in feature space is
dX_t = [ - 2𝔼_y∼ℙ_f_data (∇ e(X_t,y)) + 2𝔼_y∼ℙ_f_g (∇ e(X_t,y)) ]dt, X_0 ∼ℙ_f_𝒩(0,I).
The Wasserstein gradient flow is
∂ p_f_g/∂ t = ∇·(p_f_g∇δ E/δ p_f_g)
=∇·(p_f_g∇(2𝔼_y∼ℙ_f_ge(x, y)- 2𝔼_y∼ℙ_f_datae(x, y))),
where E = E[p_f_g,p_f_data] represents the particle-based distance between p_f_g and p_f_data, which correspond to the probability density functions of generated and real samples in feature space, respectively.
On the other hand, the loss function for the discriminator is
max_Dℒ_D = -2 𝔼_x∼ℙ_data, y∼ℙ_ge_D(x, y)+𝔼_x,x^'∼ℙ_datae_D(x, x^')+𝔼_y,y^'∼ℙ_ge_D(y, y^').
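As a concrete illustration, the two objectives above can be estimated from mini-batches as follows. This is our own minimal PyTorch sketch: the kernel e is passed in as a callable (e.g. a Gaussian RBF), the features fake_f = D(G(z)) and real_f = D(x) are computed outside, and the diagonal i = j terms of the within-batch expectations are kept for brevity.

```python
import torch

def mean_energy(a, b, e):
    # empirical estimate of E e(a, b) over all pairs drawn from two feature batches
    return e(a.unsqueeze(1), b.unsqueeze(0)).mean()

def generator_loss(fake_f, real_f, e):
    # L_G = -2 E[e_D(x, G(z))] + E[e_D(G(z), G(z'))], with D held fixed
    return -2 * mean_energy(real_f, fake_f, e) + mean_energy(fake_f, fake_f, e)

def discriminator_loss(fake_f, real_f, e):
    # L_D = -2 E[e_D(x, y)] + E[e_D(x, x')] + E[e_D(y, y')], to be maximized over D
    return (-2 * mean_energy(real_f, fake_f, e)
            + mean_energy(real_f, real_f, e)
            + mean_energy(fake_f, fake_f, e))

# example kernel: Gaussian RBF e(x, y) = exp(-|x - y|^2 / (2 sigma^2))
gaussian_rbf = lambda a, b, sigma=1.0: torch.exp(-((a - b) ** 2).sum(-1) / (2 * sigma ** 2))
```

For the training loop sketched earlier, these would be wrapped as, e.g., loss_D_fn = lambda D, fake, x: discriminator_loss(D(fake), D(x), gaussian_rbf).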
With G fixed, the evolution of the generated samples dynamics in feature space is
dX_t = [ 2𝔼_y∼ℙ_f_data (∇ e(X_t, y)) - 2𝔼_y∼ℙ_f_g (∇ e(X_t,y)) ]dt, X_0 ∼ℙ_f_g.
And the Wasserstein gradient flow is
∂ p_f_g/∂ t = -∇·(p_f_g∇δ E/δ p_f_g)
= -∇·(p_f_g∇(2𝔼_y∼ℙ_f_ge(x,y)- 2𝔼_y∼ℙ_f_data
e(x,y))).
It is worth noticing that the only difference between the Wasserstein gradient flow for the generator (Eq. (<ref>)) and that for the discriminator (Eq. (<ref>)) is the direction of evolution, i.e., the opposite signs in Eq. (<ref>) and Eq. (<ref>) coming from min_Gℒ_G and max_Dℒ_D. This is attributed to the min-max formulation min_Gmax_D E(G, D) of GANs, in which if the evolution in one direction is stable, the evolution in the other direction is unstable.
§.§ Results
We use the analysis framework described above to investigate the training stability of particle-based distance GANs. Central to this analysis is Eq. (<ref>), which defines the perturbation dynamics from the Wasserstein gradient flow Eq. (<ref>). We find that the evolution equation for the perturbation v in Fourier space always takes the form
d v̂/dt = ∓ C(2π)^2|ξ|^2v̂ℱ(e(x)).
The constant C ≥ 0 is associated with p_f_data. In the context of GANs, the negative sign indicates the perturbation dynamics of the generator, while the positive sign indicates the perturbation dynamics of the discriminator. The function e(x,y) can be expressed as e(x-y). ℱ(e(x)) denotes the Fourier transform[Here we define the Fourier transform of f(x) as ℱ(f(x))(ξ)=∫_ℝ^nf(x) e^-i 2 π(ξ·x) dx] of e(x).
We use v̂ to denote the Fourier transform of v, and ξ to represent the Fourier mode.
As shown in Eq. (<ref>), if ℱ(e(x))(ξ) > 0 for all ξ, the training dynamics of the generator is stable, with v̂→ 0 as t →∞, and the corresponding training of the discriminator is unstable. Conversely, if ℱ(e(x))(ξ) < 0 for all ξ, the situation is reversed, with unstable generator training and stable discriminator training. If the sign of ℱ(e(x))(ξ) depends on |ξ|, both the generator and discriminator training are unstable for some values of ξ. This result provides valuable insights into the stability of various particle-based distance GANs and can guide the development of new stabilizing methods.
The stability analysis results for Cramér GAN <cit.>, MMD GAN <cit.>, and EIEG GAN <cit.> based on our framework are presented in Table <ref>. For Cramér GAN, we observe that the training of the generator is stable, while the training of the discriminator is unstable. Similarly, for MMD GAN with a Gaussian RBF kernel, the training of the generator is stable, but the training of the discriminator is unstable. In the case of MMD GAN with a rational quadratic kernel, the situation is more complex, as the function ℱ((1 + x^2/2α)^-α) takes diverse forms for different values of α, which leads to the training stability depending on α. For EIEG GAN, we find that the training of the generator is stable, while the training of the discriminator is unstable. Detailed proofs and experimental results to support these analytical findings are provided in the Appendix.
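The sign criterion can also be checked numerically for any candidate kernel profile. The following sketch is our own illustration (the grid sizes are arbitrary, and truncating slowly decaying kernels introduces small artifacts near ξ = 0); it approximates ℱ(e(x)) on a grid and reports its range.

```python
import numpy as np

x = np.linspace(-50, 50, 2 ** 14 + 1)
dx = x[1] - x[0]

def fourier_profile(e_vals):
    # discrete approximation of F(e)(xi) for an even, real kernel profile e
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(e_vals))).real * dx

kernels = {
    "Gaussian RBF (sigma=1)": np.exp(-x ** 2 / 2),
    "rational quadratic (alpha=1)": 1.0 / (1.0 + x ** 2 / 2),
}
for name, e_vals in kernels.items():
    F = fourier_profile(e_vals)
    print(f"{name}: min F = {F.min():.3g}, max F = {F.max():.3g}")
# a profile with F > 0 for all modes corresponds to a stable generator
# and an unstable discriminator in the analysis above
```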
To provide a clear and concise demonstration of the stability analysis under the proposed framework, we use MMD GAN with a Gaussian RBF kernel <cit.> as an example.
Example 1 (MMD GAN with Gaussian RBF kernel <cit.>). For MMD GAN with Gaussian RBF kernel, the evolution equation of the perturbation v in generator training dynamics is
∂ v/∂ t = C Δ∫_ℝ^2(e^-x-y^2/2σ^2)vdΩ_y = C Δ (e^-x^2/2σ^2 * v)(x),
where Δ is Laplacian operator and C>0.
Taking Fourier Transform on both sides of Eq. (<ref>), we have
d v̂/dt = -C(2π)^2|ξ|^2v̂ℱ(e^-x^2/2σ^2) = - C(2π)^2σ|ξ|^2v̂e^-σ^2|ξ|^2/4.
Thus in Fourier spaces, the solution for the perturbation term v̂ is
v̂ = v̂_0e^-(C(2π)^2σ|ξ|^2e^-σ^2|ξ|^2/4)t,
where v̂_0 is the initial value of v̂. Thus the perturbations with all |ξ| decay, i.e., |v̂| = |v̂_0|e^-(C(2π)^2 σ|ξ|^2e^-σ^2|ξ|^2/4)t→ 0 as t →∞, which indicates that the training dynamics for the generator is stable.
On the other hand, in the discriminator training dynamics, following the framework we proposed, the evolution equation of the perturbation v̂ in Fourier spaces is
d v̂/dt = C(2π)^2|ξ|^2v̂ℱ(e^-x^2/2σ^2) = C(2π)^2σ|ξ|^2v̂e^-σ^2|ξ|^2/4.
The solution for the perturbation term v̂ is
v̂ = v̂_0e^((2π)^2Cσ|ξ|^2e^-σ^2|ξ|^2/4)t,
where v̂_0 is the initial value of v̂. Thus the perturbations at all |ξ| grow, i.e., |v̂| = |v̂_0|e^(C(2π)^2 σ|ξ|^2e^-σ^2|ξ|^2/4)t→∞
as t →∞, which indicates that the training dynamics for the discriminator is unstable.
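The two cases of Example 1 can be checked numerically by evolving a few Fourier modes with the closed-form rate C(2π)^2σ|ξ|^2e^{-σ^2|ξ|^2/4}; the constants in the following sketch of ours are arbitrary.

```python
import numpy as np

C, sigma = 1.0, 1.0
xi = np.array([0.5, 1.0, 2.0])                                # a few Fourier modes
rate = C * (2 * np.pi) ** 2 * sigma * xi ** 2 * np.exp(-sigma ** 2 * xi ** 2 / 4)

t = np.linspace(0.0, 5.0, 6)
v0 = 1e-3
v_gen = v0 * np.exp(-np.outer(t, rate))    # generator dynamics: |v| decays to 0
v_disc = v0 * np.exp(np.outer(t, rate))    # discriminator dynamics: |v| blows up

print(v_gen[-1])     # ~0 for every mode
print(v_disc[-1])    # very large for every mode
```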
§ STABILIZING METHOD
The above analysis of particle-based distance GANs has revealed that the sign of ℱ(e(x)) is the crucial factor in determining the stability of the training process. To stabilize an unstable training process, we propose an approach that introduces a stabilizing term s(x,y) = s(x-y) into the particle-based distance. This stabilizing term can be added to the generator or discriminator loss such that ℱ(e(x) - ϵ s(x)) > 0 for the generator and ℱ(e(x) - ϵ s(x)) < 0 for the discriminator, for all Fourier modes ξ.
Without loss of generality, here we focus on the case where the training of the discriminator is unstable. The same approach can be applied to the case of unstable generator training.
To address the instability of the discriminator training, we propose adding a stabilizing term to the particle-based distance function in the discriminator loss ℒ_D. Specifically, we define a modified particle-based distance function e(x,y) = e(x,y) - ϵ s(x,y), where s(x,y) is the stabilizing term and ϵ > 0 is a hyperparameter.
The stabilized loss function ℒ^s_D of the discriminator is
ℒ^s_D = -2 𝔼_x∼ℙ_data, y∼ℙ_ge_D(x, y)+𝔼_x,x^'∼ℙ_datae_D(x, x^')+𝔼_y,y^'∼ℙ_ge_D(y, y^'),
where
e_D(x,y) = e_D(x,y) - ϵ s_D(x,y) = e(D(x),D(y)) - ϵ s(D(x),D(y)).
Here the stabilizing term s(x,y) = s(x-y) can be controlled by ϵ >0.
Consequently, the evolution equation for perturbation v̂ in Fourier space (Eq. (<ref>))
becomes:
d v̂/dt = C(2π)^2|ξ|^2v̂ℱ(e(x)- ϵ s(x)).
By selecting an appropriate form of the stabilizing term s(x,y) and parameter ϵ, we can ensure that ℱ(e(x) - ϵ s(x))(ξ) < 0 for all ξ. This condition guarantees that |v̂| → 0 as t →∞, indicating that the training process of the discriminator becomes stable.
Choice of s(x,y). First, we propose rescaled distances e_k(x,y) parameterized by a scalar k:
* Rescale Gaussian RBF kernel (Fig. <ref>):
e_σ(x,y) =1/σexp(-1/2σ^2x-y^2).
* Rescale rational quadratic kernel (Fig. <ref>):
e_α(x,y) = α (1 + x - y^2/2α)^-α.
* Elastic interaction term:
e_m(x,y) = 1/x-y^m.
In EIEG GAN <cit.>, the stabilizing term s(x,y) = e_m(x,y) is a higher-order term.
Specifically, the stabilized distance is e_m(x,y) = 1/r^n-1 - ϵ1/r^m, where the stabilizing term is s(x,y) = 1/r^m with m>n-1.
This form is consistent with the Lennard-Jones potential <cit.>, V_LJ = 4ϵ[(σ/r)^12 - (σ/r)^6],
in molecular dynamics.
Here we can also propose a similar stabilizing term for MMD GANs.
The stabilized distance for MMD GAN with Gaussian RBF kernel <cit.> is (Fig. <ref>),
e_σ(x,y) = e_σ_1(x,y) - ϵ e_σ_2(x,y),
where σ_2 < σ_1 and ϵ > e^(σ_2^2 - σ_1^2)|ξ|^2/4 for all Fourier modes ξ (taking ϵ > 1 suffices), such that ℱ(1/σ_1e^-x^2/2σ_1^2 - ϵ1/σ_2e^-x^2/2σ_2^2) < 0 for all ξ,
and thus the training of the discriminator becomes stable in our framework.
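Using the transform e^{-σ^2|ξ|^2/4} of the rescaled Gaussian kernel, the sign condition for the stabilized distance can be verified directly; the parameter values in this small sketch of ours are illustrative only.

```python
import numpy as np

sigma1, sigma2, eps = 4.0, 1.0, 1.5          # sigma2 < sigma1 and eps > 1
xi = np.linspace(0.0, 10.0, 1001)
F = np.exp(-sigma1 ** 2 * xi ** 2 / 4) - eps * np.exp(-sigma2 ** 2 * xi ** 2 / 4)
print(F.max())   # < 0 for every sampled mode: the stabilized discriminator is stable
```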
The stabilized distance for MMD GAN with rational quadratic kernel <cit.> is (Fig. <ref>),
e_α(x,y) = e_α_1(x,y) - ϵ e_α_2(x,y),
where α_2 > α_1.
The choice of α_1, α_2 is more involved for the rational quadratic kernel, and we provide more discussion in the Appendix.
Parameter ϵ. The selection of the parameter ϵ is critical for the success of the stabilizing approach. On the one hand, ϵ should be large enough to stabilize the training by ensuring that ℱ(e(x)- ϵ s(x))<0. For example, in MMD GAN with a Gaussian RBF kernel, we found that ϵ≥ 1 is sufficient to stabilize the training. On the other hand, ϵ cannot be too large, as this would cause the data points from the same distribution to be too scattered in the feature space, and would also reduce the adversarial nature of the discriminator.
To understand the effect of the stabilizing term on the training process, we use a molecular dynamics analogy to interpret the optimization of the discriminator loss with the stabilizing term, i.e., max_Dℒ^s. In this analogy, we consider the force between two samples in the feature space, where e(x,y) represents the potential energy between them. If e(x,y) < 0, this indicates that the force between the two particles is repulsive, while if e(x,y) > 0, the force is attractive. As shown in Fig. <ref> and Fig. <ref>, when two samples are close to each other, the force between them is repulsive, while when they are far apart, the force between them is attractive. Therefore, if the stabilizing term ϵ is set too large, it will cause too much repulsion between samples from the same distribution, resulting in the samples being spread too thinly in the feature space. On the other hand, if ϵ is too small, it may lead to training instability and mode collapse, as the samples in the feature space collapse. More discussion can be found in Appendix.
A similar stabilizing term can also be added to the MMD GAN with Gaussian RBF kernel <cit.>, i.e., e^s(x,y) = 1/r^m. This construction is physically meaningful, as it corresponds to the Buckingham potential[Buckingham potential<cit.> Φ_12(r) = A e^(-Br) - C/r^6.] in the case of MMD GAN with Gaussian kernel <cit.>, and the Lennard-Jones potential[Lennard-Jones- type potential<cit.> V_LJ = 4ϵ[(σ/r)^12 - (σ/r)^6].] in the case of EIEG GAN <cit.>.
§ RELATED WORKS
MMD GAN related work.
In the original MMD GAN <cit.>, the discriminator is viewed as a kernel selection mechanism.
Here, we propose an alternative perspective that the discriminator can be regarded as a feature transformation mapping.
This view provides insights into various approaches to improve MMD GAN performance by preserving more information about the data and samples in the feature space.
For example, in <cit.>, the proposed repulsive discriminator loss can be understood from our perspective as preventing sample collapse in feature space. In <cit.>, the addition of consistency regularization to the discriminator loss can be understood as grouping similar samples closely in the feature space.
Furthermore, we utilize this perspective to analyze the training stability of MMD GAN via Wasserstein gradient flow. Our results suggest that MMD GAN training is unstable.
This finding is consistent with some experimental results in <cit.>.
Our approach is simpler and more accessible than previous theoretical works, such as <cit.>, which analyze the convergence of MMD GAN through gradient flow. Additionally, to the best of our knowledge, our work is the first to perform training stability analysis on MMD GANs.
Stabilization methods for GANs.
Training stability is a critical issue in GANs, and various methods have been proposed to address this challenge <cit.>. One common approach involves imposing Lipschitz conditional restrictions on the discriminator through normalization and regularization techniques. Normalization methods such as spectral normalization <cit.> and gradient normalization <cit.> have been effective in stabilizing training. Regularization methods, such as adding a gradient penalty to the discriminator loss <cit.>, have also been widely adopted. Using our analysis framework, we analyze the impact of adding a gradient penalty on training stability and find that it does indeed stabilize training (see Appendix). When a gradient penalty is added to the discriminator's loss as a stabilizing term, it appears in the gradient flow as an additional Laplacian term. However, this can cause the discriminator to become overly smooth, and the generated samples may become connected, leading to mode collapse, while the proposed stabilizing term has no such problem. Although spectral normalization is an effective method for stabilizing GAN training, it has been reported that it may lead to mode collapse in SNGAN <cit.>. Our stabilizing term, which creates a repulsive force, can prevent sample points from collapsing together, thereby addressing the mode collapse issue. More discussion can be found in the Appendix.
§ EXPERIMENTS
To validate both the proposed analysis and stabilizing method, we take an example of MMD GAN with a Gaussian RBF kernel k_σ^rbf(x, y),
and conduct experiments on synthetic and real datasets (CIFAR-10 <cit.>). More experiments and detailed settings are provided in Appendix.
Gaussian Mixture.
We conduct Gaussian mixture experiments to compare our method with the original MMD GAN <cit.> and MMD GAN-GP <cit.>.
We sample from a mixture of eight two-dimensional Gaussian distributions, and all models are trained with 2000 particles.
The results are shown in Fig. <ref>: (1) The generated samples from MMD GAN are disorganized (Fig. <ref>), which is caused by the instability of the training process. (2) Mode collapse occurs in MMD GAN-GP, where the generator fails to grasp all the modes of the distribution, as shown in Fig. <ref>. Also in Fig. <ref>, the generated samples all link together. This is because the gradient penalty added to the discriminator loss as a stabilizer makes the generated sample points more scattered in the feature space. (3) As shown in Fig. <ref>, our proposed method successfully grasps all the modes of the Gaussian mixture.
[Figure: Training curves on CIFAR-10 of MMD GAN with and without the stabilizing term.]
Image Generation.
To verify the results of our stability analysis presented in Table <ref>, which showed that the training dynamics of MMD GAN with a Gaussian RBF kernel is unstable, and to demonstrate the effectiveness of our approach, we conduct image generation experiments on the CIFAR-10 dataset.
We use the same network architecture and hyperparameters as in the original MMD GAN paper <cit.>.
We then use a linear combination of particle-based distances with different scales, i.e., e_rbf(x,y) = ∑_i = 1^Ke_σ_i(x,y) as in <cit.>,
where K = 4 and σ_i = {2,4,8,16}.
For the stabilizing term, we set s(x,y) = ∑_i = 1^Ke_σ_i(x,y), where σ_i = {1,√(2),2,2√(2)}.
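A sketch of the stabilized multi-scale distance used in this experiment is given below. This is our own reconstruction: whether the 1/σ rescaling is applied to each term, and how the distance is combined with the discriminator features, are implementation choices we make for illustration.

```python
import torch

def multi_scale_rbf(x, y, sigmas):
    # sum_i (1/sigma_i) * exp(-|x - y|^2 / (2 sigma_i^2)) over all feature pairs
    d2 = ((x.unsqueeze(1) - y.unsqueeze(0)) ** 2).sum(-1)
    return sum(torch.exp(-d2 / (2 * s ** 2)) / s for s in sigmas)

def stabilized_e(x, y, eps=1.0):
    # e(x, y) - eps * s(x, y) with the scale sets used in this experiment
    e = multi_scale_rbf(x, y, [2.0, 4.0, 8.0, 16.0])
    s = multi_scale_rbf(x, y, [1.0, 2 ** 0.5, 2.0, 2 * 2 ** 0.5])
    return e - eps * s
```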
To provide a more intuitive representation of the stability of GAN training for image generation, we use the Inception score <cit.> to plot the training curve.
The results are shown in Fig. <ref>.
As indicated in the figure, the original MMD GAN suffers from training instability, while our stabilizing term significantly improves the stability of training and enhances the quality of the generated images.
§ CONCLUSION AND DISCUSSION
This study introduces a novel framework for analyzing the training stability of particle-based distance GANs using the Wasserstein gradient flow.
We use the proposed perturbation evolution dynamics to analyze the training stability and our analysis reveals that the training of these GANs is unstable.
Moreover, we develop a new stabilizing method by introducing a stabilizing term in the loss function of the unstable network. The empirical results
validate our analysis and demonstrate the effectiveness of the proposed stabilizing method.
Our analysis in this paper focuses on particle-based distance GANs.
A property of those GANs is that the probability density function of the samples in the feature space, p_f_data, is smooth, enabling us to use the Wasserstein gradient flow to analyze their evolution stability.
For Vanilla GAN <cit.>, the probability density function in feature space of the generated samples is discrete.
Therefore, the Wasserstein gradient flow framework,
which describes the evolution of a smooth probability density function, cannot apply to Vanilla GAN.
Alternatively, we can derive the perturbation evolution dynamics based on the functional gradient flow to analyze the training stability of the Vanilla GAN's generator and discriminator training, i.e., ∂ G/∂ t = -δℒ_G/δ G and ∂ D/∂ t = δℒ_D/δ D.
We provide this analysis in the Appendix.
Additionally, our framework can be extended to the case where we take account of the neural network architectures and analyze the perturbation evolution equation through the gradient flow of the network parameters. By finding the perturbation evolution equation for the corresponding gradient flow, our framework can be extended to various training stability analysis.
Note that our analytical framework is continuous, and we do not include the stability of discrete iterations brought about by the optimization algorithm or the structure of the neural network.
To take these into account, we can consider the gradient flow of the network parameters.
We believe that our stability analysis framework can
also be extended to the gradient flow of neural network parameters when the neural network structure can be naturally incorporated, providing a promising avenue for future work.
§ ACKNOWLEDGEMENTS
The work of Y.X. was supported by the Project of Hetao Shenzhen-HKUST Innovation Cooperation Zone HZQB-KCZYB-2020083.
Appendix
§ PHYSICAL INTERPRETATION OF PROPOSED FRAMEWORK
Section <ref> describes our proposed stability analysis framework, which is based on the perspective that the discriminator can be viewed as a feature transformation mapping. Figure <ref> provides an intuitive illustration of this perspective.
In our framework, we analyze the training stability of particle-based GANs through the evolution equation of generated samples in feature space. We offer a physical interpretation of the training process of the generator and discriminator. Fig. <ref> illustrates that in the training process of the generator min_G ℒ_G (Eq. (<ref>)), the generated samples experience a repulsive force between each other while the force between generated and real data samples is attractive. Fig. <ref> shows that in the discriminator training max_D ℒ_D (Eq. (<ref>)), the force between the generated samples is attractive while the force between the generated and real data samples is repulsive.
Fig. <ref> demonstrates that during the training of a stabilizing discriminator max_D ℒ_D^s (Eq. (<ref>)), the force between the generated and real data samples is repulsive, and if two generated samples are close to each other, the force between them is also repulsive, otherwise it is attractive. The stability effect for adding the stabilizing term can be found in the main paper.
§ PROOFS
Throughout this section, we analyze the stability in the simplest case where the perturbation is added to a density that is constant in space (the constant value may change with time).
§.§ Stability analysis for particle-based distance GANs
In this section, we demonstrate how to derive the perturbation evolution equation (Eq. (<ref>)) of the Wasserstein gradient flow of particle-based distance GANs. We give detailed proofs for the analytical results presented in Table <ref> in section <ref>.
§.§.§ Derivation of perturbation evolution equation
The perturbation evolution equation of the Wasserstein gradient flow of particle-based distance GANs in Fourier space is
d v̂/dt = ∓ C|ξ|^2v̂ℱ(e(x)).
The constant C ≥ 0 is associated with p_f_data. In the context of GANs, the negative sign indicates the training dynamics of the generator, while the positive sign indicates the training dynamics of the discriminator. The function e(x,y) = e(x-y) and ℱ(e(x)) denotes the Fourier transform of e(x)
[Here we define the Fourier transform of f(x) as ℱ(f(x))(ξ)=∫_ℝ^nf(x) e^-i 2 π(ξ·x) dx].
We use v̂ to denote the Fourier transform of v, and ξ to represent the Fourier mode.
Consider generated samples G(z) in the data space, with a fixed discriminator D, and denote the generated samples in feature space as X_t := D(G(z)) .
The evolution of X_t can be described by
dX_t = [ - 2𝔼_y∼ℙ_f_data (∇ e(X_t,y)) + 2𝔼_y∼ℙ_f_g (∇ e(X_t,y)) ]dt = -∇ (δ E/δ p_f_g), X_0 ∼ℙ_f_𝒩(0, I).
The corresponding density flow is
∂ p_f_g/∂ t =
-∇·[ p_f_g(-∇δ E/δ p_f_g) ]
=
∇·(p_f_g∇δ E/δ p_f_g)
=
∇·[
p_f_g∇(2𝔼_y∼ℙ_f_ge(x, y)- 2𝔼_y∼ℙ_f_datae(x, y))]
=
2∇·[ p_f_g∇( ∫_ℝ^d(e(x, y)p_f_g(y)- e(x, y)p_f_data(y))dy)],
where p_f_g is the constant-valued density of the generated samples X_t in the feature space.
Assume that we add a small perturbation v, |v| ≪ 1,
to the (spatially constant) density p_f_g,
and denote C_0:=p_f_g≥ 0, which may vary with time.
Substituting the perturbed density C_0 + v into the Wasserstein gradient flow (Eqn. (<ref>)) and only keeping the linear term of v since |v| ≪ 1, we can obtain
∂ v/∂ t = 2C_0 Δ∫_ℝ^de(x,y)vdy= 2C_0 Δ∫_ℝ^de(x-y)vdy = 2C_0 Δ (e(x) * v)(x).
Taking Fourier transform on both sides of the equation
d v̂/dt = - 2C_0 (2π)^2|ξ|^2v̂ℱ(e(x)) = - C|ξ|^2v̂ℱ(e(x)),
where v̂ is the Fourier transform of v, ξ is the Fourier mode, and C = 2(2π)^2C_0.
Thus in the Fourier space, the solution for the perturbation term v̂ is
v̂ = v̂_0e^-C|ξ|^2 ℱ(e(x)) t,
where v̂_0 is the initial value of v̂. If ℱ(e(x))>0 for all ξ, then the perturbations at all |ξ| decay as training proceeds, i.e., |v̂| = |v̂_0|e^-C|ξ|^2 ℱ(e(x)) t→ 0 as t →∞, and the training of the generator is stable;
if ℱ(e(x))<0 for all ξ, then the perturbations at all |ξ| grow as training proceeds, i.e., |v̂| = |v̂_0|e^-C|ξ|^2 ℱ(e(x)) t→∞ as t →∞, and the training of the generator is unstable; if the sign of ℱ(e(x))(ξ) depends on |ξ|, the training of the generator is unstable for the values of ξ where ℱ(e(x))(ξ)<0.
With a fixed generator G, the evolution of the generated samples in feature space X_t := D(G(z)) is
dX_t = [2𝔼_y∼ℙ_f_data (∇ e(X_t,y)) - 2𝔼_y∼ℙ_f_g (∇ e(X_t,y)) ]dt = ∇ (δ E/δ p_f_g), X_0 ∼ℙ_f_𝒩(0, I).
Thus the corresponding density flow for the generated samples in feature space X_t is
∂ p_f_g/∂ t = -∇·(p_f_g (∇δ E/δ p_f_g))
=∇·(p_f_g∇(-2𝔼_y∼ℙ_f_ge(x, y)+ 2𝔼_y∼ℙ_f_datae(x, y)))
= -2∇·(p_f_g∇( ∫_ℝ^d(e(x, y)p_f_g(y)- e(x, y)p_f_data(y))dy)) .
Thus the perturbation evolution equation is
∂ v/∂ t = -2C_0 Δ∫_ℝ^de(x,y)vdy= -2C_0 Δ∫_ℝ^de(x-y)vdy = -2C_0 Δ (e(x) * v)(x).
Taking Fourier transform on both sides of the equation
d v̂/dt = 2C_0 (2π)^2|ξ|^2v̂ℱ(e(x)) = C|ξ|^2v̂ℱ(e(x)).
It is worth noticing that the only difference between the perturbation evolution equations of the generator and the discriminator is the sign in front of the right-hand side: one is negative and the other is positive. This is caused by the min-max formulation min_Gmax_D E(G, D).
In conclusion, the evolution equation for the perturbation v in Fourier spaces always takes the form
d v̂/dt = ∓ C|ξ|^2v̂ℱ(e(x)).
Remark. Here, we assume that p_f_data= C_0. Stability and instability are local effects. In the case where p_f_data is not constant, p_f_data can still be approximated locally by a constant, which allows the above analysis to be applied.
From the above proposition, if ℱ(e(x))(ξ) > 0 for all ξ, the training dynamics of the generator is stable, with v̂→ 0 as t →∞, and the corresponding training of the discriminator is unstable. Conversely, if ℱ(e(x))(ξ) < 0 for all ξ, the situation is reversed, with unstable generator training and stable discriminator training. If the sign of ℱ(e(x))(ξ) depends on |ξ|, both the generator and discriminator training are unstable for some value of ξ.
§.§.§ Stability analysis for Cramér GAN
In Cramér GAN, the training process of the generator, i.e.,min_G ℒ_G, is stable, while the training process of the discriminator, i.e., max_D ℒ_D, is unstable.
The objective function for Cramér GAN is
E(G,D) =
-2 𝔼_x∼ℙ_data, z∼𝒩(0,I)[D(x) + D(G(z)) - D(x)-D(G(z))]
+𝔼_x, x^'∼ℙ_data[D(x) + D(x^') - D(x)-D(x^'))]
+𝔼_z, z^'∼𝒩(0,I)[D(G(z^')) + D(G(z)) - D(G(z))-D(G(z^'))]
= -2 𝔼_x∼ℙ_data, z∼𝒩(0,I)[ - D(x)-D(G(z))]
+𝔼_x, x^'∼ℙ_data[ - D(x)-D(x^'))]
+𝔼_z, z^'∼𝒩(0,I)[ - D(G(z))-D(G(z^'))].
Thus in this case the actual particle-based distance is e^*(x,y) = -x-y. The Fourier transform for it is
ℱ(-x)= C_n/|ξ|^n+1,
where n is the dimension of the feature space and C_n >0. Since ℱ(-x) > 0 for all ξ≠ 0, from the above proposition we know that for Cramér GAN the training of the generator is stable while the training of the discriminator is unstable.
§.§.§ Stability analysis for MMD GAN with Gaussian RBF kernel
In MMD GAN with Gaussian RBF kernel, the training process of the generator, i.e., min_G ℒ_G, is stable, while the training process of the discriminator, i.e., max_D ℒ_D, is unstable.
For MMD GAN with Gaussian RBF kernel, we have
e(x,y) = e^-x-y^2/2σ^2. The Fourier transform for it is
ℱ(e^-x^2/2σ^2) = σ e^-σ^2|ξ|^2/4,
where for all ξ, ℱ(e^-x^2/2σ^2) = σ e^-σ^2|ξ|^2/4>0. From the above proposition, we know that for MMD GAN with Gaussian RBF kernel, the training of generator is stable while the training for discriminator is unstable.
§.§.§ Stability analysis for MMD GAN with rational quadratic kernel
In MMD GAN with rational quadratic kernel, the stability of the training process depends on the value of α.
For MMD GAN with rational quadratic kernel, we have e(x,y) = (1 + x - y^2/2α)^-α. It should be noted that the Fourier transform of (1 + |x|^2/2α)^-α does not have a uniform form that depends on α. Here we only list some examples.
* For α = 1/2 we have,
ℱ( (1 + x^2/2α)^-α) = ℱ(1/21/√(1+x^2)) = K_0(ξ),
where K_0(ξ) is the Bessel function and K_0(ξ)>0. From the above proposition, we know that for α = 1/2, the training of generator is stable while the training for discriminator is unstable.
* For α = 1 we have,
ℱ((1 + x^2/2α)^-α) = ℱ(1/1+x^2/2) = e^|ξ|,
where for all ξ, 1/α e^|ξ|>0. From the above proposition, we know that for α=1, the training of generator is stable while the training for discriminator is unstable.
* For α = 2 we have,
ℱ( (1 + x^2/2α)^-α) = ℱ(1/(1+x^2/4)^2) = 1/2(-|ξ|+8)e^|ξ|,
where when |ξ|>8, 1/α(-|ξ|+8)e^|ξ| >0 the training of discriminator is unstable; when |ξ|<8, 1/α(-|ξ|+8)e^|ξ| <0 the training of generator is unstable. In this case the training for both generator and discriminator is unstable.
* For α = 3 we have,
ℱ((1 + x^2/2α)^-α) = ℱ(1/(1+x^2/6)^3) = 1/4 (|ξ|^2 - 3|ξ|+3)e^|ξ|,
where for all ξ, 1/α3/4 (|ξ|^2 - 3|ξ|+3)e^|ξ|>0. From the above proposition, we know that for α=3, the training of generator is stable while the training for discriminator is unstable.
To summarize, we have analyzed the stability properties of MMD GANs with a rational quadratic kernel for various values of α. Our results indicate that the stability of the GAN training process depends on the specific value of α, with some values leading to stable training of the generator while others do not. However, in all cases, the training process of the discriminator is found to be unstable.
§.§.§ Stability analysis for EIEG GAN
In EIEG GAN, the training process of the generator, i.e.,min_G ℒ_G, is stable, while the training process of the discriminator, i.e., max_D ℒ_D, is unstable.
For EIEG GAN, we have e(x,y) = 1/x-y^d-1. The Fourier transform for it is
ℱ(1/x^d-1) = 1/|ξ|,
where for all ξ, 1/|ξ|>0. From the above proposition, we know that for EIEG GAN, the training of generator is stable while the training for discriminator is unstable.
§.§ Stability analysis for stabilized particle-based distance GANs
We give training stability analysis for our proposed stabilizing method in section <ref>.
§.§.§ Stability analysis for stabilized Cramér GAN
Rescale Cramér distance
e_β(x,y) = x^β+y^β-x-y^β,
where β∈ (0,1].
In the stabilized function ℒ^s_G (Eq. (<ref>), the stabilized distance is
e_β(x,y) = e_β_1(x,y) - ϵ e_β_2(x,y),
where β_2 > β_1.
Here we prove the stability of Cramér GAN where the dimension of the feature space is 1. And we demonstrate the training stability of our stabilized method with a special case that e_β(x,y) = e_1/2(x,y) - ϵ e_1(x,y).
For stabilized Cramér GAN, the training process of the stabilized generator G to maxℒ^s_G with the stabilized distance e_β(x,y) = e_1/2(x,y) - ϵ e_1(x,y) is stable for ϵ > 1/2√(2), |ξ| ≠ 0.
For the stabilized e_β(x,y) = e_1(x,y) - ϵ e_2(x,y), the actual particle-based distance is e^*(x,y) = -|x-y|^1 + ϵ |x-y|^2. The Fourier transform for it is
ℱ(-|x|^1/2 + ϵ |x|) = - 4/|ξ|^3/2 + ϵ1/|ξ|^2 .
In this case
d v̂/dt = - C|ξ|^2v̂ℱ(e_1/2(|x|)-ϵ e_1(|x|)) = -C|ξ|^2v̂(- 4/|ξ|^3/2 + ϵ1/|ξ|^2) = -Cv̂(ϵ - 4 |ξ|^1/2).
Knowing that in the training of D, we always normalize the input data X into a finite domain, i.e., [-1,1] × [-1,1]. We know that |ξ| is bounded and min |ξ| = 1/2 for |ξ| not equal to 0. When ϵ > 1/2√(2), (1 - 4ϵ |ξ|^1/2)<0, the training process is stable. When |ξ| = 0, d v̂/dt =Cv̂(1 - 4ϵ |ξ|^1/2) = Cv̂. Hence, the training process is unstable when ξ = 0.
Remark. Under our analysis framework, β = 1 is not a good choice as a distance, since for the case dv̂/dt = C|ξ|^2 v̂ℱ(e_1(x)) = Cv̂|ξ|^2(1/|ξ|) = Cv̂, is always unstable.
§.§.§ Stability analysis for stabilized MMD GAN with Gaussian RBF kernel
Rescale MMD GAN with Gaussian RBF kernel
e_σ(x,y) = 1/σexp(-1/2σ^2x-y^2),
where σ∈ (0,+∞).
In the stabilized function ℒ^s_D (Eq. (<ref>), the stabilized distance is
e_σ(x,y) = e_σ_1(x,y) - ϵ e_σ_2(x,y),
where σ_2 < σ_1.
For stabilized MMD GAN with Gaussian RBF kernel, the training process of the stabilized discriminator D to maxℒ^s_D with the stabilized distance e_σ(x,y) = e_σ_1(x,y) - ϵ e_σ_2(x,y) is stable for ϵ > 1 and σ_2<σ_1.
For the stabilized e_σ(x,y) = e_σ_1(x,y) - ϵ e_σ_2(x,y), the Fourier transform for it is
ℱ(1/σ_1e^-x^2/2σ_1^2 - ϵ1/σ_2e^-x^2/2σ_2^2) = e^-σ_1^2|ξ|^2/4 - ϵ e^-σ_2^2|ξ|^2/4 = e^-σ_1^2|ξ|^2/4(1 - ϵ e^-(σ_2^2-σ_1^2)|ξ|^2/4),
where since σ_2 < σ_1, (1 - ϵ e^-(σ_2^2-σ_1^2)|ξ|^2/4) < 0 with ϵ > 1. Thus we have ℱ(e_σ(x,y))<0, which indicates that the training process of the stabilized discriminator D is stable.
§.§.§ Stability Analysis for Stabilized MMD GAN with rational quadratic kernel
Rescale MMD GAN with rational quadratic kernel
e_α(x,y) = α (1 + x^2/2α)^-α,
where α∈ (0,+∞).
In the stabilized function ℒ^s_D (Eq. (<ref>), the stabilized distance is
e_α(x,y) = e_α_1(x,y) - ϵ e_α_2(x,y),
where α_2 > α_1.
Here we demonstrate the training stability of our stabilized method with a special case that e_α(x,y) = e_1/2(x,y) - ϵ e_1(x,y).
In MMD GAN with rational quadratic kernel, the training process of the stabilized discriminator D to maxℒ^s_D with the stabilized distance e_α(x,y) = e_1/2(x,y) - ϵ e_1(x,y) is stable for ϵ > 1.
For the stabilized e_α(x,y) = e_1/2(x,y) - ϵ e_1(x,y), the Fourier transform for it is
ℱ(1/21/√(1+x^2) - ϵ1/1+x^2/2) = K_0(ξ) - ϵ e^|ξ|,
where K_0(ξ) > 0 is Bessel function, and K_0(ξ) → +∞ as ξ→ 0.
In this case
d v̂/dt = C|ξ|^2v̂ℱ(e_1/2(x)-ϵ e_1(x)) = C|ξ|^2v̂( K_0(ξ) - ϵ e^|ξ|).
Knowing that in the training of D, we always normalize the input data X into a finite domain, i.e., [-1,1] × [-1,1]. We know that |ξ| is bounded and min |ξ| = 1/2 for |ξ| not equal to 0. When ϵ > K_0(1/2)/e^1/2, (K_0(ξ) - ϵ e^|ξ|)<0, the training process is stable. When |ξ| = 0, d v̂/dt =C|ξ|^2v̂( K_0(ξ) - ϵ e^|ξ|) →∞. Hence, the training process is unstable when ξ = 0.
Remark. In this case, a better choice for stabilizing terms may be e_m(x,y) = 1/x-y^m.
§.§.§ Stability analysis for stabilized EIEG GAN
Rescale EIEG GAN
e_m(x,y) = 1/x- y^m,
where m ∈ (0,+∞).
In the stabilized function ℒ^s_D (Eq. (<ref>), the stabilized distance is
e_m(x,y) = e_m_1(x,y) - ϵ e_m_2(x,y),
where m_2 > m_1.
Here we demonstrate the training stability of our stabilized method with a special case that e_m(x,y) = e_d-1(x,y) - ϵ e_d+3(x,y), where d is the feature dimension.
For stabilized EIEG GAN, the training process of the stabilized discriminator D to maxℒ^s_D with the stabilized distance e_α(x,y) = e_d-1(x,y) - ϵ e_d+3(x,y) is stable for ϵ > 1.
For the stabilized e_α(x,y) = e_d-1(x,y) - ϵ e_d+3(x,y), the Fourier transform for it is
ℱ( 1/x- y^d-1 - ϵ1/x- y^d+3 ) = 1/|ξ|-ε |ξ|^3.
In this case
d v̂/dt = C|ξ|^2v̂ℱ(e_d-1(x)-ϵ e_d+3(x)) = C|ξ|^2v̂(1/|ξ|-ϵ |ξ|^3).
Knowing that in the training of D, we always normalize the input data X into a finite domain, i.e., [-1,1] × [-1,1]. We know that |ξ| is bounded and min |ξ| = 1/2 for |ξ| not equal to 0. When ϵ > 1, (1 - ϵ |ξ|^4)<0, the training process is stable. When |ξ| = 0, d v̂/dt =0. Hence, the perturbation does not grow, and the solution remains stable.
§ OTHER GRADIENT FLOW
While the stability analysis in our main paper focuses on the Wasserstein gradient flow, we recognize that other types of gradient flow can also play an important role of GANs. To extend our stability analysis framework, we present examples where we consider other types of gradient flow as well.
Our proposed stability analysis framework can be applied to analyze the training stability of other types of GANs. The process of stability analysis remains similar: firstly, we identify the gradient flow function corresponding to the subject of the study. Next, we derive the perturbation evolution equation that describes how small perturbations appearing at some time during the training behave. Finally, by analyzing the perturbation evolution equation, we can gain insights into the stability properties of the GAN and the factors that influence them. This approach can provide valuable guidance for improving the training stability of GANs and enhancing their performance in practical applications.
§.§ Stability analysis for Vanilla GANs
In this section, we use our framework to analyze Vanilla GAN <cit.>. In Vanilla GAN, the feature space is {0,1}. In this case, the probability function ℙ_f_data{D(x)=1, x∈𝒜} = 1, ℙ_f_g{D(x)=0, x∈ℬ}=1, where 𝒜 stands for the dataset of data samples and ℬ stands for the dataset of generated samples. In this case, the probability density function is Delta function at points 1 and 0, which is non-smooth. The Wasserstein gradient flow framework,
which describes the evolution of a smooth probability density function, cannot be applied to Vanilla GAN. Thus we analyze the training stability through the particle dynamics in feature space.
The objective function of Vanilla GAN is
max_Dmin_G v(G,D) = 𝔼_y ∼ℙ_datalog[D(y)] + 𝔼_z∼𝒩(0,I)log[1 - D(G(z))],
where D is discriminator and G is generator.
In this case, the corresponding loss function for discriminator is
max_D ℒ_D= 𝔼_y∼ℙ_datalog[D(y)] + 𝔼_x∼ℙ_glog[1 - D(x)].
With G fixed, for a sample x_0 the evolution of the samples in feature space is
dX_t = [p_r(x_0)/X_t - p_g(x_0)/1-X_t]dt.
For discriminator, if p_r and p_g have disjoint support 𝒜 and ℬ, i.e., 𝒜∩ℬ = ∅.
* For the case x_0 ∈𝒜, the data samples dynamics is d X_t = p_r(x_0)/X_t dt, and the corresponding perturbation evolution equation
d v/dt = -v/X_t^2.
* For the case x_0 ∈ℬ, the generated samples dynamics is d X_t = - p_g(x_0)/1-X_tdt, and the corresponding perturbation evolution equation
d v/dt = -v/(1-X_t)^2.
* For the case x_0 ∉𝒜 or x_0 ∉ℬ, the dynamics is dX_t = 0, and the corresponding perturbation evolution equation
d v/dt = 0.
Based on the above perturbation evolution equation, the training process of discriminator of Vanilla GAN is stable.
The corresponding loss function for the generator is (the original minimax loss <cit.>)
min_G ℒ_G= 𝔼_z ∼𝒩(0,I)log[1 - D(G(z))].
With a fixed D, for a generated sample x_0, the evolution of the generated sample in feature space X_t = D(x_0) is
dX_t = [p_g(x_0)/1-X_t]dt.
And the corresponding perturbation evolution equation is
dv/dt = v/(1-X_t)^2,
which indicates that the training process of generator is unstable.
Thus an alternative loss for the generator is proposed (the -log D trick <cit.>.)
ℒ_G =𝔼_z ∼𝒩(0,I) [-log D(G(z))],
and the evolution of the generated sample x_0 in feature space is
dX_t = [ p_g(x_0)/X_t]dt.
The corresponding perturbation evolution equation is
dv/dt = -v/X_t^2,
which indicates that the training process of the generator with the alternative loss is stable.
§.§ Functional gradient flow
We can also consider the gradient flow of the discriminator D to analyze the training stability. Here we use the gradient flow of D to analyze the training stability of the discriminator in WGAN-GP <cit.>; this example also illustrates adding a gradient penalty to the loss function of the discriminator as a stabilizing term.
The loss function of the discriminator D in the WGAN-GP is
ℒ_D=-x∼ℙ_g𝔼[D(x)]+y∼ℙ_data𝔼[D(y)]-λx̂∼ℙ_x̂𝔼[(∇_x̂ D(x̂)_2-1)^2]
For training of the discriminator minℒ_D, we consider the gradient flow of the discriminator
∂ D/∂ t = δℒ_D/δ D = p_r - p_g+ λ(Δ D - 2 ∇·(∇ D/∇ D_2)) (ϵ p_r + (1-ϵ)p_g),
The corresponding perturbation evolution equation is
∂ v/∂ t = λΔ v (ϵ p_r(x_0) + (1-ϵ)p_g(x_0))
Taking Fourier transform on both sides of Eqn.(<ref>), we have
d v̂/dt =
- λ (ϵ p_r(x_0) + (1-ϵ)p_g(x_0))|ξ|^2 v̂,
which indicates that the training of the discriminator with the gradient penalty is stable.
From the above analysis, we know that gradient penalty is also a kind of stabilizing term that can be added in discriminator loss function. And also, with the gradient penalty in the discriminator loss function, the gradient flow of D has a Laplacian term which causes the discriminator to become overly smooth and the generated samples may become connected, leading to mode collapse. The experiment results shown in Fig.<ref> validate this view.
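For reference, the gradient penalty term in the loss above is typically implemented as follows in PyTorch (a standard sketch, not code from this paper).

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    # lam * E[(|grad_xhat D(xhat)|_2 - 1)^2] on random interpolates xhat
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    xhat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(xhat).sum(), xhat, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```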
§.§ Parameter gradient flow
To take the structure of the neural network into account in the training stability analysis, we can consider the gradient flow of the corresponding parameter. We can consider a simple discriminator made of a fully connected network without bias term, with input x:
D(x,ϕ) = W^L+1a_L(W^L(a_L-1(W^L-1(...a_1(W^1x...))))),
where ϕ := {W^1,...,W^L,W^L+1} is the set of learnable parameters, W^l∈R^d_l × d_l-1, W^L+1∈R^1 × d_L, and a_l is a piece-wise linear activation function. To illustrate the idea, we consider the simple case of a single-layer neural network with structure
D(x,ϕ) = W^2a_1(W^1x)
For training of the discriminator with maxℒ_D, we consider the gradient flow for parameter W^2 is
∂ W^2/∂ t = d ℒ_D/dW^2 = δℒ_D/δ DA_1(W^1x),
where A_1(W^1x) ∈ℝ^d1 × d2 is from the derivative of ∂ D/∂ W^2. Considering perturbations v≪1 appearing during the training, the perturbation evolution equation is
d v/d t= 0,
which indicates that the evolution of W^2 during the training is neutrally stable.
One of the popular stabilizing methods is spectral normalization <cit.>, where Ŵ_SN(W):= W/σ(W) and σ(W) is the spectral norm of the matrix.
In this case the gradient flow for the last layer parameter W^2 is
∂ W^2/∂ t = δℒ_D/δ DA_1(W^1x)/σ(W^2).
Considering the perturbation evolution equation
dv/dt = - |δ L_D/δ D| a_1(W^1x)1/σ(W^2)^2v,
which indicates that the evolution of W^2 during the training is stable.
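In PyTorch, spectral normalization is applied by wrapping each layer of the discriminator; the sketch below uses the MLP discriminator of our synthetic experiments purely as an illustration (the experiments in this paper do not use spectral normalization).

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# every weight matrix W is replaced by W / sigma(W) during the forward pass
D_sn = nn.Sequential(
    spectral_norm(nn.Linear(2, 100)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(100, 50)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(50, 16)),
)
```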
§.§ Future Work
From the above discussion, we see that the framework of our stability analysis can be generalized. If we consider the network structure, we can use the parameter gradient flow (Eq. (<ref>)) for the stability analysis. In addition to this, we can also take into account the effect of the optimization algorithm on stability by considering discrete-time dynamics.
§ EXPERIMENTAL DETAILS
In this section, we include more details about the experiments presented in the main paper, including the neural network architectures and hyper-parameter settings. All experiments are conducted with Python 3.7 on an NVIDIA 2080 Ti GPU.
§.§ Gaussian Mixture
For the Gaussian mixture, we sample a 2-d 8-cluster Gaussian mixture distributed on a circle, where the cluster means are sampled from 𝒩(0, I) and the marginal probability of each cluster is 1/8. For MMD GAN and MMD GAN-GP, we use a linear combination of Gaussian RBF kernels, i.e., e_rbf(x,y) = ∑_i = 1^Ke_σ_i(x,y), where K = 3 and σ_i = {4,8,16}.
In the stabilized MMD GAN, we set the stabilizing term s(x,y) = ∑_i = 1^Ke_σ_i(x,y), where σ_i = {1,√(2),2}. For both the generator and the discriminator, we use Adam with learning rate lr = 5e-3 and (α,β) = (0.5,0.9) for 3000 epochs.
For this case, we use multi-layer perceptron (MLP) networks; a PyTorch sketch of both networks is given after the list below.
* The MLP discriminator takes a 2-dimensional tensor as the input. Its architecture has a set of fully-connected layers (fc marked with input-dimension and output-dimension) and LeakyReLU layers (hyperparameter set as 0.2): fc (2 → 100), LeakyReLU, fc (100 → 50), LeakyReLU, fc (50 → 16).
* The MLP generator network takes a 2-dimensional random Gaussian variables as the input. Its architecture: fc (2 → 100), LeakyReLU, fc (100 → 50), LeakyReLU, fc (50 → 2).
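The two networks described above can be written, for instance, as the following PyTorch modules (a sketch of ours; training code is omitted).

```python
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Linear(2, 100), nn.LeakyReLU(0.2),
    nn.Linear(100, 50), nn.LeakyReLU(0.2),
    nn.Linear(50, 16),
)
generator = nn.Sequential(
    nn.Linear(2, 100), nn.LeakyReLU(0.2),
    nn.Linear(100, 50), nn.LeakyReLU(0.2),
    nn.Linear(50, 2),
)
```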
§.§ Image generation
For image generation, we use the CIFAR-10 dataset. For this case, we use convolutional neural networks (CNN). For both the generator and the discriminator, we use Adam with learning rate lr = 5e-5 and (α,β) = (0.5,0.9) for 50 epochs with batch size B = 64. We also train the discriminator n_c=5 times per generator update.
* The CNN discriminator takes a B × C × H × W tensor as the input. Its architecture has a set of convolution layers (conv marked with input-c, output-c, kernel-size, stride, padding), Batch Normalization layers (BN) and LeakyReLU layers (hyperparameter as 0.2): conv (3,64,4,2,1), LeakyReLU, conv (64,128,4,2,1), BN, LeakyReLU, conv (128,256,4,2,1), BN, LeakyReLU, conv (256,512,4,2,1), BN, LeakyReLU, conv (512,128,4,2,1).
* The CNN generator network given a 100 dimensional random Gaussian variables: conv (100,256,4,2,0), BN, ReLU, conv (256,128,4,2,1), BN, ReLU, conv (128,64,4,2,1), BN, ReLU, conv (64,32,4,2,1), Tanh.
Quantitative analysis. We also evaluate the FID scores: 64.72 for MMD GAN and 48.61 for stabilized MMD GAN. The Inception Scores are 6.14 for MMD GAN and 6.8489 for stabilized MMD GAN.
Generated samples for CIFAR-10 The generated samples are shown in Fig.<ref>.
§.§ Experiments on MMD GAN with rational quadratic kernel
We also conduct experiments on MMD GAN with the rational quadratic kernel on CIFAR-10 to illustrate its unstable training process.
§ FOURIER TRANSFORM OF X
It is well known that the Fourier transform of the fundamental solution of the Laplacian in n-dimensional space,
-1/2πlogx for n=2 and
1/n(n-2)α(n)1/x^n-2 for n≥ 3,
is 1/|ξ|^2. Here α(n) is the volume of the unit ball in n-dimensional space. And we also have
ℱ(1/x^n-1) = 1/|ξ|
where ξ is the Fourier modes.
Here, we take n=2 and n=3 as examples here.
In the case n=2, we know ℱ(1/x) = 1/|ξ|. Therefore we have
ℱ^-1(1/|ξ|^3) = ℱ^-1(Δ1/|ξ|) = - r^2 ℱ^-1(1/|ξ|) = -r^2 1/r = -r,
where r = √(x^2 + y^2), which means
ℱ(-x) = 1/|ξ|^3
In the case n=3, we know ℱ(1/x^2) = α(3) 1/|ξ|^2. Therefore we have
ℱ^-1(2/|ξ|^4) = ℱ^-1(Δ1/|ξ|^2) = - r^2 ℱ^-1(1/|ξ|^2) = -r^21/α(3)1/r = -1/α(3)r,
where r = √(x^2 + y^2 + z^2), which means
ℱ(-x) = C_3/|ξ|^4.
Here C_3>0.
|
http://arxiv.org/abs/2307.00317v1
|
20230701120551
|
On Finding Constrained Independent Sets in Cycles
|
[
"Ishay Haviv"
] |
cs.DS
|
[
"cs.DS",
"cs.CC",
"math.CO"
] |
l-0.035cm39-0.03truecm
theoremTheorem[section]
proposition[theorem]Proposition
definition[theorem]Definition
deff[theorem]Definition
algorithm[theorem]Algorithm
claim[theorem]Claim
lemma[theorem]Lemma
conjecture[theorem]Conjecture
notation[theorem]Notation
corollary[theorem]Corollary
fact[theorem]Fact
comment[theorem]Comment
remark[theorem]Remark
question[theorem]Question
proof[1][]
Proof#1:
A
B
G
F
L
S
W
E
M
P
ŁL
T
H
ℤ
ℝ
ℚ
ℂ
ℕ
c̃
⟩
⟨
On Finding Constrained Independent Sets in Cycles
Ishay HavivSchool of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv 61083, Israel. Research supported in part by the Israel Science Foundation (grant No. 1218/20).
=============================================================================================================================================================================================
A subset of [n] = {1,2,…,n} is called stable if it forms an independent set in the cycle on the vertex set [n].
In 1978, Schrijver proved via a topological argument that for all integers n and k with n ≥ 2k, the family of stable k-subsets of [n] cannot be covered by n-2k+1 intersecting families.
We study two total search problems whose totality relies on this result.
In the first problem, denoted by Schrijver(n,k,m), we are given an access to a coloring of the stable k-subsets of [n] with m = m(n,k) colors, where m ≤ n-2k+1, and the goal is to find a pair of disjoint subsets that are assigned the same color. While for m = n-2k+1 the problem is known to be PPA-complete, we prove that for m < d ·⌊n/(2k+d-2)⌋, with d being any fixed constant, the problem admits an efficient algorithm.
For m = ⌊ n/2 ⌋-2k+1, we prove that the problem is efficiently reducible to the Kneser problem. Motivated by the relation between the problems, we investigate the family of unstable k-subsets of [n], which might be of independent interest.
In the second problem, called Unfair Independent Set in Cycle, we are given ℓ subsets V_1, …, V_ℓ of [n], where ℓ≤ n-2k+1 and |V_i| ≥ 2 for all i ∈ [ℓ], and the goal is to find a stable k-subset S of [n] satisfying the constraints |S ∩ V_i| ≤ |V_i|/2 for i ∈ [ℓ].
We prove that the problem is PPA-complete and that its restriction to instances with n=3k is at least as hard as the Cycle plus Triangles problem, for which no efficient algorithm is known. On the contrary, we prove that there exists a constant c for which the restriction of the problem to instances with n ≥ c · k can be solved in polynomial time.
§ INTRODUCTION
For integers n and k with n ≥ 2k, the Kneser graph K(n,k) is the graph whose vertices are all the k-subsets of [n]= {1,2…,n}, where two such sets are adjacent in the graph if they are disjoint. The graph K(n,k) admits a proper vertex coloring with n-2k+2 colors. This indeed follows by assigning the color i, for each i ∈ [n-2k+1], to all the vertices whose minimal element is i, and the color n-2k+2 to the remaining vertices, those contained in [n] ∖ [n-2k+1]. In 1978, Lovász <cit.> proved, settling a conjecture of Kneser <cit.>, that fewer colors do not suffice, that is, the chromatic number of the graph satisfies χ(K(n,k)) = n-2k+2.
Soon later, Schrijver <cit.> strengthened Lovász's result by proving that the subgraph S(n,k) of K(n,k) induced by the stable k-subsets of [n], i.e., the vertices of K(n,k) that form independent sets in the cycle on the vertex set [n], has the same chromatic number. It was further shown in <cit.> that the graph S(n,k) is vertex-critical, in the sense that any removal of a vertex from the graph decreases its chromatic number.
It is interesting to mention that despite the combinatorial nature of Kneser's conjecture <cit.>, Lovász's proof <cit.> relies on the Borsuk–Ulam theorem <cit.>, a fundamental result in the area of algebraic topology. Several alternative proofs and extensions were provided in the literature over the years (see, e.g., <cit.>). Although they are substantially different from each other, they all essentially rely on topological tools.
The computational search problem associated with Kneser graphs, denoted by Kneser, was proposed by Deng, Feng, and Kulkarni <cit.> and is defined as follows.
Its input consists of integers n and k with n ≥ 2k and an access to a coloring of the vertices of K(n,k) with n-2k+1 colors. The goal is to find a monochromatic edge in the graph, i.e., two disjoint k-subsets of [n] that are assigned the same color by the given coloring. Since the number of colors used by the input coloring is strictly smaller than the chromatic number of K(n,k) <cit.>, it follows that this search problem is total, in the sense that every input is guaranteed to have a solution. Note that the input coloring may be given as an oracle access that provides the color of any queried vertex, and that an algorithm for the problem is considered efficient if its running time is polynomial in n. In other variants of the problem, the input coloring is given by some succinct representation, e.g., a Boolean circuit or an efficient Turing machine. The computational search problem Schrijver is defined similarly, where the input represents a coloring of the vertices of S(n,k) with n-2k+1 colors, and the goal is to find a monochromatic edge, whose existence is guaranteed by the aforementioned result of Schrijver <cit.>.
The computational complexity of the Schrijver problem was determined in <cit.>, where it was shown to be complete in the complexity class PPA.
This complexity class, introduced in 1994 by Papadimitriou <cit.>, is known to capture the complexity of several additional total search problems whose totality is based on the Borsuk–Ulam theorem, e.g., Consensus Halving, Bisecting Sandwiches, and Splitting Necklaces <cit.>.
Note that this line of PPA-completeness results is motivated not only from the computational complexity perspective, but also from a mathematical point of view, as one may find those results as an indication for the necessity of topological arguments in the existence proof of the solutions of these problems.
As for the Kneser problem, it is an open question whether it is also PPA-hard, as was suggested by Deng et al. <cit.>.
We remark that its complexity is related to that of the Agreeable Set problem from the area of resource allocation (see <cit.>).
The Kneser and Schrijver problems were also investigated in the framework of parameterized algorithms <cit.>, where it was shown that they admit randomized fixed-parameter algorithms with respect to the parameter k, namely, algorithms whose running time is n^O(1)· k^O(k) on input colorings of K(n,k) and S(n,k).
Before turning to our results, let us mention another computational search problem, referred to as the Cycle plus Triangles problem.
Its input consists of an integer k and a graph on 3k vertices, whose edge set is the disjoint union of a Hamilton cycle and k pairwise vertex-disjoint triangles. The goal is to find an independent set of size k in the graph. The existence of a solution for every input of the problem follows from a result of Fleischner and Stiebitz <cit.>, which settled in the early nineties a conjecture of Du, Hsu, and Hwang <cit.> as well as its strengthening by Erdös <cit.>. Their proof in fact shows that every such graph is 3-choosable, and thus 3-colorable, so in particular, it contains an independent set of size k. Here, however, the existence of a solution for every input of the problem is known to follow from several different arguments. While the proof of <cit.> relies on the polynomial method in combinatorics (see also <cit.>), an elementary proof was given slightly later by Sachs <cit.>, and another proof, based on the chromatic number of S(n,k), was provided quite recently by Aharoni et al. <cit.>. Yet, none of these proofs is constructive, in the sense that they do not suggest an efficient algorithm for the problem. The question of whether the problem admits an efficient algorithm was asked by several authors and is still open (see, e.g., <cit.>).
Interestingly, the approach of <cit.> implies that the Cycle plus Triangles problem is not harder than the restriction of the Schrijver problem to colorings of S(n,k) with n=3k.
§.§ Our Contribution
In this paper, we introduce two total search problems concerned with finding stable sets under certain constraints. The totality of the problems relies on the chromatic number of the graph S(n,k) <cit.>. We study these problems from algorithmic and computational perspectives. In what follows, we describe the two problems and our results on each of them.
§.§.§ The Generalized Schrijver Problem
We start by considering a generalized version of the Schrijver problem, which allows the number of colors used by the input coloring to be any prescribed number.
Let Schrijver(n,k,m) denote the problem which asks to find a monochromatic edge in S(n,k) for an input coloring that uses m = m(n,k) colors. Note that every input of the problem is guaranteed to have a solution whenever m ≤ n-2k+1, and that for m=n-2k+1, the problem coincides with the standard Schrijver problem.
The Schrijver(n,k,m) problem obviously becomes easier as the number of colors m decreases.
For example, it is not difficult to see that for m = ⌊ n/k ⌋-1, the problem can be solved efficiently, in time polynomial in n.
Indeed, the clique number of the graph S(n,k) is ⌊ n/k ⌋, which is strictly larger than m, so by querying the input coloring for the colors of the vertices of a clique of maximum size, one can find two adjacent vertices with the same color. Our first result extends this observation and essentially shows that the Schrijver(n,k,m) problem can be solved efficiently for any number of colors m satisfying m = O(n/k).
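To make the pigeonhole argument concrete, here is a small Python sketch (our own illustration, not taken from the paper). It uses the clique of d = ⌊n/k⌋ pairwise disjoint stable k-subsets given by the arithmetic progressions with common difference d, queries the coloring on these vertices, and returns two disjoint sets sharing a color whenever at most d-1 colors are used.

```python
def schrijver_few_colors(n, k, coloring):
    """coloring maps a frozenset of k elements of [n] to one of at most
    floor(n/k) - 1 colors; returns a monochromatic edge of S(n, k)."""
    d = n // k
    # the sets {j, j+d, ..., j+(k-1)d} for j = 1, ..., d are stable and pairwise disjoint
    clique = [frozenset(j + i * d for i in range(k)) for j in range(1, d + 1)]
    seen = {}
    for S in clique:
        c = coloring(S)
        if c in seen:
            return seen[c], S      # two disjoint stable k-subsets with the same color
        seen[c] = S
    return None                    # unreachable when at most d - 1 colors are used
```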
For every integer d ≥ 2, there exists an algorithm for the Schrijver(n,k,m) problem with m < d ·⌊n/(2k+d-2)⌋ whose running time is n^O(d).
Our next result relates the generalized Schrijver(n,k,m) problem to the Kneser problem.
Schrijver(n,k,⌊ n/2 ⌋-2k+1) is polynomial-time reducible to Kneser.
The simple proof of Theorem <ref> involves a proper coloring of the subgraph of K(n,k) induced by the unstable k-subsets of [n], i.e., the vertices of K(n,k) that do not form vertices of S(n,k). This graph, which we denote by U(n,k), can be properly colored using ⌈ n/2 ⌉ colors. Indeed, every unstable k-subset of [n] includes an odd element, hence by assigning to each vertex of U(n,k) some odd element that belongs to its set, we obtain a proper coloring of the graph with the desired number of colors. Since U(n,k) is a subgraph of K(n,k), it follows that for all admissible values of n and k, we have χ(U(n,k)) ≤min (n-2k+2, ⌈ n/2 ⌉ ).
Motivated by the reduction given by Theorem <ref>, we further explore the graph U(n,k), whose study may be of independent interest.
We prove that the above upper bound on the chromatic number is essentially tight (up to an additive 1 in certain cases; see Corollary <ref> and the discussion that follows it).
The proof is topological and applies the Borsuk–Ulam theorem. We further determine the independence number of the graph U(n,k) (see Theorem <ref>), using a structural result of Hilton and Milner <cit.> on the largest non-trivial intersecting families of k-subsets of [n].
The motivation for Theorem <ref> comes from the fact that the Schrijver problem is known to be PPA-hard, whereas no hardness result is known for the Kneser problem.
However, in a subsequent work we show that under plausible complexity assumptions it is unlikely that the Schrijver(n,k,m) problem with m = ⌊ n/2 ⌋-2k+1 is PPA-hard.
Yet, it is of interest to figure out whether or not the problem admits an efficient algorithm.
While this challenge is left open, the following result shows that the problem is not harder than the restriction of the standard Schrijver problem to colorings of S(n,k) with n=4k.
If there exists a polynomial-time algorithm for the restriction of the Schrijver problem to colorings of S(n,k) with n=4k, then there exists a polynomial-time algorithm for the Schrijver(n,k,m) problem where m = ⌊ n/2 ⌋ -2k+1.
We finally observe that the restriction of Schrijver(n,k,m) with m = ⌊ n/2 ⌋-2k+1 to instances satisfying n = Ω(k^4) admits an efficient randomized algorithm. This essentially follows from the fixed-parameter algorithm presented in <cit.> (see Section <ref> for details).
§.§.§ The Unfair Independent Set in Cycle Problem
The second problem studied in this paper is the Unfair Independent Set in Cycle problem, denoted by UIC and defined as follows.
Its input consists of two integers n and k with n ≥ 2k and ℓ subsets V_1, …, V_ℓ of [n], where ℓ≤ n-2k+1 and |V_i| ≥ 2 for all i ∈ [ℓ]. The goal is to find a stable k-subset S of [n] that satisfies the constraints |S ∩ V_i| ≤ |V_i|/2 for i ∈ [ℓ].
The name of the problem essentially borrows the terminology of <cit.>, where a set is said to fairly represent a set V_i if it includes at least roughly half of its elements, hence the desired stable set in the problem is required to unfairly represent each of the given sets V_i.
It is not difficult to show, using the chromatic number of S(n,k), that every input of the problem has a solution (see Lemma <ref>).
Note that the requirement that the input sets satisfy |V_i| ≥ 2 for all i ∈ [ℓ] is discussed in Section <ref>.
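For very small parameters, the totality of the problem can be verified directly by brute force; the following Python sketch (our own illustration) enumerates all stable k-subsets of [n] and returns one satisfying the constraints.

```python
from itertools import combinations

def is_stable(S, n):
    S = sorted(S)
    return all(b - a >= 2 for a, b in zip(S, S[1:])) and not (1 in S and n in S)

def uic_brute_force(n, k, constraint_sets):
    """Return a stable k-subset S of [n] with |S ∩ V_i| <= |V_i| / 2 for every V_i."""
    for S in combinations(range(1, n + 1), k):
        if is_stable(S, n) and all(
            len(set(S) & set(V)) <= len(V) / 2 for V in constraint_sets
        ):
            return set(S)
    return None   # does not occur when l <= n - 2k + 1 and every |V_i| >= 2
```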
It is natural to compare the definition of the UIC problem to that of the Fair Independent Set in Cycle problem, denoted by FIC and studied in <cit.> (see Definition <ref>).
While the goal in the former is to find a stable subset of [n] with a prescribed size k that includes no more than half of the elements of each V_i, the goal in the latter is, roughly speaking, to find a stable subset of [n], of an arbitrary size, that includes at least half of the elements of each V_i.
The specification of the size k in the inputs of UIC makes the problem non-trivial and allows us to study it for various settings of the quantities n and k.
The following result shows that the complexity of the UIC problem is perfectly captured by the class PPA.
This is established using two problems that are known to be PPA-complete <cit.>.
The UIC problem is PPA-complete.
We next consider some restrictions of the UIC problem to instances in which the integer n is somewhat larger than 2k.
On the one hand, the restriction of the problem to instances with n=3k is at least as hard as the Cycle plus Triangles problem, for which no efficient algorithm is known (see Proposition <ref>). On the other hand, we prove that on instances whose ratio between n and k is above some absolute constant, the problem can be solved in polynomial time.
There exists a constant c >0, such that there exists a polynomial-time algorithm for the restriction of the UIC problem to instances with n ≥ c · k.
The proof of Theorem <ref> is based on a probabilistic argument with alterations, which is derandomized into a deterministic algorithm using the method of conditional expectations (see, e.g., <cit.>).
The approach is inspired by a probabilistic argument of Kiselev and Kupavskii <cit.>, who proved that for n ≥ (2+o(1)) · k^2, every proper coloring of the Kneser graph K(n,k) with n-2k+2 colors has a trivial color class (all of whose members share a common element).
§.§ Outline
The rest of the paper is organized as follows.
In Section <ref>, we collect some definitions and results that will be used throughout the paper.
In Section <ref>, we study the generalized Schrijver problem and prove Theorems <ref>, <ref>, and <ref>.
In Section <ref>, we study the Unfair Independent Set in Cycle problem and prove Theorems <ref> and <ref>.
Finally, in Section <ref>, we consider the family of unstable k-subsets of [n] and study the chromatic and independence numbers of the graph U(n,k).
§ PRELIMINARIES
§.§ Kneser and Schrijver Graphs
For integers n and k, let [n]k denote the family of all k-subsets of [n].
A subset of [n] is called stable if it does not include two consecutive elements nor both 1 and n, equivalently, it forms an independent set in the cycle on the vertex set [n] with the natural order along the cycle. Otherwise, the set is called unstable. The family of stable k-subsets of [n] is denoted by [n]k_stab. The Kneser graph and the Schrijver graph are defined as follows.
For integers n and k with n ≥ 2k, the Kneser graph K(n,k) is the graph on the vertex set [n]k, where two sets A,B ∈[n]k are adjacent if they satisfy A ∩ B = ∅. The Schrijver graph S(n,k) is the subgraph of K(n,k) induced by the vertices of [n]k_stab.
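To make the definitions concrete, the following Python sketch (an illustration added here, with hypothetical function names) checks stability of a k-subset and enumerates the vertices and edges of S(n,k) for small parameters.

from itertools import combinations

def is_stable(A, n):
    # a k-subset of [n] = {1,...,n} is stable if it has no two cyclically consecutive elements
    S = set(A)
    return all(not (i in S and (i % n) + 1 in S) for i in S)

def schrijver_graph(n, k):
    # vertices and edges of S(n,k): stable k-subsets, adjacent when disjoint
    vertices = [A for A in combinations(range(1, n + 1), k) if is_stable(A, n)]
    edges = [(A, B) for A, B in combinations(vertices, 2) if set(A).isdisjoint(B)]
    return vertices, edges

# example: S(6,2) has (6/2)*binom(3,1) = 9 vertices
verts, edges = schrijver_graph(6, 2)
print(len(verts), len(edges))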
Obviously, the number of vertices in K(n,k) is \binom{n}{k}. The number of vertices in S(n,k) is given by the following lemma (see, e.g., <cit.>).
For all integers n and k with n ≥ 2k, the number of stable k-subsets of [n] is
(n/k)·\binom{n-k-1}{k-1}.
With Lemma <ref> at hand, one can derive the following (see also <cit.>).
For all integers n and k with n ≥ 2k and for every i ∈ [n], the number of stable k-subsets of [n] that include i is
\binom{n-k-1}{k-1}.
As usual, we denote the independence number of a graph G by α(G), and its chromatic number by χ(G).
The chromatic numbers of K(n,k) and S(n,k) were determined, respectively, by Lovász <cit.> and by Schrijver <cit.>, as stated below.
For all integers n and k with n ≥ 2k, χ(K(n,k)) = χ(S(n,k)) = n-2k+2.
§.§ Intersecting Families
A family ℱ of sets is called intersecting if for every two sets A,B ∈ ℱ it holds that A ∩ B ≠∅.
Note that a family of k-subsets of [n] is intersecting if and only if it forms an independent set in the graph K(n,k).
An intersecting family ℱ is said to be trivial if there exists an element that belongs to all members of ℱ. Otherwise, the family is non-trivial.
The famous Erdős-Ko-Rado theorem <cit.> asserts that the largest size of an intersecting family of k-subsets of [n] is \binom{n-1}{k-1}, which is attained by the maximal trivial intersecting families.
The following result of Hilton and Milner <cit.> determines the largest size of a non-trivial intersecting family in this setting and characterizes the extremal families attaining it.
For integers k ≥ 3 and n ≥ 2k, let ℱ ⊆[n]k be a non-trivial intersecting family.
Then,
|ℱ| ≤ \binom{n-1}{k-1} - \binom{n-k-1}{k-1} + 1.
Moreover, if n>2k then equality holds if and only if there exist an element i ∈ [n] and a k-subset A of [n] with i ∉ A such that
ℱ = { F ∈[n]k | i ∈ F, F ∩ A ≠∅}∪{A},
or k=3 and there exists a 3-subset A of [n] such that
ℱ = {F ∈[n]3 | |F ∩ A| ≥ 2 }.
§.§ Complexity Classes
The complexity class TFNP consists of the total search problems in NP, i.e., the search problems in which every input has a solution, where a solution can be verified in polynomial time. The complexity class PPA (Polynomial Parity Argument <cit.>) consists of the problems in TFNP that can be reduced in polynomial time to a problem called Leaf. The definition of the Leaf problem is not needed in this paper, but we mention it briefly below for completeness.
The Leaf problem asks, given a graph with maximum degree 2 and a leaf (i.e., a vertex of degree 1), to find another leaf in the graph. The input graph, though, is not given explicitly. Instead, the vertex set of the graph is defined to be {0,1}^n for some integer n, and the graph is succinctly represented by a Boolean circuit that for a vertex of the graph computes its (at most two) neighbors. Note that the size of the graph might be exponential in the size of its description.
§.§ Computational Problems
We gather here several computational problems that will be studied and used throughout the paper.
We start with a computational search problem associated with Schrijver graphs. This problem is studied in Section <ref>.
For m = m(n,k), the (n,k,m) problem is defined as follows.
The input consists of two integers n and k with n ≥ 2k and a coloring c: [n]k_stab→ [m] of the vertices of the graph S(n,k) with m=m(n,k) colors, and the goal is to find a monochromatic edge, i.e., two vertices A,B ∈[n]k_stab such that A ∩ B = ∅ and c(A)=c(B).
In the black-box input model, the coloring c is given as an oracle access that given a vertex A outputs its color c(A). In the white-box input model, the coloring c is given by a Boolean circuit that for a vertex A computes its color c(A).
For m = n-2k+1, the (n,k,m) problem is referred to as the Schrijver problem.
The Kneser problem is defined similarly to the Schrijver problem. Here, the input coloring c: [n]k→ [n-2k+1] is defined on the entire vertex set of K(n,k). By Theorem <ref>, every input of the Schrijver and Kneser problems is guaranteed to have a solution. Moreover, whenever m = m(n,k) ≤ n-2k+1, every input of the (n,k,m) problem has a solution as well.
We remark that algorithms for the (n,k,m) problem are considered in this paper with respect to the black-box input model.
The running time of such an algorithm is referred to as polynomial if it is polynomial in n.
Observe that a polynomial-time algorithm for the (n,k,m) problem in the black-box input model yields an algorithm for the analogous problem in the white-box input model, whose running time is polynomial as well (in the input size).
For computational complexity results, like reductions and PPA-completeness, we adopt the more suitable white-box input model.
For example, the Schrijver problem in the white-box input model was shown in <cit.> to be PPA-complete.
Another search problem studied in <cit.> is the following.
In the Fair Independent Set in Cycle problem, the input consists of integers n and m along with a partition V_1, … ,V_m of [n] into m sets. The goal is to find a stable subset S of [n] satisfying |S ∩ V_i| ≥ |V_i|/2 - 1 for all i ∈ [m].
The existence of a solution for every input of this problem was proved in <cit.>.
It was shown in <cit.> that the problem is PPA-complete, even when restricted to instances in which each part V_i of the given partition has an odd size larger than 2.
We next define the Unfair Independent Set in Cycle problem, studied in Section <ref>.
The input of the Unfair Independent Set in Cycle problem consists of two integers n and k with n ≥ 2k and ℓ subsets V_1, …, V_ℓ of [n], where ℓ≤ n-2k+1 and |V_i| ≥ 2 for all i ∈ [ℓ]. The goal is to find a stable k-subset S of [n] that satisfies the constraints |S ∩ V_i| ≤ |V_i|/2 for all i ∈ [ℓ].
Note that Definition <ref> requires the sets V_1, …, V_ℓ of an instance of the problem to satisfy |V_i| ≥ 2 for all i ∈ [ℓ].
This requirement is justified by the observation that if |V_i|=1 for some i ∈ [ℓ], then any solution for this instance does not include the single element of V_i. Hence, by removing this element from the given sets and from the ground set, such an instance can be reduced to an instance with ground set of size smaller by one.
By repeatedly applying this reduction, one can get a `core' instance that fits Definition <ref>.
We observe that the problem is total. The argument relies on the chromatic number of the graph S(n,k).
Every instance of the problem has a solution.
Consider an instance of the problem, i.e., integers n and k with n ≥ 2k and ℓ subsets V_1, …, V_ℓ of [n], where ℓ≤ n-2k+1 and |V_i| ≥ 2 for all i ∈ [ℓ].
For every i ∈ [ℓ], let
ℱ_i = { S ∈[n]k_stab | |S ∩ V_i| > |V_i|/2 },
and notice that every two sets of ℱ_i have a common element of V_i, hence ℱ_i is an intersecting family. However, by Theorem <ref>, the chromatic number of S(n,k) is n-2k+2, hence the family of stable k-subsets of [n] cannot be covered by fewer than n-2k+2 intersecting families. By ℓ≤ n-2k+1, this implies that there exists a set S ∈[n]k_stab that does not belong to any of the families ℱ_i, hence it satisfies |S ∩ V_i| ≤ |V_i|/2 for all i ∈ [ℓ]. This implies that S is a valid solution for the given instance, and we are done.
We end this section with the definition of the Cycle plus Triangles problem.
In the Cycle plus Triangles problem, the input consists of an integer k and a graph G on 3k vertices, whose edge set is the disjoint union of a Hamilton cycle and k pairwise vertex-disjoint triangles. The goal is to find an independent set in G of size k.
The existence of a solution for every input of the Cycle plus Triangles problem follows from a result of <cit.> (see also <cit.>).
§ THE GENERALIZED SCHRIJVER PROBLEM
In this section, we prove our results on the (n,k,m) problem (see Definition <ref>).
We start with Theorem <ref>.
Fix some integer d ≥ 2.
For integers n and k with n ≥ 2k, put t = ⌊n/(2k+d-2)⌋ and m=d · t-1, and consider an instance of the (n,k,m) problem, i.e., a coloring c: [n]k_stab→ [m] of the vertices of S(n,k). The definition of t allows us to consider t pairwise disjoint subsets J_1, …, J_t of [n], where each of the subsets consists of 2k+d-2 consecutive elements.
For each i ∈ [t], let ℱ_i denote the family of all stable k-subsets of J_i with respect to the natural cyclic order of J_i (where the largest element precedes the smallest one), and notice that ℱ_i ⊆[n]k_stab.
Consider the algorithm that given an oracle access to a coloring c as above, queries the oracle for the colors of all the sets of ℱ_1 ∪⋯∪ ℱ_t, and returns a pair of disjoint sets from this collection that are assigned the same color by c.
For correctness, we show that the collection of sets ℱ_1 ∪⋯∪ ℱ_t necessarily includes two vertices that form a monochromatic edge.
Indeed, since the number of colors used by the coloring c does not exceed d · t -1, it follows that either there exist distinct i,j ∈ [t] for which a vertex of ℱ_i and a vertex of ℱ_j have the same color, or there exists an i ∈ [t] for which the vertices of ℱ_i are colored using fewer than d colors.
In the former case, notice that for distinct i and j, every vertex of ℱ_i is disjoint from every vertex of ℱ_j, hence the collection includes two vertices that form a monochromatic edge. In the latter case, let i ∈ [t] be an index for which the vertices of ℱ_i are colored using fewer than d colors.
Observe that the subgraph of S(n,k) induced by ℱ_i is isomorphic to the graph S(2k+d-2,k), hence by Theorem <ref>, its chromatic number is (2k+d-2)-2k+2 = d.
Since the vertices of ℱ_i are colored using fewer than d colors, it follows that they include two vertices that form a monochromatic edge, and we are done.
We finally analyze the running time of the algorithm. By Lemma <ref>, the number of vertices in the graph S(n,k) is
(n/k)·\binom{n-k-1}{k-1} = (n/k)·\binom{n-k-1}{n-2k} ≤ n · (n-k-1)^{n-2k} ≤ n^{n-2k+1}.
Since the subgraph of S(n,k) induced by each ℱ_i is isomorphic to S(2k+d-2,k), it follows that the total number of queries that the algorithm makes does not exceed t · (2k+d-2)^{d-1} ≤ n^{O(d)}. This implies that in running time n^{O(d)}, it is possible to enumerate all the sets of ℱ_1 ∪⋯∪ ℱ_t, to query the oracle for their colors, and to find the desired monochromatic edge. This completes the proof.
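The enumeration-based algorithm from this proof can be sketched in Python as follows (an added illustration; the coloring oracle c and the helper names are assumptions, not notation from the paper).

from itertools import combinations

def stable_k_subsets_of_window(J, k):
    # stable k-subsets of the window J with respect to its own cyclic order
    m = len(J)
    for idx in combinations(range(m), k):
        s = set(idx)
        if all(not (i in s and (i + 1) % m in s) for i in s):
            yield tuple(J[i] for i in idx)

def generalized_schrijver(n, k, d, c):
    # find a monochromatic edge of S(n,k) under a coloring c with at most d*t-1 colors,
    # where t = n // (2k + d - 2); c is an oracle mapping a k-subset to its color
    w = 2 * k + d - 2
    t = n // w
    windows = [list(range(i * w + 1, (i + 1) * w + 1)) for i in range(t)]
    queried = []
    for J in windows:
        for A in stable_k_subsets_of_window(J, k):
            col = c(A)
            for B, col_B in queried:
                if col == col_B and set(A).isdisjoint(B):
                    return A, B
            queried.append((A, col))
    return None  # unreachable when c uses at most d*t-1 colors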
We consider now the (n,k,m) problem with m = ⌊ n/2 ⌋-2k+1.
We first prove Theorem <ref>, which says that this problem is efficiently reducible to the Kneser problem (whose definition is given in Section <ref>).
Put m = ⌊ n/2 ⌋-2k+1, and let c: [n]k_stab→ [m] be an instance of the (n,k,m) problem.
Consider the reduction that maps such a coloring c to a coloring c': [n]k→ [n-2k+1] of the vertices of K(n,k) defined as follows.
For every set A ∈[n]k, if A is unstable then it includes an odd element, so denote its smallest odd element by 2i-1, and define c'(A)=i. Notice that this i satisfies 1 ≤ i ≤⌈ n/2 ⌉.
Otherwise, A is a stable k-subset of [n], and we define c'(A) = c(A)+⌈ n/2 ⌉.
Notice that m+⌈ n/2 ⌉ = n-2k+1, hence the colors used by c' are all in [n-2k+1], as needed for an instance of the problem.
Notice further that given a Boolean circuit that computes the coloring c, it is possible to efficiently produce a Boolean circuit that computes the coloring c'.
For correctness, we simply show that any solution for the produced instance of the Kneser problem is also a solution for the given instance of the (n,k,m) problem.
To see this, consider a solution for the former, i.e., two disjoint k-subsets A and B of [n] with c'(A) = c'(B).
By the definition of c', the color assigned by c' to A and B cannot be some i ≤⌈ n/2 ⌉ because this would imply that the element 2i-1 belongs to both A and B, which are disjoint. It thus follows that A and B are stable k-subsets of [n] satisfying c'(A) = c(A) + ⌈ n/2 ⌉ and c'(B) = c(B) + ⌈ n/2 ⌉. By c'(A)=c'(B), it follows that c(A)=c(B), hence A and B form a monochromatic edge in S(n,k) and thus a solution for the given instance of the (n,k,m) problem. This completes the proof.
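A minimal Python sketch of this reduction is given below (an added illustration; it treats the given coloring c as a callable and is not part of the original proof).

def reduce_to_kneser(c, n, k):
    # given a coloring c of S(n,k) with floor(n/2)-2k+1 colors, return the coloring c'
    # of all k-subsets of [n] with n-2k+1 colors described above
    half_up = (n + 1) // 2  # ceil(n/2)

    def is_stable(A):
        S = set(A)
        return all(not (i in S and (i % n) + 1 in S) for i in S)

    def c_prime(A):
        if not is_stable(A):
            odd = min(a for a in A if a % 2 == 1)  # an unstable set contains an odd element
            return (odd + 1) // 2                  # color i for smallest odd element 2i-1
        return c(A) + half_up

    return c_prime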
The reduction presented in the proof of Theorem <ref> extends a given coloring of S(n,k) to a coloring of the entire graph K(n,k). To do so, it uses a proper coloring with ⌈ n/2 ⌉ colors of the subgraph U(n,k) of K(n,k) induced by the unstable k-subsets of [n]. However, in order to obtain a coloring of K(n,k) with n-2k+1 colors, as required for instances of the Kneser problem, one has to reduce from the (n,k,m) problem with m = ⌊ n/2 ⌋-2k+1. This suggests the question of whether U(n,k) can be properly colored using fewer colors. Motivated by this question, we study some properties of this graph in Section <ref>, where we essentially answer this question in the negative (see Corollary <ref> and the discussion that follows it).
We next show that the (n,k,m) problem with m = ⌊ n/2 ⌋-2k+1 is not harder than the restriction of the standard Schrijver problem to colorings of S(n,k) with n=4k.
This confirms Theorem <ref>.
Suppose that there exists a polynomial-time algorithm, called 𝖠𝗅𝗀𝗈, for the restriction of the Schrijver problem to colorings of S(n,k) with n=4k.
Such an algorithm is able to efficiently find a monochromatic edge in the graph S(4k,k) given an access to a coloring of its vertices with fewer than χ(S(4k,k)) colors.
By Theorem <ref>, it holds that χ(S(4k,k)) = 2k+2.
Suppose without loss of generality that the algorithm 𝖠𝗅𝗀𝗈 queries the oracle for the colors of the two vertices of the monochromatic edge that it returns.
For integers n and k with n ≥ 4k, put m = ⌊ n/2 ⌋ -2k+1, and let c: [n]k_stab→ [m] be an instance of the (n,k,m) problem, i.e., a coloring of the vertices of S(n,k) with m colors.
We present an algorithm that finds a monochromatic edge in S(n,k). It may be assumed that n > 8k. Indeed, otherwise it holds that m ≤ 2k+1 < χ(S(4k,k)), hence a monochromatic edge can be found by running the given algorithm 𝖠𝗅𝗀𝗈 on the restriction of the coloring c to the subgraph of S(n,k) induced by the stable k-subsets of [4k]. Since this graph is isomorphic to S(4k,k), 𝖠𝗅𝗀𝗈 is guaranteed to find a monochromatic edge in this subgraph, which also forms a monochromatic edge in the entire graph S(n,k).
Now, put t = ⌊n/4k⌋, and let J_1, …, J_t be t pairwise disjoint subsets of [n], where each of the subsets includes 4k consecutive elements.
For each i ∈ [t], let ℱ_i denote the family of all stable k-subsets of J_i with respect to the natural cyclic order of J_i (where the largest element precedes the smallest one). Observe that the subgraph of S(n,k) induced by the vertices of each ℱ_i is isomorphic to S(4k,k).
Observe further that
t · (2k+2) > (n/(4k)-1) · (2k+2) = n/2 + n/(2k) - 2k - 2 > ⌊n/2⌋ - 2k + 1 = m,
where the last inequality holds because n > 8k.
Consider the algorithm that given an oracle access to a coloring c as above, for each i ∈ [t], simulates the algorithm 𝖠𝗅𝗀𝗈 on the restriction of the coloring c to the subgraph of S(n,k) induced by the vertices of ℱ_i.
If for some i ∈ [t] all the vertices queried throughout the ith simulation have at most 2k+1 distinct colors, then the algorithm returns the monochromatic edge returned by 𝖠𝗅𝗀𝗈. Otherwise, for each i ∈ [t], the algorithm uses the queries made in the ith simulation of 𝖠𝗅𝗀𝗈 to produce a set ℬ_i ⊆ ℱ_i of 2k+2 vertices with distinct colors.
Then, the algorithm finds a monochromatic edge that involves two vertices of ℬ_1 ∪⋯∪ ℬ_t and returns it.
This completes the description of the algorithm.
Since the running time of 𝖠𝗅𝗀𝗈 is polynomial, the described algorithm can be implemented in polynomial time.
Let us prove the correctness of the algorithm.
Suppose first that for some i ∈ [t], all the vertices queried throughout the ith simulation of 𝖠𝗅𝗀𝗈 have at most 2k+1 distinct colors (including the two vertices of the returned edge).
In this case, the answers of the oracle in the ith simulation are consistent with some coloring with at most 2k+1 colors of the subgraph of S(n,k) induced by ℱ_i. Since this graph is isomorphic to S(4k,k), whose chromatic number is 2k+2, 𝖠𝗅𝗀𝗈 is guaranteed to find in the graph a monochromatic edge, which is also a monochromatic edge in S(n,k), and thus a valid output of the algorithm.
Otherwise, for each i ∈ [t], the attempt to simulate 𝖠𝗅𝗀𝗈 on the subgraph of S(n,k) induced by ℱ_i provides a set ℬ_i of 2k+2 vertices of ℱ_i with distinct colors.
By (<ref>), the total number m of colors used by the coloring c is smaller than t · (2k+2). This implies that there exist distinct indices i,j ∈ [t] for which a vertex of ℬ_i and a vertex of ℬ_j have the same color. Since the vertices of ℬ_i are disjoint from those of ℬ_j, these two vertices form a monochromatic edge in S(n,k) and form a valid output of the algorithm.
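The window-based simulation can be sketched in Python as follows (an added illustration; algo_4k stands for the assumed black-box algorithm 𝖠𝗅𝗀𝗈 and is expected to work over window-local indices, which is a choice made here, not in the paper).

def solve_via_4k(n, k, c, algo_4k):
    # sketch of the main case n > 8k; algo_4k(oracle, k) returns a monochromatic edge of
    # S(4k,k) given oracle access to a coloring of its vertices (as index sets in the window)
    w, t = 4 * k, n // (4 * k)
    windows = [list(range(i * w + 1, (i + 1) * w + 1)) for i in range(t)]
    reps = []
    for J in windows:
        queries = {}

        def oracle(idx_set, J=J, queries=queries):
            A = tuple(J[i] for i in idx_set)  # relabel window indices to elements of [n]
            queries[A] = c(A)
            return queries[A]

        edge = algo_4k(oracle, k)
        if len(set(queries.values())) <= 2 * k + 1:
            A_idx, B_idx = edge
            return tuple(J[i] for i in A_idx), tuple(J[i] for i in B_idx)
        seen, batch = set(), []
        for A, col in queries.items():  # keep 2k+2 queried vertices with distinct colors
            if col not in seen:
                seen.add(col)
                batch.append((A, col))
            if len(batch) == 2 * k + 2:
                break
        reps.append(batch)
    for i in range(t):                  # a color must repeat across two windows
        for j in range(i + 1, t):
            for A, ca in reps[i]:
                for B, cb in reps[j]:
                    if ca == cb:
                        return A, B
    return None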
We end this section with the observation that there exists an efficient randomized algorithm for the (n,k,m) problem with m = ⌊ n/2 ⌋-2k+1 on instances with n = Ω( k^4).
This follows from the paper <cit.>, which yields that for such n and k, the (n,k,m) problem is essentially reducible to the (n-1,k,m-1) problem in randomized polynomial time (with exponentially small failure probability).
By applying this reduction m-1 times, it follows that the (n,k,m) problem with m = ⌊ n/2 ⌋-2k+1 where n = Ω(k^4), is efficiently reducible to the (⌈ n/2 ⌉ +2k,k,1) problem, which can obviously be solved efficiently.
§ THE UNFAIR INDEPENDENT SET IN CYCLE PROBLEM
In this section, we study the Unfair Independent Set in Cycle problem (see Definition <ref>).
§.§ Hardness
We prove now Theorem <ref>, which asserts that the Unfair Independent Set in Cycle problem is PPA-complete.
We first show that the problem belongs to PPA.
To do so, we show a polynomial-time reduction to the Schrijver problem in the white-box input model, which lies in PPA <cit.> (see Definition <ref>).
Consider an instance of the Unfair Independent Set in Cycle problem, i.e., integers n and k with n ≥ 2k and ℓ subsets V_1, …, V_ℓ of [n], where ℓ≤ n-2k+1 and |V_i| ≥ 2 for all i ∈ [ℓ]. For such an instance, the reduction produces a Boolean circuit that given a stable k-subset A of [n], outputs the smallest index i ∈ [ℓ] such that |A ∩ V_i| > |V_i|/2 if such an i exists, and outputs ℓ otherwise. Note that this circuit represents a coloring c: [n]k_stab→ [ℓ] of the vertices of the graph S(n,k) with ℓ≤ n-2k+1 colors, hence it is an appropriate instance of the Schrijver problem. Clearly, the Boolean circuit that computes c can be constructed in polynomial time.
For correctness, we show that a solution for the constructed instance can be used to efficiently find a solution for the given instance.
Consider a monochromatic edge of S(n,k), i.e., two disjoint sets A,B ∈[n]k_stab with c(A)=c(B). Since A and B are disjoint, it is impossible that |A ∩ V_i| > |V_i|/2 and |B ∩ V_i| > |V_i|/2 for some i ∈ [ℓ]. By the definition of the coloring c, it follows that c(A)=c(B)=ℓ, hence |A ∩ V_i| ≤ |V_i|/2 and |B ∩ V_i| ≤ |V_i|/2 for all i ∈ [ℓ-1]. Moreover, at least one of A and B intersects V_ℓ at no more than |V_ℓ|/2 elements, and thus forms a valid solution for the given instance. Since it is possible to check in polynomial time which of the sets A and B satisfies this requirement, the proof of the membership of the Unfair Independent Set in Cycle problem in PPA is completed.
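The coloring used in this membership proof, and the recovery of a solution from a monochromatic edge, can be sketched as follows (an added Python illustration with hypothetical function names).

def uisc_to_schrijver_coloring(V_sets):
    # a stable k-subset A gets the smallest index i with |A ∩ V_i| > |V_i|/2, and l otherwise
    l = len(V_sets)

    def color(A):
        S = set(A)
        for i, V in enumerate(V_sets, start=1):
            if 2 * len(S & set(V)) > len(V):
                return i
        return l

    return color

def recover_solution(A, B, V_sets):
    # given a monochromatic edge (A,B), both colored l, return the endpoint that
    # intersects V_l in at most half of its elements
    V = set(V_sets[-1])
    return A if 2 * len(set(A) & V) <= len(V) else B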
We next prove that the Unfair Independent Set in Cycle problem is PPA-hard. To do so, we reduce from the Fair Independent Set in Cycle problem (see Definition <ref>). We use here the fact, proved in <cit.>, that this problem is PPA-hard even when it is restricted to the instances in which the parts of the given partition have odd sizes larger than 2.
Consider such an instance of the Fair Independent Set in Cycle problem, i.e., integers n and m along with a partition V_1, … ,V_m of [n] such that |V_i| is odd and satisfies |V_i| ≥ 3 for all i ∈ [m]. Notice that n and m have the same parity, and define k = (n-m)/2. Our reduction simply returns the integers n and k, which clearly satisfy n ≥ 2k, and the sets V_1, …, V_m. Note that |V_i| ≥ 2 for all i ∈ [m] and that the number m of sets is n-2k. Since the latter does not exceed n-2k+1, this is a valid instance of the Unfair Independent Set in Cycle problem.
For correctness, we show that a solution for the constructed instance is also a solution for the given instance.
Let S be a solution for the instance, i.e., a stable k-subset of [n] such that for all i ∈ [m] it holds that |S ∩ V_i| ≤ |V_i|/2. Since the sizes of the sets V_1, …, V_m are odd, it follows that |S ∩ V_i| ≤|V_i|-1/2 for all i ∈ [m]. Since the sets V_1, …, V_m form a partition of [n], it further follows that
|S| = ∑_{i ∈ [m]} |S ∩ V_i| ≤ ∑_{i ∈ [m]} (|V_i|-1)/2 = (n-m)/2 = k.
Since |S|=k, equality must hold throughout (<ref>), so |S ∩ V_i| = (|V_i|-1)/2 for all i ∈ [m].
This implies that S is a stable subset of [n] satisfying |S ∩ V_i| ≥ |V_i|/2-1 for all i ∈ [m], hence it forms a valid solution for the given instance.
This completes the proof.
Given the PPA-hardness of the Unfair Independent Set in Cycle problem, it is interesting to identify the range of the parameters n and k for which the hardness holds.
One can verify, using properties of the hard instances constructed in <cit.>, that the hardness given in Theorem <ref> holds for instances with n = (2+o(1)) · k, where the o(1) term tends to 0 as n and k tend to infinity.
The following simple result shows that for n=3k the Unfair Independent Set in Cycle problem is at least as hard as the Cycle plus Triangles problem, whose tractability is an open question (see Definition <ref>).
The Cycle plus Triangles problem is polynomial-time reducible to the restriction of the Unfair Independent Set in Cycle problem to instances that consist of k sets of size 3 that form a partition of [n] where n=3k.
Consider an instance of the Cycle plus Triangles problem, i.e., an integer k and a graph G on 3k vertices, whose edge set is the disjoint union of a Hamilton cycle and k pairwise vertex-disjoint triangles. It was shown in <cit.> that given such a graph it is possible to find such a Hamilton cycle and triangles in polynomial time.
Let V_1, …, V_k denote the triplets of vertices of the k triangles, and assume without loss of generality that the vertices along the Hamilton cycle are labeled by the elements of [3k] according to their natural cyclic order.
Consider the polynomial-time reduction that given such an instance, returns the integers n=3k and k along with the sets V_1, …, V_k. Observe that the number k of sets does not exceed n-2k+1=k+1, hence the reduction returns an appropriate instance of the Unfair Independent Set in Cycle problem.
Note that the reduction can be implemented in polynomial time.
For correctness, consider a solution for the produced instance, i.e., a stable k-subset S of [n] satisfying |S ∩ V_i| ≤ 3/2, and thus |S ∩ V_i| ≤ 1, for all i ∈ [k]. Since the sets V_1, …, V_k form a partition of [n], using |S|=k, it follows that |S ∩ V_i| = 1 for all i ∈ [k]. Therefore, the stable set S is an independent set of size k in G, and thus forms a solution for the given instance as well.
§.§ Algorithms
We next prove Theorem <ref>, which states that the Unfair Independent Set in Cycle problem can be solved efficiently on instances with n ≥ c · k for some absolute constant c.
We start by presenting a randomized algorithm, based on a probabilistic argument with alterations, and then derandomize it using the method of conditional expectations.
Consider an instance of the problem, i.e., integers n and k with n ≥ 2k and ℓ subsets V_1, …, V_ℓ of [n], where ℓ≤ n-2k+1 and |V_i| ≥ 2 for all i ∈ [ℓ]. Put r_i = |V_i| ≥ 2 for each i ∈ [ℓ]. Suppose further that n ≥ c · k for a sufficiently large constant c to be determined later.
Let p = 2k/n ≤ 2/c, and consider the following randomized algorithm.
* Pick a random subset A of [n] by including in A every element of [n] independently with probability p.
* Remove from A every element j ∈ [n] that satisfies {j,j+1}⊆ A (where for j=n, the element j+1 is considered as 1). Let A' denote the obtained set.
* For every i ∈ [ℓ] that satisfies |A' ∩ V_i|>r_i/2, remove from A' arbitrary |A' ∩ V_i| - ⌊ r_i/2 ⌋ elements of V_i. Let A” denote the obtained set.
* If |A”| ≥ k, then return an arbitrary k-subset of A”. Otherwise, return `failure'.
We first claim that unless the algorithm returns `failure', it returns a valid output. Indeed, Item <ref> of the algorithm guarantees that the set A' is stable. Further, Item <ref> guarantees that its subset A” satisfies |A”∩ V_i| ≤⌊ r_i/2 ⌋ for all i ∈ [ℓ]. Therefore, in the case where |A”| ≥ k, any k-subset of A” returned in Item <ref> of the algorithm is a valid solution for the given instance.
We next estimate the expected size of the set A” produced by the algorithm.
The set A chosen in Item <ref> of the algorithm includes every element of [n] with probability p. Hence, its expected size satisfies 𝔼[|A|] = p · n. In Item <ref> of the algorithm, the probability of every element of [n] to be removed from A is equal to the probability that both the element and its successor modulo n belong to A, which is p^2. By linearity of expectation, this implies that the expected size of the set A' satisfies 𝔼[|A'|] = (p-p^2) · n.
It remains to estimate the expected number of elements removed from A' in Item <ref> of the algorithm.
Observe that for each i ∈ [ℓ], the algorithm removes from A' the smallest possible number of elements of V_i ensuring that the obtained set A” includes at most ⌊ r_i/2 ⌋ of them.
Therefore, the number of removed elements of V_i does not exceed the number of subsets of V_i of size ⌊ r_i/2 ⌋ + 1 that are contained in A (because it suffices to remove one element from each of them). It thus follows that the expected number of elements of V_i that are removed from A' in Item <ref> of the algorithm is at most
\binom{r_i}{⌊ r_i/2 ⌋+1}· p^{⌊ r_i/2 ⌋ + 1} ≤ 2^{r_i}· p^{⌊ r_i/2 ⌋+1} ≤ (4p)^{⌊ r_i/2 ⌋ + 1} ≤ (4p)^2,
where in the last inequality we use the assumption r_i ≥ 2 and the fact that p ≤ 1/4 (which holds for any sufficiently large choice of the constant c).
It therefore follows, using again the linearity of expectation, that the expected size of A” satisfies
𝔼[|A”|] ≥ (p-p^2) · n - ℓ· (4p)^2 ≥ (p-17p^2) · n ≥ k,
where the second inequality holds by ℓ≤ n, and the last inequality by the definition of p=2k/n, assuming again that n ≥ c · k for a sufficiently large constant c (say, c=68). This implies that there exists a random choice for the presented randomized algorithm for which it returns a valid solution.
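A minimal Python sketch of this randomized procedure (before derandomization) is given below; it is an added illustration under the same parameter choice p = 2k/n, and a single run may fail when n/k is too small.

import random

def randomized_uisc(n, k, V_sets, seed=None):
    # one run of the probabilistic argument with alterations; returns a stable k-subset S
    # with |S ∩ V_i| <= |V_i|/2 for all i, or None if this run fails
    rng = random.Random(seed)
    p = 2 * k / n
    A = {i for i in range(1, n + 1) if rng.random() < p}
    A1 = {j for j in A if (j % n) + 1 not in A}   # drop j whenever j and j+1 were both picked
    A2 = set(A1)
    for V in V_sets:                               # alteration: enforce |A'' ∩ V_i| <= floor(|V_i|/2)
        inter = sorted(A2 & set(V))
        excess = len(inter) - len(V) // 2
        for x in inter[:max(0, excess)]:
            A2.discard(x)
    return sorted(A2)[:k] if len(A2) >= k else None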
We next apply the method of conditional expectations to derandomize the above algorithm.
Let us start with a few notations.
For a set S ⊆ [n], define
f(S) = |S| - |{ j ∈ [n] |{j,j+1}⊆ S}| - ∑_i ∈ [ℓ] | {B ⊆ S ∩ V_i | |B| = ⌊ r_i/2 ⌋+1 } |.
In words, f(S) is determined by subtracting from the size of S the number of pairs of consecutive elements in S (modulo n) as well as the number of subsets of S ∩ V_i of size ⌊ r_i/2 ⌋+1 for each i ∈ [ℓ].
For a vector x ∈{0,1,∗}^n, let S_x denote a random subset of [n] such that for every i ∈ [n], if x_i=1 then i ∈ S_x, if x_i=0 then i ∉ S_x, and if x_i = ∗ then i is chosen to be included in S_x independently with probability p=2k/n. We refer to the vector x as a partial choice of a subset of [n].
We further define a potential function ϕ: {0,1,∗}^n → ℝ that maps every vector x ∈{0,1,∗}^n to the expected value of f(S) where S is chosen according to the distribution of S_x, that is, ϕ(x) = 𝔼[f(S_x)].
We observe that given a partial choice x ∈{0,1,∗}^n, the value of ϕ(x) can be calculated efficiently, in time polynomial in n.
Indeed, to calculate the expected value of f(S_x), it suffices, by linearity of expectation, to calculate the expected value of each of the three terms in (<ref>) evaluated at the set S_x. It is easy to see that the expected value of the first term is
|{j ∈ [n] | x_j=1}|+ p · |{ j ∈ [n] | x_j = ∗}|,
and that the expected value of the second term is
| { j ∈ [n] | x_j=x_j+1=1 } | + p · | { j ∈ [n] |{x_j,x_j+1}={1,∗}} | + p^2 · | { j ∈ [n] | x_j=x_j+1=∗} |.
As for the third term, by linearity of expectation, it suffices to determine the expected value of
| {B ⊆ S_x ∩ V_i | |B| = ⌊ r_i/2 ⌋+1 } |
for i ∈ [ℓ]. Letting s_i = |{ j ∈ V_i | x_j=∗}| and t_i = |{ j ∈ V_i | x_j=1}|, one can check that the required expectation is precisely
∑_{m=0}^{⌊ r_i/2 ⌋+1} \binom{s_i}{m} · \binom{t_i}{⌊ r_i/2 ⌋+1 - m} · p^m.
Since all the terms can be calculated in time polynomial in n, so can ϕ(x).
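The computation of ϕ(x) can be sketched in Python as follows (an added illustration; the encoding of a partial choice x as a list with entries in {0, 1, '*'} is an assumption made here).

from math import comb

def phi(x, n, k, V_sets):
    # expected value of f(S_x) for a partial choice x, where x[j] is the choice for element j+1
    p = 2 * k / n
    prob = {1: 1.0, 0: 0.0, '*': p}
    exp_size = sum(prob[v] for v in x)                                    # E[|S_x|]
    exp_pairs = sum(prob[x[j]] * prob[x[(j + 1) % n]] for j in range(n))  # cyclically consecutive pairs
    exp_bad = 0.0                                                         # oversized subsets of the V_i
    for V in V_sets:
        q = len(V) // 2 + 1
        s = sum(1 for v in V if x[v - 1] == '*')
        t = sum(1 for v in V if x[v - 1] == 1)
        exp_bad += sum(comb(s, m) * comb(t, q - m) * p ** m for m in range(q + 1))
    return exp_size - exp_pairs - exp_bad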
We describe a deterministic algorithm that finds a set S ⊆ [n] satisfying f(S) ≥ k.
Given such a set, the algorithm is completed by applying Items <ref>, <ref>, and <ref> of the algorithm presented above.
Indeed, by applying Items <ref> and <ref> we obtain a stable set S” such that |S”∩ V_i| ≤ r_i/2 for all i ∈ [ℓ].
The fact that f(S) ≥ k guarantees that this set S” satisfies |S”| ≥ k, hence Item <ref> returns a valid solution.
To obtain the desired set S ⊆ [n] with f(S) ≥ k, our algorithm maintains a partial choice x ∈{0,1,∗}^n satisfying ϕ(x) ≥ k.
We start with x = (∗, …, ∗), for which the analysis of the randomized algorithm guarantees that ϕ(x) ≥ k, provided that n ≥ c · k for a sufficiently large constant c. We then choose the entries of x, one by one, to be either 0 or 1.
In the ith iteration, in which x_1, …, x_i-1∈{0,1}, the algorithm evaluates ϕ at the two partial choices x_i ← 0 = (x_1, …, x_i-1,0,∗,…,∗) and x_i ← 1 = (x_1, …, x_i-1,1,∗,…,∗), and continues to the next iteration with one of them which maximizes the value of ϕ. By the law of total expectation, it holds that ϕ(x) = p ·ϕ(x_i ← 1) + (1-p) ·ϕ(x_i ← 0), implying that the choice of the algorithm preserves the inequality ϕ(x) ≥ k.
At the end of the process, we get a vector x ∈{0,1}^n with ϕ(x) ≥ k, which fully determines the desired set S with f(S) ≥ k.
Since the evaluations of ϕ can be calculated in time polynomial in n, the algorithm can be implemented in polynomial time.
This completes the proof.
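The derandomization loop itself is short; a sketch (reusing the phi function above, with the same hedges) reads as follows.

def derandomized_uisc(n, k, V_sets):
    # method of conditional expectations: fix the entries one by one, keeping phi(x) >= k,
    # then apply the two pruning steps of the randomized algorithm to the resulting set
    x = ['*'] * n
    for j in range(n):
        x[j] = 1
        val1 = phi(x, n, k, V_sets)
        x[j] = 0
        val0 = phi(x, n, k, V_sets)
        x[j] = 1 if val1 >= val0 else 0
    S = {j + 1 for j in range(n) if x[j] == 1}
    S = {j for j in S if (j % n) + 1 not in S}                 # drop consecutive pairs
    for V in V_sets:                                            # alteration step
        inter = sorted(S & set(V))
        for v in inter[:max(0, len(inter) - len(V) // 2)]:
            S.discard(v)
    return sorted(S)[:k]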
Given the above result, it would be interesting to determine the smallest constant c for which the Unfair Independent Set in Cycle problem can be solved efficiently on instances with n ≥ c · k. Of particular interest is the restriction of the problem to instances with n=3k and with pairwise disjoint sets of size 3, because as follows from Proposition <ref>, an efficient algorithm for this restriction would imply an efficient algorithm for the Cycle plus Triangles problem.
Interestingly, it turns out that the restriction of the problem to instances with n=4k and with pairwise disjoint sets of size 4 does admit an efficient algorithm. This is a consequence of the following result derived from an argument of Alon <cit.> (see also <cit.>). We include its quick proof for completeness.
There exists a polynomial-time algorithm that given an integer k and a partition of [4k] into k subsets V_1, …, V_k with |V_i|=4 for all i ∈ [k], finds a partition of [4k] into four stable k-subsets S_1,S_2,S_3,S_4 of [4k] such that |S_j ∩ V_i| = 1 for all j ∈ [4] and i ∈ [k].
Consider the algorithm that given an integer k and a partition of V = [4k] into k subsets V_1, …, V_k with |V_i|=4 for all i ∈ [k], acts as follows.
Let M_1 and M_2 be the two matchings of size 2k in the cycle on the vertex set V with the natural order along the cycle.
Let M_3 be an arbitrary matching of size 2k on the vertex set V that includes two edges with vertices in V_i for each i ∈ [k].
Consider the graph G_1 = (V,M_1 ∪ M_3). Since the edge set of G_1 is a union of two matchings, it has no odd cycles, hence it is 2-colorable, and a 2-coloring c_1 of G_1 can be found in polynomial time.
Notice that the edges of M_3 in G_1 guarantee that each color class of the coloring c_1 includes precisely two vertices from V_i for each i ∈ [k].
Now, let M_4 be the matching of size 2k on the vertex set V that includes, for each i ∈ [k], the two edges with vertices in V_i whose endpoints have the same color according to c_1.
Consider the graph G_2 = (V,M_2 ∪ M_4). As before, G_2 is 2-colorable, and a 2-coloring c_2 of G_2 can be found in polynomial time.
Finally, the algorithm defines a 4-coloring of the graph (V,M_1 ∪ M_2 ∪ M_3 ∪ M_4) by assigning to every vertex v ∈ V the pair (c_1(v),c_2(v)).
Observe that the produced coloring is proper and that its color classes can be found efficiently.
By the edges of M_1 ∪ M_2, each color class is a stable subset of [4k], and by the edges of M_3 ∪ M_4, each color class includes at most one vertex from each V_i.
This implies that the four color classes are stable k-subsets of V that satisfy the assertion of the proposition, and we are done.
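The matching-based procedure from this proof can be sketched in Python as follows (an added illustration; the choice of the two edges inside each V_i for the matching M_3 is arbitrary, as in the proof).

from itertools import combinations

def two_color(n, edges):
    # proper 2-coloring of a graph with no odd cycles (a union of two matchings)
    adj = {v: [] for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    color = {}
    for s in range(1, n + 1):
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
    return color

def four_stable_transversals(k, V_sets):
    # partition [4k] into four stable k-subsets, each meeting every V_i exactly once
    n = 4 * k
    M1 = [(j, j + 1) for j in range(1, n, 2)]
    M2 = [(j, j % n + 1) for j in range(2, n + 1, 2)]
    M3 = [e for V in V_sets for e in (tuple(sorted(V)[:2]), tuple(sorted(V)[2:]))]
    c1 = two_color(n, M1 + M3)
    M4 = []
    for V in V_sets:
        M4.extend(p for p in combinations(V, 2) if c1[p[0]] == c1[p[1]])
    c2 = two_color(n, M2 + M4)
    classes = {}
    for v in range(1, n + 1):
        classes.setdefault((c1[v], c2[v]), set()).add(v)
    return list(classes.values())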
§ UNSTABLE SETS
In this section, we explore two subgraphs of the Kneser graph K(n,k) induced by families of unstable k-subsets of [n].
These subgraphs are defined as follows.
Let n and k be integers with n ≥ 2k.
Let U(n,k) denote the subgraph of K(n,k) induced by the family of all k-subsets of [n] that include a pair of consecutive elements (where the elements n and 1 are not considered as consecutive for n>2).
Let U(n,k) denote the subgraph of K(n,k) induced by the family of all k-subsets of [n] that include a pair of consecutive elements modulo n, i.e., the family of unstable k-subsets of [n].
§.§ Chromatic Number
We study now the chromatic numbers of the two graphs from Definition <ref>.
It is worth mentioning here that a result of Dol'nikov <cit.> generalizes the lower bound of Lovász <cit.> on the chromatic number of K(n,k) to general graphs, using a notion called colorability defect (see also <cit.> and <cit.>). This generalization implies a tight lower bound of n-2k+2 on the chromatic number of K(n,k) and a somewhat weaker lower bound of n-4k+4 on the chromatic number of S(n,k) (see, e.g., <cit.>). It turns out, though, that this generalized approach of <cit.> does not yield any meaningful bounds on the chromatic numbers of the graphs from Definition <ref>.
The following theorem determines the exact chromatic number of the graph U(n,k).
For all integers n and k with n ≥ 2k,
χ(U(n,k)) = min(n-2k+2,⌊ n/2 ⌋).
The proof of Theorem <ref> relies on a topological argument.
It uses the following variant of the Borsuk–Ulam theorem (see, e.g., <cit.>).
Here, 𝕊^t stands for the t-dimensional unit sphere with respect to the Euclidean norm, that is, 𝕊^t = { x ∈ ℝ^{t+1} | ‖x‖ = 1}.
If the t-dimensional sphere 𝕊^t is covered by t+1 sets F_1, …, F_t+1, each F_j open or closed, then there exist an index j ∈ [t+1] and a point x ∈ 𝕊^t such that both x and -x belong to F_j.
Another crucial ingredient in the proof of Theorem <ref> is the following lemma.
It is inspired by a lemma of Gale <cit.> that was applied in <cit.> to determine the chromatic number of S(n,k) (see also <cit.>).
Here, a hyperplane h in ℝ^{t+1} that passes through the origin is defined as the set { z ∈ ℝ^{t+1} | ⟨ x,z ⟩ = 0} for some x ∈ 𝕊^t, and the two open hemispheres that h determines are { z ∈ 𝕊^t | ⟨ x,z⟩ >0 } and { z ∈ 𝕊^t | ⟨ x,z⟩ <0 }.
For integers n and k with n ≥ 2k, let t = min(n-2k+2,⌊ n/2 ⌋)-1.
Then, there exist n points y_1,…,y_n ∈ 𝕊^t such that for every hyperplane h in ℝ^{t+1} that passes through the origin, at least one of the two open hemispheres that h determines contains the points of {y_i | i ∈ A} for some vertex A of U(n,k).
Let n, k, and t be integers as in the statement of the lemma.
Let γ: ℝ → ℝ^{t+1} denote the function defined by
γ(x) = (1,x,x^2,…,x^t).
For every i ∈ [n], consider the point w_i = γ(i) ∈ ℝ^{t+1}.
We prove that for every hyperplane h in ℝ^{t+1} that passes through the origin, at least one of the two open half-spaces that h determines contains the points of {w_i | i ∈ A} for some vertex A of U(n,k). This will immediately imply that the points y_1, …, y_n defined by y_i = w_i/‖w_i‖ for i ∈ [n], which all lie on 𝕊^t, satisfy the assertion of the lemma.
Consider an arbitrary hyperplane h in ℝ^{t+1} that passes through the origin.
Every point w_i either lies on h or belongs to one of the two open half-spaces determined by h.
Let W_on denote the set of indices i ∈ [n] for which the point w_i lies on h, and let W_1 and W_2 denote the sets of indices i ∈ [n] of the points w_i that belong to the two open half-spaces determined by h. Our goal is to show that at least one of the sets W_1 and W_2 contains a vertex of U(n,k). To do so, one has to show that at least one of them includes two consecutive elements and has size at least k.
The definition of the points w_1,…,w_n implies that the size of W_on does not exceed the number of roots of some nonzero polynomial of degree at most t, hence
|W_on| ≤ t.
In fact, we may and will assume that |W_on|=t, as otherwise it is possible to continuously move h so that it will satisfy this property and no w_i will cross from one side to the other (see, e.g., <cit.>).
Now, the points w_i with i ∈ W_on divide the image of the function γ into t+1 open continuous parts that alternate between the two open half-spaces determined by h.
Observe that all the indices i of the points w_i that belong to every such continuous part are either in W_1 or in W_2.
Suppose without loss of generality that W_1 corresponds to the points w_i that belong to ⌈(t+1)/2⌉ of the parts of the image of γ, and thus W_2 corresponds to the points w_i that belong to the other ⌊(t+1)/2⌋ parts.
In order to prove that W_1 (or W_2) includes two consecutive elements, it suffices to show that its size exceeds the number of parts of the image of γ associated with it.
We first observe that at least one of the sets W_1 and W_2 satisfies this requirement, that is,
|W_1| > ⌈(t+1)/2⌉ or |W_2| > ⌊(t+1)/2⌋.
To see this, assume for the sake of contradiction that both the inequalities in (<ref>) do not hold.
It follows that
n-t = |W_1|+|W_2| ≤ ⌈(t+1)/2⌉ + ⌊(t+1)/2⌋ = t+1,
which implies that n ≤ 2 t +1.
This, however, contradicts the definition of t which guarantees that t ≤⌊ n/2 ⌋ -1.
Given that at least one of the inequalities in (<ref>) holds, it is easy to see that in the case where |W_1| ≤ |W_2|, the first inequality in (<ref>) implies the second, hence the second inequality in (<ref>) necessarily holds.
Further, in the case where |W_1| > |W_2|, since the right-hand side of the two inequalities in (<ref>) differ by at most 1, the first inequality in (<ref>) necessarily holds.
It thus follows that for at least one of the sets W_1 and W_2, its size is equal to max(|W_1|,|W_2|) and is strictly larger than the number of parts of the image of γ associated with it.
This implies that this set includes two consecutive elements.
By the definition of t, we have t ≤ n-2k+1, which implies that the size of the set satisfies
max(|W_1|,|W_2|) ≥ ⌈(n-t)/2⌉ ≥ ⌈(2k-1)/2⌉ = k.
This completes the proof.
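The moment-curve construction underlying this lemma is easy to experiment with; the following Python sketch (an added illustration, not part of the proof) builds the points y_i and spot-checks the hemisphere property against random hyperplanes for small n and k.

import math
import random

def gale_like_points(n, k):
    # points y_1..y_n on the t-sphere from the moment curve, t = min(n-2k+2, n//2) - 1
    t = min(n - 2 * k + 2, n // 2) - 1
    pts = []
    for i in range(1, n + 1):
        w = [float(i) ** j for j in range(t + 1)]
        norm = math.sqrt(sum(c * c for c in w))
        pts.append([c / norm for c in w])
    return t, pts

def hemisphere_spot_check(n, k, trials=1000, seed=0):
    # for random hyperplanes through the origin, some open hemisphere should contain the
    # points of a vertex of U(n,k): at least k points including a (non-cyclic) consecutive pair
    rng = random.Random(seed)
    t, pts = gale_like_points(n, k)
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(t + 1)]
        ok = False
        for sign in (1, -1):
            side = {i + 1 for i, y in enumerate(pts) if sign * sum(a * b for a, b in zip(x, y)) > 0}
            if len(side) >= k and any(j + 1 in side for j in side if j < n):
                ok = True
                break
        if not ok:
            return False
    return True

print(hemisphere_spot_check(9, 3))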
We are ready to prove Theorem <ref>.
For the upper bound, apply first Theorem <ref> to obtain that
χ(U(n,k)) ≤χ(K(n,k)) = n-2k+2.
Next, since every vertex of U(n,k) includes two consecutive elements, it must include an even element. By assigning to every such vertex its minimal even element, we obtain a proper coloring of U(n,k) with ⌊ n/2 ⌋ colors, hence χ(U(n,k)) ≤⌊ n/2 ⌋. This completes the proof of the upper bound.
The lower bound relies on the Borsuk–Ulam theorem (Theorem <ref>). Let
t = min (n-2k+2, ⌊ n/2 ⌋) - 1,
and suppose for the sake of contradiction that there exists a proper coloring of U(n,k) with t colors. Let y_1,…,y_n ∈ 𝕊^t denote the points given by Lemma <ref>. We define t sets F_1,…,F_t ⊆ 𝕊^t as follows. A point x ∈ 𝕊^t is included in F_j with j ∈ [t] if there exists a vertex A of U(n,k) colored j such that { y_i | i ∈ A}⊆ H(x), where H(x) is the open hemisphere centered at x. We further define F_t+1 = 𝕊^t ∖ (F_1 ∪⋯∪ F_t). Note that the sets F_1,…,F_t+1 cover 𝕊^t. Note further that F_1,…,F_t are open whereas F_t+1 is closed.
By Theorem <ref>, there exist an index j ∈ [t+1] and a point x ∈ 𝕊^t such that both x and -x belong to F_j.
If j ∈ [t], then it follows from the definition of F_j that there exist two vertices of U(n,k) with color j that correspond to disjoint sets, contradicting the assumption that the given coloring is proper.
If j = t+1 then neither H(x) nor H(-x) contains { y_i | i ∈ A} for a vertex A of U(n,k), contradicting Lemma <ref>.
This completes the proof.
We derive the following result on the chromatic number of U(n,k).
For all integers n and k with n ≥ 2k,
min(n-2k+2,⌊ n/2 ⌋) ≤χ(U(n,k)) ≤min(n-2k+2,⌈ n/2 ⌉).
For the upper bound, apply first Theorem <ref> to obtain that
χ(U(n,k)) ≤χ(K(n,k)) = n-2k+2.
Next, since every vertex of U(n,k) includes two consecutive elements modulo n, it must include an odd element. By assigning to every such vertex its minimal odd element, we obtain a proper coloring of U(n,k) with ⌈ n/2 ⌉ colors, hence χ(U(n,k)) ≤⌈ n/2 ⌉.
This completes the proof of the upper bound.
The lower bound follows by combining Theorem <ref> with the fact that the graph considered there is an induced subgraph of the graph considered here.
We conclude this section with a discussion on the tightness of Corollary <ref>.
Notice that the upper and lower bounds provided in Corollary <ref> coincide whenever the integer n is even or satisfies n ≤ 4k-4.
For other values of n and k the two bounds differ by 1.
Yet, it turns out that the proof technique of Theorem <ref> can be used to show that the upper bound in Corollary <ref> is tight for all integers n that are congruent to 1 modulo 4. We provide the details in Appendix <ref>. This leaves us with a gap of 1 between the upper and lower bounds in Corollary <ref> only for those integers n and k, where n is congruent to 3 modulo 4 and satisfies n ≥ 4k-1.
We further observe that for an odd integer n and for every proper coloring of U(n,k) that includes a trivial color class (all of whose members share a common element), the number of used colors is at least the upper bound in Corollary <ref>. Indeed, the restriction of such a coloring to the vertices that do not include the common element of the trivial color class is a proper coloring of a graph isomorphic to U(n-1,k), so by Theorem <ref> it uses at least min(n-2k+1,(n-1)/2) colors. Together with the additional color of the trivial color class, the total number of colors is at least min(n-2k+2, ⌈ n/2 ⌉), as claimed.
§.§ Independence Number
We next determine the largest size of an independent set in the graph U(n,k).
The proof uses the Hilton–Milner theorem (Theorem <ref>).
For all integers k ≥ 2 and n ≥ 2k, it holds that
α(U(n,k)) = \binom{n-1}{k-1} - \binom{n-k-1}{k-1}.
Fix some integers k ≥ 2 and n ≥ 2k.
We first observe that there exists an independent set in U(n,k) with the required size. This follows by considering the family of all vertices of U(n,k) that include some fixed element i ∈ [n]. This family is clearly intersecting and thus forms an independent set in U(n,k). The number of k-subsets of [n] that include i is \binom{n-1}{k-1}, where by Lemma <ref>, \binom{n-k-1}{k-1} of them are stable and therefore do not form vertices of U(n,k). Therefore,
α(U(n,k)) ≥ \binom{n-1}{k-1} - \binom{n-k-1}{k-1}.
We now prove the upper bound.
We start with the simple case of n=2k.
Here, the edges of the Kneser graph K(n,k) form a perfect matching in which every edge connects a set to its complement. Any independent set in U(n,k) includes at most one vertex from every edge in this matching. However, the set of the odd elements of [n] and the set of the even elements of [n] are adjacent in K(n,k) but are not vertices of U(n,k).
This implies that for n=2k, it holds that
α(U(n,k)) ≤ (1/2)·\binom{2k}{k} - 1 = \binom{2k-1}{k-1} - 1.
This coincides with the required bound.
Suppose now that k >3 and n > 2k.
Let ℱ be a family of k-subsets of [n] that forms an independent set in U(n,k).
Since U(n,k) is an induced subgraph of K(n,k), it follows that ℱ is an intersecting family.
If ℱ is trivial, then its size does not exceed the number of vertices in U(n,k) that include some fixed element i ∈ [n], hence |ℱ| ≤ \binom{n-1}{k-1} - \binom{n-k-1}{k-1}, and we are done.
So suppose that ℱ is non-trivial, and assume for the sake of contradiction that its size satisfies |ℱ| ≥ \binom{n-1}{k-1} - \binom{n-k-1}{k-1} + 1.
By the Hilton–Milner theorem, stated as Theorem <ref>, it follows that |ℱ| = \binom{n-1}{k-1} - \binom{n-k-1}{k-1} + 1. Moreover, using k >3, it follows that there exist an element i ∈ [n] and a k-subset A of [n] with i ∉ A such that ℱ = { F ∈[n]k | i ∈ F, F ∩ A ≠∅}∪{A}.
To obtain a contradiction, it suffices to show that such a family includes a set that does not form a vertex of U(n,k).
So assume without loss of generality that i=1, and consider the following two k-subsets of [n]: {1,3,5,…,2k-1} and {1,4,6,…,2k}. By n > 2k, neither of them is a vertex of U(n,k). If there exists an element j ∈ A satisfying 3 ≤ j ≤ 2k, then at least one of them belongs to ℱ, hence ℱ includes a set that does not form a vertex of U(n,k), as required.
Otherwise, using k ≥ 3 and 1 ∉ A, there exists an element j ∈ A satisfying 2k < j < n. It thus follows that the set {1,3,5,…,2k-3,j} belongs to ℱ but does not form a vertex of U(n,k), as required.
We turn to the case in which k=3 and n > 6.
The proof for this case follows the argument presented above for k>3. The only difference is that while assuming for the sake of contradiction that the independent set ℱ satisfies |ℱ| = \binom{n-1}{k-1} - \binom{n-k-1}{k-1} + 1, the Hilton–Milner theorem provides another possible structure for ℱ, namely, ℱ = {F ∈[n]3 | |F ∩ A| ≥ 2} for some 3-subset A of [n]. It thus remains to show that such a family must include a set that does not form a vertex of U(n,3).
Note that A ∈ ℱ. Therefore, if A is not a vertex of U(n,3) then we are done.
Otherwise, it can be assumed without loss of generality that A = {1,2,j} for some 3 ≤ j < n.
For j=3, the set {1,3,5} belongs to ℱ but does not form a vertex of U(n,3).
For j=4, the set {1,4,6} belongs to ℱ but does not form a vertex of U(n,3).
And finally, for j ≥ 5, the set {1,3,j} belongs to ℱ but does not form a vertex of U(n,3), so we are done.
We are left with the case of k=2.
Here, the graph U(n,k) is isomorphic to the complement of a cycle on n vertices. It is not difficult to check that for n ≥ 4, it holds that α(U(n,2)) = 2, which coincides with the stated result. This completes the proof.
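For very small parameters, the statement can be verified directly; the following Python sketch (an added illustration) checks both the size of the trivial construction and, for the tiniest cases, the exact independence number by exhaustive search.

from itertools import combinations
from math import comb

def unstable_subsets(n, k):
    # all k-subsets of [n] containing a cyclically consecutive pair
    def unstable(A):
        s = set(A)
        return any((j % n) + 1 in s for j in s)
    return [A for A in combinations(range(1, n + 1), k) if unstable(A)]

def max_intersecting(verts):
    # exact size of a largest intersecting subfamily (= independent set); tiny cases only
    best = 0
    def extend(chosen, start):
        nonlocal best
        best = max(best, len(chosen))
        for idx in range(start, len(verts)):
            if all(set(verts[idx]) & set(A) for A in chosen):
                extend(chosen + [verts[idx]], idx + 1)
    extend([], 0)
    return best

for n, k in [(6, 2), (7, 2), (8, 3)]:
    target = comb(n - 1, k - 1) - comb(n - k - 1, k - 1)
    vs = unstable_subsets(n, k)
    assert sum(1 for A in vs if 1 in A) == target     # the trivial construction
    if len(vs) <= 12:                                  # exact check for the tiniest cases
        assert max_intersecting(vs) == target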
§ ON THE TIGHTNESS OF COROLLARY <REF>
We prove the following result.
For all integers n and k such that n is congruent to 1 modulo 4 and n ≥ 2k,
χ(U(n,k)) = min (n-2k+2, ⌈ n/2 ⌉).
Theorem <ref> shows that the upper bound in Corollary <ref> is tight whenever n is congruent to 1 modulo 4.
Its proof relies on the following lemma whose proof resembles that of Lemma <ref>.
For integers n and k for which it holds that n is congruent to 1 modulo 4 and n ≥ 2k, let t = min(n-2k+2,⌈ n/2 ⌉)-1.
Then, there exist n points y_1,…,y_n ∈ 𝕊^t such that for every hyperplane h in ℝ^{t+1} that passes through the origin, at least one of the two open hemispheres that h determines contains the points of {y_i | i ∈ A} for some vertex A of U(n,k).
Let n, k, and t be integers as in the statement of the lemma.
Observe that the assumption that n is congruent to 1 modulo 4 implies that t is even.
Let γ: ℝ → ℝ^{t+1} denote the function defined by γ(x) = (1,x,x^2,…,x^t).
For every i ∈ [n], consider the point w_i = γ(i) ∈ ℝ^{t+1}.
It suffices to show that for every hyperplane h in ℝ^{t+1} that passes through the origin, at least one of the two open half-spaces that h determines contains the points of {w_i | i ∈ A} for some vertex A of U(n,k).
Consider an arbitrary hyperplane h in ℝ^{t+1} that passes through the origin.
Every point w_i either lies on h or belongs to one of the two open half-spaces determined by h.
Let W_on denote the set of indices i ∈ [n] for which the point w_i lies on h, and let W_1 and W_2 denote the sets of indices i ∈ [n] of the points w_i that belong to the two open half-spaces determined by h. Our goal is to show that at least one of the sets W_1 and W_2 contains a vertex of U(n,k).
The definition of the points w_1,…,w_n implies that the size of W_on does not exceed the number of roots of some nonzero polynomial of degree at most t, hence
|W_on| ≤ t, and it can be assumed that |W_on|=t.
The points w_i with i ∈ W_on divide the image of the function γ into t+1 open continuous parts that alternate between the two open half-spaces determined by h. Since t is even, the first and last parts lie on the same open half-space determined by h. We merge these two parts into a single part.
Observe that all the indices i of the points w_i that belong to each of the obtained t parts of the image of γ are either in W_1 or in W_2.
It follows that each of the sets W_1 and W_2 is associated with t/2 of these parts.
Suppose without loss of generality that |W_1| ≥ |W_2|.
We claim that W_1 contains a vertex of U(n,k). To this end, we show that it includes two consecutive elements modulo n and has size at least k.
Indeed, it holds that |W_1| > t/2, as otherwise n-t = |W_1|+|W_2| ≤ t, which implies that n ≤ 2 t. Since n is odd, this contradicts the definition of t which guarantees that t ≤⌈ n/2 ⌉ -1. It thus follows that at least one of the parts of (γ) associated with W_1 contains at least two of the points w_1,…, w_n, hence W_1 includes two consecutive elements modulo n.
Further, by the definition of t, we have t ≤ n-2k+1, which implies that |W_1| ≥ ⌈(n-t)/2⌉ ≥ ⌈(2k-1)/2⌉ = k, completing the proof.
Let n and k be integers such that n is congruent to 1 modulo 4 and n ≥ 2k.
The upper bound on χ(U(n,k)) follows from Corollary <ref>.
For the lower bound, let
t = min (n-2k+2, ⌈ n/2 ⌉) - 1,
and suppose for the sake of contradiction that there exists a proper coloring of U(n,k) with t colors. Let y_1,…,y_n ∈ 𝕊^t denote the points given by Lemma <ref>. We define t sets F_1,…,F_t ⊆ 𝕊^t as follows. A point x ∈ 𝕊^t is included in F_j with j ∈ [t] if there exists a vertex A of U(n,k) colored j such that { y_i | i ∈ A}⊆ H(x), where H(x) is the open hemisphere centered at x. We further define F_t+1 = 𝕊^t ∖ (F_1 ∪⋯∪ F_t). Note that the sets F_1,…,F_t+1 cover 𝕊^t.
By Theorem <ref>, there exist an index j ∈ [t+1] and a point x ∈ 𝕊^t such that both x and -x belong to F_j.
If j ∈ [t], then it follows from the definition of F_j that there exist two vertices of U(n,k) with color j that correspond to disjoint sets, contradicting the assumption that the given coloring is proper.
If j = t+1 then neither H(x) nor H(-x) contains { y_i | i ∈ A} for a vertex A of U(n,k), contradicting Lemma <ref>.
This completes the proof.
|
http://arxiv.org/abs/2307.01312v1
|
20230703193552
|
Self-Tuning PID Control via a Hybrid Actor-Critic-Based Neural Structure for Quadcopter Control
|
[
"Iman Sharifi",
"Aria Alasty"
] |
eess.SY
|
[
"eess.SY",
"cs.AI",
"cs.RO",
"cs.SY"
] |
Self-Tuning PID Control via a Hybrid Actor-Critic-Based Neural Structure for Quadcopter Control
Iman Sharifi and Aria Alasty
Iman Sharifi and Aria Alasty are with the Department of Mechanical Engineering, Sharif University of Technology, Tehran, Iran.
======================================================================================================================================================================
Proportional-Integral-Derivative (PID) controllers are used in a wide range of industrial and experimental processes. There are a couple of offline methods for tuning PID gains. However, due to the uncertainty of model parameters and external disturbances, real systems such as quadrotors need more robust and reliable PID controllers. In this research, a self-tuning PID controller using a Reinforcement-Learning-based Neural Network for attitude and altitude control of a quadrotor has been investigated. An incremental PID, which contains static and dynamic gains, has been considered, and only the variable gains have been tuned. To tune the dynamic gains, a model-free actor-critic-based hybrid neural structure was used that was able to properly tune the PID gains and also performed well as an identifier. In both the tuning and identification tasks, a Neural Network with two hidden layers and sigmoid activation functions has been trained using the Adaptive Moment Estimation (ADAM) optimizer and the Back-Propagation (BP) algorithm. This method is online, able to tackle disturbances, and fast in training. In addition to robustness to mass uncertainty and wind-gust disturbance, results showed that the proposed method had a better performance when compared to a PID controller with constant gains.
Hybrid Neural Network, Actor-Critic, Reinforcement Learning, Self-tuning PID, Quadrotor
§ INTRODUCTION
Due to their low cost, simple mechanical structure, vertical take-off and landing capabilities, and high maneuverability, quadcopters are used in a wide range of industrial applications such as agriculture, search and rescue, inspection, and surveillance <cit.>. Because of the coupling among actuators and the uncertainty of external disturbances, they have a highly nonlinear dynamic system. That is why the role of attitude control in quadcopter trajectory tracking and maneuvering is of paramount importance. In <cit.>, a variety of control algorithms were evaluated, and it was shown that none of them can meet all the requirements, even though hybrid methods have better adaptability and robustness against disturbances.
The PID controller is used extensively in real industrial systems due to its simplicity and ease of implementation. However, the accuracy of this controller is highly dependent on its gains and the system model. In high-order nonlinear systems, parameter uncertainties and external disturbances can reduce the performance of the PID controller. Conventional offline tuning methods are not efficient for such systems <cit.>. To achieve acceptable performance in nonlinear systems, it is better to use online methods such as Adaptive Control, Fuzzy Systems, and Neural Networks (NN) <cit.>. NNs are capable of solving non-trivial problems efficiently and approximating high-order nonlinear functions <cit.>.
Furthermore, Reinforcement Learning (RL) methods, which use NNs in their structures, have demonstrated their ability and, in some cases, even outperform humans. RL algorithms are semi-supervised learning methods that employ an agent to learn by interacting with an environment. RL has considerable utility in self-tuning PID control and is capable of doing so without human designer intervention. For example, the Q-learning algorithm has been employed to tune PID gains <cit.>. However, this algorithm cannot take continuous actions and requires a powerful processor and memory for improved accuracy. Additionally, using Actor-Critic methods, an agent can generate continuous actions (control signals), and more importantly, these methods are online and NN-based. In <cit.>, a Radial Basis Function (RBF) NN was employed for actor policy and critic value function approximation, demonstrating that this algorithm can track a complex trajectory. Moreover, Deep Deterministic Policy Gradient (DDPG) is utilized in PID tuning <cit.>, which has offline training and necessitates a powerful processor. However, using DDPG, the trained model may lose its performance in a real system with environmental disturbances. In <cit.>, Asynchronous Advantage Actor Critic (A3C) was employed for PID tuning, which can learn multi-actor and critic as workers. Results demonstrated that this method enhanced PID performance.
In this research, a new structure for tuning PID gains is introduced using NNs, taking advantage of recent algorithms that can perform both self-tuning PID control and system identification. This fast and online method does not require a high-capacity memory, a powerful processor, or offline training. The Adaptive Moment Estimation (ADAM) optimizer is used to update the network weights with the Back-Propagation (BP) algorithm <cit.>. ADAM is fast, efficient in deep networks, and capable of bypassing shallow local minima. To evaluate the efficiency of our method, we investigate its performance when faced with mass uncertainty and wind-gust disturbances.
In the following sections, we describe the dynamical modeling of a Quadrotor and PID control in Section II. In Section III, we design a hybrid neural structure for online PID gain tuning, followed by optimization. We perform comparative numerical simulations to demonstrate the effectiveness of the proposed controller in Section IV. Finally, we summarize our conclusions and contributions in Section V.
§ DYNAMIC MODELING AND PID CONTROL METHOD
The quadcopter (shown in Fig. <ref>) is an under-actuated system with four control inputs (u_1, u_2, u_3, u_4) and six degrees of freedom (DOFs), namely the position (x,y,z) and the attitude (ϕ, θ, ψ). Due to the nonlinearities of the quadcopter system and the complexity of environmental conditions, it is nearly impossible to model this robot accurately. In such cases, system identification using methods such as NNs can efficiently estimate the states of the system. Therefore, our control method does not require a precise model and only needs the instantaneous inputs and outputs of the system.
Therefore, in this research, a simple mathematical model <cit.> is used with known parameters only for simulating the real system without noise. In fact, we assume that we do not know the parameters of the system and estimate the states using corresponding control inputs and recent states. The governing equations are shown in Eq. <ref>. In this set of equations, x, y, z are the positions of the center of gravity related to the reference coordinates x_I, y_I, z_I, and ϕ, θ, ψ are the rotational angles around x_B, y_B, z_B.
ϕ̈ =θ̇ψ̇J_y-J_z/J_x+l/J_xu_2
θ̈ =ϕ̇ψ̇J_z-J_x/J_y+l/J_yu_3
ψ̈ =ϕ̇θ̇J_x-J_y/J_z+1/J_zu_4
z̈ =u_1/m cosϕ cosθ-g
ẍ =u_1/m (cosϕ sinθ cosψ+sinϕ sinψ)
ÿ =u_1/m (cosϕ sinθ sinψ-sinϕ cosψ)
where m, g, l are the total mass of the robot, the gravitational acceleration, and the arm length of the quadcopter, and J_x, J_y, J_z are the moments of inertia around the body axes, respectively. The control inputs (Eq. <ref>) are combinations of the squared motor angular velocities (Ω_1, Ω_2, Ω_3, Ω_4).
u_1 =b(Ω_1^2+Ω_2^2+Ω_3^2+Ω_4^2)
u_2 =b(Ω_4^2-Ω_2^2)
u_3 =b(Ω_3^2-Ω_1^2)
u_4 =d(Ω_4^2+Ω_2^2-Ω_1^2-Ω_3^2)
where b and d are thrust and torque coefficients.
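For readers who wish to reproduce the simulation, a minimal NumPy sketch of the rigid-body model of Eq. <ref> and the motor mixing of Eq. <ref> is given below; the numerical parameter values are placeholders, not the values used in our experiments.

import numpy as np

# Hypothetical parameter values (placeholders, not the paper's settings).
m, g, l = 1.0, 9.81, 0.25          # mass [kg], gravity [m/s^2], arm length [m]
Jx, Jy, Jz = 0.01, 0.01, 0.02      # moments of inertia [kg m^2]
b, d = 1e-5, 1e-7                  # thrust and torque coefficients

def control_inputs(omega):
    """Map the four rotor speeds (Omega_1..Omega_4) to the inputs u_1..u_4."""
    o = np.square(omega)
    u1 = b * o.sum()
    u2 = b * (o[3] - o[1])
    u3 = b * (o[2] - o[0])
    u4 = d * (o[3] + o[1] - o[0] - o[2])
    return u1, u2, u3, u4

def derivatives(state, u):
    """Right-hand side of the model; state = [phi, theta, psi, dphi, dtheta, dpsi, z, vz, x, vx, y, vy]."""
    phi, theta, psi, dphi, dtheta, dpsi, z, vz, x, vx, y, vy = state
    u1, u2, u3, u4 = u
    ddphi   = dtheta * dpsi * (Jy - Jz) / Jx + (l / Jx) * u2
    ddtheta = dphi * dpsi * (Jz - Jx) / Jy + (l / Jy) * u3
    ddpsi   = dphi * dtheta * (Jx - Jy) / Jz + (1.0 / Jz) * u4
    ddz = (u1 / m) * np.cos(phi) * np.cos(theta) - g
    ddx = (u1 / m) * (np.cos(phi) * np.sin(theta) * np.cos(psi) + np.sin(phi) * np.sin(psi))
    ddy = (u1 / m) * (np.cos(phi) * np.sin(theta) * np.sin(psi) - np.sin(phi) * np.cos(psi))
    return np.array([dphi, dtheta, dpsi, ddphi, ddtheta, ddpsi, vz, ddz, vx, ddx, vy, ddy])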
Now, we determine the control inputs using the PID control algorithm in an online manner. First, static PID gains are chosen by trial and error or the Ziegler-Nichols method so that the system reaches acceptable stability; these gains do not change during a mission. The dynamic gains tuned by the proposed method are then added to the corresponding static gains. Thus, according to the following equation, each control input (u_1, u_2, u_3, u_4) is the sum of its static-gain and dynamic-gain contributions.
u_(t)=u_sg(t)+u_dg(t)
in which,
u_sg(t) =K_p^s e(t)+K_i^s ∫_0^t e(τ) dτ+K_d^s ė(t)
u_dg(t) =K_p^d e(t)+K_i^d ∫_0^te(τ)dτ+K_d^d ė(t)
where K_p^s, K_i^s, K_d^s are constant and determined in advance and
K_p^d, K_i^d, K_d^d are tuned using a neural network based on the actor-critic method. This method is elaborated upon in the rest of the paper.
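As an illustration, a small Python sketch of the hybrid control law of Eqs. <ref>-<ref> might look as follows; the discretization of the integral and derivative terms is an implementation assumption.

class HybridPID:
    """PID with fixed static gains plus externally supplied dynamic gains."""
    def __init__(self, kp_s, ki_s, kd_s, dt):
        self.kp_s, self.ki_s, self.kd_s = kp_s, ki_s, kd_s   # static gains (trial and error / Ziegler-Nichols)
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, kp_d=0.0, ki_d=0.0, kd_d=0.0):
        # Discrete approximations of the integral and derivative of the error.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u_sg = self.kp_s * error + self.ki_s * self.integral + self.kd_s * derivative
        u_dg = kp_d * error + ki_d * self.integral + kd_d * derivative
        return u_sg + u_dg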
§ NETWORK STRUCTURE
The proposed method consists of two parts. The first part is self-tuning PID, and the second part is system identification using neural networks (NNs). In the first part, a NN is designed to tune the dynamic PID gains. In the next step, the control input is computed by externally injecting the PID errors into the network. The resulting control input, together with the recent outputs, is then fed to the system identification net at each step. This network estimates the new output of the system using an Actor-Critic structure. In fact, the identification net is composed of two networks. The first is an Actor network that tries to reproduce the actual output of the system, while the second, the Critic network, computes the value function of the net inputs (environment states), i.e., the value of the actor's action in the current state.
Considering Fig. <ref>, the Sigmoid activation function is used in the hidden layers, whereas Tanh is used in the PID gains layer. Finally, the dynamic PID gains (K_n^d) are computed using the following equation.
K_n^d(k)=f_n( u(k-1),u(k-2),s(k-1),s(k-2),
e_p(k-1),e_i(k-1),e_d(k-1))
where n = p, i, d and f_n(.) is a nonlinear function whose weights and biases are initialized in a small interval around zero. In the identification net, the actor network has two outputs, a mean (μ) and a standard deviation (σ). These outputs parameterize a normal distribution 𝒩(μ,σ^2), from which a sample s_m is drawn randomly (Fig. <ref>).
The sample s_m is an estimation of each state of the quadcopter (attitudes and altitude). It is the final output of the actor network and we expect it to track the actual output of the system.
The critic has to estimate the value function (v) using the states (control input and recent outputs), and by doing so it gives the actor feedback to perform better than before.
Finally, the outputs of the system identification network will be obtained using the following equations.
μ(k) =f_μ(u(k),s(k-1),s(k-2))
σ(k) =f_σ(u(k),s(k-1),s(k-2))
v(k) =f_v(u(k),s(k-1),s(k-2))
where f_μ(.), f_σ(.) are the corresponding functions of Actor, and f_v(.) is that of Critic.
Having designed the self-tuning PID, Actor, and Critic networks individually, we now connect them so that they work alongside each other to achieve the final objective of self-tuning after system identification. Therefore, we connect the self-tuning PID network to the system identification net in series (Fig. <ref>). By doing so, the final output of the former network is fed into the latter one, resulting in a single uniform network, as sketched below. The inputs of the network are the control inputs, current states, and PID errors. Eventually, the outputs are the estimations of each state and the value function of the network inputs. With all of this in mind, we do not need a model of the system, thereby making the method model-free.
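A possible PyTorch realization of this combined structure is sketched below. The layer widths, the feature ordering, and the parameterization of σ through an exponential are assumptions made for the sake of a self-contained example; they are not prescriptions of the exact architecture used in our experiments.

import torch
import torch.nn as nn

class GainTuningNet(nn.Module):
    """Self-tuning net: maps (u(k-1), u(k-2), s(k-1), s(k-2), e_p, e_i, e_d) to (Kp_d, Ki_d, Kd_d)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(7, hidden), nn.Sigmoid(),
                                  nn.Linear(hidden, 3), nn.Tanh())

    def forward(self, feats):
        return self.body(feats)

class IdentificationNet(nn.Module):
    """Actor-critic identification net: (u(k), s(k-1), s(k-2)) -> (mu, sigma, v)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(3, hidden), nn.Sigmoid())
        self.mu = nn.Linear(hidden, 1)
        self.log_sigma = nn.Linear(hidden, 1)   # sigma obtained via exp() to keep it positive (assumption)
        self.v = nn.Linear(hidden, 1)

    def forward(self, feats):
        h = self.shared(feats)
        return self.mu(h), torch.exp(self.log_sigma(h)), self.v(h)

def forward_pass(tuner, identifier, static_gains, feats, e_p, e_i, e_d, s_prev):
    """One step of the combined network: tune gains, form u(k) from the PID errors, then identify."""
    kp_d, ki_d, kd_d = tuner(feats)
    u = ((static_gains['kp_s'] + kp_d) * e_p
         + (static_gains['ki_s'] + ki_d) * e_i
         + (static_gains['kd_s'] + kd_d) * e_d)
    mu, sigma, v = identifier(torch.stack([u, s_prev[0], s_prev[1]]))
    s_m = torch.distributions.Normal(mu, sigma).rsample()   # sampled state estimate
    return u, s_m, mu, sigma, v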
§ OPTIMIZATION
After designing the network structure and initializing the weights and biases, the net parameters have to be adjusted using an optimizer. The actor's goal is output estimation, i.e., minimization of the estimation error (s_m-s), while the critic should minimize the Temporal Difference error (δ_TD) <cit.>. δ_TD is computed using the following equation.
δ_TD=R_k+1+γ v_k+1-v_k
where γ and R_k+1 are the discount factor and the reward, and v_k and v_k+1 are the value function at step k and at the next step. The reward function is defined in quadratic form so that the smaller the absolute error, error rate, and control input, the larger the reward, according to the following equation.
R_k+1=-r_1(s_m-s)^2-r_2(ṡ_m-ṡ)^2-r_3u^2
where r_1, r_2, r_3 are arbitrary reward coefficients and u is the control signal. Finally, a loss function for the Actor (L_a) and another for the Critic (L_c) are established, according to the following equations.
L_a =w_1 (s_m-s)^2 (η+|δ_TD|)+w_2 √(2π e σ^2)
L_c =w_3δ_TD^2
where w_1, w_2, and w_3 are constant weights that reflect the importance of each term, and η is a constant close to zero that prevents the actor's loss from vanishing when δ_TD approaches zero. In fact, η keeps the actor loss function from becoming zero before the optimal point is reached and allows the actor to explore the environment further. The critic computes the δ_TD signal and sends it to the actor, effectively criticizing the actor's action. If this value is high, the actor loss function is also high, and the actor should choose a better action. Conversely, if the value is near zero, there is no need to change the action anymore, which leads to the convergence of the action to the optimal point. The full network, combining the self-tuning and identification networks, is shown in Fig. <ref>. A general loss function (L_t) is needed to optimize this network, obtained by adding the actor and critic loss functions.
L_t=L_a+L_c
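A compact sketch of these quantities, with placeholder coefficients, is given below; it simply restates the reward, TD error, and loss equations above in code (all arguments are scalar torch tensors).

import math
import torch

def total_loss(s_m, s, ds_m, ds, u, v_k, v_k1, sigma,
               gamma=0.9, r=(1.0, 0.1, 0.01), w=(1.0, 0.01, 1.0), eta=1e-3):
    """Reward, TD error, and actor/critic losses; coefficient values are placeholders, not the paper's settings."""
    reward = -r[0] * (s_m - s) ** 2 - r[1] * (ds_m - ds) ** 2 - r[2] * u ** 2
    delta_td = reward + gamma * v_k1 - v_k                          # temporal difference error
    loss_actor = (w[0] * (s_m - s) ** 2 * (eta + delta_td.abs())
                  + w[1] * torch.sqrt(2 * math.pi * math.e * sigma ** 2))
    loss_critic = w[2] * delta_td ** 2                              # critic drives the TD error to zero
    return loss_actor + loss_critic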
To find the optimal weights, the ADAM optimizer is used, which is not only fast but also reliable in deep neural networks. Eq. <ref> gives the ADAM update equations,
m_t = β_1 m_t-1 + (1-β_1) g_t
v_t = β_2 v_t-1 + (1-β_2) g_t^2
m̂_t = m_t/(1-β_1^t),   v̂_t = v_t/(1-β_2^t)
θ_t+1 = θ_t - α/√(v̂_t+ϵ) m̂_t
where g_t = ∂L_t/∂θ_t. Let θ_st and θ_si denote the weights of the self-tuning and system identification networks. The rate of change of the total loss function with respect to each group of parameters is then computed as follows,
g_st = (∂L_a/∂s_m · ∂s_m/∂u + ∂L_a/∂σ · ∂σ/∂u + ∂L_c/∂v · ∂v/∂u) · ∂u/∂θ_st
g_si = ∂L_a/∂s_m · ∂s_m/∂θ_si + ∂L_a/∂σ · ∂σ/∂θ_si + ∂L_c/∂v · ∂v/∂θ_si
in which,
∂u/∂θ_st = e_p ∂K_p^d/∂θ_st + e_i ∂K_i^d/∂θ_st + e_d ∂K_d^d/∂θ_st
Note that the error terms e_p, e_i, e_d are injected into the network externally and are not tuned by the optimizer. The designed structure applies to Single-Input-Single-Output (SISO) systems, while the quadcopter is a Multi-Input-Multi-Output (MIMO) system. Fortunately, if we assume that the states remain close to the equilibrium point, the quadcopter can be divided into four independent SISO subsystems, ϕ, θ, ψ, and z, where the under-actuated x and y translations are controlled entirely through ϕ and θ.
§ RESULTS
Having designed the structure and set up the optimizer, we simulated the network in Python using the PyTorch library. Additionally, the V-REP (CoppeliaSim) simulator is used as a stand-in for the real environment. In this framework, the model of the quadcopter is completely unknown, and only the inputs and outputs of the system are used continuously. We considered several scenarios to verify and challenge the proposed method.
§.§ Square path at a constant height
To begin with, a square path at a constant height was chosen as the tracking scenario. According to Figs. <ref> and <ref>, at the beginning of the mission the variance (σ) of choosing gains is high, and the agent tries different actions to improve identification. Therefore, the rates of change of the gains are high, and after a while they decrease slightly until converging to a constant value. These figures indicate that the proposed method can efficiently control the attitude of the quadcopter in a simple tracking scenario.
As the rewards (shown in Fig. <ref>) and the loss functions (shown in Fig. <ref>) of each agent improve over time toward the optimal point, the network weights are being optimized, resulting in well-trained networks. Since the weights of the networks are trained in an online manner, this method is fast and can be used on real-world robots.
With the PID gains tuned for the first scenario, the attitude controller (Euler angles) tracks the desired angles. Once attitude control is adequate, position control is attained easily (Fig. <ref>).
§.§ Mass uncertainty
In the next step, we changed the total mass of the quadcopter (Fig. <ref>) over time to introduce parameter uncertainty and challenge the efficiency of the network. The mass of the system changes significantly after a short period of time. When the system mass increases, the altitude of the system changes, so the corresponding controller should adapt itself to these changes. According to Fig. <ref>, the controller is able to compensate for the error of the system accordingly. Compared to the traditional PID, the performance of the proposed controller is clearly better when confronting mass uncertainty. Moreover, if we add more mass, the traditional PID will not respond well because its gains were designed for a system with a different mass. On the other hand, the proposed method can change the gains on the fly, thereby preventing the error from growing. As a result, this method is not only online but also adaptable to changes.
As a more complex scenario, we designed a helical path for the quadcopter to ensure that the control system works well. When an under-actuated robot like a quadrotor can track desired positions in different directions, we can conclude that the attitude control of the system has been achieved successfully, because attitude control sits in the inner loop of the system and is a prerequisite for position control. Fig. <ref> shows that the robot tracks the path appropriately and the error converges to zero. When position tracking is good, the attitude control is also good.
§.§ Disturbance rejection
To challenge the performance of our method, we investigated the impact of wind gust disturbances on the attitude. The Gauss-Markov equations <cit.> are adopted, as indicated in the following equation:
ḋ=-1/τ_sd+ρ B_w q_w
Eq. <ref> is known as a "shaping filter" for the wind gusts, where q_w is an independent zero-mean random input, τ_s=0.3 is the correlation time of the wind, B_w is the turbulence input identity matrix, and ρ=0.5 is the scalar weighting factor. Eq. <ref> was solved using the ODE45 method alongside Eq. <ref>, and Fig. <ref> shows the recorded disturbances during the mission with initial conditions set to zero. The magnitude of each disturbance is sufficient to affect the performance of any controller.
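For illustration only, the shaping filter can also be integrated with a simple forward-Euler loop, as in the following sketch; the step size and the unit-variance distribution of q_w are assumptions, whereas τ_s and ρ follow the values quoted above.

import numpy as np

rng = np.random.default_rng(0)
tau_s, rho, dt, steps = 0.3, 0.5, 0.01, 2000
B_w = np.eye(3)                    # turbulence input identity matrix (one channel per attitude axis)
d = np.zeros(3)
history = []
for _ in range(steps):
    q_w = rng.standard_normal(3)   # independent zero-mean driving input (unit variance assumed)
    d = d + dt * (-d / tau_s + rho * B_w @ q_w)   # d_dot = -d/tau_s + rho * B_w * q_w
    history.append(d.copy())
disturbance = np.array(history)    # recorded gust disturbance over the mission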
However, the proposed controller proves its robustness against the disturbance in all attitude angles compared to the conventional PID controller. After some time, the PID controller cannot compensate for the growing error because its integrator gain is constant, whereas the proposed method adapts this gain online. In fact, this controller learns how to change the gains using the error, error rate, control input, and previous states.
To compare the performance of the proposed method and conventional PID controller, Root Mean Square Error (RMSE) of each angle has been computed for both controllers and the results are shown in Table <ref>.
§ CONCLUSION
In this research, a novel self-tuning PID control has been proposed using a hybrid neural structure based on the actor-critic method. The proposed method is able to tune PID gains and identify states in an online manner. Not only does this method have a straightforward structure, but it is also applicable to real SISO systems. We have taken advantage of NNs and the Adam optimizer in this work, the latter of which is fast and reliable. Results showed that the control algorithm is able to track complex paths even with randomly initialized weights. Using this method, altitude control was achieved with mass uncertainty, and the response was good when confronting wind gust disturbances on attitude control. We have shown that the proposed method has far better results compared to the conventional PID method.
§ ACKNOWLEDGMENT
This research resulted from the author's master's thesis and received no sponsorship.
|
http://arxiv.org/abs/2307.01620v1
|
20230704101617
|
A 2 & 3 Player Scheme for Quantum Direct Communication
|
[
"Theodore Andronikos",
"Alla Sirokofskich"
] |
quant-ph
|
[
"quant-ph"
] |
A 2 & 3 Player Scheme for Quantum Direct Communication
Theodore Andronikos and Alla Sirokofskich
========================================================
This paper introduces two information-theoretically secure protocols that achieve quantum secure direct communication between Alice and Bob in the first case, and among Alice, Bob and Charlie in the second case. Both protocols use the same novel method to embed the secret information in the entangled composite system of the players. The way of encoding the information is the main novelty of this paper and the distinguishing feature compared to previous works in the field. The advantage of this method is that it is easily extensible and can be generalized to a setting involving three, or even more, players, as demonstrated with the second protocol. This trait can be beneficial when two spatially separated players possess only part of the secret information, which must be combined and transmitted to Alice in order for her to reveal the complete secret. Using the three player protocol, this task can be achieved in one go, without the need to apply a typical QSDC protocol twice, where Alice first receives Bob's information and afterwards Charlie's information. Another characteristic of both protocols is their simplicity and uniformity. The two player protocol relies on EPR pairs, and the three player protocol on | GHZ_ 3 ⟩ triples, which can be easily prepared with our current technology. In the same vein, the local quantum circuits are similar or identical, and are easily constructible as they employ only Hadamard and CNOT gates.
Keywords: Quantum Secure Direct Communication, quantum entanglement, Bell states, EPR pairs, GHZ states, quantum games.
§ INTRODUCTION
Nowadays, it is hardly necessary to advocate the importance of privacy and security for every aspect of our life as individuals. Indeed, privacy is a constitutional right that must be respected and protected under all circumstances. This, in turn, has advanced the design and implementation of technical tools that ensure the security of our digital data. Devising bulletproof algorithms and protocols that protect our privacy from unauthorized access is a major trend in current research. This, however, may not be as easy as it sounds. The reason is that we have just entered a new scientific era, the quantum era, which brings the promise of unprecedented computational power. This so far unharnessed power offers new algorithms that can potentially compromise the security offered by established classical methods. Two iconic examples that help drive this point home are Shor's <cit.> and Grover's <cit.> algorithms. Shor's algorithm can factorize large numbers in polynomial time and its practical implementation is bound to threaten public key cryptosystems. Grover's algorithm speeds up unordered search and may also be used to attack symmetric key cryptosystems like AES.
Up to this day, there exist no quantum computers powerful enough to threaten the classical status quo. However, this will probably change sooner than initially anticipated, if one judges by the impressive progress that has been achieved lately. A case in point is IBM's 127 qubit Eagle processor <cit.> and the more recent 433 qubit Osprey <cit.> processor. It seems prudent, if not imperative, to find ways to seriously upgrade our algorithms and protocols, before they become a liability to our security infrastructure. The enormous effort to come up with a robust solution has led to the creation of two new scientific fields, the field of post-quantum or quantum-resistant cryptography and the field of quantum cryptography. The former is actually an incremental evolution of the current state of affairs <cit.>, reducing security issues to carefully chosen computationally hard problems, an approach that has been vindicated so far. The latter, quantum cryptography, relies on the laws of nature, such as entanglement, monogamy of entanglement, the no-cloning theorem, and nonlocality, to ensure ironclad security. Quantum cryptography advocates exploiting the unique and powerful quantum phenomena to design new secure protocols for a plethora of critical applications, such as key distribution <cit.>, secret sharing <cit.>, quantum teleportation <cit.>, cloud storage <cit.> and blockchain <cit.>.
In the seminal paper <cit.>, the authors proposed a protocol for Quantum Secure Direct Communication (QSDC for short). The characteristic trait of QSDC, which distinguishes it from key distribution that establishes a common random key between two parties, is that QSDC transmits information directly and without using an existing key. Furthermore, the classical channel is employed only for detection purposes and not for transmitting information necessary to decipher the secret message. The intended recipient must be able to uncover the secret information after receiving the quantum states via the quantum channel. Finally, any eavesdropper must be detected, without being allowed to compromise the secret. Almost immediately afterwards, in 2003, the researchers in <cit.> introduced the influential two-step QSDC protocol. Later, <cit.> presented a QSDC protocol using single photons, <cit.> gave a protocol based on superdense coding, and <cit.> proposed the first QSDC protocol with multipartite entanglement. Since then, progress in this area has been non-stop. For a thorough and comprehensive review of the current state of the field, we refer the reader to the recent <cit.>.
In this work, we initially introduce a new protocol, called 2PSQDC, for quantum secure direct communication between two entities. Subsequently, the 2PSQDC is generalized in an intuitive and straightforward manner, so as to provide for quantum secure direct communication among three entities. The resulting protocol, which is called 3PSQDC, can be seamlessly generalized to an arbitrary number of entities. We present our protocols as games, involving the usual cast of Alice, Bob and Charlie. Hopefully, the pedagogical nature of games will make the presentation of the technical concepts easier to follow. Quantum games, from their inception in 1999 <cit.>, have enjoyed wide acceptance since quantum strategies are sometimes superior to classical ones <cit.>. The famous prisoners' dilemma game provides the most prominent example <cit.>, which also applies to other abstract quantum games <cit.>. The quantization of many classical systems can even apply to political structures, as was shown in <cit.>. While on the subject of games in unconventional environments, let us mention that games in biological systems have attracted significant attention <cit.>. It is interesting to observe that biosystems may give rise to biostrategies superior to the classical ones, even in the iconic Prisoners' Dilemma game <cit.>.
Contribution. This paper presents two protocols that achieve quantum secure direct communication between Alice and Bob in the first case, and among Alice, Bob and Charlie in the second case. Both protocols, which are proven to be information-theoretically secure, use the same idea, i.e., embedding the secret information via an oracle that uses the inner product modulo 2 operation. This way of encoding the information is the main novelty of this paper that distinguishes it from the many previous works in the field. The advantage of this method is that it is seamlessly extensible and can be generalized to a setting involving three, or even more, players, as demonstrated with the 3PSQDC protocol. This last case is not only useful, but often necessary, when two spatially separated players possess only part of the secret information, which must be combined and transmitted to Alice in order for her to reveal the complete secret. Using the 3PSQDC protocol, this task can be achieved in one go, without the need to apply a typical QSDC protocol twice, where Alice first receives Bob's information and afterwards Charlie's information. Assuming sufficient resources, the 3PSQDC protocol can be extended in the obvious manner to allow for n - 1 players to simultaneously send information to Alice. It is also worth mentioning that both protocols are practically accessible, since they rely on EPR pairs, in the two player case, and | GHZ_ 3 ⟩ triples, in the three player case, that can be generated with our current technology. Moreover, the local quantum circuits are characterized by uniformity and symmetry, being similar or identical, and are easily constructible as they employ only Hadamard and CNOT gates.
§.§ Organization
The paper is organized as follows. Section <ref> contains an introduction to the subject along with bibliographic pointers to related works. Section <ref> explains the underlying theory required for the understanding of the protocols. Section <ref> provides a detailed presentation of the 2PSQDC protocol that tackles information transmission from Alice to Bob. Section <ref> contains a formal presentation of the 3PSQDC protocol that, in addition to Bob and Alice, also involves Charlie. Finally, Section <ref> gives a brief summary of this work, and outlines directions for future research.
§ BACKGROUND & NOTATION
§.§ |Φ^ + ⟩ EPR pairs
There are certain properties of quantum physics that are quite strange, in the sense that they have no analogue in classical physics and even contradict our everyday intuition. Undoubtedly, entanglement falls into this category. This strange phenomenon is also a source of great potential, as it seems to be one of the keys for achieving things that are difficult or impossible in the classical world. Technically, entanglement appears in composite quantum systems, consisting of at least two subsystems, which can be, and usually are, spatially separated. In mathematical terms, a composite system is entangled if its state must be described as a linear combination of two or more product states of its subsystems. Bell states, also referred to as EPR pairs, provide the most well-known example of maximal entanglement for a two-qubit system. There are four Bell states, expressed as shown below (see <cit.>). We use the subscripts A and B to make explicitly clear that the first qubit belongs to Alice and the second to Bob.
|Φ^ + ⟩ = ( | 0 ⟩_ A | 0 ⟩_ B + | 1 ⟩_ A | 1 ⟩_ B ) / √( 2 )
|Φ^ - ⟩ = ( | 0 ⟩_ A | 0 ⟩_ B - | 1 ⟩_ A | 1 ⟩_ B ) / √( 2 )
|Ψ^ + ⟩ = ( | 0 ⟩_ A | 1 ⟩_ B + | 1 ⟩_ A | 0 ⟩_ B ) / √( 2 )
|Ψ^ - ⟩ = ( | 0 ⟩_ A | 1 ⟩_ B - | 1 ⟩_ A | 0 ⟩_ B ) / √( 2 )
One of the critical advantages of quantum entanglement is that when one qubit of the pair gets measured, then the other will immediately collapse to its corresponding basis state, irrespective of the distance between them. It is precisely this celebrated trait of quantum entanglement that is utilized in quantum cryptographic protocols, e.g., for key distribution, secret sharing, etc. Obviously, to achieve the intended result, a sequence of such EPR pairs is required. In the 2PSQDC protocol, we shall be using |Φ^ + ⟩ pairs. The mathematical description of m pairs in the |Φ^ + ⟩ state is given next.
|Φ^ + ⟩^⊗ m =
1 /√( 2^m )∑_ 𝐱 ∈ 𝔹 ^ m | 𝐱 ⟩_ A | 𝐱 ⟩_ B ,
where 𝔹 = { 0, 1 }.
§.§ | GHZ_ 3 ⟩ triplets
Obviously, entanglement is a phenomenon that appears in general multipartite systems. For a composite system consisting of three or more qubits, one of the most well-known and studied types of maximal entanglement is the so-called GHZ state. In the 3PSQDC protocol, we shall employ triplets of qubits in the | GHZ_ 3 ⟩ state, which is expressed mathematically by equation (<ref>). As in the case of the |Φ^ + ⟩ pairs, subscripts A, B and C are used to make clear that the first qubit belongs to Alice, the second to Bob and the third to Charlie.
| GHZ_ 3 ⟩
=
| 0 ⟩_ A | 0 ⟩_ B | 0 ⟩_ C
+
| 1 ⟩_ A | 1 ⟩_ B | 1 ⟩_ C /√( 2 ) .
A single | GHZ_ 3 ⟩ triplet will not suffice for the execution of the 3PSQDC protocol; m such triplets will be required. A system comprised of m | GHZ_ 3 ⟩ triplets is described by the next formula (for its detailed derivation we refer to <cit.> and <cit.>).
| GHZ_ 3 ⟩^⊗ m
=
1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m | 𝐱 ⟩_ A | 𝐱 ⟩_ B | 𝐱 ⟩_ C .
In formula (<ref>), the notation 𝐱 ∈ 𝔹 ^ m means that the bit vector 𝐱 ranges through all the 2^ m bit vector representations of the basis kets. In accordance to what we mentioned before, | 𝐱 ⟩_ A, | 𝐱 ⟩_ B and | 𝐱 ⟩_ C correspond to the basis states of Alice, Bob and Charlie's quantum registers, respectively.
Existing quantum computers based on the circuit model can trivially produce the four Bell states. Figure <ref> depicts a quantum circuit that generates |Φ^ + ⟩ pairs. This particular circuit is designed using the IBM Quantum Composer <cit.>. Similarly, it is easy in principle to construct quantum circuits that produce general | GHZ_ n ⟩ states. As a matter of fact, there is a methodology for constructing efficient general GHZ circuits <cit.>, in the sense that it requires n steps to produce the given state. Although for large n there are practical difficulties in preparing and maintaining a | GHZ_ n ⟩ state, for the | GHZ_ 3 ⟩ states necessary for the implementation of the 3PSQDC protocol, things are quite manageable. Figure <ref> shows a quantum circuit, also designed using the IBM Quantum Composer <cit.>, that prepares | GHZ_ 3 ⟩ triples, and Figures <ref> and <ref> give the state vector descriptions of the |Φ^ + ⟩ and | GHZ_ 3 ⟩ states, respectively. An equivalent construction in code is sketched below.
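For completeness, a Qiskit sketch of the two generating circuits (a Hadamard gate followed by CNOTs) is shown below; it is merely a code rendering of the circuits in Figures <ref> and <ref>.

from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# |Phi^+> pair: Hadamard on qubit 0, then CNOT 0 -> 1
epr = QuantumCircuit(2)
epr.h(0)
epr.cx(0, 1)

# |GHZ_3> triplet: Hadamard on qubit 0, then CNOTs 0 -> 1 and 0 -> 2
ghz3 = QuantumCircuit(3)
ghz3.h(0)
ghz3.cx(0, 1)
ghz3.cx(0, 2)

print(Statevector(epr))    # (|00> + |11>)/sqrt(2)
print(Statevector(ghz3))   # (|000> + |111>)/sqrt(2)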
Apart from |Φ^ + ⟩ pairs and | GHZ_3⟩ triplets, we use the well-known states |+⟩ and |-⟩. For completeness, we provide their definitions.
| + ⟩ = H | 0 ⟩ = ( | 0 ⟩ + | 1 ⟩ ) / √( 2 )
| - ⟩ = H | 1 ⟩ = ( | 0 ⟩ - | 1 ⟩ ) / √( 2 )
Another useful formula proved in textbooks such as <cit.> and <cit.>, which will be applied in the explanation of the protocol, is the following
H^⊗ m | 𝐱 ⟩ = 1 /√( 2^ m )∑_ 𝐳 ∈ 𝔹 ^ m ( - 1 )^ 𝐳 · 𝐱 | 𝐳 ⟩ .
Finally, we mention that quantum measurements are performed as a rule with respect to the computational basis {| 0 ⟩, | 1 ⟩}, and, occasionally, with respect to the Hadamard basis {| + ⟩, | - ⟩}, in which case it is explicitly clarified.
§.§ Inner product modulo 2
In this work, we follow the typical convention of writing bit vectors 𝐱 ∈ 𝔹 ^ m in boldface. That is, a bit vector 𝐱 of length m is a sequence of m bits 𝐱 = x_ m - 1 … x_ 0. The zero bit vector is designated by 0 = 0 … 0.
Given two bit vectors 𝐱 , 𝐲 ∈ 𝔹 ^ m, where 𝐱 = x_ m - 1 … x_ 0 and 𝐲 = y_ m - 1 … y_ 0, we define the inner product modulo 2, denoted by 𝐱 · 𝐲, as
𝐱 · 𝐲 = x_ m - 1 y_ m - 1 ⊕…⊕ x_ 0 y_ 0 .
In the above formula, ⊕ stands for addition modulo 2. The operation inner product modulo 2 exhibits a very useful property. If 𝐜 ∈ 𝔹 ^ m is different from 0, then for half of the elements 𝐱 ∈ 𝔹 ^ m, 𝐜· 𝐱 is 0 and for the other half, 𝐜 · 𝐱 is 1. Obviously, if 𝐜 = 0, then for all 𝐱 ∈ 𝔹 ^ m, 𝐜 · 𝐱 = 0 (a more detailed analysis can be found in <cit.>). For easy reference, we state this as property (IP).
If 𝐜 ≠ 0, there are 2^(m-1) bit vectors 𝐱 ∈ 𝔹 ^ m such that 𝐜 · 𝐱 = 0, and 2^(m-1) bit vectors 𝐱 ∈ 𝔹 ^ m such that 𝐜 · 𝐱 = 1.    (IP)
It will also be expedient to extend the operation of addition modulo 2 to bitwise addition modulo 2 between bit vectors. Given two bit vectors 𝐱 , 𝐲 ∈ 𝔹 ^ m, where 𝐱 = x_ m - 1 … x_ 0 and 𝐲 = y_ m - 1 … y_ 0, we define their bitwise addition modulo 2, denoted by 𝐱 ⊕ 𝐲, as
𝐱 ⊕ 𝐲 = ( x_ m - 1 ⊕ y_ m - 1 ) … ( x_ 0 ⊕ y_ 0 ) .
We use the same symbol ⊕ for the operation of addition modulo 2 between bits, and for the operation of bitwise addition modulo 2 between bit vectors, since the context will help avoid any confusion.
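When bit vectors are packed into integers, both operations, as well as property (<ref>), become one-liners; the following small Python sketch illustrates them.

def inner_mod2(x: int, y: int) -> int:
    """Inner product modulo 2 of two bit vectors packed as integers."""
    return bin(x & y).count("1") % 2

def xor_vec(x: int, y: int) -> int:
    """Bitwise addition modulo 2."""
    return x ^ y

# Property (IP): for a nonzero c, c . x equals 1 for exactly half of all x in B^m.
m, c = 4, 0b1010
balance = sum(inner_mod2(c, x) for x in range(2 ** m))
assert balance == 2 ** (m - 1)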
§ THE 2PSQDC PROTOCOL
The 2 player scheme for quantum secure direct communication, 2PSQDC protocol from now on, is designed to allow one party to communicate with a spatially separated second party securely and directly using only the quantum channel. To emphasize that it concerns the secure direct communication of two players, the protocol is called 2PSQDC. To enhance its game-like presentation, we call the two parties Alice and Bob. Alice is the one having the initiative and intending to send some information to Bob. The setting is completed by the notorious Eve, a cunning adversary that attempts to steal any information possible. The major advantage that quantum protocols exhibit over classical ones is that communication through the quantum channel involves an array of unique features, such as the no-cloning theorem <cit.>, the monogamy of entanglement <cit.>, and nonlocality <cit.>, that can be used to inhibit Eve.
§.§ Entanglement distribution phase
The 2PSQDC protocol evolves in phases. Initially, during the entanglement distribution phase, Alice, or a third trusted source, prepares |Φ^ + ⟩ pairs. As customary, we assume the existence of a trusted quantum source, which may not necessarily be Alice, that is responsible for this task. In any event, the produced |Φ^ + ⟩ pairs are shared between Alice and Bob, according to the following pattern.
* Alice sends to Bob a sequence of m qubits b_ 0 , …, b_ m - 1, called the transmission sequence, which Bob stores in his register.
* At the same time, Alice stores in her register a corresponding sequence of m qubits a_ 0 , …, a_ m - 1, called the embedding sequence.
* The k^ th qubit in the transmission sequence, 0 ≤ k ≤ m - 1, is either
* the first qubit of a |Φ^ + ⟩ pair, in which case the k^ th qubit in the embedding sequence is the second qubit of the same |Φ^ + ⟩ pair, or
* a nonentangled qubit in a state randomly chosen, with equal probability, from {| 0 ⟩, | 1 ⟩, | + ⟩, | - ⟩}, in which case the k^ th qubit in the embedding sequence is also a nonentangled qubit in the same state. A qubit of this type is called a decoy.
* In all, Alice sends to Bob d decoys. It is imperative that Alice insert the decoys randomly within the transmission sequence. Obviously, Alice keeps track of the position and the state of all her decoys.
The integer m should be sufficiently large to ensure the secure execution of the protocol, and the integer d should be significantly smaller than m. In our forthcoming mathematical analysis in Section <ref>, we shall be more precise.
The situation before the entanglement distribution is visualized in Figure <ref>. At the end of the distribution phase, both Alice and Bob have in their own quantum registers m qubits each. Their registers are correlated because Alice and Bob's corresponding qubits are entangled in the state |Φ^ + ⟩. The setup at this point is described in Figure <ref>. To complete this phase, Alice and Bob conduct the first consistency check on their registers. If the test is successful, they proceed to the secret embedding phase. If not, then their communication has been compromised by Eve, and, so, they abort the protocol. We defer the detailed explanation of the actions taking place during the first consistency check until Section <ref>.
§.§ Secret embedding phase
During this phase, Alice encodes the secret she intends to transmit to Bob. To achieve this, she acts locally upon her register using the local quantum circuit outlined in Figure <ref>. Although Alice and Bob are spatially separated and they both operate via their local quantum circuits, the entanglement correlating their registers results in one composite system, consisting of Alice and Bob's subsystems.
In the above Figure <ref>, AR and BR designate Alice and Bob's entangled registers, respectively, and AQ stands for Alice's qubit, initialized in state | - ⟩. The subscripts A and B are used to distinguish between Alice and Bob's qubits and registers.
The information that Alice aims to send to Bob is represented by the secret bit vector 𝐬, where the bits corresponding to the positions of the decoys are set to 0.
𝐬
=
s_ m - 1 …
s_ 0 .
Alice embeds the secret into the state of the composite quantum system by using the unitary transform U_ A on her quantum register. U_ A is based on the function
f_ A ( 𝐱 )
=
𝐬 · 𝐱 .
The complete definition of U_ A follows the typical rule given below
U_ A :
| y ⟩_ A | 𝐱 ⟩_ A →| y ⊕ f_ A ( 𝐱 ) ⟩_ A | 𝐱 ⟩_ A ,
where | y ⟩_ A and | 𝐱 ⟩_ A represent the state of Alice's qubit AQ and register AR, respectively. Taking into account equation (<ref>) and the fact that | y ⟩_ A = | - ⟩_ A, equation (<ref>) becomes
U_ A :
| - ⟩_ A | 𝐱 ⟩_ A →
( - 1 )^ 𝐬 · 𝐱 | - ⟩_ A | 𝐱 ⟩_ A .
The quantum circuit of Figure <ref> begins its operation in the initial state |ψ_0 ⟩. By invoking (<ref>), |ψ_ 0 ⟩ can be expressed as
|ψ_ 0 ⟩
=
1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m | - ⟩_ A | 𝐱 ⟩_ A | 𝐱 ⟩_ B .
Alice acts on her quantum register with the unitary transform (<ref>) driving the composite system into the next state |ψ_ 1 ⟩:
|ψ_ 1 ⟩ (<ref>) = 1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ 𝐬 · 𝐱 | - ⟩_ A | 𝐱 ⟩_ A | 𝐱 ⟩_ B .
At the end of this phase, Alice sends to Bob the m qubits in her register through the quantum channel. Bob receives and places these qubits in a second register denoted by BR_ A.
§.§ Secret decryption phase
During this phase, Bob will complete the protocol and decipher Alice's secret locally, using the quantum circuit outlined in Figure <ref>. In addition to the quantum register BR, utilized in the circuit of Figure <ref>, Bob now also uses register BR_ A. These two registers are entangled and, by acting on both of them, Bob will uncover the secret bit vector 𝐬. To avoid any confusion, henceforth we shall use subscript 1 to designate the state and contents of BR_ A and subscript 0 to designate the state and contents of register BR.
The circuit depicted in Figure <ref> starts its operation in the |ψ_ 1 ⟩ state. Bob applies to both his registers BR and BR_ A the m-fold Hadamard transform, driving the system to the next state |ψ_ 2 ⟩. Note that in the rest of the computations we ignore Alice's local qubit AQ in state | - ⟩, as it has served its intended purpose to introduce the relative phase ( - 1 )^ 𝐬 · 𝐱.
|ψ_ 2 ⟩ =
1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ 𝐬 · 𝐱
H^⊗ m | 𝐱 ⟩_ 1
H^⊗ m | 𝐱 ⟩_ 0
( <ref> ) = 1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ 𝐬 · 𝐱 (
1 /√( 2^ m )∑_ 𝐚 ∈ 𝔹 ^ m
( - 1 )^ 𝐚 · 𝐱 | 𝐚 ⟩_ 1 )
(
1 /√( 2^ m )∑_ 𝐛 ∈ 𝔹 ^ m
( - 1 )^ 𝐛 · 𝐱 | 𝐛 ⟩_ 0 )
=
1 / 2^ m √( 2^ m )∑_ 𝐚 ∈ 𝔹 ^ m ∑_ 𝐛 ∈ 𝔹 ^ m ∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ ( 𝐬 ⊕ 𝐚 ⊕ 𝐛 ) · 𝐱 | 𝐚 ⟩_ 1 | 𝐛 ⟩_ 0 .
The above expression (<ref>) can be simplified via the use of property (<ref>) of the inner product modulo 2, which asserts that if
𝐚 ⊕ 𝐛 ⊕ 𝐬 = 0 ⇔ 𝐚 ⊕ 𝐛 = 𝐬 ,
the sum ∑_ 𝐱 ∈ 𝔹 ^ m ( - 1 )^ ( 𝐬 ⊕ 𝐚 ⊕ 𝐛 ) · 𝐱 | 𝐚 ⟩_ 1 | 𝐛 ⟩_ 0 is equal to 2^ m | 𝐚 ⟩_ 1 | 𝐛 ⟩_ 0, whereas if 𝐚 ⊕ 𝐛 ⊕ 𝐬 ≠ 0, the sum reduces to 0. This allows the rewriting of |ψ_ 2 ⟩ as
|ψ_ 2 ⟩
=
1 /√( 2^ m )∑_ 𝐚 ∈ 𝔹 ^ m ∑_ 𝐛 ∈ 𝔹 ^ m | 𝐚 ⟩_ 1 | 𝐛 ⟩_ 0 , where 𝐚 ⊕ 𝐛 = 𝐬 .
The above formula is the mathematical expression of the correlation of the contents of the registers BR and BR_ A. This is a result of the entanglement between Alice and Bob's registers in the initial state of the quantum circuit of Figure <ref>. The practical significance of this fact is that the contents of the two registers do not vary independently of each other, as they must always obey (<ref>).
Let us now recall that a CNOT (controlled-NOT) gate acts by negating the second qubit, called the target qubit, if and only if the first qubit, called the control qubit, is in state | 1 ⟩. One way to express its operation is by writing
CNOT | x ⟩| y ⟩
=
| x ⟩| x ⊕ y ⟩ .
Moreover, it will be convenient to express | 𝐚 ⟩_ 1 and | 𝐛 ⟩_ 0 as
| 𝐚 ⟩_ 1 =
| a_ m - 1 ⟩…| a_ 0 ⟩
| 𝐛 ⟩_ 0 =
| b_ m - 1 ⟩…| b_ 0 ⟩ .
Having established notation, we now proceed to explain in detail Bob's actions to decipher Alice's secret. Bob applies m CNOT gates to his registers BR_ A and BR. Each of the m qubits in BR_ A will serve as a control qubit targeting the corresponding qubit in BR. The formal mathematical description of this process is given below
CNOT | a_ m - 1 ⟩| b_ m - 1 ⟩… CNOT | a_ 0 ⟩| b_ 0 ⟩
=
| a_ m - 1 ⟩| a_ m - 1 ⊕ b_ m - 1 ⟩…| a_ 0 ⟩| a_ 0 ⊕ b_ 0 ⟩
( <ref> ) = | 𝐚 ⟩_ 1 | 𝐬 ⟩_ 0 .
Hence, in view of equations (<ref>) and (<ref>), state |ψ_ 3 ⟩ can be expressed as
|ψ_ 3 ⟩
=
1 /√( 2^ m )∑_ 𝐚 ∈ 𝔹 ^ m | 𝐚 ⟩_ 1 | 𝐬 ⟩_ 0 .
Now, Bob measures (in the computational basis) the contents of the register BR and obtains the secret bit vector 𝐬.
To complete the protocol, Alice and Bob perform the second consistency check. If the test is successful, then Bob accepts the obtained secret bit vector 𝐬. If not, then their second communication through the quantum channel was compromised by Eve, and, so, Bob rejects 𝐬 and the protocol starts all over again. We defer the detailed explanation of the actions taking place during the second consistency check until Section <ref>. If the secret bit vector 𝐬 is accepted, then Alice may further apply standard techniques, like quantum error correction and privacy amplification <cit.>.
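Before moving on to the three player case, the algebra of the 2PSQDC protocol can be sanity-checked numerically. The following NumPy sketch simulates the ideal, noiseless protocol without decoys or consistency checks, for a hypothetical 3-bit secret; it follows the equations above directly and recovers 𝐬 with certainty.

import numpy as np

m = 3                      # number of qubit pairs (kept small for a quick check)
s = 0b101                  # hypothetical secret bit vector Alice wants to send

dim = 2 ** m
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hm = H1
for _ in range(m - 1):
    Hm = np.kron(Hm, H1)   # m-fold Hadamard transform

# |psi_0> = (1/sqrt(2^m)) sum_x |x>_A |x>_B, stored as a matrix psi[a, b]
psi = np.zeros((dim, dim))
for x in range(dim):
    psi[x, x] = 1 / np.sqrt(dim)

# Alice's encoding U_A: multiply by the relative phase (-1)^{s . x}
for x in range(dim):
    psi[x, x] *= (-1) ** bin(s & x).count("1")

# Bob applies the m-fold Hadamard to both registers BR_A and BR
psi = Hm @ psi @ Hm

# CNOTs: each qubit of BR_A controls the corresponding qubit of BR, |a>|b> -> |a>|a xor b>
post = np.zeros_like(psi)
for a in range(dim):
    for b in range(dim):
        post[a, a ^ b] = psi[a, b]

# Measuring BR in the computational basis: marginal probabilities over its basis states
prob_b = (post ** 2).sum(axis=0)
print("recovered secret:", format(int(np.argmax(prob_b)), f"0{m}b"))   # expect 101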
§ THE 3PSQDC PROTOCOL
The 3 player scheme for quantum secure direct communication, 3PSQDC protocol from now on, is a generalization of the 2PSQDC, where three spatially separated players interact in order to enable two of them to communicate with the third player directly and securely using only the quantum channel. Many real-life situations involve more than two agents. Therefore, it is advantageous to possess algorithms and techniques facilitating secure communication and information exchange among an arbitrary number of players.
The 3PSQDC protocol evolves as a game among Alice, Bob and Charlie. To make the game more interesting this time, we assume that Bob and Charlie, being loyal agents of Alice, each have come up with some information, which is incomplete by itself. Only by combining the two pieces of information can the complete secret be revealed. Therefore, Bob and Charlie must send their information to Alice, so that she may uncover the secret. A similar setting in which an arbitrary number of agents send information to Alice in order for her to compose the complete secret, was analyzed in <cit.>. The major difference compared to the present work, is that in <cit.> the protocol at the final stage requires the agents to communicate to Alice the information necessary to unlock the secret through the classical channel. Eve, as usual, undertakes the role of the adversary aiming to sabotage the protocol and steal the secret. The advantage that quantum protocols exhibit over classical ones is that communication through the quantum channel involves an array of unique features, such as the no-cloning theorem <cit.>, the monogamy of entanglement <cit.>, and nonlocality <cit.>, that can be used to inhibit Eve.
§.§ Entanglement distribution phase
We may conceptually divide the 3PSQDC protocol into phases. The first phase is the entanglement distribution phase, during which Alice, or a third trusted source, prepares | GHZ_ 3 ⟩ triplets. For the execution of the protocol, it is immaterial whether it is Alice or another trusted source that undertakes this task. What matters is that, using a quantum computer running the quantum circuit depicted in Figure <ref>, or some equivalent apparatus, entangled triplets can be generated and shared among Alice, Bob, and Charlie, according to the following scheme.
* Alice sends to Bob a sequence of m qubits b_ 0 , …, b_ m - 1, called Bob's transmission sequence, which Bob stores in his register. Symmetrically, Alice also sends to Charlie a sequence of m qubits c_ 0 , …, c_ m - 1, called Charlie's transmission sequence, which Charlie stores in his register.
* At the same time, Alice stores in her register a corresponding sequence of m qubits a_ 0 , …, a_ m - 1, called Alice's sequence.
* The k^ th qubit, 0 ≤ k ≤ m - 1, in all the above sequences is either
* one qubit from the same | GHZ_ 3 ⟩ triplet, or
* a nonentangled qubit in a state randomly chosen, with equal probability, from {| 0 ⟩, | 1 ⟩, | + ⟩, | - ⟩}, called a decoy.
* In all, Alice sends to each of Bob and Charlie d decoys. It is imperative that Alice insert the decoys randomly within the transmission sequence. Obviously, Alice keeps track of the position and the state of all her decoys.
We point out that m should be sufficiently large to ensure the secure implementation of the protocol, and the integer d should be significantly smaller than m. In our forthcoming mathematical analysis in Section <ref>, we shall be more precise.
The situation before Alice distributes Bob and Charlie's sequences is visualized in Figure <ref>. At the end of the distribution phase, Alice, Bob and Charlie have in their local quantum registers m qubits each. All three registers are correlated because Alice, Bob and Charlie's corresponding qubits are entangled in the | GHZ_ 3 ⟩ state, and the whole setup is shown in Figure <ref>. To complete this phase, Alice and Bob and, simultaneously, Alice and Charlie conduct the first consistency check on their registers. If the test is successful, they proceed to the secret embedding phase. If at least one consistency check fails, then their communication has been compromised by Eve, and, so, they abort the protocol. We defer the detailed explanation of the actions taking place during the first consistency check until Section <ref>.
§.§ Secret embedding phase
The idea behind this generalization is that the two agents Bob and Charlie have in their possession part of a secret that they must transmit to Alice in order for her to obtain the complete secret 𝐬. To succeed in this task, the three players use their local circuits outlined in Figure <ref>. Despite spatial separation between the three, the correlations among their registers, due to entanglement, effectively create one quantum distributed system.
In Figure <ref>, AR, BR and CR stand for Alice, Bob and Charlie's entangled registers, respectively, while BQ and CQ denote Bob and Charlie's qubits, both initialized in state | - ⟩. To avoid any confusion, we use subscripts A, B and C to distinguish among Alice, Bob and Charlie's qubits and registers.
The partial secrets that Bob and Charlie intend to send to Alice are encoded in the secret bit vectors 𝐬 _ B and 𝐬 _ C, where the bits corresponding to the positions of the decoys are set to 0. Alice must combine both of them via bitwise addition in order to uncover the complete secret 𝐬:
𝐬 = 𝐬 _ B ⊕ 𝐬 _ C .
Bob and Charlie encode their secrets into the state of the composite system by using the unitary transforms U_ B and U_ C on their registers, where
U_ B :
| - ⟩_ B | 𝐱 ⟩_ B →
( - 1 )^ 𝐬 _ B · 𝐱 | - ⟩_ B | 𝐱 ⟩_ B , and
U_ C :
| - ⟩_ C | 𝐱 ⟩_ C →
( - 1 )^ 𝐬 _ C · 𝐱 | - ⟩_ C | 𝐱 ⟩_ C .
Using (<ref>), we can express the initial state |ψ_0 ⟩ of the circuit of Figure <ref> as
|ψ_ 0 ⟩
=
1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m | 𝐱 ⟩_ A | - ⟩_ B | 𝐱 ⟩_ B | - ⟩_ C | 𝐱 ⟩_ C .
The application of the unitary transforms (<ref>) sends the system to the next state |ψ_ 1 ⟩:
|ψ_ 1 ⟩ (<ref>) = 1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m | 𝐱 ⟩_ A
( - 1 )^ 𝐬 _ B · 𝐱 | - ⟩_ B | 𝐱 ⟩_ B
( - 1 )^ 𝐬 _ C · 𝐱 | - ⟩_ C | 𝐱 ⟩_ C
(<ref>) = 1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ 𝐬 · 𝐱 | 𝐱 ⟩_ A | - ⟩_ B | 𝐱 ⟩_ B | - ⟩_ C | 𝐱 ⟩_ C .
At the end of this phase, Bob sends to Alice the m qubits in his register through the quantum channel. Alice organizes these qubits in an additional register denoted by AR_ B. Similarly, Alice receives from Charlie his m qubits and places them into a third register, designated by AR_ C.
§.§ Secret decryption phase
During this phase, Alice will use the quantum circuit outlined in Figure <ref> to decrypt the complete secret bit vector 𝐬. In addition to the quantum register AR, shown in the circuit of Figure <ref>, Alice now uses registers AR_ B and AR_ C. All these registers are entangled and, by using all of them, Alice will uncover the secret bit vector 𝐬. To avoid any confusion, in the formulas below we employ subscripts 0, 1, and 2 to designate the state and contents of registers AR_ C, AR_ B and AR, respectively.
The circuit depicted in Figure <ref> is initialized in state |ψ_ 1 ⟩. Alice applies to all her registers m-fold Hadamard transforms, driving the system to the next state |ψ_ 2 ⟩. Note that in the rest of the computations we ignore the qubits BQ and CQ because they have completed their purpose, i.e., the introduction of the relative phase ( - 1 )^ 𝐬 · 𝐱.
|ψ_ 2 ⟩ =
1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ 𝐬 · 𝐱
H^⊗ m | 𝐱 ⟩_ 2
H^⊗ m | 𝐱 ⟩_ 1
H^⊗ m | 𝐱 ⟩_ 0
( <ref> ) = 1 /√( 2^ m )∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ 𝐬 · 𝐱 (
1 /√( 2^ m )∑_ 𝐚 ∈ 𝔹 ^ m
( - 1 )^ 𝐚 · 𝐱 | 𝐚 ⟩_ 2 )
(
1 /√( 2^ m )∑_ 𝐛 ∈ 𝔹 ^ m
( - 1 )^ 𝐛 · 𝐱 | 𝐛 ⟩_ 1 )
(
1 /√( 2^ m )∑_ 𝐜 ∈ 𝔹 ^ m
( - 1 )^ 𝐜 · 𝐱 | 𝐜 ⟩_ 0 )
=
1 / 2^ m 2^ m ∑_ 𝐚 ∈ 𝔹 ^ m ∑_ 𝐛 ∈ 𝔹 ^ m ∑_ 𝐜 ∈ 𝔹 ^ m ∑_ 𝐱 ∈ 𝔹 ^ m
( - 1 )^ ( 𝐬 ⊕ 𝐚 ⊕ 𝐛 ⊕ 𝐜 ) · 𝐱 | 𝐚 ⟩_ 2 | 𝐛 ⟩_ 1 | 𝐜 ⟩_ 0 .
Applying the property (<ref>) of the inner product modulo 2,
which says that if
𝐚 ⊕ 𝐛 ⊕ 𝐜 ⊕ 𝐬 = 0 ⇔ 𝐚 ⊕ 𝐛 ⊕ 𝐜 = 𝐬 ,
the sum ∑_ 𝐱 ∈ 𝔹 ^ m ( - 1 )^ ( 𝐬 ⊕ 𝐚 ⊕ 𝐛 ⊕ 𝐜 ) · 𝐱 | 𝐚 ⟩_ 2 | 𝐛 ⟩_ 1 | 𝐜 ⟩_ 0 is equal to 2^ m | 𝐚 ⟩_ 2 | 𝐛 ⟩_ 1 | 𝐜 ⟩_ 0, whereas if 𝐚 ⊕ 𝐛 ⊕ 𝐜 ⊕ 𝐬 ≠ 0, the sum reduces to 0. Hence, expression (<ref>) can be simplified as
|ψ_ 2 ⟩
=
1 / 2^ m ∑_ 𝐚 ∈ 𝔹 ^ m ∑_ 𝐛 ∈ 𝔹 ^ m ∑_ 𝐜 ∈ 𝔹 ^ m | 𝐚 ⟩_ 2 | 𝐛 ⟩_ 1 | 𝐜 ⟩_ 0 , where 𝐚 ⊕ 𝐛 ⊕ 𝐜 = 𝐬 .
The above formula conveys the correlation of the contents of the registers AR, AR_ B, and AR_ C. This is a result of the entanglement between Alice, Bob and Charlie's registers in the initial state of the quantum circuit of Figure <ref>. Conceptually, we may view the situation as follows: the contents of any two of the three registers may vary independently of each other, but then the contents of the remaining third register are completely determined by (<ref>).
If we expand | 𝐚 ⟩_ 2, | 𝐛 ⟩_ 1 and | 𝐜 ⟩_ 0 as
| 𝐚 ⟩_ 2 =
| a_ m - 1 ⟩…| a_ 0 ⟩
| 𝐛 ⟩_ 1 =
| b_ m - 1 ⟩…| b_ 0 ⟩
| 𝐜 ⟩_ 0 =
| c_ m - 1 ⟩…| c_ 0 ⟩ ,
then the action of the first group of m CNOT gates, where each of the m qubits in AR_ C serves as a control qubit targeting the corresponding qubit in AR_ B, results in
CNOT | c_ m - 1 ⟩| b_ m - 1 ⟩… CNOT | c_ 0 ⟩| b_ 0 ⟩
=
| c_ m - 1 ⟩| b_ m - 1 ⊕ c_ m - 1 ⟩…| c_ 0 ⟩| b_ 0 ⊕ c_ 0 ⟩ .
The subsequent action of the second group of m CNOT gates, where each of the m qubits in AR_ B serves as a control qubit targeting the corresponding qubit in AR, drives the circuit into the next state
CNOT | b_ m - 1 ⊕ c_ m - 1 ⟩| a_ m - 1 ⟩… CNOT | b_ 0 ⊕ c_ 0 ⟩| a_ 0 ⟩
=
| b_ m - 1 ⊕ c_ m - 1 ⟩| a_ m - 1 ⊕ b_ m - 1 ⊕ c_ m - 1 ⟩…| b_ 0 ⊕ c_ 0 ⟩| a_ 0 ⊕ b_ 0 ⊕ c_ 0 ⟩
( <ref> ) = | 𝐬 ⟩_ 2 | 𝐛 ⊕ 𝐜 ⟩_ 1 | 𝐜 ⟩_ 0 .
Thus, in view of equations (<ref>) and (<ref>), state |ψ_ 3 ⟩ can be written as
|ψ_ 3 ⟩
=
1 / 2^ m ∑_ 𝐛 ∈ 𝔹 ^ m ∑_ 𝐜 ∈ 𝔹 ^ m | 𝐬 ⟩_ 2 | 𝐛 ⊕ 𝐜 ⟩_ 1 | 𝐜 ⟩_ 0 .
Now, Alice measures (in the computational basis) the contents of the register AR and obtains the secret bit vector 𝐬.
To complete the protocol, Alice performs the second consistency check. If the test is successful, then Alice accepts the obtained secret bit vector 𝐬. If not, then her second communication with either Bob or Charlie through the quantum channel was compromised by Eve. In such an eventuality, Alice rejects 𝐬 and the protocol starts all over again. We defer the detailed explanation of the actions taking place during the second consistency check until Section <ref>. As we have pointed out before, if the secret bit vector 𝐬 is accepted, then Alice may further apply standard techniques, like quantum error correction and privacy amplification <cit.>.
§ SECURITY ANALYSIS
The current section presents a unified security analysis of the 2PSQDC and 3PSQDC protocols. The ubiquitous Eve, as usual, undertakes the role of the adversary aiming to sabotage the protocol and steal the secret. We shall also make use of a classical authenticated channel, in order to detect the presence of the eavesdropper Eve and perform error correction, and not for the transmission of information pertaining to the secret bit vector. For a recent comprehensive text analyzing security issues of quantum protocols in general we refer to <cit.> and the more recent <cit.>. Extensive security analysis specifically for QSDC can be found in the thorough and very recent <cit.>.
We analyze the 3 player case, involving Alice, Bob and Charlie, since the 2 player case can be viewed as a special case. The setting now includes, in addition to our three protagonists, a fourth notorious entity, traditionally named Eve, whose sole purpose is to devise and implement attacks against our protocol, aiming to acquire a piece of the secret information, or even the complete secret information.
Ultimately the security analysis of any quantum protocol rests on certain well-understood assumptions. For the sake of completeness, we briefly mention them at this point. First, we assume that quantum theory is correct, which in turn means that hallmark features such as the no-cloning theorem <cit.>, the monogamy of entanglement <cit.>, and nonlocality <cit.> are valid. Clearly, if quantum protocols did not exhibit these properties, they would be useless. Secondly, we assume that quantum theory is complete, which implies that Eve is constrained by the laws of physics, and she cannot derive more information beyond what is predicted by quantum mechanics.
§.§ First consistency check
The secret embedding phase begins only after the successful completion of the first consistency check, which consists of the following steps.
* Alice communicates to Bob and Charlie the positions and the intended basis of measurement for the decoys.
* Bob and Charlie send back to Alice the results of their measurements.
* Alice analyzes the results received from Bob and Charlie, and decides whether the test was successful or not, according to the following rationale.
♢ If no or very few inconsistencies are found, then Alice deems that the first consistency check was successful.
♢ If the number of inconsistencies is ≈ d / 4, or above a predefined threshold, then Alice deems that the first consistency check failed, aborts and terminates the protocol.
In the ideal scenario, where there is no eavesdropping and the quantum channel is perfect, there will be 0 inconsistencies. In a more realistic scenario, we can expect a few inconsistencies (≈ 0) due to the channel imperfections. On the other hand, if Eve managed to eavesdrop, there should be ≈ d / 4 different measurement outcomes. The probability of Eve using the wrong basis is 1 / 2, and, in such a case, the probability of ultimately getting the wrong result is also 1 / 2. Thus, Eve's eavesdropping would cause Bob or Charlie to obtain the wrong outcome with a probability of 1 / 4.
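This 1/4 figure is easy to confirm with a quick Monte Carlo sketch of an idealized measure-and-resend attack on the decoys (perfect channel, bases chosen uniformly at random):

import numpy as np

rng = np.random.default_rng(1)
trials = 100_000
alice_basis = rng.integers(0, 2, trials)          # 0 = computational, 1 = Hadamard
alice_bit = rng.integers(0, 2, trials)
eve_basis = rng.integers(0, 2, trials)

# If Eve measures in the wrong basis, the resent state is unbiased in Alice's basis,
# so Bob's measurement in Alice's basis yields a uniformly random bit.
eve_wrong = alice_basis != eve_basis
bob_bit = np.where(eve_wrong, rng.integers(0, 2, trials), alice_bit)
print("decoy error rate:", np.mean(bob_bit != alice_bit))   # about 0.25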
Let us be clear that even if Eve successfully eavesdrops during the distribution phase, she will get no information whatsoever because no information has been encoded yet. However, she may still disrupt the execution of the protocol. The probability of achieving that without being detected is practically zero. Let us consider what Eve may do during the distribution phase.
* Measure and Resend. Eve intercepts two qubits from each | GHZ_ 3 ⟩ triplet during their transmission from Alice to Bob and Charlie, measures them and resends them to Bob and Charlie. Eve will fail to discover any information because at this phase the | GHZ_ 3 ⟩ triplets do not carry any information. By the act of measurement, Eve destroys the entanglement. Assuming that Eve randomly chooses the measurement basis between the computational and the Hadamard basis with equal probability, she will introduce ≈ d / 4 wrong outcomes, which will be revealed during the first consistency check.
* Intercept and Resend fake | GHZ_ 3 ⟩ triplets. Eve intercepts two qubits from each | GHZ_ 3 ⟩ triplet during their transmission from Alice to Bob and Charlie. Although cloning is prohibited by the no-cloning theorem, if Eve has already created a sufficient number of her own | GHZ_ 3 ⟩ triplets, she may store the intercepted qubits and, in their place, forward her own qubits. By employing such a strategy she will gain no information because at this phase the | GHZ_3⟩ triplets carry no information. Moreover, since Eve knows nothing regarding the decoys, her interference will again lead to ≈ d / 4 wrong outcomes when Bob and Charlie measure the decoys during the first consistency check.
§.§ Second consistency check
Alice accepts as valid the secret bit vector 𝐬 only after the successful completion of the second consistency check, which consists of the following steps.
* Alice chooses a verification sequence v_ 1 , …, v_ f of f positions within the sequence 0, …, m - 1, excluding of course the decoys. It is critical that Alice chooses the verification sequence randomly using an appropriate probability distribution.
* Alice communicates to Bob and Charlie the verification sequence.
* Bob and Charlie send back to Alice the bits s_ v_ 1 ^ B , …, s_ v_ f ^ B and s_ v_ 1 ^ C , …, s_ v_ f ^ C, respectively, of their secret bit vectors.
* Alice compares the bit values s_ v_ 1 , …, s_ v_ f of her own 𝐬 with the results received from Bob and Charlie, and decides whether the test was successful or not, according to the following rationale.
♢ If no or very few inconsistencies are found, then Alice deems that the second consistency check was successful.
♢ If the number of inconsistencies is ≈ f / 2, or above a predefined threshold, then Alice deems that the second consistency check failed, aborts and terminates the protocol.
Obviously, in the ideal scenario, where there is no eavesdropping and the quantum channel is perfect, there will be 0 inconsistencies. In a more realistic scenario, we can expect a few inconsistencies (≈ 0) due to the channel imperfections. On the other hand, if Eve managed to eavesdrop, there should be ≈ f / 2 different measurement outcomes.
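As before, a rough estimate (a sketch under the idealized assumption that each verified bit disagrees independently with probability 1 / 2 when Eve has interfered, and that the channel is noiseless) illustrates how the size f of the verification sequence drives Eve's chance of escaping the second check to zero.

```python
# Sketch: probability that all f verification bits happen to agree despite Eve's
# interference, assuming each bit is wrong independently with probability 1/2.
def escape_probability_second_check(f: int) -> float:
    return 0.5 ** f

for f in (8, 16, 32, 64):
    print(f"f = {f:3d}: P(Eve undetected) = {escape_probability_second_check(f):.2e}")
```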
Let us now consider what attacks might Eve devise during the second quantum transmission, where Bob and Charlie each send m qubits to Alice.
* Measure and Resend. Eve intercepts two qubits from each | GHZ_ 3 ⟩ triplet during their transmission from Bob and Charlie to Alice, measures them, and resends them to Alice. Eve will fail to discover any information because she has no access to Alice's registers. By the act of measurement, Eve destroys the entanglement. This random collapse of entanglement will result in ≈ f / 2 wrong outcomes in the verification sequence, which will be revealed during the second consistency check.
* Intercept and Resend fake | GHZ_ 3 ⟩ triplets. Eve intercepts two qubits from each | GHZ_ 3 ⟩ triplet during their transmission from Bob and Charlie to Alice. Although cloning is prohibited by the no-cloning theorem, if Eve has already created a sufficient number of her own | GHZ_ 3 ⟩ triplets, she may store the intercepted qubits, and, in their place, forward her own qubits. The flaw in this scenario is that Eve's entangled qubits do not carry the information embedded in Bob and Charlie's qubits. By employing such a strategy she will gain no information because she lacks the third piece of information existing in Alice's register. Obviously, the newly transmitted qubits will not contain the information Alice requires to reveal the correct 𝐬. In the resulting derivation, approximately half the bits will be wrong, and, thus, the verification sequence will contain ≈ f / 2 inconsistencies.
* Entangle and Measure. Once again, Eve intercepts two qubits from each | GHZ_ 3 ⟩ triplet during their transmission from Bob and Charlie to Alice. This time Eve does not measure them, but entangles them with her ancilla state, and then sends the corresponding GHZ qubits to Alice. Eve waits until the protocol completes before measuring her qubits, hoping to gain useful information. However, the result of Eve's actions is that, instead of ≈ m | GHZ_ 3 ⟩ triplets evenly distributed among Alice, Bob and Charlie, there are ≈ m | GHZ_ 4 ⟩ quadruples evenly distributed among Alice, Bob, Charlie, and Eve. Ultimately, Alice will derive an incorrect 𝐬, a fact that she will realize during the second consistency check. Eve will also fail to gain information about the correct 𝐬 because that would require the contents of Alice's register. Therefore, in this case too, Eve will fail, whereas Alice will be able to infer that Eve tampered with the protocol.
* PNS. The photon number splitting attack (PNS), introduced in <cit.> and subsequently analyzed in <cit.>, is regarded as one of the most effective attack strategies that Eve can employ against any quantum protocol. This attack exploits the fact that, due to technological limitations, photon sources occasionally do not emit single-photon signals, but may produce multiple identical photons instead of just one. This allows Eve to intercept pulses emanating from Alice for the distribution of the | GHZ_ 3 ⟩ triplets, keep one photon from the multi-photon pulse for herself and send the remaining photons to Bob and Charlie without being detected during the transmission phase. As far as the 3PSQDC protocol is concerned, this case resembles the Entangle and Measure attack analyzed above. Again, instead of | GHZ_ 3 ⟩ triplets evenly distributed among Alice, Bob and Charlie, in reality there are | GHZ_ 4 ⟩ quadruples evenly distributed among Alice, Bob, Charlie, and Eve. Eve becomes effectively the fourth player and is unable to gain any information about the other players' measurements for the same reasons as in the previous case.
The above security analysis demonstrates that both the 2PSQDC and 3PSQDC protocols are information-theoretically secure. Parameters d and f are critical for the correct execution of the first and second consistency checks, respectively. In a perfect or near-perfect channel their values need not be large; the probability that Eve eavesdrops undetected converges rapidly to zero, as shown in Table <ref>. Things become significantly more complicated in the case of a noisy channel, where proper values must be selected in order to ensure that the probability that Eve eavesdrops undetected is negligible.
§ DISCUSSION AND CONCLUSIONS
In this work, we have introduced two new protocols for quantum secure direct communication. The first, called 2PSQDC, involves communication between two entities, Alice and Bob. Subsequently, the 2PSQDC was generalized in an intuitive and straightforward manner, so as to provide for quantum secure direct communication among three entities. The resulting protocol, which is called 3PSQDC, can be seamlessly generalized to an arbitrary number of entities. Both protocols, which are proven to be information-theoretically secure, use the same idea, i.e., embedding the secret information via an oracle that uses the inner product modulo 2 operation. This way of encoding the information is the main novelty of this paper that distinguishes it from the many previous works in the field. The advantage of this method is that it is seamlessly extensible and can be generalized to a setting involving three, or even more, players, as demonstrated with the 3PSQDC protocol. This last case is not only useful, but often necessary, when two spatially separated players possess only part of the secret information that must be combined and transmitted to Alice in order for her to reveal the complete secret. Using the 3PSQDC protocol, this task can be achieved in one go, without the need to apply a typical QSDC protocol twice, where Alice first receives Bob's information and afterwards Charlie's information. Assuming sufficient resources, the 3PSQDC protocol can be extended in the obvious manner to allow n - 1 players to simultaneously send information to Alice. It is also worth mentioning that both protocols are practically accessible, since they rely on EPR pairs, in the two-player case, and | GHZ_ 3 ⟩ triplets, in the three-player case, which can be generated with our current technology. Moreover, the local quantum circuits are characterized by uniformity and symmetry, being similar or identical, and are easily constructible as they employ only Hadamard and CNOT gates.
|
http://arxiv.org/abs/2307.01482v2
|
20230704051919
|
Nexus sine qua non: Essentially Connected Networks for Traffic Forecasting
|
[
"Tong Nie",
"Guoyang Qin",
"Yunpeng Wang",
"Jian Sun"
] |
cs.LG
|
[
"cs.LG"
] |
Spatial-temporal graph neural networks (STGNNs) have become the de facto models for learning spatiotemporal representations of traffic flow. However, modern STGNNs often contain superfluous or obscure components, along with complex techniques, posing significant challenges in terms of complexity and scalability.
Such concerns prompt us to rethink the design of neural architectures and to identify the key challenges in traffic forecasting as spatial-temporal contextualization.
Here, we present an essentially connected model based on an efficient message-passing backbone, powered by learnable node embedding, without any complex sequential techniques such as TCNs, RNNs, and Transformers.
Intriguingly, empirical results demonstrate that a simple and elegant model with contextualization capability compares favorably with elaborately structured state-of-the-art models, while being much more interpretable and computationally efficient for traffic forecasting. We anticipate that our findings will open new horizons for further research to explore the possibility of creating simple but effective neural forecasting architectures.
§ INTRODUCTION
Modeling and forecasting spatiotemporal traffic flow not only facilitates decision making by practitioners, but also deepens our scientific understanding of the underlying dynamic systems.
Spatiotemporal traffic forecasting (task) is one of the most popular applications of modern neural forecasting models, and numerous works have been devoted to developing advanced spatial-temporal graph neural networks (STGNNs) that can capture complex correlations in spatiotemporal traffic data <cit.>.
Stacking multiple novel spatial-temporal layers has become a standardized practice in newly emerged STGNNs. Despite improvements in accuracy on common benchmarks, the introduction of complex techniques and overly cumbersome architectures hinders the understanding of their behaviors and the discovery of components that really contribute to forecasting. Arguably, the designs of some advanced STGNNs conceptually complicate the forecasting process. Furthermore, increasing complexity often requires extensive training time and tedious parameter tuning, making many state-of-the-art models infeasible for large-scale sensor networks <cit.>.
The above phenomena prompt us to revisit the design of STGNNs and raise an intriguing question: Is there an essentially simple but effective model for task? With this question, we first carefully inspect the task task.
We find that forecasting is particularly problematic because the future series is often nondeterministic with respect to the past series within a look-back window: the map to learn is one-to-many, and a deterministic one-to-one function, however expressive, cannot fully capture this multivalued mapping.
This is especially difficult when there is no external information available to explain away the ambiguity. A key workaround is to reconstruct ctx information from the data itself to separate the "many" in a higher-dimensional space and turn the multivalued mapping into a univalued one. By contemplating the essence of the problem, we find that it basically involves injecting two essential elements, one for each dimension of the multivalued mapping, which we call a "where locator" and a "when locator", to contextualize the forecasting task in the spatial and temporal dimensions, respectively (see Fig. <ref>).
Using the “where locator" and “when locator" lenses, we can focus on surveying models based on how they approach the two types of ctx in a variety of studies. We then question the necessity and validity of using complex STGNNs for task and argue that many of them do not alleviate the ctx problem but make it even more difficult to distinguish spatiotemporal samples. Given the complexity and inefficiency of current models, it may be tempting to go beyond such acknowledged paradigms and explore the performance of a minimalistic model that only includes essential connection components to directly express “where locator" and “when locator".
To achieve this, we assign a unique vector as the “where locator" for each location. However, we do not assign additional vectors as “when locator" for each piece of time, as this would increase redundant parameters.
Instead, we attempt to reuse the "where locator" as learnable embeddings to correlate the past series of all sensors and thereby contextualize the temporal location of the current series. Surprisingly, in our experiments, this reuse of the "where locator" as a "when locator" performs comparably to its complex counterparts while enjoying much faster training, less resource consumption, and fewer model parameters (see Fig. <ref>(3)).
Our main contributions are threefold:
* We present a conceptually simple but effective model with essentially connected components for task;
* We identify and highlight the significance of node embedding in spatiotemporal ctx;
* We conduct extensive experiments to evaluate the performance of the proposed model on 7 traffic forecasting benchmarks and reveal its superiority by comparing with 15 state-of-the-art neural forecasting models.
We hope that our results can encourage further work on this topic to go beyond the sphere of complex STGNNs and reconsider the significance of simpler models. As we aim to grasp the most essential components for task, we refer to our model as model networks.
§ NOTATIONS AND PRELIMINARY
Notations and problem statement
We investigate the multivariate traffic forecasting problem in a spatiotemporal context. For ease of representation, we follow the terminology of previous work <cit.>.
In particular, N time series are collected from different static sensors, and each of them has a d_in dimensional traffic flow measurement, denoted by 𝐱_t^i∈ℝ^d_in. The entire record at time t can be linked by sensor networks and forms a graph signal matrix 𝐗_t∈ℝ^N× d_in with side information represented by the static adjacent matrix (possibly time varying) 𝐀∈ℝ^N× N. In addition, we indicate with the tensor 𝒳_t:t+T∈ℝ^N× T× d_in the sequence of graph signals within the time interval [t, t+T]. For task dataset, exogenous variables about time stamps are readily collected for the entire period, which is denoted as 𝐔_t ∈ℝ^N× d_s. Similarly, node-specific features such as sensor ids are indicated as 𝐕∈ℝ^N× d_v. Note that this paper uses the terms nodes, sensors, and locations interchangeably. Input graph signals within a time window W can be denoted as 𝒢_T-W:T=⟨𝒳_T-W:T, {𝐔_t, 𝐕, 𝐀_t,t=T-W,…, T}⟩. The objective of task is to forecast the next H horizons of graph signals given a length of W historical records with a predictive function F_θ(·): 𝒢_T+1:T+H = F_θ(𝒢_T-W:T|θ).
Neural Message-passing of STGNNs
GNNs can be considered as a general model of message-passing neural networks (MPNNs). MPNNs contain three forward computation phases: message passing (MP), feature aggregation, and node updating, parameterized by permutation-invariant functions <cit.>. Under the spatiotemporal settings, MPNN at each time step can be formulated as follows:
𝐦_t^i← j,(l)=Msg_l(𝐡_t^i,(l-1),𝐡_t^j,(l-1), e^t_i← j),
𝐦_t^i,(l)=Agg(𝐦_t^i← j,(l);∀ j∈𝒩(i)),
𝐡_t^i,(l)=Up_l(𝐡_t^i,(l-1),𝐦_t^i,(l)),
where e^t_i← j is the edge weight between node i and j at time t.
An additional temporal MP process can be applied to model temporal correlations <cit.>. Stacking several MPNNs or applying high-order ones can somewhat alleviate the spatial ctx problem. However, these solutions are prone to the over-squashing problem <cit.> and deficient in locating series in the time axis.
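For readers less familiar with this formulation, the following is a minimal PyTorch sketch of one such message-passing step over a dense adjacency matrix; the linear message/update functions, ReLU activations, and sum aggregation are illustrative assumptions rather than the components of any specific STGNN discussed here.

```python
import torch
import torch.nn as nn

class SimpleMPNNLayer(nn.Module):
    """One message-passing step (message -> sum aggregation -> update) at a single time step."""
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * dim + 1, dim)  # Msg_l(h_i, h_j, e_{i<-j})
        self.upd = nn.Linear(2 * dim, dim)      # Up_l(h_i, m_i)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, dim) node states at time t; adj: (N, N) edge weights e^t_{i<-j}
        n = h.size(0)
        h_i = h.unsqueeze(1).expand(n, n, -1)    # receiver features, index [i, j] = h_i
        h_j = h.unsqueeze(0).expand(n, n, -1)    # sender features,   index [i, j] = h_j
        e = adj.unsqueeze(-1)                    # scalar edge weight per pair
        messages = torch.relu(self.msg(torch.cat([h_i, h_j, e], dim=-1)))
        mask = (adj > 0).float().unsqueeze(-1)   # restrict to neighbors j in N(i)
        m = (messages * mask).sum(dim=1)         # permutation-invariant sum over senders
        return torch.relu(self.upd(torch.cat([h, m], dim=-1)))
```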
§ ESSENTIALLY CONNECTED NEURAL PREDICTORS
The above discussion motivates us to develop a simple but effective model for task. We identify the key challenge of task as spatial and temporal ctx and rely on node embedding (NE) to learn distinguishable spatiotemporal representations.
In particular, model features an essentially connected neural architecture based on a message-passing backbone (see Fig. <ref>), and the rationale for model is the versatile use of learnable node embedding and the simplicity of modular design. Notably, our model avoids complex temporal methods (e.g., RNNs, TCNs, and self-attention) and spatial techniques (e.g., diffusion convolutions and distance-based adjacency graphs) entirely. With a concise and comprehensible neural predictor, we seek to empirically answer the following key questions:
Q1 Are NE powerful for spatiotemporal ctx?
Q2 With NE, how to keep the model structure minimalistic?
Q3 Are simplified model architectures effective for task?
With these questions, we provide detailed descriptions about each modular component in the following sections.
§.§ Overview of Architectural Components
Overview The overall architecture of model can be concisely formulated as follows:
𝐇_t^0=Projection(𝒳_t-W:t),
𝐇_t^(1)=TimeMixer(𝐇_t^(0);𝐄_t),
𝐇_t^(l+1)=SpaceMixer(𝐇_t^(l);𝐀), l∈{1,…,L}
𝒳_t:t+H=Readout(𝐇_t^(L+1)),
where TimeMixer and SpaceMixer build on conceptually and technically simple Mlp and MPNN blocks, which can be represented together by Fig. <ref>(2). 𝐄_t is learnable NE, and 𝐀 is the relational graph. In the following paragraphs we will detail each component in turn.
Input flattening and Projection
STGNNs usually handle the time and feature dimensions separately, i.e., they first project the input features at each time step to high-dimensional representations independently and then correlate different time steps with sequential models like RNNs. Despite being reasonable and intuitive, this treatment significantly increases the model complexity and the risk of overfitting, especially when the hidden dimension is much larger than the channel (feature) dimension. As such, we propose to flatten the raw time series along the time dimension and project the inputs into the hidden space with a simple Mlp layer:
𝐗_t-W:t = Fold([𝒳_t-W:t𝐔_t],dim=0),
𝐇^0 =Mlp(𝐗_t-W:t),
where Fold(·):ℝ^N× T× d_in→ℝ^N× T d_in is the flattening operation. By flattening and projecting along the time dimension, serial information is stored in the hidden state 𝐇^0.
We further inject exogenous variables 𝐔_t about series properties here to inform the projection layer with time-series information, e.g., periodicity and seasonality.
TimeMixer for exploiting historical information
Time mixer models the temporal relations and patterns contained in historical series. We adopt 2-layer Mlp with residual connection <cit.> to encode temporal representations:
𝐇^0 =σ(Θ_time^0𝐇^0),
𝐇^1 =IN(Θ_time^1𝐇^0)+Θ_time^0𝐇^0,
where Θ_time^0,Θ_time^1 are linear weights, IN(·) is the instance normalization <cit.>, which is adopted to mitigate distribution shift <cit.> and reinforce the representations of high-frequency temporal components <cit.>.
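A compact sketch of this time mixer is given below; the ReLU activation and the per-node instance normalization over the flattened hidden dimension are assumptions made for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn

class TimeMixer(nn.Module):
    """Two-layer residual MLP over the flattened time dimension (sketch)."""
    def __init__(self, hidden: int):
        super().__init__()
        self.lin0 = nn.Linear(hidden, hidden)  # Theta_time^0
        self.lin1 = nn.Linear(hidden, hidden)  # Theta_time^1

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, hidden) flattened-and-projected series representations
        h0 = torch.relu(self.lin0(h))
        z = self.lin1(h0)
        # instance normalization per node over the hidden dimension (assumption)
        z = (z - z.mean(dim=-1, keepdim=True)) / (z.std(dim=-1, keepdim=True) + 1e-5)
        return z + h0  # residual connection
```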
SpaceMixer for modeling relational information
The proposed graph mixer can be viewed as a structured feature (channel) mixing layer, which is a specialization of the vanilla Mlp mixer <cit.>. As shown in Fig. <ref>, what differentiates SpaceMixer from TimeMixer is the MP step that aggregates neighborhood features on graphs.
Considering the dependence of channels (sensors), the Mlp mixer can be formulated as:
𝐇^(l)=σ(Θ^(l)_channel𝐇^(l-1)Θ^(l)_time),
where Θ^(l)_channel is the channel mixing weight and can also be interpreted as an unconstrained graph learning module that contains N^2 parameters. To specialize this model, we adopt a structured MPNN as the featured block of the graph mixer:
Mpnn^(l)(𝐇^(l-1);𝐀|Θ)=σ(𝐀𝐇^(l-1)Θ^(l)_time),
where 𝐀 is the relational graph, which can be either a priori or adaptively learned. It can be seen that our graph mixer exploits the characteristics of adjacency matrices and explicitly models the pairwise interactions among nodes. By seizing relational information, our model achieves a modular extension from univariate models to multivariate ones.
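For concreteness, a minimal sketch of one SpaceMixer step might look as follows; the activation and the residual shortcut are assumptions, and adj may be a predefined or learned (N, N) matrix.

```python
import torch
import torch.nn as nn

class SpaceMixer(nn.Module):
    """Structured channel mixing A @ H @ Theta_time, i.e., one dense message-passing step (sketch)."""
    def __init__(self, hidden: int):
        super().__init__()
        self.theta = nn.Linear(hidden, hidden, bias=False)  # shared time-mixing weight

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, hidden) node representations; adj: (N, N) relational graph
        return torch.relu(adj @ self.theta(h)) + h
```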
Dense Readout
For multi-step task, we adopt a Mlp and reshaping layer to directly output the predictions:
𝐗_t=Mlp(𝐇_t^(L)),
𝒳_t:t+H=Unfold(𝐗_t),
where Unfold(·) is the inverse linear operator of Fold(·).
Again, we do not elaborate on a complex sequential decoder, and the multistep predictions are regressed directly.
§.§ Node Embedding for When-Where Contextualization
One cornerstone of the proposed framework is the versatility of NE. To tackle the spatial-temporal ctx issue, the roles of NE in our model are twofold: the where locator and the when locator, which correspond to positional and structural representations in the theory of GNN representation learning <cit.>.
Spatial-temporal node embedding For the space dimension, we consider setting a unique identifier for each sensor. An optional structural embedding is the random-walk diffusion matrix <cit.>. For simplicity, we use the learnable NE as a simple index positional encoding without any structural priors. As for implementation, we assign a learnable vector of size d_emb with random initialization to each series, denoted as 𝐄∈ℝ^N× d_emb. Then 𝐄 is included in the forward computation as an endogenous variable, and its gradient is updated end-to-end by backpropagation.
This NE reflects the static features of each sensor, e.g., dominating traffic patterns <cit.>, and is agnostic to temporal information. Inspired by <cit.>, we propose to inform the model with the stationary serial properties of traffic flows, e.g., periodicity and seasonality. The sinusoidal positional encoding <cit.> 𝐔_t∈ℝ^T× d_s is adopted to inject the time-of-day information into all series. Given 𝐔_t, we first fold the time dimension and project it into the hidden space:
𝐮 =𝐖_UFold(𝐔_t),
𝐔 =Broadcast(𝐮,N),
𝐄_t =ResMlp(ResMlp(𝐄+𝐔)),
where 𝐖_U∈ℝ^d_emb× Td_s is a learnable weight matrix, and Broadcast(·) duplicates its input a given number of times along a new dimension. 𝐄_t is the final spatiotemporal node embedding (STNE). Since input data from different time-of-day intervals have different sinusoidal stamps, the STNE can reflect the periodicity of the series. The following paragraphs explain how this STNE contributes to the traffic forecasting process.
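The construction of E_t can be sketched as follows; the exact ResMlp architecture, the use of a single residual MLP instead of two, and the ReLU activation are simplifying assumptions.

```python
import torch
import torch.nn as nn

class STNodeEmbedding(nn.Module):
    """Spatiotemporal node embedding: learnable per-node vectors plus folded time-of-day stamps (sketch)."""
    def __init__(self, num_nodes: int, d_emb: int, window: int, d_stamp: int):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(num_nodes, d_emb))   # "where locator" E
        self.proj_u = nn.Linear(window * d_stamp, d_emb)              # W_U applied to Fold(U_t)
        self.res_mlp = nn.Sequential(nn.Linear(d_emb, d_emb), nn.ReLU(), nn.Linear(d_emb, d_emb))

    def forward(self, stamps: torch.Tensor) -> torch.Tensor:
        # stamps: (window, d_stamp) sinusoidal time-of-day encodings of the input window
        u = self.proj_u(stamps.reshape(-1))     # fold the time dimension and project
        e = self.node_emb + u.unsqueeze(0)      # broadcast the stamp vector to all N nodes
        return e + self.res_mlp(e)              # residual MLP refinement
```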
“Where locator”
It can be seen that Eqs. <ref> and <ref> are global models shared by all sensors. Given two sensors with similar historical inputs, such global models fail to contextualize each series in the spatial dimension, even though they have different dynamics in the future. To circumvent such a one-to-many problem, we parameterize each node with a discriminative identifier using the STNE. Specifically, we add the learnable 𝐄_t to each series in the first Mlp of TimeMixer as spatial ctx directly:
𝐡_i^0 =Mlp(𝐡_i^0 + 𝐞_i).
Notably, the introduction of learnable embedding in each node is shown to be equivalent to the specialization of the univariate model <cit.>, which benefits the use of node-specific local dynamics <cit.>.
On the other hand, since GNNs focus on local graph structures, they are inadequate to distinguish between two nodes with close neighborhood structure, which is understood as a spatial indistinguishability problem <cit.>.
Therefore, to allow the aggregation function to be aware of the difference of each node, so that the node features can still be discriminated after the message aggregation, we include 𝐄_t in the MP process in Eq. <ref>:
𝐦_t^i← j=Msg([𝐡_t^i; 𝐞^i], [𝐡_t^j; 𝐞^j], a^t_i← j).
We claim that adding learnable features to the message function is equivalent to incorporating high-frequency components into low-frequency representations to highlight the impact of local events in spatiotemporal data. In fact, random node initialization by itself already enhances GNNs <cit.>. As can be seen, the "where locator" only requires the identifier to be exclusive; we make it learnable so that it can be reused as the "when locator".
“When locator”
STNE represents the dominant traffic flow patterns and can be understood as a global and static identifier.
If a sensor's readings are different at two different times but share a close historical series, the “where locator" alone is less effective at contextualizing its temporal information. In this case, an additional “when locator" is needed.
Following the discussions in <cit.>, we can first divide forecasting models into two categories according to their encoding mechanisms: (1) time-dependent and (2) data-dependent. Concretely, a linear predictive model with weight 𝐖 and bias 𝐛 can be expressed as:
𝐱_t+1:t+H=𝐖𝐱_t-W:t+𝐛,
or: x_t+h=∑_k=0^Ww_k,hx_t-k+b_k,h, h∈{1,…,H},
which forms a vanilla autoregressive (AR) model at each forecast horizon, where the AR parameters depend only on the relative order and are agnostic to the location in the historical sequence. The TimeMixer works in this way.
One natural "when locator" for a time-dependent model is to assign a unique identifier to each piece of time. However, there are three drawbacks:
* since the prediction is basically done sequence-to-sequence, having different identifiers within a time window is redundant;
* the time-dependent model in Eq. <ref> relies on relative timestamps and the use of absolute ones is difficult for it;
* timestamp-based encoding would attribute the ambiguity to time-varying factors, e.g., rush hour, but this may be caused by local high-frequency spatial dynamics such as the spread of traffic congestion or irregular driving behavior.
Conversely, the behaviors of data-dependent models are pattern-aware and condition on local temporal variations:
x_t+h=∑_k=0^Wℱ_k(𝐱_t-W:t)x_t-k+b_k,h, h∈{1,…,H},
where ℱ_k(·) is a data-driven function, e.g., self-attention. We argue that this forms a fully time-varying AR model <cit.> in which the AR process is parameterized by time-varying coefficients. This overparameterization tends to overfit the data rather than capture the temporal relationships such as the location on the time axis.
With this in mind, we propose to reuse the “where locator" as a query for all available reference information from other series to contextualize the temporal patterns. In essence, we only need to specify the relation graphs in Eq. <ref>:
𝐀_t = Softmax(𝐄_t·𝐄_t^𝖳).
Since 𝐄_t is aware of the time-of-day feature, this query function is time-varying within a period. Our approach lies between the time-dependent and data-dependent routines and can forecast with both relative time stamp and local spatial patterns. In this way, the positional node embeddings and structural graph representations are equivalently unified <cit.> in spatiotemporal graphs.
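In code, the reuse of the embeddings as a time-varying relational graph amounts to a single line; this sketch omits any temperature scaling or sparsification that an actual implementation might add.

```python
import torch

def adaptive_adjacency(e_t: torch.Tensor) -> torch.Tensor:
    """A_t = softmax(E_t E_t^T): row-normalized similarities of the time-aware node embeddings (sketch)."""
    scores = e_t @ e_t.t()                 # (N, N) pairwise similarities
    return torch.softmax(scores, dim=-1)   # each row sums to one
```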
§.§ Remarks on Parameter-Efficient Neural Message-Passing Designs
Another strength of model is its parameter-efficient MP backbone. We present several task-agnostic techniques to lighten the overall neural architecture. Note that since model propagates features only after the temporal encoder, rather than at every time step, it is more efficient than alternately stacked spatial-temporal blocks.
Residual connection
We keep a shortcut for the linear part in each block. When all residual connections are activated (e.g., when graph relation is unnecessary), our model can degenerate into a class of linear models with channel-independence, e.g., <cit.>, which show great potential for long-term series forecasting.
Parameter sharing
Different from recent design trends that assign different node parameters or graphs for different layers <cit.>, we set the NE globally shared for all modules. The idea is to facilitate end-to-end training of random node features and reduction of model size. In addition, recent works empirically show that MLPs and MPNNs can share similar feature spaces <cit.>. Without MP, SpaceMixer can collapse into TimeMixer. Considering this, we instantiate the successive MP layers with a common linear transform for time mixing.
Shallow-layer structure
Instead of multilayer deep GNN structures, we adopt shallow-layer structures with larger receptive field. In particular, we consider using one or two layers of MPNN with fully connected graphs to capture long-range spatial dependence, without multiple stacked sparse graph aggregators or hierarchical operations, e.g., diffusion graph convolutions <cit.>.
Recent studies reveal that the expressive power of GNNs are determined by both of the depth and width <cit.>, and wider GNNs can be more expressive than deeper ones <cit.>.
A single MP can gather adequate information from a large number of nodes when the underlying graph is dense <cit.>.
§ RELATED WORKS
Spatiotemporal traffic forecasting has sparked great interest in both academia and industry.
As a standard paradigm, STGNNs take all series as input and use GNNs to correlate them. The properties of traffic flow series are preserved by an elaborate sequential model.
Remarkable results have been achieved in a variety of studies using STGNNs <cit.>. However, modern STGNNs tend to be complex and nontrivial to implement.
Research on simplified deep models for task has been relatively limited.
<cit.> propose a scalable STGNN based on preprocessing that encodes spatiotemporal features prior to training.
<cit.> replace the GNNs with feature aggregation and use a graph sampling strategy. However, both of them rely on complex temporal encoders and pre-computed graph features. Similar to our work, <cit.> developed a fully connected gated GNN model (GatedGN) using a graph inference structure. However, computing all pairwise attention is very expensive.
Along another path of research, several methods have emerged to challenge the existing complex models for LTSF, e.g., LightTS <cit.>, LTSF-Linear <cit.>, TS-Mixer <cit.> and TiDE <cit.>. However, most of them adopt a channel-independence assumption and do not include explicit relational modeling.
Regarding positional encoding in STGNNs,
<cit.> applies learnable embeddings to all nodes, time-of-day, and day-of-week points, enabling MLPs to achieve competitive performance on multiple datasets. However, such overparameterized identifiers are redundant and prone to overfitting. Additionally, this kind of temporal identifier only reflects time-varying components and is less effective in capturing local nonstationary spatial dynamics.
<cit.> further interpreted the role of node embedding as local effects and incorporated it into a global-local architecture.
In summary, none of the existing works has directly and thoroughly investigated the essential model components for spatiotemporal traffic forecasting, nor provided exhaustive discussions on the ctx issue in STGNNs.
§ EMPIRICAL EVALUATION
Extensive experiments are carried out to evaluate the performance of the model model on 7 well-known traffic forecasting benchmarks and compare it with 15 neural forecasting baselines. Details about experimental settings are provided in the Appendix.
PyTorch implementations and reproducible results will be open-sourced upon publication.
§.§ Experiment Setup
Datasets Six high-resolution traffic flow datasets are used to compare short-term forecasting performance, including two traffic speed datasets: and datasets <cit.>, and four traffic volume datasets <cit.>: , , , and . All these traffic measurements are obtained from loop sensors installed on highway networks and aggregated at 5-minute intervals. In particular, the six datasets provide adjacency matrices constructed from the pairwise geographic distances between sensors. The same graphs from previous work <cit.> are prepared for baselines that require predefined graphs.
Then we evaluate the scalability and the ability to forecast long-term series on the benchmark <cit.>, which provides records of hourly road occupancy rates measured by 862 sensors for 48 months on San Francisco Bay Area freeways. Detailed descriptions of these benchmarks are given in Appendix.
Baselines We consider a variety of deep learning baselines in the literature to benchmark our model: <cit.>;
<cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>;
<cit.>; <cit.>; <cit.>. To give a fair comparison, we use the official implementations in their original papers and use the recommended hyper-parameters on each dataset as far as possible. All baselines are evaluated under the same training, validation, and testing environments.
Experimental settings We follow the same experimental setups as used in previous works wherever possible to provide unbiased evaluations. For the six short-term datasets, we use 12-step historical series (1 hour) to forecast the future 12-step observations.
For large-scale data, we adopt the settings in <cit.>, in which 96 steps are used as input and predictions at the next {96,192,288,384} steps are used for evaluation.
Quantitative metrics including mean absolute error (MAE), mean squared error (MSE), and mean absolute percentage error (MAPE) are calculated.
§.§ Results
Short-term traffic benchmarks Model comparison results on the six short-term speed and volume benchmarks are given in Tab. <ref>. Intriguingly, model achieves performance comparable to or better than that of its more complicated counterparts.
In particular, forecasting data is supposed to be more challenging due to complex temporal patterns, while our model consistently outperforms baselines by a large margin, even without popular sequential techniques (Q3). For data, model is still among the best-performing predictors.
Compared to STID, our model features the extraction of distinguishable low-frequency node representations with more effective “when locators", thus showing better accuracy in all scenarios. Additionally, attention-based methods such as GatedGN, MTGNN, and D2STGNN also show competitive results. However, high memory consumption and computational complexity hinder them from large-scale applications.
Similar observations can be made in the traffic volume datasets. Although model does not depend on a predefined adjacent matrix, it still achieves SOTA performance. Unlike the loop speed, distance-based correlation metrics may fail to describe the behavior of traffic volumes <cit.>. Our observation further confirms this claim. It is worth commenting that PeMS07 contains more than eight hundred sensors, so that the attention mechanism on the spatial dimension will lead to a prohibitive memory cost. Finally, with spatiotemporal ctx, model can take advantage of relational information to improve forecasting with a simple structure (Q1).
Computational performance Apart from results on forecasting precision, we also examine the computational performance. Fig. <ref> displays the validation MAE curves of several competing STGNNs on two datasets. Records of training speed, model size and memory usage on METR-LA are exemplified in Tab. <ref>.
Notably, model enjoys a much faster training speed (converging within ten minutes of GPU time), a smoother convergence curve, a lower error bound, and a lower training expense, indicating high efficiency and scalability.
This superiority is achieved through a technically simple structure, node embedding reuse, and parameter-efficient backbones (Q2).
Advanced STGNNs like DGCRN and D2STGNN apply a series of alternating graph convolutions and temporal models, resulting in a high computational burden and unstable optimization. Extra operations such as self-attention or graph attention further complicate the process.
Long-term forecasting performances We also examine the ability to predict long-term series. Tab. <ref> reports the results of different prediction horizons with a 96-step sequence as input. Compared to three strong baselines in the LTSF literature, model shows great potential to handle the challenging LTSF task. This may be because our model can encode series with both relative time stamp and local patterns.
Ablation study We conduct ablation studies to test the rationality of model designs.
Tab. <ref> reports the forecasting performance of different model variations: w/o 𝐄 indicates that the node embedding is removed and model degenerates into an Mlp-mixer; w 𝐀_pre incorporates the predefined graphs with an additional diffusion graph convolution <cit.>; w 𝐒_emb means that we replace the STNE with a timestamp-based embedding. The results clearly confirm the essence of the model designs and corroborate our hypothesis (Q1, Q2).
§ CONCLUSION AND OUTLOOK
This work goes beyond the realm of STGNNs and demonstrates an essentially connected network model for traffic forecasting.
We identify the core challenge of task as spatial-temporal ctx issue and design “where" and “when" locators to tackle it. model features a simple structure and a parameter-efficient backbone.
Extensive and rigorous evaluations on several benchmarks indicate that model not only outperforms a diverse array of baselines but also enjoys high computational efficiency. We believe that model will form the basis for further research on understanding and designing simpler neural forecasting models and will help practitioners develop more efficient tools in industrial applications. Future work includes further exploration of performance on larger-scale datasets and a wider variety of data, as well as the interpretability.
§ ACKNOWLEDGMENTS
This research was sponsored by the National Natural Science Foundation of China (52125208), and the Science and Technology Commission of Shanghai Municipality (No. 22dz1203200).
|
http://arxiv.org/abs/2307.10184v1
|
20230703122844
|
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
|
[
"Yudong Gao",
"Honglong Chen",
"Peng Sun",
"Junjian Li",
"Anqing Zhang",
"Zhibo Wang"
] |
cs.CR
|
[
"cs.CR",
"cs.AI",
"cs.LG",
"53A45",
"I.4.10"
] |
[1] [email protected], China University of Petroleum (East China), China
[2] China University of Petroleum (East China), China
[3] Hunan University, China
[4] China University of Petroleum (East China), China
[5] China University of Petroleum (East China), China
[6] Zhejiang University, China
Backdoor attacks pose serious security threats to deep neural networks (DNNs). Backdoored models make arbitrarily (targeted) incorrect predictions on inputs embedded with well-designed triggers while behaving normally on clean inputs. Many works have explored the invisibility of backdoor triggers to improve attack stealthiness. However, most of them only consider the invisibility in the spatial domain without explicitly accounting for the generation of invisible triggers in the frequency domain, making the generated poisoned images be easily detected by recent defense methods. To address this issue, in this paper, we propose a DUal stealthy BAckdoor attack method named DUBA, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains, to achieve desirable attack performance, while ensuring strong stealthiness. Specifically, we first use Discrete Wavelet Transform to embed the high-frequency information of the trigger image into the clean image to ensure attack effectiveness. Then, to attain strong stealthiness, we incorporate Fourier Transform and Discrete Cosine Transform to mix the poisoned image and clean image in the frequency domain. Moreover, the proposed DUBA adopts a novel attack strategy, in which the model is trained with weak triggers and attacked with strong triggers to further enhance the attack performance and stealthiness. We extensively evaluate DUBA against popular image classifiers on four datasets. The results demonstrate that it significantly outperforms the state-of-the-art backdoor attacks in terms of the attack success rate and stealthiness.
<ccs2012>
<concept>
<concept_id>10002978.10003014.10003015</concept_id>
<concept_desc>Security and privacy Security protocols</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010178.10010224</concept_id>
<concept_desc>Computing methodologies Computer vision</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[300]Security and privacy Security protocols
[500]Computing methodologies Computer vision
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
Zhibo Wang
§ INTRODUCTION
Deep neural networks (DNNs) have made great achievements in many fields, such as image classification <cit.>, image segmentation <cit.>, and target recognition <cit.>. Despite the remarkable success, DNNs face various security threats since the models are usually trained on datasets labeled by third-parties or even outsourced for training. Recent studies have shown that DNNs are vulnerable to backdoor attacks <cit.>, where an adversary intentionally manipulates a part of the training data or modifies the model parameters to make the model behave correctly on clean data but make arbitrarily (targeted) incorrect predictions on poisoned data. Backdoor attacks pose serious security threats to deep learning systems, especially in security-sensitive applications (e.g., autonomous driving <cit.>).
Most existing backdoor attacks craft backdoor triggers in the spatial domain <cit.>. However, as shown in Figure <ref>, with the visible triggers, early backdoor attacks (e.g., BadNets <cit.> and Blend <cit.>) can be easily detected and removed. To enhance attack stealthiness, recent works consider generating invisible triggers through image steganography <cit.>, distorting fields <cit.>, and so on. Nevertheless, these methods only consider the invisibility of triggers in the spatial domain but not the frequency domain. Consequently, the generated backdoored images can be easily identified by typical image classification models that employ the Fourier transform as a part of the task pipeline <cit.>. More importantly, researchers have proposed effective backdoor defenses from the frequency perspective. For example, FTD <cit.> demonstrated that most backdoor triggers are high-frequency semantics, and it trained a DNN model that classifies images in the frequency domain to effectively defend against most attacks. That is, most backdoors are perceptible in the frequency domain. Therefore, it is essential to ensure the trigger's invisibility in both the spatial and frequency domains to launch a powerful yet stealthy backdoor attack.
A few recent works have begun to study how to implant stealthy backdoors from the frequency perspective. For example, the work in <cit.> employs a low-pass filter to implant a backdoor that is invisible in the frequency domain but visible in the spatial domain.
Motivated by the above discussions, in this paper, we propose a DUal stealthy BAckdoor attack called DUBA, which crafts invisible triggers in both the spatial and frequency domains while achieving desirable attack performance. Specifically, we first embed the high-frequency information of the trigger image into the clean image by discrete wavelet transform (DWT), yielding the initial poisoned image. Then, to ensure strong stealthiness, we fuse the initial poisoned image (which has high-frequency triggers) with the clean image in the Fourier and Cosine transform domains. Furthermore, we propose an attack strategy, which can greatly reduce the embedded high-frequency information of the trigger images and randomly mask more parts of the trigger images in the training phase to make the victim model better learn the triggers.
The major contributions of this paper are summarized as follows:
* We design a DUal stealthy BAckdoor attack named DUBA that achieves desirable invisibility in both spatial and frequency domains by embedding high-frequency trigger information through DWT and smoothing it in Fourier Transform and Cosine Transform domains.
* We propose a novel attack strategy for DUBA where the model is trained with weak triggers and attacked with strong triggers to attain satisfactory attack performance while ensuring stealthiness.
* We conduct extensive experimental evaluation of DUBA on four datasets and popular models. The results demonstrate its outstanding performance in terms of both attack effectiveness and stealthiness.
§ RELATED WORK
§.§ Backdoor Attack
Backdoor attack has drawn wide attention since its introduction <cit.>. According to the trigger generation method, existing backdoor attacks can be roughly divided into two categories, i.e., spatial domain backdoors and frequency domain backdoors.
Spatial Domain Backdoors. BadNets <cit.> first revealed the existence of backdoors in DNNs. This attack embeds a visible square in the bottom right corner of the clean image and manipulates the associated label to the target label. Then, the backdoor can be injected into the model after training on the poisoned data. In the inference phase, input images with the same trigger will be misclassified into the attacker-chosen target label. Inspired by BadNets, researchers have also investigated other backdoor attacks. Blend <cit.> advocates image blending backdoors, whereas another work <cit.> employs a fixed watermark as a trigger to insert backdoors. However, these early backdoors are all visually visible and thus can be easily detected and removed. Therefore, how to generate and implant visually invisible backdoors has recently become a hot research topic. For example, ISSBA <cit.> embeds trigger information by steganography; WaNet <cit.> crafts triggers by distorting fields; and LIRA <cit.> searches for triggers in a highly nonlinear parameter space. Though these methods can successfully generate invisible triggers and bypass mainstream backdoor defenses, none of them explicitly account for the characteristics of the image in the frequency domain. Thus, these backdoor attacks can be easily detected by models empowered by the Fourier transform (which is often employed as a part of the task pipeline) or by frequency-oriented defense methods.
Frequency Domain Backdoors. Recently, <cit.> started to explore backdoor attacks in the frequency domain. To avoid high-frequency artifacts after the Discrete Cosine Transform (DCT) <cit.>, it applies a low-pass filter to generate a smooth trigger. However, this method yields visible artifacts in the spatial domain. FIBA <cit.> crafts triggers in the frequency domain by mixing the low-frequency components of two images after the Fast Fourier Transform (FFT) <cit.>, which is visually imperceptible in the spatial domain but still visible in the frequency domain. Another work, FTROJAN <cit.>, first transforms the clean image with YUV or UV (two color coding methods), then applies the DCT with modifications on the high-frequency or mid-frequency components to generate the poisoned image. However, the transformations required and the frequency components to be modified differ across images, thus increasing the computational overhead. Moreover, the trigger generated by FTROJAN is also visible in the frequency domain.
§.§ Backdoor Defense
To defend DNNs against various backdoor attacks, researchers have proposed many defense methods accordingly <cit.>. Generally, backdoor defenses can be categorized into input-based, model-based, and output-based methods.
Input-based Defenses. Input-based defenses focus on input abnormalities <cit.>. Grad-Cam <cit.> uses a saliency map to dissect the regions of the input image that the model focuses on. If the model does not focus on the object or keeps focusing on the same region, the image is considered as poisoned. FTD <cit.> employs DCT to distinguish whether the input image has high-frequency artifacts. They design a DNN-based discriminator to classify images with high-frequency artifacts as poisoned images.
Model-based Defenses. They focus on the investigation of the victim model <cit.>. Fine-Pruning <cit.> mitigates backdoors by pruning dormant neurons since it is likely that these neurons provide specialized support to backdoors. Neural Cleanse <cit.> identifies whether there is a backdoor in the model by reverse engineering the triggers and utilizes anomaly detection to determine the most potential backdoor.
Output-based Defenses. Defense methods of this type often observe output anomalies <cit.>. STRIP <cit.> superimposes various image patterns on the suspicious image to observe its output. Higher poisoning odds yield lower output randomness. To circumvent the existing backdoor defenses from both spatial and frequency domains, in this work, we aim to craft a powerful backdoor attack that is invisible in both domains.
§ METHODOLOGY
§.§ Threat Model
Attacker’s Capabilities. In the training phase, following prior studies <cit.>, we consider that the attacker is only allowed to tamper with a part of the training data, but has no access to other model training components (e.g., the victim model architecture and loss function). In the inference phase, the attacker can only manipulate the input images (i.e., embedding the crafted backdoor trigger). Such a threat model can be seen in many real-world scenarios like the outsourcing of model training to third-parties.
Attacker’s Goals. Generally, an effective backdoor attack should mislead the models into making arbitrarily (targeted) incorrect predictions on the tampered testing images without compromising the models' performance on normal inputs. Furthermore, a powerful backdoor attack should satisfy the following two objectives:
* Invisibility. The poisoned images should be invisible in both spatial and frequency domains.
* Robustness. The backdoor attack can successfully circumvent state-of-the-art defense methods.
§.§ Problem Formulation
We focus on the typical supervised image classification, which is widely used in face recognition, traffic signal recognition and other security-sensitive fields. Formally, the image classification can be described as a mapping function f_θ: 𝒳→𝒞, where 𝒳 is the input domain and 𝒞 is the set of target classes. In the image classification, the model parameters θ can be learned from the training dataset D_train ={(x_i, y_i)}_i=1^N of N data samples, where x_i ∈𝒳,
y_i ∈𝒞. The core of backdoor attacks is to craft a set of poisoned data samples D_poison ={(T(x_i), γ(y_i))}_i=1^M, where T(·) denotes the trigger implantation method and γ(·) ∈𝒞 represents the designated target label. Specifically, γ(y_i)=c, where c is a constant, stands for the All-to-One attack, while γ(y_i)=y_i+1 represents the All-to-All attack. In summary, backdoor attacks aim to manipulate a subset of the training data (note that M ≪ N and ρ = M/N is the poisoning ratio) by injecting adversarial triggers such that the model trained on the tampered dataset yields the following behaviors when deployed:
f_θ( x_i) = y_i , f_θ( T( x_i)) = γ( y_i).
In this paper, we focus on designing the trigger implantation method T(·).
§.§ The Proposed Attack
Overview of DUBA. Figure <ref> shows the framework of DUBA, which is composed of three steps and an attack strategy. First, to attain desirable backdoor attack performance, we employ DWT to embed the high-frequency information of a fixed trigger image into the clean image to generate the initial poisoned image. Second, to ensure strong stealthiness in both the spatial and frequency domains, we incorporate FFT and DCT to mix the initial poisoned image with the clean image to generate the intermediate poisoned image. Third, to ensure that the victim model learns the backdoor scattered over the entire poisoned image while preserving good invisibility, we propose to randomly mask the trigger of the intermediate poisoned image to get the final poisoned image. Besides, we propose an attack strategy where the victim model is trained with weak triggers and attacked with strong triggers to achieve higher attack success rates (ASRs).
Step 1: High-Frequency Information Embedding. Inspired by prior works <cit.> that utilize high-frequency semantic information for trigger embedding, we propose to extract the high-frequency information from a fixed image (different from the image to be poisoned) as the initial trigger. Specifically, we employ DWT for high-frequency information extraction given that DWT can finely dissect images at high frequencies but coarsely analyze images at low frequencies. Formally, as shown in Step 1 in Figure <ref>, given a clean image x_c and a random initial trigger image x_p, the idea is to embed the high-frequency part of x_p into the deep high-frequency region of x_c. DWT decomposes image x into one low-frequency part and three high-frequency parts, which are represented as:
W( x) = {L,H_1,H_2,H_3},
where L represents the low-frequency approximate component, while H_1, H_2, and H_3 denote the high-frequency components at the vertical, diagonal, and horizontal directions, respectively. Accordingly, the image can also be recovered by inverse discrete wavelet transform (IDWT) W^ -:
W^ - ( L,H_1,H_2,H_3) = x.
To embed a sufficiently hidden trigger, we apply three DWTs to clean image x_c, which is expressed as:
{L_i + 1,H_i + 1,1,H_i + 1,2,H_i + 1,3} = W( L_i),i=0,1,2,
where L_0 equals x_c and i stands for the i-th DWT. Then, we apply one DWT to trigger image x_p. Note that since the image size will change after DWT and we perform three DWTs on x_c, trigger image x_p first needs to be resized into two different sizes, which are referred to as x_p1 and x_p2. Then, for both x_p1 and x_p2, we apply DWT once to obtain two different high-frequency trigger information.
{LP_1,HP_1,1,HP_1,2,HP_1,3} = W(x_p1), {LP_1^',HP_1,1^',HP_1,2^',HP_1,3^'} = W(x_p2).
Next we embed the high-frequency information of the trigger image, i.e., HP_1,j and HP_1,j^' (j= 1,2,3), into different high-frequency parts of the clean image x_c. Formally, we have:
H_3,j^' = H_3,j×α + HP_1,j^'×( 1 - α), H_2,j^' = H_2,j×β + HP_1,j×( 1 - β), j=1,2,3,
where α and β indicate the embedding intensity. With H_3,j^' and H_2,j^', initial poisoned image P_i can be derived using the IDWT as:
L_i^' = W^ - ( L_i + 1^',H_i + 1,1^',H_i + 1,2^',H_i + 1,3^'), i=0,1,2,
where L_3^' equals L_3, H_1,j^' equals H_1,j, and the final low-frequency component L_0^' is exactly the poisoned image generated in Step 1, which is denoted as P_i. Considering that high-frequency information is stealthier than low-frequency information, the above method can generate an almost invisible backdoored image in the spatial domain (in fact, this depends on the intensities α and β; we discuss their effect on the visual quality in the ablation experiments). In what follows, we aim to answer the question of how to craft a poisoned image that is also invisible in the frequency domain.
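For intuition, the high-frequency embedding of Step 1 can be sketched as follows. The snippet simplifies the procedure to a single DWT level on a grayscale image (the method above nests three levels and uses two resized trigger images), and the choice of the 'haar' wavelet and the PyWavelets library are assumptions made purely for illustration.

```python
import numpy as np
import pywt

def embed_high_freq(clean: np.ndarray, trigger: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Blend the trigger's high-frequency DWT sub-bands into the clean image's (one-level sketch)."""
    cA, (cH, cV, cD) = pywt.dwt2(clean, 'haar')     # clean: low-freq + 3 high-freq sub-bands
    _, (tH, tV, tD) = pywt.dwt2(trigger, 'haar')    # trigger: keep only its high-freq sub-bands
    mixed = (cH * alpha + tH * (1 - alpha),
             cV * alpha + tV * (1 - alpha),
             cD * alpha + tD * (1 - alpha))
    return pywt.idwt2((cA, mixed), 'haar')          # reconstruct the (initial) poisoned image
```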
Step 2: Frequency Domain Smoothing. Previous studies <cit.> have shown that the phase spectrum of an image after FFT retains information about the edges and the overall structure of the image, which captures high-level semantic information. Meanwhile, the amplitude spectrum obtains the underlying semantic information, preserving the frequency information <cit.>. Besides, it has been observed in <cit.> that changes in the amplitude spectrum do not significantly affect the perception of high-level semantics. Moreover, since the result of FFT is in the complex domain, while the image we usually observe in the frequency domain is actually the amplitude spectrum, we choose a straightforward yet effective approach. Specifically, to ensure both the backdoor attack performance and the amplitude spectrum's invisibility, we directly swap the amplitude spectrums of the P_i and x_c after FFT. Formally, as shown in Step 2 of Figure <ref>, let F^S( ·) and F^P( ·) be the amplitude and phase components of FFT results, the amplitude and phase spectra of P_i and x_c after FFT are obtained as:
S^c = F^S( x_c), P^c = F^P( x_c), S^p = F^S( P_i), P^p = F^P( P_i).
Then the smoothed poisoned image P_mi is calculated from the amplitude spectrum of x_c and the phase spectrum of P_i. Formally,
P_mi = F^ - ( S^c,P^p),
where F^-(·) represents the inverse FFT. We would like to note that the FFT-based smoothing is conducted in the complex domain. Thus, despite the spectrogram of the image after FFT-based smoothing being theoretically hidden, the image is still perceivable in the real domain (see the ablation experiment for a visual demonstration). Moreover, the poisoned image may be detected by the DCT-based defense, which is unacceptable even though the detection probability is low.
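Before turning to the DCT-based refinement motivated above, the amplitude-phase swap itself can be sketched in a few lines of numpy (single-channel case; per-channel handling for RGB images is omitted):

```python
import numpy as np

def amplitude_phase_swap(clean: np.ndarray, poisoned: np.ndarray) -> np.ndarray:
    """Keep the clean image's amplitude spectrum and the initial poisoned image's phase spectrum (sketch)."""
    s_c = np.abs(np.fft.fft2(clean))        # clean amplitude spectrum S^c
    p_p = np.angle(np.fft.fft2(poisoned))   # poisoned phase spectrum P^p
    mixed = s_c * np.exp(1j * p_p)
    return np.real(np.fft.ifft2(mixed))     # smoothed poisoned image P_mi
```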
To address this residual detectability, we next incorporate the DCT, which is a special case of the FFT in the real domain, to fuse P_mi and x_c. Due to the linearity of the DCT, fusing the two images after only one DCT and then inverting the fused result is equivalent to directly fusing the two images, which cannot achieve the purpose of deep smoothing. Therefore, we apply two DCTs to P_mi and x_c to achieve deeper information fusion. Let D be the DCT and D^ - the inverse DCT (IDCT); we obtain the deep information in the DCT domain as follows:
D_c^k = D( D_c^k-1) , D_p^k = D( D_p^k-1), k = 1,2,
where k stands for the k-th DCT, D_c^0 equals x_c, and D_p^0 equals P_mi. As shown in Figure <ref>, the DCT smoothing is then implemented according to:
D_p^k - 1 = D^ - [ D_p^k ×λ + D_c^k × (1 - λ)], k=1,2,
where λ indicates the fusing intensity. After two steps of IDCT, we obtain the new D_p^0, which is exactly the intermediate poisoned image (denoted as P_m) generated in Step 2.
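The two-level DCT fusion can be sketched as follows, using SciPy's dctn/idctn with an orthonormal DCT; the function name and the reuse of lam at both levels are our simplifications.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_smooth(clean, poisoned, lam=0.7):
    # first- and second-level DCTs of both images
    D_c1, D_p1 = dctn(clean, norm="ortho"), dctn(poisoned, norm="ortho")
    D_c2, D_p2 = dctn(D_c1, norm="ortho"), dctn(D_p1, norm="ortho")
    # fuse at the deepest level, invert, then fuse again and invert (k = 2, 1)
    D_p1 = idctn(D_p2 * lam + D_c2 * (1.0 - lam), norm="ortho")
    D_p0 = idctn(D_p1 * lam + D_c1 * (1.0 - lam), norm="ortho")
    return D_p0
```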
Step 3: Random Trigger Masking. To ensure that the victim model learns the backdoor scattered over the entire image while well preserving the desirable attack stealthiness in both the spatial and frequency domains, we propose to randomly mask the trigger image. As shown in Step 3 of Figure <ref>, we first obtain the trigger embedded in P_m by subtracting x_c from P_m. Then, randomly mask it. Finally, we again embed the trigger pattern after masking into the clean image, yielding the final poisoned image P_f.
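A possible implementation of the random masking step is sketched below; mask_ratio is a hypothetical parameter controlling the fraction of trigger pixels that are dropped.

```python
import numpy as np

def random_mask_trigger(clean, poisoned, mask_ratio=0.3, seed=None):
    rng = np.random.default_rng(seed)
    trigger = poisoned - clean                       # trigger carried by the poisoned image
    keep = rng.random(trigger.shape) >= mask_ratio   # randomly drop a fraction of pixels
    return clean + trigger * keep
```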
Attack Strategy Design. We further devise an attack strategy for DUBA to enhance both attack performance and stealthiness. Specifically, in the training phase, we adopt a weak trigger pattern via two operations, i.e., shrinking the values of α and β as much as possible and masking more pixel points of the trigger image in Step 3. We employ a strong trigger pattern in the inference phase, which means that we make α and β as large as possible while ensuring the triggers' invisibilities and masking fewer pixel points in Step 3.
Moreover, considering that the triggers can be visible when the pixel values of points in the clean image are close to 0 or 255, we further mask the corresponding regions in the trigger image. In particular, in the training phase, we appropriately expand such regions for better attack stealthiness.
§ EXPERIMENTS
§.§ Experimental Settings
Datasets. To evaluate the performance of DUBA on different tasks, we conduct experiments on four different datasets: 1) Cifar10 <cit.>, 2) Gtsrb <cit.>, 3) ImageNet <cit.>, and 4) Fer2013 <cit.>. Cifar10 and ImageNet are object classification datasets that include horses, aircraft, and other objects. Fer2013 is the face expression recognition dataset, while Gtsrb is the traffic signal recognition dataset. We present the details of these datasets in Table <ref>. Note that the ImageNet is too large and we only use a subset of it.
Models. We conduct experiments on three models: ResNet18 <cit.>, RepVGG <cit.>, and Conformer <cit.>. ResNet18 is a classic classification model. RepVGG is the latest VGG model. Conformer is the latest transformer model for image classification.
Baseline Backdoor Attacks. We compare DUBA with BadNets <cit.>, Blend <cit.>, WaNet <cit.>, and FIBA <cit.>. BadNets and Blend are representational visible backdoor attacks. WaNet is the latest invisible backdoor attack in the spatial domain while FIBA is the latest backdoor attack proposed from the frequency perspective.
Evaluation Metrics. We evaluate DUBA and compare it with the baselines from two perspectives, i.e., attack performance and attack stealthiness. For attack performance evaluation, we employ the attack success rate (ASR), which is defined as the proportion of poisoned examples that are misclassified as the target label among all poisoned examples used for testing. Additionally, we utilize the benign accuracy (BA) to characterize the model's performance on clean testing data. For attack stealthiness evaluation, we use the following similarity metrics: peak signal-to-noise ratio (PSNR) <cit.>, structural similarity (SSIM) <cit.> and learned perceptual image patch similarity (LPIPS) <cit.>. These three metrics are correlated; generally speaking, better stealthiness corresponds to higher PSNR and SSIM and lower LPIPS.
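For reference, PSNR and SSIM can be computed with scikit-image as in the snippet below (LPIPS requires a pretrained perceptual network and is omitted); the toy images here are random arrays rather than the actual datasets.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = np.random.rand(64, 64).astype(np.float64)
poisoned = np.clip(clean + 0.01 * np.random.randn(64, 64), 0.0, 1.0)
psnr = peak_signal_noise_ratio(clean, poisoned, data_range=1.0)
ssim = structural_similarity(clean, poisoned, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```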
Implementation Details. In our experiments, we randomly select an image with a dog's ear as the initial trigger image. In the training phase, we set both α and β to 0.4 and mask the regions in the trigger image where the corresponding pixel values of the clean image are lower than 30 or higher than 220. In the attack phase, we set both α and β to 0.6, λ to 0.7, and mask the regions in the trigger image corresponding to pixel values lower than 5 or higher than 245 in the clean image. We employ the SGD optimizer to train the victim model for 200 epochs. The learning rate is set to 0.01 with a decay factor of 0.1 and decay epochs of 50, 100, 150. The batch size is set to 64. Following other studies <cit.>, all attacks are configured as All-to-One attacks, which is sufficient for evaluating attack effectiveness. The defense experiments are all conducted on the RepVGG model.
§.§ Attack Performance Evaluation
Attack Effectiveness. We evaluate the effectiveness of different backdoor attacks with ASR and BA. The relevant results are summarized in Table <ref>, which shows that our proposed DUBA achieves higher or comparable ASRs under most datasets and models. But in some cases such as experiments under ImageNet, the ASRs of DUBA are slightly lower than BadNets. Considering that the crafted trigger by DUBA is invisible in both spatial and frequency domains (which will be validated next), such a result is acceptable. Besides, DUBA only incurs negligible loss (lower than 1%) of BA compared with the clean benchmark. The above results show that DUBA achieves desirable attack effectiveness.
Attack Stealthiness. Now we examine the stealthiness of different backdoor attacks. Figure <ref> shows the poisoned images of different methods, and Figure <ref> provides more visual comparisons between clean images and images poisoned by DUBA. Compared with other methods, DUBA achieves the best invisibility in both the spatial and frequency domains. The backdoor generated by DUBA is visually invisible in the spatial domain, and the residual image in the frequency domain is also close to a pure black image, indicating that the poisoned image in the frequency domain is very similar to the clean image. In Table <ref>, the visual outcomes of the various methods are quantified. The PSNR and LPIPS of DUBA are the best in most cases. Although the SSIM of DUBA is slightly lower than that of BadNets, it is also close to 1 and higher than that of most methods. It can be seen from Figure <ref> that BadNets has the worst stealthiness due to its obvious square trigger in the corner. In summary, DUBA achieves the best stealthiness in terms of both overall visual perception and the different metrics.
§.§ Robustness to Defenses
In this subsection, we test DUBA against five state-of-the-art defenses, including GradCam <cit.>, Neural Cleanse <cit.>, STRIP <cit.>, Fine-Prunning <cit.>, and FTD <cit.>.
Robustness to GradCam. The GradCam-based defense analyzes the model's decision process with saliency maps. Specifically, given an input sample, GradCam produces a heat map highlighting the regions that drive the model's prediction. For a clean image, GradCam focuses on the object. As shown in Figure <ref>, for small triggers such as BadNets, the heat map locks the highest heat value onto the trigger, resulting in an abnormal heat map. In contrast, the heat maps of DUBA are similar to those of clean images, and even lock onto the object more tightly than for clean images. This indicates that GradCam fails to detect DUBA.
Robustness to Neural Cleanse. Neural Cleanse reconstructs the trigger for each class label, and then checks whether there exists a class with a significantly smaller reverse-engineered trigger, which will be treated as a poisoned sample. Specifically, this method quantifies the deviations of reverse-engineered triggers based on their sizes using the anomaly index and considers models with an anomaly index greater than 2 as poisoned models. Table <ref> shows that the anomaly index of DUBA is only 1.22, which is smaller than those of the baseline methods. This validates that our proposed DUBA can effectively circumvent Neural Cleanse.
Robustness to STRIP. STRIP determines whether a model is poisoned by superimposing input images and observing the consistency of the predicted classes. Specifically, the entropy value is used to quantify the level of consistency, and models with an average entropy value lower than 0.2 (and markedly lower than that of clean images) are classified as poisoned. Figure <ref> shows the entropy values of images poisoned by different methods on different datasets, together with the corresponding clean results. All the entropy values of DUBA are larger than 0.2 and very close to those of clean images, which is significantly better than BadNets and Blend. DUBA also achieves entropy values comparable to WaNet and FIBA. Thus, DUBA can effectively bypass STRIP.
Robustness to Fine-Pruning. Fine-Pruning assumes that the backdoor behavior of the model is related to dormant neurons; by pruning these neurons, a clean model should be obtained. Specifically, it records the activation values of clean samples passing through each neuron and treats the neuron with the smallest activation value as the most dormant one. Neurons are then gradually pruned in order of increasing activation value. Usually, the attack is considered successful if the BA on clean images drops below 50% before (in terms of pruning ratio) the ASR on poisoned images does. Figure <ref> shows the results of different methods on Cifar10. Among all the methods, the ASR of DUBA is the last to decline; in particular, when the ASR of DUBA starts to decrease, the pruning ratio has almost reached 96%. Figure <ref> provides more detailed results of DUBA on the four datasets, which show that all the BAs decrease to 50% before the ASRs. Thus, we can conclude that DUBA remains effective against pruning-based defenses.
Robustness to FTD. FTD detects whether an image has high-frequency artifacts and regards such an image as poisoned. It trains a DNN model that classifies poisoned and clean images after DCT. Table <ref> shows that the detection rate of FTD with respect to DUBA is below 50%, implying that DUBA can effectively bypass FTD. This is because the trigger image is smoothed twice in the frequency domain. Note that FIBA also presents a low detection ratio (though higher than DUBA) as it uses a low-frequency trigger.
§.§ Ablation Studies
In this section, we conduct ablation experiments to study the impact of some important parts on DUBA. Experiments are conducted on Cifar10 for training RepVgg.
High-Frequency Embedding Rate. We first examine the effect of α and β (the embedding ratio of the trigger image) on the ASR. Table <ref> shows that DUBA yields a lower ASR when the embedding ratio during training is small. This can be attributed to the inability of the model to learn the complete backdoor or the large differences in the embedding amount between the training and inference phases. When the embedding ratio rises in both the training and attacking phases, DUBA achieves higher ASRs.
DCT Smoothing Parameters. We also investigate the effect of the DCT smoothing parameter on the ASR. Intuitively, when λ decreases, the poisoned images will be closer to clean ones. Thus, the ASR is compromised while the attack stealthiness is enhanced. Table <ref> shows that the ASR increases with λ, which is consistent with the intuition.
Initial Trigger Selection. We explore the effect of different initial trigger images on DUBA. In addition to the image of dog's ear used as the initial trigger image in the above experiments, three other images in Cifar10, Gtsrb, and ImageNet are also tested. Table <ref> shows that there is no substantial association between the initial trigger and ASR.
The Necessity of the Three Frequency Domain Transforms. In the following three parts, we conduct ablation experiments to show the necessity of the three frequency domain transforms.
Use DWT Only. We conduct experiments using only DWT and do not apply any subsequent smoothing steps to the output poisoned image, as shown in Step 1 of Figure <ref>. Figure <ref> visualizes the results. Although the PSNR and SSIM values between the poisoned and clean images are high enough when the embedding coefficients are small, the poisoned images inevitably have visible artifacts in the frequency domain, making it impossible to achieve dual stealth (similar to the previous single stealth backdoor study). This demonstrates the necessity of following smoothing operations.
Use DWT and FFT. We then conduct experiments where the poisoned image is only smoothed in the FFT domain. Figure <ref> shows that even when α and β are large enough, the residual images in the frequency domain are pure black (i.e., the poisoned images are invisible in the frequency domain). However, even with a small embedding ratio, the poisoned images (first row) still have some line-like artifacts in the spatial domain. Furthermore, when α and β are set to 0.4, FTD can detect our attack with a probability of about 65%, which is already lower than most attacks but still unacceptable. This demonstrates the necessity of the DCT-based smoothing operations.
Use DWT, FFT and DCT. According to Table <ref>, we set λ to 0.7. As shown in Figure <ref>, after using the three transforms, the poisoned image is stealthy in both domains (both the PSNR and SSIM have been improved). Thus, the three frequency domain transforms are adopted in the proposed DUBA to achieve dual stealth.
§.§ Summary of Experiments
After the experimental comparison, we can conclude that DUBA is substantially more stealthy in the spatial domain than other attacks, and it is the only attack that is simultaneously invisible in both the spatial and frequency domains. Furthermore, DUBA achieves remarkable ASRs that outperform other methods in most cases. It is validated that the five advanced defenses fail to detect DUBA, showing that DUBA is more robust than the other methods. Although there are specific cases, such as the robustness to STRIP on Fer2013, where DUBA performs slightly worse than WaNet or FIBA, in most cases, especially in terms of invisibility in the frequency domain, DUBA significantly outperforms all the compared methods. Thus we conclude that the proposed DUBA is effective and outperforms the state-of-the-art backdoor attacks.
§ CONCLUSION
In this paper, we showed that most backdoor attacks are visible in the frequency domain. In order to completely break the defense proposed from the frequency perspective while remaining stealthy in the spatial domain, we proposed a DUal stealthy BAckdoor called DUBA that is invisible in both the spatial and frequency domains. To hide high-frequency backdoor information in both the spatial and frequency domains, we leveraged the benefits of different frequency domain transforms. A novel attack strategy was also devised in order to enhance the efficiency of DUBA. We conducted an extensive experimental evaluation of DUBA. The results corroborate its outstanding performance in terms of attack success rates and attack stealthiness.
|
http://arxiv.org/abs/2307.01445v2
|
20230704024537
|
Distributed fusion filter over lossy wireless sensor networks with the presence of non-Gaussian noise
|
[
"Jiacheng He",
"Bei Peng",
"Zhenyu Feng",
"Xuemei Mao",
"Song Gao",
"Gang Wang"
] |
eess.SP
|
[
"eess.SP"
] |
Distributed fusion filter over lossy wireless sensor networks with the presence of non-Gaussian noise (This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.)
Jiacheng He, Bei Peng, Zhenyu Feng, Xuemei Mao, Song Gao, Gang Wang
This study was founded by the National Natural Science Foundation of China with Grant 51975107 and Sichuan Science and Technology Major Project No.2022ZDZX0039, No.2019ZDZX0020,and Sichuan Science and Technology Program No.2022YFG0343.
August 1, 2023
The information transmission between nodes in wireless sensor networks (WSNs) often suffers from packet loss due to denial-of-service (DoS) attacks, energy limitations, and environmental factors, and the information that is successfully transmitted can also be contaminated by non-Gaussian noise. The presence of these two factors poses a challenge for distributed state estimation (DSE) over WSNs. In this paper, a generalized packet drop model is proposed to describe the packet loss phenomenon caused by DoS attacks and other factors. Moreover, a modified maximum correntropy Kalman filter is given and extended to a distributed form (DM-MCKF). In addition, a distributed modified maximum correntropy Kalman filter incorporating the generalized data packet drop model (DM-MCKF-DPD) is provided to implement DSE in the presence of both non-Gaussian noise pollution and packet drops. A sufficient condition that ensures the convergence of the fixed-point iterative process of the DM-MCKF-DPD algorithm is presented, and the computational complexity of the DM-MCKF-DPD algorithm is analyzed. Finally, the effectiveness and feasibility of the proposed algorithms are verified by simulations.
distributed state estimation, wireless sensor networks, maximum correntropy criterion, data packet drops.
§ INTRODUCTION
Recently, state estimation in wireless sensor networks (WSNs) has been a subject of considerable interest and has been applied widely, for example in static or dynamic target positioning <cit.> and tracking <cit.>, indoor positioning <cit.>, vehicle navigation <cit.>, and others <cit.>. The most popular state estimation methods over WSNs are centralized and distributed fusion architectures. Distributed state estimation (DSE) strategies, with their strong robustness and low computational complexity, have become the dominant state estimation approach for WSNs.
In distributed estimation methods, each node functions as both a sensor and a fusion center, gathering data from itself and its neighbors to produce a local estimate. However, the information interaction with neighboring nodes often suffers from packet loss. The causes of packet loss usually include the following: 1) the sending of sensor information is usually randomly activated <cit.> or event-based <cit.> due to long-term latent demand or energy limitations <cit.>, and such intermittent transmission of information can be considered a special case of packet loss; 2) the failure of information transmission due to the performance of the sensor itself or environmental factors <cit.>; 3) DoS attacks in WSNs <cit.>. These factors inevitably cause packet loss in the transmission of information between neighbors. The packet loss phenomenon is a potential source of instability, and it can further increase the differences between node estimates in WSNs, which poses difficulties for the DSE of the network. Therefore, DSE that takes packet loss phenomena into account has gained significant attention.
To further reduce the difference in estimates between nodes due to packet loss, numerous studies have been carried out, and the consensus-based <cit.> method and the diffusion-based <cit.> method are two often employed techniques. A distributed Kalman filter (KF) <cit.> based on a diffusion approach has been developed with intermittent measurements, and the diffusion method has been extended to nonlinear systems <cit.>. Based on the above algorithms, a distributed diffusion unscented KF taking into account the unknown correlations in WSNs was derived <cit.>. Moreover, consensus-based DSE over WSNs has also gained significant attention, and distributed Kalman consensus filters <cit.> have been developed over WSNs with intermittent observations for linear systems. For the DSE of nonlinear systems, the weighted average consensus-based cubature information filter <cit.> was derived in the presence of measurement loss. In addition, from a certain point of view, stochastic sensor <cit.> or link <cit.> activation, or DoS attacks <cit.>, can be considered as special cases of packet loss over WSNs. The study of different distributed fusion strategies constitutes an important branch of research on lossy WSNs. On top of that, multiplicative noises <cit.> and correlated additive noises <cit.> have also been considered, and these algorithms use random parameter measurement matrices.
The above-mentioned studies assume that the measurements successfully passed between nodes contain Gaussian noise, and they perform well in Gaussian environments. However, non-Gaussian noise is common in practical applications <cit.>, so measurements that are successfully delivered are often contaminated by non-Gaussian noise, and this situation inevitably degrades the performance of algorithms built under the Gaussian assumption. Several studies have focused on state estimation in WSNs under non-Gaussian noise. Distributed particle filters (DPFs) <cit.> have been developed for nonlinear systems with non-Gaussian noise; however, the high computational complexity of the DPF has always been the main factor limiting its wide application. A new idea, which introduces concepts from information theoretic learning <cit.> as new cost functions, has been widely employed to create new distributed Kalman filter algorithms in recent years. The distributed maximum correntropy KF (DMCKF) algorithm <cit.> and its variants <cit.> were developed to reduce the influence of non-Gaussian noise. To further enhance the performance of DSE under non-Gaussian conditions, the distributed minimum error entropy KF (DMEEKF) algorithm was developed in <cit.>. Due to the double summation in the minimum error entropy criterion, the computational complexity of the DMEEKF algorithm is higher than that of the DMCKF algorithm, which is a burden for sensor nodes with limited energy. Obviously, it is still an open and nontrivial task to investigate DSE issues over lossy WSNs with measurements contaminated by non-Gaussian noise, which also constitutes the main motivation for this article.
In this paper, we first propose a generalized packet loss model to describe the process of packet loss due to energy constraints and DoS attacks. A modified maximum correntropy Kalman filter (M-MCKF) is proposed on top of the analysis of the traditional KF algorithm. Moreover, the M-MCKF is extended to a distributed form, and one can get the distributed M-MCKF (DM-MCKF) algorithm. In addition, a distributed M-MCKF algorithm incorporating the proposed generalized packet loss model is called the DM-MCKF-DPD algorithm. Moreover, a sufficient condition is provided that can ensure the convergence of the DM-MCKF-DPD algorithm's fixed-point iterations.
The rest of this work is structured as follows. In Section <ref>, the problem formulation is briefly reviewed. In Section <ref>, the derivation, computational complexity, and convergence issue of the DM-MCKF-DPD algorithm are presented. The simulation examples are provided in Section <ref>, and, finally, the conclusion is given in Section <ref>.
§ PROBLEM FORMULATION
§.§ Distributed Kalman filter
Consider a WSN with N sensors, whose states and observations are given by
x_k = A_k x_k - 1 + q_k - 1,
y_k^i = C^i x_k + v_k^i,   i = 1,2, ⋯ ,N,  i ∈Ω ,
where x_k - 1∈ℝ^n × 1 represents the state of the dynamical system at instant k - 1, Ω represents the set of all sensors in the network, y_k^i ∈ℝ^m_i× 1 (m_i = rank (C^i), ∀ i ∈Ω ) denotes the observation obtained by node i at instant k, A_k and C^i represent the state transition matrix and observation matrix of the system, respectively, and q_k - 1 and v_k^i are the mutually uncorrelated process noise and measurement noise, respectively, with zero means and covariances Q_k - 1 and R_k^i. We then define Ω _i⊂Ω as the set of all neighboring sensors of the ith sensor. Therefore, the set ℧ _i = Ω _i∪{ i} contains node i itself and all its neighboring nodes. The main steps of the DKF include state prediction and state update.
1) State prediction: the state x̂_k - 1|k - 1^i and covariance P_k - 1|k - 1^i at time k - 1 are used to predict the state x̂_k|k - 1^i and covariance P_k|k - 1^i as
x̂_k|k - 1^i = A_kx̂_k - 1|k - 1^i
and
P_k|k - 1^i = A_kP_k - 1|k - 1^iA_k^T + Q_k - 1,
where ( ·)^T is the transpose operation.
2) state update: update the state x̂_k|k^i and covariance P_k|k^i using measurement information y_k^℧ _i = vec{y_k^j} _j ∈℧ _i and gain K_k^i, and the specific steps are
K_k^i = P_k|k - 1^i( C^℧ _i)^T[ C^℧ _iP_k|k - 1^i( C^℧ _i)^T + R_k^℧ _i]^ - 1,
x̂_k|k^i = x̂_k|k - 1^i + K_k^i( y_k^℧ _i - C^℧ _ix̂_k|k - 1^i),
and
P_k|k^i = ( I - K_k^iC^℧ _i)P_k|k - 1^i,
where C^℧ _i = col{C^j} _j ∈℧ _i, and col{·} is the columnization operation; R_k^℧ _i = diag{R_k^j} _j ∈℧ _i, diag{·} is the block diagonalization operation.
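The prediction and update steps above can be condensed into the following NumPy sketch for node i, where C_nbhd, R_nbhd, and y_nbhd stand for the stacked C^℧_i, R_k^℧_i, and y_k^℧_i (the function name is ours):

```python
import numpy as np

def dkf_step(x_prev, P_prev, A, Q, C_nbhd, R_nbhd, y_nbhd):
    # prediction
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + Q
    # update with the stacked neighborhood measurement
    S = C_nbhd @ P_pred @ C_nbhd.T + R_nbhd
    K = P_pred @ C_nbhd.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (y_nbhd - C_nbhd @ x_pred)
    P_upd = (np.eye(len(x_pred)) - K @ C_nbhd) @ P_pred
    return x_upd, P_upd
```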
From the above algorithm flow, the distributed Kalman filter operates by collecting the measurements of nearby nodes together with its own observations. However, the measurements transmitted from neighboring nodes often suffer from packet loss in wireless sensor networks due to intermittent sensor failures, stochastic communication link activation, and DoS attacks.
§.§ Generalized packet drop model
The state-space model of the DKF over the nodes of WSNs under conditions of data packet drops is the first problem that must be solved to estimate the state of a target node. The following model can be used to explain how node i receives the measurements from its neighbouring node j at instant k:
s_k^i,j≜f_k^i,j(y_k^j),i ∈Ω ,j ∈℧ _i,
where s_k^i,j is the data obtained by node i from node j. f_k^i,j( ·) is an arbitrary function, and it represents the mapping relationship between s_k^i,j and y_k^j. When f_k^i,j( ·) is a nonlinear function, the function can be regarded as a nonlinear attack model <cit.> leading to packet drop. For the linear case, f_k^i,j( ·) is a linear transformation of y_k^j. The proposed linear model is defined as
s_k^i,j≜T_k^i,jy_k^j,
where T_k^i,j∈ℝ^m_j×m_j is an arbitrary matrix. (<ref>) can be used to represent linear attack strategies <cit.>, and it can also be regarded as a stochastic sensor or link activation model <cit.>, and event-based model <cit.>.
When a DoS attack occurs, the measurements may not be delivered to the local fusion center, and (<ref>) can be expressed as
T_k^i,j = γ _k^i,jI_m_j,
where the flag term γ _k^i,j = 1 or γ _k^i,j = 0 indicates whether the measurement is delivered or blocked by the DoS attack. More generally, the flag term γ _k^i,j indicates the successful or unsuccessful transfer of information between nodes i and j at time k. Similarly, due to energy limitations, random sensor link activation and event-triggered activation can be modeled as (<ref>), where the flag term γ _k^i,j = 0 or γ _k^i,j = 1 represents an inactive or active communication link, respectively.
For this type of stochastic intermittent transmission of the measurement y_k^j in lossy WSNs, it is assumed that γ _k^i,j is independent and identically distributed with probability
P{γ _k^i;j = 1} = p_k^i;j > 0,∀ k ⩾ 0,
so that E[γ _k^i;j] = p_k^i;j and (μ _k^i;j)^2 = E [(γ _k^i;j - p_k^i;j)^2] = p_k^i;j - (p_k^i;j)^2.
It is assumed that the variables γ _k^i,j and γ _k^j,i are mutually independent (i.e., for all i ≠ j), but the condition p_k^i,j = p_k^j,i is allowed, and γ _k^i,j is independent of the process noise, measurement noise, and initial state of the dynamical system. According to (<ref>), if sensor i receives the observations from its neighboring node j at instant k, then γ _k^i,j = 1, and γ _k^i,j = 0 when all the components of y_k^j are lost. We assume that node i can receive all observations from itself at any time, so γ _k^i,i≡ 1.
According to the above discussion, the state-space model of the ith sensor in the case of data packet drops can be written as
x_k = Ax_k - 1 + q_k - 1,
y_k^℧ _i = C^℧ _ix_k^i + v_k^℧ _i,
and
s_k^℧ _i = D_γ ;k^℧ _iy_k^℧ _i,
where v_k^℧ _i = vec{v_k^j} _j ∈℧ _i, s_k^℧ _i = vec{s_k^j} _j ∈℧ _i, and D_γ ;k^℧ _i = diag{γ _k^i,jI_m_j} _j ∈℧ _i; I_m_j(j ∈℧ _i) represents an identity matrix of dimension m_j×m_j.
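For simulation purposes, the reception matrix D_γ ;k^℧ _i can be generated as below, where each neighbor's measurement block is received with probability p and node i always keeps its own block (function and variable names are illustrative):

```python
import numpy as np

def reception_matrix(block_sizes, p, self_index=0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    flags = [1 if j == self_index else int(rng.random() < p)
             for j in range(len(block_sizes))]
    # build the block-diagonal matrix diag{gamma_j * I_{m_j}}
    return np.diag(np.concatenate([g * np.ones(m) for g, m in zip(flags, block_sizes)]))

D_gamma = reception_matrix([2, 2, 2], p=0.8)   # three nodes, 2-dim measurements each
```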
§.§ Existing problem
All the observations obtained by node i are contained in s_k^℧ _i. Due to packet loss, some of the measurements of neighboring nodes are not delivered to s_k^℧ _i. Moreover, the existing literature usually assumes that the measurements successfully delivered to s_k^℧ _i are disturbed only by Gaussian noise. However, non-Gaussian noise is very common in WSNs, which means that s_k^℧ _i is not only affected by communication packet loss but also contaminated by non-Gaussian observation noise. In this paper, the measurement information s_k^℧ _i, which is affected by both communication packet loss and non-Gaussian noise, is used to perform DSE over WSNs. The generalized packet drop model is incorporated into the proposed distributed fusion filter, and a solution is provided that enables distributed state estimation under the coexistence of non-Gaussian noise and packet loss.
§ PROPOSED DM-MCKF-DPD ALGORITHM
§.§ Correntropy
The concept of correntropy, first proposed by Principe et al.<cit.>, is a very practical method for evaluating the similarity between random variables X,Y ∈ℝ with the same dimensions. Here, correntropy is defined as
V( X,Y) = E[ κ( X,Y)] = ∫κ( x,y) dF_XY( x,y),
where E[.] is the expectation operator, V( ·) is the information potential, F_XY( x,y) denotes the probability distribution function (PDF) with on X and Y, and κ( ·,·) is the shift-invariant Mercer kernel. Here, we employ the Gaussian kernel, which is given as
κ( x,y) = G_σ( e ) = exp( - e^2/2σ ^2),
where e = x - y represents the error between elements x and y, and σ > 0 represents the kernelwidth (or kernel size) of the Gaussian kernel function.
However, only a limited amount of data related to the variables X and Y can be obtained in realistic scenarios, and the PDF F_XY( x,y) is usually unknown. Under these conditions, a sample estimator can be utilized to calculate the correntropy as follows:
V̂( X,Y) = 1/L∑_l = 1^L G_σ( e^l),
where
e^l = x^l - y^l,( x^l,y^l∈{x^l,y^l}_l = 1^L),
and L samples are employed to define F _XY(x,y). Compared with other similarity measurement schemes, such as the mean-square error (MSE) criterion, correntropy contains all even-order moments and is therefore useful for nonlinear and signal processing applications in non-Gaussian noise environments.
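The sample estimator above translates directly into code; the following sketch assumes one-dimensional samples and a user-chosen kernelwidth σ:

```python
import numpy as np

def gaussian_kernel(e, sigma):
    return np.exp(-e**2 / (2.0 * sigma**2))

def sample_correntropy(x, y, sigma=2.0):
    """Sample estimator V̂(X,Y) = (1/L) * sum_l G_sigma(x_l - y_l)."""
    e = np.asarray(x) - np.asarray(y)
    return np.mean(gaussian_kernel(e, sigma))
```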
§.§ The adjustment mechanisms of the KF and DKF
According to the matrix inversion lemma <cit.>, (<ref>) can be rewritten as
P_k|k^i = [ ( P_k|k - 1^i)^ - 1 + ( C^℧ _i)^T( R_k^℧ _i)^ - 1C^℧ _i]^ - 1.
Combining (<ref>) and (<ref>), one can get
K_k^iR_k^℧ _i = P_k|k^i( C^℧ _i)^T.
Substitute (<ref>) and (<ref>) into (<ref>) yields
x̂_k|k^i = x̂_k|k - 1^i + K_k^i( y_k^℧ _i - C^℧ _ix̂_k|k - 1^i)
= x̂_k|k - 1^i - K_k^iC^℧ _ix̂_k|k - 1^i + K_k^iy_k^℧ _i
= A_Px̂_k|k - 1^i + A_Ry_k^℧ _i,
with
A_P = M_k( P_k|k - 1^i)^ - 1,   A_R = M_k( C^℧ _i)^T( R_k^℧ _i)^ - 1,
and M_k = [(P_k|k - 1^i)^ - 1 + (C^℧ _i)^T(R_k^℧ _i)^ - 1C^℧ _i]^ - 1.
From (<ref>), the updated state x̂_k|k^i can be regarded as a linear combination of the predicted state x̂_k|k - 1^i and the measurement y_k^℧ _i. The matrices A_P and A_R adjust the weights of x̂_k|k - 1^i and y_k^℧ _i according to the statistical properties of the process noise and the measurement noise. Specifically, P_k|k - 1^i and R_k^℧ _i are central to the adjustment of the weights, and P_k|k - 1^i is determined by Q_k - 1. A complete analysis of how the weights A_P and A_R adapt to the noise statistics is very complex, so consider the special case in which both the observation vector and the state vector are one-dimensional, which reduces to the scalar KF. As the variance R_k^℧ _i of the measurement noise becomes larger, the scalar A_R becomes smaller, which reduces the proportion of the measurement y_k^℧ _i in the updated state x̂_k|k^i; a smaller variance of the measurement noise increases the corresponding proportion. The effect of the variance of the process noise on x̂_k|k^i is similar to the analysis above. The DKF algorithm adjusts the corresponding weights according to the process and measurement noise in a similar way to the KF algorithm.
Using a similar approach, the state update method of MCKF algorithm <cit.> can be expressed as
x̂_k|k^i = A̅_Px̂_k|k - 1^i + A̅_Ry_k,
A̅_P = [ ( P_k|k - 1^i)^ - 1 + C^TR̅_k^ - 1C]^ - 1( P_k|k - 1^i)^ - 1,
A̅_R = [ ( P_k|k - 1^i)^ - 1 + C^TR̅_k^ - 1C]^ - 1C^TR̅_k^ - 1,
where R̅_k = B_r;kC_y;k^ - 1B_r;k^T and C_y;k = diag[ G_σ( e_n + 1;k), ⋯G_σ( e_n + m;k)], and B_r;k can be obtained using the Cholesky decomposition of the variance R_I;k with impulse noise. R_I;k denotes the variance of the sequence that contains the impulse noise. It is easy to get
[R_I;k]_jj > [R_N;k]_jj,
where R_N;k denotes the variance of the sequence that does not contain the impulse noise, and [ · ]_jj denotes the element in the jth row and jth column of a matrix. For the case in which both the state vector and the observation vector are one-dimensional, when the error is caused by impulse noise the weight adjustment factor C_y;k is much less than 1, which significantly reduces the weight of y_k in the updated state x̂_k|k^i, and the effect of impulse noise is thus mitigated. This adjustment mechanism is similar to that of the conventional KF when the variance of the observation noise is large. When the noise is not impulse noise, the weight adjustment factor C_y;k^ - 1 should ideally equal 1 so that the Gaussian kernel function restores the weight A̅_R of the measurement information in x̂_k|k^i. However, since the kernelwidth cannot be set to infinity without degrading the performance of the algorithm under non-Gaussian noise, C_y;k^ - 1 > 1 always holds. Furthermore, the non-Gaussian character of the measurement noise allows (<ref>) to hold.
The main reason for this situation is the limited downward adjustment of the Gaussian kernel function with a fixed kernelwidth. In other words, the covariance, which reflects the second-order statistical properties of the non-Gaussian noise, is not sufficiently flexible to adjust the weights in the non-Gaussian case. This also provides an idea for improving the MCKF algorithm, i.e., using R_N;k instead of R_I;k to improve the flexibility of the weight adjustment. The detailed implementation of the proposed algorithms is shown in Section <ref>.
To reduce the negative impact of inappropriate covariance R_N;k, it is also beneficial to increase the value of the kernelwidth appropriately so that the value of C_y;k^ - 1 is close to 1 when the errors are not caused by impulse noise, thus giving a more reasonable weight to A̅_R̅. This strategy also implies that the choice of kernelwidth for the M-MCKF algorithm is greater than that for the MCKF algorithm.
§.§ Algorithm derivation
First, the measurement noise with outliers is decomposed into Gaussian components with different means, variances, and proportions. The Gaussian mixture model (GMM) <cit.>, in this paper, is employed to decompose the measurement noise. The outlier-contaminated measurement noise is decomposed into
p( v_k^i) = ∑_o = 1^O β ^i;og( v_k^i|μ_k^i;o,R_k^i;o)
with
g( v_k^i|μ_k^i;o,R_k^i;o) = exp{ - (1/2)[ v_k^i - μ_k^i;o]^T( R_k^i;o)^ - 1[ v_k^i - μ_k^i;o]} / [ ( 2π)^m_i/2| R_k^i;o|^1/2],
where p( v_k^i) is the probability density function (PDF) of the measurement noise of the ith node, g(v_k^i|μ_k^i;o,R_k^i;o) denotes the Gaussian distribution with mean μ_k^i;o and variance R_k^i;o, and O is the number of the Gaussian component; β ^i;o is the proportion of the oth component and | ·| denote the determinant of the matrix. By using GMM, outliers can be represented as a Gaussian distribution with a large variance R_k^i;l, and another Gaussian component with a relatively small variance R_k^i;s. The variance R_k^i;s is employed to replace the variances in the MCKF and DMCKF algorithms, and one can obtain
R_k^℧ _i = diag[ R_k^1;s,R_k^2;s, ⋯R_k^m_i;s].
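As a hypothetical illustration, the small-variance component R_k^i;s can be extracted from simulated outlier-contaminated noise with scikit-learn's GaussianMixture; the mixture parameters below are chosen only for demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# mixed-Gaussian noise: 90% small-variance samples, 10% large-variance outliers
noise = np.where(rng.random(5000) < 0.9,
                 rng.normal(0.0, 0.1, 5000),
                 rng.normal(0.0, 8.0, 5000)).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(noise)
small = np.argmin(gmm.covariances_.ravel())
R_small = gmm.covariances_[small].item()   # plays the role of R_k^{i;s}
print(f"weights={gmm.weights_}, small-variance component variance={R_small:.4f}")
```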
In the linear-regression-based KF solution <cit.>, the measurement equation and filter update are reformulated as a regression problem. Denote by x_k the true state of the target, so that the state prediction error can be written as ε_k|k - 1^i = x_k - x̂_k|k - 1^i. With s_k^℧ _i and x̂_k|k - 1^i, we obtain the regression problem
[ [ x̂_k|k - 1^i; s_k^℧ _i ]] = [ [ I_n; D_γ ;k^℧ _iC^℧ _i ]]x_k^i + g_k^i,
where I_n represents an n-dimensional identity matrix and g_k^i represents the augmented noise vector containing the state and measurement errors of the dynamical system, which is defined as
g_k^i = [ [ - ε_k|k - 1^i; D_γ ;k^℧ _iv_k^℧ _i ]].
Assuming that the covariance matrix of the augmented vector E[g_k^i(g_k^i)^T] is positive definite yields the following:
E[ g_k^i( g_k^i)^T] = [ [ P_k|k - 1^i 0; 0 D_p;k^℧ _iR_k^℧ _iD_p;k^℧ _i ]]
= [ [ B_P( k|k - 1)^i( B_P( k|k - 1)^i)^T 0; 0 B_R;k^i( B_R;k^i)^T ]]
= B_k^i( B_k^i)^T,
with
E[D_γ ;k^℧ _i] = diag{ p_k^i;jI_m_j} _j ∈℧ _i = D_p;k^℧ _i,
where, B_k^i and (B_k^i)^T can be produced by the Cholesky decomposition of E[g_k^i(g_k^i)^T], and matrix D_p;k^℧ _i represents the expectation of D_γ ;k^℧ _i. Left multiplying each term of (<ref>) by (B_k^i)^ - 1 yields
d_k^i = W_k^ix_k^i + e_k^i,
where
d_k^i = ( B_k^i)^ - 1[ [ x̂_k|k - 1^i; s_k^℧ _i ]],   W_k^i = ( B_k^i)^ - 1[ [ I_n; D_γ ;k^℧ _iC^℧ _i ]],   e_k^i = ( B_k^i)^ - 1g_k^i.
The following cost function based on the MC criterion is suggested by the aforementioned derivation:
J( x_k^i) = 1/H∑_h = 1^H G _σ( d_k^i;h - w_k^i;hx_k^i),
where d_k^i;h represents the hth element of d_k^i, w_k^i;h represents the hth row of W_k^i, and H = n + m_i;a, with m_i;a = ∑_j ∈℧ _i m_j for i ∈Ω, denotes the number of elements of d_k^i. Then, the objective function of the optimal state estimation x_k^i based on the MC criterion is
x̂_k^i = arg max_x_k^i J( x_k^i) = arg max_x_k^i∑_h = 1^H G_σ( e_k^i;h),
with
e_k^i;h = d_k^i;h - w_k^i;hx_k^i,
where h = 1,2, ⋯, H indexes the elements of e_k^i. Finally, the optimal state estimate x̂_k^i can be achieved by maximizing the information potential V( ·). To that end, we set the gradient of the cost function J(x_k^i) with respect to x_k^i to zero, and the optimal state x_k^i is readily obtained as:
x_k^i = [ ∑_h = 1^H G_σ( e_k^i;h)( w_k^i;h)^Tw_k^i;h]^ - 1
×∑_h = 1^H G_σ( e_k^i;h)( w_k^i;h)^Td_k^i;h.
Since e_k^i;h = d_k^i;h - w_k^i;hx_k^i is a function of x_k^i, the optimal state estimation in (<ref>) is a fixed-point iterative equation of x_k^i, and can be expressed in the following form:
x_k^i = f( x_k^i),
where
f( x_k^i) = [ ∑_h = 1^H G_σ( d_k^i;h - w_k^i;hx_k^i)( w_k^i;h)^Tw_k^i;h]^ - 1
×∑_h = 1^H G_σ( d_k^i;h - w_k^i;hx_k^i)( w_k^i;h)^Td_k^i;h.
According to the above derivation, the iterative equation in (<ref>) can be written as follows:
( x̂_k^i)_t + 1 = f[ ( x_k^i)_t].
Here, (x̂_k^i)_t + 1 represents the result of x_k^i at the fixed-point iteration t+1, and (<ref>) can be further written in the form of matrix multiplication:
( x_k^i)_t + 1 = [ ( W_k^i)^T( Λ_k^i)_tW_k^i]^ - 1( W_k^i)^T( Λ_k^i)_td_k^i,
where
( Λ_k^i )_t = [ [ ( Λ_x;k^i )_t 0; 0 ( Λ_y;k^i )_t ] ],
(Λ_x;k^i)_t = diag[ G_σ[(e_k^i;1)_t], ⋯,G_σ[(e_k^i;n)_t] ],
(Λ_y;k^i)_t = diag[ G_σ[(e_k^i;n + 1)_t], ⋯,G_σ[(e_k^i;n + m_i;a)_t] ].
According to (<ref>), (<ref>), and (<ref>), we obtain the following:
[(W_k^i)^T(Λ_k^i)_tW_k^i]^ - 1 =
{[(B_P( k|k - 1)^i)^ - 1]^T(Λ_x;k^i)_t(B_P( k|k - 1)^i)^ - 1 +
(D_γ ;k^℧ _iC^℧ _i)^T[(B_R;k^i)^ - 1]^T(Λ_y;k^i)_t(B_R;k^i)^ - 1(D_γ ;k^℧ _iC^℧ _i)
}^ - 1 .
We then apply the matrix inversion lemma <cit.> with
G = [ ( B_P( k|k - 1)^i)^ - 1]^T( Λ_x;k^i)_t( B_P( k|k - 1)^i)^ - 1,
B = ( D_γ ;k^℧ _iC^℧ _i)^T,
C = [ ( B_R;k^i)^ - 1]^T( Λ_y;k^i)_t( B_R;k^i)^ - 1,
D = D_γ ;k^℧ _iC^℧ _i,
and obtain the expression (<ref>) given below.
According to (<ref>), (<ref>), and (<ref>), we obtain the following:
(W_k^i)^T(Λ_k^i)_td_k^i = [(B_P( k|k - 1)^i)^ - 1]^T(Λ_x;k^i)_t(B_P( k|k - 1)^i)^ - 1
×x̂_k|k - 1^i + (D_γ ;k^℧ _iC^℧ _i)^T[(B_R;k^i)^ - 1]^T(Λ_y;k^i)_t(B_R;k^i)^ - 1s_k^℧ _i.
Substituting formulas (<ref>) and (<ref>) into (<ref>) yields
(x̂_k|k^i)_t + 1 = x̂_k|k - 1^i + (K̅_k^i)_t(s_k^℧ _i - D_γ ;k^℧ _iC^℧ _ix̂_k|k - 1^i),
where
(K̅_k^i)_t = (P̅_k|k - 1^i)_t(D_γ;k^℧_iC^℧_i)^T ×
[ D_γ;k^℧_iC^℧_i(P̅_k|k - 1^i)_t(D_γ;k^℧_iC^℧_i)^T + (R̅_k^i)_t ]^ - 1,
(P̅_k|k - 1^i)_t = B_P( k|k - 1 )^i(Λ_x;k^i)_t^ - 1(B_P( k|k - 1 )^i)^T,
(R̅_k^i)_t = B_R;k^i(Λ_y;k^i)_t^ - 1(B_R;k^i)^T.
The following updates are made iteratively to the posterior covariance:
P_k|k^i = (I - K̃_k^iD_γ ;k^℧ _iC^℧ _i)P_k|k - 1^i(I - K̃_k^iD_γ ;k^℧ _iC^℧ _i)^T
+ K̃_k^iR_k^℧ _i(K̃_k^i)^T.
Equation (<ref>) is the optimal solution of x_k^i, and it depends on the prior estimate x̂_k|k - 1^i and available observation information s_k^℧ _i. The observation information obtainable by the ith node is determined by the matrix D_γ ;k^℧ _i.
If the noise is not decomposed and the obtained R_I;k is used directly, then the algorithm proposed in this paper will degenerate into the D-MCKF-DPD algorithm.
According to the above derivations, we summarize the steps of the proposed DM-MCKF-DPD algorithm as follows.
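A condensed Python sketch of this update at node i is given below; it builds the regression pair (d_k^i, W_k^i) and runs the fixed-point iteration (<ref>) until the relative change falls below the threshold. The function and variable names are ours, R_scaled stands for D_p;k^℧ _i R_k^℧ _i D_p;k^℧ _i (assumed positive definite), and the posterior covariance update (<ref>) is omitted for brevity.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    return np.exp(-e**2 / (2.0 * sigma**2))

def dm_mckf_dpd_update(x_pred, P_pred, C, R_scaled, D_gamma, s, sigma=3.0,
                       eps=1e-6, max_iter=50):
    """One measurement update at node i under packet drops (sketch only)."""
    n, m = len(x_pred), len(s)
    # B B^T is the covariance of the augmented noise vector g_k^i
    B = np.block([[np.linalg.cholesky(P_pred), np.zeros((n, m))],
                  [np.zeros((m, n)), np.linalg.cholesky(R_scaled)]])
    B_inv = np.linalg.inv(B)
    d = B_inv @ np.concatenate([x_pred, s])
    W = B_inv @ np.vstack([np.eye(n), D_gamma @ C])
    x = x_pred.copy()
    for _ in range(max_iter):
        Lam = np.diag(gaussian_kernel(d - W @ x, sigma))   # kernel weights
        x_new = np.linalg.solve(W.T @ Lam @ W, W.T @ Lam @ d)
        if np.linalg.norm(x_new - x) <= eps * max(np.linalg.norm(x), 1e-12):
            return x_new
        x = x_new
    return x
```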
§.§ Distributed consensus filter
Based on the proposed DM-MCKF-DPD algorithm, a consensus filter algorithm is derived by fusing the updated states of the neighboring nodes at the previous time instant, and it is called the consensus-based DM-MCKF-DPD (C-DM-MCKF-DPD) algorithm. It is assumed that the measurement information y_k^j and the updated state (x̂_k - 1|k - 1^j)_t + 1 of neighbor node j ∈Ω _i are transmitted to node i through the same channel; thus, γ _k - 1^i,j is introduced to represent packet loss in the updated state (x̂_k - 1|k - 1^j)_t + 1. According to the above analysis, the consensus scheme can be expressed as
(x̂_k|k^i)_t + 1 = x̂_k|k - 1^i + (K̅_k^i)_t(s_k^℧ _i - D_γ ;k^℧ _iC^℧ _ix̂_k|k - 1^i)
+ η∑_j ∈Ω _iγ _k - 1^i,j[ (x̂_k|k^j)_t + 1 - (x̂_k|k^i)_t + 1] .
where η⩾ 0 is the consensus coefficient. The consensus scheme, which intuitively improves the estimation performance of node i and reduces the difference between nodes, fuses the measurement information of the neighboring nodes through the Kalman gain (K̅_k^i)_t and fuses the estimation difference between node i and its neighboring node j through the consensus coefficient. In particular, the consensus strategy does not act when η=0.
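The consensus correction can be sketched as follows, where received_flags plays the role of γ_k-1^i,j (names are illustrative):

```python
import numpy as np

def consensus_correction(x_i, neighbor_estimates, received_flags, eta=0.05):
    """Pull node i's estimate toward the neighbor estimates that actually arrived."""
    x = x_i.copy()
    for x_j, got_it in zip(neighbor_estimates, received_flags):
        if got_it:                      # gamma_{k-1}^{i,j} = 1
            x += eta * (x_j - x_i)
    return x
```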
§.§ Computational complexity
The computational complexity of the proposed DM-MCKF-DPD algorithm can be compared with that of the stationary DKF <cit.> in terms of the equations and operations utilized by the two algorithms, which are presented in Table 1.
The stationary DKF algorithm mainly includes (<ref>), (<ref>), (<ref>) and (<ref>) cited in the present paper, and (5) and (6) cited within<cit.>. Therefore, we can evaluate the computational complexity of the stationary DKF as
S_SDKF = 11n^3 + 12m_i;a^2n + 10m_i;an^2 + 4m_i;a^2
- 2m_i;an - n^2 - 2n - m_i;a + 2O(m_i;a^3).
Accordingly, the computational complexity of the DM-MCKF-DPD algorithm can be defined based on an average number of fixed-point iterative algorithm iterations T as
S_SDMCKF = 6Tm_i;a^3 + 6Tn^3 + 16Tm_i;a^2n +
10Tm_i;an^2 + (2 - 3T)m_i;a^2 + 2n^2 + (2 - 3T)m_i;an
+ (6T - 1)m_i;a + (6T - 1)n + TO(n^3) + TO(m_i;a^3).
We can infer from this discussion that the computational complexity of the DM-MCKF-DPD algorithm is moderate compared with that of the stationary DKF, provided that the value of T is small, which is indeed the case, as will be demonstrated later in Section <ref>.
§.§ Convergence issue
It is quite difficult to fully analyse the convergence behaviour of the DM-MCKF-DPD algorithm, which is based on the fixed-point iterative technique. Therefore, we merely provide a sufficient condition that guarantees the convergence of the fixed-point iterative algorithm. A detailed proof is not presented here because the convergence condition is similar to the analysis presented in an earlier work <cit.>, which can be consulted for additional details.
Theorem 1. Assume that β _i > ς _i = √(n)∑_h = 1^H |d_k^i;h| ||(w_k^i;h)^T||_1 / χ _min[ ∑_h = 1^H (w_k^i;h)^Tw_k^i;h ] and σ _i≥max{σ _i^*,σ _i^†}. Here, σ _i^* is the solution of the equation ϕ (σ _i) = β _i, where
ϕ( σ _i) = √(n)∑_h = 1^H |d_k^i;h| ||(w_k^i;h)^T||_1 / χ _min[ ∑_h = 1^H G_σ _i(β _i||w_k^i;h||_1 + |d_k^i;h|)(w_k^i;h)^Tw_k^i;h ],   σ _i∈( 0,∞),
and σ _i^† is the solution of the equation ψ (σ _i) = α _i (0 < α _i < 1), where
ψ( σ _i) = √(n)∑_h = 1^H [ (β _i||w_k^i;h||_1 + |d_k^i;h|) ||w_k^i;h||_1 (β _i||(w_k^i;h)^Tw_k^i;h||_1 + ||(w_k^i;h)^Td_k^i;h||_1) ] / { σ _i^2λ _min[ ∑_h = 1^H G_σ _i(β _i||w_k^i;h||_1 + |d_k^i;h|)(w_k^i;h)^Tw_k^i;h ] }.
Then it holds that ||f(x_k^i)||_1⩽β _i and ||∇ _x_k^if(x_k^i)||_1⩽α _i for all x_k^i ∈{x_k^i ∈ℝ^n:||x_k^i||_1⩽β _i}. Here, the n × n Jacobian matrix of f(x_k^i) is given as follows:
∇ _x_k^if( x_k^i) = [ ∂/∂x_k^i;1f( x_k^i) ⋯∂/∂x_k^i;nf( x_k^i)],
where the terms are defined in (<ref>) below
∂/∂x_k^i;jf( x_k^i) = T_g1/σ _i^2∑_i = 1^L [ e_k^i;hw_k^i;h;jG_σ _i( e_k^i)( w_k^i;h)^Td_k^i;h]
- T_g1/σ _i^2∑_i = 1^L [ e_k^i;hw_k^i;h;jG_σ _i( e_k^i)( w_k^i;h)^Tw_k^i;h]f( x_k^i)
with
T_g = [ ∑_h = 1^H G _σ _i( d_k^i;h - w_k^i;hx_k^i)( w_k^i;h)^Tw_k^i;h]^ - 1,
and w_k^i;h;j is the jth element of the vector w_k^i;h.
According to Theorem 1, we obtain the following conditions:
||f( x_k^i)||_1⩽β _i,   ||∇ _x_k^if( x_k^i)||_1⩽α _i < 1,
if the kernelwidth σ_i is sufficiently large (e.g., greater than max{σ _i^*,σ _i^†}). According to the Banach fixed-point theorem, the DM-MCKF-DPD fixed-point iterative technique will converge to a unique fixed point in the range x_k^i ∈{x_k^i ∈ℝ^n: ||x_k^i||_1⩽β _i} provided that the initial state of the system meets the condition ||(x_k^i)_0||_1 ⩽β _i and σ_i is sufficiently large.
Theorem 1 demonstrates that the kernelwidth of the Gaussian kernel function has an important influence on the convergence of the DM-MCKF-DPD algorithm. Here, reducing the kernelwidth can improve the accuracy of state estimation, but this will also decrease the convergence rate of the algorithm or make it diverge. Conversely, increasing the kernel width will increase the convergence rate of the algorithm, but will often yield poor estimation performance under impulsive noise conditions. In practice, the kernelwidth can be selected by trial and error in accordance with the desired estimation accuracy and convergence rate of the algorithm.
§ SIMULATIONS
In this part, we compare the proposed algorithms with some existing algorithms, where the performance of each algorithm is tuned to its optimum with appropriate parameters. The performance of these algorithms is measured by the mean square deviation (MSD), defined as
MSD = 10log _10|| x_k^i - x̂_k^i||^2.
Several noise models covered in this paper are presented before these simulations are implemented, such as Laplace noise, mixed-Gaussian noise, etc.
* The mixed-Gaussian model <cit.> takes the following form:
v ∼λ𝒩( a_1,μ _1) + ( 1 - λ)𝒩( a_2,μ _2),0 ⩽λ⩽ 1,
where 𝒩( a_1,μ _1) denotes the Gaussian distribution with mean a_1 and variance μ _1, and λ represents the mixture coefficient of two kinds of Gaussian distribution. The mixed-Gaussian distribution can be abbreviated as v ∼ M( λ ,a_1,a_2,μ _1,μ _2).
* The PDF of the Laplace distribution is f( v|μ ,b) = (1/(2b)) exp( - | v - μ| / b) with a location parameter μ and a scale parameter b.
* The characteristic function of the α-stable noise is presented in <cit.>, and the noise that obeys the α-stable distribution is written as v ∼ S( a,b,γ ,ϖ), where parameter a, b, γ, and ϖ are the characteristic factor, symmetry parameter, dispersion parameter, and location parameter.
In this paper, four scenarios are considered. It is assumed that in all scenarios the process noise is Gaussian with q∼𝒩( 0,0.01I_n). The measurement noise in the four scenarios follows 𝒩( 0,0.01I_m_i), M( 0.9,0,0,0.01,64), the Laplace distribution f( v|0,9), and S( 1.2,1.0,0,0.5), respectively.
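For reproducibility, the four measurement-noise models can be sampled as below; note that SciPy's levy_stable uses a (loc, scale) parameterization, and the dispersion and location of S(1.2, 1.0, 0, 0.5) are mapped here to scale 0.5 and location 0 as an assumption.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
N = 2000
gauss = rng.normal(0.0, np.sqrt(0.01), N)                        # scenario 1
outlier = rng.random(N) >= 0.9                                   # 10% outliers
mixed = np.where(outlier, rng.normal(0.0, np.sqrt(64.0), N),
                 rng.normal(0.0, np.sqrt(0.01), N))              # scenario 2: M(0.9,0,0,0.01,64)
laplace = rng.laplace(loc=0.0, scale=9.0, size=N)                # scenario 3: f(v|0,9)
stable = levy_stable.rvs(alpha=1.2, beta=1.0, loc=0.0, scale=0.5,
                         size=N, random_state=1)                 # scenario 4 (scale/loc assumed)
```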
A vehicle tracking model <cit.> is considered to evaluate the effectiveness of the M-MCKF and DM-MCKF algorithms, and the state space model is
x_k^i = [ [ 1 0 Δ T 0; 0 1 0 Δ T; 0 0 1 0; 0 0 0 1 ]]x_k - 1^i + q_k - 1,
and
y_k^i = [ [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1 ]]x_k^i + v_k^i,
where x_k^i = [ [ x_1;k^i x_2;k^i x_3;k^i x_4;k^i ]]^T is the state of the vehicle output by the ith node, x_1;k^i and x_2;k^i denote the position of the vehicle on the x and y axes, and x_3;k^i and x_4;k^i denote the velocity of the vehicle on the x and y axes. Δ T = 0.1 is the time interval, and the covariance matrix of the process noise is Q_k - 1 = [ [ ΔT^2/4 0 ΔT^3/2 0; 0 ΔT^2/4 0 ΔT^3/2; ΔT^3/2 0 ΔT^2 0; 0 ΔT^3/2 0 ΔT^2 ]]. The initial states of x_0, x̂_0|0, and P̂_0|0 are
x_0∼𝒩( 0,I_n),   x̂_0|0∼𝒩( x_0,I_n),   P_0|0 = I_n,
and the threshold is set to ε = 10^ - 6.
§.§ Performance verification of the M-MCKF and DM-MCKF algorithms
In this example, the MSD of the M-MCKF algorithm is compared with that of the KF, MCKF <cit.>, and R-MEEKF <cit.>. The simulation findings are displayed in Fig. <ref> and Table <ref>. Fig. <ref> shows the MSD of the M-MCKF algorithm and its competitors, as well as the parameters of these algorithms; the stable MSDs of the different algorithms in the different scenarios are presented in Table <ref>, where N/A represents cases that are not applicable. From the simulation results, one can observe that the M-MCKF performs best in terms of stable MSD under non-Gaussian noise, and that its performance is comparable to that of the conventional KF algorithm under Gaussian noise. These findings demonstrate the robustness of the M-MCKF in both Gaussian and non-Gaussian noise environments.
The performance of the distributed algorithm based on the proposed M-MCKF algorithm (DM-MCKF) is compared with that of the DKF, D-MCKF <cit.>, and DMEEKF <cit.>. The state estimation performance is demonstrated on a classic WSNs example <cit.> with 20 sensor nodes, whose nodes and connections are illustrated in Fig. <ref>. All simulation conditions are set in the same way as in the simulation above. The simulation results and parameter settings are shown in Fig. <ref> and Table <ref>. Fig. <ref> illustrates the instantaneous MSD of the DM-MCKF algorithm and several competitors in the second scenario for node i=7; Table <ref> presents the steady-state MSD of the DM-MCKF, DMCKF, and DKF at nodes 7, 13, and 16, respectively. From the simulation results, one can observe that 1) the DM-MCKF algorithm performs best with mixed-Gaussian, Laplace, and α-stable noise, and its performance is comparable to that of the DKF with Gaussian noise in terms of steady-state MSD, which demonstrates the robustness of the DM-MCKF algorithm; and 2) for a given algorithm, the performance improves as the number of neighboring nodes increases.
§.§ Performance verification of the proposed algorithms considering packet drop
In addition, the state estimation performance of the DM-MCKF-DPD, DMCKF-DPD, and stationary DKF algorithms <cit.> is verified by applying them to a classic WSNs example <cit.> under data packet drop and impulsive noise conditions. It is assumed that the probability that each node receives information from its neighboring nodes is p_k^i,j = 0.8, and the kernelwidths are shown in Fig. <ref>. The performance of the proposed algorithms is demonstrated for different nodes and different scenarios, and the corresponding results are shown below. Fig. <ref> shows the convergence curves of the MSD in the second scenario for node i=7; Fig. <ref> shows the convergence curves of the MSD in the third scenario for node i=13. From Fig. <ref>, one can observe that 1) the proposed DM-MCKF-DPD and DMCKF-DPD algorithms perform better than the DKF-DPD algorithm with non-Gaussian noise; and 2) the DM-MCKF-DPD algorithm performs better than the DMCKF-DPD algorithm.
In addition, the C-DM-MCKF-DPD algorithm is verified in this part. The kernelwidth is set to 5.0 and the consensus coefficient is set to η=0.05. The convergence curve of the MSD of the 17th node in the second scenario is shown in Fig. <ref>. From the simulation result, one can observe that the consensus scheme ameliorates the impact of communication data loss to some extent.
§.§ Parameters discussion
The influence of σ on the performance of the M-MCKF and DM-MCKF algorithms is studied in this part. The influence of the consensus coefficient η on the performance of the consensus DM-MCKF-DPD algorithm is investigated. The conclusions obtained can be used to guide the choice of the proposed algorithms.
The parameter σ is set as σ = 3.0, 5.0, 8.0, 10.0, 15.0, 20.0, 35.0, 45.0, and the initial states of the system are the same as in (<ref>). The simulation results are presented in Fig. <ref>, Table <ref>, and Table <ref>. Fig. <ref> displays the convergence curves of the M-MCKF and DM-MCKF algorithms with different σ. The steady-state MSD of the M-MCKF algorithm in the four scenarios with different σ is presented in Table <ref>, and the steady-state MSD of the DM-MCKF algorithm in the four scenarios with different σ and different nodes is shown in Table <ref>. From Fig. <ref> and Table <ref>, one can observe that the performance of the M-MCKF algorithm improves with increasing σ under Gaussian measurement noise, and it is optimal for mixed-Gaussian, Laplace, and α-stable noise when σ is around 8.0, 3.0, and 3.0, respectively. From Fig. <ref> and Table <ref>, we can infer that the proposed DM-MCKF algorithm performs best under the above non-Gaussian noises when σ is around 3.0. In addition, it is clear that the larger the number of neighboring nodes, the better the performance of the DM-MCKF algorithm.
The parameter η is set as η = 0.1, 0.2, 0.4, 0.48, the kernelwidths of the DM-MCKF-DPD and C-DM-MCKF-DPD are set to 3, and p_k^i,j = 0.9. In the third scenario, the convergence curves of the MSD of the 17th node are displayed in Fig. <ref>. In addition, the performance surfaces showing the influence of the number of neighboring nodes Ω _i and the consensus coefficient η on the performance of the C-DM-MCKF-DPD algorithm are shown in Fig. <ref>. From the simulation presented in Fig. <ref>, one can observe that the optimal consensus coefficient is around 0.2 for the 17th node in the third scenario. From Fig. <ref>, one can observe that 1) when the number of neighbors of a node Ω _i is small (0 < Ω _i⩽ 4), choosing a larger consensus coefficient (0.2 ⩽η⩽ 0.35) can improve the performance of the consensus algorithm; and 2) when the number of neighbors of a node Ω _i is large (4 < Ω _i⩽ 7), choosing a smaller consensus coefficient (0.1 ⩽η⩽ 0.2) can improve the performance of the consensus algorithm.
§ CONCLUSION
Distributed state estimation over wireless sensor networks in the presence of non-Gaussian noise was considered in this paper. By proposing a generalized packet drop model, the process of packet loss due to DoS attacks and energy limitations was described. A modified maximum correntropy KF algorithm was developed by analysing the advantages and disadvantages of existing algorithms, and it was extended to the distributed M-MCKF algorithm. In addition, the distributed modified maximum correntropy Kalman filter incorporating the proposed generalized packet drop model was developed. The computational complexity of the DM-MCKF-DPD algorithm was demonstrated to be moderate compared to that of the conventional stationary DKF, and a sufficient condition to ensure the convergence of the fixed-point iterative algorithm was presented. Finally, simulations conducted with a 20-node WSN demonstrated that the proposed DM-MCKF and DM-MCKF-DPD algorithms perform better than some existing algorithms. As future work, we intend to investigate the state estimation performance of a distributed Kalman filter based on the MEE criterion for WSNs under data packet drop conditions, to further improve the estimation performance with data packet drops and non-Gaussian noise.
Jiacheng He received the B.S. degree in mechanical engineering from University of Electronic Science and Technology of China, Chengdu, China, in 2020. He is currently pursuing a Ph.D. degree in the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China. His current research interests include information-theoretic learning, signal processing, and target tracking.
Bei Peng received the B.S. degree in mechanical engineering from Beihang University, Beijing, China, in 1999, and the M.S. and Ph.D. degrees in mechanical engineering from Northwestern University, Evanston, IL, USA, in 2003 and 2008, respectively. He is currently a Full Professor of Mechanical Engineering with the University of Electronic Science and Technology of China, Chengdu, China. He holds 30 authorized patents. He has served as a PI or a CoPI for more than ten research projects, including the National Science Foundation of China. His research interests mainly include intelligent manufacturing systems, robotics, and its applications.
Zhenyu Feng was born in Anhui, China. He received the B.E. and M.A.Sc. degrees from the University of Electronic Science and Technology of China, Chengdu, China, in 2014 and 2018, respectively. He is currently working toward the Ph.D. degree in the School of Mechanical and Electrical Engineering at the University of Electronic Science and Technology of China. He has over five years of industrial experience in designing and implementing applications in multi-agent sensor network systems. His current research interests lie in the fields of multi-agent systems, underwater unmanned vehicle swarm intelligence, and distributed communication technology.
Xuemei Mao received the B.S. degree in mechanical design, manufacturing, and automation from Xidian University, Xi'an, China, in 2020. She is currently pursuing the Ph.D. degree in mechanical engineering with the School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China. Her current research interests include signal processing, adaptive filtering, and target tracking.
Song Gao received the B.S. degree in mechanical design, manufacturing, and automation from the University of Electronic Science and Technology of China (UESTC) in 2020. He is currently pursuing the Ph.D. degree in mechanical engineering with the School of Mechanical and Electrical Engineering of UESTC, Chengdu, China. His current research interests include multi-agent systems, consensus control, and signal processing.
Gang Wang received the B.E. degree in Communication Engineering and the Ph.D. degree in Biomedical Engineering from University of Electronic Science and Technology of China, Chengdu, China, in 1999 and 2008, respectively. In 2009, he joined the School of Information and Communication Engineering, University of Electronic Science and Technology of China, China, where he is currently an Associate Professor. His current research interests include signal processing and intelligent systems.
|
http://arxiv.org/abs/2307.02387v1
|
20230705155514
|
Puiseux asymptotic expansions for convection-dominated transport problems in thin graph-like networks: strong boundary interactions
|
[
"Taras Mel'nyk",
"Christian Rohde"
] |
math.AP
|
[
"math.AP",
"35K20, 35R02, 35B40, 35B25, 35B45, 35K57, 35Q49"
] |
Puiseux asymptotic expansions for transport problems in thin networks
Puiseux asymptotic expansions for convection-dominated transport problems in thin graph-like networks:
strong boundary interactions
Taras Mel'nyk^♮, ♭ & Christian Rohde^♮
^♮ Institute of Applied Analysis and Numerical Simulation,
Faculty of Mathematics and Physics, University of Stuttgart
Pfaffenwaldring 57, 70569 Stuttgart, Germany
^♭ Department of Mathematical Physics, Faculty of Mathematics and Mechanics
Taras Shevchenko National University of Kyiv
Volodymyrska str. 64, 01601 Kyiv, Ukraine
[email protected]
^♮ Institute of Applied Analysis and Numerical Simulation,
Faculty of Mathematics and Physics, University of Stuttgart
Pfaffenwaldring 57, 70569 Stuttgart, Germany
[email protected]
This article completes the study of the influence of the intensity parameter α in the boundary condition
ε∂_ν_ε u_ε - u_ε V_ε·ν_ε = ε^αφ_ε
given on the boundary of a thin three-dimensional graph-like network consisting of thin cylinders that are interconnected by small domains (nodes) with diameters of order 𝒪(ε). Inside of the thin network a time-dependent convection-diffusion equation with high Péclet number of order 𝒪(ε^-1) is considered. The novelty of this article is the case of α <1, which indicates a strong intensity of physical processes on the boundary, described by the inhomogeneity φ_ε (the cases α =1 and α >1 were previously studied by the same authors).
A complete Puiseux asymptotic expansion is constructed for the solution u_ε as ε→ 0, i.e., when the diffusion coefficients are eliminated and the thin network shrinks into a graph. Furthermore, the corresponding uniform pointwise and energy estimates are proved, which provide an approximation of the solution with a given accuracy in terms of the parameter ε.
August 1, 2023
==================
§ INTRODUCTION
In this paper we continue our study <cit.> of parabolic
transport problems for some species' concentration in graph-like networks. For ε being a small parameter, these consist of thin cylinders that are interconnected by small domains (nodes) of order 𝒪(ε) in diameter, see Fig. <ref>. The parameter ε is not only characterising the geometry of the thin network but also the strength of certain physical processes. First, we consider a high Péclet number regime of order 𝒪(ε^-1), i.e. a high ratio of convective to diffusive transport rates. This assumption leads to a re-scaled diffusion operator in the parabolic differential equation
∂_t u_ε - ε Δ_x u_ε + div_x ( V_ε u_ε)
= 0,
governing the unknown species concentration u_ε. In (<ref>), V_ε is
a given convective vector field with small transversal velocity components.
In order to additionally study the influence of external factors via boundary interactions, an additional parameter α was introduced in <cit.> in the inhomogeneous boundary conditions
ε∂_ν_ε u_ε - u_ε V_ε·ν_ε = ε^αφ^(i)_ε, i ∈{0, 1, …, ℳ},
which hold both on the lateral surfaces of thin cylinders and on the boundaries of nodes forming the thin network.
Here, a given function φ^(i) describes physical processes on the surface of the ith component of the network.
Three qualitatively different cases can be identified for the asymptotic behaviour (as ε→ 0) of the solution u_ε depending on the value of α, namely
α > 1 low intensity of boundary processes (if α≥ 2 they can simply be ignored, if α∈ (1, 2) their influence appears in the second-order terms of the asymptotics);
α =1 moderate intensity, then the boundary processes directly affect the first terms {w^(i)_0}_i of the asymptotics, which represent the solution of the limit problem consisting of first-order hyperbolic equations
∂_tw^(i)_0(x_i,t) + ∂_x_i( v_i^(i)(x_i,t) w^(i)_0(x_i,t) )
= - φ^(i),
each of which is defined on the i-th edge of the corresponding metric graph, to which the thin network shrinks
(here -φ^(i) is the limit transformation of the boundary interaction φ^(i) into the right-hand side of the differential equation);
α <1 strong intensity (it was only noted in the conclusion of <cit.> that the question of finding the asymptotics remains open and one should expect that the solution will be unbounded as ε→ 0).
In this paper we present a complete analysis of the latter, qualitatively different case whereas the first two ones have been analyzed in
<cit.>.
Studying the influence of inhomogeneous Neumann or Fourier boundary conditions on the processes in the entire domain is a very timely task, especially in regions with complex structures, e.g., see papers <cit.> for perforated domains, <cit.> for thin domains, <cit.> for domains with rapidly oscillating boundaries, <cit.> for thick junctions or domains with a highly oscillating boundary.
The problems considered in these works are mathematical models of various reaction-diffusion, adsorption and transport phenomena in hydrogeology, chemistry, biology, medicine, and thermal conductivity.
In the vast majority of these works, the intensity of the processes at the boundaries is not high, which leads to the appearance of an additional term in the corresponding limit differential equation (it depends on the ratio between the total size of the surfaces and the intensity of the processes on these surfaces).
In <cit.> the author gave an example of a boundary-value problem with a large boundary interaction in a domain with a highly oscillating boundary and proved that the solution becomes unbounded as the oscillation parameter tends to zero.
Additional assumptions that guarantee a priori estimates for the solution regardless of the small parameter can help to prove the convergence theorem even for large boundary interactions (see, e.g., <cit.>).
However, in the general case, physical processes at the boundary of regions with a complex structure can provoke cardinal changes in the global phenomenon throughout the region.
The principal novelty of the present paper is the study of the impact of strong boundary interactions (the case α < 1 and
without any additional assumptions guaranteeing a priori estimates) on the asymptotic behaviour (as ε→ 0) of the solution to the parabolic convection-dominated transport problem in a thin graph-like network.
To do this, we use the asymptotic expansion method, which is a very powerful tool for studying various perturbed problems.
Usually the corresponding first term in the expansion gives the basic limiting behavior of the solution of interest; the subsequent terms allow one to estimate the sensitivity of the model to smaller-scale perturbations. Therefore, convergence results used in studies of perturbed models may not be acceptable in cases where the values of the perturbation parameters are not small enough.
Asymptotic expansions allow the influence of other features of the model to be taken into account through higher-order terms in the expansions, and improve the accuracy of the corresponding numerical procedures by combining them with asymptotic results.
It should be noted that in asymptotic analysis it is very important to guess the form of the asymptotic ansatz, which is affected by various parameters of the problem (see, e.g., <cit.>).
In the present work we develop effective recurrent procedures for constructing Puiseux asymptotic expansions. Such series are a generalization of power series that allows negative and fractional exponents of the indeterminate; in our case, these are real exponents of the parameter ε, which depend on the parameter α. We show that the principal part of the Puiseux series
(it consists of terms that are unbounded as ε→ 0; see Definition <ref>), which determines the main behaviour of the solution in the ith thin cylinder of the network, is as follows
ε^α -1 w_α -1^(i) (x_i,t) + ε^α w_α^(i) (x_i,t)
+
∑_k=2^-⌊α⌋ε^α +k -1 (w_α +k -1^(i) (x_i) + u_α +k -1^(i)( x_i, x_iε, t)).
Note that, for α∈ (0,1), there is only one term ε^α -1 w_α -1^(i) in the principal part. The coefficients {w_α -1^(i)}_i form the solution to the corresponding hyperbolic transport problem on the graph, the right-hand sides of which are the averaged characteristics of the boundary interactions (see (<ref>)). The coefficients {w_α^(i)}_i represent the solution to the hyperbolic transport problem (<ref>), and they take into account interactions on the node boundary, physical processes inside the node and geometric characteristics of the network through the special gluing condition
∑_iv_i h_i^2 w_α^(i) (0,t) = d_α(t)
at the graph vertex, where d_α(t) is determined in (<ref>).
It is worth noting another interesting feature in the study of boundary-value problems with a predominance of convection in thin networks, namely that the corresponding first-order hyperbolic problems for the coefficients of the asymptotic expansion have only a Kirchhoff-type transmission condition (similar to (<ref>)) at the graph vertices; they lack the second condition that is commonly imposed there, the so-called continuity condition.
This means that such solutions, generally speaking, are not continuous at the vertices, which is explained by the wave-particle duality of first-order hyperbolic differential equations (for more details see the conclusion section of our paper <cit.>).
To neutralize this duality and to ensure continuity of the approximation in the vicinities of nodes, a special node-layer part of the asymptotics is introduced whose coefficients are special solutions (with polynomial growth at infinity) to boundary value problems in an unbounded domain with different outlets at infinity (see (<ref>)). Writing down the solvability conditions for such problems, Kirchhoff-type conditions are derived at the vertex of the graph for terms of the regular expansion.
For the sake of clarity of presentation and to focus more on the study of the effects of boundary processes, we consider a simple model of a thin graph-like network consisting of three thin cylinders located along the coordinate axes, respectively, and the Laplace equation for modelling diffusion processes. A more general diffusion operator ε div_x( 𝔻(x/ε) ∇_x u_ε) as well as thin junctions with curvilinear cylinders were considered in our previous paper <cit.>, and general thin networks with different convective vector field dynamics in <cit.>.
The paper is structured as follows. In Section <ref>, we describe a model thin graph-like junction, make assumptions for a given convection vector field and for boundary interactions, and formulate a problem. Section <ref> is devoted to the construction of a formal asymptotic expansion of the solution to the problem (<ref>). The asymptotic expansion consists of three parts: the regular part of the asymptotics located inside each thin cylinder, the boundary-layer part located near the bases of some thin cylinders, and the node-layer part located in the vicinity of the node. Here we prove the solvability of all interrelated recurrent procedures that determine
coefficients of these parts of the asymptotics. A complete asymptotic expansion 𝔘^(ε) in the whole thin graph-like junction is constructed in Section <ref>, where we also calculate residuals that its partial sum 𝔘_M^(ε) leaves in the problem (<ref>). Our main result is Theorem <ref> that justifies the constructed asymptotic expansion and provides both the asymptotic uniform pointwise and energy estimates for the difference between the solution u_ε and the partial sum for any M ∈ N and M > 3/2(1 - ⌊α⌋). The article ends with a section of conclusions and remarks.
§ PROBLEM STATEMENT
For a small positive parameter ε, a model thin graph-like junction Ω_ε consists of three thin cylinders
Ω_ε^(i) =
{
x=(x_1, x_2, x_3)∈R^3 : ε ℓ_0 <x_i<ℓ_i, ∑_j=1^3 (1-δ_i,j)x_j^2<ε^2 h_i^2
}, i=1,2,3,
that are connected with a domain Ω_ε^(0) (referred to as the "node"). Here ℓ_0∈(0, 1/3), ℓ_i≥1, h_i>0 are given numbers, δ_i,j denotes the Kronecker delta, i.e.,
δ_ii = 1 and δ_ij = 0 if i ≠ j.
We denote the lateral surface of the thin cylinder Ω_ε^(i) by
Γ_ε^(i) := ∂Ω_ε^(i)∩{ x∈R^3 : εℓ_0<x_i<ℓ_i }
and by
Υ_ε^(i) (μ) := Ω_ε^(i)∩{ x∈R^3 : x_i= μ}
its cross-section at the point μ∈ [ εℓ_0, ℓ_i].
The node Ω_ε^(0) (see Fig. <ref>) is formed by the homothetic transformation with coefficient ε from a bounded domain Ξ^(0)⊂ R^3 containing the origin, i.e.,
Ω_ε^(0) = ε Ξ^(0).
In addition, we assume that the boundary ∂Ξ^(0) of Ξ^(0) contains the disks
Υ^(i)_1(ℓ_0) := Ξ^(0)∩{ x x_i= ℓ_0}, i∈{1,2,3}, that are the bases of some right cylinders, respectively, and the lateral surfaces of these cylinders belong to ∂Ξ^(0). So, the boundary of the node Ω_ε^(0) consists of
the disks Υ_ε^(i) (εℓ_0), i∈{1,2,3},
and the surface
Γ_ε^(0) :=
∂Ω_ε^(0)\{Υ_ε^(1) (εℓ_0) ∪ Υ_ε^(2) (εℓ_0) ∪ Υ_ε^(3) (εℓ_0)}.
Thus, Ω_ε is the interior of the union
⋃_i=0^3Ω_ε^(i)
(see Fig. <ref>), and we assume that the surface ∂Ω_ε∖⋃_i=0^3Υ_ε^(i) (ℓ_i) is smooth of the class C^3.
The given vector-valued function V_ε depends on the parts of Ω_ε and has the following structure:
V_ε(x)=
(v^(0)_1(x/ε), v^(0)_2(x/ε), v^(0)_3(x/ε)
) =: V_ε^(0)(x), x ∈ Ω_ε^(0),
V_ε(x)=
(v^(1)_1(x_1), ε v^(1)_2(x_1, x̅_1/ε), ε v^(1)_3(x_1, x̅_1/ε)
) =: V_ε^(1)(x), x ∈ Ω_ε^(1) (v^(1)_1 < 0),
V_ε(x) =
( ε v^(2)_1(x_2, x̅_2/ε), v^(2)_2(x_2), ε v^(2)_3(x_2, x̅_2/ε)
) =: V_ε^(2)(x), x ∈ Ω_ε^(2) (v^(2)_2 > 0),
V_ε(x) =
(ε v^(3)_1(x_3, x̅_3/ε), ε v^(3)_2(x_3, x̅_3/ε), v^(3)_3(x_3)
) =: V_ε^(3)(x), x ∈ Ω_ε^(3) (v^(3)_3 > 0),
where
x̅_i =
{[ (x_2, x_3), if i=1,; (x_1, x_3), if i=2,; (x_1, x_2), if i=3. ].
For a fixed value of the index i∈{1, 2, 3}, the function v^(i)_i belongs to the space C^3([0,ℓ_i]) and is equal to a constant v_i in a neighbourhood of the origin; the other components of V_1^(i) are smooth of class C^3 in Ω_1^(i) and have compact supports with respect to the longitudinal variable x_i,
in particular, we will assume that they vanish in [0, δ_i], (δ_i > 0).
Hence, the main direction of the vector field V_ε^(i) is oriented along the axis of the cylinder Ω_ε^(i) (see Fig. <ref>).
To describe the dynamics of the velocity field V_ε in the node Ω_ε^(0), we make the following assumptions. The velocity field is conservative in the node and
its potential p is a solution to the boundary-value problem
{[ Δ_ξ p(ξ) = 0, ξ∈Ξ^(0),; ∂ p(ξ)∂ξ_i = v_i, ξ∈Υ^(i)_1(ℓ_0), i∈{1, 2, 3},; ∂ p(ξ)∂ν = 0, ξ∈Γ^(0) := ∂Ξ^(0)∖( ⋃_i=1^3 Υ^(i)_1(ℓ_0)); ∫_Ξ^(0) p(ξ) dξ =0. ].
Here Δ_ξ is the Laplace operator in the variables ξ=(ξ_1,ξ_2,ξ_3), ∂ p/∂ν is the derivative along the outward unit normal ν to ∂Ξ^(0). The Neumann problem (<ref>) has a unique solution if and only if the conservation condition
∑_i=1^3 h_i^2 v_i =0
holds. Thus,
V_ε^(0)(x) = ∇_ξ p(ξ)|_ξ = x/ε = ε∇_x ( p(xε) ), x∈Ω^(0)_ε,
and, clearly, div_x V_ε^(0) = 0 in Ω^(0)_ε, i.e.,
V_ε^(0) is incompressible in Ω^(0)_ε.
In Ω_ε, we consider the following parabolic convection-diffusion problem:
{[ ∂_t u_ε - ε Δ_x u_ε +
div_x ( V_ε u_ε) = 0, in Ω_ε× (0, T),; - ε ∂_ν_ε u_ε + u_ε V_ε·ν_ε = ε^αφ^(i)_ε on Γ^(i)_ε× (0, T), i ∈{0, 1,2,3},; u_ε|_x ∈Υ_ε^(i) (ℓ_i) = q_i(t), t∈ (0, T), i ∈{1,2,3},; u_ε|_t=0 = 0, in Ω_ε, ].
where ν_ε is the outward unit normal to ∂Ω_ε,
∂_ν_ε denotes the derivative along ν_ε, and the intensity parameter α is less than 1. The given functions {φ^(i)_ε}_i=0^3 are determined as follows
φ^(0)_ε(x,t) := φ^(0)(x/ε,t), (x,t) ∈Γ_ε^(0)× [0,T],
φ^(i)_ε(x,t) := φ^(i)(x̅_i/ε, x_i, t), (x,t) ∈Γ_ε^(i)× [0,T], i∈{1,2,3},
where the function φ^(0)(ξ,t), (ξ,t) ∈Ξ^(0)× [0,T], and the functions
φ^(i)(ξ̅_i,x_i,t), (ξ̅_i,x_i,t) ∈{ |ξ̅_i|≤ h_i, x_i∈ [0, ℓ_i], t∈ [0,T]}, i∈{1,2,3},
belong to the class C^2 in their domains of definition. In addition, φ^(0) vanishes uniformly with respect to t∈ [0, T] in neighborhoods of
Υ^(i)_1(ℓ_0), i∈{1,2,3}, and the functions {φ^(i)}_i=1^3 vanish uniformly with respect to t and ξ̅_i in neighborhoods of the ends of the corresponding closed interval [0, ℓ_i].
The given functions {q_i(t), t∈ [0, T]}_i=1^3 are smooth and nonnegative. To satisfy the zero- and first-order matching conditions for the problem (<ref>), we assume that
q_i(0) = d q_i/dt(0)=0, i∈{1,2,3}, φ^(i)|_t=0 =0, i∈{0, 1,2,3}.
Thus, due to the classical theory of parabolic initial-boundary problem there exists a unique classical solution to the problem (<ref>) (see e.g. <cit.>). Obviously, it is also a weak solution in the Sobolev space L^2(0,T; H^1(Ω_ε)).
Our goal is to construct the complete asymptotic expansion for the solution to the problem (<ref>) as
ε→ 0, i.e., when the thin junction Ω_ε is shrunk into the graph
ℐ := I_1 ∪ I_2 ∪ I_3,
where I_i := {x : x_i ∈ [0, ℓ_i], x̅_i = (0, 0)}, i∈{1,2,3}.
§ FORMAL ASYMPTOTICS
To approximate the solution u_ε, the following Puiseux series ansatzes are suggested:
*
∑_k=0^+∞∑_p=0^1ε^pα +k -1 (w_pα +k -1^(i) (x_i) + u_pα +k -1^(i)( x_i, x_i/ε, t ) )
for the regular part of the asymptotics in each thin cylinder Ω^(i)_ε (i∈{1,2,3});
*
∑_k=0^+∞∑_p=0^1ε^pα +k -1 N_pα +k -1(x/ε, t)
for the node-layer part of the asymptotics in a neighborhood of the node Ω^(0)_ε;
*
∑_k=0^+∞∑_p=0^1ε^pα +k -1 Π^(i)_pα +k -1( ℓ_i - x_i/ε, x_iε, t )
for the boundary-layer part of the asymptotics in a neighborhood of the base Υ_ε^(i)(ℓ_i) (i ∈{2, 3}).
The ansatzes (<ref>) – (<ref>) are suitable for constructing the asymptotics of the solution to the problem (<ref>) for any real value of the parameter α.
Each of these ansatzes is the formal sum of two ansatzes, namely
∑_k=0^+∞ε^k-1 𝔅_k-1 + ∑_k=0^+∞ε^α + k -1 𝔅_α + k -1.
In the case of α∈ Z, these ansatzes are transformed into ansatzes in integer powers of ε, which have the following form:
∑_k=0^+∞ε^k 𝔅_k for α∈ N, and ∑_k=α -1^+∞ε^k 𝔅_k for α∈ Z, α≤ 0.
In this paper we consider the more complicated case α∈ R ∖ Z and α < 1, always assuming that coefficients with negative integer indices and coefficients with indices less than α -1 vanish in all series and equations.
For such series, we give the following definition of an asymptotic expansion.
Let B be a Banach space, f(ε) be an element in B,
which depends on a small parameter ε, and α∈ R ∖ Z, α < 1. We say that
a Puiseux series
∑_k=0^+∞∑_p=0^1ε^pα +k -1 𝔟_pα +k -1
whose coefficients belong to B is the asymptotic expansion of f(ε) in the Banach space B if for any
𝔑 > 0 there exists a positive integer M_0 such that for all M ∈ N and M ≥ M_0
f(ε) - ℒ_M(ε)_B = 𝒪(ε^𝔑) as ε→ 0,
where ℒ_M(ε) is the partial sum of (<ref>), which is defined as follows
ℒ_M(ε) = ∑_k=0^- ⌊α⌋ε^α +k -1 𝔟_α +k -1
+ ∑_k= 1^M +⌊α⌋ε^k -1 𝔟_k -1 + ∑_k=- ⌊α⌋ + 1^Mε^α +k -1 𝔟_α +k -1,
and ⌊·⌋ is the floor function.
The first sum ∑_k=0^- ⌊α⌋ε^α +k -1 𝔟_α +k -1
is called the principal part of the Puiseux series (<ref>).
It can be seen from Definition <ref> that the number of terms in the principal part of the Puiseux series (<ref>) depends on the parameter α and each term in the principal part is unbounded as ε→ 0. For example, if α∈ (0, 1), then the principal part contains only one term ε^α -1 𝔟_α -1.
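The bookkeeping of exponents in Definition <ref> is elementary but easy to get wrong. The following Python sketch (a numerical aid only, not part of the analysis) lists the exponents pα+k−1 entering the partial sum ℒ_M(ε) and separates the negative ones that form the principal part.

import math

def puiseux_exponents(alpha, M):
    # Exponents of the partial sum L_M for a non-integer alpha < 1 (see the definition above)
    fl = math.floor(alpha)
    principal = [alpha + k - 1 for k in range(0, -fl + 1)]            # k = 0, ..., -floor(alpha)
    integer_part = [k - 1 for k in range(1, M + fl + 1)]              # k = 1, ..., M + floor(alpha)
    fractional_rest = [alpha + k - 1 for k in range(-fl + 1, M + 1)]  # k = -floor(alpha)+1, ..., M
    return principal, sorted(integer_part + fractional_rest)

print(puiseux_exponents(alpha=0.5, M=3))    # principal part: [-0.5]
print(puiseux_exponents(alpha=-1.3, M=4))   # principal part: approximately [-2.3, -1.3, -0.3]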
Formally substituting (<ref>) into the differential equation of the problem (<ref>) and into the boundary condition on the lateral surface of the thin cylinder Ω_ε^(i) (the index i is fixed), collecting coefficients at the same power of ε, we get the following relations for each k ∈ N_0∪{-1} and p∈{0, 1}:
Δ_ξ̅_iu^(i)_pα +k(x_i, ξ̅_i, t)
= ( v^(i)_i(x_i) [ w^(i)_pα +k-1(x_i, t) + u^(i)_pα +k-1(x_i,ξ̅_i, t) ] )^'
+ ∂_t w_pα +k-1^(i)(x_i, t)
+ ∂_t u_pα +k-1^(i)(x_i,ξ̅_i, t) + div_ξ̅_i( V^(i)(x_i, ξ̅_i) [ w^(i)_pα +k-1(x_i, t) + u^(i)_pα +k-1(x_i,ξ̅_i, t) ])
- ( w^(i)_pα +k-2(x_i,t) + u^(i)_pα +k-2(x_i,ξ̅_i, t) )^'',
ξ̅_i ∈Υ_i,
where ξ̅_i=x̅_i/ε, Υ_i := {ξ̅_i∈R^2 : |ξ̅_i|< h_i }, “'” denotes the derivative with respect to the longitudinal variable x_i,
V^(1):= (v^(1)_2(x_1, ξ̅_1), v^(1)_3(x_1,ξ̅_1)), V^(2):= (v^(2)_1(x_2, ξ̅_2), v^(2)_3(x_2,ξ̅_2)), V^(3):=
(v^(3)_1(x_3, ξ̅_3), v^(3)_2(x_3,ξ̅_3));
and
∂_ν̅_ξ̅_i u^(i)_pα +k = ( w^(i)_pα +k -1 + u^(i)_pα +k -1) V^(i)·ν̅_ξ̅_i - δ_pα +k -1, α -1 φ^(i)(ξ̅_i, x_i, t), ξ̅_i ∈∂Υ_i,
where ∂_ν̅_ξ̅_i is the derivative along the outward unit normal ν̅_ξ̅_i to the boundary of the disk Υ_i, δ_pα +k -1, α -1 is the Kronecker delta, V^(i)·ν̅_ξ̅_i is the scalar product of the vectors V^(i) and ν̅_ξ̅_i.
In (<ref>) and below, we will omit the dependence of some coefficients on variables to simplify the writing of equations, if this does not cause confusion.
The equations (<ref>) and (<ref>) form the Neumann problems in Υ_i with respect to the variables ξ̅_i to find u^(i)_pα +k. In the right-hand sides of (<ref>) and (<ref>) there are coefficients of the
ansatz (<ref>) with lower priorities. The variables x_i and t in these problems are considered as parameters from the set I_ε^(i)× (0, T), where I_ε^(i) := {x_i x_i ∈ (εℓ_0, ℓ_i), x̅_i = (0, 0) }.
To ensure uniqueness, we provide each problem with the condition
⟨ u_pα +k^(i)(x_1, · , t ) ⟩_Υ_i := ∫_Υ_i u_pα +k^(i)(x_i, ξ̅_i, t) dξ̅_i = 0.
Considering the last part of Remark <ref>, the Neumann problem for u^(i)_α -1 (k=-1, p=1) is as follows
Δ_ξ̅_iu^(i)_α -1 = 0 in Υ_i, ∂_ν̅_ξ̅_i u^(i)_α -1 = 0 on ∂Υ_i, ⟨ u_α -1^(i)⟩_Υ_i = 0,
whence u^(i)_α -1≡ 0. Similarly, we obtain u^(i)_0≡ 0 (k= p=0).
Thus, the ansatz (<ref>) begins with the summand ε^α -1 w^(i)_α -1.
For k=0, p=1 in (<ref>) and (<ref>), we have the problem
{[ Δ_ξ̅_iu^(i)_α = (v^(i)_i w^(i)_α-1)^'
+ ∂_t w_α-1^(i) + w_α-1^(i) div_ξ̅_iV^(i) in Υ_i,; ∂_ν̅_ξ̅_i u^(i)_α = w^(i)_α-1 V^(i)·ν̅_ξ̅_i - φ^(i)(ξ̅_i, x_i, t) on ∂Υ_i, ⟨ u_α^(i)⟩_Υ_i = 0; ].
and for k=1, p= 0 the problem
{[ Δ_ξ̅_iu^(i)_1 = (v^(i)_i w^(i)_0)^'
+ ∂_t w_0^(i) + w_0^(i) div_ξ̅_iV^(i) in Υ_i,; ∂_ν̅_ξ̅_i u^(i)_1 = w^(i)_0 V^(i)·ν̅_ξ̅_i on ∂Υ_i, ⟨ u_1^(i)⟩_Υ_i = 0. ].
Writing down the solvability condition for each of these problems, we deduce the differential equations for the coefficients w^(i)_α-1 and w^(i)_0:
∂_tw^(i)_α-1(x_i,t) + ( v_i^(i)(x_i) w^(i)_α-1(x_i,t) )^'
= - φ^(i)(x_i, t), (x_i, t) ∈ I_ε^(i)× (0, T),
and
∂_tw^(i)_0(x_i,t) + ( v_i^(i)(x_i) w^(i)_0(x_i,t) )^' = 0, (x_i, t) ∈ I_ε^(i)× (0, T),
where
φ^(i)(x_i, t) := 1/π h^2_i∫_∂Υ_iφ^(i)(x_i, ξ̅_i, t) dσ_ξ̅_i.
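The averaged datum defined above is simply the line integral of the boundary interaction over the circle |ξ̅_i| = h_i divided by the cross-sectional area π h_i^2. A minimal quadrature sketch (the test datum, the radius, and the fixed values of x_i and t are assumptions) illustrates the computation:

import numpy as np

def averaged_boundary_datum(phi, h, n=400):
    # Approximate (1/(pi h^2)) * integral of phi over the circle |xi| = h by a uniform-angle rule
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xi1, xi2 = h * np.cos(theta), h * np.sin(theta)
    line_integral = (phi(xi1, xi2) * h).sum() * (2.0 * np.pi / n)
    return line_integral / (np.pi * h ** 2)

# hypothetical datum phi(xi_1, xi_2) = 1 + xi_1 at fixed x_i and t: the odd part averages out
print(averaged_boundary_datum(lambda a, b: 1.0 + a, h=0.5))   # equals 2/h = 4.0 here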
Let w^(i)_α-1 and w^(i)_0 be solutions to the equations (<ref>) and (<ref>), respectively. We will show below how to choose suitable unique solutions. Then there exist unique solutions to the problems (<ref>) and (<ref>), respectively, and the differential equations in these problems become
Δ_ξ̅_iu^(i)_α = - φ^(i) + w_α-1^(i) div_ξ̅_iV^(i) and Δ_ξ̅_iu^(i)_1 = w_0^(i) div_ξ̅_iV^(i).
Writing down the solvability condition for the problem (<ref>)–(<ref>), we get the differential equation
∂_t w_pα +k-1^(i)(x_i, t) + ( v^(i)_i(x_i) w^(i)_pα +k-1(x_i, t) )^'
= ( w^(i)_pα +k-2(x_i,t))^'', (x_i, t) ∈ I_ε^(i)× (0, T).
Let w_pα +k-1^(i) be a solution of (<ref>). Then there exists a unique solution to the problem (<ref>)–(<ref>) and the differential equation in this problem becomes
Δ_ξ̅_iu^(i)_pα +k(x_i, ξ̅_i, t)
= ( v^(i)_i(x_i) u^(i)_pα +k-1(x_i,ξ̅_i, t) )^'
+ ∂_t u_pα +k-1^(i)(x_i,ξ̅_i, t) - ( u^(i)_pα +k-2(x_i,ξ̅_i, t) )^''
+ div_ξ̅_i( V^(i)(x_i, ξ̅_i) [ w^(i)_pα +k-1(x_i, t) + u^(i)_pα +k-1(x_i,ξ̅_i, t) ]), ξ̅_i ∈Υ_i.
Since the functions {φ^(i)}_i=1^3 and {V^(i)}_i=1^3 have compact supports with respect to the corresponding longitudinal variable, the coefficients {u_pα +k -1^(i)} vanish in the corresponding neighborhoods
of the ends of the segment [0, ℓ_i].
To find transmission conditions for the functions {w_pα +k -1^(i)}, i∈{1,2,3} at the point 0, we should run the node-layer part (<ref>) of the asymptotics in a neighborhood of the node Ω^(0)_ε. For this purpose we pass to the scaled variables ξ=x/ε. Then, letting ε to 0, we see that the domain Ω_ε is transformed into the unbounded domain Ξ that is the union of the domain Ξ^(0) and three semi-infinite cylinders
Ξ^(i) = {ξ=(ξ_1,ξ_2,ξ_3)∈ R^3 : ℓ_0<ξ_i<+∞,
|ξ̅_i|<h_i}, i∈{1,2,3},
i.e., Ξ is the interior of the set ⋃_i=0^3Ξ^(i).
Next, repeating the same procedure with the ansatz (<ref>), considering the incompressibility of V_ε^(0) in Ω^(0)_ε, and matching ansatzes (<ref>) and (<ref>),
we get the problem
{[ - Δ_ξN_pα +k(ξ,t) +
V(ξ) ·∇_ξN_pα +k(ξ,t) = - ∂_t N_pα +k-1(ξ,t), ξ∈Ξ^(0),; - ∂_ν_ξ N_pα +k(ξ,t) = δ_pα +k, αφ^(0)(ξ,t) , ξ∈Γ_0,; - Δ_ξN_pα +k(ξ,t) +
v_i ∂_ξ_iN_pα +k(ξ,t) = - ∂_tN_pα +k-1(ξ,t), ξ∈Ξ^(i),; ∂_ν̅_ξ̅_i N_pα +k(ξ,t) = 0, ξ∈Γ_i,; N_pα +k(ξ,t) ∼ w^(i)_pα +k(0,t) + Ψ^(i)_pα +k(ξ_i,t) as ξ_i → +∞, ξ∈Ξ^(i), i∈{1,2,3}, ].
for each k ∈ N_0∪{-1} and p∈{0, 1}, where
Ψ_ pα+k^(i)(ξ_i,t) = ∑_j=1^k+1ξ_i^j/j! ∂^j w_pα + k -j^(i)∂ x_i^j (0,t)
i∈{1,2,3}.
Note that the variable t appears as a parameter in the steady convection-diffusion problems (<ref>).
A solution with such a polynomial asymptotics at different exits to infinity is sought in the form
N_pα +k(ξ,t) = ∑_i=1^3(w^(i)_pα +k(0,t) + Ψ^(i)_pα +k(ξ_i,t) ) χ_ℓ_0(ξ_i) + N_pα +k(ξ,t),
where χ_ℓ_0∈ C^∞(R) is a smooth cut-off function such that
0≤χ_ℓ_0≤1, χ_ℓ_0(s) =0 if s ≤ 2ℓ_0 and
χ_ℓ_0(s) =1 if s ≥ 3ℓ_0. Then N_pα +k must be a solution to the problem
{[ - Δ_ξN_pα +k +
V(ξ) ·∇_ξN_pα +k = - ∂_t N_pα +k-1(ξ,t), ξ∈Ξ^(0),; - ∂_ν_ξN_pα +k(ξ,t) = δ_pα +k, αφ^(0)(ξ,t) , ξ∈Γ_0,; - Δ_ξN_pα +k +
v_i ∂_ξ_iN_pα +k = F_pα +k(ξ_i,t) - ∂_tN_pα +k-1(ξ,t), ξ∈Ξ^(i),; ∂_ν̅_ξ̅_iN_pα +k(ξ,t) = 0, ξ∈Γ_i,; N_pα +k(ξ,t) → 0 as ξ_i → +∞, ξ∈Ξ^(i), i∈{1,2,3}, ].
where
F_pα +k(ξ_i,t) := w^(i)_pα +k(0,t) χ”_ℓ_0(ξ_i) - v_i w^(i)_pα +k(0,t) χ'_ℓ_0(ξ_i)
+ ∂^2_ξ_i ξ_i(Ψ^(i)_pα +k(ξ_i,t) ) χ_ℓ_0(ξ_i)) - v_i ∂_ξ_i(Ψ^(i)_pα +k(ξ_i,t) ) χ_ℓ_0(ξ_i)).
Proposition A.1 from <cit.> asserts that the necessary and sufficient condition for the unique solvability of the problem (<ref>) in the Sobolev space of functions exponentially decreasing to zero is the equality
∑_i=1^3∫_Ξ^(i)(F_pα +k - ∂_tN_pα +k-1) dξ = ∫_Ξ^(0)∂_t N_pα +k-1 dξ + δ_pα +k, α∫_Γ_0φ^(0) d σ_ξ.
Since
∂^2_ξ_i ξ_iΨ^(i)_pα +k(ξ_i,t) - v_i ∂_ξ_iΨ^(i)_pα +k(ξ_i,t) =
∂_t w^(i)_pα +k-1(0,t) + ∂_tΨ^(i)_pα +k-1(ξ_i,t),
the difference
F_pα +k(ξ_i,t) - ∂_tN_pα +k-1(ξ,t) = w^(i)_pα +k(0,t) χ”_ℓ_0(ξ_i) - v_i w^(i)_pα +k(0,t) χ'_ℓ_0(ξ_i)
+(∂_ξ_iΨ^(i)_pα +k(ξ_i,t) -
v_i Ψ^(i)_pα +k(ξ_i,t) ) χ'_ℓ_0(ξ_i) + (Ψ^(i)_pα +k(ξ_i,t) χ'_ℓ_0(ξ_i))'
- ∂_tN_pα +k-1(ξ,t).
Using (<ref>), the equality (<ref>) can be rewritten as the gluing condition for the functions {w^(i)_pα +k}_i=1^3 at the origin
∑_i=1^3 v_i h_i^2 w_pα +k^(i) (0,t)
= d_pα +k(t),
where
d_pα +k(t)
= - δ_pα +k, α 1/π∫_Γ_0φ^(0) d σ_ξ -1/π∫_Ξ^(0)∂_t N_pα +k-1 dξ - 1/π∑_i=1^3∫_Ξ^(i)∂_tN_pα +k-1 dξ
+∑_i=1^3 h_i^2 ∫_2 ℓ_0^3 ℓ_0(∂_ξ_iΨ^(i)_pα +k(ξ_i,t) -
v_i Ψ^(i)_pα +k(ξ_i,t) ) χ'_ℓ_0(ξ_i) dξ.
For p=1, k = -1 and p=0, k=0 the value d_pα +k≡ 0. Thus, to determine {w^(i)_α-1}_i=1^3 and {w^(i)_0}_i=1^3 we get the following problems on the graph ℐ:
{[ ∂_tw^(i)_α-1(x_i,t) + ( v_i^(i)(x_i) w^(i)_α-1(x_i,t) )^'
= - φ^(i)(x_i, t), (x_i, t) ∈ (0, ℓ_i)× (0, T), i∈{1, 2, 3},; ∑_i=1^3 v_i h_i^2 w_α-1^(i) (0,t) = 0 for any t ∈ [0, T],; w^(1)_α-1(ℓ_1,t) = 0 for any t ∈ [0, T], w^(i)_α-1(x_i,0) =0 for any x_i∈ [0, ℓ_i], i∈{1, 2, 3} ; ].
and
{[ ∂_tw^(i)_0(x_i,t) + ( v_i^(i)(x_i) w^(i)_0(x_i,t) )^' = 0, (x_i, t) ∈ (0, ℓ_i)× (0, T), i∈{1, 2, 3},; ∑_i=1^3 v_i h_i^2 w_0^(i) (0,t) = 0 for any t ∈ [0, T],; w^(1)_0(ℓ_1,t) =q_1(t) for any t ∈ [0, T], w^(i)_0(x_i,0) =0 for any x_i∈ [0, ℓ_i], i∈{1, 2, 3}. ].
In these problems, there is only one boundary condition at the end x_1 =ℓ_1 because there is only one input cylinder with respect to the vector field V_ε. This is in full agreement with the approach proposed in <cit.>. According to this approach, we first find a solution to the corresponding hyperbolic mixed problem with a given boundary condition and initial condition. For example, for (<ref>) this is the following problem:
{[ ∂_tw^(1)_α-1(x_1,t) + ( v_1^(1)(x_1) w^(1)_α-1(x_1,t) )^'
= - φ^(1)(x_1, t), (x_1, t) ∈ (0, ℓ_1)× (0, T),; w^(1)_α-1(ℓ_1,t) = 0 for any t ∈ [0, T], w^(1)_α-1(x_1,0) =0 for any x_1∈ [0, ℓ_1], ].
The solvability criteria are based on the method of characteristics. Since φ^(1)(ℓ_1,0) =0, the problem (<ref>)
has a unique classical solution; in addition, an explicit representation is available for it (see <cit.>). From this representation it follows that ∂_t w_α-1^(1)(0,0)=0.
Then w^(2)_α-1 and w^(3)_α-1 are defined as classical solutions to the problems
{[ ∂_tw^(i)_α-1(x_i,t) + ( v_i^(i)(x_i) w^(i)_α-1(x_i,t) )^'
= - φ^(i)(x_i, t), (x_i, t) ∈ (0, ℓ_i)× (0, T),; w^(i)_α-1(0,t) = w^(1)_α-1(0,t) for any t ∈ [0, T], w^(i)_α-1(x_i,0) =0 for any x_i∈ [0, ℓ_i], ].
i∈{2, 3}, respectively. Since w_α-1^(1)(0,0) = ∂_t w_α-1^(1)(0,0)=0 and φ^(2)(0, 0) =φ^(3)(0, 0) =0, the solvability conditions are satisfied for these problems. Moreover, thanks to (<ref>), the gluing condition at the origin is fulfilled in the problem (<ref>).
Thus, the problem (<ref>) has a unique classical solution, and, interestingly, the continuity condition and the Kirchhoff condition are simultaneously satisfied at the graph vertex. In the same way, we justify the existence and uniqueness of the classical solution to the problem (<ref>) (now due to (<ref>)), for which the continuity condition and the Kirchhoff condition are also satisfied.
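Both edge problems above are solved via the method of characteristics; a simple numerical integration of a single edge problem of this type provides a convenient cross-check of such explicit solutions. The sketch below (a first-order upwind scheme with an assumed constant velocity and source, not the authors' formulae) exploits the fact that for v_1^(1) < 0 the inflow boundary of the first edge is x_1 = ℓ_1:

import numpy as np

def solve_edge_transport(v, phi_bar, ell=1.0, T=1.0, nx=200, nt=2000):
    # First-order upwind scheme for w_t + (v(x) w)_x = -phi_bar(x, t) with v < 0, w(ell, t) = 0, w(x, 0) = 0
    x, dx, dt = np.linspace(0.0, ell, nx), ell / (nx - 1), T / nt
    w = np.zeros(nx)
    for n in range(nt):
        t = n * dt
        flux = v(x) * w
        w[:-1] -= dt * ((flux[1:] - flux[:-1]) / dx + phi_bar(x[:-1], t))  # upwind difference for v < 0
        w[-1] = 0.0                                                        # inflow value at x = ell
    return x, w

x, w = solve_edge_transport(v=lambda s: -1.0 + 0.0 * s, phi_bar=lambda s, t: np.ones_like(s))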
Having found these solutions, we can uniquely determine both solutions to the problems (<ref>) and (<ref>) and solutions to the problems
{[ - Δ_ξN_pα +k +
V(ξ) ·∇_ξN_pα +k = 0 in Ξ^(0),; - Δ_ξN_pα +k +
v_i ∂_ξ_iN_pα +k = w^(i)_pα +k(0,t) (χ”_ℓ_0(ξ_i) - v_i χ'_ℓ_0(ξ_i)) in Ξ^(i),; - ∂_ν_ξN_pα +k = 0 on ∂Ξ,; N_pα +k(ξ,t) → 0 as ξ_i → +∞, ξ∈Ξ^(i), i∈{1,2,3}, ].
for p=1, k= -1 and for p=k=0, respectively. Since the right-hand sides in the problem (<ref>) are uniformly bounded with respect to (ξ, t)∈Ξ× [0, T] and have compact supports, the corresponding solutions N_α-1 and N_0 have the following asymptotics uniform with respect to t∈ [0, T]:
N_pα +k(ξ,t) = w^(i)_pα +k(0,t) + 𝒪(exp(-β_0ξ_i))
ξ_i→+∞, ξ∈Ξ^(i), i={1,2,3} (β_0 >0)
for p=1, k= -1 and p=k=0. It is easy to verify that ∂_tN_α-1 and ∂_t N_0 have similar uniform asymptotics as well. In addition,
N_pα +k|_t=0≡N_pα +k|_t=0≡ 0, ∂_t N_pα +k|_t=0≡∂_t N_pα +k|_t=0≡ 0, u_pα +k+1^(i)|_t=0≡ 0, i∈{1, 2, 3}
for p=1, k= -1 and p=k=0.
It follows from the differential equations (<ref>) and (<ref>) that {w_pα +k -1^(i)} and {u_pα +k^(i)}
are defined in terms of the second derivatives of {w_pα +k -2^(i)} and {u_pα +k-2^(i)}.
This means that {w^(i)_α-1}_i=1^3 and {w^(i)_0}_i=1^3, as well as the other coefficients, must be infinitely differentiable.
Therefore, additional smoothness of the given functions and additional matching conditions are necessary, namely
* {φ^(i), q_i, V^(i), v^(i)_i}_i=1^3 belong to the class C^∞ in their domains of definition, the function φ^(0) is infinitely differentiable in t∈ [0, T],
* for each n∈ N and i∈{1, 2, 3}
d^n q_1 /dt^n|_t=0 = 0 and ∂^nφ^(0)/∂ t^n|_t=0= 0.
Using the explicit representation and additional assumptions, it can be verified that ∂^n_t w_α-1^(i)(0, 0) = 0 and ∂^n_t w_0^(i)(0, 0) = 0 for each n∈ N. This means also that ∂^n_t N_pα +k|_t=0≡∂^n_t N_pα +k|_t=0≡ 0 for p=1, k= -1 and p=k=0.
The influence of the interaction on the node boundary begins to manifest itself from the coefficient N_α that is a solution to the problem (<ref>) for p=1, k= 0. In this case, the solvability condition (<ref>) for
N_α has the non-zero right-hand side
d_α(t)
= - 1/π∫_Γ_0φ^(0) d σ_ξ -1/π∫_Ξ^(0)∂_t N_α -1 dξ - 1/π∑_i=1^3∫_Ξ^(i)∂_tN_α -1 dξ
+∑_i=1^3 h_i^2 ∂_x_iw^(i)_α -1(0,t)
(1 - v_i ∫_2ℓ_0^3ℓ_0ξ_i χ^'_ℓ_0(ξ_i) dξ_i).
The coefficients {w^(i)_α}_i=1^3 form a solution to the problem
{[ ∂_tw^(i)_α(x_i,t) + ( v_i^(i)(x_i) w^(i)_α(x_i,t) )^'
= ( w^(i)_α -1(x_i,t))^'', (x_i, t) ∈ (0, ℓ_i)× (0, T), i∈{1, 2, 3},; ∑_i=1^3 v_i h_i^2 w_α^(i) (0,t) = d_α(t) for any t ∈ [0, T],; w^(1)_α(ℓ_1,t) = 0 for any t ∈ [0, T], w^(i)_α(x_i,0) =0 for any x_i∈ [0, ℓ_i], i∈{1, 2, 3}. ].
As before, w^(1)_α is a solution to the corresponding hyperbolic mixed problem. But now, since d_α≠0, we cannot additionally satisfy the continuity condition at the vertex, as for the problems (<ref>) and (<ref>). Following the approach of <cit.>, we propose the weighted incoming concentration average in the boundary conditions
for w^(2)_α and w^(3)_α, namely
w_α^(i)(0, t) = 1/(2 v_i h_i^2) (d_α(t) - v_1 h_1^2 w_α^(1)(0, t)), t ∈ [0, T], i∈{2, 3}.
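A quick numerical sanity check of this splitting (with hypothetical values for v_i, h_i^2, d_α and the incoming value w_α^(1)(0,t)) confirms that it restores the Kirchhoff-type condition ∑_i v_i h_i^2 w_α^(i)(0,t) = d_α(t):

v = {1: -1.0, 2: 0.6, 3: 0.4}    # hypothetical velocities with sum_i v_i h_i^2 = 0
h2 = {1: 1.0, 2: 1.0, 3: 1.0}    # squared radii h_i^2
d_alpha, w1 = 0.3, 0.5           # assumed d_alpha(t) and incoming value w_alpha^(1)(0, t)

w = {1: w1}
for i in (2, 3):
    w[i] = (d_alpha - v[1] * h2[1] * w1) / (2.0 * v[i] * h2[i])

print(sum(v[i] * h2[i] * w[i] for i in (1, 2, 3)))   # returns d_alpha = 0.3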
Taking into account the assumptions made in Remark <ref>,
we check the validity of the classical solvability criteria (d_α(0)=d_α'(0)=0) and the infinitely differentiability of solutions.
Thus, the problem (<ref>) has a classical solution. This means that the solvability condition both for the problem to define N_α (the problem (<ref>) for p=1 and k=0) and for the Neumann problem (<ref>), (<ref>) and (<ref>) (p=k=1) to define u^(i)_α+1 is satisfied.
In addition, since ∂_t N_α-1 decreases exponentially to zero uniformly in t∈ [0, T] (see (<ref>))
and the other summands on the right-hand side
F_α - ∂_tN_pα -1 = - ∂_tN_α -1(ξ,t) + w^(i)_α(0,t) χ”_ℓ_0(ξ_i) - v_i w^(i)_α(0,t) χ'_ℓ_0(ξ_i)
+ ∂_x_iw^(i)_α -1(0,t) (1 - v_i ξ_i ) χ'_ℓ_0(ξ_i) + ∂_x_iw^(i)_α -1(0,t) (ξ_i χ'_ℓ_0(ξ_i))'
in the problem (<ref>) (p=1, k=0) are uniformly bounded with respect to (ξ, t)∈Ξ× [0, T] and have compact supports, the solution N_α to the problem (<ref>) has the following asymptotics uniform with respect to t∈ [0, T]:
N_α(ξ,t) = w^(i)_α(0,t) + ξ_i ∂_x_iw^(i)_α -1(0,t) + 𝒪(exp(-β_0ξ_i))
ξ_i→+∞, ξ∈Ξ^(i), i∈{1, 2, 3}.
It is easy to verify that
N_α|_t=0≡∂_t N_α|_t=0≡∂^2_t N_α|_t=0≡ 0,
u_α +1^(i)|_t=0≡ 0, i∈{1,2,3}.
Next, assuming that all coefficients
{{w^(i)_pα +m}_i=1^3, N_pα +m}_p∈{0,1}, m∈{-1,0,…, k-1} and {u^(i)_pα +m}_i∈{1,2,3}, p∈{0,1}, m∈{-1,0,…, k}
are determined and that they and their derivatives in t vanish at t=0, we first determine the value d_pα +k(t) by (<ref>), then the coefficients
* {w^(i)_pα +k}_i=1^3 as a solution to the problem
{[ ∂_t w_pα +k^(i) + ( v^(i)_i(x_i) w^(i)_pα +k)^'
= ( w^(i)_pα +k-1)^'', (x_i, t) ∈ (0, ℓ_i)× (0, T), i∈{1, 2, 3},; ∑_i=1^3 v_i h_i^2 w_p α+ k ^(i) (0,t) = d_pα+k(t) for any t ∈ [0, T],; w^(1)_pα+k(ℓ_1,t) = 0 for any t ∈ [0, T], w^(i)_pα+k(x_i,0) =0 for any x_i∈ [0, ℓ_i], i∈{1, 2, 3}; ].
* then for each i∈{1,2,3} the coefficient u^(i)_pα + k+1 as a solution to the corresponding Neumann problem
(<ref>), (<ref>), (<ref>);
* and finally, the coefficient N_pα +k as a solution to the problem (<ref>), which has
the following asymptotics uniform with respect to t∈ [0, T]:
N_pα +k(ξ,t) = w^(i)_pα +k(0,t) + Ψ^(i)_pα +k(ξ_i,t) + 𝒪(exp(-β_0 ξ_i))
ξ_i→+∞, ξ∈Ξ^(i),
where Ψ^(i)_pα +k is defined in (<ref>), i={1,2,3}, and β_0 >0.
In addition, it is easy to verify that all these coefficients vanish at t=0.
Unfortunately, the regular ansatzes 𝒰_ε^(2) and 𝒰_ε^(3)
do not satisfy the boundary conditions at the bases Υ_ε^(2) (ℓ_2) and Υ_ε^(3) (ℓ_3), respectively. Therefore, we must run the boundary-layer parts (<ref>) of the asymptotics, compensating the residuals of the regular one at Υ_ε^(2) (ℓ_2) and Υ_ε^(3) (ℓ_3).
It is additionally assumed that the component v_i^(i) of the vector-valued function V_ε^(i) is independent of the variable x_i in a neighborhood of Υ_ε^(i)(ℓ_i) (i∈{2, 3}). This is a technical assumption. In the general case, the function v_i^(i) must be expanded in a Taylor series in a neighborhood of the point ℓ_i.
Substituting (<ref>) into the differential equation and boundary conditions of the problem (<ref>) in
a neighborhood the base Υ_ε^(i) (ℓ_i) of the thin cylinder Ω_ε^(i) and collecting coefficients at the same powers of ε, we get the following problems:
{[ Δ_ηΠ_pα +k^(i)(η,t) + v_i^(i)(ℓ_i) ∂_η_iΠ_pα +k^(i)(η,t) = ∂_tΠ_pα +k-1^(i)(η,t), η∈ℭ_+^(i),; ∂_ν_η_1Π_pα +k^(i)(η,t) = 0, η∈∂ℭ_+^(i)∖Υ^(i),; Π_pα +k^(i)|_η_i = 0 = Φ^(i)_pα +k(t) , η_i∈Υ^(i),; Π_pα +k^(i)(η,t) → 0 as η_i→+∞, ].
for k∈ N_0 ∪{-1}, p∈{0, 1}, i∈{2, 3}. In this sequence of problems, η=(η_1, η_2, η_3), η_i = (ℓ_i - x_i)/ε, η̅_i=x̅_i/ε,
Υ^(i):={η̅_i ∈ R^2 : |η̅_i | < h_i}, ℭ_+^(i):={η : η̅_i ∈Υ^(i), η_i∈(0,+∞)},
Φ^(i)_pα +k(t) := δ_pα +k, 0 q_i(t) - w_pα +k^(i)(ℓ_i,t), Π_-2^(i)≡Π_-1^(i)≡Π_α-2^(i)≡ 0.
Using the Fourier method, we find solutions to these problems step by step, e.g.,
Π_α-1^(i)(η_i,t) = Φ^(i)_α-1(t) e^- v_i^(i)(ℓ_i) η_i, Π_0^(i)(η_i,t) =
Φ^(i)_0(t) e^- v_i^(i)(ℓ_i) η_i ,
Π_α^(i)(η_i,t) = (Φ^(i)_α(t) - ∂_t Φ^(i)_α-1(t)/v_i^(i)(ℓ_i) η_i ) e^-v_i^(i)(ℓ_i) η_i .
Since v_i^(i)(ℓ_i) > 0 and the factors in front of e^- v_i^(i)(ℓ_i) η_i are bounded with respect to t∈ [0,T],
Π_p α +k^(i)(η_i,t) = 𝒪(e^- θ_i η_i) as η_i → +∞
uniformly in t∈ [0, T], where θ_i = v_i^(i)(ℓ_i)/2. Obviously, Π_p α +k^(i)|_t=0 =0.
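The explicit boundary-layer terms above are easy to tabulate; the snippet below (with assumed values of the mismatches Φ and of v_i^(i)(ℓ_i)) illustrates the uniform exponential decay with rate at least θ_i = v_i^(i)(ℓ_i)/2 used in this estimate.

import numpy as np

v_li = 1.0                                    # assumed value of v_i^(i)(ell_i) > 0
Phi_am1, Phi_a, dPhi_am1 = 0.7, -0.1, 0.05    # hypothetical mismatches Phi_{alpha-1}, Phi_alpha and d/dt Phi_{alpha-1}

eta = np.linspace(0.0, 10.0, 6)
Pi_am1 = Phi_am1 * np.exp(-v_li * eta)
Pi_a = (Phi_a - dPhi_am1 / v_li * eta) * np.exp(-v_li * eta)
print(Pi_am1, Pi_a, np.exp(-0.5 * v_li * eta), sep="\n")   # both decay at least like exp(-theta*eta), theta = v/2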
Thus, we can successively determine all coefficients of the ansatzes (<ref>) – (<ref>).
§ JUSTIFICATION AND ASYMPTOTIC ESTIMATES
With the help of the smooth cut-off functions χ_ℓ_0 (see (<ref>)) and
χ_δ^(i) (x_i) =
{[ 1, if x_i ≥ℓ_i - δ,; 0, if x_i ≤ℓ_i - 2δ, ].
i ∈{2, 3},
where δ is a sufficiently small fixed positive number such that χ_δ^(i) vanishes in the support of φ_ε^(i),
and the ansatzes (<ref>) – (<ref>), we construct the following series in Ω_ε:
𝔘^(ε) :=
∑_k=0^+∞∑_p=0^1ε^pα +k -1 {[ 𝒰^(1)_pα +k -1 (x, t; ε) in Ω^(1)_ε,3ℓ_0,γ,; 𝒰^(i)_pα +k -1(x, t; ε) + 𝒫^(i)_pα +k -1(x_i, t; ε) in Ω^(i)_ε,3ℓ_0,γ, i∈{2,3},; N_pα +k -1( xε, t ) in Ω^(0)_ε, γ,; χ_ℓ_0(x_i/ε^γ) w_pα +k -1^(i) (x_i) +
(1- χ_ℓ_0(x_i/ε^γ)) N_pα +k -1 in Ω^(i)_ε,2ℓ_0,3ℓ_0,γ, i∈{1,2,3}, ].
where γ is a fixed number from the interval (2/3, 1),
𝒰^(i)_pα +k -1(x, t; ε) :=
w_pα +k -1^(i) (x_i) + u_pα +k -1^(i)( x_i, x_iε, t),
𝒫^(i)_pα +k -1(x_i, t; ε) := χ_δ^(i)(x_i) Π_p α +k^(i)(ℓ_i - x_iε, t ),
and the parts of the thin graph-like junction Ω_ε are defined as follows
Ω^(i)_ε,3ℓ_0,γ := Ω^(i)_ε ∩ {x : x_i ∈ [3ℓ_0 ε^γ, ℓ_i]}, Ω^(i)_ε,2ℓ_0,3ℓ_0,γ := Ω^(i)_ε ∩ {x : x_i ∈ [2ℓ_0 ε^γ, 3ℓ_0 ε^γ]},
Ω^(0)_ε, γ:=Ω^(0)_ε⋃(⋃_i=1^3 Ω^(i)_ε∩{x : x_i∈ [εℓ_0, 2ℓ_0 ε^γ]}).
Take any M ∈ N and M > 3/2(1 - ⌊α⌋) and denote by 𝔘_M^(ε) the partial sum of (<ref>) (see Definition <ref>). Based on the properties of the coefficients of the series (<ref>) – (<ref>) proved above, we have
𝔘_M^(ε)|_t=0 = 0, 𝔘_M^(ε)|_x_i=ℓ_i = q_i(t), i∈{1, 2, 3}.
In Ω^(1)_ε,3ℓ_0,γ
𝔘_M^(ε)(x,t) = ∑_k=0^Mε^α +k -1 (w_α +k -1^(1) (x_1,t) + u_α +k -1^(1)( x_1, x_1ε, t))
+ ∑_k= 1^M +⌊α⌋ε^k -1 (w_k -1^(1) (x_1,t) + u_k -1^(1)( x_1, x_1ε, t)) ,
and due to (<ref>) and (<ref>) this partial sum satisfies the differential equation
∂_t 𝔘_M^(ε) - ε Δ_x 𝔘_M^(ε) +
div_x ( V_ε 𝔘_M^(ε))
= ε^M +⌊α⌋ -1 ℛ^(1)_M +⌊α⌋ -1 + ε^α +M -1 ℛ^(1)_α +M -1 in Ω^(1)_ε, γ× (0,T)
where
ℛ^(1)_𝔭 -1 (x_1,ξ̅_1, t) =
( v^(1)_1(x_1) u^(1)_𝔭 - 1(x_1,ξ̅_1, t) )^'
+ ∂_t u_𝔭-1^(1)(x_1,ξ̅_1, t) - ( u^(1)_𝔭-2(x_1,ξ̅_1, t) )^''
+ div_ξ̅_1( V^(1)(x_1, ξ̅_1) [ w^(1)_𝔭-1(x_1, t) + u^(1)_𝔭-1(x_1,ξ̅_1, t) ])
- ε( u^(1)_𝔭-1(x_1,ξ̅_1, t) )^''
for 𝔭∈{M +⌊α⌋, α +M }.
Thanks to our assumptions,
sup_Ω^(1)_ε,γ× (0,T) |ℛ^(1)_𝔭 -1 (x_1,x̅_1/ε, t)| ≤ C^(1)_M,
where the constant C^(1)_M is independent of ε.
Hereinafter, all constants in inequalities are independent of the parameter ε.
In Ω^(2)_ε,3ℓ_0,γ and Ω^(3)_ε,3ℓ_0,γ the partial sum 𝔘_M^(ε)
additionally contains the partial sum of the boundary-layer ansatz (see (<ref>)). Therefore, residuals from this partial sum in the corresponding differential equation are the sum of the terms ∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 -1 ℛ^(i)_𝔭 -1 , where ℛ^(i)_𝔭 -1 is estimated in the same way as in (<ref>) (i∈{2,3}), and
χ_δ^(i) (x_i)∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 -1 ∂_t Π_𝔭 -1^(i)
+∑_k=0^Mε^α +k -1 ( 2 (χ_δ^(i))' ∂_ξ_iΠ_α +k -1 ^(i) + v_i^(i)(ℓ_i) (χ_δ^(i))' Π_α +k -1 ^(i) - ε (χ_δ^(3))^''Π_α +k -1 ^(i))
+ ∑_k= 1^M +⌊α⌋ε^k -1( 2 (χ_δ^(i))' ∂_ξ_iΠ_k -1 ^(i) + v_i^(i)(ℓ_i) (χ_δ^(i))' Π_k -1 ^(i) - ε (χ_δ^(3))^''Π_k -1 ^(i)).
The supports of summands in the second and third lines coincide with supp((χ_δ^(i))'), where
the functions {Π_pα +k -1 ^(i)} are exponentially small as ε tends to zero (see (<ref>)). Therefore, the residuals from 𝔘_M^(ε) in the differential equation in Ω^(i)_ε,3ℓ_0,γ are also of order
𝒪(ε^M +⌊α⌋ -1) + 𝒪(ε^α +M -1 )
for sufficiently small ε.
Using (<ref>) and taking the zero Neumann boundary condition for the solutions {Π_pα +k -1 ^(i)} (see (<ref>)) into account, we have
- ε ∂_ν_ε𝔘_M^(ε) + 𝔘_M^(ε) V_ε·ν_ε = ε^αφ^(i)_ε + ∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 Φ_𝔭^(i) on Γ^(i)_ε, γ× (0, T),
where the lateral surfaces Γ^(i)_ε, γ := Γ^(i) _ε∩{x x_i ∈ [3 ℓ_0 ε^γ, ℓ_i) }, i∈{1, 2, 3},
Φ_𝔭^(i)(x_i,ξ̅_i, t) := ( w^(i)_𝔭 -1 + u^(i)_𝔭 -1) V^(i)·ν̅_ξ̅_i,
and there are positive constants ε_0 and C̃^(i)_M such that for all ε∈ (0, ε_0)
sup_Γ^(i)_ε, γ× (0,T) |Φ_𝔭^(i)(x_i,x̅_i/ε, t)| ≤C̃^(i)_M.
In addition, the functions {Φ_𝔭^(i)}_i=1^3 vanish at circular strips on the lateral surfaces of the thin cylinders near their bases {Υ_ε^(i) (ℓ_i)}_i=1^3, since the functions {V^(i)} and {φ_ε^(i)} vanish there.
In virtue of (<ref>), we get
∂_t 𝔘_M^(ε) - ε Δ_x 𝔘_M^(ε) +
div_x ( V_ε 𝔘_M^(ε))
= -∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 -1 ∂_t N_𝔭 -1 in Ω^(0)_ε, γ× (0,T),
and
- ε ∂_ν_ε𝔘_M^(ε) = ε^αφ^(0)_ε on (∂Ω^(0)_ε, γ∖{⋃_i=1^3 Υ_ε^(i) (2ℓ_0 ε^γ)}) × (0, T).
Now it remains to calculate and estimate residuals left by 𝔘_M^(ε) in the differential equations in the
thin and small cylinders Ω^(i)_ε,2ℓ_0,3ℓ_0,γ, i∈{1, 2, 3},
and in the boundary conditions on their lateral surfaces. Since φ^(i)_ε, V^(i) vanish there,
- ε ∂_ν_ε u_ε + u_ε V_ε·ν_ε = 0
on the corresponding lateral surface of Ω^(i)_ε,2ℓ_0,3ℓ_0,γ.
Similar to (<ref>) and (<ref>), but now taking into account Remark <ref>, we find
∂_t 𝔘_M^(ε) - ε Δ_x 𝔘_M^(ε) +
div_x ( V_ε 𝔘_M^(ε))
= -
∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 -1 [εχ_ℓ_0(x_i/ε^γ) ( w^(i)_𝔭 -1(x_i,t))^''
+ (1- χ_ℓ_0(x_i/ε^γ)) ∂_t N_𝔭 -1]
- χ”_ℓ_0(x_i/ε^γ) ∑_k=1^M +⌊α⌋ε^k -2 γ(w_k -1^(i) (x_i) - N_k -1) + v_i χ'_ℓ_0(x_i/ε^γ) ∑_k=1^M +⌊α⌋ε^k -1-γ(w_k -1^(i) (x_i) - N_k -1)
- 2 χ'_ℓ_0(x_i/ε^γ) ∑_k=1^M +⌊α⌋ε^k -γ(∂_x_iw_k -1^(i) (x_i) - ε^-1∂_ξ_i N_k -1)
- χ”_ℓ_0(x_i/ε^γ) ∑_k=0^Mε^α +k -2 γ(w_α +k -1^(i) (x_i) - N_α +k -1) + v_i χ'_ℓ_0(x_i/ε^γ) ∑_k=0^Mε^α +k -1-γ(w_α +k -1^(i) (x_i) - N_α +k -1)
- 2 χ_ℓ_0'(x_i/ε^γ) ∑_k=0^Mε^α +k -γ(∂_x_iw_α +k -1^(i) (x_i) - ε^-1∂_ξ_i N_α +k -1)
in Ω^(i)_ε,2ℓ_0,3ℓ_0,γ.
The terms in the first line of the right-hand side of (<ref>) are of order
𝒪(ε^M +⌊α⌋ -1) + 𝒪(ε^α +M -1 ).
The rest of the terms are localized in the support of χ'_ℓ_0(x_i/ε^γ). Therefore, using the Taylor formula for the functions {w_pα +k -1^(i)} at the point x_i=0 and the formula (<ref>), the summands in the other lines of (<ref>) can be rewritten as follows
χ”_ℓ_0(x_iε^γ) ∑_k=1^ M +⌊α⌋ε^k -2 γN_k -1 + 𝒪(ε^γ(M +⌊α⌋ - 2)+1)
- v_i χ'_ℓ_0(x_iε^γ) ∑_k=1^M +⌊α⌋ε^k -1-γN_k -1 + 𝒪(ε^γ (M +⌊α⌋ -1))
+ 2 χ'_ℓ_0(x_iε^γ) ∑_k=0^M +⌊α⌋ - 1ε^k-γ∂_ξ_iN_k
+ 𝒪(ε^γ (M +⌊α⌋ -2)+1)
χ”_ℓ_0(x_iε^γ) ∑_k=0^Mε^α +k -2 γN_α +k -1 + 𝒪(ε^α + γ M -γ)
- v_i χ'_ℓ_0(x_iε^γ) ∑_k=0^Mε^α +k -1-γN_α +k -1 + 𝒪(ε^α + γ M -1).
+ 2 χ'_ℓ_0(x_iε^γ) ∑_k=-1^M-1ε^k-γ∂_ξ_iN_α +k
+ 𝒪(ε^α + γ M -γ).
Taking into account (<ref>) and (<ref>), the maxima of |N_𝔪| and |∂_ξ_iN_𝔪| over
(Ω_ε^(i)∩{ x: x_i∈ [2ℓ_0ε^γ, 3ℓ_0ε^γ] }) × [0, T]
are of order exp(-β_0 2 ℓ_0 ε^γ -1), i.e.,
these terms exponentially decrease as the parameter ε tends to zero. Thus, the right-hand side of (<ref>) is of order
𝒪(ε^γ (M +⌊α⌋ -1)) + 𝒪(ε^α + γ M -1),
and these values are infinitesimal as ε→ 0, since M ∈ N, M > 3/2(1 - ⌊α⌋), and
γ∈ (2/3, 1). In addition, if γ > (1 - α)/(1 -⌊α⌋), then
γ (M +⌊α⌋ -1) < α + γ M -1, and therefore,
𝒪(ε^γ (M +⌊α⌋ -1)) + 𝒪(ε^α + γ M -1)=
𝒪(ε^γ (M +⌊α⌋ -1)).
In what follows, we consider the parameter
γ∈( max{2/3 , (1 - α)/(1 -⌊α⌋)} , 1 ).
It should be noted here that (1 - α)/(1 -⌊α⌋) < 1 and
lim_α→ -∞ (1 - α)/(1 -⌊α⌋) = 1.
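The admissible range for γ is easy to tabulate. The short sketch below (purely illustrative) evaluates the lower bound max{2/3, (1−α)/(1−⌊α⌋)} for several α and shows how it approaches 1 as α → −∞:

import math

def gamma_lower_bound(alpha):
    # Lower end of the admissible interval for gamma (the upper end is 1)
    fl = math.floor(alpha)
    return max(2.0 / 3.0, (1.0 - alpha) / (1.0 - fl))

for alpha in (0.5, -0.5, -3.7, -20.2):
    print(alpha, gamma_lower_bound(alpha))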
As was noted in Remark <ref>, constants in inequalities are independent of ε, but they depend on the values of the parameters M, α, γ, and i. In what follows we indicate only the dependence on M and use the same notation C_M for all constants.
Let the assumptions made in Section <ref>, in Remark <ref> and in (<ref>) hold. Then the series (<ref>) is the asymptotic expansion for the solution u_ε to the problem (<ref>) in both the Banach space C(Ω_ε× [0,T]) and the Sobolev space L^2((0,T); H^1(Ω_ε)); and for any M∈ N and M > 3/2(1 - ⌊α⌋) there exist C_M>0 and ε_0>0 such that for all ε∈(0, ε_0) the estimates
u_ε - 𝔘_P^(ε)_C(Ω_ε× [0,T])≤ C_M ε^γ (M +⌊α⌋ -1)
and
1√(|Ω_ε|) ∇_x(u_ε - 𝔘_P+1^(ε))_L^2(Ω_ε× (0, T))≤ C_M ε^γ (M +⌊α⌋ -1) - 1/2
are satisfied, where 𝔘_P^(ε) is the partial sum of the series (<ref>), P = ⌊γ (M +⌊α⌋ -1) ⌋ +1 - ⌊α⌋, and |Ω_ε| is the Lebesgue measure of Ω_ε.
1. From the calculations above it follows that the partial sum 𝔘_M^(ε) leaves the largest residuals in Ω^(i)_ε,2ℓ_0,3ℓ_0,γ, i∈{1, 2, 3}.
Therefore, taking into account the estimates of residuals carried out in this section, the difference between the partial sum 𝔘_M^(ε) of (<ref>) and the solution to the problem (<ref>) satisfies the following relations:
∂_t(𝔘_M^(ε) - u_ε) - ε Δ_x( 𝔘_M^(ε) - u_ε) +
div_x ( V_ε^(i) (𝔘_M^(ε) - u_ε))
= ε^γ (M +⌊α⌋ -1)ℛ^(i)_γ (M +⌊α⌋ -1) in Ω_ε^(i)× (0,T),
- ε ∂_ν_ε(𝔘_M^(ε) - u_ε) + (𝔘_M^(ε) - u_ε) V_ε·ν_ε =
∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 Φ_𝔭^(i) on Γ^(i)_ε, γ× (0, T),
(𝔘_M^(ε) - u_ε)|_x_i= ℓ_i
= 0 on Υ_ε^(i) (ℓ_i)× (0,T),
i∈{1,2,3},
∂_t(𝔘_M^(ε) - u_ε) - ε Δ_x (𝔘_M^(ε) - u_ε) +
V_ε^(0)·∇_x(𝔘_M^(ε) - u_ε) =
-∑_𝔭∈{M +⌊α⌋, α +M }ε^𝔭 -1 ℛ^(0)_𝔭-1 in Ω^(0)_ε, γ× (0,T),
- ε ∂_ν_ε(𝔘_M^(ε) - u_ε) = 0 on Γ_ε^(0)× (0,T),
(𝔘_M^(ε) - u_ε)|_t=0 = 0 on Ω_ε,
where ℛ^(0)_𝔭-1 = - ∂_t N_𝔭 -1 for
𝔭∈{M +⌊α⌋, α +M }, the residual Φ_𝔭^(i) is determined in (<ref>) and satisfied the estimate (<ref>),
sup_Ω^(i)_ε× (0,T)|ℛ^(i)_γ (M +⌊α⌋ -1)(x, t; ε)| ≤ C_M.
From the maximum principle proved in <cit.> for parabolic problems in thin graph-like junctions, we obtain
u_ε - 𝔘_M^(ε)_C(Ω_ε× [0,T]) := max_Ω_ε× [0, T] |u_ε - 𝔘_M^(ε)| ≤ C_M ε^γ (M +⌊α⌋ -1).
Since M is an arbitrary natural number and M > 3/2(1 - ⌊α⌋), the inequality (<ref>) means, based on Definition <ref>, that the series (<ref>) is the asymptotic expansion of the solution u_ε in C(Ω_ε× [0,T]).
The partial sum 𝔘_M^(ε) contains terms that are infinitesimal with respect to ε^γ (M +⌊α⌋ -1) for ε→ 0. Therefore, from (<ref>) it follows the inequality
u_ε - 𝔘_P^(ε)_C(Ω_ε× [0,T])≤ C_M ε^γ (M +⌊α⌋ -1),
where P = ⌊γ (M +⌊α⌋ -1) ⌋ +1 - ⌊α⌋; it is easy to verify that 1 - ⌊α⌋≤ P ≤ M -1.
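The relation 1 − ⌊α⌋ ≤ P ≤ M − 1 can also be checked numerically; the small sketch below (sample admissible parameters only) computes P for a few triples (α, γ, M):

import math

def P_index(alpha, gamma, M):
    fl = math.floor(alpha)
    return math.floor(gamma * (M + fl - 1)) + 1 - fl

for alpha, gamma, M in [(0.5, 0.7, 4), (-1.3, 0.9, 6), (-3.7, 0.95, 8)]:
    fl = math.floor(alpha)
    P = P_index(alpha, gamma, M)
    print(alpha, P, 1 - fl <= P <= M - 1)   # prints True for these admissible choices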
2. We multiply the differential equations (<ref>) and (<ref>) by U_ε := 𝔘_M^(ε) - u_ε and integrate them over the corresponding domain and over (0, τ), where τ is an arbitrary number from (0, T). Integrating by parts and taking into account the boundary conditions and the initial condition for U_ε, (<ref>), (<ref>),
and the volume of the domains being integrated over, we derive
∇_x U_ε^2_L^2(Ω_ε× (0, τ))≤ 1/ε∫_0^τ(∫_Ω^(0)_ε U_ε V_ε·∇_x U_ε dx + ∑_i=1^3∫_Ω^(i)_ε U_ε V_ε·∇_x U_ε dx) dt
+ C_M (ε^M +⌊α⌋ + ε^γ ( M +⌊α⌋) +1) U_ε_C(Ω_ε× [0,T]).
Owing to the assumptions on the vector field V_ε, in particular the incompressibility of V_ε^(0) in Ω^(0)_ε
and the Dirichlet conditions for U_ε on {Υ_ε^(i) (ℓ_i)}_i=1^3,
1/ε|∫_Ω^(0)_ε U_ε V_ε·∇_x U_ε dx + ∑_i=1^3∫_Ω^(i)_ε U_ε V_ε·∇_x U_ε dx|
= 1/ε | - 1/2∑_i=1^3∫_Ω^(i)_ε U^2_ε ∂_x_iv^(i)_i dx
+ ε∑_i=1^3∫_Ω^(i)_ε U_ε V^(i)·∇_x̅_i U_ε dx|
≤ C ε U_ε^2_C(Ω_ε× [0,T]) + 1/2 ∇_x U_ε^2_L^2(Ω_ε× (0, τ))
Using (<ref>), we derive from (<ref>) and (<ref>) the inequality
∇_x(u_ε - 𝔘_M^(ε)) _L^2(Ω_ε× (0, T))≤ C_M ε^γ (M +⌊α⌋ -1) + 1/2 .
The inequality (<ref>) implies that the series (<ref>) is the asymptotic expansion of the solution u_ε in the Sobolev space L^2 ((0,T); H^1(Ω_ε)).
It should be emphasized that u _L^2(Ω_ε)≤ C ∇_x u _L^2(Ω_ε)
for any function from the Sobolev space H^1(Ω_ε) whose traces
on the bases {Υ_ε^(i)(ℓ_i)}_i=1^3 are equal to zero (for more detail see <cit.>).
The estimate (<ref>) then follows from (<ref>), with the rescaled L^2-norm on the left-hand side.
For applied problems it is not necessary to construct a complete asymptotic expansion of the solution. It is sufficient to approximate the solution to the required accuracy. Weaker assumptions about the smoothness of the coefficients and the given functions are then required.
For instance, let us consider the case α∈ (0, 1). Then, writing (<ref>) and (<ref>) for M=2, we get
u_ε - 𝔘_1^(ε)_C(Ω_ε× [0,T])≤ C_2 ε^γ
and
1√(|Ω_ε|) ∇_x(u_ε - 𝔘_2^(ε))_L^2(Ω_ε× (0, T))≤ C_2 ε^γ - 1/2
where γ is a fixed number from the interval ( max{2/3 , 1 - α} , 1 ).
The partial sum
𝔘_1^(ε) =
{[ ε^α -1 w_α -1^(1) (x_1, t) + w_0^(1) (x_1, t) + ε^α(w_α^(1) (x_1, t) +
u_α^(1)( x_1, x_1ε, t)) , x ∈Ω^(1)_ε,3ℓ_0,γ,; ε^α -1 w_α -1^(i) (x_i, t) + w_0^(i) (x_i, t) + ε^α(w_α^(i) (x_i, t) +
u_α^(i)( x_i, x_iε, t)) ; + χ_δ^(i) (x_i)( ε^α -1Π_α-1^(i)(ℓ_i - x_iε, t ) + Π_0^(i)(ℓ_i - x_iε, t ) + ε^αΠ_α^(i)), x ∈Ω^(i)_ε,3ℓ_0,γ, i∈{2, 3},; ε^α -1 N_α-1(x/ε, t) + N_0(x/ε, t) + ε^α N_α(x/ε, t), x ∈Ω^(0)_ε, γ,; χ_ℓ_0^(i)(x_i/ε^γ) (
ε^α -1 w_α -1^(i) (x_i, t) + w_0^(i) (x_i, t) + ε^α w_α^(i) (x_i, t) ) ; +
(1- χ_ℓ_0^(i)(x_i/ε^γ)) (
ε^α -1 N_α-1(x/ε, t) + N_0(x/ε, t) + ε^α N_α(x/ε, t)), x ∈Ω^(i)_ε,2ℓ_0,3ℓ_0,γ, i∈{1,2,3}, ].
where the coefficients {w_α -1^(i)}_i=1^3, {w_0^(i)}_i=1^3 and {w_α^(i)}_i=1^3 form classical solutions to the problems (<ref>), (<ref>) and (<ref>), respectively; the terms Π_α-1^(i),
Π_0^(i), and Π_α are determined in (<ref>); and N_α-1, N_0, and N_α are solutions to the problem (<ref>) for the corresponding values of the indices p and k.
To obtain the estimate (<ref>), it is necessary to construct the partial sum 𝔘_2^(ε) that additionally contains the coefficients {w_1^(i)}_i=1^3, {w_α+1^(i)}_i=1^3, {u_1^(i)}_i=1^3 and {u_α +1^(i)}_i=1^3.
This means that {w_0^(i)}_i=1^3, {w_α^(i)}_i=1^3 and {w_α -1^(i)}_i=1^3 must have C^2 and C^3
smoothness, respectively, on the corresponding edges of the graph (see Remark <ref>).
To derive (<ref>), the partial sum 𝔘_3^(ε) should be constructed and, as a result of which additional smoothness of the coefficients is required. Therefore, the following statements hold.
Let α∈ (0,1) and, in addition to the assumptions made in Section <ref>, let the functions {φ^(i)}_i=1^3 belong to the smoothness class C^4 in their domains, q_1 ∈ C^2([0,T]), and
∂_t φ^(0)|_t=0=∂^2_ttφ^(0)|_t=0=0, q”_1(0)=0.
Then the inequality (<ref>) holds.
Let α∈ (0,1) and, in addition to the assumptions made in Section <ref>, the functions {φ^(i)}_i=1^3 belong to the smoothness class C^5 in their domains, φ^(0) belongs to C^3 in t∈ [0,T], q_1 ∈ C^3([0,T]), v^(i)_i ∈ C^4([0,ℓ_i]), i∈{1, 2, 3}, and
∂_t φ^(0)|_t=0=∂^2_ttφ^(0)|_t=0= ∂^3_tttφ^(0)|_t=0=0, q”_1(0)=q”'_1(0)=0.
Then the inequality (<ref>) holds.
§ CONCLUDING REMARKS
1. We have studied the influence of large boundary interactions (α < 1) on the asymptotic behaviour of the solution u_ε to the problem (<ref>). The constructed asymptotic expansion has revealed the dependence of the solution on the parameters ε and α and other parameters of the problem (the geometric structure of the thin junction, including the local geometric irregularity of the node, through the constants {h_i} and the values {d_pα +k(t)} in the relations (<ref>) and (<ref>)).
The principal part of the Puiseux asymptotic expansion (<ref>) shows that the physical processes on the lateral surfaces of thin cylinders do cause cardinal changes in the global behavior of the solution (it becomes larger as the parameter ε decreases).
From a physical point of view, it is advisable to consider the parameter α from the interval (0,1), since for smaller values of the parameter α the solution becomes too large, indicating the instability of the transport process in this case.
The approximation 𝔘_1^(ε) constructed for the case α∈ (0,1) indicates that
* the principal coefficients {ε^α -1 w_α -1^(i)}_i=1^3 are directly affected by the boundary interactions {φ^(i)}_i=1^3 on the lateral surfaces of the thin cylinders through the solutions {u_α^(i)}_i=1^3, respectively;
* the coefficients {w_0^(i)}_i=1^3 take into account the inhomogeneous Dirichlet conditions on the bases of the thin cylinders;
* and only the coefficients {ε^α w_α^(i)}_i=1^3 begin to feel the influence of the node boundary condition
and physical processes inside the node through the value d_α(t) (see (<ref>)), which depends on both the interaction φ^(0) on the node boundary and on the solution N_α-1.
The node-layer solutions N_α-1, N_0, and N_α ensure the smoothness of the approximation 𝔘_1^(ε) at the node, while the boundary-layer solutions Π^(i)_α-1, Π^(i)_0, and Π^(i)_α ensure the fulfillment of the boundary condition at the base of the thin cylinder Ω^(i)_ε, i∈{2, 3}.
The estimates (<ref>) and (<ref>), proved in Theorem <ref>, allow us to construct approximations of the solution with a given accuracy with respect to the small parameter ε, which indicates the efficiency and usefulness of the proposed asymptotic approach.
2. In the case when different intensities of boundary processes are observed in different parts of the boundary of a thin network, it is necessary to consider the corresponding intensity parameters. For example, α_1, α_2, α_3 respectively for the lateral surfaces of the thin cylinders Ω_ε^(1), Ω_ε^(2), Ω_ε^(3), and α_0 for the node boundary. Then, for instance, the regular part of the asymptotics in each thin cylinder Ω^(i)_ε will have the form
∑_i=0^3∑_k=0^+∞ε^α_i +k -1 (w_α_i +k -1^(i) (x_i) + u_α_i +k -1^(i)( x_i, x_iε, t ) )
+
∑_k=0^+∞ε^k (w_k^(i) (x_i) + u_k^(i)( x_i, x_iε, t ) ).
The ansatz (<ref>) shows that if α_0 is less than α_1, α_2, α_3, then activities at the node boundary can cause crucial changes in the entire transport process in a thin network.
§ ACKNOWLEDGMENTS
The authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project Number 327154368 – SFB 1313.
|
http://arxiv.org/abs/2307.03114v1
|
20230706163319
|
ANN-MoC Method for Solving Unidimensional Neutral Particle Transport Problems
|
[
"P. H. A. Konzen"
] |
math.NA
|
[
"math.NA",
"cs.NA"
] |
ANN-MoC Method for Solving Unidimensional Neutral Particle Transport Problems
Pedro H.A. Konzen
IME/UFRGS, Porto Alegre, RS, Brazil
August 1, 2023
=============================================================================
Neutral particle transport problems are fundamental in the modeling of energy transfer by radiation (photons) and by neutrons, with many important applications. In this work, the novel ANN-MoC method for solving unidimensional neutral particle transport problems is presented. Following the Discrete Ordinates Method (DOM) and decoupling with a Source Iteration (SI) scheme, the proposed method applies Artificial Neural Networks (ANNs) together with the Method of Characteristics (MoC) to solve the transport problem. Once the SI scheme converges, the method gives an ANN that estimates the average flux of particles at any point in the computational domain. Details of the proposed method are given and results for two test cases are discussed. The achieved results show the potential of this novel approach for solving neutral particle transport problems.
Keywords. Artificial Neural Networks, Method of Characteristics, Neutral Particle Transport
§ INTRODUCTION
Photon and neutron transport are important examples of neutral particle transport phenomena. The first appears in many applications, mainly those involving energy transport via radiative transfer <cit.>. Practical applications include the design of industrial furnaces, combustion chambers, and forming processes such as glass and ceramics manufacturing <cit.>. Other applications are found in the fields of astrophysics <cit.>, medical optics <cit.>, and the development of micro-electro-mechanical systems <cit.>. Neutron transport also has applications in medicine and, clearly, in nuclear energy generation <cit.>.
In this work, the neutral particle transport is assumed to be modeled in a unidimensional space domain 𝒟 = [a, b] as follows
∀μ∈ [-1, 1]: μ·∂/∂ x I(x,μ) + σ_t I(x,μ) = σ_s/2∫_-1^1I(x,μ') dμ' + q(x,μ), ∀ x∈𝒟,
∀μ>0: I(a,μ) = I_a,
∀μ<0: I(b,μ) = I_b,
where I(x,μ) is the angular flux of particles at the point x∈𝒟=[a, b] and in the direction μ∈ [-1, 1], σ_t is the total absorption coefficient and σ_s the scattering coefficient, q(x,μ), I_a and I_b are, respectively, the sources in 𝒟 and on its boundary. The average flux of particles is given by
Ψ(x) := 1/2∫_-1^1I(x,μ) dμ.
Many solution approaches are available for problem (<ref>) (see, for instance, <cit.>). One of the most widely applied is the so-called Discrete Ordinates Method (DOM, <cit.>). By considering a numerical quadrature {μ_i,w_i}_i=1^N, the problem (<ref>) is approximated by a system of equations only for the discrete directions μ_i, i=1,2,…,N. The equations can be further decoupled by using the Source Iteration (SI) strategy, where the system is iteratively solved for approximations Ψ(x)≈Ψ^(j)(x), j=1,2,3,…, until a given stopping criterion is met. At each SI iterate, one has a decoupled system of N linear first-order differential equations, which can be solved by the Method of Characteristics (MoC, <cit.>). To do so, one needs to compute an integral depending on the approximation of Ψ.
In this work, we present a novel method to solve (<ref>), which integrates an Artificial Neural Network (ANN, <cit.>) into the DOM-MoC approach. The main idea is to train an ANN to estimate the average flux Ψ^(j) at each SI iterate. It is a meshless method, in the sense that it does not rely on a fixed domain mesh. After convergence, the method gives an ANN that estimates Ψ(x) for all x∈𝒟.
§ THE ANN-MOC METHOD
Following the Discrete Ordinates Method (DOM), we assume a numerical quadrature {μ_i, w_i}_i=1^N, and the Source Iteration (SI) approximation of problem (<ref>) is given as follows
i=1,…,N: μ_i·∂/∂ x I^(j)_i(x) + σ_t I^(j)_i(x) = σ_sΨ^(j-1)(x) + q(x,μ_i), ∀ x∈𝒟,
μ_i>0: I^(j)_i(a) = I_a,
μ_i<0: I^(j)_i(b) = I_b,
where I^(j)_i(x) ≈ I(x,μ_i) denotes the approximation of the angular flux at the j-th iterate, j=1,2,3,…, and Ψ^(0)(x) is a given initial approximation for Ψ(x). Then, the j-th approximation of the average flux is given by
Ψ^(j)(x) = 1/2∑_i=1^N w_iI^(j)_i(x)
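As an aside, this angular discretization is straightforward to set up numerically. The following is a minimal Python sketch (illustrative names, Gauss-Legendre quadrature with N=100 nodes as in the experiments reported below), not the paper's implementation:

import numpy as np

# Discrete-ordinates quadrature {mu_i, w_i} on [-1, 1] (Gauss-Legendre nodes and weights).
N = 100
mu, w = np.polynomial.legendre.leggauss(N)

def average_flux(I):
    # I has shape (N, n_points): angular fluxes I_i evaluated at the sample points.
    # Returns Psi = 1/2 * sum_i w_i * I_i at those points.
    return 0.5 * (w[:, None] * I).sum(axis=0)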
Now we use the Method of Characteristics (MoC) by applying the change of variables x(s) = x_0 + s·μ_i. Then, for each i=1,…,N, equation (<ref>) can be rewritten as follows
d/dsI_i^(j)(s) + σ_t I^(j)_i(s) = σ_sΨ^(j-1)(s) + q(s,μ_i),
where I_i^(j)(s) = I_i^(j)(x(s)), and analogously for the other terms. An integrating factor then gives
I_i^(j)(s) = I_i^(j)(0)e^-∫_0^sσ_t ds' + ∫_0^s[σ_sΨ^(j-1)(s')+q(s',μ_i)]e^-∫_s'^sσ_t ds” ds'
The computation of the integral term involving Ψ^(j-1) is an issue, since it usually requires the evaluation of Ψ^(j-1) at several points s'∈ (0, s), and this interval can be large depending on the direction μ_i.
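Once an estimator of the average flux is available (in the proposed method, the trained ANN), this characteristic update can be evaluated numerically. The following is a minimal Python sketch, assuming constant σ_t and σ_s and a simple trapezoidal rule for the source integral; psi_est stands for any estimator of Ψ, and the function name and signature are illustrative choices, not taken from the paper:

import numpy as np

def moc_update(x0, s, I0, psi_est, q, mu_i, sigma_t, sigma_s, n_quad=64):
    # Characteristic x(s') = x0 + s'*mu_i, evaluated at path length s, with
    # inflow value I0 at s' = 0.  With constant sigma_t the attenuation
    # factors reduce to simple exponentials.
    sp = np.linspace(0.0, s, n_quad)           # quadrature points s' in [0, s]
    xp = x0 + sp * mu_i                        # physical points along the characteristic
    src = sigma_s * psi_est(xp) + q(xp, mu_i)  # sigma_s * Psi + q along the ray
    kernel = np.exp(-sigma_t * (s - sp))       # e^{-sigma_t (s - s')}
    return I0 * np.exp(-sigma_t * s) + np.trapz(src * kernel, sp)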
The idea of the proposed ANN-MoC method is to train an Artificial Neural Network (ANN) to estimate Ψ^(j) at each source iteration. In the following, we simplify the notation by omitting the super-index (j).
§.§ ANN Average Flux Estimation
The ANN is assumed to be a Multilayer Perceptron (MLP, <cit.>) that has x∈𝒟 as input and the estimate Ψ̃(x) as output. It is denoted by
Ψ̃(x) = 𝒩(x; {(W^(l),b^(l), f^(l))}_l=1^n_l),
where (W^(l),b^(l), f^(l)) denotes the triple of the weights W^(l) = [w^(l)_i,j]_i,j=1^n^(l-1), n^(l), the bias b^(l) = (b^(l)_i)_i=1^n^(l) and the activation function f^(l) in the l-th layer of the network. The number of neurons (units) at each layer is denoted by n^(l), l=1,2,…,n_l. The MLP forwardly computes
a^(l) = f^(l)(W^(l)a^(l-1)+b^(l)),
where a^(0) = x and Ψ̃(x) = a^(n_l).
Given a fixed structure (number of layers n_l, number of units n^(l) per layer and the activation functions), the training of the ANN consists in solving the following optimization problem
min_{(W^(l),b^(l))}_l=1^n_l1/n_s∑_m=1^n_s(Ψ̃^(m)-Ψ^(m))^2
for a given training set {x^(m), Ψ^(m)}_m=1^n_s, where Ψ̃^(m):=Ψ̃(x^(m)) denotes the network output at x^(m) and n_s is the number of samples.
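As an illustration of this regression step, the sketch below fits a generic MLP to average-flux samples. It uses scikit-learn's MLPRegressor (tanh hidden layers, Adam optimizer) as a stand-in; the paper's exact 1-100-50-5-1 architecture with a sigmoid output unit would require a custom implementation, and the sample targets here are placeholders:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Generic stand-in for the ANN N(x): tanh hidden layers, trained with Adam.
ann = MLPRegressor(hidden_layer_sizes=(100, 50, 5), activation='tanh',
                   solver='adam', max_iter=5000, tol=1e-7)

x_train = np.random.uniform(0.0, 1.0, size=200)   # sample points x^(m) in D
psi_train = np.exp(-0.05 * x_train)               # placeholder targets Psi(x^(m))

ann.fit(x_train.reshape(-1, 1), psi_train)        # least-squares training as in the problem above
psi_tilde = ann.predict(np.array([[0.5]]))        # ANN estimate of Psi at x = 0.5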
§.§ The ANN-MoC Algorithm
The proposed ANN-MoC method computes successive approximations of the average flux Ψ(x) for all points in the domain 𝒟. It starts from the ANN (<ref>) trained with a given initial training set {x^(m), Ψ̃^(0)(x^(m))}_m=1^n_s, for randomly selected points x^(m)∈𝒟, m=1,2,…,n_s. Then, the approximation Ψ̃^(j) is iteratively computed from the previous one Ψ̃^(j-1) by solving the problem (<ref>) via the MoC solution (<ref>), replacing Ψ^(j-1)(s') by its estimate from the ANN 𝒩(s'), trained at the previous, (j-1)-th, source iteration.
The ANN-MoC algorithm follows the steps below (a Python sketch of the complete loop is given after the list):
* Set the ANN structure 𝒩(x) with random weights and bias.
* Set an initial approximation Ψ^(0)(x) for all x∈𝒟.
* Set n_s and the set of points {x^(m)}_m=1^n_s.
* Train the ANN with the training set {x^(m), Ψ^(0)(x^(m))}_m=1^n_s.
* Set the quadrature {μ_i, w_i}_i=1^N.
* For j=1,…,L:
* For i=1,…,N, for m=1,…,n_s:
* If μ_i>0, then s=(x^(m)-a)/μ_i
I^(j)_i(x^(m)) = I_ae^-∫_0^sσ_t ds' + ∫_0^s[σ_s𝒩(s')+q(s',μ_i)]e^-∫_s'^sσ_t ds” ds'
* If μ_i<0, then s=(x^(m)-b)/μ_i
I^(j)_i(x^(m)) = I_be^-∫_0^sσ_t ds' + ∫_0^s[σ_s𝒩(s')+q(s',μ_i)]e^-∫_s'^sσ_t ds” ds'
* Compute Ψ^(j) = 1/2∑ w_iI_i^(j).
* Retrain the ANN 𝒩(x) with the new training set {x^(m), Ψ^(j)(x^(m))}_m=1^n_s.
* Check the given stopping criterion.
* Reset the random set of points {x^(m)}_m=1^n_s.
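The following Python sketch puts the steps above together, under the same assumptions as the previous sketches (constant coefficients, moc_update as defined earlier, and any regressor with fit/predict methods playing the role of the ANN); it is an illustration of the loop structure, not the paper's implementation. Since the training points are re-randomized at every iteration, convergence is monitored here through the ANN on a fixed auxiliary grid.

import numpy as np

def ann_moc_solve(a, b, Ia, Ib, q, sigma_t, sigma_s, ann, n_s=101, N=100,
                  L=200, eps=1e-5):
    mu, w = np.polynomial.legendre.leggauss(N)      # DOM quadrature {mu_i, w_i}
    x_mon = np.linspace(a, b, n_s)                  # fixed grid used only to monitor convergence
    x = np.random.uniform(a, b, n_s)                # random sample points x^(m)
    ann.fit(x.reshape(-1, 1), np.zeros(n_s))        # train on the initial guess Psi^(0) = 0
    psi_est = lambda xs: ann.predict(np.asarray(xs, dtype=float).reshape(-1, 1))
    psi_mon_old = psi_est(x_mon)
    for j in range(1, L + 1):
        I = np.empty((N, n_s))
        for i in range(N):
            for m in range(n_s):
                if mu[i] > 0:                        # characteristic entering at x = a
                    s = (x[m] - a) / mu[i]
                    I[i, m] = moc_update(a, s, Ia, psi_est, q, mu[i], sigma_t, sigma_s)
                else:                                # characteristic entering at x = b
                    s = (x[m] - b) / mu[i]
                    I[i, m] = moc_update(b, s, Ib, psi_est, q, mu[i], sigma_t, sigma_s)
        psi = 0.5 * (w[:, None] * I).sum(axis=0)     # Psi^(j) at the sample points
        ann.fit(x.reshape(-1, 1), psi)               # retrain the ANN on {x^(m), Psi^(j)}
        psi_mon = psi_est(x_mon)
        if np.linalg.norm(psi_mon - psi_mon_old) < max(eps, eps * np.linalg.norm(psi_mon)):
            break
        psi_mon_old = psi_mon
        x = np.random.uniform(a, b, n_s)             # reset the random sample points
    return ann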
§ RESULTS
In this section we present results of the application of the ANN-MoC method to solve two different problems. The first is set from a manufactured solution and the second is a benchmark problem selected from the specialized literature.
§.§ Problem 1: Manufactured Solution
We assume the exact angular flux is given by
Î(x,μ) = e^-ασ_t x.
By substituting into (<ref>), one obtains the source
q(x,μ) = (κ - ασ_tμ)e^-ασ_t x,
where κ := σ_t - σ_s, as follows by direct substitution.
The exact average particle flux can be also analytically calculated as
Ψ̂(x) = e^-ασ_t x.
Figure <ref> shows a comparison of the ANN-MoC solutions versus the exact solutions for Problem 1 with several different values of κ and σ_s. The approximate solutions have been obtained by using a 1-100-50-5-1 MLP with the hyperbolic tangent as activation function on the hidden layers and the sigmoid function to activate the output neuron. The Adam method <cit.> has been used for solving the optimization problem (<ref>) at each training step. The Gauss-Legendre quadrature with N=100 nodes has been assumed for the DOM angular discretization, and the number of point samples has been fixed to n_s=101. As the stopping criterion for the SI iterations, we have applied
‖Ψ̃^(j)-Ψ̃^(j-1)‖_2 < max{ε, ε‖Ψ̃^(j)‖_2},
with ε = 10^-5.
Table <ref> presents the average flux of particles computed at selected domain points for Problem 1 with κ=0.1 and σ_t=0.5. One can observe that increasing the number of sample points from n_s=11 to 201 produces similar results, which indicates that the training of the MLP will not profit from further increasing the number of samples. This is due to the randomization of the sample points at each SI iteration.
§.§ Problem 2: Benchmark Solution
The second application of the ANN-MoC method is to the benchmark problem available in <cit.>. The problem sources are
q(x,μ) = x - x^2,
and I_a=I_b = 0. The absorption coefficient is fixed to σ_t=1.
Figure <ref> shows a comparison of the ANN-MoC (lines) versus the exact (dots) solutions for Problem 2 with the scattering coefficient set to σ_s=0.9, 0.99 and 0.999. The ANN-MoC parameters were all set to the same values used for solving Problem 1, with n_s=101. As in that case, we observe very good agreement between the proposed method and the expected solutions.
§ FINAL CONSIDERATIONS
In this paper, the novel ANN-MoC method has been presented for solving unidimensional neutral particle transport problems. Its main idea is to apply an ANN to estimate the average flux of particles computed from a DOM-MoC approach. One of its advantages is that it is a meshless method, since no fixed mesh is necessary in the computations. After the convergence of the SI iterations, the method gives an ANN to estimate the average flux at any point of the domain. The first results achieved here show very good agreement between the ANN-MoC and the expected solutions. This indicates the potential of the method as an alternative for the solution of more complex transport problems. Further work should also address the comparison of ANN-MoC with the classical strategy of estimating the average fluxes by interpolation on mesh points. Although the training and evaluation of an ANN is more expensive than performing interpolation, this may be compensated by the need for a relatively small number of sample points in a meshless structure.
|
http://arxiv.org/abs/2307.00815v1
|
20230703075354
|
Stability Conditions on Free Abelian Quotients
|
[
"Hannah Dell"
] |
math.AG
|
[
"math.AG",
"14F08 (Primary) 14L30, 14J60 (Secondary)"
] |
School of Mathematics and Maxwell Institute, University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh, EH9 3FD,United Kingdom
[email protected]
https://www.hannahdell.com/
We study slope-stable vector bundles and Bridgeland stability conditions on varieties which are a quotient of a smooth projective variety by a finite abelian group G acting freely. We show there is a one-to-one correspondence between G-invariant geometric stability conditions on the quotient and G-invariant geometric stability conditions on the cover. We apply our results to describe a connected component inside the stability manifolds of free abelian quotients when the cover has finite Albanese morphism. This applies to varieties with non-finite Albanese morphism which are free abelian quotients of varieties with finite Albanese morphism, such as Beauville-type and bielliptic surfaces. This gives a partial answer to a question raised by Lie Fu, Chunyi Li, and Xiaolei Zhao: If a variety X has non-finite Albanese morphism, does there always exist a non-geometric stability condition on X? We also give counterexamples to a conjecture of Fu-Li-Zhao concerning the Le Potier function, which characterises Chern classes of slope-semistable sheaves. As a result of independent interest, we give a description of the set of geometric stability conditions on an arbitrary surface in terms of a refinement of the Le Potier function. This generalises a result of Fu-Li-Zhao from Picard rank one to arbitrary Picard rank.
Stability conditions on free abelian quotients
Hannah Dell
August 1, 2023
==============================================
§ INTRODUCTION
In this article, we study stability conditions on varieties that are free quotients by finite abelian groups, especially quotients of varieties with finite Albanese morphism such as bielliptic and Beauville-type surfaces.
One approach is via group actions on triangulated categories. We sharpen the correspondence between G-invariant stability conditions on and stability conditions on the G-equivariant category _G introduced by Macrì, Mehrotra, and Stellari in <cit.>. This is used to control the set of geometric stability conditions on any free quotient by a finite abelian group.
We also study the Le Potier function introduced by Fu, Li, and Zhao in <cit.>. We give counterexamples to the conjecture stated in <cit.>, and explain how a refinement of the Le Potier function controls the set of geometric Bridgeland stability conditions on any surface.
§.§ Geometric Stability Conditions and Group Actions
Let k be an algebraically closed field, and let G be a finite abelian group such that ((k),|G|)=1. Let be a k-linear additive idempotent complete triangulated category with an action of G in the sense of <cit.>. This induces an action on (), the space of all numerical Bridgeland stability conditions on . Let _G denote the corresponding category of G-equivariant objects. There is a residual action by G=(G,k^∗) on _G (see Proposition <ref>), and (_G)_G≅ by <cit.>. Lemma <ref> describes a one-to-one correspondence between G-invariant stability conditions on and G-invariant stability conditions on _G. This builds on the abelian case of <cit.> and <cit.>, and was independently obtained in <cit.>.
In this paper, we focus on the case where =(X) for X a smooth projective variety over , and the action of G on is induced by a free action by G on X. Then ((X))_G≅:=(_G(X)), the bounded derived category of G-equivariant coherent sheaves on X. Let π X→ Y:=X/G. We call Y a free abelian quotient. Then ≅. There is a decomposition of π_∗_X into line bundles _χ according to the 1-dimensional representations χ∈G. Then -⊗_χ(Y)→(Y) describes the residual action of G.
A stability condition σ∈(X):=() is called geometric if all skyscraper sheaves of points _x are σ-stable and of the same phase. In all known examples, the stability manifold contains an open set of geometric stability conditions. We prove that geometric stability conditions are preserved under the correspondence of Lemma <ref>:
Suppose G is a finite abelian group acting freely on a smooth projective variety X. Let π X→ Y:=X/G denote the quotient map. Consider the action of G on ≅(Y) as in Proposition <ref>. Then there is a one-to-one correspondence between G-invariant stability conditions on and G-invariant stability conditions on which preserves geometric stability conditions:
(π^∗)^-1: ((X))^G ⟶ ((Y))^G and (π_∗)^-1: ((Y))^G ⟶ ((X))^G.
The compositions (π_∗)^-1∘ (π^∗)^-1 and (π^∗)^-1∘ (π_∗)^-1 fix slicings and rescale central charges by |G|.
In particular, suppose σ=(_σ, Z_σ)∈((X))^G satisfies the support property with respect to (Λ,λ). Then (π^∗)^-1(σ)=:σ_Y=(_σ_Y,Z_σ_Y)∈((Y))^G is defined by:
_σ_Y(ϕ) ={∈ : π^∗()∈_σ(ϕ)},
Z_σ_Y = Z_σ∘π^∗,
where π^∗ denotes the natural induced map on numerical Grothendieck groups, and σ_Y satisfies the support property with respect to (Λ,λ∘π^∗).
Very little is known about how the geometry of a variety X relates to the geometry of (X). Recall that every algebraic variety X has a map _X, the Albanese morphism, to (X):=^0(^0(X)), the Albanese variety. It is algebraic, and every morphism f X→ A to another abelian variety A factors via _X. In <cit.>, the authors showed that if X has finite Albanese morphism, then all stability conditions on are geometric. In this set-up, we obtain a union of connected components of geometric stability conditions on any free abelian quotient of X.
Let X be a smooth projective variety with finite Albanese morphism. Let G be a finite abelian group acting freely on X and let Y=X/G. Then ^†(Y):=((Y))^G is a union of connected components consisting only of geometric stability conditions.
When X is a surface, we have the following stronger result.
Let X be a smooth projective surface with finite Albanese morphism. Let G be an abelian group acting freely on X. Let S=X/G. Then ^†(S)= ((X))^G=(S). In particular, (S) is a connected component of (S).
We explain in <ref> how to describe (S) explicitly for any surface S. Moreover, Corollary <ref> applies to the following 2 classes of minimal surfaces.
[Beauville-type surfaces, q=0]
Let X=C_1× C_2, where C_i⊂^2 are smooth projective curves of genus g(C_i)≥ 2. Each curve has finite Albanese morphism, and hence so does X. Suppose there is a free action of a finite group G on X, such that S=X/G has q(S):=h^1(S,_S)=0 and p_g(S):=h^2(S,_S)=0. Then _S is trivial. This generalises a construction due to Beauville in <cit.>, and we call S a Beauville-type surface. These are classified in <cit.>. There are 17 families, 5 of which involve an abelian group. In the abelian cases, G is one of the following groups: (/2)^3, (/2)^4, (/3)^2, (/5)^2.
[Bielliptic surfaces, q=1]
Let S≅ (E× F)/G, where E,F are elliptic curves, and G is a finite group of translations of E acting on F such that F/G≅^1. Then q(S)=1 and (S)≅ E/G, so _S is an elliptic fibration. Such surfaces are called bielliptic and were first classified in <cit.>. There are 7 families, see <cit.>.
Let S be a Beauville-type or bielliptic surface. As discussed above, S has non-finite Albanese morphism. By Corollary <ref>, (S)⊂(S) is a connected component. In particular, if (S) is connected, then the following question would have a negative answer.
[<cit.>]
Let X be a smooth projective variety whose Albanese morphism is not finite. Are there always non-geometric stability conditions on ?
This is the converse of <cit.>. In all other known examples, the answer to Question <ref> is positive (see <ref>).
§.§ The Le Potier Function
A fundamental problem in the study of stable sheaves on a smooth projective variety X is to understand the set of Chern characters of stable sheaves. This can be used to describe (X) for surfaces (see Theorem <ref>) and to control wall-crossing and hence indirectly control Brill-Noether phenomena as in <cit.> and <cit.>.
When studying slope-stable sheaves on a smooth projective complex variety X, a natural question is for which topological invariants (i.e. Chern character) slope-stable sheaves exist.
For X=^2, Drézet and Le Potier gave a complete solution in <cit.> in terms of a function of the slope, δ→. In <cit.>, the authors define a Le Potier function Φ_X,H which gives a generalisation of Drézet and Le Potier's function to any smooth projective polarised surface (X,H). They use this to control geometric Bridgeland stability conditions with respect to a sublattice of the numerical K-group of X, (X), coming from the polarisation.
Let _(X):=(X)⊗, where (X) is the Néron-Severi group of X, and let _(X) denote the ample cone inside _(X). In <ref> we introduce a generalisation of the Le Potier function which will be used to control the set of all geometric stability conditions in <ref>. We state the version for surfaces below to ease notation.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). We define the Le Potier function twisted by B, Φ_X,H,B→, as
Φ_X,H,B(x):=lim sup_μ→ x{(ch_2(F)-B·ch_1(F))/(H^2·ch_0(F)) : F∈(X) is H-semistable with μ_H(F)=μ}.
The Bogomolov-Gieseker inequality gives an upper bound for Φ_X,H,B (see Lemma <ref>). If B=0, this is the same as <cit.>, i.e. Φ_X,H,0=Φ_X,H, and the upper bound is x^2/2. Φ_X,H,B naturally generalises to higher dimensions, see Definition <ref>.
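For B=0 the bound can be recovered directly; the following sketch (presumably the argument behind the cited lemma) only uses the Bogomolov inequality and the Hodge index theorem. For an H-semistable sheaf F with ch_0(F)>0 (such F is automatically torsion-free) and μ_H(F)=μ one has
ch_2(F)/(H^2·ch_0(F)) ≤ ch_1(F)^2/(2·H^2·ch_0(F)^2) ≤ (H·ch_1(F))^2/(2·(H^2)^2·ch_0(F)^2) = μ^2/2,
where the first inequality is Bogomolov's inequality ch_1(F)^2 - 2·ch_0(F)·ch_2(F) ≥ 0 and the second is the Hodge index theorem in the form ch_1(F)^2 ≤ (H·ch_1(F))^2/H^2. Taking the lim sup over μ→ x then gives Φ_X,H,0(x) ≤ x^2/2.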
The Le Potier function partially determines the non-emptiness of moduli spaces of H-semistable sheaves of a fixed Chern character, which in turn controls wall crossing, along with the birational geometry of these moduli spaces, for example for ^2 <cit.>, K3 surfaces <cit.>, and abelian surfaces <cit.>.
The Le Potier function is known for abelian surfaces <cit.><cit.>, K3 surfaces <cit.>, del Pezzo surfaces of degrees 9 - m for m≤ 6 <cit.>, Hirzebruch surfaces <cit.>, and for surfaces with finite Albanese morphism <cit.>.
In this paper, we relate the Le Potier function of X to the Le Potier function of any free quotient of X by a finite group. We state these results for surfaces with B=0 below.
Let X be a smooth projective surface, and let G be a finite group acting freely on X. Let π X→ X/G=:S denote the quotient map, and let H_S∈_(S). Then Φ_S,H_S=Φ_X, H_S.
Proposition <ref> gives us a way to compute the Le Potier function of varieties that are finite free quotients of varieties with finite Albanese morphism.
Let X be a smooth projective surface with finite Albanese morphism _X. Let G be a finite group acting freely on X. Let π X→ X/G=:S denote the quotient map. Let H_X=_X^∗ H = π^∗ H_S∈_(X) be an ample class pulled back from (X) and S. Then Φ_S,H_S(x)=x^2/2.
In Example <ref> we explain how to choose appropriate ample classes such that Corollary <ref> applies to bielliptic and Beauville-type surfaces. In particular, Beauville-type surfaces provide counterexamples to the following conjecture:
Let (S,H) be a smooth polarised surface with q=0, then the Le Potier function Φ_S,H is not continuous at 0.
This conjecture was motivated by Question <ref> and the expectation that discontinuities of Φ_S,H could be used to show the existence of a wall of the geometric chamber for regular surfaces, as in the cases of rational and K3 surfaces.
§.§ The Le Potier Function and Geometric Stability Conditions
Let X be a surface and fix H∈_(X). In <cit.>, Fu, Li, and Zhao show that Φ_X,H gives precise control over H(X), the set of geometric numerical Bridgeland stability conditions with respect to a specific lattice, Λ_H. When X has Picard rank 1, H(X)=(X).
We generalise this to the set of all geometric numerical Bridgeland stability conditions.
Let X be a smooth projective surface. Then
(X)≅ℂ×{(H,B,α,β)∈(_(X))^2×ℝ^2 : H is ample, α>Φ_X,H,B(β)}.
In particular, (X) is connected. We discuss in Remark <ref> how Theorem <ref> could be used to describe the boundary of (X). This emphasises how Φ_X,H,B is a crucial tool for understanding the existence of non-geometric stability conditions on surfaces. In particular, if one can compute the Le Potier function, one should be able to tell whether the boundary of the set of geometric stability conditions has a wall.
§.§ Survey: Geometric Stability Conditions
To give context for the results in this paper, we survey the cases where a connected component of the stability manifold is known, and where geometric and non-geometric stability conditions have been described.
There are the following general results:
* Varieties with _X finite: (X)=(X) <cit.>
* Quotients of varieties with _X finite: Let Y=X/G be a free abelian quotient of X, and assume _X is finite. If G-invariant stability conditions exist on X, then (Y)≅((X))^G is a union of connected components consisting only of geometric stability conditions, see Theorem <ref>.
The results for specific examples are summarised in the following table:
Note that the examples in the rightmost column have non-finite Albanese morphism. This gives a positive answer to Question <ref> in those cases.
Curves: For any curve C, (C)≅. Up to the action of , this corresponds to Mukai's slope-stability for (C) <cit.>.
* (^1)≅^2 <cit.>. Okada's construction uses the identification (^1)≅((K_2)) where K_2 is the Kroneker quiver. In particular, these are not all geometric.
* Let C be a curve of genus g(C)≥ 1, then (C)=(C)≅ <cit.>, <cit.>.
Surfaces: There is a construction called tilting which gives an open set of geometric stability conditions on any smooth projective surface, see for example <cit.>,<cit.>.
A connected component is known in the following cases:
* Surfaces with finite Albanese morphism: This connected component is precisely the set of geometric stability conditions which come from tilting. This follows from <cit.> together with Corollary <ref>.
* K3 surfaces: There is a distinguished connected component ^†(X) described by taking the closure and translates under autoequivalences of the open set of geometric stability conditions <cit.>. By <cit.>, at general points of the boundary of (X), either
* all skyscraper sheaves have a spherical vector bundle as a stable factor, or
* _x is strictly semistable if and only if x∈ C, a smooth rational curve in X.
* ^2: (^2) has a simply-connected component, ^†(^2), which is a union of geometric and algebraic stability conditions <cit.>.
* Enriques surfaces: Suppose Y is an Enriques surface with K3 cover X, and let ^†(X) be the connected component of (X) described above. Then there exists a connected component ^†(Y) which embeds into ^†(X) as a closed submanifold. Moreover, when Y is very general, ^†(Y)≅^†(X) <cit.>. ^†(X) has non-geometric stability conditions, hence by Theorem <ref> so does ^†(Y).
* Beauville-type and bielliptic surfaces: Let S=X/G. There is a connected component ^†(S)=(X)=((X))^G, see Corollary <ref>. If (S) is connected, this would give a negative answer to Question <ref>, in contrast to all previous examples.
Non-geometric stability conditions are known to exist in the following cases:
* Rational surfaces: the boundary of the geometric chamber contains points where skyscrapers sheaves are destabilised by exceptional bundles. This is explained for (_^2(-3)) in <cit.>, and the arguments generalise to any rational surface.
* Surfaces which contain a smooth rational curve C with negative self intersection: these have a wall of the geometric chamber such that _x is stable if x∉C, and strictly semistable if x∈ C <cit.>.
* ^1 bundles: Let p S → C be a ^1-bundle over a curve. has a semiorthogonal decomposition <cit.>. Non-geometric stability conditions can be constructed by gluing stability conditions from (C) with respect to this decomposition <cit.>.
Threefolds:
Fix H∈_(X). Denote by _H(X) the stability conditions such that the central charge factors via a certain lattice Λ_H⊂(X). If ρ(X)=1, this gives rise to elements of (X). A strategy for constructing stability conditions in _H(X) for threefolds was first introduced in <cit.>. This uses so-called tilt stability conditions to construct geometric stability conditions if a stronger BG-type inequality is satisfied.
Geometric stability conditions in _H(X) exist for some threefolds, see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
Below we describe the only threefolds where (X) is known to be non-empty. These are also the only cases where a connected component of _H(X) was previously known.
* Abelian threefolds: There is a distinguished connected component _H^†(X) of _H(X) which has been completely described <cit.>. These have been shown to satisfy the full support property, in particular, they lie in a connected component ^†(X)⊂(X) <cit.>. Abelian threefolds are also a case of <cit.>.
* Calabi-Yau threefolds of abelian type: Let Y be a Calabi-Yau threefold admitting an abelian threefold X as a finite étale cover. There is a distinguished connected component _H^†(Y) of _H(Y) induced from ^†_H(X) <cit.>. By the previous paragraph together with Theorem <ref>, (Y) is also non-empty.
The only examples where non-geometric stability conditions are known to exist are those with complete exceptional collections. We explain this in greater generality below.
Exceptional collections: There are stability conditions on any triangulated category with a complete exceptional collections called algebraic stability conditions <cit.>. On ^n, this has been used to show the existence of geometric stability conditions <cit.> <cit.>. If X is a smooth projective variety with a complete exceptional collection, non-geometric stability conditions can be constructed from hearts that do not contain skyscraper sheaves <cit.>.
§.§ Notation
a triangulated category
G a finite group such that ((k),|G|)=1
_G the category of G-equivariant objects
X a smooth projective variety over an algebraically closed field
(X) the bounded derived category of coherent sheaves on X
the bounded derived category of G-equivariant coherent sheaves
(), (X) the Grothendieck group of , resp.
(), (X) the numerical Grothendieck group of , resp.
(), (X) the space of numerical Bridgeland stability conditions on ,
(X) the space of geometric numerical stability conditions on
(E) the Chern character of an object E ∈(X)
(X) (X)/^0(X), the Néron-Severi group of X
_(X) (X)⊗
_(X) the ample cone inside _(X)
_(X) the effective cone inside _(X)
(X) the Chow group of X
(X) the numerical Chow group up of X
§.§ Acknowledgements
I would like to thank my advisor Arend Bayer for suggesting the project and for many helpful discussions. I would also like to thank Augustinas Jacovskis and Sebastian Schlegel Mejia for useful conversations. This research was supported by the ERC Consolidator Grant WallCrossAG, No. 819864.
§ G-INVARIANT STABILITY CONDITIONS
We review the notions of equivariant triangulated categories in <ref> and Bridgeland stability conditions in <ref>. In <ref> we describe a correspondence between stability conditions on a triangulated category with an action of a finite abelian group and stability conditions on the corresponding equivariant category.
§.§ Review: G-equivariant triangulated categories
Let be a pre-additive category, linear over a ring k. Let G be a finite group with ((k),|G|)=1. The definition of a group action on a category and the corresponding equivariant category are due to Deligne <cit.>. We will follow the treatment by Elagin from <cit.> in our presentation below.
A (right) action of G on is defined by the following data:
* a functor ϕ_g→, for every g∈ G;
* a natural isomorphism ε_g,hϕ_gϕ_h→ϕ_hg for every g,h∈ G, for which all diagrams
ϕ_fϕ_gϕ_h [r, "ε_g,h"] [d, "ε_f,g"] ϕ_fϕ_hg [d, "ε_f,gh"]
ϕ_gfϕ_h [r, "ε_gf,h"] ϕ_hgf
are commutative.
[<cit.>]
Let G be a group acting on a scheme X. For each g∈ G, let ϕ_g:=g^∗(X)→(X). Then for all g,h∈ G there are canonical isomorphisms:
ϕ_gϕ_h = g^∗ h^∗ (hg)^∗ = ϕ_hg.
Together these define an action of G on the category (X).
Suppose G acts on a category . A G-equivariant object in is a pair (F,(θ_g)_g∈ G) where F∈ and (θ_g)_g∈ G is a family of isomorphisms
θ_g F→ϕ_g(F),
such that all diagrams
F [r, "θ_g"] [d, "θ_hg"] ϕ_g(F) [d, "ϕ_g(θ_h)"]
ϕ_hg(F) ϕ_g(ϕ_h(F)) [l, "ε_g,h"']
are commutative. We call the family of isomorphisms a G-linearisation. A morphism of G-equivariant objects from (F_1,(θ_g^1)) to (F_2,(θ_g^2)) is a morphism f F_1→ F_2 compatible with θ_g, i.e. such that the below diagrams commute for all g∈ G
F_1 [r, "θ^1_g"] [d, "f"] ϕ_g(F_1) [d, "ϕ_g(f)"]
F_2 [r, "θ^2_g"] ϕ_g(F_2).
The category of G-equivariant objects in is denoted _G
Let G be a group acting on a scheme X with ϕ_g and ε_g,h defined as in Example <ref>. G-equivariant objects are G-equivariant coherent sheaves. Let _G(X):=((X))_G and :=(_G(X)).
Suppose k is algebraically closed and G acts freely on a smooth projective variety X over k. Let π X→ X/G be the quotient map. Then (X/G)≅_G(X) via ↦ (π^∗, (θ_g)), where θ_gπ^∗=⊕_h∈ G h^∗∼→⊕_h∈ G g^∗(h^∗), and ≅(X/G).
<cit.>
Suppose G is an abelian group and k is algebraically closed. Let G=(G,k^∗) be the group of 1-dimensional representations of G. Then there is an action of G on _G. For every χ∈G, ϕ_χ is given by
ϕ_χ((F,(θ_h))):= (F,(θ_h))⊗χ := (F,(θ_h·χ(h)))
For χ,ψ∈G the equivariant objects ϕ_χ(ϕ_ψ((F),(θ_h))) and ϕ_ψχ((F,(θ_h))) are the same, hence we set the isomorphisms ε_χ,ψ to be the identities.
Suppose G acts on a category . Then we denote by G_G → the forgetful functor G(F,(θ_g))= F. Also let G→_G be the inflation functor which is defined by
G(F):=(⊕_g∈ Gϕ_g(F), (ξ_g)),
where
ξ_g⊕_h∈ Gϕ_h(F) ⊕_h∈ Gϕ_gϕ_h(F)
is the collection of isomorphisms
ε_g,h^-1ϕ_hg(F)→ϕ_gϕ_h(F).
The forgetful functor G is faithful, and it is left and right adjoint to G.
Faithfulness follows immediately from the definition of morphisms between G-equivariant objects. For the fact that G is left and right adjoint to G see <cit.>
The following proposition builds on a result of Balmer in <cit.>.
Suppose G acts on a triangulated category which has a DG-enhancement, then _G is triangulated in such a way that G is exact.
The proof of the following theorem will use comonads. The full definitions can be found in <cit.> but for the proof we will only need to know the following: Given a comonad T on a category , a comodule over T is a pair (F,h) where F∈ and h F→ TF is a morphism, called the comonad structure, satisfying certain conditions (see <cit.>). All comodules over a given comonad T on form a category which is denoted _T. There is a forgetful functor T_T→ which forgets the comonad structure, i.e. (F,h)↦ F.
Suppose k is an algebraically closed field and let be a k-linear additive idempotent complete category. Let G be a finite abelian group with ((k),|G|)=1. Suppose is a k-linear additive idempotent complete category and G acts on . Then
(_G)_G≅.
In particular, under this equivalence G (_G)_G→_G is identified with G→_G and their adjoints G_G → (_G)_G and G_G → are also identified.
Elagin's proof that (_G)_G≅ uses the following chain of equivalences:
(_G)_G(1)≅ (_G)_T(G,G)(2)≅ (_G)_ℛ(3)≅
(_G)_T(G,G)(4)≅,
where T(G, G), ℛ,
T(G,G) are comonads on the corresponding categories.
The equivalences in (1) and (4) are the comparison functors from <cit.>. In particular, under (1), G≅T(G,G) and under (4), T(G,G)≅G. Moreover the equivalences (2) and (3) only change the comonad structure, hence the images of the forgetful functors for each category of comodules are the same. Therefore under the equivalence (_G)_G, G≅G. Finally recall that G and G are left and right adjoint, and as are G and G. Hence G≅G follows immediately.
§.§ Review: Bridgeland stability conditions
Let be a triangulated category.
A slicing on is a collection of full additive subcategories (ϕ)⊂ for each ϕ∈ such that:
* (ϕ)[1]=(ϕ+1)
* If F_1∈(ϕ_1), F_2∈(ϕ_2), then ϕ_1>ϕ_2 (F_1,F_2)=0
* Every E∈ has a Harder–Narasimhan (HN) filtration, i.e. there exist real numbers ϕ_1>ϕ_2>⋯ > ϕ_m, objects E_i∈, and a collection of distinguished triangles:
[column sep=tiny]
0=E_0 [rr] E_1 [rr] [ld] E_2 [rr] [ld] ⋯[rr] E_m-1 [rr] E_m=E [ld]
A_1 [lu, dotted] A_2 [lu, dotted] A_m [lu, dotted]
where A_i∈ P(ϕ_i) for all 1≤ i≤ m.
We will denote by ϕ_^+(E):=ϕ_1, ϕ_^-(E):=ϕ_m, and m_σ(E):=∑_i|Z(A_i)|. Moreover, non-zero objects of (ϕ) are called semistable of phase ϕ, and non-zero simple objects of (ϕ) are called stable of phase ϕ.
A Bridgeland pre-stability condition on is a pair σ=(,Z) such that:
* is a slicing
* Z() → is a homomorphism such that: for any E≠ 0, if E∈(ϕ) for some ϕ∈, then Z([E])=m(E)e^iπϕ, where m(E)∈_>0.
We call Z the central charge.
A Bridgeland pre-stability condition σ=(,Z) on satisfies the support property (with respect to (Λ,λ)) if
* Z factors via a finite rank lattice Λ, i.e. Z()Λ→, and
* there exists a quadratic form Q on ()⊗ such that
* Z is negative definite with respect to Q, and
* every σ-semistable object E∈ satisfies Q(λ(E))≥ 0.
A Bridgeland pre-stability condition that satisfies the support property is called a Bridgeland stability condition. If λ factors via (X), we call σ a numerical Bridgeland stability condition
The set of stability conditions with respect to (Λ,λ) will be denoted _Λ(). Unless stated otherwise, we will assume that all Bridgeland stability conditions are numerical. The set of numerical stability conditions on will be denoted by ().
As described in <cit.>, () has a natural topology induced by the generalised metric
d(σ_1,σ_2) = sup_0≠ E∈{|ϕ^-_σ_2(E) - ϕ^-_σ_1(E)|, |ϕ^+_σ_2(E) - ϕ^+_σ_1(E)|, | logm_σ_2(E)/m_σ_1(E)| }.
The space of stability conditions () has the natural structure of a complex manifold of dimension (()), such that the map:
() →_((),)
σ = (,Z) ⟼ Z
is a local homeomorphism at every point of ().
In other words, the central charge gives a local system of coordinates for the stability manifold.
There is a right action on () by the universal cover of ^+(), see <cit.> for details. If we consider ^∗ as a subgroup of ^+(), then this induces an action of ^∗= on ().
There is an equivalent characterisation of Bridgeland stability conditions, which uses the notion of a t-structure on a triangulated category. For the general theory of t-structures we refer the reader to <cit.>. We first need the following definitions:
The heart of a bounded t-structure in a triangulated category is a full additive subcategory Å such that:
* If k_1>k_2 then (Å[k_1],Å[k_2])=0.
* For any object E in there are integers k_1>k_2>⋯ > k_n, and a sequence of exact triangles:
[column sep=tiny]
0=E_0 [rr] E_1 [rr] [ld] E_2 [rr] [ld] ⋯[rr] E_n-1 [rr] E_n=E [ld]
A_1 [lu, dotted] A_2 [lu, dotted] A_n [lu, dotted]
such that A_i∈Å[k_i] for 1≤ i ≤ n.
Let Å be an abelian category on a triangulated category . A stability function for Å is a group homomorphism Z K(Å)→ such that for every non-zero object E of Å,
Z([E])∈ℍ:={m· e^iπϕ| m∈ℝ_>0, ϕ∈(0,1]}⊂ℂ.
For every non-zero object E, we define the phase by ϕ(E)=1/π (Z([E]))∈(0,1]. We say an object E is Z-stable (resp. semistable) if E≠0 and for every proper non-zero subobject A we have ϕ(A)<ϕ(E) (resp. ϕ(A)≤ϕ(E)).
Let Å be an abelian category and let Z K(Å)→ be a stability function on Å. A Harder-Narasimhan (HN) filtration of a non zero object E of Å is a finite chain of subobjects
0=E_0⊂ E_1⊂⋯ E_n-1⊂ E_n=E
such that each factor F_i=E_i/E_i-1 (called a Harder-Narasimhan factor) is a Z-semistable object of Å, and ϕ(F_1)>ϕ(F_2)>⋯ > ϕ(F_n). Moreover, we say that Z has the Harder-Narasimhan property if every non-zero object of Å has a Harder-Narasimhan filtration.
To give a Bridgeland pre-stability condition (,Z) on a triangulated category is equivalent to giving a pair (Z_Å,Å), where Å is the heart of a bounded t-structure Å on and Z_Å is a stability function for Å which has the Harder-Narasimhan property. Moreover, (,Z) is a numerical Bridgeland stability condition if and only if Z_Å factors via () and satisfies the support property (Definition <ref>(2)) for Z_Å-semistable objects.
§.§ Inducing stability conditions
Suppose a finite group G acts on a triangulated category by exact autoequivalences, {Φ_g | g∈ G}. This induces an action on the stability manifold via Φ_g·(,Z)=(Φ_g(),Z∘ (Φ_g)^-1_∗), where (Φ_g)_∗()⊗→()⊗ is the natural morphism induced by Φ_g. We say that a stability condition σ is G-invariant if Φ_g·σ = σ.
The results in <cit.> are stated for locally finite Bridgeland stability conditions: If σ=(,Z) is a pre-stability condition and there exists ε>0 such that, for all ϕ∈, ((ϕ-ε,ϕ+ε)) is of finite length, then we call σ locally-finite. We will write () for the space of all locally-finite stability conditions on , and (())^G for the G-invariant ones.
Let σ∈(())^G. By Lemma <ref> and Proposition <ref>, G_G→ is exact and faithful, so we can apply the construction of <cit.>: Define G^-1(σ):=σ_G=(_σ_G,Z_σ_G), where
_σ_G(ϕ) :={∈_G : G()∈_σ(ϕ)},
Z_σ_G := Z_σ∘ (G)_∗.
Here (G)_∗(_G)⊗→()⊗ is the natural morphism induced by G.
Suppose k is an algebraically closed field. Let be an essentially small k-linear additive idempotent complete triangulated category with a DG-enhancement. Let G be a finite abelian group such that ((k),|G|)=1. Suppose G acts on by exact autoequivalences Φ_g for every g∈ G. Suppose σ=(,Z)∈(())^G is a G-invariant pre-stability condition on . Then G^-1(σ)∈(_G).
By <cit.> and our assumptions on , it follows that _G is a triangulated category.
Suppose ∈(ϕ). Then G(G())=⊕_g∈ GΦ_g(). Since σ is G-invariant, Φ_g()∈_σ(ϕ) for all g∈ G. Moreover, _σ(ϕ) is extension closed, hence ⊕_g∈ GΦ_g() ∈_σ(ϕ). The result then follows from <cit.>.
Assume the hypotheses of Proposition <ref> and let G act on _G by twisting as in Proposition <ref>. Then G^-1(σ) is G-invariant.
First note that, for every class []=[(E,(θ_g))]∈(_G)⊗, (G)_∗([(E,(θ_g))])=[E]. Hence Z_σ_G([])=Z_σ∘ (G)_∗([(E,(θ_g))])=Z_σ([E]), where [E]∈()⊗. Moreover, from the definition of _σ_G, we have:
_σ_G(ϕ) = {∈_G : G()∈_σ(ϕ)}
= {(E,(θ_g))∈_G : E∈_σ(ϕ)}
In particular, since the action of G on (E,(θ_g))∈_G does not change E, it follows that the central charge Z_σ_G and slicing _σ_G are G-invariant, and hence σ_G∈((_G))^G.
Under the hypotheses of Proposition <ref>, the morphism G^-1 (())^G→ ((_G))^G is continuous, and the image of G^-1 is a closed embedded submanifold.
The proof of <cit.> is for the action of a finite group G on , induced by the action of G on X, a smooth projective variety over (i.e. Φ_g=g^∗). The result follows in our setting by replacing this with the action of exact autoequivalences Φ_g on in the proof.
In the case where G is abelian, we have the following description of the image of G^-1:
Suppose k is an algebraically closed field. Let be an essentially small k-linear additive idempotent complete triangulated category with a DG-enhancement. Let G be a finite abelian group such that ((k),|G|)=1. Suppose G acts on by exact autoequivalences Φ_g for every g∈ G, and consider the action of G on _G as in Proposition <ref>. Then there is a one-to-one correspondence between G-invariant stability conditions on and G-invariant stability conditions on _G:
(())^G [rr, "G^-1", bend left, shift left] ((_G))^G [ll, "G^-1", bend left, shift left]
In particular, the compositions G^-1∘G^-1 and G^-1∘G^-1 fix slicings and rescale central charges by |G|.
First note that by <cit.> and our assumptions on it follows that _G is a triangulated category.
Let σ∈(())^G. Then by Proposition <ref> and Lemma <ref>, σ_G:=G^-1(σ)∈((_G))^G. We now apply the construction of <cit.> again, with G. In particular let σ_G:=G^-1(σ_G), where
_σ_G(ϕ) ={∈(_G)_G : G()∈_σ_G(ϕ)}
={∈(_G)_G : G(G())∈_σ(ϕ)}.
By Proposition <ref>, G^-1(σ_G)∈((_G)_G). To complete the proof, we need to show that, under the equivalence (_G)_G≅, σ_G=σ up to rescaling the central charge by |G|. From Theorem <ref> we know that under this equivalence, G≅G. Hence we can apply the same argument as in the proof of <cit.>. In particular:
_σ_G(ϕ) ={∈ : G(G())∈_σ(ϕ)}
={∈ : ⊕_g∈ GΦ_g () ∈_σ(ϕ)}.
Suppose ∈_σ_G(ϕ). Since taking cohomology commutes with direct sums, Φ_g()∈_σ(ϕ) for all g∈ G. In particular ∈_σ(ϕ) and hence _σ_G(ϕ)⊆_σ(ϕ) for all ϕ∈. Now suppose ∈_σ(ϕ), then by the proof of Proposition <ref> it follows that G(G())∈_σ, and hence ∈_σ_G(ϕ). In particular, _σ_G(ϕ)⊇_σ(ϕ) for all ϕ∈, so _σ_G=_σ. Now let []∈()⊗ and consider the central charge:
Z_σ_G([])=Z_σ_G∘ (G)_∗([]) = Z_σ∘ (G)_∗∘(G)_∗([])=Z_σ(∑_g∈ G ([Φ_g()])).
Z_σ is G-invariant, hence Z_σ([])=(Φ_g)_∗ Z_σ([])=Z_σ([Φ_g()]) for all g∈ G. Finally, since Z_σ is a homomorphism, it follows that Z_σ_G([])=|G|· Z_σ ([]).
Note that if we start instead with a stability condition σ_G∈((_G))^G, then by a symmetric argument it follows that σ_G = G^-1∘G^-1(σ_G), up to rescaling the central charge by |G|=|G|.
If = where X is a scheme, and if the action of G on is induced by an action of G on X, i.e. Φ_g=g^∗, then the one-to-one correspondence above follows from the abelian case of <cit.>.
§ GEOMETRIC STABILITY CONDITIONS ON ABELIAN QUOTIENTS
We apply the methods of <ref> to describe geometric stability conditions on free abelian quotients. In particular, we show that geometric stability conditions are preserved under the correspondence in Lemma <ref>, and use this to describe a union of connected components of geometric stability conditions on free abelian quotients of varieties with finite Albanese morphism. In the case of surfaces, we obtain a stronger result using a description of the set of geometric stability conditions from <ref>.
§.§ Inducing geometric stability conditions
Let X be a smooth projective variety over . Let G be a finite group acting freely on X. Let Y=X/G and denote by π X → Y the quotient map. Let denote the derived category of G-equivariant coherent sheaves on X as in Example <ref>.
Recall that ≅, where the equivalence is given by:
Ψ ⟶
⟼ (π^∗(),),
and ={λ_g}_g∈ G is the G-linearisation given by:
λ_gπ^∗ g^∗π^∗ = (π∘ g)^∗≅π^∗.
Combining this with <ref>, we have the following commutative diagram:
[rr, "Ψ"] [dd, "π^∗", shift left] [dd, "G", shift left] [lldd, "G", shift left]
[rr, "∼"] [uu, "π_∗", shift left] [rruu, "G", shift left] ()_G[uu, "G", shift left]
The residual action of G on is given by tensoring with degree 0 line bundles _χ for each χ∈G.
A Bridgeland stability condition σ on is called geometric if for every point x∈ X, the skyscraper sheaf _x is σ-stable.
Let X be a smooth projective variety. Let σ be a geometric numerical stability condition on . Then all skyscraper sheaves are of the same phase.
In this context, the correspondence from Lemma <ref> preserves geometric stability.
Suppose G is a finite abelian group acting freely on a smooth projective variety X. Let π X→ Y:=X/G denote the quotient map. Consider the action of G on ≅(Y) as in Proposition <ref>. Then there is a one-to-one correspondence between G-invariant stability conditions on and G-invariant stability conditions on which preserves geometric stability conditions:
(π^∗)^-1: ((X))^G ⟶ ((Y))^G and (π_∗)^-1: ((Y))^G ⟶ ((X))^G.
The compositions (π_∗)^-1∘ (π^∗)^-1 and (π^∗)^-1∘ (π_∗)^-1 fix slicings and rescale central charges by |G|.
In particular, suppose σ=(_σ, Z_σ)∈((X))^G satisfies the support property with respect to (Λ,λ). Then (π^∗)^-1(σ)=:σ_Y=(_σ_Y,Z_σ_Y)∈((Y))^G is defined by:
_σ_Y(ϕ) ={∈ : π^∗()∈_σ(ϕ)},
Z_σ_Y = Z_σ∘π^∗,
where π^∗ denotes the natural induced map on numerical Grothendieck groups, and σ_Y satisfies the support property with respect to (Λ,λ∘π^∗).
First note that π_∗∘π^∗(Y)→(Y) is just multiplication by |G|, as it sends [E] to [E⊗⊕_χ∈G_χ]. Therefore, π^∗(Y)→(X) is injective.
Together with Lemma <ref>, it follows that (π^∗)^-1 and (π_∗)^-1 give a one-to-one correspondence between numerical Bridgeland stability conditions as described above. It remains to show that σ∈((X))^G is geometric if and only if σ_Y=(π^∗)^-1(σ) is.
Step 1. Suppose σ=(_σ,Z_σ)∈((X))^G is geometric. Let y∈ Y; this corresponds to the orbit Gx for some x∈ X (so x is unique up to the action of G). We need to show that π^∗(_y) is σ_Y-stable. Recall,
_σ_Y(ϕ) ={∈ : π^∗()∈_σ(ϕ)}
for every ϕ∈. Now consider:
π^∗(_y)=⊕_g∈ G_g^-1 x∈.
By our assumption on σ and Proposition <ref>, all skyscraper sheaves of points of X are σ-stable and of the same phase which we denote by . In particular, _g^-1x∈_σ() for all g∈ G. Moreover, _σ() is extension closed, hence ⊕_g∈ G_g^-1x∈_σ(), and thus _y∈_σ_Y().
Now suppose that _y is strictly semistable, then there exist ,∈_σ_Y() such that:
↪_y ↠
is non-trivial, i.e. is not isomorphic to 0 or _y. By definition of _σ_Y(), π^∗(),π^∗()∈_σ(). Moreover, π^∗ is exact, hence there is an exact sequence:
π^∗() ↪π^∗(_y)=⊕_g∈ G_g^-1x↠π^∗()
in _σ()⊂. Since π^∗() is a subobject of π^∗(_y), we must have π^∗()=⊕_a∈ A_a^-1x, where A⊂ G is a subset. Hence
(π^∗()) = {a^-1x : a∈ A}⊂{g^-1x : g∈ G} = (π^∗(_y)).
Note that π^∗() is a G-invariant sheaf. But (π^∗()) is G-invariant if and only if A=∅ or A=G. Hence =0 or =_y, which is a contradiction.
Step 2. Suppose that σ_Y=(_σ_Y,Z_σ_Y)∈((Y))^G is geometric. Recall
_σ_Y(ϕ)={∈ : π^∗() ∈_σ(ϕ)},
for all ϕ∈. Fix x∈ X, and let y∈ Y be the point corresponding to the orbit Gx. By assumption, _y is σ_Y-stable. Let denote its phase. Then π^∗(_y)=⊕_g∈ G_x∈_σ(). Moreover, since taking cohomology commutes with direct summands, _x∈_σ() for all g∈ G. In particular, _x∈_σ(). Now suppose that _x is strictly semistable, then there exist Å,∈_σ() such that
Å↪_x↠
is non-trivial, i.e. Å is not isomorphic to 0 or _x. By Theorem <ref>, (π_∗)^-1 sends _σ() to _σ_Y(). Hence we have a short exact sequence in _σ_Y():
π_∗(Å)↪π_∗(_x)=_y↠π_∗().
However, _y is stable, hence π_∗(A)=0 or π_∗(B)=0. But π is finite, hence π_∗ is conservative. Therefore A=0 or B=0, which is a contradiction.
§.§ Group actions and the classification of geometric stability conditions on surfaces
We denote by (X) the set of all geometric stability conditions on X. The following result is proved in <ref>:
[<cit.>]
Let X be a smooth projective surface, and let σ=(,Z) ∈(X). Then σ is determined by its central charge up to shifting the slicing by [2n] for any n∈.
Moreover, suppose σ is normalised using the ℂ-action so that Z(_x)=-1 and ϕ(_x)=1 for any x∈ X. Then
* the central charge can be written as follows:
Z([E]) = (α-iβ)ω^2·ch_0([E]) + (B+iω)·ch_1([E]) - ch_2([E])
where α,β∈, B,ω∈(X)⊗, and ω is ample. Moreover,
* the heart is of the form ((0,1])=⟨, [1]⟩, where (,) is the torsion pair on (X) given by:
:= { E∈(X) : any ω-semistable Harder–Narasimhan factor F of the torsion-free part of E satisfies ℑZ(F)> 0},
:= { E∈(X) : E is torsion-free, and any ω-semistable Harder–Narasimhan factor F of E satisfies ℑZ(F) ≤ 0},
Let G be a group acting on a smooth projective surface X. Then σ=(,Z)∈(X) is G-invariant if and only if Z is G-invariant.
If σ=(,Z)∈(X) is G-invariant, then so is Z. Suppose σ=(,Z)∈(X) and Z is G-invariant. Fix g∈ G. Then g^∗σ = (g^∗(),Z∘ g^∗) and σ are both geometric, and skyscraper sheaves have the same phase. By Theorem <ref>, σ = g^∗σ.
Let X be a smooth projective variety, and let G⊆^0(X) be a finite subgroup. Then the induced action of G on (X) is trivial.
Let ∈ G and [E]∈(X). The induced action of G on (X) is given by · [E] := [E⊗]. Since is a degree 0 line bundle, ()=e^c_1() and c_1()=0 in (X). Therefore,
|_([E⊗])=|_([E])·|_()=|_([E]).
By Hirzebruch-Riemann-Roch, (X)→(X) induces an injective map (X)→(X). Therefore, · [E]=[E⊗] = [E] in (X).
Let S be a smooth projective surface and let G⊆^0(S) be a finite subgroup. Then every geometric stability condition on S is G-invariant.
Let σ=(,Z)∈(S). By Corollary <ref> it is enough to show that Z is G invariant. By Lemma <ref>, G acts trivially on (S). Since σ is numerical, Z(S)→ factors via (S), hence Z is G invariant.
Let ∈ G, so · E:=E⊗ for every E∈. Since is a line bundle, ()=e^c_1(). Moreover it has degree 0, hence c_1()=0 in the numerical chow group. Therefore (E⊗)=(E)()=(E)e^c_1()=(E) in the numerical chow group.
Since σ is numerical, the central charge takes the form:
Z(E)=-χ(π(σ),E),
for some vector π(σ)∈(X)⊗. Then from Hirzebruch-Riemann-Roch it follows that:
Z(E)=∫_X^∨(π(σ))·(E)·(X).
Since (E⊗)=(E) in the numerical chow group it follows that Z(E⊗)=Z(E). Hence Z is G-invariant.
Suppose G is a finite abelian group acting freely on a smooth projective variety X, and let Y:=X/G. Then by Proposition <ref> there is also an action of G=(G,) on ≅. As discussed in <ref>. The corresponding action on is given by tensoring with degree 0 line bundles _χ for each χ∈G. Corollary <ref> tells us that every geometric stability condition on is G invariant.
Let X be a smooth projective variety. Note that, by <cit.>, (X) is open. Moreover, in <ref> we prove the following result for surfaces:
Let X be a smooth projective surface. Then (X) is connected.
§.§ Applications to varieties with finite Albanese morphism
Suppose that a finite group G acts on a triangulated category and that the induced action on () is trivial. Then (())^G is a union of connected components inside ().
By Bridgeland's deformation theorem <cit.>, there is a local homeomorphism
𝒵()→((),).
Let g∈ G, and denote by g_∗ the induced action of g on () and (). Recall that the action of G on () is given by g·σ=(g(),Z∘ g_∗). The induced action of G on () is trivial, hence (σ) is G-invariant and 𝒵(g·σ)= 𝒵(σ). Since commutes with the action of G on (), the properties of being G-invariant and not being G-invariant are open in (), so the result follows.
We now combine this with the results of <ref> and <ref>.
Let X be a smooth projective variety with finite Albanese morphism. Let G be a finite abelian group acting freely on X and let Y=X/G. Then ^†(Y):=((Y))^G is a union of connected components consisting only of geometric stability conditions.
X has finite Albanese morphism, so it follows from <cit.> that all stability conditions on X are geometric. In particular, all G-invariant stability conditions on X are geometric, so from the correspondence of Theorem <ref> it follows that all G-invariant stability conditions on Y are geometric. Hence ((Y))^G⊂(Y).
Recall from Example <ref> that G acts on (Y) by tensoring with degree 0 line bundles. Now we may apply Lemma <ref>, so it follows that G acts trivially on (Y). Hence, by Lemma <ref>, ((Y))^G is a union of connected components.
When X is a surface, we can apply the results of <ref> to describe all of (X):
Let X be a smooth projective surface with finite Albanese morphism. Let G be an abelian group acting freely on X. Let S=X/G. Then ^†(S)= ((X))^G=(S). In particular, ^†(S) is a connected component of (S).
By Theorem <ref>, ((S))^G⊂(S) is a union of connected components. By Corollary <ref>, (S)⊂ ((S))^G. Hence (S)=((S))^G. By Proposition <ref>, ^†(S)≅(S) is connected. In particular, ^†(S) is a connected component of (S).
See Example <ref> for an explicit description of ^†(S)=(S).
Let S=(C_1× C_2) / G be the quotient of a product of smooth curves such that g(C_1), g(C_2)≥ 1, and G is a finite abelian group acting freely on C_1× C_2. Then C_1× C_2 has finite Albanese morphism. By Corollary <ref>, (S) has a connected component consisting precisely of the geometric stability conditions. In particular, we could take S to be a Beauville-type or bielliptic surface (see Example <ref> and Example <ref>).
Let X be a smooth projective surface with finite Albanese morphism, and let G be an abelian group acting freely on X. Let S=X/G, and denote by π X→ S the quotient map. Moreover, let H_X be a G-invariant polarization of X and let H_S be the corresponding polarization on S such that π^∗ H_S = H_X. Then all stability conditions in _H(X) are G-invariant.
Therefore, under the correspondence of Theorem <ref>, ^†_H_S(S)=(_H_S(S))^G≅_H_X(X). From Corollary <ref> it follows that H_S(S)≅_H_X(X). _H_X(X) is the same as the connected component described in <cit.>. In particular, _H_S^†(S)≅H_S(S) is connected and contractible.
A Calabi–Yau threefold of abelian type is an étale quotient Y=X/G of an abelian threefold X by a finite group G acting freely on X such that the canonical line bundle of Y is trivial and H^1(Y,)=0. As discussed in <cit.>, these are classified in <cit.>. In particular, G can be chosen to be (/2)^⊕ 2 or D_8, and the Picard rank of Y is 3 or 2 respectively.
Fix a polarization (Y,H), and consider stability conditions that factor via Λ_H, i.e. _H(Y). This has a connected component 𝔓 of geometric stability conditions induced from _H(X) <cit.> which is described explicitly in <cit.>. Moreover, by <cit.> together with Theorem <ref>, σ∈𝔓 satisfies the full support property. Hence 𝔓 lies in a connected component ^†(Y)⊂(Y) consisting only of geometric stability conditions.
§ THE LE POTIER FUNCTION
We compute the Le Potier function of free abelian quotients and varieties with finite Albanese morphism. We apply this to Beauville-type surfaces, which provide counterexamples to Conjecture <ref>.
§.§ H-stability
Let X be a smooth projective variety over . Fix an ample class H∈_(X). Given F∈(X) we define the H-slope of F as follows:
μ_H(F) := H^n-1_1(F)/H^n_0(F), if _0(F)>0;
+∞, if _0(F)=0.
We say that F is H-(semi)stable if for every 0≠ E ⊊ F,
μ_H(E)<(≤)μ_H(F/E).
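As a quick illustration of this convention (an aside, not needed later): for X=^2 with H the hyperplane class and F=_^2(d), one has _0(F)=1 and _1(F)=dH, so μ_H(F)=(H· dH)/H^2=d; any torsion sheaf, for instance _C for a curve C⊂^2, has _0=0 and hence slope +∞.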
§.§ The Le Potier Function
When studying H-stability, a natural question that arises is whether there are necessary and sufficient conditions on a cohomology class γ∈ H^∗(X,) for there to exist an H-semistable sheaf F with (F)=γ.
The Bogomolov–Gieseker inequality (see <cit.>, or <cit.>) gives the following necessary condition for H-semistable sheaves on surfaces:
2 _0(F)_2(F)≤_1(F)^2.
This generalises to the following statement for any smooth projective variety X of dimension n≥ 2 via the Mumford–Mehta–Ramanathan restriction theorem.
Let X be a smooth projective variety of dimension n≥ 2. Fix H∈_(X). If F is a torsion-free H-semistable sheaf, then
2 _0(F)(H^n-2_2(F))≤ H^n-2_1(F)^2.
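As a side remark, the semistability hypothesis in these inequalities cannot be dropped: on a surface (n=2) with H ample, the sheaf F=_X⊕_X(H) has _0(F)=2, _1(F)=H and _2(F)=H^2/2, so 2_0(F)_2(F)=2H^2>H^2=_1(F)^2. This does not contradict Theorem <ref>, since the subsheaf _X(H)⊂ F has strictly larger slope than the quotient _X, so F is not H-semistable.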
(1) Let A· B denote the intersection product of elements of (X)⊗. If A· B is 0-dimensional, we define A B :=(A· B).
(2) Let B∈_(X). The twisted Chern character is defined by ^B :=· e^-B. Then
2_0(F)^B(H^n-2_2(F)^B) -H^n-2(_1(F)^B)^2 = 2 _0(F)(H^n-2_2(F))- H^n-2_1(F)^2,
hence Theorem <ref> holds for twisted Chern characters.
Now fix (H,B)∈_(X)×_(X). Then H^n>0. Let F be any H-semistable sheaf. By the twisted version of Theorem <ref>,
2H^n_0(F)(H^n-2_2^B(F))≤ H^n (H^n-2_1^B(F)^2)≤(H^n-1_1^B(F))^2,
where the final inequality is by the Hodge Index Theorem. Since F is torsion free,
H^n-2_2^B(F)/H^n_0(F)≤1/2(H^n-1_1^B(F)/H^n_0(F))^2.
Now we expand the expressions for _2^B(F) and _1^B(F):
H^n-2_2(F)-H^n-2 B_1(F)+1/2H^n-2 B^2_0(F)/H^n_0(F)
≤1/2(H^n-1_1(F)-H^n-1 B_0(F)/H^n_0(F))^2
= 1/2(μ_H(F)-H^n-1 B/H^n)^2.
Therefore,
H^n-2_2(F)-H^n-2 B_1(F)/H^n_0(F)≤1/2(μ_H(F)-H^n-1 B/H^n)^2 - 1/2H^n-2 B^2/H^n.
This motivates the following Definition.
Let X be a smooth projective variety of dimension n ≥ 2. Let (H,B)∈_(X)×_(X). We define the Le Potier function twisted by B, Φ_X,H,B→∪{-∞}, by
Φ_X,H,B(x):=lim sup_μ→ x{H^n-2_2(F) - H^n-2 B_1(F)/H^n_0(F) : F∈(X) is H-semistable with μ_H(F)=μ}
if the limit exists, and Φ_X,H,B(x):=-∞ otherwise.
If B=0, we will write Φ_X,H:=Φ_X,H,0. If n=2, then Φ_X,H is exactly <cit.>.
The above discussion and definition generalises <cit.>:
Let X be a smooth projective variety of dimension n≥ 2. Let (H, B)∈_(X)×_(X). Then Φ_X,H,B is well defined and satisfies
Φ_X,H,B(x)≤1/2[(x-H^n-1 B/H^n)^2 - H^n-2 B^2/H^n].
It is the smallest upper semi-continuous function such that
H^n-2_2(F)-H^n-2 B_1(F)/H^n_0(F)≤Φ_X,H,B(H^n-1_1(F)/H^n_0(F))
for every torsion-free H-stable (or H-semistable) sheaf F.
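As an aside, here is one case where this bound is attained: if X is an abelian surface and B=0, then for every rational slope x=p/q the semi-homogeneous bundles E_p,q considered below are H-semistable with μ_H(E_p,q)=p/q and _2(E_p,q)/H^2_0(E_p,q)=x^2/2; by upper semi-continuity it follows that Φ_X,H(x)=x^2/2 for every x∈.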
§.§ The Le Potier function for free quotients
Let X be a smooth projective variety, and let G be a finite group acting freely on X. There is an étale covering of smooth projective varieties, π X→ X/G=:Y. Then (Y)≅_G(X), the group of isomorphism classes of G-equivariant line bundles on X. Fix H_S∈_(Y). Then π^∗ H_S∈_(X) is G-invariant. Beauville-type and bielliptic surfaces provide examples of such quotients.
In this example we consider the Beauville surface, and follow the treatment by Galkin and Shinder in <cit.>. Let G=(/5)^2=/5· e_1⊕/5 · e_2 act on a three dimensional vector space V with induced action on ^2=(V) given by:
e_1· [X:Y:Z] = [ζ_5X:Y:Z]
e_2·[X:Y:Z] = [X:ζ_5Y:Z].
Here ζ_5 is a fixed 5th root of unity. Let C_1 be the plane G-invariant Fermat quintic curve given by X^5+Y^5+Z^5=0. Consider the scheme-theoretic quotient C_1/G≅^1, and the quotient map π_1 C_1→^1, which has degree 25. Now let C_2 be the curve described by the same equation as C_1, but with a different G-action given by:
e_1· [X:Y:Z] = [ζ_5^2X:ζ_5^4Y:Z]
e_2·[X:Y:Z] = [ζ_5X:ζ_5^3Y:Z].
Then the diagonal action of G on X=C_1× C_2 is free, as the stabilisers of the actions on C_1 and C_2 are distinct (see <cit.>). Let S:=X/G; this is the Beauville surface. It was first discussed in <cit.>, and it satisfies p_g(S)=q(S)=0.
Now let p_1 X→ C_1, and p_2 X→ C_2 be the corresponding projections, and define:
(b,c):=p_1^∗(_C_1(b))⊗ p_2^∗(_C_2(c)),
for any b,c∈. Let _G(X) denote the group of isomorphism classes of G-equivariant line bundles on X. Let G=(G,^∗) be the group of characters. G is abelian so it follows from <cit.> that _G(X) fits into the following exact sequence:
0→G→_G(X)→((X))^G/rat∼→ 0,
where the quotient is the group of G-invariant divisors up to rational equivalence. Note that G→_G(X) is the map that associates to a character χ: G→^∗ a trivial line bundle with G-linearisation induced by χ (see Proposition <ref> for more details).
We have the following description of (S) from <cit.>:
(S)≅_G(X)=G·[]⊕[(1,0)]⊕[(0,1)].
In particular, any divisor of the form (b,c) where b,c∈_≥0 and bc≠0 is a G-invariant ample divisor on X. For example, let H_X=(1,1). As a divisor, H_X corresponds to 5[{pt}× C_2] + 5[C_1×{pt}] up to rational equivalence, and on S this corresponds to [{pt}× C_2] + [C_1×{pt}] (as the 5 points are in the same G-orbit on each curve).explain what H_S is
The above arguments for constructing a G-invariant ample divisor should generalise to the following construction: Let X=C_1× C_2, where C_i is a curve of genus g_i. Suppose a finite group G acts freely on X, and let S:=X/G. If p_g(S)=q(S)=0, then these surfaces are classified in <cit.>. In particular, either S≅^1×^1, or g_1,g_2≥ 2. In the second case, we call S a Beauville-type surface. If G is abelian, then G is one of the following groups: (/2)^3, (/2)^4, (/3)^2, (/5)^2. See <cit.> for details.
[Ample classes on Beauville-type surfaces]
Let S= X / G be a Beauville-type surface, as introduced in Example <ref>. Then X=C_1× C_2 is a product of curves of genus g(C_i)>1, q(S):=h^1(S,_S)=0, p_g(S):=h^2(S,_S)=0 so χ(_S)=1, and K_S^2=8 where K_S is the canonical divisor of S.
Assume that there are actions of G on each curve C_i such that the action of G on C_1× C_2 is the diagonal action. This is called the unmixed case in <cit.> and excludes 3 families of dimension 0. To classify ample classes on S, we follow similar arguments to <cit.>. Let p_1 X→ C_1 and p_2 X→ C_2 denote the projections to each curve. For i,j∈, define
(i,j) := p_1^∗(_C_1(i))⊗ p_2^∗ (_C_2(j))∈_G(X).
Moreover,
(S) = (C_1)·(C_2)/|G| = 4 (1-g(C_1))(1-g(C_2))/|G| = 4 χ(_S) =4
Therefore, (S) = b_2=2 and
_(S) ≅· [(1,0)]⊕·[(0,1)].
In particular, _(S) ≅_>0·[(1,0)]⊕_>0·[(0,1)].
Let S be a bielliptic surface, i.e. S=(E× F)/G, where E,F are elliptic curves, and G is a finite abelian group acting on E and F such that F/G≅^1 and the action on E is by translation. Consider the commutative square with top row a_X X:=E× F→(X)=E× F, bottom row S=(E× F)/G→ E/G, and vertical maps the quotient π X→ S on the left and the induced map (X)→ E/G on the right.
Then a_S is an elliptic fibration; since q(S)=1, the Albanese variety of S is isogenous to E/G.
Let ∈(Y). Then is H_Y-semistable if and only if π^∗ is π^∗ H_Y-semistable.
This follows from the same arguments as in the proof of <cit.>.
If ∈(X) is π^∗ H_Y-semistable, then π_∗ is H_Y-semistable.
Suppose that ∈(X) is π^∗ H_Y-semistable. Note that
π^∗(π_∗()) = ⊕_g∈ Gg^∗.
Since π^∗ H_Y is G-invariant, it follows that g^∗ is π^∗ H_Y-semistable for every g∈ G. In particular, ⊕_g∈ Gg^∗ is π^∗ H_Y-semistable. By Lemma <ref>, π_∗ is H_Y-semistable.
Let X be a smooth projective variety, and let G be a finite group acting freely on X. Let π X→ X/G=:Y denote the quotient. Let (H_Y, B_Y)∈_(Y)×_(Y). Then Φ_Y,H_Y,B_Y=Φ_X, π^∗ H_Y, π^∗ B_Y.
To determine the Le Potier function, it is enough to consider torsion-free sheaves, since torsion sheaves have slope +∞.
Step 1: Let ∈(Y) be torsion-free and H_Y-semistable. By Lemma <ref>, π^∗ is π^∗ H_Y-semistable. Moreover,
μ_π^∗ H_Y(π^∗)
= ((π^∗ H_Y)^n-1·π^∗(_1()))/((π^∗ H_Y)^n·π^∗(_0()))
= (π^∗ (H_Y^n-1·_1()))/(π^∗ (H_Y^n·_0())) (π is flat, so π^∗ is a ring morphism)
= (π) (H_Y^n-1·_1())/(π)(H_Y^n·_0())
= μ_H_Y()
By the same arguments,
(π^∗ H_Y)^n-2._2(π^∗)-(π^∗ H_Y)^n-2.(π^∗ B_Y)._1(π^∗)/(π^∗ H_Y)^n._0(π^∗) = H_Y^n-2._2()-H^n-2_Y.B_Y._1()/H^n_Y._0().
Hence the contribution of π^∗ to Φ_X, π^∗ H_Y, π^∗ B_Y is the same as the contribution of to Φ_Y,H_Y,B_Y.
Step 2: Suppose ∈(X) is torsion-free and π^∗ H_Y-semistable. Note that
π^∗(π_∗()) = ⊕_g∈ Gg^∗.
π^∗ H_Y is G-invariant, hence g^∗ is π^∗ H_Y-semistable for every g∈ G. In particular, ⊕_g∈ Gg^∗ is π^∗ H_Y-semistable. Since the Chern character is additive, π^∗π_∗ makes the same contribution to Φ_X, π^∗ H_Y, π^∗ B_Y as . By Lemma <ref>, π_∗ is H_Y-semistable. The result follows from Step 1.
§.§ The Le Potier function for varieties with finite Albanese morphism
The Le Potier function for surfaces with finite Albanese morphism was known previously <cit.>. Below, we give a different proof which works for Φ_X,H,B in any dimension. We first need the following definition.
A vector bundle E on an abelian variety X is semi-homogeneous if for every x∈ X, there exists a line bundle L on X such that T_x^∗(E)≅ E⊗ L, where T_x is translation on X by x.
See <cit.> for some equivalent characterisations. There are many H-semistable semi-homogeneous vector bundles on any abelian variety:
Let A be an abelian variety and fix H∈_(A). Then for every p/q∈ there exists an H-semistable semi-homogeneous vector bundle E_p,q on A with μ_H(E_p,q)=p/q and (E_p,q)=_0(E_p,q) · e^pH/q.
We use this to compute the Le Potier function for varieties with finite Albanese morphism.
Let X be a smooth projective variety with finite Albanese morphism a X→(X) and n:= X ≥ 2. Let H_X∈_(X). Then a^∗ E_p,q is H_X-semistable for every p/q∈.
Fix p/q∈ and H_A∈_((X)). Let E_p,q be the corresponding H_A-semistable semi-homogeneous vector bundle on (X) from Proposition <ref>. Let r:=_0(E_p,q) and let r_(X)(X)→(X) denote the multiplication by r map. By <cit.>, r_(X)^∗(E_p,q) is a homogeneous vector bundle up to tensoring with a fixed line bundle L.
Moreover, by <cit.>, any homogeneous vector bundle is a direct sum of vector bundles of the form P_i⊗ U_i, where P_i is a degree 0 line bundle and U_i is a unipotent vector bundle (i.e. U_i is an iterated self-extension of _(X)). Therefore, r_(X)^∗(E_p,q) is an iterated extension of degree 0 line bundles.
Now consider the fibre square Z:=X×_(X)(X), with projections p_X Z→ X and p_A Z→(X), formed from the morphisms a X→(X) and r_A:=r_(X)(X)→(X).
Then p_X^∗ a^∗ (E_p,q)= p_A^∗ r_A^∗ (E_p,q) on Z. The property of being an extension of degree 0 line bundles is preserved by taking pullback. Hence p_X^∗ a^∗(E_p,q) is an iterated extension of degree 0 line bundles. Recall that line bundles are stable with respect to any ample class. Thus p_X^∗ a^∗(E_p,q) is p_X^∗ H_X-semistable. Finally, by Lemma <ref>, a^∗(E_p,q) is H_X-semistable.
Let X be a smooth projective variety with finite Albanese morphism a X→(X) and n:= X ≥ 2. Fix (H,B)∈_((X))×_((X)). Then
Φ_X,a^∗ H,a^∗ B(x) = 1/2[(x-(a^∗ H)^n-1 a^∗ B/(a^∗ H)^n)^2 - (a^∗ H)^n-2 (a^∗ B) ^2/(a^∗ H)^n].
H_X:=a^∗ H is ample, since it is the pullback of an ample class by a finite morphism. Let p/q∈, and assume without loss of generality that (p,q)=1. Then consider:
μ_H_X(a^∗ E_p,q)
= (H_X^n-1· a^∗_1(E_p,q))/(H^n_X· a^∗_0(E_p,q))
=(H_X^n-1·_0(E_p,q)p/q a^∗ H)/(H^n_X·_0(E_p,q))
= p/q(H_X^n-1· H_X)/(H_X^n)
=μ_H(E_p,q)
Similarly,
H_X^n-2._2(a^∗ E_p,q) - H_X^n-2.B_X._1(a^∗ E_p,q)/H_X^n._0(a^∗ E_p,q)
= [H_X^n-2· a^∗(_0(E_p,q)1/2(p/qH)^2)-H_X^n-2· B_X· a^∗(_0(E_p,q)p/qH)]/(H^n_X·_0(E_p,q))
= 1/2(p/q)^2(H_X^n-2· H_X^2)/(H_X^n) -p/q(H_X^n-2· B_X· H_X)/(H_X^n)
=1/2[(μ_H_X(a^∗ E_p,q)-H_X^n-1. B_X/H_X^n)^2 - H_X^n-2.B_X^2/H_X^n]
Moreover, by Proposition <ref>, a^∗ E_p,q is H_X-semistable. Hence this gives a lower bound for Φ_X,H_X,B_X(p/q), which is the same as the upper bound in Lemma <ref>. Since Φ_X,H_X,B_X is upper semicontinuous, the result follows.
We now combine this with Proposition <ref>.
Let X be a smooth projective variety with finite Albanese morphism a X→(X), and let G be a finite group acting freely on X. Let π X→ X/G=:Y denote the quotient map. Suppose we have:
* H_X=a^∗ H = π^∗ H_Y : a class in _(X) pulled back from (X) and Y, and
* B_X=a^∗ B = π^∗ B_Y : a class in _(X) pulled back from (X) and Y.
Then Φ_Y,H_Y,B_Y(x)=1/2[(x-H_Y^n-1. B_Y/H_Y^n)^2 - H_Y^n-2.B_Y^2/H_Y^n].
By Proposition <ref> and Proposition <ref>, it follows that:
Φ_Y,H_Y,B_Y(x)=Φ_X,π^∗ H_Y, π^∗ B_Y(x)=1/2[(x-(π^∗ H_Y)^n-1π^∗ B_Y/(π^∗ H_Y)^n)^2 - (π^∗ H_Y)^n-2 (π^∗ B_Y) ^2/(π^∗ H_Y)^n].
The result follows by the projection formula.
Suppose X has finite Albanese morphism a X→(X), and let G be a finite group acting freely on X. This induces an action of G on ((X)). Fix L∈((X)). Then H_X:=⊗_g∈ Gg^∗ L∈_(X) satisfies the hypotheses of Corollary <ref>. In particular, this applies to bielliptic surfaces (q=1) and Beauville-type surfaces (q=0). The latter provides a counterexample to Conjecture <ref>.
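Explicitly, taking B_Y=0 in Corollary <ref> gives Φ_Y,H_Y(x)=x^2/2 for such a quotient; in other words, with respect to these polarisations the Le Potier function of a bielliptic or Beauville-type surface agrees with that of an abelian surface.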
§ GEOMETRIC STABILITY CONDITIONS AND THE LE POTIER FUNCTION
We use the Le Potier function to describe the set of geometric stability conditions on any surface. This was previously known for surfaces with Picard rank 1 <cit.>.
§.§ The deformation property and tilting
To prove existence of stability conditions later in this section, we will need the following refinement of Theorem <ref>:
Let be a triangulated category. Assume σ=(,Z)∈() satisfies the support property with respect to a quadratic form Q on ()⊗. Consider the open subset of _((),) consisting of central charges whose kernel is negative definite with respect to Q, and let U be the connected component containing Z. Let denote the local homeomorphism from Theorem <ref>, and let 𝒰⊂() be the connected component of the preimage 𝒵^-1(U) containing σ. Then
* the restriction 𝒵|_𝒰𝒰→ U is a covering map, and
* any stability condition σ'∈𝒰 satisfies the support property with respect to the same quadratic form Q.
Let be a triangulated category. Assume σ=(, Z)∈() satisfies the support property with respect to a quadratic form Q on ()⊗. Let U⊂_((),), and 𝒰⊂() be the connected components from Proposition <ref>. Suppose there is a path Z_t in U parametrised by t∈[0,1], such that Im Z_t is constant and Z_t_0=Z for some t_0∈[0,1]. Then this lifts to a path σ_t = (_t,Z_t) in 𝒰 passing through σ along which _t(0,1]=(0,1] and σ_t satisfies the support property with respect to Q.
Let denote the local homeomorphism from Theorem <ref>. By Proposition <ref>(1), |_→ U is a covering map. By the path lifting property, there is a unique path σ_t = (_t,Z_t) in 𝒰 with σ=σ_t_0. By Proposition <ref>(2), σ_t satisfies the support property with respect to Q for all t. It remains to show that _t(0,1]=(0,1].
Fix a non-zero object E∈. We claim that the set of points in the path σ_t where E∈_t(0,1] is open and closed. Suppose E∈_T(0,1] for some T∈[0,1]. Then all Jordan-Hölder (JH) factors of E with respect to σ_T, E_i, are in _T(0,1] and satisfy Im Z_T(E_i)≥ 0. The property of being stable is open in (X) (see <cit.>). Moreover, 0<ϕ__t(E_i) is an open property. Since Im Z_t is constant, Im Z_t(E_i)≥ 0 for all t. Hence, for all sufficiently close σ_t, ϕ__t(E_i)≤ 1 and E∈_t(0,1].
Now suppose σ_T is in the closure and not the interior of {σ_t : E∈_t(0,1]} inside {σ_t : t∈[0,1]}. Recall that ϕ^+(E) and ϕ^-(E) are continuous. Hence ϕ^-__T(E)=0, and E has a morphism to a stable object in _T(0) which is also stable nearby. In particular, {σ_t : E∉_t(0,1]} is open, which proves the claim. Hence _t(0,1] is constant. Since _t_0=(0,1], the result follows.
To construct stability conditions, we will also need the following definition.
Let Å be an abelian category. A torsion pair on Å is a pair of full additive subcategories (,) of Å such that
* for any T∈ and F∈, (T,F)=0, and
* for any E∈Å there are T∈, F∈, and an exact sequence
0 → T → E → F → 0.
Let X be a smooth projective variety. Let Å be the heart of a bounded t-structure on . Suppose (,) is a torsion pair on Å. Then
Å^♯ := {E∈ D^b(Å) | _Å^0(E)∈, _A^-1(E)∈, _Å^i(E)=0 for all i≠0,-1}
is the heart of a bounded t-structure on D^b(Å). We call Å^♯ the tilt of Å with respect to (,).
§.§ The central charge of a geometric stability condition
For the rest of this section, let X be a smooth projective surface over . We are particularly interested in geometric Bridgeland stability conditions, i.e. σ∈(X) such that the skyscraper sheaf _x is σ-stable for every point x∈ X. Denote by (X) the set of all geometric stability conditions.
Let X be a smooth projective surface, and let σ=(,Z) ∈(X). Then σ is determined by its central charge up to shifting the slicing by [2n] for any n∈.
Moreover, suppose σ is normalised using the action of such that Z(_x)=-1 and ϕ(_x)=1 for all x∈ X. Then
* the central charge can be uniquely written in the following form:
Z([E]) = (α-iβ)H^2_0([E]) + (B+iH)_1([E]) -_2([E]),
where α,β∈, (H,B)∈_(X)×_(X). Moreover,
* the heart, ((0,1]), is the tilt of (X) at the torsion pair (,), where
:= { E∈(X) : Any H-semistable Harder–Narasimhan factor F of the torsion free part of E satisfies Im Z([F])> 0.},
:= { E∈(X) : E is torsion free, and any H-semistable Harder–Narasimhan factor F of E satisfies Im Z([F]) ≤ 0.},
We will use Z_H,B,α,β=Z to denote central charges of the above form. Since Im Z_H,B,α,β, and hence the torsion pair above, only depends on H and β, we will write (_H,β,_H,β) for the torsion pair, and ^H,β(X) for the corresponding tilted heart. Then σ_H,B,α,β:=( Z_H,B,α,β, ^H,β(X)).
The proof is similar to the case of K3 surfaces proved in <cit.>. We first need the following result which immediately generalises to any smooth projective surface:
Suppose σ=(,Z)∈(X) is a stability condition on a smooth projective surface X such that for each point x∈ X the sheaf _x is σ-stable of phase one. Let E be an object of . Then
* if E∈((0,1]) then H^i(E)=0 unless i∈{-1,0}, and moreover H^-1(E) is torsion free,
* if E∈(1) is stable, then either E=_x for some x∈ X, or E[-1] is a locally-free sheaf,
* if E∈(X) is a sheaf then E∈((-1,1]); if E is a torsion sheaf then E∈((0,1]),
* the pair of subcategories
= (X)∩((0,1]) and =(X)∩((-1,0])
defines a torsion pair on (X) and ((0,1]) is the corresponding tilt.
(of Theorem <ref>)
Step 1: Since σ is numerical, the central charge can be written as follows:
Z([E]) = a _0([E]) + B_1([E]) + c _2([E]) + i(d _0([E]) + H _1([E]) + e _2([E])),
where a,c,d,e∈ and B,H∈_(X).
Since σ is geometric, _x is σ-stable and of the same phase for every point x∈ X by Proposition <ref>. As discussed in Remark <ref>, acts on (X). In particular, there is a unique element g∈ such that g^∗σ=(',Z') satisfies Z'([_x])=-1 and _x∈'(1) for all x∈ X. Now we may assume that Z([_x])=-1 and _x∈(1) for all x∈ X. Hence -1=c and e=0. Let C⊂ X be a curve. By Lemma <ref>(c), _C∈((0,1]). Since _0(_C)=0 and _1(_C)=C,
Z([_C]) = H C ≥ 0.
This holds for any curve C⊂ X, so H∈_(X) is nef. By <cit.>, (X) is open. Moreover, by Theorem <ref>, a small deformation from σ to σ' in (X) corresponds to a small deformation of the central charges Z to Z', and in turn a small deformation of H to H' inside _(X). In particular, H' C ≥ 0 for any curve C⊂ X. Therefore, H lies in the interior of the nef cone, hence H is ample.
Now let α:=a/H^2 and β:=-d/H^2. Then the central charge is of the form:
Z([E]) = (α-iβ)H^2_0([E]) + (B+iH)_1([E]) -_2([E]).
Step 2: Consider the torsion pair (,) of Lemma <ref>(d), so ((0,1]) is the tilt of (X) at (,). By Lemma <ref>(c), all torsion sheaves lie in . To complete the proof, we need the following claim:
∗
E∈(X) is H-stable and torsion-freeE∈ if Z([E])>0,
E∈ if Z([E])≤0.
This is Step 2 of the proof of <cit.>. Bridgeland first shows that E must lie in or . We explain why it then follows that Z([E])=0 implies E∈. Assume E is non-zero and E∈. Since Z([E])∈, it follows that E∈(1). For any x∈(E), E has a non-zero map f E→_x. Let E_1 be its kernel in (X). Since _x is stable, f is a surjection in (1). Thus E_1 also lies in (1) and hence in . Moreover, Z([E_1])=Z([E])-Z([_x])=Z([E])+1. Repeating this by replacing E with E_1 and so on creates a chain of strict subobjects in (1), E⊋ E_1 ⊋ E_2 ⊋⋯, such that Z([E_n])=Z([E])+n. If this process does not terminate, then Z([E_k])∈_>0 for some k∈, contradicting the fact that E_k∈((0,1]). Otherwise, E_i≅_x for some i, contradicting the fact that E is torsion-free.
§.§ The set of all geometric stability conditions on surfaces
In the previous section, we showed that any geometric stability condition on a surface which satisfies Z(_x)=-1, ϕ(_x)=1 is determined by its central charge. In particular, it depends on parameters (H,B,α,β)∈_(X)×_(X)×^2. To characterise geometric stability conditions on surfaces, we will find necessary and sufficient conditions for when these parameters define a geometric stability condition. In Definition <ref> we introduced the Le Potier function twisted by B. We restate the version for surfaces below.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). We define the Le Potier function twisted by B, Φ_X,H,B→, by
Φ_X,H,B(x):=lim sup_μ→ x{_2(F)-B_1(F)/H^2_0(F) : F∈(X) is H-semistable with μ_H(F)=μ}.
By <cit.>, for every rational number μ∈ there exists an H-stable sheaf F with μ_H(F)=μ. Together with the fact that Φ_X,H,B is bounded above, it follows that the value of Φ_X,H,B at every point is in .
Let X be a smooth projective surface. Then
(X)≅×{(H,B,α,β)∈_(X)×_(X)×^2 : α>Φ_X,H,B(β)}.
In <cit.>, the authors describe a subset of (X) parametrised by (H,B)∈_(X)×_(X). This corresponds to where α>1/2[(β - H B/H^2)^2-B^2/H^2] in the above Theorem (see Lemma <ref> for details). We will call this the BG range.
To ease notation, we make the following definitions.
𝒰 :={(H,B,α,β)∈_(X)×_(X)×^2 : α>Φ_X,H,B(β)}
N(X) :={σ=(,Z)∈(X):Z(_x)=-1, _x∈(1) ∀ x∈ X}
Idea of the proof. By Theorem <ref>, for every σ∈(X), there exists a unique g∈ such that g^∗σ∈N(X). To prove Theorem <ref> it is enough to show that N(X)≅. We do this in two steps:
Step 1: Construct a continuous, injective, local homeomorphism ΠN(X)→: Theorem <ref> shows that, for every σ∈N(X), there are unique (H,B,α,β)∈_(X)×_(X)×^2 such that σ=σ_H,B,α,β. This gives an injective map:
ΠN(X) ⟶_(X)×_(X)×^2
σ=σ_H,B,α,β ⟼ (H,B,α,β)
We will show that the image is contained in (Lemma <ref>), and that Π is a local homeomorphism (Proposition <ref>).
Step 2: Construct a pointwise inverse Σ→N(X). We will first show this is possible for (H,B,α,β) in the BG range (Lemma <ref>). In Proposition <ref>, we extend this to any α>Φ_X,H,B(β) by applying Corollary <ref> as follows:
* Fix (H,B)∈_(X)×_(X), and α_0>Φ_X,H,B(β_0).
* Fix α_1>1/2[(β_0 - H B/H^2)^2-B^2/H^2].
If only α varies, then Im Z_H,B,α,β_0 is constant. We construct a quadratic form (Proposition <ref>) and show that it gives the support property for σ_H,B,α_1,β_0 (Lemma <ref>) and is negative definite on the kernel of Z_H,B,α,β_0 for all α>Φ_X,H,B(β_0) (Lemma <ref>).
§.§.§ STEP 1: Construction of the map N(X)→.
Let σ=σ_H,B,α,β∈N(X). Then α>Φ_X,H,B(β). In particular, Π(N(X))⊆.
To prove this statement, we first need the following lemmas.
Let σ_H,B,α,β=(Z_H,B,α,β,^H,β(X))∈N(X). Then there is no torsion-free H-semistable sheaf F such that Z([F])∈_≤ 0.
Suppose such an F exists. Then Z([F])=0. From the definition of the torsion pair (_H,β,_H,β) in Theorem <ref>, it follows that F∈_H,β. But this implies that Z([F])∈_>0.
Let X be a smooth projective surface. Let (H,B,α,β)∈_(X)×_(X)×^2. Suppose
α≤max{_2(F)-B_1(F)/H^2_0(F) : F∈(X) is H-semistable with μ_H(F)=β}.
Then there exists an H-semistable sheaf F with _0(F)>0 and Z_H,B,α,β(F)∈_≤ 0.
By our hypotheses, there exists an H-semistable sheaf F with β=μ_H(F)=H_1(F)/H^2_0(F), and α≤_2(F)-B_1(F)/H^2_0(F). Since μ_H(F)≠ +∞, _0(F)>0. Moreover,
(Z_H,B,α,β([F])) =α H^2_0([F])+B_1([F])-_2([F])≤ 0,
(Z_H,B,α,β([F])) = H_1([F])-β H^2_0([F])=0.
Hence Z_H,B,α,β([F])∈_≤ 0, as required.
Suppose σ=σ_H,B,α,β∈N(X) is geometric. Then there is an open neighbourhood W⊂^2 of (α,β), such that for every (α',β')∈ W, σ_H,B,α',β'∈N(X).
By <cit.>, there is an open neighbourhood U of σ in (X) where all skyscraper sheaves are stable. Together with Theorem <ref>, it follows that there is an open neighbourhood V⊆_((X),) of Z_H,B,α,β such that for any Z'∈ V, the associated stability condition σ'=(Z',Å')∈N(X). By Theorem <ref>, V can be identified with a subset of _(X)×_(X)×^2. Let W be the intersection of V with {H}×{B}×^2. Then W has the required properties.
Suppose σ=σ_H,B,α,β=(Z_H,B,α,β,^H,β(X)) is geometric. Suppose α≤Φ_X,H,B(β). Then there exists σ_0=(Z_0,Å_0)∈N(X), and a torsion-free H-semistable sheaf F such that Z_0([F])∈_≤ 0.
Let W⊂^2 be the open neighbourhood of (α,β) from Lemma <ref>. Recall that
Φ_X,H,B(β):=lim sup_μ→β{_2(F)-B_1(F)/H^2_0(F) : F∈(X) is H-semistable with μ_H(F)=μ}.
Therefore, there exist H-semistable sheaves with slopes arbitrarily close to β, and _2-B_1/H^2_0 arbitrarily close to Φ_X,H,B(β). Hence there exists (α_0,β_0)∈ W and an H-semistable sheaf F with
μ_H(F)=β_0 and α_0 < _2(F)-B_1(F)/H^2_0(F).
In particular,
α_0 ≤max{_2(F)-B_1(F)/H^2_0(F) : F∈(X) is H-semistable with μ_H(F)=β_0}.
By Lemma <ref>, there exists an H-semistable sheaf F' with _0(F')>0 and Z_H,B,α_0,β_0([F'])∈_≤0. By Lemma <ref>, σ_0:=σ_H,B,α_0,β_0∈N(X).
From Theorem <ref>, we know that σ = g^∗σ_H,B,α,β for some g∈. Suppose α≤Φ_X,H,B(β). By Lemma <ref>, there exists a geometric stability condition σ_0=(Z_0,Å_0), and an H-semistable sheaf F with _0(F)>0 such that Z_0([F])∈_≤0. However, σ_0 is geometric, so this contradicts Lemma <ref>.
Let X be a smooth projective surface. Then the following map is an injective local homeomorphism onto its image
ΠN(X) ⟶={(H,B,α,β)∈_(X)×_(X)×^2 : α>Φ_X,H,B(β)}
σ=σ_H,B,α,β ⟼ (H,B,α,β)
Let (X)→((X),) denote the local homeomorphism from Theorem <ref>. Let
:={σ∈(X):Z(_x)=-1 ∀ x∈ X}.
Then |_ is a continuous local homeomorphism onto its image, hence so is |_N(X).
By Theorem <ref>, any σ∈N(X) is determined by its central charge, hence |_N(X) is injective, and Π factors via |_N(X). By the same argument as Step 1 of the proof of Theorem <ref>,
()≅{(H,B,α,β)∈(_(X))^2×^2 }.
Π is exactly the above isomorphism composed with |_N(X). Hence it is an injective local homeomorphism onto its image.
§.§.§ STEP 2: Construction of the pointwise inverse →N(X).
We first recall the construction of stability conditions in <cit.>.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Define σ_H,B:=(^H,B(X),Z_H, B), where
Z_H,B([E]) = (-_2^B([E])+H^2/2_0^B([E])) + iH_1^B([E])
= [1/2(1-B^2/H^2)-iH B/H^2]H^2_0([E])+(B+iH)_1([E])-_2([E]),
_H,B = { E∈(X) : Any H-semistable Harder–Narasimhan factor F of the torsion free part of E satisfies Im Z_H,B([F])> 0.},
_H,B = { E∈(X) : E is torsion free, and any H-semistable Harder–Narasimhan factor F of E satisfies Im Z_H,B([F]) ≤ 0.},
and ^H,B(X) is the tilt of (X) at the torsion pair (_H,B, _H,B).
Let X be a smooth projective surface. Then there exists a continuous function C_(-)_(X)→_≥ 0 such that, for every D∈_(X),
C_H(H D)^2+D^2≥ 0.
C_H(H D)^2+D^2≥ 0 is invariant under rescaling. If we consider _(X)⊂_(X) as normed vector spaces, it is therefore enough to look at the subspace of unit vectors U⊂_(X).
Since D∈ U is effective and D≠ 0, H D>0. Hence there exists C∈_≥ 0 such that C(H D)^2+D^2≥ 0. Define:
C_H,D:=inf{ C∈_≥ 0 : C_H(H D)^2+D^2≥ 0}.
Since _(X) is open, H' D>0 for a small deformation H' of H. It follows that U is strictly contained in the subspace {E∈_(X) : E H>0}. Moreover, C_H,D is a continuous function on U, and U is compact as it is a closed subset of the unit sphere in _(X). Therefore, C_H,D has a maximum, which we call C_H. By construction, this is a continuous function on _(X).
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). We define the following quadratic forms on (X)⊗:
Q_BG:=_1^2-2_2_0
HB:=Q_BG+ C_H(H_1^B)^2,
where C_H∈_≥ 0 is the constant from Lemma <ref>.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Then σ_H,B∈N(X). In particular, σ_H,B satisfies the support property with respect to H'B', where (H',B')∈_(X)×_(X) are nearby rational classes.
Theorem <ref> was first proved for K3 surfaces in <cit.>, along with the fact that this gives rise to a continuous family. In <cit.>, the authors first prove the result holds for rational classes (H,B) and sketch how to extend this to arbitrary classes. In particular, σ_H,B can be obtained as a deformation of σ_H',B' for nearby rational classes (H',B'), and σ_H,B satisfies the same support property, H'B'. This uses the fact that HB varies continuously with (H,B), together with similar arguments to Proposition <ref>.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X) and fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Suppose α>1/2[(β_0 - H B/H^2)^2-B^2/H^2]. Define b:=β_0 - H B/H^2∈ and a:=√(2α -b^2 + B^2/H^2)∈_>0. Then σ_H,B,α,β_0 and σ_aH,B+bH are the same up to the action of . Moreover, this is a continuous family for α>1/2[(β_0 - H B/H^2)^2-B^2/H^2].
We abuse notation and consider the central charges as homomorphisms (X)⊗→. By Theorem <ref>, it is enough to show that Z_H,B,α,β_0 and Z_aH,B+bH have the same kernel in (X)⊗. Fix u∈(X)⊗. Since a>0, Z_aH,B+bH(u)=0 if and only if
0 = a H B _0(u) + abH^2_0(u) -aH_1(u)
= a ( H B _0(u) + (β_0 - H B/H^2) H^2_0(u) -H_1(u))
= a (β_0H^2_0(u) -H_1(u))
= -a Z_H,B,α,β_0(u).
Therefore, Z_aH,B+bH(u)=0 if and only if Z_H,B,α,β_0(u)=0. Now assume Z_aH,B+bH(u)=0, so H_1(u)=β_0 H^2_0(u). Then Z_aH,B+bH(u)=0 if and only if
= 1/2((aH)^2-(B+bH)^2)_0(u) + B_1(u)+ bH_1(u) - _2(u)
= 1/2(a^2 - (B+bH)^2/H^2 +2bβ_0)H^2_0(u) + B_1(u)-_2(u).
Moreover,
1/2(a^2 - (B+bH)^2/H^2 +2bβ_0)
= 1/2(a^2 -B^2/H^2 +2b(β_0 - B H/H^2) - b^2)
= 1/2(2α -b^2 + B^2/H^2 -B^2/H^2 +b^2)
= α.
It follows that u∈ Z_aH,B+bH if and only if u∈ Z_H,B,α,β_0. Therefore, by Theorem <ref>, σ_aH,B+bH∈N(X). Moreover, acts on (X) by autoequivalences, hence σ_H,B,α,β_0∈(X). Then, by definition, σ_H,B,α,β_0∈N(X). It remains to show this gives rise to a continuous family. By Proposition <ref>,
ΠN(X) →, σ_H,B,α,β↦ (H,B,α,β).
is an injective local homeomorphism. Let V:= {(H,B,α,β) : α>1/2[(β - H B/H^2)^2-B^2/H^2]}. The restriction Π|_Π^-1(V) is still an injective local homeomorphism. Moreover, By the arguments above, Π|_Π^-1(V) is surjective, hence it is continuous.
Let _2^+()⊂_2^+() denote the subgroup of shearings, i.e. transformations that preserve the real line. It is simply connected, hence it embeds as a subgroup into and acts on (X). In the above proof, σ_H,B,α,β_0 and σ_aH,B+bH have the same hearts, so they are the same up to the action of _2^+().
The next result follows from the proof of Theorem <ref>. We explain this part of the argument explicitly, as it will be essential for extending the support property in Lemma <ref>.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). There exists (H',B')∈_(X)×_(X) such that, for a≥ 1, Δ^C_H'_H',B' is negative definite on Z_aH,B⊗. In particular, Δ_H',B'^C_H' gives the support property for σ_aH,B.
By Theorem <ref>, σ_aH,B∈N(X) for a≥ 1, and there exists (H',B')∈_(X)×_(X) nearby to (H,B) such that Δ^C_H'_H',B' gives the support property for σ_H,B∈N(X). In particular, Δ^C_H'_H',B' is negative definite on K_1:= Z_H,B⊗. By Proposition <ref>, it is enough to show H'B' is negative definite on K_a:= Z_aH,B⊗ for a≥ 1.
Recall that u=(_0^B(u),_1^B(u),_2^B(u))∈_a if and only if
a^2H^2/2_0^B(u)=_2^B(u), H_1^B(u)=0.
Let Ψ_a K_1→ K_a be the isomorphism of sub-vector spaces of (X)⊗ given by
Ψ_a v=(_0^B(v),_1^B(v),_2^B(v))↦(_0^B(v),_1^B(v),_2^B(v)+(a^2-1)H^2/2_0^B(v)).
Let u∈ K_a. Then u=Ψ_a(v) for some v∈ K_1. Clearly Δ^C_H'_H',B'(0)=0, so we may assume u≠ 0. Hence v≠ 0, and it is enough to show that Δ^C_H'_H',B'(Ψ_a(v))<0. Recall that _1^B'=_1 - B'_0, hence _1^B'(Ψ_a(v))=_1^B'(v). Therefore,
Δ^C_H'_H',B'(Ψ_a(v)) = (_1^B(v))^2-2_0^B(v)_2^B(v)-2(a^2-1)H^2/2(_0^B(v))^2 + C_H'(H'_1^B'(v))^2
= Δ^C_H'_H',B'(v) - 2(a^2-1)H^2/2(_0^B(v))^2
≤Δ^C_H'_H',B'(v).
Since Δ^C_H'_H',B' is negative definite on K_1, it follows that Δ^C_H'_H',B'(Ψ_a(v))<0.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Let α>Φ_X,H,B(β), and let δ>0. We define the following quadratic form on (X)⊗:
Q_H,B,α,β,δ:=δ^-1(H_1-β H^2_0)^2 - (H^2_0)( _2-B_1 - (α-δ)H^2_0).
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Then there exists δ>0 such that, for every H-semistable torsion-free sheaf F, we have Q_H,B,α_0,β_0,δ([F])≥ 0.
Since Φ_X,H,B is upper semi-continuous and bounded above by a quadratic polynomial in x, the same argument as in <cit.> applies. In particular, there exists a sufficiently small δ>0 such that
(x-β_0)^2/δ+α_0-δ≥Φ_X,H,B(x).
Suppose F is an H-semistable torsion-free sheaf. Let x=μ_H(F)=H_1(F)/H^2_0(F), then
δ^-1(H_1(F)-β_0H^2_0(F))^2+(α_0-δ)(H^2_0(F))^2≥ (H^2_0(F))^2Φ_X,H,B(H_1(F)/H^2_0(F)).
From Lemma <ref> it follows that
δ^-1(H_1(F)-β_0H^2_0(F))^2+(α_0-δ)(H^2_0(F))^2 ≥ (H^2_0(F))^2_2(F)-B_1(F)/H^2_0(F).
In particular,
δ^-1(H_1(F)-β_0H^2_0(F))^2 - (H^2_0(F))( _2(F)-B_1(F) - (α_0-δ)H^2_0(F))≥ 0.
Let u∈(X)⊗. Now consider Z_H,B,α_0,β_0 as a homomorphism (X)⊗→. Recall that u∈ K_α_0:= Z_H,B,α_0,β_0⊆(X)⊗ if and only if
α_0H^2_0(u)+B_1(u)-_2(u)=0, H_1(u)-β_0H^2_0(u)=0.
Then
Q_H,B,α_0,β_0,δ(u)=-δ(H^2_0(u))^2≤ 0,
for all u∈ K_α_0. In particular, Q_H,B,α_0,β_0,δ is negative semi-definite on K_α_0. Hence Q_H,B,α_0,β_0,δ does not fulfil the support property.
To construct a quadratic form which is negative definite on K_α_0= Z_H,B,α_0,β_0, we will combine Q_H,B,α_0,β_0,δ with Q_BG, the quadratic form coming from the Bogomolov-Gieseker inequality introduced in Definition <ref>.
Let X be a smooth projective surface. Let H∈_(X). Then Q_BG([F])≥ 0 for every H-semistable torsion-free sheaf F.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Choose δ>0 as in Lemma <ref>. Let Q^δ,ε_H,B,α_0,β_0:=Q_H,B,α_0,β_0,δ+ε Q_BG. Then there exists ε>0 such that
* Q^δ,ε_H,B,α_0,β_0([F])≥ 0 for every H-semistable torsion-free sheaf F,
* Q^δ,ε_H,B,α_0,β_0([T])≥ 0 for every torsion sheaf T, and
* Q^δ,ε_H,B,α_0,β_0 is negative definite on K_α_0:= Z_H,B,α_0,β_0⊆(X)⊗.
(1) follows immediately for any ε>0 from Lemma <ref> and Lemma <ref>. For (2), let C_H be the constant from Lemma <ref>. Choose ε_1 >0 such that ε_1 < δ^-1/C_H. Let T be a torsion sheaf, then
Q^δ,ε_1_H,B,α_0,β_0([T]) = δ^-1(H_1([T]))^2+ε_1_1([T])^2
= ε_1(δ^-1/ε_1(H_1([T]))^2+_1([T])^2)
> ε_1 ( C_H(H_1([T]))^2+_1([T])^2)
≥ 0,
For (3), fix a norm on (X) and let U denote the set of unit vectors in K_α_0 with respect to this norm. It will be enough to show there exists ε_2>0 such that Q^δ,ε_2_H,B,α_0,β_0|_U<0.
Let A:={u∈ U : Q_H,B,α_0,β_0,δ(u)=0}. For any a∈ A, _0(a)=0. The condition that Z_H,B,α_0,β_0(a)=0 becomes
B_1(a)=_2(a), H_1(a)=0.
H is ample, so _1(a)^2≤ 0 by the Hodge index theorem. If _1^2(a)=0, then _1(a)=0, and hence 0=B_1(a)=_2(a). So a=0, which contradicts the fact that a∈ U. Therefore,
Q_BG|_A([E])=_1([E])^2<0.
We now claim that there exists a sufficiently small ε_2>0 such that Q^δ,ε_2_H,B,α_0,β_0<0 on U. Note that Q^δ,ε_2_H,B,α_0,β_0|_A=ε_2 Q_BG|_A<0, so we only need to check the claim on U∖ A. Now suppose the contrary, so for every ε>0, there exists u∈ U∖ A such that
Q_BG(u)≥ -1/εQ_H,B,α_0,β_0,δ(u)
Q_H,B,α_0,β_0,δ(u)<0 since Q_H,B,α_0,β_0,δ is negative semi-definite on U, and u∈ U∖ A. Therefore,
P(u):=Q_BG(u)/-Q_H,B,α_0,β_0,δ(u)≥1/ε.
Thus P is not bounded above on U∖ A. Moreover, A is closed and Q_BG|_A<0. Hence Q_BG is negative definite on some open neighbourhood V of A, so P|_V<0. Finally, U∖ V is compact, so P must be bounded above on U∖ V. In particular, P is bounded above on U∖ A, which gives a contradiction. It follows that there exists ε_2>0 such that Q^δ,ε_2_H,B,α_0,β_0 is negative definite on K_α_0. Finally, let ε =min{ε_1,ε_2}.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X), and fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Choose δ,ε>0 as in Proposition <ref>. Then Q^δ,ε_H,B,α_0,β_0 is negative definite on K_α := Z_H,B,α,β⊗ for all α≥α_0.
Recall that u=(_0(u),_1(u),_2(u))∈ K_α= Z_H,B,α,β_0⊗ if and only if
α H^2_0(u)+B_1(u)-_2(u)=0, H_1(u)-β_0H^2_0(u)=0.
Let Ψ_α K_α_0→ K_α be the isomorphism of sub-vector spaces of (X)⊗ given by
Ψ_α v=(_0(v),_1(v),_2(v))↦ (_0(v),_1(v),_2(v)+(α-α_0)H^2_0(v)).
Let u∈ K_α; then u=Ψ_α(v) for some v∈ K_α_0. Clearly Q^δ,ε_H,B,α_0,β_0(0)=0, so we may assume u≠0. Hence v≠ 0, and it is enough to show that Q^δ,ε_H,B,α_0,β_0(Ψ_α(v))<0. Moreover,
Q^δ,ε_H,B,α_0,β_0(Ψ_α(v)) = Q_H,B,α_0,β_0,δ(Ψ_α(v))+ε Q_BG(Ψ_α(v))
= Q_H,B,α_0,β_0,δ(v) - (α-α_0)(H^2_0(v))^2 + ε Q_BG(v) -2ε(α-α_0)H^2_0(v)^2
= Q^δ,ε_H,B,α_0,β_0(v)-(α-α_0)H^2_0(v)^2(H^2+2ε)
≤ Q^δ,ε_H,B,α_0,β_0(v).
Finally, by Proposition <ref>(3), Q^δ,ε_H,B,α_0,β_0(v)<0.
Let (H,B)∈_(X)×_(X). If E∈^H,B(X) is σ_aH,B-semistable for all a≫ 0, then it satisfies one of the following conditions:
* ^-1(E)=0 and ^0(E) is an H-semistable torsion-free sheaf.
* ^-1(E)=0 and ^0(E) is a torsion sheaf.
* ^-1(E) is an H-semistable torsion-free sheaf and ^0(E) is either 0 or a torsion sheaf supported in dimension zero.
Let (H,B)∈_(X)×_(X). Fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Choose δ,ε>0 as in Proposition <ref>. If E∈^H,B(X) is σ_aH,B-semistable for all a≫ 0, then Q^δ,ε_H,B,α_0,β_0([E])≥ 0.
Let Q:=Q^δ,ε_H,B,α_0,β_0. By our hypotheses, E satisfies one of the three conditions in Lemma <ref>. If E satisfies (1), then Q([E])=Q([^0(E)]), where ^0(E) is an H-semistable torsion-free sheaf, and the result follows from Proposition <ref>(1). Similarly, if E satisfies (2), then by Proposition <ref>(2), Q([E])=Q([^0(E)])≥ 0. Now assume E satisfies (3). Then
([E])=-(^-1(E))+(^0(E))
Hence
Q_BG([E]) = Q_BG([^-1(E)]) - (-_0(^-1(E)))(E) ≥ Q_BG(^-1(E)).
The same argument applies to Q_H,B,α_0,β_0,δ. Hence Q([E])≥ Q([^-1(E)]). The result follows by Proposition <ref>(1).
Let σ=(Z,)∈(X) with support property given by a quadratic form Q on (X)⊗. Suppose E∈ is strictly σ-semistable and let A_1,…,A_m be the Jordan-Hölder factors of E. Then Q(A_i)<Q(E) for all 1≤ i≤ m.
It is enough to prove that Q(A_1)<Q(E). Since E is σ-semistable, E∈(ϕ) for some ϕ∈. By definition, A_1∈(ϕ), and hence E/A_1∈(ϕ) also. Therefore, by the support property, Q(A_1)≥ 0 and Q(E/A_1)≥ 0. Moreover, since A_1 and E/A_1 have the same phase, there exists λ>0 such that Z(A_1)-λ Z(E/A_1)=0. Hence [A_1]-λ[E/A_1]∈ Z⊗ and is non-zero. Let Q also denote the associated symmetric bilinear form. By the support property,
0> Q([A_1]-λ[E/A_1]) = Q(A_1)-2λ Q(A_1,E/A_1)+λ^2 Q(E/A_1).
Moreover, λ, Q(A_1), Q(E/A_1)>0. It follows that Q(A_1,E/A_1)>0. Therefore,
Q(E) = Q(A_1) + Q(E/A_1)+2Q(A_1,E/A_1)>Q(A_1).
Let σ=(Z,)∈(X), and let Q be a quadratic form which is negative definite on Z⊗. Suppose E∈ is strictly σ-semistable and let A_1,…,A_m be the Jordan-Hölder factors of E. If Q(E)<0, then for some 1≤ j≤ m, Q(A_j)<0.
Assume for a contradiction that Q(A_1),Q(E/A_1)≥ 0. Let Q also denote the associated symmetric bilinear form. By the same argument as in the proof of Lemma <ref>, it follows that Q(A_1,E/A_1)>0. Therefore,
Q(E) = Q(A_1) + Q(E/A_1)+2Q(A_1,E/A_1) >0,
which is a contradiction. Hence either Q(A_1)<0 and we are done, or Q(E/A_1)<0. If Q(E/A_1)<0, we can repeat the argument with E/A_1 and A_2 instead of E and A_1. There are finitely many Jordan-Hölder factors, so this process terminates. Therefore, Q(A_j)<0 for some 1≤ j≤ m.
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Choose δ,ε>0 as in Proposition <ref>. Fix α_1∈ such that α_1>max{α_0, 1/2[(β_0 - H B/H^2)^2-B^2/H^2]}. Assume E∈ is σ_H,B,α_1,β_0-semistable. Then Q^δ,ε_H,B,α_0,β_0([E])≥ 0. In particular, σ_H,B,α_1,β_0 satisfies the support property with respect to Q^δ,ε_H,B,α_0,β_0.
From Lemma <ref>, we know that for every α≥α_1, σ_H,B,α,β_0 and σ_a_α H,B+bH, have the same heart when b=β_0-H B/H^2 and a_α=√(2α -b^2+B^2/H^2).
Moreover, by Lemma <ref>, there exists (H',B')∈_(X)×_(X) such that Δ^C_H'_H',B' gives the support property for σ_aH,B+bH if a≥ a_α_1. We may assume Δ^C_H'_H',B'∈, since it is true after rescaling by some integer. Furthermore, since E is σ_a_α_1H,B+bH-semistable, Δ^C_H'_H',B'([E])∈_≥ 0.
If E is σ_H,B,α,β_0-stable for α≫ 0, then by definition of a_α, E is σ_aH,B-stable for a≫ 0. It then follows by Corollary <ref> that Q([E])≥ 0. Otherwise, there exists some α_2≥α_1 such that E is strictly σ_H,B,α_2,β_0-semistable. Let A_1,… A_m denote the Jordan-Hölder factors of E. Then by Lemma <ref>, Δ^C_H'_H',B'([A_i])<Δ^C_H'_H',B'([E]) for all 1≤ i ≤ m. Each A_i is σ_H,B,α_2,β_0-stable, so Δ^C_H'_H',B'([A_i])≥ 0 for all 1≤ i≤ m.
Assume for a contradiction that Q([E])<0. From Lemma <ref>, Q([A_j])<0 for some 1≤ j ≤ m. Let E_2:=A_j. We can now repeat this process for E_2 in place of E_1:=E, and so on. This gives a sequence E_1, E_2, E_3, …, E_k, … and α_1≤α_2<α_3… <α_k … such that E_k∈ is σ_H,B,α_k,β_0-semistable, Q(E_k)<0, and 0≤Δ^C_H'_H',B'([E_k+1])<Δ^C_H'_H',B'([E_k]) for all k≥ 1. But Δ^C_H'_H',B'([E_k])∈_≥ 0 for all k, so no such sequence can exist. This gives a contradiction.
Finally, by Lemma <ref>, Q^δ,ε_H,B,α_0,β_0 is negative definite on Z_H,B,α_1,β_0⊗.
We are finally ready to apply Corollary <ref>:
Let X be a smooth projective surface. Let (H,B)∈_(X)×_(X). Fix α_0,β_0∈ such that α_0>Φ_X,H,B(β_0). Then σ_H,B,α,β_0∈N(X) for all α≥α_0.
Fix α_1∈ such that α_1 >max{α_0, 1/2[(β_0 - H B/H^2)^2-B^2/H^2]}. By Lemma <ref>, it follows that σ_H,B,α_1,β_0∈N(X). Choose δ,ε>0 as in Proposition <ref>, then by Lemma <ref>, σ_H,B,α_1,β_0 satisfies the support property with respect to Q^δ,ε_H,B,α_0,β_0.
By Lemma <ref>, Q^δ,ε_H,B,α_0,β_0 is negative definite on Z_H,B,α,β_0 for all α≥α_0. Moreover, Z_H,B,α,β_0 remains constant as α varies. Therefore, the result follows by Corollary <ref>.
(of Theorem <ref>)
By Theorem <ref>, for every σ∈(X) there exists a unique g∈ such that g^∗σ∈N(X). Hence it is enough to show that N(X)≅, where
={(H,B,α,β)∈_(X)×_(X)×^2 : α>Φ_X,H,B(β)}
This follows from Proposition <ref> and Proposition <ref>.
§.§ Applications of Theorem <ref>
Let X be a smooth projective surface. Then (X) is connected.
There are precisely two types of walls of the geometric chamber for K3 surfaces and rational surfaces. They either correspond to walls of the nef cone (see <cit.> for a construction) or to discontinuities of the Le Potier function. For K3 surfaces, the second case comes from the existence of spherical bundles which is explained in <cit.>. For rational surfaces, the discontinuities correspond to exceptional bundles. This is explained for (_^2(-3)) in <cit.>, and the arguments generalise to any rational surface.
It seems reasonable to expect this to hold for all surfaces. The description of the geometric chamber given by Theorem <ref> also supports this. Indeed, a wall where _x is destabilised corresponds locally to the boundary of being linear. This boundary is exactly where:
* H becomes nef and not ample. We expect that this only gives rise to walls in the following cases:
* H is big and nef. Then H induces a contraction of rational curves. This can be used to construct non-geometric stability conditions <cit.>.
* H is nef and induces a contraction to a curve whose fibres are rational curves. In this case, we expect a wall. For example, let f S → C be a ^1-bundle over a curve. One can construct stability conditions on S such that all skyscraper sheaves are semistable, and they are destabilised by:
_f^-1(x)→_x →_f^-1(x)(-1)[1] →_f^-1(x)[1]
* If Φ_X,H,B is discontinuous at β, then N(X) locally has a linear boundary. We expect this to give rise to non-geometric stability conditions.
* α = Φ_X,H,B(β). We expect no boundary in this case.
Let X be a smooth projective surface. If Φ_X,H,B has no discontinuities and no linear pieces for any (H,B)∈_(X)×_(X), then any wall of (X) where _x is destabilised corresponds to a class H'∈_(X) which is nef and not ample.
Let X be a smooth projective surface. Suppose
Φ_X,H,B(x)=1/2[(x-H B/H^2)^2 - B^2/H^2].
By Lemma <ref>, (X)≅{(H,B)∈_(X)×_(X)}. In particular, by Corollary <ref>, this holds for free abelian quotients of surfaces with finite Albanese morphism, such as Beauville-type and bielliptic surfaces. For these examples, we know by Corollary <ref> that there are no walls of (X) where _x is destabilised.
§ FURTHER QUESTIONS
Let X be a smooth projective variety. There are no examples in the literature where (X) is known to be disconnected. It would be interesting to investigate the following examples:
Let S be a Beauville-type or bielliptic surface. Is (S) connected?
S has non-finite Albanese morphism and (S)⊂(S) is a connected component by Corollary <ref>. If (S) is connected, the following question would have a negative answer:
Question FLZ[<cit.>]
Let X be a smooth projective variety whose Albanese morphism is not finite. Are there always non-geometric stability conditions on ?
Suppose has a strong exceptional collection of vector bundles, and a corresponding heart Å that can be used to construct stability conditions as in <cit.>. If _x∈Å, then does _x correspond to a stable quiver representation?
|
http://arxiv.org/abs/2307.02339v1
|
20230705145036
|
GAFAR: Graph-Attention Feature-Augmentation for Registration A Fast and Light-weight Point Set Registration Algorithm
|
[
"Ludwig Mohr",
"Ismail Geles",
"Friedrich Fraundorfer"
] |
cs.CV
|
[
"cs.CV"
] |
GAFAR: Graph-Attention Feature-Augmentation for Registration
A Fast and Light-weight Point Set Registration Algorithm
Ludwig Mohr, Ismail Geles, Friedrich Fraundorfer
========================================================================================================================
Rigid registration of point clouds is a fundamental problem in computer vision with many applications from 3D scene reconstruction to geometry capture and robotics.
If a suitable initial registration is available, conventional methods like ICP and its many variants can provide adequate solutions.
In absence of a suitable initialization and in the presence of a high outlier rate or in the case of small overlap though the task of rigid registration still presents great challenges.
The advent of deep learning in computer vision has brought new drive to research on this topic, since it provides the possibility to learn expressive feature-representations and provide one-shot estimates instead of depending on time-consuming iterations of conventional robust methods.
Yet, the rotation and permutation invariant nature of point clouds poses its own challenges to deep learning, resulting in loss of performance and low generalization capability due to sensitivity to outliers and characteristics of 3D scans not present during network training.
In this work, we present a novel fast and light-weight network architecture using the attention mechanism to augment point descriptors at inference time to optimally suit the registration task of the specific point clouds it is presented with.
Employing a fully-connected graph both within and between point clouds lets the network reason about the importance and reliability of points for registration, making our approach robust to outliers, low overlap and unseen data.
We test the performance of our registration algorithm on different registration and generalization tasks and provide information on runtime and resource consumption.
The code and trained weights are available at .
§ INTRODUCTION
Rigid registration of point clouds is the task of simultaneously inferring both pose and correspondences between two sets of points <cit.>.
As soon as either pose or correspondences are known, estimation of the respective other is straight forward, yet doing both simultaneously is posing challenges in computer vision and robotics.
Its importance in tasks such as pose estimation <cit.>, map-building and SLAM <cit.> as well as localization tasks geared towards autonomous driving <cit.> fuel the research interest in registration algorithms.
ICP and its many variants <cit.>, while able to provide exceptional results for good initializations, tend to get stuck in local minima if the initialization is insufficient, in the presence of high outlier rates, or in cases with low overlap.
Attempts to resolve this range from methods using branch-and-bound to infer a globally optimal solution <cit.>, methods based on feature matching between key-points followed by robust matching strategies <cit.> and in recent years deep neural networks for learning feature descriptors and matching <cit.>.
Yet both, branch-and-bound as well as robust matching, suffer from speed and accuracy issues in real-life application due to the high number of iterations necessary in cases of high outlier ratios.
Deep-learning based methods usually fare better with regard to outliers, yet still struggle due to the tension between the low distinctiveness of local point features caused by topological similarities and the low match recall of global features in low-overlap cases.
A further drawback of algorithms using deep neural networks often is their high requirements concerning compute resources, limiting their use in mobile applications.
To tackle these challenges, we propose GAFAR: Graph-Attention Feature-Augmentation for Registration, which employs deep-learning techniques not only for extraction of meaningful local features from point sets, but also for learning an adaptive augmentation network for online transformation of local features for robust matching.
We achieve this by exploiting structural information from between point sets as well as from within a single one thorough an architecture of interleaved self- and cross-attention layers <cit.>.
While achieving state-of-the-art registration performance, our method is light-weight and fast.
We demonstrate this in a series of experiments, testing not only registration performance on the dataset used for training, but also robustness and generalization ability in two further experiments on vastly different datasets of real world scans, one captured with a handheld 3D scanner producing precise scans, the other being the Kitti Odometry Dataset <cit.> showing street scenes captured by a LiDAR scanner.
Furthermore, we provide insight into runtime and resource needs.
The main contributions of our method are:
* We demonstrate the use of transformer networks and the attention mechanism to build a fast and light-weight, yet accurate registration algorithm.
* We present an online feature augmentation strategy in registration which proves to be superior in terms of robustness to partial overlap and geometries not seen during training.
* We show how certain design-choices enable us to estimate the registration success without knowledge of the true transformation, enabling its use in applications that require fail-safes.
* We demonstrate state-of-the art performance and superior generalization capability in a light-weight package.
§ RELATED WORK
One of the oldest, yet still relevant methods for registration of point clouds is ICP <cit.>.
Starting from an initial alignment, ICP iteratively updates the registration parameters by establishing point correspondences using Euclidean distance, rejecting far away point pairs.
Due to this design it is prone to get stuck in local minima, the final registration accuracy heavily depends on the initialization.
Many variants have been proposed over the years <cit.> to mitigate these issues, yet the dependence on the initialization has remained.
Several registration algorithms trying to solve the dependence on initialization have been proposed <cit.>, alongside of handcrafted feature descriptors trying to capture local geometry of point clouds in a meaningful way, such as PPF <cit.> and FPFH <cit.>, among others <cit.>.
Yet, they never managed to reach the performance and robustness of their 2D counterparts.
Recent advances in deep-learning extend deep neural networks to 3D point clouds and have resulted in methods for learned local feature descriptors like PointNet <cit.>, FCGF <cit.>, Graphite <cit.> and DGCNN <cit.>, learned filtering of putative point matches <cit.> and complete learned registration pipelines.
3DSmoothNet <cit.> extracts a local reference frame and voxelizes the point cloud around key points, yet reference frame estimation is susceptible to outliers, and voxelization tends to lose information due to spatial discretization.
PointNetLK <cit.> estimates registration parameters to match the deep representations from PointNet of complete point clouds, DCP <cit.> uses DGCNN to extract point features and the attention mechanism <cit.> to predict soft correspondences, restricting their application to registration of point clouds with high overlap.
Research into Pillar-Networks <cit.> is driven mainly by automotive applications for processing of LiDAR point clouds from mobile mapping systems, assuming the input point clouds to share a common z=up orientation.
They extract cylindrical point pillars along the z-axis around key point locations for further processing, and are therefore not applicable to general registration problems or when the assumption of z-axis alignment can not be guaranteed.
Keypoint based methods like <cit.> aim at detecting repeatable keypoints across scans, and registering them using powerful descriptors.
In contrast <cit.> uses a detection-free approach with a local-to-global detection strategy using superpoints.
IDAM <cit.> tackles inaccuracies arising from inner product norms for feature matching with an iterative distance-aware similarity formulation.
DeepGMR <cit.> recovers registration parameters from Gaussian Mixture Models, parameterized using pose-invariant correspondences.
RPM-Net <cit.> predicts annealing parameters and computes soft correspondences via annealed feature matching, using the Sinkhorn Algorithm <cit.> as solver for the linear assignment problem.
RGM <cit.> explicitly builds and matches graphs within point clouds to resolve ambiguity issues between locally similar patches and predicts hard correspondences using the Hungarian Algorithm.
In contrast to <cit.>, we use graph matching for feature augmentation before matching, but do not match graphs extracted from point clouds explicitly.
Similarity between the internal point cloud structures is handled by our method implicitly using cross-attention modules.
Our method predicts hard correspondences by thresholding the assignment matrix after running Sinkhorn iterations, interpreting correspondence estimation as an optimal transport problem on the feature correlation matrix.
§ PROBLEM FORMULATION
Rigid registration of two 3D point sets is the task of finding a transformation consisting of a rotation matrix 𝐑∈ SO(3) and a translation vector 𝐭∈ℝ^3 aligning the input point set 𝒫_S = {p_i ∈ℝ^3 | i = 1, ..., M} to the reference point set 𝒫_R = {p_j ∈ℝ^3 | j = 1, ..., N}.
Here M and N denote the respective sizes of the point sets.
The underlying assumption is, that both point sets are sampled on the same surface or the same object and share at least some common support (i.e., the physical location where the object has been sampled does actually overlap).
In the most general case, point sets 𝒫_S and 𝒫_R may not have any true correspondences between them, may suffer from outliers and additive noise and they may only share parts of their support, resulting in only partial overlap.
Given a set of corresponding points between two point sets, the rigid transformation aligning both sets can be recovered using SVD.
This approach relaxes the task of estimating a rigid transformation to that of finding pairs of corresponding points between both sets.
Since the transformation obtained using SVD aligns the point pairs in a least-squares sense, this formulation directly lends itself to the case where no exact matches exist.
Hence, the task of rigid point set registration can be formulated mathematically as:
𝐂^* = arg min_𝐂∑_j=1^N ∑_i=1^M c_i,j‖𝐑_C p_i + 𝐭_C - p_j ‖^2,
where 𝐂∈{0, 1}^M × N is a permutation matrix subject to the row and column constraints ∑_i=1^M C_i,j = 1 for all j and ∑_j=1^N C_i,j = 1 for all i, associating the points between both point sets.
The transformation parameters 𝐑_C and 𝐭_C refer to those recovered by SVD using the point pairs designated by permutation matrix 𝐂.
To handle the case of partial overlap, the permutation matrix is augmented by a slack row and a slack column to 𝐂∈{0, 1}^(M+1) × (N+1), while relaxing the constraints on the rows and columns of 𝐂 to
∑_i=1^M C_i,j≤ 1 for all j and ∑_j=1^N C_i,j≤ 1 for all i.
In practice, this formulation can be solved by augmenting an initial full point feature correlation matrix with an additional row and column and solving the relaxed optimization problem as an optimal transport problem <cit.>, using the Sinkhorn Algorithm as a differentiable implementation of the linear assignment problem <cit.>.
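Purely for illustration, the weighted least-squares recovery of {𝐑, 𝐭} from a set of putative correspondences can be sketched in a few lines of NumPy; the function name, weighting convention and grid of assumptions here are ours and do not reproduce the authors' implementation:

```python
import numpy as np

def rigid_from_correspondences(p_src, p_ref, w=None):
    """Least-squares rigid transform (R, t) mapping p_src onto p_ref.

    p_src, p_ref: (K, 3) arrays of corresponding points.
    w:            optional (K,) non-negative correspondence weights.
    """
    if w is None:
        w = np.ones(len(p_src))
    w = w / w.sum()
    mu_s = (w[:, None] * p_src).sum(axis=0)                  # weighted centroids
    mu_r = (w[:, None] * p_ref).sum(axis=0)
    H = (w[:, None] * (p_src - mu_s)).T @ (p_ref - mu_r)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])       # guard against reflections
    R = Vt.T @ S @ U.T
    t = mu_r - R @ mu_s
    return R, t
```

In a pipeline of the kind described above, the weights would plausibly come from the thresholded assignment matrix 𝐂^*; with unit weights this reduces to the classical Kabsch solution.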
§ THE MAKING OF GAFAR
The key idea behind our network architecture is to adapt initial local per-point feature descriptors ℱ_S of a source point set 𝒫_S for correspondence matching in an online fashion by injecting information of the reference point set 𝒫_R.
The reasoning behind this is that for successful point matching, neither local geometric structure alone (which may be repetitive or non-distinctive) nor fully global information (which in the case of partial overlap may encode information of areas that are not shared) is sufficient.
The relevant information for successful point matching lies solely within the topology of the overlapping area as well as the relative position of points within this area.
Our architecture takes two point sets 𝒫_S and 𝒫_R, represented as point locations in Euclidean coordinates together with their respective point normals, as input.
Internally, the network architecture consists of a feature head generating per-point features for both point sets independently, as well as an augmentation stage inspired by <cit.>, consisting of interleaved self- and cross-attention layers.
This allows the network to reason jointly over both sets of feature descriptors, adapting them iteratively into representations optimally suited for finding high-quality correspondences between those two specific point sets.
Matching is done by calculating the dot-product similarity between all possible pairings of the resulting feature descriptors ℱ̂_S and ℱ̂_R, relaxing the match matrix by adding a slack row and slack column, and running the Sinkhorn Algorithm for a predefined number of iterations, as in <cit.>.
The network weights are shared between the two branches processing 𝒫_S and 𝒫_R, turning the architecture into a fully-siamese network <cit.>.
Figure <ref> depicts an overview of the architecture, the different building blocks are explained in greater detail in the following subsections.
§.§ Local Feature Descriptor Head
Our feature head, depicted in Figure <ref>, consists of two main building blocks, a local feature encoder with a neighbourhood size 𝒩 and a point-wise location encoder Multi-Layer Perceptron (MLP).
Both take point locations within the unit-circle and their respective normal vectors as input.
As point-feature network we employ an architecture derived from DGCNN <cit.>, extended by an MLP functioning as a bottleneck to reduce the feature dimensionality to a more suitable size.
A basic layer of this architecture embeds the lower-dimensional representation into a higher-dimensional local representation with a nonlinear transformation by applying an MLP to point patches consisting of the 𝒩 nearest neighbours of each point p_i, followed by max-pooling over the patch and normalization.
Information is aggregated via multiple layers and concatenation until a high-dimensional internal representation ℱ_I ∈ℝ^d of the local point neighbourhood is reached.
Our point-wise location encoder is implemented as a pure point-wise MLP, for each point p in point set 𝒫 embedding its position in Euclidean space into a high-dimensional feature space, again of size ℝ^d.
The outputs of both the feature encoder and the position encoder are then concatenated and projected back to ℝ^d by a small point-wise MLP.
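The following PyTorch sketch illustrates the flavour of such a feature head with a single EdgeConv-style layer, a small location encoder and an MLP fusion step. The layer counts, hidden sizes and the omission of normalization layers are our own simplifications, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

def knn_indices(xyz, k):
    """xyz: (B, N, 3) point coordinates -> (B, N, k) neighbour indices."""
    dist = torch.cdist(xyz, xyz)                               # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop the point itself

class EdgeConv(nn.Module):
    """One DGCNN-style layer: MLP on (f_i, f_j - f_i) edge features, max-pooled over the patch."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, f, idx):
        B, N, C = f.shape
        k = idx.shape[-1]
        nbrs = torch.gather(f.unsqueeze(1).expand(B, N, N, C), 2,
                            idx.unsqueeze(-1).expand(B, N, k, C))
        ctr = f.unsqueeze(2).expand(B, N, k, C)
        edge = torch.cat([ctr, nbrs - ctr], dim=-1)            # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values                # (B, N, out_dim)

class FeatureHead(nn.Module):
    def __init__(self, d=128, k=20):
        super().__init__()
        self.k = k
        self.conv = EdgeConv(6, d)                             # xyz + normals as input
        self.loc = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, d))
        self.fuse = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, xyz, normals):
        idx = knn_indices(xyz, self.k)
        f_local = self.conv(torch.cat([xyz, normals], dim=-1), idx)
        f_loc = self.loc(xyz)
        return self.fuse(torch.cat([f_local, f_loc], dim=-1))  # (B, N, d)
```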
§.§ Graph-Attention Feature-Augmentation Network
The purpose of the graph attention network for feature augmentation is to optimize the feature representations ℱ_S of the input point set 𝒫_S at inference time for correspondence search by infusing knowledge of the reference point set 𝒫_R, and vice versa.
To this end, we build the feature augmentation sub-network as a stack of alternating self- and cross-attention layers, interleaved with normalization layers.
The architecture of the attention layers is depicted in Figure <ref>, implementing a residual block with message passing for feature update.
We set the feature-augmentation network up as a stack of fully-connected graph-attention layers, thereby letting the network learn which connections are relevant for the current point feature from all possible connections and to only attend to those via Multi-Head Softmax-Attention.
This allows the network to embed information of the relevant topology, from both within and between point sets, in an iterative fashion into the feature descriptors, resulting in two sets ℱ̂_S: {𝑓_i ∈ℝ^d, i = 1, ..., M} and ℱ̂_R: {𝑓_j ∈ℝ^d, j = 1, ..., N} of point features for matching.
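A hypothetical single augmentation stage could be sketched as below. This is an illustration only: the residual form, the use of LayerNorm instead of the batch normalization mentioned above, and the shared modules are our assumptions, with the head count and feature size taken from the experimental settings reported later:

```python
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Residual message-passing update; with x == y it acts as self-attention,
    with x != y as cross-attention between the two feature sets."""
    def __init__(self, d=128, heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm = nn.LayerNorm(d)

    def forward(self, x, y):
        msg, _ = self.attn(query=x, key=y, value=y)
        return self.norm(x + msg)

class AugmentationStage(nn.Module):
    """One self-attention plus one cross-attention update, shared (siamese) weights."""
    def __init__(self, d=128, heads=2):
        super().__init__()
        self.self_attn = AttentionBlock(d, heads)
        self.cross_attn = AttentionBlock(d, heads)

    def forward(self, f_s, f_r):
        f_s, f_r = self.self_attn(f_s, f_s), self.self_attn(f_r, f_r)
        f_s, f_r = self.cross_attn(f_s, f_r), self.cross_attn(f_r, f_s)
        return f_s, f_r
```

Stacking several such stages yields the iterative refinement described above.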
§.§ Feature Matching
After feature augmentation, matching is done by calculating the similarity score matrix 𝐒∈ℝ^M,N between the point feature descriptors ℱ^m_S and ℱ^m_R of all possible point pairs 𝐩_i,j = {p_i ∈𝒫_S, p_j ∈𝒫_R} using dot-product similarity:
𝐒: s_i,j = < 𝑓_i, 𝑓_j >.
Since we are interested in finding point-correspondences, we interpret the optimization problem of equation (<ref>) in terms of the optimal transport problem <cit.>, using the similarity score 𝐒 as its cost.
We find an approximate solution 𝐂^* by adding a row and column of slack variables to 𝐒 as detailed in equation <ref> and applying a few iterations of the Sinkhorn-Algorithm as a differentiable approximation to the Hungarian Algorithm for the solution of optimal transport <cit.>.
Finally, we threshold the resulting approximate permutation matrix 𝐂^* by threshold 𝑡_m ∈ [0, 1] and take mutual row- and column-wise maxima as point correspondences for calculation of the rigid transformation {𝐑, 𝐭} aligning the point sets using SVD.
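The matching step can be sketched as follows: a log-domain Sinkhorn loop on the slack-augmented score matrix, followed by thresholded mutual maxima. The slack initialization and the simplified handling of the slack row and column marginals are our own assumptions, not the authors' exact formulation:

```python
import torch

def sinkhorn_with_slack(scores, n_iters=10, slack_init=1.0):
    """scores: (M, N) similarity matrix S. Returns an (M+1, N+1) soft assignment whose
    top-left MxN block approximates C*; the last row/column absorb unmatched points."""
    M, N = scores.shape
    log_alpha = scores.new_full((M + 1, N + 1), slack_init)
    log_alpha[:M, :N] = scores
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)  # row pass
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)  # column pass
    return log_alpha.exp()

def hard_matches(assignment, t_m=0.5):
    """Mutual row/column maxima of the MxN block with score above threshold t_m."""
    C = assignment[:-1, :-1]
    row_best = C.argmax(dim=1)
    col_best = C.argmax(dim=0)
    matches = []
    for i, j in enumerate(row_best.tolist()):
        if col_best[j].item() == i and C[i, j] >= t_m:
            matches.append((i, j))
    return matches
```

The resulting index pairs would then feed the SVD step sketched earlier to recover {𝐑, 𝐭}.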
§.§ Loss
As loss for network training we employ the binary cross entropy loss between the predicted permutation matrix 𝐂^* and the ground truth correspondence matrix 𝒢_gt:
ℒ_BCE = - ∑_i,j[ g_i,jlogĉ_i,j + (1 - g_i,j) log (1 -ĉ_i,j) ].
§ EXPERIMENTS
In order to evaluate the performance of our proposed registration method, we perform two experiments.
The first experiment <ref> tests the performance on synthetic data of ModelNet40 <cit.> for different settings of noise and overlap.
The second experiment described in section <ref> tests the generalization ability using LiDAR point clouds of the Kitti Odometry Benchmark <cit.> and custom high-quality real-world object scans, using only models trained on synthetic data in the experiment of section <ref>.
Throughout the experiments, we have chosen the following parameters for our network:
The feature dimension is chosen as d=128, the number of layers and layer dimensions in the feature encoder of the feature head follows the parameterization of DGCNN <cit.> with a neighbourhood size of 𝒩 = 20.
The location encoder is chosen as a 4 layer MLP with layer dimensions [16, 32, 64, 128].
Our feature-augmentation graph-attention network consists of 9 stacks of consecutive self- and cross-attention layers with 2 attention heads.
For normalization, batch-norm is chosen throughout the network.
The number of Sinkhorn-iterations is set to 10 for both, training and inference.
We train the network on a single registration iteration per example; testing is done with a second iteration, feeding the source point cloud aligned by the result of the first iteration through the network again.
Model training usually converges after about two days using the AdamW optimizer with learning rate 10^-4 on an Nvidia GeForce RTX3090 (between 800 and 1000 epochs).
§.§ Experiments on ModelNet40
ModelNet40 consists of 12,311 meshed CAD models in 40 object categories, spanning a vast array of scales from chairs to airplanes.
Consistent with previous work, we use the pre-sampled point clouds provided by Shapenet <cit.>, consisting of 2048 points per model to conduct the experiments.
For easy comparison we follow the setup of <cit.> and perform the same experiments.
All experiments with exemption of subsection <ref> follow the official training and testing split, with an additional 80:20 split of the official training set for training and validation.
The experiment described in subsection <ref> uses the first 20 object classes of the training set for training, the first 20 object classes of the test set for validation and the remaining 20 classes for testing.
The point clouds already come scaled to fit within the unit circle; therefore, all measures related to point distance are given in a normalized scale.
As in <cit.>, we sample 1024 points at random from the point clouds and apply random rotations within [0^∘, 80^∘] around a random axis and random translations within [-0.5, 0.5] in normalized units.
Registration performance in measured using the same metrics as <cit.>, that is residual transformation errors of {𝐑, 𝐭} as mean isotropic errors (MIE) as proposed by <cit.>, as well as clipped chamfer distance (CCD) between reference point cloud 𝐘 and transformed source point cloud 𝐗̂ after registration:
CCD(𝐗̂, 𝐘) = ∑_x̂_i ∈𝐗̂min( min_y_j ∈𝐘 ||x̂_i - y_j ||^2_2, r)
+ ∑_y_j ∈𝐘min( min_x̂_i ∈𝐗̂ ||x̂_i - y_j ||^2_2, r),
with clip distance r = 0.1.
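For reference, a direct NumPy transcription of the CCD metric above reads (function and variable names are ours):

```python
import numpy as np

def clipped_chamfer_distance(x_hat, y, r=0.1):
    """x_hat: (M, 3) transformed source points, y: (N, 3) reference points."""
    d2 = ((x_hat[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # (M, N) squared distances
    return np.minimum(d2.min(axis=1), r).sum() + np.minimum(d2.min(axis=0), r).sum()
```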
Furthermore, we report registration recall (RR), defined as percentage of registration results with residual errors MAE(𝐑) < 1^∘ and MAE(𝐭) < 0.1.
To keep consistent with previous research, we also state the residual transformation errors in terms of mean absolute errors (MAE) as proposed by <cit.>, which is anisotropic.
Errors related to rotations are given in degrees, errors related to distance are normalized to object size (since the data in ModelNet40 does not have a common scale and is normalized to the unit circle).
The design of our registration method provides us directly with information on the reliability and success of a matching attempt.
Using the value of the matching score s_i,j matching point p_i to point p_j as well as the number of found matches, we can reject invalid registrations.
To this end, we provide results for matching thresholds t_m = 0.5 and rejecting registrations with less than 3 correspondences.
Evaluation of registration errors is done on successful registrations only, stating the percentage of successful registrations in braces after the method name.
Registration recall for our method is provided with respect to the full number of examples in the testing set, thereby making it directly comparable.
In practical applications, failed registrations can easily be rectified by either performing batched registrations with different samplings for a single registration task or repeating the registration with a different subset of points in case of failure.
Please note that the main competing methods do not allow any insight like this without knowledge of the underlying true registration: RPMNet <cit.> works on soft correspondences, while RGM <cit.> only provides hard correspondences without an associated score and in our experiments always returned more than 3 matches.
Results of the comparing methods are reproduced from <cit.>.
§.§.§ Full and clean data
The first experiment can be considered a baseline in registration performance, since the transformation has to be recovered from a full set of 1024 exact and noise-free correspondences and is mainly reproduced for completeness.
From Table <ref> we can see that basically all methods are able to almost perfectly register the point clouds with MAE(𝐑) below or around 1^∘.
Only ICP struggles in comparison.
§.§.§ Additive gaussian noise
In this experiment, source and reference point sets are sampled independently, so only a few perfect correspondences may exist.
Additionally, we add gaussian noise sampled from 𝒩(0, 0.01) and clipped to the range [-0.05, 0.05] to the point locations independently, thereby eliminating all perfect correspondences.
Point correspondences and point normals are then re-established, following the procedure of <cit.>, first finding mutual nearest neighbours and then adding remaining nearest neighbours, all within a maximum distance of 0.05 between corresponding points.
As can be expected, the performance degrades to a certain degree.
The results listed in Table <ref> show that the learning based methods still hold up rather well with MAE(𝐑) around or below 3^∘.
RPMNet, RGM as well as our method still achieve a RR of more than 90%.
Interestingly, the performance of ICP does not degrade, showing its robustness to outliers.
§.§.§ Registration of noisy, partially overlapping sets
In this experiment, in addition to additive gaussian noise, both source and reference point clouds are independently cropped along a random plane to 70% of their original size, resulting in variable overlap of at least 40%. This experimental setup corresponds closest to general real-world applications.
From Table <ref> we see that, with the exception of RPMNet <cit.>, RGM <cit.> and ours, the registration performance degrades beyond anything that can be deemed usable in applications.
Notably, the registration performance on recovered registrations of our method is the same as in the previous experiment with full overlap, albeit losing in successful registrations and in registration recall.
Comparing the registration recall of 77.2% to the percentage of recovered registrations of 84.3%, we see the merit of our architecture and the ability to predict whether a registration attempt was successful.
§.§.§ Partial overlap of unseen object categories
The difference to the experiment outlined in section <ref> is that now we only train on the first 20 object categories of ModelNet40, but evaluate on the remaining 20 categories.
Thereby we can explore to what extent the learned registration networks are able to generalize to geometries not present in training.
An interesting fact evident in the results listed in Table <ref> is that the performance of all methods except RPM <cit.> does not decline much relative to the experiment done on known categories, whereas RPM almost doubles its residual errors.
Although very powerful in establishing good correspondences, the neural network architecture in RPM seems to learn geometries by heart, hampering its generalization ability, whereas our method performs as strongly as it did before, outperforming RGM in all measures.
This again exemplifies the merit of feature augmentation at test time for optimal matching success.
Furthermore we would like to point out that although our method is not able to successfully register all examples in the first attempt, using the match threshold t_m and the number of found matches, we can precisely predict unsuccessful attempts.
In all experiments, RR is close to the number of valid examples within a margin of about 5%.
§.§ Generalization to real-world 3D scans
For real-world application, the ability of 3D registration methods to generalize to new and different geometries as well as capturing modalities is crucial.
To this end, we compare the registration performance of the best performing methods trained on ModelNet40 as detailed in section <ref> on two datasets, a custom dataset (publication is planned) as well as the well-known Kitti Dataset <cit.>.
The custom dataset consists of scans of 10 objects taken with an Artec Leo <cit.> handheld 3D scanner; for each object up to 10 overlapping partial scans exist, with between 10,000 and 50,000 points each.
Figure <ref> shows a registration example of this dataset.
Transformations are generated within the same constraints as in the experiments on ModelNet40.
We report registration accuracy in terms of MIE(𝐑), MIE(𝐭) and registration recall.
Since the objects in this dataset have a common scale, MIE(𝐭) is reported in millimeters, registration recall is defined as percentage of registration results with residual errors MIE(𝐑) < 1^∘ and MIE(𝐭) < 5 mm.
Again, the number in brackets behind versions of our method states the respective percentage of valid registrations.
For the experiments on Kitti, we follow the established practice <cit.> of testing on sequences 8-10, evaluating registration performance of point cloud pairs at least 10 m apart.
As in <cit.>, we use ground truth poses refined by ICP, MIE(𝐭) is reported in meters, and registration recall is defined as percentage of registration results with residual errors MIE(𝐑) < 5^∘ and MIE(𝐭) < 2 m.
Note that for fairness we applied an additional data normalization step for RPM-Net and RGM, scaling the data to fit into the unit circle for registration, thus making the input points span the same range as the training data of ModelNet40.
From Table <ref> we can see that our algorithm generalizes well to high quality 3D scans, the models trained on partial overlapping data outperform both RGM <cit.> and RPMNet <cit.> by a large margin in all metrics.
For registration of large-scale outdoor scenes of Kitti, a domain-gap for all methods is noticeable.
Nonetheless, our method still performs reasonably well given the circumstances, with registration recall of around 50% and mean errors of 3.1^∘ and 3.5m for the best generalizing models trained with only partial overlap, again showing its robustness to different data modalities.
Furthermore, the strong ability to predict which registrations were successful is visible from comparing the number of 51.1% valid registrations to the RR of 49.7% for the model trained on unseen categories.
Again, we can observe that while a powerful registration method, RGM seems to overfit on the training modalities, being beaten even by RPM-Net trained for the experiment on noisy data and unseen categories, whereas our method is rather robust to changes in sampling, overlap and geometry.
Interestingly, for both RGM and RPM-Net, models trained on the harder cases of only partial overlap often lead to a decrease in generalization performance, whereas our method's ability to generalize to different data improves with the difficulty of the training task.
§.§ Resource Consumption and performance
Registration performance is not the only relevant criterion for the usability of an algorithm.
Execution time as well as compute resource needs are limited especially in mobile applications and are therefore a further relevant measure in algorithm selection.
To this end, we compare our algorithm in terms of complexity and resource needs to the two best competing methods.
Model complexity is measured in the number of trainable parameters.
Compute resource needs are given in GB of GPU memory use for batch sizes of 20, 5, and 1, as well as registration speed measured in registrations per second.
We can see from Table <ref>, that our method is both more light-weight and faster while still providing competitive results.
§.§ Ablation Study
In order to evaluate the benefit of different parts in our feature head, we test the following configurations.
Networks are trained on the task of partial overlap, as in section <ref> using the same random seed, with the following architectural differences:
* Location encoder: the feature head only uses the location encoder.
* Feature only: the feature head only uses the local point feature network.
* additive fusion: the MLP fusing position encoding and local point feature is replaced by a simple addition of feature vectors.
* MLP fusion: this is the full network architecture, consisting of the feature head with location encoder, point feature network and MLP for feature fusion.
Please note that the networks have not been trained to full convergence, since only a qualitative difference is required.
For testing, the same modalities as for the experiments in section <ref> have been employed.
From the results in Table <ref> we can see that each additional structure improves the overall performance; the method works best if we let the network learn how to combine both feature vectors.
§ CONCLUSION
In this paper, we presented GAFAR, a novel, light-weight algorithm for point set registration using an end-to-end learnable deep neural network for feature encoding and correspondence prediction.
Its performance is competitive while being faster and less demanding on resources compared to other state-of-the-art methods, which makes it well suited for applications with constraints on compute resources, power consumption and runtime.
Our method shows very high generalization capability to different data modalities and exhibits little overfit to geometry details of the training set.
The strong performance for partial overlap, even for object classes not present in training, shows the merits of the cross-attention mechanism for feature augmentation.
A further benefit of our method is its ability to provide an indication on the quality of predicted correspondences, thereby giving opportunity to tune between high registration accuracy and high recall as well as to reject failed or bad registrations without additional knowledge.
In practice, failure cases can be remedied by either performing multiple registrations with different sub-sampling in parallel in a batched fashion, or by repeating the registration with a different sample in case of failure.
In the future, we plan to tackle the limitation to only small subsets of point clouds by applying the underlying architectural principles to the registration of large point sets directly, while still keeping with the paradigm of light-weight architecture and fast execution.
§ ACKNOWLEDGMENT
This work was supported by Land Steiermark within the research initiative “Digital Material Valley Styria”.
|
http://arxiv.org/abs/2307.02913v1
|
20230706105620
|
Numerical Methods with Coordinate Transforms for Efficient Brownian Dynamics Simulations
|
[
"Dominic Phillips",
"Charles Matthews",
"Benedict Leimkuhler"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"65C30"
] |
Numerical Methods with Coordinate Transforms for Efficient Brownian Dynamics Simulations
Dominic Phillips, Charles Matthews, Benedict Leimkuhler
=========================================================================================
Many stochastic processes in the physical and biological sciences can be modelled using Brownian dynamics with multiplicative noise. However, numerical integrators for these processes can lose accuracy or even fail to converge when the diffusion term is configuration-dependent. One remedy is to construct a transform to a constant-diffusion process and sample the transformed process instead. In this work, we explain how coordinate-based and time-rescaling-based transforms can be used either individually or in combination to map a general class of variable-diffusion Brownian motion processes into constant-diffusion ones. The transforms are invertible, thus allowing recovery of the original dynamics. We motivate our methodology using examples in one dimension before then considering multivariate diffusion processes. We illustrate the benefits of the transforms through numerical simulations, demonstrating how the right combination of integrator and transform can improve computational efficiency and the order of convergence to the invariant distribution. Notably, the transforms that we derive are applicable to a class of multibody, anisotropic Stokes-Einstein diffusion that has applications in biophysical modelling.
§ INTRODUCTION
Many problems in finance and the physical and biological sciences can be modelled as instances of Brownian motion. Examples include portfolio optimization <cit.>, options pricing <cit.>, diffusion in biological membranes and nanocomposites <cit.>, cell migration <cit.>, protein folding <cit.>, neuronal dynamics <cit.>, population genetics <cit.>, MRI imaging <cit.>, ecological modelling <cit.> and score-based diffusion for generative AI <cit.>. In these contexts, configuration-dependent diffusion is often critical to the modelling assumption but can introduce problems for numerical modelling. It can make the problem stiffer by introducing unbounded noise or bounds on the state variables. Additionally, it can reduce the weak order of convergence of an integrator. This is a problem for simulation because sampling becomes more expensive. It is also a problem for estimation, such as when fitting a Brownian dynamics “grey-box" model, since high accuracy is required for the Extended Kalman Filter approximations to be meaningful <cit.>.
One remedy for these problems is to design sophisticated, derivative-free numerical integrators that maintain high-accuracy convergence for certain classes of state-dependent diffusion. In recent years, many authors have contributed to a series of improvements and various integrators have been proposed <cit.>. However, a common drawback of these integrators is the requirement of multiple evaluations of the force and diffusion tensor per time step. This can be expensive in multi-body simulations, where the evaluation of these terms is the computational bottleneck. In addition, many of these integrators place restrictions on the class of state-dependent diffusion, often requiring commutative noise, which is not suitable for all applications.
An alternative approach, preferred whenever possible, is to transform the original process into a process with constant diffusion, thereby mitigating the sampling challenges introduced by multiplicative noise <cit.>. For certain classes of stochastic differential equations (SDEs), this is achieved through a Lamperti transform, a type of non-linear change of variable <cit.>. The resulting constant-diffusion process might exhibit enhanced numerical stability and can be sampled with computationally cheap, high weak-order integrators. Take for example the Black-Scholes model from financial mathematics, which describes geometric Brownian motion on the positive real axis. When simulated with sufficiently large step sizes, positivity can be violated which results in numerical instability. Here the Lamperti transform approach is especially valuable since it is possible to simultaneously construct a transform to unit diffusion whilst also removing the positivity constraint <cit.>.
An alternative to a spatial coordinate transform is to apply a smooth, configuration-dependent time-rescaling <cit.>.
Recently, this has been explored as a method for adaptive stepsize control in Langevin dynamics sampling <cit.>. In this work, we take a different perspective and consider time-rescaling alongside the Lamperti transform as a strategy to remove multiplicative noise.
In this article, we focus on multivariate Brownian dynamics for concreteness, deriving the conditions on the noise term for the Lamperti transform and time-rescaling to be applicable. We also explore how the transforms can be combined and for which class of processes the transformed process is an instance of Brownian dynamics. We further show how phase-space averages of the original process can be estimated using samples from the transformed process and demonstrate with numerical experiments how combining transforms with the right choice of numerical integrator leads to an efficient, weak second-order sampling method that requires only one force and one diffusion evaluation per time step.
The article is structured as follows. Section <ref> introduces the Lamperti transform, time-rescaling transformation and Brownian dynamics. Section <ref> explores in detail how these transforms apply to one-dimensional Brownian dynamics. Numerical experiments in one dimension are presented in Section <ref>. Section <ref> extends the transforms to multivariate Brownian dynamics. A multivariate numerical experiment is presented in Section <ref>. Conclusions are presented in Section <ref>.
§ PRELIMINARIES
§.§ The Lamperti Transformation
Consider a time-homogeneous Itô SDE of the form
dX_t = f(X_t)dt + σ (X_t)RdW_t.
Here, W_t ∈ℝ^m denotes standard Brownian motion, X_t ∈ℝ^n is the state variable, f : ℝ^n →ℝ^n is a vector drift, σ : ℝ^n →ℝ^n ×ℝ^m is a diffusion matrix, and R∈ℝ^m ×ℝ^m is an arbitrary matrix of constant coefficients.
The Lamperti transform Y_t = ξ(X_t) is an invertible coordinate transformation ξ : ℝ^n →ℝ^n that, if applied to an SDE of the form (<ref>), results in a process with constant, unit diffusion <cit.>. This transformation exists if and only if:
(i) The dimensions of X_t and W_t are the same, i.e., n = m.
(ii) The matrix R is invertible.
(iii) The diffusion matrix σ(X_t) has a diagonal form:
σ(X_t) = diag(σ_1(X_1,t), σ_2(X_2,t), …, σ_n(X_n,t)),
where σ_i(x) > 0 for all x ∈ℝ and i ∈{1,2,…,n}.
By applying the multivariate version of Itô's lemma, it can be shown that the Lamperti transform is given by
Y_t = R^-1ϕ(X_t),
where ϕ(X_t) = [ϕ_1(X_1,t), ϕ_2(X_2,t), …, ϕ_n(X_n,t)]^T and ϕ_j : ℝ→ℝ is the invertible map:
ϕ_j(x) = ∫_x_j,0^x1/σ_j(z) dz,
and x_j,0 is an arbitrary constant chosen from the state space of X_j. Note that the choice of x_j,0 is physically inconsequential for the transformed dynamics (c.f. Section <ref>).
The transformed process Y_t exhibits isotropic unit diffusion and Y_i,t obeys
dY_i,t = ∑_j=1^n R_ij^-1(f_j(ϕ^-1(RY_t))/σ_j(ϕ^-1_j((RY_t)_j)) - 1/2∂/∂ xσ_j (x)|_x=ϕ^-1_j((RY_t)_j))dt + dW_i,t.
To transform to isotropic constant diffusion D_0 dW_t, (<ref>) becomes Y_t = D_0 R^-1ϕ(X_t) and (<ref>) changes accordingly.
The Lamperti transform can be used as a tool to find exact solutions for specific classes of SDEs <cit.> or to perform statistical inference for SDEs <cit.>, but the extent to which the Lamperti transform is useful in practice is limited by the specific form that σ(X_t) must follow in (<ref>). In this work, we consider only time-homogeneous SDEs, although the Lamperti transform can be extended to certain time-inhomogeneous problems <cit.>.
§.§ The Time-Rescaling Transform
An alternative method for transforming an SDE to constant diffusion is the time-rescaling transformation (e.g. <cit.>, Chapter 8 and <cit.>, Chapter 8). This approach is applicable to a different class of SDEs compared to the Lamperti transformation. As before, we start by considering an SDE of the form
dX_t = f(X_t)dt + σ (X_t)RdW_t,
where the notation follows Equation (<ref>). We introduce a configuration-dependent time rescaling, denoted as t →τ(t), with the property.
dt/dτ(X_t)=g(X_τ),
The governing equation for the time-rescaled process becomes
dX_τ = f(X_τ)g(X_τ)dτ + σ (X_τ)R√(g(X_τ))dW_τ.
In the above equation, we have replaced dt with dt/dτ dτ = g(𝐗_τ)dτ using a change of variables. The term √(g(𝐗_τ)) arises from the scaling property of Brownian motion. To achieve a transformation to constant diffusion, the following conditions need to be satisfied:
(i) The dimensions of 𝐗_t and 𝐖_t are the same, i.e., n=m.
(ii) The matrix 𝐑 is invertible.
(iii) The diffusion matrix σ(𝐗_t) has a diagonal form:
σ(X_t) = diag(D(X_t), D(X_t), …, D(X_t)),
i.e. it is an isotropic matrix with arbitrary configuration dependence.
To remove the configuration dependence from the diffusion term, we can choose the time rescaling as
g(X) = 1/D^2(X).
Substituting this choice into the previous equation simplifies the governing equations to
dX_τ = f(X_τ)/D^2(X_τ)dτ + RdW_τ.
We may then transform to unit diffusion through a linear transform
Y_τ = R^-1X_τ.
Applying Itô's lemma then yields the constant-diffusion equation for the transformed process
dY_τ = R^-1f(RY_τ)/D^2(RY_τ)dτ + dW_τ.
The time-rescaling method can also be used to transform the dynamics to one with arbitrary diffusion profile D̃(𝐗) by making the choice
g(X)=(D̃(X)/D(X))^2.
In later sections, we will explore the Lamperti and time-rescaling transforms within the framework of overdamped Langevin dynamics, also known as Brownian dynamics, which we now introduce.
§.§ Brownian Dynamics
In this section, we will examine the governing SDEs of Brownian dynamics. These equations describe the movement of particles in viscous fluids under the influence of position-dependent deterministic and stochastic forces.
One-dimensional Brownian dynamics: The SDE governing one-dimensional Brownian dynamics is given by
dx_t = - D(x)^2 dV/dxdt + 2 kT D(x) dD/dx dt + √(2 kT)D(x) dW_t,
where V : ℝ→ℝ is a potential function, D : ℝ→ℝ_>0 denotes the diffusion coefficient and kT > 0 is the product of the Boltzmann constant k and the temperature T in Kelvin. The process (<ref>) can be re-written on redefining D^2(x) → D(x) as
dx_t = - D(x) dV/dxdt + kT d D/dxdt + √(2 kT D(x)) dW_t,
which is the form we will use in this work. Note that equations (<ref>) and (<ref>) have the same invariant distribution.
Multi-variate Brownian dynamics: For the case of multi-variate diffusion, (<ref>) generalises to
dX_t = -(D(X)D(X)^T) ∇ V(X) dt + kT div(DD^T)(X) dt + √(2 kT)D(X)dW_t,
where V: ℝ^n →ℝ is the potential and D: ℝ^n →ℝ^n ×ℝ^n is a configuration-dependent diffusion tensor that is everywhere positive definite.
div(DD^T)_i = ∑_j,k=1^n ∂/∂ x_k(D_ijD_kj).
Ergodicity of Brownian Dynamics: From here onwards, we assume that potential and diffusion functions are smooth. We further assume that the potential is confining in a way that ensures the geometric ergodicity of the dynamics. See, for example, Pavliotis (2014) for technical details <cit.>. Under these assumptions, geometric ergodicity guarantees the existence of a unique invariant distribution ρ.
An invariant distribution is a distribution that is preserved by the dynamics of the system. It can be shown that the invariant distribution of Brownian dynamics is the canonical ensemble:
ρ(X) ∝exp(- V(X)/kT).
The invariance of ρ can be confirmed by substituting it into the associated Fokker-Planck equations (Forward-Kolmogorov equations) of the process <cit.>.
The ergodicity of Brownian dynamics further establishes the Birkhoff ergodic theorem. This theorem states that the time average of any Borel-measurable, L1 integrable function f converges to its phase-space average as the simulation time goes to infinity, i.e.
∫_ℝ^n f(X)ρ(X)dX = lim_T →∞1/T∫_t=0^T f(X_t)dt.
§ ONE-DIMENSIONAL BROWNIAN DYNAMICS: TRANSFORMS TO UNIT DIFFUSION
We consider the Lamperti and time rescaling transforms applied to one-dimensional Brownian dynamics (<ref>).
§.§ Lamperti Transform for One-Dimensional Brownian Dynamics
In one dimension, the Lamperti transform can be seen as a specific instance of a well-known transformational-symmetry of Brownian dynamics. We state and prove this symmetry below. In Section <ref>, we then make the explicit connection to the Lamperti transform.
Applying a coordinate transformation y = y(x) to one-dimensional Brownian dynamics (<ref>) with potential V(x) and diffusion function D(x) results in another instance of Brownian dynamics with potential V̂(y) and diffusion function D̂(y) given by
V̂(y) = V(x(y)) + kT ln|dy/dx(x(y)) |,
D̂(y) = D(x(y))( dy/dx(x(y)))^2.
Applying Itô's Lemma to y=y(x), where x obeys (<ref>), we have,
dy = (-D(x(y))dV/dx(x(y)) + kT dD/dx(x(y)))dy/dx(x(y)) dt + kT D(x(y)) d^2 y/dx^2(x(y)) dt + √(2 kT D(x(y)))dy/dx(x(y)) dW_t.
Now substituting in the transformations (<ref>),
dy = -D̂d/dx( V̂ - kTln|dy/dx|) (dy/dx)^-1 dt + kT d/dx(D̂ (dy/dx)^-2)dy/dx dt + kT D̂ (dy/dx)^-2 d^2 y/dx^2 dt + √(2 kT D̂) dW_t
= -D̂dV̂/dy dt + kT D̂ d^2 y/dx^2 (dy/dx)^-2 dt + kT dD̂/dy dt - 2kT D̂ d^2 y/dx^2 (dy/dx)^-2 dt + kT D̂ (dy/dx)^-2 d^2 y/dx^2 dt + √(2 kT D̂) dW_t
= -D̂(y) dV̂/dy dt + kT dD̂/dy dt + √(2 kT D̂(y)) dW_t,
which is an instance of Brownian motion with potential V̂(y) and diffusion function D̂(y).
§.§.§ Lamperti Transform to Constant Diffusion
Setting D̂(y)=1 and solving for y(x), we recover the one-dimensional Lamperti transform:
y(x) = ∫_x_0^x( 1/D(z))^1/2dz,
where x_0 is an arbitrary constant. The integrand is everywhere positive since D(x) > 0 for all x ∈ℝ. Hence y(x) is a monotonically increasing function of x and has a unique inverse x(y).
Differentiating the transformation equation with respect to x, we find dy/dx = ( 1/D(x))^1/2. Substituting this result into (<ref>) and (<ref>), we obtain the transformed dynamics:
dy = - dV̂(y)/dydt + √(2kT)dW_t,
with effective potential
V̂(y) = V(x(y)) - kT/2lnD(x(y)).
Note that V̂(y) implicitly depends on the choice of x_0 through the inverse transform x(y). Since x_0 changes the vertical offset of y(x), it similarly changes the horizontal offset of x(y). Changing x_0 thus corresponds to horizontally translating V̂(y) and it has no consequence for the dynamics.
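In practice the integral defining y(x) rarely has a closed form, so it can be tabulated numerically. The following NumPy/SciPy sketch (grid range, resolution and the interpolation-based inverse are our own choices, and the example functions anticipate the model problem used in the numerical experiments below) builds y(x), its inverse x(y) and the effective potential V̂(y):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def lamperti_tables(V, D, x_grid, kT=1.0):
    """Tabulate y(x) = int_{x_0}^x D(z)^(-1/2) dz, its inverse x(y), and the
    effective potential V_hat(y) = V(x(y)) - (kT/2) log D(x(y))."""
    y_grid = cumulative_trapezoid(1.0 / np.sqrt(D(x_grid)), x_grid, initial=0.0)
    y_grid -= np.interp(0.0, x_grid, y_grid)          # fix the (inconsequential) offset x_0 = 0
    y_of_x = lambda x: np.interp(x, x_grid, y_grid)   # y(x) is monotone, so interpolation
    x_of_y = lambda y: np.interp(y, y_grid, x_grid)   # also provides its inverse
    V_hat = lambda y: V(x_of_y(y)) - 0.5 * kT * np.log(D(x_of_y(y)))
    return y_of_x, x_of_y, V_hat

# Example with the potential and diffusion profile appearing later in the text:
x_grid = np.linspace(-10.0, 10.0, 4001)
V = lambda x: x**2 / 2.0 + np.sin(1.0 + 3.0 * x)
D = lambda x: 1.0 + np.abs(x)
y_of_x, x_of_y, V_hat = lamperti_tables(V, D, x_grid)
```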
§.§.§ Lamperti Transform Ergodic Theorem
The Lamperti transformation changes the Birkhoff ergodic theorem in a straightforward manner, as we now prove.
Ensemble averages of a function f(x) with respect to the invariant distribution ρ(x) can be recovered from samples y_t of the Lamperti-transformed process as follows:
∫_-∞^∞ f(x)ρ(x)dx = lim_T →∞1/T∫_t=0^T f(x(y_t)) dt.
In other words, samples in y need to be mapped back to x space before applying the standard ergodic theorem.
We assume that the transformed process is geometrically ergodic and therefore satisfies a Birkhoff ergodic theorem of the form
lim_T →∞1/T∫_t=0^T f(y_t) dt = ∫_-∞^∞ f(y) ρ̂(y) dy,
where ρ̂(y) = 1/Ẑexp(-V̂(y)/kT) is the invariant distribution in the transformed space. Substituting in the effective potential from equation (<ref>) for constant diffusion, the right-hand side becomes
∫_-∞^∞ f(y) exp(-V(x(y))/kT)/Ẑ√(D(x(y))) dy = ∫_-∞^∞ f(y(x)) exp(-V(x)/kT)/Ẑ√(D(x))dy/dx dx.
Using the fact dy/dx=1/√(D(x)), this equation simplifies to
∫_-∞^∞ f(y(x)) exp(-V(x)/kT)/Ẑdx = Z/Ẑ∫_- ∞^∞ f(y(x)) ρ(x) dx
where Z = ∫_-∞^∞exp(-V(x)/kT)dx and Ẑ = ∫_-∞^∞exp(-V̂(y)/kT)dy
are the partition functions in the original space and transformed space respectively.
By setting f(y_t)=1, we observe that
lim_T →∞1/T∫_t=0^T 1 dt = 1 = Z/Ẑ∫_-∞^∞ 1ρ(x) dx = Z/Ẑ.
Hence, we have
Z =Ẑ.
Alternatively, we can arrive at (<ref>) through a change of variables:
Z = ∫_-∞^∞exp(-V(x)/kT) dx = ∫_-∞^∞exp(-V(x(y))/kT) (dy/dx(x(y)))^-1 dy
= ∫_-∞^∞exp(-(V(x(y))+kT ln|dy/dx(x(y)) |)/kT) dy = ∫_-∞^∞exp(-V̂(y)/kT) dy = Ẑ.
Using this result, Equation <ref> becomes
∫_-∞^∞ f(y(x))ρ(x)dx = lim_T →∞1/T∫_t=0^T f(y_t) dt.
Finally, on making a suitable redefinition of f(x), we have
∫_-∞^∞ f(x)ρ(x)dx = lim_T →∞1/T∫_t=0^T f(x(y_t)) dt,
as required.
§.§.§ The Lamperti Transform Invariant Distribution
The invariant distribution ρ(x) of the original process and the invariant distribution ρ̂(y) of the Lamperti-transformed process are related by
ρ(x) = ρ̂(y(x)) dy/dx.
First, consider the ergodicity formula of Theorem <ref> and make the specific choice f(x) = I(x ∈ [a, b]), where I is the indicator function on the interval [a, b]. This gives
∫_-∞^∞ f(x)ρ(x)dx = ∫_a^b ρ(x) dx = lim_T →∞1/T∫_t=0^T I(x(y_t) ∈ [a,b]) dt
= lim_T →∞1/T∫_t=0^T I(y_t ∈ [y(a),y(b)]) dt = ∫_y(a)^y(b)ρ̂(y) dy
= ∫_a^bρ̂(y(x)) dy/dx dx,
where in the second line we rescaled the indicator function by y(x) and applied the Birkhoff ergodic theorem for the y process.
∫_a^b (ρ(x) - ρ̂(y(x))dy/dx) dx = 0
which, from the arbitrariness of the constants a and b, proves the result.
In practice, if we take discretised samples y_n, sampled using a uniform timestep h, then from (<ref>),
∫_a^b ρ(x) dx = lim_N →∞1/N∑_n=0^N I(x(y_n) ∈ [a,b]),
which provides a numerical algorithm for estimating finite-width integrals of the original invariant distribution using samples in the transformed space.
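This estimator is essentially a one-liner in NumPy (a sketch with our own names; x_of_y denotes a numerical inverse transform such as the one tabulated above):

```python
import numpy as np

def invariant_hist_from_lamperti(y_samples, x_of_y, bins):
    """Estimate P(x in [a,b]) for each bin of the original process from uniformly
    time-stepped samples y_n of the Lamperti-transformed process."""
    x_samples = x_of_y(np.asarray(y_samples))   # map back to x-space first
    counts, _ = np.histogram(x_samples, bins=bins)
    return counts / counts.sum()
```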
§.§ Time Rescaling
We now consider a general time-rescaling transformation applied to one-dimensional Brownian dynamics.
Under the configuration-dependent time rescaling t →τ(t) where dt/dτ = g(x), the Brownian dynamics (<ref>) is preserved but with the following adjustments to the potential and diffusion coefficient:
V̂(x) = V(x) + kT lng(x)
D̂(x) = g(x)D(x).
Applying a version of the time rescaling appearing in equation (<ref>) to one-dimensional Brownian dynamics we arrive at,
dx_τ = -g(x)D(x)dV/dxdτ + kT g(x) dD(x)/dx dτ + √(2 kT g(x) D(x))dW_τ.
Inserting the identities from equation (<ref>) into (<ref>) we obtain
dx_τ = - D̂(x)d/dx( V̂(x) - kT lng(x))dτ + kTg(x) d/dx(D̂(x)/g(x)) dτ + √(2kT D̂(x))dW_τ
= - D̂(x)dV̂/dxdτ + kT D̂(x) g'(x)/g(x)dτ + kTdD̂(x)/dx dτ - kTD̂(x)g'(x)/g(x) dτ + √(2kT D̂(x))dW_τ
= - D̂(x)dV̂/dxdτ + kT dD̂(x)/dx dτ + √(2kT D̂(x))dW_τ,
which is a transformed version of the original one-dimensional Brownian dynamics but in an effective potential V̂(x) and a rescaled diffusion coefficient D̂(x), as required.
§.§.§ Time-Rescaling Transform to Constant Diffusion
The time rescaling to unit diffusion comes from setting D̂(x)=1 and is given by
g(x) = 1/D(x).
Substituting this result into (<ref>) and (<ref>), we obtain the transformed dynamics:
dx_τ = - dV̂/dxdτ + √(2kT)dW_τ,
with effective potential
V̂(x) = V(x) - kT lnD(x).
Note that these are different dynamics to those obtained through a Lamperti transform to constant diffusion in Section <ref>. We discuss these differences further in Section <ref>.
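As a sketch (ours, not the authors' Julia code), a single Euler-Maruyama step of this time-rescaled, constant-diffusion process could read:

```python
import numpy as np

kT = 1.0

def V_hat_prime(x, V_prime, D, D_prime):
    """Gradient of the effective potential V_hat(x) = V(x) - kT*log(D(x))."""
    return V_prime(x) - kT * D_prime(x) / D(x)

def em_step_rescaled(x, h, w, V_prime, D, D_prime):
    """One Euler-Maruyama step of dx = -V_hat'(x) dtau + sqrt(2 kT) dW_tau."""
    return x - V_hat_prime(x, V_prime, D, D_prime) * h + np.sqrt(2.0 * kT * h) * w
```

Samples generated this way live in τ-time and must be reweighted by g(x) = 1/D(x) when estimating averages of the original process, as described in the next subsection.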
§.§.§ Time-Rescaling Ergodic Theorem
Ensemble averages of a function f(x) with respect to the invariant distribution ρ(x) can be recovered from samples x_τ of the time-rescaled process as
∫_-∞^∞ f(x) ρ(x) dx = lim_T →∞∫_τ=0^T f(x_τ) g(x_τ) dτ/∫_τ=0^T g(x_τ) dτ.
From the Birkhoff ergodic theorem applied to the original process,
∫_-∞^∞ f(x) ρ(x) dx = lim_T →∞1/T∫_t=0^T f(x_t)dt.
Changing variables t →τ in the integration, the right-hand side becomes,
lim_T →∞1/T∫_τ=0^τ(T)f(x_τ) dt/dτdτ = lim_T →∞1/T∫_τ=0^τ(T) f(x_τ) g(x_τ) dτ.
Redefining T this can be alternatively written as
lim_T →∞1/t(T)∫_τ=0^T f(x_τ) g(x_τ) dτ.
Finally, we note that by integrating dt/dτ=g(x) between 0 and T we can obtain an expression for t(T),
t(T) = ∫_τ=0^T g(x_τ) dτ.
Substituting this into the above equation completes the proof.
The proof of Theorem <ref> does not require the assumption of Brownian dynamics. Theorem <ref> therefore holds more generally when time-rescaling general one-dimensional SDEs.
§.§.§ The Invariant Distributon
We use (<ref>) with the specific choice f(x) = I(x ∈ [a, b]):
∫_-∞^∞ f(x)ρ(x) dx = ∫_a^bρ(x) dx = lim_T →∞∫_τ=0^T I(x_τ∈ [a,b])g(x_τ) dτ/∫_τ=0^T g(x_τ)dτ.
Introducing discretised samples,
∫_a^bρ(x) dx = lim_N →∞∑_n=0^N g(x_τ_n)I(x_τ_n∈ [a, b])/∑_n=0^N g(x_τ_n),
which provides a numerical algorithm for estimating finite-width integrals of the original invariant distribution using samples of the time-rescaled process.
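The corresponding estimator for time-rescaled samples differs from the Lamperti one only by the g-weights (again a sketch with our own names):

```python
import numpy as np

def invariant_hist_from_time_rescaled(x_samples, g, bins):
    """Estimate P(x in [a,b]) for each bin of the original process from samples
    x_{tau_n} of the time-rescaled process, reweighting each sample by g."""
    x = np.asarray(x_samples)
    w = g(x)
    counts, _ = np.histogram(x, bins=bins, weights=w)
    return counts / w.sum()
```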
§.§ Comparison of the Approaches
The Lamperti transform and time-rescaling transform provide different ways to achieve a constant diffusion transformation, resulting in transformed processes with different effective potentials. Both methods can be used in one dimension for any positive diffusion coefficient D(x), even if it is non-smooth or discontinuous. However, there is a distinction in terms of computation. The Lamperti transform requires numerical integration in practice, as it can be analytically computed only for specific forms of D(x), whereas the time-rescaling transform does not have this limitation.
Another distinction between the transforms lies in how they modify the potential function. In the Lamperti transform, the transform y(x) is such that regions of x-space where D(x) > 1 tend to correspond to regions in y-space where the effective potential is steeper (more confining) than the original. Conversely, for 0 < D(x) < 1, the effective potential tends to become shallower (less confining). In the case of the time-rescaling transform, equation (<ref>) indicates that the transformed potential is steeper than the original wherever dD/dx < 0, and is shallower wherever dD/dx > 0.
Since the two transforms have different effects on the transformed potential, the preferred choice depends on the specific problem at hand. For example, when sampling rare event transitions, it may be preferable to use the transform that results in a softer potential since this choice ensures improved numerical stability for larger step sizes.
For example, consider one-dimensional Brownian motion with a diffusion coefficient
D(x) = 1 + | x |.
The Lamperti transform required to map this into a frame in which diffusion is unity is given by (setting x_0=0)
y(x) = ∫_0^x √(1/(1+| z |)) dz = 2 sgn(x)(√(1+| x |)-1).
From this, we compute
dy/dx = √(1/(1+| x |)), x(y) = y/4(| y | +4).
And the transformed potential becomes
V̂(y) = V(y/4(| y | +4)) - kT/2ln| 1 + | y | + y^2/4|.
Alternatively, using the transform equation (<ref>), we consider a time rescaling to constant diffusion with g(x) = 1/(1+| x |).
V̂(x) = V(x) - kT ln(1+| x |).
Sketches below of V(x), V̂(y) and V̂(x) are shown for the special case that kT = 1 and V(x) = x^2/2 + sin(1+3x). In this example, the Lamperti and time-rescaling transforms have opposite effects on the shape of the well; the Lamperti transform makes it stiffer and the time-rescaling makes it softer.
Whether it is the Lamperti-transformed potential or time-rescaled potential that is softer depends on the particular form of the diffusion coefficient. In Figure <ref> we illustrate this by showing how the two transforms differ for a variety of diffusion coefficients when applied to the same quadratic potential, V(x) = x^2.
§ NUMERICAL EXPERIMENTS
We simulate the system
dx = - D(x) dV/dxdt + kT dD/dxdt + √(2 kT D(x)) dW_t,
V(x) = x^2/2 + sin(1+3x), D(x) = 1 + | x |, kT = 1,
using various numerical integrators. We examine both the original system (<ref>), as well as the systems in which (<ref>) is transformed to constant-diffusion separately using either the Lamperti transform or Time Rescaling (c.f. Figure <ref>). We compare rates of convergence to the invariant distribution and finite time distributions of the different integrator methods, both with and without the transforms. Further, we quantify any improvements in sampling efficiency and numerical stability gained by applying the transformations. All experiments are performed on a Thinkpad P17 with a 12-core, 2.60GHz Intel i7-10750H CPU using a custom code implemented in Julia 1.8.5.
§.§ Numerical Integrators
The concept of order of convergence is important when comparing integrators. Let X^h_n denote the numerical solution after n steps of size h and let X_t_n denote the exact solution at time t_n = nh. If there exists a constant C > 0 such that for sufficiently small step sizes h, the L1 error satisfies
𝔼[| X^h_n - X(t_n) |] ≤ Ch^p_s,
then we say the integrator has a strong order of convergence p_s. If there exists a constant C > 0 such that for sufficiently small step sizes h, the absolute difference between the expected values of the numerical and exact solutions satisfies
|𝔼[X^h_n] - 𝔼[X(t_n)]| ≤ Ch^p_w,
then we say the integrator has a weak order of convergence p_w. The weak order is never smaller than the strong order. In the context of convergence rates to the invariant distribution, the weak order is more relevant and will be the primary focus in this work.
We use the notation
a(x) = -D(x)dV/dx + kTdD/dx,
σ(x) = √(2kTD(x)),
ã(x) = a(x) - 1/2σ(x)dσ/dx = -D(x)dV/dx + 1/2kT dD/dx,
where a(x) is the drift term, σ(x) the diffusion term, and ã(x) is the Stratonovich-corrected drift <cit.>. We consider the following integrators, where w_n, w_n+1iid∼𝒩(0,1) and h is the step size:
i) Euler-Maruyama (EM)
x_n+1 = x_n + a(x_n)h + σ(x_n)√(h)w_n;
ii) Milstein Method (MM)
x_n+1 = x_n + a(x_n)h + σ(x_n)√(h)w_n + 1/2kT dD/dx(x_n)(w_n^2 - 1)h;
iii) “Naive” Leimkuhler-Matthews (LM)
x_n+1 = x_n + a(x_n)h + σ(x_n)√(h)(w_n + w_n+1)/2;
iv) Hummer-Leimkuhler-Matthews (HLM)
x_n+1 = x_n + (a(x_n) + 1/4kTdD/dx(x_n))h + σ(x_n)√(h)(w_n + w_n+1)/2;
v) Stochastic Heun (SH)
x^*_n+1 = x_n + ã(x_n)h + σ(x_n)√(h)w_n
x_n+1 = x_n + 1/2(ã(x_n) + ã(x^*_n+1))h + 1/2(σ(x_n) + σ(x^*_n+1))√(h)w_n;
vi) Limit Method with Variable Diffusion (LMVD)
x̂_n+1 = √(kT)w_n + √(2h D(x_n))dV/dx(x_n) + √(h/2 D(x_n))dD/dx(x_n)
x̃_n+1 = {x(√(h/2)) | x(0) = x_n, dx = √(D(x))x̂_n+1dt }
x_n+1 = {x(√(h/2)) | x(0)=x̃_n+1, dx=√(kT D(x))w_n+1dt}.
The Euler-Maruyama (EM) integrator is an extension of the Euler method to SDEs. It has strong order of convergence 1/2 and weak order of convergence 1 <cit.>. The Milstein method (MM) modifies the EM method by introducing a second-order correction term derived from a stochastic Taylor series expansion. The method has strong order 1 and weak order 1 <cit.>. The “Naive” Leimkuhler-Matthews (LM) method is weak order 2 for constant diffusion (here “naive” because in experiments we will also use it in the multiplicative noise regime). It is derived from a high-friction limit of the BAOAB method of Langevin dynamics in the constant diffusion regime <cit.>. The Hummer-Leimkuhler-Matthews (HLM) method is an extension of LM that modifies the coefficient of the dD/dx term. This modification ensures that the expectation of x is exact in the case of locally linear diffusion, and this is conjectured to improve convergence in the variable diffusion regime[We would like to acknowledge Gerhard Hummer for devising and suggesting this method during our personal correspondence.]. The LM integrator has been studied numerically in the constant diffusion regime <cit.> but to the authors' knowledge, the LM and HLM methods have yet to be benchmarked in the variable diffusion regime. The Stochastic Heun (SH) method is a two-stage Runge-Kutta method and is weak order 2 for constant diffusion but only weak order 1 for variable diffusion <cit.>. However, the accuracy gains of SH come at the cost of higher computational requirements, as it involves two force evaluations, two diffusion coefficient evaluations, and two diffusion gradient evaluations per iteration. Finally, the Limit Method with Variable Diffusion (LMVD) scheme is weak order 2 for both constant and variable diffusion. It is derived from a high-friction limit of the BAOAB method of Langevin dynamics in the variable diffusion regime.
Unlike SH, it requires only one force evaluation per iteration. However, it requires two ODE solves per timestep. LMVD reduces to the LM method when D(x) is constant.
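To make the simpler update rules concrete, here is a minimal NumPy sketch of the EM, MM, LM and HLM steps for the scalar model problem (<ref>) introduced above (a transcription of ours; the experiments in this work were run with a separate Julia code, and the SH and LMVD schemes, which need additional force evaluations or ODE solves, are omitted):

```python
import numpy as np

kT = 1.0
V_prime = lambda x: x + 3.0 * np.cos(1.0 + 3.0 * x)   # V(x) = x^2/2 + sin(1+3x)
D       = lambda x: 1.0 + np.abs(x)
D_prime = lambda x: np.sign(x)
a       = lambda x: -D(x) * V_prime(x) + kT * D_prime(x)
sigma   = lambda x: np.sqrt(2.0 * kT * D(x))

def em_step(x, h, w, w_next=None):
    return x + a(x) * h + sigma(x) * np.sqrt(h) * w

def milstein_step(x, h, w, w_next=None):
    return em_step(x, h, w) + 0.5 * kT * D_prime(x) * (w**2 - 1.0) * h

def lm_step(x, h, w, w_next):
    # "naive" Leimkuhler-Matthews: average of consecutive noise increments
    return x + a(x) * h + sigma(x) * np.sqrt(h) * 0.5 * (w + w_next)

def hlm_step(x, h, w, w_next):
    return x + (a(x) + 0.25 * kT * D_prime(x)) * h + sigma(x) * np.sqrt(h) * 0.5 * (w + w_next)

def simulate(step, x0=0.0, h=1e-2, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_steps + 1)
    x, traj = x0, np.empty(n_steps)
    for n in range(n_steps):
        x = step(x, h, w[n], w[n + 1])
        traj[n] = x
    return traj
```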
§.§ Error in Infinite Time
We sample the canonical distribution 1/Zexp(-V(x)/kT) and compare convergence rates using trajectories generated by the different integrators both with and without transforms to constant diffusion. For the untransformed dynamics, we compare EM, MM, LM, HLM, SH, and LMVD. For the Lamperti-transformed dynamics and the time-rescaled dynamics, we compare the EM, LM, and SH integrators. These three integrators are sufficient since MM reduces to EM, and LMVD and HLM reduce to LM for constant diffusion. For each method, we run trajectories of length T = 7.5 × 10^7, and 12 independent runs are averaged to further reduce sampling errors.
To compare the convergence to the invariant distribution, we introduce M intervals of equal length and compute the mean L1 error of the empirical probabilities of occupying an interval compared to the exact probabilities obtained by integrating the known invariant distribution. For experiments that use the Lamperti transform, we arrive at these empirical probabilities via equation (<ref>). For experiments that use time rescaling, we use equation (<ref>). The error is calculated as
Error = 1/M∑_i=1^M |ω_i - ω̂_i |,
where ω_i is the exact occupancy probability of the i^th interval and ω̂_i is the empirical estimate. We use 30 equal-width intervals in the range -5 to 5 and run each integrator using 10 different step sizes, equally spaced in log space between 10^-3 and 10^-1. We plot the error as a function of stepsize on a log-log scale, so methods that are first-order weak have gradient one, and methods that are second-order weak gradient two. The results are shown in Figure <ref>.
Figure <ref>(a) confirms the expected orders of weak convergence for the untransformed methods. It is noteworthy that MM has a larger error constant than EM, illustrating how improving strong convergence does not necessarily improve weak convergence. Additionally, although it is well-known that LM is not second-order weak for variable diffusion, these experiments show that it approaches a finite error of approximately 1.5 × 10^-3 in this example - the first demonstration that the method does not converge. Examining Figure <ref>(b), we see that the effect of applying transforms is method dependent. Applying transforms to the EM method results in no appreciable change in the convergence properties whereas transforming SH or LM to constant diffusion recovers second-order behaviour for these methods, the latter becoming equivalent in convergence properties to the more expensive LMVD method. Notice that the time-transformed methods are shifted relative to the Lamperti-transformed methods. This is not surprising since
a fixed stepsize in τ-time is not equivalent to a fixed stepsize in t-time. In the next section, we show how this apparent shift has no consequence for the relative computational efficiency of the two methods.
§.§ Computational Efficiency and Numerical Stability
We evaluate the computational efficiency of each method (excluding untransformed LM, which does not converge) by comparing the cost required to achieve a fixed error, as defined in (<ref>). This assessment is conducted in two stages. Firstly, for each method, we time 10^8 iterations with a fixed step size of h=0.01 and average across 12 independent runs. This provides an estimate of the relative wall clock time per iteration of each integrator. For the methods that use transforms, any additional cost incurred due to reweighting back to estimate the invariant measure (c.f. Equations (<ref>) and (<ref>)) is accounted for in these timings. Secondly, we fix a target error and run trajectories for various step sizes with each method, stopping when the target error is reached. For each step size considered, we average over 6000 independent runs. We then take the minimum over the step sizes to determine the fewest number of iterations needed to first reach the specified error. The total cost is then defined as the minimum number of iterations times the cost per iteration. We repeat the computation of the total cost for 5 target errors, evenly distributed in log space between 10^-3.5 and 10^-3. We normalise by the cost of using the EM method with a target error of 10^-3, which is set to 1 as a reference. The resulting cost-error diagram is illustrated in Figure <ref>. We also evaluate the numerical stability of the methods which we achieve by determining the smallest step size, in logarithmic increments of 10^0.1, that leads to numerical blow-up during a simulation of length T=10^6. The timing results for 10^8 iterations and the stability thresholds for each method are presented in Tables <ref> and <ref>.
Notable in Figure <ref>(a) is how LMVD, the untransformed method with the best convergence properties, is not necessarily the cheapest method - for target error 10^-3, the HLM method is around 5 times more efficient. In Figure <ref>(b) we see that transforming to constant diffusion with LM results in the most efficient method - consistently more efficient than HLM and approximately 5 times more efficient than LMVD for the target errors we examined. Compared with LM, the transforms provide a smaller benefit for the SH method, while they are slightly detrimental for the efficiency of EM. The choice of transform (Lamperti or time-rescaling) is largely immaterial for method efficiency, although there is some evidence that time-rescaling is better for the SH method in this example. Furthermore, examining Table <ref>, we see that time-rescaling does have the benefit of significantly improving numerical stability of SH and LM, which is not the case for the Lamperti transform. This is because the time-rescaled potential is the softer of the two transformed potentials for this particular choice of D(x), see Figure <ref>.
§.§ Error in Finite Time
We consider the weak accuracy of the various methods at finite times. To realise the evolving distribution, we average over 10^7 independent trajectories with initial points drawn from a standard normal distribution. As in Section <ref>, we divide the interval [-5, 5] into 30 equal-width bins and compute the L1 error. We run over t ∈ [0, 4.096]. Since the exact solution is unknown, we use as a reference solution the evolving distribution computed using the SH method with a small step size of 10^-4. This solution is compared with the evolving distributions for step sizes 8 × 10^-3, 1.6 × 10^-2, 2.4 × 10^-2, 3.2 × 10^-2 for each method. The evolution of the finite time errors for the case of step size h=3.2 × 10^-2 are shown in Figure <ref>. The trends in the finite time error at t=4.096 as a function of step size are shown in Figure <ref>.
Several aspects are noteworthy about the finite-time results. First, methods with the best infinite time convergence do not necessarily have the best finite time convergence. Note, for example, the lower finite-time error of SH compared with LMVD in Figure <ref> (a). Second, the time-rescaling transformation can significantly increase finite-time errors, as we see for example with the time-rescaled LM method in Figure <ref> (b). In contrast, the Lamperti transform can in some cases reduce these errors, for example with the Lamperti-transformed SH algorithm in Figure <ref> (b). Overall the SH method has smaller errors than the LMVD method at these small step sizes. Note, however, that this trend reverses for larger step sizes, as shown in Leimkuhler et al. (2014) <cit.>.
§ MULTIVARIATE BROWNIAN MOTION: TRANSFORMS TO UNIT DIFFUSION
We now examine how the Lamperti and time-rescaling transforms generalise to multivariate Brownian dynamics.
§.§ Lamperti Transform
Consider multivariate Brownian dynamics with diffusion tensor
D(X)_ij = D_i(X_i)R_ij,
where R is an invertible, constant matrix with components R_ij. For this class of variable diffusion, we know that a Lamperti transform to unit diffusion exists (c.f. Section <ref>). Nevertheless, the transformed dynamics is not Brownian dynamics for arbitrary invertible R. We prove this in Appendix <ref>. This means that an ergodic theorem similar to (<ref>) is not possible for general R. An ergodic theorem is, however, possible for the special case that R is (a multiple of) the identity matrix. We will show this in two steps, first, by deriving the transformed dynamics and showing that it is Brownian motion, and then using the transformed dynamics to prove an ergodic theorem.
Consider a multivariate Brownian dynamics process X_t following (<ref>), where the diffusion tensor D is defined as:
D(X)_ij = D_i(X_i)δ_ij,
with D_i : ℝ→ℝ. Then, the transformed process Y_t given by:
Y_i,t = √(2kT)∫_x_0^X_i,t1/D_i(x) dx = √(2kT)ϕ_i(X_i,t),
satisfies the constant-diffusion process:
dY_i,t = -∇_Y_iV̂(Y)dt + √(2kT)dW_i,
where V̂(Y) is the effective potential defined as:
V̂(Y) = V(ϕ^-1(Y)) - kT ∑_k=1^n ln D_k (ϕ^-1_k (Y_k,t)).
The map ϕ^-1: ℝ^n →ℝ^n is constructed by applying ϕ_i^-1 individually to each component of its argument, 1 ≤ i ≤ n.
The stated transformation is a multivariate Lamperti transform (<ref>) with
R = 1, f(X) = - D(X)D(X)^T ∇ V(X) + kT div(DD^T)(X), σ(X) = √(2kT)D(X).
From <ref>, the transformed process therefore satisfies
dY_i,t = √(2kT)(-∑_k=1^n(DD^T)_ik∂_k V/√(2kT)D_i + kT ∑_k=1^n∂_k(DD^T)_ik/√(2kT)D_i-1/2√(2kT)∂_i D_i)dt + √(2kT)dW_i,
where ∂_j = ∂/∂ X_j and V, D and D_j are functions of Y_t through the relations
V(X_t) = V(ϕ^-1(Y_t)), D(X_t) = D(ϕ^-1(Y_t)), D(X_j,t) = D(ϕ^-1_j(Y_j,t)).
Substituting D(X)_ij = D_i(X_i)δ_ij, this becomes
dY_i,t = (- D_i ∂_i V+kT∂_i (D_i^2)/D_i-kT ∂_i D_i)dt + √(2kT)dW_i.
Expanding, this simplifies to
dY_i,t = -D_i ∂_i V dt + kT ∂_i D_i dt + √(2kT)dW_i.
Changing variables so that the derivatives are with respect to Y we get
∂/∂ X_i = ∑_j=1^n ∂ Y_j/∂ X_i∂/∂ Y_j = ∑_j=1^n δ_ij/D_i(X_i)∂/∂ Y_j = 1/D_i(X_i)∂/∂ Y_i,
and the transformed equation becomes
dY_i,t = (-∇_Y_i V(ϕ^-1(Y_t)) + kT∇_Y_iln D_i(ϕ^-1_i(Y_i,t)))dt + √(2kT)dW_i,
which we identify as constant-diffusion Brownian dynamics with an effective potential
V̂(Y) = V(ϕ^-1(Y)) - kT ∑_k=1^n ln D_k (ϕ^-1_k (Y_k)).
In constructing this, we have used the fact ∇_Y_iln D_k (ϕ^-1_k (Y_k)) = δ_ik∇_Y_kln D_k (ϕ^-1_k (Y_k)).
With this, we now generalise Theorem <ref> to the multivariate case.
Let f(X) be a Borel-measurable, L1 integrable function. Phase-space averages of f with respect to the canonical distribution ρ(X) of the original Brownian dynamics process X_t can be recovered from the Lamperti-transformed process Y_t = ϕ(X_t) as the simulation time goes to infinity as
∫_ℝ^n f(X)ρ(X) dX = lim_T →∞1/T∫_t=0^T f(ϕ^-1(Y_t)) dt.
We assume that the effective potential (<ref>) is such that geometric ergodicity holds for Y_t. Then, by applying the Birkhoff ergodic theorem to the transformed process, we obtain:
lim_T →∞1/T∫_t=0^T f(Y_t) dt = ∫_ℝ^n f(Y)ρ̂(Y) dY.
Substituting in the effective potential, we have:
= ∫_ℝ^n f(Y)1/Ẑexp(-V(ϕ^-1(Y))/kT)∏_i=1^n (D_i (ϕ^-1_i(Y_i)) )dY
Next, we change variables from Y to X. The Jacobian factor is given by:
J = |dY/dX| = |√(2kT)/D_i(X_i)δ_ij| = (2kT)^n/2∏_i=1^n 1/D_i(X_i),
whose D_i-dependent part exactly cancels with the diffusion coefficients in the integral (the constant factor (2kT)^n/2 is absorbed into the normalisation constant Ẑ below), and we have
= ∫_ℝ^n f(ϕ(X))1/Ẑexp(-V(X)/kT)dX = Z/Ẑ∫_ℝ^n f(ϕ(X))ρ(X)dX.
Choosing f(Y)=1 leads to Ẑ = Z, yielding:
lim_T →∞1/T∫_t=0^T f(Y_t) dt = ∫_ℝ^n f(ϕ(X))ρ(X)dX.
Finally, if we redefine f as f ∘ϕ^-1, then we obtain:
lim_T →∞1/T∫_t=0^T f(ϕ^-1(Y_t)) dt = ∫_ℝ^n f(X)ρ(X)dX,
as required.
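As an illustration of how the ergodic theorem above can be used in practice, the following Python sketch simulates the transformed process with an Euler-Maruyama step in the effective potential and recovers averages of an observable of the original process through ϕ^-1. The diffusion D_i(x) = √(1+x^2) (so that ϕ_i = arcsinh and ϕ_i^-1 = sinh), the quadratic potential, the observable and all parameter values are illustrative assumptions; for clarity we take kT = 1/2 so that √(2kT) = 1 and Y = ϕ(X).

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 0.5                       # chosen so that sqrt(2 kT) = 1 and Y = phi(X)
n, h, n_steps = 2, 1e-3, 50000

def D(x):        return np.sqrt(1.0 + x ** 2)    # illustrative diagonal diffusion D_i(x_i)
def phi_inv(y):  return np.sinh(y)               # inverse of phi_i(x) = arcsinh(x)
def V(x):        return 0.5 * np.sum(x ** 2)     # illustrative potential

def V_hat(y):
    # effective potential V_hat(Y) = V(phi^{-1}(Y)) - kT * sum_k ln D_k(phi^{-1}_k(Y_k))
    x = phi_inv(y)
    return V(x) - kT * np.sum(np.log(D(x)))

def grad_V_hat(y, eps=1e-5):                     # central-difference gradient (sketch only)
    g = np.zeros_like(y)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        g[i] = (V_hat(y + e) - V_hat(y - e)) / (2 * eps)
    return g

y = np.zeros(n)                                  # Euler-Maruyama on the transformed process
f = lambda x: np.sum(x ** 2)                     # observable of the *original* process
acc = 0.0
for _ in range(n_steps):
    y = y - grad_V_hat(y) * h + np.sqrt(2 * kT * h) * rng.standard_normal(n)
    acc += f(phi_inv(y))
print("time average of f:", acc / n_steps)       # estimates int f(X) rho(X) dX
```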
§.§ Time Rescaling
Consider multivariate Brownian dynamics with diffusion tensor
D(X) = D(X)R,
where R is an invertible matrix. For this class of variable diffusion, a time-rescaling to unit diffusion exists (c.f. Section <ref>). In this section, we further prove that it is possible to derive an ergodic theorem similar to Theorem <ref> for this case. We begin by deriving the transformed dynamics and then derive the generalised ergodic theorem.
Consider a multivariate Brownian dynamics process X_t following (<ref>), where the diffusion tensor D is defined as:
D(X) = D(X)R,
where R is an arbitrary invertible matrix and D : ℝ^n →ℝ. Then the time-rescaled process Y_τ given by:
Y_τ = 𝐑^-1X_τ,
where:
dt/dτ = g(X) = 1/D^2(X),
satisfies the constant-diffusion process:
dY_τ = - ∇_YV̂(Y)dt + √(2kT)dW,
where V̂(Y) is the effective potential defined as:
V̂(Y) = V(RY)- 2kT ln D(RY).
The time-rescaling transform follows (<ref>) with f(X)=-D(X)D(X)^T ∇ V(X) + kT div(D(X)D(X)^T), which gives:
dY_τ = - R^-1DD^T∇_X V-kTdiv(DD^T)/D^2dt + √(2kT)dW_τ.
Here, V, D and D are functions of Y_τ through the relations:
V(X_τ) = V(RY_τ), D(X_τ) = D(RY_τ), D(X_τ) = D(RY_τ).
Substituting D(X)=D(X)R, in components this becomes
dY_i, τ = - ∑_j D^2R_ji∂_j V - kT∑_j R_ji∂_j D^2 /D^2dt + √(2kT)dW_i,τ,
which simplifies to
dY_i, τ = ∑_j R_ji(-∂_j V + 2kT ∂_j ln D)dt + √(2kT)dW_i,τ.
Changing variables so that the derivatives are with respect to Y we get
∂/∂ X_i = ∑_j=1^n ∂ Y_j/∂ X_i∂/∂ Y_j = ∑_j=1^n R_ji^-1∂/∂ Y_j.
The R matrix then cancels with its inverse, and the dynamics now reads
dY_τ = (-∇_Y V(RY) + 2kT ∇_Yln D(RY) )dt + √(2kT)dW_τ,
which is constant-diffusion Brownian dynamics in an effective potential V̂(Y) given by
V̂(Y) = V(RY)- 2kT ln D(RY).
This completes the proof.
With this, we now generalise Theorem <ref> to the multivariate case.
Let f(X) be a Borel-measurable, L1 integrable function. Phase-space averages of f(X) with respect to the canonical distribution ρ(X) of the original Brownian dynamics process X_t can be recovered from the time-rescaled process Y_τ = R^-1X_τ as the simulation time goes to infinity as
∫_ℝ^n f(X) ρ(X) dX = lim_T →∞∫_τ=0^T f(RY_τ) g(RY_τ) dτ/∫_τ=0^T g(RY_τ) dτ,
where g(X) = 1/D^2(X).
We begin with the Birkhoff ergodic theorem of the original process, which states
∫_ℝ^n f(X) ρ(X) dX = lim_T →∞1/T∫_t=0^T f(X_t)dt.
To express this in terms of the time-rescaled process, we change the variable t →τ in the integration, resulting in:
lim_T →∞1/T∫_τ=0^τ(T)f(X_τ) dt/dτdτ = lim_T →∞1/T∫_τ=0^τ(T) f(X_τ) g(X_τ) dτ.
By redefining T, we can alternatively write this as
lim_T →∞1/t(T)∫_τ=0^T f(X_τ) g(X_τ) dτ,
where g(X_τ) = 1/D^2(X) by the definition of time rescaling.
Next, we integrate dt/dτ=g(X) from 0 to T to obtain an expression for t(T),
t(T) = ∫_τ=0^T g(X_τ) dτ.
Substituting equation (<ref>) and the relation 𝐗_τ = 𝐑𝐘_τ into equation (<ref>), we have:
∫_ℝ^n f(X) ρ(X) dX = lim_T →∞∫_τ=0^T f(RY_τ) g(RY_τ) dτ/∫_τ=0^T g(RY_τ) dτ,
as required.
Taking f(X) to be an interval function makes it possible to use this result to estimate finite-width integrals of the original invariant distribution.
The proof of Theorem <ref> does not require the assumption of Brownian dynamics. Theorem <ref> therefore holds more generally for SDEs of the form considered in Section <ref>.
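A minimal Python sketch of the reweighted time average in the theorem above is given below for the simplest case R = 1 (so X = Y), with an illustrative isotropic diffusion D(X), an illustrative potential, and Euler-Maruyama integration of the transformed dynamics; none of these choices are taken from the experiments in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
kT, h, n_steps, n = 1.0, 1e-3, 50000, 2

def V(x):  return 0.25 * np.sum(x ** 4) - 0.5 * np.sum(x ** 2)   # illustrative potential
def D(x):  return 1.0 / (1.0 + np.exp(-np.sum(x ** 2)))          # illustrative isotropic D(X)
def g(x):  return 1.0 / D(x) ** 2                                # dt/dtau

def V_hat(y):                                    # effective potential, R = identity
    return V(y) - 2.0 * kT * np.log(D(y))

def grad_V_hat(y, eps=1e-5):                     # central-difference gradient (sketch only)
    gr = np.zeros_like(y)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        gr[i] = (V_hat(y + e) - V_hat(y - e)) / (2 * eps)
    return gr

f = lambda x: np.sum(x ** 2)                     # observable
y = np.zeros(n)
num = den = 0.0
for _ in range(n_steps):
    y = y - grad_V_hat(y) * h + np.sqrt(2 * kT * h) * rng.standard_normal(n)
    w = g(y)                                     # reweighting factor g(R Y_tau)
    num += f(y) * w
    den += w
print("reweighted average of f:", num / den)     # estimates int f(X) rho(X) dX
```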
§.§ Combining Transforms
Time-rescaling and the Lamperti transform can be combined. This allows a multivariate Brownian dynamics process to be transformed into a constant-diffusion process for a wider range of initial diffusion tensors than is possible by applying either transformation in isolation. However, it is important to note that only a very specific combination of time rescaling and the Lamperti transform results in a transformed process that is an instance of Brownian dynamics. Specifically, consider the class of multivariate Brownian dynamics processes 𝐗_t with a diffusion tensor of the form
D(X) = D^(1)(X)D^(2)(X),
where
D^(1)(X) = [ D(X) ; ⋱ ; D(X) ],
and
D^(2)(X) = [ D_1(X_1) ; ⋱ ; D_n(X_n) ].
We show in the Appendix <ref> that X_t can then be transformed to a constant-diffusion Brownian dynamics process Y_τ through a time rescaling followed by a Lamperti transform, represented schematically as
X_t X_τY_τ,
and that the effective potential of the transformed process is given by
V̂(Y) = V(ϕ^-1(Y)) - 2kT lnD(ϕ^-1(Y)) - kT ∑_i=1^n ln D_i(ϕ^-1_i(Y_i)).
Phase-space averages with respect to the original invariant distribution can then be computed from the Y_τ process via
∫_ℝ^n f(X)ρ(X)dX = lim_T →∞∫_0^T f(ϕ^-1(Y_τ))g(ϕ^-1(Y_τ))dτ/∫_0^Tg(ϕ^-1(Y_τ))dτ,
where g(X) = 1/D^2(X).
It is worth noting that a more direct combination of the results from <ref> and <ref> would lead to considering a Brownian dynamics process with diffusion tensor of the form D(X) = D^(1)(X)RD^(2)(X), where R is any invertible matrix. However, applying the time rescaling described in Section <ref> followed by the Lamperti transform described in Section <ref> does not generally result in a transformed process that is an instance of Brownian dynamics (unless R is a multiple of the identity). As a result, it is not possible to derive an ergodic theorem in this case.
§ MULTIVARIATE NUMERICAL EXPERIMENTS
As an example of multivariate Brownian dynamics, consider Stokes-Einstein diffusion which models the diffusion of a low concentration of non-interacting, spherical particles suspended in a fluid. In n dimensions, each particle obeys multivariate Brownian dynamics with the Stokes-Einstein diffusion tensor
D_SE = k_B T/6 πη r1_n,
where T is Kelvin temperature and η denotes the viscosity. If the temperature field or the fluid's material properties are non-homogeneous, then the diffusion tensor D_SE(X) is position dependent. Furthermore, the viscosity and temperature are functionally related through a constitutive relation η(X) = η(T(X), a(X)), where a(X) is a set of possibly position-dependent material properties whose values and functional relationship with η(X) depend on the details of the specific fluid model. The general nature of the Stokes-Einstein model lends itself to widespread applications, particularly in materials science <cit.> and in the modelling of water diffusion in biological tissues, which has medical applications in diffusion-tensor MRI imaging <cit.>. To account for coordinate-dependent diffusion anisotropy, the Stokes-Einstein diffusion model can be generalised to D(X)=D_SE(X)D^(2)(X), where D^(2)(X) is of the functional form given by equation (<ref>). This kind of generalised diffusion can occur in protein transport in biological tissues, where the diffusion anisotropy derives from the matrix of actin filaments. Note that this is also of the form (<ref>), and so this process can be transformed to constant-diffusion Brownian dynamics as described in Section <ref>.
As a non-trivial example of Stokes-Einstein diffusion, we will consider multivariate Brownian dynamics in a 2D quadruple-well potential given by
V(x,y) = √(17/16 -2x^2 + x^4) + √(17/16 - 2y^2 + y^4),
with a Stokes-Einstein diffusion tensor given by the Moro-Cardin tensor <cit.>
D(x, y) = (1 + A exp(-(x^2 + y^2)/(2 σ^2)))^-1 1,
where A=5 and σ=0.3, see Figure <ref>. Since this diffusion tensor is isotropic, it can be mapped to constant diffusion through time rescaling. Figure <ref> illustrates the comparison of weak convergence to the invariant measure for the LM, EM, SH, and HLM integrators. In Figure <ref> (a), the comparison is shown without any transforms, while in Figure <ref> (b), a time-rescaling transform to constant diffusion is applied. We follow the same general approach as first outlined in Section <ref>. We run trajectories of length T = 5 × 10^6, average over 12 independent runs, and run each integrator using 10 different step sizes, equally spaced in log-space between 10^-2.5 and 10^-0.5. For histogram computation, we use a 30 × 30 grid of equal-width square bins covering the domain [-3, 3] × [-3, 3] in the x-y plane.
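For concreteness, a Python sketch of one workflow from this experiment is given below: the quadruple-well potential and Moro-Cardin diffusion from the equations above, time rescaling to constant diffusion, integration with a Leimkuhler-Matthews-type step, and reweighting of the resulting histogram by g = 1/D^2. The value kT = 1, the numerical gradient, and the step counts are illustrative assumptions rather than the settings used for the figures.

```python
import numpy as np

rng = np.random.default_rng(2)
kT, A, sig = 1.0, 5.0, 0.3          # kT is an assumption; A, sigma as in the text
h, n_steps = 1e-2, 200000

def V(p):                            # 2D quadruple-well potential
    x, y = p
    return np.sqrt(17/16 - 2*x**2 + x**4) + np.sqrt(17/16 - 2*y**2 + y**4)

def D(p):                            # Moro-Cardin isotropic diffusion coefficient
    return 1.0 / (1.0 + A * np.exp(-np.sum(p**2) / (2 * sig**2)))

def V_hat(p):                        # effective potential after time rescaling
    return V(p) - 2 * kT * np.log(D(p))

def grad(fun, p, eps=1e-6):          # central-difference gradient (sketch only)
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = eps
        g[i] = (fun(p + e) - fun(p - e)) / (2 * eps)
    return g

p = np.zeros(2)
R_old = rng.standard_normal(2)
traj, weights = [], []
for _ in range(n_steps):             # Leimkuhler-Matthews step on the transformed dynamics
    R_new = rng.standard_normal(2)
    p = p - h * grad(V_hat, p) + np.sqrt(2 * kT * h) * 0.5 * (R_old + R_new)
    R_old = R_new
    traj.append(p.copy())
    weights.append(1.0 / D(p) ** 2)  # g = dt/dtau, used to reweight tau-time averages

traj, weights = np.array(traj), np.array(weights)
H, xe, ye = np.histogram2d(traj[:, 0], traj[:, 1], bins=30,
                           range=[[-3, 3], [-3, 3]], weights=weights)
H = H / H.sum()                      # approximates the original invariant measure
```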
We observe similar behavior to the 1D numerical experiments discussed in Section <ref>. It is noteworthy that applying a time-rescaling transform enhances the convergence rate for both the SH and LM integrators. Just like in the 1D case, the transformed LM integrator exhibits a lower error constant compared to SH, indicating its superior efficiency for this particular problem.
§ CONCLUSIONS
In this study, we demonstrated the effectiveness of applying transforms to achieve constant diffusion in Brownian dynamics. We have explored two options: a global coordinate transform (Lamperti transform) and a time-rescaling transform, examining them theoretically and numerically for both one-dimensional and multivariate Brownian dynamics problems.
We showed how both methods are applicable to one-dimensional Brownian dynamics, irrespective of the diffusion coefficient. However, an important observation is that the transformed potentials resulting from these methods differ, and so the choice of method may be significant when numerical stability is a concern. Through numerical experiments, we demonstrated that applying these transforms can improve convergence to the invariant measure, especially when combined with the Leimkuhler-Matthews method. Notably, in one dimension the computational efficiency gains were the same, regardless of using the Lamperti or time-rescaling transforms, and the most efficient transformed method was approximately five times more efficient than the Limit Method with Variable Diffusion, a more traditional second-order integrator for multiplicative noise. This approach also significantly outperformed the Euler-Maruyama method, having computational efficiency a factor of 10 to 25 times higher for target errors in the range 10^-3.5 to 10^-3. The two types of transformation are not, however, equivalent when it comes to finite time errors; time-rescaling increases these errors whereas the Lamperti transform does not. Crucially, the transformed integrator methods that we examined require only one force and one diffusion tensor evaluation per iteration and therefore scale better to high dimensional problems than competing methods that require multiple force and/or diffusion evaluations per step.
When considering multivariate Brownian dynamics, both the Lamperti and time-rescaling transforms have certain restrictions. The Lamperti transform works for any diagonal diffusion tensor where each diagonal entry depends solely on its corresponding state variable, and the time-rescaling transform is applicable to diagonal diffusion tensors that are locally isotropic. We demonstrated how these two transformations can be combined to transform a class of non-homogeneous, anisotropic Stokes-Einstein diffusion into a constant diffusion process. This particular class of diffusion tensor holds potential applications in biophysical modeling.
§ SUPPLEMENTARY MATERIAL
§ EXAMINING GENERAL LAMPERTI TRANSFORMATIONS
When the Lamperti transform, as defined by Equation (<ref>), is applied to the multivariate Brownian dynamics X_t, it does not necessarily produce another instance of multivariate Brownian dynamics for arbitrary invertible matrix R. We prove this below, showing in particular that if R is diagonal then the transformed process is Brownian dynamics.
Consider a multivariate Brownian dynamics process X_t following (<ref>), where the diffusion tensor D is defined as:
D(X)_ij = D_i(X_i)R_ij,
with D_i : ℝ→ℝ and 𝐑 an invertible matrix. Then the transformed process Y_t, given by
Y_i,t = √(2kT)∑_j=1^n R_ij^-1∫_x_0^X_j,t1/D_j(x) dx = √(2kT)∑_j=1^n R_ij^-1ϕ_j(X_j,t),
is an instance of Brownian dynamics if and only if the matrix M with components
M_ij = ∑_k=1^n (R_ij^-1R_jkR_jkR_jj^-1) + R_jiR_jj^-1 - R_ij^-1R_jj^-1,
is diagonal. Here, R_ij^-1 is the (i,j) component of the inverse matrix R^-1. In particular, this is true if R is diagonal.
The stated transformation is a multivariate Lamperti transform (<ref>) with
f(X) = - D(X)D(X)^T ∇ V(X) + kT div(DD^T)(X), σ(X) = √(2kT)D(X).
The transformed process therefore satisfies
dY_i,t = ∑_j=1^n R^-1_ij√(2kT)(-∑_k=1^n(DD^T)_jk∂_k V/√(2kT)D_j + kT ∑_k=1^n∂_k(DD^T)_jk/√(2kT)D_j-1/2√(2kT)∂_j D_j)dt + √(2kT)dW_i,
where ∂_j = ∂/∂ X_j and V, D and D_j are functions of Y_t through the relations
V(X_t) = V(ϕ^-1(RY_t)), D(X_t) = D(ϕ^-1(RY_t)), D(X_j,t) = D(ϕ^-1_j((RY)_j,t)).
Substituting D(X)_ij = D_i(X_i)R_ij, this becomes
dY_i,t = ∑_j,k,l=1^n R_ij^-1(- R_jkR_klD_jD_k ∂_k V/D_j+kTR_jlR_kl∂_k (D_j D_k)/D_j)dt-kT ∑_j=1^n R_ij^-1∂_j D_jdt + √(2kT)dW_i.
Expanding,
dY_i,t = ∑_k=1^n -R_kiD_k ∂_k V dt + kT(∑_j,k,l=1^n R_ij^-1R_jlR_kl(∂_k D_j D_k/D_j + ∂_k D_k) - ∑_j=1^n R_ij^-1∂_j D_j)dt + √(2kT)dW_i.
Noting that ∂_k D_j = δ_kj∂_j D_j, this becomes
dY_i,t = ∑_k=1^n -R_kiD_k ∂_k V dt + kT ( ∑_j,l R_ij^-1R_jlR_jl∂_k D_k + ∑_k R_ki∂_k D_k - ∑_j=1^n R_ij^-1∂_j D_j)dt + √(2kT)dW_i.
Changing variables so that the derivatives are with respect to Y we get
∂/∂ X_k = ∑_l=1^n ∂ Y_l/∂ X_k∂/∂ Y_l = ∑_l=1^n R^-1_lk/D_k(X_k)∂/∂ Y_l,
and the transformed equation becomes
dY_i,t = -∂_i V dt + kT ∑_k=1^n ( ( ∑_l=1^n R_ik^-1R_klR_klR_kk^-1) + R_kiR_kk^-1 + R_ik^-1R_kk^-1) ∇_Y_kln D_k dt + √(2kT)dW_i,
or equivalently:
dY_i,t = -∂_i V dt + kT ∑_k=1^n M_ik∇_Y_kln D_k dt + √(2kT)dW_i,
where M is the matrix defined in the theorem statement.
Note that only if the matrix M is diagonal is it possible to express the drift term as a gradient of a potential. In this case, we identify the process as constant-diffusion dynamics with an effective potential
V̂(Y) = V(ϕ^-1(RY)) - kT ∑_i=1^n M_iiln D_i (ϕ^-1_i (RY_i)).
The functions D_i can be arbitrarily scaled in such a manner that M_ii = 1 in equation (<ref>). By doing so, the transformed process becomes equivalent to the case R=I, as discussed in Section <ref>. Notably, Theorem <ref>, which is the ergodic theorem, remains applicable in this context.
§ COMBINING THE LAMPERTI TRANSFORMATION WITH TIME RESCALING
We show how the multivariate Brownian dynamics process, with diffusion tensor given by D(X) = D^(1)(X)D^(2)(X) (c.f. Section <ref>), can be transformed into a constant-diffusion Brownian dynamics process.
Consider a multivariate Brownian dynamics process X_t following (<ref>), where the diffusion tensor D is defined as
D(X) = D^(1)(X)D^(2)(X),
where D^(1) and D^(2) are given by (<ref>) and (<ref>) respectively. Then the transformed process Y_τ = ϕ(X_τ), resulting from a time rescaling where dt/dτ=g(X)=1/D^2(X) followed by a Lamperti transform given by
Y_i,τ = √(2kT)∫_x_0^X_i,τ1/D_i(x)dx = √(2kT)ϕ_i(X_i, τ),
satisfies the constant-diffusion Brownian dynamics process:
dY_i,τ = - ∇_Y_iV̂(Y)dt + √(2kT)dW_i,
where V̂(Y) is the effective potential defined as
V̂(Y) = V(ϕ^-1(Y)) - 2kT lnD(ϕ^-1(Y)) - kT ∑_k=1^n ln D_k (ϕ^-1_k(Y_k)).
The time-rescaling transformation gives
dX_τ = - DD^T ∇_XV - kT div(DD^T)/D^2dt + √(2kT)D^(2)(X)dW_τ.
Applying the Lamperti transform then gives
dY_i, τ = √(2kT)(- D_ijD_kj∂_kV -kT ∂_k (D_ijD_kj)/√(2kT)D^2 D_i - 1/2√(2kT)∂_i D_i)dt + √(2kT)dW_i,
where V(X_τ), D(X_τ), D(X_τ) and D_i(X_τ) are functions of Y_τ through the relation X_τ = ϕ^-1(Y_τ).
Since
D_ij(X) = ∑_k=1^n D^(1)_ik(X)D^(2)_kj(X) = ∑_k=1^n δ_ikδ_kj D(X) D_k(X_k) = D(X)D_i(X_i),
this becomes
dY_i, τ = - D_i ∂_i V dt + kT ∂_i(D^2 D_i^2)/D^2 D_i dt - kT ∂_i D_i dt + √(2kT)dW_i
which simplifies to
dY_i, τ = - D_i ∂_i V dt + 2kT D_i/D∂_i D dt + kT ∂_i D_i dt + √(2kT)dW_i.
Changing variables so that the derivatives are with respect to Y we get
∂/∂ X_k = ∑_l=1^n ∂ Y_l/∂ X_k∂/∂ Y_l = ∑_l=1^n δ_lk/D_k(X_k)∂/∂ Y_l=1/D_k∂/∂ Y_k,
and the transformed equation becomes
dY_i, τ = - ∇_Y_i V dt + 2kT ∇_Y_ilnD dt + kT ∇_Y_ilnD_i dt + √(2kT)dW_i,
which we identify as Brownian motion in an effective potential
V̂(Y) = V(ϕ^-1(Y)) - 2kT lnD(ϕ^-1(Y)) - kT ∑_i=1^n ln D_i(ϕ^-1_i(Y_i)),
as required.
Next we consider multivariate Brownian dynamics process X_t following Equation (<ref>) where the diffusion tensor is given by
D(X) = D^(1)(X)RD^(2)(X)
where D^(1)(X) and D^(2)(X) are diagonal matrices as described in Section <ref>. We perform a time rescaling followed by a Lamperti transform and show that, although the resulting process has constant diffusion, it is not Brownian dynamics.
First, consider the time-rescaled process X_τ, where dt/dτ=1/D^2(X), which obeys the dynamics
dX_τ = -DD^T∇_X V-kTdiv(DD^T)/D^2dt + √(2kT)RD^(2)dW_τ.
Defining a transformed process Y_τ = R^-1X_τ, then applying the multidimensional Itô formula gives
dY_τ = -R^-1(DD^T∇_X V|_RY_τ-kTdiv(DD^T)|_RY_τ/D^2)dt + √(2kT)D^(2)dW_τ,
where
V = V(RY_τ), D = D(RY_τ), D = D(RY).
Finally, we apply a Lamperti transform to remove the noise dependence on D^(2). The transformed process Z_τ= ϕ(Y_τ) then satisfies
dZ_i, τ = - R^-1_ij( D_jkD_lk∂_l V|_Rϕ^-1(Z_τ) - kT ∂_l (D_jkD_lk)|_Rϕ^-1(Z_τ)/D^2D_i)dt + √(2kT)dW_i,τ,
where we have used the Einstein summation convention for sums over repeated indices. Changing variables,
∂/∂ X_k = ∑_l=1^n ∂ Y_l/∂ X_k∂/∂ Y_l = ∑_l=1^n R^-1_lk/D_k(X_k)∂/∂ Y_l
and substituting D_ij = δ_ikδ_ljR_klDD_l this becomes
dZ_i, τ = - R^-1_ij( δ_jmδ_nkδ_lpδ_qkR_mnR_pqD^2 D_nD_q R^-1_rl∇_Y_r V - kT R^-1_rl∇_Y_r (δ_jmδ_nkδ_lpδ_qkR_mnR_pqD^2 D_nD_q)/D^2D_iD_l)dt + √(2kT)dW_i,τ,
which simplifies to
dZ_i, τ = - R^-1_ij( R_jkR_lkD^2 D^2_k R^-1_rl∇_Y_r V - kT R^-1_rl∇_Y_r (R_jkR_lkD^2 D^2_k)/D^2D_iD_l)dt + √(2kT)dW_i,τ,
dZ_i, τ(no sum i)= - ( R_liD^2 D^2_i R^-1_rl∇_Y_r V - kT R^-1_rlR_li∇_Y_r (D^2 D^2_i)/D^2D_iD_l)dt + √(2kT)dW_i,τ.
Expanding this expression does not lead to a great simplification of terms. In particular, the non-vanishing of the R matrices means that the drift term cannot be written as a gradient of a potential and hence this is not Brownian dynamics.
|
http://arxiv.org/abs/2307.01671v1
|
20230704120458
|
Eigen Value Statistics of Long-Term Monthly Average Temperature of Meghalaya, India
|
[
"Raju Kalita",
"Atul Saxena"
] |
physics.ao-ph
|
[
"physics.ao-ph"
] |
Eigen Value Statistics of Long-Term Monthly Average Temperature of Meghalaya, India
Raju Kalita[[email protected]], and Atul Saxena
Department of Physics, North-Eastern Hill University, Shillong-22, India
We use Random Matrix Theory (RMT) to describe the eigenvalue spacing of Meghalaya's historical monthly average temperature (T_avg) in grids. For that, the Nearest Neighbor Spacings (S_i) of the eigenvalues of the correlation matrices were found out for 1428 consecutive eigenvalue pair differences. It is found that the distribution of S_i follows Brody distribution at a correlation value of β=0.045. This value of β(0.045) indicates weak repulsion among the eigenvalues as it is closer to Poisson fluctuations, meaning there is a weak correlation among the grids.
§ INTRODUCTION
The theory of the Random Matrix is quite successful in understanding the amount of correlation in different time series. It was Eugene P. Wigner who first applied the technique of random matrix theory to model the nuclei of heavy atoms <cit.>. Since then, it has been used remarkably in many multivariate data sets like financial <cit.>, human electroencephalographic <cit.>, city transport <cit.>, internet traffic <cit.>, atmospheric data <cit.>, sea surface temperature <cit.>, etc. The statistical properties of random matrix ensembles such as Gaussian Orthogonal (GOE), Gaussian Unitary (GUE), and Gaussian Symplectic (GSE) have been studied extensively by pioneers like Wigner, Dyson, Mehta, etc. <cit.>. The main advantage of this theory is that it can correctly describe the spectral statistics of various complex, chaotic systems <cit.>.
Moreover, the spectral properties of the correlation matrices arising from the random matrix can separate signals from noise. The short-range correlations are mainly observed by studying the Nearest Neighbour Spacing Distributions (NNSD) of eigenvalues arising from the correlation matrices <cit.>. Since the NNSD of eigenvalues of the correlation matrices gives the nature of correlation, using RMT, their different modes of randomness can be predicted.
This paper shows that the empirical correlation matrices arising from the half-degree latitude-longitude T_avg grids over Meghalaya can be modeled as random matrices chosen from an appropriate ensemble.
§ STUDY AREA AND DATA USED
The area under study covers almost the entire state of Meghalaya, located in the North-Eastern part of India (Fig. 1(a)). The hilly terrain of Meghalaya mainly comprises three regions: Khasi Hills (central region), Jaintia Hills (eastern part), and Garo Hills (western part). It lies between 25.00°N and 26.10°N latitude and 89.45°E and 92.45°E longitude, covering an area of 22,549 square km <cit.> (Fig. 1(b)).
The data set for monthly average temperature has been extracted from 0.5°× 0.5° latitude-longitude grid boxes of CRU TS 4.04 over Meghalaya <cit.> using the Google Earth interface. Grids are sorted from left top to right bottom in a logical sequence (Fig. 2). Data for 10 out of 11 grids from 1901 to 2019 were arranged in matrix form in such a way that the first matrix, for January 1901, has five values (grid no 1 to 5) in one row (center latitude: 25.75°N; center longitude: 90.25°E, 90.75°E, 91.25°E, 91.75°E, 92.25°E) and the remaining five values (grid no 6 to 10) in the second row (center latitude: 25.25°N; center longitude: 90.25°E, 90.75°E, 91.25°E, 91.75°E, 92.25°E).
§ CONSTRUCTION AND EVALUATION OF RANDOM MATRICES
The RMT framework defines the grid system as an ensemble matrix W_2× 5 with random inputs. This random matrix W contains each month's data of the 10 time series X_j(k), where j = 1,2,...,10 (grid position) and k=1,2,...,1428 (month number in ascending order). Since there are 1428 months from January 1901 to December 2019, each random matrix W corresponds to a particular month of a particular year. Then each correlation matrix C_2× 2 is constructed from the multivariate random matrix W of two rows and five columns as,
C_ij=1/5∑_k=1^5x_i(k)x_j(k)
where x_i(k) corresponds to the transpose of matrix W, and x_j(k) corresponds to matrix W. With λ_i the eigenvalues and ν⃗_i the corresponding eigenvectors, the correlation matrix satisfies
Cν⃗_i=λ_iν⃗_i
The largest eigenvalue of each correlation matrix is then sorted as λ_1 ≤λ_2 ≤λ_3....... ≤λ_1428, with their increasing size. Now the distribution of these eigenvalues is closely related to the amount of correlation in the random inputs of the multivariate data set <cit.>. The Nearest Neighbor Spacings S_i were then found out as
S_i=λ_i+1-λ_i/<λ_i+1-λ_i>
where i=1,2,...,1427 and <λ_i+1-λ_i> denotes the average over the consecutive eigenvalue pair differences. Studies have shown that the resulting probability distribution is well described by the Brody distribution <cit.>.
P(S_i)=[Γ(2+β/1+β)]^(1+β)(1+β) S_i^β e^-[Γ(2+β/1+β)]^(1+β)S_i^(1+β)
Where Γ(x) is the Gamma function. The parameter β in the above distribution classifies the correlation in the system with respect to its probability distribution. When there is no correlation, the spacing of levels is very close and β→ 0 and leads to Poisson distribution given by,
P(S_i)=e^-S_i
However, when a correlation is present, then the level repels each other and β→ 1, and this leads to GOE fluctuations given by,
P(S_i)=π/2S_ie^-π/4S_i^2
This Poisson to GOE fluctuation gives the measure of correlation in the system of the multivariate data set <cit.>.
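The construction above can be summarised in a short Python sketch. The CRU grid data are not reproduced here, so a synthetic stand-in array of the same shape is used, and the Brody parameter is fitted to the spacing histogram by least squares (an implementation choice, not necessarily the fitting procedure used in this paper).

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
data = rng.standard_normal((1428, 2, 5))      # stand-in for the 1428 monthly 2x5 grids W

# largest eigenvalue of C = (1/5) W W^T for each month
eigs = np.sort([np.linalg.eigvalsh(W @ W.T / 5.0).max() for W in data])

# nearest-neighbour spacings, normalised by the mean gap
gaps = np.diff(eigs)
S = gaps / gaps.mean()

def brody(s, beta):                           # Brody distribution P(S)
    b = gamma((2 + beta) / (1 + beta)) ** (1 + beta)
    return b * (1 + beta) * s ** beta * np.exp(-b * s ** (1 + beta))

hist, edges = np.histogram(S, bins=50, density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
beta_fit, _ = curve_fit(brody, centres, hist, p0=[0.1], bounds=(0.0, 1.0))
print("Brody parameter beta =", float(beta_fit[0]))
```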
§ RESULT AND DISCUSSION
After extracting the eigenvalues from the random correlation matrices C_ij, their distribution is plotted with a non-parametric fit (Fig. 3). It is observed that most of the eigenvalues lie on the higher side. This indicates uniformity between successive eigenvalues, as a result of which neighbouring eigenvalues are likely to reside close to each other.
To find the Nearest Neighbour Spacing Distribution (NNSD), we plot the non-parametric histogram fitting of S_i (Fig. 4 [blue line]). After that, the best fit is adjusted using equation (4) and is obtained at the Brody parameter, β=0.045. This value of β indicates a fluctuation near to Poisson distribution. This means that though the level spacing repulsion is very small, it shows a very weak correlation among half-degree temperature grids of Meghalaya.
The analysis of 119 years of CRU TS v4.04 T_avg data in the RMT framework reveals that the half-degree grids over Meghalaya are weakly correlated. The NNSD shows fluctuations closer to Poisson than to the GOE ensemble (Fig. 5). Thus, in the present work, we could describe the empirical spacing distribution with an ensemble of random matrices that follows the Brody distribution at β=0.045, which indicates a weak random fluctuation in the average temperature over Meghalaya throughout the period 1901 to 2019.
|
http://arxiv.org/abs/2307.00240v1
|
20230701060222
|
VesselMorph: Domain-Generalized Retinal Vessel Segmentation via Shape-Aware Representation
|
[
"Dewei Hu",
"Hao Li",
"Han Liu",
"Xing Yao",
"Jiacheng Wang",
"Ipek Oguz"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Department of Electrical and Computer Engineering, Vanderbilt UniversityDepartment of Computer Science, Vanderbilt University
[email protected]
VesselMorph: Domain-Generalized Retinal Vessel Segmentation via Shape-Aware Representation
Dewei Hu1Hao Li1 Han Liu2 Xing Yao2 Jiacheng Wang2 Ipek Oguz12
August 1, 2023
==========================================================================================
Due to the absence of a single standardized imaging protocol, domain shift between data acquired from different sites is an inherent property of medical images and has become a major obstacle for large-scale deployment of learning-based algorithms. For retinal vessel images, domain shift usually presents as the variation of intensity, contrast and resolution, while the basic tubular shape of vessels remains unaffected. Thus, taking advantage of such domain-invariant morphological features can greatly improve the generalizability of deep models. In this study, we propose a method named VesselMorph which generalizes the 2D retinal vessel segmentation task by synthesizing a shape-aware representation. Inspired by the traditional Frangi filter and the diffusion tensor imaging literature, we introduce a Hessian-based bipolar tensor field to depict the morphology of the vessels so that the shape information is taken into account. We map the intensity image and the tensor field to a latent space for feature extraction. Then we fuse the two latent representations via a weight-balancing trick and feed the result to a segmentation network. We evaluate on six public datasets of fundus and OCT angiography images from diverse patient populations. VesselMorph achieves superior generalization performance compared with competing methods in different domain shift scenarios.
§ INTRODUCTION
Medical images suffer from the distribution shift caused by the discrepancy in imaging acquisition protocols. Images can appear in different contrast, resolution and range of intensity values, even within the same modality. A set of examples is shown in Fig. <ref>. This obstacle severely impedes the learning-based algorithms reaching clinical adoption. Therefore, much effort has been spent on solving the domain generalization (DG) problem so that the deep models can robustly work on out-of-distribution (OOD) data. There are three major types of solutions: data augmentation <cit.>, meta-learning <cit.> and domain alignment <cit.>. The first two strategies aim to improve the model's generalizability by either augmenting the source domain with additional data or replicating the exposure to OOD data during training. In contrast, the domain alignment strives to align the distribution of the target domains in either image <cit.> or feature space <cit.>.
We propose a novel method, VesselMorph, to improve the DG performance by providing an explicit description of the domain-agnostic shape features as auxiliary training material. Even though traditional algorithms are outperformed by their learning-based counterparts in many aspects, they can typically better generalize to any dataset, regardless of distribution shifts. Specifically for vessel segmentation, Frangi et al. <cit.> proposed a Hessian-based model to express the tubular shape of vessels which can be regarded as a domain-invariant feature. Merging the Hessian-based shape description <cit.> with the principles of diffusion tensor imaging (DTI) <cit.>, we introduce a bipolar tensor field (BTF) to explicitly represent the vessel shape by a tensor at each pixel. To effectively merge the features in the intensity image and the shape descriptor BTF, we employ a full-resolution feature extraction network to obtain an interpretable representation in the latent space from both inputs. This technique is broadly used in unsupervised segmentation <cit.> and representation disentanglement <cit.>.
As shown in Fig. <ref>, let 𝐱 be the input image and Ψ(𝐱) the corresponding BTF. D(E^I(·)) and D(E^S(·)) are two feature extraction networks with a shared decoder D. We empirically observe that the intensity representation 𝐳^I can precisely delineate thinner vessels while the structure representation 𝐳^S works better on thick ones. We combine the strengths of the two pathways for a robust DG performance. The two latent images are fused by a weight-balancing trick Γ(𝐳^I,𝐳^S) to avoid any potential bias induced by the selection of source domains. Finally, we train a segmentation network D^T on the fused latent images. We compare the performance of VesselMorph to other DG approaches on four public datasets that represent various distribution shift conditions, and show that VesselMorph has superior performance in most OOD domains. Our contributions are:
- A Hessian-based bipolar tensor field (BTF) that provides an explicit description of the vessel morphology (Sec. <ref>).
- A full-resolution feature extraction network that generates vessel representation from both the intensity image and the BTF (Sec. <ref>).
- A training pipeline that generates stable latent images for both pathways and a weight-balancing method to fuse the two representations (Sec. <ref>).
- A comprehensive evaluation on public datasets which shows superior cross-resolution and cross-modality generalization performance (Sec. <ref>).
§ METHODS
§.§ Bipolar Tensor Field
Unlike ML models, our visual interpretation of vessels is rarely affected by data distribution shifts. Mimicking the human vessel recognition can thus help address the DG problem. In addition to intensity values, human perception of vessels also depends on the local contrast and the correlation in a neighborhood, which is often well described by the local Hessian. Inspired by the use of DTI to depict the white matter tracts, we create a Hessian-based bipolar tensor field to represent the morphology of vessels. Given a 2D input image 𝐱∈ℝ^h× w and scale σ, the classical Frangi vesselness 𝒱(σ) <cit.> is defined as:
𝒱(σ) =
0, if λ_2>0,
exp(-ℛ_B^2/(2β^2))[1-exp(-S^2/(2c^2))], otherwise.
Here, λ_1, λ_2 are the sorted eigenvalues of the Hessian ℋ, ℛ_B=λ_1/λ_2, S is the Frobenius norm of the Hessian (ℋ_F), β=0.5 and c=0.5. Note that we assume vessels are brighter than the background; fundus images are negated to comply. To represent vessels of different sizes, we leverage the multiscale vesselness filter that uses the optimal scale σ^* for the Hessian ℋ(𝐱_ij,σ) at each pixel (i,j). This is achieved by grid search in the range [σ_min,σ_max] to maximize the vesselness 𝒱(σ), i.e.,
σ^* = _σ_min≤σ≤σ_max𝒱(σ).
Then the optimized Hessian is represented by a 2× 2 matrix:
ℋ(𝐱_ij,σ^*)=(σ^*)^2𝐱_ij∗∇^2 G(𝐱_ij,σ^*)
where G(𝐱_ij,σ^*) is a 2D Gaussian kernel with standard deviation σ^*. Then we apply the eigen decomposition to obtain the eigenvalues λ_1, λ_2 (|λ_1|≤|λ_2|) and the corresponding eigenvectors 𝐯_1, 𝐯_2 at the optimal σ^*.
Instead of solely analyzing the signs and magnitudes of the Hessian eigenvalues as in the traditional Frangi filter, we propose to leverage the eigenvectors along with custom-designed magnitudes to create our tensor field as shown in Fig. <ref>(Left). The core idea of Frangi filter is to enhance the tubular structure by matching the vessel diameter with the distance between the two zero crossings in the second order derivative of Gaussian ( 2√(2)σ^*). However, the solution is not guaranteed to land in range [σ_min,σ_max], especially for small vessels. Consequently, we observe that the inaccurate estimation of σ^* results in a blurring effect at the vessel boundary, which is problematic for segmentation. As an example in Fig. <ref>(Left), the direction of 𝐯_1 at p_2 aligns with that at p_1, even though p_1 is inside the vessel while p_2 is in the background but close to the boundary. This makes it difficult for the vector orientations alone to differentiate points inside and outside the vessel. To tackle this, we introduce the idea of a bipolar tensor by assigning a large magnitude to the orthogonal eigenvector 𝐯_2 to points in the background, as shown in the blue dashed ellipse. Specifically, we define the magnitudes α_1 and α_2 associated with the eigenvectors 𝐯_1 and 𝐯_2 as:
α_1 = P(𝐱≤𝐱_ij)_bright exp(-ϵλ_1^2/ℋ_F^2)_vessel-like,   α_2 = P(𝐱>𝐱_ij)_dark exp(-ϵλ_2^2/ℋ_F^2)_vessel-like,
where P(𝐱 > 𝐱_ij) is the probability that the intensity of a random pixel x in the image is greater than 𝐱_ij. This is equivalent to normalizing the histogram by the factor hw and computing the cumulative distribution function at 𝐱_ij. This term thus provides a normalized brightness function in the range [0,1]. The exponential term represents how vessel-like the voxel is by using a normalized eigenvalue, and is in the [0,1] range as well. ϵ is a constant that controls the sensitivity, which is empirically set to 0.5. With the custom magnitudes α_1 and α_2, the two poles can better differentiate vessels from the background. Fig. <ref>(Right) is an example of BTF on an OCTA image. In practice, we stack the two vectors as the input to the structural encoding network, i.e., Ψ(𝐱_ij)=[α_1𝐯_1^⊤,α_2𝐯_2^⊤]^⊤∈ℝ^4× 1.
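A possible NumPy/SciPy sketch of this construction is given below. The scale range, the closed-form eigendecomposition of the 2x2 Hessian, the use of intensity ranks for the cumulative brightness term, and the handling of degenerate (near-zero) Hessians are implementation assumptions; the sketch follows the equations above rather than the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import rankdata

def bipolar_tensor_field(img, sigmas=(1, 2, 3, 4), beta=0.5, c=0.5, eps=0.5):
    """Sketch of Psi(x): a (4, h, w) bipolar tensor field for bright, tubular vessels."""
    h, w = img.shape
    best_V = np.full((h, w), -1.0)
    Hb = np.zeros((3, h, w))                    # best-scale Hxx, Hxy, Hyy per pixel
    for s in sigmas:                            # grid search over sigma for the Frangi vesselness
        Hxx = s**2 * gaussian_filter(img, s, order=(2, 0))
        Hyy = s**2 * gaussian_filter(img, s, order=(0, 2))
        Hxy = s**2 * gaussian_filter(img, s, order=(1, 1))
        tmp = np.sqrt(((Hxx - Hyy) * 0.5)**2 + Hxy**2)
        lo, hi = (Hxx + Hyy) * 0.5 - tmp, (Hxx + Hyy) * 0.5 + tmp
        lam2 = np.where(np.abs(hi) >= np.abs(lo), hi, lo)    # |lam1| <= |lam2|
        lam1 = np.where(np.abs(hi) >= np.abs(lo), lo, hi)
        S2 = Hxx**2 + 2 * Hxy**2 + Hyy**2                     # Frobenius norm squared
        Ves = np.exp(-(lam1 / (np.abs(lam2) + 1e-12))**2 / (2 * beta**2)) \
              * (1 - np.exp(-S2 / (2 * c**2)))
        Ves = np.where(lam2 > 0, 0.0, Ves)
        upd = Ves > best_V
        best_V = np.where(upd, Ves, best_V)
        for i, H in enumerate((Hxx, Hxy, Hyy)):
            Hb[i] = np.where(upd, H, Hb[i])

    Hxx, Hxy, Hyy = Hb                          # Hessian at the per-pixel optimal scale
    tmp = np.sqrt(((Hxx - Hyy) * 0.5)**2 + Hxy**2)
    lo, hi = (Hxx + Hyy) * 0.5 - tmp, (Hxx + Hyy) * 0.5 + tmp
    big = np.abs(hi) >= np.abs(lo)
    lam1, lam2 = np.where(big, lo, hi), np.where(big, hi, lo)

    def unit_eigvec(lam):                       # eigenvector (Hxy, lam - Hxx); degenerate
        v = np.stack([Hxy, lam - Hxx])          # pixels (Hxy ~ 0, lam ~ Hxx) stay near zero
        return v / (np.linalg.norm(v, axis=0) + 1e-12)

    v1, v2 = unit_eigvec(lam1), unit_eigvec(lam2)
    P_le = rankdata(img.ravel(), method="average").reshape(h, w) / (h * w)  # ~ P(x <= x_ij)
    nF2 = Hxx**2 + 2 * Hxy**2 + Hyy**2 + 1e-12
    a1 = P_le * np.exp(-eps * lam1**2 / nF2)          # bright x vessel-like
    a2 = (1.0 - P_le) * np.exp(-eps * lam2**2 / nF2)  # dark x vessel-like
    return np.concatenate([a1 * v1, a2 * v2], axis=0)  # Psi(x), shape (4, h, w)
```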
§.§ Latent Vessel Representation
Preserving the spatial resolution for the bottleneck of models with U-Net backbone is a common strategy to emphasize the structural features in unsupervised segmentation <cit.> and representation disentanglement <cit.>. We employ a network that has a full-resolution (h× w pixels) latent space as the feature extraction model. We propose to extract vessel structure from both the intensity image 𝐱∈ℝ^h× w and its corresponding BTF, Ψ(𝐱)∈ℝ^4× h× w. Therefore, in Fig. <ref>, the intensity D(E^I(·)) and structure D(E^S(·)) encoding pathways share the decoder D, and the latent images 𝐳^I,𝐳^S ∈ℝ^h× w. To distribute more workload on the encoder, D has a shallower architecture and will be discarded in testing. For the intensity encoding, the model is optimized by minimizing the segmentation loss function defined as the combination of cross-entropy and Dice loss:
ℒ_seg=-1/N∑_n=1^N𝐲_nlog𝐲̂^I_n + (1-2∑_n=1^N 𝐲_n𝐲^I_n/∑_n=1^N 𝐲_n^2+(𝐲̂^I_n)^2)
where N=h× w is the total number of pixels in the image, 𝐲 is the ground truth and 𝐲̂^I is the prediction from the training-only decoder D. Although there is no explicit constraint on the latent image E^I(𝐱)=𝐳^I, we note that the segmentation-based supervision encourages it to include the vessels while most other irrelevant features are filtered out. Hence, we can view the latent feature as a vessel representation.
Our approach is slightly different for the structure encoding as we observe that it is hard for the feature extraction network to generate a stable latent image that is free of artifacts when the number of input channels is greater than 1. Thus, it is necessary to use E^I as a teacher model that provides direct supervision on the vessel representation. In other words, we first train the intensity encoding path to get E^I and D, then train the E^S by leveraging both the segmentation loss in Eq. <ref> and a similarity loss defined as:
ℒ_sim(𝐳^S,𝐳^I) = ∑_n=1^N‖𝐳^S_n-𝐳^I_n‖_1 + SSIM(𝐳^S, 𝐳^I)
which is a weighted sum of L_1 norm and structural similarity loss SSIM <cit.>.
SSIM is defined as
SSIM(A, B) = 2(2μ_Aμ_B+c_1)(2σ_AB+c_2)/(μ_A^2+μ_B^2+c_1)(σ_A^2+σ_B^2+c_2),
where μ and σ represent the mean and standard deviation of the image, and we set c_1=0.01 and c_2=0.03. The overall loss function for the structural encoding is thus
ℒ(Ψ(𝐱),𝐲) = ω_1 ℒ_seg(𝐲̂^S,𝐲)+ω_2 ℒ_sim(𝐳^S,𝐳^I),
with empirically determined weights ω_1=1, ω_2=5. Experimentally, we found that the 𝐳^I is good at preserving small vessels, while 𝐳^S works better on larger ones.
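A minimal PyTorch sketch of these two objectives is given below, for a single-channel binary vessel map. The sigmoid/binary formulation, the global (unwindowed) SSIM, the 1 - SSIM form of the structural term (so that the loss decreases as similarity increases), and the weighting are assumptions for illustration and may differ from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def seg_loss(logits, target, eps=1e-6):
    """Cross-entropy + Dice loss (L_seg) for a 1-channel vessel probability map."""
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    dice = 1 - (2 * (prob * target).sum() + eps) / ((prob ** 2 + target ** 2).sum() + eps)
    return ce + dice

def sim_loss(z_s, z_i, c1=0.01, c2=0.03):
    """L1 + SSIM-based similarity between structure and intensity representations."""
    l1 = (z_s - z_i).abs().mean()
    mu_s, mu_i = z_s.mean(), z_i.mean()
    var_s, var_i = z_s.var(), z_i.var()
    cov = ((z_s - mu_s) * (z_i - mu_i)).mean()
    ssim = ((2 * mu_s * mu_i + c1) * (2 * cov + c2)
            / ((mu_s ** 2 + mu_i ** 2 + c1) * (var_s + var_i + c2)))
    return l1 + (1 - ssim)          # 1 - SSIM so that higher similarity lowers the loss

def structure_loss(logits_s, target, z_s, z_i, w1=1.0, w2=5.0):
    """Overall objective for the structure-encoding pathway."""
    return w1 * seg_loss(logits_s, target) + w2 * sim_loss(z_s, z_i)
```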
§.§ Fusion of Vessel Representations
Given the two synthesized vessel representations 𝐳^I and 𝐳^S, we need to introduce a fusion method to take advantage of both intensity and structure features. Naively stacking these two channels as input to the segmentation network is prone to inducing bias: if 𝐳^I is consistently better for images from the source domain, then the downstream task model D^T would learn to downplay the contribution of 𝐳^S due to this biased training data. As a result, despite its potential to improve performance, 𝐳^S would be hindered from making a significant contribution to the target domain during testing. To circumvent this issue, we propose a simple weight-balancing trick. As illustrated in Fig. <ref>, we randomly swap some patches between the two latent images so that D^T does not exclusively consider the feature from a single channel, even for biased training data. This trick is feasible because 𝐳^S and 𝐳^I are in the same intensity range, due to the similarity constraints applied in Eq. <ref>. Thus the input to D^T is 𝐱 = Γ(𝐳^I,𝐳^S), where 𝐱∈ℝ^2× h× w. The loss function leveraged for D^T is the same as Eq. <ref>.
The complete algorithm for training VesselMorph is shown in Algorithm <ref>. Briefly, we first train the intensity encoder E^I as it is easier to generate a stable vessel representation 𝐳^I. Then a structure encoder E^S is trained with the supervision of the ground truth and the teacher model E^I so that an auxiliary representation 𝐳^S is extracted from the structural descriptor BTF. The last step is to train a segmentation network D^T with the fusion of the two vessel maps Γ(𝐳^I,𝐳^S). During testing, the patch-swapping is no longer needed, so we simply concatenate E^I(𝐱) and E^S(Ψ(𝐱)) as the input to D^T.
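The weight-balancing fusion Γ can be sketched in a few lines of PyTorch; the patch size and swap probability below are illustrative choices, not values reported in the paper.

```python
import torch

def fuse_with_patch_swap(z_i, z_s, patch=32, swap_prob=0.5):
    """Gamma(z_I, z_S): randomly swap square patches between the two latent vessel maps,
    then stack them as a 2-channel input to the segmentation network D^T."""
    z_i, z_s = z_i.clone(), z_s.clone()              # (B, 1, H, W) latent images
    _, _, h, w = z_i.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if torch.rand(()) < swap_prob:
                tmp = z_i[:, :, y:y + patch, x:x + patch].clone()
                z_i[:, :, y:y + patch, x:x + patch] = z_s[:, :, y:y + patch, x:x + patch]
                z_s[:, :, y:y + patch, x:x + patch] = tmp
    return torch.cat([z_i, z_s], dim=1)              # (B, 2, H, W)

# at test time no swapping is applied:
# x_test = torch.cat([z_i, z_s], dim=1)
```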
§ EXPERIMENTS
§.§ Experimental Settings
Datasets.The 6 publicly available datasets used in this study are listed in Table <ref>. Since there are more labeled fundus data available, we set up a source domain 𝒮 that includes three fundus datasets: DRIVE, STARE and the control subjects in ARIA. In the target domain 𝒯, we test the performance of the model under three different conditions: pathology (diabetic/AMD subjects in ARIA), resolution change (HRF) and cross-modality (OCTA500 and ROSE).
Compared methods. We pick one representative algorithm from each of the three major categories of DG approaches (Sec. <ref>) as a competing method. For data augmentation, we implement BigAug <cit.>. For meta-learning, we use the MASF <cit.> model. For domain alignment, we use the domain regularization network <cit.>. In addition, we also include VFT <cit.> which proposes the idea of shape description for DG. The baseline model is a vanilla residual U-Net trained on 𝒮, and the oracle model is the same network trained directly on each target domain to represent the optimal performance. Note that for a fair comparison, we set the baseline model to have a bit more parameters than D(E^I(·)) (7.4× 10^5:6.7× 10^5).
Implementation Details. We use the residual U-Net structure for E^I, D and D^T. To take advantage of the tensor field, the structure encoder E^S is equipped with parallel transformer blocks with different window sizes as proposed in <cit.>. All networks are trained and tested on an NVIDIA RTX 2080TI 11GB GPU. We use a batch size of 5 and train for 100 epochs. We use the Adam optimizer with the initial learning rate η_E^I = η_E^S = 5× 10^-4, η_D^T = 1× 10^-3, decayed by 0.5 for every 3 epochs. For fundus images, we use the green channel as network input 𝐱. The intensity values are normalized to [0,1].
§.§ Results
Fig. <ref> shows a qualitative ablation study: it illustrates that the intensity representation 𝐳^I may miss large vessels in the very high-resolution HRF images, while 𝐳^S remains robust. In contrast, 𝐳^I provides sharper delineation for very thin vessels in ROSE. The fusion of both pathways outperforms either pathway for most scenarios. These observations are further supported by the quantitative ablation study in Fig.<ref>. We note that 𝐳^S and 𝐳^I can be used as synthetic angiograms that provide both enhanced vessel visualization and model interpretability.
Fig. <ref> shows the t-SNE plots <cit.> of the datasets. The distribution gaps between datasets are greatly reduced for the two latent vessel representations.
Table <ref> compares all methods on the target domain 𝒯. For the diseased ARIA data, all methods show comparable performance and are not significantly different from the baseline. VesselMorph has the best OOD outcome for both cross-modality (dark gray) and cross-resolution (light gray) scenarios, except the OCTA500 dataset where VFT, MASF and VesselMorph perform similarly. The results of VFT and VesselMorph prove the value of the shape information.
§ CONCLUSION
In this work, we propose to solve the DG problem by explicitly modeling the domain-agnostic tubular vessel shape with a bipolar tensor field which connects traditional algorithms with deep learning. We extract vessel representation from both intensity and BTF, then fuse the information from the two pathways so that the segmentation network can better exploit both types of description. Our VesselMorph model provides significant quantitative improvement on Dice score across a variety of domain shift conditions, and its latent images offer enhanced vessel visualization and interpretability.
Acknowledgements. Anonymous.
|
http://arxiv.org/abs/2307.01542v1
|
20230704075355
|
Mitigating the Learning Bias towards Repetition by Self-Contrastive Training for Open-Ended Generation
|
[
"Jian Guan",
"Minlie Huang"
] |
cs.CL
|
[
"cs.CL"
] |
Mitigating the Learning Bias towards Repetition by Self-Contrastive Training for Open-Ended Generation
Jian Guan and Minlie Huang
=======================================================================================================
Despite the huge progress in a myriad of generation tasks, pretrained language models (LMs) such as GPT2 still tend to generate repetitive texts with maximization-based decoding algorithms for open-ended generation.
We attribute their overestimation of token-level repetition probabilities to the learning bias: LMs capture simple repetitive patterns faster with the MLE loss. We propose self-contrastive training to penalize the output of a premature checkpoint of the same model when it incorrectly predicts repetition, which is shown to mitigate repetition effectively while maintaining fluency on two datasets.
Furthermore, we find that LMs use longer-range dependencies to predict repetitive tokens than non-repetitive ones, which may be
the cause of sentence-level repetition loops[The code is available at <https://github.com/thu-coai/SelfCont>].
§ INTRODUCTION
Existing LMs prefer to generate repetitive texts for open-ended generation with greedy decoding or beam search <cit.>. Even large-scale pretrained LMs such as
GPT3 <cit.> still generate redundant sentences <cit.>. Despite many solutions proposed from the perspective of both training <cit.> and decoding <cit.>, the cause of preference for repetition still needs to be clarified.
By analyzing the training dynamics of LMs regarding (non-)repetitive tokens,
we reveal the learning bias towards repetition: LMs capture simple repetitive patterns first, which dominate the output distribution throughout the input space, and then learn more non-repetitive patterns during training. We show that the repetition problem can be mitigated by only training more steps (i.e., allowing over-fitting), although the coherence with inputs will be impacted. Conversely, when trained insufficiently, LMs will overestimate repetition probabilities even for golden prefixes.
We propose self-contrastive training (SelfCont), which exploits the contrast with a premature checkpoint of the same model by penalizing its output when it incorrectly predicts repetition. Experiments on two datasets show that SelfCont effectively alleviates repetition while maintaining fluency by factoring out the undesired repetition behaviors highlighted by the premature checkpoint.
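To make the idea concrete, the following PyTorch sketch shows one plausible form of such a self-contrastive objective: standard MLE plus an unlikelihood-style penalty on the token predicted by a frozen premature checkpoint, applied only where that prediction is a repetition of the prefix and disagrees with the gold token. This is our reading of the description above (assuming HuggingFace-style causal LMs); the exact objective used in SelfCont is defined in the released code and may differ.

```python
import torch
import torch.nn.functional as F

def selfcont_loss(model, premature, input_ids, alpha=0.5):
    """MLE loss plus a penalty on incorrect repetitions predicted by a frozen
    premature checkpoint of the same model (illustrative sketch, alpha assumed)."""
    labels = input_ids[:, 1:]                                   # next-token targets x_t
    logits = model(input_ids[:, :-1]).logits                    # (B, T-1, V)
    mle = F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))

    with torch.no_grad():                                       # premature checkpoint prediction
        pre_pred = premature(input_ids[:, :-1]).logits.argmax(-1)

    B, T = pre_pred.shape
    rep = torch.zeros_like(pre_pred, dtype=torch.bool)
    for t in range(T):                                          # does the token repeat the prefix?
        rep[:, t] = (input_ids[:, :t + 1] == pre_pred[:, t:t + 1]).any(dim=1)
    bad = rep & (pre_pred != labels)                            # incorrectly predicted repetition

    p_bad = logits.softmax(-1).gather(-1, pre_pred.unsqueeze(-1)).squeeze(-1)
    if bad.any():
        penalty = -torch.log(1 - p_bad[bad] + 1e-6).mean()      # unlikelihood-style term
    else:
        penalty = logits.new_zeros(())
    return mle + alpha * penalty
```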
Besides the above analysis about overestimating token-level repetition probabilities during training,
we also find that
LMs use longer-range dependencies to predict
repetitive tokens than non-repetitive ones. It may explain why LMs tend to fall into repetition loops <cit.>.
The problem may be solved by improving the modeling of long-range dependencies (e.g., increasing model sizes),
which are left to future work.
§ RELATED WORK
Regarding the cause of the repetition problem,
<cit.> theoretically derived bounds of repetition probabilities of the first-order Markov LM, although it is difficult to extend the bounds to general LMs. Another line of works attributed repetition to error accumulation during generation
<cit.>,
while LMs still prefer repetition given golden prefixes.
We divide recent works that alleviate repetition into training- and decoding-based methods:
(1) Training-based Methods. <cit.> proposed unlikelihood training (UL) to reduce the probabilities of repetitive generations. <cit.> and <cit.> further extended the framework at the token and sequence level, respectively. SelfCont focuses on token-level modeling, which is orthogonal to sequence-level methods.
<cit.> adopted additional modules to learn repetition patterns and control repetition explicitly.
(2) Decoding-based Methods.
One straightforward solution to repetition is blocking repetitive n-gram generations <cit.> or penalizing probabilities of repetitive candidates <cit.>. <cit.> selected candidates that maximize the probability difference between different-sized models. Sampling-based decoding methods are also shown to be effective in avoiding repetition, such as temperature sampling <cit.>, Top-k sampling <cit.>, nucleus sampling <cit.>, and typical sampling <cit.>. Although these methods reduce superficial repetition, it is unclear whether they utilize the underlying long-range dependencies to maintain coherence.
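For reference, the two simplest decoding-side remedies mentioned above (a multiplicative repetition penalty on already-generated tokens and n-gram blocking) can be sketched as a single greedy decoding step; the penalty value and n-gram size are illustrative.

```python
import torch

def penalised_greedy_step(logits, generated_ids, penalty=1.2, no_repeat_ngram=3):
    """One greedy step with a multiplicative repetition penalty and n-gram blocking.
    logits: 1D tensor over the vocabulary; generated_ids: list of previously emitted ids."""
    logits = logits.clone()
    for tok in set(generated_ids):                  # repetition penalty on seen tokens
        logits[tok] = logits[tok] / penalty if logits[tok] > 0 else logits[tok] * penalty
    if len(generated_ids) >= no_repeat_ngram - 1:   # block tokens completing a seen n-gram
        prefix = tuple(generated_ids[-(no_repeat_ngram - 1):])
        for i in range(len(generated_ids) - no_repeat_ngram + 1):
            if tuple(generated_ids[i:i + no_repeat_ngram - 1]) == prefix:
                logits[generated_ids[i + no_repeat_ngram - 1]] = float("-inf")
    return int(torch.argmax(logits))
```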
§ EMPIRICAL ANALYSIS
Neural networks (NNs) are highly expressive to approximate arbitrary input-output mappings. Using Fourier analysis, <cit.> showed the
spectral bias of NNs: they learn low-frequency components faster during training, which are less complex and vary globally without local fluctuation.
Our key hypothesis is that simple repetitive patterns may be such low-frequency components and learned by LMs early. In this section, we first formulate LMs (<ref>), and then investigate the training dynamics (<ref>) and the ability to model long-range dependencies (<ref>) of LMs.
§.§ Language Models
LMs aim to fit the mapping x_t = f(x_1:t-1) defined by a training corpus, where x_1:t is a sequence from the corpus.
To this end, they are usually trained by minimizing the following cross-entropy loss:
ℒ =-x_t^T·log[softmax(f_θ(x_1:t-1))],
where x_t∈{0,1}^|𝒱| is the one-hot representation of x_t indicating its index in the vocabulary 𝒱, and f_θ(x_1:t-1)∈ℝ^|𝒱| is the output logits of the LM parameterized by θ. Predictably, with more training steps, argmax(f_θ) is closer to the target function f. Early stopping <cit.> is a commonly used regularization technique to avoid over-fitting, e.g., stopping training when the validation loss reaches the minimum. Since NNs prioritize learning low-complexity components, early stopping may result in unexpected generations. We are inspired to investigate whether simple repetitive patterns in human-written texts are learned first, thus dominating the generations.
§.§ Training Dynamics
We randomly sample 1k sequences containing 512 tokens from the Wikitext-103 dataset <cit.> and train GPT2_ base from scratch for 100 epochs[We use only 1k samples because we expect to over-fit these samples to observe how repetition in generated texts changes with the fitting degree, considering that it will be very time-consuming to fit the whole Wikitext-103 dataset.]. Given a golden prefix x_1:t-1, we regard the model prediction x̂_t=argmax(f_θ(x_1:t-1)) as correct if x̂_t=x_t. We call x_t or x̂_t repetitive if it is included in x_1:t-1, and non-repetitive otherwise.
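These per-position statistics (how often the greedy prediction is a repetition of the prefix, and the prediction accuracy split by whether the gold token is repetitive) can be computed as in the sketch below, assuming a HuggingFace-style causal LM; tokenisation details and batching are simplified.

```python
import torch

@torch.no_grad()
def repetition_stats(model, batch_ids):
    """For golden prefixes, measure how often x_hat_t = argmax f_theta(x_{1:t-1})
    repeats a token already present in the prefix, split by whether the gold
    token x_t is itself repetitive."""
    logits = model(batch_ids[:, :-1]).logits          # (B, T-1, V)
    preds = logits.argmax(-1)                         # x_hat_t
    gold = batch_ids[:, 1:]
    B, T = preds.shape
    pred_rep = torch.zeros_like(preds, dtype=torch.bool)
    gold_rep = torch.zeros_like(preds, dtype=torch.bool)
    for t in range(T):
        prefix = batch_ids[:, :t + 1]                 # x_{1:t}
        pred_rep[:, t] = (prefix == preds[:, t:t + 1]).any(dim=1)
        gold_rep[:, t] = (prefix == gold[:, t:t + 1]).any(dim=1)
    return {
        "pred_repetition_ratio": pred_rep.float().mean().item(),
        "acc_on_repetitive_gold": (preds == gold)[gold_rep].float().mean().item(),
        "acc_on_nonrep_gold": (preds == gold)[~gold_rep].float().mean().item(),
    }
```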
Figure <ref> plots the training curves, revealing the learning bias of the LM: (1) The initially learned components prefer to copy input tokens throughout the input space, as indicated by predicting repetitive tokens at ∼90% of positions for both golden and generated prefixes.
(2) With golden prefixes, at those positions where x_t is repetitive, the LM almost always predicts repetition during training. When x_t is non-repetitive, the LM predicts more non-repetitive tokens with more training steps. The repetition ratio also gradually decreases in model-generated texts.
(3) The token prediction accuracy improves faster when x_t is repetitive, indicating that the LM learns repetitive patterns more easily.
Moreover, we notice that the validation loss rises at the 1,500th step, where the LM predicts many more repetitive tokens than the ground truth. At the end of training, the generated texts have a token repetition ratio closer to the ground truth, but manual inspection finds that their coherence with the inputs is poor due to over-fitting.
Appendix <ref> shows several generation cases.
We further investigate how such learning bias influences the model generation. Figure <ref> plots the ratio of repetitive tokens in model-generated texts against training steps. The details are introduced in <ref>. We observe that the generated texts contain less repetition with more training steps because the model learns more non-repetitive dependencies.
§.§ Modeling Long-Range Dependencies
Figure <ref> (Top) shows that LMs are still able to predict non-repetitive tokens conditioned on golden prefixes. However, it is still unclear why they get into repetition loops during generation and do not generate any non-repetitive tokens.
To shed light on this behavior, we further investigate how LMs learn and utilize long-range dependencies. We fine-tune GPT2_ base on the training set of Wikitext-103 and examine the effect of the prefix length on the perplexity of tokens that have appeared in the previous 250 tokens (called repetitive) or not, on both the original test set and model-generated texts.
Figure <ref> indicates (1) The LM only learns
dependencies within ∼100 tokens overall. When the prefix length is larger than 100, the perplexity on golden tokens no longer drops significantly (p⩾0.05).
(2) The LM learns and utilizes longer-range dependencies to predict repetitive tokens than non-repetitive ones.
The perplexity on golden repetitive/non-repetitive tokens plateaus
when the prefix length is larger than 160/50, respectively. The case is similar for generated texts. (3) The LM uses short-range contexts to predict non-repetitive tokens regardless of decoding algorithms. Contexts beyond 100 tokens hardly help predict non-repetitive tokens,
implying sampling-based decoding reduces repetition through randomness instead of using long-range dependencies.
Based on the above observations, we conjecture that LMs keep repeating the same sentence under maximization-based decoding <cit.> because they rarely learn long-range non-repetitive patterns beyond the sentence level. When generating long texts, LMs may struggle to remain non-repetitive over a long range.
To test the idea, we train GPT2_ base from scratch on three datasets constructed from the training set of Wikitext-103: (1) 𝒟_ original, where examples are directly sampled from the original training set;
(2) 𝒟_ random, where each example contains 30 randomly sampled sentences; (3) 𝒟_ norept, where each example also contains 30 random sentences, but there is at most one token overlapping
between any adjacent 5 sentences (generally the period “.”).
Each dataset consists of 20k examples.
We then generate texts using greedy decoding conditioned on the first 50 tokens in the original test set and compute the ratio of texts which fall into loops <cit.>.
As shown in Table <ref>, compared to 𝒟_ original, the LM trained on 𝒟_ random has higher repetition ratios because it learns shorter-range non-repetitive patterns only within one sentence. Besides, although sentences in each 𝒟_ random example are unrelated, they can contain repetitive tokens[The ratios of tokens that have appeared in previous 128 tokens are 12.52% and 32.05%
for the training sets of 𝒟_ original and 𝒟_ random,
respectively. 𝒟_ random has even more repetition than 𝒟_ original, possibly because randomly sampled sentences repeat high-frequency words more often than consecutive human-written sentences do.], making the LM learn spurious long-range repetitive patterns and fall into repetition loops. In contrast, the LM trained on 𝒟_ norept rarely gets into loops since it learns both repetitive and non-repetitive patterns almost entirely within one sentence. Specifically, any adjacent five sentences in each 𝒟_ norept example are unrelated and hardly share tokens. These findings empirically support our hypothesis. Appendix <ref> shows more details.
§ SELF-CONTRASTIVE TRAINING
We denote the premature checkpoint as f_θ_0,
which frequently predicts repetitive tokens.
Formally, the SelfCont algorithm is formulated
as follows:
f_θ =f_θ_1+sg(wf_θ_0),
w =λ1(x_t∉x_1:t-1)1(x̂_t∈ x_1:t-1)
x̂_t =argmax(f_θ_0(x_1:t-1)),
where sg(·) means stopping back-propagation of gradients, λ is a tunable hyper-parameter to control the extent of repetition penalty, and 1 is the indicator function. f_θ_1 is the target LM initialized from f_θ_0, and we
optimize f_θ using Eq. <ref> until the validation loss converges to the minimum.
The gradient for each token u∈𝒱 has changed to:
∇_uℒ= exp(f_θ_1|_u)/∑_v∈𝒱w_v,uexp(f_θ_1|_v)-1(u=x_t),
w_v,u= exp(w(f_θ_0|_v-f_θ_0|_u)),
where f_θ_1|_u is the output of f_θ_1 at the u-th dimension.
If w is 0, w_v,u is always 1 and ∇_uℒ degenerates to that of the vanilla LM. If w is not 0 and u is not x_t, tokens with high logits under f_θ_0 receive larger gradients than under the vanilla LM, since w_v,u is smaller than 1 for most v.
As for u=x_t (w≠0), it may also be penalized with a positive gradient if f_θ_0|_u is large enough, which usually means a dull token.
By penalizing components that excessively prefer repetitive or dull tokens highlighted by f_θ_0,
f_θ_1 can utilize more complex patterns learned later to generate texts.
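To make the penalty concrete, consider a position where the indicator in Eq. <ref> is active (the target x_t is non-repetitive while the premature checkpoint predicts a repetitive token), and take a repetitive candidate u≠ x_t that f_θ_0 strongly prefers, say f_θ_0|_u-f_θ_0|_v=1 for a typical competitor v (hypothetical numbers for illustration). With w=λ=4 we get w_v,u=exp(-4)≈0.02, so the denominator ∑_v∈𝒱w_v,uexp(f_θ_1|_v) is dominated by the u-term alone (w_u,u=1) and the positive gradient on u approaches its maximum value 1, whereas under the vanilla LM the same term would only be softmax(f_θ_1)|_u. In other words, the more confidently the premature checkpoint prefers a repetitive or dull token, the more strongly that token is pushed down when training f_θ_1.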
§ EXPERIMENTS
Datasets
We conduct experiments on Wikitext-103 <cit.> and WritingPrompts <cit.>. The prompt and story in each WritingPrompts example are concatenated as a sequence. We set the maximum sequence length to 512 and take the first 50 tokens as input to generate the rest. Table <ref> presents the detailed statistics.
Baselines We compare SelfCont to three baselines: MLE, token-level UL <cit.> and ScaleGrad <cit.>. Since SelfCont focuses on token-level modeling, we do not compare it to sentence-level methods that
directly penalize repetition loops, e.g., DITTO <cit.>.
Implementation All baselines are implemented based on GPT2_ base. We set the batch size to 16, the learning rate to 1e-4, and λ in Eq. <ref> to 4.0. For SelfCont, we fine-tune GPT2_ base for one epoch using MLE and take the checkpoint as f_θ_0 for both datasets.
We use different p for different models based on the performance on the validation set. Appendix <ref> shows more details.
Metrics
We use perplexity (PPL) under GPT2_ xl to evaluate fluency, MAUVE <cit.> to measure the similarity between golden and generated distributions, the token repetition ratios (R-l) to measure the ratio of tokens that appear in previous l tokens <cit.>, and distinct (D-n) <cit.> to evaluate the n-gram diversity. The closer scores to the ground truth mean better quality for all metrics.
Results As shown in Table <ref>, SelfCont outperforms the baselines in all metrics using greedy decoding. However, the high R-128 score shows that it can still generate repetition loops due to the inability of small-scale LMs to model long-range dependencies. Using nucleus decoding, we see that different baselines can achieve repetition ratios and diversity similar to the ground truth by tuning p, while SelfCont has better fluency and higher MAUVE scores.
§ CONCLUSION
We present empirical studies on LMs' preference for repetition by analyzing the training dynamics, which highlights their learning bias towards simple repetitive patterns. We propose penalizing the outputs of a premature checkpoint during training, which effectively mitigates repetition while maintaining fluency. We also provide insight into why LMs easily fall into repetition loops by showing their inability to model long-range dependencies.
Sampling-based decoding reduces repetition through randomness rather than by utilizing long-range dependencies. We believe that maximization-based decoding can also generate coherent texts without repetition by improving the modeling of long-range dependencies, which is left to future work.
§ ACKNOWLEDGMENTS
This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2020GQG0005.
§ LIMITATIONS
The limitations of this paper mainly lie in the following aspects: (1) We do not provide any theoretical analysis of the correlation between long-range dependencies and repetition loops, nor solutions to avoid repetition loops with maximization-based decoding. (2) We do not discuss the source of LMs' learning bias, which may be caused by multiple factors, such as the Transformer architecture <cit.>, the MLE loss, or the auto-regressive generation manner. (3) We conduct experiments based on GPT2 due to resource limitations. The conclusions may differ for extra-large LMs (such as GPT3). (4) We do not experiment with RNN-based models, which are also shown to prefer repetition <cit.>.
(5) We do not perform a manual evaluation to compare SelfCont with baselines since we focus on repetition in this paper, which can be evaluated reliably with automatic metrics. Perplexity and MAUVE scores have also been shown to correlate highly with manual evaluation of fluency and overall quality, respectively.
§ DETAILS FOR EMPIRICAL ANALYSIS
§.§ Training Dynamics
Table <ref> shows several cases generated by the LM with greedy decoding at different training steps. We summarize the findings as follows: (1) In the beginning, the LM keeps repeating the high-frequency word “<eos>,” indicating that it does not capture phrase-level dependencies yet. (2) At the 1500th step, the LM first generates a few fluent sentences and then gets stuck into the repetition of “the building,” showing that it learns long-range dependencies conditioned on the golden prefix while the repetitive patterns dominate the probability distributions conditioned on the generated prefix. This case suggests the global tendency towards repetition for out-of-distribution inputs. (3)
At the 6000th step, the LM can generate long, fluent texts without repetition. However, it is difficult for the LM to maintain coherence with inputs due to over-fitting. For example, in the generated first sentence, “she had begun in 1962,” “she” conflicts with “he” in the input.
§.§ Long-Range Dependencies
Observation For the experiment in Figure <ref>, we generate texts with three decoding algorithms conditioned on the first 50 tokens on the test set. Ancestral decoding means directly sampling tokens from the original probability distribution. For nucleus decoding, we set p to 0.9. Figure <ref> shows the performance of GPT2_ large, which shows similar results with GPT2_ base in Figure <ref>.
Verification For the experiment in Table <ref>, we use the same approach to construct the corresponding validation sets of 480 examples for 𝒟_ original, 𝒟_ random and 𝒟_ norept, and train three LMs until the best validation performance. Table <ref> shows several generation cases with greedy decoding.
The LMs trained on 𝒟_ original and 𝒟_ random fall into repetition loops. Although the LM trained on 𝒟_ norept also generates sentences that have previously appeared, it does not get stuck in loops. We further investigate whether the three LMs show the self-reinforcement effect: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence <cit.>. Figure <ref> indicates that the LMs trained on 𝒟_ original and 𝒟_ random show the above effect, while the LM trained on 𝒟_ norept does not. The results suggest that longer-range repetitive patterns, whether true or spurious, bias LMs to fall into repetition loops through the self-reinforcement effect. The LM trained on 𝒟_ norept always generates sentences from a limited set because greedy decoding aims to find the global maxima of the probability distributions, not because of a preference for repetition loops.
§ HYPER-PARAMETERS
We decide the hyper-parameters λ in Eq. <ref> and p for nucleus sampling by searching for the value that makes the R-64 score of generated texts closest to the ground truth on the validation set. We search λ in the range {1.0, 2.0, 3.0, 4.0, 5.0, 6.0}, and p in the range {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. Table <ref> shows the settings of p for different models. As for baselines, we follow the original papers to set α to 1.0 for UL and γ to 0.2 for ScaleGrad.
As for the choice of f_θ_0, we empirically choose the checkpoint after training for one epoch, which allows enough training steps for self-contrastive training. We use the premature checkpoint of the same model instead of other models since different models may have different biases. It costs about 24 hours to train SelfCont on Wikitext-103 (∼10 epochs) or CNN News (∼6 epochs). The results are based on one NVIDIA Tesla V100 (32GB memory) with a single random run.
§ MODELING TOKEN-LEVEL REPETITION
We compare SelfCont with baselines in terms of the performance for modeling token-level repetition. As shown in Table <ref>, SelfCont achieves higher overall accuracy, higher F1 score on non-repetitive tokens, and comparable F1 score on repetitive tokens.
§ CASE STUDY
Table <ref> and Table <ref> show the cases generated by different models on Wikitext-103 with greedy decoding and nucleus decoding, respectively. We see that SelfCont can still get stuck in loops with greedy decoding since it hardly learns longer-range dependencies than standard LMs do. Although sampling helps reduce superficial repetition, it does not utilize the underlying long-range dependencies to maintain long-range coherence. Therefore, it is important to improve the modeling of long-range dependencies to essentially solve the repetition problem in future work.
|
http://arxiv.org/abs/2307.00790v1
|
20230703071329
|
Learning permutation symmetries with gips in R
|
[
"Adam Chojecki",
"Paweł Morgen",
"Bartosz Kołodziejek"
] |
stat.CO
|
[
"stat.CO"
] |
§ INTRODUCTION
The study of hidden structures in the data is one of the biggest challenges in modern mathematical statistics and machine learning <cit.>.
Extracting meaningful information from high-dimensional datasets, where the number of variables p exceeds the number of observations n, poses a significant hurdle due to the curse of dimensionality.
One solution to the problem of an insufficient number of observations relative to the number of variables is to restrict to models with lower dimensionality. Graphical models have been introduced for this purpose <cit.>, where a conditional independence structure (graph Markovian structure) is imposed on the distribution of a random vector. Such structures are conveniently described by graphs and allow for a reduction in the dimensionality of the problem. However, if the graph is not sparse enough (the size of the largest clique still significantly exceeds the sample size), then such a procedure does not allow for a reliable estimation of the covariance matrix. We note that the study of the covariance matrix is the basic way to describe the dependency structure of a random vector and provides a convenient way to quantify the dependencies between variables.
If the data is insufficient and some inference must be performed, one has to propose additional assumptions or restrictions. In such a situation, colored graphical models could be considered, where, in addition to conditional independence, certain equality conditions on the covariance matrix are imposed.
Incorporating such equality conditions in colored graphical models is an example of parameter sharing. This concept allows for a reduction of dimensionality and can effectively incorporate domain knowledge into the model architecture. A notable example of parameter sharing, which possesses these advantages, is the convolution technique <cit.>.
A rich family of such symmetry conditions can be expressed using the language of permutations. This idea was introduced in <cit.> and <cit.>.
In the latter paper, three types of such models (RCOP among them) were introduced to describe situations where some entries of concentration or partial correlation matrices are approximately equal. These equalities can be represented by a colored graph. The RCOP model, apart from the graph Markovian structure, permits additional invariance of the distribution with respect to some permutation subgroup. We say that the distribution of a p-dimensional random vector Z is invariant under permutation subgroup Γ on V={1,…,p} if Z=(Z_i)_i∈ V has the same distribution as (Z_σ(i))_i∈ V for any permutation σ∈Γ, <cit.>. This property is called the permutation symmetry of the distribution of Z and imposes significant symmetry conditions on the model.
The case when the conditional dependency graph is unknown or known to be the complete graph was studied in <cit.>. In that paper, the authors introduced a Bayesian model selection procedure for the case when Z is a Gaussian vector. In other words, by assuming a prior distribution on the parameters, they derived the posterior probability of a specific model. This allows one to find the permutation group under which (most likely) the data is invariant. Not only does this result in dimensionality reduction but also provides a simple and natural interpretability of the results. For example, if the distribution of Z is invariant under swapping its ith and jth entries, then one can say that both Z_i and Z_j play a symmetrical role in the model.
The concept of group invariance finds application in various domains and often leads to improved estimation properties. If the group under which the model is invariant is known, precise convergence rates for the regularized covariance matrix were derived in <cit.>, demonstrating significant statistical advantages in terms of sample complexity. Another noteworthy paper, <cit.>, explores group symmetries for estimating complex covariance matrices in non-Gaussian models, which are invariant under a known permutation subgroup. However, neither of these articles provides guidance on identifying the permutation subgroup when it is unknown, which is typically the case in practical applications.
Identifying the permutation subgroup symmetry can be interpreted as an automated way of extracting expert knowledge from the data. Discovering the underlying symmetries allows for a deeper understanding of the relationships and dependencies between variables, offering insights that may not be apparent through traditional analysis alone. The automated approach reduces the reliance on manual exploration and expert intervention.
In the present paper, we introduce an R package called gips, <cit.>, which implements the model selection procedure described in <cit.>. The gips package, presented in this paper, serves two purposes:
* Discovering hidden permutation symmetries among variables (exploratory analysis).
* Estimating covariance matrix under the assumption of known permutation symmetry.
Both points are limited to the Gaussian setting. To the best of our knowledge, there are currently no other software packages available (in R or any other programming language) that address the topic of finding permutation symmetry. Our approach focuses on zero-mean Gaussian vectors, although the method can be applied to centered data and, if the sample size n is reasonably large, to standardized data as well, see Section <ref>.
Let Z be a Gaussian vector with a known mean. If we assume full symmetry of the model, meaning that the distribution of Z is invariant under any permutation, then the Maximum Likelihood Estimator (MLE) of the covariance matrix requires only a single sample (n_0=1) to exist. Somewhat surprisingly, the same phenomenon applies when the normal sample is invariant under a cyclic subgroup generated by a cycle of length p. While it is natural to consider permutation symmetries alongside conditional independence structures, we follow <cit.> and assume no conditional independencies among the variables. Such an approach already enables a substantial reduction in dimensionality, accompanied by a readily interpretable outcome. The development of the method to incorporate non-trivial graph Markovian structures is a topic for future research, and we will consider expanding the package if a new theory emerges. The first step towards generalizing the theory to homogeneous graphs has already been taken in <cit.>. Additionally, a simple heuristic can be employed to identify non-trivial Markovian structures using our model - see <cit.>, <cit.>, and Section <ref> in this paper.
Although there are no other software packages available for finding permutation symmetries in data, we have made the decision to compare the results of our model with canonical methods commonly used to tackle high-dimensional problems, namely RIDGE and GLASSO estimation and model selection (implemented, for instance, in the R packages huge <cit.> and rags2ridges <cit.>). These methods correspond to estimation with constraints or, conversely, to Bayesian estimation with Gaussian or Laplace priors, respectively <cit.>. We demonstrate that gips is competitive with these widely used approaches in terms of dimensionality reduction properties, and moreover, it offers interpretability of the results in terms of permutation symmetries.
Furthermore, it is worth noting that due to the discrete nature of the problem, we believe that finding permutation symmetry cannot be adequately addressed by penalized likelihood methods, which are generally much faster than Bayesian methods. Although other methods (which do not have available implementations to our best knowledge) allow for model selection within colored graphical Gaussian models, none of them are applicable to permutation invariant models (RCOP models). Compared to other models (such as RCON, RCOR in <cit.>), RCOP models possess a more elegant algebraic description and offer a natural interpretation <cit.>.
The “Replication code” is available at <https://github.com/PrzeChoj/gips_replication_code>.
§.§ Overview of the paper
The paper is organized as follows. The Introduction consists of four subsections. In the next subsection, we present two low-dimensional toy examples that illustrate the use of gips.
In the subsequent subsection, we discuss the potential for successfully exploiting group symmetry in many natural real-life problems.
In the final subsection of the Introduction, we argue that it is both necessary and sufficient to focus on cyclic symmetries, which are more tractable.
Section <ref> provides the necessary methodological background on permutation symmetries and defines the Bayesian model proposed in <cit.>, specialized to cyclic subgroups. We also introduce an MCMC algorithm that allows the estimation of the maximum a posteriori (MAP) within our Bayesian model, and we discuss the issue of centering and standardizing input data.
Section <ref> is dedicated to numerical simulations. We present a high-dimensional example using breast cancer data from . Additionally, we use a heuristic approach from <cit.> for identifying the graphical model invariant under permutation symmetries (RCOP model from <cit.>) and apply this procedure to the real-life example. In the subsequent subsections, we examine the impact of hyperparameters on model selection and compare gips with competing packages that facilitate dimensionality reduction.
Finally, in Section <ref> we draw some conclusions.
An example to Section <ref> is presented in Appendix <ref>.
Mathematical details behind the Bayesian model are relegated to the Appendix <ref>.
§.§ Toy examples
We illustrate the concept of permutation symmetry using the gips package in two simple use cases. These examples demonstrate how permutational symmetry can enhance the data mining process. A similar procedure was successfully applied to the Frets' heads dataset <cit.> and the mathematical marks dataset <cit.>.
In the first example, we use aspirin dataset from the HSAUR2 package. By examining the covariance matrix, we manually choose a reasonable permutation symmetry. Additionally, we employ the gips package to demonstrate that our algorithm generates reasonable estimates.
For the second example, we utilize the oddbooks dataset from the DAAG package. We showcase how one can incorporate expert field knowledge in the analysis. We use the gips to find the permutation symmetry and interpret the result.
A standard PC can execute the entire code in this section within 10 seconds.
§.§.§ Aspirin dataset
This dataset consists of information about a meta-analysis of the efficacy of Aspirin (versus placebo) in preventing death after a myocardial infarct.
We renumber the columns for better readability:
R> data("aspirin", package = "HSAUR2")
R> Z <- aspirin
R> Z[, c(2, 3)] <- Z[, c(3, 2)]
R> names(Z) <- names(Z)[c(1, 3, 2, 4)]
R> head(Z, 4)
dp da tp ta
1 67 49 624 615
2 64 44 771 758
3 126 102 850 832
4 38 32 309 317
Each of the n=7 rows in Z corresponds to a different study, and the p=4 columns represent the following: dp: number of deaths after placebo, da: number of deaths after Aspirin, tp: total number of subjects treated with placebo, ta: total number of subjects treated with Aspirin.
Initially, we calculate the empirical covariance matrix S.
R> n <- nrow(Z)
R> p <- ncol(Z)
R> S <- cov(Z)
Note that since n=7 is greater than p=4, S is the standard MLE of Σ in the (unrestricted) Gaussian model. The heatmap of the S matrix is shown in Figure <ref>.
We observe significant similarities between the empirical covariances of variables tp (column 3) and ta (column 4). They exhibit comparable variances (S[3,3] ≈ S[4,4]), and their covariances with the other variables also show resemblance (S[1,3] ≈ S[1,4] and S[2,3] ≈ S[2,4]).
By definition, the distribution of a random vector Z=(Z_1,Z_2,Z_3,Z_4)^⊤ is invariant under the permutation (3,4) if the distributions of (Z_1,Z_2,Z_3,Z_4)^⊤ and (Z_1, Z_2, Z_4, Z_3)^⊤ coincide. When Z follows a centered Gaussian distribution, this property can be expressed purely in terms of its covariance matrix, leading to the following conditions: Var[Z_3] = Var[Z_4], Cov[Z_1, Z_3] = Cov[Z_1, Z_4], and Cov[Z_2, Z_3] = Cov[Z_2, Z_4].
We observe that the structure of S closely corresponds to that of the covariance matrix of a random vector invariant under the permutation (3,4). By observing that S[1,1] ≈ S[2,2], S[1,3] ≈ S[2,3], and S[1,4] ≈ S[2,4], we can also argue that the data is invariant under the permutation (1,2) or even (1,2)(3,4).
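For instance, one can read these (approximate) equalities directly off S; the following lines simply print the entries being compared (an illustrative check, not a required part of the workflow below):
R> c(S[3, 3], S[4, 4])   # variances of tp and ta
R> c(S[1, 3], S[1, 4])   # covariances of dp with tp and ta
R> c(S[2, 3], S[2, 4])   # covariances of da with tp and ta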
We want to emphasize that such manual exploration becomes infeasible for larger values of p due to the massive number and complexity of possible relationships.
Ad hoc, it is unclear which scenario is preferable (it is natural to compare BIC values, but the MLE does not always exist).
The gips package uses the Bayesian paradigm (described in detail in Section <ref>) to precisely quantify posterior probabilities of considered permutation groups.
The workflow in gips is as follows: first, use the gips() function to define an object of the class `gips` that contains all the necessary information for the model. Next, use the find_MAP() function with an optimizer of your choice to find the permutation that provides the maximum a posteriori estimate. Finally, we use the project_matrix() function to obtain the MLE of the covariance matrix in the invariant model, which will serve as a more stable covariance estimator. The process can be summarized as follows:
R> g <- gips(S, n)
R> g_MAP <- find_MAP(g,
+ optimizer = "BF", show_progress_bar = FALSE,
+ save_all_perms = TRUE, return_probabilities = TRUE
+ )
R> g_MAP
The permutation (1,2)(3,4):
- was found after 24 posteriori calculations;
- is 3.374 times more likely than the () permutation.
According to the output of find_MAP(), the permutation (1,2)(3,4) best reflects the symmetries of the models and is over 3 times more probable (under our Bayesian setting) than the identity permutation (), which corresponds to no symmetry.
The invariance with respect to the permutation (3,4) arises from the fact that the samples of patients treated with aspirin and placebo had similar sizes. On the other hand, the invariance with respect to the permutation (1,2) signifies the lack of aspirin treatment effect. The permutation (1,2)(3,4) corresponds to both of these effects. We emphasize that this study is an exploratory analysis rather than a statistical test.
We can easily calculate probabilities of all symmetries using a built-in function:
R> get_probabilities_from_gips(g_MAP)
(1,2)(3,4) (3,4) (1,2) () (1,4) (1,3)
5.107108e-01 1.695605e-01 1.663982e-01 1.513854e-01 4.341644e-04 4.047690e-04
(2,4) (2,3) (1,3,2,4) (1,3)(2,4) (1,4)(2,3) (1,3,4)
3.797581e-04 3.607292e-04 1.240381e-04 7.410652e-05 7.406484e-05 2.197791e-05
(1,2,4) (1,2,3) (2,3,4) (1,2,4,3) (1,2,3,4)
2.026609e-05 1.813565e-05 1.782315e-05 7.676231e-06 7.528912e-06
or compare two permutations of interest
R> compare_posteriories_of_perms(g_MAP, "(34)")
The permutation (1,2)(3,4) is 3.012 times more likely than the (3,4) permutation.
R> compare_posteriories_of_perms(g_MAP, "(12)")
The permutation (1,2)(3,4) is 3.069 times more likely than the (1,2) permutation.
R> compare_posteriories_of_perms(g_MAP, "()")
The permutation (1,2)(3,4) is 3.374 times more likely than the () permutation.
Note that for p=4, there are p!=24 different permutations, but only 17 distinct symmetries are reported above. This is because some permutations correspond to the same symmetry. More precisely, it is the group generated by a permutation σ and not σ itself that identifies the symmetry. For example σ_1 = (1,2,3) and σ_2 = (1,3,2) generate the same group.
We also note that given the small number of variables (p=4), the space of possible permutation symmetries is also small. Consequently, we were able to compute the exact posterior probabilities of our Bayesian model for every single permutation symmetry. The number of permutation symmetries grows superexponentially with p, e.g. for p=10 its cardinality is approximately 1 million (see OEIS[The On-Line Encyclopedia of Integer Sequences, <https://oeis.org/>.] sequence A051625). Thus, for larger p we recommend using the implemented Metropolis-Hastings algorithm to approximate these probabilities, see Section <ref>.
Assuming that the data actually come from a distribution invariant under the permutation (1,2)(3,4), we can provide a new estimate for the covariance matrix. Formally, we project the matrix S onto the space of positive definite matrices that are invariant under the permutation (1,2)(3,4) (for further details, refer to Section <ref>). In practice, we enforce the desired equalities by averaging.
R> S_projected <- project_matrix(S, g_MAP)
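As a quick sanity check (illustrative only), entries of S_projected lying in the same orbit of the permutation (1,2)(3,4) now coincide exactly, e.g.:
R> all.equal(S_projected[1, 1], S_projected[2, 2])   # TRUE: averaged diagonal entries
R> all.equal(S_projected[1, 3], S_projected[2, 4])   # TRUE: averaged off-diagonal entries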
One can easily plot the found covariance estimator with a line
R> plot(g_MAP, type = "heatmap")
It is shown in Figure <ref> (we made cosmetic modifications to this plot; the exact code is provided in the attached “Replication code”).
The S_projected matrix can now be interpreted as a more stable covariance matrix estimator, see e.g., <cit.>.
§.§.§ Books dataset
This dataset consists of information about thickness (mm), height (cm), width (cm), and weight (g) of 12 books.
R> data("oddbooks", package = "DAAG")
R> head(oddbooks, 4)
thick height breadth weight
1 14 30.5 23.0 1075
2 15 29.1 20.5 940
3 18 27.5 18.5 625
4 23 23.2 15.2 400
We will only consider relationships between the thickness, height, and width.
R> Z <- oddbooks[, c(1, 2, 3)]
One can suspect that books from this dataset were printed with a √(2) aspect ratio, as in the popular A-series paper size. Therefore, we can utilize this domain knowledge in the analysis and unify the data for height and width:
R> Z$height <- Z$height / sqrt(2)
Let us see the standard MLE of the covariance matrix:
R> S <- cov(Z)
We can plot this covariance matrix to see whether we notice any connection between the variables. Figure <ref> was obtained with the code below (we slightly modified this plot; the exact code is provided in the "Replication code"):
R> number_of_observations <- nrow(Z)
R> g <- gips(S, number_of_observations)
R> plot(g, type = "heatmap")
We can see that some entries of S have similar colors, which suggests a lower-dimensional model with equality constraints. In particular, the covariance between thick and height is very similar to the covariance between thick and breadth, and the variance of height is similar to the variance of breadth. These similarities are not surprising, given the data interpretation (after the height rescaling that we did).
Let us examine the posterior probabilities returned by gips:
R> g_MAP <- find_MAP(g,
+ optimizer = "BF", show_progress_bar = FALSE,
+ return_probabilities = TRUE, save_all_perms = TRUE
+ )
R> get_probabilities_from_gips(g_MAP)
(2,3) () (1,3) (1,2,3) (1,2)
5.660781e-01 4.339087e-01 6.728772e-06 4.683290e-06 1.862353e-06
We see that the a posteriori distribution is maximized by a permutation (2,3). The MLE of the covariance matrix in the model invariant under the permutation (2,3) is presented in Figure <ref>.
§.§ Motivation behind permutation symmetries
We argue that it is natural to expect certain symmetries in various applications, which strengthens the need for tools to investigate permutation symmetry within the data.
For example, there are natural symmetries in the data from gene expression. Specifically, the expression of a given gene is triggered by the binding of transcription factors to gene transcription factor binding sites. Transcription factors are proteins produced by other genes, often referred to as regulatory genes. Within the gene network, it is common for multiple genes to be triggered by the same regulatory genes, suggesting that their relative expressions depend on the abundance of the regulatory proteins (i.e., gene expressions) in a similar manner <cit.>. Extracting permutation symmetries can be utilized to identify genes with similar functions or groups of genes with similar interactions or regulatory mechanisms. This approach is particularly useful in unraveling the structures of gene regulatory networks <cit.>.
Furthermore, in examples of social networks, such as those influenced by geographical or social group clusters, additional symmetries must be taken into account, as mentioned in <cit.>. In the study of the human brain's dynamics, it is believed that the left and right hemispheres possess a natural symmetric structure <cit.>.
The discovery of hidden symmetries can greatly contribute to understanding complex mechanisms. Extracting patterns from gene expression profiles can offer valuable insights into gene function and regulatory systems <cit.>. Clustering genes based on their expression profiles can aid in predicting the functions of gene products with unknown purposes and identifying sets of genes regulated by the same mechanism.
§.§ Arbitrary permutation symmetries vs cyclic permutation symmetries
As observed in <cit.>, performing model selection within an arbitrary permutation subgroup is a highly challenging task. This difficulty arises not only due to theoretical reasons but also because of computational complexity issues arising when p is large. Informally speaking, finding the parameters of an arbitrary permutation group becomes virtually impossible for large values of p. In <cit.>, a general model was developed; however, it was specifically applied to cyclic subgroups. Such subgroups are generated by a single permutation, and by restricting the analysis to them, efficient methods can be devised to conduct the model selection procedure. All the technical details regarding these methods will be presented in the subsequent sections.
Furthermore, we argue that cyclic subgroups form a sufficiently rich family, as mentioned in <cit.>. Since these subgroups correspond to simpler symmetries, they are also more easily interpretable. Although our procedure exclusively explores cyclic subgroups, it can still provide valuable information even when the true subgroup is not cyclic, as discussed in <cit.>. In fact, if the posterior probabilities (which are calculated with gips) are high for multiple groups, it is reasonable to expect that the data will exhibit invariance under the group containing those subgroups. We present a simple example in the Appendix <ref>.
§ METHODOLOGICAL BACKGROUND
After providing an informal introduction, let us proceed to define the key concepts and present the theory behind the gips package in a formal manner. Definitions in this section are accompanied by code in the gips package. The running example is for p=5 and n=10. A standard PC can execute all the code in this section within 30 seconds (except for the final chunk of code in Section <ref>, which runs for 2 minutes).
R> p <- 5; n <- 10
§.§ Permutations
Fix p∈{1,2,…}. Let 𝔖_p denote the symmetric group, the set of all permutations on the set V={1,…,p}, with function composition as the group operation.
Each permutation σ∈𝔖_p can be represented in a cyclic form. For example, if σ maps 1 to 2, 2 to 1, and leaves 3 unchanged, then we can express σ as (1,2)(3). It is sometimes convenient to exclude cycles of length 1 from this representation. The identity permutation is denoted as id or (). The number of cycles, denoted C_σ, remains the same across different cyclic representations of σ. It is important to note that C_σ includes cycles of length 1 as well.
We say that a permutation subgroup Γ⊂𝔖_p is cyclic if Γ={σ, σ^2,…,σ^N}=:<σ> for some σ∈𝔖_p, where N is the smallest positive integer such that σ^N=id. Then, N is the order of the subgroup Γ.
If p_i denotes the length of the ith cycle in a cyclic decomposition of σ∈𝔖_p, then N is equal to the least common multiple of p_1,p_2, …, p_C_σ.
If Γ=<σ>, then we say that σ is a generator of Γ. It is worth noting that a cyclic subgroup may have several generators. Specifically, <σ>=⟨σ^k⟩ for all k=1,…,N-1, where k is coprime with N. We identify each cyclic permutation subgroup by its generator, which is the smallest permutation according to lexicographic order.
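To fix these notions, the following base-R sketch (ours, for illustration only; gips itself relies on the permutations package) computes the cycle lengths of a permutation given in one-line notation, and from them C_σ and the order N of <σ> as the least common multiple of the cycle lengths:
R> sigma <- c(2, 1, 4, 5, 3)   # the permutation (1,2)(3,4,5) in one-line notation
R> cycle_lengths <- function(sigma) {
+    p <- length(sigma); visited <- rep(FALSE, p); lens <- integer(0)
+    for (i in seq_len(p)) {
+      if (!visited[i]) {
+        j <- i; len <- 0
+        while (!visited[j]) { visited[j] <- TRUE; j <- sigma[j]; len <- len + 1 }
+        lens <- c(lens, len)
+      }
+    }
+    lens
+  }
R> lens <- cycle_lengths(sigma)   # cycle lengths 2 and 3
R> length(lens)                   # C_sigma = 2
R> gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
R> Reduce(function(a, b) a * b / gcd(a, b), lens)   # N = lcm(2, 3) = 6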
§.§ Permutation symmetry
Let Γ be an arbitrary subgroup of 𝔖_p. We say that the distribution of Z=(Z_i)_i∈ V is invariant under a subgroup Γ if Z has the same distribution as (Z_σ(i))_i∈ V for all σ∈Γ. If Z is a multivariate random variable following a centered Gaussian distribution N_p(0,Σ), then this invariance property can be expressed as a condition on the covariance matrix. Specifically, the distribution of Z is invariant under Γ if and only if for all i,j∈ V:
Σ_ij=Σ_σ(i)σ(j) for all σ∈Γ.
When Γ=𝔖_p, the above conditions imply that all diagonal entries of Σ are the same, and similarly, the off-diagonal entries are the same (see the left panel of Figure <ref>). On the other hand, if Γ is the trivial subgroup, i.e., Γ={id}, then (<ref>) does not impose any restrictions on the entries of Σ. If Γ is non-trivial, the sample size n required for the MLE to exist is lower than p, as discussed in Section <ref>.
Let Sym(p;ℝ) and Sym^+(p;ℝ) denote the space of p× p symmetric matrices and the corresponding cone of positive definite matrices, respectively. For a subgroup Γ⊂𝔖_p, we define the colored space as the space of symmetric matrices invariant under Γ:
𝒵_Γ := {S ∈ Sym(p;ℝ): S_ij = S_σ(i)σ(j) for all σ∈Γ}.
We also define the colored cone of positive definite matrices valued in 𝒵_Γ as:
𝒫_Γ := 𝒵_Γ∩Sym^+(p;ℝ).
The set 𝒫_Γ contains all possible covariance matrices of Gaussian vectors invariant under subgroup Γ.
The dimension of the space 𝒵_Γ corresponds to the number of free parameters in the covariance matrix. The dependence structure of a Gaussian vector is fully described by the covariance matrix Σ. When certain entries of Σ are identical, we refer to them as having the same color. There are colors that correspond to equalities among the diagonal elements of Σ, and there are independent colors that correspond to equalities among the off-diagonal elements of Σ. Thus, in the context of colored models, (𝒵_Γ) can be interpreted as the number of distinct colors.
In gips, we can easily find the number of free parameters in the model invariant under a cyclic subgroup as follows (S is a matrix from the bottom right of Figure <ref>):
R> g <- gips(S, n, perm = "(12345)", was_mean_estimated = FALSE)
R> summary(g)$n_parameters
3
Notice that there were exactly 3 different numbers in the top right of Figure <ref>.
It is important to note that the mapping Γ↦𝒵_Γ is not one-to-one. In particular, for p=3, we have 𝒵_<(1,2,3)> = 𝒵_𝔖_3.
A notable property of cyclic subgroups is that they correspond to different colored spaces. More precisely, if 𝒵_<σ> = 𝒵_<σ'> for some σ,σ'∈𝔖_p, then <σ> = <σ'> <cit.>.
§.§ The MLE in the Gaussian model invariant under permutation symmetry
Let Z^(1),…,Z^(n) be an i.i.d. sample from N_p(0, Σ). The presence of equality restrictions in (<ref>) reduces the number of parameters to estimate in permutation invariant models. Consequently, the sample size required for the MLE of Σ to exist is lower than p for a non-trivial subgroup Γ⊂𝔖_p. Assuming Σ∈𝒫_Γ, where Γ=<σ> is a cyclic subgroup, <cit.> establishes that the MLE of Σ exists if and only if
n ≥ n_0 := C_σ.
In particular, when σ = id, no restrictions are imposed on Σ, and we recover the well-known condition that the sample size n must be greater than or equal to the number of variables p = C_id. However, if σ consists of a single cycle, i.e., C_σ = 1, the MLE always exists. This remarkable observation is crucial in high-dimensional settings.
In gips, we can compute n_0 as follows:
R> g <- gips(S, n, perm = "(12345)", was_mean_estimated = FALSE)
R> summary(g)$n0
1
R> g <- gips(S, n, perm = "()", was_mean_estimated = FALSE)
R> summary(g)$n0
5
If (<ref>) is satisfied, the MLE of Σ is given by
Σ̂ = π_Γ(1/n∑_i=1^n Z^(i)· (Z^(i))^⊤),
where π_Γ denotes the orthogonal projection onto the colored space 𝒵_Γ. It is defined as
π_Γ(X) = 1/#Γ∑_σ∈Γσ· X·σ^⊤,
where each permutation σ is identified with its corresponding permutation matrix. The resulting matrix π_Γ(X) is often referred to as the regularized matrix, since the mapping averages the entries of X that correspond to the same orbits of Γ:
for {i,j}⊂ V define its Γ-orbit by O_ij^Γ={{σ(i),σ(j)}: σ∈Γ}. Then, for any {u,v}∈ O_ij^Γ one has
π_Γ(X)_uv = 1/#O_ij^Γ∑_{k,l}∈ O_ij^Γ X_kl.
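For concreteness, the projection onto 𝒵_<σ> for a cyclic subgroup can be written out in a few lines of base R (an illustrative sketch of the averaging formula above; for the running example it should agree with the project_matrix() call shown next):
R> project_cyclic <- function(X, sigma) {
+    p <- length(sigma)
+    P <- matrix(0, p, p); P[cbind(sigma, seq_len(p))] <- 1   # permutation matrix of sigma
+    Pk <- diag(p); acc <- matrix(0, p, p); N <- 0
+    repeat {   # average over sigma^0, sigma^1, ..., sigma^(N-1)
+      acc <- acc + Pk %*% X %*% t(Pk); N <- N + 1
+      Pk <- P %*% Pk
+      if (all(Pk == diag(p))) break
+    }
+    acc / N
+  }
R> project_cyclic(S, c(2, 3, 4, 5, 1))   # projection of S onto the space invariant under (1,2,3,4,5)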
In gips, the projection π_<σ>(S) of a matrix S onto 𝒵_<σ> is calculated as follows:
R> S_projected <- project_matrix(S, perm)
where perm can be a permutation in a form such as "(12345)", or an object of the `gips` class.
§.§ Bayesian model selection procedure
Now we shift our focus to methods aimed at discovering permutation symmetries in the data. The model introduced in <cit.> is considered. In this model, the multivariate Gaussian sample Z^(1),…, Z^(n) given {K=k, Γ=c} consists of i.i.d. N_p(0,k^-1) random vectors.
Let Γ be a discrete random variable uniformly distributed over the set 𝒞:={<σ>: σ∈𝔖_p} of cyclic subgroups of 𝔖_p. It is assumed that K given {Γ=c} follows the Diaconis-Ylvisaker conjugate prior <cit.> distribution, defined by its density
f_K|Γ=c(k) = 1/I_c(δ,D) Det(k)^(δ-2)/2 e^-(1/2) Tr[D· k] 1_𝒫_c(k),
where δ>1 and D∈𝒫_c are the hyperparameters, and I_c(δ,D) is the normalizing constant.
It was derived in <cit.> that the posterior probability is proportional to
ℙ(Γ=c|Z^(1),…,Z^(n)) ∝ I_c(δ + n, D+U)/I_c(δ,D), c∈𝒞,
where U=∑_i=1^n Z^(i)· (Z^(i))^⊤. In order to utilize (<ref>), it is necessary to calculate or approximate the ratios of the normalizing constants. An efficient method for calculating these constants for cyclic subgroups was introduced in <cit.>. This method relies on the block decomposition of the colored space 𝒵_Γ and is implemented in the gips package. Further technical details are provided in Appendix <ref>.
In gips, one can calculate the quotient on the right-hand side of (<ref>) for c=⟨(12345)⟩ as follows:
R> g <- gips(S, n, perm = "(12345)", was_mean_estimated = FALSE)
R> exp(log_posteriori_of_gips(g))
4.586821e-27
This is a very small number, but keep in mind that the posterior probability of a subgroup c is only proportional to this quantity (not equal to it). One can compare it with other subgroups to get an interpretable result (under our Bayesian setting):
R> compare_posteriories_of_perms(g, "(123)")
The permutation (1,2,3,4,5) is 22.827 times more likely
than the (1,2,3) permutation.
Following the Bayesian paradigm, we work with the maximum a posteriori (MAP) estimator, which corresponds to the cyclic subgroup with the highest posterior probability, i.e., this estimator is defined as
Γ̂ = arg max_c∈𝒞 ℙ(Γ=c|Z^(1),…,Z^(n)).
While the choice of hyperparameters is not scale invariant, it is a common practice in similar models to set δ=3 and D=I_p, <cit.>. The parameter δ=3 serves as the default in our method, but we decided to set D=tr(S)/p· I_p as the default and justify our choice below and in Section <ref>. In gips, one can pass the desired values of these parameters via the delta and D_matrix arguments of the gips() function.
In Section <ref> we consider the influence of δ and d in D = d · I_p. The role of the D parameter turns out to be quite similar to the role of the tuning parameter λ in LASSO methods. In summary, smaller values of d tend to favor big symmetries. Therefore, through such exploratory analysis, users can adjust the parameter d to achieve a model that aligns most meaningfully with their preferences and requirements.
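As a simple way to probe this sensitivity in the running example (an exploratory sketch; the values of d below are arbitrary and not a recommendation), one can refit the model for several d and compare the resulting MAP permutations:
R> for (d in c(0.1, 1, 10)) {
+    g_d <- gips(S, n, D_matrix = d * diag(p), was_mean_estimated = FALSE)
+    print(find_MAP(g_d, optimizer = "BF", show_progress_bar = FALSE))
+  }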
§.§ Searching for a MAP estimator
The quotient (<ref>) enables the numerical evaluation of how well a given permutation symmetry (specifically, a cyclic group generated by a permutation) fits the data. Finding a cyclic subgroup with a high evaluation score is a challenging task for large values of p due to the vast size of the space of potential permutation symmetries.
Recall that 𝒞 is the space of cyclic subgroups of 𝔖_p.
For small values of p (in our investigation, up to 8), it is possible to compute the posterior probabilities (<ref>) for all c∈𝒞 and determine Γ̂ from (<ref>) using exact calculations, i.e., for c∈𝒞 we have
ℙ(Γ=c|Z^(1),…,Z^(n)) = [I_c(δ + n, D+U)/I_c(δ,D)] / [∑_s∈𝒞 I_s(δ + n, D+U)/I_s(δ,D)].
However, the cardinality of 𝒞 grows super-exponentially with p. Specifically, for p=150, the cardinality of 𝒞 is approximately 10^250 (see OEIS[The On-Line Encyclopedia of Integer Sequences, https://oeis.org/.] sequence A051625). This makes it computationally infeasible to calculate the quotients (<ref>) for all c∈𝒞.
To address this challenge, we propose the use of a Markov chain Monte Carlo method. We define an irreducible Markov chain (σ_t)_t that traverses an even larger space, 𝔖_p, and apply the Metropolis-Hastings algorithm to obtain preliminary estimates of the posterior probabilities. Subsequently, taking into account the fact that some permutations generate the same cyclic subgroup, we derive the estimates of the posterior probabilities using (<ref>) below. Through the ergodic theorem, the Metropolis-Hastings algorithm provides statistical guarantees that the estimates will converge to the true values as the number of iterations tends to infinity.
A transposition is a permutation that swaps two elements while leaving the other elements unchanged. In other words, each transposition is of the form (i,j) for some i,j∈ V where i≠ j. Let 𝒯 denote the set of all transpositions.
The algorithm produces a sequence of permutations (σ_t)_t=1^T, and we construct a corresponding sequence of cyclic subgroups (<σ_t>)_t=1^T. The MAP estimator, which corresponds to the cyclic group with the highest posterior probability, is given by
Γ̂ = arg max_c∈(<σ_t>)_t=1^T ℙ(Γ=c|Z^(1),…,Z^(n)) = arg max_c∈(<σ_t>)_t=1^T I_c(δ+n,D+U)/I_c(δ,D).
where the maximum is taken over all permutations visited by the Markov chain constructed in the algorithm.
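To make the proposal mechanism concrete, a single accept/reject step of such a chain can be sketched with the exported log_posteriori_of_gips() function (an illustrative sketch only, using the running example with p = 5; the full algorithm is built into find_MAP()). Here (1,2,3,5) differs from (1,2,3,4,5) by multiplication with a single transposition:
R> g_cur  <- gips(S, n, perm = "(12345)", was_mean_estimated = FALSE)
R> g_prop <- gips(S, n, perm = "(1235)", was_mean_estimated = FALSE)   # a neighbouring permutation
R> log_ratio <- log_posteriori_of_gips(g_prop) - log_posteriori_of_gips(g_cur)
R> accepted <- log(runif(1)) < log_ratio   # Metropolis-Hastings acceptance for a symmetric proposal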
Algorithm <ref> is implemented in the find_MAP() function with the parameter optimizer = "MH":
R> g <- gips(S, n, was_mean_estimated = FALSE)
R> g_MAP_MH_25 <- find_MAP(g, max_iter = 25, optimizer = "MH")
R> g_MAP_MH_25
The permutation (1,2,3,5):
- was found after 25 posteriori calculations;
- is 5.149 times more likely than the () permutation.
The algorithm found a quite long permutation in 25 steps. Always keep in mind that Algorithm <ref> is only approximate. However, if one wants to obtain the true MAP, a brute-force search can be applied:
R> g_MAP_BF <- find_MAP(g, optimizer = "BF")
R> g_MAP_BF
The permutation (1,2,3,4,5):
- was found after 120 posteriori calculations;
- is 33.743 times more likely than the () permutation.
R> compare_posteriories_of_perms(g_MAP_BF, g_MAP_MH_25)
The permutation (1,2,3,4,5) is 6.553 times more likely
than the (1,2,3,5) permutation.
If one is interested in estimating ℙ(Γ=c|Z^(1),…,Z^(n)) for an arbitrary c∈𝒞, the following approach can be used. For a permutation subgroup Γ, let φ(Γ)=Φ(#Γ), where Φ is the Euler totient function, i.e., Φ(n)=#{k∈{1,…,n}: gcd(k,n)=1}. In <cit.>, it is shown that as T→∞ and for c∈𝒞,
π̂_c := (∑_t=1^T 1(<σ_t>=c)/φ(c)) / (∑_t'=1^T 1/φ(<σ_t'>)) ⟶ ℙ(Γ=c|Z^(1),…,Z^(n)) almost surely.
In practice, π̂_c serves as an approximation to ℙ(Γ=c|Z^(1),…,Z^(n)) for large T.
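The reweighting by φ can be mimicked in a few lines of base R (a toy sketch with made-up visit counts, only to illustrate the display above; gips performs this bookkeeping when return_probabilities = TRUE):
R> gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
R> euler_phi <- function(m) sum(sapply(seq_len(m), function(k) gcd(k, m) == 1))
R> visited <- c("(1,2,3,4,5)", "(1,2,3,4,5)", "()", "(1,2)(3,4)")   # hypothetical labels of <sigma_t>
R> orders <- c(5, 5, 1, 2)                                          # the corresponding orders #<sigma_t>
R> w <- 1 / sapply(orders, euler_phi)
R> tapply(w, visited, sum) / sum(w)   # estimated posterior probabilities pi_hat_c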
By default, gips does not save all the computed permutations, but only the best one. One can set the flag save_all_perms = TRUE to get the desired exact distribution:
R> g_MAP_BF_with_probs <- find_MAP(g,
+ optimizer = "BF",
+ save_all_perms = TRUE, return_probabilities = TRUE
+ )
R> head(get_probabilities_from_gips(g_MAP_BF_with_probs), 10)
(1,2,3,4,5) (1,4)(2,3) (1,2)(3,5) (1,3)(4,5) (1,2,4,5,3)
0.13478976 0.05532886 0.05487531 0.04410568 0.03076733
(1,5)(2,4) (1,2,5,3) (1,2,3,4) (1,2,4)(3,5) (1,3,4,5)
0.02969751 0.02928655 0.02692967 0.02638740 0.02501291
If one wants to estimate the distribution, e.g., when p is too large to search through the entire space, one can do exactly the same with the optimizer = "MH":
R> g_MAP_MH_20000 <- find_MAP(g,
+ optimizer = "MH", max_iter = 20000,
+ save_all_perms = TRUE, return_probabilities = TRUE
+ )
R> head(get_probabilities_from_gips(g_MAP_MH_20000), 10)
(1,2,3,4,5) (1,4)(2,3) (1,2)(3,5) (1,3)(4,5) (1,5)(2,4)
0.13832762 0.06678977 0.05718808 0.04167766 0.03102084
(1,2,3,4) (1,2,5,3) (1,2,4,5,3) (1,2,4)(3,5) (1,3,4,5)
0.02970193 0.02964917 0.02930625 0.02653653 0.02495384
We can observe that the estimated probabilities are similar to the true ones.
However, please note that the above code is intended solely to demonstrate convergence, and in practical scenarios, it is unreasonable to execute the function find_MAP(optimizer = "MH", max_iter = my_max_iter) for my_max_iter> p!due to the faster and exact brute-force find_MAP(optimizer = "BF").
§.§ Scaling, centering and standardizing data
We would like to emphasize that the considered model of permutation symmetries is not scale-invariant in the following sense: if Z is invariant under a subgroup, the random vector diag(α)· Z, where α∈ℝ^p, is generally not invariant under any permutation subgroup. Therefore, it is recommended to apply our procedure to data that have comparable scales and to keep all variables in the same units. That is the reason for the division of height by √(2) at the beginning of the Books dataset example in Section <ref>.
It is worth noting that there are many examples of such data, such as gene expression data, where measurements are on the same scale due to being results of experiments of the same type and measured using the same gauges. For further references, see e.g. <cit.>.
However, our model is scale-invariant under common scaling, i.e., if Z is invariant under Γ, then β Z for any β∈ℝ is also invariant under Γ. Our practice shows that choosing β so that β Z has average unit variance often produces good results. Such scaling can be accomplished by choosing the hyperparameter D=tr(S)/p· I_p, where S is the empirical covariance matrix of Z. Note that this is the default value of D_matrix in the gips() function.
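Alternatively, one can rescale a numeric data matrix Z directly so that it has average unit variance (a minimal sketch of the common-scaling convention above; with the default D_matrix this is not necessary):
R> Z_scaled <- Z / sqrt(mean(diag(cov(Z))))   # common scaling so that the average variance equals 1
R> mean(diag(cov(Z_scaled)))                  # equals 1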
While our Bayesian model is designed for a zero-mean Gaussian sample, it can easily be extended to handle samples with arbitrary means. If Z^(1),…,Z^(n) is an i.i.d. sample from N_p(μ,Σ), the user can center the data and take this into account by setting the parameter was_mean_estimated = TRUE in the function gips().
In cases where the sample size n is reasonably large, it is common to assume that the standardized normal sample (which follows a multivariate t-distribution) can be approximated by a Gaussian distribution. Therefore, for large n, one can standardize each variable and apply our model selection procedure to obtain reliable estimates. However, it is important to note that after standardization, the empirical covariance matrix will have a unit diagonal, which may favor cyclic subgroups whose generators consist of a single cycle, as they correspond to matrices with a constant diagonal. This is the reason why we do not recommend standardizing the data when the sample size n is small.
§ PACKAGE STRUCTURE AND USAGE
The package gips is available under the general public license (GPL ≥ 3) from the Comprehensive R Archive Network (CRAN) at <https://CRAN.R-project.org/package=gips>. Its documentation is available as a pkgdown page at <https://przechoj.github.io/gips>
and can be installed and loaded into the current R session using the following code:
R> install.packages("gips")
R> library("gips")
The primary use case of the gips package is to find a permutation subgroup with the maximum a posteriori probability in the Bayesian model introduced in the previous sections and estimate the covariance matrix in the model invariant under this permutation subgroup. Representations and operations on permutations are performed using the permutations package. We decided to use this package due to its ease of transforming permutations and its compactness.
We start the description of the main functions implemented in the proposed package.
The workflow in gips is as follows: first, use the gips() function to define an object of the `gips` class that contains all the necessary information for the model.
R> g <- gips(S, number_of_observations, delta = 3, D_matrix = NULL,
+ was_mean_estimated = TRUE, perm = "")
The parameter S is the p× p empirical covariance matrix and number_of_observations is the corresponding sample size.
If one does not know the theoretical mean of the distribution the data was sampled from, use S = cov(Z), where Z is a number_of_observations × p Gaussian data matrix, and leave the flag was_mean_estimated = TRUE as default. If the theoretical mean is known to be 0, use S = (t(Z) %*% Z) / number_of_observations and set the flag was_mean_estimated = FALSE.
Parameters delta and D_matrix are the hyperparameters of our Bayesian model. The domains of these parameters are the following: delta > 1, and D_matrix has to be a p× p positive definite matrix. The default value of D_matrix is mean(diag(S))*diag(p). The last parameter, perm, is an optional permutation on p elements. It can be of any form that the function permutations::permutation() can handle. This is the starting permutation for the Metropolis-Hastings and hill climbing algorithms.
Next, use the find_MAP() function with an optimizer of your choice to find the permutation that provides the maximum a posteriori estimate.
R> find_MAP(g, max_iter = NA, optimizer = NA, show_progress_bar = TRUE,
+ save_all_perms = FALSE, return_probabilities = FALSE)
The first parameter, g, is an object of the `gips` class. There are three optimizers implemented:
optimizer = "BF": brute-force search (only intended for p≤ 8).
optimizer = "MH": The Metropolis-Hastings algorithm.
optimizer = "HC": The hill climbing algorithm.
The parameter max_iter is the number of steps for the Metropolis-Hastings and the hill climbing algorithms. For the Metropolis-Hastings algorithm, it has to be finite and greater than 2, while for hill climbing it can also be Inf.
The progress bar of the optimization process can be turned on and off by changing the boolean parameter show_progress_bar.
To obtain the entire posterior distribution, the flag return_probabilities has to be set to TRUE. This flag can only be provided when save_all_perms = TRUE, which saves a list of all permutations that were visited during optimization.
In the case of optimizer="BF", the exact posterior probabilities are calculated. For the case optimizer="MH", their estimates are calculated.
One can access these probabilities using the function
R> get_probabilities_from_gips(g)
where g is the optimized `gips` object. If one is interested only in the maximum a posteriori estimate, it is better to set return_probabilities=FALSE in the find_MAP() function.
Finally, to obtain the MLE of the covariance matrix in the invariant model found by find_MAP(), one projects the empirical covariance matrix onto the colored space corresponding to the chosen permutation.
R> project_matrix(S, perm)
The first argument, S, is the p × p covariance matrix that was used in the gips() function. The second argument, perm, is a permutation that describes the cyclic permutation symmetry.
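Putting the three steps together, a minimal end-to-end sketch on toy data could look as follows (our illustration; the permutation passed to project_matrix() should be replaced by the one reported for the optimized object, e.g. by summary()):
R> library("gips")
R> set.seed(1)
R> p <- 6; n <- 40
R> Z <- matrix(rnorm(n * p), ncol = p)
R> S <- cov(Z)
R> g <- gips(S, number_of_observations = n)
R> g_MAP <- find_MAP(g, max_iter = 10, optimizer = "HC",
+ show_progress_bar = FALSE)
R> summary(g_MAP) # reports the MAP permutation
R> S_proj <- project_matrix(S, perm = "(1,2,3)") # replace by the reported permutation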
§.§ Real life example
We obtained the results in this section with AMD EPYC 7413 on a single core, which took 3 hours 45 minutes to compute.
Let us present the capabilities of the gips package using breast cancer data from <cit.>.
Following the approach of <cit.>, we consider a set of p = 150 genes and n = 58 samples with a mutation in the p53 sequence. We numbered the variables alphabetically. Since p > n, only parsimonious models can be fitted at all. The data are available in the GEOquery package from Bioconductor. Code for downloading and minimal preprocessing is available in the “Replication code”.
We stress that the model space to search is very large here. It can be roughly estimated to be of magnitude 10^250.
R> Z <- breast_cancer
R> dim(Z)
[1] 58 150
We observe that we have fewer observations than variables. Let us search for permutation symmetries: we create a `gips` object and run the find_MAP() function on it.
R> S <- cov(Z)
R> p <- ncol(Z)
R> g <- gips(S, 58, D_matrix = diag(p), was_mean_estimated = TRUE)
R> set.seed(2022)
R> g_MAP <- find_MAP(g, max_iter = 150000, optimizer = "MH")
To acquire knowledge about the optimization process, we can call the summary() function on the object of the `gips` class.
R> summary(g_MAP)
The optimized `gips` object.
Permutation:
(1,10,83,61,69,37,137,106)(2,42,19,16,43,49,24,82,34,139,140,52,26,98,17,100,
97,145)(3,9,11,71,120,101,126,76)(4,8,89)(5,148)(6,30,149,107,65,78,60,127)
(7,133,36,95)(12,103,92,146,138,144,84,62,58,77,111,122,66,129,93,59,41,81,35,
64,86,117,63,150,70,75,108)(13,50,57,132,114,22,116,125,74,72,91,90,113,130,
124)(14,110,46,29)(15,51,56,48,53,25,45,119)(18,68,99)(20,79,21)(23,131,27,67,
38,128,147,112,102)(28,73,44,135,105,96,104,39)(31,40,118,115,143)(32,33,123,
134,121,88)(47,109,94,136)(141,142)
Log_posteriori:
3626.114
Times more likely than starting permutation:
7.865e+549
The number of observations:
58
The mean in the `S` matrix was estimated.
Therefore, one degree of freedom was lost.
There are 57 degrees of freedom left.
n0:
25
The number of observations is bigger than n0 for this permutation,
so the gips model based on the found permutation does exist.
The number of free parameters in the covariance matrix:
611
BIC:
8741.694
AIC:
7482.764
——————————————————————————————-
Optimization algorithm:
Metropolis_Hastings
Number of log_posteriori calls:
150000
Optimization time:
1.402607 hours
Acceptance rate:
0.00195333333333333
Log_posteriori calls after the found permutation:
36814
The resulting permutation consists of C_σ = 26 cycles and n_0 = 25. The dimension of the space 𝒵_<σ>, and therefore the number of free parameters in the covariance matrix, is 611.
We can interpret this result as an indication of hidden symmetry in genes and evidence that our procedure can be used as an exploratory tool for finding such
symmetries.
We also carry out the heuristic procedure introduced in <cit.> for finding a graphical model which is invariant under the above symmetry.
We threshold the entries of the partial correlation matrix at the level α = 0.05608621 and construct an undirected graph G = (V, E) with V = {1,…,p} and edge set E ⊂ {{i,j} : i,j ∈ V, i ≠ j} defined by
{i,j} ∈ E ⟺ |k_ij|/√(k_ii k_jj) ≥ α,
where (k_ij) are the entries of the estimated precision matrix K̂ = Σ̂^-1. The constructed dependency graph is depicted in Figure <ref>. The graph is non-decomposable, it has 3324 edges (compared to p(p-1)/2 = 11175 edges in the full graph with p = 150 vertices) and the size of its biggest maximal clique is 21. We found the MLE of the covariance matrix in the corresponding colored graphical model using the ggmfit() function from the gRim package. Note that the maximum likelihood equation for this model can be solved by first taking appropriate averages of the elements of the Wishart matrix (projecting the empirical covariance matrix onto the corresponding colored space) and then solving the equations for the corresponding graphical Gaussian model without symmetry restrictions <cit.>.
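A rough sketch of this thresholding step (our illustration; Sigma_hat stands for the covariance estimate in the invariant model, and the helper names are ours):
R> alpha <- 0.05608621
R> K_hat <- solve(Sigma_hat) # estimated precision matrix
R> pc <- abs(K_hat) / sqrt(diag(K_hat) %o% diag(K_hat))
R> A <- pc >= alpha; diag(A) <- FALSE # adjacency matrix of G
R> sum(A) / 2 # number of edges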
By deleting edges, the number of parameters was reduced from 611 to 271, resulting in a decrease in the log-likelihood from -3130 to -3353. This reduction leads to a lower BIC compared to the former model, decreasing from 8742 to 7807. These findings indicate that the simpler model provides a better description of the data and highlight the relevance of the entire approach.
§.§ Hyperparameter's influence
We obtained the results in this section with AMD EPYC 7413 on 24 cores, which took 4 minutes to compute.
The Bayesian model introduced in Section <ref> depends on two parameters of the a priori distribution, a scalar δ and a matrix D.
In this subsection, we present the effect of these hyperparameters on the a posteriori distribution.
Despite having an explicit formula for the probability (<ref>), it is too complex to allow for direct analysis. Furthermore, this formula inherently depends on the data, which further complicates the study.
Therefore, drawing conclusions about the influence of hyperparameters on the method's outcome is difficult and must be done with caution. The hyperparameters directly influence the shape of the a posteriori distribution and therefore change both the theoretical MAP and the difficulty of the optimization problem, and hence whether the MAP solution is obtainable within a given computational budget.
We consider only the low-dimensional setting p = 8 because only then are we able to efficiently calculate posterior probabilities for all cyclic subgroups. On a standard PC, it takes about 4 minutes to calculate the entire posterior distribution. Moreover, there is no rationale to suggest that the influence of the hyperparameters would be significantly different for larger p. Comparisons are conducted across three different scenarios.
First, we generate an empirical covariance matrix S from the Wishart distribution on Sym(p; ℝ) with the scale parameter I_p and the shape parameter p.
Then, the true covariance matrices for three scenarios are defined as the projections (recall (<ref>)) of S onto the spaces invariant under the following cyclic permutation subgroups:
no structure:<id>,
moderate structure:<(1,2,3,4)>,
large structure:<(1,2,…,8)>.
The number of free parameters (or just the dimension) of the model without structure is p + p(p-1)/2 = 36, while the model with moderate structure has dimension 17, and the model with large structure has dimension 5. For each of these three scenarios, we simulate n = 30 samples from N_p(0, Σ), where Σ is the true covariance matrix for a given scenario (see Figure <ref>).
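A sketch of this construction for the moderate structure scenario (our illustration; it relies only on base R, MASS, and the project_matrix() function described above):
R> set.seed(1)
R> p <- 8
R> W <- rWishart(1, df = p, Sigma = diag(p))[, , 1] # Wishart(I_p, p) draw
R> Sigma_true <- project_matrix(W, "(1,2,3,4)") # projection onto the colored space
R> Z <- MASS::mvrnorm(n = 30, mu = rep(0, p), Sigma = Sigma_true)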
In order to investigate the characteristics of the a posteriori distribution, we considered δ to be 3 and 10, and D of the form 10^k · I_p for k = -1, 0, 1, 2. Recall that δ = 3 and D = I_p are the default parameters.
Since there is no natural total ordering of cyclic subgroups, we identify each subgroup with the dimension of the model it generates, i.e., dim(𝒵_Γ). In this way, for the sake of this analysis, all models that have the same dimension are merged.
Comments:
* The influence of parameter D=d· I_p is best reflected in the moderate structure scenario. As the value of parameter d decreases, larger symmetries (corresponding to lower-dimensional models) are preferred.
Increasing the parameter δ slightly shifts the distribution and reduces the difference between probabilities and therefore increases the entropy of the distribution.
* On the other hand, the posterior distribution for large values of parameter D=d· I_p becomes independent of the data. In fact, the plots for d=100 are very similar across all the considered scenarios.
* In each of the scenarios, the Bayesian model correctly identifies the true model when δ=3 and D=1· I_p. The value of the default parameter D for all scenarios is very similar, being equal to 0.77· I_p. This demonstrates the model's good properties for the default values of hyperparameters.
§.§ Comparison with other methods
We obtained the results in this section with AMD EPYC 7413 on 40 cores, which took 1 hour 43 minutes to compute.
As mentioned in the Introduction,
although there are no other software packages available for finding permutation symmetries in data, we have made the decision to compare the results of our model with canonical methods commonly used to tackle high-dimensional problems. In this section, we will compare the method from the gips package with methods implemented in the huge (GLASSO) and rags2ridges (RIDGE) packages. Both huge and rags2ridges are based on matrix penalization and include a hyperparameter λ, which controls the strength of the penalty. They also both have implemented hyperparameter search techniques, which we will utilize.
§.§.§ Methodology
We conducted the comparison across different sample sizes and across strengths of the symmetry structure of the true covariance matrices, similarly to the previous section. For p = 50, we utilized matrices that are invariant under the following permutation subgroups:
no structure:<id>,
moderate structure:<(1,2,…,25)>,
large structure:<(1,2,…,50)>.
For each of these scenarios, we constructed the true covariance matrices in the following way: first, we sampled a positive definite matrix from the Wishart distribution on Sym(p; ℝ) with a scale parameter of I_p and a shape parameter of p. We then projected this matrix onto the colored space corresponding to a given scenario. Next, we thresholded the inverse of this matrix by setting 25% of the off-diagonal entries with the smallest absolute values to zero.
The inverse of such matrix served as our true covariance matrix. It is important to note that this approach does not always produce a positive definite matrix, which was indeed the case in the 'no structure' scenario. In this particular case, we added 0.1*diag(p) to the realization of the Wishart distribution. This adjustment ensured the construction of a proper covariance matrix.
In the initial analysis, we also considered covariance matrices whose inverses did not contain any zeros. However, to our surprise, the results for matrices with and without zeros in their inverses were very similar for all the methods. Therefore, we decided to focus only on covariance matrices corresponding to nontrivial conditional dependence structures, as they are expected to favor likelihood penalization methods more.
For each of the scenarios, we considered three different sample sizes n ∈ {10, 20, 40}.
Therefore, we have a total of 3 · 3 = 9 settings for this experiment. The comparison method for each setting is as follows:
1) Fix a sample size n and true covariance matrix Σ.
2) Generate a sample Z from N_p(0, Σ) of size n.
3) Estimate the covariance matrix using:
* from gips package: find_MAP() function with optimizer = "MH" and n_iter = 300000,
* from rags2ridges package: ridgeP() function with lambda parameter found by the optPenalty.kCVauto() function with lambdaMin = 0.001, lambdaMax = 100 range,
* from huge: function huge() with parameters method = "glasso", nlambda = 40, and lambda.min.ratio = 0.02; the tuning parameter lambda was selected using default parameters of the huge.select() function (rotation information criterion).
4) Record the log-likelihood and evaluate estimation using the Frobenius norm.
5) Repeat 2)-4) 10 times and aggregate results.
Recall that the Frobenius norm of a p × p matrix M = (m_ij)_i,j is defined by
M_F = √(∑_i=1^p∑_j=1^p |m_ij|^2).
When M is the difference between the true covariance matrix and its estimate, M_F^2 is proportional to the mean squared error (MSE).
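In code, this error measure is simply (a small helper of ours):
R> frob_error <- function(est, truth) sqrt(sum((est - truth)^2))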
§.§.§ Results
Each of the three methods produces an estimator of the covariance matrix. We calculate the corresponding (negative) log-likelihood in our Gaussian model and present the results in Figure <ref>.
Comments:
* In all configurations, the gips method outperformed huge. However, it should be emphasized that the main purpose of the GLASSO method is the model selection within graphical models rather than the estimation of the covariance matrix. Typically, when possible, estimation is performed within the selected model and such an approach leads to systematically smaller bias.
* The comparison with rags2ridges is more interesting, as the gips method yielded weaker results when there was no structure and better results when the symmetry structure was large. This behavior was expected as gips is designed to look for these structures in the data.
* We can see that the results of gips were very unstable when there was no structure in the underlying ground truth matrix. This behaviour is expected as gips will more likely find some non-existing structure when n is much smaller than p. Each method gains stability when the sample size increases.
The Frobenius norm of the difference of the estimate and the true covariance matrix is shown in Figure <ref>.
Comments:
* When there is no symmetry structure in the data, the three methods considered generate estimates that are very similar in terms of the Frobenius norm.
* Generally, the bigger the symmetry structure, the better the quality of the estimation. Both gips and huge perform similarly in all scenarios, while rags2ridges performs significantly better in the scenario with a large structure.
We acknowledge that the proposed method of comparison is not systematic enough to draw conclusions in full generality.
In general, the results act in support of the theory: the gips method is a viable choice if we suspect that the true matrix has some structure. However, it is difficult to recognise this post hoc by comparing the methods' performance using the log-likelihood (or possibly other measures) in real-world cases, when the true covariance matrix is unknown.
From a practical perspective, it is worth repeating that the gips method's output provides not only a projected covariance matrix but also an interpretation in the language of permutation symmetries of the data.
Finally, we note that both the huge and rags2ridges methods execute within a few seconds, while it takes approximately 20 minutes to run the gips method for one scenario with p = 50 and 300 000 iterations of the Metropolis-Hastings algorithm.
§ SUMMARY AND DISCUSSION
In this paper, we have presented gips, an R package for learning permutation symmetry from a multivariate Gaussian sample.
The proposed R package is available from CRAN at <https://CRAN.R-project.org/package=gips>. The “Replication code” is available at <https://github.com/PrzeChoj/gips_replication_code>.
Our model provides competitive results to popular dimensionality reduction and covariance matrix estimation methods in Gaussian models.
We emphasize that there is currently no competition for our package. Known model selection methods for colored graphs are, to the best of our knowledge, not implemented in any publicly available package. Furthermore, the model presented in <cit.> and implemented in gips is the only one that allows for the search of permutation symmetries.
The gips package is under active maintenance and will continue to be developed to incorporate more advanced features. One potential avenue for future development is the inclusion of a model selection procedure within Gaussian graphical models that are invariant under permutation symmetry (RCOP models). By providing this package, we aim to facilitate the exploration of the implemented methodologies and their applications for statisticians and the R community, thus fostering wider adoption and utilization. We invite everyone to a discussion about potential directions of development, <https://github.com/PrzeChoj/gips/issues>.
§ COMPUTATIONAL DETAILS
The results in this paper were obtained using
R 4.2.1 with the gips 1.1.0.9100 package. R itself
and all packages used are available from the Comprehensive R Archive Network (CRAN) at <https://CRAN.R-project.org/>.
For gips's dependencies, we used numbers 0.8-5 (<cit.>), permutations 1.1-2 (<cit.>), rlang 1.1.1 (<cit.>), utils 4.2.2 (<cit.>).
For packages in Section <ref>, we used rags2ridges 2.2.6, huge 1.3.5.
The remaining packages used are Biobase 2.58.0, GEOquery 2.66.0, BiocManager 1.30.21, MASS 7.3-60 (<cit.>), ggplot2 3.4.2 (<cit.>), magrittr 2.0.3 (<cit.>), parallel 4.2.2, dplyr 1.1.2 (<cit.>), stringi 1.7.12 (<cit.>), gRim 0.2.10 (<cit.>).
For producing the Figure <ref> we used Cytoscape v3.10.0 <cit.>.
§ ACKNOWLEDGMENTS
Research was funded by (POB Cybersecurity and Data Science) of Warsaw University of
Technology within the Excellence Initiative: Research University
This research was carried out with the support of the Laboratory of Bioinformatics and Computational Genomics and the High Performance Computing Center of the Faculty of Mathematics and Information Science Warsaw University of Technology.
§ EXAMPLE TO SECTION 1.4
The standard PC can run all the code in this appendix within 2 seconds.
Consider an i.i.d. sample (Z^(i))_i=1^n from N_p(0, I_p) for p = 4 and n = 50 and let S be its empirical covariance matrix. The distribution of Z is clearly invariant under any permutation. Let us examine the output of gips.
R> p <- 4; n <- 50
R> set.seed(2022); Z <- matrix(rnorm(n * p), ncol = p); S <- cov(Z)
R> g <- gips(S, n)
R> g_MAP <- find_MAP(g,
+ optimizer = "BF", show_progress_bar = FALSE,
+ return_probabilities = TRUE, save_all_perms = TRUE
+ )
R> get_probabilities_from_gips(g_MAP)
(1,3,2,4) (1,2,3,4) (1,2,4,3) (1,3,4) (2,3,4) (1,2,4)
2.542477e-01 2.393022e-01 2.124555e-01 1.837534e-01 6.234428e-02 2.517374e-02
(1,2,3) (1,4)(2,3) (1,3)(2,4) (1,2)(3,4) (3,4) (1,4)
1.971380e-02 9.287411e-04 6.006721e-04 4.634810e-04 3.743338e-04 2.542462e-04
(1,3) (2,4) (2,3) (1,2) ()
1.811636e-04 8.977424e-05 8.207691e-05 3.465056e-05 2.263418e-07
We observe that the symmetries with the highest probability correspond to the long cycles, and these probabilities are very close to each other. This suggests that the data is invariant under each of these symmetries. The only model invariant under these three symmetries is the full-symmetry model, which is invariant under any permutation (both the diagonal and off-diagonal of the covariance matrix are constant).
§ FORMULAS FOR STRUCTURE CONSTANTS
In this appendix, we outline the steps required to find the ingredients necessary for the calculation of the normalizing constants
I_Γ (δ,D)= ∫_𝒫_ΓDet(k)^(δ-2)/2 e^-(1/2) Tr[D· k] dk, δ>1, D∈Sym^+(p;ℝ).
for an arbitrary cyclic subgroup Γ. These constants are indispensable for our model selection procedure.
We note that the formulas for normalizing constants for an arbitrary subgroup Γ⊂𝔖_p are presented in <cit.>. Here, we specialize these formulas to cyclic subgroups, which allows for significant simplification.
Let p_i be the length of the i-th cycle in the cyclic decomposition of σ∈𝔖_p, and let {i_1, …, i_C_σ} be a complete system of representatives of the cycles of σ. Furthermore, let (e_i)_i=1^p be the standard basis of ℝ^p.
* For c=1,…,C_σ, calculate v^(c)_1, …, v^(c)_p_c∈ℝ^p as
v^(c)_1 := √(1/p_c)∑_k=0^p_c-1 e_σ^k(i_c),
v^(c)_2β := √(2/p_c)∑_k=0^p_c-1cos( 2πβ k/p_c) e_σ^k(i_c) (1 ≤β < p_c/2),
v^(c)_2β+1 := √(2/p_c)∑_k=0^p_c-1sin( 2πβ k/p_c) e_σ^k(i_c) (1 ≤β < p_c/2),
v^(c)_p_c := √(1/p_c)∑_k=0^p_c-1cos (π k) e_σ^k(i_c) (p_c even).
* Construct an orthogonal matrix U_Γ by arranging column vectors {v^(c)_k}, 1 ≤ c ≤ C_σ, 1≤ k ≤ p_c, in the following way:
we put v^(c)_k earlier than v^(c')_k'
if
(i) [k/2]/p_c < [k'/2]/p_c', or
(ii) [k/2]/p_c = [k'/2]/p_c' and c < c', or
(iii) [k/2]/p_c = [k'/2]/p_c' and c = c' and k is even and k' is odd.
* Let N be the order of Γ. For α=0,1,…,⌊N/2⌋ calculate
r_α^∗ = #{c∈{1,…,C_σ} : α p_c ≡ 0 (mod N)},
d_α^∗ = 1 if α = 0 or α = N/2, and d_α^∗ = 2 otherwise.
In the definition of r_α^∗, we treat 0 as a multiple of N, and thus r_0^∗=C_σ.
Then, we set L = #{α : r_α^∗ > 0},
r = (r_α^∗ : r_α^∗ > 0)
and
d = (d_α^∗ : r_α^∗ > 0).
The parameters (r_i,d_i)_i=1^L are called the structure constants.
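As a small worked example of the definitions above: take p = 5 and σ = (1,2)(3,4,5), so C_σ = 2, p_1 = 2, p_2 = 3, and N = 6. Then r_0^∗ = 2, r_1^∗ = 0, r_2^∗ = 1 (only the 3-cycle contributes, since 2·3 ≡ 0 mod 6), and r_3^∗ = 1 (only the 2-cycle), with d_0^∗ = 1, d_2^∗ = 2, and d_3^∗ = 1. Hence L = 3, r = (2,1,1), d = (1,2,1), and ∑_i r_i d_i = 2 + 2 + 1 = 5 = p, consistently with the block decomposition below.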
The constructed orthogonal matrix U_Γ possesses a notable property. According to <cit.>, it performs a block decomposition of the colored space 𝒵_Γ in the following sense: for each S∈𝒵_Γ, we have
U_Γ^⊤· S· U_Γ
=
[ x_1 ; ⋱ ; x_L ],
where x_i∈Sym(r_i d_i;ℝ), i=1,…,L.
For any S∈Sym^+(p; ℝ) and δ∈ℝ, we define a function
γ_Γ(S,δ) = ∏_i=1^L Det(x_i)^-(δ+r_i-3)/2-1/d_i,
where x_i∈Sym(r_i d_i;ℝ) are the diagonal blocks of a decomposition (<ref>) of Π_Γ(S) (recall (<ref>)).
Finally, by <cit.>, the integral I_Γ(δ,D) is convergent if
(δ-2)/2>max_i=1^L {-1/d_i} and D is positive definite. The expression max_i=1^L {-1/d_i} equals -1/2 unless L=1 (which corresponds to the trivial subgroup {id}), in which case it is equal to -1. Thus, for all δ>1 and D∈𝒫_Γ we have
I_Γ(δ,D)= e^- A_Γ (δ-2)/2 - B_Γγ_Γ(1/2D,δ)
∏_i=1^L Γ_i(1+ d_i (δ+r_i-3)/2),
where
A_Γ = ∑_i=1^L r_i d_ilog d_i,
B_Γ = 1/2∑_i=1^L r_i (1+(r_i-1)d_i/2)log d_i,
Γ_i(λ)=
(2π)^r_i(r_i-1) d_i/4∏_k=1^r_iΓ(λ-(k-1)d_i/2).
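For completeness, the logarithm of Γ_i(λ) can be evaluated in R with a small helper of ours (it assumes λ - (k-1)d_i/2 > 0 for all k = 1, …, r_i):
R> log_Gamma_i <- function(lambda, r_i, d_i) {
+ (r_i * (r_i - 1) * d_i / 4) * log(2 * pi) +
+ sum(lgamma(lambda - (seq_len(r_i) - 1) * d_i / 2))
+ }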
|
http://arxiv.org/abs/2307.00425v1
|
20230701204659
|
Explicit Cocycle of the Dedekind-Rademacher Cohomology Class and the Darmon-Dasgupta Measures
|
[
"Jae Hyung Sim"
] |
math.NT
|
[
"math.NT"
] |
Explicit Cocycle of the Dedekind-Rademacher Cohomology Class and the Darmon-Dasgupta Measures
[
======================================================================================================================================
The work of Darmon, Pozzi, and Vonk <cit.> has recently
shown that the RM-values of the Dedekind-Rademacher cocycle J_DR
are Gross-Stark units up to controlled torsion. The authors
of <cit.> remarked that the measure-valued cohomology class
μ_DR which underlies J_DR is the level 1 incarnation
of earlier constructions in <cit.>. In this paper,
we make this relationship explicit by computing a concrete
cocycle representative of μ_DR by tracing the construction
of the cohomology class and comparing periods of weight 2 Eisenstein
series. While maintaining a global perspective in our computations, we
configure the appropriate method of smoothing cocycles
which exactly yields the p-adic measures of <cit.> when
applied to μ_DR. These methods will also explain
the optional degree zero condition imposed in <cit.>
which was remarked upon in <cit.> and <cit.>.
In <cit.>, Darmon and Vonk introduced the theory of rigid cocycles
which drew analogies from classical Complex Multiplication theory to address
previously inaccessible questions regarding the arithmetic of real quadratic
fields. In <cit.>, Darmon, Pozzi, and Vonk used the deformation of Hilbert
Eisenstein series to show that the Dedekind-Rademacher cocycle yields Gross-Stark
units when evaluated at real quadratic points. The key component in the
construction of the Dedekind-Rademacher cocycle is μ_DR, a cohomology
class with p-adic measure coefficients, which we refer to as the
Dedekind-Rademacher cohomology class. Specifically, the Poisson Transform of
μ_DR yields the analytic cocycle J_DR.
On the other hand, in <cit.>, Darmon and Dasgupta used the congruences of
Dedekind sums arising as periods of families of Eisenstein series to construct
μ_DD,δ, a family of p-adic measures relating to the norm of a
p-adic unit conjectured to be Gross-Stark units. Specifically, Theorem 4.2
of <cit.> shows the existence of a cocycle of p-adic measures whose values
on homogeneous polynomials are given by periods of Eisenstein series. The
Poisson transform of these p-adic measures were then proven to coincide
with norms of certain units when evaluated at RM points.
In <cit.>, Dasgupta leveraged the congruences of the Dedekind sums
and explicit polynomials converging to characteristic functions on open balls to give
an explicit formula for μ_DD,δ on compact open sets. This formula
is given in terms of Dedekind sums involving only the first periodified Bernoulli
distribution. These formulae not only gave a simple algorithm for computing
the conjectured Gross-Stark units but also proved the integrality of the measure by
observation. In <cit.>, Dasgupta generalized these constructions using
Shintani zeta functions to define p-adic measures which generate
Gross-Stark units as a corollary to the proof of the tame refinement of
the Gross-Stark conjecture for totally real fields <cit.>.
In <cit.>, it is remarked that μ_DD,δ is a sum of
_2()-translates of μ_DR so that μ_DR can be
seen as a level 1 construction which generalizes μ_DD,δ.
The main theorem of this paper makes this relationship precise
by establishing the exact method of “smoothing” μ_DR to
obtain the cohomology class of μ_DD,δ.
[Theorem <ref>]
Let c>1 be a positive integer coprime to 6, N an integer coprime to c,
and δ∈[_2()] be an element of the form δ=
∑_D| N,D>0 n_D· [LR(D)] satisfying ∑_D n_D· D=0.
Then,
12·μ_DR^pδ=-[(μ_DD,δ)_c]
where [(μ_DD,δ)_c] is the cohomology class of the cocycle
(μ_DD,δ)_c.
The general strategy will be to study the Kato-Siegel distribution
which is used to construct the cohomology class μ_DR to
define a cocycle representative _. Smoothing this cocycle
by δ will yield a cocycle of a congruence subgroup ^δ
which is clearly seen to coincide with μ_DD,δ up to an
explicit coboundary. From the necessary computation of periods of
weight 2 Eisenstein series in the definition of _ and
^δ, we will also extract a simple formula for _
on any test function.
Another consequence of this paper is an explanation for the dispensable
degree zero condition on δ from <cit.>. Fleischer, Liu, and Dasgupta
observed that this condition was not necessary on the level of Eisenstein
series during their computation of elliptic units <cit.>. The fact that
Theorem <ref> also does not require the degree zero condition further
bolsters this observation. We discuss this further in Remark <ref>.
In 1, we establish our conventions for discussing distributions on
V:=^2 and their adelic counterparts. We lay out methods of smoothing and
restricting distributions and their resulting effect on group actions.
Section 2 is dedicated to recalling the cohomological formulation of the Dedekind-Rademacher
cohomology class and the Darmon-Dasgupta cocycles. Here, we will define μ_DR,
μ_DD,δ, and the various decorations we adorn these two objects.
Our construction of the Dedekind-Rademacher cohomology class will largely follow
<cit.>, but we will use the language established in Section 1 to emphasize the
global nature of the distributions. Since μ_DD,δ is a family of
p-adic measures, we will be forced to descend μ_DR^δ down to
a p-adic measure μ_DR^pδ to make a direct comparison in
Theorem <ref>.
In Section 3, we will detail the computation of explicit formulae for the
cocycles _ and ^δ which are respectively representatives
of μ_DR and μ_DR^δ. We first decompose μ_KS
into a product of three simple distributions to define _
and ^δ in explicit fashion. These formulae are virtually identical
to those appearing in <cit.>, but we include the details for completeness.
We then review the computations of periods of weight 2 Eisenstein series.
While we include some details, we refer to <cit.> for key theorems.
We then compute ^δ which proves Theorem <ref>
and conclude the paper by characterizing _ by its values on
generators of G_ in Theorem <ref>.
* For an abelian group G, G':=G∖{e} where
e is the identity element of G.
* For any rational number x∈, UL(x), LR(x),
D(x)∈_2() are the diagonal matrices
UL(x) = [ x 0; 0 1 ] LR(x) = [ 1 0; 0 x ] D(x) = [ x 0; 0 x ].
* For any group G, γ∈ G, A a G-module, and a∈ A,
left actions will be denoted γ∗ a while right actions
will be denoted a|γ.
* V=^2 denotes the 2-dimensional -vector space.
* For a prime ℓ, V_ℓ:=V⊗__ℓ.
* For a set of primes , ^:=∏_ℓ∉ S' _ℓ.
* For any set U, [U] is the characteristic function on U.
§ DISTRIBUTIONS
In this section, we establish our conventions and methods
regarding distributions.
§.§ Basic Definitions and Conventions
Let ℓ be a rational prime and V_ℓ=_ℓ^2.
A test function on V_ℓ is a function f:V_ℓ→
that is locally constant with compact support. We denote
by (V_ℓ) the abelian group of test functions.
We denote by f_ℓ^0 the characteristic function of
_ℓ^2 ⊂ V_ℓ.
For an abelian group A, (V_ℓ,A):=_((V_ℓ),A)
is the group of A-valued distributions on V_ℓ.
Considering V_ℓ as row vectors, we have a natural
right action of _2(_ℓ). For the sake of later convenience,
we convert this action to a left action so that for γ∈_2(_ℓ) and v∈ V_ℓ, γ∗ v:= vγ^-1.
This induces a right action of _2(_ℓ) on (V_ℓ)
where (f|γ)(v)=f(γ∗ v) for all f∈(V_ℓ)
and γ∈_2(_ℓ). The benefit of this slightly awkward
group action is evident when considering test functions that are
characteristic functions of subsets of V_ℓ. For U⊂ V_ℓ
and [U] its characteristic function, then for all γ∈_2(_ℓ) we have
([U]|γ)(v)=[U](vγ^-1)=[Uγ](v).
Suppose now that A is a right _2(_ℓ)-module. Then,
(V_ℓ,A) inherits a left _2(_ℓ) action where
for all μ∈(V_ℓ,A), γ∈_2(_ℓ),
and f∈(V_ℓ),
(γ∗μ)(f)=μ(f|γ)|γ^-1.
Let be an arbitrarily large set of rational primes.
The group of -adelic test functions ()
is defined as the restricted tensor product over finite primes
'⊗_ℓ∉(V_ℓ)
where the restriction is that simple tensors are
of the form ⊗_ℓ∉ f_ℓ where
f_ℓ=f_ℓ^0 for all but finitely many primes.
For an abelian group A, the group of A-valued
-adelic distributions is (,A):= _((),A).
Letting ^ be the set of -adeles, i.e. the set of
adeles ignoring the primes in , () has a natural
right-action of _2(^). If A is a right
_2(^)-module, then (,A) inherits a left
_2(^)-action analogous to the ℓ-adic case.
From now on, let G:=_2^+(). We note that for any set of
primes , we can consider G as a subgroup of _2(^)
by the diagonal embedding.
* Let be the right G-module with the trivial G-action.
Then, for γ∈ G, f∈(), and μ∈(,), we have (γ∗μ)(f)=μ(f|γ).
* Let _ be the additive group of holomorphic functions
on the complex upper-half plane. _ has a right
G-module structure given by Moebius transform on . For
any set of primes , (,_) inherits a left
G-module structure. Explicitly, if γ∈ G, f∈(), μ∈(,_), and τ∈,
we have
(γ∗μ)(f)(τ)=μ(f|γ)(γ^-1∗τ).
* The same observation can be made for _^×, the
multiplicative group of units of _.
Let V:=^2. A lattice in V is a rank 2 -submodule
of V. We endow V with the lattice topology, i.e. the topology
generated by lattices in V. For a function f:V→,
we define the following terms:
* The support of f is the set (f):=
{v∈ V| f(v)≠ 0},
* f is bounded if there exists a lattice
Λ such that (f)⊂Λ,
* f is Λ-invariant for a lattice Λ
if for all v∈ V and λ∈Λ,
f(v)=f(v+λ).
The set of test functions on V is the set (V):=
{f:V→| f is bounded and locally constant.}.
We write (V')⊂(V) for the subgroup of test
functions supported away from 0.
For an abelian group A, (V,A):=((V),A) is the
group of A-valued distributions on V while (V',A)
:= ((V'),A) are the punctured distributions.
As in the local setting, V has a right action of _2()
where V is thought of as row vectors which we similarly convert
to a left action. This endows (V) (resp. (V,A)) a right
(resp. left) action of _2().
The adelic distributions and the distributions on V are connected
by the following proposition, which will be especially helpful
in giving explicit formulae for adelic distributions.
Let ϕ:(∅)→(V) be the homomorphism
defined by ϕ(⊗_ℓ f_ℓ) = ∏_ℓ f_ℓ|_V where
the restriction to V is the restriction to the natural
inclusion of V in V_ℓ. Then, ϕ is an isomorphism
of _2()-modules.
We define an explicit inverse ψ:(V)→(∅).
For any affine lattice x+Λ⊂ V, we assign
ψ([x+Λ])=⊗_ℓ [x+Λ⊗_ℓ].
Notice that for all but a finite number of primes,
x∈_ℓ^2 and Λ⊗_ℓ = _ℓ^2.
We claim that ϕ∘ψ is the identity function
on (V). It is sufficient to see this on characteristic
functions of affine lattices. Let [x+Λ]∈(V).
If α∈ x+Λ, [x+Λ⊗_ℓ](α)=1
for all primes ℓ.
If α∉ x+Λ, we want to show that there exists
a prime ℓ such that [x+Λ⊗_ℓ](α)=0.
It is sufficient to show that α-x∉Λ⊗_ℓ
for some prime ℓ. In fact, by scaling each coordinate, it is
sufficient to show this for the case when Λ=^2, which is
true since an element of V is in ^2 if and only if it is
integral at all primes.
To show that ϕ is an isomorphism, we apply the strong
approximation theorem to see that for all tensors of the
form
⊗_ℓ [x_ℓ+Λ_ℓ]
where [x_ℓ+Λ_ℓ]=f_ℓ^0 for all but a finite number
of primes, there exists x∈ V and v_1,v_2∈ V such that
[x+v_1_ℓ+v_2_ℓ]=[x_ℓ+Λ_ℓ]
for all primes ℓ. This means ψ is surjective, so
ϕ must be an isomorphism.
All definitions above can be replicated for the one-dimensional
setting by replacing V with and _2 with _1.
The one-dimensional setting will be used to define various
distributions in <ref>. The details
are entirely analogous and are left as an exercise for the reader.
§.§.§ Smoothing
Let δ=∑_g n_g· [g]∈[G].
Let M be a left G-module. The δ-smoothing map is defined by
-^δ:M → M
m ↦ m^δ:=δ∗ m.
Let H⊂ G be a subgroup. Define H_δ⊂ H as the intersection
H_δ=H∩(⋂_g∈ G
n_g≠ 0 gHg^-1).
The δ-smoothing map sends H-invariant elements to H_δ-invariant
elements,
-^δ:M^H→ M^H_δ.
Let m∈ M^H. Let γ∈ H_δ and for all g∈ G
such that n_g≠ 0, let γ_g∈ H such that
γ = gγ_g g^-1. We then observe that
γ∗ m^δ=∑_g n_g·(γ g)∗ m
=∑_g n_g· (gγ_g)∗ m=∑_g n_g· g∗ m
=m^δ.
Let A be a right G-module and let c∈ be a nonzero integer.
For μ∈(,A), the c-smoothed distribution
μ_c∈(,A) is the distribution obtained by smoothing
μ by the group element c^2[Id]-[D(c)]. Note that since Id
and D(c) are both in the center of G, c-smoothing is a
K-morphism for any subgroup K⊂ G.
Let be a set of primes and G_:=_2^+(_)
as defined in Definition <ref>. Let N be an integer
supported over and δ=∑_D| N n_D· [LR(D)].
Then, letting G_(N)⊂ G_ be the congruence subgroup
of matrices that are upper-triangular modulo N, we get a
δ-smoothing map (,A)^G_→(,A)^G_(N).
If we also have that A is acted on trivially by scalar matrices,
the equality LR(D)^-1=UL(D)· D(D^-1) yields
μ^δ(f)=∑ n_D· (μ(f| LR(D))| UL(D)).
§.§.§ Moving between sets of primes
Let ⊃ be two sets of primes. We define an injective
group homomorphism i_^:()→() where
i_^(f)= f⊗ f_∖^0.
We define (V_)⊂(V) (resp. (V_')⊂(V'))
as the image of () under ϕ∘ i_^∅. For a
lattice Λ⊂ V, _Λ(V_) is the intersection _Λ(V)∩(V_). For notational convenience, we write (V_',A) for
((V_'),A) and (',A) for
((ϕ∘ i_^∅)^-1((V_')),A).
Define _⊂ as the subring of rational numbers that are
integral at all primes in . If is finite, and n is the product
of primes in , _ is the localization _(n).
Define G_ as the group _2^+(_) and for any integer N
supported over , G_(N)⊂ G_ is the congruence subgroup
of matrices that are upper-triangular modulo N.
The notation (V_) is chosen to reflect the fact that
_^2(V_) is generated by the characteristic functions
of the cosets _^2/^2. We warn the reader that this
cannot be extended to other lattices. For example, if ℓ∈,
the characteristic function f=[(ℓ,ℓ)+ℓ^2^2] is not
contained in (V_) since the ℓ-factor of the simple
tensor ϕ^-1(f)∈(∅) is not f_ℓ^0.
The multitude of decorations on V are meant to emphasize the
different facets of the group (V). In particular, the form
(V_) is especially useful when calculating the values
of scalar-invariant distributions since _^2(V_)
has a nice set of generators. However, the G_ action of
interest is that of (), so we will
() when the G_-action is relevant.
Let ⊂ be two sets of primes. i_^ is a
G_-module homomorphism.
We observe that for all ℓ∈∖ and g∈ G_,
the action of g on f_ℓ^0 factors through _2(_ℓ),
so f_ℓ^0| g = f_ℓ^0. Thus, for all f∈(),
i_^(f| g)=(f| g)⊗ f_∖^0
=(f⊗ f_∖^0)| g=i_^(f)| g.
Let A be a right G-module. Then, for two sets of primes
⊂, the pull-back map (i_^)^∗:
(,A)→(,A) is a G_-module homomorphism.
§.§.§ Freeness of (V_)
For all lattices L⊂ V, _L(V) and _L(V')
are free -modules. The same is true for _L(V_) and
_L(V_') for any set of primes .
Since _L(V'),_L(V_), and _L(V_') are subgroups
of _L(V), it suffices to show that _L(V) is free.
We write down an explicit basis for :=_L(V).
Define B_L:={[v]| v∈ V/L}. Then, for all test functions
f∈, we can uniquely decompose f as
∑_v∈ V/L f(v)· [v]. (Notice that since f is L-invariant,
it makes sense to evaluate f at a coset in V/L.) The uniqueness of this
decomposition is immediate. The sum is also finite since
f having bounded support means there exists a lattice Λ⊂ V such that (f)⊂Λ, so f(v)
is nonzero only if v∈Λ/L, which is a finite set.
For any set of primes , (V_) and (V_') are free
abelian groups.
Once again, since (V_) and (V_') are subgroups
of (V), it is sufficient to show that :=(V) is
a free abelian group. For any lattice Λ⊂ V, let
_Λ:=_Λ(V). For all n∈, define
L_n:=n^2. Then, for all lattices
Λ⊂ V, there exists n∈ such that L_n
⊂Λ. This means _Λ⊂_L_n,
so to form a basis, it is sufficient
to find a basis for the union ⋃_n=1^∞_L_n.
Define a directed system on {_L_n| n∈} partially
ordered by divisibility on where for all
n,m∈, j_n,m:_L_n→_L_nm is the
natural inclusion. Then, we have =_n _L_n.
To show that is free, it suffices to show that each
transition map j_n,m is an isomorphism onto a direct summand.
Fix n,m∈. By the proof of Lemma <ref>,
we have an explicit basis of _L_n of the form B_n
={[v+L_n]| v∈ V/L_n}. We want to show that this basis
can be extended to a basis of _L_nm. Fix v∈ V
and we notice that
[v+L_n] = ∑_w∈ L_n/L_nm
[v+w+L_nm].
We can thus define a basis for the test functions in _L_nm
whose support lies in v+L_n as follows:
B_v:={[v+L_n]}∪{[v+w+L_nm]| w∈ (L_n/L_nm)'}.
Then, _L_nm=⊕_v∈ V/L_n⊕_f∈ B_v
f·. By the construction of B_v, we can see that j_n,m
identifies _L_n with a direct summand _L_nm.
§.§.§ Decomposition of G_
Let T_⊂ G_ be the subgroup of diagonal matrices.
Then, G_ is generated by T_ and _2().
Notice that G_ acts transitively on ^1().
Since _2() also acts transitively on ^1(),
we only need to show that stabilizers of the infinity cusp,
i.e. upper-triangular matrices, can all be written as a
product of elements in T_ and _2().
Take a generic upper-triangular matrix in G_,
γ=[ a b; 0 d ].
Since this matrix is invertible, we know that a and d
are units in _. Let α∈_^× be
a positive unit such that α b/a is an integer. Then,
we have
γ = [ a/α 0; 0 d ][ 1 α b/a; 0 1 ][ α 0; 0 1 ],
which is clearly a product of matrices in T_ and _2().
We note that if N is an integer that is supported over ,
the proof above can be exactly replicated to show that G_(N)
(the congruence subgroup of matrices upper-triangular modulo N)
is generated by T_ and Γ_0(N). The only change required
is to notice that the orbit of i∞ under the action of G_(N)
is the union of i∞ with rational numbers whose denominator is
divisible by N.
For computational purposes, we use the above proof to write an explicit decomposition
of a generic element of G_ as a product of elements in T_ and
_2(). Note that we can use continued fractions to easily write down
elements of _2() as explicit products of generators.
Let γ∈ G_ be of the form
γ=[ a b; c d ].
Let β∈_^× be a positive integer such that
aβ and cβ are coprime integers and let x,y∈
such that (aβ)y-(cβ)x=1. Let α∈_^×
be a positive integer such that α(by-xd) is an integer, and
let D = det(γ). Then, we have the decomposition
γ = [ aβ x; cβ y ][ (αβ)^-1 0; 0 Dβ ][ 1 α (by-xd); 0 1 ][ α 0; 0 1 ].
§.§ Constructing Distributions
The following proposition provides a tool for easily describing most
of the distributions we will encounter while simultaneously proving
their existence.
Let A be an abelian group with a right ^×-action and
let L=^2. Given a function σ:V/L→ A, the following
two statements are equivalent.
* There exists μ∈(V,A) that is ^×-invariant
such that for all v∈ V/L, μ([v])=σ(v).
* For all nonzero t∈ and v∈ V/L, we have
the equality
∑_w∈ V/L
wt=vσ(w)=σ(v)| t^-1.
Suppose the first statement is true and let v∈ V/L. For a t∈,
we have [v]| D(t^-1)=[vD(t^-1)]. Since vD(t^-1) is L-uniform,
we can decompose it as the disjoint union
vt^-1=⋃_w∈ V/L
wt=v w.
Thus, μ being ^×-invariant gives us the following equalities.
σ(v)| t^-1 =μ([v])| t^-1
=μ([v]| t^-1)
=μ( ∑_w∈ V/L; wt=v [w])
=∑_w∈ V/L
wt=vμ([w])
=∑_w∈ V/L
wt=vσ(w).
For the converse, we first define μ_L:_L(V)→ A by
writing test functions f∈_L(V) uniquely as a sum of
basis elements, i.e. f=∑_v∈ V/L f(v)[v]. We then assign μ_L(f)
=∑_v∈ V/L f(v) σ(v). We extend this to a distribution
by noticing that for all f∈(V), there exists t∈
such that f| t^-1∈_L(V), so we define μ(f)
=μ_L(f| t^-1)| t. We need to show that μ is
well-defined.
Since μ_L is well-defined by the uniqueness of the
decomposition of each f∈_L(V),
it is sufficient to show that for all t∈,
μ_L(f)=μ_L(f| t^-1)| t. In fact, we can reduce
to the case when f=[v] for some v∈ V/L, which is
exactly the second statement of the proposition.
Now, suppose A is a right _^×-module. Then, we have
a corresponding statement for _^×-invariant distributions
in (V_^,A) and maps of the form σ:_^2/^2→ A.
We note that, essentially due to Remark <ref>, we can only include
the case when the lattice is ^2.
Suppose A is an abelian group with the right action of
_^×. Given a function σ:_^2/^2
→ A, the following two statements are equivalent:
* There exists μ∈(V_,A) that is
_^×-equivariant such that for all
v∈_^2/^2, μ([v])=σ(v).
* For all nonzero t∈∩_^× and
v∈_^2/^2, we have the equality
∑_w∈_^2/^2
wt=vσ(w)=σ(v)| t^-1.
The proof is identical to that of Proposition <ref>
by replacing L with ^2 and observing that
_^2(V_)={[v]| v∈_^2/^2}.
§.§.§ Examples of Distributions
The following distributions will be integral to our constructions and comparisons
in later sections.
Dirac Distribution
A simple example of distributions is the Dirac distribution δ_v.
For v∈ V, δ_v∈(V,) is defined so that for all
test functions f∈(V), δ_v(f)=f(v). This clearly is a
homomorphism from test functions into .
In particular, we single out the rank 1 Dirac distribution
δ_0∈(,) which we will use extensively.
Bernoulli Distributions
Recall the Bernoulli polynomials defined by the following generating function.
te^xt/e^t-1 = ∑_n=0^∞ B_n(x)·t^n/n!.
For r a non-negative integer, the r-th periodified Bernoulli polynomial
B̃_r:→ is the function defined by
B̃_r(x)=
0 if r=0 and x∈
B_r(⟨ x⟩) otherwise
where ⟨ x⟩ is defined as the fractional part of x.
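For reference, the first two Bernoulli polynomials are B_1(x) = x - 1/2 and B_2(x) = x^2 - x + 1/6, so that B̃_1(x) = ⟨ x⟩ - 1/2 outside the exceptional case and B̃_2(x) = ⟨ x⟩^2 - ⟨ x⟩ + 1/6.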
Let r be a non-negative integer and let _r-1 be the 1-dimensional
-vector space equipped with the degree r-1 right action by ^×,
i.e. for all x∈ and n∈^×, x| m = m^r-1· x.
Then, there exists a distribution _r∈(,_r-1) such that
for all x∈, _r([x+])=B̃_r(x).
Using the 1-dimensional analogue of Proposition <ref>
means we only need to show that for all m∈,
m^1-r·B̃_r(x) = ∑_ym = xB̃_r(y).
We do this for all r using the generating function. Let 0<x<1 be a rational
number.
∑_r=0^∞t^r/r!( ∑_i=0^m-1B̃_r((x+i)/m) )
=∑_i=0^m-1te^(x+i)t/m /e^t-1
=te^xt/m/e^t-1·∑_i=0^m-1 e^t i/m
=te^xt/m/e^t-1·e^t -1/e^t/m-1
=te^xt/m/e^t/m-1
=m(t/m)e^x(t/m)/e^t/m-1
=m∑_r=0^∞B̃_r(x)t^r/r!· m^r
=∑_r=0^∞ m^1-rB̃_r(x)t^r/r!.
For the case when x=, the calculation above should always carry a
t/2 error term. Otherwise, the computation is identical and yields the
required distribution relation.
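As a quick sanity check of the relation for r = 1 and m = 2: for 0 < x < 1 we have B̃_1(x/2) + B̃_1((x+1)/2) = (x/2 - 1/2) + ((x+1)/2 - 1/2) = x - 1/2 = m^1-r·B̃_1(x), as required.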
The first Bernoulli polynomial (as opposed to the periodified Bernoulli
polynomial) will naturally arise in our computations. To account for these
polynomials in a distribution theoretic manner, we abuse notation to define
B_1∈(,) as B_1=_1 - δ_0/2
where δ_0 is the rank 1 Dirac distribution. Since _1 and
δ_0 are both ^×-invariant, B_1 is also
^×-invariant.
Cyclotomic Distributions
To save space, we state the following lemma without proof since the proof
is an elementary property of cyclotomic polynomials. In fact, the following
lemma holds when replacing with ^ab and e^2π i j/n with
a compatible system of roots of unity.
Let R be a commutative -algebra. Then for all n∈
and x∈ R, we have
∏_j=0^n-1 (1-xe^2π i j/n)=1-x^n.
Endow , , and ^× with the trivial ^×-action.
Furthermore, for all x∈ in the closed unit disc but not equal to 1,
define log(1-x) via the power series
log(1-x)=-∑_n=1^∞x^n/n.
The following assignments are ^× distributions defined for a generic
x=[a/b+]∈/ not equal to :
* ∈(',^×) such that ([x])
=1-e^2π i a/b,
* ∈(',) such that ([x])
=log(([x])),
* ∈(',) such that ([x])
=log([x]).
The existence of is given by Proposition <ref>
and Lemma <ref>. The existence of and are
consequences of log and log∘- defining ^×-module
homomorphisms from the image of to and , respectively.
We note that - = π i B_1.
Tensor of Distributions
Using the distributions above, we can define new distributions on V
using tensor products.
Let μ_1∈(,M_1) and μ_2∈(,M_2)
for some ^×-modules M_1,M_2. Then, μ_1⊗μ_2
∈(V,M_1⊗ M_2) is the distribution where for
test functions of the form f=[U_1× U_2], (μ_1⊗μ_2)(f)
=μ_1([U_1])⊗μ_2([U_2]).
Notice that for any n,m∈_≥ 0, _n⊗_m ≅_n+m by the map 1⊗ 1↦ 1.
This shows that we can define distributions of the form
_n⊗_m ∈(V,_n+m-2).
Using the cyclotomic distributions, we define ∈(V',^×) as the composition (
δ_0⊗).
Theta-unit Distribution
The next definition establishes a ring of q-expansions which we use to define
the Theta-unit distribution as well as in our later decomposition of the Kato-Siegel
unit distribution in 3.
Fix a compatible system of primitive roots of unity (ζ_n)_n∈⊂^ab and for all α=m/n∈, define ζ^α:=
ζ^m_n. For all n∈, define the power series ring P_n:=
^ab[[q_n]] where q_n is a formal variable. Let F_n be the field
of fractions of P_n and for all n,m∈, define i_n,m:F_n→ F_nm
by i_n,m(q_n)=q_mn^m. (This makes q_mn an m-th root of q_n.)
Define two multiplicative
groups Q_n and U_n where Q_n is the subgroup of F_n^× generated by
^ab,× and q_n^ while U_n⊂ P_n^× is the subgroup
of power series with leading coefficient 1. Finally, F is the direct limit
of the system ({F_n}_n∈,{i_n,m}_n,m∈) where the ordering
is given by divisibility. P, Q, and U are the corresponding images of
{P_n}, {Q_n}, and {U_n}. We define q as the image of q_1 in F.
We define a right action of the matrix T on the ^ab-algebra F
by the following data: all elements of ^ab are fixed by T,
for q_n∈ F, q_n| T = ζ_n q_n. Viewing q_n as
(2π i τ/n), this action coincides with τ| T=τ+1.
In particular, this action is compatible with the algebra structure of
F.
Let σ:(^2/^2)→ U be the function defined such that
σ((x_1,x_2)+^2)=∏_x∈^+∩ (x_1+)(1-q^xζ^x_2)
×∏_x∈^-∩ (x_1+)(1-q^-xζ^-x_2).
There exists a distribution u∈(V',U) such that
for all X∈ (^2/^2)', u([X])=σ(X). We refer to this
distribution as the theta-unit distribution.
By Proposition <ref>, we need to show that for all
n∈ and x_1,x_2∈^2∖ (1n)^2,
we have the identity
σ((nx_1,nx_2)+^2)
= ∏_i,j=0^n-1σ((x_1+i/n,x_2
+j/n)+^2).
Lemma <ref> tells us that for all α,β∈,
∏_j=0^n-1 (1-q^αζ^βζ^j/n)
=1-q^nαζ^nβ.
Fixing an i∈{0,…,n-1}, we have the product
∏_j=0^n-1σ((x_1+in,x_2+jn)+^2)
=∏_j=0^n-1∏_x∈^+
x≡ a+in
(1-q^xζ^x_2+j/n)×∏_x∈^-
x≡ a+in(1-q^-xζ^-x_2-j/n)
=∏_x∈^+
x≡ a+in
(1-q^nxζ^nx_2)×∏_x∈^-
x≡ a+in(1-q^-nxζ^-nx_2)
=∏_x∈^+
x≡ na+i n
(1-q^xζ^nx_2)×∏_x∈^-
x≡ na+i n(1-q^-xζ^-nx_2).
Taking the product over i then results in the infinite products
being taken over all x≡ na.
u is invariant under the left action of T, i.e. for all test
function f, u(f| T)=u(f)| T.
Since u is ^×-invariant, it is sufficient to check
this on test function in f∈_^2(V'), but this case
is immediate from the definition of σ.
§.§ Kato-Siegel Distribution
We start with the Kato-Siegel units as defined in Scholl's article, which
we reiterate here.
(<cit.>,Theorem 1.2.1)
Take an integer c such that (c,6)=1. There exists one and only one
rule ϑ_c which associates to each elliptic curve E→ S over
an arbitrary base a section ϑ_c^(E/S)∈^×(E∖([× c])) such that
* as a rational function on E, ϑ_c^(E/S) has
divisor c^2(e)-([× c]), where e is the
identity of E/S;
* if S'→ S is any morphism and g: E'=E×_S S'→ E
is the basechange, then g^∗ϑ_c^(E/S)
=ϑ_c^(E'/S');
* if α:E→ E' is an isogeny of elliptic curves over
a connected base S whose degree is prime to c, then
α_∗ϑ_c^(E/S)=ϑ_c^(E'/S);
* ϑ_-c=ϑ_c, ϑ_1=1, and if c=m· d
with m,d≥ 1, then [× m]_∗ϑ_c=
ϑ_c^m^2 and ϑ_d∘ [× m]=ϑ_c
/ϑ_m^d^2 (in particular, [× c]_∗ϑ_c = 1);
* if τ∈ and E_τ/ is the elliptic curve
whose points are /+τ, then
ϑ_c^(E_τ/) is the function
(-1)^c-1/2Θ(u,τ)^c^2Θ(c u,τ)^-1
where
Θ(u,τ)=q^1/12(t^1/2-t^-1/2)
∏_n>0(1-q^n t)(1-q^nt^-1)
and q=exp(2π i τ), t=exp(2π i u).
Since ϑ_1=1, we fix an integer c>1 such that (c,6)=1 and let
={ℓ: ℓ prime such that ℓ| c}.
Let x=(a,b)+^2∈ (_^2/^2)' and let
f=[x]∈_^2(V_'). Define a Kato-Siegel unit associated
to f by letting cf(τ)∈_^× be the function
cf(τ)=ϑ(aτ+b,τ).
Then, for all γ∈_2() and all γ∈ T_ with
integral coefficients, we have
cf|γ^-1(τ)=cf(γ^-1τ).
For τ∈, define the lattice Λ_τ=τ+⊂. We first observe that cf is well-defined since
ϑ_c(aτ +b,τ)
is doubly periodic with respect to a and b since
these are functions on the torsion points of the elliptic curve's
complex uniformization as /Λ_τ. The following
proof will rely heavily on the third property of Theorem <ref>
which we refer to as Property 3 for the remainder of this proof.
Let γ=LR(m) for some positive integer m prime to c and
let ϕ_m:E_τ→ E_τ/m be the quotient isogeny, i.e.
the projection /Λ_τ→/Λ_τ/m.
Property 3 tells us that since ϕ_m has degree prime to c,
ϕ_m∗ϑ_c^(E_τ/)=ϑ_c^(E_τ/m/).
Evaluating the right side at aτ/m + b
gives cf(τ/m). The left hand side evaluated
at aτ/m + b is equivalent to the product of
ϑ_c^(E_τ/) evaluated over the preimage of ϕ_m at
this point, i.e. (aτ/m + b)+iτ/m for 0≤ i≤ m-1.
ϕ_m∗ϑ_c^(E_τ/)(aτ/m+b)
= ∏_i=0^m-1ϑ_c^(E_τ/)((a+i)τ/m+b)
= ∏_i=0^m-1c[((a+i)/m,b)+^2](τ)
= cf| UL(m)^-1(τ).
The case of γ=UL(m) follows from the same analysis using the isogeny
corresponding to /Λ_τ→/(τ
+(1/m))≅/Λ_mτ where the last isomorphism
is scaling by m. These two cases implies that cf is
invariant under scalar matrices, so we have
cf| UL(m)^-1(τ) =cf| LR(m)(τ)
cf| LR(m)^-1(τ) =cf| UL(m)(τ).
Thus, cf(τ) is invariant with respect to all γ∈ T_
with positive entries. The negative entries will follow from the
_2()-invariance.
The _2()-invariance can be proven by observing
the invariance for generators of _2(), namely
S = [ 0 -1; 1 0 ];
T = [ 1 1; 0 1 ].
For T, we have on the one hand of Property 3,
cf| T(τ)
=c[(a,a+b)+^2](τ)
=ϑ_c^(E_τ/)(aτ+a+b)
=ϑ_c^(E_τ/)(a(τ+1)+b).
On the other hand, we have
c(a,b)(Tτ)
=c(a,b)(τ+1)
=ϑ_c^(E_τ+1/)(a(τ+1)+b).
The equality then follows from the fact that E_τ/≅ E_τ+1/ and this isomorphism arises from
the identity map → since the defining
lattices are identical.
For S, we have on one side of Property 3,
cf| S(τ)
=c[(b,-a)+^2](τ)
=ϑ_c^(E_τ/)(bτ-a).
On the other side, we have
c(a,b)(Sτ)
=c(a,b)(-1/τ)
=ϑ_c^(E_-1/τ/)(a(-1/τ)+b).
The lattices Λ_-τ^-1 and Λ_τ are
homothetic via multiplication by τ. This homothety
on the complex points of the elliptic curve identifies
(-1/τ)a+b∈/Λ_-τ^-1 with
bτ-a∈/Λ_τ, thus proving the relation.
By Corollary <ref> and Corollary <ref>
restricted to the case when γ is a scalar matrix, there exists a
measure in (',_^×) generalizing the Kato-Siegel
units of Corollary <ref>. In fact, Lemma <ref>
tells us that this distribution is G_-invariant.
The Kato-Siegel distribution μ_KS∈ H^0(G_,
(',_^×)) is the unique distribution
generalizing the assignment of Corollary <ref>.
§ DEDEKIND-RADEMACHER MEASURES AND DARMON-DASGUPTA MEASURES
In this section, we turn to group cohomology. First, we define a
method of smoothing cohomology class by an element of [G].
Then, we review the construction of the Dedekind-Rademacher
measures in <cit.> and define its δ-smoothed counterpart.
Lastly, we will recall the definition of the Darmon-Dasgupta measures which
we formulate as a cocycle to finally write down an explicit comparison
between the two constructions.
§.§ Smoothing Cohomology
We first note that the following process is a valid construction for
any group G, though we will only consider the case when G=_2^+().
Recall that for a subgroup H⊂ G be a subgroup and δ=∑ n_g· [g]
∈[G], we defined H_δ⊂ H as
H_δ=(⋂_g∈ G;n_g≠ 0 gHg^-1)∩ H.
Letting M be a left G-module, the map
(-)^δ:M → M
m ↦ m^δ:= δ∗ m
restricts to a group homomorphism M^H→ M^H_δ.
Now, let M and N be left G-modules with an H-homomorphism
η:N→ M. This homomorphism induces a map η:N^G → M^H.
Let F^∙(G,M) be the complex defined by the following data:
* F^n(G,M):= {Functions f:G^n+1→ M}
* d^n:F^n(G,M)→ F^n+1(G,M) where
f↦ (d^n(f):(g_0,…,g_n+1)
↦∑_i=0^n+1 (-1)^i f(g_0,…,ĝ_̂î,…,g_n+1)),
where (g_0,…,ĝ_̂î,…,g_n+1) denotes the n+1-tuple
obtained by omitting the i-th entry. F^∙(G,M) has a
natural left G-action where for γ∈ G, 𝐠∈ G^n+1,
and f∈ F^n(G,M), we have
(γ∗ f)(𝐠)=γ∗(f(γ^-1𝐠)).
The homogeneous cochain complex whose cohomology yields group cohomology
is obtained by taking G-invariants of F^∙(G,M), i.e. the
complex C^∙(G,M) where
C^n(G,M) := F^n(G,M)^G
d^n := d^n|_C^n(G,M).
Taking η:N→ M as above, we obtain an H-module homomorphism
η_∗:F^∙(G,N) → F^∙(G,M)
f ↦η∘ f.
One observes that this map commutes with the differentials
of F^∙(G,N) and F^∙(G,M). As above, restricting
to the G-invariants yields a map
η_∗:C^∙(G,N)→ F^∙(G,M)^H.
Then, the action of δ on F^∙(G,M) induces a map
(-)^δ:F^∙(G,M)^H→ F^∙(G,M)^H_δ.
Last but not least, we can restrict functions on G^n+1
to functions on H_δ^n+1 to obtain the restriction map
:F^∙(G,M)^H_δ→ C^∙(H_δ,M).
Putting this all together, we define the following map on
cochain complexes.
Let δ∈[G] and let H⊂ G. Define
H_δ⊂ H as above and let M and N be left
G-modules equipped with a group homomorphism η:N→ M
that is H-invariant. We define
(δ,η)^∙: C^∙(G,N) → C^∙(H_δ,M)
as the compositum
(δ,η):=∘ (-)^δ∘η_∗.
Since each of the maps used in defining (δ,η) commutes with
the differentials on their respective complexes, the following
is true.
(δ,η) induces a map on cohomology groups
(δ,η)^∙: H^∙(G,N)→ H^∙(H_δ,M).
§.§ Construction of the Dedekind-Rademacher cohomology class
For the remainder of the paper, fix the following data:
* c>1 an integer coprime to 6,
* N>1 an integer coprime to c,
* p∈ a prime that does not divide Nc,
* δ=∑_D| N, D>0 n_D· [LR(D)] ∈[G]
satisfying
∑_D| N, D>0 n_D· D=0,
* is the set of primes dividing c,
is the set of primes dividing cN,
and is the set of all primes excluding
p.
Let be the trivial right G_-module, _ be the
right G_-module of holomorphic functions on the complex
upper half plane , and _^× be the right G_-module
of non-vanishing holomorphic functions on . We have the
following exact sequence of G_-modules:
0→→_→_^×→ 1,
where _→_^× is the exponential map f↦(2π i f).
By Proposition <ref>, the functor (',-) is
exact, so we have a resulting exact sequence of distributions as left G_-modules:
0→(',)→(',_)→(',_^×)→ 1.
The associated long exact sequence of group cohomology gives us the
boundary map
∂_:H^0(G_,(,_^×))→
H^1(G_,(,)).
The Dedekind-Rademacher cohomology class μ_DR∈ H^1(G_,
(',)) is defined as μ_DR := ∂_(μ_KS).
The δ-smoothed Dedekind-Rademacher cohomology class is
defined as μ_DR^δ :=(δ,(i_^)^∗)^1(μ_DR).
For a right G-module A, we have an explicit description of
(δ,(i_^)^∗)^0:H^0(G_,(',A))
→ H^0(G_(N),(,A)). For μ∈ H^0(G_,(',A)),
we have
(δ,(i_^)^∗)^0(μ)(γ)
=∑_D| N n_D LR(D)∗((i_^)^∗(LR(D^-1)γ∗μ))
=δ∗((i_^)^∗μ).
§.§ The Darmon-Dasgupta Cocycle
Let Γ^p_0(N)⊂_2([p^-1]) be the congruence subgroup
Γ^p_0(N) := {γ∈_2([p^-1])|γ≡[ ∗ ∗; 0 ∗ ]N}.
For all k∈, let E_k denote the weight k-Eisenstein series and
let E_k^δ(z):=-24∑_D| Nn_D D· E_k(Dz). Lastly,
let _0:=_p^2∖ p_p^2 be the set of primitive vectors.
_0 is a fundamental domain of the D(p)-action on V_p'.
We warn the reader that the following theorem's action of Γ_0^p(N)
on V_p does not coincide with the rest of this paper. We will reconcile
this difference shortly.
<cit.><cit.>
There exists a unique collection of p-adic measures on the space
V_p', indexed by pairs (r,s)∈Γ_0^p(N)i∞×Γ_0^p(N)i∞
denoted by μ̃_δ{r→ s} satisfying the following properties:
* For every homogeneous polynomial h(x,y)∈[x,y] of
degree k-2,
∫__0 h(x,y) dμ̃_δ{r→ s}(x,y)
=Re((1-p^k-2)∫_r^s h(z,1)E_k^δ(z) dz)
* For all γ∈Γ_0^p(N) and all compact, open U⊂
V_p',
μ̃_δ{γ r→γ s}(γ U)
=μ̃_δ{r→ s}(U).
* For all compact, open U⊂ V_p',
μ̃_δ{r→ s}(pU)=μ̃_δ{r→ s}(U).
Furthermore, for all compact open sets of the form U=
u
v+p^n_p^2⊂_0
and αβ∈Γ_0(N)· i∞,
μ̃_δ{i∞→αβ}(U)
=-12∑_ℓ=0^β-1_1( α/β(ℓ+v/p^n)-u/p^n)∑_D| N
n_d_1( D/β(ℓ+v/p^n)).
The explicit formula at the end of Theorem <ref> was computed
by Dasgupta in <cit.> while the rest of the theorem is from
<cit.>. We note that the first criterion regarding the moments of
μ̃_δ is interchangeable with the
explicit formula for compact open sets. To navigate between the two,
one approximates characteristic functions of compact opens
using polynomials to take the limit of these approximations using
the first criterion. This is precisely the method carried out in <cit.>
to derive the formula on compact opens in the first place.
Conversely, one can calculate moments of the measure by taking
refinements of _0 with smaller and smaller balls. The
resulting limit of Riemann sums recovers the moments.
Assumption 2.1 of <cit.> further insists that δ be a degree
zero divisor, i.e. ∑_D n_D=0. The two conditions on δ
together make the δ-smoothed Ramanujan Δ-function
Δ_δ(τ):= ∏_D| NΔ(Dτ)^n_D
into a modular unit that has no pole or zero at the infinity cusp.
However, in <cit.> and <cit.>, it is remarked that the
degree zero condition is unnecessary in carrying out calculations.
The fact that Theorem <ref> does not require the
degree zero condition provides some justification for this observation.
We further elaborate in Remark <ref>.
As mentioned above, μ̃_δ is built by considering V_p as
column vectors, which gives a natural left action of Γ_0^p(N) that
differs from our formulation. We reconcile this by letting V_C be ^2
considered as column vectors and define ϕ:V→ V_C where ϕ(v)= Sv^T, i.e.
the transpose multiplied by the matrix
S=[ 0 -1; 1 0 ].
We note that for all γ∈ G, we have S(γ^-1)^T S^-1
=γ, so taking the S-conjugate of the inverse transpose is an
automorphism of G. Taking the left action of G on V as defined
in Section 1, i.e. γ∗ v=vγ^-1,
it follows that ϕ is an isomorphism of left G-modules.
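As a quick sanity check, the conjugation identity used above can be tested numerically; the sketch below (illustrative only) verifies S(γ^-1)^TS^-1=γ on a few determinant-one matrices, for which the inverse-transpose has integer entries.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv_transpose(g):
    """(g^{-1})^T for a 2x2 integer matrix of determinant 1."""
    (a, b), (c, d) = g
    return [[d, -c], [-b, a]]

S, S_inv = [[0, -1], [1, 0]], [[0, 1], [-1, 0]]

# a few determinant-one examples (T, S, and a generic unimodular matrix)
for g in ([[1, 1], [0, 1]], [[0, -1], [1, 0]], [[2, 1], [1, 1]]):
    assert mat_mul(mat_mul(S, inv_transpose(g)), S_inv) == g
print("S (gamma^{-1})^T S^{-1} = gamma on all test matrices")
```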
Since ()=(V_p), this defines a
G-equivariant map ϕ:((V_C)_',)
→(',). Notice that for all γ∈Γ_0^p(N) and
U=(a,b)+_p^2⊂ V_p', with α/β=γ i∞,
ϕ(μ̃_δ{i∞→γ i∞})([U]) is equal to
ϕ(μ̃_δ{i∞→γ i∞})([U])
=-12∑_ℓ=0^β-1_1( α/β
(ℓ+a)+b)∑_D| N
n_D_1( D/β (ℓ+a)).
Interpreting ϕ(μ̃_δ{γ i∞→γ' i∞})
as an element of C^1(Γ_0^p(N),(',)), we have
the following proposition.
There exists a unique cocycle μ_DD,δ∈ Z^1(G_(N),
(',))
that coincides with ϕ∘μ̃_δ on Γ_0^p(N)^2.
Using the homogeneous resolution for our group complex, we define
a function C:G_^2→(',) where for all
γ_1,γ_2∈ G_=_2^+([p^-1]),
C(γ_1,γ_2)(f)=μ̃_δ{γ_1i∞→γ_2i∞}(ϕ^∗ f).
Since μ̃_δ is a partial modular symbol, C automatically
satisfies the cocycle relation, i.e. for all γ_1,γ_2,
γ_3∈ G_,
C(γ_1,γ_2)+C(γ_2,γ_3)=C(γ_1,γ_3).
However, we still need to check that C is G_-invariant.
We note that Properties 2 and 3 of Theorem <ref> already tell
us that C is invariant under Γ_0^p(N) and D(p),
respectively, so it suffices to check
that C is UL(p)-invariant. In fact, it is sufficient to show this
for the case when the first input is the identity, i.e.
C(Id,γ)([U]| UL(p))=C(UL(p),UL(p)γ)([U]).
Furthermore, we can reduce to the case when U=(a,b)+_p^2⊂_0
via the D(p)-invariance.
In this case, we have
[U]| UL(p)=[U · UL(p)]=∑_i=0^p-1[(a+_p)×((b+i)/p+_p)]
| D(p).
Suppose γ i∞ = α/β. The corresponding sum is
then
C(Id,γ)([U]| UL(p))
=∑_i=0^p-1C(Id,γ)([(a,(b+i)/p)+_p^2])
= ∑_i=0^p-1-12∑_ℓ=0^β-1_1 (α/β(ℓ+a)+b/p+i/p)
∑_D| N n_D _1(D/β(ℓ+a))
= -12∑_ℓ=0^β-1(∑_i=0^p-1_1 (α/β(ℓ+a)
+b/p+i/p))
∑_D| N n_D _1(D/β(ℓ+a))
= -12∑_ℓ=0^β-1_1 (pα/β(ℓ+a)+b)
∑_D| N n_D _1(D/β(ℓ+a))
= C(UL(p),UL(p)γ)([U]).
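The collapse of the inner sum over i in the computation above is exactly the distribution relation for _1. Under the convention _1(x)={x}-1/2 for x∉ and _1(x)=0 for x∈ (an assumption of this sketch), it is easy to test numerically; the specific rationals below are arbitrary.

```python
from fractions import Fraction

def B1(x):
    x = Fraction(x)
    frac = x - (x.numerator // x.denominator)
    return Fraction(0) if frac == 0 else frac - Fraction(1, 2)

p = 5
y, b = Fraction(3, 11), Fraction(4, 7)   # y plays the role of (alpha/beta)*(l + a) above
lhs = sum(B1(y + (b + i) / p) for i in range(p))
rhs = B1(p * y + b)
assert lhs == rhs
print(lhs, rhs)
```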
§.§ Main Theorem
Since μ_DR^δ is a cohomology class with values in
(',), an exact comparison with the Darmon-Dasgupta
measures requires a pullback to the test functions (')
via the map i_^. We denote this restriction by μ_DR^pδ.
On the other hand, μ_DD,δ makes no reference to the integer
c that is used to construct μ_DR by way of μ_KS. To
accommodate, we must smooth the cocycle μ_DD,δ by
c^2[Id]-[D(c)]∈[G] as in Example <ref>. The
resulting cocycle is denoted (μ_DD,δ)_c:=(c^2[Id]-[D(c)])∗μ_DD,δ while its cohomology class is denoted [(μ_DD,δ)_c].
With c, N, p, and δ as delineated at the beginning
of this section, we have the following equality of cohomology classes.
12·μ^pδ_DR = - [(μ_DD,δ)_c].
§ COMPUTATION OF COCYCLES
We now compute formulae for cocycle representatives of μ_DR
and μ_DR^δ. We remind the reader that we elect to use the
homogeneous complex in computing cohomology as in the beginning of Section 2.
We now take the time to recall how to compute the coboundary map
via the Snake Lemma. We start with the exact sequence
0→(',)→(',_)→(',_^×)→ 1.
For μ_KS∈ H^0(G_,(',_^×)),
we take Ẽ∈ C^0(G_,(',_)) such that
exp(2π i Ẽ)=μ_KS.
Then, d^0(Ẽ)∈ C^1(G_,(',_)) is in the kernel of
f↦exp(2π i f), so d^0(Ẽ) lies within C^1(G_,(',
)). d^0(Ẽ) is a cocycle representative of μ_DR :=
∂_(μ_KS).
In this section, we will follow the steps above by
* Writing the Kato-Siegel distribution as a product of the distributions
from <ref> to define a section Ẽ;
* Reviewing classical calculations of period of Eisenstein
series;
* Calculating the values of ^δ:=d^0Ẽ^δ and
making a direct comparison with the Darmon-Dasgupta cocycle;
* Calculating the values of _:=d^0Ẽ on
generators of G_ to completely characterize the cocycle.
§.§ Decomposition of Kato-Siegel Units
Recall the Theta function Θ:×→ from
Theorem <ref> where for all u∈, τ∈,
t:=(2π i u), and q=:=(2π i τ),
Θ(u,τ):=q^1/12(t^1/2-t^-1/2)∏_n>0 (1-q^nt)(1-q^nt^-1).
Define θ:^2→_ as the function which
assigns to (a,b)∈^2 the holomorphic function
θ(a,b)(τ)=Θ(aτ+b,τ)
In Theorem <ref>, Θ(u,τ) was used
to define the Kato-Siegel units by assigning to each coset
v=(a,b)+^2∈ (_^2/^2)' the modular unit
c[v](τ)=(-1)^(c-1)/2θ(a,b)(τ)^c^2/θ(ca,cb)(τ).
We will now simplify the q-expansion of θ(a,b).
For the sake of simplicity, we represent v with (a,b)
non-negative rational numbers since c[v]
only depends on the coset v and not its representative (a,b).
We also recall that for any rational number b, ζ^b
denotes the complex number e^2π i b.
§.§.§ Simplification of θ(a,b)
We begin with the expansion of θ(a,b) as
q^1/12(q^a/2ζ^b/2-q^-a/2ζ^-b/2)
∏_n>0(1-q^n+aζ^b)(1-q^n-aζ^-b).
To understand the behavior of θ(a,b), we use the framework
of Definition <ref>. Specifically, we can embed ^ab
into by mapping the fixed primitive roots of unity ζ_n
to (2π i/n). We extend this embedding to an embedding of
Q_n into _ by identifying q_n with (2π i τ/n).
These embeddings commute with the transition maps i_n,m for all
n,m∈, so we get corresponding embeddings of F, P, Q, and
U into _. q:=exp(2π i τ) then corresponds to the
image of q_1 which is a pseudouniformizer of the local ring P.
Our goal is to express θ(a,b) as a product of the form
q^A B ∏_x>0(1-q^xC_x).
This corresponds to the projection of F^× onto the product
Q× U. These projections are multiplicative group homomorphisms,
so the resulting projections of μ_KS will decompose the Kato-Siegel
distribution into a product of simpler distributions.
a being non-negative means we only need to manipulate the following factors
of θ(a,b).
q^a/2ζ^b/2-q^-a/2ζ^-b/2 =q^-a/2(-ζ^-b/2)(1-q^aζ^b)
∏_n=1^a1-q^n-aζ^-b =∏_n=1^a-q^n-aζ^-b
(1-q^a-nζ^b)
(<ref>)× (<ref>)
=q^1/2(-a+a(a+1-2a))ζ^1/2(1-b+a(1-2b))
×([(a,b)+^2])∏_n=1^a(1-q^nζ^b).
Isolating the factors of q-power yields:
_qθ(a,b)=1/2( 1/6-a+a(a+1-2a) )
A direct calculation shows that
_qθ(a+1,b)=-a-1/2+_qθ(a,b).
The expression 12(_2([a+])-a^2) shares this transformation
property and coincides with _qθ(a,b) when 0≤ a<1, so letting
f=[(a,b)+^2], we have
_qθ(a,b)=1/2((_2⊗_0)(f)-a^2).
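This q-order formula is easy to test numerically from the product expansion: for 0≤ a<1 it predicts _qθ(a,b)=1/12-a/2. The Python sketch below is illustrative only; it truncates the product, evaluates at τ=iy for moderately large y, and assumes the convention _2(x)={x}^2-{x}+1/6 for the periodic second Bernoulli function.

```python
import cmath, math

def theta_trunc(a, b, y, terms=60):
    """Truncated product for theta(a,b)(tau) = Theta(a*tau + b, tau) at tau = i*y, 0 <= a < 1."""
    q = math.exp(-2 * math.pi * y)              # the nome is real and positive for tau = i*y
    z = cmath.exp(2j * math.pi * b)             # zeta^b
    val = q ** (1 / 12) * (q ** (a / 2) * z ** 0.5 - q ** (-a / 2) * z ** -0.5)
    for n in range(1, terms):
        val *= (1 - q ** (n + a) * z) * (1 - q ** (n - a) / z)
    return val

def B2(x):
    x = x - math.floor(x)
    return x * x - x + 1 / 6

a, b = 0.3, 0.7
predicted = 0.5 * (B2(a) - a * a)               # equals 1/12 - a/2 for 0 <= a < 1
for y in (5.0, 10.0):
    q = math.exp(-2 * math.pi * y)
    estimate = math.log(abs(theta_trunc(a, b, y))) / math.log(q)
    print(y, estimate, predicted)
```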
Recalling the Theta-unit distribution u, the remaining factors of θ(a,b) are:
(2π i·1/2(1-b+a(1-2b)))·(f)
= (2π i (-B_1(b)· (1+a)+b/2))·(f)
∏_x≡ a*
x>0(1-q^xζ^b)
∏_x≡ a*
x<0(1-q^-xζ^-b)
= u(f)(τ).
The following proposition summarizes the above.
For all (a,b)∈^2 such that a,b≥ 0 and f=[(a,b)+^2], we
have the equality
θ(a,b)(τ)=q^1/2((_2⊗_0)(f)-a^2)·ζ^-B_1(b)· (1+a)+b/2·(f)×
u(f)(τ)
§.§.§ Simplification of μ_KS
With the benefit of hindsight, we first define the following distribution.
π_c∈(V_',) is defined as the
^×-invariant distribution which maps [(a,b)+^2]
∈_^2(V_') to B_1(bc)(cB_1(a)-B_1(ac)).
Note that we can write π_c as a difference of tensors of
the distributions B_1 and the pullback of B_1 with respect
to the action of c on (_,_0). This makes
π_c invariant under the action of T_.
Returning our attention to the Kato-Siegel units,
Proposition <ref> tells us that the first non-zero
coefficient of the q-expansion of c[(a,b)+^2] is
(-1)^(c-1)/2·_c([(a,b)+^2])·ζ^-c^2B_1(b)· (1+a)+bc^2/2
+B_1(bc)· (1+ac)-bc/2
Writing (-1)^(c-1)/2 as ζ^(c-1)/4 and setting aside the
-factor for now, we are left with ζ raised to the
following power.
c-1/4- c^2B_1(b)· (1+a)+bc^2/2+B_1(bc)· (1+ac)-bc/2
=c-1/2 -c^2B_1(b)(1+a)+B_1(bc)(1+ac+(c-1)/2)
≡ -c^2B_1(b)(1+a)+cB_1(bc)+B_1(bc)(ac-(c-1)/2)
≡ -c^2B_1(b)a+B_1(bc)(ac-(c-1)/2).
As expected, we see that this value is invariant modulo when a or b is
translated by 1. However, translating b by 1/c also leaves the value invariant
modulo . π_c shares the same invariance property modulo . Furthermore,
letting a=x+jc and b for x,b∈ [0,1c) and j∈{0,…,c-1}
we see that
(<ref>)=B_1(bc)(j-(c-1)/2)=π_c([(a,b)+^2]).
Consequently, for all f∈(V_'), the first non-zero coefficient of μ_KS(f) is
_c(f)·ζ^π_c(f).
By Proposition <ref>, we can simplify the q-order of
μ_KS(f) as
_qμ_KS(f)
=1/2(_2⊗_0)_c(f).
We summarize the simplifications by the following proposition.
Let (_2⊗_0)_c, _c, and u_c denote the
smoothed counterparts of _2⊗_0, , and
the Theta-unit distribution u. Then, we have the following
unique decomposition:
μ_KS=
q^1/2(_2⊗_0)_c·(
ζ^π_c·_c)·
u_c.
§.§ Cocycle Representatives of μ_DR and μ_DR^δ
Proposition <ref> allows us to define Ẽ∈(',_) such that exp(2π i Ẽ)=μ_KS.
We note that by the exactness of (<ref>), Ẽ is the
unique section of μ_KS up to an integer-valued distribution.
ρ,Ẽ∈(',_) are the distributions
which map f∈(V_') and τ∈ as follows.
ρ(f)(τ) := τ/2(_2⊗_0)_c(f)
+π_c(f)
Ẽ(f)(τ) := ρ(f)(τ)+1/2π i
(δ_0⊗)_c(f)
+1/2π i∫_i∞^τ(u_c(f)).
Note that is implicit in the definition of Ẽ via the integer c.
Abusing notation, we also denote by Ẽ the 0-cochain G_→(V_',
_) satisfying γ↦γ∗Ẽ.
Since each term of Ẽ is invariant under the action by
scalar matrices in G_, we can reduce all calculations from this
point onward to ^2-invariant test functions.
The Dedekind-Rademacher cocycle _∈ Z^1(G_,(,_))
is the cocycle _ = d^0Ẽ.
By Proposition <ref>, _ is a cocycle representative
of μ_DR. We are interested in computing the δ-smoothed counterpart
of _ where δ=∑_D| N n_D· [LR(D)]∈[G] such that
∑_D| N n_D· D=0. Recall the smoothing map (δ,(i_^)^∗)
of Definition <ref>.
The δ-smoothed Kato-Siegel distribution is μ_KS^δ
:=(δ,(i_^)^∗)^0 (μ_KS)∈(V_',_^×).
For f∈(') and D| N, we define f^D∈(')
as i_^(f| LR(D)). We extend this to test functions in (V_')
via the map ϕ, i.e. f^D∈(V_') is the test
function ϕ(i_^ (ϕ^-1(f)| LR(D))). If f=[(a,b)+^2]∈(V_'), f^D=[(a,bD)+^2].
For all f∈(V_'), _q(μ_KS^δ(f))=0.
By Proposition <ref>, _q(μ_KS(f))
=1/2(_2⊗_0)_c(f).
Since (_2⊗_0)_c(f^D)=(_2⊗_0)_c(f)
for all f∈(V_') and D| N,
the q-order of μ_KS^δ(f) is
_q(μ_KS^δ(f))
=∑_D| N n_D· D·1/2(_2⊗_0)_c(f^D)
=∑_D| N n_D· D·1/2(_2⊗_0)_c(f)=0.
We define Ẽ^δ:=(δ,(i_^)^∗)^0(Ẽ)∈(',_). By Proposition <ref>,
for all f∈(V_'),
Ẽ^δ(f)(τ) =∑_D| N n_D·Ẽ(f^D)(Dτ)
=∑_D| N n_D ·(π_c(f^D)
+1/2π i (δ_0⊗)_c(f^D)+1/2π i∫_i∞^z (u_c(f^D)(Dτ))).
We have exp(2π iẼ^δ)=μ_KS^δ since
(δ,(i_^)^∗) preserves sections. Thus, the following is a cocycle
representative of the δ-smoothed Dedekind-Rademacher cohomology class.
We define ^δ:=d^0Ẽ^δ∈
Z^1(G_(N),(',)).
§.§ Periods of Eisenstein Series
We now review some classical calculations regarding periods of weight
2 Eisenstein series. These calculations will help us temper the line
integrals showing up in Ẽ.
Let r∈_2()· i∞ and f∈(V'). We are interested in
computing the following integral
1/2π i∫_r^i∞(u(f)).
As we are working with Eisenstein series, it is important
to clarify the path of integration. All following integrals
will have an endpoint at i∞, so for a rational cusp
r, we specify once and for all that ∫_r^i∞
denotes the integral along the path parametrized by it+r
where t∈_≥ 0.
Let D be a positive integer and (a,b)+^2∈ (/)^2.
Define E^+_D(a,b)(τ) and E^-_D(a,b)(τ) by the q-expansions
E^±_D(a,b)(τ):=-2π i D∑_x≡± a*
x>0
x∑_m=1^∞ (q^Dxζ^± b)^m.
We define E_D(a,b)(τ):= E^+_D(a,b)(τ)+E^-_D(a,b)(τ).
Notice that for all [(a,b)+^2]∈(V'), we have the identity
(u([(a,b)+^2])(Dτ)) = E_D(a,b)(τ).
To compute the periods of (u(f)(Dτ)), we will make extensive
use of Mellin transforms of E_D(a,b)(τ) as defined below.
Let f be a function on the upper-half plane that is periodic with respect to
translation by m for some integer m so that we have a q-expansion
f(τ)=∑_k=0^∞ a_k (2π i τ k/m).
Let f̃ be the function f̃(τ)=f(τ)-a_0.
For s∈ with real part greater than 1 and y the imaginary part
of τ, we define the Mellin transform
(f,s)=∫_0^i∞f̃(τ)· y^s-1 .
We now reduce the Mellin transform of E_D(a,b) to a sum of Mellin
transforms of E_1 to use Proposition 2.5.1 of <cit.> to
take the limit of (E_1(a,b),s) as s approaches 1.
§.§.§ Simplifications
Let D>0 be an integer. Then, for all a,b∈ and α/β∈,
∫_α/β^i∞ E_D(a,b)(τ)· y^s-1
=D^1-s∫_Dα/β^i∞ E_1(a,b)(τ)· y^s-1 .
By a standard change of variables, we have
∫_α/β^i∞ E_D(a,b)(τ)· y^s-1 = ∫_α/β^i∞ D· E_1(a,b)(Dτ)· y^s-1
= D^1-s∫_Dα/β^i∞ E_1(a,b)(τ)· y^s-1 .
We recall the following zeta functions for x∈/ and s∈ with real
part greater than 1.
Z(s,x):= ∑_n=1^∞exp(2π inx)/n^s; ζ(s,x):= ∑_k≡ x, k>0 k^-s.
Let α,β∈ such that β>0 and (α,β)=1.
Let y denote the imaginary part of τ.
For all (a,b)∈ (/)^2, and ℓ=0,…,β-1,
define R(a,ℓ,β)=(a+ℓ)/β and Q(a,b,ℓ,r)
=b+r(a+ℓ). Then, we have
∫_α/β^i∞ E_1(a,b)(τ)· y^s-1
=β^1-s∑_ℓ=0^β-1(E_1(R(a,ℓ,β),
Q(a,b,ℓ,α/β)),s).
Consider the case when s has real part greater than 2 so that
everything is absolutely convergent.
We first consider the integral of E_1^+(a,b)(τ). The case of
E_1^-(a,b) will be entirely symmetric.
∫_α/β^i∞ E_1^+(a,b)(τ)· y^s-1
= -2π i ·∫_α/β^i∞∑_x≡ a
x>0
x∑_m=1^∞ (q^xζ^b)^m y^s-1
= -2π i ·∫_0^i∞∑_x≡ a
x>0
x∑_m=1^∞ (q^xζ^b+xα/β)^m y^s-1
= -2π i ·∑_x≡ a
x>0
x∑_m=1^∞ζ^m(b+xα/β)·∫_0^i∞q^mx y^s-1
= Γ(s)/i(2π)^s·∑_n=0^∞ (a+n)^1-s·∑_m=1^∞
m^-sζ^m(b+xα/β).
All series are absolutely convergent, so we can decompose the sum over
n by residue classes modulo β.
∫_α/β^i∞ E_1^+(a,b)(τ)· y^s-1
= Γ(s)/i(2π)^s·∑_ℓ=0^β-1∑_n=0^∞ (β R(a,ℓ,β)+β n)^1-s∑_m=1^∞
m^-sζ^mQ(a,b,ℓ,α/β)
= Γ(s)/i(2π)^s·β^1-s∑_ℓ=0^β-1ζ(R(a,ℓ,β),1-s)·
Z(Q(a,b,ℓ,α/β),s).
The same computation holds for E^-_1(a,b)(τ). The proof
is complete once we compare the summands with the case when
α/β=0.
With the same set-up as Proposition <ref> but with
a positive integer D,
∫_α/β^i∞ E_D(a,b)(τ)· y^s-1
= (Dβ)^1-s∑_ℓ=0^β/D-1(E_1(R(a,ℓ,β/D),Q(a,b,ℓ,Dα/β)),s).
§.§.§ Periods of (u(a,b))
Let γ∈_2() be of the form
γ=[ α x; β y ].
For convenience, we insist that β>0 and define r=α/β.
Notice that this implicitly ignores the trivial case when r=i∞.
We now compute
1/2π i∫_r^i∞(u(a,b)(τ))
=1/2π i∫_r^i∞ E_1(a,b)(τ)
By Corollary <ref>, we have the equality
∫_r^i∞ E_1(a,b)(τ)· y^s-1
=β^1-s∑_ℓ=0^β-1(E_1(R(a,ℓ,β),Q(a,b,ℓ,α/β)),s).
The final ingredient for the computation of periods is the following
proposition which provides the integral of (u(f)) along the
imaginary axis.
(<cit.> Proposition 2.5.1)
For all a,b∈/ and f=[(a,b)+^2],
1/2π i(E_1(a,b),1)
=(_1⊗_1)(f)
-1/2π i
(δ_0⊗)(f| S -f).
We define the following Dedekind sums taking inspiration from <cit.>
and the preceding Proposition <ref>.
For α,β∈ with β≠ 0 and a,b∈,
let C(α,β,a,b) denote the Dedekind sum
C(α,β,a,b):=∑_ℓ=0^β-1_1(R(a,ℓ,β))
_1(Q(a,b,ℓ,α/β)).
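Concretely, C(α,β,a,b) is a finite sum of products of sawtooth values, and for a=b=0 it reduces to the classical Dedekind sum s(α,β). The sketch below computes it in exact rational arithmetic, again assuming the convention _1(x)={x}-1/2 for x∉ and _1(x)=0 for x∈.

```python
from fractions import Fraction

def B1(x):
    x = Fraction(x)
    frac = x - (x.numerator // x.denominator)
    return Fraction(0) if frac == 0 else frac - Fraction(1, 2)

def C(alpha, beta, a, b):
    """C(alpha,beta,a,b) = sum_{l=0}^{beta-1} B1((a+l)/beta) * B1(b + (alpha/beta)*(a+l))."""
    a, b = Fraction(a), Fraction(b)
    total = Fraction(0)
    for l in range(beta):
        R = (a + l) / beta
        Q = b + Fraction(alpha, beta) * (a + l)
        total += B1(R) * B1(Q)
    return total

print(C(1, 3, 0, 0))                              # classical Dedekind sum s(1,3) = 1/18
print(C(5, 12, Fraction(1, 7), Fraction(2, 7)))   # a generic value
```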
Using Proposition <ref> to take the limit as s approaches 1,
we get:
1/2π i∫_r^i∞ (u(a,b)(τ))
=C(α,β,a,b)
+1/2π i∑_ℓ=0^β-1δ_0([R(a,ℓ,β)+])·log1-e^2π i Q(a,b,ℓ,α/β)
-1/2π i∑_ℓ=0^β-1δ_0([Q(a,b,ℓ,α/β)+])·log1-e^2π i R(a,ℓ,β).
We now consider the terms (<ref>) and (<ref>) to
simplify the periods further. We will be considering the behaviors
of these terms as ℓ varies while all other inputs are fixed,
so we simplify notation by writing R_ℓ and Q_ℓ for
R(a,ℓ,β) and Q(a,b,ℓ,α/β), respectively.
The key observation for the following will be the fact that
Q_ℓ=b+α R_ℓ.
(<ref>): If δ_0([R_ℓ+])=1, Q_ℓ≡
b, so (<ref>) can be rewritten as
∑_ℓ=0^β-1δ_0([R_ℓ+])·log1-e^2π i b.
If a is not an integer, let be a prime such that the valuation
v_(a)<0. Then, v_(a/β)<v_(ℓ/β) for all ℓ,
so R_ℓ can never be an integer. Thus, R_ℓ∈ if and only if a∈ and ℓ=0, so we have
(<ref>)=(δ_0⊗)([(a,b)+]).
(<ref>): Let d∈ such that da and db are integers. Then,
Q_ℓ∈ ⇔ -b≡α/β(a+ℓ)
⇔ -(aα+bβ)≡αℓ (mod β)
⇔ -(daα +dbβ)≡ dαℓ (mod dβ).
Thus, Q_ℓ∈ if and only if ℓ is the unique solution
mod β to the congruence -(daα+dbβ)≡ dα x
(mod dβ). Since (α,β)=1, the existence of a
solution is equivalent to d| -(daα+dbβ), i.e.
δ_0([aα+bβ+]). Thus, there is at most one
ℓ∈{0,…,β-1} such that δ_0([Q_ℓ+])=1
and such an ℓ exists if and only if δ_0([aα+bβ+])=1.
Suppose now that δ_0([aα+bβ+])=1. We have the
matrix γ∈_2() whose first column is (α,β).
We have αℓ≡ -(aα+bβ) (mod β). Since
α y≡ 1 (mod β), we have ℓ≡ -y(aα+bβ)
(mod β). Applying this to R_ℓ, we get
R_ℓ ≡a-aα y -bβ y/β
= a-a(1+β x)-bβ y/β
= -(aβ x+bβ y)/β
= -(ax+by).
Finally, we note that (q)=(-q) for all q∈,
so we conclude that
(<ref>)
=(δ_0⊗)([(a,b)+^2]|γ).
This effectively proves Theorem <ref>.
Let f=[(a,b)+^2]∈(V') and let γ∈_2() with
γ· i∞=α/β. Then,
∫_γ i∞^i∞(u(f))
= C(α,β,a,b)-
1/2π i ((δ_0⊗)
(f|γ-f)).
Having fixed α/β, we have made a choice of γ∈_2(). However, this choice is unique up to
multiplication by a power of the matrix T, and the ambiguity
is washed out because δ_0⊗ is invariant
under the action of T.
The following is a direct consequence of Corollary <ref>
and Theorem <ref>.
Let D be a positive integer and take the same assumptions
as in Theorem <ref> with the added condition that
γ∈Γ_0(D). Let γ_D∈_2() such that
γ_D=UL(D)·γ· UL(D)^-1. Then, we have the following formula.
1/2π i∫_α/β^i∞(u(f)(Dτ))
=C(α,β/D,a,b)-1/2π i (δ_0⊗)(f|γ_D-f)
§.§ ^δ Calculations
Before focusing on ^δ, we preemptively manipulate the formula
for μ_DD,δ.
Let f=[(a,b)+^2]∈(V_').
Then, for all α/β∈Γ_0(N)· i∞ with β> 0,
μ_DD,δ{i∞→α/β}(f)
=∑_D| N n_D· C(α,β/D,a,bD).
First, note that since α/β∈Γ_0(N)· i∞,
N|β. Given the formula for the Darmon-Dasgupta measure,
it is sufficient to show that for all D| N,
∑_ℓ=0^β-1_1(D(a+ℓ)/β)
_1(b+α/β(a+ℓ))
=∑_ℓ=0^β/D -1_1(D(a+ℓ)/β)
_1(Db+Dα/β(a+ℓ)).
Reindexing the left hand sum by ℓ+(β/D)η where
0≤ℓ<β/D and 0≤η< D, we have
∑_ℓ=0^β-1_1(D(a+ℓ)/β)
_1(b+α/β(a+ℓ))
=∑_ℓ=0^β/D -1_1(D(a+ℓ)/β)
∑_η=0^D-1_1(b+α/β
(a+ℓ+ηβ/D))
=∑_ℓ=0^β/D -1_1(D(a+ℓ)/β)
∑_η=0^D-1_1(b+α/β
(a+ℓ)+ηα/D).
Since D|β and (α,D)=1, the sum over
η is simply the distribution relation for _1:
∑_η=0^D-1_1(b+α/β
(a+ℓ)+η/D)
=_1(Db+Dα/β(a+ℓ)).
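Since both the proposition and the distribution relation driving its proof are finite identities between rational numbers, they can be spot-checked directly. The sketch below is illustrative only (_1(x)={x}-1/2 off the integers and 0 on them is an assumed convention, and the parameters are hypothetical); it verifies the displayed identity for every divisor D of N=6.

```python
from fractions import Fraction

def B1(x):
    x = Fraction(x)
    frac = x - (x.numerator // x.denominator)
    return Fraction(0) if frac == 0 else frac - Fraction(1, 2)

def sides(alpha, beta, D, a, b):
    lhs = sum(B1(Fraction(D, beta) * (a + l)) * B1(b + Fraction(alpha, beta) * (a + l))
              for l in range(beta))
    rhs = sum(B1(Fraction(D, beta) * (a + l)) * B1(D * b + Fraction(D * alpha, beta) * (a + l))
              for l in range(beta // D))
    return lhs, rhs

a, b = Fraction(1, 7), Fraction(2, 7)
alpha, beta, N = 5, 24, 6                     # gcd(alpha, beta) = 1 and N | beta
assert all(sides(alpha, beta, D, a, b)[0] == sides(alpha, beta, D, a, b)[1] for D in (1, 2, 3, 6))
print("identity verified for all D | 6")
```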
The following will be a recurring distribution for the remainder of
the paper.
We define Ψ,Ψ^δ∈(V_',) as the distributions
Ψ=π_c+1/2(δ_0⊗_1)_c and
Ψ^δ(f)=∑_D| N n_D·Ψ(f^D) for all f∈(V_').
We now compute ^δ. For all γ∈ G_(N),
^δ(Id,γ) is a -valued distribution, so
we can compute its value on test functions by taking a limit z→γ· i∞ approaching the cusp vertically down from i∞.
Recall that for a test function f, f^D:=f| LR(D).
^δ(Id,γ)(f) =lim_z→γ i∞Ẽ^δ(f|γ)(γ^-1z)-Ẽ^δ(f)(z)
=lim_z→γ i∞∑_D| Nn_D(
(π_c+ 1/2π i (δ_0⊗)_c)((f |γ - f)^D).
. +1/2π i∫_i∞^γ^-1z(u_c((f|γ)^D)
(Dγ^-1τ))
-1/2π i∫_i∞^z(u_c(f^D)(Dτ))).
Since only the line integrals are dependent on z∈,
the limit leaves us with an entirely self-contained expression.
^δ(Id,γ)(f) =∑_D| Nn_D((π_c+
1/2π i (δ_0⊗)_c)((f|γ - f)^D)
+1/2π i∫_γ i∞^i∞(u_c(f^D)(Dτ)))
If γ∈ T_, π_c and δ_0⊗ are fixed
by γ while γ i∞=i∞, so ^δ(Id,γ)=0.
Since T_ and Γ_0(N) generate G_(N), it is sufficient
to consider when γ∈Γ_0(N).
Let γ∈Γ_0(N) with γ· i∞=α/β and
let f be of the form [(a,b)+^2] (making f^D=[(a,bD)+^2]).
Corollary <ref> along with the observation that
- = π i B_1 leaves us with the following proposition.
For all γ∈ T_, ^δ(Id,γ)=0.
For all γ∈Γ_0(N) and f=[(a,b)+^2]∈(V_'),
^δ(Id,γ)(f) =Ψ^δ(f|γ-f)
+∑_D| N n_D
( c^2· C(α,β/D,a,bD)
-C(α,β/D,ac,bcD)).
Since G_(N) is generated by T_ and Γ_0(N),
this data completely characterizes the cocycle ^δ.
Let ^pδ∈ Z^1(G_,(',)) be the cocycle
^pδ=(i_^)_∗(^δ),
which is a cocycle representative of μ_DR^pδ. We can visually observe
now that for all γ∈ G_(N) and f∈(V_'), we have
12·^pδ(Id,γ)(f) +(μ_DD,δ)_c(Id,γ)(f)
=12·Ψ^δ(f|γ -f).
The following proposition concludes the proof of Theorem <ref>.
12·Ψ^δ is an integer valued distribution.
Let (_1)_c be the rank 1 _^×-invariant distribution
such that (_1)_c([a+])=B̃_1(ac). By a simple
calculation, we have the equality
Ψ=-c/2(δ_0⊗(c_1-(_1)_c))
-1/2((c_1-(_1)_c)⊗δ_0)
+(c_1-(_1)_c)⊗ (_1)_c.
Since Ψ is invariant under scalar matrices, we can reduce
our analysis to when f=[(a,b)+^2]∈_(V_'). If
a∈ or b∈, the equality above tells us that
12·Ψ(f)∈.
If a,b∉, we have for each D| N,
12·Ψ(f^D) =12(cB_1(a)-B_1(ac))
· B_1(bDc).
We notice that A:=cB_1(a)-B_1(ac) is an integer
for all a∈, so
12·Ψ^δ(f) =12∑_D| N n_D · A·
B_1(bDc)
= 12 · A ∑_D| N n_D·(bDc-1/2)
≡ A·∑_D| N n_D
· (12bDc-6)
=A(12bc∑_D| N n_D· D
-6∑_D| Nn_D)
=-6A∑_D| Nn_D∈.
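The integrality of A=cB_1(a)-B_1(ac) invoked above is also easy to test numerically. The sketch below assumes c odd (as is the case here, c being coprime to 6) and rational a whose denominator is coprime to c, with the same sawtooth convention as before.

```python
from fractions import Fraction

def B1(x):
    x = Fraction(x)
    frac = x - (x.numerator // x.denominator)
    return Fraction(0) if frac == 0 else frac - Fraction(1, 2)

for c in (5, 7, 25):
    for a in (Fraction(1, 3), Fraction(4, 9), Fraction(11, 12), Fraction(2, 13)):
        A = c * B1(a) - B1(c * a)
        assert A.denominator == 1               # A is an integer
        print(c, a, A)
```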
In <cit.>, the p-adic distributions are motivated by taking
periods of the dlog of a modular unit
Δ^δ(τ)=∏_D| NΔ(Dτ)^n_D.
The conditions ∑_D| N n_D= ∑_D| N n_D· D=0
are posed to ensure Δ^δ is a modular unit
with no pole or zero at i∞. Removing the condition
∑_D| N n_D=0 obscures the modular unit Δ^δ,
but we now show that one can still recover a p-smoothed modular
unit from μ_KS^δ.
Assume that δ satisfies the condition ∑ n_D· D=0 and
for all i,j∈{0,…,p-1}, let f_i,j:=[(i/p,j/p)+^2]
and f:=∑_i,j f_i,j where the sum is taken over all i,j
excluding i=0. For all D| N, we have f^D=f, so
μ_KS^δ(f)(τ)
= ∏_D| Nμ_KS(f)(Dτ)^n_D
=∏_D| Nζ^π_c(f)· u_c(f)(Dτ).
However, since f is supported away from ×, the product
over D of the ζ-exponent is precisely ζ^12Ψ^δ(f)
which is 1 as seen in Proposition <ref>. In addition,
u_c(f)(τ)=u(f)(τ)^c^2-1 since c is coprime to p. Now, we
are left with
μ_KS^δ(f)(τ)=∏_D| Nu(f)(Dτ)^c^2-1.
Taking the product of u(f_i,j)(τ) over i,j yields
u(f)(τ)
= ∏_n=1^∞ (1-q^n)^2/(1-q^pn)^2.
Letting η be the Dedekind η-function and defining
η^δ(τ)=∏_D| Nη(Dτ)^n_D,
we see that
μ^δ_KS(f)(τ)=
(η^δ(τ)/η^δ(pτ))^2(c^2-1)
=(Δ^δ(τ)/Δ^δ(pτ).)^(c^2-1)/12
One way to interpret these modular units arising without the
degree zero condition on δ is noticing that at each prime,
μ^δ_KS contains Δ smoothed by the group element
∑_D| N n_D([LR(D)]-[LR(pD)])
which is a degree zero divisor that still satisfies the equation
∑_D| N n_D(D-pD)=(1-p)∑_D| Nn_D· D=0. This
also explains why Dasgupta's algorithm for computing elliptic units
in <cit.> does not require the degree zero condition as remarked
in <cit.>.
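The product identity u(f)(τ)=∏_n≥1(1-q^n)^2/(1-q^pn)^2 used in the remark can also be confirmed numerically by multiplying truncated products over the cosets f_i,j. The sketch below is illustrative only; it takes the coset representatives (i/p,j/p) with 0≤ i,j<p and i≠0 and evaluates both sides at a numerical point τ=iy.

```python
import cmath, math

def u_coset(i, j, p, q, terms=40):
    """Truncated product for u([(i/p, j/p) + Z^2])(tau), q the nome, 0 < i/p < 1."""
    z = cmath.exp(2j * math.pi * j / p)
    val = 1 + 0j
    for m in range(terms):
        val *= 1 - q ** (m + i / p) * z
    for m in range(1, terms):
        val *= 1 - q ** (m - i / p) / z
    return val

p, y = 5, 1.5
q = math.exp(-2 * math.pi * y)                   # tau = i*y, so the nome q is real and small
lhs = 1 + 0j
for i in range(1, p):
    for j in range(p):
        lhs *= u_coset(i, j, p, q)
rhs = 1.0
for n in range(1, 200):
    rhs *= (1 - q ** n) ** 2 / (1 - q ** (p * n)) ** 2
print(lhs.real, rhs)                             # the two values agree to machine precision
```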
§.§ Explicit Dedekind-Rademacher Cocycle
We now calculate the values of _. However, unlike
the δ-smoothed case, we need to contend with a nonzero
order of μ_KS at i∞. However, the fact that we have
no congruence condition on G_ means we have an explicit set of
generators for G_, namely the diagonal matrices T_ and the
special matrices T and S. As in <ref>,
we have the following equality for all γ∈ G_, f∈(V_'),
and z∈.
_(Id,γ)(f)(z)=
lim_z→γ i∞Ẽ(f|γ)(γ^-1 z)
-Ẽ(f)(z)
Consider _(Id,UL(m))(f). Since UL(m)· i∞ = i∞,
the line integral portions of _(Id,UL(m))(f) vanish when we
take the limit.
_(Id,UL(m))(f)=lim_z→ i∞ρ(f| UL(m))(m^-1z)
-ρ(f)(z)+1/2π i(δ_0⊗)_c(f| UL(m) - f).
Since δ_0 is scaling-invariant, the third term vanishes.
Expanding the ρ-terms yields
lim_z→ i∞ z/2(m^-1(_2⊗_0)_c
(f| UL(m))-(_2⊗_0)_c(f))+ π_c(f| UL(m)-f).
Since B_1 is scaling-invariant, the last term also vanishes.
Lastly, m^-1(_2⊗_0)=(m^-1_2)⊗_0, so
using the distribution relation on _2 makes the remaining two terms
cancel out. Thus, _(Id,UL(m))=0.
For _(Id,LR(m)), LR(m)· i∞=i∞, so the scaling
invariance of and the distribution relation of _0 similarly
yield _(Id,LR(m))=0.
Recall that Ẽ is scalar-invariant, so without loss of
generality, we take f=[(a,b)+^2]∈_^2(V_') for
the rest of this section.
We now compute _(Id,T). T stabilizes i∞, so the line
integral part of _(Id,T) similarly vanishes as z→ i∞.
_(Id,T)(f)=lim_z→ i∞ρ(f| T)(z-1)-ρ(f)(z)+1/2π i
(δ_0⊗)_c(f| T - f).
Since δ_0⊗ is fixed by T,
(δ_0⊗)_c(f| T-f)=0. Expanding
the remaining terms, we have
lim_z→ i∞ z-1/2
(_2⊗_0)_c(f| T)-z/2
(_2⊗_0)_c(f) + π_c(f| T-f).
Since _0 is translation invariant, (_2⊗_0)_c(f| T-f)=0,
which eliminates all occurrences of z in the limit. For the sake of uniformity,
we make the observation that π_c(f| T-f)=Ψ(f| T-f), leaving us with:
_(Id,T)(f)=-1/2(_2⊗_0)_c(f| T)
+ Ψ(f| T-f)
Finally, to compute the measure _(Id,S), we will require the
following proposition regarding Mellin transforms.
Let f be a weight 2 modular form of level N with q-expansion
coefficient a_n(f). Let f̃ be the holomorphic function
f-a_0. Then,
(f,s) =∫_i^i∞f̃(τ)· y^s-1
-∫_i^i∞(f| S)^∼(τ)· y^1-s
=i( a_0(f| S)/(2-s)-a_0/s).
We elect to compute _(Id,S)(f) by evaluating the holomorphic
function Ẽ(f| S)(S^-1z)-Ẽ(f)(z) at z=i. Expanding out
definitions, we have
_(Id,S)(f)
=Ẽ(f| S)(i)-Ẽ(f)(i)
=i/2(_2⊗_0)_c(f| S-f)
+π_c(f| S-f)+ 1/2π i(δ_0⊗)_c(f| S-f)
-1/2π i∫_i^i∞ u_c(f| S-f).
By Proposition 2.4.2 and Proposition 2.5.1bii of <cit.>,
Proposition <ref> specializes to the following
equality when s=1.
1/2π i∫_0^i∞(u_c(f))
=-1/2π i∫_i^i∞(u_c(f| S-f))
+i/2(_2⊗_0)_c(f| S-f).
[The distribution denoted ϕ_(a,b) in <cit.> is exactly
(_2⊗_0)(f)+1/2π i·(d/dτ u(f))/u(f).]
Applying Proposition <ref> to the left of the above
equality and applying the result to _(Id,S)(f) leaves us
with the following.
_(Id,S)(f)
=(_1⊗_1)_c(f)+Ψ(f| S-f)
To summarize, we have proven the following theorem:
The Dedekind-Rademacher cocycle is the unique cocycle _∈
Z^1(G_,(',)) characterized by the following
properties:
* For all γ∈ T_, _(Id,γ)=0,
* _(Id,T)(f)
=-1/2(_2⊗_0)_c(f| T)
+Ψ(f| T-f).
* _(Id,S)(f)=
(_1⊗_1)_c(f)+Ψ(f| S-f).
We notice from the above that the Ψ-terms seem to form a
coboundary. However, since Ψ is not an integer-valued distribution,
we cannot obtain a cohomologous cocycle by simply dropping the
Ψ-terms. This also sheds light onto the ^δ case.
The proof of Theorem <ref> boiled down to showing
that the cocycles 12·^δ and (μ_DD,δ)_c
differed by a Ψ^δ-term. Proposition <ref>
can then be rephrased as showing that smoothing Ψ by 12δ yields
an integer-valued distribution.
|
http://arxiv.org/abs/2307.02316v1
|
20230705141800
|
Unintended electromagnetic radiation from Starlink satellites detected with LOFAR between 110 and 188 MHz
|
[
"F. Di Vruno",
"B. Winkel",
"C. G. Bassa",
"G. I. G. Józsa",
"M. A. Brentjens",
"A. Jessner",
"S. Garrington"
] |
astro-ph.IM
|
[
"astro-ph.IM"
] |
Unintended electromagnetic radiation from Starlink satellites detected with LOFAR
Square Kilometre Array Observatory, Lower Withington, Macclesfield, Cheshire, SK11 9FT, United Kingdom
[email protected]
European Science Foundation, Committee on Radio Astronomy Frequencies, 1, quai Lezay Marnésia BP 90015, F-67080 Strasbourg Cedex, France
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121
Bonn, Germany
ASTRON, Netherlands Institute for Radio Astronomy, Oude
Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands
Department of Physics and Electronics, Rhodes University, PO Box 94, Makhanda, 6140, South Africa
Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, United Kingdom
We report on observations of 68 satellites belonging to the SpaceX Starlink constellation with the LOFAR radio telescope. Radiation associated with Starlink satellites was detected at observing frequencies between 110 and 188 MHz, which is well below the 10.7 to 12.7 GHz radio frequencies used for the downlink communication signals. A combination of broad-band features, covering the entire observed bandwidth, as well as narrow-band (bandwidth <12.2 kHz) emission at frequencies of 125, 135, 143.05, 150, and 175 MHz, was observed. The presence and properties of both the narrow- and broad-band features vary between satellites at different orbital altitudes, indicating possible differences between the operational state of, or the hardware used in, these satellites. While the narrow-band detections at 143.05 MHz can be attributed to reflections of radar signals from the French GRAVES Space Surveillance Radar, the signal properties of the broad- and narrow-band features at the other frequencies suggest that this radiation is intrinsic to the Starlink satellites and it is seen for 47 out of the 68 Starlink satellites that were observed. We observed spectral power flux densities vary from 0.1 to 10 Jy for broad-band radiation, to 10 to 500 Jy for some of the narrow-band radiation, equivalent to electric field strengths of up to 49 dB[μV m^-1] (as measured at a 10 m distance from the satellites, with a measurement bandwidth of 120 kHz). In addition, we present equivalent power flux density simulations of the full Starlink phase 1 constellation, as well as other satellite constellations, for one frequency band allocated to radio astronomy by the International Telecommunication Union (ITU). With these, we calculate the maximum radiation level that each satellite constellation would need to have to comply with regulatory limits for intended emissions in that band. However, these limits do not apply if the radiation is unintended, that is to say if it does not originate from intentionally radiated signals for radio communication or other purposes. We discuss the results in light of the (absence of) regulations covering these types of unintended electromagnetic radiation and the possible consequences for astronomical radio observations.
Unintended electromagnetic radiation from Starlink satellites detected with LOFAR between 110 and 188 MHz
F. Di Vruno<ref>,<ref>Member of the IAU Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference (IAU CPS).
B. Winkel<ref>,<ref>^⋆
C. G. Bassa<ref>^⋆
G. I. G. Józsa<ref>,<ref>,<ref>^⋆
M. A. Brentjens<ref>
A. Jessner<ref>
S. Garrington<ref>^⋆
Received March 10, 2023; accepted May 12, 2023
==========================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Modern radio astronomy has profited greatly from advances in technology. Astronomical radio receivers nowadays are often operated with large fractional bandwidths (bandwidth Δν over observing frequency ν in excess of Δν/ν>50%; e.g. ), increased sensitivity, and aperture (e.g. ), as well as wider fields of view (e.g. ). At the same time, the numerical capabilities of digital back ends have enormously increased owing to field programmable gate arrays (FPGAs) or graphics processing units (GPUs) that allow one to implement special-purpose algorithms in flexible hardware boosting the processing speeds. This allows one to record data with unprecedented temporal and spectral resolution, which benefits spectroscopy, pulsar, and very large baseline interferometry (VLBI) observations alike.
However, astronomy is not alone in utilising the radio spectrum. There is a huge number of applications, such as radio and TV broadcasts, high-speed wireless communications (e.g. cell phone networks and WiFi), or radars, which require access to the spectrum. Any type of radio communication and intended radio transmissions is regulated to avoid a situation where different operators – when using the same or nearby frequencies – create interference on each other’s systems. This regulation of the radio spectrum is handled at the national level by national radio administrations; however, as radio waves do not care for national borders, international rules are required for harmonisation. The Radiocommunication sector of the International Telecommunication Union (ITU-R) is the top level organisation that takes care of this international regulation. It is a specialised agency of the United Nations. The ITU-R publishes the (RR), which is an international treaty and member states are expected to transform the RR into national law.
The ITU-R recognised radio astronomy as a service – the radio astronomy service (RAS) – already in 1959 and allocated bands in the radio spectrum to it. Unfortunately, the bands that are allocated to the RAS are relatively sparse and narrow – for spectral-line observations, the majority of the reserved bands only cover the typical Milky Way Doppler shifts. Also, the total amount of spectrum that is allocated to the RAS is not considered to be sufficient for modern radio astronomical research by most scientists. Below 4 GHz, only 5% of the radio spectrum is allocated to radio astronomy at various levels of protection. If only primary allocations (the highest level of protection) are considered, as little as 1.6% is allocated to the RAS. For more details about the regulatory process, the radio astronomy service and its protection, we refer readers to the ITU Handbook on Radio Astronomy <cit.>, the CRAF Handbook for Radio Astronomy <cit.>, and the Handbook of Frequency Allocations and Spectrum Protection for Scientific Uses <cit.>.
Not all radiation produced from electronic devices is subject to ITU-R regulations. To a large extent, the RR only cover the so-called emissions, which refer to the radiation that is directly related to the intentional use of radio frequencies in a system (for the purpose of communications, remote sensing, radionavigation, etc). This obviously includes the wanted signals, but also unwanted emission: spectral sidelobes including harmonics and intermodulation products that are an inevitable by-product of the generation of the wanted transmission. Unwanted emission is a consequence of the signal amplification or mixing, the chosen modulation scheme, etc. Both the wanted and unwanted emissions are regulated in the RR. But there is yet another source of electromagnetic radiation present in any electrical device (or system), which is related at its most fundamental level to the acceleration and deceleration of charges in any electrical or electronic circuits and not necessarily related to the generation of wanted radio signals. As the RR did not coin a regulatory term for this, hereafter we refer to this as unintended electromagnetic radiation (UEMR); it is worth mentioning that in engineering this radiation can be referred to as electromagnetic interference (EMI). UEMR can appear, for example, as the product of current loops in switching mode power supplies, communication signals in unbalanced or mismatched transmission lines, fast switching signals in printed circuits, and actuating electromechanical circuits, etc. Basically any electrical circuit generates some level of UEMR.
UEMR is not explicitly regulated at the ITU-R level, though other standardisation organisations have filled the gap. The Comité International Spécial des Perturbations Radioélectriques (CISPR[English: International Special Committee on Radio Interference]), which is a part of the International Electrotechnical Commission (IEC), sets standards for all kinds of terrestrial electrical and electronic devices in order to control electromagnetic interference. Unlike the RR, CISPR standards refer not only to radiocommunication systems but all kinds of electronic devices. Furthermore, the standards also cover measurement procedures, which are used to determine the level of UEMR produced by a device under test.
Unlike intended radio emission, UEMR is not clearly specified by a centre frequency, output power and bandwidth, yet it has some characteristics worth mentioning: i) its radiated power is normally several orders of magnitude lower than any intentional radiation; ii) UEMR is usually not radiated through an antenna, but mostly through cables and/or the mechanical structure of the system; therefore, its spatial radiation pattern is usually unknown but likely to be closer to isotropic than that of a directional antenna system; and iii) UEMR may have spectral contents which can be very variable depending on the type of electrical signals and design of the system.
Telescopes used for radio astronomy normally receive UEMR from terrestrial sources located nearby (distances of kilometres) and predominantly though their sidelobes. There are many examples of radio telescopes dealing with terrestrial UEMR, as is the case of wind farms affecting LOFAR observations[<https://www.astron.nl/test-wind-turbine-near-lofar-meets-agreed-radio-emission-norms-2/>] or emission from microwaves resembling astrophysical signals <cit.>. Radio astronomers also put great effort into shielding the necessary observation equipment (computers, receivers etc.) to avoid self-made UEMR to enter the data <cit.>. Environmental interference (intended and unintended) to radio telescopes can be minimised by building them in designated radio quiet zones or RQZs (). Unfortunately, RQZs provide no mitigation against radio emission from Earth orbiting satellites, which radio telescopes can receive through their primary beam or near sidelobes. In the case of the Iridium satellite constellation, unwanted radio emissions (i.e. not UEMR) interfered with astronomical observations of the 1612 MHz OH spectral line for more than 20 years (e.g. , , ). Studying reflections of terrestrial signals from satellites, <cit.> reported possible UEMR of two cubesats using the MWA between 80 and 103 MHz.
The proliferation of the new and large satellite constellations in low Earth orbit (LEO) – often referred to as mega-constellations – has caused worries in the astronomical community owing to the satellites ability to reflect sunlight and to emit radio signals <cit.>. This led to the Satellite Constellations workshops (SATCON1 and SATCON2; ), the Dark & Quiet Skies I and II workshops <cit.> and the founding of the IAU Centre for the Protection of the Dark and Quiet Skies from Satellite Constellation Interference (IAU CPS), the members of which investigated and continue to investigate the possible impact of large LEO satellite constellations on astronomy <cit.>. Owing to the increasing total number of satellites in LEO, and hence the increasing probability that a satellite appears within the field of view of a radio telescope, it makes sense to consider satellite UEMR as a potential source of interference in the future. The potential threat posed by satellite UEMR from large constellations was first considered at the Dark & Quiet Skies II workshop <cit.>.
In this paper we investigate the potential impact of satellite UEMR on radio astronomy through observations of the SpaceX Starlink satellite constellation. At the time of the observations presented here, this constellation was the largest, with some 2100 satellites in orbit. This constellation provides broad-band internet connectivity with radio emission used for downlinks allocated to the 10.7 to 12.7 GHz frequency band[<https://fcc.report/IBFS/SAT-MOD-20200417-00037/2274316>]. Compatibility with radio astronomy observations in the protected 10.6-10.7 GHz band has previously been studied by the Electronic Communications Committee (ECC) of the European Conference of Postal and Telecommunications Administrations (CEPT) in its . As UEMR is predominantly expected at low frequencies (below ∼1 GHz) <cit.>, well below the allocated radio transmission downlinks, we observed satellites belonging to the Starlink constellation at frequencies between 110 and 188 MHz with the LOFAR radio telescope <cit.>.
This paper is organised as follows; Sect. <ref> presents an overview of standards and regulations applicable to satellites and their subsystems, while Sect. <ref> uses simulations to investigate the potential aggregate impact of several satellite constellations and its maximum radiated power to comply with the ITU-R threshold levels in one of the protected radio astronomy bands. We describe the observations and their processing in Sect. <ref> and discuss the analysis of the detected signals in Sect. <ref>. Finally, Sect. <ref> contains a summary and conclusions.
§ UEMR OF SATELLITE SYSTEMS
Typical satellites are composed of many different modules called subsystems, each one fulfilling a specific function for the satellite to operate. Satellite manufacturers make use of electromagnetic compatibility (EMC) to ensure that all the different subsystems will be compatible with each other. A typical EMC programme focuses on testing each subsystem to ensure that sufficient margins exist between emissions and susceptibilities for the ensemble to work without self-interference.
There are some EMC standards dedicated to space missions, such as the NASA MSFC-SPEC-521 or the ESA ECSS-E-ST-20-7C, most of them based on the US military standard MIL-STD-461. These EMC standards define, among other things, the maximum level of electromagnetic radiation that equipment can generate. Most standards for space are more stringent than the ones used for commercial apparatus such as CISPR-32 (see Fig. <ref>) but that is not a hard requirement, as a satellite does not need to be compatible with ordinary commercial equipment.
Once completely assembled, a satellite is usually characterised by a `system level' test that evaluates the overall UEMR (among many other parameters) of it as a whole. These tests can last for weeks, depending on the complexity of the satellite, making it a very expensive activity. For this reason, system level tests tend to focus on the minimum and necessary checks for each parameter of a complete satellite. A clear example of this can be seen in <cit.>, where UEMR is not highlighted as an important step to characterise a satellite constellation.
While commercial standards such as the IEC 61000 family, CISPR or the US Federal Communications Commission (FCC) part 15 (see Fig. <ref>), are harmonised and mandatory to allow entry into a certain market, there is currently no international agency or space law that requires a spacecraft to comply to a certain EMC standard. Furthermore, the information about which EMC standard is used for a specific programme, the considered UEMR thresholds, or the real level of emissions are rarely made public. Few examples are in the public domain such as <cit.> and <cit.>. Informal communications with satellite industry specialists indicated that the normal practice for satellite level UEMR tests is to set an emission threshold relatively high (which speeds-up testing times) and only apply stringent levels (long testing times) to narrow frequency bands where the satellite or the rocket-launcher have receivers or sensitive instruments. In <cit.>, results of a satellite emission level test are shown, where the limit threshold (marked as a solid red line in their Fig. 7) is defined at very high levels of emission almost for every frequency with the exception of a few communication bands.
Owing to this lack of information, we can suppose that a satellite could emit relatively strong UEMR signals, outside of the bands of interest for the manufacturer or operator, and still pass this type of testing. This is not an unlikely situation, since many subsystems can aggregate their emissions or their interconnection can change the electromagnetic configuration of the satellite and increase the emissions in a certain frequency band. This may not have been an issue in the past, with very small constellations or with single satellite systems. Even if a satellite had strong UEMR, it would require a very sensitive receiver to detect it or in other words would require the satellite to be in the main lobe of a radio telescope for a considerable fraction of an observation: a very rare condition until recently.
With the advent of the large LEO satellite constellations (such as Starlink phase 1 with 4408 satellites or OneWeb phase 1 with 720 satellites[<https://planet4589.org/space/con/conlist.html>.]) the situation changes. Firstly, the number of LEO satellites leads to an increase of the aggregate signal, which might become large enough to cause interference even through the sidelobes and increases the probability of a detection in the main lobes of the radio telescope. Secondly, the new satellites are manufactured in series, therefore it is possible that many satellites present similar UEMR. These two effects could make the situation for radio astronomy complicated, even in radio bands reserved to radio astronomy.
§ POTENTIAL IMPACT OF SATELLITE EMR ON RAS
To investigate the potential impact of satellite EMR on radio astronomical observations, it is possible to make use of the established methods that were developed by ITU-R for regular compatibility calculations of wanted and unwanted emissions. The ITU-R recommends to use the equivalent power flux density (EPFD) method (see ). A satellite constellation is simulated over a given time range. The power received from each satellite can be calculated from the transmitted power, taking into account transmitter and receiver antenna gains and path propagation losses (e.g. line of sight losses, atmospheric attenuation) before it enters the radio astronomy receiver. The total aggregated power, which is the sum of all power contributions, can be then determined. Under the assumption of standardised characteristics of the receiving antenna, the received power can also be converted to the associated power flux density (PFD, known as total or integrated flux density in the radio astronomy community), which allows to conveniently compare it to PFD threshold levels that are defined in regulations for the protection of a victim station. An advantage of this conversion is that it makes a better comparison possible between different receiving stations, which usually have different antenna patterns and gains. For example, the RAS protection criteria (; in Tables 1 and 2) are provided for an isotropic receiver, although in reality radio telescopes usually have very high forward gain.
§.§ Assessing the aggregate impact of a satellite constellation
In the following, the EPFD method is used to determine the potential impact of UEMR from different satellite constellations on radio astronomy observations. The EPFD method is widely used in spectrum management and is well documented in ITU-R documents. For convenience, a more detailed summary is provided in Annex <ref>. Here, the basic steps are explained in a simplified form. To calculate the received power for one particular pointing direction of the receiver antenna and a certain satellite orbit configuration, the procedure is as follows.
In the first step the satellite positions (and transmitter antenna orientations) with respect to the observer are determined for a number of time steps and for a given period of time. The required time resolution mostly depends on the satellite altitudes. For low-earth orbit (LEO) satellites the time resolution should be 1 s or less as the angular velocities are high. Then the link budget (path propagation losses as well as transmitter and receiver antenna gains) between satellites and observer is computed. As the satellites are not necessarily in the main beam of the radio telescope, the angular separation between the antenna pointing direction and the geometrical position of the satellites needs to be accounted for, which changes the effective receiver gain. Likewise, the observer will usually not be situated in the forward direction of the satellite antenna. Modern satellites are often equipped with active antennas that allow electronic beam-forming in real-time, such that the effectively transmitted power towards the observer can fluctuate strongly. It should be noted, however, that in the case of UEMR, given its nature, a high directivity is not expected to be reached and an isotropic transmitting antenna pattern is used hereafter as an approximation. After the link budget is calculated, all the individually received powers (from each satellite) are added, which yields the total aggregated power. Finally, the total aggregated power received at the radio telescope is compared to the permitted threshold levels, for example defined in . In this recommendation, the RAS protection levels are specified for an integration time of 2000 s, thus it is necessary to simulate the orbits over this time span.
The calculation is performed for a grid of sky cells (or telescope pointing directions) having approximately equal solid angles. This allows to analyse the spatial distribution of the contributed power levels. To assess statistical scatter, the whole simulation is repeated hundreds or thousands of times for different starting times and antenna pointings within the grid cells.
Often, the power flux density at the observer location (caused by the satellites) is transformed into the so-called equivalent power flux density (EPFD). This is the power flux density, which would need to be present in the boresight of a radio telescope to create the same power as the aggregated power from all satellites. Annex <ref> contains more details on this.
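The aggregation can be summarised in a few lines of illustrative code. The sketch below is a structural outline only: it assumes that the satellite geometry (slant ranges and off-axis receive gains for every time step of the 2000 s window) has already been computed elsewhere, treats the UEMR source as an isotropic radiator, and uses free-space propagation; the array names and shapes are hypothetical.

```python
import numpy as np

C = 299792458.0   # speed of light [m/s]

def fspl_dB(d_m, freq_Hz):
    """Free-space path loss, 20 log10(4 pi d / lambda)."""
    return 20 * np.log10(4 * np.pi * d_m * freq_Hz / C)

def epfd_dBW_m2(p_tx_dBW, d_m, g_rx_dBi, g_rx_max_dBi, freq_Hz):
    """Aggregate the power received from every satellite at every time step and express
    it as the boresight power flux density producing the same receiver power (the EPFD).

    p_tx_dBW : UEMR power radiated per satellite (isotropic radiator assumed)
    d_m, g_rx_dBi : arrays of shape [n_times, n_sats] with slant ranges and the
                    off-axis receive gain towards each satellite (from the antenna pattern)
    """
    p_rx_dBW = p_tx_dBW - fspl_dB(d_m, freq_Hz) + g_rx_dBi
    p_rx_W = (10.0 ** (p_rx_dBW / 10)).sum(axis=1).mean()    # aggregate, then time-average
    a_eff_max = 10.0 ** (g_rx_max_dBi / 10) * (C / freq_Hz) ** 2 / (4 * np.pi)
    return 10 * np.log10(p_rx_W / a_eff_max)
```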
§.§ EPFD and large satellite constellations
For some of the large satellite constellations under construction, in particular SpaceX/Starlink and OneWeb, EPFD calculations were performed by the Electronic Communications Committee (ECC) of the European Conference of Postal and Telecommunications Administrations (CEPT) in its . In that report, the out-of-band emissions of the satellite downlinks in the RAS band at 10.60-10.70 GHz were analysed by means of this method.
To our knowledge, UEMR from large satellite constellations in operation has never been studied nor measured, probably because the number of satellites (of the same design) was not large enough to even be considered a problem, but this situation has changed now. Using the EPFD method it is possible to determine the maximum UEMR that each single satellite of a constellation may radiate in the 150.05-153 MHz primary radio astronomy band, while not producing harmful interference. Here we consider harmful interference as defined in .
The 150.05-153 MHz frequency band, which is allocated to the RAS, was chosen as it is commonly accepted that radiation caused by electronic circuits is mainly concentrated below 1 GHz, and it falls within the observing band of LOFAR. The harmful interference threshold in this band is -194 dB[W m^-2] over a bandwidth of about 3 MHz, according to (see their Table 1).
Given that the actually radiated emissions from a single satellite are unknown, we have to assume some value. An electric field strength of 30 dB[μ V m^-1] is a typical radiation level[The value of 30 dB[μ V m^-1] in CISPR refers to a so-called quasi-peak detector. Here, we assume that field strength to be the average over the measurement period. Usually, the two detector types may lead to significantly different outputs – by several dB – depending on the properties of the signal <cit.>] found in commercial standards such as CISPR-32 based on a detector bandwidth of 120 kHz and measured at a distance of 10 m. This number is equivalent to a radiated spectral power of -45.6 dB[mW MHz^-1]. We also assume in our simulations that this radiation is constant in time and frequency within the studied band. In practice this is certainly not the case. UEMR features can be time-variable and could also be narrow-band and in such a case a bandwidth correction factor would need to be applied. We furthermore work under the simplification that satellite UEMR is isotropically radiated.
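The conversion from this field-strength level to the quoted spectral power can be reproduced in a few lines; the sketch below is illustrative and assumes an isotropic radiator and far-field conditions at the 10 m measurement distance.

```python
import math

Z0 = 376.73                       # impedance of free space [ohm]
E_dBuV_per_m = 30.0               # assumed radiation level (CISPR-32 class B style)
d = 10.0                          # measurement distance [m]
bw_MHz = 0.12                     # 120 kHz measurement bandwidth

E = 10 ** (E_dBuV_per_m / 20) * 1e-6          # field strength [V/m]
S = E ** 2 / Z0                               # power flux density at 10 m [W/m^2]
P = S * 4 * math.pi * d ** 2                  # power radiated within the 120 kHz [W]
P_spectral_dBm_per_MHz = 10 * math.log10(P / bw_MHz * 1e3)
print(f"{P_spectral_dBm_per_MHz:.1f} dB(mW/MHz)")   # about -45.6
```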
The RAS antenna pattern and gain used in the calculations depends on the type of radio telescope. At these low frequencies, mostly interferometric telescopes are used, such as LOFAR and SKA1-Low. The actual antenna patterns of interferometers (after beam-forming and correlation) are complex and are not perfectly described by the model. Therefore, we perform the EPFD assuming parabolic-dish antennas of diameter 25-m and 70-m, respectively, which approximately have the same effective antenna area as SKA1-Low tiles and LOFAR (international) stations. In our simulations it is assumed that the RAS station is located at the geographical latitude of LOFAR, 53^∘ N.
Using these parameters and assumptions, EPFD calculations were carried out for a number of existing or currently in-deployment satellite constellations: Spire[<https://fcc.report/IBFS/SAT-LOA-20151123-00078/1126653>], Iridium NEXT[<https://fcc.report/IBFS/SAT-AMD-20151022-00074/1145619>], OneWeb[<https://fcc.report/IBFS/SAT-LOI-20160428-00041/1135071>], SpaceX/Starlink[<https://fcc.report/IBFS/SAT-MOD-20200417-00037/2274316>], and SpaceX/Swarm[<https://fcc.report/IBFS/SAT-LOA-20181221-00094/1592875>]. This provides us with a range of constellation sizes from 66 satellites up to 4408 (see Tab. <ref>) in various orbital configurations. For the satellite position calculations we made use of the open-source Python package cysgp4[<https://pypi.org/project/cysgp4/>] <cit.>, which is available under GPL-v3 license. It is a wrapper around the sgp4[<https://github.com/dnwrnr/sgp4>] C++ implementation of the simplified perturbation model SGP4 <cit.>. Furthermore, the pycraf[<https://pypi.org/project/pycraf/>] Python package <cit.> was used, which provides implementations for a number of relevant ITU-R Recommendations. It is also available under GPL-v3 license.
§.§ Simulation results
For each constellation in Table <ref>, one hundred iterations (simulation runs) were processed, which allows us to assess the statistical scatter of the results. As an example for the results, Fig. <ref> shows the cumulative distribution function for EPFD values for the Iridium NEXT and Starlink constellations with the assumption of UMR with an electric field strength of 30 dB[μ V m^-1] over the full RAS bandwidth[If there was only a narrow-band signal within the RAS band, a correction factor would need to be applied.] and a RAS antenna with a 70-m diameter located a geographic latitude of 53^∘ N. The light green and blue curves in the figure show the results for all sky cells in each individual simulation run, while the darker curves represent the median of the individual runs in each sky cell. recommends that the total data loss caused by a single interfering system should not exceed 2%, which is indicated by the horizontal red line (the 98% percentile) in the figure. The vertical red line marks the threshold. The cumulative probability at which this threshold is exceeded can be used to determine the actual expected data loss (about 10% for Iridium NEXT and 100% for Starlink with the assumptions used in the simulation). The intersection between the cumulative probability curve and the horizontal red line of 98% percentile yields the so-called margin, that is the difference between the RAS threshold and the actual received power flux density. If it is negative, emissions from the respective satellite constellation ought be below the assumed model values by that amount in order to comply with the thresholds in the RAS band. The inferred margins for all satellite constellations are presented in Fig. <ref>.
Based on the margins, under the assumptions used in the simulation, it is possible to determine a maximum electric field value that each satellite should comply with to ensure that the received power at the RAS station is not in excess of the permitted RAS threshold levels at the data loss of 2%. These values are summarised in Tab. <ref>. It is noted that the calculated values are lower than commercial EMC standard thresholds such as the CISPR-32 Class B with 30 dB[μ V m^-1].
It is also possible to investigate the regions on the visible (topocentric) sky, which contribute most to the overall received flux density, see Fig. <ref>, which shows the average EPFD per sky grid cell for the Iridium NEXT and Starlink constellations assuming a 70-m RAS antenna.
§ OBSERVATIONS, DATA CALIBRATION AND SIGNAL DETECTION
Based on the results obtained in Sec. <ref>, especially the ones for large satellite constellations such as Starlink, we conducted an observation with the LOFAR telescope which not only covers the frequency range of interest but can also produce multiple beams simultaneously increasing the probability of detecting satellite emissions within a reasonably short campaign. This section describes the observation method, data calibration and processing, and different types of detected signals.
§.§ Observations
LOFAR, the Low Frequency Array <cit.>, is a network of telescopes with stations spread over Europe and a dense core in the north of the Netherlands. We obtained a 1-hour observation targeting mostly SpaceX/Starlink satellites on 2022 April 1, starting at 18:30:00 UTC. Radio signals from the High Band Antennas (HBA) of the central six LOFAR core stations, those on the Superterp, were coherently beam-formed by the Cobalt beam-former <cit.> to form 91 tied-array beams (TABs). The TABs were distributed in five hexagonal rings covering the 47 full width at half maximum (FWHM) station beam, each ring separated by 24 from the next; see Fig. <ref>. This separation was chosen such that the TABs overlap at the half-power point around 150 MHz, assuming circular beams with a 24 FWHM at 150 MHz. For each tied-array beam, (uncalibrated) Stokes I intensities in the form of dynamic spectra were recorded between 110 and 188 MHz, with 10.48 ms time resolution and 12.21 kHz frequency resolution.
The TABs were centred towards, and tracking, α_J2000=08^h00^m00^s and δ_J2000=+49°30'00". This pointing direction was chosen for its high Galactic latitude (b=31.1° at Galactic longitude l=169.4°) and hence low sky temperature (reducing the overall system temperature), as well as the high elevation above the horizon as seen from LOFAR (maximum elevation of 86.5° at 18:54 UTC), minimising the range to Starlink satellites at their operational altitude of 550 km. Furthermore, at the latitude of LOFAR (ϕ=52.92°), the currently most populated Starlink shells (with orbital inclinations of 53.0° and 53.2°) lead to over-densities of satellites per unit area of sky near LOFAR's zenith <cit.>, maximising the number of Starlink satellites passing through the TABs.
We used public ephemerides[Distributed through <www.space-track.org>.] of the Starlink satellites generated by SpaceX for the observation planning and the processing of the data. The public ephemerides provide predictions for position and velocity of each Starlink satellite with respect to an Earth-centred inertial coordinate frame at 1 min time intervals, and include planned manoeuvres to adjust the satellite orbit. From these ephemerides, the trajectory of each satellite passing through the LOFAR beam pattern during the 1 hour observation was calculated, resulting in the passes shown in Fig. <ref>. We also computed the time of ingress and egress of each satellite through the station beam and the TABs. We note that individual Starlink satellites are known to make small unplanned manoeuvres, which generally result in the satellite passing early or late compared to predictions, without significantly altering its trajectory on the sky. The ephemerides show that a total of 68 individual Starlink satellites passed through the LOFAR station beam during the 1 hour observation, 22 of which were at the operational altitude of h=550 km. The other 46 Starlink satellites passing through the beam pattern were at an altitude of 350 km. These satellites belonged to a group of 48 satellites launched on 2022 March 9, 23 days before our observations, and were still raising their orbits to operational altitudes. The Starlink satellites of this launch are of a newer version 1.5 type[<space.skyrocket.de/doc_sdat/starlink-v1-5.htm>] compared to the Starlink satellites at the operational altitudes, which reportedly are version 1.0.
The properties of these satellites and their passes through the beam pattern are provided in Table <ref>. Owing to the high elevation of the observations above the horizon, the distances to the Starlink satellites in the operational orbits at 550 km were around 555 km, while the orbit-raising group was at distances of around 356 km. These distances are in the far field of the LOFAR Superterp, whose maximum baseline of ∼300 m puts the Fraunhofer distance at 66 to 113 km for the observed LOFAR band of 110 to 188 MHz. At these distances, the satellites crossed the 4.7° FWHM of the station beam within 6 and 4 s, respectively, while the 24 arcmin TABs were crossed within 0.54 s for satellites at 550 km altitude, and 0.34 s for those at 350 km altitude (t_pass column of Table <ref>). Of the 68 satellite passes, only two did not pass through any of the TABs, while the majority of the others passed through several adjacent TABs, as indicated by the n_TAB column in Table <ref>. Finally, we note that all Starlink satellites passing through the beam pattern during this observation were illuminated by the Sun, such that their solar panels could have been generating power.
§.§ Data calibration
To calibrate the recorded data-sets, we performed both the frequency-dependent system gain (band-pass) correction[It is noted that the Cobalt beam-former applies a band-pass correction for every single spectral sub-band to correct for the digital filter curve of the poly-phase filter-bank.] as well as the intensity calibration.
For a single-dish antenna, the on-source, off-source method represents a useful strategy to correct for the system gain. In very simple terms, the recorded uncalibrated power spectrum, P(t_i, f_j), at time t_i and in frequency channel f_j is related to the actual antenna temperature, T_A, via the receiver system transfer function, G_bp(t_i, f_j). G_bp is a function of frequency, but it also depends mildly on t_i owing to slow drifts of the receiver (amplifier) gain. For the accuracy required for this project, one can safely assume that G_bp is constant with time over the relatively short observation period. Thus,
P(t_i, f_j) = G_bp(f_j) T_A(t_i, f_j) .
The idea of the on-source, off-source method is to divide two spectra to remove the frequency-dependent band-pass shape <cit.>. This yields
T_source/T_sys=P^on/P^off - 1 ,
where it was assumed that T_A^on = T_source + T_sys, while T_A^off = T_sys. The quantity T_source denotes the signal from a source to be measured, which would only be present in the on-source spectrum, while all other contributions to the antenna temperature are subsumed in the system temperature, T_sys. Of course, anthropogenic signals, which are often highly variable in time and frequency, would produce residual imprints in the resulting data and ideally need to be treated before the method is applied. Furthermore, any astronomical signal that is present in both the on- and off-source observation (e.g. large-scale continuum radiation) would also not be processed properly by the method.
Classically, the on-source, off-source strategy involves position switching, as one needs a measurement without the (astronomical) source of interest for the reasons explained above. However, LEO satellites are within the observation beam for a very short amount of time only. Thus, the off-source spectrum can simply be constructed by choosing data at a different time, for example shortly before and after a satellite crosses the beam, and taking the average spectrum over this time range. Another possibility would be to determine the off-source spectrum over the full time span of the observation, for example by averaging all spectra while leaving out those associated with satellite crossings. The second method should only be applied, though, if the temporal stability of G_bp is sufficient. Here, both strategies have been tried and no significant difference in the calibrated data-sets was found. In practice, all the averaging steps in the above procedures could also make use of the median estimator, which is more robust against outliers produced by short-term anthropogenic signals.
Obviously, the beam-formed LOFAR data is not measured with only a single antenna. Nevertheless, the method outlined above can still be used in a very similar manner. The measured power spectrum, P, is again subject to a frequency dependent `system gain', which is now acting on an `effective (ensemble) antenna temperature' instead of each element's antenna temperature. The on-source, off-source method will remove the imprint of this system gain from the data, but the resulting quantity is not simply T_source/T_sys as in Eq. <ref> but a different quantity.
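A minimal Python sketch of this on/off correction, applied to a TAB dynamic spectrum, could look as follows; the data and the on-source mask are stand-ins, and a real analysis would operate on the recorded spectra.

import numpy as np

def bandpass_calibrate(dyn_spec, on_mask):
    """On/off band-pass correction: P_on / P_off - 1.

    dyn_spec : array (n_time, n_chan), uncalibrated power spectra P(t_i, f_j)
    on_mask  : boolean array (n_time,), True while a satellite is inside the beam
    """
    # Off-source reference: median over all satellite-free time samples,
    # robust against short, bright terrestrial signals.
    p_off = np.median(dyn_spec[~on_mask], axis=0)
    return dyn_spec / p_off - 1.0

# Usage with stand-in data (a real run would use the recorded TAB dynamic spectra).
rng = np.random.default_rng(0)
dyn_spec = rng.gamma(shape=50.0, scale=2.0, size=(1000, 64))
on_mask = np.zeros(1000, dtype=bool)
on_mask[480:520] = True
calibrated = bandpass_calibrate(dyn_spec, on_mask)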
For the absolute flux calibration we used the approach outlined in <cit.> which models the effective area, beam shape, system temperature and coherence of LOFAR. The radiometer equation <cit.> relates the (power) flux density root mean square (RMS) at TAB level to these quantities by
Δ S_ν^tab = Δ T^station/Γ_tab = 1/Γ_tabT_sys^station/√(n_p t_obsΔν) ,
where Δ T^station is the noise level that can be achieved with a single station based on the radiometer equation. It depends on the system temperature of a station, T_sys^station, the number of polarisation channels, n_p=2, that were averaged, the integration time, t_obs, and the bandwidth, Δν, which in this case is the width of a spectral channel. The quantity Γ_tab=0.5A_eff^tab/k_B is the sensitivity or gain that translates between the station level system noise and the TAB flux density RMS. It is determined by the effective aperture area of a TAB, A_eff^tab, and the Boltzmann constant k_B. The value of A_eff^tab depends on the beam-forming efficiency and the number of contributing antennas. <cit.> derived an approximation formula,
A_eff^tab = η_active N^0.85A_eff^station ,
with the fraction of active dipoles, η_active=0.95 (about 5% of the dipoles are typically not in operation), the number of HBA sub-stations in the Superterp, N=12, and the effective aperture area, A_eff^station, of one of these sub-stations. <cit.> report on the values of A_eff^station for a number of frequencies. For the frequencies used in this paper, we interpolated these values linearly. <cit.> also estimated the system equivalent flux density (SEFD), which is the equivalent of the T_sys^station on the flux density scale. The SEFD is relatively constant in the frequency range considered in the following, with a value of about 3 kJy. This can be converted to the system temperature scale using SEFD [Jy]=2760 T_sys^station [K] / A_eff^station [m^2] <cit.>.
Based on these equations and previously reported quantities, the calibration parameters in Table <ref> were determined for use in the subsequent sections. Because the station aperture, A_eff^station, appears in both terms, T_sys^station and Γ_tab, Δ S_ν^tab is actually independent of A_eff^station. It has a value of 2.986 Jy for all frequencies in Tab. <ref> (a flat SEFD was assumed). In order to calibrate the spectra, it is therefore only necessary to determine the noise level (in arbitrary units) and scale the data such that its RMS equals Δ S_ν^tab.
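The chain from SEFD and beam-forming gain to the TAB noise level can be sketched in Python as follows; the numerical values are illustrative only, and in particular the integration time and channel width have to match the averaging that is actually applied to the data.

import numpy as np

# Illustrative values; the actual calibration uses the quantities quoted above.
sefd_station_jy = 3000.0            # station SEFD, about 3 kJy in this band
eta_active, n_stations = 0.95, 12   # active-dipole fraction, HBA sub-stations
n_pol = 2                           # polarisations averaged
t_obs = 41.94e-3                    # integration time per bin (s)
delta_nu = 12.21e3                  # channel width (Hz)

# Coherent beam-forming gain of a TAB relative to a single sub-station.
coherence_gain = eta_active * n_stations**0.85

# Radiometer equation expressed directly on the flux-density scale:
# TAB noise level = station SEFD / (beam-forming gain * sqrt(n_p * t_obs * dnu)).
delta_s_tab_jy = sefd_station_jy / (coherence_gain * np.sqrt(n_pol * t_obs * delta_nu))

def flux_calibrate(spectra, delta_s_tab_jy):
    # Scale band-pass-corrected spectra so that their off-source RMS equals the
    # theoretical noise level.
    return spectra * (delta_s_tab_jy / np.std(spectra))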
§.§ Signal detection
Any radio emission associated with Starlink satellites is expected to coincide in time with the predicted passage of a satellite through the LOFAR TABs, though it is a priori unclear whether the radio emission would be broad-band, narrow-band, or a combination of both. The search is made difficult, however, by the many active radio services operating in the LOFAR observing band, whose signals could by chance appear at the same time as a satellite is predicted to pass through the beam.
The band from 110 to 188 MHz under consideration is allocated to several radio services such as air traffic control (118–137, 138–144 MHz), amateur radio (144–146 MHz), emergency pagers (169–170 MHz), satellite transmissions (137–138, 148–150 MHz) and digital audio broadcasting (174–230 MHz), with emergency pagers and digital audio broadcasting being the strongest sources of radio emission <cit.>. The majority of these emission sources are terrestrial and hence are located close to, or on, the horizon. As such, these signals will be detected in the sidelobes of the LOFAR station beam and TABs, and hence will appear at the same time and with similar signal strength in all TABs. On the contrary, objects moving through the sky will produce signals in the dynamic spectra of the TABs at different times as they pass through the TABs. This applies not only to the target Starlink satellites, but also to other satellites as well as aircraft.
Based on these considerations, the data-set was independently searched for signals in order to avoid biases. We first found a narrow-band signal at 175 MHz and broad-band features with varying intensity spread across the band. Different data processing strategies were applied for this, which proved suitable for finding the two types of signals. After the first detections were made, it also became clear that some of the satellite positions were not accurately predicted by the ephemerides. However, these first findings made it possible to correct the positional data, which triggered additional detections at further frequencies. In the following we provide a summary of the process.
Most of the brighter broad-band signals were already visible in the raw, uncalibrated dynamic spectra of the TABs after binning; see Figure <ref>. As the duration of a pass through a TAB is of order 0.1 to 0.6 s, the dynamic spectra were averaged to a time resolution of 41.94 ms, keeping the frequency resolution fixed at 12.21 kHz. Next, <cit.> was used with the standard LOFAR flagging strategy to identify non-astrophysical signals and create a mask for the dynamic spectrum of each TAB. We found that, on average, 23% of the dynamic spectrum is flagged, 6.25% of which is due to every 16th channel, which contains the DC component of the 16-channel poly-phase filter-bank used to channelise the LOFAR 0.195 MHz sub-bands into 12.21 kHz channels.
For each Starlink satellite passing through the LOFAR station beam, we started by extracting 20 s in time, centred on the predicted mid-point of the pass through the LOFAR station beam, from each of the 91 TABs. For this we used the band-pass calibrated dynamic spectra. To minimise the impact of terrestrial signals, which often appear similar in all beams, we subtracted from the extracted dynamic spectrum of each TAB the mean of the dynamic spectra of all the other TABs. Finally, again for each satellite pass, the resulting dynamic spectra of those TABs through which the satellite passed were aligned in time based on the predicted passage time and averaged to increase the signal-to-noise ratio of any satellite emission. We note that with this approach we specifically chose not to mask any data that was flagged by , in order to ensure that no emission from satellites would be removed from the analysis.
Inspection of these averages of TABs showed broad-band emission throughout the observed frequency range, coinciding with the crossing times of Starlink satellites. Normalised, aligned, and averaged dynamic spectra for two satellites are shown in Fig. <ref>. The dynamic spectra have a time resolution of 41 ms and the full frequency resolution of 12.21 kHz. Due to the normalisation with the dynamic spectra of the other TABs, bright signals in those TABs may lead to depressions in these plots. To prevent masking of signals associated with satellites, no masking has been applied when normalising, aligning and averaging these spectra. Not all satellites reveal broad-band emission at the same frequencies – the two most common frequency ranges where emission is detected are at 116 to 124 MHz and 157 to 165 MHz. We focus our analysis on these two frequency ranges, but also include the ITU-R RAS frequency band from 150.05 to 153 MHz.
Besides broad-band emission, narrow-band emission was also detected in, and confined to, several individual 12.21 kHz channels. The frequencies of these channels cover 124.994 to 125.006 MHz, 134.991 to 135.004 MHz, 143.048 to 143.060 MHz, 149.994 to 150.006 MHz and 174.994 to 175.006 MHz. We include these signals in our analysis, and will refer to them as the narrow-band emission at 125, 135, 143.05, 150 and 175 MHz. As the maximum radial velocities of the Starlink satellites in this observation are less than |v_r|<1 km s^-1, any Doppler shifts at these frequencies are less than ∼600 Hz and hence confined to individual spectral channels.
As shown in Fig. <ref>, the signal strength of these narrow-band emission can vary significantly between frequencies as well as satellites. In some cases, the narrow-band features were so bright, that the satellite was detected passing through the sidelobes of individual TABs. Furthermore, in many cases, especially at 125 MHz, the narrow-band signals were superposed with terrestrial signals. This is also why the data processing strategy had to be modified in order to extract the narrow-band signals properly. Instead of subtracting the average of all beams from each spectrogram we subtracted a spectral baseline in a small window around each narrow-band peak.
Finally, in some, but not all, of the lower altitude Starlink satellites, a comb of narrow (within a 12.21 kHz channel) peaks was seen in the frequency range above 155 MHz. The dynamic spectra of satellite 51998 shown in Fig. <ref> shows this comb for frequencies between 170 and 176 MHz. Power spectra of the emission between 157 to 165 MHz shows that these peaks are spaced at 50 kHz offsets and is detectable in 17 of the 46 satellites at lower altitudes, but none of the higher altitude satellites. The satellites where this comb was detected are marked in Table <ref>.
For all satellites which were detected through either broad-band or narrow-band emission, we determined the time offset between the observed and the predicted passage time through the TABs by fitting a Gaussian profile to the temporal emission profiles. These time offsets are listed in Table <ref>. We found that the time offsets are less than 1 s for all but four satellites; excluding those yields a median time offset of Δ t = -0.03±0.14 s. The four satellites with the largest time offsets passed through the beam pattern as much as 6.4 s early, or up to 1.3 s late, compared to predictions. We furthermore found that the temporal widths of the Gaussian fits match those from predictions, with the satellites at 350 km orbital altitude moving through the beam faster than those at 550 km. Subsequently, all these offsets were used to correct the satellite ephemerides, and further analyses were based on the corrected positions.
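A sketch of such a Gaussian fit, here using SciPy and synthetic stand-in data in place of the measured band-averaged profiles, is given below.

import numpy as np
from scipy.optimize import curve_fit

def gauss(t, amp, t0, sigma, offset):
    return amp * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + offset

# Band-averaged intensity vs. time relative to the predicted TAB crossing (stand-in data).
t = np.linspace(-10.0, 10.0, 477)
profile = gauss(t, 30.0, -0.2, 0.25, 0.0) + np.random.default_rng(2).normal(0.0, 1.0, t.size)

p0 = [profile.max(), t[np.argmax(profile)], 0.3, 0.0]
popt, _ = curve_fit(gauss, t, profile, p0=p0)
time_offset = popt[1]   # observed minus predicted passage time (s)
fit_width = popt[2]     # compare with the predicted crossing duration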
To visualise the emission as a satellite passes through the LOFAR TAB beam pattern, Figs. <ref>, <ref> and <ref> show the temporal profiles of satellite passes in comparison to the location of the satellite as it passes through the beam pattern. The case of satellite 47373 shown in Fig. <ref> is an example of a very bright event, where the narrow-band emission at 175 MHz was strong enough to be detected in all TABs for the full duration that the satellite passed through the 4.7° FWHM station beam. In other cases, such as the pass of satellite 45705 (Fig. <ref>), the behaviour was `normal', and a signal was only detected in the beams covering the satellite's sky track. For completeness, an example of the broad-band emission between 116 and 124 MHz is displayed in Fig. <ref> for the pass of satellite 51978. As expected, the strongest detections coincide with the predicted times at which the satellites passed through the individual TABs, confirming that the signal was coming from the direction of the satellites.
Next, we used the intensity-calibrated spectra to estimate the power flux densities (PFD) for each one of the detected signals. As the satellites usually did not cross any of the beam centres exactly, we determined the PFD as a function of the angular separation between the satellite positions with respect to each of the TAB centres; see Figure <ref> for two example satellites.
Based on a Gaussian least-squares fit to the data points, the peak PFD could be estimated. These PFD measurements are provided in Table <ref>. Furthermore, a visual overview is provided in Fig. <ref>. It is noteworthy that for the events with very high intensity (above about 100 Jy) the Gaussian fit was made difficult because of the cross-talk induced by the LOFAR beam-former (e.g. Fig. <ref> left panel). Therefore, the width parameter of the Gaussian fit curve was constrained to values below 1.1ϑ_fwhm^tab. Likewise, for all fits the zero level offset was constrained to values close to zero. Also, the scatter in the flux density values was rather large, such that the accuracy of the S_ν values in Table <ref> should not be overestimated.
§ ANALYSIS OF THE DETECTED EVENTS
§.§ Signal properties
Using the flux density measurements of satellite events at the narrow- and broad-band frequencies as listed in Table <ref> we can infer some properties of the detected signals.
We found that the narrow-band emission at 125, 135, 150, and 175 MHz is only detected for the Starlink satellites at their operational altitude of h=550 km, and is not seen in any of the Starlink satellites in the lower orbit at altitudes of h=350 km. As the higher altitude satellites are more distant (d∼555 km) than the lower altitude satellites (d∼356 km), any emission of equal intrinsic strength should appear about (555 km/356 km)^2∼2.4 times brighter in the received data for the satellites at lower altitudes. While the individual satellites showed some variation in the signal strengths, it is deemed extremely unlikely that all of the lower altitude satellites would by chance have very low emission. Hence, there appears to be an intrinsic difference between the satellites in higher and lower altitude orbits with respect to the narrow-band features.
This is not the case for the broad-band emission, which was detected for the majority of satellites, regardless of their orbital altitude. We found that the median PFD of the low altitude satellites is a factor 2.0 and 2.3 higher than that of the high altitude satellites for frequency ranges of 116 to 124 MHz and 150.05 to 153 MHz, respectively. As this is close to the expected factor of 2.4, this indicates that the generation of this emission is independent on altitude. Curiously, the broad-band emission between 157 and 165 MHz is a factor 15 higher in the low altitude satellites, suggesting an intrinsic difference in this frequency range.
The occurrence of the signals for individual satellites at different frequencies is correlated. For 18 out of 19 cases in which narrow-band emission at 125 MHz was detected, emission was also present at 135 MHz, albeit somewhat fainter. A similar relation exists between the emission at 125 MHz and 175 MHz, though the emission at 175 MHz appears to be more variable and can be brighter than at 125 MHz. The signal at 175 MHz was detected in 14 cases. The narrow-band emission at 150 MHz was only seen for those satellites that were very bright at 175 MHz (and cross the station beam) and was detected in six cases.
As the lower altitude satellites were still in the orbit-raising phase, the 125 and 175 MHz signals might be associated with the regular operation (e.g. communication-link transmissions) of the satellites. Also, both frequencies are odd multiples of 25 MHz, a frequency often used for local oscillators, and could be harmonics, which usually appear stronger at either odd or even multiples of the fundamental mode. This would also explain why the 150 MHz signal is only present for the brightest of the 175 MHz detections (as 150 MHz is an even multiple of 25 MHz). Typically, square-wave-like signals are expected to produce odd harmonics. It is unclear how the 135 MHz feature would fit into this. It might be owing to some intermodulation product of the detected narrow-band features with some other signal, but we were not able to find further evidence for this.
We attribute the narrow-band emission detected at a frequency of 143.05 MHz to the GRAVES space surveillance radar <cit.>. The GRAVES transmitter is located 30 km east of Dijon, France and is known to transmit continuous wave signals at 143.050 MHz for bi-static Doppler tracking of satellites. The transmitter illuminates a 180° range in azimuth (east to west through south) and a 30° range in elevation <cit.>. Though the radiated power of the transmitter is not publicly known, radar reflections from meteors are regularly detected by radio amateurs using modest equipment, even for meteors located well outside of the nominal illumination pattern of the GRAVES transmitter <cit.>. The Starlink satellites that we observed were also located far outside of the (known) GRAVES illumination area, implying that even in the far sidelobes of the GRAVES radar the effectively transmitted power is substantial.
Another interesting finding is that most high-altitude satellites do not show GRAVES reflections, even though LOFAR should have the sensitivity to detect them. The two satellites that were detected at 143.05 MHz were even brighter than the low-altitude satellite reflections (when they should be weaker owing to longer propagation paths). This suggests that the details of the propagation are subject to several effects, the magnitude of which cannot easily be determined without additional information. One aspect is certainly the orientation of the satellite relative to the LOFAR station. It is known that Starlink uses the `open-book' mode during orbit raising, where the solar array is aligned parallel to the satellite body to reduce atmospheric drag, while the operational satellites are in `shark-fin' configuration, where the solar array is located mostly behind the satellite as seen from Earth. Furthermore, the exact path geometry is expected to differ between lower and higher orbit altitudes, as well as the side-lobe gain of GRAVES towards different elevations.
§.§ Assessment of transmitted power levels
The maximum detected spectral power flux densities were about 500 Jy (average over one spectral channel) for the narrow-band signals and of the order of a few Jy for the broad-band signals. As the distance to the satellites, d, and the main beam gain of the HBA TAB are known, it is possible to determine the transmitter spectral EIRP (equivalent isotropically radiated power), P_ν^tx. The EIRP is the power that a transmitter with an isotropic antenna would have to radiate to produce the observed signal. As the transmitter antenna pattern, G_tx, and pointing direction are unknown, it is not possible to infer the conducted power at the antenna port of the satellite. The conversion formula between spectral EIRP and measured power flux densities is given by
S_ν = G_tx(ϑ, φ) P_ν^tx/(4π d^2) |_G_tx≡1 = P_ν^tx/(4π d^2) ,
assuming only line-of-sight propagation loss and neglecting other effects, such as atmospheric attenuation. The resulting minimum and maximum spectral EIRP values for each band are compiled in Tab. <ref>, providing results for low- and high-altitude satellites separately.
The transmitted EIRPs can also be converted to electric field strengths to make comparison with EMC standards simpler; compare Section <ref>. The corresponding values are also provided in the table. For the narrow-band signals at 125, 135, 143, 150, and 175 MHz, respectively, electric field strengths in the range of 24 to 49 dB[μV m^-1] are determined, normalised to what an average detector with bandwidth of 120 kHz at a distance of 10 m would measure. The typical values for the broad-band signals are between 21 and 39 dB[μV m^-1], again for a 120 kHz detector bandwidth.
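The following Python sketch performs this conversion for a single detection, assuming an isotropic transmitter and pure line-of-sight propagation as above; the input values are examples, not the entries of the tables.

import numpy as np

Z0 = 376.730   # impedance of free space (Ohm)
JY = 1e-26     # W m^-2 Hz^-1

def eirp_and_efield(s_nu_jy, signal_bw_hz, distance_m, ref_dist_m=10.0):
    """Spectral flux density -> EIRP (isotropic transmitter, line-of-sight only)
    and equivalent E-field at a reference distance for EMC comparison."""
    p_nu_tx = 4.0 * np.pi * distance_m**2 * s_nu_jy * JY    # spectral EIRP (W/Hz)
    p_tx = p_nu_tx * signal_bw_hz                           # EIRP within the signal bandwidth (W)
    s_ref = p_tx / (4.0 * np.pi * ref_dist_m**2)            # power flux density at ref_dist_m (W/m^2)
    e_dbuv_m = 20.0 * np.log10(np.sqrt(Z0 * s_ref) / 1e-6)  # dB(uV/m)
    return p_tx, e_dbuv_m

# e.g. a ~500 Jy narrow-band detection in one 12.21 kHz channel at 555 km range
p_tx, e_dbuv = eirp_and_efield(500.0, 12.21e3, 555e3)       # roughly -66 dB[W] EIRP, ~48 dB(uV/m)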
These values can be compared with the results of the EPFD simulations in Section <ref>, in particular with Table <ref>. In the EPFD simulations it was, however, assumed that a signal has a constant electric field strength over the full allocated RAS band 150.05-153 MHz. For convenience of comparison, all electric field values have therefore also been converted to a measurement bandwidth of 2.95 MHz, which fully covers the RAS band. They are provided in the right-most column of Tab. <ref>. It is noted that for narrow-band signals the values are the same for both detector bandwidths (120 kHz and 2.95 MHz), because the total integrated power is the same, while for a broad-band signal the total power increases the more bandwidth is considered. The range of field strengths for the measurement bandwidth of 2.95 MHz is thus 24 to 49 dB[μV m^-1] (narrow-band) and 35 to 52 dB[μV m^-1] (broad-band).
For the detected Starlink satellites, Table <ref> cites maximum E-field values of 25.6 and 23.8 dB[μV m^-1] given a measurement bandwidth of 2.95 MHz for the (effective) antenna diameters of 25 and 70 m, respectively. Hence, even the weak detections exceed the suggested limit, while the brightest detections are more than 20 dB above the limit.
It has to be emphasised, though, that our observations represent only a snapshot, measuring only a small sub-set of all satellites, that the detected signals are not equally bright, and that some satellites did not even reveal UEMR at certain frequencies. Nevertheless, the overall number of detections indicates that satellite-borne UEMR from large satellite constellations could indeed be an issue for RAS operations.
§.§ Intrinsic emission or reflection?
Theoretically, it is possible that the measured signals do not originate from the Starlink satellites themselves but are of terrestrial origin, reflected off the satellites. To test this hypothesis, we first determine whether a terrestrial signal could be visible only as a reflection while remaining undetectable over the direct terrestrial path. Second, we estimate the transmitted power level that would be required to create a signal with the observed properties.
§.§.§ Geometrical considerations
Before the link budgets of both propagation paths can be compared, the geometry of the paths needs to be worked out. The likelihood that a terrestrial transmitter at distance d from the RAS station is not seen directly, while its reflected signal is visible, is highest when d is as large as possible compared to the transmitter-satellite and satellite-receiver distances, d_1 and d_2, respectively. This is the case when all three objects (transmitter, satellite, and receiver) lie in a plane perpendicular to the ground. It is noted that none of the paths actually follow straight lines: the terrestrial path, d, follows a geodesic, while d_1 and d_2 are subject to refraction (which was not considered in this analysis).
In Fig. <ref> the path geometry is analysed for the high- and low-altitude satellites. It is assumed that the satellite appears at an elevation angle of 85° from the LOFAR observer. Based on the azimuthal angle of the satellite (with respect to LOFAR) one can construct a geodesic[For simplicity, the Earth ellipsoid WGS-84 is assumed.] starting at the LOFAR observer out to a certain distance. Along this path, one can place a hypothetical transmitter and determine under which elevation angle the same satellite would appear in the (topocentric) transmitter frame. Likewise, the geodesic distance (i.e. the projection on the ground) between transmitter and satellite can be inferred. The latter two quantities are shown in Fig. <ref> as red and blue curves, respectively. At about 2000 km distance, the low-altitude satellite would set below the horizon.
§.§.§ Link budgets
The propagation losses for the two paths are determined by different physical processes. In the terrestrial case, diffraction on the spherical Earth, tropospheric scatter, and other effects play a role. The model proposed in is employed to calculate the loss, L_terr(d). For the effective propagation loss, the antenna gains also need to be considered:
P_rx/P_tx = G_tx G_rx L^-1_path .
In the line-of-sight case (which is not relevant here), one would find[Because P_rx = S· A_eff^rx = P_txG_tx/4π d^2 A_eff^rx and A_eff^rx = G_rxλ^2/4π.]
L_terr(d) ≈[4π d/λ]^2 .
It should be pointed out that we follow the common practice of spectrum management and many other fields to define the loss as a quantity larger than one (i.e. positive on the decibel scale).
Unfortunately, it is not known what the antenna gains towards the local horizon are for either the transmitter or the receiver. Therefore, we have to assume values. The simplest choice is to set both gains to 0 dBi.
For the reflection scenario, the Radar equation has to be used:
P_rx = P_txG_tx/4π d_1^2σ_rc1/4π d_2^2 A_eff^rx ,
and we can express this in a similar way as Eq. <ref>:
P_rx/P_tx = G_tx G_rx[4π/λ^2] [λ/4π d_1]^2 σ_rc[λ/4π d_2]^2
≡ G_tx G_rx[4π/λ^2] L^-1_sky(d_1) σ_rc L^-1_sky(d_2) .
Here, the radar cross section, σ_rc, was introduced. For Starlink, we assume σ_rc=10 m^2, as we are not aware of a publicly available measurement. The effective cross section also depends on the orientation of the satellite and the frequency range considered. Note that for a mono-static radar, d_1=d_2=d, and thus the propagation loss would scale with the fourth power of the distance. In our case, however, d_1 and d_2 can be very different.
Figure <ref> displays the path propagation losses of the satellite reflection scenario versus the direct terrestrial (trans-horizon) path loss. It has to be noted that for the terrestrial path, neither the terrain (such as hills) nor clutter was accounted for. Both can add substantial additional path propagation losses of 20 dB and more, each. In the reflection case, the LOFAR TAB points at the satellite, such that the full main-beam gain applies (43 dBi at 175 MHz). Again, without further knowledge it is assumed that the transmitter gain towards the satellite is 0 dBi. Under these assumptions, the propagation path via the reflection off the satellite would be more efficient beyond about 900 km compared to the terrestrial propagation. At this distance the satellite would appear at approximately 20° to 25° elevation in the transmitter frame. If the transmitter signal were directed towards the satellite, the antenna gain, G_tx^sky, would be much higher than the assumed 0 dBi, which would further decrease the distance at which the reflection scenario is more efficient. Likewise, diffraction at terrain or clutter losses would also increase the terrestrial path loss and make the reflection scenario comparatively more efficient.
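For reference, the reflection-path part of this comparison can be sketched as follows in Python; the slant ranges, the radar cross section of 10 m^2, and the antenna gains are the assumptions stated above, while the terrestrial trans-horizon loss would still have to be computed with the dedicated propagation model.

import numpy as np

C = 299_792_458.0

def free_space_loss_db(d_m, freq_hz):
    """L_sky(d) = (4*pi*d/lambda)^2, expressed in dB."""
    return 20.0 * np.log10(4.0 * np.pi * d_m * freq_hz / C)

def reflection_loss_db(d1_m, d2_m, freq_hz, sigma_rc_m2=10.0, g_tx_dbi=0.0, g_rx_dbi=43.0):
    """Effective loss P_tx/P_rx (dB) of the bi-static reflection path (radar equation above)."""
    lam = C / freq_hz
    return (free_space_loss_db(d1_m, freq_hz) + free_space_loss_db(d2_m, freq_hz)
            - 10.0 * np.log10(4.0 * np.pi * sigma_rc_m2 / lam**2)
            - g_tx_dbi - g_rx_dbi)

# Example geometry (assumed values): transmitter-satellite and satellite-LOFAR slant ranges.
loss_db = reflection_loss_db(d1_m=800e3, d2_m=555e3, freq_hz=143.05e6)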
§.§.§ Estimating the transmitter power (reflection scenario)
The calculations above show that it is indeed possible for a transmitter to create a stronger reflected signal than over the direct terrestrial path, once the distance between transmitter and receiver gets large enough. This is a consequence of the large diffraction loss on the trans-horizon terrestrial path. But still, the propagation loss via the satellite reflection is very high, so it may be interesting to estimate the required transmitter power. Again, as the transmitter antenna gain is unknown, we can only calculate the EIRP (towards the satellite), but not the conducted power at the antenna port of the transmitter.
Based on the reflected-case path propagation loss in Fig. <ref> and the maximum received narrow-band power of
P_rx=A_eff^tabS_νΔ f = 2997 m^2· 506 Jy· 12.2 kHz=-157 dB[W] ,
the transmitter power (EIRP towards the satellite) would need to be between 81 and 92 dB[W] or 73 and 88 dB[W] for high- or low-altitude satellites, respectively, depending on the distance between radar transmitter and satellite. This is a huge number and would require a LOFAR-sized transmitter with a conducted power in the kilowatt regime (concentrated within a bandwidth of only 12.2 kHz)[Similar figures apply for the broad-band signals. While these have lower intensity, they span many MHz, and already the fraction in the RAS band (150.05-153 MHz) leads to the same received power as the higher-intensity narrow-band signal.]. In the case of the GRAVES frequency it is indeed very likely that the detected signal originates from the GRAVES radar and is reflected off the satellites. GRAVES probably has sufficient transmit power to explain the received signals. In fact, there are numerous reports by amateurs who, with small receiving antennas, receive GRAVES signals reflected by meteors. The distance between GRAVES in the eastern part of France and the LOFAR Superterp is about 620 km. This is large enough to make the reflection path more efficient than the direct terrestrial path, as for LOFAR the local clutter environment will play a role and there is also relatively hilly terrain along the propagation path in France and Belgium that would increase the diffraction losses.
No other radar facility is known that operates at the detected frequencies. While the radar scenario cannot be fully excluded for these cases, we consider it unlikely. The fact that the 125 and 175 MHz signals were only observed for the higher-orbit satellites is another aspect that would be hard to explain within a radar scenario. Moreover, a powerful radar operating broad-band between about 110 and 170 MHz would probably be well known, as it would interfere with many applications in a large area around the transmitter.
§ SUMMARY AND CONCLUSIONS
Using the LOFAR radio telescope, we have detected radiation between radio frequencies of 110 and 188 MHz that is correlated with satellites of the SpaceX/Starlink constellation. These frequencies are well below the assigned transmission frequencies at 10.7 to 12.7 GHz. Broad-band emission was present over the whole observed bandwidth for some satellites, while others showed strong (from 10 Jy up to ∼500 Jy) narrow-band signals at frequencies of 125, 135, 150, and 175 MHz. The presence of narrow-band emission differs between Starlink satellites at operational altitudes and those that were still actively raising their orbits, indicating possible differences in the operational state of the satellites, or differences between their hardware versions. We found that the flux density of the broad-band emission scales with range as expected for emission originating at the satellites, suggesting this emission is likely intrinsically generated; it is detectable in 47 of the 68 Starlink satellites that were observed. However, narrow-band radio emission at 143.05 MHz can be attributed to reflections of transmissions from the French GRAVES space surveillance radar, and while we know of no other radars operating at the detected narrow-band frequencies or broad-band frequency ranges, confirmation that the observed narrow-band emission at other frequencies is intrinsic is required.
The narrow-band emission detected at 125, 150, and 175 MHz may be harmonically related, suggesting a local oscillator or clock signal operating at a frequency of 25 MHz. It is noteworthy that the narrow-band signals were only detected for satellites at the operational altitude. No such signals were seen for the satellites in the orbit-raising phase; it is unclear whether this difference is due to the operational state or the satellite version. The broad-band features are most likely caused by other means, such as switched-mode power supplies, communication signals internal to the satellites, or some other electronic or electrical subsystem.
Follow-up observations will be able to shed further light on the origin and properties of the observed emission. Observations with the LOFAR Low-Band Antennas (LBAs; 10-90 MHz) would be able to confirm the presence of a 25 MHz local oscillator, while higher frequency resolution observations should allow the distinction between intrinsic or reflected emission from the Doppler shifts of the narrow-band emission. Further observations with LOFAR as well as other radio telescopes will be required to investigate the properties of the emission between different Starlink satellite versions at operational altitudes, if the emission changes when the satellites are in the Earth's shadow and the solar array is not illuminated by the Sun, and if radio emission from Starlink satellites is detectable at higher radio frequencies. Besides further observations of satellites from the Starlink constellation, it would be prudent to determine if satellites from other constellations emit UEMR. Finally, the impact of – and possible mitigation strategies against – the observed emission from satellites of the Starlink, or any other, constellation on the different science cases of LOFAR and other current, as well as future, radio observatories (e.g. MWA, LWA, SKA1-Low) operating at low frequencies needs to be investigated.
UEMR is not subject to the spectrum management of active radio services. In fact, from the radio astronomers' perspective, UEMR from satellites and spacecraft is currently not well regulated. While there are some electromagnetic compatibility standards for spacecraft, these were made to protect the subsystems within a spacecraft from each other or from its launcher system, not to protect third-party activities. The measurements presented in this paper show that there is a potential for harmful interference (as defined in the ITU-R Radio Regulations using the RA.769 thresholds) to radio astronomy observations caused by satellites in frequency bands far away from their allocated carrier frequencies. This potential is a function of the number of satellites and their orbital parameters; thus, large satellite constellations may pose a risk. A major difference between wanted transmissions via antennas and UEMR is that the latter is most likely not directional but relatively isotropic. Therefore, one important protection measure, namely excluding radio astronomy stations from the service area of a satellite network, is not possible for UEMR. In addition, a strong terrestrial transmitter, which is not immediately an issue because of good geographical separation, can produce reflected signals via the satellites' surfaces. A sphere of satellites could thus open a new propagation channel, which may need to be considered in terrestrial radio-propagation models such as the ones developed by the ITU; this requires further study. Both effects, intrinsic and reflected emission, are presently not considered in the national and international regulation processes.
Because the detected signals in our one-hour observation represent only a snapshot and a small fraction of the Starlink constellation, one cannot currently estimate accurately whether, and by how much, an entire satellite constellation, Starlink or other, would exceed protection thresholds in RAS frequency bands. However, the detected intensities are orders of magnitude above the level that each individual satellite would be allowed to have in order to comply with the thresholds (if all satellites were equally bright, as explained in Section <ref>). Therefore, we are of the opinion that satellite operators and regulation authorities should consider satellite UEMR and reflected signals as another aspect of the regulatory process.
Additionally, a dialogue between the satellite operators and the (radio) astronomical community would be welcome to understand how the electrical properties and operational procedures of the satellites affect radio astronomy, and how these can be used to mitigate their impact. Hopefully, this dialogue can build on the co-operation that SpaceX/Starlink has with optical astronomy <cit.>, especially since radio observations may be affected continuously, not primarily during twilight as is the case with optical/infrared astronomy. This could follow the example that was set with the recent coordination agreement between the US National Science Foundation (NSF) and SpaceX. Most of the authors of this work are active members in the IAU CPS, where this dialogue can take place.
It cannot be overstated that any loss of observing time translates directly into a monetary loss on the substantial investments that went into developing, operating and using radio astronomy facilities <cit.>. The much graver consequence, however, is the loss of scientific output: fundamental research is a significant sector of physical science, and it usually pays off only over decades. While some of the existing satellite constellations have the means to protect radio astronomy sites from intended radio transmissions by steering their radio beams away, this kind of active mitigation will not be possible for UEMR. Hence, this is an issue in need of close attention by satellite operators, regulators and the astronomical community. Tens of thousands of low-Earth orbit satellites are in the making and, without proper consideration, these could potentially produce an artificial sphere of `radio light' that leaks into astronomical observations, rendering some astronomical observations impossible.
This paper is based (in part) on data obtained with the International LOFAR Telescope (ILT) under project code DDT16_003. LOFAR <cit.> is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefitted from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d'Orléans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland. The project leading to this publication has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101004719.
The authors thank the support of the IAU Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference (IAU CPS). The IAU CPS is a virtual centre of the International Astronomical Union set up in partnership with the SKAO and the NSF’s NOIRLab. The Centre coordinates collaborative and multidisciplinary international efforts from institutions and individuals working across multiple geographic areas, seeks to raise awareness, and mitigate the negative impact of satellite constellations on ground-based optical, infrared and radio astronomy observations as well as on humanity’s enjoyment of the night sky. Conversations about UEMR started back in 2020 on the Dark and Quiet Skies 2 workshop and this paper is a result of those conversations and studies.
We thank Willem Baan and Uwe Bach for proof-reading our initial draft and providing valuable feedback.
This paper made extensive use of the Python scientific stack, and we would like to thank the developers of NumPy <cit.>, matplotlib <cit.>, SciPy <cit.>, Astropy <cit.>, and Cython <cit.>.
aa
§ THE EQUIVALENT-POWER FLUX DENSITY METHOD (EPFD)
Mathematically, the received aggregated power for a RAS pointing direction, (φ_0, ϑ_0), is given by
P_rx(φ_0, ϑ_0)=∑_i=0^n L_i^-1(φ_i, ϑ_i, d_i) G_rx(φ_i, ϑ_i; φ_0, ϑ_0)G_tx(φ̃_i, ϑ̃_i) P_tx .
The angles (φ_i, ϑ_i) describe the position of satellite i in the observer frame (e.g. azimuth and elevation), while (φ̃_i, ϑ̃_i) is the position of the observer in the satellite antenna frame. The distance between each of the satellites and the observer is denoted as d_i. Furthermore, P_tx is the transmitted power in forward direction, G_tx,rx are the effective transmitter and receiver antenna gains. The path attenuation/path propagation loss is subsumed into L_i(φ_i, ϑ_i, d_i).[It is common to define the path propagation loss as a quantity larger than One, such that it is positive on the Decibel scale, which is why one has to divide by L_i in Eq. <ref>.] If only line-of-sight loss would be accounted for (which is approximately correct at low frequencies), L_i becomes
L_i^-1(d_i) = [c/(4π d_i f)]^2 ,
where c is the speed of light and f is the observing frequency. At higher frequencies, atmospheric attenuation plays an important role, too.
The received aggregated power as given in Eq. <ref> is not the quantity used in . Instead, in these recommendations the EPFD is defined as
EPFD(φ_0, ϑ_0)=∑_i=0^n1/4π d_i^2G_rx(φ_i, ϑ_i; φ_0, ϑ_0)/G_rx^maxG_tx(φ̃_i, ϑ̃_i) P_tx .
This assumes pure line-of-sight propagation losses. In this case, we can also identify
EPFD(φ_0, ϑ_0)=4πf^2/c^21/G_rx^maxP_rx(φ_0, ϑ_0) ,
but as mentioned above, it is usually desired to normalise this to a hypothetical isotropic receiver, to make the comparison with easier, that is
.EPFD(φ_0, ϑ_0)|_G_rx^max=1=4πf^2/c^2P_rx(φ_0, ϑ_0) .
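A minimal sketch of this aggregation for a single time step and pointing direction, normalised to an isotropic receiver, could look as follows; the satellite distances, gains, and transmit power are arbitrary stand-ins rather than constellation data.

import numpy as np

def epfd_isotropic(d_m, g_tx_lin, p_tx_w):
    """Line-of-sight EPFD for one RAS pointing, normalised to an isotropic receiver
    (G_rx / G_rx^max = 1): the sum of G_tx * P_tx / (4 pi d^2) over all visible satellites."""
    return np.sum(g_tx_lin * p_tx_w / (4.0 * np.pi * np.asarray(d_m) ** 2))

# Toy time step: 100 visible satellites with isotropic UEMR of -50 dB[W] each (assumed).
rng = np.random.default_rng(3)
d = rng.uniform(550e3, 2000e3, size=100)
epfd_db = 10.0 * np.log10(epfd_isotropic(d, np.ones(100), 10 ** (-50.0 / 10.0)))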
It should be noted that also contains limits for the received power, such that it would equally well be possible to work directly with Eq. <ref>. In the following, all PFD values are to be understood in the sense of Eq. <ref>. The simulations carried out in this work perform EPFD calculations for a grid of sky cells as proposed in and . The applied scheme returns cells that have approximately the same solid angle. also recommends using a random pointing of the radio telescope antenna within a given cell for each iteration, but if the grid cells are not too large, the final results usually do not show a significant dependence on this. Nevertheless, as this has no impact on the computational complexity, it is usually done in this way.
Radio telescope antenna patterns are very complicated, depending on the fine details of the aperture. For example, primary-focus receivers are often mounted on support legs, which block part of the aperture (as does the primary-focus installation itself). For general purpose calculations, contains a (radially symmetric) reference antenna pattern to be used in spectrum management compatibility studies, which is based on an unblocked circular aperture.
In the EPFD calculation the transmitter gain as well as the receiver gain have to be accounted for, both being direction dependent. While the receiver gain depends only on the angular separation between a given telescope pointing and a satellite (owing to the symmetry of the pattern), the satellite transmitter antenna pattern can be a more complicated function. Therefore, for each time step (and thus satellite position), the relative position of the RAS station in the dynamic satellite antenna frame must be inferred.
|
http://arxiv.org/abs/2307.02139v1
|
20230705092915
|
Extending the Dixon and Coles model: an application to women's football data
|
[
"Rouven Michels",
"Marius Ötting",
"Dimitris Karlis"
] |
stat.ME
|
[
"stat.ME"
] |
Extending the Dixon and Coles model: an application to women's football data
Rouven Michels (corresponding author: [email protected]), Bielefeld University; Marius Ötting, Bielefeld University; Dimitris Karlis, Athens University of Economics and Business
======================================================================================================================================================================
The prevalent model by <cit.> extends the double Poisson model, in which two independent Poisson distributions model the numbers of goals scored by the two teams, by moving probability mass between the scores 0-0, 0-1, 1-0, and 1-1.
We show that this is a special case of a multiplicative model known as the Sarmanov family.
Based on this family, we create more suitable models by moving probabilities between scores and employing
other discrete distributions.
We apply the new models to women's football scores,
which exhibit some characteristics different from those of men's football.
Keywords: bivariate distribution, correlation, dependence modelling, Sarmanov family, women's football
§ INTRODUCTION
Football is the most popular sport in the world. There is an ever-growing interest in predicting the scores of football matches for various purposes, including betting, improved team organisation, and simply fun, among other reasons. In the academic literature, <cit.> was the first to systematically investigate the number of goals scored in men's football. Since his seminal work, many extensions have been proposed, aiming at modelling the number of goals scored by each team. When modelling this number, two main questions arise. First, one has to select a marginal distribution for the number of goals; the Poisson distribution constitutes a standard choice. Second, the correlation between the numbers of goals scored by the two competing teams is also essential. Commonly used models
include the double Poisson model of <cit.>, bivariate Poisson models as in <cit.>, and models based on copulas as defined in <cit.>, which also allow for marginals other than Poisson, e.g., the negative binomial distribution <cit.> or the Weibull count distribution <cit.>.
Among these models, the one proposed by <cit.> has found tremendous impact. In particular, <cit.> found that scores like 0-0s and 1-1s are more likely to occur with real data than
under an independence assumption. To address this characteristic, they developed a model that shifts probability between the scores 0-0, 0-1, 1-0, and 1-1. As their approach has proven helpful in modelling football scores, it is considered one of the most widely used models. However, in their model formulation, it is not possible to shift probabilities of scores other than 0-0, 0-1, 1-0, and 1-1, or to use marginals other than Poisson. If only these four scores occur more (or less) often than under independence, there is no need to shift the probabilities of the other scores. In fact, in men's football, the empirical proportions of other common scores, such as 2-0 and 3-0, are usually very close to what would be expected under independence. However,
this is usually not the case for women's football.
In women's football, scorelines such as 2-0 and 3-0 are much more likely to occur than under independence. However, we cannot modify the corresponding probabilities for such scores in the Dixon and Coles model. To overcome this limitation and to adequately model women's football scores, we extend the Dixon and Coles model in several ways. In particular, we first show that this model is a special case of the Sarmanov family of distributions <cit.>. Second, exploiting the connection to the Sarmanov family of distributions, we demonstrate how to shift probabilities of scores other than 0-0, 1-0, 0-1, and 1-1. In particular, even an infinite number of probabilities can be modified. Third, we allow for marginal distributions other than Poisson. Fourth, since the correlation implied under the Dixon and Coles model is relatively small and thus unrealistic in some applications, we present model formulations which are applicable to data with a wider range of correlation.
To demonstrate the feasibility of our approach, we consider data on the number of goals scored in the four most popular women's football leagues in Europe. In particular, we consider the English FA Women's Super League, the German Frauen-Bundesliga, the French Division 1 Feminine, and the Spanish Primera Iberdrola for the seasons 2011/12–2018/19 and 2021/22. Interest in women's football has increased in recent years, as nowadays demand for several women's football matches is as high as for men's football. In March 2022, more than 90,000 spectators witnessed the Champions League quarter-final between FC Barcelona and Real Madrid in the Camp Nou. More than half a million people attended the UEFA Women's Euro 2022. Despite this increased interest, women's football has been analysed only briefly to date, with most studies comparing men's and women's football (see, e.g. ). To our knowledge, there is limited research on modelling women's football scorelines, in contrast to the men's game.
The rest of the paper is structured as follows. Section 2 introduces the model proposed by <cit.> and presents several extensions. Section 3 covers the application of the models presented in Section 2 to the women's football data, thereby comparing their fit and predictive performance. Section 4 concludes.
§ EXTENDING THE DIXON AND COLES MODEL
In this section, we show that the model developed by <cit.> is a special case of a vast family of probability distributions for modelling bivariate count data, namely the Sarmanov family of distributions (). This relationship implies that we can extend the Dixon and Coles model in specific directions, including discrete distributions other than the Poisson and other dependence structures, obtained by altering the function that introduces correlation.
§.§ The Dixon and Coles model
<cit.> defined a bivariate model with Poisson marginal distributions. The corresponding joint probability mass function (pmf) is given by
P(X_1 = x_1, X_2 = x_2) = τ_λ_1, λ_2(x_1,x_2) (λ_1^x_1 exp(-λ_1)/x_1!) (λ_2^x_2 exp(-λ_2)/x_2!) ,
with
τ_λ_1, λ_2(x_1,x_2) =
 1 - λ_1 λ_2 ω̃   if x_1 = x_2 = 0,
 1 + λ_1 ω̃    if x_1 = 0, x_2 = 1,
 1 + λ_2 ω̃    if x_1 = 1, x_2 = 0,
 1 - ω̃     if x_1 = x_2 = 1,
 1      otherwise,
where λ_1 and λ_2 are the means of the two Poisson marginal distributions and τ_λ_1, λ_2(·,·) measures the correlation between the scores. For all other score combinations, i.e. whenever x_1 > 1 or x_2 > 1, the probabilities remain unchanged from the product of the marginal distributions, as probabilities are shifted only between the four pairs (0,0), (1,0), (0,1) and (1,1). The magnitude of the shift depends on the dependence parameter ω̃, which has to satisfy the following inequality:
max(-1/λ_1, -1/λ_2)≤ω̃≤min(1/λ_1λ_2,1).
Thus, ω̃ can take positive or negative values but it is of limited range for reasonable means λ_1 and λ_2. The case ω̃ =0 corresponds to scores being independent.
How the probabilities for the pairs (0,0), (1,0), (0,1) and (1,1) are affected by ω̃ is shown in Figure <ref>. In particular,
for increasing values of ω̃, probabilities under the Dixon and Coles model are shifted from (0,0) and (1,1) to the pairs (0,1) and (1,0) in a proportional manner.
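As an illustration, the following Python sketch (not the software used for the analysis in this paper) evaluates the joint probabilities of the Dixon and Coles model on a grid of scores; the parameter values are arbitrary.

import numpy as np
from scipy.stats import poisson

def dixon_coles_matrix(lam1, lam2, omega_tilde, max_goals=10):
    """Joint pmf P(X1 = x1, X2 = x2) of the Dixon and Coles model up to max_goals."""
    p1 = poisson.pmf(np.arange(max_goals + 1), lam1)
    p2 = poisson.pmf(np.arange(max_goals + 1), lam2)
    P = np.outer(p1, p2)                      # independent (double Poisson) probabilities
    tau = np.ones_like(P)                     # tau = 1 outside the four low-scoring pairs
    tau[0, 0] = 1.0 - lam1 * lam2 * omega_tilde
    tau[0, 1] = 1.0 + lam1 * omega_tilde
    tau[1, 0] = 1.0 + lam2 * omega_tilde
    tau[1, 1] = 1.0 - omega_tilde
    return tau * P

P = dixon_coles_matrix(1.4, 1.1, omega_tilde=0.1)
assert abs(P.sum() - 1.0) < 1e-6              # shifting probabilities leaves the total at one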
§.§ Sarmanov Family
The Sarmanov family was introduced by <cit.>, while <cit.> studied some general methods for the construction of different families considering different types of marginal distributions. Throughout this contribution, we focus on the case of discrete distributions.
For i =1,2, assuming P_i(x_i) are two probability mass functions (pmf)
and q_i(x_i) are two bounded non-constant functions such that
∑_x_i=-∞^∞ q_i(x_i) P_i(x_i)=0,
then a joint pmf can be defined by
P(X_1 = x_1,X_2 = x_2)=P_1(x_1)P_2(x_2)[1+ ω q_1(x_1)q_2(x_2)],
where ω q_1(x_1)q_2(x_2) specifies the dependence of X_1 and X_2 and ω∈ℝ satisfies, for all x_1 and x_2, the condition
[1+ ω q_1(x_1)q_2(x_2)] ≥ 0.
For ω=0, the variables X_1 and X_2 are independent.
Following <cit.>, the correlation between X_1 and X_2 is then given by
ρ = ω u_1 u_2/σ_1 σ_2
where
σ_i is the standard deviation of the i-th marginal distribution and u_i = E[X_i q_i(X_i)], for i = 1,2.
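To make the construction concrete, the sketch below (our own illustration; all names, tolerances and numerical values are arbitrary) tabulates a Sarmanov joint pmf from two marginal pmfs and two q-functions, and checks the zero-mean condition numerically.

```python
import numpy as np
from scipy.stats import poisson

def sarmanov_pmf(x_max, pmf1, pmf2, q1, q2, omega):
    """Tabulate the Sarmanov joint pmf on {0,...,x_max}^2."""
    x = np.arange(x_max + 1)
    p1, p2 = pmf1(x), pmf2(x)
    q1v, q2v = q1(x), q2(x)
    # the q-functions must have zero expectation under their marginals
    if abs(np.sum(q1v * p1)) > 1e-6 or abs(np.sum(q2v * p2)) > 1e-6:
        raise ValueError("q-functions violate the zero-mean condition")
    factor = 1.0 + omega * np.outer(q1v, q2v)
    if (factor < -1e-12).any():
        raise ValueError("omega violates the non-negativity condition")
    return np.outer(p1, p2) * factor

def q_dc(x, lam):
    return np.where(x == 0, -lam, np.where(x == 1, 1.0, 0.0))

joint = sarmanov_pmf(30,
                     lambda x: poisson.pmf(x, 1.4), lambda x: poisson.pmf(x, 1.1),
                     lambda x: q_dc(x, 1.4), lambda x: q_dc(x, 1.1), omega=-0.1)
print(round(float(joint.sum()), 6))  # ~1, i.e. a proper joint pmf
```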
§.§ Dixon and Coles model as a member of Sarmanov Family
By selecting suitable functions q_i(x_i) that fulfil equation (<ref>), we can build flexible bivariate distributions based on the Sarmanov family. The model by <cit.> introduced in Section 2.1 also belongs to the Sarmanov family. For Poisson marginals as considered by <cit.>, we set ω = -ω̃ and select the functions q_1(x_1) and q_2(x_2) as
q_dc(x_i) = {[ -λ_i if x_i=0; 1 if x_i=1; 0 if x_i=2,3,… ].
where λ_i is the mean of the variable X_i, for i = 1,2. Then, the condition in equation (<ref>) holds when assuming Poisson marginals. In fact, plugging q_dc(x_i) into equation (<ref>) yields the pmf of the model proposed by <cit.>.
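This equivalence is easy to check numerically; the short script below (our own check, with arbitrary means) verifies that 1 + ω q_dc(x_1) q_dc(x_2) with ω = -ω̃ reproduces the adjustment τ_λ_1,λ_2(x_1,x_2) on every pair of scores.

```python
import numpy as np

lam1, lam2, om_tilde = 1.5, 1.2, 0.08
omega = -om_tilde

def q_dc(x, lam):
    return {0: -lam, 1: 1.0}.get(x, 0.0)

def tau_dc(x1, x2):
    return {(0, 0): 1 - lam1 * lam2 * om_tilde,
            (0, 1): 1 + lam1 * om_tilde,
            (1, 0): 1 + lam2 * om_tilde,
            (1, 1): 1 - om_tilde}.get((x1, x2), 1.0)

for x1 in range(5):
    for x2 in range(5):
        factor = 1 + omega * q_dc(x1, lam1) * q_dc(x2, lam2)
        assert np.isclose(factor, tau_dc(x1, x2))
print("Sarmanov factor with q_dc reproduces the Dixon and Coles adjustment")
```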
To create more flexible bivariate discrete distributions with Poisson marginals, we may use other q-functions, as long as these functions still fulfil equation (<ref>). To the best of our knowledge, this is the first work to extend and enhance the Dixon and Coles model within the Sarmanov family. In the following subsections, we provide such extensions.
§.§ Some new models
§.§.§ Poisson Marginals
Note that throughout this subsection, we will consider λ_i as the mean of a Poisson distributed variable X_i, i =1,2.
Still assuming Poisson marginal distributions, we can consider another function q̂ defined as
q̂(x_i) = {[ -λ_i^2 if x_i=0; λ_i if x_i=1; 0 if x_i=2,3,… ].
for i = 1,2,
which also satisfies equation (<ref>).
This generates another bivariate distribution with Poisson marginals, but probabilities across the four pairs (0,0), (1,0), (0,1) and (1,1) are now shifted differently. In particular, we shift probabilities with a quadratic term. However, we can formulate q̂ also with other exponents for λ_i — in Appendix <ref>, equation (<ref>) provides such a generalisation.
A peculiarity of the q-functions presented so far is that they shift probabilities only across the four pairs (0,0), (1,0), (0,1) and (1,1). However, we can easily relax this restriction. For example, we can extend the previous model to x_i = 2 with the following q-function:
q(x_i) = {[ -λ_i^2 if x_i=0; -λ_i if x_i=1; 4 if x_i=2; 0 if x_i=3,4, … ].
for i = 1,2.
Such a function moves the probabilities of the pairs (x_1,x_2): x_1=0,1,2; x_2=0,1,2 and thus induces correlation among nine pairs. We can extend this further
up to x_i=s, i.e., inducing correlation among (s+1)^2 pairs, by considering the following general function
q^(s)(x_i) = {[ -x_i! λ_i^s-x_i if x_i=0,1,…,s-1; s· s! if x_i=s; 0 if x_i=s+1,… ].
for i = 1,2.
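That q^(s) has zero expectation under the Poisson marginal can be checked numerically, as in the following sketch (ours; the mean is arbitrary).

```python
import math

def q_s(x, lam, s):
    """General q-function shifting probabilities among (s+1)^2 pairs."""
    if x < s:
        return -math.factorial(x) * lam ** (s - x)
    if x == s:
        return s * math.factorial(s)
    return 0.0

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

lam = 1.3
for s in (1, 2, 3, 4):
    expectation = sum(q_s(x, lam, s) * poisson_pmf(x, lam) for x in range(60))
    print(s, round(expectation, 12))  # all ~0, so the zero-mean condition holds
```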
So far, we have always selected the same q-function for each marginal distribution. However, we can also consider different functions q for each marginal distribution. For example, considering q^(2)(x_1)
and q^(3)(x_2),
we can alter the probabilities for a broader range of values, in this example for all pairs (x_1,x_2): x_1=0,1,2, x_2=0,1,2,3.
Note that all of the q-functions mentioned above satisfy equation (<ref>) and thus provide proper bivariate discrete distributions, with each choice of q implying its own admissible range for the ω parameter.
The panels in Figure <ref> show the probabilities under the different model formulations, i.e. the model under independence, the model proposed by <cit.> and models using the different q-functions developed so far.
§.§.§ Any other discrete distribution
In the previous subsection, we demonstrated how we can extend the Dixon and Coles model for Poisson marginals using other q-functions. It is relatively straightforward to proceed with similar functions for the case of any discrete distribution. Consider, for example, a discrete distribution function with probability assigned to x_i given as P_x_i, x_i ∈ℕ_0.
Let μ_i denote the expected value of this pmf. Remembering the assumption in equation (<ref>) that needs to be satisfied, we can generalise the form of the Dixon and Coles model by creating some new q-functions for general discrete distributions as
q_1P(x_i) = {[ -P_1/P_0 if x_i=0; 1 if x_i=1; 0 if x_i=2,3, … ].
q_2P(x_i) = {[ μ_i if x_i=0; -μ_i P_0/P_1 if x_i=1; 0 if x_i=2,3, … ].
For a function equivalent to q̂ we can define
q_3P(x_i) = {[ -μ_i P_1/P_0 if x_i=0; μ_i if x_i=1; 0 if x_i=2,3, … ].
for i = 1,2.
For Poisson marginals, since μ_i=λ_i and P_1/P_0 =λ_i, we can verify that the resulting functions coincide (up to sign and scale) with those presented in the previous subsection. Next, we present a slightly more involved example with negative binomial margins.
§.§.§ Example: negative binomial marginals
For our application of modelling the number of goals in football, the Dixon and Coles model considers only Poisson marginals, which can be very restrictive for football modelling. To this end, we present an extension to negative binomial distributed variables X_1 and X_2.
To find a model that shifts probabilities only across the four pairs (0,0), (1,0), (0,1) and (1,1) — similar to the Dixon and Coles model — and that meets the constraints imposed by the Sarmanov family in equation (<ref>), we use the q_1P(x_i) function presented in the previous subsection 2.4.2. Thus, with P_1 = P(X_i = 1) = ϕ_i (μ_i/ϕ_i + μ_i) (ϕ_i/ϕ_i + μ_i)^ϕ_i and P_0 = P(X_i = 0) = (ϕ_i/ϕ_i + μ_i)^ϕ_i,
i= 1,2, we end up with the following function:
q_nb(x_i) = {[ -ϕ_i (μ_i/ϕ_i + μ_i) if x_i=0; 1 if x_i=1; 0 if x_i=2,3, … ].
with μ_i denoting the mean and μ_i + μ_i^2/ϕ_i the variance of the negative binomial distribution, for i=1,2.
This function moves probabilities for x=0 and x=1 as in the Dixon and Coles model.
However, similar to Poisson marginals, we can build further q-functions for negative binomial marginals such as
q̂_nb(x_i) = {[ -μ_i^2 if x_i=0; μ_i (ϕ_i + μ_i)/ϕ_i if x_i=1; 0 if x_i=2,3, … ].
q_nb(x_i) = {[ -μ_i^2 if x_i=0; -μ_i (ϕ_i + μ_i)/ϕ_i if x_i=1; 4 (ϕ_i + μ_i)^2/(ϕ_i (ϕ_i+1)) if x_i=2; 0 if x_i=3,4, … ].
for i = 1,2.
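The zero-mean condition for these negative binomial q-functions can again be verified numerically; the sketch below is our own check and assumes scipy's parametrisation of the negative binomial with n = ϕ_i and p = ϕ_i/(ϕ_i + μ_i), which has mean μ_i.

```python
import numpy as np
from scipy.stats import nbinom

mu, phi = 1.4, 2.5
x = np.arange(0, 200)
pmf = nbinom.pmf(x, phi, phi / (phi + mu))

q_nb  = np.where(x == 0, -phi * mu / (phi + mu), np.where(x == 1, 1.0, 0.0))
q_hat = np.where(x == 0, -mu**2, np.where(x == 1, mu * (phi + mu) / phi, 0.0))
q_bar = np.where(x == 0, -mu**2,
        np.where(x == 1, -mu * (phi + mu) / phi,
        np.where(x == 2, 4 * (phi + mu)**2 / (phi * (phi + 1)), 0.0)))

for q in (q_nb, q_hat, q_bar):
    print(round(float(np.sum(q * pmf)), 10))  # all ~0
```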
Finally, note that such derivations are also valid when considering marginal distributions from two different families. In particular, we can define discrete bivariate distributions by selecting suitable q-functions according to the marginal distributions.
For example, we can consider equation (<ref>), with P_1(x_1) and P_2(x_2) as the pmfs of the Poisson and negative binomial distribution, respectively, and q_1(x_1) and q_2(x_2) given by q_dc(x_1) and q_nb(x_2), respectively.
Such constructions can provide more powerful bivariate discrete distributions.
§.§ Shifting probabilities across the entire support
Up to this point, we only investigated bivariate distributions shifting probabilities for a pre-defined set of values.
However, some applications require relaxing this assumption so that probabilities can be shifted more flexibly between all values of the support of the marginal distributions.
To set up a model based on the Sarmanov family that shifts probabilities across the entire support, we select q_Sar(x_i) = exp(-x_i) - L_i(1), x_i ∈ℕ_0, i = 1,2. Here, L_i(1) is the value of the Laplace transform of the marginal distribution evaluated at s=1, that is
L_i(s) = E(e^-sX_i)=∑_x_i=0^∞exp(-sx_i) P(x_i) ,
with P(·) denoting the pmf of the i-th marginal distribution. Then, by plugging q_Sar(x_i) into equation (<ref>), a bivariate pmf is given by
P(X_1 = x_1,X_2 = x_2)=P_1(x_1)P_2(x_2){1+ ω[ exp(-x_1) - L_1(1) ]
[ exp(-x_2) - L_2(1) ] }.
In the following, we derive two bivariate pmfs according to this setup for Poisson and negative binomial marginals.
§.§.§ Example: Poisson margins
Considering Poisson marginal distributions, we use the series expansion of the exponential function to derive the corresponding Laplace transform as L_i(1)=exp(-λ_i (1-exp(-1))). The joint pmf is then given by
P(X_1 = x_1,X_2 = x_2) = λ^x_1_1exp ( -λ_1 )/x_1!λ^x_2_2exp (-λ_2 )/x_2!×
{ 1+ ω[ (e^-x_1 - e^-λ_1 c)(e^-x_2 -
e^-λ_2 c)
] }
where ω is a dependence parameter, λ_i, i = 1,2, are the means of the Poisson distributions and
c=1-exp(-1) is a constant.
The bivariate Poisson distribution presented here has also been studied in <cit.>.
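A small tabulation of this distribution (our own sketch, with arbitrary means and dependence parameter) confirms that it is a proper pmf and that the Poisson margins are preserved.

```python
import numpy as np
from scipy.stats import poisson

def sarmanov_poisson_joint(lam1, lam2, omega, x_max=30):
    """Joint pmf table for the full-support Sarmanov model with Poisson margins."""
    c = 1.0 - np.exp(-1.0)
    x = np.arange(x_max + 1)
    p1, p2 = poisson.pmf(x, lam1), poisson.pmf(x, lam2)
    q1 = np.exp(-x) - np.exp(-lam1 * c)   # exp(-x) minus the Laplace transform L_1(1)
    q2 = np.exp(-x) - np.exp(-lam2 * c)
    return np.outer(p1, p2) * (1.0 + omega * np.outer(q1, q2))

joint = sarmanov_poisson_joint(1.5, 1.1, omega=-2.0)
print(round(float(joint.sum()), 6))                                      # ~1: proper pmf
print(np.allclose(joint.sum(axis=1), poisson.pmf(np.arange(31), 1.5)))   # margin preserved
```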
§.§.§ Example: negative binomial margins
Similar to the previous example, we can also derive the joint pmf for negative binomial margins as
P(X_1 = x_1, X_2 = x_2) = Γ(x_1+ϕ_1)/Γ(ϕ_1)x_1!( ϕ_1/ϕ_1+μ_1)^ϕ_1( μ_1/ϕ_1+μ_1)^x_1×
Γ(x_2+ϕ_2)/Γ(ϕ_2)x_2!( ϕ_2/ϕ_2+μ_2)^ϕ_2( μ_2/ϕ_2+μ_2)^x_2×
{ 1+ ω[e^-x_1 - L_1(1)][e^-x_2 - L_2(1)]
},
with μ_i, denoting the mean of the i-th marginal distribution with the variances μ_i + μ_i^2/ϕ_i, i=1,2. Here, ϕ_i constitutes the overdispersion parameter. The Laplace transform of the negative binomial distributions evaluated at s = 1, i.e. L_i(1), for i = 1,2, is given by
L_i(1) = [ ϕ_i/ϕ_i + μ_i(1-e^-1)]^ϕ_i.
The distribution presented here in the second example is also examined by <cit.>.
§.§.§ Example: Alternative Negative binomial Sarmanov model
While the previous two examples have already been explored by <cit.> and <cit.>, respectively, we aim to generalise their distributions to end up with a more flexible model. To this end, by using
q_ANS(x_i) = [ϕ_i/(ϕ_i + μ_i)]^x_i - c_i (with c_i defined below) for i = 1,2, we create a novel bivariate distribution. In fact, if ϕ_i/(ϕ_i + μ_i) = e^-1, we obtain the bivariate distribution presented in the previous example. For the other cases, we create an alternative bivariate distribution with negative binomial margins based on the Sarmanov family without using the Laplace transform. We thus obtain the following pmf for the Alternative Negative binomial Sarmanov (ANS) distribution:
P(X_1 = x_1, X_2 = x_2) = Γ(x_1+ϕ_1)/Γ(ϕ_1)x_1!( ϕ_1/ϕ_1+μ_1)^ϕ_1( μ_1/ϕ_1+μ_1)^x_1×
Γ(x_2+ϕ_2)/Γ(ϕ_2)x_2!( ϕ_2/ϕ_2+μ_2)^ϕ_2( μ_2/ϕ_2+μ_2)^x_2×
{ 1+ ω[(ϕ_1/ϕ_1 + μ_1)^x_1 - c_1][(ϕ_2/ϕ_2 + μ_2)^x_2 - c_2]
}
where ω, μ_i and ϕ_i, i=1,2 are parameters with the meaning as above and
c_i = ( ϕ_i/ϕ_i + μ_i)^ϕ_i[1-(1-ϕ_i/ϕ_i + μ_i) ϕ_i/ϕ_i + μ_i]^-ϕ_i.
In Appendix <ref>, we show that equation (<ref>) still holds for this pmf. Correlation properties are reported in Appendix <ref>.
Figure <ref> illustrates the difference between the bivariate Sarmanov model using negative binomial marginals and the ANS model. Interpreted in terms of football scorelines, the ANS model shifts more weight from scoreless draws and close wins to clear wins compared to the Sarmanov model from the previous example.
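The following sketch (ours; all parameter values are illustrative) tabulates the ANS pmf, checks that it is a proper distribution for the chosen ω, and reports the implied covariance, which is negative here, in line with the dependence observed in the data.

```python
import numpy as np
from scipy.stats import nbinom

def ans_joint(mu1, phi1, mu2, phi2, omega, x_max=60):
    """Joint pmf table for the Alternative Negative binomial Sarmanov (ANS) model."""
    x = np.arange(x_max + 1)
    margins = []
    for mu, phi in ((mu1, phi1), (mu2, phi2)):
        t = phi / (phi + mu)
        pmf = nbinom.pmf(x, phi, t)
        c = t**phi * (1.0 - (1.0 - t) * t) ** (-phi)   # c_i = E[t^X]
        margins.append((pmf, t**x - c))
    (p1, q1), (p2, q2) = margins
    joint = np.outer(p1, p2) * (1.0 + omega * np.outer(q1, q2))
    if (joint < -1e-12).any():
        raise ValueError("omega is outside the admissible range")
    return joint

joint = ans_joint(1.3, 2.5, 1.0, 3.0, omega=-1.5)
x = np.arange(joint.shape[0])
ex1, ex2 = joint.sum(axis=1) @ x, joint.sum(axis=0) @ x
cov = float((np.outer(x, x) * joint).sum() - ex1 * ex2)
print(round(float(joint.sum()), 6), round(cov, 4))   # pmf sums to one; covariance < 0
```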
§ APPLICATION
§.§ Data
We fit the models introduced in the previous section to data from women's football. Specifically, we use data on the number of goals scored in each match in the seasons 2011/12-2018/19 and 2021/22 of the English FA Women's Super League, the German Frauen-Bundesliga, the French Division 1 Feminine and the Spanish Primera Iberdrola. We thus only consider matches from seasons that were not affected by COVID-19 restrictions.
To investigate whether the number of home and away goals are independent, we consider their joint contingency table and calculate the ratio of the joint frequencies and the product of the marginal totals for each score.
Table <ref> displays these ratios for the most common results in all leagues, indicating that most ratios substantially differ from one — thus, assuming independence of home and away goals does not seem appropriate. Chi-squared tests reject the null hypothesis of independence for each league except for the English FA Women's Super League (p-value: 0.067).
While a dependence between home and away goals is in line with data from men's football (see, e.g., ), the ratios for women's football displayed in Table <ref> indicate an underrepresentation of 0-0s in each league, whereas such scoreless draws are usually overrepresented in men's football. Another common pattern in women's football is a substantial negative correlation between the two teams' number of goals which is an observation that bears similarities to those made by <cit.> in the context of international games. In particular, due to the underrepresentation of draws and overrepresentation of scores such as 3-0 and 4-0 in women's football, the correlations between home and away goals in our sample are -0.269 (England), -0.352 (Germany), -0.395 (France), and -0.263 (Spain). While for a few seasons, the correlations are positive, as shown in Figure <ref>, they remain mostly negative for all leagues and seasons considered. More importantly, the relatively large amount of negative correlation could not be captured by the Dixon and Coles model — for our data, the lower bound of the correlation is -0.05 (calculated based on the fitted models below).
Figure <ref> in Appendix <ref>
shows the ranges of correlation implied by the Dixon and Coles model as well as the proposed extensions, indicating that the ANS model can capture a much wider range of correlation.
Another pattern that is different in women's football compared to men's football concerns overdispersion. To illustrate this, Figure <ref> shows the mean and variance of goals scored in home and away matches of the different teams. Here, one dot refers to a team's home/away performance, and the diagonal indicates mean-variance equivalence.
For all four leagues, several data points lie above the diagonal, and for Germany, France, and Spain, several points also lie outside the 95% confidence interval. Figure <ref> thus indicates overdispersion in the data, potentially rendering the negative binomial distribution more suitable than the commonly used Poisson distribution for modelling the number of goals scored.
The remainder of this section first considers basic model formulations without including any covariates. To fully address the patterns observed in women's football scorelines, we employ the model formulations developed in Section 2. We further include team-specific attacking and defence parameters into the models and finally compare their fit and predictive performance.
§.§ Baseline model
We first fit relatively simple models to the women's football data which do not include covariates and which we refer to as baseline models. To obtain parameter estimates, we numerically maximise the log-likelihood, which is carried out in R using a standard numerical optimisation routine. Table <ref> displays the AIC values obtained for these models, indicating that the AIC prefers models with negative binomial margins over Poisson margins for each league. Specifically, among the models with negative binomial marginals, the ANS model is favoured by the AIC for all leagues. Compared to the other model formulations considered, the ANS model is more flexible and appears to capture the specific dependence structure better via its additional parameters.
§.§ Model including team dummies
The model formulations presented next include team-specific effects. For all models, we use the same type of mean-parametrization as introduced in Section 2, i.e., the first parameter of a marginal distribution always represents the mean. To account for team-specific effects, we consider an attacking and a defence parameter for each team in each league. Additionally, we include a binary variable for home matches to account for the well-known home-field advantage (). In particular, for n matches and two parameters θ_1j and θ_2j, j=1,…,n, representing the mean number of goals of the home and the away team, respectively, under the chosen distribution of the response variable, the linear predictors have the following form
log(θ_1j) = home + att_h_j + def_g_j,
log(θ_2j) = att_g_j + def_h_j,
where att_k and def_k denote the attacking and defence parameters of team k, with k being a placeholder for either h_j or g_j to indicate the home and away team in match j. To ensure identifiability, we use a sum-to-zero constraint for the defence parameters.
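A compact sketch of the resulting likelihood is given below for the Dixon and Coles case; the parameter layout, the array names, and the clipping guard on τ are our own choices and not taken from the paper. The function can then be passed to a numerical optimiser such as scipy.optimize.minimize.

```python
import numpy as np
from scipy.stats import poisson

def dc_negloglik(params, home_idx, away_idx, goals_home, goals_away, n_teams):
    """Negative log-likelihood of the Dixon and Coles model with team effects.

    Assumed layout: params = [home, att_1..att_T, def_1..def_{T-1}, omega_tilde];
    the last defence parameter follows from the sum-to-zero constraint.
    home_idx, away_idx, goals_home, goals_away are integer NumPy arrays.
    """
    params = np.asarray(params, dtype=float)
    home = params[0]
    att = params[1:1 + n_teams]
    d_free = params[1 + n_teams:2 * n_teams]
    dfc = np.append(d_free, -d_free.sum())                 # sum-to-zero constraint
    om = params[-1]
    lam1 = np.exp(home + att[home_idx] + dfc[away_idx])    # home mean
    lam2 = np.exp(att[away_idx] + dfc[home_idx])           # away mean
    tau = np.ones_like(lam1)
    tau = np.where((goals_home == 0) & (goals_away == 0), 1 - lam1 * lam2 * om, tau)
    tau = np.where((goals_home == 0) & (goals_away == 1), 1 + lam1 * om, tau)
    tau = np.where((goals_home == 1) & (goals_away == 0), 1 + lam2 * om, tau)
    tau = np.where((goals_home == 1) & (goals_away == 1), 1 - om, tau)
    loglik = (np.log(np.clip(tau, 1e-12, None))            # guard against tau <= 0
              + poisson.logpmf(goals_home, lam1)
              + poisson.logpmf(goals_away, lam2))
    return -loglik.sum()
```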
Table <ref> displays the AIC values obtained for the fitted models, including home and team-specific effects. While the AIC favours the ANS model for Germany, France, and Spain, the Dixon and Coles extended model shifting probabilities for scores up to 2 is preferred for England. We can explain this by the patterns in the data: the amount of overdispersion for the leagues in Germany, France, and Spain (cf. Figure <ref>) is larger than for England — the AIC thus prefers the ANS model for these leagues. In contrast, since in the English FA Women's Super League only a few teams show overdispersion, the additional complexity of the ANS model is not required here.
Our results suggest that the models developed in Section 2, especially the Alternative Negative binomial Sarmanov model, are more suitable for modelling the number of goals in women's football than classical models initially developed for men's football.
§.§ Model checking
To check the adequacy of the preferred ANS model, we calculate the difference between the empirical proportions of scores and the probabilities under the different models. Table <ref> displays the sum of the absolute differences for all models and leagues considered. For the English FA Women's Super League and the Spanish Primera Iberdrola, the Dixon and Coles model and one of its extensions are preferred, respectively. However, the ANS model performs only slightly worse.
At the same time, the ANS model shows the best model fit for the German Frauen-Bundesliga and the French Division 1 Feminine.
§.§ Prediction
To further demonstrate the usefulness of our approach in practice, we consider the predictive performance of the most promising model, the ANS model, for the German Frauen-Bundesliga. In particular, we fit the
ANS model to the first two-thirds of the season 2021/22, i.e. to the first 15 matchdays, and evaluate the predictive performance based on the remaining seven matchdays.
To this end, we predict the probabilities for all scores in each match under the fitted model. We then simulate all matches played in the last seven matchdays 1,000 times using a Monte Carlo simulation. In this way, we end up with 1,000 simulated final points of the teams. From these final points, we calculate the 2.5%- and 97.5%-quantiles to obtain a 95% prediction interval for each team.
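The simulation step can be sketched as follows (our own illustration; the 3/1/0 points system and all names are assumptions, not code from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_final_points(current_points, fixtures, score_probs, n_sims=1000):
    """Monte Carlo prediction intervals for the teams' final points.

    fixtures: list of (home_team, away_team) index pairs; score_probs[m] is the
    predicted joint pmf over (home_goals, away_goals) for fixture m.
    """
    totals = np.tile(np.asarray(current_points, dtype=float), (n_sims, 1))
    for m, (h, a) in enumerate(fixtures):
        pmf = np.asarray(score_probs[m])
        flat = rng.choice(pmf.size, size=n_sims, p=pmf.ravel() / pmf.sum())
        gh, ga = np.unravel_index(flat, pmf.shape)
        totals[:, h] += np.where(gh > ga, 3, np.where(gh == ga, 1, 0))
        totals[:, a] += np.where(ga > gh, 3, np.where(gh == ga, 1, 0))
    lower = np.percentile(totals, 2.5, axis=0)
    upper = np.percentile(totals, 97.5, axis=0)
    return lower, upper
```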
Figure <ref> shows the observed standings at the end of the season, together with the predicted intervals as obtained under the ANS model. The predicted intervals include the observed final points for all teams, thus suggesting a promising predictive performance of the ANS model.
§ DISCUSSION
There is wide interest among fans, media, and academics in predicting football matches. To that end, we provide a modelling framework for the number of goals scored in a football match, which is a flexible extension of the model developed by <cit.>. A vital strength of the proposed approach is that we can not only shift probabilities between scores in a very flexible way but also consider (discrete) marginal distributions other than Poisson.
In our application, we tested the feasibility of our proposed models by analysing data from four different women's football leagues. In women's football, fewer 0-0s usually occur than expected under independence, which is different from men's football. To account for this characteristic in our modelling approach, we formulate several extensions of the Dixon and Coles model. The ANS model showed the most promising performance, both in terms of model fit and predictive power. Our application also demonstrated that the Dixon and Coles model is not able to capture all patterns in the data, especially the relatively large negative correlation found for women's football data.
For the prediction of scores in football, recent literature often uses several covariates, such as players' age and information about a team's recent performance (see, e.g., ). For future research, our models could be extended to model one or multiple parameters via such covariates. In the presence of many covariates, regularisation approaches have proven helpful when predicting football results (). Moreover, as teams' performance may not be constant over time, a further model extension could include recent performance in a weighted manner by putting more weight on more recent observations — <cit.> used a similar approach. As a team's form is usually not observable, the models presented here could also be extended by adding a latent state process. In particular, parameters such as the mean may depend on a team's underlying latent form. As the existing literature has already considered state-switching approaches in football <cit.>,
such extensions could build upon the modelling framework developed in this contribution to flexibly model football scores, especially for women's football.
apalike
[Baio and Blangiardo, 2010]baio2010bayesian
Baio, G. and Blangiardo, M. (2010).
Bayesian hierarchical model for the prediction of football results.
Journal of Applied Statistics, 37(2):253–264.
[Baker and Scarf, 2006]baker2006predicting
Baker, R. and Scarf, P. (2006).
Predicting the outcomes of annual sporting contests.
Journal of the Royal Statistical Society: Series C (Applied
Statistics), 55(2):225–239.
[Boshnakov et al., 2017]boshnakov2017bivariate
Boshnakov, G., Kharrat, T., and McHale, I. G. (2017).
A bivariate Weibull count model for forecasting association football scores.
International Journal of Forecasting, 33(2):458–466.
[Carmichael and Thomas, 2005]carmichael2005home
Carmichael, F. and Thomas, D. (2005).
Home-field effect and team performance: evidence from English Premiership football.
Journal of Sports Economics, 6(3):264–281.
[Dixon and Coles, 1997]dixon1997modelling
Dixon, M. J. and Coles, S. G. (1997).
Modelling association football scores and inefficiencies in the
football betting market.
Journal of the Royal Statistical Society: Series C (Applied
Statistics), 46(2):265–280.
[Famoye, 2010]famoye2010bivariate
Famoye, F. (2010).
On the bivariate negative binomial regression model.
Journal of Applied Statistics, 37(6):969–981.
[Garnica-Caparrós and Memmert, 2021]garnica2021understanding
Garnica-Caparrós, M. and Memmert, D. (2021).
Understanding gender differences in professional European football through machine learning interpretability and match actions data.
Scientific Reports, 11(1):1–14.
[Groll et al., 2018]groll2018dependency
Groll, A., Kneib, T., Mayr, A., and Schauberger, G. (2018).
On the dependency of soccer scores–a sparse bivariate Poisson
model for the UEFA European Football Championship 2016.
Journal of Quantitative Analysis in Sports, 14(2):65–79.
[Karlis and Ntzoufras, 2003]karlis2003analysis
Karlis, D. and Ntzoufras, I. (2003).
Analysis of sports data by using bivariate Poisson models.
Journal of the Royal Statistical Society: Series D (The
Statistician), 52(3):381–393.
[Lakshminarayana et al., 1999]lakshminarayana1999bivariate
Lakshminarayana, J., Pandit, S., and Srinivasa Rao, K. (1999).
On a bivariate Poisson distribution.
Communications in Statistics-Theory and Methods,
28(2):267–276.
[Lee, 1997]lee1997modeling
Lee, A. J. (1997).
Modeling scores in the Premier League: is Manchester United really the best?
Chance, 10(1):15–19.
[Maher, 1982]maher1982modelling
Maher, M. J. (1982).
Modelling association football scores.
Statistica Neerlandica, 36(3):109–118.
[Martínez-Lagunas et al., 2014]martinez2014women
Martínez-Lagunas, V., Niessen, M., and Hartmann, U. (2014).
Women's football: Player characteristics and demands of the game.
Journal of Sport and Health Science, 3(4):258–272.
[McHale and Scarf, 2007]mchale2007modelling
McHale, I. and Scarf, P. (2007).
Modelling soccer matches using bivariate discrete distributions with
general dependence structure.
Statistica Neerlandica, 61(4):432–445.
[McHale and Scarf, 2011]mchale2011modelling
McHale, I. and Scarf, P. (2011).
Modelling the dependence of goals scored by opposing teams in
international soccer matches.
Statistical Modelling, 11(3):219–236.
[Ötting et al., 2021]otting2021copula
Ötting, M., Langrock, R., and Maruotti, A. (2021).
A copula-based multivariate hidden Markov model for modelling
momentum in football.
AStA Advances in Statistical Analysis, pages 1–19.
[Pappalardo et al., 2021]pappalardo2021explaining
Pappalardo, L., Rossi, A., Natilli, M., and Cintia, P. (2021).
Explaining the difference between men’s and women’s football.
PLoS One, 16(8):e0255407.
[Pedersen et al., 2019]pedersen2019scaling
Pedersen, A. V., Aksdal, I. M., and Stalsberg, R. (2019).
Scaling demands of soccer according to anthropometric and
physiological sex differences: a fairer comparison of men’s and women’s
soccer.
Frontiers in Psychology, page 762.
[Pollard and Gómez, 2014]pollard2014comparison
Pollard, R. and Gómez, M. A. (2014).
Comparison of home advantage in men's and women's football leagues in
Europe.
European Journal of Sport Science, 14(sup1):S77–S83.
[Sarmanov, 1966]sarmanov1966generalized
Sarmanov, O. V. (1966).
Generalized normal correlation and two-dimensional Fréchet
classes.
In Doklady Akademii Nauk, volume 168, pages 32–35. Russian
Academy of Sciences.
[Ting Lee, 1996]ting1996properties
Ting Lee, M.-L. (1996).
Properties and applications of the Sarmanov family of bivariate
distributions.
Communications in Statistics-Theory and Methods,
25(6):1207–1222.
[van der Wurp et al., 2020]van2020generalised
van der Wurp, H., Groll, A., Kneib, T., Marra, G., and Radice, R. (2020).
Generalised joint regression for count data: a penalty extension for
competitive settings.
Statistics and Computing, 30(5):1419–1432.
[Whitaker et al., 2021]whitaker2021bayesian
Whitaker, G., Silva, R., Edwards, D., and Kosmidis, I. (2021).
A Bayesian approach for determining player abilities in football.
Journal of the Royal Statistical Society: Series C (Applied
Statistics), 70(1):174–201.
§ DISTRIBUTIONS WITH OTHER EXPONENTS
equationsection
As discussed in Section 2.4.1, we can extend the model by <cit.> to exponents other than the quadratic one. To this end, assume a q-function of the form
q̂_(s)(x_i) = {[ -λ_i^s if x_i=0; λ_i^s-1 if x_i=1; 0 if x_i=2,3,… ].
for i= 1,2 and again assume Poisson marginal distributions.
Then, based on equation (<ref>) from Section 2.2, we have a bivariate distribution with pmf
P(X_1 = x_1, X_2 = x_2) = τ_λ_1, λ_2(x_1,x_2)λ^x_1_1exp ( -λ_1 )/x_1!λ^x_2_2exp (-λ_2 )/x_2!,
with
τ_λ_1, λ_2(x_1,x_2) = {[ 1 + ω̃λ_1^s λ_2^s, x_1=x_2=0,; 1 - ω̃λ_1^s λ_2^s-1 x_1=0, x_2=1,; 1 - ω̃λ_1^s-1λ_2^s x_1=1, x_2=0,; 1 + ω̃λ_1^s-1λ_2^s-1 x_1=x_2=1,; 1 . ].
For s=1 this reduces to the Dixon and Coles model (up to the sign convention ω = -ω̃ of Section 2.3).
In order for the above to be a proper pmf it must hold that
τ_λ_1, λ_2(x_1,x_2) ≥ 0 and then we can derive the
restrictions that
max( -1/(λ_1^s λ_2^s),
-1/(λ_1^s-1λ_2^s-1) )
≤ω̃≤min( 1/(λ_1^s λ_2^s-1),
1/(λ_1^s-1λ_2^s) )
Note that ω̃ is a parameter with a different interpretation for each s.
§ PROOF: ALTERNATIVE NEGATIVE BINOMIAL SARMANOV MODEL
Here, we demonstrate that the ANS distribution from Section 2.5 indeed fulfils equation (<ref>) from Section 2.2. For this, we have to prove that the expectation of the corresponding q-function, q_ANS, equals zero, i.e.:
Theorem 1:
Let q_ANS(x_i) = (ϕ_i/ϕ_i + μ_i)^x_i - c_i with
c_i = ( ϕ_i/ϕ_i + μ_i)^ϕ_i[1-(1-ϕ_i/ϕ_i + μ_i) ϕ_i/ϕ_i + μ_i]^-ϕ_i.
Then ∑_x_i=0^∞ q_ANS(x_i)P(x_i)=0, for i = 1,2.
Proof:
If E[(ϕ_i/ϕ_i + μ_i)^x_i] = c_i, then
∑_x_i = 0^∞ q_ANS(x_i)P(x_i) =
∑_x_i = 0^∞[(ϕ_i/ϕ_i + μ_i)^x_i-c_i] P(x_i)
= ∑_x_i = 0^∞[(ϕ_i/ϕ_i + μ_i)^x_i] P(x_i) - c_i ∑_x_i = 0^∞ P(x_i)
= E[(ϕ_i/ϕ_i + μ_i)^x_i] - c_i
= E[(ϕ_i/ϕ_i + μ_i)^x_i] - E[(ϕ_i/ϕ_i + μ_i)^x_i] = 0
for i = 1,2.
In the first equation, we plug in the definition of q_ANS(x_i) and of the pmf P(x_i). Since c_i is a constant, we can split the sum and pull c_i out of it. As P(·) is a probability mass function, the sum over all possible values equals 1.
For i = 1,2, it remains to show that E[(ϕ_i/ϕ_i + μ_i)^x_i] = c_i:
E[(ϕ_i/ϕ_i + μ_i)^x_i] = ∑_x_i = 0^∞(ϕ_i/ϕ_i + μ_i)^x_i P(x_i)
= ∑_x_i = 0^∞(ϕ_i/ϕ_i + μ_i)^x_i\binom{x_i + ϕ_i - 1}{x_i}(ϕ_i/ϕ_i + μ_i)^ϕ_i (1-ϕ_i/ϕ_i + μ_i)^x_i
= ∑_x_i = 0^∞ (-1)^x_i\binom{-ϕ_i}{x_i}(ϕ_i/ϕ_i + μ_i)^ϕ_i(ϕ_i/ϕ_i + μ_i)^x_i (1-ϕ_i/ϕ_i + μ_i)^x_i
= (ϕ_i/ϕ_i + μ_i)^ϕ_i∑_x_i = 0^∞\binom{-ϕ_i}{x_i}[-(ϕ_i/ϕ_i + μ_i) (1-ϕ_i/ϕ_i + μ_i)]^x_i
= (ϕ_i/ϕ_i + μ_i)^ϕ_i{1+[-(ϕ_i/ϕ_i + μ_i) (1-ϕ_i/ϕ_i + μ_i)]}^-ϕ_i
= ( ϕ_i/ϕ_i + μ_i)^ϕ_i[1-(1-ϕ_i/ϕ_i + μ_i) ϕ_i/ϕ_i + μ_i]^-ϕ_i = c_i
We first plug in the definition of the expectation. Then, we use the definition of the pmf of the negative binomial distribution. After rearranging terms and pulling out the constant ( ϕ_i/ϕ_i + μ_i)^ϕ_i, we can use the binomial series, since the inner part of the square brackets is always smaller than 1 in absolute value. Thus, we end up with the definition of c_i.
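The identity E[(ϕ_i/(ϕ_i+μ_i))^X_i] = c_i can also be confirmed numerically; the short check below is ours and uses scipy's negative binomial with n = ϕ_i and p = ϕ_i/(ϕ_i+μ_i).

```python
import numpy as np
from scipy.stats import nbinom

mu, phi = 1.7, 2.3
t = phi / (phi + mu)
x = np.arange(0, 400)
lhs = float(np.sum(t**x * nbinom.pmf(x, phi, t)))        # E[(phi/(phi+mu))^X]
rhs = t**phi * (1.0 - (1.0 - t) * t) ** (-phi)           # c_i from Theorem 1
print(round(lhs, 10), round(rhs, 10))                    # the two values agree
```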
§ CORRELATION FOR THE ANS MODEL
While the calculation of the correlation for most of the models considered in this paper is straightforward, the calculation of the correlation for the ANS model from section 2.5 is a little bit more challenging. Thus, we outline it here:
Consider the negative binomial distribution with pmf
P(X=k) = Γ(k+r)/k! Γ(r) (1-p)^r p^k, k=0,1,2,…, r>0, p ∈ (0,1).
We can see that for this distribution it holds that
E(X t^X ) = ( 1-p/1-pt)^r ( ptr/1-pt).
To see that, it suffices to note that it holds from the pmf of the negative binomial that
∑_k=0^∞Γ(k+r)/k! p^k = Γ(r)/(1-p)^r.
Taking derivative with respect to p we derive that
∑_k=0^∞ k p ^k-1Γ(k+r)/k! = r Γ(r)/(1-p)^r+1.
Thus, for the expectation of interest we have that
E(Xt^X) = ∑_k=0^∞ k t^k Γ(k+r)/k! Γ(r) (1-p)^r p^k
= (1-p)^r pt/Γ(r)∑_k=0^∞ k Γ(k+r)/k! (pt)^k-1
= ( 1-p/1-pt)^r ( ptr/1-pt).
Consider now the Alternative Negative binomial Sarmanov model with
q_ANS(x_i) = (ϕ_i/ϕ_i + μ_i)^x_i - c_i
where
c_i = ( ϕ_i/ϕ_i + μ_i)^ϕ_i[1-(1-ϕ_i/ϕ_i
+ μ_i) ϕ_i/ϕ_i + μ_i]^-ϕ_i.
Hence, for i = 1,2,
E[X_iq_ANS(X_i)]= E[ X_i ( ϕ_i/ϕ_i+μ_i)^X_i] - c_i E(X_i)
and after using the expectation representation from above and tedious algebraic manipulations we end up with
E[X_iq_ANS(X_i)]= [ ϕ_i(μ_i+ϕ_i)/((μ_i+ϕ_i)^2 - μ_i ϕ_i)]^ϕ_i[ μ_i ϕ_i^2/((μ_i+ϕ_i)^2 - μ_i ϕ_i) -μ_i].
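A direct numerical comparison (our own sketch) of this closed form with the defining sum supports the result.

```python
import numpy as np
from scipy.stats import nbinom

mu, phi = 1.2, 2.0
t = phi / (phi + mu)
x = np.arange(0, 400)
pmf = nbinom.pmf(x, phi, t)
c = t**phi * (1.0 - (1.0 - t) * t) ** (-phi)
direct = float(np.sum(x * (t**x - c) * pmf))             # E[X q_ANS(X)] by summation
d = (mu + phi)**2 - mu * phi
closed = (phi * (mu + phi) / d)**phi * (mu * phi**2 / d - mu)
print(round(direct, 10), round(closed, 10))              # the two expressions coincide
```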
|
http://arxiv.org/abs/2307.02896v1
|
20230706100308
|
Robust Deployment and Resource Allocation for Robotic Aerial Base Station Enabled OFDM Integrated Sensing and Communication
|
[
"Yuan Liao",
"Vasilis Friderikos",
"Halim Yanikomeroglu"
] |
cs.NI
|
[
"cs.NI"
] |
Robust Deployment and Resource Allocation for Robotic Aerial Base Station Enabled OFDM Integrated Sensing and Communication
Yuan Liao, Student Member, IEEE, Vasilis Friderikos, Member, IEEE, Halim Yanikomeroglu, Fellow, IEEE
Yuan Liao and Vasilis Friderikos are with the Department of Engineering, King's College London, London WC2R 2LS, U.K. (e-mail: [email protected]; [email protected]).
Halim Yanikomeroglu is with the Non-Terrestrial
Networks (NTN) Lab, Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: [email protected]).
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The envisioned robotic aerial base station (RABS) concept is expected to bring further flexibility to integrated sensing and communication (ISAC) systems. In this letter, characterizing the spatial traffic distribution on a grid-based model, the RABS-assisted ISAC system is formulated as a robust optimization problem to maximize the minimum satisfaction rate (SR) under a cardinality constrained uncertainty set. The problem is reformulated as a mixed-integer linear programming (MILP) and solved approximately by the iterative linear programming rounding algorithm. Numerical investigations show that the minimum SR can be improved by 28.61% on average compared to fixed small cells.
6G, small cells, UAVs, integrated sensing and communication, network optimization, robotic manipulators
§ INTRODUCTION
In the upcoming 6G era, reliable wireless coverage and accurate remote sensing capability are crucial for emerging applications such as intelligent transport systems and smart manufacturing. This has led to the recent surge in the development of integrated sensing and communication (ISAC) techniques. To enhance the flexibility and adjustability of ISAC systems, in this paper, we employ robotic aerial base stations (RABS) that can attach autonomously to lampposts or other tall urban landforms via energy neutral grasping, and fly to another grasping point via controllable maneuverability to perform the sensing and communication functions.
A number of works are devoted to performing ISAC tasks to improve spectrum efficiency and reduce the expenditure cost. In <cit.>, the sensing and communication performances, evaluated by mutual information (MI) and data rate respectively, are maximized jointly under the limitation of transmission power. The work <cit.> extends this approach by incorporating channel uncertainty, while in <cit.>, the transmission power is minimized while ensuring predefined thresholds for both MI and data rate. The subcarrier assignment problem is considered in <cit.> to optimize the transmission power and satisfaction utility, respectively. Besides conventional terrestrial cells, unmanned aerial vehicles (UAVs) are expected to improve the flexibility of next generation cellular networks <cit.>. The work <cit.> employs UAVs to perform ISAC tasks to improve the security and reliability of networks. The communication throughput and energy efficiency are optimized in the UAV-assisted ISAC systems in <cit.>, respectively. However, to overcome the issue that the serving endurance of UAVs is severely confined by the on-board battery capacity, the work <cit.> proposes the prototype of a RABS carried by a UAV and mounted with a mechanical grasper, so that it can attach to lampposts when providing wireless coverage and agilely relocate to another hot-spot to adapt to traffic dynamics. The service time is significantly increased due to the lower grasping power (tens of Watts) compared to the hovering/flying power of UAV base stations (hundreds of Watts) <cit.>.
In this letter, we employ the RABS to perform ISAC tasks in a flexible and energy-efficient manner. Moreover, instead of assuming that the users' locations are fixed and known, as in <cit.>, this work is based on the spatial traffic distribution in which the traffic demand in a certain area can be predicted and seen as fixed during a certain period, even though the users keep moving and have dynamic demand. The performance metric of satisfaction rate (SR), introduced by <cit.>, is employed to evaluate the degree of satisfaction for sensing and communication demand. However, rather than treating the user/terminal as a point with specific coordinates, this grid-based model considers the traffic demand generated from a defined area encompassing a range of coordinates. To address the limitations of the point-to-point communication model within this innovative context, we introduce robust optimization tools to maximize the minimum SR and employ the cardinality constrained uncertainty set to control the robustness. To the best of our knowledge, this is the first work that introduces robust optimization tools to the grid-based traffic model. The problem is then reformulated as a mixed-integer linear program (MILP) via duality theory and we propose an iterative linear programming (LP) rounding algorithm to solve it in polynomial time. Numerical results show that RABS can improve the system performance by 28.61% on average compared to fixed small cells.
§ APPLICATION SCENARIO AND SYSTEM MODEL
The grid-based model is a widely used model to characterize the spatial traffic distribution <cit.>, and was first introduced to aerial networks in <cit.>. To employ the grid-based model in this letter, an urban geographical area is divided into multiple grids, assuming that the traffic demand generated from each grid remains unchanged and known within a certain time interval, e.g., half an hour or an hour. Our research focuses on a specific time period and aims to determine the deployment and resource allocation of RABS during this epoch. It is worth noting that the inherent flying function of RABS allows them to relocate to other lampposts in response to changes in traffic patterns in subsequent epochs. Besides, the RABS works with an omnidirectional antenna to transmit ISAC signals and receive the scattered echoes reflected by targets <cit.>.
In addition to conventional communication functions, the orthogonal frequency division multiplexing (OFDM) waveform is adopted for radar sensing applications because of its high spectrum efficiency, modulation flexibility and strong tolerance for inter-symbol interference. Unlike the OFDM communication waveform, which is continuous and consists of communication information and guard intervals, the OFDM sensing waveform is in the form of pulse signals without any embedded information or guard interval. To further improve the spectrum efficiency, OFDM-based ISAC applies a pulse OFDM waveform carrying communication information to perform ISAC functions <cit.>. The comparison of these three kinds of waveform is shown in Fig. <ref>. Specifically, suppose K available OFDM subcarriers, denoted by 𝒦 = {1,2,...,K}, are utilized to perform ISAC. Then, the sensing signal transmitted on subcarrier k with N_s consecutive integrated OFDM symbols can be described as <cit.>,
s_k(t) = e^j2π f^c_k t∑_n=0^N_s-1 a_k c_kn e^j2π k Δ f (t - nT_s)· rect[t - nT_s/T_s],
where t is the continuous-time independent variable, f^c_k and Δ f are the frequency and bandwidth of subcarrier k, a_k and c_kn denote the amplitude and phase code, respectively, T_s is the duration of each completed OFDM symbol including both the guard interval and the elementary symbol, and rect[x] is the rectangle function that is equal to one when x ∈ [0,1], and zero otherwise. Accordingly, supposing the impulse response of a sensing target on subcarrier k, including path loss and radar cross section, is characterized by h_k(t), the received signal can be written as u_k(t) = h_k(t)*s_k(t) + n(t). We consider a RABS that can be deployed in a certain area, which is divided into I grids denoted by the set ℐ = {1,2,...,I}. There is a group of candidate locations distributed in that geographical area which can be chosen by RABSs for grasping; this set is denoted by 𝒥 = {1,2,...,J}. Note that one grid can be provisioned by one or multiple subcarriers, while one subcarrier can be assigned to at most one grid to avoid intra-cell interference.
Different performance metrics are employed to evaluate the sensing performance in aerial networks, such as the Cramér–Rao lower bound and the range resolution. In order to investigate the impact of RABS deployment and bandwidth allocation on the performance of ISAC systems, we utilize the conditional MI metric to assess the radar performance, similar to <cit.>. The conditional MI enables the characterization of the information-theoretic boundaries of the target information conveyed by the reflected sensing signal, which is commonly referred to as the sensing rate. Derived from (<ref>), when the sensing demand generated from grid i is served by a RABS deployed at candidate location j and operating on the subcarrier k, the lower bound value of MI will be achieved if there is a user, distributed in grid i, having the worst channel gain <cit.>,
M^lb_ijk = 1/2Δ f T_s N_s log_2 ( 1 + |a_k|^2 T_s^2 N_s H^sen,lb_ijk/σ^2),
where |a_k|^2 calculates the transmission power of the subcarrier k, and H^sen,lb_ijk represents the lower bound of the path loss value of the surveillance channel calculated by <cit.>,
H^sen,lb_ijk = G_t^s G_r^sηλ_k^2/ ((4 π)^3 D^lb_ij^4),
where G_t^s and G_r^s are the transmitting and receiving antenna gains, respectively, η denotes the mean radar cross-section of the targets distributed in the grid, λ_k is the wavelength of subcarrier k, calculated as λ_k = c/f^c_k where c is the speed of light, and D^lb_ij denotes the longest distance between grid i and candidate location j. Similarly, introducing the shortest distance D^ub_ij into (<ref>) and (<ref>), we can calculate the upper bound values of the channel gain and the MI in the best case, denoted by H^sen,ub_ijk and M^ub_ijk. An illustration of the lower and upper bounds of the distance is shown in Fig. <ref>. For notational convenience, we calculate the average MI as M_ijk=(M^ub_ijk+M^lb_ijk)/2 and the bias as M̂_ijk=(M^ub_ijk-M^lb_ijk)/2. Consequently, for any user distributed in grid i, the MI takes a value in the range [M_ijk-M̂_ijk, M_ijk+M̂_ijk].
Moreover, the data rate is applied as the metric to evaluate the communication performance. The lower bound of the achievable rate can be calculated by,
R^lb_ijk = Δ f log_2 ( 1 + |a_k|^2 H^com,lb_ijk/σ^2 ),
where H^com,lb_ijk indicates the lower bound of the communication channel gain calculated as follows<cit.>,
H^com,lb_ijk = G_t^c G_r^c λ_k^2/ ((4 π)^2 D^lb_ij^2),
where G_t^c and G_r^c are the transmitting and receiving antenna gains.[Similar to <cit.>, we employ the free-space channel model for simplicity. Other models can be employed in the proposed formulation straightforwardly.] It is worth pointing out that the concept of the worst channel gain is investigated in <cit.> for reliable communications. The upper bounds of the communication channel gain and data rate, denoted by H^com,ub_ijk and R^ub_ijk, can then be obtained by introducing the shortest distance D^ub_ij into (<ref>) and (<ref>). The average rate and bias can be calculated as R_ijk=(R^ub_ijk+R^lb_ijk)/2 and R̂_ijk=(R^ub_ijk-R^lb_ijk)/2, respectively. Accordingly, the data rate for any user distributed in grid i lies within the range [R_ijk-R̂_ijk, R_ijk+R̂_ijk].
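For illustration, the two link-budget bounds can be evaluated as follows; this sketch is our own and the numerical values are purely illustrative, not the simulation parameters used later.

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def sensing_mi(distance, f_c, delta_f, t_sym, n_sym, power, g_t, g_r, rcs, noise):
    """Conditional MI of the sensing link for a target at the given distance."""
    lam = C / f_c
    h_sen = g_t * g_r * rcs * lam**2 / ((4 * np.pi)**3 * distance**4)
    return 0.5 * delta_f * t_sym * n_sym * np.log2(1 + power * t_sym**2 * n_sym * h_sen / noise)

def comm_rate(distance, f_c, delta_f, power, g_t, g_r, noise):
    """Achievable data rate of the communication link under the free-space model."""
    lam = C / f_c
    h_com = g_t * g_r * lam**2 / ((4 * np.pi)**2 * distance**2)
    return delta_f * np.log2(1 + power * h_com / noise)

# lower/upper bounds follow from the longest/shortest grid-to-location distance
d_far, d_near = 80.0, 20.0         # illustrative distances [m]
args = dict(f_c=3e9, delta_f=0.25e6, power=1.0, g_t=10.0, g_r=10.0, noise=1e-12)
print(comm_rate(d_far, **args), comm_rate(d_near, **args))
print(sensing_mi(d_far, t_sym=1e-5, n_sym=64, rcs=1.0, **args),
      sensing_mi(d_near, t_sym=1e-5, n_sym=64, rcs=1.0, **args))
```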
Three sets of binary variables are utilized to formulate the subcarrier allocation, grid association and RABS deployment. Specifically, x_ijk∈{0,1} indicates whether a RABS located at location j performs sensing for grid i on subcarrier k or not; y_ijk∈{0,1} denotes whether a RABS located at location j communicates with grid i on subcarrier k or not; z_j ∈{0,1} indicates whether the RABS would be deployed at location j. Because our objective is to satisfy these demands as much as possible under resource constraints, we employ the satisfaction rate (SR) to evaluate the degree of satisfaction for sensing and communication demand <cit.>. Accordingly, the SR for sensing demand in grid i is defined by (<ref>) shown on the top of this page. A parameter Γ_i, normally called the protection level for the i^th constraint, is introduced to control the conservatism of the robust optimization model. Specifically, in the numerator of (<ref>), the first part calculates the total served sensing MI when all grids experience the average channel gain, i.e., have the average MI. The second part is the robust bias, which indicates that there are up to ⌊Γ_i ⌋ coefficients allowed to change within the range [M_ijk-M̂_ijk, M_ijk+M̂_ijk], and one coefficient can at most change by (Γ_i - ⌊Γ_i ⌋) M̂_ijk. This kind of uncertainty set is referred to as the cardinality constrained uncertainty set in <cit.>, which reflects the inherent nature that only a subset of grids experience the worst channel gain in order to adversely affect the MI performance. Considering two extreme cases, setting Γ_i = 0 is the ideal scenario in which all grids have the average sensing performance. In contrast, setting Γ_i = |𝒥_i ×𝒦_i| is the most conservative case in which all grids experience the worst channel gain and therefore have the lowest MI. Overall, the numerator in (<ref>) calculates the satisfied sensing demand under the cardinality constrained uncertainty set and the denominator M_i denotes the sensing demand of grid i. Therefore, (<ref>) defines the sensing SR M_i. Similarly, the communication SR R_i is defined by (<ref>), where Λ_i and R_i are the protection level and communication demand, respectively.
Hereafter, the proposed bi-objective optimization problem is formulated to maximize the weighted sum of minimum sensing and communication SR,
max_𝐗 , 𝐘, 𝐙,M,R μM + (1-μ) R
s.t.
M_i ≥M, R_i ≥R, ∀ i,
∑_i ∈ℐ∑_j ∈𝒥 x_ijk≤ 1, ∑_i ∈ℐ∑_j ∈𝒥 y_ijk≤ 1, ∀ k,
∑_i ∈ℐ∑_k ∈𝒦 x_ijk≤ IK z_j, ∑_i ∈ℐ∑_k ∈𝒦 y_ijk≤ IK z_j, ∀ j,
∑_j ∈𝒥 z_j ≤ 1,
x_ijk, y_ijk, z_j∈{0,1}, ∀ i,j,k,
M, R∈ [0,1],
where 𝐗≜{x_ijk}, 𝐘≜{y_ijk} and 𝐙≜{z_j} are the sets of variables and μ∈ [0,1] is a predefined weight parameter. Eq. (<ref>) denotes the minimum sensing and communication SR by M and R. The constraints in (<ref>) denote that each orthogonal subcarrier can be allocated to at most one grid for sensing or communication, respectively, to avoid intra-cell interference. Eq. (<ref>) ensures that grids can be associated with location j for joint sensing and communication only if a RABS has been deployed there. Eq. (<ref>) indicates that at most one RABS can be deployed.
§ MILP REFORMULATION AND ALGORITHM DESIGN
§.§.§ MILP Reformulation
To convert the constraints in (<ref>) into linear constraints, we first define the protection function with a given 𝐗^* as,
γ_i(𝐗^*) = max_{𝒥_i ×𝒦_i ∪ (j_i,k_i) | 𝒥_i ⊆𝒥, 𝒦_i ⊆𝒦, | 𝒥_i ×𝒦_i| ≤⌊Γ_i ⌋, (j_i,k_i) ∈𝒥×𝒦 - 𝒥_i ×𝒦_i }{∑_j ∈𝒥_i∑_k ∈𝒦_iM̂_ijk x^*_ijk
+ (Γ_i - ⌊Γ_i ⌋) M̂_ij_ik_i x^*_ij_ik_i} ,
which can be written as the following problem:
γ_i(𝐗^*) = max_𝐰_i ∑_j ∈𝒥∑_k ∈𝒦M̂_ijk x^*_ijk w_ijk
s.t.
∑_j ∈𝒥∑_k ∈𝒦 w_ijk≤Γ_i,
0 ≤ w_ijk≤ 1, ∀ j,k,
where 𝐰_i ≜{ w_ijk | ∀ j ∈𝒥, ∀ k ∈𝒦} is the vector of introduced variables. The equality between (<ref>) and (<ref>) can be proved by the observation that the optimal solution of (<ref>) must include ⌊Γ_i ⌋ variables taking the value of one and one variable at Γ_i - ⌊Γ_i ⌋. The detailed proof can be found in Proposition 1 of <cit.>. Write the dual of (<ref>) as follows,
γ_i(𝐗^*) = min_α_i, {β_ijk | ∀ j, k} ∑_j ∈𝒥∑_k ∈𝒦β_ijk + Γ_i α_i
s.t.
α_i + β_ijk≥M̂_ijk x^*_ijk, ∀ j,k,
α_i ≥ 0, β_ijk≥ 0, ∀ j,k,
where {α_i} and {β_ijk} are dual variables. It can be observed that problem (8) is a linear program, thus strong duality holds between (<ref>) and (<ref>), i.e., they have equal optimal values if feasible. Introducing (<ref>) into (<ref>), the constraints M_i ≥M in (<ref>) can be rewritten as the following constraint set:
[left= ]align
1/M_i ( ∑_j ∈𝒥 ∑_k ∈𝒦 (M_ijk x_ijk
- β_ijk) - Γ_i α_i )
≥M, ∀i,
α_i + β_ijk ≥M̂_ijk x_ijk, ∀i,j,k,
α_i ≥0, β_ijk ≥0, ∀i,j,k.
Applying the same procedure to the constraints R_i ≥R in (<ref>), the problem (<ref>) can be then reformulated as a MILP without loss of optimality.
§.§.§ Iterative LP Rounding Algorithm
To overcome the curse of dimensionality, an iterative LP rounding algorithm proposed in <cit.> is employed to solve the reformulated MILP problem approximately. Firstly, we focus on a selected location and use the same method to traverse all candidate locations at subsequent stages to obtain the best one. It can be observed that the sensing and communication decisions in (<ref>) can be decoupled once the variable 𝐙 is determined. We set z_j' = 1 and all other elements in 𝐙 are zero. A MILP problem including only the variables related to the sensing task can be written from (<ref>) and (<ref>) as,
max_𝐗_𝐣', M, 𝐀, 𝐁_𝐣' μM
s.t.
1/M_i( ∑_k ∈𝒦 (M_ij'k x_ij'k
- β_ij'k) - Γ_i α_i )
≥M, ∀ i,
α_i + β_ij'k≥M̂_ij'k x_ij'k, ∀ i,k,
∑_i ∈ℐ x_ij'k≤ 1, ∀ k,
M∈ [0,1], α_i ≥ 0, β_ij'k≥ 0, ∀ i,k,
x_ij'k∈{0,1}, ∀ i,k,
where 𝐗_𝐣'≜{x_ij'k| ∀ i ∈ℐ, ∀ k ∈𝒦}, 𝐀≜{α_i} and 𝐁≜{β_ij'k| ∀ i ∈ℐ, ∀ k ∈𝒦} are the sets of variables.
To apply the iterative LP rounding algorithm <cit.>, we first solve the linear relaxation of problem (<ref>), that is, we replace the constraints in (<ref>) by x_ij'k∈ [0,1], and denote the solution as (𝐗_𝐣'^*, M^*, 𝐀^*, 𝐁^*). If 𝐗_𝐣'^* is binary, the optimal solution for (<ref>) is obtained. Otherwise, for every x_ij'k^* = 1 we add the constraint x_ij'k = 1 to (<ref>). Afterwards, we decide how to round the variables with fractional values in 𝐗_𝐣'^* to binary values via a feasibility-checking procedure. Firstly, we select the variable with the largest fractional value in 𝐗_𝐣'^* and denote it as x_i_0j'k_0.[Because the objective is to maximize the minimum SR, it is suggested to prioritize, in this step, the elements in 𝐗_𝐣'^* corresponding to grids that have not been allocated any subcarrier, to guarantee fairness.] We add the constraint x_i_0j'k_0 = 1 to (<ref>) and try to solve this modified LP. If it is infeasible, we set x_i_0j'k_0 = 0 and round the other variables according to 𝐗_𝐣'^*. If the modified LP is feasible, we add the constraint x_i_0j'k_0 = 1 to (<ref>) and repeat the above procedure until a binary 𝐗_𝐣' is achieved or no more subcarriers can be allocated. More details of the iterative LP rounding algorithm can be found in <cit.>.
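A skeleton of this loop is sketched below; the three callables are placeholders for a concrete LP backend (they are not part of the original implementation), and the rounding rule used when the modified LP is infeasible is one possible reading of the procedure.

```python
def iterative_lp_rounding(solve_relaxation, fix_to_one, feasible_with_fix):
    """Skeleton of the iterative LP rounding loop.

    solve_relaxation() -> dict mapping (i, k) to the relaxed value of x_{i j' k};
    fix_to_one(key) permanently adds the constraint x_{i j' k} = 1 (assumed idempotent);
    feasible_with_fix(key) tests feasibility of the current LP with that fix added.
    """
    x_star = solve_relaxation()
    while True:
        for key, value in x_star.items():
            if value >= 1.0 - 1e-9:
                fix_to_one(key)                      # keep variables already at one
        fractional = {k: v for k, v in x_star.items() if 1e-9 < v < 1.0 - 1e-9}
        if not fractional:
            return {k: int(round(v)) for k, v in x_star.items()}   # already binary
        candidate = max(fractional, key=fractional.get)            # largest fractional value
        if feasible_with_fix(candidate):
            fix_to_one(candidate)
            x_star = solve_relaxation()              # re-solve and repeat
        else:
            # infeasible: set the candidate to zero and round the rest as in x_star
            rounded = {k: (1 if v >= 0.5 else 0) for k, v in x_star.items()}
            rounded[candidate] = 0
            return rounded
```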
In Section 6.6.1 of <cit.>, the worst-case complexity of solving a linear program is 𝒪((n^v+n^c)^1.5n^v^2), where n^v and n^c are the numbers of variables and constraints, respectively. In the iterative LP rounding algorithm, the number of iterations is upper bounded by I × K, thus the complexity of the proposed algorithm is approximately 𝒪(IK·(n^v+n^c)^1.5n^v^2 ), where n^c = 2IK+I+K+1 and n^v is upper bounded by 2IK+I+1 for the linear relaxation of (<ref>).
§ NUMERICAL INVESTIGATIONS
A geographical area of 100 × 100 m^2 is divided into 25 small square grids with a size of 20 × 20 m^2 each, where 10 candidate locations are distributed randomly for RABS grasping. The sensing and communication demand of grids follows the log-normal distribution <cit.>, where the mean value and standard deviation are denoted by [m^sen, m^com] and [σ^sen, σ^com] <cit.>. Hereafter, unless otherwise specified, we set m^sen = 15 bit, m^com = 20 Mbps and σ^sen = σ^com = 1. Moreover, the carrier frequency of the ISAC signals is f^c_0 = 3 GHz and each subcarrier has the spacing Δ f = 0.25 MHz. Accordingly, the frequency of the k^th subcarrier is calculated by f_k^c = f^c_0 + kΔ f <cit.>. For notational convenience, we introduce a robustness parameter δ to control the protection levels {Γ_i} and {Λ_i}, that is, Γ_i = Λ_i = δ× J × K. Taking δ = 10^-1 as an example, it means that 10% of the coefficients in (<ref>)-(<ref>) are allowed to take values from [M_ijk-M̂_ijk, M_ijk+M̂_ijk] and [R_ijk-R̂_ijk, R_ijk+R̂_ijk]. Other simulation parameters are reported in Table <ref>.
By adjusting the robustness parameter δ, we control the protection levels {Γ_i} and {Λ_i} as well as the robustness of problem (<ref>). It can be observed from Fig. <ref> that the minimum SR decreases as the robustness increases. This is in accordance with the intuition that the growth of system robustness comes at the expense of system performance. Taking a fixed small cell distributed randomly as a benchmark, it is shown in Fig. <ref> that the RABS can improve the system performance by 28.61% and 21.46% on average when setting the standard deviation to 1 and 2, respectively. Moreover, comparing the results for different standard deviation values of the traffic distribution, it can be seen that the robustness has less impact on the system performance when the traffic spatial distribution is highly heterogeneous, which is reflected in the smoother curves in Fig. <ref>.
Fig. <ref> investigates the number of allocated subcarriers versus the sensing traffic distribution. Note that the subcarrier allocation is biased towards grids with higher traffic demand. The reason is that our objective is to maximize the minimum SR to guarantee fairness. Moreover, Fig. <ref> shows that the robustness parameter δ also affects the subcarrier allocation decisions. Comparing the results when setting δ = 10^-4 and δ = 10^0, the number of allocated subcarriers differs in grids 2, 14, 21, and 24.
The performance of the proposed iterative LP rounding algorithm is analyzed in Fig. <ref>. Although the maximum number of iterations is upper-bounded by I × K, as alluded to in Section <ref>, in practice the stopping criterion is satisfied after solving a limited number of LP problems, as shown in Fig. <ref>. Moreover, Fig. <ref> presents the optimality gap of the iterative LP rounding algorithm by comparing with the globally optimal solution obtained by Gurobi <cit.>. Numerically, the optimality gap of the proposed method is at least 2% when the robustness parameter is 10^-4, and 22% at most when the robustness parameter is 10^-2.5. However, as shown in Section <ref>, the complexity of the proposed algorithm is polynomial, in contrast to the exponential worst-case complexity of Gurobi <cit.>.
§ CONCLUSIONS
In this paper, a flexible integrated sensing and communication (ISAC) system is proposed, assisted by the robotic aerial base station (RABS). To characterize the users' mobility and changing demand, we employ a grid-based model to represent the spatial traffic distribution. A robust optimization problem is formulated on the cardinality constrained uncertainty set to determine the RABS deployment and resource allocation, which is reformulated as a MILP via duality theory and solved by a proposed iterative LP rounding algorithm in polynomial time. Numerical investigations show that the minimum SR can be improved by 28.61% on average thanks to the flexible mobility of RABS deployment. Future extensions of this letter may consider employing the novel orthogonal time frequency space modulation to improve the performance of OFDM-based ISAC systems <cit.>, and applying the Cramér–Rao lower bound as the metric to investigate how the maneuverability of the RABS can enhance the ISAC performance.
IEEEtran
|
http://arxiv.org/abs/2307.01642v1
|
20230704105416
|
Relations between basis sets of fields in the renormalization procedure
|
[
"Simonas Draukšas"
] |
hep-ph
|
[
"hep-ph"
] |
Relations between basis sets of fields in the renormalization
procedure
Simonas Draukšas (E-mail: [email protected])
Institute of Theoretical Physics and Astronomy, Faculty of Physics,
Vilnius University,
9 Saulėtekio, LT-10222 Vilnius, Lithuania
August 1, 2023
=========================================================================================================================================================================================================================
It seems that the literature suggests to go in two opposing directions simultaneously. On the one hand, many papers construct basis-independent quantities, since exactly these quantities appear in the expressions for observables. This means that the mixing angles such as tanβ in the Two Higgs Doublet Model must drop out when calculating anything physical. On the other hand, there are many attempts to renormalize such mixing angles — this is in the opposite direction to basis-independence. This basis-dependent approach seems to bring gauge-dependence and singular behaviour, both of which are required to be absent in mixing renormalization. Most importantly, mixing angle counterterms single out a preferred basis and further basis rotations lead to inconsistencies. In contrast, we argue that the bare mixing angles should be identified with the renormalized ones — this is the basis-independent approach — such that all the mixing renormalization requirements are fulfilled in a trivial and consistent manner.
§ INTRODUCTION
Nowadays, the renormalization procedure is mostly well-established and is no longer considered to just “sweep infinities under the rug”, however, this establishment is not complete. For example, it does not seem that there is an agreed-upon recipe for the renormalization of mixing angles and the literature suggests a myriad of renormalization schemes <cit.> to name a few. Even more so, there appears to exist two different philosophies regarding the renormalization of mixing angles, sometimes even used simultaneously <cit.> or proposed as alternatives <cit.>. This is a rather unpleasant situation since particle mixing is present already in the quark sector of the Standard Model (SM) as well as in nearly all models with extended scalar sectors as compared to the SM.
In slightly more detail, the two renormalization approaches differ in whether the mixing angles receive counterterms or not. The more common treatment is to introduce mixing angle counterterms, which are rather inevitably related to the field renormalization (e.g. <cit.>). In turn, this causes these mixing counterterms to be gauge-dependent — an unwanted feature — such that additional effort must be put in to separate the gauge-independent part (e.g. <cit.>). The less common approach is to trade the mixing matrix counterterms for the off-diagonal mass matrix counterterms such that the bare mixing matrix is already renormalized (e.g. <cit.>). It seems that the latter, although not as popular, does not introduce downsides such as unwanted gauge-dependence.
The fact that there are two rather different philosophies, one of them in general leading to gauge-dependent mixing angle counterterms, seems to be an expression of the fact that mixing angles are basis-dependent and, therefore, not physical quantities. For example, this has been rather explicitly noted in <cit.> at tree-level when considering basis-independent methods for the Two Higgs Doublet Model (THDM). An analogous statement on the redundancy of the renormalization of mixing angles was also made in <cit.> in the context of the THDM. Seeing that mixing angles are basis-dependent is simple, for example, the flavour basis of the SM has no mixing matrices, but rotation to the quark mass-eigenstate basis produces the quark mixing matrix. Of course, many other bases where the quarks are not in their mass-eigenstates also contain some mixing matrix. The not so simple point, which seems to cause a lot of confusion, is whether and how to renormalize these basis-dependent quantities.
In this work we do not intend to propose a particular renormalization scheme, instead, we want to establish a conceptually consistent philosophy for the renormalization of mixing angles such that particular renormalization schemes can later be constructed. In particular, we expand on the point made in our previous work <cit.>, where we also propose a renormalization scheme for fermions, that mixing angles should not have counterterms associated to them. The absence of mixing angle counterterms seems to offer all of the required properties for mixing renormalization <cit.> and is a step towards basis-independence. Therefore, we consider this approach to be the consistent one and the one that should be used in practice over the more common approach with counterterms for mixing angles.
The paper is structured as follows: Section <ref> introduces nearly all the needed notation and relations, Section <ref> is then dedicated to providing arguments for having the mixing angle counterterms set to 0. In particular, Section <ref> is based on basis-independence arguments, Section <ref> discusses the gauge-dependence and Section <ref> considers the degenerate mass limit. In Section <ref> we give our conclusions.
§ BASIS ROTATIONS AND RENORMALIZATION
In this section we set up the discussion of mixing, mass, and field renormalization by generalizing the discussion found in <cit.>, while more specific arguments will be given in the following sections.
For simplicity, let us consider a system of real scalar fields
ϕ_0=[ ϕ^0_1; ϕ^0_2; ⋮; ϕ^0_n ] ,
where the 0 (sub)superscripts indicate that the fields are bare. Now, one may relate the fields ϕ_0 in the initial basis to some other basis of the fields h_0 via an orthogonal rotation matrix R_0
ϕ_0=R_0 h_0 .
Considering the kinetic term in the Lagrangian in momentum space we may write this relation as
𝒦= ϕ^T_0(p^2- M^2_0 )ϕ_0
= h^T_0(p^2- R^T_0 M^2_0R_0 )h_0
= h^T_0(p^2-M^2_0)h_0 ,
where T in the superscript stands for transposition, p^2 is the squared momentum, M^2_0 (M^2_0) is the bare mass-squared matrix in the ϕ_0 (h_0) basis, which is in general not diagonal. We have used
R_0^T R_0=1
in the momentum term and defined
M^2_0= R^T_0 M^2_0R_0 .
Apart from performing basis rotations, the fields may be renormalized
ϕ_0= Z ϕ=(1+δZ)ϕ .
Here Z is the field renormalization constant, δZ is the corresponding counterterm that can be considered to be of 1-loop order, and ϕ stands for the vector of renormalized fields. Analogously, the fields h_0 may also be renormalized
h_0=Zh=(1+δZ) h .
The renormalization procedure also requires counterterms for the mass matrices
M^2_0= M^2+δ M^2 ,
M^2_0= M^2+δM^2 ,
where M^2(M^2) is the renormalized mass matrix and the δ M^2(δM^2) is the mass matrix counterterm in the ϕ( h) basis. For the sake of the argument we also introduce mixing matrix counterterms
R_0= R+δ R
such that both the bare and the renormalized mixing matrices are orthogonal. The following property stems from orthogonality
δ( R_0^T R_0)=0
⇒ δ R^T R=- R^T δ R .
Now, we should be able to apply the renormalization procedure to the kinetic term, Eq. (<ref>), in any basis. For example, taking Eqs. (<ref>) and (<ref>) we get
𝒦= ϕ^T{p^2-M^2
+δZ^T(p^2-M^2)
+(p^2-M^2)δZ
-δM^2}ϕ
= h^T{p^2-M^2
+δZ^T(p^2-M^2)
+(p^2-M^2)δZ
-δM^2}h ,
where we dropped all the terms non-linear in the counterterms. Alternatively, taking Eq. (<ref>), where the mixing matrix R_0 is present, leads to the following
𝒦= h^T{p^2-M^2
+δZ^T(p^2-M^2)
+(p^2-M^2)δZ
-δR^T R M^2
-M^2R^T δ R
- R^Tδ M^2R}h ,
where we have
M^2= R^T M^2R .
Splitting the field counterterms into the symmetric and anti-symmetric parts
δZ=δZ^S+δZ^A ,
with
(δZ^S)^T=δZ^S ,
(δZ^A)^T=-δZ^A ,
and by using Eq. (<ref>) we may rewrite the kinetic term as
𝒦= h^T{p^2-M^2
+δZ^S(p^2-M^2)
+(p^2-M^2)δZ^S
-[M^2, R^T δ R+ δZ^A ]
- R^Tδ M^2R}h ,
where […, …] is the commutator. The commutator term shows that the mixing matrix counterterms are indeed degenerate with the anti-symmetric part of the field renormalization, which is a slightly more general version of the statement made in <cit.>. This degeneracy implies that the mixing may be renormalized through the (anti-symmetric part of the) field renormalization, which is what enables, for example, the scheme in <cit.>. However, we attempt to make the statement stronger — the mixing angle/matrix counterterms should always be included in the field renormalization. In the following sections we give arguments for why one should set δ R = 0 by comparing Eqs. (<ref>), (<ref>), and (<ref>) in terms of basis-dependence and by discussing gauge-dependence and the degenerate mass limit.
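To make this degeneracy concrete, consider a minimal two-field illustration (our own, with a single mixing angle θ):
R(θ)= [ cosθ -sinθ; sinθ cosθ ] ,
δ R= δθ [ -sinθ -cosθ; cosθ -sinθ ] ,
so that
R^T δ R= δθ [ 0 -1; 1 0 ] ,
which is anti-symmetric and enters Eq. (<ref>) only through the combination R^T δ R+ δZ^A. The single counterterm δθ can therefore always be absorbed by the shift δZ^A_12→δZ^A_12-δθ, which is precisely the statement that the mixing angle does not need a counterterm of its own.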
§ ARGUMENTS FOR HAVING δ R = 0
§.§ Basis independence
Basis-independent methods are often sought after since observables must be expressed in terms of basis-independent quantities, for example, see <cit.>. In a similar manner it is desirable for the renormalization procedure to also show some basis-independent features. For example, the form of the renormalized kinetic term in Eqs. (<ref>) and (<ref>) is the same although the bases are different — this is welcome. In contrast, the form of Eq. (<ref>) is already different due to additional mixing/rotation matrix counterterms, even though all three equations (should) correspond to the same bare kinetic term.
It is rather simple to see that Eq. (<ref>) can be brought to the form of Eq. (<ref>), by simply setting R_0 = R ⇔δ R=0 or, equivalently, by redefining the anti-symmetric part of the field renormalization to include R^Tδ R. Once δ R no longer appears we may easily equate Eqs. (<ref>) and (<ref>) and get
δM^2 = R^Tδ M^2R .
Further, Eqs. (<ref>) and (<ref>) correspond to the same bare kinetic term if
Z= R^T Z R
and
ϕ = R h .
In more detail, with δ R≠ 0 one is, or at least should be, free to perform a rotation by R^T on the renormalized fields h in Eq. (<ref>)
𝒦= h^' T{p^2-M^2
+δZ^S(p^2-M^2)
+(p^2-M^2)δZ^S
-[M^2, δ R R^T + R δZ^A R^T ]
-δ M^2}h^' ,
Here h ^'= R h[For R_0= R one trivially has h^' = ϕ], we have used Eq. (<ref>) and Eq. (<ref>) for the symmetric part of the field renormalization. Evidently, all the terms except for the one with δ R contain quantities in the basis of ϕ even though the fields are labeled as h^'. This means that one computes identical amplitudes in both the ϕ and h^' bases, except that they are renormalized with different sets of counterterms. The presence of the δ R counterterm is the source of inconsistency.
For one thing, because of the δ R counterterm the basis rotations of the anti-symmetric part of the field renormalization do not seem to follow the same law as the other counterterms. For the symmetric part we could use Eq. (<ref>), while for the anti-symmetric part consistency would require
δ Z^A = δ R R^T + R δZ^A R^T ,
which differs from the transformation law of Eq. (<ref>) by the extra δ R R^T term. To preserve the same law of basis transformations, Eq. (<ref>), one must have δ R=0.
For another view of the inconsistency, one easily notices that the δ R counterterm in the basis h^' does not have an associated renormalized parameter. This means that it is impossible to form the bare mixing matrix R_0 in the h^'_0 basis, i.e. the bare kinetic term no longer follows the form of Eq. (<ref>) and instead becomes
𝒦^'=
h^' T_0{
p^2 - M_0^2
-[ M^2,
δ R R^T
+ RδZ^A R^T
-δ Z^A]
} h_0^'≠𝒦 .
Here we have used the inverse of h^'_0= Z h. The only way to preserve the bare kinetic term and more generally the bare Lagrangian, which defines the theory, is for the commutator term to vanish. However, this gets us back to Eq. (<ref>) and so, setting δ R=0 preserves not only the form of basis transformations, but also the form of the bare Lagrangian.
The third and final view of the inconsistency may be seen by considering why in Eq. (<ref>) we have 𝒦^'≠𝒦. We started with the bare kinetic term in Eq. (<ref>), rotated it by R_0 to Eq. (<ref>), renormalized it to get Eq. (<ref>), and tried to rotate back into the ϕ basis by R^T. However, instead of Eq. (<ref>) the rotation took us into Eq. (<ref>) and 𝒦^' in Eq. (<ref>)! In other words, we see that basis rotations and the renormalization procedure do not commute, i.e. it matters whether one renormalizes the theory before or after basis rotations. This is a rather awkward feature, since there is nothing special about basis rotations or renormalization and we should be working with the same theory in whichever basis we choose to renormalize it. In turn, we formulate a consistency condition, which we also imposed in <cit.>, that basis rotations should commute with the renormalization procedure. This condition automatically requires the bare rotations to be identified with the renormalized ones, i.e. R_0= R and δ R=0.
The upshot is that having the bare rotation matrix set to the renormalized one, R_0= R, allows one to freely change the basis at any point, be it for the bare fields as in Eq. (<ref>) or the renormalized ones in Eq. (<ref>), while keeping the same form of the Lagrangian. Alternatively, this may be rephrased as having a basis-invariant set of counterterms, i.e. upon basis rotations
{ Z, δ M^2, δλ}⇒{Z, δM^2, δλ}
but not
{ Z, δ M^2, δλ}⇒{Z, δM^2, δ R, δλ} ,
where δλ and δλ stand for the counterterms of other parameters in the theory in the two respective bases.
There is also a formulation in slightly more philosophical terms. One of the main points of the renormalization procedure is that it takes some measurement (observable) as a reference point in order to make the theory predictive. The standard book-keeping devices for these measurements are the counterterms. Since the observables must be basis-independent, it also makes sense to have a basis-independent set of counterterms: this means δ R = 0. Of course, one may argue that things such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix <cit.> elements can be measured. However, the CKM matrix itself can in principle be expressed in terms of the initial (renormalized) mass matrices of the up- and down-type quarks. It is the renormalization of these mass matrices that provides a set of basis-independent counterterms. Mixing matrices such as the CKM matrix may still be used as they are a nice way of parameterizing the mixing, but it should not be forgotten that they are derived and basis-dependent quantities and, hence, should not have counterterms.
In the two following sections we show that setting δ R to 0 is not only conceptually consistent, but also of practical importance.
§.§ Gauge dependence
Let us consider the case with δ R ≠ 0 and see how it leads to difficulties. One of the requirements for the mixing renormalization is that it should be gauge-invariant <cit.>. However, this is a rather complicated task because of Eq. (<ref>) and the degeneracy between δ Z^A and R^Tδ R. A way to investigate gauge dependence is via the Nielsen Identities <cit.>, which allow to take gauge derivatives of the self-energies.
For concreteness, let us proceed in the basis of the fields h and consider the 1-loop case, for which the derivative w.r.t. the gauge parameter ξ of the bare self-energy Π^0(p^2) is <cit.>[Note that achieving this form requires the inclusion of tadpole diagrams in the self-energy.]
∂_ξΠ^0(p^2) =
Λ^T(p^2)(p^2-M^2)
+(p^2-M^2)Λ(p^2) ,
where Λ is a correlation function involving BRST sources, describes the gauge-dependence of Π^0(p^2), and is a matrix in flavour space. Just as for the field renormalization in Eq. (<ref>), we may split Λ in its symmetric and anti-symmetric parts, then the Nielsen Identity becomes
∂_ξΠ^0(p^2) =
Λ^S(p^2)(p^2-M^2)
+(p^2-M^2)Λ^S(p^2)
-[M^2, Λ^A]
.
Let us also consider the self-energy Π(p^2) renormalized as in Eq. (<ref>)
Π(p^2)= Π^0(p^2)
+δZ^S(p^2-M^2)
+(p^2-M^2)δZ^S
-[M^2, R^T δ R+ δZ^A ]
- R^Tδ M^2R .
Now, we may take the gauge derivative of the renormalized self-energy and arrive at
∂_ξΠ(p^2)= (∂_ξδZ^S+Λ^S)(p^2-M^2)
+(p^2-M^2)(∂_ξδZ^S+Λ^S)
-[M^2, R^T ∂_ξδ R + ∂_ξδZ^A+Λ^A ]
- R^T∂_ξδ M^2R .
Here we assumed M^2 and R to be gauge-independent. It is evident that the field counterterms as well as δ R are naturally associated with gauge-dependent structures. In turn, it is rather hard to fix δ R in a gauge-independent way since that immediately requires an additional renormalization condition to break the degeneracy between the field and mixing matrix counterterms. Once again, the easiest way around this is to simply set δ R=0.
In contrast, the mass counterterm R^Tδ M^2R is not associated with any gauge-dependent structure and so it can be defined in a naturally gauge-independent way, only non-physical renormalization conditions can induce gauge-dependence in the mass counterterm.
§.§ Non-singular degenerate mass limit
If one keeps δ R≠ 0 and manages to renormalize it in a gauge-independent way, the counterterm will still be problematic. To see this, let us for simplicity explicitly choose a basis where the mass matrix is diagonal
M^2 = diag(m_1^2, …, m_n^2)
and take Eq. (<ref>)
Π_ij(p^2)= Π_ij^0(p^2)
+δZ_ij^S(p^2-m^2_j)
+(p^2-m^2_i)δZ_ij^S
-(m^2_i-m^2_j)((R^T δ R)_ij+ δZ_ij^A )
-( R^Tδ M^2R)_ij .
Here i, j are flavour indices, the non-bold notation (where appropriate) indicates matrix elements, and the counterterm ( R^Tδ M^2R)_ij is in general not diagonal even if M^2 is.
Further, the counterterms must cancel the UV divergences in the bare self-energy independently of the chosen scheme, hence, we only take the UV parts, although the arguments carry over to the finite parts without difficulty. In addition, we consider only terms with i≠ j and also drop terms proportional to p^2-m^2_i and p^2-m^2_j such that only the commutator term and the mass counterterm remain
.Π_ij^0(p^2)|_UV,p^2-m^2_i,j =
(m^2_i-m^2_j)((R^T δ R)_ij+ δZ_ij^A )
+( R^Tδ M^2R)_ij .
Here lies the problem: in the degenerate mass limit, i.e. m_i→ m_j, the l.h.s. does not vanish in general. In turn, if one wants to cancel any of the UV divergences in this limit with the counterterms δ R or δZ^A, these counterterms must be proportional to (m_i^2-m_j^2)^-1.
In the literature there are many schemes (e.g. <cit.>) where the off-diagonal mass counterterm ( R^Tδ M^2R)_ij is set to 0, such that everything in Eq. (<ref>) must be canceled with the mixing matrix and the field renormalization counterterms, which must be singular in the degenerate mass limit for the cancellation to work out. In turn, these singularities can cause numerical problems, which are required to be absent for the mixing renormalization <cit.>. On the other hand, the non-diagonal mass counterterm can naturally cancel the non-vanishing terms without being singular as is explicitly done in <cit.>. Also note that according to Section <ref> (and with the diagonal mass matrix) the gauge-dependent parts vanish in the degenerate mass limit <cit.> so that the mass counterterms can be defined in a gauge-independent way. Even when the renormalization is performed in a basis where the (renormalized) mass matrix is diagonal the corresponding counterterm has to be a matrix with possible non-trivial off-diagonal elements depending on the particular model — this avoids singularities in the degenerate mass limit. Out of Π_ij^0 only terms which are gauge-independent and proportional to m_i^2-m_j^2 could be included in δ R such that it is non-singular and gauge-invariant. Of course, this is a step towards basis-dependence and it is best to keep δ R = 0 and to avoid inconsistencies altogether.
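As a simple illustration (our own, for the two-field system with a single angle θ introduced above), setting ( R^Tδ M^2R)_12=0 in Eq. (<ref>) leaves
δZ^A_12-δθ = .Π_12^0(p^2)|_UV,p^2-m^2_1,2 / (m^2_1-m^2_2) ,
so that δθ and/or δZ^A_12 must scale as (m^2_1-m^2_2)^-1 when m_1→ m_2, unless the numerator vanishes at the same rate. Keeping the off-diagonal mass counterterm instead, ( R^Tδ M^2R)_12=.Π_12^0(p^2)|_UV,p^2-m^2_1,2, absorbs the same divergence without any singular behaviour.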
§ CONCLUSIONS
In this paper we have considered the interplay between basis rotations of the fields and the renormalization procedure. In particular, we have found that adding counterterms to mixing angles is a step towards basis-dependence and introduces various problems. For one thing, counterterms of mixing angles are naturally associated with gauge-dependent structures, while at the same time a gauge-independent definition of them is likely to be singular in the degenerate mass limit. Neither of these two properties is welcome, since the former makes physical amplitudes gauge-dependent and the latter causes numerical instabilities. More importantly, mixing angle counterterms obstruct the basis transformation law such that the renormalization procedure does not commute with basis rotations; we see this as an inconsistency and a step towards basis-dependence. In contrast, stepping in the direction of basis-independence by setting mixing angle counterterms to 0 completely avoids these inconsistencies together with the gauge-dependence and singular-behaviour problems. We conclude that the basis-independent approach is simpler and more consistent in practice, and is the one that should be taken.
§.§.§ Acknowledgements
The author would like to thank his supervisor T. Gajdosik
for reading the manuscript as well as for helpful comments and discussions.
|
http://arxiv.org/abs/2307.02539v1
|
20230705180002
|
A comprehensive optical search for pre-explosion outbursts from the quiescent progenitor of SN~2023ixf
|
[
"Yize Dong",
"David J. Sand",
"Stefano Valenti",
"K. Azalee Bostroem",
"Jennifer E. Andrews",
"Griffin Hosseinzadeh",
"Emily Hoang",
"Daryl Janzen",
"Jacob E. Jencson",
"Michael Lundquist",
"Nicolas E. Meza Retamal",
"Jeniveve Pearson",
"Manisha Shrestha",
"Joshua Haislip",
"Vladimir Kouprianov",
"Daniel E. Reichart"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR"
] |
Yize Dong
[email protected]
0000-0002-7937-6371]Yize Dong (董一泽)
Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA
0000-0003-4102-380X]David J. Sand
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA
0000-0001-8818-0795]Stefano Valenti
Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA
0000-0002-4924-444X]K. Azalee Bostroem
LSSTC Catalyst Fellow
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA
0000-0003-0123-0062]Jennifer E. Andrews
Gemini Observatory, 670 North A`ohoku Place, Hilo, HI 96720-2700, USA
0000-0002-0832-2974]Griffin Hosseinzadeh
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA
0000-0003-2744-4755]Emily Hoang
Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA
0000-0003-0549-3281]Daryl Janzen
Department of Physics & Engineering Physics, University of Saskatchewan, 116 Science Place, Saskatoon, SK S7N 5E2, Canada
0000-0001-5754-4007]Jacob E. Jencson
Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
0000-0001-9589-3793]Michael Lundquist
W. M. Keck Observatory, 65-1120 Māmalahoa Highway, Kamuela, HI 96743-8431, USA
0000-0002-7015-3446]Nicolas E. Meza Retamal
Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA
0000-0002-0744-0047]Jeniveve Pearson
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA
0000-0002-4022-1874]Manisha Shrestha
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA
0000-0002-6703-805X]Joshua Haislip
Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA
0000-0003-3642-5484]Vladimir Kouprianov
Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA
0000-0002-5060-3673]Daniel E. Reichart
Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA
We perform a comprehensive search for optical precursor emission at the position of SN 2023ixf using data from the DLT40, ZTF and ATLAS surveys.
By comparing the current data set with precursor outburst hydrodynamical model light curves, we find that the probability of a significant outburst within five years of explosion is low, and the circumstellar material (CSM) ejected during any possible precursor outburst is likely smaller than ∼0.015 M_⊙. By comparing to a set of toy models, we find that, if there was a precursor outburst, the duration must have been shorter than ∼100 days for a typical brightness of M_r≃-9 mag or shorter than 200 days for M_r≃-8 mag; brighter, longer outbursts would have been discovered.
Precursor activity like that observed in the normal type II SN 2020tlf (M_r≃-11.5) can be excluded in SN 2023ixf.
If the dense CSM inferred by early flash spectroscopy and other studies is related to one or more precursor outbursts, then our observations indicate that any such outburst would have to be faint and only last for days to months, or it occurred more than five years prior to the explosion.
Alternatively, any dense, confined CSM may not be due to eruptive mass loss from a single red supergiant (RSG) progenitor. Taken together, the results of SN 2023ixf and SN 2020tlf indicate that there may be more than one physical mechanism behind the dense CSM inferred around some normal type II SNe.
§ INTRODUCTION
Red supergiant (RSG) stars with zero-age main sequence masses in the range ∼8-17 can explode as Type II supernovae (SNe) <cit.>. Early SN observations provide hints about the circumstellar environment around the progenitor star just prior to explosion. For instance, spectroscopic observations within days of explosion show narrow `flash' recombination lines in a significant fraction of normal SNe II, which quickly disappear after several days <cit.>. A standard interpretation is that these lines signal dense, confined CSM that has been ionized by the shock breakout <cit.> or ejecta interaction <cit.>. Meanwhile, the fast rise of type II SN light curves has also been interpreted as a sign of dense CSM around the progenitor star, as indicated by hydrodynamic modeling <cit.>. Between 40–70% of standard type IIP SNe show evidence of dense CSM around their progenitor stars <cit.>.
The dense CSM around the progenitor requires intense mass loss, equivalent to ∼10^-4–10^-2 M_⊙ yr^-1, in the months to years leading up to explosion, much higher than the mass loss due to the normal stellar winds of RSGs. However, how and when this enhanced mass loss occurs is still a mystery. Some of the possible mass-loss mechanisms are mass ejection driven by wave transport <cit.>, common envelope interaction with a compact object <cit.>, and dynamical instability associated with turbulent convection in the core <cit.>.
One direct method to constrain very late-stage mass-loss mechanisms of SN progenitors is searching for signs of pre-explosion activity or precursor emission.
Precursor emission has been observed in many SNe IIn <cit.>, and statistical studies on a sample of SNe IIn also support the idea that most of them experienced outbursts prior to exploding <cit.>. In contrast, pre-explosion activity in a normal Type IIP/L SN has only been seen in SN 2020tlf, where excess emission is observed from ∼130 days prior to the explosion all the way up until the ultimate SN explosion <cit.>.
Based on the pre-explosion images of four Type IIP/L SN progenitors, <cit.> found that the probability that their progenitors had extended outbursts after oxygen ignition is low. However, they could not exclude short outbursts on the time-scale of months from their data.
Recently, there has been theoretical research on the morphology of precursor light curves of Type IIP/L SNe.
<cit.> constructed model spectra of the precursor emission for different mass-loss scenarios. They suggested that the precursor outburst likely occurs within one year of the explosion and would be optically bright for a few days with M_R≃-8.5, accompanied by intense mass loss.
In addition, RSGs can be very faint in the optical right before explosion due to the cooling of their surfaces and an increase of the molecular opacity <cit.>.
<cit.> modeled precursor outbursts by injecting energy into the base of the RSGs' hydrogen envelopes, and explored the corresponding observational light curves. They found that these outbursts can last for hundreds of days, with a peak brightness of ∼-8.5 to -10 mag in the R band, depending on the amount of energy injected. These kinds of precursors are usually too faint to be detected by most ongoing wide-field surveys. However, if a Type IIP/L SN explodes in a very nearby galaxy, its precursor activity can be used as an early warning of the explosion.
In this paper, we present optical pre-explosion monitoring data at the position of SN 2023ixf, a type II SN that exploded in the very nearby galaxy M101 (also known as the Pinwheel Galaxy). The SN displayed strong flash features indicative of dense, confined CSM around the progenitor star <cit.>. Given the proximity of SN 2023ixf and the wealth of available pre-explosion data, it provides an excellent opportunity to link the signatures of CSM in the SN data to one or more pre-explosion events. Pre-explosion photometry was gathered from several time domain programs: the Distance Less Than 40 Mpc <cit.> survey, the Zwicky Transient Facility <cit.>, and the Asteroid Terrestrial-Impact Last Alert System (ATLAS). The pre-explosion observations span about 3, 5 and 6 years prior to the explosion of SN 2023ixf for DLT40, ZTF and ATLAS, respectively. The high-cadence observations enable us to put strong constraints on any precursor outbursts or other activities.
The pre-explosion observations at the position of SN 2023ixf, and associated photometric limits, are described in Section <ref>. We use these photometric limits from multiple surveys to constrain the duration and brightness of any pre-explosion outbursts in Section <ref>, using both toy-model outbursts and those derived from hydrodynamic models. We also discuss our outburst constraints in the context of other evidence for dense, confined CSM in SN 2023ixf and other normal core collapse SNe.
Finally, we present our conclusions in Section <ref>.
§ DATA SET
SN 2023ixf was discovered on 2023 May 19 in the Pinwheel Galaxy <cit.> and was classified as a Type II SN <cit.>. The distance to SN 2023ixf is only 6.85 Mpc (μ = 29.18 mag) <cit.>, providing an unique opportunity to study a Type II SN in great detail. Following <cit.>, we adopted a Milky Way extinction of E(B-V)=0.0077 mag <cit.> and a host extinction of E(B-V) = 0.031 mag <cit.>, as well as R_V=3.1.
In this section, we present the pre-explosion data of SN 2023ixf taken by DLT40, ZTF and ATLAS. We also examined the pre-explosion data taken by the All-Sky Automated Survey for Supernovae (ASAS-SN, ). However, since the survey
is ∼2 mag shallower than the other surveys considered, we did not include the ASAS-SN data in our analysis.
§.§ DLT40 observations
The DLT40 survey is a sub-day-cadence SN search <cit.>, targeting prominent galaxies within 40 Mpc with the aim of finding about 10 very young and nearby SNe per year. DLT40 has been monitoring M101 since 2020 using the PROMPT-USask 0.4m telescope at Sleaford Observatory, Canada. These observations resulted in 264 frames taken in the Clear band, with an average time between two adjacent images of ∼3.7 days.
Each image has an exposure time of 45s and a field of view of 10'×10'.
Before doing any analysis, all the available images were visually inspected, and those of bad quality were removed from the sample.
A deep template was made using SWarp <cit.> with images taken between 2020-05-12 and 2020-08-31. The rest of the images are stacked in windows of 10 days, and image subtraction against the template was done using HOTPANTS <cit.>.
Aperture photometry was done on difference images to search for any precursor emission at the position of SN 2023ixf.
For aperture photometry, we adopt an aperture 2 times the FWHM of the image, a signal-to-noise threshold of 3 for source detections and a signal-to-noise threshold of 5 for computing upper limits, following <cit.>.
The final Clear-band aperture photometry was performed in a Python-based pipeline and was calibrated to the r-band using the APASS catalog.
This process resulted in a median limiting magnitude of r∼-10.6 mag.
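For illustration, a minimal sketch of the forced aperture photometry step described above is given below (our reconstruction rather than the actual DLT40 pipeline; the function name is hypothetical and a per-pixel uncertainty image is assumed to be available):

from photutils.aperture import CircularAperture, aperture_photometry

def forced_photometry(diff_image, err_image, x, y, fwhm):
    # forced photometry at a fixed position on the difference image:
    # aperture radius of 2 x FWHM, S/N >= 3 counted as a detection,
    # otherwise a 5-sigma upper limit is reported
    aperture = CircularAperture((x, y), r=2.0 * fwhm)
    phot = aperture_photometry(diff_image, aperture, error=err_image)
    flux = float(phot['aperture_sum'][0])
    flux_err = float(phot['aperture_sum_err'][0])
    if flux / flux_err >= 3.0:
        return {'detected': True, 'flux': flux, 'flux_err': flux_err}
    return {'detected': False, 'limit_flux': 5.0 * flux_err}

The same signal-to-noise conventions are applied to the stacked ZTF and ATLAS fluxes described below.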
§.§ ZTF observations
ZTF is a time-domain survey using the Palomar 48-inch Oschin telescope at Palomar Observatory <cit.>. ZTF observes the whole visible sky from Palomar in the g and r filters every two to three nights, and although there is both a public and private portion of the survey, both components are released at regular intervals.
The position of SN 2023ixf had been observed by ZTF for over 5 years before the SN explosion. There are 1092, 1152, and 345 frames taken in the g, r, and i filters, respectively. The average time between two adjacent images is ∼1.7 days for the g band, ∼1.6 days for the r band, and ∼3.6 days for the i band.
We obtained forced photometry from the template-subtracted images using the ZTF Forced Photometry Service <cit.>. Following <cit.>, we adopted a signal-to-noise threshold of 3 for the source detection and a signal-to-noise of 5 for computing the upper limit. Bad-quality data were removed following the description in <cit.>. We also removed epochs that have status code 56 to avoid the impact of bad or blank pixels. The single-epoch flux measurements were combined in 10-day time bins following the method described by <cit.>. The median limiting magnitudes are ∼-7.9 mag in g band, ∼-7.8 mag in r band, ∼-8.5 mag in i band.
§.§ ATLAS observations
ATLAS is an all-sky daily-cadence survey, using two filters, orange (o) and cyan (c), similar to Pan-STARRS filters r+i and g+r, respectively. For over six years prior to the SN explosion, ATLAS had collected 1787 images in the o band and 475 images in the c band. The average time between two adjacent images is ∼1.3 days for the o band and ∼5.0 days for the c band.
We obtained forced photometry at the supernova position from the ATLAS forced photometry server <cit.>. The single-epoch flux measurements have been stacked in 10-day bins following <cit.> to reach a deeper limit.
The median depths we can reach are ∼-9.2 mag in o band and ∼-8.9 mag in c band.
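For completeness, the 10-day binning used for the ZTF and ATLAS fluxes above is assumed here to be the usual inverse-variance weighted mean (a sketch of the standard formula, not necessarily the exact recipe of the cited works):
F̄ = ( Σ_i F_i/σ_i^2 ) / ( Σ_i 1/σ_i^2 ) , σ_F̄ = ( Σ_i 1/σ_i^2 )^-1/2 ,
with the binned signal-to-noise ratio F̄/σ_F̄ then compared against the detection (S/N ≥ 3) and upper-limit (5σ) thresholds quoted above.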
§.§ Spurious detections
All the stacked measurements and limits are shown in Figure <ref>, with a zoom-in around the time of SN 2023ixf's explosion in Figure <ref>.
There are a handful of epochs, both in ZTF and ATLAS, which have reported fluxes larger than 3σ and thus are marked as detections (Figure <ref>). We have listed the details of these epochs in Table <ref>. In all cases, the signal-to-noise ratios of these observations are slightly higher than 3 but smaller than 4.
In addition, none of these pre-explosion detections are consecutive in time; they are bracketed by non-detections of similar depth. For this reason, they are likely not true detections of precursor variability.
Given the hundreds of epochs examined, it is expected that some detections at this level would occur, even if they do not indicate true pre-explosion variability.
Assuming the noise is Gaussian, the expected number of such data points would be about 1 each for ZTF and ATLAS. Additionally, the spurious detections could potentially be due to image reduction issues and unsatisfactory weather conditions.
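As a rough check of this expectation (the bin counts are approximate), the one-sided probability that Gaussian noise produces a stacked flux above 3σ is P(S/N>3) ≈ 1.35×10^-3, and each survey contributes a few hundred independent 10-day bins over the monitoring period, so
N_spurious ≈ 400 × 1.35×10^-3 ≈ 0.5 ,
i.e. of order one marginal ≳3σ point per survey is expected even in the complete absence of real variability.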
To further examine the reliability of the detections, we chose 12 positions around the SN position, separated by ∼4 to 10 pixels, corresponding to ∼4 to 10 arcsecs (illustrated in Figure <ref>). We then performed forced photometry on these sample positions in an identical manner as we did at the SN position. By itself, this grid of positions around the SN is unrelated to any transient, but with their close proximity to SN 2023ixf, we can use the sample positions to gauge the rate of random and low signal-to-noise detections in the data. We can also analyze detections that are at similar epochs but different spatial positions (perhaps indicating larger scale image artifacts in data from that time period).
For the ZTF data, we found that there are also several low significance detections in the r and i bands, in positions 3 and 5 in particular, at similar phases to our nominal detections at the SN position. Likewise for the ATLAS data, we found that there are detections in the o and c bands at similar phases for most of the grid sample positions. Given all of the above, we treat the low-significance detections at the SN position as spurious and do not include them in our analysis.
§ DISCUSSION
§.§ Constraints on the Precursor Activity
The combined DLT40, ZTF and ATLAS data provide an opportunity to put a strong limit on the brightness and duration of any possible precursor activity in SN 2023ixf. In this section, we discuss the constraint we put on a toy outburst model and the hydrodynamic precursor models of <cit.>.
§.§.§ Toy precursor model
We consider a toy burst model with constant brightness and finite duration. Examples of the toy model are shown in Figure <ref>.
For each brightness and duration, we simulated 5,000 outburst light curves during the 5-year period prior to the SN explosion.
If at one epoch the simulated light curve is brighter than the limit, the data point at that epoch would be marked as a detection. If there are at least two such epochs within 30 days, we will consider the event to be a detected precursor outburst. We note that since we use a signal-to-noise ratio of 5 for calculating the non-detection limit, the `detections' in the experiment presented here would be much more significant than the spurious detections we discussed in section <ref>.
We calculate the detection rate f for each outburst. If f is high, then the probability that we missed a precursor outburst before the explosion of SN 2023ixf is low. Only the r-band light curve is used in this analysis. The upper panel of Figure <ref> shows f as a function of the r-band absolute magnitude (M_r) for various outburst durations (solid lines). Another set of simulations is done by fixing the brightness of the outburst and varying its duration. The result is also shown in solid lines in the bottom panel of Figure <ref>.
In order to take advantage of the high-cadence multiband light curves, we assume the precursor has a perfect blackbody spectrum with a constant temperature. We then calculate the magnitude for each filter based on the M_r. In the outburst precursor model presented by <cit.>, the temperature at the progenitor surface can increase by about 1200 K during the outburst. The stellar temperature of the progenitor of SN 2023ixf was estimated to be 3500^+800_-1400 K <cit.>. Therefore, we tentatively adopt a blackbody temperature of 4000 K here.
In the whole data set, if the simulated light curve is brighter than the limit at more than two data points within 30 days, we consider the outburst to be detected by the survey.
We calculate f using the same method as described above. The results are shown in Figure <ref> as dashed lines. We note that the blackbody temperature of the precursor in SN 2020tlf is around 5000 K <cit.>. As an additional test, we run the same simulation with a temperature of 5000 K and find that this only marginally changes the result.
As presented in Figure <ref>, for an outburst brighter than M_r = -9 mag and longer than 100 days, the detection rate of a precursor (f) is larger than ∼0.9. Such a precursor is very likely to have been detected. In addition, for an outburst brighter than M_r = -8 mag, f is larger than 0.9 for a precursor that lasts longer than 200 days. Therefore, we conclude that if there was a precursor for SN 2023ixf, the outburst must have been shorter than 100 days if it was brighter than about M_r=-9 mag, or shorter than 200 days if it was brighter than about M_r=-8 mag.
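The toy-model procedure just described can be summarized by the following sketch (our reconstruction of the procedure, not the authors' code; limit_jd and limit_mag are assumed to hold the epochs and absolute-magnitude limits of the stacked r-band non-detections):

import numpy as np

def detection_rate(limit_jd, limit_mag, M_r, duration_days,
                   window_days=5 * 365.25, n_sim=5000, seed=0):
    # Monte Carlo estimate of the detection rate f for a constant-brightness
    # toy outburst: the outburst counts as detected if it is brighter than
    # the survey limit at >= 2 epochs lying within 30 days of each other.
    rng = np.random.default_rng(seed)
    limit_jd = np.asarray(limit_jd, dtype=float)
    limit_mag = np.asarray(limit_mag, dtype=float)
    t0 = limit_jd.min()
    n_det = 0
    for _ in range(n_sim):
        start = t0 + rng.uniform(0.0, max(window_days - duration_days, 0.0))
        in_burst = (limit_jd >= start) & (limit_jd <= start + duration_days)
        # epochs at which the outburst (absolute magnitude M_r) beats the limit
        hits = np.sort(limit_jd[in_burst & (M_r < limit_mag)])
        if hits.size >= 2 and np.any(np.diff(hits) <= 30.0):
            n_det += 1
    return n_det / n_sim

Applied to the r-band limits shown in Figure <ref>, such a calculation reproduces the behaviour reported above, e.g. f ≳ 0.9 for outbursts brighter than M_r ≈ -9 mag lasting longer than ∼100 days.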
§.§.§ Hydrodynamical precursor models
<cit.> modeled precursor outbursts by injecting energy into the hydrogen-rich envelope of a 15 M_⊙ RSG progenitor. Two situations were considered in their models, where energy injection occurs once and twice. The models are differentiated by both the number of energy injections and their amounts. To match the passbands of our data set, we produce blackbody spectra based on the temperature and radius from their model results and apply appropriate filter transmission functions. Examples of a few models are shown in Figure <ref>.
The progenitor mass of SN 2023ixf is estimated to be 11 ± 2 M_⊙ by <cit.>, 9 to 14 M_⊙ by <cit.>, 17 ± 4 M_⊙ by <cit.>, and 20 ± 4 M_⊙ by <cit.>.
To take the uncertainty of the progenitor mass into account, we added an uncertainty to the brightness of each model.
<cit.> showed that, for different progenitor masses, the brightness of the precursor light curve
varies by less than a factor of 2. Therefore, we varied the precursor brightness using a Gaussian distribution centered on the model light curve with a standard deviation of 0.4 mag, which is roughly equivalent to a factor of 2 of luminosity change.
For each model, we simulated 10,000 light curves by varying the luminosity as described above. These light curves are then randomly distributed in the 5 years of the pre-explosion observations. The energy injection time is at least 200 days prior to the SN explosion, so that all the models can reach the (first) light curve peak before the time of explosion.
We calculated f using the same method as described in section <ref> and listed the results in Table <ref>.
For all the models, the probability that there was a precursor that was not detected is low. The model with the lowest detection rate (f) is the single-small model. This is because this model has the smallest total injected energy, and thus the light curve is faster and dimmer than other models.
The CSM mass that is ejected in the single-small model is 0.015 M_⊙, which is the lowest ejected mass among all the models. Given that this model has the lowest detection probability, we can use it to put an upper limit for the mass ejected during the possible pre-explosion outburst.
We conclude that the probability that we would not detect the single-small model precursor is 23 % within ∼5 years prior to the explosion of SN 2023ixf, while this probability is below about 10 % for all other models.
Therefore, the upper limit of mass ejection during the precursor outburst (if there was one) is around 0.015 M_⊙.
Table 1: Detection rate of several outburst models.

Model          f      Unbound CSM (M_⊙)
Single-large   1.0    1.2
Single-fid     0.97   0.35
Single-small   0.77   0.015
Double-large   1.0    3.6
Double-fid     0.96   1.3
Double-long    0.91   1.2

Note: The detection rate f of a precursor in our data set for different models from <cit.>. These models are differentiated by both the number of energy injections and their amounts.
§.§ Comparison with other precursor studies
Precursor emission has been observed in many Type IIn SNe. These objects likely have extended, dense CSM around their progenitors, which is driving their long-duration narrow-line emission, and which may have been produced by pre-explosion activity in the progenitor <cit.>. This pre-explosion emission could be powered by the interaction with the surrounding CSM or the continuum-driven wind, while the underlying triggering mechanism is still uncertain.
The precursor outbursts in SNe IIn usually have an absolute magnitude between -15 mag and -12 mag, which is much brighter than the limits in our observations (see Figures <ref> and <ref>).
From SN observations, a significant fraction of RSGs are believed to have dense and confined CSM prior to the explosion, which may be because they have experienced intense mass loss before they explode as Type II SNe <cit.>. However, after analyzing the pre-explosion progenitors of four Type IIP/L SNe using data from the Large Binocular Telescope, <cit.> found that these progenitors were quiescent and the probability that they had extended outbursts after oxygen ignition (around 5.4-2.6 years before the SN explosion) is low.
To date, precursor emission in a normal Type IIP SN has only been observed in SN 2020tlf <cit.>. Both spectroscopic and photometric observations suggest that the progenitor of SN 2020tlf had experienced enhanced mass loss prior to the explosion, and its precursor emission is likely due to the ejection of the outer layer of its progenitor star during final-stage nuclear burning <cit.>. The precursor emission in SN 2020tlf is around -11.5 mag in the r, i, and z bands over about 100 days before explosion, which is about 1 mag brighter than our current limit in the DLT40 Clear band and about 3 mag brighter than our limit in the ZTF g and r bands (see Figures <ref> and <ref>). Therefore, the type of precursor observed in SN 2020tlf can be excluded in SN 2023ixf.
Multiple flash-spectroscopy studies have found evidence of dense CSM around the progenitor of SN 2023ixf, which requires a mass loss rate of 10^-3-10^-2 M_⊙ yr^-1 <cit.>, comparable to or slightly lower than the mass loss rate estimated for SN 2020tlf (10^-2 M_⊙ yr^-1) <cit.>.
The lack of similar precursor activity in SN 2023ixf may suggest that there are various physical mechanisms for the formation of dense CSM around the progenitors of normal Type II SNe.
For instance, <cit.> proposed that the binary interaction in the final evolutionary stage of RSG stars could contribute to the dense CSM around the SN progenitor.
Recently, <cit.> found that the CSM around the progenitor of SN 2023ixf is likely asymmetric, which could be a consequence of binary interaction triggered by pre-SN inflation of the RSG during Ne or O burning. In such a binary scenario, eruptive mass loss from a single RSG may not be the driving force behind the dense CSM that we observe.
§.§ Quiescent progenitor of SN 2023ixf versus enhanced pre-SN mass loss
After the discovery of SN 2023ixf, many independent studies have suggested that there is dense and confined CSM around the progenitor, implying an enhanced mass loss prior to the explosion.
Recently, by analyzing the early flash spectroscopy of SN 2023ixf, <cit.> and <cit.> suggest that, to produce the dense CSM around the progenitor, the mass-loss rate of the progenitor of SN 2023ixf should be around 10^-3-10^-2 M_⊙ yr^-1. Based on the hard X-ray observations, <cit.> also found evidence of dense pre-existing CSM, which requires a mass loss rate of 3×10^-4 M_⊙ yr^-1 before the explosion. In addition, <cit.> analyzed the near- and mid-infrared pre-SN imaging of SN 2023ixf and found a lower mass-loss rate of 10^-5-10^-3 M_⊙ yr^-1, but it is still higher than the mass-loss rate of typical RSGs in the same luminosity range. They also found that there was no evidence of infrared precursor outbursts up to ∼10 days before the explosion.
Furthermore, <cit.> found that the very early light curve evolution of SN 2023ixf is inconsistent with shock cooling models, which could be explained by the interaction of dense pre-existing CSM with the SN ejecta, and thus implies an enhanced mass loss before the SN explosion.
However, they also suggested that the unusual light curve behaviour could be due to a pre-explosion eruption around one day before the explosion or even extended duration emission from the shock breakout.
<cit.> examined imaging from the Large Binocular Telescope ranging from 5600 days to 400 days before the explosion of SN 2023ixf, and they found no progenitor variability in the R band at the level of 10^3L_⊙ up to 400 days before the explosion. Due to the sparse coverage of their data, they could not directly exclude short-lived outbursts. However, they argue that short outbursts would still have had a long-lived effect on the dust optical depth, leading to an increase of progenitor luminosity for decades, which they would have observed.
Our data set has a higher cadence up to about 10 days before the SN explosion, but we still found no signs of strong pre-SN activity from the progenitor.
The enhanced pre-SN mass-loss rate of SN 2023ixf derived from the flash spectroscopy and other studies seems in tension with the lack of any precursor emission in SN 2023ixf.
It is possible that the progenitor star had a relatively faint outburst on a time-scale of days to months. The probability that we would have detected this kind of outburst is low. <cit.> suggested that the precursor in SNe II-P is likely in the form of abrupt outbursts, in which the progenitor would only be optically bright for a few days before becoming fainter and redder than normal RSGs and ejecting a significant amount of mass into the surrounding space.
The detection of such an outburst would require higher-cadence observations prior to the time of explosion.
In addition, the flash spectroscopy may not necessarily imply enhanced mass loss. <cit.> suggested that, in a binary system, the shocked boundary layer produced by the collision of winds from two stars generates a high-density CSM around the progenitor, which produces the flash spectroscopy observed in SNe II.
If the progenitor of SN 2023ixf was in such binary configurations, then enhanced mass loss or an outburst prior to explosion would not be required.
§ CONCLUSIONS
We used 5 years of pre-explosion data from DLT40, ZTF, and ATLAS to constrain pre-explosion activity in the progenitor of SN 2023ixf. By comparing the data with a toy precursor model, we found that if there was any precursor activity, an outburst with a typical brightness of M_r≃ -9 must have had a duration shorter than 100 days, and an outburst with M_r≃ -8 must have had a duration shorter than 200 days.
We also found that the probability that there was a precursor outburst similar to the models of <cit.> is low, and therefore that the ejected mass prior to the explosion is likely less than 0.015 M_⊙.
The precursor activity like the outburst observed in SN 2020tlf can be excluded in SN 2023ixf.
The enhanced mass loss inferred from the early flash spectroscopy and other studies of SN 2023ixf is in some tension with the non-detection of any precursor outbursts. It is possible that there was a faint precursor within five years of the SN explosion that occurred on a time-scale of days to months. Such an outburst would likely not be detected by our current data set.
Alternatively, the dense, confined CSM may not be due to the enhanced mass loss from a single RSG progenitor. The dense CSM could have, for instance, originated from the interaction of stellar winds of two stars in a binary system.
In summary, it is likely there are various physical mechanisms for the formation of the dense CSM around the progenitors of normal Type II SNe.
In the near future, with the help of the Legacy Survey of Space and Time (LSST) survey, we will be able to put strong constraints on the precursor activities for a sample of Type IIP/L SNe, which will help us better understand the origin of the dense, confined CSM and the very last stages of RSG stellar evolution.
§ ACKNOWLEDGEMENTS
We would like to thank Daichi Tsuna for providing the precursor light curve models.
Research by Y.D., S.V., N.M.R., and E.H. is supported by NSF grant AST-2008108.
Time domain research by D.J.S. is also supported by NSF grants AST-1821987, 1813466, 1908972, & 2108032, and by the Heising-Simons Foundation under grant #2020-1864.
This publication was made possible through the support of an LSSTC Catalyst Fellowship to K.A.B., funded through Grant 62192 from the John Templeton Foundation to LSST Corporation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of LSSTC or the John Templeton Foundation.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-2034437 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, and IN2P3, France. Operations are conducted by COO, IPAC, and UW. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant
#12540303 (PI: Graham).
This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources <cit.>.
Prompt-USASK, ZTF, ATLAS
Astropy <cit.>,
HOTPANTS <cit.>,
Matplotlib <cit.>,
NumPy (https://numpy.org),
PYRAF,
Pandas <cit.>,
SciPy (https://www.scipy.org),
SWarp <cit.>,
Photutils <cit.>,
the ZTF Forced Photometry Service <cit.>,
the ATLAS forced photometry server <cit.>
§ PARAMETERS OF SPURIOUS PRE-EXPLOSION DETECTIONS
Table <ref> presents the epochs that have signal-to-noise ratios larger than 3. All of these detections are below 4σ and are not consecutive in time, so they are likely false detections (see discussion in Section <ref>).
Table A1: Possible pre-explosion detections.

Epoch (days)  Filter  JD         Flux (μJy)  Flux Error (μJy)  S/N   N_frame  Source
-2237.7       o       2457845.5  29.88       8.66              3.45  25       ATLAS
-2217.2       o       2457866.0  34.88       10.48             3.33  25       ATLAS
-1825.7       r       2458257.5  2.80        0.84              3.33  16       ZTF
-1232.1       o       2458851.1  19.5        6.0               3.25  4        ATLAS
-1210.2       i       2458873.1  3.55        1.01              3.52  11       ZTF
-1180.9       r       2458902.4  2.20        0.56              3.90  28       ZTF
-1087.7       r       2458995.6  2.28        0.65              3.49  15       ZTF
-1058.7       r       2459024.5  2.31        0.68              3.41  20       ZTF
-679.8        r       2459403.5  3.42        1.11              3.08  8        ZTF

⋆ Epoch is measured with respect to the explosion time <cit.>. N_frame refers to the number of individual image measurements that were combined for a given epoch, within the 10-day bin used throughout this work.
|
http://arxiv.org/abs/2307.00211v2
|
20230701033031
|
AIGCIQA2023: A Large-scale Image Quality Assessment Database for AI Generated Images: from the Perspectives of Quality, Authenticity and Correspondence
|
[
"Jiarui Wang",
"Huiyu Duan",
"Jing Liu",
"Shi Chen",
"Xiongkuo Min",
"Guangtao Zhai"
] |
cs.CV
|
[
"cs.CV",
"eess.IV"
] |
Shanghai Jiao Tong University, Shanghai, China
{wangjiarui,huiyuduan,minxiongkuo,zhaiguangtao}@sjtu.edu.cn
Tianjin University, Tianjin, China
Shanghai Second Polytechnic University, Shanghai, China
AIGCIQA2023: A Large-scale Image Quality Assessment Database for AI Generated Images: from the Perspectives of Quality, Authenticity
and Correspondence
Jiarui Wang^1, Huiyu Duan^1, Jing Liu^2, Shi Chen^3,
Xiongkuo Min^1*, and Guangtao Zhai^1* (*Corresponding Authors)
==========================================================================================================================================================
Recent years have witnessed a rapid growth of Artificial Intelligence Generated Content (AIGC), among which with the development of text-to-image techniques, AI-based image generation has been applied to various fields.
However, AI Generated Images (AIGIs) may have some unique distortions compared to natural images, thus many generated images are not qualified for real-world applications.
Consequently, it is important and significant to study subjective and objective Image Quality Assessment (IQA) methodologies for AIGIs.
In this paper, in order to get a better understanding of the human visual preferences for AIGIs, a large-scale IQA database for AIGC is established, which is named AIGCIQA2023. We first generate over 2000 images based on 6 state-of-the-art text-to-image generation models using 100 prompts.
Based on these images, a well-organized subjective experiment is conducted to assess the human visual preferences for each image from three perspectives including quality, authenticity and correspondence.
Finally, based on this large-scale database, we conduct a benchmark experiment to evaluate the performance of several state-of-the-art IQA metrics. The AIGCIQA2023 database and benchmark will be released at <https://github.com/wangjiarui153/AIGCIQA2023> to facilitate future research.
§ INTRODUCTION
Artificial Intelligence Generated Content (AIGC) refers to the content, including texts, images, audios, or videos, etc., that is created or generated with the assistance of AI technology.
Many impressive AIGC models have been developed in recent years, such as ChatGPT
and DALLE<cit.>, which have been utilized in various application scenarios.
As an important part of AIGC, AI Generated Images (AIGIs) have also gained significant attention in recent years due to advancement in generative models including Generative Adversarial Network (GAN) <cit.>, Variational Autoencoder (VAE) <cit.>, diffusion models <cit.>, etc., and language-image pre-training techniques including CLIP<cit.>, BLIP<cit.>, etc.
However, the development of AIGI models also raises new problems and challenges.
One significant challenge is that not all generated images are qualified for real-world applications, which often require to be processed, adjusted, refined or filtered out before being applied to practical scenes.
However, unlike common image content, such as Natural Scene Images (NSIs)<cit.>, screen content images<cit.>, graphic images<cit.>, etc., which generally encounters some common distortions including noise, blur, compression, etc. <cit.>, AIGIs may suffer from some unique degradations such as unreal structures, unreasonable combinations, etc. Moreover, the generated images may not correspond to the semantics of the text prompts <cit.>.
Therefore, it is important to study the human visual preferences for AIGIs and design corresponding objective Image Quality Assessment (IQA) metrics for these images.
Many subjective IQA studies have been conducted for human captured or created images, and many objective IQA models have also been developed.
However, these models are designed for assessing low-level distortions, while AIGIs generally contain both low-level artifacts and high-level semantic degradations.
Some quantitative evaluation metrics such as Inception Score (IS)<cit.> and Fréchet Inception Distance (FID)<cit.> have been proposed to assess the performance of generative models and have been widely used to evaluate the authenticity of the generated images.
However, these methods cannot evaluate the authenticity of a single generated image, and cannot measure the correspondence between the generated images and the text-prompts.
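For reference, FID compares feature statistics of two image sets rather than scoring individual images (the standard formula is quoted here only to make this point explicit):
FID = ||μ_r-μ_g||_2^2 + Tr( Σ_r+Σ_g-2(Σ_rΣ_g)^1/2 ) ,
where (μ_r, Σ_r) and (μ_g, Σ_g) are the mean and covariance of the Inception features extracted from the real and generated image sets, respectively. Being defined on feature distributions, it can neither score a single AIGI nor account for the text prompt used to generate it.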
Since AIGIs are a new type of image content, previous IQA methods may fail to assess their quality and may not align well with human preferences due to their irregular distortions.
To gain a better understanding of human visual preferences for AIGIs and guide the design process of corresponding objective IQA models, in this paper, we conduct a comprehensive subjective and objective IQA study for AIGIs.
We first establish a large-scale IQA database for AIGIs termed AIGCIQA2023, which contains 2,400 diverse images generated by 6 state-of-the-art AIGI models based on 100 various text prompts.
Based on these images, a well-organized subjective experiment is conducted to assess the human visual preferences for each individual generated image from three perspectives including
quality, authenticity, and correspondence. Based on the constructed AIGCIQA2023 database, we evaluate the performance of several state-of-the-art IQA models and establish a new benchmark. Experimental results demonstrate that current IQA methods cannot well align with human visual preferences for AIGIs, and more efforts should be made in this research field in the future. The main contributions of this paper are summarized as follows:
* We propose to disentangle the human visual experience for AIGIs into three perspectives including quality, authenticity, and correspondence.
* Based on the above theory, we establish a novel large-scale database, i.e., AIGCIQA2023, to better understand the human visual preferences for AIGIs and guide the design of objective IQA models.
* We conduct a benchmark experiment to
evaluate the performance of several current state-of-the-art IQA algorithms in measuring the quality, authenticity, and text-image correspondence of AIGIs.
The rest of the paper is organized as follows. In Section 2 we introduce the details of our constructed AIGCIQA2023 database, including the generation of AIGIs and the subjective quality assessment methodology and procedures.
In section 3 we present the benchmark experiment for current state-of-the-art IQA algorithms based on the established database.
Section 4 concludes the whole paper and we discuss possible future research that can be conducted with the database.
§ DATABASE CONSTRUCTION AND ANALYSIS
In order to get a better understanding of human visual preferences for AI-generated images based on text prompts, we construct a novel IQA database for AIGIs, termed AIGCIQA2023, which is a collection of generated images derived from six state-of-the-art deep generative models based on 100 text prompts, and corresponding subjective quality ratings from three different perspectives.
Then we further analyze the human visual preferences for AIGIs based on the constructed database.
§.§ AIGI Collection
We adopt six of the latest text-to-image generative models, i.e., Glide<cit.>, Lafite<cit.>, DALLE<cit.>, Stable Diffusion<cit.>, Unidiffuser<cit.>, and ControlNet<cit.>, to produce AIGIs using their open-source code and default weights.
To ensure content diversity and meet practical application requirements, we collect diverse texts from the PartiPrompts website <cit.> as prompts for AIGI generation.
The text prompts can be simple, allowing generative models to produce imaginative results.
They can also be complex, which raises the challenge for generative models.
We select 10 scene categories from the prompt set, and each scene contains 10 challenge categories.
Overall, we collect 100 text prompts (10 scene categories × 10 challenge categories) from PartiPrompts<cit.>.
The distribution of the selected scene and challenge categories is displayed in the pie charts in Fig. 1.
It can be observed that the dataset exhibits a high level of scene diversity, with the generated images covering a broad range of challenges.
We then perform text-to-image generation with these models and prompts. Specifically, for each prompt, we randomly generate 4 different images with each generative model. Therefore, the constructed AIGCIQA2023 database contains 2,400 AIGIs in total (4 images × 6 models × 100 prompts) corresponding to the 100 prompts.
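To make the generation procedure concrete, the following minimal sketch shows how four random images per prompt could be produced with one of the six adopted models, Stable Diffusion, through the open-source diffusers library; the checkpoint name, prompts and file layout here are illustrative assumptions and not the exact scripts used to build AIGCIQA2023 (the other five models are run analogously from their own repositories with default weights).

# Minimal sketch (assumed setup): 4 random samples per prompt with Stable Diffusion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = ["a corgi", "a girl"]  # the database uses 100 PartiPrompts texts
for p_idx, prompt in enumerate(prompts):
    images = pipe(prompt, num_images_per_prompt=4).images  # 4 different images per prompt
    for i_idx, img in enumerate(images):
        img.save(f"sd_prompt{p_idx:03d}_img{i_idx}.png")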
§.§ Subjective Experiment Setup
Subjective IQA is the most reliable way to evaluate the visual quality of digital images perceived by the users.
It is generally used to construct image quality datasets and serves as the ground truth for optimizing or evaluating the performance of objective quality assessment metrics.
Because AIGIs are not natural images and different text prompts correspond to different target image spaces, a single “quality” score is insufficient to represent human visual preferences.
In this paper, we propose to measure the human visual preferences of AIGIs from three perspectives including quality, authenticity, and text-image correspondence.
For an image, these three visual perception perspectives are related but different.
The first dimension of AIGI evaluation is “quality” evaluation, i.e., evaluating an AIGI from its clarity, color, lightness, contrast, etc., which is similar to the assessment of NSIs.
During the experiment, subjects are instructed to evaluate whether the image outline is clear, whether the content can be distinguished, and whether the details are rich.
Fig.3 (a) shows 10 high quality examples and 10 low quality examples of the images generated by the prompt of “a corgi”.
Considering the generation nature of AIGIs, an important problem of these images is that they may not look real compared to NSIs.
Therefore, we introduce a second dimension of evaluation metrics for the generated images, i.e., “authenticity” evaluation.
For this dimension, subjects are instructed to assess the image from the authenticity aspect, i.e., whether it looks real or whether they can distinguish that the image is AI-generated or not.
Fig.3 (b) shows 10 high authenticity and 10 low authenticity examples of images generated by the prompt of “a girl”.
Since an AIGI is generated from a text, it is also important to evaluate its correspondence with the original prompt, i.e., the third dimension, text-image “correspondence”.
For this purpose, subjects are instructed to consider textual information provided with the image and then give the correspondence score from 0 to 5 to assess the relevance between the generated image and its prompt.
Fig.3 (c) shows 10 high text-image correspondence and 10 low correspondence examples of images generated by the prompt of “a grandmother reading a book to her grandson and granddaughter”.
§.§ Subjective Experiment Procedure
To evaluate the quality of the images in the AIGCIQA2023 and obtain Mean Opinion Scores (MOSs), a subjective experiment is conducted following the guidelines of ITU-R BT.500-14 <cit.>.
The subjects are asked to rate their visual preference degree of exhibited AIGIs from the quality, authenticity and text-image correspondence.
The AIGIs are presented in a random order on an iMac monitor with a resolution of up to 4096 × 2304, using an interface designed with Python Tkinter, as shown in Fig.4. The interface allows viewers to browse the previous and next AIGIs and rate them using a quality scale that ranges from 0 to 5, with a minimum interval of 0.01. A total of 28 graduate students (14 males and 14 females) participate in the experiment, and they are seated at a distance of around 60 cm in a laboratory environment with normal indoor lighting.
§.§ Subjective Data Processing
We follow the suggestions recommended by ITU to conduct the outlier detection and subject rejection.
The score rejection rate is 2%.
In order to obtain the MOS for an AIGI, we first convert the raw ratings into Z-scores, then linearly scale them to the range [0,100] as follows:
z_ij=(r_ij-μ_i)/σ_i, z_ij'=100(z_ij+3)/6,
μ_i=1/N_i∑_j=1^N_ir_ij, σ_i=√(1/(N_i-1)∑_j=1^N_i(r_ij-μ_i)^2),
where r_ij is the raw rating given by the i-th subject to the j-th image, and N_i is the number of images judged by subject i.
Next, the mean opinion score (MOS) of the image j is computed by averaging the rescaled z-scores as follows:
MOS_j=1/M∑_i=1^Mz_ij'
where MOS_j indicates the MOS for the j-th AIGI, M is the number of valid subjects, and z_ij' are the rescaled Z-scores.
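For concreteness, the conversion from raw ratings to MOSs described above can be sketched in a few lines of numpy; the subjects × images array layout and the toy data are assumptions for illustration only.

import numpy as np

def compute_mos(raw):
    """raw: (M subjects, N images) array of ratings in [0, 5]; returns one MOS per image."""
    mu = raw.mean(axis=1, keepdims=True)            # per-subject mean rating
    sigma = raw.std(axis=1, ddof=1, keepdims=True)  # per-subject standard deviation (N_i - 1)
    z = (raw - mu) / sigma                          # Z-scores
    z_rescaled = 100.0 * (z + 3.0) / 6.0            # linear rescaling to [0, 100]
    return z_rescaled.mean(axis=0)                  # average over (valid) subjects

ratings = np.random.uniform(0.0, 5.0, size=(28, 2400))  # toy data: 28 subjects, 2400 AIGIs
mos = compute_mos(ratings)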
§.§ AIGI Analysis from Three Perspectives
To further illustrate the differences among the three perspectives, we show several example images and their corresponding subjective ratings from the three aspects in Fig. 5.
In each subfigure, it can be noticed that the right AIGI outperforms the left AIGI on two evaluation dimensions but is much worse than the left AIGI on the remaining dimension, which demonstrates that each evaluation dimension (quality, authenticity, or text-image correspondence) captures a distinct aspect of human preference and has its own value.
Fig. 6 shows the MOS and score distributions for quality evaluation, authenticity evaluation, and text-image correspondence evaluation, respectively, which demonstrate that the images in AIGCIQA2023 cover a wide range of perceptual quality.
§ EXPERIMENT
§.§ Benchmark Models
Since the AIGIs in the proposed AIGCIQA2023 database are generated based on text prompts and have no pristine reference images, they can only be evaluated by no-reference (NR) IQA metrics.
In this paper, we select fifteen state-of-the-art IQA models for comparison. The selected models can be classified into two groups:
* Handcrafted-based models, including: NIQE<cit.>, BMPRI<cit.>, BPRI<cit.>, BRISQUE<cit.>, HOSA<cit.>, BPRI-LSSn<cit.>, BPRI-LSSs<cit.>, BPRI-PSS<cit.>, QAC<cit.>, HIGRADE-1 and HIGRADE-2<cit.>.
These models extract handcrafted features based on prior knowledge about image quality.
* Deep learning-based models, including: CNNIQA<cit.>, WaDIQaM-NR<cit.>, VGG (VGG-16 and VGG-19)<cit.> and ResNet (ResNet-18 and ResNet-34)<cit.>.
These models characterize quality-aware information by training deep neural networks from labeled data.
§.§ Evaluation Criteria
In this study, we utilize four performance evaluation criteria to measure
the consistency between the predicted scores and the corresponding ground-truth MOSs: Spearman Rank-order Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), Kendall's Rank Correlation Coefficient (KRCC), and Root Mean Squared Error (RMSE).
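A minimal scipy-based sketch of the four criteria is given below; the logistic mapping that is sometimes applied to the predictions before computing PLCC and RMSE is omitted here for brevity.

import numpy as np
from scipy.stats import spearmanr, pearsonr, kendalltau

def iqa_criteria(pred, mos):
    """Consistency between predicted scores and ground-truth MOSs."""
    pred, mos = np.asarray(pred, dtype=float), np.asarray(mos, dtype=float)
    srcc, _ = spearmanr(pred, mos)
    plcc, _ = pearsonr(pred, mos)
    krcc, _ = kendalltau(pred, mos)
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    return {"SRCC": srcc, "PLCC": plcc, "KRCC": krcc, "RMSE": rmse}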
§.§ Experimental Setup
All the benchmark models are validated on the proposed AIGCIQA2023 database.
Traditional handcrafted-based models are directly evaluated on the whole database.
For deep trainable models, we first randomly split the database into training and testing sets at a 4:1 ratio, ensuring that images generated from the same prompt fall into the same set.
The partitioning and evaluation process is repeated several times for a fair comparison while keeping the computational cost reasonable, and the average result is reported as the final performance.
For the deep learning-based models, we apply CNNIQA<cit.>, WaDIQaM-NR<cit.>, VGG (VGG-16 and VGG-19)<cit.> and ResNet (ResNet-18 and ResNet-34)<cit.> to predict the MOSs.
The split-and-evaluate procedure is repeated 10 times; each model is trained for 50 epochs with an initial learning rate of 0.0001 and a batch size of 4.
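The prompt-aware 4:1 split can be sketched with a grouped splitter so that all images generated from the same prompt land on the same side; the image ordering and variable names below are illustrative assumptions.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

image_ids = np.arange(2400)                  # one entry per AIGI
prompt_ids = np.repeat(np.arange(100), 24)   # assumes images are ordered by prompt (6 models x 4 images)

splitter = GroupShuffleSplit(n_splits=10, train_size=0.8, random_state=0)
for train_idx, test_idx in splitter.split(image_ids, groups=prompt_ids):
    # train the deep IQA model here (50 epochs, lr 1e-4, batch size 4), then test on test_idx
    assert set(prompt_ids[train_idx]).isdisjoint(prompt_ids[test_idx])  # no prompt leakage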
§.§ Performance Discussion
The performance results of the state-of-the-art IQA models mentioned above on the proposed AIGCIQA2023 database are exhibited in Table 1, from which we can make several conclusions:
* The handcrafted-based methods achieve poor performance on the whole database, which indicates that the extracted handcrafted features are not effective for modeling the quality representation of AIGIs. This is because most of the handcrafted features employed by
these methods rely on prior knowledge learned from NSIs, which does not transfer well to AIGIs.
* The deep learning-based methods achieve relatively competitive performance on the three evaluation perspectives. However, they are still far from satisfactory.
* Most of the IQA models achieve better performance on quality evaluation and worse performance on text-image correspondence assessment.
The reason is that the text prompts for image generation are not utilized for the IQA model training.
This makes it more challenging for the IQA models to extract relation features from AIGIs, which inevitably leads to performance drops.
§ CONCLUSION AND FUTURE WORK
In this paper, we study the human visual preference problem for AIGIs.
We first construct a new IQA database for AIGIs, termed AIGCIQA2023, which includes 2,400 AIGIs generated from 100 diverse text prompts, together with the corresponding subjective MOSs evaluated from three perspectives (i.e., quality, authenticity, and text-image correspondence).
Experimental analysis demonstrates that these three dimensions reflect different aspects of human visual preferences for AIGIs, which further indicates that the Quality of Experience (QoE) of AIGIs should be evaluated from multiple dimensions.
Based on the constructed database, we evaluate the performance of several state-of-the-art IQA models and establish a new benchmark to facilitate future research.
In future work, we will further explore the human visual perception for AIGIs and develop corresponding objective evaluation models for better assessing the quality of AIGIs from the three perspectives proposed in this paper.
|
http://arxiv.org/abs/2307.02364v1
|
20230705152525
|
High-rate quantum key distribution exceeding 110 Mb/s
|
[
"Wei Li",
"Likang Zhang",
"Hao Tan",
"Yichen Lu",
"Sheng-Kai Liao",
"Jia Huang",
"Hao Li",
"Zhen Wang",
"Hao-Kun Mao",
"Bingze Yan",
"Qiong Li",
"Yang Liu",
"Qiang Zhang",
"Cheng-Zhi Peng",
"Lixing You",
"Feihu Xu",
"Jian-Wei Pan"
] |
quant-ph
|
[
"quant-ph"
] |
High-rate quantum key distribution exceeding 110 Mb/s
Wei Li^1,2,∗, Likang Zhang^1,2,∗, Hao Tan^1,2, Yichen Lu^1,2, Sheng-Kai Liao^1,2,3, Jia Huang^4, Hao Li^4, Zhen Wang^4, Hao-Kun Mao^5, Bingze Yan^5, Qiong Li^5, Yang Liu^6, Qiang Zhang^1,2,3,6, Cheng-Zhi Peng^1,2,3, Lixing You^4, Feihu Xu^1,2,3,⋆, Jian-Wei Pan^1,2,3,⋆
August 1, 2023
===============================================================================================================================================================================================================================================================================
* Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
* Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
* Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
* State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
* School of Cyberspace Science, Faculty of Computing, Harbin Institute of Technology, Harbin 150080, China
* Jinan Institute of Quantum Technology, Jinan, Shandong 250101, China
^∗These authors contributed equally.
^⋆e-mails: [email protected]; [email protected]
Quantum key distribution (QKD) can provide fundamentally proven security for secure communication. Toward application, the secret key rate (SKR) is a key figure of merit for any QKD system. So far, the SKR has been limited to about a few megabit-per-second (Mb/s). Here we report a QKD system that is able to generate key at a record high SKR of 115.8 Mb/s over 10-km standard fibre, and to distribute key over up to 328 km of ultra-low-loss fibre. This attributes to a multi-pixel superconducting nanowire single-photon detector with ultrahigh counting rate, an integrated transmitter that can stably encode polarization states with low error, a fast post-processing algorithm for generating key in real time and the high system clock-rate operation. The results demonstrate the feasibility of practical high-rate QKD with photonic techniques, thus opening its possibility for widespread applications.
Introduction.
Quantum key distribution (QKD)<cit.> allows two remote parties to distill secret keys with information-theoretic security. During the past decades, QKD has drawn a lot of scientific attention<cit.>, as it not only provides quantum-secure cryptographic solutions but also offers insightful views into the bizarre quantum world. On the application front, increasing the secret key rate (SKR) of QKD is undoubtedly one of the most pressing tasks, as it not only allows more frequent key exchanges but can also provide services to a larger number of network users<cit.> or to high-data-rate applications such as critical infrastructure protection, sharing of medical data and distributed storage encryption<cit.>.
In the quest for high SKR, the system clock rate has been increased to gigahertz<cit.>. Per-channel-use rate has been improved via low-error rate<cit.> and high detection efficiency<cit.>; advanced protocols such as twin-field QKD<cit.> even hold the promise to beat the rate-loss law of repeaterless quantum communication. On the theory side, tight finite-key analysis<cit.> for acceptable accumulation times has been well studied. So far, 1 Mb/s SKR over 50-km fibre (10-dB loss) has been reported<cit.>. Recently, a real-time SKR of 13.7 Mb/s over 2-dB loss emulated channel (equivalent to 10-km fibre) has been achieved<cit.>. Nonetheless, these SKRs remain several orders of magnitude lower than current optical communication systems.
Our primary interest is on the standard decoy-state QKD<cit.> as it has been widely adopted in field implementations of metropolitan fibre links<cit.> and large-scale QKD networks<cit.>. Increasing the SKR faces several critical challenges including the transmitter, detection and post-processing. First, the high clock-rate implementation requires the stable and low-error modulation of the laser pulses and the encoding states, together with their driving electronics. Although gigahertz QKD systems are emerging in recent years, they often suffer from a large quantum bit error rate (QBER)<cit.>. Second, high SKR needs the photon detection with both high efficiency and high count rate<cit.>. The superconducting nanowire single-photon detector (SNSPD) presents high efficiency and low noise<cit.>, but it has a long recovery time, thereby limiting the absolute SKR<cit.>; the fast gated InGaAs SPD can detect high photon flux, but its efficiency is limited for practical use<cit.>. Finally, the post-processing speed is a limiting factor for real-time key generation<cit.>. The sifted keys should be reconciliated in a fast and efficient way, and the privacy amplification has to support large input block size and high compression ratio.
Here we address above challenges and report a polarization-encoding QKD system that is capable of generating a SKR of 115.8 Mb/s over 10-km standard fibre under the composable security against general attacks. For the source, we adopt an integrated modulator to realize the fast and stable modulations, the performance of which is optimized to produce an ultra-low QBER of 0.35%. For the detection, we introduce the implementation of multi-pixel SNSPDs<cit.> for both high-efficiency and high-rate photon detection. Our 8-pixel SNSPD has a maximum efficiency of 78% at 1550-nm wavelength, which can detect 552 million photons per second at an efficiency of 62%. For the classical post-processing, we adopt an enhanced Cascade reconciliation algorithm<cit.> and a hybrid hash-based privacy amplification algorithm<cit.> which achieves an average throughput of 344.3 Mb/s. Moreover, we develop high-speed electronics to operate the QKD system at 2.5-GHz clock rate, use an efficient polarization feedback control scheme, and adopt the protocol with 4-encoding states and 1-decoy state at the optimal SKR under the finite-key security<cit.>. All together, our QKD system enables a SKR enhancement about one order of magnitude over the previous BB84 record<cit.>. The system robustness and stability are verified by a 50-hour continuous operation. We also show the feasibility to generate secret keys at long distances up to 328-km of ultra-low-loss fibre using polarization states. This verifies a 60-dB dynamic range of photon counting rate between short and long distance, thus proving the practicality of our QKD system for general application scenarios.
Protocol.
In the BB84 protocol, the information is encoded in two conjugate bases: in the polarization dimension, these are the rectilinear basis (denoted as the Z basis) and the diagonal basis (denoted as the X basis). To increase the sifting efficiency, Alice and Bob preferentially choose an efficient version of the BB84 protocol with biased basis choice<cit.>. That is, the Z basis (P_Z > 0.5) is used to distill keys, while the bit information in the X basis is publicly announced to evaluate the information leakage to the eavesdropper. This means that the count rates of the Z and X bases are also asymmetric. After sifting of the keys, Alice and Bob reconcile their keys to the same bit stream and distill the final secure keys through privacy amplification. The decoy-state method is a standard approach to protect against the photon-number-splitting attack<cit.>. Among the variants of the decoy-state method<cit.>, we adopt the 1-decoy state protocol<cit.> for the following two reasons. First, a vacuum state is not needed, which puts a less stringent requirement on the intensity modulator. Second, within a realistic finite-key size (n_Z≤ 10^8) and when the QBER is low, the 1-decoy protocol gives higher SKRs at almost all distances (see Supplementary Section 5).
Following ref.<cit.>, the finite-key SKR (bits per second) under the composable security against general attacks can be bounded by:
K= [s_Z,0^l+s_Z,1^l(1-h(ϕ_Z^u))-f · n_Z · h(E_Z)-6 log_2(19/ϵ_sec)-log_2(2/ϵ_cor)] / t,
where t is the data accumulation time, s_Z, 1^l (s_Z, 0^l) is a lower bound on the number of single-photon contributions (vacuum-state contributions) in the sifted keys; ϕ_Z^u is an upper bound on the single-photon phase-error rate; h(·) is the binary entropy function; f is the efficiency of the error correction code; n_Z is the sifted key length in the Z basis; E_Z is the bit error rate in the Z basis; and ϵ_sec, ϵ_cor are the secrecy and correctness parameters respectively.
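A direct transcription of this bound is sketched below; the decoy-state estimates s_Z,0^l, s_Z,1^l and ϕ_Z^u are assumed to be supplied by the finite-key analysis described in the Methods, so the function only evaluates the key-rate formula.

import numpy as np

def h(x):
    """Binary entropy function, with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def secret_key_rate(s0_l, s1_l, phi_u, f, n_Z, E_Z, t, eps_sec=1e-10, eps_cor=1e-15):
    """Finite-key SKR (bit/s) of the 1-decoy protocol, as in the equation above."""
    key_length = (s0_l + s1_l * (1.0 - h(phi_u)) - f * n_Z * h(E_Z)
                  - 6.0 * np.log2(19.0 / eps_sec) - np.log2(2.0 / eps_cor))
    return max(key_length, 0.0) / t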
Setup.
To implement the protocol, we build a system as depicted in Fig. <ref>a. A distributed-feedback laser (signal laser, Gooch & Housego model AA0701) is gain-switched by a 120-ps pulse with carefully tuned pump intensity, which enables the generation of a 2.5-GHz phase-randomized pulse stream at 1550.12 nm. The waveforms of the driving signal and the output light pulse are shown in Supplementary Fig. 7. Due to the amplified spontaneous emission process, each new pulse has a random phase, thus satisfying the phase randomization assumption of the decoy-state protocol<cit.>. The light is coupled into a silicon photonic chip modulator through a one-dimensional grating coupler.
The chip modulator (Fig. <ref>b) modulates the intensity of decoy states via the intensity modulator, encodes the polarization states via the polarization modulator and attenuates the light to the single-photon level via the attenuator. The intensity modulator is realized by a Mach-Zehnder interferometer incorporating two types of phase modulators, i.e., thermo-optic modulator (TOM) and carrier-depletion modulator (CDM). The TOM is used for static phase bias while CDM is modulated dynamically. Due to the compact size and the precise temperature control, the intensity stability is within 0.1 dB even without the bias feedback throughout the experiments. The polarization modulator is structured by a Mach-Zehnder interferometer followed by a two-dimensional grating coupler. The use of CDM for high-visibility polarization modulation is challenging<cit.>, because the variation of carrier induces the change in the refractive index thus causing loss dependence and the depletion of the carrier can reach a saturation point. We counter the effect of phase-dependent loss by optimizing the bias of the TOM in the first stage of polarization modulator, and optimize the design of CDM to increase its modulation efficiency. By precisely controlling the parameters and developing a homemade field-programmable gate array for electronic control, we are able to modulate four BB84 states dynamically at an average polarization extinction ratio of 23.7 dB (see Supplementary Section 1). This corresponds to an intrinsic QBER of 0.4%.
To avoid the pulse width from broadening due to the frequency chirp produced by the gain-switched laser, a dispersion compensating module is inserted before the quantum channel, loss of which is included in the attenuation of Alice. The link between Alice and Bob is constituted by standard telecom fibre spools (G.652). The synchronization signal at ITU channel 44 (1542.12 nm) is pulsed at 3 ns with a repetition rate of 152.6 kHz and is multiplexed with the classical communication using a 100 GHz dense wavelength division multiplexer.
The detection setup passively selects the measurement basis with probability q_Z, which is tuned to the same value as the basis sending probability p_Z by a variable beam splitter. The electronic polarization controller (EPC) in front of the variable beam splitter is used to align the polarization bases between Alice and Bob, while the second EPC is used to rotate from the Z to the X basis. Polarization feedback control is crucial for a stable polarization-encoding QKD system<cit.>. We adopt the stochastic parallel gradient descent algorithm<cit.> for the polarization feedback control, where the QBERs in the Z and X bases are used as feedback signals for the driving voltages of the EPC (see Methods).
We implement multi-pixel SNSPDs for both high-efficiency and high-rate photon detection<cit.>. Four NbN SNSPDs are enclosed in a cryogenic chamber and are cooled to 2.2 K. Benefiting from the asymmetric basis configuration, two 8-pixel SNSPDs are used for the Z basis to accommodate the high photon rate, while two 1-pixel SNSPDs are used for the X basis for standard photon detection. The 8-pixel SNSPD has 8 interleaved nanowires covering a circular active area 15 μm in diameter, and the nanowires are 75 nm wide (linewidth) with a lateral period (pitch) of 180 nm (Fig. <ref>c). The linewidth and pitch are selected to ensure near-unity absorption and a considerable fabrication margin. To increase the yield and uniformity of the interleaved nanowires, an extended parallel nanowire structure outside the active area is designed to reduce the proximity effect in the electron-beam lithography process. The electrical signal of each pixel is amplified and read out independently. As shown in Fig. <ref>a, the total efficiency is 78% (78%) and the total dark count rate is 52 (31) count/s when the bias current is set at 9 (10) μA for detector D1 (D2). At this bias current, the full width at half maximum of the timing jitter is about 60 ps for a single pixel. Fig. <ref>b shows the performance at high photon flux. To characterize the equivalent dead time of the multi-pixel SNSPD, we fit the dependence of the count rate on the input photon flux. The fitted dead time is 0.7 ns, which is used in the key rate simulation and optimization. The sub-ns dead time gives the 8-pixel SNSPD a maximum count rate of 342 Mcount/s. In contrast, a detector with a dead time of 50 ns, a typical value for a single-pixel SNSPD<cit.>, would saturate when the count rate exceeds 20 Mcount/s.
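The dead-time fit mentioned above can be sketched as follows; a non-paralyzable detector model and synthetic data are assumed here purely for illustration and may differ from the actual fitting procedure.

import numpy as np
from scipy.optimize import curve_fit

def count_rate(photon_flux, eta, tau_d):
    """Non-paralyzable model: the registered rate saturates at 1/tau_d."""
    detected = eta * photon_flux
    return detected / (1.0 + detected * tau_d)

flux = np.logspace(5, 9, 20)                         # input photon flux (photons/s)
measured = count_rate(flux, eta=0.7, tau_d=0.7e-9)   # replace with the measured count rates
(eta_fit, tau_fit), _ = curve_fit(count_rate, flux, measured, p0=[0.5, 1e-9])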
The photon counting events are registered by a time-to-digital unit (Time Tagger Ultra from Swabian Instruments), which has a full width at half maximum timing jitter of 22 ps and a dead time of 2.1 ns channel-wise. We use the burst mode, which can register 512 million events continuously at a rate of 475 Mcount/s. To distill the final secure key, the post-processing is performed on two Intel Core i7-10700 platforms communicating with each other via Gigabit Ethernet. The classical communication channel consists of 50 km of fibre. Three steps are included in the post-processing: sifting, error reconciliation and privacy amplification. In particular, to realize high-speed post-processing, we design a high-performance implementation of Cascade reconciliation for error correction<cit.> involving two-way communication. Note, however, that a rigorous finite-key security analysis for the scenario of two-way error correction needs further study<cit.>. For a high-speed implementation of privacy amplification, we design a hybrid hash combining multilinear-modular hashing and modular arithmetic hashing<cit.> (see Methods).
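As a generic illustration of the privacy-amplification step only (plain Toeplitz hashing on toy key sizes, not the hybrid multilinear-modular hash actually implemented, which is designed for 10^8-bit blocks), the compression of a reconciled key by a random universal hash can be sketched as:

import numpy as np

def toeplitz_privacy_amplification(key_bits, out_len, seed=0):
    """Compress a reconciled bit string to out_len secret bits with a random Toeplitz matrix."""
    rng = np.random.default_rng(seed)
    n = len(key_bits)
    diags = rng.integers(0, 2, size=n + out_len - 1)                 # random bits defining the matrix
    idx = np.arange(out_len)[:, None] + np.arange(n)[::-1][None, :]
    T = diags[idx]                                                   # T[i, j] = diags[i - j + n - 1]
    return (T @ np.asarray(key_bits)) % 2                            # dense product: toy sizes only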
Results.
Using the described setup, we perform a series of laboratory experiments from short to long distance transmissions using both standard fibre and ultra-low-loss fibre spools. We adopt secrecy and correctness parameters as ϵ_sec = 10^-10, ϵ_cor = 10^-15. The simulation and experimental results for the finite block size n_Z=10^8 are plotted in Fig. <ref> (see Methods). The measured SKRs for 10-, 50- and 101-km standard fibre (loss of 2.2, 9.5, 19.6 dB) are 115.8±8.9, 22.2±0.8 and 2.6±0.2 Mb/s with QBERs of 0.61±0.10%, 0.35±0.05% and 0.56±0.11%. See Supplementary Section 7 for detailed results. To highlight the progress entailed by our results, we compare our SKR along with recent high-rate QKD experiments in Fig. <ref> and Tab. <ref>. Even considering high dimensional<cit.> and continuous variable<cit.> QKD (using the local-local-oscillator protocol<cit.>), our work represents the highest SKR among the reported QKD systems.
The increased QBER at the short distance of 10 km is mainly caused by the pile-up of SNSPD pulses at high photon flux. We characterize the skew induced by the pile-up for each channel and apply a timing correction to the detection events based on the skew (see Supplementary Section 3). The correction is first-order, meaning that only intervals between adjacent pulses are considered. After the correction, the QBER in the Z basis drops significantly from 7.01% to 0.83% in the back-to-back scenario. The modulation error of the transmitter contributes 0.4% to the QBER, while the rest is mainly contributed by false registrations of the detector at high photon rates.
At long distance, we demonstrate 233±112 bit/s SKR over 328-km ultra-low-loss fibre (55.1 dB channel loss) for a 29.5-hour run. The QBER increases to 2.8±0.4% which is mainly contributed by dark count noise (1.4%) and polarization misalignment. We credit our successful polarization distribution over such long fibre to an advanced polarization compensation technique which uses strong pulses as feedback signals for the control algorithm. This result also represents the longest distance of fibre channel in polarization-encoding QKD systems<cit.>. The security distance might be further extended by employing the filtering techniques to reduce the dark noise of SNSPD<cit.>.
Fig. <ref>a shows the result of the stability test over 50-km fibre for 50 hours. This confirms the system robustness for continuous operation. The small QBER spikes in the figure are mainly caused by room temperature variations (see Supplementary Fig. 5). These variations disturbed the polarization states transmitted in the fibre spool and were not fully compensated. To validate the post-processing speed, Fig. <ref>b shows the sifted and secret key rates for 3444 post-processed data blocks during a 5-hour run over the 10-km fibre channel. The rates are calculated block by block: each data point was obtained when n_Z accumulated to 10^8 bits. The processing speeds of error correction and privacy amplification are shown on the same plot. An average processing speed of 344 Mb/s is achieved with an average error correction efficiency f of 1.053. Besides, the frame error rate is 0.021%, which has been taken into account in the calculation of the efficiency f. Importantly, the post-processing speed of error correction and privacy amplification surpasses the average sifted key rate of 308.8 Mb/s, enabling high-speed secret key extraction.
Discussion.
In summary, we have reported a QKD system capable of delivering secret keys at rates exceeding 115 Mb/s. To do so, we have developed a high-speed and stable QKD system, an integrated transmitter for low-error modulation, multi-pixel SNSPDs for high-rate detection and fast post-processing algorithms. Further SKR increase is possible using wavelength or spatial multiplexing technologies<cit.>. We note the recent important progress on high-rate CV-QKD<cit.>, but the practical issues including the finite-key security proof against general attacks and the fast implementation of information reconciliation for discrete modulation CV-QKD remain to be resolved. Our implementation and security analysis do not consider the device imperfections. In practice, however, our system needs special care against the side-channel attacks<cit.>. For high-speed QKD, polarization-dependent loss and intensity correlations are other important features to be characterized (Supplementary Section 6).
To our knowledge, our experiment is the first to show the superior performance of multi-pixel SNSPDs with interleaved nanowires for high-speed QKD. Although the multi-pixel SNSPD requires cryogenic cooling, our setup can be readily adopted in backbone QKD links<cit.> so as to enhance the bandwidth and support more users. It is also suitable for an upstream quantum access network<cit.> where a large number of transmitters multiplex a single detector. Besides, the silicon integrated modulator used in our setup can benefit users in cost, size and stability<cit.>. Overall, the substantial increase of key rate demonstrated here could potentially open new opportunities in areas where data security is utmost important and bring QKD closer to widespread applications.
Acknowledgments
The authors would like to thank Bing Bai, Ye Hong, Wei-Jun Zhang, Jun Zhang and Xiao Jiang for helpful discussions and assistance. This work was supported by National Key Research and Development Plan of China (Grant No. 2020YFA0309700), National Natural Science Foundation of China (Grant No. 62031024, 62071151), Innovation Program for Quantum Science and Technology (2021ZD0300300), Anhui Initiative in Quantum Information Technologies, Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01) and Chinese Academy of Sciences. W.L. acknowledges support from the Natural Science Foundation of Shanghai (Grant No. 22ZR1468100). F. Xu acknowledge the support from the Tencent Foundation.
References
1. Bennett, C. H. & Brassard, G. In Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, 175 (IEEE Press, Bangalore, India; New York, 1984).
2. Ekert, A. K. Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 67, 661 (1991).
3. Xu, F., Ma, X., Zhang, Q., Lo, H.-K. & Pan, J.-W. Secure quantum key distribution with realistic devices. Rev. Mod. Phys. 92, 025002 (2020).
4. Pirandola, S. et al. Advances in quantum cryptography. Adv. Opt. Photon. 12, 1012–1236 (2020).
5. Chen, Y.-A. et al. An integrated space-to-ground quantum communication network over 4,600 kilometres. Nature 589, 214–219 (2021).
6. Diamanti, E., Lo, H.-K., Qi, B. & Yuan, Z. Practical challenges in quantum key distribution. npj Quantum Inf. 2, 16025 (2016).
7. Sasaki, M. Quantum networks: where should we be heading? Quantum Sci. Technol. 2, 020501 (2017).
8. Takesue, H. et al. Quantum key distribution over a 40-dB channel loss using superconducting single-photon detectors. Nat. Photonics 1, 343–348 (2007).
9. Lucamarini, M. et al. Efficient decoy-state quantum key distribution with quantified security. Opt. Express 21, 24550–24565 (2013).
10. Yuan, Z. et al. 10-Mb/s quantum key distribution. J. Light. Technol. 36, 3427–3433 (2018).
11. Islam, N. T., Lim, C. C. W., Cahall, C., Kim, J. & Gauthier, D. J. Provably secure and high-rate quantum key distribution with time-bin qudits. Sci. Adv. 3, e1701491 (2017).
12. Boaron, A. et al. Secure quantum key distribution over 421 km of optical fiber. Phys. Rev. Lett. 121, 190502 (2018).
13. Grünenfelder, F., Boaron, A., Rusca, D., Martin, A. & Zbinden, H. Performance and security of 5 GHz repetition rate polarization-based quantum key distribution. Appl. Phys. Lett. 117, 144003 (2020).
14. Agnesi, C. et al. Simple quantum key distribution with qubit-based synchronization and a self-compensating polarization encoder. Optica 7, 284–290 (2020).
15. Lucamarini, M., Yuan, Z. L., Dynes, J. F. & Shields, A. J. Overcoming the rate–distance limit of quantum key distribution without quantum repeaters. Nature 557, 400–403 (2018).
16. Tomamichel, M., Lim, C. C. W., Gisin, N. & Renner, R. Tight finite-key analysis for quantum cryptography. Nat. Commun. 3, 634 (2012).
17. Lim, C. C. W., Curty, M., Walenta, N., Xu, F. & Zbinden, H. Concise security bounds for practical decoy-state quantum key distribution. Phys. Rev. A 89, 022307 (2014).
18. Rusca, D., Boaron, A., Grünenfelder, F., Martin, A. & Zbinden, H. Finite-key analysis for the 1-decoy state QKD protocol. Appl. Phys. Lett. 112, 171104 (2018).
19. Tanaka, A. et al. High-speed quantum key distribution system for 1-Mbps real-time key generation. IEEE J. Quantum Electron. 48, 542–550 (2012).
20. Fröhlich, B. et al. Long-distance quantum key distribution secure against coherent attacks. Optica 4, 163–167 (2017).
21. Wang, X.-B. Beating the photon-number-splitting attack in practical quantum cryptography. Phys. Rev. Lett. 94, 230503 (2005).
22. Lo, H.-K., Ma, X. & Chen, K. Decoy state quantum key distribution. Phys. Rev. Lett. 94, 230504 (2005).
23. Hadfield, R. H. Single-photon detectors for optical quantum information applications. Nat. Photonics 3, 696–705 (2009).
24. Marsili, F. et al. Detecting single infrared photons with 93% system efficiency. Nat. Photonics 7, 210–214 (2013).
25. You, L. Superconducting nanowire single-photon detectors for quantum information. Nanophotonics 9, 2673–2692 (2020).
26. Korzh, B. et al. Provably secure and practical quantum key distribution over 307 km of optical fibre. Nat. Photonics 9, 163–168 (2015).
27. Comandar, L. C. et al. Quantum key distribution without detector vulnerabilities using optically seeded lasers. Nat. Photonics 10, 312–315 (2016).
28. Zhang, W. et al. A 16-pixel interleaved superconducting nanowire single-photon detector array with a maximum count rate exceeding 1.5 GHz. IEEE Trans. Appl. Supercond. 29, 2200204 (2019).
29. Mao, H.-K., Li, Q., Hao, P.-L., Abd-El-Atty, B. & Iliyasu, A. M. High performance reconciliation for practical quantum key distribution systems. Opt. Quantum Electron. 54, 163 (2022).
30. Yan, B., Li, Q., Mao, H. & Chen, N. An efficient hybrid hash based privacy amplification algorithm for quantum key distribution. Quantum Inf. Process. 21, 130 (2022).
31. Yuan, Z. et al. Robust random number generation using steady-state emission of gain-switched laser diodes. Appl. Phys. Lett. 104, 261112 (2014).
32. Ma, C. et al. Silicon photonic transmitter for polarization-encoded quantum key distribution. Optica 3, 1274–1278 (2016).
33. Sibson, P. et al. Integrated silicon photonics for high-speed quantum key distribution. Optica 4, 172–177 (2017).
34. Wei, K. et al. High-speed measurement-device-independent quantum key distribution with integrated silicon photonics. Phys. Rev. X 10, 031030 (2020).
35. Avesani, M. et al. Full daylight quantum-key-distribution at 1550 nm enabled by integrated silicon photonics. npj Quantum Inf. 7, 93 (2021).
36. Xavier, G., de Faria, G. V., Temporão, G. & von der Weid, J. Full polarization control for fiber optical quantum communication systems using polarization encoding. Opt. Express 16, 1867–1873 (2008).
37. Vorontsov, M. A., Carhart, G. W. & Ricklin, J. C. Adaptive phase-distortion correction based on parallel gradient-descent optimization. Opt. Lett. 22, 907–909 (1997).
38. Dauler, E. A. et al. Photon-number-resolution with sub-30-ps timing using multi-element superconducting nanowire single photon detectors. J. Mod. Opt. 56, 364–373 (2009).
39. Scarani, V. & Renner, R. Quantum cryptography with finite resources: unconditional security bound for discrete-variable protocols with one-way postprocessing. Phys. Rev. Lett. 100, 200501 (2008).
40. Lee, C. et al. Large-alphabet encoding for higher-rate quantum key distribution. Opt. Express 27, 17539–17549 (2019).
41. Wang, H. et al. High-speed Gaussian-modulated continuous-variable quantum key distribution with a local local oscillator based on pilot-tone-assisted phase compensation. Opt. Express 28, 32882–32893 (2020).
42. Qi, B., Lougovski, P., Pooser, R., Grice, W. & Bobrek, M. Generating the local oscillator “locally” in continuous-variable quantum key distribution based on coherent detection. Phys. Rev. X 5, 041009 (2015).
43. Cañas, G. et al. High-dimensional decoy-state quantum key distribution over multicore telecommunication fibers. Phys. Rev. A 96, 022317 (2017).
44. Wengerowsky, S., Joshi, S. K., Steinlechner, F., Hübel, H. & Ursin, R. An entanglement-based wavelength-multiplexed quantum communication network. Nature 564, 225–228 (2018).
45. Bacco, D. et al. Boosting the secret key rate in a shared quantum and classical fibre communication system. Commun. Phys. 2, 140 (2019).
46. Wang, H. et al. Sub-Gbps key rate four-state continuous-variable quantum key distribution within metropolitan area. Commun. Phys. 5, 1–10 (2022).
47. Roumestan, F. et al. High-rate continuous variable quantum key distribution based on probabilistically shaped 64 and 256-QAM. In 2021 European Conference on Optical Communication (ECOC), 1–4 (2021).
48. Fröhlich, B. et al. A quantum access network. Nature 501, 69–72 (2013).
49. Wang, J., Sciarrino, F., Laing, A. & Thompson, M. G. Integrated photonic quantum technologies. Nat. Photonics 14, 273–284 (2020).
§.§ Integrated modulator.
Fig. <ref>a,b shows all functional blocks on the integrated chip for intensity and polarization modulation. The chip, with a size of 4.8×3 mm^2 and fabricated by a commercial foundry process, is packaged with a thermoelectric cooler. The TOM is designed to have an ohmic resistance of 680 Ω, while the three CDMs are 3.2 mm long, resulting in an electronic bandwidth of 21 GHz and a V_π of 4.7 V. A detailed performance analysis related to the QKD application can be found in Supplementary Section 1. Due to the saturation of the modulator, an 8-V peak-to-peak voltage is needed to produce a 3π/2 phase change. A homemade field-programmable gate array board is used to generate the driving pulses (see Supplementary Section 2). Four BB84 states |ψ⟩=(|H⟩+e^i θ|V⟩) / √(2), θ∈{0, π / 2, π, 3 π / 2} are prepared, where θ is the phase modulated by the CDM in front of the two-dimensional grating coupler and |H⟩ (|V⟩) corresponds to the polarization state in the upper (lower) arm of the 2DGC.
The variable attenuator is composed of a p-i-n junction operating by carrier injection. A 3-V voltage can induce a 38-dB loss variation. The large power consumed by the forward-biased diode requires care: for a 3-V bias, the 230-mA diode current would result in a power consumption of 0.69 W. This would potentially overload the thermoelectric cooler and change the working conditions of the other on-chip modulators. We therefore use three cascaded diodes to reduce the voltage applied to each one and reduce the total power consumed.
§.§ Post processing.
For the error reconciliation, a Medium-Efficiency mode is used for block-length setting and 12 threads are run for parallel computing. In each thread, 100 processing units are applied, and each unit processes a frame of length L=64 kb. Once a processing unit completes the task of error correction, the verification of 64-bit cyclic redundancy check is operated. The corrected frame is transferred to the privacy amplification module. If the verification is negative, the frame will be revealed and the information leakage is accounted in the reconciliation efficiency. For the privacy amplification, we set the block number k=2, that can support a maximum compression ratio of 50%. The length of a single block γ is set to 57,885,161, and the corresponding input data size of privacy amplification is N=γ× k =115,770,322, which is larger than the finite key size of 10^8.
§.§ Polarization compensation.
In our experiments, we adopt the stochastic parallel gradient descent algorithm for polarization feedback control (Supplementary Section 4). The algorithm uses the QBERs in the Z and X bases as error signals (objective function) to update the driving voltages of the EPC in front of Bob's variable beam splitter. The controller consists of three fibre squeezers controlled by direct-current voltages applied to piezoelectric elements. The squeezers are aligned at 0°, 45° and 0°, respectively. Alice sends sufficient calibration signals in the Z and X bases using the same sending probability as for the quantum signals. Bob collects the QBER of the calibration sequence and uses it as the feedback signal. The polarization bases of Alice and Bob can be aligned by keeping the two QBER values low. During the experiment over the 328-km-long fibre, we use strong calibration pulses that are 12.9 dB more intense than the signal pulses, and the accumulation time is 0.5 s. The ratios of calibration pulses are 1/8 for the 328-km experiment and 1/256 for the others.
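One iteration of this feedback loop can be sketched as follows; the objective function, dither amplitude and gain are placeholders, while in the real system the controller acts on the piezo voltages of the EPC and the QBERs are measured on the calibration sequence.

import numpy as np

def spgd_step(voltages, measure_qber, delta=0.05, gain=2.0, rng=np.random.default_rng(0)):
    """Stochastic parallel gradient descent: measure_qber(v) returns QBER_Z + QBER_X at voltages v."""
    perturb = delta * rng.choice([-1.0, 1.0], size=voltages.shape)   # random +/- dither on all channels
    dJ = measure_qber(voltages + perturb) - measure_qber(voltages - perturb)
    return voltages - gain * dJ * perturb                            # move against the estimated gradient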
§.§ Finite-key simulation.
For dedicated fibre, the observed yield and error rate per photon pulse prepared in basis B and intensity μ_k is given by:
D_B,k = 1-(1-2p_dc) e^-μ_kη_sysq_B,
Q_B,k = D_B,k(1+p_ap),
Q_B,kE_B,k = p_dc+e_mis(1-2p_dc)(1-e^-μ_kη_sysq_B)+1/2p_apD_B,k,
where B=Z,X stands for the basis, μ_k is the k-th intensity in the protocol while k=1,2 stands for the intensity index, q_B is Bob's probability of selecting basis B passively, η_ sys is the overall transmittance including the channel transmittance η_ ch and the efficiency of Bob's detection system η_ Bob. Thereafter, using Q_B,k and Q_B,kE_B,k generated from the above model as input, the bounds of single-photon (vacuum-state) contributions s_Z, 1^l (s_Z, 0^l) as well as the single-photon phase error rate ϕ_Z^u can be estimated by finite-key analysis and decoy-state methods. The final secret key rate is given by equation (<ref>) in the main text.
In the simulation, we assume the probability is the same for Alice and Bob to choose Z basis: p_Z=q_Z. The simulation is based on a fixed finite raw key length n_Z=10^8. The overall efficiency on Bob’s side is η_ Bob=56.08%, including the detector efficiency and internal losses of Bob’s apparatus. The detector efficiency is set to 65%, the same as the lowest efficiency of the four detectors due to the security requirement, and the overall internal loss of Bob’s apparatus is measured to be 0.64 dB. The dark count probability of one detector is p_ dc = 10^-8 per pulse, the dead time of one detector is t_ dt =0.7 ns, and the misalignment error rate is e_ mis = 0.4%. The channel transmittance from Alice to Bob is η_ ch = 10^-0.19L/10, in which L is the fibre length of the quantum channel. The security parameters are set to be ϵ_ sec = 10^-10 and ϵ_ cor = 10^-15. Based on the model and parameters, we can optimize the parameters (p_Z, μ_1, μ_2, P_μ_1, P_μ_2) by maximizing the SKR for any channel loss, where P_μ_k is the probability of sending μ_k.
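A minimal transcription of this channel model is given below; the dead-time and after-pulsing corrections are simplified, and the intensity and basis probability in the example call are illustrative rather than the optimized values.

import numpy as np

def yields_and_errors(mu_k, eta_ch, q_B, eta_Bob=0.5608, p_dc=1e-8, e_mis=0.004, p_ap=0.0):
    """Per-pulse gain Q_{B,k} and error-weighted gain Q_{B,k}E_{B,k} of the model above."""
    eta_sys = eta_ch * eta_Bob
    D = 1.0 - (1.0 - 2.0 * p_dc) * np.exp(-mu_k * eta_sys * q_B)
    Q = D * (1.0 + p_ap)
    QE = p_dc + e_mis * (1.0 - 2.0 * p_dc) * (1.0 - np.exp(-mu_k * eta_sys * q_B)) + 0.5 * p_ap * D
    return Q, QE

eta_ch = 10 ** (-0.19 * 10.0 / 10.0)                              # 10 km of fibre at 0.19 dB/km
Q_Z, QE_Z = yields_and_errors(mu_k=0.6, eta_ch=eta_ch, q_B=0.9)   # illustrative parameter values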
|
http://arxiv.org/abs/2307.02992v1
|
20230706135425
|
Large deformations in terms of stretch and rotation and global solution to the quasi-stationary problem
|
[
"Abramo Agosti",
"Pierluigi Colli",
"Michel Frémond"
] |
math.AP
|
[
"math.AP",
"math-ph",
"math.MP"
] |
Large deformations in terms of stretch and rotation
and global solution to the quasi-stationary problem
==========================================================================================================
Abramo Agosti^(1)
e-mail: [email protected]
Pierluigi Colli ^(1)
e-mail: [email protected]
Michel Frémond^(2)
e-mail: [email protected]
^(1)
Dipartimento di Matematica “F. Casorati”, Università di Pavia
via Ferrata 5, I-27100 Pavia, Italy
^(2)
Lagrange Laboratory
Dipartimento di Ingegneria Civile e Ingegneria Informatica,
Università di Roma “Tor Vergata"
Via del Politecnico, 1, 00163 Roma, Italy
In this paper we derive a new model for visco-elasticity with large deformations where the independent variables are the stretch and the rotation tensors, which intervene through second-gradient terms accounting for physical properties in the principle of virtual power.
Another basic feature of our model is conditional compatibility, which enters the model as a kinematic constraint and depends on the magnitude of an internal force associated with dislocations. Moreover, due to the kinematic constraint, the virtual velocities depend on the solutions of the problem. As a consequence, the variational formulation of the problem and the related mathematical analysis are neither standard nor straightforward. We adopt the strategy of inverting the kinematic constraints through Green propagators, obtaining a system of coupled integro-differential equations.
As a first mathematical step, we develop the analysis of the model in a simplified setting, i.e., considering the quasi-stationary version of the full system where we neglect inertia. In this context, we prove the existence of a global-in-time strong solution in three space dimensions for the system, employing techniques from PDEs and convex analysis, thus obtaining a novel contribution in the field of three-dimensional finite visco-elasticity described in terms of the stretch and rotation variables.
We also study a limit problem, letting the magnitude of the internal force associated with dislocations tend to zero, in which case the deformation becomes incompatible and the equations take the form of a coupled system of PDEs. For the limit problem we obtain global existence, uniqueness and continuous dependence on the data in three space dimensions.
3mm
Keywords: Large deformations, stretch, rotation, compatibility, equations of motion, principle of virtual powers, integro-differential PDE system, initial-boundary value problem, existence of solutions.
3mm
2020 Mathematics Subject Classification: 74A99, 74A05, 74B20, 45K05, 35G31, 35A01
§ INTRODUCTION
Large deformation theory introduced by J. Ball relies on the gradient matrix
𝐅=∇Φ =𝐑𝐖,
with position function Φ, stretch matrix 𝐖 and rotation
matrix 𝐑.
We choose to describe the motion with matrices 𝐑 and 𝐖
which can be experimented.
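For the reader's convenience, we recall (a standard fact, stated here only for illustration and under the assumption det 𝐅>0) how the two matrices are recovered from 𝐅 through the polar decomposition:
𝐖=(𝐅^T𝐅)^1/2, 𝐑=𝐅𝐖^-1,
with 𝐖 symmetric and positive definite and 𝐑 a rotation.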
Consider a nail firmly hammered in a wall and assume it is a point which has
inertia. When experimenting the motion of the nail, its angular velocity as
well as its angular acceleration are equal to the angular velocity and
acceleration of the wall: these quantities are continuous with respect to
space. In mathematical parlance this mechanical property is: the angular
velocity and acceleration of the nail are the trace on the point of the
volume quantities. Thus it is reasonable that they are functions with such
mathematical properties: for instance, with second
order derivatives that are volume square integrable. Note that in this point of view the bilateral contact of
the wall with a wooden stick, a beam involving second order derivatives, is
straightforward.
To satisfy relationship (<ref>), stretch matrix 𝐖 has to
satisfy the compatibility conditions. These conditions can be infringed in
case there are dislocations, in which case the matrix 𝐑𝐖 is no longer a gradient <cit.>, see also <cit.>. In this situation we have
𝐑𝐖=∇Φ+𝐙, div 𝐙=0,
where 𝐙 accounts for the dislocations. The occurrence of
dislocations is conditional depending on the intensity of an internal force.
We prove an existence theorem in the framework of these ideas. We start by deriving a model, which will be further detailed in <cit.>, with conditional compatibility and inertia. We start from a generalized form of the principle of virtual power, together with kinematic conditions for the deformation map and the dislocations, choosing constitutive assumptions for the internal forces in the system in order to satisfy the Clausius–Duhem dissipative equality.
We assume a quadratic expression for the free energy density of the system, depending on the stretch, the rotation and the dislocation tensors, and a quadratic form also for the dissipation potential, containing viscous contributions in terms of the time derivative of the stretch tensor and on the angular velocity tensor.
The generalized virtual velocities are associated to the independent variables of the problem, i.e., to Φ, 𝐖,𝐑,𝐙, which are linked by kinematic constraints. As a consequence, the virtual velocities themselves satisfy internal constraints depending on the solutions of the problem. This feature of the model makes the definition of weak solutions involved, since we should deal with a variational formulation with test functions depending on the solutions themselves. Hence, we adopt the strategy to express the virtual velocities associated to the deformation map and the dislocations in terms of the virtual velocities associated to the stretch matrix and to the rotation through integral operators related to the kinematic constraints. Thus, we reduce the set of independent virtual velocities and eliminate their internal constraints, obtaining a system of integro-differential coupled equations.
We consider the inertia of the system expressed by a virtual power of acceleration forces containing second-order interaction terms in space, which allows us to obtain sufficient regularity of weak solutions to be able to represent a contact at a point with inertia, in agreement with experiments as previously explained.
We also associate to dislocations an internal force, which is a new independent variable of the system. This activates dislocations when its magnitude is greater than a certain threshold k>0. Also, we impose the positive definiteness of the stretch matrix as an internal constraint in the free energy of the system, which implies that the material is not flattening or crushing and that a point which is inside its domain at a certain time remains in the interior of the domain at later times <cit.>.
In the present contribution, as a first step and in order to expose all the technicalities for a simplified problem, we develop the analysis for the quasi-stationary approximation of the full system, i.e., neglecting inertia. In this context, we obtain the existence of a global-in-time strong solution in three space dimensions. Together with the derivation of the model, to the best of our knowledge this analytical result is a novel contribution in the framework of 3D finite visco-elastic problems solved in the stretch matrix and rotation variables. Hence, it is a first step in the analysis of models in nonlinear three-dimensional visco-elasticity where the rotation field is considered as one of the primary unknowns, which was first posed as an open problem in <cit.>. In the latter reference, the authors highlighted the interest of this approach to describe elasticity since it possesses “a more geometrical flavour than the classical approach” with the deformation gradient.
We point out that the available analytical results in two and three spatial dimensions for visco-elastic problems with large deformations described in the standard approach with the deformation gradient entail the existence of local-in-time weak solutions <cit.>. We also study the limit problem as k→ 0, in which case it becomes a coupled system of PDEs where the incompatibility is always active. In this latter situation we obtain global existence, uniqueness and continuous dependence on the data (i.e., well-posedness) in three space dimensions. The study of the full case with inertial terms will be the subject of a second contribution.
The paper is organized as follows. In Section <ref> we introduce the necessary notation and some preliminary results. In Section <ref> we derive the full model with inertia and the new internal forces terms. In Section <ref> we study the existence problem for the quasi-stationary approximation of the full problem. In Section <ref> we complete the analysis by studying the limiting case as k→ 0. We conclude with some observations and future perspectives in Section <ref>.
§ NOTATIONS AND PRELIMINARIES
Let 𝒟_a⊂ℝ^3 be an open bounded and simply connected domain with Lipschitz boundary Γ_a:=∂𝒟_a, and let [0,T] be a finite time interval, with T>0. We introduce the notation 𝒟_aT:= 𝒟_a× [0,T]. We indicate as M(ℝ^3× 3) the linear space of square matrices, endowed with the Frobenius inner product
𝐀: 𝐁=∑_i,j=1^3𝐀_ij𝐁_ij,
for any 𝐀,𝐁∈ M(ℝ^3× 3).
We also indicate with the notation : : the Frobenius inner product in M(ℝ^3× 3× 3), and with the notation : : : the Frobenius inner product in M(ℝ^3× 3× 3× 3). The orthogonal subspaces of symmetric and antisymmetric matrices are denoted by Sym(ℝ^3× 3)⊂ M(ℝ^3× 3) and Skew(ℝ^3× 3)⊂ M(ℝ^3× 3),
respectively. We indicate the set of special orthogonal matrices as SO(ℝ^3× 3) and the set of positive definite symmetric matrices as Sym^+(ℝ^3× 3). We recall that for any 𝐑∈ SO(ℝ^3× 3) there exists a unique 𝐀∈ Skew(ℝ^3× 3) such that 𝐑=e^𝐀, where the exponential of a matrix must be intended as e^𝐀=∑_n=0^∞𝐀^n/n!.
For a generic subset K ⊂ M(ℝ^3× 3), let I_K : M(ℝ^3× 3) →{0, +∞} denote the indicator function of K, which is defined, for any 𝐀∈ M(ℝ^3× 3), by I_K(𝐀)=0 if 𝐀∈ K and I_K(𝐀)= +∞ if 𝐀∉K.
We introduce the space of vector fields 𝒱:=(ℝ^3)^𝒟_aT, whose elements are functions from 𝒟_aT to ℝ^3. We further introduce the spaces of tensor fields ℳ:=(M(ℝ^3× 3))^𝒟_aT, 𝒮𝒪:=(SO(ℝ^3× 3))^𝒟_aT, 𝒮:=(Sym(ℝ^3× 3))^𝒟_aT and 𝒜:=(Skew(ℝ^3× 3))^𝒟_aT, with ℳ=𝒮⊕𝒜. Given a tensor 𝐀∈ℳ, we denote by Sym(𝐀):=(𝐀+𝐀^T)/2 its symmetric part and by Skew(𝐀):=(𝐀-𝐀^T)/2 its antisymmetric part. We also need to introduce the space of tensor fields ℳ_:={𝐀∈ℳ | div𝐀=0}, where the divergence of a second order tensor is defined row-wise. In the following, we will operate also with the curl of second order tensors, which is defined row-wise.
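For readers who wish to experiment numerically with these row-wise operators, the following Python sketch (ours, purely for illustration and not part of the analysis) implements the row-wise divergence and curl of a 3× 3 tensor field by central finite differences on a periodic grid; the function names and the periodic setting are simplifying assumptions of ours.

```python
import numpy as np

def d(f, axis, h):
    """Central difference along one spatial axis on a periodic grid."""
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * h)

def div_rowwise(T, h):
    """Row-wise divergence of a tensor field T[x,y,z,i,j]: (div T)_i = sum_j d_j T_ij."""
    return sum(d(T[..., :, j], j, h) for j in range(3))

def curl_rowwise(T, h):
    """Row-wise curl: (curl T)_ij = eps_jkl d_k T_il (the usual curl of each row)."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    out = np.zeros_like(T)
    for i in range(3):
        for j in range(3):
            out[..., i, j] = sum(eps[j, k, l] * d(T[..., i, l], k, h)
                                 for k in range(3) for l in range(3))
    return out

# sanity check: the row-wise divergence of a row-wise curl vanishes (up to round-off),
# consistently with the fact that tensors of the form curl Z are divergence free
rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 16, 16, 3, 3))
print(np.max(np.abs(div_rowwise(curl_rowwise(Z, 1.0), 1.0))))
```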
We denote by L^p(𝒟_a;K) and W^r,p(𝒟_a;K) the standard Lebesgue and Sobolev spaces of functions defined on 𝒟_a with values in a set K, where K may be ℝ or a vector subspace of a multiple power of ℝ, and by L^p(0,t;V) the Bochner space of functions defined on (0,t) with values in the functional space V, with 1≤ p ≤∞. If K≡ℝ, we simply write L^p(𝒟_a) and W^r,p(𝒟_a).
For a normed space X, the associated norm is denoted by ·_X. In the case p=2, we use the notations H^1:=W^1,2 and H^2:=W^2,2, and we denote by (·,·) and · the L^2 scalar product and induced norm between functions with scalar, vectorial or tensorial values. Moreover, we denote by C^k(𝒟_a;K),C_c^k(𝒟_a;K) the spaces of continuously differentiable functions (respectively with compact support) up to order k defined on 𝒟_a with values in a set K; by
C^k([0,t];V), k≥ 0, the spaces of continuously differentiable functions up to order k from [0,t] to the space V. The dual space of a Banach space Y is denoted by Y'. Finally, we denote by W_0^r,p
(𝒟_a;K) the closure of C_c^∞(𝒟_a;K) with respect to the norm ·_W^r,p(𝒟_a;K), and by W^-r,p'(𝒟_a;K) the dual space of W_0^r,p(𝒟_a;K), with p≥ 1 and p'≥ 1 conjugate exponents. As before, when p=2 we will indicate
the latter functional spaces as H_0^r(𝒟_a;K) and H^-r(𝒟_a;K). The duality pairing between H_0^1(𝒟_a;K) and H^-1(𝒟_a;K) is denoted by <·,·>. We endow the space
H_0^1(𝒟_a;K) with the inner product (A,B)_H_0^1(𝒟_a;K):=( A, B), for all A,B ∈ H_0^1(𝒟_a;K), and we introduce the Riesz isomorphism ℛ:H_0^1(𝒟_a;K)→
H^-1(𝒟_a;K), defined by
<ℛA,B>=(A,B)_H_0^1(𝒟_a;K), ∀ A,B∈ H_0^1(𝒟_a;K).
The operator ℛ=-Δ is the negative weak Laplace operator with homogeneous Dirichlet boundary conditions, which is positive definite and self adjoint. As a consequence of the Lax–Milgram theorem and the Poincaré inequality, the inverse operator (-Δ)^-1:H^-1(𝒟_a;K)→ H_0^1(𝒟_a;K) is well defined, and we set A:=(-Δ)^-1F=𝒢_L∗ F, for F∈ H^-1(𝒟_a;K), where 𝒢_L is the Green propagator associated to the Laplace operator with
homogeneous Dirichlet boundary conditions and ∗ denotes the convolution operation, if -Δ A=F in 𝒟_a in the weak sense, and A=0 on Γ_a in the sense of traces. We note that, if A∈ H_0^1(𝒟_a;K) solves -Δ A=F for some F∈ W^m,p(𝒟_a;K), 1<p<∞, m∈ℕ, and Γ_a is of class C^m+2, then from elliptic regularity theory A∈ W^m+2,p(𝒟_a;K) and -Δ A=F a.e. in 𝒟_a, with
A_W^m+2,p(𝒟_a;K)≤ CF_W^m,p(𝒟_a;K).
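As a purely numerical aside (not used in the analysis), the action of the inverse operator (-Δ)^-1 with homogeneous Dirichlet boundary conditions, i.e. the convolution with the Green propagator 𝒢_L, can be mimicked by a standard finite-difference solve; the following minimal Python sketch does this in one space dimension, with function name and discretization chosen by us for illustration only.

```python
import numpy as np

def inverse_dirichlet_laplacian_1d(f, length=1.0):
    """Solve -u'' = f on (0, length) with u(0) = u(length) = 0 by second-order
    finite differences; a 1D stand-in for u = G_L * f."""
    n = len(f)                              # number of interior grid points
    h = length / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) +
         np.diag(-1.0 * np.ones(n - 1), 1) +
         np.diag(-1.0 * np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

# usage: f = pi^2 sin(pi x) has the exact solution u = sin(pi x)
x = np.linspace(0.0, 1.0, 102)[1:-1]
u = inverse_dirichlet_laplacian_1d(np.pi**2 * np.sin(np.pi * x))
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error
```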
We also need to introduce the spaces
L_^2(𝒟_a,K):={𝐮∈ C_c^∞(𝒟_a,K): div𝐮=0 in 𝒟_a}^·_L^2(𝒟_a;K),
H_0,^1(𝒟_a,K):={𝐮∈ C_c^∞(𝒟_a,K): div𝐮=0 in 𝒟_a}^·_H^1(𝒟_a;K).
The duality pairing between H_0,^1(𝒟_a;K) and (H_0,^1(𝒟_a;K))^' is still denoted by <·,·>.
We can introduce, in a similar manner as before, the Riesz isomorphism ℛ_:H_0,^1(𝒟_a;K)→(H_0,^1(𝒟_a;K))^', defined by
<ℛ_A,B>=( A, B), ∀ A,B∈ H_0,^1(𝒟_a;K).
The operator ℛ_=-P_LΔ, where P_L:L^2(𝒟_a;K)→ L_^2(𝒟_a;K) denotes the Leray projector, is the negative projected Laplace operator with homogeneous Dirichlet boundary conditions, which is positive definite and self adjoint. As a consequence of the Lax–Milgram theorem and the Poincaré inequality, the inverse operator (-P_LΔ)^-1:(H_0,^1(𝒟_a;K))^'→ H_0,^1(𝒟_a;K) is well defined, and we set A:=(-P_LΔ)^-1F=𝒢_L,∗ F, for F∈(H_0,^1(𝒟_a;K))^', where 𝒢_L, is the Green propagator associated to the projected
Laplace operator with homogeneous Dirichlet boundary conditions, if -P_LΔ A=F in 𝒟_a in the weak sense, and A=0 on Γ_a in the sense of traces. We again note that, if A∈ H_0,^1(𝒟_a;K) solves -P_LΔ A=F for some F∈ W^m,p(𝒟_a;K)∩ L_^2(𝒟_a,K), 1<p<∞, m∈ℕ, and Γ_a is of class C^m+2, then from elliptic regularity theory A∈ W^m+2,p(𝒟_a;K)∩ L_^2(𝒟_a,K) and -P_LΔ A=F a.e. in 𝒟_a.
In the following, C denotes a generic positive constant independent of the unknown variables, the discretization and the physical parameters, the value of which might change from line to line; C_1, C_2, … indicate generic positive constants whose particular value must be tracked through the calculations; C(a,b,…) denotes a constant depending on the nonnegative parameters a,b,….
We recall the following form of the Helmholtz–Hodge decomposition for vector fields (see e.g. <cit.>).
Let ξ∈(ℝ^3)^𝒟 be a sufficiently smooth vector field over a simply connected bounded domain 𝒟 with smooth boundary ∂𝒟. Then, ξ is uniquely decomposed in the form
ξ=∇ϕ+curl(𝐝),
with ϕ∈ℝ^𝒟, 𝐝∈(ℝ^3)^𝒟 smooth scalar and vector fields respectively satisfying
div ξ=Δϕ, curl ξ=-Δ𝐝,
and one of the following boundary conditions is satisfied:
∇ϕ·𝐧|_∂𝒟=ξ·𝐧|_∂𝒟, or curl(𝐝)∧𝐧|_∂𝒟=ξ∧𝐧|_∂𝒟.
We observe that, since 𝐝 is uniquely defined up to the gradient of a scalar function, it is not restrictive to impose the solenoidal constraint div𝐝 = 0.
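The splitting of Theorem <ref> can also be illustrated numerically. The Python sketch below (ours, for illustration only) works on a periodic box and treats the problem spectrally, so it deliberately ignores the boundary conditions (<ref>) of the statement: it extracts the gradient part ∇ϕ, and the remainder then plays the role of curl(𝐝).

```python
import numpy as np

def helmholtz_split_periodic(xi, h=1.0):
    """Split a vector field xi[x,y,z,:] on a periodic grid as xi = grad(phi) + (div-free part).
    Periodic-box illustration only; the boundary conditions of the theorem are ignored."""
    n = xi.shape[0]
    k = 2j * np.pi * np.fft.fftfreq(n, d=h)
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    K = np.stack([KX, KY, KZ], axis=-1)            # i*k vectors
    k2 = np.sum(np.abs(K) ** 2, axis=-1)
    k2[0, 0, 0] = 1.0                              # avoid dividing the mean mode by zero
    xi_hat = np.fft.fftn(xi, axes=(0, 1, 2))
    div_hat = np.sum(K * xi_hat, axis=-1)          # Fourier symbol of div(xi)
    phi_hat = -div_hat / k2                        # solves Laplace(phi) = div(xi)
    grad_part = np.real(np.fft.ifftn(K * phi_hat[..., None], axes=(0, 1, 2)))
    return grad_part, xi - grad_part

rng = np.random.default_rng(1)
xi = rng.standard_normal((16, 16, 16, 3))
g, s = helmholtz_split_periodic(xi)   # g = grad(phi); s is (spectrally) divergence free
```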
We also recall the Korn-Poincaré inequality (see e.g. <cit.>), which will be used in the following calculations.
Let 𝒟⊂ℝ^3 be a bounded domain with Lipschitz boundary Γ := ∂𝒟.
Then, for any p> 1, there exists a positive constant C depending on 𝒟 such that
𝐗_L^p(𝒟,ℝ^3× 3)≤ C(Sym(𝐗)_L^p(𝒟,ℝ^3× 3)+curl𝐗_L^p(𝒟,ℝ^3× 3)),
for all tensor fields 𝐗 in W_0,^1,p(𝒟,ℝ^3× 3):={𝐗∈ L^p(𝒟,ℝ^3× 3): curl𝐗∈ L^p(𝒟,ℝ^3× 3), 𝐗∧𝐧=0 on Γ}.
As a consequence of (<ref>), given 𝐀∈ W_0,^1,p(𝒟,Skew(ℝ^3× 3)), we have that
𝐀_L^p(𝒟,Skew(ℝ^3× 3))≤ C curl𝐀_L^p(𝒟,ℝ^3× 3).
We also recall the Gagliardo-Nirenberg inequality (see e.g. <cit.>).
Let 𝒟⊂ℝ^3 be a bounded domain with Lipschitz boundary and f∈ W^m,r∩ L^q, q≥ 1, r≤∞, where f can be a function with scalar, vectorial or tensorial values. For any integer j with 0 ≤ j < m, suppose there is α∈ℝ such that
j-3/p=(m-3/r)α+(1-α)(-3/q), j/m≤α≤ 1.
Then, there exists a positive constant C depending on Ω, m, j, q, r, and α such that
D^jf_L^p≤ Cf_W^m,r^αf_L^q^1-α.
Finally, we will use the following result.
Let p≥ 1 and Ω_1,Ω_2∈ L^p(𝒟_a,Skew(ℝ^3× 3)). There exists a positive constant C such that
e^Ω_1-e^Ω_2_L^p(𝒟_a,ℝ^3× 3)≤ CΩ_1-Ω_2_L^p(𝒟_a,Skew(ℝ^3× 3)).
We introduce the three Euler angles θ,ϕ,χ, associated to a skew symmetric tensor Ω∈𝒜, and the three matrices 𝐀,𝐁,𝐂∈ Skew(𝐑^3× 3) which are elements of the canonical basis for Skew(𝐑^3× 3), i.e.,
𝐀=
[ 0 -1 0; 1 0 0; 0 0 0 ]
, 𝐁=
[ 0 0 0; 0 0 -1; 0 1 0 ]
, 𝐂=
[ 0 0 -1; 0 0 0; 1 0 0 ].
Then we may write, for any 𝐱∈𝒟_a,
Ω(𝐱)=θ(𝐱) 𝐀+ϕ(𝐱) 𝐁+χ(𝐱) 𝐂.
Observing the fact that, for any n∈ℕ,
𝐀^2n+1=(-1)^n𝐀, 𝐀^2n+2=(-1)^n+1[ 1 0 0; 0 1 0; 0 0 0 ],
with similar relations for 𝐁 and 𝐂, we have that
e^Ω_1(𝐱)-e^Ω_2(𝐱)=e^θ_1(𝐱) 𝐀e^ϕ_1(𝐱) 𝐁e^χ_1(𝐱) 𝐂-e^θ_2(𝐱) 𝐀e^ϕ_2(𝐱) 𝐁e^χ_2(𝐱) 𝐂,
where, for instance,
e^θ(𝐱) 𝐀= [ cos(θ(𝐱)) -sin(θ(𝐱)) 0; sin(θ(𝐱)) cos(θ(𝐱)) 0; 0 0 1 ],
e^ϕ(𝐱) 𝐁= [ 1 0 0; 0 cos(ϕ(𝐱)) -sin(ϕ(𝐱)); 0 sin(ϕ(𝐱)) cos(ϕ(𝐱)) ],
e^χ(𝐱) 𝐂= [ cos(χ(𝐱)) 0 -sin(χ(𝐱)); 0 1 0; sin(χ(𝐱)) 0 cos(χ(𝐱)) ],
and the analogous expressions hold for the indices 1 and 2.
The bound (<ref>) is thus a consequence of the uniform Lipschitz continuity and of the uniform boundedness of the cos and sin functions.
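A quick numerical sanity check of the estimate (<ref>) (by no means a proof) can be carried out by sampling random pairs of skew-symmetric matrices and comparing the Frobenius norms of e^Ω_1-e^Ω_2 and Ω_1-Ω_2. The following Python sketch, based on scipy.linalg.expm, does exactly this; names and sampling are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def skew(v):
    """Skew-symmetric matrix whose action is the cross product with v."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
ratios = []
for _ in range(1000):
    O1, O2 = skew(rng.normal(size=3)), skew(rng.normal(size=3))
    ratios.append(np.linalg.norm(expm(O1) - expm(O2)) / np.linalg.norm(O1 - O2))
print(max(ratios))   # stays bounded, consistently with the Lipschitz estimate of the lemma
```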
§ MODEL DERIVATION
We consider the motion of a deformable elastic solid in 𝒟_a which is fixed on its boundary Γ_a:=∂𝒟_a.
In the time interval (0,T), the motion is described by the map
(𝐚,t)→Φ(𝐚,t)∈ℝ^3, (𝐚,t)∈𝒟_aT := 𝒟_a× (0,T),
with
Φ(𝐚,0)=𝐚 for 𝐚∈𝒟_a and Φ(𝐚,t)=𝐚 for 𝐚∈Γ_a.
We assume that the motion is not compatible, i.e., there exists a dislocation tensor 𝐃∈ℳ_, with 𝐃(𝐚,0)=0 for 𝐚∈𝒟_a and 𝐃(𝐚,t)=0 for 𝐚∈Γ_a, such that
∇Φ=𝐑𝐖- 𝐃,
where 𝐑∈𝒮𝒪 is the rotation tensor and 𝐖∈𝒮 is the stretch tensor associated to the deformation gradient tensor, with 𝐑𝐖(𝐚,t)=𝐚
for 𝐚∈Γ_a. We introduce the stream potential 𝐙 such that 𝐃=curl𝐙. Since 𝐙 is uniquely defined up to the gradient of a vector function, it is not restrictive to impose the solenoidal constraint div𝐙=0, hence we consider 𝐙∈ℳ_. Applying the divergence and the curl operators to (<ref>), we obtain the kinematic relations:
ΔΦ=div(𝐑𝐖),
endowed with the boundary condition Φ(𝐚,t)=𝐚 for 𝐚∈Γ_a, and
- Δ𝐙=curl(𝐑𝐖), div𝐙=0,
endowed with the boundary condition 𝐙(𝐚,t)=0 for 𝐚∈Γ_a. Since the div, curl and ∇ operators are applied to second order tensors row-wise, we observe that the existence of the decomposition (<ref>), satisfying the relations (<ref>) and (<ref>), is a consequence of the application of Theorem <ref> to the row vectors of the involved tensors. Moreover, since ∑_j=1^3(𝐑𝐖)_ij𝐧_j=∑_j=1^3(∇Φ)_ij𝐧_j=𝐧_i for i=1,2,3, the decomposition (<ref>) is unique.
Taking moreover the time derivative of (<ref>) and (<ref>), introducing also the velocity vector field 𝐔:=∂_tΦ∈𝒱 and the angular velocity tensor Ω:=(∂_t𝐑)𝐑^T∈𝒜, we obtain the kinematic relations:
Δ𝐔=div(𝐑∂_t𝐖+Ω𝐑𝐖),
endowed with the boundary condition 𝐔(𝐚,t)=0 on Γ_a, and
-Δ∂_t𝐙=curl(𝐑∂_t𝐖+Ω𝐑𝐖), div𝐙=0,
endowed with the boundary condition 𝐙(𝐚,t)=0 on Γ_a.
We derive the model equations from the principle of virtual power, which gives the equations of motion for the linear and angular momenta expressed in terms of the kinematic variables and internal force tensors. We then constitutively assign the form of the internal force tensors, in terms of the kinematic variables, in order for the system to satisfy the Clausius–Duhem dissipative equality.
We start by defining the set 𝒞 of virtual velocities. Given 𝐑∈𝒮𝒪, 𝐖∈𝒮, we define, for any t∈ (0,T), the set
𝒞:={(𝐕,𝐖,Ω,𝐙)∈ (𝒱,ℳ,ℳ,ℳ_) | 𝐖,Ω=0 on Γ_a,
Δ𝐕=(𝐑𝐖+Ω𝐑𝐖),
𝐕=0 on Γ_a ,
-P_LΔ𝐙=(𝐑𝐖+Ω𝐑𝐖),
𝐙=0 on Γ_a.
}
The virtual velocities then satisfy the following constraint, which, similarly to (<ref>), is a consequence of (<ref>) applied row-wise:
𝐕=𝐑𝐖+Ω𝐑𝐖-𝐙.
We observe that the set 𝒞 of virtual velocities is defined in terms of the variables 𝐑 and 𝐖, and hence depend on the solutions of the equations of motion. We can formally write
𝐕=-𝒢_L∗(𝐑𝐖+Ω𝐑𝐖),
and
𝐙=𝒢_L,∗(𝐑𝐖+Ω𝐑𝐖).
Given solutions with regularity, for a.e. t∈ (0,T), 𝐑∈ H^2(𝒟_a;ℝ^3× 3)∩𝒮𝒪, 𝐖∈ H^2(𝒟_a;ℝ^3× 3)∩𝒮, and choosing 𝐖,Ω∈ H^1(𝒟_a;ℝ^3× 3), from elliptic regularity theory and with the assumed regularity of Γ_a we get that 𝐕∈ H^2(0;ℝ^3)∩ H_0^1(𝒟_a;ℝ^3) and 𝐙∈ H^2(𝒟_a;ℝ^3× 3)∩ H_0,^1(𝒟_a;ℝ^3).
We now introduce the virtual power of internal forces p_int(𝒟_a,C), the virtual power of external forces p_ext(𝒟_a,C) and the virtual power of acceleration forces p_acc(𝒟_a,C), defined in terms of 𝒟_a and of an element C∈𝒞. The principle of virtual power then states that
p_acc(𝒟_a,C)=p_int(𝒟_a,C)+p_ext(𝒟_a,C) ∀ C∈𝒞.
The virtual power of internal forces is defined as
p_int(𝒟_a,C):=-∫_𝒟_a(Π:𝐕+𝐗:: 𝐖+𝐘::: 𝐖)
+1/2∫_𝒟_a(𝐌:Ω-Λ::Ω-𝐂:::Ω)+∫_𝒟_aΓ:𝐙,
where Π is the Piola–Kirchhoff–Boussinesq stress tensor, 𝐌 represents the momentum, Λ the momentum flux and 𝐂 the flux of the momentum flux. The quantities 𝐗, 𝐘, Γ are new internal force tensors associated to the kinematic variables 𝐖 and 𝐙. In particular, Γ is an internal force accounting for the evolution
of the dislocations. The virtual power of external forces is defined as
p_ext(𝒟_a,C):=∫_𝒟_a𝒲_ext:𝐖+ ∫_𝒟_aΩ_ext:Ω,
where 𝒲_ext and Ω_ext are external forces, possibly depending on 𝐖 and R, which perform work by stretching and rotating the system, respectively. Note that the reader may expect a factor 1/2 in front of the second integral in (<ref>); in fact, we choose to incorporate this factor in the definition of Ω_ext.
We note that, since 𝐕 can be expressed in terms of 𝐖 and Ω through (<ref>), the expression (<ref>) may include external powers for classical body forces like gravity, follower forces depending on the solutions <cit.>, pressure contributions depending on cof𝐖, and so on.
Finally, the virtual power of acceleration forces is defined as
p_acc(𝒟_a,C)
:=∫_𝒟_a( d𝐔/dt·𝐕 + 𝐖:
𝐖+Ω:Ω+Δ𝐖:Δ𝐖+
ΔΩ:ΔΩ).
As discussed in the Introduction, higher order terms in the virtual power of acceleration forces are introduced to be able to represent a situation of a contact at a point with inertia, which requires regularity in space and time of the angular velocity and acceleration variables.
Using (<ref>) in (<ref>) we obtain that
p_int(𝒟_a,C):=-∫_𝒟_a(𝐑^TΠ:𝐖+𝐗:: 𝐖+𝐘::: 𝐖)
+1/2∫_𝒟_a((𝐌-2Π𝐖𝐑^T):Ω-Λ::Ω-𝐂:::Ω)
+∫_𝒟_a(Γ+Π):𝐙,
where in the last term we have used integration by parts and the boundary conditions for 𝐙.
We rewrite the first term in (<ref>) employing (<ref>) and then obtaining
-∫_𝒟_ad𝐔/dt·(𝒢_L∗(𝐑𝐖+Ω𝐑𝐖))
=-∫_𝒟_a(𝒢_L∗d𝐔/dt)·(𝐑𝐖+Ω𝐑𝐖)=∫_𝒟_a(𝒢_L∗d𝐔/dt): (𝐑𝐖+Ω𝐑𝐖),
where in the last term we have integrated by parts and used the boundary conditions for 𝒢_L∗d𝐔/dt. We also use (<ref>) in the last term of (<ref>) and deduce that
∫_𝒟_a(Γ+Π):𝐙=∫_𝒟_a(Γ+Π):(𝒢_L,∗(𝐑𝐖+Ω𝐑𝐖))
∫_𝒟_a𝒢_L,∗(Γ+Π):(𝐑𝐖+Ω𝐑𝐖)
= ∫_𝒟_a(𝒢_L,∗(Γ+Π)):(𝐑𝐖+Ω𝐑𝐖),
where in the last term we have integrated by parts and used the boundary conditions for 𝒢_L,∗(Γ+Π). Inserting (<ref>), (<ref>), (<ref>) and (<ref>) in (<ref>) and integrating by parts, the principle of virtual power becomes: given 𝐑∈𝒮𝒪, 𝐖∈𝒮,
∫_𝒟_a(𝐑^T(𝒢_L∗d𝐔/dt)+𝐖+Δ^2𝐖-𝐑^T(𝒢_L,∗(Γ+Π)))𝐖
+∫_𝒟_a(𝐑^TΠ
-𝐗+𝐘)𝐖
+∫_𝒟_a((𝒢_L∗d𝐔/dt)𝐖𝐑^T
+Ω+Δ^2Ω)Ω
-∫_𝒟_a1/2( 2 (𝒢_L,∗(Γ+Π))𝐖𝐑^T +(𝐌-2Π𝐖𝐑^T)+Λ-𝐂)Ω
+∫_Γ_a(𝐗-𝐘)𝐍:𝐖+∫_Γ_a𝐘𝐍::𝐖+1/2∫_Γ_a(Λ-𝐂)𝐍:Ω
+1/2∫_Γ_a𝐂𝐍::Ω=∫_𝒟_a𝐖_ext𝐖+∫_𝒟_aΩ_ext:Ω,
for all virtual velocities 𝐖,Ω, where 𝐍 is the outward normal to Γ_a.
Assuming regularity of the integrands in (<ref>), and assigning homogeneous Neumann boundary conditions for the internal forces 𝐘,𝐂, the principle of virtual power implies the following equations, valid in 𝒟_aT, which are coupled to the kinematic relations (<ref>) and (<ref>):
𝐑^T(𝒢_L∗d𝐔/dt)+𝐖+Δ^2𝐖-𝐑^T(𝒢_L,∗(Γ+Π))+𝐑^TΠ
-𝐗+𝐘=𝐖_ext,
𝐘𝐍=0 on Γ_a× (0,T),
(𝒢_L∗d𝐔/dt)𝐖𝐑^T+Ω+Δ^2Ω-(𝒢_L,∗(Γ+Π))𝐖𝐑^T
-1/2(𝐌-2Π𝐖𝐑^T)-1/2Λ+1/2𝐂=Ω_ext,
𝐂𝐍=0 on Γ_a× (0,T),
𝐑=Ω𝐑,
ΔΦ=(𝐑𝐖), Φ(𝐚, t) =𝐚 for (𝐚, t) ∈Γ_a× (0,T),
- P_LΔ𝐙=(𝐑𝐖), 𝐙(𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T).
Concerning boundary conditions, note that if we don't impose homogeneous Dirichlet boundary conditions on 𝐖 and Ω, then, in view of the boundary terms in (<ref>), we may set homogeneous Neumann boundary conditions of the form (𝐗-𝐘)𝐍=0 and (Λ-𝐂)𝐍=0 on Γ_a× (0,T).
We now assign general constitutive assumptions for Π,𝐌,𝐗,𝐘,Λ,𝐂,Γ in order for (<ref>) to satisfy the Clausius–Duhem dissipative equality in isothermal situations, which has the form
dψ/dt+(d D/dC(C),C)=-p_int(𝒟_a,C),
where C:=(𝐖,Ω,𝐙) is the actual velocity, ψ is the free energy of the system and D is the dissipation potential. We assume the following form for the free energy of the system:
ψ(𝐖,𝐑,𝐙):=1/2𝐖-𝐈^2+ψ(𝐖)+1/2𝐖^2+1/2𝐑^2
+k∫_𝒟_a |𝐙|+1/2𝐙^2+α_1/2𝐖^2,
where k≥ 0 is a material parameter, whose meaning will be specified later. We observe that the particular choice for the part of the free energy depending on 𝐙 will induce a constitutive law for Γ+Π representing conditional compatibility, as discussed in the Introduction and in the Remark <ref> . Moreover, α_1 ≥ 0 is a physical coefficient for the second gradient contribution, and
ψ(𝐖):=∫_𝒟_aI_SPD_α(𝐖),
where I_SPD_α is the indicator function of the set
SPD_α:={𝐖∈Sym(ℝ^3× 3): det𝐖≥α^3, tr(cof𝐖)≥ 2α^2, tr𝐖≥ 3α}.
If α> 0, the elements of SPD_α are positive definite matrices, due to the constraints on the determinant and on the other invariants in (<ref>). More precisely, the tensors belonging to the set (<ref>) are characterized by the fact that all of their eigenvalues are simultaneously not smaller than α.
Also, let us point out that, for
𝐖∈ L^2(𝒟_a;ℝ^3× 3),
ψ(𝐖)=∫_𝒟_aI_SPD_α(𝐖) =
0 if 𝐖∈SPD_α a.e. in 𝒟_a,
+∞ otherwise.
Then, the functional (<ref>) may be written also as
ψ(𝐖)=∫_𝒟_aI_S(𝐖)+∫_𝒟_aI_C_α(𝐖),
where I_S is the indicator function of the set of symmetric matrices and I_C_α is the indicator function of the set
C_α:={𝐖∈ M(ℝ^3× 3): det𝐖≥α^3, tr(cof𝐖)≥ 2α^2, tr𝐖≥ 3α}
for α> 0. Let us introduce for future convenience the notation
ψ_D(𝐀):= k∫_𝒟_a |𝐀| +1/2𝐀^2, for all 𝐀∈ L^2(𝒟_a;ℝ^3× 3).
The set SPD_α defined in (<ref>) is closed and convex for all α≥ 0, hence the indicator function I_SPD_α(·) is a convex and l.s.c. function. Moreover, the function I_S(𝐖)+I_C_α(𝐖) is also a convex and l.s.c. function. The proofs of these properties can be found e.g. in <cit.>.
The fact that the free energy (<ref>) is convex implies the derivation of constitutive laws for the material that are monotone. From a mechanical point of view, roughly speaking this means that the more you push the more the material is affected by the deformation.
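Since membership in SPD_α only involves the three scalar invariants appearing in (<ref>), it is straightforward to test numerically. The following Python sketch (function name ours; it assumes an invertible argument so that the cofactor matrix can be computed through the inverse) checks the three constraints for a given symmetric matrix.

```python
import numpy as np

def in_SPD_alpha(W, alpha):
    """Check det W >= alpha^3, tr(cof W) >= 2*alpha^2 and tr W >= 3*alpha
    (assumes W invertible so that cof W = det(W) * inv(W)^T)."""
    W = np.asarray(W, dtype=float)
    detW = np.linalg.det(W)
    cofW = detW * np.linalg.inv(W).T
    return (detW >= alpha**3
            and np.trace(cofW) >= 2.0 * alpha**2
            and np.trace(W) >= 3.0 * alpha)

alpha = 0.5
print(in_SPD_alpha(np.diag([1.0, 0.8, 0.6]), alpha))   # True: all eigenvalues >= alpha
print(in_SPD_alpha(0.1 * np.eye(3), alpha))            # False: the constraints are violated
```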
Moreover, we assume the following form for the dissipation potential of the system, containing viscous contributions:
D(𝐖,Ω):=1/2𝐖^2+1/2Ω^2 + ψ_A ( Ω )
+α_2/2𝐖^2 +α_3/2𝐖^2+α_4/2Ω^2,
where α_2,α_3,α_4≥0 are physical parameters,
ψ_A ( Ω ):=∫_𝒟_aI_A(Ω)
and I_A is the indicator function of the set of antisymmetric matrices.
The terms proportional to the non-negative constants α_1,…,α_4 in (<ref>) and (<ref>) introduce higher order gradient and time derivative terms in the system dynamics, and they will be activated (i.e., they will be taken different from zero) only when high regularity in space and time will be required to prove existence of a solution to the system.
Using (<ref>), (<ref>) and (<ref>) in (<ref>), we obtain the following constitutive assumptions
𝐑^TΠ=𝐖-𝐈
+χ_α+𝐖,
where
χ_α∈∂ψ(𝐖);
𝐌=2Π𝐖𝐑^T-2𝑆,
where 𝑆∈∂ψ_A(Ω);
Σ:=-(Γ+Π)∈∂ψ_D(𝐙)
=
k𝐙/|𝐙|+𝐙 if |𝐙| ≠ 0,
any 𝐌_D, with |𝐌_D|≤ k, if |𝐙| = 0;
𝐗=𝐖+α_2𝐖;
𝐘=α_1𝐖+α_3𝐖;
Λ=(𝐑)𝐑^T+Ω;
𝐂=α_4Ω.
We observe that ∂ψ_D is a maximal monotone operator in L^2(𝒟_a;ℝ^3× 3) and (<ref>) entails that
𝐙=0 if and only if |Σ|≤ k.
Hence, if the norm of the reaction term Γ+Π in (<ref>) is less than or equal to the threshold k, the dislocation tensor 𝐃=curl𝐙 in (<ref>) is the null tensor, and the motion is compatible.
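Pointwise, the inclusion Σ∈∂ψ_D and the equivalence (<ref>) can be made concrete by elementary formulas: writing 𝐀 for the tensor argument of ∂ψ_D, one has Σ=k𝐀/|𝐀|+𝐀 for 𝐀≠0, and conversely 𝐀 vanishes exactly when |Σ|≤ k, while for |Σ|>k it has modulus |Σ|-k and the direction of Σ. The following Python sketch (an illustrative pointwise implementation of ours, not part of the model) encodes this activation mechanism.

```python
import numpy as np

def sigma_from_A(A, k):
    """Pointwise law Sigma = k*A/|A| + A, valid for A != 0."""
    nrm = np.linalg.norm(A)
    if nrm == 0.0:
        raise ValueError("for A = 0, Sigma is any tensor with |Sigma| <= k")
    return k * A / nrm + A

def A_from_sigma(S, k):
    """Inverse relation: A = 0 whenever |Sigma| <= k (no dislocation activity),
    otherwise A = (|Sigma| - k) * Sigma / |Sigma|."""
    nrm = np.linalg.norm(S)
    if nrm <= k:
        return np.zeros_like(S)
    return (nrm - k) * S / nrm

k = 1.0
S_small = 0.5 * np.eye(3)     # |Sigma| < k: the motion stays compatible
S_large = 2.0 * np.eye(3)     # |Sigma| > k: dislocations are activated
print(np.linalg.norm(A_from_sigma(S_small, k)))   # 0.0
print(np.linalg.norm(A_from_sigma(S_large, k)))   # > 0
```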
Moreover, we observe that the constitutive assumptions (<ref>)–(<ref>) comply with the principle of objectivity, that is, the property
p_int(𝒟_a,C_rigid)=0
is satisfied for the rigid virtual velocities, i.e., for 𝐖=0, Ω=𝐀, 𝐕=ΩΦ and 𝐙=Ω𝐙, for any spatially constant tensor 𝐀∈Skew(ℝ^3× 3). Indeed, formally we have that
p_int(𝒟_a,C_rigid)=-∫_𝒟_a𝐒:𝐀+∫_𝒟_a(Γ+ Π): 𝐙
=-∫_𝒟_a∂ψ_D(𝐙):𝐀𝐙=0.
We remark that even in presence of incompatibility, the principle of objectivity is satisfied.
The subdifferential ∂ψ is a maximal monotone operator in L^2(𝒟_a;ℝ^3× 3) as well, and the inclusion χ_α∈∂ψ(𝐖) means that
𝐖 belongs to the domain of ∂ψ and
∫_𝒟_aχ_α: ( 𝐖 - 𝐖) + ψ (𝐖) ≤ψ (𝐖 )
for all 𝐖∈ L^2(𝒟_a;ℝ^3× 3).
In view of (<ref>)–(<ref>) and (<ref>)–(<ref>), it is not difficult to show that
(<ref>) can be equivalently rewritten as
𝐖∈ L^2(𝒟_a;ℝ^3× 3), 𝐖∈Sym(ℝ^3× 3) almost everywhere in 𝒟_a, and
∫_𝒟_aχ_α: ( 𝐖 - 𝐖) +
∫_𝒟_a I_C_α (𝐖) ≤∫_𝒟_a I_C_α (𝐖 )
for all symmetric matrices 𝐖∈ L^2(𝒟_a;ℝ^3× 3).
Then, it becomes clear that, setting
ψ_C_α ( 𝐖 ):=∫_𝒟_aI_C_α(𝐖) ,
the inclusion χ_α∈∂ψ(𝐖) can be formulated as
𝐖∈ L^2(𝒟_a;ℝ^3× 3), 𝐖∈Sym(ℝ^3× 3) a.e. in 𝒟_a, and χ_α∈∂ψ_C_α (𝐖).
Inserting (<ref>)–(<ref>) in (<ref>) we finally obtain
𝐑^T(𝒢_L∗d𝐔/dt)+𝐖+Δ^2𝐖+𝐑^T(𝒢_L,∗(Σ))+𝐖-𝐈
+χ_α+𝐖-Δ𝐖-α_2Δ𝐖+α_1Δ^2𝐖+α_3Δ^2𝐖=𝐖_ext,
(𝒢_L∗d𝐔/dt)𝐖𝐑^T+Ω+Δ^2Ω+(𝒢_L,∗(Σ))𝐖𝐑^T
+𝑆-1/2((𝐑)𝐑^T)-1/2ΔΩ+1/2α_4Δ^2Ω=Ω_ext,
χ_α∈∂ψ(𝐖), 𝑆∈∂ψ_A(Ω), Σ∈∂ψ_D(𝐙),
𝐑=Ω𝐑,
ΔΦ=(𝐑𝐖),
-P_LΔ𝐙=(𝐑𝐖),
valid in 𝒟_aT, with boundary conditions
𝐖=𝐈, 𝐖=0, ( α_1𝐖+α_3𝐖)𝐍=0 on Γ_a× (0,T),
𝐑=𝐈, Ω=0, ( Ω)𝐍=0 on Γ_a× (0,T),
Φ(𝐚, t) =𝐚, 𝐙(𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T),
and initial conditions
𝐖(𝐚,0)=𝐈, 𝐖(𝐚,0)=0, 𝐑(𝐚,0)=𝐈, Ω(𝐚,0)=0, 𝐙(𝐚,0)=0 for 𝐚∈𝒟_a.
§.§ An Example
We consider the case in which the evolution is simply given by
Φ (𝐚,t)=𝐚, 𝐔(𝐚,t)=0, 𝐑(𝐚,t)=𝐈, 𝐖(𝐚,t)=𝐈, , 𝐙(𝐚,t)=0.
This yields a solution of equations (<ref>)_1, (<ref>)_2, (<ref>)_3 and (<ref>)_6 if
𝐖_ext= Sym ({𝒢_L,
∗Σ} ) ,
Ω_ext= Skew ({
𝒢_L,∗Σ} ),
are given by the internal force
|Σ(𝐚,t)|≤ k,
which is assumed to be known.
The external actions do not work and do not result in motion. They have no
macroscopic effect. But they produce dislocations which modify the internal
stress state Σ(𝐚,t). To produce a motion the external
actions have to be increased.
We have
𝐖_ext+Ω_ext=
{𝒢_L,∗Σ} ,
and for a virtual velocity 𝐕 with stretch and angular velocities
Sym{𝐕} and Skew{𝐕},
respectively, we have that
∫_𝒟_a(𝐖_ext: Sym(
𝐕)+Ω_ext: Skew(
𝐕))
=∫_𝒟_a{𝒢_L,∗Σ} :𝐕=0.
Then {𝒢_L,∗
Σ} is a reaction: a reaction to the compatibility condition.
Inside 𝒟_a the densities of power due to the evolution of the
dislocations are not null. But their total sum is null.
§ QUASI-STATIONARY CASE
In this section we study the existence of solutions to (<ref>) in the quasi-stationary case, i.e., considering p_acc(𝒟_a,C)=0 for all C∈𝒞 and thus neglecting the inertia terms in (<ref>)_1 and (<ref>)_2. We deal with the case k>0 and let α_1,…,α_4=0, then we study the existence and regularity of a global in time weak solution, which will be proved to be also a strong solution.
Consider the following reduced version of system (<ref>):
𝐑^T(𝒢_L,∗(Σ))+ 𝐖-𝐈+χ_α+𝐖-Δ𝐖=𝐖_ext(𝐖,t),
(𝒢_L,∗(Σ))𝐖𝐑^T+𝑆-1/2((𝐑)𝐑^T)-1/2ΔΩ=Ω_ext(𝐑,t),
χ_α∈∂ψ(𝐖), 𝑆∈∂ψ_A(Ω), Σ∈∂ψ_D(𝐙),
ΔΦ=(𝐑𝐖),
- P_LΔ𝐙=(𝐑𝐖),
valid in 𝒟_aT, with boundary conditions
𝐖=𝐑=𝐈, Ω=0 on Γ_a× (0,T),
Φ(𝐚, t) =𝐚, 𝐙(𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T),
and initial conditions
𝐖(𝐚,0)=𝐖_0(𝐚), 𝐑(𝐚,0)=𝐑_0(𝐚)
for 𝐚∈𝒟_a,
where 𝐖_0 =𝐑_0=𝐈 on Γ_a.
For simplicity, in the following
we will take 𝐑_0=𝐈.
We observe that, given Ω∈𝒜, the differential equation (<ref>)_4 and the initial condition in (<ref>), with 𝐑_0=𝐈, uniquely define a rotation tensor
𝐑(𝐚,t)=e^∫_0^tΩ(𝐚,s)ds for (𝐚, t) ∈𝒟_a× (0,T).
Since Ω∈𝒜, we have that 𝐑:𝐑=e^∫_0^tΩ(𝐚,s)dse^-∫_0^tΩ(𝐚,s)ds:𝐈=3, hence
𝐑
∈ L^∞(𝒟_aT,ℝ^3× 3).
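For illustration, the representation (<ref>) is easy to evaluate once a history of the angular velocity tensor at a material point is given: one accumulates Θ(t)=∫_0^tΩ(𝐚,s)ds by quadrature and takes the matrix exponential. The Python sketch below (function name and trapezoidal quadrature are our own choices) does this and checks that the result is indeed a rotation.

```python
import numpy as np
from scipy.linalg import expm

def rotation_history(Omega, dt):
    """Given samples Omega[n] of a skew-symmetric angular-velocity tensor at one material
    point, accumulate Theta(t) = int_0^t Omega ds (trapezoidal rule) and return the list
    of rotations R(t) = exp(Theta(t))."""
    Theta = np.zeros((3, 3))
    Rs = [np.eye(3)]
    for n in range(1, len(Omega)):
        Theta = Theta + 0.5 * dt * (Omega[n] + Omega[n - 1])
        Rs.append(expm(Theta))
    return Rs

# usage with a made-up angular-velocity history
rng = np.random.default_rng(2)
Omega = []
for v in rng.standard_normal((50, 3)):
    Omega.append(np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]))
R = rotation_history(Omega, dt=0.01)[-1]
print(np.linalg.norm(R @ R.T - np.eye(3)), np.linalg.det(R))   # ~0 and ~1
```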
We introduce the variable Θ(𝐚,t):=∫_0^tΩ(𝐚,s)ds,
(𝐚, t) ∈𝒟_a× (0,T),
and rewrite the system (<ref>) as
e^-Θ(𝒢_L,∗(Σ))+𝐖-𝐈+χ_α+𝐖-Δ𝐖=𝐖_ext(𝐖,t),
(𝒢_L,∗(Σ))𝐖e^-Θ +𝑆-1/2ΔΘ-1/2ΔΘ=Ω_ext(Θ,t),
χ_α∈∂ψ(𝐖), 𝑆∈∂ψ_A(Θ), Σ∈∂ψ_D(𝐙),
ΔΦ=(e^Θ𝐖),
- P_LΔ𝐙=(e^Θ𝐖),
with boundary conditions
𝐖=𝐈, Θ=Θ=0 on Γ_a× (0,T),
Φ(𝐚, t) =𝐚, 𝐙(𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T),
and initial conditions
𝐖(𝐚,0)=𝐖_0(𝐚), Θ(𝐚,0)=0 for 𝐚∈𝒟_a.
We observe that an initial condition for 𝐙 can be defined by assuming that (<ref>)_5 is valid for t=0, i.e.,
𝐙(𝐚,0)= (𝒢_L,∗𝐖_0 ) (𝐚) for 𝐚∈𝒟_a.
In the case 𝐖_0=𝐈, then we have 𝐙(𝐚,0)=0 for 𝐚∈𝒟_a.
Note that the equations (<ref>)_1 and (<ref>)_2 are coupled. Also, since ψ_A is defined as the integral of the indicator function I_A of the set of antisymmetric matrices, and
𝑆 should satisfy
𝑆∈∂ψ_A(Ω), that is, 𝑆∈∂ I_A(Ω) a.e. in 𝒟_a, then 𝑆 can be recovered a posteriori in terms of the symmetric part of (<ref>)_2, that is,
𝑆= - Sym((𝒢_L,∗(Σ))𝐖e^-Θ).
In the case in which 𝐖∈𝒮 and Θ∈𝒜, the system (<ref>) becomes
Sym(e^-Θ(𝒢_L,∗(Σ)))+𝐖-𝐈+χ_α+𝐖-Δ𝐖=𝐖_ext(𝐖,t),
Skew((𝒢_L,∗(Σ))𝐖e^-Θ) -1/2ΔΘ-1/2ΔΘ=Ω_ext(Θ,t),
χ_α∈∂ψ(𝐖), Σ∈∂ψ_D(𝐙),
ΔΦ=(e^Θ𝐖),
- P_LΔ𝐙=(e^Θ𝐖),
where the inclusion χ_α∈∂ψ(𝐖) is expressed as in (<ref>), with boundary conditions
𝐖=𝐈, Θ=Θ=0 on Γ_a× (0,T),
Φ(𝐚, t) =𝐚, 𝐙(𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T),
and initial conditions
𝐖(𝐚,0)=𝐖_0(𝐚), Θ(𝐚,0)=0 for 𝐚∈𝒟_a.
We state now the main theorem of the present paper. We start by introducing the following assumptions on the data:
A1: 𝒟_a ⊂ℝ^3 is a bounded domain and the boundary Γ_a is of class C^3;
A2: The initial datum has the regularity 𝐖_0∈ H^1(𝒟_a;ℝ^3× 3)∩𝒮, with 𝐖_0∈SPD_α almost everywhere in 𝒟_a for a given α>0, and with 𝐖_0=𝐈 on Γ_a× (0,T);
A3: The forcing term 𝐖_ext: H^1(𝒟_a;Sym(ℝ^3× 3))× (0,T)→ L^2(𝒟_a;Sym(ℝ^3× 3)) is measurable in t∈ (0,T) and Lipschitz continuous in 𝐖∈ H^1(𝒟_a;Sym(ℝ^3× 3)), and it satisfies 𝐖_ext(0, t)=0 for all t∈ [0,T] and
𝐖_ext(𝐖_1,t)-𝐖_ext(𝐖_2,t)_L^2(𝒟_a;Sym(ℝ^3× 3))≤ L𝐖_1-𝐖_2_H^1(𝒟_a;Sym(ℝ^3× 3)),
for a.e. t∈ (0,T), for all 𝐖_1,𝐖_2∈ H^1(𝒟_a;Sym(ℝ^3× 3)) and for some L∈ℝ.
Similarly, the forcing term Ω_ext: H^1(𝒟_a;Skew(ℝ^3× 3))× (0,T)→ L^2(𝒟_a;Skew(ℝ^3× 3)) is measurable in t∈ (0,T) and Lipschitz continuous in Θ∈ H^1(𝒟_a;Skew(ℝ^3× 3)), and it satisfies Ω_ext(0, t)=0 for all t∈ [0,T] and
Ω_ext(Θ_1,t)-Ω_ext(Θ_2,t)_L^2(𝒟_a;Skew(ℝ^3× 3))≤ GΘ_1-Θ_2_H^1(𝒟_a;Skew(ℝ^3× 3)),
for a.e. t∈ (0,T), for all Θ_1,Θ_2∈ H^1(𝒟_a;Skew(ℝ^3× 3)) and for some G∈ℝ.
Let assumptions A1-A3 be satisfied. Then, for any T>0 there is a sextuplet
(𝐖,Θ,χ_α, Σ,Φ,𝐙), with
𝐖∈ L^∞(0,T;H^1(𝒟_a;Sym(ℝ^3× 3)))
∩ H^1(0,T;L^2(𝒟_a;Sym(ℝ^3× 3)))∩ L^2(0,T;H^2(𝒟_a,Sym(ℝ^3× 3))),
and 𝐖(𝐚,t)∈SPD_α for a.e. (𝐚,t)∈𝒟_aT,
Θ∈ H^1(0,T;H^2(𝒟_a,Skew(ℝ^3× 3))),
χ_α∈ L^2(0,T;L^2(𝒟_a;ℝ^3× 3)),
Σ∈ L^∞(0,T;L^2(𝒟_a;ℝ^3× 3)),
Φ∈ L^∞(0,T;H^2(𝒟_a,ℝ^3)∩ H^1(𝒟_a;ℝ^3))∩ L^2(0,T;H^3(𝒟_a;ℝ^3)),
𝐙∈ L^∞(0,T;H^2(𝒟_a,ℝ^3× 3)∩ H_0,^1(𝒟_a,ℝ^3× 3))∩ L^2(0,T;H^3(𝒟_a;ℝ^3)),
which solves the system (<ref>)–(<ref>) with equations and conditions satisfied almost everywhere.
Let us introduce the finite dimensional spaces which will be used to formulate the Galerkin ansatz to approximate the solutions of the system (<ref>)–(<ref>). Let {ξ_i}_i∈ℕ be the eigenfunctions of the Laplace operator with homogeneous Dirichlet boundary conditions, i.e.,
-Δξ_i=γ_i ξ_i in 𝒟_a, ξ_i =0 on Γ_a,
with 0<γ_0≤γ_1 ≤…≤γ_m→∞. The sequence {ξ_i}_i∈ℕ can be chosen as an orthonormal basis in L^2(𝒟_a) and an orthogonal basis in H^1(𝒟_a), and, thanks to Assumption A1, {ξ_i}_i∈ℕ⊂H^2(𝒟_a).
We then introduce the functions {𝐒_6k+i+j+n_i}_k∈ℕ; i,j=0,…,2; j≥ i defined by
𝐒_6k+i+j+(i>0):=ξ_k(𝐞_i⊗𝐞_j+𝐞_j⊗𝐞_i),
where 𝐞_i, i=0, …, 2, are the elements of the canonical basis of ℝ^3, and n_i equals 0 when i=0 and 1 when i>0. We observe that, given k∈ℕ, the elements 𝐒_6k+i+j+n_i span the six-dimensional linear eigenspace of symmetric tensors associated to the eigenvalue γ_k.
We also introduce the projection operator
PS_m:H^1(𝒟_a;ℝ^3× 3)→span{𝐒_0,𝐒_1,…,𝐒_6m+5}.
We moreover introduce the functions {𝐀_3k+i+j-1}_k∈ℕ; i,j=0,…,2; j> i defined by
𝐀_3k+i+j-1:=ξ_k(𝐞_i⊗𝐞_j-𝐞_j⊗𝐞_i).
We observe that, given k∈ℕ, the elements 𝐀_3k+i+j-1 span the three-dimensional linear eigenspace of antisymmetric tensors associated to the eigenvalue γ_k.
We then introduce the projection operator
PA_m:H^1(𝒟_a;ℝ^3× 3)→span{𝐀_0,𝐀_1,…,𝐀_3m+2}.
We make the Galerkin ansatz
𝐖_m(𝐚, t)=𝐚+∑_i=0^6m+5x_i^m(t)𝐒_i(𝐚), Θ_m(𝐚, t)=∑_i=0^3m+2y_i^m(t)𝐀_i(𝐚),
(𝐚, t) ∈𝒟_a × (0,T),
with
𝐒_i∈H^2(𝒟_a;Sym(ℝ^3× 3))∩ H_0^1(𝒟_a;Sym(ℝ^3× 3)),
𝐀_i∈H^2(𝒟_a;Skew(ℝ^3× 3))∩ H_0^1(𝒟_a;Skew(ℝ^3× 3)),
to approximate the solutions 𝐖 and Θ of the system (<ref>)–(<ref>). We observe that through the Galerkin ansatz (<ref>) we are enforcing by construction that 𝐖_m∈𝒮 and Θ_m∈𝒜, hence the system (<ref>)–(<ref>) is equivalent to the system (<ref>)–(<ref>).
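To make the ansatz (<ref>) fully concrete one needs the scalar eigenfunctions ξ_k and the constant tensors 𝐞_i⊗𝐞_j±𝐞_j⊗𝐞_i. The Python sketch below builds the six symmetric and the three antisymmetric tensor basis fields attached to a single eigenfunction; for definiteness it uses the sine eigenfunctions of the Dirichlet Laplacian on the unit cube, which is a concrete choice of ours and is not required by the abstract construction.

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

def sine_eigenfunction(k, x):
    """Dirichlet-Laplacian eigenfunction on the unit cube (a concrete stand-in for xi_k)."""
    return np.prod([np.sin(np.pi * k[d] * x[..., d]) for d in range(3)], axis=0)

def symmetric_tensor_basis(k, x):
    """The six fields xi_k (e_i x e_j + e_j x e_i), j >= i."""
    e, xi = np.eye(3), sine_eigenfunction(k, x)
    return [xi[..., None, None] * (np.outer(e[i], e[j]) + np.outer(e[j], e[i]))
            for i, j in combinations_with_replacement(range(3), 2)]

def antisymmetric_tensor_basis(k, x):
    """The three fields xi_k (e_i x e_j - e_j x e_i), j > i."""
    e, xi = np.eye(3), sine_eigenfunction(k, x)
    return [xi[..., None, None] * (np.outer(e[i], e[j]) - np.outer(e[j], e[i]))
            for i, j in combinations(range(3), 2)]

# usage on a small grid
x = np.stack(np.meshgrid(*3 * [np.linspace(0.0, 1.0, 8)], indexing="ij"), axis=-1)
print(len(symmetric_tensor_basis((1, 1, 1), x)),
      len(antisymmetric_tensor_basis((1, 1, 1), x)))   # 6 3
```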
We consider a Faedo–Galerkin approximation of a regularized version of (<ref>), with solutions expressed in the form (<ref>) and
where the convex functions (cf. Remark <ref>) ψ_C_α and ψ_D and their subdifferentials ∂ψ_C_α and ∂ψ_D are replaced by the Moreau–Yosida approximations ψ_C_α^λ and ψ_D^λ, ∂ψ_C_α^λ and ∂ψ_D^λ, depending on a regularization parameter λ >0.
We refer, e.g., to <cit.> for definitions and properties of these approximations, recalling simply that if f:L^2(𝒟_a;ℝ^3× 3)→ [0,+∞] is a proper convex lower semicontinuous function and ∂ f denotes its subdifferential, then
∂ f^λ :=I-(I+λ∂ f )^-1/λ, λ∈ (0,1),
where I here denotes the identity operator. In particular, ∂ f^λ is a monotone and 1/λ-Lipschitz continuous function. Moreover, due the special form of ψ_D defined in (<ref>),
we have that the following bounds are valid uniformly in λ:
1/2𝐀^2≤ C+ψ_D^λ(𝐀), for all 𝐀∈ L^2(𝒟_a;ℝ^3× 3),
∂ψ_D^λ(𝐀)^2≤ C( ψ_D^λ(𝐀)+1), for all 𝐀∈ L^2(𝒟_a;ℝ^3× 3) ,
where the constant C is also independent of k provided that 0< k ≤k̄, for some
k̄>0.
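For the specific functional ψ_D, whose pointwise density is k|𝐀|+|𝐀|^2/2, the resolvent (I+λ∂ψ_D)^-1 admits a simple closed form of shrinkage type, and the Yosida approximation ∂ψ_D^λ then follows from (<ref>). The Python sketch below (a pointwise computation of ours, given only for illustration) implements the two maps and shows numerically that ∂ψ_D^λ(𝐁) approaches k𝐁/|𝐁|+𝐁 as λ→0 for 𝐁≠0.

```python
import numpy as np

def resolvent_psi_D(B, lam, k):
    """Pointwise resolvent (I + lam*dpsi_D)^{-1}(B) for the density k|A| + |A|^2/2:
    shrink B toward zero, returning 0 whenever |B| <= lam*k."""
    nrm = np.linalg.norm(B)
    if nrm <= lam * k:
        return np.zeros_like(B)
    return (nrm - lam * k) / ((1.0 + lam) * nrm) * B

def yosida_psi_D(B, lam, k):
    """Yosida approximation dpsi_D^lam(B) = (B - resolvent(B)) / lam."""
    return (B - resolvent_psi_D(B, lam, k)) / lam

# as lam -> 0 the approximation recovers k*B/|B| + B away from B = 0
B, k = np.diag([2.0, 1.0, 0.5]), 1.0
for lam in (0.5, 0.1, 0.01):
    exact = k * B / np.linalg.norm(B) + B
    print(lam, np.linalg.norm(yosida_psi_D(B, lam, k) - exact))   # decreases with lam
```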
Given (<ref>), we define the approximations
χ_α,m = ∂ψ_C_α^λ(𝐖_m), Σ_m=∂ψ_D^λ((𝐙_m)),
ΔΦ_m=(e^Θ_m𝐖_m), - P_LΔ𝐙_m=(e^Θ_m𝐖_m),
with Φ_m(𝐚, t) =𝐚, 𝐙_m (𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T).
Given the elliptic problems in (<ref>) with approximated right hand sides, we then have
Φ_m(𝐚, t)= 𝐚 -𝒢_L∗(e^Θ_m𝐖_m)(𝐚, t),
𝐙_m(𝐚, t)=𝒢_L,∗(e^Θ_m𝐖_m)(𝐚, t)
for (𝐚, t) ∈𝒟_a× (0,T).
We project the equation (<ref>)_1 for 𝐖_m onto span{𝐒_0,𝐒_1,…,𝐒_6m+5}, the equation (<ref>)_2 for Θ_m onto
span{𝐀_0,𝐀_1,…,𝐀_3m+2},
with 𝐙_m defined as in (<ref>) and χ_α,m, Σ_m defined in (<ref>), obtaining the following Galerkin approximation of (<ref>):
∫_𝒟_aSym(e^-Θ_m(
𝒢_L,∗[∂ψ_D^λ((𝒢_L,∗(e^Θ_m𝐖_m)))]))𝐒_i
+ ∫_𝒟_a(𝐖_m-𝐈+∂ψ_C_α^λ(𝐖_m)+𝐖_m)𝐒_i+∫_𝒟_a𝐖_m::𝐒_i
=∫_𝒟_a𝐖_ext(𝐖_m,t)𝐒_i,
∫_𝒟_aSkew(((
𝒢_L,∗[∂ψ_D^λ((𝒢_L,∗(e^Θ_m𝐖_m)))])𝐖_me^-Θ_m))𝐀_j
+1/2∫_𝒟_aΘ_m::𝐀_j+1/2∫_𝒟_aΘ_m::𝐀_j=∫_𝒟_aΩ_ext(Θ_m,t)𝐀_j,
ΔΦ_m=(e^Θ_m𝐖_m),
- P_LΔ𝐙_m=(e^Θ_m𝐖_m),
in [0,t], with 0<t≤ T, for i=0, …, 6m+5, j=0, …, 3m+2, with boundary conditions as in (<ref>) and with initial conditions (cf. the assumption A2)
𝐖_m(𝐚,0)=𝐚+PS_m(𝐖_0-𝐈)(𝐚), Θ_m(𝐚,0)=0, 𝐚∈𝒟_a.
The equations (<ref>)_1 and (<ref>)_2 are decoupled from the other equations in the system (<ref>) and define a collection of initial value problems for a system of coupled ODEs of the form
d/dtx_i^m=-(1+γ_i)x_i^m+∫_𝒟_a(-∂ψ_C_α^λ(𝐈 +∑_lx_l^m𝐒_l)+𝐖_ext(𝐈 +∑_lx_l^m𝐒_l,t))𝐒_i
= ∫_𝒟_aPS_m[e^-∑_ly_l^m𝐀_l(𝒢_L,∗[∂ψ_D^λ((
𝒢_L,
5cm ∗(e^∑_ry_r^m𝐀_r(𝐈 +∑_kx_k^m𝐒_k))))])]𝐒_i,
d/dty_j^m=-y_j^m+2/γ_j∫_𝒟_aΩ_ext(Θ_m,t)𝐀_j
-2/γ_j∫_𝒟_aPA_m[(𝒢_L,∗[∂ψ_D^λ((
𝒢_L,
1cm ∗(e^∑_ry_r^m𝐀_r(𝐈 +∑_kx_k^m𝐒_k)
)))]) (𝐈 +∑_kx_k^m𝐒_k)e^-∑_py_p^m𝐀_p]𝐀_j,
x_i^m(0)=∫_𝒟_a(𝐖_0-𝐈)𝐒_i, y_j^m(0)=0, i=0, …, 6m+5, j=0, …, 3m+2.
Due to Assumption A3, to the Lipschitz continuity of ∂ψ_C_α^λ and ∂ψ_D^λ and to the regularity in space of the functions 𝐒_i, 𝐀_j, the system (<ref>) is a coupled system of first-order ODEs in the variables x_i^m, y_j^m, with a right hand side which is measurable in time and continuous in the independent variables. Then, we can apply Carathéodory's existence theorem to infer that there exist a sufficiently small t_1 with 0<t_1≤ T and a local solution (x_i^m,y_j^m) of (<ref>), for i=0, …, 6m+5, j=0, …, 3m+2, which is absolutely continuous. Once we have a solution to (<ref>),
dealing with the elliptic problems with regular right-hand sides in (<ref>) leads to the elements Φ_m and 𝐙_m solving (<ref>)_3 and (<ref>)_4, respectively.
Next,
thanks to some uniform estimates, we will extend these solutions by continuity to the interval [0,T] and we will study the limit as m→∞ and λ→ 0. In particular, we will first study the limit as m→∞, and then the limit as λ→ 0 in the resulting system.
We now deduce a priori estimates, uniform in the discretization parameter m and in the regularization parameter λ, for the solutions of system (<ref>), which can be rewritten, combining the equations over i=0, …, 6m+5 and j=0, …, 3m+2, as
∫_𝒟_aSym(e^-Θ_m(𝒢_L,∗(
Σ_m)))𝐖_m
+∫_𝒟_a(𝐖_m-𝐈+χ_α,m+𝐖_m)𝐖_m
+∫_𝒟_a𝐖_m::𝐖_m=∫_𝒟_a𝐖_ext(𝐖_m,t)𝐖_m,
∫_𝒟_aSkew((𝒢_L,∗(Σ_m))𝐖_me^-Θ_m)Ω_m +
1/2∫_𝒟_aΘ_m::Ω_m
+1/2∫_𝒟_aΘ_m::Ω_m=∫_𝒟_aΩ_ext(Θ_m,t)Ω_m,
χ_α,m = ∂ψ_C_α^λ(𝐖_m), Σ_m=∂ψ_D^λ(𝐙_m),
ΔΦ_m=(e^Θ_m𝐖_m),
-P_LΔ𝐙_m=(e^Θ_m𝐖_m),
for a.e. t ∈ [0,t_1] and all 𝐖_m ∈span{𝐒_0,𝐒_1,…,𝐒_6m+5}, Ω_m ∈span{𝐀_0,𝐀_1,…,𝐀_3m+2}, and with initial conditions defined in (<ref>).
The first a-priori estimate is obtained by taking 𝐖_m=𝐖_m in (<ref>)_1 and Ω_m=Θ_m in (<ref>)_2. Moreover, we take the time derivative of (<ref>)_5, multiply it by 𝒢_L,∗( Σ_m) and integrate over 𝒟_a.
We observe from (<ref>)_5 and from the regularity in space of the functions 𝐒_i, 𝐀_j that 𝐙_m∈ H^3(𝒟_a;ℝ^3× 3), for any t∈ [0,t_1]. Hence, from (<ref>)_3 and the Lipschitz continuity of ∂ψ_D^λ we obtain that Σ_m∈ H^1(𝒟_a;ℝ^3× 3), and as a consequence the L^2(𝒟_a;ℝ^3× 3) scalar product of equation (<ref>)_5 with the element 𝒢_L,∗( Σ_m)∈ H^2(𝒟_a;ℝ^3× 3) is well defined for any t∈ [0,t_1].
Finally we sum all the previous contributions and integrate in time between 0 and t∈ [0,t_1]. Observing that 𝐖_m=𝐖_m^T and Θ_m=-Θ_m^T, and since 𝐀𝐁=𝐀^T𝐁^T for any 𝐀,𝐁∈ M(ℝ^3× 3), we have that
1/2∫_𝒟_a(e^-Θ_m(
𝒢_L,∗(Σ_m))+[(
𝒢_L,∗(Σ_m))]^Te^Θ_m)𝐖_m
= ∫_𝒟_ae^-Θ_m(
𝒢_L,∗(Σ_m))𝐖_m=∫_𝒟_a(𝒢_L,∗(Σ_m))e^Θ_m𝐖_m,
and
1/2∫_𝒟_a(( 𝒢_L,∗(Σ_m))𝐖_me^-Θ_m-e^Θ_m𝐖_m[(𝒢_L,∗(Σ_m))]^T)Θ_m
=∫_𝒟_a(𝒢_L,∗(Σ_m))𝐖_me^-Θ_mΘ_m
=∫_𝒟_a(𝒢_L,∗(Σ_m))Θ_me^Θ_m𝐖_m.
Also, the contribution from (<ref>)_5, after integration by parts, gives that
∫_𝒟_a-Δ𝐙_m𝒢_L,∗(Σ_m)=∫_𝒟_a𝐙_mΣ_m=∫_𝒟_a𝐙_m∂ψ_D^λ(𝐙_m)
=∫_𝒟_ae^Θ_m𝐖_m(
𝒢_L,∗(Σ_m))+∫_𝒟_aΘ_me^Θ_m𝐖_m( 𝒢_L,∗(Σ_m)).
Hence, for any t∈ [0,t_1], we deduce that
1/2𝐖_m-𝐈^2
+ψ_C_α^λ(𝐖_m)
+1/2𝐖_m^2+1/4Θ_m^2+1/4Θ_m^2
+ψ_D^λ(𝐙_m)+∫_0^t_1𝐖_m
^2+1/2∫_0^t_1Θ_m^2
= 1/2𝐖_m(0)-𝐈^2
+ψ_C_α^λ(𝐖_m(0))
+1/2𝐖_m(0)^2
+1/4Θ_m(0)^2
+1/4Θ_m(0)^2+ψ_D^λ(𝐙_m(0))+1/2∫_𝒟_at_1Θ_mΘ_m
+ ∫_𝒟_at_1𝐖_ext(𝐖_m,t)𝐖_m+ ∫_𝒟_at_1Ω_ext(Θ_m,t)Θ_m
≤ C‖𝐖_m(0)‖ +C+ 1/2∫_0^t_1𝐖_m^2+1/4∫_0^t_1Θ_m^2
+C∫_0^t_1G^2(Θ_m^2+Θ_m^2)+C∫_0^t_1L^2(𝐖_m-𝐈^2+𝐖_m^2),
where we added 1/4d/dtΘ_m^2 to the left and 1/2∫_𝒟_aΘ_mΘ_m to the right, and used (<ref>) (considering that Θ_m is antisymmetric), Assumptions A2 and A3. Thanks to the Gronwall lemma, we thus have that
1/2𝐖_m-𝐈^2
+ψ_C_α^λ(𝐖_m)
+1/2𝐖_m^2+1/4Θ_m^2+1/4Θ_m^2
+ψ_D^λ(𝐙_m)+1/2∫_0^t_1𝐖_m^2+1/2∫_0^t_1Θ_m^2≤ C,
where the constant on the right hand side of (<ref>) depends only on the initial data and on the domain 𝒟_a, and not on the discretization parameter m or on the regularization parameter λ. Thanks to the a priori estimate (<ref>), we may extend by continuity the local solution of system (<ref>) to the interval [0,T].
Using (<ref>) and (<ref>) we have that
sup_t∈ (0,T) (𝐙_m ) (t)^2≤ C.
Moreover, in view of (<ref>), from (<ref>)_3 and (<ref>) it follows that
sup_t∈ (0,T)Σ_m (t)^2≤ C.
We now multiply the equality Σ_m=∂ψ_D^λ(𝐙_m) in (<ref>)_3 by (𝒢_L,∗(Σ_m))∈ H^1(𝒟_a;ℝ^3× 3) and integrate over 𝒟_a. Employing multiple integration by parts, the Cauchy–Schwarz and Young inequalities and (<ref>), we obtain that
∫_𝒟_aΣ_m:(𝒢_L,∗(Σ_m))=∫_𝒟_aΣ_m: 𝒢_L,∗(Σ_m)
= (𝒢_L,∗(Σ_m)):(𝒢_L,∗(Σ_m))^2
= ∫_𝒟_a∂ψ_D^λ(𝐙_m):(𝒢_L,∗(Σ_m))
≤ Cψ_D^λ(𝐙_m)+C+1/2(𝒢_L,∗(Σ_m)):(𝒢_L,∗(Σ_m))^2.
Hence, given the estimate (<ref>), we have that
sup_t∈ (0,T)Σ_m(t)_(H_0,^1(𝒟_a,𝐑^3× 3))^'^2≤ C,
and, from a Lax–Milgram estimate associated to the operator -P_LΔ,
sup_t∈ (0,T)(𝒢_L,∗ (Σ_m))(t)_H_0,^1(𝒟_a,𝐑^3× 3)^2≤ C.
The second a priori estimate is obtained by taking 𝐖_m=-Δ𝐖_m in (<ref>)_1 and integrating in time between 0 and t∈[0,T]. Using Assumptions A2, A3 and estimate (<ref>), we infer that
1/2𝐖_m^2+∫_0^t ( (∂ I^λ_C_α(𝐖_m)) ,𝐖_m)_≥ 0+∫_0^tΔ𝐖_m^2
≤ C+1/2∫_0^tΔ𝐖_m^2+ C∫_0^tL^2(𝐖_m^2+𝐖_m^2)
+Ce^-Θ_m_L^∞(𝒟_at,ℝ^3× 3)^2∫_0^t( 𝒢_L,∗(Σ_m))^2.
Hence, by (<ref>) and (<ref>) the right-hand side is under control and then
1/2𝐖_m^2+1/2∫_0^tΔ𝐖_m^2≤ C.
We derive a further a priori estimate by taking Ω_m=-ΔΘ_m in (<ref>)_2 and integrating in time between 0 and t∈[0,T]. Using Assumption A3 we obtain that
1/4ΔΘ_m^2+1/2∫_0^t ΔΘ_m^2
≤C∫_0^t_1G^2(Θ_m^2+Θ_m^2)+1/4∫_0^tΔΘ_m^2+Ce^-Θ_m_L^∞(𝒟_at,ℝ^3× 3)^2
×( 𝒢_L,∗(Σ_m))_L^∞(0,t;L^2(𝒟_a,ℝ^3× 3))^2𝐖_m_L^2(0,t;L^∞(𝒟_a,Sym(ℝ^3× 3)))^2,
whence, using (<ref>), (<ref>), (<ref>) and the Sobolev embedding H^2 ↪ L^∞ (obtained from (<ref>) with j=0, p=∞, m=r=q=2), we find out that
1/4ΔΘ_m^2+1/4∫_0^t ΔΘ_m^2≤ C.
Thanks to (<ref>), (<ref>), (<ref>) and (<ref>), from (<ref>)_4, (<ref>)_5 and the estimates for the time derivatives of Φ_m and 𝐙_m we arrive at
Φ_m_L^∞(0,T;H^2(𝒟_a;ℝ^3))∩ L^2(0,T;H^3(𝒟_a,ℝ^3))∩ H^1(0,T;H^1(𝒟_a,ℝ^3))≤ C,
𝐙_m_L^∞(0,T;H^2(𝒟_a;ℝ^3× 3))∩ L^2(0,T;H^3(𝒟_a,ℝ^3× 3))∩ H^1(0,T;H^1(𝒟_a,ℝ^3× 3))≤ C.
Collecting the bounds (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), which are uniform in m and λ, from the Banach–Alaoglu, the Aubin–Lions and
the Arzelà–Ascoli lemmas,
we finally obtain the convergence properties, up to subsequences
, which we still label by the index m (without reporting the index λ), as follows:
𝐖_m ∗⇀𝐖 in L^∞(0,T;H^1(𝒟_a;Sym(ℝ^3× 3))),
𝐖_m ⇀𝐖 in L^2(0,T;H^2(𝒟_a;Sym(ℝ^3× 3)))∩ H^1(0,T;L^2(𝒟_a;Sym(ℝ^3× 3))),
𝐖_m →𝐖 in C^0([0,T];L^p(𝒟_a;Sym(ℝ^3× 3))) ∩ L^2(0,T;W^1,p(𝒟_a;Sym(ℝ^3× 3))),
with p∈ [1,6), and a.e. in 𝒟_aT,
𝐖_m →𝐖 in L^2(0,T;C^0(𝒟_a;Sym(ℝ^3× 3))),
Θ_m ⇀Θ in H^1(0,T;H^2(𝒟_a;Skew(ℝ^3× 3))),
Θ_m →Θ in C^0([0,T];W^1,p(𝒟_a;Skew(ℝ^3× 3))), p∈ [1,6), and a.e. in 𝒟_aT,
Θ_m→Θ, e^±Θ_m→e^±Θ uniformly in 𝒟_aT,
Φ_m ∗⇀Φ in L^∞(0,T;H^2(𝒟_a;ℝ^3)),
Φ_m ⇀Φ in L^2(0,T;H^3(𝒟_a;ℝ^3))∩ H^1(0,T;H^1(𝒟_a;ℝ^3)),
Φ_m →Φ in C^0([0,T];W^1,p(𝒟_a;ℝ^3))∩ L^2(0,T;W^2,p(𝒟_a;ℝ^3)),
with p∈ [1,6), and a.e. in 𝒟_aT,
𝐙_m ∗⇀𝐙 in L^∞(0,T;H^2(𝒟_a;ℝ^3× 3)),
𝐙_m ⇀𝐙 in L^2(0,T;H^3(𝒟_a;ℝ^3× 3))∩ H^1(0,T;H^1(𝒟_a;ℝ^3× 3)),
𝐙_m →𝐙 in C^0([0,T];W^1,p(𝒟_a;ℝ^3× 3))∩ L^2(0,T;W^2,p(𝒟_a;ℝ^3× 3)),
with p∈ [1,6), and a.e. in 𝒟_aT,
Σ_m ∗⇀Σ in L^∞(0,T;L^2(𝒟_a;ℝ^3× 3)),
𝒢_L,(Σ_m) ∗⇀𝒢_L,(Σ) in L^∞(0,T;H^1(𝒟_a;ℝ^3× 3)),
as m→∞. We note that (<ref>) follows from (<ref>) and the compact embedding
W^1,p(𝒟_a;Sym(ℝ^3× 3))⊂ C^0(𝒟_a;Sym(ℝ^3× 3)),
holding for p>3.
Moreover,
as
H^2(𝒟_a;Skew(ℝ^3× 3)) is compactly embedded into C^0(𝒟_a;Skew (ℝ^3× 3)),
the convergence (<ref>) implies a strong convergence in C^0([0,T];C^0(𝒟_a;Skew (ℝ^3× 3))), whence (<ref>) is easily deduced, thanks the continuity of the exponential operator as well.
With the convergence results (<ref>)–(<ref>), we can pass to the limit in the system (<ref>) in a first step as m→∞. Let's take 𝐖_m=PS_m(𝐖) and Ω_m=PA_m(Ω), with arbitrary 𝐖∈ L^2(𝒟_a;Sym(ℝ^3× 3)), Ω∈ L^2(𝒟_a;Skew(ℝ^3× 3)), multiply the first two equations by ω∈ C_c^∞([0,T]) and integrate over the time interval [0,T]. This gives
∫_0^Tω∫_𝒟_aSym(e^-Θ_m(𝒢_L,∗(
Σ_m)))𝐖_m
+ ∫_0^Tω∫_𝒟_a(𝐖_m-𝐈+∂ I_C_α^λ(𝐖_m)+𝐖_m)𝐖_m
+ ∫_0^Tω∫_𝒟_a𝐖_m::𝐖_m=∫_0^Tω∫_𝒟_a𝐖_ext(𝐖_m,t)𝐖_m,
∫_0^Tω∫_𝒟_aSkew((𝒢_L,∗(Σ_m))𝐖_me^-Θ_m)Ω_m
+ ∫_0^Tω/2∫_𝒟_a(Θ_m + Θ_m) ::
Ω_m
=∫_0^Tω∫_𝒟_aΩ_ext(Ω_m,t)Ω_m.
We observe that
PS_m(𝐖)→𝐖 in L^2(𝒟_a;Sym(ℝ^3× 3)),
PA_m(Ω)→Ω in L^2(𝒟_a;Skew(ℝ^3× 3)),
as m→∞.
Thanks to (<ref>) and (<ref>)_1, we have that
e^Θ_m𝐖_m→e^Θ𝐖 in L^∞(0,T;L^2(𝒟_a,ℝ^3× 3)).
Hence, using (<ref>), by the product of weak-strong convergence we have that
∫_0^Tω∫_𝒟_aSym(e^-Θ_m(𝒢_L,∗(
Σ_m)))𝐖_m
=∫_0^Tω∫_𝒟_a(𝒢_L,∗(
Σ_m))e^Θ_m𝐖_m
→∫_0^Tω∫_𝒟_aSym(e^-Θ(𝒢_L,∗(
Σ)))𝐖,
as m→∞. Owing to (<ref>) and the Lipschitz continuity of ∂ψ_C_α^λ, it turns out that
χ_α,m = ∂ψ_C_α^λ(𝐖_m) strongly converges to
χ_α := ∂ψ_C_α^λ(𝐖) say in C^0([0,T];L^2(𝒟_a;ℝ^3× 3)).
Then, on account of (<ref>)–(<ref>), (<ref>)_1 and Assumption A3, we readily obtain that
∫_0^Tω∫_𝒟_a(𝐖_m-𝐈+χ_α,m+𝐖_m)𝐖_m+∫_0^Tω∫_𝒟_a𝐖_m::𝐖_m
→∫_0^Tω∫_𝒟_a(𝐖-𝐈+χ_α+𝐖)𝐖-∫_0^Tω∫_𝒟_aΔ𝐖:𝐖,
and
∫_0^Tω∫_𝒟_a𝐖_ext(𝐖_m,t)𝐖_m→∫_0^Tω∫_𝒟_a𝐖_ext(𝐖,t)𝐖,
as m→∞. Moreover, thanks to (<ref>), (<ref>) and (<ref>)_2, we have that Ω_m 𝐖_me^Θ_m→Ω𝐖e^Θ in L^2(0,T;L^2(𝒟_a,ℝ^3× 3)). Hence, using (<ref>), by the product of weak-strong convergence and with similar calculations as in (<ref>) we have that
∫_0^Tω∫_𝒟_aSkew((𝒢_L,∗(Σ_m))𝐖_me^-Θ_m)Ω_m
→∫_0^Tω∫_𝒟_aSkew((𝒢_L,∗(Σ))𝐖e^-Θ)Ω,
as m→∞. Thanks to (<ref>)–(<ref>) it is easy to deduce that
∫_0^T
ω/2∫_𝒟_aΘ_m::Ω_m+∫_0^Tω/2∫_𝒟_aΘ_m::Ω_m
→ -∫_0^T
ω/2∫_𝒟_aΔΘ: Ω-∫_0^Tω/2∫_𝒟_aΔΘ: Ω
and
∫_0^Tω∫_𝒟_aΩ_ext(Ω_m,t)Ω_m→∫_0^Tω∫_𝒟_aΩ_ext(Ω,t)Ω.
For what concerns the second equality in (<ref>)_3, thanks to the convexity of ψ_D^λ we can express it as
∫_0^T∫_𝒟_a(𝐗-𝐙_m):Σ_m+∫_0^T∫_𝒟_aψ_D^λ(𝐙_m)≤∫_0^T∫_𝒟_aψ_D^λ(𝐗),
for all 𝐗∈ L^2(0,T;H^1(𝒟_a;ℝ^3× 3)). Given the convergence results (<ref>) and (<ref>), as Σ_m weakly converges to
Σ in L^2(0,T;L^2(𝒟_a;ℝ^3× 3)) and ψ_D^λ is Lipschitz continuous, we get in the limit that
∫_0^T∫_𝒟_a(𝐗-𝐙):Σ+∫_0^T∫_𝒟_aψ_D^λ(𝐙)≤∫_0^T∫_𝒟_aψ_D^λ(𝐗),
as m→∞, for all 𝐗∈ L^2(0,T;H^1(𝒟_a;ℝ^3× 3)). Finally, we want to
pass to the limit in (<ref>)_4 and (<ref>)_5 as m→∞. In order to do so, we observe
that, since e^Θ_m→e^Θ a.e. in 𝒟_aT, Θ_m→Θ in C^0(0,T;L^p(𝒟_a;ℝ^3× 3× 3)) and a.e. in 𝒟_aT and since e^Θ_m is uniformly bounded, a generalized form of the Lebesgue convergence theorem gives that
e^Θ_m→e^Θ in C^0(0,T;L^p(𝒟_a;ℝ^3× 3× 3)), p∈ [1,6).
Hence, using (<ref>), (<ref>), (<ref>) and (<ref>) we can prove that
(e^Θ_m𝐖_m)=(e^Θ_m)𝐖_m+e^Θ_m𝐖_m
→(e^Θ𝐖) in C^0(0,T;L^p/2(𝒟_a;ℝ^3))∩ L^2(0,T;L^p(𝒟_a;ℝ^3)), p∈ [1,6),
and analogously
(e^Θ_m𝐖_m)=(e^Θ_m)𝐖_m+ϵ1pt e^Θ_m𝐖_m
→(e^Θ𝐖) in C^0(0,T;L^p/2(𝒟_a;ℝ^3× 3))∩ L^2(0,T;L^p(𝒟_a;ℝ^3× 3)), p∈ [1,6),
where ϵ is the Ricci alternating symbol (Levi–Civita symbol). With the strong convergence results (<ref>), (<ref>), (<ref>) and (<ref>) we can straightforwardly pass to the limit in (<ref>)_4 and (<ref>)_5 as m→∞. Collecting all the previous results, we obtain the following limit system as m→∞, written in terms of the limit functions, which will now be denoted by 𝐖^λ, Θ^λ, χ_α^λ, Σ^λ, Φ^λ, 𝐙^λ:
∫_𝒟_aSym(e^-Θ^λ(𝒢_L,∗(Σ^λ)))𝐖
+ ∫_𝒟_a(𝐖^λ-𝐈+χ_α^λ+𝐖^λ)𝐖
-∫_𝒟_aΔ𝐖^λ:𝐖=∫_𝒟_a𝐖_ext(𝐖^λ,t)𝐖,
∫_𝒟_aSkew((𝒢_L,∗(Σ^λ))𝐖^λe^-Θ^λ)Ω
-
1/2∫_𝒟_aΔΘ^λ: Ω-
1/2∫_𝒟_aΔΘ^λ: Ω=∫_𝒟_aΩ_ext(Θ^λ,t)Ω,
χ_α^λ = ∂ψ_C_α^λ(𝐖^λ), Σ^λ=∂ψ_D^λ(𝐙^λ),
ΔΦ^λ=(e^Θ^λ𝐖^λ),
-P_LΔ𝐙^λ=(e^Θ^λ𝐖^λ),
for a.e. t ∈ [0,T], for all choices of 𝐖∈ L^2(𝒟_a;Sym(ℝ^3× 3)), Ω∈ L^2(𝒟_a;Skew(ℝ^3× 3)),
and with initial conditions (cf. the assumption A2 and (<ref>))
𝐖^λ(·,0)=𝐖_0, Θ^λ (·,0)=0 in 𝒟_a.
In the system (<ref>) we have restored the index λ, to indicate the dependence of the solutions from the regularization parameter λ. We observe, without reporting all the details, that the estimates (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),
(<ref>) and (<ref>) are preserved in the limit as m→∞, i.e., they are valid for the solutions of the system (<ref>). This allows us to pass to the limit as λ→ 0, up to subsequences of λ, in the system (<ref>), with similar calculations as the ones employed for the study of the limit problem as m→∞. On the other hand, since by comparison in (<ref>)_1 we obtain that
χ_α^λ is bounded in L^2(0,T;L^2(𝒟_a;Sym(ℝ^3× 3))),
uniformly with respect to λ, consequently
χ_α^λ will converge weakly to some χ_α in L^2(0,T;L^2(𝒟_a;Sym(ℝ^3× 3)))
as λ→ 0 along a subsequence. This weak convergence, combined with the strong convergence of 𝐖^λ to 𝐖 in the same space L^2(0,T;L^2(𝒟_a;Sym(ℝ^3× 3))), and the maximal monotonicity of the subdifferential operator ∂ψ_C_α enable us to prove that χ_α∈∂ψ_C_α (𝐖) a.e. in 𝒟_aT. Similar considerations can be done for the proof of the other inclusion
Σ∈∂ψ_D (𝐙 ); these are usual arguments in the framework of the theory of maximal monotone operators, see e.g. <cit.>.
Therefore, passing to the limit as λ→ 0 in the system (<ref>), we obtain that the limit point is a solution of
(<ref>)–(<ref>)
with initial conditions (<ref>), and regularity given by (<ref>)–(<ref>). Moreover, by lower semicontinuity we obtain in the limit
as λ→ 0 that
sup_t∈ (0,T)∫_𝒟_aI_C_α(𝐖 (·, t))≤ C,
whence, due to (<ref>) as well, we have that 𝐖(𝐚,t)∈SPD_α for all 𝐚∈𝒟_a and a.a. t∈ (0,T).
The property that 𝐖(𝐚,t)∈SPD_α for all 𝐚∈𝒟_a and a.a. t∈ (0,T), proved in the previous Theorem, implies that the material experiences no flattening or crushing and that (see <cit.>) a point which is in the interior of the domain remains inside the domain during the evolution, for a.a. t∈ (0,T).
§ THE LIMITING CASE
In this section we study the limit system of (<ref>)–(<ref>) as k→ 0. As we will see, in this case the solution of the limit system is unique and depends continuously on the initial data. The drawback is that in this case the incompatibility in the system dynamics is always active, in contrast to what happens in the case k>0, as observed in Remark <ref>.
In the case k=0, from (<ref>)_3 we have that Σ=𝐙, hence the system (<ref>) becomes
Sym(e^-Θ(𝐙))+𝐖-𝐈+χ_α+𝐖-Δ𝐖=𝐖_ext(𝐖,t),
Skew((𝐙)𝐖e^-Θ)-1/2ΔΘ-1/2ΔΘ=Ω_ext(Θ,t),
χ_α∈∂ψ(𝐖),
ΔΦ=(e^Θ𝐖),
- P_LΔ𝐙=(e^Θ𝐖),
with boundary conditions
𝐖=𝐈, Θ=Θ=0 on Γ_a× (0,T),
Φ(𝐚, t) =𝐚, 𝐙(𝐚, t)=0 for (𝐚, t) ∈Γ_a× (0,T)
and initial conditions
𝐖(𝐚,0)=𝐖_0(𝐚), Θ(𝐚,0)=0 for 𝐚∈𝒟_a.
The variable 𝐙 in the system (<ref>) may be interpreted as the Lagrange multiplier of the compatibility condition curl(e^Θ𝐖)=0, with the addition of an elliptic regularization of the constraint given by the term - P_LΔ𝐙 in
(<ref>)_5.
Indeed, the system (<ref>) may be obtained from the principle of virtual power (<ref>) and the dissipative equality (<ref>) by enforcing in the expression of the Free Energy (<ref>) the compatibility constraint through a Lagrange multiplier, i.e.,
ψ(𝐖,𝐑,𝐙):=1/2𝐖-𝐈^2+ψ(𝐖)+1/2𝐖^2+1/2𝐑^2+∫_𝒟_a𝐙:(e^Θ𝐖).
Setting
ℱ(𝐙,𝐖,Θ):=∫_𝒟_a𝐙:(e^Θ𝐖),
we observe that
(δℱ/δ𝐖,δ𝐖)=(𝐙,(e^Θδ𝐖))=(Sym(e^-Θ(𝐙)),δ𝐖),
(δℱ/δΘ,δΘ)=(𝐙,(δΘe^Θ𝐖))=(Skew((𝐙)𝐖e^-Θ),δΘ),
(δℱ/δ𝐙,δ𝐙)=((e^Θ𝐖),δ𝐙).
Moreover, substituting (<ref>)_5 with the relation -ϵ P_LΔ𝐙=curl(e^Θ𝐖), with 0<ϵ≪1, the system (<ref>) may be interpreted as a system with a penalization of the compatibility condition.
We give for the system (<ref>) the following existence and regularity result.
Let assumptions A1-A3 be satisfied. Then, for any T>0 there is a quintuplet
(𝐖,Θ,χ_α, Φ,𝐙),
with
𝐖∈ L^∞(0,T;H^1(𝒟_a;Sym(ℝ^3× 3)))
∩ H^1(0,T;L^2(𝒟_a;Sym(ℝ^3× 3)))∩ L^2(0,T;H^2(𝒟_a;Sym(ℝ^3× 3))),
and 𝐖(𝐚,t)∈ SPD_α for a.e. (𝐚,t)∈𝒟_aT,
Θ∈ H^1(0,T;H^2(𝒟_a;Skew(ℝ^3× 3))),
χ_α∈ L^2(0,T;L^2(𝒟_a;ℝ^3× 3)),
Φ∈ L^∞(0,T;H^2(𝒟_a,ℝ^3)∩ H^1(𝒟_a;ℝ^3))∩ L^2(0,T;H^3(𝒟_a;ℝ^3)),
𝐙∈ L^∞(0,T;H^2(𝒟_a;ℝ^3× 3)∩ H_0,^1(𝒟_a,ℝ^3× 3))∩ L^2(0,T;H^3(𝒟_a;ℝ^3)),
which solves the system (<ref>)–(<ref>) for a.e. 𝒟_aT with initial conditions (<ref>). Moreover, the solution is unique and the following continuous dependence result holds: given two solutions (𝐖_1,Θ_1,χ_α,1,Φ_1,𝐙_1), corresponding to the initial data (𝐖_1^0,Θ_1^0), and (𝐖_2,Θ_2,χ_α,2,Φ_2,𝐙_2), corresponding to the initial data (𝐖_2^0,Θ_2^0), there exists a constant C depending only on 𝒟_a such that
1/2(𝐖_1-𝐖_2)(t)^2+1/4(Θ_1-Θ_2)(t)^2+∫_0^T(𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))^2
+ 1/2(Θ_1-Θ_2)^2+𝐙_1-𝐙_2_H_0,^1(𝒟_a,ℝ^3× 3)^2+Φ_1-Φ_2_H_0^1(𝒟_a,ℝ^3)^2)
≤ C(1/2𝐖_1^0-𝐖_2^0^2+1/4(Θ_1^0-Θ_2^0)^2) for all t∈ [0,T].
In view of Theorem <ref> and its proof, the existence result in the statement is a consequence of a limit procedure as k→ 0. Indeed, letting k̄>0 be some fixed parameter, for 0 < k≤k̄ we consider
the solution (𝐖_k,Θ_k ,χ_α, k, Σ_k,Φ_k ,𝐙_k) to the system (<ref>)–(<ref>)
given by Theorem <ref>. Recalling the properties (<ref>)–(<ref>) and observing that they still hold for
ψ_D and ∂ψ_D, it turns out that we can reproduce the estimates (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) uniformly with respect to k. Hence, we are allowed to pass to the limit in the system (<ref>)–(<ref>), written for 𝐖_k,Θ_k ,χ_α, k, Σ_k,Φ_k ,𝐙_k,
as k→ 0. The argument is similar to the one developed in Section 4. Here, we deduce in particular that (see (<ref>)) Σ∈𝐙,
that is Σ = 𝐙, almost everywhere, where Σ
and 𝐙 denote the weak and strong limits (cf. (<ref>)–(<ref>)) of some subsequence of Σ_k and 𝐙_k, respectively.
By eliminating then the variable Σ, we obtain the claimed existence result for a solution of the system (<ref>)–(<ref>).
We are thus left to prove the bound (<ref>), which also implies the uniqueness of the solution.
Let us rewrite equation (<ref>)_1 as
(e^-Θ(𝐙),𝐖-𝐖)+(𝐖-𝐈+𝐖-Δ𝐖,𝐖-𝐖)+ψ_C_α(𝐖)
≤(𝐖_ext(𝐖,t),𝐖-𝐖)
+ ψ_C_α(𝐖),
valid for any 𝐖∈ L^2(𝒟_a;Sym(ℝ^3× 3)). Taking 𝐖=𝐖_2 in the inequality (<ref>) for 𝐖_1, 𝐖=𝐖_1 in the inequality (<ref>) for 𝐖_2, and summing the two inequalities, we obtain that
((e^-Θ_1-e^-Θ_2)(𝐙_1),𝐖_1-𝐖_2)+(e^-Θ_2(𝐙_1-𝐙_2),𝐖_1-𝐖_2)
+𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))^2+1/2d/dt𝐖_1-𝐖_2^2
≤(𝐖_ext(𝐖_1,t)-𝐖_ext(𝐖_2,t),
𝐖_1-𝐖_2)
≤ L𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))𝐖_1-𝐖_2,
where in the last inequality we have used Assumption 𝐀_3. Moreover, taking the L^2 scalar product of (<ref>)_2 for Θ_1 with Θ_1-Θ_2 and the L^2 scalar product of (<ref>)_2 for Θ_2 with Θ_1-Θ_2, then taking the difference between the two contributions, with the help of Assumption 𝐀_3 and (<ref>), we obtain that
((𝐙_1-𝐙_2)𝐖_1e^-Θ_1,Θ_1-Θ_2)+((𝐙_2)(𝐖_1-𝐖_2)e^-Θ_1,Θ_1-Θ_2)
+ ((𝐙_2)𝐖_2(e^-Θ_1-e^-Θ_2),Θ_1-Θ_2)
+1/2(Θ_1-Θ_2)^2 + 1/4d/dt(Θ_1-Θ_2)^2
≤(Ω_ext(Θ_1,t)-Ω_ext(Θ_2,t),Θ_1-Θ_2)
≤ GΘ_1-Θ_2_H_0^1(𝒟_a,Skew(ℝ^3× 3))Θ_1-Θ_2≤ C(Θ_1-Θ_2)^2.
Next, taking the L^2 scalar product of (<ref>)_4 for Φ_1 with (Φ_1-Φ_2), the L^2 scalar product of (<ref>)_4 for Φ_2 with (Φ_1-Φ_2), and subtracting the two contributions, we arrive at
Φ_1-Φ_2_H_0^1(𝒟_a,ℝ^3)^2
=-((e^Θ_1-e^Θ_2)𝐖_1, (Φ_1-Φ_2))-(e^Θ_2(𝐖_1-𝐖_2), (Φ_1-Φ_2)).
Analogously, taking the L^2 scalar product of (<ref>)_5 for Z_1 with (Z_1-Z_2) and for Z_2 with (Z_1-Z_2), then the difference between the two contributions leads to
Z_1-Z_2_H_0,(𝒟_a,ℝ^3× 3)^2
=((e^Θ_1-e^Θ_2)𝐖_1, (Z_1-Z_2))+(e^Θ_2(𝐖_1-𝐖_2), (Z_1-Z_2))).
Finally, summing the inequalities from (<ref>) to (<ref>), using the multilinear Hölder inequality, the Young inequality and the Korn–Poincaré inequality (<ref>), we obtain that
1/2d/dt𝐖_1-𝐖_2^2+1/4d/dt(Θ_1-Θ_2)^2+𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))^2
+1/2(Θ_1-Θ_2)^2+ Φ_1-Φ_2_H_0,^1(𝒟_a,ℝ^3)^2 +Z_1-Z_2_H_0^1(𝒟_a,ℝ^3× 3)^2
≤1/4𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))^2+C𝐖_1-𝐖_2^2+C (Θ_1-Θ_2)^2
+e^-Θ_1-e^-Θ_2_L^6(𝒟_a,ℝ^3× 3)(𝐙_1)_L^3(𝒟_a,ℝ^3× 3× 3)𝐖_1-𝐖_2
+ e^-Θ_2_L^∞(𝒟_a,ℝ^3× 3)(𝐙_1-𝐙_2) 𝐖_1-𝐖_2
+ (𝐙_1-𝐙_2) 𝐖_1_L^3(𝒟_a,Sym(ℝ^3× 3))e^-Θ_1_L^∞(𝒟_a,ℝ^3× 3)Θ_1-Θ_2_L^6(𝒟_a,Skew(ℝ^3× 3))
+ (𝐙_2) 𝐖_1-𝐖_2_L^3(𝒟_a,Sym(ℝ^3× 3))e^-Θ_1_L^∞(𝒟_a,ℝ^3× 3)Θ_1-Θ_2_L^6(𝒟_a,Skew(ℝ^3× 3))
+ (𝐙_2) 𝐖_2_L^6(𝒟_a,Sym(ℝ^3× 3))e^-Θ_1-e^-Θ_2_L^6(𝒟_a,ℝ^3× 3)Θ_1-Θ_2_L^6(𝒟_a,Skew(ℝ^3× 3))
+ e^Θ_1-e^Θ_2_L^3(𝒟_a,ℝ^3× 3)𝐖_1_L^6(𝒟_a,Sym(ℝ^3× 3))
×(Φ_1-Φ_2_H_0^1(𝒟_a,ℝ^3)+Z_1-Z_2_H_0,^1(𝒟_a,ℝ^3× 3))
+ e^Θ_2_L^∞(𝒟_a,ℝ^3× 3)𝐖_1-𝐖_2
×(Φ_1-Φ_2_H_0^1(𝒟_a,ℝ^3)+Z_1-Z_2_H_0,^1(𝒟_a,ℝ^3× 3)).
Hence, by employing the Sobolev embeddings (<ref>), the bound (<ref>), the Young inequality and the regularity results (<ref>)–(<ref>), integrating moreover in time in the interval (0,T), we obtain that
1/2(𝐖_1-𝐖_2)(t)^2+1/4(Θ_1-Θ_2)(t)^2 +∫_0^ t(𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))^2
+1/2(Θ_1-Θ_2)^2+ Φ_1-Φ_2_H_0^1(𝒟_a,ℝ^3)^2 +Z_1-Z_2_H_0,^1(𝒟_a,ℝ^3× 3)^2)
≤1/2𝐖_1^0-𝐖_2^0^2+1/4(Θ_1^0-Θ_2^0)^2
+∫_0^t(3/4𝐖_1-𝐖_2_H_0^1(𝒟_a,Sym(ℝ^3× 3))^2+1/2Φ_1-Φ_2_H_0^1(𝒟_a,ℝ^3)^2
+1/2Z_1-Z_2_H_0,^1(𝒟_a,ℝ^3× 3)^2)
+C∫_0^t(𝐖_1-𝐖_2^2+ (Θ_1-Θ_2)^2) for all t∈ [0,T] ,
from which, using a Gronwall argument, we finally show (<ref>).
§ CONCLUSION
In this work we introduced a novel model for
large deformations, described in terms of the stretch and the rotation tensors as independent variables. This description has a direct geometrical interpretation and the predicted quantities may be directly compared with experiments. We derived the model from a generalized form of the principle of virtual power, where the virtual velocities depend on the state variables as a consequence of internal kinematic constraints associated to the compatibility condition. In our system, the compatibility of the deformation is conditionally valid depending on the magnitude of an internal force associated to dislocations, which enters the system as a new independent variable. We assumed a quadratic expression for the free energy density of the system, depending on the stretch, the rotation and the dislocation tensors, containing first and second gradient terms. In order to enforce the positive definiteness of the stretch matrix, we also added to the free energy the indicator function of a closed and convex set whose elements are positive definite symmetric matrices with eigenvalues which are not smaller than a given positive constant at the same time. We then assumed a quadratic form also for the dissipation potential of the system, containing viscous contributions in terms of the time derivative of the stretch tensor and of the angular velocity tensor. The internal forces in the system, which are thermodynamically coupled with the virtual velocities, were then chosen in compliance with the Clausius–Duhem dissipative equality. We adopted the strategy to invert the kinematic constraints associated to the compatibility condition through Green propagators, expressing the virtual velocities associated to the deformation map and the dislocations in terms of the virtual velocities associated to the stretch matrix and to the rotation, thus reducing the set of independent virtual velocities and eliminating their internal constraints, obtaining a system of integro-differential coupled equations with inclusions.
We then developed the analysis of the model in a simplified setting, i.e., considering the quasi-stationary version of the full system where we neglect inertia. Through a Faedo–Galerkin approximation strategy and employing the Moreau–Yosida regularization of the subdifferential of multivalued functions in the free energy, we proved the existence of a global in time weak solution in three space dimensions for the system, which is actually a strong solution, by studying the limit problem as the discretization parameter and the Moreau–Yosida regularization parameter tend to zero. We also proved that everywhere in space and almost everywhere in time the material is not flattening or crushing and that a point which is inside its domain at a certain time remains in the interior of the domain at later times.
Finally, we considered a limit problem, letting the magnitude of the internal force associated to dislocations tend to zero, in which case the deformation becomes incompatible and the equations take the form of a coupled system of PDEs. In the latter situations we obtained stronger analytical results, i.e., we obtained global existence, uniqueness and continuous dependence from data of the strong solution in three space dimensions.
In a second contribution we intend to study the full model with inertia.
§ ACKNOWLEDGMENTS
This research activity has been supported by the MIUR-PRIN Grant 2020F3NCPX
“Mathematics for industry 4.0 (Math4I4)”. AA and PC acknowledge their affiliation to the GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica). Moreover, PC points out his collaboration, as Research Associate, with the IMATI – C.N.R. Pavia, Italy.
http://arxiv.org/abs/2307.01905v1 | 20230704202716 | ZotCare: A Flexible, Personalizable, and Affordable mHealth Service Provider | Sina Labbaf, Mahyar Abbasian, Iman Azimi, Nikil Dutt, Amir M. Rahmani | cs.HC | cs.HC, cs.CY
ZotCare: A Flexible, Personalizable, and Affordable mHealth Service Provider

August 1, 2023

§ ABSTRACT
The proliferation of Internet-connected health devices and the widespread availability of mobile connectivity have resulted in a wealth of reliable digital health data and the potential for delivering just-in-time interventions.
However, leveraging these opportunities for health research requires the development and deployment of mobile health (mHealth) applications, which present significant technical challenges for researchers.
While existing mHealth solutions have made progress in addressing some of these challenges, they often fall short in terms of time-to-use, affordability, and flexibility for personalization and adaptation.
ZotCare aims to address these limitations by offering ready-to-use and flexible services, providing researchers with an accessible, cost-effective, and adaptable solution for their mHealth studies.
This article focuses on ZotCare's service orchestration and highlights its capabilities in creating a programmable environment for mHealth research.
Additionally, we showcase several successful research use cases that have utilized ZotCare, both in the past and in ongoing projects.
Furthermore, we provide resources and information for researchers who are considering ZotCare as their mHealth research solution.
§ KEYWORDS:
mobile health, mHealth solution, health cybernetics, digital health services, wearable internet-of-things
§ INTRODUCTION
The widespread adoption of smartphones, wearable technologies, and other Internet-connected health devices has led to the availability of reliable digital health data streams <cit.>. These devices and applications have played a significant role in various domains, such as improving lifestyles, achieving fitness goals, monitoring high-risk populations, and enhancing productivity <cit.>. Many vendors now offer access to the data streams generated by their products, opening up new opportunities for researchers to explore ubiquitous remote monitoring by leveraging different health data streams <cit.>. For instance, studies such as <cit.> and <cit.> have utilized Garmin smartwatches <cit.> to longitudinally monitor maternal sleep and dementia patients' caregivers, respectively.
Furthermore, the rise in mobile Internet connectivity <cit.> has provided researchers with the ability to promptly interact with participants, facilitating the collection of supplementary information for data modeling or the delivery of interventions within minutes or seconds.
By capitalizing on these two opportunities, researchers can not only collect accurate health data streams but also process the information, engage with participants, and implement necessary interventions.
For health researchers, leveraging these opportunities necessitates developing and deploying mobile health (mHealth) applications.
These applications perform tasks such as collecting health data streams, processing the data, invoking actions, and receiving feedback.
Figure <ref> outlines a typical mHealth system composed of three critical components: the central cloud server and separate interfaces for researchers/clinicians and participants.
The cloud server forms the foundation for data storage, model building, and invoking actions aimed at participants and researchers, all while ensuring the preservation of data integrity, security, and participant privacy.
The participant interface, another critical component, necessitates real-time interaction capabilities with participants, as well as mechanisms for subjective and objective data collection.
Conversely, the researcher dashboard should be furnished with data analysis and monitoring tools essential for executing a mHealth study.
Each component operates within a distinct segment of the technology stack and possesses specific functionalities, giving rise to various development and deployment challenges.
First, researchers face the complex task of developing a heterogeneous system whose components range from mobile and wearable applications to web servers, which demands a broad set of programming skills and knowledge.
Moreover, once development is complete, deploying and maintaining these applications can pose substantial obstacles because of the high frequency, longitudinal nature, and scalability requirements of health data streams.
These challenges can impede research progress and divert focus from the core experiments.
Several open-source software platforms have been developed to facilitate mobile health (mHealth) studies <cit.>. These platforms offer a range of tools encompassing servers, mobile applications, and analytics tools, providing researchers with diverse possibilities. Researchers can also reprogram these platforms to suit their specific requirements. While mHealth platforms can reduce the need for extensive development, the burden of deployment still rests on the researchers. Additionally, the costs associated with deployment are typically borne by a single organization, making it relatively more expensive for smaller organizations conducting smaller-scale studies.
Alternatively, researchers can utilize online services for conducting their mHealth studies <cit.>. These services are platform solutions provided and deployed by service providers. They are designed to share resources between different organizations and studies, resulting in reduced time and effort required for developing and deploying a custom mHealth application. By sharing resources, these services effectively cut down costs. Typically, these services offer researchers a dashboard for reconfiguring the services with various options. However, the available configurations may not provide the necessary flexibility required for real-time studies.
Despite the significant advancements in existing mHealth solutions, there persists a pressing demand for a comprehensive solution that integrates three essential features into a unified package. Firstly, such a solution should offer a ready-to-use setup that eliminates the requirement for computer programming or infrastructure skills, ensuring accessibility for researchers without technical expertise. Secondly, the solution should prioritize affordability by reducing deployment costs through resource sharing and providing reusable components. Lastly, the solution must exhibit flexibility by offering components that can be combined in various ways to accommodate the diverse and evolving demands of modern mHealth studies, such as personalization.
This paper presents ZotCare, an innovative mHealth service provider that aims to overcome the limitations observed in existing mHealth solutions. We begin by highlighting how ZotCare offers a unique combination of flexibility and programmability, catering to users with diverse skill levels, in contrast to other state-of-the-art solutions. Subsequently, we delve into the details of ZotCare's service orchestration, elucidating how researchers and participants interact with our services to achieve desired outcomes. We specifically explore the capabilities of each service category, including Collection Services, Profile Services, and Real-time Processing, Intervention, and Integration Services, emphasizing the extent of customization achievable by utilizing these services in tandem. Furthermore, we discuss the frontend mobile application of ZotCare, elucidating its role in participant engagement and the provision of personalized experiences within mHealth studies. Additionally, we present the various features of ZotCare's researcher dashboard, showcasing its effectiveness in facilitating study management, service configuration, recruitment, and data analysis. To substantiate the capabilities of ZotCare, we provide multiple use cases and examples of successful mHealth projects that have leveraged ZotCare.
§ RELATED WORK
The adoption of mHealth solutions within healthcare applications has witnessed a significant surge, fueled by the shared objective of improving healthcare delivery and outcomes, as highlighted by <cit.>. These solutions encompass a wide array of features that greatly facilitate the implementation of mHealth studies. One key aspect is the ability of mHealth solutions to seamlessly integrate wearable devices and diverse data sources, thereby enabling real-time health monitoring. These solutions can also provide data visualization and analytic methods, promote interoperability, and support interventions while ensuring the privacy and security of users.
In the implementation of mHealth solutions, it is crucial to consider and explore three key aspects. The first aspect pertains to the setup time required to initiate and configure mHealth studies using the chosen solution. This setup time encompasses various phases, including system design, development, and deployment, each demanding a significant amount of time and effort. These stages involve designing the system architecture, developing the necessary functionalities, and deploying the infrastructure to support the intended mHealth studies.
The second aspect to be considered is the associated costs involved in the development and deployment of the mHealth solution. Development costs encompass the investment of human resources and time required for designing and developing the system infrastructure. This includes the efforts of software engineers, data scientists, and other relevant professionals. In addition, ongoing modifications and enhancements may require additional development efforts. Deployment costs encompass the procurement of necessary processing resources, such as servers or cloud infrastructure, as well as ongoing maintenance and operational expenses.
The third aspect revolves around the customization capabilities offered by the mHealth solution. Customization can be viewed across three distinct levels: development, configurability, and programmability. At the development level, customization refers to the ability to tailor the solution to meet specific research requirements and objectives. This may involve creating new functionalities or modifying existing ones. Configurability, on the other hand, allows users to adapt the solution's settings and parameters to align with the unique needs of their mHealth studies. Programmability refers to the capability of leveraging programming interfaces or APIs to integrate the solution with other systems or to extend its functionalities.
At the development level of customization, researchers must invest additional effort to introduce new functionalities and tailor existing mHealth solutions to their specific needs. This level of customization entails direct involvement with the underlying codebase of the solution, thereby necessitating a high level of programming expertise. Researchers must possess the technical skills required to modify the existing code, introduce new functionalities, or make changes to the underlying algorithms.
Moving to the configurability level of customization, researchers can customize the solution by reconfiguring the available features within the provided framework. This level of customization does not demand extensive technical expertise and programming skills. Instead, researchers can make adjustments to the system's settings, parameters, or options offered by the solution. While configurability provides a certain degree of customization, it may be limited to predefined configurations and settings, constraining researchers from making substantial modifications beyond the available options.
Finally, at the programmability level of customization, researchers can leverage the solution's programming interfaces or APIs to customize its behavior based on specific situations and conditions.
In contrast to development-level customization, programmability-level customization offers researchers the ability to incorporate their own functionalities into the system with minimal effort, without requiring extensive technical expertise. In the following, we will provide an overview of existing mHealth solutions, highlight their limitations, and subsequently present the advantages of our solution, ZotCare, in addressing and bridging these gaps.
§.§ Related mHealth Solutions
The existing landscape of mHealth solutions can be broadly categorized into two primary classifications: platforms and services. Platforms are comprehensive frameworks that integrate the various components of an mHealth solution through one or more open-source software packages.
One such platform is Radar-base by <cit.>, which focuses on remote monitoring and data collection. It facilitates the integration of data from multiple sensors and devices, enabling comprehensive monitoring capabilities. Another notable open-source platform is mCerebrum by <cit.>, which provides tools for real-time monitoring, data processing, and personalized health interventions based on mobile sensor data. These platforms offer a wide range of features, including data integration, real-time monitoring, analytics, and decision support tools. The Bridge Platform <cit.> by Sage Bionetworks is another noteworthy example, providing an open-source software framework for digital health research studies. It allows researchers to develop mobile apps, securely collect participant data, and foster participant engagement while emphasizing privacy and data sharing.
However, deploying and utilizing these platforms for mHealth studies require substantial effort, as setting up the necessary software can extend the setup time of studies. Technical challenges may arise, particularly for researchers lacking expertise in Internet infrastructure. Moreover, these platforms are typically designed to operate within a single organization or study, making the deployment costs exclusive to that particular organization. Consequently, this exclusivity can disproportionately affect smaller-scale studies, potentially rendering the deployment financially burdensome.
Another significant challenge associated with these platforms is the limited availability of customization methods. While the open-source nature of these platforms provides some level of customizability at the development level, implementing additional features and functionalities typically necessitates the involvement of technically skilled developers. This dependency on technical expertise may hinder researchers' ability to efficiently add or modify elements within the platform to suit their specific requirements.
Conversely, services encompass pre-built solutions that are tailored to specific healthcare needs. These solutions are designed to address particular aspects of healthcare and offer a more focused approach. For instance, ilumivu <cit.> provides a closed-source service that facilitates remote patient monitoring and data collection through user-friendly mobile applications. This service emphasizes patient engagement and includes features for symptom tracking, medication adherence, and communication with clinicians. Ethica <cit.>, another closed-source service, places emphasis on privacy-preserving data collection and analysis. It ensures compliance with privacy regulations while enabling remote monitoring and research data collection.
These services offer ready-to-use features and intuitive interfaces, enabling researchers to swiftly adopt and utilize these mHealth solutions without requiring extensive technical expertise. By providing a streamlined and straightforward setup process, these services enable researchers to initiate their studies promptly, leveraging the available features and minimizing setup time.
In terms of costs, services typically entail lower expenses compared to platforms. This is primarily due to the fact that the deployment, maintenance, and resource management burdens are assumed by the service providers. Consequently, these costs are distributed among different studies that utilize the shared resources, making it more cost-effective for researchers.
However, services generally offer limited customizability options, particularly with regard to advanced functionalities. Customization opportunities mainly revolve around configuring existing features to align with researchers' needs. Researchers may encounter limitations when attempting to tailor these services to their specific workflows or integrate additional features beyond those provided by the service.
The choice between platforms and services depends on various factors, including specific requirements, available resources, researchers' technical expertise, and study objectives. Different solutions offer distinct trade-offs in terms of setup time, costs, and customization capabilities.
Services generally offer shorter setup times and lower costs compared to platforms. The pre-built nature of services allows for swift deployment and immediate utilization of the provided features. However, researchers may face limitations in customizing these services to align precisely with their experimental needs. The available configurations may be restricted to the options provided by the service, potentially constraining researchers in their experimentation.
On the other hand, platforms provide a more comprehensive range of customization options. This level of customization, however, typically necessitates expertise in modifying the underlying codebase.
Furthermore, there is a distinction in the burden and costs associated with deployment and maintenance between platforms and services. With platforms, the responsibility of deployment and maintenance lies with the researchers, entailing additional efforts and costs. In contrast, services assume these burdens on behalf of the researchers, sharing the costs across different organizations utilizing the service.
Figure <ref> provides a comprehensive overview of the key steps involved in conducting a mHealth study and illustrates how platforms and services can aid researchers in each stage of the process. Notably, the figure highlights the advantage of services in facilitating the deployment phase, specifically in building the mHealth system infrastructure. Conversely, services may have limitations in terms of personalization and adaptability, which are addressed by platforms during the system development stage.
Table <ref> presents a summary of the distinctions between state-of-the-art mHealth solutions, focusing on three key aspects: customization, cost, and setup time.
Our primary objective is to introduce ZotCare, a comprehensive programmable service orchestration that combines the advantages of both platforms and services while remaining within the services category. ZotCare is specifically designed to operate within a shared environment, accommodating multiple organizations, studies, and researchers. This shared environment reduces setup time and costs compared to traditional mHealth platforms.
ZotCare offers extensive customization options across various levels, including development, configurability, and programmability. These customization capabilities allow for seamless implementation of new features and functionalities tailored to specific research needs. Notably, ZotCare excels at the programmability level, providing researchers with a diverse set of tools to achieve the personalization and adaptation required in modern mHealth studies. Figure <ref> illustrates how researchers can leverage ZotCare's programmable services to attain personalized and adaptive features within their experiments, eliminating the need for additional development efforts.
At the development level, researchers can utilize the open-source version of ZotCare, similar to existing platforms, enabling independent deployment and utilization.
Table <ref> summarizes the distinctions between ZotCare and other commonly used platforms and services in the field of mHealth studies. In the subsequent section, we will delve into a detailed discussion of ZotCare's capabilities.
§ ZOTCARE SERVICE ORCHESTRATION
ZotCare constitutes a Health Cybernetics platform, specifically designed to operate as a closed-loop real-time monitoring-intervention system. Its purpose is to cater to the requirements of researchers, clinicians, and community health workers engaged in conducting studies or delivering digital health services. This comprehensive platform enables ubiquitous monitoring of individuals, encompassing both general populations and those at heightened risk, while also providing mHealth interventions. Additionally, it offers a direct avenue for end-users to engage in self-management.
ZotCare encompasses fundamental components essential for conducting mHealth studies across the entire health technology stack. Notably, it provides services that streamline data collection through the utilization of intelligent devices, such as wearables and portable devices. Furthermore, it enables bidirectional interactions between study participants and researchers through gateway devices, including smartphones. Augmenting its capabilities, ZotCare's cloud services provide data analysis and visualization, facilitate the construction and execution of real-time predictive models, and initiate actions necessary to enable just-in-time adaptive interventions (JITAI).
The primary aim of ZotCare is to enable the rapid and convenient development of mHealth solutions by users with varying degrees of programming and engineering expertise. Consequently, by using ZotCare services, researchers can reduce the time and expense of implementing and deploying monitoring systems, allowing them to focus on study design, conceptualization, and participant engagement.
Figure <ref> illustrates a comprehensive overview of ZotCare services and interfaces.
The Data Collection Services facilitate the ingestion of data from diverse devices, applications, and services. Once collected, the data undergoes processing and is stored as a continuous stream within ZotCare.
The Profile Services assume responsibility for the storage and processing of data in the form of key-value pairs. This storage mechanism enables the creation of profiles for participants and groups, serving as a repository for personalized study-related data and models, as further elucidated subsequently.
Through the Real-time Processing, Intervention, and Integration (RPII) Services, researchers possess the capability to incorporate adaptive, intelligent, and real-time components into their studies. These components are capable of triggering various actions based on the data obtained from the Profile and Collection services.
In conjunction with these services, ZotCare provides two interfaces: a customizable dashboard and a user-facing mobile application.
The customizable ZotCare dashboard serves as a web application, offering researchers an interface for accessing and modifying ZotCare services pertinent to their respective studies. Researchers can employ the dashboard to manage collected data, recruit participants, and customize it for clinical purposes if desired.
The ZotCare application, on the other hand, functions as a user-facing mobile application for participants. It allows them to interact with ZotCare services, enabling functionalities such as receiving reminders, engaging in ecological momentary assessments (EMAs), and benefiting from adaptive mobile health interventions. Moreover, ZotCare facilitates the integration of contextual and behavioral monitoring applications, commonly referred to as lifelogging applications.
The subsequent subsections delve into further details regarding ZotCare services and provide insights into how researchers can effectively leverage these services to construct their closed-loop mHealth solutions.
§.§ Collection Services
The Collection Services assume the responsibility of acquiring and integrating participants' data within ZotCare.
Given the multifaceted nature of mHealth studies, various types of data are typically employed.
Objective physiological, behavioral, and contextual data, alongside subjective self-reported data, constitute the principal data types utilized in the context of mHealth studies.
Furthermore, third-party vendors and applications offer diverse methodologies for data collection, encompassing direct sensor readings as well as indirect data acquisition through their server-side APIs.
To accommodate these disparate data types and collection methods, ZotCare incorporates a range of features that enable the acquisition of data through diverse channels, subsequently presenting them to researchers in a cohesive and standardized format.
The Collection Service possesses the capability to gather physiological data from prominent fitness and well-being devices. These devices encompass wearable options, such as smartwatches and rings, and portable devices, like smart blood pressure monitors and scales.
These devices are capable of providing physiological data in processed formats, and in some cases, as raw data. The raw data typically comprises inertial measurements (accelerometer and gyroscope), photoplethysmography (PPG), electrocardiogram (ECG), air pressure, luminosity sensor data, and other sensor readings, contingent upon the specific type and model of the device.
On the other hand, processed data generally entails higher-level derived physiological metrics such as heart rate, heart rate variability, sleep quality, steps, exercise data, weight, and other relevant parameters. These metrics are derived from the raw sensor readings by the respective vendors.
To facilitate the collection of such data, ZotCare has been integrated with various healthcare device vendors. Presently, ZotCare offers support for Samsung, Garmin, Empatica, and Fitbit smartwatches, as well as Oura rings for smart wearables. Additionally, ZotCare can integrate with Withings smart scales and blood pressure monitors.
It is important to note that the list of supported devices is continually expanding, as indicated in Table <ref>. For certain devices that provide a software development kit (SDK) and open access to their operating system/firmware (e.g., Samsung Active watches running Tizen OS), ZotCare offers a native smartwatch application. This application enables direct access to the raw signals from these devices and transmits them to the ZotCare back-end.
Researchers also have the flexibility to incorporate new devices through direct connections or by utilizing third-party services, utilizing standard open authentication (OAuth) methods.
Furthermore, to augment data collection capabilities within ZotCare, we have seamlessly integrated the AWARE smartphone-based logging framework <cit.> to enable the passive collection of behavioral and contextual data.
Through AWARE, researchers can leverage participants' smartphones to gather data from various sensors, including location, accelerometer, battery status, light intensity, temperature, and more. AWARE also allows the extraction of contextual information from participants' daily lives, such as screen lock/unlock events, application usage patterns, step count, and even communication activities such as notifications, text messages, and phone calls.
To encompass the collection of self-reported subjective data within ZotCare, we have incorporated an Interaction sub-service into the system. This feature empowers researchers to design and deploy dynamic questionnaires, indicators, and interactive tasks using the Interaction's functionality. The ZotCare front-end application effectively handles these Interactions, capturing participants' responses along with detailed metadata for comprehensive analytics. Moreover, the Interactions feature serves as a versatile tool for various purposes, including EMAs, information delivery, assessments, recommendations, and interventions. Researchers have the flexibility to update questions, EMAs, and other interactive components on-the-fly using the ZotCare dashboard, granting them dynamic control over the study's data collection processes.
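To make this concrete, a two-item EMA could be expressed as a small declarative definition along the following lines; the field names and structure here are hypothetical illustrations for exposition only, not ZotCare's actual Interaction schema.

# Hypothetical, illustrative definition of a two-item EMA "Interaction".
# Field names are invented for exposition and are not ZotCare's actual API.
ema_interaction = {
    "name": "evening_mood_checkin",
    "trigger_time": "21:00",            # local time on the participant's device
    "items": [
        {
            "id": "mood",
            "type": "slider",
            "prompt": "How is your mood right now?",
            "range": {"min": 1, "max": 10},
        },
        {
            "id": "stressor",
            "type": "multiple_choice",
            "prompt": "What affected your mood most today?",
            "options": ["work/school", "family", "health", "other"],
            # Conditional logic: only shown if the reported mood is low.
            "show_if": {"item": "mood", "op": "<=", "value": 4},
        },
    ],
    "metadata": {"study": "example_study", "version": 2},
}

def validate(interaction: dict) -> None:
    """Basic sanity checks a dashboard editor might run before deployment."""
    ids = [item["id"] for item in interaction["items"]]
    assert len(ids) == len(set(ids)), "item ids must be unique"
    for item in interaction["items"]:
        cond = item.get("show_if")
        if cond is not None:
            assert cond["item"] in ids, "condition must reference an existing item"

validate(ema_interaction)

A definition of this kind can be edited on the fly from the dashboard, which is what allows researchers to adjust EMAs mid-study without redeploying the mobile application.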
§.§ Profile Services
The Profile Services within ZotCare assume the responsibility of storing specific information pertaining to groups or individual participants. Researchers can program these profiles to establish key-value storage for data management purposes.
In the case of participant profiles, the programmed key-value storage consists of a predetermined set of keys established by the researchers for all participants. However, individual values can be stored per key for each participant, allowing for personalized data storage.
For group profiles, a single value is associated with each key, which can be replicated across different groups. This replication enables the creation of distinct groups, such as control and intervention groups, or allows for customization of shared resources, such as the ZotCare Frontend application.
Each key within the profiles can be configured with a variety of features. Researchers have the flexibility to choose whether the values associated with these keys should be stored on participants' edge devices or in the cloud. Additionally, researchers can determine whether these values should be visible to the participants, depending on the study's specific requirements and privacy considerations.
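As an illustration of how such per-key options might be expressed, consider the following sketch; the schema, key names, and option names are assumptions made for exposition and do not reflect ZotCare's actual configuration format.

# Hypothetical sketch of a participant-profile schema.
# "storage" controls whether a value lives on the participant's device ("edge")
# or in the cloud; "visible_to_participant" controls whether the app shows it.
from dataclasses import dataclass

@dataclass
class ProfileKey:
    name: str
    storage: str                  # "edge" or "cloud"
    visible_to_participant: bool
    default: object = None

participant_schema = [
    ProfileKey("join_date", storage="cloud", visible_to_participant=True),
    ProfileKey("preferred_notification_time", storage="cloud",
               visible_to_participant=True, default="20:00"),
    # Personal identifiers kept on the device only, never uploaded.
    ProfileKey("display_name", storage="edge", visible_to_participant=True),
    # A serialized per-participant model, hidden from the participant.
    ProfileKey("stress_model_blob", storage="cloud",
               visible_to_participant=False),
]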
The Profile Services play a crucial role in enabling researchers to personalize and adapt their studies over time, particularly in advanced studies that require participant engagement, personalized interactions, or the utilization of statistical or AI models. However, studies that primarily focus on monitoring and passive data collection may not extensively utilize this service.
Within participants' profiles, researchers can store a range of important dates and times, such as join date, delivery date, significant personal events, and preferred notification times. Additionally, characteristics such as height, weight, and fitness level can be recorded. Serializable entities, such as personal AI models or statistical models, as well as files like images, audio recordings, or voice recordings, can also be stored within participants' profiles.
Group profiles, on the other hand, contain information that is shared among the members of a specific group. This may include timing information for different stages of the study or shared AI models. Furthermore, group profiles can include customization data specific to each study, such as differentiating between intervention and control groups or specifying menu items. The information stored within profiles serves multiple purposes within the Real-time Processing, Intervention, and Integration Services. It allows for the adaptation of study procedures based on individual participant characteristics. Researchers can also leverage profile information within Interactions to customize and personalize the individual experiences of participants. Furthermore, profiles can be used to locally store personal identifiers such as names, addresses, and photos, instead of saving them on servers. This enables further customization of the participant's experience while preserving their privacy.
Overall, the Profile service provides researchers with a versatile tool for personalization, adaptation, and customization, enhancing the effectiveness and participant-centric nature of their studies.
§.§ Real-time Processing, Intervention, and Integration (RPII) Services
ZotCare offers researchers a comprehensive suite of Real-time Processing, Intervention, and Integration (RPII) Services, which equip them with the capability to transform data into knowledge, incorporate intelligence into their studies, and effectively close the loop within their solutions.
Through the RPII Services, researchers gain the ability to process data derived from the Profile and Collection services, enabling them to extract meaningful insights and execute subsequent actions based on the processed data. These services can be leveraged at various stages of the data processing pipeline, encompassing tasks such as data pre-processing, AI model development, collection of smart labels and EMAs, scheduling adaptive interventions, and sending intelligent reminders.
By utilizing the RPII Services, researchers are empowered with complete control over the flow of data within their studies. This enables them to dynamically analyze and respond to data in real-time, facilitating the integration of intelligence into their research and ultimately closing the loop within the solution they have developed.
Within ZotCare, each study is capable of containing multiple Real-time Processing, Intervention, and Integration (RPII) instances, which play a pivotal role in enabling dynamic and intelligent functionality. Each RPII instance consists of three essential components: Triggers, Conditions, and Actions.
Triggers serve as indicators that determine when an RPII unit is to be executed. These triggers can be categorized as either data-driven, responding to incoming new data, or chronological, based on fixed times or frequencies.
Conditions, on the other hand, evaluate the data to determine if any adaptations or actions need to be performed. Based on the specified conditions and the available data, the RPII instance can make informed decisions regarding the subsequent actions.
Actions within an RPII instance are programmable functions that can trigger internal modifications within the ZotCare environment or invoke external functionalities. Researchers have the flexibility to program RPII instances with various internal functions within ZotCare, including data fetching, participant grouping and filtering, data processing, AI model building, and writing to data streams or profile values. Furthermore, ZotCare supports external actions such as sending emails, push notifications to the ZotCare mobile application, and accessing external resources.
To provide an overview of these features, a comprehensive summary is presented in Table <ref>, which outlines the various logic features supported by ZotCare.
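To give a sense of how the trigger/condition/action pattern composes, the following sketch encodes a simple rule that watches a daily resting heart-rate stream and queues a check-in notification when values drift upward. It is purely illustrative: the function names, data shapes, and thresholds are invented and are not ZotCare's RPII syntax.

# Illustrative only: a trigger/condition/action rule in the spirit of an RPII
# instance. Names and structure are hypothetical, not ZotCare's actual syntax.
from statistics import mean

def trigger_on_new_data(event: dict) -> bool:
    """Data-driven trigger: fire when a new daily resting heart-rate value arrives."""
    return event.get("stream") == "daily_resting_hr"

def condition(history: list[float], new_value: float) -> bool:
    """Condition: resting HR at least 10% above the participant's 7-day mean."""
    if len(history) < 7:
        return False
    return new_value > 1.10 * mean(history[-7:])

def action(participant_id: str) -> dict:
    """Action: queue a push notification pointing to a check-in Interaction."""
    return {
        "type": "push_notification",
        "participant": participant_id,
        "interaction": "elevated_hr_checkin",
    }

def run_rule(event: dict, history: list[float]) -> dict | None:
    """Evaluate one RPII-style rule for a single incoming event."""
    if trigger_on_new_data(event) and condition(history, event["value"]):
        return action(event["participant"])
    return None

# Example invocation with synthetic data: the new value is well above the
# recent baseline, so a notification is queued.
queued = run_rule(
    {"stream": "daily_resting_hr", "participant": "p042", "value": 74.0},
    history=[62, 63, 61, 64, 62, 63, 65],
)
print(queued)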
Moreover, ZotCare offers seamless integration options for external systems with its RPII services. Researchers are provided with dedicated endpoints to access ZotCare from their own machines and servers, facilitating the integration of external resources into the ZotCare environment. To streamline the process of utilizing ZotCare externally, an SDK is available, designed to simplify the interaction with ZotCare and offer additional features. The SDK enables researchers to fetch, cache, and process data, as well as invoke actions within ZotCare, all without the need to handle complex authentications or intricate API calls.
By leveraging these integration capabilities, researchers can utilize their own resources to replace or supplement ZotCare's RPII components, enhancing the flexibility and adaptability of the system to suit their specific requirements.
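A rough, researcher-side sketch of such an external integration is given below; the client class and its methods are hypothetical stand-ins for the SDK and are not claimed to match ZotCare's real interface.

# Hypothetical external-integration sketch. The ZotCareClient class and its
# methods are invented stand-ins for an SDK; they are not the real API.
import requests

class ZotCareClient:
    """Minimal illustrative wrapper around a token-authenticated REST endpoint."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {token}"}

    def fetch_stream(self, participant: str, stream: str) -> list:
        """Fetch a participant's data stream as a list of records."""
        resp = requests.get(
            f"{self.base_url}/participants/{participant}/streams/{stream}",
            headers=self.headers, timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def write_profile(self, participant: str, key: str, value) -> None:
        """Write a value into a participant's profile for RPII rules to use."""
        resp = requests.put(
            f"{self.base_url}/participants/{participant}/profile/{key}",
            json={"value": value}, headers=self.headers, timeout=30,
        )
        resp.raise_for_status()

# A researcher-side job could then fetch data, run a model on their own
# cluster, and push the result back into a profile, closing the loop without
# handling authentication or raw API calls by hand.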
§.§ The Customizable ZotCare Dashboard
The ZotCare dashboard serves as a customizable interface that facilitates interaction between users (such as researchers and clinicians) and ZotCare services.
Researchers can create different study groups through the dashboard.
Each group can be configured to utilize the ZotCare services for the purpose of that specific research or product.
The ZotCare dashboard incorporates a dedicated section for user management. Within this section, researchers can recruit new participants for their studies. This can be achieved through the utilization of random IDs for direct recruitment or by utilizing sign-up links for anonymous recruitment. Additionally, the dashboard enables researchers to edit user information and profile values as needed.
Furthermore, the ZotCare dashboard offers a comprehensive suite of data analysis capabilities, ensuring that researchers have the necessary tools to derive valuable insights from their research data. Researchers can leverage the provided tools to visualize data in its original format or apply sophisticated aggregation and filtering techniques to create visually informative charts and graphs. Moreover, the dashboard empowers researchers to employ their domain knowledge and expertise by facilitating direct annotation of data within the platform. These annotations are seamlessly stored as new data streams within ZotCare, contributing to a rich and comprehensive dataset for further analysis.
In addition to user management and data analysis functionalities, the ZotCare dashboard provides researchers with effective tools for managing their services within the platform. Through an intuitive interface, researchers can easily activate, modify, or review the configurations of their services. While certain services, such as collection services, entail straightforward setup steps, others, such as programmable services like RPII, profile, and interactions services, necessitate more advanced configurations. To streamline this process, the dashboard offers interactive editors that facilitate researchers in editing, debugging, and testing these programmable services, ensuring a seamless and efficient management experience.
ZotCare also incorporates a fine-grained access control mechanism that allows users to have specific permissions within individual studies. This feature enables researchers to involve different collaborators in their study, assigning them distinct roles based on their access scope. These roles can range from recruiters or data analysts to study managers or clinicians. Each collaborator is granted access only to the relevant parts of the dashboard that align with their assigned role. This stringent access control is crucial for safeguarding the privacy and integrity of the study, ensuring that each collaborator can only view and utilize the components that pertain to their specific responsibilities.
§.§ ZotCare Mobile Application
The ZotCare mobile application serves as the interface that delivers ZotCare services to participants. It acts as the front end for various services, including mHealth interventions, multimedia interactions, and interactive profiles, through its components.
Additionally, the ZotCare app acts as an assistant to participants, aiding them in device setup and facilitating communication between participants and researchers/clinicians.
The primary purpose of the app is to provide participants with interactive "interactions." These interactions encompass a range of components, such as multiple-choice, numerical, time, data, and text input, as well as sliders, among others. These components are well-suited for various purposes, such as EMAs, questionnaires, and data labeling, which are commonly employed in mHealth studies.
Furthermore, interactions are equipped with multimedia features, including videos, images, audio, and audio-video recorders. Extensive research has demonstrated the effectiveness of these multimedia tools for both assessment and mHealth interventions, as evidenced by studies presented in Section <ref>.
Moreover, interactions can include custom components that researchers create, allowing for further customization and enhancement of their studies. Previous studies using ZotCare have showcased the utilization of such components for interventions, such as interactive breathing exercises, mindfulness-oriented image galleries, relational savoring exercises, and educational materials. Additionally, these components have been employed in assessments, such as cognitive games (e.g., finger tapping, word-pair memory tests, rule-switching games, etc.).
In addition to the visible components, interactions can include condition statements, representation configurations, variables, and metadata. These features provide researchers with a broader set of tools for personalization and customization.
Furthermore, participants have the ability to grant authorization to ZotCare, via the application, to access their health data from third-party services, such as Oura and Garmin. This integration allows for seamless retrieval of pertinent health information.
Additionally, the app offers comprehensive instructions and troubleshooting steps for devices and applications that establish a direct connection with ZotCare, including Samsung and AWARE.
Furthermore, participants can access certain features of the Collection and Profile services through the ZotCare app. These services provide participants with valuable functionalities and data management capabilities.
The ZotCare app serves as a means for researchers and participants to maintain a continuous connection. This connection is facilitated through various means, including reminders, notifications, and messages. Researchers can choose to automate these communications through the RPII services or manually trigger them using the dashboard.
A general version of the ZotCare app is readily available for installation and use on Android and iOS smartphones. However, the app's flexibility allows for customization to accommodate different research studies. Researchers possess the capability to modify the app's colors, logos, menus, and other visual aspects to align with the specific requirements of their study. Moreover, they can also modify the app's components to create tailored "Interactions" with additional functionalities. By leveraging the Profiles feature, researchers can further personalize the app's appearance to suit individual studies or specific participant requirements.
§.§ Security and Privacy
In order to uphold the integrity of ZotCare's services, it is imperative to prioritize the security and privacy aspects of the platform. Robust security measures have been implemented to safeguard data and communication channels against potential threats posed by unauthorized individuals attempting to manipulate, delete, or disrupt data storage and transmission processes.
Privacy considerations within ZotCare are designed to empower participants by granting them control over their personal data. This includes ensuring the protection of identifiable information, thereby safeguarding the privacy of participants.
Security and privacy present unique challenges within service-based environments compared to platforms, as multiple organizations share the same resources. Consequently, it becomes necessary to implement measures to safeguard information from both internal and external sources.
Collected data from participants may include both objectively sensitive data, such as location information and passwords, and subjective data that, based on responses provided in Interactions and Profiles, may reveal sensitive information. Similar security and privacy risks exist across other services as well. For instance, programmable services, including the RPII services, are susceptible to malware injection. It is worth noting that researchers' mistakes, such as data overwriting, large or repeated queries, and infinite loops, can also introduce vulnerabilities or destabilize the system.
To address these security challenges, ZotCare has implemented a gateway service that regulates authentication, authorization, and scope through standard encryption methods. This entails a two-step process in which the gateway first verifies the identity of the requester and subsequently checks if the requester has the necessary access permissions to the requested resource.
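Conceptually, the two-step gateway check can be sketched as follows; the role names, scopes, and data structures are illustrative assumptions rather than ZotCare's actual access-control implementation.

# Illustrative two-step gateway check: authenticate the requester, then verify
# that their role in the study grants access to the requested resource.
# This is a conceptual sketch, not ZotCare's actual gateway code.

ROLE_SCOPES = {
    "recruiter":     {"participants:create"},
    "data_analyst":  {"streams:read", "profiles:read"},
    "clinician":     {"streams:read", "profiles:read", "annotations:write"},
    "study_manager": {"participants:create", "streams:read", "profiles:read",
                      "profiles:write", "rpii:configure"},
}

def authenticate(token: str, sessions: dict) -> str | None:
    """Step 1: map a bearer token to a user id, or reject the request."""
    return sessions.get(token)

def authorize(user_roles: dict, user_id: str, study: str, scope: str) -> bool:
    """Step 2: check that the user's role in this study includes the scope."""
    role = user_roles.get((user_id, study))
    return role is not None and scope in ROLE_SCOPES.get(role, set())

def handle_request(token: str, study: str, scope: str,
                   sessions: dict, user_roles: dict) -> str:
    user = authenticate(token, sessions)
    if user is None:
        return "401 Unauthorized"
    if not authorize(user_roles, user, study, scope):
        return "403 Forbidden"
    return "200 OK"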
Privacy concerns extend beyond the scope of data collection and storage and begin with participant recruitment. In cases where studies possess knowledge of their participants, researchers can manage deidentification processes on their end, enrolling participants in ZotCare using anonymous IDs. However, for studies that allow individual participant sign-ups, ZotCare can deidentify data associated with participant emails, enabling participants to utilize their emails for password retrieval and receiving notifications. Nevertheless, researchers only have access to anonymous IDs.
ZotCare does not currently support deidentification of collected data. It is important to note that both researchers and users have the option to disable the collection of sensitive data across all ZotCare services, providing an additional layer of privacy control.
§ USE CASES
ZotCare has been utilized as a service within diverse mHealth research studies.
ZotCare's functionalities were devised to meet the requirements of these studies and were adapted according to how they were used.
The initial studies relied on preliminary versions of ZotCare that provided multi-modal data collection.
Subsequently, ZotCare broadened its range of services and features to address the requirements for customization and governance.
In the following, we will begin by providing an overview of select studies that used ZotCare services for purposes encompassing data collection, data modeling, and intervention.
Subsequently, we will delineate the challenges encountered and describe the integration of ZotCare into these aforementioned studies.
§.§ Personal Mental Health Navigation Project
The Mental Health Navigation (MHN) project develops a proactive, personalized approach to monitor, estimate, and guide individuals toward their desirable mental health state <cit.>.
MHN monitors a multimodal stream of objective and subjective information to build inference models to determine participants' mental states, context, and lifestyle.
Using the constructed personal model and the current state, a navigator system can steer the participants using interventions at each step.
The MHN project comprised two studies, the Affect study and the Loneliness study.
The Affect study focused on investigating the connection between college students' psychophysiological signals and sleep on their mood <cit.>.
Because the COVID-19 pandemic began in the midst of the Affect study, its objectives were expanded to encompass the impact of COVID-19 and the subsequent lockdown measures on the lives and emotional well-being of college students <cit.>.
The subsequent phase of the MHN project, referred to as the Loneliness study, was primarily dedicated to the real-time evaluation of the mental well-being of college students, along with the provision of just-in-time adaptive interventions for those individuals requiring support <cit.>.
Moreover, the Loneliness study encompassed the collection of life-logging and contextual data, enabling the inference of participants' virtual (through smartphones) and physical communication levels.
By integrating the acquired life-logging and contextual data with the pre-existing models established in the Affect study, the accuracy of the loneliness assessment models was significantly enhanced.
Consequently, the Loneliness study successfully developed adaptive interventions, leveraging these refined models, with the ultimate aim of mitigating the adverse mental state experienced by the participants.
Both the Affect and Loneliness studies employed the utilization of ZotCare as a means to gather bio-signal and EMA data and administer mHealth interventions.
The Affect study specifically utilized the Samsung smartwatch and Oura ring to continuously capture physiological signals, monitor sleep quality, and track levels of physical activity.
In addition to these devices, the Loneliness study incorporated the use of AWARE to gather life-logging data.
Furthermore, a customized ZotCare mobile application, namely mSavorUs, was developed and employed for the participants.
This tailored application not only retained all the features of the original ZotCare application but also provided supplementary custom features for relational savoring exercises and interventions.
The interventions within the mSavorUs application were personalized by incorporating participants' names and photos sourced from their memories, utilizing locally stored profiles.
To implement the interventions in the Loneliness study, the RPII services were utilized.
These services were employed both directly, to deliver conditional notifications, and through their API, to connect to a machine learning agent maintained within a separate cluster.
The volume of data collected in each of these studies exceeded 200 gigabytes, with a total of 4.5K and 14.9K labels collected for the Affect and Loneliness studies, respectively.
The initial deployment of ZotCare for the MHN Affect study presented various challenges that underscored the necessity for additional features or services to address them effectively.
Given the study's incorporation of multiple data collection dimensions, including the Samsung smartwatch, Oura ring, AWARE framework, and questionnaires, study coordinators faced difficulties in monitoring the status of all dimensions and promptly informing participants about any potential issues and interruptions in data collection.
Interruptions occasionally occurred in the ZotCare collection services running on participants' devices due to factors such as high battery consumption, device inactivity, or inadvertent shutdowns by participants.
To overcome this challenge, ZotCare implemented a summary report generator within the data collection service.
These daily summaries could be utilized by researchers through ZotCare dashboards to assess the status of participant data and facilitate timely follow-up when necessary.
Additionally, interactive troubleshooting components were integrated into ZotCare Interactions, accessible via the ZotCare mobile application.
These troubleshooting components systematically analyzed the participants' collected data and provided step-by-step instructions for resolving any data communication issues that may have arisen.
Through the implementation of these additional features and services, ZotCare effectively tackled the challenges encountered during the MHN Affect study.
This ensured efficient data monitoring, facilitated effective troubleshooting, and ultimately enhanced participant engagement and study outcomes.
Moreover, during the course of the Loneliness study, the construction of the real-time inference model necessitated computational resources that surpassed the capacity of the ZotCare backend services available at that time. To address this challenge, the researchers leveraged the RPII capability of ZotCare, enabling integration with external resources. The team successfully employed the RPII service to execute an inference model within a cluster, which was triggered by ZotCare web-hooks. The obtained data was subsequently retrieved through the ZotCare SDK, processed, and intervention scheduling was performed using ZotCare API calls. This strategic approach effectively resolved the issue of resource limitations and facilitated the seamless integration of just-in-time adaptive interventions by harnessing the capabilities of external resources.
§.§ The UNITE Project
Smart, Connected, and Coordinated Maternal Care for Underserved Communities (UNITE) is a research project funded by the United States National Science Foundation, with the primary objective of developing innovative technologies to enhance the physical and emotional well-being of underserved pregnant women and their newborns.
The UNITE initiative endeavors to revolutionize conventional maternal care practices, which have traditionally been delivered within homes or clinics, by integrating an AI-supported remote monitoring system. The project comprises three distinct phases, each with its specific focus and objectives.
The initial phase, known as the "Feasibility" phase, concentrated on assessing the viability of remote maternal health monitoring. This involved investigating the level of engagement exhibited by pregnant women with the technology, taking into account their individual health conditions <cit.>.
The second phase of UNITE encompassed a series of small-scale randomized controlled trials (MicroRCTs), which sought to examine various aspects of maternal well-being, such as stress management or physical training, and assess their impact on pregnancy outcomes <cit.>.
Currently, the project is in its third phase, an "AI-assisted" study aimed at exploring the efficacy of incorporating AI assistants and nurses into the care loop for mothers. Within this phase, an AI-enabled exercise recommendation system has been deployed alongside the recommendations provided by healthcare providers. This human-in-the-loop mHealth approach has resulted in the development of a personalized, step-by-step recommender system that adapts to the specific pregnancy conditions and physical measures of each individual mother.
The Feasibility phase of the UNITE project primarily focused on the collection of data and subsequent analysis during the post-study phase. Given the vulnerable nature of the study group consisting of pregnant mothers, it was imperative to implement a triage system to swiftly identify and report any potentially risky situations. To fulfill this requirement, the UNITE initiative incorporated ZotCare's RPII services. By integrating data-driven triggers within the RPII unit, the behavior of participants could be assessed, enabling immediate alerts to be sent to researchers when necessary.
In the second phase of UNITE, known as the MicroRCT phase, the team explored an alternative approach to collecting labels. Instead of adhering to fixed times and frequencies, the possibility of employing smart labels was investigated, aiming to maximize the information obtained while minimizing participant interruptions. By utilizing statistical and active machine learning models within the RPII services, the UNITE team could send notifications to participants, prompting them to provide labels at more opportune times, resulting in improved accuracy and heightened participant engagement <cit.>.
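One simple way to realize such a smart-labeling rule is an uncertainty-based check that prompts a participant only when the current model is unsure about their state. The sketch below illustrates the idea with a binary-entropy threshold; it is an assumed formulation for illustration, not the UNITE study's actual active-learning code.

# Illustrative uncertainty-based "smart label" rule: prompt the participant for
# a label only when the current model is unsure. This is an assumed sketch of
# the idea, not the UNITE study's implementation.
import math

def predictive_entropy(p_positive: float) -> float:
    """Binary entropy (in bits) of the model's predicted probability."""
    p = min(max(p_positive, 1e-9), 1 - 1e-9)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def should_prompt(p_positive: float, threshold_bits: float = 0.9) -> bool:
    """Ask for a self-report only when the prediction is close to 50/50."""
    return predictive_entropy(p_positive) >= threshold_bits

# The model is confident here (p = 0.95), so no interruption is sent ...
print(should_prompt(0.95))   # False
# ... but an ambiguous prediction (p = 0.55) triggers an EMA prompt.
print(should_prompt(0.55))   # True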
During the AI-assisted recommender system phase, the main challenge revolved around establishing a cohesive loop involving the participants, the AI recommender engine, and health providers. To address this challenge, the Profile and RPII services were leveraged, providing the necessary flexibility.
Three sets of profiles were employed: one for participants to input their physical measurements, another for health providers to input their assessments, and a final one for the AI recommender engine to store its recommendations.
The RPII service played a crucial role in executing the recommender engine by utilizing the physical measurements and health providers' assessments from the Profile services, as well as participants' bio-signals, progress, and feedback from the Collection services.
Figure <ref> illustrates the adoption of the services within this solution. The process begins with Step (1), where participants input their physical measurements through the mobile app. This information is then used by the nurses to recommend the participants' initial exercise regimen.
In Step (2), the recommender engine utilized this information to train its models, infer the next set of exercise regimen recommendations, store the recommendations within the participant's profile, and notify the health providers of the ongoing process. The health providers, in Step (3), evaluated the final exercise regimen for each participant based on the suggestions provided by the recommender engine.
Steps (2) and (3) continue in a continuous loop throughout the duration of the study.
This system effectively assists health providers in processing a significant amount of information and providing frequent assessments.
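In pseudocode terms, one iteration of this loop could be organized as a function that reads the participant's measurements and the provider's assessment from their profiles and writes the engine's suggestion back for review; the profile keys and the placeholder recommendation rule below are invented for illustration and do not describe the actual UNITE engine.

# Illustrative sketch of one human-in-the-loop recommendation step.
# Profile keys and the recommendation rule are invented for exposition.

def recommend_next_regimen(measurements: dict, provider_assessment: dict) -> dict:
    """Placeholder rule: scale the weekly step goal by the provider's clearance."""
    clearance = provider_assessment.get("activity_clearance", "low")
    base_steps = measurements.get("avg_daily_steps", 3000)
    factor = {"low": 1.05, "medium": 1.10, "high": 1.20}[clearance]
    return {"weekly_step_goal": int(7 * base_steps * factor),
            "status": "pending_provider_review"}

def run_loop_iteration(profiles: dict, participant: str) -> None:
    """Step (2): compute a suggestion and store it for the provider's review (Step (3))."""
    p = profiles[participant]
    suggestion = recommend_next_regimen(p["measurements"], p["provider_assessment"])
    p["engine_recommendation"] = suggestion

profiles = {
    "p001": {
        "measurements": {"avg_daily_steps": 4200},
        "provider_assessment": {"activity_clearance": "medium"},
    }
}
run_loop_iteration(profiles, "p001")
print(profiles["p001"]["engine_recommendation"])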
§.§ Other Projects
As the services provided by ZotCare continued to develop, they began to be utilized by researchers and universities outside of the original project, encompassing a diverse array of research studies. These studies varied in terms of their contextual settings, languages used, time zones, and specific requirements. The expanding scope of applications enabled ZotCare to adapt and enhance its functionalities, thereby providing researchers with a wider range of features to facilitate their studies effectively. Some example studies are listed in Table <ref>.
Two research projects, namely "Sleep & Menstruation" <cit.> and "Sleep & Brain," were conducted to investigate subjective sleep assessment and cognitive abilities through the use of questionnaires and cognitive tasks.
The first project examined the impact of menstruation on sleep patterns, while the second project explored the relationship between sleep and cognitive abilities.
Both projects incorporated traditional questionnaires to gather information on various aspects of sleep, including duration, quality, and mood.
Standard question forms components such as multiple-choice, text input, sliders, and time pickers were utilized to ensure comprehensive data collection.
Furthermore, interactive cognitive tasks, designed to resemble games, were employed to assess participants' cognitive skills.
The flexibility of ZotCare's capabilities in creating customized interaction components proved invaluable for researchers.
This allowed them to focus on designing the tasks themselves without the need for developing a separate interactive mobile application.
By leveraging ZotCare's functionalities, researchers could streamline the data collection process and efficiently collect data.
Given that some questionnaires needed to be administered before bedtime or upon waking up, ZotCare's profile and logic services were utilized to personalize the timing and availability of the questionnaires for each individual participant.
This customization ensured that participants received questionnaires at the appropriate times based on their specific sleep schedules and preferences.
In addition to its utilization in the aforementioned studies conducted in the United States, ZotCare has also been employed in research studies conducted in other countries and across different languages.
One notable example is the "PREVENT" study conducted in Finland. This study specifically focused on maternal care and aimed to assess the daily well-being of pregnant women during the challenging circumstances imposed by the COVID-19 pandemic <cit.>.
The "PREVENT" study leveraged ZotCare's localization features to adapt the platform to the Finnish language and timezone.
This ensured that the study participants in Finland could access and interact with ZotCare's services seamlessly in their native language and within the context of their local time zone.
The Digital Health for the Future of Community-Centered Care (D-CCC) <cit.> research project aims to explore the integration of technology and community health workers in order to enhance healthcare delivery for underserved communities.
Specifically focusing on the elderly population, the project seeks to develop a symbiotic relationship between humans and technology, enabling the design of new technologies that can assist community health workers in providing more effective support.
ZotCare played a crucial role in alleviating the burden on participants who were unfamiliar with using advanced technologies.
By providing user-friendly interfaces and intuitive interactions, ZotCare facilitated the seamless integration of technology into the participants' lives.
Moreover, ZotCare enabled the continuous monitoring of participants' device usage patterns and their overall health status.
This feature helped researchers and community health workers gain valuable insights into the participants' well-being and promptly address any emerging issues.
The ZotCare dashboard proved to be an invaluable tool in monitoring vital signs trends among participants, such as heart rate and blood pressure.
Additionally, the ZotCare application allowed for the subjective capture of daily symptoms, including pain and fatigue, as well as adverse events such as falls.
The customization of the ZotCare dashboard specifically catered to the needs of health providers involved in this study.
It enabled them to receive intelligent alerts in the event of abnormalities in vital signs or adverse events, and also provided visualizations of the collected data, empowering them to make informed decisions regarding participant care.
§ USING ZOTCARE
For more information on how to use ZotCare, please visit <https://futurehealth.uci.edu/projects/zotcare/>.
§ LIMITATIONS AND FUTURE WORK
This article primarily highlights the services and benefits of ZotCare in the context of mHealth research.
However, it is essential to note that certain aspects, such as ZotCare's system architecture, technical contributions, and system challenges, have not been addressed explicitly in this article.
These topics warrant separate and dedicated publications to provide in-depth insights and analysis.
The future of ZotCare will predominantly hinge on two principal enhancements: the integration of new features and the introduction of novel services.
Additional features can be incorporated into the existing services without altering the current orchestration, for instance, augmenting support for new devices, introducing new types of profiles, or incorporating new actions and triggers into RPII services.
Other relevant features, such as chatbots, could serve as beneficial additions to ZotCare's interactions.
Furthermore, there is potential for increasing the programmability of AI components within the system.
The system could be trained to suggest correlations and models congruent with researchers' experimental objectives, mitigating the need for manual programming.
Alterations in service orchestration would pose a more significant challenge, as the services must retain their flexibility, minimalism, and comprehensiveness.
Nevertheless, with the burgeoning demand in the mHealth sector, the team will explore opportunities for redesigning the service orchestration.
While programmable services offer a high level of customization, they adhere to stringent syntax and rules, necessitating a learning phase for researchers.
In addition, some functionalities and patterns in mHealth systems recur across studies in a similar form, such as smart personalized notifications, standard data-processing methods, and standard adaptive interventions.
In order to make it simpler for researchers, efforts are underway to incorporate more templates and straightforward, high-level configurations in the form of modules.
These modules will be available to researchers to add standard mHealth functionality to their studies without having to master ZotCare's programmable services or recreate standard functionality that has already been implemented.
§ CONCLUSION
mHealth solutions are significant for researchers aiming to design studies that leverage internet-connected health devices and smartphones.
These solutions must offer fast delivery, affordability, and the necessary programmability to support personalization and adaptation, particularly in the context of modern AI-driven investigations.
To achieve this, we introduced ZotCare as a service-based solution that provides a ready-to-use platform for conducting mHealth studies while also enabling resource sharing to reduce costs.
This paper primarily focused on ZotCare's service orchestration, features, usage, and capabilities.
ZotCare's service orchestration comprises Collection Services, Profile Services, and Real-time Processing, Integration, and Intervention (RPII) Services. Collection Services facilitate the aggregation of both objective and subjective data streams into the system, while Profile Services offer programmable key-value storage for participants.
These services empower RPII Services to process data, generate models, and trigger tailored actions based on varying circumstances.
Finally, we demonstrated the practical applications of ZotCare's services through various use cases.
These examples indicated how different mHealth studies across diverse domains have successfully utilized ZotCare's programmability.
§ ACKNOWLEDGEMENTS
This research was supported by the US National Science Foundation, under the Smart and Connected Communities (S&CC) program (grant CNS-1831918).
We want to extend our sincere gratitude to Arman Anzanpour for his assistance with creating and refining the figures and graphics presented in this paper.
entry_id: http://arxiv.org/abs/2307.00333v1
published: 20230701130452
title: Investigating Possible Correlations between Gamma-Ray and Optical Lightcurves for TeV-Detected Northern Blazars over 8 Years of Observations
authors: Atreya Acharyya, Alberto C. Sadun
primary_category: astro-ph.HE
categories: astro-ph.HE, astro-ph.GA
§ INTRODUCTION
Blazars form a subclass of radio-loud active galactic nuclei (AGN) that have jets closely aligned to our line-of-sight, resulting in the emission from these objects being highly Doppler-boosted and making them some of the brightest gamma-ray sources in the extragalactic sky <cit.>. Blazars are generally characterized by non-thermal, highly-polarized continuum emission, spanning the entire electromagnetic spectrum and characteristically show very fast variability, which has been observed down to timescales of minutes in the gamma-ray regime <cit.>, as well as in the optical regime <cit.>.
The spectral energy distribution (SED) of a typical blazar comprises two distinct peaks. Although the first peak, occurring in the radio to X-ray regime, has been attributed to synchrotron emission from electrons and positrons within the jet, the physical mechanisms responsible for the second peak, produced in the X-ray to gamma-ray regime, are still a matter of debate, and two main scenarios have been postulated to explain it. Leptonic models <cit.> attribute the high-energy peak to inverse Compton (IC) scattering between the energetic leptons in the jet and a field of low-energy photons (for example, <cit.>), either the same photons emitted through synchrotron emission (synchrotron self-Compton, SSC) or photon populations external to the jet (external inverse Compton, EIC).
On the other hand, hadronic models (for example, <cit.>) suggest that the second peak may be a result of high energy photons produced in cosmic-ray interactions through the decay of neutral pions (for example, <cit.>) or proton synchrotron emission (for example, <cit.>).
Localizing the gamma-ray emission is an indirect process and a variety of different methods have been used previously in the literature.
Assuming constant jet geometry, the size of the emission region, r, has been used to infer its distance from the supermassive black hole (SMBH), R, using r=ψR. Here ψ is the semi-aperture opening angle of the jet <cit.>. This relation has been used to constrain the emission region to be within a few parsec from the base of the jet. For example, Ref. <cit.>,
using ∼2 years of Fermi-LAT observations, constrained the emission to be from within the broad line region (BLR) under the assumption that the full width of the jet is responsible for the emission.
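As a concrete illustration of the r = ψR argument, the short sketch below converts an assumed emission-region size into a distance from the SMBH; the numerical values of ψ and r are assumptions chosen purely for the example and are not measurements from this work.

```python
import math

# Illustrative use of the constant-geometry relation r = psi * R discussed above.
# Both input values are assumed for the example only.
psi = math.radians(0.25)     # semi-aperture opening angle of the jet (assumed, 0.25 degrees)
r_pc = 0.01                  # emission-region radius in parsecs (assumed)

R_pc = r_pc / psi            # distance of the emission region from the SMBH
print(f"R ~ {R_pc:.1f} pc")  # ~2.3 pc for these assumed values
```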
Moreover, the observation of very high energy (VHE) photons (E_γ ≥ 20 GeV) suggests the emission originates farther out, at parsec-scale distances and from within the molecular torus (MT) region <cit.>. VHE photons are expected to be severely attenuated by interactions with the photons in the BLR, and the detection of blazars with ground-based instruments is difficult to explain if the emission were assumed to originate in regions near the central engine. A possible solution which accommodates both the short variability timescales and the VHE detections is to abandon the one-zone emission model and invoke the presence of multiple emission regions <cit.>.
An important technique to localize the emission region in blazars is the correlation of multi-wavelength lightcurves. In particular, it helps in identifying potential relationships between emission zones and in understanding the dominant emission mechanisms. For instance, a strong correlation between synchrotron produced optical data and IC scattering produced gamma-ray observations (for example, <cit.>) is expected in some leptonic models. The lags and leads, if highly significant, can help to constrain the location of the gamma-ray emission region relative to the optical emission region and also discriminate between SSC and EC models. On the other hand, orphan flares in one waveband having no correlation with others, for example, the 2002 flare of 1ES 1959+650 <cit.>, can be interpreted as evidence of multiple emission zones or as support for hadronic emission models.
In this respect, the Large Area Telescope (LAT) on board the Fermi satellite <cit.> has been particularly important. This pair-conversion telescope, launched in June 2008, is sensitive to photon energies between 20 MeV and 2 TeV and scans the entire gamma-ray sky every three hours. For example, Ref. <cit.>
presented an investigation of correlations between optical data collected with the robotic 0.76 m Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory and gamma-ray observations with the Fermi-LAT using discrete correlation functions (DCFs). The same method was used by <cit.> to study radio–gamma-ray correlations in blazars.
The Asteroid Terrestrial-impact Last Alert System (ATLAS <cit.>) is a high-cadence all-sky survey system comprising four independent units: one on Haleakala (HKO) and one on Mauna Loa (MLO) in the Hawaiian islands in the Northern Hemisphere, and one each at the El Sauce Observatory, Chile, and the South African Astronomical Observatory in the Southern Hemisphere. It is optimized for finding potentially dangerous asteroids, as well as for tracking and searching for highly variable and transient sources, such as AGN, and is capable of discovering more bright (brighter than 19 mag) supernova candidates than other ground-based surveys.
Blazar observations with ATLAS are in the R-band, centered at 679 nm, having a typical cadence of one data point per two days. In this work, we apply local cross-correlation functions (LCCFs <cit.>) to investigate possible correlations between ATLAS optical data and gamma-ray observations with the Fermi-LAT for a sample of 18 TeV-detected northern blazars, over 8 years of observations between 2015 and 2022. For brighter sources (for example, Mrk 501 <cit.>), it is also possible to undertake an analysis of microvariability in outbursts using optical data and this will be investigated in detail in a future publication.
§ SOURCE SELECTION AND DATA REDUCTION
The main goal of this study is to investigate possible correlations between ATLAS optical data and Fermi-LAT gamma-ray observations in blazars. The identification of suitable sources was primarily governed by having sufficient photon statistics to allow for a detailed study of the LCCFs. The sample chosen for this study comprises 18 northern Fermi-LAT blazars detected in the TeV regime [<http://tevcat2.uchicago.edu/> (accessed on 3 June 2023)] with ground-based gamma-ray observatories and is shown in Table <ref>.
Optical observations were made with ATLAS, a high-cadence all-sky survey system of four 0.5 m telescopes that scan the entire sky periodically. The telescopes are located in Hawaii (Mauna Loa, Hawaii and Haleakala, Maui), in South Africa (Sutherland Observing Station), and in Chile (El Sauce Observatory, Rio Hurtado). Between declinations of −50 and +50 degrees, the cadence is one set of observations each day, and in the polar regions once every two days, during the observing season and weather permitting. The filter used in these observations is roughly equivalent to the Johnson–Cousins R filter, with a transmission curve centered at 679 nm. The filter is nonstandard but well calibrated; see <cit.>. All the data are archived in a database for retrieval.
A forced photometry system is used in which the point spread function (PSF), which represents the distribution of light from a point source on the detector, is obtained for each object based on bright stars, and the function is forced onto the object at the chosen coordinates. Data processing is described in <cit.>. When the data are retrieved from the database, software automatically calculates the magnitude according to the AB system, as well as the flux, for each object. A catalog of variable stars produced by this method is presented in <cit.>. The optical lightcurves for the sample of blazars are shown in Figures <ref>–<ref>.
In this work, we analyzed Fermi-LAT photons detected between MJD 57000 and MJD 59945, which corresponds to midnight on 9 December 2014 until midnight on 1 January 2023. Throughout the gamma-ray analysis, we used the Fermi Science Tools version 11-05-03 [<http://fermi.gsfc.nasa.gov/ssc/data/analysis/software> (accessed on 3 June 2023)] and FERMIPY version 1.0.1 [<http://fermipy.readthedocs.io> (accessed on 3 June 2023)] <cit.>, in conjunction with the latest PASS 8 IRFs <cit.>. We consider the energy range 100 MeV–300 GeV and a region of interest (RoI) with radius 15^∘ centred on each source.
A lower limit of 100 MeV is chosen because the PSF of the Fermi-LAT increases at lower energies, making a point source analysis difficult. Furthermore, most of the blazars selected have relatively soft gamma-ray spectra and are not expected to be significantly detected with the Fermi-LAT at energies above 300 GeV.
Moreover, we selected only photon events from within a maximum zenith angle of 90^∘ in order to reduce contamination from background photons from the Earth's limb, produced by the interaction of cosmic-rays with the upper atmosphere.
Sources in the 4FGL-DR3 catalog <cit.> within a radius of 20^∘ from the spatial position of each source in the sample were included in the initial model for the analyses, with their spectral parameters fixed to their catalog values.
This takes into account the gamma-ray emission from sources lying outside the RoI which might yet contribute photons to the data, especially at low energies, due to the size of the point spread function of the Fermi-LAT. The contributions from the isotropic and galactic diffuse backgrounds were modeled using the most recent templates, iso_P8R3_SOURCE_V2_v1.txt and gll_iem_v07.fits, respectively.
The analysis began with an initial automatic optimization of the RoI by iteratively fitting the sources. This ensured all parameters were close to their global likelihood maxima. The spectral normalization of all modelled sources within the RoI were left free as were the normalization factor of both the isotropic and galactic diffuse emission templates. Furthermore, the spectral shape parameters of all sources within 5^∘ of the centre of the RoI were left free to vary whereas those of other sources were fixed to the values reported in the 4FGL-DR3 catalog. A binned likelihood analysis was then performed in order to obtain the spectral parameters best describing the model using a spatial binning of 0.1^∘ pixel^-1 and bins per decade.
In order to study the temporal behaviour of the gamma-ray fluxes, the Fermi-LAT data were binned monthly, with a likelihood routine applied to each bin separately [this was implemented using the gta.lightcurve() method in FERMIPY]. During the production of the lightcurve, the spectral parameters of all sources within 5^∘ of the RoI centre were left free for each bin, as were the normalization factors of the background emission models. The resulting lightcurves for the sample of blazars are shown in Figures <ref>–<ref>, along with the corresponding uncertainties. Only time intervals having TS ≥ 10 were considered, which roughly equates to a significance of 3σ.
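As a rough, schematic illustration of this pipeline, the sketch below drives a binned analysis and a monthly lightcurve with FERMIPY. The configuration file, paths, and source name are placeholders, and the options shown only mirror the cuts quoted above; this is not the exact script used in this work.

```python
# Schematic FERMIPY driver, assuming a config.yaml that encodes the selection described
# in the text (100 MeV-300 GeV, zmax = 90 deg, 15 deg ROI, 4FGL-DR3 model,
# gll_iem_v07 / iso_P8R3_SOURCE_V2_v1 diffuse templates). 'TARGET_SOURCE' is a placeholder.
from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()                         # prepare data selection, exposure and source maps
gta.optimize()                      # iterative initial fit of all ROI sources
gta.free_sources(distance=5.0)      # free spectral parameters within 5 deg of the ROI centre
gta.free_source('galdiff')          # free the galactic diffuse normalization
gta.free_source('isodiff')          # free the isotropic diffuse normalization
fit_result = gta.fit()              # binned likelihood fit

# Monthly-binned lightcurve; bins with TS < 10 would be discarded downstream.
lc = gta.lightcurve('TARGET_SOURCE', binsz=30 * 86400.0)
```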
§ RESULTS
Local cross-correlation functions (LCCFs; <cit.>) were then applied to investigate correlations between the Fermi-LAT and ATLAS lightcurves. Consider two lightcurves with fluxes a_i and b_j measured at times t_a_i and t_b_j; the LCCF can then be computed as:
LCCF(τ) = (1/M) Σ (a_i − a_τ)(b_j − b_τ) / (σ_aτ σ_bτ).
Here, the sum runs over the M pairs for which τ ≤ t_a_i − t_b_j < τ + Δt for a chosen timestep Δt, and a_τ and b_τ are the mean fluxes and σ_aτ and σ_bτ the standard deviations computed over those M pairs, respectively <cit.>. As adopted in <cit.>, we choose the higher half median of the time separations between consecutive data points in the two lightcurves as the binning of the timelags, τ. Furthermore, the minimum and maximum values of τ are chosen to be ±0.5 times the length of the shorter lightcurve <cit.>.
This method is independent of any difference in the sampling rates of the two lightcurves. LCCFs are intrinsically bounded to the interval [−1, 1], and <cit.> found them to be more efficient than Discrete Correlation Functions (DCFs; <cit.>). The LCCFs obtained for each source are shown in Figures <ref> and <ref>.
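For concreteness, a minimal NumPy sketch of the LCCF as defined above is given below; the lag grid, the timestep, and which lightcurve is taken as a and which as b (and hence the sign convention for leads and lags) are left to the caller, and the function is illustrative rather than the exact implementation used in this work.

```python
import numpy as np

def lccf(t_a, a, t_b, b, lags, dt):
    """Local cross-correlation function: for each trial lag tau, use only the M pairs
    with tau <= t_a_i - t_b_j < tau + dt, normalizing by the mean and standard
    deviation computed locally over those same M pairs."""
    dt_ij = t_a[:, None] - t_b[None, :]                 # all pairwise time differences
    a_ij = np.broadcast_to(a[:, None], dt_ij.shape)
    b_ij = np.broadcast_to(b[None, :], dt_ij.shape)
    out = np.full(len(lags), np.nan)
    for k, tau in enumerate(lags):
        sel = (dt_ij >= tau) & (dt_ij < tau + dt)
        if sel.sum() < 2:                               # need at least two pairs per lag bin
            continue
        aa, bb = a_ij[sel], b_ij[sel]
        sa, sb = aa.std(), bb.std()
        if sa > 0 and sb > 0:
            out[k] = np.mean((aa - aa.mean()) * (bb - bb.mean())) / (sa * sb)
    return out

# Example call (arrays and the 4-day timestep are illustrative):
# lags = np.arange(-600.0, 600.0, 4.0)
# corr = lccf(t_gamma, f_gamma, t_optical, f_optical, lags, dt=4.0)
```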
As done in <cit.>, the centroid lags and uncertainties are derived from weighted least-squares Gaussian fits to the LCCF points at the location of the most significant correlation peak. Although the peaks of the Gaussian fits give a first-order determination of the uncertainty, these do not account for the effects of correlated red noise between the datasets <cit.>. The significances of the correlations are obtained by performing Monte Carlo simulations to produce 1000 artificial lightcurves matching the probability distribution function (PDF) and power spectral density (PSD) of each observation, using the method outlined in <cit.> [the code was developed from Connolly, S. D., 2016, Astrophysics Source Code Library, record ascl:1602.012; see https://github.com/samconnolly/DELightcurveSimulation (accessed on 3 June 2023)]. The 68%, 95%, and 99% confidence intervals obtained are also shown in Figure <ref>, in blue from lighter to darker shades.
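The significance estimate can then be sketched as follows, reusing the lccf function above. The simulate_surrogate callable stands in for an Emmanoulopoulos-style lightcurve simulator (such as the DELightcurveSimulation code cited above); its name and signature are placeholders, not that package's actual API.

```python
import numpy as np

def lccf_confidence_bands(t_a, a_obs, t_b, b_obs, lags, dt,
                          simulate_surrogate, n_sim=1000):
    """Estimate chance-correlation bands: cross-correlate pairs of surrogate lightcurves
    sharing the PDF and PSD of the observations, then take per-lag percentiles.
    `simulate_surrogate(t, f)` is a placeholder for the lightcurve simulator."""
    sims = np.empty((n_sim, len(lags)))
    for i in range(n_sim):
        a_sim = simulate_surrogate(t_a, a_obs)   # surrogate of the first lightcurve
        b_sim = simulate_surrogate(t_b, b_obs)   # surrogate of the second lightcurve
        sims[i] = lccf(t_a, a_sim, t_b, b_sim, lags, dt)
    # 68%, 95% and 99% bands against which the observed LCCF peak is compared
    return np.nanpercentile(sims, [68.0, 95.0, 99.0], axis=0)
```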
The centroid lags and significances obtained for each source are summarised in Table <ref>. Throughout this paper, a positive time delay, Δ t > 0, is defined as corresponding to the optical emission leading the gamma-ray emission. Wider LCCF peaks may indicate a range of characteristic timescales in the correlated response or limitations in the instruments <cit.>. The presence of a peak, significant at a level of above 3σ, implies strong optical and gamma-ray correlations. This is seen for W Comae, S3 1227+25, 3C 66A, and 1ES 1215+303, and suggests a single-zone model of emission. Under the assumption that the optical and gamma-ray flares are produced by the same outburst propagating down the jet, both positive and negative time-lags on timescales of days to tens of days are predicted in SSC and EIC models. A significant peak consistent with zero indicates an absence of time-lag and is seen for S3 1227+25 and 3C 66A.
The results obtained in this investigation are broadly in agreement with those seen in <cit.>, where the optical and gamma-ray correlations of 121 blazars were found to be significant at a level of above 68% (∼1σ). The majority of the corresponding time-lags were within ±20 days of zero lag, and this was interpreted as evidence of a common origin for the flares in the two bands, implying that leptonic processes dominate blazar gamma-rays, while not excluding the possibility of a hadronic contribution to the emission. However, it should be noted that although <cit.> found that the time delays, in general, did not exceed these values at the 2σ level and were smaller at the 3σ level, in this study we find significant delays for 11 of the 18 considered objects, often exceeding 100 days.
Furthermore, out of the nine objects with a correlation of above 99%, six sources, namely 1ES 0414+009, 1ES 1959+650, 1ES 1440+122, 1ES 1011+496, 1ES 1218+304, and PKS 1424+240, are found to have a delay of more than 100 days. In all of these cases a negative time-lag is found, suggesting that the gamma-ray emission lags the optical emission. This long delay could imply a more complex emission mechanism than the simple one-zone leptonic SSC model found for the remaining sources; for example, one with multiple emission regions in which the optical emission originates upstream of the gamma-ray emission (this was also seen for gamma-ray and radio correlations in <cit.>). However, continuous and simultaneous monitoring over a longer time period is required to draw any stronger conclusions.
Moreover, while similar conclusions of leptonic single-zone models where the low- and high-energy emission comes from the same population of electrons were also found in <cit.> from a study of gamma-ray and optical correlations for a sample of 1180 blazars, it should also be noted that they find evidence of "orphan" flares, gamma-ray flares with no optical counterpart or optical flares having no gamma-ray counterpart, in some of the sources. As seen in Figures <ref>–<ref>, a few sources in this investigation, for example, 1ES 0502+675 (MJD∼59750) and S3 1227+25 (MJD ∼58600), also show "orphan" flares. The origin of these flares is uncertain but they have been interpreted as support for hadronic emission models (for example, <cit.>), evidence of multiple emission zones (for example, <cit.>) or a result of contamination in the optical regime due to accretion disk emission (for example, <cit.>).
§ CONCLUSIONS
In this work, we present an investigation into possible correlations between ATLAS optical data and gamma-ray observations with the Fermi-LAT for a sample of 18 TeV-detected northern blazars, over 8 years of observations between 2015 and 2022. Overall, we find the optical and gamma-ray emission to be highly correlated in our sample, with varied time delays, ranging from timescales of days to even years for some sources. With the exception of one source, 1ES 1218+304, all the correlations are found to correspond to a significance of at least 68% (∼1σ), and for nine sources the correlations correspond to a significance of at least 99% (∼3σ).
The observed strong correlations support leptonic models in which the gamma-ray emission arises from IC scattering, with the seed photons scattered to higher energies in the relativistic jet by the same electrons responsible for the synchrotron emission. It should be noted that the significance of the correlations and the corresponding time delays do not allow us to draw strong conclusions on whether the seed photons are dominated by SSC or EIC radiation. However, the lack of a clear trend towards a lag or a lead in our BL Lac sample agrees with the results presented in <cit.> and can be interpreted as evidence that SSC is the dominant mechanism in BL Lac sources, as opposed to FSRQs, which, in general, show the gamma-ray emission leading the optical emission, interpreted as evidence that EIC is the dominant mechanism.
In conclusion, the gamma-ray–optical correlation in BL Lac sources appears complex, as also seen in other variability studies (for example, <cit.>). Multi-wavelength studies at high cadence over many years are needed to probe the emission mechanisms further, and this will be possible with continued ATLAS coverage and other optical instruments, in conjunction with the continued successful operation of the Fermi-LAT. Finally, we aim to perform a comprehensive study of the sample, further investigating micro-variability in both energy bands, in the future.
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. AA acknowledges funding from NSF grant Award Number 1914579, WoU-MMA: Multi-Messenger Studies with Very-High-Energy Gamma Rays.
The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen’s University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
References
[Dermer and Giebels(2016)]gammaagn
Dermer, C.D.; Giebels, B.
Active galactic nuclei at gamma-ray energies.
C R Phys. 2016, 17, 594–616. Available online: <http://xxx.lanl.gov/abs/1602.06592>.
[Gaidos et al.(1996)Gaidos, Akerlof, Biller, Boyle,
Breslin, Buckley, Carter-Lewis, Catanese, Cawley, Fegan,
Finley, Gordo, Hillas, Krennrich, Lamb, Lessard, McEnery,
Masterson, Mohanty, Moriarty, Quinn, Rodgers, Rose, Samuelson,
Schubnell, Sembroski, Srinivasan, Weekes, Wilson, and
Zweerink]1996_Gaidos
Gaidos, J.A.; Akerlof, C.W.; Biller, S.; Boyle, P.J.; Breslin, A.C.;
Buckley, J.H.; Carter-Lewis, D.A.; Catanese, M.; Cawley, M.F.;
Fegan, D.J.; et al.
Extremely rapid bursts of TeV photons from the active galaxy
Markarian 421. Nature
1996, 383, 319–320.
<https://doi.org/10.1038/383319a0>.
[Aharonian et al.(2009)Aharonian, Akhperjanian, Anton, Barres
de Almeida, Bazer-Bachi, Becherini, Behera, Benbow, Bernlöhr,
Boisson, Bochow, Borrel, Brion, Brucker, Brun, Bühler,
Bulik, Büsching, Boutelier, Chadwick, Charbonnier, Chaves,
Cheesebrough, Chounet, Clapson, Coignet, Costamante, Dalton,
Daniel, Davids, Degrange, Deil, Dickinson, Djannati-Ataï,
Domainko, O'C. Drury, Dubois, Dubus, Dyks, Dyrda, Egberts,
Emmanoulopoulos, Espigat, Farnier, Feinstein, Fiasson,
Förster, Fontaine, Füßling, Gabici, Gallant,
Gérard, Giebels, Glicenstein, Glück, Goret, Göhring,
Hauser, Hauser, Heinz, Heinzelmann, Henri, Hermann, Hinton,
Hoffmann, Hofmann, Holleran, Hoppe, Horns, Jacholkowska, de
Jager, Jahn, Jung, Katarzyński, Katz, Kaufmann, Kendziorra,
Kerschhaggl, Khangulyan, Khélifi, Keogh, Kluźniak,
Kneiske, Komin, Kosack, Lamanna, Lenain, Lohse, Marandon,
Martin, Martineau-Huynh, Marcowith, Maurin, McComb, Medina,
Moderski, Monard, Moulin, Naumann-Godo, de Naurois, Nedbal,
Nekrassov, Niemiec, Nolan, Ohm, Olive, de Oña Wilhelmi,
Orford, Ostrowski, Panter, Paz Arribas, Pedaletti, Pelletier,
Petrucci, Pita, Pühlhofer, Punch, Quirrenbach, Raubenheimer,
Raue, Rayner, Renaud, Rieger, Ripken, Rob, Rosier-Lees,
Rowell, Rudak, Rulten, Ruppel, Sahakian, Santangelo,
Schlickeiser, Schöck, Schröder, Schwanke, Schwarzburg,
Schwemmer, Shalchi, Sikora, Skilton, Sol, Spangler, Stawarz,
Steenkamp, Stegmann, Superina, Szostek, Tam, Tavernet, Terrier,
Tibolla, Tluczykont, van Eldik, Vasileiadis, Venter, Venter,
Vialle, Vincent, Vivier, Völk, Volpe, Wagner, Ward,
Zdziarski, and Zech]PKS_2155_minute_variability
Aharonian, F.; Akhperjanian, A.G.; Anton, G.; Barres de Almeida, U.;
Bazer-Bachi, A.R.; Becherini, Y.; Behera, B.; Benbow, W.;
Bernlöhr, K.; Boisson, C.; et al.
Simultaneous multiwavelength observations of the second exceptional
-ray flare of PKS 2155-304 in July 2006. Astron. Astrophys.
2009, 502, 749–770. <https://doi.org/10.1051/0004-6361/200912128>.
[Abeysekara et al.(2020)Abeysekara, Benbow, Bird, Brill,
Brose, Buchovecky, Buckley, Christiansen, Chromey, Daniel,
Dumm, Falcone, Feng, Finley, Fortson, Furniss, Galante, Gent,
Gillanders, Giuri, Gueta, Hassan, Hervet, Holder, Hughes,
Humensky, Johnson, Kaaret, Kar, Kelley-Hoskins, Kertzman,
Kieda, Krause, Krennrich, Kumar, Lang, Moriarty, Mukherjee,
Nelson, Nieto, Nievas-Rosillo, O'Brien, Ong, Otte, Park,
Petrashyk, Pichel, Pohl, Prado, Pueschel, Quinn, Ragan,
Reynolds, Richards, Roache, Rovero, Rulten, Sadeh, Santander,
Sembroski, Shahinyan, Stevenson, Sushch, Tyler, Vassiliev,
Wakely, Weinstein, Wells, Wilcox, Wilhelm, Williams, Zitzer,
Acciari, Ansoldi, Antonelli, Arbet Engels, Baack, Babić,
Banerjee, Barres de Almeida, Barrio, Becerra González,
Bednarek, Bellizzi, Bernardini, Berti, Besenrieder,
Bhattacharyya, Bigongiari, Biland, Blanch, Bonnoli, Busetto,
Carosi, Ceribella, Chai, Cikota, Colak, Colin, Colombo,
Contreras, Cortina, Covino, D'Elia, Da Vela, Dazzi, De Angelis,
De Lotto, Delfino, Delgado, Di Pierro, Do Souto Espiñera,
Dominis Prester, Dorner, Doro, Einecke, Elsaesser, Fallah
Ramazani, Fattorini, Fernández-Barral, Ferrara, Fidalgo,
Foffano, Fonseca, Font, Fruck, Galindo, Gallozzi, García
López, Garczarczyk, Gasparyan, Gaug, Godinović, Green,
Guberman, Hadasch, Hahn, Herrera, Hoang, Hrupec, Inoue,
Ishio, Iwamura, Kubo, Kushida, Lamastra, Lelas, Leone,
Lindfors, Lombardi, Longo, López, López-Coto,
López-Oramas, Machado de Oliveira Fraga, Maggio, Majumdar,
Makariev, Mallamaci, Maneva, Manganaro, Mannheim, Maraschi,
Mariotti, Martínez, Masuda, Mazin, Miceli, Minev, Miranda,
Mirzoyan, Molina, Moralejo, Morcuende, Moreno, Moretti,
Munar-Adrover, Neustroev, Niedzwiecki, Nievas Rosillo, Nigro,
Nilsson, Ninci, Nishijima, Noda, Nogués, Nöthe, Paiano,
Palacio, Palatiello, Paneque, Paoletti, Paredes, Peñil,
Peresano, Persic, Prada Moroni, Prandini, Puljak, Rhode,
Ribó, Rico, Righi, Rugliancich, Saha, Sahakyan, Saito,
Satalecka, Schweizer, Sitarek, Šnidarić, Sobczynska,
Somero, Stamerra, Strom, Strzys, Sun, Surić, Tavecchio,
Temnikov, Terzić, Teshima, Torres-Albà, Tsujimoto, van
Scherpenberg, Vanzo, Vazquez Acosta, Vovk, Will, Zarić,
Aller, Aller, Carini, Horan, Jordan, Jorstad, Kurtanidze,
Kurtanidze, Lähteenmäki, Larionov, Larionova, Madejski,
Marscher, Max-Moerbeck, Moody, Morozova, Nikolashvili, Raiteri,
Readhead, Richards, Sadun, Sakamoto, Sigua, Smith, Talvikki,
Tammi, Tornikoski, Troitsky, and Villata]2020_Mrk421
Abeysekara, A.U.; Benbow, W.; Bird, R.; Brill, A.; Brose, R.;
Buchovecky, M.; Buckley, J.H.; Christiansen, J.L.; Chromey, A.J.;
Daniel, M.K.; et al.
The Great Markarian 421 Flare of 2010 February: Multiwavelength
Variability and Correlation Studies. Astrophys. J.
2020, 890, 97. <https://doi.org/10.3847/1538-4357/ab6612>.
[Aranzana et al.(2018)Aranzana, Körding, Uttley,
Scaringi, and Bloemen]2018_Aranzana
Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.
Short time-scale optical variability properties of the largest AGN
sample observed with Kepler/K2. Mon. Not. R. Astron. Soc.
2018, 476, 2501–2515. <https://doi.org/10.1093/mnras/sty413>.
[Kim et al.(2018)Kim, Karouzos, Im, Choi, Kim, Jun,
Lee, and Mezcua]2018_Kim
Kim, J.; Karouzos, M.; Im, M.; Choi, C.; Kim, D.; Jun, H.D.; Lee,
J.H.; Mezcua, M.
Intra-Night Optical Variability of Active Galactic Nuclei in the
Cosmos Field with the KMTNet.
J. Korean Astron. Soc. 2018, 51, 89–110.
<https://doi.org/10.5303/JKAS.2018.51.4.89>.
[Goyal(2018)]galaxies6010034
Goyal, A.
A Comparative Study of Multiwavelength Blazar Variability on Decades
to Minutes Timescales.
Galaxies 2018, 6.
<https://doi.org/10.3390/galaxies6010034>.
[Blandford and Levinson(1995)]Blandford_and_Levinson_1995
Blandford, R.D.; Levinson, A.
Pair Cascades in Extragalactic Jets. I. Gamma Rays. Astrophys. J.
1995, 441, 79.
<https://doi.org/10.1086/175338>.
[Georganopoulos et al.(2002)Georganopoulos, Aharonian, and
Kirk]Georganopoulos_2002
Georganopoulos, M.; Aharonian, F.A.; Kirk, J.G.
External Compton emission from relativistic jets in Galactic black
hole candidates and ultraluminous X-ray sources. Astron. Astrophys.
2002, 388, L25–L28. <https://doi.org/10.1051/0004-6361:20020567>.
[Sikora et al.(1994)Sikora, Begelman, and Rees]1994_Sikora
Sikora, M.; Begelman, M.C.; Rees, M.J.
Comptonization of Diffuse Ambient Radiation by a Relativistic Jet:
The Source of Gamma Rays from Blazars? Astrophys. J.
1994, 421, 153.
<https://doi.org/10.1086/173633>.
[Böttcher et al.(2013)Böttcher, Reimer, Sweeney, and
Prakash]MB_2013
Böttcher, M.; Reimer, A.; Sweeney, K.; Prakash, A.
Leptonic and Hadronic Modeling of Fermi-detected Blazars. Astrophys. J.
2013, 768, 54.
<https://doi.org/10.1088/0004-637X/768/1/54>.
[Mannheim and Biermann(1992)]1992_Mannheim
Mannheim, K.; Biermann, P.L.
Gamma-ray flaring of 3C 279: A proton-initiated cascade in the jet? Astron. Astrophys.
1992, 253, L21–L24.
[Kovalev et al.(2016)Kovalev, Kardashev, Kellermann, Lobanov,
Johnson, Gurvits, Voitsik, Zensus, Anderson, Bach, Jauncey,
Ghigo, Ghosh, Kraus, Kovalev, Lisakov, Petrov, Romney,
Salter, and Sokolovsky]2016_kovalev
Kovalev, Y.Y.; Kardashev, N.S.; Kellermann, K.I.; Lobanov, A.P.;
Johnson, M.D.; Gurvits, L.I.; Voitsik, P.A.; Zensus, J.A.;
Anderson, J.M.; Bach, U.; et al.
RadioAstron Observations of the Quasar 3C273: A Challenge to the
Brightness Temperature Limit. Astrophys. J. Lett.
2016, 820, L9.
<https://doi.org/10.3847/2041-8205/820/1/L9>.
[Ghisellini and Tavecchio(2009)]Ghisellini_2009
Ghisellini, G.; Tavecchio, F.
Canonical high-power blazars. Mon. Not. R. Astron. Soc.
2009, 397, 985–1002.
<https://doi.org/10.1111/j.1365-2966.2009.15007.x>.
[Dermer et al.(2009)Dermer, Finke, Krug, and
Böttcher]2009Dermer
Dermer, C.D.; Finke, J.D.; Krug, H.; Böttcher, M.
Gamma-Ray Studies of Blazars: Synchro-Compton Analysis of Flat
Spectrum Radio Quasars. Astrophys. J.
2009, 692, 32–46.
<https://doi.org/10.1088/0004-637X/692/1/32>.
[Foschini et al.(2011)Foschini, Ghisellini, Tavecchio,
Bonnoli, and Stamerra]Foschini_2011
Foschini, L.; Ghisellini, G.; Tavecchio, F.; Bonnoli, G.; Stamerra,
A.
Search for the shortest variability at gamma rays in flat-spectrum
radio quasars. Astron. Astrophys.
2011, 530, A77.
<https://doi.org/10.1051/0004-6361/201117064>.
[Donea and Protheroe(2003)]Donea_2003
Donea, A.C.; Protheroe, R.J.
Gamma Ray and Infrared Emission from the M87 Jet and Torus.
Prog. Theor. Phys. Suppl. 2003, 151, 186–191.
<https://doi.org/10.1143/PTPS.151.186>.
[Liu and Bai(2006)]RN28
Liu, H.T.; Bai, J.M.
Absorption of 10-200 GeV Gamma Rays by Radiation from Broad-Line
Regions in Blazars. Astrophys. J.
2006, 653, 1089–1097.
<https://doi.org/10.1086/509097>.
[Acharyya et al.(2021)Acharyya, Chadwick, and
Brown]2021MNRAS.500.5297A
Acharyya, A.; Chadwick, P.M.; Brown, A.M.
Locating the gamma-ray emission region in the brightest Fermi-LAT
flat-spectrum radio quasars. Mon. Not. R. Astron. Soc.
2021, 500, 5297–5321.
<https://doi.org/10.1093/mnras/staa3483>.
[Liodakis et al.(2019)Liodakis, Romani, Filippenko, Kocevski,
and Zheng]2019_Liodakis
Liodakis, I.; Romani, R.W.; Filippenko, A.V.; Kocevski, D.; Zheng, W.
Probing Blazar Emission Processes with Optical/Gamma-Ray Flare
Correlations. Astrophys. J.
2019, 880, 32.
<https://doi.org/10.3847/1538-4357/ab26b7>.
[de Jaeger et al.(2023)de Jaeger, Shappee, Kochanek, Hinkle,
Garrappa, Liodakis, Franckowiak, Stanek, Beacom, and
Prieto]2023_de_Jaeger
de Jaeger, T.; Shappee, B.J.; Kochanek, C.S.; Hinkle, J.T.; Garrappa,
S.; Liodakis, I.; Franckowiak, A.; Stanek, K.Z.; Beacom, J.F.;
Prieto, J.L.
Optical/-ray blazar flare correlations:
understanding the high-energy emission process using ASAS-SN and Fermi light
curves. Mon. Not. R. Astron. Soc.
2023, 519, 6349–6380.
<https://doi.org/10.1093/mnras/stad060>.
[Krawczynski et al.(2004)Krawczynski, Hughes, Horan,
Aharonian, Aller, Aller, Boltwood, Buckley, Coppi, Fossati,
Götting, Holder, Horns, Kurtanidze, Marscher, Nikolashvili,
Remillard, Sadun, and Schröder]2004_orphan
Krawczynski, H.; Hughes, S.B.; Horan, D.; Aharonian, F.; Aller, M.F.;
Aller, H.; Boltwood, P.; Buckley, J.; Coppi, P.; Fossati, G.;
et al.
Multiwavelength Observations of Strong Flares from the TeV Blazar
1ES 1959+650. Astrophys. J.
2004, 601, 151–164.
<https://doi.org/10.1086/380393>.
[Atwood et al.(2009)Atwood, Abdo, Ackermann, Althouse,
Anderson, Axelsson, Baldini, Ballet, Band, Barbiellini,
Bartelt, Bastieri, Baughman, Bechtol, Bédérède,
Bellardi, Bellazzini, Berenji, Bignami, Bisello, Bissaldi,
Blandford, Bloom, Bogart, Bonamente, Bonnell, Borgland ,
Bouvier, Bregeon, Brez, Brigida, Bruel, Burnett, Busetto,
Caliandro, Cameron, Caraveo, Carius, Carlson, Casandjian,
Cavazzuti, Ceccanti, Cecchi, Charles, Chekhtman, Cheung,
Chiang, Chipaux, Cillis, Ciprini, Claus, Cohen-Tanugi,
Condamoor, Conrad, Corbet, Corucci, Costamante, Cutini, Davis,
Decotigny, DeKlotz, Dermer, de Angelis, Digel, do Couto e Silva,
Drell, Dubois, Dumora, Edmonds, Fabiani, Farnier, Favuzzi,
Flath, Fleury, Focke, Funk, Fusco, Gargano, Gasparrini,
Gehrels, Gentit, Germani, Giebels, Giglietto, Giommi, Giordano,
Glanzman, Godfrey, Grenier, Grondin, Grove, Guillemot, Guiriec,
Haller, Harding, Hart, Hays, Healey, Hirayama, Hjalmarsdotter,
Horn, Hughes, Jóhannesson, Johansson, Johnson, Johnson,
Johnson, Johnson, Kamae, Katagiri, Kataoka, Kavelaars, Kawai,
Kelly, Kerr, Klamra, Knödlseder, Kocian, Komin, Kuehn,
Kuss, Landriu, Latronico, Lee, Lee, Lemoine-Goumard, Lionetto,
Longo, Loparco, Lott, Lovellette, Lubrano, Madejski, Makeev,
Marangelli, Massai, Mazziotta, McEnery, Menon, Meurer,
Michelson, Minuti, Mirizzi, Mitthumsiri, Mizuno, Moiseev,
Monte, Monzani, Moretti, Morselli, Moskalenko, Murgia,
Nakamori, Nishino, Nolan, Norris, Nuss, Ohno, Ohsugi, Omodei,
Orlando, Ormes, Paccagnella, Paneque, Panetta, Parent, Pearce,
Pepe, Perazzo, Pesce-Rollins, Picozza, Pieri, Pinchera, Piron,
Porter, Poupard, Rainò, Rando, Rapposelli, Razzano, Reimer,
Reimer, Reposeur, Reyes, Ritz, Rochester, Rodriguez, Romani,
Roth, Russell, Ryde, Sabatini, Sadrozinski, Sanchez, Sand er,
Sapozhnikov, Parkinson, Scargle, Schalk, Scolieri, Sgrò,
Share, Shaw, Shimokawabe, Shrader, Sierpowska-Bartosik, Siskind,
Smith, Smith, Spandre, Spinelli, Starck, Stephens, Strickman,
Strong, Suson, Tajima, Takahashi, Takahashi, Tanaka, Tenze,
Tether, Thayer, Thayer, Thompson, Tibaldo, Tibolla, Torres,
Tosti, Tramacere, Turri, Usher, Vilchez, Vitale, Wang,
Watters, Winer, Wood, Ylinen, and Ziegler]Fermi_LAT
Atwood, W.B.; Abdo, A.A.; Ackermann, M.; Althouse, W.; Anderson, B.;
Axelsson, M.; Baldini, L.; Ballet, J.; Band, D.L.; Barbiellini, G.;
et al.
The Large Area Telescope on the Fermi Gamma-Ray Space Telescope
Mission. Astrophys. J.
2009, 697, 1071–1102.
<https://doi.org/10.1088/0004-637X/697/2/1071>.
[Cohen et al.(2014)Cohen, Romani, Filippenko, Cenko, Lott, Zheng, and
Li]Cohen_2014
Cohen, D.P.; Romani, R.W.; Filippenko, A.V.; Cenko, S.B.; Lott, B.; Zheng, W.;
Li, W.
Temporal correlations between optical and gamma-ray activity in
blazars.
Astrophys. J. 2014, 797, 137.
<https://doi.org/10.1088/0004-637x/797/2/137>.
[Fuhrmann et al.(2014)Fuhrmann, Larsson, Chiang, Angelakis,
Zensus, Nestoras, Krichbaum, Ungerechts, Sievers, Pavlidou,
Readhead, Max-Moerbeck, and Pearson]Fuhrmann
Fuhrmann, L.; Larsson, S.; Chiang, J.; Angelakis, E.; Zensus, J.A.;
Nestoras, I.; Krichbaum, T.Â.P.; Ungerechts, H.; Sievers, A.;
Pavlidou, V.; et al.
Detection of significant cm to sub-mm band radio and
-ray correlated variability in Fermi bright blazars. Mon. Not. R. Astron. Soc.
2014, 441, 1899–1909.
<https://doi.org/10.1093/mnras/stu540>.
[Max-Moerbeck et al.(2014)Max-Moerbeck, Hovatta, Richards,
King, Pearson, Readhead, Reeves, Shepherd, Stevenson,
Angelakis, Fuhrmann, Grainge, Pavlidou, Romani, and
Zensus]Max_Moerbeck_2014
Max-Moerbeck, W.; Hovatta, T.; Richards, J.L.; King, O.G.; Pearson,
T.J.; Readhead, A.C.S.; Reeves, R.; Shepherd, M.C.; Stevenson, M.A.;
Angelakis, E.; et al.
Time correlation between the radio and gamma-ray activity in blazars
and the production site of the gamma-ray emission. Mon. Not. R. Astron. Soc.
2014, 445, 428–436.
<https://doi.org/10.1093/mnras/stu1749>.
[Tonry et al.(2018)Tonry, Denneau, Heinze, Stalder, Smith,
Smartt, Stubbs, Weiland, and Rest]ATLAS_main
Tonry, J.L.; Denneau, L.; Heinze, A.N.; Stalder, B.; Smith, K.W.;
Smartt, S.J.; Stubbs, C.W.; Weiland, H.J.; Rest, A.
ATLAS: A High-cadence All-sky Survey System. Publ. Astron. Soc. Pac.
2018, 130, 064505.
<https://doi.org/10.1088/1538-3873/aabadf>.
[Welsh(1999)]Welsh_LCCF
Welsh, W.F.
On the Reliability of Cross-Correlation Function Lag Determinations
in Active Galactic Nuclei. Publ. Astron. Soc. Pac.
1999, 111, 1347–1366.
<https://doi.org/10.1086/316457>.
[Sadun et al.(2018)Sadun, Asadi-Zeydabadi, Mills, and
Moody]galaxies6010020
Sadun, A.C.; Asadi-Zeydabadi, M.; Mills, B.; Moody, J.W.
Statistical Analysis of the Microvariable AGN Source Mrk 501.
Galaxies 2018, 6.
<https://doi.org/10.3390/galaxies6010020>.
[Smith et al.(2020)Smith, Smartt, Young, Tonry, Denneau,
Flewelling, Heinze, Weiland, Stalder, Rest, Stubbs, Anderson,
Chen, Clark, Do, Förster, Fulton, Gillanders, McBrien,
O'Neill, Srivastav, and Wright]ATLAS_smith
Smith, K.W.; Smartt, S.J.; Young, D.R.; Tonry, J.L.; Denneau, L.;
Flewelling, H.; Heinze, A.N.; Weiland, H.J.; Stalder, B.; Rest, A.;
et al.
Design and Operation of the ATLAS Transient Science Server. Publ. Astron. Soc. Pac.
2020, 132, 085002.
<https://doi.org/10.1088/1538-3873/ab936e>.
[Heinze et al.(2018)Heinze, Tonry, Denneau, Flewelling,
Stalder, Rest, Smith, Smartt, and Weiland]2018_Heinze
Heinze, A.N.; Tonry, J.L.; Denneau, L.; Flewelling, H.; Stalder, B.;
Rest, A.; Smith, K.W.; Smartt, S.J.; Weiland, H.
A First Catalog of Variable Stars Measured by the Asteroid
Terrestrial-impact Last Alert System (ATLAS). Astron. J.
2018, 156, 241.
<https://doi.org/10.3847/1538-3881/aae47f>.
[Wood et al.(2017)Wood, Caputo, Charles, Di Mauro, Magill,
Perkins, and Fermi-LAT Collaboration]wood2017fermipy
Wood, M.; Caputo, R.; Charles, E.; Di Mauro, M.; Magill, J.;
Perkins, J.S.; Fermi-LAT Collaboration.
Fermipy: An open-source Python package for analysis of Fermi-LAT
Data.
In Proceedings of the 35th International Cosmic Ray Conference
(ICRC2017), Busan, Korea, 12–20 July 2017
; Volume 301; International Cosmic Ray Conference, p. 824.
[Atwood et al.(2013)Atwood, Albert, Baldini, Tinivella,
Bregeon, Pesce-Rollins, Sgrò, Bruel, Charles, Drlica-Wagner,
Franckowiak, Jogler, Rochester, Usher, Wood, Cohen-Tanugi, and
Zimmer]atwood2013pass
Atwood, W.; Albert, A.; Baldini, L.; Tinivella, M.; Bregeon, J.;
Pesce-Rollins, M.; Sgrò, C.; Bruel, P.; Charles, E.;
Drlica-Wagner, A.; et al.
Pass 8: Toward the Full Realization of the Fermi-LAT Scientific
Potential.
arXiv 2013, arXiv:1303.3514.
[Fermi-LAT collaboration et al.(2022)Fermi-LAT collaboration, :,
Abdollahi, Acero, Baldini, Ballet, Bastieri, Bellazzini,
Berenji, Berretta, Bissaldi, Blandford, Bloom, Bonino, Brill,
Britto, Bruel, Burnett, Buson, Cameron, Caputo, Caraveo,
Castro, Chaty, Cheung, Chiaro, Cibrario, Ciprini,
Coronado-Blazquez, Crnogorcevic, Cutini, D'Ammando, De Gaetano,
Digel, Di Lalla, Dirirsa, Di Venere, Dominguez, Fallah Ramazani,
Fegan, Ferrara, Fiori, Fleischhack, Franckowiak, Fukazawa,
Funk, Fusco, Galanti, Gammaldi, Gargano, Garrappa, Gasparrini,
Giacchino, Giglietto, Giordano, Giroletti, Glanzman, Green,
Grenier, Grondin, Guillemot, Guiriec, Gustafsson, Harding,
Hays, Hewitt, Horan, Hou, Johannesson, Karwin, Kayanoki,
Kerr, Kuss, Landriu, Larsson, Latronico, Lemoine-Goumard, Li,
Liodakis, Longo, Loparco, Lott, Lubrano, Maldera, Malyshev,
Manfreda, Marti-Devesa, Mazziotta, Mereu, Meyer, Michelson,
Mirabal, Mitthumsiri, Mizuno, Moiseev, Monzani, Morselli,
Moskalenko, Negro, Nuss, Omodei, Orienti, Orlando, Paneque,
Pei, Perkins, Persic, Pesce-Rollins, Petrosian, Pillera, Poon,
Porter, Principe, Raino, Rando, Rani, Razzano, Razzaque,
Reimer, Reimer, Reposeur, Sanchez-Conde, Saz Parkinson, Scotton,
Serini, Sgro, Siskind, Smith, Spandre, Spinelli, Sueoka,
Suson, Tajima, Tak, Thayer, Thompson, Torres, Troja,
Valverde, Wood, and Zaharijas]4fgl_dr3
Fermi-LAT collaboration.; Abdollahi, S.; Acero, F.; Baldini, L.;
Ballet, J.; Bastieri, D.; Bellazzini, R.; Berenji, B.; Berretta,
A.; et al.
Incremental Fermi Large Area Telescope Fourth Source Catalog.
arXiv 2022, arXiv:2201.11184.
[Meyer et al.(2019)Meyer, Scargle, and Blandford]2019_Meyer
Meyer, M.; Scargle, J.D.; Blandford, R.D.
Characterizing the Gamma-Ray Variability of the Brightest Flat
Spectrum Radio Quasars Observed with the Fermi LAT. Astrophys. J.
2019, 877, 39.
<https://doi.org/10.3847/1538-4357/ab1651>.
[Max-Moerbeck et al.(2014)Max-Moerbeck, Richards, Hovatta,
Pavlidou, Pearson, and Readhead]Max_Moerbeck_CCFs
Max-Moerbeck, W.; Richards, J.L.; Hovatta, T.; Pavlidou, V.; Pearson,
T.J.; Readhead, A.C.S.
A method for the estimation of the significance of
cross-correlations in unevenly sampled red-noise time series. Mon. Not. R. Astron. Soc.
2014, 445, 437–459.
<https://doi.org/10.1093/mnras/stu1707>.
[Edelson and Krolik(1988)]RN19
Edelson, R.A.; Krolik, J.H.
The Discrete Correlation Function: A New Method for Analyzing
Unevenly Sampled Variability Data. Astrophys. J.
1988, 333, 646.
<https://doi.org/10.1086/166773>.
[Uttley et al.(2003)Uttley, Edelson, McHardy, Peterson, and
Markowitz]Uttley
Uttley, P.; Edelson, R.; McHardy, I.M.; Peterson, B.M.; Markowitz, A.
Correlated Long-Term Optical and X-Ray Variations in NGC 5548. Astrophys. J.
2003, 584, L53–L56.
<https://doi.org/10.1086/373887>.
[Emmanoulopoulos et al.(2013)Emmanoulopoulos, McHardy, and
Papadakis]Emmanoulopoulos
Emmanoulopoulos, D.; McHardy, I.M.; Papadakis, I.E.
Generating artificial light curves: revisited and updated. Mon. Not. R. Astron. Soc.
2013, 433, 907–927.
<https://doi.org/10.1093/mnras/stt764>.
[Böttcher(2007)]2007Ap SS.309...95B
Böttcher, M.
Modeling the emission processes in blazars. Astrophys. Space Sci.
2007, 309, 95–104.
<https://doi.org/10.1007/s10509-007-9404-0>.
[Ackermann et al.(2014)Ackermann, Ajello, Allafort, Antolini,
Barbiellini, Bastieri, Bellazzini, Bissaldi, Bonamente, Bregeon,
Brigida, Bruel, Buehler, Buson, Caliandro, Cameron, Caraveo,
Cavazzuti, Cecchi, Chaves, Chekhtman, Chiang, Chiaro, Ciprini,
Claus, Cohen-Tanugi, Conrad, Cutini, D'Ammando, de Palma,
Dermer, Silva, Donato, Drell, Favuzzi, Finke, Focke,
Franckowiak, Fukazawa, Fusco, Gargano, Gasparrini, Gehrels,
Giglietto, Giordano, Giroletti, Godfrey, Grenier, Guiriec,
Hayashida, Hewitt, Horan, Hughes, Iafrate, Johnson,
Knödlseder, Kuss, Lande, Larsson, Latronico, Longo,
Loparco, Lovellette, Lubrano, Mayer, Mazziotta, McEnery,
Michelson, Mizuno, Monzani, Morselli, Moskalenko, Murgia,
Nemmen, Nuss, Ohsugi, Orienti, Orlando, Perkins, Pesce-Rollins,
Piron, Pivato, Porter, Rainò, Razzano, Reimer, Reimer,
Sanchez, Schulz, Sgrò, Siskind, Spandre, Spinelli, Stawarz,
Takahashi, Takahashi, Thayer, Thayer, Thompson, Tinivella,
Torres, Tosti, Troja, Usher, Vandenbroucke, Vasileiou,
Vianello, Vitale, Werner, Winer, Wood, Wood, Fermi Large Area
Telescope Collaboration, Aleksić, Ansoldi, Antonelli, Antoranz,
Babic, Bangale, Barres de Almeida, Barrio, Becerra González,
Bednarek, Berger, Bernardini, Biland, Blanch, Bock, Bonnefoy,
Bonnoli, Borracci, Bretz, Carmona, Carosi, Carreto Fidalgo,
Colin, Colombo, Contreras, Cortina, Covino, Da Vela, Dazzi, De
Angelis, De Caneva, De Lotto, Delgado Mendez, Doert,
Domínguez, Dominis Prester, Dorner, Doro, Einecke,
Eisenacher, Elsaesser, Farina, Ferenc, Fonseca, Font, Frantzen,
Fruck, García López, Garczarczyk, Garrido Terrats, Gaug,
Giavitto, Godinović, González Muñoz, Gozzini, Hadasch,
Herrero, Hildebrand, Hose, Hrupec, Idec, Kadenius, Kellermann,
Knoetig, Kodani, Konno, Krause, Kubo, Kushida, La Barbera,
Lelas, Lewandowska, Lindfors, Lombardi, López,
López-Coto, López-Oramas, Lorenz, Lozano, Makariev,
Mallot, Maneva, Mankuzhiyil, Mannheim, Maraschi, Marcote,
Mariotti, Martínez, Mazin, Menzel, Meucci, Miranda,
Mirzoyan, Moralejo, Munar-Adrover, Nakajima, Niedzwiecki,
Nishijima, Nilsson, Nowak, Orito, Overkemping, Paiano,
Palatiello, Paneque, Paoletti, Paredes, Paredes-Fortuny, Partini,
Persic, Prada, Prada Moroni, Prandini, Preziuso, Puljak,
Reinthal, Rhode, Ribó, Rico, Rodriguez Garcia, Rügamer,
Saggion, Saito, Saito, Salvati, Satalecka, Scalzotto, Scapin,
Schultz, Schweizer, Shore, Sillanpää, Sitarek, Snidaric,
Sobczynska, Spanier, Stamatescu, Stamerra, Steinbring, Storz,
Sun, Surić, Takalo, Takami, Tavecchio, Temnikov,
Terzić, Tescaro, Teshima, Thaele, Tibolla, Toyama, Treves,
Vogler, Wagner, Zandanel, Zanin, MAGIC Collaboration, Aller,
Angelakis, Blinov, Djorgovski, Drake, Efimova, Gurwell, Homan,
Jordan, Kopatskaya, Kovalev, Kurtanidze, Lähteenmäki,
Larionov, Lister, Nieppola, Nikolashvili, Ros, Savolainen,
Sigua, and Tornikoski]2014ApJ...786..157A
Ackermann, M.; Ajello, M.; Allafort, A.; Antolini, E.; Barbiellini,
G.; Bastieri, D.; Bellazzini, R.; Bissaldi, E.; Bonamente, E.;
Bregeon, J.; et al.
Multifrequency Studies of the Peculiar Quasar 4C +21.35 during the
2010 Flaring Activity. Astrophys. J.
2014, 786, 157.
<https://doi.org/10.1088/0004-637X/786/2/157>.
[Ong(2009)]1ES_0502+675
Ong, R.A.
Discovery of VHE Gamma-Ray Emission from the Fermi-LAT Source 1ES
0502+675.
Astron. Telegr. 2009, 2301, 1.
[Acciari et al.(2008)Acciari, Aliu, Beilicke, Benbow,
Böttcher, Bradbury, Buckley, Bugaev, Butt, Celik, Cesarini,
Ciupik, Chow, Cogan, Colin, Cui, Daniel, Ergin, Falcone,
Fegan, Finley, Finnegan, Fortin, Fortson, Furniss, Gall,
Gillanders, Grube, Guenette, Gyuk, Hanna, Hays, Holder,
Horan, Hui, Humensky, Imran, Kaaret, Karlsson, Kertzman,
Kieda, Konopelko, Krawczynski, Krennrich, Lang, LeBohec, Lee,
Maier, McCann, McCutcheon, Moriarty, Mukherjee, Nagai, Niemiec,
Ong, Pandel, Perkins, Petry, Pohl, Quinn, Ragan, Reyes,
Reynolds, Roache, Rose, Schroedter, Sembroski, Smith, Steele,
Swordy, Toner, Vassiliev, Wagner, Wakely, Ward, Weekes,
Weinstein, White, Williams, Wissel, Wood, and
Zitzer]W_Comae_detection
Acciari, V.A.; Aliu, E.; Beilicke, M.; Benbow, W.; Böttcher, M.;
Bradbury, S.M.; Buckley, J.H.; Bugaev, V.; Butt, Y.; Celik, O.;
et al.
VERITAS Discovery of >200 GeV Gamma-Ray Emission from the
Intermediate-Frequency-Peaked BL Lacertae Object W Comae. Astrophys. J.
2008, 684, L73..
<https://doi.org/10.1086/592244>.
[Ong(2009)]VER_J0521+211_detection
Ong, R.A.
VERITAS reports a High Gamma-ray Flux from VER J0521+211.
Astron. Telegr. 2009, 2309, 1.
[Acharyya et al.(2023)Acharyya, Adams, Archer, Bangale,
Benbow, Brill, Christiansen, Chromey, Errando, Falcone, Feng,
Finley, Foote, Fortson, Furniss, Gallagher, Hanlon, Hanna,
Hervet, Hinrichs, Hoang, Holder, Jin, Johnson, Kaaret,
Kertzman, Kieda, Kleiner, Korzoun, Krennrich, Lang, Lundy,
Maier, McGrath, Millard, Millis, Mooney, Moriarty, Mukherjee,
O'Brien, Ong, Pohl, Pueschel, Quinn, Ragan, Reynolds,
Ribeiro, Roache, Sadeh, Sadun, Saha, Santander, Sembroski,
Shang, Splettstoesser, Talluri, Tucci, Vassiliev, Williams,
Wong, Hovatta, Jorstad, Kiehlmann, Lahteenmaki, Liodakis,
Marscher, Max-Moerbeck, Readhead, Reeves, Smith, and
Tornikoski]S31227_discovery_paper
Acharyya, A.; Adams, C.; Archer, A.; Bangale, P.; Benbow, W.;
Brill, A.; Christiansen, J.; Chromey, A.; Errando, M.; Falcone, A.;
et al.
VERITAS discovery of very high energy gamma-ray emission from S3
1227+25 and multiwavelength observations.
arXiv 2023, arXiv:2305.02860.
<https://doi.org/10.48550/arXiv.2305.02860>.
[H. E. S. S. Collaboration et al.(2012)H. E. S. S. Collaboration,
Abramowski, Acero, Aharonian, Akhperjanian, Anton, Balzer,
Barnacka, Barres de Almeida, Becherini, Becker, Behera,
Bernloehr, Birsin, Biteau, Bochow, Boisson, Bolmont, Bordas,
Brucker, Brun, Brun, Bulik, Buesching, Carrigan, Casanova,
Cerruti, Chadwick, Charbonnier, Chaves, Cheesebrough, Chounet,
Clapson, Coignet, Cologna, Conrad, Dalton, Daniel, Davids,
Degrange, Deil, Dickinson, Djannati-Ataie, Domainko, Drury,
Dubois, Dubus, Dutson, Dyks, Dyrda, Egberts, Eger, Espigat,
Fallon, Farnier, Feinstein, Fernandes, Fiasson, Fontaine,
Foerster, Fuesling, Gallant, Gast, Gerard, Gerbig, Giebels,
Glicenstein, Glueck, Goret, Goering, Haeffner, Hague, Hampf,
Hauser, Heinz, Heinzelmann, Henri, Hermann, Hinton, Hoffmann,
Hofmann, Hofverberg, Holler, Horns, Jacholkowska, de Jager,
Jahn, Jamrozy, Jung, Kastendieck, Katarzynski, Katz, Kaufmann,
Keogh, Khangulyan, Khelifi, Klochkov, Kluzniak, Kneiske, Komin,
Kosack, Kossakowski, Laffon, Lamanna, Lennarz, Lohse, Lopatin,
Lu, Marandon, Marcowith, Masbou, Maurin, Maxted, Mayer,
McComb, Medina, Mehault, Moderski, Moulin, Naumann,
Naumann-Godo, de Naurois, Nedbal, Nekrassov, Nguyen, Nicholas,
Niemiec, Nolan, Ohm, de Ona Wilhelmi, Opitz, Ostrowski, Oya,
Panter, Paz Arribas, Pedaletti, Pelletier, Petrucci, Pita,
Puehlhofer, Punch, Quirrenbach, Raue, Rayner, Reimer, Reimer,
Renaud, de Los Reyes, Rieger, Ripken, Rob, Rosier-Lees, Rowell,
Rudak, Rulten, Ruppel, Sahakian, Sanchez, Santangelo,
Schlickeiser, Schoeck, Schulz, Schwanke, Schwarzburg, Schwemmer,
Sheidaei, Sikora, Skilton, Sol, Spengler, Stawarz, Steenkamp,
Stegmann, Stinzing, Stycz, Sushch, Szostek, Tavernet, Terrier,
Tluczykont, Valerius, van Eldik, Vasileiadis, Venter, Vialle,
Viana, Vincent, Voelk, Volpe, Vorobiov, Vorster, Wagner,
Ward, White, Wierzcholska, Zacharias, Zajczyk, Zdziarski, Zech,
Zechlin, Costamante, Fegan, and Ajello]1ES_0414+009_detection
Collaboration, H.E.S.S.; Abramowski, A.; Acero, F.; Aharonian, F.;
Akhperjanian, A.G.; Anton, G.; Balzer, A.; Barnacka, A.; Barres de
Almeida, U.; Becherini, Y.; et al.
Discovery of hard-spectrum -ray emission from
the BL Lacertae object 1ES 0414+009. A&A
2012, 538, A103.
<https://doi.org/10.1051/0004-6361/201118406>.
[Acciari et al.(2009)Acciari, Aliu, Arlen, Bautista,
Beilicke, Benbow, Böttcher, Bradbury, Buckley, Bugaev,
Butt, Byrum, Cannon, Celik, Cesarini, Chow, Ciupik, Cogan,
Colin, Cui, Dickherber, Duke, Ergin, Falcone, Fegan, Finley,
Finnegan, Fortin, Fortson, Furniss, Gall, Gibbs, Gillanders,
Grube, Guenette, Gyuk, Hanna, Hays, Holder, Horan, Hui,
Humensky, Imran, Kaaret, Karlsson, Kertzman, Kieda, Kildea,
Konopelko, Krawczynski, Krennrich, Lang, LeBohec, Maier,
McCann, McCutcheon, Millis, Moriarty, Mukherjee, Nagai, Ong,
Otte, Pandel, Perkins, Petry, Pohl, Quinn, Ragan, Reyes,
Reynolds, Roache, Rose, Schroedter, Sembroski, Smith, Steele,
Swordy, Theiling, Toner, Valcarcel, Varlotta, Vassiliev,
Wagner, Wakely, Ward, Weekes, Weinstein, White, Williams,
Wissel, Wood, and Zitzer]1ES_0806+524_detection
Acciari, V.; Aliu, E.; Arlen, T.; Bautista, M.; Beilicke, M.;
Benbow, W.; Böttcher, M.; Bradbury, S.M.; Buckley, J.H.;
Bugaev, V.; et al.
Discovery of Very High Energy Gamma-ray Radiation from the BL Lac
1ES 0806+524. Astrophys. J.
2009, 690, L126–L129.
<https://doi.org/10.1088/0004-637X/690/2/L126>.
[Ong et al.(2010)Ong, VERITAS Collaboration, Paneque, and
Fermi Large Area Telescope]1FGL_J0648.8+1516
Ong, R.A.; VERITAS Collaboration.; Paneque, D.; Fermi Large Area
Telescope.
VERITAS Discovery of Very High-Energy Gamma-Ray Emission from 1FGL
J0648.8+1516.
Astron. Telegr. 2010, 2486, 1.
[Swordy(2008)]3C66A_detection
Swordy, S.
Discovery of >100 GeV Gamma-ray Emission from the Blazar 3C66A by
VERITAS.
Astron. Telegr. 2008, 1753, 1.
[Mariotti(2011)]1ES_1215+303_discovery
Mariotti, M.
Discovery of Very High Energy Gamma-Ray Emission from 1ES 1215+303
by MAGIC.
Astron. Telegr. 2011, 3100, 1.
[Ong(2009)]RGB_J0710+591_discovery
Ong, R.
VERITAS Discovery of VHE Gamma-Ray Emission from BL Lac object RGB
J0710+591.
Astron. Telegr. 2009, 1941, 1.
[Nishiyama(1999)]1ES_1959+650_discovery
Nishiyama, T.
Detection of a new TeV gamma-ray source of BL Lac object 1ES
1959+650.
In Proceedings of the 26th International Cosmic Ray Conference
(ICRC26), Salt Lake City, USA, 17–25 August 1999; Volume 3, International Cosmic Ray Conference.
[Aharonian et al.(2007)Aharonian, Akhperjanian, Barres de
Almeida, Bazer-Bachi, Behera, Beilicke, Benbow, Bernlöhr,
Boisson, Bolz, Borrel, Braun, Brion, Brown, Bühler,
Bulik, Büsching, Boutelier, Carrigan, Chadwick, Chounet,
Clapson, Coignet, Cornils, Costamante, Dalton, Degrange,
Dickinson, Djannati-Ataï, Domainko, O'C. Drury, Dubois,
Dubus, Dyks, Egberts, Emmanoulopoulos, Espigat, Farnier,
Feinstein, Fiasson, Förster, Fontaine, Funk, Füßling,
Gallant, Giebels, Glicenstein, Glück, Goret, Hadjichristidis,
Hauser, Hauser, Heinzelmann, Henri, Hermann, Hinton, Hoffmann,
Hofmann, Holleran, Hoppe, Horns, Jacholkowska, de Jager, Jung,
Katarzyński, Kendziorra, Kerschhaggl, Khélifi, Keogh,
Komin, Kosack, Lamanna, Latham, Lemière, Lemoine-Goumard,
Lenain, Lohse, Martin, Martineau-Huynh, Marcowith, Masterson,
Maurin, Maurin, McComb, Moderski, Moulin, de Naurois, Nedbal,
Nolan, Ohm, Olive, de Oña Wilhelmi, Orford, Osborne,
Ostrowski, Panter, Pedaletti, Pelletier, Petrucci, Pita,
Pühlhofer, Punch, Ranchon, Raubenheimer, Raue, Rayner,
Renaud, Ripken, Rob, Rolland, Rosier-Lees, Rowell, Rudak,
Ruppel, Sahakian, Santangelo, Schlickeiser, Schöck,
Schröder, Schwanke, Schwarzburg, Schwemmer, Shalchi, Sol,
Spangler, Stawarz, Steenkamp, Stegmann, Superina, Tam,
Tavernet, Terrier, van Eldik, Vasileiadis, Venter, Vialle,
Vincent, Vivier, Völk, Volpe, Wagner, Ward, Zdziarski, and
Zech]1ES_0229+200_discovery
Aharonian, F.; Akhperjanian, A.G.; Barres de Almeida, U.; Bazer-Bachi,
A.R.; Behera, B.; Beilicke, M.; Benbow, W.; Bernlöhr, K.;
Boisson, C.; Bolz, O.; et al.
New constraints on the mid-IR EBL from the HESS discovery of VHE
-rays from 1ES 0229+200. Astron. Astrophys.
2007, 475, L9–L13.
<https://doi.org/10.1051/0004-6361:20078462>.
[Ong(2010)]1ES_1440+122_discovery
Ong, R.A.
Discovery of Very High Energy Gamma-ray Emission from the Blazar 1ES
1440+122.
Astron. Telegr. 2010, 2786, 1.
[Punch et al.(1992)Punch, Akerlof, Cawley, Chantell, Fegan,
Fennell, Gaidos, Hagan, Hillas, Jiang, Kerrick, Lamb,
Lawrence, Lewis, Meyer, Mohanty, O'Flaherty, Reynolds, Rovero,
Schubnell, Sembroski, Weekes, Whitaker, and
Wilson]Mrk421_discovery
Punch, M.; Akerlof, C.W.; Cawley, M.F.; Chantell, M.; Fegan, D.J.;
Fennell, S.; Gaidos, J.A.; Hagan, J.; Hillas, A.M.; Jiang, Y.;
et al.
Detection of TeV photons from the active galaxy Markarian 421. Nature
1992, 358, 477–478.
<https://doi.org/10.1038/358477a0>.
[Albert et al.(2006)Albert, Aliu, Anderhub, Antoranz,
Armada, Asensio, Baixeras, Barrio, Bartelt, Bartko, Bastieri,
Bavikadi, Bednarek, Berger, Bigongiari, Biland, Bisesi, Bock,
Bretz, Britvitch, Camara, Chilingarian, Ciprini, Coarasa,
Commichau, Contreras, Cortina, Curtef, Danielyan, Dazzi, De
Angelis, de los Reyes, De Lotto, Domingo-Santamaría, Dorner,
Doro, Errando, Fagiolini, Ferenc, Fernández, Firpo, Flix,
Fonseca, Font, Galante, Garczarczyk, Gaug, Giller, Goebel,
Hakobyan, Hayashida, Hengstebeck, Höhne, Hose, Jacon,
Kalekin, Kranich, Laille, Lenisa, Liebing, Lindfors, Longo,
López, López, Lorenz, Lucarelli, Majumdar, Maneva,
Mannheim, Mariotti, Martínez, Mase, Mazin, Meucci, Meyer,
Miranda, Mirzoyan, Mizobuchi, Moralejo, Nilsson,
Oña-Wilhelmi, Orduña, Otte, Oya, Paneque, Paoletti,
Pasanen, Pascoli, Pauss, Pavel, Pegna, Persic, Peruzzo,
Piccioli, Poller, Prandini, Rhode, Rico, Riegel, Rissi,
Robert, Rügamer, Saggion, Sánchez, Sartori, Scalzotto,
Schmitt, Schweizer, Shayduk, Shinozaki, Shore, Sidro,
Sillanpää, Sobczyńska, Stamerra, Stark, Takalo,
Temnikov, Tescaro, Teshima, Tonello, Torres, Torres, Turini,
Vankov, Vardanyan, Vitale, Wagner, Wibig, Wittek, and
Zapatero]1ES_1218+30_detection
Albert, J.; Aliu, E.; Anderhub, H.; Antoranz, P.; Armada, A.;
Asensio, M.; Baixeras, C.; Barrio, J.A.; Bartelt, M.; Bartko, H.;
et al.
Discovery of Very High Energy Gamma Rays from 1ES 1218+30.4. Astrophys. J.
2006, 642, L119–L122.
<https://doi.org/10.1086/504845>.
[Albert et al.(2007)Albert, Aliu, Anderhub, Antoranz,
Armada, Baixeras, Barrio, Bartko, Bastieri, Becker, Bednarek,
Berger, Bigongiari, Biland, Bock, Bordas, Bosch-Ramon, Bretz,
Britvitch, Camara, Carmona, Chilingarian, Coarasa, Commichau,
Contreras, Cortina, Costado, Curtef, Danielyan, Dazzi, De
Angelis, Delgado, de los Reyes, De Lotto, Domingo-Santamaría,
Dorner, Doro, Errando, Fagiolini, Ferenc, Fernández, Firpo,
Flix, Fonseca, Font, Fuchs, Galante, García-López,
Garczarczyk, Gaug, Giller, Goebel, Hakobyan, Hayashida,
Hengstebeck, Herrero, Höhne, Hose, Hsu, Jacon, Jogler,
Kosyra, Kranich, Kritzer, Laille, Lindfors, Lombardi, Longo,
López, López, Lorenz, Majumdar, Maneva, Mannheim,
Mansutti, Mariotti, Martínez, Mazin, Merck, Meucci, Meyer,
Miranda, Mirzoyan, Mizobuchi, Moralejo, Nieto, Nilsson,
Ninkovic, Oña-Wilhelmi, Otte, Oya, Paneque, Panniello,
Paoletti, Paredes, Pasanen, Pascoli, Pauss, Pegna, Perlman,
Persic, Peruzzo, Piccioli, Prandini, Puchades, Raymers, Rhode,
Ribó, Rico, Rissi, Robert, Rügamer, Saggion, Saito,
Sánchez, Sartori, Scalzotto, Scapin, Schmitt, Schweizer,
Shayduk, Shinozaki, Shore, Sidro, Sillanpää, Sobczynska,
Stamerra, Stark, Takalo, Tavecchio, Temnikov, Tescaro, Teshima,
Torres, Turini, Vankov, Vitale, Wagner, Wibig, Wittek,
Zandanel, Zanin, and Zapatero]1ES_1011+496_detection
Albert, J.; Aliu, E.; Anderhub, H.; Antoranz, P.; Armada, A.;
Baixeras, C.; Barrio, J.A.; Bartko, H.; Bastieri, D.; Becker, J.K.;
et al.
Discovery of Very High Energy -Rays from 1ES
1011+496 at z = 0.212. Astrophys. J.
2007, 667, L21–L24.
<https://doi.org/10.1086/521982>.
[Quinn et al.(1996)Quinn, Akerlof, Biller, Buckley,
Carter-Lewis, Cawley, Catanese, Connaughton, Fegan, Finley,
Gaidos, Hillas, Lamb, Krennrich, Lessard, McEnery, Meyer,
Mohanty, Rodgers, Rose, Sembroski, Schubnell, Weekes, Wilson,
and Zweerink]Mrk_501_discovery
Quinn, J.; Akerlof, C.W.; Biller, S.; Buckley, J.; Carter-Lewis,
D.A.; Cawley, M.F.; Catanese, M.; Connaughton, V.; Fegan, D.J.;
Finley, J.P.; et al.
Detection of Gamma Rays with E > 300 GeV from Markarian 501. Astrophys. J.
1996, 456, L83.
<https://doi.org/10.1086/309878>.
[Acciari et al.(2010)Acciari, Aliu, Arlen, Aune, Bautista,
Beilicke, Benbow, Böttcher, Boltuch, Bradbury, Buckley,
Bugaev, Byrum, Cannon, Cesarini, Chow, Ciupik, Cogan, Cui,
Duke, Falcone, Finley, Finnegan, Fortson, Furniss, Galante,
Gall, Gillanders, Godambe, Grube, Guenette, Gyuk, Hanna,
Holder, Hui, Humensky, Kaaret, Karlsson, Kertzman, Kieda,
Konopelko, Krawczynski, Krennrich, Lang, LeBohec, Maier,
McArthur, McCann, McCutcheon, Millis, Moriarty, Nagai, Ong,
Otte, Pandel, Perkins, Pichel, Pohl, Quinn, Ragan, Reyes,
Reynolds, Roache, Rose, Schroedter, Sembroski, Senturk, Smith,
Steele, Swordy, Theiling, Thibadeau, Varlotta, Vassiliev,
Vincent, Wagner, Wakely, Ward, Weekes, Weinstein, Weisgarber,
Williams, Wissel, Wood, Zitzer, VERITAS Collaboration, Abdo,
Ackermann, Ajello, Baldini, Ballet, Barbiellini, Bastieri,
Baughman, Bechtol, Bellazzini, Berenji, Blandford, Bloom,
Bonamente, Borgland, Bregeon, Brez, Brigida, Bruel, Burnett,
Caliandro, Cameron, Caraveo, Casandjian, Cavazzuti, Cecchi,
Çelik, Chekhtman, Cheung, Chiang, Ciprini, Claus,
Cohen-Tanugi, Conrad, Cutini, Dermer, de Angelis, de Palma, do
Couto e Silva, Drell, Drlica-Wagner, Dubois, Dumora, Farnier,
Favuzzi, Fegan, Focke, Fortin, Frailis, Fukazawa, Fusco,
Gargano, Gasparrini, Gehrels, Germani, Giebels, Giglietto,
Giommi, Giordano, Glanzman, Godfrey, Grenier, Grove, Guillemot,
Guiriec, Hanabata, Hays, Hughes, Jackson, Jóhannesson,
Johnson, Johnson, Kamae, Katagiri, Kataoka, Kawai, Kerr,
Knödlseder, Kocian, Kuss, Lande, Latronico, Longo, Loparco,
Lott, Lovellette, Lubrano, Madejski, Makeev, Mazziotta,
McEnery, Meurer, Michelson, Mitthumsiri, Mizuno, Moiseev,
Monte, Monzani, Morselli, Moskalenko, Murgia, Nolan, Norris,
Nuss, Ohsugi, Omodei, Orlando, Ormes, Paneque, Parent,
Pelassa, Pepe, Pesce-Rollins, Piron, Porter, Rainò, Rando,
Razzano, Reimer, Reimer, Reposeur, Rodriguez, Roth, Ryde,
Sadrozinski, Sanchez, Sander, Saz Parkinson, Scargle, Sgrò,
Shaw, Siskind, Smith, Spandre, Spinelli, Strickman, Suson,
Tajima, Takahashi, Tanaka, Thayer, Thayer, Thompson, Tibaldo,
Torres, Tosti, Tramacere, Uchiyama, Usher, Vasileiou, Vilchez,
Vitale, Waite, Wang, Winer, Wood, Ylinen, Ziegler, Fermi LAT
Collaboration, Barber, and Terndrup]PKS_1424+240_detection
Acciari, V.A.; Aliu, E.; Arlen, T.; Aune, T.; Bautista, M.;
Beilicke, M.; Benbow, W.; Böttcher, M.; Boltuch, D.; Bradbury,
S.M.; et al.
Discovery of Very High Energy Gamma Rays from PKS 1424+240 and
Multiwavelength Constraints on Its Redshift. Astrophys. J. Lett.
2010, 708, L100–L106.
<https://doi.org/10.1088/2041-8205/708/2/L100>.
|
http://arxiv.org/abs/2307.01285v1
|
20230703182053
|
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth
|
[
"Benjamin Bergougnoux",
"Vera Chekan",
"Robert Ganian",
"Mamadou Moustapha Kanté",
"Matthias Mnich",
"Sang-il Oum",
"Michał Pilipczuk",
"Erik Jan van Leeuwen"
] |
cs.DS
|
[
"cs.DS"
] |
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth
Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, Erik Jan van Leeuwen
August 1, 2023
====================================================================
Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone.
Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth)
of depth d and using k labels,
* Independent Set can be solved in time 2^(dk)· n^(1) using (dk^2log n) space;
* Max Cut can be solved in time n^(dk) using (dklog n) space; and
* Dominating Set can be solved in time 2^(dk)· n^(1) using n^(1) space via a randomized algorithm.
We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.
§ INTRODUCTION
Treewidth and Treedepth. Dynamic programming on graph decompositions is a fundamental method in the design of parameterized algorithms. Among various decomposition notions, tree decompositions, which underlie the parameter treewidth, are perhaps the most widely used; see e.g. <cit.> for an introduction. A tree decomposition of a graph G of width k provides a way to “sweep” G while keeping track of at most k+1 “interface vertices” at a time. This can be used for dynamic programming: during the sweep, the algorithm maintains a set of representative partial solutions within the part already swept, one for each possible behavior of a partial solution on the interface vertices. Thus, the width of the decomposition is the key factor influencing the number of partial solutions that need to be stored.
In a vast majority of applications, this number of different partial solutions depends (at least) exponentially on the width k of the decomposition, which often leads to time complexity of the form f(k)· n^(1) for an exponential function f. This should not be surprising, as most problems where this technique is used are 𝖭𝖯-hard. Unfortunately, the space complexity—which often appears to be the true bottleneck in practice—is also exponential. There is a simple tradeoff trick, first observed by Lokshtanov et al. <cit.>, which can often be used to reduce the space complexity to polynomial at the cost of increasing the time complexity. For instance, Independent Set can be solved in time 2^k· n^(1) and space 2^k· n^(1) on an n-vertex graph equipped with a width-k tree decomposition via dynamic programming <cit.>;
combining this algorithm with a simple recursive Divide&Conquer scheme yields an algorithm with running time 2^(k^2)· n^(1) and space complexity n^(1).
Allender et al. <cit.> and then Pilipczuk and Wrochna <cit.> studied the question whether the loss on the time complexity is necessary if one wants to achieve polynomial space complexity in the context of dynamic programming on tree decompositions. While the formal formulation of their results is somewhat technical and complicated, the take-away message is the following: there are good complexity-theoretical reasons to believe that even in the simpler setting of path decompositions, one cannot achieve algorithms with polynomial space complexity whose running times asymptotically match the running times of their exponential-space counterparts. We refer to the works <cit.> for further details.
However, starting with the work of Fürer and Yu <cit.>, a long line of advances <cit.> showed that bounding the depth, rather than the width, of a decomposition leads to the possibility of designing algorithms that are both time- and space-efficient. To this end, we consider the treedepth of a graph G, which is the least possible depth of an elimination forest: a forest F on the vertex set of G such that every two vertices adjacent in G are in the ancestor/descendant relation in F.
An elimination forest of depth d can be regarded as a tree decomposition of depth d, and thus treedepth is the bounded-depth analogue of treewidth. As shown in <cit.>, for many classic problems, including 3-Coloring, Independent Set, Dominating Set,
and Hamiltonicity, it is possible to design algorithms with running time 2^(d)· n^(1) and polynomial space complexity, assuming the graph is supplied with an elimination forest of depth d. In certain cases, the space complexity can even be as low as (d+log n) or (dlog n) <cit.>. Typically, the main idea is to reformulate the classic bottom-up dynamic programming approach so that it can be replaced by a simple top-down recursion. This reformulation is by no means easy—it often involves a highly non-trivial use of algebraic transforms or other tools of algebraic flavor, such as inclusion-exclusion branching.
Cliquewidth and Shrubdepth. In this work, we are interested in the parameter cliquewidth and its low-depth counterpart: shrubdepth. While treewidth applies only to sparse graphs, cliquewidth is a notion of tree-likeness suited for dense graphs as well. The decompositions underlying cliquewidth are called clique expressions <cit.>. A clique expression is a term operating over k-labelled graphs—graphs where every vertex is assigned one of k labels—and the allowed operations are: (i) apply any renaming function to the labels; (ii) make a complete bipartite graph between two given labels; and (iii) take the disjoint union of two k-labelled graphs. Then the cliquewidth of G is the least number of labels using which (some labelling of) G can be constructed. Similarly to treewidth, dynamic programming over clique expressions can be used to solve a wide range of problems, in particular all problems expressible in 𝖬𝖲𝖮_1 logic, in 𝖥𝖯𝖳 time when parameterized by cliquewidth. Furthermore, while several problems involving edge selection or edge counting, such as Hamiltonicity or Max Cut, remain 𝖶[1]-hard under the cliquewidth parameterization <cit.>, standard dynamic programming still allows us to solve them in 𝖷𝖯 time. In this sense, clique-width can be seen as the “least restrictive” general-purpose graph parameter which allows for efficient dynamic programming algorithms where the decompositions can also be computed efficiently <cit.>.
Nevertheless, since the cliquewidth of a graph is at least as large as its linear cliquewidth, which in turn is as large as its pathwidth, the lower bounds of Allender et al. <cit.> and of Pilipczuk and Wrochna <cit.> carry over to the cliquewidth setting. Hence, reducing the space complexity to polynomial requires a sacrifice in the time complexity.
Shrubdepth, introduced by Ganian et al. <cit.>, is a variant of cliquewidth where we stipulate the decomposition to have bounded depth. This necessitates altering the set of operations used in clique expressions in order to allow taking disjoint unions of multiple graphs as a single operation. In this context, we call the decompositions used for shrubdepth (d,k)-tree-models, where d stands for the depth and k for the number of labels used; a formal definition is provided in Section <ref>.
Shrubdepth appears to be a notion of depth that is sound from the model-theoretic perspective, is 𝖥𝖯𝖳-time computable <cit.>, and has become an important concept in the logic-based theory of well-structured dense graphs <cit.>.
Since shrubdepth is a bounded-depth analogue of cliquewidth in the same way as treedepth is a bounded-depth analogue of treewidth, it is natural to ask whether for graphs from classes of bounded shrubdepth, or more concretely, for graphs admitting (d,k)-tree-models where both d and k are considered parameters, one can design space-efficient 𝖥𝖯𝖳 algorithms. Exploring this question is the topic of this work.
Our contribution. We consider three example problems: Independent Set, Max Cut, and Dominating Set. For each of them we show that on graphs supplied with (d,k)-tree-models where d=(1), one can design space-efficient fixed-parameter algorithms whose running times asymptotically match the running times of their exponential-space counterparts working on general clique expressions. While we focus on the three problems mentioned above for concreteness, we in fact provide a more general algebraic framework, inspired by the work on the treedepth parameterization <cit.>, that can be applied to a wider range of problems. Once the depth d is not considered a constant, the running times of our algorithms increase with d. To mitigate this concern, we give a conditional lower bound showing that this is likely to be necessary if one wishes to keep the space complexity polynomial.
Recall that standard dynamic programming solves the Independent Set problem in time 2^k· n^(1) and space 2^k· n^(1) on a graph constructed by a clique expression of width k <cit.>. Our first contribution is to show that on graphs with (d,k)-tree-models, the space complexity can be reduced to as low as (dk^2·log n) at the cost of allowing time complexity 2^(dk)· n^(1). In fact, we tackle the more general problem of computing the independent set polynomial.
theoremspaceIS
There is an algorithm which takes as input an n-vertex graph G along with a (d,k)-tree model of G, runs in time 2^(k d)· n^(1) and uses at most (dk^2 log n) space, and computes the independent set polynomial of G.
The idea of the proof of Theorem <ref> is to reorganize the computation of the standard bottom-up dynamic programming by applying the zeta-transform to the computed tables. This allows a radical simplification of the way a dynamic programming table for a node is computed from the tables of its children, so that the whole dynamic programming can be replaced by top-down recursion. Applying just this yields an algorithm with space polynomial in n. We reduce space to (dk^2 log n) by computing the result modulo several small primes, and using space-efficient Chinese remaindering. This is inspired by the algorithm for Dominating Set on graphs of small treedepth of Pilipczuk and Wrochna <cit.>.
In fact, the technique used to prove Theorem <ref> is much more general and can be used to tackle all coloring-like problems of local character. We formalize those under a single umbrella by solving the problem of counting List H-homomorphisms (for an arbitrary but fixed pattern graph H), for which we provide an algorithm with the same complexity guarantees as those of Theorem <ref>. The concrete problems captured by this framework include, e.g., Odd Cycle Transversal and q-Coloring for a fixed constant q; details are provided in Section <ref>.
Next, we turn our attention to the Max Cut problem. This problem is 𝖶[1]-hard when parameterized by cliquewidth, but it admits a simple n^(k)-time algorithm on n-vertex graphs provided with clique expressions of width k <cit.>. Our second contribution is a space-efficient counterpart of this result for graphs equipped with bounded-depth tree-models.
theoremspaceMC
There is an algorithm which takes as input an n-vertex graph G along with a (d,k)-tree model of G, runs in time n^(dk) and uses at most (dk log n) space, and solves the Max Cut problem in G.
Upon closer inspection, the standard dynamic programming for Max Cut on clique expressions solves a Subset Sum-like problem whenever aggregating the dynamic programming tables of children to compute the table of their parent. We apply the approach of Kane <cit.> that was used to solve Unary Subset Sum in logarithmic space: we encode the aforementioned Subset Sum-like problem as computing the product of polynomials, and use Chinese remaindering to compute this product in a space-efficient way.
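As a toy illustration of this encoding (our own sketch, not the actual Max Cut routine), one can phrase Unary Subset Sum as a coefficient question about a product of polynomials; the point is that a single evaluation of this product modulo a prime can be streamed using only a constant number of stored residues, and the desired coefficient is then recovered from such evaluations by the Chinese remaindering machinery.

# Toy illustration: "is t a subset sum of a_1,...,a_m?" is equivalent to asking whether the
# coefficient of x^t in P(x) = prod_i (1 + x^{a_i}) is nonzero.  The space-saving observation
# is that an evaluation P(s) mod p can be streamed over the a_i's.
def eval_subset_sum_poly(a, s, p):
    """Return P(s) mod p for P(x) = prod_i (1 + x^{a_i}), storing only O(1) residues."""
    val = 1
    for ai in a:
        val = (val * (1 + pow(s, ai, p))) % p
    return val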
Finally, we consider the Dominating Set problem, for which we prove the following.
theoremspaceDS
There is a randomized algorithm which takes as input an n-vertex graph G along with a (d,k)-tree model of G, runs in time 2^(dk)· n^(1) and uses at most (dk^2 log n+nlog n) space, and reports the minimum size of a dominating set in G; the reported value is correct with probability at least 1/2.
Note that the algorithm of Theorem <ref> is randomized and uses much more space than our previous algorithms: more than n log n. The reason for this is that we use the inclusion-exclusion approach proposed very recently by Hegerfeld and Kratsch <cit.>, which is able to count dominating sets only modulo 2. Consequently, while the parity of the number of dominating sets of certain size can be computed in space (dk^2log n), to determine the existence of such dominating sets we use the Isolation Lemma and count the parity of the number of dominating sets of all possible weights. This introduces randomization and necessitates sampling—and storing—a weight function. At this point we do not know how to remove neither the randomization nor the super-linear space complexity in Theorem <ref>; we believe this is an excellent open problem.
Note that in all the algorithms presented above, the running times contain a factor d in the exponent compared to the standard (exponential-space) dynamic programming on clique expressions. The following conditional lower bound shows that some additional dependency on the depth is indeed necessary; the relevant precise definitions are provided in Section <ref>.
theoremlcs-lb
Suppose Longest Common Subsequence cannot be solved in time M^f(r) and space f(r)· M^(1) for any computable function f, even if the length t of the sought subsequence is bounded by δ(N) for any unbounded computable function δ; here r is the number of strings on input, N is the common length of each string, and M is the total bitsize of the instance. Then for every unbounded computable function δ, there is no algorithm that solves the Independent Set problem in graphs supplied with (d,k)-tree-models satisfying d≤δ(k) that would run in time 2^(k)· n^(1) and simultaneously use n^(1) space.
The possibility of achieving time- and space-efficient algorithms for Longest Common Subsequence was also the base of conjectures formulated by Pilipczuk and Wrochna <cit.> for their lower bounds against time- and space-efficient algorithms on graphs of bounded pathwidth. The supposition made in Theorem <ref> is a refined version of those conjectures that takes also the length of the sought subsequence into account. The reduction underlying Theorem <ref> is loosely inspired by the constructions of <cit.>, but requires new ideas due to the different setting of tree-models of low depth.
Finally, given that the above results point to a fundamental role of shrubdepth in terms of space complexity, it is natural to ask whether shrubdepth can also be used to obtain meaningful tractability results with respect to the “usual” notion of fixed-parameter tractability. We conclude our exposition by highlighting two examples of problems which are -hard on graphs of bounded cliquewidth (and even of bounded pathwidth) <cit.>,
and yet which admit fixed-parameter algorithms when parameterized by the shrubdepth.
theoremFPTproblems
Metric Dimension and Firefighter can be solved in fixed-parameter time on graphs supplied with (d,k)-tree-models, where d and k are considered the parameters.
§ PRELIMINARIES
For a positive integer k, we denote by [k]={1,…,k} and [k]_0=[k]∪{0}.
For a function f A → B and elements a, b (not necessarily from A ∪ B), the function f[a ↦ b] A ∪{a}→ B ∪{b} is given by f[a ↦ b](x) = f(x) for x ≠ a and f[a ↦ b](a) = b.
We use standard graph terminology <cit.>.
The full proofs of our results also require the use of algebraic tools—notably the cover product and the fast subset convolution machinery of Björklund et al. <cit.>.
We use the same computational model as Pilipczuk and Wrochna <cit.>, namely the RAM model where each operation takes time polynomially proportional to the number of bits of the input, and the space is measured in terms of bits. We say that an algorithm A runs in time t(n) and space s(n) if, for every input of size n, the number of operations of A is bounded by t(n) and the auxiliary space used by A is bounded by s(n) bits.
Shrubdepth.
We first introduce the decomposition notion for shrubdepth: tree-models.
For d,k∈, a (d,k)-tree-model (T, ℳ, , ) of a graph G is
a rooted tree T of depth d together with a family of symmetric Boolean k × k-matrices ℳ = {M_a}_a ∈ V(T), a labeling function V(G) → [k], and a family of renaming functions = {_ab}_ab ∈ E(T) with _ab [k] → [k] for all ab ∈ E(T) such that:
* The leaves of T are identified with vertices of G. For each node a of T, we denote by V_a ⊆ V(G) the leaves of T that are
descendants of a, and with G_a = G[V_a] we denote the subgraph induced by these vertices.
* With each node a of T we associate a labeling function _a : V_a → [k] defined as follows.
If a is a leaf, then _a(a) = (a).
If a is a non-leaf node, then for every child b of a and every vertex v ∈ V_b, we have _a(v) = _ab(_b(v)).
* For every pair of vertices (u,v) of G, let a denote their least common ancestor in T.
Then we have uv ∈ E(G) if and only if M_a[_a(u),_a(v)]=1.
We introduce some notation. If (T, ℳ, , ) is a (d,k)-tree model of a graph G, then for every node a of T and every i∈ [k], let V_a(i) = _a^-1(i) be the set
of vertices labeled i at a.
Given a subset X of V_a and i∈ [k], let X_a(i) = X ∩ V_a(i) be the vertices of X labeled i at a.
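To fix ideas, the following is a minimal Python sketch of a (d,k)-tree-model together with the adjacency test from the third item of the definition; the class and method names (TreeModel, label_at, adjacent) are ours and purely illustrative.

# A minimal sketch of a (d,k)-tree-model; all names are illustrative, not taken from the paper.
class TreeModel:
    def __init__(self, parent, leaf_label, M, rho):
        self.parent = parent          # parent[a] = parent of node a in T (None for the root)
        self.leaf_label = leaf_label  # leaf_label[v] in {0,...,k-1} for every leaf v (= vertex of G)
        self.M = M                    # M[a] = symmetric k x k 0/1 matrix attached to node a
        self.rho = rho                # rho[(a, b)] = renaming function [k] -> [k] for the tree edge ab

    def label_at(self, v, a):
        """lambda_a(v): start from the leaf label of v and apply the renamings on the leaf-to-a path."""
        node, lab = v, self.leaf_label[v]
        while node != a:
            par = self.parent[node]
            lab = self.rho[(par, node)][lab]
            node = par
        return lab

    def lca(self, u, v):
        ancestors, x = set(), u
        while x is not None:
            ancestors.add(x)
            x = self.parent[x]
        x = v
        while x not in ancestors:
            x = self.parent[x]
        return x

    def adjacent(self, u, v):
        """uv is an edge of G iff M_a[lambda_a(u), lambda_a(v)] = 1 at the least common ancestor a."""
        a = self.lca(u, v)
        return self.M[a][self.label_at(u, a)][self.label_at(v, a)] == 1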
A (d,k)-tree-model can be understood as a term of depth d that constructs a k-labelled graph from single-vertex graphs by means of the following operations: renaming of the labels, and joining several labelled graphs while introducing edges between vertices originating from different parts based on their labels. This makes tree-models much closer to the NLC-decompositions which underlie the parameter NLC-width than to clique expressions. NLC-width is a graph parameter introduced by Wanke <cit.> that can be seen as an alternative, functionally equivalent variant of cliquewidth.
We say that a class 𝒞 of graphs has shrubdepth d if there exists k∈ℕ such that every graph in 𝒞 admits a (d,k)-tree-model. Thus, shrubdepth is a parameter of a graph class, rather than of a single graph; though there are functionally equivalent notions, such as SC-depth <cit.> or rank-depth <cit.>, that are suited for the treatment of single graphs.
We remark that in the original definition proposed by Ganian et al. <cit.>, there is no renaming of the labels: for every vertex u∈ V(G), λ_a(u) is always the same label λ(u) for all relevant nodes a. This boils down to all the renaming functions _ab being equal to the identity function on [k]. Clearly, a (d,k)-tree-model in the sense of Ganian et al. is also a (d,k)-tree-model in our sense, while a (d,k)-tree-model in our sense can easily be turned into a (d,k^d+1)-tree-model in the sense of Ganian et al. by setting λ(u) to be the (d+1)-tuple consisting of the labels λ_a(u), for a ranging over the ancestors of u in T. Thus, using either definition yields the same notion of shrubdepth for graph classes. We choose to use the definition with renaming, as it provides more flexibility in the construction of tree-models, which can result in a smaller number of labels and, consequently, better running times. It is also closer to the original definitions of clique expressions and NLC-decompositions.
Within this work we will always assume that a (d,k)-tree-model of the considered graph is provided on input. Thus, we abstract away the complexity of computing tree-models, but let us briefly discuss this problem. Gajarský and Kreutzer <cit.> gave an algorithm that, given a graph G and parameters d and k, computes a (d,k)-tree-model of G (in the sense of Ganian et al. <cit.>), if one exists, in time f(d,k)· n^(1) for a computable function f. Their approach is essentially kernelization: they iteratively “peel off” isomorphic parts of the graph until the problem is reduced to a kernel whose size is bounded only in terms of d and k; this kernel is then treated by any brute-force method. Consequently, a straightforward inspection of the algorithm of <cit.> shows that it can be implemented using polynomial space, but not space of the form (d+k)^(1)·log n, due to the necessity of storing all the intermediate graphs of the kernelization process. We leave as an open question whether a (d,k)-tree-model of a given graph G can be computed in time f(d,k)· n^(1) using space (d+k)^(1)·log n.
Cover products and transforms.
We now recall the algebraic tools we are going to use.
Let U be a finite set and R be a ring. Let g_1, …, g_t 2^U → R be set functions, for some integer t. For every
i∈ [t],
the zeta-transform ξ g_i 2^U → R of g_i is defined by
(ξ g_i)(Y) = ∑_X ⊆ Y g_i(X),
and similarly, the Möbius-transform μ g_i 2^U → R of g_i is given by
(μ g_i)(Y) = ∑_X ⊆ Y (-1)^|Y ∖ X| g_i(X).
The cover product g_1 ∗_c g_2 ∗_c …∗_c g_t 2^U → R of g_1, …, g_t is defined by
(g_1 ∗_c g_2 ∗_c …∗_c g_t)(Y) = ∑_X_1, …, X_t ⊆ U
X_1 ∪…∪ X_t = Y g_1(X_1) · g_2(X_2) ·…· g_t(X_t).
We emphasize that unlike another well-known concept of subset convolution, here the sets X_1, …, X_t are not required to be pairwise disjoint.
The following result of Björklund et al. <cit.> will be relevant for us:
Let U be a finite set, R be a ring, and g_1, …, g_t 2^U → R be set functions for a positive integer t.
Then for every X ∈ 2^U, it holds that
(ξ(g_1 ∗_c g_2 ∗_c …∗_c g_t)) (X) = (ξ g_1)(X) · (ξ g_2)(X) ·…· (ξ g_t)(X).
Also for every i ∈ [t], we have μ (ξ (g_i)) = g_i.
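The following self-contained Python sketch checks both statements of the proposition by brute force on a toy universe; all function names are ours, and everything is computed naively, since the only purpose is to make the identities concrete.

from itertools import combinations
import random

def subsets(U):
    U = list(U)
    return [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def zeta(g, U):
    """(zeta g)(Y) = sum over X subseteq Y of g(X)."""
    return {Y: sum(g[X] for X in subsets(Y)) for Y in subsets(U)}

def mobius(g, U):
    """(mu g)(Y) = sum over X subseteq Y of (-1)^(|Y minus X|) g(X)."""
    return {Y: sum((-1) ** (len(Y) - len(X)) * g[X] for X in subsets(Y)) for Y in subsets(U)}

def cover_product(gs, U):
    """(g_1 *_c ... *_c g_t)(Y) = sum over all covers X_1 u ... u X_t = Y of prod_i g_i(X_i)."""
    out = {Y: 0 for Y in subsets(U)}
    def rec(i, union_so_far, prod):
        if i == len(gs):
            out[union_so_far] += prod
            return
        for X in subsets(U):
            rec(i + 1, union_so_far | X, prod * gs[i][X])
    rec(0, frozenset(), 1)
    return out

U = {1, 2, 3}
gs = [{X: random.randint(0, 5) for X in subsets(U)} for _ in range(3)]
lhs = zeta(cover_product(gs, U), U)
rhs = {Y: 1 for Y in subsets(U)}
for g in gs:
    zg = zeta(g, U)
    rhs = {Y: rhs[Y] * zg[Y] for Y in subsets(U)}
assert lhs == rhs                          # zeta of a cover product = pointwise product of the zetas
assert mobius(zeta(gs[0], U), U) == gs[0]  # mu(zeta(g)) = g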
§ SPACE-EFFICIENT ALGORITHMS ON TREE-MODELS
§.§ Independent Set
In this section, we provide a fixed-parameter algorithm
computing the independent set polynomial of a graph in time 2^𝒪(dk)· n^𝒪(1) and using 𝒪(d k^2 log n) space, when given a (d,k)-tree model. In particular, given a (d, k)-tree model (T, ℳ, , ) of an n-vertex graph G, our algorithm will allow us to compute the number of independent sets of size p for each p∈ [n].
For simplicity of presentation, we start by describing an algorithm that uses space polynomial in d, k, and n, and then show how a result by Pilipczuk and Wrochna <cit.> can be applied to decrease the space complexity to 𝒪(d k^2 log n).
In order to simplify forthcoming definitions/statements, let a be an internal node of T with b_1,…, b_t as children.
For S⊆ [k], we denote by q(a,S,p) the number of independent sets I of size p of G_a such that S={i∈ [k] | I_a(i) ≠∅}.
Let us define the polynomial
(a,S) = ∑_p∈ q(a,S,p) · x^p.
For the root r of T, the number of independent sets of G of size p is then given by
∑_S⊆ [k] q(r,S,p)
and the independent set polynomial of G is
∑_S ⊆ [k](r, S).
Therefore, the problem boils down to the computation of (r, S) and its coefficients q(r,S,p). A usual way to obtain a polynomial or logarithmic space algorithm is a top-down traversal of a rooted tree-like representation of the input—in our case, this will be the tree model. In this top-down traversal, the computation of coefficients q(a,S,p) of (a, S)
makes some requests to the coefficients q(b_i,S_i,p_i) of (b_i, S_i)
for each i∈ [t], for some integer p_i, and some set S_i of labels of G_b_i so that ∑_i∈ [t] p_i = p and ⋃_i∈ [t]_ab_i(S_i) = S.
Since there are exponentially many (in t) possible partitions of p into t integers and t can be Θ(n), we must avoid running over all such integer partitions, and this will be done by the fast computation of a certain subset cover.
We will later show that if some independent set of G_a contains vertices of labels i and j with M_a[i, j] = 1, then all these vertices come from the same child of a. In particular, the vertices of label i (resp. j) cannot come from multiple children of a.
To implement this observation, after fixing a set S of labels, for each label class in S we “guess” (i.e., branch on) whether it will come from a single child of a or from many.
Such a guess is denoted by α S→{1_=, 2_≥}.
So, the assignment α will allow us to control the absence of edges in the sought-after independent set.
For a fixed α, naively branching over all possibilities of assigning the labels of S to the children of a with respect to α would take time exponential in t, which could be as large as Θ(n).
We will use inclusion-exclusion branching to speed-up the computations while retaining the space complexity.
In some sense, we will first allow less restricted assignments of labels to the children of a, and then filter out the ones that result in non-independent sets using the construction of a certain auxiliary graph.
The former will be implemented by using “less restricted” guesses β S →{1_=, 1_≥} where 1_≥ reflects that vertices of the corresponding label come from at least one child of a.
Note that if the vertices of some label i come from exactly one child of a, then such an independent set satisfies both β(i) = 1_= and β(i) = 1_≥.
Although it might seem counterintuitive, this type of guesses will enable a fast computation of a certain subset cover.
After that, we will be able to compute the number of independent sets satisfying guesses of type α S →{1_=, 2_≥} by observing that independent sets where some label i occurs in at least two children of a can be obtained by counting those where label i occurs in at least one child and subtracting those where this label occurs in exactly one child.
We now proceed to a formalization of the above.
Let S⊆_a(V_a) and α S→{1_=, 2_≥} be fixed.
Let s_1,…, s_|α^-1(2_≥)| be an arbitrary linear ordering of the elements of α^-1(2_≥).
To compute the number of independent sets that match our choice of α, we proceed by iterating over c∈{0,…, |α^-1(2_≥)|}, and we count independent sets where the labels in {s_1, …, s_c} occur exactly once, and the number of such sets where the labels occur at least once. Later, we will obtain the desired number of independent sets via carefully subtracting these two values. In particular, let γ{s_1,…,s_c}→{1_=, 1_≥}, and we denote by q(a,S,α,c,γ,p) the number of independent sets I of size p of G_a such that
* for every label i ∉ S, we have I_a(i)=∅;
* for every label i ∈{s_1, …, s_c} with γ(i) = 1_=,
there exists a unique child b_j of a such that I_a(i)∩ V_b_j≠∅;
* for every label i ∈{s_1, …, s_c} with γ(i) = 1_≥,
there exists at least one child b_j of a such that I_a(i)∩ V_b_j≠∅;
* for every label i ∈ S ∖{s_1, …, s_c} with α(i) = 1_=,
there exists a unique child b_j of a such that I_a(i)∩ V_b_j≠∅;
* and for every label i ∈ S ∖{s_1, …, s_c} with α(i) = 2_≥, there exist at least two children b_j_1 and b_j_2 of a such that I_a(i)∩ V_b_j_1≠∅ and I_a(i)∩ V_b_j_2≠∅.
Then for c ∈ [|α^-1(2_≥)|]_0 we define the polynomial T(a, S, α,c,γ) ∈[x] as
T(a, S, α,c,γ) = ∑_p ∈_0 q(a,S,α,c,γ,p) x^p.
We now proceed with some observations that directly follow from the definitions.
For every S⊆_a(V_a) and integer p, we have
q(a,S,p) = ∑_α∈{1_=,2_≥}^S,
γ∈{1_=,1_≥}^∅ q(a,S,α,0,γ,p)
and hence,
(a, S) = ∑_α∈{1_=,2_≥}^S
γ∈{1_=,1_≥}^∅T(a, S, α, 0,γ)
Moreover, for every α∈{1_=,2_≥}^S, every c ∈{0,…, |α^-1(2_≥)| - 1} and every γ{s_1, …, s_c}→{1_=, 1_≥},
we have
q(a,S,α,c,γ,p) = q(a,S,α, c+1, γ[s_c+1↦ 1_≥],p) - q(a,S,α,c+1, γ[s_c+1↦ 1_=], p).
and hence
T(a, S, α,c,γ) = T(a, S, α,c+1,γ[s_c+1↦ 1_≥]) - T(a, S, α,c+1,γ[s_c+1↦ 1_=]).
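The recurrence above is plain inclusion-exclusion: independent sets in which the label s_c+1 occurs in at least two children are counted by taking those in which it occurs in at least one child and subtracting those in which it occurs in exactly one. The following throwaway Python check (a toy model of label occurrences, not the algorithm itself) confirms the underlying counting identity.

from itertools import product

def count(t, k, i, pred):
    """Count t-tuples (X_1,...,X_t) of subsets of {0,...,k-1} (as bitmasks) for which
    pred(number of parts X_j containing label i) holds."""
    return sum(1 for tup in product(range(1 << k), repeat=t)
               if pred(sum(1 for X in tup if (X >> i) & 1)))

t, k, i = 3, 3, 1
at_least_two = count(t, k, i, lambda c: c >= 2)
at_least_one = count(t, k, i, lambda c: c >= 1)
exactly_one = count(t, k, i, lambda c: c == 1)
assert at_least_two == at_least_one - exactly_one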
It remains then to show how to compute, for every α∈{1_=,2_≥}^S and every γ∈{1_=,1_≥}^α^-1(2_≥), the polynomial
T(a,S,α,|α^-1(2_≥)|,γ).
It is worth mentioning that if β S→{1_=,1_≥} is such that β^-1(1_=)=α^-1(1_=)∪γ^-1(1_=) and β^-1(1_≥)=α^-1(2_≥)∖γ^-1(1_=), then q(a,S,α, |α^-1(2_≥)|, γ,p) is exactly the number of
independent sets I of size p of G_a satisfying the following:
* For every i ∈ [k] ∖ S, we have I_a(i) = ∅.
* For every i ∈β^-1(1_=),
there exists a unique index j ∈ [t] such that I_a(i) ∩ V_b_j≠∅.
* For every i ∈β^-1(1_≥),
there exists a (not necessarily unique) index j ∈ [t] such that I_a(i)
∩ V_b_j≠∅.
We will therefore write q(a,S,β,p) instead of q(a,S,α, |α^-1(2_≥)|, γ,p) and we define the polynomial (a, S, β) ∈[x] (where “T” stands for “transformed”) as
(a,S,β) = ∑_p∈ q(a,S,β, p)· x^p.
Recall that because we are computing
(a, S)
and
(a, S, β)
in a top-down manner, some queries for
(b_i, S_i)
will be made during the computation.
Before continuing in the computation of (a,S,β), let us first explain how to request the polynomials (b_j,S_j) from each child b_j of a. If a is not the root, let a^* be its parent in T, and we use (a, S)
(where “P” stands for “parent”) to denote the polynomial
(a, S) = ∑_p ∈_0 q^(a, S, p) · x^p
where
q^(a, S, p) = ∑_D ⊆_a(V_a)
_a^*a(D) = S q(a, D, p)
is the number of independent sets of G_a of size p that contain a vertex with label i ∈ [k] (i.e., I_a*(i) ≠∅) if and only if i ∈ S holds, where the labels are treated with respect to _a^*.
Then it holds that
(a, S) = ∑_D ⊆_a(V_a)
_a^*a(D) = S(a, D) .
As our next step, we make some observations that will not only allow us to restrict the β's we will need in computing the polynomial (a, S) from the polynomials (a, S, β),
but will also motivate the forthcoming definitions. Recall that we have fixed S⊆_a(V_a) and β S→{1_=,1_≥}, and in (a,S) and (a,S,α)
we are only counting independent sets I such that I_a(i) ≠∅ if and only if i∈ S.
If there exist i_1, i_2 ∈ S such that M_a[i_1, i_2] = 1, then for any independent set I counted in
(a,S), there exists a unique j∈ [t] such that I_a(i_1)∪ I_a(i_2)⊆ V_b_j.
Both I_a(i_1) and I_a(i_2) are non-empty.
So if there are at least two distinct j_1 and j_2 in [t] such that I_1 = I_a(i_1)∩ V_b_j_1 and I_2 = I_a(i_2)∩ V_b_j_2 are non-empty,
then M_a[i_1, i_2] = 1 implies that there is a complete bipartite graph between I_1 and I_2.
Hence the graph induced on I would contain an edge, which is a contradiction.
Recall that for every label i∈α^-1(2_≥), each independent set I contributing to the value q(a,S,α,0,γ,p) has the property that there are distinct children b_j_1 and b_j_2 such that I_a(i)∩ V_b_j_1 and I_a(i)∩ V_b_j_2 are both non-empty.
Then by <ref> for every i_1∈ S it holds that if α(i_1)=2_≥, then M_a[i_1,i_2]=0 for all i_2∈ S.
So if α does not satisfy this, the request T(a,S,α,0,γ) can be directly answered with 0.
Otherwise, since we use <ref>
for recursive requests, the requests (a, S, β) made all have the property that for each i_1∈ S the following holds: if β(i_1)=1_≥, then M_a[i_1,i_2]=0 for all i_2∈ S.
We call such β's conflict-free and we restrict ourselves to only conflict-free β's.
In other words, we may assume that if i_1, i_2 ∈ S and M_a[i_1, i_2] = 1, then we have β(i_1) = β(i_2) = 1_=.
Observation <ref> implies that for such i_1 and i_2, each independent set I counted in (a,S,β) is such that I_a(i_1) ∪ I_a(i_2) ⊆ V_b_j for some child b_j of a.
Now, to
capture this observation, we define an auxiliary graph F^a, β as follows. The vertex set of F^a, β is β^-1(1_=) and there is an edge
between vertices i_1 ≠ i_2 if and only if M_a[i_1, i_2] = 1. Thus, by the above observation, if we consider a connected component C of F^a, β,
then in each independent set I counted in (a, S, β), all the vertices of I with labels from C come from a single child of a.
Let C be a connected component of F^a, β.
For every independent set
I counted in (a, S, β), there exists a unique j∈ [t] such that ⋃_i ∈ C I_a(i) ⊆ V_b_j.
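The following short sketch (with names of our choosing) shows how the auxiliary graph F^a, β and its connected components can be extracted from the matrix M_a and a conflict-free guess β: the vertices are the labels guessed 1_=, and two of them are joined exactly when the corresponding entry of M_a equals 1.

def conflict_components(M_a, beta):
    """beta maps each label of S to '1=' or '1>='.  Build F^{a,beta} on beta^{-1}(1=), with an
    edge {i1, i2} whenever M_a[i1][i2] == 1, and return the list of its connected components."""
    verts = [i for i, guess in beta.items() if guess == '1=']
    parent = {i: i for i in verts}          # a tiny union-find over the vertices of F
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i1 in verts:
        for i2 in verts:
            if i1 < i2 and M_a[i1][i2] == 1:
                parent[find(i1)] = find(i2)
    components = {}
    for i in verts:
        components.setdefault(find(i), set()).add(i)
    return list(components.values())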
We proceed with some intuition on how we compute (a, S, β) by requesting some (b_j, S_j).
Let I be some independent set counted in (a, S, β).
This set contains vertices with labels from the set S, and the assignment β determines whether there is exactly one or at least one child from which the vertices of a certain label come from.
Moreover, by <ref>, for two labels i_1, i_2 from the same connected component of F^a, β, the vertices with labels i_1 and i_2 in I come from the same child of a.
Hence, to count such independent sets, we have to consider all ways to assign labels from S to subsets of children of a such that the above properties are satisfied—namely, each connected component of F^a, β is assigned to exactly one child while every label from β^-1(1_≥) is assigned to at least one child.
Since the number of such assignments can be exponential in n, we employ the fast computation of a certain subset cover.
We now formalize this step.
By (F^a, β) we denote the set of connected components of F^a, β.
The universe U^a, β (i.e., the set of objects we assign to the children of a) is defined as
U^a, β = β^-1(1_≥) ∪(F^a, β).
For every j ∈ [t], we define a mapping f_j^a, β 2^U^a, β→[x, z] (i.e., to polynomials over x and z) as follows:
f_j^a, β(X) = (b_j, ^a, β(X)) z^|X ∩(F^a, β)|
where ^a, β 2^U^a, β→ 2^S
intuitively performs a union over all the present labels—formally:
^a, β(W) = (W ∩β^-1(1_≥)) ∪⋃_w ∈ W ∩(F^a, β) w.
So if we fix the set X of labels coming from the child b_j, then the (unique) coefficient in f_j^a, β(X) reflects the number of independent sets of G_b_j using exactly these labels (with respect to _a).
The exponent of the formal variable z is intended to store the number of connected components of F^a, β assigned to b_j.
This will later allow us to exclude from the computation those assignments of labels from S to children of a where the elements of some connected component of F^a, β are assigned to multiple children of a.
For every j ∈ [t], we define a similar function g^a, β_j 2^S→[x, z] as follows:
g^a, β_j(Y) =
f^a, β_j(X) if ^a, β(X) = Y for some X ∈ 2^U^a, β,
0 otherwise.
Observe that the function ^a, β is injective and hence g^a, β_j is well-defined.
The mapping g^a, β_j filters out those assignments where some connected component of F^a, β is “split”.
For simplicity of notation, when a and β are clear from the context, we omit the superscript a, β.
Crucially for our algorithm, we claim that the following holds:
(a, S, β) = (∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S g_1(X_1) g_2(X_2) … g_t(X_t) ) ⟨ z^|(F)|⟩
where for a polynomial P = ∑_u_1, u_2 ∈_0 q_u_1, u_2 x^u_1 z^u_2∈[x, z] the polynomial P ⟨ z^|(F)|⟩∈[x] is defined as P ⟨ z^|(F)|⟩ = ∑_u_1 ∈_0 q_u_1, |(F)| x^u_1.
In simple words, the ⟨ z^|(F)|⟩ operator first removes all terms where the degree of z is not equal to |(F)| and then “forgets” about z.
Before we provide a formal proof, let us sketch the idea behind it.
On the left side of the equality, we have the polynomial keeping track of the independent sets of G_a that “respect” β.
First, for every label i ∈ S, some vertex of this label must occur in at least one child of a: this is handled by considering all covers X_1 ∪…∪ X_t = S where for every j ∈ [t], the set X_j represents the labels assigned to the child b_j.
Next, if some X_j “splits” a connected component, i.e., takes only a proper non-empty subset of this component, then such an assignment would not yield an independent set by <ref> and the function g_j ensures that the corresponding cover contributes zero to the result.
Hence, for every cover X_1 ∪…∪ X_t = S with a non-zero contribution to the sum, every connected component of F is completely contained in at least one X_j.
In particular, this implies that for every non-zero term on the right side, the degree of the formal variable z in this term is at least z^|(F)|.
On the other hand, if some connected component of F is contained in several sets X_j, then the degree of the corresponding monomial is strictly larger than the total number of connected components and such covers X_1, …, X_t are excluded from the consideration by applying ⟨ z^|(F)|⟩.
We formalize this intuition below:
Let (T, ℳ, , ) be a (d,k)-tree model of an n-vertex graph G. Let a be a non-leaf node of T and let b_1, …, b_t be the children of a. For every S ⊆_a(V_a), and every conflict-free β S →{1_=, 1_≥}, it holds that
(a, S, β) = (∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S (∏_j = 1^t g^a, β_j(X_j))) ⟨ z^|(F^a, β)|⟩,
where for a polynomial P = ∑_u_1, u_2 ∈_0 q_u_1, u_2 x^u_1 z^u_2∈[x, z] the polynomial P ⟨ z^|(F^a, β)|⟩∈[x] is defined as P ⟨ z^|(F^a, β)|⟩ = ∑_u_1 ∈_0 q_u_1, |(F^a, β)| x^u_1.
First, we bring the right-hand side of the equality into a more suitable form.
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S∏_j = 1^t g_j(X_j) =
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S,
∀ j ∈ [t] ∃ W_j ⊆ U (W_j) = X_j∏_j = 1^t (b_j, X_j) z^|W_j ∩(F)| =
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S,
∀ j ∈ [t] ∃ W_j ⊆ U (W_j) = X_j(∏_j = 1^t (∑_p_j ∈_0 q^(b_j, X_j, p_j) x^p_j) z^|W_j ∩(F)|) =
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S,
∀ j ∈ [t] ∃ W_j ⊆ U (W_j) = X_j
p_1, …, p_t ∈_0∏_j = 1^t q^(b_j, X_j, p_j) x^p_j z^|W_j ∩(F)| =
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S,
∀ j ∈ [t] ∃ W_j ⊆ U (W_j) = X_j
p_1, …, p_t ∈_0(∏_j = 1^t q^(b_j, X_j, p_j) ) x^∑_j = 1^t p_j z^∑_j = 1^t |W_j ∩(F)|
We recall that the map ^a, β is injective, so the sum above is well-defined.
So we have to prove that
(a, S, β) =
(∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S,
∀ j ∈ [t] ∃ W_j ⊆ U (W_j) = X_j,
p_1, …, p_t ∈_0(∏_j = 1^t q^(b_j, X_j, p_j) ) x^∑_j = 1^t p_j z^∑_j = 1^t |W_j ∩(F)|) ⟨ z^|(F)|⟩,
i.e.,
(a, S, β) =
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S,
∀ j ∈ [t] ∃ W_j (W_j) = X_j,
∑_j ∈ [t] |W_j ∩(F)| = |(F)|,
p_1, …, p_t ∈_0(∏_j = 1^t q^(b_j, X_j, p_j) ) x^∑_j = 1^t p_j
To prove that these two polynomials are equal, we show that for every power p ∈_0 of x, the coefficients at x^p in both polynomials are equal.
So let us fix an arbitrary integer p.
For one direction, let I be an independent set counted in the coefficient q(a, S, β, p) at the term x^p on the left-hand side (a, S, β); in particular, we then have |I| = p.
For every j ∈ [t], let
I^j = I ∩ V_b_j,
p_j = |I_j|,
and X_j = { i ∈ [k] | I_a(i) ∩ V_b_j≠∅}.
Clearly, we have p_1 + … + p_t = p and X_1 ∪…∪ X_t = S.
Now consider some j ∈ [t].
The set I^j is an independent set of G_b_j that contains vertices with labels from exactly X_j (with respect to _a).
So I^j is counted in q^(b_j, X_j, p_j).
Let
A_j = X_j ∩β^-1(1_=)
and
B_j = X_j ∩β^-1(1_≥).
Note that A_j ∪ B_j = X_j, A_1 ∪…∪ A_t = β^-1(1_=), and B_1 ∪…∪ B_t = β^-1(1_≥).
Then by <ref>, for every connected component C of F and every j ∈ [t], we either have C ⊆ X_j or C ∩ X_j = ∅.
Therefore, for every j ∈ [t], we have
X_j = B_j ∪⋃_{C ∈(F) : C ∩ X_j ≠∅} C.
and hence,
X_j = (W_j) where W_j = B_j ∪{ C ∈(F) | C ∩ X_j ≠∅}.
Finally, by the definition of objects counted in (a, S, β),
since the labels from β^-1(1_=) occur in exactly one child of a,
it holds that A_j_1∩ A_j_2 = ∅ for any j_1 ≠ j_2 ∈ [t].
Together with A_1 ∪…∪ A_t = β^-1(1_=) this implies that for every connected component C of F, there exists exactly one index j_C ∈ [t] with C ⊆ A_j_C, i.e., C ∈ W_j_C.
So we obtain
∑_j ∈ [t] |W_j ∩(F)| = |(F)|.
Altogether, the tuple (I^1, …, I^t) is counted in the product ∏_j = 1^t q^(b_j, X_j, p_j) and the properties shown above imply that this product contributes to the coefficient at the monomial x^p.
Also note that the mapping of I to (I^1, …, I^t) is injective so we indeed obtain that the coefficient at x^p on the left-hand side of (<ref>) is at most as large as one the right-hand side.
Now we show that the other inequality holds as well.
Let X_1, …, X_t ⊆ [k], I^1, …, I^t ⊆ V, W_1, …, W_t ⊆ U, and p_1, …, p_t ∈_0 be such that the following properties hold:
* p_1 + … + p_t = p,
* X_1 ∪…∪ X_t = S,
* for every j ∈ [t], it holds that (W_j) = X_j,
* ∑_j ∈ [t] |W_j ∩(F)| = |(F)|,
* and for every j ∈ [t], the set I^j is an independent set of G_b_j of size p_j such that for every i ∈ [k], I^j_a(i) ≠∅ holds iff i ∈ X_j, i.e., I^j is counted in q^(b_j, X_j, p_j).
Let I = I^1 ∪…∪ I^t.
Since for every j ∈ [t], we have I^j ⊆ V_b_j, the sets I^1, …, I^t are pairwise disjoint and we have |I| = p.
We also have I ⊆ V_a and for every i ∈ [k], we have I_a(i) ≠∅ iff i ∈ S, i.e., I contains vertices with labels from exactly S with respect to _a.
We claim that I is an independent set of G_a.
Since I_1, …, I_t are independent sets of G_b_1, …, G_b_t, respectively, and G_b_1, …, G_b_t are induced subgraphs of G_a, it suffices to show that there are no edges between I_j_1 and I_j_2 for any j_1 ≠ j_2 ∈ [t].
For this, suppose there is an edge v_1 v_2 of G_a with v_1 ∈ I^j_1 and v_2 ∈ I^j_2 for some j_1 ≠ j_2 ∈ [t].
Also let i_1 = _a(v_1) and i_2 = _a(v_2).
Since a is the lowest common ancestor of v_1 and v_2, it holds that M_a[i_1, i_2] = 1.
By the assumption of the lemma, the mapping β is conflict-free so we have β(i_1) = β(i_2) = 1_=.
Then, the property M_a[i_1, i_2] = 1 implies that i_1 and i_2 belong to the same connected component, say C, of F.
Recall that we have i_1 ∈ X_j_1, i_2 ∈ X_j_2, (W_j_1) = X_j_1, and (W_j_2) = X_j_2.
Hence, it holds that
C ∈ W_j_1 and C ∈ W_j_2.
On the other hand, let C' be an arbitrary connected component of F and let i ∈ C' be some label.
The property X_1 ∪…∪ X_t = S implies that there exists an index j_C' with i ∈ X_j_C'.
Due to (W_j_C') = X_j_C' we then have C' ∈ W_j_C'.
I.e., every connected component of F is contained in at least one of the sets W_1, …, W_t while C is contained in at least two such sets so we get
∑_j ∈ [t] |W_j ∩(F)| > |(F)|
– a contradiction.
Hence, the set I is indeed an independent set of G_a of size p such that it contains vertices with labels from exactly S with respect to _a.
So it is counted in the coefficient q^(a, S, β, p) of the term x^p in (a, S, β).
Finally, first note that (I^1, …, I^t) is uniquely mapped to the tuple (X_1, …, X_t, p_1, …, p_t) so it is counted only once on the right-hand side.
And second, the mapping of (I^1, …, I^t) to I is injective (since V_b_1, …, V_b_t are pairwise disjoint).
Therefore, the coefficient at the term x^p on the right-hand side of (<ref>) is at most as large as on the left-hand side.
Altogether, we conclude that the two polynomials in (<ref>) are equal, as desired.
The above lemma implies that:
(a, S, β) =
∑_X_1, …, X_t ⊆ [k]
X_1 ∪…∪ X_t = S (∏_j = 1^t g_j(X_j)) ⟨ z^|(F)|⟩ =
(g_1 ∗_c g_2 ∗_c …∗_c g_t)(S) ⟨ z^|(F)|⟩<ref>=
((μ(ξ(g_1 ∗_c g_2 ∗_c …∗_c g_t)))(S)) ⟨ z^|(F)|⟩ =
(∑_Y ⊆ S (-1)^|S ∖ Y| (ξ(g_1 ∗_c g_2 ∗_c …∗_c g_t))(Y)) ⟨ z^|(F)|⟩<ref>=
(∑_Y ⊆ S (-1)^|S ∖ Y| (ξ g_1)(Y) (ξ g_2)(Y) … (ξ g_t)(Y) ) ⟨ z^|(F)|⟩ =
(∑_Y ⊆ S (-1)^|S ∖ Y|∏_j = 1^t (ξ g_j)(Y) ) ⟨ z^|(F)|⟩ =
(∑_Y ⊆ S (-1)^|S ∖ Y|∏_j = 1^t ∑_Z ⊆ Y g_j(Z) ) ⟨ z^|(F)|⟩
Now we can apply a result by Björklund et al. <cit.>
to accelerate the computation of (a, S, β):
It holds that (a, S, β) = (∑_Y ⊆ S (-1)^|S ∖ Y|∏_j = 1^t ∑_Z ⊆ Y g_j(Z) ) ⟨ z^|(F)|⟩.
We now have the equalities required for our algorithm to solve Independent Set parameterized by shrubdepth.
By using these equalities directly, we would obtain an algorithm running in time 2^𝒪(k d)· n^𝒪(1) and space 𝒪(d k^2 n^2).
However, the latter can be substantially improved by using a result of Pilipczuk and Wrochna <cit.> based on the Chinese remainder theorem:
Let P(x) = ∑_i=0^n' q_i x^i be a polynomial in one variable x of degree at most n' with integer coefficients satisfying 0 ≤ q_i ≤ 2^n' for i = 0, …, n'.
Suppose that given a prime number p ≤ 2n' + 2 and s ∈_p, the value P(s) mod p can be computed in time T and space S.
Then given k ∈{0, ... , n'}, the value q_k can be computed in time 𝒪(T ·poly(n')) and space 𝒪(S + log n').
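The number-theoretic principle behind this lemma can be sketched in Python as follows; this is a simplified variant for illustration only, since unlike the actual procedure it stores all n'+1 evaluations for one prime at a time (and hence does not reproduce the stated space bound), and it simply takes enough primes larger than n' instead of restricting to primes at most 2n'+2. For each such prime p we recover P modulo p by evaluating it at the points 0,…,n' and interpolating, and we combine the residues of the desired coefficient (called k in the lemma, want below) by the Chinese remainder theorem once the product of the primes used exceeds 2^n'.

def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def coeff_mod_p(evaluate, deg, want, p):
    """Coefficient of x^want of a degree-<=deg polynomial modulo a prime p > deg, using only the
    evaluation oracle evaluate(s, p) = P(s) mod p, via Lagrange interpolation at the points 0..deg."""
    xs = list(range(deg + 1))
    ys = [evaluate(s, p) % p for s in xs]
    total = 0
    for j, xj in enumerate(xs):
        num = [1]      # coefficients (low degree first) of prod_{m != j} (x - x_m)
        denom = 1
        for m, xm in enumerate(xs):
            if m == j:
                continue
            new = [0] * (len(num) + 1)
            for idx, c in enumerate(num):
                new[idx] = (new[idx] - xm * c) % p
                new[idx + 1] = (new[idx + 1] + c) % p
            num = new
            denom = (denom * (xj - xm)) % p
        total = (total + ys[j] * num[want] * pow(denom, -1, p)) % p
    return total

def coefficient(evaluate, deg, coeff_bound, want):
    """Recover the exact (nonnegative) coefficient, assumed to be at most coeff_bound, by CRT."""
    x, modulus, p = 0, 1, deg
    while modulus <= coeff_bound:
        p += 1
        while not is_prime(p):
            p += 1
        r = coeff_mod_p(evaluate, deg, want, p)
        t = ((r - x) * pow(modulus, -1, p)) % p   # lift x to the new modulus
        x, modulus = x + modulus * t, modulus * p
    return x

# Example: P(x) = (1 + x)^5, so the coefficient of x^2 is 10.
assert coefficient(lambda s, p: pow(1 + s, 5, p), deg=5, coeff_bound=2 ** 5, want=2) == 10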
With this, we can finally prove <Ref>.
The independent set polynomial of the graph G = G_r is exactly
∑_S ⊆ [k](r, S)
where r is the root of T.
Let us denote this polynomial P.
To apply <ref>, we use n' := n.
Let p ≤ 2n + 2 be a prime number.
The bound on p implies that any number from _p can be encoded using 𝒪(log n) bits; this will be used to bound the space complexity.
There are at most 2^n independent sets of G so every coefficient of P lies between 0 and 2^n, and therefore the prerequisites stated in the first sentence of <ref> are satisfied.
Let s ∈_p.
We will now show that the value P(s) mod p can be evaluated in time 2^𝒪(k d)· n^𝒪(1) and space 𝒪(d k^2 log n).
At that point, the result will follow by <ref>.
Since we are interested in the evaluation of P at s modulo p, instead of querying and storing all coefficients, as a result of the recursion, we return the evaluation of a certain polynomial (e.g., (a, S, β)) at s modulo p.
For this, the formal variable x is always substituted by s and then arithmetic operations in _p are carried out.
In the following, when computing a sum (resp. product) of certain values, these values are computed recursively one after another and we store the counter (e.g., the current subset S ⊆ [k]) as well as the current value of the sum (resp. product).
Our algorithm relies on the equalities provided above and we now provide more details to achieve the desired time and space complexity.
Let us denote (a,S,α) := T(a, S, α, 0, ∅) for simplicity.
First, if a is a leaf of T, then (a, S) can be computed directly via
(a, S) =
1 if S = ∅
x if S = {_a(a)}
0 otherwise
and this is our base case.
Otherwise, the queries are answered recursively and five types of queries occur, namely (a, S), (a,S,β),
T(a,S,α,c,γ), (a, S, β), and (a, S).
Let a be an inner node with children b_1, …, b_t.
To answer a query T(a,S,α,c,γ) for c < |α^-1(2_≥)|, we recurse via (<ref>).
If c = |α^-1(2_≥)|, then we first construct β S →{1_=, 1_≥} given by β^-1(1_=)=α^-1(1_=)∪γ^-1(1_=) and β^-1(1_≥)=α^-1(2_≥)∖γ^-1(1_=) and then query (a, S, β).
Then, to answer a query (a, S, β), we recurse via (<ref>).
And finally, to answer a query (a, S), we recurse using (<ref>).
Each of the above recurrences is given by a combination of sums and products of the results of recursive calls and these values are from _p.
To keep the space complexity of the algorithm bounded, for such recursion, the result is computed “from inside to outside” by keeping track of the current sums (resp. products) as well as the next value to be queried.
For example, for (<ref>), we iterate through all Y ⊆ S, store the current value of the outer sum (modulo p), then for fixed Y, we iterate over j ∈ [t] and store j and the current value of the product (modulo p), and then for fixed Y and j, iterate through Z ⊆ Y and store the current value of Z and the current inner sum.
After the complete iteration over Z (resp. j) we update the current value of the product (resp. outer sum) and move on to the next j (resp. Y).
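Schematically, this evaluation order can be pictured as in the following Python fragment, where inner_value is a hypothetical stand-in for the recursively computed values in _p; at any point only the loop counters and the running partial sums and products are held in memory.

```python
from itertools import chain, combinations

def subsets(Y):
    Y = sorted(Y)
    return chain.from_iterable(combinations(Y, r) for r in range(len(Y) + 1))

def nested_sum_product(S, children, inner_value, p):
    # inner_value(j, Y, Z) is a hypothetical callback standing in for a
    # recursively computed value in F_p; only counters and running totals
    # are stored at any time.
    total = 0
    for Y in subsets(S):                   # current Y + running outer sum
        prod = 1
        for j in children:                 # current j + running product
            acc = 0
            for Z in subsets(Y):           # current Z + running inner sum
                acc = (acc + inner_value(j, frozenset(Y), frozenset(Z))) % p
            prod = prod * acc % p
        total = (total + prod) % p
    return total
```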
Now we analyze the time and space complexity of the algorithm.
We start with the running time.
For this, we analyze how often every query is answered.
Namely, for all relevant values of a, S, α, β, c, and γ, for each query (a, S), (a, S, α),
T(a, S,α, c,γ), (a, S, β), resp. (a, S),
we use Q((a, S)), Q((a, S, α)),
Q(T(a, S,α, c,γ)), Q((a, S, β)), Q((a, S)), respectively, to denote the number of times the query is answered by the algorithm and we call this value the multiplicity of the query.
Then, for h ∈ [d]_0, we define the value Q_(h) to be the maximum multiplicity of a query (a, S) over all nodes a at height h in T and all reasonable S.
Similarly, we define the values Q_(h),
Q_T(h), Q_(h), and Q_(h) where we maximize over all nodes a at height h and all reasonable values of S, α, β, γ and c.
We now upper bound these values.
Let b be a node at height h for some h ∈ [d]_0.
If b = r, then a query (b, S) is not asked at all.
Otherwise, let a be the parent of b, and let j be such that b = b_j is the j-th child of a.
Then (b, S) can be asked when answering some query of the form (a, D, β) to compute some value ξ g_j^a, β(Y) such that S ⊆ Y ⊆ D.
Therefore, for fixed D and β, the value (b, S) is queried at most 2^k times, so we obtain
Q((b, S)) ≤∑_S ⊆ D ⊆ [k], β D →{1_=, 1_≥} Q((a, D, β))
and hence,
Q_(h) = 0 if h = 0, and Q_(h) ≤ 2^3 k Q_(h - 1) otherwise.
Next, we consider a query of form (b, S, β).
Observe that for every α S →{1_=, 2_≥} when recursing via (<ref>) to answer T(a, S, α, 0, ∅), we branch on the values 1_= and 1_≥ for s_1, …, s_|α^-1(2_≥)| one after another.
Thus, after |α^-1(2_≥)| steps every branch results in its own γ s_1, …, s_|α^-1(2_≥)|→{1_=, 1_≥}, and hence, in its own β S →{1_=, 1_≥}.
Therefore, if we fix α, then every (a, S, β) is queried at most once when answering T(a, S, α, 0, ∅).
Hence, we have
Q((b, S, β)) ≤∑_α S →{1_=, 1_≥} Q((b, S, α))
and therefore,
Q_(h) ≤ 2^k Q_(h).
Every query T(b, S, α,c,γ) is also asked at most once while answering a query of T(b, S, α,0,∅), i.e.,
Q(T(b,S,α,c,γ)) ≤ Q((b, S, α))
and
Q_T(h) ≤ Q_(h).
Further, for each fixed α, a query (b, S, α) is asked exactly once for every query of (b, S), i.e.,
Q((b, S, α)) ≤ Q((b, S))
and
Q_(h) ≤ Q_(h).
Finally, a query of the form (b, S) is queried at most once for every query of the form (b, D), so we have
Q((b, S)) ≤∑_D ⊆ [k] Q((b, D))
and
Q_(h) ≤ 2^k Q_(h).
By induction over h, we obtain that
Q_(h), Q_T(h), Q_(h), Q_(h), Q_(h) ≤ 2^5 h k
and
Q_(h), Q_T(h), Q_(h), Q_(h), Q_(h) ≤ 2^5 d k∈ 2^𝒪(k d)
for every h ∈ [d]_0, i.e., any fixed query is asked 2^𝒪(k d) times.
There are 𝒪(nd) nodes in T and there are at most 2^k reasonable values of S; for any S, there are at most 2^k choices for α, β, and γ; and there are at most k reasonable values of c.
Hence, there are at most
𝒪(nd) (2^2 k + 2^3 k k + 2^2 k + 2^k + 2^k) ∈ 2^𝒪(k)· n^𝒪(1)
different forms of queries and so there are at most
2^𝒪(k d)· 2^𝒪(k)· n^𝒪(1) = 2^𝒪(k d)· n^𝒪(1)
recursive calls.
Next, we bound the time spent on each query additionally to the recursive calls.
For each query, this additional time is mostly determined by 𝒪(2^2 k n) arithmetic operations.
For a query of the form (·), arithmetic operations are carried out over polynomials in a formal variable z
where the coefficients are from _p.
It is crucial to observe that since in the end of the computation we apply the ⟨ z^|(F)|⟩ operation and the auxiliary graph F has at most k connected components, we can safely discard coefficients at terms z^r for any r > k.
Therefore, it suffices to keep track of at most k coefficients from _p.
For the remaining queries, the arithmetic operations are carried out over _p.
So in any case, there are at most k relevant values from _p to store as a partial sum resp. product, and a single arithmetic operation can therefore be carried out in 𝒪(log n) time.
Further, when answering a query of the form (a, S, β) and computing a value of the form g_j^a, β(X_j) for this, we can check whether for X_j there is W_j with ^a, β(W_j) = X_j as follows. First, we compute the connected components of F^a, β: we start with a partition of β^-1(1_=) into singletons and then iterate over all pairs of vertices i_1 and i_2 and if M_a[i_1, i_2] = 1, then we merge the sets containing i_1 and i_2.
As a result of this process, we obtain the set of connected components of F^a, β.
Then for each connected component C, we check if C ∩ X_j ∈{∅, C} holds.
If this does not hold for at least one connected component, then we conclude that g_j^a, β(X_j) = 0.
Otherwise, the desired set W_j exists and we have |W_j ∩(F^a, β)| = r where r is the number of connected components C with C ∩ X_j = C.
This process then runs in time 𝒪(k^3) and space 𝒪(k).
Although this can be accelerated, this step is not a bottleneck so this time and space complexity suffices for our purposes.
Also, when answering a query of the form (a, S, α), we need to check whether there exist labels i_1, i_2 ∈ S with α(i_1) = 1_≥ and M_a[i_1, i_2] = 1: this can be done in time 𝒪(k^2) and space 𝒪(log k) by considering all pairs i_1, i_2 ∈ S and looking up these properties.
So for any query, the time spent on this query apart from the recursive calls is bounded by
𝒪(2^2 k n · k^3 ·log n) = 2^𝒪(k)· n^𝒪(1).
And the total running time of the algorithm is bounded by
2^𝒪(k d)· n^𝒪(1)· 2^𝒪(k)· n^𝒪(1) = 2^𝒪(k d)· n^𝒪(1),
i.e., the number of queries times the complexity of a single query.
Finally, we bound the space complexity.
The space used by a single query is to store the partial sums and/or products modulo p as well as the counters that store the information about the next recursive call (e.g., current S).
For any query other than (·), the partial result is in _p.
For a query of the form (·), we are working with a polynomial in the formal variable z.
Above we have argued why the coefficients at terms z^j for j > k can be discarded.
Therefore, it suffices to keep track of at most k coefficients from _p.
Recall that p ≤ 2n+2 so any value from _p can be encoded with log n bits.
When answering a query of the form (a, S, β), we also need to consider the connected components of F^a, β: as argued above, this can be accomplished in 𝒪(k) space.
So the space complexity of a single query can be bounded by
𝒪(k log n) + 𝒪(k) + log n = 𝒪(k log n).
The depth of the recursion is bounded by 𝒪(k d): the depth of T is d and for each node, there are at most k + 4 recursive calls queried at this node (namely, (·), (·), = T(·, c = 0, ·), …, T(·, c = |α^-1(2_≥)| ≤ k, ·), (·)).
Finally, during the algorithm we need to keep track of the node we are currently at.
Therefore, the space complexity of the algorithm is
𝒪(k d) 𝒪(k log n) + 𝒪(log n) = 𝒪(k^2 d log n).
§.§ Counting List-Homomorphisms
We now explain how to apply the techniques from <ref> to a broader class of problems, namely all problems expressible as instantiations of the #-List-H-Homomorphism problem for a fixed pattern graph H (which we will introduce in a moment). In this way, we cover problems such as Odd Cycle Transversal and q-Coloring, for a fixed q. Furthermore, the techniques will be useful for solving Dominating Set later.
Let H be a fixed undirected graph (possibly with loops) and let R ⊆ V(H) be a designated set of vertices.
An instance of the R-Weighted #-List-H-Homomorphism problem consists of
a graph G,
a weight function ω V(G) →ℕ,
a list function L V(G) → 2^V(H),
a cardinality C ∈ℕ and a total weight W ∈ℕ.
The goal is to count the number of list H-homomorphisms of G such that exactly C vertices of G are mapped to R and their total weight in ω is W.
More formally, we seek the value
|{ϕ V(G) → V(H) | ∀ v ∈ V(G): ϕ(v) ∈ L(v), ∀ uv ∈ E(G): ϕ(u)ϕ(v) ∈ E(H), |ϕ^-1(R)| = C, and ω(ϕ^-1(R)) = W}| .
We say that such ϕ has cardinality C and weight W.
For the “standard” #-List H-Homomorphism problem we would use R = V(H), C = W = |V(G)|, and unit weights. We also have the following special cases of the R-Weighted #-List-H-Homomorphism problem. In all cases, we consider unit weights.
* To model Independent Set, the pattern graph H consists of two vertices 𝐮 and 𝐯 and the edge set contains a loop at 𝐯 and the edge 𝐮𝐯.
The set R consists of 𝐮 only.
Then Independent Set is equivalent to finding the largest for which we have a positive number of solutions in the constructed instance of R-Weighted #-List-H-Homomorphism.
* Similarly, to model Odd Cycle Transversal, the pattern graph H is a triangle on vertex set {𝐮,𝐯,𝐰} with a loop added on 𝐮.
Again, we take R={𝐮}.
* To model q-Coloring, we take H to be the loopless clique on q vertices, and R=V(H).
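To make the definition concrete, the following brute-force Python reference implementation (exponential time; the encodings of H, the lists, and all identifiers are hypothetical) counts exactly the homomorphisms described above. The toy usage re-derives the Independent Set modelling on a triangle.

```python
from itertools import product as cartesian

def count_list_homs(G_vertices, G_edges, H_edges, L, R, omega, C, W):
    # Count maps phi with phi(v) in L(v), edges of G mapped to edges of H,
    # exactly C vertices mapped into R, and their total weight equal to W.
    count = 0
    for choice in cartesian(*[sorted(L[v]) for v in G_vertices]):
        phi = dict(zip(G_vertices, choice))
        if any((phi[u], phi[v]) not in H_edges and (phi[v], phi[u]) not in H_edges
               for u, v in G_edges):
            continue
        mapped = [v for v in G_vertices if phi[v] in R]
        if len(mapped) == C and sum(omega[v] for v in mapped) == W:
            count += 1
    return count

# Independent Set modelling on a triangle: H has vertices u, v, a loop at v and
# the edge uv, with R = {u}; unit weights.
V, E = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
H_edges = {("u", "v"), ("v", "v")}
L = {v: {"u", "v"} for v in V}
print(count_list_homs(V, E, H_edges, L, {"u"}, {v: 1 for v in V}, 2, 2))  # -> 0
print(count_list_homs(V, E, H_edges, L, {"u"}, {v: 1 for v in V}, 1, 1))  # -> 3
```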
While in all the cases described above we only use unit weights, we need to work with any weight function in our application to Dominating Set.
In this section, we build on the techniques of Subsection <ref> to establish the following result.
Fix a graph H (possibly with loops) and R ⊆ V(H). There is an algorithm which takes as input an n-vertex graph G together with a weight function ω and a (d, k)-tree-model, runs in time 2^𝒪(d k)· n^𝒪(1)· (W^*)^𝒪(1) and uses space 𝒪(k^2 d(log n + log W^*)), and solves the R-Weighted #-List-H-Homomorphism in G, where W^* denotes the maximum weight in ω.
Using the argumentation above, from <ref> we can derive the following corollaries.
Fix a graph H (possibly with loops).
Then given an n-vertex graph G together with a (d, k)-tree-model, #-List-H-Homomorphism in G can be solved in time 2^𝒪(dk)· n^𝒪(1) and space 𝒪(dk^2 log n).
Fix q ∈ℕ. Then given an n-vertex graph G together with a (d, k)-tree-model, q-Coloring and Odd Cycle Transversal in G can be solved in time 2^𝒪(dk)· n^𝒪(1) and space 𝒪(dk^2 log n).
The remainder of this section is devoted to the proof of <ref>. We assume that the reader is familiar with the approach presented in <ref>, as we will build upon it.
Let now H and R be fixed and let W^* = max_v ∈ Vω(v) be the maximum weight in ω.
Now we show how to adapt our techniques from <ref> to the R-Weighted #-List-H-Homomorphism problem.
We assume that the graph G is provided with a (d, k)-model (T, ℳ, , ).
There are two main changes: first, we adapt the dynamic programming formulas and second, we show how to apply <ref> to polynomials in two variables that will appear in the proof.
We start with dynamic programming.
Let a be a node of T.
For Maximum Independent Set, our guess S was the set of labels occurring in an independent set of the current subgraph G_a.
Now, instead, we guess a subset S of 𝐒𝐭𝐚𝐭𝐞𝐬 ≔{ (𝐡,i) |𝐡∈ V(H), i ∈ [k]}.
For each label i ∈ [k], the set S is intended to reflect to which vertices of H the set V_a^i is mapped by a homomorphism.
The set 𝐒𝐭𝐚𝐭𝐞𝐬 has size |V(H)| · k, i.e., 𝒪(k) for fixed H.
So as in <ref>, there are still 2^𝒪(k) possibilities for S and this will be the reason for the running time of 2^𝒪(d k)· n^𝒪(1) as in that section.
As before, we then employ guesses of the form α S →{1_=, 2_≥} and β S →{1_≥, 1_=} to compute the polynomials reflecting the number of H-homomorphisms of certain cardinality via inclusion-exclusion.
Further, we need to forbid that edges of G are mapped to non-edges of H.
For this, the auxiliary graph F^a, β again has vertex set β^-1(1_=) but now there is an edge between two vertices (𝐡,i) and (𝐡',j) whenever M_a[i, j] = 1 and 𝐡 𝐡' is not an edge of H.
Then, if a homomorphism maps a vertex v_1 with label i to 𝐡 and a vertex v_2 with label j to 𝐡', our approach from <ref> ensures that v_1 and v_2 come from the same child of a so that no edge between v_1 and v_2 is created at a.
In <ref>, all polynomials had only one variable x whose degree reflected the size of an independent set.
Here, additionally to cardinality we are interested in the weight of vertices mapped to H.
So instead of univariate polynomials from [x], we use polynomials in two variables x and y where the degree of y keeps track of the weight.
The weights of partial solutions are initialized in the leaves of the tree-model, there we also take care of lists L: the polynomial for a guess S and a leaf v is given by x · y^ω(v) provided S = {(𝐡,i)} for some 𝐡∈ L(v) and i = (v), and otherwise this polynomial is the zero polynomial.
With these adaptations in hand, by a straightforward implementation of the recursion we can already obtain a 2^𝒪(d k)· n^𝒪(1)-time algorithm that uses only polynomial space and computes the polynomial
Q(x, y) = ∑_p ∈ [n]_0, w ∈ [n W^*]_0 q_p, w x^p y^w
where q_p, w is the number of list H-homomorphisms of G
of cardinality p and weight w.
The answer to the problem is then the value q_C, W.
To obtain logarithmic dependency on the graph size in space complexity, in <ref> we relied on <ref>.
However, <ref> concerns univariate polynomials, while Q has two variables.
We now explain how to model Q as a univariate polynomial P ∈[t] in order to apply the theorem.
Let
P(t) = ∑_j_1 ∈ [n]_0, j_2 ∈ [nW^*]_0 q_j_1, j_2 t^(j_1(nW^* + 1) + j_2).
First, observe that j_1 and j_2 form a base-(nW^* + 1) representation of the degree of the corresponding monomial.
So the coefficient standing by t^(C(nW^* + 1) + W) in P is exactly q_C, W, i.e., the value we seek.
Further, it holds that
P(t) = ∑_j_1 ∈ [n]_0, j_2 ∈ [n W^*]_0 q_j_1, j_2 (t^(n W^* + 1))^j_1 t^j_2,
so evaluating P at some value s modulo a prime number p is equivalent to computing the value Q(s^(n W^* + 1), s) p.
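A tiny numeric check of this packing (with hypothetical toy values in place of n and W^*) may help: the coefficient of x^c y^w in Q reappears as the coefficient of t^(c(nW^*+1)+w) in P, and evaluating P at s is the same as evaluating Q at (s^(nW^*+1), s).

```python
# Toy check of the base-(nW+1) packing of a bivariate polynomial Q(x, y) into a
# univariate P(t); the sizes n, W and the coefficients of Q are hypothetical.
n, W = 3, 2
base = n * W + 1
Q = {(1, 2): 5, (2, 0): 7, (0, 4): 1}                 # coefficients q_{c,w}
P = {c * base + w: q for (c, w), q in Q.items()}      # packed coefficients
s = 11
lhs = sum(q * s ** e for e, q in P.items())                      # P(s)
rhs = sum(q * (s ** base) ** c * s ** w for (c, w), q in Q.items())  # Q(s^base, s)
assert lhs == rhs
```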
It remains to choose suitable values to apply <ref>.
The degree of P is bounded by 𝒪(n^2 W^*).
The number of H-homomorphisms of G, and hence each coefficient of P as well, is bounded by |V(H)|^n.
Since |V(H)| is a problem-specific constant, there is a value n' of magnitude 𝒪(n^2 W^*) satisfying the prerequisites of <ref>.
Then for a prime number p ≤ 2n'+2, any value from _p is 𝒪(log n + log W^*) bits long.
Now to compute the value Q(s^n W^* + 1, s) p for some s ∈_p, we proceed similarly to <ref>: during the recursion, instead of storing all coefficients of the polynomials, as a partial result we only store the current result of the evaluation at x = s^n W^* + 1 and y = s modulo p.
Let us now summarize the time and space complexity of this evaluation similarly to <ref>.
The depth of T is d and per node of T, there are at most 4 + |States| recursive calls where |States| reflects that the transformation from 1_≥ to 2_≥ is carried out for every element of a guess S ⊆States (recall the tables T(·, c=0, ·), …, T(·, c=|α^-1(2_≥)|, ·) in <ref>).
Due to |States| = k · |V(H)|, the recursion depth is then 𝒪(kd).
The number of possible guesses S as well as reasonable α and β is bounded by 2^𝒪(States) = 2^𝒪(k).
Also, for a node a and a reasonable β, the auxiliary graph F^a, β has at most |States| = 𝒪(k) vertices.
Recall that in <ref>, at some point of the computation we work with a polynomial using a variable z.
For this variable, only coefficients at monomials z^i for i ≤ |V(F^a, β)| are relevant.
Hence, for each query we need to keep only 𝒪(k) coefficients from _p and such a coefficient uses (log n + log W^*) bits.
The addition and multiplication of two such coefficients can be done in time 𝒪(log n + log W^*).
These properties imply that following the argument from <ref> we obtain the running time of 2^𝒪(dk)· n^𝒪(1)·log W^* and space complexity of 𝒪(kd) · k ·𝒪(log n + log W^*) = 𝒪(k^2 d (log n + log W^*)).
With that, <ref> implies that the coefficients of P, and in particular the sought value q_C, W, can be reconstructed in time 2^𝒪(dk)· n^𝒪(1)·log W^* and using 𝒪(k^2 d (log n + log W^*)) space. This concludes the proof of <ref>.
We remark that the result of <ref> can be combined with the Cut&Count technique of Cygan et al. <cit.> in order to incorporate also connectivity constraints to List H-Homomorphism and solve problems like Connected Vertex Cover and Connected Odd Cycle Transversal. In essence Cut&Count provides a randomized reduction from List H-Homomorphism with connectivity constraints to #-List H'-Homomorphism for a new pattern graph H' with at most twice as many vertices as H. Since in the reduction only the parity of the number of solutions is preserved, in Cut&Count one typically uses the Isolation Lemma <cit.> to sample a weight function so that with high probability, there is exactly one (and thus, an odd number) solution of minimum possible weight; then counting the number of solutions mod 2 for all possible weights reveals the existence of a solution. Note here that the algorithm of <ref> is already prepared to count weighted solutions. In our setting, the usage of Isolation Lemma necessitates allowing randomization and adds an (nlog n) factor to the space complexity for storing the sampled weights. We leave the details to the reader.
§.§ Max-Cut
In the classical Max Cut problem, we are given a graph G and the task is to output max_X⊆ V(G)E(X, V(G) ∖ X).
Towards solving the problem, let us fix a graph G and a (d,k)-tree model (T, ℳ, , ) of G.
Recall that for every node a of T, i∈ [k] and X⊆ V_a, we denote by X_a(i) the set of vertices in X labeled i at a, i.e., X∩λ_a^-1(i).
Given a child b of a, we let V_ab=V_b and we denote by V_ab(i) the set of vertices in V_b labeled i at a, i.e., V_b ∩ V_a(i).
By X_ab(i) we denote the set X∩ V_ab(i).
Given c∈{a,ab}, we define the c-signature of X⊆ V_c — denoted by _c(X) — as the vector (X_c(1),X_c(2),…,X_c(k)).
We let (c) be the set of c-signatures of all the subsets of V_c, i.e., (c) ≔{_c(X)| X⊆ V_c}.
Observe that |(c)| ∈ n^𝒪(k) holds.
Also, for the children b_1,…,b_t of a, we define (ab_1,…,ab_t) as the set of all tuples (s^1,…,s^t) with s^i∈(ab_i) for each i∈ [t].
Given s∈(c), we define f_c(s) as the maximum of E(X,V_c∖X) over all the subsets X⊆ V_c with c-signature s.
To solve Max Cut on G, it suffices to compute max_s∈(r) f_r(s) where r is the root of T.
Let b be a child of a. We start explaining how to compute f_ab(s) by making at most n^(k) calls to the function f_b.
Given s'∈(b), we define ρ_ab(s') as the vector s = (s_1, …, s_k)∈(ab) such that, for each i∈ [k], we have s_i=∑_j∈ρ_ab^-1(i) s_j'.
Observe that for every X⊆ V_b, we have _ab(X)= ρ_ab(_b(X)). Consequently, for every s∈(ab), f_ab(s) is the maximum of f_b(s') over the b-signatures s'∈(b) such that ρ_ab(s')=s.
It follows that we can compute f_ab(s) with at most n^(k) calls to the function f_b.
Given a node a of T with a child b and s∈(ab), we can compute f_ab in space 𝒪(klog(n)) and time n^𝒪(k) with n^𝒪(k) oracle calls to the function f_b.
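In pseudocode-like Python (with hypothetical interfaces for the set of child signatures and the oracle f_b), this computation amounts to maximizing f_b over all child signatures that collapse to the requested signature under ρ_ab; labels are indexed from 0 here.

```python
def f_ab(s, child_signatures, f_b, rho_ab, k):
    # rho_ab[j] is the label that label j is renamed to at the parent; a child
    # signature collapses to the parent signature whose i-th entry sums the
    # entries of all labels renamed to i.
    best = None
    for s_child in child_signatures:          # at most n^O(k) candidates
        collapsed = tuple(sum(s_child[j] for j in range(k) if rho_ab[j] == i)
                          for i in range(k))
        if collapsed == tuple(s):
            value = f_b(s_child)              # oracle call
            best = value if best is None else max(best, value)
    return best
```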
In order to simplify forthcoming statements, we fix a node a of T with children b_1,…, b_t.
Now, we explain how to compute f_a(s) by making at most n^(k) calls to the functions f_ab_1, …, f_ab_t.
The first step is to express f_a(s) in terms of f_ab_1, …, f_ab_t.
We first describe E(X,V_a∖ X) in terms of E(X∩ V_b_i,V_b_i∖ X).
We denote by E(V_b_1,…,V_b_t) the set of edges of G[V_a] whose endpoints lie in different V_b_i's, i.e., E(G[V_a]) ∖ (E(G[V_b_1]) ∪…∪ E(G[V_b_t])).
Given X⊆ V_a, we denote by E_a(X) the intersection of E(X,V_a∖ X) and E(V_b_1,…,V_b_t).
In simple words, E_a(X) is the set of all cut-edges (i.e., between X and V_a ∖ X) running between distinct children of a.
For i,j∈ [k], we denote by E_a(X,i,j) the subset of E_a(X) consisting of the edges whose endpoints are labeled i and j.
We capture the size of E_a(X,i,j) with the following notion. For every c∈{a,ab_1,…,ab_t}, s∈(c) and i,j∈ [k], we define
_c(s,i,j)
s_i· (V_c(j)-s_j) + s_j· (V_c(i)-s_i) if i≠ j,
s_i · (V_c(i)-s_i) otherwise.
It is not hard to check that, for every subset X⊆ V_a with a-signature s, _a(s,i,j) is the size of the set _a(X,i,j) of pairs of distinct vertices in V_a labeled i and j at a such that exactly one of them is in X.
Observe that when M_a[i,j]=1, then E_a(X,i,j) is the number of pairs in _a(X,i,j) whose endpoints belong to different sets among V_b_1,…,V_b_t.
Moreover, given a child b of a, the number of pairs in _a(X,i,j) whose both endpoints belong to V_b is exactly _ab(_ab(X),i,j).
Thus when M_a[i,j]=1, we have
E_a(X,i,j) = _a(_a(X),i,j) - ∑_i∈[t]_ab_i(_ab_i(X),i,j) .
We capture the size of E_a(X) with the following notion. For every c∈{a,ab_1,…,ab_t}, s∈(c) and (k× k)-matrix M, we define
m_c(s,M) ≔∑_i,j∈ [k], i≤ j, M[i,j]=1 _c(s,i,j).
Note that E_a(X) = ∑_i,j∈ [k], i≤ j, M_a[i,j]=1 E_a(X,i,j).
Hence, by Equation <ref>, we deduce that E_a(X)=m_a(_a(X),M_a) - ∑_i∈ [t] m_ab_i(_ab_i(X),M_a).
Since E(X,V_a∖ X) is the disjoint union of E_a(X) and the sets E(X∩ V_b_1,V_b_1∖ X),…,E(X∩ V_b_t,V_b_t∖ X) , we deduce:
For every X⊆ V_a we have
E(X, V_a ∖ X) = m_a(_a(X),M_a) + ∑_i=1^t(E(X_i∩ V_b_i, V_b_i∖ X_i) - m_ab_i(_ab_i(X_i),M_a)) .
We are ready to express f_a(s) in terms of f_ab_1,…,f_ab_t and m_a,m_ab_1,…,m_ab_t.
For every s∈(a), we have
f_a(s) = m_a(s,M_a) + max_(s^1,…,s^t)∈(ab_1,…,ab_t), s=s^1+…+s^t( ∑_i=1^t (f_ab_i(s^i) - m_ab_i(s^i,M_a)) ) .
Let s∈(a).
By <Ref> we know that
f_a(s) = max_X⊆ V_a, _a(X)=s E(X, V_a ∖ X) = m_a(s,M_a) + max_X⊆ V_a, _a(X)=s( ∑_i=1^t (E(X_i, V_b_i∖ X_i) - m_ab_i(_ab_i(X_i),M_a)) )
where X_i is a shorthand for X∩ V_b_i.
Observe that for every X⊆ V_a, we have _a(X)=s iff s=∑_i=1^t _ab_i(X∩ V_b_i).
Since f_ab_i(s^i) is the maximum E(X_i, V_b_i∖ X_i) over all X_i⊆ V_b_i with ab_i-signature s^i while m_ab_i(_ab_i(X_i),M_a) only depends on s^i and not on the concrete choice of X_i, we conclude that f_a(s) equals m_a(s,M_a) plus
max_(s^1,…,s^t)∈(ab_1,…,ab_t)
s=s^1+…+s^t(∑_i=1^t f_ab_i(s^i) - m_ab_i(s^i,M_a)).
To compute f_a(s) we use a twist of Kane's algorithm <cit.> for solving the k-dimensional Unary Subset Sum in Logspace.
The twist relies on using a polynomial, slightly different from the original work of Kane <cit.>, defined in the following lemma.
Given a vector s=(s_1,…,s_k)∈^k and B∈, we denote by s | B the vector (s_1,…,s_k,B).
We denote by C the number 2n^2+1 and, given a vector s'∈^k+1, we denote by C(s') the sum ∑_i∈ [k+1] C^i-1 s'_i.
Let s∈(a) and B∈ [E(G[V_a])].
Let A(s,B) be the number of tuples (s^1,…,s^t)∈(ab_1,…,ab_t) such that s=s^1+… + s^t and
B - m_a(s,M_a) = ∑_j=1^t f_ab_j(s^j) - m_ab_j(s^j,M_a).
For every prime number p > C^k+1 + 1, we have -A(s,B) ≡ P_a,s(B,p) (mod p) where
P_a,s(B,p) ∑_x=1^p-1 x^C(s | B - m_a(s,M_a))( ∏_j=1^t(∑_s^j∈(ab_j) x^-C(s^j | f_ab_j(s^j) - m_ab_j(s^j,M_a)))).
First, note that
x^C(s | B - m_a(s,M_a))( ∏_j=1^t(∑_s^j∈(ab_j) x^-C(s^j | f_ab_j(s^j) - m_ab_j(s^j,M_a))))
= ∑_s^1,…,s^t∈(b_1,…,b_t) x^α(s^1,…,s^t)
where
α(s^1,…,s^t)= C(s | B - m_a(s,M_a)) - ∑_j = 1^t(C(s^j | f_ab_j(s^j) - m_ab_j(s^j,M_a))).
As in <cit.>, the idea of this proof is to change the order of summation, show that the terms where α(s^1,…,s^t)≠ 0 cancel out, and prove that the sum of the terms where α(s^1,…,s^t)= 0 is -A(s,B).
The latter is implied by the following claim.
For every (s^1,…,s^t)∈(ab_1,…,ab_t), the absolute value of α(s^1,…,s^t) is at most C^k+1.
Moreover, α(s^1,…,s^t)=0 iff s=s^1+…+s^t and B - m_a(s,M_a) = ∑_i=1^t f_ab_i(s^i) - m_ab_i(s^i,M_a).
By definition of C(·|·), we have
α(s^1,…,s^t) = (∑_i=1^k C^i-1( s_i - ∑_j=1^t s_i^j )) + C^k+1(B - m_a(s,M_a) - ∑_j=1^t( f_ab_j(s^j) - m_ab_j(s^j,M_a)) ).
I.e.,
α(s^1,…,s^t) = ∑_i=1^k+1 C^i-1 e_i
with
e_i =
s_i - ∑_j=1^t s_i^j if 1 ≤ i ≤ k
B - m_a(s,M_a) - ∑_j=1^t(f_ab_j(s^j) - m_ab_j(s^j,M_a)) if i = k + 1
.
We claim that the absolute value of each e_i is at most C-1.
For every i∈ [k], by definition, s_i and ∑_j=1^t s_i^j are at least 0 and at most V_a(i)≤ n.
Hence, for each i∈ [k] the absolute value of e_i is at most n < C.
Both B and ∑_j=1^t f_ab_j(s^j) are upper bounded by E(G[V_a])≤ n^2.
Moreover, from the definition of the functions m_a,m_ab_1,…,m_ab_t, we deduce that both m_a(s,M_a) and ∑_j=1^t m_ab_j(s^j,M_a) are upper bounded by V_a^2 ≤ n^2.
It follows that the absolute value of e_k+1 is at most 2n^2 < C.
Thus, the absolute value of α(s^1,…,s^t) is at most ∑_i=1^k+1 C^i-1 e_i ≤∑_i=1^k+1 C^i-1 (C - 1) = C^k+1 - 1.
It remains to prove that α(s^1,…,s^t)=0 iff e_j=0 for every j∈ [k+1].
One direction is trivial.
For the other direction, observe that if e_k+1≠ 0, then the absolute value of C^k e_k+1 is at least C^k.
But the absolute value of α(s^1,…,s^t) -C^k e_k+1 = ∑_i=1^k C^i-1 e_i is at most ∑_i=1^k C^i-1 (C-1) = C^k-1.
Hence, if e_k+1≠ 0, then α(s^1,…,s^t)≠ 0.
By induction, it follows that α(s^1,…,s^t)=0 is equivalent to e_i=0 for every i∈ [k+1].
By using <Ref> on P_a,s(B,p) and interchanging the sums, we deduce that
P_a,s(B,p)= ∑_s^1,…,s^t∈(b_1,…,b_t)(∑_x=1^p-1 x^α(s^1,…,s^t)).
It was proven in the proof of Lemma 1 in <cit.> that
∑_x=1^p-1 x^ℓ≡ -1 (mod p) if ℓ≡ 0 (mod p-1), and ∑_x=1^p-1 x^ℓ≡ 0 (mod p) otherwise.
We infer from the above formula that
P_a,s(B,p) ≡∑_(s^1,…,s^t)∈(ab_1,…,ab_t), α(s^1,…,s^t)≡ 0 (mod p - 1) (-1) (mod p).
Observe that, for every (s^1,…,s^t)∈(ab_1,…,ab_t), we have α(s^1,…,s^t)≡ 0 (mod p - 1) iff α(s^1,…,s^t)= 0 because C^k+1 < p - 1 and the absolute value of α(s^1,…,s^t) is at most C^k+1 by <Ref>.
From the equivalence given by <Ref>, we deduce that there are A(s,B) tuples (s^1,…,s^t)∈(ab_1,…,ab_t) such that α(s^1,…,s^t)= 0, i.e.,
P_a,s(B,p) p = -A(s,B) p.
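The following toy computation (hypothetical small data) isolates the counting mechanism of the lemma: each list plays the role of one child, x^B weights the target, each factor ∑_v x^{-v} enumerates the child's contributions, and summing over x ∈{1,…,p-1} leaves, modulo p, exactly -1 for every tuple hitting the target, provided p-1 exceeds every possible deviation from B.

```python
from itertools import product

lists = [[0, 2, 3], [1, 4], [0, 5]]       # hypothetical "per-child" value lists
B, p = 7, 17                              # prime with p - 1 larger than any |B - sum|

total = 0
for x in range(1, p):
    term = pow(x, B, p)
    x_inv = pow(x, -1, p)
    for L in lists:
        term = term * sum(pow(x_inv, v, p) for v in L) % p
    total = (total + term) % p

brute = sum(1 for picks in product(*lists) if sum(picks) == B)
assert total == (-brute) % p              # here brute == 1, so total == p - 1
```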
With this, we can prove Theorem <ref> via <Ref>. As a subroutine, we use the function (p), which computes the smallest prime larger than p.
Let s∈(a).
<Ref> computes f_a(s) in space 𝒪(klog(n)) and time n^𝒪(k) with n^𝒪(k) oracle calls to the functions f_ab_1,…,f_ab_t.
The correctness of <Ref> follows from the following claims.
Let B be an integer between 0 and E(G[V_a]), and let A(s,B) be the integer defined in <Ref>.
If the algorithm returns B, then A(s,B)≠ 0.
Suppose there exists a prime number p>C^k+1 such that P_a,s(B,p)≢0 (mod p).
As P_a,s(B,p)≡ A(s,B) (mod p) by <Ref>, we have A(s,B)≠ 0 and thus there exists (s^1,…,s^t)∈(ab_1,…,ab_t) such that s=s^1+… + s^t and
B - m_a(s,M_a) = ∑_i=1^t f_ab_i(s^i) - m_ab_i(s^i,M_a).
From <Ref>, we deduce that there exists X⊆ V_a such that _a(X)=s^1+…+s^t=s and E(X,V_a∖ X)=B.
If P_a,s(B,p)≡ 0 (mod p) for every value taken by the variable p, then A(s,B)=0.
Let d be the product of the values taken by p.
Then d is a product of distinct primes p such that P_a,s(B,p)≡ 0 (mod p).
By <Ref>, we have P_a,s(B,p)≡ A(s,B) (mod p) for every prime p>C^k+1.
Therefore, A(s,B) is a multiple of d.
Observe that d>2^c and c> nklog(n).
Hence, we have d> n^nk.
Since A(s,B) corresponds to the number of tuples (s^1,…,s^t)∈(ab_1,…,ab_t) that satisfy some properties, we have A(s,B) ≤∏_i=1^t(ab_i)≤ n^nk.
As d divides A(s,B) and d>A(s,B), we conclude that A(s,B)=0.
From Claims <ref> and <ref>, we infer that Algorithm <ref> returns B where B is the maximum between 0 and E(G[V_a]) such that A(s,B)≠ 0.
By definition of A(s,B) and Lemma <ref>, we conclude that f_a(s)=B.
Complexity.
We adapt the arguments used in <cit.> to prove the complexity of our algorithm.
* First, the variable p is never more than n^𝒪(k).
Indeed, standard facts about prime numbers imply that there are at least nklog(n) prime numbers between C^k+1 and (C^k+1+nklog(n))^𝒪(1)=n^𝒪(k).
Each of these primes causes c to increase by at least 1.
Thus, each value of p can be encoded with 𝒪(klog(n)) bits.
* Secondly, observe that we can compute P_a,s(B,p) p in space 𝒪(klog(n)).
Recall that
P_a,s(B,p) ∑_x=1^p-1 x^C(s | B - m_a(s,M_a))( ∏_i=1^t(∑_s^i∈(b_i) x^-C(s^i | f_ab_i(s^i) - m_ab_i(s^i,M_a)))).
To compute P_a,s(B,p), it is sufficient to keep track of the current value of x, the current running total (modulo p) and enough information to compute the next term, i.e. x^C(s | B - m_a(s,M_a)) or x^-C(s^i | f_ab_i(s^i) - m_ab_i(s^i,M_a)).
For that, we need only the current values of i (at most log n bits) and s^i (at most k log n bits) and the current running total to compute C(s | B - m_a(s,M_a)) (or C(s^i | f_ab_i(s^i) - m_ab_i(s^i,M_a))) modulo p.
* Finally, primality testing of numbers between C^k+1 and n^𝒪(k) can be done in space 𝒪(klog(n)) via n^𝒪(k) divisions, and thus each call to (·) can be computed in n^𝒪(k) time and 𝒪(klog(n)) space.
We are now ready to prove that one can solve Max-Cut in time n^𝒪(dk) using 𝒪(dklog(n)) space.
*
Given the root r of T, we solve Max-Cut by computing max_s∈(r)f_r(s).
For every internal node of a of T with children b_1,…,b_t, we use <Ref> to compute each call of f_a from calls to f_ab_1,…,f_ab_t. For every internal node a with child b, we use <Ref> to compute each call of f_ab from calls to f_b.
Finally, for every leaf ℓ of T, we simply have f_ℓ(s)=0 for every s∈(ℓ) because V_ℓ is a singleton.
First, we prove the running time.
By <Ref>, for each node a with children b_1,…,b_t and s∈(a), we compute f_a(s) by calling at most n^𝒪(k) times the functions f_ab_1,…,f_ab_t.
By <Ref>, for each node b with parent a and s∈(ab), we compute f_ab(s) by calling at most n^𝒪(k) times the function f_b.
Consequently, we call each of these functions at most n^𝒪(dk) times in total.
Since T has 𝒪(n) nodes, we conclude that computing max_s∈(r)f_r(s) this way takes n^𝒪(dk) time.
Finally, observe that the stack storing the calls to these functions is of size at most 𝒪(d).
Our algorithm solves Max Cut in space 𝒪(dklog(n)).
§.§ Dominating Set
In this section we prove <ref>, which we recall for convenience.
*
The remainder of this section is devoted to the proof of <ref>.
Note that Dominating Set cannot be directly stated in terms of H-homomorphisms for roughly the following reason.
For H-homomorphisms, the constraints are universal: every neighbor of a vertex with a certain state must have one of allowed states.
For Dominating Set, there is an existential constraint: a vertex in state “dominated” must have at least one neighbor in the dominating set.
Also, the state of a vertex might change from “undominated” to “dominated” during the algorithm.
The techniques we used for H-homomorphisms cannot capture such properties.
The problem occurs for other parameters as well.
One approach that circumvents the issue is informally called inclusion-exclusion branching, and was used by Pilipczuk and Wrochna <cit.> in the context of Dominating Set on graphs of low treedepth.
Their dynamic programming uses the states Taken (i.e., in a dominating set), Allowed (i.e., possibly dominated), and Forbidden (i.e., not dominated).
These states reflect that we are interested in vertex partitions into three groups such that there are no edges between Taken vertices and Forbidden vertices; these are constraints that can be modelled using H-homomorphisms for a three-vertex pattern graph H.
Crucially, for a single vertex v, if we fix the states of the remaining vertices, the number of partitions in which v is dominated is given by the number of partitions where v is possibly dominated minus the number of partitions where it is not dominated, i.e., informally “Dominated = Allowed - Forbidden”.
We will come back to this state transformation later to provide more details. We also remark that the transformed formulation of dynamic programming is exactly what one gets by applying the zeta-transform to the standard dynamic programming for Dominating Set.
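The identity underlying this transformation can be checked by brute force on a toy graph: for a fixed set T, summing (-1)^|F| over all partitions (T, F, A) with no edges between T and F gives 1 if T is a dominating set and 0 otherwise, since every vertex not dominated by T can be placed in either F or A. A minimal Python check (on a hypothetical path graph) is given below.

```python
from itertools import combinations

V = range(4)
E = {(0, 1), (1, 2), (2, 3)}                     # a path on four vertices (toy graph)
adj = {v: {u for e in E for u in e if v in e and u != v} for v in V}

def closed_nbhd(T):
    return set(T) | {u for v in T for u in adj[v]}

for r in range(len(V) + 1):
    for T in combinations(V, r):
        # F must avoid T and N(T), so it ranges over subsets of the undominated vertices.
        undominated = set(V) - closed_nbhd(T)
        signed = sum((-1) ** size
                     for size in range(len(undominated) + 1)
                     for _ in combinations(sorted(undominated), size))
        assert signed == (1 if not undominated else 0)
```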
For technical reasons explained later, our algorithm uses the classic Isolation Lemma:
Let ℱ⊆ 2^[n] be a non-empty set family over the universe [n]. For each i ∈ [n], choose a weight ω(i) ∈ [2n] uniformly and independently at random. Then with probability at least 1/2 there exists a unique set of minimum weight in ℱ.
Consequently,
we pick a weight function ω that assigns every vertex a weight from 1, …, 2n uniformly and independently at random. Storing ω takes 𝒪(n log n) space.
By <ref>, with probability at least 1/2 among dominating sets with the smallest possible cardinality there will be a unique one of minimum possible weight.
The remainder of the algorithm uses only 𝒪(dk^2 log n) space.
To implement the above idea, we let the graph H have vertex set {𝐓, 𝐀, 𝐅}
standing for Taken, Allowed, and Forbidden.
This graph H has a loop at each vertex as well as the edges 𝐓𝐀 and 𝐀𝐅. Further, let R {𝐓}.
Following our approach for H-homomorphisms,
for every set S ⊆𝐒𝐭𝐚𝐭𝐞𝐬 with States ≔{(𝐓, 1), (𝐅, 1), …, (𝐓, k), (𝐅, k)}, every cardinality c ∈ [n]_0, and every weight w ∈ [2 n^2]_0, in time 2^𝒪(d k)· n^𝒪(1) and space 𝒪(dk^2 log n) (recall that here for the maximum weight W^* we have W^* ≤ 2n) we can compute the value a_S, c, w being the number of ordered partitions (T, F, A) of V(G) satisfying the following properties:
* there are no edges between T and F;
* |T| = c and ω(T) = w; and
* for every i ∈ [k] and I ∈{T, F}, we have (𝐈, i) ∈ S iff I∩ V(i) ≠∅.
Note that we do not care whether vertices of some label i are mapped to A or not.
After that, we aim to obtain the number of dominating sets of cardinality c and weight w from values a_S, c, w.
For this we need to transform the “states” Allowed and Forbidden into Dominated.
Above we have explained how this transformation works if we know the state of a single vertex.
However, now the set S only captures for every label i, which states occur on the vertices of label i.
First, the vertices of this label might be mapped to different vertices of H.
And even if we take the partitions where all vertices of label i are possibly dominated and subtract the partitions where all these vertices are not dominated, then we obtain the partitions where at least one vertex with label i is dominated.
However, our goal is that all vertices of label i are dominated.
So
the Dominated = Allowed - Forbidden equality is not directly applicable here.
Recently, Hegerfeld and Kratsch <cit.> showed that when working with label sets, this equality is in some sense still true modulo 2.
On a high level, they show that if we fix a part T of a partition satisfying the above properties,
then any undominated vertex might be put to any of the sides A and F.
Thus, if T is not a dominating set of G, then there is an even number of such partitions and they cancel out modulo 2.
We can apply the same transformation to obtain from a_S,c,w's the number of dominating sets of size c and weight w modulo 2.
The Isolation Lemma implies that with probability at least 1/2, for some w this number is non-zero if a dominating set of size c exists.
Now we follow their ideas to formalize this approach and conclude the construction of the algorithm.
For i ∈ [k]_0 and S ⊆{(𝐓, 1), (𝐅, 1), …, (𝐓, i), (𝐅, i)} we define the value D^c,w_i(S) as the number of ordered partitions (T, F, X) of V(G) with the following properties:
* there are no edges between T and F;
* |T| = c and ω(T) = w;
* for every j ∈ [i] and I ∈{T, F}, we have (𝐈, j) ∈ S iff I∩ V(j) ≠∅; and
* (V(i+1) ∪…∪ V(k)) ∖T is dominated by T.
The following observation is obvious.
For every S ⊆States, we have D^c, w_k(S) = a_S, c, w.
Next, we observe that it suffices to compute values D^c,w_i(S) for i=0 and S=∅.
D^c,w_0(∅) is the number of dominating sets of size c and total weight w.
Consider a partition (T, F, X) counted in D^c,w_0(∅).
Recall that V(1) ∪…∪ V(k) = V(G).
So the fourth property implies that V(G) ∖T is dominated by T, i.e., T is a dominating set of G.
The first property then implies that F is empty and X = V(G) ∖T.
Finally, by definition of D^c,w_0(∅), we know that the size of T is c and its weight is w.
On the other hand, every dominating set T of cardinality c and weight w defines a partition (T, ∅, V(G) ∖T) counted in D^c,w_0(∅).
Finally, we prove that modulo 2, D_i^c,w(S) can be computed from D_i+1^c,w(S).
For every i ∈ [k - 1]_0 and every S ⊆{(𝐓, 1), (𝐅, 1), …, (𝐓, i), (𝐅, i)}, it holds that
D^c,w_i(S) ≡∑_B ⊆{(𝐓, i+1), (𝐅, i+1)} D^c,w_i+1(S ∪ B) (mod 2).
We follow the proof idea of Hegerfeld and Kratsch.
For B ⊆{(𝐓, i+1), (𝐅, i+1)}, let _i+1(S ∪ B) be the set of partitions counted in D^c,w_i+1(S ∪ B) (see the definition above).
Note that we have _i+1(S ∪ B_1) ∩_i+1(S ∪ B_2) = ∅ for any B_1 ≠ B_2 ⊆{(𝐓, i+1), (𝐅, i+1)}.
So
∑_B ⊆{(𝐓, i+1), (𝐅 , i+1)} D^c,w_i+1(S ∪ B) =
|⋃_B ⊆{(𝐓, i+1), (𝐅, i+1)}_i+1(S ∪ B)|.
Let ℒ be the set of partitions counted in D^c,w_i(S) and let
ℛ = ∪_B ⊆{(𝐓, i+1), (𝐅, i+1)}_i+1(S ∪ B).
The goal is to prove |ℒ| ≡ |ℛ| (mod 2).
By definition of these values we have ℒ⊆ℛ.
We claim that the size of ℛ∖ℒ is even.
To see this, consider some fixed partition (T, F, X) ∈ℛ∖ℒ.
This is exactly the case if the following properties hold:
* there are no edges between T and F;
* |T| = c and ω(T) = w;
* for every j ∈ [i] and I ∈{T, F}, we have (I, j) ∈ S iff I∩ V(j) ≠∅; and
* the set (V(i+2) ∪…∪ V(k)) ∖T is dominated by T while the set (V(i+1) ∪…∪ V(k)) ∖T is not dominated by T,
Let U = V(i+1) ∖ N[T].
The last property implies that U is non-empty.
Also let X' = X∖ V(i+1) and F' = F∖ V(i+1).
Observe that N(T) ∩ V(i+1) ⊆X due to the first property.
We claim that if we fix the first set T of the partition as well as the partition of V ∖ V(i+1) (by fixing X' and F'), then the extensions of (T, F', X') to a partition in ℛ∖ℒ are exactly the partitions of form
(T, F' ∪ (U ∖ U'), X' ∪ (N(T) ∩ V(i+1)) ∪ U')
for U' ⊆ U.
So informally speaking, if we fix T, X', F', every vertex of U can be put to either X or F thus giving rise to an even number 2^|U| of such extensions.
Now we prove this claim following the idea of Hegerfeld and Kratsch.
First, consider a partition of form (<ref>) for an arbitrary U' ⊆ U.
Since T is fixed and the partition on V ∖ V(i+1) is fixed as well, the last three properties defining ℛ∖ℒ trivially hold.
Next, due to F' ⊆F,
there are no edges between T and F'.
And since U ∖ U' ⊆ U is not dominated by T, there are no edges between T and U ∖ U' as well, so the first property holds too.
For the other direction, if we consider an extension (T, F, X) ∈ℛ∖ℒ of (T, F', X'), then by the first property we know that F∩ V(i+1) has no edges to T and hence, it is a subset of U.
So, for any fixed (T, F', X'), either there is no extension to a partition from ℛ∖ℒ at all or there are 2^|U| of them where U is a non-empty set.
So the size of ℛ∖ℒ is even and this concludes the proof.
The application of <ref> for i = 0, …, k-1 implies
D^c,w_0(∅) ≡ ∑_B_1 ⊆{(𝐓, 1), (𝐅, 1)} D^c,w_1(B_1) ≡
∑_B_1 ⊆{(𝐓, 1), (𝐅, 1)}∑_B_2 ⊆{(𝐓, 2), (𝐅, 2)} D^c,w_2(B_1 ∪ B_2) ≡
…
∑_B_1 ⊆{(𝐓, 1), (𝐅, 1)}∑_B_2 ⊆{(𝐓, 2), (𝐅, 2)}…∑_B_k ⊆{(𝐓, k), (𝐅, k)} D^c,w_k(B_1 ∪ B_2 …∪ B_k) ≡
∑_S ⊆{(𝐓, 1), (𝐅, 1), …, (𝐓, k), (𝐅, k)} D^c,w_k(S) (mod 2).
By <ref>, the parity of the number of dominating sets of size c and weight w can be expressed as
D^c,w_0(∅) ≡∑_S ⊆{(𝐓, 1), (𝐅, 1), …, (𝐓, k), (𝐅, k)} a_S, c, w (mod 2).
Recall that every a_S, c, w can be computed in time 2^𝒪(d k)· n^𝒪(1) and space 𝒪(dk^2 log n), hence this is also the case for their sum modulo 2.
We compute the value D^c, w_0(∅) for all cardinalities c ∈ [n]_0 and all weights w ∈ [2n^2]_0 and output the smallest value c such that for some w the value D^c, w_0(∅) is non-zero (or it outputs n if no such value exists).
Now we argue the correctness of our algorithm.
Let C denote the size of the smallest dominating set of G.
First, this implies that for any c < C and any w ∈ [2n^2]_0, the value D^c, w_0(∅) is zero.
And second, the Isolation Lemma (<ref>) implies that with probability at least 1/2, the weight function ω isolates the family of dominating sets of G of size C, i.e., there exists a weight W such that there is exactly one dominating set of size C and weight W, and therefore D^C,W_0(∅) = 1.
In this case, the algorithm outputs C.
So with probability at least 1/2 our algorithm outputs the minimum size of a dominating set of G.
The iteration over all c and w increases the space complexity by an additive 𝒪(log n) and it increases the running time by a factor of 𝒪(n^2).
Recall that in the beginning, to sample the weight function we have used space 𝒪(n log n).
So all in all, the running time of the algorithm is 2^𝒪(dk)· n^𝒪(1) and the space complexity is 𝒪(dk^2log n + nlog n). This concludes the proof of <ref>.
Note that in our algorithm, the only reason for super-logarithmic dependency on n in the space complexity is the need to sample and store a weight function in order to isolate a minimum-weight dominating set. We conjecture that this can be avoided and ask:
Is there an algorithm for Dominating Set of n-vertex graphs provided with a (d, k)-tree-model that runs in time 2^𝒪(k d)· n^𝒪(1) and uses (d+k)^𝒪(1)log n space?
§ THE LOWER BOUND
In this section, we prove <Ref>.
This lower bound is based on a reasonable conjecture on the complexity of the problem Longest Common Subsequence (LCS).
An instance of LCS is a tuple (N,t,Σ,s_1,…,s_r) where N and t are positive integers, Σ is an alphabet and s_1,…,s_r are r strings over Σ of length N.
The goal is to decide whether there exists a string s ∈Σ^t of length t appearing as a subsequence in each s_i.
There is a standard dynamic programming algorithm for LCS that has time and space complexity 𝒪(N^r).
From the point of view of parameterized complexity, LCS is
𝖶[p]-hard for every level p when parameterized by r <cit.>. It remains 𝖶[1]-hard when the size of the alphabet is constant <cit.>, and it is 𝖶[1]-complete when parameterized by
r+t <cit.>. Abboud et al. <cit.> proved that the existence of an algorithm with running time 𝒪(N^r-ε) for any ε > 0 would contradict the Strong Exponential-Time Hypothesis. As observed by Elberfeld et al. <cit.>, LCS parameterized by r is complete for the class 𝖷𝖭𝖫𝖯: parameterized problems solvable by a nondeterministic Turing machine using f(k)· n^𝒪(1) time and f(k)log n space, for a computable function f. See also <cit.> for further research on 𝖷𝖭𝖫𝖯 and related complexity classes. The only known progress on the space complexity is due to Barsky et al. with an algorithm running in 𝒪(N^r-1) space <cit.>.
This motivated Pilipczuk and Wrochna to formulate the following conjecture <cit.>.
There is no algorithm that solves the LCS problem in time M^f(r) and using f(r)M^(1) space for any computable function f, where M is the total bitsize of the instance and r is the number of input strings.
Note that in particular, the existence of an algorithm with time and space complexity as in <ref> implies the existence of such algorithms for all problems in the class 𝖷𝖭𝖫𝖯.
Our lower bound is based on the following stronger variant of <Ref>, in which we additionally assume that the sought substring is short.
For any unbounded and computable function δ, <Ref> holds even when t ≤δ(N).
Thus, we may rephrase <Ref> as follows.
Unless <Ref> fails, for any unbounded and computable function δ,
there is no algorithm that solves the Independent Set problem in graphs supplied with (d,k)-tree-models satisfying d≤δ(k) that would run in time 2^𝒪(k)· n^𝒪(1) and use n^𝒪(1) space.
The remainder of this section is devoted to the proof of <ref>. Not surprisingly, we provide a reduction from LCS to Independent Set on graphs provided with suitable tree-models.
Let (N,t,Σ,s_1,…,s_r) be an instance of LCS.
For the sake of clarity, we assume without loss of generality that N is a power of 2.
Indeed, we can always obtain an equivalent instance (2^⌈log N ⌉, t + t', Σ', s_1',…,s_r') where t'=2^⌈log N ⌉ - N, Σ' is obtained from Σ by adding a new letter, and each s_i' is obtained by appending this new letter t' times at the end of s_i.
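For concreteness, a minimal Python sketch of this padding step (with a hypothetical fresh symbol '#') could look as follows; appending the fresh letter as a common suffix raises the target length by t' without changing the answer.

```python
def pad_instance(strings, t):
    # Append a fresh letter so that the common string length becomes the next
    # power of two; the target length grows by the same amount.
    N = max(len(s) for s in strings)
    target = 1 << (N - 1).bit_length()    # smallest power of two that is >= N
    t_extra = target - N
    return [s + "#" * t_extra for s in strings], t + t_extra

print(pad_instance(["abcde", "edcba"], 2))   # (['abcde###', 'edcba###'], 5)
```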
For every I∈ [N], we denote the I-th letter of s_p by s_p[I].
In the following, we present our reduction from (N,t,Σ,s_1,…,s_r) to an equivalent instance of Independent Set consisting of a graph G with (r+t+N)^𝒪(1) vertices and a (d,k)-tree-model where d=𝒪(log t) and k=𝒪(r log N).
This implies <Ref> since for every unbounded and computable function δ there exists an unbounded and computable function δ' such that if t ≤δ'(N), then d ≤δ(k) (we explain this in more details at the end of this section).
In the intuitions along the construction, we denote by s^⋆ a potential common subsequence of s_1, …, s_r of length t.
The main idea is to use matchings to represent the binary encoding of the positions of the letters of s^⋆
in each string.
For every string s_p and q∈ [t], we define the selection gadget _p^q which contains, for every i∈ [log N], an edge called the i-edge of _p^q. One endpoint of this edge is called the 0-endpoint and the other is called the 1-endpoint; i.e., a selection gadget induces a matching on log N edges.
This results in the following natural bijection between [N] and the maximal independent sets of _p^q.
For every I∈ [N], we denote by _p^q|I the independent set that contains, for each i∈ [log N], the x-endpoint of the i-edge of _p^q where x is the value of the i-th bit of the binary representation of I-1 (we consider the first bit to be the most significant one and the log N-th one the least significant).
Then the vertices selected in _p^q encode the position of the q-th letter of s^⋆ in s_p.
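As a toy illustration (hypothetical helper, edges indexed from 1), the encoding reads off the bits of I-1, most significant bit first:

```python
def selected_endpoints(I, logN):
    # The maximal independent set encoding position I picks, for the i-th edge,
    # the endpoint named after the i-th bit of I - 1 (most significant bit first).
    bits = format(I - 1, f"0{logN}b")
    return [(i + 1, int(b)) for i, b in enumerate(bits)]

print(selected_endpoints(6, 3))   # position 6 of N = 8 -> [(1, 1), (2, 0), (3, 1)]
```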
We need to guarantee that the selected positions in the gadgets _p^1,…,_p^t are coherent, namely, for every q∈ [t], the position selected in _p^q is strictly smaller than the one selected in _p^q+1.
For this, we construct an inferiority gadget denoted by (p,q) for every string s_p and every q∈ [t-1].
The idea behind it is to ensure that the only possibility for an independent set to contain at least 3log N vertices from _p^q,_p^q+1, and their inferiority gadget, is the following: there exist I < J∈ [N] such that the independent set contains _p^q|I ∪_p^q+1|J.
The maximum solution size in the constructed instance of Independent Set—which is the sum of the independence number of each gadget—will guarantee that only such selections are possible.
<Ref> provides an example of the following construction.
The vertex set of (p,q) consists of the following vertices: for each i∈ [log N-1], there are two vertices v_i^0,p,q and v_i^1,p,q.
Moreover, for each i∈ [log N], there is a set V^01_i,p,q of log N -i +1 vertices (we drop p,q from the notation when they are clear from the context).
We now describe the edges incident to the inferiority gadget:
* For every i∈ [log N-1], v_i^0 and v_i^1 are adjacent and for each x∈{0,1}, v_i^x is adjacent to the (1-x)-endpoints of the i-edges from _p^q and _p^q+1.
* For every i∈[log N], all the vertices in V^01_i are adjacent to (1) the 1-endpoint of the i-edge from _p^q, (2) the 0-endpoint from the i-edge of _p^q+1, (3) all the vertices v_j^0,v_j^1 for every j≥ i and (4) all the vertices in V_j^01 for every j> i.
On a high level, an inferiority gadget reflects that for values I < J ∈ [N], if we go from high-order to low-order bits, then the binary encodings of I and J first contains the same bits and then there is an index, where I has a zero-bit and J has a one-bit.
If such a difference first occurs at some position ℓ∈ [log N], then the corresponding independent set first takes ℓ-1 vertices of the form v_ℓ'^0 or v_ℓ'^1 (for ℓ' < ℓ) and then takes log N - (ℓ - 1) vertices from V^01_ℓ – this results in log N vertices taken in the inferiority gadget. The following statement follows from this construction.
Let p∈ [r] and q∈ [t-1]. The independence number of (p,q) is log N and for every I, J∈ [N], we have I < J iff there exists a set of log N vertices S from (p,q) such that the union of S, _p^q|I and _p^q+1|J induces an independent set.
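The bit pattern exploited here is easy to check by brute force (hypothetical helper below): for I < J, scanning the encodings of I-1 and J-1 from the most significant bit, the first position where they differ carries a 0 in I-1 and a 1 in J-1.

```python
def first_difference(I, J, logN):
    # Positions are encoded by the binary representations of I - 1 and J - 1,
    # most significant bit first, as in the selection gadgets.
    bi, bj = format(I - 1, f"0{logN}b"), format(J - 1, f"0{logN}b")
    pos = next(i for i in range(logN) if bi[i] != bj[i])
    return pos, bi[pos], bj[pos]

assert all(first_difference(I, J, 4)[1:] == ("0", "1")
           for I in range(1, 17) for J in range(I + 1, 17))
```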
Next, we need to ensure that the t positions chosen in s_1, …, s_r indeed correspond to a common subsequence, i.e., for every q ∈ [t], the q-th chosen letter must be the same in every s_1, …, s_r.
For p∈ [r-1], let IJ_p denote the set of all ordered pairs (I,J)∈ [N]^2 such that the I-th letter of s_p and the J-th of s_p+1 are identical.
For each p∈ [r-1] and q∈ [t], we create the matching gadget (p,q) as follows:
* For every pair (I,J)∈IJ_p and for each p^⋆∈{p,p+1}, we create a copy _p^⋆^p,q,I,J of _p^⋆^q and for every ℓ∈[log N] and x∈{0,1}, we add an edge between the x-endpoint of the ℓ-edge of _p^⋆^q and the (1-x)-endpoint of the ℓ-edge of _p^⋆^p,q,I,J.
* For every pair (I,J)∈IJ_p, we add a new vertex v^q_p,I,J adjacent to (1) all the vertices from _p^p,q,I,J that are not in _p^p,q,I,J|I and (2) all the vertices from _p+1^p,q,I,J that are not in _p+1^p,q,I,J|J.
Finally, we turn {v^q_p,I,J| (I,J)∈IJ_p} into a clique.
Observe that, for each p^⋆∈{p,p+1}, an independent set S contains (IJ_p+1)log N vertices from _p^⋆^q and its copies _p^⋆^p,q,I,J if and only if there exists a value I∈ [N] such that S contains _p^⋆^q|I and _p^⋆^p,q,I,J|I for each copy.
This leads to the following observation.
Let p∈ [r-1] and q∈ [t]. The independence number of (p,q) is 1+2·IJ_p·log N and for every I,J∈ [N], we have (I,J)∈IJ_p iff there exists an independent set S of (p,q) with 1+2IJ_p·log N vertices such that the union of S, _p^q|I and _p+1^q|J is an independent set.
This concludes the construction of the graph G. See
<Ref> below for an overview.
We prove the correctness of the reduction in the following lemma, which follows mostly from Observations <ref> and <ref>.
There exists an integer 𝗀𝗈𝖺𝗅 such that G admits an independent set of size at least 𝗀𝗈𝖺𝗅 iff the strings s_1,…,s_r admit a common subsequence of length t.
Let 𝗀𝗈𝖺𝗅 = (rt + r(t-1) )log N + ∑_p∈ [r-1] t ( 1 + 2·IJ_p·log N ).
(⇒)
Assume that s_1,…,s_r admit a common subsequence s^⋆ of length t.
Then, for every string s_p, there exist I_p^1,…,I_p^t∈ [N] such that I_p^1 < … < I_p^t and s_p[I_p^q]= s^⋆[q] for every q∈ [t].
We construct an independent set S as follows.
For every selection gadget _p^q, we add _p^q| I_p^q to S.
Note that, at this point, S is an independent set because there is no edge between the selection gadgets _p^q in G.
For every inferiority gadget (p,q), since I_p^q < I_p^q+1, we can use <Ref> and add a set of log N vertices from (p,q) to S.
Note that S remains an independent set because the added vertices are not adjacent to _p^q| I_p^q and _p^q| I_p^q+1 by <Ref> and the only edges going out of (p,q) are incident to _p^q and _p^q+1.
At this point, we have (rt + r(t-1))log N vertices in S.
Observe that for every p∈ [r-1] and q∈ [t], we have s_p[I_p^q]= s^⋆[q] = s_p+1[I_p+1^q].
Thus, we have (I_p^q,I_p+1^q)∈IJ_p and by <Ref>, there exists an independent set S_p,q of (p,q) with 1+2IJ_p·log N vertices such that the union of S_p,q, _p^q| I_p^q and _p+1^q| I_p+1^q is an independent set.
We add S_p,q to S and note that S remains an independent set since the only edges going out of (p,q) are incident to _p^q and _p+1^q.
As we do this for every p∈ [r-1] and q∈ [t], the union of the S_p,q's contains ∑_p∈ [r-1] t ( 1 + 2IJ_p·log N ) vertices.
We conclude that G admits an independent set of size 𝗀𝗈𝖺𝗅.
(⇐)
Assume G admits an independent set S of size at least 𝗀𝗈𝖺𝗅.
The independence number of each selection gadget _p^q is log N and, by <Ref>, this is also the case for each inferiority gadget (p,q).
Hence, S contains at most (rt + r(t-1) )log N vertices from selection and inferiority gadgets.
By <Ref>, the independence number of each matching gadget (p,q) is 1 + 2IJ_p·log N, thus S contains at most ∑_p∈ [r-1] t ( 1 + 2IJ_p·log N ) vertices from the matching gadgets.
From the definition of 𝗀𝗈𝖺𝗅, we obtain that S contains exactly log N vertices from each selection and inferiority gadget, and it contains exactly 1 + 2IJ_p·log N vertices from each matching gadget.
We make the following deductions:
* For each p∈ [r], there exist I_p^1,…,I_p^t∈ [N] such that S contains _p^q|I_p^q for every q∈ [t].
* For each p∈ [r] and q∈[t-1], the independent set S contains the vertices in _p^q|I_p^q and _p^q|I_p^q+1 as well as log N vertices from (p,q). <Ref> implies that I_p^q< I_p^q+1. Thus, s_p[I_p^1]… s_p[I_p^t] is a subsequence of s_p.
* For every p∈ [r-1] and q∈ [t], the independent set S contains _p^q|I_p^q and _p+1^q|I_p+1^q as well as 1 + 2IJ_p·log N vertices from (p,q).
We deduce from <Ref> that (I_p^q,I_p+1^q)∈IJ_p and consequently, s_p[I_p^q]=s_p+1[I_p+1^q].
Hence, for every q∈ [t], we have s_1[I_1^q]=…= s_r[I_r^q].
We conclude that s_1[I_1^1]… s_1[I_1^t] is a common subsequence of s_1,…,s_r.
The next step is to construct a tree-model of G.
We can compute in polynomial time a (d,k)-tree-model of G where d=2log t + 4 and k=14rlog N-3.
Let L_,L_,L_min,L_max,L_ and {ℓ_0} be disjoint subsets of [k] such that each set among L_,L_,L_min,L_max, has size 2rlog N and L_ has size 6rlog N -4.
First, we prove that the union of the gadgets associated with a position q∈ [t] admits a simple tree-model.
For every q∈ [t], we denote by G^q the union of the selection gadgets _p^q with p∈ [r] and the matching gadgets (p,q) with p∈ [r-1].
For every q∈ [t], we can construct in polynomial time a (3,k)-tree-model (T^q,^q,^q,λ^q) for G^q.
Let q∈ [t].
We create the root a^q of T^q and we attach all the vertices in the selection gadgets _p^q with p∈ [r] as leaves adjacent to a^q.
Then, for every p∈ [r-1], we create a node a^q_p adjacent to a^q and for every (I,J)∈IJ_p, we create a node a^q_p,I,J adjacent to a^q_p.
For each (I,J)∈IJ_p, we make a^q_p,I,J adjacent to the vertex v^q_p,I,J and all the vertices in ^p,q,I,J_p, ^p,q,I,J_p+1.
Note that all the vertices in (p,q) are the leaves of the subtree rooted at a^q_p, and the leaves of T^q are exactly the vertices in G^q.
See <Ref> for an illustration of T^q.
We define λ^q as follows:
* λ^q maps each vertex in _1^q,…,_r^q to a unique label in L_.
* For every (p,i,x)∈ [r]× [log N] ×{0,1}, λ^q maps all the x-endpoints of the i-edges from the different copies _p^p',q,I,J of _p^q to a unique label in L_.
* We have λ^q(v_p,I,J^q)=ℓ_0 for every p∈[r-1] and (I,J)∈IJ_p.
We define ^q={ρ_ab| ab∈ E(T^q)} such that ρ_ab is the identity function for every ab∈ E(T^q).
It follows that λ^q_a=λ^q for every node a of T^q.
We finish the construction of the tree-model of G^q by proving that there exists a family of matrices ^q such that (T^q,^q,^q,λ^q) is a (3,k)-tree-model of G^q.
For doing so, we simply prove that the property φ(a,ℓ) holds for every label ℓ∈ [k] and internal node a of T^q with children b_1,…,b_c, where φ(a,ℓ) is true if:
* For every u∈ V_b_i and v∈ V_b_j with λ^q_a(u)=λ^q_a(v)=ℓ, we have N(u)∩ ( V_a∖ ( V_b_i∪ V_b_j))=N(v)∩ ( V_a∖ ( V_b_i∪ V_b_j)).
Observe that φ(a,ℓ) trivially holds when there is at most one vertex labeled ℓ in V_a.
Consequently, φ(a^q_p,I,J,ℓ) is true for every node a^q_p,I,J and ℓ∈ [k]. Moreover, φ(a, ℓ) is true for every internal node a and every ℓ∈ L_∪ L_min∪ L_max∪ L_.
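The property φ(a,ℓ) can also be checked mechanically. The following sketch is our own illustration (with a plain adjacency-set representation of the graph, not taken from the paper): it tests whether all label-ℓ vertices lying in the child subtrees of a have the same neighbourhood outside the two subtrees involved.

# Illustrative check of φ(a, ℓ) (not from the paper).  'subtrees' is the list of
# vertex sets V_{b_1}, ..., V_{b_c} of the children of a, 'label' maps each
# vertex of V_a to its label under λ^q_a, and 'adj' maps vertices to neighbour sets.
def phi_holds(subtrees, label, adj, ell):
    V_a = set().union(*subtrees)
    tagged = [(v, i) for i, part in enumerate(subtrees) for v in part
              if label[v] == ell]
    for u, i in tagged:
        for w, j in tagged:
            outside = V_a - (subtrees[i] | subtrees[j])
            if adj[u] & outside != adj[w] & outside:
                return False
    return True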
Recall that λ^q_a=λ^q for every node a of T^q.
For every pair (u,v) of distinct vertices in V(G^q), if λ^q(u)=λ^q(v) then either:
* There exists (p,i,x)∈ [r]× [log N]×{0,1} such that u and v are the x-endpoints of the i-edges in respectively _p^p^⋆,q,I,J and _p^p',q,I',J' for some p^⋆,p'∈{p-1,p}, (I,J)∈IJ_p^⋆ and (I',J')∈IJ_p'.
Observe that the parent of u is a^q_p^⋆,I,J and the neighbors of u in V_a^q_p^⋆ are all children of a^q_p^⋆,I,J.
Indeed, the only neighbors of u in V_a^q_p^⋆ are the (1-x)-endpoint of the i-edge of _p^p^⋆,q,I,J and potentially v_p^⋆,I,J^q.
Symmetrically, v belongs to V_a^q_p',I',J', its parent is a^q_p',I',J' and the only neighbors of v in V_a^q_p' are children of a^q_p',I',J'.
Moreover, observe that u and v have both only one neighbor in V_a^q∖ V_a^q_p which is the (1-x)-endpoint of the i-edge of ^q_p.
We deduce that φ(a^q,ℓ) and φ(a^q_p,ℓ) and φ(a^q_p,I,J,ℓ) are true for every ℓ∈ L_.
* We have u=v^q_p,I,J and v=v^q_p',I',J'.
In that case, u is a child of a^q_p,I,J and v a child of a^q_p',I',J'.
The only neighbors of u and v that are not children of a^q_p,I,J nor a^q_p',I',J' are all the other vertices of label ℓ_0.
Thus, φ(a,ℓ_0) holds for every internal node a of T^q.
We conclude that φ(a,ℓ) holds for every internal node a of T^q and every ℓ∈ [k].
Hence, there exists a family of matrices ^q such that (T^q,^q,^q,λ^q) is a (3,k)-tree model of G^q for every q∈ [t].
For every q∈[t-1], we denote by (q) the union of (1,q),…,(r,q).
Moreover, for every interval [x,y]⊆ [t], we denote by G^x,y, the union of the graphs G^q over q∈ [x,y] and the inferiority gadgets in (q) over q∈ [x,y] such that q+1∈ [x,y].
We prove by induction that for every interval [x,y]⊆ [t], there exists a (2log(y-x+1)+4,k)-tree-model (T,,,λ) of G^x,y such that given the root α of T, the following properties are satisfied:
* λ_α maps each vertex from ^x_1,…,^x_r to a unique label in L_min.
* λ_α maps each vertex from ^y_1,…,^y_r to a unique label in L_min.
When x=y, we only require that exactly one property among (<ref>) and (<ref>) is satisfied (we can choose which one as it is symmetric).
The induction is on y-x.
The base case is when x=y, in which case G^x,y=G^x and we simply modify the (3,k)-tree-model (T^x,λ^x,^x,^x) for G^x as follows.
We add to T^x a new root α adjacent to the former root a^x.
We add to ^x the function ρ_α a^x that bijectively maps L_ to L_min (or L_max if we want to satisfy (<ref>) rather than (<ref>)) and every label not in L_ to ℓ_0. Finally, we add M_α the zero k× k-matrix to ^x. After these modifications, it is easy to see that (T^x,λ^x,^x,^x) is a (4,k)-tree model of G^x,y that satisfies (<ref>) or (<ref>).
Now assume that x< y and that G^x',y' admits the desired tree-model for every [x',y'] strictly included in [x,y].
Let q = ⌊ (y-x)/2 ⌋.
By induction hypothesis, there exist:
* A (2log(q - x)+4,k)-tree model (T^<q,^<q,^<q,λ^<q) for G^x,q-1 with the desired properties (if x=q -1, we require (<ref>) to be satisfied).
* A (2log(y-q)+4,k)-tree-model (T^>q,^>q,^>q,λ^>q) for G^q+1,y with the desired properties (if y=q +1, we require (<ref>) to be satisfied).
For the sake of legibility, we assume that x is different from q, which implies that G^x,q-1 is not the empty graph (note that G^q+1,y is not empty as x<y and q = ⌊ (y-x)/2 ⌋).
We lose some generality with this assumption, but we can easily deal with the case x=q with some simple modifications on the following construction (i.e. removing some nodes and changing some renaming functions).
In the following, we construct a (4+2log(y-x+1),k)-tree-model (T,,,λ) of G^x,y from the above tree-models of G^x,q-1, G^q+1,y, but also the (3,k)-tree-model (T^q,λ^q,^q,^q) of G^q given by <Ref>.
To obtain T, we create the root α of T and we make it adjacent to a^q, the root of T^q, and two new vertices: α^<q and α^>q.
We make α^<q adjacent to the root of T^<q and to all the vertices in (q-1).
Symmetrically, we make α^>q adjacent to the root of T^>q and to all the vertices in (q).
See <Ref> for an illustration of T.
We define λ as follows:
λ(v)= λ^<q(v) if v∈ V(G^x,q-1),
λ^>q(v) if v∈ V(G^q+1,y),
λ^q(v) if v∈ V(G^q),
λ'(v) otherwise (when v belongs to (q-1) or (q))
where λ' maps the vertices in G^x,y from (q-1) and (q) to L_ such that for each label ℓ of L_, there exists q'∈{q-1,q} and p∈ [r] such that ℓ is associated with either: (1) v^x,p,q'_i for some i∈ [log N-1] and x∈{0,1} or (2) all the vertices in V^01,p,q'_i for some i∈ [log N].
Since L_= 6rlog( N) -4, we have enough labels for doing so.
The family of renaming function is obtained from the union of ^<q∪^>q∪^q by adding for every edge e in T that is not in T^q, T^<q or T^>q a function ρ_e defined as follows:
* ρ_e is the identity function when e is an edge adjacent to a leaf from (q-1) or (q).
* ρ_e maps every label in L_min∪ L_max to itself and every other label to ℓ_0, when e is the edge between a^⊛ q and the root of T^⊛ q for ⊛∈{<,>}.
* ρ_e maps every label in L_ to itself and every other label to ℓ_0 when e=α a^q.
* ρ_e maps every label in L_min∪ L_ to itself and every other label to ℓ_0, when e=αα^<q.
* ρ_e maps every label in L_max∪ L_ to itself and every other label to ℓ_0, when e=αα^>q.
Observe that λ_α satisfies Properties (<ref>) and (<ref>).
As λ_α^<q satisfies (<ref>), this function maps every vertex from ^x_1,…,^x_r to a unique label in L_min.
The above renaming functions guarantee that the only vertices mapped to a label in L_min by λ_α are from V_α^<q. We deduce that Property (<ref>) holds and symmetrically, Property (<ref>) holds too.
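The induced labellings used in this argument can be computed mechanically. The sketch below is our own illustration and assumes the usual convention that λ_a(v) is obtained from λ(v) by composing the renaming functions along the path from the leaf v up to the node a; trees, labels and renamings are represented with plain dictionaries, and all names are placeholders.

# Illustrative computation of λ_a(v) (not from the paper).
# 'parent' maps each node/leaf to its parent, 'lam' gives the leaf labelling λ,
# and 'rho[(parent, child)]' is the renaming function of the edge parent–child.
def induced_label(v, a, parent, lam, rho):
    label, node = lam[v], v
    while node != a:
        label = rho[(parent[node], node)][label]  # apply ρ_e for the edge just traversed
        node = parent[node]
    return label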
Now we prove that a family of matrices exists such that (T,,,λ) is a tree model of G^x,y.
As before, we prove that φ(a,ℓ) holds for every internal node a of T and every label ℓ∈ [k].
Since our construction is based on tree-models for G^q, G^x,q-1 and G^q+1,y, we only need to prove that φ(a,ℓ) holds for every a∈{α^<q,α^>q,α} and ℓ∈ [k].
We first deal with α^<q.
Let us describe the labeling function λ_α^<q.
Remember that (T^<q,^<q,^<q,λ^<q) satisfies Properties (<ref>) and (<ref>), or just (<ref>) when x=q-1. Moreover, the renaming function associated with the edge between α^<q and T^<q preserves the labels in L_min∪ L_max and maps the other labels to ℓ_0.
Thus, λ_α^<q assigns every vertex in ^x_1,^q-1_1,…,^x_r,^q-1_r to a unique label in L_min∪ L_max.
We deduce that for every pair (u,v) of distinct vertices in V_α^<q, if λ_α^<q assigns u and v to the same label ℓ∈ [k], then either:
* u,v∈ V^01,p,q-1_i for some p∈ [r] and i∈ [log N]. In this case, u and v are false twins by construction of (p,q-1)—i.e. N(u)=N(v)—and we deduce that φ(α^<q, ℓ) holds.
* ℓ=ℓ_0 and u,v are in V(G^x,q-1) and not from ^x_1,^q-1_1,…,^x_r,^q-1_r.
Then, all the neighbors of u and v are in G^x,q-1 and thus N(u)∖ V(G^x,q-1) = N(v) ∖ V(G^x,q-1). We deduce that φ(α^<q, ℓ) holds in this case too.
We conclude that φ(α^<q,ℓ) holds for every ℓ∈ [k] and with symmetric arguments, we can prove that φ(α^>q,ℓ) holds also for every ℓ∈ [k].
For α, notice that for every a∈{a^q,α^<q,α^>q}, the vertices in V_a labeled ℓ_0 by λ_α have neighbors only in V_a, hence φ(a,ℓ_0) holds.
Furthermore, every label ℓ in L_min∪ L_max∪ L_ is mapped by λ_α to a unique vertex in V_α, so φ(α,ℓ) holds.
Finally, each label in L_ is mapped by λ_α to a unique vertex or to all the vertices in V^01,p,q'_i for some p∈[r], q'∈{q-1,q} and i∈ [log N]. Since the vertices in V^01,p,q'_i are false twins, we deduce that φ(α,ℓ) holds for every ℓ∈ L_.
We conclude that φ(α,ℓ) holds for every ℓ∈ [k] and thus there exists a family of matrices such that (T,,,λ) is a tree-model of G^x,y.
It remains to prove that the depth of T is at most d=2log(y-x+1)+4.
By definition of q, both q-x and y-q are smaller than (y-x+1)/2.
Thus, log(q-x) and log(y-q) are smaller than log(y-x+1)-1.
Now observe that the depth of T is the maximum between (i) the depth of T^q plus 1 which is 4, (ii) the depth of T^<q plus 2, and (iii) the depth of T^>q plus 2.
The depth of T^<q is at most 2log(q-x)+4.
Since log(q-x)≤log(y-x+1)-1, the depth of T^<q plus 2 is at most 2log(y-x+1)+4.
Symmetrically, the depth of T^>q plus 2 is also upper bounded by 2log(y-x+1)+4.
It follows that the depth of T is at most 2log(y-x+1)+4.
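As a quick numerical sanity check of this bound (our own illustration, not part of the proof), the recursion behind the construction — depth 4 for a single position, and two extra levels after splitting an interval at its midpoint — can be unrolled directly:

from math import log2

def depth(n):
    """Depth of the tree-model built for an interval of n positions."""
    if n <= 1:
        return 4                       # base case of the induction
    left = (n - 1) // 2                # size of [x, q-1]
    right = n - 1 - left               # size of [q+1, y]
    return max(4, 2 + depth(max(left, right, 1)))

assert all(depth(n) <= 2 * log2(n) + 4 for n in range(1, 2048))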
We conclude that for every interval [x,y], G^x,y admits a (2log(y-x+1)+4,k)-tree-model.
In particular, it implies that G^1,t=G admits a (d,k)-tree-model.
It is easy to see from our proof that this (d,k)-tree-model is computable in polynomial time.
We are now ready to prove <Ref>.
Let δ be an unbounded and computable function.
Assume towards a contradiction that there exists an algorithm that solves the Independent Set problem in graphs supplied with (d,k)-tree-models satisfying d≤δ(k) that runs in time 2^(k)· n^(1) and uses n^(1) space.
Since δ is unbounded and computable and log is monotone, there exists an unbounded and computable function δ' such that for all sufficiently large N,r∈, we have
2log(δ'(N))+4 ≤δ( 14rlog N-3).
Let (N,t,Σ,s_1,…,s_r) be an instance of LCS such that t≤δ'(N).
Our reduction provides us with a graph G and an integer such that the following holds:
* G has (rtN^2log N) vertices and thus it can be constructed in M^(1) time where M is the total bitsize of (N,t,Σ,s_1,…,s_r).
Indeed, the selection gadgets are made of 2rtlog N vertices, the inferiority gadgets have exactly r(t-1)(2log N + log N(1+log N)/2) vertices and the matching gadgets consist of ∑_p∈ [r-1] t · 2IJ_p·(1 + 2log N) vertices.
* By <Ref>, G admits an independent set of size at least iff s_1,…,s_r admits a common subsequence of size t.
* By <Ref>, we can construct in polynomial time a (d,k)-tree-model of G with d=2log t + 4 and k=14rlog N-3.
Observe that we have
d=2log t+4 ≤ 2log(δ'(N))+4 ≤δ( 14rlog N-3 )=δ(k).
Consequently, we can run to check whether G admits an independent set of size at least in time 2^(k)· n^(1) and space n^(1).
Since k=14rlog N-3 and n=(rtN^2log N)≤ M^(1), it follows that we can solve (N,t,Σ,s_1,…,s_r) in time N^(r)· M^(1)≤ M^(r) and space M^(1).
As this can be done for every instance (N,t,Σ,s_1,…,s_r) where t≤δ'(N), it contradicts <Ref>.
§ FIXED-PARAMETER ALGORITHMS FOR METRIC DIMENSION AND FIREFIGHTING
Theorem <ref>—and in particular the fixed-parameter tractability of Metric Dimension and Firefighter parameterized by shrub-depth—can be obtained by combining known results about these problems <cit.> with a bound on the maximum length of induced paths in graph classes of bounded shrub-depth <cit.>.
These results contrast the -hardness of both problems on graphs of bounded pathwidth <cit.>.
The Firefighter problem on a graph G is the following. At time 0, a vertex r ∈ V(G) catches fire. Then at each time step i ≥ 1, first a firefighter is permanently placed on a vertex that is not currently on fire. This vertex is now permanently protected. Then the fire spreads to all unprotected neighbors of all vertices currently on fire. This process ends in the time step when the fire no longer spreads to new vertices. All vertices that do not catch fire during this process (including the protected vertices) are called saved; the rest are called burned. The goal is to maximize the number of saved vertices.
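For concreteness, the process just described can be simulated directly. The following Python sketch is our own illustration (with a placeholder protection strategy) and is not part of the algorithms discussed below; it only makes the game dynamics explicit.

# Illustrative simulation of the firefighting process (not from the cited works).
# 'adj' maps each vertex to its neighbour set, 'root' is where the fire starts,
# and 'strategy' returns the vertex to protect at each time step (or None).
def simulate_firefighter(adj, root, strategy):
    burned, protected = {root}, set()
    while True:
        choice = strategy(adj, burned, protected)
        if choice is not None and choice not in burned:
            protected.add(choice)
        frontier = {w for v in burned for w in adj[v]} - burned - protected
        if not frontier:                    # the fire no longer spreads
            break
        burned |= frontier
    return set(adj) - burned                # saved vertices, including protected ones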
Bazgan et al. <cit.> showed that the Firefighter problem is fixed-parameter tractable when parameterized by the treewidth of the input graph and the number k of vertices that may be protected during the process. In this result, one first writes an 𝖬𝖲𝖮_2 formula φ(X) that expresses that a set of vertices X can be saved assuming that k vertices can be protected, and then applies the optimization variant of Courcelle's Theorem, due to Arnborg et al. <cit.>, to find the largest vertex subset A for which φ(A) is satisfied. By inspection of the formula, we can see that it does not quantify over edge sets, hence φ(X) is in fact an 𝖬𝖲𝖮_1 formula. Then, by replacing the usage of the algorithm of Arnborg et al. with the algorithm of Courcelle et al. <cit.>, we conclude that the Firefighter problem is fixed-parameter tractable when parameterized by the cliquewidth of the input graph and the number of vertices that may be protected.
We now recall that in graphs with a (d,k)-tree model, any induced path has length at most (2^k^d+1) (this follows from <cit.>; the bound accounts for our slightly different definition of a tree model). This implies that the firefighting game has at most (2^k^d+1) time steps and the same amount of vertices can be protected.
Hence, recalling that a graph with a (d,k)-tree model has bounded cliquewidth <cit.>, we immediately obtain the following result.
The Firefighter problem is fixed-parameter tractable when parameterized by d and k on graphs provided with a (d, k)-tree-model.
We observe that this is in contrast to the complexity of the Firefighter problem on graphs of bounded treewidth. The Firefighter problem is in fact already NP-hard on trees of maximum degree 3 (which are graphs of treewidth 1) <cit.> and trees of pathwidth 3 <cit.>.
A similar situation arises for the Metric Dimension problem. In Metric Dimension, given a graph G, we are asked to find a smallest set Z ⊆ V(G) such that for any pair u,v ∈ V(G), there is a vertex z ∈ Z such that the distance between u and z and the distance between v and z are distinct. Gima et al. <cit.> observed that Metric Dimension is fixed-parameter tractable when parameterized by the cliquewidth and the diameter of the input.
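For concreteness, whether a given set Z resolves a graph can be checked by brute force directly from this definition; the sketch below is our own illustration and is not part of the cited algorithms.

from itertools import combinations
from collections import deque

# BFS distances from a single landmark.
def distances_from(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# Z resolves G iff no two vertices share the same vector of distances to Z.
def resolves(adj, Z):
    dist = {z: distances_from(adj, z) for z in Z}
    vectors = {v: tuple(dist[z].get(v) for z in Z) for v in adj}
    return all(vectors[u] != vectors[v] for u, v in combinations(adj, 2))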
Since in graphs with a (d,k)-tree model, any induced path has length at most (2^k^d+1) (the bound accounts for our slightly different definition of a tree model), and any such graph has bounded cliquewidth <cit.>, we immediately obtain the following.
The Metric Dimension problem is fixed-parameter tractable when parameterized by d and k on graphs provided with a (d, k)-tree-model.
This is again in contrast to the complexity of the Metric Dimension problem on graphs of bounded treewidth. The Metric Dimension problem is in fact already 𝖭𝖯-hard on graphs of pathwidth 24 <cit.>.
|
http://arxiv.org/abs/2307.02605v1
|
20230705185748
|
Traversable Lorentzian wormhole on the Shtanov-Sahni braneworld with matter obeying the energy conditions
|
[
"Rikpratik Sengupta",
"Shounak Ghosh",
"M. Kalam"
] |
gr-qc
|
[
"gr-qc"
] |
^1 Department of Physics, Aliah University, Kolkata 700160, West Bengal, India
^2 Department of Physics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah 711103, West Bengal, India
^1 [email protected]
^2 [email protected]
^1 [email protected]
In this paper we have explored the possibility of constructing a traversable wormhole on the Shtanov-Sahni braneworld with a timelike extra dimension. We find that the Weyl curvature singularity at the throat of the wormhole can be removed with physical matter satisfying the NEC ρ+p ≥ 0, even in the absence of any effective Λ-term or any type of charge source on the brane. (The NEC is however violated by the effective matter description on the brane arising due to effects of higher dimensional gravity.) Besides satisfying the NEC, the matter constituting the wormhole also satisfies the Strong Energy Condition (SEC), ρ+3p ≥ 0, leading to the interesting possibility that normal matter on the brane may be harnessed into a wormhole. Incidentally, these conditions also need to be satisfied to realize a non-singular bounce and cyclic cosmology on the brane<cit.>, where both past and future singularities can be averted. Thus, such a cyclic universe on the brane, constituted of normal matter, can naturally contain wormholes. The wormhole shape function on the brane with a timelike extra dimension represents the tubular structure of the wormhole, spreading out at large radial distances, much better than in wormholes constructed in a braneworld with a spacelike extra dimension, and the resulting wormholes have considerably lower mass, minimizing the amount of matter required to construct a wormhole. Wormholes in the Shtanov-Sahni (SS) braneworld also have sufficiently low tidal forces, facilitating traversability. Additionally, they are found to be stable and exhibit a repulsive geometry. We are left with the intriguing possibility that both types of curvature singularity can be resolved with the SS model, which we discuss at the end of the concluding section.
Traversable Lorentzian wormhole on the Shtanov-Sahni braneworld with matter obeying the energy conditions
Rikpratik Sengupta^1, Shounak Ghosh^2, Mehedi Kalam^1
August 1, 2023
=========================================================================================================
§ INTRODUCTION
Wormholes are a class of topological objects with a trivial boundary and non-simply-connected interior, which first appeared as a solution to the field equations of Einstein's General Relativity (GR) in 1916<cit.>. Later on, Einstein himself, along with Rosen, extended the idea to speculate on the possible existence of bridges connecting two different spacetime regions. Such bridges were believed to create a shortcut through spacetime, resulting in a reduced travel distance and time<cit.>. The detailed mathematical structure of a wormhole was investigated a few decades later by Fuller and Wheeler<cit.> for the Schwarzschild geometry, resulting in a tubular structure spreading out to be asymptotically flat. The two different spacetime regions were connected by the throat of the wormhole. However, it was found in their analysis that such a structure would collapse at the throat from the instability resulting from the gravitational attraction exerted by matter present in the two different spacetime regions in opposite directions. As a consequence, the throat would pinch off, resulting in the possible appearance of a Weyl singularity at the throat arising from infinitely large tidal forces. So, such objects remained of very little physical interest until Morris and Thorne<cit.> proposed a generalized prescription to possibly avoid the pinch-off at the throat.
They prescribed certain properties for one of the metric potentials termed as the shape function, which represents the shape of the wormhole. According to their prescription, the shape function at the throat radius must be equal to the throat radius. For all radial distances within the surface of the wormhole greater than the throat radius, the shape function at a particular radial distance must be less than the radial distance itself. Most importantly, the derivative of the shape function with respect to the radial distance at the throat radius must be less than unity. This implies a violation of the null energy condition (NEC) at the throat. This is known as the 'flare-out' condition. Additionally, the surface density and surface pressure must vanish at the boundary of the wormhole.
The motivation for the main component of their prescription can be analyzed from two points of view. Physically, if some 'exotic' matter is introduced at the throat with negative pressure, such that it is gravitationally repulsive in nature, then the instability arising from attraction due to matter in the two different spacetime regions can be overcome as they will be repelled by the exotic matter due to its peculiar property. Thus, the throat will remain stable and 'flare-out' keeping the bridge-like structure stable. From a mathematical point of view, a Weyl type of curvature singularity, which is indeed a physical singularity indicating geodesic incompleteness, can be averted even in a relativistic context by the violation of the null energy condition (NEC), as inferred from the singularity theorems due to Penrose and Hawking<cit.>. It is interesting to note that matter sources violating the strong energy condition (SEC) and sometimes also the NEC, play an important role in cosmology<cit.> to explain the current accelerating state<cit.> of the universe.
In order to explain the current accelerating phase of the universe, the Einstein field equations (EFE) have to be modified. This is possible either by modifying the geometry or the matter sector. In order to modify the geometry sector, the standard Einstein-Hilbert (EH) action has to be replaced such that the EH action can be recovered for a suitable choice of the model parameters<cit.>. On the other hand, the modification in the matter sector<cit.> arises from the introduction of a variety of matter sources violating one or more of the energy conditions, ranging from Einstein's well-known Cosmological Constant to scalar fields with a negative kinetic energy term, dubbed the 'phantom'. The phantom EoS, with the EoS parameter being less than -1, can however be realized effectively as a modification in the geometry sector arising from the higher dimensional braneworld scenario, without considering any such exotic matter source<cit.>. Also, it is now well accepted that standard GR fails to work adequately in spacetime regions involving diverging energy densities and spacetime curvatures. The divergences are believed to occur due to the shortcomings of GR, and for explaining such situations GR needs to be replaced by a modified gravity theory that reduces to standard GR in the low-energy limit. The two main contenders in this respect are the higher dimensional braneworld models<cit.>, inspired by Superstring/M-theories<cit.>, and an alternative four dimensional approach known as Loop quantum gravity (LQG)<cit.>.
The braneworld models have attracted a lot of attention in recent times. One of the first braneworld models that became extremely popular was proposed by Randall and Sundrum (RS) in 1999<cit.> as an attempt to solve the hierarchy problem<cit.> in particle physics. The weakness of gravity compared to the electroweak force could be explained by assuming our universe to be contained in a (3+1)-dimensional membrane embedded in a higher dimensional bulk spacetime. The standard model fields of particle physics are considered to be confined to the brane, while gravity can access the bulk spacetime, which is essentially Anti-de Sitter (AdS_5). The negative cosmological constant of the bulk confines gravity close to the brane. Their first model consisted of two parallel braneworlds, but in the second model<cit.> the brane with a negative tension was moved to infinity. The RS single brane model has a spacelike extra dimension and finds a wide range of applications in both cosmology<cit.> and astrophysics<cit.>. Soon after, a dual to the RS-II single brane model was proposed by Shtanov and Sahni (SS), with a timelike extra dimension replacing the spacelike extra dimension<cit.>. For a spacelike extra dimension the bulk space has a Lorentzian signature but, for the latter, the bulk space has a signature (-,-,+,+,+). Such a braneworld is characterized by a negative brane tension and positive bulk cosmological constant, contrary to the RS-II braneworld. The most interesting feature of this model is that the contracting phase realizes a natural bounce, avoiding a singular state, even if matter on the brane satisfies the energy conditions. Nothing prevents the non-singular bounce from happening an infinite number of times, and in the presence of a massive scalar field the amplitude increases with each successive cycle, leading to a resolution of the flatness problem<cit.>. A spatially closed universe consisting of matter satisfying the strong energy condition (SEC) can undergo the cyclic behaviour and, in general, a universe consisting of matter satisfying the null energy condition (NEC) can transit through a non-singular bounce both in the past and future, provided our universe is considered to be a (3+1)-dimensional brane embedded in a bulk with a timelike extra dimension. There is an issue regarding the tachyonic nature of the Kaluza-Klein gravitational modes in a model with a timelike extra dimension, as discussed in <cit.>, and it has been addressed in <cit.>.
The possibility of wormhole formation has previously been investigated by modifying both the matter<cit.> as well as the geometry<cit.> sectors. Wormholes have also been explored in the framework of extra dimensions in the context of the RS II braneworld model<cit.>. Generally, to realize traversable wormholes in a relativistic context, phantom matter is required to violate the NEC in order to make the throat stable, but in a recent work<cit.> it has been shown that, using Z_2-reflection symmetry at the throat, a wormhole can be constructed with normal matter containing coupled Maxwell and Dirac fields, at the expense of coexisting particles and antiparticles which do not annihilate and of non-smooth geometry and matter sectors. This has been addressed by Konoplya and Zhidenko<cit.>: by not imposing the Z_2 symmetry, the problem of particle-antiparticle coexistence at the throat can be resolved and a smooth metric and matter field are obtained. The braneworld model considered by us does invoke the Z_2 symmetry, but not in a relativistic context or specifically at the wormhole throat; it is a generic feature of the model. In this paper, we attempt to explore the possibility of the existence of wormholes in the context of the braneworld model with a timelike extra dimension, since it has many interesting additional features absent in the RS-II models, especially the avoidance of the cosmological singularities and the possibility of a cyclic universe scenario. In the following section, we solve the EFE for the wormhole metric on the brane to obtain the unknown metric potential and study the validity of the NEC. The Israel-Darmois junction conditions<cit.> are formulated at the surface of the wormhole to evaluate the unknown model parameters. Finally, the tidal acceleration is obtained at the throat to ensure traversability, the stability is checked through a linearized stability analysis, the surface redshift is computed, and the nature of the wormhole is commented upon.
§ MATHEMATICAL MODEL OF THE BRANE WORMHOLE
In this section we attempt to construct a mathematical model of a spherically symmetric, static, traversable wormhole on the brane
with a timelike extra dimension, that is stable under linear stability analysis. We check the various criteria that must be satisfied
by the wormhole solution to be rendered as physically realistic.
§.§ Modified EFE on the Brane
A spherically symmetric and static line element has the well known form
ds^2=-e^ν(r)dt^2+e^λ(r)dr^2+r^2(dθ^2+sin^2θ dϕ^2).
There is modification of the EH action representing the geometry sector for the braneworld resulting in the appearance of additional terms in the EFE. The modified EFE on the brane may be written in the most general form as <cit.>
m^2 G_μν+σ h_μν= T_μν+ ϵ M^3(K_μν-h_μνK),
Making use of the Gauss identity on the brane, the modified EFE reads<cit.>
G_μν+Λ_effh_μν=8π G_eff T_μν+ϵ/1+β[S_μν/M^6-W_μν]
The parameter β is given by β=2σ m^2/3M^6, where m is the Planck mass in four dimensions, M is the five dimensional Planck mass, σ denotes the brane tension,
G_μν is the Einstein tensor on the (3+1)-dimensional brane and T_μν is the energy momentum tensor on the brane.
Here K_μν is the extrinsic curvature and K represents its trace;
h_μν=g_μν-ϵ n_μn_ν is the induced metric on the brane, such that n^μ represents the vector
field of the inner unit normal to the brane and g_μν is the metric on the bulk. The parameter ϵ=± 1 determines the signature of the bulk space.
If we choose ϵ=1 then the bulk spacetime has a Lorentzian signature and the extra dimension is spacelike. We get the
RS II model in this case. On the other hand, a choice ϵ=-1 deviates the bulk signature from being Lorentzian and the
extra dimension is timelike giving the SS braneworld model. The effective cosmological constant on the brane is given by Λ_eff=Λ_RS/1+β, where Λ_RS is the effective cosmological constant on the RS II brane and the effective gravitational constant on the brane is given by 8π G_eff=β/m^2(1+β). The terms within the square brackets denote the correction terms which arise as a consequence of the extra dimensional effects. The term S_μν turns out to be quadratic in stress-energy and is computed from the bare Einstein equation E_μν=m^2 G_μν-T_μν obtained from Eq. (2). It is given by
S_μν=1/3EE_μν-E_μαE^α_ν+1/2(E_αγE^αγ-1/3E^2)h_μν
The other term W_μν is the projection of the bulk Weyl tensor W_μναγ on the brane that is responsible for the corrections encoding the bulk graviton effects. It is defined as W_μν=n^α n^γ W_μναγ.
The Weyl projection appears in Eq. (3) as the stess-energy tensor of some additional matter, resulting in the introduction of extra effective energy density and effective pressure terms in the modified field equations. The equation of state of the additional matter is that of radiation due to W_μν being traceless. The correction terms are related to one another by the conservation equation
D^μ(S_μν-M^6 W_μν)=0,
where D^μ denotes the covariant derivative on the brane associated with the induced metric h_μν. It is to be noted that we need not impose this equation additionally as it is a consequence of the presence of the projected Weyl term in the modified EFE on the brane.
In this paper, we shall work in the RS limit where the induced curvature term on the brane is made to vanish by setting m=0. m vanishes in the RS limit as the induced curvature (scalar curvature of the induced metric) on the brane generating from quantum correction to the matter action is not taken into account in the RS model. The fundamental constant in the RS model is not the 4D Planck mass. The effective Planck mass on the brane is not equal to m but to a certain combination involving all constants, and it does not vanish in the case of m = 0. In fact, the RS model is a viable gravity theory on the brane even though it has m = 0 in our notation. Alternatively, a vanishing M indicates diminishing significance of extra dimensional effects and standard GR is recovered. It is worth mentioning that for a timelike extra dimension, a non-singular bounce can be obtained in the cosmological context irrespective of whether the induced curvature term is made to vanish or not. However in the presence of an induced curvature term in the action, there is an additional possibility of the universe commencing from a quasi-singular state that has the peculiar property of a non-divergent Hubble parameter and stress-energy while only the curvature tensor is divergent. The effective gravitational constant on the brane depends on the brane tension σ and the parameter ϵ and may be written in the RS limit as 8 π G_eff=2ϵσ/3 M^6. As mentioned, for the SS braneworld with a timelike extra dimension, ϵ=-1. So, in order to make G_eff positive, the brane tension σ must be a negative quantity. Also, the effective cosmological
constant on the brane in the m=0 limit is just a linear combination of the bulk cosmological constant term Λ_5 and the brane tension and is expressed as Λ_RS=Λ_5/2+ϵσ^2/3M^6. For the possibility of a vanishing effective lambda term on the brane, the bulk cosmological constant term must be positive as ϵ=-1. Following RS, we assume Λ_RS=0 (implying a fine-tuning between the brane tension and bulk cosmological term) for constructing our wormhole model on the brane. As we consider the extra-dimension to be timelike, the evolution from the brane to the bulk is a Cauchy development, which is correctly posed and has a solution in the neighbourhood of the brane. So one can take an approach that one does not care what happens in the bulk <cit.>.
The modified EFE on the brane for line element (1) assuming a perfect fluid source with isotropic pressure having stress-energy components T_μ^ν=diag(-ρ,p,p,p) can be computed from Eqs. (2), (3) and (4) in geometrical units as
e^-λ(λ'/r - 1/r^2) + 1/r^2 = 8π ρ(1 - ρ/ρ_c) - 12U/ρ_c,
e^-λ(ν'/r + 1/r^2) - 1/r^2 = 8π(p - ρ(p + ρ/2)/(ρ_c/2)) - 4U/ρ_c - 8P/ρ_c,
e^-λ(ν”/2-λ^'ν^'/4+ν^'^2/4+ν^'-λ^'/2r) = 8π(p-ρ(p+ρ/2)/ρ_c/2) -4U/ρ_c+4P/ρ_c.
ρ=ρ_c is a constant parameter that denotes the density at which the bounce takes place in a cosmological context. ρ_c=2|σ|, where σ is the brane tension and the modulus is taken to consider the absolute magnitude as the brane tension is negative in the SS braneworld model. The terms in the modified EFE containing U and P are the effective stress-energy components due to the Weyl contribution, where U and P denote the energy density and pressure due to bulk contribution on the brane, respectively. It is very interesting to note that although we consider a perfect fluid with isotropic pressure as the matter source on the brane, the effective radial and tangential pressures on the brane arising from higher dimensional bulk contributions are different, contributing to an induced pressure anisotropy amounting to 12P/ρ_c on the brane, due to bulk effects. Also, we can see that the average effective pressure due to the projected Weyl tensor can be obtained as P̃_eff^avg=1/3(P̃^r_eff+2P̃^t_eff)=1/3ρ̃_eff, which shows the fact that the additional effective matter appearing in the modified EFE from the projected Weyl term has the EoS of radiation due to the projected Weyl tensor being traceless.
The conservation equation on the (3+1)-brane is the same as in GR as the stress-energy is conserved separately on the brane and bulk.
So, for the line element given by Eq. (1), we have
dp/dr=-1/2dν/dr(p+ρ).
§.§ Solution for the Wormhole Shape Function
The line element for a static, spherically symmetric wormhole has the form
ds^2=-e^ν(r)dt^2+dr^2/1-b(r)/r+r^2(dθ^2+sin^2θ dϕ^2),
where b(r) and ν(r) denote the shape function and redshift function of the wormhole, respectively.
For the line element Eq. (10), the modified field equations on the 3-brane given by Eqs. (6)-(8) reduce to
b^'/r^2=ρ(1-ρ/ρ_c)-12 U/ρ_c,
(1-b/r)(ν^'/r+1/r^2) -1/r^2
=p -ρ(2 p +ρ)/ρ_c-4U/ρ_c-8P/ρ_c,
(1-b/r)(ν” + ν^'^2 +ν^'/r) -b^' -b/2r(ν^' +1/r)
=p-ρ(2 p +ρ)/ρ_c-4U/ρ_c+4P/ρ_c.
The first metric potential, which contains the redshift function, is assumed to be the Kuchowicz metric function <cit.>
e^ν(r)=e^Br^2+2ln C.
Here B is an arbitrary constant having dimension [L^-2] and C denotes a dimensionless constant. The reason behind the choice of
the Kuchowicz potential as the redshift function is that it is a well behaved regular function for all finite radial distances and can
represent the metric potential in the interior of regular collapse solutions like the gravastar and other compact objects <cit.>. It has also been used as a redshift
function for wormholes elsewhere <cit.>. As we shall see later, the values of the parameters B and C as evaluated from the boundary conditions at the wormhole surface ensure the asymptotic flatness of the wormhole such that the redshift function remains finite for infinitely large radial distances.
The matter on the brane is taken to be a perfect fluid such that the stress energy tensor has the form T^μ_ν=diag(-ρ,p,p,p),
where the pressure and energy density are related via the equation of state (EoS) parameter μ as
p(r)=μρ(r).
The parameter μ can be evaluated from the Israel Darmois junction conditions at the wormhole surface.
Using this EoS in the stress energy conservation equation for the assumed form of the redshift function, the energy density of matter
constituting the wormhole can be found to be
ρ(r)=C_1e^-H_1 Br^2/2μ,
where C_1 is an integration constant.
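As a consistency check (our own sketch, assuming only the conventions above), the conservation equation (9) with the Kuchowicz redshift function and the linear EoS p = μρ integrates directly to the expression in Eq. (17); the following SymPy snippet reproduces the exponent -(μ+1)Br^2/2μ.

import sympy as sp

r = sp.symbols('r', positive=True)
B, mu = sp.symbols('B mu', nonzero=True)
rho = sp.Function('rho')

nu = B * r**2                       # Kuchowicz exponent; the constant 2 ln C drops out of nu'
p = mu * rho(r)                     # linear EoS, Eq. (15)

# Conservation equation: dp/dr = -(1/2) nu'(r) (p + rho)
ode = sp.Eq(sp.diff(p, r), -sp.Rational(1, 2) * sp.diff(nu, r) * (p + rho(r)))
print(sp.dsolve(ode, rho(r)))       # rho(r) = C1*exp(-B*(mu + 1)*r**2/(2*mu))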
In figure 1, we have plotted the variation of the energy density and the effective energy density along the radial distance. Its significance has been discussed in the concluding section.
As already discussed in the subsection A, we follow an approach<cit.> where we do not care what happens in the bulk, so we have a lot of freedom for choosing the specific form of the Weyl projection on the brane. Such choices have also been made in<cit.>. Here also, we assume a linear EoS connecting U and P of the form P=ω U, where ω is a constant EoS parameter that we will obtain from the junction conditions. The energy density due to the Weyl projection on the brane is assumed to have a profile U(r)=√(ρ_0 ρ(r)), where ρ_0 is another constant model parameter to be obtained from the boundary conditions at the wormhole surface.
Plugging in the obtained energy density in the field Eq. (13) and using the EOS of Eq. (15) along with Eq. (14), we obtain a solution
for the shape function b(r), having the form
b(r) = r e^-2Br^2( 8πC_1^2 F_2 μ e^Br^2H /μ/B ρ_c H G + G e^2Br^2 +C_2/G -112πC_1μ^2 e^Br^2 F /2μ/B F F_1 G -16 ( ω-1 ) μ√(ρ_0C_1) e^Br^2 F_1 /4μ/ B ρ_c F_1 G),
where C_2 is the constant of integration whose value can be obtained from the junction conditions.
Here we have G=(2Br^2+1), F=(3μ-1), H=(μ-1), F_1=(7μ-1), H_1=(μ+1), F_2=(2μ+1).
The variation of the shape function along the radial distance is plotted in Fig.2 and as we can see it has a desired shape capable
of representing a wormhole surface. There is a slight bend as we move away from the throat towards the wormhole surface, whose physical significance is discussed later on. The throat radius is assumed to be r_0=0.5 km.
§.§ Validity of NEC
On the braneworld, there is a modification in the geometry sector of the EFE which can effectively be represented as an effective matter
sector. So, even for ordinary perfect fluid matter, the components of the stress-energy tensor are different from usual GR. As a result,
there is an effective T_μν appearing on the RHS of the modified EFE, arising out of the modification to the standard action due
to the extra dimensional effects. We have T^μ (eff)_ν=diag(-ρ^eff, p_r^eff, p_t^eff, p_t^eff).
For a braneworld with a timelike extra dimension, the components of the effective stress-energy tensor are given as
ρ^eff=ρ( 1-ρ/ρ_c) -12U/ρ_c,
p^eff_r=(p +ρ( p -ρ/2)/ρ_c/2)-4U/ρ_c-8P/ρ_c,
and
p^eff_t=8π(p-ρ(p+ρ/2)/ρ_c/2) -4U/ρ_c+4P/ρ_c.
On adding the effective energy density and pressure, we have
ρ^eff+p^eff_r=1/ρ_c ( -16πC_1^2 H_1 ( e^- H_1 Br^2/2μ) ^2+8πC_1ρ_c H_1 e^- H_1 Br^2/2μ+8√(ρ_0C_1) e^H_1 Br^2/4μ( ω-2 ) )
ρ^eff+p^eff_t=1/ρ_c( -16πC_1^2 H_1 ( e^- H_1 Br^2/2μ) ^2+8πC_1ρ_c H_1 e^- H_1 Br^2/2μ+4√(ρ_0C_1) e^ H_1 Br^2/4μ( ω-4 ) )
The variation of the sum of effective pressures and density for matter obeying a linear EoS, constituting the wormhole on the braneworld with a
timelike extra dimension, along the radial distance is plotted in Fig. 3. On using the values of the model parameters obtained from the junction
conditions, it turns out that the NEC is violated effectively.
§.§ The Junction Conditions
We can see from the variation of the energy density ρ that, as we approach the surface of the wormhole, there is an increase in the matter
density, which indicates the presence of matter at the surface of the wormhole. This, however, does not disturb the asymptotic flatness of the wormhole, as the flare-out condition is not violated since we are concerned with the effective matter on the brane and not the physical matter. This gives rise to an extrinsic discontinuity at the surface, resulting in the generation of an intrinsic surface energy density and surface pressure.
resulting to the generation of intrinsic surface energy density and surface pressure. The surface of the wormhole acting as a junction
between its interior and exterior spacetimes leads to the geodesic completeness of the wormhole, characterized by a perfect fluid matter
configuration. The junction conditions suggest a smooth matching of the interior and exterior spacetimes at the junction, involving a
continuity of the metric potentials, which however does not ensure the continuity of derivatives of the potentials. Thus, the surface
stress-energy can be obtained following the prescription of Darmois and Israel <cit.>.
The intrinsic surface stress energy tensor S_i^j is given by the Lanczos equation <cit.> as
S_j^i=-1/8π (κ_j^i-δ_j^iκ_k^k),
where discontinuity in the second fundamental form is given by
κ_ij=κ_ij^+-κ_ij^-.
We obtain the second fundamental form using the expression
κ_ij^±=-n_ν^±[∂^2X_ν/∂ξ^i∂ξ^j+
Γ_αβ^ν∂ X^α/∂ξ^i∂ X^β/∂ξ^j]|_S,
such that the unit normal vector n_ν^± have the form
n_ν^±=±|g^αβ∂ f/∂ X^α∂ f/∂ X^β|^-1/2∂ f/∂ X^ν.
Also, n^νn_ν=1 and ξ^i denotes the intrinsic coordinate of the wormhole surface having f(x^α(ξ^i))=0 as its parametric equation. + describes the spacetime exterior to the wormhole, while - describes the interior spacetime of the wormhole.
The solution at the exterior of the wormhole is a vacuum solution but as we have accounted for the projected bulk Weyl tensor on the brane to have a non-zero contribution, so the vacuum will possess a tidal charge which will effectively modify the line element for the vacuum, which is given by
ds^2=-(1-2M/r-Q/r^2)dt^2+(1-2M/r-Q/r^2)^-1dr^2+r^2(dθ^2+sin^2 θ dϕ^2),
where M represents the mass of the wormhole and Q represents the tidal charge, which is a dimensionless quantity arising from the effective stress-energy components of the projected Weyl term on the brane. The higher dimensional bulk gravitational effect transferred by the projection of the Weyl tensor to the brane modifies the dynamics not only inside the wormhole but also in the vacuum exterior to its surface boundary. If this correction to the effective energy density in the exterior spacetime is U_e, then U_e∼Q/r^4. So, the tidal charge parameter can be either greater than or less than zero in accordance with the sign of the effective energy density U_e that arises from the effective extra-matter-like contribution due to the projection of the bulk Weyl tensor on the brane. A positive U_e corresponds to a positive Q and vice versa. Generally, for a braneworld with a spacelike extra dimension, the quantity U_e is negative to ensure confinement of the gravitational field to the brane, and so is Q; but as we are interested in a braneworld with a timelike bulk signature, we have assumed the positive root solution for the effective energy density U inside the wormhole as well. The negative sign before the Q term is also a consequence of the extra dimension being timelike. The parameter Q having a small non-zero positive or negative value introduces significant extra-dimensional UV corrections to standard GR results. In the SS braneworld scenario, the sign of the U_e term will be positive for confining gravity to the brane, aided by the negative brane tension, and so the tidal charge is also positive. Had it been negative, as in the RS scenario, the exterior spacetime would have resembled a Reissner-Nordström one.
Thus, the surface density is given by
Σ = -1/2π r[√(e^λ)]_-^+=1/2π r(√(1-2M/r-Q/r^2)-.
√(1- e^-2Br^2( 8πC_1^2 F_2 μ e^Br^2H /μ/B ρ_c H G + G e^2Br^2 +C_2/G -112πC_1μ^2 e^Br^2 F /2μ/B F F_1 G -16 ( ω-1 ) μ√(ρ_0C_1) e^Br^2 F_1 /4μ/ B ρ_c F_1 G)) ,
The surface pressure has the form
𝒫 =1/16π r [(2f+f^' r/√(f)) ]_-^+ =3 e^-2Br^2/ G ^2 F ρ_c H r^4Bπ( B ( F G^2/12) ρ_c H ( 2Mr^2-r^3+ ( Q-M ) r-Q ) e^Br^2.
.√(( 2μ^2C_1 e^3Br^2/2 e^Br^2/2μ/ B F G - C_2 e^Br^2/μ/ G +F_2 e^Br^2μC_1^2/ B ρ_c H G ) e^-Br^2/μ) +√(1-2M/r-Q/r^2)( - e^Br^2 F /2μμρ_c C_1H /6( r^3 H_1 B^2-Gμ...
... +B ( 5μ+1) r/2) F/3( ( r^3 H_1 B^2-G μ/2+rB(3μ+1)/2) C_1^2 F_2 e^Br^2H /μ/2+B ρ_c H C_2( B^2r^3-G/4+Br ) ) ) r^3)
e^Br^2/√(1/B ρ_c FH G ( 2μ^2C_1ρ_c e^3 Br^2/2H e^Br^2/2μ- ( BC_2ρ_c H e^Br^2/μ+2F_2 μC_1^2 e^Br^2/2) ( F/3) ) ( e^-Br^2/μ))1/√(1-2M/r-Q/r^2).
A static wormhole is characterized by a vanishing surface energy density and surface pressure Σ =𝒫=0 at the boundary, yielding the condition
b(r)|_r=R=2M+Q/R.
This is one of the boundary conditions that we shall use to obtain the unknown constant model parameters. In addition, the junction conditions imply the metric potential g_tt and its derivative δ g_tt/δ r to be continuous across the surface boundary at r=R.
On choosing physically realistic values of the model parameters M=0.5789993753 M_⊙, Q = 0.008, ρ_c = 0.41m^4, r_0=0.5km and R=3km we obtain B = -0.000125km^-2, μ = 0.41, ρ_0 = 0.14, C1=0.02273840355, C2= -5681.061699 and ω = 0.3450530290. The significance of the vital physical parameters shall be discussed in the concluding section.
§.§ Tidal acceleration
It is essential to constrain the velocity of the traveller at the throat of the wormhole by setting up a realistic limit on the tidal forces at the throat, as infinitely large tidal forces would lead to a Weyl curvature singularity at the throat resulting in a pinch-off, that would rip the traveller apart. We take a realistic upper limit on the tangential and radial components of the tidal acceleration to be the acceleration due to gravity on the Earth.
The radial component of the tidal acceleration can be computed in terms of the Riemann curvature tensor as
|R_rtrt|=|(1-b/r)[ν''/2+ν'^2/4 - ((b'r-b)/(2r(r-b)))(ν'/2)]|≤ g_Earth.
The tangential component of tidal acceleration using the Riemann tensor is given by
γ^2 |R_θ t θ t|+γ^2v^2 |R_θ r θ r|=|γ^2/2r^2[v^2(b'-b/r)+(r-b)ν']|≤ g_Earth.
Here, the Lorentz factor is given as γ=1/√(1-v^2) and v denotes the velocity of the traveller traversing the throat of the wormhole. On a realistic note, the traveller being a macroscopic object, the velocity of the traveller must be much less than unity. So, γ≈ 1 appears to be quite a reasonable approximation. Using the assumed redshift function in the form of the Kuchowicz potential and the shape function obtained by solving the modified field equations on the brane, an upper limit on the velocity of the traveller traversing the wormhole throat can be obtained using the above inequality as
v≤ 0.05341107√(g_Earth).
We also have a reasonably small radial tidal acceleration at the wormhole throat.
§.§ Linearized stability analysis
In this section we shall perform a qualitative linearized stability analysis, for which we shall consider the throat radius of the wormhole to depend on the proper time. We consider the throat radius r_0=x(τ). On such a consideration, the energy density Σ and pressure 𝒫 can be expressed by the equations
Σ=-1/2π x√(f(x)+ẋ^2),
and
𝒫=1/8πf'(x)/√(f(x))-Σ/2,
such that f(x)=1-2M/x-Q/x^2 with the parameter M denoting the wormhole mass and Q the tidal charge.
The conservation of energy-momentum yields the equation of motion
ẋ^2+V(x)=0.
Here, the potential V(x) can be expressed as
V(x)=f(x)-[2π x Σ (x)]^2.
In order to perform a stability analysis, we need to consider a linearization around an assumed static solution x_0 for the equation of motion given by Eqn. (36).
On Taylor expansion of the potential V(x) upto the second order around x_0, we have
V(x)=V(x_0)-V'(x_0)(x-x_0)+1/2V"(x_0)(x-x_0)^2+O[(x-x_0)^3],
prime representing derivative with respect to x.
As we consider a static wormhole on the brane, described by a time independent line element, so both the potential and its first derivative with respect to x shall vanish at the assumed static solution for the equation of motion x_0. Thus the wormhole will be stable only for V"(x_0) > 0. We introduce a parameter β=δ𝒫/δΣ, in terms of which we obtain the stability condition for the wormhole.
The second derivative of the potential with respect to x can be expressed in terms of the energy density, pressure, parameter β and the function f(x) as
V''(x)=f''(x)-8π^2[(Σ +2𝒫)^2+ Σ(Σ+𝒫)(1+2β)].
Thus, the stability condition for the wormhole in terms of the parameter β is
β< f”(x_0)/8π^2-(Σ +2𝒫)^2-2Σ(Σ+𝒫)/4 Σ (Σ +𝒫).
Plugging in the expressions for Σ and 𝒫, the above inequality reduces to
β< x_0^2 (f_0')^2-2x_0^2 f_0” f_0/4 f_0(x_0 f_0' -2f_0)-1/2
The parameter β turns out to have the form
β=-2r^4+ (-12π +9 ) Mr^3+ ( (-16π +5 ) Q+20M^2(π -1/2) ) r^2+36 ( π -11/36) M Q r+12Q^2( π -1/4) /8r ( 2Mr-r^2+Q ) π(3Mr-r^2+2Q )
The stable regions for our wormhole model have been indicated as regions 1,2 and 3 in Fig. 4 whose significance is discussed briefly in the concluding section.
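For reference, the right-hand side of the inequality above can be evaluated mechanically for the exterior function f(x)=1-2M/x-Q/x^2. The snippet below is our own sketch: it only reproduces the threshold symbolically, so that it can be compared with the stable regions shown in Fig. 4 once numerical values of M, Q and x_0 are inserted.

import sympy as sp

x, M, Q = sp.symbols('x M Q', positive=True)

f = 1 - 2 * M / x - Q / x**2          # exterior metric function with tidal charge
f1, f2 = sp.diff(f, x), sp.diff(f, x, 2)

# Right-hand side of the stability condition beta < ...
threshold = (x**2 * f1**2 - 2 * x**2 * f2 * f) / (4 * f * (x * f1 - 2 * f)) - sp.Rational(1, 2)
print(sp.simplify(threshold))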
§.§ Surface Redshift
As we have seen, our wormhole model can be constituted of normal matter that obeys the null and strong energy conditions, and hence the presence of photons cannot be ruled out. If any such photon is emitted from the surface of the wormhole, it will experience a redshift as it travels from a lower to a higher gravitational potential, due to the loss of energy in escaping the gravitational field of the wormhole. This is a consequence of the fact that photons have a non-zero gravitational mass.
The surface redshift of the wormhole can be be obtained by using the formula
Z_s = -1+1/√(g_ tt) =-1+1/√(C^2 e^Br^2).
The surface redshift has been plotted along the radial distance in Fig. 5. For a vanishing cosmological constant, the surface redshift (Z_s) must not exceed the value 2 in order to ensure stability<cit.>. The value, however, should not exceed 5 in the presence of a Λ-term. As we can see from Figure 5, the surface redshift obtained for the wormhole indicates its stability.
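As a quick numerical illustration (our own, assuming C = 1 as suggested in the concluding discussion; this is not a fitted value from the paper), the surface redshift at the boundary R = 3 km with the fitted B is tiny and comfortably below the bound:

from math import exp, sqrt

B = -0.000125        # km^-2, from the junction conditions
C = 1.0              # assumed value of the Kuchowicz parameter C

def Z_s(r):
    return -1.0 + 1.0 / sqrt(C**2 * exp(B * r**2))

print(Z_s(3.0))      # ~ 5.6e-4 at the surface R = 3 km, well below Z_s <= 2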
§.§ Acceleration and nature of the wormhole
The 'attractive' or 'repulsive' geometrical nature of a wormhole can be inferred from the redshift and shape functions, involving the derivative of the former with respect to the radial coordinate. An 'attractive' wormhole geometry means that the radial component of the four-acceleration of a static observer is positive, implying that the observer requires an outward-directed radial acceleration to refrain from being pulled into the wormhole. On the contrary, a 'repulsive' wormhole geometry is characterized by a negative radial four-acceleration, which physically means that an inward-directed radial acceleration on the static observer is required to prevent being pushed away from the wormhole.
The four-velocity of a static observer in terms of the redshift function can be written as
U^μ=dx^μ/dτ=(e^-ν(r)/2,0,0,0),
τ representing the proper time. In turn, the four-acceleration a^μ=U^μ_;νU^ν has its radial component given by
a^r=ν'/2(1-b(r)/r).
For a test particle initially at rest, the geodesic equation in the radial direction is
d^2 r/dτ^2 = - Γ^r_tt(dt/dτ)^2 = -a^r
The radial four acceleration for our wormhole model can be computed to be
a^r = Br( - e^-2Br^2( 8πC_1^2 F_2 μ e^Br^2H /μ/B ρ_c H G + C_2/G -112πC_1μ^2 e^Br^2 F /2μ/B F F_1 G -16 ( ω-1 ) μ√(ρ_0C_1) e^Br^2 F_1 /4μ/ B ρ_c F_1 G))v^2
A variation of the radial four-acceleration along the radial distance has been plotted in Fig. 5. We find that the radial four-acceleration turns out to be negative, which suggests that our wormhole spacetime has a repulsive geometric nature, as opposed to a wormhole on a brane with a spacelike extra dimension.
§ DISCUSSIONS AND CONCLUSION
In this paper we have attempted to construct a static, spherically symmetric and traversable wormhole on the Shtanov-Sahni braneworld with a timelike extra dimension. It is well known that in a standard relativistic context, a wormhole structure connecting two different spacetime points is unstable at the throat due to the development of a Weyl curvature singularity as a consequence of infinitely large tidal forces. So the wormhole cannot act as a bridge facilitating a shortcut path reducing travel time and distance between the two spacetime points, as speculated by Einstein. A solution to this problem can be approached bidirectionally, either by modifying the geometry sector via the modification of the EH action or by modifying the matter source Lagrangian by introducing sources that violate one or more of the energy conditions. The first solution was provided by Morris and Thorne, who suggested the introduction at the throat of some type of matter capable of violating the NEC, known as exotic matter. Such matter, being characterized by negative pressure, is gravitationally repulsive in nature. However, a possible way to avoid the use of such exotic matter violating the NEC is to modify the gravity sector, as already discussed. Braneworld gravity provides an ideal framework for doing this, as the gravitational contributions coming from the higher dimensional bulk can significantly modify the gravitational dynamics on the brane at considerably high energies. It is to be mentioned here that the braneworld scenario can also significantly modify the gravitational dynamics at low energies (late times)<cit.> with a spacelike extra dimension: considering a non-vanishing curvature term on the brane (m≠ 0) and a non-zero brane tension (σ≠ 0)<cit.>, both the SEC and the NEC are effectively violated on the brane, leading the universe to accelerate at late times with an effective phantom behaviour (ω_eff<-1). As there is no requirement for an exotic phantom fluid to exist physically, the associated problems with ghosts and instabilities appearing in the relativistic context can be avoided, and the acceleration is a consequence of the IR corrections to standard GR introduced by a spacelike extra dimension and the induced curvature on the brane.
The braneworld framework we have used in this paper with a timelike extra dimension is a dual to the Randall-Sundrum single brane model that modifies the dynamics on the brane even more drastically. In the RS model there is a correction term quadratic in stress-energy appearing with a positive sign due to the Lorentzian signature of the bulk. On the contrary, in the SS model, due to the bulk signature deviating from being Lorentzian, a negative sign appears before the quadratic correction term. This results in the avoidance of both the initial singularity and any possible future big crunch singularity provided the matter source satisfies ρ+p≥ 0, which is just the opposite condition required in a standard relativistic context for any likelihood of avoiding the singularity. If the matter source also obeys the condition ρ+3p>0, then the universe goes through an infinite number of non-singular bounces avoiding both the big bang and big crunch resulting in a cyclic universe. Additionally, the effective energy density and effective radial and tangential pressure terms on the brane appearing due to the projection of the bulk Weyl tensor on it also appear with the opposite sign due to the alteration of the bulk signature. So, it would be really interesting to check whether a static, spherically symmetric and traversable wormhole without assuming any form of charge to be possessed by the matter distribution, can be constructed on the SS brane and if so, under what conditions. Also, we study a few features of such a wormhole.
We have assumed the redshift function to be described by a Kuchowicz metric potential. The physical motivation behind the use of this function comes from the fact that it has been used to model the interior of gravastars where the central curvature singularity of a black hole is removed by assuming an ad hoc EoS source by gravitational condensate matter. Since, for the wormhole throat to be stable and traversable, the Weyl curvature singularity is to be avoided at the throat, so using a regular well behaved metric function that remains finite for finite radial distances may be a good choice. Moreover, such a metric potential has also been used as a wormhole redshift function in literature<cit.>. With the assumed redshift function and assuming perfect fluid matter on the brane described by a linear EoS, we use the modified EFE on the SS brane along with the stress-energy conservation equation to obtain the energy density of matter in the wormhole and the unknown metric potential in the form of the wormhole shape function. As we can see from the variation of the energy density along the radial distance in Fig. 1, the energy density remains positive throughout the wormhole and it is minimum at the throat but rises exponentially as we move towards the surface of the wormhole. It might appear apparently that a non-vanishing energy density at the surface violates the flare-out condition and the asymptotic flatness, but the effective energy density vanishes at the surface due to the higher dimensional corrections in energy and thus asymptotic flatness and flaring out is ensured. It is the effective matter distribution on the brane which includes the normal physical matter plus the effective matter arising due to local and non-local corrections that makes the construction of the traversable wormhole possible. This shows that there is no requirement for exotic matter to stabilize the throat as any such requirement would have maximized the matter distribution at the throat rather than minimizing it. The obtained shape function is also plotted along the radial distance and its shape resembles the shape of the wormhole very aptly. It is to be noted that the other desirable properties that must be possessed by a shape function to describe a physically consistent wormhole model, as prescribed by Morris and Thorne are also satisfied. Namely, (i) b(r_0)=r_0, (ii) b(r)/r<1 for all r>r_0, where r_0 denotes the throat radius. The bending of the shape function for an increasing radial distance from the throat indicates the presence of a potential required to be overcome by the traveller which justifies the repulsive geometry of the wormhole.
Next, the validity of the NEC has to be verified for the matter constituting the wormhole on the brane. It is essential that the NEC be violated in order to ensure traversability and stability of the wormhole. As evident from Fig. 3, it turns out that the NEC is violated by the effective matter description on the brane, although we consider normal matter constituting the wormhole on the brane. This means that the sum of the effective density and effective pressure on the brane turns out to be negative for both the radial and tangential components. For drawing all the plots, we do not choose arbitrary values for the constant model parameters; as shown in the section on junction conditions, we have obtained the parameters using the boundary conditions. First, the surface density and surface pressure of the wormhole are obtained making use of the Israel-Darmois junction conditions, and the boundary conditions then follow from these junction conditions. Physically justifiable values are chosen for some of the model parameters, namely the throat radius r_0, the wormhole surface at r=R, the mass of the wormhole M, the tidal charge Q, the Kuchowicz parameter C and the critical density ρ_c. Applying these values and using the junction conditions, we evaluate the other unknown parameters, including the Kuchowicz parameter B, the EoS parameters μ and ω, the unknown constant ρ_0 in the bulk contribution to the effective energy density, and the integration constants C_1 and C_2.
It turns out that for positive values of the EoS parameter implying gravitationally attractive matter, the NEC is still violated on the brane by the effective matter due to the presence of extra dimensional effects and the obtained shape function satisfies all the necessary criteria mentioned above. This can be accounted for by the high energy or UV corrections to gravity arising from the SS braneworld, where the local corrections arise as quadratic terms in the effective stress-energy tensor components while the non-local terms arising from the projection of the bulk Weyl tensor on the brane may be interpreted as some additional matter with its own effective energy density and effective pressures varying in the radial and tangential components that have an effective radiation-like behaviour. As already discussed earlier, an additional consequence of this Weyl projection is the presence of a tidal charge of positive magnitude in the vacuum spacetime exterior to the wormhole surface on the brane. This also contributes to the fact that normal matter can constitute a wormhole on the SS brane due to the effective violation of NEC. On varying the assumed model parameters, we find that a stable traversable wormhole can be obtained on the brane for multiple positive values of the EoS parameter μ but the best obtained wormhole shape is found at a value of 0.41. The corresponding value of the other EoS parameter describing the effective matter responsible for bulk contribution to the brane turns out to be around 0.345. Importantly, the Kuchowicz parameter B turns out to be negative, which for a unit value of the other Kuchowicz parameter C guarantees the asymptotic flatness as both the redshift and shape functions turn out to be finite as we move to infinitely large radial distances away from the throat of the wormhole.
Another important feature ensuring the traversability of a wormhole is the finiteness of the tidal acceleration at the throat. The radial and tangential tidal accelerations are expressed in terms of components of the Riemann curvature tensor. We constrain the velocity of a traveller traversing the wormhole throat by putting a realistic limit on the tangential tidal acceleration. Also, the radial acceleration is sufficiently small to ensure that no Weyl singularity can occur at the throat that could rip the traveller apart while attempting to traverse it. The Weyl singularity is avoided at the throat with matter obeying the NEC due to the extra dimensional effects of gravity coming into play. As gravity is free to access the bulk spacetime, the extra dimensional effect can be realized on the brane gravitationally. The effective matter in the two different spacetimes connected by the wormhole throat on the SS brane does not gravitationally pull the throat in opposite directions, which would render it unstable. This can be further verified using a linearized stability analysis, from which we obtain the regions of stability of the wormhole, denoted by regions 1, 2 and 3 in Fig. 4, in terms of the introduced parameter β. The parameter β is evaluated in the effective potential formalism, where the potential is constructed from the surface density and β is obtained in terms of the surface density and surface pressure. For the equation of motion obtained by treating the throat radius as a function of proper time, stable regions correspond to a minimum of the effective potential, i.e. the first-order derivative of the potential with respect to the throat radius vanishes while the second-order derivative is positive; this reduces mathematically to an inequality in terms of the parameter β. We also obtain the radial four-acceleration of the wormhole and plot its variation along the radial distance in Fig. 5 in order to assess the attractive or repulsive nature of the wormhole geometry obtained on the SS brane. If this quantity turns out to be negative, the wormhole geometry is repulsive, meaning that a static observer in the vicinity of the wormhole mouth requires a radially inward acceleration in order not to be pushed away from the wormhole. Our wormhole model on the SS brane is found to be characterized by such a repulsive geometry, so an inward directed radial acceleration is required by the observer to traverse the wormhole. The surface redshift has been computed for our wormhole model, and the values obtained as the radial distance is varied also point towards the stability of our wormhole model.
There have been a few attempts in the literature to design ideas for the possible detection or observation of wormholes<cit.>. One such possibility arises as a consequence of the individual fluxes associated with the two different spacetimes connected by the wormhole not being conserved. The effect of this can be realized through mutual effects on massive objects in the vicinity of either wormhole mouth. An example may be an unexplained effect on the orbits of stars present in the vicinity of the black hole at the centre of our galaxy, which is believed to possibly harbor a wormhole<cit.>. The mass density of the wormhole has been constrained by an upper limit obtained from studying its possible micro-lensing effects, which are believed to resemble gamma ray bursts<cit.>. There is also the possibility of emission of radiation pulses due to a wormhole, which can possibly be detected<cit.>. The quasinormal ringing of a black hole can be distinguished from that of a wormhole supported by phantom matter with a specific EoS. With or without applying the thin-shell formalism, the ringing can be distinguished either at late times or at all times<cit.>.
It is to be noted that for our wormhole model both the SEC and the NEC are respected by the matter constituting the wormhole on the brane. In a braneworld model with a timelike extra dimension, a singularity-free universe can be realized through the avoidance of the initial big bang singularity and of a possible future big crunch singularity, the singularity being replaced by a bounce, provided the constituent matter obeys the NEC. For our wormhole model as well, a stable wormhole can be obtained if the matter constituting the wormhole obeys the NEC. Also, in such a braneworld model there can be an infinite number of bounces leading to a cyclic universe, with each successive cycle characterized by an increase in amplitude, leading to a natural resolution of the flatness problem of standard cosmology provided the constituent matter obeys the SEC. Our wormhole model is in agreement with this condition too, since for the wormhole to be stable and divergence-free at the throat, avoiding infinitely large tidal forces, the constituent matter of the wormhole on the brane must obey the SEC. So, to obtain a traversable wormhole on the SS brane, both conditions necessary to realize a non-singular bounce and a cyclic cosmology must be obeyed in the absence of an induced curvature term on the brane. This leads to the realization that wormholes can be expected to exist naturally in such a higher dimensional cyclic universe on the brane. The tubular structure of the wormhole, spreading out at infinitely large distances to become asymptotically flat, is realized much better than in a braneworld with a spacelike extra dimension, and the wormhole can be constructed with considerably lower mass for a timelike extra dimension, thus minimizing the amount of matter required. One finds that the SS braneworld model is capable of realizing singularity-free solutions both in the cosmological context and in the context of a wormhole with a matter source obeying the energy conditions. Both the Riemann curvature singularity due to diverging energy density and the Weyl curvature singularity due to diverging tidal forces at the wormhole throat can be resolved in the SS braneworld model with normal matter. Moreover, this extra-dimensional model is capable of resolving some of the shortcomings of standard GR in the UV limit. A possible reason for this may be that, due to the presence of the timelike extra dimension, some gravitationally repulsive effect takes over (which is not due to the presence of any effective Λ or electrodynamic charge and depends solely on the extra dimension being timelike) when the energy density or tidal force grows extremely large, and this effect is responsible for turning the singular collapse, inevitable in the context of standard GR, into a smooth non-singular transition where the usual notion of spacetime is preserved.
§ ACKNOWLEDGMENTS
The authors are extremely thankful to Prof. Varun Sahni (IUCAA, Pune) and Prof. Yuri Shtanov (BITP, Kiev) for their kind and extremely helpful discussions and comments, without which the paper would not have been possible in its present form. The authors are also grateful to them, as the writing of the paper is largely inspired by the discussions with them.
MK is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for providing the Visiting Associateship under which a part of this work was carried out. RS is thankful to the Govt. of West Bengal for financial support through SVMCM scheme. SG is thankful to the Directorate of Legal Metrology under the Department of Consumer Affairs, West Bengal for their support.
99
Sahni4 Y. Shtanov and V. Sahni, Phys.Lett.B 557 (2003) 1.
Ludwig1916 F. Ludwig, Physikalische Zeitschrift 17 (1916) 448.
ER1935 A. Einstein and N. Rosen, Phys. Rev. 48 (1935) 73.
FH1962 R. W. Fuller and J. A. Wheeler, Phys. Rev. 128 (1962) 919.
MT1988 M. S. Morris and K. S. Thorne, Am. J. Phys. 56 (1988) 395.
Hawking S.W. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-Time (Cambridge University Press, Cambridge, England, 1973).
Sahni1 V. Sahni, Class. Quantum Grav. 19 (2002) 3435.
Peebles2003 P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75 (2003) 559.
LudwickMpla K. Ludwick, Mod. Phys. Lett. A 32 (2017) 1730025.
Riess1998 A.G. Riess et al., Astron. J., 116 (1998) 1009.
Perlmutter1999 S. Perlmutter et al., Astrophys. J., 517 (1999) 565.
Harko T. Harko, F. S. N. Lobo, S. Nojiri and S. D. Odintsov, Phys. Rev. D, 84 (2011) 024020.
NojiriR S. Nojiri and S. D. Odintsov, Int. J. Geom. Methods Mod. Phys., 4 (2007) 115.
Sahni2 V. Sahni and Y. Shtanov, JCAP 0311 (2003) 014.
DDG C. Deffayet, G. Dvali, and G. Gabadadze, Phys. Rev. D 65 (2002) 044023.
Sahni3 V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D 9 (2000) 373.
Sengupta4 P. Paul and R. Sengupta, AHEP, 2020 5249839.
Randall1 L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 3370.
Randall2 L. Randall and R. Sundrum, Phys. Rev. Lett. 83 (1999) 4690.
Polchinski1 J. Polchinski, String Theory, Vol. 2, Superstring Theory and Beyond (Cambridge University Press, 1998).
Polchinski2 J. Polchinski, Fundamental Physics — Heisenberg and Beyond, Chap. 12 (Springer,2004).
Gasperini M. Gasperini, Elements of String Cosmology (Cambridge University Press, 2007).
Rovelli C. Rovelli, Living Rev. Rel. 11 (2008) 5.
Bojowald M. Bojowald, Living Rev. Rel. 8 (2005) 11.
VSK V. H. Satheeshkumar and P. K. Suresh, Int. Scholarly Research Notices, 2011.
Binetruy P. Binetruy, C. Deffayet, U. Ellwanger, and D. Langlois, Phys. Lett. B 477 (2000) 285.
Maeda K. I. Maeda, D. Wands, Phys. Rev. D 62 (2000) 124009.
Langlois D. Langlois, Phys. Rev. Lett. 86 (2001) 2212.
Chen C. M. Chen, T. Harko, M. K. Mak, Phys. Rev. D 64 (2001) 044013.
Kiritsis E. Kiritsis, JCAP 0510 (2005) 014.
Campos A. Campos and C. F. Sopuerta, Phys. Rev. D 63 (2001) 104012.
Sengupta1 R. Sengupta, P. Paul, B. C. Paul, S. Ray, Int. Jour. of Mod. Phys. D 28 (2019) 1941010.
Maartens R. Maartens, Phys. Rev. D 62, (2000) 084023.
Sengupta S. Ray et al., Int. Jour. of Mod. Phys. D 30, (2021) 2150093.
Wiseman2 T. Wiseman, Class. Quant. Grav. 19, (2002) 3083.
Germani C. Germani and R. Maartens, Phys. Rev. D 64 (2001) 124010.
Deruelle N. Deruelle, arXiv:gr-qc/0111065 (2001).
Wiseman T. Wiseman, Phys. Rev. D 65 (2002) 124007.
Visser M. Visser, D. L. Wiltshire, Phys. Rev. D 67 (2003) 104004.
Creek S. Creek, R. Gregory, P. Kanti, and B. Mistry, Class. Quant. Grav. 23 (2006) 6633.
Pal S. Pal, Phys. Rev. D 74 (2006) 124019.
Bruni M. Bruni, C. Germani, and R. Maartens, Phys. Rev. Lett. 87 (2001) 231302.
Govender M. Govender and N. Dadhich, Phys. Lett. B 538 (2002) 223.
Sengupta5 R. Sengupta et al., Phys. Rev. D 102 (2020) 024037.
S6 N. Kanekar, V. Sahni and Y. Shtanov, Phys.Rev. D 63 (2001) 083520.
S7 V. Sahni and A. Toporensky, Phys Rev D 85 (2012) 123542.
S8 V. Sahni, Y. Shtanov and A. Toporensky, Class. Quantum Grav. 32 (2015) 182001.
S10 Y. Shtanov, Phys. Lett. B 541 (2002) 177.
Barcelo C. Barcelo and M. Visser, Phys. Lett. B 466 (1999) 127.
Hayward S. A. Hayward, Phys. Rev. D 65 (2002) 124016.
Picon C. Armendariz-Picon, Phys. Rev. D 65 (2002) 104010.
Sushkov S. Sushkov, Phys. Rev. D 71 (2005) 043520.
Lobo F. S. N. Lobo, Phys. Rev. D 71 (2005) 084011.
Zaslavskii O. B. Zaslavskii, Phys. Rev. D 72 (2005) 061303.
Chakraborty S. Chakraborty and T. Bandyopadhyay, Int. J. Mod. Phys. D 18 (2009) 463.
Sengupta3 R. Sengupta, S. Ghosh and M. Kalam, Annals of Phys. 439 (2022) 168778.
Bhawal B. Bhawal and S. Kar, Phys. Rev. D 46 (1992) 2464.
Bhadra A. Bhadra and K. Sarkar, Mod. Phys. Lett. A 20 (2005) 1831.
Eiroa E.F. Eiroa and C. Simeone, Phys. Rev. D 71 (2005) 127501.
Bertolami O. Bertolami and R.Z. Ferreira, Phys. Rev. D 85 (2012) 104050.
Moraes P.H.R.S. Moraes, R.A.C. Correa and R.V. Lobato, JCAP 07 (2017) 029.
Agnese A.G. Agnese and M. La Camera, Phys. Rev. D 51 (1995) 2011.
He F. He and S.-W. Kim, Phys. Rev. D 65 (2002) 084022.
Dzhunushaliev V. Dzhunushaliev and D. Singleton, Phys. Rev. D 59 (1999) 064018.
Bronnikov K.A. Bronnikov and S. W. Kim, Phys. Rev. D 67 (2003) 064027.
Lobo2 F.S.N. Lobo, Phys. Rev. D 75 (2007) 064027.
Banerjee A. Banerjee, P. H. R. S. Moraes, R. A. C. Correa, and G. Ribeiro, arXiv:1904.10310 [gr-qc].
Chakraborty2 S. Chakraborty and T. Bandyopadhyay, Astrophys. Space Sci. 317 (2008) 209.
Rahaman1 F. Rahaman, M. Kalam, K. A. Rahman, S. Chakraborti, Gen. Rel. Grav. 39 (2007) 945.
Wang D. Wang and X.-H. Meng, Front. Phys. 13 (2018) 139801.
Sengupta2 R. Sengupta et al., Class. Quant. Grav. 39 (2022) 105004.
Salcedo J. L. Blázquez-Salcedo, C. Knoll and E. Radu, Phys. Rev. Lett. 126 (2021) 101102.
Konoplya R. A. Konoplya and A. Zhidenko, Phys. Rev. Lett. 128 (2022) 091104.
Israel W. Israel, Nuo. Cim. 66 (1966) 1.
Darmois G. Darmois, Mémorial des sciences mathématiques XXV (1927) Fasticule XXV Chap. V.
Sahni5 V. Sahni, Y. Shtanov and A. Viznyuk, JCAP 0512 (2005) 005.
S9 A. Iglesias and Z. Kakushadze, Phys Lett B 515 (2001) 477.
Kuchowicz B. Kuchowicz, Acta. Phys. Pol. 33 (1968) 541.
Ghosh2019 S. Ghosh, D. Shee, S. Ray, F. Rahaman and B.K. Guha, Res. Phys. 14 (2019) 102473.
Biswas2020 S. Biswas, D. Shee, S. Ray and B.K. Guha, Eur. Phys. J C 80 (2020) 175.
Lanczos1924 C. Lanczos, Ann. Phys. (Leipzig) 74 (1924) 518.
Sen1924 N. Sen, Ann. Phys. (Leipzig) 378, (1924) 365.
Perry1992 G.P. Perry, R.B. Mann, Gen. Relativ. Gravit. 24 (1992) 305.
Musgrave1996 P. Musgrave, K. Lake, Class. Quant. Gravit. 13 (1996) 1885.
Bohmer2006 C. G. Boehmer and T. Harko, Class. Quant. Grav. 23 (2006) 6479.
Shaikh R. Shaikh and S. Kar, Phys. Rev. D, 96 (2017) 044037.
Li Z. Li and C. Bambi, Phys. Rev. D, 90 (2014) 024071.
Ohgami T. Ohgami and N. Sakai, Phys. Rev. D, 91 (2015) 124020.
Tsukamoto N. Tsukamoto et al., Phys. Rev. D, 86 (2012) 104062.
Nandi K.K. Nandi et al., Phys. Rev. D, 95 (2017) 104011
Shaikh2 R. Shaikh, Phys. Rev. D, 98 (2018) 024044.
DS D. C. Dai and D. Stojkovic, Phys. Rev. D, 100 (2019) 083513.
Torres D. F. Torres, G. E. Romero, L. A. Anchordoqui, Phys. Rev. D, 58 (1998) 123001.
D A. Doroshkevich, J. Hansen, I. Novikov and A. Shatskiy, Int. J. Mod. Phys. D, 18 (2009) 1665.
KZ R. A. Konoplya and A. Zhidenko, J. Cosmol. Astropart. Phys. 12 (2016).
entry_id: http://arxiv.org/abs/2307.00999v1
published: 20230703132554
title: Critical dynamics of long-range quantum disordered systems
authors: Weitao Chen, Gabriel Lemarie, Jiangbin Gong
primary_category: cond-mat.dis-nn
categories: cond-mat.dis-nn, quant-ph
Department of Physics, National University of Singapore, Singapore.
MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit, Singapore.
Centre for Quantum Technologies, National University of Singapore, Singapore.
[email protected]
MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit, Singapore.
Centre for Quantum Technologies, National University of Singapore, Singapore.
Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France.
Department of Physics, National University of Singapore, Singapore.
MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit, Singapore.
Centre for Quantum Technologies, National University of Singapore, Singapore.
Long-range hoppings in quantum disordered systems are known to yield quantum multifractality, whose features can go beyond the characteristic properties associated with an Anderson transition. Indeed, critical dynamics of long-range quantum systems can exhibit anomalous dynamical behaviours distinct from those at the Anderson transition in finite dimensions. In this paper, we propose a phenomenological model of wave packet expansion in long-range hopping systems. We consider both their multifractal properties and the algebraic fat tails induced by the long-range hoppings. Using this model, we analytically derive the dynamics of moments and Inverse Participation Ratios of the time-evolving wave packets, in connection with the multifractal dimension of the system. To validate our predictions, we perform numerical simulations of a Floquet model that is analogous to the power law random banded matrix ensemble. Unlike the Anderson transition in finite dimensions, the dynamics of such systems cannot be adequately described by a single parameter scaling law that solely depends on time. Instead, it becomes crucial to establish scaling laws involving both the finite-size and the time. Explicit scaling laws for the observables under consideration are presented. Our findings are of considerable interest towards applications in the fields of many-body localization and Anderson localization on random graphs, where long-range effects arise due to the inherent topology of the Hilbert space.
Critical dynamics of long-range quantum disordered systems
Jiangbin Gong
August 1, 2023
==========================================================
§ INTRODUCTION
The study of eigenstate transitions in quantum-disordered systems has attracted a strong interest recently <cit.>. One celebrated example is the Anderson transition arising from the interplay between interference effects and disorder, which separates a phase where quantum states are localized from a phase where states are delocalized <cit.>. At the Anderson transition, a property called multifractality emerges as a consequence of strong and scale invariant spatial fluctuations of the states, intermediate between localization and delocalization <cit.>. Given the importance of the Anderson transition, multifractal properties have been extensively investigated, both theoretically and experimentally, in finite-dimension <cit.> and in random matrix ensembles <cit.>. Recently, it was discovered that quantum multifractality can be observed not only at critical points but also in phases called extended non-ergodic <cit.>. For example, the many-body localized phase has been shown to have multifractal properties on the Hilbert space <cit.>. The emergence of such non-ergodic extended phases have also been described in random matrix ensembles <cit.>, on the Cayley tree <cit.>, in Floquet systems <cit.> or in the presence of long-range correlations of disorder <cit.>.
Quantum multifractality can be characterized by the moments P_q of order q of eigenstate amplitudes:
⟨ P_q⟩ =⟨∑_i|Ψ_α(i)|^2q⟩∼ N^-D_q(q-1),
where the sum is over the N sites (indexed by i) of the system, with the eigenstate amplitudes |Ψ_α(i)|^2 normalized as ∑_i|Ψ_α(i)|^2=1. ⟨⟩ denotes an averaging over disorder and eigenstates in a certain energy window. An algebraic scaling of ⟨ P_q⟩ with N defines a multifractal dimension D_q.
D_q=1 indicates an ergodic delocalized behavior, while D_q=0 is a signature of localization. These behaviors are generally observed at sufficiently large scales, e.g. N ≫Λ, with Λ the correlation or localization volume.
Remarkably, 0<D_q<1 indicates scale-invariant multifractal behaviors, whose full characterization is based on a spectrum of multifractal dimensions <cit.>. Multifractal eigenstates thus occupy an extensive region which is however an algebraically vanishing fraction of the system: this is why they are called “non-ergodic delocalized” <cit.>, in contrast to ergodic delocalized states which occupy a finite fraction of the system.
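As a concrete illustration of how D_q is extracted in practice, the following Python sketch (a minimal, hypothetical example of ours, not code from any of the cited works; the Gaussian test ensemble is only a placeholder) computes the disorder-averaged moments P_q of normalized eigenvector amplitudes for two system sizes and takes the logarithmic finite-size derivative.

import numpy as np

def eigenstate_moments(H, q, n_states=32):
    # average P_q = sum_i |psi(i)|^{2q} over eigenstates near the band center
    _, vecs = np.linalg.eigh(H)
    N = vecs.shape[0]
    sel = vecs[:, N // 2 - n_states // 2: N // 2 + n_states // 2]
    return np.mean(np.sum(np.abs(sel) ** (2 * q), axis=0))

def multifractal_dimension(make_H, q, N_small, N_large, n_disorder=20):
    # estimate D_q from the scaling <P_q> ~ N^{-D_q (q-1)} between two sizes
    Pq = {N: np.mean([eigenstate_moments(make_H(N), q) for _ in range(n_disorder)])
          for N in (N_small, N_large)}
    tau_q = -(np.log(Pq[N_large]) - np.log(Pq[N_small])) / np.log(N_large / N_small)
    return tau_q / (q - 1)

# toy ergodic (GOE-like) ensemble, expected to give D_q close to 1
def make_goe(N):
    A = np.random.randn(N, N)
    return (A + A.T) / np.sqrt(2 * N)

print(multifractal_dimension(make_goe, q=2, N_small=256, N_large=512))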
Quantum multifractality, a characteristic property of the Anderson transition in finite dimensions <cit.>, is even richer in the presence of long-range hoppings.
One well-known example is the Power-law Random Banded Matrix (PRBM) model <cit.>. Similarly, Floquet models, particularly the Ruijsenaars-Schneider ensemble <cit.>, exhibit intriguing properties of quantum multifractality. There, long-range hoppings introduce anomalous properties beyond typical features at the Anderson transition in finite dimension. For example, they result in an unusually large critical regime <cit.>, can break a fundamental symmetry of the multifractal spectrum <cit.>, and induce correlation-induced localization <cit.>. In this article, we explore how long-range hoppings also give rise to anomalous dynamical properties.
Simply observing the expansion of a wave packet already serves as a convenient tool for detecting the Anderson transition. Indeed, this has been extensively studied both theoretically and experimentally <cit.>. Precisely at the Anderson transition point, the wave packet expansion exhibits anomalous diffusion, intermediate between localization and diffusion <cit.>; a more careful quantitative analysis of the wave packet spatial profile is then needed to see the impact of multifractality. Other observables, such as the return probability or the coherent back and forward scattering peaks, are more useful for studying the multifractal properties of the eigenstates and the Cantor eigen-spectrum <cit.>.
Investigating the quantum dynamics of long-range hopping systems <cit.> in connection with multifractality is more challenging, insofar as the dynamics is strongly affected by the algebraic tails induced by the long-range hoppings (analogous to Lévy flights <cit.>), as we will show in this paper (see also <cit.>). In particular, the strong boundary effects caused by the algebraic tails present a severe challenge in computational studies. As shown in this work, it is not possible to circumvent these strong boundary effects by increasing the system size.
In other words, one cannot reach a regime where expansion of a wave packet
is not affected by finite-size effects, thus requiring more scaling analysis than in the case of the Anderson transition studied both theoretically and experimentally in different platforms <cit.>. Interestingly, in the cases we consider, it is necessary to take into account systematically the boundary effects via a scaling in time, in addition to the system size. Indeed, the focus of this paper is on understanding the subtle critical dynamical behaviors induced by long-range hopping via a two-parameter (time and size) scaling approach. Some results were already discussed in Refs. <cit.> using different approaches. This paper distinguishes itself from these studies by providing a coherent description of long-range coupling effects based on an unified model of wave packet propagation in these systems.
Studies of the dynamics of quantum systems are generally computationally expensive. In comparison, simulating the dynamical counterpart of Floquet kicked systems can be made more efficient, as is clear in the kicked rotor dynamics implemented via Fast Fourier Transforms <cit.>. Besides their computational efficiency, Floquet kicked systems also exhibit rich dynamical behaviors such as dynamical localization <cit.>, or Floquet time crystals <cit.>.
In this work, we employ a Floquet kicked model with algebraically long-range hoppings and eigenstates with multifractal properties <cit.> to simulate numerically the critical dynamics in such long-range hopping systems. We propose scaling laws whose scaling parameters include both time and system size, for different observables, based on a general and simple phenomenological model of wave packet expansion in the type of systems considered. Our analytical and numerical results demonstrate distinct dynamical behaviors depending on the observables considered.
As an outlook, we note that algebraic fat tails in time-evolving wave packets are also relevant to studies of quantum dynamics on various graphs of infinite effective dimension, such as Anderson localization on random graphs <cit.> or the Hilbert space of a many-body localized system <cit.>. The Hilbert space of these systems has a network structure in which the number N_r of sites at distance r from the localization center of a wave packet grows exponentially; therefore the exponential decay of the wave packet with distance r can be regarded as an algebraic behavior as a function of N_r. In the new coordinate N_r, important localization measures like the inverse participation ratio can be more easily studied, since the network structure is simplified to 1-D. Hence, our findings hold potential relevance in this context, which has recently gathered significant attention.
The rest of the paper is organized as follows. In Sec. <ref>, we introduce the kicked Floquet model we consider and discuss its multifractal properties. In Sec. <ref>, we recall the temporal behavior and finite-size dependence of the return probability ⟨ R_0⟩ and generalize these known results to higher moments ⟨R_0^q(t)⟩. In Sec. <ref>, we propose a general phenomenological model of wave packet expansion in long-range hopping systems with multifractal properties, based on both analytical arguments and numerical observations. In Sec. <ref>, we derive from the phenomenological model the dynamics and time and size scaling laws for two other important types of observables (some may be accessible experimentally): the average k-th moments of a wave packet ⟨ p^k⟩ and the q-th Inverse Participation Ratio ⟨ P_q (t) ⟩. We present numerical results that validate our predictions. We conclude our study in Sec. <ref>.
§ THE MULTIFRACTAL KICKED ROTOR MODEL
In this article, we investigate a variant of the quantum kicked rotor <cit.> that we call the multifractal kicked rotor (MKR) model, with Hamiltonian <cit.>
ℋ=p^2/2+KV(q)∑_nδ(t-n),
where
V(q) = ln (q/π) for q∈ [0,π), and V(q) = ln (2-q/π) for q∈ [π,2π),
and V(q+2π)=V(q). Hamiltonian Eq. (<ref>) yields a Floquet operator U=exp(-ip^2/2ħ)exp(-iKV(q)/ħ), which can be quantized in a truncated Hilbert space of dimension N with p=Pħ, P an integer between -N/2 and N/2-1, and q=2π Q-ε/N, Q an integer between 1 and N, satisfying periodic boundary conditions in both P and Q. However, note that we have assigned the value ε=1 for q∈ [π,2π) (i.e., when Q=1,…, N/2), while for other values of q we have set ε=0, to prevent numerical divergence. Such slight shifts break the symmetry of the kicking potential Eq. (<ref>) with respect to the axis q=π; consequently, the time-reversal symmetry of the Hamiltonian is broken. The phases corresponding to the kinetic energy Φ_P≡ P^2ħ/2 are pseudo-random phases when ħ is incommensurate with 2π <cit.>. Here, we consider Φ_P
as fully-random phases, uniformly distributed over [0,2π). Without loss of generality, we set ħ=1 in the rest of the paper. We can therefore treat p and P as the same variable, and we will no longer use the notation P in the following.
The Floquet operator can be explicitly expressed in the momentum space using a discrete Fourier transform as
U_pp^'=e^-i Φ_p∑_Q=1^NF_pQe^-iKV(2π Q/N)F_Qp^'^-1,
where F_pQ=1/√(N)e^2iπ pQ/N. Due to the singular behavior of V(q) when q→ 0 (2π), the amplitudes of the matrix elements of U_pp^' decay as
|U_pp^'|∼1/|p-p^'|
for large |p-p^'| (note that there is another higher-order singularity at q=π which can be negelected), see <cit.> and App. <ref> for more details).
In App. <ref>, we characterize the multifractal properties of the MKR model, and in particular extract the multifractal dimension D_2=0.71 for K=10 by analyzing the system size dependence of eigenstate moments numerically. Another Floquet system, the Ruijsenaars-Schneider model with similar long-range hopping amplitudes, has been extensively studied for its multifractal properties <cit.>, spectral statistics <cit.>, and rich dynamics <cit.>.
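To make the construction concrete, the short Python sketch below (our own illustrative implementation with hypothetical parameter values, not the authors' simulation code) builds the N×N Floquet matrix of Eq. (<ref>) with fully random kinetic phases and the log-singular kick potential, and applies it repeatedly to a wave packet initialized at p=0.

import numpy as np

def mkr_floquet(N, K, rng):
    # U_{pp'} = e^{-i Phi_p} [F diag(e^{-i K V(q)}) F^dagger]_{pp'}
    Q = np.arange(1, N + 1)
    q = 2 * np.pi * Q / N
    with np.errstate(divide='ignore'):
        V = np.where(q < np.pi, np.log(q / np.pi), np.log(2.0 - q / np.pi))
    V[-1] = -np.log(np.pi * N)          # epsilon shift regularizes the point q -> 2*pi
    p = np.arange(-N // 2, N // 2)
    F = np.exp(2j * np.pi * np.outer(p, Q) / N) / np.sqrt(N)
    phi = rng.uniform(0.0, 2 * np.pi, size=N)   # fully random kinetic phases Phi_p
    return np.exp(-1j * phi)[:, None] * (F @ np.diag(np.exp(-1j * K * V)) @ F.conj().T)

rng = np.random.default_rng(0)
N, K = 1024, 10.0
U = mkr_floquet(N, K, rng)
psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0                        # wave packet launched at p = 0
for _ in range(100):                     # evolve over 100 kick periods
    psi = U @ psi
print("R_0 after 100 kicks:", np.abs(psi[N // 2]) ** 2)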
§ RETURN PROBABILITY R_0
Before describing the behavior of observables which are significantly affected by the long-range hoppings introduced above, we recall the properties of a dynamical observable, the return probability R_0, which has been extensively investigated as a characteristic signature of quantum multifractality <cit.>. Starting from an initial condition ψ(p,t=0) = δ_p,0, R_0 is defined as R_0 ≡|ψ(p=0,t)|^2.
As a result of the Cantor eigenspectrum, R_0 decays as a power law in the time domain, ⟨ R_0⟩∼ t^-D_2^μ where D_2^μ is the multifractal dimension of the spectral measure <cit.>.
In our study, higher moments ⟨ R_0^q⟩ with q>0 will play a key role. Due to narrow distributions of large wave function amplitudes |ψ|^2 in such systems, see <cit.> and Appendix <ref>, the power-law decay of ⟨ R_0⟩ can be simply generalized to ⟨ R_0^q⟩ with q>0 as,
⟨ R_0^q⟩∼ t^-qD_2^μ,
as illustrated in Fig. <ref>.
On the other hand, in a finite system of size N, there exists a characteristic time scale t^*_N after which R_0 reaches a finite stationary value, equal to the inverse participation ratio ⟨ P_2⟩, Eq. (<ref>), i.e. ⟨ R_0(t→∞)⟩ =⟨ P_2⟩∼ N^-D_2^ψ,
where D_2^ψ is the spatial multifractal dimension of the eigenstates <cit.>. Similarly, we find that the size dependence of ⟨ R_0^q⟩ at large times follows:
⟨ R_0^q(t→∞)⟩∼ N^-qD_2^ψ .
Therefore, the characteristic time t^*_N should scale as
t^*_N∼ N^D_2^ψ/D_2^μ .
t^*_N reduces to the Heisenberg time (inverse of the mean level spacing 2 π/N) for systems with D_2^ψ=D_2^μ <cit.>. Combining the above relations, we can infer the following two parameter scaling behavior for R_0, namely,
⟨ R_0^q(t,N)⟩ =N^-qD_2^ψg(t/t^*_N).
The numerical data for the MKR verify the above scaling relations. Results presented in Fig. <ref> validate Eq. (<ref>) and Eq. (<ref>). By fitting the corresponding data, we extract the multifractal dimensions D_2^μ=0.64 and D_2^ψ=0.70. In Fig. <ref>, the collapse of R_0 onto a single scaling curve when ⟨ R_0⟩ N^D_2^ψ is plotted as a function of t/t^*_N confirms the validity of the proposed scaling law Eq. (<ref>). In App. <ref>, we show similar scaling properties for ⟨ R_0^q⟩ with q=0.1 and 2. Similar scaling properties for ⟨ R_0 ⟩ (i.e. q=1) have been observed in Ref. <cit.> in both single-particle and many-body quantum systems.
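For completeness, the scaling collapse of Eq. (<ref>) can be tested numerically along the following lines (an illustrative Python sketch of ours that reuses the mkr_floquet routine sketched in Sec. II; the exponents are taken as the fitted values quoted above, and all other parameter choices are placeholders).

import numpy as np

D2_psi, D2_mu = 0.70, 0.64       # fitted multifractal dimensions quoted in the text
K, T, n_dis = 10.0, 2000, 20

def averaged_R0(N, rng):
    # disorder-averaged return probability R_0(t) for a packet launched at p = 0
    R0 = np.zeros(T)
    for _ in range(n_dis):
        U = mkr_floquet(N, K, rng)
        psi = np.zeros(N, dtype=complex)
        psi[N // 2] = 1.0
        for t in range(T):
            psi = U @ psi
            R0[t] += np.abs(psi[N // 2]) ** 2
    return R0 / n_dis

for N in (256, 512, 1024):
    R0 = averaged_R0(N, np.random.default_rng(N))
    t = np.arange(1, T + 1)
    t_star = N ** (D2_psi / D2_mu)
    # curves of R0 * N^{D2_psi} versus t / t_star should collapse onto g(t/t*_N)
    np.save(f"R0_collapse_N{N}.npy", np.column_stack([t / t_star, R0 * N ** D2_psi]))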
§ PHENOMENOLOGICAL MODEL FOR THE EXPANSION OF A WAVE PACKET IN MULTIFRACTAL SYSTEMS WITH ALGEBRAIC LONG-RANGE HOPPINGS
We shall now describe the rich and subtle effects of algebraic long-range hoppings on the critical dynamics of a wave packet, effects that cannot be characterized using the widely used return probability.
We construct in this section a phenomenological model, based on known analytical results and simple arguments such as wave packet normalization, and validate this model by numerical simulations using the MKR model Eq. (<ref>). Here we restrict our analysis to the regime p>0, since we expect similar scaling behavior for p<0, as the wave packet is initialized as ψ(p,t=0) = δ_p,0.
Starting from a wave packet initialized at a single site p=0, long-range hoppings will induce a power-law tail of the wave packet. This tail is primarily determined by the hopping elements before any interference effects induced by multifractality occur. If the long-range hoppings follow the behavior described in Eq. (<ref>), then the tail of the wave packet behaves as ⟨|ψ(p)|^2⟩∼ p^-2, the so-called Lévy flight tail <cit.>. However, in the vicinity of the site p=0 where the wave packet was initialized, a non-trivial power-law decay ⟨|ψ(p)|^2⟩∼ p^D_2^ψ-1 dynamically emerges, which is controlled by the spatial correlation dimension D_2^ψ of the wavefunction <cit.>.
Fig. <ref> represents the averaged probability distribution of a wave packet initialized at p=0, ⟨|ψ(p,t)|^2⟩ at different times, for the MKR model Eq. (<ref>). Two distinct power-law decays with p are clearly visible: a fast decay ⟨|ψ(p,t)|^2⟩∼ p^-2 at large p≫ p_c, and a slower decay ⟨|ψ(p,t)|^2⟩∼ p^D_2^ψ-1 close to the initial condition p≪ p_c. The crossover scale p_c has a non-trivial dependence on time which we will describe in the following. It is equivalent to the characteristic scale mentioned in <cit.>, which distinguishes the scaling behaviors of the density correlation function in the position-frequency representation, specifically between the large and small position regimes.
Crucially for our study, we also observe that other moments of wave packet amplitudes, ⟨|ψ(p,t)|^2q⟩ with q>0, obey a similar behavior, see Eq. (<ref>) below. The distributions for different q thus share the same shape, in particular the same p_c (see App. <ref> for more details).
Based on these observations, we propose the following phenomenological model for the average probability distributions of the generalized wave packets ⟨|ψ(p,t)|^2q⟩ for q>0,
⟨|ψ(p,t)|^2q⟩ = ⟨ R_0^q⟩ p^-qμ for 1≤ p< p_c, and ⟨|ψ(p,t)|^2q⟩ = B[p/p_c]^-qλ for p_c< p≤ N/2,
where λ is the exponent of the power-law tail at large p≫ p_c (λ=2 in the MKR model), μ=1-D_2^ψ is the exponent of the power law decay at small p≪ p_c, related to the multifractal dimension D_2^ψ, and B=⟨ R_0^q⟩[p_c]^-qμ. Note that our model is valid only above a microscopic cutoff taken as p_min=1 here. This cutoff usually corresponds to the mean free path, see e.g. <cit.>. In the following, we will neglect contributions below this cutoff, which are not of our interest here.
The crossover scale p_c between the two power-law regimes can be interpreted as a multifractal wave-front. Its dynamics and finite-size scaling play an important role in the following. They can be understood simply by invoking normalization of the wave packet ||ψ||^2≡∑_p=-N/2^p=N/2-1|ψ(p,t)|^2≃ 2∫_1^N/2|ψ(p,t)|^2 dp, where we have taken into account the fact that the wave packet is symmetric with respect to the origin and neglected contributions below the cutoff p< 1. Therefore:
1=||ψ||^2≃ 2[∫_1^p_c⟨ R_0⟩ p^-μdp+∫_p_c^N/2B(p/p_c)^-λdp]
= 2⟨ R_0⟩/1-μ(p_c^1-μ-1)
+ 2⟨ R_0⟩p_c^-μ/1-λ[p_c^λ(N/2)^1-λ-p_c].
The previous expression can be simplified, using μ=1-D_2^ψ, as
(1/D_2^ψ+1/λ-1)p_c ^D_2^ψ-1/λ-1p_c^λ+D_2^ψ-1(N/2)^1-λ
≃1/2⟨ R_0⟩+1/D_2^ψ.
The second term in the left-hand side of the above equality vanishes when N→∞ if λ>1, as is the case in the MKR model considered. Using ⟨ R_0⟩∼ t^-D_2^μ for t<t^*_N, Eq. (<ref>), we get the following dynamical behavior of p_c:
p_c∼ t^D_2^μ/D_2^ψ, (t≪ t^*_N)
In the limit of large times t≫ t^*_N, substituting R_0(t→∞)=P_2∼ N^-D_2^ψ, one gets
p_c(t→∞)∼ N. (t≫ t^*_N)
As said above, we have shown in App. <ref> that the multifractal wave-front p_c is the same for generalized wave packets ⟨|ψ(p,t)|^2q⟩ with different q.
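The crossover scale can also be obtained without dropping the finite-size term by solving the normalization condition numerically. The Python sketch below (our own illustration; the exponent values are placeholders taken from the fits quoted earlier) uses a root finder and reproduces both limits, p_c ∼ t^D_2^μ/D_2^ψ for t ≪ t^*_N and p_c of order N at long times.

import numpy as np
from scipy.optimize import brentq

def crossover_scale(R0, D2_psi, lam, N):
    # solve (1/D2 + 1/(lam-1)) pc^D2 - pc^{lam+D2-1} (N/2)^{1-lam}/(lam-1)
    #       = 1/(2 R0) + 1/D2   for the multifractal wave-front pc
    D2 = D2_psi
    def f(pc):
        lhs = ((1 / D2 + 1 / (lam - 1)) * pc ** D2
               - pc ** (lam + D2 - 1) * (N / 2) ** (1 - lam) / (lam - 1))
        return lhs - (1 / (2 * R0) + 1 / D2)
    return brentq(f, 1.0, N / 2)

D2_psi, D2_mu, lam, N = 0.70, 0.64, 2.0, 4096
for t in (10, 100, 1000, 10000):
    R0 = t ** (-D2_mu)                       # regime t << t*_N
    print(t, crossover_scale(R0, D2_psi, lam, N), t ** (D2_mu / D2_psi))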
§ TWO PARAMETER SCALING IN SIZE AND TIME FOR CRITICAL QUANTUM DYNAMICS OF ALGEBRAICALLY LONG-RANGE SYSTEMS
In this section, we employ the phenomenological model introduced in Eq. (<ref>) to derive the critical dynamics dependent on time and size, in terms of the average k-th moments of a wave packet ⟨ p^k⟩. The observation of ⟨ p^k⟩ is possible in cold atom systems <cit.> and ultrasound experiments <cit.>, thus making it an experimentally accessible observable. Furthermore, we examine the q-th Inverse Participation Ratio ⟨ P_q (t)⟩, which is a significant quantity in standard multifractal analysis; see Ref. <cit.> for more information. Based on these dynamical observables, we propose scaling laws that are dependent on both time and size. In Tab. (<ref>), we summarize the analytical predictions of the finite-time and size-dependent dynamics for both observables. The MKR model of Eq. (<ref>) is used to numerically verify the predicted critical dynamics and their respective scaling laws.
§.§ k-th moment ⟨ p^k⟩ of a wave packet
The average k-th moments of a wave packet ⟨ p^k⟩:
⟨ p^k⟩=∑_p=-N/2^p=N/2-1|p|^k⟨ |ψ(p,t)|^2⟩
reflect the diffusive properties of a system. Note that we have defined the moments using an absolute value of p since the wave packet is symmetric with respect to the origin p=0.
Based on the phenomenological model proposed in Eq. (<ref>),
⟨ p^k⟩≈ 2∫_1^N/2⟨|ψ(p,t)|^2⟩ p^kdp
= ∫_1^p_c2⟨ R_0⟩ p^k-μdp+∫_p_c^N/22Bp_c^λp^k-λdp
= 2⟨ R_0⟩/k+1-μ(p_c^k+1-μ-1)
+ 2⟨ R_0⟩/k+1-λ[p_c^λ-μ(N/2)^k+1-λ-p_c^k+1-μ].
Combining the time-dependent analysis of p_c and ⟨ R_0⟩ in Eq. (<ref>), the time-dependent dynamics of ⟨ p^k⟩ can be derived as
⟨ p^k⟩ ∼⟨ R_0⟩ p_c^k+1-μ+⟨ R_0⟩ p_c^λ-μN^k+1-λ
∼ t^kD_2^μ/D_2^ψ+t^(λ-1)D_2^μ/D_2^ψN^k+1-λ ,
for t≪ t^*_N.
For k<λ-1, the second term of Eq. (<ref>) vanishes when N→∞, yielding ⟨ p^k⟩∼ t^k(D_2^μ/D_2^ψ). This regime was previously investigated in the Fibonacci chain and Harper model in Ref. <cit.>. Nevertheless, for k>λ-1, the second term dominates, contributing ⟨ p^k⟩∼ t^(λ-1)D_2^μ/D_2^ψ. For the MKR model, the power-law tail exponent λ=2, which yields ⟨ p^k⟩∼ t^D_2^μ/D_2^ψ for k>1 and ⟨ p^k⟩∼ t^kD_2^μ/D_2^ψ for 0<k<1. The numerical results shown in Fig. <ref> and Fig. <ref> confirm such predictions. The diffusive exponents for ⟨ p^k⟩ are independent of k when k>λ-1, which is a non-trivial consequence of the power-law tail of the wave-packet induced by algebraic long-range hoppings.
Furthermore, using the finite size scaling of p_c ∼ N and R_0 ∼ N^-D_2^ψ at large t ≫ t^*_N, we can also derive the finite size scaling of ⟨ p^k⟩:
⟨ p^k(t→∞)⟩∼ N^k.
Finally, a two parameter scaling law for ⟨ p^k⟩ can be naturally proposed based on the time-dependence of Eq. (<ref>) and the finite-size dependence of Eq. (<ref>):
⟨ p^k(t,N)⟩=N^kg(t/t^*_N).
The presented numerical results in Fig. <ref> and Fig. <ref> demonstrate that the data for ⟨ p^2⟩ of the MKR model adheres to the proposed scaling behavior. The data collapse onto a single scaling curve when ⟨ p^2⟩/N^2 is plotted as a function of t/t^*_N. Additionally, in App. <ref>, we provide numerical data for ⟨ p^3⟩ and ⟨ p^5⟩, which confirms the validity of the aforementioned predictions.
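As a numerical consistency check of these predictions, one can also integrate the phenomenological profile of Eq. (<ref>) directly and fit the growth exponent of a moment with k>λ-1; the Python sketch below (our own illustration, reusing the crossover_scale solver sketched above, with a deliberately large N so that the Lévy tail dominates) should give a slope close to D_2^μ/D_2^ψ for k=2 and λ=2, rather than 2D_2^μ/D_2^ψ.

import numpy as np

D2_psi, D2_mu, lam = 0.70, 0.64, 2.0
mu = 1.0 - D2_psi
N, k = 2 ** 16, 2

def moment_pk(t):
    # <p^k> from the piecewise profile with R0 = t^{-D2_mu}
    R0 = t ** (-D2_mu)
    pc = crossover_scale(R0, D2_psi, lam, N)
    p = np.arange(1, N // 2 + 1, dtype=float)
    profile = np.where(p < pc,
                       R0 * p ** (-mu),
                       R0 * pc ** (-mu) * (p / pc) ** (-lam))
    return 2.0 * np.sum(profile * p ** k)

times = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
vals = np.array([moment_pk(t) for t in times])
slope = np.polyfit(np.log(times), np.log(vals), 1)[0]
print("fitted exponent:", slope, "  predicted (lam-1) D2_mu/D2_psi =", (lam - 1) * D2_mu / D2_psi)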
§.§ q-th inverse participation ratio ⟨ P_q(t)⟩ of a wave packet
We now turn to another key observable for multifractal properties, the generalized inverse participation ratios. As we are interested in the dynamics of a wave packet, we do not consider the ⟨ P_q ⟩ of the eigenstates, Eq. (<ref>), but the ⟨ P_q (t)⟩ of the time-evolving wave packet at a certain instant t:
⟨ P_q (t)⟩≡⟨∑_p=-N/2^p=N/2-1 |ψ(p,t)|^2q⟩.
We will study how ⟨ P_q (t)⟩ scales with system size, but also characterize its temporal behavior. The scaling with system size of the moments ⟨ P_q ⟩ of eigenstates captures the multifractality of critical systems directly, exhibiting distinct algebraic behaviors for different values of q. By contrast, the moments ⟨ P_q (t)⟩ for a time-evolving wave packet are different as they are non-equilibrium observables capturing the dynamical growth of the participation volume of the eigenstate (e.g., ⟨ P_2(t) ⟩ being the inverse volume occupied by the wave packet).
Similar to the analysis of the average k-th moments ⟨ p^k⟩, ⟨ P_q(t)⟩ can be calculated as
⟨ P_q(t)⟩ =2∫_1^N/2⟨|ψ(p,t)|^2q⟩ dp
=∫_1^p_c2⟨ R_0^q⟩ p^-qμdp+∫_p_c^N/22B^qp_c^qλp^-qλdp
=2⟨ R_0^q⟩/1-qμ(p_c^1-qμ-1)
+2⟨ R_0^q⟩/1-qλ[p_c^q(λ-μ)(N/2)^1-qλ-p_c^1-qμ]
∼ t^(1-q)D_2^μ/D_2^ψ+t^q(λ-1)D_2^μ/D_2^ψN^1-qλ.
When N→∞, for q>1/λ, i.e., q>1/2 for λ=2, the second term of the right hand side of Eq. (<ref>) vanishes, yielding the time-dependent decay ⟨ P_q⟩∼ t^(1-q)D_2^μ/D_2^ψ. Otherwise, for q<1/λ, i.e. q<1/2 for λ=2, the second term dominates, resulting in a time-dependent increase ⟨ P_q⟩∼ t^q(λ-1)D_2^μ/D_2^ψ. Applying a similar analysis for the finite-size saturation value at
t ≫ t^*_N yields
⟨ P_q(t→∞)⟩∼ N^-qD_2^ψN^1-qμ∼ N^1-q.
However, Eq. (<ref>) is valid only if p_c^1-qμ-1>0, i.e., q<1/1-D_2^ψ. If q>1/1-D_2^ψ, the contribution of the multifractal wave-front in the integral is smaller than or close to 1. The term ⟨ R_0^q⟩/1-qμ(p_c^1-qμ-1) is then dominated by ⟨ R_0^q⟩, yielding
⟨ P_q⟩ ≃2⟨ R_0^q⟩/1-qμ+2⟨ R_0^q⟩/1-qλp_c^q(λ-μ)(N/2)^1-qλ
∼ t^-qD_2^μ+t^q(λ-1)D_2^μ/D_2^ψN^1-qλ,
and
⟨ P_q (t→∞)⟩∼ N^-qD_2^ψ.
Eq. (<ref>) shows another regime where ⟨ P_q⟩∼ t^-qD_2^μ when q>1/1-D_2^ψ and q>1/λ. Combining the derivations above, two scaling laws can be proposed as
⟨ P_q(t,N)⟩=N^1-qg(t/t^*_N), 0<q<1/1-D_2^ψ,
⟨ P_q(t,N)⟩=N^-qD_2^ψg(t/t^*_N), q>1/1-D_2^ψ.
Applying the above insights to the MKR model considered here, it is clear that there are three regimes where P_q varies with distinct exponents. For q<0.5, ⟨ P_q⟩∼ t^q(λ-1)D_2^μ/D_2^ψ; for 0.5<q<1/1-D_2^ψ≈3.3, ⟨ P_q⟩∼ t^(1-q)D_2^μ/D_2^ψ; and for q>1/1-D_2^ψ, ⟨ P_q⟩∼ t^-qD_2^μ. Figs. (<ref>), (<ref>) and (<ref>) present the collapse of data for q=0.1, 2, 4, corresponding to the three distinct dynamical regimes. The different dynamical exponents are in good agreement with the predictions, and the collapse of the rescaled data confirms the validity of the proposed scaling laws; similar numerical observations are also reported in Ref. <cit.> for the power-law banded Anderson model.
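The regime structure derived above can be collected into a single rule; the small Python helper below (our own compact summary of the predictions in this subsection, not code from the paper) returns the predicted exponent ν of the transient ⟨P_q(t)⟩ ∼ t^ν together with the exponent σ of the saturation value ⟨P_q(t→∞)⟩ ∼ N^σ.

def pq_exponents(q, lam, D2_psi, D2_mu):
    # predicted transient exponent nu and saturation exponent sigma for <P_q(t)>
    if q < 1.0 / lam:                        # tail-dominated regime
        nu = q * (lam - 1.0) * D2_mu / D2_psi
        sigma = 1.0 - q
    elif q < 1.0 / (1.0 - D2_psi):           # wave-front-dominated regime
        nu = (1.0 - q) * D2_mu / D2_psi
        sigma = 1.0 - q
    else:                                    # return-probability-dominated regime
        nu = -q * D2_mu
        sigma = -q * D2_psi
    return nu, sigma

# MKR values quoted in the text: D2_psi ~ 0.70, D2_mu ~ 0.64, lam = 2
for q in (0.1, 2.0, 4.0):
    print(q, pq_exponents(q, 2.0, 0.70, 0.64))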
§ CONCLUSION
In conclusion, we have presented a thorough investigation into the wave packet dynamics of disordered quantum critical systems, exploring the anomalous effects of long-range hoppings in the presence of eigenstates with multifractal properties.
Our study indicates that long-range hoppings can induce subtle and rich dynamical behaviors. For example, the wave packet variance may increase linearly with time, namely, ⟨ p^2 ⟩∼ t, which could be akin to a diffusive behavior <cit.> though the system is multifractal. The multifractal properties of the wave packet dynamics itself, as characterized by the ⟨ P_q(t) ⟩, are all related to the fractal dimension D_2, but, according to the value of q, can have different scaling behaviors on system size N and time t.
Algebraic tails of time-evolving wave packets appear generically in systems with long-range couplings, but also effectively in localization problems on graphs of infinite dimensionality, such as in many-body localization. For these systems, this work indicates that we cannot avoid finite size effects, and that these effects can be taken into account via a two-parameter scaling theory depending on time t and the system size N.
It would be interesting to extend our study to either delocalized or localized phases of disordered systems with long-range hoppings, where the couplings decrease with a smaller or larger exponent than in the critical case, respectively (in the one-dimensional case, the couplings would decrease with an exponent different from -1).
We wish to thank O. Giraud for
fruitful discussions. This study has been supported through
the EUR grant NanoX n° ANR-17-EURE-0009 in the framework of the "Programme des Investissements d’Avenir", research funding Grants No. ANR-17-CE30-0024, ANR-18-CE30-0017 and ANR-19-CE30-0013, and by the Singapore
Ministry of Education Academic Research Fund Tier I (WBS
No. R-144-000-437-114). We thank Calcul en Midi-Pyrénées
(CALMIP) and the National Supercomputing Centre (NSCC) of Singapore for computational resources and assistance.
§ POWER-LAW DECAY OF THE AMPLITUDES OF FLOQUET MATRIX ELEMENTS |U_P,P^'|
In this Appendix, we relate the power-law decay of the amplitudes of Floquet matrix elements |U_p, p^'| in momentum space with the singularity of the kicked potential V(q) in real space.
We consider the regime K≪1 and make a first-order expansion in K of the Floquet operator Eq. (<ref>) as
U_pp^'=e^iΦ_pδ_pp^'-iKe^iΦ_p∑_Q=1^NF_pQV(2π Q/N)F_Qp^'^-1.
Therefore,
|U_pp^'|≃ K|∑_Q=1^NF_pQV(2π Q/N)F_Qp^'^-1|
for p≠ p^'.
Next, we evaluate the Fourier transform
∑_Q=1^NF_pQV(2π Q/N)F_Qp^'^-1 =∑_Q=1^N1/Ne^2iπ (p-p^')Q/NV(2π Q/N)
as an integral. Notice that the potential V(q) is symmetric with respect to q=π, hence,
1/2π ∫_0^2πV(q)e^i(p-p^')qdq=1/π∫_0^πln(q)e^i(p-p^')qdq
|p-p^'|→+∞∼i/|p-p^'|∑_r=0^∞c_r(ln|p-p^'|)^1-r,
where the coefficients c_r follow <cit.>:
c_r=(-1)^r1r∑_k=0^rrk(π i/2)^(r-k).
The dominating term is therefore:
|U_pp^'|∼1/|p-p^'|.
Note that V(q) has another singularity at q=π, which is of higher order (first derivative) compared to the singularity at q=0 (2π) (zeroth order); therefore, we only take the lowest order into account.
Although the above arguments are based on a first-order expansion in K valid for K≪1, numerical data presented in Fig. <ref> show that Eq. (<ref>) is valid even for larger values of K.
§ MULTIFRACTAL PROPERTIES OF THE MKR MODEL
Quantum multifractality can be characterized by the moments P_q, Eq. (<ref>), of eigenstate amplitudes |Ψ_α(i)|^2, ⟨ P_q⟩∼ N^-τ_q,
where τ_q=D_q(q-1), D_q are the multifractal dimensions and N is the system size. Numerically, we compute τ_q by
τ_q(N)=-[log_2 ⟨ P_q(N)⟩-log_2 ⟨ P_q(N/2)⟩].
The numerical data are shown in Fig. (<ref>), which confirm the multifractal properties of the MKR model and yield in particular D_2≈ 0.71.
§ TWO-PARAMETER SCALING PROPERTIES OF ⟨ R_0^Q⟩ AND ⟨ P^K⟩
Fig. (<ref>) presents numerical data for ⟨ R_0^q⟩ of the MKR model, Eq. (<ref>), for different q values, confirming the validity of the proposed two parameter scaling law Eq. (<ref>).
In Fig. <ref>, we show numerical data for ⟨ p^k⟩ for two different k>1 values, showing the universality of the prediction ⟨ p^k⟩∼ t^D_2^μ/D_2^ψ and the validity of the proposed scaling laws, Eq. (<ref>).
§ AVERAGE GENERALIZED WAVE PACKET ⟨|Ψ(P,T)|^2Q⟩ FOR DIFFERENT Q VALUES AND PROBABILITY DISTRIBUTION OF WAVE FUNCTION AMPLITUDES |Ψ|^2
Fig. <ref> presents numerical data for the generalized wave packets
⟨|ψ(p,t)|^2q⟩, showing the same shape across different q values, in particular the same multifractal wave-front p_c.
In Fig. <ref>, we present the probability distribution of α=-ln|ψ(p,t)|^2/ln N for different p and t values. On the right side of the distribution corresponding to small wave function amplitudes |ψ|^2, we observe that there is an anomalously wide distribution P(α)∼ N^-λα. Such distributions can be related to the Porter-Thomas law P(α)∼ N^β(1-α)/2exp(-1/2β N^1-α) as small amplitudes |ψ(p,t)|^2 are described by Random Matrix Theory <cit.>, where β=1,2 is the Dyson index corresponding to Orthogonal Ensemble and Unitary Ensemble. Hence, when α≫ 1 and N≫1, P(α)∼ N^-λα with λ=β/2. We confirm such scaling behavior of Gaussian fluctuations both in the MKR model (β=2) and the critical PRBM model (β=1) <cit.>, see Fig. <ref>.
However, on the left side of the distribution, corresponding to large wave function amplitudes |ψ|^2, P(α) decreases faster than exponentially, indicating the absence of an algebraic fat tail in the corresponding distribution of |ψ(p,t)|^2 at large amplitudes. This absence of large fluctuations at large amplitudes is responsible for ⟨|ψ(p,t)|^2q⟩∼⟨|ψ(p,t)|^2⟩^q for q>0. Hence, the shape of ⟨|ψ(p,t)|^2q⟩ as a function of p is the same for different q>0 values; in particular, they have the same p_c.
entry_id: http://arxiv.org/abs/2307.02425v1
published: 20230705164803
title: Geometric control of tilt transition dynamics in single-clamped thermalized elastic sheets
authors: Roberto Abril Valenzuela, Paul Z. Hanakata, Mark J. Bowick
primary_category: cond-mat.soft
categories: cond-mat.soft, cond-mat.mtrl-sci
Department of Physics, University of California, Santa Barbara
Department of Physics, Harvard University
Kavli Institute for Theoretical Physics, University of California Santa Barbara
We study the finite-temperature dynamics of thin elastic sheets in a single-clamped cantilever configuration. This system is known to exhibit a tilt transition at which the preferred mean plane of the sheet shifts from horizontal to a plane above or below the horizontal. The resultant thermally roughened two-state (up/down) system possesses rich dynamics on multiple time-scales. In the tilted regime a finite energy barrier separates the spontaneously-chosen up state from the inversion-symmetric down state. Molecular dynamics simulations confirm that, over sufficiently long time, such thermalized elastic sheets transition between the two states, residing in each for a finite dwell time. One might expect that temperature is the primary driver for tilt inversion. We find, instead, that the primary control parameter, at fixed tilt order parameter, is the dimensionless and purely geometrical aspect ratio of the clamped width to the total length of the otherwise-free sheet. Using a combination of an effective mean-field theory and Kramers' theory, we derive the transition rate and examine its asymptotic behavior. At length scales beyond a material-dependent thermal length scale, renormalization of the elastic constants qualitatively modifies the temperature response. In particular the transition is suppressed by thermal fluctuations, enhancing the robustness of the tilted state. We check and supplement these findings with further molecular dynamics simulations for a range of aspect ratios and temperatures.
Geometric control of tilt transition dynamics in single-clamped thermalized elastic sheets
Mark J. Bowick
Received ; accepted
===========================================================================================
§ INTRODUCTION
The properties of polymerized or crystalline membranes (thin elastic sheets) at finite temperature have been of interest for quite some time, both theoretically <cit.> and experimentally <cit.>. The relative energetic preference of bending modes over stretching modes, when the system-size exceeds the thickness, leads to scale-dependent material properties of considerable relevance to both soft and hard condensed-matter systems as well as physical biology. At length scales beyond a material-dependent characteristic length scale, thermal fluctuations play a key role, with non-linear couplings between bending and stretching driving a renormalization of the elastic moduli, the most notable case being the strong growth of bending rigidity with increasing system size.
The implications of thermal fluctuations for the bending and elastic moduli of thin sheets were first understood theoretically <cit.>. For many years afterwards it was thought that the physical systems most likely to exhibit strong thermal effects would come from the world of soft and/or biological matter, such as the spectrin cytoskeleton of red blood cells. The problem, though, is that soft materials are easily stretchable as well as bendable, and so the dominance of bending over stretching is not manifest until very large length scales, typically larger than the actual physical systems. Hard 2D metamaterials, such as intrinsically atomically-thin graphene, are in contrast very stiff to stretching at the microscopic scale (quantum mechanical bonds are electron-volt energy scales) but highly bendable because of their ultra-thinness. Blees et al. <cit.>, for example, found that micron-width graphene ribbons have a bending rigidity four orders of magnitude larger than the microscopic bending rigidity found via first-principles calculations <cit.>. Recent work has also highlighted the subtle role played by boundary conditions and the emergence of purely dimensionless geometric parameters as control variables <cit.>. There are thermalized versions of the zero-temperature instabilities present in classical plate theory, such as the so-called classical Euler buckling <cit.>.
A key difference between the thermalized and the zero-temperature system is that the former allows for the existence of internal stresses that facilitate buckling transitions without the need to apply “extra" strains. This is due to the thermal shrinking of a membrane at finite temperature, leading to a smaller average projected area, W_th× L_th, as compared to its zero temperature counterpart, W_0 × L_0, at thermal equilibrium (see Fig. <ref>b). Thus, clamping at below or above the thermal equilibrium length or width will generate non-zero stresses.
Earlier work made use of this fact in analyzing thermalized thin sheets in a cantilever configuration. It was shown, both theoretically and computationally, that clamped boundary conditions along one side of the membrane induce a tilted state in which the mean plane of the sheet is above the horizontal <cit.>. This tilted state is more precisely characterized by a nonzero average height at the location of the free edge, ⟨ h(x=L,y)⟩≠ 0, for long time scales. Unlike the classical Euler problem at a fixed length, L, however, accessing the tilted state does not depend solely on a critical stress. Instead, it may be induced by varying the dimensionless geometric parameters of the system, in particular the aspect ratio. Spontaneous tilt occurs in thermalized sheets clamped along one edge and only for a window of aspect ratios all above one – that is, the sheet is wider on the clamped edge than it is long.
The basic mechanism driving spontaneous tilt is the following: with respect to the reference equilibrium state of the free (unclamped) sheet a clamped sheet is under extensile tension concentrated on the clamped edge (x=0,y). This leads to a compressive stress in the orthogonal direction (x) which tends to buckle a sheet with a free end (x=L). This buckling manifests itself as tilt. The degree of the effective clamping stress is directly proportional to the aspect ratio α=W/L. Very low α is below the threshold for buckling/tilt and very large α irons out the sheet completely in the horizontal plane (no tilt). Thus tilt occurs for an intermediate range of aspect ratios.
The presence of a tilted phase is supported by a mean field approximation of the critical compression, Δ_c, required to tilt the membrane <cit.>. The effective energy is composed of a quadratic stress term and a quartic interaction term, leading to a ϕ^4-type model with, in the absence of an external field, two degenerate ground states. The two states are separated by a finite energy barrier, allowing thermal fluctuations to drive transitions between the up-and down-tilt states. Indeed, simulations reveal a tilted phase with the expected two-frequency behavior: higher-frequency (ω_well) oscillations in a well about a given tilted minimum (say up) and lower-frequency transitions (ω_dwell) between the two minima (up to down and vice versa). And so on time scales t_well∝ω_well^-1 the average height of the free end of the sheet is non-zero and on longer times τ_dwell∝ω_dwell^-1 there are up-down transitions that average the height to zero. This is reminiscent of the many examples of two-state systems that exhibit interstate transitions via some form of external energetic kicks, with thermal fluctuations here playing the role of the driving force.
Here, we model this tilt transition using mean field theory and invoke Kramers' theory to provide an estimate for the transition rate between a state and its inverted state. We find that for thermalized systems where the system dimensions exceed the characteristic thermal length-scale, temperature plays a surprising role, suppressing transitions as opposed to enabling them. In addition, we find that the transition rate can be controlled by tuning the dimensions of the system at a fixed temperature. We support this theory by performing molecular dynamics (MD) simulations of the tilt transitions of a triangulated elastic sheet.
§ BACKGROUND AND MODEL
§.§ Elastic Sheets
Consider an elastic sheet with zero-temperature width W_0, length L_0 and aspect ratio α = W_0/L_0, clamped along one of the two extended edges (for simplicity we choose to clamp the edge at x=0). At finite temperature, the free (unclamped) membrane will have equilibrium width W_th and length L_th, both smaller than their T=0 counterparts because of the induced thermal topography of the sheet (see Figs. <ref>(a) and (b)). Taking this free thermalized state as the appropriate reference state, we see that clamping one edge is an effective extensional strain focused along the line (x=0,y).
Continuum elasticity theory leads to an elastic free energy of the full thermalized system of the form <cit.>
F[u_ij(𝐱),h(𝐱)] = ∫ d^2 𝐱 [κ/2(∇^2 h)^2 + μ u_ij^2 + λ/2u_kk^2]
where u_ij = (∂_iu_j+∂_ju_i+∂_ih∂_jh)/2 is the strain tensor and u_i, h are the in-plane and out-of-plane displacements, respectively. The parameter κ is the bending rigidity, and μ,λ are Lamé coefficients<cit.>. It is often convenient to quantify the elastic properties of a membrane in terms of the dimensionless Foppl-von Karman number, vK=Y L^2/κ≈ (L/t)^2, where t is the thickness of the sheet and Y=4μ(μ+λ)/(2μ+λ) is the 2D Young's modulus. vK measures the ratio of stretching energy to bending energy for a membrane of extended size L and thickness t. To get a feel for these values, take for example graphene, with microscopic κ≈ 1.2 eV and Y ≈ 20 eV Å^-2: vK is then ≈ 10^12-13 for a sheet of length L=100 μm. Bending deformations are then much less costly, energy-wise, than in-plane elastic deformations and the high entropy of available bending configurations is a dominant feature of the statistical mechanical response. For the remainder of this paper, we will measure the elastic constants κ and Y in units of temperature, k_BT. We can then define elastic constants, κ̃=κ/k_BT and Ỹ = Y/k_BT. Temperature will then be measured in terms of scales relative to the scale at which thermal fluctuations become important, ℓ_th. This thermal length scale is usually defined as <cit.>
ℓ_th = √(32π^3 κ_0^2/3k_BT Y_0),
where we denote κ_0,Y_0 as the bare, microscopic bending rigidity and Young's modulus, respectively. We then set our temperature scales with the dimensionless constant L/ℓ_th, where L is our system size. For reference, at room temperature, the microscopic elastic constants of graphene in units of k_B T are κ̃= 48 and Ỹ = 800 Å^-2. This gives a value for the thermal lengthscale at room temperature of ℓ_th≈ 4nm. For a lab sample of graphene of size order L=10μm, L/ℓ_th∼ 10^4, deep in the thermalized regime.
The free energy in Eq. (<ref>) has a discretized energy on a triangular lattice with equilibrium lattice spacing, a, of the form <cit.>
E = κ̂∑_⟨ I,J⟩(1-𝐧̂_I·𝐧̂_J) + k_ stretch/2∑_⟨ i,j⟩ (r_ij-a)^2,
where the continuum bare bending rigidity κ_0 and Young's modulus Y_0 are related to the discrete bending rigidity κ̂ and the harmonic spring constant k_ stretch by κ_0 = √(3)κ̂/2 and Y_0 = 2k_ stretch/√(3). The first term represents the discretized bending energy resulting from normals on adjacent plaquettes (triangular faces) that are not perfectly aligned and the second term is a harmonic stretching energy between adjacent nodes. The first sum is performed over all nearest neighbor plaquettes, ⟨ I,J⟩, while the second sum is over all nearest-neighbor vertices ⟨ i,j⟩ (see Fig. <ref>).
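For reference, a minimal NumPy sketch of this discretized energy is given below. It assumes the vertex positions, bonded vertex pairs, and adjacent-plaquette triples have been precomputed with consistent triangle orientations; the function and array names are ours and do not correspond to the simulation code used here.

import numpy as np

def sheet_energy(pos, bonds, plaquette_pairs, kappa_hat, k_stretch, a):
    # pos: (N, 3) vertex positions; bonds: (Nb, 2) nearest-neighbor vertex pairs;
    # plaquette_pairs: (Nd, 2, 3) vertex triples of the two triangles sharing each edge
    r = np.linalg.norm(pos[bonds[:, 0]] - pos[bonds[:, 1]], axis=1)
    e_stretch = 0.5 * k_stretch * np.sum((r - a) ** 2)

    def unit_normals(tri):
        n = np.cross(pos[tri[:, 1]] - pos[tri[:, 0]],
                     pos[tri[:, 2]] - pos[tri[:, 0]])
        return n / np.linalg.norm(n, axis=1, keepdims=True)

    n_I = unit_normals(plaquette_pairs[:, 0])
    n_J = unit_normals(plaquette_pairs[:, 1])
    e_bend = kappa_hat * np.sum(1.0 - np.einsum('ij,ij->i', n_I, n_J))
    return e_bend + e_stretch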
§.§ 1D Ribbon Model
For simplicity, consider a polymer-like approximation to the sheet by taking its midline (y=0). Integrating out the quadratic in-plane modes gives a one-dimensional effective free energy for the height profile h(x):
E_eff[h(x)] = (κ_R W/2)∫ dx (∂^2 h/∂ x^2)^2 - (Y_R WΔ/2L)∫ dx (∂ h/∂ x)^2 + (Y_R W/2L)[1/2∫ dx (∂ h/∂ x)^2]^2 ,
where κ_R and Y_R are the renormalized values of κ and Y, respectively, which scale with system size as
<cit.>
κ_R(ℓ) ∼κ_0 for ℓ<ℓ_th, and κ_R(ℓ) ∼κ_0 (ℓ/ℓ_th)^η for ℓ>ℓ_th,
and
Y_R(ℓ) ∼ Y_0 for ℓ<ℓ_th, and Y_R(ℓ) ∼ Y_0 (ℓ/ℓ_th)^-η_u for ℓ>ℓ_th,
where ℓ is the length scale over which the sheet is fluctuating. Since long-wavelength fluctuations cannot exceed the smallest macroscopic scale in the problem, we have ℓ≤ L for aspect ratios exceeding one. The scaling of the bending rigidity is characterized by the scaling exponent η, which has been determined by various analytical methods and numerical simulations to be η≈ 0.8 <cit.>. The exponents η and η_u are related by rotational invariance: η_u = 2-2η <cit.>, yielding η_u ≈ 0.4.
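For numerical estimates it is convenient to encode these scaling forms directly. The following is a minimal sketch (the function names are ours, and the crossover at ℓ_th is treated as sharp rather than smooth):

import numpy as np

def kappa_R(ell, kappa0, ell_th, eta=0.8):
    # scale-dependent bending rigidity: bare below ell_th, stiffened above
    return np.where(ell < ell_th, kappa0, kappa0 * (ell / ell_th) ** eta)

def Y_R(ell, Y0, ell_th, eta=0.8):
    # scale-dependent 2D Young's modulus, softened above ell_th with eta_u = 2 - 2*eta
    return np.where(ell < ell_th, Y0, Y0 * (ell / ell_th) ** (2.0 * eta - 2.0))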
The compression of the free end, Δ, is approximately given by [For a derivation of this result, consult the supplemental information of reference <cit.>.]
Δ≈L_0αϵ/2sinh^2(πα/4)[πα/4cosh(πα/4)(1+ν_R) - sinh(πα/4)(1-ν_R)] ,
where ν_R is the renormalized Poisson ratio and ϵ≡(W_clamp-W_th)/W_th is the strain at the clamp generated by the thermal shrinking, which is given by <cit.>
ϵ≈1/8πκ̃_0[1/η-1/η(L_0/ℓ_th)^-η+ln(ℓ_th/a)].
As in the zero-temperature Euler-buckling problem, one might expect buckling beyond a threshold compression. Unlike the double-clamp problem, however, the response here is tilt because of an extensile force along the opposite axis produced by the clamped boundary condition – stress relaxation then leads to a different buckling mode since there is one free end <cit.>.
Near the tilt transition, we choose as an ansatz the first buckling mode of the T=0 cantilever problem, h(x) = H [1-cos(π x/2L)], where H is the height of the free end <cit.>. Upon inserting this ansatz into the effective energy we obtain a mean field energy
E_eff(H) = a(Δ_c-Δ)H^2 +bH^4
where a = π^2W Y_R/16 L^2 and b= aπ^2/32L. This yields a critical compression
Δ_c = (π^2/4L) κ̃_R/Ỹ_R.
In this form, it is easy to see a clear separation between the flat phase (Δ<Δ_c) and the tilted phase (Δ>Δ_c).
In the tilted phase there are two minima, E_±, separated by an energy barrier Δ E_b = |E_flat-E_±|, where E_flat=E(H=0)=0 is the energy of the unstable flat state and E_±=E(H_±) is the energy of a tilted state (see Fig. <ref>). Once the system is in one of the tilted states, as in any such two-state system <cit.>, there is a non-zero probability of transitioning from one state to the other, with maximal transition probability at some resonant value of an external parameter such as temperature or an external driving frequency <cit.>. One might expect the transition rate at finite temperature to be controlled primarily by thermal fluctuations over the barrier. We show here, however, that it is the dimensionless and purely geometrical aspect ratio that is the key determinant.
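A minimal numerical sketch of this landscape is given below; it evaluates the coefficients a, b, and Δ_c defined above and returns the location of the tilted minima and the barrier height in the tilted phase (the function name is ours, and κ_R and Y_R must be supplied in units consistent with the other inputs):

import numpy as np

def mean_field_landscape(W, L, kappa_R, Y_R, Delta):
    # E(H) = a*(Delta_c - Delta)*H**2 + b*H**4 with the coefficients quoted above
    a = np.pi**2 * W * Y_R / (16.0 * L**2)
    b = a * np.pi**2 / (32.0 * L)
    Delta_c = np.pi**2 * kappa_R / (4.0 * L * Y_R)

    if Delta <= Delta_c:
        # flat phase: a single minimum at H = 0 and no barrier
        return Delta_c, 0.0, 0.0

    # tilted phase: degenerate minima at +/- H_tilt separated by a barrier at H = 0
    H_tilt = np.sqrt(a * (Delta - Delta_c) / (2.0 * b))
    E_barrier = a**2 * (Delta - Delta_c)**2 / (4.0 * b)
    return Delta_c, H_tilt, E_barrier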
§.§ Transition Rates
Consider a system with an energy landscape given by Eq. <ref> and assume that Δ>Δ_c so that we are in the tilted state. We can assign each of the extrema of E_eff a characteristic frequency, ω_± and ω_B, which are obtained from the second-order expansion of E(H) at one of the tilted states (H=H_±) and the saddle point (H=0), respectively. We can estimate the rate ℛ of transitioning from one of the tilted wells to the other using Kramers' theory <cit.>, which predicts
ℛ≈ R_0 e^- Δ E_b/k_BT
where the amplitude R_0 will take a form that depends on the friction of the system, β= γ/m, which is the ratio of the friction to the mass m of the sheet and has the units of frequency. The system can be underdamped (β≪ω_B) or overdamped (β≫ω_B). Minimizing the free energy (<ref>) gives the location of the minima, H_±, and the oscillation frequencies within a minimum, as well as the height of the barrier. The depth of the energy barrier is given by
Δ E_b = π^4Wκ_R^2/(32 Y_R L^3) [(Δ-Δ_c)/Δ_c]^2 .
All together, this yields a transition rate
ℛ≈ R_0exp[-π^4Δ̅^2 Wκ_R^2/(32L^3 k_B T Y_R)],
for relative compression Δ̅≡ (Δ-Δ_c)/Δ_c, which is positive in the tilted phase. Note that the energy landscape is symmetric about the flat state (Fig. <ref>), which means that transition rates are also symmetric with respect to inversion. We can generalize this by adding a symmetry-breaking, transverse field (such as a gravitational or an electric field) that couples linearly to the height h(x) in Eq (<ref>). This will create an asymmetric potential well, resulting in two distinct transition rates.
We are interested in system sizes sufficiently large that thermal fluctuations are important. This means the length of the sheet satisfies L≫ℓ_th, where ℓ_th is the characteristic thermal length scale beyond which the elastic constants become scale-dependent.
For a thermalized system the elastic moduli κ and Y are renormalized by thermal fluctuations, rendering them length-scale dependent <cit.>. They must then be replaced by their respective renormalized values given by the scalings in Eqs. <ref> and <ref>.
With these scalings the transition rate simplifies to
ℛ≈ R_0 exp(-3παΔ̅^2/512),
for L≫ℓ_th. For a fixed relative compression Δ̅, corresponding to a fixed value of the tilt order parameter or energy barrier, the transition rate is controlled by the aspect ratio α in the exponential, which therefore plays the role of a Boltzmann factor. Effectively geometry is replacing temperature. Temperature enters implicitly in tuning to a fixed relative compression as well as in the amplitude.
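The aspect-ratio dependence of the Arrhenius factor can be made concrete with a one-line evaluation; R_0 is left as a supplied prefactor in this sketch:

import numpy as np

def kramers_rate(alpha, delta_bar, R0=1.0):
    # R ~ R0 * exp(-3*pi*alpha*delta_bar**2/512) in the thermalized limit L >> l_th
    return R0 * np.exp(-3.0 * np.pi * alpha * delta_bar**2 / 512.0)

At fixed relative compression, increasing α from 2 to 8, for example, suppresses the rate by a factor exp(-18πΔ̅^2/512) relative to the smaller aspect ratio.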
As previously mentioned, the prefactor R_0 depends on the magnitude of the friction, β. It takes the form <cit.>
R_0 ≈ mβΔ E_b/(π k_B T) for β≪ω_B, and R_0 ≈ω_±ω_B/(2πβ) for β≫ω_B,
where we see a turnover from a linear to an inversely proportional dependence in β. In the thermalized limit, L≫ℓ_th, we obtain the following prefactors
R_0 ≈ 3mβαΔ̅^2/512 for β≪ω_B, and R_0 ≈π^3Δ̅κ_0α (L/ℓ_th)^η/(32√(2)mβ L^2) for β≫ω_B,
in which we note that the temperature dependence in the overdamped case comes from the renormalization of the bending rigidity as given by Eq. <ref>. In the underdamped case, we see the same cancellation of temperature that occurs in the Arrhenius factor in Eq. <ref> and we have explicit independence of temperature at constant compression Δ. Note that for both cases, we expect the Arrhenius factor to provide the dominant behavior for significant compression Δ̅.
Fig. <ref> shows a density plot of the underdamped transition rate as a function of temperature and aspect ratio, normalized by the maximum rate in the displayed region as predicted by Eqs. <ref>- <ref>. We include a phase boundary provided by setting the predicted compression, Δ in Eq. <ref>, equal to the critical compression, Δ_c, computed from the 1D mean field theory (Eq. <ref>). In other words, we plot the line Δ=Δ_c accounting for renormalization of the elastic constants for L_0>ℓ_th. To the left of this boundary, the system is in the flat phase (Δ<Δ_c) and the transition rate vanishes. Note the high transition rate localized near the phase boundary. As the temperature is increased deep in the tilted state the transition rate reaches a dynamically stable “basin" with long dwell times, as indicated by the dark blue region. The α-scaling form of Eq. <ref> does not capture the full behavior near the upper branch of the phase boundary as it neglects the higher order contributions coming from Δ̅. Indeed our mean field theory will break down in the limit of large α where the length of the membrane becomes negligible compared to the width and the 1D midline model is no longer applicable.
Temperature is a more traditional parameter to tune the dynamics of these types of oscillator systems, as higher temperatures often decrease the energy barrier and allow for higher transition probabilities. In this system, however, we see that thermalization gives access to another parameter that controls transitions, namely α, and with the addition of thermal factors in the prefactor, we see that high temperature instead stabilizes the system in one of the stable tilted states with close to zero transition probability. This can be thought of as temperature decreasing the rest area of the reference state, thereby effectively increasing the clamping strain. Based on the predicted form of the transition rate, we can expect ℛ≈ 0 for large temperatures, where we expect the sheet to be in a fully tilted state. This means that in an experimental setting, low temperatures are needed to access the dynamic tilted state, where ℛ≠ 0, and in order to retain a specific constant transition rate, the temperature must remain constant. One can circumvent this by considering the aspect ratio of the sample as a tunable parameter. We can prescribe appropriate geometrical dimensions corresponding to an aspect ratio that leads to the desired transition rate at some constant temperature. Thus, the aspect ratio gives access to a larger sample parameter space in the production of nano-mechanical actuators and may prove to be a more desirable parameter to tune in the manufacturing of such devices.
§ MOLECULAR DYNAMICS SIMULATIONS
§.§ Simulation Setup
The coarse-grained energy (<ref>) can be simulated using the HOOMD-blue python package for molecular dynamics (MD) <cit.> for a triangular lattice with a=1. The stretching term is treated as a harmonic potential between two nodes and the bending energy has a discrete representation in terms of the dihedral angle formed by neighboring triangular plaquettes, Θ_dih=π-θ_IJ, where θ_IJ is the angle between the two plaquette normals (see Fig.<ref>). Dihedral angles can be readily obtained using HOOMD-blue. A triangular lattice with fixed dimensions is initialized, clamped at one edge, and then integrated in an NVT ensemble for a total of N=2×10^7-10^8 time steps with step size dt=0.005 and with the energy scale set by k_B T. The first half of these time steps is discarded to ensure thermalization. We extract the time series of the out-of-plane height of a single node at the middle of the free edge opposite the clamped edge: (x,y)=(L,0). We generate multiple independent runs (n= 3-5) for each set of initial parameters to generate error statistics, which are computed using the jackknife procedure <cit.>.
To test the predicted transition rate, we simulate multiple systems with elastic properties parallel to that of real crystalline systems (Y_0=20 eV Å^-2, κ=1.2 eV) at fixed length L=20 a (≈ 50 Å for graphene) and for a range of temperatures k_BT/κ≈ 0.01-2 (or L/ℓ_th≈ 0.8-5) and aspect ratios α=2-9 [Note that at fixed L, α is controlled by the clamped width W_clamp which is equal to W_0 unless otherwise stated]. Once we ensure thermalization we can proceed to analyze the dynamics of the tagged node. We estimate the thermalized length of the system, L_th, as the length of the free membrane at a given temperature, projected onto the z=0 plane, shown in Fig. <ref>(b). We then define the compressed length, L_c, as the projected length in the clamped configuration, illustrated in Fig. <ref>(d).
The up-down transition rate is calculated by tabulating the average time spent in a tilted state, the dwell time τ_dwell. Residence in the tilted state is determined by a threshold height h_th=0.1× L_0: configurations with |h(t_n)|>h_th are assigned to the tilted state. The transition probability is then ℛ∼ 1/τ_dwell.
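A minimal post-processing sketch of this threshold estimate is given below; the names are ours, and the estimator simply divides the total time spent in tilted states by the number of observed up-down flips:

import numpy as np

def dwell_time(h, dt, h_th):
    # classify each sample of the free-edge height: +1 tilted up, -1 tilted down, 0 near flat
    state = np.zeros(len(h), dtype=int)
    state[h > h_th] = 1
    state[h < -h_th] = -1

    tilted = state[state != 0]            # drop the ambiguous near-flat samples
    if tilted.size == 0:
        return np.inf                     # never tilted
    n_flips = np.count_nonzero(np.diff(tilted))
    if n_flips == 0:
        return np.inf                     # tilted but never observed to invert
    # average time per visit to a tilted state; the transition rate is ~ 1/tau_dwell
    return tilted.size * dt / n_flips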
A more elaborate method of determining the dwell time is by computing the autocorrelation function of the time series post-thermalization. The normalized autocorrelation function, ρ(τ)≡ C_t(τ)/C_t(0), will decay exponentially with a time constant, τ_ac. This time constant corresponds to the shortest time scale available to the system, which in this case is the time spent in a given tilted state: τ_dwell≈τ_ac.
One can think of the dynamics of the sheet in the flat state, specifically the average height of the tagged node, as a Brownian particle trapped in a harmonic well. In the pre-buckling regime the Langevin equation for the position z(t) of a particle of mass m is
z̈(t) = -γ/mż(t)-ω_0^2 z(t) + 1/mξ(t)
where ω_0^2=k/m and ξ(t) is Gaussian noise with
⟨ξ(t)⟩=0, ⟨ξ(t)ξ(t')⟩=2mβ k_BTδ(t-t').
Fourier transforming (∂_t^n z(t)→ (-iω)^n z(ω)) gives
z(ω) = (ξ(ω)/m)/(ω_0^2-ω^2+i(γ/m)ω).
We can now compute the correlation function via the inverse Fourier transform of the squared average in frequency space,
C_t(τ) = ⟨ z(t)z(t+τ)⟩ = ∫_-∞^∞dω/2π ⟨|z(ω)|^2⟩ e^-iωτ = γ k_BT/π m^2∫_-∞^∞dω e^-iωτ/[(ω^2-ω_0^2)^2+(γ/m)^2ω^2].
Evaluating the integral in Eq. (<ref>) via complex methods with a semicircular contour gives
C_t(τ) = k_BT/mω_0^2 e^-γτ/2m[cosω_1 τ+γ/2mω_1sinω_1 τ], where ω_1 = √(ω_0^2-(γ/2m)^2) is the damped oscillation frequency.
The normalized autocorrelation function is then
ρ(τ) = C_t(τ)/C_t(0) = e^-γτ/2m[cosω_1 τ+γ/2mω_1sinω_1 τ].
For sufficiently long times (τ≫ t_dwell) and for systems with low tilt transitions the autocorrelation will decay as <cit.>
ρ(τ) ∼exp(-τ/τ_ac).
Comparing Eqs. (<ref>) and (<ref>) shows that
τ_ac≈2m/γ
We now provide a more detailed comparison between the simple average dwell time method and the autocorrelation method.
We compute the autocorrelation function of the height time series within a state and fit the curve to a function of the form of Eq. (<ref>), extracting the time constant τ_ac. We can then compare to our previous results. Fig. <ref> shows a semilog plot of τ_ dwell^-1 as a function of (L/ℓ_th)^2, which is proportional to k_B T. The transition rate data τ_ dwell^-1 are approximated using two methods: (i) height filtering and (ii) fitting to the autocorrelation function. Note that in the tilted regime, using autocorrelation to extract the time constant may sample smaller timescales than the one of interest, namely the transition time, τ_trans<τ_dwell, which is the time it takes to jump from one state to the other. Fig. <ref> shows autocorrelation estimates and we see that both methods provide roughly the same probability values and trends. There is, however, high variance in the autocorrelation estimate, which can be attributed to uncertainty in the fit. The two methods agree qualitatively and, for the purposes of studying the trends in the transition rate, we chose the former to save computational time.
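The autocorrelation estimate can be implemented along the following lines; this is a sketch only (the names are ours), fitting a single exponential to the normalized autocorrelation of the mean-subtracted height series:

import numpy as np
from scipy.optimize import curve_fit

def tau_from_autocorrelation(h, dt, max_lag):
    x = h - h.mean()
    n = len(x)
    # one-sided autocorrelation up to max_lag samples
    acf = np.correlate(x, x, mode='full')[n - 1:n - 1 + max_lag]
    rho = acf / acf[0]
    lags = np.arange(max_lag) * dt

    expdecay = lambda t, tau: np.exp(-t / tau)
    popt, _ = curve_fit(expdecay, lags, rho, p0=[lags[-1] / 10.0])
    return popt[0]      # tau_ac, which approximates tau_dwell when transitions are rare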
§.§.§ Clamping
To investigate the role of clamping we simulated several systems clamped at a range of strains close to W_clamp= W_th. Fig. <ref> shows the time series of the height field h=h(x=L,y=0), for several clamping strains, ϵ≡(W_clamp-W_th)/W_th. Clamping sufficiently close to W_th does not induce tilt – the sheet fluctuates about a mean horizontal state (blue curve). Above a positive parameter-dependent threshold for ϵ we see the onset of tilt and the accompanying up-down inversions (red curve). The dwell time increases with ϵ (black curve).
It is instructive to measure the effective plane stress throughout the clamped sheet, as determined by displacements with respect to a fixed average thermalized free state. We take the particle positions of a sheet configuration at a fixed timestep and compute the displacements relative to the average thermalized, free reference state. Treating our lattice as a triangulated mesh and embedding the displacement field to the vertices of this mesh, we compute the linear 2D plane stress via finite element analysis. Appendix A describes a related method that transforms the stresses back into the nodal basis.
For an effective extensile strain concentrated at the clamp (W_clamp>W_th), there are two competing effects: (1) the zero-temperature elastic response associated with a standard positive Poisson ratio, leading to compression along x, and (2) the response of a thermalized sheet with a negative Poisson ratio, the known behavior at the thermal Foppl-von Karman fixed point, which creates an extensile response along x. The first effect should dominate in a zone of influence near the clamp, as stretching suppresses thermal fluctuations. The second effect should dominate sufficiently far from the clamp, where the sheet closely resembles a free fluctuating membrane. Fig. <ref> shows a simulated map of both diagonal elements of the stress tensor at fixed time steps obtained using the finite element method described above. We see that the σ_xx component confirms compressive stress for the tilted state (W_clamp=W_0>W_th) and very little stress in the flat state (W_clamp≈ W_th), as expected. The σ_yy component shows the expected extension in the tilted phase. We note that the flat phase also exhibits some extension, which we attribute to the fact that we have a nearly but not quite zero strain at the clamp. We can also compare these stress maps to results found in previous work on tilted flaps <cit.>. A detailed comparison can be found in Appendix B.
One can confirm that the flat state reflects a zero-stress configuration from the height-height correlation function, which is expected to scale as
⟨ |h(q)|^2⟩≈ k_BT/[A(κ_R(q) q^4+σ_ij q_iq_j)],
where σ is the stress due to the clamp, A is the projected area of the sheet and κ_R scales as in Eq. <ref>. In the absence of stress, the bending term dominates and the correlation function will scale as q^-(4-η). On the other hand, if there is a significant source of stress, we expect the quadratic term to dominate. Fig. <ref> shows the mean-squared height fluctuations in momentum space for three classes of clamp width. For clamping near the thermalized width (ε≈0) the slope is approximately -(4-η), indicative of bending dominance. The other two classes exhibit a quadratic fall-off, indicating stress dominance at low wavevector (see Eq. <ref>).
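The bending- versus stress-dominated regimes can be read off from the small-q slope of the radially averaged spectrum. The sketch below assumes the height field has been interpolated onto a regular grid of spacing dx; the names, normalization, and binning choices are ours:

import numpy as np

def radial_height_spectrum(h_grid, dx, nbins=30):
    ny, nx = h_grid.shape
    hq = np.fft.fft2(h_grid - h_grid.mean())
    power = np.abs(hq)**2 / (nx * ny)          # proportional to <|h(q)|^2>

    qx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    qy = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    q = np.sqrt(qx[None, :]**2 + qy[:, None]**2).ravel()
    p = power.ravel()

    mask = q > 0
    bins = np.logspace(np.log10(q[mask].min()), np.log10(q.max()), nbins)
    idx = np.digitize(q[mask], bins)
    qm = np.array([q[mask][idx == i].mean() for i in range(1, nbins) if np.any(idx == i)])
    pm = np.array([p[mask][idx == i].mean() for i in range(1, nbins) if np.any(idx == i)])
    return qm, pm

A log-log fit of pm against qm at small q then distinguishes the bending-dominated (slope ≈ -(4-η)) and stress-dominated (slope ≈ -2) regimes.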
§.§.§ Comparison to Kramers' theory
We proceed to compare our molecular dynamics simulations to the predictions made by the underdamped Kramers' theory along with the elastic mean field theory described in Sec. I. We first estimate Δ̅=(Δ-Δ_c)/Δ_c using a time average of the in-plane displacement at a variety of temperatures. The critical compression is obtained by computing the height susceptibility in analogy to the classical Ising model <cit.>.
Eq. <ref> predicts that the log of the transition rate falls as -3παΔ̅^2/512. Fig. <ref> shows a semi-log plot of the transition rate as a function of Δ̅: the curves indeed follow this form at large Δ̅. Further confirmation of Eq. <ref> is found by normalizing R by the best-fit y-intercept R_0. The bottom plot of Fig. <ref> shows the normalized transition rate, R/R_0, as a function of the full argument 3παΔ̅^2/512. We see a near linear collapse.
Thermal fluctuations usually promote transitions between distinct energy minima. We find instead that they suppress transitions in the L≫ℓ_th regime, locking the membrane in one of the two tilted states. Recall that displacements are measured with respect to a free-standing configuration whose overall area is shrunk by thermal fluctuations: W(T_1)<W(T_2) for T_1>T_2. As temperature is increased the strain induced at the clamp grows, driving the system deeper into the tilted phase.
§.§.§ The role of geometry
We now turn to the role of geometry as controlled by the aspect ratio. Geometrical tuning offers a very different addition to the experimental toolkit which may well be more feasible and reliable <cit.> than precise tuning of temperature and does not require any new materials or external fields. To explore this dependence we fix a temperature in the tilted phase and simulate a set of distinct aspect ratios in the range 0.5<α<9 and extract the simulated dwell time, τ_dwell. Fig. <ref> shows the inverse dwell time normalized by the maximum value for a set of temperatures corresponding to L/ℓ_th≈ 0.9,2.1,2.7,3.3. For L>ℓ_th, there is a clear α-dependence with a minimum for α≈4-5. We can compare this to Fig. <ref>(b) where Kramers' theory also predicts this low transition-rate region. This α-window corresponds to confinement in one of the tilted wells with rare transitions.
§ CONCLUSION
Combining a one-dimensional mean-field model of a thermalized thin elastic sheet with cantilever boundary conditions and Kramers' transition state theory, we have analyzed the transition dynamics of the tilted state in the regime where the width exceeds the length (α>1). Renormalization of the elastic constants due to thermal fluctuations beyond the thermal lengthscale leads to a cancellation of temperature in the Boltzmann factor of the transition probability, leaving a dominant dependence on the aspect ratio. Implicit temperature dependence enters via the relative compression Δ̅, slowing the dynamics and suppressing transitions between the two degenerate tilted states. Below the critical crumpling transition, the transition rate is low, locking the system in one of the two tilted phases. A key role is played by the effective stress at the clamp with respect to a free thermalized sheet.
The predictions of Kramers' theory are verified by analyzing the variation of the transition rate with the compression Δ̅. The transition rate exhibits the expected Arrhenius behavior ∼exp(-CΔ̅^2) with C=3πα/512.
Clamped thermalized sheets possess a rich dependence on the purely geometrical aspect ratio with the transition rate reaching a minimum for α_min≈ 4.
The temperature ranges where we observe the behavior studied here are currently beyond standard 2D-metamaterials such as micron-scale room temperature graphene. Perforated sheets and other kirigami-like structures <cit.> which lower the bare bending rigidity and enhance bending fluctuations, as well as permitting new bending configurations, may allow the observation of tilt and its dynamics in experimentally realizable systems.
In particular geometric control of the dynamical switching exhibited by the elastic sheets studied here should have rich applications in micro- and nanoelectromechanical systems (MEMS/NEMS) <cit.>.
§ ACKNOWLEDGEMENTS
This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. This material is based upon work supported by the National Science Foundation California LSAMP Bridge to the Doctorate Fellowship under Grant No. HRD-1701365. P.Z.H acknowledges support through NSF Grant No. DMR-1608501 and via the Harvard Materials Science Research and Engineering Center, through NSF Grant No. DMR-2011754. We also thank the KITP program, “The Physics of Elastic Films: From Biological Membranes to Extreme Mechanics,” supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
§ GRADIENT AVERAGE ESTIMATE OF STRESS TENSOR
An alternative way to obtain a version of Fig. <ref> is via a discrete, area-weighted average of the gradient operator on the lattice. This is a method that is commonly implemented for various differential operators defined on meshes <cit.>.
Consider a graph G={V,E} whose vertices V and edges E coincide with those of our simulated lattice, and define a vector field f⃗_i: V→ℝ^2 on the vertices, i∈ V. We start by focusing on a single vertex v_i∈ V and its neighborhood (or star) 𝒩(v_i) = {T_I}_I=1^6, where T_I is one of the six triangles making up the neighborhood of v_i. We then compute the gradient defined in each triangle T_I. This is done via barycentric interpolation of the three values of f⃗ on the vertices of T_I. The gradient at triangle T_I is given by
(∇ f_T_I)_ij = (f_i-f_k)(v_k-v_j)^⊥/2 A_T_I + (f_j-f_k)(v_i-v_k)^⊥/2A_T_I
where v^⊥ is the 90^∘ rotation of vector v and A_T_I is the area of triangle T_I. This gives a gradient tensor in the basis of the faces of G.
In order to move back to the basis of vertices we compute a weighted average over the neighborhood of vertex v and define the gradient at v as
(∇ f_v)_ij = 1/∑_T_I∈𝒩(v) A_T_I∑_T_I∈𝒩(v)A_T_I(∇ f_T_I)_ij
We can now compute the mesh gradient of the in-plane displacement field, u_i, as defined on the vertices of the deformed lattice. The in-plane gradient is then obtained by symmetrizing the gradient as computed above, that is, U_ij = (∂_iu_j +∂_j u_i)/2. Fig. <ref> below shows the strain map obtained using this method. Note that this gives very similar results to the method shown in the main text.
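For concreteness, a minimal NumPy sketch of this gradient-averaging step, applied to one component of the displacement field, is given below; it assumes counter-clockwise oriented triangles, and all names are ours:

import numpy as np

def vertex_gradient(verts, faces, f):
    # verts: (Nv, 2) in-plane positions; faces: (Nf, 3) vertex indices (CCW);
    # f: (Nv,) one component of the displacement field sampled on the vertices
    i, j, k = faces[:, 0], faces[:, 1], faces[:, 2]

    def perp(v):                            # 90-degree rotation: (x, y) -> (-y, x)
        return np.stack([-v[:, 1], v[:, 0]], axis=1)

    d1 = verts[j] - verts[i]
    d2 = verts[k] - verts[i]
    area = 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])

    # per-face gradient, following the formula above
    grad_face = ((f[i] - f[k])[:, None] * perp(verts[k] - verts[j])
                 + (f[j] - f[k])[:, None] * perp(verts[i] - verts[k])) / (2.0 * area[:, None])

    # area-weighted average of the face gradients back onto each vertex
    grad_vert = np.zeros((len(verts), 2))
    wsum = np.zeros(len(verts))
    for c in range(3):
        np.add.at(grad_vert, faces[:, c], area[:, None] * grad_face)
        np.add.at(wsum, faces[:, c], area)
    return grad_vert / wsum[:, None]

Applying this to both displacement components and symmetrizing, U_ij = (∂_iu_j+∂_ju_i)/2, gives the strain map discussed above.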
§ COMPARISON OF CLAMPING STRESS TO THEORY
Previous work <cit.> on this system estimated the in-plane stress using a doubling method that converts the clamped boundary condition into an internal condition. This results in the stress components shown in Fig. <ref> for a sheet of α = 5 and ϵ = 0.02. Comparing to the simulated stresses (Figs. <ref>,<ref>) we see a similar accumulation of high extensile stress in the yy component for both theory and simulation, indicating the extension at the clamp relative to the free thermalized state. We do see, however, a discrepancy in the location of this extension along the x-axis. Theory predicts this should be localized near the clamped side (x=0). In the simulated stress we instead have high extension near the edge opposite to the clamp (x=20a), with a region of low extension near the middle of the clamped edge. A possible explanation of this is the auxetic behavior of a free thermalized sheet. It is well known that free thermalized polymerized sheets are controlled by a Foppl-von Karman fixed point with a negative Poisson ratio. Using a negative Poisson ratio for the stiffness matrix used to calculate our simulated stresses leads to extensile behavior not found in typical solids. If we assume clamping renders the Poisson ratio positive (at least in a neighborhood of the clamp, which can be quite large) we can recover the compressive behavior in σ_xx.
|
http://arxiv.org/abs/2307.03344v1
|
20230707011946
|
Deep Synoptic Array Science: First FRB and Host Galaxy Catalog
|
[
"C. J. Law",
"K. Sharma",
"V. Ravi",
"G. Chen",
"M. Catha",
"L. Connor",
"J. T. Faber",
"G. Hallinan",
"C. Harnach",
"G. Hellbourg",
"R. Hobbs",
"D. Hodge",
"M. Hodges",
"J. W. Lamb",
"P. Rasmussen",
"M. B. Sherman",
"J. Shi",
"D. Simard",
"R. Squillace",
"S. Weinreb",
"D. P. Woody",
"N. Yadlapalli"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.GA"
] |
Casey J. Law
[email protected]
These authors contributed equally to this work.
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
These authors contributed equally to this work.
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Owens Valley Radio Observatory, California Institute of Technology, Big Pine CA 93513, USA
Cahill Center for Astronomy and Astrophysics, MC 249-17 California Institute of Technology, Pasadena CA 91125, USA
Fast Radio Bursts (FRBs) are a powerful and mysterious new class of transients that are luminous enough to be detected at cosmological distances. By associating FRBs to host galaxies, we can measure intrinsic and environmental properties that test FRB origin models, in addition to using them as precise probes of distant cosmic gas. The 110-antenna Deep Synoptic Array (DSA-110) is a radio interferometer built to maximize the rate at which it can simultaneously detect and localize FRBs. Here, we present the first sample of FRBs and host galaxies discovered by the DSA-110. This sample of 11 FRBs is the largest uniform sample of localized FRBs to date and is selected based on association to host galaxies identified in optical imaging by Pan-STARRS1 and follow-up spectroscopy at the Palomar and Keck observatories. These FRBs have not been observed to repeat and their radio properties (dispersion, temporal scattering, energy) are similar to those of the known non-repeating FRB population. Most host galaxies have ongoing star formation, as has been identified before for FRB hosts. In contrast to prior work, a large fraction (four of eleven) of the new sample are more massive than 10^11 M_⊙ and most had elevated star formation rates more than 100 Myr in their past. The distribution of star-formation history across this host-galaxy sample shows that the delay-time distribution is wide, spanning from ∼100 Myr to ∼10 Gyr. This requires the existence of one or more progenitor formation channels associated with old stellar populations, such as the binary evolution of compact objects.
§ INTRODUCTION
Fast Radio Bursts (FRBs) are a new class of ≪1 s radio transient that is powerful enough to be seen from distant galaxies <cit.>. The nature of the source (or sources) that make FRBs is not known yet. Several hundred bursts have been detected <cit.>, some of which repeat stochastically <cit.> and some of which burst with quasiperiodic patterns over a wide range of timescales <cit.>. Polarimetric measurements have identified dynamic, highly magnetized plasma in the parsec-scale environments of some FRB sources <cit.>.
The need to localize FRBs to arcsecond (or better) precision has motivated a new generation of radio instruments <cit.>. For the roughly two dozen localized FRBs, the story told by their host galaxies and host environments is confusing. Most FRBs are associated with actively star-forming galaxies <cit.>, consistent with the discovery of FRB-like bursts from a magnetar in our own galactic disk <cit.>. However, some FRBs have been associated with much older stellar systems, including a nearby globular cluster <cit.> and a quiescent galaxy <cit.>. A small fraction have persistent radio counterparts, but many other FRBs have strict limits on associated persistent emission <cit.>. Therefore, a concordant picture of the FRB progenitors has not yet emerged from studies of FRB hosts.
A major question raised with the discovery of FRBs remains unanswered: what source (or sources) produces these powerful bursts? Their occurrence rate <cit.> is too high to be attributed solely to cataclysmic events such as classes of supernova. Models based on magnetized neutron stars can reproduce many burst properties and FRB associations <cit.>. However, even this one kind of object can be formed in young (through core-collapse supernovae) or older (through compact binary coalescence) stellar environments <cit.>. Some FRB behaviors, such as quasiperiodic activity timescales, have been used to argue that binarity plays a role <cit.>. Finally, as proposed in <cit.> and <cit.>, multiple formation channels may exist and be associated with specific stellar environments, as is the case for supernovae <cit.> and gamma-ray bursts <cit.>. If so, then a large sample of FRB host galaxy associations and careful statistical analysis will be required to identify specific classes of FRB source and their formation channels.
Here, we present the first sample of FRBs discovered by the DSA-110. This sample was selected from FRB discoveries made during science commissioning from February to October 2022. We selected all FRBs that are associated with a host galaxy in Pan-STARRS1, which we used to guide follow-up optical spectroscopic and infrared photometric observations. This sample represents a significant increase in the number of FRBs with quality host galaxies identified over the last seven years <cit.>. Three FRBs included in this sample have been reported in other publications <cit.> and this work uses the same instrument and analysis techniques described previously.
Our goal is to present a uniform sample of FRBs and their host galaxies to probe ideas for FRB origin (see <ref>). The DSA-110 was in science commissioning during this observing period, so the overall observing efficiency is not measured well enough to estimate an FRB rate. However, the FRB characterization and host identification is reliable enough to use this sample to study burst phenomenology and galaxy properties (see <ref>). In <ref>, we discuss details of FRB host galaxies and how their star-formation history is most consistent with FRB sources formed over a wide range of delay times. This is supported by analysis of the nearest FRBs and hosts, for which we have more reliable host identification and high physical resolution. We conclude by discussing FRB origins and whether multiple models are required in <ref>. We adopt standard cosmological parameters from <cit.>.
§ OBSERVATIONS
§.§ DSA-110
The DSA-110 is built to discover, localize, and characterize FRBs. The first FRB localization by the instrument demonstrated the techniques used for the current sample <cit.>. A full description of the DSA-110 interferometer and FRB search system will be presented in Ravi et al (in prep).
FRBs described here were detected during science commissioning with an array of 63, 4.65-meter antennas distributed over a 2.5-km area at the Owens Valley Radio Observatory[See <https://www.ovro.caltech.edu>.]. The FRB search system uses 48 antennas arranged as a linear array to form 256 coherent “fan beams” spaced by 1, thus spanning a little beyond the primary beam width of 3.4 deg (FWHM). The DSA-110 is a transit telescope with motorized control over antenna elevation. During all observations described here, the array was pointed at a declination of 71.6^∘.
The digital and software systems search a continual stream of calibrated power beams in real time for FRBs. The FRB search is designed to avoid and remove radio frequency interference (RFI) by pointing away from known RFI sources, automatically suspending search during RFI “storms”, and data cleaning. The search system is optimally sensitive to temporal widths from 0.26 ms to 8.32 ms, and FRB-like dispersion measures (DM) of 50 pc cm^-3 to 1500 pc cm^-3. For the fixed declination of the observations, the instrument observed Galactic latitudes from 8^∘ to 46^∘, which corresponds to a Milky Way DM contribution ranging from 133 to 36 pc cm^-3 <cit.>.
The search system is designed to automatically identify FRBs based on several criteria. A candidate must have a power excess with significance greater than 8.5 in some beam, must not be associated with strong terrestrial interference, and must have a DM greater than both 50 pc cm^-3 and 0.75 times the Galactic DM contribution. For each candidate FRB, the system saves metadata and channelized voltage data from the entire array (48 search antennas, plus 15 outrigger antennas). The voltage data can fully reproduce the real-time search data, as well as enable fine localization, high time- and frequency-resolution analysis, and full polarimetry. Verification of each candidate is ultimately reliant on a successful interferometric localization.
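These per-candidate criteria amount to a simple filter, sketched below; the thresholds are the ones quoted above, while the function and argument names are ours and do not correspond to the actual DSA-110 search code:

def passes_candidate_cuts(snr, dm, dm_galactic, rfi_flagged,
                          snr_min=8.5, dm_floor=50.0, dm_frac=0.75):
    # reject events flagged as terrestrial interference
    if rfi_flagged:
        return False
    # require sufficient detection significance in at least one beam
    if snr <= snr_min:
        return False
    # DM must exceed both the absolute floor and a fraction of the
    # Milky Way contribution along the line of sight
    return dm > max(dm_floor, dm_frac * dm_galactic)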
Table <ref> and Figure <ref> present the first sample of 11 FRBs detected by DSA-110 and robustly associated with counterparts detected by Pan-STARRS1 (<ref>). These events were identified during science commissioning from January to October 2022. Analysis of this sample is presented in <ref>. A more detailed analysis of the burst properties, including full polarimetry, will be presented in Sherman et al (in prep) and Chen et al (in prep). DSA-110 FRB detections, including data to aid in reproducing this analysis, are available on the [DSA-110 archive]http://code.deepsynoptic.org/dsa110-archive and in the FRB software and data repository <cit.>.
§.§ Pan-STARRS1
Figure <ref> shows Pan-STARRS1 <cit.> gri imaging of the likely FRB host galaxies. PS1 provides images and catalogs for the entire sky north of a declination of –30 deg (the “3pi Steradian Survey”). Multi-epoch image stacks reach a depth of 21 to 23rd magnitude (5σ) with a median seeing from 1.3 to 1.0 arcsec in g, r, i, z, and y bands.
DSA-110 localizes FRBs with a precision better than roughly ±2 arcsec at 90% confidence, which corresponds to 3.8 kpc at a redshift of 0.1. This resolution is similar to the half-light radius of a typical galaxy <cit.>. Thus, in many cases, DSA-110 FRBs can be associated with individual galaxies at the characteristic distance of the FRB population. However, by requiring robust association with PS1, we introduce an optical magnitude limit to the FRB host sample. Selection requires that the host be detected in r and i bands above the 5σ detection limits of 23.2 and 23.1 mag.
§.§ Associating FRBs to Host Galaxies
Table <ref> summarizes the properties of the host galaxies used to associate them with the FRBs. We use <cit.> to calculate the association probability for each FRB to its host galaxy. The Bayesian framework estimates the association probability for all nearby galaxies from the FRB position and error, as well as galaxy magnitude, location, and size. Galaxies with an association probability greater than 0.9 are considered robust, given the conservative assumptions, and thus included in this sample.
We use the adopted priors in <cit.>, defined as an exponential FRB angular offset distribution and an association probability that scales inversely to the number density of galaxies at a given magnitude (“exp” and “inverse”, respectively). We deviate from the adopted set of priors by using a non-zero prior on undetected host galaxies, P(U). <cit.> analyzed the false-positive and false-negative association rate for simulated FRB host galaxies. At the depth of PS1 r-band imaging, a reliable and accurate association probability is found for P(U)=0.2. All 11 FRBs are covered by Pan-STARRS1 (DR2), but six of the FRBs are covered by the deeper Legacy survey <cit.>. For FRBs covered by the Legacy survey, we use P(U)=0.1.
For each FRB, we build a galaxy catalog from PS1 stack images or Legacy catalogs by selecting for resolved sources within 30. For PS1, point-like sources are removed if they are in the PS1-PSC catalog with “ps_score” less than 0.83 <cit.>. No compactness filter was used for Legacy Survey sources, but all FRB host galaxies are resolved, with R_e>1. For PS1, we use the r-band Kron radius to represent the galaxy's size, which is within 20% of the half-light radius under realistic scenarios <cit.>.
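To make the ingredients of the association explicit, the toy calculation below combines an “inverse” magnitude prior with an “exp” offset likelihood for a set of candidate hosts. It is a deliberately simplified sketch of the kind of calculation performed by the adopted framework, not the actual PATH algorithm, which marginalizes over the unknown true position of the FRB within each galaxy and treats P(U) self-consistently; all names are ours.

import numpy as np

def toy_association_probs(offset_arcsec, r_half_arcsec, mags,
                          sigma_frb_arcsec, density_brighter_than):
    # density_brighter_than: callable m -> sky density of galaxies brighter
    # than magnitude m (per square arcsec)
    offset = np.asarray(offset_arcsec, dtype=float)
    r_half = np.asarray(r_half_arcsec, dtype=float)
    mags = np.asarray(mags, dtype=float)

    # "inverse" prior: down-weight magnitudes at which galaxies are common
    prior = 1.0 / density_brighter_than(mags)

    # "exp" offset likelihood with a scale set by localization error and galaxy size
    scale = np.sqrt(sigma_frb_arcsec**2 + r_half**2)
    like = np.exp(-offset / scale) / scale

    post = prior * like
    return post / post.sum()    # normalized over detected candidates only (ignores P(U))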
§.§ Optical/IR Photometry
Table <ref> summarizes the photometric and spectroscopic measurements made and used in modeling of host galaxies. We performed photometry on images from wide-area OIR surveys PS1, Two Micron All Sky Survey <cit.> and ALLWISE <cit.> surveys. We also acquired IR photometric data in J and H bands with the Wide Field Infrared Camera <cit.> observing instrument mounted on the Palomar 200-inch Telescope. These observations were acquired on August 16, 2022, and were reduced by a custom pipeline following the standard data reduction pipeline procedures.
We largely follow the analysis framework established in <cit.> for characterizing the host galaxies of these FRBs (including removal of a star on the host galaxy of ). We execute isophotal analysis on these host galaxies in PS1 i-band images, where the center and the size of the galaxy are left as free parameters. We identify the best isophote as the one that captures ≳95% of the galaxy. We further scale this best-fit isophote by the point spread function of different photometric bands for performing aperture photometry.
§.§ Optical/IR Spectroscopy
Optical spectroscopy is crucial for accurate redshift measurement, line-flux estimates to better understand the nature of ongoing star-formation and nuclear activity, and for modeling the ages of the stellar populations in a galaxy. The age of the stellar population of an FRB host galaxy, and preferably host environment, encodes the age of the probable progenitor object, which can be determined by probing host-galaxy star-formation histories.
Table <ref> summarizes the spectroscopic instruments used for this FRB sample.
We observed with the Low-Resolution Imaging Spectrometer on the Keck I telescope <cit.> on 26 May, 21 July, and 17 October of 2022. On the last of these observing sessions, the red component of the detector had malfunctioned. We used a mirror to direct light into the blue arm, and the light was dispersed using a 300/5000 grism. The spectra were reduced with the standard lpipe software <cit.> and calibrated using observations of the standard star BD+28 4211. The spectrum was scaled to match the PS1 photometry to account for slit losses.
We observed with the Double Spectrograph <cit.> mounted at the Palomar Hale 200-inch telescope on 1 June, 12 December of 2022 and 28 May 2023. Data were reduced following procedures described in <cit.>.
We measure the spectroscopic redshift and the line fluxes of the host galaxies using the penalized PiXel-Fitting software <cit.> by jointly fitting the stellar continuum and nebular emission using the MILES stellar library <cit.>.
§.§ Radio and High-Energy Counterpart Search
We searched for compact radio and X-ray counterparts in catalogs and images associated with large surveys. Radio catalogs with coverage of this declination range include VLASS <cit.>, NVSS <cit.>, TGSS <cit.>, GB6 <cit.>, WeNSS <cit.>, VLSSr <cit.>, and LoTSS <cit.>. At high energies, we searched the Chandra Source Catalog 2 <cit.> and XMM/Newton 4XMM-DR11 <cit.>. No counterpart was found for any of our sample in these catalogs.
As the nearest non-repeating FRB, the environment and counterparts to can be probed especially well. We observed with the Karl G. Jansky Very Large Array (VLA) under program 22A-490 to search for a persistent radio counterpart and repeat bursts. We observed with antennas in the A configuration in the 1.4 and 5 GHz bands. The correlator was configured for continuum observing and a commensal 10-ms transient search with the realfast instrument <cit.>. We detected no burst or persistent counterpart and the continuum imaging sensitivity was approximately 21 microJy/beam in both bands (1σ, dual-polarization, robust weighting).
§ ANALYSIS
§.§ Burst Properties
The radio bursts themselves can be used to constrain both intrinsic properties and propagation effects. Bursts are characterized by DM, intrinsic width and structure, scintillation, scattering, as well as polarimetric properties. For the total-intensity burst properties considered here, we find that the DSA-110 sample is typical of the non-repeating population identified previously. Figure <ref> shows the extragalactic DM compared to burst fluence for DSA-110 and other FRB discoveries. The new sample has a median fluence of ∼5 Jy ms, compared to ∼20 Jy ms for the reference sample of localized FRBs with data available in <cit.>.
Figure <ref> also shows the luminosity and timescale distribution for Galactic and extragalactic millisecond transients <cit.>. The present sample is somewhat shorter and more luminous than the bulk of previous detections. This offset reflects the sensitivity and relatively fast sampling time of the DSA-110 search system. We note that the requirement for a PS1 host biases against very distant hosts and high FRB luminosities. Future FRB samples from the DSA-110 with dedicated follow-up optical observing should be expected to show higher burst luminosities.
Of special interest is whether bursts repeat or have complex “substructure” within individual bursts.
Using the burst classification defined in <cit.>, we characterize eight bursts as “broad” and three as “complex”. These terms are synonymous with having single or multiple millisecond-scale components in time, respectively. We note that the relatively fast time resolution and relatively narrow bandwidth of DSA-110 make it easier to detect and identify bursts as complex, while limiting the ability to see narrow or downward drifting bursts. A more complete discussion of burst classification with the inclusion of polarimetric properties will be presented in Sherman et al. (in prep), and a detailed analysis of the total-intensity properties will be presented in Chen et al. (in prep).
None of the bursts were observed to repeat on timescales longer than a second. Based on the observing pattern during the science commissioning period, the FRB locations were nominally observed for 150 hours through the 2022 observing year[The effective observing time is currently not well constrained, so no rate estimate can be calculated.]. If the three multi-component bursts are treated as repeat bursts, then the wait times range from 0.5 to 3 ms. This is consistent with first peak of the wait time distribution for active repeating FRBs such as 121102 <cit.>.
§.§ Host Galaxy SED Modeling
We obtain the stellar properties of our set of host galaxies using the stellar population synthesis modeling software Prospector <cit.>. We sample from the posterior using the ensemble sampler emcee <cit.>. We follow the SED modeling approach described in <cit.>, which includes a discussion of spectral processing, model parameters, and priors.
Generally, the photometric and flux-calibrated spectroscopic observations are jointly fit when possible. The host galaxies of , , and have low signal-to-noise ratios in their spectra and thus the absorption features are not well characterized, due to which we refrain from doing a joint photometric-spectroscopic fit. Therefore, for these three hosts, we only do a photometric SED modeling with similar procedures to <cit.>, but drop out spectroscopic calibrations from the model parameters. For , we use legacy survey data instead of PS1 in our SED modeling due to better SNR in legacy data. The resulting SED fits for all the host galaxies, along with their constrained non-parametric star formation histories, are displayed in Figure <ref> and Figure <ref>, respectively. The set of derived host properties is summarized in Table <ref> and plotted in Figure <ref>. Best-fit line fluxes are shown in Table <ref>. We note that the recent star formation history is not well constrained in a few cases. We suspect that this is primarily due to the lack of UV photometry in our SED modeling.
§.§ Limits on Persistent Radio Sources
Persistent radio sources (PRS) are luminous compact radio sources that are spatially coincident with FRBs. They have been robustly associated with two FRBs <cit.> and have been a valuable driver of models for the physics of FRBs <cit.>.
We find no persistent radio counterparts associated with these DSA-110 FRBs. The VLASS 3σ limit is 0.5 mJy (at 3 GHz). Using the PRS luminosity threshold proposed previously <cit.>, we can exclude the presence of a PRS in FRBs and . FRBs , , and are slightly above the limit, but still fainter than the two confirmed PRS. Note that this luminosity limit is mostly defined to exclude astrophysical foregrounds and that non-detections do not necessarily rule out PRS-like emission.
We reviewed recently published FRB localizations to build on the sample of known PRS and upper limits <cit.>. Among new localizations from ASKAP and MeerKAT, we find that non-repeating FRBs 20211212A, 20211127I, 20210405I, and 20210410D all have significant constraints on PRS counterparts. FRBs presented in <cit.> have less robust associations to host galaxies and <cit.> includes a PRS candidate that may be attributed to its host galaxy, so they are not counted here.
Considering these new measurements, we find 15 non-repeating FRBs with upper limits on PRS counterparts. The measurements toward repeating FRBs remain unchanged, with 2 detections out of 6 useful measurements. Repeating the analysis of <cit.>, we place a 90% upper limit on the fraction of non-repeating FRBs with a PRS at 0.14, while the bounds on the repeater fraction are unchanged. The chance that the subset of repeating FRBs could randomly be associated with the two PRS sources is 8%. We conclude that there is weak evidence for repeating FRBs to be associated with PRS.
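The quoted 90% upper limit follows from simple binomial statistics for zero detections; a sketch of the calculation (the function name is ours) is:

def zero_detection_upper_limit(n_trials, confidence=0.90):
    # largest fraction p for which zero successes in n_trials is still
    # probable at the (1 - confidence) level: (1 - p)**n_trials = 1 - confidence
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

print(zero_detection_upper_limit(15))   # ~0.14 for 0 of 15 non-repeaters with a PRS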
§ DISCUSSION
§.§ What Kind of Galaxies Host FRBs?
Analyses of the host galaxies of populations of extragalactic sources can provide insights into formation and evolution channels. We use the present DSA-110 sample of localized FRBs to first explore the characteristics of the host galaxies. In <ref> below, we focus on the star-formation histories as probes of the formation of FRB progenitors via the delay time distribution.
The basic properties of the host galaxies of the DSA-110 sample, and the FRB sightlines through these galaxies, are not unusual with respect to the existing sample of localized FRBs <cit.>. The left panel of Figure <ref> shows the flux ratios of nebular emission lines observed from the DSA-110 hosts, in a BPT diagram <cit.>. Most measurements and constraints on the line ratios are consistent with nebular emission driven by star-formation activity, and we see no compelling evidence for AGN contributions. We note that is more consistent with line ratios seen in LINER galaxies, although that line ratio was measured directly on the nucleus of the (well resolved) galaxy <cit.>. This BPT analysis justifies the standard techniques used for stellar population synthesis modeling.
The right panel of Figure <ref> compares the published and present sample of FRBs to a model of DM from the intergalactic medium. The median redshift of the DSA-110 sample is 0.24. This analysis assumes DM = DM_ISM,MW + DM_halo,MW + DM_IGM + DM_host/(1+z). This model is a reasonable representation of the DM-redshift relationship, and no egregious outliers are present in the DSA-110 sample relative to published FRBs, although there are four DSA-110 FRBs with some excess DM (, , , ). In the case of , this is directly attributed to the intracluster gas associated with the host galaxy <cit.>, and contributions from intervening structures (e.g., filaments, groups) are not fully characterized for other DSA-110 FRBs.
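The model DM_IGM(z) curve can be approximated with the mean Macquart relation. The sketch below assumes a fully ionized intergalactic medium holding a fraction f_IGM of the cosmic baryons with roughly 7/8 free electrons per baryon and uses the Planck 2018 cosmology; the exact model adopted in the figure may differ:

import numpy as np
from astropy import constants as const, units as u
from astropy.cosmology import Planck18

def dm_igm(z, f_igm=0.83, chi_e=7.0 / 8.0):
    # mean IGM dispersion measure out to redshift z
    prefac = (3.0 * const.c * Planck18.Ob0 * Planck18.H0
              / (8.0 * np.pi * const.G * const.m_p)) * f_igm * chi_e
    zp = np.linspace(0.0, z, 1000)
    integral = np.trapz((1.0 + zp) / Planck18.efunc(zp), zp)
    return (prefac * integral).to(u.pc / u.cm**3)

print(dm_igm(0.24))   # roughly 200 pc cm^-3 at the median redshift of this sample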
Useful morphological information is only accessible for a handful of DSA-110 hosts. As discussed by <cit.>, the host of is a face-on barred spiral and the FRB is significantly offset. The disk-dominated host galaxy of is viewed nearly edge-on, and a significant DM_IGM + DM_host/(1+z)=170 pc cm^-3 is observed (assuming DM_halo,MW=10 pc cm^-3) despite the low redshift of 0.043. Detailed morphological analysis will be presented in Sharma et al. (in prep).
Important insight can be gained from an analysis of the total stellar mass and recent star-formation rates of the FRB host-galaxy sample. Figure <ref> shows the stellar mass and star-formation rate for both the published and DSA-110 FRB host galaxies in three redshift bins. The star-formation rates are averaged over the last 100 Myr from a non-parametric star-formation history analysis. For reference, we used the largest field galaxy sample modeled using similar techniques <cit.>, following <cit.>, and the published FRB hosts were also analyzed using similar techniques. The DSA-110 sample spans two orders of magnitude in both stellar mass and star-formation rate, and three orders of magnitude in specific star-formation rate. This diversity of host properties is consistent with the published FRB sample.
The DSA-110 sample is subject to a selection effect on optical magnitude. For example, using <cit.>, we estimate the characteristic galaxy that is detectable in PS1 (5σ stack limits). For galaxies with mass-to-light ratios from 0.7 to 1, we find minimum stellar masses, M_*, min, of 8×10^8 M_⊙ at z=0.1, 9×10^9 M_⊙ at z=0.3, and 4×10^10 M_⊙ at z=0.6. This selection effect is clear in Figure <ref>. While the minimum stellar mass of a DSA-110 host excludes dwarf galaxies, the present sample is sensitive to the star-forming main sequence and quiescent galaxies in all redshift bins.
Early analysis of FRB hosts <cit.> seemed to identify FRB hosts as offset from the star-forming main sequence and more closely associated with the “green valley” galaxies with slowing star-formation <cit.>. However, newer FRB host modeling analysis that compares against non-parametric star formation histories shows no such offset <cit.>. Figure <ref> confirms that FRB host galaxies are not associated with green valley galaxies. Instead, we associate FRBs to both the star-forming main sequence and quiescent hosts, which dominate the field galaxy population by number and stellar mass.
The major new result from the DSA-110 FRB host-galaxy sample is the prevalence of massive hosts with low specific star-formation rates. Four of 11 hosts in our sample have stellar masses greater than 10^11 M_⊙ (, , , ), and all have specific star-formation rates below the star-forming main sequence <cit.>. This preference for massive hosts initially seems to contrast with the published sample of FRB hosts (see Figure <ref>). Although our sample is defined by detection in PS1 and association using priors that favor brighter galaxies, our method does not deviate from that used in the community and is unlikely to be biased for the stellar masses under consideration. The DSA-110 is a more sensitive FRB detector than other instruments capable of interferometric localization, and our FRB sample is fainter than other localizations (see Figure <ref>). However, for this to be a source of bias would require apparently fainter FRBs to be associated with more massive hosts, something that we do not consider likely pending further investigation. Selection effects, particularly due to radio propagation, may lead to a preference for lower specific star-formation rates <cit.>, but that bias is not unique to the present sample. Considering that there is no known bias and that the number of massive host galaxies is still small, we conclude that the DSA-110 sample is the first that is large and uniform enough to reveal these rarest kinds of host.
We proceed by considering the implications of the massive hosts among the DSA-110 sample alone. Figure <ref> shows the cumulative distribution of the hosts in stellar mass, in comparison with the cumulative distributions of stellar mass and star-formation rate of the background galaxy population. The background galaxy population was obtained from the COSMOS sample modeled with Prospector <cit.>, using selections identical to the PS1 magnitude limits for the FRB host sample. Roughly half of the star formation in the Universe (at z<0.7) occurs in galaxies that contain only 15–30% of its stellar mass. That distinction, and how it evolves with redshift, allows us to associate FRB formation with specific environments by comparing mass distributions. If FRB occurrence is tied to current star formation or to the accumulated history of past star formation, we expect the host stellar-mass distribution to trace the star-formation-weighted or the mass-weighted distribution, respectively. Figure <ref> shows that FRB hosts at z<0.2 show no clear preference for either distribution, but that 0.2<z<0.7 hosts follow the stellar-mass distribution.
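The comparison amounts to weighting the background stellar-mass distribution either by stellar mass or by star-formation rate and asking which weighted cumulative distribution the FRB hosts follow. The toy sketch below illustrates only the construction; the background sample, its main-sequence scaling, and any test statistic are placeholders, not the COSMOS/Prospector quantities used here.

```python
import numpy as np

def weighted_cdf(values, weights):
    """Cumulative distribution of `values`, with each galaxy weighted by `weights`."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    return v, np.cumsum(w) / w.sum()

rng = np.random.default_rng(0)
logm = rng.normal(10.0, 0.6, 5000)              # toy background log stellar masses
mass_w = 10.0 ** logm                           # weight by stellar mass
sfr_w = 10.0 ** (0.7 * (logm - 10.0))           # toy star-forming main-sequence weight
x_m, cdf_mass = weighted_cdf(logm, mass_w)
x_s, cdf_sfr = weighted_cdf(logm, sfr_w)
# The unweighted empirical CDF of FRB host masses would be overplotted on these two curves
# and compared, e.g. with a KS-type statistic, in each redshift bin.
```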
This result seems to be at odds with previous FRB host or rate analyses <cit.>. More detailed insight can be gained by associating the occurrence of FRBs to specific stellar-population ages via a delay time distribution, as described below.
§.§ Delay-Time Distribution
FRBs and other transients are produced by sources that are born during periods of star-formation. Galaxies that are actively forming stars are associated with short-lived sources <cit.>, while early-type galaxies are more likely to be associated with long-lived transient progenitors <cit.>. The delay-time distribution (DTD) describes the time between the formation of stars and transient events and has been used to study short GRBs <cit.>, SN Ia <cit.>, and core-collapse supernovae <cit.>.
The delay-time distribution is used to statistically infer transient origins. For example, the minimum delay time for core-collapse supernovae was initially assumed to be tied to the lifetime of massive stars (t_d≈20-50 Myr). However, DTD analysis now supports theoretical modeling that shows binary evolution introduces a tail of ∼15% “late” explosions <cit.>.
Compact object binaries are expected to inspiral as they emit gravitational radiation, producing a DTD power-law slope β=-1. For comparison, short GRBs have a delay-time power-law slope β=-1.83^+0.35_-0.39 <cit.>, and Type Ia supernovae have β=-1.13^+0.04_-0.06 <cit.>.
For the present host galaxy sample, we model the DTD as a powerlaw with a minimum, maximum, and slope for the FRB time relative to the star-formation history of each galaxy <cit.>. The probability of detecting an FRB is assumed to follow a Poisson distribution, such that for a given host galaxy i and a star formation history posterior sample j, the expected rate of FRBs is ṅ_i^j at redshift z_i^j.
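Schematically, the expected rate for each host is a convolution of its binned star-formation history with the power-law DTD. The sketch below shows that step with an arbitrary efficiency normalization and toy SFH bins; the actual likelihood marginalizes over SFH posterior samples and DTD parameters.

```python
import numpy as np

def dtd(tau, t_min, t_max, beta):
    """Power-law delay-time distribution in Gyr, zero outside [t_min, t_max]."""
    tau = np.asarray(tau, dtype=float)
    return np.where((tau >= t_min) & (tau <= t_max), tau ** beta, 0.0)

def expected_rate(sfr_bins, bin_ages, t_min=0.25, t_max=6.6, beta=-1.75, eff=1e-5):
    """Expected FRB rate: sum over SFH bins of SFR x DTD(bin age) x efficiency (arbitrary units)."""
    return eff * np.sum(sfr_bins * dtd(bin_ages, t_min, t_max, beta))

# Toy non-parametric SFH: SFR (M_sun/yr) in bins centered at these lookback times (Gyr).
bin_ages = np.array([0.05, 0.3, 1.0, 3.0, 8.0])
sfr_bins = np.array([2.0, 5.0, 8.0, 4.0, 1.0])
rate = expected_rate(sfr_bins, bin_ages)
T = 1.0                                              # effective exposure (arbitrary units)
log_like_one_burst = -rate * T + np.log(rate * T)    # Poisson log-likelihood of one detection
print(rate, log_like_one_burst)
```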
Figure <ref> shows the DTD of the current FRB host-galaxy sample compared to that of other transient classes. The minimum and maximum delay times are constrained to be 250^+114_-110 Myr and 6.62^+4.38_-2.27 Gyr, respectively, with slope β=-1.75^+1.66_-0.87. The posterior distributions require a very wide range of delay times, which supports the conclusions of <cit.>. While the FRB host sample is small, its DTD looks similar to that of short gamma-ray bursts.
There are two important caveats to this analysis. First, the traditional DTD analysis assumes that transients are cataclysmic, but we know that some (or all) FRBs can emit multiple bursts and the burst rate may evolve in time <cit.>. Second, we assume a single, stationary Poisson process for the entire FRB population, but there may be multiple kinds of FRB. For these reasons, the slope of the DTD powerlaw should not be directly compared to that of other DTDs. The minimum and maximum of the distribution are likely robust to these caveats. However, we expect the estimate of t_min to be affected by the lack of data constraining recent star formation history for some hosts.
Despite these caveats, the DTD analysis provides useful context for the binary framing of whether FRB hosts trace star formation or stellar mass. We find that FRBs can occur with either a short or a long delay from star formation, but the presence of long delays produces a stronger correlation with the integrated history of star formation of a galaxy (i.e., stellar mass). Modeling of the larger sample of FRB DMs and redshifts shows that the rate evolves with redshift similarly to the mean star formation rate of the universe, albeit with large errors <cit.>. Given the wide DTD measured here, it should eventually be possible to see deviations of the evolution of the FRB rate from that of star formation, which scales as (1+z)^2.7 in the local universe. Specifically, long delays shift the rate peak to later times and flatten its evolution. This is evident in other transient populations with wide delay-time distributions, such as SNe Ia and short GRBs <cit.>.
§ CONCLUSIONS
We presented a sample of 11 FRBs discovered by the DSA-110 and selected on their association with host galaxies. This represents the largest and most uniform sample of FRB host galaxies. The FRBs are not known to repeat and are fainter and narrower in time than previous large FRB samples. Despite that, the bursts are phenomenologically similar to previous FRB discoveries from wide-field survey telescopes, such as CHIME and ASKAP. The new host-galaxy sample supports prior work in associating FRBs with galaxies with active star formation. However, we find a wide delay-time distribution for FRBs relative to star formation. In the DSA-110 sample, we see this as an FRB host stellar-mass distribution that matches the field galaxy stellar-mass distribution.
The wide delay-time distribution requires either that (1) the sources of FRBs are formed over a wide range of times relative to star formation, or that (2) they are formed during star formation and can emit up to a few Gyr after formation. The latter scenario is disfavored if bursts are powered by spin or magnetic fields in neutron stars <cit.>. Under the former scenario, single or multiple formation channels are allowed. Binary or dynamical formation channels <cit.> are particularly attractive, because such systems can form over a range of delays and naturally explain some burst phenomenology, such as periodic activity cycles. It remains difficult to distinguish between the single and multiple FRB formation channels because burst properties only weakly correlate with environmental properties. With this new sample, we strengthen the evidence that repeating FRBs are associated with persistent radio sources (PRS), which, if true, suggests that activity is causally related to the PRS emission.
The DSA-110 continues to detect FRBs and associate them to host galaxies. Future DSA-110 publications will use large, uniform samples of burst spectra and polarization to classify sources and characterize their environments. Larger host galaxy samples will include morphological and offset analysis that can test formation channels. It may eventually be possible to detect enough hosts to model the FRB rate with multiple formation channels, as has been done for SN Ia <cit.>. Furthermore, FRB localization samples are still small, such that any effects or subclasses that occur in less than 1% of the population remain to be discovered.
We thank the OVRO staff for making this science possible through epidemics, fires, floods, and other disasters. The observatory is located on the ancestral homelands of the Big Pine Paiute Tribe of the Owens Valley. We recognize and acknowledge the historical and cultural significance of these lands to members of the Tribe.
The DSA-110 is supported by the National Science Foundation Mid-Scale Innovations Program in Astronomical Sciences (MSIP) under grant AST-1836018.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
We acknowledge use of the VLA calibrator manual and the radio fundamental catalog. This research has made use of NASA’s Astrophysics Data System.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID #2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data were supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
astropy <cit.>, astroquery <cit.>, astro-datalab <cit.>, astropath <cit.>, emcee <cit.>, lpipe <cit.>, Prospector <cit.>, CASA <cit.>, Heimdall <cit.>, wsclean <cit.>, dynesty <cit.>, Bilby<cit.>
Hale, VLA, Keck:I (LRIS), Keck:II (ESI), DSA-110, PS1
§ SPECTROSCOPY AND SPECTRAL MODELING
Table <ref> summarizes the photometric and spectroscopic measurements made for the FRB host galaxies. Figure <ref> shows the modeling of the host galaxy photometry and spectroscopy.
|
http://arxiv.org/abs/2307.01538v1
|
20230704074851
|
Convergence to the uniform distribution of moderately self-interacting diffusions on compact Riemannian manifolds
|
[
"Simon Holbach",
"Olivier Raimond"
] |
math.PR
|
[
"math.PR",
"60K35"
] |
Abstract: Consider a self-interacting diffusion X on a smooth compact Riemannian manifold , described by the stochastic differential equation
dX_t = √(2) dW_t(X_t)- β(t) ∇ V_t(X_t)dt,
where β is suitably lower-bounded and grows at most logarithmically, and
V_t(x)=1/t∫_0^t V(x,X_s)ds
for a suitable smooth function V^2→ that makes the term -∇ V_t(X_t) self-repelling. We prove that the normalized occupation measure μ_t of X converges almost surely in total variation to the uniform distribution , and we provide a polynomial rate of convergence. The key to this result is showing that μ_e^t shadows the solution to the measure valued ordinary differential equation
ν̇_t=-ν_t+.
This work complements and extends results from <cit.> and <cit.>.
Keywords: self-interacting diffusion, ergodicity, asymptotic pseudotrajectory, measure-valued ordinary differential equation, stochastic calculus
Mathematics Subject Classification (2020): 60K35
Acknowledgments: S.H. acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)–Project No. 233630050–TRR 146 “Multiscale Simulation Methods for Soft Matter Systems”. This research has been conducted within the Fédération Parisienne de Modélisation Mathématique (FP2M)–CNRS FR 2036. This research has been conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01).
§ INTRODUCTION AND MAIN RESULT
Let a smooth compact Riemannian manifold, and let () denote the space of finite signed Borel measures on . For any smooth function V^2→ and any μ∈() we write
V_μ(x):=μ(V(x,·)):=∫_ V(x,y)μ(dy) for all x∈.
A stochastic process X is called a self-interacting diffusion on , if it satisfies an equation of the type
dX_t = √(2) dW_t(X_t)- β(t) ∇ V_μ_t(X_t)dt,
where V^2→ is smooth, ∇ denotes the surface gradient on , β[0,∞)→ is a continuous function,
μ_t=1/t∫_0^tδ_X_sds
is the normalized occupation measure of the process X up until time t, and W is a standard Brownian vector field on .
Here, V describes the type of self-interaction and β is the temporal weight that is given to the self-interaction mechanism. The asymptotic behavior of X has been studied for different choices of V and β, and we provide a summary of corresponding results and methods in Section <ref> below. Let us now describe the assumptions under which we work in the present article.
There is a smooth function v→^N with v_∞=1 and
∫_ v(x) dx=0
such that
V(x,y)=v(x)· v(y) for all x,y∈,
where · denotes the euclidean inner product.
With this choice of V, (<ref>) yields
V_μ_t(x)= μ_t(v) · v(x) = (1/t∫_0^t v(X_s)ds) · v(x).
Hence, the drift term -∇ V_μ_t(X_t) is self-repelling in the sense that it tends to drive v(X_t) away from the temporal mean of (v(X_s))_s∈[0,t]. This interpretation is particularly intuitive in the case where
=^n={x∈^n+1:x=1}
and v(x)=x, so that
V(x,y)= x· y=cos(d(x,y)) for all x,y∈^n,
where d is the geodesic distance on ^n.
Next, we explain our assumptions on β. Here and everywhere else in this article, C and t_0 denote finite, positive, deterministic constants, the exact value of which is unimportant and may change from one step to the next with no indication.
The function β [0,∞)→ is differentiable and there are a∈(0,∞) and γ∈(0,1] such that
β(t)≤ alog t and β'(t)≤ C t^-γ for all t≥ t_0.
Assumption <ref> allows the weight β(t) of the self-repelling drift -∇ V_μ_t to increase to infinity, but not fast enough to fully compensate the normalization 1/t of the occupation measure. The generic case that we will usually have in mind is that of
β(t)=blog(t+1) with b>0.
Other valid choices include β(t)=blog(log(t+e)) or simply β≡ b. Note that we cap γ at 1 only for simplicity, as γ>1 provides no meaningful improvement for the estimates that are relevant to our proofs and only results in awkward case distinctions.
Assumption <ref> does not require β to be non-negative, but we will assume a suitable lower bound in Assumption <ref> below. Before we can state it precisely, we need to introduce some notation. Let m∈^N. Setting
Z(m)=∫_ e^-m· v(x)dx,
we define a probability measure Π(m) on via
Π(m)(dx)=e^-m· v(x)/Z(m)dx.
We also interpret Π as a function on () by setting
Π(μ):=Π(μ(v)) for all μ∈().
Furthermore, for any probability measure μ on and any f∈(^2(μ))^N we write
_μ(f)=(μ(f_if_j)-μ(f_i)μ(f_j))_i,j∈{1,…,N}∈^N× N.
There is a β_0≥0 such that
β(t)≥ -β_0 >-1/Λ for all t>t_0,
where
Λ=sup_m∈^N(sup_x∈^N, x=1 x^T_Π(m)(v)x).
Assumption <ref> means that while the weight factor in front of the self-repelling drift -∇ V_μ_t is in fact allowed to turn negative, we do require a suitable lower bound that depends on v. In other words, we allow a limited amount of self-attraction. Clearly,
0<Λ≤v_∞^2,
so (<ref>) makes sense. In particular, β≡ b is covered in our setting if and only if b > - 1/Λ.
We always equip () with the total variation norm
μ=sup{μ(f) : f∈, f_∞≤ 1} for all μ∈(),
which, due to density arguments, is the same whether we take as C() or C^∞() or the real-valued measurable functions on or just the indicator functions of Borel-measurable subsets of . Our main result is the following:
Let denote the uniform distribution on . There is a constant κ∈(0,∞) such that if the Assumptions <ref>, <ref>, and <ref> hold with 2aκ<γ, then
lim sup_t→∞logμ_t-/log t≤ -η almost surely,
where
η=min{γ/2-aκ, 1-Λβ_0}>0.
In particular, Theorem <ref> yields weak convergence
μ_t almost surely for t→∞,
i.e. the ergodicity property
1/t∫_0^tf(X_s)ds (f) almost surely for all f∈ C().
However, Theorem <ref> is much stronger than this, as it also includes some uniformity with respect to f and a polynomial convergence rate.
[Weak self-interaction]
Let
β≡ b>-1/Λ.
Depending on the sign of b, this corresponds either to a self-repelling (b>0) or self-attracting (b<0) diffusion, and since β is constant in time, we speak of weak self-interaction.
Then β'≡ 0 and for any a>0 we have |β(t)| ≤ alog t for all t≥ t_0=e^b/a. Hence, Theorem <ref> can be applied with γ=1, β_0=max{0,-b} and any a>0, and we get that
lim sup_t→∞logμ_t-/log t≤ -min{1/2, 1- Λ |b|} almost surely.
In the particular case =^n and v(x)=x, this strengthens part (i) of <cit.>. In Section <ref>, we provide a more detailed investigation of this connection and also a proof that we can actually weaken Assumption <ref> in this situation (Proposition <ref>).
[Moderate self-repulsion]
Let
β(t)=blog(t+1) with b>0.
In this situation, β is positive and increases to infinity, but slower than the normalization factor t in μ_t, so we speak of moderate self-repulsion.
Then |β'(t)|≤ Ct^-1 and for any a>b there is a t_0>0 such that |β(t)| ≤ alog t for all t≥ t_0. Hence, Theorem <ref> can be applied whenever b<1/2κ, and we get that
lim sup_t→∞logμ_t-/log t≤ -(1/2-bκ) almost surely.
§.§ Context
The asymptotic behavior of self-interacting diffusions has been studied with various degrees of generality concerning the state space and the type of self-interaction governed by V. The weight β(t) is usually chosen as either β≡ b or β(t)=b t, which are sometimes referred to as weak and strong self-interaction respectively. In this sense, the prototypical case β(t)=blog(t+1) of Assumption <ref> corresponds to moderate self-interaction.
There are a number of case studies and some general results for =^n (e.g. <cit.>), but we focus on compact state spaces. Some of the results in the following summary are more general than others, but all of them are valid for =^n and include the important case V(x,y)=cos(d(x,y)) (self-repulsion) or V(x,y)=-cos(d(x,y)) (self-attraction).[Of course, it is somewhat arbitrary to include the sign that distinguishes repulsion from attraction in V instead of β, but this choice reinforces the interpretation of V as the type of interaction and β as a weight (even though our Assumptions <ref> and <ref> do not require β to be non-negative).] The general expectation is that a diffusion with sufficient self-attraction will asymptotically be concentrated around (or even converge to) some limit random variable X_∞∈, while a self-repelling diffusion (on a compact state space) will quite contrarily be uniformly distributed in the limit.
We will now give a brief overview of the methods that have been used in these different cases; the corresponding results are summarized in Table <ref>.
The case of weak self-interaction has been studied the most thoroughly, in particular in the series of papers <cit.>. Under mild conditions on V, the authors link μ_t to a measure-valued ordinary differential equation and use this to precisely describe its asymptotic behavior for some specific choices of V. The same approach is used in <cit.> to treat the case of moderate self-interaction. This turns out to be more delicate and only the self-attracting case is solved satisfyingly. The case of moderate self-repulsion is the content of the present paper, and we use a similar strategy. A detailed explanation of the general idea of this method is presented in Section <ref> below, including a discussion of the differences between <cit.>, <cit.>, and the present paper. The case of strong interaction has been studied with different methods. In <cit.>, the authors rewrite (<ref>) as a time-homogeneous proper stochastic differential equation for an extended variable (X_t,Y_t)∈^n×^d and prove that in the self-repelling case it is Harris recurrent and exponentially ergodic, where the invariant distribution in restriction to X_t is the uniform distribution. Almost sure convergence in the self-attracting case is proved in <cit.>, again with arguments that involve the shadowing of an ordinary differential equation, but in a completely different way than in <cit.>, <cit.>, or the present paper.
Of course, it seems plausible that increasing the strength of the repulsion or attraction by increasing the weight β should not change the results qualitatively. If β̃ is asymptotically larger than β, and the self-repelling diffusion with weight β is asymptotically uniformly distributed, then the same should be true for the self-repelling diffusion with weight β̃. Similarly, if the self-attracting diffusion with weight β converges to a random variable X_∞ in some sense, then the self-attracting diffusion with weight β̃ should converge to some X̃_∞. However, such comparison theorems are not available, and they do not seem to be within reach. In view of these considerations, it also seems counter-intuitive that we need a to be sufficiently small in Theorem <ref> and that the rate of convergence decreases when a increases. This can be thought of as a technical assumption that is an "artifact" of our method.
The factor √(2) in front of dW_t(X_t) in (<ref>) is absent both in <cit.> (where this equation is mentioned explicitly only in the abstract) and in <cit.> (where the corresponding equation is the first one of the article).
In <cit.> this factor √(2) is hidden in the vector fields e_i, so that (<ref>) is entirely equivalent to <cit.> and the results from <cit.> are compatible with our setting with no adjustment of the parameters.
In <cit.> however, this is not the case and this factor √(2) leads to a factor 2 in <cit.> when compared to our definition of Π in (<ref>), and also to a factor 1/2 in the definition of A_μ just below <cit.> when compared to our definition in (<ref>). This has to be taken into account when comparing our results with those from <cit.>.
§.§ Outline of proof
In order to get a grip of the long time behavior of
μ_t=1/t∫_0^tδ_X_sds,
we calculate its time evolution. First, we have
∂_tμ_t=1/t(-μ_t+δ_X_t),
where the derivative is to be understood as the derivative of a real function pointwise in all f∈ C(). In order to eliminate the factor 1/t, we look at the dynamics on an exponential time scale, i.e.
∂_tμ_e^t=-μ_e^t+δ_X_e^t.
If t>0 is large, the distribution of X_t should be close to the equilibrium of the current drift potential β(t) V_μ_t(x), i.e. to the probability distribution Π(β(t)μ_t) as defined in (<ref>). If our intuition of the process is correct, Π(β(t)μ_t) on the other hand should be close to the uniform distribution on . Therefore, we set
^1_t:= δ_X_e^t-Π(β(e^t)μ_e^t)
and
^2_t:= Π(β(e^t)μ_e^t)-,
so that
∂_tμ_e^t=-μ_e^t+ + ^1_t + ^2_t.
If ^1_t and ^2_t are asymptotically negligible in a suitable sense, then the trajectory t↦μ_e^t shadows that of a solution to the measure-valued ordinary differential equation
ν̇_t=-ν_t+,
so that in particular μ_t converges to . The details of this argument are given in Section <ref> below, but first we devote Sections <ref> and <ref> to the required analysis of the asymptotics of ^1 and ^2.
Our proof strategy is inspired by <cit.> and <cit.>. In these works, the authors study the equation
∂_tμ_e^t=-μ_e^t+Π(β(e^t)μ_e^t) + ^1_t,
which will also be very useful for us in Section <ref>. The formal limit equation corresponding to (<ref>) is
ν̇_t=-ν_t+Π(β(e^t)ν_t).
Note that under (<ref>) we have Π()=, and so (<ref>) can be viewed as a variant of the limit equation (<ref>) that is less specific to a particular situation. For constant β as in <cit.>, the equation (<ref>) is homogeneous in time and hence after establishing that μ_e^t shadows it, powerful results from the theory of dynamical systems can be used to study the long-time behavior of μ_t for several different choices of V (cf. <cit.>). For non-constant β as in <cit.>, this link can still be established under certain conditions (cf. <cit.>), but it is not as fruitful, since (<ref>) is no longer homogeneous in time. In particular, <cit.> are not used in the proof of the convergence result for moderately self-attracting diffusions on the sphere (<cit.>) which is instead proved "by hand".
The limit equation (<ref>) is much simpler than (<ref>), because it is tailor-made for cases in which we expect μ_t to converge to the uniform distribution (while in other cases we might not expect a deterministic limit at all). Introducing the second error term ^2 allows us to move the explicit dependence of the problem on β entirely into the error terms.
§ PROOF OF THE MAIN RESULT
For Section <ref>, we only need the Assumptions <ref> and <ref> to hold. In Sections <ref> and <ref> on the other hand, we suppose that all of the Assumptions <ref>, <ref>, and <ref> hold.
§.§ Dealing with ^1
This subsection is mostly taken from Section 2 of <cit.>. We include this detailed summary, since many of the objects and intermediate results will be used in the next subsection.
Let us fix μ∈() and consider the time-homogeneous stochastic differential equation with no self-interaction
dY_t = √(2) dW_t(Y_t)- ∇ V_μ(Y_t)dt,
where we think of the dynamics as those arising from freezing the drift potential in (<ref>) at some time t_0, so that formally μ=β(t_0)μ_t_0. These dynamics can also be described via the infinitesimal generator
A_μ f = Δ f-∇ V_μ·∇ f for all f∈(A_μ)⊃ C^2(),
and the corresponding equilibrium is given by Π(μ) as defined in (<ref>). Let (P_s^μ)_s≥0 denote the transition semigroup on ^2(Π(μ))=^2(dx) generated by A_μ. Then A_μ satisfies a Poincaré inequality and therefore P_s^μf converges to Π(μ)(f) exponentially fast with respect to the ^2-norm for any f∈^2(dx) (see Sections 1.1 and 1.4.3 of <cit.>). Using this and basic semigroup theory, one can easily show that
Q_μ f:=-∫_0^∞ P^μ_s(f-Π(μ)(f))ds for all f∈^2(dx),
is well-defined and
A_μ Q_μ f=Q_μ A_μ f=f-Π(μ) (f) for all f∈(A_μ),
so Q_μ is "almost an inverse to A_μ".
Now set
A_t:=A_β(t)μ_t, Q_t:=Q_β(t)μ_t.
Then for all f∈ C^∞() we can rewrite
^1_t(f)=f(X_e^t)-Π(β(e^t)μ_e^t)(f)=A_e^tQ_e^tf(X_e^t).
If we set
F^f_t(x):=1/tQ_tf(x),
applying the change of variables r↦log r and then Ito's formula yields
∫_s^t^1_r(f)dr
=∫_e^s^e^tA_rF^f_r(X_r)dr
=F^f_e^t(X_e^t)-F^f_e^s(X_e^s)- ∫_e^s^e^tḞ^f_r (X_r)dr + (M^f_t-M^f_s),
where (M^f_t)_t≥0 is a martingale with
⟨ M^f,M^g⟩_t= ∫_1^e^t∇ F^f_r(X_r)·∇ F^g_r(X_r) dr for all f,g∈ C^∞()
(compare <cit.> and <cit.>). Therefore, in order to estimate the integral over ^1, we need to estimate F^f_t, ∇ F^f_t, and the time derivative Ḟ^f_t, which can be done with the help of the following lemma, which we will also need in Section <ref> below in order to deal with ^2.
There is a constant
κ∈(0,∞)
such that for all f∈ C^∞() and t≥ t_0 the following estimates hold.
* Q_tf_∞≤ C t^a κf_∞,
* ∇ Q_tf_∞≤ C (1+alog t)^1/2t^a κf_∞,
* ∂_t Q_t f_∞≤ C t^2aκ-γ (log t)^3/2f_∞.
Thanks to Assumption <ref>, this follows from Lemmas 2.3 and 2.8 of <cit.>. These mainly rely on classical results about log-Sobolev and Poincaré inequalities for the operator A_t and an application of the Bakry-Emery criterion.
From now on, κ will always be the constant from (<ref>). In particular, this is the same κ that we use in Theorem <ref>.
If 2aκ<γ, then almost surely
lim sup_t→∞1/tlog( sup_s≥0∫_t^t+s^1_rdr) ≤ -(γ/2-aκ).
This follows from <cit.>, which is proved via (<ref>), (<ref>), and Lemma <ref>. The exact value on the right hand side is not given explicitly in <cit.>, but can be deduced from a careful reading of the proofs in <cit.>.
§.§ Dealing with ^2
We start this section with a preparatory lemma about properties of Π. Recall that
Π(m)(dx)=e^-m· v(x)/Z(m)dx with Z(m)=∫_ e^-m· v(x)dx
for all m∈^N.
* The mapping Π^N→() is Lipschitz continuous, i.e.
Π(m)-Π(m')≤ C m-m' for all m,m'∈^N.
* For all m∈^N, we have
Π(m)(v)=-∇log Z(m)
and
_Π(m)(v)=log Z(m),
where denotes the Hessian matrix.
For the first part, it is easy to check that the derivative of Π is bounded. The second part is a simple calculation.
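For completeness, the calculation behind the second part can be spelled out. Writing ∂_i for ∂/∂ m_i and differentiating under the integral sign,
∂_ilog Z(m)=∂_i Z(m)/Z(m)=-1/Z(m)∫_ v_i(x) e^-m· v(x)dx=-Π(m)(v_i),
which is the first identity of the lemma; differentiating once more gives
∂_i∂_jlog Z(m)=Π(m)(v_iv_j)-Π(m)(v_i)Π(m)(v_j),
which is precisely the (i,j) entry of the covariance matrix, giving the second identity.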
Combining the first part of Lemma <ref> and the first part of Assumption <ref>, we have
^2_t= Π(β(e^t)μ_e^t(v))-Π(0)≤ C t μ_e^t(v).
Lemma <ref> below provides a speed of convergence to 0 for the right hand side, but we first need another preparatory lemma. For the remainder of this section, we use the shorthand notation
m_t:=μ_e^t(v)∈^N for all t≥0.
There is a family of strongly convex functions J_t^N→ such that
ṁ_t=-∇ J_t (m_t)+ ^1_t(v) for all t≥ 0
and
m ·∇ J_t (m) ≥ (1-Λβ_0) m^2 for all t≥ t_0 and m∈^N.
By (<ref>), we have
ṁ_t=-m_t+Π(β(e^t)m_t)(v)+ ^1_t(v).
Using (<ref>) and the fact that Π(0)(v)=0 due to (<ref>), we see that (<ref>) holds with
J_t^N→, J_t(m)=1/2m^2+1_β(e^t)≠ 0·1/β(e^t)log Z(β(e^t)m).
Using (<ref>), we get
J_t(m)=1_N× N+β(e^t)_Π(β(e^t)m)(v),
and thanks to Assumption <ref>, this implies that the smallest eigenvalue of J_t(m) is at least 1-Λβ_0>0. Therefore J_t is strongly convex, more precisely
(m-m') ·(∇ J_t (m)-∇ J_t (m')) ≥ (1-Λβ_0) m-m'^2 for all m,m'∈^N.
Since ∇ J_t (0)=Π(0)(v)=0, setting m'=0 yields (<ref>).
J_t(m), ∇ J_t (m), and the Hessian of J_t(m) are all continuous with respect to t∈ [0,∞).
In the sequel, η will always be the constant from Theorem <ref>, i.e.
η=min{γ/2-aκ, 1-Λβ_0}>0.
If 2aκ<γ, then
e^δ t m_t 0 almost surely for all δ<η.
1.) Set
m̃_t:=m_t-F_e^t(X_e^t)
where
F_t(x):=(F_t^v_i(x))_i=1,…,N= t^-1(Q_tv_i(x))_i=1,…,N
(compare (<ref>)). The first estimate of Lemma <ref> implies
e^δ t |m_t|≤ e^δ t(|m̃_t|+F_e^t (X_e^t))
≤ e^δ t |m̃_t| + Ce^-(1-aκ-δ)t for all t≥ t_0.
Since δ<γ/2-aκ<1-aκ, the second summand in the above bound vanishes for t→∞, and hence it suffices to show for this lemma that
e^δ tm̃_t 0 almost surely for all δ<η.
Note that we know a priori that m_t has a deterministic upper bound and then (by Lemma <ref>) so has m̃_t.
2.) In this step, we use a similar approach as in Sections 3.1 - 3.4 of <cit.> in order to find a stochastic differential equation that is fulfilled by |m̃_t|^2. In order to calculate the dynamics of m̃_t, we first deduce from (<ref>) that
F_e^t(X_e^t)-F_e^s(X_e^s)=∫_s^t ^1_r(v)dr + ∫_e^s^e^tḞ_r(X_r)dr - (M^v_t-M^v_s),
where we read M^v_t as the vector (M^v_i_t)_i=1,…,N. Combining (<ref>) with (<ref>) yields
dm̃_t= -∇ J_t(m_t)dt - Ḟ_e^t(X_e^t)dt + dM^v_t.
Setting
H_t=-Ḟ_e^t(X_e^t)- (∇ J_t(m_t)-∇ J_t(m̃_t) ),
we can rewrite (<ref>) as
dm̃_t= -∇ J_t(m̃_t)dt + H_tdt +dM^v_t.
Applying Ito's formula and (<ref>) then yields
d|m̃_t|^2= (-2 m̃_t ·∇ J_t(m̃_t) +2 m̃_t· H_t +e^t∇ F_e^t(X_e^t)^2)dt+2 m̃_t· dM^v_t.
3.) Our next goal is to find a suitable upper bound for the drift in (<ref>). For the rest of the proof we assume that t≥ t_0 and
0<α< 2η=min{γ-2aκ, 2-2Λβ_0}.
Since the proof of <cit.> does not depend on the choice of V, it can also be applied here and it yields
H_t≤ C e^-1+α/2t.
Since α < (1+α)/2 (as α < γ≤ 1) and since |m̃_t| is bounded, (<ref>) implies
2 m̃_t· H_t ≤ C e^-α t.
Using the second estimate of Lemma <ref>, we get
e^t∇ F_e^t(X_e^t)^2
=e^-t∑_i=1^N(∇ Q_e^tv_i)(X_e^t)^2
≤ C (1+at)e^-(1-2aκ)t≤ C e^-α t.
Finally, plugging (<ref>) and (<ref>) as well as (<ref>) from Lemma <ref> into (<ref>) yields
d|m̃_t|^2≤( -2 (1-Λβ_0)|m̃_t|^2 + C e^-α t)dt+2 m̃_t · dM^v_t.
4.) With (<ref>) at hand, we can now investigate the asymptotics of m̃_t. Setting
ξ:=2(1-Λβ_0)
and plugging (<ref>) into
d( e^ξ t|m̃_t|^2 )=ξ e^ξ t|m̃_t|^2dt+ e^ξ td|m̃_t|^2,
we get
|m̃_t|^2
≤ e^-ξ (t-t_0) |m̃_t_0|^2 + C∫_t_0^t e^-ξ (t-r)-α r dr + 2∫_t_0^t e^-ξ (t-r)m̃_r · dM^v_r
≤ Ce^-α t + 2e^-ξ tN_t,
where
N_t=∫_t_0^t e^ξ rm̃_r· dM^v_r.
Using (<ref>) and (<ref>), we have
⟨ N⟩_t
≤∫_t_0^t e^2ξ rm̃_r^2 e^r∇ F_e^r(X_e^r)^2 dr ≤ C ∫_t_0^t e^(2ξ-α)rm̃_r^2 dr
Since m̃ is bounded, (<ref>) yields
⟨ N⟩_t ≤ Ce^(2ξ-α) t,
so the law of the iterated logarithm implies that almost surely
lim sup_t→∞N_t/e^(ξ -α/2)tlog(t) <∞.
Combining (<ref>) and (<ref>) yields
e^δ tm̃_t 0 almost surely for all δ<α/4.
Knowing this, we can use (<ref>) in order to improve (<ref>) to
⟨ N⟩_t ≤ K e^(2ξ-(α+2δ)) t
for any δ<α/4, where K=K(δ) is an almost surely finite random variable. Then we use the same argument as above to improve (<ref>) to
e^δ tm̃_t 0 almost surely for all δ<3α/8.
Iterating this argument shows that for any n∈ we have
e^δ tm̃_t 0 almost surely for all δ<(2^n-1)α/2^n+1.
Since n can be arbitrarily large and α arbitrarily close to 2η, it follows that (<ref>) holds, and thus the proof is completed.
Note that Lemma <ref> can be interpreted as a statement on the polynomial decay of the drift potential β(t) V_μ_t of (<ref>). More precisely, for 2aκ<γ it implies that almost surely
lim sup_t→∞logβ(t)V_μ_t_∞/log t≤ -η .
We can now prove the main result of this section.
If 2aκ<γ, then almost surely
lim sup_t→∞1/tlog^2_t≤ -η.
This follows from (<ref>) and Lemma <ref>.
§.§ Putting the pieces together
The unique solution to (<ref>) with the initial value ν_0∈() is given by
Φ(t,ν_0) =e^-tν_0+(1-e^-t) for all t≥0,
and the mapping
Φ [0,∞)×()→(), (t,ν_0)↦Φ(t,ν_0),
defines a semiflow on the normed space () in the sense of <cit.>.
If 2aκ<γ, then almost surely (μ_e^t)_t≥0 is a (-η)-pseudotrajectory of the semiflow Φ, i.e. almost surely
lim sup_t→∞1/tlog(sup_s∈[0,T]μ_e^t+s-Φ(s,μ_e^t)) ≤ -η for all T>0.
By (<ref>) and (<ref>) we have for all f∈ C()
(μ_e^t+s - Φ(s,μ_e^t))(f) = -∫_0^s (μ_e^t+r - Φ(r,μ_e^t) )(f)dr + ∫_0^s(^1_t+r+^2_t+r)(f)dr,
so
(μ_e^t+s - Φ(s,μ_e^t))(f)
= ∫_t^t+se^-(t+s-r)(^1_r+^2_r)(f)dr.
Since integration by parts yields
∫_t^t+se^-(t+s-r)^1_r(f) dr = ∫_t^t+s^1_r(f) dr - ∫_t^t+s e^-(t+s-r)(∫_t^r^1_u(f)du) dr,
we get from (<ref>) that
μ_e^t+s-Φ(s,μ_e^t)≤∫_t^t+s^1_r dr + ∫_t^t+s e^-(t+s-r)∫_t^r^1_udu dr + ∫_t^t+s^2_rdr.
The claim now follows from Propositions <ref> and <ref>.
Intuitively, Proposition <ref> says that μ_e^t is exponentially close to the behavior of a solution of (<ref>). Since any solution of (<ref>) converges exponentially fast to , Theorem <ref> now follows from a general result about pseudotrajectories.
By (<ref>),
lim sup_t→∞1/tlogΦ(t,ν_0)- = -1 for all ν_0∈().
Thanks to this and Proposition <ref>, all of the conditions of part (i) of <cit.> (with B=(), K={}, X(t)=μ_e^t, Y(t)=, α=-η >-1=λ) are fulfilled, and hence we get
lim sup_t→∞1/tlogμ_e^t-≤ -η almost surely,
which is equivalent to (<ref>).
Note that the final step of the proof of Proposition <ref> also works with the supremum in (<ref>) being taken over all s≥0 instead of only s∈[0,T]. However, this stronger variant of the pseudotrajectory property is not needed for the argument in the proof of Theorem <ref> to work.
§ A CLOSER INVESTIGATION OF THE CASE =^N AND V(X)=X
For this entire section, let
=^n and v(x)=x.
In the case of weak self-interaction, i.e. β≡ b, Theorem 4.5 of <cit.> then implies that[Note that our notation differs slightly from that in <cit.>: the parameter a there corresponds to what is b/2 in our notation, and there is another factor 2 in <cit.> that is not in our definition of Π (compare Remark <ref>).]
μ_t almost surely ⇔ b≥ -(n+1),
while Theorem <ref> implies
b>-1/Λ ⇒ μ_t -→ 0 almost surely with polynomial speed
(compare Example <ref>). Because of (<ref>) and (<ref>), we already know that Λ≥ (n+1)^-1. In this section, we provide a way to calculate Λ, show numerically that it is in fact strictly larger than (n+1)^-1, but then prove that the conclusion of (<ref>) nevertheless actually holds for all b>-(n+1).
§.§ Calculating Λ
In the following, for any m∈^n+1 we use the notation
m̅ = m/|m| if m≠ 0, and m̅ = 0 if m= 0.
Let
ρ(r)=-∫_0^πcos x e^-rcos x (sin x)^n-1dx/∫_0^π e^-rcos x (sin x)^n-1dx for all r≥0.
* For all m∈^n+1 we have
Π(m)(v)= -ρ(|m|) m̅.
* We have
lim_r→ 0ρ(r)/r=1/n+1,
and we therefore interpret ρ(0)/0=1/n+1 in the following.
* For all m∈^n+1 we have
_Π(m)(v)
=(ρ'(|m|)-ρ(|m|)/|m|) m̅m̅^T + ρ(|m|)/|m| 1_(n+1)×(n+1)
and its largest eigenvalue is given by λ(|m|), where
λ(r)=2ρ(r)/r-ρ'(r) for all r≥0.
* We have
Λ=max_r≥ 0λ(r) ∈[1/n+1, 2/n+1).
The first part of this lemma is just a reformulation of <cit.>. Note that for all r≥0
ρ(r)=H'(r)/H(r) with H(r)=∫_0^π e^-rcos x (sin x)^n-1dx
and hence
ρ'(r)=H”(r)/H(r)-(H'(r)/H(r))^2>0
where the inequality follows from Cauchy-Schwarz. As shown in the proof of <cit.>,
d/drH”(r)/H(r)>0, H”(0)/H(0)=1/n+1,
and since an integration by parts yields
H'(r)=r/n(H(r)-H”(r)) for all r≥0,
we get
d/drρ(r)/r=d/dr(1/n(1-H”(r)/H(r)))<0, ρ(r)/r1/n+1.
In particular, we have
0>d/drρ(r)/r=ρ'(r)/r-ρ(r)/r^2 for all r>0
so, together with (<ref>),
0<ρ'(r)<ρ(r)/r for all r>0.
Combining (<ref>) with the second part of Lemma <ref>, we get
_Π(m)(v)
=( ∂_m_i(ρ(|m|)/|m| m_j) )_i,j=1,…,n+1
=(ρ'(|m|)-ρ(|m|)/|m|)m̅m̅^T + ρ(|m|)/|m| 1_(n+1)×(n+1)
and hence its largest eigenvalue is
sup_x ∈^n x^T _Π(m)(v)x = ρ'(|m|)-ρ(|m|)/|m| + ρ(|m|)/|m|=λ(|m|),
where the last equality uses (<ref>). Plugging (<ref>) into (<ref>) yields
ρ(r)/r<λ(r)<2ρ(r)/r for all r>0,
which, in combination with (<ref>), implies (<ref>).
With the help of Lemma <ref>, one can easily show that if (m_k)_k∈⊂^n+1 satisfies |m_k|→∞ and m̅_k→ m∈^n for k→∞, then Π(m_k)δ_-m for k→∞.
It follows from (<ref>) and (<ref>) that ρ satisfies the differential equation
ρ'(r)=1-ρ(r)(n/r+ρ(r)),
so thanks to (<ref>), we can express both λ(r) and λ'(r) as functions of r and ρ(r). This makes it easy to calculate λ and λ' and thus approximate the maximum of λ numerically. Simulations suggest that λ(r) attains Λ as its unique local and global maximum at a position that grows linearly in n. Furthermore, these simulations suggest that Λ· (n+1) is indeed strictly greater than 1, even though it is decreasing in n and the upper bound 2 from (<ref>) is far from optimal. Note, however, that even (<ref>) is a vast improvement over the trivial bound from (<ref>). See Table <ref> for some approximate values and Figure <ref> for a visualisation of ρ and λ.
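For reproducibility, a numerical sketch of this computation is given below; the quadrature, the search bounds, and the choice n=2 are our own, so the output should only be compared qualitatively with the table.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def rho(r, n):
    """rho(r) from its integral definition, density proportional to exp(-r cos x) sin(x)^(n-1)."""
    num = quad(lambda x: -np.cos(x) * np.exp(-r * np.cos(x)) * np.sin(x) ** (n - 1), 0.0, np.pi)[0]
    den = quad(lambda x: np.exp(-r * np.cos(x)) * np.sin(x) ** (n - 1), 0.0, np.pi)[0]
    return num / den

def lam(r, n):
    """lambda(r) = 2 rho(r)/r - rho'(r), with rho'(r) = 1 - rho(r)(n/r + rho(r))."""
    if r == 0.0:
        return 1.0 / (n + 1)
    p = rho(r, n)
    p_prime = 1.0 - p * (n / r + p)
    return 2.0 * p / r - p_prime

n = 2  # the sphere S^2
res = minimize_scalar(lambda r: -lam(r, n), bounds=(1e-3, 50.0), method="bounded")
print("argmax r:", res.x, " Lambda:", -res.fun, " (n+1)*Lambda:", (n + 1) * (-res.fun))
```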
§.§ Improving Theorem <ref> in the case of weak self-attraction
The following proposition means that we can improve <cit.> for all cases in which the uniform distribution is the limit, with the exception of the critical value b=-(n+1).
Let β≡ b>-(n+1). Then
lim sup_t→∞logμ_t-/log t≤ -min{1/2, 1+ b/n+1} almost surely.
The case b≥0 is already covered entirely in Example <ref>, so let
0>b>-(n+1).
Since Proposition <ref> implies that
lim sup_t→∞1/tlog( sup_s≥0∫_t^t+s^1_rdr) ≤ -1/2 almost surely,
it only remains to show that
lim sup_t→∞1/tlog^2_t≤ -min{1/2, 1- |b|/n+1} almost surely,
since then the proof can be completed in the exact same way as in Section <ref>. In order to prove (<ref>), we will need to slightly refine the arguments from Lemmas <ref> and <ref> (the notation of which we adapt in the sequel) and make explicit use of <cit.>.
Since
lim_r→0λ(r)=λ(0)=1/n+1<|b|^-1,
we can choose θ>0 such that
ζ:=sup_m∈^n+1, |m|≤θ(sup_x∈^n x^T_Π(bm)(v)x)
=sup_r∈[0,θ |b|]λ(r)<|b|^-1.
Then with the same argument as in Lemma <ref> we get
m ·∇ J_t (m) ≥ (1- |b| ζ) m^2 for all m∈^n+1 with |m|≤θ.
We already know from <cit.> that almost surely μ_t (compare (<ref>)), so m_t→ 0 and hence also m̃_t → 0 (compare (<ref>) and Lemma <ref>). Therefore, there is an almost surely finite random time τ=τ(θ) such that |m̃_t| ≤θ for all t>τ, so with (<ref>) we get
-m̃_t ·∇ J_t (m̃_t) ≤ - (1- |b| ζ) m̃_t^2 for all t≥τ.
Also note that
-m̃_t ·∇ J_t (m̃_t) ≤ C for all t∈ [t_0,τ).
Setting
0<α< min{ 1, 2-2|b| ζ},
we can show as in the proof of Lemma <ref> that
d|m̃_t|^2≤( -2m̃_t ·∇ J_t (m̃_t) + C e^-α t)dt+2 m̃_t · dM^v_t for all t≥ t_0.
If we set ξ:=2- 2|b| ζ, plug (<ref>) into
d( e^ξ t|m̃_t|^2 )=ξ e^ξ t|m̃_t|^2dt+ e^ξ td|m̃_t|^2,
and then apply (<ref>) and (<ref>), we can estimate
|m̃_t|^2
≤ e^-ξ (t-t_0) |m̃_t_0|^2 + C∫_t_0^t e^-ξ (t-r)-α r dr
+ C ∫_t_0^τ e^-ξ (t-r) dr + 2∫_t_0^t e^-ξ (t-r)m̃_r · dM^v_r
≤ K e^-α t + 2e^-ξ t∫_t_0^t e^ξ rm̃_r· dM^v_r,
where K is a positive, almost surely finite random number. Now, (<ref>) follows from the same line of reasoning as in the last step of the proof of Lemma <ref> and in the proof of Proposition <ref>.
The general idea behind the proof of Proposition <ref> is not restricted to the specific case treated in this section: there is potential for the Assumption <ref> on the lower bound for β to be weakened, whenever we already know that μ_t and thus only need to bound the eigenvalues of β(e^t)_(v) in order to make the arguments in the proofs of Lemmas <ref> and <ref> work.
Benaim1999 M. Benaïm: Dynamics of stochastic approximation algorithms. Seminaire de Probabilités XXXIII, Lecture Notes in Math. 1709, 1–68 (1999)
BenaimRepelInfty M. Benaïm, I. Ciotir, C.E. Gauthier: Self-repelling diffusions via an infinite dimensional approach. Stochastic Partial Differential Equations: Analysis and Computations 3, 506–530 (2015)
BenaimRepel M. Benaïm, C.E. Gauthier: Self-repelling diffusions on a Riemannian manifold. Probab. Theory Relat. Fields 169, 63–104 (2017)
sidI M. Benaïm, M. Ledoux, O. Raimond: Self-interacting diffusions. Probab. Theory Relat. Fields 122, 1–41 (2002)
sidII M. Benaïm, O. Raimond: Self-interacting diffusions II: convergence in law. Ann. I. H. Poincaré – PR 39, 6 (2003) 1043–1055
sidIII M. Benaïm, O. Raimond: Self-interacting diffusions III: symmetric interactions. The Annals of Probability 33, 1716-1759 (2005)
sidIV M. Benaïm, O. Raimond: Self-interacting diffusions IV: rate of convergence. Electron. J. Probab. 18, 1815-1843 (2011)
Chambeu S. Chambeu, A. Kurtzmann: Some particular self-interacting diffusions: Ergodic behavior and almost sure convergence. Bernoulli 17(4), 1248-1267 (2011)
Cranston M. Cranston, Y. Le Jan: Self-attracting diffusions : Two cas studies. Math. Ann. 303, 87–93 (1995)
Gauthier C.E. Gauthier: Self attracting diffusions on a sphere and application to a periodic case. Electron. Commun. Probab. 21 (53), 1–12 (2016)
Herrmann S. Herrmann , B. Roynette: Boundedness and convergence of some self-attracting diffusions. Math. Ann. 325(1), 81–96 (2003)
RaimondOld O. Raimond: Self Attracting Diffusions: Case of the constant interaction. Probab. Theory Relat. Fields 107, 177–196 (1996)
Raimond O. Raimond: Self-interacting diffusions: a simulated annealing version. Probab. Theory Relat. Fields 144, 247–279 (2009)
Wang F.Y. Wang: Functional Inequalities, Markov Semigroups and Spectral Theory. Elsevier (2006)
|
http://arxiv.org/abs/2307.00245v1
|
20230701061310
|
Deep Angiogram: Trivializing Retinal Vessel Segmentation
|
[
"Dewei Hu",
"Xing Yao",
"Jiacheng Wang",
"Yuankai K. Tao",
"Ipek Oguz"
] |
eess.IV
|
[
"eess.IV",
"cs.CV"
] |
Among the research efforts to segment the retinal vasculature from fundus images, deep learning models consistently achieve superior performance. However, this data-driven approach is very sensitive to domain shifts. For fundus images, such data distribution changes can easily be caused by variations in illumination conditions as well as the presence of disease-related features such as hemorrhages and drusen. Since the source domain may not include all possible types of pathological cases, a model that can robustly recognize vessels on unseen domains is desirable but remains elusive, despite many proposed segmentation networks of ever-increasing complexity. In this work, we propose a contrastive variational auto-encoder that can filter out irrelevant features and synthesize a latent image, named deep angiogram, representing only the retinal vessels. Then segmentation can be readily accomplished by thresholding the deep angiogram. The generalizability of the synthetic network is improved by the contrastive loss that makes the model less sensitive to variations of image contrast and noisy features. Compared to baseline deep segmentation networks, our model achieves higher segmentation performance via simple thresholding. Our experiments show that the model can generate stable angiograms on different target domains, providing excellent visualization of vessels and a non-invasive, safe alternative to fluorescein angiography.
§ INTRODUCTION
Retinal fundus photography is a cheap, fast and non-invasive modality that reveals essential anatomical features including optic disc, optic cup, macula, fovea, vessels and lesions such as hemorrhages and exudates <cit.>. Therefore, it is widely used for the diagnosis of diseases such as diabetic retinopathy <cit.>, glaucoma <cit.> and age-related macular degeneration <cit.>.
While fundus photography is broadly used as a low-cost screening tool, it does not provide sufficient contrast to resolve clinically relevant vascular features and exogenous indocyanine green angiography (ICG)/fluorescein angiography (FA) remain the standard of care for visualization/quantifying retinal vasculopathies. An algorithm that can provide accurate vessel segmentation from these fundus images would have profound impact on future clinical practice. In recent years, deep learning models <cit.> have achieved remarkable success in this task. Nevertheless, the domain shift induced by variations in image contrast and presence of unseen pathological features in testing data can dramatically degrade the performance of deep models.
Recent research explored three main types of domain generalization methods <cit.>: domain randomization, representation learning and general learning strategy. Domain randomization augments the training data to extend the source domain <cit.>, improving the likelihood that an unseen target domain overlaps with the training domain. Representation learning refers to the disentanglement of features that are invariant to different domains <cit.>. A typical general learning strategy is meta-learning: for example, Li et al. simulate the domain shift by splitting the source domain into meta-train and meta-test <cit.>.
In this work, we leverage both domain randomization and representation learning approaches to train a model that has superior generalizability across different domains. We augment the source domain by the contrast limited adaptive histogram equalization (CLAHE) <cit.> with clip limit ϵ∈𝒩. In addition to well-enhanced contrast for vessels, the augmented images also have exaggerated irrelevant structures including noise and lesions. Inspired by the idea of disentangling the shared features in two images presented in our previous work <cit.>, we leverage a variational auto-encoder (VAE) to extract the representation of vessels. However, as we showed in <cit.>, this latent image may have an arbitrary style that contains unwanted features. We tackle this challenge by introducing a contrastive loss such that vessels are the only features in the synthetic image. We name the result a deep angiogram. Then, the segmentation task is simply reduced to Otsu thresholding <cit.>. Without the irrelevant features, the visibility of the vasculature is drastically improved in the deep angiogram compared to other vessel enhancement approaches <cit.>. We evaluate the generalizability of our model by the segmentation performance on the target domains. For baseline models, we trained two segmentation networks on the source domain that take the green channel fundus image and the principle component analysis (PCA) image as the input respectively. The result indicates that the proposed method generalizes better on target domains and achieves higher segmentation performance than deep segmentation networks, by simple thresholding.
§ METHODS
§.§ Causal Feature Extraction
Fig. <ref>(a) shows our VAE model composed of the encoder E_θ and the decoder D_φ. The input image is x and the supervision is provided by the label y. As we have previously shown <cit.>, when the latent manifold of the VAE has the same dimension as the input x, the encoder is able to enhance the shared features in x and y. Intuitively, if an image is regarded as a collection of representations, then (x ∩ y) ⊆ E_θ(x) should hold to guarantee that there is no essential information missing in the output ŷ. In the context of causal learning, x ∩ y is the set of causal features for the final prediction. In this implementation, the fundus image x includes information about many anatomical structures such as the optic disc, vessels, macula, and lesions, whereas the causal features for the segmentation task contain just the vessels, so ideally the latent image should be a vessel map without any irrelevant features, i.e., (x ∩ y) = E_θ(x).
As suggested in Fig. <ref>, since we want to put most of the workload on the encoder E_θ, it is designed to have more learnable parameters than the decoder D_φ. Both E_θ and D_φ have residual U-Net architecture. Note that the decoder D_φ will not be applied in the testing since its purpose is to simply provide supervision to E_θ during training. The segmentation loss for the decoder is set to be a combination of cross-entropy and Dice loss:
ℒ_seg=-1/N∑_n=1^Ny_nlogŷ_n + (1-2∑_n=1^N y_nŷ_n/∑_n=1^N y_n^2+ŷ_n^2)
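A minimal implementation of this loss could look as follows (PyTorch is assumed; the cross-entropy term is written exactly as in the equation above, i.e. over the foreground only, and the eps terms are added only for numerical stability).

```python
import torch

def seg_loss(y_hat, y, eps=1e-7):
    """Cross-entropy plus soft Dice loss for binary vessel maps.

    y_hat : predicted vessel probabilities in [0, 1]
    y     : binary ground-truth vessel mask of the same shape
    """
    y_hat = y_hat.clamp(eps, 1.0 - eps)
    ce = -(y * torch.log(y_hat)).mean()                                   # -1/N sum_n y_n log(y_hat_n)
    dice = 1.0 - 2.0 * (y * y_hat).sum() / ((y ** 2 + y_hat ** 2).sum() + eps)
    return ce + dice
```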
§.§ Domain Randomization
There are two major causes for distribution shift of fundus images. First, within a well-curated dataset (e.g., DRIVE <cit.>), the image contrast is usually consistent. A model trained on such a dataset may struggle with a poor-contrast test image. Second, since a given dataset is unlikely to exhaustively provide samples of all possible pathologies, unseen features such as drusen and hemorrhages can be problematic during testing.
To improve the robustness of the model, we randomize the source domain data using CLAHE <cit.>, in addition to other commonly used augmentation methods (e.g., rotation). For an input image x, we apply CLAHE C_ϵ to all the color channels with a random clip limit ϵ∈𝒩(5,1). In the resultant image x', the contrast of the vessels is strongly enhanced, but so is the background noise. Then, as in Fig. <ref>, we introduce a contrastive loss ℒ_cont for the latent image to guarantee that the model is not distracted by this exaggerated noise and provides stable visualization for inputs with varying contrast. The loss function is defined as the sum of the L_2 loss and the structural similarity (SSIM) loss.
ℒ_cont=E_θ(x)-E_θ(x')_2 + SSIM(E_θ(x)-E_θ(x'))
The SSIM loss is defined as
SSIM(x,y)=(2μ_xμ_y+c_1)(2σ_xy+c_2)/(μ_x^2+μ_y^2+c_1)(σ_x^2+σ_y^2+c_2),
where μ and σ represent the mean and standard deviation of the image, and c_1 and c_2 are constants.
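A sketch of the contrastive term is shown below; the equation above writes the second summand as SSIM(E_θ(x)-E_θ(x')), which we interpret here as the usual SSIM dissimilarity 1-SSIM(E_θ(x),E_θ(x')) computed from global image statistics, so this interpretation and the constants are assumptions rather than the authors' exact choices.

```python
import torch

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """SSIM computed from global image statistics, following the equation above."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

def contrastive_loss(z, z_aug):
    """L2 distance plus SSIM dissimilarity between latent images of x and its CLAHE'd view x'."""
    return torch.norm(z - z_aug, p=2) + (1.0 - ssim_global(z, z_aug))
```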
§.§ Experiments
Baseline Methods.
Since the color image is more sensitive to domain shift, it is common to convert the fundus image to grayscale as pre-processing, typically by extracting the green channel or using principle component analysis (PCA). We train a segmentation network that has the same architecture as E_θ with either the green channel or the PCA as input. We compare these two networks to Otsu thresholding of deep angiograms.
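For concreteness, the baseline grayscale conversions and the thresholding step could be sketched as follows (scikit-image and NumPy are assumed; this is an illustrative sketch, not the authors' code).

```python
import numpy as np
from skimage.filters import threshold_otsu

def to_grayscale(rgb, mode="green"):
    """Baseline inputs: the green channel or the first principal component of the RGB pixels."""
    if mode == "green":
        return rgb[..., 1].astype(float)
    flat = rgb.reshape(-1, 3).astype(float)
    flat -= flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)   # first PC of the centered pixels
    return (flat @ vt[0]).reshape(rgb.shape[:2])

def segment_angiogram(angiogram):
    """Vessel map obtained from a deep angiogram by simple Otsu thresholding."""
    return angiogram > threshold_otsu(angiogram)
```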
Datasets.
We use four publicly available fundus datasets as shown in Fig. <ref>(b). The DRIVE dataset <cit.> consists of 20 labelled images of size 565× 584. The HRF dataset <cit.> contains 45 labelled images of size 3504× 2336. The STARE dataset <cit.> includes 20 labelled images of size 700× 605.
The ARIA dataset <cit.> includes 138 labelled images of size 768× 576.
DRIVE and HRF are set as source domain, whereas STARE and ARIA are used for testing.
Implementation Details.
All networks are trained and tested on an NVIDIA RTX 2080TI 11GB GPU. We use a batch size of 4 and train for 300 epochs. We use the Adam optimizer with an initial learning rate of 5× 10^-4 for the proposed VAE and 1× 10^-3 for the baseline segmentation networks. The learning rate of both networks decays by a factor of 0.5 every 3 epochs.
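The stated optimization schedule corresponds, for example, to the following PyTorch setup; the network is a placeholder module, and only the learning rates, decay, and epoch count come from the text.

```python
import torch

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)    # placeholder for the residual U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)   # 1e-3 for the baseline segmentation nets
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)

for epoch in range(300):
    # ... one pass over the training loader with batch size 4 goes here ...
    scheduler.step()
```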
§ RESULTS AND CONCLUSION
Fig. <ref> shows a test example from each of the target domains. We observe that for different datasets, the manual annotations include varying amounts of detail: the label for the STARE dataset contains many more small vessels than ARIA. In the ARIA example, the deep angiogram is able to enhance the thin vessels with very poor contrast. This is also evident from the large vessels in the bottom-left quadrant of the image, where the illumination is low. Moreover, the angiogram filters out the circular artifacts seen within the red box. In the STARE example, our model extracts most of the vasculature, including the faintly visible fine vessels. These tiny vessels have relatively lower intensity in the deep angiogram, which suggests lower confidence. Compared to the manual label, the deep angiogram can also delineate the vessel diameter more precisely.
We quantitatively evaluate the vessel segmentation performance in Fig. <ref>. By simple thresholding of the deep angiogram, we obtain better vessel maps than the segmentation networks that use the green channel and the PCA image as inputs.
The proposed method can effectively extract a specific type of feature from a complex context. Specific to retinal vessels, our model can generate stable deep angiograms that dramatically enhance small vessels with poor contrast for color fundus images from unseen domains. Hence, deep angiogram is a low-cost method that can be performed using standard fundus photography technologies, including portable handheld systems. The ability to resolve vascular features without the need for exogenous contrast injections significantly reduces the clinical expertise/equipment/cost of retinal angiography. Integration of these technologies with recent demonstrations of cellphone-based fundus photography methods and remote diagnostic technologies can move retinal disease screening out of the clinic and dramatically expand the impact of color fundus photography.
§ ACKNOWLEDGEMENTS
This work is supported by the Vanderbilt University Discovery Grant Program.
spiebib
|
http://arxiv.org/abs/2307.02961v2
|
20230706125226
|
Selected Topics of Social Physics: Equilibrium Systems
|
[
"V. I. Yukalov"
] |
physics.soc-ph
|
[
"physics.soc-ph",
"cond-mat.stat-mech"
] |
|
http://arxiv.org/abs/2307.00651v1
|
20230702200258
|
More Synergy, Less Redundancy: Exploiting Joint Mutual Information for Self-Supervised Learning
|
[
"Salman Mohamadi",
"Gianfranco Doretto",
"Donald A. Adjeroh"
] |
cs.CV
|
[
"cs.CV"
] |
More Synergy, Less Redundancy: Exploiting Joint Mutual Information for Self-Supervised Learning
Salman Mohamadi, Gianfranco Doretto, Donald A. Adjeroh
August 1, 2023
================================================================================================
Self-supervised learning (SSL) is now a serious competitor for supervised learning, even though it does not require data annotation.
Several baselines have attempted to make SSL models exploit information about the data distribution and be less dependent on the augmentation effect. However, there is no clear consensus on whether maximizing or minimizing the mutual information between representations of augmentation views practically contributes to improvement or degradation in the performance of SSL models.
This paper is a fundamental work in which we investigate the role of mutual information in SSL and reformulate the problem of SSL from a new perspective on mutual information.
To this end, we consider joint mutual information from the perspective of partial information decomposition (PID) as a key step in reliable multivariate information measurement.
PID enables us to decompose joint mutual information into three important components, namely, unique information, redundant information and synergistic information.
Our framework aims for minimizing the redundant information between views and the desired target representation while maximizing the synergistic information at the same time. Our experiments lead to a re-calibration of two redundancy reduction baselines, and a proposal for a new SSL training protocol.
Extensive experimental results on multiple datasets and two downstream tasks show the effectiveness of this framework.
§ INTRODUCTION
Self-supervised learning (SSL) is among the most successful learning principles that do not require huge labeled datasets <cit.>. While deep learning has shown tremendous success in many domains and applications, including computer vision <cit.>, biometrics <cit.>, and genomics <cit.>, data efficiency has been the focus of only a few problem domains, such as deep active learning <cit.> and SSL <cit.>. Essentially, SSL frameworks consist of two key elements, namely, the loss function and the pretext task <cit.>.
Basically, the pretext task is a proxy task which is to be solved using a supervisory signal from the unlabeled data, guided by an objective (loss) function <cit.>. Loss functions, on the other hand, generally guide learning of the representation of a given sample by comparing two or more augmented views of the same sample with each other or with views of other samples.
In fact, early baselines known as contrastive baselines were developed around the idea of contrasting augmented views of a sample with each other (positive pairs) and also with the views from other samples (negative pairs) <cit.>. This type of baseline, however, suffers from the problem of potential representation collapse, as well as the need for large negative batches for effective representation. The next generation of baselines emerged as non-contrastive or negative-pair-free baselines <cit.>, essentially eliminating the need to contrast against negative views (negative pairs), with almost no risk of representation collapse. There is also a class of baselines known as clustering baselines, such as <cit.>, primarily based on clustering views of samples in the latent space. The two most recent baselines are based on redundancy reduction in the representations of augmented views of the samples <cit.>. This class of approaches mainly suggests that whitening the latent/embedding space of a pair of networks trained on augmented views of samples allows for reducing redundant information in the representation of the sample <cit.>. Later theoretical work on whitening baselines showed that the prime reason for their success is eliminating another type of collapse, dimensional collapse <cit.>.
In this work, we assess how this whitening process unwittingly eliminates synergistic information along with redundant information. This relates to a larger controversy on how mutual information relates to learning the target representation. Hence, in this paper, we start by investigating the long-standing ambiguity about the role of mutual information in SSL. This eventually leads us to reconsider the problem of mutual information between two variables (two views of a sample) by reformulating it as joint mutual information between three variables (two views and the target representation). To elaborate on the controversy, the general idea is to maximize the mutual information between the encoder representations of two augmented views for better representation; however, some work <cit.> suggested that more mutual information does not necessarily improve the representation. A recent work based on the Info-Min principle suggests that, in fact, less mutual information between augmented views along with more task-associated information would improve the representation using a certain augmentation setting <cit.>. Another very recent work acknowledges the questionable role of mutual information, and suggests that decomposing the estimation of mutual information by adding an extra term representing the condition on the image with some blocked patches would reinforce the role of mutual information. However, this work is different from our work, as they decompose the estimation of two-variable mutual information, whereas we focus on three-variable joint mutual information decomposition <cit.>. In fact, we seek the solution in the theory of partial information decomposition (PID). Eventually, this leads us to decompose the joint mutual information into its integral components, i.e., the unique, redundant, and synergistic components, as first introduced by <cit.>. In the following, we first state the problem and discuss the decomposition of joint mutual information, then re-define SSL in this new context. We elaborate on the SSL baselines that rely on redundancy reduction, propose a new training protocol for such SSL models, and then empirically evaluate the new protocol.
§ METHODS
§.§ Problem Statement
From an information theoretic perspective, the general, though controversial, idea is that SSL frameworks generally tend to maximize the mutual information between the encoder representations f(.) of two augmented views x_1 and x_2 of sample data x, upper bounded by I(x_1;x_2), i.e., I(f(x_1);f(x_2))≤ I(x_1;x_2) <cit.>. This objective comes with challenges, including how to optimally generate x_1 and x_2 <cit.> for actionable mutual information, as well as how to reduce redundant information in the representation <cit.>. To elaborate on the former challenge, Tian et al <cit.> suggested a heterodox idea, indicating that the augmentation process for generating views should be modified in a way that will enable reducing the mutual information between representations of positive views without affecting task-relevant information, i.e., mutual information is not necessarily task-relevant information. The latter challenge, on the other hand, suggests that whitening the latent/embedding space would reduce redundant information. However, we argue that rather than focusing on the mutual information between the representations of augmented views, the joint mutual information between the views' representations and the target representation could provide a possible way to resolve this controversy.
Hence, we take a totally different approach by formulating the core of SSL in terms of the joint mutual information between the views and the target representation.
This leads us to the observation that, even though rigorous redundancy reduction through whitening, such as in <cit.>, drops redundant information, it also risks reduction of useful synergistic information. This motivates us to design experiments to assess this claim in Sec. <ref>, and then to offer a training protocol to alleviate this loss of the synergistic element in joint mutual information. Specifically, we find it necessary to revisit the SSL principle from the joint mutual information perspective. Therefore, we assess the two most recent baselines, Barlow-Twins <cit.> and W-MSE <cit.>, which aim for redundancy reduction.
Below we elaborate on joint mutual information (in contrast with mutual information) and then investigate the two most recent baselines on whitening, which are also the most relevant baselines for studying redundancy and synergy.
§.§ Decomposing Joint Mutual Information
For the first time, we consider the general SSL problem setting from the viewpoint of PID, which has diverse practical applications, including in neuroscience, game theory, and statistical learning. Hence, we first present the PID introduced in <cit.> and then reformulate SSL accordingly. We note that PID is not the only approach to the multivariate measurement of information. However, it has multiple advantages in our SSL context, including a non-negative decomposition of information as well as separate and simultaneous measurement of redundancy and synergy as distinct quantities <cit.>.
This new interpretation of SSL is primarily posed to address the ambiguity in the role of mutual information in SSL.
The PID is an approach to a non-overlapping decomposition of the joint mutual information between two sets of variables, a set of two or more source variables carrying information about a target, as well as the single target variable. This decomposition has been challenging as the proposed solutions mostly consisted of negative information terms, until a breakthrough work by <cit.> which introduced a non-negative decomposition in terms of quantifying three components, the unique, redundant, and synergistic information.
In its simplest form, suppose we have two source variables S_1 and S_2 carrying joint mutual information I(T; S_1,S_2) about a target variable T. Hence, each of the source variables has mutual information with the target variable. Decomposing the joint mutual information into non-negative components models the information interaction, assessing the contribution offered by each source variable and by combinations of sources. According to <cit.>, as shown in Fig. <ref>, the joint mutual information between the sources and the target can be decomposed into three elements: unique, redundant, and synergistic information. Unique information is the part provided by each source separately, redundant information is the minimum information provided by each source (aka common mutual information), and synergistic information is the information provided only by the combination of S_1 and S_2 about T, which neither source alone can provide <cit.>.
I(S_1,S_2;T)=Redundancy(T; S_1, S_2) + Synergy(T; S_1, S_2) + Unique(T; S_1) + Unique(T; S_2)
Now consider the general setting of SSL, where at least two random augmented views of a sample are generated. The goal is to contrast them in order to learn a representation that is maximally informative about the original sample distribution, while minimally
informative about the augmentation. This contrast in essence creates an information interaction between the information of the variables which could be studied under the PID framework. Here, the two augmented views could be seen as source variables S_1 and S_2, whereas the original sample distribution is the target variable T. In a more general sense, T could be considered the class distribution representing the invariant representation of the views of a given sample, i.e., the class the data sample belongs to.
Here, as only redundant and synergistic information will be the results of interaction in contrasting views in SSL frameworks, unique information is not the subject matter of our study in this work. Unique information would be the subject of non-contrastive supervised learning on labeled data.
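To make the decomposition concrete, the sketch below computes the four PID terms for discrete variables from a joint probability table. It uses the simple "minimum mutual information" redundancy (redundancy equals the smaller of the two source-target mutual informations) rather than the original Williams-Beer measure, so the numbers only illustrate the bookkeeping; the XOR example at the end is a purely synergistic target, and all function names are ours.

import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a 2-D joint probability table p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])).sum())

def pid_mmi(p_s1_s2_t):
    """PID of I(S1,S2;T) with axes (S1, S2, T), using the minimum-mutual-
    information redundancy: Red = min(I(S1;T), I(S2;T))."""
    i1 = mutual_info(p_s1_s2_t.sum(axis=1))                        # I(S1;T)
    i2 = mutual_info(p_s1_s2_t.sum(axis=0))                        # I(S2;T)
    i12 = mutual_info(p_s1_s2_t.reshape(-1, p_s1_s2_t.shape[-1]))  # I(S1,S2;T)
    red = min(i1, i2)
    return {"unique_1": i1 - red, "unique_2": i2 - red,
            "redundancy": red, "synergy": i12 - i1 - i2 + red}

# Binary sources with T = S1 XOR S2: no unique or redundant information,
# one full bit of synergy -- the component that rigorous whitening puts at risk.
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, s1 ^ s2] = 0.25
print(pid_mmi(p))   # {'unique_1': 0.0, 'unique_2': 0.0, 'redundancy': 0.0, 'synergy': 1.0}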
§.§ Redundancy Reduction Baselines
Interestingly, the two most recent SSL baselines <cit.> are redundancy reduction (aka hard/soft whitening) baselines. Both baselines take advantage of whitening (Cholesky whitening) of the latent/embedding space via a cross-correlation matrix computed from augmented views of the same sample. Ermolov et al <cit.> proposed a hard whitening method based on a recent version of Cholesky decomposition <cit.> for whitening the latent space vectors. At the same time, Zbontar et al <cit.> has gained more popularity by proposing a simpler process called soft whitening, which essentially forces the cross-correlation matrix of the embedding vectors of two networks to the identity matrix. The latter approach, known as Barlow-Twins, suggests that this whitening approach intuitively results in redundancy reduction embedded in the off-diagonal elements of the cross-correlation matrix.
We use both approaches for our investigation, and provide further insight on synergy versus redundancy. However, due to the lack of space, we only present the theoretical reformulation of Barlow-Twins under our framework, as it is more popular. The following is the loss function of Barlow-Twins:
ℒ_BT≜∑_i(1-C_ii)^2 + λ∑_i∑_j≠ i(C_ij)^2
C_ij≜∑_m z_m,i^A z_m,j^B/√(∑_m (z_m,i^A)^2)√(∑_m (z_m,j^B)^2)
where C_ij are the elements of the cross-correlation matrix C between the embedding vectors (with elements z) of the two networks (twins), as presented in Eq. <ref>, and λ is a weighting factor, originally set to 5× 10^-3.
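For reference, a short sketch of the loss and cross-correlation definitions above is given below. We implement C_ij exactly as written (implementations commonly standardize the embeddings along the batch dimension before forming C, which we omit here), and the function names are ours.

import torch

def cross_correlation(z_a, z_b):
    """C_ij as defined above: column-wise normalized cross-correlation between
    the embedding batches z_a, z_b of shape (batch, dim) from the two twins."""
    num = z_a.T @ z_b                                   # sum_m z_{m,i}^A z_{m,j}^B
    denom = z_a.norm(dim=0).unsqueeze(1) * z_b.norm(dim=0).unsqueeze(0)
    return num / denom

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """BT loss above: pull diagonal elements of C to 1, off-diagonal ones to 0."""
    c = cross_correlation(z_a, z_b)
    on_diag = (1 - torch.diagonal(c)).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag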
§.§ Assessing synergy and redundancy
In order to lay a context for PID in the SSL setting, we find it necessary to design simple experiments around redundancy reduction and synergy in Barlow-Twins (BT). Note that as the augmented views of a sample generated under standard augmentation for SSL share lots of information in common (redundant, or commonly known as mutual information), BT attains desirable performance by implementing rigorous redundancy reduction. However, we argue that if the redundant information were not as much, the performance would drop sharply. To assess this, we apply heavy augmentation on samples (such as <cit.>) to generate views with significantly less redundant information, and then test BT performance on these. The top-1 accuracy for CIFAR10 and CIFAR100 (under the experimental settings in the next section) drops by 5.69% and 5.13%, respectively. Now, under the same heavy augmentation, we re-calibrate BT by setting λ=0.1 and also forcing the off-diagonal elements towards a multivariate Gaussian 𝒩(0,1) rather than zero to allow them to better affect the learned representation; we gain 0.91% and 0.81% accuracy compared with the former case. This implies that the off-diagonal elements not only carry redundant information, but also some other type of information. Otherwise, allowing more redundancy by using multivariate Gaussian off-diagonal elements would have degraded the performance. We argue that the off-diagonal elements do not only represent redundant information, but also synergistic information. This is why, when we reduce the redundant information by implementing heavy augmentation, BT's rigorous redundancy reduction constraint on the off-diagonal elements of the cross-correlation matrix degrades the performance by targeting synergistic information. Below, we propose a training protocol that works even better than forcing the off-diagonal elements to a multivariate Gaussian, and present our experimental results on the two baselines BT and W-MSE in Sec. <ref> to show the generality of our framework.
§ SYNERGY-BASED TRAINING PROTOCOL
We aim at re-calibrating the redundancy reduction in BT <cit.> and W-MSE <cit.> towards protecting the most synergistic information during the redundancy reduction process. In its current form, the BT approach does not seem to be able to optimally reduce redundancy without a significant loss in the synergistic component.
Our approach consists of serial pre-training, with a first phase of dropping redundancy and a second phase of adding to synergy. Hence, in this section, we define a new training protocol aiming at extracting more synergistic information during the process of redundancy reduction, which will be implemented on both BT and W-MSE. We present this protocol, aimed at more synergy and less redundancy via the use of engineered off-diagonal elements, to show the effectiveness of the joint mutual information decomposition in SSL. As the augmented views of a sample under standard augmentation share lots of mutual information, we find it practically more efficient to update/replace the loss function of BT and W-MSE after initial pre-training with the original loss function, which solely aims at redundancy reduction. This is done under a new training protocol with two phases of pre-training in two different settings. The first phase aims at reducing redundancy, while the second phase aims at adding to synergy. Below we only present the new formulation for BT; however, we provide the experimental results for both BT and W-MSE.
A. Gaussian off-diagonal: After the initial pre-training of the original model (here BT), the network is fixed and training resumes with an updated loss. For BT, we set λ=0.1 and replace the second term in Eq. <ref> with λ∑_i∑_j≠ i(C_ij-G_ij)^2, where G_ij are the multivariate Gaussian elements of a square matrix G of proper size. This allows BT to better consider the off-diagonal elements of the cross-correlation matrix, which convey synergy and redundancy.
B. Reinforced off-diagonal: After the initial pre-training of the original model (here BT), the network is fixed and the average C^Ave_ij=1/n∑_n C_ij over all n samples is computed. Then training resumes with the new λ=0.1 and the second term in Eq. <ref> updated as λ∑_i∑_j≠ i(C_ij-C^Ave_ij)^2, forcing each off-diagonal element towards its corresponding average.
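A compact sketch of the phase-two objectives in variants A and B is given below. The names are illustrative, and whether the Gaussian targets G_ij are redrawn per batch or fixed once is a design choice we leave open (redrawing per batch is shown here).

import torch

def recalibrated_off_diagonal_loss(c, variant="gaussian", c_avg=None, lam=0.1):
    """Phase-two loss on the cross-correlation matrix c.
    'gaussian'   (variant A): pull off-diagonal elements towards N(0,1) draws G_ij.
    'reinforced' (variant B): pull them towards their precomputed average C^Ave_ij."""
    on_diag = (1 - torch.diagonal(c)).pow(2).sum()
    target = torch.randn_like(c) if variant == "gaussian" else c_avg
    off_mask = 1.0 - torch.eye(c.shape[0], device=c.device)
    off_diag = (off_mask * (c - target)).pow(2).sum()
    return on_diag + lam * off_diag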
§ EXPERIMENTS AND RESULTS
§.§ Experiments
Baselines: Our modifications of BT and W-MSE
<cit.> resulted in GSBT and RSBT, as well as GSW-MSE and RSW-MSE, respectively. We perform experiments using our new training protocol under standard and heavy data augmentation. We contrast it with the most recent baselines, including Whitening-MSE (d=4) <cit.>, a non-contrastive baseline, BYOL <cit.>, and a clustering-based baseline, SwAV <cit.>. Following <cit.>, the latent spaces of all methods are
L_2-normalized.
Dataset and augmentation: We use six datasets including ImageNet <cit.>, CIFAR10, CIFAR100 <cit.>, Tiny ImageNet <cit.>, ImageNet-100, and VOC0712. We use two sets of augmentation protocols, standard and heavy. For standard augmentation including random grayscaling, random crop, color jittering, aspect ratio adjustment, and horizontal mirroring, we follow the settings in <cit.>, and for heavy augmentation we follow the settings in <cit.>.
Network & implementation details: For CIFAR10/100, following the details of each baseline <cit.>, we use ResNet18, while for ImageNet, Tiny ImageNet, and VOC0712 we use ResNet50 <cit.> for the encoder, and the same projector head as <cit.>, with the same size of projector output in all baselines. For VOC0712, similar to <cit.>, Faster R-CNN <cit.> is used. Optimization in all experiments is done using the Adam optimizer <cit.>. Pre-training of RSBT and GSBT, as well as RSW-MSE and GSW-MSE, is performed in two phases: phase one (redundancy reduction) consists of 500 epochs with a batch size of 1024, which starts with a learning rate of 0.15 for some 20 epochs and drops to 0.001 for the remaining epochs. Phase two (synergy addition) consists of another 500 epochs with a learning rate of 0.001, using their modified loss functions. The weight decay in both phases and all other experiments is 10^-6.
§.§ Evaluation and results
Similar to former methods, we perform the standard supervised linear evaluation for the classification task as well as detection. Classification involves fixing the encoder weights after pre-training, replacing the projector with a linear classifier (fully connected layer followed by softmax), training the linear classifier for some 500 epochs on the evaluation data, and then testing it. The classification results for ImageNet, CIFAR10/100, Tiny ImageNet, and ImageNet-100 with different settings of the proposed training protocol are presented in Tables 1, 2, and 3, whereas the detection results on VOC0712 are presented in Table 1. Results for modified BT using our protocol are presented in Tables 1 and 2, whereas the results for modified W-MSE using our protocol are available in Table 3. In both settings of data augmentation, our method outperforms prior approaches. While heavy augmentation degrades the performance of other approaches, it even improves RSBT, GSBT, as well as RSW-MSE and GSW-MSE, which shows the robustness of our approach.
§ CONCLUSION
We address the ambiguity regarding how mutual information relates to better representation in SSL. To this end, we explore the use of PID in SSL and re-define the formulation of the SSL problem in terms of the joint mutual information between three variables (two views of a sample and its original representation). This allows for the recognition of synergistic information along with the redundant information, and of their role in boosting performance. We design and perform extensive experiments on the most recent redundancy reduction baselines, BT and W-MSE, and instantiate the theoretical solution in practice under a new training protocol.
IEEEbib
|
http://arxiv.org/abs/2307.02189v1
|
20230705103049
|
Heralded three-photon entanglement from a single-photon source on a photonic chip
|
[
"Si Chen",
"Li-Chao Peng",
"Yong-Peng Guo",
"Xue-Mei Gu",
"Xing Ding",
"Run-Ze Liu",
"Xiang You",
"Jian Qin",
"Yun-Fei Wang",
"Yu-Ming He",
"Jelmer J. Renema",
"Yong-Heng Huo",
"Hui Wang",
"Chao-Yang Lu",
"Jian-Wei Pan"
] |
quant-ph
|
[
"quant-ph"
] |
Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China
Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
University of Science and Technology of China, School of Cyberspace Security, Hefei, China
Adaptive Quantum Optics Group, Mesa+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, Netherlands
In the quest to build general-purpose photonic quantum computers, fusion-based quantum computation has risen to prominence as a promising strategy. This model allows a ballistic construction of large cluster states which are universal for quantum computation, in a scalable and loss-tolerant way without feed-forward, by fusing many small n-photon entangled resource states. However, a key obstacle to this architecture lies in efficiently generating the required essential resource states on photonic chips. One such critical seed state that has not yet been achieved is the heralded three-photon Greenberger-Horne-Zeilinger (3-GHZ) state. Here, we address this elementary resource gap, by reporting the first experimental realization of a heralded dual-rail encoded 3-GHZ state. Our implementation employs a low-loss and fully programmable photonic chip that manipulates six indistinguishable single photons of wavelengths in the telecommunication regime. Conditional on the heralding detection, we obtain the desired 3-GHZ state with a fidelity 0.573±0.024. Our work marks an important step for the future fault-tolerant photonic quantum computing, leading to the acceleration of building a large-scale optical quantum computer.
Heralded Three-Photon Entanglement from a Single-Photon Source on a Photonic Chip
Jian-Wei Pan
August 1, 2023
=================================================================================
The photon is a favourable candidate for universal quantum computing <cit.>, offering several advantages such as room-temperature operation, negligible decoherence, and easy integration into existing fiber-optic-based telecommunications systems. In particular, the rapid development of integrated optics makes it an appealing physical platform for large-scale fault-tolerant quantum computing <cit.>.
Measurement-based quantum computing (MBQC) <cit.>, where quantum algorithms are performed by making single-qubit measurements on a large entangled state, usually called a cluster state, holds significant potential for photonic systems based on linear optics. Photonic cluster states can be efficiently created from small resource states in a fusion mechanism <cit.>.
Later, a ballistic strategy for MBQC was proposed <cit.>, enabling scalable and loss-tolerant generation of large cluster states by fusing many small entangled resource states without any feed-forward; this approach was subsequently reformulated as fusion-based quantum computation <cit.>. So far, the heralded three-photon Greenberger-Horne-Zeilinger state has been identified as the minimal initial entangled state <cit.>, serving as an essential building block for constructing large entangled cluster states <cit.>.
Deterministic generation of multiphoton cluster states from single-photon sources has been proposed <cit.> and implemented <cit.>. However, high-quality deterministic preparation still remains challenging with existing technology. Alternatively, without detecting or destroying the photons, one can near-deterministically generate such entangled clusters with heralded 3-GHZ states, which can be obtained using six single photons <cit.>.
Here, we report the first experimental demonstration of a heralded dual-rail-encoded 3-GHZ state using six single photons manipulated in a photonic chip. A high-quality single-photon source based on a semiconductor quantum dot <cit.> embedded in an open microcavity is used to deterministically produce single photons that are converted to the telecommunication band with a quantum frequency converter <cit.>. These single photons are deterministically demultiplexed into six indistinguishable single-photon sources <cit.>, which are manipulated in a fully programmable photonic chip <cit.>. Heralded by the detection of four output spatial modes with high-efficiency single-photon detectors, we obtain a heralded 3-GHZ state with a fidelity of 0.573±0.024. Our work is an important step towards fault-tolerant scalable photonic quantum computation.
Fig. <ref> illustrates the heralded generation of a dual-rail-encoded 3-GHZ state out of six single photons <cit.>. The scheme exploits a ten-mode linear optical circuit, which consists of twelve optical beam splitters with three distinct transmission coefficients and a pair of π-phase shifters. Six photons are injected into six specific input modes of the photonic circuit, preparing a number-basis initial state represented as |ψ_in⟩=a_1^†a_3^†a_4^†a_6^†a_8^†a_9^†|0⟩^⊗ 10=|1011010110⟩.
Here, a_i^† refers to the creation operator in mode i, |0⟩ symbolizes the vacuum state, and the numbers 0 and 1 indicate the number of photons occupied in each mode. The underlying unitary transformation U of the circuit is optimally parameterized such that a particular measurement patterns in four ancillary modes (7-10) herald a desired 3-GHZ state at six output modes (1-6) <cit.>. Qubits Q_1, Q_2 and Q_3 are identified with output mode pairs (1,2), (3,4) and (5,6). This association uses a dual-rail encoding method, which signifies the presence of one single photon in either of the two spatial modes within each mode pair, e.g., |0⟩_d=|10⟩ and |1⟩_d=|01⟩. The GHZ state is heralded in modes (1-6) only when single photons are detected in both (7,8) and in just one of the (9,10) ports, with the other one remaining in a vacuum state. Each event has a success probability of 1/108, resulting in an overall success rate of 1/54 <cit.>. Using a phase-tunable Mach-Zehnder interferometers (MZIs) at each output mode pair, one can perform arbitrary local projective measurements for estimating the full state. Details of the underlying state evolution and state measurements are provided in Supplementary.
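The heralding condition described above reduces to a simple check on the photon numbers registered in the four ancillary modes; the sketch below is only meant to make that acceptance rule explicit (the dictionary-based interface is our choice).

def is_heralded(counts):
    """counts[m] = photon number detected in ancillary output mode m (7..10).
    A 3-GHZ state is heralded when modes 7 and 8 each register one photon and
    exactly one of modes 9 and 10 registers one photon, the other being empty."""
    return (counts[7] == 1 and counts[8] == 1
            and sorted((counts[9], counts[10])) == [0, 1])

# Each of the two accepted patterns occurs with probability 1/108,
# giving the overall success rate of 2/108 = 1/54 quoted above.
print(is_heralded({7: 1, 8: 1, 9: 1, 10: 0}))   # True
print(is_heralded({7: 1, 8: 1, 9: 0, 10: 1}))   # True
print(is_heralded({7: 1, 8: 1, 9: 1, 10: 1}))   # False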
To prepare six single photons, we first use the state-of-the-art self-assembled InAs/GaAs quantum dot (QD), which is coupled to a polarized and tunable microcavity <cit.> and cooled down to ∼4 K. Under resonant pumping by a π-pulse laser with a repetition rate of ∼76 MHz, the QD emits ∼50 MHz polarized resonance fluorescence single photons at the end of the single-mode fiber. We measured the second-order correlation of the photon source with a Hanbury Brown-Twiss (HBT) interferometer <cit.>, and obtained g^2(0)=0.028(8) at zero delay, which indicates a high single-photon purity of 97.2(8)%. The single-photon indistinguishability is tested using a Hong-Ou-Mandel (HOM) interferometer <cit.>, yielding a visibility of 89(1)% between two photons separated by ∼13 ns.
We then employ a quantum frequency converter (QFC) to transfer the near-infrared wavelength of the produced single photons to the preferable telecommunication regime <cit.>. For this purpose, we fabricate a periodically poled lithium niobate (PPLN) waveguide for a difference-frequency generation process that can be adjusted by the wavelengths of the pump lasers. A continuous wave (CW) pump laser at ∼2060 nm and the QD-emitted single photons at ∼884.5 nm are then coupled into the PPLN waveguide, in which difference frequency generation occurs, thus generating output single photons at 1550 nm. By optimizing the waveguide coupling, transmission, and detection rate, we eventually achieve an overall single-photon conversion efficiency of ∼50%. To test whether the converted photons still preserve their single-photon nature and indistinguishability, we perform the HBT and HOM measurements on the photons after conversion. The purity of the single photons at 1550 nm stays at 97.4(6)%, and the indistinguishabilities between photon 1 and photons 2, 3, 4, 5, 6 are respectively 0.883(7), 0.86(1), 0.86(3), 0.88(1), 0.87(3), as shown in Fig. <ref> (a) and (b).
The converted single-photon stream is then deterministically demultiplexed into six spatial modes using a tree-like demultiplexer constructed from five pairs of Pockels cells (PCs) and polarizing beam splitters (PBSs). The PCs, synchronized to the laser pulses and operated at a repetition rate of ∼705 kHz, actively control the photon polarization when loaded with high-voltage electrical pulses. The measured average efficiency of the optical switches is ∼77%, which is mainly limited by the coupling efficiency and propagation loss. With the help of six single-mode fibers of different lengths and translation stages, we precisely compensate the relative time delays of the six single photons such that they can simultaneously arrive at the input ports of the photonic circuit.
To realize the functional design of the unitary transformation in Fig. <ref>, we employ a photonic chip that is low-loss and fully programmable <cit.>. The circuit is based on stoichiometric silicon nitride waveguides which are fabricated for single-mode operation at a wavelength of 1550 nm. It consists of 12 input and output spatial modes that are interconnected through an arrangement of adjustable beam splitters and thermo-optic phase shifters, as shown in Fig. <ref> (for details about the photonic circuit, please refer to Ref. <cit.>). To achieve a heralded 3-GHZ state, six single photons are injected into six inputs (1, 3, 4, 6, 8, 9) of the circuit and propagate through the circuit. The heralded outputs are 7, 8, 9 and 10. At each heralded output (7, 8, 9), two superconducting nanowire single-photon detectors (SNSPDs) are employed and act as a pseudo-photon-number detector that can resolve up to two photons. When each mode (7, 8, 9) contains a single photon and mode 10 is in the vacuum state, one obtains a heralded GHZ generation for three dual-rail encoded qubits defined in the modes Q1=(1,2), Q2=(3,4) and Q3=(5,6).
We then send the six photons one by one and measure the output distribution at the nine output modes (1-9) for analyzing the quality of the photonic circuit. For each input-output combination, we implement a Mach-Zehnder-type coherence measurement to record the corresponding phase. The normalized amplitude and measured phase compared to their theoretical distributions are summarized in Fig. <ref> (c) and (d).
To analyze the generated heralded three-photon GHZ state in dual-rail encoding, we use the phase-tunable MZIs to perform arbitrary local projective measurements on the single photons. The transformation matrices of these local measurements are compiled into the whole circuit. We then collect the six-photon coincidence counts at the used ten outputs in which each output mode (7, 8, 9) contains only one photon. To validate the three-qubit GHZ entanglement, we first measure the six-photon events in the |0⟩_d/|1⟩_d basis (see data in Fig. <ref> (a)) to calculate the population of (|0⟩_d⟨0|_d)^⊗3+(|1⟩_d⟨1|_d)^⊗3 over all the possible 2^3 combinations, leading to a population P=0.758 ± 0.025. We further estimate the expectation value of the observable M^⊗ N_θ=(cosθσ̂_x+sinθσ̂_y)^⊗ N, where θ=kπ/3 (k=0,1,2) and σ̂_x,σ̂_y are Pauli matrices. The coherence of the three-qubit GHZ state is defined by the off-diagonal element of its density matrix and can be calculated as C=(1/3)∑_k=0^2(-1)^k⟨ M_kπ/3^⊗3⟩, which gives C=0.389±0.040 (see data in Fig. <ref> (b)). The state fidelity can be directly estimated by F=(P+C)/2=0.573±0.024, which surpasses the classical threshold of 0.5 by more than 3 standard deviations and is sufficient to show the presence of entanglement <cit.>.
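The fidelity estimate above simply combines the measured population and coherence; the short sketch below reproduces that arithmetic (error propagation is omitted, and the function names are ours).

import numpy as np

def ghz_coherence(m_expect):
    """C = (1/3) * sum_k (-1)^k <M_{k*pi/3}^{(x)3}> for k = 0, 1, 2."""
    return float(np.mean([(-1) ** k * m for k, m in enumerate(m_expect)]))

def ghz_fidelity(population, coherence):
    """F = (P + C) / 2; exceeding 0.5 witnesses three-photon GHZ entanglement."""
    return 0.5 * (population + coherence)

print(ghz_fidelity(0.758, 0.389))   # ~0.574, matching the quoted 0.573 up to rounding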
In summary, we have demonstrated, for the first time, the heralded three-photon GHZ state using six photons from a high-quality quantum-dot single-photon source and a 12-mode fully programmable photonic chip. It should be noted that this experiment is unrealistic for commonly used spontaneous parametric down-conversion sources <cit.>, since the expected count rate will be four orders of magnitude smaller than this experiment, showing a huge advantage of deterministic quantum-dot single-photon sources. Our demonstrated three-qubit GHZ state is heralded and has a heralding efficiency that could reach to one in principle, which is the main building block for fusion-based photonic quantum computing. Here, the heralding efficiency is defined as the probability of successfully achieving a desired GHZ state when there are coincidences of desired heralding detectors. In our experiment, we obtain a heralding efficiency of ∼0.0005 after the correction of detectors' imperfections (see Supplementary). This heralding efficiency can be significantly improved in the near future, by further promoting the efficiency of single-photon sources and photonic circuit to enable the large-scale photonic quantum computing.
S. C. and L.-C P. contributed equally to this work.
*
apsrev4-2
|
http://arxiv.org/abs/2307.02791v1
|
20230706060647
|
The Role of Subgroup Separability in Group-Fair Medical Image Classification
|
[
"Charles Jones",
"Mélanie Roschewitz",
"Ben Glocker"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"cs.CY",
"cs.LG"
] |
Subgroup Separability in Medical Image Classification
C. Jones et al.
Department of Computing, Imperial College London, UK
{charles.jones17,mb121,b.glocker}@imperial.ac.uk
The Role of Subgroup Separability in Group-Fair Medical Image Classification
Charles Jones Mélanie Roschewitz Ben Glocker
============================================================================
We investigate performance disparities in deep classifiers. We find that the ability of classifiers to separate individuals into subgroups varies substantially across medical imaging modalities and protected characteristics; crucially, we show that this property is predictive of algorithmic bias. Through theoretical analysis and extensive empirical evaluation[Code is available at <https://github.com/biomedia-mira/subgroup-separability>], we find a relationship between subgroup separability, subgroup disparities, and performance degradation when models are trained on data with systematic bias such as underdiagnosis. Our findings shed new light on the question of how models become biased, providing important insights for the development of fair medical imaging AI.
§ INTRODUCTION
Medical image computing has seen great progress with the development of deep image classifiers, which can be trained to perform diagnostic tasks to the level of skilled professionals <cit.>. Recently, it was shown that these models might rely on sensitive information when making their predictions <cit.> and that they exhibit performance disparities across protected population subgroups <cit.>. Although many methods exist for mitigating bias in image classifiers, they often fail unexpectedly and may even be harmful in some situations <cit.>. Today, no bias mitigation methods consistently outperform the baseline approach of empirical risk minimisation (ERM) <cit.>, and none are suitable for real-world deployment. If we wish to deploy appropriate and fair automated systems, we must first understand the underlying mechanisms causing ERM models to become biased.
An often overlooked aspect of this problem is subgroup separability: the ease with which individuals can be identified as subgroup members. Some medical images encode sensitive information that models may leverage to classify individuals into subgroups <cit.>. However, this property is unlikely to hold for all modalities and protected characteristics. A more realistic premise is that subgroup separability varies across characteristics and modalities. We may expect groups with intrinsic physiological differences to be highly separable for deep image classifiers (e.g. biological sex from chest X-ray can be predicted with > 0.98 AUC). In contrast, groups with more subtle differences (e.g. due to `social constructs') may be harder for a model to classify. This is especially relevant in medical imaging, where attributes such as age, biological sex, self-reported race, socioeconomic status, and geographic location are often considered sensitive for various clinical, ethical, and societal reasons.
We highlight how the separability of protected groups interacts in non-trivial ways with the training of deep neural networks. We show that the ability of models to detect which group an individual belongs to varies across modalities and groups in medical imaging and that this property has profound consequences for the performance and fairness of deep classifiers. To the best of our knowledge, ours is the first work which analyses group-fair image classification through the lens of subgroup separability. Our contributions are threefold:
* We demonstrate empirically that subgroup separability varies across real-world modalities and protected characteristics.
* We show theoretically that such differences in subgroup separability affect model bias in learned classifiers and that group fairness metrics may be inappropriate for datasets with low subgroup separability.
* We corroborate our analysis with extensive testing on real-world medical datasets, finding that performance degradation and subgroup disparities are functions of subgroup separability when data is biased.
§ RELATED WORK
Group-fair image analysis seeks to mitigate performance disparities caused by models exploiting sensitive information. In medical imaging, Seyyed-Kalantari et al. <cit.> highlighted that classification models trained through ERM underdiagnose historically underserved population subgroups. Follow-up work has additionally shown that these models may use sensitive information to bias their predictions <cit.>. Unfortunately, standard bias mitigation methods from computer vision, such as adversarial training <cit.> and domain-independent training <cit.>, are unlikely to be suitable solutions. Indeed, recent benchmarking on the MEDFAIR suite <cit.> found that no method consistently outperforms ERM. On natural images, Zietlow et al. <cit.> showed that bias mitigation methods worsen performance for all groups compared to ERM, giving a stark warning that blindly applying methods and metrics leads to a dangerous `levelling down' effect <cit.>.
One step towards overcoming these challenges and developing fair and performant methods is understanding the circumstances under which deep classifiers learn to exploit sensitive information inappropriately. Today, our understanding of this topic is limited. Closely related to our work is Oakden-Rayner et al., who consider how `hidden stratification' may affect learned classifiers <cit.>; similarly, Jabbour et al. use preprocessing filters to inject spurious correlations into chest X-ray data, finding that ERM-trained models are more biased when the correlations are easier to learn <cit.>. Outside of fairness, our work may have broader impact in the fields of distribution shift and shortcut learning <cit.>, where many examples exist of models learning to exploit inappropriate spurious correlations <cit.>, yet tools for detecting and mitigating the problem remain immature.
§ THE ROLE OF SUBGROUP SEPARABILITY
Consider a binary disease classification problem where, for each image x ∈ X, we wish to predict a class label y ∈ Y: {y^+,y^-}. We denote by P: [Y | X] → [0,1] the underlying mapping between images and class labels. Suppose we have access to a (biased) training dataset, where P_tr is the conditional distribution between training images and training labels; we say that such a dataset is biased if P_tr ≠ P. We focus on group fairness, where each individual belongs to a subgroup a ∈ A, and aim to learn a fair model that maximises performance for all groups when deployed on an unbiased test dataset drawn from P. We assume that the groups are consistent across both datasets. The bias we consider in this work is underdiagnosis, a form of label noise <cit.> where some truly positive individuals x^+ are mislabeled as negative. We are particularly concerned with cases where underdiagnosis manifests in specific subgroups due to historic disparities in healthcare provision or discriminatory diagnosis policy. Formally, group A = a^* is said to be underdiagnosed if it satisfies Eq. (<ref>):
P_tr(y | x^+, a^*) ≤ P(y | x^+, a^*) and ∀ a ≠ a^*, P_tr(y | x^+, a) = P(y | x^+, a)
We may now use the law of total probability to express the overall mapping from image to label in terms of the subgroup-wise mappings in Eq. (<ref>). Together with Eq. (<ref>), this implies Eq. (<ref>) – the probability of a truly positive individual being assigned a positive label is lower in the biased training dataset than for the unbiased test set.
P_tr(y | x) = ∑_a ∈ AP_tr(y | x, a)P_tr(a | x)
P_tr(y | x^+) ≤ P(y | x^+)
At training time, supervised learning with empirical risk minimisation aims to obtain a model p̂, mapping images to predicted labels ŷ = argmax_y∈ Yp̂(y | x) such that p̂(y | x) ≈ P_tr(y | x), ∀ (x, y). Since this model approximates the biased training distribution, we may expect underdiagnosis from the training data to be reflected by the learned model when evaluated on the unbiased test set. However, the distribution of errors from the learned model depends on subgroup separability. Revisiting Eq. (<ref>), notice that the prediction for any individual is a linear combination of the mappings for each subgroup, weighted by the probability the individual belongs to each group. When subgroup separability is high due to the presence of sensitive information, the model will learn a different mapping for each subgroup, shown in Eq. (<ref>) and Eq. (<ref>). This model underdiagnoses group A=a^* whilst recovering the unbiased mapping for other groups.
p̂(y | x^+, a^*) ≈ P_tr(y | x^+, a^*) ≤ P(y | x^+, a^*)
and ∀ a ≠ a^*, p̂(y | x^+, a) ≈ P_tr(y | x^+, a) = P(y | x^+, a)
Eq. (<ref>) and Eq. (<ref>) show that, at test-time, our model will demonstrate worse performance for the underdiagnosed subgroup than the other subgroups. Indeed, consider True Positive Rate (TPR) as a performance metric. The group-wise TPR of an unbiased model, TPR_a^(u), is expressed in Eq. (<ref>).
TPR_a^(u) = |p̂(y|x^+, a) > 0.5|/N_+, a≈|P(y|x^+, a) > 0.5|/N_+, a
Here, N_+, a denotes the number of positive samples belonging to group a in the test set. Remember, in practice, we must train our model on the biased training distribution P_tr. We thus derive test-time TPR for such a model, TPR_a^(b), from Eq. (<ref>) and Eq. (<ref>), giving Eq. (<ref>) and Eq. (<ref>).
TPR_a^*^(b)≈|P_tr(y|x^+, a^*) > 0.5|/N_+, a^*≤|P(y|x^+, a^*) > 0.5|/N_+, a^*≈TPR_a^*^(u)
and ∀ a ≠ a^*, TPR_a^(b)≈|P_tr(y|x^+, a) > 0.5|/N_+, a≈TPR_a^(u)
In the case of high subgroup separability, Eq. (<ref>) and Eq. (<ref>) demonstrate that TPR of the underdiagnosed group is directly affected by bias from the training set while other groups are mainly unaffected. Given this difference across groups, an appropriately selected group fairness metric may be able to identify the bias, in some cases even without access to an unbiased test set <cit.>. On the other hand, when subgroup separability is low, this property does not hold. With non-separable groups (i.e. P(a | x) ≈1/|A|, ∀ a ∈ A), a trained model will be unable to learn separate subgroup mappings, shown in Eq. (<ref>).
p̂(y | x^+, a) ≈ P_tr(y | x^+), ∀ a ∈ A
Equations (<ref>) and (<ref>) imply that the performance of the trained model degrades for all groups. Returning to the example of TPR, Eq. (<ref>) represents performance degradation for all groups when separability is poor. In such situations, we expect performance degradation to be uniform across groups and thus not be detected by group fairness metrics. The severity of the degradation depends on both the proportion of corrupted labels in the underdiagnosed subgroup and the size of the underdiagnosed subgroup in the dataset.
TPR_a^(b)≈|P_tr(y|x^+, a) > 0.5|/N_+, a≤|P(y|x^+, a) > 0.5|/N_+, a≈TPR_a^(u), ∀ a ∈ A
We have derived the effect of underdiagnosis bias on classifier performance for the two extreme cases of high and low subgroup separability. In practice, subgroup separability for real-world datasets may vary continuously between these extremes. In Section <ref>, we empirically investigate (i) how subgroup separability varies in the wild, (ii) how separability impacts performance for each group when underdiagnosis bias is added to the datasets, (iii) how models encode sensitive information in their representations.
§ EXPERIMENTS AND RESULTS
We support our analysis with experiments on five datasets adapted from a subset of the MEDFAIR benchmark <cit.>. We treat each dataset as a binary classification task (no-disease vs disease) with a binary subgroup label. For datasets with multiple sensitive attributes available, we investigate each individually, giving eleven dataset-attribute combinations. The datasets cover the modalities of skin dermatology <cit.>, fundus images <cit.>, and chest X-ray <cit.>. We record summary statistics for the datasets used in the supplementary material (Table <ref>), where we also provide access links (Table <ref>). Our architecture and hyperparameters are listed in Table <ref>, adapted from the experiments in MEDFAIR.
§.§ Subgroup separability in the real world
We begin by testing the premise of this article: subgroup separability varies across medical imaging settings. To measure subgroup separability, we train binary subgroup classifiers for each dataset-attribute combination. We use test-set area under receiver operating characteristic curve (AUC) as a proxy for separability, reporting results over ten random seeds in Table <ref>.
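As a sketch of this measurement, the snippet below trains a probe to predict the protected attribute and reports its test-set AUC. In the paper the probe is a deep image classifier; a logistic-regression probe on precomputed features is shown here purely for illustration, and the function name is ours.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def subgroup_separability(train_feats, train_groups, test_feats, test_groups):
    """Proxy for subgroup separability: test-set AUC of a binary classifier
    trained to predict the protected attribute."""
    probe = LogisticRegression(max_iter=1000).fit(train_feats, train_groups)
    scores = probe.predict_proba(test_feats)[:, 1]
    return roc_auc_score(test_groups, scores)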
Some patterns are immediately noticeable from Table <ref>. All attributes can be predicted from chest X-ray scans with > 0.9 AUC, implying that the modality encodes substantial information about patient identity. Age is consistently well predicted across all modalities, whereas separability of biological sex varies, with prediction of sex from fundus images being especially weak. Importantly, the wide range of AUC results [0.642 → 0.986] across the dataset-attribute combinations confirms our premise that subgroup separability varies substantially across medical imaging applications.
§.§ Performance degradation under label bias
We now test our theoretical finding: models are affected by underdiagnosis differently depending on subgroup separability. We inject underdiagnosis bias into each training dataset by randomly mislabelling 25 % of positive individuals in Group 1 (see Table <ref>) as negative. For each dataset-attribute combination, we train ten disease classification models with the biased training data and ten models with the original clean labels; we test all models on clean data. We assess how the test-time performance of the models trained on biased data degrades relative to models trained on clean data. We illustrate the mean percentage point accuracy degradation for each group in Fig. <ref> and use the Mann-Whitney U test (with the Holm-Bonferroni adjustment for multiple hypothesis testing) to determine if the performance degradation is statistically significant at p_critical=0.05. We include an ablation experiment over varying label noise intensity in Fig. <ref>.
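The label-bias injection and the per-group degradation measurement can be summarised as follows; the sketch assumes binary labels and group indicators stored as integer arrays, and the helper names are illustrative.

import numpy as np

def inject_underdiagnosis(labels, groups, target_group=1, rate=0.25, seed=0):
    """Flip `rate` of the truly positive training labels in `target_group`
    to negative, mirroring the underdiagnosis bias injected above."""
    rng = np.random.default_rng(seed)
    biased = labels.copy()
    pos_idx = np.where((labels == 1) & (groups == target_group))[0]
    flip = rng.choice(pos_idx, size=int(rate * len(pos_idx)), replace=False)
    biased[flip] = 0
    return biased

def groupwise_accuracy_drop(y_true, pred_clean, pred_biased, groups):
    """Percentage-point accuracy degradation per subgroup: model trained on
    biased labels versus model trained on clean labels, both tested on clean data."""
    drops = {}
    for g in np.unique(groups):
        m = groups == g
        acc_clean = (pred_clean[m] == y_true[m]).mean()
        acc_biased = (pred_biased[m] == y_true[m]).mean()
        drops[int(g)] = 100.0 * (acc_clean - acc_biased)
    return drops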
Our results in Fig. <ref> are consistent with our analysis in Section <ref>. We report no statistically significant performance degradation for dataset-attribute combinations with low subgroup separability (< 0.9 AUC). In these experiments, the proportion of mislabelled images is small relative to the total population; thus, the underdiagnosed subgroups mostly recover from label bias by sharing the correct mapping with the uncorrupted group. While we see surprising improvements in performance for PAPILA, note that this is the smallest dataset, and these improvements are not significant at p_critical=0.05. As subgroup separability increases, performance degrades more for the underdiagnosed group (Group 1), whilst performance for the uncorrupted group (Group 0) remains somewhat unharmed. We see a statistically significant performance drop for Group 0 in the MIMIC-Sex experiment – we believe this is because the model learns separate group-wise mappings, shrinking the effective size of the dataset for Group 0.
§.§ Use of sensitive information in biased models
Finally, we investigate how biased models use sensitive information. We apply the post hoc Supervised Prediction Layer Information Test (SPLIT) <cit.> to all models trained for the previous experiment, involving freezing the trained backbone and re-training the final layer to predict the sensitive attribute. We report test-set SPLIT AUC in Fig. <ref>, plotting it against subgroup separability AUC from Table <ref> and using Kendall's τ statistic to test for a monotonic association between the results (p_critical = 0.05). We find that models trained on biased data learn to encode sensitive information in their representations and see a statistically significant association between the amount of information available and the amount encoded in the representations. Models trained on unbiased data have no significant association, so do not appear to exploit sensitive information.
§ DISCUSSION
We investigated how subgroup separability affects the performance of deep neural networks for disease classification. We discuss four takeaways from our study:
Subgroup separability varies substantially in medical imaging. In fairness literature, data is often assumed to contain sufficient information to identify individuals as subgroup members. But what if this information is only partially encoded in the data? By testing eleven dataset-attribute combinations across three medical modalities, we found that the ability of classifiers to predict sensitive attributes varies substantially. Our results are not exhaustive – there are many modalities and sensitive attributes we did not consider – however, by demonstrating a wide range of separability results across different attributes and modalities, we highlight a rarely considered property of medical image datasets.
Performance degradation is a function of subgroup separability. We showed, theoretically and empirically, that the performance and fairness of models trained on biased data depends on subgroup separability. When separability is high, models learn to exploit the sensitive information and the bias is reflected by stark subgroup differences. When separability is low, models cannot exploit sensitive information, so they perform similarly for all groups. This indicates that group fairness metrics may be insufficient for detecting bias when separability is low. Our analysis centred on bias in classifiers trained with the standard approach of empirical risk minimisation – future work may wish to investigate whether subgroup separability is a factor in the failure of bias mitigation methods and whether it remains relevant in further image analysis tasks (e.g. segmentation).
Sources of bias matter. In our experiments, we injected underdiagnosis bias into the training set and treated the uncorrupted test set as an unbiased ground truth. However, this is not an endorsement of the quality of the data. At least some of the datasets may already contain an unknown amount of underdiagnosis bias (among other sources of bias) <cit.>. This pre-existing bias will likely have a smaller effect size than our artificial bias, so it should not play a significant role in our results. Still, the unmeasured bias may explain some variation in results across datasets. Future work should investigate how subgroup separability interacts with other sources of bias. We renew the call for future datasets to be released with patient metadata and multiple annotations to enable analysis of different sources and causes of bias.
Reproducibility and impact. This work tackles social and technical problems in machine learning for medical imaging and is of interest to researchers and practitioners seeking to develop and deploy medical AI. Given the sensitive nature of this topic, and its potential impact, we have made considerable efforts to ensure full reproducibility of our results. All datasets used in this study are publicly available, with access links in Table <ref>. We provide a complete implementation of our preprocessing, experimentation, and analysis of results at <https://github.com/biomedia-mira/subgroup-separability>.
§.§ Acknowledgements
C.J. is supported by Microsoft Research and EPSRC through the Microsoft PhD Scholarship Programme. M.R. is funded through an Imperial College London President's PhD Scholarship. B.G. received support from the Royal Academy of Engineering as part of his Kheiron/RAEng Research Chair.
splncs04
§ SUPPLEMENTARY MATERIAL
|
http://arxiv.org/abs/2307.01470v1
|
20230704042903
|
A Review of Driver Gaze Estimation and Application in Gaze Behavior Understanding
|
[
"Pavan Kumar Sharma",
"Pranamesh Chakraborty"
] |
cs.CV
|
[
"cs.CV",
"cs.HC",
"cs.LG"
] |
Pavan Kumar Sharma, Pranamesh Chakraborty (corresponding author)
Pavan Kumar Sharma: [email protected]; Pranamesh Chakraborty: [email protected], +91-512-259-2146
Department of Civil Engineering, Indian Institute of Technology Kanpur, Kanpur-208016, U.P, India
Driver gaze plays an important role in different gaze-based applications such as driver attentiveness detection, visual distraction detection, gaze behavior understanding, and building driver assistance systems. The main objective of this study is to provide a comprehensive summary of driver gaze fundamentals, methods to estimate driver gaze, and its applications in real-world driving scenarios. We first discuss the fundamentals related to driver gaze, covering head-mounted and remote-setup-based gaze estimation and the terminologies used for each of these data collection methods. Next, we list the existing benchmark driver gaze datasets, highlighting the collection methodology and the equipment used for such data collection. This is followed by a discussion of the algorithms used for driver gaze estimation, which primarily involve traditional machine learning and deep learning based techniques. The estimated driver gaze is then used for understanding gaze behavior while maneuvering through intersections, on-ramps, off-ramps, and lane changing, and for determining the effect of roadside advertising structures. Finally, we discuss the limitations of the existing literature, the challenges, and the future scope in driver gaze estimation and gaze-based applications.
Keywords: driver gaze; gaze estimation; driver gaze datasets; driver gaze understanding
§ INTRODUCTION
Driver safety is one of the major global concerns due to the increasing number of road crashes every year. According to the Global Status Report on Road Safety 2018 by the World Health Organization (WHO), approximately 1.3 million people die every year from road crashes <cit.>. There are several causes of road crashes, among which distracted driving, drowsiness, and driver inattentiveness to the surrounding traffic are significant. Driver gaze is an important cue for measuring distraction and attentiveness to the surroundings. Although research on driver gaze estimation has mostly been carried out during the last two to three decades, the history of human gaze estimation dates back to the nineteenth century. In the early 20th century, it was limited to the medical field with invasive gaze-estimation techniques <cit.>. Due to the advancement of technology over the past few decades, gaze estimation has become a critical research field. In addition to driver gaze estimation, estimation of the human gaze is used in many other applications, such as human-computer interaction <cit.>, health care and the medical field <cit.>, education and e-learning <cit.>, consumer psychology and marketing <cit.>, etc. In the driving context, the gaze is estimated in an intrusive or non-intrusive manner. In intrusive techniques, drivers wear a head-mounted eyeglass setup or eye tracker, some of which look like normal eyeglasses. On the contrary, in non-intrusive techniques, remote gaze tracking systems are used, which classify the driver's gaze into several areas of interest (AOI). In the literature, AOI are also known as gaze zones or gaze classes. In this paper, we will primarily use the term gaze zones in the case of remote gaze tracking.
Multiple review studies exist on gaze estimation based on different estimation techniques and applications. Some studies include both traditional machine learning and deep learning-based gaze estimation techniques <cit.>, while others include only deep learning techniques <cit.>.
A majority of these review studies, except those by <cit.> and <cit.>, have focused on screen-based gaze estimation across different consumer platforms, including both constrained (fixed head movement) and unconstrained (head movement allowed) environments.
On the other hand, <cit.> and <cit.> reviewed different driver eye-tracking techniques and attention models. They discussed distraction and drowsiness measurement techniques used to assess drivers' attention and the features used by car manufacturers in building advanced driver assistance systems (ADAS). Driver drowsiness and distraction are measured using eye movement information such as the percentage of time the eyes are closed, blink amplitude, amplitude-velocity ratio, energy of blinking, blink frequency, blink duration, etc. The study <cit.> primarily focused on models of driver attention measurement for assistive and autonomous driving and also briefly discussed driver face and eye datasets. These existing reviews do not clearly define the terminologies used in gaze estimation and gaze behavior understanding for head-mounted (eye tracker based) and remote gaze estimation setups (cameras mounted on the dashboard or windshield).
Further, these studies have focused only on driver gaze and attention measurement and have not discussed other aspects of driver gaze behavior. For example, they do not discuss how gaze behavior differs among driver age groups (younger or older; novice or experienced) at different road locations such as signalized intersections (SI), unsignalized intersections (USI), mid-blocks, road curves, and ramp entries and exits (on-ramp and off-ramp). To address this gap, the present review primarily contributes a survey of existing driver gaze estimation work, highlighting the key terminologies, and of driver gaze behavior understanding while maneuvering through intersections, overtaking, turning on curves, and entering and exiting ramps.
The objective of this review is to understand the several terminologies used in driver gaze estimation and tracking, existing benchmark driver gaze datasets, driver gaze estimation techniques, and their applications in the driving field. The structure of the paper is shown in Fig. <ref>. Section 2 describes different terminologies used in estimating driver gaze using head-mounted gaze tracking and remote setup gaze tracking, while Section 3 discusses the existing driver gaze benchmark datasets with their pros and cons. Different gaze estimation algorithms and models based on traditional machine learning and deep learning are covered in Section 4. Section 5 covers the various driver gaze-based applications for understanding driver gaze behavior and building advanced driver safety systems. Finally, Section 6 provides a general discussion and future scope, which includes the challenges and limitations of the existing studies, followed by a conclusion at the end of the paper.
To write this review, we performed a thorough search on Google Scholar with the following keywords: gaze estimation, driver gaze estimation, driver head and eye pose estimation, gaze tracking, driver gaze, eye tracker, driver gaze behavior, driver gaze behavior at intersections, lane changing or overpassing gaze behavior, advanced driving assistance system, driver gaze distraction, driver inattentiveness. Approximately 1100 research papers were retrieved using the above keywords and scrutinized based on their title and abstract. Finally, 160 relevant research papers have been included in the present paper. The included research papers are from different reputed journals such as Transactions on Intelligent Transportation Systems, Transactions on Intelligent Vehicles, Transactions on Vehicular Technology, Transactions on Information Theory, Expert Systems with Applications, Pattern Analysis and Machine Intelligence, The Bell System Technical Journal, Open Journal of Signal Processing, Transportation Research Part A: Policy and Practice, Transportation Research Part C: Emerging Technology, Transportation Research Part F: Traffic Psychology and Behavior, Transportation Research Record, Journal of Safety Research, IET Intelligent Transport Systems, Spanish Journal of Psychology, International Journal of Robotics Research, etc. We also include conference papers from venues such as the Intelligent Transportation Systems Conference, the CVF Computer Vision and Pattern Recognition (CVPR) conference, the International Conference on Computer Vision (ICCV), the International Conference on Automatic Face and Gesture Recognition, the International Conference on Robotics and Biomimetics, the Intelligent Vehicles Symposium (IV), and the European Conference on Computer Vision (ECCV), as well as the WHO global status report on road safety. In this study, we have reviewed driver gaze related only to passenger car transportation; other modes of transportation, such as motorcycles, auto rickshaws, trucks, buses, etc., have been excluded. This paper includes literature from both simulator and real-world driving scenarios.
§ DRIVER GAZE TRACKING FUNDAMENTALS
In the literature, estimation of driver gaze is referred to by different terminologies, such as eye pose estimation, eye tracking <cit.>, gaze estimation, gaze detection <cit.>, and gaze tracking. However, all of these are similar and often used interchangeably in different research papers. Gaze estimation is a technique to estimate the 3D line of sight (direction of gaze) from a given image. Continuous estimation of the gaze direction is known as gaze tracking. Tracking the driver's gaze is helpful in different applications, such as drowsiness detection, measurement of driver inattentiveness, driver behavior understanding, and building advanced driver assistance systems.
§.§ Gaze estimation techniques
Gaze estimation is done using different techniques depending on the application. Initially, sensors attached to the facial skin, such as electrode pairs, were used to record potential differences during eye movements <cit.> to understand cognitive behavior. This technique though accurate but is usually uncomfortable to the users.
Due to the advancement of computer vision-based technology, gaze estimation has seen widespread application in different fields, including driver assistance and behavior understanding. Head-mounted trackers (wearable sensors) and remote setup (non-wearable)-based sensors are the two standard techniques used in practice for driver gaze estimation, briefly discussed next.
§.§.§ Head mounted gaze estimation
In head-mounted gaze estimation, the driver wears a device on their head called an eye tracker. Fig. <ref>a shows a sample eye-tracker glass, which looks very similar to regular prescription glasses. The head-mounted system primarily consists of near-eye cameras with infrared LED light for active illumination of the eyes (Fig. <ref>b) and a scene camera (Fig. <ref>c). In this system, a near-eye camera is placed near each eye, recording the eye movements from close up, as shown in Fig. <ref>e. The scene camera records the frontal view, allowing gaze data to be correlated with the cues and stimuli present in the driver's scene (Fig. <ref>f).
Head-mounted eye trackers rely on detecting image features from the near-eye cameras, including the pupil, iris contours (Fig. <ref>d), and glints, i.e., reflections produced by the infrared LEDs. Typically, eye trackers require calibration for each driver before estimating the driver's gaze <cit.>. However, some of the latest eye trackers are calibration-free <cit.>. A head-mounted gaze estimation system allows driver head movements without affecting the camera views of the eyes. Several studies have used eye trackers to understand driver gaze behavior in indoor (simulation-based studies) <cit.> and outdoor traffic environments (real-world driving studies) <cit.>.
§.§.§ Remote setup gaze estimation
For remote setup-based gaze estimation, cameras are typically placed on the dashboard <cit.> or sometimes mounted on the windshield <cit.>. Single or multiple cameras can be used, depending on the nature of gaze zone classification. Typically single cameras are preferred when dashboard and windshield areas are divided into coarser gaze zones, while in finer gaze zone classification, multiple cameras are primarily used <cit.>. Multiple cameras capture face images from different angles, such as left and right sides of the face and eye <cit.>. This helps to make gaze estimation robust even for substantial head movement, which is a shortcoming of single camera-based estimation. Cameras are placed in such a way that they do not create an interruption in the driver's field of view (FOV). The head pose images are used to extract the features in the case of traditional machine learning models, while for deep learning-based models, the images are directly used for end-to-end gaze classification. This system typically does not require driver's calibration <cit.>. However, in a similar electronic screen-based remote gaze tracking used to measure users' cognitive load while using a website, app, or reading text content, etc., typically subject calibration is needed before the gaze estimation on-screen <cit.>.
§.§ Terminology used for gaze estimation
This section will discuss the different terminologies used in head-mounted and remote setup-based gaze estimation. Head-mounted and remote setup-based gaze estimation are two approaches to driver gaze estimation that differ in how precisely the gaze is estimated. Head-mounted gaze estimation is the finer approach, in which driver gaze information is collected using an eye tracker (Fig. <ref>a). In the remote setup, single or multiple cameras mounted on the dashboard and windshield area capture the different driver head and eye poses, which are classified into different gaze classes based on predefined gaze zones (Fig. <ref>b). The terms fixation, saccade, dwell time, etc., are used in head-mounted gaze estimation, while in the remote setup, the terms glance, glance duration, and glance frequency are used to analyze the driver's gaze. To understand the driver's state and behavior, we must therefore be familiar with the terminologies used in both head-mounted and remote setup gaze estimation.
§.§.§ Terminology used in head mounted gaze estimation
This approach considers eye movements as the primary clue for gaze estimation. Eye movement is estimated by detecting the pupil's relative motion or iris center from a reference point inside the eye image. The reference point can be a glint point produced on the cornea using an LED illuminator, a point of intersection of eyelids and the canthus, etc. Determining pupils in outdoor lighting is often more challenging than in indoor lighting conditions. In an outdoor setting, it suffers from poor illumination, sunlight reflection on glasses or eyeballs, off-axis camera position, etc. <cit.>. The following terminology defines the continuous output of the driver's gaze in a head-mounted gaze estimation system.
* Fixation: It is the state of the eyes in which a driver maintains their visual gaze in a given gaze zone or area of interest (AOI) for a certain period <cit.>. The ISO 15007-1:2020 standard defines that individual fixations typically last between 100 milliseconds and 2000 milliseconds (ms) <cit.>. Remote eye tracker manufacturers such as Smart Eye Pro and Tobii consider 200 ms to be the minimum gaze period for a valid fixation in driver behavior studies <cit.>. Fixation time typically decreases if a driver is familiar with the road environment <cit.>.
* Dwell Time: It is the sum of the durations of fixations in a given area of interest. In driver behavior studies, dwell time gives the proportion of time spent by drivers gazing at objects in an interval of time. The objects can be dynamic, such as moving vehicles, or static, such as stationary vehicles, traffic lights, traffic signs, road markings, etc. <cit.>. A higher dwell time in a given gaze zone represents a higher level of interest of the driver in that particular gaze zone.
* Time to first fixation (TTFF): It measures how long a subject takes to start looking at a specific gaze zone after the stimulus is presented. TTFF indicates how much an aspect of the scene initially attracted attention.
* Number of fixations: It represents the total number of times a subject fixates their gaze in a given gaze zone in a given time interval <cit.>.
* Saccades: Saccades are the rapid movements of the eyes in which the line of sight shifts from one fixation point to another <cit.>. The time taken to shift the eyes from one point of fixation to another is called the saccadic duration. It reflects the dynamics of the driver's gaze while driving.
* Scanpath: Scanpath represents the path followed by the driver's gaze moving from one fixation to another. A continuous segment of fixation and saccade can be a combination of two fixations and one saccade or multiple fixations and multiple saccades. Scanpath is used to find the driver gaze pattern in different traffic maneuvers such as turning, overtaking, merging, etc.
* Blink rate and pupil size: These are used to quantify the cognitive workload of drivers. The blink rate decreases for more visually demanding tasks <cit.>, while the blink duration shortens with increasing task demand, both mental and visual <cit.>. In a real-world driving study <cit.>, the driver's mental workload was measured based on the blink rate. The findings of this study revealed that as the road curves became sharper, the eye blink rate reduced. When a cognitive task was completed while pupil size was measured simultaneously, the pupil size was greater than when merely conducting the cognitive task <cit.>.
* Entropy Rate: The entropy rate is one of the critical metrics for measuring the driver's attentiveness to the surrounding traffic, inspired by the concept of information entropy <cit.>. In a driving scenario, stationary gaze entropy (SGE) and gaze transition entropy (GTE) are the two commonly used metrics for measuring the driver's attention level. Stationary gaze entropy <cit.> describes the information generated by the driver's gaze dispersed across the gaze zones. Stationary gaze entropy is defined as:
SGE = -∑_l=1^L p_l log_2 p_l
where L is the total number of gaze zones and l is one of the L gaze zones towards which the driver looks while driving. The probability of the driver looking towards gaze zone l is represented by p_l. Since SGE does not reflect how the driver controls and assesses the situation in the surrounding traffic environment, gaze transition entropy <cit.> is used to measure the complexity of the different gaze transition patterns. For instance, where there are multiple stimuli, such as at an intersection, the gaze pattern is more complex than on an ordinary road; hence the GTE is higher at intersections.
GTE = -∑_k=1^L p_k ∑_l=1^L p_k,l log_2 p_k,l ,    k, l = 1, 2, 3, …, L;  k ≠ l
where p_k,l is the probability of a transition from gaze zone k to gaze zone l. A minimal sketch of how SGE and GTE can be computed from a recorded sequence of gaze zones is given below.
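The following is a minimal sketch, assuming gaze-zone labels have already been extracted per video frame, of how SGE and GTE could be computed in Python; the zone names and the example sequence are hypothetical, and p_k is estimated here from transition counts, whereas some studies use the overall fixation distribution instead.

```python
import numpy as np
from collections import Counter

def stationary_gaze_entropy(zones):
    """SGE = -sum_l p_l * log2(p_l), with p_l the proportion of samples
    in which the driver looks at gaze zone l."""
    counts = Counter(zones)
    n = len(zones)
    probs = np.array([c / n for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

def gaze_transition_entropy(zones):
    """GTE = -sum_k p_k * sum_l p_(l|k) * log2(p_(l|k)) over transitions
    between distinct zones (k != l)."""
    transitions = [(a, b) for a, b in zip(zones[:-1], zones[1:]) if a != b]
    if not transitions:
        return 0.0
    trans_counts = Counter(transitions)
    from_counts = Counter(a for a, _ in transitions)
    n_trans = len(transitions)
    gte = 0.0
    for k, n_k in from_counts.items():
        p_k = n_k / n_trans            # probability of source zone k (assumption: estimated from transition counts)
        for (a, b), n_kl in trans_counts.items():
            if a != k:
                continue
            p_l_given_k = n_kl / n_k   # conditional transition probability p_(l|k)
            gte -= p_k * p_l_given_k * np.log2(p_l_given_k)
    return gte

# Hypothetical gaze-zone sequence sampled at a fixed frame rate
zones = ["forward", "forward", "rearview", "forward", "speedometer", "forward", "left_mirror"]
print(stationary_gaze_entropy(zones), gaze_transition_entropy(zones))
```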
§.§.§ Terminology used in remote setup gaze estimation
Gaze estimation in remote setup-based systems typically gives a coarser gaze measurement than the head-mounted approach. Here, the driver's gaze is defined based on the head and eye pose, or sometimes only the head pose. Traditional machine learning or deep learning-based image classification models are trained on predefined labeled images, taking either the head pose (face image) or the head and eye pose (eye images) as input. A detailed discussion of ground-truth label images and gaze classification techniques is given in the coming sections.
* Glances: Glances are a coarser measurement of gaze, while fixation is a finer measurement. A glance measures the gaze over an area, while a fixation is measured at a point. Within a gaze zone of interest, a glance may contain fixations and saccades. Here, gaze zones are pre-defined areas inside the vehicle cabin, such as the speedometer, center stack, left wing mirror, right wing mirror, rearview mirror <cit.>, etc. Glances are measured using glance duration, glance frequency, glance transition, etc.
* Glance Duration: The time spent in each gaze zone in a given time interval. It is measured using the minimum, maximum, and average durations of glances made by drivers in different gaze zones <cit.>.
* Glance Frequency: The number of glances the driver makes in a given gaze zone in a unit time interval is the glance frequency <cit.>. Several studies include glance duration and frequency in examining the driver's attention level while driving <cit.>. Longer glance durations and higher glance frequencies signify higher task demand <cit.>.
* Glance Transition: It denotes the shifting of gaze from one gaze zone to another while assessing the situation in the surrounding traffic environment <cit.>. Glance transitions reveal the flow of attention between different gaze zones while driving <cit.>. A higher correlation exists between two gaze zones when glance transitions between them are more frequent <cit.>.
* Glance Transition Sequence: The sequence followed by glances shifting from one gaze zone to another in a given time duration. The occurrence of unusual sequences of gaze patterns while driving contains richer information than usual ones.
* Glance Transition Length: It is the time duration of shifting glances from one specific gaze zone to another. It depends on the positions of the former and latter gaze zones. For example, the glance transition length is longer when the former and latter zones are forward and the left wing mirror than when they are forward and the right wing mirror (Fig. <ref>a).
* Number of Glance Transitions: It is the sum of glance transitions the driver makes from one gaze zone to another in an observable time interval. A higher number of gaze transitions typically indicates higher driver attentiveness to the surrounding traffic.
As discussed above, the entropy rate can be used in both head-mounted and remote setup-based gaze estimation. In head-mounted gaze estimation, eye movement is primarily used for gaze tracking. The data collected from eye trackers consist of fixation and saccadic information <cit.>. The gaze estimation accuracy of an eye tracker is measured in angular resolution (in degrees), representing the angular difference between the real stimulus positions and the measured gaze positions <cit.>.
Conversely, remote setup gaze estimation is a non-wearable gaze estimation technique. In this method, instead of estimating a pinpoint gaze location, we typically find a comparatively broader region of interest known as the gaze zone. In this system, head movement is the predominant cue for gaze region classification, and hence the term glance is often used for driver gaze behavior analysis <cit.>. Gaze estimation based on eye trackers is more accurate than remote gaze tracking. Another advantage of using an eye tracker over remote gaze trackers is that the stimuli in the scene image can be determined by producing a heat map over the fixation points. However, it sometimes creates discomfort for drivers who are not used to wearing eyeglasses and may influence naturalistic driving gaze behavior.
§ BENCHMARK DATASETS AND COLLECTION METHODOLOGIES
Good quality data is one of the crucial needs for computer vision-based gaze-tracking applications. The quality of driver gaze datasets depends on the precision and configuration of collection equipment, methodology, and information level. Driver data must include possible driving scenarios and conditions and a sufficiently large number of subjects. This section discusses the different types of equipment used for data collection, the methodology adopted, and different open-source benchmark driver gaze datasets available for gaze estimation model development.
§.§ Equipment used for data collection
Driver face data is generally collected using a remote setup, where cameras are installed in front of the driver (in the dashboard or windshield) to record the driver's head movement. Different types of cameras have been used to collect driver face data, including traditional RGB, RGB-Depth, and Infrared cameras.
Traditional RGB cameras are used to capture the driver's face in the visible wavelength spectrum <cit.>. However, the image quality degrades substantially in low-light conditions, thereby making it difficult to understand the driver's gaze at night. To alleviate this problem,
Infrared (IR) <cit.> / Near Infrared (NIR) cameras are also used for driver gaze data collection <cit.> due to their intrinsic advantage of capturing better features at night compared to traditional RGB cameras. They provide a grayscale image using infrared/near-infrared light, which is invisible to the naked human eye. Although NIR cameras are robust enough to capture images in low-light conditions, prolonged use of an NIR camera may hurt the driver's eyes <cit.>. A great challenge in driver gaze estimation is the vulnerability to illumination under poor environmental conditions, where light and shade have negative effects. Standard RGB cameras have the advantage of color information but lack depth information. To overcome these challenges, RGB-D cameras <cit.> are used to obtain RGB images and depth information using point cloud-based sensors. An RGB-D camera has the unique feature of merging pixel-to-pixel depth and RGB information in a single image. The depth information of the camera is provided by a 3D depth sensor, which can be a stereo, time-of-flight, or structured-light sensor, etc.
In some studies <cit.>, eye trackers are also used to capture data such as pupil dilation, iris information, and gaze information in terms of fixations and saccades.
§.§ Collection methodology
Driver gaze data has been collected using a vehicle in a stationary state (parked vehicle) or in a moving state. In the stationary state <cit.>, the dashboard and windshield areas of the vehicle (car) are divided into several zones by sticking stickers or by marking points with a marker. The number of gaze zones selected can be broadly divided into coarser and finer categories, as shown in Fig. <ref>a and Fig. <ref>b. In the coarser gaze zone classification, the windshield and dashboard area is divided into fewer gaze zones than in the finer one. The coarser gaze zone classification primarily includes forward, speedometer, center stack, left wing mirror, right wing mirror, and rearview mirror <cit.>. In the finer gaze zone classification, the windshield, wing mirror, and dashboard areas are subdivided into smaller gaze zones, and therefore the number of gaze zones is larger. Instead of giving specific class names such as forward, rearview mirror, left wing mirror, right wing mirror, etc., some studies divide each broader gaze zone area into smaller gaze zones named with numerical values such as 1, 2, 3, etc. <cit.>, as shown in Fig. <ref>b. In one study <cit.>, instructions are given by a second person to the subjects or drivers to look toward specific gaze zones, while in some other studies, drivers typically look toward the gaze zones of their own choice <cit.>. The frontal face area of the driver is recorded by installing the camera on the dashboard, rearview mirror, or windshield. The captured frames were labeled by one or more human annotators and cross-verified. Speak2label <cit.> is another annotation method in which audio signals (gaze zone names such as 1, 2, 3, etc.) are converted into text.
In the moving state, we cannot instruct the driver to look towards specific gaze zones because of driver safety concerns. In this state, the driver actually drives the vehicle on the road, and the labels are given either by human annotators <cit.> or using unsupervised machine learning techniques <cit.>.
The advantage of stationary-state data collection methods is that they are safe, since data is collected in parked vehicles, and the number of subjects can be large. Obtaining labels via speak2label, or by having drivers look towards specific gaze zones while a second person gives instructions, is relatively straightforward. The drawback of this method is that gaze zone classifiers built using such data do not generalize satisfactorily from a stationary to a moving vehicle. Also, with labels given by the speak2label method, some classes are intermingled, which also affects the classifier results. The moving-state data collection method is more generalizable, since data is collected during actual on-road driving, but giving correct labels is a challenging task.
§.§ Driver gaze datasets
Driver gaze data falls into two categories: one contains the driver's face information, and the other includes eye pose information, such as the position of the iris, pupil, and corneal reflection inside the eye. Typically, the eye data <cit.> are collected using cameras, while iris and pupil data <cit.> are collected using eye trackers.
§.§.§ Driver face benchmark datasets
Several open-source driver face datasets are available, which were collected either inside parked vehicles or in moving vehicles in real-world conditions. These datasets can be downloaded from different open-source repositories or are available on request from the dataset creators. A detailed description of these publicly available driver gaze datasets is given next, and a comprehensive summary is provided in Table <ref>.
RS-DMV <cit.> contains ten grayscale driver face videos (see Fig. <ref>a) recorded indoors (on a simulator) and in on-campus outdoor driving. DriveFace <cit.> contains three classes of the driver's head pose: right, frontal, and left (see Fig. <ref>b). Brain4Cars <cit.> consists of multi-sensor synchronized data containing video of the outside and inside views of the car, vehicle speed, and GPS coordinates. The data was recorded from ten drivers in natural driving settings for up to two months. The DriveAHEAD <cit.> dataset is a wide-range head pose dataset containing depth and IR images. To measure the head position (x, y, z coordinates) and orientation (yaw, pitch, roll), a 3D motion capture sensor was used. The DMD <cit.> dataset is a multimodal dataset containing images from three cameras (face, body, and hands), each captured in three streams (RGB, IR, depth). This dataset also contains head pose, body pose, blink rate, and hand-wheel interactions. DriveMVT <cit.> is a multi-purpose naturalistic driving dataset consisting of frame-by-frame information on driver health, such as heart rate, mental fatigue, head pose (yaw, pitch, roll), drowsiness, etc. Participants have distinctive features, with or without a beard, eyeglasses, and mustache. The data was collected using USB cameras and smartphone cameras for capturing the driver's face, and a Xiaomi Mi Band 3 sensor was used for heart rate recording.
Instructing drivers to look at specific gaze zones during actual driving is unsafe, so datasets are also recorded in parked vehicles. One such dataset is LISA GAZE v2 <cit.>, a large-scale driver face dataset containing the possible driving conditions such as daylight, night light, harsh illumination, and eyeglass reflections (see Fig. <ref>c). DG-UNICAMP <cit.> is one of the most extensive driver face datasets, containing all three image types: RGB, IR (see Fig. <ref>d), and depth (see Fig. <ref>e). The DGW dataset <cit.> covers one of the largest numbers of subjects and uses the speak-to-label technique to label the data. It contains the challenging lighting conditions possible during day and night, such as low light (see Fig. <ref>f), half and full face shadow, sunlight reflection, etc. The drawback of this data is its intermingled gaze classes, which reduce the model's classification accuracy.
§.§.§ Open source driver eye tracking datasets
While the driver face dataset is the most common driver gaze dataset explored in literature, a few studies have also used the driver eye dataset for gaze estimation. Besides gaze estimation tasks, driver eye datasets are also used for detecting drowsiness, pupil dilation, and blink frequency for cognitive workload, etc.
DR(eye)VE <cit.> is a real-world driving dataset consisting of 74 videos under different weather (sunny, cloudy, rainy) and light conditions (day, evening, night). The driver's gaze information and pupil dilation were captured using eye trackers, and the gaze was mapped to the surrounding traffic. The Media Research Lab (MRL) <cit.> dataset was recorded during actual road driving using an NIR camera to reduce the effect of low illumination on the eyes during the evening and at night. An IR illuminator is used to create reflections on the eyes and eyeglasses to produce a wide range of lighting effects. The benchmark driver face and eye datasets have been used to build several state-of-the-art gaze estimation models, which are discussed in the next section.
§ ALGORITHMS AND MODELS FOR DRIVER GAZE CLASSIFICATION
This section will discuss different state-of-the-art gaze estimation models based on traditional machine learning and deep learning. Typically, computer vision algorithms for driver gaze estimation are grouped into traditional machine learning and deep learning-based methods. In traditional machine learning, different features of the face and eyes are extracted using feature extraction algorithms and fed to a classifier (see Fig. <ref>).
On the other hand, deep learning-based models directly learn the mapping function from the face and eye appearance of the driver. These models detect and track faces and eyes based on the image appearance <cit.> characterized by pixel intensity (color intensity) statistics.
§.§ Traditional machine learning based gaze classification
A feature in machine learning is a part or pattern of an object in an image that helps to identify it. In image processing and pattern recognition, feature extraction (also known as feature generation or feature construction) is a way of dimensionality reduction, and its primary goal is to find the most relevant information from the original data and represent that information in a lower dimensionality space. In the case of driver's gaze, classification commonly involves the determination of head pose and eye pose as a first step of feature generation. For head pose estimation, typically used face features are the left and right border, center of the driver's face, mouth and nose corner, nose tip, eye corner and contours, eyebrow, eyelids, etc. For eye pose estimation, commonly used features are pupil center (dark pupil or bright pupil), iris contour, corneal reflection, etc. Traditional machine learning-based gaze classifiers require hand-crafted feature extractors to extract these features from an image. The color, texture, and local features, such as edges, corners, etc., can be given as input to the classifier model. These individually created elements help to understand and differentiate between different classes.
HOG (histogram of oriented gradients) features can describe an object's shape, which is particularly helpful for human detection <cit.>. Local aspects of the image are provided by SIFT (scale-invariant feature transform) and SURF (speeded-up robust features) descriptors <cit.>, which capture the target's details. However, because hand-crafted features cannot fully capture the essence of an image's content, the recognition accuracy is only adequate for simple tasks <cit.>.
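As an illustration of how such a hand-crafted descriptor is obtained in practice, the following sketch computes a HOG feature vector for a single grayscale face crop using scikit-image; the stand-in image, crop size, and HOG parameters are illustrative choices, not taken from any of the cited studies.

```python
from skimage import data, color, transform
from skimage.feature import hog

# Stand-in for a detected driver face crop (any grayscale image works here)
face = color.rgb2gray(data.astronaut())
face = transform.resize(face, (128, 128))

# HOG descriptor: gradient-orientation histograms over local cells,
# block-normalised; the result is a fixed-length feature vector
features = hog(
    face,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(features.shape)   # (8100,) for the settings above
```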
Extracted features are fed to different classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM) <cit.>, Random Forest (RF) <cit.> for gaze classification. A brief discussion of these models is given next.
For driver gaze zone classification, <cit.> first detected the face in the input video frames using the Haar cascade classifier and then, from the detected face, extracted a 14-dimensional feature vector including the size and shape of the left iris, right iris, mouth, and nose. A multi-class linear support vector machine was employed on the extracted features with a one-versus-one scheme, where a binary function is learned between each pair of gaze zone classes. In another study based on head pose and iris cues, <cit.> extracted features from the face: landmark points on the eye corners, nose corners, and nose tip. They first classified gaze zones by measuring the head pose in terms of Euler angles. Since the model trained using only the head pose was confused between some nearby classes, such as forward and the speedometer, or the center console and the rearview mirror, they added eye pose features to improve the accuracy. For gaze zone classification using the owl analogy (predominantly head movement) and the lizard analogy (predominantly eye movement), <cit.> estimated both head and eye pose. For the head pose, they used the 68-point multi-PIE facial landmark markup, which includes parts of the nose, the upper edge of the eyebrows, the outer and inner lips, the jawline, and parts in and around the eyes; for the eye pose, they located the pupil center by extracting points on its iso-contour. The extracted features were fed to an RF classifier because of its higher accuracy than KNN and linear SVM for gaze zone classification. Adding the eye pose to the head pose increased the accuracy of the classification system by 5.4 percent.
* K-nearest neighbor (KNN or k-NN) is one of the simplest non-parametric supervised learning algorithms, developed in 1967 <cit.>. It uses the proximity of data points to classify or predict the grouping of individual data points. Although it can be used in both regression and classification problems, it is commonly used for classification problems, including gaze classification.
The basic principle of KNN is that data points of the same class are closer to each other, while data points of different classes are far apart. Therefore, a new test data point can be classified by determining the classes of the training data points near it. In classification, class labels are allotted based on the majority vote, i.e., the labels of the data points nearest to the new data point. One major disadvantage of KNN is that the classifier must check every training data point to classify each test point, making the model very slow at inference time.
* Support Vector Machine: SVM is a supervised learning algorithm that attempts to determine a maximum-margin classification boundary between the training data points <cit.>. The objective function formulation is a quadratic optimization problem with linear constraints, which can be solved using a QCQP (Quadratically Constrained Quadratic Program) solver. The solver can determine the unique solution if a linear decision boundary exists. The formulation where a linear decision boundary exists is called linear SVM. Even for non-linear decision boundaries, SVM can still work: the low-dimensional input feature space is transformed into a high-dimensional feature vector using a kernel trick. The kernel trick involves the application of kernel functions such as polynomial kernels, radial basis function (RBF) kernels, etc., for feature transformation. This is called non-linear SVM. The optimized objective function in SVM can be defined using only a few specific data points, called support vectors, which lie on the decision boundary. Therefore, SVM's inference time is significantly faster than that of k-NN, where all training data points are required during inference. Further, since the feature vectors are transformed into a high-dimensional vector space using kernels, SVMs are particularly useful for images with high-dimensional inherent feature spaces. In a study, Vasli et al. <cit.> used a linear SVM to classify six gaze zones, while Vicente et al. <cit.> used an SVM to identify whether or not the driver was wearing sunglasses when estimating the driver's gaze.
* Random Forest (RF): RF is an ensemble learning algorithm developed by Leo Breiman in 2001 <cit.> for both classification and regression problems. It is a machine-learning technique using a group of decision trees. Each tree in the ensemble consists of a data sample drawn out from the training data with replacement, called a bootstrap sample. The determination of prediction will vary based on the type of the problem. For the classification task, a majority vote of the decision trees is used to determine the predicted class. RF model generation involves a selection of three hyperparameters: node size, number of decision trees, and number of feature samples.
For gaze zone estimation, the RF classifier is used for zone classification or probability prediction in conjunction with different feature sets.
One study <cit.> used 60 trees for its experiments, while <cit.> generated a set of probabilities for each class from a single feature vector, using an RF classifier of depth 25 with an ensemble of 2000 trees for all experiments.
The driver gaze estimation literature mentions several reasons for preferring RF over KNN or SVM: the classification accuracy of random forest is higher than that of KNN and SVM, as shown in Table <ref>; it gives a prediction probability for each class; and it requires only a small number of tuning parameters. A minimal sketch of a feature-based gaze zone classification pipeline using these classifiers is given below.
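To make the above pipeline concrete, the following is a minimal sketch, assuming per-frame head/eye-pose features have already been extracted, of how the three classifiers discussed above could be compared with scikit-learn; the feature dimensionality, number of gaze zones, and hyperparameters are illustrative, and the random features stand in for real landmark or descriptor vectors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder features: in practice these would be head/eye-pose features
# (landmark coordinates, iris position, HOG descriptors, etc.)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 14))        # e.g. a 14-dimensional feature vector per frame
y = rng.integers(0, 6, size=1000)      # six hypothetical gaze zones

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "linear SVM": SVC(kernel="linear", decision_function_shape="ovo"),
    "random forest": RandomForestClassifier(n_estimators=60, max_depth=25, random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```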
§.§ Deep learning based gaze classification
This section will discuss deep learning-based gaze estimation, focusing on convolutional neural network (CNN) based models for image classification. First, the basic architecture of deep neural networks is described, and then the use of different CNN models for driver gaze estimation is discussed.
§.§.§ Basic architecture of deep neural networks (DNN)
Deep learning (DL) is a specific branch of machine learning in which a number of stacked layers of parameters are used for the learning process <cit.>. These parameters represent the many factors that may impact the network's outcome. Each layer has several perceptrons, also called neurons or hidden units, and carries the parameter weights. These parameters multiply the input of each layer, and the result is an output that shows how each parameter affects the input. Typically, after each layer or after multiple layers, nonlinear functions such as sigmoid, tanh, and the rectified linear unit (ReLU) <cit.> are added to introduce nonlinearity into the network. All these layers combine to make a deep neural network (DNN) <cit.>. The two main challenges in building a DNN are designing the structure of the network (selection of the number of layers, neurons, and activation functions) and adjusting the weights of the parameters to train the neural network. The first challenge can be overcome by trial and error and prior experience. The second challenge can be addressed by using the backpropagation method to train the weights of the parameters in a supervised manner. A detailed discussion is given in <cit.>.
§.§.§ Convolutional neural network (CNN)
A convolutional neural network, introduced by LeCun et al. <cit.>, aims to improve classification accuracy and inference time for computer-aided detection. Researchers <cit.> and <cit.> proposed that, instead of using a fully connected layer, it is possible to use a single kernel with shared weights to sweep across the images and extract local features. This approach improves detection efficacy in terms of both classification accuracy and memory requirements when compared with traditional machine learning-based approaches, which require a hand-crafted feature extractor <cit.>. Usually, a CNN consists of several convolutional layers, each followed by pooling and nonlinearity, with a fully connected plus output layer at the top. The first three layer types, convolution, pooling, and nonlinearity, are responsible for extracting the features, while the fully connected and output layers are used for image classification.
* Convolutional Layer:
In a CNN, convolutional layers consist of trainable parameters, each layer having learnable filters (kernels). Each filter has a width and height and extends through the depth of the input volume. When an input enters the network, each filter is slid across the width and height of the input, producing a 2D activation map or feature map for that filter. Convolutional layers can reduce the model's complexity by optimizing their output. This optimization can be done using three hyperparameters: depth, stride, and zero padding <cit.>.
* Nonlinearity Layer: It is also known as the activation layer. After each convolutional layer, an activation function is applied to determine whether each neuron in the layer is active or not. The nonlinear activation function performs a nonlinear operation on the input to make the network suitable for finding complex patterns in the data. Different types of nonlinear activation functions are sigmoid, ReLU, leaky ReLU, tanh, softmax, etc. Of all of these, ReLU is the most commonly used activation function inside a CNN.
* Pooling Layer: The role of the pooling layer in CNN architecture is to reduce dimensionality, which further reduces the parameters and complexity of the model. It works over each feature map in the input and reduces its dimensionality using the MAX function <cit.>. There are several pooling techniques, out of which max pooling and average pooling are more commonly preferred in CNN <cit.>.
* Fully Connected Layer: In the fully connected layer, each neuron is fully connected to every neuron of the adjacent layers without being connected to any neuron within the same layer. The fully connected layer takes the output (activation map) from the previous layer and converts it into a 1D vector, which is used as its input. This 1D vector then passes through one or more fully connected layers.
* Output Layer: The final fully connected layer is passed through an activation function (e.g., softmax) to give the output, which is continuous in the case of a regression problem and discrete in the case of a classification problem. A minimal sketch combining the above layers into a small gaze zone classifier is given below.
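To make the layer descriptions concrete, the following is a minimal, hedged sketch in PyTorch of how the convolution, ReLU, pooling, fully connected, and output layers could be combined into a small gaze zone classifier; the architecture, input size, and the choice of nine gaze zones are illustrative and not taken from any of the cited studies.

```python
import torch
import torch.nn as nn

class SmallGazeCNN(nn.Module):
    """Illustrative CNN: conv -> ReLU -> pool blocks followed by fully
    connected layers; the output layer has one logit per gaze zone."""
    def __init__(self, num_zones=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 128), nn.ReLU(),
            nn.Linear(128, num_zones),   # logits; softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallGazeCNN(num_zones=9)
dummy_face = torch.randn(1, 3, 224, 224)   # one RGB face crop
print(model(dummy_face).shape)             # torch.Size([1, 9])
```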
§.§.§ CNN models for driver gaze classification
Several state-of-the-art CNN-based models have been used for driver gaze classification, as mentioned in Table <ref>. These models are generally pre-trained on the large-scale ImageNet dataset <cit.> and applied to driver face data collected using RGB or NIR cameras. We first discuss studies that used RGB camera data, followed by NIR camera-based studies. One study <cit.> proposed a CNN model inspired by the AlexNet architecture <cit.>, consisting of three convolutional, three pooling, two fully connected, and one output layer for nine gaze zone classes. The pooling layer between the convolutional layers was taken from the original AlexNet architecture <cit.>, and the prediction probability of each gaze zone was derived from the final output layer using the rectified linear unit. The proposed CNN model achieved 95.0% accuracy for nine gaze zone classes on their dataset. Other studies have focused on leveraging the advantage of large-scale CNN models by developing gaze classification models that can generalize across different drivers, driver positions and perspectives, lighting conditions, etc. In one study, <cit.> collected gaze data from ten drivers in naturalistic driving conditions during dry weather in the daytime to build a generalized driver gaze classification model using a CNN architecture. The driver face data were further cropped into half-face, full-face, and face-with-context variants. Each data type was classified by fine-tuning VGG16 <cit.> and AlexNet pre-trained CNN models. They achieved the highest accuracies of 88.9% and 93.4% on the half-face data type using AlexNet and VGG16, respectively. The higher accuracy on the upper half of the face, compared to the full face and face with context, is explained by the fact that upper-half-face images allow finer features of the eyes to be extracted, such as the position and shape of the iris and eyelid. One of the challenges in gaze classification is reflection or low visibility due to eyeglasses. One study <cit.> attempted to overcome the eyeglass challenge by removing the eyeglasses in the natural driving environment using the Gaze Preserving CycleGAN (GPCycleGAN) model. After removing the eyeglasses, these images were given as input to the SqueezeNet <cit.> CNN model for gaze classification into seven zones, achieving an accuracy of 72.3%.
Gaze classification using RGB cameras, as discussed above, suffers in low light and when sunlight reflects on the driver's face. To solve this problem, <cit.> and <cit.> collected driver face data using NIR cameras. Naqvi et al. <cit.> used three separate VGG models, one each for the driver's face, left eye, and right eye. The authors extracted 4096 features from each input type and calculated the Euclidean distance between them. Finally, the gaze zone was classified based on score-level fusion of the three groups of features. The accuracy of the proposed system was measured using two metrics: the strictly correct estimation rate (SCER) and the loosely correct estimation rate (LCER). SCER refers to the ratio of the number of strictly correct frames to the total number of frames, where a strictly correct frame is one in which the estimated gaze zone is identical to the ground-truth gaze zone. LCER refers to the ratio of the number of loosely correct frames to the total number of frames, where a loosely correct frame is one in which the estimated gaze zone lies within the ground-truth gaze zone or its surrounding zones. The system achieved 92.8% and 99.6% accuracy in SCER and LCER, respectively. On the other hand, <cit.> captured the driver's frontal face and right-side face images with two NIR cameras. The captured images consisted of the driver's face area and some context regions. They used the Dlib facial feature tracker <cit.> to detect 68 facial landmark points on the driver's face and to detect the driver's face and eye images from the originally captured image consisting of the face and context regions. This gives a total of six images: the right eye, left eye, and face regions of interest (ROI) from the front camera and those from the side camera. These six images were then combined to make a single three-channel image. Among the three channels, the front and side images of the face ROIs are arranged top to bottom in the first channel, the front and side images of the left eye ROIs are arranged top to bottom in the second channel, and the front and side images of the right eye ROIs are arranged vertically in the third channel. Finally, three ResNet models, ResNet-50, ResNet-101, and ResNet-152 <cit.>, were fine-tuned on the dataset and achieved accuracies of 92.9%, 79.1%, and 90.2% in SCER, and 99.5%, 97.1%, and 99.2% in LCER, for the respective models.
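The studies above share a common transfer-learning recipe: start from an ImageNet-pretrained backbone, replace the final layer with one output per gaze zone, and fine-tune on driver face crops. The following is a minimal sketch of that recipe in PyTorch/torchvision; the choice of ResNet-50, the seven-zone output, the hyperparameters, and the dummy batch are illustrative assumptions rather than the exact setup of any cited study.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_ZONES = 7                                   # hypothetical gaze-zone count

# Start from ImageNet-pretrained weights and replace the final layer
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_ZONES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of face crops;
# in practice `images` and `labels` come from a driver gaze dataset loader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_ZONES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```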
Deep learning-based CNN models are preferred for gaze estimation because of their greater ability to learn complex features, such as large head and eye movements (the relative positions of the pupil center, iris center, and corneal reflection), their robustness to variations in vehicle type, the driver's head, and lighting conditions (such as low light and sunlight reflection on the face), and their improved classification accuracy. These models can handle large amounts of data, which improves training on complex features and is hence helpful for finer gaze zone classification.
§ APPLICATIONS OF DRIVER GAZE
Driver gaze estimation is important in several respects. One critical aspect is understanding driver gaze behavior at different road sections, which helps to build safer road infrastructure and safety systems for drivers. Driver gaze is also used to build driver distraction detection systems, attentiveness warning systems, and advanced driver assistance systems.
Driver gaze behavior shows the driver's awareness of the surrounding traffic, such as vehicles coming from different traffic streams, road infrastructure such as traffic signs, traffic lights, and road markings, and roadside infrastructure such as buildings, trees, advertising hoardings, billboards, etc. It also shows whether drivers check their surroundings before performing different traffic maneuvers such as lane changing, merging/diverging at on-ramps/off-ramps, and left/right turning at intersections. A detailed discussion of each aspect is provided in the following sections.
§.§ Gaze behavior at intersections
Intersections are known for their complex nature because of different participants' behavior and interactions <cit.>. Interactions at intersections are vehicle-to-vehicle (V2V) <cit.>, vehicle-to-pedestrian (V2P) <cit.>, vehicle-to-infrastructure (V2I) <cit.>, and pedestrian-to-infrastructure (P2I) <cit.>.
In this paper, the first three interaction types are relevant, as they capture the driver's interactions with vehicles, pedestrians, and infrastructure.
This section will discuss how the driver's gaze is influenced when the driver approaches, maneuvers (left turning, right turning, and going straight), and leaves the intersection. Literature on driver gaze behavior at intersections is broadly divided into three categories based on the driver's age or experience (novice, young experienced, old experienced), intersection types (signalized or unsignalized), and surrounding traffic environments (traffic density and familiarity), mentioned in Table <ref>. The discussion in this paper will be based on left-hand driving to maintain uniformity in the paper. A few studies compared driver glance behavior based on their age or experience while approaching or negotiating through the intersection <cit.>.
In real driving scenarios, <cit.> measured driver gaze scanning behavior in terms of the proportion of glances made by the drivers in the right, left, and rearview mirrors and the entropy rate in scanning. They found that middle-aged drivers (35-55 years old) had higher scanning randomness (i.e., a greater entropy rate) than older drivers (65-80 years old). Comparing glance frequency and average glance duration between younger and older drivers, <cit.> found older drivers looked more at road lines and markings to position themselves in surrounding traffic.
Some other simulation-based studies have examined the effect of age and experience on driver gaze. While selecting a safe gap at an unsignalized intersection (USI), <cit.> compared the glance transition patterns of three groups of drivers, novice (mean age 20.57 years; SD=2.47 years), young experienced (mean age 23.79 years; SD=3.04 years), and older experienced (mean age 66.43 years; SD=5.03 years), during a right turn, and divided the intersection approach time into scanning and decision phases. The scanning phase was the first 10 seconds, in which drivers had not found any negotiable gaps, while the decision phase was the next 5 seconds, immediately before initiating the maneuver. The results showed that young, experienced drivers distributed their gaze more evenly across all gaze zones, whereas older and novice drivers had more sweeping transitions, bypassing adjacent areas. In another study, <cit.> examined four hypotheses to determine why older drivers fail to scan effectively at intersections compared to young drivers; the hypotheses included difficulty with head movements, increased distractibility, and failure to recall specific scanning patterns. None of the hypotheses fully explained this failure, but the research does support the alternative theory that some of the issues older drivers experience when looking at junctions are due to specific attentional weaknesses, as a result of which older drivers fail to scan hazardous areas outside the vehicle's intended path of travel. The effect of age and guidance type (lead car and GPS) on gaze scanning while approaching an intersection was examined by <cit.>. Overall, guidance by a lead car slightly reduced gaze scanning when the driver was close to the intersection. Compared to younger drivers, the average scan magnitude was smaller for older drivers. <cit.> compared the static (dwell time) and dynamic (gaze transition) gaze of novice and experienced drivers. The static analysis showed that novice drivers have a higher dwell time in an area of interest (AOI) than experienced drivers, and the gaze transitions of novice drivers occur between AOIs at close distances, while experienced drivers check the surrounding traffic conditions while driving.
The general observation from these studies was that older drivers, compared to younger drivers, scan the right and left areas of interest less and focus more straight ahead or in the intended direction of travel. This behavior may explain the fact that older drivers are more involved in angle crashes at intersections and in "failure to yield" and "seen but not seen" crashes with other vehicles <cit.>.
In addition to studies focusing on the scanning behavior of younger and older drivers, researchers have also focused on driver gaze behavior at signalized intersections (SI) and unsignalized intersections (USI). One study <cit.> compared driver glance allocation frequencies, durations, and transition probabilities at SI and USI to examine the influence of intersection type on driver scanning measures. Visual scanning performance was found to be similar between SI and USI for through and left-turning movements, while when turning right at SI, drivers gave more attention to the forward and right areas than at USI.
When approaching an SI, two studies <cit.> examined the impact of the priority rule (yield, priority, and stop), expected traffic density (no traffic, light, and heavy), and familiarity on the allocation of visual attention. Three traffic densities were simulated: no traffic at all, light traffic (vehicles spaced on average 250 m apart, a 10 s time gap), and heavy traffic (vehicles spaced on average 100 m apart, a 4 s time gap). The dwell time in the intersecting-road AOI was higher in the yield condition than in the priority condition and smallest in the stop-sign condition. They also found that horizontal gaze eccentricity was higher with lower traffic density than with higher traffic density, where horizontal eccentricity is defined as the absolute value of the horizontal component of the gaze (and head) direction angle. Familiarity was introduced by passing the driver along the same route a first and a second time, and higher horizontal eccentricities were found in the first passage compared to the second. This indicates that the acquisition of visual information related to the decision-making task starts later when the driver is already slightly familiar with the environment.
§.§ Overtaking/lane-changing gaze behavior
Overtaking or lane changing occurs in the traffic stream when not all vehicles move at the design speed <cit.>. Here we consider lane-changing gaze behavior during overtaking: a leading vehicle moving at low speed hinders a faster following vehicle, provoking the following vehicle to overtake, and a lane change occurs. A sequence of distinct glance patterns can be seen before the lane change starts, as the driver assesses possible threats from the surrounding traffic. Broadly, two kinds of lane-changing studies have been conducted: understanding driver gaze behavior during a lane change, and predicting lane changes based on driver gaze.
In a simulation study, <cit.> found that drivers began to exhibit notably different gaze behavior about three seconds before the lane change (independent of the vehicle speed), with an increase in the frequency of glances at the rearview mirror at the expense of glances in the direction of their current lane. As soon as the driver decides to change lanes, their eyes typically move from salient guiding features of the present lane (such as the tangent point or the lead car) to salient guiding features of the destination lane. Additionally, drivers glance more at surrounding vehicles during lane changes to support situation awareness and decision-making. Another simulation-based study by <cit.> tested the influence of age on glances to the blind spot and mirrors when changing lanes. Compared to younger drivers, older drivers showed a reduced glance frequency when checking the left side mirror, rearview mirror, and blind spot. This behavior may explain the observation made by <cit.> that older drivers were more likely to be involved in collisions when changing lanes.
In addition to studies of lane-changing gaze behavior, researchers have also focused on predicting lane changes using gaze information and other sensor data. The literature on lane change prediction can be categorized into three groups: (a) vehicle-based data <cit.>, (b) driver state-based data (gaze-based prediction) <cit.>, and (c) driving environment-based data <cit.>. Lane change prediction based on vehicle information, such as speed and lateral and longitudinal acceleration, does not consider any gaze-related information, so this section discusses only driver state-based and driving environment-based lane change prediction. Research on lane change prediction indicates that the driver's glances give an early indication of intent <cit.> before a lane change. The lane change sequence on the highway has been defined using three parameters: a reference point, a start time, and an end time. The lane marking is taken as the reference point, the start time is the moment just before the vehicle touches the lane marking, and the end time is the moment when the tire has just crossed the reference lane marking. Since three seconds is usually taken as the critical decision-making time for a lane change <cit.>, several studies considered a 3-5 second window <cit.> before the start time to analyze driver gaze behavior.
In real time, lane change prediction using driver gaze considers eye movements, head movements, or both. In a simulation-based study of eye-movement-based prediction, <cit.> proposed a 4DDTW (four-dimensional dynamic time warping) KNN-based lane-changing prediction approach to improve prediction accuracy. Driver gaze was captured using an eye tracker whose scene camera records the scene image; the angles of vision of the eyes are mapped into the scene image to obtain the x and y coordinates of the left and right eye, which are then used to estimate the gaze. A sliding space-time algorithm was used to extract the scanpaths of the left and right eyes from the time series data, 4DDTW was used to measure similarity between scanpaths, and KNN was then applied to classify each sample as a left lane change, right lane change, or lane keeping. The KNN classifier was compared with an existing LSTM-based approach, achieving accuracies of 86.5% and 86.3%, respectively. The study <cit.> used only head movements (head movement classes such as left, front, right, etc.) to improve relevance vector machine-based classifiers for predicting lane changes; since prediction based only on eye gaze fails in adverse lighting conditions, integrating head movements with eye movements overcomes these challenges to some extent. In another study, <cit.> developed a machine-vision-based lane change behavior predictor that classified dynamic lane change behavior into three classes: left lane change, right lane change, and lane keeping. A 10-second time window was analyzed, comprising the 5 seconds just before touching the lane marking and the 5 seconds after crossing it. Each time window contained multiple scanpaths per class, from which features such as minimum, maximum, and average glance duration, glance frequency, and gaze accumulation in each gaze zone were extracted. The lane change gaze behavior for the corresponding maneuvers was modeled using a multivariate normal distribution (MVN), obtaining an accuracy of around 75.0% for right and left lane change prediction.
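As a rough illustration of the MVN-based classification idea described above, the following sketch fits one multivariate normal per maneuver class over window-level glance features and assigns a new window to the most likely class; the feature choice, numerical values, and class labels are assumptions for illustration, not the cited study's implementation.

```python
# MVN-based maneuver classification over glance features extracted from a time window.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_models(features_by_class):
    """features_by_class: dict class_name -> (n_samples, n_features) array."""
    models = {}
    for name, X in features_by_class.items():
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
        models[name] = multivariate_normal(mean=mean, cov=cov)
    return models

def classify(models, x):
    # Assign the window to the class with the highest log-likelihood.
    return max(models, key=lambda name: models[name].logpdf(x))

rng = np.random.default_rng(0)
# Hypothetical training features: [mean glance duration (s), glance frequency, mirror gaze share]
train = {
    "left_change":  rng.normal([0.8, 6.0, 0.35], 0.1, size=(50, 3)),
    "right_change": rng.normal([0.7, 5.0, 0.30], 0.1, size=(50, 3)),
    "lane_keeping": rng.normal([1.2, 2.0, 0.05], 0.1, size=(50, 3)),
}
models = fit_class_models(train)
print(classify(models, np.array([0.78, 5.8, 0.33])))  # expected: left_change
```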
Before and during lane changes, the most frequently viewed gaze zones were the rearview mirror and the left- and right-wing mirrors; fixation statistics and scanpaths were further used in the lane change prediction models.
§.§ Driver gaze behavior on curves, on-ramps, and off-ramps
This section covers driver gaze behavior at different road locations, such as curves, on-ramps, and off-ramps. In one study, <cit.> examined the effect of driving experience on look-ahead fixations when approaching and negotiating curves on a rural road and found that, over the curve, experienced drivers spent more time on look-ahead fixations than on the road immediately ahead. In order to obtain accurate foveal information from the rest of the curve, drivers need to make eccentric fixations towards the road further up, disengaging the gaze from the visual guidance used for online control of steering; these fixations are called look-ahead fixations. Look-ahead fixations occur more in the approach phase than in the entry phase because steering through the curve imposes a higher visual demand <cit.>. One measure of driver distraction is eyes-off-road time, which indicates that the driver is looking somewhere other than the road; eyes-off-road glances delay the driver's reaction time and increase crash risk <cit.>. In another study, <cit.> investigated the effect of ground-mounted diagrammatic guide signs on drivers' eye scanning before freeway entrance ramps. A diagrammatic guide sign indicates the destination using large map-like figures of the road layout. The findings revealed that ground-mounted signs on multi-lane arterials do not excessively distract drivers or detrimentally influence eye-scanning behavior. In situations where placing overhead span-type sign bridges is not economically feasible but such guidance is highly desirable, these diagrammatic signs give unfamiliar drivers more navigational information in advance (by identifying the correct lane to access the desired entrance ramp).
In addition to real-world experiments, simulation studies have also examined driver gaze behavior on curves, on-ramps, and off-ramps. In one study, <cit.> examined how the presence of a paved shoulder influences driver gaze on a right-bend curve of a two-lane rural highway. The results showed that the driver's gaze shifted towards the inside of the curve, following the steering trajectory, irrespective of the shoulder width. They suggested that delineators on the curve are more useful for bringing the driver's gaze and the vehicle back to the lane.
Further, <cit.> analyzed the effect of driver age, specific service sign content and format, and familiarity with the road sign on driver performance and attention when exiting freeways, i.e., at on-ramps and off-ramps. Drivers identified six-panel signs more accurately than nine-panel signs and were more accurate when familiar with the sign. A six-panel or nine-panel sign board contains six or nine panels, each carrying a different information sign or symbol, while familiarity with a road sign means the driver has previously seen the sign or knows its meaning. Older drivers allocated more attention to the driving task than middle-aged and younger drivers.
§.§ Influence of roadside advertising structure on driver gaze behavior
A recurring finding in the literature is a link between crashes <cit.> and the presence of roadside advertising. Different studies have investigated the effect of advertisement characteristics on driver behavior, including the nature of the advertisement, its placement and content, road and traffic characteristics, type of area, and driver characteristics <cit.>.
Compared to traditional static road signs, electronic roadside advertising has usually been found to have a stronger influence on the driver's attention, which poses a higher safety risk to the general public <cit.>. The brightness and illumination levels of roadside advertising billboards also affect driver visual behavior, as driver gaze is attracted by luminance changes in the visual field <cit.>. In a simulator study, <cit.> examined the impact of LED advertising signs on driver gaze behavior and found that the average glance duration was longer for LED-based signs than for other objects (e.g., non-LED signs, hoardings, rearview mirrors, the speedometer).
In comparison to control sections of the road with no billboards, drivers on sections with billboards drove at lower mean speeds, with greater speed variability, greater lane position variability, more time spent at high-risk headways, and more visual fixations. The least detrimental effects on driving outcomes were caused by billboards with simple (versus complex) content presented for a longer display time (60 seconds versus 40 or 20 seconds); regardless of display time, billboards with complex content had similarly adverse effects on driving.
In another study, <cit.> observed that drivers confronting potential risks glanced more at street-level advertisements than at those raised three meters up on street lights. When vehicle speed is low, drivers have been found to pay more attention to electronic billboards and other advertising at junctions than at other road locations <cit.>. Further, glance frequency has been found to be higher in retail areas <cit.>, while longer glance durations have been found in rural areas <cit.>.
§.§ Building advanced driver assistance systems
An advanced driver assistance system (ADAS) is an automated system that assists the driver when the driver fails to notice or misses events in nearby traffic <cit.>. Since driver gaze is an important input for measuring a driver's attentiveness to the surrounding traffic environment, it plays an important role in building advanced driver assistance systems.
In a real driving scenario, <cit.> built a driver assistance system using the driver's gaze direction and speed limit signs. When a traffic sign is recognized, the system checks two things: (a) whether the driver has looked at the sign and (b) whether the speed and acceleration of the vehicle comply with the sign. If the vehicle state is not compliant and the driver has not seen the sign, a high-priority warning is issued to the driver. In another study, <cit.> built a driver inattention detection system using eye gaze and road events. Several road-event inattention detection modules were built, one of which was a road-center inattention detection module: a warning is given whenever the driver's gaze has been diverted from the forward direction for longer than a specified period, where that period is a function of the inverse of the vehicle speed. Driver distraction also contributes to many road crashes; <cit.> developed the AttenD algorithm as a distraction detection and warning system that can mitigate some of these crashes. One assumption of this study is that driver attention is directed toward the same object as the gaze, which may or may not hold in actual driving.
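The following sketch combines the two warning rules described above — a compliance check against a recognized speed-limit sign and a road-center inattention check whose allowed eyes-off-road time scales with the inverse of vehicle speed. All thresholds and constants are assumptions for illustration and are not taken from the cited systems.

```python
# Illustrative ADAS warning logic based on driver gaze and vehicle state.

K_OFF_ROAD = 60.0  # assumed constant: allowed eyes-off-road time (s) at 1 km/h

def speed_sign_warning(speed_kmh, speed_limit_kmh, driver_saw_sign):
    """Warn when the vehicle exceeds a recognized speed limit; escalate if the sign was not seen."""
    if speed_kmh <= speed_limit_kmh:
        return None
    return "HIGH priority warning" if not driver_saw_sign else "LOW priority warning"

def road_center_warning(off_road_duration_s, speed_kmh):
    """Warn when gaze leaves the road centre longer than an inverse-of-speed threshold."""
    allowed = K_OFF_ROAD / max(speed_kmh, 1.0)
    return off_road_duration_s > allowed

print(speed_sign_warning(72, speed_limit_kmh=50, driver_saw_sign=False))  # HIGH priority warning
print(road_center_warning(off_road_duration_s=1.5, speed_kmh=60))         # True -> issue warning
```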
§ GENERAL DISCUSSION AND FUTURE SCOPE
Over the past decade, significant progress has been made in driver gaze estimation through gaze zone classification and tracking. Driver gaze data collection methods and equipment have also evolved over the years. Gaze estimation has moved from coarse to finer gaze zone classification, and gaze classification models have progressed from traditional machine learning techniques to deep learning algorithms. The detected driver gaze is used to analyze the driver's awareness of the surroundings and attentiveness, to build safety systems, and to derive safer driving guidelines. However, research gaps remain in the current studies; addressing them can further advance this domain and ultimately help build a safer transportation system.
§.§ Benchmark datasets and collection methodologies
As discussed in Section 3, the benchmark open-source gaze datasets listed in Table <ref> have typically been collected in parked or moving vehicles. In most parked-vehicle collections, the participants are college or university students <cit.>, who may have limited knowledge of how head and eye movements occur during real driving. Datasets collected in real driving scenarios have been limited in the number of participants (drivers) <cit.>, and the small participant pools are also not diverse in terms of experience, age, lighting conditions, and traffic conditions. Therefore, large-scale open-source gaze datasets encompassing different environmental and lighting conditions are needed, which would help in developing robust and generalized deep-learning-based gaze estimation systems.
Further, there are inherent problems in classifying gaze zones inside a parked vehicle compared to real-world driving. Obtaining ground truth labels while the driver looks at predefined zones from inside a parked vehicle is easy and safe for the driver. However, this method may affect the drivers' psychological behavior because a second person gives instructions <cit.> to the driver to look towards particular gaze zones, or the driver has been told in advance to look at predefined gaze zones.
The study <cit.> revealed that a gaze zone classifier trained on parked-vehicle data could not successfully generalize to a moving vehicle. During data collection in moving vehicles, drivers gaze naturally at the dashboard and windshield area, but obtaining ground truth gaze zone labels is difficult in this situation: drivers cannot be instructed to look at particular zones because of the accident risk, so unsupervised techniques can be used to estimate the gaze zone class <cit.>.
Apart from data collection strategies, there are also limitations in the ground truth generation methodology for gaze datasets. Datasets collected in parked vehicles have been annotated by two or more human annotators and cross-verified. However, these datasets typically do not report how large the differences or errors between annotators were, or how such disagreements were handled. Moreover, human annotations cannot be assumed to be 100% accurate, which can also affect gaze classification algorithms.
Ground truth generation by speak2label <cit.> is an automatic approach, but it still carries the limitations of parked-vehicle data collection described above.
Ground truth labels of existing driver gaze datasets are given according to the zone in which the driver is looking. These labels are predefined regions such as the rearview mirror, forward, left-wing mirror, right-wing mirror, center stack, speedometer, etc. Except for DGaze, these datasets do not clearly describe the surrounding conditions during data collection. During the recording of the DGaze dataset, moving traffic was shown on a screen, and the gaze zones were based on the vehicle entities <cit.>. Therefore, future gaze datasets could be labeled not only by gaze zone but also by the vehicle entities observed, which would help in understanding driver gaze behavior better.
Finally, the datasets listed in Table <ref> captured the driver's face using a single camera. In the case of large head movements, one side of the face occludes the other, which creates problems in detecting the iris or pupil position within the eye; estimating the driver's actual point of gaze is then very challenging. These limitations can be overcome by using multiple cameras capturing the driver's face from different angles or positions. From the above discussion, we conclude that there is a need for more extensive gaze data collection based on real-world driving, incorporating a large number of drivers (both male and female), with and without eyeglasses, and covering all possible lighting and weather conditions. Ground truth based on the point of gaze (POG) can be created with the help of an eye tracker and compared with human annotation as well.
§.§ Algorithms and models for driver gaze classification
Driver gaze zone classification using traditional machine learning and deep learning methods each has its own limitations. In traditional machine-learning-based gaze classification, the decision made by the classifier depends entirely on the individual sub-models (face and pupil detection, landmark estimation, feature extraction), which affects classification accuracy. Hand-crafted features designed from facial landmarks around the eyes are not fully robust to variations across drivers, cars, seat positions, etc.
On the other hand, gaze zone classification using a pre-trained CNN model does not require hand-crafted features because of its inherent feature extraction capability. However, these models require large-scale datasets and more computational power for training compared to traditional machine-learning classifiers. The availability of large-scale open-source datasets with varied illumination conditions and more subjects would help in building robust classifiers.
Also, gaze estimation models could focus on estimating the exact point of gaze instead of only gaze zones such as the windscreen and the right- and left-wing mirrors. This would help in understanding driver attention on different traffic entities, such as vehicles and pedestrians, and on other surrounding objects, such as billboards and traffic signs, and would also help in understanding driver behavior during complex maneuvers at intersections, on-ramps, off-ramps, etc.
§.§ Applications of driver gaze
A significant number of driver gaze behavior studies have focused on maneuvering through intersections, lane changing during overtaking, etc. However, most of these studies either compare driver groups by age (younger versus older) or by experience (novice versus experienced). A few studies, based on real driving <cit.> and simulators <cit.>, evaluated the impact of traffic density and route familiarity on driver gaze at intersections. Gaze behavior studies in mixed traffic conditions and unstructured driving environments are also minimal; only one real-driving study compared drivers' gaze patterns at SI and USI <cit.>. Hence, more research is needed on driver gaze behavior in mixed-traffic environments, including the effects of pedestrians, traffic density, intersection type, etc., with a focus on unstructured driving environments as well.
Eye-tracker-based studies describe gaze behavior using terminology such as fixations, dwell time, and saccades, whereas studies based on remote setups use terminology such as glance duration, glance frequency, and gaze transitions. They explain gaze behavior through statistics such as minimum and maximum glance duration, glance frequency, number of fixations, dwell time in each gaze zone, glance or gaze transitions, and saccades between gaze zones. However, these studies do not consider the effect of the shapes and sizes of different traffic entities on driver gaze. For example, is the influence of small vehicles (bicycles, motorcycles) on driver gaze the same as or different from that of large vehicles such as trucks and buses? A more detailed analysis of such gaze behavior would help in determining driver attentiveness and in building a safer driving environment.
Gaze research can also be combined with other physiological sensors, such as heart-rate monitors (e.g., Fitbit) <cit.> and EEG (electroencephalography) <cit.>, to understand driver behavior more comprehensively. Since connected and autonomous vehicles are the emerging future of intelligent transportation systems, much research is needed on how the driver's gaze is influenced by surrounding traffic that includes autonomous or connected vehicles. There is very limited research on the use of driver gaze in building driver assistance systems, so there is significant scope for researchers and industry experts to build robust and generalized driver assistance systems that help drivers perform safe maneuvers at intersections, during overtaking, etc. Understanding driver gaze behavior can also help in switching steering control from manual to semi-automatic when the driver is not fully attentive to the surrounding traffic environment.
§ CONCLUSION
Driver gaze plays an important role in gaze-based driving applications, such as driver attentiveness detection, visual distraction detection, and automatic takeover of steering control.
This study summarizes the terminology used in driver gaze estimation and behavior understanding based on head-mounted and remote setups, and compiles the existing benchmark gaze estimation datasets. We also reviewed gaze estimation algorithms and their applications to gaze behavior understanding, such as negotiating intersections, lane changing during overtaking, driving on curves, on-ramps and off-ramps, and the influence of roadside advertising infrastructure. Compared to traditional machine learning, deep-learning-based approaches are more robust in detecting driver gaze under different lighting conditions and large head movements. Finally, we provide suggestions and future directions for researchers and developers, which can help them build more robust and generalized driver gaze estimation systems and gaze-based driving assistance systems.
§ ACKNOWLEDGEMENT
Our research results are based upon work supported by the Initiation Grant scheme of Indian Institute of Technology Kanpur (IITK/CE/2019378). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the IITK.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Pavan Kumar Sharma: Conceptualization, Formal analysis, Methodology, Investigation, Writing.
Pranamesh Chakraborty: Conceptualization, Methodology, Investigation, Supervision, Writing.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
http://arxiv.org/abs/2307.01110v2
|
20230703153547
|
Asymptotic tails of massive gravitons in light of pulsar timing array observations
|
[
"R. A. Konoplya",
"A. Zhidenko"
] |
gr-qc
|
[
"gr-qc",
"astro-ph.HE",
"hep-th"
] |
[email protected]
Institute of Physics and Research Centre of Theoretical Physics and Astrophysics, Faculty of Philosophy and Science, Silesian University in Opava, CZ-746 01 Opava, Czech Republic
[email protected]
Centro de Matemática, Computação e Cognição (CMCC), Universidade Federal do ABC (UFABC),
Rua Abolição, CEP: 09210-180, Santo André, SP, Brazil
04.30.-w,04.50.Kd,04.70.-s
We demonstrate that the asymptotic oscillatory tails of massive gravitons, present in both massive theories of gravity and effectively in extra-dimensional scenarios, could potentially contribute to gravitational waves with very long wavelengths. However, their impact on recent pulsar timing array observations is expected to be relatively small, predominantly consisting of radiation emitted by black holes in our region of the Milky Way.
Asymptotic tails of massive gravitons in light of pulsar timing array observations
A. Zhidenko
August 1, 2023
==================================================================================
On June 28, 2023, the results of 15 years of observations of millisecond pulsars in our neighbourhood of the Milky Way galaxy were released <cit.>.
Gravitational waves of enormous wavelength (of the order of light years) were observed via distortions of the electromagnetic signals from the pulsars.
Although the main candidates for such gigantic gravitational waves are binary (supermassive) black hole systems in galactic centers, a number of alternative or additional sources are considered, such as cosmic inflation, cosmic strings, ultra-light dark matter, or other kinds of new physical processes.
Binary systems of supermassive black holes have not been directly observed so far, and a growing number of works is devoted to further understanding the possible sources of the long-wavelength gravitational waves <cit.>. However, to the best of our knowledge, one potential source was omitted: an effective mass term of the dynamical field under consideration, be it the gravitational field itself or a matter (e.g. cosmological) field coupled to gravity in some way.
The graviton can acquire an effective mass term owing to extra-dimensional brane-world scenarios <cit.>, so that not only the usual massive theories of gravity are under consideration.
Massive fields in the background of a black hole decay after the ringdown phase according to the universal law
ψ ∝ t^-5/6 sin(μ c^2 t/ħ).
This behavior was observed at late times for the scalar field <cit.>, Proca field <cit.>, Dirac field <cit.>, and massive graviton in Randall-Sundrum-type models <cit.> (see Fig. <ref>). Therefore, we could suppose that this law does not depend on the particular model allowing a graviton to gain an (effective) mass.
It is well known that massive fields are short-ranged, with the characteristic range
R ∼ ħ/(2μ c) = λ_c/2,
where λ_c is the reduced Compton length. However, since gravitons are expected to be ultralight, one could suppose that the range of their massive-component interaction could be of the order of light years or more, and could contribute to the Gravitational-Wave Background observed by NANOGrav <cit.> and other pulsar timing arrays. This way the extremely large wavelength would be provided simply by a small graviton mass and would not require specific constituents, such as supermassive binary black holes or cosmic strings.
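As a quick sanity check of this estimate (a sketch, not part of the original analysis), one can convert a reduced Compton length of the order of light years into a graviton mass:

```python
# Arithmetic check of R ~ ħ/(2 μ c) = λ_c/2: Compton length in light years -> mass in eV/c^2.
HBAR_EV_S = 6.582e-16        # reduced Planck constant in eV*s
C_M_S = 2.998e8              # speed of light in m/s
LIGHT_YEAR_M = 9.461e15      # one light year in metres

def graviton_mass_ev(compton_length_ly):
    """Mass mu (in eV/c^2) whose reduced Compton length ħ/(mu c) equals the given length."""
    lam = compton_length_ly * LIGHT_YEAR_M
    return HBAR_EV_S * C_M_S / lam

print(f"{graviton_mass_ev(1):.1e} eV/c^2")    # ~ 2e-23, matching the estimate quoted below
print(f"{graviton_mass_ev(10):.1e} eV/c^2")   # ~ 2e-24, the lower end of the quoted mass range
```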
The massless gravitational-wave signal observed by the LIGO/Virgo collaborations <cit.>, whose amplitude decreases by many orders of magnitude during the ringdown stage (see Fig. <ref>), is negligibly small compared to the massive tails. Notice that the time scale of Fig. <ref>, set by the supermassive black hole mass M, varies from minutes to several hours, while the time scale of Fig. <ref> is of the order of years.
Since the asymptotic decay law (<ref>) is very slow, the Bayesian analysis is similar to the one for the ultralight dark matter induced signal <cit.>:
the timing residual for a pulsar, h_I, is written in the form
h_I = ∑_i A_E^i sin(ω t + γ_E^i) + ∑_i A_P^i sin(ω t + γ_P^i),
where ω = μ c^2/ħ and the signal amplitudes, A_E^i and A_P^i, and the phases, γ_E^i and γ_P^i (the "E" and "P" subscripts denote the Earth and Pulsar term contributions, respectively), must be considered independent, because there is no correlation between independent sources of the gravitational-wave tails.
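As an illustration (a sketch with arbitrarily drawn amplitudes and phases, not the actual Bayesian pipeline), the template above can be generated for a single pulsar as follows:

```python
# Timing-residual template: sum of independent Earth-term and Pulsar-term sinusoids
# at the angular frequency omega = mu c^2 / hbar set by the graviton mass.
import numpy as np

HBAR_EV_S = 6.582e-16        # reduced Planck constant in eV*s
SECONDS_PER_YEAR = 3.156e7

def residual_template(mu_ev, t_years, n_sources=5, seed=1):
    rng = np.random.default_rng(seed)
    omega = mu_ev / HBAR_EV_S                      # rad/s (mu c^2 expressed in eV)
    t = t_years * SECONDS_PER_YEAR
    h = np.zeros_like(t)
    for _ in range(n_sources):
        a_e, a_p = rng.uniform(0, 1e-7, size=2)    # independent amplitudes (arbitrary units)
        g_e, g_p = rng.uniform(0, 2 * np.pi, size=2)
        h += a_e * np.sin(omega * t + g_e) + a_p * np.sin(omega * t + g_p)
    return h

t = np.linspace(0, 15, 500)                         # a 15-year observation span
h = residual_template(mu_ev=2e-23, t_years=t)       # period ~ 6.6 yr for this mass
print(h[:3])
```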
The energy source we are interested in here is the gravitational ringdowns of all the nearby black holes, so it is not proportional to the dark matter density, as happens when considering ultralight dark matter as a source. Therefore, the amplitude is no longer limited by the local abundance of dark matter, ρ_ϕ ≈ 0.4 GeV/cm^3, and does not exclude the signal corresponding to a graviton mass of (see Fig. 13 of <cit.>)
μ∼ 2· 10^-23 eV/c^2 (λ_c∼1 ly).
It was noted that scalar matter tails, acting as a source for gravitational perturbations, only slightly wiggle the gravitational tails at late times and do not lead to oscillating tails <cit.>. Therefore, if the Gravitational-Wave Background appears due to the ringdown tails, it implies the existence of massive gravitons. Notice that some alternative theories of gravity allow for gravitational waves with both massless and massive polarizations (e.g., due to non-local curvature corrections <cit.>). If gravitational perturbations possess massive polarizations along with the massless ones, the massive degrees of freedom will contribute to the large-wavelength gravitational oscillations.
It is necessary to compare this estimate with the graviton-mass constraints obtained from gravitational-wave signals (see <cit.> for a review).
The LIGO detection of GW150914 provides the upper limit for the graviton mass <cit.>,
μ<1.2·10^-22 eV/c^2,
and the statistical analysis of 24 events shifted the upper limit to <cit.>,
μ<1.76·10^-23 eV/c^2,
with 90% credibility, which is of the same order of magnitude as the estimation (<ref>).
It is worth mentioning that other estimates, based on the temperature and gas density profiles of galaxies, such as Chandra cluster data in X-rays <cit.>, provide much stronger bounds on the graviton mass, which would exclude graviton masses of order (<ref>). However, the starting points of these estimates are dark matter distributions inferred from a standard cosmological model, which assumes that the graviton mass is zero and thus has no effect on the cosmological evolution. They are also based on the Yukawa gravity model, which does not cover Lorentz-invariant massive gravity theories at large distances (see <cit.> for a discussion).
Summarizing, the pulsar timing array observations of the gravitational wave background with the characteristic length of
1 ly ≲λ_c ≲ 10 ly,
allow for contribution of massive particles with masses in the range
2· 10^-24 eV/c^2 ≲μ≲ 2· 10^-23 eV/c^2.
This means that binary (non-supermassive) black holes, mostly in our corner of the Milky Way, could support the long waves observed in <cit.> via the mechanism of asymptotic oscillatory tails of massive gravitons described here.
If gravitational waves of even longer wavelength are observed in the future, they could be ascribed to plausible ultra-light massive gravitons with mass smaller than μ ≈ 2·10^-24 eV/c^2.
A. Z. was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
80
NANOGrav:2023gor
G. Agazie et al. [NANOGrav],
Astrophys. J. Lett. 951, no.1, L8 (2023)
doi:10.3847/2041-8213/acdac6
[arXiv:2306.16213 [astro-ph.HE]].
NANOGrav:2023hvm
A. Afzal et al. [NANOGrav],
Astrophys. J. Lett. 951, no.1, L11 (2023)
doi:10.3847/2041-8213/acdc91
[arXiv:2306.16219 [astro-ph.HE]].
NANOGrav:2023icp
A. D. Johnson et al. [NANOGrav],
[arXiv:2306.16223 [astro-ph.HE]].
NANOGrav:2023pdq
G. Agazie et al. [NANOGrav],
[arXiv:2306.16222 [astro-ph.HE]].
NANOGrav:2023hfp
G. Agazie et al. [NANOGrav],
[arXiv:2306.16220 [astro-ph.HE]].
Antoniadis:2023lym
J. Antoniadis, S. Babak, A. S. B. Nielsen, C. G. Bassa, A. Berthereau, M. Bonetti, E. Bortolas, P. R. Brook, M. Burgay and R. N. Caballero, et al.
doi:10.1051/0004-6361/202346841
[arXiv:2306.16224 [astro-ph.HE]].
Antoniadis:2023xlr
J. Antoniadis, P. Arumugam, S. Arumugam, P. Auclair, S. Babak, M. Bagchi, A. S. B. Nielsen, E. Barausse, C. G. Bassa and A. Bathula, et al.
[arXiv:2306.16227 [astro-ph.CO]].
Smarra:2023ljf
C. Smarra, B. Goncharov, E. Barausse, J. Antoniadis, S. Babak, A. S. B. Nielsen, C. G. Bassa, A. Berthereau, M. Bonetti and E. Bortolas, et al.
[arXiv:2306.16228 [astro-ph.HE]].
Zic:2023gta
A. Zic, D. J. Reardon, A. Kapur, G. Hobbs, R. Mandow, M. Curyło, R. M. Shannon, J. Askew, M. Bailes and N. D. R. Bhat, et al.
[arXiv:2306.16230 [astro-ph.HE]].
Xu:2023wog
H. Xu, S. Chen, Y. Guo, J. Jiang, B. Wang, J. Xu, Z. Xue, R. N. Caballero, J. Yuan and Y. Xu, et al.
Res. Astron. Astrophys. 23 (2023) no.7, 075024
doi:10.1088/1674-4527/acdfa5
[arXiv:2306.16216 [astro-ph.HE]].
Franciolini:2023wjm
G. Franciolini, D. Racco and F. Rompineve,
[arXiv:2306.17136 [astro-ph.CO]].
Shen:2023pan
Z. Q. Shen, G. W. Yuan, Y. Y. Wang and Y. Z. Wang,
[arXiv:2306.17143 [astro-ph.HE]].
Lambiase:2023pxd
G. Lambiase, L. Mastrototaro and L. Visinelli,
[arXiv:2306.16977 [astro-ph.HE]].
Guo:2023hyp
S. Y. Guo, M. Khlopov, X. Liu, L. Wu, Y. Wu and B. Zhu,
[arXiv:2306.17022 [hep-ph]].
Ellis:2023tsl
J. Ellis, M. Lewicki, C. Lin and V. Vaskonen,
[arXiv:2306.17147 [astro-ph.CO]].
Franciolini:2023pbf
G. Franciolini, A. Iovino, Junior., V. Vaskonen and H. Veermae,
[arXiv:2306.17149 [astro-ph.CO]].
Ellis:2023dgf
J. Ellis, M. Fairbairn, G. Hütsi, J. Raidal, J. Urrutia, V. Vaskonen and H. Veermäe,
[arXiv:2306.17021 [astro-ph.CO]].
Ghoshal:2023fhh
A. Ghoshal and A. Strumia,
[arXiv:2306.17158 [astro-ph.CO]].
Deng:2023btv
H. Deng, B. Bécsy, X. Siemens, N. J. Cornish and D. R. Madison,
[arXiv:2306.17130 [gr-qc]].
Vagnozzi:2023lwo
S. Vagnozzi,
[arXiv:2306.16912 [astro-ph.CO]].
DiBari:2023upq
P. Di Bari and M. H. Rahat,
[arXiv:2307.03184 [hep-ph]].
Du:2023qvj
X. K. Du, M. X. Huang, F. Wang and Y. K. Zhang,
[arXiv:2307.02938 [hep-ph]].
Huang:2023chx
H. L. Huang, Y. Cai, J. Q. Jiang, J. Zhang and Y. S. Piao,
[arXiv:2306.17577 [gr-qc]].
Cai:2023dls
Y. F. Cai, X. C. He, X. Ma, S. F. Yan and G. W. Yuan,
[arXiv:2306.17822 [gr-qc]].
Inomata:2023zup
K. Inomata, K. Kohri and T. Terada,
[arXiv:2306.17834 [astro-ph.CO]].
Broadhurst:2023tus
T. Broadhurst, C. Chen, T. Liu and K. F. Zheng,
[arXiv:2306.17821 [astro-ph.HE]].
Gouttenoire:2023ftk
Y. Gouttenoire and E. Vitagliano,
[arXiv:2306.17841 [gr-qc]].
KoyamaTomimatsu
H. Koyama and A. Tomimatsu,
Phys. Rev. D 63, 064032 (2001)
doi:10.1103/PhysRevD.63.064032
[arXiv:gr-qc/0012022 [gr-qc]];
Phys. Rev. D 64, 044014 (2001)
doi:10.1103/PhysRevD.64.044014
[arXiv:gr-qc/0103086 [gr-qc]];
Phys. Rev. D 65, 084031 (2002)
doi:10.1103/PhysRevD.65.084031
[arXiv:gr-qc/0112075 [gr-qc]].
Moderski:2001tk
R. Moderski and M. Rogatko,
Phys. Rev. D 64, 044024 (2001)
doi:10.1103/PhysRevD.64.044024
[arXiv:gr-qc/0105056 [gr-qc]].
Konoplya:2006gq
R. A. Konoplya, A. Zhidenko and C. Molina,
Phys. Rev. D 75, 084004 (2007)
doi:10.1103/PhysRevD.75.084004
[arXiv:gr-qc/0602047 [gr-qc]].
Jing:2004zb
J. Jing,
Phys. Rev. D 72, 027501 (2005)
doi:10.1103/PhysRevD.72.027501
[arXiv:gr-qc/0408090 [gr-qc]].
Seahra:2004fg
S. S. Seahra, C. Clarkson and R. Maartens,
Phys. Rev. Lett. 94, 121302 (2005)
doi:10.1103/PhysRevLett.94.121302
[arXiv:gr-qc/0408032 [gr-qc]].
LIGOScientific:2016aoc
B. P. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. Lett. 116 (2016) no.6, 061102
doi:10.1103/PhysRevLett.116.061102
[arXiv:1602.03837 [gr-qc]].
Degollado:2014vsa
J. C. Degollado and C. A. R. Herdeiro,
Phys. Rev. D 90, no.6, 065019 (2014)
doi:10.1103/PhysRevD.90.065019
[arXiv:1408.2589 [gr-qc]].
Capozziello:2021bki
S. Capozziello and M. Capriolo,
Class. Quant. Grav. 38, no.17, 175008 (2021)
doi:10.1088/1361-6382/ac1720
[arXiv:2107.06972 [gr-qc]].
Piorkowska-Kurpas:2022xmb
A. Piórkowska-Kurpas,
Universe 8, no.2, 83 (2022)
doi:10.3390/universe8020083.
LIGOScientific:2016lio
B. P. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. Lett. 116, no.22, 221101 (2016)
[erratum: Phys. Rev. Lett. 121, no.12, 129902 (2018)]
doi:10.1103/PhysRevLett.116.221101
[arXiv:1602.03841 [gr-qc]].
LIGOScientific:2020tif
R. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. D 103, no.12, 122002 (2021)
doi:10.1103/PhysRevD.103.122002
[arXiv:2010.14529 [gr-qc]].
Gupta:2018pzp
S. Gupta and S. Desai,
Class. Quant. Grav. 36, no.10, 105001 (2019)
doi:10.1088/1361-6382/ab1599
[arXiv:1811.09378 [astro-ph.CO]].
deRham:2016nuf
C. de Rham, J. T. Deskins, A. J. Tolley and S. Y. Zhou,
Rev. Mod. Phys. 89, no.2, 025004 (2017)
doi:10.1103/RevModPhys.89.025004
[arXiv:1606.08462 [astro-ph.CO]].
|